This is an interesting question, with at least a couple of options. So, let's dig in…
First things first, I need to have some tables to test it on:
$ CREATE TABLE this_is_normal (i int4);
$ CREATE TABLE this_is_special (i int4);
Let's put some data in there, so we have something to select:
$ INSERT INTO this_is_normal(i) VALUES (7);
$ INSERT INTO this_is_special(i) VALUES (13);
Quick check that the settings aren't causing any logging of normal queries:
$ SELECT name, setting FROM pg_settings WHERE name ~ 'log.*statement';
            name            │ setting
────────────────────────────┼─────────
 log_min_duration_statement │ -1
 log_min_error_statement    │ error
 log_statement              │ none
 log_statement_sample_rate  │ 1
 log_statement_stats        │ off
(5 rows)

$ SELECT 1/0;
ERROR:  division by zero

$ SELECT * FROM this_is_normal;
 i
───
 7
(1 row)

$ SELECT * FROM this_is_special;
 i
────
 13
(1 row)

$ SELECT 2/0;
ERROR:  division by zero
And in logs I see:
2022-07-03 18:38:48.593 CEST  depesz@depesz ERROR:  division by zero
2022-07-03 18:38:48.593 CEST  depesz@depesz STATEMENT:  SELECT 1/0;
2022-07-03 18:38:48.594 CEST  depesz@depesz ERROR:  division by zero
2022-07-03 18:38:48.594 CEST  depesz@depesz STATEMENT:  SELECT 2/0;
This shows that errors are logged, but queries that don't error out do not get logged.
OK. So, what are our options?
First, let's start with the best option: the PGAudit extension. I'm on Ubuntu, so installing it is simple:
$ sudo apt-get install postgresql-14-pgaudit
Then, I need to make sure that it will get loaded by PostgreSQL:
$ SHOW shared_preload_libraries;
 shared_preload_libraries
──────────────────────────

(1 row)

$ ALTER SYSTEM SET shared_preload_libraries = 'pgaudit';
ALTER SYSTEM
I checked shared_preload_libraries first, because if there was already anything in it (like pg_stat_statements), I should keep it and set the new value accordingly.
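For example, if pg_stat_statements had already been loaded, the new value would need to list both libraries, comma separated. A hypothetical variant of the command:

```sql
-- Hypothetical: shared_preload_libraries already contained pg_stat_statements,
-- so the new value keeps it and appends pgaudit:
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements, pgaudit';
```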
Now – config is set, but unfortunately I have to restart pg. Not a big deal, this is just a test instance:
$ sudo systemctl restart postgresql
Sweet, and now I can check to make sure that it is loaded:
$ psql -c 'show shared_preload_libraries'
 shared_preload_libraries
──────────────────────────
 pgaudit
(1 row)
Now, in the database where I want to use it, I have to actually configure it to log what I want. First, obviously, I have to create the extension:
$ CREATE EXTENSION pgaudit;
CREATE EXTENSION
Now, to enable logging of accesses to a specific table, we have to tell pgaudit which role is the auditor, make that role, and grant it privileges for the types of queries we want to log:
$ SET pgaudit.role = 'auditor';
SET

$ CREATE ROLE auditor;
CREATE ROLE

$ GRANT SELECT, INSERT, UPDATE, DELETE ON this_is_special TO auditor;
GRANT
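One thing to keep in mind: plain SET changes pgaudit.role only for the current session. To make it apply to all future connections, one option (a sketch, not the only way) is ALTER SYSTEM plus a config reload:

```sql
-- Persist the auditor role setting for all sessions; pgaudit.role can be
-- changed without a restart, so reloading the config is enough:
ALTER SYSTEM SET pgaudit.role = 'auditor';
SELECT pg_reload_conf();
```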
And now, we can re-test the logging. Just like previously, I will issue two errors, with two selects in between, to make sure we see everything that happened:
$ SELECT LENGTH('starting') / 0;
$ SELECT * FROM this_is_normal;
$ SELECT * FROM this_is_special;
$ SELECT LENGTH('finishing') / 0;
Log now contains:
2022-07-04 13:28:03.874 CEST  depesz@depesz ERROR:  division by zero
2022-07-04 13:28:03.874 CEST  depesz@depesz STATEMENT:  SELECT LENGTH('starting') / 0;
2022-07-04 13:28:20.274 CEST  depesz@depesz LOG:  AUDIT: OBJECT,3,1,READ,SELECT,TABLE,public.this_is_special,SELECT * FROM this_is_special;,<not logged>
2022-07-04 13:28:25.207 CEST  depesz@depesz ERROR:  division by zero
2022-07-04 13:28:25.207 CEST  depesz@depesz STATEMENT:  SELECT LENGTH('finishing') / 0;
Sweet. The audit line is a CSV value with the following fields:
- AUDIT_TYPE : Either SESSION or OBJECT, depending on which type of auditing caused this line to be logged. In our case it will always be OBJECT, because we're auditing a specific object (a table).
- STATEMENT_ID : Basically, which query this is within the session (connection to the db).
- SUBSTATEMENT_ID : This increases when there are substatements – for example, when you call a function which, in its body, runs queries.
- CLASS : What type of access it was. One of READ, WRITE, FUNCTION, ROLE, DDL, MISC, MISC_SET. You can use these names to enable logging of all queries of a specific class, regardless of the object they touch, if you want.
- COMMAND : What command caused it – just a keyword: SELECT, INSERT, …
- OBJECT_TYPE : What type of object was touched by the query – in our case it will be a table, but we could log accesses to indexes, views, and so on.
- OBJECT_NAME : Full name (with schema) of the object that was touched by the query.
- STATEMENT : The query itself.
- PARAMETER : If you're using server-side prepared statements and you enable pgaudit.log_parameter, your parameters (values for $1, $2, …) will be logged there.
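As a side note on the CLASS field: the same class names can be used with session-mode auditing, which logs whole classes of statements regardless of which objects they touch. A minimal sketch, per the PGAudit docs:

```sql
-- Session-mode auditing: log every READ and WRITE statement in this
-- session, independent of any auditor role grants:
SET pgaudit.log = 'read, write';
```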
Interestingly, queries get logged if they touch the table, even if they don't return any data. For example:
$ SELECT i FROM this_is_special WHERE i = 1;
 i
───
(0 rows)
despite not returning anything, still produces this in the logs:
2022-07-04 13:39:58.582 CEST  depesz@depesz LOG:  AUDIT: OBJECT,8,1,READ,SELECT,TABLE,public.this_is_special,SELECT i FROM this_is_special WHERE i = 1;,<not logged>
Of course PGAudit has many more features. If you're interested – check the docs.
That solves the problem in an elegant and efficient way. But what if you can't use PGAudit?
We can't use rules, but in a pinch we could abuse views…
$ CREATE FUNCTION logit() RETURNS bool AS $$
BEGIN
    RAISE LOG 'LOGGING ACCESS';
    RETURN true;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

$ ALTER TABLE this_is_special RENAME TO this_is_special_real;

$ CREATE VIEW this_is_special AS
    SELECT * FROM this_is_special_real WHERE logit();
With this in place, every query against this_is_special will get data from the view, which in turn will get data from the real table, but will also add a condition that calls a function whose sole purpose is to log the access.
This is how it looks:
$ SELECT * FROM this_is_special;
  i
─────
  13
 123
(2 rows)
2022-07-04 14:34:10.697 CEST  depesz@depesz LOG:  LOGGING ACCESS
2022-07-04 14:34:10.697 CEST  depesz@depesz CONTEXT:  PL/pgSQL function logit() line 1 at RAISE
2022-07-04 14:34:10.697 CEST  depesz@depesz STATEMENT:  SELECT * FROM this_is_special;
Since the view is simple, normal write operations work on it transparently:
$ INSERT INTO this_is_special (i) VALUES (256);
$ UPDATE this_is_special SET i = 100 WHERE i = 123;
$ DELETE FROM this_is_special WHERE i = 13;
$ SELECT * FROM this_is_special t2;
  i
─────
 256
 100
(2 rows)
Please note that PGAudit is a much better solution, but this will do, sometimes, for some cases.
Hope it helps 🙂