Waiting for PostgreSQL 12 – Log all statements from a sample of transactions

On 3rd of April 2019, Alvaro Herrera committed patch:

Log all statements from a sample of transactions
This is useful to obtain a view of the different transaction types in an
application, regardless of the durations of the statements each runs.
Author: Adrien Nayrat

The commit message makes it pretty clear, so let's see this in action.

I'll need a sample table:

=$ CREATE TABLE test (
    id serial PRIMARY KEY,
    int_1 int4 NOT NULL DEFAULT 0,
    int_2 int4 NOT NULL DEFAULT 0
);

Now, let's make some test transactions.

Let's assume that I will want transactions that insert a row, and then update it.

Something like:

=$ INSERT INTO test (int_1, int_2) VALUES (random() * 100000, random() * 100000);
=$ UPDATE test SET int_1 = int_1 - 50, int_2 = int_2 - 100 WHERE id = currval('test_id_seq');

Nothing really fancy. All of these queries will be fast. Now, let's make sure we run quite a lot of them:

$ (
    echo "BEGIN;"
    echo "insert into test (int_1, int_2) values (random() * 100000, random() * 100000);"
    echo "update test set int_1 = int_1 - 50, int_2 = int_2 - 100 where id = currval('test_id_seq');"
    echo "COMMIT;"
) > single.sql
$ for i in {1..10000}; do cat single.sql ; done > 10k.sql
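As a quick sanity check (my addition, not in the original workflow), each transaction in single.sql is four lines (BEGIN, INSERT, UPDATE, COMMIT), so the repeated file should contain exactly 40,000 lines:

```shell
# single.sql holds one four-line transaction; repeat it 10000 times.
printf '%s\n' \
    "BEGIN;" \
    "insert into test (int_1, int_2) values (random() * 100000, random() * 100000);" \
    "update test set int_1 = int_1 - 50, int_2 = int_2 - 100 where id = currval('test_id_seq');" \
    "COMMIT;" > single.sql
for i in {1..10000}; do cat single.sql; done > 10k.sql
# 4 lines per transaction * 10000 transactions = 40000:
wc -l < 10k.sql
```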

Now, running this via:

$ psql -f 10k.sql -qAtX

generated a logfile of ~5.5 MB (with log_min_duration_statement = 0, so every statement was logged).
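For reference, that everything-logged baseline is the equivalent of:

```sql
-- Log every statement, regardless of its duration (baseline configuration):
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();
```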

So, let's now change the settings:

=$ ALTER SYSTEM SET log_min_duration_statement = 1000;
=$ ALTER SYSTEM SET log_transaction_sample_rate = 0.001;
=$ SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)
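It's worth double-checking that the reload actually picked the new values up, for example with:

```sql
-- Verify the reloaded settings in a fresh session:
SHOW log_min_duration_statement;   -- 1000 ms is displayed as "1s"
SHOW log_transaction_sample_rate;  -- expect: 0.001
```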

And now, re-running the 10k.sql file generated only ~6.5 kB of log, with 11 sampled transactions in it – pretty close to the expected 0.1% of 10,000 transactions (i.e. 10).
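Counting the sampled transactions is easy: since each transaction contains exactly one INSERT, grepping the log for it counts transactions directly. A sketch with a hypothetical two-line log excerpt (the real log path and log_line_prefix will differ per setup):

```shell
# Hypothetical log excerpt standing in for the real server log;
# point LOG at the actual logfile in practice.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
LOG:  duration: 0.042 ms  statement: insert into test (int_1, int_2) values (random() * 100000, random() * 100000);
LOG:  duration: 0.031 ms  statement: update test set int_1 = int_1 - 50, int_2 = int_2 - 100 where id = currval('test_id_seq');
EOF
# One INSERT per transaction, so this counts sampled transactions:
grep -c 'statement: insert into test' "$LOG"
```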

This is great. Thanks, Adrien and Alvaro.
