November 23rd, 2008 by depesz

On the 19th of November, Tom Lane committed a patch by Takahiro Itagaki which does:

Add auto-explain contrib module for automatic logging of the plans of
slow-running queries.

What exactly does it do?

Before I go into more detail, a side note: this is the first extension to PostgreSQL (that I've seen) that uses the custom_variable_classes GUC.

I mean, plperl uses it too, but besides that it was mainly used to abuse the GUC system to provide temporary storage between function calls :)
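To illustrate that temporary-storage trick: assuming custom_variable_classes = 'myapp' is set in postgresql.conf (the class name 'myapp' is just an example), any session can stash a value under that class and read it back later:

# set myapp.state = 'some value';
SET
# select current_setting('myapp.state');
 current_setting
-----------------
 some value
(1 row)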

NOW. There are 2 ways to use this module:

  1. LOAD 'auto_explain';
  2. shared_preload_libraries = 'auto_explain'

The first method can be used in any session (you have to be logged in as a superuser, though), and will turn on auto-explain functionality in this session only:

# LOAD 'auto_explain';
LOAD

The other method requires editing postgresql.conf and adding 'auto_explain' to the shared_preload_libraries GUC variable.

This method has the benefit that it works for all sessions.

So, let's load it. Also, please be careful not to mistakenly change 'local_preload_libraries' instead of 'shared_preload_libraries': when I did so, I was unable to start the PostgreSQL backend (i.e. connect to Pg).
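For reference, the relevant postgresql.conf fragment could look like this (a sketch; the comment on the second line reflects my understanding of why local_preload_libraries fails here, as that GUC only loads libraries from the $libdir/plugins subdirectory):

shared_preload_libraries = 'auto_explain'   # loaded into every backend at server start
# local_preload_libraries = 'auto_explain'  # wrong: only loads from $libdir/plugins

A server restart is needed for shared_preload_libraries to take effect.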

After loading – nothing really changes. You still can work the way you worked before, but now you can set additional variables:

# set explain.log_min_duration = 5;

This will automatically log 'explain' output for all statements that take more than 5 ms:

2008-11-23 14:45:14.711 CET depesz@depesz 28352 [local] LOG:  duration: 1003.424 ms  plan:
        Result  (cost=0.01..0.03 rows=1 width=0)
          InitPlan
            ->  Result  (cost=0.00..0.01 rows=1 width=0)
2008-11-23 14:45:14.711 CET depesz@depesz 28352 [local] STATEMENT:  select pg_sleep((select 1));
2008-11-23 14:45:14.711 CET depesz@depesz 28352 [local] LOG:  duration: 1005.477 ms  statement: select pg_sleep((select 1));

The last line was added by 'log_min_duration_statement'.

As you can see, it's pretty cool. Kind of. It will log the explain output, but if you set the threshold too low, your logs will grow faster than ever – after all, explain plans are usually quite large.

Additionally, you can turn on "explain analyze" logging. This is not really a good idea, as it adds a serious performance penalty even when few of your statements run longer than explain.log_min_duration.

The reason is pretty simple: PostgreSQL has to time all queries to be able to print the analyze output for those that should be logged. Just imagine the "fun" of timing hundreds of statements per second only to log 2 per hour.
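If you do want analyze output despite the cost, the setting follows the same pattern as the others (assuming the GUC is named explain.log_analyze, matching the class used above):

# set explain.log_analyze = true;
SET

With this on, the logged plans include actual times and row counts, but every statement in the session pays the timing overhead, whether it ends up being logged or not.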

Another option is to log "explain verbose" output (please consider reading more about it here).

The output is pretty much what you'd expect:

# set explain.log_verbose = 1;
SET
 
# select pg_sleep((select 1));
 pg_sleep
----------

(1 row)

Log:

2008-11-23 14:54:48.443 CET depesz@depesz 28812 [local] LOG:  duration: 1001.721 ms  plan:
        Result  (cost=0.01..0.03 rows=1 width=0)
          Output: pg_sleep(($0)::double precision)
          InitPlan
            ->  Result  (cost=0.00..0.01 rows=1 width=0)
                  Output: 1
2008-11-23 14:54:48.443 CET depesz@depesz 28812 [local] STATEMENT:  select pg_sleep((select 1));
2008-11-23 14:54:48.443 CET depesz@depesz 28812 [local] LOG:  duration: 1002.139 ms  statement: select pg_sleep((select 1));

All in all, it's a nice add-on, but you should be careful with it, so as not to fill your disk with logs.
