Waiting for 9.2 – temporary file stats per database

On 26th of January, Magnus Hagander committed patch:

Add counters for number and size of temporary files used
for spill-to-disk queries for each database to the
pg_stat_database view.
 
Tomas Vondra, review by Magnus Hagander

As you perhaps know, certain operations in PostgreSQL can create temporary files. I am not talking about temporary tables: these files are created when an operation done by a PostgreSQL backend needs more memory than it is allowed to use, due to the work_mem limit.
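If you want to play with this yourself, both knobs involved are ordinary settings that can be changed per session. A minimal sketch (log_temp_files = 0 means: log every temporary file, regardless of its size):

$ SET work_mem = '1MB';          -- spill to disk above 1MB per sort/hash operation
$ SET log_temp_files = 0;        -- log every temporary file that gets created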

So, let's see how these counters work.

First, this is what pg_stat_database looks like now:

$ SELECT * FROM pg_stat_database;
 datid │  datname  │ numbackends │ xact_commit │ xact_rollback │ blks_read │ blks_hit │ tup_returned │ tup_fetched │ tup_inserted │ tup_updated │ tup_deleted │ conflicts │ temp_files │ temp_bytes │ deadlocks │          stats_reset
───────┼───────────┼─────────────┼─────────────┼───────────────┼───────────┼──────────┼──────────────┼─────────────┼──────────────┼─────────────┼─────────────┼───────────┼────────────┼────────────┼───────────┼───────────────────────────────
 12039 │ template0 │ …
 16387 │ depesz    │ …
 16388 │ jab       │ …
 16389 │ pgdba     │ …
 12047 │ postgres  │ …
     1 │ template1 │ …
(6 ROWS)

The row is quite wide, so let's get just one row of data, in extended format:

$ SELECT * FROM pg_stat_database WHERE datname = 'depesz';
─[ RECORD 1 ]─┬──────────────────────────────
datid         │ 16387
datname       │ depesz
numbackends   │ 1
xact_commit   │ 3
xact_rollback │ 0
blks_read     │ 0
blks_hit      │ 12
tup_returned  │ 4
tup_fetched   │ 4
tup_inserted  │ 0
tup_updated   │ 0
tup_deleted   │ 0
conflicts     │ 0
temp_files    │ 0
temp_bytes    │ 0
deadlocks     │ 0
stats_reset   │ 2012-02-07 14:20:14.514258+01

My PostgreSQL is configured with work_mem of 1MB, so if I ask it to do something that fits in this amount of RAM, it will not use temp files:

$ EXPLAIN analyze SELECT COUNT(*) FROM (SELECT random() AS i FROM generate_series(1,1000) ORDER BY i) AS x;
                                                           QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 Aggregate  (cost=77.33..77.34 ROWS=1 width=0) (actual TIME=1.308..1.308 ROWS=1 loops=1)
   ->  Sort  (cost=62.33..64.83 ROWS=1000 width=0) (actual TIME=1.099..1.244 ROWS=1000 loops=1)
         Sort KEY: (random())
         Sort Method: quicksort  Memory: 71kB
         ->  FUNCTION Scan ON generate_series  (cost=0.00..12.50 ROWS=1000 width=0) (actual TIME=0.184..0.380 ROWS=1000 loops=1)
 Total runtime: 1.449 ms
(6 ROWS)

As you can see, the sort used only RAM, so the temp_* counters are still at zero:

$ SELECT temp_files, temp_bytes FROM pg_stat_database WHERE datname = 'depesz';
 temp_files │ temp_bytes
────────────┼────────────
          0 │          0
(1 ROW)

But when I do something more demanding:

$ EXPLAIN analyze SELECT COUNT(*) FROM (SELECT random() AS i FROM generate_series(1,20000) ORDER BY i) AS x;
                                                            QUERY PLAN
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 Aggregate  (cost=77.33..77.34 ROWS=1 width=0) (actual TIME=31.668..31.669 ROWS=1 loops=1)
   ->  Sort  (cost=62.33..64.83 ROWS=1000 width=0) (actual TIME=27.529..30.368 ROWS=20000 loops=1)
         Sort KEY: (random())
         Sort Method: external MERGE  Disk: 352kB
         ->  FUNCTION Scan ON generate_series  (cost=0.00..12.50 ROWS=1000 width=0) (actual TIME=4.822..10.117 ROWS=20000 loops=1)
 Total runtime: 32.248 ms
(6 ROWS)
$ SELECT temp_files, temp_bytes FROM pg_stat_database WHERE datname = 'depesz';
 temp_files │ temp_bytes
────────────┼────────────
          2 │     640448
(1 ROW)

It is interesting to see that temp_bytes is ~2x what EXPLAIN ANALYZE showed. I checked the Pg logs, and saw:

LOG:  TEMPORARY file: path "base/pgsql_tmp/pgsql_tmp22791.1", SIZE 360448
STATEMENT:  EXPLAIN analyze SELECT COUNT(*) FROM (SELECT random() AS i FROM generate_series(1,20000) ORDER BY i) AS x;
LOG:  TEMPORARY file: path "base/pgsql_tmp/pgsql_tmp22791.0", SIZE 280000
STATEMENT:  EXPLAIN analyze SELECT COUNT(*) FROM (SELECT random() AS i FROM generate_series(1,20000) ORDER BY i) AS x;
LOG:  duration: 32.506 ms  statement: EXPLAIN analyze SELECT COUNT(*) FROM (SELECT random() AS i FROM generate_series(1,20000) ORDER BY i) AS x;

So the amount that the temp_* columns showed is correct, and EXPLAIN ANALYZE showed just part of it. My assumption is that PostgreSQL first used a 352kB temp file, then rewrote it into a 280kB file, removing the 352kB file in the process; so EXPLAIN shows peak usage, not total usage.
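And indeed, the sizes of the two files from the log add up to exactly what temp_bytes reported:

$ SELECT 360448 + 280000 AS bytes_from_log;
 bytes_from_log
────────────────
         640448
(1 ROW)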

If I run the EXPLAIN again, and check pg_stat_database one more time, I can see:

$ SELECT temp_files, temp_bytes FROM pg_stat_database WHERE datname = 'depesz';
 temp_files │ temp_bytes
────────────┼────────────
          4 │    1280896
(1 ROW)

This clearly shows that the values in these columns are always incrementing: a running count of created temp files, and the total size of all created temp files.
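If you need to start counting from scratch, for example before a benchmark run, the counters for the current database can be zeroed with pg_stat_reset(), which also updates the stats_reset column:

$ SELECT pg_stat_reset();   -- resets all pg_stat_database counters for the current database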

So it is not a tool to see “what is the current state”; it's more along the lines of: feed the data to some graphing tool, so you can see trends.
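For example, a monitoring job could periodically sample the counters together with a timestamp (just a sketch), and the graphing tool can then compute deltas between consecutive samples:

$ SELECT now() AS sample_time, datname, temp_files, temp_bytes
    FROM pg_stat_database
   WHERE datname = current_database();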

This information was previously available in the logs, but now we have easier and faster access to it, which is always a good thing.

One thought on “Waiting for 9.2 – temporary file stats per database”

  1. I guess this is good… kind of feels like they decided to make it easier without making it better. I could see all kinds of poor assumptions being made based on the information given inside the database; I think it would have been better to add database identifying information into the logs.
