Waiting for 8.5 – VACUUM FULL change

Some time ago Josh Berkus wrote about possible changes in VACUUM FULL.

Now these changes have come to life. By "now" I mean the 6th of January, when Takahiro Itagaki committed his patch:

Log Message:
Support rewritten-based full vacuum as VACUUM FULL. Traditional
VACUUM FULL was renamed to VACUUM FULL INPLACE. Also added a new
option -i, --inplace for vacuumdb to perform FULL INPLACE vacuuming.
Since the new VACUUM FULL uses CLUSTER infrastructure, we cannot
use it for system tables. VACUUM FULL for system tables always
fall back into VACUUM FULL INPLACE silently.
Itagaki Takahiro, reviewed by Jeff Davis and Simon Riggs.

To make it short:

VACUUM FULL will start to behave like CLUSTER (well, without the actual reordering of rows, but with a full rewrite). This will make it faster and better.

The only problem with the new approach is that it requires more disk space – which is (I guess) the only reason why the old VACUUM FULL code is still available, only now it's called "VACUUM FULL INPLACE" (also available as vacuumdb --inplace).

So, let's see the difference.

First, let's create a test table:

CREATE TABLE test (
    id serial PRIMARY KEY,
    whatever INT4,
    anything TEXT
);

and put some data in it:

INSERT INTO test (whatever, anything)
    SELECT
        random() * 100000,
        repeat(
            'some TEXT' || ( random() * 100 ),
            CAST( 100 + random() * 900 AS INT4 )
        )
    FROM generate_series(1, 5000000);

This inserts 5 million rows, each with a simple integer and a text column that contains some random text repeated 100-1000 times.

Here are basic stats of generated table (remember, it's generated randomly, so the numbers will vary from run to run):

  • table size (pg_relation_size): 1127 MB
  • full table size (pg_total_relation_size): 1234 MB
  • size of index on id column: 107 MB
  • number of rows: 5000000
  • length of the anything column:
    • minimum: 2121
    • average: 13693
    • maximum: 28884
    • sum: 68468270282

( side note: PostgreSQL compression for TOASTed tuples works really nicely in this case )
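For the record, numbers like the ones above can be obtained with queries along these lines (a sketch – table and column names as in the example, and test_pkey is the name PostgreSQL gives to the primary-key index by default):

```sql
-- Table size, total size (with TOAST and indexes), and index size:
SELECT pg_size_pretty( pg_relation_size( 'test' ) ) AS table_size,
       pg_size_pretty( pg_total_relation_size( 'test' ) ) AS total_size,
       pg_size_pretty( pg_relation_size( 'test_pkey' ) ) AS index_size;

-- Row count and length stats of the anything column:
SELECT count(*),
       min( length( anything ) ),
       avg( length( anything ) ),
       max( length( anything ) ),
       sum( length( anything ) )
FROM test;
```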

Now, having this CREATE TABLE and INSERT, I added the following DELETE:

DELETE FROM test WHERE 0 < ( id % 10 );

This effectively removed 90% of the rows (4,500,000 rows deleted, 500,000 left in).

Afterwards, I ran the old VACUUM with VACUUM ( FULL INPLACE ) test.

Then I repeated the procedure (create, insert, delete), but with the new vacuum: VACUUM ( FULL ) test.
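In psql, each run boiled down to something like this (a sketch – \timing is where the timing numbers below come from):

```sql
-- psql session sketch; each VACUUM is run on a freshly recreated table:
\timing on
VACUUM ( FULL INPLACE ) test;  -- old, in-place code path
-- drop, recreate and refill the table, then:
VACUUM ( FULL ) test;          -- new, rewrite-based code path
```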


After the old-fashioned VACUUM ( FULL INPLACE ), the sizes looked like this:

  • table size (pg_relation_size): 106 MB
  • full table size (pg_total_relation_size): 214 MB
  • size of index on id column: 107 MB

On the other hand, when the table was cleaned with the new VACUUM ( FULL ), I got:

  • table size (pg_relation_size): 109 MB
  • full table size (pg_total_relation_size): 120 MB
  • size of index on id column: 11 MB

I repeated the test 5 times, so I'm pretty confident in the results.

Interesting fact – the old VACUUM FULL is actually a bit better when it comes to freeing disk space?! (that was a shock, at least for me), but it does nothing about the index – which is why after VACUUM FULL you should immediately run REINDEX.
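So, if you stick with the in-place variant, the index bloat has to be handled separately, for example like this (test_pkey being the default primary-key index name for our test table):

```sql
-- Rebuild every index on the table after VACUUM ( FULL INPLACE ):
REINDEX TABLE test;

-- or rebuild just a single index:
REINDEX INDEX test_pkey;
```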

And now timings. Raw data look like this:

Old vacuum times:

  • 48011.511 ms
  • 45663.906 ms
  • 46099.042 ms
  • 46566.423 ms
  • 46095.106 ms

New code:

  • 14042.748 ms
  • 14546.671 ms
  • 14645.091 ms
  • 14757.264 ms
  • 14331.536 ms

On average old vacuum took 46.5s, and new code – 14.5s. Pretty good I think.

Of course the new code still requires an exclusive lock on the table, but at the very least the lock will be held about three times shorter than before. Great stuff.

13 thoughts on “Waiting for 8.5 – VACUUM FULL change”

  1. You listed both reasons the old VACUUM FULL behavior is still around: it can work better in cases where you’re low on disk space because CLUSTER makes a second copy of the table, and it will operate on system tables that the CLUSTER approach can’t touch. Either reason alone might be enough to keep the old implementation around, with two of them it’s particularly hard to get rid of it just yet.

  2. @Greg: thanks for the info about the system tables – I forgot about that. Side note – any idea why the old VACUUM FULL reclaims more disk space?

  3. Theory: It’s not that much bigger. Could it be that the “vacuum full inplace” frees up all excess allocation, while the new “vacuum full” re-writing code ends up allocating a little bit extra at the end (since it’s growing the table as you write)? (i.e. the new one allocates data a chunk at a time, and it just so happens that the last chunk was not completely filled)

  4. I think the size difference is because when you CLUSTER, it makes the new copy have the FILLFACTOR amount of free space again (10% for btrees), which isn’t happening when you VACUUM FULL – you’re getting really packed pages in a lot of cases instead.

  5. A quick peek at vacuum.c suggests that the size difference is because inplace vacuum does a better job at packing the tuples. Cluster just takes a new page when a tuple doesn’t fit in the existing one, inplace tries to find a page where the existing tuple fits. The numbers also suggest this – the table when created has ~34.4 tuples per page. Assuming that vacuum manages to squeeze on average almost one more tuple into each page this would result in about 3% size decrease which is in line with your results.

  6. Could you please tell me, do both of these operations block the table for READ and WRITE?

  7. Thank you for your answer, and one more question if possible.

    Could you say, from your practice: is it possible not to use manual VACUUM FULL at all and not lose performance on heavy-write databases – if you just use autovacuum? E.g. I have a big database which I can’t vacuum, as I don’t have enough free space or a free time window to lock the table. In MySQL it’s a problem too – I have to recreate the InnoDB table, which blocks all connections as well. So is the situation the same here? If I want to defragment a table and get better performance, do I have to use VACUUM FULL? If I don’t use it, will I get worse performance over time, even with autovacuum enabled?

  8. @Suor:
    you can work around it by adding new index concurrently, dropping old index, and renaming new one.
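    For a plain (non-constraint) index, that workaround could look like this – the index names here are made up for the example:

    ```sql
    -- Build a fresh, non-bloated copy without blocking writes:
    CREATE INDEX CONCURRENTLY test_whatever_new ON test (whatever);
    -- Swap it in place of the bloated one:
    DROP INDEX test_whatever_idx;
    ALTER INDEX test_whatever_new RENAME TO test_whatever_idx;
    ```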

Comments are closed.