Autovacuum was added a LONG time ago (in 7.4, as the pg_autovacuum contrib module). Since then there have been many changes related to it.
These days, hopefully, we no longer see someone saying that they have to disable autovacuum due to performance issues.
But I still see people saying that they have to run daily/weekly VACUUM because "autovacuum is not enough". Is it really?
I have already written about how autovacuum knows what to vacuum or analyze. But – can we be sure that it does everything that is necessary?
Perhaps there is just too much work? How can we know?
The first thing we can do is actually very, very simple. Run this query on your system:
$ SELECT COUNT(*) FROM pg_stat_activity WHERE backend_type = 'autovacuum worker';
 count
───────
     5
(1 row)
This will tell you how many autovacuum workers are currently doing something. You can check what they are doing by issuing:
$ SELECT query FROM pg_stat_activity WHERE backend_type = 'autovacuum worker';
                    query
─────────────────────────────────────────────
 autovacuum: VACUUM public.some_table
 autovacuum: VACUUM public.other_table
 autovacuum: VACUUM something.xxx
 autovacuum: VACUUM ANALYZE public.mega_jobs
 autovacuum: VACUUM public.magical_table
(5 rows)
Now, we need to know what autovacuum_max_workers is set to:
$ SHOW autovacuum_max_workers;
 autovacuum_max_workers
────────────────────────
 5
(1 row)
Now, with cron, shell/psql loop in screen/tmux, or with some other statistics-gathering tool start saving this count every so often.
You can, for example, connect to the DB with psql -AtX, and in it run:

select now(), count(*) from pg_stat_activity where backend_type = 'autovacuum worker' \watch 300

Every 5 minutes you will get a new line, looking like:

2024-06-13 15:48:22.954548+02|5

Which tells you that at 3:48pm that day, there were 5 workers.
With some data (1-2 days worth of stats should be enough) you can then check if you are running at "max workers" all the time, usually, sometimes, or never.
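Once you have the samples, summarizing them is trivial. A minimal sketch in Python, assuming the "timestamp|count" line format that psql -AtX produces (the sample data below is made up for illustration):

```python
# Summarize saved autovacuum-worker samples: what fraction of the time
# were we running at max workers? Line format assumed: "timestamp|count"
# as produced by psql -AtX; adjust max_workers to your setting.
def summarize(lines, max_workers):
    """Return (total samples, samples at max workers, fraction at max)."""
    counts = []
    for line in lines:
        line = line.strip()
        if line:
            counts.append(int(line.rsplit("|", 1)[1]))
    total = len(counts)
    at_max = sum(1 for c in counts if c >= max_workers)
    return total, at_max, at_max / total

# Made-up sample data, as the \watch loop would have written it:
samples = [
    "2024-06-13 15:43:00+02|3",
    "2024-06-13 15:48:00+02|5",
    "2024-06-13 15:53:00+02|5",
]
total, at_max, frac = summarize(samples, max_workers=5)
print(f"{at_max}/{total} samples ({frac:.0%}) at max workers")
```

In practice you would feed it the lines of the file your \watch loop wrote, instead of the inline list.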
If you run at your autovacuum_max_workers usually or most of the time, then clearly autovacuum can't keep up with the work (or is just barely able to keep up).
Clearly one of the solutions is to increase autovacuum_max_workers. But what other options are there?
Let's see what options we have available:
$ SELECT name, setting, unit FROM pg_settings WHERE name ~ 'autovacuum|vacuum.*cost' ORDER BY 1;
                name                 │  setting  │ unit
─────────────────────────────────────┼───────────┼──────
 autovacuum                          │ on        │ [null]
 autovacuum_analyze_scale_factor     │ 0.1       │ [null]
 autovacuum_analyze_threshold        │ 50        │ [null]
 autovacuum_freeze_max_age           │ 200000000 │ [null]
 autovacuum_max_workers              │ 5         │ [null]
 autovacuum_multixact_freeze_max_age │ 400000000 │ [null]
 autovacuum_naptime                  │ 60        │ s
 autovacuum_vacuum_cost_delay        │ 10        │ ms
 autovacuum_vacuum_cost_limit        │ -1        │ [null]
 autovacuum_vacuum_scale_factor      │ 0.2       │ [null]
 autovacuum_vacuum_threshold         │ 50        │ [null]
 autovacuum_work_mem                 │ -1        │ kB
 log_autovacuum_min_duration         │ 1000      │ ms
 vacuum_cost_delay                   │ 0         │ ms
 vacuum_cost_limit                   │ 200       │ [null]
 vacuum_cost_page_dirty              │ 20        │ [null]
 vacuum_cost_page_hit                │ 1         │ [null]
 vacuum_cost_page_miss               │ 10        │ [null]
(18 rows)
Some of these settings are important, some not so much.
The important things to look at are the parameters with "cost" in their names. Of the seven that have it, we can ignore vacuum_cost_page_dirty, vacuum_cost_page_hit, and vacuum_cost_page_miss, as these don't matter much for tuning (I'll explain why in a moment).
This leaves us with four interesting parameters:

- vacuum_cost_delay
- vacuum_cost_limit
- autovacuum_vacuum_cost_delay
- autovacuum_vacuum_cost_limit
The ones with names starting with vacuum relate to VACUUM run manually, the ones starting with autovacuum are for autovacuum. But we need to look at both of them, because if an autovacuum_*cost* parameter is set to -1, it inherits the value of its vacuum_cost_* counterpart.
In our case we have:
 autovacuum_vacuum_cost_delay        │ 10        │ ms
 autovacuum_vacuum_cost_limit        │ -1        │ [null]
 vacuum_cost_delay                   │ 0         │ ms
 vacuum_cost_limit                   │ 200       │ [null]
Which effectively means that when autovacuum runs, it runs with a delay of 10ms, and a limit of 200 (inherited from vacuum_cost_limit). Hope it's clear. So far.
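The inheritance rule is easy to get wrong at a glance, so here is a tiny illustration of it (just the -1 logic described above, not PostgreSQL code):

```python
# An autovacuum_* cost setting of -1 means "use the vacuum_* value".
def effective(autovacuum_value, vacuum_value):
    return vacuum_value if autovacuum_value == -1 else autovacuum_value

# Values from the output above:
print(effective(10, 0))    # autovacuum_vacuum_cost_delay stays 10
print(effective(-1, 200))  # autovacuum_vacuum_cost_limit inherits 200
```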
What is this limit?
When autovacuum works, it works on pages. Each page is 8192 bytes (8kB), unless for some reason you have compiled your Pg with a nonstandard page size. You can always verify it with the show block_size query.
Whenever it deals with a page that was in shared_buffers, it adds, to a cost counter, the value of vacuum_cost_page_hit. Which is 1. If the page is not in shared_buffers, it adds vacuum_cost_page_miss (10) to the cost counter. And whenever it modifies a page, making it dirty (containing unsaved changes), it adds vacuum_cost_page_dirty (20) to the counter.
And, whenever the counter reaches cost_limit (200 in my case), it pauses processing for cost_delay milliseconds, and the counter starts over.
So, assuming there were no writes, and all of pages are in shared_buffers, with the settings above, every 200 pages (200 * 8kB = 1.6MB) autovacuum will pause processing for 10ms.
On the other hand, if autovacuum has to modify (dirty) every page it touches, the pause will happen after only 10 pages (limit (200) / vacuum_cost_page_dirty (20) = 10). That means that the 10ms pause will happen after 80kB worth of vacuuming.
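The two paragraphs above boil down to simple arithmetic. A toy model of the accounting (not PostgreSQL's actual code, just the numbers from the description above):

```python
# Toy model of vacuum cost accounting: each page visited adds a cost,
# and once the accumulated cost reaches cost_limit, vacuum sleeps for
# cost_delay milliseconds. Here we just compute how much data fits
# under one cost_limit budget for each page kind.
PAGE_COST = {"hit": 1, "miss": 10, "dirty": 20}
BLOCK_SIZE = 8192  # bytes; verify on your system with SHOW block_size

def pages_between_sleeps(cost_limit, page_kind):
    """How many pages of the given kind fit under one cost_limit budget."""
    return cost_limit // PAGE_COST[page_kind]

# With cost_limit = 200, as in the settings above:
print(pages_between_sleeps(200, "hit") * BLOCK_SIZE)    # 1638400 bytes = 1.6 MB
print(pages_between_sleeps(200, "dirty") * BLOCK_SIZE)  # 81920 bytes = 80 kB
```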
This means that I can make autovacuum faster (more aggressive) in two ways:
- decrease autovacuum_vacuum_cost_delay to make the sleeps shorter
- increase autovacuum_vacuum_cost_limit to make the sleeps happen less frequently
Analogously, if you'd like to make autovacuum slower (to incur lower IO cost), you can:
- increase autovacuum_vacuum_cost_delay to make the sleeps longer
- decrease autovacuum_vacuum_cost_limit to make the sleeps happen more frequently
On most systems these days, with SSDs everywhere, I tend to use:
- autovacuum_max_workers = 10
- autovacuum_vacuum_cost_delay = 2
- autovacuum_vacuum_cost_limit = 500
This gives me a 2ms pause every 500 units of cost, which is 25 to 500 pages, depending on their state (hit/miss/dirty). Which means that the sleep happens every 200kB..4MB worth of data "handled".
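These settings also imply an upper bound on how fast autovacuum can possibly go: ignoring the time the work itself takes, it handles at most (data between sleeps) / cost_delay. Back-of-the-envelope arithmetic for the values above (a sketch, not PostgreSQL code):

```python
# Rough throughput ceiling implied by cost-based vacuum delay:
# (pages per cost_limit budget) * page size / cost_delay.
BLOCK_SIZE = 8192
PAGE_COST = {"hit": 1, "dirty": 20}

def max_mb_per_second(cost_limit, cost_delay_ms, page_kind):
    pages = cost_limit // PAGE_COST[page_kind]
    return pages * BLOCK_SIZE / (1024 * 1024) / (cost_delay_ms / 1000.0)

# With cost_delay = 2 and cost_limit = 500, as above:
print(max_mb_per_second(500, 2, "dirty"))  # ~98 MB/s if every page gets dirtied
print(max_mb_per_second(500, 2, "hit"))    # ~1953 MB/s if every page is a clean hit
```

The real rate will be lower, since the pages also take time to process, but it shows how sharply the delay/limit pair caps autovacuum's speed.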
The takeaway from here is:
- make sure you don't run autovac at max workers all the time
- if you do, either give it more workers, or make it more aggressive
- if you think that autovacuum is causing performance problems – make it slower, less aggressive
The great thing is that you can change autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit without restarting, and you can even do it per table, using:
ALTER TABLE a SET (
    autovacuum_vacuum_cost_delay = 1,
    autovacuum_vacuum_cost_limit = 1000
);
ALTER TABLE b SET (
    autovacuum_vacuum_cost_delay = 50,
    autovacuum_vacuum_cost_limit = 100
);
And you can, of course, change them globally too:
$ ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 5;
ALTER SYSTEM
$ SELECT pg_reload_conf();
 pg_reload_conf
────────────────
 t
(1 row)
Which is great if you want, for example, to make autovacuum less aggressive during the day, but faster during the night.
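A crontab sketch along these lines could flip the delay twice a day. The times, values, and connection details here are illustrative assumptions, not a recommendation:

```cron
# Slow autovacuum down at 8am, speed it up at 8pm (illustrative values;
# psql will need appropriate connection settings in your environment).
0 8  * * * psql -X -c "ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 10" -c "SELECT pg_reload_conf()"
0 20 * * * psql -X -c "ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 2"  -c "SELECT pg_reload_conf()"
```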
Hope it will help you 🙂
One thought on “Is my autovacuum configured properly?”
I applied the configs written here in our (almost) huge database, where there are at least 50 million rows in the tables. I was sure it wouldn't be a problem, since not all of our tables were bloated. There is no problem for now. Autovacuum processes are finishing faster than before.