February 3rd, 2012 by depesz

On 25th of January, Simon Riggs committed patch:

Allow pg_basebackup from standby node with safety checking.
Base backup follows recommended procedure, plus goes to great
lengths to ensure that partial page writes are avoided.
 
Jun Ishizuka and Fujii Masao, with minor modifications

In PostgreSQL 9.1 we got pg_basebackup – a tool that makes a backup of a Pg instance over a regular PG connection. It is a very smart tool, and definitely a big benefit for Pg.

But – it was limited to making backups off the master only.

In 9.2 we already have cascading replication, so it was expected that we'd also get the ability to make backups off a slave.

Well, to be honest – such an option (making backups off a slave) has existed for quite some time now, thanks to OmniPITR; but that is an external project, not developed by the PostgreSQL team, and it was never distributed with Pg itself.

So, now we have the ability to make backups off a slave using only tools that ship with PostgreSQL itself.

So, let's see how it works.

I have a PostgreSQL 9.2 instance on my machine, with the following settings:

$ select name, setting from pg_settings where name ~ '^(port|data_directory|archive|max_wal|wal_keep|checkp)';
             name             │     setting
──────────────────────────────┼──────────────────
 archive_command              │ /bin/true
 archive_mode                 │ on
 archive_timeout              │ 0
 checkpoint_completion_target │ 0.5
 checkpoint_segments          │ 3
 checkpoint_timeout           │ 300
 checkpoint_warning           │ 30
 data_directory               │ /home/pgdba/data
 max_wal_senders              │ 3
 port                         │ 5920
 wal_keep_segments            │ 10
(11 rows)

Pretty simple config. I also created a special user:

$ create user replication with replication superuser;
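For the backup and streaming connections to actually work, pg_hba.conf also needs a replication entry for this user. A minimal sketch – the trust method here is only reasonable for local testing like mine; use a stronger auth method in production:

# pg_hba.conf: allow streaming-replication connections from localhost
# TYPE  DATABASE     USER         ADDRESS        METHOD
host    replication  replication  127.0.0.1/32   trust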

Then I took a backup off the master:

=$ time pg_basebackup -D /home/pgdba/slave/ -F p -x stream -c fast -P -v -h 127.0.0.1 -p 5920 -U replication
xlog start point: 2/85000020
pg_basebackup: starting background WAL receiver
338953/338953 kB (100%), 1/1 tablespace
xlog end point: 2/850000E0
pg_basebackup: waiting for background process to finish streaming...
pg_basebackup: base backup completed
 
real    0m6.310s
user    0m0.108s
sys     0m0.512s

It was fast, as the datadir is only ~ 400MB.

Now, I can fix the config of the slave to contain:

  • port = 5921
  • hot_standby = on

and of course recovery.conf in /home/pgdba/slave, which contains:

restore_command = '/bin/false'
standby_mode = 'on'
primary_conninfo = 'port=5920 user=replication host=127.0.0.1'
trigger_file = '/tmp/trigger.file'

Afterwards, I started Pg in /home/pgdba/slave and it happily worked.
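To confirm that the slave really is a standby (and thus a valid source for a standby backup), a quick check – assuming psql connects to the slave's port, 5921:

-- on the slave (port 5921); returns 't' while the server is in recovery
select pg_is_in_recovery();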

Now, I can:

=$ time pg_basebackup -D /home/pgdba/slave2/ -F p -x stream -c fast -P -v -h 127.0.0.1 -p 5921 -U replication
xlog start point: 2/AC4E2600
pg_basebackup: starting background WAL receiver
692447/692447 kB (100%), 1/1 tablespace
xlog end point: 2/AC4E2600
pg_basebackup: waiting for background process to finish streaming...
pg_basebackup: base backup completed
 
real    3m56.237s
user    0m0.224s
sys     0m0.936s

(the time is long because this is just a test database with no traffic, so I had to make some inserts for pg_basebackup to finish)

A simple port change later, I had a working Pg instance.
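To verify that the new instance is actually streaming, one can look at the pg_stat_replication view on its upstream server – whichever one its primary_conninfo points at. A sketch, with column names as they exist in 9.2:

-- run on the upstream server: each row is a standby streaming from it
select client_addr, state, sent_location, replay_location
  from pg_stat_replication;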

This is great. There are some limitations, but these shouldn't be a big problem.

It's a superb addition to the Pg toolset, and definitely a thing that we were missing.
