I have PostgreSQL, loaded some data, and have app using it. Now what?

I had to deal with this question, or some version of it, quite a few times. So I decided to write a summary of what one could (or should) do after the data is in the database and the application is running. Namely – set up some kind of replication and backups.

What to use, how, and why? This is what this post is all about.

First, let me start with some basic information about my test setup.

I have a “master" server (imaginatively called master), which is running Ubuntu Server 14.04 LTS (Trusty) – that's the latest Ubuntu LTS Server, which makes it a reasonably good test environment.

To make sure that the post is as usable as possible, I did not use my installation method – instead I used Ubuntu's pre-packaged Pg.

To get PostgreSQL I installed it using:

$ sudo apt-get install postgresql postgresql-contrib postgresql-plperl

which installed these packages:

  1. libpq5 in version: 9.3.6-0ubuntu0.14.04
  2. postgresql in version: 9.3+154
  3. postgresql-9.3 in version: 9.3.6-0ubuntu0.14.04
  4. postgresql-client-9.3 in version: 9.3.6-0ubuntu0.14.04
  5. postgresql-client-common in version: 154
  6. postgresql-common in version: 154
  7. postgresql-contrib in version: 9.3+154
  8. postgresql-contrib-9.3 in version: 9.3.6-0ubuntu0.14.04
  9. postgresql-plperl-9.3 in version: 9.3.6-0ubuntu0.14.04

I need contrib and plperl for some of the databases that I have there, not for anything related to replication or backups. So if you don't need them for your code – you can skip installation of these packages.

Also – I didn't modify the default config in any way before starting work on this blogpost – this should make sure that every change necessary to get the described results is covered in here.

Additionally – since this is a test environment, it doesn't have any real traffic happening. To make sure that there is some work being processed, I added a simple script which, in a loop, creates a table with some data and then drops it.
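
A minimal sketch of such a load generator (the database and table names are purely illustrative) could look like this:

while true
do
    psql -d depesz_explain -c "create table load_test as select i, md5(i::text) from generate_series(1,100000) i"
    psql -d depesz_explain -c "drop table load_test"
    sleep 1
done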

One final note before I start describing what I did – I purposely include commands that produce bad results. Not because I want you to fail, but because I want to show why certain things should be set. For everything I show that doesn't work, I also include, later on, a correct, sensible version. As bad as it sounds, it's only so you will actually understand what you're doing, instead of blindly copy-pasting examples.

Having a working database, we will first set up replication. A replica is good because you can offload some queries there, and you can (possibly, we'll see) make backups off the replica, to limit load on master.

So we need a new machine. This one will be named “slave" – as this is the role it will be performing. I installed the same packages, and verified that they are the same versions.

Locations are the same in both cases – config files in /etc/postgresql/9.3/main/ and data in /var/lib/postgresql/9.3/main.

Installing Pg on slave creates some template databases, which I don't need – since I will be setting it up as a slave. So, I have to stop Pg (on slave) and remove all data in the data directory:

15:50:19 depesz@slave ~
=$ sudo service postgresql stop
 * Stopping PostgreSQL 9.3 database server
   ...done.
 
15:50:19 depesz@slave ~
=$ sudo -u postgres bash -c "rm -rf /var/lib/postgresql/9.3/main/*"

Please note that I removed content of data directory, but not the directory itself.

You might ask – why bash -c “…", and not just: “sudo rm -rf /var/lib/postgresql/9.3/main/*"? The reason is that the * in this example would be expanded by my shell (as the depesz account), but this account can't list files in the data directory, so rm would get the literal “/var/lib/postgresql/9.3/main/*" as the filename to remove, which it can't handle.

The bash -c trick makes sure that the * expansion happens from the postgres account, and will give rm the correct list of files and directories to remove. There are other ways around this, of course, like: sudo su - postgres, and then from that shell running: rm -rf /var/lib/postgresql/9.3/main/*, or even smarter:

$ sudo -u postgres find /var/lib/postgresql/9.3/main/ -mindepth 1 -depth -delete

Now, using the postgres account, I need to copy the base set of data – there is a great tool to do it – pg_basebackup, so let's use it:

15:56:30 postgres@slave ~
=$ pg_basebackup --pgdata=/var/lib/postgresql/9.3/main/ --write-recovery-conf --checkpoint=fast --verbose --progress --host=master
pg_basebackup: could not connect to server: could not connect to server: Connection refused
        Is the server running on host "master" (172.28.173.160) and accepting
        TCP/IP connections on port 5432?

First problem. PostgreSQL on master doesn't listen on 5432? It does. But only on localhost, and we need it to listen on all interfaces. So, on master, I edit /etc/postgresql/9.3/main/postgresql.conf and change the line:

#listen_addresses = 'localhost'         # what IP address(es) to listen on;

into:

listen_addresses = '*'          # what IP address(es) to listen on;

This change requires restart, so I also need:

16:03:20 depesz@master ~
=$ sudo service postgresql restart
 * Restarting PostgreSQL 9.3 database server
   ...done.

Now, let's retry pg_basebackup:

15:59:44 postgres@slave ~
=$ pg_basebackup --pgdata=/var/lib/postgresql/9.3/main/ --write-recovery-conf --checkpoint=fast --verbose --progress --host=master
pg_basebackup: could not connect to server: FATAL:  no pg_hba.conf entry for replication connection from host "172.28.173.157", user "postgres", SSL on
FATAL:  no pg_hba.conf entry for replication connection from host "172.28.173.157", user "postgres", SSL off

OK. It's better – PostgreSQL on master listens, but refuses connections. Of course – in its pg_hba.conf file, I didn't include any information about connections from slave. The default pg_hba.conf on Ubuntu looks like:

16:05:36 depesz@master ~
=$ sudo grep -E '^[^#]' /etc/postgresql/9.3/main/pg_hba.conf
local   all             postgres                                peer
local   all             all                                     peer
host    all             all             127.0.0.1/32            md5
host    all             all             ::1/128                 md5

Which means that only local connections (either using unix socket or tcpip) will be allowed.

Before I make any changes, let's think for a moment. We need a connection to send the initial data, and to replicate changes later on. This will be done using the same protocol/methods that normal PostgreSQL clients use. With this in mind, being a somewhat security-conscious person, I think it's better to set up a special database account for replication purposes. Luckily it's simple:

16:08:12 postgres@master ~
=$ createuser --replication replicant
 
16:09:15 postgres@master ~
=$ echo "host replication replicant 172.28.173.157/24 trust" >> /etc/postgresql/9.3/main/pg_hba.conf
 
16:10:17 postgres@master ~
=$ psql -c "select pg_reload_conf()"
 pg_reload_conf 
----------------
 t
(1 row)

This created the user “replicant" – which doesn't have any great privileges, but can be used for replication. I also added a line to pg_hba.conf which allows passwordless authentication as “replicant" from slave (using its IP address). And finally, to make sure that Pg would load the modified pg_hba.conf, I reloaded the config.
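
I'm sticking with trust here to keep things simple, but if you'd rather not have passwordless access even for a dedicated replication role, a rough alternative is md5 authentication plus a .pgpass file on slave (the password is obviously just a placeholder):

# on master: set a password, and use md5 instead of the trust line above
psql -c "alter role replicant with password 'some-secret'"
echo "host replication replicant 172.28.173.157/32 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
psql -c "select pg_reload_conf()"

# on slave, as postgres: store the password so pg_basebackup and streaming can find it
echo "master:5432:replication:replicant:some-secret" >> ~/.pgpass
chmod 600 ~/.pgpass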

Now. Are we set? Let's see again, on slave:

16:10:40 postgres@slave ~
=$ pg_basebackup --pgdata=/var/lib/postgresql/9.3/main/ --write-recovery-conf --checkpoint=fast --verbose --progress --host=master --user=replicant
pg_basebackup: could not connect to server: FATAL:  number of requested standby connections exceeds max_wal_senders (currently 0)

Please note that I added the --user=replicant option to make sure that pg_basebackup will use this new role. But we got an error anyway. Why? Well – it just so happens that you can't have an arbitrary number of slaves (at least so-called streaming slaves) running – each of them uses some resources, and in the general case, you don't need them. But now we do. Also – let's be smarter. We need one now, but let's set the new limit with some margin for the future. Let's say 5. To do so, on master, in /etc/postgresql/9.3/main/postgresql.conf, I changed the line:

#max_wal_senders = 0            # max number of walsender processes

into

max_wal_senders = 5             # max number of walsender processes

Also, while checking the docs for this parameter, I noticed that I also have to change wal_level to archive or higher. A quick read of the docs for that setting shows that I also need to enable archive_mode, and this, in turn, requires a non-empty archive_command.

At the moment, I don't care about archiving, so let's use “/bin/true" as my archiving program – it will basically immediately report to PostgreSQL that archiving “succeeded".

All changes in postgresql.conf, on master, now are (there's a combined excerpt after the list):

  • wal_level – was commented out with value “minimal"; uncommented it, and changed the value to hot_standby (since I want to be able to query the slave)
  • archive_mode – was commented out with value “off"; uncommented, and changed the value to “on"
  • archive_command – was commented out with value '' (empty string in quotes); uncommented, and changed the value to '/bin/true'
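
Putting it all together, the relevant lines in postgresql.conf on master (including the earlier listen_addresses change) now look more or less like this:

listen_addresses = '*'
max_wal_senders = 5
wal_level = hot_standby
archive_mode = on
archive_command = '/bin/true'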

These changes require a restart. After restarting master, I try again to copy the data to slave:

10:41:48 postgres@slave ~
=$ pg_basebackup --pgdata=/var/lib/postgresql/9.3/main/ --write-recovery-conf --checkpoint=fast --verbose --progress --host=master --user=replicant
 492331/1840023 kB (26%), 0/1 tablespace (...resql/9.3/main//base/93542/93575)
...
1724933/1840023 kB (100%), 1/1 tablespace                                         
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
pg_basebackup: base backup completed

Nice. So now, I can start Pg on slave:

10:53:10 root@slave ~
=# service postgresql start
 * Starting PostgreSQL 9.3 database server
   ...done.

Does it mean we're set? Well, unfortunately, no.

A quick check of the PostgreSQL logs on slave (it's Ubuntu, so the log is /var/log/postgresql/postgresql-9.3-main.log) shows:

10:54:26 root@slave ~
=# tail -n 5 /var/log/postgresql/postgresql-9.3-main.log
2015-02-19 10:54:34 CET FATAL:  could not receive data from WAL stream: ERROR:  requested WAL segment 00000001000000A100000048 has already been removed
 
2015-02-19 10:54:39 CET LOG:  started streaming WAL from primary at A1/48000000 on timeline 1
2015-02-19 10:54:39 CET FATAL:  could not receive data from WAL stream: ERROR:  requested WAL segment 00000001000000A100000048 has already been removed

What is the problem? PostgreSQL keeps wal files only for a short time, after which they get recycled. How long it keeps them depends on the checkpoint_segments setting, and additionally on wal_keep_segments.

checkpoint_segments is used, as I described earlier, to keep up with concurrent writes (and the default value is too small, but I left it small purposely).

wal_keep_segments is used for exactly the reason we just had problems – to keep some segments around, so that a newly built slave will be able to catch up with replication, and, additionally, to make sure that in case of a network hiccup or slave restart, we will not have to rebuild it from scratch.

I checked which wal is current on master:

10:40:56 root@master ~
=# ps uwwf -u postgres | grep archive
postgres  2700  0.1  0.0 103408  1656 ?        Ss   10:40   0:01  \_ postgres: archiver process   last was 00000001000000A9000000F1

Then I extracted the segment number, and converted it from hex to decimal:

=# printf "ibase=16\n%s\n" "$( echo 00000001000000A9000000F1 | cut -b 9-16,23,24 )" | bc
43505

Then, I checked which xlog the slave needs (it's in the logs above), and converted it to a decimal xlog number the same way:

=# printf "ibase=16\n%s\n" "$( echo 00000001000000A100000048 | cut -b 9-16,23,24 )" | bc
41288

The difference is 43505 - 41288 = 2217 segments, so it looks like we should have wal_keep_segments of around 2000 to handle the current situation. OK. Let's do it.

On master, in postgresql.conf, I find wal_keep_segments:

11:02:12 root@master /etc/postgresql/9.3/main
=# grep wal_keep_segments postgresql.conf 
#wal_keep_segments = 0          # in logfile segments, 16MB each; 0 disables

change it to 2000:

11:02:37 root@master /etc/postgresql/9.3/main
=# grep wal_keep_segments postgresql.conf
wal_keep_segments = 2000                # in logfile segments, 16MB each; 0 disables

Afterwards I need to reload config:

(postgres@[LOCAL]:5432) 11:04:46 [postgres] 
$ SHOW wal_keep_segments;
 wal_keep_segments 
-------------------
 0
(1 row)
 
(postgres@[LOCAL]:5432) 11:04:55 [postgres] 
$ SELECT pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)
 
(postgres@[LOCAL]:5432) 11:04:58 [postgres] 
$ SHOW wal_keep_segments;
 wal_keep_segments 
-------------------
 2000
(1 row)

Now – this doesn't help us with our currently broken slave, because the deleted segments are deleted and master can't “undelete" them.

So I need to setup slave again:

11:02:12 postgres@slave ~
=$ pg_basebackup --pgdata=/var/lib/postgresql/9.3/main/ --write-recovery-conf --checkpoint=fast --verbose --progress --host=master --user=replicant
...
pg_basebackup: base backup completed
 
11:08:44 root@slave ~
=# service postgresql start
 * Starting PostgreSQL 9.3 database server
   ...done.

After a while, ps on postgres account shows:

11:18:18 postgres@slave ~
=$ ps xf
  PID TTY      STAT   TIME COMMAND
 1149 pts/3    S      0:00 -su
14778 pts/3    R+     0:00  \_ ps xf
11863 ?        S      0:00 /usr/lib/postgresql/9.3/bin/postgres -D /var/lib/postgresql/9.3/main -c config_file=/etc/postgresql/9.3/main/postgresql.conf
11869 ?        Ss     1:16  \_ postgres: startup process   recovering 00000001000000AF0000002D
11880 ?        Ss     0:01  \_ postgres: checkpointer process
11881 ?        Ss     0:00  \_ postgres: writer process
12462 ?        Ss     1:28  \_ postgres: wal receiver process   streaming AF/2DBE0000

Please note that it says: ‘wal receiver process streaming' – which means that streaming replication is established, and works.

So, are we all set? Well, not quite. If I try to connect to the slave server, it fails:

11:18:21 postgres@slave ~
=$ psql
psql: FATAL:  the database system is starting up

The reason is that, by default, slaves, even in streaming replication, are not readable. You have to enable it. How? That's simple. In postgresql.conf on slave you have to enable hot_standby.

So you find the commented-out “hot_standby = off" line and change it to an uncommented “hot_standby = on". Afterwards, we'll need to restart the slave PostgreSQL (service postgresql restart from the root account on slave).
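
To double-check the edit before restarting, a quick grep on slave (same as we did for wal_keep_segments on master) does the trick:

$ grep -E '^hot_standby' /etc/postgresql/9.3/main/postgresql.conf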

Once that's done, I can connect to any database, and select any data, but not create anything:

11:22:28 postgres@slave ~
=$ psql -d depesz_explain
psql (9.3.6)
Type "help" for help.
 
(postgres@[LOCAL]:5432) 11:22:31 [depesz_explain] 
$ SELECT COUNT(*) FROM plans;
 count
--------
 176953
(1 row)
 
(postgres@[LOCAL]:5432) 11:22:35 [depesz_explain] 
$ CREATE TABLE z (i int4);
ERROR:  cannot execute CREATE TABLE in a read-only transaction

This is as expected – you can't write, even temp tables, on slave. But you can query them without problems.

Having replication set up, we should know how to check if it's up to date. To do that, on master, you query (in any database) the system view pg_stat_replication:

=$ psql -x -c "select *, pg_current_xlog_location() from pg_stat_replication"
-[ RECORD 1 ]------------+------------------------------
pid                      | 2460
usesysid                 | 101631
usename                  | replicant
application_name         | walreceiver
client_addr              | 172.28.173.157
client_hostname          | 
client_port              | 59200
backend_start            | 2015-02-19 11:21:42.525399+01
state                    | catchup
sent_location            | B0/8DB20000
write_location           | B0/8DB00000
flush_location           | B0/8DAE0000
replay_location          | B0/8DADFFC0
sync_priority            | 0
sync_state               | async
pg_current_xlog_location | B2/CE000000

The added column pg_current_xlog_location shows the current location on master. And sent_location/write_location/flush_location/replay_location show the state of replication to the given slave (identified by client_addr).

Why are there four values? Well, master sends data (sent_location). Slave gets the data, and writes it to its own xlog (write_location). Every so often the data is flushed to disk (flush_location). And finally – once the data has been flushed to disk on slave – so we have a 100% guarantee (usually, if the disks don't lie) – we can apply the wal data to the actual data on slave (replay_location).
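
If you want to see how far behind each of those four stages is, in bytes, you can diff them against the current location using pg_xlog_location_diff() (more on this function in a moment). A sketch, run in psql on master:

select client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(), sent_location)   as sent_lag,
       pg_xlog_location_diff(pg_current_xlog_location(), write_location)  as write_lag,
       pg_xlog_location_diff(pg_current_xlog_location(), flush_location)  as flush_lag,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) as replay_lag
from pg_stat_replication;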

One common way to check for replication lag is to use pg_last_xact_replay_timestamp function, on slave:

11:29:17 postgres@slave ~
=$ psql -c "select clock_timestamp() - pg_last_xact_replay_timestamp();"
    ?column?
----------------
 00:08:27.02805
(1 row)

This shows lag of 8 minutes and 27 seconds.

The problem with pg_last_xact_replay_timestamp is that if you have no traffic on the server, or only large, long transactions, it will show your slave as lagging while it's not. This might or might not be a problem in your case, so you have to test.
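
One common workaround is to report zero lag when the slave has replayed everything it has received. A sketch, run in psql on slave:

select case
    when pg_last_xlog_receive_location() = pg_last_xlog_replay_location() then 0
    else extract(epoch from clock_timestamp() - pg_last_xact_replay_timestamp())
end as lag_seconds;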

I personally prefer to run this query on master:

11:41:07 postgres@master ~
=$ psql -c "select pg_xlog_location_diff(pg_current_xlog_location(), replay_location) from pg_stat_replication" 
 pg_xlog_location_diff 
-----------------------
           18389761624
(1 row)

This shows the delay in WAL data – in this case around 18GB. The nice thing is that it will be accurate even in case of no traffic, or just a couple of long transactions.
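
If you prefer a human-readable number, the same value can be wrapped in pg_size_pretty():

=$ psql -c "select pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(), replay_location)) from pg_stat_replication"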

That 18GB backlog also shows that my test machines (well, virtual boxes) are not fast enough to keep up with the replication. The reason is most likely misconfigured checkpointing, but that's irrelevant for now.

To give some kind of summary now, we have:

  • working streaming replication
  • enabled, but unused archiving
  • slave is readable
  • replication will be OK with network problems or slave restarts, for up to 2000 wal segments (i.e. the time it takes master to write 2000 segments, plus whatever the checkpoint_segments/checkpoint_timeout settings add).

This is nice. Now we need backups.

Before I show you how to make backups, let me step back and think for a moment.

The simplest way of making a backup is to call pg_dump on every database (plus pg_dumpall -g to get global data, like users). So, more or less this:

11:48:37 postgres@master ~/backups
=$ pg_dumpall -g > globals.dump
 
11:50:42 postgres@master ~/backups
=$ psql -qAtX -c 'select datname from pg_database where datallowconn order by pg_database_size(oid) desc' | xargs -I{} -n1 -P3 pg_dump -Fc -f {}.dump {}

This makes the backups in parallel, three dumps at a time, starting with the largest databases.
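
For completeness – restoring from such dumps would look more or less like this (database name is just an example; -j makes pg_restore load data in parallel):

=$ psql -d postgres -f globals.dump
=$ createdb some_db
=$ pg_restore -j 4 -d some_db some_db.dump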

There are problems with this approach, though.

For starters – these are dumps taken at a given moment in time. And you can't store increments. So if I had such a backup done every day at 6:00 am, and my database failed at 5:00 am, I would have to recover from a backup made 23 hours earlier!

What's more – restoring them takes quite a lot of time, since the data has to be actually added to the database, indexes created, constraints checked. This takes time, which you might not have.

And finally – let's assume you have a problem with the database that is not obvious – for example, bloated tables/indexes, or a damaged data file. The dump, if it works at all, will not contain the problem. The restored database (assuming it restores) will be without bloat, and all traces of the problem will be gone.

Luckily – there is another way of handling backups, which solves all of the above problems, with one drawback – backups are larger.

Instead of doing pg_dump, you can actually get a copy of the whole data directory (with tar/rsync/whatever). And if you also archive xlogs, you will be able to restore to any point in time for which you have xlogs.

So, in the scenario above – I make a full backup daily at 6:00 am, and store all xlogs – when the db is gone, I just load the last daily backup, apply all the xlogs that have accumulated since, and get a very fresh copy of all the data.
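
Under the hood such a backup is just pg_start_backup(), a copy of the data directory, and pg_stop_backup(). A bare-bones manual sketch (paths are illustrative, and this is not the approach we'll end up using):

=$ psql -c "select pg_start_backup('manual base backup', true)"
# tar may warn about files changing while it reads them – for a base backup that's expected
=$ tar -C /var/lib/postgresql/9.3 -czf /some/backup/place/base-$(date +%Y-%m-%d).tar.gz main
=$ psql -c "select pg_stop_backup()"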

To make this whole scheme work we'll need a sensible archive_command, and preferably a separate server for backups.

One final note – since we will be archiving all xlogs for the purpose of backups – we no longer need a large wal_keep_segments – basically we just need it large enough to handle an occasional network problem, but we don't need it to handle prolonged slave downtime, or rebuilding a slave – the wal segments that would be needed will be in the walarchive anyway.

Why is that a good thing? Well, wal_keep_segments is not free – it costs disk space (16MB per segment, so 2000 of these is 32GB). And it puts some additional load on master when a slave has to catch up from them.

Now, let's set it up.

There are many tools that can handle wal archiving and backups, but I like OmniPITR – I know it pretty well, given that I wrote most of it 🙂

So, let's install it. On both machines (master and slave), as postgres account, in postgres home directory (/var/lib/postgresql) I do:

11:06:31 postgres@master ~
=$ git clone https://github.com/omniti-labs/omnipitr.git
Cloning into 'omnipitr'...
remote: Counting objects: 2549, done.
remote: Total 2549 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (2549/2549), 764.62 KiB | 652.00 KiB/s, done.
Resolving deltas: 100% (1212/1212), done.
Checking connectivity... done.
 
11:07:17 postgres@master ~
=$ ~/omnipitr/bin/sanity-check.sh
Checking:
- /var/lib/postgresql/omnipitr/bin
- /var/lib/postgresql/omnipitr/lib
9 programs, 31 libraries.
Tar version
All checked, and looks ok.

The sanity-check command is used to verify that everything OmniPITR needs is in place, and in an appropriate version.

With OmniPITR in place, but not yet configured, I need a place to store backups and the wal archive. It could be on the slave server, but it makes more sense to put it on yet another machine. So, I made myself a third box, aptly named “storage".

On this server, I made pgarchive user:

11:09:54 root@storage ~
=# adduser --disabled-password --gecos "PostgreSQL archive" pgarchive
Adding user `pgarchive' ...
Adding new group `pgarchive' (1001) ...
Adding new user `pgarchive' (1001) with group `pgarchive' ...
Creating home directory `/home/pgarchive' ...
Copying files from `/etc/skel' ...

I made it with --disabled-password because all connections to this account will be done using ssh keys, so a password is not needed.

Now, on both master and slave, on postgres account, I generate passwordless ssh keys:

11:07:24 postgres@master ~
=$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/postgresql/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /var/lib/postgresql/.ssh/id_rsa.
Your public key has been saved in /var/lib/postgresql/.ssh/id_rsa.pub.
The key fingerprint is:
24:58:61:16:fa:80:4b:bf:81:ea:6c:50:a1:bc:0b:bb postgres@master
The key's randomart image is:
+--[ RSA 2048]----+
|      *o         |
|  .. *           |
|..o.+ . .        |
|.o.+ o o         |
| .+ o . S        |
|oo   o           |
|+.. .            |
|+o               |
|E+               |
+-----------------+
 
11:07:10 postgres@slave ~
=$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/postgresql/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /var/lib/postgresql/.ssh/id_rsa.
Your public key has been saved in /var/lib/postgresql/.ssh/id_rsa.pub.
The key fingerprint is:
69:da:0a:4d:cc:4b:42:5f:80:b4:b8:de:cc:01:72:d7 postgres@slave
The key's randomart image is:
+--[ RSA 2048]----+
|   ....          |
|   ..o .         |
|. + + E .        |
| o = + . .       |
|  . o * S        |
| . + * =         |
|  . = + .        |
|     . .         |
|      .          |
+-----------------+

Afterwards I copy the content of /var/lib/postgresql/.ssh/id_rsa.pub, from both servers, to /home/pgarchive/.ssh/authorized_keys on storage. You can use any method of copying you want; I just connected to all the places and copy/pasted the keys using normal terminal copy/paste (ctrl-c/ctrl-v).

After this is done, the authorized_keys file should look like:

11:15:16 pgarchive@storage ~
=$ cat /home/pgarchive/.ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD...ju1 postgres@slave
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD...VSP postgres@master

We will also need to copy the key from master to slave's .ssh/authorized_keys.
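
If you'd rather avoid the copy/paste, a rough equivalent is to append the keys from the shell – for example as root on storage, assuming both public keys were copied to /tmp first (the paths are purely illustrative):

=# mkdir -p /home/pgarchive/.ssh
=# cat /tmp/master-id_rsa.pub /tmp/slave-id_rsa.pub >> /home/pgarchive/.ssh/authorized_keys
=# chmod 700 /home/pgarchive/.ssh
=# chmod 600 /home/pgarchive/.ssh/authorized_keys
=# chown -R pgarchive:pgarchive /home/pgarchive/.ssh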

This should make sure that both master and slave can connect to storage. Let's test it:

11:13:52 postgres@master ~
=$ ssh pgarchive@storage id
Warning: Permanently added 'storage,172.28.173.167' (ECDSA) to the list of known hosts.
uid=1001(pgarchive) gid=1001(pgarchive) groups=1001(pgarchive)
 
11:16:20 postgres@master ~
=$ ssh pgarchive@storage id
uid=1001(pgarchive) gid=1001(pgarchive) groups=1001(pgarchive)
 
11:13:45 postgres@slave ~
=$ ssh pgarchive@storage id
Warning: Permanently added 'storage,172.28.173.167' (ECDSA) to the list of known hosts.
uid=1001(pgarchive) gid=1001(pgarchive) groups=1001(pgarchive)
 
11:16:28 postgres@slave ~
=$ ssh pgarchive@storage id
uid=1001(pgarchive) gid=1001(pgarchive) groups=1001(pgarchive)

As you can see on both machines, on first connect, ssh warned about unknown host, but added its key to “known_hosts", and on subsequent connection no warnings were emitted, and “id" worked fine.

We can log in, we have OmniPITR; the last thing we need is a place to store the archives and backups:

11:15:20 pgarchive@storage ~
=$ mkdir -pv /home/pgarchive/{walarchive,backups}/9.3
mkdir: created directory ‘/home/pgarchive/walarchive’
mkdir: created directory ‘/home/pgarchive/walarchive/9.3’
mkdir: created directory ‘/home/pgarchive/backups’
mkdir: created directory ‘/home/pgarchive/backups/9.3’

While technically we don't need separate directories, and we don't need them versioned (named after the version of PostgreSQL), I found out that it's a good idea to have at least the walarchive contain version information – it saves some headaches in case of upgrades.

We will also need a directory on slave for a short-term walarchive:

11:28:38 postgres@slave ~
=$ mkdir walarchive/9.3 -pv
mkdir: created directory ‘walarchive’
mkdir: created directory ‘walarchive/9.3’

All this is in place, so let's start actually using it.

In postgresql.conf on master (/etc/postgresql/9.3/main/postgresql.conf), I change, again, wal_keep_segments and archive_command.

wal_keep_segments becomes 10 (high enough to handle occasional network issue, not big enough to use lots of disk space).

archive_command is a bit more complex:

archive_command = '/var/lib/postgresql/omnipitr/bin/omnipitr-archive --config-file /var/lib/postgresql/omnipitr.config "%p"'

And then we need /var/lib/postgresql/omnipitr.config, with this content:

--verbose
--dst-remote=gzip=pgarchive@storage:/home/pgarchive/walarchive/9.3/
--dst-remote=gzip=postgres@slave:/var/lib/postgresql/walarchive/9.3/
--state-dir=/var/lib/postgresql/.omnipitr-state
--log=/var/log/postgresql/omnipitr.log

Finally, we need the state dir on master:

11:38:59 postgres@master ~
=$ mkdir /var/lib/postgresql/.omnipitr-state

This will usually be empty, but it's needed in some edge cases.

After it's been set, I need to reload configuration (no restart needed):

11:39:14 root@master /etc/postgresql/9.3/main
=# service postgresql reload
 * Reloading PostgreSQL 9.3 database server
   ...done.

And now, archiving works. I can see it in /var/log/postgresql/omnipitr.log:

=$ tail /var/log/postgresql/omnipitr.log 
2015-02-21 11:41:15.187345 +0100 : 18094 : omnipitr-archive : LOG : Segment ./pg_xlog/00000001000000BB00000094 successfully sent to all destinations.
2015-02-21 11:41:15.258605 +0100 : 18124 : omnipitr-archive : LOG : Called with parameters: --config-file /var/lib/postgresql/omnipitr.config pg_xlog/00000001000000BB00000095
2015-02-21 11:41:15.852138 +0100 : 18124 : omnipitr-archive : LOG : Timer [Compressing with gzip] took: 0.591s
2015-02-21 11:41:16.417153 +0100 : 18124 : omnipitr-archive : LOG : Sending /tmp/omnipitr-archive/18124/00000001000000BB00000095.gz to pgarchive@storage:/home/pgarchive/walarchive/9.3/00000001000000BB00000095.gz ended in 0.553252s
2015-02-21 11:41:16.991692 +0100 : 18124 : omnipitr-archive : LOG : Sending /tmp/omnipitr-archive/18124/00000001000000BB00000095.gz to postgres@slave:/var/lib/postgresql/walarchive/9.3/00000001000000BB00000095.gz ended in 0.548646s
2015-02-21 11:41:17.017245 +0100 : 18124 : omnipitr-archive : LOG : Timer [Segment delivery] took: 1.154s
2015-02-21 11:41:17.027570 +0100 : 18124 : omnipitr-archive : LOG : Segment ./pg_xlog/00000001000000BB00000095 successfully sent to all destinations.
2015-02-21 11:41:17.104728 +0100 : 18151 : omnipitr-archive : LOG : Called with parameters: --config-file /var/lib/postgresql/omnipitr.config pg_xlog/00000001000000BB00000096
2015-02-21 11:41:18.145986 +0100 : 18151 : omnipitr-archive : LOG : Timer [Compressing with gzip] took: 1.039s
2015-02-21 11:41:18.744663 +0100 : 18151 : omnipitr-archive : LOG : Sending /tmp/omnipitr-archive/18151/00000001000000BB00000096.gz to pgarchive@storage:/home/pgarchive/walarchive/9.3/00000001000000BB00000096.gz ended in 0.551973s

If you decide you don't need that many logs, just remove --verbose from the omnipitr config on master. You can also use parallel-jobs to send to both places at the same time. Or use pigz instead of gzip for compression. But that's not important now.
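
If your database is mostly idle and you don't want to wait for a segment to fill up naturally, you can force a switch on master and watch the archiving happen:

=$ psql -c "select pg_switch_xlog()"
=$ tail -n 2 /var/log/postgresql/omnipitr.log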

Currently both walarchives – on slave and on storage – are getting their wals:

11:42:40 postgres@slave ~
=$ ls -l /var/lib/postgresql/walarchive/9.3/ | tail -n 2
-rw------- 1 postgres postgres 4551998 Feb 21 11:42 00000001000000BB000000C1.gz
-rw------- 1 postgres postgres 5046770 Feb 21 11:42 00000001000000BB000000C2.gz
 
11:42:55 postgres@slave ~
=$ ls -l /var/lib/postgresql/walarchive/9.3/ | wc -l
100
11:26:02 pgarchive@storage ~
=$ ls -l /home/pgarchive/walarchive/9.3/ | tail -n 2
-rw------- 1 pgarchive pgarchive 4680326 Feb 21 11:42 00000001000000BB000000CC.gz
-rw------- 1 pgarchive pgarchive 5114668 Feb 21 11:42 00000001000000BB000000CD.gz
 
11:43:17 pgarchive@storage ~
=$ ls -l /home/pgarchive/walarchive/9.3/ | wc -l
111

Now that we have the archive, we should configure slave to use it.

The file /var/lib/postgresql/9.3/main/recovery.conf currently contains only:

=$ cat recovery.conf 
standby_mode = 'on'
primary_conninfo = 'user=replicant host=master port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'

We'll add two more lines there:

restore_command = '/var/lib/postgresql/omnipitr/bin/omnipitr-restore --config-file /var/lib/postgresql/omnipitr-restore.config -sr "%f" "%p"'
archive_cleanup_command = '/var/lib/postgresql/omnipitr/bin/omnipitr-cleanup --config-file /var/lib/postgresql/omnipitr-cleanup.config "%r"'

As you can imagine, it will require two additional config files:

  • /var/lib/postgresql/omnipitr-restore.config:
    --verbose
    --source=gzip=/var/lib/postgresql/walarchive/9.3/
    --log=/var/log/postgresql/omnipitr.log
  • /var/lib/postgresql/omnipitr-cleanup.config:
    --verbose
    --archive=gzip=/var/lib/postgresql/walarchive/9.3/
    --log=/var/log/postgresql/omnipitr.log

After this is set, we need to restart slave Pg:

11:49:51 root@slave ~
=# service postgresql restart
 * Restarting PostgreSQL 9.3 database server
   ...done.

Afterwards we should see in omnipitr log on slave something like:

2015-02-21 12:19:25.979064 +0100 : 8064 : omnipitr-restore : LOG : Called with parameters: --config-file /var/lib/postgresql/omnipitr-restore.config -sr 00000001000000BC00000079 pg_xlog/RECOVERYXLOG
2015-02-21 12:19:26.010718 +0100 : 8064 : omnipitr-restore : FATAL : Requested file does not exist, and it is streaming replication environment. Dying.

And after some time even:

2015-02-21 12:21:51.868415 +0100 : 6549 : omnipitr-cleanup : LOG : Segment 00000001000000BC0000006A.gz removed.
2015-02-21 12:21:51.869320 +0100 : 6549 : omnipitr-cleanup : LOG : Segment 00000001000000BC0000006B.gz removed.
2015-02-21 12:21:51.870323 +0100 : 6549 : omnipitr-cleanup : LOG : 267 segments removed.

This tells us two things:

  1. if slave (the Pg instance itself) were down for a longer time, it could still be restarted without problems using the walarchive.
  2. the walarchive on slave will be kept small, because the cleanup script removes xlogs from it once they are no longer needed

Of course this didn't change how the walarchive on the storage server works – wal is stored there “forever", as nothing removes wal files from there.

Now, backups.

To make them without impacting master, we'll run them on slave.

First, a single run, from the command line. I'll make a config file for backups on slave, /var/lib/postgresql/omnipitr-backup.config, with this content:

--data-dir=/var/lib/postgresql/9.3/main/
--verbose
--log=/var/log/postgresql/omnipitr.log
--dst-direct=gzip=pgarchive@storage:/home/pgarchive/backups/9.3/
--call-master
--digest=SHA-1
--host=master
--skip-xlogs
--pgcontroldata-path=/usr/lib/postgresql/9.3/bin/pg_controldata

Then I can run backup with:

12:37:39 postgres@slave ~
=$ /var/lib/postgresql/omnipitr/bin/omnipitr-backup-slave --config=/var/lib/postgresql/omnipitr-backup.config

It ended immediately, so I checked the logs (/var/log/postgresql/omnipitr.log). In there I found:

12:38:56 postgres@slave ~
=$ grep backup-slave /var/log/postgresql/omnipitr.log 
2015-02-21 12:38:56.681703 +0100 : 18562 : omnipitr-backup-slave : LOG : Called with parameters: --config=/var/lib/postgresql/omnipitr-backup.config
2015-02-21 12:38:56.736474 +0100 : 18562 : omnipitr-backup-slave : LOG : Timer [SELECT w, pg_xlogfile_name(w) from (select pg_start_backup('omnipitr_slave_backup_with_master_callback') as w ) as x] took: 0.042s
2015-02-21 12:38:56.737776 +0100 : 18562 : omnipitr-backup-slave : FATAL : Running [SELECT w, pg_xlogfile_name(w) from (select pg_start_backup('omnipitr_slave_backup_with_master_callback') as w ) as x] via psql failed: $VAR1 = {
2015-02-21 12:38:56.737776 +0100 : 18562 : omnipitr-backup-slave : FATAL :           'stdout' => '',
2015-02-21 12:38:56.737776 +0100 : 18562 : omnipitr-backup-slave : FATAL :           'error_code' => 2,
2015-02-21 12:38:56.737776 +0100 : 18562 : omnipitr-backup-slave : FATAL :           'stderr' => 'psql: FATAL:  no pg_hba.conf entry for host "172.28.173.157", user "postgres", database "postgres", SSL on
2015-02-21 12:38:56.737776 +0100 : 18562 : omnipitr-backup-slave : FATAL : FATAL:  no pg_hba.conf entry for host "172.28.173.157", user "postgres", database "postgres", SSL off
2015-02-21 12:38:56.737776 +0100 : 18562 : omnipitr-backup-slave : FATAL : ',
2015-02-21 12:38:56.737776 +0100 : 18562 : omnipitr-backup-slave : FATAL :           'status' => 512
2015-02-21 12:38:56.737776 +0100 : 18562 : omnipitr-backup-slave : FATAL :         };

It tells me that I can't connect to master, using psql, because pg_hba.conf doesn't let me. So let's change it.

On master, in /etc/postgresql/9.3/main/pg_hba.conf, I add (at the end) line:

host postgres postgres 172.28.173.157/24 trust

And reload config:

12:40:46 root@master /etc/postgresql
=# service postgresql reload
 * Reloading PostgreSQL 9.3 database server
   ...done.

With this in place, I retry /var/lib/postgresql/omnipitr/bin/omnipitr-backup-slave --config=/var/lib/postgresql/omnipitr-backup.config on slave.

This time it took significantly longer – around 2 minutes.

The logs show lots of info:

2015-02-21 12:49:08.386503 +0100 : 24220 : omnipitr-backup-slave : LOG : Called with parameters: --config=/var/lib/postgresql/omnipitr-backup.config
2015-02-21 12:49:08.573861 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [SELECT w, pg_xlogfile_name(w) from (select pg_start_backup('omnipitr_slave_backup_with_master_callback') as w ) as x] took: 0.159s
2015-02-21 12:49:08.593847 +0100 : 24220 : omnipitr-backup-slave : LOG : pg_start_backup('omnipitr') returned BC/F7000028|00000001000000BC000000F7.
2015-02-21 12:49:08.675852 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [select pg_read_file( 'backup_label', 0, ( pg_stat_file( 'backup_label' ) ).size )] took: 0.080s
2015-02-21 12:49:08.710153 +0100 : 24220 : omnipitr-backup-slave : LOG : Waiting for checkpoint (based on backup_label from master) - CHECKPOINT LOCATION: BC/F7000060
2015-02-21 12:49:28.726300 +0100 : 24220 : omnipitr-backup-slave : LOG : Checkpoint .
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : Script to make tarballs:
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : mkfifo \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-0 \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-1 \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-2 \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-3
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : ssh pgarchive\@storage cat\ \-\ \>\ \\\/home\\\/pgarchive\\\/backups\\\/9\\\.3\\\/slave\\\-SHA\\\-1\\\-2015\\\-02\\\-21\\\.tar\\\.gz < \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-0 &
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : nice \/var\/lib\/postgresql\/omnipitr\/bin\/omnipitr\-checksum \-d SHA\-1 \-f slave\-data\-2015\-02\-21\.tar\.gz < \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-1 >> \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-0 &
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : ssh pgarchive\@storage cat\ \-\ \>\ \\\/home\\\/pgarchive\\\/backups\\\/9\\\.3\\\/slave\\\-data\\\-2015\\\-02\\\-21\\\.tar\\\.gz < \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-2 &
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : nice gzip \-\-stdout \- < \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-3 | tee \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-1 > \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-2 &
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : nice tar cf \- \-\-exclude\=main\/pg_log\/\* \-\-exclude\=main\/pg_xlog\/0\* \-\-exclude\=main\/pg_xlog\/archive_status\/\* \-\-exclude\=main\/recovery\.conf \-\-exclude\=main\/postmaster\.pid \-\-transform\=s\#\^tmp\/omnipitr\-backup\-slave\/24220\/\#main\/\# main \/tmp\/omnipitr\-backup\-slave\/24220\/backup_label 2> \/tmp\/omnipitr\-backup\-slave\/24220\/tar\.stderr > \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-3
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : wait
2015-02-21 12:49:28.741424 +0100 : 24220 : omnipitr-backup-slave : LOG : rm \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-0 \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-1 \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-2 \/tmp\/CommandPiper\-24220\-uqBntH\/fifo\-3
2015-02-21 12:51:12.451373 +0100 : 24220 : omnipitr-backup-slave : LOG : tar stderr:
2015-02-21 12:51:12.490083 +0100 : 24220 : omnipitr-backup-slave : LOG : ==============================================
2015-02-21 12:51:12.491193 +0100 : 24220 : omnipitr-backup-slave : LOG : tar: Removing leading `/' from member names
2015-02-21 12:51:12.491999 +0100 : 24220 : omnipitr-backup-slave : LOG : ==============================================
2015-02-21 12:51:12.492854 +0100 : 24220 : omnipitr-backup-slave : LOG : full command stderr:
2015-02-21 12:51:12.493682 +0100 : 24220 : omnipitr-backup-slave : LOG : ==============================================
2015-02-21 12:51:12.494623 +0100 : 24220 : omnipitr-backup-slave : LOG : ControlSocket /var/lib/postgresql/.ssh/sockets/pgarchive-storage-22 already exists, disabling multiplexing
2015-02-21 12:51:12.496041 +0100 : 24220 : omnipitr-backup-slave : LOG : ==============================================
2015-02-21 12:51:12.496917 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [Compressing $PGDATA] took: 103.756s
2015-02-21 12:51:16.573252 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [SELECT pg_stop_backup()] took: 4.075s
2015-02-21 12:51:16.589268 +0100 : 24220 : omnipitr-backup-slave : LOG : pg_stop_backup() returned BC/FF76CF90.
2015-02-21 12:51:16.590622 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [Making data archive] took: 128.179s
2015-02-21 12:51:16.591744 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [Making xlog archive] took: 0.000s
2015-02-21 12:51:16.593287 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [Delivering to all remote destinations] took: 0.000s
2015-02-21 12:51:16.827004 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [Delivering meta files] took: 0.232s
2015-02-21 12:51:16.831344 +0100 : 24220 : omnipitr-backup-slave : LOG : Timer [Whole backup procedure] took: 128.420s
2015-02-21 12:51:16.832572 +0100 : 24220 : omnipitr-backup-slave : LOG : All done.

But the important thing is that I now have backup on storage server:

12:52:15 pgarchive@storage ~
=$ ls -l backups/9.3/
total 888876
-rw-rw-r-- 1 pgarchive pgarchive 910194282 Feb 21 12:51 slave-data-2015-02-21.tar.gz
-rw-rw-r-- 1 pgarchive pgarchive        91 Feb 21 12:51 slave-meta-2015-02-21.tar.gz
-rw-rw-r-- 1 pgarchive pgarchive        71 Feb 21 12:51 slave-SHA-1-2015-02-21.tar.gz

Despite the names ending in “tar.gz", only the *data* file is actually a tarball. The other files are plain text, with content:

12:52:18 pgarchive@storage ~
=$ cat backups/9.3/slave-meta-2015-02-21.tar.gz 
Min-Xlog: 00000001000000BC000000F7
Started-epoch: 1424519348
Timezone: CET
Hostname: slave
 
12:52:40 pgarchive@storage ~
=$ cat backups/9.3/slave-SHA-1-2015-02-21.tar.gz 
8a7bba8d05b41eda0ca855aa69ce5dfcb6f1fe24 *slave-data-2015-02-21.tar.gz

Basically some meta-information about the backup and its checksum.

Now, I can add cronjob on slave to run such backups every day, for example:

15 6 * * * /var/lib/postgresql/omnipitr/bin/omnipitr-backup-slave --config=/var/lib/postgresql/omnipitr-backup.config

Before we move further, I want you to understand what the config means. So let's bring it back for a moment:

--data-dir=/var/lib/postgresql/9.3/main/
--verbose
--log=/var/log/postgresql/omnipitr.log
--dst-direct=gzip=pgarchive@storage:/home/pgarchive/backups/9.3/
--call-master
--digest=SHA-1
--host=master
--skip-xlogs
--pgcontroldata-path=/usr/lib/postgresql/9.3/bin/pg_controldata

Meaning of the lines:

  • data-dir – where the pg data directory is, so that the backup script knows what to back up
  • verbose – make the logs contain more information
  • log – well, where the logs go
  • dst-direct – this is actually pretty smart – it tells omnipitr-backup-slave where to save backups, on a remote machine. But it does it in such a way that the backup is never saved to a file on the slave server – it gets made and delivered to the backup server “on the fly" – so it's faster, and doesn't use additional disk space on slave.
  • call-master – since Pg 9.0, when you run a backup on a slave, you still have to call the pg_start_backup() and pg_stop_backup() functions on master. That's what this option enables.
  • digest – type of digest to calculate for backup files. You can have any type that is supported by Perl's Digest library. Usual choices are MD5 or SHA-1, but you can go wild with it, if you want 🙂
  • host – the hostname of master to connect to (this is needed because of call-master above)
  • skip-xlogs – normally the backup contains the data tarball, and additionally an xlog tarball with all the xlogs required to restore from it. But if you have a working wal archive – it's no longer needed, so we can skip creating the xlog tarball.
  • pgcontroldata-path – unfortunately Ubuntu puts the pg_controldata program in a place that is not, by default, in the PATH environment variable, so I have to provide the path to it myself.

To restore from such a backup we need the data tarball, and all xlogs from 00000001000000BC000000F7 onward (the Min-Xlog from the meta file).

That means that all older xlogs in walarchive are useless – there is no backup to apply them to.
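
Just to sketch how restoring from such a backup would go (on a fresh machine, with the data tarball and the needed wal files copied over; paths and the target time are purely illustrative):

# unpack the base backup into the (stopped, empty) data directory:
tar -xzf slave-data-2015-02-21.tar.gz -C /var/lib/postgresql/9.3/

# then create /var/lib/postgresql/9.3/main/recovery.conf with something like:
#   restore_command = 'gunzip -c /path/to/walarchive/9.3/%f.gz > "%p"'
#   recovery_target_time = '2015-02-22 05:00:00'
# and start PostgreSQL – it will replay wal up to the requested point in time.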

What's more – once we have multiple backups – we'll have to find a way to delete the old ones after some time, and all the older xlogs too. Luckily there is a tool for that. In the OmniPITR bundle 🙂

So, we'll get omnipitr on storage too, in pgarchive home dir:

12:54:16 pgarchive@storage ~
=$ git clone https://github.com/omniti-labs/omnipitr.git
Cloning into 'omnipitr'...
remote: Counting objects: 2549, done.
remote: Total 2549 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (2549/2549), 764.62 KiB | 666.00 KiB/s, done.
Resolving deltas: 100% (1212/1212), done.
Checking connectivity... done.
 
12:55:11 pgarchive@storage ~
=$ ./omnipitr/bin/sanity-check.sh 
Checking:
- /home/pgarchive/omnipitr/bin
- /home/pgarchive/omnipitr/lib
9 programs, 31 libraries.
Tar version
All checked, and looks ok.

With this in place, we'll make config file, /home/pgarchive/omnipitr-backup-cleanup.config, with this content:

--archive=gzip=/home/pgarchive/walarchive/9.3/
--backup-dir=gzip=/home/pgarchive/backups/9.3/
--keep-days=7
--verbose
--log=/home/pgarchive/omnipitr-backup-cleanup.log

With this in place, I test-run it once:

13:06:25 pgarchive@storage ~
=$ /home/pgarchive/omnipitr/bin/omnipitr-backup-cleanup --config=/home/pgarchive/omnipitr-backup-cleanup.config
 
13:06:51 pgarchive@storage ~
=$ cat omnipitr-backup-cleanup.log 
2015-02-21 13:06:51.412343 +0100 : 582 : omnipitr-backup-cleanup : LOG : Called with parameters: --config=/home/pgarchive/omnipitr-backup-cleanup.config
2015-02-21 13:06:51.455380 +0100 : 582 : omnipitr-backup-cleanup : LOG : No backups to remove.
2015-02-21 13:06:51.457652 +0100 : 582 : omnipitr-backup-cleanup : LOG : Timer [Delete backups] took: 0.002s
2015-02-21 13:06:51.460833 +0100 : 582 : omnipitr-backup-cleanup : LOG : 408 xlogs to be removed, from 00000001000000BB00000061.gz to 00000001000000BC000000F6.gz
2015-02-21 13:06:51.510467 +0100 : 582 : omnipitr-backup-cleanup : LOG : Timer [Delete xlogs] took: 0.051s
2015-02-21 13:06:51.512446 +0100 : 582 : omnipitr-backup-cleanup : LOG : Removed 408 files, total: 1872.3MB
2015-02-21 13:06:51.513309 +0100 : 582 : omnipitr-backup-cleanup : LOG : Timer [Backup cleanup] took: 0.079s

As you can see it removed 408 xlog segments, which were too old to be useful for anything. It also mentioned that there are no backups to remove – I didn't have anything older than 1 week.

This command can be cronned too, in cron of pgarchive account on storage server, for example:

34 7 * * * /home/pgarchive/omnipitr/bin/omnipitr-backup-cleanup --config=/home/pgarchive/omnipitr-backup-cleanup.config

And that's it.

At the moment we have:

  • streaming replication from master to slave
  • walarchiving to storage server for restoring from backup
  • walarchiving to slave server, so that in case of slave pg being down, it will be able to recover without need to rebuild
  • walarchive on slave is being cleaned up – unnecessary xlogs are removed
  • daily backups, run from slave, saved on the storage server, without any work on master (aside from starting and stopping backup mode, which has no effect on normal master operation)
  • backups and walarchive on storage are kept in sync (no xlogs that are useless), and small enough

Was it long? Possibly. Could it be done differently? Definitely. Is it the only way? NO! But it works. If it helps even one person – then I'm a happy camper 🙂

13 thoughts on “I have PostgreSQL, loaded some data, and have app using it. Now what?”

  1. After the phrase “So I need to setup slave again:” you just restart slave instead of making pg_basebackup again, is that correct?

  2. @X:
    No. As previous sentence explains – we can’t use this slave, because master doesn’t have the required wal.

  3. I would suggest another layer: do add some active monitoring (nagios or simile), to check every so often that the master, slave and backup are okay. For the backup, it means checking that xlogs have been received recently.

  4. @Calvin: well, sure. Monitoring in terms of some graphs and alerts is necessary, but it’s hardly a dba task. There should be some common monitoring tool for the whole stack, and adding monitoring there makes more sense than setting up a specific tool just for pg.

    @Rafael: good catch, thanks. Fixed.

    @X: I added copy of command line for pg_basebackup so it will be clearer.

  5. Hope to see things going towards that, including troubleshooting, PITRing those examples and some failover to complete the job.

    Great post, Depesz.

  6. Hi. Do I have the correct understanding that if I use replication slots in 9.4, I’ll be able to remove these parts (from the summary at the end):

    – walarchiving to slave server, so that in case of slave pg being down, it will be able to recover without need to rebuild
    – walarchive on slave is being cleaned up – unnecessary xlogs are removed

    ?

  7. @Odd:
    to be honest, I’m not sure. If you’re talking about logical replication – well, if your slave disconnects, then I don’t think you can reconnect easily – you have to keep the wal somewhere.

    All things said – I didn’t yet use anything related to logical replication in production, so I just don’t know.

  8. Hello,

    Recently I caught this message in my master's log file: requested WAL segment 000000010000448F000000CA has already been removed.

    Is it a serious problem in my replication? I have one master and two slaves.

    Thanks a lot.

    Best Regards

  9. @João:

    this just means that slave, when started, was far behind in replication from master, so that master no longer had the necessary wal.

    if you have working archiving, and restore_command set so that it can read xlogs from archive – there is nothing to worry about.

  10. Fantastic tutorial, very well written and easy to follow. Thank you so much.

    Two tiny comments:

    Shouldn’t the two instances of 172.28.173.157/24 rather be 172.28.173.157/32 ?

    I also had a small hiccup with this line from the archive config from master:
    –dst-remote=gzip=postgres@slave:/var/lib/postgresql/walarchive/9.3/

    Nowhere in the text is it written that I should add the public key of the postgres account on master to postgres account on slave. But that was easy enough to figure out.

  11. @Otto:

    1. ip /24 vs. /32 – it could be, but I’m lazy. Making it /24 will make it work for other hosts in the same net.

    2. ssh key – well, I didn’t mention it because it’s not really pg-setup, more like system-setup.
