On 19th of July, Simon Riggs committed patch:
Cascading replication feature for streaming log-based replication.
Standby servers can now have WALSender processes, which can work with
either WALReceiver or archive_commands to pass data. Fully updated
docs, including new conceptual terms of sending server, upstream and
downstream servers. WALSenders terminated when promote to master.
Fujii Masao, review, rework and doc rewrite by Simon Riggs
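In practice this means a standby can now feed further standbys, instead of every replica having to connect directly to the master. As a rough sketch (host names are made up, and of course the intermediate standby also needs a replication entry in its pg_hba.conf):

# postgresql.conf on the intermediate standby - it now runs WALSenders too
max_wal_senders = 3

# recovery.conf on the intermediate standby - it still receives WAL from the master
standby_mode = 'on'
primary_conninfo = 'host=master.example.com user=replication'

# recovery.conf on the downstream standby - it connects to the standby, not to the master
standby_mode = 'on'
primary_conninfo = 'host=standby1.example.com user=replication'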
Continue reading Waiting for 9.2 – cascading streaming replication
On 6th of March, Simon Riggs committed patch:
Efficient transaction-controlled synchronous replication.
If a standby is broadcasting reply messages and we have named
one or more standbys in synchronous_standby_names then allow
users who set synchronous_replication to wait for commit, which
then provides strict data integrity guarantees. Design avoids
sending and receiving transaction state information so minimises
bookkeeping overheads. We synchronize with the highest priority
standby that is connected and ready to synchronize. Other standbys
can be defined to takeover in case of standby failure.
This version has very strict behaviour; more relaxed options
may be added at a later date.
Simon Riggs and Fujii Masao, with reviews by Yeb Havinga, Jaime
Casanova, Heikki Linnakangas and Robert Haas, plus the assistance
of many other design reviewers.
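To give a feel for what this means in practice, a minimal sketch (names are examples, and note that the GUCs were still being reshuffled at this point – in the released 9.1 the per-transaction switch ended up being synchronous_commit rather than a separate synchronous_replication parameter):

# postgresql.conf on the master - standbys allowed to be synchronous, in priority order;
# names are matched against the application_name the standby connects with
synchronous_standby_names = 'standby1, standby2'

# recovery.conf on the standby
primary_conninfo = 'host=master.example.com user=replication application_name=standby1'

With this in place, a committing transaction waits until the highest-priority connected standby confirms it has received the commit record.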
Continue reading Waiting for 9.1 – Synchronous replication
On 16th of February, Robert Haas committed patch:
Fujii Masao, reviewed by Robert Haas, Stephen Frost, and Magnus Hagander.
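For the record, the new subcommand makes promoting a standby a one-liner, instead of having to touch a trigger file (the data directory path is just an example):

# switch a running standby out of recovery and make it accept writes
pg_ctl promote -D /var/lib/postgresql/9.1/main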
Continue reading Waiting for 9.1 – pg_ctl promote
As of yesterday OmniPITR got the following changes/fixes:
- Fixed a bug which caused the immediate finish request to be treated the same as the smart finish request.
- Fixed a problem with using omnipitr-backup-slave on a PostgreSQL 9.0 slave which is using streaming replication.
- Added an option to omnipitr-restore, so that you can now use it for streaming-replication slaves.
I'm very ashamed of the first thing (smart/immediate finish request), as it was simply a lack of testing on my part.
As for the second – the problem was that with streaming replication the slave behaves a bit differently than normal, so the backup procedure had to be modified.
As for the third – a restore command in a streaming-replication environment has to behave differently than a normal restore command – i.e. it should finish, with an error, as soon as it is called for a WAL file that does not exist in the WAL archive. That is the direct opposite of what should be done normally.
Now, omnipitr-restore has a switch (-sr) to make it work correctly in an SR situation.
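So a recovery.conf for a streaming-replication slave can look more or less like this (paths are examples, and the source-archive option is only illustrative – check the docs for the full option list):

standby_mode = 'on'
primary_conninfo = 'host=master.example.com user=replication'
# with -sr, omnipitr-restore fails immediately when a segment is missing from
# the archive, so PostgreSQL falls back to streaming instead of waiting
restore_command = '/opt/omnipitr/bin/omnipitr-restore -sr -s /var/lib/pgsql/walarchive %f %p'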
Links for svn/docs are listed on project page.
On 23rd of January, Magnus Hagander committed patch which adds:
Add pg_basebackup tool for streaming base backups
This tool makes it possible to do the pg_start_backup/
copy files/pg_stop_backup step in a single command.
There are still some steps to be done before this is a
complete backup solution, such as the ability to stream
the required WAL logs, but it's still usable, and
could do with some buildfarm coverage.
In passing, make the checkpoint request optionally
fast instead of hardcoding it.
Magnus Hagander, reviewed by Fujii Masao and Dimitri Fontaine
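Basic usage is as simple as this (host, user and target directory are of course examples; the master needs max_wal_senders > 0 and a replication entry in pg_hba.conf):

# take a base backup of the whole cluster over a replication connection,
# writing it into an empty directory; -P shows progress
pg_basebackup -h master.example.com -U replication -D /var/lib/pgsql/basebackup -P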
Continue reading Waiting for 9.1 – pg_basebackup
On 7th of January (I know, it was quite some time ago, I apologize for the delay) Itagaki Takahiro committed patch:
New system view pg_stat_replication displays activity of wal sender processes.
Itagaki Takahiro and Simon Riggs.
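On the master it is just a view, so checking the state of your standbys is a matter of:

-- one row per walsender process, i.e. per connected standby
SELECT * FROM pg_stat_replication;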
Continue reading Waiting for 9.1 – pg_stat_replication
Well, the biggest news is that hot backups on a slave work. And they work fine. Really fine.
Some more information (with a nice graph!):
Continue reading OmniPITR – hot backup on slave
There are several approaches to replication/failover – you might have heard of Slony, Londiste, pgPool and some other tools.
WAL Replication is different from all of them in one aspect – it doesn't let you query the slave database (until 9.0, in which you actually can run read-only queries on the slave).
Since you can't run queries on the slave, what is it good for? Well. It's good, and great, in one very important aspect – everything that happens in the database is replicated. Schema changes. Sequence modifications. Everything.
There is also a drawback – you can't (as of now) replicate just one database. You replicate the whole cluster (I don't like this word in this context – let's say: the whole installation) of PostgreSQL. All databases that reside in a given data directory.
So, the question is – how to set it up?
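The full post goes through it step by step, but stripped to the bare minimum it is something like this (host names and paths are examples):

# on the master (postgresql.conf) - ship every finished WAL segment to the slave
archive_mode = on
archive_command = 'rsync -a %p slave.example.com:/var/lib/pgsql/walarchive/%f'

# on the slave (recovery.conf) - keep replaying segments as they arrive
standby_mode = 'on'
restore_command = 'cp /var/lib/pgsql/walarchive/%f %p'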
Continue reading Setting WAL Replication
The BIG feature. The feature that made PostgreSQL leap from 8.4 to 9.0. The patch was written by Fujii Masao, and committed by Heikki Linnakangas on 15th of January 2010:
Introduce Streaming Replication.
This includes two new kinds of postmaster processes, walsenders and
walreceiver. Walreceiver is responsible for connecting to the primary server
and streaming WAL to disk, while walsender runs in the primary server and
streams WAL from disk to the client.
Documentation still needs work, but the basics are there. We will probably
pull the replication section to a new chapter later on, as well as the
sections describing file-based replication. But let's do that as a separate
patch, so that it's easier to see what has been added/changed. This patch
also adds a new section to the chapter about FE/BE protocol, documenting the
protocol used by walsender/walreceiver.
Bump catalog version because of two new functions,
pg_last_xlog_receive_location() and pg_last_xlog_replay_location(), for
monitoring the progress of replication.
Fujii Masao, with additional hacking by me
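The basic setup is surprisingly small; a sketch with made-up addresses (note that in 9.0 the role used for the replication connection has to be a superuser):

# postgresql.conf on the primary
wal_level = hot_standby      # 'archive' is enough if you don't need read-only queries
max_wal_senders = 3

# pg_hba.conf on the primary - let the standby connect to the special replication "database"
host  replication  postgres  192.168.0.2/32  trust

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.1 port=5432 user=postgres'

And the two new functions mentioned above let you check, on the standby, how far replication got:

-- how much WAL has been received from the primary, and how much has been replayed
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();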
Continue reading Waiting for 9.0 – Streaming replication
On 19th of December Simon Riggs committed a patch that will quite likely be the single most-talked-about change in PostgreSQL 8.5:
Allow read only connections during recovery, known as Hot Standby.
Enabled by recovery_connections = on (default) and forcing archive recovery
using a recovery.conf. Recovery processing now emulates the original
transactions as they are replayed, providing full locking and MVCC behaviour
for read only queries. Recovery must enter consistent state before
connections are allowed, so there is a delay, typically short, before
connections succeed. Replay of recovering transactions can conflict and in
some cases deadlock with queries during recovery; these result in query
cancellation after max_standby_delay seconds have expired. Infrastructure
changes have minor effects on normal running, though introduce four new
types of WAL record.
New test mode "make standbycheck" allows regression tests of
static command behaviour on a standby server while in recovery. Typical and
extreme dynamic behaviours have been checked via code inspection and manual
testing. Few port specific behaviours have been utilised, though primary
testing has been on Linux only so far.
This commit is the basic patch. Additional changes will follow in this
release to enhance some aspects of behaviour, notably improved handling of
conflicts, deadlock detection and query cancellation. Changes to VACUUM FULL
are also required.
Simon Riggs, with significant and lengthy review by Heikki Linnakangas,
including streamlined redesign of snapshot creation and two-phase commit.
Important contributions from Florian Pflug, Mark Kirkwood, Merlin Moncure,
Greg Stark, Gianni Ciolli, Gabriele Bartolini, Hannu Krosing, Robert Haas,
Tatsuo Ishii, Hiroyuki Yamada plus support and feedback from many other community members.
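For the impatient, the standby side boils down to something like this (parameter names are as of this commit – some of them were renamed or split before the final release, so treat it purely as a sketch):

# postgresql.conf on the standby
recovery_connections = on    # accept read-only connections while in recovery
max_standby_delay = 30       # cancel conflicting queries after this many seconds

# recovery.conf - forcing archive recovery is what puts the server into standby mode
restore_command = 'cp /var/lib/pgsql/walarchive/%f %p'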
Continue reading Waiting for 8.5 – Hot Standby