March 5th, 2014 by depesz

On 27th of February, Alvaro Herrera committed patch:

Allow BASE_BACKUP to be throttled
A new MAX_RATE option allows imposing a limit to the network transfer
rate from the server side.  This is useful to limit the stress that
taking a base backup has on the server.
pg_basebackup is now able to specify a value to the server, too.
Author: Antonin Houska
Patch reviewed by Stefan Radomski, Andres Freund, Zoltán Böszörményi,
Fujii Masao, and Álvaro Herrera.

Demonstrating the effect directly would be tricky, so let me just show an example command line.

Before I do that, though, a bit of background: pg_basebackup fetches a copy of the master's (or slave's) PGDATA directory (and possibly all tablespaces, if you use them), and saves it either as unpacked data or as tarballs.

While it runs, it can elevate load on the server, simply because it has to read all the data.

If taking the backup proves too expensive for your server, you can now make it slower, reducing the impact on I/O performance.

The option for this is --max-rate, so an example command line could look like:

=$ pg_basebackup --checkpoint=fast --pgdata=back2/ --host=master --max-rate=1000

The max-rate value is in kilobytes per second, and should be between 32 and 1048576 (32 kB/s and 1 GB/s). A value of 0 means that there is no limit.
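To get a feel for what a given limit means in practice, you can estimate the minimum backup duration as data size divided by rate. A quick back-of-the-envelope sketch (the 10 GB cluster size is just an example I made up):

```shell
# Rough estimate: minimum time for a throttled base backup.
# The cluster size below is a made-up example; substitute your own.
cluster_kb=$((10 * 1024 * 1024))   # 10 GB of data, expressed in kilobytes
rate_kb=1000                       # --max-rate=1000, i.e. 1000 kB/s
seconds=$((cluster_kb / rate_kb))
echo "at least $((seconds / 3600))h $(( (seconds % 3600) / 60 ))m"
# prints: at least 2h 54m
```

So with --max-rate=1000, a 10 GB backup will take close to three hours no matter how fast your disks and network are.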

To simplify command lines, you can also use values with unit suffixes, like:

  • k – as in 512k – redundant (the value is already in kilobytes), but helps with readability
  • M – as in 256M – the given value is in megabytes
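Converting the suffixed forms back to plain kilobyte values makes the shorthand obvious (the numbers here are just illustrative):

```shell
# Suffixes are shorthand for plain kilobyte values:
# --max-rate=512k is the same as --max-rate=512
# --max-rate=256M is the same as --max-rate=262144
kb_from_k=512
kb_from_M=$((256 * 1024))
echo "512k = ${kb_from_k} kB/s, 256M = ${kb_from_M} kB/s"
# prints: 512k = 512 kB/s, 256M = 262144 kB/s
```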

That's a very nice addition for those of us who, for whatever reason, need to take backups off the master, and can't afford to put too much pressure on its I/O.
