What is the point of bouncing?

Some of you might be familiar with the pgBouncer project. Some are not. Some understand what it does, how, and why; others do not.

This blog post is meant to be a place where I can point people who have questions about how it works, why, and when it makes sense to use it (pgBouncer, that is).

First, let's start with some general description:

pgBouncer is a connection pooler. It keeps a number of connections to the database (PostgreSQL, of course), and when a client (application) connects to it, it is given a ready-made connection to the server.

More or less a cache for connections.
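From the application's point of view, the only change is where it connects to. For example, assuming pgBouncer listens on port 6543 (the port I use in examples later in this post), instead of connecting to PostgreSQL on 5432, the app would simply do:

$ psql -h localhost -p 6543 -d some_db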

This of course raises the question: OK, but what is the point? Is establishing a connection so slow that you need to optimize it? Let's see.

We have to take into consideration at least two layers of things that can be optimized:

  • network traffic
  • overhead of starting new PostgreSQL backend, and authentication

Let's start with the first one – network.

Obviously – pgBouncer cannot do anything about network usage when it comes to running queries – it's not a cache of data/query-results. It will actually slow queries down a bit – because instead of sending the query straight to the server, you pass it to pgBouncer, which, in turn, sends it to the server.

Luckily, in all cases I've seen so far, the overhead pgBouncer adds to queries is negligible.

But what about connection establishing?

I did a test with two different authentication schemes: trust and md5. In both cases, I ran psql without any other commands, just “psql -h localhost -U …". During psql startup I had tcpdump running, which was killed immediately after I got the psql prompt – i.e. before any query was run.
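The procedure was more or less this (a sketch – port 5920 is where PostgreSQL listened in this test, as the captures below show):

=$ sudo tcpdump -i lo -nn port 5920 &
=$ psql -h localhost -p 5920 -U …
=$ kill %1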

Results? For trust:

14:41:27.603668 IP 127.0.0.1.50126 > 127.0.0.1.5920: Flags [S], seq 716127354, win 32792, options [mss 16396,sackOK,TS val 2052107 ecr 0,nop,wscale 7], length 0
14:41:27.603682 IP 127.0.0.1.5920 > 127.0.0.1.50126: Flags [S.], seq 212587636, ack 716127355, win 32768, options [mss 16396,sackOK,TS val 2052107 ecr 2052107,nop,wscale 7], length 0
14:41:27.603693 IP 127.0.0.1.50126 > 127.0.0.1.5920: Flags [.], ack 1, win 257, options [nop,nop,TS val 2052107 ecr 2052107], length 0
14:41:27.603797 IP 127.0.0.1.50126 > 127.0.0.1.5920: Flags [P.], seq 1:81, ack 1, win 257, options [nop,nop,TS val 2052107 ecr 2052107], length 80
14:41:27.603805 IP 127.0.0.1.5920 > 127.0.0.1.50126: Flags [.], ack 81, win 256, options [nop,nop,TS val 2052107 ecr 2052107], length 0
14:41:27.606109 IP 127.0.0.1.5920 > 127.0.0.1.50126: Flags [P.], seq 1:327, ack 81, win 256, options [nop,nop,TS val 2052108 ecr 2052107], length 326
14:41:27.606195 IP 127.0.0.1.50126 > 127.0.0.1.5920: Flags [.], ack 327, win 256, options [nop,nop,TS val 2052108 ecr 2052108], length 0

and with md5:

14:42:24.502813 IP 127.0.0.1.50143 > 127.0.0.1.5920: Flags [S], seq 1860178418, win 32792, options [mss 16396,sackOK,TS val 2066332 ecr 0,nop,wscale 7], length 0
14:42:24.502825 IP 127.0.0.1.5920 > 127.0.0.1.50143: Flags [S.], seq 2534748821, ack 1860178419, win 32768, options [mss 16396,sackOK,TS val 2066332 ecr 2066332,nop,wscale 7], length 0
14:42:24.502834 IP 127.0.0.1.50143 > 127.0.0.1.5920: Flags [.], ack 1, win 257, options [nop,nop,TS val 2066332 ecr 2066332], length 0
14:42:24.502944 IP 127.0.0.1.50143 > 127.0.0.1.5920: Flags [P.], seq 1:81, ack 1, win 257, options [nop,nop,TS val 2066332 ecr 2066332], length 80
14:42:24.502950 IP 127.0.0.1.5920 > 127.0.0.1.50143: Flags [.], ack 81, win 256, options [nop,nop,TS val 2066332 ecr 2066332], length 0
14:42:24.504242 IP 127.0.0.1.5920 > 127.0.0.1.50143: Flags [P.], seq 1:14, ack 81, win 256, options [nop,nop,TS val 2066332 ecr 2066332], length 13
14:42:24.504296 IP 127.0.0.1.50143 > 127.0.0.1.5920: Flags [.], ack 14, win 257, options [nop,nop,TS val 2066332 ecr 2066332], length 0
14:42:24.504395 IP 127.0.0.1.50143 > 127.0.0.1.5920: Flags [P.], seq 81:122, ack 14, win 257, options [nop,nop,TS val 2066332 ecr 2066332], length 41
14:42:24.505548 IP 127.0.0.1.5920 > 127.0.0.1.50143: Flags [P.], seq 14:340, ack 122, win 256, options [nop,nop,TS val 2066332 ecr 2066332], length 326
14:42:24.541972 IP 127.0.0.1.50143 > 127.0.0.1.5920: Flags [.], ack 340, win 256, options [nop,nop,TS val 2066342 ecr 2066332], length 0

Given that it's not all on a “send packet, get reply" basis, we are talking about 5 (trust) and 7 (md5) one-way trips between the application (psql in my example) and the server (Pg).

I checked that in my test environment, ping to localhost shows an average round-trip time of 0.018ms, and ping to a server connected to the same switch, over a 1000Mb/s, full-duplex ethernet link – 0.213ms.

This means that connecting to localhost with “trust" takes on average 0.045ms and with md5 authorization – 0.063ms. Over the network, it takes 0.532ms (trust) and 0.745ms (md5).
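The arithmetic behind these numbers is simply “number of one-way trips times half of the ping round-trip time":

trust, localhost: 5 * (0.018ms / 2) = 0.045ms
md5,   localhost: 7 * (0.018ms / 2) = 0.063ms
trust, network:   5 * (0.213ms / 2) ≈ 0.532ms
md5,   network:   7 * (0.213ms / 2) ≈ 0.745ms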

This might not sound like much, but given that fetching a single row from a relatively wide table (width, as shown by explain: 750 bytes) using a primary key takes 0.03ms, we can suddenly see connection overhead that is more than 20 times the cost of actually fetching the data.

So, in a fairly normal situation, where you have N application servers connecting to a PostgreSQL database, you can shave ~0.7ms off connection time by adding a pgBouncer local to the application servers.

That looks great, doesn't it?

But there is also another benefit. pgBouncer caches connections. If your application stops using its connection to the server (for example, by disconnecting from pgBouncer) – the connection between pgBouncer and PostgreSQL is not closed. Instead, it is put back in a pool, to be reused for the next connection from the application.

This means that in the perfect situation (which is actually the most common one) we also get rid of the overhead of starting a new PostgreSQL backend – because, as far as PostgreSQL is concerned, it just keeps a connection to a single client (pgBouncer).

So, how big is this overhead?

To measure it, I wrote a simple Perl script which connects, in a loop, to the database, using a unix socket (to limit all possible overheads). Let's see how well it goes:

#!/usr/bin/env perl
use strict;
use warnings;
use DBI;
use Time::HiRes qw( time );
use List::Util qw( min max sum );

print get_dbh()->selectall_arrayref('SELECT version()')->[0]->[0], "\n";
my @times = ();
for (1..10) {
    my $start_time = time();
    my $dbh = get_dbh();    # connection is closed when $dbh goes out of scope
    my $end_time = time();
    push @times, $end_time - $start_time;
}

# times are in seconds; multiply by 1000 to get milliseconds
printf "min/avg/max: %.6f/%.6f/%.6f (ms)\n",
    min(@times) * 1000,
    1000 * sum(@times) / @times,
    max(@times) * 1000;

exit;

sub get_dbh {
    # note: "return DBI->connect(...) or die" would never die, because "or"
    # binds looser than return – so connect first, then return the handle
    my $dbh = DBI->connect('dbi:Pg:dbname=depesz') or die "Oops\n";
    return $dbh;
}

When I ran it, I got:

PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-44), 64-bit
min/avg/max: 2.194166/2.564168/2.914906 (ms)

↓↓↓ HUGE CORRECTION ↓↓↓

Previously I wrote that it took 4 millionths of a millisecond. I was way off, due to an arithmetic error that was spotted by esh. I was clearly wrong; the result is fixed.

Clearly the time to start a new connection is not irrelevant – it's actually pretty high (and the test was made on a machine that has virtually no traffic, so in normal circumstances the time will be even higher).

↑↑↑ HUGE CORRECTION ↑↑↑

From all I wrote above, it follows that thanks to pgBouncer we can get a new connection to the database in ~0.7ms less time than by going directly to PostgreSQL. But one can say: sure, but I can use persistent connections, and then I will virtually never need to create new connections.

That's correct. And if you're in such a situation, you do not benefit from shorter connection times via pgBouncer.

But this usually leads to a very interesting situation, which a lot of you are probably familiar with.

You have a PostgreSQL instance which has, say, 500 connections, but only a handful of them actually do something. The rest are just IDLE. Not “IDLE in transaction". Just plain, old, boring IDLE.

Why is it so? Because usually applications do not run queries in the database all the time. Even if it looks that way to you, in reality a lot of time is spent on non-db-related stuff: getting requests from clients, generating output, doing all kinds of logic in app code. Database queries, while important, are not happening all the time, from all the app servers at once.

Result looks like this:

postgres=# SELECT CASE WHEN current_query = '<IDLE>' THEN 'idle' ELSE 'non-idle' END AS state, COUNT(*), SUM(clock_timestamp() - query_start) FROM pg_stat_activity GROUP BY 1;
  state   | COUNT |            SUM
----------+-------+----------------------------
 idle     |   647 | 106 days 2298:57:58.271602
 non-idle |     5 | 00:00:05.509914
(2 ROWS)

In this particular database (the main production DB for an online retailer) I had, at the time of checking, 5 working backends with a total query duration of 5 seconds, and 647 idle backends with a total idle time of over 106 days.

What's more – all those IDLE backends use memory. Of course not as much as ps shows, but still. In my case, across these 600+ backends, there is almost 2GB of RAM that would be freed if I could close all those idling, useless processes.

And this is precisely why pgBouncer is great: it can do just that. How?

Normally, the flow of managing connections is like this:

  • client connects to pgBouncer, thus establishing a client connection
  • pgBouncer gets a free connection from the pool, or makes a new one, and assigns this server connection to the client connection
  • as long as the client is connected – its connection to pgBouncer (client connection) is paired with the same connection between pgBouncer and PostgreSQL (server connection)
  • when the client disconnects – its client connection is of course closed, but the server connection is instead put back into the pool

So far so good. Plain and simple.

But in pgbouncer.ini you can find one special option: “pool_mode". It can have three possible values:

  • session
  • transaction
  • statement

Session pooling is the default, and it is exactly what I just described. The other two are much smarter.

Transaction pooling means that the pairing between a client connection and a server connection is not constant. A server connection is assigned to a client connection whenever the client starts a new transaction, and is released back to the pool not when the client disconnects, but rather when it finishes the transaction (using commit or rollback, doesn't matter which).

Immediately this screams: GREAT! Thanks to this – all those idle connections would go away. Right? Well, yes and no. Before I go into more detail, let's mention statement pooling. As you can figure out from the transaction pooling description, and the name – in statement pooling the server connection is returned to the pool immediately after any statement – thus effectively requiring autocommit.

If you're not a seasoned PostgreSQL user, you might say: but I don't usually use transactions – I just fire statements without any BEGIN/COMMIT, because those are slow.

If that's you – don't worry. Or rather – worry a lot. Every query in PostgreSQL, if not surrounded by an explicit BEGIN;/COMMIT;, is still in a transaction. Just a very small/short one – lasting just for the duration of the statement. Transactions are always there.
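You can easily see this for yourself – txid_current() returns the id of the transaction it runs in, so calling it twice, without any BEGIN/COMMIT, shows that each statement got its own transaction:

$ SELECT txid_current();
$ SELECT txid_current();
-- two different values – each statement ran in its own, implicit, transaction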

So. Transaction pooling to the rescue. You can enable it in pgbouncer.ini, and then you'll basically get rid of most of the idles, right? Yes. Sure.
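The relevant part of pgbouncer.ini is just:

[pgbouncer]
pool_mode = transaction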

On one of the servers that we were working on, we were able to go from:

  • daily average of 500 concurrent connections
  • peak connections in a day – ~ 2000

to:

  • daily average: 40
  • peak: 60

Mind you – there are still more idle connections than non-idle:

$ SELECT CASE WHEN current_query = '<IDLE>' THEN 'idle' ELSE 'non-idle' END AS state, COUNT(*), SUM(clock_timestamp() - query_start) FROM pg_stat_activity GROUP BY 1;
  state   | COUNT |       SUM
----------+-------+-----------------
 idle     |    20 | 00:12:23.771483
 non-idle |     6 | 00:00:00.178652
(2 ROWS)

But that's because pgBouncer has to keep some connections in the pool.

And trust me – this is a very busy database server. It's the main database for one very large platform on the internet, which even now (4pm in Poland, 10am on the east coast of the USA, on a Sunday) is doing ~3000 transactions per second.

The benefits of decreasing the number of backends are twofold. Of course you get more free memory, which can be used as disk cache by the kernel, but it also decreases the overhead of interprocess communication within PostgreSQL itself. With newer Pgs it's much better, but I've seen cases on earlier Pgs where IPC between 1000 backends was significantly slowing everything down.

Is it a silver bullet then? Something that can be used everywhere and always? Unfortunately no. Despite the enormous benefits of transaction pooling (I am not discussing statement pooling because I fail to see when it would be beneficial), it is actually relatively problematic to use.

The reason is very simple: there is no guarantee that two subsequent transactions (or even queries, if you're using autocommit mode) will be run on the same backend.

This means that if you are using non-transactional data across transactions, you might see failures. And you will see them.

But who uses non-transactional data? Well, virtually everybody.

For starters: sequences. Cached query plans. Advisory locks. %_SHARED if you're using PL/Perl functions.

Does that mean you can't use any of them? NO. You can. But only if you keep the state within a single transaction.

For example, let's talk about sequences.

If you're using them like:

$ BEGIN;
$ INSERT INTO z (x) VALUES (1);
$ INSERT INTO z2 (z_id) VALUES ( currval('z_id_seq') );
$ commit;

That is – you're getting currval of the sequence that you just updated, and you do it within the same transaction – you're fine.

But if you have something like:

$ BEGIN;
$ INSERT INTO z (x) VALUES (1);
$ commit;
$ BEGIN;
$ INSERT INTO z2 (z_id) VALUES ( currval('z_id_seq') );
$ commit;

You will get data corruption. The second transaction will, inevitably, at some point run on a different backend than the first one, and you will either get an error about an uninitialized sequence, or a wrong currval().

Same thing happens if you just run those two inserts without begin/commit blocks, using autocommit.

Of course, for every problem there is a solution. In the case of sequences – either use them within a single transaction, or use “INSERT INTO … RETURNING id" and reuse this id in your application without calling currval(), or just run “select nextval('z_id_seq');" first, and then use the returned value in all subsequent queries. That's simple.
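For illustration, the RETURNING variant of the earlier example could look like this (a sketch – assuming z.id is the column fed from z_id_seq):

$ INSERT INTO z (x) VALUES (1) RETURNING id;
-- remember the returned id in the application, and use it explicitly:
$ INSERT INTO z2 (z_id) VALUES ( ... );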

Things get more complicated when we consider prepared statements.

Unfortunately (that's my opinion, based on a couple of situations) these are becoming the default in some database drivers – for example in DBD::Pg in Perl, or PDO in PHP.

If your application is using prepared statements (this actually has nothing to do with calls like ->prepare_statement() in your application), you will see them in Pg logs as “parse some_identifier: SELECT …", “bind some_identifier: …" or “execute some_identifier: …".

Depending on your language/driver there are different ways to handle it. In Perl, I just use the “pg_server_prepare => 0" option when creating new database connections, and curse every time I think about the fact that someone actually took the effort to make this behaviour the DEFAULT.
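In DBD::Pg it looks more or less like this:

my $dbh = DBI->connect(
    'dbi:Pg:dbname=depesz', undef, undef,
    { pg_server_prepare => 0 },    # send plain SQL, don't use server-side prepares
);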

In PDO, the solution is to enable so-called emulated prepares, by passing “PDO::ATTR_EMULATE_PREPARES => TRUE" as the 4th argument to new PDO().

In other languages – I'm not sure. I had to work with these two, and solved the problem there.

And what if you really, really want to use prepared statements? And, by the way, what is the problem with them?

The problem is that a prepared statement stays in the backend even outside of a transaction. So you can have something like:

$ PREPARE ins(int4, int4, int4, int4, int4) AS INSERT INTO i (j) VALUES ($1),($2),($3),($4),($5);
 
$ BEGIN;
$ EXECUTE ins(1,2,3,4,5);
$ commit;
 
$ BEGIN;
$ EXECUTE ins(6,7,8,9,10);
$ commit;
 
$ BEGIN;
$ EXECUTE ins(11,12,13,14,15);
$ commit;

And when you try it with transaction pooling mode in pgBouncer, you'll sooner or later get:

ERROR:  prepared statement "ins" does NOT exist

or

ERROR:  prepared statement "ins" already EXISTS

This is because you will either try to execute the statement before preparing it on the given backend (the preparation happened in a different transaction, and ended up in a different process), or you will try to prepare a statement that is already prepared on the given backend, due to work done by another client.

Luckily pgBouncer has the concept of a “reset query" (the server_reset_query setting) – a query that is run on the backend when a server connection is returned to the pool. It usually clears all modified settings, and can remove prepared statements.
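For example (DISCARD ALL exists since PostgreSQL 8.3, and drops prepared statements, temporary tables, session settings and so on):

[pgbouncer]
server_reset_query = DISCARD ALL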

You can also consider using pre_prepare. It's a tool that you can use to make sure that all backends started by pgBouncer will have some predefined set of prepared statements ready for your use.

Personally – I just remove their usage from the application.

And what about the rare cases where the application would actually benefit from prepared statements? In such a case – go ahead, use them. But weigh all the benefits and drawbacks carefully. And consider that for this single use case, it might be better to connect directly to Pg.

Now we know what pgBouncer does, and how. So let's consider the best way to set it up, and how to monitor it.

The first question is of course: should I have one, centralized pgBouncer, or should each application server have its own? The answer is: why not both?

Seriously – consider the following situation:

  • the application uses persistent connections to connect to a local pgBouncer
  • the pgBouncer on the app server connects to a centralized pgBouncer
  • the centralized pgBouncer connects to the database

Benefits:

  • thanks to persistent connections, the overhead of starting new connections is kept to a minimum
  • having a local pgBouncer means that the application generally never has to wait to get a db connection
  • all pgBouncers from all app servers connect through the centralized, “idle-removing" pgBouncer

Let's say that in our case we have 10 web servers, each running 30 processes which generate webpages. Each of these processes can need a database connection. In total we might need 300 connections.

The first pgBouncer (on the webserver) limits the number of outgoing connections from 30 to perhaps 4-5. But this still means the database would be hit with 40-50 connections (4-5 from each webserver). So the centralized pgBouncer removes some more idles, by “juggling" server connections between connections coming from different webservers.

Thanks to such a setup, we would most likely be fine running 10-15 connections to PostgreSQL.
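Config-wise it could look more or less like this (hosts and ports below are purely hypothetical – the important parts are the small pool on each webserver, and the bigger shared one in the middle):

; on each webserver – local pgBouncer, pointing at the central pgBouncer
[databases]
some_db = host=10.0.0.5 port=6543 dbname=some_db

[pgbouncer]
pool_mode = transaction
default_pool_size = 5

; on the central box – pgBouncer pointing at the real PostgreSQL
[databases]
some_db = host=10.0.0.6 port=5432 dbname=some_db

[pgbouncer]
pool_mode = transaction
default_pool_size = 15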

Once you have your pgBouncer in place, sooner or later you will run into a problem: I have a connection to the database, and I need to find where it came from, because it's doing something wrong.

But when you have a (centralized) pgBouncer – all the connections come from it. So how can you find where the connection actually started? Luckily it's not that complicated, just a bit tedious. But – you can never use UNIX socket connections – always TCP. At least on Linux.

Apparently, on Linux, there is no way to tell where a given UNIX-socket connection originated. I asked about this on ServerFault, and the only working solution, on Linux, involved a custom kernel build and running gdb (a debugger) on /proc/kcore (basically the whole memory of the system). Definitely not user friendly.

Because of this problem, I generally don't use UNIX sockets, and treat them suspiciously.

Anyway – let's consider the following example. I have this connection:

$ SELECT * FROM pg_stat_activity WHERE procpid = 5225;
-[ RECORD 1 ]----+------------------------------
datid            | 16416
datname          | some_db
procpid          | 5225
usesysid         | 136689798
usename          | depesz
application_name |
client_addr      | 127.0.0.1
client_hostname  |
client_port      | 36740
backend_start    | 2012-12-02 16:03:17.214369+00
xact_start       | 2012-12-02 16:03:25.775003+00
query_start      | 2012-12-02 16:03:25.775003+00
waiting          | f
current_query    | SELECT pg_sleep(123124);

As you can see, it's running a long pg_sleep, and I want to find what application/user started it. The connection comes from 127.0.0.1, and the port number leads to:

=$ sudo netstat -ntp | grep 36740
tcp        0      0 127.0.0.1:36740             127.0.0.1:5432              ESTABLISHED 23028/pgbouncer
tcp        0      0 127.0.0.1:5432              127.0.0.1:36740             ESTABLISHED 5225/postgres: depe

So, it's pgBouncer. Now I need to connect to pgBouncer's special admin database. This is done by connecting to the pgBouncer port, and using the “pgbouncer" database:

$ psql -U postgres -h localhost -p 6543 -d pgbouncer

Within this database I can run the SHOW SOCKETS command. The full output is very long, so let's just look at the row that represents my connection:

 TYPE |   USER    | DATABASE  |   state   |      addr      | port  |  local_addr  | local_port |    connect_time     |    request_time     |      ptr       |      link      | recv_pos | pkt_pos | pkt_remain | send_pos | send_remain | pkt_avail | send_avail
------+-----------+-----------+-----------+----------------+-------+--------------+------------+---------------------+---------------------+----------------+----------------+----------+---------+------------+----------+-------------+-----------+------------
 S    | depesz    | some_db   | sv_active | 127.0.0.1      |  6666 | 127.0.0.1    |      36740 | 2012-12-02 16:03:17 | 2012-12-02 16:03:17 | 0x5012530      | 0x2ab9b676e1a0 |        0 |       0 |          0 |        0 |           0 |         0 |          0

The first column shows that it's a server connection. Then there are a number of columns which are more or less irrelevant for now. But “ptr" and “link" are important.

Thanks to these, I can find the client connection:

 TYPE |   USER    | DATABASE  |   state   |      addr      | port  |  local_addr  | local_port |    connect_time     |    request_time     |      ptr       |      link      | recv_pos | pkt_pos | pkt_remain | send_pos | send_remain | pkt_avail | send_avail
------+-----------+-----------+-----------+----------------+-------+--------------+------------+---------------------+---------------------+----------------+----------------+----------+---------+------------+----------+-------------+-----------+------------
 C    | depesz    | some_db   | cl_active | 192.168.1.19   | 34368 | 192.168.1.10 |       6543 | 2012-12-02 16:03:17 | 2012-12-02 16:03:25 | 0x2ab9b676e1a0 | 0x5012530      |        0 |       0 |          0 |        0 |           0 |         0 |          0
 S    | depesz    | some_db   | sv_active | 127.0.0.1      |  6666 | 127.0.0.1    |      36740 | 2012-12-02 16:03:17 | 2012-12-02 16:03:17 | 0x5012530      | 0x2ab9b676e1a0 |        0 |       0 |          0 |        0 |           0 |         0 |          0

Please note that “ptr" for the C connection is the same as “link" for the S connection, and “link" for the C connection is the same as “ptr" for the S connection.

In the C connection (client connection) row I can see that the client connects from the machine with IP 192.168.1.19, and that on this machine it uses TCP port 34368. With this info, I can ssh to the remote machine and check with netstat who/what it is.
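On 192.168.1.19 it would be something along these lines (process name and pid are obviously made up):

=$ sudo netstat -ntp | grep 34368
tcp        0      0 192.168.1.19:34368          192.168.1.10:6543           ESTABLISHED 7214/some_app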

Unfortunately pgBouncer doesn't let you filter the data – SHOW SOCKETS (and the other SHOW … commands) just dumps the data to your app, and you have to parse/filter it yourself. Doable, though it could be improved.

One last thing I want to write about. Some of my readers will want to ask: what about pgPool? After all, it's even got “pool" in its name, so it's the pooling solution. Right?

Well, first of all – I have very little pgPool experience, and it's from at least a couple of years ago. I tested it back then, and saw that:

  • for every client connection, pgPool created a new process (pgBouncer uses a single process and async I/O)
  • it worked only in session pooling mode, which I am not a fan of – I prefer persistent connections over session pooling

These facts made it unjustifiable (for me) to use pgPool as a pooling solution. But – that was years ago, and it could have changed. On the other hand – pgPool seems to be going a different route – bringing statement-level replication, auto-load-balancing and possibly failover to the table – so it's not really an apples-to-apples comparison.

I hope the above post will help someone. If anything is unclear – I will appreciate comments, and will try to answer all questions.

40 thoughts on “What is the point of bouncing?”

  1. I was following along until I got to this:

    “Application uses persistent connections, to connect to local pgBouncer”

    I thought the point of pgBouncer was not having persistent connections. I am obviously missing something?

  2. @Adrian:
    Persistent connections are bad when you’re connecting directly to PostgreSQL. Persistent connections to pgBouncer are the best thing you can imagine, assuming you can run in transaction pooling mode.

  3. Went back over the post and I realized I stepped over the part where the local pgBouncer throttles the 30 persistent connections in to 4-5 out. Understand now.

  4. Can I assume a case where pgBouncer is NOT appropriate is when you actually want to use real database logins rather than the common web framework habit of always assuming the same login (e.g. admin)?

  5. Great article.

But there is one mistake – “statement pooling” does not switch connections in mid-transaction. Instead, as soon as it detects an open transaction, it kills the client with a “long transactions not allowed” error message, thus also rolling back the transaction.

Its goal is to enforce autocommit access to the database in situations where all clients are required to use autocommit statements. Which usually means that all database access must go through functions, instead of directly manipulating tables.

    Btw, when going from multi-statement transaction to autocommit function-calls, you get another pool size decrease, although not as dramatic as you get when going from session pooling to transaction pooling. And server-side functions also solve the “i want prepared plans” problem by transparently caching all plans. So it’s good idea on several fronts.

  6. @RWalker: if you mean – each connection on its own user (more or less) – yes. pgBouncer is not good idea then.

    @Marko:
    Thanks for clarification.

  7. My previous post was about sentence: “Even if it was inside a transaction.” and the text around it. Could you fix it?

    In the other news, PgBouncer 1.6 will have backend_pid in SHOW SERVERS/CLIENTS/SOCKETS output, which should help to pair connections even if they are made through unix socket.

    (Well, you could do the pairing even on older pgbouncers, but it will hurt you, so don’t do it…)

  8. I think you have a missing zero in this sentence:

    This means that connecting to localhost with “trust” takes on average 0.045ms and with md5 authorization – 0.63ms.

    I think that last number should be 0.063ms, i.e. (0.018ms / 2) * 7.

  9. @Bruce:
    Of course – thanks for the comment. And congratulations on perfect eyesight 🙂

  10. Maybe Marko could answer this:

    For a psycopg2 user, which provides pooling directly in the Python client (class ThreadedConnectionPool), is there some benefit to using pgBouncer?

  11. @Joe:
It doesn’t take the author to answer.

    Whether it makes sense depends on many things, but I generally would say: of course it does. Reason: pooled connections are (usually) bound to a single process (which can have multiple threads, though). And always – to a single machine.
    pgBouncer is not tied to the client, so it can, and does, handle hundreds of client connections from multiple processes, across multiple servers.

Whether you will benefit from pgBouncer (and by benefiting from pgBouncer I mean from using transaction pooling, because usually I don’t see session pooling as an improvement) you can check for yourself – see how many idle connections you have vs. how many are doing anything.

    pgBouncer, with transaction pooling enabled, cuts back on the number of IDLE backends. So if you have them (preferably many, but “many” is subjective) – consider using pgBouncer.

  12. “You can also consider using pre_prepare. It’s a tool that you can use to make sure that all backends started by pgBouncer will have some predefined set of prepared statements ready for your use.”

for me, it would be simpler to use

    connect_query = ‘select x_prepare_my_plans()’

    http://pgbouncer.projects.pgfoundry.org/doc/config.html#_pool_configuration

    instead of using any outer extension.

    you also could make there temp tables, set some constants or do any other initial work.

    and you can do it in different way for each one of pools by executing different functions or putting different input for function

  13. > printf “min/avg/max: %.6f/%.6f/%.6f (ms)\n”, min(@times)/1000, sum(@times)/10000, max(@times)/1000;

    To get milliseconds, I think, you must multiply here, not divide.

  14. @Misha:
    Sure, but then you need to write the function (x_prepare_my_plans()). And this is basically what pre_prepare is.

  15. @esh:
    Oh my god. So stupid mistake. I am so sorry. Fixed, of course, and huge thanks for spotting it.

  16. Could you help me to make sure this thing, I saw this comments link
    http://comments.gmane.org/gmane.comp.db.postgresql.pgbouncer.general/938

    “Session pooling of connections” is a 1:1 connection pooling to postgres-Server instance, right?

    If I set pgbouncer.ini :
    max_client_conn = 10000
    default_pool_size = 950

    And if I have ~2k Apache connections (client connection), will I have ~2k postgres-Server connection (server connection) or only 950 server connection?

    Thanks for your help.

Do you think transaction pooling would be much help in this case?
    Thanks a lot.

  18. Hi there Depesz, great post!

    One question I could not figure out from your post:
If I have an app server that keeps several “idle in transaction” connections just doing nothing 99% of the time, how would I use PgBouncer mainly to reduce the number of connections?

    Thanks.

  19. Thanks for your answer. The problem is this is a commercial app so we cannot change it… I’m trying to work around it.

    I think they do this on purpose to somehow have some “peace of mind” and avoid schema changes by establishing some locks… not sure though. It seems all software from this company behaves like this, even desktop apps.

    In this scenario is there any hope for reducing connections?

    Thanks again.

@Duarte: if you don’t know the sources, it’s virtually impossible to tell. Reduction of connections is possible only in transaction pooling (or statement pooling). But doing this without analyzing the source code is rather dangerous, as many drivers use, by default, techniques that, as described in the blogpost, are not compatible with transaction pooling.

  21. I’m getting to that disapointing conclusion… thanks for your input!

I am late in reading this very informative article. I am facing a problem and I think you can suggest something from your experience. I am running pgBouncer in transaction pooling mode, but whenever there is an exception raised by the web server or the server side code that leaves a db connection unclean, it forces a restart of the web server process. This is a very bad issue, as it has happened a few times in the middle of the night when no one was around.

How do I resolve such issues? Is there any automated way to make the connection clean, or to restart all connections? And what is the ideal number of pool connections for a db with 50 requests/second in transaction pooling mode?

  23. @Dev:
No idea what you mean by “unclean”. pgbouncer, just like pg, either has a connection or doesn’t. It can break (in case of network problems). If it’s alive, then there can be a transaction, or not, and if in a transaction – you can be in an “errored-out” transaction.

    Nothing from this is “unclean”.

    From what I understand you’re saying about something in the app layer – sorry, can’t help with these. The simplest thing you can do is just to disconnect and reconnect. But there could be other ways depending on what the “unclean” really means.

  24. Hi DEPESZ, I recently installed pgBouncer version 1.5.4 (on Postgres DB version 9.3.4) and immediately figured out that Prepared Statements (PS) are not supported by pgBouncer in Transaction pooling mode.
    That leaves us with the only option of using pgBouncer in Session pooling mode which may not be as efficient as Transaction pooling.

    Does anyone know a work around or a patch/fix for using pgBouncer in Transaction pooling mode with application that utilizes Prepared Statements (PS)

    Appreciate your help!

    Thanks,
    Bhanu Gandikota
    Mobile: (415) 420-7740

  25. @Bhanu:
    Yes, I wrote about prepared statements. In my opinion the best way to handle it is not to use them. These are rarely what you think they are, and in cases where you know what they are, and are using it purposely – well, then transaction pooling is no longer a problem.

Does anyone know when pgBouncer 1.6 is going to be available? There is not much information available on 1.6 availability. I was told that 1.6 has a better way to easily identify where exactly the client connections are coming from, without going through a lot of hoops!

    Thank you!

It is sad, but it is almost useless when hosting hundreds of databases with a small number of concurrent connections on a single cluster – aka a multi-tenant deployment.
    First time I went through the docs I was sure it could share connections between DBs, just doing the auth stuff behind the scenes.

  28. Depesz: would pgBouncer help at all for a system running 100s of long-running transactions? I have a project where up to 600 processes, each with a connection to the db, may run concurrently. However, these jobs all start a transaction upon db handle creation (which in turns happens at the start of the job) and commits just before job termination. The transaction_isolation is set to ‘read committed’ in case it matters.

  29. @Lance:

Not really – if you have 600 concurrently open transactions, then you need 600 connections.

    The real question there is, though: do you really need them? I understand that it’s “how it works”, but are these 600 transactions actually doing stuff in pg all the time?

  30. @Depesz: well no, there’s certainly a lot of “idle in transaction” time. But we do want the db changes that happen near the beginning of a job’s life to be committed along with the changes made near the end, in a transaction. I think… maybe I need to rethink whether this is absolutely critical.

  31. @Lance:

    idle in transaction connections are one of the worst things you can have. These mean that you have a connection that is not doing any active work (so it just sits on some resources), but it keeps all the locks it obtained earlier for however long the transaction will keep on going.

This makes certain things impossible (altering tables touched by the transactions), and certain things much less effective (vacuuming of any table in the database).

  32. Thanks @Depesz, I’m reading your “Hunting Idle in Transactions” post now. Hopefully I’ll find some way to trim them down a bit. Thanks for the excellent posts and your personal replies… a rare gem!
