# Prepared statements gotcha

A friend from my previous employer told me that the execution plan of a prepared statement and of the same statement run “as is” are different.

Well, I checked, and this is what I found. It's not shocking; it's actually quite obvious, but you have to think about it for a while to “get it”.

Let's assume we have a pretty simple table:

```
CREATE TABLE users (
    id        BIGSERIAL,
    username  TEXT NOT NULL,
    is_active BOOL NOT NULL DEFAULT 'false',
    PRIMARY KEY (id)
);
CREATE UNIQUE INDEX idx_u ON users (username) WHERE is_active = TRUE;
```

Now, this index serves two purposes:

• username should be unique, but only for active users
• we usually search only for active users, so indexing all of them doesn't make sense
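A quick sketch of the partial-uniqueness part in action (the username `depesz` and the values are mine, just for illustration; the last insert is the one expected to fail):

```
-- Two INACTIVE users may share a username; idx_u ignores them:
INSERT INTO users (username, is_active) VALUES ('depesz', false);
INSERT INTO users (username, is_active) VALUES ('depesz', false);  -- OK

-- But only one ACTIVE 'depesz' is allowed:
INSERT INTO users (username, is_active) VALUES ('depesz', true);   -- OK
INSERT INTO users (username, is_active) VALUES ('depesz', true);
-- ERROR:  duplicate key value violates unique constraint "idx_u"
```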

Let's put some data in it:

```
CREATE OR REPLACE FUNCTION random_username() RETURNS TEXT AS $$
    return join '', map { ('a'..'z')[rand 26] } 1 .. (10 + rand 20);
$$ LANGUAGE plperl;
INSERT INTO users (username, is_active)
    SELECT random_username(), random() > 0.01
    FROM generate_series(1,500000) i;
```

The random_username() function generates a random user name: a string of 10 to 29 random lowercase letters.

I inserted half a million rows into my table, and this is how it looks:

```
SELECT
    COUNT(*) AS all_users,
    SUM(CASE WHEN is_active = TRUE THEN 1 ELSE 0 END) AS active_users,
    SUM(CASE WHEN is_active = TRUE THEN 0 ELSE 1 END) AS inactive_users
FROM users;
 all_users | active_users | inactive_users
-----------+--------------+----------------
    500000 |       494916 |           5084
(1 row)
```

OK. So let's get the first 50 active users, sorted by username:

```
# EXPLAIN ANALYZE SELECT * FROM users WHERE is_active = TRUE ORDER BY username ASC LIMIT 50;
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..3.44 rows=50 width=29) (actual time=0.081..1.011 rows=50 loops=1)
   ->  Index Scan using idx_u on users  (cost=0.00..34097.81 rows=495500 width=29) (actual time=0.076..0.821 rows=50 loops=1)
 Total runtime: 1.151 ms
(3 rows)
```

Pretty nice.

What happens if I do it via PREPARE?

```
# PREPARE test1 AS SELECT * FROM users WHERE is_active = TRUE ORDER BY username ASC LIMIT 50;
PREPARE
# EXPLAIN ANALYZE EXECUTE test1;
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..3.44 rows=50 width=29) (actual time=0.038..0.429 rows=50 loops=1)
   ->  Index Scan using idx_u on users  (cost=0.00..34097.81 rows=495500 width=29) (actual time=0.032..0.245 rows=50 loops=1)
 Total runtime: 0.581 ms
(3 rows)
```

So far so good. But what if I make the “true” a parameter of the plan?

```
# PREPARE test2 (bool) AS SELECT * FROM users WHERE is_active = $1 ORDER BY username ASC LIMIT 50;
PREPARE
# EXPLAIN ANALYZE EXECUTE test2(TRUE);
                                                         QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=18179.82..18179.95 rows=50 width=29) (actual time=1576.059..1576.244 rows=50 loops=1)
   ->  Sort  (cost=18179.82..18804.82 rows=250000 width=29) (actual time=1576.054..1576.124 rows=50 loops=1)
         Sort Key: username
         Sort Method:  top-N heapsort  Memory: 20kB
         ->  Seq Scan on users  (cost=0.00..9875.00 rows=250000 width=29) (actual time=0.047..751.791 rows=494916 loops=1)
               Filter: (is_active = $1)
 Total runtime: 1576.363 ms
(7 rows)
```

Whoa. That's not good.

```
# PREPARE test3 (int4) AS SELECT * FROM users WHERE is_active = TRUE ORDER BY username ASC LIMIT $1;
PREPARE
# EXPLAIN ANALYZE EXECUTE test3(50);
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..3409.78 rows=49550 width=29) (actual time=0.026..0.283 rows=50 loops=1)
   ->  Index Scan using idx_u on users  (cost=0.00..34097.81 rows=495500 width=29) (actual time=0.015..0.142 rows=50 loops=1)
 Total runtime: 0.382 ms
(3 rows)
```

This one is “safe".

Do you know why this happens? It's pretty simple: the prepared plan is generated without knowing what the value of the parameter will be.

So it can't use the partial index, as that wouldn't work if I ran it with “false” as the argument.
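You can see the constraint the planner is under by executing the same prepared statement with the other value. A plan built on idx_u could never return inactive rows, so the single cached plan has to be valid for any argument (I'm eliding the timing output here, as it depends on your data):

```
# EXPLAIN ANALYZE EXECUTE test2(FALSE);
-- idx_u covers only rows WHERE is_active = TRUE, so a plan for test2
-- must work without it: you get the same Seq Scan + Sort plan,
-- now filtering on is_active = false.
```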

Is it a bug? Not really. What can you do about it? Basically: think. Query planning is done once, and if the planner doesn't have all the necessary information, it will generate suboptimal plans. Very suboptimal in some cases 🙂

## 10 thoughts on “Prepared statements gotcha”

1. Bartek Jablonski says:

Going to the Java world – the JDBC driver for PostgreSQL since version 7.4 can use server-side prepared statements to speed up queries. I see that things are different – it's better to be careful with this feature.

2. Interesting. Am I the only one commenting here?

3. hmm… I’ve been thinking about it for the better part of the night, and I think this gotcha doesn’t apply only to partial indexes. It can apply to any case where knowing the value in advance might produce a different plan.

LIMIT as well – it was not shown in the example above, but I think it’s entirely possible for this issue to influence LIMIT plans too.

The same goes for standard indexes with imperfect value distribution.

Basically – after thinking about it, I have to admit that I wouldn’t suggest using prepared statements in most cases.

4. Szymon says:

What’s more, all this is nicely written in the PostgreSQL documentation at 38.10.2. Plan Caching.

5. Vincenzo Romano says:

Well, I had the very same problem in a number of cases, mostly involving the LIKE/ILIKE operator when one of the two arguments was a parameter.
Solution?
Embed the query in a PL/pgSQL function which creates a dynamic SQL query by expanding the parameter value, and then executes it.
By doing so you force the planner to do its job right before performing the query, when there is no unknown element in it.
It would be nice, though, to have a “lazy query planner” to avoid such “gotchas”.
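A minimal sketch of that workaround against the `users` table from the post (the function name and shape are mine, not from the comment; `RETURN QUERY EXECUTE` needs PostgreSQL 8.4 or later):

```
CREATE OR REPLACE FUNCTION get_active_users(p_active BOOL, p_limit INT)
RETURNS SETOF users AS $$
BEGIN
    -- The query text is rebuilt on every call, so the planner sees the
    -- actual values and can pick the partial index when p_active is true.
    RETURN QUERY EXECUTE
        'SELECT * FROM users WHERE is_active = ' || quote_literal(p_active)
        || ' ORDER BY username ASC LIMIT ' || p_limit;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM get_active_users(true, 50);
```

The price, of course, is re-planning the query on every call, which is exactly the trade-off the comment describes.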

6. Generally, for stateless web applications I recommend not using prepared statements. If at all, one should use client-side emulated prepared statements. Here are my thoughts in more detail:
http://pooteeweet.org/blog/1083