Waiting for 8.4 – Common Table Expressions (WITH queries)

On the 4th of September Tom Lane committed another great patch. This one is very large, and even after applying it still has some rough edges. Additional patches will be needed to make the functionality fully robust, but the fact that it got committed means that it will be available in the final 8.4.

What does it do?

First, let's check what was written in the commit log:

Implement SQL-standard WITH clauses, including WITH RECURSIVE.
 
There are some unimplemented aspects: recursive queries must use UNION ALL
(should allow UNION too), and we don't have SEARCH or CYCLE clauses.
These might or might not get done for 8.4, but even without them it's a
pretty useful feature.
 
There are also a couple of small loose ends and definitional quibbles,
which I'll send a memo about to pgsql-hackers shortly.  But let's land
the patch now so we can get on with other development.
 
Yoshiyuki Asaba, with lots of help from Tatsuo Ishii and Tom Lane

The description looks interesting, and it relates to the SQL standard, but what is it really?

An example straight from the new docs:

WITH regional_sales AS (
        SELECT region, SUM(amount) AS total_sales
        FROM orders
        GROUP BY region
     ), top_regions AS (
        SELECT region
        FROM regional_sales
        WHERE total_sales > (SELECT SUM(total_sales)/10 FROM
regional_sales)
     )
SELECT region,
       product,
       SUM(quantity) AS product_units,
       SUM(amount) AS product_sales
FROM orders
WHERE region IN (SELECT region FROM top_regions)
GROUP BY region, product;

Looking at it, you will see that “WITH” queries are basically queries with inline views, or rather temporary tables – ones that exist only for the duration of the query.
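A minimal, self-contained example (not from the docs – just an illustration of mine) shows the idea: the “nums” part below behaves like a temporary table that disappears as soon as the query finishes:

```sql
-- "nums" exists only while this single query runs
WITH nums AS (
    SELECT generate_series(1, 5) AS n
)
SELECT n, n * n AS square FROM nums;
```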

What does it give us?

Let's test:

# CREATE TABLE orders (region int4, product int4, quantity int4, amount int4);
CREATE TABLE
 
# INSERT INTO orders (region, product, quantity, amount)
    SELECT random() * 1000, random() * 5000, 1 + random() * 20, 1 + random() * 1000
        FROM generate_series(1,10000);
INSERT 0 10000

So first, let's try the docs query (or actually, a slightly modified version of it):

# EXPLAIN analyze
>> WITH regional_sales AS (
>>         SELECT region, SUM(amount) AS total_sales
>>         FROM orders
>>         GROUP BY region
>>      ), top_regions AS (
>>         SELECT region
>>         FROM regional_sales
>>         WHERE total_sales > (SELECT 2 * avg(total_sales) FROM regional_sales)
>>      )
>> SELECT region,
>>        product,
>>        SUM(quantity) AS product_units,
>>        SUM(amount) AS product_sales
>> FROM orders
>> WHERE region IN (SELECT region FROM top_regions)
>> GROUP BY region, product;
                                                             QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=509.55..539.52 rows=1998 width=16) (actual time=71.882..72.254 rows=221 loops=1)
   InitPlan
     ->  HashAggregate  (cost=205.00..217.51 rows=1001 width=8) (actual time=31.532..33.234 rows=1001 loops=1)
           ->  Seq Scan on orders  (cost=0.00..155.00 rows=10000 width=8) (actual time=0.005..13.888 rows=10000 loops=1)
     ->  CTE Scan on regional_sales  (cost=22.54..47.56 rows=334 width=4) (actual time=39.250..39.719 rows=12 loops=1)
           Filter: ((total_sales)::numeric > $1)
           InitPlan
             ->  Aggregate  (cost=22.52..22.54 rows=1 width=8) (actual time=7.666..7.667 rows=1 loops=1)
                   ->  CTE Scan on regional_sales  (cost=0.00..20.02 rows=1001 width=8) (actual time=0.002..4.786 rows=1001 loops=1)
   ->  Hash Join  (cost=12.02..224.50 rows=1998 width=16) (actual time=40.069..71.422 rows=221 loops=1)
         Hash Cond: (public.orders.region = top_regions.region)
         ->  Seq Scan on orders  (cost=0.00..155.00 rows=10000 width=16) (actual time=0.011..16.861 rows=10000 loops=1)
         ->  Hash  (cost=9.52..9.52 rows=200 width=4) (actual time=39.825..39.825 rows=12 loops=1)
               ->  HashAggregate  (cost=7.51..9.52 rows=200 width=4) (actual time=39.785..39.805 rows=12 loops=1)
                     ->  CTE Scan on top_regions  (cost=0.00..6.68 rows=334 width=4) (actual time=39.254..39.762 rows=12 loops=1)
 Total runtime: 72.674 ms
(16 rows)

As you can see, I changed the definition of “top_regions” – instead of requiring over 10% of total sales, I define a top region as one having over 2 times the average sales.

Now, let's try to write the same query without using “WITH":

# EXPLAIN analyze
>> SELECT region,
>>        product,
>>        SUM(quantity) AS product_units,
>>        SUM(amount) AS product_sales
>> FROM orders
>> WHERE region IN (
>>     SELECT region
>>     FROM orders
>>     GROUP BY region
>>     HAVING SUM(amount) > 2 * (
>>         SELECT avg(SUM) FROM (
>>             SELECT SUM(amount) FROM orders GROUP BY region
>>         ) AS x
>>     )
>> )
>> GROUP BY region, product;
                                                                   QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=710.04..740.01 rows=1998 width=16) (actual time=130.271..130.644 rows=221 loops=1)
   ->  Hash Join  (cost=477.58..690.06 rows=1998 width=16) (actual time=85.452..129.649 rows=221 loops=1)
         Hash Cond: (public.orders.region = public.orders.region)
         ->  Seq Scan on orders  (cost=0.00..155.00 rows=10000 width=16) (actual time=0.011..16.427 rows=10000 loops=1)
         ->  Hash  (cost=465.07..465.07 rows=1001 width=4) (actual time=85.157..85.157 rows=12 loops=1)
               ->  HashAggregate  (cost=435.04..455.06 rows=1001 width=8) (actual time=83.615..85.127 rows=12 loops=1)
                     Filter: ((sum(public.orders.amount))::numeric > (2::numeric * $0))
                     InitPlan
                       ->  Aggregate  (cost=230.03..230.04 rows=1 width=8) (actual time=47.372..47.374 rows=1 loops=1)
                             ->  HashAggregate  (cost=205.00..217.51 rows=1001 width=8) (actual time=41.329..43.404 rows=1001 loops=1)
                                   ->  Seq Scan on orders  (cost=0.00..155.00 rows=10000 width=8) (actual time=0.007..20.318 rows=10000 loops=1)
                     ->  Seq Scan on orders  (cost=0.00..155.00 rows=10000 width=8) (actual time=0.005..15.737 rows=10000 loops=1)
 Total runtime: 131.050 ms
(13 rows)

As you can see – it's slower. The reason is very simple: the query written without “WITH” has to scan the orders table 3 times, while the “WITH” version does it only 2 times.

So, WITH can be used to write more readable queries. Queries which work on smaller amounts of data. Queries which can be faster.
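One detail worth spelling out (my observation, not something from the commit log): the result of a CTE is computed once, and then reused every time it is referenced in the main query. An easy way to see it is a CTE built on a volatile function:

```sql
-- r is referenced twice below, but random() is called only once,
-- so both references see the same value
WITH r AS (
    SELECT random() AS v
)
SELECT a.v = b.v AS same_value FROM r a, r b;
-- same_value is true
```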

But, that's not all. There is also a special type of WITH query which can do something that is simply not possible without it. It is WITH RECURSIVE.

Let's imagine the simplest possible tree structure:

CREATE TABLE tree (id serial PRIMARY KEY, parent_id int4 REFERENCES tree (id));
INSERT INTO tree (parent_id) VALUES (NULL);
INSERT INTO tree (parent_id)
    SELECT CASE WHEN random() < 0.95 THEN FLOOR(1 + random() * currval('tree_id_seq')) ELSE NULL END
        FROM generate_series(1,1000) i;

This created a nice forest (not a tree, as we have many root elements), which has (with my random data):

  • 1001 elements
  • 56 root nodes
  • 28 of the root nodes have elements “below” them
  • longest path in tree has 16 nodes: 1 -> 2 -> 3 -> 7 -> 12 -> 15 -> 39 -> 52 -> 61 -> 107 -> 123 -> 194 -> 466 -> 493 -> 810 -> 890
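For the record – the counts above came from trivial queries along these lines (obviously, with different random data you will get different numbers):

```sql
SELECT count(*) FROM tree;                           -- all elements
SELECT count(*) FROM tree WHERE parent_id IS NULL;   -- root nodes
SELECT count(DISTINCT parent_id) FROM tree
    WHERE parent_id IN (SELECT id FROM tree WHERE parent_id IS NULL);
                                                     -- roots that have children
```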

Traditionally, if one wanted to print all ancestors of node 890, one would have to write a loop querying the tree table many times – until it found a row with “parent_id IS NULL”.
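Such a loop could look more or less like this – a sketch in PL/pgSQL (the function name and its shape are mine, not anything standard):

```sql
CREATE OR REPLACE FUNCTION ancestors(start_id int4) RETURNS SETOF tree AS $$
DECLARE
    r          tree%ROWTYPE;
    current_id int4 := start_id;
BEGIN
    LOOP
        SELECT * INTO r FROM tree WHERE id = current_id;
        EXIT WHEN NOT FOUND;            -- unknown id, nothing to return
        RETURN NEXT r;
        EXIT WHEN r.parent_id IS NULL;  -- reached a root
        current_id := r.parent_id;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM ancestors(890);
```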

But now, we can:

WITH RECURSIVE struct AS (
    SELECT t.* FROM tree t WHERE id = 890
UNION ALL
    SELECT t.* FROM tree t, struct s WHERE t.id = s.parent_id
)
SELECT * FROM struct;

Which works really nicely:

                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 CTE Scan on struct  (cost=402.18..404.20 rows=101 width=8) (actual time=0.037..69.311 rows=16 loops=1)
   InitPlan
     ->  Recursive Union  (cost=0.00..402.18 rows=101 width=8) (actual time=0.030..69.222 rows=16 loops=1)
           ->  Index Scan using tree_pkey on tree t  (cost=0.00..8.27 rows=1 width=8) (actual time=0.024..0.029 rows=1 loops=1)
                 Index Cond: (id = 890)
           ->  Hash Join  (cost=0.33..39.19 rows=10 width=8) (actual time=2.165..4.313 rows=1 loops=16)
                 Hash Cond: (t.id = s.parent_id)
                 ->  Seq Scan on tree t  (cost=0.00..35.01 rows=1001 width=8) (actual time=0.005..2.434 rows=1001 loops=15)
                 ->  Hash  (cost=0.20..0.20 rows=10 width=4) (actual time=0.012..0.012 rows=1 loops=16)
                       ->  WorkTable Scan on struct s  (cost=0.00..0.20 rows=10 width=4) (actual time=0.002..0.005 rows=1 loops=16)
 Total runtime: 69.445 ms
(11 rows)

You might be worried by seeing “Seq Scan on tree…loops=15”, but don't worry. It works this way only because I have very few rows in the table.

After adding another 50000, and re-running the query, I got:

                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 CTE Scan on struct  (cost=840.76..842.78 rows=101 width=8) (actual time=0.039..0.672 rows=16 loops=1)
   InitPlan
     ->  Recursive Union  (cost=0.00..840.76 rows=101 width=8) (actual time=0.032..0.591 rows=16 loops=1)
           ->  Index Scan using tree_pkey on tree t  (cost=0.00..8.27 rows=1 width=8) (actual time=0.025..0.029 rows=1 loops=1)
                 Index Cond: (id = 890)
           ->  Nested Loop  (cost=0.00..83.05 rows=10 width=8) (actual time=0.018..0.027 rows=1 loops=16)
                 ->  WorkTable Scan on struct s  (cost=0.00..0.20 rows=10 width=4) (actual time=0.002..0.004 rows=1 loops=16)
                 ->  Index Scan using tree_pkey on tree t  (cost=0.00..8.27 rows=1 width=8) (actual time=0.008..0.011 rows=1 loops=16)
                       Index Cond: (t.id = s.parent_id)
 Total runtime: 0.799 ms
(10 rows)

Which looks perfect.
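As a side note – a handy trick with WITH RECURSIVE is to accumulate the whole path in an array while walking the structure. A sketch (this time going top-down, from the roots):

```sql
WITH RECURSIVE struct AS (
    SELECT t.id, t.parent_id, ARRAY[t.id] AS path
        FROM tree t WHERE t.parent_id IS NULL
UNION ALL
    SELECT t.id, t.parent_id, s.path || t.id
        FROM tree t, struct s WHERE t.parent_id = s.id
)
SELECT path FROM struct WHERE id = 890;
```

With my data this should return the root-to-890 chain listed above as a single array.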

As I wrote in the beginning – there are still some things to be polished. But as for now, WITH queries look simply great.

6 thoughts on “Waiting for 8.4 – Common Table Expressions (WITH queries)”

  1. Just perfect! 🙂 I have been waiting for this feature sooo long 🙂

  2. That’s great news! Now on with window functions – CTE and window functions will make PostgreSQL the Open Source database with the richest SQL dialect.

  3. Excellent news! 🙂 I think some queries may be 2-10… times faster!

  4. I wanted something like WITH RECURSIVE for ages! That can be incredibly useful and there are so many applications that could benefit from it.

  5. I confirm: CTE is a great enhancement.
    I use it in Firebird 2.1 and it works very well.