August 5th, 2008 by depesz

Today Tom Lane committed a patch which gives DISTINCT the ability to use hash aggregation – just like GROUP BY.

Log message:

Improve SELECT DISTINCT to consider hash aggregation, as well as sort/uniq,
as methods for implementing the DISTINCT step. This eliminates the former
performance gap between DISTINCT and GROUP BY, and also makes it possible
to do SELECT DISTINCT on datatypes that only support hashing not sorting.
 
SELECT DISTINCT ON is still always implemented by sorting; it would take
executor changes to support hashing that, and it's not clear it's worth
the trouble.
 
This is a release-note-worthy incompatibility from previous PG versions,
since SELECT DISTINCT can no longer be counted on to deliver sorted output
without explicitly saying ORDER BY. (Anyone who can't cope with that
can consider turning off enable_hashagg.)
 
Several regression test queries needed to have ORDER BY added to preserve
stable output order. I fixed the ones that manifested here, but there
might be some other cases that show up on other platforms.

What exactly does it give you?

It's pretty simple. Let's create a simple test table:

# create table x (i int4);
CREATE TABLE
# insert into x (i) select random() * 100000 from generate_series(1,150000);
INSERT 0 150000

Now, on a version of PostgreSQL without this patch, I run the query twice, so the second run benefits from a warm cache:

# explain analyze select distinct i from x;
                                                       QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Unique  (cost=15789.25..16496.05 rows=200 width=4) (actual time=623.208..1177.344 rows=77774 loops=1)
   ->  Sort  (cost=15789.25..16142.65 rows=141360 width=4) (actual time=623.203..872.388 rows=150000 loops=1)
         Sort Key: i
         Sort Method:  external merge  Disk: 2328kB
         ->  Seq Scan on x  (cost=0.00..2002.60 rows=141360 width=4) (actual time=0.051..224.693 rows=150000 loops=1)
 Total runtime: 1280.360 ms
(6 rows)
 
# explain analyze select distinct i from x;
                                                       QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Unique  (cost=15789.25..16496.05 rows=200 width=4) (actual time=555.254..1111.487 rows=77774 loops=1)
   ->  Sort  (cost=15789.25..16142.65 rows=141360 width=4) (actual time=555.251..805.267 rows=150000 loops=1)
         Sort Key: i
         Sort Method:  external merge  Disk: 2328kB
         ->  Seq Scan on x  (cost=0.00..2002.60 rows=141360 width=4) (actual time=0.020..199.417 rows=150000 loops=1)
 Total runtime: 1208.449 ms
(6 rows)

As you can see, the query takes around 1.2s, and DISTINCT clearly works by sorting the data and then running a "unique" step over it.

But what happens after I apply this patch?

# explain analyze select distinct i from x;
                                                    QUERY PLAN
----------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=2356.00..2358.00 rows=200 width=4) (actual time=475.303..603.042 rows=77774 loops=1)
   ->  Seq Scan on x  (cost=0.00..2002.60 rows=141360 width=4) (actual time=0.059..213.981 rows=150000 loops=1)
 Total runtime: 700.063 ms
(3 rows)
 
# explain analyze select distinct i from x;
                                                    QUERY PLAN
----------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=2356.00..2358.00 rows=200 width=4) (actual time=465.973..576.127 rows=77774 loops=1)
   ->  Seq Scan on x  (cost=0.00..2002.60 rows=141360 width=4) (actual time=0.019..205.799 rows=150000 loops=1)
 Total runtime: 670.394 ms
(3 rows)

Pretty cool – the time was cut in half. Of course, it will never be very fast – to make it really fast I would have to maintain a dictionary table (possibly kept up to date by triggers) – but it is definitely a worthy addition to the PostgreSQL code.
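To illustrate the dictionary-table idea, here is one possible sketch. The names (x_dict, x_dict_insert) are made up for this example, and for brevity it only handles INSERT – keeping the dictionary correct across UPDATE and DELETE takes more work, and the existence check is not safe against concurrent inserts of the same value without extra locking:

```sql
-- Dictionary of distinct values of x.i:
CREATE TABLE x_dict (i int4 PRIMARY KEY);

CREATE OR REPLACE FUNCTION x_dict_insert() RETURNS trigger AS $$
BEGIN
    -- Record the value in the dictionary if it's not there yet.
    IF NOT EXISTS (SELECT 1 FROM x_dict WHERE i = NEW.i) THEN
        INSERT INTO x_dict (i) VALUES (NEW.i);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER x_dict_trg BEFORE INSERT ON x
    FOR EACH ROW EXECUTE PROCEDURE x_dict_insert();

-- Now the distinct values are just a cheap scan of a small table:
SELECT i FROM x_dict;
```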

  1. # Frank
    Nov 9, 2009

    Hi, what do you mean by a dictionary table to make the distinct query faster? Some kind of preprocessing?
    Another question I've got is: how good is the hash function at avoiding collisions while inserting the groups into the buckets?

  2. Nov 9, 2009

    @Frank: yes – triggers on insert/update/delete on the base table which maintain the dictionary table.

    As for your other question – I have no idea. You can try asking on the pgsql-hackers list – it never bothered me.
