January 11th, 2014 by depesz
Did it help? If yes - maybe you can help me? Donate BTC to 19zPa5diT2LZqGtTi8f8bfApLn8rw9zBHx

On 23rd of December, Tom Lane committed patch:

Support ordered-set (WITHIN GROUP) aggregates.
 
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()).  We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
 
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions.  To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c.  This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need.  There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
 
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates.  Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
 
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane

Now, did you read the commit message? Did you understand it? Well, to be honest – I didn't. But that's why we have access to the sources, including the new changes in documentation.

So I read the changes in the docs. And I still didn't understand it. Well – I've always learned best by trying, so that's what I did.

The example in the docs references a households table with an income column. So let's test it:

create table households as
    select cast(random() * 200000 + 20000 as int4) as income from generate_series(1,10) i;
SELECT 10

Data is:

select row_number() over (order by income), * from households order by income;
 row_number | income 
------------+--------
          1 |  21118
          2 |  27346
          3 |  40818
          4 |  64789
          5 |  75681
          6 |  99033
          7 | 132923
          8 | 140213
          9 | 174634
         10 | 204843
(10 rows)

OK. And now, what does the query from docs do:

SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households;
 percentile_disc 
-----------------
           75681
(1 row)

It gave me the 5th row (the 50th-percentile row). percentile_disc always returns an actual row from the data set – even when the requested percentile falls between rows.

Please note that I didn't get the median, though. That's because I used percentile_disc. There is another aggregate that returns an interpolated value:

SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households;
 percentile_cont 
-----------------
           87357
(1 row)

The value I got is simply in the middle between the 5th and 6th rows. So it's the “right” median. Please note that while you can use percentile_disc() on basically anything you can order, you can't use percentile_cont() on some types, because it's not possible to interpolate between their values.
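The difference between the two aggregates can be sketched in a few lines of Python – a hand-rolled illustration of the semantics described in the SQL standard, not PostgreSQL's actual implementation:

```python
import math

def percentile_disc(fraction, values):
    """First value in sorted order whose position reaches the fraction."""
    s = sorted(values)
    idx = max(math.ceil(fraction * len(s)) - 1, 0)
    return s[idx]

def percentile_cont(fraction, values):
    """Linear interpolation between the two adjacent sorted values."""
    s = sorted(values)
    pos = fraction * (len(s) - 1)      # zero-based fractional position
    lo = int(pos)
    frac = pos - lo
    if frac == 0:
        return float(s[lo])
    return s[lo] + frac * (s[lo + 1] - s[lo])

incomes = [21118, 27346, 40818, 64789, 75681,
           99033, 132923, 140213, 174634, 204843]

print(percentile_disc(0.5, incomes))   # 75681 - an actual row from the data
print(percentile_cont(0.5, incomes))   # 87357.0 - the interpolated median
```

With an even row count percentile_cont interpolates halfway between the two middle rows, while percentile_disc settles for one of them.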

Interestingly, we can compute more than one percentile in the same query:

SELECT percentile_cont(array[0.25, 0.33, 0.5, 0.66, 0.75, 0.95]) WITHIN GROUP (ORDER BY income) FROM households;
                    percentile_cont                    
-------------------------------------------------------
 {46810.75,75354.24,87357,130889.6,138390.5,191248.95}
(1 row)

Now, that's pretty cool.
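This is the “multiple percentile values in one pass over the data” from the commit message: the data is sorted once, and every requested fraction is read off the same sorted input. A sketch with the same interpolation rule, for a few of the fractions above (again an illustration of the semantics, not PostgreSQL's code):

```python
def percentile_cont_multi(fractions, values):
    """Interpolated percentiles for many fractions over ONE sort."""
    s = sorted(values)                 # single sort, reused for all fractions
    result = []
    for f in fractions:
        pos = f * (len(s) - 1)
        lo = int(pos)
        frac = pos - lo
        val = s[lo] if frac == 0 else s[lo] + frac * (s[lo + 1] - s[lo])
        result.append(round(float(val), 2))
    return result

incomes = [21118, 27346, 40818, 64789, 75681,
           99033, 132923, 140213, 174634, 204843]

print(percentile_cont_multi([0.25, 0.5, 0.75, 0.95], incomes))
# [46810.75, 87357.0, 138390.5, 191248.95]
```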

There is also a “mode” aggregate, which returns the most common value:

select mode() within group (order by added_by) from plans;
  mode  
--------
 depesz
(1 row)
 
select added_by, count(*) from plans group by 1 order by 2 desc limit 2;
 added_by | count  
----------+--------
 [null]   | 105421
 depesz   |     35
(2 rows)

NULLs are ignored by aggregates, so it returned the most common value that is not null.
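That NULL-skipping behaviour is simple to model: mode() only considers non-null inputs and returns the most frequent one. A miniature sketch (the sample data here is made up to mirror the counts above; note that on a tie PostgreSQL is free to pick any of the most-common values):

```python
from collections import Counter

def sql_mode(values):
    """Most frequent non-null value, like mode() WITHIN GROUP."""
    non_null = [v for v in values if v is not None]
    return Counter(non_null).most_common(1)[0][0]

# NULLs dominate the column, but the aggregate never sees them
added_by = [None, None, None, None, "depesz", "depesz", "someone"]
print(sql_mode(added_by))   # depesz
```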

select percentile_disc(0.5) within group (order by id) from plans;
 percentile_disc 
-----------------
 Kl
(1 row)

So far it looks like you can use “ordered-set aggregates” to extract certain values that you'd otherwise have to get with window functions, or with some weird constructions based on aggregating values into an array and then picking elements from it.

The cool thing is that it works with normal grouping too.

For example: in the plans table (the one from explain.depesz.com) I have an is_deleted column, which tells whether a given plan was deleted (I don't actually delete rows, to avoid reusing ids – instead the plan text is removed from the row and replaced by an empty string).

Deletion is a relatively recent feature, so deleted plans should show a different distribution of values. Let's see:

select
    is_deleted,
    min(entered_on),
    max(entered_on),
    count(*),
    percentile_cont( 0.5 ) within group (order by entered_on)
from
    plans
group by
    is_deleted;
ERROR:  function percentile_cont(numeric, timestamp with time zone) does not exist
LINE 6:     percentile_cont( 0.5 ) within group (order by entered_on...
            ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

Interestingly – I can't use percentile_cont on timestamps (though it should be possible to interpolate there). OK, let's just pick percentile_disc:

select
    is_deleted,
    min(entered_on),
    max(entered_on),
    count(*),
    percentile_disc( 0.5 ) within group (order by entered_on)
from
    plans
group by
    is_deleted;
 is_deleted |              min              |              max              | count |        percentile_disc        
------------+-------------------------------+-------------------------------+-------+-------------------------------
 f          | 2008-12-04 13:20:43+01        | 2013-12-16 22:56:49.951565+01 | 98687 | 2012-10-22 15:38:51.41854+02
 t          | 2013-03-30 20:20:40.102483+01 | 2013-12-16 22:51:34.011972+01 |  6971 | 2013-08-20 07:47:46.051709+02
(2 rows)

The cool thing here is that we can also easily add more than one ordered-set aggregate:

select
    is_deleted,
    count(*),
    percentile_disc( 0.5 ) within group (order by entered_on),
    percentile_cont( 0.5 ) within group (order by length(plan) )
from
    plans
group by
    is_deleted;
 is_deleted | count |        percentile_disc        | percentile_cont 
------------+-------+-------------------------------+-----------------
 f          | 98687 | 2012-10-22 15:38:51.41854+02  |            2728
 t          |  6971 | 2013-08-20 07:47:46.051709+02 |               0
(2 rows)

This is of course a contrived example, since all deleted plans have 0-length plan text, but it still shows nicely what can be done.

There is another part of this – “hypothetical-set” aggregate functions.

The example I made for this is probably even more contrived, but, let's see it anyway.

Ids for plans are random strings, starting with a random 2-letter string; if the generated id is already taken, a new string is generated, and after each such “hit” the length of the id is increased. This means that while in the beginning explain was generating mostly 2-character ids, sometimes you would get a 3-letter id. And sometimes, rarely, you'd get a 4-letter one.

Currently (well, as of 16th of December) the stats looked like this:

 length | count | filled_percent | namespace_size 
--------+-------+----------------+----------------
      2 |  3844 |  100.000000000 |           3844
      3 | 82458 |   34.598536471 |         238328
      4 | 19332 |    0.130830809 |       14776336
      5 |    24 |    0.000002620 |      916132832
(4 rows)

Now, I might want to know how many rows (or even better, what percentage of all rows) with a given id length had been added by a given point in time.

So I can:

select length(id),
       percent_rank( '2012-01-01 00:00:00 UTC' ) within group (order by entered_on)
from plans
group by length(id)
order by length(id);
 length |    percent_rank    
--------+--------------------
      2 |  0.997918834547347
      3 |  0.277705013461399
      4 | 0.0611421477343265
      5 |                  0
(4 rows)

So apparently at the given moment (UTC midnight on 1st of January 2012) I already had 99.79…% of the 2-character ids, 27.77…% of the 3-character ids, ~6% of the 4-character ids, and none (0%) of the 5-character ids.

100% is the current state.

Please note that the “5, 0” row is interesting – it (correctly) shows 0% even though there were no rows in that group (length(id) = 5) older than the cutoff value. If I had simply run:

select length(id), count(*) from plans where entered_on <= '2012-01-01 00:00:00 UTC' group by 1;

I would see only 3 rows, and the row with length(id) = 5 would be missing!

Other hypothetical-set aggregate functions are:

  • rank – the rank the given value would have, within the given group/ordering, if it were inserted there (even if it really isn't)
  • dense_rank – similar, but duplicate values in the ORDER BY don't create gaps in the ranking
  • percent_rank – the percentile the value would get
  • cume_dist – the number of rows preceding (or peer with) the given value divided by the total number of rows (the hypothetical row counted on both sides)
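My reading of those definitions, sketched in Python – the hypothetical row is “inserted” into the sorted group and ranked (an illustration of the semantics, not PostgreSQL's code):

```python
def hypothetical(value, values):
    """rank/dense_rank/percent_rank/cume_dist for a hypothetical value."""
    n = len(values)
    below = sum(1 for v in values if v < value)
    at_or_below = sum(1 for v in values if v <= value)
    distinct_below = len({v for v in values if v < value})
    return {
        "rank": below + 1,
        "dense_rank": distinct_below + 1,
        # (rank - 1) / (rows including the hypothetical one, minus 1)
        "percent_rank": below / n,
        # rows preceding-or-peer (hypothetical row included) / (n + 1)
        "cume_dist": (at_or_below + 1) / (n + 1),
    }

print(hypothetical(20, [10, 20, 20, 30]))
# {'rank': 2, 'dense_rank': 2, 'percent_rank': 0.25, 'cume_dist': 0.8}
```

With 20 as the hypothetical value, one row sorts strictly before it, so it would rank 2nd; a quarter of the existing rows precede it, hence percent_rank of 0.25.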

All in all – this is a great addition. It will definitely take some time to get used to the syntax and the functionality, but it definitely looks worth it. If I miss anything, it's support in percentile_cont for more datatypes (timestamps should be simple to add, I think).

The last bit is performance.

For example, normally, when getting the median value, you would have to either:

  • get all values, sorted, into an array and take the middle element, or
  • count the values, then order them all and get the middle one with limit/offset

So, I made a 10 million row table, with random() values:

$ create table test as select random() as i from generate_series(1,10000000) q;
SELECT 10000000

And let's get median.

First approach – get all values, sorted, into array, and then get middle element – used ~ 60MB of RAM, and took ~ 15 seconds:

with x as (select array_agg(i order by i) as arr from test) select arr[ array_upper(arr, 1) / 2 ] from x;

Second approach – get the count, and use limit/offset to find the middle row – ~ 25MB of RAM and ~ 16.5 seconds.

select * from test order by i limit 1 offset ( select count(*) / 2 from test);

Using the ordered sets: ~ 30MB and 13 seconds:

select percentile_disc(0.5) within group (order by i) from test;
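One caveat worth noting when comparing these: with an even row count the three formulations don't all pick exactly the same row. A quick sketch (pure Python, with SQL's 1-based indexing translated to 0-based):

```python
import math

n = 10
rows = list(range(1, n + 1))   # stand-in for 10 sorted values

# arr[array_upper(arr, 1) / 2] - the array approach, 1-based indexing
via_array = rows[n // 2 - 1]

# LIMIT 1 OFFSET count/2 - skips n/2 rows, returns the next one
via_offset = rows[n // 2]

# percentile_disc(0.5) - first value at or past the 50% mark
via_disc = rows[math.ceil(0.5 * n) - 1]

print(via_array, via_offset, via_disc)   # 5 6 5
```

The limit/offset version lands one row later than the other two, which is harmless for a median estimate on 10 million random values, but good to know before comparing outputs.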

Definitely great stuff, and one that I hope to use more.
