October 22nd, 2010 by depesz | 6 comments

Today some guy on IRC asked a question which I didn't fully understand, but which could (probably) be summarized as: how to group data into 5-minute intervals, based on some timestamp column.

Well, it seems trivial (as long as you know how to do it), but since he clearly didn't know how to do it (or I misunderstood his problem), here's the explanation:

First, let's make some test table:

$ create table test as
    select i, now() - '6 months'::interval * random() as ts
    from generate_series(1,100000) i;

Data looks like this:

$ select * from test limit 10;
 i  |              ts
----+------------------------------
  1 | 2010-05-11 08:58:18.94062+02
  2 | 2010-05-09 00:01:20.46702+02
  3 | 2010-09-29 10:51:48.29742+02
  4 | 2010-08-11 20:46:35.64462+02
  5 | 2010-05-07 05:39:20.96622+02
  6 | 2010-05-14 08:08:43.15182+02
  7 | 2010-10-15 09:50:42.86382+02
  8 | 2010-05-11 22:21:28.36782+02
  9 | 2010-05-05 19:56:36.95982+02
 10 | 2010-08-26 01:33:35.25102+02
(10 rows)

some base statistics:

$ select count(*), min(ts), max(ts) from test;
 count  |             min              |             max
--------+------------------------------+------------------------------
 100000 | 2010-04-22 14:06:09.01422+02 | 2010-10-22 14:05:20.02542+02
(1 row)

Before I get to 5-minute intervals (which are somewhat tricky), let's see how to get data grouped into some simpler ranges:

by day:

$ select date_trunc( 'day', ts ), count(*)
    from test
    group by 1
    order by 1 desc limit 5;
       date_trunc       | count
------------------------+-------
 2010-10-22 00:00:00+02 |   301
 2010-10-21 00:00:00+02 |   548
 2010-10-20 00:00:00+02 |   571
 2010-10-19 00:00:00+02 |   571
 2010-10-18 00:00:00+02 |   543
(5 rows)

by hour:

$ select date_trunc( 'hour', ts ), count(*)
    from test
    group by 1
    order by 1 desc limit 5;
       date_trunc       | count
------------------------+-------
 2010-10-22 14:00:00+02 |     3
 2010-10-22 13:00:00+02 |    27
 2010-10-22 12:00:00+02 |    21
 2010-10-22 11:00:00+02 |    32
 2010-10-22 10:00:00+02 |    22
(5 rows)

So far it looks rather simple. For other date_trunc options, you can check the fine manual.

But there is no '5 minutes' truncation. So how can we do it?

One approach is to use epoch values and integer arithmetic:

$ select now(), 'epoch'::timestamptz + '300 seconds'::interval * (extract(epoch from now())::int4 / 300);
              now              |        ?column?
-------------------------------+------------------------
 2010-10-22 14:15:21.397404+02 | 2010-10-22 14:15:00+02
(1 row)

Wait a moment …

$ select now(), 'epoch'::timestamptz + '300 seconds'::interval * (extract(epoch from now())::int4 / 300);
              now              |        ?column?
-------------------------------+------------------------
 2010-10-22 14:17:47.363632+02 | 2010-10-22 14:15:00+02
(1 row)
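The reason this works is the integer division: dividing the epoch value by 300 drops the remainder, so multiplying back by 300 snaps the timestamp down to the previous 5-minute boundary. The same arithmetic, spelled out step by step:

$ select extract(epoch from now())::int4 as epoch_seconds,
       extract(epoch from now())::int4 / 300 as full_intervals,
       extract(epoch from now())::int4 / 300 * 300 as snapped_epoch;

Integer division in PostgreSQL truncates toward zero, which for (positive) epoch values means rounding down to the start of the interval.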

Nice. Of course using this in queries will get tedious pretty soon, but we can use functions to make it simpler:

CREATE FUNCTION ts_round( timestamptz, INT4 ) RETURNS TIMESTAMPTZ AS $$
    SELECT 'epoch'::timestamptz + '1 second'::INTERVAL * ( $2 * ( extract( epoch FROM $1 )::INT4 / $2 ) );
$$ LANGUAGE SQL;

The cool thing is that when writing the function there was no point in hardcoding the 300, so I made the interval length a parameter. And now I can:

$ select now(), ts_round( now(), 300 );
              now              |        ts_round
-------------------------------+------------------------
 2010-10-22 14:20:57.548683+02 | 2010-10-22 14:20:00+02
(1 row)

Or even, should I need to:

$ select now(), ts_round( now(), 629 );
              now              |        ts_round
-------------------------------+------------------------
 2010-10-22 14:21:17.878654+02 | 2010-10-22 14:16:53+02
(1 row)

So, with this function at hand, I can do the grouping:

$ select ts_round( ts, 300 ), count(*) from test group by 1 order by 1 desc limit 5;
        ts_round        | count
------------------------+-------
 2010-10-22 14:05:00+02 |     2
 2010-10-22 14:00:00+02 |     1
 2010-10-22 13:55:00+02 |     2
 2010-10-22 13:50:00+02 |     1
 2010-10-22 13:45:00+02 |     1
(5 rows)

And to verify that the values are indeed grouped correctly, I can use the trick I wrote about earlier, and do:

select
    ts_round( ts, 300 ),
    count(*),
    '- ' || array_to_string(
        array_agg( ts::text ),
        E'\n- '
    )
from
    test
group by 1
order by 1 desc limit 10;
        ts_round        | count |            ?column?
------------------------+-------+--------------------------------
 2010-10-22 14:05:00+02 |     2 | - 2010-10-22 14:05:20.02542+02+
                        |       | - 2010-10-22 14:05:03.43662+02
 2010-10-22 14:00:00+02 |     1 | - 2010-10-22 14:01:11.71182+02
 2010-10-22 13:55:00+02 |     2 | - 2010-10-22 13:57:30.87342+02+
                        |       | - 2010-10-22 13:58:36.79662+02
 2010-10-22 13:50:00+02 |     1 | - 2010-10-22 13:51:30.06702+02
 2010-10-22 13:45:00+02 |     1 | - 2010-10-22 13:46:38.38062+02
 2010-10-22 13:40:00+02 |     5 | - 2010-10-22 13:44:55.39182+02+
                        |       | - 2010-10-22 13:42:52.27182+02+
                        |       | - 2010-10-22 13:44:28.95342+02+
                        |       | - 2010-10-22 13:41:53.43342+02+
                        |       | - 2010-10-22 13:40:49.67022+02
 2010-10-22 13:35:00+02 |     1 | - 2010-10-22 13:38:17.34702+02
 2010-10-22 13:30:00+02 |     2 | - 2010-10-22 13:32:31.40142+02+
                        |       | - 2010-10-22 13:31:16.23342+02
 2010-10-22 13:25:00+02 |     4 | - 2010-10-22 13:26:56.86062+02+
                        |       | - 2010-10-22 13:25:20.00622+02+
                        |       | - 2010-10-22 13:25:37.89102+02+
                        |       | - 2010-10-22 13:29:00.15342+02
 2010-10-22 13:20:00+02 |     2 | - 2010-10-22 13:23:31.57422+02+
                        |       | - 2010-10-22 13:21:27.50382+02
(10 rows)

Works quite nicely.
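Side note for readers on newer versions: PostgreSQL 14 added a built-in date_bin() function which does exactly this kind of bucketing, so the custom function is no longer necessary there. A sketch, assuming PostgreSQL 14+ and the same test table:

$ select date_bin( '5 minutes', ts, 'epoch'::timestamptz ), count(*)
    from test
    group by 1
    order by 1 desc limit 5;

date_bin takes the stride, the value to bin, and an origin timestamp that the buckets are aligned to.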

6 comments

  1. Oct 22, 2010

    Or you could use window functions
    select count(*) over(partition by ts_round(ts, 300), …

  2. # gregj
    Oct 22, 2010

    you could probably do it with bucket width.

  3. Oct 24, 2010

    I'd probably go for "GROUP BY DATE_TRUNC('hour', ts), DATE_PART('minute', ts) / 5".

  4. # pg_alchemist
    Oct 26, 2010

    Hey depesz 🙂 . I can use ur trick. anyway here is a solution too..

    select date_trunc('minute', ms_click_date) - (extract(minute from ms_click_date)::integer % 5) * interval '1 minute', count(*) from user_sessions where us_click_date between '2010-06-15 00:00:01' and '2010-06-15 02:00:01' group by 1 order by 1;

  5. # pg_alchemist
    Oct 26, 2010

    oops, corrected column names:

    select date_trunc('minute', us_click_date) - (extract(minute from us_click_date)::integer % 5) * interval '1 minute', count(*) from user_sessions where us_click_date between '2010-06-15 00:00:01' and '2010-06-15 02:00:01' group by 1 order by 1;

  6. # Michael
    Oct 7, 2012

    This works great - thanks!