I found the numbers presented by Periscope (as their improvement) not all that great.
Unfortunately – their blog doesn't allow comments, so I decided to test it myself, and write up, on my own blog, what I found.
First, what I gathered from their blogposts:
- 2 tables: dashboards (with columns: name, id and possibly something else) and time_on_site_logs (with columns: user_id, dashboard_id, and possibly something else)
- dashboards has 1200 rows
- time_on_site_logs has 14 million rows
- there are 1700 distinct (dashboard_id, user_id) pairs in time_on_site_logs
Since I have no idea about table width (which can be important), I will make some guesses about the number of columns that should be there. There is also no information about data distribution in time_on_site_logs (as in: average number of dashboards per user, average number of rows per user, and so on). So I'll have to improvise.
$ create table dashboards (
    id      serial      primary key,
    name    text        not null unique,
    created timestamptz not null default now(),
    visited int8        not null default 0
);

$ insert into dashboards (name, created, visited)
    with x as (
        select now() - '1 year'::interval * random() as c
        from generate_series(1,1200)
        order by c
    )
    select 'Dashboard #' || row_number() over (order by c), c, random() * 10000
    from x;
This gives me data like this:
select * from dashboards order by id desc limit 20;

  id  |      name       |            created            | visited
------+-----------------+-------------------------------+---------
 1200 | Dashboard #1200 | 2014-01-29 09:16:01.894632+01 |    3264
 1199 | Dashboard #1199 | 2014-01-29 05:27:19.501032+01 |    5889
 1198 | Dashboard #1198 | 2014-01-29 02:52:03.939432+01 |     595
 1197 | Dashboard #1197 | 2014-01-28 11:22:32.134632+01 |    8283
 1196 | Dashboard #1196 | 2014-01-28 07:43:48.406632+01 |    5713
 1195 | Dashboard #1195 | 2014-01-28 03:16:50.537832+01 |    4082
 1194 | Dashboard #1194 | 2014-01-27 23:22:24.877032+01 |    7836
 1193 | Dashboard #1193 | 2014-01-27 11:16:10.765032+01 |    9849
 1192 | Dashboard #1192 | 2014-01-27 10:34:12.032232+01 |    7207
 1191 | Dashboard #1191 | 2014-01-27 10:34:10.304232+01 |    2657
 1190 | Dashboard #1190 | 2014-01-27 03:49:03.565032+01 |    1996
 1189 | Dashboard #1189 | 2014-01-26 08:06:21.257832+01 |    9084
 1188 | Dashboard #1188 | 2014-01-26 02:46:08.921832+01 |    9549
 1187 | Dashboard #1187 | 2014-01-25 21:19:41.869032+01 |    7073
 1186 | Dashboard #1186 | 2014-01-25 15:07:59.264232+01 |    9089
 1185 | Dashboard #1185 | 2014-01-25 14:12:43.750632+01 |    3228
 1184 | Dashboard #1184 | 2014-01-25 10:03:10.198632+01 |    9176
 1183 | Dashboard #1183 | 2014-01-25 07:31:07.395432+01 |    5257
 1182 | Dashboard #1182 | 2014-01-25 01:29:44.710632+01 |     633
 1181 | Dashboard #1181 | 2014-01-24 21:51:53.773032+01 |    9663
(20 rows)
Now, for the time_on_site_logs table:
$ create table time_on_site_logs (
    id              serial      primary key,
    dashboard_id    int4        not null references dashboards (id),
    user_id         int4        not null,
    session_started timestamptz not null,
    session_time    interval    not null
);
Schema is ready. Now for the data. Since I have no idea about the real data distribution, let's just create 1700 random dashboard/user pairs, repeat each pair 8236 times (≈ 14000000 / 1700), and call it a day:
insert into time_on_site_logs (dashboard_id, user_id, session_started, session_time)
with x as (
    select d, u
    from generate_series(1,1200) as d, generate_series(1,10) as u
    order by random()
    limit 1700
)
select d, u, now() - random() * '2 years'::interval, random() * '30 minutes'::interval
from x, generate_series(1,8236) as q;
A quick sanity check confirms both the total row count and the number of distinct pairs:

with x as (
    select dashboard_id, user_id, count(*)
    from time_on_site_logs
    group by dashboard_id, user_id
)
select sum(count) as all_rows, count(*) as distinct_pairs from x;

 all_rows | distinct_pairs
----------+----------------
 14001200 |           1700
(1 row)
Some of the comments on Reddit (very ill-informed, in my opinion) suggested that the speed of the queries (as shown in the original blogposts) depends on indexes, and that the query plans showing seq scans mean there are no indexes.
I don't believe this explanation, so let me add some additional indexes:
$ create index i1 on time_on_site_logs (dashboard_id);
CREATE INDEX

$ create index i2 on time_on_site_logs (user_id);
CREATE INDEX

$ create index i3 on time_on_site_logs (dashboard_id, user_id);
CREATE INDEX

$ create index i4 on time_on_site_logs (user_id, dashboard_id);
CREATE INDEX
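Side note, my addition rather than anything from the original posts: if you're recreating this at home, building four indexes on a 14M-row table means some large sorts, so temporarily raising maintenance_work_mem before the CREATE INDEX statements makes them noticeably faster (the value here is an arbitrary example):

$ set maintenance_work_mem = '1GB';
SET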
Now tables look like this:
$ \d dashboards
                                Table "public.dashboards"
 Column  |           Type           |                        Modifiers
---------+--------------------------+----------------------------------------------------------
 id      | integer                  | not null default nextval('dashboards_id_seq'::regclass)
 name    | text                     | not null
 created | timestamp with time zone | not null default now()
 visited | bigint                   | not null default 0
Indexes:
    "dashboards_pkey" PRIMARY KEY, btree (id)
    "dashboards_name_key" UNIQUE CONSTRAINT, btree (name)
Referenced by:
    TABLE "time_on_site_logs" CONSTRAINT "time_on_site_logs_dashboard_id_fkey" FOREIGN KEY (dashboard_id) REFERENCES dashboards(id)

$ \d time_on_site_logs
                                    Table "public.time_on_site_logs"
     Column      |           Type           |                           Modifiers
-----------------+--------------------------+-----------------------------------------------------------------
 id              | integer                  | not null default nextval('time_on_site_logs_id_seq'::regclass)
 dashboard_id    | integer                  | not null
 user_id         | integer                  | not null
 session_started | timestamp with time zone | not null
 session_time    | interval                 | not null
Indexes:
    "time_on_site_logs_pkey" PRIMARY KEY, btree (id)
    "i1" btree (dashboard_id)
    "i2" btree (user_id)
    "i3" btree (dashboard_id, user_id)
    "i4" btree (user_id, dashboard_id)
Foreign-key constraints:
    "time_on_site_logs_dashboard_id_fkey" FOREIGN KEY (dashboard_id) REFERENCES dashboards(id)
That should handle all indexing needs for now.
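One more setup step, again my addition rather than something Periscope mentioned: after bulk loading it's worth refreshing planner statistics, so all the plans below are based on accurate row estimates:

$ vacuum analyze dashboards;
VACUUM
$ vacuum analyze time_on_site_logs;
VACUUM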
Now – the Periscope guys wrote that they tested it on an Amazon EC2 instance. I'm cheap, so I'm testing it on my desktop. So the numbers will not be directly comparable. But the ratios should be.
So, let's test the queries.
First, the naive approach:
select dashboards.name, count(distinct time_on_site_logs.user_id)
from time_on_site_logs
join dashboards on time_on_site_logs.dashboard_id = dashboards.id
group by name
order by count desc;
I ran it 3 times; the best result was 492 seconds.
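If you want to see for yourself where that time goes, you can wrap the query in EXPLAIN (ANALYZE, BUFFERS) – on this schema I'd expect the plan to boil down to a seq scan over all 14M rows feeding a sort and the count(distinct …) aggregate (plan output elided here):

explain (analyze, buffers)
select dashboards.name, count(distinct time_on_site_logs.user_id)
from time_on_site_logs
join dashboards on time_on_site_logs.dashboard_id = dashboards.id
group by name
order by count desc;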
Their (Periscope's) second approach was:
select dashboards.name, log_counts.ct
from dashboards
join (
    select dashboard_id, count(distinct user_id) as ct
    from time_on_site_logs
    group by dashboard_id
) as log_counts on log_counts.dashboard_id = dashboards.id
order by log_counts.ct desc;
Again, the best of three runs was 23.1 seconds.
Third query by Periscope:
select dashboards.name, log_counts.ct
from dashboards
join (
    select distinct_logs.dashboard_id, count(1) as ct
    from (
        select distinct dashboard_id, user_id
        from time_on_site_logs
    ) as distinct_logs
    group by distinct_logs.dashboard_id
) as log_counts on log_counts.dashboard_id = dashboards.id
order by log_counts.ct desc;
It yielded a best time of slightly less than 4.6s.
So far, we have:
| Query                      | Periscope test | depesz test |
|----------------------------|----------------|-------------|
| Aggregate, Then Join       | 10.6s          | 23.1s       |
| First, Reduce The Data Set | 7.13s          | 4.6s        |
This wraps up my re-testing of what Periscope wrote.
A couple of comments:
- to the person who said (on Reddit) that PostgreSQL cannot use Hash for count(distinct …) – well, true. But rewriting the query so that it uses hashes is trivial, as shown above
- to the people saying “Yet again they say the query plans include table scans, which imply either no indexes or an awful index design” – well, look at this: I have indexes on everything, yet PostgreSQL doesn't use them for such queries. You are of course welcome to explain how you think indexes could help with these queries – with sample schema, data, and queries. A quick way to run this check yourself is shown right below.
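Here is the quick check I mean – enable_seqscan is a planner knob meant for exactly this kind of experiment (not for production); turn it off and see what plan and time you get:

set enable_seqscan = off;
explain analyze
select dashboard_id, count(distinct user_id)
from time_on_site_logs
group by dashboard_id;
reset enable_seqscan;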
But – all things said – the numbers are, in my opinion, still too high.
I mean – if I were counting how many rows there are in time_on_site_logs for every (dashboard, user) pair – sure, that could take a while, as it has to scan the whole table. But just getting a distinct count? Especially with so few output rows (946 of them)? That's got to be optimizable.
Luckily, it is:
I can define a simple function which gives me the list of distinct (dashboard_id, user_id) pairs (this emulates a “loose index scan”, stepping through index i3 one distinct pair at a time):
CREATE OR REPLACE FUNCTION get_list_of_unique_pairs(
    OUT dashboard_id INT4,
    OUT user_id      INT4
) RETURNS setof record as $$
declare
    v_dash ALIAS FOR dashboard_id;
    v_user ALIAS FOR user_id;
BEGIN
    -- get the first (smallest) pair, via the (dashboard_id, user_id) index
    SELECT l.dashboard_id, l.user_id
        INTO v_dash, v_user
        FROM time_on_site_logs l
        ORDER BY l.dashboard_id, l.user_id
        LIMIT 1;
    LOOP
        EXIT WHEN NOT FOUND;
        RETURN NEXT;
        -- jump to the next pair strictly greater than the current one;
        -- each iteration is a single index probe, so the whole function
        -- does ~1700 probes instead of scanning 14M rows
        SELECT l.dashboard_id, l.user_id
            INTO v_dash, v_user
            FROM time_on_site_logs l
            WHERE (l.dashboard_id, l.user_id) > (v_dash, v_user)
            ORDER BY l.dashboard_id, l.user_id
            LIMIT 1;
    END LOOP;
    RETURN;
END;
$$ language plpgsql;
With this function in place, I can just:
with x as (
    select dashboard_id, count(*)
    from get_list_of_unique_pairs()
    group by dashboard_id
)
select d.name, x.count
from x
join dashboards d on x.dashboard_id = d.id
order by x.count desc;
This query runs faster than 4.6s. It runs faster than 2.13s. It runs faster than 1s. It runs in 39 ms (0.039s).
Of course – you can say that using a function is cheating. Well, it can be done without functions, but then the query gets more complicated:
with recursive distinct_pairs as (
    -- seed: the smallest (dashboard_id, user_id) pair, packed as a whole row
    (
        select l as rl
        from time_on_site_logs l
        order by l.dashboard_id, l.user_id
        limit 1
    )
    union all
    -- step: from the previous row, fetch the next greater pair (one index probe)
    select (
        select l
        from time_on_site_logs l
        where (l.dashboard_id, l.user_id) > ((p.rl).dashboard_id, (p.rl).user_id)
        order by l.dashboard_id, l.user_id
        limit 1
    )
    from distinct_pairs p
    where (p.rl).id is not null
), unpacked_counts as (
    select (rl).dashboard_id, count(*)
    from distinct_pairs
    where (rl).dashboard_id is not null
    group by (rl).dashboard_id
)
select d.name, uc.count
from dashboards d
join unpacked_counts uc on d.id = uc.dashboard_id
order by uc.count desc;
- I would really expect more from a company that writes: “We make … tool that makes SQL data analysis really fast.” or “building a much faster way to do SQL data analysis.” – sorry, but it can be done faster. Way faster.
- The queries I wrote rely on the fact that there are comparatively few distinct (dashboard_id, user_id) values. If there were many more (for example, 50% of the row count of time_on_site_logs), this approach wouldn't work nearly as well.
- This kind of analysis should really be done with rollup tables, which gather the information into side tables; then you just query these side tables to get the results. Otherwise it might be fast for 14M rows, but for 14B rows it will be slow again (see the sketch right after this list).
- The whole process, as shown in the Periscope posts, can be parallelized relatively simply – even in PostgreSQL.
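To make the rollup-table idea concrete, here is a minimal sketch – all names are mine, not from any Periscope post, and it assumes PostgreSQL 9.5+ for ON CONFLICT … DO NOTHING: a side table with one row per distinct (dashboard_id, user_id) pair, kept in sync by a trigger, so the report only ever touches ~1700 rows:

-- hypothetical side table: one row per distinct (dashboard_id, user_id)
create table dashboard_user_rollup (
    dashboard_id int4 not null,
    user_id      int4 not null,
    primary key (dashboard_id, user_id)
);

-- backfill it once from the existing data
insert into dashboard_user_rollup
    select distinct dashboard_id, user_id from time_on_site_logs;

-- keep it in sync on every insert into the log table
create function rollup_pair() returns trigger as $f$
begin
    insert into dashboard_user_rollup (dashboard_id, user_id)
        values (NEW.dashboard_id, NEW.user_id)
        on conflict do nothing;
    return NEW;
end;
$f$ language plpgsql;

create trigger maintain_rollup
    after insert on time_on_site_logs
    for each row execute procedure rollup_pair();

-- the report now reads ~1700 rows instead of 14M:
select d.name, count(*)
    from dashboard_user_rollup r
    join dashboards d on d.id = r.dashboard_id
    group by d.name
    order by count desc;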
And finally – that was fun.