April 16th, 2013 by depesz

One of the first things a new DBA hears is "use EXPLAIN". And upon first try, they are greeted with the incomprehensible:

                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=146.63..148.65 rows=808 width=138) (actual time=55.009..55.012 rows=71 loops=1)
   Sort Key: n.nspname, p.proname, (pg_get_function_arguments(p.oid))
   Sort Method: quicksort  Memory: 43kB
   ->  Hash Join  (cost=1.14..107.61 rows=808 width=138) (actual time=42.495..54.854 rows=71 loops=1)
         Hash Cond: (p.pronamespace = n.oid)
         ->  Seq Scan on pg_proc p  (cost=0.00..89.30 rows=808 width=78) (actual time=0.052..53.465 rows=2402 loops=1)
               Filter: pg_function_is_visible(oid)
         ->  Hash  (cost=1.09..1.09 rows=4 width=68) (actual time=0.011..0.011 rows=4 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 1kB
               ->  Seq Scan on pg_namespace n  (cost=0.00..1.09 rows=4 width=68) (actual time=0.005..0.007 rows=4 loops=1)
                     Filter: ((nspname <> 'pg_catalog'::name) AND (nspname <> 'information_schema'::name))

What does it even mean?

Of course, trying to understand the explain above as a starting point is pretty futile. Let's start with something simpler. But even before that, I want you to understand one very important thing:

PostgreSQL knows

That means that PostgreSQL keeps meta-information (information about information): rowcounts, numbers of distinct values, most common values, and so on. For large tables these are based on a random sample, but in general it (Pg) is pretty good at knowing stuff about our data.
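We can peek at this meta-information ourselves. A quick sketch (the catalog and view are real; the table name "test" just matches the example below):

```sql
-- planner's idea of the table size, maintained by ANALYZE / autovacuum:
select relpages, reltuples from pg_class where relname = 'test';

-- per-column statistics: fraction of NULLs, number of distinct values,
-- most common values and their frequencies:
select null_frac, n_distinct, most_common_vals, most_common_freqs
from pg_stats
where tablename = 'test' and attname = 'i';
```

Running `ANALYZE test;` refreshes these numbers from a fresh random sample.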

So, now, let's look at a simple query, and its explain:

$ explain select * from test where i = 1;
                      QUERY PLAN                      
------------------------------------------------------
 Seq Scan on test  (cost=0.00..40.00 rows=12 width=4)
   Filter: (i = 1)
(2 rows)

The query is pretty simple, and I take it that it doesn't require any special comments.

In an explain, the first line, and all lines that start with "->", are operations. The other lines are additional information for the operation above them.

In our case, we have just one operation: sequential scanning of table test.

There is additional information – about Filter.

A sequential scan means that PostgreSQL will "open" the table data and read it all, potentially filtering (removing) rows, but generally ready to read and return the whole table.

Why ready? I'll talk about it in a minute.

So, the Seq Scan line informs us that we are scanning the table in sequential mode, and that the table is named "test" (though in here lies one of the biggest problems with explain: by default it's not showing the schema, which did bite my ass more than a couple of times).
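If the missing schema bites you too, EXPLAIN (VERBOSE) helps: it prints schema-qualified relation names, plus the columns each operation outputs. A sketch (costs and columns will differ on your system):

```sql
explain (verbose) select * from test where i = 1;
--  Seq Scan on public.test  (...)     <- schema is shown now
--    Output: i
--    Filter: (test.i = 1)
```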

And what about the numbers afterwards?

Let me ask you a question: you have a table like this:

                            Table "public.t"
   Column    |  Type   |                   Modifiers                    
-------------+---------+------------------------------------------------
 id          | integer | not null default nextval('t_id_seq'::regclass)
 some_column | integer | 
 something   | text    | 
Indexes:
    "t_pkey" PRIMARY KEY, btree (id)
    "q" btree (some_column)

Given this table definition, and query:

SELECT * FROM t where some_column = 123;

What do you think would be the best way to run the query? Sequentially scan the table, or use index?

If you say: use the index of course, there is an index on this column, so it will make it faster – I'll ask: what about the case where the table has just one row, and that row has some_column = 123?

To do a seq scan, I just need to read one page (8192 bytes) from the table, and I get the row. To use the index, I have to read a page from the index, check it to find whether there is a row matching the condition, and then read a page from the table.

In total – more than twice the work!

You could say: sure, but that's for very small tables, so the speed doesn't matter. OK. So let's imagine a table that has 10 billion rows, and all of them have some_column = 123. The index doesn't help at all, and in reality it makes the situation much, much worse.

Of course, if you have a million rows and just one of them has some_column = 123, an index scan will clearly be better.
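This is easy to verify. A sketch (table name and values made up for illustration): even with an index present, when every row matches, Pg picks the seq scan:

```sql
create table all_same as
    select 123::int4 as some_column
    from generate_series(1, 100000) i;
create index all_same_idx on all_same (some_column);
analyze all_same;

explain select * from all_same where some_column = 123;
-- the plan will be a Seq Scan on all_same, despite the index:
-- reading everything sequentially is simply the cheapest way.
```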

So, it is impossible to tell in general whether a given query will use an index, or even whether it should use one – you need to know more. And this leads us to a simple truth: depending on the situation, one way of getting the data will be better or worse than the other.

PostgreSQL (up to a point) examines all possible plans. It knows how many rows you have, it knows how many rows will (likely) match the criteria, so it can make pretty smart decisions.

But how are the decisions made? That's exactly what the first set of numbers in explain shows: the cost.

Some people think that cost is an estimate shown in seconds. It's not. Its unit is "fetching a single page in a sequential manner". It accounts for both time and resource usage.

In postgresql.conf, you might have noticed these params:

seq_page_cost = 1.0                    # measured on an arbitrary scale
random_page_cost = 4.0                 # same scale as above
cpu_tuple_cost = 0.01                  # same scale as above
cpu_index_tuple_cost = 0.005           # same scale as above
cpu_operator_cost = 0.0025             # same scale as above

So we can even change how much it costs to read a sequential page. These parameters dictate the costs that PostgreSQL assumes for the various methods of running the same query.
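These are plain configuration parameters, so you can also change them per session and watch the planner's choices shift. A sketch (the effect depends on your data):

```sql
-- pretend random I/O is as cheap as sequential reads
-- (reasonable when the data fits in RAM, or lives on SSD):
set random_page_cost = 1.0;
explain select * from t where some_column = 123;

-- restore the session default:
reset random_page_cost;
```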

For example, let's make a simple 1000 row table, with some texts, and an index:

create table test (id serial primary key, some_text text);
CREATE TABLE
 
insert into test (some_text) select 'whatever' from generate_series(1,1000);
INSERT 0 1000

Now we can see that running explain with a condition on id will show:

explain select * from test where id = 50;
                              QUERY PLAN                               
-----------------------------------------------------------------------
 Index Scan using test_pkey on test  (cost=0.28..8.29 rows=1 width=36)
   Index Cond: (id = 50)
(2 rows)

And what if we told Pg that under no circumstances can it use an index scan?
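One way to do that is a session-level planner toggle (technically it doesn't forbid index scans, it just makes them look prohibitively expensive to the planner):

```sql
set enable_indexscan = false;
```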

explain select * from test where id = 50;
                               QUERY PLAN                               
------------------------------------------------------------------------
 Bitmap Heap Scan on test  (cost=4.28..8.30 rows=1 width=13)
   Recheck Cond: (id = 50)
   ->  Bitmap Index Scan on test_pkey  (cost=0.00..4.28 rows=1 width=0)
         Index Cond: (id = 50)
(4 rows)

and let's disable this one too:
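Disabling bitmap scans works the same way, with another session-level planner toggle:

```sql
set enable_bitmapscan = false;
```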

explain select * from test where id = 50;
                      QUERY PLAN                      
------------------------------------------------------
 Seq Scan on test  (cost=0.00..18.50 rows=1 width=13)
   Filter: (id = 50)
(2 rows)

OK, let's show them close to each other:

Index Scan using test_pkey on test  (cost=0.28..8.29 rows=1 width=36)
Bitmap Heap Scan on test  (cost=4.28..8.30 rows=1 width=13)
Seq Scan on test  (cost=0.00..18.50 rows=1 width=13)

By default, Pg used the Index Scan. Why? Simple: it's the cheapest in this case – a total cost of 8.29, while the bitmap heap scan (whatever that might be) would cost 8.30, and the seq scan – 18.5.
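As a sketch of where the seq scan's 18.50 comes from (assuming the default cost parameters listed earlier, and that the table occupies 6 pages – the actual number on your system is in pg_class.relpages):

```sql
-- relpages * seq_page_cost           (reading 6 pages sequentially)
-- + rows * cpu_tuple_cost            (returning 1000 tuples)
-- + rows * cpu_operator_cost         (evaluating "id = 50" 1000 times)
select 6 * 1.0 + 1000 * 0.01 + 1000 * 0.0025;
-- = 18.5
```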

OK. But cost shows two numbers: number..number. What is this about, and why am I talking only about the second number? If we took the first number into consideration, the seq scan would be the winner, as it has 0 (zero) there, while the index scan has 0.28, and the bitmap heap scan – 4.28.

So, it's a range (number .. number) because it shows the cost of starting the operation, and the cost of getting all rows (by "all" I mean all rows returned by this operation, not all in the table).

What is the startup cost? Well, for a seq scan there is none – you just read a page and return rows. That's all. But, for example, to sort a dataset you have to read all the data and actually sort it before you can consider returning even the first of the rows. Which can be nicely seen in this explain:

                            QUERY PLAN                             
-------------------------------------------------------------------
 Sort  (cost=22.88..23.61 rows=292 width=202)
   Sort Key: relfilenode
   ->  Seq Scan on pg_class  (cost=0.00..10.92 rows=292 width=202)
(3 rows)

Please note that the startup cost for Sort is 22.88, while the total cost is just 23.61. So returning rows from Sort is trivial (in terms of cost), but sorting them – not so much.

The next piece of information in explain is "rows". This is an estimate of how many rows PostgreSQL thinks this operation is capable of returning (it might return fewer, for example in case of a LIMIT). This is also important for some operations – joins, for example. Joining two tables that have 20 rows each can be done in many ways, and it doesn't really matter how, but when you join a 1 million row table with a 1 billion row table, the way you do the join is very important (I'm not talking about "inner join/left join/…" but rather about "hash join", "nested loop", "merge join" – if you don't understand, don't worry, I will write about it later).

This number can of course be misestimated, for many reasons. Sometimes it doesn't matter, and sometimes it does. But we'll talk about misestimates later.

The final bit of information is width. This is PostgreSQL's idea of how many bytes, on average, there are in a single row returned from the given operation. For example:

explain select * from pg_class;
                         QUERY PLAN                          
-------------------------------------------------------------
 Seq Scan on pg_class  (cost=0.00..10.92 rows=292 width=202)
(1 row)
 
explain select relname, relkind from pg_class;
                         QUERY PLAN                         
------------------------------------------------------------
 Seq Scan on pg_class  (cost=0.00..10.92 rows=292 width=65)
(1 row)

As you can see, limiting the number of fields modified the width and, in turn, the total amount of data that needs to be passed through the execution of the query.

Next is the single most important bit of information: explains are trees. An upper node needs data from the nodes below it.

Let's go back to the plan from the beginning of this post.

There are 5 operations there: sort, hash join, seq scan, hash, and seq scan. PostgreSQL executes the top one – the sort – which in turn executes the operation directly below it (the hash join) and gets data from it. The hash join, to return data to the sort, has to run the seq scan (on pg_proc) and the hash (#4). And the final hash, to be able to return data, has to run the seq scan on pg_namespace.

It is critical to understand that some operations can return data immediately and, what's even more important, gradually (for example, Seq Scan), while some others cannot. For example, here we see that the Hash (#4) has a startup cost equal to the total ("all rows") cost of its "suboperation", the seq scan. This means that for the hashing operation to start (well, to be able to return even a single row), it has to read in all the rows from its suboperation(s).

The part about returning rows gradually becomes very important when you start writing functions. Let's consider such a function:

CREATE OR REPLACE FUNCTION public.test()
 RETURNS SETOF integer
 LANGUAGE plpgsql
AS $function$
declare
    i int4;
begin
    for i in 1..3 loop
        return next i;
        perform pg_sleep(1);
    end loop;
    return;
end;
$function$;

If you don't understand it, don't worry. The function returns 3 rows, each containing a single integer – 1, 2 and 3. The important bit, though, is that it sleeps for 1 second after returning each row.

This means that if I do:

select * from test();

I can expect to wait 3 seconds for results.

But how long will it take to return from this:

select * from test() limit 1;

Let's see:

\timing
Timing is on.
 
select * from test() limit 1;
 test 
------
    1
(1 row)
 
Time: 3005.334 ms

The same 3 seconds. Why? Because PL/pgSQL (and most, if not all, other PL/* languages) cannot return partial results. It looks like it can – with "return next" – but all those rows are stored in a buffer and returned together when the function execution ends.

On the other hand, "normal" operations usually can return partial data. This can be seen with something trivial like a seq scan on a non-trivial table:

create table t as
    select i as id,
        repeat('depesz', 100)::text as payload
    from generate_series(1,1000000) i;

With this table, we can see:

explain analyze select * from t;
                                                   QUERY PLAN                                                   
----------------------------------------------------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..185834.82 rows=10250082 width=36) (actual time=0.015..232.380 rows=1000000 loops=1)
 Total runtime: 269.666 ms
(2 rows)
 
explain analyze select * from t limit 1;
                                                  QUERY PLAN                                                  
--------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.02 rows=1 width=36) (actual time=0.003..0.003 rows=1 loops=1)
   ->  Seq Scan on t  (cost=0.00..185834.82 rows=10250082 width=36) (actual time=0.003..0.003 rows=1 loops=1)
 Total runtime: 0.016 ms
(3 rows)

(please look only at “Total runtime: .." for now)

As can be seen, the seq scan ended very fast – as soon as it satisfied Limit's appetite for exactly 1 row.

Please also note that here even the costs (which are not the best thing for comparing queries) show that the top nodes (the seq scan in the first query, and the limit in the second) have very different values for returning all rows: 185834.82 vs. 0.02.

So, the first 4 numbers for any given operation (two for cost, plus rows and width) are all estimates. They might be correct, but they might just as well not be.

The other 4 numbers, which you get when you run "EXPLAIN ANALYZE query" or "EXPLAIN ( ANALYZE on ) query", show the reality.

Time is again a range, but this time it is real: how much time PostgreSQL actually spent working on the given operation (on average, because it could have run the same operation multiple times). And just as with cost, time is a range – startup time, and time to return all data. Let's check this plan:

$ explain analyze select * from t limit 100;
                                                  QUERY PLAN                                                  
--------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..9.33 rows=100 width=608) (actual time=0.008..0.152 rows=100 loops=1)
   ->  Seq Scan on t  (cost=0.00..93333.86 rows=999986 width=608) (actual time=0.007..0.133 rows=100 loops=1)
 Total runtime: 0.181 ms
(3 rows)

As you can see, the Limit has a startup time of 0.008 (milliseconds – that's the unit here). This is because the Seq Scan (which Limit called to get data) took 0.007 ms to return the first row, and then there was 0.001 ms of processing within the limit itself.

Afterwards (after returning the 1st row), the limit kept getting data from the Seq Scan until it got 100 rows. Then it terminated the Seq Scan (which happened 0.133 ms after the start of the query), and finished after another 0.019 ms.

The actual rows value, just as the name suggests, shows how many rows (on average) this operation returned. And loops shows how many times this operation was run in total.

In what case would an operation be called more than once? For example, in some cases of joins, or subqueries.

Consider a plan in which an inner Seq Scan shows loops of 2, with actual rows of 1 and an actual time of 0.160 ms. This means that this Seq Scan was run twice, returning on average 1 row, and taking, on average, 0.160 ms to finish each time. So the total time spent in this particular operation is 2 * 0.160 ms = 0.32 ms (that's what the exclusive/inclusive columns on explain.depesz.com show).
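A simple way to see loops greater than 1 is a correlated subquery, which gets re-run for every row of the outer scan. A sketch (rowcounts and times will differ on your system):

```sql
explain analyze
select relname,
       (select count(*) from pg_am a where a.oid = c.relam)
from pg_class c;
-- the inner scan on pg_am will show loops equal to the number of
-- rows in pg_class: it was executed once per row of the outer scan
```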

Very often the poor performance of a query comes from the fact that it had to loop many times over something. For example, an operation whose actual time is just 0.003 ms, but which was run over 26000 times, ends up costing almost 79 ms in total.

(Of course that doesn't mean it's Pg's fault that it chose such a plan – maybe there simply weren't other options, or the other options were estimated as even more expensive.)

I think that wraps up the theoretical information required to read explains. You will probably still not understand what the operations or the other information mean, but at the very least you will know what the numbers mean, and what the difference is between explain (shows costs in abstract units, based on random-sample statistics) and explain analyze (shows real-life times, rowcounts and execution counts, in units that can be compared across different queries).

As always, I'm afraid that I skipped a lot of things that might be important but just escaped me, or (even worse) that I assumed were "obvious". If you find anything missing, please let me know, and I'll fill it in ASAP.

But before that, let me just say that I plan to extend this blogpost with 2-3 more posts that will cover:

  • what the various operations are, how they work, and what to expect when you see them in explain output
  • what statistics are, how Pg gets them, how to view them, and how to get the most out of them