# Explaining the unexplainable – part 5

In previous posts in this series, I talked about how to read EXPLAIN output, and what each line (operation/node) means.

Now, in the final post, I will try to explain how Pg decides to use “Operation X” over “Operation Y”.

You may have heard that PostgreSQL's planner chooses operations based on statistics. What statistics?

Let's imagine the simplest possible case:

`SELECT * FROM TABLE WHERE COLUMN = some_value;`

If all rows have the same some_value, then using a (potentially existing) index on the column doesn't make sense.

On the other hand, if the column is unique (or almost unique), using the index is a really good idea.

Let's see what's happening:

```
CREATE TABLE test ( all_the_same int4, almost_unique int4 );
CREATE TABLE

INSERT INTO test ( all_the_same, almost_unique )
SELECT 123, random() * 1000000 FROM generate_series(1,100000);
INSERT 0 100000
```

So, now I have a 100,000 row table, where the “all_the_same” column always has the same value (123), and the almost_unique column is, well, almost unique:

```
SELECT COUNT(*), COUNT(DISTINCT almost_unique) FROM test;
 COUNT  | COUNT
--------+-------
 100000 | 95142
(1 ROW)
```

Now, to keep the comparison fair, I will create two simple indexes:

```
CREATE INDEX i1 ON test (all_the_same);
CREATE INDEX

CREATE INDEX i2 ON test (almost_unique);
CREATE INDEX
```

OK. The test setup is ready. What do the plans look like?

```
EXPLAIN SELECT * FROM test WHERE all_the_same = 123;
                         QUERY PLAN
------------------------------------------------------------
 Seq Scan ON test  (cost=0.00..1693.00 ROWS=100000 width=8)
   FILTER: (all_the_same = 123)
(2 ROWS)

EXPLAIN SELECT * FROM test WHERE almost_unique = 123;
                           QUERY PLAN
---------------------------------------------------------------
 INDEX Scan USING i2 ON test  (cost=0.29..8.31 ROWS=1 width=8)
   INDEX Cond: (almost_unique = 123)
(2 ROWS)
```

As you can see, Pg chose wisely. But the interesting thing is the “rows=” estimate. How does it know how many rows the query might return?

The answer lies in the ANALYZE command, or VACUUM ANALYZE.

When you ANALYZE a table, Pg reads a “random sample” of it (more on that in a moment) and computes some statistics from it. What are these statistics, where are they stored, and can we see them? Sure we can:

```
SELECT * FROM pg_statistic WHERE starelid = 'test'::regclass;
-[ RECORD 1 ]--------------------------------------------------
starelid    | 16882
staattnum   | 1
stainherit  | f
stanullfrac | 0
stawidth    | 4
stakind1    | 1
stakind2    | 3
stakind3    | 0
stakind4    | 0
stakind5    | 0
staop1      | 96
staop2      | 97
staop3      | 0
staop4      | 0
staop5      | 0
stanumbers1 | {1}
stanumbers2 | {1}
stanumbers3 | [NULL]
stanumbers4 | [NULL]
stanumbers5 | [NULL]
stavalues1  | {123}
stavalues2  | [NULL]
stavalues3  | [NULL]
stavalues4  | [NULL]
stavalues5  | [NULL]
-[ RECORD 2 ]--------------------------------------------------
starelid    | 16882
staattnum   | 2
stainherit  | f
stanullfrac | 0
stawidth    | 4
stakind1    | 1
stakind2    | 2
stakind3    | 3
stakind4    | 0
stakind5    | 0
staop1      | 96
staop2      | 97
staop3      | 97
staop4      | 0
staop5      | 0
stanumbers1 | {0.0001,0.0001,0.0001,0.0001,0.0001,0.0001,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05}
stanumbers2 | [NULL]
stanumbers3 | {-0.000468686}
stanumbers4 | [NULL]
stanumbers5 | [NULL]
stavalues1  | {21606,27889,120502,289914,417495,951355,283,1812,3774,6028,6229,10372,12234,13291,18309,18443,21758,22565,26634,28392,28413,31208,32890,36563,39277,40574,44527,49954,53344,53863,56492,56715,60856,62993,64294,65275,65355,68353,71194,74718,77205,82096,82783,84764,85301,87498,90990,94043,97304,98779,101181,103700,103889,106288,108562,110796,113154,117850,121578,122643,123874,126299,129236,129332,129512,134430,134980,136987,137368,138175,139001,141519,142934,143432,143707,144501,148633,152481,154327,157067,157799,162437,164072,164337,165942,167611,170319,171047,177383,184134,188702,189005,191786,192718,196330,197851,199457,202652,202689,205983}
stavalues2  | {2,10560,20266,31061,40804,50080,59234,69240,79094,89371,99470,109557,119578,130454,140809,152052,162656,173855,183914,194263,204593,214876,224596,233758,243246,253552,264145,273855,283780,294475,303972,314544,324929,335008,346169,356505,367395,376639,387302,397004,407093,416615,426646,436146,445701,455588,466463,475910,485228,495434,505425,515853,525374,534824,545387,554794,563591,573721,584021,593368,602935,613238,623317,633947,643431,653397,664177,673976,684042,694791,703922,714113,724602,735848,745596,754477,764171,772535,781924,791652,801703,812487,822196,831618,841665,850722,861532,872067,881570,891654,901595,910975,921698,931785,940716,950623,960551,970261,979855,989540,999993}
stavalues3  | [NULL]
stavalues4  | [NULL]
stavalues5  | [NULL]
```

This table (pg_statistic) is, of course, described in the docs. But it is pretty cryptic. You can find a very precise explanation in the sources, but that's not (usually) the best solution.

Luckily, there is a view over this table that presents the same data in a more readable way:

```
SELECT * FROM pg_stats WHERE tablename = 'test';
-[ RECORD 1 ]----------+------------------------------------------
schemaname             | public
tablename              | test
attname                | all_the_same
inherited              | f
null_frac              | 0
avg_width              | 4
n_distinct             | 1
most_common_vals       | {123}
most_common_freqs      | {1}
histogram_bounds       | [NULL]
correlation            | 1
most_common_elems      | [NULL]
most_common_elem_freqs | [NULL]
elem_count_histogram   | [NULL]
-[ RECORD 2 ]----------+------------------------------------------
schemaname             | public
tablename              | test
attname                | almost_unique
inherited              | f
null_frac              | 0
avg_width              | 4
n_distinct             | -0.92146
most_common_vals       | {21606,27889,120502,289914,417495,951355,283,1812,3774,6028,6229,10372,12234,13291,18309,18443,21758,22565,26634,28392,28413,31208,32890,36563,39277,40574,44527,49954,53344,53863,56492,56715,60856,62993,64294,65275,65355,68353,71194,74718,77205,82096,82783,84764,85301,87498,90990,94043,97304,98779,101181,103700,103889,106288,108562,110796,113154,117850,121578,122643,123874,126299,129236,129332,129512,134430,134980,136987,137368,138175,139001,141519,142934,143432,143707,144501,148633,152481,154327,157067,157799,162437,164072,164337,165942,167611,170319,171047,177383,184134,188702,189005,191786,192718,196330,197851,199457,202652,202689,205983}
most_common_freqs      | {0.0001,0.0001,0.0001,0.0001,0.0001,0.0001,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05}
histogram_bounds       | {2,10560,20266,31061,40804,50080,59234,69240,79094,89371,99470,109557,119578,130454,140809,152052,162656,173855,183914,194263,204593,214876,224596,233758,243246,253552,264145,273855,283780,294475,303972,314544,324929,335008,346169,356505,367395,376639,387302,397004,407093,416615,426646,436146,445701,455588,466463,475910,485228,495434,505425,515853,525374,534824,545387,554794,563591,573721,584021,593368,602935,613238,623317,633947,643431,653397,664177,673976,684042,694791,703922,714113,724602,735848,745596,754477,764171,772535,781924,791652,801703,812487,822196,831618,841665,850722,861532,872067,881570,891654,901595,910975,921698,931785,940716,950623,960551,970261,979855,989540,999993}
correlation            | -0.000468686
most_common_elems      | [NULL]
most_common_elem_freqs | [NULL]
elem_count_histogram   | [NULL]
```

OK. So what can we learn from this?

The schemaname, tablename and attname columns seem obvious. inherited simply says whether the statistics for this table include values from any tables that inherit this column.

So, if I had:

`CREATE TABLE z () inherits (test);`

and then put some data into table z, the statistics for table test would have “inherited = true”.

The rest of the columns mean:

• null_frac – the fraction of rows that contain NULL in the given column. Since it's a fraction, its value goes from 0 to 1.
• avg_width – average width of the data in this column. For constant-width types (like int4 here) it is not very interesting, but for any variable-width datatype (like text/varchar/numeric) it is potentially interesting.
• n_distinct – a very interesting value. If it is positive (1+), it is simply the estimated number (not fraction!) of distinct values – we can see this for the all_the_same column, where n_distinct is correctly 1. If it is negative, though, its meaning is different: it says what fraction of rows has a unique value. So, in the case of almost_unique, the stats suggest that 92.146% of rows have a unique value (a bit short of the 95.142% I showed earlier). The values can be off due to the “random sample” thing I mentioned earlier, and will explain more in a bit.
• most_common_vals – array of the most common values in this column
• most_common_freqs – how frequent the values from most_common_vals are. Again, these are fractions, so they can sum to at most 1 (though then we'd have only one value in most_common_vals). Here, for almost_unique, we see that Pg “thinks” the values 21606, 27889, 120502, 289914, 417495 and 951355 are the ones that happen most often – which they are not; this is, again, the “random sample” effect.
• histogram_bounds – array of values that divide (or should divide – again, “random sample”) the whole recordset into groups with the same number of rows. That is, the number of rows with almost_unique between 2 and 10560 is (more or less) the same as the number of rows with almost_unique between 931785 and 940716.
• correlation – an interesting statistic: it shows the correlation between the physical ordering of rows on disk and the values. It goes from -1 to 1, and generally the closer it is to -1 or 1, the stronger the correlation. For example, after doing “CLUSTER test USING i2” – that is, reordering the table in almost_unique order – I got a correlation of -0.919358, much better than the -0.000468686 shown above.
• most_common_elems, most_common_elem_freqs and elem_count_histogram are like most_common_vals, most_common_freqs and histogram_bounds, but for non-scalar datatypes (think: arrays, tsvectors and the like).
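To see how these statistics turn into row estimates, here is a deliberately simplified Python sketch of the logic the planner uses for equality conditions (the real implementation lives in PostgreSQL's selfuncs.c; the function and variable names here are my own):

```python
# Simplified sketch of how the planner estimates rows for "column = value"
# using the pg_stats data shown above. Not the real PostgreSQL code.

def estimate_rows(value, total_rows, null_frac, n_distinct,
                  most_common_vals, most_common_freqs):
    # If the value is one of the most common values, use its known frequency.
    if value in most_common_vals:
        freq = most_common_freqs[most_common_vals.index(value)]
        return round(total_rows * freq)
    # Negative n_distinct means "fraction of total rows", so convert it.
    if n_distinct < 0:
        n_distinct = -n_distinct * total_rows
    # Spread the remaining probability mass evenly across the distinct
    # values not covered by the MCV list.
    remaining_freq = 1.0 - null_frac - sum(most_common_freqs)
    remaining_distinct = max(n_distinct - len(most_common_vals), 1)
    return max(round(total_rows * remaining_freq / remaining_distinct), 1)

# all_the_same: 123 is in most_common_vals with frequency 1.0
print(estimate_rows(123, 100000, 0, 1, [123], [1.0]))  # 100000

# almost_unique: 123 is not an MCV, n_distinct is -0.92146
print(estimate_rows(123, 100000, 0, -0.92146,
                    [21606, 27889], [0.0001, 0.0001]))  # 1
```

Fed the statistics from this post, the sketch reproduces the ROWS=100000 and ROWS=1 estimates from the earlier EXPLAIN outputs.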

Based on this data, PostgreSQL can estimate how many rows will be returned by any given part of a query, and based on that it can decide whether it's better to use a seq scan, an index scan, or a bitmap index scan. And when joining, which method should be faster: hash join, merge join, or perhaps nested loop.

If you looked at the data above, you might have asked yourself: the output is pretty wide – there are many values in the most_common_vals/most_common_freqs/histogram_bounds arrays. Why so many?

The reason is simple: configuration. In postgresql.conf you will find the default_statistics_target variable, which tells Pg how many values to keep in these arrays. In my case it's the default, 100. But you can easily change it – either in postgresql.conf, or even per column, with:

`ALTER TABLE test ALTER COLUMN almost_unique SET statistics 5;`

After such an ALTER (and a fresh ANALYZE), the data in pg_stats is much shorter:

```
SELECT * FROM pg_stats WHERE tablename = 'test' AND NOT inherited AND attname = 'almost_unique';
-[ RECORD 1 ]----------+---------------------------------------------------------
schemaname             | public
tablename              | test
attname                | almost_unique
inherited              | f
null_frac              | 0
avg_width              | 4
n_distinct             | -0.92112
most_common_vals       | {114832,3185,3774,6642,11984}
most_common_freqs      | {0.0001,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05}
histogram_bounds       | {2,199470,401018,596414,798994,999964}
correlation            | 1
most_common_elems      | [NULL]
most_common_elem_freqs | [NULL]
elem_count_histogram   | [NULL]
```

Changing the statistics target also has one more effect.

Let me show you. First I'll revert the statistics-target change I made with ALTER TABLE:

`ALTER TABLE test ALTER COLUMN almost_unique SET statistics -1;`

And now:

```
$ analyze verbose test;
INFO:  analyzing "public.test"
INFO:  "test": scanned 443 OF 443 pages, containing 100000 live ROWS AND 0 dead ROWS; 30000 ROWS IN sample, 100000 estimated total ROWS
ANALYZE

$ ALTER TABLE test ALTER COLUMN almost_unique SET statistics 10;
ALTER TABLE

$ ALTER TABLE test ALTER COLUMN all_the_same SET statistics 10;
ALTER TABLE

$ analyze verbose test;
INFO:  analyzing "public.test"
INFO:  "test": scanned 443 OF 443 pages, containing 100000 live ROWS AND 0 dead ROWS; 3000 ROWS IN sample, 100000 estimated total ROWS
ANALYZE
```

Please note that the second ANALYZE sampled only 3000 rows – not 30000 like the first one.

This is the “random sample".

Analyzing all rows in a table would be prohibitively expensive in any medium or large table.

So, Pg does something rather smart.

First, it reads a random subset of the table's pages (reminder: each page is 8kB of data). How many? 300 * statistics_target.

Which means that in my case, with default_statistics_target = 100, it would read 30,000 pages. (My table doesn't have that many, so it read all of them instead.)

From these pages, ANALYZE first gets information about live and dead rows. Then it takes a random sample of rows – again 300 * statistics_target of them – and calculates the column statistics based on that sample.

In my case the table had 100,000 rows, but with default_statistics_target = 100, only a third of them were analyzed. With a statistics target of 10, the number of rows analyzed was even lower – just 3000.
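The sample-size rule above is simple enough to write down. A small Python helper (my own naming, not a PostgreSQL API) for the 300 * statistics_target rule:

```python
# ANALYZE aims to sample 300 * statistics_target rows
# (the rule of thumb described in the text above).

def analyze_sample_rows(statistics_target):
    return 300 * statistics_target

print(analyze_sample_rows(100))  # 30000 - matches the first ANALYZE output
print(analyze_sample_rows(10))   # 3000  - matches the second one
```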

You might say: OK, but then these statistics are not accurate. It could be that some super-common value just doesn't happen to be among the sampled rows. Sure. You're right. It's possible. Though not very likely. You are sampling a random part of the data, and the chance that your sample misses every single row carrying a value that is common everywhere else is small.
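How small? Assuming (as a rough approximation) that each sampled row is an independent draw, the probability of never seeing a value that occurs in a fraction p of the table is (1 - p)^n for a sample of n rows:

```python
# Probability that an n-row random sample completely misses a value
# occurring in fraction p of all rows (approximating independent draws).

def miss_probability(p, sample_rows):
    return (1 - p) ** sample_rows

# A value present in 1% of rows is essentially guaranteed to appear
# in a 30,000-row sample:
print(miss_probability(0.01, 30000))    # ~1e-131

# But a value present in only 0.01% of rows is missed ~5% of the time:
print(miss_probability(0.0001, 30000))  # ~0.0498
```

So truly common values are safe, while rare-ish values can drift in and out of the statistics between ANALYZE runs.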

This also means that sometimes running ANALYZE can “break” your queries. For example, the sample will land on different pages and some values will get skipped (or, on the contrary, most_common_vals will contain things that aren't really all that common – Pg just happened to pick the right pages/rows to see them). Based on such stats, Pg will generate suboptimal plans.

If such a case hits you, the solution is rather simple: bump the statistics target. This makes ANALYZE work harder and scan more rows, so the chance of it happening again gets even smaller.

There is a drawback to setting large targets, though. ANALYZE has to do more work, of course, but that's a maintenance task, so we don't (usually) care much. The real problem is that more data in pg_statistic means more data the planner has to take into consideration. So, while it might look tempting to set default_statistics_target to its maximum of 10,000, in reality I haven't seen a database with it set that high.

The current default of 100 has been there since 8.4. Before that it was 10, and suggestions to increase it were pretty common on IRC. Now, with the default of 100, you're more or less set.

One final thing I have to talk about, though I really don't want to, is the set of settings that make the Pg planner use different operations.

First, why I don't want to talk about it: I know for a fact that these settings can easily be abused. So please remember: they are for debugging problems, not for solving them. An application that used them in normal operation would be suspect, if not outright broken. And yes, I know that sometimes you have to. But that “sometimes” is very rare.

Now, with this behind me, let's see the options.

In postgresql.conf you have several settings like these:

```
enable_bitmapscan = ON
enable_hashagg = ON
enable_hashjoin = ON
enable_indexscan = ON
enable_indexonlyscan = ON
enable_material = ON
enable_mergejoin = ON
enable_nestloop = ON
enable_seqscan = ON
enable_sort = ON
enable_tidscan = ON
```

These settings are for disabling the given operations.

For example, setting enable_seqscan to false (which can be done with the SET command in an SQL session – you don't have to modify postgresql.conf) will cause the planner to use anything else it can, just to avoid a seq scan.

Since it's sometimes impossible to avoid a seq scan (there may be no indexes on the table, for example), these settings don't actually disable the operations – they just attach a huge cost to using them.
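The mechanism can be sketched as a toy cost comparison in Python (DISABLE_COST mirrors the constant PostgreSQL adds internally; the plan names and cost numbers are taken from the examples in this post):

```python
# Disabled plan types aren't removed - they just get a huge constant
# added to their cost (PostgreSQL uses 1.0e10 internally), so they can
# still win when nothing else is possible.

DISABLE_COST = 10_000_000_000.0

def effective_cost(base_cost, enabled):
    return base_cost if enabled else base_cost + DISABLE_COST

# With enable_seqscan = false, the index scan wins despite being
# nominally more expensive (3300.29 vs 1693):
plans = {
    "seq scan": effective_cost(1693.00, enabled=False),
    "index scan": effective_cost(3300.29, enabled=True),
}
print(min(plans, key=plans.get))  # index scan
```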

For example: with our test table, we know that searching with “all_the_same = 123” will use a seq scan, as it's cheap:

```
EXPLAIN SELECT * FROM test WHERE all_the_same = 123;
                         QUERY PLAN
------------------------------------------------------------
 Seq Scan ON test  (cost=0.00..1693.00 ROWS=100000 width=8)
   FILTER: (all_the_same = 123)
(2 ROWS)
```

But if I disable seq scans:

```
SET enable_seqscan = FALSE;
SET
EXPLAIN SELECT * FROM test WHERE all_the_same = 123;
                               QUERY PLAN
-----------------------------------------------------------------------
 INDEX Scan USING i1 ON test  (cost=0.29..3300.29 ROWS=100000 width=8)
   INDEX Cond: (all_the_same = 123)
(2 ROWS)
```

We see that the estimated cost of getting the same data with an index scan is roughly twice as high (3300.29 vs. 1693).

If I drop the i1 index:

```
DROP INDEX i1;
DROP INDEX
SET enable_seqscan = FALSE;
SET
EXPLAIN SELECT * FROM test WHERE all_the_same = 123;
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Seq Scan ON test  (cost=10000000000.00..10000001693.00 ROWS=100000 width=8)
   FILTER: (all_the_same = 123)
(2 ROWS)
```

And now we see that when there is no option other than a seq scan (interestingly, it didn't choose an index scan on i2 – after all, that index has pointers to all rows in the table), the cost skyrockets to 10,000,000,000 – which is exactly what enable_* = false does.

I think that's about it. If you read the whole series you should have enough knowledge to understand what's going on, and, more importantly, why.

## 9 thoughts on “Explaining the unexplainable – part 5”

1. Nate Teller says:

great articles … enjoy following your knowledge.

how do you address ‘many_of_the_same’, for the same size table … when there only 10 or so different values.

we do max(many_of_the_same) selects and sometimes max(many_of_the_same) < arbitrary_value.

thank you.

2. @Nate:
Sorry, not sure I understand your question. Can you show me (via a pastesite perhaps to avoid formatting issues) the problem/question?

3. Nate Teller says:

create table test ( many_the_same int4, almost_unique int4 );
CREATE TABLE
insert into test ( many_the_same, almost_unique )
select random() * 10,
random() * 1000000
from generate_series(1,10000000);
INSERT 0 10000000
create index test_i1 on test (many_the_same);
CREATE INDEX
create index test_i2 on test (many_the_same, almost_unique);
CREATE INDEX
vacuum analyze verbose test;
psql:max_test.sql:13: INFO: vacuuming “ants.test”
psql:max_test.sql:13: INFO: index “test_i1” now contains 10000000 row versions in 27422 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.10s/0.03u sec elapsed 0.14 sec.
psql:max_test.sql:13: INFO: index “test_i2” now contains 10000000 row versions in 27422 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.10s/0.03u sec elapsed 0.14 sec.
psql:max_test.sql:13: INFO: “test”: found 0 removable, 10000000 nonremovable row versions in 44248 out of 44248 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.25s/0.64u sec elapsed 0.91 sec.
psql:max_test.sql:13: INFO: analyzing “ants.test”
psql:max_test.sql:13: INFO: “test”: scanned 44248 of 44248 pages, containing 10000000 live rows and 0 dead rows; 300000 rows in sample, 10000000 estimated total rows
VACUUM
explain select max(many_the_same) from test;
QUERY PLAN
——————————————————————————————————-
Result (cost=0.04..0.05 rows=1 width=0)
InitPlan 1 (returns \$0)
-> Limit (cost=0.00..0.04 rows=1 width=4)
-> Index Scan Backward using test_i1 on test (cost=0.00..435477.47 rows=10000000 width=4)
Index Cond: (many_the_same IS NOT NULL)
(5 rows)

explain select max(many_the_same) from test where many_the_same Limit (cost=0.00..0.07 rows=1 width=4)
-> Index Scan Backward using test_i1 on test (cost=0.00..303585.45 rows=4496733 width=4)
Index Cond: ((many_the_same IS NOT NULL) AND (many_the_same Limit (cost=0.00..0.04 rows=1 width=4)
-> Index Scan Backward using test_i1 on test (cost=0.00..435477.47 rows=10000000 width=4)
Index Cond: (many_the_same IS NOT NULL)
(5 rows)

explain select max(many_the_same) from test where many_the_same Limit (cost=0.00..0.07 rows=1 width=4)
-> Index Scan Backward using test_i1 on test (cost=0.00..303585.45 rows=4496733 width=4)
Index Cond: ((many_the_same IS NOT NULL) AND (many_the_same < 5))
(5 rows)

My apologies, this portion of a more complex query is performing well.

4. well, no need to apologize, but the queries (i think) got mangled:

“explain select max(many_the_same) from test where many_the_same Limit (” looks like either bad copy/paste or something got broken somewhere else.

You should be able to use code html tags: < CODE >put your code in here < / CODE > to get better formatting. It should look like this:

`whatever`
5. boris says:

In the 1st blogpost you wrote about settings:
seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 4.0 # same scale as above
cpu_tuple_cost = 0.01 # same scale as above
cpu_index_tuple_cost = 0.005 # same scale as above
cpu_operator_cost = 0.0025 # same scale as above

Here you introduced new portion
enable_bitmapscan = on
enable_hashagg = on
enable_hashjoin = on
enable_indexscan = on
enable_indexonlyscan = on
enable_material = on
enable_mergejoin = on
enable_nestloop = on
enable_seqscan = on
enable_sort = on
enable_tidscan = on

How do these settings relate to each other? They seem to have rather similar meanings, or not?

6. They have very different meanings. Not sure how you came to the conclusion that they are similar. The *_cost values are costs attached to specific low-level operations. The enable_* settings are basically tools to skyrocket the cost of certain high-level operations, so that Pg will not use them.

7. Éric says:

Hi. Just want to thank and congratulate you for this series of articles. They are just invaluable!

8. Thanks Eric, glad you liked it.

9. Santosh says:

Just wanted to say Thank you! for writing such a detailed article.
I went through complete series and I feel now I’m better equipped to write optimal queries.
