On 7th of April 2018, Teodor Sigaev committed patch:

Indexes with INCLUDE columns and their support in B-tree
 
This patch introduces INCLUDE clause to index definition.  This clause
specifies a list of columns which will be included as a non-key part in
the index.  The INCLUDE columns exist solely to allow more queries to
benefit from index-only scans.  Also, such columns don't need to have
appropriate operator classes.  Expressions are not supported as INCLUDE
columns since they cannot be used in index-only scans.
 
Index access methods supporting INCLUDE are indicated by amcaninclude flag
in IndexAmRoutine.  For now, only B-tree indexes support INCLUDE clause.
 
In B-tree indexes INCLUDE columns are truncated from pivot index tuples
(tuples located in non-leaf pages and high keys).  Therefore, B-tree indexes
now might have variable number of attributes.  This patch also provides
generic facility to support that: pivot tuples contain number of their
attributes in t_tid.ip_posid.  Free 13th bit of t_info is used for indicating
that.  This facility will simplify further support of index suffix truncation.
The changes of above are backward-compatible, pg_upgrade doesn't need special
handling of B-tree indexes for that.
 
Bump catalog version
 
Author: Anastasia Lubennikova with contribition by Alexander Korotkov and me
Reviewed by: Peter Geoghegan, Tomas Vondra, Antonin Houska, Jeff Janes,
             David Rowley, Alexander Korotkov
Discussion: https://www.postgresql.org/message-id/flat/.4010101@postgrespro.ru

This basically means we're getting full covering indexes.

The idea is that an Index Only Scan can return only values that are part of the index.

So, if I'd do:

=$ create table test (id serial primary key, some_rand int4, larger text);
CREATE TABLE
 
=$ insert into test (some_rand, larger) select random() * 500000, substr(md5(i::text), 1, 10) from generate_series(1,10000000) i;
INSERT 0 10000000
 
=$ create index small_idx on test (some_rand, id);
CREATE INDEX
 
=$ vacuum freeze analyze test;
VACUUM

Then, fetching only columns that are in the index uses an Index Only Scan:

=$ explain select some_rand, id from test where some_rand > 30000 order by some_rand, id limit 20;
                                         QUERY PLAN                                         
--------------------------------------------------------------------------------------------
 Limit  (cost=0.43..1.00 rows=20 width=8)
   ->  Index Only Scan using small_idx on test  (cost=0.43..267048.14 rows=9380326 width=8)
         Index Cond: (some_rand > 30000)
(3 rows)

But if I add the larger column to the select list, the planner can't use the index alone, so it switches to an Index Scan:

=$ explain select some_rand, id, larger from test where some_rand > 30000 order by some_rand, id limit 20;
                                       QUERY PLAN                                       
----------------------------------------------------------------------------------------
 Limit  (cost=0.43..1.55 rows=20 width=19)
   ->  Index Scan using small_idx on test  (cost=0.43..521828.01 rows=9380326 width=19)
         Index Cond: (some_rand > 30000)
(3 rows)

Of course, I could add the larger column to the index:

=$ create index large_idx on test (some_rand, id, larger);
CREATE INDEX

and now the query will use an Index Only Scan:

=$ explain select some_rand, id, larger from test where some_rand > 30000 order by some_rand, id limit 20;
                                         QUERY PLAN                                          
---------------------------------------------------------------------------------------------
 Limit  (cost=0.56..1.31 rows=20 width=19)
   ->  Index Only Scan using large_idx on test  (cost=0.56..350178.38 rows=9380218 width=19)
         Index Cond: (some_rand > 30000)
(3 rows)

But it made the index larger:

=$ select relname, pg_size_pretty( pg_relation_size(oid)) from pg_class where relname in ('small_idx','large_idx');
  relname  | pg_size_pretty 
-----------+----------------
 large_idx | 387 MB
 small_idx | 214 MB
(2 rows)

Now, how about we use this new idea:

=$ drop index large_idx;
DROP INDEX
 
=$ create index magic_idx on test (some_rand, id) include (larger);
CREATE INDEX

And now:

=$ explain select some_rand, id, larger from test where some_rand > 30000 order by some_rand, id limit 20;
                                         QUERY PLAN                                          
---------------------------------------------------------------------------------------------
 Limit  (cost=0.43..1.18 rows=20 width=19)
   ->  Index Only Scan using magic_idx on test  (cost=0.43..349650.25 rows=9380218 width=19)
         Index Cond: (some_rand > 30000)
(3 rows)
 
=$ select pg_size_pretty( pg_relation_size('magic_idx'::regclass));
 pg_size_pretty 
----------------
 386 MB
(1 row)

The index is slightly smaller, and can be used for an Index Only Scan.

So, the benefit is not in index size (1 MB is ~0.3% of the index size). But it makes it possible to include data from columns that normally couldn't be put in the index at all, because their datatype lacks an appropriate operator class (for B-tree, in my example).
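For example, a point column has no default B-tree operator class, so it can't be a key column at all, but it can be INCLUDEd. A quick sketch (table and index names are made up for illustration):

=$ create table pts (id serial primary key, location point);
CREATE TABLE
 
=$ create index pts_key_idx on pts (id, location);
ERROR:  data type point has no default operator class for access method "btree"
HINT:  You must specify an operator class for the index or define a default operator class for the data type.
 
=$ create index pts_incl_idx on pts (id) include (location);
CREATE INDEX

This way a query fetching id and location can still get an Index Only Scan, even though location itself is not B-tree-indexable.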

Long story short – I, personally, don't see an immediate use case for this in the databases I work with, but I definitely welcome, and am grateful for, the new feature – as I might use it in the future. Thanks a lot 🙂

4 comments

  1. # Teodor
    Apr 26, 2018

    Hm, it looks much more interesting to combine UNIQUE/PRIMARY KEY and INCLUDE:

    create table t (id int, large text, primary key (id) include (large));

    In this case you don’t need two indexes: a primary key to ensure uniqueness, and (id, large) to enable Index Only Scans.
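    And uniqueness is enforced only on the key columns, so duplicates in the INCLUDEd column are allowed. A sketch, using the table above:

    insert into t values (1, 'x');
    insert into t values (2, 'x');  -- OK: "large" is not part of the key
    insert into t values (1, 'y');  -- ERROR: duplicate key value violates unique constraint "t_pkey"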

  2. May 17, 2018

    The overall index is slightly smaller, but what matters is that hot index pages contain more tuples; this reduces the height of the B-tree when the index includes big attributes.

  3. # YIH
    May 23, 2018

    Your test case isn’t right.
    You’re not using this feature the way it’s supposed to be used. Your case can be better solved with “CLUSTER”. https://www.postgresql.org/docs/10/static/sql-cluster.html
    Then you can throw your index away and save 50% of the space and write TPS. (Your index and table have an equal number of columns.)

    This feature is, like “grouping sets, cube”, very handy for OLAP use cases: heavily denormalized table structures with a lot of columns that are not used in every query. With INCLUDE you can create a subset of a big table, effectively using an index for queries that need only a smaller subset of a synchronized table.

    Cheers

  4. # YIH
    May 23, 2018

    Before someone asks:
    Just to note what the difference is between:
    (1) create index large_idx on test (some_rand, id, larger);
    (2) create index large_idx on test (some_rand, id) include (larger);

    In the second, there is no extra “sort” comparison done on “larger”, which makes the maintenance done by Postgres lighter when figuring out where in the index to write.
