Concurrent REINDEX of all indexes in database

Recent releases of new PostgreSQL versions come with a suggestion to reindex all indexes. But this will take a while, and since we don't actually have a "REINDEX CONCURRENTLY" command – it's a bit tricky.

So, since I will be doing this on several databases, I decided to write a script that will handle the work for me.

The script is written to be run using psql, and it generates a file (/tmp/upgrade_reindex.sql) that will do the actual reindexing.

It is not perfect – there are two known issues with it:

  • It doesn't handle indexes used by exclusion constraints. The reason is very simple – you can do "ALTER TABLE ADD PRIMARY KEY USING INDEX" or "… ADD UNIQUE USING INDEX", but you can't do so with exclusion constraints
  • While index creation is not locking (thanks to CONCURRENTLY), there are ALTER commands that can lock – they take just a short time, but acquiring their locks might take a while in an environment with lots of activity

The first thing I can't do much about. The second – well, it's possible to handle it using the trick that pg_reorg uses (short statement_timeout, lock, long statement_timeout, alter – all in a loop that retries on statement_timeout errors), but I haven't gotten around to it yet.
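To illustrate, the pg_reorg-style trick would look more or less like this. This is just a sketch – the table and constraint names are made up, and the retry loop itself would live in whatever drives the script:

```sql
-- Try to grab the lock quickly; if the table is busy, fail fast instead of
-- blocking everybody else, then retry the whole block.
SET statement_timeout = '100ms';
BEGIN;
LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE;
SET statement_timeout = 0;  -- lock acquired, the rest can take as long as it needs
ALTER TABLE some_table DROP CONSTRAINT some_table_pkey;
ALTER INDEX __upgrade_reindex RENAME TO some_table_pkey;
ALTER TABLE some_table ADD PRIMARY KEY USING INDEX some_table_pkey;
COMMIT;
-- on "canceling statement due to statement timeout": ROLLBACK and retry
```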

Lastly – it doesn't touch indexes in pg_catalog or information_schema. Why? I don't like touching the catalogs. And if I had to (which it looks like I might need to), I would just do it using plain old "REINDEX" – these indexes are usually very small, so it shouldn't be a problem.
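For completeness, handling the catalogs with plain, locking REINDEX would be just this (the database name is a placeholder):

```sql
-- Locking reindex is fine for catalogs, since they are small:
REINDEX TABLE pg_catalog.pg_class;
-- or all system catalogs of the given database at once:
REINDEX SYSTEM my_database;
```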

My script:

SET search_path = pg_catalog;
\pset format unaligned
\pset tuples_only ON
\set output_file /tmp/upgrade_reindex.sql
\o :output_file
WITH base_info AS (
    SELECT
        pg_get_indexdef(i.indexrelid) AS index_create,
        format( 'DROP INDEX %I.%I', n.nspname, c.relname ) AS index_drop,
        pg_get_constraintdef(con.oid) AS constraint_create,
        CASE
            WHEN con.conname IS NOT NULL THEN
                format( 'ALTER TABLE %I.%I DROP CONSTRAINT %I', tn.nspname, t.relname, con.conname )
            ELSE NULL::TEXT
        END AS constraint_drop,
        c.relname AS index_name,
        n.nspname AS index_namespace,
        t.relname AS table_name,
        con.contype AS constraint_type
    FROM
        pg_index i
        JOIN pg_class c ON i.indexrelid = c.oid
        JOIN pg_namespace n ON c.relnamespace = n.oid
        LEFT OUTER JOIN pg_constraint con ON i.indexrelid = con.conindid AND con.contype IN ('p', 'u', 'x')
        LEFT OUTER JOIN pg_class t ON con.conrelid = t.oid
        LEFT OUTER JOIN pg_namespace tn ON t.relnamespace = tn.oid
    WHERE
        n.nspname !~ '^(pg_|information_schema)'
), cmds AS (
    SELECT
        index_create || COALESCE( ' [ constraint: ' || constraint_create || ']', '' ) AS cmd_comment,
        CASE WHEN constraint_type = 'x' THEN E'\\echo ' ELSE '' END AS prefix,
        regexp_replace( index_create, ' INDEX ' || quote_ident( index_name ) || ' ', ' INDEX CONCURRENTLY __upgrade_reindex ' ) AS cmd_create,
        COALESCE( constraint_drop, index_drop ) AS cmd_drop,
        format( 'ALTER INDEX %I.%I RENAME TO %I', index_namespace, '__upgrade_reindex', index_name ) AS cmd_rename,
        CASE
            WHEN constraint_type = 'p' THEN format( 'ALTER TABLE %I.%I ADD PRIMARY KEY USING INDEX %I', index_namespace, table_name, index_name )
            WHEN constraint_type = 'u' THEN format( 'ALTER TABLE %I.%I ADD UNIQUE USING INDEX %I', index_namespace, table_name, index_name )
        END AS cmd_conadd
    FROM
        base_info
)
SELECT
    CASE WHEN prefix = '' THEN '' ELSE E'\\echo This cannot be reindexed concurrently, because it is used by exclusion constraint\n' END ||
    prefix || '-- ' || cmd_comment || E'\n' ||
    prefix || cmd_create || E';\n' ||
    prefix || cmd_drop || E';\n' ||
    prefix || cmd_rename || E';\n' ||
    COALESCE( prefix || cmd_conadd || E';\n', '' )
FROM
    cmds;
\o
\echo 'Just type: \\i' :output_file
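To show what the generated file contains – for a hypothetical table public.users with a primary key users_pkey on column id, /tmp/upgrade_reindex.sql would hold something like:

```sql
-- CREATE UNIQUE INDEX users_pkey ON public.users USING btree (id) [ constraint: PRIMARY KEY (id)]
CREATE UNIQUE INDEX CONCURRENTLY __upgrade_reindex ON public.users USING btree (id);
ALTER TABLE public.users DROP CONSTRAINT users_pkey;
ALTER INDEX public.__upgrade_reindex RENAME TO users_pkey;
ALTER TABLE public.users ADD PRIMARY KEY USING INDEX users_pkey;
```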

12 thoughts on “Concurrent REINDEX of all indexes in database”

  1. Thanks for the perfectly timed topic depesz!

    I was wondering if we simply cannot use pg_reorg to achieve the same. That will also remove all the bloat from tables as an add on bonus.

  2. Sure we can use pg_reorg. The only “problem” is that it will also rewrite the table – which is not required, and will take longer. On the other hand – create index concurrently is very slow, so it might be that pg_reorg, including rewrite of table, will be faster than concurrent index creation.

  3. Ah – one more thing. Testing showed that there are also problems with constraints that are used by foreign keys (i.e. a foreign key has to "point to" a unique index or primary key, and you can't drop those without causing problems).

    I don't plan on extending this query to handle such cases, for a very simple reason – it's counterproductive, and, what's more, re-adding the fkey will take a long time.

  4. Thanks depesz.

    Actually I just did a pg_reorg on a referenced table and a referring table. All went fine.

  5. Sure – reorg is perfectly fine. What is not fine is doing: “create index”, “drop primary key”, “add primary key using index” when you have fkey pointing to this table.
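    The failure is easy to reproduce with two hypothetical tables (the names here are made up):

```sql
CREATE TABLE a (id int PRIMARY KEY);
CREATE TABLE b (a_id int REFERENCES a (id));
CREATE UNIQUE INDEX CONCURRENTLY __upgrade_reindex ON a (id);
ALTER TABLE a DROP CONSTRAINT a_pkey;
-- fails with something like:
-- ERROR:  cannot drop constraint a_pkey on table a because other objects depend on it
-- DETAIL:  constraint b_a_id_fkey on table b depends on index a_pkey
```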

  6. Just a brief comment here re pg_reorg and slony —

    Just discovered that pg_reorg 1.1.7 doesn't work well when reorganizing tables on a Slony slave node if the table being reorganized is being written to by Slony at the same time. Since the inserts, updates and deletes that Slony does on a slave table are done with session_replication_role=replica, the temp trigger that pg_reorg puts on the table while reorganizing it doesn't fire on the Slony changes.

    So you end up with Slony and the database logs thinking you've committed changes to the table – changes that then disappear when the reorganization of the table is done.

    The fix would be for pg_reorg to make its trigger an ENABLE ALWAYS trigger. Until then, if you're reorganizing a Slony-replicated table on a slave node, you should probably stop Slony before doing the pg_reorg.

    I've posted warnings to the pg_reorg and Slony mailing lists, but thought I should drop a mention here as well.

  7. For Google reference, the problem with using pg_repack on Slony slaves is solved since pg_repack 1.1.8, through a patch by Mr Yamada himself.

  8. Is it possible to simply omit constraints used by foreign keys, to allow smooth execution? (Right now the script stops on the first ERROR: cannot drop constraint a_pkey on table a because other objects depend on it.)

    I tried to replace the CREATE/DROP/ALTER sequence with REINDEX in such cases, which would eliminate the need to re-add foreign keys, but failed (I was unable to discover whether anything depends on a specific index. But Postgres knows how to retrieve this kind of information: "DETAIL: constraint b_id_fkey on table b depends on index a_pkey").

  9. @Michal:
    not really. You could possibly do it by modifying catalogs (i.e. pg_* tables), but that’s rather dangerous.

  10. @Depesz
    Thanks for response. Actually solution that works for me turned out to be really simple.
    I leave it below so maybe someone will use it
    I modified WHERE clause in base_info SELECT:
    WHERE n.nspname !~ '^(pg_|information_schema)' and (select count(*) from pg_constraint con where con.conindid=i.indexrelid and con.contype='f') = 0

    and then added modified SELECT (for real reindex) at the end:

    SELECT
        format( 'REINDEX INDEX %I.%I;', n.nspname, c.relname ) as index_reindex
    FROM
        pg_index i
        join pg_class c on i.indexrelid = c.oid
        join pg_namespace n on c.relnamespace = n.oid
        left outer join pg_constraint con on i.indexrelid = con.conindid AND con.contype in ('p', 'u', 'x')
        left outer join pg_class t on con.conrelid = t.oid
        left outer join pg_namespace tn on t.relnamespace = tn.oid
    WHERE
        n.nspname !~ '^(pg_|information_schema)' and (select count(*) from pg_constraint con where con.conindid=i.indexrelid and con.contype='f') != 0;

    Maybe not elegant but works 😉

  11. @Depesz

    Great script. Does it make sense to wrap any of this inside a transaction? Maybe starting just before cmd_drop with a commit after cmd_conadd ?

  12. @Charles:

    A transaction seems like a terrible idea. What would be the point? Having the transaction will not help in any way, but it might cause problems with other things (like vacuum), because long transactions are the bane of any kind of db maintenance.
