On 20th of January 2020, Amit Kapila committed patch:
Allow vacuum command to process indexes in parallel.
This feature allows the vacuum to leverage multiple CPUs in order to
process indexes. This enables us to perform index vacuuming and index
cleanup with background workers. This adds a PARALLEL option to VACUUM
command where the user can specify the number of workers that can be used
to perform the command which is limited by the number of indexes on a
table. Specifying zero as a number of workers will disable parallelism.
This option can't be used with the FULL option.
Each index is processed by at most one vacuum process. Therefore parallel
vacuum can be used when the table has at least two indexes.
The parallel degree is either specified by the user or determined based on
the number of indexes that the table has, and further limited by
max_parallel_maintenance_workers. The index can participate in parallel
vacuum iff its size is greater than min_parallel_index_scan_size.
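As a quick illustration of the new syntax (table and index names below are made up for the example):

```sql
-- Parallel vacuum needs at least two indexes on the table.
CREATE TABLE events (id bigint, payload text);
CREATE INDEX events_id_idx ON events (id);
CREATE INDEX events_payload_idx ON events (payload);

-- Ask for up to 2 workers for index vacuuming / index cleanup:
VACUUM (PARALLEL 2) events;

-- Explicitly disable parallelism:
VACUUM (PARALLEL 0) events;
```

Note that the actual number of workers used is still capped by the number of indexes and by max_parallel_maintenance_workers.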
Author: Masahiko Sawada and Amit Kapila
Mahendra Singh and Sergei Kornilov
Continue reading Waiting for PostgreSQL 13 – Allow vacuum command to process indexes in parallel.
Title: Waiting for PostgreSQL 13 – ALTER TABLE … ALTER COLUMN … DROP EXPRESSION
On 14th of January 2020, Peter Eisentraut committed patch:
ALTER TABLE ... ALTER COLUMN ... DROP EXPRESSION
Add an ALTER TABLE subcommand for dropping the generated property from
a column, per SQL standard.
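In practice it looks like this (hypothetical table for illustration):

```sql
-- Table with a stored generated column:
CREATE TABLE items (
    price numeric,
    qty   integer,
    total numeric GENERATED ALWAYS AS (price * qty) STORED
);

-- Drop the generation expression; "total" becomes a plain column,
-- keeping the values it already has, and can now be updated directly:
ALTER TABLE items ALTER COLUMN total DROP EXPRESSION;
```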
Continue reading Waiting for PostgreSQL 13 – ALTER TABLE … ALTER COLUMN … DROP EXPRESSION
Recently, on irc, there have been some talks with people using various pg_dump/pg_dumpall calls to get dumps of database.
I voiced my ideas, but figured it could be good subject for a blog post.
Continue reading How to effectively dump PostgreSQL databases
Just pushed new version of explain.depesz.com site.
- most importantly : changed favicon to be more related to site
- extended, quite a lot, help page
- added a way to embed explains on your page
Continue reading Changes on explain.depesz.com
On 19th of November 2019, Thomas Munro committed patch:
Allow invisible PROMPT2 in psql.
Keep track of the visible width of PROMPT1, and provide %w as a way
for PROMPT2 to generate the same number of spaces.
Author: Thomas Munro, with ideas from others
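For example, with prompts like these in your ~/.psqlrc (prompt strings chosen just for illustration):

```
\set PROMPT1 '%n@%/%R%# '
\set PROMPT2 '%w'
```

%w expands to as many spaces as PROMPT1's visible width, so continuation lines of a multi-line query line up neatly under the first line instead of being pushed right by a second prompt.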
Continue reading Waiting for PostgreSQL 13 – Allow invisible PROMPT2 in psql.
On 13th of November 2019, Amit Kapila committed patch:
Introduce the 'force' option for the Drop Database command.
This new option terminates the other sessions connected to the target
database and then drops it. To terminate other sessions, the current user
must have desired permissions (same as pg_terminate_backend()). We don't
allow terminating the sessions if prepared transactions, active logical
replication slots or subscriptions are present in the target database.
Author: Pavel Stehule with changes by me
Ryan Lambert and Amit Kapila
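The syntax itself is simple (database name made up for the example):

```sql
-- Terminate other sessions connected to "testdb", then drop it:
DROP DATABASE testdb WITH (FORCE);
```

Previously this would fail with "database is being accessed by other users" and you had to terminate the sessions yourself first.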
Continue reading Waiting for PostgreSQL 13 – Introduce the ‘force' option for the Drop Database command.
On 6th of November 2019, Tomas Vondra committed patch:
Allow sampling of statements depending on duration
This allows logging a sample of statements, without incurring excessive
log traffic (which may impact performance). This can be useful when
analyzing workloads with lots of short queries.
The sampling is configured using two new GUC parameters:
* log_min_duration_sample - minimum required statement duration
* log_statement_sample_rate - sample rate (0.0 - 1.0)
Only statements with duration exceeding log_min_duration_sample are
considered for sampling. To enable sampling, both those GUCs have to
be set correctly.
The existing log_min_duration_statement GUC has a higher priority, i.e.
statements with duration exceeding log_min_duration_statement will be
always logged, irrespective of how the sampling is configured. This
means only configurations
log_min_duration_sample < log_min_duration_statement
do actually sample the statements, instead of logging everything.
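A sample configuration (values below are just illustrative, durations are in milliseconds):

```
# postgresql.conf
# Always log statements slower than 5 seconds:
log_min_duration_statement = 5000
# Consider statements slower than 250 ms for sampling:
log_min_duration_sample = 250
# ...and log roughly 10% of those:
log_statement_sample_rate = 0.1
```

With this setup you get every really slow query, plus a 10% sample of the moderately slow ones, without flooding the log.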
Author: Adrien Nayrat
Continue reading Waiting for PostgreSQL 13 – Allow sampling of statements depending on duration
Added two new functions, which basically work only for JSON / YAML / XML plans.
First – the view of source explain is now syntax highlighted. Just click source button on one of these to see:
Next addition is the text tab, which shows the same explain converted to text format.
Hope you'll find it useful.
Just now pushed new version of Pg::Explain Perl library that is handling parsing for explain.depesz.com.
There have been many changes, but the short summary is:
- Fix display of heap fetches from json (https://gitlab.com/depesz/explain.depesz.com/issues/15)
- Move global data (planning/execution/total time, trigger info) from top_node to explain object itself
- Add method to easily get real total, wall-clock, runtime of query.
- Add Pg::Explain::Analyzer; so far it can extract node types and their "paths"
- Add scripts to run perltidy, start development tmux env, and explain schema dumper
- Fix handling of parallel queries (hopefully)
- Remove edge-case deep recursion error
- Speed optimization for large explains
Hopefully it will not break anything for you 🙂
And as a final note: explain.depesz.com has been used to add almost 740k plans (out of which 120k were deleted). Currently the database contains over 311k public (visible in history), non-deleted plans.