For a very long time, plans with parallel execution showed bogus values. Not any more.
Recently, on IRC, there have been some talks with people using various pg_dump/pg_dumpall invocations to get dumps of their databases.
I voiced my ideas, but figured it could be a good subject for a blog post.
On 19th of November 2019, Thomas Munro committed patch:
Allow invisible PROMPT2 in psql.

Keep track of the visible width of PROMPT1, and provide %w as a way for
PROMPT2 to generate the same number of spaces.

Author: Thomas Munro, with ideas from others
Discussion: https://postgr.es/m/CA%2BhUKG%2BzGd7RigjWbxwhzGW59gUpf76ydQECeGdEdodH6nd__A%40mail.gmail.com
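To show what this does, here is a minimal sketch of how %w might be used (the prompt strings themselves are just examples, not anything mandated by the patch) — PROMPT2 becomes a run of spaces as wide as whatever PROMPT1 rendered, so continuation lines align under the first line of the query:

```
-- in ~/.psqlrc, or typed interactively in psql
\set PROMPT1 '%/%R%# '
\set PROMPT2 '%w'
```

With this, a multi-line statement shows the database name and prompt only on the first line, and the continuation lines are indented by invisible padding instead of repeating the prompt.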
On 13th of November 2019, Amit Kapila committed patch:
Introduce the 'force' option for the Drop Database command.

This new option terminates the other sessions connected to the target
database and then drop it. To terminate other sessions, the current user
must have desired permissions (same as pg_terminate_backend()). We don't
allow to terminate the sessions if prepared transactions, active logical
replication slots or subscriptions are present in the target database.

Author: Pavel Stehule with changes by me
Ryan Lambert and Amit Kapila
Discussion: https://postgr.es/m/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com
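In practice, this means a drop that previously failed with "database is being accessed by other users" can now be forced; a minimal sketch (the database name is just an example):

```
-- kicks out other sessions connected to appdb, then drops it;
-- fails if appdb has prepared transactions, active logical
-- replication slots, or subscriptions
DROP DATABASE appdb WITH (FORCE);
```

You still need the privileges required by pg_terminate_backend() for the sessions being terminated.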
On 6th of November 2019, Tomas Vondra committed patch:
Allow sampling of statements depending on duration

This allows logging a sample of statements, without incurring excessive
log traffic (which may impact performance). This can be useful when
analyzing workloads with lots of short queries.

The sampling is configured using two new GUC parameters:

* log_min_duration_sample - minimum required statement duration
* log_statement_sample_rate - sample rate (0.0 - 1.0)

Only statements with duration exceeding log_min_duration_sample are
considered for sampling. To enable sampling, both those GUCs have to be
set correctly.

The existing log_min_duration_statement GUC has a higher priority, i.e.
statements with duration exceeding log_min_duration_statement will be
always logged, irrespectedly of how the sampling is configured. This
means only configurations log_min_duration_sample < log_min_duration_statement
do actually sample the statements, instead of logging everything.

Author: Adrien Nayrat
Discussion: https://email@example.com
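A sketch of how the two new GUCs combine with the existing one in postgresql.conf (the thresholds here are made-up example values):

```
# always log statements slower than 5 seconds
log_min_duration_statement = 5000

# of statements between 100 ms and 5 s, log roughly a 1% sample
log_min_duration_sample = 100
log_statement_sample_rate = 0.01
```

Statements under 100 ms are never logged, statements over 5 s are always logged, and the band in between is sampled — which keeps log volume manageable on workloads dominated by short queries.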
Added two new functions, which basically work only for JSON / YAML / XML plans.
First – the view of the source explain is now syntax highlighted. Just click the source button on one of these to see it:
The next addition is the text tab, which shows the same explain, but converted to text format.
Hope you'll find it useful.
There have been many changes, but the short summary is:
- Fix display of heap fetches from json (https://gitlab.com/depesz/explain.depesz.com/issues/15)
- Move global data (planning/execution/total time, trigger info) from top_node to explain object itself
- Add method to easily get real total, wall-clock, runtime of query.
- Add Pg::Explain::Analyzer, so far it can extract node types and their “paths”
- Add scripts to run perltidy, start development tmux env, and explain schema dumper
- Fix handling of parallel queries (hopefully)
- Remove edge-case deep recursion error
- Speed optimization for large explains
Hopefully it will not break anything for you 🙂
On 3rd of October 2019, Amit Kapila committed patch:
pgbench: add --partitions and --partition-method options.

These new options allow users to partition the pgbench_accounts table by
specifying the number of partitions and partitioning method. The values
allowed for partitioning method are range and hash. This feature allows
users to measure the overhead of partitioning if any.

Author: Fabien COELHO
Alvaro Herrera
Discussion: https://postgr.es/m/alpine.DEB.2.21..7008@lancre
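A quick sketch of how you might use this to compare partitioned vs. unpartitioned overhead (database name and run parameters are just examples):

```
# initialize with pgbench_accounts split into 8 hash partitions
pgbench -i -s 100 --partitions=8 --partition-method=hash bench_db

# then run the standard workload and compare against an
# initialization without --partitions
pgbench -c 10 -T 60 bench_db
```

Running the same benchmark with and without the partitioning options gives a direct measurement of the partitioning overhead for this workload.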
Recently I've seen a case like this:
- application had to add column to table.
- application ran ALTER TABLE ADD COLUMN (without default!)
- everything stopped for many MINUTES
Why? And how can you avoid the problem?
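The short version: ALTER TABLE needs an ACCESS EXCLUSIVE lock, so it queues behind any long-running query or idle-in-transaction session touching the table — and every new query then queues behind the ALTER. One common mitigation is to bound how long the DDL will wait; a minimal sketch (table and column names are made up):

```
-- give up after 1 second instead of blocking the whole application
SET lock_timeout = '1s';
ALTER TABLE big_table ADD COLUMN new_col integer;
```

If the lock can't be acquired in time, the ALTER fails with a lock timeout error and can simply be retried later, instead of silently stalling all traffic on the table for minutes.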