Waiting for 9.6 – Generate parallel sequential scan plans in simple cases.

On 11th of November, Robert Haas committed patch:

Generate parallel sequential scan plans in simple cases.
 
Add a new flag, consider_parallel, to each RelOptInfo, indicating
whether a plan for that relation could conceivably be run inside of
a parallel worker.  Right now, we're pretty conservative: for example,
it might be possible to defer applying a parallel-restricted qual
in a worker, and later do it in the leader, but right now we just
don't try to parallelize access to that relation.  That's probably
the right decision in most cases, anyway.
 
Using the new flag, generate parallel sequential scan plans for plain
baserels, meaning that we now have parallel sequential scan in
PostgreSQL.  The logic here is pretty unsophisticated right now: the
costing model probably isn't right in detail, and we can't push joins
beneath Gather nodes, so the number of plans that can actually benefit
from this is pretty limited right now.  Lots more work is needed.
Nevertheless, it seems time to enable this functionality so that all
this code can actually be tested easily by users and developers.
 
Note that, if you wish to test this functionality, it will be
necessary to set max_parallel_degree to a value greater than the
default of 0.  Once a few more loose ends have been tidied up here, we
might want to consider changing the default value of this GUC, but
I'm leaving it alone for now.
 
Along the way, fix a bug in cost_gather: the previous coding thought
that a Gather node's transfer overhead should be costed on the basis of
the relation size rather than the number of tuples that actually need
to be passed off to the leader.
 
Patch by me, reviewed in earlier versions by Amit Kapila.
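
A minimal way to try this out (a sketch only; "big_table" and "some_column" are hypothetical, and the table must be large enough for the planner to consider parallelism worthwhile):

    -- the default max_parallel_degree of 0 disables parallel plans
    SET max_parallel_degree = 2;
    EXPLAIN SELECT * FROM big_table WHERE some_column = 42;
    -- with enough rows, the plan should show a Gather node
    -- on top of a Parallel Seq Scan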

Continue reading Waiting for 9.6 – Generate parallel sequential scan plans in simple cases.

Waiting for 9.6 – Allow usage of huge maintenance_work_mem for GIN build.

On 2nd of September, Teodor Sigaev committed patch:

Allow usage of huge maintenance_work_mem for GIN build.
 
Currently, the in-memory posting list built during a GIN index build is
limited to 1GB, because repalloc is used. The patch replaces the repalloc
call with repalloc_huge. This raises the posting list limit from 180 million
entries (1GB / sizeof(ItemPointerData)) to 4 billion, limited by the
maxcount/count fields in GinEntryAccumulator and subsequent calls. A check
has been added.
 
Also, fix accounting of allocatedMemory during build to prevent integer
overflow with maintenance_work_mem > 4GB.
 
Robert Abraham <robert.abraham86@googlemail.com> with additions by me
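
In practice this means that large maintenance_work_mem values are now actually honored while building GIN indexes. A sketch, assuming a hypothetical "docs" table with a tsvector column "tsv":

    -- values above the old effective 1GB cap now help GIN builds
    SET maintenance_work_mem = '8GB';
    CREATE INDEX docs_tsv_idx ON docs USING gin (tsv);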

Continue reading Waiting for 9.6 – Allow usage of huge maintenance_work_mem for GIN build.

Waiting for 9.5 – Add stats for min, max, mean, stddev times to pg_stat_statements.

On 27th of March, Andrew Dunstan committed patch:

Add stats for min, max, mean, stddev times to pg_stat_statements.
 
The new fields are min_time, max_time, mean_time and stddev_time.
 
Based on an original patch from Mitsumasa KONDO, modified by me. Reviewed by Petr Jelínek.
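
With these columns it becomes easy to find queries with unstable runtimes, for example (assuming the pg_stat_statements extension is installed):

    SELECT query, calls, min_time, max_time, mean_time, stddev_time
    FROM pg_stat_statements
    ORDER BY stddev_time DESC
    LIMIT 10;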

Continue reading Waiting for 9.5 – Add stats for min, max, mean, stddev times to pg_stat_statements.

Waiting for 9.5 – Use 128-bit math to accelerate some aggregation functions.

On 20th of March, Andres Freund committed patch:

Use 128-bit math to accelerate some aggregation functions.
 
On platforms where we support 128-bit integers, use them to implement
faster transition functions for sum(int8), avg(int8),
var_*(int2/int4), and stddev_*(int2/int4). Where they are not supported,
continue to use numeric as the transition type.
 
In some synthetic benchmarks this has been shown to provide significant
speedups.
 
Bumps catversion.
 
Discussion: 544BB5F1.50709@proxel.se
Author: Andreas Karlsson
Reviewed-By: Peter Geoghegan, Petr Jelinek, Andres Freund, Oskari Saarenmaa, David Rowley
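
Nothing changes at the SQL level; the listed aggregates simply get a faster internal transition state. A hypothetical example of a query that benefits:

    -- sum(int8) and avg(int8) use a 128-bit transition state
    -- where the platform supports it; "ledger" is a made-up table
    SELECT sum(amount), avg(amount) FROM ledger;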

Continue reading Waiting for 9.5 – Use 128-bit math to accelerate some aggregation functions.

Waiting for 9.5 – Use abbreviated keys for faster sorting of text datums.

On 19th of January, Robert Haas committed patch:

Use abbreviated keys for faster sorting of text datums.
 
This commit extends the SortSupport infrastructure to allow operator
classes the option to provide abbreviated representations of Datums;
in the case of text, we abbreviate by taking the first few characters
of the strxfrm() blob.  If the abbreviated comparison is insufficient
to resolve the comparison, we fall back on the normal comparator.
This can be much faster than the old way of doing sorting if the
first few bytes of the string are usually sufficient to resolve the
comparison.
 
There is the potential for a performance regression if all of the
strings to be sorted are identical for the first 8+ characters and
differ only in later positions; therefore, the SortSupport machinery
now provides an infrastructure to abort the use of abbreviation if
it appears that abbreviation is producing comparatively few distinct
keys.  HyperLogLog, a streaming cardinality estimator, is included in
this commit and used to make that determination for text.
 
Peter Geoghegan, reviewed by me.
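
No SQL changes are needed here either; any sort of text values can benefit. A sketch with a hypothetical "people" table:

    -- both explicit sorts and index builds go through the sorting code
    SELECT * FROM people ORDER BY last_name;
    CREATE INDEX people_last_name_idx ON people (last_name);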

Continue reading Waiting for 9.5 – Use abbreviated keys for faster sorting of text datums.

Waiting for 9.5 – vacuumdb: enable parallel mode

On 23rd of January, Alvaro Herrera committed patch:

vacuumdb: enable parallel mode
 
This mode allows vacuumdb to open several server connections to vacuum
or analyze several tables simultaneously.
 
Author: Dilip Kumar.  Some reworking by Álvaro Herrera
Reviewed by: Jeff Janes, Amit Kapila, Magnus Hagander, Andres Freund
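
Usage sketch ("mydb" is a hypothetical database name):

    # vacuum and analyze a database using 4 concurrent connections
    vacuumdb --jobs=4 --analyze mydb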

Continue reading Waiting for 9.5 – vacuumdb: enable parallel mode

What logging has least overhead?

When working with PostgreSQL you generally want to get information about slow queries. The usual approach is to set log_min_duration_statement to some low(ish) value, run your app, and then analyze logs.

But you can log to many places – flat file, flat file on another disk, local syslog, remote syslog. And – perhaps, instead of log_min_duration_statement – just use pg_stat_statements?
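
For reference, all of these choices boil down to a few settings in postgresql.conf, roughly like this (values are illustrative):

    log_min_duration_statement = 100   # log statements running over 100 ms
    log_destination = 'syslog'         # or 'stderr' for a flat file
    # pg_stat_statements additionally needs CREATE EXTENSION in the database
    shared_preload_libraries = 'pg_stat_statements'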

Well, I wondered about it, and decided to test.

Continue reading What logging has least overhead?

Waiting for 9.3 – Support indexing of regular-expression searches in contrib/pg_trgm.

On 9th of April, Tom Lane committed patch:

Support indexing of regular-expression searches in contrib/pg_trgm.
 
This works by extracting trigrams from the given regular expression,
in generally the same spirit as the previously-existing support for
LIKE searches, though of course the details are far more complicated.
 
Currently, only GIN indexes are supported.  We might be able to make
it work with GiST indexes later.
 
The implementation includes adding API functions to backend/regex/
to provide a view of the search NFA created from a regular expression.
These functions are meant to be generic enough to be supportable in
a standalone version of the regex library, should that ever happen.
 
Alexander Korotkov, reviewed by Heikki Linnakangas and Tom Lane

One day later, Tom Lane added support for the same operations using GiST indexes (the original patch worked only with GIN).
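
A minimal sketch of what this enables ("docs" and the pattern are hypothetical):

    CREATE EXTENSION pg_trgm;
    CREATE INDEX docs_body_trgm_idx ON docs USING gin (body gin_trgm_ops);
    -- regular-expression searches on "body" can now use the index
    SELECT * FROM docs WHERE body ~ 'post(gres)?ql';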

Continue reading Waiting for 9.3 – Support indexing of regular-expression searches in contrib/pg_trgm.

Waiting for 9.3 – Add parallel pg_dump option.

On 24th of March, Andrew Dunstan committed patch:

Add parallel pg_dump option.
 
New infrastructure is added which creates a set number of workers
(threads on Windows, forked processes on Unix). Jobs are then
handed out to these workers by the master process as needed.
pg_restore is adjusted to use this new infrastructure in place of the
old setup which created a new worker for each step on the fly. Parallel
dumps acquire a snapshot clone, if available, in order to stay
consistent.
 
The parallel option is selected by the -j / --jobs command line
parameter of pg_dump.
 
Joachim Wieland, lightly editorialized by Andrew Dunstan.
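
Note that parallel dumps require the directory output format. A usage sketch ("mydb" and the path are hypothetical):

    # dump with 4 workers into a directory-format archive
    pg_dump -j 4 -F d -f /path/to/dump.dir mydb
    # pg_restore can parallelize the restore as well
    pg_restore -j 4 -d mydb /path/to/dump.dir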

Continue reading Waiting for 9.3 – Add parallel pg_dump option.