Waiting for 9.6 – Add new system view, pg_config

On 17th of February, Joe Conway committed patch:

Add new system view, pg_config
 
Move and refactor the underlying code for the pg_config client
application to src/common in support of sharing it with a new
system information SRF called pg_config() which makes the same
information available via SQL. Additionally wrap the SRF with a
new system view, also called pg_config.
 
Patch by me with extensive input and review by Michael Paquier
and additional review by Alvaro Herrera.
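
Once 9.6 is out, the same information that the pg_config client prints on the command line should be queryable from SQL. A minimal sketch of what that could look like (the view exposes name/setting pairs):

-- everything the view exposes
SELECT * FROM pg_config;
-- or just a single entry, for example the version string
SELECT setting FROM pg_config WHERE name = 'VERSION';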


Waiting for 9.6 – Remove GROUP BY columns that are functionally dependent on other columns.

On 11th of February, Tom Lane committed patch:

Remove GROUP BY columns that are functionally dependent on other columns.
 
If a GROUP BY clause includes all columns of a non-deferred primary key,
as well as other columns of the same relation, those other columns are
redundant and can be dropped from the grouping; the pkey is enough to
ensure that each row of the table corresponds to a separate group.
Getting rid of the excess columns will reduce the cost of the sorting or
hashing needed to implement GROUP BY, and can indeed remove the need for
a sort step altogether.
 
This seems worth testing for since many query authors are not aware of
the GROUP-BY-primary-key exception to the rule about queries not being
allowed to reference non-grouped-by columns in their targetlists or
HAVING clauses.  Thus, redundant GROUP BY items are not uncommon.  Also,
we can make the test pretty cheap in most queries where it won't help
by not looking up a rel's primary key until we've found that at least
two of its columns are in GROUP BY.
 
David Rowley, reviewed by Julien Rouhaud
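
To see what this means in practice, here is a hypothetical example (a users table with primary key id, plus a posts table referencing it):

-- hypothetical schema: users(id PRIMARY KEY, username text), posts(user_id int, ...)
SELECT u.id, u.username, count(*)
FROM users u
JOIN posts p ON p.user_id = u.id
GROUP BY u.id, u.username;
-- u.username is functionally dependent on the primary key u.id,
-- so the planner can drop it from the grouping, making the sort/hash cheaper.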


explain.depesz.com – new change and some stats

Quite a long time ago (back in October), Oskar Liljeblad reported a bug in anonymization: group keys were not being anonymized.

You can see an example of such a plan here.

I finally got around to it, fixed the bug, and pushed the new version to the live site, so such plans will now be anonymized correctly.

Thanks Oskar, and sorry for the long delay.


Waiting for 9.6 – Add num_nulls() and num_nonnulls() to count NULL arguments.

On 5th of February, Tom Lane committed patch:

Add num_nulls() and num_nonnulls() to count NULL arguments.
 
An example use-case is "CHECK(num_nonnulls(a,b,c) = 1)" to assert that
exactly one of a,b,c isn't NULL.  The functions are variadic, so they
can also be pressed into service to count the number of null or nonnull
elements in an array.
 
Marko Tiikkaja, reviewed by Pavel Stehule
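
A quick illustration of what these should return (and, assuming the VARIADIC keyword works here as it does for other variadic "any" functions, how to count NULLs in an array):

SELECT num_nulls(1, NULL, 2);                        -- 1
SELECT num_nonnulls(1, NULL, 2);                     -- 2
SELECT num_nulls(VARIADIC ARRAY[1, NULL, 2, NULL]);  -- 2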


Waiting for 9.6 – Add trigonometric functions that work in degrees.

On 22nd of January, Tom Lane committed patch:

Add trigonometric functions that work in degrees.
 
The implementations go to some lengths to deliver exact results for values
where an exact result can be expected, such as sind(30) = 0.5 exactly.
 
Dean Rasheed, reviewed by Michael Paquier
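
So, for example, the following should return exact values rather than floating-point approximations (a quick sketch; sind/cosd/tand are the degree-based counterparts of sin/cos/tan):

SELECT sind(30), cosd(60), tand(45);
-- expected: 0.5, 0.5 and 1, exactly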


Waiting for 9.6 – Support parallel joins, and make related improvements.

On 20th of January, Robert Haas committed patch:

The core innovation of this patch is the introduction of the concept
of a partial path; that is, a path which if executed in parallel will
generate a subset of the output rows in each process.  Gathering a
partial path produces an ordinary (complete) path.  This allows us to
generate paths for parallel joins by joining a partial path for one
side (which at the baserel level is currently always a Partial Seq
Scan) to an ordinary path on the other side.  This is subject to
various restrictions at present, especially that this strategy seems
unlikely to be sensible for merge joins, so only nested loop and
hash join paths are generated.
 
This also allows an Append node to be pushed below a Gather node in
the case of a partitioned table.
 
Testing revealed that early versions of this patch made poor decisions
in some cases, which turned out to be caused by the fact that the
original cost model for Parallel Seq Scan wasn't very good.  So this
patch tries to make some modest improvements in that area.
 
There is much more to be done in the area of generating good parallel
plans in all cases, but this seems like a useful step forward.
 
Patch by me, reviewed by Dilip Kumar and Amit Kapila.
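
In practice this means that, with parallelism enabled, a plain join of two tables can now get a Parallel Seq Scan below the join and a Gather on top. A rough, hypothetical sketch (tables t1/t2 are made up, and the GUC name and exact plan shape may still change during 9.6 development):

-- during 9.6 development the worker count is driven by max_parallel_degree
SET max_parallel_degree = 4;
EXPLAIN SELECT *
FROM t1
JOIN t2 ON t2.t1_id = t1.id;
-- the planner may now choose a plan shaped roughly like:
--   Gather
--     ->  Hash Join
--           ->  Parallel Seq Scan on t1
--           ->  Hash
--                 ->  Seq Scan on t2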
