Previously I tested the performance of PL/pgSQL-coded foreign keys pointing to a partitioned table.
Now, let's see if I can make creating them a bit easier.
Previously I wrote about how to create a foreign key pointing to a partitioned table.
The final solution there required four separate functions and four triggers for each key between two tables.
Let's see how fast it is, and whether it can be made simpler.
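To give a feel for what that means, here is a minimal sketch of just one of those four pieces: a trigger on the referencing table that checks that the referenced row exists. Table and column names (users, posts, user_id) are made up for illustration, and this ignores locking and the triggers needed on the referenced side:

-- Hypothetical schema: "users" is partitioned, and posts.user_id should reference it.
CREATE FUNCTION posts_user_id_fkey_check()
    RETURNS trigger
    LANGUAGE plpgsql
AS $$
BEGIN
    PERFORM 1 FROM users WHERE id = NEW.user_id;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'user_id % not present in users', NEW.user_id;
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER posts_user_id_fkey_check
    BEFORE INSERT OR UPDATE OF user_id ON posts
    FOR EACH ROW
    EXECUTE FUNCTION posts_user_id_fkey_check();

And that is only the check on inserts/updates of the referencing table; the referenced table needs its own triggers to handle deletes and updates of the key.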
On 30th of October 2018, Michael Paquier committed patch:
Add pg_partition_tree to display information about partitions

This new function is useful to display a full tree of partitions with a partitioned table given in output, and avoids the need of any complex WITH RECURSIVE query when looking at partition trees which are deep multiple levels.

It returns a set of records, one for each partition, containing the partition's name, its immediate parent's name, a boolean value telling if the relation is a leaf in the tree and an integer telling its level in the partition tree with given table considered as root, beginning at zero for the root, and incrementing by one each time the scan goes one level down.

Author: Amit Langote
Discussion: https://postgr.es/m/-9a51-ad02-d53e-@lab.ntt.co.jp
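Usage is trivial; assuming a partitioned table named measurements (hypothetical name), the whole tree can be listed with:

SELECT relid, parentrelid, isleaf, level
    FROM pg_partition_tree('measurements');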
On 25th of October 2018, Michael Paquier committed patch:
Add pg_promote function

This function is able to promote a standby with this new SQL-callable function. Execution access can be granted to non-superusers so that failover tools can observe the principle of least privilege.

Catalog version is bumped.

Author: Laurenz Albe
Discussion: https://postgr.es/m/.camel@cybertec.at
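So, instead of pg_ctl promote or a trigger file, a standby can now be promoted from SQL. The function takes optional wait and wait_seconds arguments:

-- Promote the standby and wait (by default up to 60 seconds) for promotion to finish:
SELECT pg_promote();

-- Or return immediately, without waiting:
SELECT pg_promote(wait => false);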
One of the long-standing limitations of partitioning is that you can't have foreign keys pointing to partitioned tables.
Let's see if I can make it possible to have some kind of constraint that would do the same thing as a fkey.
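To show what I mean (hypothetical tables), on Pg 11 something like this fails:

CREATE TABLE users (
    id   int8 PRIMARY KEY,
    name text
) PARTITION BY HASH (id);

CREATE TABLE posts (
    id      int8 PRIMARY KEY,
    user_id int8 REFERENCES users (id)
);
-- ERROR:  cannot reference partitioned table "users"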
I just released the first version of pg_terminator.
It's a tool that is supposed to run on a PostgreSQL database server, monitor a database, and cancel or terminate offending queries/connections.
It can be used to handle a wide range of situations: basically, if you can write a WHERE condition on the pg_stat_activity view that lists the backends to cancel/terminate, you can make pg_terminator do it for you.
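To illustrate the idea in plain SQL (the threshold below is made up), this is more or less what pg_terminator automates when told to kill sessions that have been idle in transaction for too long:

-- Manually doing what pg_terminator would do on a schedule:
SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND state_change < now() - '15 minutes'::interval
      AND pid <> pg_backend_pid();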
To work, it requires Ruby and its pg library.
Usage is free, and it's fully open-source (BSD license).
On 1st of August 2018, Peter Eisentraut committed patch:
Allow multi-inserts during COPY into a partitioned table

CopyFrom allows multi-inserts to be used for non-partitioned tables, but this was disabled for partitioned tables. The reason for this appeared to be that the tuple may not belong to the same partition as the previous tuple did. Not allowing multi-inserts here greatly slowed down imports into partitioned tables. These could take twice as long as a copy to an equivalent non-partitioned table.

It seems wise to do something about this, so this change allows the multi-inserts by flushing the so-far inserted tuples to the partition when the next tuple does not belong to the same partition, or when the buffer fills. This improves performance when the next tuple in the stream commonly belongs to the same partition as the previous tuple.

In cases where the target partition changes on every tuple, using multi-inserts slightly slows the performance. To get around this we track the average size of the batches that have been inserted and adaptively enable or disable multi-inserts based on the size of the batch. Some testing was done and the regression only seems to exist when the average size of the insert batch is close to 1, so let's just enable multi-inserts when the average size is at least 1.3. More performance testing might reveal a better number for this, but since the slowdown was only 1-2% it does not seem critical enough to spend too much time calculating it. In any case it may depend on other factors rather than just the size of the batch.

Allowing multi-inserts for partitions required a bit of work around the per-tuple memory contexts as we must flush the tuples when the next tuple does not belong to the same partition. In which case there is no good time to reset the per-tuple context, as we've already built the new tuple by this time. In order to work around this we maintain two per-tuple contexts and just switch between them every time the partition changes and reset the old one. This does mean that the first of each batch of tuples is not allocated in the same memory context as the others, but that does not matter since we only reset the context once the previous batch has been inserted.

Author: David Rowley <david.rowley@2ndquadrant.com>
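Nothing changes in how COPY is invoked; the speedup is automatic. A hypothetical example, just to show the shape of it:

CREATE TABLE events (
    id       int8,
    happened date,
    payload  text
) PARTITION BY RANGE (happened);

CREATE TABLE events_2018_07 PARTITION OF events
    FOR VALUES FROM ('2018-07-01') TO ('2018-08-01');
CREATE TABLE events_2018_08 PARTITION OF events
    FOR VALUES FROM ('2018-08-01') TO ('2018-09-01');

-- Rows get routed to their partitions; with this patch, runs of consecutive
-- rows that land in the same partition are inserted using multi-inserts.
COPY events FROM '/tmp/events.csv' WITH (FORMAT csv);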
On Saturday, 30th of June, Andrew Dunstan stamped HEAD in git as 12devel.
This means that there will be no new features in Pg11. And now, my test Pg reports:
=$ psql -c 'select version()'
                                                    version
════════════════════════════════════════════════════════════════════════════════════════════════════════════
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
(1 row)
Nice 🙂
On 10th of June 2018, Tom Lane committed patch:
Improve run-time partition pruning to handle any stable expression.

The initial coding of the run-time-pruning feature only coped with cases where the partition key(s) are compared to Params. That is a bit silly; we can allow it to work with any non-Var-containing stable expression, as long as we take special care with expressions containing PARAM_EXEC Params.

The code is hardly any longer this way, and it's considerably clearer (IMO at least). Per gripe from Pavel Stehule.

David Rowley, whacked around a bit by me

Discussion: https://postgr.es/m/CAFj8pRBjrufA3ocDm8o4LPGNye9Y+pm1b9kCwode4X04CULG3g@mail.gmail.com
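For example, using the hypothetical range-partitioned events table from the COPY example above: now() is a stable expression, not a Param, so previously the executor could not prune partitions for a query like this one; with the patch, unneeded partitions get skipped at execution time and show up as "Subplans Removed" in the plan:

EXPLAIN (ANALYZE, COSTS OFF)
    SELECT * FROM events WHERE happened >= now() - '7 days'::interval;
-- With run-time pruning, the Append node reports something like "Subplans Removed: 1".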