A while ago someone wrote on IRC that PostgreSQL has a built-in hard limit on the number of partitions. This tweet was linked as the source of this information.
So, I decided to check it.
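As a minimal sketch of the kind of test involved (table name and partition count are arbitrary), one can generate a table with a large number of partitions in a loop and see where, if anywhere, things break:

```sql
-- Sketch: create a table with many partitions to probe for practical limits.
CREATE TABLE many_parts (i int) PARTITION BY RANGE (i);

DO $$
BEGIN
    FOR p IN 0..9999 LOOP
        EXECUTE format(
            'CREATE TABLE many_parts_%s PARTITION OF many_parts FOR VALUES FROM (%s) TO (%s)',
            p, p * 1000, (p + 1) * 1000
        );
    END LOOP;
END;
$$;
```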
On 6th of April 2020, Peter Eisentraut committed patch:
Add logical replication support to replicate into partitioned tables

Mainly, this adds support code in logical/worker.c for applying replicated operations whose target is a partitioned table to its relevant partitions.

Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Petr Jelinek <petr@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
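For context, here is a minimal sketch of what this enables, with hypothetical table, publication, and connection names: a plain table on the publisher replicated into a partitioned table on the subscriber.

```sql
-- On the publisher: an ordinary table, published as usual.
CREATE TABLE measurements (ts timestamptz NOT NULL, val int);
CREATE PUBLICATION meas_pub FOR TABLE measurements;

-- On the subscriber: the same table name, but partitioned.
-- With this patch, applied changes are routed to the matching partition.
CREATE TABLE measurements (ts timestamptz NOT NULL, val int)
    PARTITION BY RANGE (ts);
CREATE TABLE measurements_2020 PARTITION OF measurements
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');

CREATE SUBSCRIPTION meas_sub
    CONNECTION 'host=publisher dbname=mydb'
    PUBLICATION meas_pub;
```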
On 18th of March 2020, Alvaro Herrera committed patch:
Enable BEFORE row-level triggers for partitioned tables

... with the limitation that the tuple must remain in the same partition.

Reviewed-by: Ashutosh Bapat
Discussion: https://postgr.es/m/20200227165158.GA2071@alvherre.pgsql
Continue reading Waiting for PostgreSQL 13 – Enable BEFORE row-level triggers for partitioned tables
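As a minimal sketch of the new capability (all names here are hypothetical), a BEFORE ROW trigger can now be defined directly on the partitioned parent, as long as it doesn't move the row to another partition:

```sql
CREATE TABLE events (id int NOT NULL, payload text) PARTITION BY RANGE (id);
CREATE TABLE events_low PARTITION OF events FOR VALUES FROM (1) TO (1000);

CREATE FUNCTION events_before() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    -- Modifying non-partition-key columns is fine; the row must stay
    -- in the same partition.
    NEW.payload := upper(NEW.payload);
    RETURN NEW;
END;
$$;

CREATE TRIGGER events_bt BEFORE INSERT OR UPDATE ON events
    FOR EACH ROW EXECUTE FUNCTION events_before();
```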
On 3rd of October 2019, Amit Kapila committed patch:
pgbench: add --partitions and --partition-method options.

These new options allow users to partition the pgbench_accounts table by specifying the number of partitions and the partitioning method. The values allowed for partitioning method are range and hash. This feature allows users to measure the overhead of partitioning, if any.

Author: Fabien COELHO
Alvaro Herrera
Discussion: https://postgr.es/m/alpine.DEB.2.21..7008@lancre
Continue reading Waiting for PostgreSQL 13 – pgbench: add --partitions and --partition-method options.
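The option names come straight from the commit; a typical invocation (scale, partition count, client count, and database name are arbitrary) might look like:

```bash
# Initialize pgbench_accounts as 16 hash partitions, then run a short test.
pgbench -i -s 100 --partitions=16 --partition-method=hash mydb
pgbench -c 8 -T 60 mydb
```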
On 3rd of April 2019, Alvaro Herrera committed patch:
Support foreign keys that reference partitioned tables

Previously, while primary keys could be made on partitioned tables, it was not possible to define foreign keys that reference those primary keys. Now it is possible to do that.

Author: Álvaro Herrera
Discussion: https://postgr.es/m/20181102234158.735b3fevta63msbj@alvherre.pgsql
Continue reading Waiting for PostgreSQL 12 – Support foreign keys that reference partitioned tables
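A minimal sketch of what this allows, with hypothetical tables:

```sql
-- A partitioned table with a primary key...
CREATE TABLE orders (id bigint PRIMARY KEY) PARTITION BY HASH (id);
CREATE TABLE orders_0 PARTITION OF orders FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE orders_1 PARTITION OF orders FOR VALUES WITH (MODULUS 2, REMAINDER 1);

-- ...can now be the target of a foreign key (this failed before PostgreSQL 12).
CREATE TABLE order_lines (
    order_id bigint NOT NULL REFERENCES orders (id),
    qty      int
);
```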
Recently someone asked, on IRC, how to make a table partitioned.
The thing is that it was supposed to be done with the new declarative partitioning, and not the old inheritance-based way.
The problem is that while we can create a table that is partitioned, we can't alter an existing table to become partitioned.
So. Is it possible?
Continue reading Migrating simple table to partitioned. How?
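The post explores this; one common workaround, sketched here with hypothetical names and bounds, is to rename the old table and attach it as the first partition of a new partitioned table:

```sql
-- There is no ALTER TABLE ... PARTITION BY, so: rename, recreate, attach.
ALTER TABLE users RENAME TO users_legacy;

CREATE TABLE users (id bigint NOT NULL, username text)
    PARTITION BY RANGE (id);

-- The old table becomes a partition covering all of its existing rows.
ALTER TABLE users ATTACH PARTITION users_legacy
    FOR VALUES FROM (MINVALUE) TO (1000000);
```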
Previously I tested the performance of PL/pgSQL-coded foreign keys pointing to a partitioned table.
Now, let's see if I can make creating them a bit easier.
Previously I wrote about how to create a foreign key pointing to a partitioned table.
The final solution there required four separate functions and four triggers for each key between two tables.
Let's see how fast it is, and if it's possible to make it simpler.
One of the long-standing limitations of partitioned tables is that you can't have foreign keys pointing to them.
Let's see if I can make some kind of constraint that would do the same thing as a foreign key.
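The gist of the approach, sketched below with hypothetical tables (the full solution in the posts needs four functions and four triggers per key, plus handling of deletes on the referenced side and concurrency), is a trigger that verifies the referenced row exists:

```sql
-- Check the referencing side: reject rows whose user_id has no match
-- in the partitioned users table.
CREATE FUNCTION posts_check_user() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    PERFORM 1 FROM users WHERE id = NEW.user_id;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'user_id % not present in users', NEW.user_id;
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER posts_fkey_check BEFORE INSERT OR UPDATE ON posts
    FOR EACH ROW EXECUTE FUNCTION posts_check_user();
```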
On 1st of August 2018, Peter Eisentraut committed patch:
Allow multi-inserts during COPY into a partitioned table

CopyFrom allows multi-inserts to be used for non-partitioned tables, but this was disabled for partitioned tables. The reason for this appeared to be that the tuple may not belong to the same partition as the previous tuple did. Not allowing multi-inserts here greatly slowed down imports into partitioned tables. These could take twice as long as a copy to an equivalent non-partitioned table. It seems wise to do something about this, so this change allows the multi-inserts by flushing the so-far inserted tuples to the partition when the next tuple does not belong to the same partition, or when the buffer fills. This improves performance when the next tuple in the stream commonly belongs to the same partition as the previous tuple.

In cases where the target partition changes on every tuple, using multi-inserts slightly slows the performance. To get around this we track the average size of the batches that have been inserted and adaptively enable or disable multi-inserts based on the size of the batch. Some testing was done and the regression only seems to exist when the average size of the insert batch is close to 1, so let's just enable multi-inserts when the average size is at least 1.3. More performance testing might reveal a better number for this, but since the slowdown was only 1-2% it does not seem critical enough to spend too much time calculating it. In any case it may depend on other factors rather than just the size of the batch.

Allowing multi-inserts for partitions required a bit of work around the per-tuple memory contexts, as we must flush the tuples when the next tuple does not belong to the same partition, in which case there is no good time to reset the per-tuple context, as we've already built the new tuple by this time. In order to work around this we maintain two per-tuple contexts and just switch between them every time the partition changes and reset the old one. This does mean that the first of each batch of tuples is not allocated in the same memory context as the others, but that does not matter since we only reset the context once the previous batch has been inserted.

Author: David Rowley <david.rowley@2ndquadrant.com>
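From the user's side nothing changes; a plain COPY into a partitioned table simply got faster. A minimal sketch, with hypothetical names and file path:

```sql
CREATE TABLE readings (ts date NOT NULL, v int) PARTITION BY RANGE (ts);
CREATE TABLE readings_2018 PARTITION OF readings
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

-- Rows are now buffered and multi-inserted per partition behind the scenes.
COPY readings FROM '/tmp/readings.csv' WITH (FORMAT csv);
```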