So, I made some changes to explain.depesz.com, and while letting you know about them, I'd like to brag a bit about the numbers, again 🙂
On 29th of March, Andrew Dunstan committed patch:
Add new JSON processing functions and parser API. The JSON parser is converted into a recursive descent parser, and exposed for use by other modules such as extensions. The API provides hooks for all the significant parser events such as the beginning and end of objects and arrays, and providing functions to handle these hooks allows for fairly simple construction of a wide variety of JSON processing functions. A set of new basic processing functions and operators is also added, which use this API, including operations to extract array elements, object fields, get the length of arrays and the set of keys of a field, deconstruct an object into a set of key/value pairs, and create records from JSON objects and arrays of objects. Catalog version bumped. Andrew Dunstan, with some documentation assistance from Merlin Moncure.
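Just to show the flavor of the new user-facing bits, a minimal sketch (the document and values are obviously made up, but the operators and functions are the ones this patch adds):

    -- a sample JSON document to play with
    WITH doc(j) AS (
        SELECT '{"name": "depesz", "tags": ["pg", "json"], "visits": 17}'::json
    )
    SELECT
        j ->  'name'                     AS name_as_json,   -- object field, returned as json
        j ->> 'name'                     AS name_as_text,   -- object field, returned as text
        j #>> '{tags,0}'                 AS first_tag,      -- extraction by path
        json_array_length(j -> 'tags')   AS tag_count
    FROM doc;

    -- deconstruct an object into a set of key/value pairs
    SELECT * FROM json_each('{"a": 1, "b": "two"}'::json);

    -- list the keys of an object
    SELECT json_object_keys('{"a": 1, "b": "two"}'::json);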
On 24th of March, Andrew Dunstan committed patch:
Add parallel pg_dump option. New infrastructure is added which creates a set number of workers (threads on Windows, forked processes on Unix). Jobs are then handed out to these workers by the master process as needed. pg_restore is adjusted to use this new infrastructure in place of the old setup which created a new worker for each step on the fly. Parallel dumps acquire a snapshot clone in order to stay consistent, if available. The parallel option is selected by the -j / --jobs command line parameter of pg_dump. Joachim Wieland, lightly editorialized by Andrew Dunstan.
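For the impatient, a minimal sketch of how it's invoked (paths and database name are made up); as far as I can tell, parallel dumps need the directory output format:

    # dump with 4 parallel workers into a directory-format dump
    pg_dump -Fd -j 4 -f /var/backups/appdb.dump appdb

    # pg_restore could already restore in parallel, and keeps the same switch
    pg_restore -j 4 -d appdb /var/backups/appdb.dump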
On 10th of March, Tom Lane committed patch:
Support writable foreign tables. This patch adds the core-system infrastructure needed to support updates on foreign tables, and extends contrib/postgres_fdw to allow updates against remote Postgres servers. There's still a great deal of room for improvement in optimization of remote updates, but at least there's basic functionality there now. KaiGai Kohei, reviewed by Alexander Korotkov and Laurenz Albe, and rather heavily revised by Tom Lane.
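A minimal sketch of what this enables (server name, credentials and table names are made up):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote.example.com', dbname 'appdb');

    CREATE USER MAPPING FOR CURRENT_USER SERVER remote_pg
        OPTIONS (user 'app', password 'secret');

    CREATE FOREIGN TABLE remote_users (id int, name text)
        SERVER remote_pg OPTIONS (table_name 'users');

    -- before this patch a foreign table was effectively read-only; now:
    INSERT INTO remote_users (id, name) VALUES (1, 'depesz');
    UPDATE remote_users SET name = 'hubert' WHERE id = 1;
    DELETE FROM remote_users WHERE id = 1;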
On 10th of March, Magnus Hagander committed patch:
Report pg_hba line number and contents when users fail to log in. Instead of just reporting which user failed to log in, log both the line number in the active pg_hba.conf file (which may not match reality in case the file has been edited and not reloaded) and the contents of the matching line (which will always be correct), to make it easier to debug incorrect pg_hba.conf files. The message to the client remains unchanged and does not include this information, to prevent leaking security sensitive information. Reviewed by Tom Lane and Dean Rasheed.
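Roughly speaking (the entry and line number are made up, and I'm quoting the log wording from memory, so treat it as an approximation): if line 12 of pg_hba.conf is

    host    all    all    127.0.0.1/32    md5

then a failed login now shows up in the server log (and only there) with a DETAIL pointing at that entry, along the lines of:

    FATAL:  password authentication failed for user "depesz"
    DETAIL:  Connection matched pg_hba.conf line 12: "host all all 127.0.0.1/32 md5"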
On 10th of March, Andrew Dunstan committed patch:
JSON generation improvements. This adds the following:

    json_agg(anyrecord) -> json
    to_json(any) -> json
    hstore_to_json(hstore) -> json (also used as a cast)
    hstore_to_json_loose(hstore) -> json

The last provides heuristic treatment of numbers and booleans. Also, in json generation, if any non-builtin type has a cast to json, that function is used instead of the type's output function. Andrew Dunstan, reviewed by Steve Singer. Catalog version bumped.
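A quick sketch of the generation side (table and data are made up; the hstore calls of course need the hstore extension installed):

    CREATE TABLE plans (id serial PRIMARY KEY, title text, added timestamptz);

    SELECT to_json(title) FROM plans;      -- any value rendered as json
    SELECT to_json(p)     FROM plans p;    -- a whole row as a json object
    SELECT json_agg(p)    FROM plans p;    -- all rows aggregated into one json array

    -- with hstore installed:
    SELECT hstore_to_json('a=>1, b=>true'::hstore);        -- all values become json strings
    SELECT hstore_to_json_loose('a=>1, b=>true'::hstore);  -- 1 and true become number/boolean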
Yet another thing I missed for “Waiting for 9.3”. Sorry about that.
On 29th of January, Tom Lane committed patch:
Provide database object names as separate fields in error messages. This patch addresses the problem that applications currently have to extract object names from possibly-localized textual error messages, if they want to know for example which index caused a UNIQUE_VIOLATION failure. It adds new error message fields to the wire protocol, which can carry the name of a table, table column, data type, or constraint associated with the error. (Since the protocol spec has always instructed clients to ignore unrecognized field types, this should not create any compatibility problem.) Support for providing these new fields has been added to just a limited set of error reports (mainly, those in the "integrity constraint violation" SQLSTATE class), but we will doubtless add them to more calls in future. Pavel Stehule, reviewed and extensively revised by Peter Geoghegan, with additional hacking by Tom Lane.
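To make it concrete, a trivial sketch (table name is made up): the error below is exactly the kind of thing applications used to parse out of the message text; with this patch, libpq clients can instead ask for the new fields via PQresultErrorField(), using codes such as PG_DIAG_TABLE_NAME and PG_DIAG_CONSTRAINT_NAME:

    CREATE TABLE users (id int PRIMARY KEY);
    INSERT INTO users VALUES (1);
    INSERT INTO users VALUES (1);
    -- ERROR:  duplicate key value violates unique constraint "users_pkey"
    -- the error report now also carries schema, table and constraint names
    -- as separate, non-localized fields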
Some time ago I wrote a piece on speeding up the dump/restore process using a custom solution that parallelized the work.
Later on I wrote some tools (“fast dump and restore”) to do it in a more general way.
But all of them had one problem: to get a consistent dump, you had to stop all concurrent access to your database. Why, and how do we get rid of this limitation?
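The answer parallel pg_dump uses (when the server supports it) is snapshot cloning: one transaction exports its snapshot and the other workers adopt it, so everyone sees exactly the same state of the database. A minimal sketch of the underlying mechanism (the snapshot id below is just an example of the format the server returns):

    -- session 1: start a transaction and export its snapshot
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();   -- returns something like '000003A1-1'

    -- session 2 (and any number of others): adopt the very same snapshot
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '000003A1-1';
    -- both sessions now see identical data, so each can dump its own subset of tables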
On 4th of March, Kevin Grittner committed patch:
Add materialized view relations. A materialized view has a rule just like a view and a heap and other physical properties like a table. The rule is only used to populate the table; references in queries refer to the materialized data. This is a minimal implementation, but should still be useful in many cases. Currently data is only populated "on demand" by the CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements. It is expected that future releases will add incremental updates with various timings, and that a more refined concept of defining what is "fresh" data will be developed. At some point it may even be possible to have queries use a materialized view in place of references to underlying tables, but that requires the other above-mentioned features to be working first. Much of the documentation work by Robert Haas. Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja. Security review by KaiGai Kohei, with a decision on how best to implement sepgsql still pending.
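A minimal sketch (the source table, view and query are made up):

    CREATE TABLE page_views (page text, viewed_at timestamptz);

    CREATE MATERIALIZED VIEW daily_views AS
        SELECT page, date_trunc('day', viewed_at) AS day, count(*) AS views
        FROM page_views
        GROUP BY page, date_trunc('day', viewed_at);

    -- reads the stored, precomputed rows, not page_views:
    SELECT * FROM daily_views WHERE page = '/index';

    -- and the data is only as fresh as the last:
    REFRESH MATERIALIZED VIEW daily_views;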
On 1st of February, Peter Eisentraut committed patch:
Add CREATE RECURSIVE VIEW syntax. This is specified in the SQL standard. The CREATE RECURSIVE VIEW specification is transformed into a normal CREATE VIEW statement with a WITH RECURSIVE clause. Reviewed by Abhijit Menon-Sen and Stephen Frost.
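A minimal sketch of the new syntax (view and column names are made up), together with what it gets rewritten into:

    -- the new shorthand; the column list is required for recursive views:
    CREATE RECURSIVE VIEW nums (n) AS
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 100;

    -- which the server turns into the equivalent of
    -- (shown only for illustration; you would create one or the other):
    CREATE VIEW nums AS
        WITH RECURSIVE nums (n) AS (
            SELECT 1
            UNION ALL
            SELECT n + 1 FROM nums WHERE n < 100
        )
        SELECT n FROM nums;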