How to play with upcoming, unreleased, PostgreSQL?

Lately a reader of my blog asked about some detail, and then, in the next comment, said that he doesn't have PostgreSQL 12 (the currently developed version of Pg) available.

Well, I answered the original question, but I figured that if someone is reading my Waiting for … series, then it would make sense for such a person to be able to play with devel Pg herself.

So, here is how to get it done.

All commands that I show were tested on Debian 9.9 (stretch), but it should be trivial to modify them to run on any other Linux.

Given that my test server is empty (straight after install), I need to install some packages that will make it work:

  • build-essential – so that I will have normal development tools available
  • git – so I can download (and then refresh) Pg source
  • bison and flex – if you don't have them, compilation will complain
  • python-dev – so I can have PL/Python available
  • zlib1g-dev – for compression
  • libssl-dev – to be able to use cryptographic functions
  • libxml2-dev – so XML datatypes will work
  • libossp-uuid-dev – for UUID type
  • tcl-dev – in case I'd want to do some PL/Tcl coding
  • libperl-dev – well, for PL/Perl

So, let's install it:

root@pgtest:~# apt-get install build-essential git bison flex python-dev zlib1g-dev libssl-dev libxml2-dev libossp-uuid-dev tcl-dev libperl-dev
0 upgraded, 68 newly installed, 0 to remove and 0 not upgraded.

And some more packages (required just for compilation) that are easier to install as build-dep:

root@pgtest:~# apt-get build-dep postgresql
0 upgraded, 19 newly installed, 0 to remove and 0 not upgraded.

As you can see, it installed quite a lot of packages for me (on my clean system).

Next step – I prefer to run the test PostgreSQL on a separate account, just in case. I tend to use the account pgdba, but you can, of course, use any account you want:

root@pgtest:~# adduser --gecos "Test PostgreSQL account" --disabled-password pgdba
Adding user `pgdba' ...
Adding new group `pgdba' (1001) ...
Adding new user `pgdba' (1001) with group `pgdba' ...
Creating home directory `/home/pgdba' ...
Copying files from `/etc/skel' ...

You might want to skip the --disabled-password option to have password access to pgdba, but I use ssh with keys even on localhost 🙂
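With password login disabled, you still need some way into the account. One hedged sketch – to be run as root, with paths assuming the adduser defaults above, and your public key path passed as the argument – could look like:

```shell
# Install a public key for pgdba so key-based ssh works even though
# password login is disabled. Run as root; the example key path below
# is an assumption -- use your own.
install_pgdba_key() {
    pubkey="$1"                   # e.g. /root/.ssh/id_ed25519.pub
    install -d -m 700 -o pgdba -g pgdba /home/pgdba/.ssh
    install -m 600 -o pgdba -g pgdba "$pubkey" /home/pgdba/.ssh/authorized_keys
}
```

After running it with your public key, ssh pgdba@localhost should log you in without a password.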

Now, we can get the source:

pgdba@pgtest:~$ mkdir -pv src/git
mkdir: created directory 'src'
mkdir: created directory 'src/git'
pgdba@pgtest:~$ git clone git:// src/git/postgresql/
Cloning into 'src/git/postgresql'...
remote: Counting objects: 734318, done.
remote: Compressing objects: 100% (109454/109454), done.
remote: Total 734318 (delta 628160), reused 728129 (delta 622061)
Receiving objects: 100% (734318/734318), 220.79 MiB | 5.08 MiB/s, done.
Resolving deltas: 100% (628160/628160), done.
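The scripts described below refetch the sources on every run, but refreshing the checkout by hand is just a pull. A small helper, assuming the clone target used above:

```shell
# Pull the newest commits into the existing checkout; the path matches
# the directory layout chosen in this post.
refresh_pg_sources() {
    git -C "$HOME/src/git/postgresql" pull --ff-only
    git -C "$HOME/src/git/postgresql" log -1 --oneline   # show current commit
}
```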

Now, a couple of decisions. I figured that I'll put the compiled PostgreSQL in ~pgdba/work, and the data directory will be ~pgdba/data.

Thanks to this, everything related to this dev Pg will live under ~pgdba.

To make it official, let's:

pgdba@pgtest:~$ echo >> ~/.profile
pgdba@pgtest:~$ echo 'export PGDATA="${HOME}/data"' >> ~/.profile
pgdba@pgtest:~$ echo 'PATH="${HOME}/work/bin:$PATH"' >> ~/.profile
pgdba@pgtest:~$ . ~/.profile
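A quick way to verify that the new settings are in effect – wrapped in a function here so it can be re-run after edits; the expected values match the layout chosen above:

```shell
# Check that PGDATA is set and work/bin is on PATH, per the .profile
# lines added above.
check_env() {
    echo "PGDATA=$PGDATA"
    case ":$PATH:" in
        *":$HOME/work/bin:"*) echo "work/bin is on PATH" ;;
        *)                    echo "work/bin missing from PATH" ;;
    esac
}
check_env
```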

Now we can get to compilation. But I don't like repeating myself, so I created two scripts:

  • – main script wrapping work related to installing current Pg HEAD
  • src/ – helper script wrapping ./configure options

These scripts also configure newly installed Pg, so let's create directory for configuration bits:

pgdba@pgtest:~$ mkdir -p ~/src/configs

and then download these two files there:

Of course, feel free to open the files, and modify according to your taste.
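The real logic is in the downloaded files, but to give an idea of what runs when you call the main script, here is a rough, hypothetical sketch reconstructed from the log output below – every path and configure option here is an assumption, so prefer the files themselves:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a rebuild script: stop old Pg, wipe it, refetch
# sources, build, install, initdb, start. Paths and options are assumed.
set -e

progress() { echo "$(date '+%Y-%m-%d %H:%M:%S') : $*"; }

rebuild_pg() {
    progress "Stopping Pg"
    pg_ctl -w -m immediate stop || true

    progress "Removing Pg"
    rm -Rf "${PGDATA}"/* work/* src/pgsql

    progress "Fetching newest PG sources"
    git -C src/git/postgresql pull --ff-only
    git clone -q src/git/postgresql src/pgsql

    progress "Configuring sources"
    ( cd src/pgsql && ./configure --prefix="$HOME/work" \
        --with-openssl --with-libxml --with-uuid=ossp \
        --with-python --with-perl --with-tcl )

    progress "Compiling and installing Pg"
    make -C src/pgsql install

    progress "Compiling and installing contrib"
    make -C src/pgsql/contrib install

    progress "Running initdb"
    initdb

    progress "Starting new Pg"
    pg_ctl -w start

    progress "Remove obsolete sources"
    rm -Rf src/pgsql
}

# rebuild_pg    # uncomment to actually run the whole cycle
```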

Once everything is in place, I simply run

pgdba@pgtest:~$ ./ 
2019-05-14 23:42:45 : Stopping Pg
./ line 22: pg_ctl: command not found
2019-05-14 23:42:45 : Removing Pg
2019-05-14 23:42:45 : Fetching newest PG sources
real    0m0.485s
user    0m0.032s
sys     0m0.028s
2019-05-14 23:42:46 : Configuring sources
real    0m13.562s
user    0m8.832s
sys     0m1.276s
2019-05-14 23:42:59 : Compiling Pg
All of PostgreSQL successfully made. Ready to install.
real    2m59.661s
user    2m42.392s
sys     0m9.788s
2019-05-14 23:45:59 : Installing Pg
PostgreSQL installation complete.
real    0m1.886s
user    0m0.880s
sys     0m0.116s
2019-05-14 23:46:01 : Compiling contrib
real    0m17.562s
user    0m15.180s
sys     0m1.240s
2019-05-14 23:46:18 : Installing contrib
real    0m0.413s
user    0m0.124s
sys     0m0.012s
2019-05-14 23:46:19 : Fetching documentation for dev Pg
real    0m7.946s
user    0m0.280s
sys     0m1.204s
2019-05-14 23:46:27 : Installing documentation for dev pg
2019-05-14 23:46:29 : Running initdb
real    0m3.301s
user    0m0.312s
sys     0m0.144s
2019-05-14 23:46:33 : Fixing configuration
2019-05-14 23:46:33 : Starting new Pg
waiting for server to start....2019-05-14 23:46:33.211 CEST @ 26783  LOG:  redirecting log output to logging collector process
2019-05-14 23:46:33.211 CEST @ 26783  HINT:  Future log output will appear in directory "log".
server started
2019-05-14 23:46:33 : Remove obsolete sources
2019-05-14 23:46:33 : Some final touches to allow easier work
2019-05-14 23:46:34 : All done.

As you can see, the whole procedure took ~4 minutes on my single-core, virtual server.

Afterwards I can:

pgdba@pgtest:~$ psql -c 'select version()'
                          version
------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
(1 row)

To stop or start the Pg on this account, I simply:

pgdba@pgtest:~$ pg_ctl stop
waiting for server to shut down.... done
server stopped
pgdba@pgtest:~$ pg_ctl start
waiting for server to start....2019-05-14 23:56:54.325 CEST @ 26834  LOG:  redirecting log output to logging collector process
2019-05-14 23:56:54.325 CEST @ 26834  HINT:  Future log output will appear in directory "log".
server started

Please note that this approach doesn't keep the data that you loaded into the test database – all of its content is dropped by the rm -Rf line in the script:

  progress "Stopping Pg"
  pg_ctl -w -m immediate stop || true

  progress "Removing Pg"
  rm -Rf "${PGDATA}"/* work/* src/pgsql

  progress "Fetching newest PG sources"

So, if you'd like to keep the data across upgrades – make a pg_dump before calling the script, and then load it back once Pg is back up.
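As a hypothetical sketch of that cycle (the database name and the rebuild-script path below are placeholders, not the real names from this post):

```shell
# Dump one database, rebuild Pg, then restore the dump. Placeholder names
# throughout -- substitute your own database and script path.
preserve_db() {
    db="$1"
    dump="$HOME/${db}-pre-rebuild.dump"
    pg_dump -Fc -f "$dump" "$db"      # custom format restores cleanly into newer Pg
    "$HOME/rebuild-script"            # placeholder for the rebuild script
    createdb "$db"
    pg_restore -d "$db" "$dump"
}
```

Then preserve_db mydb handles the whole dump/rebuild/restore round trip.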

Hope it helps 🙂

4 thoughts on “How to play with upcoming, unreleased, PostgreSQL?”

  1. Great article!
    I have also introduced a not-yet-completed feature in pgenv that allows for downloading and building the git source tree. See .

  2. A less flexible but maybe easier-to-use alternative for some:
    Both the / have pg12 (daily) builds already available (for testing), which can simply be apt-get installed.

  3. This absolutely is a useful article. The problem is that it won’t take just those 4 minutes for me – because I don’t have any Linux environment at my disposal at all. So it would take 4 minutes plus however long it takes me to provision some kind of Debian system. Or maybe I wouldn’t use Debian but just figure out how to do the equivalent steps in one of my available environments (MacOS and FreeBSD). No matter which way I go, though, I have this uncertain one-time up-front cost to pay… and I haven’t done that so far. It would be good to do that, of course, since it would offer me many benefits besides getting to play with unreleased Pg versions. But however good the investment, I need to be able to make it first.

    I’m certainly not the only person in this position, though, so it would be really awesome if there were a Postgres playground – like the ones for Go or Haskell, where you can play with the language in a browser session without having to install anything whatsoever. But of course building and running that would require time and money that someone else would have to invest.

  4. @Aristotle:

    damn, I keep forgetting that there are people out there who are using non-Linux OSes.

    Anyway – as long as the OS you're using is something unix-like, you should be able to do the whole thing with some minor changes to the scripts.
