PostgreSQL Redeploys with Juju Storage

New features and improvements to Juju 2.3's storage support provide new mechanisms for server migrations. A PostgreSQL deployment can be made with its data stored on an attached volume, such as a Ceph mount or an Amazon EBS volume. To migrate to a new instance, we can bring up new units in the same or a new Juju model, or even under a new Juju controller, and reuse the storage volume to bring our data across.

While this has been possible for a while, it was an ad hoc process that needed to be performed manually or with non-standard tools such as the (now deprecated) BlockStorageBroker charms. With Juju 2.3 the process becomes smooth and can be managed entirely with Juju. Charms like PostgreSQL now have standard mechanisms they can use to support these sorts of processes, which creates opportunities for new features such as major version upgrades.

Starting with a configured Juju controller and a fresh model, PostgreSQL can easily be deployed using Juju, with Juju managing storage. Using Amazon for this example, the following will deploy a PostgreSQL instance with 50GB of attached EBS storage:

juju deploy cs:postgresql --storage pgdata=ebs,50G

While it is possible to attach storage after the initial deploy, for PostgreSQL it is best to specify storage at deployment time. This way, new units will also be deployed with the same attached storage and have enough space to replicate the database from the primary. So to add a hot standby unit:

juju add-unit postgresql
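For completeness, storage can also be attached to a unit that already exists. A hedged sketch (assumes Juju 2.3+; the unit name and storage constraints mirror the deploy above, and the script only echoes the command when no Juju client is available):

```shell
# Sketch only: attach an additional 50GB EBS volume to an existing unit.
# Unit name and storage constraints match the deploy earlier in this post.
unit="postgresql/0"
constraints="pgdata=ebs,50G"

if command -v juju >/dev/null 2>&1; then
    # Requires a bootstrapped controller and the postgresql/0 unit.
    juju add-storage "$unit" "$constraints"
else
    # No Juju client here; show what would run.
    echo "would run: juju add-storage $unit $constraints"
fi
```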

After things settle, you end up with a deployment like this:

$ juju status

Model          Controller          Cloud/Region        Version  SLA
rightsaidfred  aws-ap-southeast-2  aws/ap-southeast-2  2.3.1    unsupported

App         Version  Status  Scale  Charm       Store       Rev  OS      Notes
postgresql  9.5.10   active  2      postgresql  jujucharms  164  ubuntu

Unit           Workload  Agent  Machine  Public address  Ports     Message
postgresql/0*  active    idle   0        13.211.42.219   5432/tcp  Live master (9.5.10)
postgresql/1   active    idle   1        52.65.20.140    5432/tcp  Live secondary (9.5.10)

Machine  State    DNS            Inst id              Series  AZ               Message
0        started  13.211.42.219  i-0fc0f0a21290ff909  xenial  ap-southeast-2a  running
1        started  52.65.20.140   i-0be95ac0e1a048e6f  xenial  ap-southeast-2b  running

Relation provider       Requirer                Interface    Type  Message
postgresql:coordinator  postgresql:coordinator  coordinator  peer
postgresql:replication  postgresql:replication  pgpeer       peer

$ juju list-storage

[Storage]
Unit          Id        Type        Pool  Provider id            Size   Status    Message
postgresql/0  pgdata/0  filesystem  ebs   vol-0bbe053869187f9c6  50GiB  attached
postgresql/1  pgdata/1  filesystem  ebs   vol-0a95d56991e1dff1b  50GiB  attached
$ juju ssh postgresql/0
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1047-aws x86_64)
[...]
ubuntu@ip-172-31-15-43:~$ sudo -u postgres psql
psql (9.5.10)
Type "help" for help.

postgres=# create table data(d text);
CREATE TABLE
postgres=# insert into data values ('hello');
INSERT 0 1
postgres=# \q
ubuntu@ip-172-31-15-43:~$ exit
logout
Connection to 13.211.42.219 closed.

We can now tear down this deployment while preserving our data:

$ juju destroy-model rightsaidfred
WARNING! This command will destroy the "rightsaidfred" model.
This includes all machines, applications, data and other resources.

Continue [y/N]? y
Destroying model
ERROR cannot destroy model "rightsaidfred"

The model has persistent storage remaining:
        2 volumes and 2 filesystems

To destroy the storage, run the destroy-model
command again with the "--destroy-storage" flag.

To release the storage from Juju's management
without destroying it, use the "--release-storage"
flag instead. The storage can then be imported
into another Juju model.

$ juju destroy-model rightsaidfred --release-storage
[...]
Model destroyed.

For the record, I normally wouldn’t be so brash as to destroy the old model before bringing up the replacement. A better approach when dealing with production data is to put the PostgreSQL database into backup mode and duplicate the storage volume (exactly how depends on your cloud provider or bare metal setup). You would then proceed to bring up the new deployment with the duplicated filesystem, while leaving the original deployment in place in case you need to back out the migration.
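As a hedged sketch of that safer approach on AWS: the volume ID comes from `juju list-storage`, the backup label and availability zone are illustrative, and exact steps vary by provider. The script only echoes a note when the CLIs are unavailable.

```shell
# Illustrative sketch: snapshot the data volume while PostgreSQL is in
# backup mode, then create a duplicate volume from the snapshot.
vol="vol-0bbe053869187f9c6"   # provider ID reported by `juju list-storage`

if command -v aws >/dev/null 2>&1 && command -v juju >/dev/null 2>&1; then
    # Enter backup mode so the filesystem copy is consistent (PostgreSQL 9.5).
    juju ssh postgresql/0 \
        "sudo -u postgres psql -c \"SELECT pg_start_backup('migration', true)\""

    # Snapshot the volume, then build a new volume from that snapshot.
    snap=$(aws ec2 create-snapshot --volume-id "$vol" \
           --query SnapshotId --output text)
    aws ec2 wait snapshot-completed --snapshot-ids "$snap"
    aws ec2 create-volume --snapshot-id "$snap" \
        --availability-zone ap-southeast-2a

    # Leave backup mode.
    juju ssh postgresql/0 "sudo -u postgres psql -c 'SELECT pg_stop_backup()'"
else
    echo "aws/juju CLI not available; commands shown for illustration only"
fi
```

The duplicated volume, rather than the original, would then be handed to `juju import-filesystem` in the new model.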

Continuing, build the new deployment in a new model. First, bring up the master database, reusing the released master storage (pgdata/0, vol-0bbe053869187f9c6):

$ juju add-model knowwhatimean
Uploading credential 'aws/admin/aws' to controller
Added 'knowwhatimean' model on aws/ap-southeast-2 with credential 'aws' for user 'admin'

$ juju import-filesystem ebs vol-0bbe053869187f9c6 pgdata
importing "vol-0bbe053869187f9c6" from storage pool "ebs" as storage "pgdata"
imported storage pgdata/0

$ juju deploy cs:postgresql --attach-storage pgdata/0
Located charm "cs:postgresql-164".
Deploying charm "cs:postgresql-164".

At this point, it is important to wait for setup to complete. If we attempted to bring up a second unit right now, there is a chance the second unit would be anointed the master; it would depend on which AWS VM happened to spin up and complete initial setup first. And that would be bad, as a new, empty database would be replicated instead of the one on the attached storage.
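One way to script the wait, assuming `jq` is installed (the JSON paths below match Juju 2.3's `juju status --format=json` output; the helper name is mine):

```shell
# pg_status: read `juju status --format=json` on stdin and print the
# workload status of postgresql/0 (e.g. "waiting", "active").
pg_status() {
    jq -r '.applications.postgresql.units["postgresql/0"]["workload-status"].current'
}

# In practice you would poll, e.g.:
#   until juju status --format=json | pg_status | grep -qx active; do sleep 10; done

# Demonstration against a minimal status document:
sample='{"applications":{"postgresql":{"units":{"postgresql/0":{"workload-status":{"current":"active"}}}}}}'
printf '%s\n' "$sample" | pg_status
```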

$ juju status

Model          Controller          Cloud/Region        Version  SLA
knowwhatimean  aws-ap-southeast-2  aws/ap-southeast-2  2.3.1    unsupported

App         Version  Status  Scale  Charm       Store       Rev  OS      Notes
postgresql  9.5.10   active  1      postgresql  jujucharms  164  ubuntu

Unit           Workload  Agent  Machine  Public address  Ports     Message
postgresql/0*  active    idle   0        13.55.208.91    5432/tcp  Live master (9.5.10)

Machine  State    DNS           Inst id              Series  AZ               Message
0        started  13.55.208.91  i-09c740e176da8f90f  xenial  ap-southeast-2a  running

Relation provider       Requirer                Interface    Type  Message
postgresql:coordinator  postgresql:coordinator  coordinator  peer
postgresql:replication  postgresql:replication  pgpeer       peer

$ juju ssh postgresql/0
[...]
ubuntu@ip-172-31-11-134:~$ sudo -u postgres psql
psql (9.5.10)
Type "help" for help.

postgres=# \d data
     Table "public.data"
 Column | Type | Modifiers
--------+------+-----------
 d      | text |

postgres=# \q
ubuntu@ip-172-31-11-134:~$ exit
logout
Connection to 13.55.208.91 closed.

Now, it is safe to add a new unit.

$ juju import-filesystem ebs vol-0a95d56991e1dff1b pgdata
importing "vol-0a95d56991e1dff1b" from storage pool "ebs" as storage "pgdata"
imported storage pgdata/1

$ juju add-unit postgresql --attach-storage pgdata/1

[... wait ...]

$ juju status

Model          Controller          Cloud/Region        Version  SLA
knowwhatimean  aws-ap-southeast-2  aws/ap-southeast-2  2.3.1    unsupported

App         Version  Status  Scale  Charm       Store       Rev  OS      Notes
postgresql  9.5.10   active  2      postgresql  jujucharms  164  ubuntu

Unit           Workload  Agent  Machine  Public address  Ports     Message
postgresql/0*  active    idle   0        13.55.208.91    5432/tcp  Live master (9.5.10)
postgresql/1   active    idle   1        52.65.239.125   5432/tcp  Live secondary (9.5.10)

Machine  State    DNS            Inst id              Series  AZ               Message
0        started  13.55.208.91   i-09c740e176da8f90f  xenial  ap-southeast-2a  running
1        started  52.65.239.125  i-0c60aea0716cf8320  xenial  ap-southeast-2b  running

Relation provider       Requirer                Interface    Type  Message
postgresql:coordinator  postgresql:coordinator  coordinator  peer
postgresql:replication  postgresql:replication  pgpeer       peer

$ juju list-storage

[Storage]
Unit          Id        Type        Pool  Provider id            Size   Status    Message
postgresql/0  pgdata/0  filesystem  ebs   vol-0bbe053869187f9c6  50GiB  attached
postgresql/1  pgdata/1  filesystem  ebs   vol-0a95d56991e1dff1b  50GiB  attached

Future charm work is expected to make migration stories even easier, with Ubuntu 18.04 (Bionic) support and the latest version of PostgreSQL. pg_rewind will avoid unnecessary database cloning. And logical replication should allow major version upgrades, bringing up a new PostgreSQL deployment in parallel with a live deployment and cutting over.
