Life on a rollercoaster
Scaling the PostgreSQL backup and recovery
Federico Campoli
Brighton PostgreSQL Users Group
9 September 2016
Table of contents
1 2012 - Who am I? What am I doing?
2 2013 - On thin ice
3 2014 - The battle of five armies
4 2015 - THIS IS SPARTA!
5 2016 - Breathe
6 Wrap up
Warning!
The story you are about to hear is true.
Only the names have been changed to protect the innocent.
Dramatis personae
A brilliant startup: ACME
The clueless engineers: CE
An elephant on steroids: PG
The big cheese: HW
The real hero: DBA
In the beginning
Our story starts in the year 2012. The world was young and our DBA started a
brilliant new career at ACME.
After the usual onboarding period, our DBA was handed the production servers.
2012 - Who am I? What am I doing?
Size does matter
PG, our powerful and friendly elephant, stored the data in a multi-shard
configuration.
Not really big, actually, but very troubled indeed!
A small logger database - 50 GB
A larger configuration and auth database - 200 GB
Two archive databases - 4 TB each
One database for the business intelligence - 2 TB
Each database had a hot standby counterpart hosted on less powerful HW.
Our story tells the life of the BI database.
The carnival of monsters
In early 2013 our brave DBA addressed the several problems found in the
existing backup and recovery configuration.
Lagging standby
Suboptimal schema.
Churn on large tables and a high WAL generation rate
The slave lagged whenever autovacuum was running
rsync used in the archive command.
The WAL segments were archived over the network using rsync+ssh (a sketch follows)
The *.ready files piling up in pg_xlog increased the risk of a cluster crash
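The archive command in question might have looked roughly like the line below;
this is a hedged reconstruction, and the standby hostname and path are
hypothetical rather than taken from the slides.

    # postgresql.conf on the master (hypothetical reconstruction)
    archive_command = 'rsync -a -e ssh %p standby1:/var/lib/postgresql/wal_archive/%f'

When the network or the standby slowed down the archiver fell behind: the
.ready files piled up in pg_xlog/archive_status, the WAL segments they referred
to could not be recycled, and the partition risked filling up.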
Base backup
Rudimentary init standby script.
Just a pg_start_backup call followed by an rsync between the master and the
slave
The several tablespaces were synced by a single rsync process with the
--delete option (a sketch follows)
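A minimal sketch of that naive approach, assuming a hypothetical hostname and
layout; everything ran serially while the cluster sat in backup mode.

    # Naive init standby: one rsync at a time, cluster kept in backup mode throughout
    psql -c "SELECT pg_start_backup('init_standby');"
    rsync -a --delete /var/lib/postgresql/9.1/main/ standby1:/var/lib/postgresql/9.1/main/
    rsync -a --delete /data/tbs_bi/ standby1:/data/tbs_bi/   # ...and so on, one tablespace after another
    psql -c "SELECT pg_stop_backup();"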
Slow dump
Remote pg_dump.
Each cluster was dumped remotely from a separate server using the custom
format (a sketch follows)
The backup server had limited memory and CPU
Dump times ranged from 3 hours to 2 days depending on the database size
The BI database was dumped on a daily basis, taking 14 to 18 hours.
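Such a remote dump boils down to a single command run on the backup server; the
hostname and database name below are hypothetical.

    # Run on the backup server: rows cross the network in COPY format and are
    # compressed locally by pg_dump (custom format), loading the backup server's CPU
    pg_dump -h bi-master.acme.example -U postgres -Fc -f /backup/db_bi.dmp db_bi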
2013 - On thin ice
Parallel rsync
Our DBA took the baby-steps approach. He started fixing one issue at a time
without affecting ACME's activity.
The first enhancement was the init standby script (a sketch follows the list).
Two bash arrays listed the origin and destination tablespaces
An empty bash array stored the rsync pids
The script called pg_start_backup
For each tablespace an rsync process was spawned and its pid was stored in
the third array
A loop checked whether the pids were still present in the process list
When all the rsync processes had finished, pg_stop_backup was executed
An email was sent to the DBA telling him to start the slave
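A condensed sketch of that script; paths, hostname and tablespace names are
hypothetical, and the polling loop over the process list is replaced here by a
plain wait for brevity.

    #!/bin/bash
    # Parallel base backup: one rsync per tablespace (hypothetical layout)
    SRC_TBS=(/var/lib/postgresql/9.2/main /data/tbs_bi /data/tbs_archive)
    DST_TBS=(standby1:/var/lib/postgresql/9.2/main standby1:/data/tbs_bi standby1:/data/tbs_archive)
    PIDS=()

    psql -c "SELECT pg_start_backup('init_standby');"

    for i in "${!SRC_TBS[@]}"
    do
        rsync -a --delete "${SRC_TBS[$i]}/" "${DST_TBS[$i]}/" &
        PIDS+=($!)                      # remember each rsync pid
    done

    for pid in "${PIDS[@]}"
    do
        wait "$pid"                     # the original script polled the process list instead
    done

    psql -c "SELECT pg_stop_backup();"
    echo "base backup done, start the slave" | mail -s "init standby" dba@acme.example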
Local archive
The high rate of WAL generation required a different archive strategy.
The archive_command was changed to a local copy
A simple rsync script copied the archived WALs to the slave every minute
The script queried the slave remotely for its last restartpoint
The restartpoint was then used by pg_archivecleanup on the master
Implementing this solution solved the *.ready files problem, but autovacuum
still caused high lag. A sketch of the shipping script follows.
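A hedged sketch of how such a shipping and cleanup job can be put together; the
hostnames, paths and the way the restartpoint is read back are assumptions
rather than details from the slides.

    #!/bin/bash
    # Run every minute on the master (e.g. from cron).
    # The archive_command itself is just a local copy, e.g.:
    #   archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'
    ARCHIVE=/var/lib/postgresql/wal_archive
    STANDBY=standby1

    # ship the locally archived WAL segments to the slave
    rsync -a "$ARCHIVE/" "$STANDBY:/var/lib/postgresql/wal_archive/"

    # read the slave's last restartpoint; depending on the PG version this may come
    # from pg_controldata (as here) or from a query on the standby
    LAST_WAL=$(ssh "$STANDBY" pg_controldata /var/lib/postgresql/9.2/main \
               | awk '/REDO WAL file/ {print $NF}')

    # drop from the local archive everything older than the slave's restartpoint
    [ -n "$LAST_WAL" ] && pg_archivecleanup "$ARCHIVE" "$LAST_WAL"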
Autovacuum tune down
DBA investigated the autovacuum issue and finally addressed the cause.
The high lag on the slave appeared whenever autovacuum (or a manual vacuum) hit
a table that was being updated concurrently. This behaviour is normal and comes
from the design of the standby code.
With large, constantly updated denormalised tables the only possible workaround
was cost-based autovacuum with a long sleep: whenever the autovacuum process
reached an arbitrary cost during its run, it paused for one minute before
resuming (the relevant knobs are sketched below).
The lag on the standbys disappeared, at the cost of longer autovacuum runs.
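The knobs involved are the cost-based vacuum delay settings; the values below
are purely illustrative and not the ones used at ACME.

    # postgresql.conf - cost-based autovacuum throttling (illustrative values only).
    # When the accumulated cost reaches the limit the worker sleeps for the delay
    # before carrying on, trading vacuum speed for less pressure on the standby.
    autovacuum_vacuum_cost_limit = 200
    autovacuum_vacuum_cost_delay = 20ms
    autovacuum_naptime = 5min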
In the meanwhile...
The CE decided to shard the business intelligence database using the hot
standby copy
The three new databases initially held the same amount of data, which was
slowly cleaned up later
But even with one third of the data on each shard, the daily dump was so slow
that it risked overrunning the 24-hour window
A slowish dump
pg_dump connects to the running cluster like any other backend; it pulls the
data out in COPY format
With the custom format the compression happens on the machine where
pg_dump runs
The backup server was hammered on both the network and the CPU
You got speed!
Our DBA wrote a bash script doing the following steps (a sketch follows the list)
Dump the database locally in custom format
Generate the file's md5 checksum
Ship the file to the backup server via rsync
Check the remote file's md5
Send a message to nagios on success or failure
The backup time per cluster dropped dramatically to just 5 hours, including
the copy and the checksum verification.
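A condensed sketch of such a job; the hostnames, paths and the nagios helper
(nagios_notify) are hypothetical.

    #!/bin/bash
    # Local dump-and-ship job (hypothetical names and paths)
    DB=db_bi
    DUMP=/backup/${DB}_$(date +%Y%m%d).dmp
    BACKUP_SRV=backupsrv

    # dump locally in custom format, checksum it, ship both files, verify the remote copy
    pg_dump -Fc -f "$DUMP" "$DB" \
      && md5sum "$DUMP" > "$DUMP.md5" \
      && rsync -a "$DUMP" "$DUMP.md5" "$BACKUP_SRV:/backup/" \
      && ssh "$BACKUP_SRV" "md5sum -c /backup/$(basename "$DUMP").md5"

    if [ $? -eq 0 ]
    then
        nagios_notify OK "backup $DB completed"        # hypothetical nagios helper
    else
        nagios_notify CRITICAL "backup $DB failed"
    fi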
Growing pains
Despite the business growth, the CE ignored the problems caused by the poor
schema design.
Speed was achieved by brute force, using expensive SSD storage
The amount of data stored in the BI db kept increasing
The only accepted solution was to create new shards over and over again
By the end of 2013 the total size of the BI databases was 15 TB
In the meanwhile...
Our DBA upgraded all the PG clusters to version 9.2 with pg_upgrade
THANKS BRUCE!!!!!
2014 - The battle of five armies
Bloating data
Q1 2014 opened with another backup performance issue
The dump size kept increasing over time
The database CPU usage grew constantly for no apparent reason
Most of the shards had their tablespace usage at 90%
Mostly harmless
Against all odds our DBA tracked down the issue.
The main table used by the BI database was causing the bloat
The table's design was technically a materialised view
The table was partitioned, after a fashion
The table had a harmless hstore field
Where everybody added new keys just by changing the app code
And nobody did any housekeeping of their data
The row length jumped from 200 bytes to 1200 bytes in a few months
Each BI shard contained up to 2 billion rows...
Just a little trick
Despite the impending doom and the CE's resistance, DBA succeeded in converting
the hstore field into a set of conventional columns (SORRY OLEG!).
The storage usage dropped by 30%
The CPU usage dropped by 60%
ACME's product got a noticeable speed boost
ACME saved $BIG_BUNCH_OF_MONEY in new HW that would otherwise have been
required to shard the dying databases yet again
In the meanwhile...
DBA knew the fix was just a workaround
He asked the CE to help him with the schema redesign
He told them things would become problematic again in just one year
Nobody listened
2015 - THIS IS SPARTA!
I hate to say that, but I told you so
As predicted by our DBA, the time required to back up the BI databases
increased again, dangerously approaching the 24-hour mark.
Parallel is da way!
Back in 2013, PG 9.3 added the parallel dump. But, to DBA's great
disappointment, version 9.3 was initially cursed by some bugs causing data
corruption, so DBA could not upgrade and use the parallel dump.
However...
The parallel dump takes advantage of the snapshot export introduced in
PG 9.2
The Debian packaging allows different PG major versions on the same
machine
DBA installed the 9.3 client and used its pg_dump to dump the 9.2 clusters in
parallel (a sketch follows)
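In practice that boils down to calling the 9.3 binary directly; the binary path
matches the Debian packaging, while the database name and job count are
hypothetical.

    # The 9.3 pg_dump (package postgresql-client-9.3) dumping a 9.2 cluster in parallel.
    # Parallel dumps require the directory format (-Fd).
    /usr/lib/postgresql/9.3/bin/pg_dump -Fd -j 4 -f /backup/db_bi_dir db_bi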
It worked very well...
The wrapper script required some adjustments (sketched below)
Accept the -j parameter
Check whether a 9.3+ client is installed
Override the format to directory when the parallel backup is possible
Adapt the checksum procedure to cover the files in the dump directory
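A simplified sketch of that logic; the paths, names and fallback behaviour are
assumptions, not the actual wrapper.

    #!/bin/bash
    # Wrapper adjustments sketch (hypothetical names and paths)
    JOBS=${1:-1}                                   # -j style parameter, default 1
    PGDUMP93=/usr/lib/postgresql/9.3/bin/pg_dump
    DUMPDIR=/backup/db_bi_dir                      # must not exist before a -Fd dump

    if [ -x "$PGDUMP93" ] && [ "$JOBS" -gt 1 ]
    then
        # parallel dump possible: force the directory format
        "$PGDUMP93" -Fd -j "$JOBS" -f "$DUMPDIR" db_bi
        # checksum every file in the dump directory instead of a single dump file
        (cd "$DUMPDIR" && md5sum * > /backup/db_bi_dir.md5)
    else
        pg_dump -Fc -f /backup/db_bi.dmp db_bi
        md5sum /backup/db_bi.dmp > /backup/db_bi.dmp.md5
    fi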
...with just a little infinitesimal catch
All fine, right?
Not exactly
The restore test complained about the unknown parameter lock_timeout (the 9.3
pg_dump emits SET lock_timeout, which pre-9.3 servers do not recognise)
The backup was the fastest since 2013
The schema was still the same as in 2013
Database performance was massively affected with 6 parallel jobs
DBA found that with just 4 parallel jobs the databases worked with minimal
disruption
In the meanwhile...
Our DBA upgraded PG to the latest version, 9.4.
THANK YOU AGAIN BRUCE!!!!!
No more errors from the restore test.
2016 - Breathe
A new hope
The upgrade to PG 9.4 eased the performance issues and DBA had some time
to breathe.
The script shipping the archived WALs was improved to support multiple slaves
in cascading replication
Each slave had a dedicated rsync process, configurable with compression and
protocol (rsync or rsync+ssh)
The script automatically determined the furthest-behind slave by querying the
remote control files and cleaned the local archive accordingly (a sketch follows)
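A hedged sketch of that shipping loop; the hostnames, paths and the way the
restartpoints are read are assumptions.

    #!/bin/bash
    # Multi-slave WAL shipping and archive cleanup (hypothetical hosts and paths)
    ARCHIVE=/var/lib/postgresql/wal_archive
    SLAVES=(standby1 standby2 standby3)

    # one dedicated rsync per slave (compression/protocol configurable per host)
    for slave in "${SLAVES[@]}"
    do
        rsync -az "$ARCHIVE/" "$slave:/var/lib/postgresql/wal_archive/" &
    done
    wait

    # find the oldest restartpoint among the slaves and clean the archive up to there
    OLDEST=""
    for slave in "${SLAVES[@]}"
    do
        wal=$(ssh "$slave" pg_controldata /var/lib/postgresql/9.4/main \
              | awk '/REDO WAL file/ {print $NF}')
        # WAL file names on the same timeline sort chronologically
        if [ -z "$OLDEST" ] || [[ "$wal" < "$OLDEST" ]]
        then
            OLDEST=$wal
        fi
    done
    [ -n "$OLDEST" ] && pg_archivecleanup "$ARCHIVE" "$OLDEST"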
A new hope
The init standby script switched to the rsync protocol
The automated restore script used ALTER SYSTEM, added in PG 9.4, to switch
between the restore and the production configuration (a sketch follows)
As a result the restore time improved to at most 9 hours for the largest BI
database (4.5 TB)
Working with BOFH JR, DBA wrapped the backup script in the
$BACKUP_MANAGER pre and post execution hooks
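A minimal sketch of such a switch, assuming hypothetical parameter values
rather than ACME's actual restore profile.

    # Relax the configuration before the restore (illustrative values only);
    # fsync=off is tolerable here only because a failed restore is simply repeated.
    psql -c "ALTER SYSTEM SET maintenance_work_mem = '2GB';"
    psql -c "ALTER SYSTEM SET fsync = 'off';"
    psql -c "SELECT pg_reload_conf();"

    pg_restore -j 4 -d db_bi /backup/db_bi_dir

    # Put the production values back once the restore has finished
    psql -c "ALTER SYSTEM SET maintenance_work_mem = '1GB';"
    psql -c "ALTER SYSTEM SET fsync = 'on';"
    psql -c "SELECT pg_reload_conf();"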
The rise of the machines
In Q2 2016, finally, DBA completed the configuration for $DEVOP_TOOL and
deployed the various scripts to the 17 BI databases with minimal effort.
Wrap up
BI database at a glance
Year   Databases   Average size   Total size   Version
2012        1          2 TB           2 TB       9.1
2013        5          3 TB          15 TB       9.2
2014        9          2.2 TB        19 TB       9.2
2015       13          2.7 TB        32 TB       9.4
2016       16          2.5 TB        40 TB       9.4
Few words of wisdom
Reading a product's source code is always good practice.
Understanding it is even better.
Bad design can lead to disasters, in particular when the business is successful.
It's never too early to book the CE onto a SQL training course.
If in doubt, ask your DBA for advice. And please listen to him.
“One bad programmer can easily create two new jobs a year.” – David Parnas
That’s all folks!
QUESTIONS?
Boring legal stuff
LAPD badge - source wikicommons
Montparnasse derailment - source wikipedia
Base jumper - copyright Chris McNaught
Disaster girl - source memegenerator
Blue elephant - source memecenter
Commodore 64 - source memecenter
Deadpool - source memegenerator
Thin ice - source Boating on Lake Winnebago
Boromir - source memegenerator
Sparta birds - source memestorage
Darth Vader - source memegenerator
Angry old man - source memegenerator
Contacts and license
Twitter: 4thdoctor_scarf
Blog: http://www.pgdba.co.uk
Brighton PostgreSQL Meetup:
http://www.meetup.com/Brighton-PostgreSQL-Meetup/
This document is distributed under the terms of the Creative Commons