Five Steps to PostgreSQL Performance

The document outlines strategies for optimizing PostgreSQL performance through five key areas: application design, query tuning, OS and filesystem configuration, hardware considerations, and monitoring. It emphasizes the importance of proper schema design, indexing, caching, and connection management, as well as the need for continuous testing and tuning. Additionally, it provides specific recommendations for hardware allocation and configuration settings to enhance database performance.

Five Steps to PostgreSQL Performance
Josh Berkus
PostgreSQL Project
MelPUG 2013
1. Application Design
2. Query Tuning
3. postgresql.conf
4. OS & Filesystem
5. Hardware
0. Getting Outfitted
5 Layer Cake

Application: queries, transactions
Middleware: drivers, connections, caching
PostgreSQL: schema, config
Operating System: filesystem, kernel
Hardware: storage, RAM/CPU, network
Scalability Funnel

Application

Middleware

PostgreSQL

OS

HW
What Flavor is Your DB?

W ►Web Application (Web)
●DB smaller than RAM
●90% or more “one-liner” queries

O ►Online Transaction Processing (OLTP)
●DB slightly larger than RAM, up to 1TB
●20-70% small data write queries, some large transactions

D ►Data Warehousing (DW)
●large to huge databases (100GB to 100TB)
●large complex reporting queries
●large bulk loads of data
●also called "Decision Support" or "Business Intelligence"
Tips for Good Form

►Engineer for the problems you have
●not for the ones you don't

►A little overallocation is cheaper than downtime
●unless you're an OEM, don't stint on a few GB
●resource use will grow over time

►Test, Tune, and Test Again
●you can't measure performance by “it seems fast”

►Most server performance is thresholded
●“slow” usually means “25x slower”
●it's not how fast it is, it's how close you are to capacity
1. Application Design
Schema Design
►Table design
●do not optimize prematurely
▬normalize your tables and wait for a proven issue to denormalize
▬Postgres is designed to perform well with normalized tables
●Entity-Attribute-Value tables and other innovative designs tend to perform poorly

Schema Design
►Table design
●consider using natural keys
▬can cut down on the number of joins you need
●BLOBs can be slow
▬have to be completely rewritten, compressed
▬can also be fast, thanks to compression

Schema Design
►Table design
●think of when data needs to be updated, as well as read
▬sometimes you need to split tables which will be updated at different times
▬don't trap yourself into updating the same rows multiple times

Schema Design
►Indexing
●index most foreign keys
●index common WHERE criteria
●index common aggregated columns
●learn to use special index types: expressions, full text, partial
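For illustration, a few hedged sketches of those special index types (the table and column names here are hypothetical, not from the talk):

    -- expression index: speeds up case-insensitive lookups on a "users" table
    CREATE INDEX idx_users_lower_email ON users (lower(email));

    -- partial index: only covers the rows a hot query actually touches
    CREATE INDEX idx_orders_open ON orders (customer_id) WHERE status = 'open';

    -- full text search via a GIN index over a tsvector expression
    CREATE INDEX idx_docs_fts ON documents USING gin (to_tsvector('english', body));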
Schema Design
►Not Indexing
●indexes cost you on updates, deletes
▬especially with HOT
●too many indexes can confuse the planner
●don't index: tiny tables, low-cardinality columns

Right indexes?
►pg_stat_user_indexes
●shows indexes not being used
●note that it doesn't record unique index usage
►pg_stat_user_tables
●shows seq scans: index candidates?
●shows heavy update/delete tables: index less
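As a hedged sketch, queries along these lines will surface candidates from those two views (the ordering and LIMIT are arbitrary choices):

    -- indexes that have never been scanned (unique-constraint checks are not counted here)
    SELECT schemaname, relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
    WHERE idx_scan = 0
    ORDER BY schemaname, relname;

    -- tables getting many sequential scans: possible index candidates
    SELECT relname, seq_scan, seq_tup_read, idx_scan, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
    ORDER BY seq_scan DESC
    LIMIT 20;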
Partitioning
►Partition large or growing tables
●historical data
▬data will be purged
▬massive deletes are server-killers
●very large tables
▬anything over 10GB / 10m rows
▬partition by active/passive

Partitioning
►Application must be partition-compliant
●every query should call the partition key
●pre-create your partitions
▬do not create them on demand … they will lock
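A rough sketch of the idea, using the declarative partitioning added in PostgreSQL 10 (this talk predates it; the same pattern was built then with inheritance plus constraint exclusion). Table and partition names are hypothetical:

    CREATE TABLE events (
        event_id   bigint,
        logged_at  timestamptz NOT NULL,
        payload    text
    ) PARTITION BY RANGE (logged_at);

    -- pre-create partitions ahead of time, not on demand
    CREATE TABLE events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    CREATE TABLE events_2024_02 PARTITION OF events
        FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

    -- queries should include the partition key so the planner can prune
    SELECT count(*) FROM events
    WHERE logged_at >= '2024-01-01' AND logged_at < '2024-02-01';

    -- purging historical data becomes a cheap DROP instead of a massive DELETE
    DROP TABLE events_2024_01;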
Query design
►Do more with each query
●PostgreSQL does well with fewer, larger queries
●not as well with many small queries
●avoid doing joins and tree-walking in middleware

Query design
►Do more with each transaction
●batch related writes into large transactions

Query design
►Know the query gotchas (per version)
●always try rewriting subqueries as joins
●try swapping NOT IN and NOT EXISTS for bad queries
●try to make sure that index/key types match
●avoid unanchored text searches: ILIKE '%josh%'
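A hedged example of the NOT IN / NOT EXISTS swap, with made-up table names; which form wins depends on your version and data, so test both:

    -- NOT IN form: can plan badly, and behaves surprisingly with NULLs
    SELECT c.customer_id
    FROM customers c
    WHERE c.customer_id NOT IN (SELECT o.customer_id FROM orders o);

    -- NOT EXISTS form: often gets a better (anti-join) plan
    SELECT c.customer_id
    FROM customers c
    WHERE NOT EXISTS (
        SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id
    );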
But I use ORM!
►ORM != high performance
●ORM is for fast development, not fast databases
●make sure your ORM allows "tweaking" queries
●applications which are pushing the limits of performance probably can't use ORM
▬but most don't have a problem
It's All About Caching
►Use prepared queries (W, O)
●whenever you have repetitive loops
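A minimal sketch of a prepared query at the SQL level (most drivers expose the same thing through their own prepared-statement APIs; the "users" table is hypothetical):

    -- parse and name the statement once
    PREPARE get_user (int) AS
        SELECT user_id, name FROM users WHERE user_id = $1;

    -- execute it repeatedly inside the loop; the parse work is not repeated
    EXECUTE get_user(42);
    EXECUTE get_user(43);

    -- clean up when the loop is done
    DEALLOCATE get_user;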


It's All About Caching
►Cache, cache everywhere (W, O)
●plan caching: on the PostgreSQL server
●parse caching: in some drivers
●data caching:
▬in the appserver
▬in memcached/varnish/nginx
▬in the client (javascript, etc.)
●use as many kinds of caching as you can
It's All About Caching
►But … think carefully about cache invalidation
●and avoid “cache storms”
Connection Management
►Connections take resources (W, O)
●RAM, CPU
●transaction checking

Connection Management
►Make sure you're only using connections you need (W, O)
●look for “<IDLE>” and “<IDLE> in Transaction”
●log and check for a pattern of connection growth
●make sure that database and appserver timeouts are synchronized
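As a sketch of where to look: on PostgreSQL 9.2 and later the idle sessions show up in pg_stat_activity.state rather than as “<IDLE>” query text:

    -- count connections by state to spot idle / idle-in-transaction buildup
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;

    -- long-lived idle-in-transaction sessions worth investigating
    SELECT pid, usename, application_name, state_change
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND state_change < now() - interval '5 minutes';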
Pooling
►Over 100 connections? You need pooling!

[diagram: several webservers connect through a single connection pool to PostgreSQL]
Pooling
►New connections are expensive
●use persistent connections or connection pooling software
▬appservers
▬pgBouncer
▬pgPool (sort of)
●set pool size to the maximum connections needed
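A hedged pgbouncer.ini fragment showing the shape of such a pool; the database name, port, and pool sizes are placeholders to adapt:

    [databases]
    ; placeholder entry; adapt host/port/dbname to your setup
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling lets many client connections share few server connections
    pool_mode = transaction
    ; size the server-side pool to what the database actually needs
    default_pool_size = 50
    max_client_conn = 1000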
2. Query Tuning
Bad Queries

[chart: “Ranked Query Execution Times”; y-axis: execution time (0 to 5000), x-axis: % ranking (5 to 100)]
Optimize Your Queries in Test
►Before you go production
●simulate user load on the application
●monitor and fix slow queries
●look for the worst procedures

Optimize Your Queries in Test
►Look for “bad queries”
●queries which take too long
●data updates which never complete
●long-running stored procedures
●interfaces issuing too many queries
●queries which block
Finding bad queries
►Log Analysis
●dozens of logging options
●log_min_duration_statement
●pgFouine
●pgBadger
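A hedged postgresql.conf fragment for catching slow statements; the 250 ms threshold is an arbitrary starting point, and pgBadger expects a particular log_line_prefix format, so check its documentation for the exact prefix it wants:

    # log any statement slower than 250 ms
    log_min_duration_statement = 250

    # context that log analyzers rely on
    log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
    log_checkpoints = on
    log_connections = on
    log_disconnections = on
    log_lock_waits = on
    log_temp_files = 0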
Fixing bad queries
►EXPLAIN ANALYZE
►things to look for:
●bad rowcount estimates
●sequential scans
●high-count loops
●large on-disk sorts

Fixing bad queries
►reading EXPLAIN ANALYZE is an art
●it's an inverted tree
●look for the deepest level at which the problem occurs
►try re-writing complex queries several ways
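For instance, against a hypothetical query (the BUFFERS option is a useful addition where available):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT c.name, count(*)
    FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id
    WHERE o.ordered_at >= now() - interval '30 days'
    GROUP BY c.name;
    -- compare each node's estimated rows= against its actual rows,
    -- and watch for Seq Scans, nested loops with huge loop counts,
    -- and "Sort Method: external merge  Disk: ..." lines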
Query Optimization Cycle
log queries → run pgBadger → EXPLAIN ANALYZE worst queries → troubleshoot worst queries → apply fixes → (repeat)

Query Optimization Cycle (new)
check pg_stat_statements → EXPLAIN ANALYZE worst queries → troubleshoot worst queries → apply fixes → (repeat)
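A hedged example of pulling the worst offenders out of pg_stat_statements (the extension must be listed in shared_preload_libraries; the column names below are the PostgreSQL 13+ ones, older releases use total_time/mean_time):

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    SELECT calls,
           round(total_exec_time::numeric, 1) AS total_ms,
           round(mean_exec_time::numeric, 1)  AS mean_ms,
           left(query, 80)                    AS query
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 20;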
Procedure Optimization Cycle
log queries → run pgFouine → instrument worst functions → find slow operations → apply fixes → (repeat)

Procedure Optimization Cycle (new)
check pg_stat_user_functions → instrument worst functions → find slow operations → apply fixes → (repeat)
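A sketch of what that check looks like; function tracking is off by default, so track_functions must be set to 'pl' or 'all' first:

    SELECT schemaname, funcname, calls,
           round(total_time::numeric, 1) AS total_ms,
           round(self_time::numeric, 1)  AS self_ms
    FROM pg_stat_user_functions
    ORDER BY total_time DESC
    LIMIT 20;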
3. postgresql.conf
max_connections
►As many as you need to use
●web apps: 100 to 300 (W, O)
●analytics: 20 to 40 (D)
►If you need more than 100 regularly, use a connection pooler
●like pgBouncer
shared_buffers
►1/4 of RAM on a dedicated server (W, O)
●not more than 8GB (test)
●cache miss statistics can tell you if you need more
►fewer buffers to preserve cache space (D)
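As a hedged way to read those cache statistics, the buffer-cache hit ratio per database from pg_stat_database (this only sees shared_buffers; hits in the OS page cache are invisible here):

    SELECT datname,
           blks_hit,
           blks_read,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
    FROM pg_stat_database
    ORDER BY blks_read DESC;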
Other memory parameters
►work_mem
●non-shared
▬lower it for many connections (W, O)
▬raise it for large queries (D)
●watch for signs of misallocation
▬swapping RAM: too much work_mem
▬logged temp files: not enough work_mem
●probably better to allocate by task/ROLE
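A hedged example of per-role allocation (the role name is made up; ALTER SYSTEM needs 9.4+, on older versions edit postgresql.conf instead):

    -- small default for the many short web connections
    ALTER SYSTEM SET work_mem = '8MB';

    -- a bigger allowance only for the reporting role
    ALTER ROLE reporting SET work_mem = '256MB';

    -- or per session, for one known-large query
    SET work_mem = '512MB';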


Other memory parameters
►maintenance_work_mem
●the faster vacuum completes, the better
▬but watch out for multiple autovacuum workers!
●raise to 256MB to 1GB for large databases
●also used for index creation
▬raise it for bulk loads

Other memory parameters
►temp_buffers
●max size of temp tables before swapping to disk
●raise it if you use lots of temp tables (D)
►wal_buffers
●raise it to 32MB
Commits
►checkpoint_segments
●more if you have the disk: 16, 64, 128
►synchronous_commit
●response time more important than data integrity?
●turn synchronous_commit = off (W)
●you lose a finite amount of data in a crash
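Pulling the memory and commit settings together, a hedged postgresql.conf sketch for a dedicated server with around 16GB of RAM (values are illustrative starting points; checkpoint_segments was replaced by max_wal_size in PostgreSQL 9.5):

    shared_buffers = 4GB             # roughly 1/4 of RAM
    work_mem = 16MB                  # per sort/hash, per connection
    maintenance_work_mem = 512MB
    wal_buffers = 32MB
    checkpoint_segments = 64         # pre-9.5; use max_wal_size on newer releases
    synchronous_commit = off         # only if losing a few recent commits in a crash is acceptable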
Query tuning
►effective_cache_size
●RAM available for queries
●set it to 3/4 of your available RAM
►default_statistics_target (D)
●raise to 200 to 1000 for large databases
●now defaults to 100
●setting statistics per column is better
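Per-column statistics, as a quick hedged sketch (table and column are hypothetical):

    -- raise the sample size only for the column with skewed data
    ALTER TABLE orders ALTER COLUMN status SET STATISTICS 1000;
    ANALYZE orders;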
Query tuning
►effective_io_concurrency
●set it to the number of disks or channels
●advisory only
●Linux only
A word about Random Page Cost
►Abused as a “force index use” parameter
►Lower it if the seek/scan ratio of your storage is actually different
●SSD/NAND: 1.0 to 2.0
●EC2: 1.1 to 2.0
●High-end SAN: 2.5 to 3.5
►Never go below 1.0
Maintenance
►Autovacuum
●leave it on for any application which gets constant writes (W, O)
●not so good for batch writes -- do manual vacuum for bulk loads (D)

Maintenance
►Autovacuum
●have 100's or 1000's of tables? raise autovacuum_max_workers
▬but not to more than ½ your cores
●large tables? raise autovacuum_vacuum_cost_limit
●you can change settings per table
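Per-table settings, as a hedged sketch on a hypothetical large table:

    ALTER TABLE big_events SET (
        autovacuum_vacuum_cost_limit = 2000,
        autovacuum_vacuum_scale_factor = 0.05
    );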
4. OS & Filesystem
Spread Your Files Around
►Separate the transaction log if possible (O, D)
●pg_xlog directory
●on a dedicated disk/array, performs 10-50% faster
●many WAL options only work if you have a separate drive

Spread Your Files Around

which partition goes where, by number of drives/arrays:
                     1 drive   2 drives   3 drives
OS/applications      1         1          1
transaction log      1         1          2
database             1         2          3
Spread Your Files Around
►Tablespaces for temp files (D)
●more frequently useful if you do a lot of disk sorts
●Postgres can round-robin multiple temp tablespaces
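As a hedged sketch (tablespace names and paths are made up; the directories must already exist and be owned by the postgres user):

    CREATE TABLESPACE temp1 LOCATION '/mnt/fastdisk1/pgtemp';
    CREATE TABLESPACE temp2 LOCATION '/mnt/fastdisk2/pgtemp';
    -- temp files are then spread round-robin across these
    ALTER SYSTEM SET temp_tablespaces = 'temp1, temp2';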
Linux Tuning
►Filesystems
●use XFS or Ext4
▬btrfs not ready yet, may never work for DB
▬Ext3 has horrible flushing behavior
●reduce logging
▬data=ordered, noatime, nodiratime
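A hedged /etc/fstab line showing those mount options (device and mountpoint are placeholders):

    /dev/sdb1  /var/lib/postgresql  ext4  noatime,nodiratime,data=ordered  0  2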
Linux Tuning
►OS tuning
●must increase shmmax, shmall in the kernel
●use the deadline or noop scheduler to speed writes
●disable NUMA memory localization (recent)
●check your kernel version carefully for performance issues!

Linux Tuning
►Turn off the OOM Killer!
●vm.oom-kill = 0
●vm.overcommit_memory = 2
●vm.overcommit_ratio = 80
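Pulled together as a hedged /etc/sysctl.conf sketch (shmmax/shmall values depend on RAM and are only needed before PostgreSQL 9.3, which moved to mmap-based shared memory; vm.oom-kill exists only on some kernels):

    # allow a large shared memory segment (example: 8GB; shmall is in 4KB pages)
    kernel.shmmax = 8589934592
    kernel.shmall = 2097152
    # strict overcommit so the OOM killer does not shoot the postmaster
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 80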
OpenSolaris/Illumos
►Filesystems
●use ZFS
▬reduce the block size to 8K (W, O)
●turn off full_page_writes
►OS configuration
●no need to configure shared memory
●use packages compiled with the Sun compiler
Windows, OSX Tuning
►You're joking, right?
What about The Cloud?
►Configuring for cloud servers is different
●shared resources
●unreliable I/O
●small resource limits
►It also depends on which cloud
●AWS, Rackspace, Joyent, GoGrid
… so I can't address it all here.
What about The Cloud?
►Some general advice:
●make sure your database fits in RAM
▬except on Joyent
●don't bother with most OS/FS tuning
▬just some basic FS configuration options
●use synchronous_commit = off if possible
Set up Monitoring!
►Get warning ahead of time
●know about performance problems before they go critical
●set up alerts
▬80% of capacity is an emergency!
●set up trending reports
▬is there a pattern of steady growth?
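One concrete capacity check, as a hedged sketch: the fraction of max_connections currently in use, which could feed an 80% alert:

    SELECT count(*)::float / current_setting('max_connections')::int AS connection_usage
    FROM pg_stat_activity;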
5. Hardware
Hardware Basics
►Four basic components:
●CPU
●RAM
●I/O: disks and disk bandwidth
●Network

Hardware Basics
►Different priorities for different applications
●Web: CPU, Network, RAM, … I/O (W)
●OLTP: balance all (O)
●DW: I/O, CPU, RAM (D)
Getting Enough CPU
►One Core, One Query
●how many concurrent queries do you need?
●best performance at one core per no more than two concurrent queries
►So if you can up your core count, do
►Also: L1, L2 cache size matters
Getting Enough RAM
►RAM use is "thresholded"
●as long as you have more RAM than you need, even by just 5%, the server will be fast
●need even 1% more RAM than you have and things slow down a lot
Getting Enough RAM
►Critical RAM thresholds (W)
●do you have enough RAM to keep the database in shared_buffers?
▬RAM 3x to 6x the size of the DB

Getting Enough RAM
►Critical RAM thresholds (O)
●do you have enough RAM to cache the whole database?
▬RAM 2x to 3x the on-disk size of the database
●do you have enough RAM to cache the “working set”?
▬the data which is needed 95% of the time

Getting Enough RAM
►Critical RAM thresholds (D)
●do you have enough RAM for sorts & aggregates?
▬what's the largest data set you'll need to work with?
▬for how many users?
Other RAM Issues
►Get ECC RAM
●better to know about bad RAM before it corrupts your data
►What else will you want RAM for?
●RAMdisk?
●SW RAID?
●applications?
Getting Enough I/O
►Will your database be I/O bound?
●many writes: bound by the transaction log (O)
●database much larger than RAM: bound by I/O for many/most queries (D)

Getting Enough I/O
►Optimize for the I/O you'll need
●if your DB is terabytes, spend most of your money on disks
●calculate how long it will take to read your entire database from disk
▬backups
▬snapshots
●don't forget the transaction log!
I/O Decision Tree

[flowchart: decision points are “lots of writes?”, “fits in RAM?”, “terabytes of data?”, “afford good HW RAID?”, and “mostly read?”; outcomes are mirrored disks, SW RAID, HW RAID, a dedicated storage device, RAID 5 (mostly read) or RAID 1+0 (mixed read/write)]
I/O Tips
►RAID
●get battery backup and turn your write cache on
●SAS has 2x the real throughput of SATA
●more spindles = faster database
▬big disks are generally slow

I/O Tips
►DAS/SAN/NAS
●measure lag time: it can kill response time
●how many channels?
▬“gigabit” is only 100MB/s
▬make sure multipath works
●use fiber if you can afford it
I/O Tips

iSCSI = death
SSD
►Very fast seeks (D)
●great for index access on large tables
●up to 20X faster
►Not very fast random writes
●low-end models can be slower than HDD
●most are about 2X speed
►And use server models, not desktop!

NAND (FusionIO)
All the advantages of SSD, plus:
►Very fast writes (5X to 20X) (W, O)
●more concurrency on writes
●MUCH lower latency
►But … very expensive (50X)
Tablespaces for NVRAM
►Have a "hot" and a "cold" tablespace
●current data on "hot" (O, D)
●older/less important data on "cold"
●combine with partitioning
►a compromise between speed and size
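A hedged sketch of that split, reusing the hypothetical partitioned events table from earlier (paths are placeholders):

    CREATE TABLESPACE hot_ssd  LOCATION '/mnt/nvme/pgdata';
    CREATE TABLESPACE cold_hdd LOCATION '/mnt/sata/pgdata';

    -- the current partition lives on fast storage ...
    CREATE TABLE events_2024_06 PARTITION OF events
        FOR VALUES FROM ('2024-06-01') TO ('2024-07-01')
        TABLESPACE hot_ssd;

    -- ... and older partitions get moved to cheap storage
    ALTER TABLE events_2024_02 SET TABLESPACE cold_hdd;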
Network
►Network can be your bottleneck
●lag time
●bandwidth
●oversubscribed switches
●NAS

Network
►Have dedicated connections
●between appserver and database server
●between database server and failover server
●between database and storage

Network
►Data Transfers
●gigabit is only 100MB/s
●calculate capacity for data copies, standby, dumps
The Most Important Hardware Advice:
►Quality matters
●not all CPUs are the same
●not all RAID cards are the same
●not all server systems are the same
●one bad piece of hardware, or one bad driver, can destroy your application performance

The Most Important Hardware Advice:
►High-performance databases mean hardware expertise
●the statistics don't tell you everything
●vendors lie
●you will need to research different models and combinations
●read the pgsql-performance mailing list
The Most Important Hardware Advice:
►Make sure you test your hardware before you put your database on it
●“try before you buy”
●never trust the vendor or your sysadmins

The Most Important Hardware Advice:
►So Test, Test, Test!
●CPU: PassMark, sysbench, Spec CPU
●RAM: memtest, cachebench, Stream
●I/O: bonnie++, dd, iozone
●Network: bwping, netperf
●DB: pgBench, sysbench
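For the database layer, a minimal pgbench sketch (scale factor and client counts are arbitrary examples; the database name is a placeholder):

    # initialize a test database at scale factor 100 (roughly 1.5GB), then run 5 minutes of load
    pgbench -i -s 100 benchdb
    pgbench -c 16 -j 4 -T 300 benchdb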
Questions?

►Josh Berkus
● josh@pgexperts.com
● www.pgexperts.com
▬ /presentations.html
● www.databasesoup.com
● irc.freenode.net
▬ #postgresql

►More Advice
● www.postgresql.org/docs
● pgsql-performance mailing list
● planet.postgresql.org

This talk is copyright 2013 Josh Berkus, and is licensed under the creative commons attribution license
