MySQL and Ceph
2:20pm – 3:10pm, Room 203
MySQL in the Cloud: Head-to-Head Performance Lab
1:20pm – 2:10pm, Room 203
WHOIS
Brent Compton and Kyle Bader
Storage Solution Architectures
Red Hat
Yves Trudeau
Principal Architect
Percona
AGENDA
MySQL in the Cloud: Head-to-Head Performance Lab
• MySQL on Ceph vs. AWS
• Head-to-head: Performance
• Head-to-head: Price/performance
• IOPS performance nodes for Ceph
MySQL on Ceph
• Why MySQL on Ceph
• Ceph Architecture
• Tuning: MySQL on Ceph
• HW Architectural Considerations
MySQL on Ceph vs. AWS
• Shared, elastic storage pool
• Dynamic DB placement
• Flexible volume resizing
• Live instance migration
• Backup to object pool
• Read replicas via copy-on-write snapshots
MySQL ON CEPH STORAGE CLOUD
OPS EFFICIENCY
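Several of the bullets above map directly onto RBD primitives. Below is a minimal sketch with the python-rbd bindings of resizing a data volume and cloning a copy-on-write snapshot for a read replica; the pool, image, and snapshot names are hypothetical, and the parent image is assumed to already exist with the layering feature enabled.

```python
import rados
import rbd

# Connect to the cluster and open the (hypothetical) pool holding MySQL volumes.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('mysql-pool')

# Flexible volume resizing: grow an existing data volume to 200 GiB.
with rbd.Image(ioctx, 'db01-data') as img:
    img.resize(200 * 1024**3)

# Read replica via copy-on-write snapshot: snapshot, protect, then clone.
with rbd.Image(ioctx, 'db01-data') as img:
    img.create_snap('replica-base')
    img.protect_snap('replica-base')
rbd.RBD().clone(ioctx, 'db01-data', 'replica-base', ioctx, 'db01-replica1')

ioctx.close()
cluster.shutdown()
```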
MYSQL-ON-CEPH PRIVATE CLOUD
FIDELITY TO A MYSQL-ON-AWS EXPERIENCE
• Hybrid cloud requires public/private cloud commonalities
• Developers want DevOps consistency
• Elastic block storage, Ceph RBD vs. AWS EBS
• Elastic object storage, Ceph RGW vs. AWS S3
• Users want deterministic performance
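One concrete example of that public/private commonality: RGW exposes an S3-compatible API, so the same client code can target AWS S3 or a Ceph object pool simply by switching endpoints. A minimal boto3 sketch follows; the endpoint, credentials, bucket, and backup file are hypothetical.

```python
import boto3

# Point a standard S3 client at a Ceph RGW endpoint instead of AWS S3.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.local:7480',  # hypothetical RGW gateway
    aws_access_key_id='RGW_ACCESS_KEY',
    aws_secret_access_key='RGW_SECRET_KEY',
)

# Identical calls work against either backend, e.g. backing up MySQL to an object pool.
s3.create_bucket(Bucket='mysql-backups')
s3.upload_file('/backups/db01.xbstream', 'mysql-backups', 'db01/db01.xbstream')
```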
HEAD-TO-HEAD
PERFORMANCE
30 IOPS/GB: AWS EBS P-IOPS TARGET
HEAD-TO-HEAD LAB
TEST ENVIRONMENTS
AWS environment:
• EC2 r3.2xlarge and m4.4xlarge
• EBS Provisioned IOPS and GP-SSD
• Percona Server
Ceph environment:
• Supermicro servers
• Red Hat Ceph Storage RBD
• Percona Server
OSD Storage Server Systems
5x SuperStorage SSG-6028R-OSDXXX
Dual Intel Xeon E5-2650v3 (10x core)
32GB SDRAM DDR3
2x 80GB boot drives
4x 800GB Intel DC P3700 (hot-swap U.2 NVMe)
1x dual-port 10GbE network adapter (AOC-STGN-i2S)
8x Seagate 6TB 7200 RPM SAS (unused in this lab)
Mellanox 40GbE network adapter (unused in this lab)
MySQL Client Systems
12x Super Server 2UTwin2 nodes
Dual Intel Xeon E5-2670v2
(cpuset limited to 8 or 16 vCPUs)
64GB SDRAM DDR3
Storage Server Software:
Red Hat Ceph Storage 1.3.2
Red Hat Enterprise Linux 7.2
Percona Server
5x OSD Nodes 12x Client Nodes
Shared 10G SFP+ Networking
Monitor Nodes
SUPERMICRO CEPH
LAB ENVIRONMENT
SYSBENCH BASELINE ON AWS EC2 + EBS
100% Read: P-IOPS m4.4xl 7996, P-IOPS r3.2xl 7956, GP-SSD r3.2xl 950
100% Write: P-IOPS m4.4xl 1680, P-IOPS r3.2xl 1687, GP-SSD r3.2xl 267
SYSBENCH REQUESTS PER MYSQL INSTANCE
100% Read: P-IOPS m4.4xl 7996, Ceph cluster 1x "m4.4xl" (14% capacity) 67144, Ceph cluster 6x "m4.4xl" (87% capacity) 40031
100% Write: P-IOPS m4.4xl 1680, Ceph cluster 1x "m4.4xl" (14% capacity) 5677, Ceph cluster 6x "m4.4xl" (87% capacity) 1258
70/30 RW: Ceph cluster 1x "m4.4xl" (14% capacity) 20053, Ceph cluster 6x "m4.4xl" (87% capacity) 4752
CONVERTING SYSBENCH REQUESTS TO IOPS: READ PATH
A sysbench read is served from the InnoDB buffer pool X% of the time; the remainder becomes a 1x storage read.
IOPS = (read requests – X%)
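A rough sketch of that read-path arithmetic; the request rate and buffer pool hit rate below are illustrative assumptions, not lab measurements.

```python
def read_iops(read_requests_per_sec, buffer_pool_hit_rate):
    """Sysbench reads that miss the InnoDB buffer pool become storage read IOPS."""
    return read_requests_per_sec * (1.0 - buffer_pool_hit_rate)

# Example: 40,000 sysbench reads/s with an assumed 90% buffer pool hit rate
# leaves roughly 4,000 read IOPS for the storage layer.
print(read_iops(40_000, 0.90))
```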
CONVERTING SYSBENCH REQUESTS TO IOPS: WRITE PATH
A sysbench write first performs a 1x read (X% served from the InnoDB buffer pool): IOPS = (read requests – X%)
It then performs a 1x write plus log and doublewrite buffer traffic: IOPS = (write requests * 2.3)
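And the write-path side, using the 2.3x multiplier from the slide for log and doublewrite overhead; the request rate is again illustrative.

```python
def write_iops(write_requests_per_sec, overhead_factor=2.3):
    """Each sysbench write becomes ~2.3 storage writes (data page plus redo log
    and doublewrite buffer), per the conversion factor used in this lab."""
    return write_requests_per_sec * overhead_factor

# Example: 5,000 sysbench writes/s translate to roughly 11,500 write IOPS.
print(write_iops(5_000))
```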
AWS IOPS/GB BASELINE: ~ AS ADVERTISED!
100% Read (IOPS/GB): P-IOPS m4.4xl 30.0, P-IOPS r3.2xl 29.8, GP-SSD r3.2xl 3.6
100% Write (IOPS/GB): P-IOPS m4.4xl 25.6, P-IOPS r3.2xl 25.7, GP-SSD r3.2xl 4.1
IOPS/GB PER MYSQL INSTANCE
MySQL IOPS/GB Reads: P-IOPS m4.4xl 30, Ceph cluster 1x "m4.4xl" (14% capacity) 252, Ceph cluster 6x "m4.4xl" (87% capacity) 150
MySQL IOPS/GB Writes: P-IOPS m4.4xl 26, Ceph cluster 1x "m4.4xl" (14% capacity) 78, Ceph cluster 6x "m4.4xl" (87% capacity) 19
FOCUSING ON WRITE IOPS/GB
AWS THROTTLE WATERMARK FOR DETERMINISTIC PERFORMANCE
Write IOPS/GB: P-IOPS m4.4xl 26, Ceph cluster 1x "m4.4xl" (14% capacity) 78, Ceph cluster 6x "m4.4xl" (87% capacity) 19
EFFECT OF CEPH CLUSTER LOADING ON IOPS/GB
100% Write (IOPS/GB): 14% capacity 78, 36% capacity 37, 72% capacity 25, 87% capacity 19
70/30 RW (IOPS/GB): 14% capacity 134, 36% capacity 72, 72% capacity 37, 87% capacity 36
A NOTE ON WRITE AMPLIFICATION
MYSQL ON CEPH – WRITE PATH
A MySQL INSERT is amplified 2x by the InnoDB doublewrite buffer, 2x by Ceph replication, and 2x by OSD journaling, so each logical write becomes roughly 8x physical writes.
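A minimal sketch of how those factors compound, assuming the 2x replicated pool used in this lab (3x replication would raise the total accordingly):

```python
# Each stage roughly doubles the physical writes behind one logical MySQL insert.
innodb_doublewrite = 2  # page written to the doublewrite buffer, then in place
ceph_replication = 2    # replicated RBD pool with two copies, as in this lab
osd_journaling = 2      # FileStore OSDs write the journal, then the data partition

write_amplification = innodb_doublewrite * ceph_replication * osd_journaling
print(write_amplification)  # -> 8 physical writes per logical write
```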
HEAD-TO-HEAD
PERFORMANCE
30 IOPS/GB: AWS EBS P-IOPS TARGET
25 IOPS/GB: CEPH 72% CLUSTER CAPACITY (WRITES)
78 IOPS/GB: CEPH 14% CLUSTER CAPACITY (WRITES)
HEAD-TO-HEAD
PRICE/PERFORMANCE
$2.50: TARGET AWS EBS P-IOPS STORAGE PER IOP
IOPS/GB ON VARIOUS CONFIGS
IOPS/GB (Sysbench Write):
AWS EBS Provisioned-IOPS: 31
Ceph on Supermicro FatTwin, 72% capacity: 18
Ceph on Supermicro MicroCloud, 87% capacity: 18
Ceph on Supermicro MicroCloud, 14% capacity: 78
$/STORAGE-IOP ON THE SAME CONFIGS
Storage $/IOP (Sysbench Write):
AWS EBS Provisioned-IOPS: $2.40
Ceph on Supermicro FatTwin, 72% capacity: $0.80
Ceph on Supermicro MicroCloud, 87% capacity: $0.78
Ceph on Supermicro MicroCloud, 14% capacity: $1.06
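The $/IOP figures divide storage cost by the sustained sysbench write IOPS of each configuration. A minimal sketch of that arithmetic; the cost and IOPS inputs are placeholder values, not the lab's actual pricing.

```python
def storage_dollars_per_iop(storage_cost_usd, sustained_write_iops):
    """Price/performance metric used above: storage dollars per sustained write IOP."""
    return storage_cost_usd / sustained_write_iops

# Placeholder example: a $40,000 storage cluster sustaining 50,000 write IOPS
# works out to $0.80 per IOP.
print(storage_dollars_per_iop(40_000, 50_000))
```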
HEAD-TO-HEAD
PRICE/PERFORMANCE
$2.50: TARGET AWS P-IOPS $/IOP (EBS ONLY)
$0.78: CEPH ON SUPERMICRO MICRO CLOUD CLUSTER
IOPS PERFORMANCE NODES
FOR CEPH
ARCHITECTURAL CONSIDERATIONS
UNDERSTANDING THE WORKLOAD
Traditional Ceph Workload
• $/GB
• PBs
• Unstructured data
• MB/sec
MySQL Ceph Workload
• $/IOP
• TBs
• Structured data
• IOPS
ARCHITECTURAL CONSIDERATIONS
FUNDAMENTALLY DIFFERENT DESIGN
Traditional Ceph Workload
• 50-300+ TB per server
• Magnetic Media (HDD)
• Low CPU-core:OSD ratio
• 10GbE->40GbE
MySQL Ceph Workload
• < 10 TB per server
• Flash (SSD -> NVMe)
• High CPU-core:OSD ratio
• 10GbE
CONSIDERING CORE-TO-FLASH RATIO
100% Write (IOPS/GB): 80 cores / 8 NVMe (87% capacity) 18, 40 cores / 4 NVMe (87% capacity) 18, 80 cores / 4 NVMe (87% capacity) 19, 80 cores / 12 NVMe (84% capacity) 6
70/30 RW (IOPS/GB): 80 cores / 8 NVMe (87% capacity) 34, 40 cores / 4 NVMe (87% capacity) 34, 80 cores / 4 NVMe (87% capacity) 36, 80 cores / 12 NVMe (84% capacity) 8
8x Nodes in 3U chassis
Model:
SYS-5038MR-OSDXXXP
Per Node Configuration:
CPU: Single Intel Xeon E5-2630 v4
Memory: 32GB
NVMe Storage: Single 800GB Intel P3700
Networking: 1x dual-port 10G SFP+
1x CPU + 1x NVMe + 1x SFP
SUPERMICRO MICRO CLOUD
CEPH MYSQL PERFORMANCE SKU
SEE US AT PERCONA LIVE!
• Hands-on Test Drive: MySQL on Ceph
April 18, 1:30-4:30
• MySQL on Ceph
April 19, 1:20-2:10
• MySQL in the Cloud: Head-to-Head Performance
April 19, 2:20-3:10
• Running MySQL Virtualized on Ceph: Which Hypervisor?
April 20, 3:30-4:20
THANK YOU!
