IBM Systems & Technology Group
IBM TotalStorage DS8000 Architecture & Configuration
Charlie Burger
Storage Systems Advanced Technical Support
© 2005 IBM Corporation
IBM Systems and Technology Group
Topics
DS8000 Physical Overview
DS8000 Subsystem Storage Hierarchy
Logical Configuration of CKD Volumes Using DS CLI
Addendum
  DS8000 Volumes with z/OS
  HyperPAV
  High Performance FICON (zHPF)
  DS8000 Performance Monitoring (TPC, RMF)
  Tivoli Productivity Center 4.1
  References
IBM Systems and Technology Group
DS8000 Physical Overview
IBM Systems and Technology Group
DS8000 Physical Overview
146 GB, 300 GB, 450 GB switched bunch of disk (SBOD) drives (up to 1024), 15,000 RPM
1 TB SATA, 7,200 RPM
73 & 146 GB Solid State Drives (SSD)
Encryption available on 146 GB, 300 GB and 450 GB 15,000 RPM HDDs
pSeries POWER5+ processors
4-port FC/FICON Host Adapter cards (up to 128 ports)
Device Adapters scale with capacity
64K devices; 255 Logical Control Units per logical DS8000 (FICON)
Two logical DS8000s in one physical DS8000 (9B2 LPAR)
Metro and Global Mirror compatibility with ESS and DS6000
Base Frame / Expansion Frame
IBM Systems and Technology Group
DS8000 Physical Overview: DS8100 & DS8300 Models
|                                              | DS8100 (931)             | DS8300 (932, 9B2)        |
| Shared SMP processor configuration           | POWER5+ dual 2-way       | POWER5+ dual 4-way       |
| Other major processors                       | PowerPC, ASICs           | PowerPC, ASICs           |
| IBM Virtualization Engine (LPAR) capability  | Not available            | Optional                 |
| Processor memory for cache and NVS (min/max) | 16 GB / 128 GB           | 32 GB / 256 GB           |
| Host adapter interfaces                      | 4-port 4 Gbps or 2 Gbps Fibre Channel/FICON, 2-port ESCON | 4-port 4 Gbps or 2 Gbps Fibre Channel/FICON, 2-port ESCON |
| Host adapters (min/max)                      | 2/16                     | 2/32                     |
| Host ports (min/max)                         | 4/64                     | 4/128                    |
| Drive interface                              | FC-AL                    | FC-AL                    |
| Number of disk drives (min/max)              | 16/384                   | 16/1024                  |
| Device adapters                              | Up to eight 4-port FC-AL | Up to 16 4-port FC-AL    |
| Maximum physical storage capacity            | 384 TB                   | 1024 TB                  |
| Disk sizes                                   | 73/146 GB SSD; 146/300/450 GB (15,000 rpm); 1 TB SATA (7,200 rpm) | 73/146 GB SSD; 146/300/450 GB (15,000 rpm); 1 TB SATA (7,200 rpm) |
| RAID levels                                  | 5, 6, 10                 | 5, 6, 10                 |
| Number of expansion frames                   | Up to 1                  | Up to 4                  |

http://www-03.ibm.com/systems/storage/disk/ds8000/specifications.html
IBM Systems and Technology Group
DS8100 931 Maximum Configuration
IBM Systems and Technology Group
DS8100 (931) Disks and Device Adapters (DAs)
8 arrays (64 disks) on DA pair 0
Up to 4 DA pairs (DA pairs 0 to 3)
2 DA pairs (DA0 and DA2) used for disks in a full base frame
4 DA pairs used for a full base frame and a full expansion frame
Up to 16 Host Adapters; up to 64 host ports
Server0 (S0) and Server1 (S1)
HMC in the base frame
(Diagram: frame layout with DA pairs placed in the order 2, 0, 3, 1.)
IBM Systems and Technology Group
DS8300 932/9B2 with 2 Expansion Frames
IBM Systems and Technology Group
DS8300 Disks and Device Adapters (DAs)
Order of installation of disks on DA pairs: top to bottom, in 64-disk increments (2 disk enclosure pairs) per DA pair
Maximum of 8 DA pairs
  Base frame: DA2, DA0
  1st expansion frame: DA6, DA4, DA7, DA5
  2nd expansion frame: DA3, DA1, DA2, DA0
  3rd expansion frame: DA6, DA4, DA7, DA5
  4th expansion frame: DA3, DA1
Up to 32 Host Adapters
Up to 128 FICON/FCP host ports
  64 ports in the base frame
  64 ports in the expansion frame
IBM Systems and Technology Group
I/O Ports
FCP/FICON capable ports
4 ports per adapter * (* there are 2 ESCON ports per ESCON adapter)
Port ID: I0xyz
  x indicates DS8000 I/O enclosure (0-7)
  y indicates slot/card (0,1,3,4)
  z indicates port (0-3)
(Diagram: CEC 0 and CEC 1 connect to the I/O drawers over RIO loops 0 and 1, with the storage enclosures above.)
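As a sketch (the storage image ID below is a placeholder), the installed I/O ports and their IDs can be listed with the DS CLI lsioport command; a port ID such as I0231 would decode as I/O enclosure 2, slot 3, port 1:
dscli> lsioport -dev IBM.2107-75nnnnn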
IBM Systems and Technology Group
DS8000 Disk Drives
Three different Fibre Channel DDM types (available in both non-encrypted and
encrypted versions):
146 GB, 15K RPM drive
300 GB, 15K RPM drive
450 GB, 15K RPM drive
One Serial Advanced Technology Attachment (SATA) DDM drive type:
1 TB, 7.2K RPM drive
Two different Solid State Drive (SSD) types:
73 GB
146 GB
IBM has withdrawn from marketing the following DS8000 disks:
146 GB 10,000 RPM Fibre Channel Disk Drives
300 GB 10,000 RPM Fibre Channel Disk Drives
73 GB 15,000 RPM Fibre Channel Disk Drives
500 GB 7,200 RPM FC Advanced Technology Attachment (FATA) Disk Drives
IBM Systems and Technology Group
DS8000 Disk Drive Characteristics
Fibre Channel characteristics:
  Intended for heavy workloads in multi-user environments
  Highest performance, availability, reliability, and functionality
  Good capacity: 146 GB-450 GB
  Very high activity: greater than 80% duty cycle
SATA-2 characteristics:
  Intended for lower workloads in multi-user environments
  High performance, availability, and functionality
  High reliability; more robust technology (extensive command queuing)
  High capacity: 500-1,000 GB disk drives
  Moderate activity: 20-30% duty cycle
SSD characteristics:
  Best suited for cache-unfriendly data
  Random read access, high performance data sets
  Both FB and CKD environments
IBM Systems and Technology Group
DS8000 Architecture
8-32 host adapters: 4 Gb FC/FICON 4-port * (* ESCON adapters are 2-port)
pSeries POWER5+ RISC processors: 2-way, 4-way
1 to 8 pairs of 4 Gb device adapters with redundant FC-AL loops
IBM Systems and Technology Group
FC-AL Disk Layout
(Diagram: single pack and pack interconnect; only the connections for one of the two loops in a pack are shown.)
Problems with standard FC-AL JBOD:
  Full loop required to participate in a transfer
  Difficult to identify loop breakage
  Performance drops off as loop size increases
IBM Systems and Technology Group
Switched FC-AL Logical Layout
(Diagram: 16 DDMs (D0-D15) connect through two FC-AL switches and FC interface cards 1 and 2, with SES nodes and port bypass circuits; the upper connections (C1-C4) go to the device adapters or the prior set of 16 DDMs, and the lower connections go to the next set of 16 DDMs.)
Notes:
  Cx (x=1-4) = connections to external FC-AL loops
  Dy (y=0-15) = DDMs
  Switch provides a 2-element FC-AL loop for all transfers
  No performance drop off
IBM Systems and Technology Group
DS8100 (931) Components
IBM Systems and Technology Group
DS8300 (932/9B2) Components
IBM Systems and Technology Group
DS8300 (932/9B2) Architecture - LPAR
2 pSeries systems per Storage Facility
2 LPARs per Storage Facility Image
Persistent memory, preserved by dumping memory to system disk while on battery
IBM Systems and Technology Group
DS8300 2107-9B2 LPAR Dual Storage Images
(Diagram: one physical DS8300 9B2 hosting two logical DS8000s. CEC 0 and CEC 1 each run a Storage Image 1 LPAR and a Storage Image 2 LPAR; the storage enclosures and the I/O drawers on RIO loops 0 and 1 are divided between Image 1 and Image 2 resources, with some enclosure positions empty for Image 2.)
IBM Systems and Technology Group
DS8300 LPAR Resource Allocation
Current implementation is 2 Storage Images (logical DS8000s)
50 percent of the processors is the default
  With the variable LPAR implementation on DS8300 Model 9B2 (R4.0), the split can be 25/75 or 75/25 percent
50 percent of the processor memory is the default
  With the variable LPAR implementation on DS8300 Model 9B2 (R4.0), the split can be 25/75 or 75/25 percent
Each storage image has access to:
  Up to 16 host adapters (4 I/O drawers with up to 4 host adapters)
  Up to 512 disk drives (up to 512 TB of capacity)
IBM Systems and Technology Group
DS8000 Enhanced Addressing & Connectivity
|                              | ESS 800 | DS8000 | DS8000 w/LPAR |
| Max Logical Subsystems       | 32      | 255    | 510           |
| Max Logical Devices          | 8K      | 64K    | 128K          |
| Max Logical CKD Devices      | 4K      | 64K    | 128K          |
| Max Logical FB Devices       | 4K      | 64K    | 128K          |
| Max N-Port Logins/Port       | 128     | 509    | 509           |
| Max N-Port Logins            | 512     | 8K     | 16K           |
| Max Logical Paths/FICON Port | 256     | 2K     | 2K            |
| Max Logical Paths/CU Image   | 256     | 512    | 512           |
| Max Path Groups/CU Image     | 128     | 256    | 256           |
IBM Systems and Technology Group
DS8000 Caching Management
2004 - ARC (Adaptive Replacement Cache)
  Dynamically partitions the read cache between random and sequential portions
2007 - AMP (Adaptive Multi-stream Prefetch, R3.0)
  Manages the sequential read cache and decides what, when and how much to prefetch
2009 - IWC (Intelligent Write Cache, R4.2)
  Manages the write cache and decides what order and rate to destage
IBM Systems and Technology Group
Intelligent Write Caching (IWC)
What does it do?
  Improves performance through better write cache management and better destage order of writes
Where can it be applied?
  Write caches for storage controllers and even operating systems (e.g., Linux)
  DS8000 is the first product implementation
Benefits on DS8000
  Up to 2X throughput for random write workloads
  Typical database workload throughput may improve 15-20%
  DS8000 SMP cycle usage reduced up to 12% via better data structures
Example of IBM's continuous innovation in our products
  4 IBM patents
Intelligent Write Caching is a hybrid of two other well-known cache management algorithms:
  CLOCK, which exploits temporal ordering (reduces the total number of destages to disk)
  CSCAN, which exploits spatial ordering (reduces the average cost of each destage due to arm movement)
  Both use a similar data structure, so they are compatible
Key attributes
  Exploits and creates spatial locality
  Also exploits temporal locality
  Works well even with concurrent reads
  Handles a wide variety of workloads and cache sizes
  Deployed at the storage controller level
  And yet, it is simple
IBM Systems and Technology Group
How Intelligent Write Caching Works
Organize write groups in a sorted order forming a clock (temporal)
Clock hand moves clockwise, destaging write groups in order (spatial)
Write groups are created with a bit initialized to 0
The clock hand can only destage groups with bit 0
When the clock hand encounters a write group with bit 1, it resets the bit to zero and skips it
On a write to an existing group (hit), set the bit to 1
IBM Systems and Technology Group
DS8000 Implementation
An IWC list for each rank
NVS shared across all ranks in a cluster
Size of each IWC list dynamically adapted based on workload intensity on each rank
Linear thresholding: the rate of destage is proportional to the portion of NVS occupied by an IWC list
Destages are smoothed out (thus write bursts are not translated into destage bursts)
IBM Systems and Technology Group
Copy Services
Point in time copy - FlashCopy
  Instant, T0 copy of what a set of volumes looked like at that one particular instant in time
  Any changes after the point in time are not replicated unless another point-in-time copy is requested
  Not continuous - may be repeated
  Logical copy - physical copy may be done later
  The target volumes are immediately accessible for read and write
  Target volumes are on the same storage subsystem as source volumes
  May be used to protect against application-caused corruption
Remote Mirroring
  Ongoing, continuous replication of a set of volumes
  Target volumes are not accessible while replication is in progress
  Target volumes are commonly on a different storage subsystem than source volumes
  Disaster protection; data migration
  Not designed to protect against application-caused corruption (replicates what the application allows)
  Asynchronous: Global Copy, Global Mirror, z/OS Global Mirror
  Synchronous: Metro Mirror
IBM Systems and Technology Group
DS8000 Subsystem Storage Hierarchy
IBM Systems and Technology Group
Storage Subsystem Hierarchy
Storage Complex
  One or multiple physical storage subsystems
  Central management point with Network Server
  DS8000 Hardware Management Console (HMC)
(Diagram: two storage facilities, each with storage enclosures, CEC 0 and CEC 1, and I/O drawers on RIO loops 0 and 1.)
IBM Systems and Technology Group
Storage Subsystem Hierarchy (continued)
Storage Unit (storage facility)
  Single physical storage subsystem
Storage Image
  Single logical storage subsystem
  Same as the physical subsystem for 2107 921/931, 2107 922/932 and 1750
  Single LPAR for 2107 9A2/9B2
  (Diagram: a non-LPAR storage unit, or an LPAR unit where each CEC hosts Shark Image 0 (LPAR 0) and Shark Image 1 (LPAR 1) with LPAR system disks.)
Server
  Server0 and Server1
  Manage extent pools and volumes
  Even numbers managed by Server0; odd numbers managed by Server1
IBM Systems and Technology Group
DS8000 Array Site
Logical grouping of disks
  Same capacity and speed
  73 GB or 146 GB solid state drives; 146 GB, 300 GB or 450 GB (15,000 rpm); 1 TB (7,200 rpm)
Created and assigned to a DA pair by software during installation
DS8000 array site: 8 disks (DDMs)
IBM Systems and Technology Group
Array
RAID array
  RAID5, RAID6 or RAID10
  Created from 1 array site on DS8000
RAID type (RAID5, RAID6 or RAID10) and storage type (Fixed Block/FB or Count Key Data/CKD) are now defined in 2 separate steps:
  Array defines RAID type
  Rank defines storage type
IBM Systems and Technology Group
Array
1 array site makes one array
RAID5: 6+P or 7+P
  Parity is striped across all disks in the array but consumes capacity equivalent to one disk
RAID6: 6+P+Q or 5+P+Q+Spare
RAID10: 3+3 or 4+4
DS8000 8-DDM arrays (D = data, P = parity, Q = second parity, S = spare):
  RAID5 6+P+S:    D D D D D D P S
  RAID5 7+P:      D D D D D D D P
  RAID6 6+P+Q:    D D D D D D P Q
  RAID6 5+P+Q+S:  D D D D D P Q S
  RAID10 3+3+S+S: D D D D D D S S
  RAID10 4+4:     D D D D D D D D
IBM Systems and Technology Group
Rank
DS8000 CKD rank: RAID array with storage type defined
  CKD or FB
One-to-one relationship between an array and a rank
  One RAID array becomes one rank
(Diagram: DS8000 8 DDMs, RAID5 7+P.)
IBM Systems and Technology Group
Rank (continued)
Ranks have no pre-determined or fixed relation to:
  Server0 or Server1
  Logical Subsystems (LSSs)
Ranks are divided into extents
  Units of space for volume creation
  CKD rank: extents equivalent to a 3390M1 (1113 cylinders or .94 GB); a 3390M3, for example, is 3339 cylinders = 3 extents
  FB rank: 1 GB extents
IBM Systems and Technology Group
Extent Pool
Logical grouping of extents from one or more ranks from which volumes will be created
Ranks are assigned to extent pools
  Pool contains one or more ranks
  LUN size is not limited to rank size if more than one rank is assigned to the pool
  Rotate Volumes method of using extents is still the default for volume creation
  Rotate Extents (Storage Pool Striping) introduced with R3
Extent pool is one storage type: CKD or FB
IBM Systems and Technology Group
Extent Pool (continued)
User-assigned to Server0 or Server1
  Activity should be balanced across both servers
Minimum of 2 extent pools required to utilize system resources
Maximum number of ranks in a single pool should be the total number of ranks needed to balance system resources
Server0 extent pools will support even-numbered LSSs
Server1 extent pools will support odd-numbered LSSs
(Diagram: Ext. Pool 0 with ranks 0, 2, 4, 6 on Server0; Ext. Pool 1 with ranks 1, 3, 5, 7 on Server1.)
IBM Systems and Technology Group
Extent Pool ID
System-generated: Px
If x is an even number:
  Pool has been assigned to Server0
  Even-numbered LSSs are available for volume creation
If x is an odd number:
  Pool has been assigned to Server1
  Odd-numbered LSSs are available for volume creation
Extent pools may be given user-specified nicknames
(Diagram: extent pool P0 containing one rank.)
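As an illustration (placeholder storage image ID), lsextpool lists each pool's ID, its server (rank group) assignment and remaining capacity:
dscli> lsextpool -dev IBM.2107-75nnnnn -l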
IBM Systems and Technology Group
DS8000 Logical Control Units and Address Groups
Similar to ESS:
  LCU has a maximum of 256 addresses
  Aliases (Parallel Access Volumes/PAVs) are shared within an LCU
  LCU is the basis for Copy Services paths and consistency groups
  Even-numbered LCUs are associated with Server0 and odd-numbered LCUs are associated with Server1
New:
  LCU does not have a pre-determined relationship to a rank/DA pair
  Up to 255 LCUs are available
  FB LSSs are automatically created during LUN creation
    e.g. creation of volume 1000 results in creation of LSS 10
  A set of 16 Logical Subsystems (LSSs) is called an Address Group
    LSS 00-0F, 10-1F, 20-2F, 30-3F, etc.
  Storage type for an entire Address Group (16 LSSs) is set to CKD or Fixed Block by the first LSS defined *
  CKD LSSs (LCUs) are explicitly defined
    Allows specification of LCU type and SSID
* ESCON devices must be in Address Group 0 (LCU 00-0F)
IBM Systems and Technology Group
DS8000 Flexible LCU Assignment
No predetermined or fixed relationship to rank
3 options:
  1. One LCU per rank
     Simplifies management
  2. Multiple LCUs on one rank
     Provides additional addresses for the rank
     Allows utilization of large drives with small device types
  3. One LCU across multiple ranks
     Enables cross-rank Copy Services consistency groups
     This will be the default for multiple-rank extent pools
(Diagram: LCUs 00 and 01 mapped onto ranks/pools across Server0 and Server1.)
IBM Systems and Technology Group
Address Group
Logical grouping of 16 LSSs (4096 volumes)
  00-0F, 10-1F, 20-2F, ...
All LSSs in an address group are the same storage type
  All CKD or all FB
User determines whether an address group will be CKD or FB by creation of the 1st LSS in that address group
  CKD LCU creation or FB LUN creation
  If the first LSS is CKD, all 16 LSSs must be CKD
  If the first LSS is FB, all 16 LSSs must be FB
DS8000 ESCON devices must be in address group 0 (LSS 00-0F)
IBM Systems and Technology Group
Logical Subsystem ID
xy
Now designated by 2 digits to allow increased numbers of LSSs
x indicates the address group
  An address group is a pre-determined set of 16 LCUs/LSSs (x0-xF) of the same storage type (all CKD or all FB)
y indicates server assignment
  If y is even, the LSS is available with Server0 extent pools
  If y is odd, the LSS is available with Server1 extent pools
Example: LCU 2A is created (storage type CKD)
  2 indicates Address Group 2 - all 16 LSSs in address group 2 (20-2F) will be CKD
  A (even) indicates Server0 - LSS 2A can only be used with extent pools assigned to Server0
IBM Systems and Technology Group
Volume
Created from extents in one extent pool
Volumes can be larger than the size of a rank (if multiple ranks are in one extent pool)
DS8000 introduced with CKD max size 64K cylinders or 56 GB (with appropriate software support)
DS8000 with R3.1 has CKD max size 262,668 cylinders or 223 GB (with appropriate software support)
FB max size 2 TB
Volumes can be presented to the host server in cylinder, 100 MB or block granularity
Space is allocated in 1 GB extents (FB) or 1113-cylinder extents (CKD)
(Diagram: volume 2A11 built from extents of two ranks in one extent pool.)
IBM Systems and Technology Group
Volumes (continued)
Volumes can be created or displayed in:
  Binary GB
  Decimal GB
  Blocks (512 bytes)
Volumes can be dynamically deleted
FB volume creation results in FB LSS creation and assignment of the FB storage type to the address group
CKD Logical Control Unit (LCU) creation is a pre-req for CKD volume creation
IBM Systems and Technology Group
DS8000 Volume ID
User specifies a 4-digit hex volume ID which includes address group, LCU and device ID: xyzz
  x = Address Group
  xy = LSS
    Even LCUs are available for Server0 extent pools
    Odd LCUs are available for Server1 extent pools
  zz = device ID
Example: Volume ID 2A10
  2 = Address Group 2 (group of 16 LCUs, 20-2F)
  2A = LCU 2A (even - may only be used with Server0 pools)
  10 = device ID
Maximum number of volumes is 64K
Nicknames can be created for volumes (and PAVs)
User may choose volume IDs to match z/OS device IDs
(Diagram: volume 2A10 on rank R0 in extent pool P0, assigned to Server0, even LCUs available.)
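As a sketch (placeholder storage image ID; assumes LCU 2A exists and extent pool P0 is a CKD pool on Server0), a 3390M3-sized base volume with this ID could be created via the DS CLI:
dscli> mkckdvol -dev IBM.2107-75nnnnn -extpool P0 -cap 3339 2A10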
IBM Systems and Technology Group
DS8000 Rotate Volumes
A single volume is created from extents on one rank if possible
A single volume may spill across ranks in the pool, or may be larger than the size of a single rank
Volumes may be dynamically deleted and extents reused
(Diagram: extent pool P0, assigned to Server0 with even LCUs available; volume 2A10 on rank R0, volume 2A11 spilling from rank R0 onto rank R2.)
IBM Systems and Technology Group
DS8000 Rotate Volumes
In a single-rank extent pool, multiple volumes will be created sequentially on the rank
In a multiple-rank extent pool, the current implementation places multiple volumes on the rank with the most free extents
Volumes may be dynamically deleted and extents reused
(Diagram: volumes 2A10 and 2A11 placed on ranks R0 and R2.)
IBM Systems and Technology Group
DS8000 Storage Pool Striping
New algorithm choice for volume creation
  Rotate Volumes method still the default
Naming
  Marketing: Storage Pool Striping
  DS CLI & DS Storage Manager: Rotate Extents
Volumes are created by allocating one extent from the available ranks in an extent pool, in a round-robin fashion
CKD and Fixed Block
(Diagram: extent pool with 3 ranks (9, 10, 11); a 7 GB volume's extents 1-7 are allocated round-robin across the ranks.)
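For illustration (placeholder IDs; -eam rotateexts selects storage pool striping in the DS CLI, while rotatevols remains the default), a striped CKD volume could be created as:
dscli> mkckdvol -dev IBM.2107-75nnnnn -extpool P0 -cap 3339 -eam rotateexts 2A12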
IBM Systems and Technology Group
Storage Pool Striping - Advantages
Technical advantages
  Method to distribute I/O load across multiple ranks
  DS8000 optimized for performance
  Far less special tuning required for high-performance data placement, which reduces the storage administrator work needed to optimize performance
IBM Systems and Technology Group
Storage Pool Striping - Characteristics
The next volume will be started from an extent on the next rank in the round-robin rotation
If a rank runs out of extents, it is skipped
Multiple volume allocations will not start on the same rank
  If many volumes are created with a single command, the volumes will not start on the same rank
Supports new volume expansion capabilities
(Diagram: extent pool with 3 ranks; the 2nd volume starts on the next rank.)
IBM Systems and Technology Group
Storage Pool Striping Considerations & Recommendations (2)
Deleting volumes creates free extent units for future volume allocation
  No reorg capability
Do not add new ranks to existing multi-rank extent pools used for Storage Pool Striping
  No reorg or reallocation capability
Mixing striped and non-striped volumes in the same multi-rank extent pool
  Supported, but strongly discouraged because it will create rank I/O imbalance
(Diagram: ExtPool 0 and ExtPool 1 spread across DA pairs DA0, DA2 and DA3.)
IBM Systems and Technology Group
Storage Pool Striping Considerations &
Recommendations (3)
Not recommended for:
  SVC
    When providing LUNs to the SVC, current best practice is not to use SPS and just give the SVC LUNs that are an entire RAID5 array
  iSeries
    Performance benefit varies due to OS striping
    There are cases where performance will benefit from rotate extents
    Supported and can be exploited for better capacity utilization
  Specific vendor volume layout recommendations/requirements
    DB2 BCU
    Oracle ASM
  Applications where OS-level striping would be a better choice
  Small, hot volumes
IBM Systems and Technology Group
Volume Virtualization Overview
Standard Logical Volumes
Track Space Efficient Logical Volumes
  Used with Space-Efficient FlashCopy
Extent Space Efficient Logical Volumes
  Used with Thin Provisioning
IBM Systems and Technology Group
Standard Logical Volume (LV)
Standard LV consists of 1 to N real extents
Each extent contains extent data and extent metadata
Each LV extent is mapped to a real extent on a real rank
All extents allocated to an LV come from one extent pool
FB extent = 1024 MB extent data, less 4.5 MB extent metadata
CKD extent = 1113 cylinders extent data, less 64 cylinders extent metadata
(Diagram: a standard LV in an extent pool, its extents mapped to real extents with metadata (M) on a real rank.)
IBM Systems and Technology Group
Track Space Efficient (TSE) LV
TSE LV consists of 1 to N real tracks
Each TSE LV extent is mapped to a virtual extent on a virtual rank
  Virtual extent provides only extent metadata
If a host writes to a TSE LV track, a real track in the SE repository is allocated (if not already allocated) to the TSE LV track
  Mapping from LV track to SE repository track is maintained in the SE repository metadata
In the same extent pool as the TSE LV:
  Virtual extents allocated to a TSE LV
  Real tracks allocated to a TSE LV come from the one SE repository
Capacity for virtual ranks & the SE repository comes from auxiliary volumes
  Configured with real extents like a standard LV, but not host addressable
FB repository extent = 16K tracks = 1 GB
CKD repository extent = 16695 tracks = 1 GB
1 metadata extent for each 91 repository extents
Intended for use as FlashCopy targets
Introduced with R3.0
(Diagram: track SE LV tracks mapped through virtual ranks to the SE repository auxiliary volume on a real rank, with repository metadata.)
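A minimal sketch (placeholder IDs and capacities; assumes DS CLI R3.0+): create the space-efficient repository in the pool with mksestg, then create TSE volumes with the tse storage allocation method:
dscli> mksestg -dev IBM.2107-75nnnnn -extpool P0 -repcap 100 -vircap 500
dscli> mkckdvol -dev IBM.2107-75nnnnn -extpool P0 -cap 3339 -sam tse 2A20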
IBM Systems and Technology Group
Extent Space Efficient (ESE) LV
ESE LV consists of 1 to N real extents
Each ESE LV extent is mapped to a virtual extent on a virtual rank
  Virtual extent provides only extent metadata
If a write is the first host write to an ESE LV extent, a real extent on a real rank is allocated to the ESE LV
If not the first host write to an ESE LV extent, the write goes to the already existing real extent
In the same extent pool as the ESE LV:
  Virtual extents allocated to an ESE LV
  Real extents allocated to an ESE LV
Capacity for the metadata in the virtual rank is from a Virtual Rank Auxiliary Volume (VR AV)
  Configured with real extents like a standard LV, but not host addressable
  FB VR AV real extent = 4.5 MB
  CKD VR AV real extent = 64 cylinders
Introduced with R4.3
  Only FB support with this release
  No Copy Services support with this release
(Diagram: an extent SE LV mapped through a virtual rank to real extents on a real rank; virtual rank metadata resides on the VR auxiliary volume.)
IBM Systems and Technology Group
Open Systems Host
Server connecting to the storage subsystem
Host definition includes one or more host attachments
Each host attachment contains one or more Host Bus Adapters (HBAs) with Worldwide Port Names (WWPNs)
Multiple server HBAs can be grouped into a single host attachment for convenience
A single host attachment can be in only one volume group
Examples (diagram, each server attached through redundant SANs):
  Four host attachments (one port each) are defined for pSeries1
  Two host attachments with 2 ports each are defined for pSeries2
  One host attachment with 4 ports is defined for pSeries3
IBM Systems and Technology Group
Open Systems Host (continued)
A host attachment can access:
  User-specified disk subsystem I/O ports, or
  All valid disk subsystem I/O ports
    Configured as FC-AL or FCP
Access may also be controlled through SAN zoning
Host definitions may be given user-specified nicknames
(Diagram: pSeries2 attached through redundant SANs to the DS8000 I/O drawers.)
IBM Systems and Technology Group
Open Systems Volume Group
Used to control server access to LUNs (LUN masking)
Contains:
  Open systems server HBA WWPN(s), AND
  LUNs to be accessed
Recommend creating one volume group for each server unless LUN sharing is required
Many-to-many relationship to extent pools or LSSs
Volume groups may be given user-specified nicknames
No relation to AIX volume groups
(Diagram: AIX 1 volume group containing LUNs 1000 and 1001.)
IBM Systems and Technology Group
Open Systems Volume Groups
(continued)
When LUN sharing is required:
  Place host attachments for multiple servers in the same volume group (e.g., AIX 1 and AIX 2 sharing one volume group), OR
  Place LUNs in multiple volume groups (e.g., volumes 1000 and 1001 in both the AIX 3 and AIX 4 volume groups)
In either case, server software is responsible for data integrity!
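A minimal DS CLI sketch (volume IDs, WWPN and names are placeholders; mkvolgrp reports the new group's ID, e.g. V0, which mkhostconnect then references): create a volume group containing the LUNs, then attach the server's HBA to it:
dscli> mkvolgrp -dev IBM.2107-75nnnnn -type scsimask -volume 1000-1001 AIX1_VG
dscli> mkhostconnect -dev IBM.2107-75nnnnn -wwname 10000000C912FA75 -hosttype pSeries -volgrp V0 AIX1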
IBM Systems and Technology Group
Storage Resource Summary
Disk
  Individual DDMs
Array Sites
  Pre-determined grouping of DDMs of the same speed and capacity (8 DDMs for DS8000; 4 DDMs for DS6000)
Arrays
  One 8-DDM array site used to construct one RAID array (DS8000)
  One or two 4-DDM array sites used to construct one RAID array (DS6000)
Ranks
  One array forms one CKD or FB rank (8 DDMs for DS8000; 4 or 8 DDMs for DS6000)
  RAID5, RAID6 or RAID10
  No fixed, pre-determined relation to LSS
Extent Pools
  1 or more ranks of a single storage type (CKD or FB)
  Assigned to Server0 or Server1
IBM Systems and Technology Group
Storage Resource Summary (continued)
Volumes or LUNs
  Made up of extents from one extent pool
  Minimum allocation is one extent: 1 GB (FB), Mod 1 (CKD)
  Maximum size is 2 TB (FB); 223 GB (CKD)
  Can be larger than 1 rank if more than 1 rank is in the pool
  Associated with an LSS during configuration
    Available LSSs determined by extent pool server affinity
  Can be individually deleted
Open Systems Volume Group
  Contains LUNs and host attachments - FB LUN masking
  One host attachment (one port or port group) can be a member of only one volume group
  One volume can be a member of multiple volume groups
  Multiple hosts can be contained in a single volume group
(Diagram: AIX host ports and an iSeries host port group mapped through volume groups to FB LUNs.)
IBM Systems and Technology Group
Recommendations
Create extent pools using storage pool striping with 4-8 ranks in the pool
Balance extent pools and ranks across servers
Create one LSS per rank
  Unless more addresses are needed
Use a limited number of device types for ease of management
Use custom volumes that are an even multiple of extents
  CKD: 1113-cylinder extents, e.g. 3390M3 (3339 cylinders), 3390M9 (10017 cylinders), 30051 cylinders, 60102 cylinders
  FB: 1 GB extents
Use PAVs to allow concurrent access to base volumes for z/OS
  Preferably HyperPAV
IBM Systems and Technology Group
Logical Configuration of CKD Volumes Using DS CLI
IBM Systems and Technology Group
CKD Volume Logical Configuration
DS8000 Storage Manager
DS8000 DS CLI
IBM Systems and Technology Group
DS8000 Storage Manager
Easy-to-use, powerful and flexible user interface
  Wizards, filters, sorts, hyperlinks, animation, copy/paste
Includes optional automated methods
Runs on the DS8000 integrated HMC; accessed via browser
IBM Systems and Technology Group
DS8000 DS CLI
Powerful tool for automating configuration tasks and
collecting configuration information
Same DS CLI for DS6000 and for ESS 800 Copy Services
DS CLI commands can be saved as scripts, which significantly reduces the time to create, edit and verify their content
Uses a consistent syntax with other IBM TotalStorage products now and in the future
All of the function available in the GUI is also available via the DS CLI
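For example (HMC address, user and script name are placeholders), the CLI can be run interactively or against a saved script:
dscli -hmc1 10.10.10.1 -user admin -passwd xxxxx -script create_ckd_volumes.cli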
IBM Systems and Technology Group
Supported DS CLI Platforms
The DS Command-Line Interface (CLI) can be installed on the following
operating systems:
AIX 5.1, 5.2, 5.3
HP-UX 11i v1, v2
HP Tru64 version 5.1, 5.1A
Linux (Red Hat 3.0 Advanced Server (AS) and Enterprise Server (ES); SUSE Linux SLES 8, SLES 9, SUSE 8, SUSE 9)
Novell NetWare 6.5
OpenVMS 7.3-1, 7.3-2
Sun Solaris 7, 8, 9
Windows 2000, Windows Datacenter, and Windows 2003
IBM Systems and Technology Group
CKD Logical Configuration Steps
Creating CKD extent pools
Creating arrays
Creating and associating ranks with extent pools
Creating logical control units
Creating CKD volumes
Creating CKD volume groups (system generated)
IBM Systems and Technology Group
Creating CKD extent pools
Remember from earlier?
Minimum of 2 extent pools required
Server0 extent pools will support even-numbered LSSs
Server1 extent pools will support odd-numbered LSSs
Consider creating additional extent pools for each of the following
conditions:
Each RAID type (5, 6 or 10)
Each disk drive module (DDM) size
Each CKD volume type (3380, 3390)
Each logical control unit (LCU) address group
mkextpool -dev IBM.2107-75nnnnn -rankgrp 0 -stgtype ckd P0
mkextpool -dev IBM.2107-75nnnnn -rankgrp 1 -stgtype ckd P1
lsextpool -dev IBM.2107-75nnnnn -l
IBM Systems and Technology Group
Creating Arrays
Remember from earlier?
RAID array
RAID5, RAID6 or RAID10
Created from 1 array site on DS8000
Array Site
Logical grouping of disks
Same capacity and speed
Issue the lsarraysite command to find the unassigned array sites:
lsarraysite -dev IBM.2107-75nnnnn -state unassigned
Issue the mkarray command to create an array from each site with the status unassigned:
mkarray -dev IBM.2107-75nnnnn -raidtype 5 -arsite A1
lsarray -dev IBM.2107-75nnnnn -l A1
IBM Systems and Technology Group
Creating CKD Ranks
Remember from earlier?
RAID array with storage type defined
CKD or FB
One-to-one relationship between an array and a rank
One RAID array becomes one rank
DS8000 8 DDMs
Ranks have no pre-determined or fixed relation to:
Server0, Server1 or Logical Subsystems (LSSs)
Ranks are divided into extents
Units of space for volume creation
CKD rank extents equivalent to a 3390M1
1113 cylinders or .94GB
Issue the lsarray command to find unassigned arrays
lsarray -dev IBM.2107-75nnnnn -state unassigned
Issue the mkrank command to assign a rank to rank group 0 or 1
according to the rank group number of the assigned extent pool ID.
mkrank -dev IBM.2107-75nnnnn -array a1 -stgtype ckd -extpool p0
lsrank -dev IBM.2107-75nnnnn -l
IBM Systems and Technology Group
Creating CKD Logical Control Units (LCUs)
Remember from earlier?
Up to 255 LCUs are available
LCU has a maximum of 256 addresses
Aliases (Parallel Access Volumes/PAVs) are shared within an LCU
Even-numbered LCUs are associated with Server0 and odd-numbered
LCUs are associated with Server1
LCU does not have a pre-determined relationship to rank/DA pair
Set of 16 Logical Subsystems (LSSs) is called an Address Group
LSS 00-0F, 10-1F, 20-2F, 30-3F, etc.
Storage type for entire Address Group (16 LSSs) is set to CKD or
Fixed Block by the first LSS defined
CKD LSSs (LCUs) are explicitly defined
Allows specification of LCU type and SSID
Issue lsaddressgrp to find unassigned address groups
lsaddressgrp -dev IBM.2107-75nnnnn
IBM Systems and Technology Group
Creating CKD LCUs (cont)
Analyze the report to identify all of the address groups that are
available to be defined. Use the following criteria:
If the list is empty, all of the address groups are available to be defined.
A defined address group with the storage type fb (fixed block) is not
available to be defined.
A defined address group with the storage type ckd and with fewer than
16 LSSs is available for LCU definition.
If you are using an undefined address group to make new LCUs, select
the lowest numbered address group that is not defined.
If you are defining a new LCU in an existing CKD address group, use
the lslcu command to identify LCUs that are already defined in the
target address group.
Issue the mklcu command to create the LCUs:
dscli> mklcu -dev IBM.2107-75nnnnn -qty 16 -id 00 -ss 0010 -lcutype 3990-3
lslcu -dev IBM.2107-75nnnnn -l
IBM Systems and Technology Group
Creating CKD Volumes
Remember from earlier?
  Array a1 -> rank r1 -> extent pool p0; LCUs 00-0F
View your list of CKD extent pool IDs and determine which extent pool IDs you want to use as the source for the CKD volumes to be created. You obtained this list when you first created your extent pools. If the list is not available, you can issue the lsextpool command to obtain it.
Issue the mkckdvol command to create 128 3390 base volumes for the LCU, and the mkaliasvol command to create the alias volumes:
mkckdvol -dev IBM.2107-75nnnnn -extpool p0 -cap 3339 -name finance#d 0000-007F
mkaliasvol -dev IBM.2107-75nnnnn -base 0000-007F -order decrement -qty 2 00FF
lsrank -dev IBM.2107-75nnnnn -l
IBM Systems and Technology Group
Initialize the CKD Volumes
Use ICKDSF to initialize the newly configured CKD volumes
There is no VTOC, index VTOC, VVDS or volume label at this time
To ensure that you only initialize volumes without a label, specify VFY(*NONE*):
INIT UNITADDRESS(uadd) VOLID(volser) VFY(*NONE*) VTOC(n,n,nn) INDEX(n,n,nn)
IBM Systems and Technology Group
Addendum
DS8000 Volumes with z/OS
HyperPAV
High Performance FICON
DS8000 Performance Monitoring
TPC
RMF
Tivoli Productivity Center
References
IBM Systems and Technology Group
DS8000 Volumes with z/OS
IBM Systems and Technology Group
DS8000 Volumes with z/OS
System z supports both CKD and FB volumes (FB for zSeries Linux)
FB volumes are FCP attached; CKD volumes are FICON attached
Same 4 Gb 4-port FC Host Attachment Feature supports either FCP or FICON
  Assigned at the port level (FCP or FICON; a single port can't be both)
FICON Fastload - new method for Adapter Code Load
  FICON Fastload used for FICON attach
  Compatible with FCP Fastload - allows intermixed use of ports on a host adapter
  Architected event; no long busy used
  Loss of light less than 1.5 seconds for Adapter Code Load (only when adapter code is upgraded)
  Concurrent Code Load support
  Advise that all host attachments have (at least) two ports, preferably on two separate host adapters
CKD also supported by the ESCON Host Attachment Feature
IBM Systems and Technology Group
DS8000 CKD Volumes
CKD standard volumes
  3380, 3390M3, 3390M9
CKD custom volumes
  Minimum volume size specification is 1 cylinder
  Minimum space allocation is 1 extent (1113 cylinders)
  Maximum volume size of 65,520 cylinders/56 GB when the DS8000 was introduced
    With z/OS 1.4 or higher software support
  Maximum volume size is 262,668 cylinders/223 GB with R3.1
    With z/OS 1.9 or higher software support
  Use a multiple of 1113 cylinders if possible
Maximum number of CKD volumes
  64K per logical DS8000 *
* 4K limitation for ESCON access
IBM Systems and Technology Group
DS8000 z/OS HCD Considerations
New device support for D/T2107
DS8000 supports up to 16 Address Groups, 64K logical volumes
For IOCP and HCD, the CU addresses are hex 00-FE
LCU / LSS do not have to be contiguous
Example (Address Group 2, LCUs 20 & 21):
CNTLUNIT CUNUMBR=A000,PATH=(52,53,54,55),
  UNITADD=((00,256)),LINK=(24,34,25,35),
  CUADD=20,UNIT=2107,DESC='N150 LCU20'
CNTLUNIT CUNUMBR=A100,PATH=(52,53,54,55),
  UNITADD=((00,256)),LINK=(24,34,25,35),
  CUADD=21,UNIT=2107,DESC='N150 LCU21'
IODEVICE ADDRESS=((2000,128)),CUNUMBR=A000,STADET=Y,UNIT=3390B
IODEVICE ADDRESS=((2080,128)),CUNUMBR=A000,STADET=Y,UNIT=3390A
IODEVICE ADDRESS=((2100,128)),CUNUMBR=A100,STADET=Y,UNIT=3390B
IODEVICE ADDRESS=((2180,128)),CUNUMBR=A100,STADET=Y,UNIT=3390A
Examples provided at the DS8000 Information Center - search with IOCP
IBM Systems and Technology Group
DS8000 z/OS HCD Considerations Subchannel Sets
Multiple Subchannel Sets
  Relief for 64K devices per LPAR
  z9 (2094) processor / z/OS 1.7 only
HCD implementation
  Initial implementation of SS1 requires POR
  Channel Subsystem (CSS) definition can contain Subchannel Sets 0 & 1
  256 channels per CSS
  No changes to LSS definition in ESS, DS6000, DS8000
  Assign IODEVICE Base to Set 0
  Assign IODEVICE Alias to Set 0 or 1
Duplicate device numbers possible - even desirable
  Providing they are in separate subchannel sets ... no problem
Flexible LSS structure
Changed QPAVS display:
DS QPAVS,E278,VOLUME
IEE459I 09.57.53 DEVSERV QPAVS 046
 HOST                 SUBSYSTEM
 CONFIGURATION        CONFIGURATION
 --------------       --------------------
 UNIT                 UNIT  UA
 NUM.  UA TYPE        SSID  ADDR. TYPE
 ----- -- ----------  ----  ----- ---------
 0E278 78 BASE        3205  78    BASE
 1E279 79 ALIAS-E278  3205  79    ALIAS-78
 1E27A 7A ALIAS-E278  3205  7A    ALIAS-78
 0E27B 7B ALIAS-E278  3205  7B    ALIAS-78
 0E27C 7C ALIAS-E278  3205  7C    ALIAS-78
 0E27D 7D ALIAS-E278  3205  7D    ALIAS-78
 1E27E 7E ALIAS-E278  3205  7E    ALIAS-78
 1E27F 7F ALIAS-E278  3205  7F    ALIAS-78
**** 8 DEVICE(S) MET THE SELECTION CRITERIA
Information provided in: z/OS V1R7.0 HCD Planning (GA22-7525-09)
IBM Systems and Technology Group
Using larger volume sizes
Benefits
  Fewer objects to define and manage
  Less processing for fewer I/O resources
    CF CHPID, VARY PATH, VARY DEVICE
    Channel path recovery, link recovery, reset event processing
    CC3 processing
    ENF signals
    RMF, SMF
  Fewer physical resources: CHPIDs, switches, CU ports, fibers
  Each device consumes real storage:
    768 bytes of real storage for the UCB and related control blocks
    256 bytes of HSA
    1024 bytes/device * 64K devices = 64 MB
    31-bit common storage constraints
  EOV processing to switch to the next volume of a sequential data set significantly slows the access methods
Considerations
  Data migration to larger devices may be challenging and time consuming
IBM Systems and Technology Group
zSeries Parallel Access Volumes (PAVs)
Additional addresses for a single device for improved performance
PAVs are shared within an LSS
  An LSS may be on multiple ranks
  Multiple LSSs may be on one rank
Recommendations
  Use HyperPAV if possible
  If not HyperPAV, use dynamic PAV if possible
    Requires parallel sysplex and WLM
    Requires WLM having dynamic PAV specified
    Requires WLM specified in the device definition
IBM Systems and Technology Group
DS8000 HyperPAV
IBM Systems and Technology Group
I/O without PAV
(Diagram: in a z/OS sysplex, applications in each z/OS image do I/O to base volumes through UCBs 0801 and 0802; if a UCB is busy, the I/O waits on the IOS queue. On the DS8000, Logical Subsystem (LSS) 0800 presents base UA=01 and base UA=02.)
IBM Systems and Technology Group
Parallel Access Volumes - Today
(Diagram: with PAV today, each z/OS image adds statically bound alias UCBs - 08F0/08F1 for base UA=01 and 08F2/08F3 for base UA=02 - alongside base UCBs 0801 and 0802 in LSS 0800.)
IBM Systems and Technology Group
HyperPAV
(Diagram: with HyperPAV, alias UAs F0-F3 form a pool in LSS 0800; each z/OS image binds an alias UCB to a base volume only for the duration of an I/O, so the aliases are shared on demand across the sysplex.)
IBM Systems and Technology Group
Benefits of HyperPAV
Reduce the number of required aliases
  Give back addressable device numbers
  Use additional addresses to support more base addresses or larger capacity devices
z/OS can react more quickly to I/O loads
  React instantaneously to market open conditions
Overhead of managing alias exposures reduced
  WLM not involved in measuring and moving aliases
  Alias moves not coordinated throughout the sysplex
  Initialization doesn't require static bindings
    Static bindings not required after swaps
I/O reduction: no longer need to BIND/UNBIND to manage HyperPAV aliases
Increases I/O parallelism
IBM Systems and Technology Group
HyperPAV System Requirements
Hardware
  DS8000 Bundle Version 6.2.4
  DS8000 FCP/FICON Host Adapters
  Licensed features:
    FICON/ESCON Attach (Turbo): DS8000 #0702 and #7090; 239x-LFA #7090 or 2244-OEL #7090
    PAV: DS8000 #0780 and #78xx; 239x-LFA #78xx or 2244-PAV #78xx
    HyperPAV: DS8000 #0782 and #7899; 239x-LFA #7899 or 2244-PAV #7899
Software
  z/OS 1.6 plus:
    RMF
    IOS support: OA13915
    DFSMS support (DFSMS, SMS, AOM, DEVSERV): OA13928, OA13929, OA14002, OA14005, OA17605, OA17746
    WLM support: OA12699
    GRS support: OA14556
    ASM support: OA14248, OA12865
  Optionally, fixes for GDPS/Omegamon
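As a sketch (placeholder storage image ID), activation of the PAV and HyperPAV licensed functions can be verified from the DS CLI:
dscli> lskey IBM.2107-75nnnnn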
IBM Systems and Technology Group
HyperPAV - Migration
No HCD or DS8000 logical configuration changes required
  On existing LSSs, assuming PAV and FICON are used today
HyperPAV deployment can be staged:
  Load/authorize the HyperPAV feature on the DS8000
    Can run without exploiting this feature if necessary, using the z/OS PARMLIB option
  Enable the HyperPAV feature on z/OS images that want to utilize HyperPAV, via PARMLIB or the SETIOS command
  Eventually enable the HyperPAV feature on all z/OS images in the sysplex and authorize the licensed function on all attached DS8000s
  Reduce the number of aliases defined
IBM Systems and Technology Group
HyperPAV z/OS Options and Commands
SYS1.PARMLIB(IECIOSxx)
  HYPERPAV=YES|NO|BASEONLY
    YES - Attempt to initialize LSSes in HyperPAV mode
    NO - Do not attempt to initialize LSSes in HyperPAV mode
    BASEONLY - Attempt to initialize LSSes in HyperPAV mode, but only start I/Os on base volumes
Enhanced commands
  SETIOS HYPERPAV=YES|NO|BASEONLY
  SET IOS=xx
  D M=DEV
  D IOS,HYPERPAV
  DEVSERV QPAV,dddd
IBM Systems and Technology Group
HyperPAV D M=DEV
D M=DEV(dddd), where dddd is a base volume in a HyperPAV LSS:
SY1 d m=dev(0710)
SY1 IEE174I 23.35.49 DISPLAY M 835
DEVICE 0710   STATUS=ONLINE
CHP                   10   20   30   40
DEST LINK ADDRESS     10   20   30   40
PATH ONLINE           Y    Y    Y    Y
CHP PHYSICALLY ONLINE Y    Y    Y    Y
PATH OPERATIONAL      Y    Y    Y    Y
MANAGED               N    N    N    N
CU NUMBER             0700 0700 0700 0700
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 07
SCP CU ND      = 002107.000.IBM.TC.03069A000007.00FF
SCP TOKEN NED  = 002107.900.IBM.TC.03069A000007.0700
SCP DEVICE NED = 002107.900.IBM.TC.03069A000007.0710
HYPERPAV ALIASES IN POOL 4
IBM Systems and Technology Group
HyperPAV D M=DEV
D M=DEV(dddd), where dddd is an alias device in a HyperPAV LSS:
SY1 D M=DEV(0718)
SY1 IEE174I 23.39.07 DISPLAY M 838
DEVICE 0718   STATUS=POOLED HYPERPAV ALIAS
IBM Systems and Technology Group
HyperPAV D M=DEV
SY1 d m=dev
SY1 IEE174I 23.42.09 DISPLAY M 844
DEVICE STATUS: NUMBER OF ONLINE CHANNEL PATHS
0 1 2 3 4 5 6 7 8 9 A B C D E F
000 DN 4 DN DN DN DN DN DN DN . DN DN 1 1 1 1
018 DN DN DN DN 4 DN DN DN DN DN DN DN DN DN DN DN
02E 4 DN 4 DN 4 8 4 4 4 4 4 4 4 DN 4 DN
02F DN 4 4 4 4 4 4 DN 4 4 4 4 4 DN DN 4
030 8 . . . . . . . . . . . . . . .
033 4 . . . . . . . . . . . . . . .
034 4 4 4 4 DN DN DN DN DN DN DN DN DN DN DN DN
03E 1 DN DN DN DN DN DN DN DN DN DN DN DN DN DN DN
041 4 4 4 4 4 4 4 4 AL AL AL AL AL AL AL AL
048 4 4 DN DN DN DN DN DN DN DN DN DN DN DN DN 4
051 4 4 4 4 4 4 4 4 UL UL UL UL UL UL UL UL
061 4 4 4 4 4 4 4 4 AL AL AL AL AL AL AL AL
071 4 4 4 4 DN DN DN DN HA HA DN DN . . . .
073 DN DN DN . DN . DN . DN . DN . HA . HA .
098 4 4 4 4 DN 8 4 4 4 4 4 DN 4 4 4 4
0E0 DN DN 1 DN DN DN DN DN DN DN DN DN DN DN DN DN
0F1 1 DN DN DN DN DN DN DN DN DN DN DN DN DN DN DN
FFF . . . . . . . . . . . . HA HA HA HA
************************ SYMBOL EXPLANATIONS ************************
@ ONLINE, PHYSICALLY ONLINE, AND OPERATIONAL INDICATORS ARE NOT EQUAL
+ ONLINE
# DEVICE OFFLINE
. DOES NOT EXIST
BX DEVICE IS BOXED
SN SUBCHANNEL NOT AVAILABLE
DN DEVICE NOT AVAILABLE
PE SUBCHANNEL IN PERMANENT ERROR
AL DEVICE IS AN ALIAS
UL DEVICE IS AN UNBOUND ALIAS
HA DEVICE IS A HYPERPAV ALIAS
D M=DEV shows HA for HyperPAV aliases
IBM Systems and Technology Group
WLM and HyperPAV
WLM Dynamic Alias Tuning
  Ignores HyperPAV control units
  Avoids sysplex communications for devices on HyperPAV control units
    Eliminates need for multi-system interlock (DST)
  Mixed environment tolerated (within a system and within a sysplex)
    Manages non-HyperPAV aliases for all systems in the sysplex
    For control units in mixed mode (some systems in HyperPAV, some in Base-PAV mode), only manages aliases in Base-PAV mode
IBM Systems and Technology Group
HyperPAV Summary
Equal or better performance than the original PAV feature
Requires:
  z/OS 1.6+ (with PTFs installed and set-up completed)
  DS8000 PAV license activated
  DS8000 HyperPAV license activated
  FICON attachment (any supported speed)
No changes to an existing DS8000 configuration required
Full co-existence with original DS8000 PAV (static or dynamic) and with sharing z/OS images without HyperPAV enabled
  Allows migration to HyperPAV to proceed flexibly
"z/OS MVS Setting Up a Sysplex" (SA22-7625) publication updated with migration and usage information
IBM Systems and Technology Group
DS8000 High Performance FICON (zHPF)
IBM Systems and Technology Group
High Performance FICON (zHPF)
Improve FICON scale, efficiency and RAS
  As the data density behind a CU and device increases, scale I/O rates and bandwidth to grow with the data
  Significant improvements in I/O rates for OLTP (small block transfers)
  Improved I/O bandwidth
  New ECKD commands for improved efficiency
  Improved first failure data capture
  Additional channel and CU diagnostics for MIH conditions
Value
  Reduce the number of channels, switch ports, control unit ports and optical cables required to balance CPU MIPS with I/O capacity
  Reduce elapsed times (DB2, VSAM) up to 2X
Requirements
  z10 microcode update for FICON Express4 channels
  DS8000 R4.0
IBM Systems and Technology Group
DS8000 Performance Monitoring
IBM Systems and Technology Group
DS8000 Performance Monitoring
For z/OS, there is RMF.
For Open Systems, there is TPC for Disk.
IBM Systems and Technology Group
Collecting the Data with TPC
SC33-7990 Resource Measurement Facility (RMF) Report Analysis
IBM Systems and Technology Group
DS8000 Performance Reports
SC33-7991 Resource Measurement Facility (RMF) Report Analysis
IBM Systems and Technology Group
IBM Systems and Technology Group
TPC Key Performance Metrics
IO Rates and Response Times by Volume
Cache Behavior: Read Hit Percentages
Write Cache Behavior: Write Delays
Disk/Array Performance: Rates, Resp Times, Utilization
Port Behaviors: Data rates, Port Response Times
IBM Systems and Technology Group
IBM Systems and Technology Group
IBM Systems and Technology Group
DS8000 RMF Performance Monitoring
APAR OA06476 provides support for 2107 RMF reporting
Monitor I
  Cache Subsystem Activity Data - Type 74 Subtype 5
  Device Activity Data - Type 74 Subtype 1
  ESS Data - Type 74 Subtype 8
  I/O Queuing Activity Data - Type 78 Subtype 3
IBM Systems and Technology Group
Collecting the Data
Monitor I session options in ERBRMF00:
  CACHE
  DEVICE
    DEVICE(DASD) is the default
  ESS
    NOESS is the default!
    Options: ESS(LINK | NOLINK | RANK | NORANK)
    LINK and RANK are the defaults
  IOQ
    IOQ(DASD) is the default
SC33-7990 Resource Measurement Facility (RMF) Report Analysis
IBM Systems and Technology Group
RMF Reporting
REPORTS(CACHE(options))
  CACHE(SUMMARY)
  CACHE(SUBSYS)
  CACHE(DEVICE)
REPORTS(DEVICE(DASD))
REPORTS(ESS)
  Link statistics
  Extent Pool statistics
  Rank statistics
REPORTS(IOQ)
SC33-7991 Resource Measurement Facility (RMF) Report Analysis
IBM Systems and Technology Group
RMF Cache Subsystem Summary Report
IBM Systems and Technology Group
RMF Cache Subsystem Report
IBM Systems and Technology Group
RMF Cache Device Report
IBM Systems and Technology Group
RMF ESS Link Statistics
IBM Systems and Technology Group
RMF ESS Extent Pool Statistics
IBM Systems and Technology Group
DS8000 ESS Rank Statistics also available for FB Ranks
IBM Systems and Technology Group
RMF Device Activity Report
If the PAV being used is a HyperPAV,
the number will have an H after it
IBM Systems and Technology Group
RMF I/O Queuing Activity
(Sample I/O Queuing Activity report, columns reconstructed: for each LCU/CU pair - e.g. LCU 0003/CU 0070, 000D/1000, 000E/1100, 000F/1200 - the report lists DCM GROUP (MIN/MAX/DEF), CHAN PATHS, CHPID TAKEN, % DP BUSY, % CU BUSY, AVG CUB DLY, AVG CMR DLY, DELAY Q LNGTH and AVG CSS DLY; for HyperPAV LSSs it adds the HPAV columns CONTENTION RATE and WAIT/MAX.)
IBM Systems and Technology Group
RMF Monitor II & III
Monitor II (updated for HyperPAV)
  SMF Type 79 Subtype 9
  Device Report
  IOQ Report
Monitor III (updated for HyperPAV)
  Job Delays
  Device Resource Delays
  Data Set Delays
  Storage Resource Delays
IBM Systems and Technology Group
Tivoli Productivity Center 4.1
IBM Systems and Technology Group
Tivoli Productivity Center 4.1
Tivoli Integrated Portal
SSPC for DS8000
Custom Reporting
Storage Resource Agents
Disk Performance Optimization
Storage Resource Groups
TPC-R embedded in the TPC
installation process
IBM Systems and Technology Group
SSPC
1. Storage portal for configuration & management
2. Centralized server reduces the need to install, manage & administer multiple servers
3. Ease of deployment by shipping pre-loaded
   Administrator points a browser at the SSPC for an enterprise storage view of multiple devices
   Pre-loaded software:
     IBM TPC Basic Edition
     SVC Admin Console
     DS8K Storage Manager linkage
     DS3K, DS4K, DS5K Storage Manager
     TS3500 Tape Specialist linkage
4. TPC Basic Edition provides
   Server, disk & tape asset & capacity reporting
   Contextual topology viewer
5. Easy migration path to the storage mgmt. suite
   Disk performance reporting & trend analysis
   SAN management
   Storage resource management (SRM)
   Replication management
(Diagram: IBM SSPC managing DS8000, SVC, DS3000, DS4000, DS5000, TS3500 and TS3310.)
IBM Systems and Technology Group
Tivoli Productivity Center
Simplified storage management
Cross-platform
DS8000 automated management
Increase efficiency
Lower costs
Integrated management options
Storage Area Network Fabric
Data
Manage data replication
IBM Systems and Technology Group
TPC Disk Manager
Manages storage subsystems
  Connected via SMI-S Provider (CIM Agent)
Volume management
  List, create, & remove volumes
SAN Planner
  Provides policy- and performance-based guidance in configuring subsystem volumes and assigning the volumes to hosts
Monitoring
  Create groups of storage devices
  View job status
  Create Performance Monitor tasks
Alerts
  Create storage subsystem alerts
  Subsystem, disk, volume, port, etc.
Policy management - workload profiles
  Used by SAN Planner to define disk I/O samples for performance
Reports
  Subsystem, volume, disk, association
  Performance reports
IBM Systems and Technology Group
TPC Automatic charts
IBM Systems and Technology Group
Extract TPC metrics into CSV file
IBM Systems and Technology Group
Example of TPC bulk metrics output
IBM Systems and Technology Group
SGA07 - Storage Subsystem Performance, Monitoring
and Capacity Planning for Open Systems
Learn both theoretical foundations in storage
performance as well as specific monitoring techniques
using IBM TotalStorage Performance Center (TPC). The
course discusses essential performance characteristics
of cached disk subsystems, the essential performance
metrics, and enough theory to help understand why
storage products perform as they do. Moreover, the
course covers the practical use of TPC to monitor
performance, to spot performance issues, and to
investigate the causes. Specific TPC reports and
interpretation of the reports are covered, as well as
application of the data to long term capacity planning.
The students will have the opportunity to run through a variety of hands-on exercises with TPC as well.
http://www-304.ibm.com/jct03001c/services/learning/ites.wss/us/en?pageType=course_search&sortBy=5&searchType=1&sortDirection=9&includeNotScheduled=15&rowStart=0&rowsToReturn=20&maxSearchResults=200&language=en&country=us&searchString=sga07
IBM Systems and Technology Group
DS8000 References
DS8000 Information Center
  http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
  Tutorials, overviews, publications and much more!
GC35-0515  DS8000 Introduction & Planning
GC26-7914  DS8000 Messages Reference
SC26-7917  DS8000 Host Systems Attachment Guide
SG24-6786  DS8000 Architecture & Implementation
SC26-7916  DS8000 Command-Line Interface User's Guide
The above publications can be found on the DS8000 Information Center web site!
DS8000 Code Bundle Information (Code Bundle, DS CLI, Storage Manager cross-reference)
  http://www-01.ibm.com/support/docview.wss?uid=ssg1S1002949&rs=555
DS8000 Turbo Information (specs, white papers, etc.)
  http://www-03.ibm.com/systems/storage/disk/ds8000/index.html
IBM Systems and Technology Group
DS8000 References
Techdocs
  http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
PRS3574   IBM DS8000 + System z Synergy - March 2009
WP101528  IBM System z & DS8000 Technology Synergy
PRS3565   ATU - Storage Perf Mgmt with TPC
TD104162  Open System Storage Performance Evaluation
TD103689  Pulling TPC Performance Metrics for Archive and Analysis
Many more white papers, presentations and trifolds can be found on Techdocs!
IBM Systems and Technology Group
Trademarks
The following terms are trademarks of International Business Machines Corporation in the United
States, other countries or both.
AS/400, DS6000, DS8000, DS Storage Manager, Enterprise Storage Server, FICON, FlashCopy,
GDPS, IBM, iSeries, pSeries, RS/6000, RMF, IBM TotalStorage, VM/ESA, VSE/ESA, xSeries, z/OS,
zSeries, z/VM, On Demand Business
Intel and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.
Microsoft, Windows and Windows NT are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
IBM Systems and Technology Group
Disclaimer
Copyright 2004 by International Business Machines Corporation.
No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation.
Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This
information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s)
and/or program(s) at any time without notice. Any statements regarding IBM's future direction and intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
References in this document to IBM products, programs, or services do not imply that IBM intends to make such products,
programs or services available in all countries in which IBM operates or does business. Any reference to an IBM Program Product in this
document is not intended to state or imply that only that program product may be used. Any functionally equivalent program, that does
not infringe IBM's intellectual property rights, may be used instead. It is the user's responsibility to evaluate and verify the operation of
any non-IBM product, program or service.
THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED.
IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NONINFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and
conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement,
etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed
herein.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or
copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.