Description
This course gives an overview of IBM DS8000 concepts and architecture and introduces the features and advanced functions of the IBM DS8000 series, taking the latest DS8870 storage system as an example, so that students can better understand the implementation of DS8000 logical configuration and advanced functions.
Objective
After completing this course, you should be able to:
Describe the DS8000 hardware components and architecture
Describe the virtualization concept of DS8000
Use DS CLI and DS Storage Manager GUI to configure DS8000
Assess usage scenarios for DS8000 Easy Tier
Assess usage scenarios for DS8000 FlashCopy
Assess usage scenarios for DS8000 Remote Mirror
The DS8870 Model 961 is available in the following options:

System Class | Active Processor Configuration per CEC | Processor Feature Code | Total System Active Cores | Total Memory / Persistent | Expansion Frames Supported
Business Class | 2 cores (1 x 4-core card) | #4401 | 4 cores | 16 GB / 1 GB | None
Business Class | 2 cores (1 x 4-core card) | #4401 | 4 cores | 32 GB / 1 GB | None
Enterprise (standard) Class | 8 cores (2 x 4-core cards) | #4403 | 16 cores | 128 GB / 4 GB | 2
Enterprise (standard) Class | 8 cores (2 x 4-core cards) | #4403 | 16 cores | 256 GB / 8 GB | 3
Enterprise (standard) Class | 16 cores (2 x 8-core cards) | #4404 | 32 cores | 512 GB / 16 GB | 3
Enterprise (standard) Class | 16 cores (2 x 8-core cards) | #4404 | 32 cores | 1024 GB / 32 GB | 3
Model | Class | Processors per CEC | Physical Capacity (max.) | Disk Drives (max.) | System Memory | Host Adapters (max.) | 9xE Attach (max.)
961 | Business Class | 2-core | 216 TB | 144 | 16/32 GB | 4 | 0
[Architecture diagram: disk enclosures and I/O enclosures; two enterprise N-way SMP controllers (max 16-way, 512 GB cache per CEC), each with cache memory, joined by a high-bandwidth, fault-tolerant interconnect; back-end RAID adapters (max 8 DA pairs) connect to the disk enclosures.]
[I/O port numbering tables: each I/O enclosure provides six adapter slots (C1-C6, also labeled Slot 0-Slot 5). Each host adapter slot exposes ports T0-T7 with logical port IDs of the form I0xyz (for example, the first slot of the first enclosure holds I0000-I0007); designated slots hold Device Adapters rather than host adapters. The second value in each entry is the corresponding port address (000, 004, ..., 7BC).]
Device adapters connect the processors to the disk drives in the disk enclosures
Device adapters perform all RAID functions and rebuilds in the event of a drive failure
Device adapters are configured in active/active pairs that provide redundant access to drives
RAID levels supported by these device adapters include RAID-5, RAID-6 and RAID-10
Allow for two concurrent read operations and two concurrent write operations at the same time
Device adapters on DS8800 are 8 Gb/second Fibre Channel
[Diagram: redundant DA-pair cabling (1L/1R, 2L/2R) to Gigapack enclosures, each containing interface (IC) cards.]
© 2014 IBM Corporation
2.8 Storage Enclosure to DA-Pair Relationships
Storage enclosures are accessed by both CECs concurrently
I/O to a volume can load balance between HA ports, not between CECs
[Gigapack enclosure diagram: each enclosure houses 24 SAS drives and dual AC/DC power supplies; two interface cards, each with a processor (SRAM and flash), an ASIC, and SFP ports, bridge the 8 Gbps FC connections from the device adapters to a 6 Gbps SAS fabric to the drives.]
Individual DDMs
Array Sites(Sx)
Logical Grouping of 8 DDMs of same speed and
capacity
Array(Ax)
One 8-DDM Array Sites used to construct one RAID5, RAID5 or RAID-6 or RAID10
RAID-6 or RAID10 array
Ranks(Rx)
One Array becomes one CKD or FB Rank
CKD or FB
Available space in rank divided into extents
An extent is the minimum allocation unit when a LUN
or CKD volume is created (FB = 1GB, CKD = 1113
cylinders)
Extent Pools(Px)
1-N Ranks form an Extent Pool
Min of 2 pools1 each for server0 and server1 Extent
Max of 1 pool for each rank Pool
All Extents in a Pool are same storage type (CKD/FB);
same RAID recommended
Associated with server0 or server1
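The extent sizes above determine how capacity requests round up. A minimal sketch in Python (illustrative only, not DS8000 code; the function names are my own):

```python
import math

FB_EXTENT_GIB = 1          # FB extent = 1 GB (per the list above)
CKD_EXTENT_CYLS = 1113     # CKD extent = 1113 cylinders

def fb_extents_needed(capacity_gib):
    # The extent is the minimum allocation unit, so any request rounds up
    # to a whole number of extents.
    return math.ceil(capacity_gib / FB_EXTENT_GIB)

def ckd_extents_needed(cylinders):
    return math.ceil(cylinders / CKD_EXTENT_CYLS)

print(fb_extents_needed(10.5))   # a 10.5 GB LUN consumes 11 extents
print(ckd_extents_needed(3339))  # 3339 cylinders fit exactly 3 CKD extents
```

A 10.5 GB request therefore occupies 11 GB of pool capacity, which is why volume sizes are usually chosen as whole multiples of the extent size.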
Array sites are a fixed 8 disks of the same capacity and speed (for example, 8 x 300 GB)
RAID 6: 6+P+Q, or 5+P+Q+Spare
RAID 10: 4+4, or 3+3+2S (with spares)
RAID 10 arrays perform better for random write
RAID 5 and RAID 10 arrays perform equally for read
Sparing
Arrays without spares may mean better potential random performance
Minimum of 4 spares per DA pair (64 or 128 disks, single disk type)
[Diagram: 8-disk layouts for RAID6 6+P+Q, RAID6 5+P+Q+S, RAID10 4+4, and RAID10 3+3+2S (D = data, P/Q = parity, S = spare).]
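The data/parity/spare notation above maps directly to usable capacity. A small illustration (my own helper, not vendor tooling; figures are raw capacity before any formatting overhead):

```python
def usable_capacity(disk_gb, data_disks):
    # Only the data disks (D) contribute usable space; parity (P, Q) and
    # spares (S) in the 8-disk array site do not.
    return disk_gb * data_disks

# Data-disk counts read straight off the layout names in the text.
layouts = {
    "RAID6 6+P+Q":   6,
    "RAID6 5+P+Q+S": 5,
    "RAID10 4+4":    4,
    "RAID10 3+3+2S": 3,
}
for name, d in sorted(layouts.items()):
    print(name, usable_capacity(300, d), "GB")
```

With 300 GB drives, a 6+P+Q array yields 1800 GB of usable space, while a 3+3+2S array yields only 900 GB, which quantifies the capacity cost of RAID 10 plus sparing.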
45 2014 IBM Corporation
4.6 Sparing example 1: DS8000 RAID 5 with all same capacity and same rpm
An extent pool is a logical construct that aggregates the extents from a set of ranks to form a domain for extent allocation to a logical volume
Ranks in a pool should share the same RAID type and the same disk rpm characteristics, so that the extents in the extent pool have homogeneous characteristics
One or more ranks with the same extent type can be assigned to an extent pool
One rank can be assigned to only one extent pool
There can be as many extent pools as there are ranks
At least two extent pools (one per server, to balance the activity) of each type (CKD or FB)
Additional extent pools may be used to segregate ranks with different DDM types
[Virtualization diagram: array sites (4 disks from the front, 4 from the back) become arrays (choice of RAID 5 or RAID 10), then ranks (choice of FB or CKD); example ranks mix RAID 5 and RAID 10 at 10k and 15k rpm. Rank group 0 is associated with server 0, rank group 1 with server 1. Storage pool striping: extent 0 of volume 2501 lands on RAID array 0, extent 1 on RAID array 2, continuing onto RAID array 4.]
Volume ID format X Y AB: X = address group, XY = LSS, AB = volume number within the LSS
Address Group (CKD or FB): max = 16, numbered 0 to F
LSS: max = 255, numbered 00 to FE; max 16 per address group
Logical Volumes: max = 256 per LSS
Even LSSs (X00, X02, X04, up to XFE) > server 0
Odd LSSs (X01, X03, X05, up to XFD) > server 1
All extents are taken from the same rank until there are enough extents for the requested volume size or the rank is full
If more than one volume is created in one operation, the allocation for each volume starts in another rank
Rotate volumes is not allowed if an extent pool is composed of SSD disks and has a space efficient repository or virtual capacity configured
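The two allocation methods can be contrasted with a toy model (hypothetical helper, ignoring rank capacity limits; not the actual allocator):

```python
def allocate(ranks, volumes_extents, eam="rotateexts"):
    # Toy model of the two extent allocation methods (EAMs):
    #   rotateexts: successive extents of one volume round-robin across ranks
    #   rotatevols: a volume takes all its extents from one rank; the next
    #               volume starts on the next rank
    placement = {}
    for v, n in enumerate(volumes_extents):
        if eam == "rotateexts":
            placement[v] = [ranks[i % len(ranks)] for i in range(n)]
        else:  # rotatevols
            placement[v] = [ranks[v % len(ranks)]] * n
    return placement

print(allocate(["R0", "R1"], [4]))                      # {0: ['R0', 'R1', 'R0', 'R1']}
print(allocate(["R0", "R1"], [2, 2], eam="rotatevols")) # {0: ['R0', 'R0'], 1: ['R1', 'R1']}
```

Rotate extents spreads a single volume's workload across ranks (storage pool striping); rotate volumes keeps each volume on one rank, as the text describes.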
Note: Logical subsystem X'FF' is reserved for internal Storage Facility use
Creation of the first LCU (or the first FB volume on the first LSS to be configured) in an address group determines whether all 16 LCUs/LSSs in the address group are CKD (or all 16 are FB)
If no LSSs/LCUs existed in address group 2 (LSSs 20-2F) and FB volume 2310 were created, LSS 23 would be FB and all of address group 2 (LSSs 20-2F) would be FB
If no LSSs/LCUs existed in address group 2 and LCU 23 were created, address group 2 (LSSs 20-2F) would be CKD
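The address-group rule can be expressed as a small state machine (toy sketch; the class and method names are mine, not a DS8000 API):

```python
def address_group(volume_id):
    # Address group = the first hex digit of the four-digit volume/LCU ID.
    return int(volume_id[0], 16)

class StorageImage:
    # Toy model of the rule above: the first LCU (CKD) or FB LSS created in
    # an address group fixes the type of all 16 LSSs in that group.
    def __init__(self):
        self.group_type = {}

    def create(self, volume_id, fmt):
        fixed = self.group_type.setdefault(address_group(volume_id), fmt)
        return fixed == fmt  # False: group already committed to the other type

img = StorageImage()
print(img.create("2310", "FB"))   # True  - address group 2 becomes all FB
print(img.create("2F00", "CKD"))  # False - group 2 is already FB
```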
Host attachments can access specific disk subsystem I/O ports or all valid I/O ports
Recommend allowing access to all valid disk subsystem I/O ports for convenience
Replaces the per-port host access control used on ESS
It is possible to have several host attachments associated with one volume group, but for ease of management we recommend associating only one host attachment with each volume group
[Diagram: System p hosts with numbered attachments mapped to Volume Group X and Volume Group Y.]
Example 1: Hosts Prod1 and Prod2 share the same LUNs in a cluster; both hosts' WWPNs (WWPN-1, WWPN-2) are associated with the same volume group, DBVG.
Example 2: A portion of the LUNs are shared; others are used as dedicated.
Thin provisioning
Allocations from the extent pool are made in 1 GB increments
The first volume in an address group determines the type of the address group (all CKD or all FB)
An extent pool should contain only ranks with similar characteristics (for example, Redundant Array of Independent Disks (RAID) level, disk type); exceptions apply to hybrid pools
Ranks in one extent pool should belong to separate device adapters (DAs)
Assign multiple ranks to extent pools to take advantage of Storage Pool Striping
I/O ports:
Distribute host connections of each type (FICON and FCP) evenly across the I/O enclosures
A port can be configured as FICON or Fibre Channel Protocol (FCP)
Ensure that each host is connected to at least two different host adapters in two different I/O enclosures for redundancy
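The redundancy guideline above is easy to check programmatically; a hypothetical validation helper (names and data shapes are mine):

```python
def redundant(host_paths):
    # host_paths: (I/O enclosure, host adapter) pairs for one host's connections.
    # The guideline: at least two different host adapters in at least two
    # different I/O enclosures.
    enclosures = {enc for enc, _ in host_paths}
    adapters = set(host_paths)
    return len(enclosures) >= 2 and len(adapters) >= 2

print(redundant([("bay1", "HA0"), ("bay3", "HA2")]))  # True
print(redundant([("bay1", "HA0"), ("bay1", "HA1")]))  # False: one enclosure only
```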
Configuration flow:
1. Install license keys
2. Create arrays
3. Create ranks
4. Create an extent pool and assign ranks to it
5. Create a repository for Space Efficient volumes
6. Create volumes for open systems
7. Create a volume group and assign volumes to it
8. Create host connections for open systems
9. Configure I/O ports
Supported browser:
Mozilla Firefox 17 ESR
Microsoft Internet Explorer 9
In the navigation pane of the DS GUI, hover over Home and click System Status. The System Status window opens.
Enter the IP address of the HMC that belongs to the storage complex that you want to add. Click OK to continue.
In the navigation pane, hover over Pools and click Internal Storage. The Internal Storage window is displayed.
Click Properties under the Action drop-down menu to check the detailed status of an array site.
Click Create All after you decide to continue with the proposed configuration.
1. Hover over the Home icon and select System Status. The System Status window opens.
2. Select the storage image for which you want to configure the ports and, from the
Action drop-down menu, click Storage Image -> Configure I/O ports
3. The Configure I/O Ports window opens. Select the ports that you want to format, and then click the wanted port format (FcSf, FC-AL, or FICON) from the Action drop-down menu.
1. Hover over Hosts and click Hosts. The Host connections summary opens.
After the I/O ports are defined, click Next. The Verification window opens; click Finish to create the host system.
3. To ensure a balanced configuration, select extent pools in pairs (one from each server). Click Next to continue.
4. Click OK to continue. The Create Volumes window opens; click Next to continue. Click Finish.
Run
Windows: dscli.exe
UNIX: ./dscli in /opt/ibm/dscli
Must be root
DS CLI profile
Default: /opt/ibm/dscli/profile/dscli.profile
C:\Program Files\IBM\dscli\profile\dscli.profile
Created at installation
Invoked at DSCLI startup
Contains:
Address of S-HMC
Storage image ID
User information (password file or username and password)
Password files
Encrypted userid and password
Generated with mkuser and managepwfile commands
Stored in /opt/ibm/dscli/security/<S-HMC IP>/<file_name>
DSCLI profile file is the place to specify values for flags that
are needed for many commands
HMC/SMC IP addresses
hmc1
hmc2
Storage image ID
devid
Userid and password
Profile can point to password file (pwfile) containing encrypted userid and
password
Profile can point directly to username and password
Timeout for network interface client/server communication
Default is 420 seconds
Other values (see following slides)
# indicates a comment
#
#
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1: 9.82.22.112
hmc2: 9.82.22.112
#
# Password filename
# The password file can be generated using mkuser command.
#
pwfile: rmmadmin
#
# Default target Storage Image ID
# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storage_image_ID" command options, respectively.
devid: IBM.2107-7506571
#remotedevid: IBM.2107-AZ12341
#
# locale
# Default locale is based on user environment.
#locale: en
Single-shot mode:
The user specifies the CLI program and command to execute at the
shell prompt:
shell> dscli lsvolgrp -dev IBM.2107-1300861
Interactive mode:
The user starts the CLI program and then specifies commands at the
program prompt.
Script mode:
The user starts the CLI program and specifies a file containing
commands to run:
shell> dscli -script ~/bin/volumegroupcreation
dscli command
dscli -script
C:/easydscli/projects/MONTPELLIER/inventory.scr
-cfg C:\Progra~1\IBM\dscli\profile\MOP_EDUC2.profile
mkfbvol -help
Copyright IBM Corporation 2009
7.17 Command flag parameters
-v verbose
Case-insensitive
Except for nicknames
DS8000 configuration flow:
Define users
Apply keys
Verify DS8000 hardware
Configure storage unit in the storage facility image (SFI)
Verify array site
Create arrays
Create ranks
Create pool
Create volume groups
Create volume
Create host connect with WWPN
Assign volume groups to host
Apply keys
applykey -key xxxxxxxxx,yyyyyy,zzzzzz storage_image_ID
or
applykey -file file_name storage_image_ID
-key
Keys separated by commas
-file
XML file containing keys, for example:
applykey -file C:\2107_75BV321.xml IBM.2107-75BV321
7.30 Verify your license keys before starting
dscli lskey IBM.2107-75BV321 -cfg
C:\Progra~1\IBM\dscli\profile\MOP_EDUC2.profile
Date/Time: May 28, 2008 11:26:26 AM PDT IBM DSCLI Version:
5.3.1.101 DS: IBM.2107-75BV321
Activation Key Authorization Level (TB) Scope
============================================================
Global mirror (GM) 50.6 All
IBM FlashCopy SE 50.6 All
IBM HyperPAV On CKD
Metro mirror (MM) 50.6 All
Metro/Global mirror (MGM) 50.6 All
Operating environment (OEL) 50.6 All
Parallel access volumes (PAV) 50.6 CKD
Point in time copy (PTC) 50.6 All
RMZ Resync 50.6 CKD
Remote mirror for z/OS (RMZ) 50.6 CKD
3. T/F: The DS8000 CLI can create configuration XML files which can be downloaded to the DS8000 subsystem and used to configure the DS8000.
Answers:
2. False. The DSCLI must access the DS8000 S-HMC to perform its functions.
4. True. The DSCLI is the primary Copy Services interface for the DS6000 or DS8000.
List DDMs
dscli> lsddm -dapair 0 IBM.2107-75TY901
ID DA Pair dkcap (10^9B) dkuse arsite State
===============================================================================
IBM.2107-D01-08XJS/R1-P1-D1 0 300.0 array member S3 Normal
IBM.2107-D01-08XJS/R1-P1-D15 0 300.0 spare required S3 Normal
IBM.2107-D01-092P5/R1-P1-D12 0 300.0 unconfigured S2 Failed
Create Array
mkarray -dev storage_image_ID -raidtype [5 | 6 | 10] -arsite array_site
dscli> mkarray -raidtype 10 -arsite S1
List Array
dscli> lsarray -l
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B) diskclass encrypt
=====================================================================================
A0 Assigned Normal 10 (4x2) S1 R0 0 300.0 ENT unsupported
A1 Assigned Normal 10 (3x2+2S) S2 R1 0 300.0 ENT unsupported
A72 Assigned Normal 5 (6+P+S) S74 R70 4 146.0 SSD unsupported
Change Rank
dscli> chrank -unassign rank_ID
dscli> chrank -extpool extpool_ID rank_ID
Show Rank
dscli> showrank r0
ID R0
Array A0
RAIDtype 10
extpoolID P0
extpoolnam cmcc_archive
-------->volumes on rank
volumes 7201
stgtype fb
exts 1054
usedexts 30
widearrays 1
nararrays 0
trksize 128
strpsize 512
strpesize 0
extsize 16384
migrating(in) 0
migrating(out) 0
Creating Volume
>mkfbvol [ { -t10dif|-os400 A01|A81|A02|A82|A04|A84|A05|A85|A06|A86|A07|A87 } ] -extpool extentpool_ID -cap
capacity -type blocks|ess|ds -name volume_name -volgrp volume_group_id -eam [rotatevols|rotateexts] -sam
[standard|tse|ese] Volume_ID
List Volume
>lsfbvol -datatype [512 | 512t | 520p | 520u] -extpool extpool_id -lss lss_id -volgrp volgrp_id -eam [legacy |
rotatevols | rotateexts | managed] volume_id
dscli> lsfbvol
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
=====================================================================================
DIF 0801 Online Normal Normal 2107-900 FB 512T P0 10.0 - 20971520
Vol 0600 Online Normal Normal 2107-900 FB 512 P6 50.0 - 104857600
S01 3209 Online Normal Normal 2107-A02 FB 520P P18 16.3 17.5 34275328
Create Volgrp
mkvolgrp -hosttype hostType -volume volume Volume_Group_Name
List Volgrp
dscli> lsvolgrp -l
Name ID Type
====================================================
DS8KPerf V0 SCSI Mask
I7202VG V1 OS400 Mask
Change Volgrp
chvolgrp -name new_Volume_Group_name -action [add|remove|replace] -volume volume_id Volume_Group_ID
Show Volgrp
dscli> showvolgrp v16
Name Reclam_VG
ID V16
Type SCSI Mask
Vols 4000 4001 4002 4003 4004 4100 4101 4102 4103 4104
List hostconnect
dscli> lshostconnect
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
==========================================================================================
SVC_N1P3 0019 500507680110C441 SVC San Volume Controller 0 V8 all
TYF_FCS2 0021 C05076EA2D000EB0 zLinux IBM zSeries - zLinux 0 V17 all
Ora_fcs2 000C 10000000C9CC666E pSeries IBM pSeries - AIX 0 V4 all
PWRVMi 0064 C0507602773F002A iSeries IBM iSeries - OS/400 0 V35 all
VMware_H1 0025 10000000C9DD291F - VMWare 0 V14 all
zBX_FCS0 003A 21000024FF3891C4 LinuxRHEL Intel - Linux RHEL 0 V8 all
This activity is broken down by reads and writes, and by large I/O and small I/O, for the analysis code to evaluate which data should be moved
Large I/Os are those which are basically a full track in size
Easy Tier considers only the small I/Os when prioritizing the extents to be moved; any tracks that are staged as full tracks in track mode for read misses are therefore not considered in this decision criterion
Large I/Os are considered to evaluate whether ranks or device adapters may become overloaded by moving extents to SSDs
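That split between small and large I/O can be sketched as follows (illustrative pseudologic, not the actual Easy Tier algorithm; the full-track threshold value is an assumption):

```python
FULL_TRACK_BLOCKS = 128  # assumed threshold for a "large" (full-track) transfer

def extent_heat(ios):
    # ios: (extent_id, transfer_size_in_blocks) access records.
    # Small I/Os build the heat map that picks promotion candidates; large
    # I/Os are tallied separately, used only for tier/DA overload checks.
    small, large = {}, {}
    for extent, blocks in ios:
        bucket = large if blocks >= FULL_TRACK_BLOCKS else small
        bucket[extent] = bucket.get(extent, 0) + 1
    candidates = sorted(small, key=small.get, reverse=True)
    return candidates, large

cands, large = extent_heat([(1, 8), (1, 8), (2, 128), (3, 8)])
print(cands)  # [1, 3]: extent 2 saw only a full-track I/O, so it is not promoted
print(large)  # {2: 1}
```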
Merged pool (two or three tiers)
Manual Mode: volume-based data relocation
Automatic Mode: extent-level data relocation; hot extents migrate up, cold extents migrate down
Promote and Swap: move hot data to higher-performing tiers
Warm Demote: prevent performance overload of a tier
Auto-Rebalance: redistribute extents within a tier to balance utilization across ranks for maximum performance
[Diagram: three tiers of ranks. Highest performance tier: SSD ranks 1..n, with auto move and swap capability. Higher performance tier: enterprise (ENT) HDD ranks 1..n. Lower performance tier: nearline (NL) HDD ranks 1..m. Arrows show promote/swap, warm demote, demote, and auto-rebalance movements between and within tiers.]
Easy Tier Manual Mode allows a user to perform the following actions:
Dynamic Volume Migration
User can change a logical volume's storage technology by dynamically relocating it between extent pools
User can change a logical volume's extent allocation method (EAM) (e.g. can re-rotate extents within the target extent pool)
Dynamic Extent Pool Merge
User can merge two existing extent pools without moving data
Consolidate extent pools with equivalent disks
Merge extent pools to create a mixed-technology extent pool for Automatic Mode
Rank Depopulation
Storage administrator can ask that a rank be removed from an extent pool
Automatic, non-disruptive, and transparent to host access; the used extents are reallocated to other ranks in the pool and the rank is freed
Manual Volume Rebalance
Redistributes a volume's extents within a non-managed single-tier pool
Same pool; can change the EAM
[Rank depopulation example: Extent Pool 0 contains volume 1004 spread across its ranks; after "chrank -unassign r2", the extents that were on rank R2 are reallocated to the remaining ranks in Extent Pool 0.]
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
=============================================================================================
Merg01_Pool P6 fb 0 below 375 2 375 0 1
Merg02_Pool P8 fb 0 below 375 2 375 0 1
Merge pools
dscli> chextpool -merge p6 p8
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
=============================================================================================
Merg02_Pool P8 fb 0 below 750 2 750 0 2
Run managefbvol to rebalance volume 100c so that it spans 8 ranks
dscli> managefbvol -eam rotateexts -action migstart 100c
You control and manage the DS8000 Copy Services functions by using the following interfaces:
DS Storage Manager, the GUI of the DS8000 (DS GUI)
DS Command-Line Interface (DS CLI), which provides a set of commands that cover all Copy Services functions and options
Tivoli Storage Productivity Center for Replication, which lets you manage large Copy Services implementations easily and provides data consistency across multiple systems; it is now part of Tivoli Productivity Center 5.1 and IBM SmartCloud Virtual Storage Center
DS Open Application Programming Interface (DS Open API)
[FlashCopy diagram: source and target volumes. The FlashCopy command is issued with NOCOPY; when a BACKGROUND COPY is complete, the relationship is withdrawn.]
Change Recording keeps track of changes made to source and target volumes
after establishment of FlashCopy relationship
Use Change Recording along with BACKGROUND COPY and PERSISTENT
Supported only at full volume/LUN level
There can only be one incremental relation per volume but can coexist with other
non-incremental relationships
During refresh:
To maintain the incremental relationship, specify Change Recording on
each incremental FlashCopy
Only changed data is copied in the background
Previous increment BACKGROUND COPY does not have to complete before
new increment is taken if the FlashCopy is in the same direction
A new FlashCopy increment can be performed in the reverse direction
Previous incremental BACKGROUND COPY going in the opposite direction
must complete before performing an incremental in the other direction
Hold off initiation/completion of write I/O to the source volumes until the FlashCopy establish is completed
Select source and target volumes with the freeze option
The Consistency Group Created command allows resumption of I/O
One per LSS
Enables creation of a consistent point-in-time copy across multiple volumes with minimum host impact
The target of each source volume is within one physical disk subsystem, but source volumes within a consistency group can span physical disk subsystems
Create FlashCopy
mkflash -tgtse, -cp, -nocp, -freeze, -record, -persist source_id:target_id
List FlashCopy
lsflash
Resync FlashCopy
resyncflash source_id:target_id
FlashCopy Consistent Group
unfreezeflash
Reverse FlashCopy
reverseflash
Remove FlashCopy
rmflash
Create a FlashCopy
Incremental FlashCopy, with CG option
dscli> lsfbvol -lss 13
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
=============================================================================================
Jun_FCS01 1301 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
Jun_FCT01 1302 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
# rmdev -l hdisk2
hdisk2 Defined
#varyonvg fcvg ; mount /fcfs
Replaying log for /dev/fclv.
# ls -la /fcfs
total 4790008
-rw-r--r-- 1 root system 204800 Jan 22 07:55 lvmt.log
-rw-r--r-- 1 root system 2452264960 Jan 22 07:28 t01.tar
Resync FlashCopy
dscli> resyncflash -record -tgtse -cp 1000:1200
Reverse FlashCopy
dscli> reverseflash -record -tgtse 1000:1201
CMUC00169I reverseflash: FlashCopy volume pair 1000:1201 successfully reversed.
dscli> lsflash 1000
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy
==================================================================================================================
1201:1000 12 0 60 Disabled Enabled Enabled Disabled Enabled Enabled Disabled
[TPC-R FlashCopy session flow: (2) choose options, (4) add copy set (source and target), (5) initial FlashCopy; before and after FlashCopy states shown.]
Isolate host connections from remote copy connections (MM, GM, zGM, GC, and MGM) on a host adapter basis
Size the number of host adapters needed based on the expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload)
When possible, isolate asynchronous from synchronous copy connections on a host adapter basis
CMUC00196I failoverpprc: Remote Mirror and Copy pair 1302:1502 successfully reversed.
dscli> lspprc 1302
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
========================================================================================
1302:1502 Suspended Host Source Metro Mirror 13 60 Disabled Invalid
1502:1302 Full Duplex - Metro Mirror 15 60 Disabled Invalid
Mapping PPRC target volume on Secondary storage to host for data validation
# lspv
hdisk2 00f74e92626b6200 fcvg
hdisk3 00f74e92626b6200 fcvg
# rmdev -l hdisk2
hdisk2 Defined
#varyonvg fcvg ; mount /fcfs
# ls -la /fcfs
total 4790008
-rw-r--r-- 1 root system 2452264960 Jan 22 07:28 t01.tar
CMUC00196I failoverpprc: Remote Mirror and Copy pair 1502:1302 successfully reversed.
dscli> lspprc 1502
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
========================================================================================
1502:1302 Suspended Host Source Metro Mirror 15 60 Disabled Invalid
dscli> lspprc 1302
1302:1502 Suspended Freeze Metro Mirror 13 60 Disabled Invalid
Data validation
# lspv
hdisk2 00f74e92626b6200 fcvg
hdisk3 00f74e92626b6200 fcvg
# rmdev -l hdisk3
hdisk3 Defined
#varyonvg fcvg ; mount /fcfs
# ls -la /fcfs
total 4790008
-rw-r--r-- 1 root system 204800 Jan 22 08:54 lvmt.log
-rw-r--r-- 1 root system 2452264960 Jan 22 07:28 t01.tar
[TPC-R Metro Mirror session flow: define site, add copy set; then suspend and recover (repeated for the failover and failback scenarios).]
Application independent
Multiple session support introduced with R5.1
A two-site disaster recovery replication solution
Integrated solution using 3 volumes: A (local), B (remote), C (journal; can be thinly provisioned)
Or 4 copies (D copy for testing without impacting active mirroring)
Very little data loss (Recovery Point Objective (RPO)): single-digit seconds (typically 3-5 seconds)
Scalable: consistency maintained across multiple subsystems, up to 16 physical subsystems in any combination (for example, 3 primary subsystems and 5 secondary subsystems)
16.2 DS8000 Global Mirror Basic Concept
Concept:
Global Copy is used to transmit data to remote site
FlashCopy is used to journal consistent data as updates are made to the Global
Copy secondary volume (B volume)
Data is consistent at remote site using asynchronous data transfer
Intended benefit:
Long distance, no application impact (adjusts to peak workloads automatically),
small RPO, remote copy solution for zSeries and Open Systems data, and
consistency across multiple subsystems
[Global Mirror diagram: host I/O writes to Volume A (primary) at the local site; Global Copy transmits data asynchronously to Volume B (secondary) at the remote site; FlashCopy (record, nocopy, persistent, inhibit target write) preserves consistent data on Volume C (tertiary).]
Global Mirror locks tracks in a consistency group (CG) on the primary DS8000 at the end of the CG formation window
The remaining CG tracks must reach the secondary before CG closure
Host writes to a CG track are held while the track is locked; the host may notice increased response time (e.g. 1 ms -> 100 ms) if a collision occurs
When planning to use FlashCopy SE with GM, the following rules of thumb may be applied to ensure that GM performance is not negatively affected:
For a single RAID array, the maximum write throughput should not exceed 20 MB/sec or 300 IO/sec
For a single extent pool of four or more RAID arrays, the maximum write throughput should not exceed 80 MB/sec or 1200 IO/sec
The anticipated peak write throughput to the primary DS8000 volumes should not exceed the available link bandwidth for any significant time interval
Performance is slightly better with DS8700/DS8800, but these recommendations are still reasonable for DS8700/DS8800
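The rules of thumb above can be wrapped in a simple pre-check (my own helper; the linear scaling for two- and three-array pools is my interpolation, not an IBM figure):

```python
def gm_se_ok(arrays_in_pool, write_mbps, write_iops):
    # Per the text: a single RAID array should stay under 20 MB/s and 300 IO/s;
    # a pool of four or more arrays under 80 MB/s and 1200 IO/s in total.
    if arrays_in_pool >= 4:
        return write_mbps <= 80 and write_iops <= 1200
    return (write_mbps <= 20 * arrays_in_pool and
            write_iops <= 300 * arrays_in_pool)

print(gm_se_ok(1, 15, 250))   # True: inside the single-array envelope
print(gm_se_ok(4, 100, 900))  # False: 100 MB/s exceeds the 80 MB/s pool limit
```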
Performance monitor
Optimization: Easy Tier, thin provisioning
Performance reporting
VMware information management
Storage device management
Host management
Volume management
Performance data collection