
Chuck Laing- Senior Technical Staff Member (STSM) 14 June 2013

Storage Migration Methods Session 1090

2013 IBM Corporation

My objective is simple
To teach someone here today:
One new concept that will make you better. Even better, if many of you learn a few new things today that will make your job easier.

2

2013 IBM Corporation

Addressing data migration issues

How do I address and avoid:


Extended or unexpected downtime
Data corruption
Missing data or data loss
Application performance issues
Technical compatibility issues

2013 IBM Corporation

Knowledge is POWER!

As Systems Administrators we don't always KNOW what we don't know about storage:
Ask for storage, leveraging what you know
Avoid bottlenecks
Use the tools available
Speed problem isolation
Make more informed architectural decisions

As Storage Administrators we don't always KNOW how the storage will be utilized:
Make more informed architectural decisions
Ask what is needed for best performance and I/O separation

What we are NOT going to do today:

Try to turn Sys Admins into Storage Admins, or vice versa
Boil the ocean
4 2013 IBM Corporation

Problem statement
Storage and System Administrators often clash while pursuing the common goal of data performance and availability, leading to:
Too many logical-configuration-related outages
Performance enhancements that do not work to specification

Leading causes:
Lack of understanding of the configurations
No cohesiveness between the logical and physical implementations
Lack of communication between System and Storage Administrators

Resulting in:
Reduced data reliability and I/O throughput

2013 IBM Corporation

The top ten things to know


Systems Administrators should know about Storage
What should I be aware of / what should I avoid? (Tips & Pitfalls - Tuning)
Storage overview - what's inside?
  What is the physical makeup?
  What is the virtual makeup (good throughput design tips)?
  What is a Storage Pool? Where do I place data?
Connectivity - picking the right drivers
  Host Attachment Kits
How to improve performance using LVM
Documentation - why it matters
  Topology diagrams
  Disk mapping (view at a glance)
Easy storage inquiry tools
Bottlenecks
6 2013 IBM Corporation

Storage Admins should know about Logical Volume Manager (LVM)


What should I be aware of / what should I avoid? (Tips & Pitfalls - Tuning)
Hdisk volume (LUN) purpose?
  DB type
  Access patterns
  Number of spindles required
  Stripe / spread / mirror
What is a Volume Group (VG)?
What is a Logical Volume (LV)?
Disk mapping (view at a glance)
Easy storage inquiry tools
Bottlenecks

A deeper dive
Physical to logical makeup

2013 IBM Corporation

What is SVC? (No - saying "SAN Volume Controller" doesn't count!)
SVC provides flexibility across the entire storage infrastructure

Make changes to the storage without disrupting host applications
Manage the storage pool from a central point
Apply copy services across the storage pool
Combine the capacity from multiple arrays or frames into storage pools

[Diagram: hosts access SVC volumes over the SAN; the SVC layer provides Advanced Copy Services and pools capacity from heterogeneous storage subsystems (DS8000, DS4000, HDS, EMC, HP) across 15K rpm, SATA, RAID 5, RAID 1 and JBOD back ends]

SVC - From Physical to Logical View


[Diagram: the SVC virtualization stack, from physical LUNs to host-mapped volumes]
LUNs: four EMC 100GB LUNs and three IBM 200GB LUNs are presented to the SVC
Managed Disks: mdisk0-mdisk3 (100GB each, from EMC) and mdisk4-mdisk6 (200GB each, from IBM)
Storage Pools: mdiskgrp0 [EMC Group] 400GB and mdiskgrp1 [IBM Group] 600GB, striped with an extent size of 16MB-2GB
Volumes: vdisk0 125GB, vdisk1 10GB, vdisk2 525GB, vdisk3 1500GB, vdisk4 275GB, vdisk5 5GB (volumes can be space-efficient)
Mapping to hosts with SDD or a supported multipath driver
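A minimal SVC CLI sketch of the same stack, for orientation only (pool, volume and host names here are invented examples, and exact flags can vary by SVC code level):

  svcinfo lsmdisk                 # the back-end LUNs the SVC has discovered
  svcinfo lsmdiskgrp              # the storage pools (managed disk groups) built from them
  # create a striped volume from a pool and map it to a host
  svctask mkvdisk -mdiskgrp IBM_Group -iogrp 0 -size 275 -unit gb -vtype striped -name vdisk4
  svctask mkvdiskhostmap -host hrinprd vdisk4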

2013 IBM Corporation

Examples of correct Host to SVC Cluster zoning


[Diagram: two examples of correct host-to-SVC-cluster zoning, with vdisk1-vdisk4 presented through both nodes of the I/O group]

Preferred path for vdisk1 is SVC N1P2 & N1P3; non-preferred path for vdisk1 is SVC N2P2 & N2P3
Preferred path for vdisk2 is SVC N2P2 & N2P3; non-preferred path for vdisk2 is SVC N1P2 & N1P3

10

2013 IBM Corporation

DS8000 Hardware Physical Makeup


Is it important to know the physical makeup? Does it really matter?
Summary - DS8700 (2-way/4-way base frame), 242x machine type, model 941

99.999% availability = one day of downtime in 72 years (MTBF)


11 2013 IBM Corporation

Familiar Layout

Physical to Logical - peeling back the layers


Even-numbered extent pools: primary I/O data flow owned by server 0
Odd-numbered extent pools: primary I/O data flow owned by server 1
Balance the workload across both

Just like an onion - virtualization has many layers
Arrays are spread across enclosures
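To illustrate keeping the even/odd extent pools balanced, a hedged DSCLI sketch (the storage image ID, pool IDs, capacities and volume IDs are placeholders; check the DS8000 Information Center for the exact syntax at your code level):

  dscli> lsextpool -dev IBM.2107-75XY123
  # create half of an application's volumes in an even pool (server 0) ...
  dscli> mkfbvol -dev IBM.2107-75XY123 -extpool P0 -cap 100 -name appA_#h 1000-1003
  # ... and the other half in an odd pool (server 1)
  dscli> mkfbvol -dev IBM.2107-75XY123 -extpool P1 -cap 100 -name appA_#h 1100-1103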

How does the DS8000 virtual layer work?

13 2013 IBM Corporation

DS8000 Virtualization layers (Concepts & Architecture)


How logical extents in ranks are formed from the DS8000 6+P+S array format

[Diagram: the rank is divided into 1 GB extents - EXT 1, EXT 2, EXT 3, EXT 4, EXT 5, ...]

14

2013 IBM Corporation

What causes disk latency? Does cache mitigate disk latency?

The time to complete an I/O operation depends on:
a. Seek time - time to position the read/write head
b. Rotational delay - time waiting for the disk to spin to the proper starting point
c. Transfer time

LUN 1 (RAID-5 6+P) is made up of strips from the outer section/edge of each physical disk.
LUN 3 (RAID-5 6+P) is made up of strips from the middle sections of each physical disk.

You could deduce that:
a) Logical-disk3 would be a better place to store data that will be randomly accessed, since the read/write heads would most likely have shorter seek times to the middle of the disks.
b) Logical-disk1 would provide greater sequential throughput, since it is on the outer edge of the disks.
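To put rough numbers on those three components (assumed, typical 15K rpm figures - not from the original slide):

  Rotational delay ≈ half a revolution = 0.5 x (60 / 15,000) s ≈ 2 ms
  Average seek ≈ 3-4 ms; transferring an 8 KB block at ~100 MB/s ≈ 0.1 ms
  Total ≈ 5-6 ms per random I/O, i.e. roughly 160-200 IOPS per spindle

Cache can hide much of this for re-read or sequential data, but sustained random workloads eventually run at spindle speed - which is why placement and spindle count matter.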

15

2013 IBM Corporation

LUN Sharing - best practices: is it OK?


You should know!

LUNs sharing the same array/rank in one extpool

16

2013 IBM Corporation

IBM XIV Storage Distribution Algorithm


Each volume is spread across all drives
Data is cut into 1MB partitions and stored on the disks
The XIV algorithm automatically distributes partitions across all disks in the system pseudo-randomly

[Diagram: XIV grid architecture - Interface modules, Switching, Data Modules]


17 2013 IBM Corporation

XIV Distribution Algorithm on System Changes


Data distribution only changes when the system changes:
Equilibrium is kept when new hardware is added
Equilibrium is kept when old hardware is removed
Equilibrium is kept after a hardware failure

[Diagram: Data Modules 1-4 rebalancing after a hardware upgrade]
18
2013 IBM Corporation

XIV Distribution Algorithm on System Changes


Data distribution only changes when the system changes:
Equilibrium is kept when new hardware is added
Equilibrium is kept when old hardware is removed
Equilibrium is kept after a hardware failure

The fact that distribution is full and automatic ensures that all spindles join the effort of data re-distribution after a hardware failure or configuration change. Tremendous performance gains are seen in recovery/optimization times thanks to this fact.

[Diagram: Data Modules 1-4 re-distributing data after a hardware failure]

19

2013 IBM Corporation

Tips - What are the most common/important OS I/O tuning parameters?

Device Queue Depth
Queue depth can help or hurt performance per LUN
Be aware of queue depth when planning system layout; adjust only if necessary
Queue depth is central to the following fundamental performance formula:
  IO Rate = Number of outstanding Commands / Response Time per Command
  For example: IO Rate = 32 commands / 0.01 seconds (10 milliseconds) per command = 3,200 IOPS
To find the defaults, go to each device's Information Center (URLs listed in the links slide)
What are the default queue depths? Some real-world examples:
  OS = Default Queue Depth = Expected IO Rate
  AIX standalone = 16 per LUN = 1,600 IOPS per LUN
  AIX VIOS = 20 per LUN = 2,000 IOPS per LUN
  AIX VIOC = 3 per LUN = 300 IOPS per LUN
  Windows = 32 per disk = 3,200 IOPS per LUN
Other related tuning considerations: HBA transfer rates, FC adapters, LVM striping vs spreading, data placement, random versus sequential access, spreading versus isolation (see the sketch below)

Source: Queue Depth content provided by Mark Chitti
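A minimal AIX sketch for checking and adjusting queue depth (hdisk and fcs numbers are examples; attribute defaults vary by multipath driver, so treat this as an illustration rather than a recommendation):

  lsattr -El hdisk4 -a queue_depth          # current per-LUN queue depth
  iostat -D hdisk4 5 3                      # sqfull/avgsqsz show whether the queue is the limiter
  chdev -l hdisk4 -a queue_depth=32 -P      # change it (with -P, applied at the next reconfigure/reboot)
  lsattr -El fcs0 -a num_cmd_elems          # the adapter-level command limit shared by all its LUNs

  # rule of thumb from the formula above: max IOPS per LUN ≈ queue_depth / response time,
  # e.g. 32 / 0.01 s = 3,200 IOPS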

Data Placement and Host Vdisk mapping


Spreading versus Isolation

Spreading:
Spreading the I/O across MDGs (managed disk groups) exploits the aggregate throughput offered by more physical resources working together
Spreading I/O across the hardware resources will also render more throughput than isolating the I/O to only a subset of hardware resources
You may reason that the more hardware resources you can spread across, the better the throughput
Don't spread file systems across multiple frames, though - it also makes it more difficult to manage code upgrades, etc.

Isolation:
Should you ever isolate data to specific hardware resources? Name a circumstance!
In some cases more isolation on dedicated resources may produce better I/O throughput by eliminating I/O contention
Separate FlashCopy source and target LUNs on isolated spindles
Slide Provided by Dan Braden
21 2013 IBM Corporation


Data Placement - What causes THRASHING?


Placing applications on the same LUNs/pools results in I/O contention
For existing applications, use storage and server monitoring tools to understand current performance and application workload characteristics such as:
  Read/write ratio
  Random/sequential ratio
  Average transfer size (blocksize)
  Peak workload (I/Os per second for random access, and MB per second for sequential access)
  Peak workload periods (time of day, time of month)
  Copy services requirements (Point-in-Time Copy, Remote Mirroring)
  Host connection utilization and throughput (HBA host connections)
  Remote mirroring link utilization and throughput

[Diagram: an extent pool of 8 ranks, each rank cut into strips 1-5; LUN1 is made of strips on the outer edge of the DDMs (the 1s) and could hold App A (RAID-5 7+P), while LUN3 is made of strips in the middle of the DDMs (the 3s) and could hold App B (RAID-5 7+P)]

Thrashing occurs most commonly when workloads peak at the same time or when log files and data files share physical spindles
22 2013 IBM Corporation

Data Placement - Storage Pools and Striping


Should you ever stripe with pre-virtualized volumes?
We recommend not striping or spreading in SVC, V7000 and XIV storage pools
Avoid LVM spreading with any striped storage pool
You can use file system striping with DS8000 storage pools:
  Across storage pools, with a finer-granularity stripe
  Within DS8000 storage pools but on separate spindles, when volumes are created sequentially

[Diagram: striped pools with no host stripe versus sequential pools with a host stripe (host striping with RAID-0 only)]

23 2013 IBM Corporation

Random IO Data layout


What does random LV creation order help prevent? ____

[Diagram: a disk subsystem where each RAID array is presented as one LUN/logical disk (PVs 1-5) in volume group datavg]

# mklv lv1 -e x hdisk1 hdisk2 ... hdisk5
# mklv lv2 -e x hdisk3 hdisk1 ... hdisk4
Use a random order for the hdisks for each LV

Slide Provided by Dan Braden
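Expanding the fragment above into a fuller, hedged sketch (VG name, LP count and hdisk numbers are examples, not from the original slide):

  # each hdisk is assumed to be one LUN on a separate RAID array
  # -e x spreads each LV across the maximum number of PVs, in the order listed
  mklv -y lv1 -e x datavg 100 hdisk1 hdisk2 hdisk5 hdisk3 hdisk4
  mklv -y lv2 -e x datavg 100 hdisk3 hdisk1 hdisk4 hdisk2 hdisk5
  lslv -l lv1      # verify how each LV actually landed on the PVs
  lslv -l lv2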



Sequential IO Data layout


Does understanding the backend enable good front-end configuration?

Sequential IO (with no random IOs) best practice:


Create RAID arrays with data stripes a power of 2
  RAID 5 arrays of 5 or 9 disks
  RAID 10 arrays of 2, 4, 8, or 16 disks
Create VGs with one LUN per array
Create LVs that are spread across all PVs in the VG, using a PP or LV strip size >= a full stripe on the RAID array
Do application IOs equal to, or a multiple of, a full stripe on the RAID array

Avoid LV striping
  Reason: can't dynamically change the stripe width for LV striping
Use PP striping (see the sketch below)
  Reason: can dynamically change the stripe width for PP striping

Slide Provided by Dan Braden
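A hedged PP-striping/spreading sketch of the practice above (VG name, PP size, LP count and hdisk numbers are illustrative; choose the PP size and application I/O size against your array's full-stripe size):

  # one LUN per RAID array, all in one scalable VG with a small PP size
  mkvg -S -y seqvg -s 8 hdisk10 hdisk11 hdisk12 hdisk13
  # -e x allocates physical partitions round-robin across all PVs (PP striping/spreading)
  mklv -y seqlv -t jfs2 -e x seqvg 200
  crfs -v jfs2 -d seqlv -m /seqdata -A yes
  # an LV-striped alternative (mklv -S 64K ...) would lock in the stripe width,
  # whereas PP-spread LVs can be re-spread later with reorgvg or migratepv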


25 2013 IBM Corporation

Data Layout - OS Spreading versus Striping

Is there a difference? What's the diff?

Do you know what your volumes are made of?

[Diagram: file system spread across volumes]

Source: Redbook SG24-6422-00 IBM 800 Performance Monitoring and Tuning Guide 26 2013 IBM Corporation

Data Layout Summary


Does data layout affect IO performance more than any tunable IO parameter?

Good data layout avoids dealing with disk hot spots - an ongoing management issue and cost
Data layout must be planned in advance
  Changes are generally painful
iostat and filemon can show unbalanced IO (see the sketch below)
Best practice: evenly balance IOs across all physical disks, unless tiering
Random IO best practice:
  Spread IOs evenly across all physical disks, unless dedicated resources are needed to isolate specific performance-sensitive data
  For disk subsystems:
    Create RAID arrays of equal size and type
    Create VGs with one LUN from every array
    Spread all LVs across all PVs in the VG
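A minimal sketch of spotting unbalanced IO with the tools named above (interval, count and output path are arbitrary examples):

  iostat -D 5 3                        # per-hdisk rates and service times; compare across disks
  filemon -o /tmp/filemon.out -O lv,pv # trace IO by logical and physical volume
  sleep 60
  trcstop                              # stop the trace and write the report
  # look for a few hdisks carrying most of the IO while the rest sit idle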

27

2013 IBM Corporation

Documentation Does it matter? Why?


Track data placement and Host Vdisk mapping
Disk mapping at a glance
Mapping becomes important

Spreading versus isolation

Spreading

Isolation

28

2013 IBM Corporation

Documentation Why it matters

How do I achieve SVC node to server balance?
Use the SVCQTOOL, listed under the tools section of this slide deck, to produce a spreadsheet similar to this, or
Use the script found in the speaker notes of this slide
Add a column for preferred node to host client

Spreadsheet developed by Keith Williams

Are there any automated storage inquiry tools out there that will help me understand my setup?
Storage tools gather information such as, but not limited to:
  LUN layout
  LUN to host mapping
  Storage pool maps
  Fabric connectivity

DS8QTOOL - go to the following website to download the tool:
http://congsa.ibm.com/~dlutz/public/ds8qtool/index.htm

SVCQTOOL - go to the following website to download the tool:
http://congsa.ibm.com/~dlutz/public/svcqtool/index.htm

30

2013 IBM Corporation

SA - How do I improve disk performance on the Host?


Reduce the number of IOs
  Bigger caches: application, file system, disk subsystem
  Use caches more efficiently
  No file system logging, no access-time updates (see the sketch after this list)

Improve average IO service times
  Better data layout
  Reduce locking for IOs
  Buffer/queue tuning
  Use SSDs or RAM disk
  Faster disks/interfaces, more disks
  Short-stroke the disks and use the outer edge
  Smooth the IOs out over time

Reduce the overhead to handle IOs
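A hedged JFS2 example of the 'no file system logging / no access-time updates' items above (the mount point is an example; log=NULL trades crash-recovery protection for fewer IOs, so use it only where that is acceptable, and option availability varies by AIX level):

  mount -o noatime,log=NULL /appdata     # mount without atime updates or a JFS2 log
  chfs -a options=noatime /appdata       # or make noatime persistent in /etc/filesystems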

31

2013 IBM Corporation

Troubleshooting - What's the most common thing that changes over time?

An ounce of prevention is worth a pound of cure

Depending on the workload characteristics, isolating the workload may prove to be more beneficial and outperform a larger array.

There are 3 important principles for creating a logical configuration for the storage pools to optimize performance:
  Workload isolation
  Workload resource-sharing
  Workload spreading

Some examples of I/O workloads or files/datasets which may have heavy and continuous I/O access patterns are:
  Sequential workloads (especially those with large-blocksize transfers)
  Log files or datasets
  Sort/work datasets or files
  Business Intelligence and Data Mining
  Disk copies (including Point-in-Time Copy background copies, remote mirroring target volumes, and tape simulation on disk)
  Video/imaging applications
  Engineering/scientific applications
  Certain batch workloads
  Data migration

I always separate log files from data files for best performance.

Apps sharing the same physical spindles on traditional arrays may peak at the same time

[Diagram: Pool 1 and Pool 2, each built from ranks; Vdisk1 is mapped to Host A (App A) and Vdisk2 to Host B (App B)]
32 2013 IBM Corporation

Storage Admin - How do I improve disk performance?

Data layout affects IO performance more than any tunable IO parameter
If a bottleneck is discovered, then some of the things you need to do are:
  Identify the hardware resources the heavy-hitting volumes are on
  Identify which D/A pair the rank resides on
  Identify which I/O enclosure the D/A pair resides on
  Identify which host adapters the heavy-hitting volumes are using
  Identify which host server the problem volumes reside on
  Identify empty, unused volumes on other ranks/storage pools
  Move data off the saturated I/O enclosures to empty volumes residing on less-used ranks/storage pools
  Move data off the heavy-hitting volumes to empty volumes residing on less-used hardware resources, and perhaps to another storage device
  Balance LUN mapping across backend and host HBAs, SVC IO groups, and SVC preferred nodes
  Change RAID type
33 2013 IBM Corporation

Troubleshooting: What are some Storage Bottlenecks?


After verifying that the disk subsystem is causing a system bottleneck, a number of solutions are possible, including:
  Consider using faster disks - SSD will outperform HDD, etc.
  Eventually change the RAID implementation, if this is relevant to the server's I/O workload characteristics. For example, going to RAID-10 when the activity is heavy random writes may show observable gains.
  Add more arrays/ranks to the storage pool. This will allow you to spread the data across more physical disks and thus improve performance for both reads and writes.
  Add more RAM. Adding memory will increase the system memory disk cache, which in effect improves disk response times.
  Finally, if the previous actions do not provide the desired application performance: off-load/migrate processing to another host system in the network (either users, applications, or services).

34

2013 IBM Corporation

Summary

Knowing what's inside will help you make informed decisions. You should make a list of the things you don't know
Talk to the Storage Administrator, or those who do know

A better Admin understands:

1. The backend physical makeup
2. The backend virtual makeup
3. What's in a Storage Pool, for better data placement
4. How to avoid the pitfalls associated with IO tuning
5. Where to go to get the right device drivers
6. Why documentation matters
7. Keep topology diagrams
8. Keep disk mapping documentation
9. Be able to use storage inquiry tools to find answers
10. How to troubleshoot storage performance bottlenecks
2013 IBM Corporation

35

A four-step Migration Process

[Diagram: the migration process cycle - Evaluate, Plan, Execute, Validate]
36 2013 IBM Corporation

Storage Migration Methods

Evaluate

37

2013 IBM Corporation

Evaluate the data migration process

Migrating data is always a disruptive process. Whatever migration technique is used, it always affects the normal operations of the system to some degree.

Selecting the appropriate technique depends on:
  The criticality of the data being moved
  The resources available
  Other business constraints and requirements

Note: Risks should be identified for each candidate migration technique. We strongly recommend selecting the technique that is the best compromise between efficiency and the least impact to the system users.

38

2013 IBM Corporation

Evaluate the migration technique summary


Make a list of pros and cons (each technique offers strengths and limitations)

Host-based: LVM, LDM, add-on software such as VxVM; volume (block) level: TDMF
  Pros: Generally lowest initial implementation cost; leverages the existing IP network; LVM or LDM tools already available; storage device-agnostic; leverages existing operating system skills; migration can happen online, during peak hours
  Cons: Consumes host resources; operating system specific; management can become complex and time consuming; each host is its own island - no central management console; may cause an initial outage to install the utility or software if it is not already on the host

Network-based: Fabric, TDMF-IP
  Pros: Supports heterogeneous environments (servers and storage); single point of management for replication services
  Cons: Higher initial cost due to hardware and replication software; requires proprietary hardware and may require implementation of storage

Application-based: SVC
  Pros: Migration can happen online, during peak hours; supports heterogeneous environments (servers and storage); single point of management for migration; does not require additional special tools, software or hardware; does not require additional skills or training
  Cons: Requires an initial outage to bring the host volumes online to SVC; requires the host to reboot to load or upgrade the multipathing drivers

Tape backup/restore based: TSM, etc.
  Cons: Requires disruption of the applications and downtime; slow and cumbersome

39

2013 IBM Corporation

Evaluating key decision factors


Key factors, their descriptions, and which techniques offer the capability:

Performance - how quickly data can be copied from the source to the target, balanced against system overhead. Capability: TDMF, SVC
Primary volume / source data protection - if something goes wrong, the migration can be terminated and application processing restarted or continued on the source data/device. Capability: TDMF, SVC (with limitations), LVM/LDM, tape based
Implement tiered storage - moving data to a different array or to different storage media for cost/performance, without disruption. Capability: TDMF, SVC, LVM/LDM
Multi-vendor environments - many data centers use hardware from several vendors, which can result in source and target hardware being from different vendors. Capability: TDMF, SVC, LVM/LDM (with possible restrictions), Fabric, tape based
Application downtime - applications have different levels of business criticality and therefore have varying degrees of acceptable downtime. Capability: all with limits - TDMF, SVC, LVM/LDM, Fabric

40

2013 IBM Corporation

Evaluating migration triggers


Disk consolidation can trigger data migration of storage when:
You want to change computer systems
You need to upgrade to new products to stay competitive
New functions of evolving technology are introduced
Database growth
You need newer, faster, higher-density devices
You want to take advantage of the ever-improving price/performance ratio of new storage devices
You just require more flexibility
You need to relocate your data center
You need to reduce the footprint of your storage subsystem within the data center
You want to leverage data migration to provide Disaster Recovery solutions
41 2013 IBM Corporation

Storage migration can trigger LVM data movement when:


You want to spread IO evenly across all the disks in the VG
You need to align IO access patterns (random access, sequential access)
You want to protect the data integrity
Database growth
Database refreshes
You need to consolidate the space in a VG or multiple VGs
You need to troubleshoot an ailing volume, for performance or availability (failure boundary)
You need to separate data into separate LVs

A few LVM migration dos and donts


Do:
  Consider failure boundaries
  Span multiple frames for temporary migration purposes only - add LUNs to the VGs spanning multiple frames temporarily
  Put sequentially-accessed LVs on their own LUN
  Take advantage of the disk subsystems - treat RAID array groups as single disks
  Use PP striping - reason: can dynamically change the stripe width for PP striping

Don't:
  Span multiple storage frames in one LV
  Use LV striping - reason: can't dynamically change the stripe width for LV striping
  Use 'pladding' (striping on striping)
  Use the same spindles for data and logs

42

2013 IBM Corporation

Storage Migration Methods

Plan

43

2013 IBM Corporation

Planning phase

A successful data migration always requires substantial evaluation and planning


Adequate planning is the critical success factor in a migration project
Develop a high-level migration plan
Develop a detailed migration plan

44

2013 IBM Corporation

Plan data migration techniques in Open Systems

This functionality can be used for:
  Redistribution of LVs and their workload within or across back-end storage
  Moving workload onto newly installed storage subsystems
  Moving workload off of storage so that old/failing storage subsystems can be decommissioned
  Moving workload to re-balance a changed workload
  Migrating data from legacy back-end storage to newer managed storage

45

2013 IBM Corporation

Plan

Determine whether you can migrate data online or off-line


Online migration means data can be moved from source to target platforms with:
  No impact to the end user outside of scheduled outage windows
  No data I/O loss between the application and the disk storage subsystem
  Very little performance impact affecting the end user
Off-line migration means that when data is moved:
  Data must be in a known state, typically requiring updates or changes to cease while the movement occurs
  Data could be unavailable for an extended period of time, perhaps several hours or days

46

2013 IBM Corporation

Planning phase - Example migration methodology plan


Action Item
Establish a migration management team
Gather availability and production schedules
Document change control procedures and incorporate them into the plan
Document the timeline for migration activities

Assigned

Status

Date

Announce the migration at least 30 days prior to the intended target migration date
Gather information about the storage server environment and applications (lists, commands, scripts and/or drawings)
Schedule a pre-migration rehearsal that includes all members of the migration team and a data sampling that will enable the application groups to appropriately conduct the pre- and post-migration verification process
Establish a migration status call-in process

47

Utilize a migration planning checklist to ensure that all of the pre-migration planning steps have been executed

Establish a migration management team / technical migration team


Team members may include but are not limited to:
Project manager
Client (account) manager
DBA/application owners
System administrator
Network administrator
Security administrator
Firewall administrator
Disk storage administrator
Backup/recovery administrator
SAN fabric administrator
Hardware CE
Floor planner
Cable vendor
Disaster Recovery administrator
IT Architect
48 2013 IBM Corporation

Gather availability and production schedules

Some examples of application availability constraints may include, but are not limited to:
  Month-end/quarterly processes
  FlashCopy or Metro/Global Mirror copy processes and their time restrictions
  Database/application refreshes

49

2013 IBM Corporation

Planning Phase - Example drawings may look like this:


For example, you may want to go from a Hitachi 9980 (on the left) to a DS8000/SVC (on the right).
Prepare the physical cabling and zoning setup from the backend, through the SAN fabric, to the hosts and the SVC.

[Diagram: current and target layouts - Hitachi 9980 left/right controllers and CHA ports, cabled through redundant Brocade switches (an even and an odd SAN fabric) to the AIX host 'hrinprd' (four FC adapters, fcs0-fcs3) and to the port pairs (P1-P4) on SVC Nodes 1-4]

50

2013 IBM Corporation

Planning phase design requirements


Understanding the requirements may help simplify the migration process

Application environment:
  Databases to be moved (DB2, Informix, Oracle, SQL Server, Sybase)
  Database version
  Database size
  Availability requirements of the databases (any existing SLAs, downtime issues to consider)
  Cluster environment (MSCS, Veritas, Sun, HACMP, MC/ServiceGuard, etc.)

Network environment (if applicable):
  Topology
  Speed of network

51

2013 IBM Corporation

Planning phase design requirements


Understanding the requirements may help simplify the migration process

Storage environment:
  Storage vendor and model (EMC, HDS, IBM, STK, Dell, HP)
  Channel type (ESCON, FICON, Fibre Channel, iSCSI, SAN)
  SAN HBA and model (QLogic, Emulex, JNI)
  Number of channel paths
  Logical-to-physical mapping (i.e. RAID-1 vs. RAID-5)
  Number of source volumes to be migrated
  Volume sizes
  Identify target volumes to receive the source data

52

2013 IBM Corporation

Planning summary - Example migration methodology approach


Utilize a migration planning checklist to ensure that all of the pre-migration planning steps have been executed.

Migration and validation methodology checklist:
  Based on the information gathered in the planning phase, structure the migration architecture to match the production requirements
  Use checklists to ensure all operating system patches and software are at the correct levels
  Build detailed migration procedures following the chosen architecture
  Put together a schedule of events with timelines to implement the migration procedures
  Establish an initial test plan to validate the initial installation of all required components
  Develop a cooperative deployment plan
  Write and configure any automation scripts that will speed up the process
  Run a simple initial test plan that validates the migration process
  Implement the migration procedures and timeline built in the design phase
  Verify the migration completion by checking the successful completion and status of the migration jobs
53 2013 IBM Corporation

Storage Migration Methods

Execute

54

2013 IBM Corporation

Execute

During the migration phase, you will need to:


Communicate your plans
Obtain, install and configure any necessary:
  Hardware
  Software
  Automation scripts and tools (to perform the actual data migration)

55

2013 IBM Corporation

Execute
An example migration may go as follows. This high-level illustration shows the execution of migratepv -l.

56

2013 IBM Corporation

Execute summary - Example migration methodology approach


LVM using migratepv

Command - Process and explanation
(storage-side step) - Identify and assign the DS8000 LUNs to the targeted AIX host server
lsvpcfg - Identify the ESS source and DS8000 target LUNs on the host server
bootinfo -s - Identify the sizes of the DS8000 target LUNs
extendvg - Move the DS8000 LUNs into the VGs appropriately
lsvg -p - Verify the DS8000 LUNs are added to the VG
lsvg -l - Identify the logical volumes (LVs) to migrate
migratepv -l lv_name - Copy LV data from the ESS source LUNs to the DS8000 target LUNs
lsvg -p vg_name - Verify the LUNs are copied
reducevg - Remove the ESS source LUNs from the VGs
rmdev -dl - Delete the device definitions from the host ODM
lsdev -Cc disk - Verify the device definitions are removed
(storage-side step) - In the ESS, unassign the LUNs from the host server
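Stitching the table above into one hedged command sequence (VG, LV and hdisk names are examples only - hdisk2 stands in for an ESS source LUN and hdisk10 for a DS8000 target):

  lsvpcfg                             # identify source (ESS) and target (DS8000) LUNs
  bootinfo -s hdisk10                 # confirm the target LUN size (MB)
  extendvg datavg hdisk10             # add the DS8000 LUN to the volume group
  lsvg -p datavg                      # verify the new PV is in the VG
  lsvg -l datavg                      # list the LVs to migrate
  migratepv -l datalv hdisk2 hdisk10  # move one LV from the ESS LUN to the DS8000 LUN
  lsvg -p datavg                      # the source PV should now show its PPs free
  reducevg datavg hdisk2              # remove the ESS LUN from the VG
  rmdev -dl hdisk2                    # delete the device definition from the ODM
  lsdev -Cc disk                      # verify the hdisk is gone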
57 2013 IBM Corporation

Execute summary - Example migration methodology approach


LVM using mklvcopy

Command - What it does
chvolgrp -dev (DSCLI) - Identify and assign the DS8000 LUNs to the targeted AIX host server
lsvpcfg - Identify the ESS source and DS8000 target LUNs on the host server
bootinfo -s - Identify the sizes of the DS8000 target LUNs
extendvg - Move the DS8000 LUNs into the VGs appropriately
lsvg -p - Verify the DS8000 LUNs are added to the VG
lsvg -l - Identify the logical volumes (LVs) to migrate
lslv -l lv_name - Determine how the LVs are spread across the vpaths
mklv -y lvdummy - Reserve free space on each LUN for an even spread of the data across LUNs
mklvcopy - Copy (mirror) LV data from the ESS source LUNs to the DS8000 target LUNs
lslv -l - Verify the LV copies are made
syncvg - Synchronize the LV data from the ESS source LUNs to the DS8000 target LUNs
lsvg -l - Verify that the sync isn't showing stale; it should show as syncd

58

2013 IBM Corporation

Execute summary - Example migration methodology approach

LVM using mklvcopy (continued)

Command - What these commands do
lslv -l - Verify the source and target LUNs for each LV
rmlvcopy - Remove the source copy of the LV from the ESS LUNs
lsvg -p - Verify that all the source ESS LUNs are free, with no data
reducevg - Remove the ESS source LUNs from the VGs and verify the removal
rmdev -dl hdisk# - Delete the device definitions from the host ODM
lsdev -Cc disk - Verify the device definitions are removed
(storage-side step) - In the ESS, unassign the LUNs from the host server
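A hedged end-to-end sketch of the mirror-and-split approach above (names are illustrative; hdisk2 = ESS source, hdisk10 = DS8000 target, and the copy is assumed to fit on the target):

  extendvg datavg hdisk10       # add the DS8000 LUN to the VG
  lslv -l datalv                # see where the LV currently sits
  mklvcopy datalv 2 hdisk10     # add a second copy of the LV on the target LUN
  syncvg -l datalv              # synchronize the new copy
  lsvg -l datavg                # LV state should show 'syncd', not 'stale'
  rmlvcopy datalv 1 hdisk2      # drop the copy on the ESS LUN
  lsvg -p datavg                # the source PV should now be empty
  reducevg datavg hdisk2        # remove it from the VG
  rmdev -dl hdisk2              # and delete its ODM definition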

59

2013 IBM Corporation

Storage Migration Methods

Validate

60

2013 IBM Corporation

Validate

It is important to validate that you have the same data and the same application functionality after the migration. You should make sure that the application runs with the new LUNs, that performance is still adequate, and that operations and scripts work with the new system.

61

2013 IBM Corporation

A sample validation list may include but not be limited to the following items:

Compile migration statistics
Prepare a report to highlight:
  What worked
  What didn't work
  Lessons learned
Share the report with all members of the migration team

These types of reports are critical in building a repeatable and consistent process through continuous process improvement - building on what worked and fixing or changing what didn't work. Further, documenting the migration process can help you train your staff and simplify or streamline the next migration you do, reducing both expense and risk.

62

2013 IBM Corporation

Storage Migration Methods

Overall Summary

63

2013 IBM Corporation

Migration Methodology Summary


Evaluate
  Analyze business impact
  Risks
  Business interviews
  Criticality of the data being moved
  Performance
  Migration types
  Key factors
  Multi-vendor environment requirements
  Application downtime

Plan
  Determine migration requirements
  Identify the existing environment
  Define the future environment
  Create the migration plan
  Develop design requirements
  Migration types
  Create the migration architecture
  Develop the test plan

Execute
  Obtain software tools and licenses
  Communicate the deployment plan
  Validate HW & SW requirements
  Customize migration procedures
  Install & configure
  Run pre-validation test
  Perform the migration
  Verify migration completion

Validate
  Run post-validation test
  Perform knowledge transfer
  Communicate project information
  Create a report on migration statistics
  Conduct a migration close-out meeting

64

2013 IBM Corporation

Thank you!

For your interest and attendance

2013 IBM Corporation

Storage Migration Methods

Questions and Answers

66

2013 IBM Corporation

Storage Migration Methods

Backup slides and extra reference materials provided separately

67

2013 IBM Corporation
