
Diagram: standard (thick) device hierarchy
Meta device (striped or concatenated), mapped and masked to the host (added to a Storage Group) -> Device -> Hyper -> Physical disk, protected as a RAID group (RAID5 3+1)

Diagram: thin device hierarchy
Thin meta device (striped or concatenated), mapped and masked to the host (added to a Storage Group) -> Thin devices (bound to pool) -> Thin pool -> Data devices -> Hyper -> Physical disk, protected as a RAID group (RAID5 3+1)

Volume Geometry
A Symmetrix Logical Volume is an emulation of a physical disk and uses similar terminology:
Sector = (16) 512-byte blocks
Track (R/W head) = (8) sectors
Cylinder = (15) tracks
Cylinder = 983,040 bytes (15 x 8 x 16 x 512). Volume sizes are typically specified in cylinders; in most instances MB, GB, and TB can also be specified.
9000 cylinder device = 8.8 GB (9000 x 15 x 8 x 16 x 512 = 8,847,360,000 bytes)
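As a quick sanity check of the geometry above, a minimal shell sketch (only the figures from this slide are used; nothing here is array specific):
# echo $((15 * 8 * 16 * 512))          # 983040 bytes per cylinder
# echo $((9000 * 15 * 8 * 16 * 512))   # 8847360000 bytes, ~8.8 GB for a 9000 cylinder device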

Fibre Channel Drives
60 FC 10K 600 GB drives (56 + 4 spares in the spare pool)
RAID 5 7+1, 28 devices per RAID group
7 RAID groups; 117 GB devices x 28 per RAID group = (196) 117 GB LUNs
~22.42 TB total capacity
Unconfigured space (~3.2 TB) used for boot devices (48 GB)
LUN size: 127988 cylinders

SATA Drives
60 7.2K 1 TB SATA drives (56 + 4 spares in the spare pool)
RAID 6 6+2, 32 devices per RAID group
7 RAID groups; 171 GB devices x 32 per RAID group = (224) 171 GB LUNs
~37.4 TB total configured capacity
LUN size: 186768 cylinders
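The LUN sizes and capacities quoted for both tiers can be checked the same way, using the 983040 bytes-per-cylinder figure derived earlier (device and cylinder counts are the ones on these two slides); a minimal shell sketch:
# echo $((127988 * 983040))                   # 125817323520 bytes, ~117 GB per FC LUN
# echo $((196 * 127988 * 983040 / 1024**4))   # ~22 TB total FC capacity across 196 LUNs
# echo $((186768 * 983040))                   # 183600414720 bytes, ~171 GB per SATA LUN
# echo $((224 * 186768 * 983040 / 1024**4))   # ~37 TB total SATA capacity across 224 LUNs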

Logical Configuration
Storage configuration is application specific. Logical device considerations*:
LUN size
RAID protection: RAID 1, RAID 5, RAID 6
Consistent size (based on application requirements)
LUN aggregation using meta devices (size, performance)
Virtual Provisioning (simplifies storage provisioning; thin pool, thin devices/fully allocated)
Local and remote replication requirements
* Existing best practices apply
Storage tiers based on drive type and protection
Storage templates
Storage policies

Zoning
Diagram: each ESX host (ESX1, ESX2, ... ESX9) has two HBAs, one attached to each fabric; the two fabrics connect to FA processors E, F, G and H (ports 0 and 1) on directors 7 and 8 of Engine 4.
Single Initiator - Single Target Zoning
Example zone names:
ESX1_HBA1_456_7_E0
ESX1_HBA2_456_8_E0
ESX1_HBA1_456_7_G0
ESX1_HBA2_456_8_G0
Each ESX farm will use the same set of FA ports
Cascaded initiator groups
Masking view for each host for boot; single masking view for shared LUNs
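A hedged sketch of how the single initiator - single target zones above could be built on a Brocade fabric (the alias names, WWPN placeholders, and the zone config name PROD_CFG are illustrative assumptions, not taken from this document):
# alicreate "ESX1_HBA1", "10:00:xx:xx:xx:xx:xx:xx"
# alicreate "VMAX456_7E0", "50:00:xx:xx:xx:xx:xx:xx"
# zonecreate "ESX1_HBA1_456_7_E0", "ESX1_HBA1; VMAX456_7E0"
# cfgadd "PROD_CFG", "ESX1_HBA1_456_7_E0"
# cfgsave
# cfgenable "PROD_CFG"
The second fabric would carry the matching ESX1_HBA2_456_8_E0 zone, keeping each zone to one initiator and one target port.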

Virtual Provisioning - FC
Create data devices (TDAT)
# symconfigure -sid xxx -cmd "create dev count=196, size=127988, config=raid5, data_member_count=7, attribute=datadev, emulation=fba;" -v commit -nop
Create thin devices (TDEV)
# symconfigure -sid xxx -cmd "create dev count=????, size=?????, config=tdev, emulation=fba;" -v commit -nop
Create thin pool
# symconfigure -sid xxx -cmd "create pool esx_fc_pool, type=thin, max_subs_percent=xxx;" -v commit -nop
Add data devices to thin pool
# symconfigure -sid xxx -cmd "add dev xxx:yyy to pool esx_fc_pool, type=thin, member_state=enable;" -v commit -nop
Bind thin devices to thin pool
# symconfigure -sid xxx -cmd "bind tdev xxx:yyy to pool esx_fc_pool;" -v commit -nop
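To confirm the pool and the bindings afterwards, something like the following should work (a sketch assuming the standard Solutions Enabler reporting commands; the pool name is the one created above):
# symcfg -sid xxx list -pool -thin -detail
# symcfg -sid xxx show -pool esx_fc_pool -thin -detail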

Virtual Provisioning - SATA


Create data devices (TDAT)
# symconfigure -sid xxx -cmd "create dev count=224, size=186768, config=raid6, data_member_count=6, attribute=datadev, emulation=fba;" -v commit -nop
Create thin devices (TDEV)
# symconfigure -sid xxx -cmd "create dev count=????, size=?????, config=tdev, emulation=fba;" -v commit -nop
Create thin pool
# symconfigure -sid xxx -cmd "create pool esx_sata_pool, type=thin, max_subs_percent=xxx;" -v commit -nop
Add data devices to thin pool
# symconfigure -sid xxx -cmd "add dev xxx:yyy to pool esx_sata_pool, type=thin, member_state=enable;" -v commit -nop
Bind thin devices to thin pool
# symconfigure -sid xxx -cmd "bind tdev xxx:yyy to pool esx_sata_pool;" -v commit -nop
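The thin devices bound to this pool can be reviewed afterwards (a sketch assuming the standard Solutions Enabler thin device report):
# symcfg -sid xxx list -tdev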

Storage allocation with auto-provisioning groups

Create storage group
# symaccess create -sid 456 -name ESX1_SG -type storage devs 100:11f
Create initiator group
# symaccess create -sid 456 -name ESX1_IG -type initiator -file esx1_hba.txt
Create port group
# symaccess create -sid 456 -name ESX1_PG -type port -dirport 7E:0,7G:0,8E:0,8G:0
Create masking view
# symaccess -sid 456 create view -name ESX1 -sg ESX1_SG -pg ESX1_PG -ig ESX1_IG
Verify masking view
# symaccess -sid 456 show view ESX1
Verify zoning (host logins)
# symaccess -sid 456 list logins -dirport 7E:0
Rename HBA
# symaccess -sid 456 -wwn xxxxxxxxxxxxxxxx rename ESX1/hba1
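The zoning slide above mentions cascaded initiator groups; one way that could look, sketched under the assumption that per-host groups such as ESX1_IG and ESX2_IG already exist and that the farm-level name ESX_FARM_IG is illustrative:
# symaccess create -sid 456 -name ESX_FARM_IG -type initiator
# symaccess -sid 456 -name ESX_FARM_IG -type initiator add -ig ESX1_IG
# symaccess -sid 456 -name ESX_FARM_IG -type initiator add -ig ESX2_IG
The single masking view for shared LUNs could then reference ESX_FARM_IG instead of the individual host groups.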

Migration from SVC to V-Max


Present the V-Max to the SVC
Rename the V-Max MDisk
Perform an image mode migration
Shut down the application (can be scheduled)
Present the V-Max devices directly to the host (zoning and LUN masking)
Restart the application
- Will IBM support this?
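On the SVC side, the rename and image mode steps might look roughly like this (a sketch assuming the standard SVC CLI; the object names app_vdisk, VMAX_MDISK_01, mdisk12 and VMAX_IMG_GRP are illustrative):
Identify and rename the new V-Max MDisk
# svcinfo lsmdisk
# svctask chmdisk -name VMAX_MDISK_01 mdisk12
Migrate the vdisk to image mode on the V-Max MDisk
# svctask migratetoimage -vdisk app_vdisk -mdisk VMAX_MDISK_01 -mdiskgrp VMAX_IMG_GRP
Monitor progress
# svcinfo lsmigrate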

Migration from SVC to V-Max Using Open Migrator


Also applies to physical server to VM (VMDK > 300 GB)
Install Open Migrator on the source host (may require a reboot)
Create a zone from the host to the V-Max on the existing fabric
Create a 1 TB target LUN (TDEV)
Create a masking view on the V-Max to present the TDEV to the source host
Begin the data copy from source (D drive) to target using Open Migrator
When the sync is complete, validate that the data is complete and consistent
Take the source host down (can be scheduled)
Perform a P-to-V of the C drive and bring the host up on ESX
Map and mask the copy of the D drive to ESX
Perform an RDM of the target LUN (D drive) to the source host, now a VM (see the sketch below)

+++ Supported, mitigated risk, allows for data validation, sparse-ability
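For the final RDM step, a hedged sketch using vmkfstools on the ESX host (the NAA identifier, datastore path, and mapping file name are placeholders; choosing a virtual (-r) or physical (-z) compatibility RDM depends on the design):
# vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/sourcehost/sourcehost_d_rdm.vmdk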

Migration from SVC to V-Max Using Open Migrator


Also applies to physical server to physical server
Install Open Migrator on the source host (may require a reboot)
Create a zone from the host to the V-Max on the existing fabric
Create a target LUN (TDEV) for both the C and D drives
Create a masking view on the V-Max to present the TDEVs to the source host
Begin the data copy from source (C and D drives) to the target devices using Open Migrator
When the sync is complete, validate that the data is complete and consistent
Switch over to the V-Max
Remove the zoning and masking for the original source devices

+++ Supported, mitigated risk, allows for data validation, sparse-ability

Migration from SVC to V-Max


UNIX host-based migration
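The slide does not name a specific volume manager; as one illustration, a host-based migration on a UNIX/Linux host is often done with volume-manager mirroring or pvmove, roughly as follows (a Linux LVM sketch; device and group names are placeholders):
# pvcreate /dev/mapper/vmax_lun1
# vgextend appvg /dev/mapper/vmax_lun1
# pvmove /dev/mapper/svc_lun1 /dev/mapper/vmax_lun1
# vgreduce appvg /dev/mapper/svc_lun1
The SVC device can then be unmasked and its zoning removed.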

Migration from SVC to V-Max Using Open Replicator


(The V-Max looks like an initiator to the SVC)
Set up zoning from the V-Max to the SVC (the V-Max looks like a host)
On the SVC, present the source devices to the V-Max
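A hedged sketch of the Open Replicator hot pull itself, assuming the standard symrcopy workflow; the pairs file maps a V-Max control device to the SVC source device WWN, and all values shown are placeholders:
Pairs file (svc_pairs.txt), one line per device pair:
symdev=xxx:0100 wwn=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# symrcopy create -copy -pull -hot -file svc_pairs.txt
# symrcopy activate -file svc_pairs.txt
# symrcopy query -file svc_pairs.txt
# symrcopy terminate -file svc_pairs.txt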

Migration from SVC to V-Max Using Open Replicator


(Using a swing Array)

Migration from SVC to V-Max

Diagram: host attached through McDATA (MCDT) and Brocade (BRCD) fabrics to directors 7 and 8 of V-Max Engine 4 (FA processors E, F, G and H, ports 0 and 1).
