
HPE 3PAR StoreServ

A Technical Overview of the HPE 3PAR Storage

Daniel Lim
WW Technical Marketing Engineering

Hewlett-Packard Enterprise The information contained herein is subject to change without notice
Table of Contents
- 3PAR StoreServ Overview: Constant Evolution, One Architecture
- 3PAR Architecture: Hardware (8k, 9k, 20k), 3-layer Abstraction, ASIC, Wide Striping, Autonomic Sets
- 3PAR HA and Persistency: HA Cage, Persistent Cache, Persistent Port, Persistent Checksum
- 3PAR Optimization: Dynamic Optimization
- 3PAR Leadership: Replication, Peer Persistence, Multi-Tenant, Security, Application Integrations, QoS
- 3PAR with VMware vSphere: Performance, Capacity, Availability, Management
- 3PAR Storage Federation: Online Import, Peer Motion
HPE 3PAR StoreServ History: Constant Evolution
- May 1999: 3PAR founded with 5 employees
- 2000: Bring-up of the Gen 1 3PAR ASIC
- 2001: 3PAR secures $100 million in third-round financing
- 2002: 3PAR S-Class; Thin Provisioning
- 2005: Dynamic Optimization, Virtual Domains, Recovery Manager
- 2006: E-Class midrange array
- 2007: 3PAR IPO; native iSCSI
- 2008: New T-Class with Gen 3 ASIC
- 2009: New F-Class, the first quad-controller midrange storage array; InForm OS 2.3.1 with many new features
- March 2010: Introduction of Adaptive Optimization and Recovery Manager for vSphere
- September 2010: 3PAR acquired by HP
- August 2011: New V-Class with Gen 4 ASIC; InForm OS 3.1.1; Peer Motion
- November 2011: V800 SPC-1 world record released: 450,213 IOPS
- December 2012: New StoreServ 7000; 3PAR OS 3.1.2; Peer Persistence; Persistent Ports
- June 2013: 3PAR OS 3.1.2 MU2; 400 GB SSD; 7450 AFA; Encryption; QoS; Recovery Manager for Hyper-V; native FCoE; SPC-1 result for the 7400 AFA: 258,078 SPC-1 IOPS
- December 2013: 3PAR OS 3.1.3 with performance enhancements, new drives, higher system limits and scalability
- June 2014: EMC-to-3PAR Online Import; 1.92 TB SSD; Inline Thin Dedup and Thin Clones
- September 2014: 3PAR OS 3.2.1; Peer Persistence support for Windows, FCoE and iSCSI; Adaptive Flash Cache; 480 GB and 1.92 TB SSDs; RAID 6 and FC protocol performance enhancements
- December 2014: New 7000c converging block, file and object access; 16 Gb FC ports; VMAX Online Import; new management console SSMC; Recovery Manager Central (vSphere)
- June 2015: New StoreServ 20000, flash and hybrid; extended QoS; 3.84 TB SSD; end-to-end T10-PI
- August 2015: New 8000 midrange and 20450 all-flash; Smart SAN for 3PAR
- November 2015: 20850 SPC-2 world record: 62,844.45 SPC-2 MBPS; Peer Persistence for RHEL, HP-UX and Oracle RAC
- December 2015: 3.84 TB FE SSD; RMC-S; Online Import for IBM XIV
- February 2016: New 20840; SSD Express Layout; additional OS support for Peer Persistence
- June 2016: 7.68 TB SSD; 24 PiB scalability
- September 2016: RMC 3.0 (RMC-O, File Persona); DAC support for IP, iSCSI and FCoE; 15.36 TB SSD; SSMC 3.0
- February 2017: Adaptive Data Reduction (ADR) for the 8k and 20k; 3DC Peer Persistence; all-inclusive SW licensing model; 8k combo IO adapters; Express Layout for all drives; SSMC 3.1; RMC 4.0

Eliminating distinctions between Midrange and Tier 1
Polymorphic Simplicity: Storage without Boundaries
- ONE architecture from midrange to high-end: 8200, 8400, 8440, 8450 (midrange flash) and 9450, 20450, 20800, 20840, 20850 (high-end), with 2 to 4 nodes in the midrange
- Common Tier-1 feature set and end-to-end interoperability - only from HPE
When performance matters:
- 208x0: 3.8 million IOPS @ <1 ms latency
- 20450: 1.8 million IOPS @ <1 ms latency
- 9450: 1.8 million IOPS @ <1 ms latency
- 8450: 1 million IOPS @ <1 ms latency
When scale matters: 20800 and 20840 with up to 6 PiB raw capacity; 8440 with up to 3 PiB raw capacity
When value matters: 8200 AFA starting at $19k

HPE 3PAR: The broadest flash array family in the industry
- Affordable: 75% data reduction guaranteed; $1.20/GB usable
- Lightning fast: 1M+ IOPS @ <0.6 ms (StoreServ 8000); 3M+ IOPS @ <0.6 ms (StoreServ 20000)
- Enterprise-class: 24 PB scale; 99.9999% availability
ONE OS. ONE Interface. ONE Feature Set. ANY Workload. FEDERATED Mobility.

HP 3PAR OS Evolution

HP 3PAR OS 3.1.3 (2014): HP 3PAR StoreServ 7440; max limits increase; performance optimizations; Priority Optimization: latency goal; MxN replication; Adaptive Sparing; 480 GB/920 GB SSDs with 5-year warranty; 1.2 TiB 10K and 4 TiB 7.2K HDDs; upgrade automation (SW/drives); Peer Motion: load balancing and clusters; resiliency improvements; SR-on-Node; Online Import for EMC

HP 3PAR OS 3.2.1 (2014): 7000 Converged models; Express Writes; FIPS 140-2 EKM; AO on VV sets; Peer Persistence for MSFT; VMware VVols; tunesys fixes; resiliency improvements; Thin Deduplication; File Persona; 1.92/3.84 TiB cMLC SSDs; performance alerts; Express Layout

HP 3PAR OS 3.2.2 (2015): HP 3PAR StoreServ 8000/20000 systems; StoreServ Management Console 2.2; support for higher scalability; Persistent Checksum; Remote Copy Asynchronous Streaming; Peer Persistence for RHEL; HP StoreOnce Recovery Manager Central; Adaptive Flash Cache 1.1; Storage Federation (4x4 multi-directional); Online Import for HDS; Priority Optimization: <1 ms latency goal; Adaptive Flash Cache enhancements; iSCSI VLAN tagging; VMware VVols higher scalability; autonomic rebalance enhancements; on-node System Reporter changes; Adaptive Optimization new options; LDAP improvements; SmartSAN support

HPE 3PAR OS 3.3.1 (2017): Compression; dedup enhancements; SSMC 3.1; SP 5.0; support for larger raw capacity; larger volume sizes; Express Writes for iSCSI and 16 Gb/s FC; Persistent Checksum with standard T10 DIF; Adaptive Sparing 2.0; Express Layout for all drives; self-identifying drives; 3DC Peer Persistence; on-node CLX; async streaming longer distance and RCIP support; Remote Copy scalability and performance improvements; Adaptive Flash Cache enhancements; File Persona 1.3; write cache for node-down in a 2-node system; combo cards; DC PCM support; replication for VMware VVols; VMware VVols over iSCSI; 2-factor authentication; security updates; improved serviceability; SmartSAN 2.0


3PAR wins in the Gartner Report 22 November 2016
Critical Capabilities for General-Purpose, Midrange Storage Arrays
HIGHEST SCORE in ALL USE CASES
Consolidation
OLTP
Server Virtualization and VDI
Analytics
Cloud

Gartner, Critical Capabilities for General-Purpose, Midrange Storage Arrays, 22 November 2016
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the
entire document. The Gartner document is available upon request.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology
users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the
opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties,
expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

HPE 3PAR StoreServ Awards and Recognitions
- Scale: StoreServ 20840, 20000, 8440. Speed: StoreServ 20000, 8450, 8440. Value: StoreServ 8400. Cost: StoreServ 8200.
- Gartner Critical Capabilities: #1 midrange overall use case; #3 high-end overall use case (High End Storage Arrays)
- IDC MarketScape 2015/16 all-flash arrays: 3PAR StoreServ 7450 on top (http://goo.gl/nU5oUv)
- CRN 2016 Products of the Year (https://goo.gl/XhIlHi)
- Best-in-Class ratings: StoreServ 20000 (2016/17); StoreServ 8440 and 7400 (2015/16); StoreServ 7400 (2014); 3PAR StoreServ 10000/10800
- 2015-16 All-Flash award (August 2016): 3PAR StoreServ value leader, StoreServ 20000 series (http://goo.gl/vkmHPe, https://goo.gl/Ovimi0)
- Storage Magazine 2014: #1 midrange storage vendor (3PAR StoreServ 7400); #1 all-flash storage array vendor (HPE 3PAR StoreServ 7450) (http://goo.gl/TxlKMe)
- Flash Memory Summit Best of Show: 3PAR StoreServ 20850
- "Making the right flash storage array buying decision": 3PAR 7450 (https://db.tt/RyaejcsQ)
- IT-as-a-Service leadership; proven flash-optimized; disruptive Tier-1 midrange (3PAR StoreServ 7000, 7440c)

3PAR StoreServ 8000 (2-node / 4-node values shown as "2n / 4n")

                                                      8200         8400                    8440                       8450
Controller nodes                                      2            2 / 4                   2 / 4                      2 / 4
Controller cache                                      64 GB        64 GB / 128 GB          192 GB / 384 GB            192 GB / 384 GB
Max Adaptive Flash Cache                              768 GB       768 GB / 1500 GB        1500 GB / 3000 GB          NA
Max local-node IOPS (100% 8 KB random read)           400K         400K / 800K             500K / 1M                  500K / 1M
Max node-distributed IOPS (100% 8 KB random read)     342K         342K / 659K             425K / 786K                425K / 786K
Max drive count, total / SSD only                     240 / 120    288 / 120 to 576 / 240  480 / 240 to 960 / 480     480 / 480
Max raw capacity (TiB), total / SSD only              1000 / 838   1200 / 838 to 2400 / 1676   2000 / 1675 to 4000 / 3351   1675 to 3352
Built-in 16 Gbit/s FC ports                           4            4 / 8                   4 / 8                      4 / 8
Built-in 1 GbE RJ45 ports (Remote Copy/File Persona) 1)   2        2 / 4                   2 / 4                      2 / 4
PCIe slots for optional port extension                2            2 / 4                   2 / 4                      2 / 4

Max LUN size for TPVV and CPVV: 64 TiB; for TDVV and compressed VV: 16 TiB (all models)

1) Only to be used by the File Persona software

HPE 3PAR StoreServ 8000 Hardware Building Blocks
- Base systems: 3PAR StoreServ 8200 (2 nodes, 4 FC ports, 24 SFF slots); 3PAR StoreServ 84x0 (2 nodes, 4 FC ports, 24 SFF slots); 3PAR StoreServ 84x0 (4 nodes, 8 FC ports, 48 SFF slots)
- Host adapters, choice of one adapter pair per node pair: 4-port 16 Gb/s FC adapter; 2-port 10 Gb/s iSCSI/FCoE; 2-port 10 Gb/s IP NIC for File Persona; 4-port 1 Gb/s IP NIC for File Persona; 4-port FC/IP combo adapter (2 x 16 Gb FC + 2 x 10 Gb IP NIC for File Persona or RCIP); 4-port iSCSI/IP combo adapter (2 x 10 Gb iSCSI + 2 x 10 Gb IP NIC for File Persona or RCIP). The combo adapters require 3PAR OS 3.3.1.
- Expansion drive enclosures: 3PAR StoreServ 8000 2.5-inch 2U SAS; 3PAR StoreServ 8000 3.5-inch 4U SAS
- Drives: SFF SAS HDDs and SSDs; LFF SAS HDDs and SSDs; choice of encrypted and non-encrypted drives
- Racks: HPE Enterprise Series rack (default) or customer rack (4-post, square hole, EIA standard, 19 in., from HPE or other suppliers)
- Service Processor: virtual (default) or physical (optional)
- DC power option: 48 V DC power cooling module, only available for field integration; 2 PCMs per enclosure required (base and expansion)

3PAR StoreServ 8000 Controller Enclosure
(Front view: drive slots. Rear view: controller nodes 0/2 below, 1/3 above.) Each controller node exposes a management port (MGMT), built-in FC ports (FC-1, FC-2), a Remote Copy port (RC-1), SAS drive ports (DP-1, DP-2), interconnect ports (Intr 0, Intr 1) and a PCIe HBA slot. The enclosure holds two 764 W power cooling modules (PCMs) with battery pack.

3PAR StoreServ 9450 Hardware Building Blocks
- Base enclosure: controller chassis with 2 or 4 nodes and a completely passive backplane
- Adapters: 4-port 12 Gb/s SAS drive HBA; 4-port 16 Gb/s FC HBA; 2-port 10 Gb/s iSCSI/FCoE CNA; 2-port 10 Gb/s NIC
- Drive enclosures: 2U 12 Gb/s SAS with 24 SFF 2.5-inch drive slots (shown with bezel)
- Drives: 2.5-inch SFF 12 Gb SAS dual-port SSDs
- Racks: HPE Enterprise Series rack (default) or third-party rack (4-post, square hole, EIA standard, 19 in., from HPE or other suppliers)
- Service Processor: virtual (default) or 1U physical SP (optional for the 9k)

3PAR StoreServ 20000 Hardware Building Blocks
- Base enclosure: 4-node or 8-node controller chassis with a completely passive backplane
- Adapters: 4-port 12 Gb/s SAS drive HBA; 4-port 16 Gb/s FC HBA; 2-port 10 Gb/s iSCSI/FCoE CNA; 2-port 10 Gb/s NIC
- Drive enclosures: 2U 12 Gb/s SAS with 24 SFF 2.5-inch drive slots (shown with bezel); 12-drive LFF 3.5-inch enclosure
- Drives: 2.5-inch SFF SAS HDDs/SSDs; 3.5-inch LFF SAS HDDs/SSDs
- Racks: HPE Enterprise Series rack or third-party rack (4-post, square hole, EIA standard, 19 in., from HPE or other suppliers)
- Service Processor: 1U physical SP

3PAR StoreServ 20450 Controllers
- Front: controller nodes 0 to 3, fans, LED indicators, battery backup units
- Rear: IO adapters, power supplies

3PAR SSD 7-Year Wear-Out Support
- Available for all 8000 and 20000 SSDs purchased after June 1st, 2015
- On top of the 5-year unlimited warranty
- The system must have been under HPE support coverage without interruption
- New 7-year 3PAR upgrade Care Packs available: 3-to-7-year and 5-to-7-year upgrades
- HPE provides a free replacement SSD if the remaining life drops below 5% within 7 years (SSD Wear Gauge)

SPC-2 MBPS Top Five as of 4-Nov-15
http://www.storageperformance.org/results/benchmark_results_spc2_top-ten
SPC-2 MBPS (aggregate MB/s):
- HPE 3PAR StoreServ 20850: 62,844 (world record)
- EMC VMAX 400K: 55,644
- HP XP7: 43,013
- Kaminario K2: 33,477
- Oracle ZFS Storage ZS4-4: 31,486
SPC-2 price-performance: $19.93 / $32.94 / $28.30

3PAR Virtualization Advantages

Traditional array:
- Each RAID level requires dedicated drives
- Dedicated spare disks required
- Limited single-LUN performance
- Dedicated pool drives
(Diagram: RAID 1, RAID 5 and RAID 6 sets, spares and a snapshot pool, each tied to its own drive group behind traditional controllers.)

3PAR StoreServ:
- All RAID levels can reside on the same drives
- Distributed sparing, no dedicated spare drives
- Built-in wide striping based on chunklets
- No pools or reservations required
(Diagram: all LUNs and spare chunklets spread across the same physical drives behind the 3PAR StoreServ controllers.)

3PAR High Availability
Spare Disk Drives vs. Distributed Sparing
- Traditional arrays: few-to-one rebuild onto a dedicated spare drive, causing hotspots and long rebuild exposure
- 3PAR StoreServ: spare chunklets enable many-to-many rebuilds; parallel rebuilds complete in less time

HP 3PAR ASIC
It makes the difference in real-world environments
- Most arrays: generic x64 processors carry the full CPU load of RAID rebuilds, data deduplication, snapshots, thin reclamation, sub-LUN tiering, mixed workloads, thin provisioning, replication, RAID calculations, and inter-node and cache IO
- HP 3PAR: Intel x64 processors plus the 3PAR Gen5 ASIC; the ASIC offloads work such as deduplication, RAID calculations, and inter-node and cache IO, leaving the CPUs with a much smaller load

The traditional way of setting up a Storage Array
High chance of fragmentation and hot-spots
- "For my five servers I need five 2 TB LUNs with average performance."
- "OK, let's build a RAID 5 pool consisting of five RAID 5 4+1 groups. Ah, we also need some spares, of course. Now I am carving five 2 TB LUNs out of the pool and presenting them to you."
- "Listen, I have two more servers. They also need 2 TB LUNs, but with very high write performance."
- "Okie-dokie, I am going to have more drives installed, will build a RAID 1 pool and create new LUNs which will have higher write performance."
- "I also need to create snapshots of my LUNs."
- "No problem, I'll have another set of drives installed to create a snapshot pool."
(Diagram: RAID 5 groups, RAID 1 groups, spares and a snapshot pool, each on dedicated drives behind traditional controllers.)

Why are Chunklets so Important?
Think of many virtual drives on a single physical drive.
Ease of use and drive utilization:
- Array managed by policies, not by administrative planning
- The same drives can service all RAID types at the same time: RAID 10; RAID 50 (2:1 to 8:1); RAID 60 (4:2*, 6:2, 8:2*, 10:2, 14:2*)
- Transparent mobility between drives and RAID types thanks to Dynamic and Adaptive Optimization
Performance:
- Enables wide striping across hundreds of drives
- Avoids hot-spots
- Autonomic data restriping after disk installations
High availability, selectable per CPG:
- HA Drive/Magazine: protects against drive/magazine failure (industry standard)
- HA Cage: protects against a cage failure (complete drive enclosure)
* Preferred, performance-enhanced RAID 6 set sizes
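The chunklet idea above can be sketched in a few lines of Python. This is an illustrative model, not HPE code: the drive sizes, set sizes and allocation policy are assumptions, chosen only to show how one pool of drives can serve RAID 10, RAID 50 and RAID 60 sets at the same time while spreading usage widely.

```python
# Toy model of chunklet-based wide striping (illustrative, not HPE code).
from collections import defaultdict

CHUNKLET_GIB = 1  # 3PAR carves physical drives into 1 GiB chunklets


class Drive:
    def __init__(self, name, size_gib):
        self.name = name
        self.free = size_gib // CHUNKLET_GIB  # free chunklets on this drive


def make_raid_set(drives, set_size, raid_type):
    """Take one chunklet from each of `set_size` distinct drives,
    preferring the drives with the most free chunklets (wide striping)."""
    picks = sorted(drives, key=lambda d: d.free, reverse=True)[:set_size]
    if len(picks) < set_size or any(d.free == 0 for d in picks):
        raise RuntimeError("not enough chunklets for " + raid_type)
    for d in picks:
        d.free -= 1
    return (raid_type, [d.name for d in picks])


# Eight hypothetical 600 GiB drives serving three different RAID types.
drives = [Drive(f"pd{i}", 600) for i in range(8)]
sets = [make_raid_set(drives, 6, "RAID50 5+1"),
        make_raid_set(drives, 6, "RAID60 4+2"),
        make_raid_set(drives, 2, "RAID10 1+1")]

usage = defaultdict(int)
for _, members in sets:
    for name in members:
        usage[name] += 1
print(sets)         # each set spans distinct drives
print(dict(usage))  # all RAID levels coexist on the same small pool
```

Because every set draws 1 GiB chunklets rather than whole drives, the remaining free chunklets on each drive also double as distributed spare space, which is the mechanism behind the many-to-many rebuilds described earlier.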

The 3PAR way of setting up a Storage Array
Optimal use of resources
- "For my five servers I need five 2 TB LUNs with average performance."
- "OK, easy! I just create 5 VVs in a RAID 5 CPG striped across all available drives and present the VLUNs to you. Thanks to distributed sparing I don't have to waste drives for sparing."
- "Listen, I have two more servers. They also need 2 TB LUNs, but with very high write performance."
- "No problem, I just create two more VVs in a RAID 1 CPG striped across the very same drives. These new LUNs have higher write performance."
- "I also need to create snapshots of my LUNs."
- "Easy, I still have enough space on my set of physical drives to hold your snapshots."
(Diagram: all VVs wide-striped across the same physical drives behind the 3PAR controller nodes.)
3PAR Virtualization: the Logical View
With three drive types. Physical drives are formatted into 1 GB chunklets, and Logical Disks (LDs) are created autonomically from them:
- SSDs form SSD LDs
- Fast Class drives form FC LDs
- Nearline drives form NL LDs
CPGs group the LDs; Virtual Volumes are created from the CPGs and exported as LUNs (the user-initiated steps). AO (Adaptive Optimization) moves data between the SSD, FC and NL tiers.
3PAR Virtualization Concept
End to end on a 4-node system:
1. Physical drives (PDs) are automatically formatted into 1 GB chunklets (disk initialization).
2. Chunklets are bound together to form Logical Disks (LDs) in the format defined in the CPG policies; the CPG defines RAID level (e.g. RAID 5 3+1), step size, set size and redundancy.
3. Virtual Volumes are built striped across all LDs of all nodes, from all drives defined in a particular CPG (autonomic wide striping across all Logical Disks).
4. Virtual Volumes are exported as LUNs to the server and accessed across multiple active-active paths (HBAs, fabrics, nodes) via multipathing.
3PAR Autonomic Sets
Simplify provisioning. Example: a cluster of 5 VMware vSphere servers and 10 volumes (V1-V10).

Traditional storage (individual volumes):
- Initial provisioning of the cluster: requires 50 provisioning actions (1 per host-volume relationship)
- Add another host/server: requires 10 provisioning actions (1 per volume)
- Add another volume: requires 5 provisioning actions (1 per host)

Autonomic 3PAR storage (Host Set and Volume Set):
- Initial provisioning of the cluster: add the hosts to the Host Set, add the volumes to the Volume Set, and export the Volume Set to the Host Set
- Add another host/server: just add the host to the Host Set
- Add another volume: just add the volume to the Volume Set
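The provisioning-action counts above reduce to simple arithmetic, sketched below with the slide's numbers (5 hosts, 10 volumes):

```python
# Provisioning-action arithmetic for the autonomic-sets example.
hosts, volumes = 5, 10

# Traditional storage: one export action per host-volume pair.
traditional_initial = hosts * volumes   # 50 actions
traditional_add_host = volumes          # 10 actions, one per existing volume
traditional_add_volume = hosts          # 5 actions, one per existing host

# Autonomic sets: populate a host set and a volume set, export the
# volume set to the host set once, then grow either set with one action.
autonomic_initial = hosts + volumes + 1  # 16 actions
autonomic_add_host = 1                   # just add the host to the host set
autonomic_add_volume = 1                 # just add the volume to the volume set

print(traditional_initial, autonomic_initial)  # 50 16
```

The traditional cost grows with the product of hosts and volumes, while the autonomic cost grows with their sum, which is why the gap widens as clusters grow.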

3PAR High-Availability and
Persistency Features

3PAR Persistency features
- HA Cage / Enclosure: RAIDlet groups are built across drive cages for any RAID level, so a volume (e.g. RAID 50) survives the loss of a complete cage
- Persistent Cache: write-cache re-mirroring on path or controller loss
- Persistent Ports: transparent handling of path or controller loss, with no MPIO failover on the host
- Persistent Checksum: T10-PI end-to-end data protection from the host HBA through the SAN switch and the 3PAR front-end and back-end to the drives

3PAR High Availability
Guaranteed drive-enclosure (drive-cage) availability if desired
- Traditional arrays: enclosure-dependent RAID; RAID groups live within one enclosure, so an enclosure (cage) failure might mean no access to data
- 3PAR StoreServ: enclosure-independent RAID; RAIDlet groups for any RAID level (e.g. RAID 50 and RAID 10 volumes) are built across enclosures, so data access is preserved with HA Enclosure (Cage); user selectable per CPG

3PAR High Availability
Write-Cache Re-Mirroring
- Traditional midrange arrays mirror the write cache between two controllers; losing one controller either results in poor performance due to write-through mode or risks write data loss
- 3PAR StoreServ provides Persistent Write-Cache Mirroring: the write cache stays on thanks to re-mirroring across the remaining nodes, so there is no write-through mode and performance stays consistent; standard behavior on all 4-, 6- and 8-node systems
Multipathing of traditional arrays
Behavior on path loss and controller maintenance or loss:
- Path and controller loss have to be handled by MPIO on the host
- Depending on the settings, a failover can take up to 60 seconds
- Requires regular maintenance/patching of MPIO
- A path loss requires an MPIO path failover, which can be lengthy or can even fail; a controller maintenance or loss requires an MPIO failover of all of that controller's paths
3PAR Persistent Ports
Behavior of 3PAR arrays on path loss and controller maintenance or loss:
- No user intervention required
- In FC SAN environments, all paths stay online on loss of signal on an FC path, during node maintenance, and in case of a node failure
- For FC, iSCSI and FCoE deployments, all paths stay online during node maintenance and in case of a node failure
- Each front-end port has a native port ID (e.g. 0:0:1 and 0:0:2 on controller 0) and serves as guest for its partner's IDs (1:0:1 and 1:0:2); on failover the partner port assumes the failed port's identity, so the server never sees the swap of the 3PAR port ID and no MPIO path failover is required; all server paths stay online
Read more in the Persistent Ports whitepaper.
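The native/guest port-ID idea can be illustrated with a toy model. This is not HPE code: the port IDs match the diagram, and the pairing logic is a deliberate simplification of what the array does with NPIV under the hood.

```python
# Toy model of Persistent Ports: every front-end port has a pre-designated
# partner on the other node that can assume its identity, so hosts never
# see a path disappear. (Illustrative only, not HPE code.)
partner = {"0:0:1": "1:0:1", "0:0:2": "1:0:2",
           "1:0:1": "0:0:1", "1:0:2": "0:0:2"}


def visible_ports(failed_node: str) -> dict:
    """Map each host-visible port ID to the physical port currently serving it."""
    serving = {}
    for port in partner:
        node = port.split(":")[0]
        # A port on the failed node is served as a guest ID by its partner.
        serving[port] = partner[port] if node == failed_node else port
    return serving


# Node 0 goes down for maintenance: its IDs 0:0:1 and 0:0:2 come up as
# guest IDs on node 1's ports, so all four host paths stay online.
print(visible_ports("0"))
```

The key point the model captures is that the set of host-visible port IDs never shrinks; only the physical port behind each ID changes, which is why MPIO on the server has nothing to fail over.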

3PAR Persistent Checksum
Avoid silent data corruption by ensuring end-to-end data integrity.
- Challenge: media and transmission errors can occur in any component of the I/O stack 2); undetected data corruption can have unpredictable consequences
- HPE solution: industry-standard end-to-end data protection via T10-PI (Protection Information); data plus T10-PI travels from the host HBA 1) through the SAN switch, the 3PAR front-end adapter and the 3PAR back-end adapter down to the drives; currently supported on FC only
- Benefit: end-to-end protection for all data stored on 3PAR StoreServ systems; completely agentless, OS and application agnostic; supported on all Gen5 ASIC based 3PAR StoreServ systems (8000 and 20000)
1) For a list of supported T10-PI HBAs see HPE SPOCK, Other Hardware, Host Bus Adapters
2) Also read the CERN data integrity report

3PAR Persistent Checksum
An extra 8-byte Data Integrity Field (DIF) is added to the standard 512-byte disk block:
- 2-byte Guard field: CRC of the data block
- 2-byte App field: application-specific tag
- 4-byte Ref field: least significant 32 bits of the Logical Block Address (LBA)
Byte layout: 0-511 user data, 512-513 Guard, 514-515 App, 516-519 Ref.
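The 520-byte protected block can be sketched in Python. The field layout and the CRC-16/T10-DIF polynomial (0x8BB7) follow the T10-PI standard; the helper names and the app-tag default are illustrative, not 3PAR internals.

```python
# Sketch of a T10-PI protected block: 512 bytes of data + 8-byte DIF.
import struct


def crc16_t10dif(data: bytes) -> int:
    """CRC-16/T10-DIF: poly 0x8BB7, init 0, no reflection, no final XOR."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def protect_block(user_data: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Append the DIF: 2-byte guard CRC, 2-byte app tag, 4-byte ref tag (low 32 LBA bits)."""
    assert len(user_data) == 512
    guard = crc16_t10dif(user_data)
    dif = struct.pack(">HHI", guard, app_tag, lba & 0xFFFFFFFF)
    return user_data + dif  # 520 bytes on the wire


def verify_block(block: bytes, expected_lba: int) -> bool:
    """Recompute the guard and check the ref tag, as each hop in the path does."""
    data, dif = block[:512], block[512:]
    guard, _app, ref = struct.unpack(">HHI", dif)
    return guard == crc16_t10dif(data) and ref == (expected_lba & 0xFFFFFFFF)


block = protect_block(b"\x00" * 512, lba=1234)
print(len(block), verify_block(block, 1234))
```

Each component in the I/O path (HBA, switch, front-end, back-end, drive) can recompute the guard and compare the ref tag against the expected LBA, which is how a flipped bit or a misdirected write is caught before it reaches the application.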

Technical deep-dive and validation
http://www.demartek.com/Reports_Free/Demartek_HPE_3PAR_Persistent_Checksums_Evaluation_2016-11.pdf
- Uncorrectable bit errors on drives and in the FC SAN are more likely with larger drives and higher IOPS
- Persistent Checksum effectively protects against errors within the entire I/O path
- No impact on storage bandwidth or I/O response times
(Figure: application failure caused by transmission errors)

All-inclusive 3PAR Software Suites
Frame-based licensing dramatically simplifies licensing.
- All-inclusive Single-System Software (frame license bundled with the base controllers): OS Suite; Adaptive Data Reduction (ADR); File Persona (FP); Virtual Copy (VC); Dynamic Optimization (DO); Adaptive Optimization (AO); Priority Optimization (PO); Virtual Domains; Virtual Lock; Online Import; Recovery Manager Central (RMC); Recovery Manager Central App Suites; all legacy Recovery Managers; Smart SAN
- All-inclusive Multi-System Software (optional frame license, 1 x per array): Remote Copy (RC); Peer Motion (PM); Peer Persistence (PP); Cluster Extension (CLX); Storage Federation
- Data Encryption (optional frame license, 1 x per array): Data at Rest Encryption (requires self-encrypting drives, SED)

3PAR Optimization
Dynamic Optimization

3PAR Dynamic and Adaptive Optimization
Manual or automatic tiering:
- Dynamic Optimization: user-initiated movement of complete LUNs between tiers (e.g. from CPG 1 in Tier 0 to CPG 3 in Tier 2)
- Adaptive Optimization: policy-driven sub-LUN block movements of regions between tiers (CPG A, B, C)

Storage Tiers and 3PAR Dynamic Optimization
Tiers combine a drive type (SSD, Fast Class, Nearline) with a RAID level (RAID 1, RAID 5, RAID 6): performance decreases and cost per usable TB drops from SSD RAID 1 down to Nearline RAID 6. In a single command, Dynamic Optimization non-disruptively optimizes and adapts cost, performance, efficiency and resiliency.


3PAR Dynamic Optimization Use Cases
Deliver the required service levels for the lowest possible cost throughout the data lifecycle:
- 10 TB net on RAID 10 with 300 GB FC drives, then 10 TB net on RAID 50 (3+1) with 600 GB FC drives (~50% savings), then 10 TB net on RAID 60 (14+2) with 2 TB NL drives (~80% savings)
Accommodate rapid or unexpected application growth on demand by freeing raw capacity:
- Converting 20 TB raw from RAID 10 (10 TB net) to RAID 50 frees 7.5 TB of net capacity on demand
How to tune a Volume with Dynamic Optimization

Tuning Example with Dynamic Optimization
Tune a Virtual Volume from a 4-drive NL RAID 5 CPG to a 16-drive FC RAID 1 CPG.
(Iometer screenshots before the tune, at tune start and finish, and after the tune show the performance improvement.)

Online Virtual Volume conversion with Dynamic Optimization
- Part of Dynamic Optimization
- Non-disruptively convert VVs between full and thin provisioning, within or across CPGs, including thin VVs with dedup and/or compression enabled
- The original volume can either be deleted, saved with a prefix or suffix, or saved with a new name
Note: volumes to be migrated must not be in a Remote Copy relationship.

3PAR Get 6-Nines Guarantee
99.9999% data availability, guaranteed* - the industry's first 6-nines guarantee across midrange, enterprise and all-flash storage.
Products covered:
- All 4-node 7000 and 10000 systems
- 8000 and 20000 are pending qualification
Program details*:
- 6-nines availability guarantee on the systems noted above
- Remedy: HPE will work with the customer to resolve their issue and fund 3 additional months on the customer's mission-critical support contract
- Length of guarantee: the first 12 months after the 3PAR storage system is deployed
* Complete program terms and conditions on the Get 6-Nines portal page

Online Firmware Update
Fast and reliable, non-disruptive to your business applications: one node after the other gets updated, and the update can be performed under IO load.
Tests performed by ESG in the following environment:
- VMware vSphere 5.1 running on 4 BL460 blades
- 3PAR StoreServ 7450 4-node array
- OLTP workload of 144,000 IOPS generated with IOMETER
The actual FW update: initially each of the 4 nodes served 36,000 IOPS; the nodes were then updated one at a time. While one node was being updated, the 3 remaining nodes served 48,000 IOPS each, and array performance stayed at 144,000 IOPS the whole time.
Find the ESG whitepaper here: https://db.tt/6Gv5nvzg

3PAR Leadership: Data Availability and Protection
Simplify HA, DR, backup and restore:
- 3PAR Remote Copy synchronous replication: high availability across metro distances (up to 10 ms RTT)
- 3PAR Remote Copy asynchronous replication: disaster recovery across continental distances (up to 150 ms RTT)
- RMC Express Protect: back up your 3PAR snapshots directly to StoreOnce with Recovery Manager Central (currently available for vSphere and MS SQL)

3PAR Leadership: Storage Federation

3PAR Peer Motion: Data mobility
Respond to growth and changing demands: map and move workloads to the right resources with Peer Motion

3PAR Online Import: Built-in tech refresh
Do-it-yourself lifecycle data migration from 3PAR, EVA, EMC CX, VNX, VMAX, HDS USP, VSP and IBM XIV to 3PAR
A 1-year license is included with every new 3PAR

3PAR Peer Persistence: Federated High Availability
Never lose data access across metropolitan distances with Peer Persistence

3PAR Leadership: Multi-Tenant
The Tier-1 Storage for Utility Computing

Shared: Massive consolidation
Storage can be used across many different applications and lines of business

Resilient: Buy fewer servers
Double VM density per server, guaranteed*

Secure: Virtual Private Array
Secure segregation of storage while preserving the benefits of massive parallelism

* As compared to your legacy storage array. See the Get Virtual Guarantee program (http://www.hp.com/storage/getvirtual) for more details

3PAR Leadership: Security
The Tier-1 Storage

Secured management
IPv6 and SSH capabilities, Web API, LDAP user authentication

Encrypted: Protect your precious data
All data securely encrypted using AES-256, protected against theft and misuse

Compliant: Virtual Lock
Create snapshots that cannot be modified or deleted up to a defined date

3PAR Leadership: Integrated Application Recovery Solutions

Recovery Manager Central for vSphere and MS SQL
Create application-consistent 3PAR snapshots for local test and recovery
Move 3PAR snapshots directly to StoreOnce for recovery from RMC or vCenter
On subsequent backups only delta blocks need to be copied to StoreOnce
New: vSphere element recovery from StoreOnce

Recovery Manager for MS Exchange & Hyper-V
Fast point-in-time snapshots for backups of databases
Recovery from snapshot
Exporting of database backups to other hosts

Recovery Manager for Oracle
Fast point-in-time snapshots for backups of databases
Increases efficiency of recoveries
Allows cloning and exporting of new databases

3PAR Leadership: Priority Optimization
Protect your mission-critical applications

Assure tenant/application-level QoS by assigning priorities, targets and caps to VV sets and/or Virtual Domains:
Max limit: cap on IOPS or bandwidth per object
Min goal: floor for IOPS or bandwidth per object
Latency goal: service-level target for an object
Priority level: throttle order for objects

Example (3PAR StoreServ sized for 100,000 IOPS):
Tenant 1 (Oracle DB), Virtual Domain 1: Priority = High, max 80,000 IOPS, latency goal = 8ms
  VVset 1 (Oracle): values inherited from the Domain
Tenant 2 (ESX Cluster), Virtual Domain 2: Priority = Normal, max 40,000 IOPS, no latency goal
  VVset 2 (VM 1): Min 8,000 / Max 20,000 IOPS, Priority = Normal
  VVset 3 (VM 2): Min 6,000 / Max 15,000 IOPS, Priority = Normal
  VVset 4 (VM 3): Min 5,000 / Max 18,000 IOPS, Priority = Low

Read the Evaluator Test Validation: http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA5-6504ENW&cc=us&lc=en

3PAR Priority Optimization

Example:
Application red is critical and is being under-serviced
Application blue is well behaved
Application green is taking more resources than it should and is starving red

Enabling QoS:
By throttling the VV-set of application green to 4,000 IOPS, resources for application red are freed up

(Charts "Basic max IOPS capping" and "QoS enabled": IOPS and IO latency in ms per application over time, before and after QoS is enabled)

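The throttling effect in the example above can be sketched with a toy proportional-share model. This is purely illustrative: the `allocate` helper is invented for this sketch and is not 3PAR's actual scheduler, but it shows how capping one VV-set frees IOPS for a starved application.

```python
# Illustrative sketch: how a per-VVset max-IOPS cap frees capacity
# for a starved application (not 3PAR's real QoS algorithm).

def allocate(demands, capacity, caps=None):
    """Share `capacity` IOPS among apps, honoring optional per-app caps.

    Each app's effective demand is min(demand, cap); when total effective
    demand exceeds capacity, every app is scaled down proportionally.
    """
    caps = caps or {}
    effective = {a: min(d, caps.get(a, d)) for a, d in demands.items()}
    total = sum(effective.values())
    scale = min(1.0, capacity / total) if total else 0.0
    return {a: round(d * scale) for a, d in effective.items()}

demands = {"red": 6000, "blue": 2000, "green": 8000}

# Without QoS, greedy "green" squeezes critical "red" well below its demand.
before = allocate(demands, capacity=10000)
# Capping green's VV-set at 4,000 IOPS frees headroom for red.
after = allocate(demands, capacity=10000, caps={"green": 4000})

print(before)  # red throttled by green's demand
print(after)   # red recovers most of its 6,000 IOPS demand
```

Running the sketch, red's share rises from 3,750 to 5,000 IOPS once green is capped, mirroring the behavior shown in the slide's charts.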
3PAR Adaptive Cache
Self-adapting cache: 50 to 100% for reads / 50 to 0% for writes

(Chart: MBs of cache dedicated to writes per node versus the % of read IOPS from the host, plotted for host loads of 20K, 30K and 40K IOPS; the cache dedicated to writes shrinks toward zero as the read share of the workload grows)

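The self-adapting split can be approximated with a simple linear model. This assumes, purely for illustration, that the write share of cache scales from 50% on a pure-write workload down to 0% on a pure-read workload; the real 3PAR heuristic is more involved and, as the chart shows, also depends on host load.

```python
def cache_split(read_pct):
    """Illustrative linear model of the adaptive cache split.

    read_pct: share of host IOPS that are reads (0-100).
    Returns (read_share, write_share) of node cache: writes get 50%
    of cache on a pure-write workload and 0% on a pure-read workload.
    """
    if not 0 <= read_pct <= 100:
        raise ValueError("read_pct must be between 0 and 100")
    write_share = 0.5 * (1 - read_pct / 100)
    return 1 - write_share, write_share

for pct in (0, 50, 100):
    r, w = cache_split(pct)
    print(f"{pct:3d}% reads -> {r:.0%} read cache / {w:.0%} write cache")
```

At 0% reads the cache sits at the default 50/50 split, and by 100% reads the entire cache serves reads, matching the headline "50 to 100% for reads / 50 to 0% for writes".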
3PAR addresses the changed requirements!

Customer requirements and how 3PAR meets them:

Performance:
Scale-out architecture
SSD optimized: 3.2M IOPS at <1ms latency

Availability:
Clustered node architecture
Persistent Cache and Persistent Ports
Peer Persistence

Agility:
One architecture and OS for all
Multi-level virtualization
Adaptive and Dynamic Optimization

Efficiency:
ASIC-driven ultra-fast integrations
Thin Technologies: 50%+ savings
Thin Deduplication and Clones: another 50%+ savings

Scalability:
Scale-out architecture
Federated storage

Management:
Tier-1 capabilities with midrange simplicity
Autonomic: 90% less time needed

3PAR Block IO Connectivity

SAN:
FC with optical cables
iSCSI with optical or DAC cables
FCoE with optical cables

HPE Flat SAN (BladeSystems with VC FlexFabric):
FC with optical cables
iSCSI with optical or DAC cables

Direct attach:
FC with optical cables
iSCSI: support planned for January 2016

HPE 3PAR is the long-standing leader of capacity efficiency
Hardware-accelerated data reduction technologies

Thin Provisioning: Don't store free space
Zero Detect: Remove zeros inline
Deduplication: Prevent storing duplicate data
Compression: Reduce data footprint
Data Packing: Pack odd-sized data together

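A minimal software sketch of how zero detect and deduplication combine to shrink the stored footprint. This is illustrative only: 3PAR performs these steps inline in the ASIC on 16KB pages, and the `reduce_pages` helper below is invented for this example.

```python
import hashlib

PAGE = 16 * 1024  # 16KB pages, the granularity 3PAR works at

def reduce_pages(data: bytes):
    """Split data into pages, drop all-zero pages, dedupe by content hash.

    Returns (store, refs): the set of unique physical pages plus, per
    logical page, either None (zero page, nothing stored) or its hash.
    """
    store = {}
    refs = []
    for off in range(0, len(data), PAGE):
        page = data[off:off + PAGE]
        if not any(page):                  # zero detect: store nothing
            refs.append(None)
            continue
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)     # dedup: keep one copy per hash
        refs.append(digest)
    return store, refs

# Six logical pages written: two zero pages, two identical 'A' pages,
# and two distinct pages -- only three physical pages get stored.
data = (b"\x00" * PAGE) * 2 + (b"A" * PAGE) * 2 + b"B" * PAGE + b"C" * PAGE
store, refs = reduce_pages(data)
print(f"{len(refs)} logical pages -> {len(store)} physical pages")
```

In this toy run, 96KB of logical writes shrink to 48KB of stored pages; on the array, compression and data packing would then reduce the surviving pages further.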
Advantages of 3PAR StoreServ with VMware vSphere

Superior Architecture:
Modern architecture designed specifically to support virtualization
System-wide striping of VMs across the array, ASIC-accelerated storage operations
Native data mobility features with Peer Motion & Peer Persistence
Guaranteed double VM density1
Guaranteed 50% less capacity requirements1

Best Integration:
Silicon-level integration with vStorage VAAI APIs
Consistent day-one support of any new VMware integration (1st with VASA, UNMAP & VVols)
Rated best in class for VMware integration by DCIG
VMware design partner on new integration technologies (VVols)
Full support for all VMware points of integration (VAAI, VASA, STM, etc.)

Simplified Management:
Support for VVols enables policy-based management and automatic provisioning of VMs
8x faster to deploy VMs than competitive solutions2
Only vendor to offer a single vCenter plug-in that supports servers, storage & networking
Custom 3PAR-specific dashboards for vRealize Operations Manager

1 As compared to your legacy storage array. See the HP 3PAR Get Thin Guarantee (http://www.hp.com/storage/getthin) and Get Virtual Guarantee (http://www.hp.com/storage/getvirtual) programs for more details
2 Based on a November 2013 analysis by HP comparing the provisioning time of 4 LUNs to a cluster of VMware ESX hosts and creation of a consistency-group snapshot of the four LUNs, using similar steps and host configuration, for HP 3PAR StoreServ 7400 vs. EMC VNX5700
56
Hewlett-Packard Enterprise The information contained herein is subject to change without notice
VMware Space Reclamation
With vSphere 5 and 3PAR OS 3.1.x: 3PAR with Thin Persistence

Transparent:
Thin Persistence allows manual reclamation of VMware space with T10 UNMAP support in vSphere 5.0 and 3PAR OS 3.1.x, using the `vmkfstools -y` command.*

Granular:
Reclamation granularity is as low as 16KB (ASIC zero detect), compared to 768KB with EMC VMAX or 42MB with HDS VSP
Freed blocks of 16KB of contiguous space are returned to the source volume
Freed blocks of 128MB of contiguous space are returned to the CPG for use by other volumes

(Diagram: four 25GB VMDKs on a 100GB thin-provisioned datastore; after deletions and rapid, inline T10 UNMAP via `vmkfstools -y`, the array's physical allocation drops from 55GB to 20GB, so the VMDKs finally consume only ~20GB rather than 100GB)

* The initial vSphere 5.0 implementation reclaimed space automatically. However, VMware detected a flaw which can cause major performance issues with certain non-HPE arrays and therefore disabled automatic T10 UNMAP; see the VMware KB article.

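The two-tier return policy above can be modeled in a few lines. This is an illustrative sketch, not 3PAR's implementation: freed 16KB blocks stay with the source volume unless they complete an aligned 128MB region, which is handed back to the CPG. The `classify_freed` helper is a name invented for this example.

```python
BLOCK = 16 * 1024                      # reclamation granularity: 16KB
REGION = 128 * 1024 * 1024             # CPG return granularity: 128MB
BLOCKS_PER_REGION = REGION // BLOCK    # 8192 blocks per region

def classify_freed(freed_blocks):
    """Split freed 16KB block indices into CPG regions vs volume blocks.

    A 128MB region goes back to the CPG only when every one of its
    8192 blocks has been freed; partially freed regions stay with
    the source volume.
    """
    freed = set(freed_blocks)
    candidate_regions = {b // BLOCKS_PER_REGION for b in freed}
    to_cpg = [r for r in sorted(candidate_regions)
              if all(r * BLOCKS_PER_REGION + i in freed
                     for i in range(BLOCKS_PER_REGION))]
    cpg_blocks = {b for r in to_cpg
                  for b in range(r * BLOCKS_PER_REGION,
                                 (r + 1) * BLOCKS_PER_REGION)}
    return to_cpg, sorted(freed - cpg_blocks)

# Region 0 is fully freed and returns to the CPG; two stray blocks
# in region 1 stay with the source volume for reuse there.
freed = list(range(BLOCKS_PER_REGION)) + [BLOCKS_PER_REGION + 5,
                                          BLOCKS_PER_REGION + 6]
to_cpg, to_volume = classify_freed(freed)
print(f"regions to CPG: {to_cpg}, blocks kept by volume: {to_volume}")
```

The point of the two tiers is visible in the output: only a completely freed, aligned 128MB region benefits other volumes via the CPG, while smaller freed runs still save space within the source volume.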
Peer Persistence: 3PAR Storage & VMware vSphere
Never lose access to your volumes

What does it provide?
High Availability across data centers
Automatic or manual transparent LUN swap
Transparent VM vMotion between data centers

How does it work?
Based on 3PAR Remote Copy and vSphere ALUA
Presents the primary LUN as active and the secondary as standby
Automated LUN swap arbitrated by a Quorum Witness (QW, a Linux ESX VM on a 3rd site)

Supported environments:
ESX vSphere 5.x, incl. HA, Failsafe and vSphere Metro Storage Cluster
Up to the Remote Copy sync supported maximum of 10ms RTT (~1000km)

Requirements:
Two 3PAR disk arrays
Two RC sync links (RCFC or RCIP*)
3PAR Remote Copy and Peer Persistence licenses

(Diagram: sync Remote Copy between two sites; A = active LUN presentation, P = passive LUN presentation)

Also see the VMware KB "Implementing vMSC using 3PAR Peer Persistence" and the HP whitepaper "Implementing vMSC using HP 3PAR Peer Persistence"
* RCFC strongly recommended; VMware vMSC certification is based on RCFC

3PAR Points of Integration with vSphere

Value-add Plug-ins:
OneView for vCenter
StoreFront Analytics for vRealize Operations Mgr
vCenter SRM Adapter
Recovery Manager Central
Log Insight Content Packs

VASA Integration:
VASA 1.0 & 2.0 support
128,000 VVol objects
Array-based VASA Provider

VAAI Integration:
Full Copy Offload
Block Zero Offload
Hardware Assisted Locking
Thin Provisioning Space Reclaim (UNMAP)

Innovation:
Silicon-level Zero Detect
Multi-tier snapshots
Peer Motion & Peer Persistence
Silicon-enhanced locking
Wide-striping VMs

Platform:
Application-aware snapshots
Storage I/O Control
Enhanced multi-pathing (PSA/ALUA)
Adaptive Queue Depth throttling

Certification:
vSphere: ESXi & vCenter
vSphere Metro Storage Cluster
VMware Virtual Volumes
vCenter Site Recovery Manager
3PAR Storage Federation

HPE Restricted for HPE and HPE Channel Partner Internal Use Only
Federated Storage
What's the benefit?

"To federate means to cause to join into a union or similar association; thus federated means to be united under a central government" — Dictionary

"Storage federation: the transparent, dynamic and non-disruptive distribution of storage resources across self-governing, discrete, peer storage systems" — Marc Farley, StorageRap, April 2010

Provides peer-to-peer (versus hierarchical) functionality, like compute federation
Distributed volume management across self-governing, homogeneous peer systems lets resources be managed at the data-center or metro level, not device by device
Experience secure, non-disruptive data mobility at the array, not host, level
Eliminates the risk of over-provisioning a single array
Storage Federation
Data mobility to manage SLAs at a datacenter level

Balance workloads across resources
Manage service levels
No additional appliances / hardware

(Diagram: Peer Motion moves workloads between an 8450, a 20850 and an 8440 to balance performance, scale and cost; Adaptive Optimization and Priority Optimization operate within each array)
Federation Zoning: detailed view
Multiple-initiator, multiple-target zoning

(Diagram: Host1 and Host2 each have a host zone on Fabric 1 and Fabric 2; a federation zone on each fabric connects the peer/initiator and target ports of all four 3PAR arrays)
3PAR Online Import

New enhanced Federation with 3.2.2
Up to 60PB usable and >12M IOPS of federated storage with zero added overhead

Online Import: native 3PAR Federation vs. virtualization with EMC VPLEX:

              3PAR Federation           EMC VPLEX virtualization
Deployment    Native, built-in          Separate device in the data path
Management    One step, one interface   Many clicks, many interfaces
Efficiency    Thin aware                Fat
Performance   No impact                 Added latency, limited bandwidth
Cost          $                         $$$
3PAR StoreServ Online Import
Evolution and migration with the lowest risks

Migrate online to 3PAR StoreServ, a uniquely agile Tier-1 storage platform (F-, T-Class, 7000, 8000, 10000, 20000), from:
EVA (all models except 3000 and 5000) and P6000 (all models), orchestrated via Command View EVA
EMC CX4, VNX, VNX2, VMAX, orchestrated via SSMC
IBM XIV, orchestrated via SSMC
HDS NSC, USP, USP-V, USP-VM, VSP, orchestrated via SSMC
3PAR Peer Motion / Online Import
Orchestration for Peer Motion and Online Import is now part of SSMC

(Diagram: host servers connect over the FC SAN to the source arrays — EMC, HDS, IBM — and the target 3PAR arrays. The SSMC server hosts the business logic with source plugins — EMC via its SMI-S Provider, HDS via HiCommand, IBM via its SMI-S Provider — and 3PAR target plugins, and coordinates host IO paths, peer paths and management paths during migration)
Online Import Utility vs. Storage vMotion
Migrating with OIU resulted in a 2x performance increase over a vMotion migration

Source array:
EMC VMAX 10K, array software 5876.268.174, SAS disk drives, 8Gbps host connectivity

Destination array:
HPE 3PAR 20000 4-node, array software 3.2.2 eMU2, SSD drives, 16Gbps host connectivity, 16Gbps peer connectivity

Fabric: Brocade & Cisco 16Gb

Host and volume:
Linux VM installed on a 240GB VMAX datastore (ESX 6.0)
Same host and fabric used to capture performance numbers
Medusa read/write I/O load test running during both migrations

Results (migration time):
Storage vMotion: 34.60 minutes
3PAR Online Import Utility: 16.90 minutes
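As a quick plausibility check, the measured times imply the following average migration throughput for the 240GB datastore. This is simple arithmetic on the figures above, not additional measured data.

```python
# Effective throughput implied by the 240GB datastore migration times.
SIZE_GB = 240

def throughput_mb_s(minutes):
    """Average migration throughput in MB/s for the 240GB datastore."""
    return SIZE_GB * 1024 / (minutes * 60)

vmotion = throughput_mb_s(34.60)
oiu = throughput_mb_s(16.90)
print(f"Storage vMotion: {vmotion:.0f} MB/s, OIU: {oiu:.0f} MB/s "
      f"({34.60 / 16.90:.2f}x faster)")
```

The block-level OIU transfer sustains roughly 242 MB/s against roughly 118 MB/s for Storage vMotion, consistent with the ~2x claim on this slide.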
HPE Online Import Utility vs. VMware Storage vMotion
When to use what, and why

VMware Storage vMotion:
Uses a software feature of ESX to copy datastores from source to destination storage
IO flow is source → ESX server → target
Adds IO load to the ESX server running the VM
Cannot migrate RDMs
Can migrate individual datastores
Can only be used for VMware
~50% slower than OIU, which is OK for small migrations

Online Import Utility:
Performs a block-level transfer from source to destination
IO flow is source → target
The ESX host is relieved from data movement
Can migrate RDMs
Moves entire LUNs, containing potentially many datastores and VMs
Works on both physical and virtualized systems
~2x faster than Storage vMotion, thus preferred for large migrations

Thank you