Michael Cram
Bala Ganeshan
Bradford Glade
Varina Hammond
Mary Peraro
Suzanne Quest
Jim Wentworth
EMC VPLEX Architecture and Deployment: Enabling the Journey to the Private Cloud
Contents
Preface
Chapter 1
Chapter 2
Chapter 3
Software Architecture
3.1 Introduction ...........................................................................
3.2 Simplified storage management .........................................
3.3 Management server user accounts .....................................
3.4 Management server software ..............................................
3.4.1 Management console..................................................
3.4.2 Command line interface.............................................
3.4.3 System reporting.........................................................
3.5 Director software...................................................................
3.6 Internal connections..............................................................
3.7 External connections.............................................................
3.8 Configuration overview.......................................................
3.8.1 Small configurations...................................................
3.8.2 Medium configurations .............................................
3.8.3 Large configurations ..................................................
3.9 I/O implementation .............................................................
3.9.1 Cache layering roles ...................................................
3.9.2 Share groups................................................................
3.9.3 Cache coherence..........................................................
3.9.4 Meta-directory.............................................................
3.9.5 How a read is handled ...............................................
3.9.6 How a write is handled..............................................
Chapter 4
System Integrity
4.1 Overview ................................................................................
4.2 Cluster.....................................................................................
4.3 Path redundancy through different ports .........................
4.4 Path redundancy through different directors ...................
4.5 Path redundancy through different engines .....................
4.6 Path redundancy through site distribution.......................
4.7 Safety check............................................................................
Chapter 5
Chapter 6
Glossary
Index
Preface
Related
documentation
Storage Virtualization
VMware Technologies
Nondisruptive Storage Relocation: Planned Events with EMC VPLEX Best Practices Planning
Typographical
conventions
Normal
Bold
Italic
Courier
Used for: system output, such as an error message or script; URLs, complete paths, filenames, prompts, and syntax when shown outside of running text
Courier bold
Used for: specific user input (such as commands)
Courier italic
< >  [ ]  { }  ...
1
VPLEX Family Overview
1.1 Introduction
The purpose of this TechBook is to introduce the EMC VPLEX family, describe its logical and physical architecture, and provide an overview of the features and functionality associated with the family's primary use cases, as shown in Figure 1.
Figure 1
Figure 2
Scale-out clustering hardware that lets you start small and grow big with predictable service levels
Figure 3
VPLEX offerings
Figure 4
VPLEX vision
Figure 5
All VPLEX clusters are built from a standard engine configuration. The engine is responsible for the federation of the I/O stream. It connects to hosts and storage using Fibre Channel as the data transport. A VPLEX cluster consists of one, two, or four engines, each of which contains several major components. "Hardware architecture" on page 38 provides more hardware details.
Registering initiators
Figure 6
Typically, one storage view is created for all hosts that require access to
the same storage.
Figure 7
Use cases
Figure 8
Cluster configuration
Figure 9
Figure 10
Multi-site configuration
Feature                Description
RAID 0
RAID C
RAID 1
Distributed RAID 1
Disk slicing
Migration
Remote export
Write-through cache    Host writes pass through the cache to the back-end arrays and are acknowledged by the arrays prior to acknowledgement to the host.
High-level features
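The write-through caching behavior described in the table can be sketched in a few lines. This is an illustrative model only, not VPLEX code; the class and method names are invented for the example:

```python
class BackEndArray:
    """Stands in for a back-end storage array (illustrative only)."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data
        return "ack"                      # array acknowledges the write

class WriteThroughCache:
    """Host writes pass through the cache to the array; the host is
    acknowledged only after the array acknowledges."""
    def __init__(self, array):
        self.array = array
        self.cache = {}

    def host_write(self, lba, data):
        self.cache[lba] = data            # update the cached copy
        ack = self.array.write(lba, data) # write through to the array
        return ack                        # host ack only after array ack

    def host_read(self, lba):
        if lba in self.cache:             # cache hit
            return self.cache[lba]
        data = self.array.blocks[lba]     # cache miss: read from array
        self.cache[lba] = data
        return data

array = BackEndArray()
vplex = WriteThroughCache(array)
print(vplex.host_write(0, b"hello"))      # ack is returned only after the array acks
print(vplex.host_read(0))
```

The key property of write-through is visible in `host_write`: the array is always current, so a cache failure cannot lose acknowledged writes.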
2
Hardware and Software
2.1 Hardware...........................................................................................
2.2 Software .............................................................................................
2.3 Networks ...........................................................................................
2.4 Scalability and limits........................................................................
2.5 Hardware architecture.....................................................................
2.6 Engine components..........................................................................
2.7 I/O modules .....................................................................................
2.8 Management server .........................................................................
2.9 Other hardware components..........................................................
2.1 Hardware
All VPLEX clusters are built from a standard VPLEX engine
component. The engine is responsible for the federation of the I/O
stream. It connects to hosts and storage using Fibre Channel as the data
transport. A VPLEX cluster consists of one, two, or four engines, each of which contains the following major components:
Each director is configured with five (5) Fibre Channel I/O modules
and one (1) Gigabit Ethernet I/O module.
2.2 Software
The VPLEX cluster firmware is GeoSynchrony 4.0, which manages
cluster functions, such as processing I/O from hosts, cache processing,
virtualization logic, and interfaces with arrays for claiming and I/O
processing.
The VPLEX management server contains the software for the command
line interface (VPlexcli) and the VPLEX management console, a
web-based graphical user interface (GUI). The VPLEX management
server communicates with the directors, retrieves logs by querying
system state, supports multiple CLI and HTTP sessions, listens to the
system events and determines which events are of interest for call
home, and interprets the call home list and initiates the call home.
The management server can also provide call-home services through
the public Ethernet port by connecting to an EMC Secure Remote
Support (ESRS) gateway deployed on that same network, which can
also be used to facilitate service by EMC personnel.
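The event-filtering step of call home described above — listening to system events and deciding which ones warrant a call home — can be sketched as follows. The event structure and severity names here are hypothetical, invented for illustration:

```python
# Severities assumed to trigger a call home (hypothetical values).
CALL_HOME_SEVERITIES = {"critical", "error"}

def events_of_interest(events):
    """Filter the system event stream down to the events that
    warrant a call-home notification."""
    return [e for e in events if e["severity"] in CALL_HOME_SEVERITIES]

events = [
    {"name": "director-restart", "severity": "critical"},
    {"name": "user-login", "severity": "info"},
]
print(events_of_interest(events))  # only the critical event remains
```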
2.3 Networks
The VPLEX system is inter-connected using Fibre Channel.
A management server has four Ethernet ports, identified as eth0
through eth3 by the operating system. A public management port (eth3)
is the only Ethernet port in the VPLEX rack connected to the customer's management LAN. Other components in the cluster's rack are
connected to two redundant private management Ethernet networks,
connected to the management server's eth0 and eth2 ports. The service
port (eth1) can be connected to a service laptop, giving it access to the
same services as a host on the management LAN.
Object             Maximum number
Virtual volumes    8000
Storage volumes    8000
                   400
                   256
                   128
Extents            24000
Figure 11
Figure 12
Hardware architecture
Figure 13
Each director has four (4) ports of 8 Gb/s Fibre Channel for host I/O
and four (4) ports of 8 Gb/s Fibre Channel for storage array I/O.
Each director has one (1) SSD with a 30 GB capacity. The SSDs are used
to store the operating system, firmware, and logs.
Two management access modules provide service access to the directors through an embedded Ethernet switch and two (2) external RJ-45 interfaces. They also provide two (2) micro-DB-9 interfaces, which are used to monitor the standby power supply (SPS) that provides battery backup power to the chassis in the event of power loss, and to monitor the UPS that provides backup power to the internal intra-cluster switches in medium and large configurations.
Power is provided by two (2) power supplies and cooling is supported
within the engine by four (4) hot-swappable fan modules.
The engine consumes 4U of rack space.
Figure 14
I/O modules
4 GB RAM
2.9.0.2 UPS
A 350V UPS unit is used for each of the private intra-cluster switches in the medium and large VPLEX configurations, supplying power to these switches to ride through transient power loss. The UPS units are monitored through their serial ports, which are connected via serial cables to the primary management station within the rack.
Each power supply module is hot-swappable.
2.9.0.3 Fans
3
Software Architecture
Software Architecture
3.1 Introduction
The system management software for VPLEX family systems consists
of the following high-level components:
Business layer
Firmware layer
Purpose                                        admin (customer)    service (EMC service)
Uses VPlexcli
Modifies their own password
Can SSH or VNC into the management server
Can SCP files off the management server from directories to which they have access
VPlexcli
Call-home software
Figure 15
The VPLEX Management Console provides online help for all of its
available functions. You can access online help in the following ways:
Click the Help icon in the upper right corner on the main screen to
open the online help system, or in a specific screen to open a topic
specific to the current task.
Click the Help button on the task bar to display a list of links to
additional VPLEX documentation and other sources of information.
Figure 16
Figure 17
Figure 18
Each context may have attributes as well as sub- or child contexts. You may change your current context location using:

cd <relative path>, such as:
VPlexcli:/> cd clusters/Boston

cd <full path>, such as:
VPlexcli:/> cd /clusters/Boston

cd .., such as:
VPlexcli:/clusters/Hopkinton> cd ../Boston
expel
forget
shutdown
summary
status
or
VPlexcli set --help
Symbol: *
Matches any unknown number of characters.
Example:
VPlexcli:/> cd engines/*/directors
VPlexcli:/engines/engine-1-1/directors>

Symbol: ?
Matches one unknown character.
Example:
VPlexcli:/engines/engine-1-1/directors> cd 128.221.25?.35
VPlexcli:/engines/engine-1-1/directors/128.221.252.35>

Symbol: **
Recursively matches objects at any level.
Example:
VPlexcli:/> cd **/directors
VPlexcli:/engines/engine-1-1/directors>

Multiple symbols may be combined.
Example:
VPlexcli:/> ls **/*.36/firmware
/engine/engine-1-1/directors/128.221.252.36/firmware:
Name                Value
------------------  -----------------------------------------
application-status  running
uid                 0x000000003b300c49
uptime              6 days, 21 hours, 31 minutes, 58 seconds
version             v8_1_41-0
Director software
Figure 19
Network architecture
Figure 20
External connections
Figure 21
Configuration overview
Two directors
One engine
[Diagram: small configuration — management server; one engine containing Director A and Director B; SPS]
Figure 22
Four directors
Two engines
Redundant Fibre Channel COM switches for local COM; UPS for
each Fibre Channel switch
[Diagram: medium configuration — management server; Engine 1 with Director 1A, Director 1B, and SPS 1; Engine 2 with Director 2A, Director 2B, and SPS 2]
Figure 23
Eight directors
Four engines
Redundant Fibre Channel COM switches for local COM; UPS for
each Fibre Channel switch
[Diagram: large configuration — management server; four engines, each containing Director A and Director B with an SPS per engine]
Figure 24
Figure 25
3.9.4 Meta-directory
Directory chunks are managed by the meta-directory, which assigns
and remembers chunk ownership. These chunks can migrate using
Locality-Conscious Directory Migration (LCDM). This meta-directory
knowledge is cached across the share group for efficiency.
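The idea of a meta-directory that assigns chunk ownership and then lets ownership migrate toward the workload can be sketched roughly as follows. The assignment policy, director names, and API are invented for illustration; real LCDM behavior is more sophisticated:

```python
class MetaDirectory:
    """Sketch of a meta-directory: assigns and remembers which director
    owns each cache-directory chunk, and lets ownership migrate."""
    def __init__(self, directors):
        self.directors = list(directors)
        self.owner = {}                   # chunk id -> owning director

    def assign(self, chunk):
        # Round-robin initial assignment (an illustrative policy only).
        if chunk not in self.owner:
            self.owner[chunk] = self.directors[chunk % len(self.directors)]
        return self.owner[chunk]

    def migrate(self, chunk, new_owner):
        # Locality-conscious migration: move ownership toward the
        # director that is actually accessing the chunk.
        assert new_owner in self.directors
        self.owner[chunk] = new_owner

md = MetaDirectory(["director-1A", "director-1B"])
md.assign(0)                              # initially owned by director-1A
md.migrate(0, "director-1B")              # ownership follows the workload
print(md.owner[0])
```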
4
System Integrity
This chapter explains how VPLEX clusters handle hardware failures in any subsystem within the storage cluster.
4.1 Overview
VPLEX clusters are capable of surviving any single hardware failure in
any subsystem within the overall storage cluster. These include the host connectivity subsystem, the memory subsystem, and so on. A single failure in any
subsystem will not affect the availability or integrity of the data.
Multiple failures in a single subsystem and certain combinations of
single failures in multiple subsystems may affect the availability or
integrity of data.
This availability requires that host connections be redundant and that
hosts are supplied with multipath drivers. In the event of a front-end
port failure or a director failure, hosts without redundant physical
connectivity to a VPLEX cluster and without multipathing software
installed may be susceptible to data unavailability.
4.2 Cluster
A cluster is a collection of one, two, or four engines in a physical
cabinet. A cluster serves I/O for one storage domain and is managed as
one storage cluster.
All hardware resources (CPU cycles, I/O ports, and cache memory) are
pooled:
Figure 26
Port redundancy
Figure 27
Director redundancy
Figure 28
Engine redundancy
Figure 29
Site redundancy
5
VPLEX Local and VPLEX
Metro Federated Solution
This chapter explains the VPLEX Local and VPLEX Metro offerings, the careful planning they require, and their capabilities and scalability options.
Figure 30
Deployment overview
Figure 31
Figure 32
SAN outages
Host failures
1.
Figure 33
Figure 34
Dual-fabric deployment
Workload resiliency
Figure 35
5.5.2 Removing barriers for distributed data access with VPLEX Metro
VPLEX Metro federates storage domains, hiding most of the costs and
complexity associated with distributed data access. A VPLEX cluster
federates the storage domains from all the storage arrays connected to
it, creating a single point of control for provisioning and LUN masking,
overcoming some of the complexity issues. VPLEX Metro connects two
clusters over Fibre Channel, federating each cluster's storage domain
into one domain, accessible by initiators connected to each cluster.
The distributed VPLEX cache also provides local read caching, hiding
the latency effects for common workloads, while not burdening the
applications with maintaining the cache coherency. To hosts accessing
storage through a VPLEX cluster, all storage appears on the SAN as if it
were local. A distributed or clustered file system on top of this storage
provides a hierarchical file representation of the data, more suitable for most applications than raw block access. Hosts accessing a file on the
distributed file system will use their local SAN connection to retrieve
the data, relying on VPLEX to retrieve the data from local cache or from
the appropriate storage array.
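The local read-caching behavior described above can be sketched with a toy model. A plain dict stands in for the federated storage domain, and the class name and tags are invented; the sketch only illustrates why repeat reads avoid inter-site latency:

```python
class DistributedReadCache:
    """Sketch of local read caching at one cluster of a federated pair:
    a read is served from the local cache when possible, hiding
    inter-site latency for common read-heavy workloads."""
    def __init__(self, name, backing_store):
        self.name = name
        self.local_cache = {}
        self.backing = backing_store      # shared storage domain (a dict here)

    def read(self, lba):
        if lba in self.local_cache:
            return self.local_cache[lba], "local cache"
        data = self.backing[lba]          # remote/array fetch (higher latency)
        self.local_cache[lba] = data      # cache for subsequent local reads
        return data, "fetched"

storage = {7: b"block-7"}
site_a = DistributedReadCache("site A", storage)
print(site_a.read(7))                     # first read is fetched from the array
print(site_a.read(7))                     # repeat read is served locally
```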
6
VPLEX Use Case Example
using VMware
This chapter provides a use case example using Storage VMotion for ease of use in VMware environments with VPLEX.
1.
Figure 36
Figure 37 on page 99 shows the devices visible on the ESX server. It can
be seen that there are two devices with the product identification
Invista but without any details. This is the case since EMC Virtual
Storage Integrator (VSI) at this point does not have the capability to
resolve the devices presented from VPLEX systems. The figure also
shows the NAA number for the devices. As discussed earlier, the Fibre
Channel OUI (organizationally unique identifier), 00:01:44, corresponds
Figure 37
The migration of the data from the VMAX arrays to the storage
presented from VPLEX can be performed using Storage VMotion after
appropriate datastores are created on the devices presented from
VPLEX.
Figure 38 on page 100 shows the steps required to initiate the migration
of a virtual machine from Datastore_1 to the target datastore, Target_1,
that resides on a VPLEX device. It is important to note that although an
ESX server V3.5 was utilized to showcase the migration procedure, the
same process is applicable for ESX servers running V4.0 or later.
Furthermore, it should also be noted that the migration wizard is
available only when vCenter Server V4.0 or later is leveraged. The
Storage VMotion functionality is available via a command line utility for vCenter Server V2.5. A detailed discussion of Storage VMotion is beyond
the scope of this TechBook.
Figure 38
Figure 39
Devices to be encapsulated
3. Once the devices are visible to the VPLEX system they have to be
claimed. This step is shown in Figure 40 on page 103. The -appc
flag during the claiming process ensures that the content of this
device is preserved, and that the device is encapsulated for further
use within the VPLEX system.
Figure 40
4. After claiming the devices, create a single extent that spans the
whole disk. Figure 41 on page 104 shows this step for the two
datastores that are being encapsulated in this example.
Figure 41
Figure 42
Figure 43
Figure 44
Storage view
10. The devices presented from the VPLEX system host the original datastores. However, the VMware ESX hosts do not automatically mount these datastores: because the WWNs of the devices exposed through the VPLEX system differ from the WWNs of the devices presented from the Symmetrix VMAX system, VMware ESX considers the datastores to be snapshots.
VMware vSphere allows the resignaturing process for datastores that are considered snapshots to be performed on a device-by-device basis. This reduces the risk of mistakenly resignaturing the encapsulated devices from the VPLEX system. The use of persistent mounts also provides other advantages, such as retaining the history of all of the virtual machines. Therefore, for a homogeneous vSphere environment, EMC recommends the use of persistent mounts for VMware datastores that are encapsulated by VPLEX.
For VMware environments that contain VMware ESX version 3.5 or
earlier, this step should be skipped.
Activate the zone set that was created in step 8. A manual rescan of
the SCSI bus on the VMware ESX servers should remove the
original devices and add the encapsulated devices presented from
the VPLEX system.
Figure 45 on page 108 shows an example of this for a VMware
vSphere environment. The figure shows that all of the original virtual machines in the environment are now marked as inaccessible. This
occurs since the datastores, Datastore_1 and Datastore_2, created on
the devices presented from the Symmetrix VMAX system are no
longer available.
Figure 45
Figure 46
Figure 47
13. Once the VPLEX devices have been discovered and the VMware datastores have been resignatured, the advanced parameter LVM.EnableResignature should be set to 0.
14. The virtual machines that were unregistered in step 10 can be added
back to the vCenter Server inventory using either the Virtual
Infrastructure Client, vSphere Client, or command line utilities
based on the VMware SDK. An example of this is shown in
Figure 48 on page 111.
Figure 48
15. After the virtual machines are properly identified or registered, they
can be powered on.
The I/O operations at the other site to the same device continue
normally. Furthermore, since the rules are applied on a
device-by-device basis it is possible to have active devices on both sites
in case of a network partition. Imposing rules to minimize the impact of network interruptions does, however, have an impact in the event of a site failure. In this case, based on the rules defining which site detaches when there is a breakdown in communications, the VPLEX cluster at the surviving site automatically suspends I/O to some of the devices at that site. To address this, the VPLEX software provides the capability to manually resume I/O to the detached devices. A more detailed discussion of the procedure to perform these operations is beyond the scope of this TechBook.
Figure 49 on page 114 shows the recommended cluster configuration
for VMware deployments that leverage devices presented through
VPLEX Metro systems. It can be seen from the figure that the VMware
virtualization platform is divided into two separate VMware clusters.
Each cluster includes the VMware ESX servers at each physical
datacenter (site A and site B). However, both VMware clusters are
managed under a single datacenter entity that represents the logical
combination of multiple physical sites involved in the solution. Also
shown in the figure, as an inset, are the settings for each cluster. The
inset shows that VMware DRS and VMware HA are active in each
cluster, thus restricting the domain of operation of these components of
the VMware offering to a single physical location.
Figure 49
Figure 50
Figure 51
Figure 52
Figure 53 on page 119 shows the rules enforced on the virtual volume
hosting the datastore Distributed_DSC_Site_A. It can be seen from the
figure that the rules are set to suspend I/Os at site B in case of a
network partition. Therefore, the rules ensure that if there is a network
partition, the virtual machines hosted on the datastore Distributed_DSC_Site_A are not impacted by it. Similarly, for the virtual
machines hosted at site B, the rules are set to ensure that the I/Os to
those datastores are not impacted in case of a network partition.
Figure 53
Figure 54
Figure 55
Figure 56
Figure 57
Figure 58
2. Add the newly created device as a mirror to the existing device that
needs the geographical protection. This is shown in Figure 59 on
page 125, and just like the previous step, is independent of the host
operating system utilizing the virtual volumes created from the
devices.
Figure 59
Figure 60
Figure 61
7.
6.6 Conclusion
VPLEX, running on the GeoSynchrony operating system and common EMC hardware, introduces the first platform in the world that delivers both local and distributed federation to the data center. It is capable of providing ease-of-use integration into an existing data center as well as data distribution on a global level. This TechBook not only describes the architecture of VPLEX, but also provides insight into the primary use cases that VPLEX supports at publication. The use cases revolve around unplanned and planned events, nondisruptive data migration, simplified storage management, AccessAnywhere across synchronous distances in a VPLEX Metro, and workload resiliency.
Glossary
A
AccessAnywhere
active/active
active/passive
array
A collection of disk drives where user data and parity data may be
stored. Devices can consist of some or all of the drives within an array.
asynchronous
B
bandwidth
bit
block
block size
byte
C
cache
Temporary storage for recent writes and recently accessed data. Disk data is read through the cache so that subsequent read references are found in the cache.

cache coherency
Managing the cache so data is not lost, corrupted, or overwritten. With multiple processors, data blocks may have several copies, one in the main memory and one in each of the cache memories. Cache coherency propagates the blocks of multiple users throughout the system in a timely fashion, ensuring that the data blocks do not have inconsistent versions in the different processors' caches.

cluster
Two or more VPLEX directors forming a single fault-tolerant cluster, deployed as one to four engines.
cluster ID
cluster deployment
ID
clustering
COM
command line
interface (CLI)
continuity of
operations (COOP)
controller
D
data sharing
The ability to share access to the same data with multiple servers regardless of time and location.

device
A combination of one or more extents to which you add specific RAID properties. Devices use storage from one cluster only; distributed devices use storage from both clusters in a multi-cluster plex. See also distributed device.

director
A CPU module that runs GeoSynchrony, the core VPLEX software. There are two directors in each engine, and each has dedicated resources and is capable of functioning independently.

dirty data
The write-specific data stored in the cache memory that has yet to be written to disk.

disaster recovery (DR)
The ability to restart system operations after an error, preventing data loss.

disk cache
A section of RAM that provides a cache between the disk and the CPU. RAM's access time is significantly faster than disk access time; therefore, a disk-caching program enables the computer to operate faster by placing recently accessed data in the disk cache.

distributed device
A RAID 1 device whose mirrors are in geographically separate locations.

distributed file system (DFS)
Supports the sharing of files and resources in the form of persistent storage over a network.
E
engine
Ethernet
event
extent
F
failover
fault tolerance
field replaceable unit (FRU)
firmware
G
geographically distributed system
gigabit (Gb or Gbit)
gigabit Ethernet
gigabyte (GB)
H
host bus adapter (HBA)
I
input/output (I/O)
internet Fibre Channel protocol (iFCP)
intranet
A network operating like the World Wide Web but with access
restricted to a limited group of authorized users.
internet small computer system interface (iSCSI)
I/O (input/output)
K
kilobit (Kb)
kilobyte (K or KB)
L
latency
load balancing
M
megabit (Mb)
megabyte (MB)
metadata
metavolume
Metro-Plex
mirroring
miss
N
namespace
network
network architecture
network-attached storage (NAS)
network partition
P
parity
parity checking
Checking for errors in binary data. An extra 0 or 1 bit, called a parity
bit, is added to each byte in a transmission. The sender and receiver
agree on odd parity, even parity, or no parity. If they agree on even
parity, the parity bit is chosen so that each byte has an even number of
1 bits; if they agree on odd parity, the parity bit gives each byte an odd
number of 1 bits. If the data is transmitted incorrectly, the change in
parity reveals the error.
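The scheme above is easy to make concrete. The function names below are illustrative, not from the book:

```python
def parity_bit(byte: int, odd: bool = False) -> int:
    """Return the parity bit for a byte under even (default) or odd parity."""
    ones = bin(byte & 0xFF).count("1")
    bit = ones % 2                  # even parity: add a 1 only if the count is odd
    return bit ^ 1 if odd else bit  # odd parity is simply the complement

def parity_ok(byte: int, bit: int, odd: bool = False) -> bool:
    """Receiver-side check: recompute the parity bit and compare."""
    return parity_bit(byte, odd) == bit
```

If a single bit flips in transit, the recomputed parity no longer matches the transmitted parity bit, so the receiver detects the error.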
partition
plex
R
RAID
RAID 0
RAID 1
Also called mirroring, RAID 1 has been in use longer than any other
form of RAID. It remains popular because of its simplicity and high
level of data availability. A mirrored array consists of two or more
disks, each holding an identical image of the user data. RAID 1 has no
striping. Read performance is improved because data can be read from
either disk. Write performance is lower than that of single-disk storage
because writes must be performed on all disks, or mirrors, in the RAID
1. RAID 1 provides very good data reliability for read-intensive
applications.
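The read/write asymmetry described above can be sketched with a toy mirrored array. This is illustrative only; the `Raid1` class and its block-map representation are invented for the example:

```python
import random

class Raid1:
    """Toy RAID 1: every write goes to all mirrors; a read can use any one."""
    def __init__(self, n_mirrors: int = 2):
        self.mirrors = [{} for _ in range(n_mirrors)]  # block number -> data

    def write(self, block: int, data: bytes) -> None:
        # Writes are slower than on a single disk: every mirror must be updated.
        for mirror in self.mirrors:
            mirror[block] = data

    def read(self, block: int) -> bytes:
        # Reads can be served by any mirror, which improves read performance.
        return random.choice(self.mirrors)[block]
```

Because every mirror holds an identical image, any single mirror can satisfy a read, and the array survives the loss of all but one mirror.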
RAID leg
rebuild
redundancy
reliability
remote direct memory access (RDMA)
S
scalability
simple network management protocol (SNMP)
site ID
small computer system interface (SCSI)
stripe depth
striping
storage area network (SAN)
storage view
storage volume
synchronous
T
throughput
tool command language (TCL)
transmission control protocol/Internet protocol (TCP/IP)
U
uninterruptible power supply (UPS)
A power supply that includes a battery to maintain power in the event
of a power failure.
universal unique identifier (UUID)
A 64-bit number used to uniquely identify each VPLEX director. This
number is based on the hardware serial number assigned to each
director.
V
virtualization
virtual volume
W
wide area network (WAN)
Index
A
architecture
characteristics 20
clustering 23
overview 20
software 45
audience 11
C
cabinet
power distribution panels 39
power distribution units 39
cache coherence 66
cache layering roles 65
distributed cache 65
I/O implementation 65
local node cache 65
clustering architecture 23
command line interface 52
describe command 52
options 55
Secure Shell 52
command line options 55
configuration overview 61
configurations
large 64
medium 63
small 62
ConnectEMC 46
Connectrix DS-300B switch 58
context-sensitive help 51
D
deployment overview 79
VPLEX Local 79
when to use 80
director software 57
distributed block cache 91
distributed data access
introduction 88
removing barriers with VPLEX Metro 90
traditional approach 89
approach problems 89
distributed device
cluster quorum 93
detach rules 92
fencing mechanisms 93
E
EmaAdapter 46
EMC VPLEX Metro-Plex 23
EMC VPLEX Storage Cluster 23
encapsulate and migrate
steps 101
engine components 40
director 40
director, I/O modules 41
management access modules 40
power 40
rack space 40
external connections 59
F
federated solution
enabling 78
Fibre Channel COM 58
Fibre Channel SANs 89
H
hardware
engine 43
fans 43
Fibre Channel COM switch 43
other components 43
switches 43
hardware architecture 38
storage cluster cabinet 38
hardware components 24, 34
high-level features
disk slicing 31
distributed RAID 1 31
migration 31
RAID 0 31
RAID 1 31
RAID C 31
remote export 31
storage volume encapsulator 31
write-through cache 31
I
I/O implementation 65
cache coherence 66
cache layering roles 65
meta-directory 66
share groups 66
I/O modules
Fibre Channel 41
Gigabit Ethernet 41
port roles 41
internal connections 58
Fibre Channel COM 58
M
management connectivity
Solar Flare management 59
management console 49, 53
online help 50
N
networks 36
Ethernet ports 36
public management port 36
nondisruptive migration 119
nondisruptive migrations 98
nondisruptive workload relocation 96
O
online help
accessing 50
overview and architecture
introduction 18
R
related documentation 11
S
SAN outages 84
best practices 85
scalability and limits 37
Secure Shell (SSH) 52
share groups 66
simplified storage management 87
context-sensitive help 51
provision storage tab 51
software
GeoSynchrony 35
management server 35
storage provisioning 24
process 24
Storage VMotion 95, 98
system cabinet 38
system management server 46
ConnectEMC 46
EmaAdapter 46
processes 46
Secure Shell 46
user accounts 48
system management software
components 46
system reporting 56
automatically collect data 56
configuration file 56
manually collect data 56
T
technology refresh
use case 87
U
user accounts 48
admin 48
all account type 48
Linux CLI 48
management server 48
service 48
V
VMware cluster configuration 112
VMware deployments 112
VMware vSphere 96
VPLEX
deployment overview 79
system reporting 56
VPLEX cluster
cabinet 39
configurations
pre-installed 39
management server 42
scalability and limits 37
single phase power 39
storage provisioning 24
VPLEX family
architecture 20
clustering architecture 23
components 21
VPLEX Geo 21
VPLEX Global 21
VPLEX Local 21
VPLEX Metro 21
configuration overview 61
configurations
large 61
medium 61
small 61
engine components 40
external connections 59
handling a read 66
hardware 24, 34
hardware and software 33
high-level features 31
how to handle a write 67
management console 49
networks 36
software 35
software architecture 45
system management software 46
VPLEX Geo 21, 22
VPLEX Global 21, 22
VPLEX Local 21, 22
VPLEX Local deployment 79
VPLEX management console 49
VPLEX management console web interface 46
VPLEX Metro 21, 22
distributed data access
removing barriers 90
distributed devices 92
remote access 91
technical overview 91
VMware deployments 112
VPLEX Metro deployment
between data centers 81
when to use 82
within a data center 80
when to use 81
VPLEX system data management 67
VPLEX System Management Software (SMS) 42
VPlexcli 52
command line options 55
wildcards 55
multiple symbols 55
question mark 55
two asterisks 55
W
workload relocation 96
use case example 96, 97
workload resiliency 83
best practices 83
storage array outages 83