vSphere Storage
Update 1
ESXi 5.0
vCenter Server 5.0
EN-000810-00
You can find the most up-to-date technical documentation on the VMware Web site at:
http://www.vmware.com/support/
The VMware Web site also provides the latest product updates.
If you have comments about this documentation, submit your feedback to:
docfeedback@vmware.com
Copyright © 2009–2012 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and
intellectual property laws. VMware products are covered by one or more patents listed at
http://www.vmware.com/go/patents.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Contents
1 Introduction to Storage 9
Storage Virtualization 9
Supported Storage Adapters 10
Types of Physical Storage 11
Target and Device Representations 15
Viewing Storage Devices 16
Displaying Datastores 19
How Virtual Machines Access Storage 20
Comparing Types of Storage 21
Index 211
About vSphere Storage
vSphere Storage describes storage options available to VMware ESXi and explains how to configure your
ESXi system so that it can use and manage different types of storage. In addition, vSphere Storage
concentrates on Fibre Channel and iSCSI storage area networks (SANs) as storage options and discusses
specifics of using ESXi in Fibre Channel and iSCSI environments.
Intended Audience
This information is for experienced system administrators who are familiar with virtual machine technology,
datacenter operations, and SAN storage concepts.
Introduction to Storage
This introduction describes available storage options for ESXi and explains how to configure your host so that
it can use and manage different types of storage.
This chapter includes the following topics:
Storage Virtualization
ESXi provides host-level storage virtualization, which logically abstracts the physical storage layer from virtual
machines.
An ESXi virtual machine uses a virtual disk to store its operating system, program files, and other data
associated with its activities. A virtual disk is a large file, or a set of files, that can be copied, moved,
archived, and backed up as easily as any other file. You can configure virtual machines with multiple virtual
disks.
To access virtual disks, a virtual machine uses virtual SCSI controllers. These virtual controllers include
BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, and VMware Paravirtual. These controllers are the only
types of SCSI controllers that a virtual machine can see and access.
Each virtual disk that a virtual machine can access through one of the virtual SCSI controllers resides on a
vSphere Virtual Machine File System (VMFS) datastore, an NFS-based datastore, or on a raw disk. From the
standpoint of the virtual machine, each virtual disk appears as if it were a SCSI drive connected to a SCSI
controller. Whether the actual physical disk device is being accessed through parallel SCSI, iSCSI, network, or
Fibre Channel adapters on the host is transparent to the guest operating system and to applications running
on the virtual machine.
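For illustration only, the controller type is also visible in a virtual machine's configuration (.vmx) file. The following minimal sketch assumes a hypothetical virtual machine with one VMware Paravirtual controller and a single virtual disk; the disk file name is a placeholder, and these options are normally set through the vSphere Client rather than by editing the file directly:

scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "example_vm.vmdk"

Other valid values for scsi0.virtualDev include buslogic, lsilogic, and lsisas1068, corresponding to the BusLogic Parallel, LSI Logic Parallel, and LSI Logic SAS controllers.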
Table 1-1. Storage Adapter Information
Adapter Information
Description
Model
Model of the storage adapter.
WWN (Fibre Channel)
World Wide Name formed according to Fibre Channel standards that uniquely identifies
the FC adapter.
iSCSI Name (iSCSI)
Unique name formed according to iSCSI standards that identifies the iSCSI adapter.
IP Address (independent hardware iSCSI)
Address assigned to the iSCSI adapter.
Devices
All storage devices or LUNs the adapter can access.
Paths
All paths the adapter uses to access storage devices.
Properties
Link that indicates that the adapter requires additional configuration. iSCSI and FCoE
adapters display this link.
To view details for a specific adapter, select the adapter from the Storage Adapters list.
To list all storage devices the adapter can access, click Devices.
In the Details panel, select the value in the name field, right-click, and select Copy.
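If you prefer the command line, the same adapter and device information can be listed from the ESXi Shell or the vSphere CLI. The following is a sketch for ESXi 5.x; the exact output columns vary by build:

# List all storage adapters (vmhba names, drivers, and identifiers)
esxcli storage core adapter list
# List all storage devices visible to the host
esxcli storage core device list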
Types of Physical Storage
ESXi supports the following types of storage.
Local Storage
Stores virtual machine files on internal or directly connected external storage disks.
Networked Storage
Stores virtual machine files on external storage disks or arrays attached to your
host through a direct connection or through a high-speed network.
Local Storage
Local storage can be internal hard disks located inside your ESXi host, or it can be external storage systems
located outside and connected to the host directly through protocols such as SAS or SATA.
Local storage does not require a storage network to communicate with your host. You need a cable connected
to the storage unit and, when required, a compatible HBA in your host.
The following illustration depicts a virtual machine using local SCSI storage.
Figure 1-1. Local Storage
In this example of a local storage topology, the host uses a single connection to a storage disk. On that disk,
you can create a VMFS datastore, which you use to store virtual machine disk files.
Although this storage configuration is possible, it is not a recommended topology. Using single connections
between storage arrays and hosts creates single points of failure (SPOF) that can cause interruptions when a
connection becomes unreliable or fails.
ESXi supports a variety of internal or external local storage devices, including SCSI, IDE, SATA, USB, and SAS
storage systems. Regardless of the type of storage you use, your host hides a physical storage layer from virtual
machines.
NOTE You cannot use IDE/ATA or USB drives to store virtual machines.
Local storage devices do not support sharing across multiple hosts. A datastore on a local storage device can
be accessed by only one host.
Because the majority of local storage devices do not support multiple connections, you cannot use multiple
paths to access local storage.
Networked Storage
Networked storage consists of external storage systems that your ESXi host uses to store virtual machine files
remotely. Typically, the host accesses these systems over a high-speed storage network.
Networked storage devices are shared. Datastores on networked storage devices can be accessed by multiple
hosts concurrently. ESXi supports the following networked storage technologies.
NOTE Accessing the same storage through different transport protocols, such as iSCSI and Fibre Channel, at
the same time is not supported.
Figure 1-2. Fibre Channel Storage
In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches and storage
arrays, using a Fibre Channel adapter. LUNs from a storage array become available to the host. You can access
the LUNs and create datastores for your storage needs. The datastores use the VMFS format.
For specific information on setting up the Fibre Channel SAN, see Chapter 3, Using ESXi with Fibre Channel
SAN, on page 31.
Software iSCSI
Your host uses a software-based iSCSI initiator in the VMkernel and a standard network adapter to connect to storage.
You must configure iSCSI initiators for the host to access and display iSCSI storage devices.
Figure 1-3 (iSCSI Storage) depicts the different types of iSCSI initiators.
Figure 1-3. iSCSI Storage
In the left example, the host uses the hardware iSCSI adapter to connect to the iSCSI storage system.
In the right example, the host uses a software iSCSI adapter and an Ethernet NIC to connect to the iSCSI storage.
iSCSI storage devices from the storage system become available to the host. You can access the storage devices
and create VMFS datastores for your storage needs.
For specific information on setting up the iSCSI SAN, see Chapter 8, Using ESXi with iSCSI SAN, on
page 61.
Figure 1-4. NFS Storage
Target and Device Representations
Figure 1-5. Target and LUN Representations
In this illustration, three LUNs are available in each configuration. In one case, the host sees one target, but
that target has three LUNs that can be used. Each LUN represents an individual storage volume. In the other
example, the host sees three different targets, each having one LUN.
Targets that are accessed through the network have unique names that are provided by the storage systems.
The iSCSI targets use iSCSI names, while Fibre Channel targets use World Wide Names (WWNs).
NOTE ESXi does not support accessing the same LUN through different transport protocols, such as iSCSI and
Fibre Channel.
A device, or LUN, is identified by its UUID name. If a LUN is shared by multiple hosts, it must be presented
to all hosts with the same UUID.
Table 1-2. Storage Device Information
Device Information
Description
Name
Also called Display Name. It is a name that the ESXi host assigns to the device based on
the storage type and manufacturer. You can change this name to a name of your choice.
Identifier
A universally unique identifier that is intrinsic to the device.
Runtime Name
The name of the first path to the device.
Operational State
Indicates whether the device is mounted or unmounted. For details, see Detach Storage
Devices, on page 128.
LUN
Logical Unit Number (LUN) within the SCSI target.
Type
Type of device, for example, disk or CD-ROM.
Drive Type
Information about whether the device is a solid-state drive (SSD) or a regular non-SSD
hard drive. For details, see Chapter 15, Solid State Disks Enablement, on page 141.
Transport
Transportation protocol your host uses to access the device.
Capacity
Total capacity of the storage device.
Owner
The plug-in, such as the NMP or a third-party plug-in, that the host uses to manage paths
to the storage device. For details, see Managing Multiple Paths, on page 156.
Hardware Acceleration
Information about whether the storage device assists the host with virtual machine
management operations. The status can be Supported, Not Supported, or Unknown. For
details, see Chapter 18, Storage Hardware Acceleration, on page 171.
Location
A path to the storage device in the /vmfs/devices/ directory.
Partition Format
A partition scheme used by the storage device. It could be of a master boot record (MBR)
or GUID partition table (GPT) format. The GPT devices can support datastores greater
than 2TB. For more information, see VMFS Datastores and Storage Disk Formats, on
page 113.
Partitions
Primary and logical partitions, including a VMFS datastore, if configured.
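Most of the fields in the preceding table can also be read from the command line. A sketch for ESXi 5.x; the device identifier shown is a placeholder:

# Detailed record for a single device, including display name, size, and drive type
esxcli storage core device list -d naa.60060160a0b12345c6d7e8f901234567
# Owning multipathing plug-in and path selection policy for each device
esxcli storage nmp device list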
Device Identifiers
Depending on the type of storage, the ESXi host uses different algorithms and conventions to generate an
identifier for each storage device.
SCSI INQUIRY identifiers
The host uses the SCSI INQUIRY command to query a storage device and uses
the resulting data, in particular the Page 83 information, to generate a unique
identifier. Device identifiers that are based on Page 83 are unique across all
hosts, persistent, and have one of the following formats:
naa.number
t10.number
eui.number
These formats follow the T10 committee standards. See the SCSI-3
documentation on the T10 committee Web site.
Path-based identifier.
When the device does not provide the Page 83 information, the host generates
an mpx.path name, where path represents the path to the device, for example,
mpx.vmhba1:C0:T1:L3. This identifier can be used in the same way as the SCSI
INQUIRY identifiers.
The mpx. identifier is created for local devices on the assumption that their path
names are unique. However, this identifier is neither unique nor persistent and
could change after every boot.
Legacy Identifier
In addition to the SCSI INQUIRY or mpx. identifiers, for each device, ESXi generates an alternative legacy
name. The identifier has the following format:
vml.number
The legacy identifier includes a series of digits that are unique to the device and can be derived in part from
the Page 83 information, if it is available. For nonlocal devices that do not support Page 83 information, the
vml. name is used as the only available unique identifier.
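A quick way to see the naa, eui, t10, mpx, and vml names that the host generated is to list the device nodes under /vmfs/devices/disks; the vml entries are typically symbolic links to the corresponding naa or mpx entries. A sketch from the ESXi Shell:

ls -l /vmfs/devices/disks/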
Runtime Name
In the vSphere Client, you can see the device identifier and a runtime name. The runtime name is generated
by the host and represents the name of the first path to the device. It is not a reliable identifier for the device,
and is not persistent.
Typically, the path to the device has the following format:
vmhbaAdapter:CChannel:TTarget:LLUN
vmhbaAdapter is the name of the storage adapter. The name refers to the physical adapter on the host, not
to the SCSI controller used by the virtual machines.
CChannel is the storage channel number.
TTarget is the target number. Target numbering is determined by the host and might change if the
mappings of targets visible to the host change. Targets that are shared by different hosts might not have
the same target number.
LLUN is the LUN number that shows the position of the LUN within the target. The LUN number is
provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).
For example, vmhba1:C0:T3:L1 represents LUN1 on target 3 accessed through the storage adapter vmhba1 and
channel 0.
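To see how runtime names map to persistent device identifiers, you can list the paths from the ESXi Shell. Each entry shows the runtime name together with the adapter, target, and device it leads to. A sketch for ESXi 5.x; the identifier is a placeholder:

# List every path known to the host
esxcli storage core path list
# Limit the output to the paths of a single device
esxcli storage core path list -d naa.60060160a0b12345c6d7e8f901234567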
Click Devices.
To view additional details about a specific device, select the device from the list.
Click Devices.
Displaying Datastores
Datastores are logical containers, analogous to file systems, that hide specifics of each storage device and
provide a uniform model for storing virtual machine files. You can display all datastores available to your
hosts and analyze their properties.
Datastores are added to the vSphere Client in the following ways:
Discovered when a host is added to the inventory. When you add a host to the inventory, the vSphere
Client displays any datastores available to the host.
If your vSphere Client is connected to a vCenter Server system, you can see datastore information in the
Datastores and Datastore Clusters view. This view displays all datastores and datastore clusters in the
inventory, arranged by a datacenter. Through this view, you can organize datastores into folder hierarchies,
create datastores, edit their properties, or remove existing datastores.
This view is comprehensive and shows all information for your datastores and clusters including hosts and
virtual machines using the datastores, storage reporting information, permissions, alarms, tasks and events,
storage topology, and storage reports.
NOTE The Datastores and Datastore Clusters view is not available when the vSphere Client connects directly
to your host. In this case, review datastore information through the host storage configuration tab.
The following table describes the datastore details that you can see when you review datastores.
Table 1-3. Datastore Information
Datastore Information
Description
Identification
Editable name that you assign to the datastore.
Device
Storage device on which the datastore is deployed.
Drive Type
Type of underlying storage device, a Solid State Drive (SSD) or a regular non-SSD hard
drive. For details, see Chapter 15, Solid State Disks Enablement, on page 141.
Capacity
Total formatted capacity of the datastore.
Free
Available space.
Type
File system that the datastore uses, either VMFS or NFS. For information about
datastores and how to upgrade to VMFS5, see Chapter 13, Working with
Datastores, on page 111.
Hardware Acceleration
Information about whether the underlying storage device supports hardware acceleration.
The status can be Supported, Not Supported, or Unknown. For details, see Chapter 18,
Storage Hardware Acceleration, on page 171.
Storage Capabilities
Storage capabilities that you define and associate with datastores. For information, see
Understanding Storage Capabilities, on page 194.
Path selection policy the host uses to access storage. For more information, see
Chapter 17, Understanding Multipathing and Failover, on page 151.
To display details for a particular datastore, select the datastore from the list.
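The same datastore information is available from the command line. A sketch for ESXi 5.x:

# List mounted VMFS and NFS volumes with their UUIDs, capacity, and free space
esxcli storage filesystem list
# Show the storage device extents that back each VMFS datastore
esxcli storage vmfs extent list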
How Virtual Machines Access Storage
Figure 1-6. Virtual machines accessing different types of storage
NOTE This diagram is for conceptual purposes only. It is not a recommended configuration.
You can use maps on the Storage Views tab to visually represent and understand the relationships between
virtual machines on your host and all available virtual and physical storage resources. For more information,
see the vSphere Monitoring and Performance documentation.
Table 1-4. Networked Storage that ESXi Supports
Technology     Protocols   Transfers                    Interface
Fibre Channel  FC/SCSI     Block access of data/LUN     FC HBA
FCoE           FCoE/SCSI   Block access of data/LUN     Converged network adapter (hardware FCoE) or NIC with FCoE support (software FCoE)
iSCSI          IP/SCSI     Block access of data/LUN     iSCSI HBA (hardware iSCSI) or network adapter (software iSCSI)
NAS            IP/NFS      File (no direct LUN access)  Network adapter
The following table compares the vSphere features that different types of storage support.
Table 1-5. vSphere Features Supported by Storage
Storage Type   Boot VM  vMotion  Datastore  RDM  VM Cluster  VMware HA and DRS  Storage APIs - Data Protection
Local Storage  Yes      No       VMFS       No   Yes         No                 Yes
Fibre Channel  Yes      Yes      VMFS       Yes  Yes         Yes                Yes
iSCSI          Yes      Yes      VMFS       Yes  No          Yes                Yes
NAS over NFS   Yes      Yes      NFS        No   No          Yes                Yes
NOTE Local storage supports a cluster of virtual machines on a single host (also known as a cluster in a box).
A shared virtual disk is required. For more information about this configuration, see the vSphere Resource
Management documentation.
Overview of Using ESXi with a SAN
Using ESXi with a SAN improves flexibility, efficiency, and reliability. Using ESXi with a SAN also supports
centralized management, failover, and load balancing technologies.
The following are benefits of using ESXi with a SAN:
You can store data securely and configure multiple paths to your storage, eliminating a single point of
failure.
Using a SAN with ESXi systems extends failure resistance to the server. When you use SAN storage, all
applications can instantly be restarted on another host after the failure of the original host.
You can perform live migration of virtual machines using VMware vMotion.
Use VMware High Availability (HA) in conjunction with a SAN to restart virtual machines in their last
known state on a different server if their host fails.
Use VMware Fault Tolerance (FT) to replicate protected virtual machines on two different hosts. Virtual
machines continue to function without interruption on the secondary host if the primary one fails.
Use VMware Distributed Resource Scheduler (DRS) to migrate virtual machines from one host to another
for load balancing. Because storage is on a shared SAN array, applications continue running seamlessly.
If you use VMware DRS clusters, put an ESXi host into maintenance mode to have the system migrate all
running virtual machines to other ESXi hosts. You can then perform upgrades or other maintenance
operations on the original host.
The portability and encapsulation of VMware virtual machines complements the shared nature of this storage.
When virtual machines are located on SAN-based storage, you can quickly shut down a virtual machine on
one server and power it up on another server, or suspend it on one server and resume operation on another
server on the same network. This ability allows you to migrate computing resources while maintaining
consistent shared access.
This chapter includes the following topics:
If you are working with multiple hosts, and each host is running multiple
virtual machines, the storage on the hosts is no longer sufficient and external
storage is required. Choose a SAN for external storage to provide a simpler
system architecture along with other benefits.
Load balancing
You can add a host to a DRS cluster, and the host's resources become part of
the cluster's resources. The distribution and usage of CPU and memory
resources for all hosts and virtual machines in the cluster are continuously
monitored. DRS compares these metrics to an ideal resource utilization. Ideal
utilization takes into account the attributes of the cluster's resource pools and
virtual machines, the current demand, and the imbalance target. It then
performs (or recommends) virtual machine migrations accordingly.
Disaster recovery
You can use VMware High Availability to configure multiple ESXi hosts as a
cluster to provide rapid recovery from outages and cost-effective high
availability for applications running in virtual machines.
Simplified array migrations and storage upgrades
When you purchase new storage systems or arrays, use Storage vMotion to
perform live automated migration of virtual machine disk files from existing
storage to their new destination without interruptions to the users of the virtual
machines.
You cannot directly access the virtual machine operating system that uses the storage. With traditional
tools, you can monitor only the VMware ESXi operating system. You use the vSphere Client to monitor
virtual machines.
The HBA visible to the SAN administration tools is part of the ESXi system, not part of the virtual machine.
Each LUN should have the correct RAID level and storage characteristic for the applications running in
virtual machines that use the LUN.
If multiple virtual machines access the same VMFS, use disk shares to prioritize virtual machines. See
Use Disk Shares to Prioritize Virtual Machines, on page 26.
You might want fewer, larger LUNs for the following reasons:
More flexibility to create virtual machines without asking the storage administrator for more space.
More flexibility for resizing virtual disks, doing snapshots, and so on.
You might want more, smaller LUNs for the following reasons:
More flexibility, as the multipathing policy and disk shares are set per LUN.
Use of Microsoft Cluster Service requires that each cluster disk resource is in its own LUN.
When the storage characterization for a virtual machine is not available, there is often no simple method to
determine the number and size of LUNs to provision. You can experiment using either a predictive or adaptive
scheme.
Create a VMFS datastore on each LUN, labeling each datastore according to its characteristics.
Create virtual disks to contain the data for virtual machine applications in the VMFS datastores created
on LUNs with the appropriate RAID level for the applications' requirements.
Provision a large LUN (RAID 1+0 or RAID 5), with write caching enabled.
If performance is acceptable, you can place additional virtual disks on the VMFS. If performance is not
acceptable, create a new, large LUN, possibly with a different RAID level, and repeat the process. Use migration
so that you do not lose virtual machine data when you recreate the LUN.
Select the virtual machine in the inventory panel and click Edit virtual machine settings from the menu.
Double-click the Shares column for the disk to modify and select the required value from the drop-down
menu.
Shares is a value that represents the relative metric for controlling disk bandwidth to all virtual machines.
The values Low, Normal, High, and Custom are compared to the sum of all shares of all virtual machines
on the host. Share allocation symbolic values can be used to configure their conversion into numeric values.
NOTE Disk shares are relevant only within a given ESXi host. The shares assigned to virtual machines on one
host have no effect on virtual machines on other hosts.
High Tier. Offers high performance and high availability. Might offer built-in snapshots to facilitate
backups and point-in-time (PiT) restorations. Supports replication, full SP redundancy, and SAS drives.
Uses high-cost spindles.
Mid Tier. Offers mid-range performance, lower availability, some SP redundancy, and SCSI or SAS drives.
May offer snapshots. Uses medium-cost spindles.
Lower Tier. Offers low performance, little internal storage redundancy. Uses low end SCSI drives or SATA
(serial low-cost spindles).
Not all applications need to be on the highest-performance, most-available storage, at least not throughout
their entire life cycle.
NOTE If you need some of the functionality of the high tier, such as snapshots, but do not want to pay for it,
you might be able to achieve some of the high-performance characteristics in software. For example, you can
create snapshots in software.
When you decide where to place a virtual machine, ask yourself these questions:
A virtual machine might change tiers throughout its life cycle because of changes in criticality or changes in
technology that push higher-tier features to a lower tier. Criticality is relative and might change for a variety
of reasons, including changes in the organization, operational processes, regulatory requirements, disaster
planning, and so on.
Layered Applications
SAN administrators customarily use specialized array-based software for backup, disaster recovery, data
mining, forensics, and configuration testing.
Storage providers typically supply two types of advanced services for their LUNs: snapshotting and
replication.
Snapshotting creates space-efficient copies of LUNs that share common blocks of data. In general,
snapshotting is used locally on the same storage systems as the primary LUN for quick backups,
application testing, forensics, or data mining.
Replication creates full copies of LUNs. Replicas are usually made to separate storage systems, possibly
at separate sites, to protect against major outages that incapacitate or destroy an entire array or site.
When you use an ESXi system in conjunction with a SAN, you must decide whether array-based or host-based
tools are more suitable for your particular situation.
Array-based solutions usually result in more comprehensive statistics. With RDMs, data always takes the
same path, which results in easier performance management.
Security is more transparent to the storage administrator when you use an RDM and an array-based
solution because with RDMs, virtual machines more closely resemble physical machines.
If you use an array-based solution, physical compatibility RDMs are often used for the storage of virtual
machines. If you do not intend to use RDMs, check the storage vendor documentation to see if operations
on LUNs with VMFS volumes are supported. If you use array operations on VMFS LUNs, carefully read
the section on resignaturing. A command-line sketch for creating an RDM mapping file appears after this list.
Using VMware tools and VMFS is better for provisioning. One large LUN is allocated and
multiple .vmdk files can be placed on that LUN. With an RDM, a new LUN is required for each virtual
machine.
ESXi administrators who use the file-based solution are more independent from the SAN administrator.
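As noted in the list above, an RDM maps a raw LUN into a datastore through a mapping file. If you create the mapping from the ESXi Shell rather than through the vSphere Client, the command looks roughly like the following sketch; the device identifier and the paths are placeholders, and -r instead of -z would create a virtual compatibility mapping:

vmkfstools -z /vmfs/devices/disks/naa.60060160a0b12345c6d7e8f901234567 /vmfs/volumes/datastore1/example_vm/example_vm_rdm.vmdk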
Storage array management, including LUN creation, array cache management, LUN mapping, and LUN
security.
If you decide to run the SAN management software on a virtual machine, you gain the benefits of running a
virtual machine, including failover using vMotion and VMware HA. Because of the additional level of
indirection, however, the management software might not be able to see the SAN. In this case, you can use an
RDM.
NOTE Whether a virtual machine can run management software successfully depends on the particular storage
system.
Identification of critical applications that require more frequent backup cycles within a given period of
time.
Recovery point and recovery time goals. Consider how precise your recovery point needs to be, and how
long you are willing to wait for it.
The rate of change (RoC) associated with the data. For example, if you are using
synchronous/asynchronous replication, the RoC affects the amount of bandwidth required between the
primary and secondary storage devices.
Overall impact on SAN environment, storage performance (while backing up), and other applications.
Identification of peak traffic periods on the SAN (backups scheduled during those peak periods can slow
the applications and the backup process).
Resource availability for archiving data; usually offline media access (tape).
Include a recovery-time objective for each application when you design your backup strategy. That is, consider
the time and resources necessary to perform a backup. For example, if a scheduled backup stores so much data
that recovery requires a considerable amount of time, examine the scheduled backup. Perform the backup
more frequently, so that less data is backed up at a time and the recovery time decreases.
If a particular application requires recovery within a certain time frame, the backup process needs to provide
a time schedule and specific data processing to meet this requirement. Fast recovery can require the use of
recovery volumes that reside on online storage to minimize or eliminate the need to access slow offline media
for missing data components.
Perform full, differential, and incremental image backup and restore of virtual machines.
Perform file-level backup of virtual machines that use supported Windows and Linux operating systems.
Ensure data consistency by using Microsoft Volume Shadow Copy Services (VSS) for virtual machines
that run supported Microsoft Windows operating systems.
Because the Storage APIs - Data Protection leverage the snapshot capabilities of VMFS, backups that you can
perform do not require downtime for virtual machines. These backups are nondisruptive, can be performed
at any time, and do not need extended backup windows.
For information about the Storage APIs - Data Protection and integration with backup products, see the
VMware Web site or contact your backup vendor.
Using ESXi with Fibre Channel SAN
When you set up ESXi hosts to use FC SAN storage arrays, special considerations are necessary. This section
provides introductory information about how to use ESXi with a FC SAN array.
This chapter includes the following topics:
World Wide Port Name (WWPN)
A globally unique identifier for a port that allows certain applications to access
the port. The FC switches discover the WWPN of a device or host and assign
a port address to the device.
Port_ID (or port address)
Within a SAN, each port has a unique port ID that serves as the FC address for
the port. This unique ID enables routing of data through the SAN to that port.
The FC switches assign the port ID when the device logs in to the fabric. The
port ID is valid only while the device is logged on.
When N-Port ID Virtualization (NPIV) is used, a single FC HBA port (N-port) can register with the fabric by
using several WWPNs. This method allows an N-port to claim multiple fabric addresses, each of which appears
as a unique entity. When ESXi hosts use a SAN, these multiple, unique identifiers allow the assignment of
WWNs to individual virtual machines as part of their configuration.
Active-active storage system
Allows access to the LUNs simultaneously through all the storage ports that
are available without significant performance degradation. All the paths are
active at all times, unless a path fails.
Active-passive storage
system
Asymmetrical storage
system
Can prevent non-ESXi systems from accessing a particular storage system, and from possibly destroying
VMFS data.
Can be used to separate different environments, for example, a test from a production environment.
With ESXi hosts, use single-initiator zoning or single-initiator-single-target zoning. The latter is a preferred
zoning practice. Using the more restrictive zoning prevents problems and misconfigurations that can occur on
the SAN.
For detailed instructions and best zoning practices, contact storage array or switch vendors.
When the guest operating system in a virtual machine reads or writes to a SCSI disk, it issues SCSI
commands to the virtual disk.
Device drivers in the virtual machine's operating system communicate with the virtual SCSI controllers.
Locates the file in the VMFS volume that corresponds to the guest virtual machine disk.
Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
Sends the modified I/O request from the device driver in the VMkernel to the physical HBA.
Depending on a port the HBA uses to connect to the fabric, one of the SAN switches receives the request
and routes it to the storage device that the host wants to access.
When you use ESXi systems with SAN storage, specific hardware and system requirements exist.
This chapter includes the following topics:
Make sure that the SAN storage hardware and firmware combinations you use are supported in
conjunction with ESXi systems. For an up-to-date list, see the vSphere Compatibility Guide.
Configure your system to have only one VMFS volume per LUN.
Unless you are using diskless servers, do not set up the diagnostic partition on a SAN LUN.
In the case of diskless servers that boot from a SAN, a shared diagnostic partition is appropriate.
Use RDMs to access raw disks. For information, see Chapter 14, Raw Device Mapping, on page 133.
For multipathing to work properly, each LUN must present the same LUN ID number to all ESXi hosts.
Make sure the storage device driver specifies a large enough queue. You can set the queue depth for the
physical HBA during system setup. For information on changing queue depth for HBAs and virtual
machines, see the vSphere Troubleshooting documentation.
On virtual machines running Microsoft Windows, increase the value of the SCSI TimeoutValue parameter
to 60. This increase allows Windows to better tolerate delayed I/O resulting from path failover. For
information, see Set Timeout on Windows Guest OS, on page 155. A sample command for setting this value appears after this list.
You cannot use virtual machine multipathing software to perform I/O load balancing to a single physical
LUN.
You cannot use multipathing software inside a virtual machine to perform I/O load balancing to a single
physical LUN. However, when your Microsoft Windows virtual machine uses dynamic disks, this
restriction does not apply. For information about configuring dynamic disks, see Set Up Dynamic Disk
Mirroring, on page 132.
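The SCSI TimeoutValue change mentioned in the list above is a Windows registry setting inside the guest. As a sketch, it can be applied from an elevated command prompt in the virtual machine; the guest usually must be rebooted for the change to take effect:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeoutValue /t REG_DWORD /d 60 /f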
Storage provisioning
To ensure that the ESXi system recognizes the LUNs at startup time, provision
all LUNs to the appropriate HBAs before you connect the SAN to the ESXi
system.
VMware recommends that you provision all LUNs to all ESXi HBAs at the same
time. HBA failover works only if all HBAs see the same LUNs.
For LUNs that will be shared among multiple hosts, make sure that LUN IDs
are consistent across all hosts. For example, LUN 5 should be mapped to host
1, host 2, and host 3 as LUN 5.
vMotion and VMware DRS
When you use vCenter Server and vMotion or DRS, make sure that the LUNs
for the virtual machines are provisioned to all ESXi hosts. This provides the
greatest ability to move virtual machines.
Active-active compared
to active-passive arrays
When you use vMotion or DRS with an active-passive SAN storage device,
make sure that all ESXi systems have consistent paths to all storage processors.
Not doing so can cause path thrashing when a vMotion migration occurs.
For active-passive storage arrays not listed in Storage/SAN Compatibility,
VMware does not support storage port failover. In those cases, you must
connect the server to the active port on the storage array. This configuration
ensures that the LUNs are presented to the ESXi host.
Do not mix FC HBAs from different vendors in a single host. Having different models of the same HBA
is supported, but a single LUN cannot be accessed through two different HBA types, only through the
same type.
Set the timeout value for detecting a failover. To ensure optimal performance, do not change the default
value.
Design your SAN if it is not already configured. Most existing SANs require only minor modification to
work with ESXi.
Set up the HBAs for the hosts you have connected to the SAN.
(Optional) Set up your system for VMware HA failover or for using Microsoft Clustering Services.
On the ports that communicate with your ESXi host, disable the Spanning Tree Protocol (STP). Having
the STP enabled might delay the FCoE Initialization Protocol (FIP) response at the switch and cause an all
paths down (APD) condition.
The FIP is a protocol that FCoE uses to discover and initialize FCoE entities on the Ethernet.
VMware recommends that you use the following firmware on the FCoE switch:
Cisco Nexus 5000: version 4.1(3)N2 or higher.
Make sure that the latest microcode is installed on the FCoE network adapter.
If the network adapter has multiple ports, when configuring networking, add each port to a separate
vSwitch. This practice helps you to avoid an APD condition when a disruptive event, such as an MTU
change, occurs.
Do not move a network adapter port from one vSwitch to another when FCoE traffic is active. If you need
to make this change, reboot your host afterwards.
If you changed the vSwitch for a network adapter port and caused a failure, moving the port back to the
original vSwitch resolves the problem.
Log in to the vSphere Client, and select a host from the inventory panel.
Select Create a vSphere standard switch to create a new vSphere standard switch.
Select the network adapter (vmnic#) that supports FCoE and click Next.
If your host has multiple network adapters or multiple ports on the adapter, you can add all of them to a
single vSphere standard switch. An alternative is to connect each FCoE NIC to a separate standard switch.
NOTE ESXi 5.0 supports a maximum of four network adapter ports used for software FCoE.
You have created the virtual VMkernel adapter for the physical FCoE network adapter installed on your host.
NOTE To avoid FCoE traffic disruptions, do not remove the FCoE network adapter (vmnic#) from the vSphere
standard switch after you set up FCoE networking.
Log in to the vSphere Client, and select a host from the inventory panel.
Click the Configuration tab and click Storage Adapters in the Hardware panel.
On the Add Software FCoE Adapter dialog box, select an appropriate vmnic from the drop-down list of
physical network adapters.
Only those adapters that are not yet used for FCoE traffic are listed.
Click OK.
The software FCoE adapter appears in the list of storage adapters.
After you activate the software FCoE adapter, you can view its properties. If you do not use the adapter, you
can remove it from the list of adapters.
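Software FCoE can also be inspected and activated from the command line. The following ESXi 5.x sketch assumes a hypothetical vmnic2 that supports FCoE offload:

# List physical network adapters that are capable of software FCoE
esxcli fcoe nic list
# Activate FCoE discovery on a specific adapter
esxcli fcoe nic discover -n vmnic2
# List the resulting software FCoE adapters
esxcli fcoe adapter list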
N-Port ID Virtualization
N-Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre Channel HBA port
can register with the fabric using several worldwide port names (WWPNs). This allows a fabric-attached N-port to claim multiple fabric addresses. Each address appears as a unique entity on the Fibre Channel fabric.
If NPIV is enabled, WWN pairs (WWPN & WWNN) are specified for each virtual machine at creation time.
When a virtual machine using NPIV is powered on, it uses each of these WWN pairs in sequence to try to
discover an access path to the storage. The number of VPORTs that are instantiated equals the number of
physical HBAs present on the host. A VPORT is created on each physical HBA that a physical path is found
on. Each physical path is used to determine the virtual path that will be used to access the LUN. Note that
HBAs that are not NPIV-aware are skipped in this discovery process because VPORTs cannot be instantiated
on them.
NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks
use the WWNs of the host's physical HBAs.
Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous
HBAs on the same host accessing the same LUNs.
If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual
machine. This is required to support multipathing even though only one path at a time will be active.
Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.
When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and
NPIV target ID match the physical LUN and Target ID.
NPIV supports vMotion. When you use vMotion to migrate a virtual machine it retains the assigned
WWN.
If you migrate an NPIV-enabled virtual machine to a host that does not support NPIV, VMkernel reverts
to using a physical HBA to route the I/O.
If your FC SAN environment supports concurrent I/O on the disks from an active-active array, the
concurrent I/O to two different NPIV ports is also supported.
When you use ESXi with NPIV, the following limitations apply:
Because the NPIV technology is an extension to the FC protocol, it requires an FC switch and does not
work on the direct attached FC disks.
When you clone a virtual machine or template with a WWN assigned to it, the clones do not retain the
WWN.
Disabling and then re-enabling the NPIV capability on an FC switch while virtual machines are running
can cause an FC link to fail and I/O to stop.
On the Select a Disk page, select Raw Device Mapping, and click Next.
From a list of SAN disks or LUNs, select a raw LUN you want your virtual machine to access directly.
Follow the steps required to create a virtual machine with the RDM.
On the Ready to Complete page, select the Edit the virtual machine settings before completion check
box and click Continue.
The Virtual Machine Properties dialog box opens.
Click Finish.
Prerequisites
Make sure to power off the virtual machine if you want to edit the existing WWNs.
Before you begin, ensure that your SAN administrator has provisioned the storage LUN ACL to allow the
virtual machine's ESXi host to access it.
Procedure
1
Open the Virtual Machine Properties dialog box by clicking the Edit Settings link for the selected virtual
machine.
Option
Description
Leave unchanged
The existing WWN assignments are retained.
Generate new WWNs
New WWNs are generated and assigned to the virtual machine, overwriting
any existing WWNs (those of the HBA itself are unaffected).
Remove WWN assignment
The WWNs assigned to the virtual machine are removed and it uses the HBA
WWNs to access the storage LUN. This option is not available if you are
creating a new virtual machine.
This section discusses many of the storage devices supported in conjunction with VMware ESXi. For each
device, it lists the major known potential issues, points to vendor-specific information (if available), and
includes information from VMware knowledge base articles.
NOTE Information related to specific storage devices is updated only with each release. New information
might already be available. Consult the most recent Storage/SAN Compatibility, check with your storage array
vendor, and explore the VMware knowledge base articles.
This chapter includes the following topics:
Basic connectivity
Tests whether ESXi can recognize and operate with the storage array. This
configuration does not allow for multipathing or any type of failover.
HBA failover
The server is equipped with multiple HBAs connecting to one or more SAN
switches. The server is robust to HBA and switch failure only.
Storage port failover
The server is attached to multiple storage ports and is robust to storage port
failures and switch failures.
Boot from SAN
The host boots from a LUN configured on the SAN rather than from the server
itself.
Direct connect
The server connects to the array without using switches. For all other tests, a
fabric connection is used. FC Arbitrated Loop (AL) is not supported.
Clustering
The system is tested with Microsoft Cluster Service running in the virtual
machine.
LUNs must be presented to each HBA of each host with the same LUN ID number.
Because instructions on how to configure identical SAN LUN IDs are vendor specific, consult your storage
array documentation for more information.
Unless specified for individual storage arrays, set the host type for LUNs presented to ESXi to Linux, Linux
Cluster, or, if available, to vmware or esx.
If you are using vMotion, DRS, or HA, make sure that both source and target hosts for virtual machines
can see the same LUNs with identical LUN IDs.
SAN administrators might find it counterintuitive to have multiple hosts see the same LUNs because they
might be concerned about data corruption. However, VMFS prevents multiple virtual machines from
writing to the same file at the same time, so provisioning the LUNs to all required ESXi systems is
appropriate.
Registering the servers connected to the SAN. By default, the host automatically performs this step.
Use the EMC storage management software to perform configuration. For information, see the EMC
documentation.
ESXi automatically sends the host's name and IP address to the array and registers the host with the array. You
are no longer required to perform host registration manually. However, if you prefer to use storage
management software, such as EMC Navisphere, to perform manual registration, turn off the ESXi autoregistration. Turning it off helps you avoid overwriting the manual user registration. For information, see
Disable Automatic Host Registration, on page 58.
Because this array is an active-passive disk array, the following general considerations apply.
The default multipathing policy for CLARiiON arrays that do not support ALUA is Most Recently Used.
For CLARiiON arrays that support ALUA, the default multipathing policy is VMW_PSP_FIXED. The
ESXi system sets the default policy when it identifies the array.
VMware, Inc.
To use boot from SAN, make sure that the active SP is chosen for the boot LUN's target in the HBA BIOS.
IMPORTANT For ESXi to support EMC CLARiiON with ALUA, check the HCLs to make sure that you use the
correct firmware version on the storage array. For additional information, contact your storage vendor.
The ESXi host considers any LUNs from a Symmetrix storage array with a capacity of 50MB or less as
management LUNs. These LUNs are also known as pseudo or gatekeeper LUNs. These LUNs appear in the
EMC Symmetrix Management Interface and should not be used to hold data.
Configuring the Hardware for SAN Failover with DS4800 Storage Servers
This topic provides information on how to set up a highly available SAN failover configuration with an
ESXi host and DS4800 storage.
You must have the following hardware components:
Two FC switches connecting the HBAs to the SAN (for example, FC switch 1 and FC switch 2).
Use the following connection settings for the ESXi host, as shown in Figure 5-1 (SAN Failover):
Connect each HBA on each ESXi machine to a separate switch. For example, connect HBA1 to FC switch
1 and HBA2 to FC switch 2.
Figure 5-1. SAN Failover
This configuration provides two paths from each HBA, so that each element of the connection can fail over to
a redundant path. The order of the paths in this configuration provides HBA and switch failover without the
need to trigger SP failover. The storage processor that the preferred paths are connected to must own the LUNs.
In the preceding example configuration, SP1 owns them.
NOTE The preceding example assumes that the switches are not connected through an Inter-Switch Link (ISL)
in one fabric.
Determine the index for the LNXCL host type by using the following commands in a shell window.
Press Enter after each command.
SMcli.exe ip-addr-for-SPA show hosttopology
SMcli.exe ip-addr-for-SPB show hosttopology
The following commands assume that 13 is the index corresponding to LNXCL in the NVSRAM host type
definitions. If your storage processors have LNXCL at a different index, substitute that index for 13 in the
following commands.
Execute these commands for SPA to have it return Not Ready sense data. Press Enter only after you enter
all commands.
SMcli.exe ip-addr-for-SPA set controller [a] HostNVSRAMBYTE [13,0x12]=0x01; set controller [a] HostNVSRAMBYTE [13,0x13]=0x00; reset Controller [a]
Execute these commands for SPB to have it return Not Ready sense data. Press Enter only after you enter
all commands.
SMcli.exe ip-addr-for-SPB set controller [b] HostNVSRAMBYTE [13,0x12]=0x01; set controller [b] HostNVSRAMBYTE [13,0x13]=0x00; reset Controller [b]
NOTE If you use the DS4800 Storage Manager GUI, paste the configuration commands for both storage
processors into a single script and configure both storage processors at the same time. If you use
SMcli.exe, make individual connections to each SP.
To use RDMs successfully, a given LUN must be presented with the same LUN ID to every ESXi host in
the cluster.
In the ESS800 Configuration Management tool, select Use same ID for LUN in source and target.
If you are configuring the host to use boot from SAN from these arrays, disable the internal fibre port for
the corresponding blade until installation is finished.
HP StorageWorks EVA
To use an HP StorageWorks EVA system with ESXi, you must configure the correct host mode type.
Set the connection type to Custom when you present a LUN to the host. The value is one of the following:
For EVA4000/6000/8000 active-active arrays with firmware below 5.031, use the host mode type
000000202200083E.
For EVA4000/6000/8000 active-active arrays with firmware 5.031 and above, use the host mode type
VMware.
Otherwise, EVA systems do not require special configuration changes to work with an ESXi system.
See the VMware Infrastructure, HP StorageWorks Best Practices at the HP Web site.
HP StorageWorks XP
For HP StorageWorks XP, you need to set the host mode to specific parameters.
To mask LUNs on an ESXi host, use the HDS Storage Navigator software for
best results.
Microcode and configurations
Check with your HDS representative for exact configurations and microcode
levels needed for interoperability with ESXi. If your microcode is not
supported, interaction with ESXi is usually not possible.
Modes
The modes you set depend on the model you are using, for example:
Check with your HDS representative for host mode settings for the models not
listed here.
Booting ESXi from Fibre Channel SAN
When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the
SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk.
ESXi supports booting through a Fibre Channel host bus adapter (HBA) or a Fibre Channel over Ethernet
(FCoE) converged network adapter (CNA).
This chapter includes the following topics:
Cheaper servers. Servers can be more dense and run cooler without internal storage.
Easier server replacement. You can replace servers and have the new server point to the old boot location.
Less wasted space. Servers without local disks often take up less space.
Easier backup processes. You can back up the system boot images in the SAN as part of the overall SAN
backup procedures. Also, you can use advanced array features such as snapshots on the boot image.
Improved management. Creating and managing the operating system image is easier and more efficient.
Better reliability. You can access the boot disk through multiple paths, which protects the disk from being
a single point of failure.
Requirement
Description
ESXi system requirements
Follow vendor recommendation for the server booting from a SAN.
Adapter requirements
Enable and correctly configure the adapter, so it can access the boot LUN. See your vendor
documentation.
Access control
Each host must have access to its own boot LUN only, not the boot LUNs of other hosts. Use
storage system software to make sure that the host accesses only the designated LUNs.
Multiple servers can share a diagnostic partition. You can use array specific LUN masking to
achieve this.
Multipathing support
Multipathing to a boot LUN on active-passive arrays is not supported because the BIOS does not
support multipathing and is unable to activate a standby path.
SAN considerations
SAN connections must be through a switched topology if the array is not certified for direct connect
topology. If the array is certified for direct connect topology, the SAN connections can be made
directly to the array. Boot from SAN is supported for both switched topology and direct connect
topology if these topologies for the specific array are certified.
Hardware-specific considerations
If you are running an IBM eServer BladeCenter and use boot from SAN, you must disable IDE drives
on the blades.
Connect network cable, referring to any cabling guide that applies to your setup.
Check the switch wiring, if there is any.
From the SAN storage array, make the ESXi host visible to the SAN. This process is often referred to
as creating an object.
From the SAN storage array, set up the host to have the WWPNs of the host's adapters as port names
or node names.
Create LUNs.
Assign LUNs.
CAUTION If you use scripted installation to install ESXi in boot from SAN mode, you need to take special
steps to avoid unintended data loss.
During your system power up, enter the system BIOS Configuration/Setup Utility.
Run lputil.
Select an adapter.
To configure the adapter parameters, press ALT+E at the Emulex prompt and follow these steps.
To configure the boot device, follow these steps from the Emulex main menu.
Boot into the system BIOS and move Emulex first in the boot controller sequence.
While booting the server, press Ctrl+Q to enter the Fast!UTIL configuration utility.
Option
Description
One HBA
If you have only one host bus adapter (HBA), the Fast!UTIL Options page
appears. Skip to Step 3.
Multiple HBAs
If you have more than one HBA, select the HBA manually.
a In the Select Host Adapter page, use the arrow keys to position the cursor
on the appropriate HBA.
b Press Enter.
In the Fast!UTIL Options page, select Configuration Settings and press Enter.
In the Configuration Settings page, select Adapter Settings and press Enter.
Use the cursor keys to select the Boot Port Name entry in the list of storage processors (SPs) and press
Enter to open the Select Fibre Channel Device screen.
Use the cursor keys to select the specific SP and press Enter.
If you are using an active-passive storage array, the selected SP must be on the preferred (active) path to
the boot LUN. If you are not sure which SP is on the active path, use your storage array management
software to find out. The target IDs are created by the BIOS and might change with each reboot.
Perform the appropriate action depending on the number of LUNs attached to the SP.
Option
Description
One LUN
The LUN is selected as the boot LUN. You do not need to enter the Select
LUN screen.
Multiple LUNs
Select LUN screen opens. Use the cursor to select the boot LUN, then press
Enter.
If any remaining storage processors show in the list, press C to clear the data.
Press Esc twice to exit and press Enter to save the setting.
Best Practices for Fibre Channel Storage
When using ESXi with Fibre Channel SAN, follow best practices that VMware offers to avoid performance
problems.
The vSphere Client offers extensive facilities for collecting performance information. The information is
graphically displayed in the vSphere Client. The vSphere Client updates its display periodically.
You can also use the resxtop or esxtop command-line utilities. The utilities provide a detailed look at how
ESXi uses resources in real time. For more information, see the vSphere Resource Management documentation.
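For example, in a local or remote ESXi Shell session you can watch storage latency and throughput interactively. The keystrokes below switch esxtop between its disk-related views:

# Start the interactive resource monitor
esxtop
# Inside esxtop: press d for the disk adapter view, u for the disk device view,
# and v for the virtual machine disk view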
Check with your storage representative if your storage system supports Storage API - Array Integration
hardware acceleration features. If it does, refer to your vendor documentation for information on how to enable
hardware acceleration support on the storage system side. For more information, see Chapter 18, Storage
Hardware Acceleration, on page 171.
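You can also check the hardware acceleration status of individual devices from the command line. A sketch for ESXi 5.x; the device identifier is a placeholder:

# Report VAAI (hardware acceleration) primitive status for all devices
esxcli storage core device vaai status get
# Report the status for a single device
esxcli storage core device vaai status get -d naa.60060160a0b12345c6d7e8f901234567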
This chapter includes the following topics:
Do not change the path policy the system sets for you unless you understand the implications of making
such a change.
Document everything. Include information about zoning, access control, storage, switch, server and FC
HBA configuration, software and firmware versions, and storage cable plan.
Make several copies of your topology maps. For each element, consider what happens to your SAN
if the element fails.
Cross off different links, switches, HBAs and other elements to ensure you did not miss a critical
failure point in your design.
Ensure that the Fibre Channel HBAs are installed in the correct slots in the host, based on slot and bus
speed. Balance PCI bus load among the available busses in the server.
Become familiar with the various monitor points in your storage network, at all visibility points, including
host's performance charts, FC switch statistics, and storage performance statistics.
Be cautious when changing IDs of the LUNs that have VMFS datastores being used by your ESXi host. If
you change the ID, the datastore becomes inactive and its virtual machines fail. You can resignature the
datastore to make it active again. See Managing Duplicate VMFS Datastores, on page 120.
If there are no running virtual machines on the VMFS datastore, after you change the ID of the LUN, you
must use rescan to reset the ID on your host. For information on using rescan, see Perform Storage
Rescan, on page 122.
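A rescan can be triggered from the vSphere Client or, as a sketch for ESXi 5.x, from the command line:

# Rescan all storage adapters for new devices and VMFS volumes
esxcli storage core adapter rescan --all
# Rescan a single adapter only (the adapter name is an example)
esxcli storage core adapter rescan --adapter=vmhba1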
Disable Automatic Host Registration
Click the Configuration tab and click Advanced Settings under Software.
Click Disk in the left panel and scroll down to Disk.EnableNaviReg on the right.
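The same setting can be changed from the command line. A sketch for ESXi 5.x; set the value back to 1 to re-enable automatic registration:

# Disable automatic host registration
esxcli system settings advanced set -o /Disk/EnableNaviReg -i 0
# Verify the current value
esxcli system settings advanced list -o /Disk/EnableNaviReg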
When assigning LUNs, remember that each LUN is accessed by a number of hosts, and that a number of
virtual machines can run on each host. One LUN used by a host can service I/O from many different
applications running on different operating systems. Because of this diverse workload, the RAID group
containing the ESXi LUNs should not include LUNs used by other servers that are not running ESXi.
SAN storage arrays require continual redesign and tuning to ensure that I/O is load balanced across all
storage array paths. To meet this requirement, distribute the paths to the LUNs among all the SPs to
provide optimal load balancing. Close monitoring indicates when it is necessary to rebalance the LUN
distribution.
Tuning statically balanced storage arrays is a matter of monitoring the specific performance statistics (such
as I/O operations per second, blocks per second, and response time) and distributing the LUN workload
to spread the workload across all the SPs.
NOTE Dynamic load balancing is not currently supported with ESXi.
Because each application has different requirements, you can meet these goals by choosing an appropriate
RAID group on the storage array. To achieve performance goals:
Place each LUN on a RAID group that provides the necessary performance levels. Pay attention to the
activities and resource utilization of other LUNS in the assigned RAID group. A high-performance RAID
group that has too many applications doing I/O to it might not meet performance goals required by an
application running on the ESXi host.
Make sure that each server has a sufficient number of HBAs to allow maximum throughput for all the
applications hosted on the server for the peak period. I/O spread across multiple HBAs provides higher throughput and less latency for each application.
To provide redundancy in the event of HBA failure, make sure the server is connected to a dual redundant
fabric.
When allocating LUNs or RAID groups for ESXi systems, multiple operating systems use and share that
resource. As a result, the performance required from each LUN in the storage subsystem can be much
higher if you are working with ESXi systems than if you are using physical machines. For example, if you
expect to run four I/O intensive applications, allocate four times the performance capacity for the ESXi
LUNs.
When using multiple ESXi systems in conjunction with vCenter Server, the performance needed from the
storage subsystem increases correspondingly.
The number of outstanding I/Os needed by applications running on an ESXi system should match the
number of I/Os the HBA and storage array can handle.
Topology: No single failure should cause both HBA and SP failover, especially with active-passive storage arrays.
HDS 99xx family (Lightning) and HDS Tagma (USP) require host mode set to Netware.
EMC Symmetrix: Enable the SPC-2 and SC-3 settings. Contact EMC for the latest settings.
EMC Clariion: Set the EMC Clariion failover mode to 1 or 4. Contact EMC for details.
HP MSA
HP EVA: For EVA4000/6000/8000 firmware 5.031 and above, set the host type to VMware. Otherwise, set the host mode type to Custom. The value is: 000000202200083E.
HP XP
NetApp: No specific requirements.
ESXi Configuration:
For all LUNs hosting clustered disks on active-passive arrays, use the Most Recently Used PSP.
For LUNs on active-active arrays, you can use the Most Recently Used or Fixed PSP.
With either active-passive or active-active arrays, you can use the Round Robin PSP.
All FC HBAs must be of the same model.
Set the following Software Advanced Settings for the host: Disk.UseLunReset to 1 (see the command sketch after this list).
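As a hedged sketch, the Disk.UseLunReset setting above can also be applied from the command line:
esxcli system settings advanced set --option /Disk/UseLunReset --int-value 1
esxcli system settings advanced list --option /Disk/UseLunReset   # verify the current value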
You can use ESXi in conjunction with a storage area network (SAN), a specialized high-speed network that
connects computer systems to high-performance storage subsystems. Using ESXi together with a SAN provides
storage consolidation, improves reliability, and helps with disaster recovery.
To use ESXi effectively with a SAN, you must have a working knowledge of ESXi systems and SAN concepts.
Also, when you set up ESXi hosts to use Internet SCSI (iSCSI) SAN storage systems, you must be aware of
certain special considerations that exist.
This chapter includes the following topics:
iSCSI Multipathing
When transferring data between the host server and storage, the SAN uses a technique known as multipathing.
Multipathing allows you to have more than one physical path from the ESXi host to a LUN on a storage system.
Generally, a single path from a host to a LUN consists of an iSCSI adapter or NIC, switch ports, connecting
cables, and the storage controller port. If any component of the path fails, the host selects another available
path for I/O. The process of detecting a failed path and switching to another is called path failover.
For more information on multipathing, see Chapter 17, Understanding Multipathing and Failover, on
page 151.
IP Address
Each iSCSI node can have an IP address associated with it so that routing and
switching equipment on your network can establish the connection between
the server and storage. This address is just like the IP address that you assign
to your computer to get access to your company's network or the Internet.
iSCSI Name
A worldwide unique name for identifying the node. iSCSI uses the iSCSI
Qualified Name (IQN) and Extended Unique Identifier (EUI).
By default, ESXi generates unique iSCSI names for your iSCSI initiators, for
example, iqn.1998-01.com.vmware:iscsitestox-68158ef2. Usually, you do not
have to change the default value, but if you do, make sure that the new iSCSI
name you enter is worldwide unique.
iSCSI Alias
A more manageable name for an iSCSI device or port used instead of the iSCSI
name. iSCSI aliases are not unique and are intended to be just a friendly name
to associate with a port.
The iSCSI qualified name takes the form iqn.yyyy-mm.naming-authority:unique name, where:
yyyy-mm is the year and month when the naming authority was established.
naming-authority is usually reverse syntax of the Internet domain name of the naming authority. For
example, the iscsi.vmware.com naming authority could have the iSCSI qualified name form of
iqn.1998-01.com.vmware.iscsi. The name indicates that the vmware.com domain name was registered in
January of 1998, and iscsi is a subdomain, maintained by vmware.com.
unique name is any name you want to use, for example, the name of your host. The naming authority must
make sure that any names assigned following the colon are unique, such as:
iqn.1998-01.com.vmware.iscsi:name1
iqn.1998-01.com.vmware.iscsi:name2
iqn.1998-01.com.vmware.iscsi:name999
The enterprise unique identifier takes the form eui. followed by 16 hexadecimal digits. The 16-hexadecimal
digits are text representations of a 64-bit number of an IEEE EUI (extended unique identifier) format. The top
24 bits are a company ID that IEEE registers with a particular company. The lower 40 bits are assigned by the
entity holding that company ID and must be unique.
iSCSI Initiators
To access iSCSI targets, your host uses iSCSI initiators. The initiators transport SCSI requests and responses,
encapsulated into the iSCSI protocol, between the host and the iSCSI target.
Your host supports different types of initiators.
For information on configuring and using iSCSI adapters, see Chapter 9, Configuring iSCSI Adapters and
Storage, on page 67.
Independent Hardware iSCSI Adapter
Hardware iSCSI adapters might need to be licensed. Otherwise, they will not appear in the vSphere Client or
vSphere CLI. Contact your vendor for licensing information.
Figure: one storage array presenting a single target with three LUNs, compared with a storage array presenting three targets with one LUN each.
Three LUNs are available in each of these configurations. In the first case, the host detects one target but that
target has three LUNs that can be used. Each of the LUNs represents an individual storage volume. In the second
case, the host detects three different targets, each having one LUN.
Host-based iSCSI initiators establish connections to each target. Storage systems with a single target containing
multiple LUNs have traffic to all the LUNs on a single connection. With a system that has three targets with
one LUN each, a host uses separate connections to the three LUNs. This information is useful when you are
trying to aggregate storage traffic on multiple connections from the host with multiple iSCSI HBAs, where
traffic for one target can be set to a particular HBA, while traffic for another target can use a different HBA.
Active-active storage system: Allows access to the LUNs simultaneously through all the storage ports that
are available without significant performance degradation. All the paths are active at all times, unless a path fails.
Active-passive storage system
Asymmetrical storage system
Virtual port storage system: Allows access to all available LUNs through a single virtual port. These are
active-active storage devices, but hide their multiple connections through a single port. ESXi multipathing
does not make multiple connections from a specific port to the storage by default. Some storage vendors
supply session managers to establish and manage multiple connections to their storage. These storage
systems handle port failover and connection balancing transparently. This is often referred to as transparent failover.
Discovery
A discovery session is part of the iSCSI protocol, and it returns the set of targets you can access on an iSCSI
storage system. The two types of discovery available on ESXi are dynamic and static. Dynamic discovery
obtains a list of accessible targets from the iSCSI storage system, while static discovery can only try to access
one particular target by target name and address.
For more information, see Configuring Discovery Addresses for iSCSI Adapters, on page 81.
Authentication
iSCSI storage systems authenticate an initiator by a name and key pair. ESXi supports the CHAP protocol,
which VMware recommends for your SAN implementation. To use CHAP authentication, the ESXi host and
the iSCSI storage system must have CHAP enabled and have common credentials.
For information on enabling CHAP, see Configuring CHAP Parameters for iSCSI Adapters, on page 82.
Access Control
Access control is a policy set up on the iSCSI storage system. Most implementations support one or more of
three types of access control:
By initiator name
By IP address
By the CHAP protocol
Only initiators that meet all rules can access the iSCSI volume.
Using only CHAP for access control can slow down rescans because the ESXi host can discover all targets, but
then fails at the authentication step. iSCSI rescans work faster if the host discovers only the targets it can
authenticate.
Error Correction
To protect the integrity of iSCSI headers and data, the iSCSI protocol defines error correction methods known
as header digests and data digests.
Both parameters are disabled by default, but you can enable them. These digests pertain to, respectively, the
header and SCSI data being transferred between iSCSI initiators and targets, in both directions.
Header and data digests check the end-to-end, noncryptographic data integrity beyond the integrity checks
that other networking layers provide, such as TCP and Ethernet. They check the entire communication path,
including all elements that can change the network-level traffic, such as routers, switches, and proxies.
The existence and type of the digests are negotiated when an iSCSI connection is established. When the initiator
and target agree on a digest configuration, this digest must be used for all traffic between them.
Enabling header and data digests requires additional processing for both the initiator and the target and
can affect throughput and CPU usage.
NOTE Systems that use Intel Nehalem processors offload the iSCSI digest calculations, thus reducing the impact
on performance.
For information on enabling header and data digests, see Configuring Advanced Parameters for iSCSI, on
page 86.
When the guest operating system in a virtual machine reads or writes to a SCSI disk, it issues SCSI
commands to the virtual disk.
Device drivers in the virtual machine's operating system communicate with the virtual SCSI controllers.
The VMkernel then performs the following tasks:
Locates the file, which corresponds to the guest virtual machine disk, in the VMFS volume.
Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
Sends the modified I/O request from the device driver in the VMkernel to the iSCSI initiator (hardware
or software).
If the iSCSI initiator is a hardware iSCSI adapter (independent or dependent), the adapter performs
the following tasks.
If the iSCSI initiator is a software iSCSI adapter, the following takes place.
The physical NIC sends IP packets over Ethernet to the iSCSI storage system.
Depending on which port the iSCSI initiator uses to connect to the network, Ethernet switches and routers
carry the request to the storage device that the host wants to access.
Before ESXi can work with a SAN, you must set up your iSCSI adapters and storage.
To do this, you must first observe certain basic requirements and then follow best practices for installing and
setting up hardware or software iSCSI adapters to access the SAN.
The following table lists the iSCSI adapters (vmhbas) that ESXi supports and indicates whether VMkernel
networking configuration is required.
Table 9-1. Supported iSCSI adapters
Software iSCSI: VMkernel networking required.
Independent Hardware iSCSI: VMkernel networking not required.
Dependent Hardware iSCSI: VMkernel networking required.
After you set up the iSCSI adapters, you can create a datastore on iSCSI storage. For details on how to create
and manage datastores, see Create a VMFS Datastore, on page 116.
This chapter includes the following topics:
Verify that your SAN storage hardware and firmware combinations are supported in conjunction with
ESXi systems. For an up-to-date list, see vSphere Compatibility Guide.
Configure your system to have only one VMFS datastore for each LUN.
Unless you are using diskless servers, set up a diagnostic partition on local storage. If you have diskless
servers that boot from iSCSI SAN, see General Boot from iSCSI SAN Recommendations, on page 97
for information about diagnostic partitions with iSCSI.
Use RDMs for access to any raw disk. For information, see Chapter 14, Raw Device Mapping, on
page 133.
Set the SCSI controller driver in the guest operating system to a large enough queue. For information on
changing queue depth for iSCSI adapters and virtual machines, see vSphere Troubleshooting.
On virtual machines running Microsoft Windows, increase the value of the SCSI TimeoutValue parameter
to allow Windows to better tolerate delayed I/O resulting from path failover. For information, see Set
Timeout on Windows Guest OS, on page 155.
You cannot use virtual-machine multipathing software to perform I/O load balancing to a single physical
LUN.
ESXi does not support multipathing when you combine independent hardware adapters with either
software or dependent hardware adapters.
Storage Provisioning. To ensure that the host recognizes LUNs at startup time, configure all iSCSI storage
targets so that your host can access them and use them. Also, configure your host so that it can discover
all available iSCSI targets.
vMotion and VMware DRS. When you use vCenter Server and vMotion or DRS, make sure that the LUNs
for the virtual machines are provisioned to all hosts. This configuration provides the greatest freedom in
moving virtual machines.
Active-active versus active-passive arrays. When you use vMotion or DRS with an active-passive SAN
storage device, make sure that all hosts have consistent paths to all storage processors. Not doing so can
cause path thrashing when a vMotion migration occurs.
For active-passive storage arrays not listed in Storage/SAN Compatibility, VMware does not support
storage-port failover. You must connect the server to the active port on the storage system. This
configuration ensures that the LUNs are presented to the host.
For software iSCSI and dependent hardware iSCSI, networking for the VMkernel must be configured. You
can verify the network configuration by using the vmkping utility. For independent hardware iSCSI,
network parameters, such as IP address, subnet mask, and default gateway must be configured on the
HBA.
The dynamic discovery address or static discovery address and target name of the storage system must
be set. For software iSCSI and dependent hardware iSCSI, the address should be pingable using
vmkping.
For CHAP authentication, enable it on the initiator and the storage system side. After authentication is
enabled, it applies for all of the targets that are not yet discovered, but does not apply to targets that are
already discovered. After the discovery address is set, the new targets discovered are exposed and can be
used at that point.
For details on how to use the vmkping command, search the VMware Knowledge Base.
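For example, a basic VMkernel connectivity check might look like the following; the portal address is a placeholder, and the jumbo frame variant assumes the -s (size) and -d (do not fragment) options:
vmkping 10.10.10.100              # verify that the iSCSI portal is reachable from the VMkernel
vmkping -d -s 8972 10.10.10.100   # verify an end-to-end 9000-byte MTU path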
Procedure
1
Log in to the vSphere Client, and select a host from the inventory panel.
Click the Configuration tab and click Storage Adapters in the Hardware panel.
If it is installed, the hardware iSCSI adapter appears on the list of storage adapters.
Click Properties.
The iSCSI Initiator Properties dialog box appears. The General tab displays additional characteristics of
the adapter.
You can now configure your independent hardware adapter or change its default characteristics.
Click Configure.
To change the default iSCSI name for your adapter, enter the new name.
Make sure the name you enter is worldwide unique and properly formatted or some storage devices might
not recognize the iSCSI adapter.
If you change the iSCSI name, it will be used for new iSCSI sessions. For existing sessions, new settings will
not be used until logout and re-login.
After you configure the dependent hardware iSCSI adapter, the discovery and authentication data are passed
through the network connection, while the iSCSI traffic goes through the iSCSI engine, bypassing the network.
The entire setup and configuration process for the dependent hardware iSCSI adapters involves these steps:
1
Determine the association between the dependent hardware adapters and physical NICs.
See Determine Association Between iSCSI and Network Adapters, on page 72
Make sure to note the names of the corresponding physical NICs. For example, the vmhba33 adapter
corresponds to vmnic1 and vmhba34 corresponds to vmnic2.
When you use any dependent hardware iSCSI adapter, performance reporting for a NIC associated with
the adapter might show little or no activity, even when iSCSI traffic is heavy. This behavior occurs because
the iSCSI traffic bypasses the regular networking stack.
If you use a third-party virtual switch, for example Cisco Nexus 1000V DVS, disable automatic pinning.
Use manual pinning instead, making sure to connect a VMkernel adapter (vmk) to an appropriate physical
NIC (vmnic). For information, refer to your virtual switch vendor documentation.
The Broadcom iSCSI adapter performs data reassembly in hardware, which has a limited buffer space.
When you use the Broadcom iSCSI adapter in a congested network or under heavy load, enable flow
control to avoid performance degradation.
Flow control manages the rate of data transmission between two nodes to prevent a fast sender from
overrunning a slow receiver. For best results, enable flow control at the end points of the I/O path, at the
hosts and iSCSI storage systems.
Procedure
1
Log in to the vSphere Client, and select a host from the inventory panel.
Click the Configuration tab and click Storage Adapters in the Hardware panel.
If it is installed, the dependent hardware iSCSI adapter appears on the list of storage adapters under a
category such as Broadcom iSCSI Adapter.
What to do next
Although the dependent iSCSI adapter is enabled by default, to make it functional, you must set up networking
for the iSCSI traffic and bind the adapter to the appropriate VMkernel iSCSI port. You then configure discovery
addresses and CHAP parameters.
In the iSCSI Initiator Properties dialog box, click the Network Configuration tab.
Click Add.
The network adapter, for example vmnic2, that corresponds to the dependent iSCSI adapter is listed.
What to do next
You must bind the associated dependent hardware iSCSI and network adapters by creating the network
connections.
Log in to the vSphere Client, and select a host from the inventory panel.
Click the Configuration tab and click Storage Adapters in the Hardware panel.
Select the iSCSI adapter from the list and click Properties.
Click Configure.
After you activate the adapter, you can disable it, but you cannot remove it from the list of storage adapters.
Log in to the vSphere Client, and select a host from the inventory panel.
Click the Configuration tab and click Storage Adapters in the Hardware panel.
Select the software iSCSI adapter from the list of storage adapters and click Properties.
Click Configure.
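As a hedged command-line equivalent of activating the software iSCSI adapter:
esxcli iscsi software set --enabled true
esxcli iscsi software get          # returns true when the software iSCSI adapter is enabled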
Figure: networking with iSCSI. Host 1 uses a software iSCSI adapter (vmhba#) with iSCSI ports vmk1 and vmk2 on vSwitches connected to physical NICs vmnic1 and vmnic2. Host 2 uses dependent hardware iSCSI adapters vmhba33 and vmhba34, each bound to an iSCSI port (vmk1, vmk2) and a physical NIC with iSCSI offload capabilities (vmnic1, vmnic2). Both hosts reach iSCSI storage over the IP network.
The iSCSI adapter and physical NIC connect through a virtual VMkernel adapter, also called virtual network
adapter or VMkernel port. You create a VMkernel adapter (vmk) on a vSphere switch (vSwitch) using 1:1
mapping between each virtual and physical network adapter.
One way to achieve the 1:1 mapping when you have multiple NICs, is to designate a separate vSphere switch
for each virtual-to-physical adapter pair. The following examples show configurations that use vSphere
standard switches, but you can use distributed switches as well. For more information about vSphere
distributed switches, see the vSphere Networking documentation.
Figure 9-2. 1:1 adapter mapping on separate vSphere standard switches
NOTE If you use separate vSphere switches, you must connect them to different IP subnets. Otherwise,
VMkernel adapters might experience connectivity problems and the host will fail to discover iSCSI LUNs.
An alternative is to add all NICs and VMkernel adapters to a single vSphere standard switch. In this case, you
must override the default network setup and make sure that each VMkernel adapter maps to only one
corresponding active physical adapter.
Figure 9-3. 1:1 adapter mapping on a single vSphere standard switch
The following table summarizes the iSCSI networking configuration discussed in this topic.
Table 9-2. Networking configuration for iSCSI
Software iSCSI (vmhba32): vmk1 paired with vmnic1, vmk2 paired with vmnic2.
Dependent Hardware iSCSI: vmhba33 with vmk1 and vmnic1, vmhba34 with vmk2 and vmnic2.
Log in to the vSphere Client, and select a host from the inventory panel.
Click Next.
Click Next.
You created the virtual VMkernel adapter for a physical network adapter on your host.
What to do next
If your host has one physical network adapter for iSCSI traffic, you must bind the virtual adapter that you
created to the iSCSI adapter.
If you have multiple network adapters, create additional VMkernel adapters and then perform iSCSI binding.
The number of virtual adapters must correspond to the number of physical adapters on the host.
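A minimal sketch of creating an additional iSCSI VMkernel adapter from the command line; the port group name, interface name, and addresses are examples only:
esxcli network vswitch standard portgroup add --portgroup-name iSCSI-2 --vswitch-name vSwitch1
esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI-2
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 10.10.10.12 --netmask 255.255.255.0 --type static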
Log in to the vSphere Client, and select a host from the inventory panel.
Select the vSphere standard switch that you use for iSCSI and click Properties.
In the standard switch Properties dialog box, click the Network Adapters tab and click Add.
Select one or more NICs from the list and click Next.
With dependent hardware iSCSI adapters, select only those NICs that have a corresponding iSCSI
component.
Review the information on the Adapter Summary page and click Finish.
The list of network adapters reappears, showing the network adapters that the vSphere standard
switch now claims.
Create iSCSI VMkernel adapters for all NICs that you added.
The number of VMkernel interfaces must correspond to the number of NICs on the vSphere standard
switch.
a
In the standard switch Properties dialog box, click the Ports tab and click Add.
Under Port Group Properties, enter a network label, for example iSCSI, and click Next.
CAUTION If the NIC you use with your iSCSI adapter, either software or dependent hardware, is not in
the same subnet as your iSCSI target, your host cannot establish sessions from this network adapter to the
target.
What to do next
Change the network policy for all VMkernel adapters, so that it is compatible with the network binding
requirements. You can then bind the iSCSI VMkernel adapters to the software iSCSI or dependent hardware
iSCSI adapters.
Log in to the vSphere Client and select the host from the inventory panel.
Select the vSphere standard switch that you use for iSCSI and click Properties.
On the Ports tab, select an iSCSI VMkernel adapter and click Edit.
Click the NIC Teaming tab and select Override switch failover order.
Designate only one physical adapter as active and move all remaining adapters to the Unused Adapters
category.
Repeat Step 4 through Step 6 for each iSCSI VMkernel interface on the vSphere standard switch.
What to do next
After you perform this task, bind the virtual VMkernel adapters to the software iSCSI or dependent hardware
iSCSI adapters.
Log in to the vSphere Client, and select a host from the inventory panel.
Click the Configuration tab, and click Storage Adapters in the Hardware panel.
The list of available storage adapters appears.
Select the software or dependent iSCSI adapter to configure and click Properties.
In the iSCSI Initiator Properties dialog box, click the Network Configuration tab.
Click Add and select a VMkernel adapter to bind with the iSCSI adapter.
You can bind the software iSCSI adapter to one or more VMkernel adapters. For a dependent hardware
iSCSI adapter, only one VMkernel interface associated with the correct physical NIC is available.
Click OK.
The network connection appears on the list of VMkernel port bindings for the iSCSI adapter.
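For reference, a hedged sketch of the same port binding with esxcli; the adapter and VMkernel names are examples:
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal list --adapter vmhba33   # verify the bound VMkernel ports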
Verify that the network policy for the connection is compliant with the binding requirements.
Make sure that the VMkernel network adapters are assigned addresses on the same subnet as the iSCSI
storage portal they connect to.
iSCSI adapters using VMkernel adapters are not able to connect to iSCSI ports on different subnets, even
if those ports are discovered by the iSCSI adapters.
When using separate vSphere switches to connect physical network adapters and VMkernel adapters,
make sure that the vSphere switches connect to different IP subnets.
If you migrate VMkernel adapters to a different vSphere switch, move associated physical adapters.
Do not make configuration changes to iSCSI-bound VMkernel adapters or physical network adapters.
Do not make changes that might break association of VMkernel adapters and physical network adapters.
You can break the association if you remove one of the adapters or the vSphere switch that connects them,
or change the 1:1 network policy for their connection.
The VMkernel adapter is connected to more than one physical network adapter.
Solution
Follow the steps in Change Port Group Policy for iSCSI VMkernel Adapters, on page 78 to set up the correct
network policy for the iSCSI-bound VMkernel adapter.
The network must support Jumbo Frames end-to-end for Jumbo Frames to be effective.
Check with your vendors to ensure your physical NICs support Jumbo Frames.
To set up and verify physical network switches for Jumbo Frames, consult your vendor documentation.
The following table explains the level of support that ESXi provides to Jumbo Frames.
Table 9-3. Support of Jumbo Frames
Software iSCSI: Supported
Hardware iSCSI (dependent and independent): Not Supported
Log in to the vSphere Client, and select a host from the inventory panel.
Click Properties for the standard switch you use for iSCSI traffic.
On the Ports tab, select the standard switch and click Edit.
Set the MTU parameter for the standard switch, and click OK.
This step sets the MTU for all physical NICs on that standard switch. The MTU value should be set to the
largest MTU size among all NICs connected to the standard switch.
On the Ports tab, select the VMkernel adapter and click Edit.
Set the MTU to match the value configured on the standard switch, and click OK.
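A hedged command-line sketch of the same MTU change; the switch and interface names are examples:
esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
esxcli network ip interface set --interface-name vmk1 --mtu 9000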
Static Discovery
The initiator does not have to perform any discovery. The initiator has a list of
targets it can contact and uses their IP addresses and target names to
communicate with them.
Log in to the vSphere Client, and select a host from the inventory panel.
Click the Configuration tab, and click Storage Adapters in the Hardware panel.
The list of available storage adapters appears.
Type the IP address or DNS name of the storage system and click OK.
After your host establishes the SendTargets session with this system, any newly discovered targets appear
in the Static Discovery list.
What to do next
After configuring Dynamic Discovery for your iSCSI adapter, rescan the adapter.
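For example, a minimal sketch of adding a dynamic (SendTargets) address and rescanning; the adapter name and portal address are placeholders:
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.10.10.100:3260
esxcli storage core adapter rescan --adapter vmhba33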
Log in to the vSphere Client, and select a host from the inventory panel.
Click the Configuration tab, and click Storage Adapters in the Hardware panel.
The list of available storage adapters appears.
What to do next
After configuring Static Discovery for your iSCSI adapter, rescan the adapter.
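Similarly, a static target can be sketched from the command line; the address is a placeholder and the target name reuses the example IQN from this chapter:
esxcli iscsi adapter discovery statictarget add --adapter vmhba33 --address 10.10.10.100:3260 --name iqn.1998-01.com.vmware.iscsi:name1
esxcli storage core adapter rescan --adapter vmhba33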
Mutual CHAP
For software and dependent hardware iSCSI adapters, you can set one-way CHAP and mutual CHAP for each
initiator or at the target level. Independent hardware iSCSI supports CHAP only at the initiator level.
When you set the CHAP parameters, specify a security level for CHAP.
NOTE When you specify the CHAP security level, how the storage array responds depends on the array's
CHAP implementation and is vendor specific. For example, when you select Use CHAP unless prohibited by
target, some storage arrays use CHAP in response, while others do not. For information on CHAP
authentication behavior in different initiator and target configurations, consult the array documentation.
Table 9-4. CHAP Security Level
Do not use CHAP: supported on software iSCSI, dependent hardware iSCSI, and independent hardware iSCSI.
Do not use CHAP unless required by target: supported on software iSCSI and dependent hardware iSCSI.
Use CHAP unless prohibited by target: supported on software iSCSI, dependent hardware iSCSI, and independent hardware iSCSI.
Use CHAP: supported on software iSCSI and dependent hardware iSCSI.
Before setting up CHAP parameters for software or dependent hardware iSCSI, determine whether to
configure one-way or mutual CHAP. Independent hardware iSCSI adapters do not support mutual
CHAP.
In mutual CHAP, both the target and the initiator authenticate each other. Use different secrets for
CHAP and mutual CHAP.
When you configure CHAP parameters, verify that they match the parameters on the storage side.
Procedure
1
Do not use CHAP unless required by target (software and dependent hardware iSCSI only)
Use CHAP (software and dependent hardware iSCSI only). To configure mutual CHAP, you
must select this option.
c
4
To set the CHAP name to the iSCSI initiator name, select Use initiator name.
To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator
name and type a name in the Name text box.
Enter a one-way CHAP secret to be used as part of authentication. Use the same secret that you enter
on the storage side.
To configure mutual CHAP, first configure one-way CHAP by following the directions in Step 3.
Make sure to select Use CHAP as an option for one-way CHAP. Then, specify the following under Mutual
CHAP:
a
Enter the mutual CHAP secret. Make sure to use different secrets for the one-way CHAP and mutual
CHAP.
Click OK.
If you change the CHAP or mutual CHAP parameters, they are used for new iSCSI sessions. For existing
sessions, new settings are not used until you log out and log in again.
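As a hedged illustration only, one-way CHAP can also be set at the adapter level with esxcli; the CHAP name and secret are placeholders, and the option names should be confirmed against the esxcli reference for your release:
esxcli iscsi adapter auth chap set --adapter vmhba33 --direction uni --level required --authname iqn.1998-01.com.vmware:esxi01 --secret MySecretCHAP1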
In mutual CHAP, both the target and initiator authenticate each other. Make sure to use different secrets
for CHAP and mutual CHAP.
Procedure
1
From the list of available targets, select a target you want to configure and click Settings > CHAP.
Use CHAP. To be able to configure mutual CHAP, you must select this option.
d
5
To set the CHAP name to the iSCSI initiator name, select Use initiator name.
To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator
name and enter a name in the Name field.
Enter a one-way CHAP secret to be used as part of authentication. Make sure to use the same secret
that you enter on the storage side.
To configure mutual CHAP, first configure one-way CHAP by following directions in Step 4.
Make sure to select Use CHAP as an option for one-way CHAP. Then, specify the following in the Mutual
CHAP area:
a
Enter the mutual CHAP secret. Make sure to use different secrets for the one-way CHAP and mutual
CHAP.
Click OK.
If you change the CHAP or mutual CHAP parameters, they are used for new iSCSI sessions. For existing
sessions, new settings are not used until you log out and log in again.
Disable CHAP
You can disable CHAP if your storage system does not require it.
If you disable CHAP on a system that requires CHAP authentication, existing iSCSI sessions remain active
until you reboot your host, end the session through the command line, or the storage system forces a logout.
After the session ends, you can no longer connect to targets that require CHAP.
Required privilege: Host.Configuration.Storage Partition Configuration
Procedure
1
For software and dependent hardware iSCSI adapters, to disable just the mutual CHAP and leave the one-way CHAP, select Do not use CHAP in the Mutual CHAP area.
To disable one-way CHAP, select Do not use CHAP in the CHAP area.
The mutual CHAP, if set up, automatically turns to Do not use CHAP when you disable the one-way
CHAP.
Click OK.
Advanced Parameter (Configurable On):
Header Digest: Software iSCSI, Dependent Hardware iSCSI
Data Digest: Software iSCSI, Dependent Hardware iSCSI
Maximum Outstanding R2T: Software iSCSI, Dependent Hardware iSCSI
First Burst Length: Software iSCSI, Dependent Hardware iSCSI
Maximum Burst Length: Software iSCSI, Dependent Hardware iSCSI
Maximum Receive Data Segment Length: Software iSCSI, Dependent Hardware iSCSI
Session Recovery Timeout: Specifies the amount of time, in seconds, that can lapse while a session
recovery is performed. If the timeout exceeds its limit, the iSCSI initiator terminates the session.
Configurable on Software iSCSI, Dependent Hardware iSCSI.
No-Op Interval: Software iSCSI, Dependent Hardware iSCSI
No-Op Timeout: Specifies the amount of time, in seconds, that can lapse before your host receives a
NOP-In message. The message is sent by the iSCSI target in response to the NOP-Out request. When the
no-op timeout limit is exceeded, the initiator terminates the current session and starts a new one.
Configurable on Software iSCSI, Dependent Hardware iSCSI.
ARP Redirect: Software iSCSI, Dependent Hardware iSCSI, Independent Hardware iSCSI
Delayed ACK: Software iSCSI, Dependent Hardware iSCSI
To configure advanced parameters at the initiator level, on the General tab, click Advanced. Proceed to
Step 4.
From the list of available targets, select a target to configure and click Settings > Advanced.
Enter any required values for the advanced parameters you want to modify and click OK to save your
changes.
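For reference, a sketch of inspecting the current advanced parameter values from the command line; the adapter name is an example:
esxcli iscsi adapter param get --adapter vmhba33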
You can also establish a session to a specific target port. This can be useful if your host connects to a single-port
storage system that, by default, presents only one target port to your initiator, but can redirect additional
sessions to a different target port. Establishing a new session between your iSCSI initiator and another target
port creates an additional path to the storage system.
The following considerations apply to iSCSI session management:
Some storage systems do not support multiple sessions from the same initiator name or endpoint.
Attempts to create multiple sessions to such targets can result in unpredictable behavior of your iSCSI
environment.
Storage vendors can provide automatic session managers. Using an automatic session manager to add
or delete sessions does not guarantee lasting results and can interfere with storage performance.
Description
-A|--adapter=str
-s|--isid=str
-n|--name=str
Procedure
u
Description
-A|--adapter=str
The iSCSI adapter name, for example, vmhba34. This option is required.
-s|--isid=str
The ISID of a session to duplicate. You can find it by listing all sessions.
-n|--name=str
What to do next
Rescan the iSCSI adapter.
Description
-A|--adapter=str
The iSCSI adapter name, for example, vmhba34. This option is required.
-s|--isid=str
The ISID of a session to remove. You can find it by listing all sessions.
-n|--name=str
What to do next
Rescan the iSCSI adapter.
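The options above appear to correspond to the esxcli iscsi session commands; a hedged sketch with placeholder adapter, target, and ISID values follows:
esxcli iscsi session list --adapter vmhba34                                            # list sessions and their ISIDs
esxcli iscsi session add --adapter vmhba34 --name iqn.1998-01.com.vmware.iscsi:name1   # add a duplicate session to the target
esxcli iscsi session remove --adapter vmhba34 --name iqn.1998-01.com.vmware.iscsi:name1 --isid 00023d000001
esxcli storage core adapter rescan --adapter vmhba34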
10
After you configure your iSCSI initiators and storage, you might need to modify your storage system to ensure
that it works properly with your ESXi implementation.
This section discusses many of the iSCSI storage systems supported in conjunction with VMware ESXi. For
each device, it lists major known potential issues, points to vendor-specific information (if available), or
includes information from VMware knowledge base articles.
NOTE Information in this section is updated only with each release. New information might already be
available. Also, other iSCSI storage systems are supported but are not covered in this chapter. Consult the most
recent Storage/SAN Compatibility, check with your storage vendor, and explore the VMware knowledge base
articles.
This chapter includes the following topics:
Tests whether ESXi can recognize and operate with the storage system. This
configuration does not allow for multipathing or any type of failover.
iSCSI Failover
The server is equipped with multiple iSCSI HBAs or NICs. The server is robust
to HBA or NIC failure.
The server is attached to multiple storage ports and is robust to storage port
failures and switch failures.
The host successfully boots from a LUN configured on the iSCSI SAN.
LUNs must be presented to each HBA of each host with the same LUN ID number. If different numbers
are used, the ESXi hosts do not recognize different paths to the same LUN. Because instructions on how
to configure identical SAN LUN IDs are vendor-specific, consult your storage documentation for more
information.
Unless specified for individual storage systems discussed in this chapter, set the host type for LUNs
presented to ESXi to Linux or Linux Cluster, if applicable to your storage system. The method ESXi uses
to access the storage system is most compatible with Linux access; however, this can vary depending on
the storage system you are using.
If you are using vMotion, DRS, or HA, make sure that source and target hosts for virtual machines can
see the same LUNs with identical LUN IDs. SAN administrators might find it counterintuitive to have
multiple hosts see the same LUNs because they might be concerned about data corruption. However,
VMFS prevents multiple virtual machines from writing to the same file at the same time, so provisioning
the LUNs to all required ESXi systems is appropriate.
If you do not have CHAP authentication set up on the LUNs that are being accessed, you must also disable
CHAP on the ESXi host. Otherwise, authentication of the storage system fails, although the LUNs have
no CHAP requirement.
To avoid the possibility of path thrashing, the default multipathing policy is Most Recently Used, not
Fixed. The ESXi system sets the default policy when it identifies the storage system.
To boot from a SAN, choose the active storage processor for the boot LUNs target in the HBA BIOS.
Port binding support on EMC CLARiiON storage systems requires initiators in different subnets if the
storage is using a version of FLARE earlier than FLARE 30. See vendor documentation for additional
details.
For ESXi to support EMC CLARiiON with ALUA, check the HCLs to make sure that you use the correct
firmware version on the storage array. For additional information, contact your storage vendor.
Certain EMC CLARiiON CX3 models might not behave appropriately during periods of network
congestion. For information on how to address this behavior, see the VMware knowledge base article at
http://kb.vmware.com/kb/1002598.
NOTE The ESXi host considers any LUNs from a Symmetrix storage system that have a capacity of 50MB or
less as management LUNs. These LUNs are also known as pseudo or gatekeeper LUNs. These LUNs appear
in the EMC Symmetrix Management Interface and should not be used to hold data.
Install, connect, and power up the network devices as detailed in the vendor installation document.
Scroll through the messages on the LCD panel until the following message appears: 603 Port MA0 IP
address
Record the management port IP address that appears in Basic MSA1510i information.
From the server or a workstation on the MSA1510i LAN segment, open a Web browser and enter the
address obtained in Step 2.
Password: root
Description
Storage configuration
a
b
c
Login settings
Management settings
NOTE Wizards are available for basic configuration tasks only. Use the Manage and Configure tabs to view
and change your configuration.
What to do next
After initial setup, perform the following tasks to complete the configuration:
Create an array.
Create a target.
Associate or assign the portals created using the wizard with the portal group created.
Update the ACLs of the logical drives to provide access to initiators (select the list of initiators to access
the logical drive).
For HP EVAgl 3000/5000 (active-passive), use the 000000002200282E host mode type.
For HP EVAgl firmware 4.001 (active-active firmware for GL series) and above, use the VMware host mode
type.
For EVA4000/6000/8000 active-active arrays with firmware earlier than 5.031, use the 000000202200083E
host mode type.
For EVA4000/6000/8000 active-active arrays with firmware 5.031 and later, use the VMware host mode type.
Otherwise, EVA systems do not require special configuration changes to work with an ESXi system.
Description
Disable ALUA.
If any of your iSCSI initiators are a part of an initiator group (igroup), disable ALUA on the
NetApp filer.
Set up multipathing.
When you set up multipathing between two iSCSI HBAs and multiple ports on a NetApp
storage system, give each HBA a different iSCSI initiator name.
The NetApp storage system only permits one connection for each target and each initiator.
Attempts to make additional connections cause the first connection to drop. Therefore, a
single HBA should not attempt to connect to multiple IP addresses associated with the same
NetApp target.
Set the appropriate LUN type and initiator group type for the storage system:
LUN type: VMware (if VMware type is not available, use Linux).
Initiator group type: VMware (if VMware type is not available, use Linux).
Provision storage.
Multipathing. No special setup is needed because EqualLogic storage systems support storage-processor
failover that is transparent to iSCSI. Multiple iSCSI HBAs or NICs can connect to the same target or LUN
on the storage side. EqualLogic provides a multipathing extension module that can be installed on ESXi
to provide additional capabilities.
For information about the custom multipathing configuration packages, contact Dell EqualLogic.
Creating iSCSI LUNs. From the EqualLogic web portal, right-click Volumes, and then select Create
Volume.
EqualLogic storage systems impose a maximum limit of 1024 iSCSI connections per storage pool and 2048
connections per storage group.
For more information about configuring and using EqualLogic storage systems, see the vendor's documentation.
Create volumes.
As a best practice, configure virtual IP load balancing in SAN/iQ for all ESXi authentication groups.
On the MD3000i storage system, mutual CHAP configuration requires only a CHAP secret.
On the ESXi host, mutual CHAP configuration requires both the name and CHAP secret. When
configuring mutual CHAP on the ESXi host, enter the IQN name of the target as the mutual CHAP name.
Make sure the CHAP secret matches the one set on the array.
11
When you set up your host to boot from a SAN, your host's boot image is stored on one or more LUNs in the
SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk.
You can use boot from the SAN if you do not want to handle maintenance of local storage or have diskless
hardware configurations, such as blade systems.
ESXi supports different methods of booting from the iSCSI SAN.
Table 11-1. Boot from iSCSI SAN support
Independent Hardware iSCSI: Configure the iSCSI HBA to boot from the SAN.
Software iSCSI and Dependent Hardware iSCSI: Use the network adapter that supports the iBFT. For information, see iBFT iSCSI Boot Overview, on page 99.
Review any vendor recommendations for the hardware you use in your boot configuration.
For installation prerequisites and requirements, review vSphere Installation and Setup.
The boot LUN should be visible only to the host that uses the LUN. No other host on the SAN should
be permitted to see that boot LUN.
If a LUN is used for a VMFS datastore, it can be shared by multiple hosts. ACLs on the storage systems
can allow you to do this.
With independent hardware iSCSI only, you can place the diagnostic partition on the boot LUN. If
you configure the diagnostic partition in the boot LUN, this LUN cannot be shared across multiple
hosts. If a separate LUN is used for the diagnostic partition, it can be shared by multiple hosts.
If you boot from SAN using iBFT, you cannot set up a diagnostic partition on a SAN LUN.
Connect network cables, referring to any cabling guide that applies to your setup.
Create a volume (or LUN) on the storage system for your host to boot from.
Configure the storage system so that your host has access to the assigned LUN.
This could involve updating ACLs with the IP addresses, iSCSI names, and the CHAP authentication
parameter you use on your host. On some storage systems, in addition to providing access information
for the ESXi host, you must also explicitly associate the assigned LUN with the host.
Record the iSCSI name and IP addresses of the targets assigned to the host.
You must have this information to configure your iSCSI adapters.
Insert the installation CD/DVD in the CD/DVD-ROM drive and reboot the host.
Use the BIOS to set the host to boot from the CD/DVD-ROM drive first.
During server POST, press Ctrl+Q to enter the QLogic iSCSI HBA configuration menu.
From the Fast!UTIL Options menu, select Configuration Settings > Host Adapter Settings.
Configure the following settings for your host adapter: initiator IP address, subnet mask, gateway,
initiator iSCSI name, and CHAP (if required).
From the Fast!UTIL Options menu, select Configuration Settings > iSCSI Boot Settings.
Before you can set SendTargets, set Adapter Boot mode to Manual.
You can leave the Boot LUN and iSCSI Name fields blank if only one iSCSI target and one LUN are
at the specified address to boot from. Otherwise, you must specify these fields to ensure that you do
not boot from a volume for some other system. After the target storage system is reached, these fields
will be populated after a rescan.
Save changes.
From the iSCSI Boot Settings menu, select the primary boot device. An auto rescan of the HBA is made
to find new target LUNS.
Return to the Primary Boot Device Setting menu. After the rescan, the Boot LUN and iSCSI Name fields
are populated. Change the value of Boot LUN to the desired LUN ID.
When restarted, the system BIOS detects the iSCSI boot firmware on the network adapter.
The iSCSI boot firmware uses the preconfigured boot parameters to connect with the specified iSCSI target.
If the connection to the iSCSI target is successful, the iSCSI boot firmware writes the networking and iSCSI
boot parameters in to the iBFT and stores the table in the system memory.
NOTE The system uses this table to configure its own iSCSI connection and networking and to start up.
The VMkernel starts loading and takes over the boot operation.
Using the boot parameters from the iBFT, the VMkernel connects to the iSCSI target.
IPv6
NOTE Update your NIC's boot code and iBFT firmware using vendor supplied tools before trying to install
and boot VMware ESXi. Consult vendor documentation and VMware HCL for supported boot code and iBFT
firmware versions for VMware ESXi iBFT boot. The boot code and iBFT firmware released by vendors prior
to the ESXi 4.1 release might not work.
After you set up your host to boot from iBFT iSCSI, the following restrictions apply:
You cannot disable the software iSCSI adapter. If the iBFT configuration is present in the BIOS, the host
re-enables the software iSCSI adapter during each reboot.
NOTE If you do not use the iBFT-enabled network adapter for the iSCSI boot and do not want the software
iSCSI adapter to be always enabled, remove the iBFT configuration from the network adapter.
You cannot remove the iBFT iSCSI boot target using the vSphere Client. The target appears on the list of
adapter static targets.
On the network adapter that you use for the boot from iSCSI, specify networking and iSCSI parameters.
Because configuring the network adapter is vendor specific, review your vendor documentation for
instructions.
iSCSI
DVD-ROM
Because changing the boot sequence in the BIOS is vendor specific, refer to vendor documentation for
instructions. The following sample procedure explains how to change the boot sequence on a Dell host with a
Broadcom network adapter.
Procedure
1
In the Boot Sequence menu, arrange the bootable items so that iSCSI precedes the DVD-ROM.
Select Save Changes and click Exit to exit the BIOS Setup menu.
Configure iSCSI boot firmware on your boot NIC to point to the target LUN that you want to use as the
boot LUN.
Change the boot sequence in the BIOS so that iSCSI precedes the DVD-ROM.
Procedure
1
Insert the installation media in the CD/DVD-ROM drive and restart the host.
Configure the iSCSI boot firmware on your boot NIC to point to the boot LUN.
Change the boot sequence in the BIOS so that iSCSI precedes the boot device.
Procedure
1
If you use VLANs to isolate the networks, they must have different subnets to ensure that routing tables
are properly set up.
VMware recommends that you configure the iSCSI adapter and target to be on the same subnet. If you
set up the iSCSI adapter and target on different subnets, the following restrictions apply:
The default VMkernel gateway must be able to route both the management and iSCSI traffic.
After you boot your host, you can use the iBFT-enabled network adapter only for iBFT. You cannot
use the adapter for other iSCSI traffic.
Use the first physical network adapter for the management network.
Use the second physical network adapter for the iSCSI network. Make sure to configure the iBFT.
After the host boots, you can add secondary network adapters to both the management and iSCSI
networks.
Cause
The firmware uses the updated boot configuration and is able to connect to the iSCSI target and load the
ESXi image. However, when loaded, the system does not pick up the new parameters, but continues to use
persistent networking and iSCSI parameters from the previous boot. As a result, the host cannot connect to
the target and boots in the stateless mode.
Solution
Re-configure the iSCSI and networking on the host to match the iBFT parameters.
Perform a rescan.
12
When using ESXi with the iSCSI SAN, follow best practices that VMware offers to avoid problems.
Check with your storage representative if your storage system supports Storage API - Array Integration
hardware acceleration features. If it does, refer to your vendor documentation for information on how to enable
hardware acceleration support on the storage system side. For more information, see Chapter 18, Storage
Hardware Acceleration, on page 171.
This chapter includes the following topics:
Place only one VMFS datastore on each LUN. Multiple VMFS datastores on one LUN is not recommended.
Do not change the path policy the system sets for you unless you understand the implications of making
such a change.
Document everything. Include information about configuration, access control, storage, switch, server
and iSCSI HBA configuration, software and firmware versions, and storage cable plan.
Make several copies of your topology maps. For each element, consider what happens to your SAN
if the element fails.
Cross off different links, switches, HBAs and other elements to ensure you did not miss a critical
failure point in your design.
Ensure that the iSCSI HBAs are installed in the correct slots in the ESXi host, based on slot and bus speed.
Balance PCI bus load among the available busses in the server.
Become familiar with the various monitor points in your storage network, at all visibility points, including
ESXi performance charts, Ethernet switch statistics, and storage performance statistics.
Be cautious when changing IDs of the LUNs that have VMFS datastores being used by your host. If you
change the ID, virtual machines running on the VMFS datastore will fail.
If there are no running virtual machines on the VMFS datastore, after you change the ID of the LUN, you
must use rescan to reset the ID on your host. For information on using rescan, see Perform Storage
Rescan, on page 122.
If you need to change the default iSCSI name of your iSCSI adapter, make sure the name you enter is
worldwide unique and properly formatted. To avoid storage access problems, never assign the same iSCSI
name to different adapters, even on different hosts.
Because each application has different requirements, you can meet these goals by choosing an appropriate
RAID group on the storage system. To achieve performance goals, perform the following tasks:
Place each LUN on a RAID group that provides the necessary performance levels. Pay attention to the
activities and resource utilization of other LUNS in the assigned RAID group. A high-performance RAID
group that has too many applications doing I/O to it might not meet performance goals required by an
application running on the ESXi host.
Provide each server with a sufficient number of network adapters or iSCSI hardware adapters to allow
maximum throughput for all the applications hosted on the server for the peak period. I/O spread across
multiple ports provides higher throughput and less latency for each application.
To provide redundancy for software iSCSI, make sure the initiator is connected to all network adapters
used for iSCSI connectivity.
When allocating LUNs or RAID groups for ESXi systems, multiple operating systems use and share that
resource. As a result, the performance required from each LUN in the storage subsystem can be much
higher if you are working with ESXi systems than if you are using physical machines. For example, if you
expect to run four I/O intensive applications, allocate four times the performance capacity for the ESXi
LUNs.
When using multiple ESXi systems in conjunction with vCenter Server, the performance needed from the
storage subsystem increases correspondingly.
The number of outstanding I/Os needed by applications running on an ESXi system should match the
number of I/Os the SAN can handle.
Network Performance
A typical SAN consists of a collection of computers connected to a collection of storage systems through a
network of switches. Several computers often access the same storage.
Single Ethernet Link Connection to Storage shows several computer systems connected to a storage system
through an Ethernet switch. In this configuration, each system is connected through a single Ethernet link to
the switch, which is also connected to the storage system through a single Ethernet link. In most configurations,
with modern switches and typical traffic, this is not a problem.
Figure 12-1. Single Ethernet Link Connection to Storage
When systems read data from storage, the maximum response from the storage is to send enough data to fill
the link between the storage systems and the Ethernet switch. It is unlikely that any single system or virtual
machine gets full use of the network speed, but this situation can be expected when many systems share one
storage device.
When writing data to storage, multiple systems or virtual machines might attempt to fill their links. As Dropped
Packets shows, when this happens, the switch between the systems and the storage system has to drop data.
This happens because, while it has a single connection to the storage device, it has more traffic to send to the
storage system than a single link can carry. In this case, the switch drops network packets because the amount
of data it can transmit is limited by the speed of the link between it and the storage system.
Figure 12-2. Dropped Packets (multiple 1-Gbit links into the switch compete for a single 1-Gbit link to storage, and the switch drops packets)
Recovering from dropped network packets results in large performance degradation. In addition to time spent
determining that data was dropped, the retransmission uses network bandwidth that could otherwise be used
for current transactions.
iSCSI traffic is carried on the network by the Transmission Control Protocol (TCP). TCP is a reliable
transmission protocol that ensures that dropped packets are retried and eventually reach their destination.
TCP is designed to recover from dropped packets and retransmits them quickly and seamlessly. However,
when the switch discards packets with any regularity, network throughput suffers significantly. The network
becomes congested with requests to resend data and with the resent packets, and less data is actually
transferred than in a network without congestion.
Most Ethernet switches can buffer, or store, data and give every device attempting to send data an equal chance
to get to the destination. This ability to buffer some transmissions, combined with many systems limiting the
number of outstanding commands, allows small bursts from several systems to be sent to a storage system in
turn.
If the transactions are large and multiple servers are trying to send data through a single switch port, a switch's
ability to buffer one request while another is transmitted can be exceeded. In this case, the switch drops the
data it cannot send, and the storage system must request retransmission of the dropped packet. For example,
if an Ethernet switch can buffer 32KB on an input port, but the server connected to it thinks it can send 256KB
to the storage device, some of the data is dropped.
Most managed switches provide information on dropped packets, similar to the following:
*: interface is up
IHQ: pkts in input hold queue     IQD: pkts dropped from input queue
OHQ: pkts in output hold queue    OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec)          RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec)          TXPS: tx rate (pkts/sec)
TRTL: throttle count

  Interface              IHQ  IQD   OHQ  OQD  RXBS       RXPS   TXBS       TXPS   TRTL
* GigabitEthernet0/1     3    9922  0    0    476303000  62273  477840000  63677  0
In this example from a Cisco switch, the bandwidth used is 476303000 bits/second, which is less than half of
wire speed. In spite of this, the port is buffering incoming packets and has dropped quite a few packets. The
final line of this interface summary indicates that this port has already dropped almost 10,000 inbound packets
in the IQD column.
Configuration changes to avoid this problem involve making sure several input Ethernet links are not funneled
into one output link, resulting in an oversubscribed link. When a number of links transmitting near capacity
are switched to a smaller number of links, oversubscription is a possibility.
Generally, applications or systems that write a lot of data to storage, such as data acquisition or transaction
logging systems, should not share Ethernet links to a storage device. These types of applications perform best
with multiple connections to storage devices.
Multiple Connections from Switch to Storage shows multiple connections from the switch to the storage.
Figure 12-3. Multiple Connections from Switch to Storage
Using VLANs or VPNs does not provide a suitable solution to the problem of link oversubscription in shared
configurations. VLANs and other virtual partitioning of a network provide a way of logically designing a
network, but do not change the physical capabilities of links and trunks between switches. When storage traffic
and other network traffic end up sharing physical connections, as they would with a VPN, the possibility for
oversubscription and lost packets exists. The same is true of VLANs that share interswitch trunks. Performance
design for a SAN must take into account the physical limitations of the network, not logical allocations.
Component
Comments
Topology
No single failure should cause HBA and SP failover, especially with active-passive storage arrays.
EMC Symmetrix
Enable the SPC2 and SC3 settings. Contact EMC for the latest settings.
EMC Clariion
Set the EMC Clariion failover mode to 1 or 4. Contact EMC for details.
HP MSA
No specific requirements
HP EVA
For EVA3000/5000 firmware 4.001 and later, and EVA4000/6000/8000 firmware 5.031 and later, set the host type to VMware. Otherwise, set the host mode type to Custom.
NetApp
If any of your iSCSI initiators are a part of an initiator group (igroup), disable ALUA on the NetApp array.
Dell EqualLogic
HP StorageWorks (formerly LeftHand)
ESXi Configuration
Set the following Advanced Settings for the ESXi host: Disk.UseLunReset to 1.
A multipathing policy of Most Recently Used must be set for all LUNs hosting clustered disks for active-passive arrays. A multipathing policy of Most Recently Used or Fixed may be set for LUNs on active-active arrays.
Allow ARP redirection if the storage system supports transparent failover.
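The Disk.UseLunReset setting mentioned above is normally changed in the vSphere Client Advanced Settings dialog. As a rough command-line sketch, assuming the esxcli advanced-option path mirrors the setting name (verify this on your host before relying on it):

esxcli --server=server_name system settings advanced set -o /Disk/UseLunReset -i 1
esxcli --server=server_name system settings advanced list -o /Disk/UseLunReset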
13 Working with Datastores
Datastores are logical containers, analogous to file systems, that hide specifics of each storage device and
provide a uniform model for storing virtual machine files. Datastores can also be used for storing ISO images,
virtual machine templates, and floppy images.
You use the vSphere Client to access different types of storage devices that your ESXi host discovers and to
deploy datastores on them.
Depending on the type of storage you use, datastores can be backed by the following file system formats:
n
After creating datastores, you can organize them in different ways. For example, you can group them into
folders according to business practices. This allows you to assign the same permissions and alarms on the
datastores in the group at one time.
You can also add datastores to datastore clusters. A datastore cluster is a collection of datastores with shared
resources and a shared management interface. When you create a datastore cluster, you can use Storage DRS
to manage storage resources. For information about datastore clusters, see the vSphere Resource Management
documentation.
This chapter includes the following topics:
n
VMFS Version   ESX/ESXi 3.x Host   ESX/ESXi 4.x Host   ESXi 5.0 Host
VMFS2          RO                  RO                  N
VMFS3          RW                  RW                  RW
VMFS5          N                   N                   RW
RW: Complete read and write support. You can create and power on virtual machines.
RO: Read only support. You cannot create or power on virtual machines.
N: No access. ESXi 5.0 does not support VMFS2. If your datastore was formatted with VMFS2, upgrade
it to VMFS5 using a two-step process. For information, see Upgrade VMFS2 Datastores to VMFS3, on
page 119.
Use the vSphere Client to set up a VMFS datastore in advance on a block-based storage device that your
ESXi host discovers. A VMFS datastore can be extended to span several physical storage extents, including
SAN LUNs and local storage. This feature allows you to pool storage and gives you flexibility in creating the
datastore necessary for your virtual machines.
You can increase the capacity of a datastore while virtual machines are running on the datastore. This ability
lets you add new space to your VMFS datastores as your virtual machine requires it. VMFS is designed for
concurrent access from multiple physical machines and enforces the appropriate access controls on virtual
machine files.
Support of greater than 2TB storage devices for each VMFS extent.
Standard 1MB file system block size with support of 2TB virtual disks.
Support of greater than 2TB disk size for RDMs in physical compatibility mode.
Scalability improvements on storage devices that support hardware acceleration. For information, see
Chapter 18, Storage Hardware Acceleration, on page 171.
Default use of hardware assisted locking, also called atomic test and set (ATS) locking, on storage devices
that support hardware acceleration. For information about how to turn off ATS locking, see Turn off ATS
Locking, on page 125.
Ability to reclaim physical storage space on thin provisioned storage devices. For information, see Array
Thin Provisioning and VMFS Datastores, on page 184.
Online in-place upgrade process that upgrades existing datastores without disrupting hosts or virtual
machines that are currently running.
For VMFS3 datastores, the 2TB limit still applies, even when the storage device has a capacity of more
than 2TB. To be able to use the entire storage space, upgrade a VMFS3 datastore to VMFS5. Conversion
of the MBR format to GPT happens only after you expand the datastore.
When you upgrade a VMFS3 datastore to VMFS5, any spanned extents have the GPT format.
When you upgrade a VMFS3 datastore, remove from the storage device any partitions that ESXi does not
recognize, for example, partitions that use the EXT2 or EXT3 formats. Otherwise, the host cannot format
the device with GPT and the upgrade fails.
You cannot expand a VMFS3 datastore on devices that have the GPT partition format.
[Figure: Multiple ESXi hosts (including host B and host C) run virtual machines VM1, VM2, and VM3; the virtual disk files for the virtual machines reside on a shared VMFS volume backed by disk1, disk2, and disk3.]
To ensure that the same virtual machine is not accessed by multiple servers at the same time, VMFS provides
on-disk locking.
Sharing the same VMFS volume across multiple hosts offers the following advantages:
n
You can use VMware Distributed Resource Scheduling (DRS) and VMware High Availability (HA).
You can distribute virtual machines across different physical servers. That means you run a mix of virtual
machines on each server so that not all experience high demand in the same area at the same time. If a
server fails, you can restart virtual machines on another physical server. In case of a failure, the on-disk
lock for each virtual machine is released. For more information about VMware DRS, see the vSphere
Resource Management documentation. For information about VMware HA, see the vSphere Availability
documentation.
You can use vMotion to migrate running virtual machines from one physical server to another. To support
vMotion between ESXi 5 and version 3.x or 4.x hosts, virtual machines must be located on VMFS3 volumes.
For information about migrating virtual machines, see the vCenter Server and Host Management
documentation.
Creating a template
When metadata changes are made in a shared storage environment, VMFS uses special locking mechanisms to
protect its data and prevent multiple hosts from concurrently writing to the metadata.
SCSI Reservations
VMFS uses SCSI reservations on storage devices that do not support hardware acceleration. SCSI reservations
lock an entire storage device while an operation that requires metadata protection is performed. After the
operation completes, VMFS releases the reservation and other operations can continue. Because this lock is
exclusive, excessive SCSI reservations by a host can cause performance degradation on other hosts that are
accessing the same VMFS. For information about how to reduce SCSI reservations, see the vSphere
Troubleshooting documentation.
New VMFS5
Upgraded VMFS5
VMFS3
Single extent
ATS only
Multiple extents
In certain cases, you might need to turn off the ATS-only setting for a new VMFS5 datastore. For information,
see Turn off ATS Locking, on page 125.
If your VMFS datastore requires more space, you can increase the VMFS volume. You can dynamically add
new extents to any VMFS datastore. An extent is a partition on a physical storage device. The datastore can
span over up to 32 extents with the size of each extent of more than 2TB, yet appear as a single volume.
NOTE ATS-only datastores cannot span over non-ATS devices.
Another option is to grow the existing datastore if the storage device where your datastore resides has free
space.
Log in to the vSphere Client and select the host from the Inventory panel.
Click the Configuration tab and click Storage in the Hardware panel.
If the disk is not blank, review the current disk layout in the top panel of the Current Disk Layout page
and select a configuration option from the bottom panel.
Option
Description
Dedicates the entire disk to a single VMFS datastore. If you select this option,
all file systems and data currently stored on this device are destroyed.
If the disk you are formatting is blank, the Current Disk Layout page presents the entire disk space for
storage configuration.
8
Click Next.
10
If the space specified for storage is excessive for your purposes, you can adjust the capacity values.
By default, the entire free space on the storage device is available.
11
Click Next.
12
In the Ready to Complete page, review the datastore configuration information and click Finish.
A datastore on the SCSI-based storage device is created. If you use the vCenter Server system to manage your
hosts, the newly created datastore is added to all hosts.
Actions
vCenter Server
1
2
1
2
Select a host from the inventory, click the Configuration tab and click Storage.
From the Datastores view, select the datastore to modify and click Properties.
Add a new extent. An extent is a partition on a storage device. You can add up to 32 extents of the same
storage type to an existing VMFS datastore. The spanned VMFS datastore can use any or all of its extents
at any time. It does not need to fill up a particular extent before using the next one.
Grow an extent in an existing VMFS datastore, so that it fills the available adjacent capacity. Only extents
with free space immediately after them are expandable.
NOTE If a shared datastore has powered on virtual machines and becomes 100% full, you can increase the
datastore's capacity only from the host with which the powered on virtual machines are registered.
Prerequisites
Required privilege: Host.Configuration.Storage Partition Configuration
Procedure
1
Log in to the vSphere Client and select a host from the Inventory panel.
From the Datastores view, select the datastore to increase and click Properties.
Click Increase.
Select a device from the list of storage devices and click Next.
Option                     Description
Add a new extent           Select the device for which the Expandable column reads NO.
Grow an existing extent    Select the device for which the Expandable column reads YES.
Review the Current Disk Layout to see the available configurations and click Next.
Option
Description
Deploys an extent in the remaining free space of the disk. This option is
available only when you are adding an extent.
Dedicates the entire disk to a single extent. This option is available only when
you are adding an extent and when the disk you are formatting is not blank.
The disk is reformatted, and the datastores and any data that it contains are
erased.
Click Next.
10
Review the proposed layout and the new configuration of your datastore, and click Finish.
What to do next
After you grow an extent in a shared VMFS datastore, refresh the datastore on each host that can access this
datastore, so that the vSphere Client can display the correct datastore capacity for all hosts.
To upgrade a VMFS2 datastore, you use a two-step process that involves upgrading VMFS2 to VMFS3
first. Because ESXi 5.0 hosts cannot access VMFS2 datastores, use a legacy host, ESX/ESXi 4.x or earlier, to
access the VMFS2 datastore and perform the VMFS2 to VMFS3 upgrade.
After you upgrade your VMFS2 datastore to VMFS3, the datastore becomes available on the ESXi 5.0 host,
where you complete the process of upgrading to VMFS5.
When you upgrade your datastore, the ESXi file-locking mechanism ensures that no remote host or local
process is accessing the VMFS datastore being upgraded. Your host preserves all files on the datastore.
The datastore upgrade is a one-way process. After upgrading your datastore, you cannot revert it back to
its previous VMFS format.
Characteristics     Upgraded VMFS5                                        Formatted VMFS5
File block size     1, 2, 4, and 8MB                                      1MB
Subblock size       64KB                                                  8KB
Partition format    MBR (converted to GPT after expansion beyond 2TB)     GPT
Datastore limits
Commit or discard any changes to virtual disks in the VMFS2 datastore that you plan to upgrade.
Be sure that no powered on virtual machines are using the VMFS2 datastore.
To upgrade the VMFS2 file system, its file block size must not exceed 8MB.
Procedure
1
Log in to the vSphere Client and select a host from the Inventory panel.
What to do next
After you upgrade your VMFS2 datastore to VMFS3, the datastore becomes available on the ESXi 5.0 host. You
can now use the ESXi 5.0 host to complete the process of upgrading to VMFS5.
If you use a VMFS2 datastore, you must first upgrade it to VMFS3. Follow the instructions in Upgrade
VMFS2 Datastores to VMFS3, on page 119.
Verify that the volume to be upgraded has at least 2MB of free blocks available and 1 free file descriptor.
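One way to check these prerequisites from the command line is to display the volume's attributes with vmkfstools; a minimal sketch, with a hypothetical datastore name:

vmkfstools -Ph /vmfs/volumes/vmfs3_datastore1

The output typically reports the VMFS version, file block size, capacity, and free space of the volume.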
Procedure
1
Log in to the vSphere Client and select a host from the Inventory panel.
Perform a rescan on all hosts that are associated with the datastore.
LUN ID changes
Log in to the vSphere Client and select the server from the inventory panel.
Click the Configuration tab and click Storage in the Hardware panel.
From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and
click Next.
A name present in the VMFS Label column indicates that the LUN contains a copy of an existing VMFS
datastore.
In the Ready to Complete page, review the datastore configuration information and click Finish.
What to do next
If you later want to resignature the mounted datastore, you must unmount it first.
The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
A spanned datastore can be resignatured only if all its extents are online.
The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
You can mount the new VMFS datastore without a risk of its UUID colliding with UUIDs of any other
datastore, such as an ancestor or child in a hierarchy of LUN snapshots.
Prerequisites
To resignature a mounted datastore copy, first unmount it.
Before you resignature a VMFS datastore, perform a storage rescan on your host so that the host updates its
view of LUNs presented to it and discovers any LUN copies.
Procedure
1
Log in to the vSphere Client and select the server from the inventory panel.
Click the Configuration tab and click Storage in the Hardware panel.
From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and
click Next.
A name present in the VMFS Label column indicates that the LUN contains a copy of an existing VMFS
datastore.
Under Mount Options, select Assign a New Signature and click Next.
In the Ready to Complete page, review the datastore configuration information and click Finish.
What to do next
After resignaturing, you might have to do the following:
n
If the resignatured datastore contains virtual machines, update references to the original VMFS datastore
in the virtual machine files, including .vmx, .vmdk, .vmsd, and .vmsn.
Prerequisites
n
Procedure
1
Reconnect a cable.
Add a single host to vCenter Server after you have edited or removed a datastore that is shared by the
vCenter Server hosts and the single host.
IMPORTANT If you rescan when a path is unavailable, the host removes the path from the list of paths to the
device. The path reappears on the list as soon as it becomes available and starts working again.
Procedure
1
In the vSphere Client, select a host and click the Configuration tab.
Option
Description
Storage
In the Hardware panel, click Storage, and click Rescan All above the
Datastores or Devices panel.
Storage Adapters
In the Hardware panel, click Storage Adapters, and click Rescan All above
the Storage Adapters panel.
NOTE You can also right-click an individual adapter and select Rescan to
rescan just that adapter.
Description
Rescan all adapters to discover new storage devices. If new devices are
discovered, they appear in the device list.
Rescan all storage devices to discover new datastores that have been added
since the last scan. Any new datastores appear in the datastore list.
In the vSphere Client inventory panel, select the host, click the Configuration tab, and click Advanced
Settings under Software.
Select Disk.
Change the existing value to the value of your choice, and click OK.
The value you enter specifies the LUN after the last one you want to discover.
For example, to discover LUNs from 0 through 31, set Disk.MaxLUN to 32.
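The same limit can typically be set from the command line; a minimal sketch using esxcli, assuming the advanced option path mirrors the setting name:

esxcli --server=server_name system settings advanced set -o /Disk/MaxLUN -i 32
esxcli --server=server_name system settings advanced list -o /Disk/MaxLUN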
Filter Name                        Key
VMFS Filter                        config.vpxd.filter.vmfsFilter
RDM Filter                         config.vpxd.filter.rdmFilter
Same Host and Transports Filter    config.vpxd.filter.SameHostAndTransportsFilter
Host Rescan Filter                 config.vpxd.filter.hostRescanFilter
In the Value text box, type False for the specified key.
Click Add.
Click OK.
You are not required to restart the vCenter Server system.
Storage Filtering
vCenter Server provides storage filters to help you avoid storage device corruption or performance degradation
that can be caused by an unsupported use of storage devices. These filters are available by default.
Table 13-5. Storage Filters
Filter Name                        Key
VMFS Filter                        config.vpxd.filter.vmfsFilter
RDM Filter                         config.vpxd.filter.rdmFilter
Same Host and Transports Filter    config.vpxd.filter.SameHostAndTransportsFilter
Host Rescan Filter                 config.vpxd.filter.hostRescanFilter
The device parameter is the path to the head extent device on which VMFS5 was deployed. Use the
following format:
/vmfs/devices/disks/disk_ID:P
NFS Datastores
ESXi can access a designated NFS volume located on a NAS server, mount the volume, and use it for its storage
needs. You can use NFS volumes to store and boot virtual machines in the same way that you use VMFS
datastores.
ESXi supports the following shared storage capabilities on NFS volumes:
n
vMotion
When you work with NFS storage, the following considerations apply:
n
The maximum size of NFS datastores depends on the support that an NFS server provides. ESXi does not
impose any limits on the NFS datastore size.
If you use non-ASCII characters to name datastores and virtual machines, make sure that the underlying
NFS server offers internationalization support. If the server does not support international characters, use
only ASCII characters; otherwise, unpredictable failures might occur.
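An NFS datastore can also be mounted from the command line; a minimal sketch using esxcli, with hypothetical server, export, and datastore names:

esxcli --server=server_name storage nfs add --host=nfs-filer.example.com --share=/export/vm_storage --volume-name=nfs_datastore1
esxcli --server=server_name storage nfs list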
Log in to the vSphere Client and select the host from the Inventory panel.
Click the Configuration tab and click Storage in the Hardware panel.
Select Network File System as the storage type and click Next.
Enter the server name, the mount point folder name, and the datastore name.
For the server name, you can enter an IP address, a DNS name, or an NFS UUID.
NOTE When you mount the same NFS volume on different hosts, make sure that the server and folder
names are identical across the hosts. If the names do not match exactly, the hosts see the same NFS volume
as two different datastores. This might result in a failure of such features as vMotion. An example of such
discrepancy could be if you enter filer as the server name on one host and filer.domain.com on the other.
(Optional) Select Mount NFS read only if the volume is exported as read only by the NFS server.
Click Next.
In the Network File System Summary page, review the configuration options and click Finish.
Procedure
1
If the datastore is shared, specify which hosts should no longer access the datastore.
a
Deselect the hosts on which you want to keep the datastore mounted.
By default, all hosts are selected.
Click Next.
Review the list of hosts from which to unmount the datastore, and click Finish.
After you unmount a VMFS datastore, the datastore becomes inactive and is dimmed in the host's datastore
list. An unmounted NFS datastore no longer appears on the list.
NOTE A datastore that is unmounted from some hosts but remains mounted on others is shown as active
in the Datastores and Datastore Clusters view.
If you use the vCenter Server system to manage your hosts, the new name appears on all hosts that have access
to the datastore.
To mount a shared datastore, see Mount Shared VMFS Datastores, on page 129.
To mount an unshared datastore, see Mount Unshared VMFS Datastores, on page 129.
Procedure
1
The device name is dimmed in the vSphere Client and becomes inaccessible. The operational state of the device
changes to Unmounted.
What to do next
If multiple hosts share the device, detach the device from each host.
Specify which hosts should access the datastore in the Mount Datastore wizard.
By default, all hosts are selected.
a
Deselect the hosts where you want to keep the datastore unmounted.
Click Next.
Review the list of hosts on which to mount the datastore and click Finish.
Log in to the vSphere Client and select the host on which to mount the unmounted datastore.
NOTE If the datastore is shared, it will be mounted only on the host you select. It will remain unmounted
on other hosts not specified in this procedure.
Click the Configuration tab and click Storage in the Hardware panel.
To verify the status of the device, see Check the Connection Status of a Storage Device, on page 130.
Run the esxcli --server=server_name storage core device list -d=device_ID command.
The following sample output shows that the device is not connected.
# esxcli --server=server_name storage core device list -d naa.XXXXXXXXXXXX4c
naa.XXXXXXXXXXXX4c
Display Name: XXXX Fibre Channel Disk(naa.XXXXXXXXXXXX4c)
Size: 20480
Device Type: Direct-Access
A diagnostic partition cannot be located on an iSCSI LUN accessed through the software iSCSI or
dependent hardware iSCSI adapter. For more information about diagnostic partitions with iSCSI, see
General Boot from iSCSI SAN Recommendations, on page 97.
Unless you are using diskless servers, set up a diagnostic partition on local storage.
Each host must have a diagnostic partition of 110MB. If multiple hosts share a diagnostic partition on a
SAN LUN, the partition should be large enough to accommodate core dumps of all hosts.
If a host that uses a shared diagnostic partition fails, reboot the host and extract log files immediately after
the failure. Otherwise, the second host that fails before you collect the diagnostic data of the first host
might not be able to save the core dump.
To manage the host's diagnostic partition, use the vCLI commands. See vSphere Command-Line Interface Concepts
and Examples.
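For example, a minimal sketch of inspecting the active diagnostic partition with the ESXi 5.0 esxcli namespaces:

esxcli --server=server_name system coredump partition list
esxcli --server=server_name system coredump partition get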
Log in to the vSphere Client and select the host from the Inventory panel.
Click the Configuration tab and click Storage in the Hardware panel.
Description
Private Local
Creates the diagnostic partition on a local disk. This partition stores fault
information only for your host.
Click Next.
Select the device to use for the diagnostic partition and click Next.
Procedure
1
Log in to your virtual machine and configure the disks as dynamic mirrored disks.
See Microsoft documentation.
Change virtual machine settings to allow the use of dynamic disk mirroring.
a
Name
Value
scsi#.returnNoConnectDuringAPD
True
scsi#.returnBusyOnNoConnectStatus
False
Click OK.
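These parameters end up as entries in the virtual machine's configuration (.vmx) file; a minimal sketch, assuming the # placeholder refers to virtual SCSI controller 0:

scsi0.returnNoConnectDuringAPD = "TRUE"
scsi0.returnBusyOnNoConnectStatus = "FALSE"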
14 Raw Device Mapping
Raw device mapping (RDM) provides a mechanism for a virtual machine to have direct access to a LUN on
the physical storage subsystem (Fibre Channel or iSCSI only).
The following topics contain information about RDMs and provide instructions on how to create and manage
RDMs.
This chapter includes the following topics:
n
Figure 14-1. Raw Device Mapping
[The virtual machine opens the mapping file on a VMFS volume; reads and writes are resolved through the mapping file (address resolution) to the mapped device.]
Although VMware recommends that you use VMFS datastores for most virtual disk storage, on certain
occasions, you might need to use raw LUNs or logical disks located in a SAN.
For example, you need to use raw LUNs with RDMs in the following situations:
n
When SAN snapshot or other layered applications run in the virtual machine. The RDM better enables
scalable backup offloading systems by using features inherent to the SAN.
In any MSCS clustering scenario that spans physical hosts (virtual-to-virtual clusters as well as physical-to-virtual clusters). In this case, cluster data and quorum disks should be configured as RDMs rather than
as virtual disks on a shared VMFS.
Think of an RDM as a symbolic link from a VMFS volume to a raw LUN. The mapping makes LUNs appear
as files in a VMFS volume. The RDM, not the raw LUN, is referenced in the virtual machine configuration. The
RDM contains a reference to the raw LUN.
Using RDMs, you can:
n
Use file system features such as distributed file locking, permissions, and naming.
Virtual compatibility mode allows an RDM to act exactly like a virtual disk file, including the use of
snapshots.
Physical compatibility mode allows direct access of the SCSI device for those applications that need lower
level control.
Provides a user-friendly name for a mapped device. When you use an RDM,
you do not need to refer to the device by its device name. You refer to it by the
name of the mapping file, for example:
/vmfs/volumes/myVolume/myVMDirectory/myRawDisk.vmdk
Dynamic Name
Resolution
Makes it possible to use VMFS distributed locking for raw SCSI devices.
Distributed locking on an RDM makes it safe to use a shared raw LUN without
losing data when two virtual machines on different servers try to access the
same LUN.
File Permissions
Makes file permissions possible. The permissions of the mapping file are
enforced at file-open time to protect the mapped volume.
Makes it possible to use file system utilities to work with a mapped volume,
using the mapping file as a proxy. Most operations that are valid for an ordinary
file can be applied to the mapping file and are redirected to operate on the
mapped device.
Snapshots
vMotion
Lets you migrate a virtual machine with vMotion. The mapping file acts as a
proxy to allow vCenter Server to migrate the virtual machine by using the same
mechanism that exists for migrating virtual disk files.
Figure 14-2. vMotion of a Virtual Machine Using Raw Device Mapping
[VM1 on Host 1 is migrated with vMotion to Host 2; both hosts use the same mapping file on the VMFS volume, which resolves addresses to the mapped device.]
SAN Management
Agents
N-Port ID Virtualization
(NPIV)
Makes it possible to use the NPIV technology that allows a single Fibre Channel
HBA port to register with the Fibre Channel fabric using several worldwide
port names (WWPNs). This ability makes the HBA port appear as multiple
virtual ports, each having its own ID and virtual port name. Virtual machines
can then claim each of these virtual ports and use them for all RDM traffic.
NOTE You can use NPIV only for virtual machines with RDM disks.
VMware works with vendors of storage management software to ensure that their software functions correctly
in environments that include ESXi. Some applications of this kind are:
n
Snapshot software
Replication software
Such software uses a physical compatibility mode for RDMs so that the software can access SCSI devices
directly.
Various management products are best run centrally (not on the ESXi machine), while others run well on the
virtual machines. VMware does not certify these applications or provide a compatibility matrix. To find out
whether a SAN management application is supported in an ESXi environment, contact the SAN management
software provider.
The RDM is not available for direct-attached block devices or certain RAID devices. The RDM uses a SCSI
serial number to identify the mapped device. Because block devices and some direct-attach RAID devices
do not export serial numbers, they cannot be used with RDMs.
If you are using the RDM in physical compatibility mode, you cannot use a snapshot with the disk. Physical
compatibility mode allows the virtual machine to manage its own, storage-based, snapshot or mirroring
operations.
Virtual machine snapshots are available for RDMs with virtual compatibility mode.
You cannot map to a disk partition. RDMs require the mapped device to be a whole LUN.
You cannot relocate larger than 2TB RDMs to datastores other than VMFS5.
You cannot convert larger than 2TB RDMs to virtual disks, or perform other operations that involve RDM
to virtual disk conversion. Such operations include cloning.
[Figure: Host 3 and Host 4 run VM3 and VM4 with shared access to the same mapping file on a VMFS volume; the mapping file resolves addresses to the mapped device.]
ESXi Features                  Virtual Disk File       Virtual Mode RDM                          Physical Mode RDM
SCSI commands passed through   No                      No                                        Yes (REPORT LUNs is not passed through)
vCenter Server support         Yes                     Yes                                       Yes
Snapshots                      Yes                     Yes                                       No
Distributed locking            Yes                     Yes                                       Yes
Clustering                     Cluster-in-a-box only   Cluster-in-a-box, cluster-across-boxes    Physical-to-virtual clustering, cluster-across-boxes
SCSI target-based software     No                      No                                        Yes
VMware recommends that you use virtual disk files for the cluster-in-a-box type of clustering. If you plan to
reconfigure your cluster-in-a-box clusters as cluster-across-boxes clusters, use virtual mode RDMs for the
cluster-in-a-box clusters.
In the Select a Disk page, select Raw Device Mapping, and click Next.
From the list of SAN disks or LUNs, select a raw LUN for your virtual machine to access directly.
Description
Physical
Allows the guest operating system to access the hardware directly. Physical
compatibility is useful if you are using SAN-aware applications on the virtual
machine. However, powered on virtual machines that use RDMs configured
for physical compatibility cannot be migrated if the migration involves
copying the disk. Such virtual machines cannot be cloned or cloned to a
template either.
Virtual
Allows the RDM to behave as if it were a virtual disk, so you can use such
features as snapshotting, cloning, and so on.
Description
Persistent
Nonpersistent
Changes to the disk are discarded when you power off or revert to the
snapshot.
Click Next.
In the Ready to Complete New Virtual Machine page, review your selections.
10
Log in as administrator or as the owner of the virtual machine to which the mapped disk belongs.
On the Hardware tab, select Hard Disk, then click Manage Paths.
Use the Manage Paths dialog box to enable or disable your paths, set multipathing policy, and specify the
preferred path.
For information on managing paths, see Chapter 17, Understanding Multipathing and Failover, on
page 151.
15 Solid State Disks Enablement
In addition to regular hard disk drives, ESXi supports Solid State Disks (SSDs).
Unlike the regular hard disks that are electromechanical devices containing moving parts, SSDs use
semiconductors as their storage medium and have no moving parts.
On several storage arrays, the ESXi host can automatically distinguish SSDs from traditional hard disks. To
tag the SSD devices that are not detected automatically, you can use PSA SATP claim rules.
This chapter includes the following topics:
n
It enables usage of SSD as swap space for improved system performance. For information about using
SSD datastores to allocate space for host cache, see the vSphere Resource Management documentation.
It increases virtual machine consolidation ratio as SSDs can provide very high I/O throughput.
You can add a claim rule that tags devices as SSD by specifying the vendor name and the model name.
esxcli storage nmp satp rule add -s SATP -V vendor_name -M model_name --option enable_ssd
You can also add a claim rule based on the transport protocol.
esxcli storage nmp satp rule add -s SATP --transport transport_protocol --option enable_ssd
You can unclaim the device by specifying the vendor name and the model name.
esxcli storage core claiming unclaim --type device -V vendor_name -M model_name
What to do next
If the SSD device that you want to tag is shared among multiple hosts, make sure that you tag the device from
all the hosts that share the device.
Remove the claim rule that was used to tag the device.
esxcli storage nmp satp rule remove -s SATP --device device_name
For example,
esxcli storage nmp satp rule remove -s VMW_SATP_CX --device naa.
6006016042fa19010a12d9b16d6ade11
For example,
esxcli storage nmp satp rule add -s VMW_SATP_CX --device naa.6006016042fa19010a12d9b16d6ade11
--option disable_ssd
Procedure
1
For example,
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL --device naa.6006016042fa19010a12d9b16d6ade11 --option=enable_local
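For the new rule to take effect, the device typically has to be unclaimed and reclaimed, after which you can list it to verify the result; a sketch reusing the device ID from the example above:

esxcli storage core claiming unclaim --type device --device naa.6006016042fa19010a12d9b16d6ade11
esxcli storage core claiming reclaim --device naa.6006016042fa19010a12d9b16d6ade11
esxcli storage core device list --device naa.6006016042fa19010a12d9b16d6ade11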
The command output includes the following information about the listed device.
Is SSD: true
Detection of virtual SSDs is supported on ESXi 5 hosts and Virtual Hardware version 8.
If VMDKs are located on shared VMFS datastores with SSD device extents, the device must be marked as
SSD on all hosts.
For a VMDK to be detected as virtual SSD, all underlying physical extents should be SSD-backed.
Use datastores that are created on SSD storage devices to allocate space for ESXi host cache. For more
information see the vSphere Resource Management documentation.
Make sure to use the latest firmware with SSD devices. Frequently check with your storage vendors for
any updates.
Carefully monitor how intensively you use the SSD device and calculate its estimated lifetime. The lifetime
expectancy depends on how actively you continue to use the SSD device.
16 VMkernel and Storage
The VMkernel is a high-performance operating system that runs directly on the ESXi host. The VMkernel
manages most of the physical resources on the hardware, including memory, physical processors, storage, and
networking controllers.
To manage storage, VMkernel has a storage subsystem that supports several Host Bus Adapters (HBAs)
including parallel SCSI, SAS, Fibre Channel, FCoE, and iSCSI. These HBAs connect a wide variety of active-active, active-passive, and ALUA storage arrays that are certified for use with the VMkernel. See the vSphere
Compatibility Guide for a list of the supported HBAs and storage arrays.
The primary file system that the VMkernel uses is the VMware Virtual Machine File System (VMFS). VMFS is
a cluster file system designed and optimized to support large files such as virtual disks and swap files. The
VMkernel also supports the storage of virtual disks on NFS file systems.
The storage I/O path provides virtual machines with access to storage devices through device emulation. This
device emulation allows a virtual machine to access files on a VMFS or NFS file system as if they were SCSI
devices. The VMkernel provides storage virtualization functions such as the scheduling of I/O requests from
multiple virtual machines and multipathing.
In addition, VMkernel offers several Storage APIs that enable storage partners to integrate and optimize their
products for vSphere.
The following graphic illustrates the basics of the VMkernel core, with special attention to the storage stack.
Storage-related modules reside between the logical device I/O scheduler and the adapter I/O scheduler layers.
[Figure: The VMkernel storage stack. Virtual machines and their guest operating systems issue I/O to the VMkernel, which presents raw disks, physical compatibility mode RDMs, regular VMDKs, and snapshot and configuration files through NFS and VMFS on top of block devices and the LVM, with scanning and Linux emulation layers above the FC, FCoE, iSCSI, SCSI, block, and IDE device drivers.]
Storage APIs
Storage APIs is a family of APIs used by third-party hardware, software, and storage providers to develop
components that enhance several vSphere features and solutions.
This publication describes the following sets of Storage APIs and explains how they contribute to your storage
environment. For information about other APIs from this family, including Storage API - Data Protection and
Storage API - Site Recovery Manager, see the VMware Web site.
n
Storage APIs - Multipathing, also known as the Pluggable Storage Architecture (PSA). PSA is a collection
of VMkernel APIs that allows storage partners to enable and certify their arrays asynchronous to ESXi
release schedules, as well as deliver performance-enhancing, multipathing and load-balancing behaviors
that are optimized for each array. For more information, see Managing Multiple Paths, on page 156.
Storage APIs - Array Integration, formerly known as VAAI, include the following APIs:
n
Hardware Acceleration APIs. Allows arrays to integrate with vSphere to transparently offload certain
storage operations to the array. This integration significantly reduces CPU overhead on the host. See
Chapter 18, Storage Hardware Acceleration, on page 171.
Array Thin Provisioning APIs. Help to monitor space use on thin-provisioned storage arrays to
prevent out-of-space conditions, and to perform space reclamation. See Array Thin Provisioning and
VMFS Datastores, on page 184.
Storage APIs - Storage Awareness. These vCenter Server-based APIs enable storage arrays to inform the
vCenter Server about their configurations, capabilities, and storage health and events. See Chapter 20,
Using Storage Vendor Providers, on page 189.
17 Understanding Multipathing and Failover
To maintain a constant connection between a host and its storage, ESXi supports multipathing. Multipathing
is a technique that lets you use more than one physical path that transfers data between the host and an external
storage device.
In case of a failure of any element in the SAN network, such as an adapter, switch, or cable, ESXi can switch
to another physical path, which does not use the failed component. This process of path switching to avoid
failed components is known as path failover.
In addition to path failover, multipathing provides load balancing. Load balancing is the process of distributing
I/O loads across multiple physical paths. Load balancing reduces or removes potential bottlenecks.
NOTE Virtual machine I/O might be delayed for up to sixty seconds while path failover takes place. These
delays allow the SAN to stabilize its configuration after topology changes. In general, the I/O delays might be
longer on active-passive arrays and shorter on active-active arrays.
This chapter includes the following topics:
n
[Figure: Two hosts, each with two HBAs (HBA1 through HBA4), connect through two switches to storage processors SP1 and SP2 on the storage array.]
Similarly, if SP1 fails or the links between SP1 and the switches break, SP2 takes over and provides the
connection between the switch and the storage device. This process is called SP failover. VMware ESXi supports
both HBA and SP failovers with its multipathing capability.
ESXi does not support multipathing when you combine an independent hardware adapter with software
iSCSI or dependent iSCSI adapters in the same host.
Multipathing between software and dependent adapters within the same host is supported.
On different hosts, you can mix both dependent and independent adapters.
The following illustration shows multipathing setups possible with different types of iSCSI initiators.
[Figure: Host-based path failover with iSCSI. Host 1 uses a hardware iSCSI adapter with HBA1 and HBA2; Host 2 uses the software iSCSI adapter with NIC1 and NIC2; both reach the storage processor (SP) of the iSCSI storage across the IP network.]
[Figure: Port redirection. The host connects to virtual port 10.0.0.1, is told to reconnect to 10.0.0.2, and then sends its storage traffic to 10.0.0.2.]
If the port on the storage system that is acting as the virtual port becomes unavailable, the storage system
reassigns the address of the virtual port to another port on the system. Port Reassignment shows an example
of this type of port reassignment. In this case, the virtual port 10.0.0.1 becomes unavailable and the storage
system reassigns the virtual port IP address to a different port. The second port responds to both addresses.
[Figure: Port reassignment. After the port acting as 10.0.0.1 fails, its address is reassigned to the port that owns 10.0.0.2, which then responds to both addresses.]
With this form of array-based failover, you can have multiple paths to the storage only if you use multiple
ports on the ESXi host. These paths are active-active. For additional information, see iSCSI Session
Management, on page 87.
Double-click TimeOutValue.
Set the value data to 0x3c (hexadecimal) or 60 (decimal) and click OK.
After you make this change, Windows waits at least 60 seconds for delayed disk operations to complete
before it generates errors.
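The same registry change can be scripted; a minimal sketch using reg.exe and the standard Windows disk timeout location:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f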
Definition
PSA
NMP
PSP
SATP
The VMkernel multipathing plug-in that ESXi provides by default is the VMware Native Multipathing Plug-In (NMP). The NMP is an extensible module that manages sub plug-ins. There are two types of NMP sub plug-ins: Storage Array Type Plug-Ins (SATPs) and Path Selection Plug-Ins (PSPs). SATPs and PSPs can be built in
and provided by VMware, or can be provided by a third party.
If more multipathing functionality is required, a third party can also provide an MPP to run in addition to, or
as a replacement for, the default NMP.
When coordinating the VMware NMP and any installed third-party MPPs, the PSA performs the following
tasks:
n
Routes I/O requests for a specific logical device to the MPP managing that device.
As the Pluggable Storage Architecture illustration shows, multiple third-party MPPs can run in parallel with
the VMware NMP. When installed, the third-party MPPs replace the behavior of the NMP and take complete
control of the path failover and the load-balancing operations for specified storage devices.
[Figure: Pluggable Storage Architecture. Third-party MPPs run alongside the VMware NMP; the NMP loads VMware and third-party SATPs and PSPs.]
Depending on a storage device, perform specific actions necessary to handle path failures and I/O
command retries.
VMware SATPs
Storage Array Type Plug-Ins (SATPs) run in conjunction with the VMware NMP and are responsible for array-specific operations.
ESXi offers a SATP for every type of array that VMware supports. It also provides default SATPs that support
non-specific active-active and ALUA storage arrays, and the local SATP for direct-attached devices. Each SATP
accommodates special characteristics of a certain class of storage arrays and can perform the array-specific
operations required to detect path state and to activate an inactive path. As a result, the NMP module itself
can work with multiple storage arrays without having to be aware of the storage device specifics.
After the NMP determines which SATP to use for a specific storage device and associates the SATP with the
physical paths for that storage device, the SATP implements the tasks that include the following:
n
Performs array-specific actions necessary for storage fail-over. For example, for active-passive devices, it
can activate passive paths.
VMware PSPs
Path Selection Plug-Ins (PSPs) are sub plug-ins of the VMware NMP and are responsible for choosing a physical
path for I/O requests.
The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the physical
paths for that device. You can override the default PSP. For information, see Change the Path Selection
Policy, on page 161.
By default, the VMware NMP supports the following PSPs:
VMW_PSP_MRU
The host selects the path that it used most recently. When the path becomes
unavailable, the host selects an alternative path. The host does not revert back
to the original path when that path becomes available again. There is no
preferred path setting with the MRU policy. MRU is the default policy for most
active-passive storage devices.
Displayed in the vSphere Client as the Most Recently Used (VMware) path
selection policy.
VMW_PSP_FIXED
The host uses the designated preferred path, if it has been configured.
Otherwise, it selects the first working path discovered at system boot time. If
you want the host to use a particular preferred path, specify it manually. Fixed
is the default policy for most active-active storage devices.
NOTE If the host uses a default preferred path and the path's status turns to
Dead, a new path is selected as preferred. However, if you explicitly designate
the preferred path, it will remain preferred even when it becomes inaccessible.
Displayed in the vSphere Client as the Fixed (VMware) path selection policy.
VMW_PSP_RR
The host uses an automatic path selection algorithm rotating through all active
paths when connecting to active-passive arrays, or through all available paths
when connecting to active-active arrays. RR is the default for a number of arrays
and can be used with both active-active and active-passive arrays to implement
load balancing across paths for different LUNs.
Displayed in the vSphere Client as the Round Robin (VMware) path selection
policy.
The PSP selects an appropriate physical path on which to issue the I/O.
The NMP issues the I/O request on the path selected by the PSP.
If the I/O operation reports an error, the NMP calls the appropriate SATP.
The SATP interprets the I/O command errors and, when appropriate, activates the inactive paths.
The PSP is called to select a new path on which to issue the I/O.
Paths available for issuing I/O to a LUN. A single or multiple working paths
currently used for transferring data are marked as Active (I/O).
NOTE For hosts that run ESX/ESXi version 3.5 or earlier, the term active means
the only path that the host is using to issue I/O to a LUN.
Standby
If active paths fail, the path can quickly become operational and can be used
for I/O.
Disabled
Dead
If you are using the Fixed path policy, you can see which path is the preferred path. The preferred path is
marked with an asterisk (*) in the Preferred column.
For each path you can also display the path's name. The name includes parameters that describe the path:
adapter ID, target ID, and device ID. Usually, the path's name has the format similar to the following:
fc.adapterID-fc.targetID-naa.deviceID
NOTE When you use the host profiles editor to edit paths, you must specify all three parameters that describe
a path, adapter ID, target ID, and device ID.
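Path names in this format can also be listed from the command line; a minimal sketch with esxcli, reusing the hypothetical device ID from an earlier example:

esxcli --server=server_name storage core path list
esxcli --server=server_name storage core path list --device naa.XXXXXXXXXXXX4c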
Log in to the vSphere Client and select a host from the inventory panel.
Click the Configuration tab and click Storage in the Hardware panel.
From the list of configured datastores, select the datastore whose paths you want to view, and click
Properties.
Under Extents, select the storage device whose paths you want to view and click Manage Paths.
(Optional) To extract the path's parameters, right-click the path and select Copy path to clipboard.
Log in to the vSphere Client and select a server from the inventory panel.
Click the Configuration tab and click Storage in the Hardware panel.
Select the storage device whose paths you want to view and click Manage Paths.
(Optional) To extract the path's parameters, right-click the path and select Copy path to clipboard.
The host uses the designated preferred path, if it has been configured.
Otherwise, it selects the first working path discovered at system boot time. If
you want the host to use a particular preferred path, specify it manually. Fixed
is the default policy for most active-active storage devices.
NOTE If the host uses a default preferred path and the path's status turns to
Dead, a new path is selected as preferred. However, if you explicitly designate
the preferred path, it will remain preferred even when it becomes inaccessible.
The host selects the path that it used most recently. When the path becomes
unavailable, the host selects an alternative path. The host does not revert back
to the original path when that path becomes available again. There is no
preferred path setting with the MRU policy. MRU is the default policy for most
active-passive storage devices.
The host uses an automatic path selection algorithm rotating through all active
paths when connecting to active-passive arrays, or through all available paths
when connecting to active-active arrays. RR is the default for a number of arrays
and can be used with both active-active and active-passive arrays to implement
load balancing across paths for different LUNs.
Open the Manage Paths dialog box either from the Datastores or Devices view.
Fixed (VMware)
For the fixed policy, specify the preferred path by right-clicking the path you want to assign as the
preferred path, and selecting Preferred.
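The path selection policy can also be viewed and changed from the command line; a minimal sketch with esxcli, again with a hypothetical device ID:

esxcli --server=server_name storage nmp device list --device naa.XXXXXXXXXXXX4c
esxcli --server=server_name storage nmp device set --device naa.XXXXXXXXXXXX4c --psp VMW_PSP_RR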
Disable Paths
You can temporarily disable paths for maintenance or other reasons. You can do so using the vSphere Client.
Procedure
1
Open the Manage Paths dialog box either from the Datastores or Devices view.
In the Paths panel, right-click the path to disable, and select Disable.
You can also disable a path from the adapters Paths view by right-clicking the path in the list and selecting
Disable.
Multipathing Considerations
Specific considerations apply when you manage storage multipathing plug-ins and claim rules.
The following considerations help you with multipathing:
n
If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is
VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules
first. If there is no match, the vendor/model rules are searched, and finally the transport rules are searched.
If no match occurs, NMP selects a default SATP for the device.
If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim
rule match occurs for this device. The device is claimed by the default SATP based on the device's transport
type.
The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU
selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path
if there is no active/optimized path. This path is used until a better path is available (MRU). For example,
if the VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path
becomes available, the VMW_PSP_MRU will switch the current path to the active/optimized one.
If you enable VMW_PSP_FIXED with VMW_SATP_ALUA, the host initially makes an arbitrary selection
of the preferred path, regardless of whether the ALUA state is reported as optimized or unoptimized. As
a result, VMware does not recommend to enable VMW_PSP_FIXED when VMW_SATP_ALUA is used
for an ALUA-compliant storage array.
The exception is when you assign the preferred path to be to one of the redundant storage processor (SP)
nodes within an active-active storage array. The ALUA state is irrelevant.
By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless you
want to unmask these devices.
Vendor/model strings
In the procedure, --server=server_name specifies the target server. The specified target server prompts you
for a user name and password. Other connection options, such as a configuration file or session file, are
supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
u
Run the esxcli --server=server_name storage core claimrule list --claimrule-class=MP command
to list the multipathing claim rules.
Example: Sample Output of the esxcli storage core claimrule list Command
Rule Class  Rule   Class    Type       Plugin     Matches
MP          0      runtime  transport  NMP        transport=usb
MP          1      runtime  transport  NMP        transport=sata
MP          2      runtime  transport  NMP        transport=ide
MP          3      runtime  transport  NMP        transport=block
MP          4      runtime  transport  NMP        transport=unknown
MP          101    runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          101    file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          200    runtime  vendor     MPP_1      vendor=NewVend model=*
MP          200    file     vendor     MPP_1      vendor=NewVend model=*
MP          201    runtime  location   MPP_2      adapter=vmhba41 channel=* target=* lun=*
MP          201    file     location   MPP_2      adapter=vmhba41 channel=* target=* lun=*
MP          202    runtime  driver     MPP_3      driver=megaraid
MP          202    file     driver     MPP_3      driver=megaraid
MP          65535  runtime  vendor     NMP        vendor=* model=*
The NMP claims all paths connected to storage devices that use the USB, SATA, IDE, and Block SCSI
transportation.
You can use the MASK_PATH module to hide unused devices from your host. By default, the PSA claim
rule 101 masks Dell array pseudo devices with a vendor string of DELL and a model string of Universal
Xport.
The MPP_1 module claims all paths connected to any model of the NewVend storage array.
The MPP_3 module claims the paths to storage devices controlled by the Mega-RAID device driver.
Any paths not described in the previous rules are claimed by NMP.
The Rule Class column in the output describes the category of a claim rule. It can be MP (multipathing
plug-in), Filter, or VAAI.
The Class column shows which rules are defined and which are loaded. The file parameter in the Class
column indicates that the rule is defined. The runtime parameter indicates that the rule has been loaded
into your system. For a user-defined claim rule to be active, two lines with the same rule number should
exist: one line for the rule with the file parameter and another line with runtime. Several low-numbered
rules have only one line with the Class of runtime. These are system-defined claim rules that you cannot
modify.
This command typically shows the NMP and, if loaded, the MASK_PATH module. If any third-party MPPs
have been loaded, they are listed as well.
For each SATP, the output displays information that shows the type of storage array or system this SATP
supports and the default PSP for any LUNs using this SATP. Placeholder (plugin not loaded) in the
Description column indicates that the SATP is not loaded.
Use the --device | -d=device_ID option to filter the output of this command to show a single device.
Option                       Description
-A|--adapter=<str>           Indicate the adapter of the paths to use in this operation.
-u|--autoassign              The system automatically assigns a rule ID.
-C|--channel=<long>          Indicate the channel of the paths to use in this operation.
-c|--claimrule-class=<str>   Indicate the claim rule class to use in this operation. Valid values are MP, Filter, and VAAI.
-d|--device=<str>            Indicate the device to use for this operation.
-D|--driver=<str>            Indicate the driver of the paths to use in this operation.
-f|--force                   Force claim rules to ignore validity checks and install the rule anyway.
--if-unset=<str>             Execute this command if this advanced user variable is not set to 1.
-i|--iqn=<str>               Indicate the iSCSI Qualified Name for the target to use in this operation.
-L|--lun=<long>              Indicate the LUN of the paths to use in this operation.
-M|--model=<str>             Indicate the model of the paths to use in this operation.
-P|--plugin=<str>            Indicate the PSA plug-in to use for this operation.
-r|--rule=<long>             Indicate the rule ID to use for this operation.
-T|--target=<long>           Indicate the target of the paths to use in this operation.
-R|--transport=<str>         Indicate the transport of the paths to use in this operation.
-t|--type=<str>              Indicate the type of matching to use for this operation, for example, vendor, location, driver, transport, or device.
-V|--vendor=<str>            Indicate the vendor of the paths to use in this operation.
--wwnn=<str>                 Indicate the World-Wide Node Number for the target to use in this operation.
--wwpn=<str>                 Indicate the World-Wide Port Number for the target to use in this operation.
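For example, a command of this form defines a rule that assigns the NMP to all paths from a hypothetical NewVend/NewMod array; the rule ID, vendor, and model strings are placeholders that match the sample output shown later in this section:
# esxcli --server=server_name storage core claimrule add -r 500 -t vendor -V NewVend -M NewMod -P NMP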
To load the new claim rule into your system, run the following command:
esxcli --server=server_name storage core claimrule load
This command loads all newly created multipathing claim rules from your system's configuration file.
After you run the esxcli --server=server_name storage core claimrule list command, you can see the
new claim rule appearing on the list.
NOTE The two lines for the claim rule, one with the Class of runtime and another with the Class of file, indicate
that the new claim rule has been loaded into the system and is active.
Rule Class  Rule  Class    Type       Plugin     Matches
MP          0     runtime  transport  NMP        transport=usb
MP          1     runtime  transport  NMP        transport=sata
MP          2     runtime  transport  NMP        transport=ide
MP          3     runtime  transport  NMP        transport=block
MP          4     runtime  transport  NMP        transport=unknown
MP          101   runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          101   file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          500   runtime  vendor     NMP        vendor=NewVend model=NewMod
MP          500   file     vendor     NMP        vendor=NewVend model=NewMod
NOTE By default, the PSA claim rule 101 masks Dell array pseudo devices. Do not delete this rule, unless
you want to unmask these devices.
To delete a claim rule, use the esxcli --server=server_name storage core claimrule remove command. The command takes the following options:
Option                       Description
-c|--claimrule-class=<str>   Indicate the claim rule class to use in this operation (MP, Filter, VAAI).
-P|--plugin=<str>            Indicate the plug-in to use for this operation.
-r|--rule=<long>             Indicate the rule ID to use for this operation.
1
Delete the claim rule by running the esxcli --server=server_name storage core claimrule remove -r claimrule_ID command.
This step removes the claim rule from the File class.
2
Reload the claim rules by running the esxcli --server=server_name storage core claimrule load command.
This step removes the claim rule from the Runtime class.
Mask Paths
You can prevent the host from accessing storage devices or LUNs or from using individual paths to a LUN.
Use the esxcli commands to mask the paths. When you mask paths, you create claim rules that assign the
MASK_PATH plug-in to the specified paths.
In the procedure, --server=server_name specifies the target server. The specified target server prompts you
for a user name and password. Other connection options, such as a configuration file or session file, are
supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
1
Check the existing rule IDs by running the esxcli --server=server_name storage core claimrule list command.
The claim rules that you use to mask paths should have rule IDs in the range of 101 through 200. If this command
shows that rules 101 and 102 already exist, you can specify 103 for the rule to add.
Assign the MASK_PATH plug-in to a path by creating a new claim rule for the plug-in.
esxcli --server=server_name storage core claimrule add -P MASK_PATH
If a claim rule for the masked path exists, remove the rule.
esxcli --server=server_name storage core claiming unclaim
After you assign the MASK_PATH plug-in to a path, the path state becomes irrelevant and is no longer
maintained by the host. As a result, commands that display the masked path's information might show the
path state as dead.
For example, the following claim rules mask LUN 20 on targets T1 and T2, accessed through adapters vmhba2 and vmhba3. The rule IDs 109 through 112 assume that those IDs are available.
#esxcli --server=server_name storage core claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
#esxcli --server=server_name storage core claimrule add -P MASK_PATH -r 110 -t location -A vmhba3 -C 0 -T 1 -L 20
#esxcli --server=server_name storage core claimrule add -P MASK_PATH -r 111 -t location -A vmhba2 -C 0 -T 2 -L 20
#esxcli --server=server_name storage core claimrule add -P MASK_PATH -r 112 -t location -A vmhba3 -C 0 -T 2 -L 20
Unmask Paths
When you need the host to access the masked storage device, unmask the paths to the device.
In the procedure, --server=server_name specifies the target server. The specified target server prompts you
for a user name and password. Other connection options, such as a configuration file or session file, are
supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
1
Delete the MASK_PATH claim rule that masks the device.
esxcli --server=server_name storage core claimrule remove -r rule_ID
2
Reload the path claiming rules from the configuration file into the VMkernel.
esxcli --server=server_name storage core claimrule load
3
Run the esxcli --server=server_name storage core claiming unclaim command for each path to the
masked storage device.
For example:
esxcli --server=server_name storage core claiming unclaim -t location -A vmhba0 -C 0 -T 0 -L
149
4
Run the path claiming rules.
esxcli --server=server_name storage core claimrule run
Your host can now access the previously masked storage device.
To add a claim rule for a specific SATP, run the esxcli --server=server_name storage nmp satp rule add command. The command takes the following
options.
Option                    Description
-b|--boot                 This is a system default rule added at boot time. Do not modify esx.conf or add to a host profile.
-c|--claim-option=string  Set the claim option string when adding a SATP claim rule.
-e|--description=string   Set the claim rule description when adding a SATP claim rule.
-d|--device=string        Set the device when adding SATP claim rules. Device rules are mutually exclusive with vendor/model and driver rules.
-D|--driver=string        Set the driver string when adding a SATP claim rule. Driver rules are mutually exclusive with vendor/model rules.
-f|--force                Force claim rules to ignore validity checks and install the rule anyway.
-h|--help                 Show the help message.
-M|--model=string         Set the model string when adding a SATP claim rule. Vendor/model rules are mutually exclusive with driver rules.
-o|--option=string        Set the option string when adding a SATP claim rule.
-P|--psp=string           Set the default PSP for the SATP claim rule.
-O|--psp-option=string    Set the PSP options for the SATP claim rule.
-s|--satp=string          The SATP for which a new rule is added.
-R|--transport=string     Set the claim transport type string when adding a SATP claim rule.
-t|--type=string          Set the claim type when adding a SATP claim rule.
-V|--vendor=string        Set the vendor string when adding SATP claim rules. Vendor/model rules are mutually exclusive with driver rules.
NOTE When searching the SATP rules to locate a SATP for a given device, the NMP searches the driver
rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules. If
there is still no match, NMP selects a default SATP for the device.
2
If you run the esxcli --server=server_name storage nmp satp list -s VMW_SATP_INV command, you can see
the new rule added to the list of VMW_SATP_INV rules.
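A command of this form adds such a rule; NewVend and NewMod are placeholder vendor and model strings:
# esxcli --server=server_name storage nmp satp rule add -s VMW_SATP_INV -V NewVend -M NewMod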
18
Storage Hardware Acceleration
The hardware acceleration functionality enables the ESXi host to integrate with compliant storage arrays and
offload specific virtual machine and storage management operations to storage hardware. With the storage
hardware assistance, your host performs these operations faster and consumes less CPU, memory, and storage
fabric bandwidth.
The hardware acceleration is supported by block storage devices, Fibre Channel and iSCSI, and NAS devices.
This chapter includes the following topics:
n
VMFS clustered locking and metadata operations for virtual machine files
NAS Devices
Not supported
NOTE If your SAN or NAS storage fabric uses an intermediate appliance in front of a storage system that
supports hardware acceleration, the intermediate appliance must also support hardware acceleration and be
properly certified. The intermediate appliance might be a storage virtualization appliance, I/O acceleration
appliance, encryption appliance, and so on.
Full copy, also called clone blocks or copy offload. Enables the storage arrays to make full copies of data
within the array without having the host read and write the data. This operation reduces the time and
network load when cloning virtual machines, provisioning from a template, or migrating with vMotion.
Block zeroing, also called write same. Enables storage arrays to zero out a large number of blocks to provide
newly allocated storage, free of previously written data. This operation reduces the time and network load
when creating virtual machines and formatting virtual disks.
Hardware assisted locking, also called atomic test and set (ATS). Supports discrete virtual machine locking
without use of SCSI reservations. This operation allows disk locking per sector, instead of the entire LUN
as with SCSI reservations.
Check with your vendor for the hardware acceleration support. Certain storage arrays require that you activate
the support on the storage side.
On your host, the hardware acceleration is enabled by default. If your storage does not support the hardware
acceleration, you can disable it using the vSphere Client.
In addition to hardware acceleration support, ESXi includes support for array thin provisioning. For
information, see Array Thin Provisioning and VMFS Datastores, on page 184.
Click the Configuration tab, and click Advanced Settings under Software.
VMFS3.HardwareAcceleratedLocking
DataMover.HardwareAcceleratedMove
DataMover.HardwareAcceleratedInit
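These values can also be changed from the command line; the following is a sketch, assuming the esxcli system settings advanced namespace of this ESXi release, where an integer value of 0 disables and 1 enables the option:
# esxcli --server=server_name system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0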
Procedure
u
Run the esxcli --server=server_name storage core plugin list --plugin-class=value command.
For value, enter one of the following options:
n
VAAI to display the VAAI plug-ins.
n
Filter to display the VAAI filter.
Run the esxcli --server=server_name storage core device list -d=device_ID command.
The output shows the hardware acceleration, or VAAI, status that can be unknown, supported, or
unsupported.
# esxcli --server=server_name storage core device list -d naa.XXXXXXXXXXXX4c
naa.XXXXXXXXXXXX4c
Display Name: XXXX Fibre Channel Disk(naa.XXXXXXXXXXXX4c)
Size: 20480
Device Type: Direct-Access
Multipath Plugin: NMP
XXXXXXXXXXXXXXXX
Attached Filters: VAAI_FILTER
VAAI Status: supported
XXXXXXXXXXXXXXXX
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
u
Run the esxcli --server=server_name storage core device vaai status get -d=device_ID command.
If the device is managed by a VAAI plug-in, the output shows the name of the plug-in attached to the
device. The output also shows the support status for each T10 SCSI based primitive, if available. Output
appears in the following example:
# esxcli --server=server_name storage core device vaai status get -d naa.XXXXXXXXXXXX4c
naa.XXXXXXXXXXXX4c
VAAI Plugin Name: VMW_VAAIP_SYMM
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: unsupported
In this example, the filter claim rules specify devices that should be claimed by the VAAI_FILTER filter.
# esxcli --server=server_name storage core claimrule list --claimrule-class=Filter
Rule Class  Rule   Class    Type    Plugin       Matches
Filter      65430  runtime  vendor  VAAI_FILTER  vendor=EMC model=SYMMETRIX
Filter      65430  file     vendor  VAAI_FILTER  vendor=EMC model=SYMMETRIX
Filter      65431  runtime  vendor  VAAI_FILTER  vendor=DGC model=*
Filter      65431  file     vendor  VAAI_FILTER  vendor=DGC model=*
In this example, the VAAI claim rules specify devices that should be claimed by a particular VAAI plugin.
esxcli --server=server_name storage core claimrule list --claimrule-class=VAAI
Rule Class  Rule   Class    Type    Plugin          Matches
VAAI        65430  runtime  vendor  VMW_VAAIP_SYMM  vendor=EMC model=SYMMETRIX
VAAI        65430  file     vendor  VMW_VAAIP_SYMM  vendor=EMC model=SYMMETRIX
VAAI        65431  runtime  vendor  VMW_VAAIP_CX    vendor=DGC model=*
VAAI        65431  file     vendor  VMW_VAAIP_CX    vendor=DGC model=*
In the procedure, --server=server_name specifies the target server. The specified target server prompts you
for a user name and password. Other connection options, such as a configuration file or session file, are
supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
1
Define a new claim rule for the VAAI filter by running the
esxcli --server=server_name storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER command.
2
Define a new claim rule for the VAAI plug-in by running the
esxcli --server=server_name storage core claimrule add --claimrule-class=VAAI --plugin=VAAI_plugin command.
NOTE Only the Filter-class rules need to be run. When the VAAI filter claims a device, it automatically
finds the proper VAAI plug-in to attach.
Example: Defining Hardware Acceleration Claim Rules
This example shows how to configure hardware acceleration for IBM arrays using the VMW_VAAIP_T10 plug-in. Use the following sequence of commands. For information about the options that the command takes, see
Add Multipathing Claim Rules, on page 165.
# esxcli --server=server_name storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER --type=vendor --vendor=IBM --autoassign
# esxcli --server=server_name storage core claimrule add --claimrule-class=VAAI --plugin=VMW_VAAIP_T10 --type=vendor --vendor=IBM --autoassign
# esxcli --server=server_name storage core claimrule load --claimrule-class=Filter
# esxcli --server=server_name storage core claimrule load --claimrule-class=VAAI
# esxcli --server=server_name storage core claimrule run --claimrule-class=Filter
Full file clone. This operation is similar to the VMFS block cloning except that NAS devices clone entire
files instead of file segments.
Reserve space. Enables storage arrays to allocate space for a virtual disk file in thick format.
Typically, when you create a virtual disk on an NFS datastore, the NAS server determines the allocation
policy. The default allocation policy on most NAS servers is thin and does not guarantee backing storage
to the file. However, the reserve space operation can instruct the NAS device to use vendor-specific
mechanisms to reserve space for a virtual disk. As a result, you can create thick virtual disks on the NFS
datastore.
Lazy file clone. Allows VMware View to offload creation of linked clones to a NAS array.
Extended file statistics. Enables storage arrays to accurately report space utilization.
With NAS storage devices, the hardware acceleration integration is implemented through vendor-specific NAS
plug-ins. These plug-ins are typically created by vendors and are distributed as VIB packages through a web
page. No claim rules are required for the NAS plug-ins to function.
There are several tools available for installing and upgrading VIB packages. They include the esxcli commands
and vSphere Update Manager. For more information, see the vSphere Upgrade and Installing and Administering
VMware vSphere Update Manager documentation.
The command controls which VIB package is allowed on the host. The value can be one of the following:
VMwareCertified
VMwareAccepted
PartnerSupported
CommunitySupported
The URL specifies the URL to the VIB package to install. http:, https:, ftp:, and file: are supported.
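A sketch of such an install command; the web server and package file name are placeholders, not names from this guide:
# esxcli --server=server_name software vib install -v http://web_server/nas_plugin.vib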
4
This topic discusses how to update a VIB package by using the esxcli command. For more details, see the vSphere
Upgrade documentation.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
1
The URL specifies the URL to the VIB package to update. http:, https:, ftp:, and file: are supported.
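A sketch of such an update command, again with a placeholder URL and package name:
# esxcli --server=server_name software vib update -v http://web_server/nas_plugin_update.vib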
2
Hardware-accelerated data movement fails, and the host reverts to software data movement, in the following cases:
The source and destination VMFS datastores have different block sizes.
The source file type is RDM and the destination file type is non-RDM (regular file).
The source VMDK type is eagerzeroedthick and the destination VMDK type is thin.
The logical address and transfer length in the requested operation are not aligned to the minimum
alignment required by the storage device. All datastores created with the vSphere Client are aligned
automatically.
The VMFS has multiple LUNs or extents, and they are on different arrays.
Hardware cloning between arrays, even within the same VMFS datastore, does not work.
19
With ESXi, you can use two models of thin provisioning, array-level and virtual disk-level.
Thin provisioning is a method that optimizes storage utilization by allocating storage space in a flexible, on-demand manner. Thin provisioning contrasts with the traditional model, called thick provisioning. With thick
provisioning, a large amount of storage space is provided in advance in anticipation of future storage needs.
However, the space might remain unused, causing underutilization of storage capacity.
The VMware thin provisioning features help you eliminate storage underutilization problems at the datastore
and storage array level.
This chapter includes the following topics:
n
Storage Over-Subscription
Thin provisioning allows you to report more virtual storage space than there is real physical capacity. This
discrepancy can lead to storage over-subscription, also called over-provisioning.
When you use thin provisioning, you should monitor actual storage usage to avoid conditions when you run
out of physical storage space.
(Figure: thick and thin virtual disks on a datastore, comparing the provisioned capacity of each virtual disk with the capacity actually used on the datastore.)
You can use Storage vMotion to transform virtual disks from one format to another.
Thick Provision Lazy Zeroed
Creates a virtual disk in a default thick format. Space required for the virtual
disk is allocated when the virtual disk is created. Data remaining on the
physical device is not erased during creation, but is zeroed out on demand at
a later time on first write from the virtual machine.
Using the default flat virtual disk format does not zero out or eliminate the
possibility of recovering deleted files or restoring old data that might be present
on this allocated space. You cannot convert a flat disk to a thin disk.
Thick Provision Eager Zeroed
A type of thick virtual disk that supports clustering features such as Fault
Tolerance. Space required for the virtual disk is allocated at creation time. In
contrast to the flat format, the data remaining on the physical device is zeroed
out when the virtual disk is created. It might take much longer to create disks
in this format than to create other types of disks.
Thin Provision
Use this format to save storage space. For the thin disk, you provision as much
datastore space as the disk would require based on the value that you enter for
the disk size. However, the thin disk starts small and at first, uses only as much
datastore space as the disk needs for its initial operations.
NOTE If a virtual disk supports clustering solutions such as Fault Tolerance,
do not make the disk thin.
If the thin disk needs more space later, it can grow to its maximum capacity
and occupy the entire datastore space provisioned to it. Also, you can manually
convert the thin disk into a thick disk.
Provisioned Storage Shows datastore space guaranteed to the virtual machine. The entire space
might not be used by the virtual machine if it has disks in thin provisioned format. Other virtual
machines can occupy any unused space.
Not-shared Storage Shows datastore space occupied by the virtual machine and not shared with
any other virtual machines.
Used Storage Shows datastore space actually occupied by virtual machine files, including
configuration and log files, snapshots, virtual disks, and so on. When the virtual machine is running,
the used storage space also includes swap files.
Click Edit Settings to display the Virtual Machine Properties dialog box.
Click the Hardware tab and select the appropriate hard disk in the Hardware list.
The Disk Provisioning section on the right shows the type of your virtual disk.
Click OK.
What to do next
If your virtual disk is in the thin format, you can inflate it to its full size.
Click the Summary tab and, under Resources, double-click the datastore for the virtual machine to open
the Datastore Browser dialog box.
Click the virtual machine folder to find the virtual disk file you want to convert. The file has the .vmdk
extension.
The inflated virtual disk occupies the entire datastore space originally provisioned to it.
Using thin provision integration, your host can perform these tasks:
n
Monitor the use of space on thin-provisioned LUNs to avoid running out of physical space. As your
datastore grows, or if you use Storage vMotion to migrate virtual machines to a thin-provisioned LUN,
the host communicates with the LUN and warns you about breaches in physical space and about out-of-space conditions.
Inform the array about the datastore space that is freed when files are deleted or removed from the
datastore by Storage vMotion. The array can then reclaim the freed blocks of space.
NOTE ESXi does not support enabling and disabling of thin provisioning on a storage device.
Requirements
To use the thin provision reporting feature, your host and storage array must meet the following requirements:
n
Storage array has appropriate firmware that supports T10-based Storage APIs - Array Integration (Thin
Provisioning). For information, contact your storage provider and check the HCL.
Using storage-specific tools, your storage administrator provisions a thin LUN and sets a soft threshold
limit that, when reached, triggers an alert. This step is vendor-specific.
Using the vSphere Client, you create a VMFS datastore on the thin-provisioned LUN. The datastore spans
the entire logical size that the LUN reports.
As the space used by the datastore increases and reaches the specified soft threshold, the following actions
take place:
a
If no space is left to allocate to the thin-provisioned LUN, the following actions take place:
a
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
u
Run the esxcli --server=server_name storage core device list -d=device_ID command.
The following thin provisioning status indicates that the storage device is thin-provisioned.
# esxcli --server=server_name storage core device list -d naa.XXXXXXXXXXXX4c
naa.XXXXXXXXXXXX4c
Display Name: XXXX Fibre Channel Disk(naa.XXXXXXXXXXXX4c)
Size: 20480
Device Type: Direct-Access
Multipath Plugin: NMP
---------------------
Thin Provisioning Status: yes
Attached Filters: VAAI_FILTER
VAAI Status: supported
---------------------
Procedure
1
On the ESXi Shell, change to the datastore directory by using the cd /vmfs/volumes/datastore_name
command.
2
Reclaim a specified percentage of free capacity on the VMFS5 datastore for the thin-provisioned device
by running vmkfstools -y %.
% is an integer between 1 and 99. It represents a percentage of VMFS free capacity to be reclaimed. The
recommended value is 60.
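For example, assuming a datastore named my_datastore (a placeholder name), the sequence looks like this:
# cd /vmfs/volumes/my_datastore
# vmkfstools -y 60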
20
Using Storage Vendor Providers
When using vendor provider components, the vCenter Server can integrate with external storage, both block
storage and NFS, so that you can gain a better insight into resources and obtain comprehensive and meaningful
storage data.
The vendor provider is a software plug-in developed by a third party through the Storage APIs - Storage
Awareness. The vendor provider component is typically installed on the storage array side and acts as a server
in the vSphere environment. The vCenter Server uses vendor providers to retrieve information about storage
topology, capabilities, and status.
For information about whether your storage supports the vendor provider plug-ins, contact your storage
vendor.
If your storage supports vendor providers, use the Storage Providers menu option in the vSphere Client to
register and manage each vendor provider component.
This chapter includes the following topics:
n
Storage topology. Information about physical storage elements appears on the Storage Views tab. It
includes such data as storage arrays, array IDs, and so on.
This type of information can be helpful when you need to track virtual machine-to-storage relationships
and configuration, or to identify any changes in physical storage configuration.
For more information, see the vSphere Monitoring and Performance documentation.
Storage capabilities. The vendor provider collects and communicates information about physical
capabilities and services that underlying storage offers.
This information can be useful when, for example, you need to properly aggregate storage into tiers, or
select the right storage, in terms of space and performance, for a particular virtual machine.
The capabilities appear on the list of system-defined storage capabilities. For details, see Understanding
Storage Capabilities, on page 194.
Storage status. This category includes reporting about status of various storage entities. It also includes
alarms and events for notifying about configuration changes.
This type of information can help you troubleshoot storage connectivity and performance problems. It
can also help you to correlate array-generated events and alarms to corresponding performance and load
changes on the array.
Storage arrays that support Storage APIs - Storage Awareness plug-ins. The vendor provider component
must be installed on the storage side. See the vSphere Compatibility Guide or check with your storage vendor.
NOTE Fibre Channel over Ethernet (FCoE) does not support vendor providers.
The following considerations exist when you use the vendor providers:
n
Both block storage and file system storage devices can use vendor providers.
Multiple vCenter Servers can simultaneously connect to a single instance of a vendor provider.
A single vCenter Server can simultaneously connect to multiple different vendor providers. It is possible
to have a different vendor provider for each type of physical storage device available to your host.
Events indicate important changes in the storage configuration. Such changes might include creation and
deletion of a LUN, or a LUN becoming inaccessible due to LUN masking.
For a standalone host, the vSphere Client displays storage events in the Events tab. For managed hosts,
information is displayed in the Tasks & Events tab.
Alarms indicate a change in storage system availability. For example, when you use profile-based storage
management, you can specify virtual machine storage requirements. When changes to underlying storage
occur that might violate the storage requirements of the virtual machine, an alarm gets triggered.
For more information about events and alarms, see the vSphere Monitoring and Performance documentation.
Thin-provisioned LUNs have special reporting requirements. For information about space monitoring on thin-provisioned LUNs, see Array Thin Provisioning and VMFS Datastores, on page 184.
Click Add.
In the Add Vendor Provider dialog box, type connection information for the vendor provider, including
the name, URL, and credentials.
(Optional) To direct the vCenter Server to the vendor provider certificate, select the Use Vendor Provider
Certificate option and specify the certificate's location.
If you do not select this option, the vSphere Client displays a thumbprint of the certificate. You can check
the thumbprint and approve it.
The vCenter Server has registered the vendor provider and established a secure SSL connection with it.
Direct the vCenter Server to the vendor provider certificate. In the Add Vendor Provider dialog box, select
the Use Vendor Provider Certificate option and specify the certificate's location.
Use a thumbprint of the vendor provider certificate. If you do not direct the vCenter Server to use the
provider certificate, the vSphere Client displays the certificate thumbprint. You can check the thumbprint
and approve it. The vCenter Server adds the certificate to the truststore and proceeds with the connection.
The vendor provider adds the vCenter Server certificate to its truststore when the vCenter Server first connects
to the provider.
Procedure
1
In the Vendor Providers list, view the vendor provider components registered with the vCenter Server.
The list shows general vendor information including the name, URL, and the time of the last view refresh.
To display additional details, select a specific vendor provider from the list.
The details include storage array vendors and array models that the vendor provider supports.
NOTE A single vendor provider can support storage arrays from multiple different vendors.
From the list of vendor providers, select the one you want to unregister and click Remove.
The vCenter Server terminates the connection and removes the vendor provider from its configuration.
From the list, select the vendor provider that you want to synchronize with and click Sync.
The vSphere Client updates the storage data for the provider.
21
With profile-driven storage, you use storage capabilities and virtual machine storage profiles to ensure that
virtual machines use storage that guarantees a certain level of capacity, performance, availability, redundancy,
and so on.
NOTE Profile-driven storage does not support RDMs.
To manage storage placement by using virtual machine storage profiles, you must perform the following tasks:
1
Verify that system-defined storage capabilities appear in the Manage Storage Capabilities dialog box, if
your storage system supports the Storage APIs - Storage Awareness.
For more information about Storage APIs - Storage Awareness, see Chapter 20, Using Storage Vendor
Providers, on page 189.
Create virtual machine storage profiles by defining the storage capabilities that an application running on
a virtual machine requires.
Associate a virtual machine storage profile with the virtual machine files or virtual disks.
Verify that virtual machines and virtual disks use datastores that are compliant with their associated
virtual machine storage profile.
In the vSphere Client, select View > Management > VM Storage Profiles.
In the VM Storage Profiles view of the vSphere Client, click Manage Storage Capabilities.
The Manage Storage Capabilities dialog box appears.
View the names and descriptions of the storage capabilities in the Name and Description column.
View the System and User-defined types of the existing storage capabilities.
a
What to do next
Modify the list of existing user-defined storage capabilities by using the Add, Remove, or Edit buttons.
In the VM Storage Profiles view of the vSphere Client, click Manage Storage Capabilities.
The Manage Storage Capabilities dialog box appears.
Click Add.
Example
Name
Fault tolerance.
Description
Click OK.
The storage capability appears in the list and is specified as User-defined in the Type column.
What to do next
Assign the user-defined storage capabilities to datastores that have that capability.
In the VM Storage Profiles view of the vSphere Client, click Manage Storage Capabilities.
The Manage Storage Capabilities dialog box appears.
In the Manage Storage Capabilities dialog box, select a user-defined storage capability and click Edit.
Provide a more detailed description of the storage capability and click OK.
You cannot change the name of a user-defined storage capability.
The new description appears in the Description column of the Manage Storage Capabilities dialog box.
Prerequisites
Add a user-defined storage capability to the list of storage capabilities.
Procedure
1
In the vSphere Client, select View > Inventory > Datastores and Datastore Clusters.
Right-click a datastore from the inventory and select Assign User-Defined Storage Capability.
Select a storage capability from the list of storage capabilities and click OK.
If an appropriate storage capability does not exist, define a new one:
a
Click New.
b
Type a name and a description.
c
Click OK.
The user-defined storage capability appears in the Storage Capabilities pane of the Summary tab of the
datastore or its datastore cluster.
Select a storage capability from the list of storage capabilities and click OK. If an appropriate storage
capability does not exist, click New, type a name and a description, and click OK.
In the VM Storage Profiles view of the vSphere Client, click Manage Storage Capabilities.
The Manage Storage Capabilities dialog box appears.
(Figure: example of virtual machine home files and virtual disks associated with different virtual machine storage profiles and placed on different datastores.)
When you create, clone, or migrate a virtual machine, you can select to associate it with a virtual machine
storage profile. When you select a virtual machine storage profile, vSphere Client shows you the datastores
that are compatible with the capabilities of the profile. You can then select a datastore or a datastore cluster.
If you select a datastore that does not match the virtual machine storage profile, the vSphere Client shows that
the virtual machine is using non-compliant storage.
In the VM Storage Profiles view of the vSphere Client, click Enable VM Storage Profiles.
The Enable VM Storage Profiles window appears. The window shows all available clusters and hosts,
their licensing status, and whether virtual machine storage profiles are enabled or disabled for the host
or the cluster.
To enable virtual machine storage profiles, select a host or a cluster whose status is Disabled or Unknown
and click Enable.
The status of the host or cluster changes to Enabled in the VM Storage Profile Status column.
What to do next
You can use virtual machine storage profiles for the virtual machines that run on the enabled host or cluster.
In the VM Storage Profiles view of the vSphere Client, select an existing virtual machine storage profile
from the inventory and click Edit VM Storage Profile.
The Edit VM Storage Profile dialog box opens.
Select new storage capabilities that you want to include in the virtual machine storage profile.
Deselect storage capabilities that you want to exclude from the virtual machine storage profile.
IMPORTANT Excluding a storage capability might break the virtual machine storage profile compliance
for a virtual machine, if the virtual machine or its disks use datastores with that capability.
In the VM Storage Profiles view of the vSphere Client, select a virtual machine storage profile that you do
not use, and click Delete VM Storage Profile.
Associate a Virtual Machine Storage Profile with a Virtual Machine and Its Virtual
Disks
You can associate a virtual machine storage profile with a virtual machine to define the storage capabilities
that are required by the applications running on the virtual machine.
You can associate a virtual machine storage profile with a powered-off and powered-on virtual machine.
Procedure
1
Description
a
b
Associate the virtual machine home files with a virtual machine storage profile from the Home VM
Storage Profile drop-down menu.
NOTE The virtual machine home files include the file types .vmx, .vmsd, .nvram, and so on.
(Optional) Click Propagate to disks to associate all virtual disks with the same virtual machine storage
profile.
Under VM storage profiles for virtual disks, associate each virtual disk with a different virtual machine
storage profile from the VM Storage Profile drop-down menu.
Click OK.
The virtual machine storage profile name appears in the VM Storage Profiles pane of the Summary tab for the
virtual machine.
NOTE If you add a new virtual disk and associate it with a virtual machine storage profile at the same time,
the VMware vSphere Profile-Driven Storage Service might take some time to associate the virtual machine storage
profile with the new virtual disk.
In the vSphere Client, select View > Management > VM Storage Profiles.
The Compliance Status column shows whether the virtual machine files or the virtual disks use datastores that
are compliant or noncompliant with the selected virtual machine storage profile.
Compliance Status
Description
Compliant
The datastore that the virtual machine or virtual disk uses has the storage capabilities that are
required by the virtual machine storage profile.
Non-compliant
The datastore that the virtual machine or virtual disk uses does not have the storage capabilities that
are required by the virtual machine storage profile. You can migrate the virtual machine files and
virtual disks to compliant datastores.
What to do next
If a virtual machine or a virtual disk uses a datastore that is no longer compliant with the virtual machine
storage profile, you can migrate it to a compliant datastore.
From the vSphere Client inventory, right-click a virtual machine and select VM Storage Profile > Check
Profiles Compliance.
View the compliance in the VM Storage Profiles pane beside the Profiles Compliance text box.
The Profiles Compliance text box in the VM Storage Profiles pane shows whether the virtual machine files or
its virtual disks comply with their associated virtual machine storage profile.
Compliance Status
Description
Compliant
The datastores used by the virtual machine files or the virtual disks have the storage capabilities that
are required by the respective virtual machine storage profile.
Non-compliant
The datastores used by the virtual machine or the virtual disks do not have the storage capabilities
that are required by the respective virtual machine storage profile. You can migrate the virtual
machine files and its virtual disks to a compliant datastore.
What to do next
If the status is noncompliant, read Check Storage Compliance with Virtual Machine Storage Profile, on
page 199 to view whether the virtual machine files or any of the virtual disks use noncompliant storage. You
can then migrate the files or virtual disks to a compliant datastore.
22
Using vmkfstools
vmkfstools is one of the ESXi Shell commands for managing VMFS volumes and virtual disks. You can perform
many storage operations using the vmkfstools command. For example, you can create and manage VMFS
datastores on a physical partition, or manipulate virtual disk files, stored on VMFS or NFS datastores.
NOTE After you make a change using the vmkfstools command, the vSphere Client might not be updated immediately.
You need to use a refresh or rescan operation from the vSphere Client.
For more information on the ESXi Shell, see Getting Started with vSphere Command-Line Interfaces.
This chapter includes the following topics:
n
Target specifies a partition, device, or path to apply the command option to.
Table 22-1. vmkfstools command arguments
Argument
Description
options
One or more command-line options and associated arguments that you use to
specify the activity for vmkfstools to perform, for example, choosing the disk
format when creating a new virtual disk.
After entering the option, specify a target on which to perform the operation.
Target can indicate a partition, device, or path.
partition
Specifies disk partitions. This argument uses a disk_ID:P format, where disk_ID is
the device ID returned by the storage array and P is an integer that represents the
partition number. The partition digit must be greater than zero (0) and should
correspond to a valid VMFS partition.
device
Specifies devices or logical volumes. This argument uses a path name in the
ESXi device file system. The path name begins with /vmfs/devices, which is the
mount point of the device file system.
Use the following formats when you specify different types of devices:
n /vmfs/devices/disks for local or SAN-based disks.
path
Specifies a VMFS file system or file. This argument is an absolute or relative path
that names a directory symbolic link, a raw device mapping, or a file
under /vmfs.
n
vmkfstools Options
The vmkfstools command has several options. Some of the options are suggested for advanced users only.
The long and single-letter forms of the options are equivalent. For example, the following commands are
identical.
vmkfstools --createfs vmfs5 --blocksize 1m disk_ID:P
vmkfstools -C vmfs5 -b 1m disk_ID:P
-v Suboption
The -v suboption indicates the verbosity level of the command output.
The format for this suboption is as follows:
-v --verbose number
When you use this option on any file or directory that resides on a VMFS volume, the option lists the attributes
of the specified volume. The listed attributes include the file system label, if any, the number of extents
comprising the specified VMFS volume, the UUID, and a listing of the device names where each extent resides.
NOTE If any device backing a VMFS file system goes offline, the number of extents and the available space change
accordingly.
You can specify the -h suboption with the -P option. If you do so, vmkfstools lists the capacity of the volume
in a more readable form, for example, 5k, 12.1M, or 2.1G.
This option creates a VMFS3 or VMFS5 datastore on the specified SCSI partition, such as disk_ID:P. The partition
becomes the file system's head partition.
NOTE Use the VMFS3 option when you need legacy hosts to access the datastore.
You can specify the following suboptions with the -C option:
n
For VMFS5, the only available block size is 1MB. For VMFS3, the default block size is 1MB. Depending on
your needs, the block size can be 1MB, 2MB, 4MB, and 8MB. When you enter the size, indicate the unit
type by adding a suffix, such as m or M. The unit type is not case sensitive.
n
-S --setfsname Define the volume label of the VMFS datastore you are creating. Use this suboption only
in conjunction with the -C option. The label you specify can be up to 128 characters long and cannot contain
any leading or trailing blank spaces.
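For example, a command of this form creates a VMFS5 datastore labeled my_vmfs; the label and disk ID are placeholders:
vmkfstools -C vmfs5 -b 1m -S my_vmfs /vmfs/devices/disks/disk_ID:1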
After you define a volume label, you can use it whenever you specify the VMFS datastore for the
vmkfstools command. The volume label appears in listings generated for the ls -l command and as a
symbolic link to the VMFS volume under the /vmfs/volumes directory.
To change the VMFS volume label, use the ln -sf command. Use the following as an example:
ln -sf /vmfs/volumes/UUID /vmfs/volumes/datastore
datastore is the new volume label to use for the UUID VMFS.
NOTE If your host is registered with the vCenter Server, any changes you make to the VMFS volume label
get overwritten by the vCenter Server. This guarantees that the VMFS label is consistent across all vCenter
Server hosts.
This option extends the VMFS file system with the specified head partition by spanning it across the partition
specified by span_partition. You must specify the full path name, for example,
/vmfs/devices/disks/disk_ID:1. Each time you use this option, you extend a VMFS volume with a new extent so
that the volume spans multiple partitions.
CAUTION When you run this option, you lose all data that previously existed on the SCSI device you specified
in span_partition.
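A usage sketch with placeholder disk IDs; the spanned partition is given first and the head partition second:
vmkfstools -Z /vmfs/devices/disks/naa.disk_ID_2:1 /vmfs/devices/disks/naa.disk_ID_1:1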
The extended file system spans two partitions, naa.disk_ID_1:1 and naa.disk_ID_2:1. In this example,
naa.disk_ID_1:1 is the name of the head partition.
This option grows an existing VMFS datastore or its extent. For example,
vmkfstools --growfs /vmfs/devices/disks/disk_ID:1 /vmfs/devices/disks/disk_ID:1
zeroedthick (default) Space required for the virtual disk is allocated during creation. Any data remaining
on the physical device is not erased during creation, but is zeroed out on demand at a later time on first
write from the virtual machine. The virtual machine does not read stale data from disk.
n
eagerzeroedthick Space required for the virtual disk is allocated at creation time. In contrast to
zeroedthick format, the data remaining on the physical device is zeroed out during creation. It might take
much longer to create disks in this format than to create other types of disks.
thin Thin-provisioned virtual disk. Unlike with the thick format, space required for the virtual disk is
not allocated during creation, but is supplied, zeroed out, on demand at a later time.
2gbsparse A sparse disk with 2GB maximum extent size. You can use disks in this format with hosted
VMware products, such as VMware Fusion, Player, Server, or Workstation. However, you cannot power
on a sparse disk on an ESXi host unless you first re-import the disk with vmkfstools in a compatible format,
such as thick or thin.
See Migrate Virtual Machines Between Different VMware Products, on page 207.
NFS Disk Formats
The only disk formats you can use for NFS are thin, thick, zeroedthick and 2gbsparse.
Thick, zeroedthick and thin formats usually behave the same because the NFS server and not the ESXi host
determines the allocation policy. The default allocation policy on most NFS servers is thin. However, on NFS
servers that support Storage APIs - Array Integration, you can create virtual disks in zeroedthick format. The
reserve space operation enables NFS servers to allocate and guarantee space.
For more information on array integration APIs, see Chapter 18, Storage Hardware Acceleration, on
page 171.
This option creates a virtual disk at the specified path on a datastore. Specify the size of the virtual disk. When
you enter the value for size, you can indicate the unit type by adding a suffix of k (kilobytes), m (megabytes),
or g (gigabytes). The unit type is not case sensitive. vmkfstools interprets either k or K to mean kilobytes. If you
don't specify a unit type, vmkfstools defaults to bytes.
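For example, a command of this form creates a 4GB thin-provisioned virtual disk; the datastore and file names are placeholders:
vmkfstools -c 4g -d thin /vmfs/volumes/myVMFS/myDisk.vmdk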
-a specifies the device driver that is used to communicate with the virtual disks. You can choose between device drivers such as BusLogic and LSI Logic.
This option cleans the virtual disk by writing zeros over all its data. Depending on the size of your virtual disk
and the I/O bandwidth to the device hosting the virtual disk, completing this command might take a long time.
CAUTION When you use this command, you lose any existing data on the virtual disk.
This option converts a thin virtual disk to eagerzeroedthick, preserving all existing data. The option allocates
and zeroes out any blocks that are not already allocated.
This option deallocates all zeroed out blocks and leaves only those blocks that were allocated previously and
contain valid data. The resulting virtual disk is in thin format.
While performing the conversion, this option preserves any data on the virtual disk.
You can use the -d suboption for the -i option. This suboption specifies the disk format for the copy you create.
A non-root user is not allowed to clone a virtual disk or an RDM.
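A sketch of such a clone operation, with placeholder paths; the -d suboption sets the format of the copy:
vmkfstools -i /vmfs/volumes/myVMFS/templates/gold.vmdk /vmfs/volumes/myVMFS/myOS.vmdk -d thin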
You can configure a virtual machine to use this virtual disk by adding lines to the virtual machine configuration
file, as in the following example:
scsi0:0.present = TRUE
scsi0:0.fileName = /vmfs/volumes/myVMFS/myOS.vmdk
Import a disk in 2gbsparse format into the ESXi host by running the following command. Make sure to
select the disk format compatible with ESXi.
vmkfstools -i <input> <output> -d <format>
In the vSphere Client, create a new virtual machine using the Custom configuration option.
When you configure a disk, select Use an existing virtual disk and attach the disk you imported.
You must power off the virtual machine that uses this disk file before you enter this command. You might
have to update the file system on the disk so the guest operating system can recognize and use the new size
of the disk and take advantage of the extra space.
You specify the newSize parameter in kilobytes, megabytes, or gigabytes by adding a k (kilobytes), m
(megabytes), or g (gigabytes) suffix. The unit type is not case sensitive. vmkfstools interprets either k or K to
mean kilobytes. If you don't specify a unit type, vmkfstools defaults to kilobytes.
The newSize parameter defines the entire new size, not just the increment you add to the disk.
For example, to extend a 4g virtual disk by 1g, enter: vmkfstools -X 5g disk name.
You can extend the virtual disk to the eagerzeroedthick format by using the -d eagerzeroedthick option.
NOTE Do not extend the base disk of a virtual machine that has snapshots associated with it. If you do, you
can no longer commit the snapshot or revert the base disk to its original size.
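The creation of the mapping file itself is not shown in this excerpt; a minimal sketch using the -r (virtual compatibility) option, with placeholder device and datastore names, looks as follows (-z creates a physical compatibility mapping instead):
vmkfstools -r /vmfs/devices/disks/disk_ID /vmfs/volumes/myVMFS/my_rdm.vmdk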
You can configure a virtual machine to use the my_rdm.vmdk mapping file by adding the following lines to the
virtual machine configuration file:
scsi0:0.present = TRUE
scsi0:0.fileName = /vmfs/volumes/myVMFS/my_rdm.vmdk
After you establish this type of mapping, you can use it to access the raw disk just as you would any other
VMFS virtual disk.
When specifying the device parameter, use the following format:
/vmfs/devices/disks/disk_ID
This option prints the name of the raw disk RDM. The option also prints other identification information, like
the disk ID, for the raw disk.
The output is in the form: Geometry information C/H/S, where C represents the number of cylinders, H
represents the number of heads, and S represents the number of sectors.
NOTE When you import virtual disks from hosted VMware products to the ESXi host, you might see a disk
geometry mismatch error message. A disk geometry mismatch might also be the cause of problems loading a
guest operating system or running a newly-created virtual machine.
CAUTION Using the -L option can interrupt the operations of other servers on a SAN. Use the -L option only
when troubleshooting clustering setups.
Unless specifically advised by VMware, never use this option on a LUN hosting a VMFS volume.
You can specify the -L option in several ways:
n
-L reserve Reserves the specified LUN. After the reservation, only the server that reserved that LUN
can access it. If other servers attempt to access that LUN, a reservation error results.
n
-L release Releases the reservation on the specified LUN. Other servers can access the LUN again.
-L lunreset Resets the specified LUN by clearing any reservation on the LUN and making the LUN
available to all servers again. The reset does not affect any of the other LUNs on the device. If another
LUN on the device is reserved, it remains reserved.
n
-L targetreset Resets the entire target. The reset clears any reservations on all the LUNs associated
with that target and makes the LUNs available to all servers again.
n
-L busreset Resets all accessible targets on the bus. The reset clears any reservation on all the LUNs
accessible through the bus and makes them available to all servers again.
When entering the device parameter, use the following format:
/vmfs/devices/disks/disk_ID:P
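For example, a reservation on a LUN could be cleared with a command of this form; the device path is a placeholder:
vmkfstools -L lunreset /vmfs/devices/disks/disk_ID:P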
You can use this command when a host fails in the middle of a datastore operation, such as grow extent, add
extent, or resignaturing. When you issue this command, make sure that no other host is holding the lock.
Index
Symbols
* next to path 159
A
access control 64
accessing storage 20
active-active disk arrays 32, 36, 47, 64, 68, 94,
161
active-passive disk arrays, boot from SAN 52
adaptive scheme 25
add storage capability 195
adding, NFS storage 126
ADT, See auto disk transfer
advanced settings
Disk.EnableNaviReg 58
Disk.MaxLUN 123
alarms 190
all-paths-down event 128
allocations, LUN 36, 68
applications, layered 27
array integration, thin provisioning 184
array-based solution 27
assign storage capability 195, 196
asterisk next to path 159
atomic test and set 125
ATS locking 125
authentication 64, 82
auto disk transfer 46
auto volume transfer 46
auto-detect, SSD devices 141
automatic host registration, disabling 58
AVT, See auto volume transfer
B
backups
considerations 28
third-party backup package 29
basic connectivity 43
best practices, FCoE 37
BIOS, enabling for BFS 54
block devices 136
boot adapters 53
boot BIOS prompt, enabling for BFS 54
boot from DVD-ROM 53
C
CHAP
disabling 85
for discovery targets 84
for iSCSI initiators 83
for static targets 84
mutual 82
one-way 82
CHAP authentication 64, 82
CHAP authentication methods 82
checklist 109
claim rules 159
clustering 43
compatibility modes
physical 136
virtual 136
configuring
dynamic discovery 81
static discovery 82
current multipathing state 160
D
data digests 65
datastore, storage capability 195, 196
datastores
adding extents 117
E
educational support 7
EMC CLARiiON 44, 92
EMC CLARiiON AX100
and RDM 45
directly connected 45
EMC Symmetrix, pseudo LUNs 45, 93
EqualLogic, storage systems 95
EUI 62
EVA (HP StorageWorks) 48, 94
events 190
examples
vmkfstools -C 204
vmkfstools -Z 204
extents
adding to datastore 117
growing 117
F
failover
I/O delay 154
transparent 32, 64
failover paths, status 159
FC HBA setup 36
FC SAN
accessing 33
hardware requirements 35
FCoE, best practices 37
FCoE adapters 37
Fibre Channel
concepts 31
configuration checklist 59
Fibre Channel SAN
best practices 57
preventing problems 57
file systems, upgrading 112
file-based (VMFS) solution 27
FIP 37
Fixed path policy 158, 161
G
GPT 16
H
hardware acceleration
about 171
benefits 171
block storage 172
deleting claim rules 176
enabling 173
NAS 177
NAS status 179
requirements 172
status 172
support details 174
hardware acceleration, considerations 179
hardware iSCSI, and failover 152
hardware iSCSI adapters
dependent 63
independent 63
hardware iSCSI initiators
configuring 69
installing 69
setting up discovery addresses 81
viewing 69
HBAs
queue depth 35
setup 36
static load balancing 36
header digests 65
high-tier storage 26
Hitachi Data Systems storage, microcode 48
host configuration, advanced settings 58
host registration, disabling 58
host type 44, 92
host-based failover 151
hosts, and FC SAN 31
HP LeftHand P4000 VSA 96
HP StorageWorks
EVA 48, 94
MSA 93
XP 48
HP StorageWorks SAN/iQ storage 95
I
I/O delay 68, 154
iBFT 99
iBFT iSCSI boot
booting an ESXi host 102
changing boot sequence 101
installing an ESXi host 101
limitations 100
networking best practices 102
setting up ESXi 100
troubleshooting 103
IBM ESS800 47
IBM FAStT 46
IBM System Storage DS4800, failover
configuration 46
IBM Systems Storage 8000 47
IDE 11
independent hardware iSCSI adapters
change IP address 70
change name 70
installation
preparing for boot from SAN 52
steps 36
inter-switch link 46
IP address 62
IQN 62
iSCSI 12
iSCSI adapters
about 67
advanced parameters 86
hardware 63
software 63
iSCSI alias 62
iSCSI boot, iBFT 99
iSCSI Boot Firmware Table, See iBFT
iSCSI boot parameters, configuring 101
iSCSI initiators
configuring advanced parameters 87
configuring CHAP 83
hardware 69
setting up CHAP parameters 82
iSCSI names, conventions 62
iSCSI networking
binding adapters 78
changing policy 78
creating a VMkernel interface 76
managing 79
troubleshooting 79
iSCSI ports 62
iSCSI SAN
accessing 66
best practices 105
boot 97
concepts 61
preventing problems 105
iSCSI SAN restrictions 68
iSCSI sessions
adding for a target 88
displaying 88
managing 87
removing 89
iSCSI storage systems 91
ISL 46
J
jumbo frames
enabling for dependent hardware iSCSI 80
enabling for software iSCSI 80
using with iSCSI 80
L
layered applications 27
Linux, host type 44
M
maintenance 24
masking LUNs 167
MBR 16
metadata, RDMs 136
metadata updates 114
microcode, Hitachi Data Systems storage 48
Microsoft Cluster Service 43
mid-tier storage 26
Most Recently Used path policy 158, 161
mounting VMFS datastores 129
MPPs
displaying 164
See also multipathing plug-ins
MRU path policy 161
MSA (HP StorageWorks) 93
MSCS 43
multipathing
active paths 159
broken paths 159
considerations 162
disabled paths 159
standby paths 159
viewing the current state of 159
multipathing claim rules
adding 165
deleting 167
multipathing plug-ins, path claiming 159
multipathing policy 161
N
N-Port ID Virtualization, See NPIV
NAA 62
NAS 12
NAS plug-ins
installing 177
uninstalling 178
upgrading 178
Native Multipathing Plug-In 156, 157
NetApp storage system 95
Netware host mode 48
network adapters, configuring for iBFT iSCSI boot 101
Network Appliance storage 48
network connections, create 76
network performance 107
networking, configuring 69
NFS datastores
and non-ASCII characters 125
maximum size 125
repositories 125
unmounting 126
NFS storage, adding 126
NICs, mapping to VMkernel 77
NMP
I/O flow 158
path claiming 159
See also Native Multipathing Plug-In
NPIV
about 39
assigning WWNs 41
changing WWNs 41
limitations 40
requirements 40
O
one-way CHAP 82
P
partition mappings 136
passive disk arrays 36, 68, 161
path claiming 159
path failover
and virtual machines 155
array-based 154
host-based 152
path failure rescan 122
path management 151
path policies
changing defaults 161
Q
Qlogic HBA BIOS, enabling for BFS 55
queue depth 68
R
RAID devices 136
raw device mapping, See RDM 133
RDM
advantages 134
and virtual disk files 137
dynamic name resolution 137
overview 133
physical compatibility mode 136
virtual compatibility mode 136
with clustering 137
RDMs
and snapshots 136
path management 139
reclaiming space 187
remove a storage capability 197
requirements, boot from SAN 52
rescan
LUN creation 122
S
SAN
backup considerations 28
benefits 23
hardware failover 46
requirements 35
specifics 24
SAN fabric 31
SAN management software 28
SAN storage performance, optimizing 58, 106
SAS 11
SATA 11
SATPs
adding rules 169
displaying 164
See also Storage Array Type Plug-Ins
scanning, changing number 123
SCSI, vmkfstools 201
SCSI controllers 9
server performance 59, 106
setup steps 36
software FCoE
and VMkernel 38
activating adapters 39
software iSCSI
and failover 152
diagnostic partition 131
networking 74
software iSCSI adapter
configuring 72
disabling 73
software iSCSI boot, changing settings 103
software iSCSI initiator, enabling 73
software iSCSI initiators, setting up discovery addresses 81
SSD 141
SSD devices
auto-detect 141
tag 142
SSD enablement, benefits 141
SSDs
best practices 146
estimating lifetime 146
standard switches 77
storage devices
paths 160
rescanning 122
viewing 16
storage filters
disabling 123
host rescan 124
RDM 124
same host and transports 124
VMFS 124
storage processors
configuring sense data 47
sense data 47
storage space 181
storage status 189
storage systems
Dell PowerVault MD3000i 96
EMC CLARiiON 44, 92
EMC Symmetrix 45, 93
EqualLogic 95
Hitachi 48
HP StorageWorks 47, 93
HP StorageWorks SAN/iQ 95
NetApp 95
Network Appliance 48
performance 106
types 32, 64
storage topology 189
storage virtualization 9
storage, and VMkernel 147
STP 37
supported devices 44
system-defined storage capability 194
T
tag, SSD devices 142
tag devices 144
tape devices 36
targets 15, 63
targets vs. LUNs 63
technical support 7
testing, storage systems 91
thin disks, creating 182
thin provisioned LUNs, space reclamation 186
thin provisioning, over-subscription 181
thin-provisioned LUNs
identify 185
reporting 185
third-party backup package 29
third-party management applications 28
TimeoutValue parameter 35, 68
troubleshooting
changing iSCSI boot parameters 103
loss of network connection 103
U
unplanned device loss 130
untag 143, 144
USB 11
use cases 24
user-defined storage capability 194, 195
V
VAAI claim rules
defining 175
deleting 176
VAAI filter 175
VAAI plug-in 175
VAAI filter, displaying 173
VAAI plug-ins, displaying 173
vendor provider, requirements 190
vendor providers
registering 191
SSL connection 191
unregistering 192
updating 192
viewing 191
view storage capabilities 194
virtual disk, repair 209
virtual disks
extending 207
formats 182
supported formats 205
virtual machine storage profile
associate with virtual disks 199
associate with virtual machine 199
compliance 197, 199, 200
definition 197
delete 198
edit 198
enable 198
virtual machines
accessing FC SAN 33
accessing iSCSI SAN 66
assigning WWNs to 41
I/O delay 154
locations 26
prioritizing 26
with RDMs 138
virtual ports (VPORTs) 39
virtual SSD device 146
VMFS
conversion 204
locking 115
one volume per LUN 68
resignaturing 120
vmkfstools 201
VMFS datastores
adding extents 117
changing properties 117
changing signatures 121
creating 115
creating on Fibre Channel storage 116
creating on iSCSI storage 116
creating on SCSI disk 116
creating on storage device 116
deleting 121
disk formats 113
increasing capacity 117
mounting 129
rescanning 122
sharing 114
unmounting 126
upgrading 118
VMFS resignaturing 120
VMFS2 datastores, upgrading 119
VMFS3, upgrading 119
VMFS5, improvements over VMFS3 112
VMFS5 and VMFS3, differences 112
VMkernel interfaces 77
vmkfstools
breaking locks 210
cloning disks 207
creating RDMs 208
creating virtual disks 205
deleting virtual disks 206
device options 209
disk chain 209
extending virtual disks 207
file system options 203
geometry 209
inflating thin disks 206
initializing virtual disks 206
migrating virtual disks 207
overview 201
RDM attributes 208
removing zeroed blocks 206
renaming virtual disks 207
SCSI reservations 209
syntax 201
upgrading virtual disks 208
virtual disk options 205
virtual disks conversion 206
vmkfstools -C command 203
vmkfstools -G command 204
vmkfstools -P command 203
vmkfstools -v command 202
vmkfstools -Z command 204
vmkfstools command options 202
vmkfstools examples
cloning disks 207
creating RDMs 208
creating virtual disks 206
vMotion 23, 24, 36, 68, 92
vmware, host type 44
VMware DRS, using with vMotion 68
VMware HA 23, 92
VMware NMP
I/O flow 158
See also Native Multipathing Plug-In
VMware PSPs, See Path Selection Plug-Ins
VMware SATPs, See Storage Array Type Plug-Ins
W
Windows guest OS timeout 155
World Wide Names, See WWNs
World Wide Port Names, See WWPNs
WWNNs 41
WWNs
assigning to virtual machines 41
changing 41
WWPNs 32, 41
X
XP (HP StorageWorks) 48
Z
zoning 31, 32