
White Paper

BEST PRACTICES FOR DEPLOYING VMWARE VIRTUAL MACHINES ON EMC VNX2 WITH NFS

EMC Solutions
Abstract
This white paper describes best practices for VMware virtual machine deployment on
EMC VNX2 over NFS.
January 2016

Copyright

Copyright 2016 EMC Corporation. All rights reserved. Published in the USA.
Published January 2016
EMC believes the information in this publication is accurate as of its publication date.
The information is subject to change without notice.
The information in this publication is provided "as is." EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose. Use, copying, and distribution of any EMC software
described in this publication requires an applicable software license.
EMC2, EMC ESA, EMC FAST, EMC VNX, and the EMC logo are registered trademarks or
trademarks of EMC Corporation in the United States and other countries. All other
trademarks used herein are the property of their respective owners.
For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.
Best Practices for Deploying VMware Virtual Machines on EMC VNX2 with NFS
White Paper
Part Number H14671


Contents

Executive Summary
Technology overview
Storage best practices
Networking best practices
Troubleshooting
References


Executive Summary
EMC VNX2 unified hybrid flash storage systems provide an efficient and innovative
platform with enterprise-level file and block storage in a scalable, easy-to-use,
high-performance solution. The VNX2 family of storage systems is ideal for mixed
workloads in virtual environments. It combines extremely powerful and flexible
hardware with advanced multicore optimization, management, and data protection
software to meet the most demanding needs of today's enterprises.
VNX2 supports multiple protocols for both file and block, giving you the flexibility to
deploy expandable, future-proof storage. The powerful MCx VNX2 controller
architecture helps ensure that the storage processor can sufficiently manage
transactions or bandwidth during the storage system's service life. Capacity- and
performance-scaling capabilities for both flash and hard drives enable on-demand
capacity management in combination with built-in capacity efficiency features such
as thin provisioning, block-level deduplication, and EMC Fully Automated Storage
Tiering (FAST) technology.
VNX2 provides access to data through a variety of access protocols, including
Network File System (NFS). NFS is a distributed file system protocol that provides file
sharing in network environments.
This white paper introduces specific configuration recommendations that enable high
performance when deploying VMware virtual machines on VNX2 over NFS.
Document purpose

This white paper presents the set of current EMC-recommended best practices for
storage design in support of VMware virtual machine deployment on EMC VNX2.
Guidelines are presented within the context of deploying NFS on the EMC VNX2
family.
Audience

This white paper is intended for IT architects, administrators, and others who are
interested in the VNX2 series storage arrays. It assumes familiarity with storage array
concepts, general hardware, and the software services provided by the arrays.

Recommendations

Benchmark results are highly dependent upon workload, specific application
requirements, and system design and implementation. Relative system performance
will vary as a result of these and other factors. Therefore, these results should not be
used as a substitute for a specific customer application benchmark when critical
capacity planning and/or product evaluation decisions are contemplated.

All performance data contained in this white paper was obtained in a rigorously
controlled environment. Results obtained in other operating environments may vary
significantly.


Terminology

Table 1 provides definitions for some of the terms used in this white paper.

Table 1. Terminology

- ESA: EMC Storage Analytics (ESA) delivers actionable performance analysis and
  proactively facilitates increased insight into storage resource pools. This helps to
  detect capacity and performance issues so they can be corrected before they cause
  a major impact.
- FAST VP: Fully Automated Storage Tiering for Virtual Pools is a solution that
  matches storage requirements with changes in the frequency of data access by
  segregating disk drives into categories.
- FC: Fibre Channel
- FCoE: Fibre Channel over Ethernet
- FSN: Fail-Safe Network. A high-availability feature that extends link failover into
  the network by providing switch-level redundancy.
- NFS: Network File System. A network file system protocol that allows a user on a
  client computer to access files over a network as easily as if the network devices
  were attached to its local disks.
- VAAI: vStorage API for Array Integration (VAAI) is an application program interface
  (API) framework from VMware that enables certain storage tasks, such as thin
  provisioning, to be offloaded from the VMware server virtualization hardware to
  the storage array.
- VASA: vStorage API for Storage Awareness is a VMware API that enables vCenter to
  communicate with the storage system and discover the storage capabilities of the
  VNX2 devices.
- VSI: Virtual Storage Integrator is a VMware vCenter plug-in available to all VMware
  users with EMC storage in their environments. VSI enables IT organizations to
  achieve simplicity and efficiency in data center operations.
- VDM: Virtual Data Mover is a VNX2 software feature that enables the grouping of
  NFS environments and servers into virtual containers for mobility and security.


Technology overview
Overview

VMware supports datastore deployments with multiple protocols. On IP-based
storage, you can provision a VMware datastore on block storage with the iSCSI
protocol or on file storage with the NFS protocol. Organizations can benefit from
deploying VMware on IP-based storage for the following reasons:

- Lower cost per port
- Suitable I/O performance
- High-availability configuration
- Ease of deployment and storage management

Deploying a virtualization solution on IP-based storage generally includes the
following technology components:

- VMware vSphere hypervisor
- EMC VNX2 IP-based storage system
- NFS for file storage and iSCSI for block storage

Figure 1 shows the solution architecture.


Figure 1. Solution architecture diagram

VMware vSphere

vSphere virtualizes the entire IT infrastructure, including servers, storage, and
networks. vSphere aggregates these resources and presents a uniform set of
elements in the virtual environment. With vSphere, you can manage IT resources like
a shared utility and dynamically provision resources to different business units and
projects.

VMware vSphere supports storage device access for hosts and virtual machines using
the FC, FCoE, iSCSI, and NFS protocols provided by the VNX2 platform. VNX2 provides
the NFS protocol for shared file systems in Linux and UNIX environments.

EMC VNX2

The VNX2 unified hybrid flash series is EMC's latest generation of
midrange-to-enterprise products. The VNX2 series uses EMC's VNX2 Operating
Environment (OE) for Block and File, which you can manage with Unisphere, a simple,
easy-to-use management framework.

NFS

Network File System (NFS) is a client/server application that allows a user to view,
store, and update files on a remote computer as though they were on the user's own
computer. It uses Remote Procedure Calls (RPC) to communicate between computers.
NFS allows a user on a client computer to access files over a network as easily as if
the network devices were attached to its local disks.

Considerations

There are a number of considerations when configuring the storage array and network
to achieve the best performance for deploying VMware on IP-based storage. These
considerations include storage provisioning, performance tuning, efficiency,
management best practices, and high-availability design.

Storage best practices


Datastore

Using thin provisioning is highly recommended when a new NFS datastore is created
with VSI. You type the values for the initial capacity and maximum capacity of the
datastore.

Configuration tips:

- When selecting a storage pool, all available storage within the storage pool is
  displayed. Ensure that the storage pool you select is designated by the storage
  administrator for use by vSphere.
- When a new NFS datastore is created with VSI, thin provisioning and automatic
  file system extension are automatically enabled. Enter the values for the initial
  capacity and maximum capacity of the datastore.
- If virtual provisioning is enabled for the file system, the maximum capacity is
  required.
- When configuring advanced options, the following settings are important for
  optimal performance:
  - The High Water Mark option specifies the percentage of consumed file system
    space at which VNX2 initiates automatic file system extension. Acceptable
    values are 50 to 99.
  - The Direct Writes option enhances write performance to the VNX2 file system
    by enabling well-formed NFS writes to bypass the DM cache. It is designed to
    improve the performance of applications with many connections to a large
    file, such as virtual disk files. When replication is used, Direct Writes are
    enabled on the secondary file system as well.
- To ease future management and configuration tasks, incorporate identifying
  elements (such as IP addresses or NFS server names) into your datastore
  definition and annotate with the name of the VNX2 being used.
- If used as a datastore, limit the VNX2 NFS exports to only the VMkernel
  interfaces.
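The High Water Mark behavior described above can be captured in a small check. This is an illustrative model of the trigger condition only, not the array's actual implementation; the function name and units are hypothetical:

```python
def should_extend(used_gb, current_gb, high_water_mark=90):
    """Return True when consumed space crosses the high water mark.

    Mirrors the VNX2 automatic file system extension trigger: when
    consumed space exceeds the configured percentage (valid values
    are 50 to 99), the array grows the file system toward its
    configured maximum capacity.
    """
    if not 50 <= high_water_mark <= 99:
        raise ValueError("High Water Mark must be between 50 and 99")
    return (used_gb / current_gb) * 100 >= high_water_mark
```

For example, with the default of 90, a 100 GB file system triggers extension once roughly 90 GB is consumed.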


Thin provisioning

Thin provisioning is a storage efficiency technology in VMware vSphere and EMC
VNX2. With thin provisioning, the VNX2 presents the host with a storage device that
is not fully allocated. VNX2 performs an initial allocation with a portion of the device
capacity. Additional space is consumed on an as-needed basis by the user,
applications, or operating system. When using vSphere with VNX2, the following thin
provisioning combinations are available:

- On ESXi, through ESXi thin provisioning
- On VNX2 file systems, through thinly provisioned VNX2 file systems

Monitor the storage utilization to prevent an accelerated out-of-space condition when
thin provisioning is in use. For thin virtual disks on thin LUNs, the storage pool is the
authoritative resource for storage capacity. Monitor the pool to avoid an out-of-space
condition.
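To make the monitoring advice concrete, the following sketch summarizes the pool-level numbers worth watching when thin provisioning is in use. The function and field names are hypothetical; the VNX2 itself reports these figures through Unisphere:

```python
def pool_utilization(pool_capacity_gb, subscribed_gb, consumed_gb):
    """Summarize a thin pool: subscription ratio and free headroom.

    With thin virtual disks on thin file systems, the pool (not the
    datastore) is the authoritative capacity resource, so the pool is
    what must be monitored to avoid an out-of-space condition.
    """
    return {
        # How much capacity has been promised relative to what exists.
        "subscription_ratio": subscribed_gb / pool_capacity_gb,
        # Physical space still available in the pool.
        "free_gb": pool_capacity_gb - consumed_gb,
        # True when promises exceed physical capacity (normal for thin,
        # but it is the signal that monitoring is mandatory).
        "oversubscribed": subscribed_gb > pool_capacity_gb,
    }
```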
FAST Cache and FAST VP

FAST Cache and FAST VP are part of the FAST suite. These two products are sold
together, work together, and complement each other. FAST Cache allows the storage
system to provide flash performance to the most heavily accessed chunks of data
across the entire system. FAST Cache absorbs I/O bursts from applications, thereby
reducing the load on back-end hard disks. This improves the performance of the
storage array and the supported applications.
FAST Cache

One of the major benefits of using FAST Cache is improved application performance,
especially for workloads with frequent and unpredictable large increases in I/O
activity. The part of an application's working dataset that is frequently accessed is
copied to the FAST Cache, so the application receives an immediate performance
boost. FAST Cache enables applications to deliver consistent performance by
absorbing bursts of read/write loads at flash speeds.

FAST Cache on a running system

When adding FAST Cache to a running system, it is recommended to enable FAST
Cache on a few LUNs at a time, and then wait until the LUNs have reached steady
state in FAST Cache before enabling more.

Note: For storage pools, FAST Cache is a pool-wide feature, so you must enable or
disable it at the pool level (for all objects in the pool).

FAST Cache can improve overall system performance if the current bottleneck is
drive-related, but boosting the IOPS will result in greater CPU utilization on the SPs.
On an existing system, check the SP CPU utilization of the system, and then proceed
as follows:

- Less than 60 percent SP CPU utilization: Enable a few LUNs or one pool at a
  time. Let it reach steady state in FAST Cache, and ensure that SP CPU utilization
  is still acceptable before enabling FAST Cache on more LUNs or pools.
- 60 to 80 percent SP CPU utilization: Scale in carefully. Enable FAST Cache on
  one or two LUNs, or on the pool with the smallest capacity, and verify that SP
  CPU utilization does not go above 80 percent.


- Greater than 80 percent SP CPU utilization: Do not activate FAST Cache.
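The SP CPU thresholds above can be summarized in a small helper. The function name is a hypothetical illustration of the decision rule, not an EMC tool:

```python
def fast_cache_rollout_advice(sp_cpu_percent):
    """Map SP CPU utilization to a FAST Cache enablement approach,
    following the three utilization bands described in the text."""
    if sp_cpu_percent < 60:
        return "enable a few LUNs or one pool at a time"
    if sp_cpu_percent <= 80:
        return "scale in carefully: one or two LUNs, or the smallest pool"
    return "do not activate FAST Cache"
```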

FAST VP
File data is supported on LUNs created in pools with FAST VP configured on VNX2.

Best practices for VNX2 for file storage

In general, when creating pool LUNs for use with file storage:

- Pre-provision space from the pool: create and assign LUNs to VNX OE for File, so
  that VNX OE for File has available space for file system creation and extension,
  snapshots, and so on.
- Create approximately 1 LUN for every 4 drives in the storage pool.
- Create LUNs in multiples of 10: Number of LUNs = (number of drives in the pool
  divided by 4), rounded up to the nearest multiple of 10.
- Make all LUNs the same size.
- Balance LUN ownership across SPA and SPB.
- Apply the same tiering policies to all LUNs in the storage pool.
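The LUN-count rule above can be sketched as follows. The function and its simple round-robin SP assignment are illustrative assumptions for clarity, not an EMC-provided utility:

```python
import math

def pool_lun_layout(drive_count, pool_capacity_gb):
    """Apply the sizing rules above: roughly 1 LUN per 4 drives,
    rounded up to the nearest multiple of 10, all LUNs the same
    size, with ownership balanced across SPA and SPB."""
    # (drives / 4) rounded up to the nearest multiple of 10.
    lun_count = math.ceil(drive_count / 4 / 10) * 10
    # All LUNs the same size.
    lun_size_gb = pool_capacity_gb / lun_count
    # Alternate ownership to balance SPA and SPB.
    owners = ["SPA" if i % 2 == 0 else "SPB" for i in range(lun_count)]
    return lun_count, lun_size_gb, owners
```

For example, a 32-drive pool yields 10 LUNs, and an 84-drive pool (84 / 4 = 21, rounded up) yields 30.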

File-based space efficiency features are generally recommended over block LUN-based
space efficiency features:

- If Virtual Provisioning is required for VNX OE for File, use a thin-enabled file
  system on classic or thick LUNs.
- If compression is required for VNX OE for File, use VNX OE for File deduplication
  and compression. Do not use compressed LUNs with VNX OE for File; block
  compression is not supported with VNX OE for File LUNs.
- If snapshots or checkpoints are required for VNX OE for File, use SnapSure. Do
  not create VNX Snapshots on LUNs used by VNX OE for File.
All-flash

EMC recommends deploying flash drives in the following priority order:

- Add a flash tier to pools utilizing FAST VP:
  - Configure at least 10 percent of pool capacity in flash to capture metadata.
  - Configure at least 25 percent of pool capacity for near-all-flash performance.
- Dedicate an all-flash pool to storage objects with very high performance
  requirements.

File deduplication and compression

Deploying virtual machines stored on an NFS datastore increases efficiency. VNX2 file
deduplication and compression targets active virtual disk files. This feature is
available for VMware vSphere virtual machines that are deployed on VNX2-based NFS
datastores.
With this feature, the VMware administrator compresses a virtual machine disk at the
VNX2 level to reduce the file system storage consumption by up to 50 percent. Some
CPU overhead is associated with the compression process, but VNX2 includes several
optimization techniques to minimize this performance impact.
VNX2 file deduplication and compression provides the ability to perform efficient,
array-level cloning of virtual machines. Two cloning alternatives are available:

- Full clone: This operation creates a full virtual machine clone that is comparable
  to a native VMware vSphere clone operation. A full VNX2 virtual machine clone
  operation is performed on the storage system instead of the ESXi host, saving
  the ESXi CPU cycles required by the native cloning operation. The result is an
  efficient virtual machine clone operation that is up to two or three times faster
  than a native vSphere virtual machine clone operation.

- Fast clone: This operation clones only the blocks that have changed between the
  replica and the source virtual machine. It is very similar to a VNX2 LUN snapshot
  operation, except that the operation is done at the file level instead of the LUN
  level. A fast clone resides in the same file system as the source virtual machine.
  The source files satisfy unchanged block reads, and the fast clone files deliver
  the updated blocks. Fast clone creation is an almost instantaneous operation
  because no data needs to be copied from the source virtual machine to the
  target device.
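A toy copy-on-write model illustrates why fast clone creation is nearly instantaneous and why only changed blocks consume space. This is a conceptual sketch only, not the VNX2 implementation:

```python
class FastClone:
    """Copy-on-write clone: reads fall through to the shared source
    unless the block has been overwritten in the clone."""

    def __init__(self, source_blocks):
        # Creating the clone copies no data, so it is near-instant.
        self.source = source_blocks   # shared, treated as read-only
        self.changed = {}             # blocks written after cloning

    def write(self, index, data):
        # Only changed blocks consume space in the clone.
        self.changed[index] = data

    def read(self, index):
        # Unchanged blocks are satisfied by the source files.
        return self.changed.get(index, self.source[index])
```

Here a clone of a three-block source stores exactly one block after a single overwrite, while unchanged reads still come from the source.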

VAAI

VMware vStorage APIs for Array Integration (VAAI) offloads VMware storage-related
functions from the server to the storage system, enabling more efficient use of server
and network resources for increased performance and consolidation. Letting the
VNX2 series software perform common data management tasks, such as vMotion
migration, results in greater network IOPS, support for more virtual machines, and
faster response time. Other examples of offloaded tasks include:

- Thin provisioning (block)
- Thin provisioning stun (block)
- Full clone (file)
- Extended statistics (file)
- Space reservations (file)
- Hardware-accelerated locking (block)
- Hardware-accelerated zero (block)
- Hardware-accelerated copy (block)

VAAI leverages the VNX2 DM to create thin fast-clone replicas and thick full-clone
replicas of virtual machines on NFS datastores.
VASA

VMware vStorage APIs for Storage Awareness (VASA) is a VMware API that enables
the display of storage information through vCenter. Integration between VASA
technology and VNX2 makes storage management in a virtualized environment a
seamless experience. Administrators can use the familiar vSphere interface to view
details of virtual and physical resources, provision storage, integrate replication, and
offload storage functions to the storage system. VASA enables the VMware
administrator to view basic storage components, including arrays, storage
processors, I/O ports, and LUNs.

VASA removes the need to maintain complex and tedious spreadsheets and to
validate compliance manually during every migration or creation of a virtual machine
or virtual disk.
VSI

VSI for VMware vSphere is a plug-in to the VMware vSphere Client that provides a
single management interface for managing EMC storage within the vSphere
environment. Features can be added and removed from VSI independently, providing
flexibility for customizing VSI user environments. VSI provides a unified user
experience, allowing each of the features to be updated independently and new
features to be introduced rapidly in response to changing customer requirements.

VSI enables you to:

- Rapidly provision full virtual machine clones or space-efficient fast clones
  within NFS datastores
- Enable deduplication on selected datastores
- Compress virtual NFS disk files
- View storage information in a common view within the vSphere Client
- Identify VNX2 storage properties of VMFS, NFS, and RDM storage
- View LUN connectivity and device details for VNX2 storage
- Change the multipath policy and manage multiple paths from within the
  VMware vSphere Client

Figure 2 shows the VSI for VMware vSphere plug-in.


Figure 2. VSI for VMware vSphere plug-in

ESA

EMC Storage Analytics (ESA) software enables you to proactively optimize storage
performance and efficiency.
The software combines the features and functionality of VMware vCenter Operations
Manager with deep VNX2 storage analytics. It delivers custom analytics and
visualizations that provide deep visibility into your EMC infrastructure and enable you
to troubleshoot, identify, and take action on storage performance and capacity
management problems quickly.
Out-of-the-box custom visualizations enable customers to quickly deploy EMC
infrastructure support within vCenter Operations Manager without the need for
custom integration or Professional Services. This software also delivers actionable
performance analysis to enable customers to quickly identify and resolve
performance and capacity issues for VNX2 series systems.
ESA for VNX2 is supported on all VNX2 systems and provides:

- Rich storage analytics: View performance and capacity statistics, including
  statistics for FAST Cache and FAST VP.
- Topology views: End-to-end topology mapping from virtual machines to the disk
  drives helps simplify storage operations management.
- Service-level agreement maintenance: Quick troubleshooting of performance
  abnormalities and remediation assistance helps you maintain service levels.


Figure 3 shows the ESA dashboard.

Figure 3. ESA dashboard

Networking best practices


High availability

The general rule is not to have a single point of failure in the network path between
the ESX/ESXi server and VNX2. VNX2 provides high availability, or redundancy, by
using three types of virtual devices that deal with link or switch failure:

- Fail-Safe Networks (FSNs), which extend link failover out into the network by
  providing switch-level redundancy
- Ethernet channels, which enable multiple active Ethernet connections to the
  same switch to appear as a single link
- Link aggregation, which allows Ethernet ports with similar characteristics
  connected to the same switch to be combined into a single virtual device or link

Networking

NAS protocols (NFS and SMB/CIFS) are served by I/O modules on the file DMs.

- Use 10 Gbps for the best performance.
- Configure jumbo frames (MTU of 9000) on all NAS ports.

Note: The entire network infrastructure must also support jumbo frames.
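Because the effective MTU of a path is the minimum MTU along it, a quick sanity check for end-to-end jumbo frame support might look like the following illustrative sketch (the function name is hypothetical):

```python
def jumbo_frames_ok(path_mtus, required_mtu=9000):
    """Jumbo frames only help if every hop supports them: the
    effective MTU of a path is the minimum MTU along it, so one
    1500-byte hop defeats jumbo frames end to end."""
    return min(path_mtus) >= required_mtu
```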


- Use network trunking and multipathing to provide port failover and greater
  aggregate bandwidth for NAS connections to a single DM.
- Configure LACP across two or more ports on a single DM; use LACP instead of
  EtherChannel.

DM/VDM

Virtual Data Mover (VDM) is a VNX2 software feature that is supported on VNX Data
Mover (DM) hardware. VDM enables the creation of multiple NFS environments
through the use of virtual containers. It is possible to isolate NFS environments from
each other, making them more secure, easier to replicate, and easier to migrate.

When VNX2 is configured as an NFS server, file systems are mounted on a DM and a
path to that file system is exported. Exported file systems are then available across
the network and can be mounted by remote users.

VNX2 DM disruption in vSphere environments can result in application unavailability
and guest operating system crashes. In the event of a DM disruption, the guest OS
loses its connection to the NAS datastore on the VNX2 file system, and virtual
machine I/O requests to virtual disks in the NAS datastore are shown as Disk SCSI
Timeout errors in the OS event viewer.

Use the following best practices for the guest OS to keep applications and virtual
machines available during VNX2 DM outage events and avoid downtime:

- Configure the environment with at least one standby DM to avoid a guest OS
  crash and application unavailability.
- Configure the DM and ESX host to take advantage of DNS round-robin for NFS
  path fault tolerance.
- Install the VMware Tools for the guest OS.
- Set the disk timeout value to at least 60 seconds in the guest OS.

- For a Windows OS, modify
  HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk and set the
  TimeoutValue to 120. The following command performs the same task and can
  be used for automation on multiple virtual machines:

  reg.exe add "\\%1\HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeoutValue /t REG_DWORD /d 120 /f
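To automate the registry change across many guests, the per-host commands could be generated as below. The helper and the hostnames are hypothetical; the command string mirrors the reg.exe invocation shown above:

```python
def disk_timeout_commands(vm_hostnames, timeout_seconds=120):
    """Build one remote reg.exe invocation per Windows guest to set
    the disk TimeoutValue (hostnames here are placeholders)."""
    cmd = (r'reg.exe add "\\{host}\HKLM\SYSTEM\CurrentControlSet'
           r'\Services\Disk" /v TimeoutValue /t REG_DWORD '
           r'/d {timeout} /f')
    return [cmd.format(host=h, timeout=timeout_seconds)
            for h in vm_hostnames]
```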

Troubleshooting

EMC VNX2 for file storage generates events to record errors, commands, and other
information that you might need. You can configure the system to perform an action
when specified events occur. Actions based on events are called notifications.
System notifications include logging the event in an event log file, sending an email,
or generating a Simple Network Management Protocol (SNMP) trap.

We value your feedback!

EMC and the authors of this document welcome your feedback on the solution and
the solution documentation. Contact EMC.Solution.Feedback@emc.com with your
comments.


References
EMC documentation

The following documentation on EMC.com or EMC Online Support provides additional
and relevant information. Access to these documents depends on your login
credentials. If you do not have access to a document, contact your EMC
representative.

- Configuring Events and Notifications on VNX2 for File
- EMC VNX2: Data at Rest Encryption
- EMC VNX2 Deduplication and Compression
- EMC VNX2 FAST VP
- EMC MirrorView Knowledgebook: Releases 30 to 33
- EMC VNX2 FAST Cache
- EMC VNX2 Unified Best Practices for Performance
- Introduction to the EMC VNX2 Series
- Managing an FLR-Enabled NAS Environment with the EMC File-Level Retention
  Toolkit
- Using EMC VNX2 Storage with VMware vSphere
- Virtual Data Movers on EMC VNX2
- Virtual Provisioning for the EMC VNX2 Series
- VNX2 Glossary
