
Technical white paper

HP ProLiant SL2500 VMware VSAN


Reference Architecture

Updated: 9/19/2014

Hongjun Ma and Marcus D'Andrea, HP DIA



Table of contents
Introduction
    Technology Overview
    Conclusion
SL2500 Hardware Overview
    Compute Node and Chassis Drive Cage
    Storage Controller
    Boot Device
    Networking
SL2500 VSAN Deployment Network Diagram
VSAN Prerequisites and Requirements
    VMware vSphere Requirements
    Storage Requirements
        Disk Controllers
        Flash-Based Devices
        Hard Disk Drive Requirements
    Network Requirements
        Network Interface Cards
        Supported Virtual Switch Types
        Multicast
    VSAN Ready Nodes
HP ProLiant SL2500 VSAN Installation and Configuration
    Installing VMware ESXi 5.5U1
    VSAN VMkernel Configuration
    Enable Cluster for VSAN
    Configure Disk Management
    Define VM Storage Policies
    Assign VM with Storage Policy
VSAN Redundancy, Monitoring and Performance
    Redundancy
    Monitoring and Troubleshooting
        vSphere CLI
        Ruby vSphere Console (RVC)
        VSAN Observer
    Performance
        SL2500 BIOS Performance Settings
        ESXi Performance Settings
Conclusion


Introduction
The purpose of this reference architecture white paper is twofold: (1) to introduce the technical reader to VMware Virtual SAN on HP SL2500 servers, and (2) to provide a proven and tested installation tutorial. The intended audience is IT technical staff, including network, server, and storage designers, architects, and operations teams. The reader is expected to be fluent in virtualization technologies, to have moderate server, storage, and networking expertise, and to have a working knowledge of VMware ESXi 5.5 administration.
Technology Overview
This section outlines the two components used in this solution:
VMware Virtual SAN (VSAN) is VMware's policy-driven storage product designed for vSphere environments; it simplifies and streamlines storage provisioning and management.
The HP ProLiant SL2500 server line provides all the features expected of an enterprise server, with the added benefit of being optimized for efficiency, density, and flexibility. The ProLiant SL2500 doubles the density of standard rack servers, freeing data center floor space and improving performance while lowering energy consumption.
Conclusion
Deploying VMware Virtual SAN on the HP ProLiant SL2500 as described in this guide gives customers a hyper-converged architecture that balances cost, space, energy, compute, and systems management.
This white paper demonstrates and verifies VSAN deployment and performance on the HP ProLiant SL2500. It will help HP customers deploy VSAN on the HP ProLiant SL2500, and it highlights general VSAN design concepts as well as implementation options unique to the SL2500 platform.


SL2500 Hardware Overview


This section details the SL2500 Infrastructure offering.1

SL2500 Front View (24 SFF drives): each of the four server nodes (Server 1 through Server 4) has six dedicated drive bays.

SL2500 Back View (4x1U nodes): each node provides a low-profile PCI slot (H220 SAS HBA), a FlexLOM slot (NC552SFP 10G NIC), an SUV connector (1 serial / 2 USB 2.0 / 1 video), two 1GbE NICs, and an iLO port.

The HP ProLiant SL2500 scalable system offers a very dense solution with up to four independent HP ProLiant SL210t Gen8 hot-pluggable server nodes in a standard 2U chassis. Each SL210t Gen8 server node can be serviced individually without impacting the operation of other nodes sharing the same chassis, providing increased server uptime.

HP ProLiant SL2500 supports:


Four SL210t-Gen8 server nodes in SL2500 2U Chassis
24 Small Form Factor (SFF) or 12 Large Form Factor (LFF) SAS, SATA or SSD drives with 6 SFF (or 3 LFF) drive bays
dedicated to one SL210t-Gen8 server

HP ProLiant SL210t-Gen8 servers support:


Up to two Intel Xeon E5-2600 v2 processors
Up to 512GB of RAM with 16 DIMM slots
One FlexibleLOM slot and one low profile PCIe 3.0 expansion slot
Two gigabit Ethernet and one management LAN interfaces on the motherboard

1 This paper uses a 1U server node for maximum server density and the SFF drive cage with six dedicated SFF hard disk drives (HDD) or solid state drives (SSD) per
server node


For detailed hardware specifications, please refer to SL2500 QuickSpecs.

Each of the four SL210t-Gen8 nodes tested included the following components:

Component Specification
Processor Dual-Socket Intel Xeon CPU E5-2620 v2 @ 2.10GHz. 6 cores per socket with Hyper-Threading enabled
Memory 96 GB DDR3, 6x16GB RDIMM @ 1600MHz
Network HP FlexFabric 10G 2-port 554FLR-SFP+ Adapter
Storage Controller HP H220 SAS HBA
SSDs 2x100GB 6G SATA HP Enterprise Mainstream Endurance SSD (691862-B21)
HDDs 4x1.2TB 6G SAS 10K RPM HP SAS drives (697574-B21)
Note: The above HDD has since been replaced by the new VSAN-certified 1.2TB 6G SAS 10K RPM HDD (718162-B21).

Major HP ProLiant SL2500 component details:

Compute Node and Chassis Drive Cage


The HP ProLiant SL210t Gen8 hot-pluggable server node is available in 1U and 2U options (1U used for this paper). The drive cage
is available in small form factor (SFF) and large form factor (LFF) options. The HP ProLiant SL2500 chassis can hold up to four HP
ProLiant SL210t Gen8 hot-pluggable 1U server nodes and up to 24 SFF disk drives. Each 1U server node can support up to six SFF
disk drives and the drives may be SAS, SATA, SSD or a combination of these drive types.

This paper will focus on the 1U server node for maximum server density and the SFF drive cage with six dedicated SFF hard disk
drives (HDD) or solid state drives (SSD) per server node.

Storage Controller
The base model of the HP ProLiant SL210t Gen8 server is configured with the HP B120i software RAID controller2. For our testing, however, we used a very cost-effective and VSAN-preferred storage controller, the HP H220 Host Bus Adapter, and added one HP H220 Host Bus Adapter to each server.

Figure 1 HP H220 Host Bus Adapter

Boot Device
In this reference architecture, VMware ESXi boots from one 100GB SSD located in drive bay six. This drive is dedicated to booting VMware ESXi. The five remaining drives are used for VSAN: the SSD for caching and the HDDs for capacity.

2 The HP B120i controller does not support pass-through mode and is not qualified for VSAN compatibility.


Note: The current version of VSAN does not support stateless Auto Deploy as a boot method, so hosts need local storage to boot ESXi. The HP ProLiant SL210t Gen8 server does not have an internal USB or SD card slot for booting VMware ESXi. The VMware ESXi boot drive can be a small-capacity HDD or SSD. It is for these reasons that we chose to boot from a local SSD.

Networking
The HP ProLiant SL210t Gen8 server includes two integrated 1Gb Ethernet NICs for user data networking. VMware recommends 10Gb networking for VSAN storage traffic for performance reasons. Therefore, we added one 10Gb adapter in the FlexibleLOM form factor to preserve our PCI slot for the HP H220 Host Bus Adapter (described above in the Storage Controller section).

The HP FlexFabric 554FLR-SFP+ was used in our testing, but users have the option to choose other HP 10Gb adapters.

The available 10Gb adapters are listed in the SL2500 QuickSpecs.

Figure 2: HP FlexFabric 10G 2-port 554FLR-SFP+ Adapter

Note on HP network adapter naming: the second digit in the model name indicates the manufacturer. For our testing we therefore used the Emulex-based adapter.
52x: QLogic
53x: Broadcom
55x: Emulex
56x: Intel
57x: Solarflare


SL2500 VSAN Deployment Network Diagram


For this reference architecture, the HP ProLiant SL2500 is a hyper-converged platform for VMware VSAN. This architecture
provides a scale-out solution for compute, memory, and storage and eliminates complex SAN networking and centralized
storage.
Notice the simplicity in network connectivity; each server node utilizes two 10 Gb links to the redundant upstream switches
(Nexus 5548 in this example).

HP ProLiant SL2500 VSAN 10G network connectivity: each SL210t Gen8 node's two NC552SFP 10Gb ports connect to a pair of redundant Nexus 5548 switches.


VSAN Prerequisites and Requirements


This section describes the VSAN requirements for this reference architecture, covering the VMware software elements, the storage requirements, and the network elements. Wherever appropriate we differentiate between virtualized devices, such as virtual switches, and physical devices, such as SATA disks.

VMware vSphere Requirements


VMware VSAN requires the following software components:
1. VMware ESXi version 5.5U1. A minimum of three ESXi hosts is required for VSAN.
2. VMware vCenter Server 5.5U1 (vCenter Server Appliance or Windows-based vCenter).
3. VMware vSphere Distributed Switches (VDS) or VMware standard switches (VSS).
4. VMware VSAN license.

Storage Requirements
Disk Controllers
Each vSphere host that contributes storage to the Virtual SAN cluster requires a disk controller. This can be a SAS or SATA host
bus adapter (HBA) or a RAID controller. However, the RAID controller must function in one of two modes:
Pass-through mode
RAID0 mode
Pass-through mode, commonly referred to as JBOD or HBA mode, is the preferred configuration for Virtual SAN because it
enables Virtual SAN to manage the RAID configuration settings for storage policy attributes based on availability and performance
requirements that are defined on a virtual machine.
The HP H220 Host Bus Adapter is selected so that VSAN can directly manage all HDDs and SSDs.
For a list of VSAN certified HP disk controllers, please check the VMware VSAN Compatibility Guide
Flash-Based Devices
Each vSphere host must have at least one flash-based deviceSAS, SATA, or PCI Express SSDto participate in the Virtual SAN
cluster. Flash-based devices provide both a write buffer and a read cache.
Its important to note that VSAN read/write performance is mostly dependent on SSD class and flash capacity ratio based on
anticipated storage consumption.
VMware VSAN categorized SSDs into five classes:
Class A: 2,500-5,000 writes per second
Class B: 5,000-10,000 writes per second
Class C: 10,000-20,000 writes per second
Class D: 20,000-30,000 writes per second
Class E: 30,000+ writes per second
For a list of VSAN-certified HP flash-based devices, please check the VMware VSAN Compatibility Guide.
In the lab testing, each HP ProLiant SL210t Gen8 server had six drives. Two of the six drives were 100GB SSDs (691862-B21). This SSD is rated Class C (10-20K writes per second) in the VSAN compatibility matrix; for best performance, Class E SSDs (30K+ writes per second) are recommended.
The two SSDs were installed in drive bays five and six. One SSD was used to install and boot ESXi (drive bay six) and the other SSD was used for VSAN caching/buffering. Users can also use HDDs to install and boot ESXi images.

Note: Flash-based devices do not contribute to the overall size of the distributed Virtual SAN shared datastore. They count only
toward the capacity of the Virtual SAN caching tier.


During the VSAN beta period, VMware's recommendation was a flash capacity of 10 percent of magnetic disk capacity. This recommendation has changed over time and is now as follows:

The general recommendation for sizing Virtual SAN's flash capacity is to have 10 percent of the anticipated consumed storage capacity before the number of failures to tolerate is considered. Please refer to the Virtual SAN Hardware Quick Reference Guide for more VSAN sizing guidelines.
In our lab testing, each benchmark Ubuntu VM consumed about 80GB of storage, and our 100GB SSD flash capacity was more than 10 percent of this anticipated consumed storage capacity.
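Before building disk groups, it is also worth confirming that ESXi detects the caching device as a flash device. A quick check from the ESXi shell (the device name below is an example from this lab):

~ # esxcli storage core device list -d naa.55cd2e404b5ecbef | grep -i "Is SSD"
   Is SSD: true

If a supported SSD is not reported as SSD (for example, when presented through certain controllers), it will not be claimed for the VSAN caching tier until the device is correctly detected.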
Hard Disk Drive Requirements
Each VMware vSphere host must have at least one SAS, near-line SAS (NL-SAS), or SATA magnetic hard-disk drive (HDD) to
participate in the Virtual SAN cluster. HDDs account for the storage capacity of the Virtual SAN shared datastore. Additional
magnetic disks increase the overall capacity and can also improve virtual machine performance because the virtual machine
storage objects may be striped across multiple physical volumes/spindles.
For a list of VSAN certified HP hard disk drives, please check VMware VSAN Compatibility Guide
In our lab testing, each HP ProLiant SL210t Gen8 server had six drives. Four out of the six drives were 1.2TB HDD (697574-B21).
HDDs were installed in drive bays 1-4. All 4 HDDs were used for VSAN capacity.

Note: The above HDD has since been replaced by the new VSAN-certified 1.2TB 6G SAS 10K RPM HDD (718162-B21).

Network Requirements
This section outlines the network interface cards (NIC) and (virtual) switch interfaces.
Network Interface Cards
Each VMware vSphere host must have at least one network adapter. It must be 1Gb Ethernet or 10Gb Ethernet capable, but
VMware recommends 10Gb.
The HP Flexfabric 10Gb 2-port 554FLR-SFP+ adapter was used in our lab testing.
Supported Virtual Switch Types
Virtual SAN is supported on both the VMware vSphere Distributed Switch (VDS) and the vSphere standard switch (VSS). No other
virtual switch types are supported in the current VSAN release.
Multicast
VSAN requires multicast for cluster communication and metadata transfer. The majority of VSAN inter-node traffic is unicast storage data traffic. The traffic rate for multicast is very low, but it is critical control traffic: without multicast control traffic communication among VSAN nodes, VMware vCenter will alert users to a VSAN configuration error.
For the network multicast infrastructure, there are two scenarios of VSAN multicast traffic forwarding: with or without IGMP snooping.
With IGMP snooping enabled, switches will only send VSAN multicast traffic to the ports reporting the related multicast membership.
Without IGMP snooping enabled, all VSAN multicast control traffic will be flooded to all ports in the same VLAN. VSAN nodes can still communicate successfully, but this is not the most efficient or secure model.
Depending on the switch vendor, you may need to configure a multicast router or IGMP querier to send IGMP general queries in order for the switch to maintain the IGMP snooping group table. In our lab testing, we found that the Cisco Nexus 5548 running 7.0(2)N1(1) can successfully register IGMP snooping membership without a querier.
VSAN nodes use multicast group addresses 224.1.2.3 and 224.2.3.4 for control traffic and send out unsolicited IGMP membership reports every 90 seconds.
The following esxcli command shows VSAN master and agent multicast address.

~ # esxcli vsan network list


Interface
VmkNic Name: vmk1
IP Protocol: IPv4


Interface UUID: 24753e53-de14-858e-30a2-9cb654084c44


Agent Group Multicast Address: 224.2.3.4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group Multicast Port: 12345
Multicast TTL: 5
~ #

Cisco Nexus 5500 logs indicate periodic VSAN membership reports from the nodes.

DIA-N5K-02# show ip igmp snooping event-history vlan | i 10.16.127.111


2014 Aug 5 21:31:46.915356 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.2.3.4
from 10.16.127.111 on Eth1/21
2014 Aug 5 21:31:46.915011 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.1.2.3
from 10.16.127.111 on Eth1/21
2014 Aug 5 21:30:17.037868 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.2.3.4
from 10.16.127.111 on Eth1/21
2014 Aug 5 21:30:17.037606 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.1.2.3
from 10.16.127.111 on Eth1/21
2014 Aug 5 21:28:46.950748 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.2.3.4
from 10.16.127.111 on Eth1/21
2014 Aug 5 21:28:46.950273 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.1.2.3
from 10.16.127.111 on Eth1/21
2014 Aug 5 21:27:17.073420 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.2.3.4
from 10.16.127.111 on Eth1/21
2014 Aug 5 21:27:17.073042 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.1.2.3
from 10.16.127.111 on Eth1/21
2014 Aug 5 21:25:46.985909 igmp [3465]: [3917]: SN: <127> Received v2 report: group 224.2.3.4
from 10.16.127.111 on Eth1/21

Cisco Nexus 5500 port configuration for VSAN. (IGMP snooping is enabled by default.)

DIA-N5K-02# show run int e1/22


interface Ethernet1/22
switchport mode trunk
switchport trunk native vlan 125
switchport trunk allowed vlan 125-127
spanning-tree port type edge trunk

Cisco Nexus 5500 IGMP snooping information for VSAN.

DIA-N5K-02# show ip igmp snooping vlan 127


IGMP Snooping information for vlan 127
IGMP snooping enabled
Lookup mode: IP
Optimised Multicast Flood (OMF) disabled
IGMP querier none
Switch-querier disabled
IGMPv3 Explicit tracking enabled
IGMPv2 Fast leave disabled
IGMPv1/v2 Report suppression enabled
IGMPv3 Report suppression disabled
Link Local Groups suppression enabled
Router port detection using PIM Hellos, IGMP Queries
Number of router-ports: 1
Number of groups: 2
VLAN vPC function enabled
Active ports:
Eth1/21 Eth1/22 Eth1/23 Eth1/27
Po1

Cisco Nexus 5500 IGMP snooping group table

DIA-N5K-02# show ip igmp snooping groups vlan 127


Type: S - Static, D - Dynamic, R - Router port, F - Fabricpath core port

Vlan Group Address Ver Type Port list


127 */* - R Po1
127 224.1.2.3 v2 D Eth1/22 Eth1/21 Eth1/23
Eth1/27

127 224.2.3.4 v2 D Eth1/21 Eth1/22 Eth1/27


Eth1/23

Cisco Nexus 5500 interface MAC table

DIA-N5K-02# show mac address-table interface e1/21


Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen,+ - primary entry using vPC Peer-Link
VLAN MAC Address Type age Secure NTFY Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 127 0050.5665.57cc dynamic 0 F F Eth1/21
127 0100.5e01.0203 igmp 0 F F Eth1/21 Eth1/22
Eth1/23 Eth1/27 Po1
127 0100.5e02.0304 igmp 0 F F Eth1/21 Eth1/22
Eth1/23 Eth1/27 Po1
DIA-N5K-02#

For your reference, the Cisco Nexus 5500 IGMP snooping querier command is configured under VLAN configuration mode.

vlan configuration 127


ip igmp snooping querier 192.168.1.1

VSAN Ready Nodes


There are two ways to build a VSAN node:
1. Choose from the list of Virtual SAN Ready Nodes (more on this below).
2. Build your own based on VMware VSAN certified components3.
For Option 1, a Virtual SAN Ready Node is a hyper-converged node sold by server OEMs which, in many cases, has been pre-configured to run the Virtual SAN + vSphere stack in a tested, certified hardware form factor. Virtual SAN Ready Nodes are classified by workload; an OEM builds a Virtual SAN Ready Node differently for server workloads than for VDI workloads. Specifically, Virtual SAN Ready Nodes are classified into Low, Medium, and High profiles for server workloads and Full Clone and Linked Clone profiles for VDI workloads.

In order to choose the right Virtual SAN Ready Node, follow this two-step process:

1. Refer to the Virtual SAN Hardware Quick Reference Guide for guidance on how to identify the right solution profile
category for your workload profile and the category of Ready Node that meets your needs.
2. Choose Ready Nodes listed in the VMware Virtual SAN Ready Nodes list that corresponds to the solution profile
category that you identified for your workload.

As of this writing, HP offers HP ProLiant DL380p Gen8 and HP ProLiant DL360p Gen8 servers as VSAN Ready Nodes in numerous configurations to accommodate many different workloads.
A sample VSAN Ready Node is shown below:

3 This white paper primarily focuses on Option 2; we chose the server platform, storage controller, SSDs and HDDs in order to build our own VSAN nodes


As previously discussed, this paper uses Option 2, building our own nodes from VMware-certified components. Specifically, we use the HP ProLiant SL2500 product line to build out customized VSAN nodes following the VMware VSAN Compatibility Guide. The VMware VSAN Compatibility Guide provides a list of VMware-certified I/O controllers, SSDs, and HDDs so users can build out their own VSAN nodes.
The following example shows the selection of the SSD drive used in this white paper for the VSAN node build-out.


HP ProLiant SL2500 VSAN Installation and Configuration


This section details the VMware VSAN installation, setup, and configuration on the HP ProLiant SL2500 servers.
VSAN installation consists of the following steps:
Install VMware ESXi 5.5U1 on an HP ProLiant SL210t Gen8 server local disk.
Configure VMware ESXi hosts with new VMkernel interfaces dedicated to VSAN communication.
Enable the VSAN option on the host cluster.
Configure disk management.
Define VM storage policies.
Assign storage policies to virtual machines.

Installing VMware ESXi 5.5U1


Loading ESXi on HP ProLiant SL210t Gen8 servers is accomplished by using iLO virtual media4 or PXE booting.
Important note: mark a specific HDD or SSD in the HP H220 Host Bus Adapter (HBA) BIOS as a boot drive before VMware
ESXi installation in order for the server to boot from the installed VMware ESXi image.
During initial setup/installation press F9 during the booting process to disable the embedded HP B120i controller.
After the HP B120i has been disabled, press F8 during the server booting process to configure the HP H220 HBA BIOS.

4 http://h20628.www2.hp.com/km-ext/kmcsdirect/emr_na-c03334051-10.pdf


Select Option 2 for the SAS configuration Utility.

Press Enter to get into H220 config mode.

Select SAS Topology and press Enter.

Press Enter for a list of Direct Attach Devices


Notice that HDDs are installed in bays 1-4 and SSDs in bays 5 and 6. The following screen shows that the SSD in bay 6 was previously marked as the boot disk.

If you want to choose another drive as the boot drive, scroll down to highlight your drive of choice.

Press Alt-B to mark the drive as the boot drive. Press Alt-M to open the help menu for more hotkeys, including Alt-B.


During VMware ESXi image installation, make sure you install the ESXi image to the drive marked as the boot drive in the H220 BIOS. Users should choose a low-capacity HDD or SSD for ESXi installation and boot.

VSAN VMkernel Configuration


This section describes the VMkernel configuration required for VSAN.
After VMware ESXi installation, all hosts should be configured with a new VMkernel interface with the Virtual SAN service enabled. A dedicated VLAN should be used for the VMware Virtual SAN service.
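For reference, the same VMkernel configuration can also be done from the ESXi shell when the port group lives on a standard switch (for a distributed switch the Web Client wizard is typically used). The interface name, port group name, and addressing below are examples and should be adapted to your environment:

~ # esxcli network ip interface add --interface-name=vmk1 --portgroup-name=VSAN-PG
~ # esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.16.127.111 -N 255.255.255.0
~ # esxcli vsan network ipv4 add -i vmk1

The last command tags vmk1 for Virtual SAN traffic; the result can be verified with esxcli vsan network list.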


The VSAN VMkernel interface uses the VMware vSphere Standard Switch (VSS) or vSphere Distributed Switch (VDS). The following shows the lab setup of four hosts in a VSAN cluster using the VMware VDS virtual switch.


The VSAN port group NIC teaming method follows general VMware design guidance; the key is to ensure enough bandwidth for VSAN traffic. The following example shows a dedicated 10Gb link for VSAN while other VM data and management traffic share the other 10Gb link using the Explicit Failover Order option. NetIOC can also be configured to give VSAN traffic a sufficient share of the link.

Enable Cluster for VSAN


Once hosts are configured with their VSAN VMkernel interfaces and are in the same cluster, the VSAN option is enabled under Manage > Settings. The Add disks to storage option is set to Automatic by default.

This screenshot shows the HP ProLiant SL2500 four-node cluster with 17TB of available raw capacity. Recall that each server node has four 1.2TB SFF HDDs (4 servers x 4 drives x 1.2TB nominal, approximately 1117GB formatted per drive, for roughly 17TB of raw capacity).
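For reference, VSAN can also be enabled on a cluster from RVC instead of the Web Client (the cluster path below is this lab's and is only an example):

/localhost/DIA-VSAN> vsan.enable_vsan_on_cluster computers/DIA-VSAN-POC

The companion commands vsan.disable_vsan_on_cluster and vsan.cluster_change_autoclaim (listed in the RVC help output later in this paper) control related cluster settings from the command line.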


Configure Disk Management


Under the VSAN Disk Management options you can configure and verify the VSAN disk group for each host. If Automatic was selected in the previous step, you will see all local disks added automatically to disk groups. Some SAS HBAs mark the drives as remote; in these cases users can manually add disks to groups by clicking the add buttons at the top of the disk group table.
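If the drives are not claimed automatically, a disk group can also be created from the ESXi shell with one SSD and one or more HDDs. A sketch using device names from this lab (substitute your own naa identifiers):

~ # esxcli vsan storage add -s naa.55cd2e404b5ecbef -d naa.5000cca01d5086c4 -d naa.5000cca01d55ffb8 -d naa.5000cca01d528c40 -d naa.5000cca01d544be0

The -s option names the flash device for the disk group and each -d option adds a magnetic disk; esxcli vsan storage list can then be used to confirm the disk group membership.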


Define VM Storage Policies


After the VSAN host cluster is set up and providing shared resources to virtual machines, you can define your own VM storage policies to give VM VMDKs different stripe/mirror capabilities.

In the VM storage policy creation wizard, you select rules based on VSAN, after which a number of options become available for configuration.

You choose the various options to uniquely define your storage requirements, for example:


Assign VM with Storage Policy


With different VM storage policies created, Virtual Machine VMDKs can be assigned to the corresponding policy per storage
requirement.

If no policy is specified, a default policy will be applied with the following settings:
Number of failures to tolerate = 1
Number of disk stripes per object = 1
Flash read cache reservation = 0 percent
Object space reservation = not used
Force provisioning = disabled
You can also verify VM disk storage policy compliance under the virtual machine's management tab.
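The default policy that VSAN applies per object class can also be inspected from the ESXi shell; a quick check:

~ # esxcli vsan policy getdefault

The output lists the policy currently in force for each object class (such as vdisk and vmnamespace), and esxcli vsan policy setdefault can change it, although policy changes are normally made through VM storage policies in vCenter.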


VSAN Redundancy, Monitoring and Performance


This section provides examples and details for redundancy, monitoring, and performance. In this section the reader will see configuration examples with default settings, as well as explanations and ideas on changing them for specific implementations.

Redundancy
Redundancy is defined per VM according to its storage policy in VSAN. A VM can have four different types of objects on a VSAN datastore, such as the VM Home namespace and VMDKs. Each storage object is deployed on VSAN as a RAID tree, and each leaf of the tree is a component.
For a VMDK with a stripe width of 2, a RAID-0 stripe is configured across a minimum of two disks for that VMDK. The VMDK is an object and each stripe of this VMDK is a component of this object.
There is another component type named the witness. The witness component is special and important: witnesses are part of each and every object in the RAID tree. They are used to determine a quorum for a VM storage object in the event of a failure in the cluster. Witnesses do not contain data, only metadata; a witness consumes about 2MB of space for metadata.
There are multiple failure scenarios for VSAN, such as an SSD, HDD, or host failure. The following example illustrates a host failure scenario.
In the case of a host failure, and assuming the VM storage policy has number of failures to tolerate set to 1, VSAN initially waits 60 minutes (the default value, which can be changed) in case the host comes back online within this period. After the default time, VSAN starts recreating the VM object components on other hosts in the cluster. VSAN has an internal mechanism to ensure that if the original failed host comes back online, it can either select the replica on the original host or continue re-creating the new replica on the other node, depending on which option is faster while guaranteeing data integrity.
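The 60-minute rebuild delay is governed by the VSAN.ClomRepairDelay advanced setting on each host. As a hedged example, the value can be lowered to 30 minutes as follows (apply the same value on every host in the cluster; the clomd service may need to be restarted for the change to take effect):

~ # esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 30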
In the screen capture below, Ubuntu-03 VM (residing on SL-02 node) had disk3 VMDK components created on SL-
02(.113) and SL-04(.114) with witness on SL-01(.111). This is with VM storage policy failure to tolerate set to 1.
Compliance status is green.

SL-03 server was then powered off using the iLO remote power button. VSAN and vCenter reported the host as disconnected and the VSAN components as not compliant, since another failure would likely make the object data inaccessible. The component on SL-03 was marked as absent.


After 60 minutes, VSAN begins to re-create components on other nodes.

The component recreation process can be monitored by Ruby vSphere Console (RVC) command
vsan.resync_dashboard <cluster>.


After successful recreation of another replica on the other host (the SL-02 node in this case), the VSAN object is back to compliant status. The SSD and HDD disk names and UUIDs in SL-02 provide further information on which HDD and SSD in SL-02 hold the newly created replica for persistence and caching.


Monitoring and Troubleshooting


vSphere CLI
This section explores various vSphere CLI commands used to monitor VSAN hardware inventory and running status.
The queue depth of a VSAN node's storage controller determines how many I/O requests can be outstanding at once. The following esxcli commands can be used to verify that the HP H220 SAS HBA queue depth is 600; Virtual SAN requires a minimum queue depth of 256.

~ # esxcli storage core adapter list


HBA Name Driver Link State UID Description
-------- -------- ---------- -------------------- ----------------------------------------
---------------------------------
vmhba1 be2iscsi online iscsi.vmhba1 (0:5:0.2) Emulex Corporation Emulex
OneConnect OCe11100 10GbE, iSCSI UCNA
vmhba2 be2iscsi online iscsi.vmhba2 (0:5:0.3) Emulex Corporation Emulex
OneConnect OCe11100 10GbE, iSCSI UCNA
vmhba3 mpt2sas link-n/a sas.500605b006e0a81a (0:6:0.0) LSI Logic / Symbios Logic HP
H220 Host Bus Adapter
~ #
~ # esxcli system module list | grep mpt2sas
mpt2sas true true
~ #
~ # esxcli system module parameters list -m mpt2sas
Name Type Value Description
------------------------ ------------ ----- -----------------------------------------------
--------------------
command_retry_count int Device discovery TUR command retry count:
(default=144)
diag_buffer_enable int post diag buffers
(TRACE=1/SNAPSHOT=2/EXTENDED=4/default=0)
disable_discovery int disable discovery
heap_initial int Initial heap size allocated for the driver.
heap_max int Maximum attainable heap size for the driver.
logging_level int bits for enabling additional logging info
(default=0)
max_lun int max lun, default=16895
max_queue_depth int max controller queue depth (default=600)
max_sectors short max sectors, range 64 to 8192 default=8192


max_sgl_entries int max sg entries


missing_delay array of int device missing delay , io missing delay
mpt2sas_fwfault_debug int enable detection of firmware fault and halt
firmware - (default=0)
mpt2sas_raid_queue_depth int Max RAID Device Queue Depth (default=128)
mpt2sas_sas_queue_depth int Max SAS Device Queue Depth (default=254)
mpt2sas_sata_queue_depth int Max SATA Device Queue Depth (default=32)
msix_disable int disable msix routed interrupts (default=-1:
msix enabled)
~ #

esxcli vsan network list lists VSAN VMkernel multicast info (described earlier)

~ # esxcli vsan network list


Interface
VmkNic Name: vmk1
IP Protocol: IPv4
Interface UUID: 81e1e353-c9af-8fd3-a421-9cb654060cae
Agent Group Multicast Address: 224.2.3.4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group Multicast Port: 12345
Multicast TTL: 5

esxcli vsan storage list lists VSAN disk info.

~ # esxcli vsan storage list


naa.5000cca01d5086c4
Device: naa.5000cca01d5086c4
Display Name: naa.5000cca01d5086c4
Is SSD: false
VSAN UUID: 5253475e-6d7a-aa2b-1684-4e5b69d5c899
VSAN Disk Group UUID: 527d77e7-d506-ff6f-39b5-e6671158c3d9
VSAN Disk Group Name: naa.55cd2e404b5ecbef
Used by this host: true
In CMMDS: true
Checksum: 10504162676956883562
Checksum OK: true

naa.55cd2e404b5ecbef
Device: naa.55cd2e404b5ecbef
Display Name: naa.55cd2e404b5ecbef
Is SSD: true
VSAN UUID: 527d77e7-d506-ff6f-39b5-e6671158c3d9
VSAN Disk Group UUID: 527d77e7-d506-ff6f-39b5-e6671158c3d9
VSAN Disk Group Name: naa.55cd2e404b5ecbef
Used by this host: true
In CMMDS: true
Checksum: 11444427935466237220
Checksum OK: true
<snip>

esxcli vsan cluster get lists VSAN cluster info.

~ # esxcli vsan cluster get


Cluster Information
Enabled: true
Current Local Time: 2014-08-07T22:26:51Z
Local Node UUID: 533e6e3f-8a81-8e03-1bff-9cb654060cae
Local Node State: AGENT
Local Node Health State: HEALTHY
Sub-Cluster Master UUID: 533e77e1-8085-530d-9f47-9cb654084238
Sub-Cluster Backup UUID: 533e515a-d9af-438e-8402-9cb654084c44
Sub-Cluster UUID: 527844df-d950-63c9-e997-a30628596f7a
Sub-Cluster Membership Entry Revision: 3
Sub-Cluster Member UUIDs: 533e77e1-8085-530d-9f47-9cb654084238, 533e515a-d9af-438e-8402-
9cb654084c44, 53008bcd-7370-6297-6203-9cb654084920, 533e6e3f-8a81-8e03-1bff-9cb654060cae


Sub-Cluster Membership UUID: 7fe1e353-fcde-a6fd-e7d3-9cb654084238

The vdq command provides VSAN disk mapping, eligibility, and partition information.

~ # vdq -i -H
Mappings:
DiskMapping[0]:
SSD: naa.55cd2e404b5ecbef
MD: naa.5000cca01d5086c4
MD: naa.5000cca01d55ffb8
MD: naa.5000cca01d528c40
MD: naa.5000cca01d544be0

~ # vdq -q
[
{
"Name" : "naa.5000cca01d5086c4",
"VSANUUID" : "5253475e-6d7a-aa2b-1684-4e5b69d5c899",
"State" : "In-use for VSAN",
"Reason" : "Non-local disk",
"IsSSD" : "0",
"IsPDL" : "0",
},

{
"Name" : "naa.5000cca01d55ffb8",
"VSANUUID" : "52c53d0f-fcd8-a9f0-d905-4e516b9898ab",
"State" : "In-use for VSAN",
"Reason" : "Non-local disk",
"IsSSD" : "0",
"IsPDL" : "0",
},

{
"Name" : "naa.55cd2e404b5eaf0c",
"VSANUUID" : "",
"State" : "Ineligible for use by VSAN",
"Reason" : "Has partitions",
"IsSSD" : "1",
"IsPDL" : "0",
},

VSAN cannot use or reuse the disks when the disks still contain partitions or file systems. To reuse these disks in VSAN,
you must manually wipe the disks. This can be done by using disk partition tools such as the GParted disk tool ISO, or
using the ESXi built-in tool partedUtil.


The partedUtil command operates on the disk ID obtained from the vdq CLI output or by listing the /vmfs/devices/disks contents.

~ # ls /vmfs/devices/disks/ -l
total 9767641693
-rw------- 1 root root 1200243695616 Aug 7 23:07 naa.5000cca01d5513b0
-rw------- 1 root root 2097152 Aug 7 23:07 naa.5000cca01d5513b0:1
-rw------- 1 root root 1200240532992 Aug 7 23:07 naa.5000cca01d5513b0:2
-rw------- 1 root root 1200243695616 Aug 7 23:07 naa.5000cca01d551e20
-rw------- 1 root root 2097152 Aug 7 23:07 naa.5000cca01d551e20:1
-rw------- 1 root root 1200240532992 Aug 7 23:07 naa.5000cca01d551e20:2

To list disk partition info:


~ # partedUtil getptbl /dev/disks/naa.55cd2e404b5f6686
gpt
12161 255 63 195371568
1 2048 6143 381CFCCC728811E092EE000C2911D0B2 vsan 0
2 6144 195371534 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

To delete the above two partitions on the disk:

~ # partedUtil delete /dev/disks/naa.55cd2e404b5f6686 1


~ # partedUtil delete /dev/disks/naa.55cd2e404b5f6686 2

Ruby vSphere Console (RVC)


Ruby vSphere Console (RVC) provides a cluster-centric view of VSAN and is the main tool for monitoring VSAN performance and operation. The VMware vCenter Server Appliance has RVC integrated, so you can log into the RVC console directly from the appliance console.
DIA-VCEN-02:~ # rvc root@localhost
password:
0 /
1 localhost/
>
> ls
0 /
1 localhost/
> cd 1
/localhost> ls
0 DIA-2014 (datacenter)
1 DIA-VSAN (datacenter)
/localhost> cd 1
/localhost/DIA-VSAN> ls
0 storage/
1 computers [host]/
2 networks [network]/
3 datastores [datastore]/
4 vms [vm]/

RVC offers in-line help where you can display all options and descriptions for any command.
/localhost/DIA-VSAN> help vsan
Commands:
enable_vsan_on_cluster: Enable VSAN on a cluster
disable_vsan_on_cluster: Disable VSAN on a cluster
cluster_change_autoclaim: Enable VSAN on a cluster
host_consume_disks: Consumes all eligible disks on a host
host_wipe_vsan_disks: Wipes content of all VSAN disks on a host
host_info: Print VSAN info about a host
<snip>

Sub-command help

/localhost/DIA-VSAN> help vsan.disks_info


usage: disks_info [opts] host...
Print physical disk info about a host


host: Path to a HostSystem


--show-adapters: Show adapter information
--help, -h: Show this message

The following command shows the summary for VSAN cluster compute information.

/localhost/DIA-VSAN/computers/DIA-VSAN-POC/hosts> ls
0 10.16.125.112 (host): cpu 2*12*2.09 GHz, memory 206.00 GB
1 10.16.125.113 (host): cpu 2*12*2.09 GHz, memory 206.00 GB
2 10.16.125.114 (host): cpu 2*12*2.09 GHz, memory 206.00 GB
3 10.16.125.111 (host): cpu 2*12*2.09 GHz, memory 206.00 GB
/localhost/DIA-VSAN/computers/DIA-VSAN-POC/hosts>

vsan.host_info <host> displays host cluster, storage and networking information.

/localhost/DIA-VSAN/computers/DIA-VSAN-POC/hosts> vsan.host_info 2
VSAN enabled: yes
Cluster info:
Cluster role: agent
Cluster UUID: 527844df-d950-63c9-e997-a30628596f7a
Node UUID: 53008bcd-7370-6297-6203-9cb654084920
Member UUIDs: ["533e77e1-8085-530d-9f47-9cb654084238", "533e515a-d9af-438e-8402-
9cb654084c44", "53008bcd-7370-6297-6203-9cb654084920", "533e6e3f-8a81-8e03-1bff-9cb654060cae"]
(4)
Storage info:
Auto claim: yes
Disk Mappings:
SSD: Local ATA Disk (naa.55cd2e404b5ec641) - 93 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d544d70) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d57a250) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d570d7c) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d524c14) - 1117 GB
NetworkInfo:
Adapter: vmk1 (10.16.127.114)

vsan.disks_info <host> displays detailed disk and sizing info for a specific host.

vsan.cluster_info <cluster> displays all hosts within the specified VSAN cluster.

/localhost/DIA-VSAN/computers> vsan.cluster_info 0
Host: 10.16.125.112
VSAN enabled: yes
Cluster info:
Cluster role: agent
Cluster UUID: 527844df-d950-63c9-e997-a30628596f7a
Node UUID: 533e6e3f-8a81-8e03-1bff-9cb654060cae


Member UUIDs: ["533e77e1-8085-530d-9f47-9cb654084238", "533e515a-d9af-438e-8402-


9cb654084c44", "53008bcd-7370-6297-6203-9cb654084920", "533e6e3f-8a81-8e03-1bff-9cb654060cae"]
(4)
Storage info:
Auto claim: yes
Disk Mappings:
SSD: Local ATA Disk (naa.55cd2e404b5ecbef) - 93 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d5086c4) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d55ffb8) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d528c40) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d544be0) - 1117 GB
NetworkInfo:
Adapter: vmk1 (10.16.127.112)

Host: 10.16.125.113
VSAN enabled: yes
Cluster info:
Cluster role: master
Cluster UUID: 527844df-d950-63c9-e997-a30628596f7a
Node UUID: 533e77e1-8085-530d-9f47-9cb654084238
Member UUIDs: ["533e77e1-8085-530d-9f47-9cb654084238", "533e515a-d9af-438e-8402-
9cb654084c44", "53008bcd-7370-6297-6203-9cb654084920", "533e6e3f-8a81-8e03-1bff-9cb654060cae"]
(4)
Storage info:
Auto claim: yes
Disk Mappings:
SSD: Local ATA Disk (naa.55cd2e404b625f20) - 93 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d53ab48) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d52c420) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d53ba4c) - 1117 GB
MD: HP Serial Attached SCSI Disk (naa.5000cca01d52dec0) - 1117 GB
NetworkInfo:
Adapter: vmk1 (10.16.127.113)


vsan.disks_stats <cluster> displays summary information on host SSD/HDD utilization inside the VSAN cluster.

The vsan.whatif_host_failures command displays VSAN capacity statistics after a simulated host failure. The optional mark command sets an alias for an RVC path for ease of use.

/localhost/DIA-VSAN> mark cluster ~/computers/DIA-VSAN-POC


/localhost/DIA-VSAN> vsan.whatif_host_failures --show-current-usage-per-host ~cluster
Current utilization of hosts:
+---------------+---------+--------------+------+----------+-----------------+--------------+
| | | HDD Capacity | | | Components | SSD Capacity |
| Host | NumHDDs | Total | Used | Reserved | Used | Reserved |
+---------------+---------+--------------+------+----------+-----------------+--------------+
| 10.16.125.113 | 2 | 2235.50 GB | 7 % | 7 % | 30/3000 (1 %) | 0 % |
| 10.16.125.111 | 4 | 4471.00 GB | 0 % | 0 % | 7/3000 (0 %) | 0 % |
| 10.16.125.112 | 4 | 4471.00 GB | 6 % | 6 % | 38/3000 (1 %) | 0 % |
| 10.16.125.114 | 4 | 4471.00 GB | 7 % | 7 % | 42/3000 (1 %) | 0 % |
+---------------+---------+--------------+------+----------+-----------------+--------------+

Simulating 1 host failures:

+-----------------+------------------------------+-----------------------------------+
| Resource | Usage right now | Usage after failure/re-protection |
+-----------------+------------------------------+-----------------------------------+
| HDD capacity | 5% used (14935.27 GB free) | 6% used (10464.27 GB free) |
| Components | 1% used (11883 available) | 1% used (8883 available) |
| RC reservations | 0% used (260.84 GB free) | 0% used (195.63 GB free) |
+-----------------+------------------------------+-----------------------------------+


vsan.vm_object_info <VM> displays detailed object and component info for a specific VM.

vsan.resync_dashboard <cluster> displays current VSAN cluster replication information.

spbm.check_compliance <vm> checks VM storage policy compliance status.

/localhost/DIA-VSAN/vms> spbm.check_compliance 7
+-----------------+-----------------------+------------+
| VM/Virtual Disk | Profile | Compliance |
+-----------------+-----------------------+------------+
| ubuntu-vsan-03 | failure-to-tolerate-1 | compliant |
| Hard disk 1 | failure-to-tolerate-1 | compliant |
| Hard disk 2 | failure-to-tolerate-1 | compliant |
| Hard disk 3 | failure-to-tolerate-1 | compliant |
| Hard disk 4 | failure-to-tolerate-1 | compliant |
| Hard disk 5 | failure-to-tolerate-1 | compliant |
| Hard disk 6 | failure-to-tolerate-1 | compliant |
| Hard disk 7 | failure-to-tolerate-1 | compliant |
| Hard disk 8 | failure-to-tolerate-1 | compliant |
| Hard disk 9 | failure-to-tolerate-1 | compliant |
+-----------------+-----------------------+------------+


VSAN Observer
VSAN Observer is a performance tool that has been written specifically for VSAN performance diagnostics. It's part of the Ruby vSphere Console (RVC) integrated with VMware vCenter 5.5U1. After starting the VSAN Observer service, administrators can use a web browser to display various VSAN performance graphs.
To start VSAN Observer, the administrator should log in to the RVC console first and run the following command with the relevant datacenter and cluster name.

> vsan.observer /localhost/DIA-VSAN/computers/DIA-VSAN-POC --run-webserver force

The web display is served by the VMware vCenter server on HTTP port 8010.

Performance
We also tested our HP ProLiant SL2500 four node VSAN cluster for performance using the same guidelines specified in the
VMware blog How to Supercharge your Virtual SAN Cluster (2 Million IOPS!!!) .
The goal of our performance testing is to provide you with an appreciation for expected VSAN performance, but not to seek
maximum performance for the HP ProLiant SL2500 with VSAN. Our four node cluster is not configured for maximum
performance. Recall from earlier in this paper that we do not have sufficient SSD caching installed. Even so, we decided to
measure it to see how well it performs.

Note: VSAN read/write performance is largely dependent on the selected SSD class and size. VMware's published VSAN performance testing used Intel S3700 Class-E 400GB SSDs, while we conducted the SL2500 testing using HP Class-C 100GB SSDs. For best VSAN performance, users should provision higher-capacity Class-E SSDs.

Two tests for VSAN performance benchmarking using IOMeter:


100% Read (80% random)
70% Read, 30% Write (80% random)


We set all servers to the maximum performance power setting in the server BIOS, increased all hosts' network heap size to 512MB, and changed all four Ubuntu VMs' boot-time parameters to vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32.
Each HP ProLiant SL210t Gen8 server node ran a single four-vCPU 32-bit Ubuntu 14.04.01 VM with eight 8GB VMDKs (one per IOMeter worker). The disks were distributed across two VMware Paravirtual SCSI (PVSCSI) controllers.
For each VM, we ran IOMeter with 8 workers, each configured to work on a single 8GB VMDK. The maximum disk size per worker was reduced to 2GB because our SSDs had much lower capacity than those in VMware's benchmark testing. Outstanding I/Os per worker was set to 16, and all I/O block size and alignment was set to 4KB.
For the 100% read test, the VMs were assigned a storage policy with HostFailuresToTolerate=0. This setting provided no VM storage redundancy but did improve testing performance.
For the 70% read test, we used the default storage policy, which sets HostFailuresToTolerate=1 and stripeWidth=1 for all VMDKs.
The following chart summarizes the performance. In general, using four HP ProLiant SL210t Gen8 server nodes with Class-C SSDs, the 100% read test achieves around 150-160K IOPS, while the 70% read test achieves around 55K IOPS. Again, for maximum VSAN performance, HP customers should choose higher-capacity Class-E SSDs.


IOMeter 100% Read report:

Note: Performance was IOPS-bound.


IOMeter 70% read report:

Note: Performance was IOPS-bound.


VSAN Observer is a very useful tool to monitor VSAN performance with its detailed stats graph output.

VSAN client output displays performance data from a VM perspective.


The VSAN Disks and Disks (Deep-Dive) views display information from the VSAN storage tier perspective per host. A 100% Read Cache (RC) hit rate and zero evictions indicate that the SSD cache layer can serve the traffic load efficiently without transferring data from the HDDs.

More detailed storage statistics are also available per SSD and HDD component.


SL2500 BIOS Performance Settings

The following capture shows the HP ProLiant SL210t Gen8 server BIOS power setting that was used to set maximum
performance.


ESXi Performance Settings

The VMware ESXi host network heap size was increased from the default of 256MB to 512MB.
~ # esxcli system settings advanced list -o /Net/TcpipHeapmax
Path: /Net/TcpipHeapMax
Type: integer
Int Value: 512
Default Int Value: 256
Min Value: 32
Max Value: 512
String Value:
Default String Value:
Valid Characters:
Description: Max megabytes the tcpip module heap can grow to. (REQUIRES REBOOT!)
~ #
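The value itself was changed with the corresponding set command; note that, as the description states, a reboot is required for the new heap size to take effect:

~ # esxcli system settings advanced set -o /Net/TcpipHeapMax -i 512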

Each VM Ubuntu OS was configured to boot with vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32


user1@ubuntu-vsan-01:~$ less /boot/grub/grub.cfg | grep vmw_pvscsi
linux /vmlinuz-3.13.0-32-generic root=/dev/mapper/ubuntu--vsan--01--vg-root ro
vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32
user1@ubuntu-vsan-01:~$
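For reference, on Ubuntu these parameters are typically added to the GRUB_CMDLINE_LINUX entry in /etc/default/grub, after which update-grub is run and the VM rebooted. A sketch (your existing kernel options may differ):

# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX="vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32"

user1@ubuntu-vsan-01:~$ sudo update-grub
user1@ubuntu-vsan-01:~$ sudo reboot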


Conclusion
This reference architecture serves as a guide for constructing a VMware VSAN cluster from VMware VSAN certified components, with the HP SL2500 addressing performance, energy, and space needs. VMware VSAN is the next generation of VM-centric storage architecture. Using VM-level storage policies, Virtual SAN automatically and dynamically matches requirements with the underlying storage resources. With Virtual SAN, many manual storage tasks are automated, delivering a more efficient and cost-effective operational model for shared storage.
Virtual SAN enables read/write caching using server-side flash, optimizing the I/O data path to deliver the performance demanded by virtualized infrastructure.
The HP ProLiant SL2500 Scalable System is an ideal platform for new hyper-converged virtualization architectures, providing the right balance of compute, memory, storage, and networking capacity in an appealing 2U/4-node/24-SFF-drive form factor.
Compared with the traditional rack-mount form factor, the HP ProLiant SL2500 enables administrators to easily scale out a VSAN cluster while providing exceptional VSAN performance without compromising compute, storage, or memory requirements.
