
Administration Guide

Nutanix Complete Cluster Version 2.6


August 2012
Copyright
Copyright 2012 Nutanix, Inc.
Nutanix, Inc.
1735 Technology Drive, Suite 575
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other
jurisdictions. All other marks and names mentioned herein may be trademarks of their respective
companies.

Conventions

Convention           Description

variable_value       The action depends on a value that is unique to your environment.

ncli> command        The commands are executed in the Nutanix nCLI.

user@host$ command   The commands are executed as a non-privileged user (such as
                     nutanix) in the system shell.

root@esx# command    The commands are executed as the root user in the ESXi shell.

output               The information is displayed as output from a command or in a
                     log file.

Default Cluster Credentials

Interface                        Target                  Username        Password

Nutanix web console              Nutanix Controller VM   admin           admin
vSphere client                   vCenter                 administrator   nutanix/4u
vSphere client                   ESXi host               root            nutanix/4u
SSH client or console            ESXi host               root            nutanix/4u
SSH client                       Nutanix Controller VM   nutanix         nutanix/4u
SSH client                       vMA                     vi-admin        Nutanix/4u
IPMI web interface or ipmitool   Nutanix node            ADMIN           ADMIN
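For example, to open an SSH session to a Controller VM with the default credentials listed above (replace cvm_ip_addr with the address of a Controller VM in your cluster):

```shell
user@host$ ssh nutanix@cvm_ip_addr
```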
Table of Contents
Overview.............................................................................................................................7

Chapter 1: About Nutanix Complete Cluster............................................9


Cluster Architecture.............................................................................................................. 9
Your Nutanix Complete Block........................................................................................... 10
Nutanix Networking................................................................................................. 11
System Maximums................................................................................................... 12
Factory-Installed Components..................................................................................13
Guest VM Data Management............................................................................................. 14
Heat-Optimized Tiering............................................................................................14

Chapter 2: Cluster Management.............................................................. 15


Web Console....................................................................................................................... 15
To Sign in to the Web Console............................................................................... 17
Nutanix Command-Line Interface...................................................................................... 17
To Install the nCLI on Your Local System.............................................................17
To Initiate an nCLI Session From Your Local System........................................... 18
Command Format.....................................................................................................19
Embedded Help........................................................................................................ 19
Monitoring the Nutanix Cluster..........................................................................................20
Serviceability....................................................................................................................... 20

Chapter 3: Storage Management..............................................................23


Storage Tiers....................................................................................................................... 23
Storage Pools.......................................................................................................................24
To Create a Storage Pool.........................................................................................24
To Add Disks to a Storage Pool..............................................................................25
Containers............................................................................................................................ 25
Container Required Parameters................................................................................25
To Create a Container..............................................................................................26
vDisks.................................................................................................................................. 27
vDisk Parameters......................................................................................................27
To Create a vDisk.................................................................................................... 28
To Discover Nutanix vDisks....................................................................................28
To Delete an iSCSI Target...................................................................................... 29
Datastores............................................................................................................................ 30
To Create an NFS Datastore....................................................................................30
To Create a VMFS Datastore.................................................................................. 31

Appendix A: System Specifications.......................................................... 33


Field-Replaceable Unit List................................................................................................ 35
Appendix B: Glossary................................................................................ 37
Abbreviations.......................................................................................................................39
Overview
This guide is intended for experienced Windows or Linux system administrators who are
managing Nutanix Complete Cluster. It is assumed that the administrator is familiar with
virtualization and storage technologies.

© Copyright 2012 Nutanix | 7


Chapter 1
About Nutanix Complete Cluster
Nutanix Complete Cluster is a converged, scale-out compute and storage system that is
purpose-built to host and store virtual machines. All nodes in a Nutanix cluster converge to deliver a
unified pool of tiered storage and present resources to VMs for seamless access. A global data
system architecture integrates each new node into the cluster, allowing you to scale the solution
to meet the needs of your infrastructure.

Cluster Architecture
The building block for the cluster is a Nutanix Complete Block, which is a rackable 2U chassis
with four high-performance servers, each running a standard hypervisor, that contain processors,
memory, and local storage (SSDs and hard disks).
Each node hosts a Nutanix Controller VM that enables the pooling of local storage from all nodes
in the cluster.

Your Nutanix Complete Block

A Nutanix Complete Block is a 2U rackable chassis with four industry-standard x86 servers, or
nodes. Each node contains the following components:


Hardware                                                 Software

• Dual 6-core Intel Xeon processors                      • VMware vSphere 5.0 (ESXi)
• Up to 192 GB of RAM                                    • VMware vCenter 5.0*
• 320 GB Fusion-io PCIe SSD                              • VMware vMA (customized by Nutanix)*
• 300 GB SATA SSD
• 5 SATA HDDs (1 TB each for a total capacity of 5 TB)

* Hosted on only one node, but used to manage all nodes and VMs in the cluster.

Nutanix Networking

Interfaces
Each Nutanix node has three network interfaces: one 10-gigabit Ethernet interface and two
1-gigabit Ethernet interfaces. A factory-installed Nutanix Complete Block sends all traffic through
a 10-gigabit Ethernet port on each node. The two 1-gigabit ports per node are set up as standby
interfaces.

IP Addresses
All Controller VMs and ESXi hosts have two network interfaces.

Interface              IP Address      vSwitch

ESXi host vmk0         User-defined    vSwitch0
Controller VM eth0     User-defined    vSwitch0
ESXi host vmk1         192.168.5.1     vSwitchNutanix
Controller VM eth1     192.168.5.2     vSwitchNutanix
Controller VM eth1:1   192.168.5.254   vSwitchNutanix

Note: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any
subnets that overlap with subnet 192.168.5.0/24.
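Because the vmk0 and eth0 addresses are user-defined, it can be worth validating candidate addresses before assigning them. The following is a minimal sketch (not part of the Nutanix tooling) that flags addresses inside the reserved 192.168.5.0/24 subnet:

```shell
# Sketch only: reject candidate vSwitch0 addresses that fall inside the
# 192.168.5.0/24 subnet reserved for vSwitchNutanix traffic.
in_reserved_subnet() {
  case "$1" in
    192.168.5.*) return 0 ;;  # address overlaps the internal subnet
    *)           return 1 ;;
  esac
}

for ip in 10.1.1.50 192.168.5.77; do
  if in_reserved_subnet "$ip"; then
    echo "$ip: conflicts with 192.168.5.0/24 -- choose another address"
  else
    echo "$ip: ok"
  fi
done
```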

Ports
Nutanix uses a number of ports for internal communication. The following unique ports are
required for external access to Controller VMs in a Nutanix cluster.

Purpose                                 Port Number

Cluster and IP address configuration    2100
Command Center                          9443


vSwitches
A Nutanix node is configured with two vSwitches:
• vSwitchNutanix is used for local communication between the Controller VM and the ESXi
host. It has no uplinks.
• vSwitch0 is used for all other communication. It has uplinks to the three physical network
interfaces. vSwitch0 has two networks:
• Management Network is used for HA, vMotion, and vCenter communication.
• VM Network is used by all VMs.

Caution: If you need to manage network traffic between VMs with greater control,
create additional port groups on vSwitch0. Do not modify vSwitchNutanix.
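For example, an additional port group could be added to vSwitch0 from the ESXi shell. The port group name and VLAN ID below are illustrative only; adjust them for your environment.

```shell
root@esx# esxcli network vswitch standard portgroup add --portgroup-name="VM Network 2" --vswitch-name=vSwitch0
root@esx# esxcli network vswitch standard portgroup set --portgroup-name="VM Network 2" --vlan-id=20
```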

System Maximums
The figures listed here are the maximum tested and supported values for entities in a Nutanix
cluster. Nutanix clusters are also subject to the vSphere maximum values documented by
VMware.
Entity                                        Supported Maximum

VMs                                           80 per ESXi host
vDisks (including snapshots)                  70,000 per Nutanix cluster
vDisk size                                    2 TB
RDM vDisks                                    256 per vZone or ESXi host
NFS datastores                                256 per ESXi host
NFS datastore size                            Available disk capacity in the Nutanix cluster
                                              (no known logical limit)
Small files (<512 KB) on NFS datastores       2.5 GB per Nutanix node (10 GB per Nutanix
                                              block)
Storage pools                                 2 per Nutanix cluster
Storage pool size                             Available disk capacity in the Nutanix cluster
                                              (no known logical limit)
Physical disks per storage pool               Number of disks in the Nutanix cluster (no
                                              known logical limit)
Containers                                    16 per Nutanix cluster
Container size                                Available disk capacity in a storage pool (no
                                              known logical limit)
Replication factor                            2
Nutanix Command Center concurrent             20 per Controller VM
connections

Factory-Installed Components
The components listed here are configured by the Nutanix manufacturing process. Do not modify
any of these components except under the direction of Nutanix support.

Nutanix Software
• Local datastore name
• Settings and contents of any Controller VM, including the name
Important: If you create vSphere resource pools, Nutanix Controller VMs must have
the top share.

ESXi Settings
• NFS settings
• VM swapfile location
• VM startup/shutdown order
• iSCSI software adapter settings

• vSwitchNutanix standard virtual switch
• vmk0 interface is in port group "Management Network"
• Firewall disabled

Guest VM Data Management

Hosts read and write data in shared Nutanix datastores as if they were connected to a SAN. From
the perspective of an ESXi host, the only difference is the improved performance that results
from data not traveling across a network. VM data is stored locally, and replicated on other nodes
for protection against hardware failure.
When a guest VM submits a write request through ESXi, that request is sent to the Controller
VM on the host. To provide a rapid response to the guest VM, this data is first stored on the
SSD-PCIe device, within a subset of storage called the HOT Cache. This cache is rapidly
distributed across the 10 GigE network to other SSD-PCIe devices in the cluster. HOT Cache
data is periodically transferred to persistent storage within the cluster. Data is written locally for
performance and replicated on multiple nodes for high availability.
When the guest VM sends a read request through ESXi, the Controller VM will read from the
local copy first, if present. If the host does not contain a local copy, then the Controller VM will
read across the network from a host that does contain a copy. As remote data is accessed, it will
be migrated to storage devices on the current host, so that future read requests can be local.

Heat-Optimized Tiering

The Nutanix cluster dynamically manages data based on how frequently it is accessed. When
possible, new data is saved on the SSD tier. Frequently-accessed, or "hot" data is kept on this
tier, while "cold" data is migrated to the HDD tier. Data that is accessed frequently again will be
moved back to the SSD tier.
This automated data migration also applies to read requests across the network. If a block of data
is repeatedly accessed by a guest VM on a remote host, the cluster will migrate the data to the
SSD device on the remote host. This migration not only reduces network latency, but also ensures
that frequently-accessed data is stored on the fastest storage tier.

Chapter 2
Cluster Management
Nutanix Command Center is a central location to monitor and configure all entities within the
cluster, including virtual machines, vDisks, and snapshots. You can access Command Center
either through the web-based management console or the Nutanix Command-Line Interface
(nCLI).
Many of the common administrative actions you need to perform can be completed using either
interface. In such cases, it is recommended that you take advantage of the features in the web
console, which provide context and up-to-date information about the cluster.
The web console is also best-suited for monitoring the cluster. The Dashboard page provides
an overview of all Nutanix entities, and allows you to filter the display based on one of these
entities.
Some tasks are only supported in the nCLI. These tasks are not available in the web console for
one of the following reasons:
• The task is a new feature that has not yet been incorporated into the web console.
• The task is part of an advanced feature that most administrators do not need to use.

Web Console
The web console enables you to monitor and manage a Nutanix cluster through an intuitive web
interface.

Context-based Filtering
Throughout the web console, all entities are presented in trays, which dynamically update as
you filter the view. For example, if you select a single VM from the VMs tray, the Hosts tray
updates to display only the node that is currently hosting the selected VM. Removing the VM
filter returns both trays to their previous states.


Intuitive Searching
Entities and actions are found not only on their relevant pages, but also through the search field
that is always present in the upper-right corner of the web console.
For example, you can type the name of a virtual machine, such as Win7 or the action edit
Win7. Either text string presents all VMs and other entities that contain Win7 in their name. You
can click any of these options to navigate to the appropriate page of the web console.

Performance and Usage Statistics


The Nutanix web console displays statistics for entities throughout the cluster. You can toggle the
view between historical and real-time.
The usage and performance statistics vary based on the entity that you are viewing. For example,
virtual machine usage statistics are displayed in terms of CPU and memory, while disk usage
statistics show disk capacity in TB. In most cases, performance statistics are displayed in IOPS,
latency, and bandwidth.


To Sign in to the Web Console


Before you begin. Ensure that your browser has Adobe Flash Player 10.2 or later. To upgrade
Flash Player, go to http://get.adobe.com/flashplayer/.

1. Open a web browser.

2. Type http://management_ip_addr in the address field and press Enter.


Replace management_ip_addr with the IP address of any Nutanix Controller VM in the
cluster.
The browser will redirect to the encrypted port (9443) and display an SSL certificate warning.
Acknowledge the warning and proceed to the site.

3. Enter the Nutanix Command Center credentials and click Enter.

Nutanix Command-Line Interface

To Install the nCLI on Your Local System

Tip: Refer to Default Cluster Credentials on page 3 for the default credentials of all
cluster components.

1. Verify that your system has Java Runtime Environment (JRE) version 5.0 or higher.
To check which version of Java is installed on your system or to download the latest version,
go to http://www.java.com/en/download/installed.jsp.

2. Download the nCLI.

a. Connect to the Nutanix Command Center web console.

b. Click the System Settings wheel at the top of the console.

c. Select Download nCLI.

d. Download and unzip the file on your local system.

3. If desired, configure your Windows %PATH% or Linux $PATH environment variable to include
the following directories:
• ncli directory (for example, C:\ncli)
• JRE bin directory (for example, C:\Program Files\Java\jre1.6.0_21\bin)

The procedure to complete this step depends on your operating system. For more information,
go to http://java.com/en/download/help/path.xml.
If you do not set these environment variables, you will need to specify the complete path to
the ncli command when you run it.
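On a Linux system, for example, the two directories could be added to PATH for the current shell session as follows. Both directory locations are placeholders; substitute your actual install paths.

```shell
# Append the unzipped ncli directory and the JRE bin directory to PATH.
# Directory locations are examples only; substitute your own.
NCLI_DIR="$HOME/ncli"
JRE_BIN="/usr/java/jre1.6.0_21/bin"
export PATH="$PATH:$NCLI_DIR:$JRE_BIN"
# Add the same lines to ~/.bashrc (or your shell profile) to make the
# change persistent across sessions.
echo "$PATH"
```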

To Initiate an nCLI Session From Your Local System


Before you begin. Install the nCLI on your local system by following To Install the nCLI on
Your Local System on page 17.

1. Open a command prompt.

2. Start the nCLI.


$ cd ncli_dir
$ ./ncli -s management_ip_addr
• Replace ncli_dir with the directory where the nCLI is installed.
• Replace management_ip_addr with the IP address of any Nutanix Controller VM in the
cluster.

If you receive the message Error: Could not connect to Nutanix Gateway, the
cluster is not started. To start the cluster, log on to a Controller VM as the nutanix user and
run the following command:
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the
cluster:

CVM: 172.16.8.191 Up
Medusa UP [22088, 22089, 22090, 22098]
Pithos UP [22331, 22332, 22333, 22334]
Stargate UP [22336, 22341, 22342, 22347]
Chronos UP [22457, 22458, 22459, 22460]
Curator UP [22463, 22466, 22467, 22473]
Prism UP [22472, 22477, 22483, 22501]
AlertManager UP [22502, 22506, 22507, 22537]
Scavenger UP [22536, 22543, 22544, 22564]
StatsAggregator UP [22556, 22557, 22558, 22559]
SysStatCollector UP [22567, 22570, 22571, 22583]

When the cluster is up, exit the nCLI and start it again.

Results. The Nutanix CLI is now in interactive mode. To exit this mode, type exit at the ncli>
prompt.


Command Format

Nutanix Command-Line Interface commands must match the following format:


ncli> entity action parameter1=value parameter2=value ...
entity can be replaced by any Nutanix entity, such as cluster or disk.
action can be replaced by any valid action for the preceding entity. Each entity has a unique set
of actions, but a common action across all entities is list. For example, you can type the following
command to request a list of all vDisks in the cluster.
ncli> vdisk list
Some actions require parameters at the end of the command. For example, when creating an NFS
datastore, you need to provide both the name of the datastore as it will appear in vSphere and the
name of the source container.
ncli> datastore create name="NTNX-NFS" ctr-name="nfs-ctr"
Parameter-value pairs can be listed in any order, as long as they are preceded by a valid entity
and action.
Tip: To avoid syntax errors, surround all string values with double-quotes, as
demonstrated in the preceding example. This is particularly important when specifying
parameters that accept a list of values.

Embedded Help

The nCLI provides assistance on all entities and actions. By typing help at the command line,
you can request additional information at one of three levels of detail.

help                  Provides a list of entities and their
                      corresponding actions.

entity help           Provides a list of all actions and parameters
                      associated with the entity, as well as which
                      parameters are required and which are
                      optional.

entity action help    Provides a list of all parameters associated
                      with the action, as well as a description of
                      each parameter.

The nCLI provides additional details at each level. To control the scope of the nCLI help output,
add the detailed parameter, which can be set to either true or false.
For example, type the following command to request a detailed list of all actions and parameters
for the cluster entity.
ncli> cluster help detailed=true


You can also type the following command if you prefer to see a list of parameters for the
cluster edit-params action without descriptions.
ncli> cluster edit-params help detailed=false

Monitoring the Nutanix Cluster


The Nutanix web console displays statistics for different entities. You can view graphs in the web
console either by right-clicking an entity and selecting View Performance or View Usage or by
navigating to the entity list and clicking the usage or performance icon in the upper-left corner.
The metrics are also displayed in the entity grid view.
The following table shows which metrics are available for each entity type. Both historical and
real-time data is available for performance metrics.

Entity                       Usage metrics                      Performance metrics

Storage > Hosts              TB                                 IOPS, bandwidth
Storage > Tiers/Disks        TB                                 IOPS, bandwidth
Storage > Pools              TB                                 IOPS, bandwidth
Storage > Containers         Information lifecycle              N/A
                             management (ILM), total and
                             tierwise usage, provisioned
                             and physical storage
Storage > vDisks             N/A                                IOPS, bandwidth, latency
Compute > Virtual Machines   N/A                                IOPS, bandwidth, latency
Compute > vDisks             N/A                                IOPS, bandwidth, latency
Compute > Snapshots          N/A                                N/A

Statistics in nCLI
Textual statistics are available in the nCLI with the list-stats action, for example:
ncli> host list-stats id=29

Serviceability
Remote Support
Remote support is enabled during the site installation procedure. Remote support can be managed
in Nutanix Command Center, using either the web console or the nCLI.
Click System Settings > Remote Support to enable, temporarily enable, or disable remote
support. If remote support is temporarily enabled, an icon appears in the header, including a
countdown until remote support will be disabled.


To start or stop remote support, select the appropriate option in the Remote Support dialog box.

You can also use the nCLI command cluster stop-remote-support. To start remote
support after you have stopped it, use the nCLI command cluster start-remote-support.
Both commands have an optional duration parameter. For example, if you have stopped remote
support and want to enable it only for the next hour, use the following nCLI command.
ncli> cluster start-remote-support duration=60
At the end of 60 minutes, remote support will be disabled.

Email Alerts
Email alerts to Nutanix support are enabled by default. To stop email alerts, use the nCLI
command cluster stop-email-alerts. To start email alerts after you have stopped them,
use the nCLI command cluster start-email-alerts.
Both commands have an optional duration parameter. For example, if email alerts are in effect
and you want to disable them only for the next half hour, use the following nCLI command.
ncli> cluster stop-email-alerts duration=30
At the end of 30 minutes, email alerts will again be sent.

Chapter 3
Storage Management
Nutanix Complete Cluster classifies available storage into separate tiers with distinct
performance capabilities. Storage is managed hierarchically with storage pools, containers,
and vDisks. You can specify the amount of storage contributed from each tier to these storage
entities, and thereby manage their storage performance characteristics.

Storage Pools Provide Physical Separation


A storage pool is a group of physical disks from one or more tiers. Storage pools provide physical
separation between virtual machines because a storage device can only be assigned to a single
storage pool at a time.
Nutanix recommends creating a single storage pool to manage all physical disks within the
cluster. This configuration allows you to use all available resources in the cluster, and supports
the majority of use cases.

Containers Support NFS Storage


A container is a subset of available storage within a storage pool. Containers hold the virtual
disks (vDisks) used by virtual machines. Selecting a storage pool for a new container defines the
physical disks where the vDisks will be stored.
Nodes in the Nutanix cluster can mount a container as an NFS datastore to provide shared storage
for VM files. This storage is thinly provisioned, which means that storage is allocated to the
container only as needed when data is written, rather than allocating the total maximum capacity
when the container is created.
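Thin provisioning is not unique to Nutanix; the general behavior can be illustrated with a sparse file on any Linux system, which reports its full apparent size while consuming blocks only as data is written:

```shell
# Illustration only (not Nutanix-specific): a sparse file behaves like
# thinly provisioned storage.
truncate -s 1G sparse.img
du -k --apparent-size sparse.img   # reports the full 1 GB apparent size
du -k sparse.img                   # reports only the blocks actually used
rm -f sparse.img
```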

vDisks Enable a Variety of Storage Options


A vDisk is a subset of available storage within a container. The cluster automatically creates
and manages vDisks within an NFS container. You can also create vDisks that are optimized for
direct mapping to a VM for high-performance storage.

Storage Tiers
The cluster defines the following tiers, based on the physical storage that is included on each
node.

SSD-PCIe    Contains all solid-state drives that are
            connected to the PCIe slots of the nodes.

DAS-SATA    Contains all hard-disk drives that are
            connected to the SATA slots of the nodes.


Note: The cluster also includes an empty tier, named SSD-SATA. This tier will be used
in future product releases, but should be ignored at present.

Storage Pools
Storage pools are groups of physical disks from one or more tiers. Nutanix recommends creating
a single storage pool to hold all disks within the cluster. This configuration, which supports the
majority of use cases, allows the cluster to dynamically optimize the distribution of resources
like capacity and IOPS. Isolating disks into separate storage pools provides physical separation
between VMs, but can also create an imbalance of these resources if the disks are not actively
used.
When you expand your cluster by adding new nodes, the new disks can also be added to the
existing storage pool. This scale-out architecture allows you to build a cluster that grows with
your needs.

To Create a Storage Pool

1. Click Storage > Pools.

2. Click the configuration wheel at the top of the storage pool table to open the Create Storage
Pool pane.

3. Type a name for the storage pool in the Name field.


Note: This entity has the following naming restrictions:
• The maximum length is 75 characters.
• Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z), decimal digits (0-9), dots (.), hyphens (-), and underscores (_).

4. Expand a tier by clicking the triangle next to its name.


5. Assign disks from one or more nodes by clicking + or - in the relevant rows.
You can also type a number directly in the text fields or check the Select All box.

6. Repeat steps 4 and 5 with any additional tiers that you want to add to the pool.

7. Click Create.

You can also perform this task using the nCLI. For more information, type the
following command:
ncli> storagepool create help

To Add Disks to a Storage Pool

1. Click Storage > Pools.

2. Right-click the storage pool that you want to expand and select Edit Storage Pool.
The Update Storage Pool pane appears.

3. Expand a tier by clicking the triangle next to its name.


If you recently added a node to the cluster, it will contain disks in both the DAS-SATA and
SSD-PCIe tiers.

4. Assign disks from one or more nodes by clicking + in the relevant rows.
You can also type a number directly in the text fields.

5. Repeat steps 3 and 4 with any additional tiers that you want to add to the pool.
Nutanix recommends adding all available disks to a single storage pool.

6. Click Update.

You can also perform this task using the nCLI. For more information, type the
following command:
ncli> storagepool update help

Containers
A container is a subset of available storage within a storage pool. Containers hold the virtual
disks (vDisks) used by virtual machines. Selecting a storage pool for a new container defines the
disks where the vDisks are stored.

Container Required Parameters


When creating a container, you must specify the following parameters.


Name           Distinguishes the container from other entities
               within Nutanix Command Center.

Storage Pool   Defines the physical disks on which the
               container and its vDisks will be stored.

To Create a Container

1. Click Storage > Containers.

2. Click the configuration wheel at the top of the table to open the Create Container pane.

3. Type a name for the container in the Name field.


Note: This entity has the following naming restrictions:
• The maximum length is 75 characters.
• Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z), decimal digits (0-9), dots (.), hyphens (-), and underscores (_).

4. Select a storage pool from the Storage Pool drop-down menu.

5. Click Create.
A dialog box appears with the following message:
Container (container_name) has been created. Do you want to create
an NFS Datastore on this container?

6. Choose whether to use NFS on the new container.


• If you do not want to use NFS on the new container, click No.
• If you want to use NFS on the new container, click Yes and complete the following steps.

a. Type a name for the NFS datastore in the Datastore Name field.

b. Click Create.
A message similar to the following is displayed for each host in the cluster:

You can also perform this task using the nCLI. For more information, type the
following command:
ncli> container create help


vDisks
A vDisk is a subset of available storage within a container. If the container is mounted as an
NFS volume, then the creation and management of vDisks within that container is handled
automatically by the cluster. You can view these vDisks within Nutanix Command Center.
It may be necessary to enable iSCSI access for a subset of your VM workloads. To provide iSCSI
access at the VM or host level, you can create vDisks of one of two types:
• RDM, which can be directly attached to a virtual machine as an iSCSI LUN to provide
high-performance storage.
• VMFS, which can be mounted as a VMFS datastore to provide additional shared storage
within the cluster.
Important: VMFS datastores are not recommended for most VM workloads. For
more information, see Datastores on page 30.

vDisk Parameters
vDisks have the following parameters.

Name               Distinguishes the vDisk from other entities
                   within Nutanix Command Center.

Container          Defines the subset of available storage on
                   which the vDisk can be stored, as well as the
                   settings that are applied to the vDisk.
                   Containers provide logical separation among
                   groups of vDisks. When you apply certain
                   rules or settings to a container, they apply to
                   all vDisks within that container.

Maximum Capacity   Defines the upper limit (in GB) of physical
                   storage that can be consumed by the vDisk.

Type               Determines whether the vDisk accepts
                   connection requests from more than one
                   system.
                   • RDM vDisks accept connection requests
                     from only one system at a time. Attach this
                     type of vDisk to a single VM for better
                     performance.
                   • VMFS vDisks accept connection requests
                     from multiple systems. Create a VMFS
                     datastore to make this type of vDisk
                     available to multiple ESXi hosts.

When creating a vDisk in the nCLI, type is determined by the shared parameter. This
parameter is set to false in RDM vDisks, and true in VMFS vDisks.

To Create a vDisk
Before you begin. Create a container in Nutanix Command Center. See To Create a Container
on page 26.

1. Type create vDisk in the web console search field.

2. Select create vdisk - Admin action from the resulting list.

3. Type a name for the vDisk in the Name field.


Note: This entity has the following naming restrictions:
• The maximum length is 75 characters.
• Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-
z), decimal digits (0-9), dots (.), hyphens (-), and underscores (_).

4. Select a container from the Container drop-down list.

5. Type a numeric value in the Max Capacity field.

This amount (in GB) is thinly provisioned to the vDisk but appears to the host as a LUN with the full amount of capacity.

6. Select a vDisk type from the Type drop-down menu.


• RDM provides better performance for a single VM.
• VMFS allows multiple ESXi hosts to access the vDisk.

7. Click Create.

You can also perform this task using the nCLI. For more information, type the
following command:
ncli> vdisk create help
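The full command can likewise be given in one line. This sketch uses assumed parameter names (confirm them against the help output above); shared="false" selects the RDM type, and shared="true" selects VMFS:

ncli> vdisk create name="vdisk1" ctr-name="ctr1" max-capacity="100" shared="false"

In this hypothetical example, a 100 GB RDM-type vDisk named vdisk1 is created in the existing container ctr1.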

What to do next. If the vDisk is not visible to the host and vmkernel.log shows an error
ChannelID or TargetID is out of range, remove iSCSI targets associated with deleted
vDisks.

To Discover Nutanix vDisks

The following procedure is only necessary if you plan to attach an iSCSI vDisk to a host (as a
VMFS datastore) or to a VM (as a raw device mapping). If you are using an NFS datastore, the
vDisks are managed completely by the cluster.

1. Log on to vCenter with the vSphere client.


2. Select a cluster node and click the Configuration tab.

3. Click Storage Adapters in the Hardware frame.

4. Right-click the device under iSCSI Software Adapter and select Rescan.

5. Wait until two new tasks (Rescan HBA and Rescan VMFS) show a status of Complete at the
bottom of the vSphere client.

6. Confirm that at least one device with a name that starts with Nutanix iSCSI Disk was added
to the Details pane of the iSCSI adapter.
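The rescan in steps 4 through 6 can also be triggered from the ESXi shell. The adapter name vmhba33 is only an example; substitute the name of the software iSCSI adapter shown in the Storage Adapters list:

root@esx# esxcli storage core adapter rescan --adapter=vmhba33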

To Delete an iSCSI Target

ESXi has a limit of 256 RDM vDisks per host, as documented in Configuration Maximums
for VMware vSphere 5.0. If a host has detected this number of vDisks, even if they have
subsequently been deleted, new vDisks will not appear. The vmkernel.log shows a message
like this:

WARNING: iscsi_vmk: iscsivmk_TransportCreateSessionWithTargetID:
ChannelID or TargetID is out of range. channelID=0 targetID=256
MaxChannels=64 MaxTargets=256

To resolve this issue, remove deleted vDisks from the host iSCSI static discovery list in vCenter.

1. Log on to vCenter with the vSphere client.

2. Select the host that contained the vDisk.

3. Go to Configuration > Storage Adapters.

4. Click the iSCSI adapter under iSCSI Software Adapter and click Properties.

5. Go to the Static Discovery tab.

6. Locate the deleted vDisk in the list of discovered targets. The target is named
iqn.2010-06.com.nutanix:vdisk_name.


7. Select the deleted vDisk and click Remove.
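The same cleanup can be performed from the ESXi shell. The adapter name, portal address, and target name below are placeholders; list the static targets first to find the stale entry, then remove it:

root@esx# esxcli iscsi adapter discovery statictarget list
root@esx# esxcli iscsi adapter discovery statictarget remove --adapter=vmhba33 --address=192.168.5.2:3260 --name=iqn.2010-06.com.nutanix:vdisk_name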

Datastores
Nutanix provides choice by supporting both iSCSI and NFS protocols when mounting a storage
volume as a datastore within vSphere. NFS has many performance and scalability advantages
over iSCSI, and is the recommended datastore type.

NFS Datastores
The Nutanix NFS implementation (NDFS) reduces network chatter by localizing the data path
of guest VM traffic to its host. This boosts performance by eliminating the unnecessary hops
to remote storage devices that are common with the pairing of iSCSI and VMFS.

To enable vMotion and related vSphere features, each host in the cluster must mount an NFS
volume using the same datastore name. The Nutanix web console and nCLI both have a function
to create an NFS datastore on multiple hosts in a Nutanix cluster.
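If you need to mount the volume on a single host manually (for example, on a host added after the datastore was created), the equivalent ESXi shell command takes the following shape. The NFS server address, share path, and datastore name here are illustrative assumptions; use the values configured for your cluster:

root@esx# esxcli storage nfs add --host=192.168.5.2 --share=/ctr1 --volume-name=NTNX-ctr1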

VMFS Datastores
VMFS vDisks are exported as iSCSI LUNs that can be mounted as VMFS datastores. The
vDisk name is included in the iSCSI identifier, which helps you identify the correct LUN when
mounting the VMFS volume.
VMFS datastores are not recommended for most VM workloads. To optimize your deployment,
it is recommended that you discuss the needs of all VM workloads with a Nutanix representative
before creating a new VMFS datastore within the cluster.

To Create an NFS Datastore


Before you begin. Create a container in Nutanix Command Center. See To Create a Container
on page 26.

1. Sign in to the Nutanix web console.


2. Click the Common Tasks menu toward the top of the dashboard.

3. Click Create NFS Datastore.

4. Type a name for the NFS datastore in the Datastore Name field.

5. If the option is available, select a container from the Container menu. Otherwise, proceed to
the next step.

6. Ensure that Hosts > Select All is selected.

7. Click Create.
A message similar to the following is displayed for each host in the cluster:

You can also perform this task using the nCLI. For more information, type the
following command:
ncli> datastore create help
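A one-line form of this task, with assumed parameter names (confirm them with the help output above), would look like the following:

ncli> datastore create name="NTNX-NFS" ctr-name="ctr1"

In this hypothetical example, NTNX-NFS is the datastore name presented to vSphere and ctr1 is an existing container.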

To Create a VMFS Datastore


Before you begin. Discover a new VMFS vDisk in vSphere.
Important: VMFS datastores are not recommended for most VM workloads. For more
information, see Datastores on page 30.

1. Log on to vCenter with the vSphere client.

2. Select a host in the Nutanix cluster and click the Configuration tab.

3. Click Storage in the Hardware frame.

4. Click Add Storage.

5. Select Disk/LUN and click Next.

6. Select a VMFS vDisk that was previously created in Nutanix Command Center.
Caution: Do not attempt to mount a VMFS datastore on an RDM vDisk. If you are
unsure about the vDisk type, return to Nutanix Command Center and type update
vdisk-name in the web console search field.

7. Select VMFS-5 and click Next.

8. Click Next to accept the current disk layout.


9. Type a meaningful name in the Name field, such as NTNX-VMFS, and click Next.

10. Click Next to accept the default formatting settings.

11. Review the wizard summary and click Finish.

12. Confirm that the new datastore appears in the storage view of all other nodes in the cluster.
If the datastore does not appear, or it is shown as inactive, click Rescan All to rescan the
host's iSCSI adapter.

Appendix A
System Specifications

Hardware Components

CPU: 8 × Intel Xeon 6-core Westmere X5650 @ 2.66 GHz
Drives: 20 × hot-swappable 1 TB SATA hard disk drives (HDD); 4 × hot-swappable 300 GB solid-state drives (SSD)
Memory: 12 × 240-pin DIMM sockets per node; supports up to 192 GB RAM per node (1333/1066/800 MHz DDR3 RDIMM), 1.5 V or 1.35 V
Network connections: 2 × 1 GbE and 1 × 10 GbE per node
Power supplies: 2 × redundant, hot-swappable supplies; 80 PLUS Gold certified; 1100 W output @ 100-140 V, 13.5-9.5 A, 50-60 Hz; 1400 W output @ 180-240 V, 9.5-7.0 A, 50-60 Hz
Expansion slots: 1 × (x16) PCIe 2.0 (low profile) per node
Fans: 4 × cooling fans

Software Components

Hypervisor: VMware vSphere 5.x (ESXi)
Management: VMware vCenter 5.x; Intelligent Platform Management Interface (IPMI)


System Characteristics

Block weight (standalone): 85 lbs. (38.6 kg)
Block weight (packaged): 95 lbs. (43 kg)
Block form factor: 2U rack-mount chassis, 28" (711 mm) deep
Block dimensions: height 3.5" (89 mm), width 17.2" (437 mm), depth 28" (711 mm)
Node weight: 7 lbs. (3.2 kg)
Node dimensions: width 6.8" (173 mm), length 22.5" (572 mm)

Power and Electrical

AC input: 1100 W output @ 100-140 V, 13.5-9.5 A, 50-60 Hz; 1400 W output @ 180-240 V, 9.5-7.0 A, 50-60 Hz
Block power consumption (maximum): 1200 W with 48 GB RAM per node; 1350 W with 192 GB RAM per node
Thermal dissipation (maximum): 4100 BTU/hr. with 48 GB RAM per node; 4600 BTU/hr. with 192 GB RAM per node

Operating Environment

Operating temperature: 50° to 95° F (10° to 35° C)
Nonoperating temperature: 32° to 110° F (0° to 40° C)
Operating relative humidity: 20% to 95% (non-condensing)
Nonoperating relative humidity: 5% to 95% (non-condensing)


Field-Replaceable Unit List

Description: Part Number

Spare, Bezel, 2U, with lock & keys, NX-2000 Family: X-BEZEL-NX2000
Spare, Cable, 3m, QSFP to SFP+: X-CBL-3M-QSFP-SFP+
Spare, Cable, 4m, QSFP to SFP+: X-CBL-4M-QSFP-SFP+
Spare, Cable, 5m, QSFP to SFP+: X-CBL-5M-QSFP-SFP+
Spare, Chassis, Nutanix 2000 series (PSU & fans only): X-CHASSIS-NX2000
Doc, Replacement, Chassis, NX2000 Series: 775-0007-0001
Spare, Fan, Nutanix 2000 series: X-FAN-NX2000
Doc, Replacement, Fan, NX2000 Series: 775-0008-0001
Spare, SSD, PCI, 320GB, Fusion I/O, Nutanix 2000 series: X-FUSION-IO-320GB
Doc, Replacement, Fusion I/O, 320GB, NX2000 Series: 775-0003-0001
Spare, HDD, SATA, 1TB, Nutanix 2000 series: X-HDD-SATA-1TB
Doc, Replacement, HDD, 1TB, SATA, NX2000 Series: 775-0001-0001
Spare, Memory, 16GB, Nutanix 2000 series (qty 2): X-MEM-16GB
Doc, Replacement, Memory, NX2000 Series: 775-0005-0001
Spare, Memory, 8GB, Nutanix 2000 series (qty 2): X-MEM-8GB
Doc, Replacement, Memory, NX2000 Series: 775-0005-0001
Spare, Node, 192GB, Nutanix 2000 series: X-NODE-192GB-NX2000
Doc, Replacement, Node, NX2000 Series: 775-0006-0001
Spare, Node, 48GB, Nutanix 2000 series: X-NODE-48GB-NX2000
Doc, Replacement, Node, NX2000 Series: 775-0006-0001
Spare, Node, 96GB, Nutanix 2000 series: X-NODE-96GB-NX2000
Doc, Replacement, Node, NX2000 Series: 775-0006-0001
Spare, Power Supply, 1400W, Nutanix 2000 series: X-PSU-1400-NX2000
Doc, Replacement, Power Supply, NX2000 Series: 775-0004-0001
Spare, Power Supply, 1620W, Nutanix 2000 series: X-PSU-1620-NX2000
Doc, Replacement, Power Supply, NX2000 Series: 775-0004-0001
Spare, Rail, 2U, NX-2000 Family: X-RAIL-NX2000
Spare, SSD, SATA, 300GB, Nutanix 2000 series: X-SSD-SATA-300GB
Doc, Replacement, SSD, 300GB, NX2000 Series: 775-0002-0001
Spare Kit, NX-2000 series (2 × HDD, 1 × PSU, 1 × Fan): XC-SPRKT-NX2000

Appendix B
Glossary

block
A set of four Nutanix nodes contained in a single enclosure.

clone
A writeable copy of a vDisk.

cluster
A group of nodes contained in one or more Nutanix blocks.

Controller VM (CVM)
A Nutanix VM that manages storage and other cluster functions on a node.

Command Center
Cluster management tools from Nutanix; includes the web console and nCLI.

container
A subset of available storage within a storage pool.

datastore
A logical container for files necessary for VM operations.

guest VM
A VM running on a Nutanix cluster that executes a workload, such as VDI or Exchange, as
opposed to a VM that is involved in cluster operations, such as the vMA or a Controller VM.


host
An instance of the ESXi hypervisor that runs on a Nutanix node.

heat-optimized tiering (HOT) cache
Short-term storage on a faster, more expensive storage tier.

node
A physical server contained in a Nutanix block; runs an ESXi host.

raw device mapping (RDM)
A vDisk type that is configured for direct mapping to a single VM for increased performance.

replication factor (RF)
The number of copies of data within a container. These copies exist for fault tolerance: if a
physical disk fails, the cluster can recover data from another copy. The cluster manages the
replicated data, and the copies are not visible to the user. The replication factor is always 2;
no other value is supported.

snapshot
A read-only copy of the state and data of a VM at a point in time.

storage pool
A group of physical disks from one or more tiers.

tier
A type of physical storage in a Nutanix node. There are two tiers: SSD-PCIe (solid-state drives in
a PCIe slot) and DAS-SATA (hard disk drives on a SATA controller).

vDisk
Data associated with a VM represented as a set of files on a datastore.


virtual machine file system (VMFS)
A vDisk type that is configured to provide storage to multiple ESXi hosts through the creation of
a VMFS datastore.

vSphere Management Assistant (vMA)
A VMware Linux virtual appliance that can run commands on hosts and VMs in a cluster.

vZone
A grouping of hosts and vDisks that is used to overcome the ESXi limit of 256 RDM vDisks per host.

Abbreviations

CVM
Controller VM

HOT
Heat-optimized tiering

nCLI
Nutanix command-line interface; part of Nutanix Command Center

RDM
Raw device mapping

RF
Replication factor

SVM
Service VM; see Controller VM

VM
Virtual machine


vMA
vSphere Management Assistant

VMFS
Virtual machine file system
