
Exanodes Virtual Machine Edition 1.1 for VMware ESX


Quick Start Guide

Exanodes Virtual Machine Edition 1.1 for VMware ESX: Quick Start Guide

e06-VM-1.1 rev01. Publication date: 04/20/2009. Copyright © 2009 Seanodes

Abstract
This Guide contains information on how to quickly set up the Exanodes application. It is a step-by-step, task-oriented guide for rapid deployment of the Exanodes solution.

Disclaimer and Copyright


The information contained in this publication is subject to change without notice. SEANODES makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. SEANODES shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual. All rights reserved. SEANODES and Exanodes are registered trademarks of SEANODES. Other product names mentioned herein may be trademarks or registered trademarks of their respective companies.

Table of Contents
1. Introduction
2. Presentation
3. Exanodes VM requirements
   3.1. Memory
   3.2. Storage
   3.3. CPU
   3.4. Network
   3.5. Time
4. Installation
   4.1. Configuring the Exanodes VM network
   4.2. OVF package installation procedure
   4.3. ESX package installation procedure
   4.4. Adding disks to the Exanodes disk pool
   4.5. Verifying the Exanodes VM setup
5. Exanodes VM storage configuration
6. VMware ESX configuration
   6.1. Setting up the VMware ESX software iSCSI initiator
   6.2. Creating a VMFS datastore on the Exanodes storage
7. Guest VM configuration
   7.1. Configuring the virtual machines restart
   7.2. Stopping the virtual machines
Glossary


1. Introduction
This document describes the steps needed to install the Exanodes VM. It does not explain how to manage Exanodes; please refer to the Exanodes User's Guide for more information about Exanodes management. Nor does it cover detailed instructions for setting up a virtual machine.

This document provides guidance on integrating the Exanodes solution into a Virtual Infrastructure environment. For more information about VMware configuration, please check the documentation available with your VMware software package.

Throughout this document, we use VMware vCenter Server (previously: VirtualCenter (VC)) and Virtual Infrastructure Client (VIC) 2.5.0 to perform administrative tasks. Please refer to the appropriate VMware documentation on how to set up VMware vCenter Server.

Security note: Seanodes states that the Exanodes VM does not contain any virus, spyware or malware known to this date.

The Exanodes virtual machine edition is designed to be compatible with almost every type of commodity x86 server supported by VMware ESX 3.5 or ESXi.

Table 1.1. Compatibility list

Server
  CPU:            i386, x86_64, AMD, Intel; 500 MHz minimum
  Memory:         256 MB minimum
  Network:        2 Gigabit Ethernet network cards
  Internal disks: full disk, disk partition; internal RAID, DAS (or SAN storage with exclusive server access), SSD

Hypervisor
  Supported ESX versions: ESX 3.5, ESXi
  Other:                  please contact SEANODES

2. Presentation
This chapter presents the way Exanodes should be implemented in a VMware virtualized environment. The following illustration gives the "big picture" of what happens on one ESX host running an Exanodes VM.

Figure 2.1. A typical Exanodes VM storage stack on one ESX host

Local physical disks (1) are used for two purposes: storing the local Exanodes VM on the ESX host (one and only one Exanodes VM per ESX host) and providing the storage pools that the Exanodes VM will use. Each Exanodes VM must be able to reach the other Exanodes VMs over the network to work properly and to deliver the level of functionality and fault tolerance Exanodes brings to the storage.

VMFS uses physical disks (2), also called VMHBA devices, where data is actually stored and upon which datastores (3) are created. Virtual disks are then created on these datastores for Exanodes use. Exanodes (4) aggregates these virtual disks (3) from many hosts (that is, from the VMHBA physical devices (2)). It then virtualizes them to create volumes that are exported through an iSCSI target (5). The storage is then imported back into ESX through the iSCSI software initiator (6) to participate in the VMFS storage pool as LUNs. Once formatted as a VMFS datastore, any VM virtual disk (7) can be created on this storage and used by the application VMs (8).

The following figure illustrates this mechanism at the cluster level:

Figure 2.2. A typical Exanodes VM cluster

Each ESX host runs one and only one Exanodes VM. Thanks to the Shared Internal Storage, each ESX host has access to the same datastores exported by the Exanodes iSCSI target, so any VM (like VM3) can be started on any ESX host and migrated to any ESX host.

Note
It is mandatory that the Exanodes storage used by other VMs go through VMFS in order to benefit from VMFS functionality such as VMotion and snapshots. Configuring a VM with direct access to the Exanodes storage via an iSCSI connection is not recommended.

3. Exanodes VM requirements

3.1. Memory
Each Exanodes VM must have a minimum of 256 MB reserved memory.

3.2. Storage

Each Exanodes VM should be configured with one or more independent-persistent virtual disks for Exanodes use. It is not mandatory to assign disks to every Exanodes VM, but best practice is to keep the disk setup homogeneous across the Exanodes VMs, i.e. assign each Exanodes VM virtual disks of the same size and, if possible, located on the same type of physical disk.

3.3. CPU

Each Exanodes VM needs a minimum of 500 MHz of reserved CPU; reserving more improves performance.

3.4. Network

The network requirements for the Exanodes VM iSCSI traffic are the same as for a traditional iSCSI storage system. Each Exanodes VM must be able to obtain its IP information over DHCP and must keep the same IP address, hostname and DNS information after a reboot; adapt the network settings accordingly. A good way to do this is to set a constant MAC address for each Exanodes VM and configure the DHCP server with a static lease for each of these MAC addresses, as sketched below. At the hardware level, Exanodes requires a non-blocking switch that supports multicast.

Note: the Exanodes VM must use a dedicated Virtual Machine Network bound to a dedicated physical network adapter. Sharing this Virtual Machine Network with other VMs or with the VMkernel is NOT SUPPORTED.
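As an illustration, here is a minimal ISC dhcpd configuration fragment implementing such static leases. The host names, MAC addresses and IP addresses below are hypothetical; adapt them to your own network plan.

# /etc/dhcpd.conf (fragment) -- static leases keyed on the VMs' fixed MAC addresses
# All names and addresses below are examples only.
host exanodesvm-esx1 {
    hardware ethernet 00:50:56:aa:bb:01;   # MAC address set in the Exanodes VM settings
    fixed-address 192.168.10.11;           # IP address this VM will always receive
    option host-name "exanodesvm-esx1";
}
host exanodesvm-esx2 {
    hardware ethernet 00:50:56:aa:bb:02;
    fixed-address 192.168.10.12;
    option host-name "exanodesvm-esx2";
}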

3.5. Time

It is recommended to set the ESX host's time correctly, as the VMware Tools in the Exanodes VM use it to synchronize the VM's clock. Setting up NTP on the ESX host is good practice; a sketch follows.
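As a sketch, NTP can be enabled from the ESX 3.5 service console roughly as follows. The NTP server name is an example, and your environment may use the VI Client's Time Configuration panel instead:

# Allow outgoing NTP traffic through the service console firewall
esxcfg-firewall -e ntpClient
# Point ntpd at a time source (example server; use your site's NTP server)
echo "server pool.ntp.org" >> /etc/ntp.conf
# Apply the change and make ntpd start at boot
service ntpd restart
chkconfig ntpd on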

4. Installation

This procedure assumes that all the disks that will either store the Exanodes VM or contribute to the storage pool are already configured on the ESX host as part of a VMFS datastore. The Exanodes VM installation procedure depends on whether you are using the OVF version or the ESX version of the package.

4.1. Configuring the Exanodes VM network


The Exanodes VM requires two dedicated virtual switches:
- The first vSwitch is used for the local iSCSI loopback; it must not be linked to a physical NIC.
- The second vSwitch is used for the Exanodes network traffic; it must be connected to a physical NIC to allow access to the other Exanodes VMs on the other hosts.

First configure the dedicated iSCSI loopback virtual switch. This means creating a VM Network (for the Exanodes iSCSI target), plus a VMkernel port and a Service Console port so that the ESX iSCSI software initiator can contact the Exanodes target:

1. Click on the ESX host Configuration tab
2. Hardware > Networking
3. Click on the "Add Networking" link at the top right of the window
4. Select "Virtual Machine"
5. Select "Create a virtual switch" and uncheck the physical NIC if necessary
6. Type "VM Network for Exanodes iSCSI" as the new virtual network label
7. Next > Finish
8. Click on the "Properties" of the newly created vSwitch (at the top right of this vSwitch)
9. Click "Add"
10. Select "VMkernel"
11. Change the label to "VMkernel for Exanodes iSCSI"
12. Set the IP address and subnet mask to "10.0.0.1" and "255.255.255.0"
13. You may have to set the default gateway if this was not previously done
14. Click "Next", then "Finish"
15. Click "Add"
16. Select "Service Console"
17. Change the label to "Service Console iSCSI"
18. Set the IP address and subnet mask to "10.0.0.2" and "255.255.255.0"
19. Click "Next", then "Finish"

Then configure the Virtual Network dedicated to the Exanodes VM:

1. Click on the ESX host Configuration tab
2. Hardware > Networking
3. Click on the "Add Networking" link at the top right of the window
4. Select "Virtual Machine"
5. Select "Create a virtual switch" and choose the NIC dedicated to the Exanodes VM
6. Type "VM Network for Exanodes" as the new virtual network label
7. Next > Finish

Reminder: this new "VM Network for Exanodes" port group MUST NOT be used by other VM appliances, nor may its vSwitch be configured with a VMkernel port. For better performance, SEANODES recommends that other appliances use a dedicated virtual machine network bound to a third NIC. A scripted equivalent of the above network setup is sketched below. The next screenshot shows a typical network configuration with only 2 physical NICs.
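For reference, a roughly equivalent setup can be scripted from the ESX service console. This is a sketch only: the vSwitch numbers and the uplink NIC name (vmnic1) are assumptions, and the exact esxcfg-vmknic argument order may differ between ESX releases; the VI Client procedure above remains the reference.

# vSwitch for the local iSCSI loopback (no physical uplink)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "VM Network for Exanodes iSCSI" vSwitch1
esxcfg-vswitch -A "VMkernel for Exanodes iSCSI" vSwitch1
esxcfg-vmknic -a -i 10.0.0.1 -n 255.255.255.0 "VMkernel for Exanodes iSCSI"
esxcfg-vswitch -A "Service Console iSCSI" vSwitch1
esxcfg-vswif -a vswif1 -p "Service Console iSCSI" -i 10.0.0.2 -n 255.255.255.0

# vSwitch for the Exanodes network traffic, bound to a dedicated NIC (vmnic1 is an example)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic1 vSwitch2
esxcfg-vswitch -A "VM Network for Exanodes" vSwitch2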

Figure 4.1. A typical ESX network configuration


4.2. OVF package installation procedure


Unzip the Exanodes VM package using your favorite unzip tool (the document you are reading is part of this zip file). Then, in the Virtual Infrastructure Client:

1. File > Virtual Appliance > Import
2. Choose "Import from File" and select the OVF file: ExanodesVM.ovf
3. Next > Next
4. Read and accept the License Agreement, then Next
5. Provide a name, for example "ExanodesVM-<ESX_hostname>"
6. Choose a local datastore to store the Exanodes VM (Exanodes root disk)
7. Choose "VM Network for Exanodes" in the first "Network label" combo box
8. Choose "VM Network for Exanodes iSCSI" in the second "Network label" combo box
9. Next > Finish

4.3. ESX package installation procedure


Upload the directory called ExanodesVM-VERSION, which contains the .vmx and .vmdk files, to the appropriate VMFS datastore on your ESX server. It contains the minimal requirements for the Exanodes VM to work. Then, from the Virtual Infrastructure Client:

1. Select the ESX Configuration tab, then the Storage item
2. Right-click on the local datastore where you wish to put this directory and select "Browse Datastore"
3. Click on the "Upload" icon, choose "Upload Folder" and select the ExanodesVM-VERSION folder of the Exanodes ESX distribution
4. Browse the newly uploaded DISK folder
5. Right-click on ExanodesVM.vmx and select "Add to Inventory"
6. In the VMware installation steps, provide a name, for example "ExanodesVM-<ESX_hostname>"; if no VM Network called "VM Network for Exanodes" exists, specify the one to use

Check that the Virtual Network Adapter uses the Virtual Network dedicated to Exanodes.
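If you prefer the service console over the VI Client for step 5, the VM can also be registered with vmware-cmd; the datastore path below is a placeholder:

# Register the uploaded VM with the ESX host (path is an example)
vmware-cmd -s register /vmfs/volumes/<datastore>/ExanodesVM-VERSION/ExanodesVM.vmx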

4.4. Adding disks to the Exanodes disk pool


Now, you must add virtual disks to the pool of storage you wish Exanodes to virtualize:

1. Right-click on ExanodesVM
2. Edit Settings
3. Add
4. Click on "Hard Disk"

5. Choose "Create a new Virtual Disk"
6. Choose "Specify a Datastore" and choose the datastore
7. Set the appropriate size, and click on Next
8. Within the Advanced Options, set the disk mode to: Independent - Persistent
9. Next, verify the new device summary and click on Finish to add the disk

You must repeat these last operations for each virtual disk you wish to virtualize using Exanodes.

Note
It is not recommended to virtualize virtual disks stored on VMFS volumes made of multiple extents located on different disks.

The following screenshot shows an Exanodes VM, vmware41v1.vmdomain, running on the ESX host sam41.toulouse.com.

Figure 4.2. A typical Exanodes VM configuration

4.5. Verifying the Exanodes VM setup


1. Right-click on ExanodesVM > Edit Settings (Properties)

2. Hardware tab
3. Memory: 256 MB
4. Set all virtual disks (the ExanodesVM system and data disks) to independent/persistent
5. Resources tab > CPU, right panel: Shares "High", Reservation 500 MHz
6. Memory: Reservation 256 MB
7. Disks: Shares "High" for all virtual disks
8. Hardware tab > Network Adapter 1: Network label "VM Network for Exanodes" (or the dedicated one you created)
9. Network Adapter 2: Network label "VM Network for Exanodes iSCSI" (or the dedicated one you created)

You can now start the Exanodes VM: right-click on ExanodesVM > Power On. Once the VM is running, you can log in using the "root" or "exanodes" account; the default password for both is "exanodes". You must repeat all these operations (the Exanodes VM installation, the disks, the Exanodes VM setup) on every ESX host from which you wish to access the Exanodes storage.
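As an alternative to the VI Client, the VM can also be powered on from the ESX service console; the path below is a placeholder:

# Power on the Exanodes VM from the service console (path is an example)
vmware-cmd /vmfs/volumes/<datastore>/ExanodesVM-VERSION/ExanodesVM.vmx start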

5. Exanodes VM storage configuration


At this point, you should be able to log into any of your Exanodes VM hosts using either the console or SSH. SSH access is possible with the "exanodes" user, the default password being "exanodes". Once logged in, you can run the Exanodes CLI commands to configure the cluster and storage. Please refer to the Exanodes User's Guide for more detailed information on configuring clusters, groups and volumes. The following command examples achieve this:

1. First create the cluster. node/1-3/ represents the 3 nodes of the cluster, whose hostnames are node1, node2 and node3.
[exanodes@node1 ~]$ exa_clcreate mycluster --node node/1-3/ --really-create
Creating the cluster mycluster:
** ----------------------------------------------------------------------- **
**   This command erases all the disks listed in your configuration.       **
**   Please check carefully your disks are free to be used by Exanodes     **
** ----------------------------------------------------------------------- **
Hit Ctrl-C if unsure. Remaining seconds : 0
Checking nodes are not already used                                  SUCCESS
Sending cluster configuration                                        SUCCESS

2. Then start it:

[exanodes@node1 ~]$ exa_clstart mycluster
Initializing the cluster (Please wait):
Initializing the cluster                                             SUCCESS

3. Create a disk group with all available disks and start it:

[exanodes@node1 ~]$ exa_dgcreate mycluster:mygroup --layout rainX --all -s
Creating disk group 'mycluster:mygroup':
Disk group create:                                                   SUCCESS
Starting disk group 'mycluster:mygroup'
Disk group start:                                                    SUCCESS

4. Once you have finished with Exanodes groups, you can configure Exanodes volumes. Each Exanodes volume is mapped to an iSCSI LUN device by an iSCSI target (which is part of Exanodes). It is therefore important to plan the number of volumes and their sizes in advance (a scripted example follows step 5).

[exanodes@node1 ~]$ exa_vlcreate mycluster:mygroup:vol1 --size 100G
Creating a 100 G volume 'mygroup:vol1' for cluster 'mycluster'
Creating volume 'mygroup:vol1':                                      SUCCESS

5. Starting a volume on a node presents the volume to the ESX host running that node: starting a volume exports it as an iSCSI LUN. Start each volume on the nodes from which the ESX hosts need access to it. Here, we start the volume on all nodes.

[exanodes@node1 ~]$ exa_vlstart mycluster:mygroup:vol1 --all
Starting volume 'vol1' in the group 'mygroup' for cluster 'mycluster'
Volume start:                                                        SUCCESS
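If you planned several volumes, the create/start pair can be scripted. This sketch uses only the exa_vlcreate and exa_vlstart commands shown above; the volume names and sizes are hypothetical:

# Create and start three volumes in one go (names and sizes are examples)
for vol in vol1 vol2 vol3; do
    exa_vlcreate mycluster:mygroup:$vol --size 100G
    exa_vlstart  mycluster:mygroup:$vol --all
done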

Next, we need to configure access to the Exanodes iSCSI target for each ESX host that requires it.


6. VMware ESX configuration


6.1. Setting up the VMware ESX software iSCSI initiator

Perform the following steps to set up the software iSCSI initiator on the ESX host (a service-console equivalent is sketched after the list):

1. Click on the ESX host and select the Configuration tab
2. Select the Storage Adapters item
3. From the iSCSI software adapters, select "Properties" in the lower panel
4. On the "General" tab, select Configure, check "Enable" and click "OK"
5. On the "Dynamic Discovery" tab, select "Add"
6. Enter the IP address of the iSCSI target. This is "10.0.0.3" by default in the Exanodes VM, unless you manually changed the network configuration. Click "OK"
7. The initiator probes the target; click on "Close"
8. Right-click on the "iSCSI Software Adapter" icon and select the "Rescan" action
9. The initiator detects the LUNs started inside the local Exanodes VM
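A sketch of the equivalent service-console sequence follows. The adapter name (vmhba32) is an assumption; check the actual name under Storage Adapters, and note that the exact vmkiscsi-tool syntax may vary between ESX releases:

# Enable the software iSCSI initiator
esxcfg-swiscsi -e
# Add the Exanodes target as a dynamic discovery (send targets) address
vmkiscsi-tool -D -a 10.0.0.3 vmhba32
# Rescan the adapter to detect the Exanodes LUNs
esxcfg-rescan vmhba32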

Figure 6.1. The list of iSCSI targets

Note
Although all ESX hosts are configured with the same target IP address, each contacts its local Exanodes iSCSI target through the local loopback. The following screenshot shows that three LUNs, of 10 GB, 5 GB and 3 GB, have been detected after the rescan:


Figure 6.2. The list of detected LUNs

You must perform these operations on all hosts that need to access the Exanodes storage.

6.2. Creating a VMFS datastore on the Exanodes storage


You can now use the Exanodes storage like any other storage, which means you can format it and import it into VMFS. To do this, follow these steps, which are not specific to Exanodes (a command-line sketch follows the list):

1. Click on the ESX host and select the Configuration tab
2. Select "Storage"
3. Click on "Add Storage"
4. Choose "Disk/LUN"
5. Choose the LUN you wish to use on this ESX host, then click "Next" twice
6. Provide a name for the datastore
7. Click Next twice and then Finish
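For reference, the same result can be obtained from the service console with vmkfstools. The device path below (adapter:target:LUN:partition) and the datastore name are examples; replace them with the path of the detected Exanodes LUN and your own name:

# Format the first partition of the detected LUN as a VMFS3 datastore
vmkfstools -C vmfs3 -S ExanodesDatastore vmhba32:0:0:1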


7. Guest VM configuration

The storage we configured earlier is ready to use, but because of the way it is accessed (through Exanodes, which is itself a VM), a few parameters must be set.

7.1. Configuring the virtual machines restart


As the iSCSI storage is presented by Exanodes, which runs inside a VM, only this VM will be able to boot at host start-up. There is no auto-detection of new LUNs on an iSCSI connection, so after a host reboot you must rescan the target exposed by the Exanodes VM, and only after the Exanodes VM has completed its initialization. After this, you can restart the virtual machines residing on the Exanodes storage.
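A minimal sketch of this sequence from the ESX service console is shown below, assuming the software iSCSI adapter is vmhba32 (an assumption; check yours) and that the Exanodes target answers on 10.0.0.3. Note that ping answering is only a rough readiness proxy:

# Crude wait: loop until the Exanodes VM's iSCSI address answers ping
until ping -c 1 10.0.0.3 > /dev/null 2>&1; do
    sleep 10    # Exanodes VM still initializing
done
# Rescan the software iSCSI adapter to rediscover the Exanodes LUNs
esxcfg-rescan vmhba32
# The VMs stored on the Exanodes datastore can now be powered on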

7.2. Stopping the virtual machines


Any VM using the Exanodes storage must be configured to stop before the Exanodes VM:

1. Click on the ESX host and select the Configuration tab
2. In "Software", select Virtual Machine Startup/Shutdown
3. Check "Allow Virtual Machines to start and stop automatically with the system"
4. Set the Exanodes VM to start first and stop last, with a delay of a few minutes


Glossary
Cluster: A group of interconnected servers (nodes).

Node: A system belonging to a cluster.

Disk: A storage system able to read or write data on a storage device: hard drive, partition, volume, RAM disk, network drive... It can be accessed as a block device from the operating system.

Disk group: A storage subsystem that creates a storage array. A disk group agglomerates the storage space from all of its disks.

Layout: The layout defines the way Exanodes will arrange (locate) the data on a disk group.

RainX: A disk group data layout where redundant information is split between the disks, allowing for the loss of one disk while providing full access to data. Moreover, it has optional support for spare disks. Each spare disk allows the RainX layout to handle an additional disk loss, provided there is sufficient time in between to allow Exanodes to rebuild the data. The RainX layout requires at least 3 nodes, each sharing one disk or more.

Simple striping: A disk group data layout where data is striped across the disks, resulting in higher data throughput. Since no redundant information is stored, failure of a disk in the group can result in data loss.

Volume: A virtual storage area. The storage space of a disk group is used to create volumes, up to the total physical capacity of the group.
