Exanodes Virtual Machine Edition 1.1 for VMware ESX: Quick Start Guide
Abstract
This guide contains information on how to quickly set up the Exanodes application. It is a step-by-step, task-oriented guide for rapid deployment of the Exanodes solution.
Table of Contents
1. Introduction
2. Presentation
3. Exanodes VM requirements
   3.1. Memory
   3.2. Storage
   3.3. CPU
   3.4. Network
   3.5. Time
4. Installation
   4.1. Configuring the Exanodes VM network
   4.2. OVF package installation procedure
   4.3. ESX package installation procedure
   4.4. Adding disks to the Exanodes disk pool
   4.5. Verifying the Exanodes VM setup
5. Exanodes VM storage configuration
6. VMware ESX configuration
   6.1. Setting up the VMware ESX software iSCSI initiator
   6.2. Creating a VMFS datastore on the Exanodes storage
7. Guest VM configuration
   7.1. Configuring the virtual machines restart
   7.2. Stopping the virtual machines
Glossary
1. Introduction
This document describes the steps required to install the Exanodes VM. It does not explain how to manage Exanodes; please refer to the Exanodes User's Guide for more information about Exanodes management. Nor does it cover detailed instructions for setting up a virtual machine. This document provides guidance on integrating the Exanodes solution into a Virtual Infrastructure environment. For more information about VMware configuration, please check the documentation provided with your VMware software package.

Throughout this document, we use VMware vCenter Server (previously Virtual Center (VC)) and Virtual Infrastructure Client (VIC) 2.5.0 to perform administrative tasks. Please refer to the appropriate VMware documentation on how to set up VMware vCenter Server.

Security Note: Seanodes states that the Exanodes VM does not contain any virus, spyware or malware known to this date.

Exanodes virtual machine edition is designed to be compatible with almost every type of commodity x86 server supported by VMware ESX 3.5 or ESXi.
Table 1.1. Compatibility list

    Server          i386, x86_64, AMD, Intel
    CPU             500 MHz minimum
    Memory          256 MB minimum
    Network         2 Gigabit Ethernet network cards
    Internal disks  Full disk, disk partition; internal RAID, DAS (or SAN storage with exclusive server access); SSD
    Hypervisor      Supported ESX versions: ESX 3.5, ESXi. Other: please contact SEANODES
2. Presentation
This chapter presents the way Exanodes should be implemented in a VMware virtualized environment. The following illustration gives the "big picture" of what happens on one ESX host running an Exanodes VM.
Local physical disks (1) are used for two purposes: storing the local Exanodes VM on the ESX host (one and only one Exanodes VM per ESX host) and offering the storage pools that the Exanodes VM will use. Each Exanodes VM must be able to reach the other Exanodes VMs through the network to work properly and to bring the level of functionality and fault tolerance Exanodes offers to the storage.

VMFS uses physical disks (2), also called VMHBA devices, where data is actually stored and upon which datastores (3) are created. Virtual disks are then created on these datastores for Exanodes use. Exanodes (4) aggregates these virtual disks (3) from many hosts (that is, from the VMHBA physical devices (2)). It then virtualizes them to create volumes that are exported through an iSCSI target (5). The storage is then imported back into ESX through the iSCSI software initiator (6) to participate in the VMFS storage pool as LUNs. Once formatted as a VMFS datastore, any VM virtual disk (7) can be created on this storage and used by the application VMs (8).
The following figure illustrates this mechanism at the cluster level:
Each ESX host runs one and only one Exanodes VM. Thanks to the Shared Internal Storage, each ESX host has access to the same datastores exported by the Exanodes iSCSI target. So any VM (like VM3) can be started on any ESX host and migrated to any ESX host.
Note
It is mandatory that the Exanodes storage used by other VMs goes through VMFS to benefit from the VMFS functionalities like VMotion and snapshots. It is not recommended to configure a VM with direct access to the Exanodes storage via an iSCSI connection.
3. Exanodes VM requirements
3.1. Memory
Each Exanodes VM must have a minimum of 256 MB reserved memory.
3.2. Storage
Each Exanodes VM should be configured with one or more independent - persistent virtual disks for Exanodes VM use. It is not mandatory to assign disks to each Exanodes VM, but the best practice is to provide a homogeneous disk setup across the Exanodes VMs, i.e. to assign each Exanodes VM virtual disks of the same size and, if possible, located on the same physical disk type.
3.3. CPU
Each Exanodes VM needs a minimum of 500 MHz of reserved CPU; reserve more for better performance.
3.4. Network
The network requirements for Exanodes VM iSCSI are the same as for a traditional iSCSI storage system. Each Exanodes VM must be able to get its IP information over DHCP and keep the same IP, hostname and DNS information after reboot. Adapt the network settings accordingly. A good way of doing this is to set a constant MAC address for each Exanodes VM and configure the DHCP server with a fixed entry for each of these MAC addresses. At the hardware level, Exanodes requires a non-blocking switch that supports multicast.

Note: The Exanodes VM must use a dedicated Virtual Machine Network bound to a dedicated physical network adapter. Sharing this Virtual Machine Network with other VMs or the VMkernel is NOT SUPPORTED.
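One way to implement the fixed DHCP mapping described above is a per-VM host entry on the DHCP server. This is a sketch assuming an ISC dhcpd server; the MAC address, hostname and IP address below are placeholders to adapt to your site:

    # /etc/dhcpd.conf (excerpt) -- one entry per Exanodes VM
    host exanodesvm1 {
        hardware ethernet 00:50:56:00:00:01;   # the constant MAC set on the Exanodes VM
        fixed-address 192.168.1.101;           # IP the VM must keep across reboots
        option host-name "exanodesvm1";
    }

With one such entry per Exanodes VM, every VM gets the same IP, hostname and DNS information after each reboot, as required.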
3.5. Time
It is recommended to have the ESX time set up correctly, as the VMware Tools in the Exanodes VM use it to synchronize time. Setting up ESX with NTP is good practice.
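As an example, on an ESX 3.5 service console NTP is typically configured through /etc/ntp.conf and the ntpd service; the server name below is a placeholder:

    # /etc/ntp.conf (excerpt)
    server ntp.example.com

    [root@esx ~]# service ntpd restart
    [root@esx ~]# ntpq -p

The ntpq -p command lists the configured peers so you can check that the host is actually synchronizing.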
4.Installation
It is assumed that all disks that will either store the Exanodes VM or the storage pool are already configured on the ESX host as part of a VMFS datastore. The Exanodes VM installation procedure depends on whether you are using the OVF version or the ESX version.
10. Select "VMkernel"
11. Change the label to "VMkernel for Exanodes iSCSI"
12. Set the IP address and subnet mask to "10.0.0.1" and "255.255.255.0"
13. You may have to set the default gateway if this was not previously done
14. Click "Next", then "Finish"
15. Click "Add"
16. Select "Service Console"
17. Change the label to "Service Console iSCSI"
18. Set the IP address and subnet mask to "10.0.0.2" and "255.255.255.0"
19. Click "Next", then "Finish"

Then configure the Virtual Network dedicated to the Exanodes VM:

1. Click on the ESX host
2. Go to the Configuration tab, then Hardware, then Networking
3. Click on the "Add networking" link at the top right of the window
4. Select "Virtual Machine"
5. Select "Create a virtual switch" and choose the NIC dedicated to the Exanodes VM
6. Type "VM Network for Exanodes" as the new virtual network label
7. Click "Next", then "Finish"
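For reference, the same network configuration can be sketched from the ESX 3.5 service console with the esxcfg tools. The vSwitch and vmnic names below are examples only; adapt them to your host:

    [root@esx ~]# esxcfg-vswitch -a vSwitch1
    [root@esx ~]# esxcfg-vswitch -L vmnic1 vSwitch1
    [root@esx ~]# esxcfg-vswitch -A "VM Network for Exanodes" vSwitch1
    [root@esx ~]# esxcfg-vswitch -A "VMkernel for Exanodes iSCSI" vSwitch1
    [root@esx ~]# esxcfg-vmknic -a -i 10.0.0.1 -n 255.255.255.0 "VMkernel for Exanodes iSCSI"

This is a sketch, not a substitute for the VIC procedure above; check the result with esxcfg-vswitch -l before going further.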
Reminder: this new "VM Network for Exanodes" port group MUST NOT be used by other VM appliances, nor should its vSwitch be configured with a VMkernel port. For better performance, SEANODES recommends that other appliances use a dedicated virtual machine network bound to a third NIC. The next screenshot shows a typical network configuration with only 2 physical NICs.
Check that the Virtual Network Adapter uses the Virtual Network dedicated to Exanodes.
5. Choose "Create a new Virtual Disk"
6. Choose "Specify a Datastore" and choose the datastore
7. Set the appropriate size, and click on Next
8. Within the Advanced Options, set the disk mode to Independent - Persistent
9. Verify the new device summary and click on Finish to add the disk
You must repeat these last operations for each virtual disk you wish to virtualize using Exanodes.
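If you prefer the command line, a virtual disk can also be created with vmkfstools from the ESX service console and then attached to the Exanodes VM through the VIC. The datastore path, file name and size below are examples only:

    [root@esx ~]# vmkfstools -c 50G /vmfs/volumes/datastore1/ExanodesVM/exanodes-data1.vmdk

Remember to set the disk mode of the attached disk to Independent - Persistent, as in the procedure above.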
Note
It is not recommended to virtualize virtual disks stored on VMFS volumes made of multiple extents located on different disks.

The following screenshot shows an Exanodes VM vmware41v1.vmdomain, running on the ESX host sam41.toulouse.com.
2. Hardware Tab: Memory: 256 MB
3. Set all virtual disks (the Exanodes VM system and data disks) to independent/persistent
4. Resources Tab: CPU: in the right panel, Shares "High", Reservation 500 MHz
5. Memory: Reservation 256 MB
6. Disks: Shares "High" for all virtual disks
7. Go back to the Hardware Tab
8. Network Adapter 1: Network label "VM Network for Exanodes" (or the dedicated one you created)
9. Network Adapter 2: Network label "VM Network for Exanodes iSCSI" (or the dedicated one you created)
You can now start the Exanodes VM: right-click on ExanodesVM, then Power On.

Once the VM is running, you can log on using the "root" or "exanodes" accounts; the default password for both is "exanodes".

You must repeat all these operations (the Exanodes VM installation, the disks, the Exanodes VM setup) on all the ESX hosts from which you wish to access the Exanodes storage.
2. SUCCESS

3. Create a disk group with all available disks and start it:

   [exanodes@node1 ~]$ exa_dgcreate mycluster:mygroup --layout rainX --all -s
   Creating disk group 'mycluster:mygroup':
   Disk group create: SUCCESS
   Starting disk group 'mycluster:mygroup'
   Disk group start: SUCCESS
4. Once you've finished with Exanodes groups, you can configure Exanodes volumes. Each Exanodes volume is mapped to an iSCSI LUN device by an iSCSI target (which is part of Exanodes). It is therefore important to plan the number of volumes and their sizes in advance.

   [exanodes@node1 ~]$ exa_vlcreate mycluster:mygroup:vol1 --size 100G
   Creating a 100 G volume 'mygroup:vol1' for cluster 'mycluster'
   Creating volume 'mygroup:vol1': SUCCESS
5. Starting a volume on a node presents the volume to the ESX host running that node. Start each volume on the nodes where access from the ESX hosts to the volume is required. Starting a volume exports it as an iSCSI LUN. Here, we start the volume on all nodes.
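Assuming the Exanodes CLI follows the same naming pattern as the exa_dgcreate and exa_vlcreate commands above, starting the volume on all nodes would look like this (command name and options are an assumption; check the Exanodes User's Guide for the exact syntax):

    [exanodes@node1 ~]$ exa_vlstart mycluster:mygroup:vol1 --all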
Next, we need to configure access to the Exanodes iSCSI target for each ESX host that requires it.
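On an ESX 3.5 classic host, the software iSCSI initiator can also be enabled and pointed at the Exanodes target from the service console. The vmhba name below is an example; it varies from host to host:

    [root@esx ~]# esxcfg-swiscsi -e
    [root@esx ~]# vmkiscsi-tool -D -a 10.0.0.1 vmhba32
    [root@esx ~]# esxcfg-rescan vmhba32

The -D -a options add the target IP as a send-targets discovery address; the rescan then makes the exported LUNs visible to the host.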
Note
Although all ESX hosts are configured with the same target IP address, each will contact its local Exanodes iSCSI target through the local loopback.

The following screenshot shows that three LUNs of 10 GB, 5 GB and 3 GB have been detected after a rescan:
You must perform these operations on all hosts that need to access the Exanodes storage.
7. Guest VM configuration
The storage we configured earlier is ready to use, but because it is served by the Exanodes VM itself (a VM running on the very hosts that consume the storage), a few parameters must be set.
Glossary
Cluster
    A group of interconnected servers (nodes).

Node
    A system belonging to a cluster.

Disk
    A disk is a storage system able to read or write data to a storage device: hard drive, partition, volume, ram disk, network drive... It can be accessed as a block device from the operating system.

Disk group
    A disk group is a storage subsystem that creates a storage array. A disk group agglomerates the storage space from all of its disks.

Layout
    The layout defines the way Exanodes will arrange (locate) the data on a disk group.

RainX
    RainX is a disk group data layout where redundant information is split between the disks, allowing for the loss of one disk while providing full access to data. Moreover, it has optional support for spare disks. Each spare disk allows the RainX layout to handle an additional disk loss, provided there is sufficient time in between for Exanodes to rebuild the data. The RainX layout requires at least 3 nodes sharing one disk (or more) each.

Simple striping
    Simple striping is a disk group data layout where data is striped across the disks, resulting in higher data throughput. Since no redundant information is stored, failure of a disk in the group can result in data loss.

Volume
    A volume is a virtual storage area. The storage space of a disk group is used to create volumes up to the total physical capacity of the group.