Revisions

Date          Description
October 2014  Initial release
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND
TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF
ANY KIND.
© 2014 Dell Inc. Confidential. All rights reserved. Reproduction of this material in any manner whatsoever without the
express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Dell, the Dell logo, Dell Boomi, Dell Precision, OptiPlex, Latitude, PowerEdge, PowerVault,
PowerConnect, OpenManage, EqualLogic, Compellent, KACE, FlexAddress, Force10 and Vostro are
trademarks of Dell Inc. Other Dell trademarks may be used in this document. Cisco Nexus, Cisco MDS, Cisco NX-OS, and Cisco Catalyst are registered trademarks of Cisco Systems, Inc. EMC VNX, and EMC Unisphere are
registered trademarks of EMC Corporation. Intel, Pentium, Xeon, Core and Celeron are registered trademarks of
Intel Corporation in the U.S. and other countries. AMD is a registered trademark and AMD Opteron, AMD
Phenom and AMD Sempron are trademarks of Advanced Micro Devices, Inc. Microsoft, Windows, Windows
Server, Internet Explorer, MS-DOS, Windows Vista and Active Directory are either trademarks or registered
trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat and Red Hat Enterprise
Linux are registered trademarks of Red Hat, Inc. in the United States and/or other countries. Novell and SUSE are
registered trademarks of Novell Inc. in the United States and other countries. Oracle is a registered trademark of
Oracle Corporation and/or its affiliates. Citrix, Xen, XenServer and XenMotion are either registered trademarks or
trademarks of Citrix Systems, Inc. in the United States and/or other countries. VMware, Virtual SMP, vMotion,
vCenter and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States or other
countries. IBM is a registered trademark of International Business Machines Corporation. Broadcom and
NetXtreme are registered trademarks of Broadcom Corporation. Qlogic is a registered trademark of QLogic
Corporation.
Table of contents
Revisions ............................ 2
Acknowledgements ..................... 4
Feedback ............................. 4
Executive summary .................... 4
1 Introduction ....................... 5
1.1 Audience ......................... 5
4.1 Prerequisites .................... 11
Conclusion ........................... 23
Acknowledgements
This best practice white paper was produced by the following members of the Dell Storage team:
Engineering: Clay Cooper
Editing: Camille Daily
Feedback
We encourage readers of this publication to provide feedback on the quality and usefulness of this
information by sending an email to SISfeedback@Dell.com.
Executive summary
This paper provides an overview of and best practices for using an EqualLogic PS Series storage group as
back-end storage for the OpenStack Cinder block storage service.
1 Introduction
OpenStack is a suite of services running on Linux server nodes that provide Infrastructure as a Service
(IaaS) by provisioning virtualized compute instances and networks from a pool of heterogeneous
enterprise hardware. OpenStack also makes block and object storage available to the compute instances
through the Cinder block storage service and the Swift object store service.
Dell EqualLogic PS Series arrays provide a storage solution that delivers the benefits of consolidated
networked storage in a self-managing iSCSI storage area network (SAN) that is affordable and easy to use,
regardless of scale. Built on an advanced, peer storage architecture, EqualLogic storage simplifies the
deployment and administration of consolidated storage environments, enabling:
- Perpetual self-optimization with automated load balancing across disks, RAID sets, connections, cache and controllers.
- Efficient enterprise scalability for both performance and capacity without forklift upgrades.
- Powerful, intelligent and simplified management.
This technical paper serves as a guide to the effective utilization of EqualLogic PS Series storage in an
OpenStack private cloud.
1.1 Audience
This technical white paper is for storage administrators, SAN system designers, storage consultants, or
anyone tasked with building an OpenStack private cloud that includes EqualLogic PS Series storage. It is
assumed that all readers have experience designing, deploying and administering an OpenStack private
cloud and shared storage solutions. Familiarity is also assumed with all current Ethernet standards as defined by the IEEE (Institute of Electrical and Electronics Engineers), as well as with the TCP/IP (Transmission Control Protocol/Internet Protocol) and iSCSI standards as defined by the IETF (Internet Engineering Task Force).
Figure 1  Logical diagram of the Cinder block storage service using the default LVM driver and local storage
Figure 2  Logical diagram of the Cinder block storage service using the EqualLogic driver and a PS Series storage group
3.2 Driver functions
As mentioned, the EqualLogic driver gives Cinder the ability to initiate volume administration tasks at the
EqualLogic storage group using the management interface.
The following functions are direct calls to native EqualLogic functions:
- Volume create and delete
- Volume attach to and detach from a compute instance
  - Access rules are assigned to the volume that grant access to the IQN of the Nova node iSCSI initiator, or to a set of credentials when using CHAP
- Snapshot create and delete
- Create volume from snapshot
- Get volume information
- Clone volume
- Extend volume
The following functions are generic volume actions performed by the Cinder node that do not leverage
native EqualLogic functionality. They require iSCSI connectivity from the Cinder node to the EqualLogic
storage:
- Create volume from Glance image
- Create Glance image from volume
- Volume backup to an object store
Figure 3  The OpenStack private cloud test environment with EqualLogic PS Series storage
4.1 Prerequisites
The instructions below assume the following:
- A fully deployed OpenStack private cloud using Red Hat Enterprise Linux 6.5 and OpenStack Platform 5 (based on the OpenStack Icehouse release), including running Nova and Cinder services.
- RHN access configured on each node, including the following RHN channels:
  - rhel-x86_64-server-6-ost-5
  - rhel-x86_64-server-6
  - rhel-x86_64-server-6-ost-foreman
- An initialized and configured EqualLogic PS Series storage group.
- Proper network connectivity among the OpenStack nodes and the EqualLogic storage group. See Section 4.2 for more detail on network connectivity requirements.
- The Linux iSCSI initiator installed on the Nova and Cinder nodes.
  yum install iscsi-initiator-utils
- An updated python-paramiko RPM on the Cinder node, to avoid an SSH failure when connecting to the EqualLogic storage.
  yum update python-paramiko
  For more information, see the following Cinder bug: https://bugs.launchpad.net/cinder/+bug/1150720
- A properly configured SAN interface on the Nova and Cinder nodes.
For SAN interface configuration best practices in Red Hat Enterprise Linux, see RHEL 6.3 NIC optimization
and best practices with EqualLogic SANs:
http://en.community.dell.com/techcenter/extras/m/white_papers/20438152.aspx
4.2
Figure 4 illustrates the network connectivity as implemented in the simplified test environment. It consists of the following networks:
- A management network for inter-node communication and for the hardware management interfaces. This network also serves as the public network.
- A SAN for the iSCSI traffic among the OpenStack nodes and the EqualLogic storage group.
- A private network for connectivity among the compute instances.
- An out-of-band network for access to server, storage and switch management interfaces and remote access controllers.
Figure 4  Network connectivity requirements for the OpenStack nodes and the EqualLogic storage group
4.3 Configuring Cinder
To configure the Cinder block storage service, perform the following steps at the Cinder node.
1. Edit /etc/cinder/cinder.conf to configure the EqualLogic volume driver.
For more information on configuring the Cinder EqualLogic driver, see the official RHEL OpenStack Platform documentation: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Configuration_Reference_Guide/section_volume-drivers.html#dell-equallogic-driver
2. Restart the Cinder volume service.
/etc/init.d/openstack-cinder-volume restart
3. Look for driver initialization success in the Cinder volume log file.
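As a sketch of what the cinder.conf edit in step 1 might look like for a single EqualLogic back end, using the same driver options as the multi-back-end examples that follow (the IP address, credentials, group name and pool name are placeholders):

```ini
# cinder.conf sketch: a single EqualLogic back end (all values are placeholders)
[DEFAULT]
enabled_backends=backend1
san_ssh_port=22
ssh_conn_timeout=30

[backend1]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX1
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
```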
[backend1]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX1
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
[backend2]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend2
san_ip=IP_EQLX2
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
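For both of these back ends to be active, their section names must also be listed in enabled_backends in the [DEFAULT] section of cinder.conf, following the same pattern shown in the later example:

```ini
[DEFAULT]
enabled_backends=backend1,backend2
san_ssh_port=22
ssh_conn_timeout=30
```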
Once the back-ends are configured in cinder.conf and activated by restarting the Cinder volume service,
they can be associated with a volume type and then chosen during volume provisioning.
To associate a back-end to a volume type:
cinder type-create EQL-group-1
cinder type-key EQL-group-1 set volume_backend_name=backend1
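A volume can then be provisioned on the associated back end by passing the type at creation time, either in Horizon or from the Cinder CLI; the display name and size (in GB) below are arbitrary examples:

```
cinder create --volume-type EQL-group-1 --display-name vol1 10
```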
Volume types can be useful for differentiating between EqualLogic storage groups or pools with different performance profiles, for example, array members with SATA HDDs versus those with SSDs.
It is also possible to add more than one EqualLogic group or pool to the same back-end storage type. This
might be done when adding a second EqualLogic group or pool with an identical performance profile.
Notice that in the cinder.conf below, there are still two different EqualLogic groups but now with a single
back-end name.
[DEFAULT]
enabled_backends=backend1,backend2
san_ssh_port=22
ssh_conn_timeout=30
[backend1]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX1
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
[backend2]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX2
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
When more than one back end is configured with the same back-end name, the Cinder scheduler determines which specific back end to provision a volume on by filtering based on availability zones, capacity and capability, and then choosing among the remaining back ends based on available capacity.
For more information on configuring multiple back-end storage types see: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_manage-volumes.html#multi_backend
volume_backend_name=LVM_iSCSI
[lvmdriver-2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
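For the scheduler to consider both of these LVM back ends, their section names would likewise be listed in enabled_backends, following the convention of the EqualLogic examples above:

```ini
[DEFAULT]
enabled_backends=lvmdriver-1,lvmdriver-2
```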
EqualLogic MPIO
Multi-path I/O to EqualLogic volumes from OpenStack nodes is currently not supported and is not
enabled with the default Nova configuration in RHEL OSP 5. However, it is technically possible to enable
iSCSI multipath I/O on a particular compute instance using the following steps. It is recognized that these
steps are not friendly to the OpenStack workflow. They are provided as a guide for OpenStack developers
to fully enable EqualLogic MPIO in the future.
1. Enable iSCSI multipath in the Nova configuration.
2. Restart the Nova API and compute services at the nodes running each service.
/etc/init.d/openstack-nova-api restart
/etc/init.d/openstack-nova-compute restart
Linux multipathd will not be used to enable multi-path I/O to EqualLogic volumes. However, the current Nova code requires multipathd, specifically the multipath utility, to be installed when iSCSI multipath is set to true.
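As an illustration of the Nova setting involved, on an Icehouse-based release such as RHEL OSP 5 the iSCSI multipath flag is expected to live in the [libvirt] section of /etc/nova/nova.conf; the option name and section below are an assumption and should be verified against the deployed Nova version:

```ini
# /etc/nova/nova.conf (sketch; option name assumed for Icehouse-based Nova)
[libvirt]
iscsi_use_multipath=True
```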
3. Install, enable, and configure the native multipath application at the Nova node running the
compute service.
yum install device-mapper-multipath
mpathconf --enable
mpathconf --find_multipaths y
Host Integration Tools (HIT) for Linux provides the EqualLogic recommended multi-path I/O (MPIO) functionality by creating a multipath device for each volume and providing a kernel driver, which together intelligently direct I/O to the volume slices on the correct EqualLogic array members. HIT for Linux also provides command-line tools for discovering and connecting to EqualLogic volumes, and a performance tuning system check. Follow the instructions below to install HIT for Linux.
4. Install the HIT for Linux at the Nova node running the compute service.
a. Download the latest HIT for Linux ISO from the Dell EqualLogic support site (login required).
i. https://eqlsupport.dell.com/secure/login.aspx
b. Mount the ISO image from within Linux.
c. Change to the directory of the ISO mount point, for example:
cd /media/CDROM
d. Run the HIT for Linux installer script.
./install --nogpgcheck
e. Follow the instructions, choosing to include only the SAN interface subnets.
Eqltune, the EqualLogic performance tuning utility, will be run automatically by the HIT for Linux
installer. This utility will detect and fix problematic settings for block devices, Ethernet adapters,
sysctl tunable options, and more. Most importantly, it configures RHEL 6 to allow I/O over multiple
interfaces on the same SAN subnet. Eqltune will record and can, if necessary, restore the original
configuration. Run eqltune from the command line for further information.
f. Once complete, include the HIT bash configuration file into the shell for command-line completion of the EqualLogic tools. Note the space between the period and the full path.
. /etc/bash_completion.d/equallogic
5. Using the Horizon web interface:
a. Create a volume on the EqualLogic group.
b. Attach the volume to a running compute instance.
6. At the Nova node hosting the compute instance, multiple iSCSI sessions can be observed. Initially,
one iSCSI session per SAN interface will be created. HIT will monitor the volume sessions and will
create additional sessions if the volume is distributed across more than one array member. For
example, if the volume is distributed across two array members, HIT will ensure that four iSCSI
sessions are created. Use the following command to view current iSCSI sessions.
iscsiadm -m session
7. Unfortunately, Nova references a single-path device node when adding the virtual disk to the compute instance, and I/O will not be distributed across the multiple iSCSI sessions. Correcting this requires identifying the multipath device created by HIT for the volume and modifying the compute instance configuration file to reference it. This can be done with the following steps.
a. List the Cinder volumes and identify the volume attached to the compute instance.
cinder list
b. Find the EqualLogic multipath device that corresponds to this volume.
ls -al /dev/eql/
c. Find the Nova ID of the compute instance.
nova list
d. Find the KVM domain name of the compute instance.
nova show <compute instance ID>
...
| OS-EXT-SRV-ATTR:instance_name | instance-00000005 |
...
e. Find the KVM domain ID of the compute instance using the KVM domain name.
virsh list
Conclusion
Cinder block storage is an important component of an OpenStack private cloud, providing persistent
storage or bootable volumes for Nova compute instances.
A Cinder driver, also called a plugin, specifically for EqualLogic is included in the community distribution of
OpenStack as of the Havana release. The driver allows an EqualLogic PS Series storage group to serve as back-end storage for the Cinder service. The Cinder service can provision volumes on the EqualLogic group automatically, allowing the Nova node to connect directly to the storage group using iSCSI on behalf of the compute instances.
Configuring Cinder to use EqualLogic as back-end storage is a straightforward process. The EqualLogic group can be easily scaled out without the need for Cinder reconfiguration. The Cinder service can utilize multiple array members or pools per storage group, and even supports multiple groups. Pools or groups with different performance profiles can be differentiated using Cinder volume types, which allow end users to choose back-end storage at the time of volume creation within the Horizon interface.
While EqualLogic MPIO does not work out of the box, it can be enabled on a per-compute-instance basis.
The steps are provided as a guide to future OpenStack development.
Test environment hardware: blade enclosure, blade servers, rack servers, and SAN switches.
Additional Resources
EqualLogic Configuration Guide:
http://en.community.dell.com/dell-groups/dtcmedia/m/mediagallery/19852516/download.aspx
EqualLogic Compatibility Matrix (ECM):
http://en.community.dell.com/techcenter/storage/w/wiki/2661.equallogic-compatibilitymatrix.aspx
EqualLogic Switch Configuration Guides:
http://en.community.dell.com/techcenter/storage/w/wiki/4250.switch-configuration-guides-bysis.aspx
The latest EqualLogic firmware updates and documentation (site requires a login):
http://support.equallogic.com
Official RHEL OpenStack Platform documentation:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Configuration_Reference_Guide/section_volume-drivers.html#dell-equallogic-driver
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_manage-volumes.html#multi_backend
RHEL 6.3 NIC optimization and best practices with EqualLogic SANs:
http://en.community.dell.com/techcenter/extras/m/white_papers/20438152.aspx
Dell Tech Center Storage page:
http://en.community.dell.com/techcenter/storage/