NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 U.S.A.
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Documentation comments: doccomments@netapp.com
Information Web: http://www.netapp.com
Part number: 215-04803_A0
July 2009
Table of Contents | 3
Contents
Copyright information ......................................... 9
Trademark information ........................................ 11
About this guide ............................................. 13
    Audience ................................................. 13
    Terminology .............................................. 14
    Command conventions ...................................... 14
    Keyboard and formatting conventions ...................... 15
    Special messages ......................................... 16
    How to send your comments ................................ 16
4 | Windows Host Utilities 5.2 Installation and Setup Guide
Installing and configuring the Host Utilities (high level) ... 33
Verifying your host and storage system configuration ......... 34
Confirming your storage system configuration ................. 35
Configuring FC HBAs and switches ............................. 35
Checking the media type of FC ports .......................... 36
Configuring iSCSI initiators and HBAs ........................ 37
    iSCSI software initiator options ......................... 37
    Downloading the iSCSI software initiator ................. 38
    Installing the iSCSI Initiator software .................. 39
    Installing the iSCSI HBA ................................. 39
    Options for iSCSI sessions and error recovery levels ..... 40
    Options for using CHAP with iSCSI Initiators ............. 41
Installing multipath I/O software ............................ 41
    How to have a DSM multipath solution ..................... 42
    Disabling ALUA for Data ONTAP DSM ........................ 42
    Enabling ALUA for FC with msdsm .......................... 43
About configuring Hyper-V systems ............................ 44
    Adding virtual machines to a failover cluster ............ 44
    Configuring SUSE Linux guests for Hyper-V ................ 44
About Veritas Storage Foundation ............................. 45
    Using Veritas Storage Foundation 5.1 for Windows ......... 46
Installation process overview ................................ 46
    Installing the Host Utilities interactively .............. 47
    Installing the Host Utilities from a command line ........ 47
About SnapDrive for Windows .................................. 48
Repairing and removing Windows Host Utilities ................ 49
    Repairing or removing Windows Host Utilities interactively ... 49
    Repairing or removing Windows Host Utilities from a command line ... 49
    Removing Windows Host Utilities affects DSM .............. 50
Summary of FC HBA values set by Windows Host Utilities ....... 56
ManageDisksOnSystemBuses setting used by the Host Utilities .. 57
ClusSvcHangTimeout setting ................................... 57
TimeOutValue setting used by the Host Utilities .............. 57
PathVerifyEnabled setting used by the Host Utilities ......... 58
PDORemovePeriod setting used by the Host Utilities ........... 58
RetryCount setting used by the Host Utilities ................ 58
RetryInterval setting used by the Host Utilities ............. 59
DsmMaximumStateTransitionTime and DsmMaximumRetryTimeDuringStateTransition settings used by the Host Utilities ... 59
MPIOSupportedDeviceList setting used by the Host Utilities ... 60
DsmSupportedDeviceList setting used by Host Utilities ........ 60
IPSecConfigTimeout setting used by the Host Utilities ........ 60
LinkDownTime setting used by the Host Utilities .............. 60
MaxRequestHoldTime setting used by the Host Utilities ........ 61
FC HBA parameters set by the Host Utilities .................. 61
Setting up LUNs............................................................................................63
LUN overview ................................................. 63
    LUN types to use for hosts and guest operating systems ... 63
    Overview of creating LUNs ................................ 64
Initiator group overview ..................................... 65
    Mapping LUNs to igroups .................................. 65
    About mapping LUNs for Windows clusters .................. 66
About FC targets ............................................. 66
    About proxy paths in FC configurations ................... 66
Adding iSCSI targets ......................................... 68
    About dependent services on the Native Stack and iSCSI ... 69
    About dependent services on Veritas and iSCSI ............ 70
Accessing LUNs on hosts that use Veritas Storage Foundation .. 70
Accessing LUNs on hosts that use the native OS stack ......... 71
Overview of initializing and partitioning the disk ........... 71
Troubleshooting............................................................................................73
Areas to check for possible problems ......................... 73
Updating the HBA software driver ............................. 74
Understanding the Host Utilities changes to FC HBA driver settings ... 75
    Verifying the Emulex HBA driver settings on FC systems ... 75
    Verifying the QLogic HBA driver settings on FC systems ... 76
Enabling logging on the Emulex HBA ........................... 76
Enabling logging on the QLogic HBA ........................... 77
About the diagnostic programs ................................ 77
    Running the diagnostic programs (general steps) .......... 78
    Collecting storage system information .................... 79
    Collecting Windows host information ...................... 79
    Collecting FC switch information ......................... 80
    About collecting information on Veritas Storage Foundation ... 80
SAN Booting..................................................................................................81
What SAN booting is .......................................... 81
    How SnapDrive supports SAN booting ....................... 82
    General requirements for SAN booting ..................... 82
    About queue depths used with FC SAN booting .............. 82
Configuring FC SAN booting ................................... 83
    About BootBIOS and SAN booting ........................... 84
    Enabling Emulex BootBIOS using HBAnyware ................. 84
    Enabling Emulex BootBIOS using LP6DUTIL .................. 85
    Enabling QLogic BootBIOS ................................. 85
    WWPN for the HBA required ................................ 86
    Configuring a single path between the host and storage system ... 86
Creating the boot LUN ........................................ 89
Configuring Emulex BootBIOS .................................. 89
Configuring QLogic BootBIOS .................................. 90
About configuring BIOS to allow booting from a LUN ........... 91
    Configuring a Dell BIOS Revision A10 ..................... 91
    Configuring an IBM BIOS .................................. 92
    Configuring a Phoenix BIOS 4 Release 6 ................... 92
Configuring QLogic iSCSI HBA boot BIOS settings .............. 92
    Configuration process requires limited paths to HBA ...... 93
    Setting iSCSI HBA parameters in boot BIOS ................ 93
Getting the correct driver for the boot LUN .................. 93
    Installing Windows on the boot LUN ....................... 94
Configuring a VCS or MSCS cluster with Veritas in a SAN booted environment ... 95
Index...............................................................................................................97
Copyright information
Copyright © 1994–2009 NetApp, Inc. All rights reserved. Printed in the U.S.A.

No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
NetApp, the Network Appliance logo, the bolt design, NetApp–the Network Appliance Company, Cryptainer, Cryptoshred, DataFabric, DataFort, Data ONTAP, Decru, FAServer, FilerView, FlexClone, FlexVol, Manage ONTAP, MultiStore, NearStore, NetCache, NOW (NetApp on the Web), SANscreen, SecureShare, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapRestore, SnapValidator, SnapVault, Spinnaker Networks, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, StoreVault, SyncMirror, Topio, VFM, and WAFL are registered trademarks of NetApp, Inc. in the U.S.A. and/or other countries.

gFiler, Network Appliance, SnapCopy, Snapshot, and The evolution of storage are trademarks of NetApp, Inc. in the U.S.A. and/or other countries and registered trademarks in some other countries.

The NetApp arch logo; the StoreVault logo; ApplianceWatch; BareMetal; Camera-to-Viewer; ComplianceClock; ComplianceJournal; ContentDirector; ContentFabric; EdgeFiler; FlexShare; FPolicy; Go Further, Faster; HyperSAN; InfoFabric; Lifetime Key Management; LockVault; NOW; ONTAPI; OpenKey; RAID-DP; ReplicatorX; RoboCache; RoboFiler; SecureAdmin; Serving Data by Design; Shadow Tape; SharedStorage; Simplicore; Simulate ONTAP; Smart SAN; SnapCache; SnapDirector; SnapFilter; SnapMigrator; SnapSuite; SohoFiler; SpinMirror; SpinRestore; SpinShot; SpinStor; vFiler; VFM Virtual File Manager; VPolicy; and Web Filer are trademarks of NetApp, Inc. in the U.S.A. and other countries.

NetApp Availability Assurance and NetApp ProTech Expert are service marks of NetApp, Inc. in the U.S.A.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml.

Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries.

Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries.

RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks.

NetCache is certified RealSystem compatible.
Audience on page 13
Terminology on page 14
Command conventions on page 14
Keyboard and formatting conventions on page 15
Special messages on page 16
How to send your comments on page 16
Related information
Audience
This document is written with certain assumptions about your technical knowledge and experience. This guide is intended for system installers and administrators who are familiar with the Microsoft Windows operating system and with configuring and administering NetApp storage systems.

You should be familiar with the specifics of your configuration, including the following items:
• Multipathing software, such as Data ONTAP DSM (device-specific module) for Windows MPIO (multipath I/O) or Veritas Storage Foundation for Windows
• The iSCSI or Fibre Channel protocols
• Creating virtual hard disks and virtual networks, if you are using Virtual Server 2005 or the Hyper-V role of Windows Server 2008 to create virtual machines
Terminology
To understand the concepts in this document, you might need to know how certain terms are used.

Storage terms

Storage system refers to the hardware device running Data ONTAP that receives data from and sends data to native disk shelves, third-party storage, or both. Storage systems that run Data ONTAP are sometimes referred to as filers, appliances, storage appliances, V-Series systems, or systems. When a storage system is in a high-availability configuration, it is usually referred to as a node. The terms used in this document reflect one of these common usages.

Storage controller refers to the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem. Storage controllers are also sometimes called controllers, storage appliances, appliances, storage engines, heads, CPU modules, or controller modules.
Cluster and high-availability terms In Data ONTAP 7.2 and 7.3 release families, an HA pair is referred to as an active/active configuration or active/active pair.
Command conventions
You can use your product more effectively when you understand how this document uses command conventions to present information.

You can perform common administrator tasks in one or more of the following ways:
• You can enter commands on the system console, or from any client computer that can obtain access to the storage system using a Telnet or Secure Shell (SSH) session. In examples that illustrate command execution, the command syntax and output might differ, depending on your version of the operating system.
• You can use the FilerView graphical user interface. For information about accessing your system with FilerView, and about FilerView Help, which explains Data ONTAP features and how to work with them in FilerView, see the Data ONTAP System Administration Guide.
• You can enter Windows, ESX, HP-UX, AIX, Linux, and Solaris commands on a client console. Your product documentation provides specific command options you can use.
• You can use the client graphical user interface. Your product documentation provides details about how to use the graphical user interface.
• You can enter commands on the switch console or from any client that can obtain access to the switch using a Telnet session. In examples that illustrate command execution, the command syntax and output might differ, depending on your version of the operating system.
Keyboard conventions

Convention       What it means
hyphen (-)       Used to separate individual keys. For example, Ctrl-D means holding
                 down the Ctrl key while pressing the D key.
type             Used to mean pressing one or more keys on the keyboard.

Formatting conventions

Convention            What it means
Italic font           Words or characters that require special attention.
                      Placeholders for information you must supply. For example, if
                      the guide says to enter the arp -d hostname command, you enter
                      the characters "arp -d" followed by the actual name of the host.
                      Book titles in cross-references.
Monospaced font       Command names, option names, keywords, and daemon names.
                      Information displayed on the system console or other computer
                      monitors. Contents of files. File, path, and directory names.
Bold monospaced font  Words or characters you type. What you type is always shown in
                      lowercase letters, unless your program is case-sensitive and
                      uppercase letters are necessary for it to work properly.
Special messages
This document might contain the following types of messages to alert you to conditions that you need to be aware of.
Note: A note contains important information that helps you install or operate the system efficiently.

Attention: An attention notice contains instructions that you must follow to avoid a system crash, loss of data, or damage to the equipment.
Windows Host Utilities, run the Repair option of the Host Utilities to set the required value for this parameter.
• An updated installation dialog with the LUN Protocol Support selection removed. The PROTOCOLS option is also removed from the command line installation. For MPIO configurations, a single DSM manages all LUNs on storage systems running Data ONTAP software, regardless of the protocol used to access the LUNs.
• An updated installation dialog with the Enable Microsoft DSM (MSDSM) Support option removed. The MSDSMSUPPORT option is also removed from the command line installation.
• Veritas Storage Foundation for Windows (SFW) can now coexist with the MSDSM. If you are using the optional SFW software, SFW 5.1 with the Device Driver Installation Package 1 (DDI-1) is required.
Related concepts
What the Host Utilities are on page 19
Tasks required for installing and configuring the Host Utilities on page 20
What the Host Utilities Contain on page 20
Windows configurations supported by the Host Utilities on page 21
Protocols supported by the Host Utilities on page 22
Dynamic disk support on page 25
Multipathing options supported by the Host Utilities on page 26
What is Hyper-V on page 27
Virtual Server 2005 overview on page 28
About SAN booting on page 29
Support for non-English operating system versions on page 30
Where to find more information on page 30
Related concepts
Host configuration settings on page 51
Setting up LUNs on page 63
SAN Booting on page 81
Related tasks
Changing the Cluster cfmode Setting in Fibre Channel SAN Configurations http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/SSICFMODE_1205.pdf
Program              Purpose
cisco_info.exe       Collects information about Cisco Fibre Channel switches the host
                     is connected to in a file that can be submitted to technical
                     support for problem analysis.
controller_info.exe  Collects information about a storage system in a file that can
                     be submitted to technical support for problem analysis. This
                     program was named filer_info.exe in previous Host Utilities
                     versions.
hba_info.exe         Displays detailed information about Fibre Channel host bus
                     adapters (HBAs) in the host.
mcdata_info.exe      Collects information about McData Fibre Channel switches the
                     host is connected to in a file that can be submitted to
                     technical support for problem analysis.
msiscsi_info.exe     Displays information about the iSCSI initiator running on the
                     Windows host.
qlogic_info.exe      Collects information about QLogic Fibre Channel switches the
                     host is connected to in a file that can be submitted to
                     technical support for problem analysis.
san_version.exe      Displays the version of the diagnostic tools and Fibre Channel
                     HBAs.
vm_info.exe          Displays detailed information about the virtual machines,
                     including the storage resources they use. Installed only on
                     Windows Server 2008 and 2008 R2 x64 systems. Runs on a Hyper-V
                     parent only.
windows_info.exe     Collects information about the Windows host in a file that can
                     be submitted to technical support for problem analysis.
SAN booting
Use the Interoperability Matrix to find a supported combination of host and storage system components and software and firmware versions.
Related information
Data ONTAP and Fibre Channel on page 22
Data ONTAP and Fibre Channel over Ethernet on page 23
Data ONTAP and iSCSI on page 24
Introduction to Host Utilities | 23

Two (or more) FC connections from the HBA to the storage system through dual switches or a zoned switch. In this configuration, the host has at least one dual-port HBA or two single-port HBAs. The redundant configuration avoids the single point of failure of a single-switch configuration. This variation requires a supported multipathing solution and device-specific module (DSM), such as the Data ONTAP DSM for Windows MPIO or the Veritas DMP DSM.
Direct-attached. A single host with a direct FC connection from the HBA to stand-alone or active/active storage systems.
ALUA (asymmetric logical unit access) must be enabled on igroups when using the Microsoft DSM (msdsm), and must be disabled when using the Data ONTAP DSM for Windows MPIO or the Veritas DMP DSM. ALUA requires a supported version of Data ONTAP software; see the Interoperability Matrix.
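The DSM-to-ALUA pairing above is easy to get backwards. As a memory aid only, it can be captured in a small lookup table; this Python sketch is purely illustrative, and the dictionary keys are shorthand invented here, not official product identifiers:

```python
# Illustrative only: encodes the ALUA rule described in the text as a lookup table.
ALUA_REQUIRED = {
    "msdsm": True,             # Microsoft DSM (Windows Server 2008): enable ALUA
    "ontap_dsm": False,        # Data ONTAP DSM for Windows MPIO: disable ALUA
    "veritas_dmp_dsm": False,  # Veritas DMP DSM: disable ALUA
}

def alua_required_for(dsm: str) -> bool:
    """Return whether ALUA must be enabled on the igroup for the given DSM."""
    try:
        return ALUA_REQUIRED[dsm]
    except KeyError:
        raise ValueError("unknown DSM: %r" % dsm)
```

The actual enabling or disabling of ALUA is performed on the storage system igroup, as described in the Host Utilities configuration chapters.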
Note: Use redundant configurations with two FC switches for high availability in production
environments. However, direct FC connections and switched configurations using a single zoned switch might be appropriate for less critical business applications. For more detailed information about the supported Fibre Channel topologies, including diagrams, see the Fibre Channel and iSCSI Configuration Guide for your version of Data ONTAP. For more information about using Fibre Channel on your storage system, see the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
Related information
Supported FCoE configurations

The Host Utilities support fabric-attached SAN network configurations, but direct-attached configurations are not supported.

FCoE adapter configuration

The converged network adapters (CNA) must be directly cabled to a supported data center bridging (DCB) switch. No intermediate Ethernet switches may be connected between the CNA end point and the DCB switch. The CNA is a consolidation point and effectively serves as both an HBA and an Ethernet adapter.

Updating the drivers and firmware for a CNA is just like updating them for a traditional FC HBA. Check the Interoperability Matrix for the supported firmware versions.

The CNA uses the same timeout parameters as a traditional FC HBA. The Windows Host Utilities installation program detects the FC HBA portion of the CNA and sets the required parameters. If you install a CNA after installing the Host Utilities, run the Repair option of the installation program to configure the CNA parameters.

FCoE cabling configuration

FCoE cabling information and diagrams are included in the Fibre Channel and iSCSI Configuration Guide for your version of Data ONTAP software.

FCoE switch configuration

The DCB switch requires special setup steps for FCoE. See the documentation supplied by the switch manufacturer. For example, the steps for a Cisco 5020 are included in the Cisco Nexus 5000 Series Switch Fabric Manager Software Configuration Guide. You zone the DCB switch for FCoE just like you zone a traditional FC switch.
Related information
NetApp Interoperability Matrix - now.netapp.com/NOW/products/interoperability
Cisco Nexus 5000 Series Switch Fabric Manager Software Configuration Guide - www.cisco.com/en/US/products/ps9670/tsd_products_support_series_home.html
The iSCSI protocol is a licensed service on the NetApp storage system that enables you to transfer block data to hosts using the SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC 3720 (www.ietf.org).

The storage system is an iSCSI target device. A host running the iSCSI Initiator software or an iSCSI HBA uses the iSCSI protocol to access LUNs on a storage system. The connection between the initiator and target uses a standard TCP/IP network. No special network configuration is needed to support iSCSI traffic. The network can be a dedicated TCP/IP network, or your regular public network. The storage system listens for iSCSI connections on TCP port 3260.

For more information about using iSCSI on your storage system, see the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.

About iSCSI on IPv6

You can use iSCSI connections over networks running Internet Protocol version 6 (IPv6) if all components of the network have IPv6 support enabled. IPv6 uses a longer IP address than the previous IPv4, which enables a larger address space. An IPv6 address looks like this:
fe80:0000:0000:0000:ad7a:41a0:62d0:dd0b
Windows Host Utilities supports IPv6 starting with version 5.1. A version of Data ONTAP software that supports IPv6 is required on the storage system. There are no IPv6-specific tasks for Windows Host Utilities. See the Microsoft Windows documentation for information about enabling IPv6 on the Windows host. See your Ethernet switch documentation for information on enabling IPv6 on your switches if needed. See the Data ONTAP Network Management Guide for your version of Data ONTAP for information on enabling IPv6 on the storage system.
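As an aside, the behavior of the longer IPv6 format can be explored with Python's standard ipaddress module (shown here only as an illustration, not as part of the Host Utilities), using the sample address from the text:

```python
import ipaddress

# The sample address from the text, written out in full (eight 16-bit groups).
addr = ipaddress.ip_address("fe80:0000:0000:0000:ad7a:41a0:62d0:dd0b")

print(addr.version)     # 6
print(addr.compressed)  # fe80::ad7a:41a0:62d0:dd0b (the run of zero groups collapses)
print(addr.exploded)    # fe80:0000:0000:0000:ad7a:41a0:62d0:dd0b
```

The compressed form is what you typically see in switch and host configuration displays; both spellings name the same address.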
Note: The diagnostic programs included with Windows Host Utilities do not currently support IPv6
addresses. See the Host Utilities Release Notes for information about collecting diagnostic information in an IPv6 configuration and any updates to the diagnostic programs.
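Because the storage system listens for iSCSI connections on TCP port 3260, one quick connectivity sanity check is simply attempting a TCP connection to the target portal. The following is a minimal sketch, not a Host Utilities tool; it works for IPv4 and IPv6 portals alike because address resolution is left to the operating system:

```python
import socket

ISCSI_PORT = 3260  # the storage system listens on this TCP port for iSCSI

def portal_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the iSCSI portal succeeds.

    socket.create_connection() resolves the address with getaddrinfo, so
    IPv4 and IPv6 portals both work, provided the host has that protocol enabled.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `portal_reachable("storage1.example.com")` (a hypothetical portal name) returning False suggests a network, cabling, or licensing problem to investigate before looking at initiator settings.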
either Fibre Channel paths or iSCSI paths depending on how you want to connect to the storage system. There is no support for both Fibre Channel and iSCSI protocols on the same host with Veritas Storage Foundation.
This is the native DSM provided with Microsoft Windows Server 2008. It offers active/active and active/passive load balance policies for both the FC and iSCSI protocols. ALUA must be enabled on the storage system igroup for FC. See the Interoperability Matrix to be sure you have a version of Data ONTAP software that is supported with this DSM.

This is the DSM provided with the Microsoft iSCSI initiator. You can use this DSM for iSCSI paths on Windows Server 2003 systems.
What is Hyper-V
Hyper-V is a Windows virtualization technology, available as a role of Microsoft Windows Server 2008, that enables you to create multiple virtual machines on a single physical x64 computer. Each virtual machine runs its own operating system and applications. For a list of currently supported operating systems on Hyper-V virtual machines, see the Windows Host Utilities Release Notes.
Next topics
Methods for using storage with Hyper-V on page 27
Methods for clustering Windows hosts with Hyper-V on page 28
Recommended LUN layout with Hyper-V on page 28
using raw (passthru) disks. The raw disks do not show up in the guest OS. An iSCSI LUN accessed by an iSCSI initiator running on the guest OS. For Windows Vista, use the built-in iSCSI initiator; multipathing is not supported. For Windows XP, use Microsoft iSCSI initiator 2.07; multipathing is not supported.
For Windows Server 2003 and Windows Server 2008, use an iSCSI initiator and multipathing solution that is supported by NetApp for use on a standard host platform. The guest OS supports the same iSCSI configurations as if it were not running on a virtual machine.

For SUSE Linux Enterprise Server, use a supported iSCSI initiator and multipathing solution. The guest OS supports the same iSCSI configurations as if it were not running on a virtual machine.
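Guest-side iSCSI initiators identify themselves by an iSCSI Qualified Name (IQN), whose format is described in RFC 3720 (cited earlier in this guide). A simplified shape check, offered only as an illustration of that format and not as a Host Utilities feature, might look like this:

```python
import re

# Simplified shape check for iSCSI Qualified Names: iqn.YYYY-MM.reversed-domain[:label],
# per the naming format in RFC 3720. Real deployments may also use eui.-style names,
# which this sketch deliberately does not handle.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def looks_like_iqn(name: str) -> bool:
    return IQN_RE.match(name) is not None

print(looks_like_iqn("iqn.1991-05.com.microsoft:host1"))  # True
```

A check like this can catch typos in an initiator node name before you map LUNs to an igroup that will never match.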
The parent Hyper-V system can connect to storage system LUNs just like any other Windows Server 2008 host.
• Connect to LUNs from the underlying Windows Server 2003 system, and then create virtual hard disks (VHDs) on those LUNs for the virtual machines. You can use either iSCSI or FC to connect to the LUNs.
• Connect to LUNs using a software iSCSI initiator running on the virtual machine.
For specific configurations that support Virtual Server 2005 R2, see the Interoperability Matrix.
Next topics
About using virtual hard disks on page 29
Virtual machines and iSCSI initiators on page 29
Related information
The downside of SAN booting is that loss of connectivity between the host and storage system can prevent the host from booting. Be sure to use a reliable connection to the storage system. There are three options for SAN booting a Windows host:
Fibre Channel HBA: Requires one or more supported adapters. These same adapters can also be used for data LUNs. The Windows Host Utilities installer automatically configures required HBA settings.
iSCSI HBA: Requires one or more supported adapters. These same adapters can also be used for data LUNs, or you can use an iSCSI software initiator for data. You must manually configure the HBA settings.
iSCSI software boot: Requires a supported network interface card (NIC) and a special version of the Microsoft iSCSI software initiator.
For information on iSCSI software boot, see the vendor (emBoot, Intel, or IBM) documentation for the iSCSI boot solution you choose. Also, see Technical Report 3644.
Related information
For more information about these tasks, see the following resources:
Setting up the storage system: the Data ONTAP Software Setup Guide and the Data ONTAP Block Access Management Guide for iSCSI and FC.
Supported SAN topologies: the FC and iSCSI Configuration Guide for your version of Data ONTAP software.
Installing and configuring the HBA in your host: your HBA vendor documentation.
Installing and configuring MPIO using the Data ONTAP DSM for Windows MPIO: the Installation and Administration Guide for that version of Data ONTAP DSM for Windows MPIO.
Installing and configuring Veritas Storage Foundation for Windows and the Veritas DMP DSM: the Veritas Storage Foundation and High Availability Solutions Installation and Upgrade Guide and the Veritas Storage Foundation Administrators Guide.
Configuring Veritas Cluster Server and Microsoft Clustering in a Storage Foundation environment: the Veritas Cluster Server Administrators Guide and the Veritas Storage Foundation Administrators Guide.
Installing and configuring a supported version of SnapDrive for Windows software: the Installation and Administration Guide for that version of SnapDrive for Windows.
Managing SAN storage on the storage system: Data ONTAP Commands: Manual Page Reference, Volume 1 and Data ONTAP Commands: Manual Page Reference, Volume 2, the Data ONTAP Block Access Management Guide for iSCSI and FC, and the FilerView online Help.
Related information
This section is for people familiar with this operating system and storage systems. If you need more information, see the detailed instructions for the steps.
Note: If you are upgrading a host running the Data ONTAP DSM from Windows Server 2008 to Windows Server 2008 R2, you must upgrade to Data ONTAP DSM 3.3.1 before installing Windows Server 2008 R2. You must also upgrade to Windows Host Utilities 5.2 before upgrading to Windows Server 2008 R2.
Steps
1. Verify your host and storage system configuration.
2. Confirm your storage system is set up.
3. Configure FC HBAs and switches.
4. Check the media type setting of FC target ports.
5. Install an iSCSI software initiator or HBA.
6. Configure iSCSI options and security.
7. Configure a multipathing solution.
8. Install Veritas Storage Foundation.
9. Install the Host Utilities.
10. Install SnapDrive for Windows.
After you finish
If you add a Windows 2008 R2 host to a failover cluster after installing the Host Utilities, run the Repair option of the Host Utilities installation program to set the required ClusSvcHangTimeout parameter.
The Interoperability Matrix lists all supported configurations. Individual computer models are not listed; Windows hosts are qualified based on their CPU chips. The following configuration items must be verified:
Windows host CPU architecture
Windows operating system version, service pack level, and required hotfixes
Note: If you are upgrading a host running the Data ONTAP DSM from Windows Server 2008 to Windows Server 2008 R2, you must upgrade to Data ONTAP DSM 3.3.1 before installing Windows Server 2008 R2. You must also upgrade to Windows Host Utilities 5.2 before upgrading to Windows Server 2008 R2.
HBA model and firmware version
Fibre Channel switch model and firmware version
iSCSI initiator
Multipathing software
Veritas Storage Foundation for Windows software
Data ONTAP version and cfmode setting
Optional software such as SnapDrive for Windows
Steps
1. Verify that your entire configuration is listed in the matrix. 2. Verify guest operating systems on Hyper-V virtual machines are shown as supported in the Windows Host Utilities Release Notes.
This topic describes the high-level tasks you must complete to configure your storage system for use with Fibre Channel and iSCSI hosts. See the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP for detailed instructions.
Steps
1. Add the iSCSI or FCP license and start the target service.
The Fibre Channel and iSCSI protocols are licensed features of Data ONTAP software. If you need to purchase a license, contact your NetApp or sales partner representative.
2. Verify your cabling.
See the FC and iSCSI Configuration Guide for detailed cabling and configuration information.
Related information
The Windows Host Utilities installer sets the required Fibre Channel HBA settings.
Note: Do not change HBA settings manually.
Steps
1. Install one or more supported Fibre Channel host bus adapters (HBAs) according to the instructions provided by the HBA vendor.
2. Obtain the supported HBA drivers and management utilities and install them according to the instructions provided by the HBA vendor. The supported HBA drivers and utilities are available from the following locations:
Emulex HBAs: Emulex support page for NetApp.
QLogic HBAs: QLogic support page for NetApp.
3. Connect the HBAs to your Fibre Channel switches or directly to the storage system. 4. Create zones on the Fibre Channel switch according to your Fibre Channel switch documentation.
Related information
FC and iSCSI Configuration Guide http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/ Emulex support page for NetApp - http://www.emulex.com/ts/index.html QLogic support page for NetApp - http://support.qlogic.com/support/drivers_software.asp
The default media type setting of auto is for fabric (switched) connections. If you are connecting the host's HBA ports directly to the storage system, you must change the media type setting of the target ports to loop.
Steps
1. To display the current setting of the storage system's target ports, enter the following command at a storage system command prompt:
fcp show adapter -v
The current media type setting is displayed. 2. To change the setting of a target port to loop for direct connections, enter the following commands at a storage system command prompt:
fcp config adapter down
fcp config adapter mediatype loop
fcp config adapter up
adapter is the storage system adapter directly connected to the host.
For more information, see the fcp man page or Data ONTAP Commands: Manual Page Reference, Volume 1 for your version of Data ONTAP.
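As an illustrative transcript of the sequence above (the adapter name 0a is a hypothetical example; use the adapter name reported by fcp show adapter on your storage system):

```
fcp config 0a down
fcp config 0a mediatype loop
fcp config 0a up
fcp show adapter -v
```

The final fcp show adapter -v lets you confirm that the media type now reads loop before reconnecting the host.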
iSCSI software initiator options on page 37
Downloading the iSCSI software initiator on page 38
Installing the iSCSI Initiator software on page 39
Installing the iSCSI HBA on page 39
Options for iSCSI sessions and error recovery levels on page 40
Options for using CHAP with iSCSI Initiators on page 41
MPIO with Windows XP. Note that a Windows XP iSCSI connection to NetApp storage is supported only on Hyper-V virtual machines.
Windows Vista guests on Hyper-V: For guest systems on Hyper-V virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), the iSCSI initiator is built into the operating system. The iSCSI Initiator Properties dialog is available from Administrative Tools. Note that a Windows Vista iSCSI connection to NetApp storage is supported only on Hyper-V virtual machines.
Linux guests on Hyper-V: For guest systems on Hyper-V virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), use an iSCSI initiator solution on a Hyper-V guest that is supported for standalone hardware. A supported version of Linux Host Utilities is required.
Linux guests on Virtual Server 2005: For guest systems on Virtual Server 2005 virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), use an iSCSI initiator solution on a Virtual Server 2005 guest that is supported for standalone hardware. A supported version of Linux Host Utilities is required.
Note: If you want to use an iSCSI HBA on Windows Server 2003 hosts to access the storage system, you must also install the Microsoft iSCSI initiator service.
If you are using iSCSI software boot, you need a special boot-enabled version of the iSCSI software initiator.
Steps
1. Go to the Microsoft Web site at http://www.microsoft.com/.
2. Click Downloads & Trials.
3. Click Download Center.
4. Keep the default setting of All Downloads. In the Search box, type iSCSI Initiator and then click Go.
5. Select the supported Initiator version you want to install.
6. Click the download link for the CPU type in your Windows host. You might also choose to download the Release Notes and Users Guide for the iSCSI Initiator from this Web page.
7. Click Save to save the installation file to a local directory on your Windows host.
The initiator installation program is saved to the Windows host.
Related concepts
You must have downloaded the appropriate iSCSI initiator installer to the Windows host.
Steps
1. Open the local directory to which you downloaded the iSCSI Initiator software.
2. Run the installation program by double-clicking the icon.
3. When prompted to select installation options, select Initiator Service and Software Initiator.
4. For all multipathing solutions except Veritas, select the Microsoft MPIO Multipathing Support for iSCSI check box, regardless of whether you are using MPIO or not. For Veritas multipathing, clear this check box.
Multipathing is not available for Windows XP and Windows Vista.
If you use an iSCSI HBA on Windows Server 2003 hosts, you also need to install the Microsoft iSCSI initiator service. Follow the instructions in Installing an iSCSI software initiator on page 29 to install the software initiator. If you are using only the iSCSI HBA, you can clear the iSCSI Initiator check box when installing the initiator package. The initiator service is built into Windows Server 2008.
About this task
You can optionally boot your Windows host from a storage system LUN using a supported HBA.
1. Install one or more supported iSCSI host bus adapters according to the instructions provided by the HBA vendor.
2. Obtain the supported HBA drivers and management utilities and install them according to the instructions provided by the HBA vendor. Drivers for QLogic iSCSI HBA devices can be found here: http://support.qlogic.com/support/drivers_software.asp
3. Manually set the required QLogic iSCSI HBA settings.
a. Start the SANsurfer program on the Windows host and select the iSCSI HBA. See the SANsurfer online Help for more information.
b. Specify an IP address for each HBA port.
c. Set the Connection KeepAliveTO value to 180.
d. Enable ARP Redirect.
e. Set the iSCSI node name of all iSCSI HBA ports to the same name as shown in the iSCSI initiator GUI on the Windows host.
f. Save the HBA settings and reboot the Windows host.
4. Connect the iSCSI HBA to your Ethernet switches or directly to the storage system. Avoid routing if possible.
5. Using the iSCSI initiator GUI, configure the iSCSI target addresses of your storage system. If you are using more than one path, explicitly select the initiator and target for each path when you log on.
After you finish
If you are SAN booting from an iSCSI HBA, you must also manually set the boot BIOS on the HBA.
Related concepts
The iSCSI initiator does not automatically create multiple sessions. You must explicitly create each session using the iSCSI Initiator GUI.
You specify the iSCSI Initiator CHAP settings using the Microsoft iSCSI Initiator GUI on the host.
Click Advanced on the GUI Discovery tab to specify inbound values for each storage system when you add a target portal.
Click Secret on the General tab to specify the outbound value (mutual CHAP only).
By default, the iSCSI Initiator uses its iSCSI node name as its CHAP user name.
Always use ASCII text passwords; do not use hexadecimal passwords.
For mutual (bidirectional) CHAP, the inbound and outbound passwords cannot be the same.
When you select MPIO support, the Windows Host Utilities installs the Microsoft MPIO components on Windows Server 2003 or enables the included MPIO feature of Windows Server 2008. You then need to install a supported DSM. Choices include the Data ONTAP DSM for Windows MPIO, the Veritas DMP DSM, the Microsoft iSCSI DSM (part of the iSCSI initiator package), and the Microsoft msdsm (included with Windows Server 2008).
Next topics
How to have a DSM multipath solution on page 42
Disabling ALUA for Data ONTAP DSM on page 42
Enabling ALUA for FC with msdsm on page 43
ALUA was introduced in Data ONTAP 7.2. The default setting is disabled, which is the required setting for the Data ONTAP DSM. ALUA is enabled or disabled on igroups.
Steps
1. If the igroup has already been created, verify the ALUA setting by entering
If ALUA is disabled, nothing about ALUA is displayed.
2. If ALUA is enabled, disable it on the igroup by entering
igroup set igroup_name alua no
For more information about the igroup command, see the igroup man page or the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
Data ONTAP 7.3.0 or later software running single_image cfmode is required to support ALUA for the msdsm.
About this task
The msdsm uses ALUA to identify the primary (non-proxy) paths to FC LUNs. ALUA is enabled on FC igroups. ALUA is not currently supported for iSCSI Windows igroups. You cannot map a LUN to both FC and iSCSI igroups when ALUA is enabled on the FC igroup. The msdsm does not support mixed FC and iSCSI paths to the same LUN. If you want mixed paths, use the Data ONTAP DSM for Windows MPIO.
Steps
1. Verify you have a supported version of Data ONTAP software and it is configured for single_image cfmode. 2. Create the FC igroup for the Windows host. 3. Enable ALUA on the igroup by entering
igroup set igroup_name alua yes
For more information about the igroup command, see the igroup man page or the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
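As a sketch of the full sequence (the igroup name win_host1 is a hypothetical example, and the WWPN is the sample value used elsewhere in this guide):

```
igroup create -f -t windows win_host1 10:00:00:00:c9:73:5b:90
igroup set win_host1 alua yes
igroup show -v win_host1
```

The verbose output of igroup show should include the ALUA state, so you can confirm the change before mapping LUNs.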
Adding virtual machines to a failover cluster on page 44
Configuring SUSE Linux guests for Hyper-V on page 44
When you have more than one virtual machine (configuration files and boot .vhd file) stored on the same LUN, and you are adding the virtual machines to a failover cluster, you must put all of the virtual machine resources in the same resource group. Otherwise adding virtual machines to the cluster fails.
Steps
1. Move the available storage group to the node on which you are creating and adding virtual machines. (The available storage resource group in a Windows Server 2008 failover cluster is hidden.) On the cluster node, enter the following command at a Windows command prompt:
cluster group "Available Storage" /move:node_name
node_name is the host name of the cluster node from which you are adding virtual machines.
2. Move all of the virtual machine resources to the same failover cluster resource group. 3. Run the Virtual Machine Resource Wizard to create the virtual machines and then add them to the failover cluster. Be sure that the resources for all virtual machines are configured as dependent on the disk mapped to the LUN.
Install a supported version of the SUSE Linux Enterprise Server operating system on a Hyper-V virtual machine.
Setting timeout parameters on a Linux guest ensures correct failover behavior. You can use an iSCSI initiator solution on a Hyper-V guest that is supported for standalone hardware. Be sure to install a supported version of Linux Host Utilities. Use the linux type for LUNs accessed with an iSCSI initiator and for raw Hyper-V LUNs. Use the windows_2008 or hyper_v LUN type for LUNs that contain VHDs.
Steps
1. Download and install the Linux Integration Components package from Microsoft. The package is available from the Microsoft Connect site. Registration is required.
2. Set the timeout parameter.
a. Open the /etc/udev/rules.d/50-udev-default.rules file on the Linux guest.
b. Locate the line that includes /sys$$DEVPATH/timeout
c. Change the value in that line to 120.
Example
ACTION=="add", SUBSYSTEM=="scsi", SYSFS{type}=="0|7|14", RUN+="/bin/sh -c 'echo 120 > /sys$$DEVPATH/timeout'"
d. Save the file.
e. Make the new value active by entering the following command at the Linux command prompt:
/etc/init.d/boot.udev force-reload
f. Verify the new timeout value is active by running the following command:
cat /sys/block/sdX/device/timeout
sdX is the name of the SCSI disk device, such as sdb.
3. Set all virtual network adapters for the virtual machine to use static MAC addresses. 4. If you are running an iSCSI initiator on the Linux guest, install a supported version of SUSE Linux Host Utilities.
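The timeout verification in step 2f can be wrapped in a small POSIX shell check. This is a sketch, not part of the Host Utilities; the function name is hypothetical, the sysfs path layout is the standard Linux location, and 120 is the value set above:

```shell
#!/bin/sh
# check_timeout: report whether a SCSI disk timeout file holds the expected value.
# Usage: check_timeout /sys/block/sdb/device/timeout 120
check_timeout() {
    file=$1
    expected=${2:-120}
    actual=$(cat "$file") || return 2
    if [ "$actual" -eq "$expected" ] 2>/dev/null; then
        echo "OK: $file = $actual"
        return 0
    else
        echo "MISMATCH: $file = $actual (expected $expected)"
        return 1
    fi
}
```

On a live guest you could loop over every disk with: for f in /sys/block/sd*/device/timeout; do check_timeout "$f" 120; done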
Related information
1. Download and install the DDI-1 package for Veritas Storage Foundation 5.1 from the Symantec Web site.
2. For clustering environments (VCS or Microsoft Clustering-MSCS), set the SCSI setting in the Veritas Enterprise Administrator control panel to SCSI-3. For the latest information, see the Host Utilities Release Notes.
Related information
You can use either FC paths or iSCSI paths, depending on how you want to connect to the storage system. There is no support for both Fibre Channel and iSCSI protocols on the same host with Veritas Storage Foundation.
Next topics
Installing the Host Utilities interactively on page 47
Installing the Host Utilities from a command line on page 47
1. Insert your Windows Host Utilities CD or change to the directory to which you downloaded the executable file.
2. Run the executable file and follow the instructions on the screen.
Note: The Windows Host Utilities installer checks for required Windows hotfixes. If it detects a missing hotfix, it displays an error. Download and install any requested hotfixes, and then restart the installer. Some hotfixes for Windows Server 2008 are not recognized unless the affected feature is enabled. For example, an MPIO hotfix might not be recognized as installed until the MPIO feature is enabled. If you are prompted to install a hotfix that is already installed, try enabling the affected Windows feature and then restart the Host Utilities installer.
3. Reboot the Windows host when prompted.
After you finish
If you add a Windows 2008 R2 host to a failover cluster after installing the Host Utilities, run the Repair option of the Host Utilities installation program to set the required ClusSvcHangTimeout parameter.
Related tasks
The Host Utilities installation package must be in a path that is accessible by the Windows host. Follow the instructions for installing the Host Utilities interactively to obtain the installation package.
The system automatically reboots when the installation is complete. If a required Windows hotfix is missing, the installation fails with a simple message in the event log saying the installation failed, but not why. Problems with hotfixes are logged to the msiexec log file. It is recommended that you enable logging in the msiexec command to record the cause of this and other possible installation failures.
Step
MULTIPATHING specifies whether MPIO support is installed. Allowed values are 0 for no, 1 for yes.
inst_path is the path where the Host Utilities files are installed. The default path is C:\Program
For a complete list of msiexec command-line options, enter
msiexec /help
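As a hedged sketch of a typical quiet installation command line (the installer file name installer.msi follows the examples in this guide; MULTIPATHING is the property described above; /i, /quiet, and the /l*v logging option are standard msiexec switches, and logging records failures as recommended earlier):

```
msiexec /i installer.msi /quiet MULTIPATHING=1 /l*v install.log
```

Review install.log afterward if the installation fails, since hotfix problems are recorded there rather than in the event log message.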
If you add a Windows 2008 R2 host to a failover cluster after installing the Host Utilities, run the Repair option of the Host Utilities installation program to set the required ClusSvcHangTimeout parameter.
Related tasks
Follow the instructions in the Installation and Administration Guide for your version of SnapDrive software.
Note: SnapDrive for Windows is not supported with Veritas Storage Foundation for Windows software.
Repairing or removing Windows Host Utilities interactively on page 49
Repairing or removing Windows Host Utilities from a command line on page 49
Removing Windows Host Utilities affects DSM on page 50
1. Open Windows Add or Remove Programs (Windows Server 2003) or Programs and Features (Windows Server 2008).
2. Select Windows Host Utilities.
3. Click Change.
4. Click Repair or Remove as needed.
5. Follow the instructions on the screen.
1. Enter the following command on the Windows command line to repair Windows Host Utilities:
msiexec {/uninstall | /f} installer.msi [/quiet]
The system reboots automatically when the command completes.
What are FC and iSCSI identifiers on page 51
Overview of settings used by the Host Utilities on page 53
Recording the WWPN on page 51
Recording the iSCSI initiator node name on page 53
Each HBA port has its own WWPN. For a dual-port HBA, you need to record two values. The WWPN looks like this:
WWPN: 10:00:00:00:c9:73:5b:90
Steps
1. If the system is running and Windows Host Utilities are installed, obtain the WWPNs using the hba_info.exe program. 2. If the system is SAN booted and not yet running an operating system, obtain the WWPNs using the boot BIOS.
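A WWPN recorded from hba_info or the boot BIOS can be sanity-checked with a small shell function. This is a sketch (the function name is hypothetical); it only validates the colon-separated 8-byte hexadecimal form shown above:

```shell
# is_wwpn: succeed if the argument matches the 8-byte colon-separated
# hexadecimal form, for example 10:00:00:00:c9:73:5b:90
is_wwpn() {
    echo "$1" | grep -Eq '^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$'
}
```

A check like this catches common transcription mistakes (missing pairs, stray characters) before you use the WWPN to create an igroup.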
Obtaining the WWPN using hba_info on page 52
Obtaining the WWPN using Emulex BootBIOS on page 52
Obtaining the WWPN using QLogic BootBIOS on page 52
Obtaining the WWPN using hba_info
The hba_info.exe program installed by Windows Host Utilities displays the WWPN of all FC ports on a Windows host.
Steps
1. Run the hba_info.exe program that was installed as part of Windows Host Utilities. The program displays detailed information about each Fibre Channel HBA port.
2. Copy the WWPN line for each port to a text file or write it down.
Obtaining the WWPN using Emulex BootBIOS
For SAN-booted systems with Emulex HBAs that do not yet have an operating system, you can get the WWPNs from the boot BIOS.
Steps
1. Restart the host.
2. During startup, press Alt-E to access BootBIOS.
3. Select the menu entry for the Emulex HBA. BootBIOS displays the configuration information for the HBA, including the WWPN.
4. Record the WWPN for each HBA port.
Obtaining the WWPN using QLogic BootBIOS
For SAN-booted systems with QLogic HBAs that do not yet have an operating system, you can get the WWPNs from the boot BIOS.
Steps
1. Restart the host.
2. During startup, press Ctrl-Q to access BootBIOS.
3. Select the appropriate HBA and press Enter. The Fast!UTIL options are displayed.
4. Select Configuration Settings and press Enter.
5. Select Adapter Settings and press Enter.
6. Record the WWPN for each HBA port from the Adapter Port Name field.
1. For Windows Server 2008 and Windows Vista, click Start > Administrative Tools > iSCSI Initiator. For Windows Server 2003 and Windows XP, click Start > All Programs > Microsoft iSCSI Initiator > Microsoft iSCSI Initiator. The iSCSI Initiator Properties dialog box is displayed.
2. Copy the Initiator Name or Initiator Node Name value to a text file or write it down. The exact label in the dialog box differs depending on the Windows version. The iSCSI node name looks like this:
iqn.1991-05.com.microsoft:server3
Summary of registry values set by Windows Host Utilities on page 54
Summary of FC HBA values set by Windows Host Utilities on page 56
ManageDisksOnSystemBuses setting used by the Host Utilities on page 57
ClusSvcHangTimeout setting on page 57
TimeOutValue setting used by the Host Utilities on page 57
PathVerifyEnabled setting used by the Host Utilities on page 58
PDORemovePeriod setting used by the Host Utilities on page 58
RetryCount setting used by the Host Utilities on page 58
RetryInterval setting used by the Host Utilities on page 59
DsmMaximumStateTransitionTime and DsmMaximumRetryTimeDuringStateTransition settings used by the Host Utilities on page 59
MPIOSupportedDeviceList setting used by the Host Utilities on page 60
DsmSupportedDeviceList setting used by Host Utilities on page 60
IPSecConfigTimeout setting used by the Host Utilities on page 60
LinkDownTime setting used by the Host Utilities on page 60
MaxRequestHoldTime setting used by the Host Utilities on page 61
FC HBA parameters set by the Host Utilities on page 61
All Windows Server 2008 R2 configurations
The following value is set only for Windows Server 2008 R2 cluster configurations:
\HKLM\Cluster\ClusSvcHangTimeout = 240
Note: If you configure a Windows cluster after installing the Host Utilities, run the Repair option of the Host Utilities installation program to set ClusSvcHangTimeout.
No MPIO specified
The following value is set when no multipath I/O (MPIO) support is selected:
HKLM\SYSTEM\CurrentControlSet\Services\disk\TimeOutValue = 120
MPIO support specified, all OS The following values are set when multipath I/O (MPIO) is selected. These values are set for all operating systems.
HKLM\SYSTEM\CurrentControlSet\Services\disk\TimeOutValue = 20
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\PDORemovePeriod = 130
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\PathVerifyEnabled = 0
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\RetryCount = 6
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\RetryInterval = 1
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\PDORemovePeriod = 130
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\PathVerifyEnabled = 0
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\RetryCount = 6
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\RetryInterval = 1
HKLM\SYSTEM\CurrentControlSet\Control\MPDEV\MPIOSupportedDeviceList = "NETAPP LUN "
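To spot-check a few of these values after installation, you can query them on the host with the standard Windows reg utility (a sketch; the key paths are the ones listed above, and the output format varies by Windows version):

```
reg query HKLM\SYSTEM\CurrentControlSet\Services\disk /v TimeOutValue
reg query HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters /v RetryCount
```

Run these from an elevated command prompt; values that differ from the tables suggest another product has changed the settings, and the Repair option of the installer restores them.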
MPIO support specified, Windows Server 2003 only The following values are set when multipath I/O (MPIO) is selected. These values are set for Windows Server 2003 only.
Property Type Property Value
MPIO support specified, Windows Server 2008 only
The following values are set when multipath I/O (MPIO) is selected. These values are set for Windows Server 2008 only.
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\PathVerifyEnabled = 0
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\PDORemovePeriod = 130
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\RetryCount = 6
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\RetryInterval = 1
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\DsmMaximumRetryTimeDuringStateTransition = 120
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\DsmMaximumStateTransitionTime = 120
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\DsmSupportedDeviceList = "NETAPP LUN "
For Emulex Fibre Channel HBAs, the installer sets the following parameters when no MPIO is selected:
LinkTimeOut = 30
NodeTimeOut = 120
For QLogic Fibre Channel HBAs, the installer sets the following parameters when MPIO is selected:
LinkDownTimeOut = 5
PortDownRetryCount = 30
For QLogic Fibre Channel HBAs, the installer sets the following parameters when no MPIO is selected:
LinkDownTimeOut = 30
PortDownRetryCount = 120
ClusSvcHangTimeout setting
The Windows Host Utilities installer sets ClusSvcHangTimeout to 240 seconds for Windows Server 2008 R2 hosts in a cluster configuration. This ensures the correct path discovery sequence after all paths to a LUN are lost and paths begin to return. The ClusSvcHangTimeout setting specifies the time period for which a PR_IN command is retried after loss of all paths. This setting must be greater than the PDORemovePeriod for Windows Server 2008 R2. This parameter is set in the following Windows registry key:
\HKLM\Cluster\
Note: If you configure a Windows cluster after installing Windows Host Utilities, run the Repair option of the Host Utilities installation program to set ClusSvcHangTimeout.
If cluster service is installed after you install Windows Host Utilities, use the Repair option to change the disk TimeOutValue back to 20 or 120 seconds. In the Windows Control Panel, launch Add or Remove Programs. Select Windows Host Utilities and click Change. Then click Repair to correct the TimeOutValue setting.
The RetryCount parameter specifies the number of times the current path to a LUN is retried before failing over to an alternate path. Note that this parameter affects all DSMs on the system. This parameter is set in the following Windows registry keys:
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\
HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\
The LinkTimeOut parameter specifies the interval after which a link that is down stops issuing a BUSY status for requests and starts issuing SELECTION_TIMEOUT error status. This LinkTimeOut includes port login and discovery time. The NodeTimeOut parameter specifies the interval after which a formerly logged-in node issues SELECTION_TIMEOUT error status to an I/O request. This causes the system to wait for a node that might reenter the configuration soon before reporting a failure. The timer starts after port discovery is completed and the node is no longer present. For QLogic HBAs, the installation program sets the following HBA parameters:
LinkDownTimeOut = 5 (with MPIO)
LinkDownTimeOut = 30 (without MPIO)
PortDownRetryCount = 30 (with MPIO)
PortDownRetryCount = 120 (without MPIO)
The LinkDownTimeOut parameter controls the interval after which a link that is down stops issuing a BUSY status for requests and starts issuing SELECTION_TIMEOUT error status. This LinkDownTimeOut includes port login and discovery time. The PortDownRetryCount parameter specifies the number of times the I/O request is re-sent to a port that is not responding, at one-second intervals.
Setting up LUNs
LUNs are the basic unit of storage in a SAN configuration. The host system uses LUNs as virtual disks.
Next topics
LUN overview on page 63
Initiator group overview on page 65
About FC targets on page 66
Adding iSCSI targets on page 68
Accessing LUNs on hosts that use Veritas Storage Foundation on page 70
Accessing LUNs on hosts that use the native OS stack on page 71
Overview of initializing and partitioning the disk on page 71
LUN overview
You can use a LUN the same way you use local disks on the host. After you create the LUN, you must make it visible to the host. The LUN then appears on the Windows host as a disk. You can:
Format the disk with NTFS. To do this, you must initialize the disk and create a new partition. Only basic disks are supported with the native OS stack.
Use the disk as a raw device. To do this, you must leave the disk offline. Do not initialize or format the disk.
Configure automatic start services or applications that access the LUNs. You must configure these start services so that they are dependent on the Microsoft iSCSI Initiator service.
Next topics
LUN types to use for hosts and guest operating systems on page 63
Overview of creating LUNs on page 64
For the hyper_v, windows_2008, windows_gpt, and windows_lhs LUN types, you must create the LUN using the Data ONTAP command-line interface.
Use the following table to select the correct LUN type.
windows (all Data ONTAP versions): Master boot record (MBR) disks on Windows Server 2003, Windows XP, and Windows Vista
hyper_v (Data ONTAP 7.3.1 and later): Windows Server 2008 Hyper-V LUNs containing virtual hard disks (VHDs). Note: For raw LUNs, use the type of the child operating system as the LUN type.
windows_2008 (Data ONTAP 7.2.5 and later, 7.3.0 RC2 and later): All other disks on Windows Server 2008
windows_gpt (Data ONTAP 7.2.1 and later): GUID Partition Type (GPT) disks on Windows Server 2003
windows_lhs (Data ONTAP 7.3.0 RC1): All disks on Windows Server 2008
windows (Data ONTAP 7.2.4 and earlier): All disks on Windows Server 2008
Refer to the documentation for your version of SnapDrive for specific steps. If you create LUNs manually, you cannot manage them using SnapDrive.
The igroup must be the correct type for the protocol. You cannot use an iSCSI igroup when you are using the FC protocol to access the LUN. If you want to access a LUN with both the FC and iSCSI protocols, you must create two igroups: one FC and one iSCSI.
The lun setup command steps you through the process of creating an igroup and LUN on the storage system. You can also create igroups and LUNs by executing a series of individual commands (such as
igroup create, lun create, and lun map). Detailed steps for creating LUNs are in the Data
ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
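As a sketch of the individual-command approach, the following Data ONTAP console session creates an FC igroup, creates a LUN, and maps it. The volume path, igroup name, WWPN, size, and LUN ID are hypothetical:

```
igroup create -f -t windows win_host1 10:00:00:00:c9:6b:76:49
lun create -s 10g -t windows_2008 /vol/vol1/lun0
lun map /vol/vol1/lun0 win_host1 0
```

The -f flag creates an FC igroup; use -i with the host's IQN instead when creating an iSCSI igroup.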
You can use both FC and iSCSI paths to the same LUN only with the Data ONTAP DSM for Windows MPIO. Mixed connections are not supported with the Veritas DSM or other multipathing solutions.
Note: If you are using the Microsoft native DSM for Windows Server 2008 (msdsm), you must enable ALUA for FC igroups. If you are using the Data ONTAP DSM for Windows MPIO, you must disable ALUA for igroups.
There are many ways to create and manage initiator groups and LUNs on your storage system. These processes vary depending on your configuration. These topics are covered in detail in the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP software.
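A sketch of checking and setting the ALUA attribute on an igroup from the Data ONTAP console; the igroup name is hypothetical:

```
igroup show -v win_host1
igroup set win_host1 alua yes
```

Set alua no instead when the host runs the Data ONTAP DSM for Windows MPIO.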
Next topics
Mapping LUNs to igroups on page 65 About mapping LUNs for Windows clusters on page 66
Related tasks
Enabling ALUA for FC with msdsm on page 43 Disabling ALUA for Data ONTAP DSM on page 42
Windows hosts recognize only LUN IDs 0 through 254, regardless of the number of LUNs mapped. Be sure to map your LUNs to numbers in this range.
About FC targets
The host automatically discovers FC targets that are accessible to its HBAs. However, you do need to verify that the host selects only primary (non-proxy) paths to FC targets.
Verifying FC paths to LUNs on page 66
Correcting FC path configurations with the Data ONTAP DSM on page 67
Correcting FC path configurations with the Veritas DMP DSM on page 68
Correcting FC path configurations with the Microsoft msdsm on page 68

Verifying FC paths to LUNs
When you configure your host for FC, verify that the active paths are not proxy paths. You can verify the paths by mapping a LUN to the host on each storage system node, generating I/O to the LUN, and then checking the FC statistics on each node. Complete the following steps.
Steps
1. Map a LUN to the host on each node in the active/active storage system configuration.
2. On the consoles of both storage systems in the active/active configuration, start collecting statistics by entering the following command:
sysstat -b
3. Generate I/O to the LUNs. 4. Check the FC statistics on each storage system node to verify that the proxy paths have essentially no traffic. The sysstat command periodically writes a line of statistics to the console. Check the Partner columns; the values should remain close to zero, while the FCP columns should show data.
Note: Some initiators send occasional traffic over passive paths to ensure they are still available,
so you typically see some traffic on proxy paths even when the system is correctly configured.
5. Enter Ctrl-C to exit the sysstat command on each console.
If the Partner values remain close to zero, traffic is flowing over the correct paths. If the Partner values are high, as in the example below, the paths are not configured correctly.
Example of high Partner values
In this example, all FC traffic is flowing over the proxy paths. Some columns from the sysstat command are removed from the example to make it easier to read.
CPU   FCP  iSCSI  Partner  Total   FCP kB/s     Partner kB/s
                                   in    out    in      out
 6%     0      0      124    124    0      0    5987     26
 9%     0      0      186    186    0      0    9777     15
 7%     0      0      147    147    0      0    6675     26
 6%     0      0       87     87    0      0    3934     14
 1%     0      0        6      6    0      0     257      0
Correcting FC path configurations with the Data ONTAP DSM When running the Data ONTAP DSM for Windows MPIO, correct any FC path configurations that are using partner (proxy) paths to the LUNs during routine operations. You might need to select a different load balance policy to be able to select which paths are used.
Step
1. Use the DSM management interface to check the paths to the LUNs. The detail display for each LUN shows whether it is using a proxy path. Reconfigure the paths to the LUN so that they do not use a proxy path under normal conditions. For more information, see the Installation and Administration Guide for your version of the Data ONTAP DSM.
Correcting FC path configurations with the Veritas DMP DSM When running the Veritas DMP DSM, correct any FC path configurations that are using partner (proxy) paths to the LUNs during routine operations. The Veritas DMP DSM selects FC paths automatically. You cannot manually select paths.
Step
1. Verify that you are running a supported version of the DSM. See the Host Utilities Release Notes for information about obtaining an updated DSM.
Correcting FC path configurations with the Microsoft msdsm When running the Microsoft Windows Server 2008 native msdsm, correct any FC path configurations that are using partner (proxy) paths to the LUNs during routine operations. The Microsoft msdsm selects FC paths using ALUA.
Steps
1. Verify that you are running a supported version of Data ONTAP software with the single_image cfmode. Data ONTAP 7.3.0 is the first supported version.
2. Verify that ALUA is enabled on the igroup.
Related tasks
You only need one entry for each storage system in the configuration, regardless of the number of interfaces that are enabled for iSCSI traffic. An active/active storage system configuration must have two entries, one for each storage system node in the configuration. The iSCSI Initiator GUI manages connections for both the software initiator and the optional iSCSI HBAs.
You can also add entries for the targets by using the iscsicli command-line interface. Enter
iscsicli help
Use the Microsoft iSCSI Initiator interface to log on to the iSCSI targets. Do not use the Veritas Enterprise Administrator interface to log on, because it does not create iSCSI sessions with the specified source-target portals. If you are using SnapDrive for Windows software, use the SnapDrive interface to add iSCSI targets. To add a target, complete the following steps.
Steps
1. Run the Microsoft iSCSI Initiator GUI.
2. On the Discovery tab, create an entry for the storage system.
3. On the Targets tab, log on to the storage system.
4. If you want the LUNs to be persistent across host reboots, select the Automatically restore this connection when the system boots check box when logging on to the target.
5. If you are using MPIO or multiple connections per session, create additional connections to the target as needed.
Enabling the optional MPIO support or multiple-connections-per-session support does not automatically create multiple connections between the host and storage system. You must explicitly create the additional connections. For Windows Server 2003, see the section Multipathing I/O in the Microsoft iSCSI Software Initiator 2.x Users Guide for specific instructions on configuring multiple paths to iSCSI LUNs. For Windows Server 2008, see the iSCSI topics in Help.
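The same discovery and logon can be scripted with iscsicli; the portal address and target IQN below are hypothetical:

```
iscsicli AddTargetPortal 192.168.10.20 3260
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678
```

QLoginTarget creates a non-persistent session; for logons that survive reboots, iscsicli PersistentLoginTarget is the command-line equivalent of the Automatically restore check box.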
Next topics
About dependent services on the Native Stack and iSCSI on page 69
About dependent services on Veritas and iSCSI on page 70
These steps apply only to a host running Veritas Storage Foundation. They do not apply to hosts running the native OS stack.
Steps
1. Select Start > All Programs > Symantec > Veritas Storage Foundation > Veritas Enterprise Administrator. The Select Profile window is displayed.
2. Select a profile and click OK to continue. The Veritas Enterprise Administrator window is displayed.
3. Click Connect to a Host or Domain. The Connect window is displayed.
4. Select a host from the drop-down menu and click Browse to find a host, or enter the host name of the computer and click Connect. The Veritas Enterprise Administrator GUI window with storage objects is displayed.
5. Select Action > Rescan. All the disks on the host are rescanned.
6. Select Action > Refresh. The latest data is displayed.
7. In the Veritas Enterprise Administrator, with Disks expanded, verify that the newly created LUNs are visible as disks on the host. The LUNs appear on the Windows host as basic disks under Veritas Enterprise Administrator.
After you finish
You can upgrade the disks to dynamic disks by adding them to dynamic disk groups. For more information on creating basic or dynamic volumes, see the Veritas Storage Foundation Administrator's Guide.
Related tasks
These steps apply only to hosts running the native OS stack. They do not apply to hosts running Veritas Storage Foundation.
Steps
1. Right-click My Computer on your desktop and select Manage.
2. Expand Storage and double-click the Disk Management folder.
3. From the Action menu, select Rescan Disks.
4. From the Action menu, select Refresh.
5. In the Computer Management window, with Storage expanded and the Disk Management folder open, check the lower right pane to verify that the newly created LUN is visible as a disk on the host.
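On Windows Server 2008, the same rescan and initialization can be sketched from a command prompt with diskpart. The disk number and drive letter below are hypothetical, and the selected disk must be the new LUN, because these commands destroy any existing data on it:

```
diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> select disk 2
DISKPART> online disk
DISKPART> create partition primary
DISKPART> format fs=ntfs quick
DISKPART> assign letter=E
```

For a raw LUN, stop after list disk and leave the disk offline, as described above.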
Related tasks
Troubleshooting
This section describes general troubleshooting techniques for Windows Host Utilities. Be sure to check the latest Release Notes for known problems and solutions.
Next topics
Areas to check for possible problems on page 73
Updating the HBA software driver on page 74
Understanding the Host Utilities changes to FC HBA driver settings on page 75
Enabling logging on the Emulex HBA on page 76
Enabling logging on the QLogic HBA on page 77
About the diagnostic programs on page 77
NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/
Bugs Online - http://now.netapp.com/NOW/cgi-bin/bol
Contacting NetApp Global Services - http://www.netapp.com/us/support/ngs-contacts.html
To see if you have the latest driver, complete the following steps.
Steps
1. Right-click My Computer and select Manage. The Computer Management window is displayed.
2. Double-click Device Manager. A list of installed devices is displayed. Previously installed drivers are listed under SCSI and RAID controllers. One installed driver appears for each port on the HBA.
Note: If you uninstalled a device driver, an FC controller (HBA) appears under Other devices.
3. Expand SCSI and RAID controllers and double-click the appropriate HBA. The General dialog box is displayed.
4. Click Driver. If the driver version is correct, you do not need to do anything else and can stop now. If the version is not correct, proceed to the next step.
5. Obtain the latest supported version from the Emulex or QLogic Web site.
Related information
Emulex support page for NetApp - http://www.emulex.com/ts/index.html
QLogic support page for NetApp - http://support.qlogic.com/support/drivers_software.asp
When troubleshooting HBA issues, check to make sure these settings have the correct values. The correct values depend on two factors: the HBA vendor and whether you are using multipathing software (MPIO).
You can correct the HBA settings by running the Repair option of the Windows Host Utilities installer.
Next topics
Verifying the Emulex HBA driver settings on FC systems on page 75
Verifying the QLogic HBA driver settings on FC systems on page 76
1. Open HBAnyware.
2. Select the appropriate HBA from the list and click the Driver Parameters tab. The driver parameters appear.
3. If you are using MPIO software, ensure you have the following driver settings:
LinkTimeOut - 5
NodeTimeOut - 30
4. If you are not using MPIO software, ensure you have the following driver settings:
LinkTimeOut - 30
NodeTimeOut - 120
1. Open SANsurfer and click Connect on the toolbar. The Connect to Host dialog appears.
2. Select the appropriate host from the list and click Connect. A list of HBAs appears in the FC HBA pane.
3. Select the appropriate HBA port from the list and click the Settings tab.
4. Select Advanced HBA Port Settings from the Select Settings section.
5. If you are using MPIO software, ensure you have the following driver settings:
Link Down Timeout (linkdwnto) - 5
Port Down Retry Count (portdwnrc) - 30
6. If you are not using MPIO software, ensure you have the following driver settings:
Link Down Timeout (linkdwnto) - 30
Port Down Retry Count (portdwnrc) - 120
1. Open HBAnyware.
2. Select the appropriate HBA from the list and click the Driver Parameters tab.
3. Select the LogErrors parameter and change the value to the desired severity level.
4. Click Apply.
1. Open SANsurfer.
2. Open the Settings menu and select Options.
3. Ensure Log Informational Events, Warning Events, and Enable Warning display are selected.
4. Click OK.
on collecting diagnostic information in an IPv6 environment. There are two types of diagnostic programs: programs that display information you can use yourself, and programs that collect and package information that you send to technical support. The following programs display information that you can use yourself:
hba_info.exe - displays information about the FC HBAs in the Windows host, including the worldwide port names and firmware version.
msiscsi_info.exe - displays detailed information about the iSCSI initiator version, iSCSI targets, sessions, connections, and LUNs.
san_version.exe - displays the version number of the Host Utilities programs installed on the Windows host. It also displays the model and firmware version of FC HBAs installed on the host.
vm_info.exe - displays detailed information about each virtual machine and virtual switch running on a Hyper-V system, including virtual hard disk resources and where they are stored.
The following programs collect configuration information in a file that you can send to technical support. Normally, you use these programs only when requested to do so by a technical support engineer. The programs are:
Next topics
Running the diagnostic programs (general steps) on page 78
Collecting storage system information on page 79
Collecting Windows host information on page 79
Collecting FC switch information on page 80
About collecting information on Veritas Storage Foundation on page 80
These programs run in command-line mode only. Double-clicking a program name to run it does not work.
Step
1. Select Start > Programs > NetApp > Windows Host Utilities > program_name. The system opens a command window. If the program requires you to provide parameters, the command syntax is displayed. If the program does not require parameters, performing this step runs the program. The program either displays the information on the screen or saves it to a file that you can send to technical support for analysis. You can use this step to run:
hba_info.exe
msiscsi_info.exe
san_version.exe
vm_info.exe
controller_info.exe
windows_info.exe
brocade_info.exe
cisco_info.exe
mcdata_info.exe
Troubleshooting | 79
qlogic_info.exe
You should run this utility only when requested to do so by a technical support engineer.
About this task
1. Select Start > Programs > NetApp > Windows Host Utilities > Controller Info. A command prompt opens and the command syntax is displayed.
2. Enter the appropriate information for your storage system.
Note: For an active/active storage system configuration, you should run the program twice, once for each controller.
When the program completes, it displays a message telling you where the file is stored. Send this file to the address provided by the technical support engineer. The message is similar to the following:
Controller ss1 info is in directory C:\temp\netapp\netapp_controller_ss1.yourcompany.com Compressed file is C:\temp\netapp\netapp_controller_ss1.yourcompany.com.tar.gz Please send this file to Customer Support for analysis.
You should run this utility only when requested to do so by a technical support engineer.
About this task
1. Select Start > Programs > NetApp > Windows Host Utilities > Windows Info. When the program completes, it displays a message telling you where the file is stored. Send this file to the address provided by the technical support engineer. The message is similar to the following:
Host info is in directory C:\temp\netapp\windows_info Compressed file is C:\temp\netapp\windows_info.tar.gz Please send this file to Customer Support for analysis.
1. Select Start > Programs > NetApp > Windows Host Utilities > program_name. program_name is the name of the program for your switch. A command prompt opens and the command syntax is displayed.
2. Enter the appropriate information for your switch.
Note: If the Windows host is connected to more than one switch, repeat the command for each switch.
When the program completes, it displays a message telling you where the file is stored. Send this file to the address provided by the technical support engineer. The message is similar to the following:
Executing "supportShow" (may take up to 10 mins)... Switch fcsw32a1 info is in directory C:\temp\netapp\brocade_switch_fcsw32a1 Compressed file is C:\temp\netapp\brocade_switch_fcsw32a1.tar.gz Please send this file to Customer Support for analysis.
SAN Booting
This section describes how to boot your host from a storage system LUN.
Next topics
What SAN booting is on page 81
Configuring FC SAN booting on page 83
Creating the boot LUN on page 89
Configuring Emulex BootBIOS on page 89
Configuring QLogic BootBIOS on page 90
About configuring BIOS to allow booting from a LUN on page 91
Configuring QLogic iSCSI HBA boot BIOS settings on page 92
Getting the correct driver for the boot LUN on page 93
Configuring a VCS or MSCS cluster with Veritas in a SAN booted environment on page 95
Note: Following a system failure, the bootable virtual disk is no longer the default boot device. You
need to reconfigure the hard disk sequence in the system BIOS to set the bootable virtual disk as the default boot device.
How SnapDrive supports SAN booting on page 82
General requirements for SAN booting on page 82
About queue depths used with FC SAN booting on page 82
Support for booting from a Storage Area Network (SAN) - http://support.microsoft.com/kb/305547/en-us
NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/
per-target basis. See the FC and iSCSI Configuration Guide for more information about target queue depths.
Related information
1. Enable BootBIOS on the HBA.
2. For Fibre Channel, obtain the WWPN of the initiator HBA installed in the host. The WWPN is required when you configure the igroup on the storage system.
3. Use port sets to limit the available paths to only a single path to the boot LUN.
4. Create the LUN that will be used as a boot device and map the LUN to the igroup that contains the WWPNs of the host.
5. Configure BootBIOS to use the LUN as a boot device.
6. Configure the host BIOS to make the boot LUN the first disk device in the boot order.
7. Get the correct Emulex or QLogic driver for the boot LUN.
8. Install the Windows operating system on the new LUN.
9. If you are using the Veritas Storage Foundation for Windows software, install it now. Follow the instructions in the Veritas Storage Foundation and High Availability Solutions Installation and Upgrade Guide. Be sure to install the Veritas DMP DSM when you install the Veritas Storage Foundation for Windows software. Do not install the Data ONTAP DSM for Windows MPIO with Veritas Storage Foundation.
10. Install Windows Host Utilities.
Note: For the latest information about SAN booting, including restrictions and configuration details, see the NetApp Interoperability Matrix.
Next topics
About BootBIOS and SAN booting on page 84
Enabling Emulex BootBIOS using HBAnyware on page 84
Enabling Emulex BootBIOS using LP6DUTIL on page 85
Enabling QLogic BootBIOS on page 85
WWPN for the HBA required on page 86
Configuring a single path between the host and storage system on page 86
Related tasks
Recording the WWPN on page 51
Getting the correct driver for the boot LUN on page 93
Utilities, as listed in the Interoperability Matrix. BootBIOS firmware is disabled by default. To configure SAN booting, you must first enable BootBIOS firmware and then configure it to boot from a SAN disk.
Related information
1. Open HBAnyware.
2. Select the appropriate HBA from the list.
3. Click the Firmware tab.
4. Click Enable next to BootBIOS Message. The BootBIOS message is enabled, allowing you to press Alt-E to access BootBIOS upon rebooting.
5. Reboot the host.
6. Press Alt-E to access BootBIOS. BootBIOS displays a menu of available adapters.
7. Enter the number next to the appropriate adapter and press Enter.
8. Type 2, Configure This Adapter's Parameters, and press Enter.
9. Type 1, Enable or Disable BIOS, and press Enter.
10. Type 1 to enable BIOS and press Enter. BIOS is enabled.
11. Enter x to exit BootBIOS.
12. Enter y to reboot the system. The changes are saved.
1. Create a bootable floppy disk.
2. Copy the contents of the Emulex HBA firmware package to the floppy disk.
3. Boot the host from the floppy disk.
4. At the command prompt, type
lp6dutil
and press Enter. The lp6dutil program starts.
5. Press Alt-L to open the Flash menu.
6. Type B for BootBIOS. A list of adapters appears.
7. Select the appropriate adapter and press Enter.
8. Press Alt-C to activate the selected adapter.
9. Press Tab until OK is selected and press Enter. The BootBIOS is enabled.
10. Exit the lp6dutil program.
11. Reboot the host.
1. Reboot the host.
2. Press Ctrl-Q to access BootBIOS. The QLogic Fast!UTIL displays the available adapters, listed in boot order. The primary boot device is the first device listed. If the primary boot device is unavailable, the host boots from the next available device in the list.
3. Press Enter to configure the boot device. The Fast!UTIL options appear.
4. Select Configuration Settings and press Enter. The Configuration Settings appear.
5. Select Adapter Settings and press Enter. The Adapter Settings appear.
6. Scroll to Host Adapter BIOS: If this option is disabled, press Enter to enable it. If this option is enabled, go to the next step.
7. Press Esc to return to the Configuration Settings.
8. Press Esc. You are prompted to save the configuration settings.
9. Select Save changes and press Enter. The changes are saved and you are returned to the configuration settings.
10. Press Esc. You are prompted to reboot the system.
11. Select Reboot system and press Enter. The system reboots.
MPIO (multipathing) drivers are not yet installed when the operating system is installed and first boots. When preparing to install the operating system, ensure there is only one path from the HBA to the LUN used as a boot device. The prerequisites are:
- There is at least one HBA installed in the host. If there is more than one HBA, only one HBA is mapped to the boot LUN.
- Two dual-port FC adapters are installed in each storage system. On both storage systems, two ports from Adapter1 connect to a single switch, and two ports from Adapter2 connect to a different switch.
Steps
1. Use port sets to ensure there is a single path to the boot LUN.
2. Map the boot LUN in the HBA BIOS and system BootBIOS.
3. After installing the operating system, install Windows Host Utilities and reboot your system.
4. Install a supported multipathing solution, such as Data ONTAP DSM for Windows MPIO or Veritas Storage Foundation for Windows, then reboot your system. See the multipathing product documentation for details.
5. Verify the availability of the paths to the LUN in the management interface for your multipathing solution.

Use port sets to limit FC paths
Use port sets to limit the number of paths from a host to the storage system. The single_image cfmode setting on the storage system creates multiple paths from an FC LUN to a host. By default, all LUNs in an active/active configuration are exported to all FC target ports. When setting up a SAN boot configuration, you must have only a single path from the host to the boot LUN.
Step

1. Create a port set containing a single path to the boot LUN by entering the following command: portset create -f portset_name [port ...]
portset_name is the name of the port set.
port is the target FC port. You can specify a list of ports, but at this point you must specify a single port. If you do not specify any ports, you create an empty port set. You specify a port using the following formats:
filername:slotletter adds only a specific port on a storage system (for example, filerA:4b).
slotletter is the slot and letter of the port (for example, 4b). If you use the slotletter format and the system is in an active/active configuration, the ports from both the local and partner storage systems are added to the port set.
2. Bind the port set to the igroup for the boot LUN.
Next topics
Binding a port set on page 88
Adding a port to a port set on page 88
Removing a port from a port set on page 88
Binding a port set Binding a port set to an igroup allows you to precisely control which target ports can access the LUN.
Step
1. To bind a port set to an igroup, enter the following command: igroup bind igroup_name portset_name
igroup_name is the name of the igroup.
portset_name is the name of the port set.
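Putting the port set steps together, a minimal Data ONTAP console sketch; the port set name, igroup name, and port are hypothetical:

```
portset create -f bootset filerA:4b
igroup bind win_boot_igroup bootset
portset show bootset
```

After binding, the host can reach LUNs mapped to that igroup only through the ports in the port set.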
Adding a port to a port set Use the portset add command to add a port to a port set.
Step
1. To add a port to a port set, enter the following command: portset add portset_name port ...
portset_name is the name of the port set.
port is the target FC port. You can specify more than one port. You specify a port by using the following formats:
slotletter is the slot and letter of the port (for example, 4b). If you use the slotletter format and the system is in a cluster, the port from both the local and partner systems is added to the port set.
filername:slotletter adds only a specific port on a system (for example, filerA:4b).
Removing a port from a port set Use the portset remove command to remove the port from a port set.
Step
1. To remove a port from a port set, enter the following command: portset remove portset_name port ...
portset_name is the name of the port set.
port is the target FC port. You specify a port by using the following formats:
slotletter is the slot and letter of the port (for example, 4b). If you use the slotletter format and the system is in a cluster, the port from both the local and partner systems is removed from the port set.
filername:slotletter removes only a specific port on a system (for example, filerA:4b).
For additional instructions, see the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
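For example, once the host has its multipathing software installed, the remaining target ports can be added back to the hypothetical bootset port set created earlier, then verified or removed again as needed:

```
portset add bootset filerA:4a
portset show bootset
portset remove bootset filerA:4a
```

Adding the second port restores a redundant path; the multipathing software then manages path selection.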
1. Reboot the host.
2. Press Alt-E to access BootBIOS. BootBIOS displays a menu of available adapters.
3. Type the number of the adapter you want to configure and press Enter.
4. Type 1 to configure the primary boot device. BootBIOS displays a menu of bootable devices. The devices are listed in boot order. The primary boot device is the first device listed. If the primary boot device is unavailable, the host boots from the next available device in the list.
5. Type 1 to select the primary boot entry. BootBIOS displays a menu of boot devices.
6. Type the two-digit menu entry for the LUN and press Enter. BootBIOS prompts you to enter the two digits of the starting LUN, in hexadecimal format.
7. Type 00 and press Enter. BootBIOS displays a menu of LUNs. The storage system LUN appears as follows:

01 LUN 0.2

8. Type 01 to select the LUN and press Enter. BootBIOS prompts you to configure the device to boot using WWPN or DID.
9. Type 1 to boot the device using WWPN and press Enter. BootBIOS returns to the menu of boot devices.
10. Press Esc to return to the previous menu.
1. Reboot the host.
2. Press Ctrl-Q to access BootBIOS. The QLogic Fast!UTIL displays the available adapters, listed in boot order. The primary boot device is the first device listed. If the primary boot device is unavailable, the host boots from the next available device in the list.
3. Press Enter to configure the boot device. The Fast!UTIL options appear.
4. Select Configuration Settings and press Enter. The Configuration Settings appear.
5. Select Selectable Boot Settings and press Enter. The Selectable Boot Settings appear.
6. Scroll to Selectable Boot: If this option is disabled, press Enter to enable it. If this option is enabled, go to the next step.
7. Select the (Primary) Boot Port Name, LUN field and press Enter. The available Fibre Channel devices appear.
8. Select the boot LUN from the list of devices and press Enter. The LUN name and WWPN display as the primary boot device in the Selectable Boot Settings.
If your host is running a different type of PC BIOS, see the vendor documentation for the host system or see the vendor's Web site for more information about configuring BIOS setup.
Next topics
Configuring a Dell BIOS Revision A10 on page 91 Configuring an IBM BIOS on page 92 Configuring a Phoenix BIOS 4 Release 6 on page 92
1. Reboot the host.
2. Press F2 to enter BIOS setup.
3. In BIOS setup, select Boot Sequence.
4. Select option 3: Hard drive.
5. From the Hard drive menu, select Disk drive sequence.
6. From the Disk drive sequence menu, select the HBA.
1. Reboot the host.
2. Press F1 to enter BIOS setup.
3. Select Start Options.
4. Select PCI Device Boot Priority.
5. Select the slot in which the HBA is installed.
1. Reboot the host.
2. Press F2 to enter BIOS setup.
3. Select the Boot tab. The Boot tab lists the boot device order.
4. Ensure the HBA is configured as the First Boot Device.
5. Select Hard Drive.
6. Configure the boot LUN as the first boot device.
Configuration process requires limited paths to HBA on page 93
Setting iSCSI HBA parameters in boot BIOS on page 93
Related information
1. Restart the Windows host and press Ctrl-Q to open the QLogic Fast!UTIL menu when prompted.
2. Select the iSCSI HBA, and then select Configuration Settings > Host Adapter Settings.
3. Specify an IP address for each port.
4. Set the Connection KeepAliveTO value to 180.
5. Enable ARP Redirect.
6. Set the iSCSI node name of all iSCSI HBA ports to the same name as shown in the iSCSI initiator GUI on the Windows host.
7. Enter the IP address of the target (storage system) port where the boot LUN is located.
8. Save the HBA settings and reboot the Windows host.
When you boot from a LUN, you must ensure the operating system on the LUN has the required HBA driver for booting from a LUN. You must download these drivers from the QLogic or Emulex Web site.
Steps
1. Download the Emulex or QLogic driver:
Emulex: go to Emulex downloads for Windows and download the Storport Miniport driver.
QLogic: go to QLogic downloads for Windows and download the appropriate STOR Miniport driver.
2. Copy the driver files to a floppy disk (Windows Server 2003) or a CD (Windows Server 2008).
Related information
Emulex downloads for Windows - http://www.emulex.com/support/windows/index.jsp QLogic downloads for Windows http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/default.aspx
1. Insert the Windows CD-ROM and reboot the host. A message displays, indicating that the HBA BIOS is installed along with the boot LUN.
LUN: 00 NETAPP LUN BIOS is installed successfully! Note:
If the message does not display, do not continue installing Windows. Check to ensure that the LUN is created and mapped, and that the target HBA is in the correct mode for directly connected hosts. Also, ensure that the WWPN for the HBA is the same WWPN you entered when creating the igroup. If the LUN appears but the message indicates the BIOS is not installed, reboot and enable the BIOS.
2. When prompted, press any key to boot from the CD-ROM.
3. When prompted, press F6 to install a third-party SCSI array driver.
4. Insert the HBA driver floppy disk you created when the following message is displayed:
Setup could not determine the type of one or more mass storage devices installed in your system, or you have chosen to manually specify an adapter.
5. Press S to continue.
6. From the list of HBAs, select the supported HBA you are using and press Enter. The driver for the selected HBA is configured in the Windows operating system.
7. Follow the prompts to set up the Windows operating system. When prompted, set up the Windows operating system in a partition formatted with NTFS.
8. The host system reboots and then prompts you to complete the server setup process.
9. Complete the server setup process. The following prompts require specific responses:
   Digital Signature Not Found window: Click Yes to continue with the Windows setup. This window asks whether you want to search the Microsoft Web site for a digital signature for the HBA software. You do not need a digital signature to use the HBA software, so click Yes to continue.
   Found New Hardware wizard: When the wizard prompts you to install and configure the driver, click Cancel to continue the setup process.
10. Reboot the host.
11. Install Windows.
12. Install Windows Host Utilities.
Related references
The vxclus command enables the cluster disk group to reside on the same bus as the cluster node's system and boot disk.
Step
1. On the Windows host, enter the following command:

   vxclus UseSystemBus ON

   See the Veritas Storage Foundation Administrator's Guide for details about using the vxclus command.

The Windows registry is updated to enable a dynamic cluster disk group to be created on the same bus as the cluster node's system and boot disk.
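Conceptually, vxclus UseSystemBus ON flips a single registry flag that permits a cluster disk group on the system/boot bus. The following sketch models that behavior with a plain dictionary; the key name and function are assumptions made for illustration, not the actual Veritas implementation.

```python
# Models the registry update performed by "vxclus UseSystemBus ON":
# one flag that, when ON, allows a dynamic cluster disk group on the
# same bus as the cluster node's system and boot disk. The registry is
# represented as a dict; the key name is a hypothetical stand-in.
def vxclus_use_system_bus(registry, state):
    """Set the UseSystemBus flag; state must be 'ON' or 'OFF'."""
    if state not in ("ON", "OFF"):
        raise ValueError("state must be ON or OFF")
    registry["UseSystemBus"] = state
    return registry

registry = {}
vxclus_use_system_bus(registry, "ON")
print(registry["UseSystemBus"])  # ON
```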
Index
A
ALUA disabling for Data ONTAP DSM for Windows MPIO 42 enabling for msdsm 43 application configuring to start after iSCSI 69, 70

B
binding port set to igroup 88 BIOS configuring Dell for SAN booting 91 configuring for SAN booting 91 configuring IBM for SAN booting 92 configuring Phoenix for SAN booting 92 boot LUN creating 89 BootBIOS configuring for Emulex HBA 89 configuring for QLogic HBA 90 displaying FC WWPN 52 enabling for Emulex HBA using HBAnyware 84 enabling for Emulex HBA using LP6DUTIL 85 enabling for QLogic HBA using Fast!UTIL 85 overview 84 brocade_info collecting FC switch information 80

C
CHAP security options for iSCSI 41 cisco_info collecting FC switch information 80 ClusSvcHangTimeout setting 57 cluster configuring with SAN booting 95 mapping LUNs 66 clustering with Hyper-V 28 command line repairing or removing Host Utilities 49 command line installation 47 configuration verifying 34 configurations Fibre Channel 22 Fibre Channel over Ethernet 24 supported 21 controller_info collecting storage system information 79 converged network adapter (CNA) 24

D
data center bridging (DCB) switch 24 Data ONTAP licenses required 35 Data ONTAP DSM for Windows MPIO 26, 42, 50 disabling ALUA 42 effect of removing Host Utilities 50 installing 42 DDI-1 package downloading and installing for Veritas Storage Foundation 46 Dell BIOS configuring for SAN booting 91 device-specific module choices 26 installing 42 diagnostic programs brocade_info 80 cisco_info 80 controller_info 79 how to run 78 mcdata_info 80 qlogic_info 80 troubleshooting with 77 windows_info 79 disk initializing and partitioning 71 Disk Management accessing LUNs 71

E
Emulex BootBIOS displaying FC WWPN 52 Emulex HBA configuring 35 configuring BootBIOS 89 enabling BootBIOS using HBAnyware 84 enabling BootBIOS using LP6DUTIL 85 enabling logging 76 getting correct driver for SAN booting 93 settings for Host Utilities 61 verifying driver version 74 error recovery level for iSCSI 40

F
failover cluster adding Hyper-V VM 44 Fast!UTIL enabling BootBIOS for QLogic HBA 85 setting iSCSI parameters for SAN booting 93 FCoE overview 23 supported configurations 24 Fibre Channel access to storage 22 collecting FC switch troubleshooting information 80 configuring switch 35 recording WWPN 51 storage system port media type 36 supported configurations 22 targets 66 Fibre Channel HBA configuring 35

H
HBA configuring 35 configuring BootBIOS for Emulex 89 configuring BootBIOS for QLogic 90 configuring iSCSI 37 enabling Emulex BootBIOS using HBAnyware 84 enabling logging on Emulex 76 enabling logging on QLogic 77 enabling QLogic BootBIOS using Fast!UTIL 85 getting correct driver for SAN booting 93 installing iSCSI 39 recording FC WWPN 51 setting iSCSI parameters for SAN booting 93 setting summary 56 settings changed by Host Utilities 75 settings for Host Utilities 61 verifying driver version 74 hba_info displaying FC WWPN 52 HBAnyware enabling BootBIOS for Emulex HBA 84 enabling logging on Emulex HBA 76 verifying Emulex HBA settings 75 host collecting troubleshooting information for 79 verifying configuration 34 Host Utilities components 19 components of Host Utilities 19 installation overview 46 installing and configuring (high level) 33 installing and configuring overview 20 installing from a command line 47 installing interactively 47 introduction 19 programs included 20 repairing and removing 49 repairing or removing from command line 49 repairing or removing interactively 49 settings 53 settings summary 54, 56 supported Windows configurations 21 hotfixes installing 47 Hyper-V adding VM to failover cluster 44 clustering 28 configuring SUSE Linux guest OS 44 configuring virtual machines 44 LUN layout 28 overview 27 storage options 27 hyper-v LUN type 63

I
IBM BIOS configuring for SAN booting 92 igroup binding port set 88 disabling ALUA 42 enabling ALUA 43 mapping LUN 65 overview 65 info scripts how to run 78 troubleshooting with 77 information finding more 30 initializing a disk 71 initiator downloading iSCSI 38 installing iSCSI 39 iSCSI configuring 37 iSCSI options 37 initiator group mapping LUN 65 overview 65 initiator node name recording iSCSI 53 installing DSM software 42 Host Utilities 46 Host Utilities from command line 47 Host Utilities interactively 47 iSCSI software initiator 39 MPIO software 41 SnapDrive for Windows software 48 Veritas Storage Foundation for Windows 45 installing and configuring overview 20 installing and configuring Host Utilities (high level) 33 IPSecConfigTimeout setting 60 IPv6 overview 25 IQN recording iSCSI 53 iSCSI access to storage 24 adding target 68 CHAP security options 41 configuring HBA 37 configuring HBA for SAN booting 92 configuring initiator 37 creating single path when installing SAN booting 93 dependent services 69, 70 downloading initiator 38 error recovery level 40 initiator service for HBA 39 installing HBA 39 installing software initiator 39 iscsicli command 68 multiple connections per session 40 node name overview 51 recording initiator node name 53 setting iSCSI HBA parameters for SAN booting 93 using IPv6 25 iSCSI initiator for Virtual Server 2005 virtual machine 29 options 37

L
language versions of Windows supported 30 licenses required for Data ONTAP 35 LinkDownTime setting 60 LinkDownTimeout setting verifying for QLogic HBA 76 LinkDownTimeOut setting 61 LinkTimeOut setting 61, 75 verifying for Emulex HBA 75 Linux configuring Hyper-V guest OS 44 Linux Integration Components 44 linux LUN type 63 LP6DUTIL enabling BootBIOS for Emulex HBA 85

M
ManageDisksOnSystemBuses setting 57 MaxRequestHoldTime setting 61 mcdata_info collecting FC switch information 80 MCS enabling for iSCSI 40 media type storage system FC port 36 Microsoft DSM 26, 43 enabling ALUA 43 Microsoft iSCSI DSM installing 42 Microsoft iSCSI initiator downloading 38 MPIO installing software 41 overview 26 MPIOSupportedDeviceList setting 60 MSCS configuring cluster with SAN booting 95 msdsm 26, 43 enabling ALUA 43 msiexec command installing Host Utilities 47 repairing or removing Host Utilities 49 msiscsidsm 26 multipathing installing software 41 overview 26 multiple connections per session enabling for iSCSI 40

N
NodeTimeOut setting verifying for Emulex HBA 75 NodeTimeOut settings 61 non-English versions of Windows supported 30 NTFS disk format 71

P
partitioning a disk 71 passthru disk 27 path verifying correct FC path used 66 path configuration fixing using Data ONTAP DSM 67 fixing using Microsoft DSM 68 fixing using Veritas DMP DSM 68 PathVerifyEnabled setting 58 PDORemovePeriod setting 58 Phoenix BIOS configuring for SAN booting 92 port adding to port set 88 removing from port set 88 port sets adding port to 88 binding 88 for SAN booting 86 limiting FC paths for SAN booting 87 removing port from 88 PortDownRetryCount setting 61, 76 verifying for QLogic HBA 76 programs included in Host Utilities 20 proxy path correcting configuration using Data ONTAP DSM 67 correcting configuration using Microsoft DSM 68 correcting configuration using Veritas DMP DSM 68 overview 66 verifying not used 66 publications finding more information 30

Q
QLogic BootBIOS displaying FC WWPN 52 QLogic HBA configuring 35 configuring BootBIOS 90 configuring iSCSI for SAN booting 92 enabling logging 77 enabling QLogic BootBIOS using Fast!UTIL 85 getting correct driver for SAN booting 93 installing iSCSI 39 setting iSCSI parameters for SAN booting 93 settings for Host Utilities 61 verifying driver version 74 qlogic_info collecting FC switch information 80 quiet installation 47

R
raw disk 27 references finding more information 30 registry settings for Host Utilities 53 values summary 54 removing Host Utilities from command line 49 Host Utilities interactively 49 Repair option of Host Utilities installation program 49 repairing Host Utilities from command line 49 Host Utilities interactively 49 requirements verifying 34 RetryCount setting 59 RetryInterval setting 59

S
SAN booting BootBIOS overview 84 configuring BIOS 91 configuring BootBIOS for Emulex HBA 89 configuring BootBIOS for QLogic HBA 90 configuring cluster with Veritas Storage Foundation 95 configuring iSCSI HBA 92 create single path during installation 86, 93 creating boot LUN 89 enabling Emulex BootBIOS using HBAnyware 84 enabling Emulex BootBIOS using LP6DUTIL 85 enabling QLogic BootBIOS using Fast!UTIL 85 getting correct HBA driver for boot LUN 93 installing Windows on boot LUN 94 overview 29, 83 setting iSCSI HBA parameters 93 WWPN of HBA required 86 SANsurfer enabling logging on QLogic HBA 77 verifying QLogic HBA settings 76 scripts included in Host Utilities 20 security CHAP for iSCSI 41 settings ClusSvcHangTimeout 57 DsmMaximumRetryTimeDuringStateTransition 59 DsmMaximumStateTransitionTime 59 DsmSupportedDeviceList 60 HBA 61 IPSecConfigTimeout 60 LinkDownTime 60 LinkDownTimeOut 61 LinkTimeOut 61 ManageDisksOnSystemBuses 57 MaxRequestHoldTime 61 MPIOSupportedDeviceList 60 NodeTimeOut 61 PathVerifyEnabled 58 PDORemovePeriod 58 PortDownRetryCount 61 RetryCount 59 RetryInterval 59 TimeOutValue 57 silent installation 47 SnapDrive for Windows creating LUNs 64 installation order 48 software boot iSCSI initiator requirement 38 storage access Fibre Channel (FC) 22 Fibre Channel over Ethernet (FCoE) 23 iSCSI 24 storage configuration verifying 34 storage system protocol licenses 35 supported configurations Fibre Channel 22 Fibre Channel over Ethernet 24 SUSE Linux configuring Hyper-V guest OS 44

T
target adding iSCSI 68 Fibre Channel 66 TimeOutValue setting 57 troubleshooting collecting FC switch information 80 collecting host information with windows_info 79 collecting storage system information with controller_info 79 collecting Veritas Storage Foundation information 80 diagnostic program overview 77 enabling logging on Emulex HBA 76 enabling logging on QLogic HBA 77 HBA driver version 74 HBA settings 75 how to run diagnostic programs 78 items to check 73 verifying Emulex HBA settings 75 verifying QLogic HBA settings 76

U
uninstalling Host Utilities 49

V
VCS configuring cluster with SAN booting 95 Veritas DMP DSM 26, 42 installing 42 Veritas Enterprise Administrator connecting 70 Veritas Storage Foundation accessing LUNs 70 collecting troubleshooting information 80 configuring and using 46 configuring applications dependent on iSCSI 70 configuring cluster with SAN booting 95 dynamic disk support 25

W
Windows dynamic disk support 25 installing on boot LUN 94 registry settings for Host Utilities 53 support for non-English language versions 30 supported configurations 21 Windows cluster mapping LUNs 66 Windows Disk Management accessing LUNs 71 Windows failover cluster adding Hyper-V VM 44 Windows host collecting troubleshooting information for 79 Windows hotfixes installing 47 windows LUN type 63 Windows registry values summary 54 windows_2008 LUN type 63 windows_gpt LUN type 63 windows_info collecting host information 79 windows_lhs LUN type 63 WWPN displaying using Emulex BootBIOS 52 displaying using hba_info 52 displaying using QLogic BootBIOS 52 overview 51 recording from host 51 required for boot LUN igroup 86