
Fibre Channel and iSCSI Configuration Guide

for the Data ONTAP 8.0 Release Family

NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 U.S.A.
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Documentation comments: doccomments@netapp.com
Information Web: http://www.netapp.com
Part number 215-05038_A0
December 2009

Table of Contents | 3

Contents
Copyright information ... 7
Trademark information ... 9
About this guide ... 11
    Audience ... 11
    Terminology ... 11
    Keyboard and formatting conventions ... 13
    Special messages ... 14
    How to send your comments ... 14
iSCSI topologies ... 15
    Single-network HA pair in an iSCSI SAN ... 15
    Multinetwork HA pair in an iSCSI SAN ... 17
    Direct-attached single-controller configurations in an iSCSI SAN ... 18
    VLANs ... 19
        Static VLANs ... 19
        Dynamic VLANs ... 19
Fibre Channel topologies ... 21
    FC onboard and expansion port combinations ... 22
    Fibre Channel supported hop count ... 23
    Fibre Channel switch configuration best practices ... 23
    Host multipathing software requirements ... 23
    60xx supported topologies ... 24
        60xx target port configuration recommendations ... 24
        60xx: Single-fabric single-controller configuration ... 25
        60xx: Single-fabric HA pair ... 26
        60xx: Multifabric HA pair ... 28
        60xx: Direct-attached single-controller configuration ... 29
        60xx: Direct-attached HA pair ... 30
    31xx supported topologies ... 31
        31xx target port configuration recommendations ... 32
        31xx: Single-fabric single-controller configuration ... 32
        31xx: Single-fabric HA pair ... 33
        31xx: Multifabric HA pair ... 34
        31xx: Direct-attached single-controller configurations ... 36
        31xx: Direct-attached HA pair ... 37
    30xx supported topologies ... 38
        30xx target port configuration recommendations ... 38
        3040 and 3070 supported topologies ... 38
    FAS2040 supported topologies ... 44
        FAS2040: Single-fabric single-controller configuration ... 44
        FAS2040: Single-fabric HA pair ... 45
        FAS2040: Multifabric single-controller configuration ... 46
        FAS2040: Multifabric HA pair ... 47
        FAS2040: Direct-attached single-controller configurations ... 48
        FAS2040: Direct-attached HA pair ... 49
Fibre Channel over Ethernet overview ... 51
    FCoE initiator and target combinations ... 51
    Fibre Channel over Ethernet supported topologies ... 52
        FCoE: FCoE initiator to FC target configuration ... 53
Fibre Channel and FCoE zoning ... 55
    Port zoning ... 56
    World Wide Name based zoning ... 56
    Individual zones ... 56
    Single-fabric zoning ... 57
    Dual-fabric HA pair zoning ... 58
Shared SAN configurations ... 61
ALUA configurations ... 63
    (Native OS, FC) AIX Host Utilities configurations that support ALUA ... 63
    ESX configurations that support ALUA ... 65
    HP-UX configurations that support ALUA ... 65
    Linux configurations that support ALUA ... 66
    (MPxIO/FC) Solaris Host Utilities configurations that support ALUA ... 66
    Windows configurations that support ALUA ... 67
Configuration limits ... 69
    Configuration limit parameters and definitions ... 69
    Host operating system configuration limits for iSCSI and FC ... 71
    60xx and 31xx single-controller limits ... 72
    60xx and 31xx HA pair limits ... 73
    30xx single-controller limits ... 75
    30xx HA pair limits ... 76
    FAS2040 single-controller limits ... 77
    FAS2040 HA pair configuration limits ... 78
Index ... 79


Copyright information
Copyright © 1994-2009 NetApp, Inc. All rights reserved. Printed in the U.S.A.

No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).


Trademark information
NetApp, the Network Appliance logo, the bolt design, NetApp-the Network Appliance Company, Cryptainer, Cryptoshred, DataFabric, DataFort, Data ONTAP, Decru, FAServer, FilerView, FlexClone, FlexVol, Manage ONTAP, MultiStore, NearStore, NetCache, NOW (NetApp on the Web), SANscreen, SecureShare, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapRestore, SnapValidator, SnapVault, Spinnaker Networks, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, StoreVault, SyncMirror, Topio, VFM, VFM Virtual File Manager, and WAFL are registered trademarks of NetApp, Inc. in the U.S.A. and/or other countries.

gFiler, Network Appliance, SnapCopy, Snapshot, and The evolution of storage are trademarks of NetApp, Inc. in the U.S.A. and/or other countries and registered trademarks in some other countries.

The NetApp arch logo; the StoreVault logo; ApplianceWatch; BareMetal; Camera-to-Viewer; ComplianceClock; ComplianceJournal; ContentDirector; ContentFabric; Data Motion; EdgeFiler; FlexShare; FPolicy; Go Further, Faster; HyperSAN; InfoFabric; Lifetime Key Management; LockVault; NOW; ONTAPI; OpenKey; RAID-DP; ReplicatorX; RoboCache; RoboFiler; SecureAdmin; SecureView; Serving Data by Design; Shadow Tape; SharedStorage; Simplicore; Simulate ONTAP; Smart SAN; SnapCache; SnapDirector; SnapFilter; SnapMigrator; SnapSuite; SohoFiler; SpinMirror; SpinRestore; SpinShot; SpinStor; vFiler; VPolicy; and Web Filer are trademarks of NetApp, Inc. in the U.S.A. and other countries. NetApp Availability Assurance and NetApp ProTech Expert are service marks of NetApp, Inc. in the U.S.A.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml.

Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks. NetApp, Inc. NetCache is certified RealSystem compatible.


About this guide


You can use your product more effectively when you understand this document's intended audience and the conventions that this document uses to present information. This document describes the configuration of fabric-attached, network-attached, and direct-attached storage systems in Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI environments. This guide explains the various topologies that are supported and describes the relevant SAN configuration limits for each controller model. The configurations apply to controllers with their own disks and to V-Series configurations.
Next topics

Audience on page 11
Terminology on page 11
Keyboard and formatting conventions on page 13
Special messages on page 14
How to send your comments on page 14

Audience
This document is written with certain assumptions about your technical knowledge and experience. This document is for system administrators who are familiar with host operating systems connecting to storage systems using FC, FCoE, and iSCSI protocols. This guide assumes that you are familiar with basic FC, FCoE, and iSCSI solutions and terminology. This guide does not cover basic system or network administration topics, such as IP addressing, routing, and network topology; it emphasizes the characteristics of the storage system.

Terminology
To understand the concepts in this document, you might need to know how certain terms are used.

Storage terms

array LUN
    Refers to storage that third-party storage arrays provide to storage systems running Data ONTAP software. One array LUN is the equivalent of one disk on a native disk shelf.

LUN (Logical Unit Number)
    Refers to a logical unit of storage identified by a number.

native disk
    Refers to a disk that is sold as local storage for storage systems that run Data ONTAP software.

native disk shelf
    Refers to a disk shelf that is sold as local storage for storage systems that run Data ONTAP software.

storage controller
    Refers to the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem. Storage controllers are also sometimes called controllers, storage appliances, appliances, storage engines, heads, CPU modules, or controller modules.

storage system
    Refers to the hardware device running Data ONTAP that receives data from and sends data to native disk shelves, third-party storage, or both. Storage systems that run Data ONTAP are sometimes referred to as filers, appliances, storage appliances, V-Series systems, or systems.

third-party storage
    Refers to back-end storage arrays, such as IBM, Hitachi Data Systems, and HP, that provide storage for storage systems running Data ONTAP.
Cluster and high-availability terms

cluster
    In Data ONTAP 8.0 Cluster-Mode, refers to a group of connected nodes (storage systems) that share a global namespace and that you can manage as a single virtual server or multiple virtual servers, providing performance, reliability, and scalability benefits. In the Data ONTAP 7.1 release family and earlier releases, refers to an entirely different functionality: a pair of storage systems (sometimes called nodes) configured to serve data for each other if one of the two systems stops functioning.

HA (high availability)
    In Data ONTAP 8.0, refers to the recovery capability provided by a pair of nodes (storage systems), called an HA pair, that are configured to serve data for each other if one of the two nodes stops functioning.

HA pair
    In Data ONTAP 8.0, refers to a pair of nodes (storage systems) configured to serve data for each other if one of the two nodes stops functioning. In the Data ONTAP 7.3 and 7.2 release families, this functionality is referred to as an active/active configuration.


Keyboard and formatting conventions


You can use your product more effectively when you understand how this document uses keyboard and formatting conventions to present information.

Keyboard conventions

The NOW site
    Refers to NetApp On the Web at http://now.netapp.com/.

Enter, enter
    Used to refer to the key that generates a carriage return; the key is named Return on some keyboards. Used to mean pressing one or more keys on the keyboard and then pressing the Enter key, or clicking in a field in a graphical interface and then typing information into the field.

hyphen (-)
    Used to separate individual keys. For example, Ctrl-D means holding down the Ctrl key while pressing the D key.

type
    Used to mean pressing one or more keys on the keyboard.

Formatting conventions

Italic font
    Words or characters that require special attention. Placeholders for information that you must supply. For example, if the guide says to enter the arp -d hostname command, you enter the characters "arp -d" followed by the actual name of the host. Book titles in cross-references.

Monospaced font
    Command names, option names, keywords, and daemon names. Information displayed on the system console or other computer monitors. Contents of files. File, path, and directory names.

Bold monospaced font
    Words or characters you type. What you type is always shown in lowercase letters, unless your program is case-sensitive and uppercase letters are necessary for it to work properly.


Special messages
This document might contain the following types of messages to alert you to conditions that you need to be aware of.
Note: A note contains important information that helps you install or operate the system efficiently.

Attention: An attention notice contains instructions that you must follow to avoid a system crash, loss of data, or damage to the equipment.

How to send your comments


You can help us to improve the quality of our documentation by sending us your feedback. Your feedback is important in helping us to provide the most accurate and high-quality information. If you have suggestions for improving this document, send us your comments by e-mail to doccomments@netapp.com. To help us direct your comments to the correct division, include in the subject line the name of your product and the applicable operating system. For example: FAS6070 (Data ONTAP 7.3), Host Utilities (Solaris), or Operations Manager 3.8 (Windows).


iSCSI topologies
Supported iSCSI configurations include direct-attached and network-attached topologies. Both single-controller configurations and HA pairs are supported. In an iSCSI environment, all methods of connecting Ethernet switches to a network that are approved by the switch vendor are supported. Ethernet switch counts are not a limitation in Ethernet iSCSI topologies. For specific recommendations and best practices, see the Ethernet switch vendor's documentation. For Windows iSCSI multipathing options, see Technical Report 3441.
Next topics

Single-network HA pair in an iSCSI SAN on page 15
Multinetwork HA pair in an iSCSI SAN on page 17
Direct-attached single-controller configurations in an iSCSI SAN on page 18
VLANs on page 19
Related information

NetApp Interoperability Matrix - now.netapp.com/NOW/products/interoperability/
Technical Report 3441: iSCSI multipathing possibilities on Windows with Data ONTAP - media.netapp.com/documents/tr-3441.pdf

Single-network HA pair in an iSCSI SAN


You can connect hosts using iSCSI to HA pair controllers using a single IP network. The network can consist of one or more switches, and the controllers can be attached to multiple switches. Each controller can have multiple iSCSI connections to the network. The number of ports is based on the storage controller model and the number of supported Ethernet ports. The following figure shows two Ethernet connections to the network per storage controller. Depending on the controller model, more connections are possible.


Figure 1: iSCSI single-network HA pair

Attribute                         Value
Fully redundant                   No, due to the single network
Type of network                   Single network
Different host operating systems  Yes, with multiple-host configurations
Multipathing required             Yes
Type of configuration             HA pair


Multinetwork HA pair in an iSCSI SAN


You can connect hosts using iSCSI to HA pair controllers using multiple IP networks. To be fully redundant, a minimum of two connections to separate networks per controller is necessary to protect against NIC, network, or cabling failure.

Figure 2: iSCSI multinetwork HA pair

Attribute                         Value
Fully redundant                   Yes
Type of network                   Multinetwork
Different host operating systems  Yes, with multiple-host configurations
Multipathing required             Yes
Type of configuration             HA pair
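The redundancy rule above (at least two connections to separate networks per controller) can be expressed as a small check. This is an illustrative sketch only, not a NetApp tool; the controller-to-networks data model is assumed for the example:

```python
def is_fully_redundant(controller_connections):
    """Check the multinetwork redundancy rule: each controller needs at
    least two connections, on at least two separate IP networks, to
    survive a NIC, network, or cabling failure.

    controller_connections maps a controller name to the list of network
    names its Ethernet ports attach to (a hypothetical data model).
    """
    for networks in controller_connections.values():
        # Fewer than two connections, or all connections on one network,
        # leaves a single point of failure.
        if len(networks) < 2 or len(set(networks)) < 2:
            return False
    return True

# A multinetwork HA pair (Figure 2): two connections to separate networks.
ha_pair = {"controller-a": ["net1", "net2"], "controller-b": ["net1", "net2"]}
# A single-network HA pair (Figure 1): redundant NICs, but one network.
single_net = {"controller-a": ["net1", "net1"], "controller-b": ["net1", "net1"]}
```

Run against the two figures, the check confirms that only the multinetwork layout is fully redundant.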


Direct-attached single-controller configurations in an iSCSI SAN


You can connect hosts using iSCSI directly to controllers. The number of hosts that can be directly connected to a controller or pair of controllers depends on the number of available Ethernet ports.
Note: Direct-attached configurations are not supported in HA pairs.

Figure 3: iSCSI direct-attached single-controller configurations

Attribute                         Value
Fully redundant                   No, due to the single controller
Type of network                   None, direct-attached
Different host operating systems  Yes, with multiple-host configurations
Multipathing required             Yes
Type of configuration             Single controller


VLANs
A VLAN consists of a group of switch ports, optionally across multiple switch chassis, grouped together into a broadcast domain. Static and dynamic VLANs enable you to increase security, isolate problems, and limit available paths within your IP network infrastructure.

Reasons for implementing VLANs

Implementing VLANs in larger IP network infrastructures has the following benefits:

- VLANs provide increased security because they limit access between different nodes of an Ethernet network or an IP SAN. VLANs enable you to leverage existing infrastructure while still providing enhanced security.
- VLANs improve Ethernet network and IP SAN reliability by isolating problems. VLANs can also help reduce problem resolution time by limiting the problem space.
- VLANs enable you to reduce the number of available paths to a particular iSCSI target port and to reduce the maximum number of paths to a manageable number. You need to verify that only one path to a LUN is visible if a host does not have a multipathing solution available.

Next topics

Static VLANs on page 19 Dynamic VLANs on page 19

Static VLANs
Static VLANs are port-based. The switch and switch port are used to define the VLAN and its members. Static VLANs offer improved security because it is not possible to breach VLANs using media access control (MAC) spoofing. However, if someone has physical access to the switch, replacing a cable and reconfiguring the network address can allow access. In some environments, static VLANs are also easier to create and manage because only the switch and port identifier need to be specified, instead of the 48-bit MAC address. In addition, you can label switch port ranges with the VLAN identifier.

Dynamic VLANs
Dynamic VLANs are MAC address based. You can define a VLAN by specifying the MAC address of the members you want to include. Dynamic VLANs provide flexibility and do not require mapping to the physical ports where the device is physically connected to the switch. You can move a cable from one port to another without reconfiguring the VLAN.
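The difference between the two VLAN types can be sketched as the key each one uses for membership: static VLANs key on (switch, port), and dynamic VLANs key on MAC address. The tables and values below are hypothetical examples for illustration, not switch commands or a real switch data model:

```python
# Static VLANs: membership is defined by switch and switch port.
static_vlan = {
    ("switch1", "port3"): 100,
    ("switch1", "port4"): 100,
}

# Dynamic VLANs: membership is defined by the member's 48-bit MAC address.
dynamic_vlan = {
    "00:a0:98:11:22:33": 200,
}

def static_member(switch, port):
    """VLAN ID for a (switch, port) pair, or None if unassigned."""
    return static_vlan.get((switch, port))

def dynamic_member(mac):
    """VLAN ID for a MAC address, or None if unassigned."""
    return dynamic_vlan.get(mac)
```

Moving a cable illustrates the trade-off: with a static VLAN the new port falls out of the VLAN until it is reconfigured, while with a dynamic VLAN the device's MAC address carries its membership to whatever port it plugs into.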


Fibre Channel topologies


Supported FC configurations include single-fabric, multifabric, and direct-attached topologies. Both single-controller configurations and HA pairs are supported. For multiple-host configurations, hosts can use different operating systems, such as Windows or UNIX.

HA pairs with multiple, physically independent storage fabrics (a minimum of two) are recommended for SAN solutions. This provides redundancy at the fabric and storage system layers, which is particularly important because these layers typically support many hosts.

The use of heterogeneous FC switch fabrics is not supported, except in the case of embedded blade switches. For specific exceptions, see the Interoperability Matrix on the NOW site.

Cascade, mesh, and core-edge fabrics are all industry-accepted methods of connecting FC switches to a fabric, and all are supported. A fabric can consist of one or multiple switches, and the storage arrays can be connected to multiple switches.
Note: The following sections show detailed SAN configuration diagrams for each type of storage system. For simplicity, the diagrams show only a single fabric or, in the case of the dual-fabric configurations, two fabrics. However, it is possible to have multiple fabrics connected to a single storage system. In the case of dual-fabric configurations, even multiples of fabrics are supported. This is true for both HA pairs and single-controller configurations.

Next topics

FC onboard and expansion port combinations on page 22
Fibre Channel supported hop count on page 23
Fibre Channel switch configuration best practices on page 23
Host multipathing software requirements on page 23
60xx supported topologies on page 24
31xx supported topologies on page 31
30xx supported topologies on page 38
FAS2040 supported topologies on page 44
Related information

NetApp Interoperability Matrix - now.netapp.com/NOW/products/interoperability/


FC onboard and expansion port combinations


You can use storage controller onboard FC ports as both initiators and targets. You can also add storage controller FC ports on expansion adapters and use them as initiators and targets. The following table lists FC port combinations and specifies which combinations are supported. All expansion adapters should be the same speed (2 Gb, 4 Gb, or 8 Gb); you can configure 4-Gb or 8-Gb ports to run at a lower speed if needed for the connected device.
Onboard ports       Expansion ports     Supported?
Initiator + Target  None                Yes
Initiator + Target  Target only         Yes with Data ONTAP 7.3.2 and later
Initiator + Target  Initiator only      Yes
Initiator + Target  Initiator + Target  Yes with Data ONTAP 7.3.2 and later
Initiator only      Target only         Yes
Initiator only      Initiator + Target  Yes
Initiator only      Initiator only      Yes, but no FC SAN support
Initiator only      None                Yes, but no FC SAN support
Target only         Initiator only      Yes
Target only         Initiator + Target  Yes with Data ONTAP 7.3.2 and later
Target only         Target only         Yes with Data ONTAP 7.3.2 and later, but no FC disk shelf or V-Series configurations or tape support
Target only         None                Yes, but no FC disk shelf or V-Series configurations or tape support
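The table above can be encoded as a lookup, which is handy for sanity-checking a planned port layout before cabling. This is an illustrative sketch only; the lowercase string keys are invented for the example and are not Data ONTAP values:

```python
# Each table row as (onboard mode, expansion mode) -> support note.
SUPPORT = {
    ("initiator+target", "none"): "Yes",
    ("initiator+target", "target"): "Yes with Data ONTAP 7.3.2 and later",
    ("initiator+target", "initiator"): "Yes",
    ("initiator+target", "initiator+target"): "Yes with Data ONTAP 7.3.2 and later",
    ("initiator", "target"): "Yes",
    ("initiator", "initiator+target"): "Yes",
    ("initiator", "initiator"): "Yes, but no FC SAN support",
    ("initiator", "none"): "Yes, but no FC SAN support",
    ("target", "initiator"): "Yes",
    ("target", "initiator+target"): "Yes with Data ONTAP 7.3.2 and later",
    ("target", "target"): ("Yes with Data ONTAP 7.3.2 and later, but no FC "
                           "disk shelf or V-Series configurations or tape support"),
    ("target", "none"): ("Yes, but no FC disk shelf or V-Series "
                         "configurations or tape support"),
}

def support_for(onboard, expansion):
    """Return the support note for an onboard/expansion port combination."""
    return SUPPORT[(onboard.lower(), expansion.lower())]
```

For example, a layout with only initiators and no targets anywhere reports back that it cannot serve an FC SAN.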

Related concepts
Configuration limits on page 69

Related references
FCoE initiator and target combinations on page 51


Fibre Channel supported hop count


The maximum supported FC hop count, or the number of inter-switch links (ISLs) crossed between a particular host and storage system, depends on the hop count that the switch supplier and storage system support for FC configurations. The following table shows the supported hop count for each switch supplier.

Switch supplier  Supported hop count
Brocade          6
Cisco            5
McData           3
QLogic           4
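The hop-count limits can be checked programmatically when planning a fabric. A minimal sketch under the table's values; the function name is invented for illustration:

```python
# Maximum supported hop count (ISLs crossed host-to-storage) per supplier.
MAX_HOP_COUNT = {"Brocade": 6, "Cisco": 5, "McData": 3, "QLogic": 4}

def path_within_hop_limit(supplier, isl_count):
    """True if a host-to-storage path crossing isl_count inter-switch
    links stays within the switch supplier's supported hop count."""
    return isl_count <= MAX_HOP_COUNT[supplier]
```

A six-ISL path is fine on a Brocade fabric but exceeds the limit on a McData fabric, for example.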

Fibre Channel switch configuration best practices


A fixed link speed setting is highly recommended, especially for large fabrics, because it provides the best performance for fabric rebuild times. In large fabrics, this can create significant time savings. Although autonegotiation provides the greatest flexibility, it does not always perform as expected. Also, it adds time to the overall fabric-build sequence because the FC port has to autonegotiate.
Note: Where supported, it is recommended to set the switch port topology to F (point-to-point).

Host multipathing software requirements


Multipathing software is required on a host computer any time it can access a LUN through more than one path. The multipathing software presents a single disk to the operating system for all paths to a LUN. Without multipathing software, the operating system could see each path as a separate disk, which can lead to data corruption. Multipathing software is also known as MPIO (multipath I/O) software. Supported multipathing software for an operating system is listed in the Interoperability Matrix. For single-fabric single-controller configurations, multipathing software is not required if you have a single path from the host to the controller. You can use zoning to limit paths.

For an HA pair, host multipathing software is required unless you use zoning to limit the host to a single path.
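The rule above reduces to a simple predicate on the number of paths a host has to a LUN. A sketch for illustration, assuming you can count the paths (for example, from zoning and cabling); this is not a NetApp utility:

```python
def multipathing_required(paths_to_lun):
    """Multipathing (MPIO) software is needed whenever a host can reach
    a LUN through more than one path; without it the operating system
    could see each path as a separate disk, risking data corruption.
    With a single path (possibly enforced by zoning), it is not needed.
    """
    return paths_to_lun > 1
```

So a single-fabric single-controller host with one zoned path needs no multipathing software, while any HA pair without zoning restrictions does.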

60xx supported topologies


60xx controllers are available as single-controller configurations and HA pairs.

The 6030 and 6070 systems have eight onboard 2-Gb FC ports per controller, and each one can be configured as either a target or an initiator FC port. 2-Gb target connections are supported with the onboard 2-Gb ports. 4-Gb target connections are supported with 4-Gb target expansion adapters. If you use 4-Gb target expansion adapters, then you can only configure the onboard ports as initiators. You cannot use both 2-Gb and 4-Gb targets on the same controller or on two different controllers in an HA pair.

The 6040 and 6080 systems have eight onboard 4-Gb FC ports per controller, and each one can be configured as either a target or an initiator FC port. 4-Gb target connections are supported with the onboard 4-Gb ports configured as targets. Additional target connections can be supported using 4-Gb target expansion adapters with Data ONTAP 7.3 and later.
Note: The 60xx systems support the use of 8-Gb target expansion adapters beginning with Data ONTAP version 7.3.1. While 8-Gb and 4-Gb target expansion adapters function similarly, 8-Gb targets cannot be combined with 2-Gb or 4-Gb targets (whether using expansion adapters or onboard ports).

Next topics

60xx target port configuration recommendations on page 24
60xx: Single-fabric single-controller configuration on page 25
60xx: Single-fabric HA pair on page 26
60xx: Multifabric HA pair on page 28
60xx: Direct-attached single-controller configuration on page 29
60xx: Direct-attached HA pair on page 30

60xx target port configuration recommendations


For best performance and highest availability, use the recommended FC target port configuration. The port pairs on a 60xx controller that share an ASIC are 0a+0b, 0c+0d, 0e+0f, and 0g+0h. The following table shows the preferred port usage order for onboard FC target ports. For target expansion adapters, the preferred slot order is given in the System Configuration Guide for the version of Data ONTAP software being used by the controllers.

Fibre Channel topologies | 25

Number of target ports    Ports
1                         0h
2                         0h, 0d
3                         0h, 0d, 0f
4                         0h, 0d, 0f, 0b
5                         0h, 0d, 0f, 0b, 0g
6                         0h, 0d, 0f, 0b, 0g, 0c
7                         0h, 0d, 0f, 0b, 0g, 0c, 0e
8                         0h, 0d, 0f, 0b, 0g, 0c, 0e, 0a
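The ordering above alternates across ASIC pairs so that ports sharing an ASIC are used last. A small sketch (a hypothetical helper, not part of any NetApp tool) that captures the table and checks that property:

```python
# Preferred onboard FC target port order for 60xx controllers, from the
# table above. ASIC-sharing pairs are 0a+0b, 0c+0d, 0e+0f, and 0g+0h.
PREFERRED_ORDER_60XX = ["0h", "0d", "0f", "0b", "0g", "0c", "0e", "0a"]
ASIC_PAIRS = [("0a", "0b"), ("0c", "0d"), ("0e", "0f"), ("0g", "0h")]

def target_ports(count):
    """Return the recommended onboard target ports for `count` targets."""
    if not 1 <= count <= len(PREFERRED_ORDER_60XX):
        raise ValueError("a 60xx controller has 8 onboard FC ports")
    return PREFERRED_ORDER_60XX[:count]

# With four or fewer target ports, no two chosen ports share an ASIC.
chosen = set(target_ports(4))
assert all(not ({a, b} <= chosen) for a, b in ASIC_PAIRS)
```

The same pattern holds for the 31xx and 30xx tables later in this chapter, with the shorter preferred order 0d, 0b, 0c, 0a.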

60xx: Single-fabric single-controller configuration


You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

26 | Fibre Channel and iSCSI Configuration Guide for the Data ONTAP 8.0 Release Family

Figure 4: 60xx single-fabric single-controller configuration

Attribute                           Value
Fully redundant                     No, due to the single fabric and single controller
Type of fabric                      Single fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               Single-controller configuration

Related references
60xx target port configuration recommendations on page 24

60xx: Single-fabric HA pair


You can connect hosts to both controllers in an HA pair using a single FC switch.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.


Figure 5: 60xx single-fabric HA pair

Attribute                           Value
Fully redundant                     No, due to the single fabric
Type of fabric                      Single fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC ports using target expansion adapters per controller
Type of configuration               HA pair

Related references
60xx target port configuration recommendations on page 24


60xx: Multifabric HA pair


You can connect hosts to both controllers in an HA pair using two or more FC switch fabrics for redundancy.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

Figure 6: 60xx multifabric HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      Multifabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC ports using target expansion adapters per controller
Type of configuration               HA pair

Related references
60xx target port configuration recommendations on page 24

60xx: Direct-attached single-controller configuration


You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports.
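For example, a 7-mode console sequence for forcing a target port into loop mode might look like the following sketch (the port name 0b is illustrative; confirm the required mediatype with your host OS provider before changing this setting):

```
# Take the target port down, set loop mode, and bring it back up.
fcp config 0b down
fcp config 0b mediatype loop
fcp config 0b up
```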

Figure 7: 60xx direct-attached single-controller configuration

Attribute                           Value
Fully redundant                     No, due to the single controller
Type of fabric                      None
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               Single-controller configuration

Related references
60xx target port configuration recommendations on page 24

60xx: Direct-attached HA pair


You can connect hosts directly to FC target ports on both controllers in an HA pair. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.


Figure 8: 60xx direct-attached HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      None
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               HA pair

Related references
60xx target port configuration recommendations on page 24

31xx supported topologies


31xx systems are available in single-controller and HA pair configurations. The 31xx systems have four onboard 4-Gb FC ports per controller, and each port can be configured as either an FC target port or an initiator port. For example, you can configure two ports as SAN targets and two ports as initiators for disk shelves.

Each 31xx controller supports 4-Gb FC target expansion adapters.

Note: 31xx controllers support the use of 8-Gb target expansion adapters beginning with Data ONTAP 7.3.1. However, the 8-Gb expansion adapters cannot be combined with 4-Gb targets (whether using expansion adapters or onboard).

Next topics
31xx target port configuration recommendations on page 32
31xx: Single-fabric single-controller configuration on page 32
31xx: Single-fabric HA pair on page 33
31xx: Multifabric HA pair on page 34
31xx: Direct-attached single-controller configurations on page 36
31xx: Direct-attached HA pair on page 37

31xx target port configuration recommendations


For best performance and highest availability, use the recommended FC target port configuration. The port pairs on a 31xx controller that share an ASIC are 0a+0b and 0c+0d. The following table shows the preferred port usage order for onboard FC target ports. For target expansion adapters, the preferred slot order is given in the System Configuration Guide for the version of Data ONTAP software being used by the controllers.

Number of target ports    Ports
1                         0d
2                         0d, 0b
3                         0d, 0b, 0c
4                         0d, 0b, 0c, 0a

31xx: Single-fabric single-controller configuration


You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.


Figure 9: 31xx single-fabric single-controller configuration

Attribute                           Value
Fully redundant                     No, due to the single fabric and single controller
Type of fabric                      Single fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               Single-controller configuration

Related references
31xx target port configuration recommendations on page 32

31xx: Single-fabric HA pair


You can connect hosts to both controllers in an HA pair using a single FC switch.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.


Figure 10: 31xx single-fabric HA pair

Attribute                           Value
Fully redundant                     No, due to the single fabric
Type of fabric                      Single fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               HA pair

Related references
31xx target port configuration recommendations on page 32

31xx: Multifabric HA pair


You can connect hosts to both controllers in an HA pair using two or more FC switch fabrics for redundancy.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

Figure 11: 31xx multifabric HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      Multifabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               HA pair

Related references
31xx target port configuration recommendations on page 32


31xx: Direct-attached single-controller configurations


You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

Figure 12: 31xx direct-attached single-controller configurations

Attribute                           Value
Fully redundant                     No, due to the single controller
Type of fabric                      None
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               Single-controller configuration

Related references
31xx target port configuration recommendations on page 32

31xx: Direct-attached HA pair


You can connect hosts directly to FC target ports on both controllers in an HA pair. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

Figure 13: 31xx direct-attached HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      None
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               HA pair
Related references

31xx target port configuration recommendations on page 32


30xx supported topologies


30xx systems are available in single-controller and HA pair configurations.
Note: 3040 and 3070 controllers support the use of 8-Gb target expansion adapters beginning with Data ONTAP 7.3.1. While 8-Gb and 4-Gb target expansion adapters function similarly, the 8-Gb target expansion adapters cannot be combined with 4-Gb targets (expansion adapters or onboard).

Next topics

30xx target port configuration recommendations on page 38 3040 and 3070 supported topologies on page 38

30xx target port configuration recommendations


For best performance and highest availability, use the recommended FC target port configuration. The port pairs on a 30xx controller that share an ASIC are 0a+0b and 0c+0d. The following table shows the preferred port usage order for onboard FC target ports. For target expansion adapters, the preferred slot order is given in the System Configuration Guide for the version of Data ONTAP software being used by the controllers.

Number of target ports    Ports
1                         0d
2                         0d, 0b
3                         0d, 0b, 0c
4                         0d, 0b, 0c, 0a

3040 and 3070 supported topologies


3040 and 3070 systems are available in single-controller and HA pair configurations. The 3040 and 3070 controllers have four onboard 4-Gb FC ports per controller, and each port can be configured as either an FC target port or an initiator port. For example, you can configure two ports as SAN targets and two ports as initiators for disk shelves.
Next topics

3040 and 3070: Single-fabric single-controller configuration on page 39
3040 and 3070: Single-fabric HA pair on page 40
3040 and 3070: Multifabric HA pair on page 41


3040 and 3070: Direct-attached single-controller configurations on page 42
3040 and 3070: Direct-attached HA pair on page 43
3040 and 3070: Single-fabric single-controller configuration

You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

Figure 14: 3040 and 3070 single-fabric single-controller configuration

Attribute                           Value
Fully redundant                     No, due to the single fabric and single controller
Type of fabric                      Single fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               Single-controller configuration

Related references
30xx target port configuration recommendations on page 38


3040 and 3070: Single-fabric HA pair

You can connect hosts to both controllers in an HA pair using a single FC switch.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

Figure 15: 3040 and 3070 single-fabric HA pair

Attribute                           Value
Fully redundant                     No, due to the single fabric
Type of fabric                      Single fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC ports using target expansion adapters per controller
Type of configuration               HA pair

Related references
30xx target port configuration recommendations on page 38


3040 and 3070: Multifabric HA pair

You can connect hosts to both controllers in an HA pair using two or more FC switch fabrics for redundancy.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

Figure 16: 3040 and 3070 multifabric HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      Multifabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC ports using target expansion adapters per controller
Type of configuration               HA pair

Related references
30xx target port configuration recommendations on page 38


3040 and 3070: Direct-attached single-controller configurations

You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports.

Figure 17: 3040 and 3070 direct-attached single-controller configurations

Attribute                           Value
Fully redundant                     No, due to the single controller
Type of fabric                      None
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               Single-controller configuration

Related references
30xx target port configuration recommendations on page 38


3040 and 3070: Direct-attached HA pair

You can connect hosts directly to FC target ports on both controllers in an HA pair. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports.
Note: The FC target port numbers in the following figure are examples. The actual port numbers might vary depending on whether you are using onboard ports or FC target expansion adapters. If you are using FC target expansion adapters, the target port numbers also depend on the expansion slots into which your target expansion adapters are installed.

Figure 18: 3040 and 3070 direct-attached HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      None
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC target expansion adapters
Type of configuration               HA pair

Related references
30xx target port configuration recommendations on page 38

FAS2040 supported topologies


FAS2040 systems are available in single-controller and HA pair configurations. The FAS2040 has two onboard 4-Gb FC ports per controller. You can configure these ports as either target ports for FC SANs or initiator ports for connecting to disk shelves.
Next topics

FAS2040: Single-fabric single-controller configuration on page 44
FAS2040: Single-fabric HA pair on page 45
FAS2040: Multifabric single-controller configuration on page 46
FAS2040: Multifabric HA pair on page 47
FAS2040: Direct-attached single-controller configurations on page 48
FAS2040: Direct-attached HA pair on page 49

FAS2040: Single-fabric single-controller configuration


You can connect hosts to single controllers using a single FC switch. If you use multiple paths, multipathing software is required on the host.
Note: The FC target port numbers in the following illustration are examples.


Figure 19: FAS2040 single-fabric single-controller configuration

Attribute                           Value
Fully redundant                     No, due to the single fabric and single controller
Type of fabric                      Single fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller
Type of configuration               Single-controller configuration

FAS2040: Single-fabric HA pair


You can connect hosts to both controllers in an HA pair using a single FC switch.
Note: The FC target port numbers in the following illustration are examples.


Figure 20: FAS2040 single-fabric HA pair

Attribute                           Value
Fully redundant                     No, due to the single fabric
Type of fabric                      Single fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller
Type of configuration               HA pair

FAS2040: Multifabric single-controller configuration


You can connect hosts to one controller using two or more FC switch fabrics for redundancy.
Note: The FC target port numbers in the following illustration are examples.


Figure 21: FAS2040 multifabric single-controller configuration

Attribute                           Value
Fully redundant                     No, due to the single controller
Type of fabric                      Multifabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller
Type of configuration               Single-controller configuration

FAS2040: Multifabric HA pair


You can connect hosts to both controllers in an HA pair using two or more FC switch fabrics for redundancy.
Note: The FC target port numbers in the following illustration are examples.


Figure 22: FAS2040 multifabric HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      Multifabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller
Type of configuration               HA pair

FAS2040: Direct-attached single-controller configurations


You can connect hosts directly to FC target ports on a single controller. Each host can connect to one port, or to two ports for redundancy. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports.
Note: The FC target port numbers in the following illustration are examples.


Figure 23: FAS2040 direct-attached single-controller configurations

Attribute                           Value
Fully redundant                     No, due to the single controller
Type of fabric                      None
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller
Type of configuration               Single-controller configuration

FAS2040: Direct-attached HA pair


You can connect hosts directly to FC target ports on both controllers in an HA pair. The number of hosts is limited by the number of available target ports. Direct-attached configurations typically need the FC ports set to loop mode. Be sure to follow the recommendation of your host operating system provider for FC port settings. You can use the Data ONTAP fcp config mediatype command to set the target ports.
Note: The FC target port numbers in the following illustration are examples.


Figure 24: FAS2040 direct-attached HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      None
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller
Type of configuration               HA pair

Fibre Channel over Ethernet overview | 51

Fibre Channel over Ethernet overview


Fibre Channel over Ethernet (FCoE) is a new model for connecting hosts to storage systems. FCoE is very similar to traditional Fibre Channel (FC), as it maintains existing FC management and controls, but the hardware transport is a lossless 10-gigabit Ethernet network.

Setting up an FCoE connection requires one or more supported converged network adapters (CNAs) in the host, connected to a supported data center bridging (DCB) Ethernet switch. The CNA is a consolidation point and effectively serves as both an HBA and an Ethernet adapter.

As an HBA, the presentation to the host is FC targets, and all FC traffic is sent out as FC frames mapped into Ethernet packets (FC over Ethernet). The 10-gigabit Ethernet adapter is also used for host IP traffic, such as iSCSI, NFS, and HTTP. Both FCoE and IP communications through the CNA run over the same 10-gigabit Ethernet port, which connects to the DCB switch.
Note: Using the FCoE target adapter in the storage controller for non-FCoE IP traffic such as NFS or iSCSI is NOT currently supported.

In general, you configure and use FCoE connections just like traditional FC connections.

Note: For detailed information about how to set up and configure your host to run FCoE, see your appropriate host documentation.

Next topics

FCoE initiator and target combinations on page 51 Fibre Channel over Ethernet supported topologies on page 52

FCoE initiator and target combinations


Certain combinations of FCoE and traditional FC initiators and targets are supported.

FCoE initiators
You can use FCoE initiators in host computers with both FCoE and traditional FC targets in storage controllers. The FCoE initiator must connect to an FCoE DCB (data center bridging) switch; direct connection to a target is not supported. The following table lists the supported combinations.

Initiator    Target    Supported?
FC           FC        Yes
FC           FCoE      No
FCoE         FC        Yes
FCoE         FCoE      Yes with Data ONTAP 7.3.2 and later; no with Data ONTAP 8.0

FCoE targets
You can mix FCoE target ports with 4-Gb or 8-Gb FC ports on the storage controller regardless of whether the FC ports are add-in target adapters or onboard ports. You can have both FCoE and FC target adapters in the same storage controller.

Note: Using the FCoE target adapter for non-FCoE IP traffic such as NFS or iSCSI is NOT currently supported.

Note: The rules for combining onboard and expansion FC ports still apply.

Related references

FC onboard and expansion port combinations on page 22

Fibre Channel over Ethernet supported topologies


Supported FCoE native configurations include single-fabric and multifabric topologies. Both single-controller and HA pairs are supported. Supported storage systems with native FCoE target expansion adapters are the FAS60xx series, the FAS31xx series, and the FAS3040 and FAS3070. In HA pairs, only single_image cfmode is supported. Native FCoE configurations using an FCoE target adapter are supported only in the Data ONTAP 7.3 release family.

The FCoE initiator with FC target configuration is also supported on FAS60xx, FAS31xx, FAS30xx, FAS20xx, FAS270, and FAS900 series storage systems in Data ONTAP 7.2.5.1 and later using an FCoE/DCB switch.
Note: The following configuration diagrams are examples only. Most supported FC and iSCSI configurations on supported storage systems can be substituted for the example FC or iSCSI configurations in the following diagrams. However, direct-attached configurations are not supported in FCoE.

Note: While iSCSI configurations allow any number of Ethernet switches, there must be no additional Ethernet switches in FCoE configurations. The CNA must connect directly to the FCoE switch.


FCoE: FCoE initiator to FC target configuration


You can connect hosts to both controllers in an HA pair using FCoE initiators through data center bridging (DCB) Ethernet switches to FC target ports. The FCoE initiator always connects to a supported DCB switch. The DCB switch can connect directly to an FC target, or can connect through FC switches to the FC target.
Note: The FC target expansion adapter port numbers (2a and 2b) in the following figure are examples. The actual port numbers might vary, depending on the expansion slot in which the FC target expansion adapter is installed.

[Figure: Hosts 1 through N connect through CNA ports to two DCB/FCoE switches. Each FCoE switch also carries IP network traffic and connects through FC switch fabrics 1 and 2 to FC target ports 0b and 0d on Controller 1 and Controller 2.]
Figure 25: FCoE initiator to FC dual-fabric HA pair

Attribute                           Value
Fully redundant                     Yes
Type of fabric                      Dual fabric
Different host operating systems    Yes, with multiple-host configurations
FC ports or adapters                One to the maximum number of supported onboard FC ports per controller; one to the maximum number of supported 4-Gb or 8-Gb FC ports per controller using FC target expansion adapters
Multipathing required               Yes
Type of configuration               HA pair

Fibre Channel and FCoE zoning | 55

Fibre Channel and FCoE zoning


An FC or FCoE zone is a subset of the fabric that consists of a group of FC or FCoE ports or nodes that can communicate with each other. Nodes must be within the same zone to communicate.

Reasons for zoning
Zoning reduces or eliminates cross talk between initiator HBAs. This occurs even in small environments and is one of the best arguments for implementing zoning. The logical fabric subsets created by zoning eliminate cross-talk problems.
Zoning reduces the number of available paths to a particular FC or FCoE port and reduces the number of visible paths between a host and a particular LUN. For example, some host OS multipathing solutions have a limit on the number of paths they can manage. Zoning can reduce the number of paths that an OS multipathing driver sees. If a host does not have a multipathing solution installed, you need to verify that only one path to a LUN is visible.
Zoning increases security because there is limited access between different nodes of a SAN.
Zoning improves SAN reliability by isolating problems that occur and helps to reduce problem resolution time by limiting the problem space.

Recommendations for zoning

- Implement zoning any time four or more hosts are connected to a SAN.
- Although World Wide Node Name zoning is possible with some switch vendors, World Wide Port Name zoning is recommended.
- Limit the zone size while still maintaining manageability. Multiple zones can overlap to limit size. Ideally, a zone is defined for each host or host cluster.
- Use single-initiator zoning to eliminate crosstalk between initiator HBAs.
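The single-initiator recommendation lends itself to a mechanical check: every zone should contain exactly one host HBA plus the storage target ports it needs. A minimal Python sketch of such a check (all WWPNs here are hypothetical placeholders, not values from this guide):

```python
# A zoning plan as data: each zone lists its initiator (host) WWPNs and the
# storage target WWPNs it can reach. WWPN values are illustrative only.
zones = {
    "host1_zone": {
        "initiators": {"10:00:00:00:c9:2b:cc:01"},
        "targets": {"50:0a:09:81:00:00:00:01", "50:0a:09:82:00:00:00:01"},
    },
    "host2_zone": {
        "initiators": {"10:00:00:00:c9:2b:cc:02"},
        "targets": {"50:0a:09:81:00:00:00:01", "50:0a:09:82:00:00:00:01"},
    },
}

def follows_single_initiator(zone):
    """A zone meets the recommendation if it contains exactly one initiator."""
    return len(zone["initiators"]) == 1

# Every zone holds exactly one host, so initiator HBAs cannot cross-talk.
assert all(follows_single_initiator(z) for z in zones.values())
```

Both zones share the same target ports, which is how overlapping zones still give every host access to the storage.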

Next topics

Port zoning on page 56
World Wide Name based zoning on page 56
Individual zones on page 56
Single-fabric zoning on page 57
Dual-fabric HA pair zoning on page 58


Port zoning
Port zoning, also referred to as hard zoning, specifies the unique fabric N_Port IDs of the ports to be included within the zone. The switch and switch port are used to define the zone members.

Port zoning provides the following advantages:

- Improved security, because it is not possible to breach the zoning by using WWN spoofing. However, if someone has physical access to the switch, replacing a cable can allow access.
- In some environments, port zoning is easier to create and manage because you work only with the switch or switch domain and port number.

World Wide Name based zoning


World Wide Name (WWN) based zoning specifies the WWNs of the members to be included within the zone. Depending on the switch vendor, either World Wide Node Names or World Wide Port Names can be used. You should use World Wide Port Name zoning when possible.

WWN zoning provides flexibility because access is not determined by where the device is physically connected to the fabric. You can move a cable from one port to another without reconfiguring zones.

Individual zones
In the standard zoning configuration for a simple environment, each host is placed in a separate zone. The zones overlap because the storage ports are included in each zone, allowing each host to access the storage. Each host can see all of the FC target ports but cannot see or interact with the other hosts' ports.

Using port zoning, you can create this zoning configuration in advance, even if not all of the hosts are present yet. You can define each zone to contain a single switch port for the host and switch ports one through four for the storage system. For example, Zone 1 would consist of switch ports 1, 2, 3, and 4 (storage ports) and port 5 (Host1 port). Zone 2 would consist of switch ports 1, 2, 3, and 4 (storage ports) and port 6 (Host2 port), and so forth.

The following figure shows only a single fabric, but multiple fabrics are supported. Each subsequent fabric has the same zone structure.


Figure 26: Hosts in individual zones
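The port-zoning example above can be written out as data — a sketch using the switch port numbers from the text (Host3's port 7 is extrapolated from the "and so forth"):

```python
# Storage system ports occupy switch ports 1-4; each host gets its own port.
STORAGE_PORTS = {1, 2, 3, 4}
HOST_PORTS = {"Host1": 5, "Host2": 6, "Host3": 7}  # port 7 is an assumed example

# Zone for each host = the four storage ports plus exactly one host port.
zones = {host: STORAGE_PORTS | {port} for host, port in HOST_PORTS.items()}

for host, members in zones.items():
    # Every host sees all storage ports...
    assert STORAGE_PORTS <= members
    # ...but no other host's port is visible inside its zone.
    other_host_ports = set(HOST_PORTS.values()) - {HOST_PORTS[host]}
    assert not (members & other_host_ports)
```

This mirrors the overlap described in the text: the storage ports appear in every zone, while each host port appears in exactly one.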

Single-fabric zoning
Zoning and multipathing software, used together, protect hosts against a possible controller failure in a single-fabric environment. Without multipathing software, hosts in a single-fabric environment are not protected from a controller failure.

In the following figure, Host1 and Host2 do not have multipathing software and are zoned so that there is only one path to each LUN (Zone 1). Therefore, Zone 1 contains only one of the two storage ports. Host3 and Host4 have multipathing software that protects against a possible controller failure; they are zoned so that a path to the LUNs is available through each of the controllers. Even though each of these hosts has only one HBA, both storage ports are included in Zone 2, so the LUNs are visible through two different paths: one going from the host FC port to storage port 0, and the other going from the host FC port to storage port 1. Because this figure contains only a single fabric, the configuration is not fully redundant.


Figure 27: Single-fabric zoning

Dual-fabric HA pair zoning


Zoning can separate hosts in a topology to eliminate HBA crosstalk. Zoning can also prevent a host from accessing LUNs from a storage system in a different zone. The following figure shows a configuration in which Host1 accesses LUNs from storage system 1 and Host2 accesses LUNs from storage system 2. Each storage system is an HA pair, and both are fully redundant. Multiple FAS270c storage systems are shown in this figure, but they are not necessary for redundancy.


Figure 28: Dual-fabric zoning


Shared SAN configurations


Shared SAN configurations are defined as hosts that are attached to both NetApp and non-NetApp storage arrays. Accessing NetApp arrays and other vendors' arrays from a single host is supported as long as the following requirements are met:

- Native host OS multipathing or VERITAS DMP is used for multipathing (see the exception for EMC PowerPath coexistence below).
- NetApp configuration requirements (such as timeout settings), as specified in the appropriate NetApp Host Utilities documents, are met.
- Single_image cfmode is used.

Native host OS multipathing in combination with EMC PowerPath is supported for the following configurations. For configurations that do not meet these requirements, a PVR is required to determine supportability.

- Windows: EMC CLARiiON CX3-20, CX3-40, and CX3-80 with PowerPath 4.5 or later, connected to a NetApp storage system using Data ONTAP DSM for Windows MPIO
- Solaris: EMC CLARiiON CX3-20, CX3-40, and CX3-80 with PowerPath 5 or later, connected to a NetApp storage system using Sun Traffic Manager (MPxIO)
- AIX: EMC CLARiiON CX3-20, CX3-40, and CX3-80 with PowerPath 5 or later, connected to a NetApp storage system using AIX MPIO


ALUA configurations
ALUA (asymmetric logical unit access) is supported for certain combinations of host operating systems and Data ONTAP software. ALUA is an industry standard protocol for identifying optimized paths between a storage system and a host computer. The administrator of the host computer does not need to manually select the paths to use. ALUA is enabled or disabled on the igroup mapped to a NetApp LUN. The default ALUA setting in Data ONTAP is disabled. For information about using ALUA on a host, see the Host Utilities Installation and Setup Guide for your host operating system. For information about enabling ALUA on the storage system, see the Block Access Management Guide for iSCSI and FC for your version of Data ONTAP software.
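Because ALUA is a per-igroup setting that defaults to disabled, enabling it is typically a single command on the storage system. A hedged sketch in the Data ONTAP 7-Mode CLI style (the igroup name is a hypothetical example; confirm the exact syntax in the Block Access Management Guide for your release):

```
igroup set my_host_igroup alua yes    # enable ALUA on the igroup (name is an example)
igroup show -v my_host_igroup         # verify that the ALUA attribute now reports "yes"
```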
Next topics

(Native OS, FC) AIX Host Utilities configurations that support ALUA on page 63
ESX configurations that support ALUA on page 65
HP-UX configurations that support ALUA on page 65
Linux configurations that support ALUA on page 66
(MPxIO/FC) Solaris Host Utilities configurations that support ALUA on page 66
Windows configurations that support ALUA on page 67

(Native OS, FC) AIX Host Utilities configurations that support ALUA
The Native OS environment of the AIX Host Utilities supports ALUA on hosts using MPIO and the FC protocol. The following AIX Native OS configurations support ALUA when you are using the FC protocol:


Host Utilities version: Host Utilities 4.0, 4.1, and 5.0

Host requirements:
- AIX 5.2 TL8
- AIX 5.3 TL9 SP4 with APAR IZ53157
- AIX 5.3 TL10 SP1 with APAR IZ53158
- AIX 6.1 TL2 SP4 with APAR IZ53159
- AIX 6.1 TL3 SP1 with APAR IZ53160

Note: If you want to use ALUA, it is strongly recommended that you use the latest levels of 5.3 TL9 or 6.1 TL2 listed in the support matrix. ALUA is supported on all AIX Service Streams that have the corresponding APAR (authorized program analysis report) installed. At the time this document was prepared, the Host Utilities supported AIX Service Streams with the APARs listed above as well as with APARs IZ53718, IZ53730, IZ53856, IZ54130, IZ57806, and IZ61549. If an APAR listed here has not been publicly released, contact IBM and request a copy.

Data ONTAP version: 7.3.1 and later

Note: The Host Utilities do not support ALUA with AIX environments using iSCSI or Veritas.

If you have a Native OS environment and do not want to use ALUA, you can use the dotpaths utility to specify path priorities. The Host Utilities provide dotpaths as part of the SAN Toolkit.


ESX configurations that support ALUA


ESX hosts support ALUA with certain combinations of ESX, Data ONTAP, and guest operating system configurations. The following table lists which configurations support ALUA (asymmetric logical unit access). Use the Interoperability Matrix to determine a supported combination of ESX, Data ONTAP, and Host Utilities software. Then enable or disable ALUA based on the information in the table.
ESX version      Minimum Data ONTAP              Windows guest in Microsoft cluster  Supported?
4.0 or later     7.3.1 with single_image cfmode  No                                  Yes
4.0 or later     7.3.1 with single_image cfmode  Yes                                 No
3.5 and earlier  any                             any                                 No

Using ALUA is strongly recommended, but not required, for configurations that support ALUA. If you do not use ALUA, be sure to set an optimized path using the tools supplied with the ESX Host Utilities or Virtual Storage Console.

HP-UX configurations that support ALUA


The HP-UX Host Utilities support asymmetric logical unit access (ALUA). ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on FC and iSCSI SANs. You should enable ALUA when your HP-UX configuration supports it. ALUA is enabled on the igroup mapped to NetApp LUNs used by the HP-UX host. Currently, the default setting in Data ONTAP software for ALUA is disabled. You can use the NetApp Interoperability Matrix to determine a supported combination of HP-UX, Data ONTAP, Host Utilities, and Native MPIO software. You can then enable or disable ALUA based on the information in the following table:
HP-UX version  Native MPIO software  Minimum Data ONTAP  Supported
HP-UX 11iv3    ALUA                  7.2.5 or later      Yes
HP-UX 11iv2    ALUA                  None                No

Note: ALUA is mandatory and is supported with HP-UX 11iv3 September 2007 and later.

Related information

NetApp Interoperability Matrix - http://now.netapp.com/matrix/mtx/login.do


Linux configurations that support ALUA


The Linux Host Utilities support asymmetric logical unit access (ALUA) on hosts running Red Hat Enterprise Linux or SUSE Linux Enterprise Server. ALUA is also known as Target Port Group Support (TPGS). DM-Multipath works with ALUA to determine which paths are primary paths and which paths are secondary or partner paths to be used for failover. ALUA is automatically enabled for the Linux operating system. The following configurations support ALUA:
Host Utilities version: Host Utilities 4.0 and later
Host requirements: Red Hat Enterprise Linux 5 Update 1 and later; SUSE Linux Enterprise Server 10 SP1 and later
Data ONTAP versions: 7.2.4 and later

Note: The Host Utilities do not support ALUA in iSCSI or Veritas environments.

(MPxIO/FC) Solaris Host Utilities configurations that support ALUA


The MPxIO environment of the Solaris Host Utilities supports ALUA on hosts running either the SPARC processor or the x86 processor and using the FC protocol. If you are using MPxIO with FC and active/active storage controllers with any of the following configurations, you must have ALUA enabled:
- Host Utilities 4.1 through 5.1: Solaris 10 update 3 and later, or Solaris 10 update 2 only with QLogic drivers and SPARC processors; Data ONTAP 7.2.1.1 and later
- Host Utilities 4.0: Solaris 10 update 2 only; Data ONTAP 7.2.1 and later
- iSCSI Support Kit 3.0: Data ONTAP 7.2.1 and later

Note: The Host Utilities do not support ALUA with iSCSI except with the 3.0 Support Kit. The Host Utilities do not support ALUA in Veritas environments.


Windows configurations that support ALUA


Windows hosts support ALUA with certain combinations of Windows, Data ONTAP, Host Utilities, and MPIO software. The following table lists configurations that support ALUA (asymmetric logical unit access). Use the Interoperability Matrix to determine a supported combination of Windows, Data ONTAP, Host Utilities, and MPIO software. Then enable or disable ALUA based on the information in the table.
Windows version  MPIO software          Minimum Data ONTAP  Supported?
Server 2008      Microsoft DSM (msdsm)  7.3.0               Yes
Server 2008      Data ONTAP DSM         none                No
Server 2008      Veritas DSM            none                No
Server 2003      all                    none                No

ALUA is required when using the Microsoft DSM (msdsm).


Configuration limits
Configuration limits are available for FC, FCoE, and iSCSI topologies. In some cases, limits might be theoretically higher, but the published limits are tested and supported.
Next topics

Configuration limit parameters and definitions on page 69
Host operating system configuration limits for iSCSI and FC on page 71
60xx and 31xx single-controller limits on page 72
60xx and 31xx HA pair limits on page 73
30xx single-controller limits on page 75
30xx HA pair limits on page 76
FAS2040 single-controller limits on page 77
FAS2040 HA pair configuration limits on page 78

Configuration limit parameters and definitions


There are a number of parameters and definitions related to FC, FCoE, and iSCSI configuration limits.

Visible target ports per host (iSCSI) — The maximum number of target iSCSI Ethernet ports that a host can see or access on iSCSI-attached controllers.

Visible target ports per host (FC) — The maximum number of FC adapters that a host can see or access on the attached Fibre Channel controllers.

LUNs per host — The maximum number of LUNs that you can map from the controllers to a single host.

Paths per LUN — The maximum number of accessible paths that a host has to a LUN. Note: Using the maximum number of paths is not recommended.

Maximum LUN size — The maximum size of an individual LUN on the respective operating system.

LUNs per controller — The maximum number of LUNs that you can configure per controller, including cloned LUNs and LUNs contained within cloned volumes. LUNs contained in Snapshot copies do not count toward this limit, and there is no limit on the number of LUNs that can be contained within Snapshot copies.

LUNs per volume — The maximum number of LUNs that you can configure within a single volume. LUNs contained in Snapshot copies do not count toward this limit, and there is no limit on the number of LUNs that can be contained within Snapshot copies.

FC port fan-in — The maximum number of hosts that can connect to a single FC port on a controller. Connecting the maximum number of hosts is generally not recommended, and you might need to tune the FC queue depths on the host to achieve this maximum value.

FC port fan-out — The maximum number of LUNs mapped to a host through an FC target port on a controller.

Hosts per controller (iSCSI) — The recommended maximum number of iSCSI hosts that you can connect to a single controller. The general formula is: maximum hosts = 8 * (system memory / 512 MB).

Hosts per controller (FC) — The maximum number of hosts that you can connect to a controller. Connecting the maximum number of hosts is generally not recommended, and you might need to tune the FC queue depths on the host to achieve this maximum value.

igroups per controller — The maximum number of initiator groups that you can configure per controller.

Initiators per igroup — The maximum number of FC initiators (HBA WWNs) or iSCSI initiators (host iqn/eui node names) that you can include in a single igroup.

LUN mappings per controller — The maximum number of LUN mappings per controller. For example, a LUN mapped to two igroups counts as two mappings.

LUN path name length — The maximum number of characters in a full LUN name. For example, /vol/abc/def has 12 characters.

LUN size — The maximum capacity of an individual LUN on a controller.

FC queue depth available per port — The usable queue depth capacity of each FC target port. The number of LUNs is limited by available FC queue depth.

FC target ports per controller — The maximum number of supported FC target ports per controller. FC initiator ports used for back-end disk connections (for example, connections to disk shelves) are not included in this number.
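Two of the definitions above are simple arithmetic and can be sanity-checked directly — the iSCSI hosts-per-controller formula and the LUN path name length rule. A sketch (the 4 GB memory figure is an illustrative value, not a specification from this guide):

```python
def max_iscsi_hosts(system_memory_mb: int) -> int:
    """Recommended iSCSI host limit: 8 * (system memory / 512 MB)."""
    return 8 * (system_memory_mb // 512)

# Example: a hypothetical controller with 4 GB (4096 MB) of memory.
assert max_iscsi_hosts(4096) == 64

# LUN path name length counts every character of the full path.
assert len("/vol/abc/def") == 12
```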

Related information

Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf


Host operating system configuration limits for iSCSI and FC


Each host operating system has host-based configuration limits for FC, FCoE, and iSCSI. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted.
Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values.

Visible target ports per host
  Windows: 28; Linux: 16; HP-UX: 16; Solaris: 16; AIX: 16; ESX: 16

LUNs per host
  Windows: 64 (Windows 2000); 128 (Windows 2003); 255 (Windows 2008)
  Linux: FC, 8 paths per LUN: 64; FC, 4 paths per LUN: 128; iSCSI, 8 paths per LUN: 32 (RHEL4, OEL4, and SLES9 series) or 64 (all other series); iSCSI, 4 paths per LUN: 64 (RHEL4, OEL4, and SLES9 series) or 128 (all other series)
  HP-UX: 512 (11iv2); 1024 (11iv3)
  Solaris: 512
  AIX: 128
  ESX: 128 (ESX 2.x); 256 (ESX 3.x)

Paths per LUN
  Windows: 8 (maximum of 1,024 paths per host)
  Linux: 4 (FC native multipathing without ALUA); 8 (all others, FC and iSCSI)
  HP-UX: 8 (11iv2); 32 (11iv3)
  Solaris: 16
  AIX: 16
  ESX: 4 (ESX 2.x); 8 (ESX 3.x)

Maximum LUN size
  Windows: 2 TB; 16 TB (Windows 2003 and Windows 2008)
  Linux: 2 TB
  HP-UX: 2 TB
  Solaris: 1023 GB; 16 TB with Solaris 9 and later, VxVM, EFI, and appropriate patches
  AIX: 1 TB; 16 TB with AIX 5.2 ML7 or later and AIX 5.3 ML3 or later
  ESX: 2 TB
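The Windows column illustrates how the per-LUN and per-host path limits interact: at 8 paths per LUN, the 1,024-path host ceiling caps the host at 128 LUNs. A quick sketch of that arithmetic:

```python
MAX_PATHS_PER_HOST = 1024  # Windows per-host path limit from the table above

def max_luns_within_path_budget(paths_per_lun: int) -> int:
    """LUN count at which a host exhausts its path budget."""
    return MAX_PATHS_PER_HOST // paths_per_lun

assert max_luns_within_path_budget(8) == 128
assert max_luns_within_path_budget(4) == 256
```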

Related references

Configuration limit parameters and definitions on page 69

60xx and 31xx single-controller limits


Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted.
Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values.

The maximum number of LUNs and the number of HBAs that can connect to an FC port are limited by the available queue depth on the FC target ports.
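The queue-depth constraint can be estimated by dividing a port's available queue depth by the queue depth each host consumes, as the related queue-depth technical report describes. A sketch (the per-host queue depth of 128 is an illustrative value, not a recommendation from this guide):

```python
def hosts_per_fc_port(port_queue_depth: int, host_queue_depth: int) -> int:
    """Rough estimate of how many hosts one FC target port can service."""
    return port_queue_depth // host_queue_depth

# Example: a port with 1,966 available queue entries (per the table that
# follows) and hosts each configured for a queue depth of 128.
assert hosts_per_fc_port(1966, 128) == 15
```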
Parameter                                    31xx       6030 or 6040  6070 or 6080
LUNs per controller                          2,048      2,048         2,048
FC queue depth available per port            1,966      1,966         1,966
LUNs per volume                              2,048      2,048         2,048
Port fan-in                                  64         64            64
Connected hosts per storage controller (FC)  256        256           256
Connected hosts per controller (iSCSI)       256        256           512
igroups per controller                       256        256           256
Initiators per igroup                        256        256           256
LUN mappings per controller                  4,096      8,192         8,192
LUN path name length                         255        255           255
LUN size                                     16 TB [1]  16 TB [1]     16 TB [1]
FC target ports per controller:
  Data ONTAP 7.3.0                           8          12            12
  Data ONTAP 7.3.1 and later                 16         16            16

[1] Might require deduplication and thin provisioning.

Related references

Configuration limit parameters and definitions on page 69


Related information

Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf

60xx and 31xx HA pair limits


Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted. Limits for HA pair systems are NOT double the limits for single-controller systems. This is because one controller in the HA pair must be able to handle the entire system load during failover.
Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values.

The maximum number of LUNs and the number of HBAs that can connect to an FC port are limited by the available queue depth on the FC target ports.


Parameter                            31xx       6030 or 6040  6070 or 6080
LUNs per HA pair                     2,048 [1]  2,048 [2]     2,048 [2]
FC queue depth available per port    1,720      1,720         1,720
LUNs per volume                      2,048      2,048         2,048
FC port fan-in                       64         64            64
Connected hosts per HA pair (FC)     256 [3]    256 [4]       256 [4]
Connected hosts per HA pair (iSCSI)  512        512           1,024
igroups per HA pair                  256 [3]    256 [4]       256 [4]
Initiators per igroup                256        256           256
LUN mappings per HA pair             4,096 [5]  8,192         8,192
LUN path name length                 255        255           255
LUN size                             16 TB [6]  16 TB [6]     16 TB [6]
FC target ports per HA pair:
  Data ONTAP 7.3.0                   16         24            24
  Data ONTAP 7.3.1 and later         32         32            32

[1] 4,096 available on the 3160A and 3170A with PVR approval.
[2] 4,096 available with PVR approval.
[3] 512 available on the 3160A and 3170A with PVR approval.
[4] 512 available with PVR approval.
[5] 8,192 available on the 3160A and 3170A with PVR approval.
[6] Might require deduplication and thin provisioning.

Related references

Configuration limit parameters and definitions on page 69

Related information

Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf

30xx single-controller limits


Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted.
Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values.

The maximum number of LUNs and the number of HBAs that can connect to an FC port are limited by the available queue depth on the FC target ports.
Parameter                                    3040 and 3070
LUNs per controller                          2,048
FC queue depth available per port            1,720
LUNs per volume                              2,048
Port fan-in                                  64
Connected hosts per storage controller (FC)  256
Connected hosts per controller (iSCSI)       256
igroups per controller                       256
Initiators per igroup                        256
LUN mappings per controller                  4,096
LUN path name length                         255
LUN size                                     16 TB (might require deduplication and thin provisioning)
FC target ports per controller               8 (Data ONTAP 7.3.0); 12 (Data ONTAP 7.3.1 and later)

Related references

Configuration limit parameters and definitions on page 69

Related information

Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf

30xx HA pair limits


Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted. Limits for HA pair systems are NOT double the limits for single-controller systems. This is because one controller in the HA pair must be able to handle the entire system load during failover.
Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values.

The maximum number of LUNs and the number of HBAs that can connect to an FC port are limited by the available queue depth on the FC target ports.
Parameter                            3040A and 3070A
LUNs per HA pair                     2,048
FC queue depth available per port    1,720
LUNs per volume                      2,048
FC port fan-in                       64
Connected hosts per HA pair (FC)     256
Connected hosts per HA pair (iSCSI)  512
igroups per HA pair                  256
Initiators per igroup                256
LUN mappings per HA pair             4,096
LUN path name length                 255
LUN size                             16 TB (might require deduplication and thin provisioning)
FC target ports per HA pair          16 (Data ONTAP 7.3.0); 24 (Data ONTAP 7.3.1)

Related references

Configuration limit parameters and definitions on page 69

Related information

Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf

FAS2040 single-controller limits


Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted.
Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values.

The maximum number of LUNs and the number of HBAs that can connect to an FC port are limited by the available queue depth on the FC target ports.
Parameter                               FAS2040
LUNs per controller                     1,024
FC queue depth available per port       1,720
LUNs per volume                         1,024
FC port fan-in                          64
Connected hosts per controller (FC)     128
Connected hosts per controller (iSCSI)  128
igroups per controller                  256
Initiators per igroup                   256
LUN mappings per controller             4,096
LUN path name length                    255
LUN size                                16 TB (might require deduplication and thin provisioning)
FC target ports per controller          2

Related references

Configuration limit parameters and definitions on page 69


Related information

Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf


FAS2040 HA pair configuration limits


Each system model has configuration limits for reliable operation. Do not exceed the tested limits. The following table lists the maximum supported value for each parameter based on testing. All values are for FC, FCoE, and iSCSI unless noted. Limits for HA pair systems are NOT double the limits for single-controller systems. This is because one controller in the HA pair must be able to handle the entire system load during failover.
Note: The values listed are the maximum that can be supported. For best performance, do not configure your system at the maximum values.

The maximum number of LUNs and the number of HBAs that can connect to an FC port are limited by the available queue depth on the FC target ports.
Parameter                            FAS2040A
LUNs per HA pair                     1,024
FC queue depth available per port    1,720
LUNs per volume                      1,024
FC port fan-in                       64
Connected hosts per HA pair (FC)     128
Connected hosts per HA pair (iSCSI)  128
igroups per HA pair                  256
Initiators per igroup                256
LUN mappings per HA pair             4,096
LUN path name length                 255
LUN size                             16 TB (might require deduplication and thin provisioning)
FC target ports per HA pair          4

Related references

Configuration limit parameters and definitions on page 69


Related information

Technical Report: NetApp Storage Controllers and Fibre Channel Queue Depth - now.netapp.com/ NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf

Index

20xx
  HA pair configuration limits 78
  single-controller limits 77
3040 and 3070
  direct-attached HA pair FC topologies 43
  direct-attached single-controller FC topologies 42
  multifabric HA pair FC topologies 41
  single-fabric HA pair FC topologies 40
  single-fabric single-controller FC topologies 39
30xx
  FC topologies 38
  HA pair configuration limits 76
  single-controller configuration limits 75
  target port configuration 38
31xx
  FC topologies 31
  direct-attached HA pair FC topologies 37
  direct-attached single-controller FC topologies 36
  HA pair configuration limits 73
  multifabric HA pair FC topologies 34
  single-controller configuration limits 72
  single-fabric HA pair FC topologies 33
  single-fabric single-controller FC topologies 32
  target port configuration 32
60xx
  FC topologies 24
  direct-attached HA pair FC topologies 30
  direct-attached single-controller FC topologies 29
  HA pair configuration limits 73
  multifabric HA pair FC topologies 28
  single-controller configuration limits 72
  single-fabric HA pair FC topologies 26
  single-fabric single-controller FC topologies 25
  target port configuration 24

A
AIX
  host configuration limits 71
ALUA
  ESX configurations supported 65
  supported AIX configurations 63
  supported configurations 66
  Windows configurations supported 67
ALUA configurations 63
asymmetric logical unit access (ALUA)
  configurations 63

C
configuration limits
  20xx HA pair storage systems 78
  20xx single-controller storage systems 77
  30xx HA pair storage systems 76
  30xx single-controller storage systems 75
  31xx HA pair storage systems 73
  31xx single-controller storage systems 72
  60xx HA pair storage systems 73
  60xx single-controller storage systems 72
  by host operating system 71
  parameters defined 69

D
DCB (data center bridging) switch
  for FCoE 51
direct-attached configuration
  iSCSI 18
direct-attached HA pair FC topologies
  3040 and 3070 43
  31xx 37
  60xx 30
  FAS20xx 49
direct-attached single-controller FC topologies
  3040 and 3070 42
  31xx 36
  60xx 29
  FAS20xx 48
dynamic VLANs 19

E
EMC CLARiiON
  shared configurations 61
ESX
  host configuration limits 71
  supported ALUA configurations 65
expansion FC ports
  usage rules 22

F
FAS20xx
  FC topologies 44
  direct-attached HA pair FC topologies 49
  direct-attached single-controller FC topologies 48
  multifabric HA pair FC topologies 47
  multifabric single-controller FC topologies 46
  single-fabric HA pair FC topologies 45
  single-fabric single-controller FC topologies 44
FC
  30xx target port configuration 38
  30xx topologies 38
  31xx target port configuration 32
  31xx topologies 31
  60xx target port configuration 24
  60xx topologies 24
  FAS20xx topologies 44
  multifabric switch zoning 58
  onboard and expansion port usage rules 22
  single-fabric switch zoning 57
  switch configuration 23
  switch hop count 23
  switch port zoning 56
  switch WWN zoning 56
  switch zoning 55
  switch zoning with individual zones 56
  topologies overview 21
FC protocol
  ALUA configurations 63, 66
FCoE
  initiator and target combinations 51, 52
  supported configurations 52
  switch zoning 55
FCoE topologies
  FCoE initiator to FC target 53
Fibre Channel over Ethernet (FCoE)
  overview 51

H
HA pair
  iSCSI direct-attached configuration 18
  iSCSI multinetwork configuration 17
  iSCSI single-network configuration 15
hard zoning
  FC switch 56
heterogeneous SAN
  using VSAN 21
hop count
  for FC switches 23
host multipathing software
  when required 23
HP-UX
  host configuration limits 71

I
initiator FC ports
  onboard and expansion usage rules 22
initiators
  FCoE and FC combinations 51, 52
inter-switch links (ISLs)
  supported hop count 23
iSCSI
  direct-attached configuration 18
  dynamic VLANs 19
  multinetwork configuration 17
  single-network configuration 15
  static VLANs 19
  topologies 15
  using VLANs 19

L
Linux
  host configuration limits 71
Linux configurations
  ALUA support automatically enabled 66
  asymmetric logical unit access 66
  Target Port Group Support 66

M
MPIO
  ALUA configurations 63
MPIO software
  when required 23
MPxIO
  ALUA configurations 66
multifabric HA pair FC topologies
  3040 and 3070 41
  31xx 34
  60xx 28
  FAS20xx 47
multifabric single-controller FC topologies
  FAS20xx 46
multipathing software
  when required 23

N
Native OS
  ALUA configurations 63

O
onboard FC ports
  usage rules 22

P
parameters
  configuration limit definitions 69
point-to-point
  FC switch port topology 23
port topology
  FC switch 23
port zoning
  FC switch 56
PowerPath
  with shared configurations 61

S
shared SAN configurations 61
single-fabric HA pair FC topologies
  3040 and 3070 40
  31xx 33
  60xx 26
  FAS20xx 45
single-fabric single-controller FC topologies
  3040 and 3070 39
  31xx 32
  60xx 25
  FAS20xx 44
soft zoning
  FC switch 56
Solaris
  host configuration limits 71
static VLANs 19
switch
  FC configuration 23
  FC hop count 23
  FC multifabric zoning 58
  FC port zoning 56
  FC single-fabric zoning 57
  FC WWN zoning 56
  FC zoning 55
  FC zoning with individual zones 56
  FCoE zoning 55

T
target FC ports
  onboard and expansion usage rules 22
target port configurations
  30xx 38
  31xx 32
  60xx 24
targets
  FCoE and FC combinations 51, 52
topologies
  30xx FC topologies 38
  31xx FC topologies 31
  60xx FC topologies 24
  FAS20xx FC topologies 44
  FC 21
  FCoE initiator to FC target 53
  iSCSI 15
topologies, 3040 and 3070
  direct-attached HA pair FC configuration 43
  direct-attached single-controller FC topologies 42
  multifabric HA pair FC configuration 41
  single-fabric HA pair FC configuration 40
  single-fabric single-controller FC topologies 39
topologies, 31xx
  direct-attached HA pair FC configuration 37
  direct-attached single-controller FC topologies 36
  multifabric HA pair FC configuration 34
  single-fabric HA pair FC configuration 33
  single-fabric single-controller FC topologies 32
topologies, 60xx
  direct-attached HA pair FC configuration 30
  direct-attached single-controller FC topologies 29
  multifabric HA pair FC configuration 28
  single-fabric HA pair FC configuration 26
  single-fabric single-controller FC topologies 25
topologies, FAS20xx
  direct-attached HA pair FC configuration 49
  direct-attached single-controller FC topologies 48
  multifabric HA pair FC configuration 47
  multifabric single-controller FC topologies 46
  single-fabric HA pair FC configuration 45
  single-fabric single-controller FC topologies 44

V
virtual LANs reasons for using 19 VLANs dynamic 19 reasons for using 19 static 19 VSAN for heterogeneous SAN 21

WWN zoning FC switch 56

Z
zoning FC switch 55 FC switch by port 56 FC switch by WWN 56 FC switch multifabric 58 FC switch single-fabric 57 FC switch with individual zones 56 FCoE switch 55

W
Windows host configuration limits 71 supported ALUA configurations 67