Print Slides

1 of 31

https://www.brainshark.com/brainshark/viewer/PrintSlides.aspx?pi=4...

Data ONTAP 8 Cabling

Welcome to Data ONTAP 8 Cabling. This course steps you through cabling a single-node
system, adding more resiliency by expanding the system to a high-availability (or HA)
multipath paired configuration, and adding these HA pairs to a multiple-node clustered Data
ONTAP 8 system. We will also look at single-node and switchless cluster options.

06-11-16 12:08


Introduction

NetApp provides hardware and software solutions for the ever-growing demand for storage.
Through the use of hardware like the FAS3200 and FAS6200 series storage controllers and
the cutting-edge Data ONTAP 8 operating system, NetApp provides storage administrators the
ability to run their businesses.


Course Objectives

By the end of this course, you should be able to:
- Describe NetApp storage configuration building blocks
- Perform single-node cabling, which is the basis for all NetApp hardware configurations
- Attach front-end data cabling to allow access to the node for network-attached storage (NAS) and SAN clients
- Cable two nodes into an HA configuration
- Describe the switchless cluster configuration
- Perform cluster cabling to create a clustered Data ONTAP configuration consisting of HA pairs


Storage Configuration Building Blocks

NetApp storage is easily expandable from a single node to an HA solution and up to a
multiple-node cluster. A single node and its shelf are the basis for more complicated
configurations. With proper front-end cabling for client access, a single-node system can
serve both NAS and SAN datasets for small environments that use the Data ONTAP 8
7-Mode operating system. To add storage resiliency, two nodes may be combined into an HA
pair. This is the recommended NetApp solution because it provides the storage resiliency
that most customers require. This multipath configuration continues to serve data even
when one of the node controllers is unavailable. HA pair configurations are recommended
in a 7-Mode system and are the foundational configuration building blocks for a clustered
Data ONTAP system. Clustered Data ONTAP can also be implemented as a two-node
switchless cluster or a single-node cluster. By combining HA pairs and connecting a back-end
cluster interconnect, a multiple-node clustered Data ONTAP configuration can be created to
serve even the highest storage needs.


Demonstration Environment

NetApp provides numerous controllers and disk shelves for a wide variety of workloads. This
course uses FAS3270 controllers in separate chassis in the cabling environment. Other
storage controllers are available from NetApp. The FAS3270 controller that we will be using
has the I/O Expansion Module (or IOXM) attached. The FAS3270 has two onboard
Serial-Attached SCSI (or SAS) ports (0a and 0b) for connecting to SAS disk shelves. It also
has two 10-Gb Ethernet ports (c0a and c0b) used for connecting to a partner controller
within an HA configuration. Two onboard host bus adapter (HBA) ports (0c and 0d) can be
used to attach Fibre Channel (or FC) connected shelves or used within a SAN as target
adapters. There are two 1-Gb Ethernet adapters (e0a and e0b), which can be used for
general network connectivity. e0M is used to connect to the controller's management
interface and the service processor (SP). The SP is accessible even when the controller is
powered down. e0P is used to connect to the alternate control path (or ACP) of certain disk
shelves. Finally, the FAS3270 provides access to the controller's console by way of a serial
connection. This course also uses the SAS DS2246 disk shelves in the cabling examples.
Other disk shelves are available from NetApp. The DS2246 has two I/O modules (IOMs):
A and B. Let's get started cabling our storage configuration.
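The port inventory just described can be summarized in a small sketch. Python is used purely for illustration; the dictionary and its descriptions simply restate the text above and are not a NetApp data structure.

```python
# Illustrative summary of the FAS3270 ports described above (names and
# descriptions follow the text; the dictionary itself is hypothetical).
fas3270_ports = {
    "0a": "onboard SAS, connects to SAS disk shelves",
    "0b": "onboard SAS, connects to SAS disk shelves",
    "c0a": "10-Gb Ethernet, HA interconnect to partner controller",
    "c0b": "10-Gb Ethernet, HA interconnect to partner controller",
    "0c": "onboard FC HBA, FC shelves or SAN target",
    "0d": "onboard FC HBA, FC shelves or SAN target",
    "e0a": "1-Gb Ethernet, general network connectivity",
    "e0b": "1-Gb Ethernet, general network connectivity",
    "e0M": "management interface and service processor (SP)",
    "e0P": "alternate control path (ACP) to disk shelves",
}

# Example query: which ports attach SAS shelves?
sas_ports = sorted(p for p, desc in fas3270_ports.items() if "SAS" in desc)
print(sas_ports)  # ['0a', '0b']
```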


Module 1

Module 1. Node cabling.


SAS Shelf Data Path Cabling

Here you see a single FAS3270 controller with two stacks of shelves. For simplicity,
only two shelves are in each stack. The first shelf in stack 1 should begin with a shelf ID of
10. Each subsequent shelf in stack 1 would then take the next consecutive number; in this
case, shelf 2 within stack 1 would have a shelf ID of 11. The first shelf in stack 2 would then
begin with a shelf ID of 20. Following the same pattern, shelf 2 within stack 2 would have a
shelf ID of 21. In this way, each stack may support up to 10 shelves. The FAS3270 controller
has the optional quad-port SAS adapter card positioned in the E5 expansion slot, as defined
in the Hardware Universe on the NetApp Support site. You start the node SAS data path
cabling by daisy-chaining the SAS ports from the out (or circle) port of shelf 1 to the in (or
square) port of shelf 2 in both stacks, using QSFP-to-QSFP SAS cables. For stack 1, you
proceed by attaching the onboard SAS port A of controller 1 to the first shelf's I/O module A
square port. You complete stack 1's SAS cabling by attaching controller 1's 5b port to the
circle port of I/O module B on the last shelf in the stack. For stack 2, attach controller 1's 5a
port to the first shelf's I/O module A square port. Finally, attach controller 1's 0b port to the
circle port of I/O module B on the last shelf. SAS data path cabling is now complete.
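The shelf ID scheme above follows a simple rule that can be sketched as a helper. The function name and shape are hypothetical; only the numbering rule (stack 1 starts at 10, stack 2 at 20, each shelf takes the next consecutive ID, at most 10 shelves per stack) comes from the text.

```python
def shelf_ids(stack_number, shelf_count):
    """Return the shelf IDs for a stack, per the scheme described above:
    stack 1 begins at ID 10, stack 2 at 20, and each shelf in the stack
    takes the next consecutive number (illustrative helper only)."""
    if not 1 <= shelf_count <= 10:
        raise ValueError("each stack may support up to 10 shelves")
    base = stack_number * 10
    return [base + i for i in range(shelf_count)]

print(shelf_ids(1, 2))  # [10, 11] -- stack 1, shelves 1 and 2
print(shelf_ids(2, 2))  # [20, 21] -- stack 2, shelves 1 and 2
```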


SAS Shelf ACP Path Cabling

Next, you will attach the ACP cabling. You will first attach the daisy-chain between the
shelves. This will use CAT 6 Ethernet cable with RJ-45 connectors between the out (or
circle) port within shelf 1 to the in (or square) port of shelf 2 within both stacks. Next,
attach the I/O module A circle port on the last shelf to the square port of I/O module B of
the first shelf within the stack. The I/O module B circle port on shelf 2 of stack 1 will then
be connected to the first shelf within the next stack. This would continue until all stacks are
connected. Finally, attach the e0P port of controller 1 to the square port of I/O module A on
shelf 1 of the first stack. You have now completed the ACP cabling for your node. Remember
to connect the power cables of your controller and shelves to properly distributed power
sources.


FC and ATA Shelf Cabling

Other shelves that are available from NetApp can be used in combination with or without
the SAS shelves. The FAS3270 controller has FC ports 0c and 0d that could be used to
support these drives, or you could add FC initiator HBA cards to support these shelves. We
will use the onboard FC ports later as targets in an FC SAN.


Network Management Cabling

To complete the initial cabling, you should cable the network management ports. This
example uses the NetApp CN1601 10/100-Mb switch as the management switch. This
switch is a supported management switch for clustered Data ONTAP configurations;
however, if this were a 7-Mode configuration you could use any standard Ethernet switch for
management. This presentation assumes that the management switch has been properly
powered, networked, and configured. The management switch might be cabled to the
internal network in some environments. First, cable the serial connection port using the
provided RJ-45 to DB-9 cable to a system that is used for initial configuration or to a
terminal server. The terminal server could then be cabled directly to the management switch
for continuous serial connection. Next, cable the e0M port to the management switch.
Remember that this port is used to connect to the controller's service processor. For a
NetApp FAS system, e0M will be used as the node's management interface port. For greater
resiliency, use redundant management switches. Node management cabling is now
complete. After the initial configuration, the serial connection can be disconnected; we will
do so in this example. The single node is now ready to be configured for a customer's
application storage requirements.


Module 2

Module 2. Data cabling.


Ethernet SAN and NAS Cabling

Up to now, the single node is cabled for operation, but no clients can reach it. So you add a
Cisco Nexus 5010 switch with the optional N5K-M1008 FC module. This switch will be used
in this example as the data switch; however, for better resiliency, you might add a second
switch for redundancy. Two X1107A dual-port 10-Gb Ethernet SFP+ cards have been added
to expansion slots E1 and E2 on the controller. These two cards were added both to provide
data paths for clients and to later provide cluster networking when you add this system to a
cluster. The X1139A dual-port 10-Gb Ethernet Unified Target Adapter (or UTA) has been
added to E3 in the controller for Fibre Channel over Ethernet (FCoE) and NAS operations
simultaneously. This feature is called Unified Connect. If this were going to be a 7-Mode-only
storage system, you might get by with only the UTA adapter and not need the X1107A
10-Gb Ethernet cards, depending on your data throughput needs. You now add data cabling
for NAS operations over e1b and e2b. Next, add data cabling for both SAN and NAS
operations over e3a and e3b. Ethernet data cabling for both SAN and NAS operations is now
complete.


FC Target Cabling

If you have not used the onboard FC adapters for FC or ATA shelves, or if you add an FC
HBA card, you could additionally cable the controllers for FC targets for use in the SAN. You
will use the onboard adapter 0c and 0d as FC targets on controller 1. Remember that Data
ONTAP software defaults all onboard adapters as initiators. If you intend to use onboard
adapters as targets, you will have to configure them this way. Please see the SAN
Fundamentals on Data ONTAP Web-based training course for more information about using
the fcadmin command to configure onboard adapters. You now add the FC OM-3 cabling
with LC connectors from 0c and 0d on controller 1 to the FC ports on the Nexus 5010
switch. Data cabling for your single-node system is now complete. If we were using this
storage system with the Data ONTAP 7-Mode system, no additional cabling would be
required.
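The default-initiator rule called out above can be modeled in a tiny sketch. In practice the reconfiguration is done with the fcadmin command mentioned in the text; this Python model is purely illustrative and its function name is hypothetical.

```python
# Sketch: Data ONTAP defaults onboard FC adapters to initiator mode, so any
# port intended as a SAN target must be explicitly reconfigured.
def configure_fc_targets(target_ports, onboard=("0c", "0d")):
    modes = {p: "initiator" for p in onboard}  # factory default for all ports
    for p in target_ports:
        if p not in modes:
            raise ValueError(f"not an onboard FC port: {p}")
        modes[p] = "target"
    return modes

print(configure_fc_targets(["0c", "0d"]))  # {'0c': 'target', '0d': 'target'}
```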


Single Node Cluster Configuration

This configuration can also be used as a single-node cluster. Some operations in a
single-node cluster are disruptive because there is no takeover node available. For example,
an upgrade to the node is disruptive, and if a panic occurs, the node will reboot, causing a
disruption to the service. When a second node is added to a single-node cluster, it has to be
added as the HA partner of the current single node. NetApp highly recommends that all
customer production system controllers be configured as HA pairs, and that the switches
also be installed as redundant pairs. This provides customers a highly resilient solution for
their applications.


Module 3

Module 3. High Availability Cabling.


HA Controller Configuration

A single-node system has the controller as a single point of failure. Therefore, for greater
resiliency, you add a second controller, creating an HA pair configuration. The controllers
within an HA pair must be identical platforms.


HA SAS Shelf Data Path Cabling

You will next add the second controller to our existing SAS shelf data path. The second
FAS3270 has the optional Quad Port SAS adapter card positioned in the E5 expansion slot as
defined in the Hardware Universe on the NetApp Support site. For stack 1, you proceed by
attaching the onboard SAS port A of controller 2 to the first shelf I/O module B square port.
You complete stack 1's SAS cabling by attaching controller 2's 5b port to the circle port of
I/O module A on the last shelf in the stack. For stack 2, attach controller 2's 5a port to the
first shelf's I/O module B square port. Finally, attach controller 2's 0b port to the circle port
of I/O module A on the last shelf. SAS data path cabling is now complete.


HA SAS Shelf ACP Path Cabling

Next, you will attach the ACP cabling to the second controller. Remember that the first
controller and shelf ACP cabling was already done in the single-node example. Attach the
e0P port of controller 2 to the circle port of I/O module B on the last shelf of the last stack.
You have now completed the ACP cabling for your nodes. Remember to connect the power
cables to properly distributed power sources.


HA Interconnect Cabling

In the FAS3270 controller, c0a and c0b are 10-Gb Ethernet ports that are reserved for HA
interconnect. Other controllers, such as the NetApp FAS6200 series controllers, use the
NVRAM8 adapter with QSFP or optical interconnect ports. In some controller models it is
possible to have two controllers in one chassis and in these cases the HA interconnect is
achieved through the backplane. Connect c0a on controller 1 to c0a on controller 2 using
10-Gb cables with SFP+ connectors. Repeat this process by connecting c0b on controller 1
to c0b on controller 2. The HA interconnect cabling is now complete.


Network Management Cabling

To complete the initial cabling, you should cable the network management port and the
serial cable. First, cable the serial connection port using the provided RJ-45 to DB-9 cable to
a system that is used for initial configuration or to a terminal server. The terminal server
could then be cabled directly to the management switch for continuous serial connection.
Next, cable the e0M port to the management switch. In a FAS3270 controller, the e0M port
is supported by the SP that can be used even when the controller is powered down. For a
clustered Data ONTAP system, e0M will be used as the default node management port. For
greater resiliency, use redundant management switches. Node management cabling is now
complete. After the initial configuration, the serial connection can be disconnected.


HA Pair Configuration Complete

This diagram combines all that you have done so far. It adds all the shelf cabling, the HA
cabling, the management cabling, and the client data access cabling to controller 1. Disk
ownership needs to be planned appropriately for an HA pair, but this is beyond the scope of
this course. For more information about disk ownership, please see the NetApp Support site.


HA Ethernet SAN and NAS Cabling

Up to now, the HA pair is cabled for operation but clients can only access controller 1. You
will now cable controller 2 to allow client data access. Add X1107A dual-port 10-Gb Ethernet
SFP+ cards to expansion slots E1 and E2 on controller 2. These two cards were added to
provide both data paths for clients and to later provide cluster networking for the
Cluster-Mode system. Add the X1139A dual-port 10-Gb Ethernet Unified Target Adapter (or
UTA) with Fibre connections to the E3 slot in controller 2 for FCoE as well as NAS
operations. If this were going to be a 7-Mode-only storage system, you might get by with
only the UTA adapter to meet your Unified Connect needs. You now add data cabling for
NAS operations over e1b and e2b on controller 2. Next, add data cabling for both SAN and
NAS operations over e3a and e3b on controller 2. Ethernet data cabling for both SAN and
NAS operations is now complete.


HA FC Target Cabling

You will use the onboard adapters 0c and 0d as FC targets on controller 2 as well. Remember
that Data ONTAP software defaults all onboard adapters as initiators. If you intend to use
onboard adapters as targets, you will have to configure them this way. Please see the SAN
Fundamentals on Data ONTAP Web-based training course for more information about using
the fcadmin command to configure onboard adapters. You now add the FC OM-3 cabling
with LC connectors from 0c and 0d on controller 2 to the FC ports on the Nexus 5010
switch. Data cabling for your HA pair is now complete. If we were using this storage system
with the Data ONTAP 7-Mode system, there would be no additional cabling configuration
required. However, if this system was to be used as a clustered Data ONTAP system,
additional cabling configuration would be required.


Module 4

Module 4. Cluster Cabling.


Port and Role Mapping

For clustered Data ONTAP systems, certain physical ports are designated for certain roles.
These default designations are specific to the hardware platform. For the
FAS3270 controllers, the data ports are as we already defined them: e1b and e2b. The ports
e0a, e0b, e3a, and e3b might also be used as additional data ports. The default node
management port is the e0M port. Ports e0a and e0b might also be used as additional node
management ports. The cluster network ports are e1a and e2a. Always check the Hardware
Universe on the NetApp Support site for the specific ports and the designated roles for your
clustered Data ONTAP system. For clustered Data ONTAP 8.1 and later, the FC SAN target
cabling along with the FCoE and iSCSI target cabling is fully functional. Earlier versions of
clustered Data ONTAP systems do not support SAN targets.
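The FAS3270 default role designations above can be encoded in a small sketch. The dictionary is hypothetical and simply restates the text: e1a/e2a are cluster ports, e1b/e2b (plus e3a/e3b) are data ports, e0M is the default node management port, and e0a/e0b can serve both data and node management.

```python
# Hypothetical encoding of the FAS3270 default port roles listed above.
port_roles = {
    "e1a": {"cluster"},
    "e2a": {"cluster"},
    "e1b": {"data"},
    "e2b": {"data"},
    "e3a": {"data"},
    "e3b": {"data"},
    "e0a": {"data", "node-mgmt"},
    "e0b": {"data", "node-mgmt"},
    "e0M": {"node-mgmt"},
}

cluster_ports = sorted(p for p, roles in port_roles.items() if "cluster" in roles)
print(cluster_ports)  # ['e1a', 'e2a']
```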


Two-Node Switchless Cluster

Before we look at cabling that includes the cluster interconnect switches, let's look at an
example of a two-node switchless cluster. The Two-Node Switchless Cluster feature requires
the cluster ports of the two nodes to be directly attached to each other instead of being
connected to an Ethernet switch located between the two sets of cluster ports. Not having
to buy a switch just to connect cluster ports decreases the overall cost of setting up a
two-node cluster. Clusters can transition from switched to switchless and vice versa. The
only requirement when transitioning to a switchless cluster is that the cluster cannot have
more than two nodes. Transitioning between switched and switchless clusters requires
physical configuration changes as well as Data ONTAP configuration changes. The cabling as
described before for a two-node HA system remains the same. For documentation describing
the switchless cluster feature, refer to the NetApp Support site and select the Data ONTAP
8.2 documentation and the documentation for the particular hardware platform you are
using.


Clustered Data ONTAP Switches

Clustered Data ONTAP systems require two switches for the backend cluster interconnect,
and two management switches. Currently, NetApp provides the CN1601 switches for cluster
management, and the CN1610 switches for the cluster interconnect. The CN1610 switches
are configured to support up to 12 nodes in a cluster. Remember that, if you are using SAN
protocols, Data ONTAP 8.2 supports up to eight nodes in a cluster serving block storage.
The Cisco Nexus 5000 series switches are also certified for the cluster interconnect. The
Cisco Nexus 5596 can support up to 24 nodes. FAS2220 systems can be configured as
two-node clusters and use the CN1601 switches for both management and interconnect
functions.


Clustered Data ONTAP Switch Configuration

The switches that are used for the cluster interconnect and management network must be
properly configured to work within a clustered environment. The CN1601 management
switch is configured with ports 1-8 for node management; ports 9-12 are not allocated,
ports 15 and 16 are ISLs, port 14 should connect to the customer management network,
and port 13 is the switch service port. For simplicity we are showing only one management
switch here, although two switches should always be installed for redundancy. The clustered
Data ONTAP system's cluster interconnect requires 10-Gb Ethernet access. On the CN1610
switches, ports 1-12 are 10GbE ports configured for the cluster network. Ports 13-16 are
the ISL ports. For clusters with more than 12 nodes, use the Cisco Nexus 5596 switch for
the cluster interconnect. On these switches, ports 1-24 are 10GbE cluster ports, ports
25-40 are reserved, and ports 41-48 are ISL ports. The URL that is shown provides the
reference configuration files (RCFs) for each of the switches. These configuration scripts
define ports with unique descriptions and predefined port types. Additionally, the scripts
configure the description parameter for the ISL ports with a switchport mode of trunk. The
documentation on the NetApp Support site provides information about installation and
configuration of each of the switches.


Cluster Interconnect Cabling

It is now time to cable the cluster interconnect. Please note that the terminal server with
the console ports and the second management switch have been omitted from this slide to
make the diagram easier to view. First, cable e1a on controller 1 to switch 1 and e2a on
controller 1 to switch 2 within the cluster interconnect. Repeat this task, cabling e1a and
e2a on controller 2 to their respective switches within the cluster interconnect. Finally,
connect ports 1/13 through 1/16 on cluster interconnect switch 1 to the same ports on
cluster interconnect switch 2. This is the ISL connection between the cluster switches. The
cluster interconnect cabling is now complete and so is all the cabling that is required for a
two-node clustered Data ONTAP system.
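The interconnect cabling steps above can be enumerated as a simple plan. The controller and switch names here are illustrative, not the actual device names; only the wiring pattern (each node's e1a to switch 1, e2a to switch 2, and ISL ports 1/13-1/16 paired between the switches) follows the text.

```python
# Sketch of the two-node cluster interconnect cabling described above.
def interconnect_plan(nodes):
    plan = []
    for n in nodes:
        plan.append((f"{n}:e1a", "cluster-switch-1"))  # e1a -> switch 1
        plan.append((f"{n}:e2a", "cluster-switch-2"))  # e2a -> switch 2
    # ISL: ports 1/13 through 1/16 on switch 1 to the same ports on switch 2
    for p in range(13, 17):
        plan.append((f"cluster-switch-1:1/{p}", f"cluster-switch-2:1/{p}"))
    return plan

plan = interconnect_plan(["controller-1", "controller-2"])
print(len(plan))  # 8: four node-to-switch links plus four ISL links
```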


HA Pairs in Clusters

We have completed our cabling of an HA pair within a clustered Data ONTAP configuration.
We can continue to add HA pairs, cabling the node management ports and the terminal
server Ethernet connection ports to the management switch, the data ports to the data
switch, and the cluster ports to the two cluster interconnect switches. In this way, we can
expand our cluster up to the current supported number of nodes based on the version of the
Data ONTAP software that is installed in the cluster. Please verify the number of nodes that
is supported in a cluster by checking the Hardware Universe on the NetApp Support site.
Click the link provided for more information; the third module in that course deals
specifically with cabling.


Course Summary

Now that you have completed this course, you should be able to:
- Describe NetApp storage configuration building blocks
- Perform single-node cabling, which is the basis for all NetApp hardware configurations
- Attach front-end data cabling to allow access to the node for NAS and SAN clients
- Cable two nodes into an HA configuration
- Describe the switchless cluster configuration
- Perform cluster cabling to create a clustered Data ONTAP configuration consisting of HA pairs
