January 31 - February 3, 2011, Hello LONDON!

FCoE Design and Best Practices


Ozden Karakok CAE

Agenda
- Why are we here?
- Background Information: FCoE Building Blocks and Terminology, DCB Standard, FCoE Protocol Information
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions

The Evolving Data Centre Access


- The Access Layer is becoming more than just a port aggregator
- Edge of the growing Layer 2 topology
  - Scaling of STP Edge Ports
  - Virtual embedded switches
  - vPC and loop-free designs
  - Layer 2 Multi-Pathing (future)
- DCB and Multi-Hop FCoE Support
  - Enhanced multi-hop FCoE with E-NPV
- Single Point for Access Management
- VN-Tag and Port Extension
  - Nexus 2000 (current)
  - VSM and VN-Link (future)

(Diagram: FC SAN and Ethernet LAN converging at the access layer, with blade chassis of eight server slots attaching to the consolidated edge)

Virtualized Edge/Access Layer

Foundational element for Unified I/O and Unified Wire

Core/Aggregation Layer

The Consolidated Nexus Edge Layer


Why are we here?


Session Objectives
- Understand the design requirements of a Unified Network
- Be able to design the single-hop Unified Networks available today, which meet the demands of both SAN and LAN networks
- Start the conversation between Network and Storage teams regarding consolidation and FCoE beyond the access layer
- Understand the Operations and Management aspects of a Unified Network

Agenda
- Why are we here?
- Background Information: FCoE Building Blocks and Terminology, DCB Standard, FCoE Protocol Information
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions

FCoE Building Blocks


The Acronyms Defined
- FCF: Fibre Channel Forwarder (Nexus 5000, Nexus 7000, MDS 9000)
- FPMA: a unique MAC address that is assigned by an FCF to a single Enode
- Enode: a Fibre Channel end node that is able to transmit FCoE frames using one or more Enode MACs
- FCoE Pass-Through: a DCB device capable of passing FCoE frames to an FCF (i.e. FIP Snooping)
  - FIP Snooping Bridge
  - FCoE N-Port Virtualizer
- Single-hop FCoE: running FCoE between the host and the first-hop access switch
- Multi-hop FCoE: the extension of FCoE beyond a single hop into the Aggregation and Core layers of the Data Centre network

Enode MAC Address


Fibre Channel over Ethernet Addressing Scheme
- An Enode FCoE MAC (FPMA) is assigned for each FCID handed out by the FC fabric
- The Enode FCoE MAC is composed of an FC-MAP and an FCID
  - FC-MAP is the upper 24 bits of the Enode's FCoE MAC
  - FCID is the lower 24 bits of the Enode's MAC
  - e.g. FC-MAP 0E:FC:00 combined with FCID 10.00.01 yields the FC-MAC address 0E:FC:00:10:00:01
- FCoE forwarding decisions are still made based on FSPF and the FCID within the Enode MAC (Fibre Channel FCID addressing)

(Diagram: FC-MAC Address = FC-MAP (0E-FC-xx) | FC-ID 10.00.01; the FCID is assigned by the FC fabric, starting with the switch Domain ID)

Show fcoe
N5K2-60# show fcoe
FCF details for interface san-port-channel 200
    FCF-MAC is 00:0d:ec:a4:3b:87
    FC-MAP is 0e:fc:00
    FCF Priority is 128
    FKA Advertisement period for FCF is 8 seconds

N5K2-60# show fcoe database
-------------------------------------------------------------------------------
INTERFACE    FCID        PORT NAME                  MAC ADDRESS
-------------------------------------------------------------------------------
vfc1         0x240101    21:00:00:c0:dd:0a:b8:df    00:c0:dd:0a:b8:df
vfc201       0x240100    21:00:00:c0:dd:12:04:f2    00:c0:dd:12:04:f2

(Screenshot: QLogic SANsurfer output)

FCoE Building Blocks


Converged Network Adapter
- Replaces multiple adapters per server, consolidating both Ethernet and FC on a single interface
- Appears to the operating system as individual interfaces (NICs and HBAs)
- First-generation CNAs support PFC and CIN-DCBX
- Second-generation CNAs support PFC, CEE-DCBX as well as FIP
  - Single-chip implementation
  - Half height/length
  - Half the power consumption

(Diagram: the CNA presents two PCIe functions over one 10GbE link; the FC driver is bound to the FC HBA PCI address and the Ethernet driver is bound to the Ethernet NIC PCI address, so the operating system loads separate Fibre Channel and Ethernet drivers)

FCoE Building Blocks


Fibre Channel Forwarder
- FCF (Fibre Channel Forwarder) is the Fibre Channel switching element inside an FCoE switch
- Fibre Channel logins (FLOGIs) happen at the FCF
- The FCF consumes a Domain ID
- FCoE encapsulation/decapsulation happens within the FCF
- Forwarding is based on FC information

(Diagram: an FCoE switch contains an FCF, owning FC Domain ID 15 and the native FC ports, alongside an Ethernet bridge owning the Ethernet ports)

FCoE Building Blocks


FCoE Port Types

(Diagram: a Fibre Channel over Ethernet switch (FCF) uses VE_Ports to connect to other FCF switches and VF_Ports to connect either directly to End Node VN_Ports or to the VNP_Port of an E_NPV switch; the E_NPV switch in turn presents VF_Ports to End Node VN_Ports)

** Available NOW: FCoE Switch (FCF)

FCoE Building Blocks

The New Buzzword: Unified

- Unified I/O: using Ethernet as the transport medium in all network environments, no longer needing separate cabling options for LAN and SAN networks
- Unified Wire: a single DCB Ethernet link actively carrying both LAN and Storage (FC/FCoE/NAS/iSCSI) traffic simultaneously
- Unified Dedicated Wire: a single DCB Ethernet link capable of carrying all traffic types but actively dedicated to a single traffic type for traffic engineering purposes
- Unified Fabric: an Ethernet network made up of Unified Wires everywhere; all protocols, network and storage, traverse all links simultaneously

FCoE Building Blocks


Unified Wire vs Unified Dedicated Wire

- Unified Wire to the access switch
  - cost savings from the reduction of required equipment
  - cable once for all servers to have access to both LAN and SAN networks
- Unified Dedicated Wire from access to aggregation
  - separate links for SAN and LAN traffic, but both links use the same I/O (10GE)
  - advanced Ethernet features can be applied to the LAN links
  - maintains fabric isolation

(Diagram: servers with CNAs share a Unified Wire to the access layer; from access to the L2/L3 aggregation and core, Unified Dedicated Wires split LAN traffic from the Fabric A and Fabric B SAN traffic)

FCoE Building Blocks


The Unified Fabric - Definition

- A single network: all links carry all types of traffic simultaneously (all/any storage and network protocols)
- Possible reduction of equipment leading to cost savings
- Abolition of Fabric A and Fabric B: a single SAN fabric with redundant fabric services
- Ethernet and storage traffic EVERYWHERE

(Diagram: Core, Aggregation and Access layers with the L2/L3 boundary at aggregation and Virtual PortChannel (vPC) used throughout)

Unified Technology
- LAN and SAN networks share the same Unified I/O building blocks: switches and cabling
- Maintains existing operations, management and troubleshooting models
- Takes advantage of the Ethernet roadmap (10G, 40G, 100G)
- Native Ethernet LAN alongside a Fibre Channel over Ethernet SAN (Fabric A / Fabric B)

(Diagram: LAN core/aggregation/access with vPC and NIC/CNA EtherChannel on the Ethernet side; SAN core/edge with CNAs and SAN multi-pathing on the FCoE side)

Agenda
- Why are we here?
- Background Information: FCoE Building Blocks and Terminology, DCB Standard, FCoE Protocol Information
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions

Standards for I/O Consolidation


Developed by the IEEE 802.1 Data Center Bridging (DCB) Task Group; all are technically stable. The FC-BB-5 standard was published by ANSI in May 2010.

Standard / Feature                                                          Status of the Standard
IEEE 802.1Qbb  Priority-based Flow Control (PFC)                            Passed Sponsor Ballot, awaiting publication
IEEE 802.3bd   Frame Format for PFC                                         Passed Sponsor Ballot, awaiting publication
IEEE 802.1Qaz  Enhanced Transmission Selection (ETS) and
               Data Center Bridging eXchange (DCBX)                         Entering Sponsor Ballot
IEEE 802.1Qau  Congestion Notification                                      Done!
IEEE 802.1Qbh  Port Extender                                                In its first task group ballot

CEE (Converged Enhanced Ethernet) is an informal group of companies that submitted initial inputs to the DCB WGs.

What's Necessary for FCoE?

- The FCoE standard REQUIRES lossless Ethernet
- PFC: necessary to guarantee Ethernet can provide lossless transport
- ETS: nice-to-have for bandwidth management and traffic separation
- QCN: NOT necessary for FCoE today

Priority Flow Control


FCoE Flow Control Mechanism

- Enables lossless Ethernet using PAUSE based on a CoS as defined in 802.1p
- When the link is congested, the CoS assigned to no-drop will be PAUSED
- Traffic assigned to other CoS values continues to transmit and relies on upper-layer protocols for retransmission
- Not only for FCoE traffic

(Diagram: Fibre Channel uses R_RDY/B2B credits to stop the sender; PFC achieves the equivalent on an Ethernet link by sending PAUSE for one of the eight virtual lanes while the other transmit queues keep flowing into the receive buffers)

Priority Flow Control

Operations / Configuration - Switch Level

- Once feature fcoe is configured, two classes are created by default:

  policy-map type qos default-in-policy
    class type qos class-fcoe
      set qos-group 1
    class type qos class-default
      set qos-group 0

- class-fcoe is configured to be no-drop with an MTU of 2158:

  class type network-qos class-fcoe
    pause no-drop
    mtu 2158

- Best Practice: use the default CoS value of 3 for FCoE/no-drop traffic
- Can be changed through the QoS class-map configuration:

  N5K# show class-map
    Type qos class-maps
    ===================
      class-map type qos match-any class-fcoe
        match cos 3
      class-map type qos match-any class-default
        match any

(Diagram: DCB switch negotiating these settings with a DCB CNA adapter)

Priority Flow Control


Changing PFC Settings

Create classification rules first by defining and applying a policy-map type qos:

  N5010-2(config)# class-map type qos class-lossless
  N5010-2(config-cmap-qos)# match cos 4
  N5010-2(config-cmap-qos)# policy-map type qos policy-lossless
  N5010-2(config-pmap-qos)# class type qos class-lossless
  N5010-2(config-pmap-c-qos)# set qos-group 4
  N5010-2(config-pmap-uf)# system qos
  N5010-2(config-sys-qos)# service-policy type qos input policy-lossless

Then define and apply a policy-map type network-qos so that the DCBX protocol negotiates PFC for priority 4:

  N5010-2(config)# class-map type network-qos class-lossless
  N5010-2(config-cmap-uf)# match qos-group 4
  N5010-2(config-cmap-uf)# policy-map type network-qos policy-lossless
  N5010-2(config-pmap-uf)# class type network-qos class-lossless
  N5010-2(config-pmap-uf-c)# pause no-drop
  N5010-2(config-pmap-uf)# system qos
  N5010-2(config-sys-qos)# service-policy type network-qos policy-lossless

Priority Flow Control


Verifying Configurations

- show interface priority-flow-control checks the PFC settings on an interface: it shows the ports where PFC is configured, the CoS value associated with PFC (the VL bitmap), and the PAUSE frames received and sent on each port (RxPPP/TxPPP)

  N5K1# show interface priority-flow-control
  ============================================================
  Port            Mode   Oper(VL bmap)   RxPPP     TxPPP
  ============================================================
  Ethernet1/1     Auto   On      (8)     0         0
  Ethernet1/2     Auto   On      (8)     0         0
  Ethernet1/3     Auto   On      (8)     0         0
  Ethernet1/4     Auto   Off             0         0
  Ethernet1/5     Auto   Off             0         0
  Ethernet1/6     Auto   Off             0         0
  Ethernet1/7     Auto   On      (8)     0         0
  Ethernet1/8     Auto   Off             0         0

- VL bmap = the CoS set for PFC:

  VL bmap   Binary     CoS
  1         00000001   0
  2         00000010   1
  4         00000100   2
  8         00001000   3
  16        00010000   4
  32        00100000   5
  64        01000000   6
  128       10000000   7

Enhanced Transmission Selection


Bandwidth Management

- Prevents a single traffic class from hogging all the bandwidth and starving other classes
- When a given class doesn't fully utilize its allocated bandwidth, the remainder is available to other classes
- Helps accommodate classes of a bursty nature

(Chart: offered traffic vs. realized traffic utilization on a 10GE link for HPC, Storage and LAN traffic classes across t1-t3; unused allocation from one class is redistributed to the others)

Enhanced Transmission Selection


Bandwidth Management - Defaults

- Once feature fcoe is configured, two classes are created by default
- By default, each class is given 50% of the available bandwidth:

  N5K1# show queuing interface ethernet 1/13
  Ethernet1/13 queuing information:
    TX Queuing
      qos-group  sched-type  oper-bandwidth
          0         WRR           50
          1         WRR           50

- A traditional server (1Gig FC HBAs plus 1Gig Ethernet NICs) has roughly equal bandwidth per traffic type
- Best Practice: FCoE and Ethernet each receive 50%
- Can be changed through QoS settings when higher demands for certain traffic exist (e.g. HPC traffic, more Ethernet NICs)

Enhanced Transmission Selection


Changing ETS Bandwidth Configurations

Create classification rules first by defining and applying a policy-map type qos, then define and apply a policy-map type queuing to configure strict priority and bandwidth sharing:

  N5010-2(config)# class-map type queuing class-voice
  N5010-2(config-cmap-que)# match qos-group 2
  N5010-2(config-cmap-que)# class-map type queuing class-high
  N5010-2(config-cmap-que)# match qos-group 3
  N5010-2(config-cmap-que)# class-map type queuing class-low
  N5010-2(config-cmap-que)# match qos-group 4
  N5010-2(config-cmap-que)# exit
  N5010-2(config)# policy-map type queuing policy-BW
  N5010-2(config-pmap-que)# class type queuing class-voice
  N5010-2(config-pmap-c-que)# priority
  N5010-2(config-pmap-c-que)# class type queuing class-high
  N5010-2(config-pmap-c-que)# bandwidth percent 50
  N5010-2(config-pmap-c-que)# class type queuing class-low
  N5010-2(config-pmap-c-que)# bandwidth percent 20
  N5010-2(config-pmap-c-que)# class type queuing class-fcoe
  N5010-2(config-pmap-c-que)# bandwidth percent 30
  N5010-2(config-pmap-c-que)# class type queuing class-default
  N5010-2(config-pmap-c-que)# bandwidth percent 0
  N5010-2(config-pmap-c-que)# system qos
  N5010-2(config-sys-qos)# service-policy type queuing output policy-BW

In this example FCoE traffic is given 30% of the 10GE link.
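The resulting allocation can be confirmed with the same command used earlier to display the defaults; a usage sketch (the interface number is illustrative):

  N5K1# show queuing interface ethernet 1/13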

Data Center Bridging eXchange


Control Protocol - the handshake

- Negotiates Ethernet capabilities (PFC, ETS, CoS values) between peer devices
- Simplifies management of DCB nodes: allows configuration and distribution of parameters from one node to another
- Responsible for Logical Link Up/Down signaling of Ethernet and Fibre Channel
- Uses Link Layer Discovery Protocol (LLDP), defined by 802.1AB, to exchange and discover DCB capabilities
- DCBX negotiation failures result in:
  - per-priority-pause not being enabled on CoS values
  - the vfc not coming up when DCBX is used in an FCoE environment
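For troubleshooting, the negotiated DCBX state can be inspected on the switch. A hedged sketch; the exact command name is an assumption and varies by platform and NX-OS release:

  ! Assumed command for inspecting DCBX negotiation on a Nexus 5000 interface
  N5K# show system internal dcbx info interface ethernet 1/1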

Agenda
- Why are we here?
- Background Information: FCoE Building Blocks and Terminology, DCB Standard, FCoE Protocol Information
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions

Understanding FCoE

Fibre Channel is to FCoE as is to


Fibre Channel over Ethernet

FC-BB-5 Protocol - FCoE is Fibre Channel

- FCoE is the mapping of FC frames over Ethernet
- Completely based on the FC model
- Same host-to-switch and switch-to-switch behavior as FC
- WWNs, FC-IDs, hard/soft zoning, DNS, RSCN all still apply

(Frame layout, byte 0 through 2229: Ethernet Header, FCoE Header, FC Header, FC Payload, CRC, EOF, FCS)

(Logos of the FC-BB-5 participants: Cisco, HP, Intel, Dell, IBM, QLogic, EMC2, Microsoft, Emulex, NetApp, VMware, Red Hat)

Fibre Channel over Ethernet

FC-BB-5 Protocol - FCoE is Ethernet

- FCoE is the mapping of FC frames over Ethernet
- Rides the roadmap of Ethernet
- Benefits from Ethernet's economy of scale and massive industry investment

(Frame layout, byte 0 through 2229: Ethernet Header, FCoE Header, FC Header, FC Payload, CRC, EOF, FCS)

Fibre Channel over Ethernet

Protocol Mapping

- From a Fibre Channel standpoint it's FC connectivity over a new type of cable called Ethernet
- From an Ethernet standpoint it's yet another ULP (Upper Layer Protocol) to be transported

  Native Fibre Channel stack:           FCoE stack:
    FC-4 ULP Mapping                      FC-4 ULP Mapping
    FC-3 Generic Services                 FC-3 Generic Services
    FC-2 Framing & Flow Control           FC-2 Framing & Flow Control
    FC-1 Encoding                         FCoE Logical End Point
    FC-0 Physical Interface               Ethernet Media Access Control
                                          Ethernet Physical Layer

Fibre Channel over Ethernet

Data and Control Plane

- FCoE itself
  - is the data plane protocol
  - is used to carry most of the FC frames and all the SCSI traffic
- FIP (FCoE Initialization Protocol)
  - is the control plane protocol
  - is used to discover the FC entities connected to an Ethernet cloud
  - is also used to log in to and log out from the FC fabric
  - uses a Fabric Assigned MAC address (dynamic)
- Both protocols have
  - two different Ethertypes: FIP 0x8914, FCoE 0x8906
  - two different frame formats
  - both are defined in FC-BB-5
    http://www.cisco.biz/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-560403.html

Fibre Channel over Ethernet Protocol


FIP: FCoE Initialization Protocol

- FIP discovers other FCoE-capable devices within the Ethernet cloud
  - enables FCoE adapters (CNAs) to discover FCoE switches (FCFs) on the FCoE VLAN
  - establishes a virtual link between the adapter and the FCF, or between two FCFs
- FIP frames use a different Ethertype from FCoE frames, making FIP snooping possible for DCB-capable Ethernet bridges
- Builds the foundation for future multi-hop FCoE topologies
  - multi-hop refers to FCoE extending beyond a single hop or access switch
  - today, multi-hop is achievable with a Nexus 4000 (FIP Snooping Bridge) connected to a Nexus 5000 (FCF)

Fibre Channel over Ethernet Protocol

FCoE Initialization Protocol (FIP)

Step 1: FCoE VLAN Discovery
- FIP sends a multicast to the ALL_FCF_MAC address looking for the FCoE VLAN
- These FIP frames use the native VLAN

Step 2: FCF Discovery
- FIP sends a multicast to the ALL_FCF_MAC address on the FCoE VLAN to find the FCFs answering for that FCoE VLAN
- FCFs respond with their MAC address

Step 3: Fabric Login
- FIP sends a FLOGI request to the FCF-MAC found in Step 2
- This establishes a virtual link between host and FCF

Once logged in, FC commands and responses flow over FCoE itself.

** FIP does not carry any Fibre Channel frames

(Diagram: Enode initiator exchanging VLAN Discovery, FCF Discovery, and FLOGI/FDISC Accept frames with the FCoE switch/FCF, followed by FC command/response traffic over the FCoE protocol)

Fibre Channel over Ethernet Protocol

FCoE Initialization Protocol (FIP)

The FCoE VLAN is manually configured on the Nexus 5000:

  N5K(config)# feature fcoe
  N5K(config)# vlan 2
  N5K(config-vlan)# fcoe vsan 2
  N5K(config-vlan)# show vlan fcoe
  Original VLAN ID    Translated VSAN ID    Association State
  ----------------    ------------------    -----------------
         2                    2               Operational

The FCF-MAC address is configured on the Nexus 5000 by default once feature fcoe has been configured. This is the MAC address returned in Step 2 of the FIP exchange, and it is the MAC the host uses to log in to the FCoE fabric:

  N5K# show fcoe
  Global FCF details
    FCF-MAC is 00:0d:ec:d5:fe:00
    FC-MAP is 0e:fc:00
    FCF Priority is 128
    FKA Advertisement period for FCF is 8 seconds

** FIP does not carry any Fibre Channel frames
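For the host to actually log in, a virtual Fibre Channel (vfc) interface is also bound to the converged port and placed in the VSAN. A minimal sketch of that remaining configuration, assuming hypothetical interface numbers (vfc 1 on Ethernet 1/1, VSAN 2 from the example above):

  ! Bind a vfc to the server-facing Ethernet port and bring it up
  N5K(config)# interface vfc 1
  N5K(config-if)# bind interface ethernet 1/1
  N5K(config-if)# no shutdown
  ! Place the vfc in the VSAN that is mapped to the FCoE VLAN
  N5K(config)# vsan database
  N5K(config-vsan-db)# vsan 2 interface vfc 1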

Fibre Channel over Ethernet Protocol

FCoE Initialization Protocol (FIP)

Step 3 - login process: show flogi database and show fcoe database display the logins and the associated FCIDs, xWWNs and FCoE MAC addresses.

(Screenshot: show flogi database and show fcoe database output)

Fibre Channel over Ethernet Protocol

Configuration using Device Manager

(Screenshot: FCoE configuration through Device Manager)

Fibre Channel over Ethernet Protocol

Host Side FIP and DCBX Configuration

(Screenshot of the CNA management tool: the first portion of the FC-MAC address is the FC-MAP of the Nexus 5000 (0E-FC-00); the second portion is the FC-ID (B1.00.01))

FCoE Forwarding

- The FCF and any intermediate switches in the Ethernet cloud are all Fibre Channel aware

(Diagram: an FC storage target with FCID 7.1.1 in FC Domain 7 sends a frame to a host VN_Port with FCID 1.1.1 and MAC C. The FC frame (D_ID = FC-ID 1.1.1, S_ID = FC-ID 7.1.1) travels natively across the FC fabric, is encapsulated as an FCoE frame addressed Dest = MAC B, Srce = MAC A across the VE_Port hop between Domain 3 and Domain 1, and is re-encapsulated as Dest = MAC C, Srce = MAC B on the final VF_Port to VN_Port hop; the embedded FC D_ID and S_ID never change.)

Agenda
- Why are we here?
- Background Information: FCoE Building Blocks and Terminology, DCB Standard, FCoE Protocol Information
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions

The Design Requirements


Ethernet vs Fibre Channel

- Ethernet is non-deterministic
  - Flow control is destination-based
  - Relies on TCP drop-retransmission and the sliding window
- Fibre Channel is deterministic
  - Flow control is source-based (B2B credits)
  - Services are fabric-integrated (no loop concept)

The Design Requirements


Classical Ethernet

- Ethernet/IP
  - Goal is to provide any-to-any connectivity
  - Unaware of packet loss; relies on ULPs for retransmission and windowing
  - Provides the transport without worrying about the services; services are provided by upper layers
  - East-west vs north-south traffic ratios are undefined
- Fabric topology and traffic flows are highly flexible
- Network design has been optimized for:
  - High availability from a transport perspective, by connecting nodes in mesh architectures
  - Service HA implemented separately
  - Control protocol interaction (STP, OSPF, EIGRP, the L2/L3 boundary, etc.)
- Client/server relationships are not predefined

(Diagram: a mesh of switches with undetermined traffic flows between arbitrary endpoints)

The Design Requirements


LAN Design - Access/Aggregation/Core

- Servers are typically dual-homed to two or more access switches
- LAN switches have redundant connections to the next layer
- Distribution and Core can be collapsed into a single box
- The L2/L3 boundary is typically deployed in the aggregation layer
  - Spanning tree or advanced L2 technologies (vPC) prevent loops within the L2 boundary
  - L3 routes are summarized to the core
- Services are deployed at the L2/L3 boundary of the network (load balancing, firewall, NAM, etc.)

(Diagram: outside Data Center cloud above the Core; Aggregation with the L2/L3 boundary and Virtual PortChannel (vPC); STP toward the Access layer)

The Design Requirements


Classical Fibre Channel

- Fibre Channel SAN
  - Transport and services sit in the same layer, in the same devices (FSPF, zoning, DNS, RSCN)
  - Well-defined end-device relationships (initiators and targets)
  - Does not tolerate packet drop; requires lossless transport
  - Only north-south traffic; east-west traffic is mostly irrelevant
- Fabric topology, services and traffic flows are structured
- Network designs are optimized for scale and availability
  - High availability of network services is provided through a dual-fabric architecture
  - SAN A and SAN B: physically separate and redundant fabrics
  - Strict change isolation and end-to-end driver certification
- Client/server relationships are pre-defined: initiators (clients) talk to targets (servers)

(Diagram: targets T0-T2 and initiators I0-I5 attached to a fabric of switches, each running the DNS, FSPF, Zoning and RSCN services)

The Design Requirements


SAN Design - Two Tier Topology

Edge-Core Topology
- Servers connect to the edge switches
- Storage devices connect to one or more core switches
- Core switches provide storage services to one or more edge switches, thus servicing more servers in the fabric
- ISLs have to be designed so that the overall fan-in ratio of servers to storage and the overall end-to-end oversubscription are maintained
- HA is achieved with two physically separate, but identical, redundant SAN fabrics

The Design Requirements


SAN Design - Three Tier Topology

Edge-Core-Edge Topology
- For environments where future growth will have the number of storage devices exceeding the number of ports available at the core switch
- One set of edge switches is dedicated to server connectivity and another set is dedicated to storage devices
- The extra edge can also be a services edge for advanced network services
- The core is for transport only and rarely accommodates end nodes
- HA is achieved with dual fabrics

The Design Requirements


Classical Ethernet + Classical Fibre Channel = ??

- Question: do we build an FC network on top of an Ethernet cloud, or an Ethernet network on top of a Fibre Channel fabric?
- A Unified Fabric design has to incorporate the superset of requirements:
  - Network: lossless and lossy topologies; transport for both undefined (any-to-any) and defined (one-to-one) flows
  - High Availability: a redundant network topology (mesh/full mesh) and physically separate redundant fabrics
  - Bandwidth: FC fan-in and oversubscription ratios and Ethernet oversubscription
  - Security: FC controls (zoning, port security, ...) and IP controls (CISF, ACLs, ...)
  - Manageability and Visibility: hop-by-hop visibility for FC, the cloud model for Ethernet

(Diagram: the any-to-any Ethernet switch mesh on one side, and the structured FC fabric of targets T0-T2, initiators I0-I5 and fabric services (DNS, FSPF, Zone, RSCN) on the other)

The Design Requirements


Classical Ethernet + Classical Fibre Channel = ??

Can't we just fold down the dotted line?

(Diagram: the LAN design (outside Data Center cloud, Core, Aggregation with vPC, Access with STP) and the dual-core SAN design drawn side by side, with "Fold Here" marks suggesting the two could simply be overlaid)

Agenda
- Why are we here?
- Background Information: FCoE Building Blocks and Terminology, DCB Standard, FCoE Protocol Information
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions

Single Hop Design


Today's Solution

- Host connected over a unified wire to the first-hop access switch
- The access switch (Nexus 5000) is the FCF
- Fibre Channel ports on the access switch can be in NPV or Switch mode for native FC traffic
- DCBX is used to negotiate the enhanced Ethernet capabilities
- FIP is used to negotiate the FCoE capabilities as well as the host login process
- FCoE runs from the host to the access switch FCF; native Ethernet and native FC break off at the access layer

(Diagram: an ENode with a CNA attaches over a Unified Wire to a DCB-capable switch acting as an FCF, which splits traffic toward the Ethernet fabric and the FC fabric/target)

Single Hop Design


Unified Wire at the Access

- The first phase of the Unified Fabric evolution focused on the fabric edge
- Unified the LAN Access and the SAN Edge by using FCoE
- Consolidated adapters, cabling and switching at the first hop in the fabrics
- The Unified Edge supports multiple LAN and SAN topology options:
  - Virtualized Data Center LAN designs
  - Fibre Channel edge with direct-attached initiators and targets
  - Fibre Channel edge-core and edge-core-edge designs
  - Fibre Channel NPV edge designs

(Diagram: Nexus 5000 FCF-A and FCF-B at the LAN Access/SAN Edge, with FCoE southbound to servers, native FC northbound into Fabric A and Fabric B, and Ethernet into the LAN fabric)

Single Hop Design


The CNA Point of View

- The Converged Network Adapter (CNA) presents two PCI addresses to the Operating System (OS)
- The OS loads two unique sets of drivers and manages two unique application topologies
- The server participates in both topologies separately: two stacks, and thus two views of the same unified wire
- SAN multi-pathing provides failover between the two fabrics (SAN A and SAN B)
- NIC teaming provides failover within the same fabric (VLAN)
- The Nexus edge participates in both distinct FC and IP core topologies; the unified wire is shared by both

(Diagram: Nexus 5000 FCF-A and FCF-B feeding separate FC and IP core topologies over a shared unified wire; on the host, the FC driver is bound to the FC HBA PCI address and the Ethernet driver to the Ethernet NIC PCI address over the same 10GbE link)

Single Hop Design


The CNA Point of View

- In this first phase we were limited to direct-attached CNAs at the access
- Generation 1 CNA
  - Utilized the Cisco, Intel, Nuova Data Center Bridging Exchange protocol (CIN-DCBX)
  - Only supports direct attachment of a VN_Port to a VF_Port over the unified wire
- Generation 2 CNA
  - Utilizes the Converged Enhanced Ethernet Data Center Bridging Exchange protocol (CEE-DCBX)
  - Utilizes the FCoE Initialization Protocol (FIP) as defined by the T11 FC-BB-5 specification
  - Supports both direct and multi-hop attachment (through a Nexus 4000 FIP Snooping Bridge)

(Diagram: both CNA generations direct-attach a VN_Port to a VF_Port on the Nexus 5000 FCF; Generation 2 uses CEE-DCBX, Generation 1 uses CIN-DCBX)

Single Hop Design


The FCoE VLAN

- A VLAN is dedicated to every VSAN in the fabric
- FIP discovers the FCoE VLAN and signals it to the hosts
- Trunking is not required on the host driver; all FCoE frames are tagged by the CNA
- FCoE VLANs must not be configured on Ethernet links that are not designated for FCoE
- Maintains isolated edge switches for SAN A and SAN B, and separate LAN switches for NIC 1 and NIC 2 (standard NIC teaming)

  ! VLAN 20 is dedicated for VSAN 2 FCoE traffic
  (config)# vlan 20
  (config-vlan)# fcoe vsan 2

(Diagram: Nexus 5000 FCF-A carries VLANs 10,20 (VSAN 2 / Fabric A) and FCF-B carries VLANs 10,30 (VSAN 3 / Fabric B) on STP edge trunks to the host)
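The corresponding mapping on the SAN B side uses its own dedicated VLAN; a sketch assuming VLAN 30 carries VSAN 3 on FCF-B, as in the diagram:

  ! On Nexus 5000 FCF-B: VLAN 30 is dedicated for VSAN 3 FCoE traffic
  (config)# vlan 30
  (config-vlan)# fcoe vsan 3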

Single Hop Design


The FCoE VLAN

- To maintain the integrity of FC forwarding over FCoE, FCoE VLANs are treated differently than LAN VLANs: no flooding, no MAC learning, no broadcasts, etc.
- The FCoE VLAN must not be configured as a native VLAN (FIP uses the native VLAN)
- Separate FCoE VLANs must be used for FCoE in SAN A and SAN B
- Unified Wires must be configured as trunk ports and STP edge ports (see the sketch after this slide)

  ! VLAN 20 is dedicated for VSAN 2 FCoE traffic
  (config)# vlan 20
  (config-vlan)# fcoe vsan 2

(Diagram: Nexus 5000 FCFs carrying VLANs 10,20 and VLANs 10,30 on STP edge trunks toward the host, with VSAN 2 and VSAN 3 into Fabric A and Fabric B)
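A minimal sketch of the host-facing unified wire port configuration implied above, assuming hypothetical interface and VLAN numbers (Ethernet 1/5 carrying data VLAN 10 and FCoE VLAN 20):

  (config)# interface ethernet 1/5
  (config-if)# switchport mode trunk
  (config-if)# switchport trunk allowed vlan 10,20
  (config-if)# spanning-tree port type edge trunk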

Single Hop Design


The FCoE VLAN and STP

- FCoE Fabric A will have a different VLAN topology than FCoE Fabric B, and both are different from the LAN fabric
- PVST+ allows a unique topology per VLAN
- MST requires that all switches in the same region have the same mapping of VLANs to instances
  - MST does not require that all VLANs be defined on all switches
  - A separate instance must be used for the FCoE VLANs
  - Recommended: three separate instances - native Ethernet VLANs, SAN A VLANs and SAN B VLANs

  spanning-tree mst configuration
    name FCoE-Fabric
    revision 5
    instance 5 vlan 1-19,40-3967,4048-4093
    instance 10 vlan 20-29
    instance 15 vlan 30-39

Single Hop Design


Unified Wires and MCEC

- Optimal Layer 2 LAN design often leverages Multi-Chassis EtherChannel (MCEC)
- Nexus utilizes Virtual Port Channel (vPC) to enable MCEC, either between switches or to 802.3ad-attached servers
- MCEC provides network-based load sharing and redundancy without introducing Layer 2 loops in the topology
- MCEC results in diverging LAN and SAN high-availability topologies:
  - FC maintains separate SAN A and SAN B topologies
  - the LAN utilizes a single logical topology

(Diagram: direct-attach vPC topology - the server port-channels to Nexus 5000 FCF-A and FCF-B, which are vPC peers joined by a vPC peer link)

Single Hop Design


Unified Wires and MCEC

- vPC-enabled topologies with FCoE must follow specific design and forwarding rules:
  - With the NX-OS 4.1(3) releases, a vfc interface can only be associated with a vPC that has a single (one) CNA port attached to each edge switch (see the sketch after this slide)
  - While the port-channel is the same on N5K-1 and N5K-2, the FCoE VLANs are different
  - The vPC configuration works with Gen-2 FIP-enabled CNAs ONLY
  - FCoE VLANs are not carried on the vPC peer link (VLAN 10 only on the peer link in this example)
  - The FCoE and FIP Ethertypes are not forwarded over the vPC peer link

(Diagram: direct-attach vPC topology - the vPC contains only 2 x 10GE links, one to each Nexus 5000; FCF-A carries VLANs 10,20 and FCF-B carries VLANs 10,30 on STP edge trunks)
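A sketch of how the vfc is bound to the server-facing vPC port-channel in this topology, assuming hypothetical numbering (port-channel 10, vfc 10, data VLAN 10 and FCoE VLAN 20 on FCF-A):

  ! Server-facing port-channel that is a vPC member
  (config)# interface port-channel 10
  (config-if)# switchport mode trunk
  (config-if)# switchport trunk allowed vlan 10,20
  (config-if)# vpc 10
  ! Bind the vfc to the port-channel rather than to a physical port
  (config)# interface vfc 10
  (config-if)# bind interface port-channel 10
  (config-if)# no shutdown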

Single Hop Design


Unsupported Topologies

- A dual-CNA (FC initiator) connected via an EtherChannel to a single edge switch is unsupported
  - A vfc interface can only be bound to a port-channel with one local member interface
  - Not consistent with Fibre Channel high-availability design requirements (no isolation of SAN A and SAN B)
- If SAN design ever evolves to a shared physical fabric with only VSAN isolation for SAN A and B this could change (currently that appears to be a big "if")
- ISLs between the Nexus 5000 access switches break the SAN HA requirement

(Diagram: single-homed dual-CNA direct-attach topology, with both CNA ports channeled to one Nexus 5000 carrying VLANs 10,20,30)

Single Hop Design


Introduction of the 10Gig/FCoE Fabric Extender - Nexus 2232

- 32 server-facing 10Gig/FCoE ports
- T11 standards-based FIP/FCoE support on all ports
- 8 10Gig/FCoE uplink ports for connections to the Nexus 5000
- Acts as a remote line card of the Nexus 5000 (FEX-2232)
- Management and configuration are handled by the Nexus 5000
- Support for Converged Enhanced Ethernet including PFC
- Part of the Cisco Nexus 2000 Fabric Extender family
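Because the FEX is managed as a remote line card, its fabric links are simply associated with a FEX number on the parent Nexus 5000. A minimal sketch, assuming hypothetical numbering (FEX 100 on uplinks Ethernet 1/1-2 bundled into a fabric port-channel):

  (config)# feature fex
  (config)# fex 100
  (config-fex)# pinning max-links 1
  (config)# interface ethernet 1/1-2
  (config-if-range)# switchport mode fex-fabric
  (config-if-range)# fex associate 100
  (config-if-range)# channel-group 100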

Single Hop Design


Extending the FCoE Edge - Nexus 2232

- The server Ethernet driver connects to the FEX with NIC teaming (AFT, TLB) or with vPC (802.3ad)
- FCoE runs over a vPC member port with a single link from server to FEX
- The FEX is single-homed to the upstream Nexus 5000
- FEX fabric links can be connected to the Nexus 5000 with individual links (static pinning) or a port-channel; oversubscribed 4:1
- Consistent with separate LAN Access and SAN Edge topologies
- Requires FIP-enabled CNAs

(Diagram: Nexus 5000 FCF-A and FCF-B into SAN A and SAN B; each Nexus 2232 10GE FEX is single-homed - fabric link option 1 is a single-homed port-channel, option 2 is static pinning; server option 1 is FCoE on individual links with Ethernet traffic active/standby, server option 2 is FCoE on a vPC member port-channel with a single link)

Single Hop Design

Extending the FCoE Edge - Nexus 2232

- The Nexus 2232 cannot be configured in a dual-homed configuration (vPC between two N5Ks) when configured to support FCoE-attached servers
  - An MCEC port-channel would not keep SAN A and SAN B traffic isolated
  - The Nexus 2000 is not supported with dedicated FCoE and dedicated IP upstream fabric links
- The Nexus 2232 can currently only be connected to the Nexus 5000 when configured to support FCoE-attached servers
  - The Nexus 7000 will support the Nexus 2000 in Ethernet-only mode in CY2010 (support for FCoE on FEX is targeted for CY2011 on next-generation N7K line cards)

(Diagram: Nexus 2232 FEXs dual-homed over a vPC fabric port-channel toward SAN A and SAN B - an unsupported combination for FCoE - with a Nexus 7000 shown for the Ethernet-only case)

Agenda
- Why are we here?
- Background Information: FCoE Building Blocks and Terminology, DCB Standard, FCoE Protocol Information
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions

What is NPIV? And Why?


- N-Port ID Virtualization (NPIV) provides a means to assign multiple FCIDs to a single N_Port
  - A limitation exists in FC where only a single FCID can be handed out per F_Port; therefore an F_Port can only accept a single FLOGI
- Allows multiple applications to share the same Fibre Channel adapter port
- Usage applies to applications such as VMware, MS Virtual Server and Citrix

(Diagram: an application server's single N_Port logs in to the F_Port of an FC NPIV core switch three times - Email I/O as N_Port_ID 1, Web I/O as N_Port_ID 2, File Services I/O as N_Port_ID 3)
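NPIV is enabled with a single feature command on the core switch; a minimal sketch (the switch prompt is illustrative):

  core(config)# feature npiv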

What is NPV? And Why?


- The N-Port Virtualizer (NPV) utilizes NPIV functionality to allow a switch to act like a server, performing multiple logins through a single physical link
- Physical servers connected to the NPV switch log in to the upstream NPIV core switch
- No local switching is done on an FC switch in NPV mode
- An FC edge switch in NPV mode does not take up a Domain ID
  - Helps alleviate Domain ID exhaustion in large fabrics
- Supported on the Nexus 5000, MDS 91xx, MDS blade switches, and the UCS Fabric Interconnect

(Diagram: Server1-3 log in through F_Ports (Eth1/1-1/3) on the NPV edge switch; the NPV switch's NP_Port connects to an F_Port on the FC NPIV core switch, which hands out N_Port_IDs 1-3)
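A minimal sketch of NPV mode on the edge switch, with a command to verify the proxied logins. Note that this is illustrative only: on some platforms enabling NPV changes the switch's mode of operation and may require a reload.

  edge(config)# feature npv
  edge# show npv flogi-table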

Multi-Hop Design

Considerations for FCoE Multi-hop

What design considerations do we have when extending FCoE beyond the Unified Edge?
- High availability for both LAN and SAN
- Oversubscription for SAN and LAN
- Ethernet Layer 2 and STP design
- Where does Unified Wire make sense over Unified Dedicated Wire?
  - Unified Wire provides sharing of a single link for both FC and Ethernet traffic

(Diagram: LAN fabric plus Fabric A and Fabric B reached through an FCF, with a DCB + FIP Snooping bridge between the FCF and the hosts)

Multi-Hop Design

FCoE Pass-Through Options

- Multi-hop FCoE networks allow FCoE traffic to extend past the access layer (first hop)
- In multi-hop FCoE the role of a transit Ethernet bridge needs to be evaluated
  - Avoid Domain ID exhaustion
  - Ease management
- FIP Snooping is the minimum requirement suggested in FC-BB-5
- Fibre Channel over Ethernet NPV (FCoE-NPV) is a new capability intended to solve a number of design and management challenges

(Diagram: DCB-capable Ethernet switches sitting between the host VN_Ports and the FCF VF_Ports in SAN A and SAN B)

Multi-Hop Design

FIP Snooping

What is FIP Snooping?
- Efficient, automatic configuration of ACLs that locks down the forwarding path from CNA to FCF
- Accomplished by snooping FIP packets

Why FIP Snooping?
- Security: protection from MAC address spoofing of FCoE end devices (ENodes)
- Fibre Channel links are point-to-point; Ethernet bridges can utilize ACLs to provide the equivalent path control (the equivalent of point-to-point)

Support for FIP Snooping?
- Nexus 4000 (blade switch for the IBM BladeCenter H)

(Diagram: FIP-capable multi-hop topology - the FIP snooping bridge sits between the ENode (MAC 0E.FC.00.07.08.09) and the FCF (MAC 0E.FC.00.DD.EE.FF) and blocks a rogue device spoofing the FCF MAC)

Multi-Hop Design

Fibre Channel over Ethernet NPV Bridge

- On the control plane (FIP Ethertype), a Fibre Channel over Ethernet NPV bridge improves on a FIP snooping bridge by intelligently proxying FIP functions between a CNA and an FCF
  - It takes control of how a live network builds FCoE connectivity
  - It makes the connectivity very predictable, without the need for an FCF at the next hop from the CNA
- On the data plane (FCoE Ethertype), an FCoE-NPV bridge offers more ways to engineer traffic between CNA-facing ports and FCF-facing ports
- An FCoE-NPV bridge knows nothing about Fibre Channel and can't parse packets with the FCoE Ethertype

Multi-Hop Design

Fibre Channel over Ethernet NPV Bridge

- Proxies FIP functions between a CNA and an FCF:
  - FCoE VLAN configuration and assignment
  - FCF assignment
- FCoE-NPV load balances logins from the CNAs evenly across the available FCF uplink ports
- FCoE-NPV takes the VSAN into account when mapping or pinning logins from a CNA to an FCF uplink
- Operations and management processes are in line with today's SAN-admin practices
- Similar to NPV in a native Fibre Channel network

Multi-Hop Design

FCoE-NPV - ENode Login Process

- FCoE-NPV is an FCoE pass-through device
  - All FCoE switching is performed at the upstream FCF
  - Addressing is passed out by the upstream FCF: the Domain ID and FC-MAP come from the FCF
  - The FCoE-NPV bridge does not consume a Domain ID
- More (and less expensive) FCoE connectivity to hosts without running into the Domain ID issue, with consistent management
- FCoE-NPV is "FIP snooping plus"

(Diagram: the host VN_Port sends its FLOGI through a VF_Port on the FCoE-NPV bridge; the bridge's VNP_Port forwards it to a VF_Port on the FCF in Fabric A, which connects on to the native FC target)

Multi-Hop Design

Extending FCoE with FIP Snooping

- The Nexus 4000 is a Unified Fabric capable blade switch: a DCB-enabled FIP Snooping Bridge
- Dual topology requirements remain for FCoE multi-hop
- The server's IP connection to the Nexus 4000 is Active/Standby; MCEC is not currently supported from the blade server to the Nexus 4000
- Option 1: Unified Dedicated Wires from the Nexus 4000 to the Nexus 5000
- Option 2: a single-homed Unified Wire port-channel from the Nexus 4000 to the Nexus 5000

(Diagram: mezzanine CNAs connect to Nexus 4000 FIP Snooping Bridge-A and Bridge-B, which uplink to Nexus 5000 FCF-A (option 2, single-homed unified wire) and FCF-B (option 1, unified dedicated wire) toward SAN A and SAN B)

Multi-Hop Design

Extending FCoE with VE_Ports

Extending FCoE fabrics beyond direct-attached initiators can be achieved in two basic ways:
- Extend the Unified Edge (Stage 1): add DCB-enabled Ethernet switches between the VN and VF ports (stretch the link between the VN_Port and the VF_Port)
- Extend Unified Fabric capabilities into the SAN core: leverage FCoE wires between Fibre Channel over Ethernet switches (VE_Ports)

(Diagram: Nexus 5000 FCFs using VE_Ports for FCoE ISLs to each other and to an MDS 9000 FCF in Fabric A/B, while a DCB + FIP Snooping bridge extends the VN-to-VF link into a multi-hop Ethernet access fabric)

Multi-Hop Design

Extending FCoE with FCoE-NPV

Two basic design options are possible when deploying any FCoE multi-hop configuration:
- Option 1 - Unified Dedicated Wire
  - Allows MCEC for IP/Ethernet
  - Dedicated FCoE links for storage
- Option 2 - Unified Wire
  - Leverages server-side failover mechanisms for both SAN and LAN
  - Allows for a Unified Wire beyond the server-to-first-device link

(Diagram: FCoE-NPV devices uplink to Nexus 5000 FCF-A over a single-homed unified wire (option 2) and to FCF-B over dedicated links and topologies (option 1) toward SAN A and SAN B)

Multi-Hop Design

Unsupported Topologies

- SAN and LAN high-availability design requirements are not always identical
- An optimal Layer 2 LAN design may not meet FC high-availability and operational design requirements
- Features such as vPC and MCEC are not viable, and not supported, beyond the direct-attached server
  - The server has two stacks and manages two topologies; the Layer 2 network has a single topology
  - FIP and FCoE frames load-shared over MCEC on a per-flow basis provide NO SAN A / SAN B isolation
- L2MP and TRILL provide options to change the design paradigm and come up with potential solutions
- FCoE over L2MP/TRILL is not currently supported

(Diagram: a DCB-enabled MCEC between the access layer and the FCF toward SAN A and SAN B - unsupported)

Agenda
- Why are we here?
- Background Information: FCoE Building Blocks and Terminology, DCB Standard, FCoE Protocol Information
- Design Requirements: Classical Ethernet + Classical Fibre Channel = ??
- Single Hop Designs
- Multi-Hop Designs
- FCoE Deployment Considerations
- Questions

FCoE Deployment Considerations


Dedicated Aggregation/Core Devices

Where is it efficient to leverage unified wire, i.e. shared links for both SAN and LAN traffic?
- At the edge of the fabric, the volume of end nodes allows for a greater degree of sharing for LAN and SAN
- In the core we will not reduce the number of links; we will either maintain separate FC or FCoE links to the SAN core and Ethernet links to the LAN core
- LAN and SAN HA models are very different (and not fully compatible)
- FC and FCoE are prone to head-of-line blocking (HOLB) in the network, which limits the physical topologies we can build
  - e.g. 10 x 10G uplinks to the LAN aggregation will require 10 x 10G links to the next-hop SAN core (with targets attached): no savings, actually spending more than direct uplinks to the SAN core
- Targets are attached to the SAN core (the LAN aggregation and SAN core have different topology functions)
- So where is it more beneficial to deploy two cores, SAN and LAN, over a unified core topology?

FCoE Deployment Considerations


Migration Strategy for FCoE

- Migrate to 10G FCoE in place of 4/8G FC links (Ethernet price-per-bit economics)
- The edge switch runs as an FCF with VE_Ports connecting to an FCF on the core switch
  - Must be careful of Domain ID creep
- FSPF forwarding for FCoE traffic is end to end
- Hosts log in to the FCF they are attached to (the access FCF)
- Storage devices log in to the FCF at the core/storage edge
- Maintains HA requirements from both the LAN and SAN perspective

(Diagram: Nexus 5000 FCF-B connecting over VE_Ports to an MDS 9000 FCF-B, with SAN A and SAN B kept separate)

FCoE Deployment Considerations


Migration Strategy for FCoE

- Migrate to 10G FCoE in place of 4/8G FC links (Ethernet price-per-bit economics)
- The edge switch runs either as an FCF in NPV mode or in FCoE-NPV mode, with the FCF migrating to the SAN core
- Fibre Channel over Ethernet NPV (FCoE-NPV) is a new construct intended to solve a number of system management problems
- Using FCoE-NPV alleviates the Domain ID issue
- HA planning for the SAN side is required: does losing a core switch mean the loss of a whole fabric?

(Diagram: FCoE-NPV edge devices uplinking to FCFs at the core of SAN A and SAN B)

FCoE Deployment Considerations


Shared Aggregation/Core Devices

Does passing FCoE traffic through a larger aggregation point make sense?
- Multiple links are required to support the HA models
- A 1:1 ratio between access-to-aggregation and aggregation-to-SAN-core links is required, unless the FCoE-NPV/FCoE uplink is deliberately over-provisioned
- The SAN is more vulnerable to HOLB, so plan for appropriate capacity on any core ISL: congestion on the Agg-Core links will head-of-line block all attached edge devices
- When are direct edge-to-core links for FCoE more cost effective than adding another hop?
- A smaller edge device is more likely to be able to use under-provisioned uplinks

FCoE Deployment Considerations


Shared Aggregation/Core Devices

- LAN and SAN network designs have different requirements
- Factors that will influence this use case:
  - Port density
  - Operational roles and change management
  - Storage device types
- Potentially viable for smaller environments (e.g. multiple VDCs splitting the FCoE SAN, LAN aggregation and LAN core roles, with direct-attached FCoE targets)
- Larger environments will need dedicated FCoE SAN devices providing target ports
  - Use connections to a SAN, or use a storage edge of other FCoE/DCB-capable devices

(Diagram: Nexus 5000 FCF-A and FCF-B uplinking to a shared core that hosts FCoE SAN, LAN Agg and LAN Core VDCs with direct-attach FCoE targets)

FCoE Deployment Considerations


Dedicated Aggregation/Core Devices

- The topology will vary based on scale (single vs multiple tiers)
- The architecture as defined for product development has a dual core: dedicated SAN and LAN cores
- Question: where is the demarcation between Unified Wire and Unified Fabric? As the topology grows there is less Unified Wire
- In all practical terms the edge is the unified point for LAN and SAN (not the core/aggregation)
- In smaller topologies where core and edge merge, everything collapses but the essential design elements remain

(Diagram: dedicated SAN A/SAN B and LAN cores above edge FCFs carrying VLANs 10,20 and VLANs 10,30)

FCoE Deployment Considerations


Larger Fabric Multi-Hop Topologies

- Multi-hop edge/core/edge topology
- Core SAN switches supporting FCoE:
  - N7K with DCB/FCoE line cards
  - MDS with FCoE line cards (Sup2A)
- Edge FC switches supporting either:
  - N5K in FCoE-NPV mode with FCoE uplinks to the FCoE-enabled core (VNP to VF)
  - N5K or N7K as an FC switch with FCoE ISL uplinks (VE to VE)
- Scaling of the fabric (FLOGIs, ...) will most likely drive the selection of which mode to deploy

(Diagram: servers and FCoE-attached storage hang off the edge switches - one edge FCF in switch mode using VE_Port ISLs, one edge switch in FCoE-NPV mode using a VNP-to-VF uplink - into N7K or MDS FCoE-enabled fabric switches at the core, with FC-attached storage at the far edge)

So Remember
- All Unified options are important and have different places within the Data Center network
- FCoE offers a more flexible and cheaper deployment option than native Fibre Channel
- FCoE IS Fibre Channel
- Multi-hop FCoE extends the FCoE fabric beyond the access layer
- Cisco offers an end-to-end FCoE solution with the Nexus platform

Question and Answer


Thanks for attending this session.

