2011 Cisco
Page 1 of 217
DATA CENTER VIRTUALIZATION (DCV) LAB GUIDE

1   DATA CENTER VIRTUALIZATION TRAINING LAB OVERVIEW ............................ 1
      DATA CENTER VIRTUALIZATION ARCHITECTURE ................................... 6
      DOCUMENTATION KEY ......................................................... 7
      LAB TOPOLOGY AND ACCESS ................................................... 8
2   LAB REFERENCE GUIDE ........................................................ 11
      CABLING INFORMATION ...................................................... 13
      REQUIRED SOFTWARE VERSIONS ............................................... 16
      GLOBAL CONFIGURATION VARIABLES ........................................... 17
      NETAPP CONFIGURATION VARIABLES ........................................... 18
      CISCO CONFIGURATION VARIABLES ............................................ 21
      VMWARE CONFIGURATION VARIABLES ........................................... 22
3   ............................................................................ 23
      NEXUS INITIAL SETUP ...................................................... 27
      ENABLE FEATURES .......................................................... 34
      NEXUS GLOBAL SETTINGS .................................................... 35
      NEXUS CONFIGURE ETHERNET INTERFACES ...................................... 40
      CONFIGURING PORT CHANNELS ................................................ 43
      CONFIGURING VIRTUAL PORT CHANNELS ........................................ 49
      CONFIGURING FEX ON N5K-1 AND N5K-2 ....................................... 54
      PERFORM THE INITIAL SETUP OF MDS9124 ..................................... 58
4   POWER ON THE ESX HOSTS AND VERIFY THE NEXUS INTERFACES ..................... 61
5   CREATE FIBRE CHANNEL OVER ETHERNET (FCOE) INTERFACES ....................... 63
      DEVICE ALIASES, ZONES, AND ZONESETS ...................................... 71
6   ............................................................................ 77
      ESXI INSTALLATION AND BASIC SETUP ........................................ 77
      ESXI NETWORKING .......................................................... 82
      ESXI DATASTORES .......................................................... 89
7   ............................................................................ 92
      ADDING HOSTS TO VMWARE VCENTER SERVER ................................... 94
      CONFIGURE FIBRE CHANNEL STORAGE ON ESX HOSTS ............................. 95
      ADD A VM FROM NFS ATTACHED STORAGE ....................................... 97
8   ............................................................................ 98
      INSTALL VIRTUAL SUPERVISOR MODULE (VSM) AS A VM ON ESXI .................. 98
      REGISTERING THE CISCO NEXUS 1000V AS A VCENTER PLUG-IN .................. 100
      CONFIGURING NETWORKING ON THE CISCO NEXUS 1000V ......................... 101
      NEXUS 1000V CREATE VLANS ................................................ 102
      NEXUS 1000V CREATE PORT PROFILES ........................................ 104
      INSTALL VIRTUAL ETHERNET MODULES (VEMS) ON ESXI HOSTS ................... 107
      MIGRATE ESXI HOSTS TO NEXUS 1000V ....................................... 108
      MIGRATE VIRTUAL MACHINE PORTS ........................................... 112
9   ........................................................................... 113
      LAB TOPOLOGY ............................................................ 114
      JOB AIDS ................................................................ 117
      BASE CONFIGURATION ...................................................... 127
      SPANNING TREE ........................................................... 129
      INTERFACE CONFIGURATION ................................................. 133
      OSPF CONFIGURATION ...................................................... 134
      CONFIGURING OTV TO CONNECT EDGE DEVICES TO REMOTE END-SITES ............. 138
      OTV VERIFICATION AND MONITORING ......................................... 144
      VERIFYING THE VMWARE VSPHERE SETUP ...................................... 148
10  ........................................................................... 151
      10.1 MISSING L2 CONNECTIVITY ACROSS SITES WITHOUT OTV ................... 152
      10.2 SUCCESSFUL CONNECTIVITY WITHIN SAME SITE ........................... 153
      10.3 SUCCESSFUL VMOTION ACROSS SITES DUE TO L2 CONNECTIVITY WITH OTV .... 154
11  ........................................................................... 158
      11.1 CLONE A VM TO SAN ATTACHED STORAGE ................................. 159
      11.2 CONFIGURE VIRTUAL MACHINE NETWORKING ............................... 160
      11.3 MIGRATE A VM TO SAN ATTACHED STORAGE ............................... 163
      11.4 CONFIGURE VM DISKS (OPTIONAL) ...................................... 165
12  SUMMARY ................................................................... 168
13  APPENDIX A: COPYING SWITCH CONFIGURATIONS FROM A TFTP SERVER
14  APPENDIX B: RECOVERING FROM THE LOADER PROMPT
15  NETAPP FAS2020A DEPLOYMENT PROCEDURE: PART 1
      15.1 NETAPP ASSIGNING DISKS ............................................. 173
      15.2 NETAPP ONTAP INSTALLATION .......................................... 174
      15.3 NETAPP INITIAL SETUP ............................................... 175
      15.4 NETAPP - AGGREGATES AND VOLUMES .................................... 179
      15.5 NETAPP NETWORK & SECURITY .......................................... 180
      15.6 NETAPP - VOLUMES ................................................... 183
      15.7 NETAPP IP SPACE AND MULTISTORE ..................................... 187
      15.8 NETAPP NFS ......................................................... 190
      15.9 NETAPP PERFORMANCE OPTIMIZATION .................................... 190
16  ........................................................................... 192
      16.1 FLEXCLONE .......................................................... 193
      16.2 REMOVE CLONED VOLUMES AND LUNS ..................................... 197
      16.3 REMOVING VFILERS ................................................... 197
      16.4 REMOVING VFILER VOLUMES ............................................ 197
17  ........................................................................... 198
18  ........................................................................... 215
Table 1 - Device Management Addresses and Accounts ............................... 11
Table 2 - ESXi Network Parameters ................................................ 11
Table 3 - Virtual Machines ....................................................... 11
Table 4 - VLAN Summary ........................................................... 12
Table 5 - Ethernet Cabling Information ........................................... 14
Table 6 - Ethernet Cabling Information - Management Switch ....................... 15
Table 7 - Fibre Channel Cabling Information ...................................... 15
Table 8 - Data Center Virtualization global variables ............................ 17
Table 9 - NetApp FAS2020 A variables ............................................. 18
Table 10 - NetApp licensing variables ............................................ 20
Table 11 - NetApp disk and volume variables ...................................... 20
Table 12 - Cisco Nexus 5010 variables ............................................ 21
Table 13 - Cisco Nexus 1000v variables ........................................... 21
Table 14 - VMware variables ...................................................... 22
Table 15 - Commands .............................................................. 24
Table 16 - Commands .............................................................. 25
Table 17 - WWPN Addresses ........................................................ 71
Table 18 - IP Addresses for Uplinks and Loopbacks ............................... 116
Table 19 - OTV Edge Access Ports Connectivity to Access Switches ................ 116
Table 20 - OTV Multicast Addresses .............................................. 116
Table 21 - Commands used in this exercise ....................................... 118
Important: Prior to configuration, be sure to obtain the latest version of this document at http://db.tt/LI79cwH.
Welcome to the Cisco Data Center Virtualization Lab. This lab is intended to provide you with a solid understanding of what you need to implement a wide range of solution features. The lab tasks are designed to focus on achieving:

- Customer awareness of what the solution can do for them.
- Customer understanding of why the Cisco solution is unique and an improvement over the status quo or competitive solutions.
- Customer introduction to the deployment process of the demonstrated solution.
The FlexPod demonstration should go beyond the topics of interest to the technical decision maker (TDM) and should appeal to the business decision maker (BDM) by focusing on the benefits that this solution provides. The Quick Reference Guide section provides general positioning and primary marketing messages, as well as a guide to which demonstrations will work together to show the benefits for a particular person in the workplace. As always, you will want to tailor your sales presentation to address specific audience needs or issues.

DEMONSTRATION SCRIPT STYLE

The demonstration scripts are organized by task; they include important marketing messages as well as product and feature overviews and demonstration instructions. Using the Quick Reference Guide, you will be able to quickly tailor demonstrations for different customers, while communicating the benefits of each one to facilitate product sales.
Industry trends indicate a vast data center transformation toward shared infrastructures. Enterprise customers are moving away from silos of information toward shared infrastructures, then to virtualized environments, and eventually to the cloud, to increase agility and reduce costs. The Cisco Data Center Virtualization lab is built on the Cisco Unified Computing System (Cisco UCS), Cisco Nexus data center switches, NetApp FAS storage components, and a range of software partners. This guide is based on the design principles of the FlexPod Implementation Guide.
AUDIENCE
This document describes the basic architecture of FlexPod and also prescribes the procedure for deploying a base Data Center Virtualization configuration. The intended audience of this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who want to deploy the core Data Center Virtualization architecture.
The Data Center Virtualization architecture can easily be scaled as requirements and demand change. This includes scaling both up (adding additional resources within a Data Center Virtualization unit) and out (adding additional Data Center Virtualization units). Data Center Virtualization includes NetApp storage, Cisco networking, Cisco Unified Computing System (Cisco UCS), and virtualization software, in which the computing and storage fit in one data center rack with the networking residing in the same or a separate rack. The networking components can accommodate multiple Data Center Virtualization configurations. Figure 1 shows our lab components. Our lab hardware includes:

- Two Cisco Nexus 5010 switches
- One Cisco MDS 9124 switch
- Two Cisco UCS C200 M1 and one Cisco UCS C250 M1 servers powered by Intel Xeon processors
  o Quantities and types might vary per lab
- One NetApp FAS2020 filer

For server virtualization, the lab includes VMware vSphere Enterprise Plus with vCenter Standard.
Your management tasks will be performed on an RDP server (VC_SERVER or MGMT_PC). You will access the UCS, Nexus switches, and other devices via SSH and each device's element manager. The PuTTY SSH client is on the desktop.

Figure 2 - Lab Tools Interface
Here is a view of how all the Data Center Virtualization pods are interconnected.

Figure 3 - Full Topology for Three Pods in a VDC Deployment
The following diagram illustrates how all the different networks/VLANs are interconnected. The router in the center is connected to the Nexus 5000s via a port-channel trunk.

Figure 4 - Logical Topology of Lab
The following section provides detailed information on configuring all aspects of a base FlexPod environment. The Data Center Virtualization architecture is flexible; therefore, the exact configuration detailed in this section might vary for customer implementations depending on specific requirements. Although customer implementations might deviate from the information that follows, the best practices, features, and configurations listed in this section should still be used as a reference for building a customized Data Center Virtualization architecture.
Table 1 - Device Management Addresses and Accounts

Management IP    Username   Password
10.1.111.1       admin      1234Qwer
10.1.111.2       admin      1234Qwer
10.1.111.3       admin      1234Qwer
10.1.111.4       admin      1234Qwer
10.1.111.40      admin      1234Qwer
10.1.111.161     admin      1234Qwer
10.1.111.162     admin      1234Qwer
10.1.111.163     admin      1234Qwer

Table 2 - ESXi Network Parameters

Management IP    Username   Password   vMotion       NFS
10.1.111.21      root       1234Qwer   10.1.151.21   10.1.211.21
10.1.111.22      root       1234Qwer   10.1.151.22   10.1.211.22
10.1.111.23      root       1234Qwer   10.1.151.23   10.1.211.23

Table 3 - Virtual Machines

Role           Management IP   Username        Password
vCenter, VSC   10.1.111.100    administrator   1234Qwer
N1KV VSM       10.1.111.17     admin           1234Qwer
AD,DNS,DHCP    10.1.111.10
XenDesktop     10.1.111.11
XenApp         10.1.111.12
PVS            10.1.111.13
Table 4 - VLAN Summary

VLAN descriptions: MGMT, VMTRAFFIC, VMOTION, CTRL-PKT, NFS, Fabric A FCoE VLAN, Fabric B FCoE VLAN, Native VLAN, OTV Site VLAN

VSANs: 11, 12
Table 5 - Ethernet Cabling Information

Device     Local Port   Device        Access Port
N5K-1      e1/4         MGMT Switch   1/23
N5K-1      e1/7         FEX A         port1
N5K-1      e1/8         FEX A         port2
N5K-1      e1/9         ESX1          vmnic0
N5K-1      e1/10        ESX2          vmnic0
N5K-1      e1/11        ESX3          vmnic4
N5K-1      e1/17        N5K-2         e1/17
N5K-1      e1/18        N5K-2         e1/18
N5K-1      e1/19        N7K-1         e1/14
N5K-1      e1/20        N7K-2         e1/14
N5K-1      e1/19        N7K-1         e1/22
N5K-1      e1/20        N7K-2         e1/22
N5K-1      e1/19        N7K-1         e1/30
N5K-1      e1/20        N7K-2         e1/30
N5K-1      m0           MGMT Switch   e1/7
N5K-2      e1/4         3750          1/24
N5K-2      e1/7         FEX B         port1
N5K-2      e1/8         FEX B         port2
N5K-2      e1/9         ESX1          vmnic1
N5K-2      e1/10        ESX2          vmnic1
N5K-2      e1/11        ESX3          vmnic5
N5K-2      e1/17        N5K-1         e1/17
N5K-2      e1/18        N5K-1         e1/18
N5K-2      e1/19        N7K-1         e1/16
N5K-2      e1/20        N7K-2         e1/16
N5K-2      e1/19        N7K-1         e1/24
N5K-2      e1/20        N7K-2         e1/24
N5K-2      e1/19        N7K-1         e1/32
N5K-2      e1/20        N7K-2         e1/32
N5K-2      m0           MGMT Switch   e1/8
NetApp-A   bmc          MGMT Switch   e1/12
NetApp-A   e0a          MGMT Switch   e1/13
NetApp-A   e0b          MGMT Switch   e1/14
ESX1       vmnic0       N5K-1         e1/9
ESX1       vmnic1       N5K-2         e1/9
ESX1       vmnic2       FEX A         e1/1
ESX1       vmnic3       FEX B         e1/1
ESX1       cimc         3750          1/1
ESX2       vmnic0       N5K-1         e1/10
ESX2       vmnic1       N5K-2         e1/10
ESX2       vmnic2       FEX A         e1/2
ESX2       vmnic3       FEX B         e1/2
ESX2       cimc         3750          1/3
ESX3       vmnic0       FEX A         e1/3
ESX3       vmnic1       FEX B         e1/3
ESX3       vmnic4       N5K-1         e1/11
ESX3       vmnic5       N5K-2         e1/11
ESX3       cimc         3750          1/5
Nexus 1010 A&B Ethernet cabling information. Note: This requires two 1GbE copper SFPs (GLC-T=) on the N5K side.
Table 6 - Ethernet Cabling Information - Management Switch (all local ports are on the MGMT Switch)

Local Port   Device          Access Port
1/0/1        ESX1            CIMC
1/0/2        ESX1            vmnic
1/0/3        ESX2            CIMC
1/0/4        ESX2            vmnic
1/0/5        ESX3            CIMC
1/0/6        ESX3            vmnic
1/0/7        N5K-1           m0
1/0/8        N5K-2           m0
1/0/9        MDS9124         m0
1/0/10       VC Server RDC   -
1/0/11       VC Server       -
1/0/12       NTAP            bmc
1/0/13       NTAP            e0a
1/0/14       NTAP            e0b
1/0/15       FlexMGMT        1/37
1/0/16       FlexMGMT        1/38
1/0/17       N7K-1           3/24
1/0/18       N7K-2           3/24
1/0/15       FlexMGMT        1/39
1/0/16       FlexMGMT        1/40
1/0/17       N7K-1           3/36
1/0/18       N7K-2           3/36
1/0/15       FlexMGMT        1/41
1/0/16       FlexMGMT        1/42
1/0/17       N7K-1           3/48
1/0/18       N7K-2           3/48
1/0/15       FlexMGMT        1/43
1/0/16       FlexMGMT        1/44
1/0/23       N5K-1           e1/4
1/0/24       N5K-2           e1/4
Table 7 - Fibre Channel Cabling Information

Device     Local Port   Device     Access Port
N5K-1      fc2/3        MDS9124    fc1/1
N5K-1      fc2/4        MDS9124    fc1/2
N5K-2      fc2/3        MDS9124    fc1/3
N5K-2      fc2/4        MDS9124    fc1/4
NetApp-A   0a           MDS9124    fc1/5
NetApp-A   0b           MDS9124    fc1/6
MDS9124    fc1/1        N5K-1      fc2/3
MDS9124    fc1/2        N5K-1      fc2/4
MDS9124    fc1/3        N5K-2      fc2/3
MDS9124    fc1/4        N5K-2      fc2/4
MDS9124    fc1/5        NetApp A   0a
MDS9124    fc1/6        NetApp A   0b
Table 10 - NetApp licensing variables

Variable Name                       Customized Value   Description
NetApp cluster license code         0                  Provide the license code to enable cluster mode within the FAS2020 A configuration.
NetApp Fibre Channel license code   0                  Provide the license code to enable the Fibre Channel protocol.
NetApp Flash Cache license code     0                  Provide the license code to enable the installed Flash Cache adapter.
NetApp NearStore license code       0                  Provide the license code to enable the NearStore capability, which is required to enable deduplication.
NetApp deduplication license code   0                  Provide the license code to enable deduplication.
NetApp NFS license code             0                  Provide the license code to enable the NFS protocol.
NetApp MultiStore license code      0                  Provide the license code to enable MultiStore.
NetApp FlexClone license code       0                  Provide the license code to enable FlexClone.
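On the FAS2020's Data ONTAP command line, these codes are applied with the license command. A minimal sketch (the codes shown are placeholders for the values in the table, and the prompt/hostname is illustrative):

```
netapp-A> license add XXXXXXX
netapp-A> license
```

license add installs one code at a time, so repeat it for each licensed feature; license with no arguments then lists which features are enabled.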
VLAN ID for VM traffic
Default password
DNS server name
Domain name suffix
VSAN ID for fabric A: 12
FCoE VLAN ID for fabric A
FCoE VLAN ID for fabric B
SSL country name code
SSL state or province name
SSL locality name
SSL organization name
SSL organization unit
NTP Server IP Address
Table 9 - NetApp FAS2020 A variables

Variable Name                                                    Customized Value
NetApp FAS2020 A netboot interface IP address                    Incomplete
NetApp FAS2020 A netboot interface subnet mask                   Incomplete
NetApp FAS2020 A netboot interface gateway IP address            Incomplete
NetApp Data ONTAP 7.3.5 netboot kernel location                  Incomplete
NetApp FAS2020 A management interface IP address                 10.1.111.151
NetApp FAS2020 A management interface subnet mask                255.255.255.0
NetApp FAS2020 A management interface gateway IP address         10.1.111.254
NetApp FAS2020 A administration host IP address                  10.1.111.100
NetApp FAS2020 A location                                        Nevada
NetApp FAS2020 A mailhost name                                   Incomplete
NetApp FAS2020 A mail host IP address                            Incomplete
NetApp Data ONTAP 7.3.5 flash image location                     Incomplete
NetApp FAS2020 A administrator's e-mail address                  pephan@cisco.com
NetApp FAS2020 A infrastructure vFiler IP address                10.1.211.151
NetApp FAS2020 A infrastructure vFiler administration host IP    10.1.111.10

Note: Provide the IP address of the host that will be used to administer the infrastructure vFiler unit on FAS2020 A. This variable might have the same IP address as the administration host IP address for the physical controllers as well.
Table 11 - NetApp disk and volume variables

Variable Name                                 Customized Value   Description
NetApp FAS2020 A total disks in aggregate 1   9                  Number of disks to be assigned to aggr1 on controller A.
NetApp FAS2020 A ESXi boot volume size        20g                Each Cisco UCS server boots by using the FC protocol. Each FC LUN will be stored in a volume on either controller A or controller B. Choose the appropriate volume size depending on how many ESXi hosts will be in the environment.
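These two values feed directly into the aggregate and volume creation commands used in the NetApp deployment sections. A sketch in Data ONTAP 7-Mode syntax (the volume name is illustrative, not necessarily the one used in the lab):

```
netapp-A> aggr create aggr1 9
netapp-A> vol create esxi_boot aggr1 20g
```

aggr create aggr1 9 pulls nine spare disks into the new aggregate; vol create then carves a 20 GB flexible volume out of it to hold an ESXi boot LUN.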
Table 12 - Cisco Nexus 5010 variables (values)

10.1.111.2
255.255.255.0
255.255.255.0
10.1.111.254
10.1.111.254
10
Table 13 - Cisco Nexus 1000v variables

Variable                    Customized Value   Description
VSM management IP address   10.1.111.17        Provide the IP address for the management interface for the primary Cisco Nexus 1000v virtual supervisor module.
VSM management netmask      255.255.255.0      Provide the netmask for the management interface for the primary Cisco Nexus 1000v virtual supervisor module.
VSM management gateway      10.1.111.254       Provide the gateway for the management interface for the primary Cisco Nexus 1000v virtual supervisor module.
VSM domain ID               11                 Provide a unique domain ID for the Cisco Nexus 1000v VSMs. This domain ID should be different from the domain ID used for the Cisco Nexus 1010 virtual appliance.
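On the VSM console, the domain ID above is applied under the svs-domain submode. A sketch (the hostname/prompt is illustrative):

```
n1kv-vsm# configure terminal
n1kv-vsm(config)# svs-domain
n1kv-vsm(config-svs-domain)# domain id 11
```

The domain ID ties the VSM to its VEMs, which is why it must not collide with the domain ID used by a Nexus 1010 appliance on the same network.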
The following section provides a detailed procedure for configuring the Cisco Nexus 5010 switches for use in a DCV environment. Complete this lab exercise to learn how to configure virtual port channels (vPC), Fibre Channel over Ethernet (FCoE), and the Fabric Extender (FEX, Nexus 2000) using the NX-OS command-line interface.

Note: The Data Center Virtualization labs start up with completed configurations for vPC, FCoE, and FEX. Sections 3-5 provide you with the opportunity to build these configurations from the ground up. If you just want to test or demo other features, such as OTV or the Nexus 1000v, proceed to Section 6.
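Because the pods power up with these features already in place, you can confirm the starting state before deciding whether to rebuild it from scratch. A sketch (your pod's output will differ):

```
N5K-1# show vpc brief
N5K-1# show fex
N5K-1# show flogi database
```

show vpc brief summarizes the vPC domain and peer-link status, show fex lists any attached fabric extenders, and show flogi database shows which FC/FCoE initiators have logged in to the fabric.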
EXERCISE OBJECTIVE
In this exercise you will use the NX-OS CLI to configure vPC and FEX in a dual-homed Fabric Extender vPC topology. After completing these exercises you will be able to meet these objectives:

- Clear the current startup configuration and reboot the switch
- Recover from the loader prompt
- Start the interactive setup process on the Nexus 5000 and MDS 9124 switch
- Configure a Nexus 5000 and an MDS 9124 switch for out-of-band management
- Navigate through the switch CLI structure on the Nexus 5000 and MDS 9124
- Use command completion and help
- Save the running configuration
- Save the switch configuration to a TFTP/FTP server
- Enable the vPC feature
- Create a vPC domain and enter vpc-domain mode
- Configure the vPC peer keepalive link
- Configure vPC role priority
- Create the vPC peer link
- Move the PortChannel to vPC
- Configure VSANs and Fibre Channel interfaces
- Configure zones and zone sets
- Map a VSAN for FCoE traffic onto a VLAN
- Create virtual Fibre Channel interfaces to carry the FCoE traffic
- Configure an Ethernet interface
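Several of the vPC objectives above chain together into one short configuration pattern. A hedged sketch (the domain ID, port-channel number, and keepalive addresses here are illustrative, not your pod's assignments):

```
N5K-1(config)# feature vpc
N5K-1(config)# vpc domain 1
N5K-1(config-vpc-domain)# peer-keepalive destination 10.1.111.2 source 10.1.111.1
N5K-1(config-vpc-domain)# role priority 1000
N5K-1(config-vpc-domain)# exit
N5K-1(config)# interface port-channel 10
N5K-1(config-if)# switchport mode trunk
N5K-1(config-if)# vpc peer-link
```

The keepalive runs over the management network (the mgmt0 addresses), while the peer link itself is the trunked port channel between the two Nexus 5010s.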
COMMAND LIST
The commands used in this exercise are described in the table below.

Table 15 - Commands

Command                                            Description
write erase                                        Erases the switch's startup configuration
boot kickstart bootflash:filename                  Configures the boot variable for the kickstart software image to the file named in bootflash:
boot system bootflash:filename                     Configures the boot variable for the system software image to the file named in bootflash:
show boot                                          Displays the boot variable configuration
reload                                             Reboots the switch
setup                                              Enters the basic device setup dialog
show ?                                             Displays all the permissible options for the show command for the current user
show running-config                                Shows the running configuration
show interface brief                               Displays an interface status summary
show vlan                                          Displays VLAN configuration and status
show vsan                                          Displays VSAN configuration and status
show version                                       Displays the current code version
show environment                                   Displays environment-related switch information, such as fan, power, and temperature status
config term                                        Enters configuration mode
ping                                               Packet InterNet Groper; used to determine network connectivity
interface fc1/3                                    Enters configuration submode for FC port 3 on module 1
show module                                        Displays all the modules associated with the network device
copy tftp://x.x.x.x/filename bootflash:/filename   Copies a file from the TFTP server at x.x.x.x to bootflash:
load bootflash:/filename                           Loads the system file from bootflash: when booting from the loader prompt
show file volatile                                 Examines the contents of the configuration file in the volatile file system
del file volatile                                  Deletes the file from the volatile file system
dir volatile                                       Displays the volatile file system to confirm the action
exit                                               Exits one level in the menu structure; in EXEC mode this command logs you off the system
end                                                Exits configuration mode to EXEC mode
shut                                               Disables an interface
no shut                                            Enables an interface
copy running-config startup-config                 Saves the running configuration as the startup configuration
copy running-config tftp://ip_address/path         Saves the running configuration to a TFTP server
copy tftp                                          Copies the system file from the TFTP server to the local bootflash
load bootflash                                     Loads the system file from bootflash
show fcns database                                 Shows the FCNS database
dir [volatile: | bootflash:]                       Displays the contents of the specified memory area
show file name                                     Displays the contents of the specified file
del name                                           Deletes the specified file
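A few of these commands are typically combined when backing up a switch: save the running configuration locally, then copy it off-box. A sketch (the TFTP server address is an assumption based on the lab's management server; substitute your pod's, and the backup filename is illustrative):

```
N5K-1# copy running-config startup-config
N5K-1# copy running-config tftp://10.1.111.100/N5K-1-backup.cfg
```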
Table 16 - Commands

Description
- Verify that the interface is Gigabit capable
- Enter configuration mode
- Enter interface mode
- Shut down an interface
- Set the port speed to 1 Gig
- Add a description to the Ethernet interface
- Bring the interface out of shutdown mode
- Exit the current configuration mode
- Display the configuration, used to confirm changes made
- Set the interface to trunk mode
- Allow VSANs to traverse the trunk
- Enable PortFast
- Create a virtual Fibre Channel interface
- Bind the virtual Fibre Channel interface to a physical Ethernet interface
- Enter VSAN configuration mode
- Add the virtual Fibre Channel interface to the VSAN
- Enter VSAN configuration mode
- Bind the Ethernet VLAN to the FCoE VSAN
- View the configuration information of the virtual FC interface
- View all of the virtual Fibre Channel interfaces
- Enter configuration mode for FC interfaces
- Select the physical FC interfaces
- Examine the available switchport options
- Configure the port mode to auto-negotiation on the FC ports
- Configure the port speed to auto-negotiation on the FC ports
- View and verify the FC interface configuration
- Enter configuration mode for the virtual Fibre Channel interface
- Add a description to the virtual Fibre Channel interface
- Verify that devices have completed a fabric login into the Nexus 5000
- Verify devices have registered in the Fibre Channel name server
- Define a port as an N-Port proxy
- Verify NIV port configurations
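The virtual Fibre Channel entries in this table come together in a sequence like the following. A sketch (the FCoE VLAN number, vfc number, and bound Ethernet interface are illustrative — use your pod's fabric A values):

```
N5K-1(config)# vlan 1011
N5K-1(config-vlan)# fcoe vsan 11
N5K-1(config-vlan)# exit
N5K-1(config)# interface vfc 9
N5K-1(config-if)# bind interface ethernet 1/9
N5K-1(config-if)# no shutdown
N5K-1(config-if)# exit
N5K-1(config)# vsan database
N5K-1(config-vsan-db)# vsan 11 interface vfc 9
```

The FCoE VLAN carries the encapsulated FC frames on the Ethernet side, the vfc interface presents them as a Fibre Channel port, and the final step places that port into the VSAN.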
Double-click the tftpd32 or tftpd64 icon on the desktop. The default directory is c:\tftp.
JOB AIDS
Nexus 5000 CLI Configuration Guide
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide.html

Cisco Nexus 5000 Series Switches - Virtual PortChannel Quick Configuration Guide
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html

Cisco Nexus 5000 Series NX-OS Software Configuration Guide - Configuring Virtual Interfaces
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli_rel_4_0_1a/VirtIntf.html

Cisco Nexus 5000 Series Switch Fabric Manager Software Configuration Guide
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/fm/FabricManager.html

Cisco MDS 9000 Family CLI Quick Configuration Guide - Configuring VSANs and Interfaces
http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/san-os/quick/guide/qcg_vin.html
Step 1: Perform initial Cisco Nexus 5010 switch setup
Duration: 60-75 minutes

Cisco Nexus 5010 A - N5K-1
1.1 Access N5K-1 using the console button on the lab interface.
1.2 The prompt should be at the System Admin Account Setup. Run through the setup script.
1.3 If the switch is not at the System Admin Account Setup, log into the switch and issue the following commands.
switch# write erase Warning: This command will erase the startup-configuration. Do you wish to proceed anyway? (y/n) [n] y switch# reload WARNING: This command will reboot the system Do you want to continue? (y/n) [n] y
1.4
Upon initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start.
Do you want to enforce secure password standard (yes/no): yes Enter the password for "admin": 1234Qwer Confirm the password for "admin": 1234Qwer ---- Basic System Configuration Dialog ---<snip> Would you like to enter the basic configuration dialog(yes/no): yes Create another login account (yes/no) [n]: n Configure read-only SNMP community string (yes/no) [n]: n Configure read-write SNMP community string (yes/no) [n]: n Enter the switch name : N5K-1 Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y Mgmt0 IPv4 address : 10.1.111.1 Mgmt0 IPv4 netmask : 255.255.255.0 Configure the default gateway? (yes/no) [y]:y IPv4 address of the default gateway : 10.1.111.254 Enable the telnet service? (yes/no) [n]: n Enable the http-server? (yes/no) [y]: y Enable the ssh service? (yes/no) [y]: y Type of ssh key you would like to generate (dsa/rsa) : rsa Number of key bits <768-2048> : 1024 Configure the ntp server? (yes/no) [n]: n Enter basic FC configurations (yes/no) [n]: n
1.5 The following configuration will be applied:

switchname N5K-1
interface mgmt0
ip address 10.1.111.1 255.255.255.0
no shutdown
exit
vrf context management
ip route 0.0.0.0/0 10.1.111.254
<snip>

Would you like to edit the configuration? (yes/no) [n]: n
1.6 Use this configuration and save it? (yes/no) [y]: y
[########################################] 100%
Cisco Nexus 5010 B - N5K-2
1.7 Log in to the Nexus 5000 using the console button on the lab interface. The prompt should be at the System Admin Account Setup. Run through the setup script.
1.8 If the switch is not at the System Admin Account Setup, log into the switch and issue the following commands.
switch# write erase
Warning: This command will erase the startup-configuration.
Do you wish to proceed anyway? (y/n) [n] y
switch# reload
WARNING: This command will reboot the system
Do you want to continue? (y/n) [n] y
1.9 Upon initial boot and connection to the serial or console port of the switch, the NX-OS setup utility should start automatically.
Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin": 1234Qwer
Confirm the password for "admin": 1234Qwer

---- Basic System Configuration Dialog ----
<snip>
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : N5K-2
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 10.1.111.2
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 10.1.111.254
Enable the telnet service? (yes/no) [n]: n
Enable the http-server? (yes/no) [y]: y
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) : rsa
Number of key bits <768-2048> : 1024
Configure the ntp server? (yes/no) [n]: n
Enter basic FC configurations (yes/no) [n]: n
<snip>
1.10 Review the configuration summary and decline to edit it.
1.11 Confirm and save the configuration:

Use this configuration and save it? (yes/no) [y]: y
[########################################] 100%
MANAGEMENT VRF
The default gateway is connected through the management interface. The management interface is by default part of the management VRF. This VRF is part of the default configuration, and the management interface mgmt0 is the only interface allowed to be part of it. The purpose of the management VRF is to completely isolate management traffic from the rest of the traffic flowing through the switch by confining it to its own forwarding table.

These are the steps for the exercise:
- Verify that only the mgmt0 interface is part of the management VRF
- Verify that the default gateway is reachable only using the management VRF

Cisco Nexus 5010 A - N5K-1

Step 2 Verify that only the mgmt0 interface is part of the management VRF.
2.1 Log in to N5K-1
N5K-1 login: admin Password: 1234Qwer
2.2 Display the VRFs and verify which interface belongs to the management VRF:
N5K-1# show vrf
VRF-Name                 VRF-ID  State
default                  1       Up
management               2       Up

N5K-1# show vrf management interface
Interface   VRF-Name
mgmt0       management
Step 3 Verify that the default gateway is reachable only using the management VRF.
3.1 Ping the default gateway using the default VRF.
N5K-1# ping 10.1.111.254
PING 10.1.111.254 (10.1.111.254): 56 data bytes
ping: sendto 10.1.111.254 64 chars, No route to host
Request 0 timed out
ping: sendto 10.1.111.254 64 chars, No route to host
Request 1 timed out
ping: sendto 10.1.111.254 64 chars, No route to host
Request 2 timed out
ping: sendto 10.1.111.254 64 chars, No route to host
Request 3 timed out
ping: sendto 10.1.111.254 64 chars, No route to host
Request 4 timed out

--- 10.1.111.254 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
Note: The ping fails because the default gateway is reachable only from the management interface, while we used the default VRF.
3.2 Ping the default gateway again, this time specifying the management VRF.
N5K-1# ping 10.1.111.254 vrf management
PING 10.1.111.254 (10.1.111.254): 56 data bytes
Request 0 timed out
64 bytes from 10.1.111.254: icmp_seq=1 ttl=254 time=2.361 ms
64 bytes from 10.1.111.254: icmp_seq=2 ttl=254 time=3.891 ms
64 bytes from 10.1.111.254: icmp_seq=3 ttl=254 time=4.07 ms
64 bytes from 10.1.111.254: icmp_seq=4 ttl=254 time=4.052 ms

--- 10.1.111.254 ping statistics ---
5 packets transmitted, 4 packets received, 20.00% packet loss
round-trip min/avg/max = 2.361/3.593/4.07 ms
N5K-1#
3.3 Alternatively, we can set the routing context to the management VRF to allow Layer 3 access without specifying the VRF on every command. This will also allow you to ping and TFTP as needed in the following exercises.
3.4 Ping the TFTP server.
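The command that sets the routing context is not printed in this step; the %management prompt in the output that follows indicates it is in effect. A minimal sketch, assuming the standard NX-OS routing-context command:

```text
N5K-1# routing-context vrf management
N5K-1%management#
```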
N5K-1%management# ping 10.1.111.10
PING 10.1.111.10 (10.1.111.10): 56 data bytes
Request 0 timed out
64 bytes from 10.1.111.10: icmp_seq=1 ttl=127 time=... ms
64 bytes from 10.1.111.10: icmp_seq=2 ttl=127 time=... ms
64 bytes from 10.1.111.10: icmp_seq=3 ttl=127 time=... ms
64 bytes from 10.1.111.10: icmp_seq=4 ttl=127 time=... ms

--- 10.1.111.10 ping statistics ---
5 packets transmitted, 4 packets received, 20.00% packet loss
round-trip min/avg/max = 3.664/3.919/4.074 ms
3.5
5.1 Display all commands that begin with s, sh, and show. Press Enter or the space bar to scroll through the list of commands.
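For example, typing a partial command followed by ? lists the matching commands (the exact listing depends on the NX-OS release):

```text
N5K-1# s?
N5K-1# sh?
N5K-1# show ?
```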
5.2 Display the current running configuration.
5.3 Display the current installed version of code and environmental information.
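The commands for these two steps are the standard NX-OS show commands:

```text
N5K-1# show running-config
N5K-1# show version
N5K-1# show environment
```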
5.4 Display the Ethernet and Fibre Channel modules of the Nexus 5010. This is where you'll find the WWN range for the FC ports and the range of Ethernet addresses for the 10 Gigabit Ethernet
ports. The first address (whether FC or Ethernet) is associated with port 1 of that transport type and subsequent ascending address numbers are associated with the next ascending port number.
N5K-1# show module
Mod  Ports  Module-Type                       Model                  Status
---  -----  --------------------------------  ---------------------  ----------
1    20     20x10GE/Supervisor                N5K-C5010P-BF-SUP      active *
2    8      4x10GE + 4x1/2/4G FC Module       N5K-M1404              ok

Mod  Sw            Hw
---  ------------  ------
1    5.0(2)N2(1)   1.2
2    5.0(2)N2(1)   1.0

Mod  World-Wide-Name(s) (WWN)                            Serial-Num
---  --------------------------------------------------  -----------
1    --                                                  JAF1413CEGC
2    2f:6c:69:62:2f:6c:69:62 to 63:6f:72:65:2e:73:6f:00  JAF1409ASQD
Note: Abbreviate the syntax, then hit the Tab key to complete each word; for example, type sh<tab> ru<tab>.

5.5 Display the status of the switch interfaces. Notice that only Ethernet interfaces are listed.
N5K-1# show interface brief (abbr: sh int bri)
--------------------------------------------------------------------------------
Ethernet   VLAN  Type  Mode    Status  Reason                 Speed   Port Ch #
Interface
--------------------------------------------------------------------------------
Eth1/1     1     eth   access  down    SFP validation failed  10G(D)  --
Eth1/2     1     eth   access  down    SFP not inserted       10G(D)  --
Eth1/3     1     eth   access  down    Link not connected     10G(D)  --
Eth1/4     1     eth   access  down    Link not connected     10G(D)  --
Eth1/5     1     eth   access  down    SFP not inserted       10G(D)  --
Eth1/6     1     eth   access  down    SFP not inserted       10G(D)  --
<snip>
Eth2/1     1     eth   access  down    SFP not inserted       10G(D)  --
Eth2/2     1     eth   access  down    SFP not inserted       10G(D)  --
Eth2/3     1     eth   access  down    SFP not inserted       10G(D)  --
Eth2/4     1     eth   access  down    SFP not inserted       10G(D)  --
--------------------------------------------------------------------------------
Port    VRF   Status  IP Address   Speed  MTU
--------------------------------------------------------------------------------
mgmt0   --    up      10.1.111.1   1000   1500
5.6 Display the VLANs configured on the switch.
N5K-1# show vlan

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Eth1/1, Eth1/2, Eth1/3, Eth1/4
                                                Eth1/5, Eth1/6, Eth1/7, Eth1/8
                                                Eth1/9, Eth1/10, Eth1/11
                                                Eth1/12, Eth1/13, Eth1/14
                                                Eth1/15, Eth1/16, Eth1/17
                                                Eth1/18, Eth1/19, Eth1/20
                                                Eth2/1, Eth2/2, Eth2/3, Eth2/4

Remote SPAN VLANs
-------------------------------------------------------------------------------

Primary  Secondary  Type             Ports
-------  ---------  ---------------  ------------------------------------------
5.7 The fcoe feature must be activated to use the Fibre Channel features.
5.8 Activate the fcoe feature:
N5K-1# configure terminal
N5K-1(config)# feature fcoe
FC license checked out successfully
fc_plugin extracted successfully
FC plugin loaded successfully
FCoE manager enabled successfully
FC enabled on all modules successfully
5.9 Display the VSAN information.
N5K-1# show vsan
vsan 1 information
         name:VSAN0001  state:active
         interoperability mode:default
         loadbalancing:src-id/dst-id/oxid
         operational state:down

vsan 4079:evfp_isolated_vsan

vsan 4094:isolated_vsan
Cisco Nexus 5010 B - N5K-2

5.10 Log into N5K-2 and activate the fcoe feature on N5K-2.
N5K-2 login: admin
Password: 1234Qwer
N5K-2# conf t
Enter configuration commands, one per line.
N5K-2(config)# feature fcoe
FC license checked out successfully
fc_plugin extracted successfully
FC plugin loaded successfully
FCoE manager enabled successfully
FC enabled on all modules successfully
N5K-2(config)# exit
6.2 Access your desktop (username: Administrator / password: 1234Qwer) and start your TFTP server.
6.3 Save your running configuration to the TFTP server.
N5K-1# copy running-config tftp://10.1.111.100/N5K-1-Lab1-config
Enter vrf (If no input, current vrf 'default' is considered): management
Trying to connect to tftp server......
Connection to Server Established.
TFTP put operation was successful
Note: Be sure you start the TFTP/FTP server before attempting to save the configuration, or your copy will fail. Please review Lab 0 Lab Services for instructions on how to use the TFTP/FTP server. Use a TFTP/FTP server in production networks to keep backup configurations and code releases for each network device. Be sure to include these servers in your regular Data Center backup plans.
Cisco Nexus 5010 B - N5K-2

6.4 Save the running configuration for N5K-2.
N5K-2# copy run start [########################################] 100%
Cisco Nexus 5010 B - N5K-2

7.7 Enable Virtual Port Channel.
7.8 Enable LACP port channel negotiation.
7.9 Enable FC and Fibre Channel over Ethernet.
7.10 Enable N Port ID Virtualization.
7.11 Enable Fibre Channel port channeling and trunking.
7.12 Enable Fabric Extender.
feature vpc
feature lacp
feature fcoe
feature npiv
feature fport-channel-trunk
feature fex
Type show feature and verify that the appropriate features are enabled.
N5K-1(config)# show feature | i enabled
assoc_mgr             1    enabled
fcoe                  1    enabled
fex                   1    enabled
fport-channel-trunk   1    enabled
lacp                  1    enabled
lldp                  1    enabled
npiv                  1    enabled
sshServer             1    enabled
vpc                   1    enabled
8.2 Enable bpdufilter on all edge ports by default.
8.3 Create an access list to match Platinum traffic. The ACL matches traffic to and from the NFS VLAN.
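The commands for these two steps are not printed inline here, but they appear verbatim in the consolidated N5K-2 configuration at the end of this step:

```text
spanning-tree port type edge bpdufilter default
ip access-list ACL_COS_5
  10 permit ip 10.1.211.0/24 any
  20 permit ip any 10.1.211.0/24
```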
8.4 Create a qos class map for Platinum traffic that matches the access list.
8.5 Create an access list to match Silver traffic. The ACL matches traffic to and from the vMotion VLAN.
8.6 Create a qos class map for Silver traffic that matches the access list.
8.7 Create a policy map that will be used for tagging incoming traffic.
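The commands for steps 8.4 through 8.6 likewise appear in the consolidated N5K-2 configuration at the end of this step:

```text
class-map type qos CLASS-PLATINUM
  match access-group name ACL_COS_5
ip access-list ACL_COS_4
  10 permit ip 10.1.151.0/24 any
  20 permit ip any 10.1.151.0/24
class-map type qos CLASS-SILVER
  match access-group name ACL_COS_4
```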
policy-map type qos POL_CLASSIFY
  class CLASS-PLATINUM
    set qos-group 2
    exit
  class CLASS-SILVER
    set qos-group 4
    exit
  exit
8.8 Create a network-qos class map for Platinum traffic to be used in a Network QoS policy.
8.9 Create a network-qos class map for Silver traffic to be used in a Network QoS policy.
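The class-map commands for steps 8.8 and 8.9 appear in the consolidated N5K-2 configuration at the end of this step:

```text
class-map type network-qos CLASS-PLATINUM_NQ
  match qos-group 2
class-map type network-qos CLASS-SILVER_NQ
  match qos-group 4
```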
8.10 Create a network-qos policy map to be applied to the System QoS policy. Set the Platinum class to CoS value 5 and MTU 9000. Set the Silver class to CoS value 4 and MTU 9000. Set the default class to MTU 9000.
Find out more about configuring QoS on the Nexus 5000: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/qos/Cisco_Nexus_5000_Series_NXOS_Quality_of_Service_Configuration_Guide_chapter3.html
policy-map type network-qos POL_SETUP_NQ
  class type network-qos CLASS-PLATINUM_NQ
    set cos 5
    mtu 9000
    exit
  class type network-qos CLASS-SILVER_NQ
    set cos 4
    mtu 9000
    exit
  class type network-qos class-default
    mtu 9000
    exit
!!! The class-default MTU setting above enables Jumbo Frames for all unclassified traffic.
8.11 Associate the policies to the system QoS policy using service policies.
system qos
  service-policy type qos input POL_CLASSIFY
  service-policy type network-qos POL_SETUP_NQ
  exit
8.12 Save the running configuration to the startup configuration.
Cisco Nexus 5010 B - N5K-2

8.13 Repeat steps 8.1 - 8.12 for N5K-2.
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
ip access-list ACL_COS_5
  10 permit ip 10.1.211.0/24 any
  20 permit ip any 10.1.211.0/24
class-map type qos CLASS-PLATINUM
  match access-group name ACL_COS_5
ip access-list ACL_COS_4
  10 permit ip 10.1.151.0/24 any
  20 permit ip any 10.1.151.0/24
class-map type qos CLASS-SILVER
  match access-group name ACL_COS_4
policy-map type qos POL_CLASSIFY
  class CLASS-PLATINUM
    set qos-group 2
    exit
  class CLASS-SILVER
    set qos-group 4
    exit
  exit
class-map type network-qos CLASS-PLATINUM_NQ
  match qos-group 2
class-map type network-qos CLASS-SILVER_NQ
  match qos-group 4
policy-map type network-qos POL_SETUP_NQ
  class type network-qos CLASS-PLATINUM_NQ
    set cos 5
    mtu 9000
    exit
  class type network-qos CLASS-SILVER_NQ
    set cos 4
    mtu 9000
    exit
  class type network-qos class-default
    mtu 9000
    exit
system qos
  service-policy type qos input POL_CLASSIFY
  service-policy type network-qos POL_SETUP_NQ
  exit
copy run start
Use the show run ipqos command to view the QoS configuration.
N5K-1(config)# show run ipqos

class-map type qos class-fcoe
class-map type qos match-all CLASS-SILVER
  match access-group name ACL_COS_4
class-map type qos match-all CLASS-PLATINUM
  match access-group name ACL_COS_5
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
policy-map type qos POL_CLASSIFY
  class CLASS-PLATINUM
    set qos-group 2
  class CLASS-SILVER
    set qos-group 4
class-map type network-qos CLASS-SILVER_NQ
  match qos-group 4
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos CLASS-PLATINUM_NQ
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
policy-map type network-qos POL_SETUP_NQ
  class type network-qos CLASS-PLATINUM_NQ
    set cos 5
    mtu 9000
  class type network-qos CLASS-SILVER_NQ
    set cos 4
    mtu 9000
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9000
system qos
  service-policy type qos input POL_CLASSIFY
  service-policy type network-qos POL_SETUP_NQ
Step 9 Create necessary VLANs (Duration: 5 minutes)

Cisco Nexus 5010 A - N5K-1

9.1 Create VLANs for Management, VM, vMotion, Nexus 1000V Control and Packet, and NFS storage traffic.
vlan 111
  name MGMT-VLAN
vlan 131
  name VMTRAFFIC
vlan 151
  name VMOTION
vlan 171
  name PKT-CTRL
vlan 211
  name NFS-VLAN
Cisco Nexus 5010 B - N5K-2

9.2 Create VLANs for Management, VM, vMotion, Nexus 1000V Control and Packet, and NFS storage traffic.
vlan 111
  name MGMT-VLAN
vlan 131
  name VMTRAFFIC
vlan 151
  name VMOTION
vlan 171
  name PKT-CTRL
vlan 211
  name NFS-VLAN
9.3 Use the show vlan command to show the list of VLANs that have been created on the switch.
N5K-1(config-vlan)# show vlan | include "Status|active" | exclude VLAN0
VLAN Name                 Status    Ports
1    default              active    Eth1/1, Eth1/2, Eth1/3, Eth1/4
10   INFRA-MGMT-VLAN      active
110  MGMT                 active
111  VMTRAFFIC-VLAN       active
151  VMOTION-VLAN         active
171  PKT-CTRL-VLAN        active
10.2 Router uplink.
10.3 FEX ports.
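The description commands for these two steps are not printed above. Based on the interface names in the show interface status output in Step 11, they would be along these lines (the trailing colons match that output):

```text
interface Eth1/4
  description To 3750:
interface Eth1/7-8
  description N2K-1:
```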
10.4 Server ports.
interface Eth1/9
  description ESX1:vmnic0
interface Eth1/10
  description ESX2:vmnic0
interface Eth1/11
  description ESX3:vmnic4
10.5
10.6 OTV uplinks.
Cisco Nexus 5010 B - N5K-2

10.7 Placeholder for new storage array.
interface Eth1/1
  description NTAP1-A:e2b
interface Eth1/2
  description NTAP1-B:e2b
10.8 Router uplink.
10.9 FEX ports.
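The description commands for these two steps are not printed above. Based on the interface names in the show interface status output in Step 11 for N5K-2, they would be along these lines:

```text
interface Eth1/4
  description To 3750:
interface Eth1/7-8
  description N2K-2:
```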
10.10 Server ports.
interface Eth1/9
  description ESX1:vmnic1
interface Eth1/10
  description ESX2:vmnic1
interface Eth1/11
  description ESX3:vmnic5
10.11
10.12 OTV uplinks.
Step 11 Use the show interface status command to print a list of ports and corresponding information, including the configured port descriptions.

11.1 Output from N5K-1:
N5K-1(config-if)# show interface status | include ":"
Eth1/1     NTAP1-A:e2a      sfpAbsent  1    full  10G  10g
Eth1/2     NTAP1-B:e2a      sfpAbsent  1    full  10G  10g
Eth1/4     To 3750:         sfpInvali  1    full  10G  10g
Eth1/7     N2K-1:           connected  1    full  10G  10g
Eth1/8     N2K-1:           connected  1    full  10G  10g
Eth1/9     ESX1:vmnic0      connected  1    full  10G  10g
Eth1/10    ESX2:vmnic0      connected  1    full  10G  10g
Eth1/11    ESX3:vmnic4      connected  1    full  10G  10g
Eth1/17    N5K-2:Eth1/17    connected  1    full  10G  10g
Eth1/18    N5K-2:Eth1/18    connected  1    full  10G  10g
Eth1/19    N7K-1:           notconnec  1    full  10G  10g
Eth1/20    N7K-2:           notconnec  1    full  10G  10g
11.2 Output from N5K-2:
N5K-2(config-if)# sh interface status | i :
Eth1/1     NTAP1-A:e2b      sfpAbsent  trunk
Eth1/2     NTAP1-B:e2b      sfpAbsent  1
Eth1/4     To 3750:         sfpInvali  trunk
Eth1/7     N2K-2:           vpcPeerLn  1
Eth1/8     N2K-2:           vpcPeerLn  1
Eth1/9     ESX1:vmnic1      connected  1
Eth1/10    ESX2:vmnic1      connected  1
Eth1/11    ESX3:vmnic5      connected  1
Eth1/17    N5K-1:Eth1/17    connected  trunk
Eth1/18    N5K-1:Eth1/18    connected  trunk
Eth1/19    N7K-1:           notconnec  1
Eth1/20    N7K-2:           notconnec  1
12.2 Define a port channel for NetApp controller NTAP1-A and add the controller link to the port-channel group.
interface Po11
  description NTAP1-A
interface Eth1/1
  channel-group 11 mode active
  no shutdown
12.3 Define a port channel for NetApp controller NTAP1-B.
interface Po12
  description NTAP1-B
interface Eth1/2
  channel-group 12 mode active
  no shutdown
12.4 Define port channels for the servers and add each server host link to its port-channel group. For vPC and FCoE, we recommend setting the channel mode to on rather than active (LACP). This is useful for operating systems that don't support port-channel negotiation, such as ESXi.
interface Po13
  description ESX1
interface Eth1/9
  channel-group 13 mode on
  no shutdown
interface Po14
  description ESX2
interface Eth1/10
  channel-group 14 mode on
  no shutdown
interface Po15
  description ESX3
interface Eth1/11
  channel-group 15 mode on
  no shutdown
12.5 Define a port channel for the uplink to the 3750 L3 switch.
interface Po20
  description 3750
interface Eth1/4
  channel-group 20 mode active
  no shutdown
12.6 Define the fabric port channel for the Fabric Extender.
interface Po101
  description FEX1
interface Eth1/7-8
  channel-group 101 mode active
  no shutdown
12.7
Cisco Nexus 5010 B - N5K-2

12.8 From the global configuration mode, type:
interface Po1
  description vPC peer-link
interface Eth1/17-18
  channel-group 1 mode active
  no shutdown
interface Po11
  description NTAP1-A
interface Eth1/1
  channel-group 11 mode active
  no shutdown
interface Po12
  description NTAP1-B
interface Eth1/2
  channel-group 12 mode active
  no shutdown
interface Po13
  description ESX1
interface Eth1/9
  channel-group 13 mode on
  no shutdown
interface Po14
  description ESX2
interface Eth1/10
  channel-group 14 mode on
  no shutdown
interface Po15
  description ESX3
interface Eth1/11
  channel-group 15 mode on
  no shutdown
interface Po20
  description 3750
interface Eth1/4
  channel-group 20 mode active
  no shutdown
interface Po101
  description FEX2
interface Eth1/7-8
  channel-group 101 mode active
  no shutdown
12.9
12.10 Use show interface status | include Po to verify the port-channel status on both switches.
N5K-1(config-vlan)# show interface status | inc Po
Port     Name            Status     Vlan
Po1      vPC peer-link   connected  1
Po11     NTAP1-A         noOperMem  1
Po12     NTAP1-B         noOperMem  1
Po13     ESX1            connected  1
Po14     ESX2            connected  1
Po15     ESX3            connected  1
Po20     3750            noOperMem  1
Po101    FEX1            noOperMem  1

N5K-2(config)# show interface status | inc Po
Port     Name            Status     Vlan
Po1      vPC peer-link   connected  trunk
Po11     NTAP1-A         noOperMem  1
Po12     NTAP1-B         noOperMem  1
Po13     ESX1            connected  1
Po14     ESX2            connected  1
Po15     ESX3            connected  1
Po20     3750            noOperMem  1
Po101    FEX2            noOperMem  1
12.11 Verify that the correct individual ports have been added to the correct port channel.
N5K-1(config-vlan)# show port-channel summary
<snip>
--------------------------------------------------------------------------------
Group  Port-Channel  Type  Protocol  Member Ports
--------------------------------------------------------------------------------
1      Po1(SU)       Eth   LACP      Eth1/17(P)  Eth1/18(P)
11     Po11(SD)      Eth   LACP      Eth1/1(D)
12     Po12(SD)      Eth   LACP      Eth1/2(D)
13     Po13(SU)      Eth   NONE      Eth1/9(P)
14     Po14(SU)      Eth   NONE      Eth1/10(P)
15     Po15(SU)      Eth   NONE      Eth1/11(P)
20     Po20(SD)      Eth   LACP      Eth1/4(D)
101    Po101(SD)     Eth   LACP      Eth1/7(I)  Eth1/8(I)

N5K-2(config)# show port-channel summary
<snip>
--------------------------------------------------------------------------------
Group  Port-Channel  Type  Protocol  Member Ports
--------------------------------------------------------------------------------
1      Po1(SU)       Eth   LACP      Eth1/17(P)  Eth1/18(P)
11     Po11(SD)      Eth   LACP      Eth1/1(D)
12     Po12(SD)      Eth   LACP      Eth1/2(D)
13     Po13(SU)      Eth   NONE      Eth1/9(P)
14     Po14(SU)      Eth   NONE      Eth1/10(P)
15     Po15(SU)      Eth   NONE      Eth1/11(P)
20     Po20(SD)      Eth   LACP      Eth1/4(D)
101    Po101(SD)     Eth   LACP      Eth1/7(I)  Eth1/8(I)
Step 13 Add port channel configurations (Duration: 20 minutes)

Cisco Nexus 5010 A - N5K-1

13.1 From the global configuration mode, type:
int Po1
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type network
  no shut
Note: Do not allow any VLANs that carry FCoE traffic on the vPC peer link.

13.2 Configure the port channels for the NetApp controllers.
int Po11-12
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk
  no shut
13.3 Configure the port channels for the ESX servers. They will allow VLANs 111, 131, 151, 171, and 211.
int Po13-15
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk
  no shut
13.4 Configure the port channel for the L3 switch. Our L3 switch has 1 Gigabit Ethernet ports, so we set the speed to 1000.
interface Po20
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  speed 1000
  no shutdown
13.5
Cisco Nexus 5010 B - N5K-2

13.6 From the global configuration mode, type:
int Po1
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type network
  no shut
13.7 Configure the port channels for the NetApp controllers.
int Po11-12
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk
  no shut
13.8 Configure the port channels for the ESX servers.
int Po13-15
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk
  no shut
13.9 Configure the port channel for the L3 switch. Our L3 switch has 1 Gigabit Ethernet ports, so we set the speed to 1000.
interface Po20
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  speed 1000
  no shutdown
13.10
Step 14 Use the show run interface <interface name> command to show the configuration for a given interface or port channel.
N5K-1(config-if-range)# sh run int po1,po11-15,po20

interface port-channel1
  description vPC peer-link
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type network

interface port-channel11
  description NTAP1-A
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk

interface port-channel12
  description NTAP1-B
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk

interface port-channel13
  description ESX1
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk
<snip>
interface port-channel20
  description 3750
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211

N5K-2(config-if)# sh run int po1,po11-15,po20

interface port-channel1
  description vPC peer-link
  switchport mode trunk
  vpc peer-link
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type network

interface port-channel11
  description NTAP1-A
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk

interface port-channel12
  description NTAP1-B
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk

interface port-channel13
  description ESX1
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk

interface port-channel14
  description ESX2
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk

interface port-channel15
  description ESX3
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
  spanning-tree port type edge trunk

interface port-channel20
  description 3750
  switchport mode trunk
  switchport trunk native vlan 999
  switchport trunk allowed vlan 111,131,151,171,211
Step 15 Configure virtual port channels (vPCs) (Duration: 20 minutes)

Cisco Nexus 5010 A - N5K-1

15.1 Create the vPC domain. The domain ID must match between vPC peers, but must differ from other vPC pairs.
vpc domain 10
15.2 Configure the vPC role priority (optional). We will make N5K-1 the primary switch; the switch with the lower priority value is elected as the vPC primary switch.
role priority 10
15.3 Configure the peer keepalive link. The management interface IP address of the peer, N5K-2, is 10.1.111.2:

peer-keepalive destination 10.1.111.2 source 10.1.111.1

Note: The system does not create the vPC peer link until you configure a vPC peer-keepalive link.

15.4 Designate the port channel to be used as the vPC peer link.
interface Po1 vpc peer-link
15.5
15.6
15.7
15.8
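Steps 15.5 through 15.8 bind each downstream port channel to a vPC; the show running-config vpc output later in this section shows the resulting assignments:

```text
interface Po11
  vpc 11
interface Po12
  vpc 12
interface Po13
  vpc 13
interface Po14
  vpc 14
interface Po15
  vpc 15
interface Po20
  vpc 20
```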
Cisco Nexus 5010 B - N5K-2

15.9 Create the vPC domain. The domain ID must match between vPC peers, but must differ from other vPC pairs.
vpc domain 10
15.10 Configure the vPC role priority (optional). We will make N5K-1 the primary switch, so N5K-2 is given a higher priority value.
role priority 20
15.11 Configure the peer keepalive link. The management interface IP address of the peer, N5K-1, is 10.1.111.1:

peer-keepalive destination 10.1.111.1 source 10.1.111.2

Note: The system does not create the vPC peer link until you configure a vPC peer-keepalive link.

15.12 Designate the port channel to be used as the vPC peer link.
interface Po1 vpc peer-link
15.13
15.14
15.15
15.16
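Steps 15.13 through 15.16 mirror the same vPC bindings on N5K-2; the vPC numbers must match those configured on N5K-1 for each bundle:

```text
interface Po11
  vpc 11
interface Po12
  vpc 12
interface Po13
  vpc 13
interface Po14
  vpc 14
interface Po15
  vpc 15
interface Po20
  vpc 20
```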
The following show commands are useful for verifying the vPC configuration.

Cisco Nexus 5010 A & B - N5K-1 & N5K-2

Step 16 Check the vPC role of each switch.

16.1 N5K-1 is the primary because we set its role priority number lower:
N5K-1(config)# show vpc role

vPC Role status
----------------------------------------------------
vPC role                      : primary
Dual Active Detection Status  : 0
vPC system-mac                : 00:23:04:ee:be:0a
vPC system-priority           : 32667
vPC local system-mac          : 00:05:9b:7a:03:bc
vPC local role-priority       : 10
16.2 N5K-2 is the secondary because we set its role priority number higher:
N5K-2(config)# show vpc role

vPC Role status
----------------------------------------------------
vPC role                      : secondary
Dual Active Detection Status  : 0
vPC system-mac                : 00:23:04:ee:be:0a
vPC system-priority           : 32667
vPC local system-mac          : 00:05:9b:79:b1:fc
vPC local role-priority       : 20
Step 17 Verify vPC status on N5K-1 and N5K-2.

Cisco Nexus 5010 A - N5K-1

17.1 Make sure the domain ID and role are correct. Make sure the peer status is "peer adjacency formed ok" and the keep-alive status is "peer is alive".
N5K-1(config-if)# sh vpc brief
Legend:
       (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 6
<snip>

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status  Active vlans
--   ----   ------  --------------------------------------------------
1    Po1    up      111,131,151,171,211

vPC status
----------------------------------------------------------------------------
id     Port        Status  Consistency  Reason                Active vlans
------ ----------- ------  -----------  --------------------  ------------
<snip>
13     Po13        up      success      success               111,131,151
                                                              ,171,211
14     Po14        up      success      success               111,131,151
                                                              ,171,211
15     Po15        up      success      success               111,131,151
                                                              ,171,211
20     Po20        up      success      success               111,131,151
                                                              ,171,211
17.2 On N5K-2, make sure the domain ID and role are correct. Make sure the peer status is "peer adjacency formed ok" and the keep-alive status is "peer is alive".
N5K-2(config-if)# show vpc bri
Legend:
       (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : secondary
Number of vPCs configured         : 6
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status  Active vlans
--   ----   ------  --------------------------------------------------
1    Po1    up      111,131,151,171,211

vPC status
----------------------------------------------------------------------------
id     Port        Status  Consistency  Reason                 Active vlans
------ ----------- ------  -----------  ---------------------  ------------
11     Po11        down*   failed       Consistency Check Not
                                        Performed
12     Po12        down*   failed       Consistency Check Not
                                        Performed
13     Po13        up      success      success                111,131,151
                                                               ,171,211
14     Po14        up      success      success                111,131,151
                                                               ,171,211
15     Po15        up      success      success                111,131,151
                                                               ,171,211
20     Po20        up      success      success                111,131,151
                                                               ,171,211
Cisco Nexus 5010 A & B - N5K-1 & N5K-2

17.3 View information on the peer-keepalive messages:
N5K-1(config)# show vpc peer-keepalive
vPC keep-alive status           : peer is alive
<snip>

N5K-2(config-if)# show vpc peer-keepalive
vPC keep-alive status           : peer is alive
--Peer is alive for             : (2158) seconds, (636) msec
<snip>
vPC Keep-alive parameters
--Destination                   : 10.1.111.1
<snip>
17.4 View the running configuration specific to vPC:

Cisco Nexus 5010 A - N5K-1
N5K-1(config)# show running-config vpc
feature vpc
vpc domain 10
  role priority 10
  peer-keepalive destination 10.1.111.2 source 10.1.111.1
interface port-channel1
  vpc peer-link
interface port-channel11
  vpc 11
interface port-channel12
  vpc 12
interface port-channel13
  vpc 13
interface port-channel14
  vpc 14
interface port-channel15
  vpc 15
interface port-channel20
  vpc 20
Here are the steps for this section:
- Enable the feature for FEX. (Completed in an earlier section.)
- Pre-provision a Fabric Extender identifier (for example, "100").
- Configure the fabric EtherChannel links for the Fabric Extender.
- Configure each host interface port on the Fabric Extender on both Nexus 5000 Series switches.

Figure 5 - vPC to Dual Fabric Extenders
Cisco Nexus 5010 A - N5K-1

17.5 Configure the Nexus 2000 Fabric Extender and move the fabric interfaces of N5K-1 to the vPC. Interfaces Eth1/7-8 connect to the Nexus 2000 uplink ports.
feature fex
N5K-1(config)# show feature | grep fex
fex                   1    enabled
17.6 Pre-provision the Fabric Extender identifier 100.
17.7 Configure the fabric EtherChannel links for the Fabric Extender 100.
int po100
  description single-homed FEX100
int e1/7-8
  channel-group 100
int po100
  switchport mode fex-fabric
  fex associate 100
Note: It may take several minutes for the Nexus 2000 to register with the Nexus 5000 switch. A syslog notification will announce when the FEX is online.

17.8 Configure the Nexus 2000 (FEX) Ethernet interfaces on N5K-1. The FEX interfaces will be used as management ports for the ESXi servers. Ports Eth100/1/1-3 will be configured to trunk. We are not going to put these ports into a channel group, so we commented out those lines. The port channel configuration is also not necessary, but is included in case we need to port channel them later.
int po113
  description ESX1
  switchport mode trunk
  vpc 113
int po114
  description ESX2
  switchport mode trunk
  vpc 114
int po115
  description ESX3
  switchport mode trunk
  vpc 115
int ethernet 100/1/1
  description ESX1 vmnic2
  switchport mode trunk
  ! channel-group 113 force
int ethernet 100/1/2
  description ESX2 vmnic2
  switchport mode trunk
  ! channel-group 114 force
int ethernet 100/1/3
  description ESX3 vmnic0
  switchport mode trunk
  ! channel-group 115 force
Note: The vPC number does not need to match the port-channel number, but it must match the vPC number used on the peer switch for that vPC bundle.
Cisco Nexus 5010 B - N5K-2

17.9 Configure the Nexus 2000 Fabric Extender and move the fabric interfaces of N5K-2 to the vPC. Interfaces Eth1/7-8 connect to the Nexus 2000 uplink ports.
feature fex
17.10
17.11
Configure the fabric EtherChannel links for the Fabric Extender 101.
int po101
  description single-homed FEX101
int e1/7-8
  channel-group 101
int po101
  switchport mode fex-fabric
  fex associate 101
17.12
Configure the Nexus 2000 (FEX) Ethernet interfaces on N5K-2. The FEX interfaces will be used as management ports for the ESXi servers. Ports Eth101/1/1-3 will be configured to trunk. We are not going to put these ports into a channel group, so those lines are commented out. The port channel configuration is also not necessary, but is included in case we need to port channel these interfaces later.
int po113
  description ESX1
  switchport mode trunk
  vpc 113
int po114
  description ESX2
  switchport mode trunk
  vpc 114
int po115
  description ESX3
  switchport mode trunk
  vpc 115
int ethernet 101/1/1
  description ESX1 vmnic2
  switchport mode trunk
  ! channel-group 113 force
int ethernet 101/1/2
  description ESX2 vmnic2
  switchport mode trunk
  ! channel-group 114 force
int ethernet 101/1/3
  description ESX3 vmnic0
  switchport mode trunk
  ! channel-group 115 force
The vPC number does not need to match the port channel number, but it must match the vPC number configured on the vPC peer switch for that vPC bundle.
19.3
Connect to the MDS 9124 using the console button on the lab interface and perform the System Admin Account Setup:
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no) [y]: y
Enter the password for "admin": 1234Qwer
Confirm the password for "admin": 1234Qwer

---- Basic System Configuration Dialog ----
<snip>
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : MDS9124
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 10.1.111.40
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 10.1.111.254
Configure advanced IP options? (yes/no) [n]: n
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <768-2048> [1024]: 1024
Enable the telnet service? (yes/no) [n]: n
Enable the http-server? (yes/no) [y]: y
Configure clock? (yes/no) [n]: n
Configure timezone? (yes/no) [n]: n
Configure summertime? (yes/no) [n]: n
Configure the ntp server? (yes/no) [n]: n
Configure default switchport interface state (shut/noshut) [shut]: shut
Configure default switchport trunk mode (on/off/auto) [on]: on
Configure default switchport port mode F (yes/no) [n]: n
Configure default zone policy (permit/deny) [deny]: deny
Enable full zoneset distribution? (yes/no) [n]: y
Configure default zone mode (basic/enhanced) [basic]: basic
19.4
The following configuration will be applied:
  password strength-check
  switchname MDS9124
  interface mgmt0
    ip address 10.1.111.40 255.255.255.0
    no shutdown
  ip default-gateway 10.1.111.254
  ssh key rsa 1024 force
  feature ssh
  no feature telnet
  feature http-server
  system default switchport shutdown
  system default switchport trunk mode on
  no system default zone default-zone permit
  system default zone distribute full
  no system default zone mode enhanced

Would you like to edit the configuration? (yes/no) [n]: n
19.5
Use this configuration and save it? (yes/no) [y]: y [########################################] 100%
20.2
MDS# ping 10.1.111.254
PING 10.1.111.254 (10.1.111.254) 56(84) bytes of data.
64 bytes from 10.1.111.254: icmp_seq=2 ttl=255 time=0.422 ms
64 bytes from 10.1.111.254: icmp_seq=3 ttl=255 time=0.382 ms
64 bytes from 10.1.111.254: icmp_seq=4 ttl=255 time=0.391 ms
64 bytes from 10.1.111.254: icmp_seq=5 ttl=255 time=0.403 ms
Note:
21.1
Display all commands that begin with s, sh, and sho. Press Enter or the space bar to scroll through the list of commands.
21.2
Abbreviate the syntax, then press the Tab key to complete each word; for example, type sh<tab> ru<tab>.
21.3
Display the status of the switch interfaces. Notice that Fibre Channel interfaces fc1/1 - fc1/6 are down.
MDS9124# sh int brief
-------------------------------------------------------------------------------
Interface  Vsan   Admin  Admin  Status     SFP   Oper   Oper    Port
                  Mode   Trunk                   Mode   Speed   Channel
                         Mode                           (Gbps)
-------------------------------------------------------------------------------
fc1/1      1      auto   on     down       swl   --     --
fc1/2      1      auto   on     down       swl   --     --
fc1/3      1      auto   on     down       swl   --     --
fc1/4      1      auto   on     down       swl   --     --
fc1/5      1      auto   on     down       swl   --     --
fc1/6      1      auto   on     down       swl   --     --
fc1/7      1      auto   on     sfpAbsent  --    --     --
<snip>
21.4
MDS9124# show vsan
vsan 1 information
         name:VSAN0001  state:active
         interoperability mode:default
         loadbalancing:src-id/dst-id/oxid
         operational state:down

vsan 4079:evfp_isolated_vsan

vsan 4094:isolated_vsan
21.5
Step 22 Save your configuration locally and to a remote server.
Cisco MDS 9124
22.1 Update the startup configuration with the changes made in the running configuration.
MDS9124# copy running-config startup-config [########################################] 100%
22.2
MDS9124# copy running-config tftp://10.1.111.100/MDS9124-Lab1-config
Trying to connect to tftp server......
Connection to server Established. Copying Started.....
|
TFTP put operation was successful
Note: Be sure you start the TFTP/FTP server before attempting to save the configuration, or your copy will fail. Please review Lab 0 Lab Services for instructions on how to use the TFTP/FTP server. Use a TFTP/FTP server in production networks to keep backup configurations and code releases for each network device. Be sure to include these servers in your regular Data Center backup plans.
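If you ever need to roll a switch back to a saved configuration, the same copy command works in the other direction. A sketch, assuming the filename saved above and that the TFTP server is still reachable:

```
MDS9124# copy tftp://10.1.111.100/MDS9124-Lab1-config running-config
```

Copying into running-config merges the file with the current configuration; copy to startup-config and reload instead if you want a clean replacement.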
The following section provides a detailed procedure for configuring the Cisco Unified Computing System for use in a DCV environment. These steps should be followed precisely because a failure to do so could result in an improper configuration.
4.1 POWER ON THE ESX HOSTS AND VERIFY THE NEXUS INTERFACES
We will use Cisco Unified Computing System C-Series servers, powered by Intel Xeon processors and providing industry-leading virtualization performance, to validate our configuration. The ESX CNA interfaces must be up in order to verify interface connectivity and fabric login. Power up the ESX hosts, then use show commands on the Nexus 5000 to verify the interfaces.
Step 23 Power up ESXi hosts.
23.1 Connect to the VC_SERVER from the SSL Dashboard.
23.2 Log into the server with credentials: administrator/1234Qwer.
23.3 Double-click the ESX1 CIMC shortcut on the desktop (or http://10.1.111.161/).
23.4 Accept any SSL warnings.
23.5 Authenticate with admin/1234Qwer.
23.6
Step 24 Repeat Step 23 for ESX2 CIMC (http://10.1.111.162) and ESX3 CIMC (http://10.1.111.163).
This section contains the procedural steps for the second part of the Cisco Nexus 5010 deployment.
Cisco Nexus 5010 A - N5K-1
Step 25 Create VLAN 1011 to carry FCoE-enabled VSAN 11.
vlan 1011
  fcoe vsan 11
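The VLAN-to-VSAN mapping can be confirmed immediately with a show command (available on the Nexus 5000; output formatting varies by NX-OS release):

```
N5K-1# show vlan fcoe
```

The output should map VLAN 1011 to VSAN 11.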
25.1
25.2
Create virtual Fibre Channel interfaces. Bind them to server port-channel interfaces. Then bring up the vFC interfaces. When FCoE hosts are using vPC, vfc interfaces need to bind to the port-channel interface instead of the physical interface.
interface vfc13
  bind interface po13
interface vfc14
  bind interface po14
interface vfc15
  bind interface po15
int vfc13-15
  switchport trunk allowed vsan 11

2011 Jan 14 06:05:37 N5K-1 %$ VDC-1 %$ %PORT-2-IF_DOWN_ERROR_DISABLED: %$VSAN 1%$ Interface vfc3 is down (Error disabled)
You will see error-disabled messages if the servers have not been powered up yet.
25.3 Define the SAN port channel for the uplinks.
interface san-port-channel 111
  channel mode active
interface fc2/3-4
  channel-group 111
interface san-port-channel 111
  switchport trunk mode auto
  switchport trunk allowed vsan 11
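A quick sanity check is possible at this point, using the same commands the verification section relies on later:

```
N5K-1# show san-port-channel summary
N5K-1# show interface san-port-channel 111
```

Until the matching fc interfaces on the MDS side are bundled and enabled, the san-port-channel may show its members as individual or down.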
25.4
Create vsan 11. On N5K-1, associate vsan 11 with vfc 13-15 and san-port-channel 111.
vsan database
  vsan 11 name FABRIC_A
  vsan 11 interface vfc 13-15
  vsan 11 interface san-port-channel 111
exit
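To confirm the VSAN assignments took effect (show vsan membership is also used in the verification section later):

```
N5K-1# show vsan 11
N5K-1# show vsan membership
```

vfc13-15 and san-port-channel 111 should be listed under vsan 11.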
25.5
Cisco Nexus 5010 B - N5K-2
Step 26 Repeat the previous steps on N5K-2 to configure vfc interfaces bound to port-channels 13-15 and to map VSAN 12 to VLAN 1012:
vlan 1012
  fcoe vsan 12
exit
26.1
26.2
Create virtual Fibre Channel interfaces. Bind them to server port-channel interfaces. Then bring up the vFC interfaces. When FCoE hosts are using vPC, vfc interfaces need to bind to the port-channel interface instead of the physical interface.
int vfc13
  bind interface port-channel 13
int vfc14
  bind interface port-channel 14
int vfc15
  bind interface port-channel 15
int vfc13-15
  switchport trunk allowed vsan 12
exit
26.3
interface san-port-channel 112
  channel mode active
interface fc2/3-4
  channel-group 112
interface san-port-channel 112
  switchport trunk mode auto
  switchport trunk allowed vsan 12
26.4
Create vsan 12. On N5K-2, associate vsan 12 with vfc 13-15 and san-port-channel 112.
vsan database
  vsan 12 name FABRIC_B
  vsan 12 interface vfc13-15
  vsan 12 interface san-port-channel 112
exit
Note: The VLAN and VSAN numbers must be different from those on N5K-1; this is how we create two independent paths.
26.5 Enable the interfaces fc2/1-4:
Cisco MDS9124
Step 27 Create vsan 11 and vsan 12. Assign fc1/5 and port-channel 111 to vsan 11, and fc1/6 and port-channel 112 to vsan 12.
Note: FC port connectivity: MDS fc1/1-2 connect to N5K-1 fc2/3-4, MDS fc1/3-4 connect to N5K-2 fc2/3-4, and MDS fc1/5-6 connect to the NetApp ports e2a/e2b.
27.1 Put descriptions on each fc interface. (optional)
int fc1/1
  switchport description Trunk N5K-1:fc2/3
int fc1/2
  switchport description Trunk N5K-1:fc2/4
int fc1/3
  switchport description Trunk N5K-2:fc2/3
int fc1/4
  switchport description Trunk N5K-2:fc2/4
int fc1/5
  switchport description Trunk NTAP:e2a
int fc1/6
  switchport description Trunk NTAP:e2b
exit
27.2
interface port-channel 111
  channel mode active
  ! switchport rate-mode dedicated
  switchport trunk allowed vsan 11
interface fc1/1-2
  channel-group 111 force
  no shutdown
interface port-channel 112
  channel mode active
  ! switchport rate-mode dedicated
  switchport trunk allowed vsan 12
interface fc1/3-4
  channel-group 112 force
  no shutdown
27.3
27.4
Assign fc1/5 and port-channel 111 to vsan 11. Assign fc1/6 and port-channel 112 to vsan 12:

vsan 11 interface fc1/5
vsan 11 interface port-channel 111
vsan 12 interface fc1/6
vsan 12 interface port-channel 112
exit
27.5
Note:
Step 28 Verify Fibre Channel configuration.
28.1 Verify membership for VSANs.
N5K-1(config)# sh vsan membership
vsan 1 interfaces:
    fc2/1    fc2/2
vsan 11 interfaces:
    fc2/3    fc2/4    vfc14    vfc15
<snip>

N5K-2(config)# show vsan membership
vsan 1 interfaces:
    fc2/1    fc2/2
vsan 12 interfaces:
    fc2/3    fc2/4    vfc14    vfc15

MDS9124(config-vsan-db)# show vsan membership
vsan 1 interfaces:
    fc1/7    fc1/8    fc1/9    fc1/10
<snip>
vsan 11 interfaces:
    fc1/1    fc1/2    fc1/5    port-channel 111
vsan 12 interfaces:
    fc1/3    fc1/4    fc1/6    port-channel 112
28.2
Note:
If the association state is non-operational, then you did not define vsan 11 (or vsan 12 on N5K-2) in a previous step.
28.3
View all of the virtual Fibre Channel interfaces. Make sure all defined vFCs are present and in the correct VSANs.
N5K-1(config)# sh int brief | include vfc
vfc13      11     F      on     trunking     --     TF     auto    --
vfc14      11     F      on     trunking     --     TF     auto    --
vfc15      11     F      on     trunking     --     TF     auto    --
Note:
All of the vfc interfaces will show up as errDisabled if the servers are turned off.
N5K-2(config)# sh int bri | i vfc
vfc13      12     F      on     trunking     --     TF     auto    --
vfc14      12     F      on     trunking     --     TF     auto    --
vfc15      12     F      on     trunking     --     TF     auto    --
28.4
Confirm the configuration of the virtual Fibre Channel interface. Note the bound Ethernet interface information. The rest of the information is similar to a standard fibre channel port.
N5K-1(config)# sh int vfc13-15 | grep next 8 vfc
vfc13 is trunking
    Bound interface is port-channel13
    Hardware is Virtual Fibre Channel
    Port WWN is 20:0c:00:05:9b:7a:03:bf
    Admin port mode is F, trunk mode is on
    Port mode is TF
    Port vsan is 11
    Trunk vsans (admin allowed and active) (11)
--
vfc14 is trunking
    Bound interface is port-channel14
    Hardware is Virtual Fibre Channel
    Port WWN is 20:0d:00:05:9b:7a:03:bf
    Admin port mode is F, trunk mode is on
    Port mode is TF
    Port vsan is 11
    Trunk vsans (admin allowed and active) (11)
--
vfc15 is trunking
    Bound interface is port-channel15
    Hardware is Virtual Fibre Channel
    Port WWN is 20:0e:00:05:9b:7a:03:bf
    Admin port mode is F, trunk mode is on
    snmp link state traps are enabled
    Port mode is TF
    Port vsan is 11
    Trunk vsans (admin allowed and active) (11)
Note:
The interfaces will show down if the connecting servers are powered off.
N5K-2(config-if)# sh int vfc13-15 | grep next 8 vfc
vfc13 is trunking
    Bound interface is port-channel13
    Hardware is Virtual Fibre Channel
    Port WWN is 20:0c:00:05:9b:79:b1:ff
    Admin port mode is F, trunk mode is on
    Port mode is TF
    Port vsan is 12
    Trunk vsans (admin allowed and active) (12)
--
vfc14 is trunking
    Bound interface is port-channel14
    Hardware is Virtual Fibre Channel
    Port WWN is 20:0d:00:05:9b:79:b1:ff
    Admin port mode is F, trunk mode is on
    Port mode is TF
    Port vsan is 12
    Trunk vsans (admin allowed and active) (12)
--
vfc15 is trunking
    Bound interface is port-channel15
    Hardware is Virtual Fibre Channel
    Port WWN is 20:0e:00:05:9b:79:b1:ff
    Admin port mode is F, trunk mode is on
    Port mode is TF
    Port vsan is 12
    Trunk vsans (admin allowed and active) (12)
MDS9124(config-if)# sh int fc1/5-6 | grep next 8 fc1
fc1/5 is up
    Port description is Trunk NTAP:e2a
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:05:00:05:9b:7a:ec:c0
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is F, FCID is 0xc80000
    Port vsan is 11
    Speed is 4 Gbps
--
fc1/6 is up
    Port description is Trunk NTAP:e2b
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:06:00:05:9b:7a:ec:c0
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is F, FCID is 0x1c0000
    Port vsan is 12
    Speed is 4 Gbps
28.5 Display the SAN port channel summary on each switch.
N5K-1(config-if)# sh san-port-channel sum
U-Up D-Down B-Hot-standby S-Suspended I-Individual link
--------------------------------------------------------------------------------
Group  Port-Channel   Type  Protocol  Member Ports
--------------------------------------------------------------------------------
111    San-po111(U)   FC    PCP       fc2/3(P)    fc2/4(P)

N5K-2(config)# show san-port-channel sum
--------------------------------------------------------------------------------
Group  Port-Channel   Type  Protocol  Member Ports
--------------------------------------------------------------------------------
112    San-po112(U)   FC    PCP       fc2/3(P)    fc2/4(P)

MDS9124(config-if)# show port-channel sum
------------------------------------------------------------------------------
Interface            Total Ports   Oper Ports   First Oper Port
------------------------------------------------------------------------------
port-channel 111     2             2            fc1/2
port-channel 112     2             2            fc1/4
28.6 Verify trunking on the SAN port channels.
N5K-1(config-if)# sh int san-port-channel 111
san-port-channel 111 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:6f:00:05:9b:7a:03:80
    Admin port mode is auto, trunk mode is auto
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 11
    Speed is 8 Gbps
    Trunk vsans (admin allowed and active) (11)
    Trunk vsans (up)                       (11)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             ()

N5K-2(config)# sh int san-port-channel 112
san-port-channel 112 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:70:00:05:9b:79:b1:c0
    Admin port mode is auto, trunk mode is auto
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 12
    Speed is 8 Gbps
    Trunk vsans (admin allowed and active) (12)
    Trunk vsans (up)                       (12)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             ()
MDS9124(config-if)# sh int port-channel 111-112 | grep next 8 channel
port-channel 111 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:6f:00:05:9b:7a:ec:c0
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 11
    Speed is 8 Gbps
    Trunk vsans (admin allowed and active) (11)
--
port-channel 112 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:70:00:05:9b:7a:ec:c0
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 12
    Speed is 8 Gbps
    Trunk vsans (admin allowed and active) (12)
Here are the general steps for creating zones and zone sets:
Create aliases
Create zones
Create zone sets
Activate the zone set
Note: For the following steps, you will need the information from the table below. On occasion, hardware needs to be replaced or upgraded, and the documentation is not updated at the same time. One way to verify this is to compare the output from show flogi database with the output from show run zone. In other words, compare the values of the devices registering in against the values you manually zoned in.
50:0a:09:81:88:bc:c3:04 21:00:00:c0:dd:12:bc:6d 21:00:00:c0:dd:14:60:31 21:00:00:c0:dd:11:bc:e9 50:06:01:60:4b:a0:66:c7 21:00:00:c0:dd:13:ec:19 21:00:00:c0:dd:14:71:8d 21:00:00:c0:dd:14:73:c1 50:06:01:60:4b:a0:6e:75 21:00:00:c0:dd:13:eb:bd 21:00:00:c0:dd:13:ed:31 21:00:00:c0:dd:14:73:19 50:0a:09:81:88:ec:c2:a1 21:00:00:c0:dd:12:0e:59 21:00:00:c0:dd:12:0d:51 21:00:00:c0:dd:14:73:65
50:0a:09:82:88:bc:c3:04 21:00:00:c0:dd:12:bc:6f 21:00:00:c0:dd:14:60:33 21:00:00:c0:dd:11:bc:eb 50:06:01:61:4b:a0:66:c7 21:00:00:c0:dd:13:ec:1b 21:00:00:c0:dd:14:71:8f 21:00:00:c0:dd:14:73:c3 50:06:01:61:4b:a0:6e:75 21:00:00:c0:dd:13:eb:bf 21:00:00:c0:dd:13:ed:33 21:00:00:c0:dd:14:73:1b 50:0a:09:82:88:ec:c2:a1 21:00:00:c0:dd:12:0e:59 21:00:00:c0:dd:12:0d:53 21:00:00:c0:dd:14:73:67
Step 29 Create device aliases on each Cisco Nexus 5010 and create zones for each ESXi host. Duration: 30 minutes
Cisco Nexus 5010 A - N5K-1
29.1 Aliases for storage (targets).
device-alias database
  device-alias name NTAP1-A_0a pwwn <ntap1_a_wwpn>
29.2 Aliases for the ESXi hosts (initiators).
device-alias name ESX1_NTAP1-A_A pwwn <esx1_a_wwpn>
device-alias name ESX2_NTAP1-A_A pwwn <esx2_a_wwpn>
device-alias name ESX3_NTAP1-A_A pwwn <esx3_a_wwpn>
exit
device-alias commit
Note:
Get this information from Table 17.
29.3 Create the zones for each service profile. Each zone contains one initiator and one target. We place port 1 of each CNA in a zone with NTAP1-A 0a for VSAN 11.
zone name ESX1_NTAP1-A_A vsan 11
  member device-alias ESX1_NTAP1-A_A
  member device-alias NTAP1-A_0a
exit
zone name ESX2_NTAP1-A_A vsan 11
  member device-alias ESX2_NTAP1-A_A
  member device-alias NTAP1-A_0a
exit
zone name ESX3_NTAP1-A_A vsan 11
  member device-alias ESX3_NTAP1-A_A
  member device-alias NTAP1-A_0a
exit
29.4 Create the zoneset and add the necessary members.
zoneset name FLEXPOD_A vsan 11
  member ESX1_NTAP1-A_A
  member ESX2_NTAP1-A_A
  member ESX3_NTAP1-A_A
exit
29.5
29.6
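A zone set has no effect until it is activated, and the change should be saved. A minimal sketch of the usual NX-OS commands, assuming the zoneset name created above:

```
zoneset activate name FLEXPOD_A vsan 11
copy running-config startup-config
```

Note that the activation must be repeated whenever the zoneset membership changes.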
Cisco Nexus 5010 B - N5K-2
Step 30 Create device aliases on each Cisco Nexus 5010 and create zones for each ESXi host. Duration: 30 minutes
30.1 From the global configuration mode, type:
device-alias database
  device-alias name NTAP1-A_0b pwwn <ntap1_b_wwpn>
  device-alias name ESX1_NTAP1-A_B pwwn <esx1_b_wwpn>
  device-alias name ESX2_NTAP1-A_B pwwn <esx2_b_wwpn>
  device-alias name ESX3_NTAP1-A_B pwwn <esx3_b_wwpn>
exit
device-alias commit
Note: Get this information from Table 17.
30.2 Create the zones for each service profile. Each zone contains one initiator and one target. We place port 2 of each CNA in a zone with NTAP1-A 0b for VSAN 12.
zone name ESX1_NTAP1-A_B vsan 12
  member device-alias ESX1_NTAP1-A_B
  member device-alias NTAP1-A_0b
exit
zone name ESX2_NTAP1-A_B vsan 12
  member device-alias ESX2_NTAP1-A_B
  member device-alias NTAP1-A_0b
exit
zone name ESX3_NTAP1-A_B vsan 12
  member device-alias ESX3_NTAP1-A_B
  member device-alias NTAP1-A_0b
exit
30.3 After all of the zones for the Cisco UCS service profiles have been created, create a zoneset to organize and manage them.
30.4 Create the zoneset and add the necessary members.
zoneset name FLEXPOD_B vsan 12
  member ESX1_NTAP1-A_B
  member ESX2_NTAP1-A_B
  member ESX3_NTAP1-A_B
exit
30.5
30.6
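As on N5K-1, the fabric B zone set must be activated and the configuration saved before the zoning takes effect; a sketch with the fabric B names from this section:

```
zoneset activate name FLEXPOD_B vsan 12
copy running-config startup-config
```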
Cisco MDS9124 Note: When you activate the zone sets on N5K-1 and N5K-2, the switches will propagate the zone info to the MDS.
30.7
Verify that the entries were successfully entered into the device alias database by entering show device-alias. Examples below are for Pod1.
N5K-1# show device-alias database
device-alias name NTAP1-A_0a pwwn 50:0a:09:81:88:bc:c3:04
device-alias name NTAP1-A_0b pwwn 50:0a:09:82:88:bc:c3:04
device-alias name ESX1_NTAP1-A_A pwwn 21:00:00:c0:dd:12:bc:6d
device-alias name ESX1_NTAP1-A_B pwwn 21:00:00:c0:dd:12:bc:6f
device-alias name ESX2_NTAP1-A_A pwwn 21:00:00:c0:dd:14:60:31
device-alias name ESX2_NTAP1-A_B pwwn 21:00:00:c0:dd:14:60:33
device-alias name ESX3_NTAP1-A_A pwwn 21:00:00:c0:dd:11:bc:e9
device-alias name ESX3_NTAP1-A_B pwwn 21:00:00:c0:dd:11:bc:eb
Total number of entries = 8

N5K-2(config)# show device-alias database
device-alias name NTAP1-A_0a pwwn 50:0a:09:81:88:bc:c3:04
device-alias name NTAP1-A_0b pwwn 50:0a:09:82:88:bc:c3:04
device-alias name ESX1_NTAP1-A_A pwwn 21:00:00:c0:dd:12:bc:6d
device-alias name ESX1_NTAP1-A_B pwwn 21:00:00:c0:dd:12:bc:6f
device-alias name ESX2_NTAP1-A_A pwwn 21:00:00:c0:dd:14:60:31
device-alias name ESX2_NTAP1-A_B pwwn 21:00:00:c0:dd:14:60:33
device-alias name ESX3_NTAP1-A_A pwwn 21:00:00:c0:dd:11:bc:e9
device-alias name ESX3_NTAP1-A_B pwwn 21:00:00:c0:dd:11:bc:eb

MDS9124(config)# show device-alias database
device-alias name NTAP1-A_0a pwwn 50:0a:09:81:88:bc:c3:04
device-alias name NTAP1-A_0b pwwn 50:0a:09:82:88:bc:c3:04
device-alias name ESX1_NTAP1-A_A pwwn 21:00:00:c0:dd:12:bc:6d
device-alias name ESX1_NTAP1-A_B pwwn 21:00:00:c0:dd:12:bc:6f
device-alias name ESX2_NTAP1-A_A pwwn 21:00:00:c0:dd:14:60:31
device-alias name ESX2_NTAP1-A_B pwwn 21:00:00:c0:dd:14:60:33
device-alias name ESX3_NTAP1-A_A pwwn 21:00:00:c0:dd:11:bc:e9
device-alias name ESX3_NTAP1-A_B pwwn 21:00:00:c0:dd:11:bc:eb
30.8
Verify that the ESX hosts have completed a fabric login into N5K-1 and N5K-2. Make sure the VSAN numbers are correct and that their alias shows up. Port numbers might not match yours.
N5K-1# show flogi database
--------------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
vfc13      11    0xdb0002  21:00:00:c0:dd:12:0e:59  20:00:00:c0:dd:12:0e:59
           [ESX1_NTAP1-A_A]
vfc14      11    0xdb0001  21:00:00:c0:dd:12:0d:51  20:00:00:c0:dd:12:0d:51
           [ESX2_NTAP1-A_A]
vfc15      11    0xdb0000  21:00:00:c0:dd:14:73:65  20:00:00:c0:dd:14:73:65
           [ESX3_NTAP1-A_A]
Total number of flogi = 3.

N5K-2# show flogi database
--------------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
vfc13      12    0xb80002  21:00:00:c0:dd:12:0e:5b  20:00:00:c0:dd:12:0e:5b
           [ESX1_NTAP1-A_B]
vfc14      12    0xb80001  21:00:00:c0:dd:12:0d:53  20:00:00:c0:dd:12:0d:53
           [ESX2_NTAP1-A_B]
vfc15      12    0xb80000  21:00:00:c0:dd:14:73:67  20:00:00:c0:dd:14:73:67
           [ESX3_NTAP1-A_B]
Verify the devices registered in the Fibre Channel name server. The output shows all the hosts that have registered in the database. Note that you can see an entry for the NetApp array here, but not in the show flogi database output above.
Cisco Nexus 5010 A - N5K-1
N5K-1# sh fcns database
VSAN 11:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x140000  N     50:0a:09:81:88:ec:c2:a1  (NetApp)  scsi-fcp:target
                [NTAP1-A_0a]
0xdb0000  N     21:00:00:c0:dd:14:73:65  (Qlogic)  scsi-fcp:init
                [ESX3_NTAP1-A_A]
0xdb0001  N     21:00:00:c0:dd:12:0d:51  (Qlogic)  scsi-fcp:init
                [ESX2_NTAP1-A_A]
0xdb0002  N     21:00:00:c0:dd:12:0e:59  (Qlogic)  scsi-fcp:init
                [ESX1_NTAP1-A_A]
Total number of entries = 4
30.9
30.10
Verify that the zones are correct by issuing the command show zoneset active. The output should show the zoneset and the zones that were added to the zoneset. Examples below are for Pod1.
N5K-1# show zoneset active
zoneset name FLEXPOD_A vsan 11
  zone name ESX1_NTAP1-A_A vsan 11
  * fcid 0xd40002 [pwwn 21:00:00:c0:dd:12:bc:6d] [ESX1_NTAP1-A_A]
  * fcid 0xc80000 [pwwn 50:0a:09:81:88:bc:c3:04] [NTAP1-A_0a]
  zone name ESX2_NTAP1-A_A vsan 11
  * fcid 0xd40000 [pwwn 21:00:00:c0:dd:14:60:31] [ESX2_NTAP1-A_A]
  * fcid 0xc80000 [pwwn 50:0a:09:81:88:bc:c3:04] [NTAP1-A_0a]
  zone name ESX3_NTAP1-A_A vsan 11
  * fcid 0xd40001 [pwwn 21:00:00:c0:dd:11:bc:e9] [ESX3_NTAP1-A_A]
  * fcid 0xc80000 [pwwn 50:0a:09:81:88:bc:c3:04] [NTAP1-A_0a]
N5K-2(config)# show zoneset active
zoneset name FLEXPOD_B vsan 12
  zone name ESX1_NTAP1-A_B vsan 12
  * fcid 0x620001 [pwwn 21:00:00:c0:dd:12:bc:6f] [ESX1_NTAP1-A_B]
  * fcid 0x1c0000 [pwwn 50:0a:09:82:88:bc:c3:04] [NTAP1-A_0b]
  zone name ESX2_NTAP1-A_B vsan 12
  * fcid 0x620002 [pwwn 21:00:00:c0:dd:14:60:33] [ESX2_NTAP1-A_B]
  * fcid 0x1c0000 [pwwn 50:0a:09:82:88:bc:c3:04] [NTAP1-A_0b]
  zone name ESX3_NTAP1-A_B vsan 12
  * fcid 0x620000 [pwwn 21:00:00:c0:dd:11:bc:eb] [ESX3_NTAP1-A_B]
  * fcid 0x1c0000 [pwwn 50:0a:09:82:88:bc:c3:04] [NTAP1-A_0b]

MDS9124(config)# show zoneset active
zoneset name FLEXPOD_A vsan 11
  zone name ESX1_NTAP1-A_A vsan 11
  * fcid 0xd40002 [pwwn 21:00:00:c0:dd:12:bc:6d] [ESX1_NTAP1-A_A]
  * fcid 0xc80000 [pwwn 50:0a:09:81:88:bc:c3:04] [NTAP1-A_0a]
  zone name ESX2_NTAP1-A_A vsan 11
  * fcid 0xd40000 [pwwn 21:00:00:c0:dd:14:60:31] [ESX2_NTAP1-A_A]
  * fcid 0xc80000 [pwwn 50:0a:09:81:88:bc:c3:04] [NTAP1-A_0a]
  zone name ESX3_NTAP1-A_A vsan 11
  * fcid 0xd40001 [pwwn 21:00:00:c0:dd:11:bc:e9] [ESX3_NTAP1-A_A]
  * fcid 0xc80000 [pwwn 50:0a:09:81:88:bc:c3:04] [NTAP1-A_0a]
zoneset name FLEXPOD_B vsan 12
  zone name ESX1_NTAP1-A_B vsan 12
  * fcid 0x620001 [pwwn 21:00:00:c0:dd:12:bc:6f] [ESX1_NTAP1-A_B]
  * fcid 0x1c0000 [pwwn 50:0a:09:82:88:bc:c3:04] [NTAP1-A_0b]
  zone name ESX2_NTAP1-A_B vsan 12
  * fcid 0x620002 [pwwn 21:00:00:c0:dd:14:60:33] [ESX2_NTAP1-A_B]
  * fcid 0x1c0000 [pwwn 50:0a:09:82:88:bc:c3:04] [NTAP1-A_0b]
  zone name ESX3_NTAP1-A_B vsan 12
  * fcid 0x620000 [pwwn 21:00:00:c0:dd:11:bc:eb] [ESX3_NTAP1-A_B]
  * fcid 0x1c0000 [pwwn 50:0a:09:82:88:bc:c3:04] [NTAP1-A_0b]
This section presents a detailed procedure for installing VMware ESXi within a Data Center Virtualization environment. The deployment procedures that follow are customized to include the specific environment variables that have been noted in previous sections.
31.6 Under the Actions section, click the Launch KVM Console link. Click Run on any certificate mismatch warning dialogs that may pop up. You will now have a Java KVM console to the server.
31.7 Repeat Steps 31.1 - 31.6 for ESX2 CIMC (http://10.1.111.162) and ESX3 CIMC (http://10.1.111.163).
Step 32 Setting up the ESXi install
Note: This step has already been done for you. Skip to the next step.
On both ESXi hosts ESX1 and ESX2
32.1 Under the VM tab in the KVM window, select Add image.
32.2 Click the Add Image button in the window that displays.
32.3 Browse to the ESXi installer iso image file.
Note: The file is at E:\Lab\Software\VMware-VMvisor-Installer-4.1.0-260247_Cisco.iso
32.4 Click Open to add the image to the list of virtual media.
32.5 Click the checkbox for Mapped next to the entry corresponding to the image you just added.
32.6 The hosts should detect the presence of the virtual media on reboot.
Step 33 Installing ESXi
Note: This step has already been done for you. Skip to the next step.
On both ESXi hosts ESX1 and ESX2
33.1 Reboot the server using the Power Cycle Server button at the top of the KVM window.
a. It doesn't matter whether you use a soft or hard reboot, because the blades do not have an OS.
33.2 On reboot, the machine detects the presence of the ESXi install media.
33.3 Select ESXi Installer from the menu that displays.
33.4 After the installer is finished loading, press Enter to continue with the install.
33.5 Read through the EULA and press F11 to accept and continue with the install.
33.6 Select the NetApp LUN (2 GB in size) that you set up previously as the install disk for ESXi and then press Enter to continue.
33.7 The installer warns you that existing partitions will be removed on the volume. After you are sure this is what you want, press F11 to install ESXi.
33.8 After the install is complete, be sure to unmap the ESXi install image by unchecking the Mapped checkbox in the Virtual Media window.
a. This is so that the server reboots into ESXi and not the installer.
33.9 The Virtual Media window might warn you that it is preferable to eject the media from the guest. Because we cannot do this (and the media is read-only), click Yes and unmap it anyway.
33.10 Press Enter to reboot the server.
33.11 Each of the hosts should now have a bootable ESXi environment installed from the virtual media.
Step 34 Setting up the ESXi hosts administration password
Note: This step has already been done for you. Skip to the next step.
On both ESXi hosts ESX1 and ESX2
34.1 After the server is done rebooting, press F2 (the Customize System option).
34.2 Log in with root as the login name and an empty password field.
34.3 Select the Configure Password menu option.
34.4 Enter 1234Qwer as the password you want to use for administering the ESXi host.
34.5 Enter the same password to confirm, and press Enter to set the password.
Step 35 Setting up the ESXi hosts management networking. Duration: 3 minutes
Note: This step has already been done for you. Skip to the next step.
ESXi host 1 - ESX1
35.1 From the System Customization menu, select the Configure Management Network option.
35.2 Select the IP Configuration menu option.
35.3 Select the Set static IP address and network configuration: option to manually set up the management networking.
35.4 Enter 10.1.111.21 for the IP address for managing the ESXi host.
35.5 Enter 255.255.255.0 as the subnet mask for the ESXi host.
35.6 Enter 10.1.111.254 as the default gateway for the ESXi host.
35.7 Press Enter to accept the changes to the management networking.
35.8 Press Esc to exit the Configure Management Network submenu.
35.9 Press y to confirm the changes made and return to the main menu.
ESXi host 2 - ESX2
35.10 From the System Customization menu, select the Configure Management Network option.
35.11 Select the IP Configuration menu option.
35.12 Select the Set static IP address and network configuration: option to manually set up the management networking.
35.13 Enter 10.1.111.22 for the IP address for managing the ESXi host.
35.14 Enter 255.255.255.0 as the subnet mask for the ESXi host.
35.15 Enter 10.1.111.254 as the default gateway for the ESXi host.
35.16 Press Enter to accept the changes to the management networking.
35.17 Press Esc to exit the Configure Management Network submenu.
35.18 Press y to confirm the changes made and return to the main menu.
Step 36 Setting up the management VLAN
Note: This step has already been done for you. Skip to the next step.
On both ESXi hosts ESX1 and ESX2
36.1 From the System Customization menu, select the Configure Management Network option.
36.2 Select the VLAN (optional) menu item.
36.3 Input 111 for the VLAN ID of the management interface.
36.4 Press Esc to exit the Configure Management Network submenu.
36.5 Press y to confirm the changes made and to return to the main menu.
36.6 Select Test Management Network to verify that the management network is set up correctly.
Note: The DNS test will fail because we have not configured DNS yet.
36.7 Press Esc to log out of the console interface.
36.8 To verify, in the right panel of the ESXi configuration window, when the VLAN (optional) item is highlighted, the specified VLAN should be shown.
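For reference, the same management-network settings can also be applied from the ESX/ESXi 4.x Tech Support Mode shell instead of the console menus. A sketch, assuming the default vSwitch0 and "Management Network" port group names (adjust if your host uses different ones):

```
# Tag the management port group with VLAN 111
esxcfg-vswitch -v 111 -p "Management Network" vSwitch0
# Create the management vmkernel interface with its static IP
esxcfg-vmknic -a -i 10.1.111.21 -n 255.255.255.0 "Management Network"
# Set the default gateway for vmkernel traffic
esxcfg-route 10.1.111.254
```

Use 10.1.111.22 in place of 10.1.111.21 on ESX2.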
2011 Cisco
Page 79 of 217
Step 37 Setting up DNS Note: This step has already been done for you. Skip to the next step.
ESXi host 1 - ESX1
37.1 From the System Customization menu, select the Configure Management Network option.
37.2 Select the DNS Configuration menu option.
37.3 Because we manually specified the IP configuration for the ESXi host, we also must specify the DNS information manually.
37.4 Enter 10.1.111.10 as the primary DNS server's IP address.
37.5 (Optional) Enter the secondary DNS server's IP address.
37.6 Enter ESX1.dcvlabs.lab as the hostname for the ESXi host.
37.7 Press Enter to accept the changes to the DNS configuration.
37.8 Press Esc to exit the Configure Management Network submenu.
37.9 Press y to confirm the changes made and return to the main menu.
37.10 Select Test Management Network on the System Configuration screen.
37.11 On the Test Management Network screen, press the Enter key. You should see OK as the result from pinging the default gateway and DNS server and from resolving the ESXi server hostname. If any of the tests fail, contact your instructor.
ESXi host 2 - ESX2
37.12 From the System Customization menu, select the Configure Management Network option.
37.13 Select the DNS Configuration menu option.
37.14 Because we manually specified the IP configuration for the ESXi host, we also must specify the DNS information manually.
37.15 Enter 10.1.111.10 as the primary DNS server's IP address.
37.16 (Optional) Enter the secondary DNS server's IP address.
37.17 Enter ESX2.dcvlabs.lab as the hostname for the ESXi host.
37.18 Press Enter to accept the changes to the DNS configuration.
37.19 Press Esc to exit the Configure Management Network submenu.
37.20 Press y to confirm the changes made and return to the main menu.
37.21 You can verify this step and the two previous steps by selecting the Test Management Network option from the System Customization menu. Here you can specify up to three addresses to ping and one hostname to resolve by using the DNS server.
Step 38 Enable CLI support for ESXi.
Note: This step has already been done for you. Skip to the next step.
38.1 Press Esc twice to log out of the console interface.
38.2 Press ALT-F1 to access the command-line console interface.
38.3 Log in with the root user ID and password.
vmnic2 and vmnic3 are the 1Gbps NICs connected to the Cisco Nexus 2248 Fabric Extenders. They are both active and use the default ESXi virtual port ID load-balancing mechanism.
39.3 Enable jumbo frames for the default vSwitch0. Type esxcfg-vswitch -m 9000 vSwitch0.
39.4 Add a new vSwitch for the 10Gbps CNA ports.
39.5 Enable jumbo frames for vSwitch1.
Step 40 Create the necessary port groups on vSwitch1.
On both ESXi hosts ESX1 and ESX2
40.1 Add a new port group called MGMT Network to vSwitch1 and assign it to VLAN 111.
esxcfg-vswitch -A "MGMT Network" vSwitch1
esxcfg-vswitch -v 111 -p "MGMT Network" vSwitch1
Why am I creating another management network group? The default Management Network is a vmkernel management interface. This new port group is for VMs to be on the management VLAN.
40.2 Add a new port group called NFS to vSwitch1 and assign it to VLAN 211.
40.3 Add a new port group called VMotion to vSwitch1 and assign it to VLAN 151.
40.4 Add a new port group called CTRL-PKT to vSwitch1 and assign it to VLAN 171.
40.5 Add a new port group called VMTRAFFIC to vSwitch1 and assign it to VLAN 131.
40.6 Add a new port group called Local LAN to vSwitch1 and assign it to VLAN 24.
40.7 Refresh the network settings.
vim-cmd hostsvc/net/refresh
You need to run a refresh of your network settings for the following steps. This is important when running these commands from a script.
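The six port groups above are created with the same two-command pattern: `-A` adds the group, `-v` assigns its VLAN. If you script this step, a small helper can emit the pairs. The sketch below is illustrative only (it is not part of the lab) and simply encodes the port-group/VLAN table from Step 40:

```python
# Hypothetical helper (not part of the lab): emit the esxcfg-vswitch
# command pair used in Step 40 for each port group -> VLAN mapping.
PORT_GROUPS = [
    ("MGMT Network", 111),
    ("NFS", 211),
    ("VMotion", 151),
    ("CTRL-PKT", 171),
    ("VMTRAFFIC", 131),
    ("Local LAN", 24),
]

def portgroup_commands(vswitch="vSwitch1"):
    """Return the esxcfg-vswitch commands that add each port group
    (-A) and then assign its VLAN (-v) on the given vSwitch."""
    cmds = []
    for name, vlan in PORT_GROUPS:
        cmds.append('esxcfg-vswitch -A "%s" %s' % (name, vswitch))
        cmds.append('esxcfg-vswitch -v %d -p "%s" %s' % (vlan, name, vswitch))
    return cmds

if __name__ == "__main__":
    print("\n".join(portgroup_commands()))
```

Generating the commands this way keeps the port-group names and VLAN IDs in one table, which reduces copy-paste errors when the same build is run on both ESXi hosts.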
On both ESXi hosts ESX1 and ESX2
40.8 Verify the MTU 9000 setting and the addition of the port groups. Type esxcfg-vswitch -l.

~ # esxcfg-vswitch -l
Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0       128        4           128               9000  vmnic2,vmnic3

  PortGroup Name      VLAN ID  Used Ports  Uplinks
  VM Network          0        0           vmnic2,vmnic3
  Management Network  111      1           vmnic2,vmnic3

Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch1       128        5           128               9000  vmnic0,vmnic1

  PortGroup Name      VLAN ID  Used Ports  Uplinks
  Local LAN           24       0           vmnic0,vmnic1
  CTRL-PKT            171      0           vmnic0,vmnic1
  MGMT Network        111      0           vmnic0,vmnic1
  VMotion             151      0           vmnic0,vmnic1
  NFS                 211      0           vmnic0,vmnic1
Step 41 Enable load balancing via IP Hash on vSwitch1. 41.1 Set vSwitch1 to load balance based on IP Hash. The Nexus 10Gbps ports have already been configured for load balancing based on IP Hash.
vim-cmd /hostsvc/net/vswitch_setpolicy --nicteaming-policy='loadbalance_ip' vSwitch1
41.2 Verify your vSwitch load-balancing policy. vSwitch0 should be set to lb_srcid and vSwitch1 should be set to lb_ip.
~ # grep "vswitch" /etc/vmware/esx.conf | egrep '(teamPolicy\/team|vSwitch)' /net/vswitch/child[0000]/name = "vSwitch0" /net/vswitch/child[0000]/teamPolicy/team = "lb_srcid" /net/vswitch/child[0001]/name = "vSwitch1" /net/vswitch/child[0001]/teamPolicy/team = "lb_ip"
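IP-hash teaming chooses an uplink from the source and destination IP addresses, so a given src/dst pair always lands on the same physical NIC. The sketch below is a simplified model of that behavior, not VMware's exact implementation (the real ESX hash differs in detail); the function name and XOR-mod formula are illustrative assumptions:

```python
import ipaddress

def ip_hash_uplink(src_ip, dst_ip, n_uplinks):
    """Simplified model of lb_ip teaming: XOR the 32-bit source and
    destination addresses and take the result modulo the uplink count.
    The key property modeled here is determinism: a given src/dst
    pair always maps to the same uplink."""
    s = int(ipaddress.IPv4Address(src_ip))
    d = int(ipaddress.IPv4Address(dst_ip))
    return (s ^ d) % n_uplinks

# The same conversation always hashes to the same uplink...
first = ip_hash_uplink("10.1.211.21", "10.1.211.151", 2)
second = ip_hash_uplink("10.1.211.21", "10.1.211.151", 2)
assert first == second
# ...while different src/dst pairs can spread across both uplinks.
```

Because each conversation pins to exactly one uplink, the upstream switch must treat both NICs as a single logical link; this is why the Nexus 10Gbps ports facing vSwitch1 are configured as a port-channel.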
Step 42 Create vmkernel interfaces for vMotion and NFS storage. On ESXi host ESX1 42.1 Create vmkernel interface for NFS traffic. Enable it for Jumbo Frames on port group NFS.
esxcfg-vmknic -a -i 10.1.211.21 -n 255.255.255.0 -m 9000 -p NFS
42.2 Create a vmkernel interface for VMotion traffic. Enable it for jumbo frames on port group VMotion.
On ESXi host ESX2 42.3 Create vmkernel interface for NFS traffic. Enable it for Jumbo Frames on port group NFS.
esxcfg-vmknic -a -i 10.1.211.22 -n 255.255.255.0 -m 9000 -p NFS
42.4 Create a vmkernel interface for VMotion traffic. Enable it for jumbo frames on port group VMotion.
42.5 Type esxcfg-vmknic -l and verify that the vmkernel ports were added properly with an MTU of 9000.
On ESXi host ESX1

~ # esxcfg-vmknic -l
Interface  Port Group          IP Family  IP Address   Netmask        Broadcast     MAC Address        MTU   TSO MSS  Enabled  Type
vmk0       Management Network  IPv4       10.1.111.21  255.255.255.0  10.1.111.255  c4:7d:4f:7c:a7:6a  1500  65535    true     STATIC
vmk1       NFS                 IPv4       10.1.211.21  255.255.255.0  10.1.211.255  00:50:56:7e:60:53  9000  65535    true     STATIC
vmk2       VMotion             IPv4       10.1.151.21  255.255.255.0  10.1.151.255  00:50:56:7b:ae:78  9000  65535    true     STATIC
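When checking several hosts, the MTU column of the esxcfg-vmknic -l listing can be verified programmatically. The helper below is hypothetical (not part of the lab) and is written against a simplified version of the listing in which MTU is the last column; the real output has more columns, so a production script would need to adjust the field index:

```python
# Hypothetical check (not part of the lab): scan a simplified
# `esxcfg-vmknic -l` listing and flag vmk interfaces whose MTU
# is not 9000. Assumes MTU is the last whitespace-separated field.
SAMPLE = """\
Interface  Port Group          IP Address   Netmask        MTU
vmk0       Management Network  10.1.111.21  255.255.255.0  1500
vmk1       NFS                 10.1.211.21  255.255.255.0  9000
vmk2       VMotion             10.1.151.21  255.255.255.0  9000
"""

def non_jumbo_vmknics(listing):
    """Return the vmk interface names whose MTU field is not 9000."""
    bad = []
    for line in listing.splitlines():
        fields = line.split()
        if fields and fields[0].startswith("vmk") and fields[-1] != "9000":
            bad.append(fields[0])
    return bad

# In this lab only the management interface stays at 1500:
# non_jumbo_vmknics(SAMPLE) flags vmk0; NFS and VMotion are jumbo.
```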
Summary of Commands
esxcfg-vswitch -m 9000 vSwitch0
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "MGMT Network" vSwitch1
esxcfg-vswitch -v 111 -p "MGMT Network" vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vswitch -v 151 -p VMotion vSwitch1
esxcfg-vswitch -A NFS vSwitch1
esxcfg-vswitch -v 211 -p NFS vSwitch1
esxcfg-vswitch -A "CTRL-PKT" vSwitch1
esxcfg-vswitch -v 171 -p "CTRL-PKT" vSwitch1
esxcfg-vswitch -A "VMTRAFFIC" vSwitch1
esxcfg-vswitch -v 131 -p "VMTRAFFIC" vSwitch1
esxcfg-vswitch -A "Local LAN" vSwitch1
esxcfg-vswitch -v 24 -p "Local LAN" vSwitch1
vim-cmd hostsvc/net/refresh
vim-cmd /hostsvc/net/vswitch_setpolicy --nicteaming-policy='loadbalance_ip' vSwitch1
Step 43 Logging into the VMware ESXi hosts using the VMware vSphere client. Duration: 5 minutes
ESXi host 1 - ESX1
43.1 Open the vSphere client and enter 10.1.111.21 as the host you are trying to connect to.
43.2 Enter root for the username.
43.3 Enter 1234Qwer as the password.
43.4 Click the Login button to connect.
ESXi host 2 - ESX2
43.5 Open the vSphere client and enter 10.1.111.22 as the host you are trying to connect to.
43.6 Enter root for the username.
43.7 Enter 1234Qwer as the password.
43.8 Click the Login button to connect.
43.9 To verify that the login was successful, the vSphere client's main window should be visible.
Step 44 Setting up the VMotion VMkernel port on the virtual switch for individual hosts. Duration: 5 minutes per host
Now we need to enable VMotion on the VMkernel port we created.
ESXi host 1 - ESX1
44.1 Select ESX1 on the left panel.
44.2 Go to the Configuration tab.
44.3 Click the Networking link in the Hardware box.
44.4 Click the Properties link in the right field on vSwitch1.
44.5 Select the VMotion configuration and click the Edit button.
44.6 Check the vMotion: Enabled checkbox.
44.7 Click OK to continue.
44.8 Click Close to close the dialog box.
ESXi host 2 - ESX2 44.9 Select ESX2 on the left panel. 44.10 Go to the Configuration tab. 44.11 Click the Networking link in the Hardware box. 44.12 Click the Properties link in the right field on vSwitch1. 44.13 Select the VMotion configuration and click the Edit button. 44.14 Check the vMotion: Enabled checkbox. 44.15 Click OK to continue. 44.16 Click Close to close the dialog box. 44.17 On the right panel, click the Virtual Switch View. Individual VMkernel ports will be displayed for the various networks defined. Select a VMkernel port and display the VM associated with that port.
Step 45 Change VLAN ID for default VM-traffic port-group called VM Network Duration: 5 minutes For each ESXi Host ESX1 and ESX2 45.1 Select the host on the left panel. 45.2 Select the Configuration tab. 45.3 Select the Networking link in the Hardware box. 45.4 Click Properties in the right field for vSwitch0.
45.5 Select the VM Network port group.
45.6 Click Edit.
45.7 Type in the VLAN ID for your Pod's VM Traffic VLAN (for example, 131).
46.3 Create a test file in the SWAP datastore.

~ # touch /vmfs/volumes/SWAP/test

46.4 View the contents of the mount to confirm the file.

~ # ls /vmfs/volumes/SWAP/
test

46.5 From the vSphere client, view the contents of the mount to confirm the file. Select your host from the left panel.
46.6 Select the Configuration tab.
46.7 Select Storage in the Hardware box.
46.8 Inspect the right panel where the cluster is displayed. You should see all of the datastores associated with the host.
Summary of Commands
esxcfg-nas -a --host 10.1.211.151 -s /vol/VDI_VFILER1_DS DS esxcfg-nas -a --host 10.1.211.151 -s /vol/VDI_SWAP SWAP
Step 47 Time configuration for individual hosts - (SKIP for LAB). Duration: 5 minutes per host
For each ESXi host ESX1 and ESX2
47.1 Select the host on the left panel.
47.2 Select the Configuration tab.
47.3 Click the Time Configuration link in the Software box.
47.4 Click the Properties link on the right panel.
47.5 A Time Configuration window displays. Click Options at the bottom.
47.6 An NTP Daemon Options window displays. Select NTP Settings in the left box, then click Add.
47.7 Another pop-up window displays. Enter 192.43.244.18 for the IP address of the NTP server, and click OK to continue.
47.8 On the original NTP Daemon Options window, check the Restart NTP Service checkbox.
47.9 Click OK at the bottom of the window to continue and close the window.
47.10 On the Time Configuration window, verify that the clock is now set to the correct time. If the time is correct, click OK to save the configuration and exit. To verify, the right panel displays the correct time, the NTP client status, and the NTP server IP address.
Step 48 Moving the swap file. Duration: 5 minutes per host
For ESXi hosts ESX1, ESX2, and ESX3
48.1 Select the host on the left panel.
48.2 Select the Configuration tab.
48.3 In the Software box, select Virtual Machine Swapfile Location.
48.4 On the right panel, click Edit.
48.5 Select the radio button for Store the swapfile in a swap file datastore selected below if it is not already selected.
48.6 Select SWAP as the datastore you want to store the swapfile on.
48.7 Click OK at the bottom of the page to finish.
48.8 To verify, the swapfile location is displayed on the right panel.
You are now done with the initial setup of a Base Data Center Virtualization infrastructure. The remaining tasks will allow you to configure vCenter, Nexus 1000v, and OTV.
Step 49 Setting up the vCenter datacenter. Duration: 5 minutes
49.1 On the VC_SERVER desktop, double-click the VMware vSphere Client icon. Make sure that the settings are for localhost and Use Windows session credentials (as below), and click Login.
49.2 On the left panel, right-click the vCenter server name and select New Datacenter.
49.3 Enter FlexPod_DC_1 as the name of the new datacenter.
49.4 On the left panel, the datacenter displays underneath the vCenter name.
Step 50 Setting up the management cluster. Duration: 5 minutes per cluster
50.1 Right-click the datacenter and select New Cluster.
50.2 Enter FlexPod_Mgmt as the name for the cluster.
50.3 Check the box for VMware HA. Do not check the box for VMware DRS. Click Next to continue.
Note: The FlexPod Implementation Guide recommends that you enable VMware DRS and accept its defaults.
50.4 Accept the defaults for power management, and click Next to continue.
50.5 Accept the defaults for VMware HA, and click Next to continue.
50.6 Accept the defaults for Virtual Machine Options, and click Next to continue.
50.7 Accept the defaults for VM Monitoring, and click Next to continue.
50.8 Accept the defaults for VMware EVC, and click Next to continue.
50.9 Select Store the swapfile in the datastore specified by the host in the VM Swapfile Location section, and click Next to continue.
50.10 Review the selections made and click Finish to continue.
50.11 On the left panel, the cluster displays under the datacenter name.
7.2
This task has already been completed for you. You may review it for completeness. Please skip ahead to Section 7.3.
ESX1 vmnic0 is the CNA connected to N5K-1 Eth1/9. ESX2 vmnic0 is the CNA connected to N5K-1 Eth1/4. Add a datastore to each ESX host presented via FCoE through the fabric.
Step 52 Click on the 10.1.111.21 (ESX1) host under the ClusterA cluster. Select the Configuration tab. Click on the Storage link under Hardware. Click on the Add Storage link.
52.1 Select the Disk/LUN radio button, then click Next.
52.2 Select the 50 GB Fibre Channel disk that is found and click Next.
Note: This LUN is connected via FCoE. ESX1 vmnic0 is the CNA port that is connected to N5K-1 Eth1/9.
52.3 Click Next on the Current Disk Layout dialog box that follows.
52.4 Name the datastore NetApp-SAN-1, then click Next.
52.5 Uncheck the Maximize capacity box, and then enter 40.00 GB in the size box. Click Next.
Note: We will not use the full capacity of the LUN.
52.6 Click Finish to add the datastore. Note that the datastore appears under Storage on both ESX1 and ESX2. This is because the NetApp array has this LUN masked for both the ESX1 and ESX2 initiators. You might need to click Refresh.
53.3 Click on the Server-2003R2 folder to open it. Right-click on the Server-2003R2.vmx file and select Add to Inventory from the pop-up menu.
53.4 Leave the Name as Server-2003R2. Select FlexPod_DC_1. Click Next.
53.5 Specify your cluster and click Next.
53.6 Select ESX1 for the host. Click Next, then click Finish on the Add to Inventory dialog box.
Step 54 Add the Client VM to the ESX2 inventory.
54.1 Click on the ClientXP folder to open it. Right-click on the ClientXP.vmx file and select Add to Inventory from the pop-up menu.
54.2 Leave the Name as ClientXP. Select FlexPod_DC_1. Click Next.
54.3 Specify your cluster and click Next.
54.4 Select ESX2 for the host. Click Next, then click Finish on the Add to Inventory dialog box.
54.5 Close the Datastore Browser.
Specify the following FTP location for the source URL: ftp://10.1.111.100/Nexus1000v.4.2.1.SV1.4/VSM/Install/nexus-1000v.4.2.1.SV1.4.ova
Verify the VSM OVF template details, such as the version number. Click Next.
Accept the End User License Agreement. Click Next.
Name it vsm-1. Click Next.
Select Nexus 1000V Installer for Deployment Configuration. Click Next.
Select Netapp-SAN-1 for the Datastore.
55.9 Select your Cluster and click Next.
55.10 Select Thick provisioned format storage for Disk Format. Click Next.
55.11 Map the Nexus 1000V Control and Packet source networks to CTRL-PKT. Map the Management source network to "MGMT Network". Click Next.
Note: Cisco supports using the same VLAN for the Management, Control, and Packet port groups. We are using one port group for Management traffic and another for Control and Packet traffic.
55.12 Fill out the VSM Configuration Properties with the information below, and then click Next.
VSM Domain ID: 11
Password: 1234Qwer
Management IP Address: 10.1.111.17
Management IP Subnet Mask: 255.255.255.0
Management IP Gateway: 10.1.111.254
55.13 Click Finish.
55.14 After the template is finished deploying, click Close.
55.15 Power on the VSM by clicking on the Nexus1000v VM and pressing the Power On icon.
55.16 Then, launch the VM Console and verify that the VM boots to the login prompt.
56.3 Save the XML document to your desktop.
56.4 Select Plug-ins > Manage Plug-ins in the vSphere Client window.
56.5 In the new window, right-click in the open area below "Available Plug-ins" and select New Plug-in (you may have to expand the window to do so).
56.6 Click Browse and navigate to where you saved cisco_nexus_1000v_extension.xml.
56.7 Click Open to open the XML file.
56.8 Click Register Plug-in.
56.9 If you get a security warning, click Ignore.
56.10 Click OK to confirm that the plug-in installed correctly.
57.4 Configure the system jumbo MTU to be 9000. (This is the default value.)
57.5 Configure the Nexus 1000V domain.
svs-domain
  domain id 11
  control vlan 171
  packet vlan 171
  svs mode L2
57.6 Configure the connection to the vCenter server.

svs connection vcenter
  protocol vmware-vim
  remote ip address 10.1.111.100 port 80
  vmware dvs datacenter-name FlexPod_DC_1
  connect
exit
Step 58 Verify the connection to the vCenter and its status before adding hosts to the VSM. The command show svs connections shows the VSM's connection information for the vCenter. Make sure the operational status is Connected and the sync status is Complete. If the status is good, then proceed to adding hosts.
vsm-1# show svs connections
connection vcenter:
    ip address: 10.1.111.100
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: FlexPod_DC_1
    DVS uuid: 84 52 1a 50 0c aa 52 b2-10 64 47 c3 8d af 46 70
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 4.1.0 build-345043
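When this check is automated across pods, the two fields that matter can be pulled out of the command output programmatically. The parser below is a hypothetical helper (the function name and simplified sample are assumptions, not lab artifacts):

```python
# Hypothetical parser (not part of the lab): extract the two status
# fields that Step 58 says must be verified before adding hosts.
def svs_status(output):
    """Return (operational_status, sync_status) parsed from
    `show svs connections` output; a field is None if missing."""
    op = sync = None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("operational status:"):
            op = line.split(":", 1)[1].strip()
        elif line.startswith("sync status:"):
            sync = line.split(":", 1)[1].strip()
    return op, sync

SAMPLE = """\
connection vcenter:
    operational status: Connected
    sync status: Complete
"""
# Proceed to adding hosts only when this returns ("Connected", "Complete").
```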
58.1 The Cisco Nexus 1000V switch should now be available in the Inventory > Networking view.
59.1 Verify the VLANs that were created.

vsm-1(config-vlan)# show vlan brief
VLAN  Name
----  --------------------------------
1     default
111   MGMT-VLAN
131   VMTRAFFIC
151   VMOTION
171   CTRL-PKT
211   NFS-VLAN
Step 60 Enable LACP and LACP offload. In our lab, we won't be using LACP to negotiate our port-channel, but we will enable the feature in case we do later on. LACP offload is a feature that allows the VEM to negotiate the LACP port-channel instead of the VSM. This is useful in case the VSM becomes unavailable.
60.1 To support LACP port-channels you need to first enable the LACP feature.
feature lacp
60.2 Now we need to enable LACP offload. This WILL require a reboot of the VSM.
60.3 After the VSM reloads, verify the LACP offload status.

vsm-1# show lacp offload status
Current Status        : Enabled
Running Config Status : Enabled
Saved Config Status   : Enabled
Summary of Commands
hostname vsm-1
system jumbomtu 9000
svs-domain
  domain id 11
  control vlan 171
  packet vlan 171
  svs mode L2
exit
svs connection vcenter
  protocol vmware-vim
  remote ip address 10.1.111.100 port 80
  vmware dvs datacenter-name FlexPod_DC_1
  connect
exit
vlan 111
  name MGMT-VLAN
vlan 131
  name VMTRAFFIC
vlan 151
  name VMOTION
vlan 171
  name CTRL-PKT
vlan 211
  name NFS-VLAN
feature lacp
lacp offload
copy running startup
reload
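In this summary, the svs-domain and svs connection stanzas are the parts that change between pods (domain ID, control/packet VLAN, vCenter IP, datacenter name). When preparing per-pod configurations, a generator like the hypothetical sketch below (not part of the lab) can render that fragment from a few variables:

```python
# Hypothetical generator (not part of the lab): render the VSM
# svs-domain / svs connection stanzas from per-pod variables,
# matching the commands in the summary above.
def vsm_svs_config(domain_id=11, ctrl_vlan=171, pkt_vlan=171,
                   vc_ip="10.1.111.100", dc_name="FlexPod_DC_1"):
    return "\n".join([
        "svs-domain",
        "  domain id %d" % domain_id,
        "  control vlan %d" % ctrl_vlan,
        "  packet vlan %d" % pkt_vlan,
        "  svs mode L2",
        "exit",
        "svs connection vcenter",
        "  protocol vmware-vim",
        "  remote ip address %s port 80" % vc_ip,
        "  vmware dvs datacenter-name %s" % dc_name,
        "  connect",
        "exit",
    ])

if __name__ == "__main__":
    print(vsm_svs_config())
```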
port-profile type ethernet SYSTEM-UPLINK
  description System profile for blade uplink ports
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 111,131,151,171,211
  mtu 9000
61.2 We are going to turn on port-channel for our uplink.
Note: For channel-groups, my rule of thumb is:
UCS-B Series: use channel-group auto mode on mac-pinning
UCS-C Series to switch(es) with no port-channel: use channel-group auto mode on mac-pinning
UCS-C Series to switch(es) with a static port-channel: use channel-group auto mode on
UCS-C Series to switch(es) with an LACP port-channel: use channel-group auto mode active

channel-group auto mode on

61.3 Enable all ports in the profile.

no shutdown

61.4 VLANs 111, 151, 171, and 211 are used for Management, VMotion, N1K management, and datastore traffic, so they have to be configured as system VLANs to ensure that these VLANs are available during the boot process.

system vlan 111,151,171,211

61.5 Enable the port profile.

state enabled
Step 62 Create a Management port-profile for your ESXi management VMKernel interface. This port profile will also be used by the Management interface of the VSM. As VLAN 111 is used for management traffic, it has to be configured as a system VLAN to ensure that this VLAN is available during the boot process of the ESXi server.
port-profile type vethernet MGMT-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 111
  no shutdown
  system vlan 111
  state enabled
Step 63 Create a Nexus 1000V Control and Packet port profile for the VSM virtual interfaces.
63.1 As VLAN 171 is used for control and packet traffic, it has to be configured as a system VLAN to ensure that this VLAN is available during the boot process of the ESXi server.
Note: The following section is not used currently, because we are using VLAN 1 for Control, Packet, and Management.
port-profile type vethernet N1KV-CTRL-PKT
  vmware port-group
  switchport mode access
  switchport access vlan 171
  no shutdown
  system vlan 171
  state enabled
Step 64 Create a NFS Storage port-profile for NFS VMKernel interface. 64.1 VLAN 211 is used for storage traffic, so it has to be configured as a system VLAN to ensure that this VLAN is available during the boot process of the ESXi server.
port-profile type vethernet NFS-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 211
  no shutdown
  system vlan 211
  state enabled
Step 65 Create a vMotion port-profile for the vMotion vmkernel interface.
65.1 As VLAN 151 is used for vMotion traffic, it has to be configured as a system VLAN to ensure that this VLAN is available during the boot process of the ESXi server.
port-profile type vethernet VMOTION
  vmware port-group
  switchport mode access
  switchport access vlan 151
  no shutdown
  system vlan 151
  state enabled
Step 66 Create a VM Traffic port-profile for VM virtual interfaces. This will be for the non-management virtual machines residing on the ESXi hosts.
port-profile type vethernet VMTRAFFIC-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 131
  no shutdown
  ! system vlan 131
  state enabled
exit
Summary of Commands
port-profile type ethernet SYSTEM-UPLINK
  description system profile for blade uplink ports
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 111,131,151,171,211
  mtu 9000
  channel-group auto mode on
  no shutdown
  system vlan 111,151,171,211
  state enabled
port-profile type vethernet MGMT-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 111
  no shutdown
  system vlan 111
  state enabled
port-profile type vethernet NFS-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 211
  no shutdown
  system vlan 211
  state enabled
exit
port-profile type vethernet VMOTION
  vmware port-group
  switchport mode access
  switchport access vlan 151
  no shutdown
  system vlan 151
  state enabled
exit
port-profile type vethernet VMTRAFFIC-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 131
  no shutdown
  ! system vlan 131
  state enabled
port-profile type vethernet N1KV-CTRL-PKT
  vmware port-group
  switchport mode access
  switchport access vlan 171
  no shutdown
  system vlan 171
  state enabled
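The five vethernet port profiles in this summary differ only by name, access VLAN, and whether the VLAN is a system VLAN. As a hedged illustration (the helper and profile table below are assumptions for scripting purposes, not lab artifacts), that repeated pattern can be rendered from a table:

```python
# Hypothetical generator (not part of the lab): render a vethernet
# port-profile stanza in the pattern used by the summary above.
def veth_port_profile(name, vlan, system=True):
    """Build one vethernet port-profile; `system=True` adds the
    'system vlan' line used for boot-critical VLANs."""
    lines = [
        "port-profile type vethernet %s" % name,
        "  vmware port-group",
        "  switchport mode access",
        "  switchport access vlan %d" % vlan,
        "  no shutdown",
    ]
    if system:
        lines.append("  system vlan %d" % vlan)
    lines.append("  state enabled")
    return "\n".join(lines)

# (name, access VLAN, system VLAN?) as configured in this lab;
# VMTRAFFIC-VLAN is the one profile without a system VLAN.
PROFILES = [("MGMT-VLAN", 111, True), ("NFS-VLAN", 211, True),
            ("VMOTION", 151, True), ("VMTRAFFIC-VLAN", 131, False),
            ("N1KV-CTRL-PKT", 171, True)]
```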
67.7 Type vem status and confirm that the VEM has been installed properly.
/vmfs/volumes/e413d232-639669f1 # vem status
VEM modules are loaded

Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0       128        16          128               9000  vmnic1,vmnic0
Summary of Commands
cd /vmfs/volumes/DS
esxupdate -b cross_cisco-vem-v130-4.2.1.1.4.0.0-2.0.1.vib update
68.2 Select vsm-1 from the tree on the left. Right-click on it and select Add Host from the menu.
68.3 Select hosts ESX1 and ESX2. Next, select the adapters for each host's vSwitch1 (vmnic0 and vmnic1). Don't select vmnics that are used by vSwitch0 (the default virtual switch provided by the ESXi server).
68.4 Select SYSTEM-UPLINK as the DVUplink port group for all of the vmnics you are adding.
68.5 Click Next to continue.
68.6 For Network Connectivity, do NOT migrate any adapters. Click Next to continue.
68.7 For Virtual Machine Networking, do NOT migrate any virtual machines now. Click Next to continue.
68.8 Click Finish to apply the changes.
Step 69 Verify that the Virtual Ethernet Module(s) are seen by VSM.
vsm-1(config)# show module
Mod  Ports  Module-Type                      Model       Status
---  -----  -------------------------------  ----------  --------
1    0      Virtual Supervisor Module        Nexus1000V  active *
3    248    Virtual Ethernet Module          NA          ok
4    248    Virtual Ethernet Module          NA          ok

Mod  Sw            Hw
---  ------------  ------------------------------------------------
1    4.2(1)SV1(4)  0.0
3    4.2(1)SV1(4)  VMware ESXi 4.1.0 Releasebuild-260247 (2.0)
4    4.2(1)SV1(4)  VMware ESXi 4.1.0 Releasebuild-260247 (2.0)

Mod  Serial-Num  Server-Name
---  ----------  -----------
1    NA          NA
3    NA          esx1
4    NA          esx2

Mod  Server-IP        Server-UUID
---  ---------------  ------------------------------------
1    10.1.111.17      NA
3    10.1.111.21      6da2f331-dfd4-11de-b82d-c47d4f7ca766
4    10.1.111.22      67ae4b62-debb-11de-b88b-c47d4f7ca604
* this terminal session
69.1 Verify the uplink trunks and their allowed VLANs.
vsm-1(config)# sh int trunk -------------------------------------------------------------------------------Port Native Status Port Vlan Channel -------------------------------------------------------------------------------Eth3/1 1 trnk-bndl Po1 Eth3/2 1 trnk-bndl Po1 Eth4/5 1 trnk-bndl Po2 Eth4/6 1 trnk-bndl Po2 Po1 1 trunking -Po2 1 trunking --------------------------------------------------------------------------------Port Vlans Allowed on Trunk -------------------------------------------------------------------------------Eth3/1 111,131,151,171,211 Eth3/2 111,131,151,171,211 Eth4/5 111,131,151,171,211 Eth4/6 111,131,151,171,211 Po1 111,131,151,171,211 Po2 111,131,151,171,211 <snip> -------------------------------------------------------------------------------Port STP Forwarding -------------------------------------------------------------------------------Eth3/1 none Eth3/2 none Eth4/5 none Eth4/6 none Po1 111,131,151,171,211 Po2 111,131,151,171,211
Step 70 Migrate the ESXi hosts' existing management vmkernel interface on vSwitch0 to the Nexus 1000V.
70.1 From the browser bar, select Hosts and Clusters.
70.2 Select ESX1 (10.1.111.21), select the Configuration tab, select Networking under Hardware, select the Virtual Distributed Switch tab, and click on the Manage Virtual Adapters link.
70.3 Click the Add link, select Migrate existing virtual adapters, then click Next.
70.4 Select MGMT-VLAN for any adapter on the Management Network source port group.
70.5 Select NFS-VLAN for any adapter on the NFS source port group.
70.6 Select VMOTION for any adapter on the VMotion source port group.
70.7 Click Next to continue. In the figure below, the current switch should say vSwitch1.
70.8 Click Finish.
70.9 Verify that all the vmkernel ports for ESX1 have migrated to the Nexus 1000V distributed virtual switch.
Step 71 Repeat Step 70 to move ESX2 to the Nexus 1000V distributed virtual switch. Step 72 Verify that jumbo frames are enabled correctly for your vmkernel interfaces. 72.1 From VSM run show interface port-channel to verify that the MTU size is 9000.
vsm-1# show interface port-channel 1-2 | grep next 2 port-c port-channel1 is up Hardware: Port-Channel, address: 0050.5652.0e5a (bia 0050.5652.0e5a) MTU 9000 bytes, BW 20000000 Kbit, DLY 10 usec, -port-channel2 is up Hardware: Port-Channel, address: 0050.5652.0d52 (bia 0050.5652.0d52) MTU 9000 bytes, BW 20000000 Kbit, DLY 10 usec,
72.2 From both ESXi servers, verify that the environment is configured for jumbo frames end-to-end. We are going to use the -d option to prevent fragmenting the packet.
~ # vmkping -d -s 8000 -I vmk0 10.1.111.151
PING 10.1.111.151 (10.1.111.151): 8000 data bytes
8008 bytes from 10.1.111.151: icmp_seq=0 ttl=255 time=0.552 ms
8008 bytes from 10.1.111.151: icmp_seq=1 ttl=255 time=0.553 ms
8008 bytes from 10.1.111.151: icmp_seq=2 ttl=255 time=0.544 ms

--- 10.1.111.151 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.544/0.550/0.553 ms
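The -s 8000 value works because, with the don't-fragment option set, the ICMP payload plus the 8-byte ICMP header and the 20-byte IPv4 header must fit inside a single MTU-sized packet. The arithmetic, as a small self-checking sketch (the helper name is ours, not a lab command):

```python
# Why `vmkping -d -s 8000` validates jumbo frames: the unfragmented
# packet is payload + ICMP header + IPv4 header, and must fit the MTU.
IP_HEADER = 20    # bytes, IPv4 without options
ICMP_HEADER = 8   # bytes, echo request/reply header

def max_ping_payload(mtu):
    """Largest -s value that fits in one unfragmented packet."""
    return mtu - IP_HEADER - ICMP_HEADER

assert max_ping_payload(1500) == 1472   # standard frames: 8000 would fail
assert max_ping_payload(9000) == 8972   # jumbo frames: 8000 fits easily
assert 8000 <= max_ping_payload(9000)
```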
Note: In our environment, since the NetApp is plugged into our 3750 management switch, we had to also enable that switch for jumbo frames using the command system mtu jumbo 9000.
73.2 Select the VMTRAFFIC port-profile from the drop-down list and select OK.
73.3 Verify that your VM's virtual interface is showing up in the VSM.
vsm-1(config)# show interface virtual vm ------------------------------------------------------------------------------Port Adapter Owner Mod Host ------------------------------------------------------------------------------Veth7 Net Adapter 1 Server-2003R2 3 10.1.111.21
73.4 From the VM, verify connectivity by pinging the default gateway.

C:\Documents and Settings\Administrator>ping 10.1.131.254

Pinging 10.1.131.254 with 32 bytes of data:
Reply from 10.1.131.254: bytes=32 time<1ms TTL=128
Reply from 10.1.131.254: bytes=32 time<1ms TTL=128
Reply from 10.1.131.254: bytes=32 time<1ms TTL=128
Step 74 Repeat the above steps for any remaining VMs you have except for your VSM. Be sure to select the appropriate port profile.
Nexus 7000
The Cisco Nexus 7000 Series is a modular, data-center-class series of switching systems designed for highly scalable end-to-end 10 Gigabit Ethernet networks. The Cisco Nexus 7000 Series is purpose-built for the data center, with many unique features and capabilities designed specifically for its mission-critical place in the network.
Cisco NX-OS
Cisco NX-OS is a state-of-the-art operating system that powers the Cisco Nexus 7000 platform. Cisco NX-OS is built with modularity, resiliency, and serviceability at its foundation. Drawing on its Cisco IOS and Cisco SAN-OS heritage, Cisco NX-OS helps ensure continuous availability and sets the standard for mission-critical data center environments.
EXERCISE OBJECTIVES
This hands-on lab will introduce participants to the OTV (Overlay Transport Virtualization) solution for the Nexus 7000. This innovative feature set simplifies Data Center Interconnect designs, allowing Data Center communication and transparent Layer 2 extension between geographically distributed Data Centers. OTV accomplishes this without the overhead introduced by MPLS or VPLS. By the end of the laboratory session, the participant should be able to understand OTV functionality and configuration with the Nexus 7000. Students will go through the following steps:
1. System Verification.
2. Base Configuration.
3. OSPF Configuration.
4. OTV Configuration and Verification.
5. VMotion across Data Centers.
Each lab POD has a pair of Nexus 7000s that are used as edge devices attached to a Layer 3 core cloud. The core (which you don't configure) consists of a pair of Nexus 7000s that are used to model a simple L3 WAN core network. A pair of Nexus 5000s with an attached ESX server represents the access layer. The equipment we are using is the Nexus 7000 10-slot chassis with dual supervisors, one 48-port GE copper card (model N7K-M148GT-12), and one 32-port 10GE fiber card (model N7K-M132XP-12) each. We will convert our single Data Center site environment into two geographically distributed Data Center sites. Each site will have one ESXi 4.1 server that is part of the same VMware host cluster. The sites are connected via Nexus 7000 edge devices (virtual device contexts) to a Nexus 7000 IP core (virtual device contexts). We will configure the Nexus 7000s at Sites A and B. The goal of the lab is to establish L2 connectivity between the two sites and then perform a VMotion over a generic IP core leveraging the Nexus 7000 OTV technology.
We leverage the Virtual Device Context (VDC) feature to consolidate multiple nodes and reduce the amount of equipment required. The eight Nexus 7000s (N7K) below are actually two physical boxes.
Figure 7 - Full Topology for Three Pods in a VDC Deployment
Table 18 - IP Addresses for Uplinks and Loopbacks

Pod  Device  Interface  IP Address
1    N7K-1   Eth 1/10   10.1.11.3/24
1    N7K-2   Eth 1/12   10.1.14.4/24
1    N7K-1   Lo0        10.1.0.11/32
1    N7K-2   Lo0        10.1.0.12/32
2    N7K-1   Eth 1/18   10.1.21.5/24
2    N7K-2   Eth 1/20   10.1.24.6/24
2    N7K-1   Lo0        10.1.0.21/32
2    N7K-2   Lo0        10.1.0.22/32
3    N7K-1   Eth 1/26   10.1.31.7/24
3    N7K-2   Eth 1/28   10.1.34.8/24
3    N7K-1   Lo0        10.1.0.31/32
3    N7K-2   Lo0        10.1.0.32/32
Table 19 - Access Ports

Pod  N7K Device  Access Port  N5K Device  Access Port
1    N7K-1       e1/14        N5K-1       e1/19
1    N7K-2       e1/16        N5K-2       e1/20
2    N7K-1       e1/22        N5K-1       e1/19
2    N7K-2       e1/24        N5K-2       e1/20
3    N7K-1       e1/30        N5K-1       e1/19
3    N7K-2       e1/32        N5K-2       e1/20
Pod  N7K-1 Access Port  N5K-1 Access Port
1    e1/14              e1/19
2    e1/22              e1/19
3    e1/30              e1/19
Note:
If you did not do Sections 3-5, then you can load the configurations from the tftp server. See Appendix A: Copying Switch Configurations From a tftp Server for instructions. However, you must do Sections 6 and 7 to prepare the servers and virtual machines.
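For reference, restoring a saved configuration from the tftp server looks roughly like the following — the server address and file name shown are placeholders; use the values given in Appendix A:

```
! Hypothetical values - substitute your pod's tftp server IP and file name
copy tftp://10.1.111.100/N7K-1-OTV.cfg running-config vrf MGMT
copy running-config startup-config
```

Note the vrf keyword: in this lab, reachability to the tftp server is via the MGMT VRF described later in this guide.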
Cisco Nexus 7000 Series Switches Configuration Guides http://www.cisco.com/en/US/products/ps9402/products_installation_and_configuration_guides_list.html Cisco Nexus 7000 Series OTV Quick Start Guide
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/OTV/b_Cisco_Nexus_7000_Series_OTV_Quick_Start_Guide.html
OTV between 2 DCs connected with Dark Fiber (sent to corporate editing): "The scope of this document is to provide guidance on configuring and designing a network with Overlay Transport Virtualization (OTV) to extend Layer 2 between two Data Centers connected via dark fiber links. This is a very common DCI deployment model, and this paper will be very helpful in guiding the AS team, partners, and customers in deploying OTV." http://bock-bock.cisco.com/wiki_file/N7K:tech_resources:otv/OTV_over_DarkFiber-AS_team.docx Note: If you do not have access to the above document, please contact your local Cisco SE.
Command                                   Description
show module                               Display module information (N7K)
show running-config all                   Show the running configuration, including default values
show vrf                                  Show the VRFs on your system
show vrf interface                        Show all the interfaces belonging to any VRF context
show vrf management interface             Show the interfaces that belong to the management VRF
show version                              Display information about the software version (N7K)
interface <interface>                     Enter interface mode
vrf member <vrf>                          Add an interface to a VRF
show interface mgmt0                      Show interface information for mgmt0
ping <host> vrf <vrf>                     Ping a host via a specified VRF context
show running-config | grep next 3 mgmt0   Display every match of mgmt0 along with the next 3 lines
where                                     Display the CLI context that you are in

Devices: N7K-1, N7K-2, N5K-1, N5K-2
Basic Configuration
vlan 20,23,1005
 no shut
sh vlan br
spanning-tree vlan 20,23,1005 priority 4096
spanning-tree vlan 20,23,1005 priority 8192
int e1/<5k-7k link>
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 20,23,1005
 no shutdown
OSPF Configuration
feature ospf
router ospf 1
 log-adjacency-changes
interface loopback0
 ip address 10.1.0.y/32
 ip router ospf 1 area 0.0.0.0
interface e1/<uplink_port>
 mtu 9042
 ip address 10.1.y.z/24
 ip ospf network point-to-point
 ip router ospf 1 area 0.0.0.0
 ip igmp version 3
 no shutdown
show running-config ospf
show ip ospf neighbors
show ip ospf int brief
show ip route ospf-1

Step 1 Let's configure Layer 3 and OSPF routing on N7K-1 and N7K-2. Refer to Table 18 for loopback info.
Step 2 Let's now configure the join interface towards the Core Layer on N7K-1 and N7K-2. Refer to Table 18 for uplink info.
In this section we will:
Check our OSPF configuration.
Check if we were able to establish adjacency.
Verify if we exchanged routes.
Enable the OTV feature.
Specify the OTV site VLAN, which is VLAN 1005.
Configure the OTV overlay interface (replace X with your pod number).
Join the OTV site to the core.
Extend a VLAN across the overlay.
Check the OTV configuration.
Display local OTV status.
Check the status of the VLANs extended across the overlay.
See how many OTV edge devices are present at the local site.
Display the status of adjacent sites.
Display the OTV ARP/ND L3->L2 address mapping cache.
Display the MAC addresses of devices learned on the VLAN.
N7K-1-OTV-1A# show module
Mod  Ports  Module-Type                        Status
---  -----  ---------------------------------  ------
1    32     10 Gbps Ethernet Module            ok
3    48     10/100/1000 Mbps Ethernet Module   ok
5    0      Supervisor module-1X               ok
6    0      Supervisor module-1X               ok

Mod  Sw      Hw   Online Diag Status
---  ------  ---  ------------------
1    5.1(2)  2.0  Pass
3    5.1(2)  1.6  Pass
5    5.1(2)  1.8  Pass
6    5.1(2)  1.8  Pass

Mod  MAC-Address(es)
---  --------------------------------------
1    1c-df-0f-d2-05-20 to 1c-df-0f-d2-05-44
3    1c-df-0f-4a-06-04 to 1c-df-0f-4a-06-38
5    b4-14-89-e3-f6-20 to b4-14-89-e3-f6-28
6    b4-14-89-df-fe-50 to b4-14-89-df-fe-58

Ports  Module-Type
-----  ---------------
0      Fabric Module 1
0      Fabric Module 1
0      Fabric Module 1
<snip>
75.3 Next, we will check the currently running software version. Our lab currently runs NX-OS 5.1(2).
N7K-1-OTV-1A# show version
Cisco Nexus Operating System (NX-OS) Software
<snip>
Software
  BIOS:      version 3.22.0
  kickstart: version 5.1(2)
  system:    version 5.1(2)
  BIOS compile time:       02/20/10
  kickstart image file is: bootflash:///n7000-s1-kickstart.5.1.2.bin
  kickstart compile time:  12/25/2020 12:00:00 [12/18/2010 01:55:20]
  system image file is:    bootflash:///n7000-s1-dk9.5.1.2.bin
  system compile time:     11/29/2010 12:00:00 [12/18/2010 03:02:00]

Hardware
  cisco Nexus7000 C7010 (10 Slot) Chassis ("Supervisor module-1X")
  Intel(R) Xeon(R) CPU with 4115776 kB of memory.
  Processor Board ID JAF1444BLHB
  Device name: N7K-1-OTV-1A
  bootflash:  2029608 kB
  slot0:      2074214 kB (expansion flash)

Kernel uptime is 9 day(s), 15 hour(s), 50 minute(s), 32 second(s)

Last reset
  Reason: Unknown
  System version: 5.1(2)
  Service:

plugin
  Core Plugin, Ethernet Plugin
N7K-1-OTV-1A#
Note:
Cisco Overlay Transport Virtualization (OTV) requires NX-OS version 5.0(3) or higher.
NX-OS is composed of two images: 1. a kickstart image that contains the Linux kernel, and 2. a system image that contains the NX-OS software components. Both appear in the configuration. In future releases, other plug-ins will be added, such as the Storage plug-in for FCoE.
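The two images are tied together by the boot variables in the configuration. For our 5.1(2) lab images, that would look roughly like this (sup-1 shown; a dual-supervisor system repeats the commands for sup-2):

```
boot kickstart bootflash:/n7000-s1-kickstart.5.1.2.bin sup-1
boot system bootflash:/n7000-s1-dk9.5.1.2.bin sup-1
```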
75.4 Display the running configuration.

N7K-1-OTV-1A# show running-config
version 5.1(2)
<omitted config>
vrf context management
vlan 1
<omitted interface config>
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
<omitted interface config>
interface mgmt0
  ip address 10.1.111.111/24
These are the interfaces available to your Pod (Virtual Device Context)
75.5 This is the configuration for Pod 1. As explained earlier, the Nexus 7000s in each Pod run within a Virtual Device Context (VDC). By using the VDC feature, we can segment the physical Nexus 7000 into multiple logical switches, each of which runs in a separate memory space and has visibility only into the hardware resources that it owns, providing total isolation between the VDCs. One of the features of show running-config in NX-OS is the ability to not only look at the running-config but to also reveal the default values, which do not appear in the base config. The keyword to use is all.
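For context, carving a VDC out of the default VDC and handing it interfaces looks roughly like the following. The VDC name and interface range here are illustrative, not this lab's actual allocation, and interface allocation is subject to the module's port-group rules:

```
! Illustrative only - VDC name and interface range are placeholders
vdc Pod1-OTV
  allocate interface Ethernet1/9-16
exit
! switchto is an exec command, run after exiting config mode
switchto vdc Pod1-OTV
```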
75.6 Display the mgmt0 configuration, including default values.

N7K-1-OTV-1A# show running-config all | section mgmt0
interface mgmt0
  no description
  speed auto
  duplex auto
  no shutdown
  cdp enable
  ip address 10.1.111.111/24
Step 76 Verify VRF characteristics and behavior. Duration: 15 minutes

The Management VRF provides total isolation of management traffic from the rest of the traffic flowing through the box. In this task we will:
Verify that only the mgmt0 interface is part of the management VRF
Verify that no other interface can be part of the management VRF
Verify that the default gateway is reachable only using the management VRF

76.1 Verify that only the mgmt0 interface is part of the management VRF.
N7K-1-OTV-1A# show vrf
VRF-Name     VRF-ID  State  Reason
default      1       Up     --
management   2       Up     --

N7K-1-OTV-1A# show vrf interface
Interface     VRF-Name    VRF-ID
Ethernet1/9   default     1
Ethernet1/10  default     1
Ethernet1/11  default     1
Ethernet1/12  default     1
<omitted output>
Ethernet3/24  default     1
mgmt0         management  2

N7K-1-OTV-1A# show vrf management interface
Interface  VRF-Name    VRF-ID
mgmt0      management  2
Note:
The management VRF is part of the default configuration, and the management interface mgmt0 is the only interface that can be made a member of this VRF. Let's verify it.
76.2 Verify that no other interface can be part of the management VRF.

Note: The following example is for Pod 1. Please use e1/17 for Pod 2 or e1/25 for Pod 3.
FastEthernet? GigabitEthernet? No — in NX-OS, there are just Ethernet interfaces.
N7K-1-OTV-1A# conf t N7K-1-OTV-1A(config)# interface ethernet1/9 N7K-1-OTV-1A(config-if)# vrf member management % VRF management is reserved only for mgmt0
N7K-1-OTV-1A(config-if)# show int mgmt0 mgmt0 is up Hardware: GigabitEthernet, address: 0022.5577.f8f8 (bia 0022.5577.f8f8) Internet Address is 10.1.111.17/16 MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA full-duplex, 1000 Mb/s Auto-Negotiation is turned on EtherType is 0x0000 1 minute input rate 88 bits/sec, 0 packets/sec 1 minute output rate 24 bits/sec, 0 packets/sec Rx 9632 input packets 106 unicast packets 5999 multicast packets 3527 broadcast packets 1276448 bytes <snip>
76.3 Verify that the default gateway is not reachable when using the default VRF. Try reaching the out-of-band management network's default gateway with a ping.
N7K-1-OTV-1A(config-if)# ping 10.1.111.254
PING 10.1.111.254 (10.1.111.254): 56 data bytes
ping: sendto 10.1.111.254 64 chars, No route to host
Request 0 timed out
ping: sendto 10.1.111.254 64 chars, No route to host
Request 1 timed out
ping: sendto 10.1.111.254 64 chars, No route to host
Request 2 timed out
ping: sendto 10.1.111.254 64 chars, No route to host
Request 3 timed out
ping: sendto 10.1.111.254 64 chars, No route to host
Request 4 timed out

--- 10.1.111.254 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
N7K-1-OTV-1A(config-if)#
The ping fails because we are trying to reach a system on the out-of-band management network without specifying the correct VRF. Verify that the default gateway is reachable using the management VRF. Try reaching the MGMT VRF's default gateway with a ping.

Lab Hack! In our lab environment, we could not use the mgmt0 interface or the management VRF. Instead, we used the last gigabit port in each pod as the management interface and placed it into a new VRF called MGMT. To ping other devices in the network from the Nexus 7000s, you will need to specify this VRF context.
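A rough sketch of that workaround on N7K-1 follows — the interface and address here are placeholders for whichever last gigabit port and management IP your pod actually uses:

```
! Illustrative only - interface and address are placeholders
vrf context MGMT
interface Ethernet3/48
  vrf member MGMT
  ip address 10.1.111.3/24
  no shutdown
```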
N7K-1-OTV-1A# ping 10.1.111.254 vrf MGMT
PING 10.1.111.254 (10.1.111.254): 56 data bytes
64 bytes from 10.1.111.254: icmp_seq=0 ttl=63 time=1.005 ms
64 bytes from 10.1.111.254: icmp_seq=1 ttl=63 time=0.593 ms
64 bytes from 10.1.111.254: icmp_seq=2 ttl=63 time=0.585 ms
64 bytes from 10.1.111.254: icmp_seq=3 ttl=63 time=0.594 ms
64 bytes from 10.1.111.254: icmp_seq=4 ttl=63 time=0.596 ms

--- 10.1.111.254 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.585/0.674/1.005 ms

Note the Linux-like output: many NX-OS tools behave like their Linux counterparts.
Step 77 Explore NX-OS CLI capabilities. Duration: 15 minutes

In this step we will:
Verify the CLI hierarchy independence by issuing a ping from different CLI contexts
Verify the CLI piping functionality

77.1 Verify the CLI hierarchy independence by issuing a ping from different CLI contexts.
N7K-1-OTV-1A# conf t N7K-1-OTV-1A(config)#ping ? *** No matches in current mode, matching in (exec) mode *** <CR> A.B.C.D or Hostname IP address of remote system WORD Enter Hostname multicast Multicast ping N7K-1-OTV-1A(config)#ping 10.1.111.254 vrf management PING 10.1.111.254 (10.1.111.254): 56 data bytes 64 bytes from 10.1.111.254: icmp_seq=0 ttl=63 time=4.257 ms 64 bytes from 10.1.111.254: icmp_seq=1 ttl=63 time=0.714 ms <snip> --- 10.1.111.254 ping statistics --5 packets transmitted, 5 packets received, 0.00% packet loss round-trip min/avg/max = 0.562/1.336/4.257 ms N7K-1-OTV-1A(config)#int e1/9 N7K-1-OTV-1A(config-if)# ping ? *** No matches in current mode, matching in (exec) mode *** <CR> A.B.C.D or Hostname IP address of remote system WORD Enter Hostname multicast Multicast ping
77.2 Repeat the ping from interface configuration mode.
N7K-1-OTV-1A(config-if)#ping 10.1.111.254 vrf management PING 10.1.111.254 (10.1.111.254): 56 data bytes 64 bytes from 10.1.111.254: icmp_seq=0 ttl=63 time=3.768 ms 64 bytes from 10.1.111.254: icmp_seq=1 ttl=63 time=0.713 ms <snip> --- 10.1.111.254 ping statistics --5 packets transmitted, 5 packets received, 0.00% packet loss round-trip min/avg/max = 0.586/1.251/3.768 ms
77.3 You can use the up-arrow to recall the command history from exec mode. Any exec command can be issued from anywhere within the configuration hierarchy.
77.4 Verify the CLI piping functionality. Multiple piping options are available, many of them derived from the Linux world.
N7K-1-OTV-1A(config-if)# show running-config | ?
  cut      Print selected parts of lines
  diff     Show difference between current and previous invocation (creates temp files; remove them with the 'diff-clean' command, and don't use it on commands with big outputs, like 'show tech'!)
  egrep    Egrep - print lines matching a pattern
  grep     Grep - print lines matching a pattern
  head     Display first lines
  human    Output in human format (if permanently set to xml, else it will turn on xml for next command)
  last     Display last lines
  less     Filter for paging
  no-more  Turn-off pagination for command output
  perl     Use perl script to filter output
  section  Show lines that include the pattern as well as the subsequent lines that are more indented than matching line
  sed      Stream Editor
  sort     Stream Sorter
  sscp     Stream SCP (secure copy)
  tr       Translate, squeeze, and/or delete characters
  uniq     Discard all but one of successive identical lines
  vsh      The shell that understands cli command
  wc       Count words, lines, characters
  xml      Output in xml format (according to .xsd definitions)
  begin    Begin with the line that matches
  count    Count number of lines
  end      End with the line that matches
  exclude  Exclude lines that match
  include  Include lines that match
77.5 Explore the grep options.
N7K-1-OTV-1A(config-if)# sh running-config | grep ?
  WORD          Search for the expression
  count         Print a total count of matching lines only
  ignore-case   Ignore case difference when comparing strings
  invert-match  Print only lines that contain no matches for <expr>
  line-exp      Print only lines where the match is a whole line
  line-number   Print each match preceded by its line number
  next          Print <num> lines of context after every matching line
  prev          Print <num> lines of context before every matching line
  word-exp      Print only lines where the match is a complete word
77.6 Display any line that contains mgmt0 and print the next 3 lines after each match.
N7K-1-OTV-1A(config-if)# sh running-config | grep next 3 mgmt0
interface mgmt0
  no snmp trap link-status
  ip address 10.1.111.17/16
77.7 The [TAB] key completes a CLI command and shows the available keywords.

N7K-1-OTV-1A(config-if)# int mgmt 0
N7K-1-OTV-1A(config-if)# [TAB]
  cdp          exit         no           shutdown
  description  ip           pop          snmp
  end          ipv6         push         vrf
  where
77.8 If you want to know which CLI context you are in, use the where command.
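As a sketch, from interface configuration mode the output looks like this (the exact format may differ slightly by release):

```
N7K-1-OTV-1A(config-if)# where
  conf; interface mgmt0        admin@N7K-1-OTV-1A
```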
78.3 - 78.5 On N5K-1, create the VLANs and configure the trunk towards the Nexus 7000:

vlan 131,151,171,211,1005
 no shut
int e1/19
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 131,151,171,211,1005
 no shutdown
Cisco Nexus 5010 B - N5K-2

78.6 Log in to the Nexus 5000s with the following credentials:
  Username: admin
  Password: 1234Qwer

78.7 Turn off vPC.
no feature vpc
78.8 - 78.9 Remove ESX 1 & 3 from Site B. We are also shutting down the connection to the 3750 on the B side.

78.10 Create the VLANs and configure the trunk towards the Nexus 7000:
vlan 131,151,171,211,1005
 no shut
int et 1/20
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 131,151,171,211,1005
 no shutdown
Summary of Commands
You have three options at this point. Option 3 is under maintenance, so do NOT use it.
1) Go to the next step (Spanning Tree) to manually configure OTV.
2) Copy and paste the commands from the Command Summary for OTV on page 212.
3) Restore an OTV config and go to Section 9.8. Perform the following commands on both Nexus 7000s to load the OTV config. SSH into N7K-1 (10.1.111.3) and N7K-2 (10.1.111.4).
rollback running-config checkpoint OTV
copy run start
reload vdc
N7K-1
N7K-1-OTV-1A# conf t
Enter configuration commands, one per line. End with CNTL/Z.
79.2 Verify the VLANs.

sh vlan br

VLAN  Name      Status  Ports
----  --------  ------  -----
1     default   active
20    VLAN0020  active
23    VLAN0023  active
160   VLAN0160  active
1005  VLAN1005  active
Repeat Step 79.2 for N7K-2-OTV. Best practices dictate deterministic placement of the spanning tree root in the network. In particular, a network administrator should ensure that the root switch does not inadvertently end up on a small switch in the access layer, creating a sub-optimal topology that is more prone to failures.
N7K-1
N7K-1-OTV-1A(config-vlan)#spanning-tree vlan 131,151,171,211,1005 priority 4096
N7K-2
N7K-2-OTV-1B(config-vlan)#spanning-tree vlan 131,151,171,211,1005 priority 8192
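After setting the priorities, you can confirm where the root actually landed. These are standard NX-OS show commands; outputs are omitted here since they vary per pod:

```
! Confirms root bridge ID, priority, and root port per VLAN
show spanning-tree root
show spanning-tree summary
```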
Step 80 Now let's bring up the interfaces facing N5K-1 and N5K-2 in the Access Layer.

N7K-1

80.1 Enable switching for the interface connecting to N5K-1. Refer to Table 19 and Figure 7 for your specific interfaces. (e.g. Pod 1: e1/14, Pod 2: e1/22, Pod 3: e1/30)
int e1/14
 switchport
 switchport mode trunk
 mtu 9216
80.2
switchport trunk allowed vlan 131,151,171,211,1005
This will cause VLANS to be overwritten. Continue anyway? [yes] y
no shutdown
N7K-2

80.3 Enable switching for the interface connecting to N5K-2. Refer to Table 19 and Figure 7 for your specific interfaces. (e.g. Pod 1: e1/16, Pod 2: e1/24, Pod 3: e1/32)
int e1/16
 switchport
 switchport mode trunk
 mtu 9216
80.4
switchport trunk allowed vlan 131,151,171,211,1005
This will cause VLANS to be overwritten. Continue anyway? [yes] y
no shutdown
Summary of Commands
N7K-1
vlan 131,151,171,211,1005
 no shut
spanning-tree vlan 131,151,171,211,1005 priority 4096
int e1/14
 switchport
 switchport mode trunk
 no shutdown
 switchport trunk allowed vlan 131,151,171,211,1005
N7K-2
vlan 131,151,171,211,1005
 no shut
spanning-tree vlan 131,151,171,211,1005 priority 8192
int e1/16
 switchport
 switchport mode trunk
 no shutdown
 switchport trunk allowed vlan 131,151,171,211,1005
Step 81 Check the spanning-tree from both the Nexus 7000 and the Nexus 5000. N7K-1
N7K-1-OTV-1A# show spanning-tree vlan 1005

VLAN1005
  Spanning tree enabled protocol rstp
  Root ID    Priority    5101
             Address     0026.980d.6d42
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    5101   (priority 4096 sys-id-ext 1005)
             Address     0026.980d.6d42
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----
Eth1/14          Desg FWD 2         128.142  P2p
N7K-2
N7K-1-OTV-1A# show spanning-tree vlan 20

VLAN0020
  Spanning tree enabled protocol rstp
  Root ID    Priority    4116
             Address     0026.980d.6d42
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    4116   (priority 4096 sys-id-ext 20)
             Address     0026.980d.6d42
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----
Eth1/14          Desg FWD 2         128.142  P2p
On the Nexus 5000, the uplink port to the N7K OTV device is the Root port; the N7K is the Root Bridge.

N5K-1# show spanning-tree vlan 20
<snip>
  Bridge ID  Priority    24596  (priority 24576 sys-id-ext 20)
             Address     0005.9b7a.03bc
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
<snip>
Step 82 Verify that you have the correct licenses. OTV requires the LAN Advanced Services license and the Transport Services license. N7K-1 and N7K-2
N7K-1-OTV-1A# show license usage
Feature                      Ins  Lic Count  Status  Expiry Date  Comments
--------------------------------------------------------------------------
ENHANCED_LAYER2_PKG          No   -          Unused
SCALABLE_SERVICES_PKG        No   -          Unused
TRANSPORT_SERVICES_PKG       Yes  -          In use  Never        -
LAN_ADVANCED_SERVICES_PKG    Yes  -          Unused  Never        -
LAN_ENTERPRISE_SERVICES_PKG  Yes  -          In use  Never        -
Note:
Be sure to confirm the status of your customer's licenses and remind them to purchase the license before the feature grace period expires. Temporary licenses are indicated by the word "Grace" in the Comments field, which reflects the grace period in days and hours left on the temporary license. In the example below, there are 105 days and 15 hours left:

TRANSPORT_SERVICES_PKG       No   -          Unused          Grace 105D 15H
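For reference, installing a purchased license file would look roughly like this; the server address and file name are placeholders:

```
! Hypothetical values - substitute the actual license file
copy tftp://10.1.111.100/n7k_lan_transport.lic bootflash: vrf MGMT
install license bootflash:n7k_lan_transport.lic
```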
83.1 Bring up the uplink interface towards the core. Refer to Table 18 for your uplink port.

N7K-2

N7K-2-OTV-1B(config)# int e 1/<uplink>
N7K-2-OTV-1B(config-if-range)# no shut
83.2
Summary of Commands
int e 1/<uplink>
 no shut
Step 84 Enable OSPF

N7K-1

84.1 Enable the OSPF feature and configure the OSPF instance.
N7K-1-OTV-1A(config)# feature ospf
N7K-1-OTV-1A(config)# router ospf 1
N7K-1-OTV-1A(config-router)# log-adjacency-changes
NX-OS is a fully modular operating system. Most software modules don't run unless the corresponding feature is enabled. We refer to these features that need to be specifically enabled as conditional services. Once the service is enabled, the CLI becomes visible and the feature can be used and configured.

84.2 Configure the loopback interface for OSPF. Refer to Table 18 - IP Addresses for Uplinks and Loopbacks and Figure 7 for your specific interfaces.
N7K-1-OTV-1A(config)# interface loopback0
N7K-1-OTV-1A(config-if)# ip address 10.1.0.X1/32
N7K-1-OTV-1A(config-if)# ip router ospf 1 area 0.0.0.0
84.3 Configure each OTV Edge's uplink interface that connects to the Nexus WAN (Core Layer). Refer to Table 18 - IP Addresses for Uplinks and Loopbacks and Figure 7 for your specific interfaces. (e.g. Pod 1: e1/10, Pod 2: e1/18, Pod 3: e1/26)
N7K-1-OTV-1A(config)# interface e1/<uplink_port>
84.4 Configure the MTU and the IP address. Refer to Table 18 - IP Addresses for Uplinks and Loopbacks and Figure 7 for your specific addresses. (e.g. Pod 1: 10.1.11.3, Pod 2: 10.1.21.5, Pod 3: 10.1.31.7)

N7K-1-OTV-1A(config-if)# mtu 9042
N7K-1-OTV-1A(config-if)# ip address 10.1.X1.Y/24

We increased the MTU on the Layer 3 links to 9042 bytes. OTV encapsulates the original frame, adding 42 bytes to the IP packet, so you will need to increase the MTU on all your WAN links. Since the MTU on the core has already been adjusted to 9042, OSPF will remain in EXSTART state until your MTU matches the core MTU.

84.5 Specify the OSPF interface network type and OSPF area.
N7K-1-OTV-1A(config-if)# ip ospf network point-to-point N7K-1-OTV-1A(config-if)# ip router ospf 1 area 0.0.0.0
84.6 Enable IGMP version 3 on the interface.

N7K-1-OTV-1A(config-if)# ip igmp version 3

The edge device's interface towards the IP core will later be used by OTV as a join interface. Therefore, it needs to be configured for IGMP version 3.

84.7 Bring up the interface.

N7K-1-OTV-1A(config-if)# no shutdown
N7K-2 For the following steps, refer to Table 18 - IP Addresses for Uplinks and Loopbacks and Figure 7 for your specific interfaces. 84.8 Enable OSPF feature and configure OSPF instance.
N7K-2-OTV-1B(config)# feature ospf
N7K-2-OTV-1B(config)# router ospf 1
N7K-2-OTV-1B(config-router)# log-adjacency-changes
84.9 Configure the loopback interface for OSPF.
N7K-2-OTV-1B(config)# interface loopback0
N7K-2-OTV-1B(config-if)# ip address 10.1.0.X2/32
N7K-2-OTV-1B(config-if)# ip router ospf 1 area 0.0.0.0
84.10 Configure each OTV Edge's uplink interface that connects to the Nexus WAN (Core Layer).
N7K-2-OTV-1B(config)# interface e1/<uplink>
N7K-2-OTV-1B(config-if)# mtu 9042
N7K-2-OTV-1B(config-if)# ip address 10.1.X4.Y/24
N7K-2-OTV-1B(config-if)# ip ospf network point-to-point
N7K-2-OTV-1B(config-if)# ip router ospf 1 area 0.0.0.0
N7K-2-OTV-1B(config-if)# ip igmp version 3
N7K-2-OTV-1B(config-if)# no shutdown
We increased the MTU on the layer 3 links to 9042 bytes. OTV encapsulates the original frame adding 42 bytes to your IP packet, so you will need to increase the MTU on all your WAN links. Since the MTU on the core has already been adjusted to 9042, you will get an OSPF state of EXSTART until your MTU matches the core MTU.
Summary of Commands
N7K-1
feature ospf
router ospf 1
 log-adjacency-changes
interface loopback0
 ip address 10.1.0.X1/32
 ip router ospf 1 area 0.0.0.0
interface e1/<uplink_port>
 mtu 9042
 ip address 10.1.X1.Y/24
 ip ospf network point-to-point
 ip router ospf 1 area 0.0.0.0
 ip igmp version 3
 no shutdown
N7K-2
feature ospf
router ospf 1
 log-adjacency-changes
interface loopback0
 ip address 10.1.0.X2/32
 ip router ospf 1 area 0.0.0.0
interface e1/<uplink>
 mtu 9042
 ip address 10.1.X4.Y/24
 ip ospf network point-to-point
 ip router ospf 1 area 0.0.0.0
 ip igmp version 3
 no shutdown
Step 85 Verify OSPF configuration

85.1 First, let's check our running OSPF configuration. (example from Pod 1)
N7K-1-OTV-1A# show running-config ospf
<snip>
feature ospf
router ospf 1
  log-adjacency-changes
interface loopback0
  ip router ospf 1 area 0.0.0.0
interface Ethernet1/10
  ip ospf network point-to-point
  ip router ospf 1 area 0.0.0.0

N7K-2-OTV-1B(config-if)# show running-config ospf
<snip>
feature ospf
router ospf 1
  log-adjacency-changes
interface loopback0
  ip router ospf 1 area 0.0.0.0
interface Ethernet1/12
  ip ospf network point-to-point
  ip router ospf 1 area 0.0.0.0
85.2 Check the OSPF-enabled interfaces.

N7K-1

N7K-1-OTV-1A# show ip ospf int brief
OSPF Process ID 1 VRF default
Total number of interface: 2
Interface  ID  Area     Cost
Lo0        1   0.0.0.0  1
Eth1/10    2   0.0.0.0  4
N7K-2
N7K-2-OTV-1B# show ip ospf int bri
OSPF Process ID 1 VRF default
Total number of interface: 2
Interface  ID  Area     Cost
Lo0        1   0.0.0.0  1
Eth1/12    2   0.0.0.0  4
85.3 Check the OSPF neighbors.

N7K-1

N7K-1-OTV-1A# sh ip ospf neighbors
OSPF Process ID 1 VRF default
Total number of neighbors: 1
Neighbor ID  Pri  State    Interface
10.1.0.1     1    FULL/ -  Eth1/10
N7K-2
N7K-2-OTV-1B# show ip ospf neighbors
OSPF Process ID 1 VRF default
Total number of neighbors: 1
Neighbor ID  Pri  State    Up Time  Address    Interface
10.1.0.2     1    FULL/ -  1w1d     10.1.14.2  Eth1/12
85.4 Check the OSPF routes.

N7K-1
N7K-1-OTV-1A(config)# show ip route ospf-1
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]

10.1.0.1/32, ubest/mbest: 1/0
    *via 10.1.11.1, Eth1/10, [110/5], 1w1d, ospf-1, intra
10.1.0.2/32, ubest/mbest: 1/0
    *via 10.1.11.1, Eth1/10, [110/9], 1w1d, ospf-1, intra
10.1.0.12/32, ubest/mbest: 1/0
    *via 10.1.11.1, Eth1/10, [110/13], 1w1d, ospf-1, intra
10.1.7.0/24, ubest/mbest: 1/0
    *via 10.1.11.1, Eth1/10, [110/8], 1w1d, ospf-1, intra
10.1.14.0/24, ubest/mbest: 1/0
    *via 10.1.11.1, Eth1/10, [110/12], 1w1d, ospf-1, intra
Repeat Step 85.4 on N7K-2.

Note: Congratulations, you've successfully configured OSPF. Please continue to the next section.
Figure 8 - OTV Packet Flow

The following terminology is used for OTV throughout this document:

Site: A Layer 2 network that may be single-homed or multi-homed to the core network and the OTV overlay network. Layer 2 connectivity between sites is provided by edge devices that operate in an overlay network. Layer 2 sites are physically separated from each other by the core IP network.

Core Network: The customer backbone network that connects Layer 2 sites over IP. This network can be customer managed, provided by a service provider, or a mix of both. OTV is transparent to the core network because OTV flows are treated as regular IP flows.

Edge Device: A Layer 2 switch that performs OTV functions. An edge device performs typical Layer 2 learning and forwarding on the site-facing interfaces (internal interfaces) and performs IP-based virtualization on the core-facing interfaces. The edge device can be collocated in a device that performs Layer 3 routing on other ports. OTV functionality only occurs in an edge device.

Internal Interface: The Layer 2 interface on the edge device that connects to site-based switches or site-based routers. The internal interface is a Layer 2 interface regardless of whether it connects to a switch or a router.

Join Interface: The interface facing the core network. The name implies that the edge device joins an overlay network through this interface. The IP address of this interface is used to advertise reachability of a MAC address present in this site.
Figure 9 - OTV Terminology (1 of 2)

MAC Routing: MAC routing associates the destination MAC address of the Layer 2 traffic with an edge device IP address. The MAC-to-IP association is advertised to the edge devices through an overlay routing protocol. In MAC routing, MAC addresses are reachable through an IP next hop. Layer 2 traffic destined to a MAC address will be encapsulated in an IP packet based on the MAC-to-IP mapping in the MAC routing table.

Overlay Interface: A logical multi-access, multicast-capable interface. The overlay interface encapsulates Layer 2 frames in IP unicast or multicast headers. The overlay interface is connected to the core via one or more physical interfaces. You assign IP addresses from the core network address space to the physical interfaces that are associated with the overlay interface.

Overlay Network: A logical network that interconnects remote sites for MAC routing of Layer 2 traffic. The overlay network uses either multicast routing in the core network or an overlay server to build an OTV routing information base (ORIB). The ORIB associates destination MAC addresses with remote edge device IP addresses.

Multicast Control-Group: For core networks supporting IP multicast, one multicast address (the control-group address) is used to encapsulate and exchange OTV control-plane protocol updates. Each edge device participating in the particular overlay network shares the same control-group address with all the other edge devices. As soon as the control-group address and the join interface are configured, the edge device sends an IGMP report message to join the control group and thereby participates in the overlay network. The edge devices act as hosts in the multicast network and send multicast IGMP report messages to the assigned multicast group address.

Multicast Data-Group: In order to handle multicast data traffic, one or more ranges of IPv4 multicast group prefixes can be used. The multicast group address is an IPv4 address in dotted decimal notation. A subnet mask is used to indicate ranges of addresses. Up to eight data-group ranges can be defined. An SSM group is used for the multicast data generated by the site.

Authoritative Edge Device: An edge device that forwards Layer 2 frames into and out of a site over the overlay interface. For the first release of OTV, there is only one authoritative edge device for all MAC unicast and multicast addresses per VLAN. Each VLAN can be assigned to a different authoritative edge device.
Figure 10 - OTV Terminology (2 of 2)

In this section you will:
Select the Join interface and establish OSPF connectivity with the Core
Enable OTV
Configure the Overlay interface
Join the Data Center site to the Core
Extend a VLAN across the overlay
Step 86 Configuring Basic OTV Features N7K-1 86.1 Enable the OTV feature.
feature otv
86.2 Specify the OTV site VLAN, which is VLAN 1005.

otv site-vlan 1005

The OTV site VLAN is used to communicate with other OTV edge devices in the local site. If our site had dual edge devices, the site VLAN would be used to elect the active forwarder device in the site. Ensure that the site VLAN is active on at least one of the edge device ports.

86.3 Configure the site identifier. We will use 0x1 for Site A on N7K-1.
otv site-identifier 0x1
OTV uses the site identifier to support dual site adjacency. Dual site adjacency uses both the site VLAN and the site identifier to determine whether there are other edge devices on the local site and whether those edge devices can forward traffic. Ensure that the site identifier is the same on all neighbor edge devices in the site. You must configure the site identifier in Cisco NX-OS release 5.2(1) or later releases. The overlay network will not become operational until you configure the site identifier. The site VLAN and site identifier must be configured before entering the no shutdown command for any overlay interface, and must not be modified while any overlay is up within the site.

86.4 Create an overlay interface.
interface Overlay 1
86.5 Specify the multicast group OTV will use for control plane traffic. Replace X with your POD # (1 for POD 1, 2 for POD 2, and so on).

otv control-group 239.X.1.1
The control-group address is used for control plane related operations. Each edge device joins the group and sends control/protocol related packets to this group. This is used for discovery of other edge-devices. 86.6 Specify the multicast address range OTV will use for multicast data traffic.
otv data-group 239.X.2.0/28
The data-group range specifies a multicast group range that is used for multi-destination traffic.

86.7 Assign a physical join interface to the overlay interface.

otv join-interface Ethernet1/<uplink>

After you enter the join command, an informational message reminds you that IGMPv3 must be configured on the join interface. You can ignore this message if IGMPv3 was already configured as instructed earlier in the guide. This interface is used for overlay operations such as discovering remote edge devices; it provides the source address for OTV-encapsulated packets and the destination address for unicast traffic sent by remote edge devices.

86.8 Specify the VLANs to be extended across the overlay. We will extend VLANs 131, 151, 171, and 211.
otv extend-vlan 131,151,171,211
no shutdown
OTV only forwards Layer 2 packets for VLANs that are in the specified range for the overlay interface.
N7K-2
86.9 Enable the OTV feature.

feature otv

86.10 Specify the OTV Site VLAN, which is VLAN 1005.

otv site-vlan 1005

86.11 Configure the site identifier. We will use 0x2 for Site B on N7K-2.

otv site-identifier 0x2

86.12 Create an overlay interface.

interface Overlay 1

86.13 Specify the multicast group OTV will use for control plane traffic.
Replace X with your POD # (1 for POD 1, 2 for POD 2, and so on).

otv control-group 239.X.1.1

86.14 Specify the multicast address range OTV will use for multicast data traffic.

otv data-group 239.X.2.0/28

86.15 Assign a physical join interface to the overlay interface.

otv join-interface Ethernet1/<uplink>

86.16 Specify the VLANs to be extended across the overlay. We will extend VLANs 131, 151, 171, and 211.

otv extend-vlan 131,151,171,211
no shutdown
N7K-1 and N7K-2
86.17 Now let's check the OTV configuration just completed:
N7K-1-OTV-1A(config-if-overlay)# show running-config otv
<SNIP>
feature otv
otv site-vlan 1005

interface Overlay1
  otv join-interface Ethernet1/10
  otv control-group 239.1.1.1
  otv data-group 239.1.2.0/28
  otv extend-vlan 131, 151, 171, 211
  no shutdown
otv site-identifier 0x1

N7K-2-OTV-1B(config-if-overlay)# show running-config otv
<snip>
feature otv
otv site-vlan 1005

interface Overlay1
  otv join-interface Ethernet1/12
  otv control-group 239.1.1.1
  otv data-group 239.1.2.0/28
  otv extend-vlan 131, 151, 171, 211
  no shutdown
otv site-identifier 0x2
Summary of Commands
N7K-1
feature otv
otv site-vlan 1005
otv site-identifier 0x1
interface Overlay 1
  otv control-group 239.<X>.1.1
  otv data-group 239.<X>.2.0/28
  otv join-interface Ethernet1/<uplink>
  otv extend-vlan 131,151,171,211
  no shutdown
N7K-2
feature otv
otv site-vlan 1005
otv site-identifier 0x2
interface Overlay 1
  otv control-group 239.<X>.1.1
  otv data-group 239.<X>.2.0/28
  otv join-interface Ethernet1/<uplink>
  otv extend-vlan 131,151,171,211
  no shutdown
Step 87 First, let's display the OTV overlay status for your sites:
N7K-1-OTV-1A(config-if-overlay)# show otv overlay 1

OTV Overlay Information
Site Identifier 0000.0000.0000

Overlay interface Overlay1

 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 131 151 171 211 (Total:4)
 Control group       : 239.1.1.1
 Data group range(s) : 239.1.2.0/28
 Join interface(s)   : Eth1/10 (10.1.11.3)
 Site vlan           : 1005 (up)
 AED-Capable         : Yes
 Capability          : Multicast-Reachable
N7K-2-OTV-1B# show otv overlay 1

OTV Overlay Information
Site Identifier 0000.0000.0000

Overlay interface Overlay1

 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 131 151 171 211 (Total:4)
 Control group       : 239.1.1.1
 Data group range(s) : 239.1.2.0/28
 Join interface(s)   : Eth1/12 (10.1.14.4)
 Site vlan           : 1005 (up)
 AED-Capable         : Yes
 Capability          : Multicast-Reachable
Note:
Make sure the state is UP, and that the VLANs and addresses are correct.
87.1 Next, let's check the status of the VLANs extended across the overlay.

Note: The authoritative edge device is the OTV node elected to forward traffic to and from the Layer 3 core. For any given VLAN, only one authoritative edge device (AED) is elected per site. The * symbol next to the VLAN ID indicates that the device is the AED for that VLAN.
N7K-1-OTV-1A(config-if-overlay)# sh otv vlan

OTV Extended VLANs and Edge Device State Information (* - AED)

 VLAN  Auth. Edge Device                    Vlan State  Overlay
 ----  -----------------------------------  ----------  --------
 131*  N7K-1-OTV-1A                         active      Overlay1
 151*  N7K-1-OTV-1A                         active      Overlay1
 171*  N7K-1-OTV-1A                         active      Overlay1
 211*  N7K-1-OTV-1A                         active      Overlay1
N7K-2-OTV-1B(config)# show otv vlan

OTV Extended VLANs and Edge Device State Information (* - AED)

 VLAN  Auth. Edge Device                    Vlan State  Overlay
 ----  -----------------------------------  ----------  --------
 131*  N7K-2-OTV-1B                         active      Overlay1
 151*  N7K-2-OTV-1B                         active      Overlay1
 171*  N7K-2-OTV-1B                         active      Overlay1
 211*  N7K-1-OTV-1A                         active      Overlay1
87.2 Next, let's see how many OTV edge devices are present at the local site. The * symbol next to the hostname indicates the local node.
N7K-1-OTV-1A(config-if-overlay)# sh otv site

Site Adjacency Information (Site-VLAN: 1005) (* - this device)

Overlay1 Site-Local Adjacencies (Count: 2)

  Hostname                          System-ID       Up Time   Ordinal
  --------------------------------  --------------  --------  -------
* N7K-1-OTV-1A                      0026.980d.6d42  00:05:58  0
  N7K-2-OTV-1B                      0026.980d.92c2  00:05:37  1
Note:
If this were a dual-homed site, two nodes would be listed by this command; the other node would not have a * symbol next to it.
N7K-2-OTV-1B(config-if-overlay)# sh otv site

Site Adjacency Information (Site-VLAN: 1005) (* - this device)

Overlay1 Site-Local Adjacencies (Count: 2)

  Hostname                          System-ID       Up Time   Ordinal
  --------------------------------  --------------  --------  -------
  N7K-1-OTV-1A                      0026.980d.6d42  00:10:09  0
* N7K-2-OTV-1B                      0026.980d.92c2  00:09:49  1
Step 88 Verify that we are connected to the edge device at the peer site.

Note: We should see the remote edge device in our adjacency database.
N7K-1-OTV-1A# show otv adjacency

Overlay Adjacency database

Overlay-Interface Overlay1 :
Hostname      System-ID       Dest Addr  Up Time   State
N7K-2-OTV-1B  0026.980d.92c2  10.1.14.4  1w2d      UP
N7K-2-OTV-1B# show otv adjacency

Overlay Adjacency database

Overlay-Interface Overlay1 :
Hostname      System-ID       Dest Addr  Up Time   State
N7K-1-OTV-1A  0026.980d.6d42  10.1.11.3  07:16:05  UP
88.1
The MAC address table reports the MAC addresses of end hosts and devices learned on the VLAN. If no traffic has ever been sent across the overlay, only the local router MAC will be present in the table.
N7K-1-OTV-1A# show mac address-table
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports
---------+-----------------+--------+---------+------+----+----------------
G          0026.980d.6d42    static              F     F   sup-eth1(R)
88.2
The MAC address in the table is actually the local router MAC; let's verify this:
Refer to Table 18 - IP Addresses for Uplinks and Loopbacks for the correct uplink interface.
show interface e1/<uplink> mac-address

N7K-1-OTV-1A# show interface e1/10 mac-address
--------------------------------------------------------------------------------
Interface     Mac-Address      Burn-in Mac-Address
--------------------------------------------------------------------------------
Ethernet1/10  0026.980d.6d42   1cdf.0fd2.0529
Step 89 Display the OTV ARP/ND L3->L2 address mapping cache. OTV also caches ARP resolutions for MAC addresses that are not local to the site and that are learned via the overlay. If no traffic has ever been sent across the overlay, no ARP has been resolved, so the OTV process has no cached entries.
N7K-1-OTV-1A# show otv arp-nd-cache OTV ARP/ND L3->L2 Address Mapping Cache
Step 90 Connect to vCenter with the vSphere Client
90.1 After a successful login you'll see the vSphere Client application screen. A single VMware vSphere logical data center named FlexPod_DC_1 exists, containing a cluster named FlexPod_Mgmt. This cluster consists of three ESXi hosts corresponding to your two physical sites: hosts ESX1 and ESX2 represent Site A and Site B respectively, and host ESX3 is in Site A and is used for management services.
90.2 Verify interface mappings. In this step you will verify that the port groups available on the ESX hosts in each site are connected to the corresponding interfaces on the Nexus 5000 access device. Recall that the interconnecting links between the two Nexus 5000s are either shut down or not in use, so any interconnection must go through the Nexus 7000s.

Server  Port Group  Virtual Switch  Uplink  VLAN  Connecting Device  Connecting Ports
ESX1    VM-Client   vSwitch1        vmnic0  131   N5K-1              E1/9
ESX1    Local Lan   vSwitch1        vmnic0  24    N5K-1              E1/9
ESX2    VM-Client   vSwitch1        vmnic1  131   N5K-2              E1/10
ESX2    Local Lan   vSwitch1        vmnic1  24    N5K-2              E1/10

* ESX1 uses physical adapter vmnic0 (port 1 on the 10G CNA) as the physical uplink for vSwitch1 to N5K-1.
* ESX2 uses physical adapter vmnic1 (port 2 on the 10G CNA) as the physical uplink for vSwitch1 to N5K-2.

Note: Remember that only VLANs 131 and 151 have been configured to stretch across the OTV overlay between the two sites. VLAN 24 is local to each individual site.
Step 91 VM-Client: Use Cisco Discovery Protocol (CDP) from within the VMware vSphere Client to verify that each ESX host's physical adapter is connected to that site's 10G access device.
91.1 Identify the virtual switch vSwitch1.
91.2 Verify that the active CNA adapter for ESX1 (vmnic0) is connected to N5K-1. Click the bubble icon on the right side of the corresponding physical adapter vmnic0.
91.3 Verify that the active CNA adapter for ESX2 (vmnic1) is connected to N5K-2. Click the bubble icon on the right side of the corresponding physical adapter vmnic1.
92.2 Click Network Adapter. Under Network Label, select the Local Lan port group. Click OK.
92.3 Repeat the steps above for the virtual machine ClientXP.
Note:
Leave the continuous ping running and the Console window open for further lab steps.
94.3 Leave the default setting of Change host and click Next.
94.4 Pick the host ESX2 as the target of the VMotion and click Next.
94.5 For vMotion Priority, leave the default setting of High Priority and click Next.
94.6 Verify the selected choices and click Next to start the VMotion process.
94.7 Monitor the Console of the VM Server 2003R2 during the VMotion process.
94.8
When the VMotion process nears completion, network connectivity between the VM ClientXP (10.1.131.32) and the VM Server 2003R2 (10.1.131.31) is established, and the ping between them succeeds.
Step 95 Configure both virtual machines to use the port group VM-Client. As demonstrated in previous lab steps, this port group uses a VLAN that has been extended between the two sites via OTV.
95.1 Click the virtual machine Server 2003R2 to highlight it, then right-click to open the Action menu for this VM. Choose Edit Settings in the Action menu to change the virtual NIC settings of the VM.
95.2 Choose Network Adapter 1 under Hardware. In the Network Connection area, change the Network Label to VM-Client and confirm the settings with OK.
95.3
Verify that the port group for the VM Server 2003R2 has been changed to VM-Client.
95.4 Repeat the steps above for the VM ClientXP. You will lose network connectivity between the two VMs while one VM is connected to the port group VM-Client and the other is still connected to Local Lan, because the two port groups map to two different Layer 2 domains.
95.5 Verify that the VM Server 2003R2-Clone has Layer 2 network connectivity to the VM Server 2003R2 while both are connected to the port group VM-Client and reside within the same site.
95.6 Migrate (VMotion) the VM Server 2003R2 back to Site A. During and after this migration the VM ClientXP will still have connectivity to the VM Server 2003R2.
95.7 Click the virtual machine Server 2003R2 to highlight it, then right-click to open the Action menu for this VM.
95.8 Choose Migrate in the Action menu to start the VMotion process.
95.9 Leave the default setting of Change host and click Next.
95.10 Pick the host ESX1 as the target of the VMotion and click Next.
95.11 Leave the default setting of High Priority and click Next.
95.12 Verify the selected choices and click Next to start the VMotion process.
95.13 Monitor the Console of the VM ClientXP during the VMotion process.
Note:
You will notice that while the VMotion is in progress, network connectivity between the VM ClientXP (10.1.131.33) and the VM Server 2003R2 (10.1.131.31) remains active, so the ping between them succeeds. Check on the local Nexus 7000 that the MAC addresses of the remote VM servers were learned at the local site and that ARP table entries mapping remote IPs to MACs were cached successfully. Your MAC addresses will differ depending on what vSphere assigns to your VMs.
95.14
N7K-1-OTV-1A# show mac address-table
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY   Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
G          0026.980d.6d42    static              F     F   sup-eth1(R)
* 151      0050.5670.e096    dynamic      0      F     F   Eth1/14
O 151      0050.5674.b27f    dynamic      0      F     F   Overlay1
* 151      0050.567b.cdd7    dynamic    930      F     F   Eth1/14
* 211      0016.9dad.8447    dynamic    360      F     F   Eth1/14
O 211      0050.5676.bc47    dynamic      0      F     F   Overlay1
* 211      0050.567d.6c56    dynamic    420      F     F   Eth1/14
* 211      0050.567e.d107    dynamic    300      F     F   Eth1/14
* 211      02a0.9811.5474    dynamic      0      F     F   Eth1/14
If the local node is the Authoritative Edge Device (AED), the remote MAC address is learned through the Overlay interface. If the Nexus 7000 is not the AED, the remote MAC address is learned through the interconnection to the AED node.
N7K-2-OTV-1B# show mac address-table
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY   Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
G          0026.980d.92c2    static              F     F   sup-eth1(R)
O 151      0050.5670.e096    dynamic      0      F     F   Overlay1
* 151      0050.5674.b27f    dynamic      0      F     F   Eth1/16
O 151      0050.567b.cdd7    dynamic      0      F     F   Overlay1
O 211      0016.9dad.8447    dynamic      0      F     F   Overlay1
* 211      0050.5676.bc47    dynamic      0      F     F   Eth1/16
O 211      0050.567d.6c56    dynamic      0      F     F   Overlay1
O 211      0050.567e.d107    dynamic      0      F     F   Overlay1
O 211      02a0.9811.5474    dynamic      0      F     F   Overlay1
95.15
N7K-1-OTV-1A# show otv arp-nd-cache
OTV ARP/ND L3->L2 Address Mapping Cache

Overlay Interface Overlay1
VLAN  MAC Address     Layer-3 Address  Age       Expires In
20    0050.56b6.0007  10.1.131.32      00:01:55  00:06:04
N7K-2-OTV-1B# show otv arp-nd-cache
OTV ARP/ND L3->L2 Address Mapping Cache

Overlay Interface Overlay1
VLAN  MAC Address     Layer-3 Address  Age       Expires In
20    0050.56b6.0006  192.168.2.25     00:00:46  00:07:13
95.16
You can check the reachability of remote MACs with the show otv route command.
N7K-1-OTV-1A# show otv route

OTV Unicast MAC Routing Table For Overlay1

VLAN  MAC-Address     Metric  Uptime    Owner    Next-hop(s)
----  --------------  ------  --------  -------  ------------
20    0050.56b6.0000  42      03:37:33  overlay  N7K-2-OTV-1B
20    0050.56b6.0006  1       00:08:34  site     Ethernet1/14
20    0050.56b6.0007  42      00:30:10  overlay  N7K-2-OTV-1B
23    0050.5672.b514  1       00:08:41  site     Ethernet1/14
23    0050.5678.38a6  42      00:08:41  overlay  N7K-2-OTV-1B
N7K-2-OTV-1B# show otv route

OTV Unicast MAC Routing Table For Overlay1

VLAN  MAC-Address     Metric  Uptime    Owner    Next-hop(s)
----  --------------  ------  --------  -------  ------------
20    0050.56b6.0000  1       03:38:04  site     Ethernet1/16
20    0050.56b6.0006  42      00:09:05  overlay  N7K-1-OTV-1A
20    0050.56b6.0007  1       00:30:41  site     Ethernet1/16
23    0050.5672.b514  42      00:09:11  overlay  N7K-1-OTV-1A
23    0050.5678.38a6  1       00:09:12  site     Ethernet1/16
Congratulations! You have successfully migrated a VM across data center sites while the VM remained reachable at Layer 2, thanks to Cisco Overlay Transport Virtualization (OTV).
EXERCISE OBJECTIVE
In this exercise you will use VMware vSphere to migrate a virtual machine to SAN-attached storage, configure the virtual machine networking, and add VM disks. After completing these exercises you will be able to meet these objectives:
 Migrate a VM to SAN-attached storage
 Configure VM networking
 Configure VM disks
 Manage VM disks in the virtual machine's Windows 2003 operating system
96.2 Name the VM Server 2003R2-Clone.
96.3 Click the FlexPod_DC_1 datacenter. Then click Next.
96.4 Select FlexPod_Mgmt for the cluster. Click Next.
96.5 Select ESX1 for the host. Click Next.
96.6 For Datastore, select the Netapp-SAN (FC shared storage). Click Next.
96.7 Click the Same format as source radio button, then click Next.
96.8 Use the default settings. Click Next until you get to the final dialog box. Click Finish. Wait for the clone to complete.
97.2 Click the Virtual Machine Console button in the toolbar, then click in the console window.
97.3 You should already be automatically logged on. If needed, press CTRL-ALT-INSERT (instead of CTRL-ALT-DEL), or select VM menu > Guest > Send Ctrl+Alt+Del to reach the Windows logon window. Authenticate with administrator/1234Qwer.
97.4
Change the Server name and IP address by double-clicking on the MakeMe Server1 shortcut. This launches a batch file that changes the computer name to server1 and the IP address to 10.1.131.31. Allow the computer to restart.
97.5
After the server restarts, verify that the hostname is SERVER1 and the IP address is 10.1.131.31. The background image should reflect this.
Step 98
Repeat Step 97 on Server 2003R2-Clone:
IP address = 10.1.131.32/24
GW = 10.1.131.254
Computer name = server2
Step 99 Check that both VMs' virtual NIC settings are in the ESX host's vSwitch0 and in the proper port group.
99.1 Select the ESX host (ESX1 (10.1.111.21) in this example), select the Configuration tab, select Networking under Hardware, select the Virtual Switch tab, and verify that the VM NIC is in the port group.
99.2
If the VM NIC is not in the proper port group, select the VM (Server 2003R2 in this example), right-click it, and select Edit Settings from the pop-up menu.
99.3 Select the Network adapter, and change the port group under the Network Label drop-down.
100.2 Select the Change both host and datastore radio button and click Next.
100.3 Select host ESX2 as the destination. Click Next.
100.4 Select the Netapp-SAN-1 datastore, then click Next.
100.5 Select the Same format as source radio button, then click Next.
100.6 Click Finish. Wait for the migration to finish.
Step 101 Verify that the VM is on ESX2.
101.1 Click the VM Server-2003R2, then click the Summary tab. Note that the host is ESX2 and the datastore is Netapp-SAN-1.
102.4
Select the Create a new virtual disk radio button, and then click Next.
102.5
Change the Disk Size to 3 GB, select the Specify a datastore radio button, and then click Browse.
102.6
Select the Netapp-SAN-1 datastore, then click OK. Back at the Create a Disk window, click Next.
102.7 Click Next on Advanced Options to accept the default values.
102.8 Click Finish, then click OK to close the Add Hardware window.
102.9 Log into the VM.
102.10 Right-click My Computer and select Manage.
102.11 Select Disk Management and click Next on the pop-up window.
102.12 Click Next to initialize the new disk.
102.13 Check the checkbox to select Disk 1 and click Next to convert the disk to a dynamic disk.
102.14 Click Finish to start the disk initialization.
102.15
Right-click in the Disk1 Unallocated window and select New Volume from the pop-up menu. Step through the wizard, accepting the defaults for all settings.
102.16
Right-click the New Volume and select Format. Use the default settings in the pop-up windows. Close the Computer Management window.
12 SUMMARY
In this lab you:
 Installed and configured Nexus 5010.
  o Virtual Port Channel
  o Fibre Channel, SAN Port Channel, FCoE, VSAN databases and zones
  o FEX and FEX pre-provisioning
 Configured MDS 9124.
  o Fibre Channel Port Channel
 Configured OTV and learned some of the aspects of OTV and its use case:
  o Enables Layer 2 connectivity between data center sites
  o Requires a multicast-enabled IP core network between sites
  o Can be used to enable VMware VMotion across sites
 Configured VMware.
  o Added hosts to a cluster
  o Added NFS SAN storage
  o Performed VMotion and Storage VMotion over OTV
12.1 FEEDBACK
We would like to improve this lab to better suit your needs, and to do so we need your feedback. Please take 5 minutes to complete the online feedback for this lab. We carefully read and consider your scores and comments, and incorporate them into the content program. Just click the link below and answer the online questionnaire.

Click here to take survey

Thank you!
103.2 Using the console of each switch, copy the appropriate file to the running-config:

Cisco MDS9124
MDS9124# copy tftp://10.1.111.100/mds-base.cfg running-config
Trying to connect to tftp server......
Connection to server Established. Copying Started.....
<snip>
Note:
You will have to run the copy twice, because some features are not yet active when the configuration is first applied.
NEXUS 5000 RECOVERY WHEN THE KICKSTART AND SYSTEM FILES ARE ON BOOTFLASH
Cisco Nexus 5010 A or B - N5K-1 or N5K-2
Step 104 Use the dir command to determine whether the kickstart and system files required for the Nexus 5000 to work are stored locally on bootflash. You will need these file names for the boot variables set on the Nexus 5000.
loader> dir bootflash:
lost+found
config.cfg
license_SSI14100CHE_4.lic
n5000-uk9-kickstart.5.0.2.N2.1.bin
n5000-uk9.5.0.2.N2.1.bin
<snip>
104.1 Use the boot command to boot the kickstart image.
104.2 Use the load command to load the system file.
104.3 Log in to the N5K.
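The exact loader commands for steps 104.1 and 104.2 were not captured in this guide; a typical sequence (a sketch, not a verbatim capture), using the image file names from the dir output above, is:

```
loader> boot n5000-uk9-kickstart.5.0.2.N2.1.bin
...
switch(boot)# load bootflash:n5000-uk9.5.0.2.N2.1.bin
```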
104.4
N5K-1# conf t
N5K-1(config)# boot system bootflash:n5000-uk9.5.0.2.N2.1.bin
N5K-1(config)# boot kickstart bootflash:n5000-uk9-kickstart.5.0.2.N2.1.bin
N5K-1(config)# copy run st
[########################################] 100%
NEXUS 5000 RECOVERY WHEN THE KICKSTART AND SYSTEM FILES ARE NOT ON BOOTFLASH
Cisco Nexus 5010 A or B - N5K-1 or N5K-2
Step 105 Use the set command to assign an IP address to the management interface:
loader> set ip 10.1.111.1 255.255.255.0
105.1 Boot the kickstart image from the TFTP server.
105.2 Once the kickstart is booted, configure the IP address on the management interface:
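The boot command for step 105.1 was not captured here; a typical form (a sketch), assuming the lab TFTP server at 10.1.111.100 and the kickstart image name shown in the previous section, is:

```
loader> boot tftp://10.1.111.100/n5000-uk9-kickstart.5.0.2.N2.1.bin
```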
switch(boot)# conf t
switch(boot)(config)# int mgmt0
switch(boot)(config-if)# ip address 10.1.111.1 255.255.255.0
switch(boot)(config-if)# no shut
switch(boot)(config-if)# end
105.3 Copy the kickstart and system files from the TFTP server to bootflash:
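The copy commands were not captured in this guide; a typical sequence from the kickstart prompt (a sketch), assuming the lab TFTP server at 10.1.111.100 and the image names shown earlier, is:

```
switch(boot)# copy tftp://10.1.111.100/n5000-uk9-kickstart.5.0.2.N2.1.bin bootflash:
switch(boot)# copy tftp://10.1.111.100/n5000-uk9.5.0.2.N2.1.bin bootflash:
```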
105.4 105.5
105.6
N5K-1# conf t
N5K-1(config)# boot system bootflash:n5000-uk9.5.0.2.N2.1.bin
N5K-1(config)# boot kickstart bootflash:n5000-uk9-kickstart.5.0.2.N2.1.bin
105.7
MDS9124 RECOVERY WHEN THE KICKSTART AND SYSTEM FILES ARE ON BOOTFLASH
Cisco MDS9124
Step 106 Complete these steps on the MDS9124.
106.1 Use the dir command to view the files stored on bootflash.
loader> dir bootflash:
   12288  lost+found/
    2296  mts.log
18723840  m9100-s2ek9-kickstart-mz.5.0.1a.bin
56219997  m9100-s2ek9-mz.5.0.1a.bin
    2995  config.cfg
106.2 Use the boot command to boot the kickstart image.
106.3 Load the system image.
106.4 Log in to the switch.
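As on the Nexus 5000, the exact loader commands were not captured; a typical sequence (a sketch), using the image file names from the dir output above, is:

```
loader> boot m9100-s2ek9-kickstart-mz.5.0.1a.bin
...
switch(boot)# load bootflash:m9100-s2ek9-mz.5.0.1a.bin
```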
106.5
MDS9124# conf t
MDS9124(config)# boot system bootflash:m9100-s2ek9-mz.5.0.1a.bin
MDS9124(config)# boot kickstart bootflash:m9100-s2ek9-kickstart-mz.5.0.1a.bin
MDS9124(config)# end
106.6
MDS9124 RECOVERY WHEN THE KICKSTART AND SYSTEM FILES ARE NOT ON BOOTFLASH
Step 107 Complete these steps on the MDS9124.
107.1 Use the network command to set the IP address and mask for the management interface:
loader> network --ip=10.1.111.40 --nm=255.255.255.0
107.2 Boot the kickstart image from the TFTP server.
107.3 Configure the IP address on the management interface:
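The boot command for step 107.2 was not captured; a typical form (a sketch), assuming the lab TFTP server at 10.1.111.100, is:

```
loader> boot tftp://10.1.111.100/m9100-s2ek9-kickstart-mz.5.0.1a.bin
```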
switch(boot)# conf t
switch(boot)(config)# int mgmt0
switch(boot)(config-if)# ip address 10.1.111.40 255.255.255.0
switch(boot)(config-if)# no shut
switch(boot)(config-if)# end
107.4
107.5 Load the system file from bootflash.
107.6 Log in to the MDS9124.
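The load command for step 107.5 was not captured; a typical form (a sketch), using the system image name shown earlier, is:

```
switch(boot)# load bootflash:m9100-s2ek9-mz.5.0.1a.bin
```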
107.7
MDS9124# conf t
MDS9124(config)# boot system bootflash:m9100-s2ek9-mz.5.0.1a.bin
MDS9124(config)# boot kickstart bootflash:m9100-s2ek9-kickstart-mz.5.0.1a.bin
107.8
108.6 108.7
Controller B - NTAP1-B
108.8 During controller boot, when prompted to Press CTRL-C for special boot menu, press CTRL-C.
108.9 At the menu prompt, choose option 5 for Maintenance Mode.
108.10 Type Yes when prompted with Continue to boot?
108.11 Type disk show.
108.12 Reference the Local System ID: value for the following disk assignment.
Note: Half the total number of disks in the environment will be assigned to this controller and half to the other controller. Divide the number of disks in half and use the result for <# of disks> in the following command.
108.13 Type disk assign -n <# of disks>.
108.14 Type halt to reboot the controller.
108.15
Type disk show on the command line of each controller to generate a list of disks owned by that controller.
NTAP1-A> disk show
  DISK       OWNER                  POOL   SERIAL NUMBER
  ---------  ---------------------  -----  -------------
  0c.00.3    storage (135053985)    Pool0  JLVD3HRC
  0c.00.1    storage (135053985)    Pool0  JLVD2NBC
  0c.00.4    storage (135053985)    Pool0  JLVD3KPC
  0c.00.5    storage (135053985)    Pool0  JLVBZW1C
  0c.00.2    storage (135053985)    Pool0  JLVD3HTC
  0c.00.0    storage (135053985)    Pool0  JLVBZ9ZC
After the netboot interface is configured, netboot from the 7.3.5 image. When prompted, press Ctrl+C to enter the special boot menu. Select option 4a, "Same as option 4, but create a flexible root volume." The installer asks if you want to zero the disks and install a new file system; answer y. A warning displays that this will erase all of the data on the disks; answer y if you are sure this is what you want to do.

Note: The initialization and creation of the root volume can take 75 minutes or more to complete, depending on the number of disks attached.

To verify successful booting of the Data ONTAP installer, check that you are presented with the Data ONTAP setup wizard. It should prompt for a hostname.
netboot Incomplete
109.7
You might receive a message saying that cluster failover is not yet licensed. That is fine, because we will license it later.
Enter ifgrp1 for the partner interface to be taken over by ifgrp1.
Enter 10.1.111.151 for the IP address of the management interface, e0M.
Enter 255.255.255.0 as the subnet mask for e0M.
Enter y for the question "Should interface e0M take over a partner IP address during failover?"
Enter e0M for the partner interface to be taken over during failover.
Press Enter to accept the default flow control of full.
Press Enter to accept the blank IP address for e0a.
Answer n to have the interface not take over a partner IP address during failover.
Press Enter to accept the blank IP address for e0b.
Answer n to have the interface not take over a partner IP address during failover.
Answer n to continuing setup through the Web interface.
Enter 10.1.111.254 as the IP address for the default gateway for the storage system.
Enter 10.1.111.100 as the IP address for the administration host.
Enter Nevada as the location for the storage system.
Answer y to enable DNS resolution.
Enter dcvlabs.lab as the DNS domain name.
Enter 10.1.111.10 as the IP address for the first nameserver.
Answer n to finish entering DNS servers, or answer y to add up to two more DNS servers.
Answer n for running the NIS client.
Answer y to configuring the SP LAN interface.
Answer n to setting up DHCP on the SP LAN interface.
Enter Incomplete as the IP address for the SP LAN interface.
Enter 255.255.255.0 as the subnet mask for the SP LAN interface.
Enter Incomplete as the IP address for the default gateway for the SP LAN interface.
Enter Incomplete Incomplete as the name and IP address for the mail host to receive SP messages and AutoSupport.
Answer y to configuring the shelf alternate control path (ACP) management interface.
Accept the default interface for ACP management.
Accept the default domain and subnet mask for the ACP interface.
After these steps are completed, the controller should be at the command-line prompt. Type reboot.
Please enter the new hostname []: NTAP1-A
Do you want to enable IPv6? [n]: n
Do you want to configure virtual network interfaces? [n]: y
Number of virtual interfaces to configure? [0] 1
Name of virtual interface #1 []: ifgrp1
Is ifgrp1 a single [s], multi [m] or a lacp [l] virtual interface? [m] l
Is ifgrp1 to use IP based [i], MAC based [m], Round-robin based [r] or Port based [p] load balancing? [i] i
Number of links for ifgrp1? [0] 2
Name of link #1 for ifgrp1 []: e0a
Name of link #2 for ifgrp1 []: e0b
Please enter the IP address for Network Interface ifgrp1 []:
No IP address specified. Please set an IP address.
Please enter the IP address for Network Interface ifgrp1 []:
No IP address specified. Please set an IP address.
Please enter the IP address for Network Interface ifgrp1 []: 10.1.1.151
Please enter the netmask for Network Interface ifgrp1 [255.255.255.0]: 255.255.255.0
Please enter media type for ifgrp1 {100tx-fd, tp-fd, 100tx, tp, auto (10/100/1000)} [auto]: auto
Would you like to continue setup through the web interface? [n]: n
Please enter the name or IP address of the IPv4 default gateway: 10.1.1.254
The administration host is given root access to the filer's /etc files for system administration. To allow /etc root access to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host: 10.1.1.10
Where is the filer located? []: Nevada
Do you want to run DNS resolver? [n]: y
Please enter DNS domain name []: dcvlabs.com
You may enter up to 3 nameservers
Please enter the IP address for first nameserver []: 10.1.1.10
Do you want another nameserver? [n]:
Do you want to run NIS client? [n]: n
This system will send event messages and weekly reports to NetApp Technical Support. To disable this feature, enter "options autosupport.support.enable off" within 24 hours. Enabling Autosupport can significantly speed problem determination and resolution should a problem occur on your system. For further information on Autosupport, please see: http://now.netapp.com/autosupport/
Press the return key to continue.
The Baseboard Management Controller (BMC) provides remote management capabilities including console redirection, logging and power control. It also extends autosupport by sending down filer event alerts.
Would you like to configure the BMC [y]: y
Would you like to enable DHCP on the BMC LAN interface [y]: n
Please enter the IP address for the BMC [0.0.0.0]: 10.1.1.152
Please enter the netmask for the BMC [0.0.0.0]: 255.255.255.0
Please enter the IP address for the BMC Gateway [0.0.0.0]: 10.1.1.254
Please enter gratuitous ARP Interval for the BMC [10 sec (max 60)]:
The mail host is required by your system to enable BMC to send ASUP message when filer is down
Please enter the name or IP address of the mail host [mailhost]:
You may use the autosupport options to configure alert destinations.
The initial aggregate currently contains 3 disks; you may add more disks to it later using the "aggr add" command.
Now apply the appropriate licenses to the system and install the system files (supplied on the Data ONTAP CD-ROM or downloaded from the NOW site) from a UNIX or Windows host. When you are finished, type "download" to install the boot image and "reboot" to start using the system.
110.44 To verify the successful setup of Data ONTAP 7.3.5, make sure that the terminal prompt is available and check the settings that you entered in the setup wizard.

Step 111 Installing Data ONTAP to the onboard flash storage DONE/INSTRUCTOR Duration: 2 minutes
Note:
For this step, you will need a web server to host your ONTAP installation file.
Controller A - NTAP1-A
111.1 Install the Data ONTAP image to the onboard flash device.
software update http://<web_server>/<ontap_image>.zip
111.2 After this is complete, type download and press Enter to download the software to the flash device.

Controller B - NTAP1-B
111.3 Install the Data ONTAP image to the onboard flash device.
software update http://<web_server>/<ontap_image>.zip
111.4 After this is complete, type download and press Enter to download the software to the flash device.
111.5 Verify that the software was downloaded successfully by entering software list on the command line and verifying that the Data ONTAP zip file is present.
Step 112 Installing required licenses
Duration: 3 minutes
Controller A - NTAP1-A
112.1 Install the necessary Data ONTAP licenses.
license add var_ntap_cluster_lic var_ntap_fcp_lic var_ntap_flash_cache_lic var_ntap_nearstore_option_lic var_ntap_a_sis_lic var_ntap_nfs_lic var_ntap_multistore_lic var_ntap_flexclone_lic
112.2 To verify that the licenses installed correctly, enter the command license on the command line and verify that the licenses listed above are active.
2011 Cisco
Step 113 Start FCP service and verify proper FC port configuration - DONE/INSTRUCTOR
Duration: 3 minutes
On both controllers - NTAP1-A and NTAP1-B
113.1 Start fcp and verify its status.
NTAP1-A> fcp start
Fri May 14 06:48:57 GMT [fcp.service.startup:info]: FCP service startup
NTAP1-A> fcp status
FCP service is running.
113.2 The fcadmin config command confirms that the adapters are configured as targets.
NTAP1-A> fcadmin config
          Local
 Adapter  Type     State        Status
---------------------------------------------------
 0c       target   CONFIGURED   online
 0d       target   CONFIGURED   online
113.3 If either FC port 0c or 0d is listed as an initiator, use fcadmin config to change its personality to target.
113.4 Re-run fcadmin config; both ports should now show target or PENDING (target).
113.5 Reboot the storage controller to enable the cluster feature and to bring the FC ports online as target ports as necessary.
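A sketch of the personality change for step 113.3, assuming the adapter names 0c and 0d from the output above (the adapter must be offline before its type can be changed, and the change takes effect after the reboot in step 113.5):

NTAP1-A> fcadmin config -d 0c
NTAP1-A> fcadmin config -t target 0c
NTAP1-A> fcadmin config -d 0d
NTAP1-A> fcadmin config -t target 0d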
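The aggregate-creation command for step 114.1 does not survive in this copy. A sketch consistent with the 3-disk initial aggregate mentioned by the setup wizard and the aggr1 volumes created later (the aggregate name and disk count are assumptions):

NTAP1-A> aggr create aggr1 3
NTAP1-A> aggr status aggr1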
This command usually finishes quickly. Depending on the state of each disk, some or all of the disks might need to be zeroed before they can be added to the aggregate; zeroing can take up to 60 minutes to complete.
114.2 Verify that the aggregate was created successfully.
Status            Options
raid_dp, aggr     root
32-bit
raid_dp, aggr
32-bit
115.2 Type vlan create ifgrp1 211 to create the VLAN interface, and append the commands to /etc/rc with wrfile -a /etc/rc so they persist across reboots.
115.3 Type ifconfig ifgrp1-211 mtusize 9000.
115.5 Type rdfile /etc/rc and verify that the commands from the previous steps are in the file correctly.
Netapp1> rdfile /etc/rc
#Regenerated by registry Thu Apr 21 06:36:34 GMT 2011
#Auto-generated by Setup Wizard Mon Oct 18 17:04:15 GMT 2010
vif create multi ifgrp1 -b ip e0b
ifconfig e0a `hostname`-e0a netmask 255.255.255.0 mediatype auto mtusize 1500 wins flowcontrol none
ifconfig e0b `hostname`-e0b netmask 255.255.255.0 mediatype auto mtusize 1500 wins flowcontrol none
ifconfig ifgrp1 `hostname`-ifgrp1 netmask 255.255.255.0 mtusize 9000
route add default n 1
routed on
savecore
options dns.enable off
options nis.enable off
115.6 Verify that the interface ifgrp1-211 shows up in the output of the command ifconfig -a.
Step 116 Hardening storage system logins and security - DONE
Duration: 5 minutes
Controller A - NTAP1-A
116.1 Type passwd to change the password for the root user.
116.2 Enter the new root password of 1234Qwer twice as prompted.
116.3 Type secureadmin setup ssh to enable ssh on the storage controller.
116.4 Accept the default values for the ssh1.x protocol.
116.5 Enter 1024 for the ssh2 protocol.
116.6 Enter yes if the information specified is correct and to create the ssh keys.
NTAP1-A> secureadmin setup ssh
SSH Setup
<snip>
Please enter the size of host key for ssh1.x protocol [768] : 768
Please enter the size of server key for ssh1.x protocol [512] : 512
Please enter the size of host keys for ssh2.0 protocol [768] : 1024
You have specified these parameters:
        host key size = 768 bits
        server key size = 512 bits
        host key size for ssh2.0 protocol = 1024 bits
Is this correct? [yes] yes
After Setup is finished the SSH server will start automatically.
116.7 Disable telnet on the storage controller.
116.8 Enable ssl on the storage controller. Type secureadmin setup ssl.
116.9 Enter country name code: US, state or province name: CA, locality name: San Jose, organization name: Cisco, and organization unit name: WWPO.
116.10 Enter NTAP1-A.dcvlabs.lab as the fully qualified domain name of the storage system.
116.11 Enter pephan@cisco.com as the administrator's e-mail address.
116.12 Accept the default for days until the certificate expires.
116.13 Enter 1024 for the ssl key length.
NTAP1-A> secureadmin setup ssl
Country Name (2 letter code) [US]: US
State or Province Name (full name) [California]: CA
Locality Name (city, town, etc.) [Santa Clara]: San Jose
Organization Name (company) [Your Company]: Cisco
Organization Unit Name (division): WWPO
Common Name (fully qualified domain name) [NTAP1-A.dcvlabs.com]: NTAP1-A.dcvlabs.lab
Administrator email: pephan@cisco.com
Days until expires [5475] : 5475
Key length (bits) [512] : 1024
Thu May 13 22:12:07 GMT [secureadmin.ssl.setup.success:info]: Starting SSL with new certificate.
116.14 Disable http access to the storage system.
116.15 Verify that the root password has been set by logging in to the controller with the new credentials. To verify that telnet is disabled, try to access the controller by telnet; it should not connect. To verify that http access has been disabled, FilerView should be reachable only through https, not http.
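The commands for disabling telnet and http are not shown above; a sketch using the standard Data ONTAP 7-mode options (confirm the option names against your release):

NTAP1-A> options telnet.enable off
NTAP1-A> options httpd.admin.enable off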
Step 117 Create SNMP requests role and assign SNMP login privileges.
Duration: 3 minutes
On both controllers - NTAP1-A and NTAP1-B
117.1 Execute the following command:
useradmin role add snmpv3role -a login-snmp
117.2 To verify, execute useradmin role list on each of the storage controllers.
Step 118 Create SNMP management group and assign SNMP request role to it.
Duration: 3 minutes
118.1 Execute the following command:
useradmin group add snmpv3group -r snmpv3role
118.2 To verify, execute useradmin group list on each of the storage controllers.
Step 119 Create SNMP user and assign it to SNMP management group.
Duration: 3 minutes
119.1 Execute the following command:
useradmin user add <snmp_username> -g snmpv3group
Note: You will be prompted for a password after creating the user. Use 1234Qwer when prompted.
119.2 To verify, execute useradmin user list on each of the storage controllers.
Step 120 Enable SNMP on the storage controllers.
Duration: 3 minutes
120.1 Execute the following command: options snmp.enable on.
120.2 To verify, execute the command options snmp.enable on each of the storage controllers.
Netapp1> options snmp.enable
snmp.enable                  on
Step 121 Delete SNMP v1 communities from the storage controllers.
Duration: 3 minutes
121.1 Execute the following command: snmp community delete all.
Netapp1> snmp community
        ro public
Netapp1> snmp community delete all
121.2 To verify, execute the command snmp community on each of the storage controllers.
Step 122 Set SNMP contact, location, and trap destinations for each of the storage controllers.
Duration: 6 minutes
On both controllers - NTAP1-A and NTAP1-B
122.1 Execute the following commands:
snmp contact pephan@cisco.com
snmp location Nevada
snmp traphost add ntapmgmt.dcvlabs.lab
snmp traphost add snmp_trap_dest??
122.2 To verify, execute snmp on each of the storage controllers.

Netapp1> snmp
contact:
        pephan@cisco.com
location:
        TNI
authtrap:
        0
init:
        0
traphosts:
        10.1.111.10 (10.1.111.10) <10.1.111.10>
community:
Step 123 Reinitialize SNMP on the storage controllers.
Duration: 3 minutes
On both controllers - NTAP1-A and NTAP1-B
123.1 Execute the following command: snmp init 1.
123.2 No verification needed.
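To confirm SNMPv3 end to end, you can poll the controller from the management host; a sketch using net-snmp, assuming the SNMP user created in step 119 is named snmpv3user (adjust the user name, auth settings, and controller address to your pod):

snmpwalk -v 3 -u snmpv3user -l authNoPriv -a MD5 -A 1234Qwer 10.1.1.151 system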
124.2 Create the volume that will later be exported to the ESXi servers as an NFS datastore.
124.3 Set the Snapshot reservation to 0% for this volume and disable the automatic Snapshot option.
124.4 Create the volumes that will hold the ESXi boot LUNs for each server.

vol create ESX_BOOT_A -s none aggr1 20g
vol create ESX1_BOOT_A -s none aggr1 20g
vol create ESX2_BOOT_A -s none aggr1 20g
vol create ESX3_BOOT_A -s none aggr1 20g
124.5 Set the Snapshot reservation to 0% for these volumes and disable the automatic Snapshot option.
snap reserve ESX1_BOOT_A 0
vol options ESX1_BOOT_A nosnap on
snap reserve ESX2_BOOT_A 0
vol options ESX2_BOOT_A nosnap on
snap reserve ESX3_BOOT_A 0
vol options ESX3_BOOT_A nosnap on
Step 125 Creating a virtual swap file volume - DONE/INSTRUCTOR
Duration: 3 minutes
ESX servers create a VMkernel swap, or vswap, file for every running VM. The sizes of these files are considerable; by default, the vswap file is equal to the amount of memory configured for each VM. Because this data is transient in nature and is not required to recover a VM from either a backup copy or by using Site Recovery Manager, NetApp recommends relocating the VMkernel swap file for every virtual machine from the VM home directory to a datastore on a separate NetApp volume dedicated to storing VMkernel swap files. For more information, refer to TR-3749: NetApp and VMware vSphere Storage Best Practices and the vSphere Virtual Machine Administration Guide.
Controller A - NTAP1-A
125.1 Create the volume that will later be exported to the ESXi servers as an NFS datastore.
vol create VDI_SWAP -s none aggr1 20g
Note: This volume will be used to store VM swap files. Because swap files are temporary, they do not need Snapshot copies or deduplication.
125.2 Disable the Snapshot schedule, set the Snapshot reservation to 0%, and disable the automatic Snapshot option for this volume.
snap sched VDI_SWAP 0 0 0
snap reserve VDI_SWAP 0
vol options VDI_SWAP nosnap on
Verification
NTAP1-A> snap sched VDI_SWAP
Volume VDI_SWAP: 0 0 0
NTAP1-A> vol options VDI_SWAP
nosnap=on, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
Step 126 Setup Deduplication.
Duration: 5 minutes
NetApp deduplication saves space on primary storage by removing redundant copies of blocks within a volume. The process is transparent to applications and can be enabled and disabled on the fly. In a Citrix XenDesktop environment, deduplication provides great value when we consider that all users in the environment have their own user data either on the user data disk (for persistent desktops) and/or in CIFS home directories (nonpersistent desktops). In many environments, user data is duplicated multiple times as various identical copies and versions of documents and files are saved. For more information, refer to NetApp TR-3505: NetApp Deduplication for FAS, Deployment and Implementation Guide.
Controller A - NTAP1-A
126.1 Enable deduplication on the infrastructure and boot volumes and set it to run every day at 12:00 a.m.
sis on /vol/VDI_VFILER1_DS
sis on /vol/ESX1_BOOT_A
sis on /vol/ESX2_BOOT_A
sis on /vol/ESX3_BOOT_A
sis on /vol/vol1
sis config -s 0@sun-sat /vol/VDI_VFILER1_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/ESX1_BOOT_A
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/ESX2_BOOT_A
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/ESX3_BOOT_A
sis config -s 0@sun-sat /vol/vol1
126.2 Run the initial deduplication scan:

sis start -s
Step 127 Verification
127.1 Monitor the status of the dedupe operation:
sis status
Path                State     Status   Progress
/vol/ESX1_BOOT_A    Enabled   Idle     Idle for 00:01:53
<snip>
127.2 Verify the deduplication schedules:

0@mon,tue,wed,thu,fri,sat,sun
0@sun-sat
127.3
127.4 Run vol status and check the volumes.

NTAP1-A> vol status
Volume         State
ESX1_BOOT_A    online
ESX2_BOOT_A    online
ESX3_BOOT_A    online
VFILER1_ROOT   online
INFRA_SWAP     online
VFILER1_DS     online
Here are the LAB INSTRUCTOR commands for enabling deduplication for all the lab volumes.
sis on /vol/LAB_VFILER1_DS
sis on /vol/LAB_VFILER2_DS
sis on /vol/LAB_VFILER3_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/LAB_VFILER1_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/LAB_VFILER2_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/LAB_VFILER3_DS
sis on /vol/LAB_VFILER210_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/LAB_VFILER210_DS
sis on /vol/INFRA_DS_XEN
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/INFRA_DS_XEN
In this step we will create secure IP spaces (a logical routing table specific to each vFiler unit). Each IP space provides an individual IP routing table per vFiler unit. The association between a VLAN interface and a vFiler unit allows all packets to and from that vFiler unit to be tagged with the VLAN ID of that VLAN interface. IP spaces are similar to the concept of VRFs in the Cisco world.
Controller A - NTAP1-A
128.1 Type ipspace create ips-vfiler211 to create the IP space for the vdi_vfiler_211 vFiler unit.
NTAP1-A> ipspace create ips-vfiler111 NTAP1-A> ipspace create ips-vfiler211
128.2 Assign the interfaces to the IP spaces using the command ipspace assign ips-vfiler211 ifgrp1-211.
NTAP1-A> ipspace assign ips-vfiler111 ifgrp1-111 NTAP1-A> ipspace assign ips-vfiler211 ifgrp1-211
128.3 Verify that the IP space was created and assigned successfully by issuing the command ipspace list and checking that the IP space and the interface assigned to it are listed.
NTAP1-A> ipspace list
Number of ipspaces configured: 18
default-ipspace (e0M e0P e0a e0b losk ifgrp1)
vfiler1 (no interfaces)
ips-vfiler2 (ifgrp1-212)
ips-vfiler1 (ifgrp1-211)
ips-vfiler3 (ifgrp1-213)
Step 129 Creating the infrastructure vFiler units - DONE/INSTRUCTOR
Duration: 5 minutes
Controller A - NTAP1-A
129.1 Create a vFiler unit called vdi_vfiler_211. Assign it to IP space ips-vfiler211 and give it an IP address of 10.1.211.151. Assign /vol/VDI_VFILER211_ROOT to it.
vfiler create vdi_vfiler_211 -s ips-vfiler211 -i 10.1.211.151 /vol/VDI_VFILER211_ROOT
Note: You can only create one vFiler unit at a time. The commands below should NOT be copied and pasted all at once.
129.2 Accept the IP address that you specified on the command line by pressing Enter.
129.3 Type ifgrp1-211 for the interface to assign to the vFiler unit.
129.4 Press Enter to accept the default subnet mask.
129.5 If necessary, type 10.1.111.10 as the IP address of the administration host for the vFiler unit.
129.6 Enter n for running a DNS resolver.
129.7 Enter n for running an NIS client.
129.8 Enter a password for the vFiler unit.
129.9 Enter the same password a second time to confirm.
129.10 Enter n for setting up CIFS.
NTAP1-A> vfiler create vdi_vfiler_211 -s ips-vfiler211 -i 10.1.211.151 /vol/VDI_VFILER211_ROOT
<snip>
Setting up vfiler vdi_vfiler_211
Configure vfiler IP address 10.1.211.151? [y]: y
Interface to assign this address to {ifgrp1-211}: ifgrp1-211
Netmask to use: [255.255.255.0]: 255.255.255.0
Please enter the name or IP address of the administration host: 10.1.111.10
Do you want to run DNS resolver? [n]: n
Do you want to run NIS client? [n]: n
New password: 1234Qwerty
Retype new password: 1234Qwerty
Do you want to setup CIFS? [y]: n
129.11 To verify that the vFiler unit was created successfully, enter the command vfiler status and verify that the vFiler unit is listed and that its status is running.
NTAP1-A> vfiler status
(each configured vFiler unit should be listed with a status of running)
Step 130 Mapping the necessary infrastructure volumes to the infrastructure vFiler unit - DONE/INSTRUCTOR
Duration: 5 minutes
In this step we are going to add a datastore volume and a swap volume to each vFiler unit. This provides each lab pod the volumes required to support a virtualization infrastructure.
Controller A - NTAP1-A
130.1 Type vfiler add vdi_vfiler_211 /vol/VDI_SWAP /vol/VDI_VFILER1_DS. The add subcommand adds the specified paths to an existing vFiler unit.
NTAP1-A> vfiler add vdi_vfiler_211 /vol/VDI_SWAP /vol/VDI_VFILER1_DS
<snip>
Mon Sep 26 11:00:26 PDT [cmds.vfiler.path.move:notice]: Path /vol/VDI_SWAP was moved to vFiler unit "vdi_vfiler_211".
Mon Sep 26 11:00:26 PDT [cmds.vfiler.path.move:notice]: Path /vol/VDI_VFILER1_DS was moved to vFiler unit "vdi_vfiler_211".
130.2 To verify that the volumes were assigned correctly, enter the command vfiler run vdi_vfiler_211 vol status and check that the two added volumes are listed in the output.
NTAP1-A> vfiler run vdi_vfiler_211 vol status
===== vdi_vfiler_211
Volume               State
VDI_VFILER1_DS       online
VDI_SWAP             online
VDI_VFILER211_ROOT   online
131.2 Allow the ESXi servers read and write access to the infrastructure NFS datastore by exporting /vol/VDI_VFILER1_DS and /vol/VDI_SWAP.
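The export command itself is missing here; a sketch reconstructed from the verification output in step 131.3 (run in the vFiler context; the subnets and options are taken from that output):

vfiler run vdi_vfiler_211 exportfs -p sec=sys,rw=10.1.111.0/27:10.1.211.0/27,root=10.1.111.0/27:10.1.211.0/27,nosuid /vol/VDI_VFILER1_DS
vfiler run vdi_vfiler_211 exportfs -p sec=sys,rw=10.1.111.0/27:10.1.211.0/27,root=10.1.111.0/27:10.1.211.0/27,nosuid /vol/VDI_SWAP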
131.3 To verify that the volumes were exported successfully, enter the command exportfs and make sure the volumes are listed.
vdi_vfiler_211@NTAP1-A> exportfs
/vol/VDI_VFILER1_DS   -sec=sys,rw=10.1.111.0/27:10.1.211.0/27,root=10.1.111.0/27:10.1.211.0/27,nosuid
/vol/VDI_SWAP   -sec=sys,rw=10.1.111.0/27:10.1.211.0/27,root=10.1.111.0/27:10.1.211.0/27,nosuid
/vol/VDI_VFILER211_ROOT   -sec=sys,rw=10.1.10.100,root=10.1.10.100
132.2 Enable the priority scheduler (FlexShare) if it is not already running.

!!! Before
ntap1-A> priority show
Priority scheduler is stopped.
NTAP1-A> priority on
Priority scheduler starting.
!!! After
ntap1-A> priority show
132.3 Set the priority level for operations sent to each volume relative to other volumes. The value may be one of VeryHigh, High, Medium, Low, or VeryLow. A volume with a higher priority level receives more resources than a volume with a lower one. This option sets derived values for scheduling (CPU), the concurrent disk I/O limit, and NVLOG usage for the volume, based on the settings of other volumes in the aggregate.
priority set volume INFRA_DS_1 level=VeryHigh
priority set volume ESX1_BOOT_A level=VeryHigh cache=keep
priority set volume ESX2_BOOT_A level=VeryHigh cache=keep
priority set volume ESX3_BOOT_A level=VeryHigh cache=keep
priority set volume VDI_VFILER1_DS level=VeryHigh cache=keep
priority set volume VDI_SWAP level=Medium cache=reuse
132.4 To verify that the priority levels were set correctly, issue the command priority show volume and verify that the volumes are listed with the correct priority level.
Netapp1> priority show volume
Volume                Priority   Relative   Sys Priority
                      Service    Priority   (vs User)
INFRASTRUCTURE_SWAP   on         VeryHigh   Medium
VMHOST_BOOT_A         on         VeryHigh   Medium
INFRA_DS_1            on         VeryHigh   Medium
ntap1-A> priority show volume
Volume            Priority   Relative   Sys Priority
                  Service    Priority   (vs User)
LAB_VFILER10_DS   on         VeryHigh   Medium
LAB_VFILER11_DS   on         VeryHigh   Medium
LAB_VFILER12_DS   on         VeryHigh   Medium
LAB_VFILER13_DS   on         VeryHigh   Medium
LAB_VFILER14_DS   on         VeryHigh   Medium
LAB_VFILER15_DS   on         VeryHigh   Medium
LAB_VFILER16_DS   on         VeryHigh   Medium
LAB_VFILER1_DS    on         VeryHigh   Medium
LAB_VFILER2_DS    on         VeryHigh   Medium
LAB_VFILER3_DS    on         VeryHigh   Medium
LAB_VFILER4_DS    on         VeryHigh   Medium
LAB_VFILER5_DS    on         VeryHigh   Medium
LAB_VFILER6_DS    on         VeryHigh   Medium
LAB_VFILER7_DS    on         VeryHigh   Medium
LAB_VFILER8_DS    on         VeryHigh   Medium
LAB_VFILER9_DS    on         VeryHigh   Medium
VMHOST_BOOT_A     on         VeryHigh   Medium
133.2 Verify that the igroups were created successfully by entering the command igroup show and checking that the output matches what was entered.
NTAP1-A> igroup show
    VMHOST1 (FCP) (ostype: vmware):
        20:00:00:25:b5:01:0a:00 (not logged in)
        20:00:00:25:b5:01:0b:00 (not logged in)
        20:00:00:25:b5:01:0a:01 (not logged in)
        20:00:00:25:b5:01:0b:01 (not logged in)
<snip>
Step 134 Creating LUNs for the service profiles - DONE/Instructor
Duration: 5 minutes
Controller A - NTAP1-A
134.1 Create a LUN for each service profile booting from NTAP1-A. Each LUN is 4 GB in size, of type vmware, and has no space reserved.
Note: We are currently using only one controller for active connections in our lab.
lun create -s 4g -t vmware -o noreserve /vol/ESX1_BOOT_A/ESX lun create -s 4g -t vmware -o noreserve /vol/ESX2_BOOT_A/ESX lun create -s 4g -t vmware -o noreserve /vol/ESX3_BOOT_A/ESX
134.2 Verify that the LUNs were created successfully by entering the command lun show and checking that the new LUNs show up in the output.
NTAP1-A> lun show
        /vol/ESX1_BOOT_A/ESX    4g (4294967296)   (r/w, online)
<snip>
Step 135 Mapping LUNs to igroups.
Duration: 5 minutes
Controller A - NTAP1-A
135.1 For each LUN created, enter the following command to map the LUN to the initiator group for its service profile:
lun map /vol/ESX1_BOOT_A/ESX ESX1 0 lun map /vol/ESX2_BOOT_A/ESX ESX2 0 lun map /vol/ESX3_BOOT_A/ESX ESX3 0
135.2 Verify that the LUNs were mapped successfully by entering the command lun show and checking that the LUNs report their status as mapped.
2g (2147483648)   (r/w, online, mapped)
2g (2147483648)   (r/w, online, mapped)
2g (2147483648)   (r/w, online, mapped)
16.1 FLEXCLONE
Step 136 FlexClone the ESX boot volume to create individual boot volumes/LUNs for each ESX server.
136.1 FlexClone a fas3170_vfiler2 volume and add that clone to fas3170_vfiler1.
136.2 Take a Snapshot copy of the FlexVol volume that holds the VMFS datastore you want cloned. Name the Snapshot copy clone_base_snap so that you can identify its purpose. The command below creates a Snapshot copy of ESX_BOOT_A named clone_base_snap.
NTAP1-A> snap create ESX_BOOT_A clone_base_snap
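You can confirm the Snapshot copy exists before cloning; a sketch (the listing will vary with your system):

NTAP1-A> snap list ESX_BOOT_A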
136.3 Create a FlexClone volume based on the Snapshot copy that you just created. You provide the name of the new volume, the base volume, and the Snapshot copy from the base volume.

NTAP1-A> vol clone create ESX1_BOOT_A_clone -s none -b ESX_BOOT_A clone_base_snap
NTAP1-A> vol clone create ESX2_BOOT_A_clone -s none -b ESX_BOOT_A clone_base_snap
NTAP1-A> vol clone create ESX3_BOOT_A_clone -s none -b ESX_BOOT_A clone_base_snap
136.4 Verify the cloned volumes with vol status.

ntap1-A> vfiler run * vol status
<snip>
LAB_VFILER3_SWAP   online   raid_dp, flex
                            sis
136.5 (optional) You can split your clone off so that it is completely independent of the base volume.
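A sketch of the optional split, assuming the clone names created in step 136.3 (the split copies the shared blocks in the background and can take a while):

NTAP1-A> vol clone split start ESX1_BOOT_A_clone
NTAP1-A> vol clone split status ESX1_BOOT_A_clone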
136.6 Unmap the base LUN from the ESX1 igroup.
136.7 Bring the cloned LUNs online; cloned LUNs are offline when created.
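A sketch of steps 136.6 and 136.7, assuming the base LUN path /vol/ESX_BOOT_A/ESX and the clone paths from step 136.3:

NTAP1-A> lun unmap /vol/ESX_BOOT_A/ESX ESX1
NTAP1-A> lun online /vol/ESX1_BOOT_A_clone/ESX
NTAP1-A> lun online /vol/ESX2_BOOT_A_clone/ESX
NTAP1-A> lun online /vol/ESX3_BOOT_A_clone/ESX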
136.8 Map the cloned LUNs to the igroups.

lun map /vol/ESX1_BOOT_A_clone/ESX ESX1 0
lun map /vol/ESX2_BOOT_A_clone/ESX ESX2 0
lun map /vol/ESX3_BOOT_A_clone/ESX ESX3 0
136.9 Verify the vFiler unit paths with vfiler status -a.

ntap1-A> vfiler status -a
lab-vfiler1                      running
   ipspace: ips-vfiler1
   IP address: 10.1.211.151 [ifgrp1-211]
   Path: /vol/LAB_VFILER1_ROOT [/etc]
   Path: /vol/LAB_VFILER1_DS
   Path: /vol/LAB_VFILER1_SWAP
   UUID: 5dd244ac-8707-11e0-bb73-00a09816bfba
   Protocols allowed: 7
   Allowed: proto=rsh
   Allowed: proto=ssh
   Allowed: proto=nfs
   Allowed: proto=cifs
   Allowed: proto=iscsi
   Allowed: proto=ftp
   Allowed: proto=http
   Protocols disallowed: 0
lab-vfiler2                      running
   ipspace: ips-vfiler2
   IP address: 10.1.212.151 [ifgrp1-212]
   Path: /vol/LAB_VFILER2_ROOT [/etc]
   Path: /vol/LAB_VFILER2_DS
   Path: /vol/LAB_VFILER2_SWAP
   UUID: b094290c-86f4-11e0-bb73-00a09816bfba
   Protocols allowed: 7
   Allowed: proto=rsh
   Allowed: proto=ssh
   Allowed: proto=nfs
   Allowed: proto=cifs
   Allowed: proto=iscsi
   Allowed: proto=ftp
   Allowed: proto=http
   Protocols disallowed: 0
/vol/LAB_VFILER1_DS   rw=10.1.211.21,root=10.1.211.21
136.10 Add the cloned XEN volumes to the vFiler units.

vfiler add lab-vfiler1 /vol/LAB_VFILER1_XEN
vfiler add lab-vfiler2 /vol/LAB_VFILER2_XEN
vfiler add lab-vfiler3 /vol/LAB_VFILER3_XEN
Note: It might be useful to add a _CLONE suffix to the volume name for ease of reference.
136.11 Show that the cloned volume is now in lab-vfiler1.
ntap1-A> vfiler status -a
lab-vfiler1                      running
   ipspace: ips-vfiler1
   IP address: 10.1.211.151 [ifgrp1-211]
   Path: /vol/LAB_VFILER1_ROOT [/etc]
   Path: /vol/LAB_VFILER1_DS
   Path: /vol/LAB_VFILER1_SWAP
   Path: /vol/LAB_VFILER1_XEN
   UUID: 5dd244ac-8707-11e0-bb73-00a09816bfba
   Protocols allowed: 7
   Allowed: proto=rsh
   Allowed: proto=ssh
   Allowed: proto=nfs
   Allowed: proto=cifs
   Allowed: proto=iscsi
   Allowed: proto=ftp
   Allowed: proto=http
   Protocols disallowed: 0
lab-vfiler2                      running
   ipspace: ips-vfiler2
   IP address: 10.1.212.151 [ifgrp1-212]
   Path: /vol/LAB_VFILER2_ROOT [/etc]
   Path: /vol/LAB_VFILER2_DS
   Path: /vol/LAB_VFILER2_SWAP
   Path: /vol/LAB_VFILER2_XEN
   UUID: b094290c-86f4-11e0-bb73-00a09816bfba
   Protocols allowed: 7
   Allowed: proto=rsh
   Allowed: proto=ssh
   Allowed: proto=nfs
   Allowed: proto=cifs
   Allowed: proto=iscsi
   Allowed: proto=ftp
   Allowed: proto=http
   Protocols disallowed: 0

ntap1-A> vfiler run lab-vfiler1 exportfs -p rw=10.1.211.21,root=10.1.211.21 /vol/LAB_VFILER1_XEN
ntap1-A> vfiler run lab-vfiler1 exportfs
/vol/LAB_VFILER1_XEN   rw=10.1.211.20:10.1.211.21,root=10.1.211.20:10.1.211.21
137.2 Unmap the original boot LUNs from the igroups.

lun unmap /vol/VMHOST_BOOT_A/VMHOST1_NTAP1-A VMHOST1
lun unmap /vol/VMHOST_BOOT_A/VMHOST2_NTAP1-A VMHOST2
lun unmap /vol/VMHOST_BOOT_A/VMHOST3_NTAP1-A VMHOST3
137.3 Map the cloned LUNs to the igroups.

lun map /vol/VMHOST_BOOT_A/VMHOST1_clone VMHOST1 0
lun map /vol/VMHOST_BOOT_A/VMHOST2_clone VMHOST2 0
lun map /vol/VMHOST_BOOT_A/VMHOST3_clone VMHOST3 0
137.5 Stop and destroy the extra vFiler units.

vfiler stop lab-vfiler10
vfiler destroy lab-vfiler10 -f
vfiler stop lab-vfiler11
vfiler destroy lab-vfiler11 -f
vfiler stop lab-vfiler12
vfiler destroy lab-vfiler12 -f
These steps should be performed after the extra vfilers have been destroyed. Take volumes offline and then destroy them.
vol offline LAB_VFILER9_ROOT
vol offline LAB_VFILER9_DS
vol offline LAB_VFILER9_SWAP
vol destroy LAB_VFILER9_ROOT -f
vol destroy LAB_VFILER9_DS -f
vol destroy LAB_VFILER9_SWAP -f
  switchport trunk mode off
  port-license acquire
  no shutdown
interface fc1/4
  no switchport trunk allowed vsan all
  switchport description NetApp Storage 0b
  switchport trunk mode off
  port-license acquire
  no shutdown
interface fc1/5-8
  port-license acquire
interface fc1/9-24
interface mgmt0
  ip address 10.1.111.40 255.255.255.0
no system default switchport shutdown
role network-admin
# ip domain-lookup
ip domain-lookup
switchname N5K-1
logging event link-status default
service unsupported-transceiver
class-map type qos class-fcoe
!class-map type queuing class-fcoe
!  match qos-group 1
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
!class-map type network-qos class-fcoe
!  match qos-group 1
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
policy-map type network-qos jumbo
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9000
system qos
  service-policy type network-qos jumbo
interface port-channel3
  description ESX1
  switchport mode trunk
  vpc 3
  switchport trunk allowed vlan 1,20-25,100,160,200-201
  spanning-tree port type edge trunk
  speed 10000
interface port-channel4
  description ESX2
  switchport mode trunk
  vpc 4
  switchport trunk allowed vlan 1,20-25,100,160,200-201
  spanning-tree port type edge trunk
  speed 10000
interface port-channel5
  description ESX3
  switchport mode trunk
  vpc 5
  switchport trunk allowed vlan 1,20-25,100,160,200-201
  spanning-tree port type edge trunk
  speed 10000
interface port-channel60
  description link to core
  switchport mode trunk
  vpc 60
  switchport trunk allowed vlan 1,20-25,160
  speed 10000
!!! We currently do not have IP storage plugged directly into our 5Ks. !!! IP storage comes through core switches.
!interface port-channel70
!  description IP Storage Array
!  vpc 70
!  switchport access vlan 162
interface port-channel100
  description dual-homed 2148 can use as management switch
  switchport mode fex-fabric
  vpc 100
  fex associate 100
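After the port channels and vPCs are configured, their health can be checked from either Nexus 5000; a sketch of standard NX-OS verification commands:

N5K-1# show vpc brief
N5K-1# show port-channel summary
N5K-1# show fex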
interface fc2/1
  switchport trunk allowed vsan 10
  switchport description To MDS9124 1/1
  switchport trunk mode on
!  channel-group 256 force
  no shutdown
interface fc2/2-4
!!! Associate interfaces e1/7-8 to fex 101 when moving to single homed FEX.
interface Ethernet1/7
  fex associate 100
  switchport mode fex-fabric
  channel-group 100
interface Ethernet1/8
  fex associate 100
  switchport mode fex-fabric
  channel-group 100
interface Ethernet1/9-16
  switchport trunk allowed vlan 1,20-25,160,200-201
  channel-group 1 mode active
interface Ethernet1/18
  switchport mode trunk
  switchport trunk allowed vlan 1,20-25,160,200-201
  channel-group 1 mode active
interface Ethernet1/19
  description link to core
  switchport mode trunk
!  switchport trunk native vlan 999
  switchport trunk allowed vlan 1,20-25,160
  channel-group 60 mode active
interface Ethernet1/20
  description link to core
  switchport mode trunk
!  switchport trunk native vlan 999
  switchport trunk allowed vlan 1,20-25,160
  channel-group 60 mode active
interface Ethernet2/1-4
interface mgmt0
  ip address 10.1.111.1/24
interface Ethernet100/1/1
  description ESX1 vmnic3
  switchport mode trunk
  spanning-tree port type edge trunk
interface Ethernet100/1/2
  description ESX2 vmnic3
  switchport mode trunk
  spanning-tree port type edge trunk
interface Ethernet100/1/3-48
line console
  exec-timeout 0
line vty
  exec-timeout 0
boot kickstart bootflash:/n5000-uk9-kickstart.5.0.2.N2.1.bin
boot system bootflash:/n5000-uk9.5.0.2.N2.1.bin
interface fc2/1-4
role network-admin
banner motd #LAB2 SAVED CONFIG #
# ip domain-lookup
ip domain-lookup
switchname N5K-2
logging event link-status default
service unsupported-transceiver
class-map type qos class-fcoe
!class-map type queuing class-fcoe
!  match qos-group 1
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
!class-map type network-qos class-fcoe
!  match qos-group 1
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
policy-map type network-qos jumbo
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9000
system qos
  service-policy type network-qos jumbo
fex 100
  pinning max-links 1
  description "FEX0100"
  name backend-storage
vlan 999
  name NATIVE
udld aggressive
port-channel load-balance ethernet source-dest-port
vpc domain 1
  role priority 2000
  peer-keepalive destination 10.1.111.1
vsan database
  vsan 20
interface Vlan1
!interface san-port-channel 256
!  channel mode active
!  switchport mode NP
!  switchport description To p3-mds9148-1
!  switchport trunk mode on
interface port-channel1
  switchport mode trunk
  vpc peer-link
!!! We currently do not have IP storage plugged directly into our 5Ks. !!! IP storage comes through core switches.
!interface port-channel70
!  description IP Storage Array
!  vpc 70
!  switchport access vlan 162
interface port-channel100
  description dual-homed 2148
  switchport mode fex-fabric
  vpc 100
  fex associate 100
interface Ethernet1/5
  description To ESX3 vmnic0
  switchport mode trunk
  switchport trunk allowed vlan 1,20-25,120,160,200-201
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  channel-group 5
interface Ethernet1/6
!!! Associate interfaces e1/7-8 to fex 101 when moving to single homed FEX.
interface Ethernet1/7
  fex associate 100
  switchport mode fex-fabric
  channel-group 100
interface Ethernet1/8
  fex associate 100
  switchport mode fex-fabric
  channel-group 100
interface Ethernet1/9-16
interface Ethernet1/17
  switchport mode trunk
  switchport trunk allowed vlan 1,20-25,160,200-201
  channel-group 1 mode active
interface Ethernet1/18
  switchport mode trunk
  switchport trunk allowed vlan 1,20-25,160,200-201
  channel-group 1 mode active
interface Ethernet1/19
  description link to core
  switchport mode trunk
!  switchport trunk native vlan 999
  switchport trunk allowed vlan 1,20-25,160
  channel-group 60 mode active
interface Ethernet1/20
  description link to core
  switchport mode trunk
!  switchport trunk native vlan 999
  switchport trunk allowed vlan 1,20-25,160
  channel-group 60 mode active
interface Ethernet2/1-4
interface mgmt0
  ip address 10.1.111.2/24
interface Ethernet100/1/1
  description ESX1 vmnic3
  switchport mode trunk
  spanning-tree port type edge trunk
interface Ethernet100/1/2
  description ESX2 vmnic3
  switchport mode trunk
  spanning-tree port type edge trunk
interface Ethernet100/1/3-48
line console
  exec-timeout 0
line vty
  exec-timeout 0
boot kickstart bootflash:/n5000-uk9-kickstart.5.0.2.N2.1.bin
boot system bootflash:/n5000-uk9.5.0.2.N2.1.bin
interface fc2/1-4
ESX
ESX1 and ESX2
esxcfg-vswitch -m 9000 vSwitch0
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "MGMT Network" vSwitch1
esxcfg-vswitch -v 111 -p "MGMT Network" vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vswitch -v 151 -p VMotion vSwitch1
esxcfg-vswitch -A NFS vSwitch1
esxcfg-vswitch -v 211 -p NFS vSwitch1
esxcfg-vswitch -A "CTRL-PKT" vSwitch1
esxcfg-vswitch -v 171 -p "CTRL-PKT" vSwitch1
esxcfg-vswitch -A "VMTRAFFIC" vSwitch1
esxcfg-vswitch -v 131 -p "VMTRAFFIC" vSwitch1
esxcfg-vswitch -A "Local LAN" vSwitch1
esxcfg-vswitch -v 24 -p "Local LAN" vSwitch1
vim-cmd hostsvc/net/refresh
vim-cmd /hostsvc/net/vswitch_setpolicy --nicteaming-policy='loadbalance_ip' vSwitch1
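To confirm the vSwitch layout after running the commands above, list the vSwitches and port groups; a sketch (run on each ESXi host):

~ # esxcfg-vswitch -l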
snmp-server user admin network-admin auth md5 0xcac2e012077bc51a340006d3fca7f363 priv 0xcac2e012077bc51a340006d3fca7f363 localizedkey
vrf context management
  ip route 0.0.0.0/0 192.168.1.254
vlan 1
vlan 131
  name VM-Client
vlan 151
  name vmotion
vlan 171
  name n1k_control_packet
vlan 211
  name NFS-VLAN
port-channel load-balance ethernet source-dest-ip-port-vlan
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type ethernet VM_UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 20,23,160,162
  mtu 9000
  channel-group auto mode on
  no shutdown
  ! The VMotion, NFS, and Control/Packet VLANs need to be system VLANs for availability.
  system vlan 23,160,162
  state enabled
port-profile type ethernet VM_UPLINK2
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 20,23,160
  mtu 9000
  channel-group auto mode on
  no shutdown
  system vlan 23,160
  state enabled
port-profile type vethernet MGMT
  vmware port-group
  switchport mode access
  switchport access vlan 1
  no shutdown
  system vlan 1
  state enabled
port-profile type vethernet VMOTION
  vmware port-group
  switchport mode access
  switchport access vlan 23
  no shutdown
  system vlan 23
  state enabled
port-profile type vethernet STORAGE
  vmware port-group
  switchport mode access
  switchport access vlan 162
  no shutdown
  system vlan 162
  state enabled
port-profile type vethernet N1KV_CONTROL_PACKET
  vmware port-group
  switchport mode access
  switchport access vlan 160
  no shutdown
  system vlan 160
  state enabled
port-profile type vethernet VM_CLIENT
  vmware port-group
  switchport mode access
  switchport access vlan 20
  no shutdown
  state enabled
vdc VSM-P id 1
  limit-resource vlan minimum 16 maximum 2049
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource vrf minimum 16 maximum 8192
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 32 maximum 32
  limit-resource u6route-mem minimum 16 maximum 16
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8
interface port-channel1
  inherit port-profile VM_UPLINK
interface port-channel2
  inherit port-profile VM_UPLINK
interface port-channel3
  inherit port-profile VM_UPLINK
interface mgmt0
  ip address 192.168.1.200/24
interface Vethernet1
  inherit port-profile N1KV_CONTROL_PACKET
  description Nexus1000V-P,Network Adapter 1
  vmware dvport 164 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.5699.000B
interface Vethernet2
  inherit port-profile N1KV_CONTROL_PACKET
  description Nexus1000V-P,Network Adapter 3
  vmware dvport 165 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.5699.000D
interface Vethernet3
  inherit port-profile VMOTION
  description VMware VMkernel,vmk1
  vmware dvport 129 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.567F.90F4
interface Vethernet4
  inherit port-profile N1KV_CONTROL_PACKET
  description Nexus1000V-S,Network Adapter 1
  vmware dvport 162 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.5699.0011
interface Vethernet5
  inherit port-profile N1KV_CONTROL_PACKET
  description Nexus1000V-S,Network Adapter 3
  vmware dvport 163 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.5699.0013
interface Vethernet6
  inherit port-profile VMOTION
  description VMware VMkernel,vmk1
  vmware dvport 128 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.5671.25BC
interface Vethernet7
  inherit port-profile VM_CLIENT
  description Server 2003R2-Clone,Network Adapter 1
  vmware dvport 192 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.5699.0007
interface Vethernet8
  inherit port-profile VM_CLIENT
  description Server-2003R2,Network Adapter 1
  vmware dvport 193 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.5699.0005
interface Vethernet9
  inherit port-profile VMOTION
  description VMware VMkernel,vmk1
  vmware dvport 130 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
  vmware vm mac 0050.567A.956B
interface Ethernet6/1
  inherit port-profile VM_UPLINK
interface Ethernet6/2
  inherit port-profile VM_UPLINK
interface Ethernet7/5
  inherit port-profile VM_UPLINK
interface Ethernet7/6
  inherit port-profile VM_UPLINK
interface Ethernet8/1
  inherit port-profile VM_UPLINK
interface Ethernet8/2
  inherit port-profile VM_UPLINK
interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4.bin sup-1
boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4.bin sup-2
boot system bootflash:/nexus-1000v-mz.4.2.1.SV1.4.bin sup-2
svs-domain
  domain id 10
  ! Make sure these VLANs are created and designated as system VLANs in the UPLINK Ethernet profiles.
  control vlan 160
  packet vlan 160
  svs mode L2
svs connection vcenter
  protocol vmware-vim
  remote ip address 192.168.1.10 port 80
  vmware dvs uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe" datacenter-name Lab
  connect
vnm-policy-agent
  registration-ip 0.0.0.0
  shared-secret **********
  log-level
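With the VSM configuration in place, the connection to vCenter and the module state can be verified from the Nexus 1000V CLI. These are standard Nexus 1000V show commands; the exact output depends on how many VEMs have joined the switch:

```
n1000v# show svs connections
n1000v# show module
n1000v# show port-profile usage
```

The vCenter connection should show an operational status of Connected, and each ESX host running a VEM should appear as a module.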
OTV
Cisco Nexus 5010 A - N5K-1
no feature vpc
int port-channel 1
  shutdown
int e1/10
interface po14
  shutdown
vlan 131,151,171,211,1005
  no shut
int e1/19
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 131,151,171,211,1005
  no shutdown
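Before moving on, confirm that the OTV VLANs are active and carried on the trunk toward the core. A quick verification sketch using standard NX-OS show commands:

```
N5K-1# show vlan brief
N5K-1# show interface ethernet 1/19 trunk
```

VLANs 131, 151, 171, 211, and 1005 should be active and forwarding on the e1/19 trunk.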
N7K-1
vlan 131,151,171,211,1005
  no shut
spanning-tree vlan 131,151,171,211,1005 priority 4096
!int e1/14
int e1/22
!int e1/30
  switchport
  switchport mode trunk
  mtu 9216
  no shutdown
  switchport trunk allowed vlan 131,151,171,211,1005
int e 1/<uplink>
  no shut
feature ospf
router ospf 1
  log-adjacency-changes
interface loopback0
  ! ip address 10.1.0.11/32
  ip address 10.1.0.21/32
  ! ip address 10.1.0.31/32
  ip router ospf 1 area 0.0.0.0
!interface e1/10
interface e1/18
!interface e1/26
  mtu 9042
  ! ip address 10.1.11.3/24
  ip address 10.1.21.5/24
  ! ip address 10.1.31.7/24
  ip ospf network point-to-point
  ip router ospf 1 area 0.0.0.0
  ip igmp version 3
  no shutdown
feature otv
otv site-vlan 1005
otv site-identifier 0x1
interface Overlay 1
  ! otv control-group 239.1.1.1
  otv control-group 239.2.1.1
  ! otv control-group 239.3.1.1
  ! otv data-group 239.1.2.0/28
  otv data-group 239.2.2.0/28
  ! otv data-group 239.3.2.0/28
  ! otv join-interface Ethernet1/10
  otv join-interface Ethernet1/18
  ! otv join-interface Ethernet1/26
  otv extend-vlan 131,151,171,211
  no shutdown
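After the overlay interface comes up, OTV operation on N7K-1 can be checked with the standard NX-OS OTV show commands (addresses and interfaces vary per pod):

```
N7K-1# show otv overlay 1
N7K-1# show otv vlan
N7K-1# show ip ospf neighbors
```

The overlay should be up with the join interface and control group listed, and the extended VLANs should move to the active state once an adjacency with the remote site forms.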
N7K-2
vlan 131,151,171,211,1005
  no shut
spanning-tree vlan 131,151,171,211,1005 priority 8192
!int e1/16
int e1/24
!int e1/32
  switchport
  switchport mode trunk
  mtu 9216
  no shutdown
  switchport trunk allowed vlan 131,151,171,211,1005
int e 1/<uplink>
  no shut
feature ospf
router ospf 1
  log-adjacency-changes
interface loopback0
  ! ip address 10.1.0.12/32
  ip address 10.1.0.22/32
  ! ip address 10.1.0.32/32
  ip router ospf 1 area 0.0.0.0
!interface e1/12
interface e1/20
!interface e1/28
  mtu 9042
  ! ip address 10.1.14.4/24
  ip address 10.1.24.6/24
  ! ip address 10.1.34.8/24
  ip ospf network point-to-point
  ip router ospf 1 area 0.0.0.0
  ip igmp version 3
  no shutdown
feature otv
otv site-vlan 1005
otv site-identifier 0x2
interface Overlay 1
  ! otv control-group 239.1.1.1
  otv control-group 239.2.1.1
  ! otv control-group 239.3.1.1
  ! otv data-group 239.1.2.0/28
  otv data-group 239.2.2.0/28
  ! otv data-group 239.3.2.0/28
  ! otv join-interface Ethernet1/12
  otv join-interface Ethernet1/20
  ! otv join-interface Ethernet1/28
  otv extend-vlan 131,151,171,211
  no shutdown
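Once both sites are configured, the adjacency between the edge devices confirms end-to-end OTV connectivity. A final check from either N7K, using standard NX-OS OTV commands:

```
N7K-2# show otv adjacency
N7K-2# show otv site
```

The remote edge device should appear in the adjacency table; MAC addresses learned across the overlay can then be viewed with show mac address-table.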
18 REFERENCES
VMware Fibre Channel SAN Configuration Guide
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_san_cfg.pdf
Cisco Nexus 1000V Port Profile Configuration Guide, Release 4.2(1)SV1(4)
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4/port_profile/configuration/guide/n1000v_portprof_4system.html#wpxref14373
NOW (NetApp on the Web) site
http://now.netapp.com
NetApp FAS2020 Storage Controller
http://now.netapp.com/NOW/knowledge/docs/hardware/hardware_index.shtml#Storage%20appliances%20and%20V-series%20systems/gFilers
Cisco Nexus 5010 Switch
www.cisco.com/en/US/products/ps11215/index.html
Cisco Unified Computing System
www.cisco.com/en/US/netsol/ns944/index.html
Cisco Nexus 1010 Virtual Services Appliance
www.cisco.com/en/US/products/ps10785/index.html
VMware vSphere
www.vmware.com/products/vsphere/
VLAN ID for NFS traffic
Network address for NFS traffic
VLAN ID for management traffic
VLAN ID for VMotion traffic
Network address for VMotion traffic
VLAN ID for the Cisco Nexus 1000v packet and control traffic
VLAN ID for native VLAN
VLAN ID for VM traffic
Default password
DNS server name
Domain name suffix
VSAN ID for fabric A
21:00:00:c0:dd:14:73:2f