
Cisco Nexus 1000V: Basic Configuration Lab v1

dCloud: The Cisco Demo Cloud

Last Updated: 15-MAR-2014

About This Lab


The next generation virtual datacenter from VMware will ensure efficient collaboration between network administrators and
VMware administrators with the use of vNetwork Distributed Switches.
By replacing an existing virtual switch or VMware's DVS with the Cisco Nexus 1000V and the familiar Cisco NX-OS, the Cisco Nexus
1000V preserves the traditional boundaries between server and network administrators, while allowing network administrators to also
manage virtual switches. This lab will augment your knowledge of the Cisco Nexus 1000V with a considerable amount of
hands-on experience.

Lab Summary
In this self-paced lab, participants will discover how the Cisco Nexus 1000V software switch for VMware vSphere enables
organizations to unleash the true power and flexibility of server virtualization, by offering a set of network features, management
tools, and diagnostic capabilities consistent with the customer's existing physical Cisco network infrastructure and enhanced for the
virtual world.
Some of the features of the Cisco Nexus 1000V that will be covered include:

Policy based virtual machine (VM) connectivity

Mobility of security and network properties

Non-disruptive operational model for both Server and Network administrators

In the highly agile VMware environment, the new Cisco Virtual Network Link (VN-Link) technology on the Nexus 1000V will
integrate with VMware's vNetwork Distributed Switch framework to create a logical network infrastructure across multiple physical
hosts that will provide full visibility, control, and consistency of the network.

Key Benefits of the Cisco Nexus 1000V


Policy-based virtual machine (VM) connectivity

Provides real-time coordinated configuration of network and security services

Maintains a virtual machine-centric management model, enabling the server administrator to increase both efficiency and
flexibility

Mobile VM security and network policy

Policy moves with a virtual machine during live migration ensuring persistent network, security, and storage compliance

Ensures that live migration won't be affected by disparate network configurations

Improves business continuance, performance management, and security compliance

Non-disruptive operational model for your server virtualization and networking teams

Aligns management and operations environment for virtual machines and physical server connectivity in the data center


Maintains the existing VMware operational model

Reduces total cost of ownership (TCO) by providing operational consistency and visibility throughout the network

Lab Requirements
The table below outlines the requirements for this preconfigured lab.
Table 1.       Lab Requirements

Required: None for this release
Optional: Laptop, Cisco AnyConnect

Lab Configuration
This lab contains preconfigured users and components to illustrate the scripted scenarios and features of this solution. All access
information needed to complete this lab is located in the Topology and Servers menus of your active Cisco dCloud session.

Topology Menu. Click on any server in the topology to display the available server options and credentials.

Servers Menu. Click on the icon next to any server name to display the available server options and credentials.

Lab Preparation
Follow the steps below to schedule and configure your lab environment.
1. Browse to dcloud.cisco.com, choose the location closest to you, and then log in with your Cisco.com credentials.

2. Schedule a session. [Show Me How]

3. Test your bandwidth from the lab location before performing any scenario. [Show Me How]

4. Verify your session has a status of Active under My Demonstrations on the My Dashboard page in the Cisco dCloud UI.

5. It may take up to 10 minutes for your lab to become active.

Access the workstation named wkst1 located at 198.18.133.36 and log in using the following credentials: Username:
dcloud\demouser, Password: C1sco12345.

Option 1: Use the Cisco dCloud Remote Desktop client with HTML5. [Show Me How] Accept any certificates or warnings.

Option 2: Use Cisco AnyConnect [Show Me How] and the local RDP client on your laptop [Show Me How]. Accept any certificates or warnings.


Scenario 1. Hands-On Lab: Introduction to the Cisco Nexus 1000V with L3 Mode
Objective
The goal of this guide is to give you hands-on experience with a subset of the features of the Cisco Nexus 1000V
Distributed Virtual Switch (DVS). The Cisco Nexus 1000V introduces many new features and capabilities. This lab will give
you an overview of these features and introduce you to the main concepts.

Cisco dCloud
This lab is hosted in Cisco's cloud-based hands-on and demo lab. Within this cloud, you are provided with your personal dedicated
virtual pod (vPod). You connect via RDP to a dCloud workstation within this vPod and walk through the lab steps below.
All necessary tools to complete this lab can be found on the dCloud workstation. Refer to the Lab Preparation section for details
on how to reach the Cisco dCloud workstation within your lab session.
Figure 1.       Logical Lab Topology

The username and password to access the Cisco dCloud workstation of this vPod are listed below:

User Name: dcloud\demouser

Password: C1sco12345

Lab Exercises
This lab was designed to be completed in sequential order. As some steps rely on the successful completion of previous steps,
complete each step before moving on to the next.
The individual lab steps are:

Cisco Nexus 1000V deployment

Attaching Virtual Machines to the Cisco Nexus 1000V

vMotion and Visibility

Policy-based Virtual Machine connectivity


Network Admin vs. Server Admin


One of the key features of the Cisco Nexus 1000V is the non-disruptive operational model for both Network and Server
administrators. This means that in a real-world deployment of this product, the network administrator and the VMware
administrator each have their own management perspective with different views and tools.
This lab purposely exposes you to both of these perspectives: the Network administrator perspective with the Cisco NX-OS
Command Line Interface (CLI) as the primary management tool, and the VMware administrator perspective with vCenter as the
primary management tool. Even if you will not be exposed to "the other side" during your regular job, it is a good idea to
understand the overall operation and handling of the Nexus 1000V.

Lab Topology and Access


The lab represents a typical VMware setup with two physical ESX hosts offering services to virtual machines, and a vCenter to
coordinate this behavior. Furthermore, a Cisco Nexus 1000V is used to provide network services to the two physical ESX
hosts as well as to the virtual machines residing on them.

Logical Topology
The diagram below represents the logical lab setup of a vPod as it pertains to the Cisco Nexus 1000V.
Figure 2.       Logical Lab Topology

Your pod consists of:

Two physical VMware ESX servers. They are called vesx1.dcloud.cisco.com and vesx2.dcloud.cisco.com.

One VMware vCenter, reachable at vcva.dcloud.cisco.com via the vSphere client.

One Cisco Nexus 1000V Virtual Supervisor Module (hosted on one of the ESXi servers), reachable at
vsm.dcloud.cisco.com via SSH.

One pre-configured upstream switch, to which you do not have access.


Access
During this lab, configuration steps need to be performed on the VMware vCenter as well as on the Cisco Nexus 1000V Virtual
Supervisor Module (VSM) within the Cisco dCloud virtual pod.
Use the usernames and passwords listed below for accessing your vPod's elements.
Table 2.       Credentials

vPod Element              Login              Password      Notes
VMware vCenter            dcloud\demouser    C1sco12345    Use the vSphere client feature "Use Windows session credentials" for easier login
Cisco Nexus 1000V VSM     admin              C1sco12345

The VMware vCenter is accessible through the vSphere Client application. The VSM is accessible through an SSH connection.
All necessary applications used within this lab are available on the desktop of the Cisco dCloud workstation to which you are
connected via Remote Desktop Protocol (RDP).
After accessing the Cisco dCloud workstation, you may need to click Desktop before proceeding with this lab.

Connecting via the vSphere Client


Start the VMware vSphere Client by double-clicking on the VMware vSphere Client icon on the desktop.

The following figure shows the vSphere Client login screen.


Figure 3.       vSphere Client Login Window

Please tick Use Windows session credentials and click on Login for vSphere Client authentication.
After a successful login, you will see the following vSphere Client application screen.
Figure 4.       vSphere Client Application Screen

Deployment
While the Nexus 1000V has already been registered in vCenter, it is still necessary to add the different ESX hosts to
the Nexus 1000V. To allow the necessary Virtual Ethernet Modules (VEMs) of the Cisco Nexus 1000V to be installed automatically on
the ESX hosts, the VEM binaries have already been pre-installed on the ESXi hosts. In production environments, it is typically
recommended to use VMware Update Manager (VUM); in a vSphere setup, VUM is used to stage and apply patches and
updates to ESX hosts.
The goal of this step is to add the two hosts to the Nexus 1000V.
In this lab, you will:


Create an uplink port-profile and apply it to the uplink interfaces of the ESX hosts

Add the two hosts to the Nexus 1000V Switch

Lab Setup
In order to add a new host to the Distributed Switch, we need to create a port-profile to enable the communication between the
Virtual Supervisor Module and the different Virtual Ethernet Modules. On top of that, we want to enable the vMotion traffic.
Each pod is composed of two ESX hosts, one Virtual Supervisor Module, and one vCenter. Both ESX hosts are connected to an
upstream switch using four different NICs. Here are the different types of traffic flowing through each interface:

vmnic0: Management traffic and vMotion

vmnic4: NFS storage traffic

vmnic1, vmnic2, and vmnic3: VM traffic

Connect to the Cisco Nexus 1000V Virtual Supervisor Module (VSM)


Use the following credentials to connect via SSH to the Cisco Nexus 1000V Virtual Supervisor Module (VSM). The SSH client
software PuTTY can be found on the taskbar of your dCloud workstation. It has been pre-configured to connect to the correct
VSM, vsm.dcloud.cisco.com.

Hostname: vsm.dcloud.cisco.com

Username: admin

Password: C1sco12345

Nexus 1000V Environment


In this lab, the Nexus 1000V will be running in layer 3 mode for the communication between the VSM and the VEM(s). With the
Nexus 1000V plug-in already registered to the vCenter server, the following output shows the SVS connection and the SVS
domain of the Nexus 1000V.
Note: SVS stands for Server Virtualization Switch
SVS Connection of Nexus 1000V
vsm# show svs connections
connection vcenter:
ip address: 198.18.133.211
remote port: 80
protocol: vmware-vim https
certificate: default
datacenter name: dCloud-DC
admin: n1kUser(user)
max-ports: 8192
DVS uuid: 1d 55 01 50 ae 11 42 64-87 a0 13 8b ff ef cc 68
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 5.5.0 build-1312298
vc-uuid: 67461318-8FFD-4EC1-8638-62D32F7285D7
ssl-cert: Authenticated
vsm#
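For reference only, the SVS connection shown above corresponds to a configuration along the following lines. It is already in place in this lab vPod, so there is no need to enter it again:

vsm# configure terminal
vsm(config)# svs connection vcenter
vsm(config-svs-conn)# protocol vmware-vim
vsm(config-svs-conn)# remote ip address 198.18.133.211 port 80
vsm(config-svs-conn)# vmware dvs datacenter-name dCloud-DC
vsm(config-svs-conn)# connect
vsm(config-svs-conn)# end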

SVS Domain of Nexus 1000V


vsm# show svs domain
SVS domain config:
Domain id: 100
Control vlan: NA
Packet vlan: NA
L2/L3 Control mode: L3
L3 control interface: mgmt0
Status: Config push to VC successful.
Control type multicast: No
vsm#

Note: Control VLAN and Packet VLAN are not used in L3 mode.

Note: The domain is set to L3 and the control interface is using the interface mgmt0 of the VSM.
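Similarly, the L3 control mode shown above corresponds to a domain configuration along these lines (again, this is already configured in the lab and is shown only as a reference sketch):

vsm# configure terminal
vsm(config)# svs-domain
vsm(config-svs-domain)# domain id 100
vsm(config-svs-domain)# no control vlan
vsm(config-svs-domain)# no packet vlan
vsm(config-svs-domain)# svs mode L3 interface mgmt0
vsm(config-svs-domain)# end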

Creating an uplink port profile for the Management Traffic


The Nexus 1000V has the capability to communicate between the VSM and VEM(s) via layer 3. The ESXi management interface
resides on VLAN 10. Since we are not using layer 2 for the VSM to VEM communication, the Control and Packet VLANs are not
needed and will default to VLAN 1.

Management: 10: VLAN used for management traffic of ESXi hosts, VSM management and vMotion

Virtual Machine: 11: VLAN used for the application traffic

NFS: 12: VLAN used for NFS storage access

Private VLAN Secondary VLAN: 111: Secondary VLAN for the Private VLAN lab step

Note: Management traffic and vMotion traffic do not typically share the same interface in a production environment, but for this
lab, we are going to utilize the management interface for vMotion traffic as well.
Specify the VLANs for later usage.
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# vlan 10
vsm(config-vlan)# name Management-vMotion
vsm(config-vlan)# vlan 11
vsm(config-vlan)# name Data-Network
vsm(config-vlan)# vlan 12
vsm(config-vlan)# name NFS
vsm(config-vlan)# vlan 111
vsm(config-vlan)# name PVLAN-Secondary
vsm(config-vlan)# end

Note: You can always execute the command show vlan in order to verify the creation of VLANs.
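For example, an abbreviated check right after the configuration above could look similar to the following (the output is illustrative; the exact formatting and the Ports column vary by release and pod):

vsm# show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active
10   Management-vMotion               active
11   Data-Network                     active
12   NFS                              active
111  PVLAN-Secondary                  active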
Creating Uplink Port-Profiles
In this part, you will learn how to configure a port-profile that will be applied to an uplink interface. In this lab, all of the interfaces
will be managed by the Nexus 1000V. With three (3) types of traffic through the network interfaces, we will create three separate
uplink port profiles.


A port-profile can be compared to a template that contains all the networking configuration that will be applied to different
interfaces. If the port-profile is configured as type ethernet, it is intended to be applied to a physical interface. Port-profiles of type
vethernet, which is the default type, are applied to Virtual Machine and VMkernel interfaces.
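As a minimal sketch of the difference (the profile names example-uplink and example-vm are illustrative only and are not used in this lab), the type keyword decides where a profile can later be attached:

vsm(config)# port-profile type ethernet example-uplink
vsm(config-port-prof)# exit
vsm(config)# port-profile type vethernet example-vm
vsm(config-port-prof)# exit

The first profile can only be selected for physical vmnics (it appears as a DVUplink port group in vCenter), while the second appears as a regular port group for VM vNICs and VMkernel ports.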
Uplink Port-Profile for Management
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# port-profile type ethernet n1kv_mgmt-uplink
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 10
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# system vlan 10
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# end

Note: Since the n1kv_mgmt-uplink port-profile carries the traffic used for communication between the VSM and the VEMs, VLAN 10
must be configured as a system vlan.
Uplink Port-Profile for NFS Storage
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# port-profile type ethernet nfs-uplink
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 12
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# system vlan 12
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# end

Note: Since the nfs-uplink port-profile carries the shared NFS storage traffic, VLAN 12 must be configured as a system vlan,
which is critical for shared storage access.
Uplink Port-Profile for VM Data
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# port-profile type ethernet data-uplink
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 11
vsm(config-port-prof)# channel-group auto mode on mac-pinning
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# end

Note: The data-uplink port-profile will be extended with a private VLAN configuration in a later lab step; that configuration is
explained in the corresponding section. Keep in mind that certain settings cannot be altered once an uplink port-profile is in use,
which is why this profile is configured separately from the other uplinks.
A few special characteristics of the uplink port-profiles should be pointed out at this stage:

type ethernet: This configuration line means that the corresponding port-profile can only be applied to a physical Ethernet
port. This is also indicated through a special icon in the vSphere Client.


channel-group auto mode on mac-pinning: This configuration line activates the virtual port-channel host mode (vPC-HM) feature
with MAC pinning. It allows the Nexus 1000V to form a port-channel even with upstream switches that do not support multi-chassis EtherChannel.

Congratulations, you just configured your first port-profile!

Creating Port-Profile for VM Interfaces and VMkernels


In this section, we will create the various port-profiles that will be utilized by the Virtual Machines and VMkernels. One of the critical
port-profiles is the one used by the management interface of the ESXi server; this interface is also used for communication
from the VEM to the VSM. Below are the steps to create this port-profile and the other port-profiles for the lab.
n1kv_mgmt_vlan Port-Profile for VSM to VEM Communication
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# port-profile n1kv_mgmt_vlan
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# capability l3control
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 10
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# system vlan 10
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# end

Note: Since this port-profile will be utilized by the VEM to communicate with the VSM, it must be configured as a system
vlan and must carry capability l3control.
IMPORTANT: If you see a warning similar to the one below, please ignore it:
Warning: Port-profile 'n1kv_mgmt_vlan' is configured with 'capability l3control'. Also, configure the corresponding access vlan as a system vlan in
nfs_vlan Port-Profile for NFS VMkernels
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# port-profile nfs_vlan
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 12
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# system vlan 12
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# end

Note: Since this port-profile will be used for IP-based storage (specifically NFS), it must be configured as a system vlan.
vsm-control-packet Port-Profile for the VSM VM Control and Packet Interfaces
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# port-profile vsm-control-packet
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 1
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# end

Note: This port-profile will be assigned to the VSM's control and packet interfaces. Since in this lab we do not have a secondary
VSM, the control interface of the VSM can be set to any VLAN (in this case VLAN 1). A VLAN is required for the
control interface of the VSM so that the primary and secondary VSMs can communicate with each other.
vsm-management Port-Profile for the VSM VM Management Interface
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# port-profile vsm-mgmt0
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 10
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# system vlan 10
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# end

Note: Since the VSM is a VM residing on a host that will become a VEM, it is critical that the VSM management interface is
configured as a system vlan.
VM-Client Port-Profile for Client Virtual Machines Data Traffic
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vsm(config)# port-profile VM-Client
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 11
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# end

With this configuration completed, it is a best practice to copy the running configuration to the startup configuration, as shown
below.
vsm# copy running-config startup-config
[########################################] 100%

IMPORTANT: Before you continue to the next step, please double-check that the configuration of the port-profiles is correct. Errors in
the configuration might prevent you from accessing the Nexus 1000V VSM management interface. Check the configuration by using the
command:
vsm(config)#show run port-profile <NAME OF PORT PROFILE>

Adding an ESX host to the Distributed Virtual switch


We will now add the two ESX hosts of your pod to the Nexus 1000V DVS and apply the port-profiles that we just created to the
uplink interfaces of the different hosts.
Utilizing traditional non-distributed vSwitches requires multiple manual steps to keep host configurations consistent and is therefore time
consuming and error-prone. Consistent network configuration across hosts is required for successful vMotion.


Adding a host to the Distributed Virtual Switch is done by assigning some or all of the physical NICs of an ESX host to the DVS
and assigning the previously created uplink port-profiles to these NICs.

1. Open the VMware vSphere Client application.

2. Navigate to the Networking view by clicking on the Home > Inventory > Networking tab. To reach this view, click on the arrow
to the right of Inventory and pick Networking from the list that is displayed.

3. Right-click on your VSM and choose Add Host...

4. You are presented with all hosts that are part of the data center but not yet part of the DVS. The VEM component has already
been pre-installed on the ESX hosts. An alternative would be the use of VMware Update Manager (VUM), which would
make the integration of the ESX hosts into the Nexus 1000V completely automated and transparent.

5. Select the host(s) and the NICs that will be assigned to the DVS. For each of the vmnics, select the appropriate uplink
port-profile from the DVUplink port group. Assign the uplink port-profiles that you created for the host
vesx1.dcloud.cisco.com as listed below, and then click Next.

vmnic0: n1kv_mgmt-uplink

vmnic4: nfs-uplink

vmnic1, vmnic2, and vmnic3: data-uplink


Note: In real-life scenarios, uplink port-profiles are configured by the network administrator to match the settings of the physical
upstream switches. This ensures that there is no misconfiguration between the physical network and the virtual network. It also
enables network administrators to use features for this uplink that are available on other Cisco switches (e.g. QoS, EtherChannel).
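For illustration only: the upstream switch in this lab is pre-configured and not accessible, but a matching access-port configuration on a physical Cisco switch might look roughly like the following (interface numbers are hypothetical):

interface Ethernet1/1
  description vesx1 vmnic0 - management and vMotion uplink
  switchport mode access
  switchport access vlan 10
interface Ethernet1/2
  description vesx1 vmnic1 - VM data uplink (mac-pinning, no port-channel needed upstream)
  switchport mode access
  switchport access vlan 11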
6. The next screen offers you the possibility to migrate existing VMkernel interfaces to the Nexus 1000V. Since all of the interfaces,
including the VMkernels, will be migrated to the Nexus 1000V, the management interface of the ESXi server must also be
migrated, as it will be used for the layer 3 communication from the VEM to the VSM. Below are the two VMkernels that will
be migrated to the port-profiles created in the previous step. In the field Destination port group select:

vmk0: n1kv_mgmt_vlan

vmk4: nfs_vlan

7. Click Next.

8. Similar to the previous screen, this screen allows you to migrate existing Virtual Machine networks to the Nexus 1000V.
Since the VSM VM resides on this host, select the check box Migrate virtual machine networking. Using the port-profiles
created in the previous steps for the VM interfaces, select the appropriate port-profile under the Destination port group
column.

VSM VM:

Network Adapter 1: vsm-control-packet

Network Adapter 2: vsm-mgmt0

Network Adapter 3: vsm-control-packet


Windows 7 - A VM:

Network Adapter 1: VM-Client

9. Click Next.

10. You are presented with an overview of the uplink ports that are created. By default, VMware creates 16 uplink ports per host
and leaves it to the Nexus 1000V VSM to map them to useful physical ports.

11. Acknowledge these settings by clicking Finish. After a few seconds, the ESXi host vesx1.dcloud.cisco.com will appear in
the Hosts view of the Distributed Virtual Switch.

IMPORTANT: Ignore any Alert on the Host status.

Repeat the same steps to add the host vesx2.dcloud.cisco.com to the Cisco Nexus 1000V. In the step where you migrate existing
VMs, note that the server vesx2.dcloud.cisco.com hosts two VMs: Windows 7 - B and WebServer - A. For Network Adapter 1 of
both VMs, choose the port-profile (port group) VM-Client.

Network Administrator view of Virtual Machine connectivity


Now that the Nexus 1000V is up and ready, you can take some time to explore more details of the virtual switch.
1. Connect to the Cisco Nexus 1000V Virtual Supervisor Module through an SSH connection, using PuTTY. Use the provided
credentials (admin/C1sco12345).

2. Issue the command show module.


In the output of the show module command, you can see different familiar components:

Module 1 and module 2 are reserved for the Virtual Supervisor Modules. The Cisco Nexus 1000V supports a model where
the supervisors run as an active/standby high-availability pair. In this lab, only the primary supervisor was
installed, which is why you do not see module 2.

Module 3 and module 4 represent Virtual Ethernet Modules (VEMs). As shown at the bottom of the output, each VEM
corresponds to a physical ESX host, identified by the server IP address and name. This mapping of a virtual line card to a
physical server eases the communication between the network and server teams.

3. Let us have a look at the interfaces next by using the show interface brief command.


The output of the command show interface brief shows you the different interface types that are used within the Cisco Nexus
1000V:

mgmt0: This interface is used for out-of-band management and corresponds to the second vNIC of the VSM.

Ethernet interfaces: These are physical Ethernet interfaces and correspond to the physical NICs of the ESX hosts. The
numbering scheme lets you easily identify the corresponding module and NIC.

Port-channels: Ethernet interfaces can be bound manually or automatically through vPC-HM into port-channels. When
using the uplink port-profile configuration mac-pinning, there is no need to configure a traditional port-channel
on the upstream switch(es). Nonetheless, a virtual port-channel is still formed on the Nexus 1000V (see the sketch after this list).


Veths: Virtual Ethernet Interfaces connect to VMs or VMkernels and are independent of the host that the VM runs on. The
numbering scheme therefore does not include any module information. The Veth identifier remains with the VM during its
entire lifetime even while the VM is powered down.
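As referenced in the Port-channels bullet above, a quick way to inspect the port-channels that mac-pinning creates on each VEM is shown below (channel and interface numbers will differ in your pod):

vsm# show port-channel summary
vsm# show interface port-channel 1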

4. Verify on the Nexus 1000V CLI that the corresponding Virtual Ethernet interfaces have been created for the two virtual machines
by issuing the command show interface virtual.

The output of the above command gives you a mapping of each VM name to its Veth interface.

5. On top of that, the Network Administrator can see at any given time which VM is in use and which port-profile is attached to it
by using the show port-profile usage command.


Note: The Network administrator can manage the virtual Ethernet interfaces shown here the same way as physical interfaces on a
Cisco switch.
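For example, the standard NX-OS interface commands work on a Veth as well (the Veth number varies per pod; this is an optional check, not a required lab step):

vsm# show interface vethernet 1
vsm# show running-config interface vethernet 1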

vMotion and Visibility


The next section demonstrates the configuration of the VMKernel vMotion interface in order to perform a successful vMotion. In the
second step the continuous visibility of virtual machines during vMotion is demonstrated.
This lab step consists of the following:

Configure a vMotion network connection

Perform a vMotion and note the veth mapping


vMotion Configuration
You will now create a VMkernel interface that will be used for vMotion. vMotion is a well-known feature of VMware which allows
users to move a Virtual Machine from one physical host to another while the VM remains operational. This feature is also called
live migration.
Note: You will be configuring both hosts. For host vesx1, use the IP address 198.18.133.41, and for host vesx2, the IP address
198.18.133.42. For both hosts, choose the Subnet Mask 255.255.192.0. Do not change the VMkernel Default Gateway.
In this step, you will first provision the port-profile on the VSM using the PuTTY client, and then configure the VMkernel vMotion interface on both servers in the vSphere Client.
1. The first step is to provision a port-profile for the vMotion interface. Let's call this port-profile vMotion.

vsm# configure terminal


vsm(config)# port-profile vMotion
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 10
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# state enabled

2. Open the vSphere Client. Go to the Home > Inventory > Hosts and Clusters tab and choose the first server, vesx1, of your pod.

3. Click on the Configuration tab and, within the Hardware area, on Networking. Under View, choose Distributed Virtual Switch.

4. In order to add the VMkernel vMotion interface, choose Manage Virtual Adapters... and afterwards click Add within the
Manage Virtual Adapters dialog. In the Add Virtual Adapter wizard, choose to create a New Virtual Adapter, and then click the Next button.


5. As the Virtual Adapter Type you can only choose VMkernel. Click Next.

6. Choose vMotion as the port group name. Also, check the box next to Use this virtual adapter for vMotion to enable
vMotion on this interface. Click Next.

7. Configure the IP settings for the vMotion interface.


8. Before finishing the wizard, you are presented with an overview of your settings. Verify the correctness of these settings and
choose Finish.

9. You have now successfully added the VMkernel vMotion interface. Close the Manage Virtual Adapters window.

Congratulations! You successfully configured the VMkernel vMotion interface leveraging the Cisco Nexus 1000V.

10. Repeat steps 3 to 8 to configure the VMkernel vMotion interface on the second host, vesx2. For step 7, use the IP address
198.18.133.42 and the Subnet Mask 255.255.192.0 when configuring host vesx2.

Network Administrator's view of vMotion


An important attribute of the Nexus 1000V with regard to vMotion is that the VM keeps its virtual connection
identifier throughout the vMotion process. This way, a vMotion does not influence the interface policies, network management
capabilities, or traceability of a VM from the perspective of the Network Administrator. Instead, the Virtual Machine keeps its Veth
identifier across the vMotion process.
Before vMotioning your pod's Virtual Machine, make note of the current Veth for the given Virtual Machine.

1. Prior to the vMotion, perform a lookup of the Virtual Interfaces in use with the command show interface virtual in the PuTTY
console.


2. Make note of the associated Veth port, the Module, and the ESXi hostname currently associated with the Virtual Machine
(Windows 7 - A).

Perform a vMotion
Test your previous vMotion configuration by performing a vMotion.

1. In the vSphere Client, go to the Home > Inventory > Hosts and Clusters tab.

2. Drag and drop the Virtual Machine Windows 7 - A from the first ESXi host of your setup to your second ESXi host,
vesx2.dcloud.cisco.com.

3. Walk through the vMotion wizard that appears, leaving the default settings and clicking Next and finally Finish.

4. Wait for the vMotion to complete successfully.

IMPORTANT: If unsuccessful, verify that the IP addresses were configured correctly.


5. In vCenter, open the Windows 7 - A console and verify that the Virtual Machine still has network connectivity by pinging
Windows 7 - B at 198.18.5.12. Log in as demouser with password C1sco12345 and issue the command ping 198.18.5.12.

Verify the new Network Administrator's view of the Virtual Machine


After a successful vMotion, the expected behavior is that the Virtual Machine can still be seen and managed by the network
administrator through the same virtual Ethernet port. Verify that this is the case.

1. Again, use the show interface virtual command to perform a lookup of the Virtual Interfaces in use.

Congratulations! You are now able to trace a VM moving across physical ESXi hosts via vMotion. The resulting output shows you
the current mapping of a Veth port to the Virtual Machine. By comparing the output before and after the vMotion process, you will
notice that the Virtual Machine still uses the same Veth port, while the output for Module and Host changes. The Cisco Nexus
1000V provides all the monitoring capabilities that the network team is used to for a Virtual Ethernet port, even while the VM
attached to it is live migrated. On top of that, all the configuration and statistics follow the VM across the vMotion process.
Please migrate the Virtual Machine Windows 7 - A back to the host vesx1.dcloud.cisco.com before progressing to the
next lab step. To do that, drag and drop the VM Windows 7 - A onto vesx1.dcloud.cisco.com.
IMPORTANT: If the vMotion fails, open a new PuTTY session to host 198.18.133.41 and execute the following command:
ping 198.18.133.42 vmk1
Next, open another PuTTY session to host 198.18.133.42 and execute the following command:
ping 198.18.133.41 vmk1


Policy-based virtual machine connectivity


After the basic functionality of the Cisco Nexus 1000V distributed virtual switch has been demonstrated, it is time to explore some
of the more advanced features. This section demonstrates the policy-based virtual machine capabilities in the form of IP-based
filtering. The steps of this section include:

Configure an IP-based access list

Apply the access list to a port-group

Verify the functionality of the access list

Verify open ports within your virtual machine


With the VMs having access to the upstream switch for network connectivity, the VMs are at the same time accessible from hosts
on the upstream network. This exposes them to various network-based attacks. To demonstrate this, the Virtual
Machine inside your pod has two Windows-specific ports open which might be used for attacks.
Before configuring the access list to block access, verify that your Virtual Machine currently has two open ports:

1. Open the Virtual Machine console of the VM Windows 7 - A inside your pod.

2. Double-click on the Web Server icon to load the default webpage.


3. Verify that ports 3389 (Windows RDP) and 445 (Windows CIFS) are open by double-clicking on the Port Scan shortcut on the
desktop.


Configuration of an IP-based access list


In this lab step, you will create an IP-based access list which blocks access to these two ports.

1. Using the CLI, create an access list on the Cisco Nexus 1000V VSM. The name ProtectVM is chosen for this access list.

vsm# configure terminal


vsm(config)# ip access-list ProtectVM
vsm(config-acl)# deny tcp any any eq 3389
vsm(config-acl)# deny tcp any any eq 445
vsm(config-acl)# permit ip any any

This access list denies all TCP traffic to ports 3389 (Windows RDP) and 445 (Windows CIFS) while permitting any other IP traffic.

2. You will now apply the access list ProtectVM as an outbound rule to the virtual Ethernet interface (Veth) of the existing VM
running Windows 7 - A. Here the concept of port-profiles comes in very handy in simplifying the work. As the Veth interface of
the Windows 7 - A VM leverages the port-profile VM-Client, adding the access list to this port-profile will automatically update
all associated Veth interfaces and assign the access list to them.

vsm(config-acl)# port-profile VM-Client


vsm(config-port-prof)# ip port access-group ProtectVM out

As a result, access to both open ports within your Virtual Machine has been blocked.
Note: The directions in and out of an ACL have to be seen from the perspective of the Virtual Ethernet Module (VEM), not the
Virtual Machine. Thus, in specifies traffic flowing into the VEM from the VM, while out specifies traffic flowing out from the VEM
to the VM.
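As a quick sanity check from the VSM (output not shown here), you can confirm the contents of the ACL and the port-profile it is attached to:

vsm# show ip access-lists ProtectVM
vsm# show port-profile name VM-Client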
Verify the application of the IP-based access list
Verify that both ports that were open before are now blocked.

1. Again, open the Virtual Machine console of the VM Windows 7 - A.

2. Locate the Cisco dCloud icon to load the Port Scan webpage.

3. Verify that ports 3389 (Windows RDP) and 445 (Windows CIFS) are now filtered by double-clicking on the Port Scan shortcut on
the desktop.


Congratulations! You have successfully created, applied, and verified an IP-based access list. This exercise demonstrated that all
the features usually applied on a physical switch interface can now be applied on the Veth, and that the concept of port-profiles makes
the network configuration much easier. Changes to a port-profile are propagated on the fly to all the VMs using it.

Mobile VM Security
Another key differentiator of the Cisco Nexus 1000V is its advanced Private VLAN capability. This section demonstrates
the capabilities of Private VLANs by placing individual VMs in a Private VLAN while utilizing the uplink port as a promiscuous
PVLAN trunk. Thus, the VMs will not be able to communicate with each other, but can only communicate with the default gateway
and any other peer beyond the default gateway. The upstream switch does not need to be configured for this. This can, for
example, be used to deploy server virtualization within a DMZ.
The content of this step includes:

Configure Private VLANs.

Removing the Private VLAN configuration.

Add a VMKernel port to the VM-Client Port-Profile


These steps are required for the Private VLANs lab steps described later in this section. The VMkernel port will act as the gateway
for the 198.18.5.0/24 network.

1. In vSphere, go to the Home > Inventory > Hosts and Clusters tab and choose the server vesx2 of your pod.

2. Click on the Configuration tab and, within the Hardware area, on Networking. Under View, choose Distributed Virtual Switch.


3. In order to add the VMkernel interface, choose Manage Virtual Adapters... and afterwards click Add within the
Manage Virtual Adapters dialog. In the Add Virtual Adapter wizard, choose to create a New Virtual Adapter, and then click the Next button.

4. As the Virtual Adapter Type you can only choose VMkernel. Click Next.

5. Choose VM-Client as the port group name. Click Next.

6. Configure the IP settings for the VM-Client interface.


Private VLANs
IMPORTANT: Please make sure both Virtual Machines, Windows 7 - A and Windows 7 - B, are hosted on
vesx2.dcloud.cisco.com before progressing to the next lab steps. Otherwise, perform a vMotion accordingly so that both are placed
as required.

This section demonstrates the configuration of a Private VLAN towards the connected VMs. First, we will update the VLAN to run in
isolated mode. Then we will configure the VM and uplink port-profiles to do the translation between the isolated and the
promiscuous VLAN.
To avoid having to configure the PVLAN merging on the upstream switch, the new feature of promiscuous
PVLAN trunks is showcased on the uplink port. This means that the primary and secondary VLANs are merged before traffic leaves
the uplink port.
Note: When a VLAN is specified to be a primary VLAN for use with Private VLANs, it instantly becomes unusable as a regular VLAN. As
your Virtual Machines are still using VLAN 11 for network connectivity, your VMs will encounter connectivity issues while you
perform the configuration steps below.
It is therefore recommended not to change an in-use VLAN from non-PVLAN usage to PVLAN usage in a production environment.
1. First, you will prepare the primary and secondary VLANs on the VSM.

vsm# configure terminal


vsm(config)# feature private-vlan
vsm(config)# vlan 11
vsm(config-vlan)# private-vlan primary
vsm(config-vlan)# vlan 111
vsm(config-vlan)# private-vlan isolated
vsm(config-vlan)# vlan 11
vsm(config-vlan)# private-vlan association add 111
vsm(config-vlan)# end

You can check that the configuration has been successfully applied by issuing the show vlan private-vlan command.
vsm# show vlan private-vlan
Primary Secondary Type            Ports
------- --------- --------------- ------------------------------------------
11      111       isolated

2. As a next step, configure the uplink port-profile data-uplink as a promiscuous PVLAN trunk with the primary VLAN 11 and the
secondary VLAN 111, and add the PVLAN mapping, as shown below.

vsm# configure terminal
vsm(config)# port-profile type ethernet data-uplink
vsm(config-port-prof)# switchport mode private-vlan trunk promiscuous
vsm(config-port-prof)# switchport private-vlan trunk allowed vlan 11,111
vsm(config-port-prof)# switchport private-vlan mapping trunk 11 111

3. After this step has been completed, configure the port-profile VM-pvlan, which connects the Virtual Machines, as a private VLAN
port-profile in host mode, thus isolating the individual VMs from each other.

vsm(config)# port-profile VM-pvlan


vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# switchport mode private-vlan host
vsm(config-port-prof)# switchport private-vlan host-association 11 111
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# state enabled

4. Apply the port-profile VM-pvlan to both the Windows 7 - A and Windows 7 - B VMs. In the Summary tab, under Commands, click
Edit Settings, select Network adapter 1, and under Network label select VM-pvlan.


5. After applying the new port-profile to the VMs, the Veths of those particular VMs (Windows 7 - A and Windows 7 - B) will
remain the same. Compare with the output you noted earlier in the Network Administrator's view of vMotion section. Verify the
current Veth mapping of the VMs and the usage of the PVLAN.

Note: The Veth numbering of each VM shown in this document might differ from your actual output. However, it should be
consistent with the output obtained before applying the port-profile to the VMs. In this case, you should see Veth4 and Veth10 in
pvlan mode.
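For example (the Veth numbers vary per pod), the show interface brief output includes the current mode of each Veth, and show port-profile name VM-pvlan lists the interfaces assigned to the new port-profile:

vsm# show interface brief
vsm# show port-profile name VM-pvlan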

6. The expected behavior of the above configuration is that the two virtual machines of your pod should both still be able to reach
the default gateway and all hosts beyond this gateway. However, they should not be able to reach each other.
This can be verified by pinging the default gateway 198.18.5.1 from the Windows 7 - A VM. To do so, log in to the Windows 7 - A
VM, click on the Command Prompt icon on the desktop within the VM, and issue the command ping 198.18.5.1. The ping should be
successful.


Now try to ping Windows 7 - B from Windows 7 - A. The IP address of Windows 7 - B is 198.18.5.12. Issue the command ping
198.18.5.12.

As expected, the ping times out.


7. You can now change the isolated VLAN to a community VLAN. Ports in a community VLAN can talk to each other as well as to the
promiscuous port. However, they cannot talk to an isolated port.

vsm(config)# vlan 111


vsm(config-vlan)# private-vlan community
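You can confirm the change with the same command used earlier; the Type column for VLAN 111 should now read community instead of isolated:

vsm# show vlan private-vlan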


Note: The Virtual Machines using the port-profile VM-pvlan will lose network connectivity for a brief moment (interface flap) when
changing the PVLAN mode.
8. Again, try to ping the VM Windows 7 - B. This time the ping will work.

Congratulations, you have successfully configured a Private VLAN with a promiscuous PVLAN trunk on the uplink! This feature
allows you to utilize server virtualization in new areas, such as the deployment of a DMZ.
Feel free to move the VMs around the two ESX hosts via vMotion. You will notice that no matter where the two VMs reside, the
network policies are enforced the same way.

Removing the Private VLAN configuration


When you remove the Private VLAN configuration from VLAN 11, the previously created port-profile VM-pvlan becomes unusable and
your VMs will therefore lose connectivity. If you would like to proceed further, use Edit Settings to change the port-profile back to
VM-Client for the Windows 7 - A and Windows 7 - B VMs. Below are the steps to remove the Private VLAN configuration.

1. Remove the configuration of VLAN 11 as a primary PVLAN.

vsm# configure terminal


vsm(config)# vlan 11
vsm(config-vlan)# no private-vlan association
vsm(config-vlan)# no private-vlan primary
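As an optional check, VLAN 11 should no longer appear in the private VLAN table and should show up as a normal VLAN again:

vsm# show vlan private-vlan
vsm# show vlan id 11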


Summary
You are now familiar with the Nexus 1000V. As you have experienced during the lab, the Nexus 1000V is based on three
important pillars:

Policy-based virtual machine (VM) connectivity

Mobility of security and network properties

Non-disruptive operational model

In this lab you became familiar with the Cisco Nexus 1000V Distributed Virtual Switch for VMware ESX. You:

Installed and configured the Nexus 1000V

Added the physical ESX hosts to the DVS

Attached Virtual Machines to the Distributed Virtual Switch

Tested the vMotion capability

Enhanced security with an IP access list

Configured Private VLANs

For More Information


For more information about the Cisco Nexus 1000V, visit http://www.cisco.com/go/nexus1000v or contact your local Cisco account
representative.

