
Intercloud Data Center ACI 1.0
Implementation Guide
February 20, 2015

Building Architectures to Solve Business Problems

CCDE, CCENT, CCSI, Cisco Eos, Cisco Explorer, Cisco HealthPresence, Cisco IronPort, the Cisco logo, Cisco Nurse Connect, Cisco Pulse, Cisco SensorBase,
Cisco StackPower, Cisco StadiumVision, Cisco TelePresence, Cisco TrustSec, Cisco Unified Computing System, Cisco WebEx, DCE, Flip Channels, Flip for Good, Flip
Mino, Flipshare (Design), Flip Ultra, Flip Video, Flip Video (Design), Instant Broadband, and Welcome to the Human Network are trademarks; Changing the Way We Work,
Live, Play, and Learn, Cisco Capital, Cisco Capital (Design), Cisco:Financed (Stylized), Cisco Store, Flip Gift Card, and One Million Acts of Green are service marks; and
Access Registrar, Aironet, AllTouch, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Lumin, Cisco Nexus, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, Continuum, EtherFast, EtherSwitch, Event Center, Explorer, Follow Me Browsing, GainMaker, iLYNX, IOS, iPhone, IronPort, the
IronPort logo, Laser Link, LightStream, Linksys, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, PCNow, PIX, PowerKEY,
PowerPanels, PowerTV, PowerTV (Design), PowerVu, Prisma, ProConnect, ROSA, SenderBase, SMARTnet, Spectrum Expert, StackWise, WebEx, and the WebEx logo are
registered trademarks of Cisco and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship
between Cisco and any other company. (1002R)
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT
SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE
OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED AS IS WITH
ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT
LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF
DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO
OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Cisco Confidential Partners ONLY
Intercloud Data Center ACI 1.0, Implementation Guide
Service Provider Segment
© 2015 Cisco Systems, Inc. All rights reserved.

CONTENTS

Preface
    Audience

Chapter 1    Solution Overview
    Implementation Overview
    Solution Architecture
    Mapping ACI Concepts to IaaS
    Service Tiers
        Reference IaaS Tenant Network Containers
            Bronze
            Silver
            Expanded Gold Container (E-Gold)
            Copper
    Solution Components

Chapter 2    ACI Policy Model
    Accessing Managed Object Data through REST API
    Authenticating and Maintaining an API Session
    Layer 4 to Layer 7 Service Insertion
        L4 to L7 Service Parameters

Chapter 3    Data Center Fabric Implementation with ACI
    Fabric Implementation Highlights
    APIC Attachment Points
    Fabric Load Balancing
    External Connectivity to PE
        vPC Connectivity to PE for L2 Extension
        vPC Configuration in ACI Fabric
        Port Channel with SVI for L3 Extension
        Port Channel Configuration on Border Leaf
    Connectivity to Compute
        vPC to Fabric Interconnects
        FEX to Bare Metal
            Attaching FEX to the ACI Fabric
            Profile Configuration
            Bare Metal Server Attachment to FEX
        Bare Metal and KVM Direct to 93128 and 9396
    Connectivity to Services Appliances
        ASA 5585 Active/Active Cluster Implementation
        ASA 5585 for Expanded-Gold and Copper using vPC
    Connectivity to Storage
        NetApp Cluster Connectivity for NFS
        vPC Configuration
        Storage Tenant Configuration
        Storage Multi-Tenancy Considerations
        High Availability Implications
    Data Center Fabric Management: Out of Band (OOB)
        Connectivity to OOB Ports on all Fabric Switches
        Connectivity to APIC OOB Ports
        Connectivity from APIC to VMMs
        Connectivity from APIC to Services Appliances (ASA 5585)
    Deployment Considerations

Chapter 4    VMware ICS Compute and Storage Implementation
    VMware Based FlexPod Aligned ICS
        Reference Architecture
        UCS Fabric Interconnects and B-Series Setup
        Cisco Application Virtual Switch (AVS)
            Forwarding Modes
            Cisco AVS Integration with VMware vCenter
            Cisco AVS Installation
            AVS Virtual Machine Kernel (VMK) NIC Connectivity
        NetApp NFS Storage

Chapter 5    OpenStack Compute and Storage Implementation
    Physical Connectivity Layout
        C-Series Server Attachment
        C-Series Server NIC Layout
    OpenStack Services and Access Implementation
        MaaS and Juju Servers
        OpenStack Horizon Dashboard Access
        OpenStack SWIFT/RADOS Gateway Object Store Access
        OpenStack Host Access to NFS Storage
    Canonical OpenStack Implementation
        Metal as a Service (MaaS)
        Juju
        Charms
    Nexus 1000V for KVM Implementation
        Nexus 1000V High Availability Model
        Nexus 1000V Architecture
            Virtual Supervisor Module (VSM)
            Virtual Ethernet Module (VEM)
            OpenStack Nexus 1000V Components
            VXLAN Gateway (VXGW)
            Nexus 1000V Packet Flow
        Nexus 1000V Charms
            VSM Charm
            VEM Charm
            Quantum Gateway (Neutron) Charm
            Nova Cloud Controller Charm
            Nova Compute Charm
            OpenStack Dashboard Charm
        Nexus 1000V for KVM Work Flow
            OpenStack to Nexus 1000V Object Mapping
            Configuration Work Flow
    OpenStack Installation
        High Availability
        High Availability Components
        Ubuntu MaaS Installation
        Ubuntu Juju Installation
        Installation of OpenStack Icehouse using Ubuntu MaaS/Juju
            Juju-Deployer and Configuration File
            Deploying OpenStack Charms
            Post Juju-Deployer
            Troubleshooting
            Install Python OpenStack Clients
    OpenStack Configuration
        Tenant Configurations
        Networking Configuration
            Server Networking Configuration
            Tenant Networking Configuration
            Additional Nexus 1000V Configurations
    Storage Configuration and Implementation
        Block Storage with Ceph
        Block Storage with NetApp
        Image Storage
        Object Storage
    Instance Migration
        Cold Migration
        Live Migration
    Host Failure Scenarios
        Compute Nodes
        Control Nodes

Chapter 6    WAN Edge Implementation with ASR9K
    Network Virtualization Edge on the ASR 9000
        Benefits
        Requirements
        Restrictions
        Control Plane Extension
        Data Plane Extension
        Link Distribution
    ASR 9000 as the Data Center Provider Edge Router
        ASR 9000 Data Center Provider Edge Implementation Toward MPLS Core
        Provider Edge and Customer Edge BGP to Tenant
        L3 Bronze Configuration
        Provider Edge to Customer Edge using Static Routing
    ASR 9000 as Internet Router
        E-Gold Tenant Internet Connection Configuration on ASR 9000 Data Center Provider Edge
        Interface Configuration
        Routing Configuration
    Deployment Considerations

Chapter 7    End-to-End QoS Implementation
    QoS Domains and Trust Boundaries
        QoS Transparency
        Trust Boundaries
    QoS per Service Tier
        Tenant Type Mapping to QoS Traffic Classes
    ACI Fabric QoS
        Classification
        Trust
        Marking
    UCS QoS
        AVS Encapsulation
        QoS System Class
        QoS Policy
    ASR 9000 Data Center PE QoS
    Deployment Considerations

Chapter 8    Expanded Gold Tenant Container
    Dual Zones Layout for Workload VMs
    High Availability
    Traffic Flows
        Private Zone
        Demilitarized Zone
    Expanded Gold Tenant Container Configuration
        Prerequisites
        Summary of Steps
        Detailed Steps
    Decommission the Expanded Gold Tenant Container
    Expanded Gold Tenant Container with ASAv

Chapter 9    Silver Tenant Container
    Silver Tenant Container Layout
        Physical Topology
        Logical Topology
    APIC Tenant Construction
        User Roles and Security Domain
        Create Tenant
        Private Network
        Bridge Domain
        Application Profile
        End Point Groups
        Filters
        Contracts
        External Routed Networks
        Traffic Flow Paths
    Server Load Balancing using NetScaler 1000V
        NetScaler 1000V Overview
        NetScaler 1000V Implementation
            One-Arm Mode
            High Availability (HA) Configuration
            Network Setup
        NetScaler 1000V L4-7 Load Balancing Policies
            Server
            Services / Service Groups
            Load Balanced Virtual Server
            Health Monitoring
    NetScaler 1000V Implementation using Service Graph
        Citrix NetScaler Device Package
        L4-L7 Devices Implementation
            Device Cluster (Logical Devices)
            Concrete Devices
            Logical Interfaces
        Service Graph
            Service Graph Configuration
            Configuring Device/Function Parameters under Service Graph
            Configuring L4-L7 Parameters under EPG
            Device Selection Policies
            Deploying Service Graph
            Network Parameter Configuration
    Load-Balancing Implementation
        Application: HTTP
        Application: FTP
        Application: MySQL
        SSL Offload Implementation
    References

Chapter 10    Bronze Tenant Container
    Overview
    Layer 3 Bronze
        Physical Topology
        Logical Topology
        Prerequisites
        L3 Bronze Tenant Configuration Procedure
        Verify Configuration
    L2 Bronze with Default Gateway on ASR 9000 nV Edge
        Physical Topology
        Logical Topology
        L2 Bronze Tenant Configuration Procedure
        Verify Configuration
    Deployment Considerations

Chapter 11    Copper Tenant Container
    Copper Tenant Logical Layout
        Logical Topology
        Copper Container Traffic Flow
    ACI Fabric Configuration
        Overview
        ACI Link Configuration
        ACI Tenant Configuration
            Base Tenant Configuration
            Server-to-ASA Configuration
            ASA-to-ASR 9000 Configuration
            Object Storage (Swift/RADOS GW) Access Configuration
            NFS Storage Access Configuration
    ASA Firewall Configuration
        ASA System Context
            Interface Configuration
            BGP Configuration
        ASA Copper Context
            Base Configuration
            BGP Configuration
            NAT Configuration
    Deployment Considerations

Preface
The Cisco Intercloud Data Center ACI 1.0 (ICDC ACI 1.0) system provides design and implementation guidance for building cloud infrastructures, both for Enterprises deploying Private Cloud services and for Service Providers building Public Cloud and Virtual Public Cloud services. With the goal of providing an end-to-end system architecture, ICDC ACI 1.0 integrates Cisco and third-party products in the cloud computing ecosystem. This preface explains the objectives and intended audience of the Cisco Intercloud Data Center ACI 1.0 solution and this implementation guide.
The Intercloud Data Center system is a continuation of the Virtualized Multi-Service Data Center
(VMDC) systems, and this implementation guide is based on Application Centric Infrastructure (ACI)
technology that Cisco has just released. In this first release of the implementation guide, focus is placed
on showing how to build complex tenancy constructs using ACI.
Product screen shots and other similar material in this guide are used for illustrative purposes only and
show trademarks of EMC Corporation (VMAX), NetApp, Inc. (NetApp FAS3250), and VMware, Inc.
(vSphere). All other marks and names mentioned herein may be trademarks of their respective
companies.
Use of the word partner or partnership does not imply a legal partnership relationship between Cisco
and any other company.

Audience
This guide is intended for, but not limited to, system architects, network design engineers, system
engineers, field consultants, advanced services specialists, and customers who want to understand how
to deploy a Public or Private cloud data center infrastructure using ACI. This guide assumes that you are
familiar with the basic concepts of Infrastructure as a Service, Cisco Virtualized Multi-service Data
Center (VMDC) Solution, IP protocols, Quality of Service (QoS), and High Availability (HA), and that
you are aware of general system requirements and data center technologies.
This implementation guide provides guidance for cloud service providers to build cloud infrastructures
using the Cisco Application Centric Infrastructure (ACI) Technology. This implementation guide is part
of the Cisco reference design for cloud infrastructures called Cisco Intercloud Data Center ACI 1.0
release.


Chapter 1

Solution Overview
The goal of implementing cloud infrastructures is to provide highly scalable, efficient, and elastic services accessed on demand over the Internet or intranet. In the cloud, compute, storage, and network hardware are abstracted and delivered as a service to run the workloads that provide value to its users. The end users, also called tenants, consume the functionality and value provided by the service as and when needed, without having to build and manage the underlying data center infrastructure. A cloud deployment model differs from traditional deployments in that the focus is on deploying applications by consuming a service from a provider, which yields business agility and lower cost because only the resources needed are consumed, and only for the duration needed. For the provider of the cloud infrastructure, the compute, storage, networking, and services infrastructure in the Data Center is pooled together as a common shared fabric of resources, hosted at the provider's facility, and consumed by tenants using automation via API or portals. The key requirements for cloud service providers are multi-tenancy, high scale, automation to deploy tenant services, and operational ease.
With the availability of ACI, powerful technology now exists to build highly scalable and programmable data center infrastructures. ACI brings software-defined networking principles, using a centralized policy controller to configure, deploy, and manage the data center infrastructure, including services appliances. It scales vastly by implementing overlay technology in hardware, yielding high performance and enhanced visibility into the network, and it introduces a different paradigm for designing and running applications in a multi-tenant data center, with enhanced security.
ACI is supported on the new Nexus 9000 series switches, and the centralized policy controller is called the Application Policy Infrastructure Controller (APIC). A limited First Customer Shipment (FCS) release of this software was made in the summer of 2014, with General Availability (GA) in November 2014.
This guide documents the implementation of reference Infrastructure as a Service (IaaS) containers using the FCS ACI software release and includes detailed configurations and findings based on solution validation in Cisco labs. The focus is on using ACI-based constructs to build reference IaaS containers similar to those shown in past Cisco Virtualized Multi-Service Data Center (VMDC) Cisco Validated Designs (CVD), so that Cisco customers can understand how ACI can be applied to build cloud infrastructures. Because this is the first release of this system, the focus has been on showing the functional capabilities of ACI and how it applies to building reference containers. Validation of ACI scalability will be covered in subsequent updates or releases of this solution and implementation guide.
This release of Intercloud Data Center ACI 1.0 includes a VMware vSphere-based hypervisor and uses the Cisco Application Virtual Switch (AVS) to extend ACI integration all the way to the virtual access layer; configuration of the virtual switch port-groups is also done via APIC. Additionally, an OpenStack-based compute pod is validated using the Nexus 1000V for KVM platform; this implementation is targeted at providing lower-cost hosting services. It was implemented using the Canonical distribution of the OpenStack Icehouse release with Ubuntu 14.04 LTS, and does not include APIC integration.


Previous Cisco cloud reference designs were named Virtualized Multi-Service Data Center (VMDC),
and going forward, the naming of these systems has been changed to Intercloud Data Center starting with
this system release. For reference purposes, details are provided here about the previously released
VMDC design and implementation guides. There have been several iterations of the VMDC solution,
with each phase encompassing new platforms, versions, and technologies.

•    VMDC Virtual Services Architecture 1.0/1.0.1/1.0.2

•    VMDC 2.3

•    VMDC Data Center Interconnect (DCI) 1.0/1.0.1

•    VMDC Security 1.0

This implementation guide introduces several ACI-based design elements and technologies:

•    Scaling with VXLAN-based overlays: ACI uses VXLANs internally to scale beyond the 4000-VLAN limit when implementing Layer 2 (L2) segments.

•    The Data Center fabric uses a Clos design, allowing for large cross-sectional bandwidth and smaller failure domains using dedicated spines. All servers and external networks attach to the leaf nodes.

•    Centralized policy control, SDN, and a programmable Data Center network: the whole ACI fabric can be configured using the APIC GUI or REST API.

•    Multi-tenant configuration model: ACI configuration is by design multi-tenant and allows for configuration of tenant elements using role-based access control (RBAC).

•    Application-centric deployment models and application security.

•    Integration with Virtual Machine Managers (VMM): vSphere 5.1 using the Application Virtual Switch.

•    Service integration of Firewall and Server Load Balancer using ACI Service Graphing technology.

The Intercloud ACI 1.0 solution addresses the following key requirements for cloud infrastructure providers:

1.    Tenancy Scale: Multi-tenant cloud infrastructures require the use of multiple Layer 2 segments per tenant, and each tenant needs Layer 3 contexts for isolation, to support security as well as overlapping IP address spaces. These are typically implemented as VLANs and VRFs on the data center access and aggregation layers, with the Layer 3 isolation extended all the way to the DC provider edge. Due to the 4000-VLAN limit, overlays are required, and ACI uses VXLAN technology within the fabric to scale to a very high number of bridge domains. The number of tenants is similarly very high, with plans to support 64000 tenants in future releases. The implementation of VXLANs in hardware provides large scale, high performance and throughput, innovative visibility into tenant traffic, as well as new security models.

2.    Programmable DC Network: The data center network is configured using APIC, which is the central policy control element. The DC fabric and tenant configurations can be created via the APIC GUI or via REST API calls, allowing for a highly programmable and automatable data center (a sketch of such an API call follows this list). There is integration with the Virtual Machine Managers (VMware vSphere 5.1 currently, using the Application Virtual Switch), so that the tenant L2 segments can be created via APIC.

3.    Integration of Services: Deploying services for tenants, such as Firewall and Server Load Balancer, normally requires separate configuration of these devices via orchestration tools. With ACI, these devices can also be configured via APIC, allowing for a single point of configuration for the data center services. Each service platform publishes the supported data items via a device package, which APIC then exposes via its user interface. Currently Cisco ASA Firewalls and Citrix NetScaler Server Load Balancers (SLB) are among the supported devices, and a number of other vendors are building their own device packages to allow for integration with ACI.
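
The following is a minimal sketch of the kind of REST call this programmability enables: it authenticates to the APIC and pushes an empty tenant object. The controller address, credentials, and tenant name are placeholders, and certificate verification is disabled purely for brevity; this is an illustration, not the validated configuration used in this solution.

import requests

APIC = "https://10.0.0.1"   # placeholder APIC address
LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}   # placeholder credentials

session = requests.Session()
session.verify = False      # lab-only shortcut; use proper certificates in production

# Authenticate; the APIC returns a token and sets the APIC-cookie that the
# session reuses on subsequent calls.
resp = session.post(f"{APIC}/api/aaaLogin.json", json=LOGIN)
resp.raise_for_status()

# Create (or update) a tenant by posting to the policy universe (uni).
tenant_xml = '<fvTenant name="ExampleTenant" descr="created via REST"/>'
resp = session.post(f"{APIC}/api/mo/uni.xml", data=tenant_xml)
resp.raise_for_status()
print("Tenant pushed, status", resp.status_code)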


In summary, the Intercloud ACI 1.0 solution provides the following benefits to cloud providers:

•    Increased tenancy scale: up to 64000 tenants (in future releases).

•    Increased L2 segment scale: VXLAN overlays in the fabric provide higher L2 scale and also normalize the encapsulation on the wire.

•    A single Clos-based Data Center fabric that scales horizontally by adding more leafs.

•    Large cross-sectional bandwidth using the Clos fabric, smaller failure domains, and enhanced HA using ACI virtual port-channels from two leaf nodes to an external device.

•    SDN: software-defined network and services integration, with all configuration through a centralized policy controller.

•    Improved agility and elasticity due to programmability of the network.

•    Enhanced security and application-centric deployment.

•    Multi-tenancy and RBAC built in.

•    APIC integration with the virtual access layer using the Application Virtual Switch for VMware vSphere 5.1 hypervisor-based virtual machines; no additional configuration is required.

•    OpenStack Icehouse-based compute with a Nexus 1000V for KVM virtual switch implementation to support tenants that need OpenStack-based IaaS service. Both traditional storage and software-defined storage using Red Hat Ceph are covered as storage options.

The Intercloud ACI 1.0 solution (as validated) is built around Cisco UCS, AVS, Nexus 9000 ACI
switches, APIC, ASR 9000, Adaptive Security Appliance (ASA), Cisco NetScaler 1000V, VMware
vSphere 5.1, Canonical OpenStack, KVM, Nexus 1000V, NetApp FAS storage arrays and Ceph storage.
Figure 1-1 shows the functional infrastructure components comprising the Intercloud ACI 1.0 solution.
Figure 1-1    Intercloud ACI 1.0 Infrastructure Components

[Figure: Data Center PE (Cisco ASR 9000 or ASR 1000); Data Center Network (ACI with Nexus 9000 and APIC: Nexus 9508 spines with 9736PQ line cards, Nexus 9396PX and 93128TX leafs); Virtual Access (Application Virtual Switch); Compute (UCS 6200 Fabric Interconnect with UCS B-Series blade servers, UCS C-Series rack servers, Cisco Nexus 2232 and 2248 FEX); Storage (NetApp FAS; EMC VMAX, VNX, or other); Hypervisors (VMware vSphere 5.1, OpenStack KVM); Services (ASA 5585-X, ASAv, NetScaler 1000V); Management (APIC, VMware vCenter, OpenStack Horizon, Cisco UCSM).]


Implementation Overview
The Intercloud ACI 1.0 solution utilizes a Clos design for a large-capacity DC fabric with High Availability (HA) and scalability. All external devices are connected to the leaf nodes. This design uses multiple Nexus 9500 series spine switches; at least two spine switches are required, with 4 spines preferred to provide smaller failure domains. Each Nexus 9300 series leaf node is connected to all spines using 40-Gbps connections, and the paths between leafs are highly available via any of the spines.
The external devices attached to the leaf nodes include Integrated Compute stacks, services appliances such as Firewalls and server load balancers, and the WAN routers that form the DC provider edge. These devices are attached to two Nexus 9300 series leaf nodes using virtual port-channels to provide high availability against a single leaf or link failure. Each service appliance also supports high availability using redundant appliances, in either active/standby or active/active cluster mode, to provide HA and scale. The fabric normalizes the encapsulation used toward each external device and re-encapsulates using enhanced VXLAN within the fabric; this allows for highly flexible connectivity options and horizontal scaling. By allowing all types of devices to connect to a common fabric, and interconnecting them using overlays, Data Centers can be built in a highly scalable and flexible manner, and expanded by adding more leaf nodes as needed.
Using the Application Virtual Switch (AVS) for VMware vSphere based workloads extends the ACI fabric to the virtual compute workloads, with the creation of the port-groups for the different tenant segments and endpoint groups done via the APIC.
BGP or static routing is used to connect the ACI fabric to the ASR 9000 DC Edge for Layer 3 external connectivity models, while L2 external connectivity to the ASR 9000 is used for some tenant containers.
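
As an illustration of how a per-tenant Layer 3 external connection of this kind might be pushed through the APIC REST API, the sketch below creates an L3 external routed network (l3extOut) with an iBGP peer on a border leaf sub-interface toward the DC PE. All names, node and interface identifiers, VLAN, addresses, and the ASN are placeholders, and exact attributes may vary by APIC release; the validated procedures appear in later chapters of this guide.

import requests

APIC = "https://10.0.0.1"   # placeholder APIC address
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}).raise_for_status()

# iBGP peering from border leaf node-101 to the ASR 9000 over a VLAN sub-interface.
# Everything below (tenant, VRF, node, port, VLAN, addresses, ASN) is illustrative.
l3out_xml = """
<fvTenant name="B01">
  <l3extOut name="B01-l3out">
    <l3extRsEctx tnFvCtxName="B01_VRF"/>
    <bgpExtP/>
    <l3extLNodeP name="border-leaf">
      <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="10.255.0.101"/>
      <l3extLIfP name="to-asr9k">
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]"
                             ifInstT="sub-interface" encap="vlan-100" addr="10.10.10.1/30">
          <bgpPeerP addr="10.10.10.2">
            <bgpAsP asn="65000"/>
          </bgpPeerP>
        </l3extRsPathL3OutAtt>
      </l3extLIfP>
    </l3extLNodeP>
    <l3extInstP name="B01-ext-epg">
      <l3extSubnet ip="0.0.0.0/0"/>
    </l3extInstP>
  </l3extOut>
</fvTenant>
"""
resp = session.post(f"{APIC}/api/mo/uni.xml", data=l3out_xml)
print("L3Out pushed, status", resp.status_code)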

Solution Architecture
The Intercloud Data Center ACI 1.0 architecture comprises the ACI Fabric, WAN layer, Compute, and Services layers. All the layers attach to the ACI fabric leaf nodes, and the choice of which devices attach to which leafs is driven by physical layout as well as per-leaf scale considerations.
Figure 1-2 shows a logical representation of the Intercloud ACI 1.0 solution architecture.


Figure 1-2    Logical Representation of Intercloud ACI 1.0 Solution Architecture

[Figure: tenant sites reach the data center over the WAN (MPLS core) with tenant routes carried over L3VPN; the ASR 9000 nV DC edge connects to the ACI fabric per tenant via iBGP, static routing, or L2 external over vPC, and any leaf can act as the border leaf connecting to the DC-PE; tenant VM default gateways reside on the ACI fabric or the ASA firewall; compute comprises a VMware-based ICS, an OpenStack-based pod with Ceph nodes, and bare metal servers; storage, out-of-band management, and management services (vCenter, MaaS, Juju) also attach to the fabric.]

The layers of the architecture are briefly described below.


•    WAN/Edge: The WAN or DC Edge layer connects the DC to the WAN. Typically, this provides IP or Multiprotocol Label Switching (MPLS)-based connectivity to the Internet or intranet. The ASR 9010 is used as an MPLS provider edge router in this design, providing L3VPN connectivity to the provider IP/MPLS network as well as the Internet gateway function. It also aggregates all the Data Center pods, which connect directly to the ASR 9010 provider edge; each PoD could be an ACI fabric or a legacy data center pod in a brownfield deployment scenario. The ASR 9010 is utilized in Network Virtualization (nV) mode, where two physical ASR 9000 devices have a single control plane and appear as a single logical device to adjacent nodes. Connection of the ACI fabric to the ASR 9000 can be done using external Layer 3 routed connections or using Layer 2 extension with vPC. Tenant separation is done by using VRFs on the ASR 9000 series routers.

•    ACI Fabric:

    –    Spine: Nexus 9508 switches with 9736PQ line cards are used as the ACI Fabric spine. In this implementation, 4 Nexus 9508 spine nodes are used. Each 9736PQ line card has 36 40-Gbps ports, and up to 8 such line cards can be installed in each Nexus 9508 chassis. Only leaf nodes attach to the spines. Leafs can be connected via a single 40G link to each spine, or via multiple 40G links. Since each leaf connects to every spine, the number of spine ports determines the total size of the fabric, and additional line cards can be added to the spine nodes to increase the number of supported leaf nodes. Additional form factors of the Nexus 9500 ACI spine node will be released in the future.


    –    Leafs: Nexus 9396PX or Nexus 93128TX leaf switches can be used. These switches have 12x 40G ports used to connect to the spines. All connections external to the ACI Fabric are made using the edge 1G/10GE ports on the leaf nodes. This includes connecting to the ICS, the WAN/Edge provider edge, and services appliances, as well as storage devices. Scale considerations, based on consumption of hardware resources per leaf node, determine the per-leaf scale of MAC addresses, endpoint groups, bridge domains, and security policy filters. The fabric allows for very high scaling by adding more leaf nodes as needed.

•    Services: Network and security services, such as Firewalls, server load balancers, intrusion prevention systems, application-based Firewalls, and network analysis modules, attach directly to the Nexus 9300 series leaf switches. Virtual port-channels to two different leaf nodes are used for HA. In this implementation, ASA 5585-X physical Firewalls are used, connected via vPC to a pair of Nexus 9300 Top of Rack switches. Virtual appliances such as the ASAv virtual Firewall and the NetScaler 1000V virtual SLB are also used; these run on the VMware vSphere hypervisor on the integrated compute stack.

•    Integrated Compute Stack using VMware vSphere: This is an ICS stack such as FlexPod or Vblock. These typically consist of racks of compute based on UCS, plus storage devices, and attach to a pair of Nexus 9300 series leafs. Storage can use IP transport such as NFS, iSCSI, or CIFS. Alternatively, FC/FCoE-based SANs can be used by connecting the UCS 6200 Fabric Interconnects to a pair of SAN fabrics implemented using MDS switches. The Compute and Storage layer in the Intercloud ACI 1.0 solution has been validated with a FlexPod-aligned implementation using the following components:

    –    Compute: Cisco UCS 6296 Fabric Interconnect switches with UCS 5108 blade chassis populated with UCS B200 and B230 half-width blades. VMware vSphere 5.1 ESXi is the hypervisor for virtualizing the UCS blade servers.

    –    Storage: IP-based storage connected directly to the Nexus 9300 series leaf switches. NetApp FAS storage devices (10G interfaces) are connected directly to leaf nodes, and NFS-based storage is used for tenant workloads.

    –    Virtual Access: The Cisco Application Virtual Switch (AVS) is used on VMware vSphere 5.1 with full integration with APIC. APIC creates port-groups for EPGs and maps each to a VLAN on the wire.

•    OpenStack Compute Pod: OpenStack is set up as an alternative for tenants that want to use OpenStack-based virtualization. The Canonical OpenStack Icehouse release with Ubuntu 14.04 LTS Linux is utilized, with a 3-node high-availability configuration. Both control nodes and compute nodes are Cisco UCS C-Series servers connected to the ACI fabric using virtual port-channels. The virtual access switch is the Nexus 1000V for KVM, using the Nexus 1000V Neutron plugin. For this implementation, the OpenStack compute is validated with the Copper container only, and hence the default gateway for all tenant VMs is the ASA Firewall. Each tenant gets an ASA sub-interface, which is extended via the ACI fabric to the compute layer hosting the tenant VMs. This release with OpenStack Icehouse does not include integration between APIC and OpenStack, and the tenant EPGs are statically mapped to VLANs.

    –    Compute: Cisco UCS C-Series servers. These are also Ceph nodes, so local disks are configured and used by Ceph as OSDs. The compute nodes also have access to traditional storage using NetApp.

    –    Storage:
        Traditional storage using NetApp NFS shares. Cinder is set up to mount the NFS shares on compute nodes and use them for running instances.
        Software-defined storage using Ceph. Compute nodes use the built-in RBD client to access the Ceph OSDs.


        The Swift service for tenants is provided via the RADOS Gateway.
        Glance is set up using the RBD client to implement Glance storage on Ceph as well.

    –    Virtual Access: The Nexus 1000V for KVM is used as the virtual switch. Networks are created on the Horizon dashboard and published to the Nexus 1000V VSM. The Nexus 1000V Neutron plugin is used.
Figure 1-3 provides a logical representation of the OpenStack Pod.
Figure 1-3    Logical Representation of OpenStack Pod

[Figure: the management network hangs off a Nexus 5000 pair, with UCS C-Series servers attaching via bonded or dual Ethernet and CIMC; the data network is the ACI fabric (Nexus 93128 and 9396 leafs), with control and compute nodes attaching via bonded 2x10G Cisco VIC links to an ACI leaf pair, and the storage network carried as a VLAN on the bonded links; the management pod hosts Juju, MaaS, the OpenStack control nodes with RADOS GW, and the Nexus 1000V VSM nodes; the compute nodes (UCS C220/C240 M3) in PoD 1 and PoD 2 also act as Ceph OSDs and MONs; NetApp NFS storage is reachable via Cinder.]

The OpenStack implementation is targeted at smaller deployments of up to 256 hosts, the currently verified scalability of the Nexus 1000V for KVM, and can scale higher in future releases. High availability for the OpenStack control plane is implemented with 3 nodes running all OpenStack services in an active/active cluster configuration. Canonical-recommended High Availability (HA) designs require running each OpenStack service on a separate node for production and scaled-up environments, or alternatively running services on independent virtual machines during staging. For this implementation, a 3-node HA cluster was set up, and Linux containers (LXC) are used to isolate individual OpenStack services on these nodes (Figure 1-4).
Figure 1-4    OS Services Mapping to Small HA Model

[Figure: the management pod contains the build node (MaaS) and the Juju bootstrap node; in the workload pod, three controller nodes run Keystone, Glance, Neutron, Nova, Cinder, Horizon, and RADOS GW, clustered with HAProxy, a RabbitMQ cluster, and a MySQL Percona/Galera cluster; the controller nodes also run Ceph OSD/MON, and two additional KVM nodes host the Nexus 1000V VSMs.]


Mapping ACI Concepts to IaaS


In this section, a brief review of key ACI concepts is followed by considerations for use in Cloud Service
Provider deployment for IaaS services.

Note    Refer to the following document for more details on ACI terminology:
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-731960.html

•    End Point Group (EPG): A set of endpoints, either VMs or hosts, to be treated similarly from a policy perspective. From the perspective of the ACI Fabric, each endpoint is a MAC address and IP address. In virtualized environments (currently only VMware vSphere), the EPGs are extended all the way to the virtual switch with the Cisco Application Virtual Switch, and port-groups are created by the APIC on vCenter that can then be used to attach VMs to the port-groups for the specific EPG. Currently, EPGs can be mapped to VMM domains, wherein the APIC automatically assigns the VLAN (from a pool) and creates port-groups with the name of the EPG to indicate to server admins which port-groups to attach VMs to. The alternative for non-integrated external devices is to statically map an EPG to a certain VLAN on an interface. Multiple such VLANs are allowed at different points in the fabric, allowing flexibility in stitching together a tenant container.

•    Contracts: A whitelist policy allowing specific TCP/UDP ports to be opened to allow communication between EPGs. By default, communication between EPGs is not allowed, that is, deny everything. Using contracts, specific protocols and services are allowed between EPGs. (A sketch showing how a contract, bridge domain, and EPG are created through the REST API follows this list.)

Note    Within an EPG, all communication is allowed without restriction.

Note    Some protocols are not filtered by contracts; see the following URL:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/release/notes/aci_nxos_rn_1102.html

•    Application Profile: A set of EPGs and the contracts between them, implementing a specific multi-tier application. For example, a 3-tier web/app/db application might have 3 EPGs, with contracts for outside to web, web to app, and app to db. Together these form the Application Profile.

•    APIC Tenants: APIC is by design multi-tenant, and creation of policies and configuration is done on a per-tenant basis. Role-based access control allows each tenant admin to configure policies for that specific tenant.

•    Bridge Domains: Bridge domains are L2 segments overlaid on the fabric. At the edges, the tenant bridge domains are mapped to VLANs or VXLANs on the wire, and carried over the fabric with enhanced VXLAN encapsulation.

•    Private Networks: Private networks are similar to VRFs on traditional routers. Each private network has its own addressing space and routing space.

•    Subnets: Subnets are IP subnets attached to bridge domains. There can be one or more subnets attached to a bridge domain, similar to primary and secondary addresses. SVIs are created on the fabric for these subnets and exist on all of the leaf nodes where the bridge domain exists, providing a proxy default gateway for these subnets at the local leaf.


•    External Routing Options: Currently, iBGP sessions or static routing can be used between the border leaf and an external router, on a per-tenant basis. The scale of external routing adjacencies per leaf is currently 32 iBGP sessions, and only one session per tenant is allowed per leaf. A contract is required to allow destinations outside of the fabric to be reached from inside, and an external EPG is created to represent outside destinations.

•    L2 External: When Layer 2 connections are extended outside of the ACI fabric, L2 external connections can be configured, with a contract to secure traffic between external and internal endpoints.
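
To make these constructs concrete, the sketch referenced in the Contracts bullet above posts a small tenant configuration to the APIC REST API: one private network (VRF), one bridge domain with a subnet acting as the default gateway, one EPG bound to that bridge domain, and an HTTP contract provided by the EPG. The object class names follow the ACI management information model, but the tenant-specific names, addresses, and APIC credentials are placeholders; this is a minimal illustration rather than a complete tenant container.

import requests

APIC = "https://10.0.0.1"   # placeholder APIC address
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}).raise_for_status()

# One VRF, one BD with a subnet (pervasive gateway), one EPG, and an HTTP contract.
tenant_xml = """
<fvTenant name="ExampleTenant">
  <fvCtx name="Example_VRF"/>
  <fvBD name="Tier1_BD">
    <fvRsCtx tnFvCtxName="Example_VRF"/>
    <fvSubnet ip="192.168.10.1/24"/>          <!-- default gateway SVI for the BD -->
  </fvBD>
  <vzFilter name="http">
    <vzEntry name="tcp80" etherT="ip" prot="tcp" dFromPort="80" dToPort="80"/>
  </vzFilter>
  <vzBrCP name="allow-web">
    <vzSubj name="web">
      <vzRsSubjFiltAtt tnVzFilterName="http"/>
    </vzSubj>
  </vzBrCP>
  <fvAp name="App1">
    <fvAEPg name="Tier1_EPG">
      <fvRsBd tnFvBDName="Tier1_BD"/>
      <fvRsProv tnVzBrCPName="allow-web"/>    <!-- the EPG provides the HTTP contract -->
    </fvAEPg>
  </fvAp>
</fvTenant>
"""
resp = session.post(f"{APIC}/api/mo/uni.xml", data=tenant_xml)
print("Tenant constructs pushed, status", resp.status_code)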

From the perspective of IaaS services offered by Cloud Service Providers, the following considerations are used in this implementation:

1.    CSPs can use APIC tenancy constructs to provide multi-tenant role-based access control to the configuration. The APIC and ACI Fabric can scale to a large number of tenants; however, the currently released software has a verified scalability of 100 tenants.

2.    Cloud Service Providers for IaaS want to provide logical containers for hosting VMs without being aware of application specifics. On ACI this maps to the CSP providing Bridge Domains (L2 segments) to tenants and creating one EPG per Bridge Domain to host any number of applications. The contracts would need to allow access to all application services hosted in that L2 segment. While multiple EPGs can be mapped to the same BD, saving hardware resources in the leaf VLAN table, a separate BD per EPG is used in this implementation to isolate multicast and broadcast traffic.

3.    Use of L3 versus L2 based containers: currently the ACI fabric verified scalability is 100 VRFs (called private networks in ACI), so a VRF per tenant allows for that many tenants. To scale beyond that limit, for some tenancy models, instead of creating a per-tenant APIC tenant and APIC VRF, just an L2 segment is created and the default gateway is set up on an external device. This is a particularly good choice for low-end tenants with no features/services, such as the Bronze and Copper tenancy models, and it allows scaling the number of such tenants to a very high number.

4.    Use of service graphing: Service Graphing allows APIC to configure services devices such as Firewalls and load balancers. In the current software release there is no redirection capability, so all traffic has to be routed or switched to the services appliance explicitly. Additionally, there is no routing within the fabric, which restricts the stitching of services to a subset of scenarios. In this implementation, one-arm routed mode is used for the Server Load Balancer with the default gateway on the ACI fabric. For the ASA Firewall in routed mode, however, the default gateway has to be on the ASA Firewall and not on the ACI fabric, and hence that model is implemented in this release.

5.    Additional restrictions on service graphing are covered in detail in later chapters of this implementation guide.

Service Tiers
Cloud providers, whether Service Providers or Enterprises, want an IaaS offering that has multiple
feature tiers and pricing levels. To tailor workload or application requirements to specific customer
needs, the cloud provider can differentiate services with a multi-tiered service infrastructure and Quality
of Service (QoS) settings. The Cisco Intercloud architecture allows customers to build differentiated
service tiers and service level agreements that support their tenant or application requirements. Such
services can be used and purchased under a variable pricing model. Infrastructure and resource pools can
be designed so that end users can add or expand services by requesting additional compute, storage, or
network capacity. This elasticity allows the provider to maximize the user experience by offering a
custom, private Data Center in virtual form.


The Intercloud ACI 1.0 solution supports a reference multi-tier IaaS service model of Gold, Silver,
Bronze, and Copper tiers, very similar to what was shown in the previous Cisco VMDC reference
designs. These service tiers (or network containers) define resource and service levels for compute,
storage, and network performance. This is not meant to be a strict definition of appliance and resource
allocation, but to demonstrate how differentiated service tiers could be built. These are differentiated
based on the following features:

•    Network Resources: Differentiation based on network resources and features.

    –    Application Tiers: Service tiers can provide differentiated support for application hosting. In some instances, applications may require several tiers of VMs (for example, web, application, database, and so on). Intercloud ACI 1.0 Gold and Silver class tenant containers are defined with three application tiers on three separate Bridge Domains and 3 separate EPGs to host web, application, and database services on different VMs. The Bronze and Copper service is defined with one Bridge Domain and 1 EPG only, so multi-tiered applications must reside on the same L2 segment or potentially on the same VM (for example, a Linux or Windows Apache, MySQL, PHP/Perl/Python (LAMP/WAMP) stack).

    –    Access Methods and Security: The Gold and Silver service tiers are defined with separate service appliances per tenant to provide security and isolation. The Gold tier offers the most flexible access methods: Internet, L3VPN, and secure VPN access over the Internet. Also, the Gold tier has multiple security zones for each tenant. The Silver and Bronze tiers do not support any perimeter Firewall service and provide access through L3VPN only. The Copper tier supports access over the Internet only, along with perimeter Firewall service and NAT. In this release, the goal was to have all of the services implemented through the APIC using the service graphing feature. However, device package support for integrating with APIC was not yet available for certain functionality at the time of testing, notably NAT and RA-VPN/secure VPN access on the ASA device package. These services can still be implemented, albeit by directly configuring the service appliance itself, and in the future will be supported via APIC.

    –    Stateful Services: Tenant workloads can also be differentiated by the services applied to each tier. The Expanded Gold tier is defined with an ASA-based perimeter Firewall and dual security zones: a PVT zone and a DMZ zone. Both physical ASA 5585-X and ASAv models were validated, and either option can be used depending on customer requirements. The ASA 5585-X based implementation uses multi-context mode, with each tenant getting a context on a pair of physical ASAs, whereas if the virtual ASAv is used, each tenant gets a pair of single-context dedicated ASAvs. Support for configuring policies inside an ASA context on a multi-context ASA through APIC will come in a future release; in this implementation, beta code was used to validate this functionality. The Gold and Silver tiers are defined with a NetScaler 1000V SLB service. The Bronze tier is defined with no Firewall or SLB services. The Copper tier provides NAT and perimeter Firewall services with a context shared among all Copper tenants on the ASA 5585 Firewall.

    –    QoS: Bandwidth guarantee and traffic treatment can be a key differentiator. QoS policies can provide different traffic classes to different tenant types and prioritize bandwidth by service tier. The Gold tier supports VoIP/real-time traffic, call signaling, and data classes, while the Silver, Bronze, and Copper tiers have only a data class. Additionally, Gold and Silver tenants are guaranteed bandwidth, with Gold getting more bandwidth than Silver. In this release, ACI does not support rate-limiting. Also, delivering different classes of traffic to the same tenant requires either separating the traffic by EPGs or trusting the DSCP set by the tenant VM.

    –    VM Resources: Service tiers can vary based on the size of specific VM attributes such as CPU, memory, and storage capacity. The Gold service tier is defined with VM characteristics of 4 vCPU and 16 GB memory. The Silver tier is defined with VMs of 2 vCPU and 8 GB, while the Bronze and Copper tier VMs have 1 vCPU and 4 GB each.


•    Storage Resources: Storage multi-tenancy on NetApp FAS storage arrays using clustered Data ONTAP was implemented to provide dedicated NetApp storage VMs (SVMs) to Gold class tenants, whereas Silver tenants share a single SVM but use dedicated volumes, and Bronze and Copper tenants share volumes as well. Storage performance can also be differentiated; for example, the Gold tier is defined with 15000 rpm FC disks, the Silver tier on 10000 rpm FC disks, and the Bronze tier on Serial AT Attachment (SATA) disks. Additionally, to meet data store protection, recovery point, or recovery time objectives, service tiers can vary based on provided storage features such as Redundant Array of Independent Disks (RAID) levels, disk types and speeds, and backup and snapshot capabilities.

Table 1-1 lists the four service tiers or network container models defined and validated in the Intercloud
ACI 1.0 solution. Cloud providers can use this as a basis and define their own custom service tiers, based
on their own deployment requirements. For similar differentiated offerings for Compute and Storage,
reference service tiers can be found in previously published Cisco VMDC VSA 1.0 Implementation
Guide.
Table 1-1    Service Tiers

Secure Zones
    E-Gold: Two (PVT and DMZ)
    Silver: None
    Bronze: None
    Copper: One, shared with other Copper tenants

Perimeter Firewalls
    E-Gold: Two

Access Methods
    E-Gold: Internet, L3VPN, RA-VPN
    Silver: L3VPN
    Bronze: L3VPN
    Copper: Internet

Public IP/NAT
    E-Gold: Yes
    Silver: n/a
    Bronze: n/a
    Copper: Yes

VM L2 Segments (1 segment = 1 BD and 1 EPG)
    E-Gold: 3 in PVT zone, 1 in DMZ zone
    Silver: 3 in PVT
    Bronze: 1 in PVT
    Copper: 1

External Routing
    E-Gold: Static
    Silver: IBGP or Static
    Bronze: IBGP or Static
    Copper: EBGP or Static

Default Gateway
    E-Gold: ASA
    Silver: ACI Fabric
    Bronze: ACI Fabric
    Copper: ASA

Security between L2 Segments
    E-Gold: ASA
    Silver: ACI Fabric
    Bronze: Not available
    Copper: OpenStack security groups

Services
    E-Gold: ASA or ASAv based perimeter Firewall; ASA or ASAv based Firewall between L2 segments and the DMZ zone; NetScaler 1000V based SLB, one per zone; NAT on ASA (not via Service Graphs); RA-VPN with ASAv (not tested)
    Silver: NetScaler 1000V based SLB
    Bronze: None
    Copper: ASA based Internet Firewall; NAT (not via Service Graphing)

QoS
    E-Gold: 3 traffic classes allowed: (1) dscp=ef, real-time with low-latency switching; (2) dscp=cs3, call signaling (lumped with tenant data inside the ACI fabric); (3) tenant data mapped to the Premium Data class (BW guaranteed)
    Silver: All tenant data mapped to the Premium Data class (BW guaranteed)
    Bronze: Standard Data class, available BW service (best effort)
    Copper: Standard Data class, available BW service (best effort)

Reference IaaS Tenant Network Containers


The tenant network container is a logical (virtual) slice of the shared (common) physical network resources end-to-end through the Data Center that carries a specific tenant's traffic. The physical infrastructure is common to all tenants, but utilizing ACI multi-tenancy constructs, each tenant gets its own L2 segments and L3 routing instances, which connect the tenant compute through segregated overlay networks, isolated from other tenants, to the Data Center provider edge routers, where each tenant is isolated using VRFs and further extended via L3VPN to the tenant sites. Hence the tenants appear to have their own isolated network with independent IP addressing and security policies. The service appliances such as ASA Firewalls are either multi-context, with each tenant getting a context, or virtual appliances, with each tenant getting their own dedicated ASAv and NetScaler 1000V VMs.
Figure 1-5 shows the reference IaaS tenant containers defined in different versions of the Cisco VMDC reference architecture.


Figure 1-5    IaaS Tenant Containers

[Figure: the reference VMDC tenant container models (Zinc, Copper, Bronze, Silver, Gold, Expanded Gold, and Expanded Palladium), each composed of combinations of L3 and L2 segments, shared or dedicated firewalls (FW, vFW), load balancers (LB, vLB), protected front-end/back-end or public/private zones, and tenant VMs.]

In this document, implementation details of the Expanded-Gold, Silver, Bronze, and Copper containers are provided. A high-level overview of implementing these containers with ACI is provided here, and the specific implementation and configuration details are provided in subsequent chapters on each of the container types.
The simplest container, Bronze, is explained first, followed by Silver and E-Gold, and lastly the Copper container, which has a shared Firewall and Internet-based access for a low-cost tenancy model.

Bronze
The Bronze reference container is a simple, low-cost tenancy container.
Each Bronze tenant container has one Layer 2 segment for tenant VMs, implemented with one ACI
BD/EPG. There is one VRF on the Data Center provider edge for each Bronze tenant, and tenants access
their cloud service over L3VPN.
The Bronze Tenant traffic is mapped into the standard data class and can use available bandwidth (best
effort), that is, no bandwidth guarantee.
There are two options to implement Bronze with ACI, with different scaling considerations.

•    L3-Bronze: The default gateway for the VMs is on the ACI fabric. L3 external routing, either iBGP or static, is used between the ACI fabric and the Data Center provider edge for each of the L3-Bronze containers. On the Data Center provider edge router, a VRF for each L3-Bronze tenant is used with a sub-interface towards the ACI fabric. Two independent L3 links are configured to two different leafs to provide redundancy for high availability. Each leaf runs an iBGP session or has static routing configured.


Figure 1-6    Layer 3 Bronze Container Logical Topology

[Figure (redundant devices not shown): the customer VRF on the ASR 9000 nV connects over a VLAN sub-interface, via iBGP or static routing, to the Bxx tenant VRF in the ACI fabric; the Bxx_EPG/Bxx_BD maps to an AVS port-group created by APIC for the tenant VMs, with the default gateway on the ACI fabric. Per L3-Bronze IaaS tenant: 1 APIC tenant; 1 ACI VRF (private network); 1 DC-PE VRF; 1 L3-ext (iBGP or static); 1 BD and 1 subnet; 1 EPG plus 1 external EPG; 1 contract; 2 server leaf VLANs (EPG + BD); 1 border leaf VLAN.]

•    L2-Bronze: In this design the ACI fabric provides only a BD/EPG for each tenant, and the BD is configured without unicast routing. Tenant VMs have their default gateway on the Data Center provider edge ASR 9000 tenant VRF. An L2 external configuration on the BD is used, and ACI contracts can be set up to protect the tenant VMs for outside-to-inside traffic. The connection between the Data Center provider edge ASR 9000 nV cluster and the ACI fabric is a virtual port channel (vPC) connecting to two different ACI leaf nodes and to two different chassis on the ASR 9000 nV side.

Silver
Figure 1-7 shows the Silver container logical topology. The Silver tenant accesses its cloud service via L3VPN. Each Silver tenant container has 3 EPGs for tenant workloads, mapped to 3 different BDs, allowing for 3-tier applications. Additionally, the Silver tenant has a Server Load Balancer implemented using the NetScaler 1000V, configured via the APIC using service graphing with the NetScaler 1000V device package. Contracts on the ACI fabric can be used to enforce security policy between tiers as well as between the outside and the tiers.
This Silver service tier provides the following services:

•    Routing (iBGP) from the ACI Fabric to the Data Center Edge ASR 9000 router.

•    Access from MPLS-VPN to the tenant container (virtual data center).

•    One zone (PVT) to place workloads, with 3 BD/EPGs.

•    Default gateway on the ACI Fabric, with contracts and filters to implement policy between tiers.

•    SLB on the NetScaler 1000V to provide L4-L7 load balancing and SSL Offload services to tenant workloads.

•    Medium QoS SLA with one traffic class: premium data class for in-contract traffic.

•    Redundant virtual appliances for HA.


Figure 1-7    Silver Container Logical Topology

[Figure (redundant devices not shown): the customer VRF on the ASR 9000 connects via an L3-ext to the SiXX tenant VRF in the ACI fabric; the NetScaler 1000V sits in a VIP/SNIP subnet, and the three tenant tiers (T1_EPG, T2_EPG, T3_EPG, each with its own subnet) host the tenant VMs on AVS. Per Silver IaaS tenant: 1 APIC tenant; 1 ACI VRF (private network); 1 DC-PE VRF; 1 L3-ext (iBGP or static); 4 BDs and 5 subnets (3 tenant tiers, 1 VIP, 1 SNIP); 4 EPGs plus 1 external EPG; 3 contracts (out to t1, t1 to t2, t2 to t3); 8 server leaf VLANs (EPG + BD); 1 border leaf VLAN; 1 service graph (SG-lb) with 3 instances (SG-lb-t1, SG-lb-t2, SG-lb-t3).]

Expanded Gold Container (E-Gold)


Figure 1-8 shows the Expanded-Gold container logical topology. The Expanded Gold tenant gets two security zones to place workloads into, with two Firewall instances to protect each zone and the traffic between them. The Internet-facing connection has a DMZ Firewall instance and a DMZ Bridge Domain, with one EPG to host applications that run in the DMZ. This DMZ Firewall also has a connection to the PVT Firewall instance, another independent Firewall instance for this tenant that protects the private L2 segments hosting the secure backend applications. The PVT zone has 3 Bridge Domains, and each BD has an EPG for the endpoints in that BD. Connectivity via L3VPN is from the PVT Firewall instance.


Figure 1-8      Expanded-Gold Container Logical Topology

Per E-Gold IaaS Tenant:
•  APIC Tenant: 1
•  ACI VRF (private network): 2 (reserved for future)
•  DC-PE VRF: 1
•  L2-ext: 2
•  BDs: 9; Subnets: 0 (L2-only model used)
•  EPGs: 6 + 2 (external EPG)
•  Server Leaf VLANs: 8 (EPG + BD)
•  Border Leaf VLANs: 2
•  Service Graphs: 3, with instances as follows:
   - SG-fw-slb: 2 instances (SG-fw-slb-pvt, SG-fw-slb-dmz)
   - SG-fw-fw: 1 instance (SG-fw-fw)
   - SG-fw: 4 instances (SG-fw-pvt-t1, SG-fw-pvt-t2, SG-fw-pvt-t3, SG-fw-dmz)

This E-Gold service tier provides the highest level of sophistication by including the following services:

•  Default gateway for the VMs is on their respective zone Firewall; that is, for the PVT zone
BD/EPGs, the default gateway is on the PVT Firewall instance of the tenant, and the DMZ BD/EPG
VMs have their default gateway on the DMZ Firewall instance of the tenant. A default gateway on the
ASA is required in this design to use APIC integration for configuring the ASA Firewalls with the
Firewalls in routed mode.

•  Access from the Internet or MPLS-VPN to the tenant container (virtual data center).

•  2 Zones (PVT and DMZ) to place workloads. Each zone has its own BD/EPGs, which are basically
L2 segments.

•  Either a physical ASA-5585-X in multi-context mode, with each tenant getting dedicated contexts, or
a dedicated ASAv virtualized ASA can be used.

•  IPsec Remote-Access VPN using the ASA or ASAv, to provide Internet-based secure connectivity
for end users to their virtual data center resources. This was not implemented, as the device package
support to configure it via APIC is not yet available.

•  Stateful perimeter and inter-zone Firewall services to protect the tenant workloads via ASA or
ASAv.

•  Network Address Translation (NAT) on the ASA/ASAv, to provide static and dynamic NAT services
to RFC 1918-addressed VMs. However, configuring NAT via the APIC/device package has limitations
that don't allow it at this time; enhancements are in progress and will be supported in future
releases.

•  SLB on the NetScaler 1000V to provide L4-L7 load balancing and SSL Offload services to the tenant.
One NetScaler 1000V instance is deployed for each zone.

•  Higher QoS SLA and three traffic classes: real-time (VoIP), call signaling and premium data. Note
that within the data center, the call signaling and premium data travel in the same ACI class; however,
in the MPLS WAN, 3 separate classes are used, one each for VoIP, call signaling and data.

•  Redundant virtual appliances for HA.


The two zones can be used to host different types of applications to be accessed through different
network paths.
The two zones are discussed in detail below.

•  PVT Zone: The PVT, or Private Zone, and its VMs can be used for cloud services to be accessed
through the customer MPLS-VPN network. The customer sites connect to the provider MPLS Core
and the customer has their own MPLS-VPN (Customer-VRF). The Data Center Edge router (ASR
9000 provider edge) connects to the customer sites through the MPLS-VPN (via the
Customer-VRF). This Customer-VRF is connected through a VLAN on Virtual Port-channel to a
pair of ACI leafs, and configured as a L2-external connection in ACI. This L2-external connection
is extension of a bridge domain that also has the PVT Firewall ASA outside interface EPG. From
the perspective of the ASA, the next hop is the ASR9K sub-interface which is in the Customer-VRF.
The ASA is either a dedicated ASAv or ASA-5585 context. PVT BDs are L2 only BDs, that is, no
unicast routing, and default gateway is on the PVT ASA for the VMs in the BD/EPGs in PVT zone.

•  DMZ: The Intercloud ACI 1.0 E-Gold container supports a DMZ for tenants to place VMs into a
DMZ area, for isolating and securing the DMZ workloads from the PVT workloads, and also to
enable users on the Internet to access the DMZ-based cloud services. The ASR 9000 provider edge
WAN router is also connected to the Internet, and a shared (common) VRF instance (usually global
routing table) exists for all E-Gold tenants to connect to (either encrypted or unencrypted). The
ASR 9000 Internet table VRF is connected via an ASR 9000 sub-interface to the tenant's dedicated
DMZ Firewall, and the sub-interface VLAN is trunked over vPC to ACI Fabric and is mapped to a
L2-external on the DMZ-external BD. On this DMZ-external BD, an EPG exists that is mapped to
the external interface of the DMZ ASA FW. Thus, the DMZ FW outside interface and the ASR9000
sub-interface in global table are L2 adjacent and IP peers. The ASR9000 has a static route for the
tenant public addresses pointing to the DMZ ASA FW outside interface address, and redistributes
static into BGP for advertising towards Internet. The DMZ ASA FW has a static default pointing
back to the ASR 9000 sub-interface, as well as static routes towards L3VPN and PVT subnets
pointing back to the PVT FW.

The DMZ can be used to host applications like proxy servers, Internet-facing web servers, email servers,
etc. The DMZ consists of one L2 segment implemented using a BD and an EPG and default gateway is
on the DMZ ASA FW. For SLB service in the DMZ, there is a NetScaler 1000V. For RA-VPN service,
currently the integration with APIC to configure this service does not exist, hence manual configuration
of the ASAv is required.
As an option, the E-Gold container may be deployed in a simplified manner with only one zone: either
the PVT zone only, with the PVT Firewall and L3VPN connection (previous VMDC designs called this
the Gold container), or the DMZ only, with the DMZ Firewall and access via the Internet only, plus
additional secure access via RA-VPN (similar to the Zinc container in the previously released VMDC
VSA 1.0 solution).

Copper
Figure 1-9 shows the Copper container logical topology. The Copper tenant gets one zone to place
workloads into and just one L2 segment for tenant VMs, implemented with one ACI BD/EPG, and
default gateway is on the ASA Shared Firewall. Multiple Copper tenants share the same Firewall, with
each tenant getting a different inside interface, but sharing the same outside/Internet-facing interface.
ASA Security policy restricts access to the tenant container from outside or other tenants, as well as
allows for NAT for reduced public address consumption.

•  Routing (static or eBGP) from the ASA shared Firewall to the Data Center provider edge ASR
9000, to connect all of the Copper tenant virtual data centers to the global table (Internet) instance
on the ASR 9000 router, and to advertise all of the tenants' public IP addresses towards the Internet.


•  Access from the Internet to the tenant container (virtual data center).

•  ASA Firewall security policy allows only restricted services and public IPs to be accessed from
outside.

•  1 Zone (PVT) to place workloads, with 1 L2 segment in the zone.

•  Lower QoS SLA with one traffic class, standard data.

•  The shared ASA context is configured manually, i.e., ACI service graphing is not utilized.

Figure 1-9      Logical Copper Container

Many tenants, Cu1 to Cuxxxx, each with an L2 segment and VMs on that segment, with the default
gateway on a tenant-specific sub-interface of the shared ASA context (shared by all Copper customers,
no service graphing; tenant port-groups are configured manually with no APIC integration).

Solution Components
Table 1-2 and Table 1-3 list Cisco and third-party product components for this solution, respectively.
Table 1-2       Cisco Products

Product          Description                     Hardware                        Software
ASR 9000         Data Center Provider Edge       ASR9010-NV                      IOS-XR 5.1.2
                                                 A9K-RSP440-SE
                                                 A9K-24x10GE-SE
                                                 A9K-MOD80-SE
                                                 A9K-MPA-4X10GE
APIC             Centralized Policy Controller   APIC-CLUSTER-L                  1.0(2j)
Nexus 9500       ACI Fabric Spine                Nexus 9508                      11.0(2j)
                                                 9736PQ
Nexus 9300       ACI Fabric Leaf                 Nexus 9396                      11.0(2j)
                                                 Nexus 93128
UCS 6200         UCS Fabric Interconnect         UCS-FI-6296UP                   2.2(1d)
UCS B-Series     Blade Servers                   UCS-5108, B200-M3,              2.2(1d)
                                                 UCS VIC 1240/1280,
                                                 UCS 2204XP
UCS C-Series     Rack Servers                    C240-M3, C220M3                 CIMC: 2.0(1a)
Nexus 2000       FEX                             Nexus 2232PP                    11.0(2j)
ASA-5585-X       ASA Firewall                    ASA-5585-X w SSP60              9.3.1
ASAv             ASA Virtual Firewall                                            9.3.1
                                                                                 Device package - 1.0.1
NetScaler 1000V  Server Load Balancer,                                           10.1
                 virtualized                                                     Device package - 10.5
Cisco AVS        Application Virtual Switch                                      4.2(1)SV2(2.3)

Table 1-3       Third Party Products

Product          Description                     Hardware                        Software
VMWare ESXi      Hypervisor                      N/A                             5.1.0 Build 1483097
VMWare vCenter   Management tool                 N/A                             5.1.0 Build 1473063
NetApp FAS3250   Storage Array                   FAS3250                         8.2.2 cDoT
Linux            Tenant VM                                                       Centos
                                                                                 Ubuntu 14.04 LTS
Linux            Openstack Nodes                                                 Ubuntu 14.04 LTS
Openstack        Cloud Platform                                                  Icehouse release
Ceph             Software defined storage                                        0.80.5


CHAPTER 2

ACI Policy Model


The Cisco InterCloud Application Centric Infrastructure (ACI) fabric is a model-driven architecture.
The policy model manages the entire fabric, including the infrastructure, authentication, security,
services, applications, and diagnostics. Logical constructs in the policy model define how the fabric
meets the needs of any fabric function. Figure 2-1 provides an overview of the ACI policy model logical
constructs.
Figure 2-1      ACI Policy Model Logical Constructs

As a model-driven architecture, Cisco Application Policy Infrastructure Controller (APIC) maintains a


complete representation of the administrative and operational state of the system (the model). The model
applies uniformly to fabric, services, system behaviors, as well as virtual and physical devices attached
to the network. The logical model itself consists of objects - configuration, policies and runtime states and their attributes. In the ACI framework, this model is known as the management information tree
(MIT). Each node in the MIT represents a managed object or group of objects. These objects are
organized in a hierarchical way, creating logical object containers. Every managed object in the system
can be identified by a unique distinguished name. Figure 2-2 depicts the logical hierarchy of the MIT
object model.


Figure 2-2

Management Information Tree Overview

In the ACI framework, a tenant is a logical container (or a unit of isolation from a policy perspective)
for application policies that enable an administrator to exercise domain-based access control. Tenants
can represent a customer in a service provider setting, an organization or domain in an enterprise setting,
or just a convenient grouping of objects and policies. Figure 2-3 provides an overview of the tenant
portion of the MIT. The tenant managed object is the basis for the Expanded Gold Tenant Container.
Figure 2-3      ACI Tenant

Legend: Solid lines indicate that objects contain the ones below. Dotted lines indicate a relationship.
1:n indicates one to many; n:n indicates many to many.
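
As a minimal illustration of this hierarchy, the sketch below creates a tenant with one context (VRF), one
bridge domain with a subnet, and one EPG inside an application profile. The tenant, VRF, BD, EPG names
and the subnet address are hypothetical placeholders, not values from this solution's configuration.

POST https://{apic_ip_or_hostname}/api/mo/uni.xml

<fvTenant name="ExampleTenant">
    <!-- Private network (context/VRF) for the tenant -->
    <fvCtx name="ExampleTenant_VRF"/>
    <!-- Bridge domain with its default gateway subnet, bound to the VRF -->
    <fvBD name="ExampleTenant_BD">
        <fvRsCtx tnFvCtxName="ExampleTenant_VRF"/>
        <fvSubnet ip="192.168.10.1/24"/>
    </fvBD>
    <!-- Application profile containing one EPG, bound to the bridge domain -->
    <fvAp name="ExampleTenant_AP">
        <fvAEPg name="ExampleTenant_EPG">
            <fvRsBd tnFvBDName="ExampleTenant_BD"/>
        </fvAEPg>
    </fvAp>
</fvTenant>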

Accessing Managed Object Data through REST API


Representational state transfer (REST) is an architectural style consisting of a coordinated set of
architectural constraints applied to components, connectors, and data elements, within a distributed
hypermedia system. REST-style architectures conventionally consist of clients and servers; clients
initiate requests to servers, while servers process requests and return appropriate responses. The REST
style builds requests and responses around the transfer of representations of resources. A resource can
be any body of information, static or variable. A representation of a resource is typically a document that
captures the current or intended state of a resource.
APIC supports a REST Application Programming Interface (API) for programmatic access to the MOs on
the ACI fabric. The API accepts and returns HTTP or HTTPS messages that contain a JavaScript Object
Notation (JSON) or Extensible Markup Language (XML) data structure and provide the essential
information necessary to execute the command.


Note

In the JSON or XML data structure, the colon after the package name is omitted from class names and
method names. For example, in the data structure for a managed object of class zzz:Object, label the
class element as zzzObject.
Managed objects can be accessed with their well-defined address, the REST URLs, using standard HTTP
commands. The URL format used can be represented as follows:
{http|https}://host[:port]/api/{mo|class}/{dn|className}.{json|xml}[?options]

Where:

host            Specifies the hostname or IP address of APIC.

port            (Optionally) specifies the port number for communicating with APIC.

api             Specifies that the message is directed to the API.

mo|class        Specifies whether the target of the operation is a managed object or an object class.

dn|className    Specifies the DN of the targeted managed object, or the name of the targeted class.

json|xml        Specifies whether the encoding format of the command or response HTML body is
                JSON or XML.

?options        (Optionally) specifies one or more filters, selectors, or modifiers to a query. Multiple
                option statements are joined by an ampersand (&).

By default, only HTTPS is enabled on APIC. HTTP or HTTP-to-HTTPS redirection, if desired, must be
explicitly enabled and configured. HTTP and HTTPS can coexist on APIC.
The API supports HTTP POST, GET, and DELETE request methods as follows:

•  An API command to create or update a managed object, or to execute a method, is sent as an HTTP
POST message.

•  An API query to read the properties and status of a managed object, or to discover objects, is sent
as an HTTP GET message.

•  An API command to delete a managed object is sent as either an HTTP POST or DELETE message.
In most cases, a managed object can be deleted by setting its status to deleted in a POST operation.
The HTML body of a POST operation must contain a JSON or XML data structure that provides the
essential information necessary to execute the command. No data structure is required with a GET or
DELETE operation.
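
For illustration, the sketches below show the three request types against example targets; the tenant name
used in the delete example is a placeholder, while uni/tn-common and the fvTenant class are standard ACI
objects.

<!-- Read a single managed object by its DN, with an XML response -->
GET https://{apic_ip_or_hostname}/api/mo/uni/tn-common.xml

<!-- Query all managed objects of a class, with a JSON response -->
GET https://{apic_ip_or_hostname}/api/class/fvTenant.json

<!-- Delete a managed object by setting its status to deleted in a POST -->
POST https://{apic_ip_or_hostname}/api/mo/uni.xml
<fvTenant name="ExampleTenant" status="deleted"/>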

Note

The API is case sensitive. When sending an API command with 'api' option in the URL, the maximum
size of the HTML body for the POST request is 1 MB.
The API model documentation is embedded within APIC, accessible with the following URL:
https://{apic_ip_or_hostname}/doc/html/


Authenticating and Maintaining an API Session


APIC requires user authentication before allowing access to the API. On APIC, when a login message
is accepted, the API returns a data structure that includes a session timeout period and a token that
represents the session. The session token is also returned as a cookie in the HTTP response header. The
login refresh message allows the user to maintain the API session if no other messages are sent for a
period of time longer than the session timeout period. The token changes each time the session is refreshed.
These API methods manage session authentication:

•  aaaLogin: Sent as a POST message to log in a user and open a session. The message body contains
an aaa:User object with the name and password attributes, and the response contains a session token
and cookie.

•  aaaRefresh: Sent as a GET message with no message body, or as a POST message with the
aaaLogin message body, this method resets the session timer. The response contains a new session
token and cookie.

•  aaaLogout: Sent as a POST message, to log out the user and close the session. The message body
contains an aaa:User object with the name attribute. The response contains an empty data structure.

The example below shows a user login message that uses an XML data structure. The example makes use
of a user ID with a login domain, in the following format:
apic#{loginDomain}\{userID}
POST https://{apic_ip_or_hostname}/api/aaaLogin.xml
<aaaUser name="apic#my_login_domain\my_user_id" pwd="my_pA5sW0rd" />

After the API session is authenticated and established, retrieve and send the token or cookie with all
subsequent requests for the session.
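
As a sketch of session maintenance, the requests below refresh and then close the session opened above.
The login domain and user ID are the same placeholders used in the login example, and the session token
is assumed to be resent as the APIC-cookie value on each request.

<!-- Resend the session cookie (APIC-cookie=<token>) with every request -->
GET https://{apic_ip_or_hostname}/api/aaaRefresh.xml

POST https://{apic_ip_or_hostname}/api/aaaLogout.xml
<aaaUser name="apic#my_login_domain\my_user_id"/>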

Layer 4 to Layer 7 Service Insertion


ACI treats services as an integral part of an application. Any services that are required are treated as a
service graph that is instantiated on the ACI fabric from APIC. A service graph is represented as two or
more tiers of an application with the appropriate service function inserted between. APIC provides the
user with the ability to define a service graph with a chain of service functions such as application
firewall, load balancer, SSL offload, and so on. The service graph defines these functions based on a
user-defined policy for a particular application.
Figure 2-4      Service Insertion Graph

A service graph is inserted between source/provider EPG and destination/consumer EPG by a contract.
After the service graph is configured on APIC, APIC automatically configures the services according to
the service function requirements that are specified in the service graph. APIC also automatically
configures the network according to the needs of the service function that is specified in the service
graph. A physical or virtual service appliance/device performs the service function within the service
graph. A service appliance, or several service appliances, render the services required by a service graph.
A single service device can perform one or more service functions.
APIC offers a centralized touch point for configuration management and automation of L4-L7 services
deployment, using the device package to configure and monitor service devices via the southbound
APIs. A device package manages a class of service devices, and provides APIC with information about
the devices so that the APIC knows what the device is and what the device can do. A device package is
a zip file that contains the following:

•  Device Specification: The device specification is an XML file that provides a hierarchical
description of the device, including the configuration of each function, and is mapped to a set of
managed objects on APIC. The device specification defines the following:
   - Model: Model of the device.
   - Vendor: Vendor of the device.
   - Version: Software version of the device.
   - Functions provided by the device, such as firewall, L4-L7 load balancing, SSL offload, etc.
   - Configuration parameters for the device.
   - Interfaces and network connectivity information for each function.
   - Service parameters for each function.

•  Device Script: The device script, written in Python, manages communication between the APIC
and the service device. It defines the mapping between APIC events and the function calls that are
defined in the device script. The device script converts the L4-L7 service parameters to the
configuration that is downloaded onto the service device.

Figure 2-5 shows the APIC service automation and insertion architecture through the device package.
Figure 2-5      APIC Service Automation and Insertion Architecture via Device Package

After a unique device package is uploaded on APIC, APIC creates a namespace for it. The content of the
device package is unzipped and copied to the name space. The device specification XML is parsed, and
the managed objects defined in the XML are added to the APIC's managed object tree. The tree is
maintained by the policy manager. The Python scripts that are defined in the device package are
launched within a script wrapper process in the namespace. Access by the device script to the APIC's
file system is restricted.
Multiple versions of a device package can coexist on the APIC, because each device package version
runs in its own namespace. Administrators can select a specific version for managing a set of devices.
The following REST request uploads the device package on APIC. The body of the POST request should
contain the device package zip file being uploaded. Only one package is allowed in a POST request:
POST https://{apic_ip_or_hostname}/ppi/mo.xml


Note

When uploading a device package file with 'ppi' option in the URL, the maximum size of the HTML
body for the POST request is 10 MB.

L4 to L7 Service Parameters
The XML file within the device package describes the specification for the service device. This
specification includes device information as well as various functions provided by the service device.
This XML specification contains the declaration for the L4-L7 service parameters needed by the service
device. The L4-L7 service parameters are needed to configure various functions that are provided by the
service device during service graph instantiation.
You can configure the L4-L7 service parameters as part of the managed objects such as bridge domains,
EPGs, application profiles, or tenant. When the service graph is instantiated, APIC passes the parameters
to the device script that is within the device package. The device script converts the parameter data to
the configuration that is downloaded onto the service device. Figure 2-6 shows the L4-L7 service
parameters hierarchy within a managed object.
Figure 2-6      L4-L7 Service Parameters

The vnsFolderInst is a group of configuration items that can contain vnsParamInst and other nested
vnsFolderInst. A vnsFolderInst has the following attributes:

•  Key: Defines the type of the configuration item. The key is defined in the device package and can
never be overwritten. The key is used as a matching criterion as well as for validation.

•  Name: Defines the user-defined string value that identifies the folder instance.

•  ctrctNameOrLbl: Finds a matching vnsFolderInst during parameter resolution. For a
vnsFolderInst to be used for parameter resolution, this attribute must match the name of the
contract that is associated with the service graph. Otherwise, this vnsFolderInst is skipped and
parameters are not used from this vnsFolderInst. The value of this field can be "any" to allow this
vnsFolderInst to be used for all contracts.

•  graphNameOrLbl: Finds a matching vnsFolderInst during parameter resolution. For a
vnsFolderInst to be used for parameter resolution, this attribute must match the service graph
name. Otherwise, this vnsFolderInst is skipped and parameters are not used from this vnsFolderInst.
The value of this field can be "any" to allow this vnsFolderInst to be used for all service graphs.

•  nodeNameOrLbl: Finds a matching vnsFolderInst during parameter resolution. For a
vnsFolderInst to be used for parameter resolution, this attribute must match the function node
name. Otherwise, this vnsFolderInst is skipped and parameters are not used from this vnsFolderInst.


The value of this field can be any to allow this vnsFolderInst to be used for all nodes in a service
graph.
The vnsParamInst is the basic unit of configuration parameters that defines a single configuration
parameter. A vnsParamInst has the following attributes:

•  Key: Defines the type of the configuration item. The key is defined in the device package and can
never be overwritten. The key is used as a matching criterion as well as for validation.

•  Name: Defines the user-defined string value that identifies the parameter instance.

•  Value: Holds the value for a given configuration item. The value of this attribute is service device
specific and depends on the Key. The value of this attribute is case sensitive.

The vnsCfgRelInst allows one vnsFolderInst to refer to another vnsFolderInst. A vnsCfgRelInst has the
following attributes:

•  Key: Defines the type of the configuration item. The key is defined in the device package and can
never be overwritten. The key is used as a matching criterion as well as for validation.

•  Name: Defines the user-defined string value that identifies the config relationship/reference
instance.

•  targetName: Holds the path for the target vnsFolderInst. The value of this attribute is case
sensitive.

Note

By default, if the L4-L7 service parameters are configured on an EPG, APIC only picks up the L4-L7
service parameters configured on the provider EPG; parameters configured on the consumer EPG are
ignored. The vnsRsScopeToTerm relational attribute for a function node or a vnsFolderInst specifies the
terminal node where APIC picks up the parameters.
When a service graph is instantiated, APIC resolves the configuration parameters for a service graph by
looking up the L4-L7 service parameters from various MOs. After resolution completes, the parameter
values are passed to the device script. The device script uses these parameter values to configure the
service on the service device. Figure 2-7 shows the L4-L7 service parameter resolution flow.
Figure 2-7      L4-L7 Service Parameter Resolution Steps

The parameter resolution flow shown in the figure includes the following steps:
1. Look up service parameters declared in the device package. These service parameters are used as
   the input for the resolution phase.
2. Look up service parameters configured on the function profile. Use the configuration values as the
   default values for the service parameters.
3. Look up service parameters configured on the function node in the service graph; these values
   overwrite the defaults from the function profile.
4. Look up service parameters configured on the EPG, Application Profile, Tenant, or other MO; these
   values overwrite the values from the function nodes.
5. Use the scopedBy attribute to find the starting MO; resolution starts from this MO, walking up the
   tree towards the tenant to resolve the service parameters.

Note

By default, the scopedBy attribute of the L4-L7 service parameter is set to epg; APIC starts the parameter
resolution from the EPG, walking up the MIT to the application profile and then to the tenant to resolve
the service parameter.


The flexibility of being able to configure L4-L7 service parameters on various MOs allows an
administrator to configure a single service graph and then use it as a template for instantiating different
service graph instances for different tenants or EPGs, each with its own L4-L7 service parameters.
A simple service graph template has a function node representing a single service function, and two
terminal nodes that connect the service graph to the contract. When mapped to a service device, the
service graph results in a service device with one external interface and one internal interface. Utilizing
multiple logical device contexts, and the flexible configuration of L4-L7 service parameters on various
MOs, with the ctrctNameOrLbl, graphNameOrLbl, and nodeNameOrLbl attributes set to the appropriate
contact, service graph and function node, a service appliance with more than two interfaces can be
modeled by mapping multiple service graph instances onto the same service device.
Figure 2-8      Service Graph Instances Mapped onto the Same Service Device

Figure 2-8 shows the setup where a single service graph template (with a single ASA firewall function
node) is used to instantiate three service graph instances onto a single ASA security appliance. The
L4-L7 service parameters for each service graph instance could be configured on the inside1_epg,
inside2_epg, and inside3_epg provider EPG respectively, with the ctrctNameOrLbl attribute set to
contract1, contract2 and contract3 respectively.
The L4-L7 service parameters for modeling the outside_if related configurations could either be
repeated three times on inside1_epg, inside2_epg, and inside3_epg EPGs; or more conveniently
configured on the application profile or tenant managed object, with the ctrctNameOrLbl attribute set to
any for APIC to be able to use the parameters for all three service graph instances.

Note

The L4-L7 service parameters for modeling the outside_if related configurations are not configured on
the outside_epg consumer EPG because, by default, APIC does not pick up parameters configured on the
consumer EPG.
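
As a hedged sketch of how such parameters might be attached to a provider EPG, the fragment below
places a vnsFolderInst with one vnsParamInst and one vnsCfgRelInst under inside1_epg, scoped to
contract1. The tenant and application profile names, as well as the folder, parameter and relation keys,
are purely illustrative; the real keys are dictated by the device package in use.

POST https://{apic_ip_or_hostname}/api/mo/uni/tn-ExampleTenant.xml

<fvAp name="Example_AP">
    <fvAEPg name="inside1_epg">
        <!-- Keys below are placeholders; actual keys come from the device package -->
        <vnsFolderInst key="ExampleFolderKey" name="externalIfCfg"
                       ctrctNameOrLbl="contract1" graphNameOrLbl="any" nodeNameOrLbl="any">
            <vnsParamInst key="ExampleParamKey" name="outside_ip" value="203.0.113.10"/>
            <vnsCfgRelInst key="ExampleRelKey" name="refToOtherFolder" targetName="someOtherFolder"/>
        </vnsFolderInst>
    </fvAEPg>
</fvAp>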


CHAPTER 3

Data Center Fabric Implementation with ACI


This chapter details the steps for Data Center Fabric implementation in the Cisco Intercloud Data Center
Application Centric Infrastructure 1.0 solution.
The Data Center network in this implementation uses Cisco ACI technology. This is a distributed
switching fabric with centralized policy control and configuration with programmability and
standardized south bound interfaces to configure service nodes such as Firewalls and Server Load
Balancers (SLB). Within the fabric, enhanced VXLAN overlays are used, which allow for highly scalable
tenancy and tenant segments.
For more information on Cisco ACI 1.0, refer to:
http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html
In this implementation, the ACI Fabric forms the Data Center network with all data center devices
attaching to it. This allows for highly flexible designs with all devices in the Data Center being able to
reach other devices through a high throughput low latency and highly available network, implemented
as a Clos network.
The ACI Fabric consists of spine nodes and leaf nodes. Spine nodes are dedicated to connect only ACI
leafs and provide paths for communication between the ACI leaf nodes. Spines are implemented using
Nexus 9508s with 9736 Spine Line cards. Alternate spine nodes are Nexus 9336PQ and Nexus 9504
switches. The leaf nodes are Nexus 9396PX and Nexus 93128TX.
Additionally, the Application Policy Infrastructure Controller (APIC) is the centralized policy controller
and all configurations for the entire Data Center Fabric are done through the APIC.

Fabric Implementation Highlights


In this implementation, the ACI Fabric is implemented with the following key characteristics to meet
the goals of flexibility, scale and high availability:

1. Four (4) Nexus 9508 spine nodes are used. A minimum of 2 spine nodes are needed for high
   availability (HA), and adding more nodes increases resiliency by reducing the failure domain when a
   single spine node fails.

2. Each Nexus 9300 series leaf node attaches to each spine via a single 40G link. With 4 spine nodes,
   each leaf node has 4 x 40 Gbps = 160 Gbps of bandwidth to the fabric. Each leaf has 12 40G uplink
   ports on the N9K-M12PQ daughter card, or 6 40G uplink ports on the N9K-M6PQ.

3. The number of ports on the spine nodes defines the size of the fabric in terms of the number of leafs.
   This can scale quite high with 9736 spine cards; each card has 36 40G ports.

4. The spine nodes themselves connect only to the leaf nodes, and each leaf node connects to all spine
   nodes using 40G connections.

5. The APIC is attached to the leaf nodes. 3 APIC nodes work in a cluster and are attached to different
   leaf nodes for resiliency.

6. Leaf node to spine node connections are implemented with QSFP+ BiDi optics and use dual-strand
   fiber connections.

7. All Data Center devices are attached to Nexus 9300 leaf nodes using the 10Gbps ports. This includes
   compute, storage, service nodes, and the Data Center provider edge (PE) ASR 9000.

8. While some leaf nodes can be dedicated to certain roles such as border leaf, service leaf, etc., in this
   implementation every leaf node can be used in any role. The technology itself does not require
   dedicating such roles, but for operational reasons it might be desirable to dedicate leaf nodes to
   certain types of roles.

9. Devices attach to the leaf nodes using different resiliency models. For L2 connections, virtual
   port channels (vPC) are used.

10. For L3 connections, two leaf nodes run open-jaw connections (no diagonal connections) with
    routing adjacency to 2 external routers for high availability, or, as in this implementation, to the same
    ASR 9000 nV cluster with links to different ASR 9000 chassis. The leaf nodes can run iBGP, OSPF
    or static routing for each tenant to the external router over an interface, sub-interface or SVI. In this
    implementation, routing is set up over SVIs on port-channel trunks, with each tenant on a VLAN. On
    the external router side, sub-interfaces are used for each of these VLANs and are placed in the
    tenant-specific VRF for tenant separation.

11. Blade servers attach via vPC to the UCS Fabric Interconnects. Rack servers attach directly or through
    FEX. The Nexus 93128TX is oversubscribed, and 96 hosts can be attached to a pair of 93128TXs with
    RJ-45 based 10GBASE-T connections. The hosts need to have the same port types for the 10G
    connection; with Cisco C-Series, VIC-1225Ts are used.

12. Storage devices, a NetApp 3250 cluster, are attached to the leaf nodes as well. NetApp controllers in a
    cluster are distributed, and in this implementation 4 NetApp controllers are used. Each pair of 3250s
    attaches to a Nexus 9300 node pair using vPC.

13. ASA-5585 clusters are attached to the Nexus 9300 leaf nodes. The cluster control link (CCL) is
    attached using vPC to a pair of leaf nodes. The spanned EtherChannel connections are to the Nexus
    9300 leaf node pair as well.

APIC Attachment Points


APIC controllers are a set of 3 servers, and connected at different points in the ACI Fabric. Each APIC
server connects to a pair of leaf nodes, with 10G ports, in Active/Standby configuration. The 1Gbps
out-of-band connection is wired to an out-of-band (OOB) switched network.
Of the 3 APICs, 2 are connected to the same pair of leaf nodes, and the 3rd one is connected to a different
pair of leaf nodes. This is to provide resiliency; in production environments, the recommended design is
to connect the APICs to different leaf node pairs.


Fabric Load Balancing


Each leaf node connects to multiple spines, in this implementation to 4 spines. Hence there are ECMP
paths between the leaf nodes. Traffic from leaf to leaf is load balanced across the spines using a static
hash to evenly load the links. Additionally, to balance the links more evenly, dynamic load balancing is
enabled to provide flowlet switching; in this case groups of packets, called flowlets, within a flow are
distributed on different links during congestion to improve link utilization. Figure 3-1 shows the load
balancer policy in APIC.
Figure 3-1

Load Balancer Policy in APIC

External Connectivity to PE
The ACI Fabric is connected to ASR 9000 nV using single link, port channel (PC) or virtual port channel
(vPC) depending on the tenant container model. On the border leaf nodes, these links are configured as
layer 2 ports to carry the VLAN that is used by the tenant for external connectivity. These VLANs
terminate on a sub-interface that is configured on the ASR 9000 bundle-Ethernet interface.

vPC Connectivity to PE for L2 Extension


The Gold, Copper, L2-Bronze and Internet tenants use L2 extension from the ACI Fabric using vPC. In
the validated topology, two vPCs are defined on the ACI Fabric to connect to ASR 9000. For Gold and
L2-Bronze, vPC is created on border leaves 105 and 106. For Copper and Internet, leaf nodes 103 and
104 are used instead.

Note

Unlike traditional vPC architecture on the Nexus, the ACI implementation of vPC does not require
peer-link configuration between the vPC peers.


Figure 3-2      vPC Connectivity to ASR 9000

VLAN-xxx corresponds to the VLAN used for external connectivity for a given tenant. These VLANs
are typically defined in a VLAN pool in APIC and associated with a physical domain prior to creating a
tenant. Since the ASR 9000 is in nV cluster mode, it is recommended to distribute the port channel
member links across different ASR 9000 chassis to provide chassis level redundancy.
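
As a sketch of that pre-provisioning, the fragment below defines a static VLAN pool and associates it
with a physical domain. The pool name, domain name and VLAN range are illustrative placeholders rather
than the exact values used in this implementation.

POST https://{apic_ip_or_hostname}/api/mo/uni.xml

<polUni>
    <infraInfra>
        <!-- Static VLAN pool used for tenant external connectivity -->
        <fvnsVlanInstP name="external_vlan_pool" allocMode="static">
            <fvnsEncapBlk from="vlan-3021" to="vlan-3070"/>
        </fvnsVlanInstP>
    </infraInfra>
    <!-- Physical domain that consumes the VLAN pool -->
    <physDomP name="external_phys_dom">
        <infraRsVlanNs tDn="uni/infra/vlanns-[external_vlan_pool]-static"/>
    </physDomP>
</polUni>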

Note

In Figure 3-2, a single 10G link is used on the border leaf switch; however, you can have 2 or more ports
assigned per leaf switch to provide additional link-level redundancy.
The following show command displays the physical connectivity between the border leaf and the ASR
9000 for Gold and L2-Bronze container setup.
RP/0/RSP1/CPU0:v6-pe-NV#sh run int bundle-ether 9
Tue Nov 4 16:02:00.885 EST
interface Bundle-Ether9
!
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te0/0/0/0
Tue Nov 4 16:04:19.424 EST
interface TenGigE0/0/0/0
description v6-leaf5-9396::e1/35
bundle id 9 mode active
cdp
!
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te0/0/0/1
Tue Nov 4 16:04:39.875 EST
interface TenGigE0/0/0/1
description v6-leaf6-9396::e1/35
bundle id 9 mode active
!
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV#sh lldp nei | inc BE9
Tue Nov 4 09:38:04.700 EST
v6-l3a          Te0/0/0/0[BE9]         120        B,R          Eth1/35
v6-l3b          Te0/0/0/1[BE9]         120        B,R          Eth1/35
RP/0/RSP1/CPU0:v6-pe-NV#sh lldp nei | inc BE10
Tue Nov 4 09:38:46.509 EST
v6-l2a          Te1/0/1/2[BE10]        120        B,R          Eth1/33
v6-l2b          Te1/1/1/2[BE10]        120        B,R          Eth1/33
RP/0/RSP1/CPU0:v6-pe-NV#


vPC Configuration in ACI Fabric


This section details the steps to configure a vPC on a pair of border leaf switches connecting to the ASR
9000 nV edge device. These procedures can be reused when you create vPCs to other devices such as
UCS Fabric Interconnects, ASA clusters or bare metal servers.

1. Configure a vPC domain
2. Configure LACP Profile Policy
3. Configure vPC Policy Group
4. Configure Interface Profile
5. Configure a Switch Profile

The following section describes the procedure for creating a vPC in APIC. Border leaf nodes 105 and
106 are used in this example for illustration purposes. On each border leaf, interface e1/35 is used to
create the vPC as shown in Figure 3-2. This vPC connects to Bundle-Ethernet 9 on the ASR 9000.
Step 1: Configure a vPC domain.

The vPC domain identifies the leaf nodes that define a virtual port channel.
To configure a vPC domain, log in to the APIC GUI with administrative privileges. On the menu bar,
choose the FABRIC tab and select Access-Policies. In the navigation pane, expand Switch Policies and
VPC Domain. Right-click on Virtual Port Channel default and select Create vPC Explicit
Protection Group.
Figure 3-3

Create vPC Explicit Protection Group-1

In the pop-up window enter relevant information as shown in Figure 3-4 and submit the configuration.


Figure 3-4      Create vPC Explicit Protection Group-2

Step 2: Configure LACP Profile Policy.

The LACP Profile policy is used to configure the LACP mode, such as active, passive or MAC pinning.
The default mode is OFF.
To configure a policy with LACP mode set to "Active," expand Interface Profiles in the navigation
window and right-click LACP under Policies. Select Create LACP Policy (Figure 3-5).
Figure 3-5

LACP Policy

In the pop-up window enter a name for this policy, select "Active" mode and submit the configuration
(Figure 3-6).


Figure 3-6      Create LACP-Active Policy

Step 3: Configure vPC Policy Group.


In the navigation pane, expand Interface Policies and right-click on Policy Groups. Select Create VPC
Interface Policy Group (Figure 3-7).
Figure 3-7

Interface Policy Groups

In the pop-up window, enter a name for the policy group and select the LACP policy that was created in
the previous step (Figure 3-8). You may also enable LLDP or CDP as needed.


Figure 3-8      Create VPC Interface Policy Group

Step 4: Configure Interface Profile.


To create an interface profile, right click on Profiles under "Interface Policies" and select "Create
Interface Profile".
Figure 3-9

Interface Profile

In the pop-up window, enter a name for the interface profile and click on the "+" sign in the Interface
Selector box (Figure 3-9).


Figure 3-10

Create Interface Profile

Provide a name for the access port selector and enter the interface used for the vPC port channel. Select
the interface policy group that was created in the previous step and click OK to close this window. Click
on submit button to finish the interface profile creation (Figure 3-11).
Figure 3-11      Create Access Port Selector

Step 5: Configure a Switch Profile.


In the navigation pane, expand Switch Policies. Right click on Profiles and select "Create Switch
Profile". Enter a name for the switch profile. In the switch selector window, click on "+" sign to add the
border leaves 105 and 106. Click on NEXT button to go to the next screen (Figure 3-12).


Figure 3-12

Create Switch Profile

In the next screen, select the interface profile that was created in step 4 and finish the configuration
(Figure 3-13).
Figure 3-13

Create Switch Profile Association

This completes the vPC configuration from an infrastructure perspective. The vPC can be utilized by all
tenants in the ACI infrastructure.
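
For reference, the same five steps can also be provisioned through the REST API in the style used
elsewhere in this guide. The sketch below mirrors the GUI workflow for nodes 105/106 and port e1/35;
the policy, group and profile names are chosen for illustration only and are not the exact objects
configured in this setup.

<!-- Step 1: vPC explicit protection group (fabric policy) -->
POST https://{apic_ip_or_hostname}/api/mo/uni/fabric/protpol.xml
<fabricExplicitGEp name="vpc_expl_grp_105_106" id="105">
    <fabricNodePEp id="105"/>
    <fabricNodePEp id="106"/>
</fabricExplicitGEp>

<!-- Steps 2-5: LACP policy, vPC interface policy group, interface profile and switch profile -->
POST https://{apic_ip_or_hostname}/api/mo/uni/infra.xml
<infraInfra>
    <lacpLagPol name="lacp_active" mode="active"/>
    <infraFuncP>
        <infraAccBndlGrp name="vpc_n105_n106_asr9k" lagT="node">
            <infraRsLacpPol tnLacpLagPolName="lacp_active"/>
        </infraAccBndlGrp>
    </infraFuncP>
    <infraAccPortP name="vpc_n105_n106_asr9k">
        <infraHPortS name="port_members" type="range">
            <infraPortBlk name="block1" fromPort="35" toPort="35"/>
            <infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-vpc_n105_n106_asr9k"/>
        </infraHPortS>
    </infraAccPortP>
    <infraNodeP name="vpc_n105_n106_asr9k">
        <infraLeafS name="105_106" type="range">
            <infraNodeBlk name="block0" from_="105" to_="106"/>
        </infraLeafS>
        <infraRsAccPortP tDn="uni/infra/accportprof-vpc_n105_n106_asr9k"/>
    </infraNodeP>
</infraInfra>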

You can issue the following show commands on the leaf switches to see the status of vPC port channel.


v6-l3a# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
        F - Configuration failed
-------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
-------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/33(P)   Eth1/34(P)
2     Po2(SU)     Eth      LACP      Eth1/35(P)
3     Po3(SU)     Eth      NONE      Eth1/3(P)
4     Po4(SU)     Eth      LACP      Eth1/2(P)    Eth1/4(P)
5     Po5(SU)     Eth      NONE      Eth1/1(P)
v6-l3a#
v6-l3a# show vpc brief
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 105
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : Disabled
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary, operational secondary
Number of vPCs configured         : 4
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
Auto-recovery status              : Enabled (timeout = 240 seconds)
Operational Layer3 Peer           : Disabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1           up

vPC status
----------------------------------------------------------------------
id   Port   Status Consistency Reason    Active vlans
--   ----   ------ ----------- ------    ------------
2    Po4    up     success     success   500-504,549,
                                         3021-3030,
                                         3061-3070
343  Po2    up     success     success   -
685  Po5    up     success     success   2901
686  Po3    up     success     success   2901

v6-l3a#

Notice that there are no active VLANs on vPC port channel 2 (Po2). To see active VLANs, you need to
associate the Bridged external EPG of a tenant with the vPC. This is typically done during the creation
of a tenant. For more information, refer to the L2-Bronze and Gold chapters.


Port Channel with SVI for L3 Extension


For Silver and L3 Bronze, separate port-channels are defined between the border leaves and ASR 9000.
A logical SVI interface is defined on each border leaf which pairs with the bundle ethernet sub-interface
on the ASR 9000. The configuration on ASR 9000 is done over the out-of-band management interface
since APIC does not control the ASR 9000.
Figure 3-14      Port Channel Connectivity to ASR 9000 (Silver and L3-Bronze)

The following show commands display the bundle ethernet configuration on ASR 9000.
RP/0/RSP1/CPU0:v6-pe-NV#sh run int bundle-ether 5
Tue Nov 4 10:04:24.584 EST
interface Bundle-Ether5
 mtu 9000
 mac-address 4055.3943.f93
 load-interval 30
!
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te1/0/0/0
Wed Nov 5 10:51:04.750 EST
interface TenGigE1/0/0/0
 description v6-leaf5-9396::e1/33
 bundle id 5 mode active
 cdp
!
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te1/1/0/0
Wed Nov 5 10:51:08.356 EST
interface TenGigE1/1/0/0
 description v6-leaf5-9396::e1/34
 bundle id 5 mode active
 cdp
!
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV#sh run int bundle-ether 6
Tue Nov 4 10:04:30.214 EST
interface Bundle-Ether6
 mtu 9000
 mac-address 4055.3943.1f93
 load-interval 30
!
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV#sh run int te1/0/0/1
Wed Nov 5 10:53:40.960 EST
interface TenGigE1/0/0/1
 description v6-leaf6-9396::e1/33
 bundle id 6 mode active
 cdp
!


RP/0/RSP1/CPU0:v6-pe-NV#sh run int te1/1/0/1


Wed Nov 5 10:53:43.621 EST
interface TenGigE1/1/0/1
description v6-leaf6-9396::e1/34
bundle id 6 mode active
cdp
!
RP/0/RSP1/CPU0:v6-pe-NV#

Note

It is recommended to configure a separate MAC address on each bundle Ethernet interface to prevent MAC
address flapping when the same VLAN encapsulation is used on both border leaves for external
connectivity to the ASR 9000 nV.

Port Channel Configuration on Border Leaf


This section provides the steps to configure a Port Channel on the border leaf switches. The major steps
are listed below.

1. Configure LACP Profile Policy
2. Configure PC Policy Group
3. Configure Interface Profile
4. Configure a Switch Profile

Step 1: Configure LACP Profile Policy.

The LACP-Active policy is already configured during the vPC configuration steps. We will reuse the
policy for creating Port Channels.

Figure 3-15      LACP Active Policy

Step 2: Configure Port Channel Policy Group.


The interface policy group configuration step is similar to the steps described in vPC configuration. The
difference is that a Port Channel is selected instead of a vPC as highlighted in Figure 3-16. CDP is
enabled by default.
Figure 3-16

Port Channel Interface Policy Group

The policy group can be reused on node-106, or a new policy group can be configured as shown in
Figure 3-16.
Step 3: Configure Interface Profile.

An access port selector using ports 1/33 and 1/34 is configured and linked with the policy group that was
defined in the previous step.


Figure 3-17      Port Channel Interface Profile

Step 4: Configure a Switch Profile.

In this step, separate switch profiles are created for nodes 105 and 106 as shown below.
Figure 3-18

Switch Profile for Node-105


Figure 3-19

Switch Profile for Node-106

At this time, the port channel interfaces should come up as displayed in the show command output below.
v6-l3a# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
        F - Configuration failed
-------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
-------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/33(P)   Eth1/34(P)
2     Po2(SU)     Eth      LACP      Eth1/35(P)
3     Po3(SU)     Eth      NONE      Eth1/3(P)
4     Po4(SU)     Eth      LACP      Eth1/2(P)    Eth1/4(P)
5     Po5(SU)     Eth      NONE      Eth1/1(P)
v6-l3a#

v6-l3b# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
        F - Configuration failed
-------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
-------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/33(P)   Eth1/34(P)
2     Po2(SU)     Eth      LACP      Eth1/2(P)    Eth1/4(P)
3     Po3(SU)     Eth      LACP      Eth1/35(P)
4     Po4(SU)     Eth      NONE      Eth1/1(P)
5     Po5(SU)     Eth      NONE      Eth1/3(P)
v6-l3b#


The following show commands display the status of the bundle Ethernet interfaces on the ASR 9000.
RP/0/RSP1/CPU0:v6-pe-NV#show bundle brief
Wed Nov 5 10:35:48.201 EST
Name    | IG | State | LACP | BFD |     Links     | Local b/w, |
        |    |       |      |     | act/stby/cfgd |    kbps    |
--------|----|-------|------|-----|---------------|------------|
BE5     | -  | Up    | On   | Off | 2 / 0 / 2     | 20000000   |
BE6     | -  | Up    | On   | Off | 2 / 0 / 2     | 20000000   |
BE9     | -  | Up    | On   | Off | 2 / 0 / 2     | 20000000   |
BE10    | -  | Up    | On   | Off | 2 / 0 / 2     | 20000000   |
BE11    | -  | Up    | On   | Off | 1 / 0 / 1     | 10000000   |
BE12    | -  | Up    | On   | Off | 2 / 0 / 2     | 20000000   |
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV#sh lldp nei | inc BE5
Tue Nov 4 09:40:30.626 EST
v6-l3a          Te1/0/0/0[BE5]         120        B,R          Eth1/33
v6-l3a          Te1/1/0/0[BE5]         120        B,R          Eth1/34
RP/0/RSP1/CPU0:v6-pe-NV#sh lldp nei | inc BE6
Tue Nov 4 09:40:34.400 EST
v6-l3b          Te1/0/0/1[BE6]         120        B,R          Eth1/33
v6-l3b          Te1/1/0/1[BE6]         120        B,R          Eth1/34
RP/0/RSP1/CPU0:v6-pe-NV#

Connectivity to Compute
Cisco Integrated Compute Stacks (ICS) can be attached to the ACI fabric directly. Bare metal servers
can be attached directly or using Cisco Nexus 2000 Fabric Extenders. This section details the physical
connectivity and APIC configuration to attach the compute infrastructure to ACI fabric.

vPC to Fabric Interconnects


In this implementation, two NetApp FlexPod based Integrated Compute Stacks are connected to leaf
nodes 101/102 and 103/104 as shown in Figure 3-20. The UCS Fabric Interconnects (FI) in each ICS
connects to the leaf pair over vPCs.
Figure 3-20

vPC Connectivity to UCS Fabric Interconnects


A sample XML configuration to attach ICS3 to fabric is shown below.


<infraInfra>
<infraFuncP>
<!-- access interface policy group, this create the vpc defination, protocols
-->
<infraAccBndlGrp name="vpc_n101_n102_ics3_fi_a" lagT="node">
<infraRsLldpIfPol tnLldpIfPolName="lldp_disabled" />
<infraRsCdpIfPol tnCdpIfPolName="cdp_enabled" />
<infraRsStpIfPol tnStpIfPolName="spt_no_bpdu" />
<infraRsLacpPol tnLacpLagPolName="lacp_active" />
<infraRsAttEntP tDn="uni/infra/attentp-{{vmmAEP}}" />
</infraAccBndlGrp>
<infraAccBndlGrp name="vpc_n101_n102_ics3_fi_b" lagT="node">
<infraRsLldpIfPol tnLldpIfPolName="lldp_disabled" />
<infraRsCdpIfPol tnCdpIfPolName="cdp_enabled" />
<infraRsStpIfPol tnStpIfPolName="spt_no_bpdu" />
<infraRsLacpPol tnLacpLagPolName="lacp_active" />
<infraRsAttEntP tDn="uni/infra/attentp-{{vmmAEP}}" />
</infraAccBndlGrp>
</infraFuncP>
<!-- access interface profile, this specify the interfaces to use for the vpc -->
<infraAccPortP name="vpc_n101_n102_ics3_fi_a">
<infraHPortS name="port_members" type="range">
<infraPortBlk name="block2" fromPort="1" toPort="4" />
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-vpc_n101_n102_ics3_fi_a" />
</infraHPortS>
</infraAccPortP>
<infraAccPortP name="vpc_n101_n102_ics3_fi_b">
<infraHPortS name="port_members" type="range">
<infraPortBlk name="block2" fromPort="5" toPort="8" />
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-vpc_n101_n102_ics3_fi_b" />
</infraHPortS>
</infraAccPortP>
<!-- access switch profile; this specifies the leaf switches to use for the vPC -->
<infraNodeP name="vpc_n101_n102_ics3_fi">
<infraLeafS name="101_102" type="range">
<infraNodeBlk name="block0" from_="101" to_="102" />
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-vpc_n101_n102_ics3_fi_a" />
<infraRsAccPortP tDn="uni/infra/accportprof-vpc_n101_n102_ics3_fi_b" />
</infraNodeP>
</infraInfra>

The following APIC screen captures (Figure 3-21 and Figure 3-22) show sample vPC interface policy
groups and interface profiles in the APIC GUI.


Figure 3-21

UCS FI vPC Interface Profile

Another profile (vpc_n101_n102_ics3_fi_b) is created using ports 1/5-8. These interface profiles could be
reused for other ICS stacks; in this implementation, however, new interface profiles were created for
connecting the ICS4 FIs on nodes 103 and 104. A vPC interface policy is shown in Figure 3-22.
Figure 3-22

UCS FI vPC Interface Policy

Notice the Attachable Entity profile (AEP) tied to this interface policy. An AEP is configured to deploy
VLAN pools on the leaf switches. A particular VLAN from this pool is enabled on the vPC based on
VM events from VMware vCenter.


FEX to Bare Metal


ACI Fabric allows the integration of Cisco Nexus 2000 series Fabric Extenders (FEX) to provide
additional 1/10G density at the leaf layer. The following fabric extenders are supported as of APIC
release 1.0.2j.
N2K-C2248PQ-10GE
N2K-C2248TP-E-1GE
N2K-C2248TP-1GE
N2K-C2232PP-10GE
N2K-C2232TM-E-10GE

Note

Currently, vPC connection from FEX to the Fabric is not supported.


Physical Topology

The physical topology diagram is shown in Figure 3-23. In this topology, the Cisco Nexus 2000 series
(N2K-C2232PP-10GE) fabric extender is directly attached to ports 31 and 32 on leaf switches 105 and 106.
Figure 3-23

FEX Physical Topology

[Figure: an N2K-C2232PP FEX (FEX101) attaches to ports E1/31-32 on each of the N9K-C9396PX leaf switches Leaf105 and Leaf106, with a UCS C-Series server connected to both FEXes using an active-backup NIC pair.]

Attaching FEX to the ACI Fabric


ACI offers the following methods to attach a FEX to an ACI leaf switch:
1. Quick Start Configuration
2. Profile Configuration
3. REST API

The quick start configuration method uses a template to attach the FEX to the fabric. This method is
useful for users that are new to the APIC GUI. The profile configuration method is useful for creating FEX
profiles and reusing them across multiple switches when the configuration is identical.
Provisioning can also be done using the REST API, which helps with large-scale provisioning.
These methods are described in the following sections.
Step 1

Quick Start Configuration.


In the APIC GUI, select the PC/VPC configuration wizard.


Figure 3-24

FEX Configuration using Quick Start

Step 2

Click on the "+" sign to enter the configuration information for FEX uplink ports.
Figure 3-25

FEX Configuration using Quick Start

Step 3

In the pop-up window, select "Advanced" mode. Configure the switch ID in the box by clicking on the
"+" sign. Enter a name for the switch profile. Choose "96 ports" since this is a Nexus 9396. Enter the
FEX ID and interface selectors.


Figure 3-26

Create Switch Profile

Step 4

Click on the SAVE button and submit the configuration.
Figure 3-27

Save Switch Profile

The switch profile is created as shown in Figure 3-28.


Figure 3-28

Verify Switch Profile Creation

v6-l3a# show fex
 FEX         FEX             FEX       FEX
 Number      Description     State     Model              Serial
 ---------------------------------------------------------------------
 101         FEX0101         Online    N2K-C2232PP-10GE   SSI154103DW
v6-l3a# show fex detail
FEX: 101 Description: FEX0101
state: Online
FEX version: 11.0(2) [Switch version: 11.0(2)]
FEX Interim version: 11.0(1.882)
Switch Interim version: 11.0(1.882)
Extender Model: N2K-C2232PP-10GE, Extender Serial: SSI154103DW
Part No: 68-3547-05
Card Id: 82, Mac Addr: d4:d7:48:11:c6, Num Macs: 64
Module Sw Gen: 22 [Switch Sw Gen: 21]
pinning-mode: static    Max-links: 1
Fabric port for control traffic: Eth1/32
Fabric interface state:
  Eth1/31 - Interface Up. State: Active
  Eth1/32 - Interface Up. State: Active
  Po10    - Interface Up. State: Active
Fex Port       State    Fabric Port
Eth101/1/1     Down     Po10
Eth101/1/2     Down     Po10
Eth101/1/3     Down     Po10
Eth101/1/4     Down     Po10
.
.
Eth101/1/30    Down     Po10
Eth101/1/31    Down     Po10
Eth101/1/32    Down     Po10
v6-l3a#

Profile Configuration
The following steps describe how to attach the FEX to the fabric by configuring FEX profiles and switch
profiles independently. The major steps are:
1. Create FEX Profile.
2. Create Interface Profile.
3. Create Switch Profile.


Step 1

Create a FEX Profile.


In the APIC GUI, go to FABRIC and select Access Policies. Expand Interface Policies and right click
on Profiles.
Figure 3-29

Create a FEX Profile

Select "Create FEX Profiles" from the list. In the pop-up window, enter a name for the FEX Profile and
click on submit button.
Figure 3-30

Name the FEX Profile

Notice that this step creates a FEX Policy Group with the same name as the FEX Profile.


Figure 3-31

Verify FEX Profile Configuration

Sample XML Code


<infraInfra>
<infraFexP descr="FexP Profile: FEX101_FexP101"
dn="uni/infra/fexprof-FEX101_FexP101" name="FEX101_FexP101" ownerKey="" ownerTag="">
<infraFexBndlGrp descr="FexBundleP Profile" name="FexBndleP101" ownerKey=""
ownerTag="">
<infraRsMonFexInfraPol tnMonInfraPolName=""/>
</infraFexBndlGrp>
</infraFexP>
</infraInfra>

Step 2

Create Interface Profile.


In the navigation pane, right click on Profiles and select "Create Interface Profile". Provide a name for
this interface profile.
Figure 3-32

Create Interface Profile

Click on the "+" sign in the Interface Selector box. In the pop-up window, enter a name for the port
selector. Enter the interface IDs, FEX ID and select the FEX Profile that was created in the previous step.


Figure 3-33

Create Access Port Selector

Click OK to close this window, then click the Submit button to finish the configuration.
Sample XML Code
<infraInfra>
<infraAccPortP descr="PortP Profile: FEX101"
dn="uni/infra/accportprof-FEX101_ifselector" name="FEX101_ifselector" ownerKey=""
ownerTag="">
<infraHPortS descr="" name="FexCard101" ownerKey="" ownerTag="" type="range">
<infraRsAccBaseGrp fexId="101"
tDn="uni/infra/fexprof-FEX101_FexP101/fexbundle-FexBndleP101"/>
<infraPortBlk fromCard="1" fromPort="31" name="block1" toCard="1"
toPort="32"/>
</infraHPortS>
</infraAccPortP>
</infraInfra>

Step 3

Create Switch Profile.


In the navigation pane, expand Switch Policies and right-click on Profiles and select "Create Switch
Profile".


Figure 3-34

Create a Switch Profile

Provide a name for the switch profile. Click on "+" sign in the Switch Selector box. Provide a name for
the switch selector and select the switch ID from the drop-down list.
Figure 3-35

Specify the Profile Identity

Click NEXT to go to the next window and select the Interface selector profile that was created in the
previous step. Click FINISH to submit the configuration.


Figure 3-36

Specify the Profile Identity

Sample XML Code


<infraInfra>
<infraNodeP descr="Switch Profile: FEX101" dn="uni/infra/nprof-Node105-FEX101"
name="Node105-FEX101" ownerKey="" ownerTag="">
<infraLeafS descr="" name="FEX101_selector_node105" ownerKey="" ownerTag=""
type="range">
<infraNodeBlk from_="105" name="single0" to_="105"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-FEX101_ifselector"/>
</infraNodeP>
</infraInfra>

REST API for Bulk Provisioning

The XML code for creating the FEX profile, interface selector, and switch profile is shown in the previous
section. This section highlights how the REST API can be used to provision additional FEX in the fabric
without repeating Steps 1 and 2.
Assume that identical FEX need to be connected to ports 31 and 32 on other leaf switches. The FEX profile
and FEX interface selectors do not need to be reconfigured. All that is needed is to configure a new
switch profile with the node ID of the new device and associate it with the interface selector profile that
was created earlier. In this example, Node-106 is used.
<infraInfra>
<infraNodeP descr="Switch Profile: FEX101" dn="uni/infra/nprof-Node106-FEX101"
name="Node106-FEX101" ownerKey="" ownerTag="">
<infraLeafS descr="" name="FEX101_selector_node106" ownerKey="" ownerTag=""
type="range">
<infraNodeBlk from_="106" name="single0" to_="106"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-FEX101_ifselector"/>
</infraNodeP>
</infraInfra>
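The switch profile above can also be generated and pushed programmatically. The following Python sketch illustrates one way this bulk provisioning could be scripted against the APIC REST API using the requests library; the APIC address, credentials, and node list are placeholders and not part of the validated configuration.

import requests

APIC = "https://10.0.32.5"             # hypothetical APIC management address
USER, PASSWORD = "admin", "password"   # placeholder credentials
NODES = [106, 107, 108]                # additional leaf nodes that get an identical FEX

# Switch-profile payload, parameterized on the leaf node ID. It reuses the
# interface selector profile (FEX101_ifselector) created in Step 2, exactly
# like the Node-106 example above.
TEMPLATE = """
<infraInfra>
  <infraNodeP name="Node{node}-FEX101" descr="Switch Profile: FEX101">
    <infraLeafS name="FEX101_selector_node{node}" type="range">
      <infraNodeBlk name="single0" from_="{node}" to_="{node}"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-FEX101_ifselector"/>
  </infraNodeP>
</infraInfra>
"""

session = requests.Session()
session.verify = False  # lab APIC with a self-signed certificate

# Authenticate; the session cookie returned by aaaLogin is reused on later calls.
login = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
session.post(APIC + "/api/aaaLogin.json", json=login).raise_for_status()

# Push one switch profile per leaf node.
for node in NODES:
    resp = session.post(APIC + "/api/mo/uni.xml", data=TEMPLATE.format(node=node))
    resp.raise_for_status()
    print("Node {0}: {1}".format(node, resp.status_code))

Running a script like this once per new leaf (or with a longer node list) attaches FEX101 on ports 1/31-32 of each node without any additional FEX profile or interface selector configuration.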


Bare Metal Server Attachment to FEX


The Cisco UCS C220 bare metal server is attached to a pair of Fabric Extenders as shown in Figure 3-37.
Interface bonding is supported on the server; however, the NICs are configured in active-backup
mode since the Fabric Extenders do not support vPC at this time. This means that only one interface of
the bond is sending and receiving traffic at any given time.
Figure 3-37 shows how the FEX ports are statically bound to EPG01.
Figure 3-37

Static Binding to EPG

The following interface configuration is taken from the bare metal server running Ubuntu 12.04. The
server has two 1G and two 10G interfaces. Bonding is configured in Active-backup mode.
root@v6-bm-1:/etc/network# more interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface, used for management
auto eth2
iface eth2 inet static
address 10.0.35.103
netmask 255.255.255.0
network 10.0.35.0
broadcast 10.0.35.255
gateway 10.0.35.253
dns-nameservers 64.102.6.247
#eth0 is manually configured, and slave to the "bond0" bonded NIC
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
#eth1 ditto, thus creating a 2-link bond.
auto eth1
iface eth1 inet manual
bond-master bond0
# bond0 is the bonding NIC and can be used like any other normal NIC.
# bond0 is configured using manual network information.
# bond0 does not need IP since IP is configured on STCA application
# since bond0 does not have IP, the static route below is not needed
auto bond0
iface bond0 inet manual


address 10.1.1.101
netmask 255.255.255.0
gateway 10.1.1.253
#static route
up route add -net 10.0.0.0/16 gw 10.0.35.253 dev eth2
up route add -net 172.18.0.0/16 gw 10.0.35.253 dev eth2
up route add -net 0.0.0.0/0 gw 10.1.1.253 dev bond0
# pre-up ip link set $IFACE up
# post-down ip link set $IFACE down
bond-mode active-backup
bond-miimon 100
#bond-lacp-rate 1
bond-slaves eth0 eth1
root@v6-bm-1:/etc/network#
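Because the bond runs in active-backup mode, only one member interface carries traffic at any time. As a quick, hypothetical sanity check (not part of the validated configuration), a small script such as the following could be run on the server to read the standard Linux bonding status file and report which member is currently active.

#!/usr/bin/env python
# Report the bonding mode and the currently active member of bond0 by
# parsing the Linux bonding driver status file.

BOND_STATUS = "/proc/net/bonding/bond0"

def main():
    try:
        with open(BOND_STATUS) as status_file:
            lines = status_file.read().splitlines()
    except IOError as err:
        raise SystemExit("Cannot read %s: %s" % (BOND_STATUS, err))

    keys = ("Bonding Mode:", "Currently Active Slave:", "MII Status:")
    for line in lines:
        # Typical lines of interest:
        #   Bonding Mode: fault-tolerance (active-backup)
        #   Currently Active Slave: eth0
        #   MII Status: up
        if line.strip().startswith(keys):
            print(line.strip())

if __name__ == "__main__":
    main()

During a link failure on the active member, the Currently Active Slave value would be expected to move from eth0 to eth1.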

Bare Metal and KVM Direct to 93128 and 9396


Cisco UCS C200 series servers are connected to Nexus 9396 and 93128 series switches as shown in
Figure 3-38. This environment is used for the OpenStack implementation.
Figure 3-38

Bare Metal vPC Connectivity to Leaf Switches

[Figure: UCS C-Series servers dual-attach via vPC to the ACI Fabric leaf pairs Leaf103/Leaf104 and Leaf107/Leaf108 (Nexus 9396PX and 93128TX leaf switches).]

Each server is connected to the fabric via vPC and the ports are bonded to work in active-active mode.
In APIC, a vPC interface policy group is created for each bare metal server. The policy is attached to an
AEP (open_stack_aep) from where the VLANs are assigned. A sample policy is shown in Figure 3-39.


Figure 3-39

Bare Metal vPC Policy Group

In APIC, an interface profile is created with interface selectors and corresponding policy group as shown
in Figure 3-40. In this case servers 1 to 8 are connected to ports 1/38 to 1/44 on nodes 107 and 108.
Figure 3-40

Bare Metal Interface Profile

The interface profile is then attached to a switch profile as in Figure 3-41.


Figure 3-41

Bare Metal server switch profile

Connectivity to Services Appliances


ICDC ACI 1.0 provides the flexibility to select physical or virtual firewalls to implement a high-performance
and scalable network security architecture. The ASA 5585-X appliance provides protection
for media-rich, highly transactional, and latency-sensitive applications. This section details the ASA
5585 firewall cluster integration within the ACI Fabric.

ASA 5585 Active/Active Cluster Implementation


The ASA units are deployed as a 2-wide ASA cluster running in a multi-context, routed mode with static
routing. Each ASA5585-SSP-60 has four 10G, six 1G and two Management interfaces. Two 10G links
are configured as Cluster Control Link (CCL) and two as a Data port-channel. Each port-channel has
one link connected to one of the leaf nodes in the ACI Fabric as shown in Figure 3-42. Ether Channel
Load Balancing (ECLB) is used to load-balance flows across the spanned data port-channel.
The physical connectivity diagram is shown in Figure 3-42. The CCL port channel (Po1) is independent
on each ASA. The data port channel is configured as a Spanned EtherChannel across both ASAs. Both
CCL port channels and data port channel are configured as vPCs on the ACI Fabric. On each ASA,
Management 0/1 interface is used for out of band management.


Figure 3-42

ASA Cluster Connectivity Diagram

[Figure: on each ASA5585-SSP60 (ASA-1 and ASA-2), interfaces T0/6 and T0/8 form the CCL port channel (Po1) and T0/7 and T0/9 form the data port channel (Po2); both port channels land on Leaf105 and Leaf106 of the ACI Fabric as vPCs.]

ASA cluster integration to the ACI Fabric involves the following major steps:
•  Configure the CCL port channel
•  Configure the Spanned EtherChannel
•  Enable clustering on the ASA
•  Create virtual Port Channels (vPC) on APIC to attach the CCL and Data port channels
For procedural guidance on how to configure ASA clustering, refer to the VMDC 3.0.1 Implementation
Guide.
A sample configuration from the System and Admin contexts on the ASA is shown below. The
configuration is identical on both units except for the unit-specific parameters called out in the inline
comments.
System Context
!
hostname asa-1
mac-address auto prefix 1
lacp system-priority 1
!
interface Management0/1
!
interface TenGigabitEthernet0/6
channel-group 1 mode on
!
interface TenGigabitEthernet0/7
channel-group 2 mode active vss-id 1
!
interface TenGigabitEthernet0/8
channel-group 1 mode on
!
interface TenGigabitEthernet0/9
channel-group 2 mode active vss-id 2
!
interface Port-channel1
description CCL Interface
lacp max-bundle 8
port-channel load-balance vlan-src-dst-ip-port
!
interface Port-channel2
description Spanned Etherchannel
lacp max-bundle 8
port-channel load-balance src-dst-ip-port
port-channel span-cluster vss-load-balance


!
!
boot system disk0:/asa931-smp-k8.bin
ftp mode passive
cluster group ACI_10
key *****
local-unit ASA-2
# Configure ASA-1 on peer unit
cluster-interface Port-channel1 ip 98.1.1.2 255.255.255.0 # 98.1.1.1 on peer
priority 2
# Configure Priority 1 on peer unit
console-replicate
health-check holdtime 3
clacp system-mac auto system-priority 1
enable
pager lines 24
mtu cluster 1600
no failover
asdm image disk0:/asdm-731.bin
no asdm history enable
arp timeout 14400
no arp permit-nonconnected
!
no ssh stricthostkeycheck
console timeout 0
!
tls-proxy maximum-session 1000
!
admin-context admin
context admin
allocate-interface Management0/1
config-url disk0:/aci-admin.cfg
!
ntp server 172.18.114.20 prefer
username apic password mCrzqDeDrHuidnJf encrypted
username admin password rwXrwfFLI2xesBa/ encrypted
prompt hostname context state
!
jumbo-frame reservation
!

Admin Context
!
hostname admin
names
ip local pool mgmt 10.0.32.69-10.0.32.70 mask 255.255.255.0
!
interface Management0/1
management-only
nameif management
security-level 100
ip address 10.0.32.71 255.255.255.0 cluster-pool mgmt
!
pager lines 24
mtu management 1500
icmp unreachable rate-limit 1 burst-size 1
no asdm history enable
arp timeout 14400
route management 0.0.0.0 0.0.0.0 10.0.32.1 1
user-identity default-domain LOCAL
aaa authentication ssh console LOCAL
aaa authentication http console LOCAL
http server enable
http 10.0.0.0 255.255.0.0 management
http 172.18.0.0 255.255.0.0 management


snmp-server host management 172.18.114.115 community ***** version 2c


ssh 172.18.0.0 255.255.0.0 management
ssh 10.0.0.0 255.255.0.0 management
ssh timeout 60
ssh version 2
ssh key-exchange group dh-group1-sha1
no threat-detection statistics tcp-intercept
username apic password mCrzqDeDrHuidnJf encrypted
username admin password rwXrwfFLI2xesBa/ encrypted
!

The next step is to configure vPCs on the APIC. The procedure to bring up vPCs on the leaf switches
was already explained earlier in this chapter, so it is not repeated here.
Table 3-1 shows the parameters configured on APIC to bring up a separate vPC to each ASA for
cluster control link connectivity.
Table 3-1      Cluster Control Link (CCL) Parameters in APIC

Type                         ASA-1                       ASA-2
Interface Policy Group       vpc_n105_n106_asa_ccl1      vpc_n105_n106_asa_ccl2
Interface Selector Profile   n105_n106_asa_ccl1          n105_n106_asa_ccl2
Interface Selector Name      asa_ccl1_ports              asa_ccl2_ports
Interface Selector Block     e1/1                        e1/3
Switch Profile               vpc_n105_n106_asa5585_ccl
Switch Block                 105-106
Attachable Entity Profile    asa_ccl_aep
Domain (VMM/Physical)        asa_ccl_phy
Vlan Pool Name               asa_ccl_vlan_pool
Vlan Encap Blocks            2901

Table 3-2 shows the parameters configured on APIC to bring up a vPC for Spanned EtherChannel
connectivity to the ASA cluster.

Table 3-2      Data Port Channel Parameters in APIC

Type                         ASA-1/ASA-2
Interface Policy Group       vpc_n105_n106_asa5585_data
Interface Selector Profile   vpc_n105_n106_asa5585_data
Interface Selector Name      port_members
Interface Selector Block     1/2, 1/4
Switch Profile               vpc_n105_n106_asa5585_data
Switch Block                 105-106
Attachable Entity Profile    asa_data_aep
Domain (VMM/Physical)        asa_data_phy
Vlan Pool Name               asa_data_vlan (eg: copper, gold)
Vlan Encap Blocks            500-549, 3001-3100


ASA 5585 for Expanded-Gold and Copper using vPC


Once the base configuration is done and the ASAs are attached to the ACI Fabric, tenant contexts can
be created on the ASA. For more information on configuring ASA cluster for Gold and Copper
containers, refer to Chapter 8, Expanded Gold Tenant Container and Chapter 11, Copper Tenant
Container.

Connectivity to Storage
In this implementation, shared storage is implemented with NFS. Both VMware and OpenStack servers
use NFS shares implemented on a NetApp FAS 3250 cluster. The following sections describe the
connectivity between the NFS cluster, the servers, and the ACI Fabric.

NetApp Cluster Connectivity for NFS


The NetApp FAS 3250 cluster consists of four controller nodes. Each controller is dual-connected to a leaf
pair as shown in Figure 3-43.
Figure 3-43

NetApp Storage Connectivity to ACI Fabric

[Figure: NetApp FAS 3250a/3250b connect through ports E3a/E4a and E3b/E4b to the leaf pair Leaf101/Leaf102, and FAS 3250c/3250d connect the same way to Leaf103/Leaf104; the ICS3 Fabric Interconnects (ICS3-6296-P1A/P1B) with their UCS-B Series chassis attach to the same leaf pairs, and the four controllers form the NetApp C-Mode cluster.]

The ACI Fabric configuration for the NetApp controllers is discussed in detail in the next sections.

VPC Configuration
All connections from the NetApp controllers to the ACI Fabric are configured as vPCs; therefore, four
separate vPCs are configured in this implementation. The following steps create the vPC port channels
to one pair of NetApp controllers (3250a/3250b).
Step 1

Create a VLAN pool with single VLAN for NFS traffic encapsulation.
<infraInfra>
<fvnsVlanInstP name="nfs_storage" allocMode="dynamic">
<fvnsEncapBlk from="vlan-1000" to="vlan-1000" />
</fvnsVlanInstP>
</infraInfra>

Step 2

Create a physical domain.


<physDomP name="netapp_nfs_phy">
<infraRsVlanNs tDn="uni/infra/vlanns-[nfs_storage]-dynamic" />
</physDomP>

Step 3

Create an AEP.
<infraInfra>
<infraAttEntityP name="netapp_nfs_aep">
<infraRsDomP tDn="uni/phys-netapp_nfs_phy" />
</infraAttEntityP>
</infraInfra>

Step 4

Create vPC interface policy groups for FAS 3250 controllers a/b.


One infraAccBndlGrp is required per vPC bundle even though they have the same configuration.
<infraFuncP>
<infraAccBndlGrp name="vpc_n101_n102_netapp3250a" lagT="node">
<infraRsLldpIfPol tnLldpIfPolName="lldp_enabled" />
<infraRsStpIfPol tnStpIfPolName="spt_no_bpdu" />
<infraRsLacpPol tnLacpLagPolName="lacp_active" />
<infraRsAttEntP tDn="uni/infra/attentp-netapp_nfs_aep" />
</infraAccBndlGrp>
<infraAccBndlGrp name="vpc_n101_n102_netapp3250b" lagT="node">
<infraRsLldpIfPol tnLldpIfPolName="lldp_enabled" />
<infraRsStpIfPol tnStpIfPolName="spt_no_bpdu" />
<infraRsLacpPol tnLacpLagPolName="lacp_active" />
<infraRsAttEntP tDn="uni/infra/attentp-netapp_nfs_aep" />
</infraAccBndlGrp>
</infraFuncP>

Step 5

Create vPC interface profile.


Each vPC requires an interface profile specifying the leaf ports going to the NetApp controllers.
<infraAccPortP name="vpc_n101_n102_netapp3250a">
<infraHPortS name="port_members" type="range">
<infraPortBlk name="block2" fromPort="17" toPort="18" />
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-vpc_n101_n102_netapp3250a" />
</infraHPortS>
</infraAccPortP>
<infraAccPortP name="vpc_n101_n102_netapp3250b">
<infraHPortS name="port_members" type="range">
<infraPortBlk name="block2" fromPort="19" toPort="20" />
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-vpc_n101_n102_netapp3250b" />
</infraHPortS>
</infraAccPortP>

Step 6

Create switch profile.


Now create a switch profile and associate the previously created interface profiles with the leaf nodes.
<infraNodeP name="vpc_n101_n102_netapp3250ab">
<infraLeafS name="101_102" type="range">
<infraNodeBlk name="block0" from_="101" to_="102" />
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-vpc_n101_n102_netapp3250a" />
<infraRsAccPortP tDn="uni/infra/accportprof-vpc_n101_n102_netapp3250b" />
</infraNodeP>
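For reference, the per-step XML snippets above can also be applied through the APIC REST API instead of the GUI. The following Python sketch (with placeholder APIC address, credentials, and file names) authenticates and posts each snippet in order; snippets that are not already wrapped in <infraInfra> (Steps 4 through 6) are assumed to be wrapped the same way as Steps 1 and 3 before posting.

import requests

APIC = "https://10.0.32.5"             # hypothetical APIC management address
USER, PASSWORD = "admin", "password"   # placeholder credentials

# XML bodies from Steps 1-6, saved to local files in the order they
# should be applied (VLAN pool, physical domain, AEP, vPC policy groups,
# interface profiles, switch profile).
SNIPPETS = [
    "01_nfs_vlan_pool.xml",
    "02_netapp_phys_domain.xml",
    "03_netapp_aep.xml",
    "04_netapp_policy_groups.xml",
    "05_netapp_interface_profiles.xml",
    "06_netapp_switch_profile.xml",
]

session = requests.Session()
session.verify = False  # lab APIC with a self-signed certificate

login = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
session.post(APIC + "/api/aaaLogin.json", json=login).raise_for_status()

for path in SNIPPETS:
    with open(path) as snippet:
        payload = snippet.read()
    resp = session.post(APIC + "/api/mo/uni.xml", data=payload)
    resp.raise_for_status()
    print("%s: %s" % (path, resp.status_code))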

Figure 3-44 of the APIC GUI shows the switch profile configuration.


Figure 3-44

Switch Profile Configuration in ACI GUI

Storage Tenant Configuration


Once the vPC configuration is completed, a tenant configuration is required to complete the rest of the
connectivity between the NetApp controller nodes and the ACI Fabric.
The following steps are required to create and configure a storage tenant that supports NFS.
Step 1

Create a storage tenant.


<fvTenant name=" storage " descr="Intercloud ACI 1.0 storage tenant">
</fvTenant>

Step 2

Create a bridge domain.


This bridge domain uses the default private network under the built-in common tenant since there is no
requirement to create a unique private network for NFS traffic.
<fvTenant name="storage">
<fvBD name="ip_storage" arpFlood="yes" unicastRoute="no" unkMacUcastAct="flood"
unkMcastAct="flood">
<fvRsCtx tnFvCtxName="default" />
</fvBD>
</fvTenant>

Step 3

Create an application EPG.


In this implementation there are two application EPGs defined.
VMware vSphere host NFS Access
<fvTenant name="storage">


<fvAp name="ip_storage">
<fvAEPg name="nfs">
<fvRsBd tnFvBDName="ip_storage" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>

The XML configuration snippet above assumes the Virtual Machine Manager (VMM) vCenter domain
profile 'ics3_prod_vc' is already defined and associates it with this EPG.
OpenStack compute host NFS Access

Refer to the NFS storage access configuration section under the copper tenant chapter for detailed
description.
Step 4

Create an L2 external bridge network.


The following XML configuration snippet shows the configuration of an L2 external bridged network,
associating it to the bridge domain created earlier and configuring it with the appropriate VLAN
encapsulation. This construct also includes the paths to the four vPC port channels created, one for each
NetApp controller.
<fvTenant name="storage">
<l2extOut name="l2_outside">
<l2extRsEBd tnFvBDName="ip_storage" encap="vlan-1000">
</l2extRsEBd>
<l2extLNodeP name="l2_nodes">
<l2extLIfP name="l2_interfaces">
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-101-102/pathep-[vpc_n101_n102_netapp3250a]" />
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-101-102/pathep-[vpc_n101_n102_netapp3250b]" />
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-103-104/pathep-[vpc_n103_n104_netapp3250d]"
targetDscp="unspecified"/>
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-103-104/pathep-[vpc_n103_n104_netapp3250c]"
targetDscp="unspecified"/>
</l2extLIfP>
</l2extLNodeP>
<l2extInstP name="outside_network" />
</l2extOut>
</fvTenant>

Step 5

Add a contract to the application EPG and the external bridged network. In this implementation, the built-in
contract "default" under the built-in tenant "common" is used. The default contract does not enforce any
filtering and allows all traffic through.
<fvTenant name="storage">
<fvAp name="ip_storage">
<fvAEPg name="nfs">
<fvRsProv tnVzBrCPName="default" />
</fvAEPg>
</fvAp>
<l2extOut name="l2_outside">
<l2extInstP name="outside_network">
<fvRsCons tnVzBrCPName="default" />
</l2extInstP>
</l2extOut>
</fvTenant>


Storage Multi-Tenancy Considerations


With NetApp clustered Data ONTAP, Storage Virtual Machine (SVM) instances can be used to give tenants
separate SVMs that provide storage separation and control.
In this implementation, E-Gold tenants get access to two SVMs: one dedicated to the tenant and one
shared gold SVM. The tenant data stores for E-Gold tenants come from both SVMs; the intent is
to use the dedicated SVM for more secure workloads and the shared gold SVM for general
workloads.
When SVMs are used on NetApp and connected to the ACI Fabric, there are different options for connecting
and separating them from a network point of view.
In this implementation, each SVM has a different Virtual Interface (VIF) with its own IP addresses, but
they are all on the same VLAN and same subnet.
On APIC, a Storage tenant is created with a single BD for connecting all hosts and storage devices. All
of the connections to the NetApp are configured as L2 external with the single VLAN encapsulation.
ESXi host NFS VMK interfaces are grouped into different EPGs on this BD, and contracts are set up
from these EPGs to the L2 external, allowing communication between the hosts and NFS. Separating hosts
into different EPGs allows controlling the visibility of host groups to each other as an additional security
measure. Similarly, OpenStack hosts access NFS storage using the 1 Gbps NICs on the compute nodes;
these are configured as a separate EPG, with contracts allowing access to the L2 external connecting
to the NetApp storage array.
Additionally, while this was not implemented, separate L2 externals could be used for each SVM, with
different host EPGs given access to only specific SVMs through contracts.

High Availability Implications


The NetApp controllers are attached to the ACI Fabric in a highly resilient configuration. In this
implementation, four NetApp FAS3250 controllers are connected to the fabric. There are two High
Availability (HA) pairs; the first HA pair is connected to one pair of Nexus 9300 leaf nodes. Each
controller has a vPC connection to the Nexus 9300 node pair. The second NetApp controller HA pair
connects to a different Nexus 9300 leaf node pair. NetApp NFS exports are via an IP address on a VIF
on the HA pair where the disk/storage is located, as configured on the NetApp.
During single link failures, vPC convergence continues to provide a path to the storage controller that owns
the VIF for the NFS export. During leaf node failures, vPC convergence likewise continues to provide a
path to the same controller.
During controller failures, the VIF is moved to the standby controller in the HA pair, and the ACI Fabric
learns the MAC address on the vPC links to the standby controller and converges.
If for some reason all the links to one HA pair of controllers are lost, that is, both leaf nodes go down or
all links to the controllers go down, then the NetApp cluster moves the VIF to the other HA pair. The
storage traffic is then converged through the other Nexus 9300 node pair to controller 3. From there, using
the NetApp cluster network, it reaches the HA pair where the disks are attached.

Data Center Fabric Management - Out of Band (OOB)


This section details the out of band (OOB) network configuration on various components in the IDC ACI
1.0 topology.


Connectivity to OOB Ports on all Fabric Switches


The spine and leaf switches are connected to the out-of-band network using the management interface.
The following steps add a switch to the OOB management network in APIC.

1. Create a Managed Node.
2. Create a Managed Node Connectivity Group.
3. Create an IP address pool and associate it with the managed node connectivity group.

Step 1

Create a Managed Node.


Select Tenants from the APIC main menu and go to the mgmt tenant. In the left navigation pane, select
"Node Management Addresses" and right-click on it. In the pop-up window, provide a name for the
policy and select the node.
Figure 3-45

Create a Managed Node

In this screen, you have the option of configuring OOB address for the switch.
<infraInfra>
<!-- One for each Switch -->
<mgmtNodeGrp name="node101">
<mgmtRsGrp tDn="uni/infra/funcprof/grp-node101"/>
<infraNodeBlk name="default" from_="101" to_="101"/>
</mgmtNodeGrp>
</infraInfra>


Step 2

Create a Managed Node Connectivity Group.


In the navigation pane, right click on "Managed Node Connectivity Group".
Figure 3-46

Create a Managed Node Connectivity Group

In this window, you have the option of creating an IP address pool for the managed node or you could
do it separately as in the next step.
<infraInfra>
<!-- One for each Switch -->
<infraFuncP>
<mgmtGrp name="node101">
<mgmtOoBZone name="">
<mgmtRsOobEpg tDn="uni/tn-mgmt/mgmtp-default/oob-default"/>
<!-- The IP address pool is associated in advance -->
<mgmtRsAddrInst tDn="uni/tn-mgmt/addrinst-node101oobaddr"/>
</mgmtOoBZone>
</mgmtGrp>
</infraFuncP>
</infraInfra>

Step 3

Create IP address pool and associate with the managed node connectivity group.
In the navigation pane, right click on "IP Address Pools". Enter the IP address information. Once the
pool is created, go to the managed node connectivity group that was created in the previous step and
associate the IP address pool.


Figure 3-47

Create IP Address Pool

<fvTenant name="mgmt">
<fvnsAddrInst name="node101oobaddr" addr="10.0.32.1/24">
<fvnsUcastAddrBlk from="10.0.32.15" to="10.0.32.15"/>
</fvnsAddrInst>
</fvTenant>

Connectivity to APIC OOB Ports


APIC has two 1G out-of-band management ports. For more information on the APIC controller ports,
refer to the following URL:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/hw/aci_hig/guide/b_aci_hardware_install_guide/b_aci_hardware_install_guide_chapter_01.html
To access the APIC via out-of-band management, you need to configure an out-of-band contract and
attach the contract to the external network profile. For more information, refer to the following URL:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/getting-started/b_APIC_Getting_Started_Guide/b_APIC_Getting_Started_Guide_chapter_01.html

Connectivity from APIC to VMMs


VMM integration allows a manager such as VMware vCenter to be linked to ACI so that policies can be
made available for virtual machines hosted within the VMM domain. When an EPG is created and
bound to a VMM domain in APIC, a port-group is created in vCenter that can be attached to the virtual
machines in that VMM domain. APIC to VMM communication occurs over an out-of-band network. For
more information on APIC to VMM communication, refer to Cisco Application Virtual Switch (AVS),
page 4-3.


Connectivity from APIC to Services Appliances (ASA 5585)


APIC to ASA 5585 appliance communication is done through the out-of-band management network.
There are two 1G management interfaces available on the ASA for OOB connectivity. In the validated
setup, the Management 0/1 interface is connected to the OOB management network. After the ASA cluster
is brought up, you need to access the ASA master node and create tenant contexts with a management IP
address before the APIC can access the tenant context.
Sample XML code to access an ASA 5585 tenant context from APIC and push configuration is shown
below. In this configuration, 10.0.32.125 is the management IP address of the tenant context on the ASA
master node.
<fvTenant name="g004">
<vnsLDevVip name="pvt_asa" contextAware="single-Context" devtype="PHYSICAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-CISCO-ASA-1.0.1" />
<vnsCMgmt host="10.0.32.125" port="443" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsRsALDevToPhysDomP tDn="uni/phys-g004_pvt_asa_phy" />
</vnsLDevVip>
</fvTenant>

Deployment Considerations
The following considerations are recommended.
•  It is recommended to use 3 or more APIC controllers in the APIC cluster and spread them out to
   multiple leaf nodes for resiliency.
•  Gold, Copper, and L2-Bronze containers use L2 extension between the ACI Fabric and the ASR 9000 nV
   cluster. vPC is the recommended configuration.
•  vPC implementation on the ACI Fabric does not require dedicated peer links, unlike traditional vPC
   implementation on Nexus switches.
•  On the ASR 9000 nV Edge, it is recommended to distribute port channel members across both chassis
   for chassis-level redundancy.
•  On the ASR 9000 nV, it is recommended to configure a separate MAC address on each Bundle Ethernet
   interface connecting to the ACI Fabric via separate L3 port channels. This is to prevent MAC address
   flapping when the same VLAN encapsulation is used on both border leaves for External Routed
   Connectivity to the ASR 9000 nV.
•  In the ACI environment, vPC connection is not supported on FEX Network Interfaces (NIF).
•  In the ACI environment, FEX Host Interfaces (HIF) do not support port channel or vPC.
•  When attached to FEX, bare metal server interfaces can be configured for active-backup
   connectivity.
•  For vPC connectivity to the ACI Fabric, bare metal servers can be directly attached to leaf switches.
•  You can use in-band or out-of-band connectivity in APIC for managing the fabric nodes. In this
   implementation, out-of-band is used for management connectivity.


CHAPTER 4

VMWare ICS Compute and Storage Implementation


This chapter details compute and storage implementation with VMware vCenter, NetApp FlexPod and
Cisco Application Virtual Switch (AVS) distributed virtual switch.

VMWare Based FlexPod Aligned ICS


This design implements a FlexPod-based Integrated Compute Stack (ICS) that uses the
NetApp FAS 3250 as the storage array in a Network File System (NFS)-only implementation. This
implementation sets up the NetApp FAS 3250 storage array in Clustered Data ONTAP c-mode.
This implementation is validated using UCS B-Series Blade servers based on UCS 6200 Series Fabric
Interconnects and Unified Computing System (UCS) 5108 chassis, as well as UCS C-Series rack servers
connecting to Nexus 2000 FEX switches. The following sections provide details on the validation.

Reference Architecture
Figure 4-1 represents the end-to-end reference architecture for the system with VMware vSphere built
on FlexPod components and network connections for NFS protocol.
Figure 4-1

FlexPod ICS Setup

[Figure: the NetApp C-Mode cluster (FAS 3250a-d) connects through ports E3a/E4a and E3b/E4b to leaf pairs Leaf101/Leaf102 and Leaf103/Leaf104; the ICS3 and ICS4 UCS 6296 Fabric Interconnects (ICS3/ICS4-6296-P1A/P1B) with their UCS-B Series chassis (ICS3-C1F1-P1, ICS3-C2F1-P1, ICS4-C1F1-P1, ICS4-C2F1-P1) attach to the same leaf pairs.]


UCS Fabric Interconnects and B-Series setup


In this implementation, compute services are implemented with Cisco UCS B-Series and C-Series servers.
The following highlights the Cisco Unified Computing System Manager (UCSM) configuration:
•  Two UCS 5108 chassis are connected to two pairs of Nexus 9396 leaf switches via UCS 6296 Fabric
   Interconnects.
•  The UCS FI switches are configured in End-Host (EH) mode.
•  The uplinks on the FI switches are bundled into port-channels to upstream Nexus 9396 leaf switches
   and to management switches with disjoint L2 networks.
•  The FI switches connect to two Nexus 9396 leaf switches using Virtual Port Channel (vPC) links
   that carry both NFS storage traffic and tenant data traffic.
Figure 4-2 shows an overview of the UCS setup.


Figure 4-2

UCS Setup

[Figure: two UCS 5108 chassis (UCS 2208XP IOMs, UCS B200 M3 blades with UCS VIC 1240/1280) connect to the ICS3 UCS 6296 Fabric Interconnects (ICS3-6296-P1A/P1B, joined by the L1/L2 cluster links); the FIs uplink over port channels to a management Nexus 7009 (DC management network hosting TACACS, DNS, DHCP/TFTP, vCenter, and vCenter Auto Deploy servers) and to Leaf101/Leaf102 for tenant data and NFS traffic.]

For redundancy, each blade server is configured with four Virtual Network Interface Cards (vNIC) for
access to two disjoint upstream Layer 2 (L2) networks. One pair was used for management and the other
pair was used for all data and NFS storage traffic. On the UCSM, fabric failover for each vNIC is not
enabled. Service profile templates and vNIC templates of updating template type are used to ensure that
the configurations across multiple blade servers are consistent and up-to-date. Figure 4-3 shows the
service-profile configuration for one of the blade servers on the UCSM.


Figure 4-3

UCS Server Blade vNIC Setup

Cisco Application Virtual Switch (AVS)


The Cisco AVS is an integral part of Application Centric Infrastructure (ACI). It is essentially a
customized Nexus 1000V switch implemented as a virtual leaf (vLeaf) on virtualized compute nodes.
Cisco AVS is managed by the Application Policy Infrastructure Controller (APIC), and it uses the OpFlex
protocol for control plane communication. Packet forwarding in AVS is based on EPG forwarding
policies that are defined by the APIC and published to a Virtual Machine Manager (VMM), such as
VMware vCenter. In the current release, AVS is supported on the VMware ESXi platform. Support for
other hypervisors, such as Microsoft Hyper-V, may be added in future releases.

Forwarding Modes
Cisco AVS supports two modes of traffic forwarding, namely Local Switching (LS) mode and
Non-Switching (NS) mode. These modes are also known as Fabric Extender (FEX) disable mode and
FEX enable mode, respectively.
Figure 4-4

AVS Switching Modes

[Figure: in FEX disable mode (Local Switching mode), the hypervisor's AVS switches intra-EPG traffic locally and forwards only inter-EPG traffic to the leaf switch; in FEX enable mode (Non-Switching mode), the vLeaf forwards both intra-EPG and inter-EPG traffic to the leaf switch.]


With the FEX disable mode, all intra-EPG traffic is locally forwarded by the Cisco AVS as shown in
Figure 4-4. All inter-EPG traffic is sent to the leaf switch and the leaf switch will in turn forward it to
the appropriate EPG based on the forwarding policy. This mode supports VLAN or
VXLAN encapsulation for forwarding traffic to the leaf and back. The VLAN encapsulation is locally
significant to the Cisco AVS and leaf switch. If VXLAN encapsulation is used, only the infra VLAN
needs to be available between the Cisco AVS and the leaf switch.
With the FEX enable mode, both inter and intra EPG traffic is forwarded to the leaf switch. In this mode,
VXLAN is the only allowed encapsulation.

Note

In ICDC ACI 1.0 solution, FEX disable mode (Local Switching) with VLAN encapsulation is validated.
VXLAN encapsulation is not used in this solution since it does not support service graphs.

Cisco AVS Integration with VMware vCenter


The Cisco AVS is a distributed virtual switch that extends across multiple virtualized hosts. It manages
a data center defined by the VMware vCenter Server. The Cisco AVS is fully integrated within the
VMware virtual infrastructure. This solution allows the network administrator to configure virtual
switch and port groups to establish a consistent data center network policy.
Figure 4-5 shows a sample AVS topology with VMware vCenter used as the virtual machine manager.
The APIC Controller acts as a unified controller for both the physical fabric as well as the AVS running
on ESXi host. The APIC will view the ESXi hosts as vLeaves attached to the physical Leaves. The ESXi
host communicates to the vCenter for host/hypervisor management purposes such as switch creation,
port attach etc, over the out-of-band management network. APIC also communicates to vCenter over the
out-of-band network.
The control plane interaction between the ACI fabric and AVS is done in-band using OpFlex protocol.
Figure 4-5

AVS and vCenter Integration

[Figure: the APIC manages the physical spine/leaf fabric and the virtual vLeaf/AVS instances hosting the VMs; OpFlex runs between the leaf switches and the AVS, while the APIC, the hosts, and the VMware vCenter hypervisor manager (used by the network and server admins) communicate over the out-of-band network.]

When the vCenter domain is created, APIC creates a VTEP port-group in vCenter automatically. When
AVS is installed on a host and added to DVS in vCenter, it will automatically bind this VTEP to Virtual
Machine Kernel (VMK) 1 interface on the host enabling the OpFlex channel for control plane
communication.


Figure 4-6 shows how the vCenter domain is created in APIC. The AVS switching mode and encapsulation
are set during this process.
Figure 4-6

AVS Mode Selection in APIC

Cisco AVS Installation


To install the Cisco AVS with the Cisco APIC, you need the following components:
•  Cisco AVS software release 4.2(1)SV2(2.3) and later releases.
•  ACI Fabric with Cisco APIC (release 1.0.1 and later).
•  VMware vCenter (5.1 or later).
•  VMware ESXi hypervisor installed on a physical server (release 5.1 or 5.5).
Cisco AVS download instructions for VMware ESXi deployments are located at the following URL:
http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/nexus1000/avs/install-upgrade/avs-downloads/Cisco-AVS-Download-Instructions-VMware-ESXi.pdf
Cisco AVS can be installed manually using the ESXi command line interface or using VMware Update
Manager (VUM). After installation, the Cisco AVS hosts need to be added to the distributed virtual
switch. Notice that you can add only one host to the DVS at a time. It is also required to configure a node
policy for Cisco AVS and create a VMware vCenter domain in APIC.
For more information on installation and configuration of AVS, refer to the following URL:
http://www.cisco.com/c/en/us/support/switches/application-virtual-switch/products-installation-guides-list.html

Verifying the Cisco AVS Installation


You may use the following steps to verify Cisco AVS installation on VMware ESXi Hypervisor.
Step 1

In vSphere client, go to Hosts and Clusters and select Networking.


Step 2

Open the folder under the data center name and click on the virtual switch.

Step 3

Click the Hosts tab and look at the VDS Status and Status fields.
Figure 4-7 is a sample screen shot taken from VMware vCenter. The DVS status is "Up" indicating that
the OpFlex communication has been established with the Leaf switches.
Figure 4-7

Verifying Cisco AVS Installation on ESXi Hosts

AVS Virtual Machine Kernel (VMK) NIC connectivity


The VMK NICs are internal virtual ports on the AVS. These ports are used whenever IP packets need to
originate from the Virtual Ethernet Module (VEM) itself. Depending on the type of traffic, you can
create multiple VMK NICs on a host, typically for VTEP, vMotion, NFS, VMware fault tolerance, and
management connectivity. The server admin maps the VMK NICs to port-groups on the vCenter server.
Note that the port-groups are created on vCenter when the network admin creates an EPG and associates
it with an AEP that is attached to a VMM domain.
In this implementation, the VMK1 VMK NIC is used as a Virtual Tunnel Endpoint (VTEP) within the
ACI fabric. This interface is used for the OpFlex communication to the iLeaf. It has a
DHCP address assigned from the fabric management IP pool (192.168.0.0/16) when AVS is installed on
the host and added to the distributed virtual switch.
Figure 4-8 shows a sample configuration of VMK1 taken from a host running Cisco AVS.


Figure 4-8

VMK1 vNIC on ESXi host running AVS

In this implementation, VMK2 VMK NIC is used for vMotion traffic. This vNIC is mapped to the
vMotion EPG created under VMI tenant.
Figure 4-9 shows vMotion EPG created under tenant VMI.
Figure 4-9

vMotion EPG in APIC


Figure 4-10 shows the mapping between VMK2 vNIC in vCenter and the EPG in APIC. Notice that the
vNIC uses port-group vmotion_epg.
Figure 4-10

VMK2 vNIC on a Host Attached to vmotion_epg

The VMK3 VMK NIC is used for IP storage (NFS) traffic and is mapped to an EPG in the storage tenant.
This interface has a static IP address configured.
The VMK4 interface is used for VMware fault tolerance and is mapped to vm_ft_epg in the VMI tenant.
Fault Tolerance logging is enabled on this interface and it has a static IP.

VMK NIC deployment considerations


These are some of the deployment considerations for VMK NICs:
1. VMK1 is used for OpFlex communication with the iLeaf.
2. Do not enable vMotion on the VMK NIC used for the OpFlex channel.
3. Do not delete or change any parameters for the VMK NIC created for the OpFlex channel.
4. If you delete the OpFlex VMK NIC by mistake, recreate it with the VTEP port-group and configure it
   for a dynamic IP address.
5. Do not configure a static IP address on the OpFlex VMK NIC.
6. Configure vMotion on a separate VMK NIC with a separate EPG.

NetApp NFS Storage


The following sections go through the details of the NetApp-based storage service implementation.
Refer to the VMDC VSA 1.0.1 Implementation Guide for additional details on implementing NFS Storage
Virtual Machines (SVMs) in clustered Data ONTAP mode.


Clustered Data ONTAP Overview


Clustered NetApp Data ONTAP builds upon the success and technical features of Data ONTAP 7G to
provide a scale-out, unified storage architecture for both SAN and NAS storage presentation. This
implementation combines the best of both the traditional scale-up model, where individual controllers
or HA pairs can expand the amount of connected storage, and the scale-out model where multiple,
physically distinct arrays (controllers plus storage) are combined into one logical entity as shown in
Figure 4-11.
Figure 4-11

Figure 4-11

NetApp Cluster Data ONTAP Layout

[Figure: a clustered Data ONTAP system built from two HA pairs, with SVM1 and SVM2 presented as logical storage containers spanning the cluster.]

This large pool of data can be presented to hosts as if it was a single storage array, or as many, apparently
independent, storage arrays through secure logical containers known as Storage Virtual Machines, or
SVMs. Each SVM has its own set of storage and logical interfaces to which only it has access, and its
own unique configuration. An SVM may own resources on one, several, or all nodes within the cluster,
and those resources can be moved without disruption between individual cluster nodes.
Figure 4-12 shows the physical layout of the NetApp FAS3250 based storage array connectivity
validated during this implementation.
Figure 4-12

NetApp Cluster Data ONTAP Physical Layout

[Figure: the four FAS 3250 controllers (3250a-d) interconnect over a Nexus 5596 A/B cluster network and attach through ports E3a/E4a and E3b/E4b to the Cisco ACI Fabric leaf switches Leaf101-Leaf104.]

Storage tenant layout


Secure multi-tenancy for storage is the use of secure logical containers within a physical storage
environment to provide shared access to multiple distinct tenants, an abstraction at the storage layer
similar to the compute layer abstraction popularized by VMware. In this implementation, NetApp NFS
SVMs are used to achieve secure storage multi-tenancy.


In this implementation, each tenant category (Gold, Silver, Bronze, and Copper) shares a single NFS
SVM per tenant category. Each tenant in the Gold and Silver categories has a separate volume on these
SVMs. Additionally, each Gold tenant gets a dedicated NFS SVM; this is provided only to Gold tenants,
since the Gold service tier is a premium service. It is expected that Gold tenants will run their more secure
workloads from data stores on the dedicated SVM and normal workloads in data stores from the shared
SVM.
Figure 4-13 and Figure 4-14 show a summary of SVM provisioning for Gold and Silver tenants.
Figure 4-13

Gold Tenant Storage Volume Layout

[Figure: Gold1 and Gold2 tenant VMs each use dedicated datastores backed by tenant volumes on both their own dedicated SVM (Gold1/Gold2 Dedicated SVM) and the Gold Shared SVM.]

Figure 4-14

Silver Tenant Storage Volume Layout

[Figure: Silver1 and Silver2 tenant VMs each use a dedicated datastore backed by their own volume (Silver1/Silver2 Volume) on the Silver Shared SVM.]

All Bronze tenants share the same volume and data store on a single shared SVM. Copper tenants use
the same layout as Bronze as shown in Figure 4-15.


Figure 4-15

Bronze and Copper Tenant Layout

[Figure: all Bronze tenant VMs share one datastore on a single Bronze volume in the Bronze Shared SVM, and all Copper tenant VMs share one datastore on a single Copper volume in the Copper Shared SVM.]

NetApp NFS Configuration


The following is the workflow to create NFS SVMs. It is assumed that the aggregates and failover groups
are already defined.
Step 1

Create the base SVM. As part of this step, the SVM name, data protocols, client services, root volume
aggregate and root volume security style will all be defined.
vserver create -vserver svm_aci_gold_tenant2 -rootvolume svm_aci_gold_tenant2_rootvol
-aggregate aggr_gold_dedicated_SAS_flash -ns-switch file -nm-switch file
-rootvolume-security-style unix

Step 2

Enable NFS on the SVM. The required parameters include protocols, versions enabled.
vserver nfs create -vserver svm_aci_gold_tenant2 -access true -v3 enabled

Step 3    Create the logical interfaces (LIFs). The required parameters include the LIF name, home node,
home port, failover group, IP address, and netmask. Refer to the VMDC VSA 1.0.1 implementation guide
for details on creating failover groups.
network interface create -vserver svm_aci_gold_tenant2 -lif nfs1 -role data
-data-protocol nfs -home-node vmdc-3250a -home-port a0a-1000 -address 10.0.40.212
-netmask 255.255.255.0 -status-admin up -use-failover-group enabled -failover-group
data-1000

Step 4    Create an export policy and rules to allow access from the NFS subnets to the NFS SVMs.
vserver export-policy create -policyname vmware_mounts -vserver svm_aci_gold_tenant2
vserver export-policy rule create -vserver svm_aci_gold_tenant2 -policyname
vmware_mounts -clientmatch 10.0.40.0/24 -rorule sys -rwrule sys -superuser sys
-protocol nfs

Step 5    Create a volume of size 1 TB, set the permissions, and turn off volume snapshots.
volume create -vserver svm_aci_gold_tenant2 -volume svm_aci_gold_tenant2_vol01
-aggregate aggr_gold_dedicated_SAS_flash -size 1TB -state online -type RW -policy
vmware_mounts -security-style unix -unix-permissions ---rwxr-xr-x -junction-path
/svm_aci_gold_tenant2 -space-guarantee none -percent-snapshot-space 0%

Step 6    Apply the export policy to the root volume.


volume modify -vserver svm_aci_gold_tenant2 -volume svm_aci_gold_tenant2_rootvol
-policy vmware_mounts
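After the SVM and its volume are in place, it can help to verify the provisioning before presenting the
export to the ESXi hosts. The following checks are a minimal, illustrative sketch using standard clustered
Data ONTAP show commands against the names used above; they were not part of the documented workflow.
vserver show -vserver svm_aci_gold_tenant2
volume show -vserver svm_aci_gold_tenant2 -volume svm_aci_gold_tenant2_vol01
vserver export-policy rule show -vserver svm_aci_gold_tenant2 -policyname vmware_mounts
network interface show -vserver svm_aci_gold_tenant2 -lif nfs1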


NFS Resiliency
In this implementation, link/path level resiliency between the server and the storage has been validated.
There are a total of 16 paths from the server to the storage. Following the best practices for NFS on
VMware, redundancy is built into the links (connecting the server to the storage), the storage switches
(redundant ACI leaf switches), and the storage controllers (there are four nodes in the NetApp cluster).
In this topology there are four nodes, and any set of disk drives is controlled by a pair of nodes (an HA pair).
A LIF is a logical network interface that virtualizes SAN or NAS network connections. LIFs are tied to
an SVM and mapped to physical network ports, interface groups, or VLANs (when tagging is used) on
the controller. Because LIFs are virtualized, a LIF address remains the same even when a LIF is migrated
to another physical port on the same or a different node within the cluster. NAS LIFs can automatically
fail over if the current physical interface to which they are assigned fails (whether due to a cable, switch
port, interface port, or interface card failure), or they can work in conjunction with storage failover of an
HA pair if the cluster node hosting the LIF goes down. LIFs can also be manually migrated to another
physical port within the cluster.
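As a hedged illustration of the last point, a LIF can be moved and then returned to its home port with the
standard clustered Data ONTAP commands below; the node and port names are taken from this topology,
but the specific migration target shown is an assumption for illustration only.
network interface migrate -vserver svm_aci_copper_shared -lif nfs1 -destination-node vmdc-3250b -destination-port a0a-1000
network interface revert -vserver svm_aci_copper_shared -lif nfs1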
The state of the LIF used in this implementation in normal operation is shown below:
vmdc-3250-cluster::> network interface show -vserver svm_aci_copper_shared -lif nfs1
-fields home-node,home-port,curr-node,curr-port
vserver               lif  home-node  home-port curr-node  curr-port
--------------------- ---- ---------- --------- ---------- ---------
svm_aci_copper_shared nfs1 vmdc-3250c a0a-1000  vmdc-3250a a0a-1000
vmdc-3250-cluster::>
vmdc-3250-cluster::>
vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250a -ifgrp a0a
Node: vmdc-3250a
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:40:bd:9a
Port Participation: full
Network Ports: e3a, e3b, e4a, e4b
Up Ports: e3a, e3b, e4a, e4b
Down Ports: -

After Single Port Failure


vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250a -ifgrp a0a
Node: vmdc-3250a
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:40:bd:9a
Port Participation: partial
Network Ports: e3a, e3b, e4a, e4b
Up Ports: e3b, e4a, e4b
Down Ports: e3a
vmdc-3250-cluster::>
vmdc-3250-cluster::> network interface show -vserver svm_aci_copper_shared -lif nfs1
-fields home-node,home-port,curr-node,curr-port
vserver               lif  home-node  home-port curr-node  curr-port
--------------------- ---- ---------- --------- ---------- ---------
svm_aci_copper_shared nfs1 vmdc-3250c a0a-1000  vmdc-3250a a0a-1000

Notice that while a port has failed, the interface group remains up and the LIF has not been migrated.
Configuring both interface groups and failover groups provides for the maximum resiliency of NAS
LIFs.
After Multiple Port Failure
vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250a -ifgrp a0a


Node: vmdc-3250a
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:40:bd:9a
Port Participation: none
Network Ports: e3a, e3b, e4a, e4b
Up Ports: -
Down Ports: e3a, e3b, e4a, e4b
vmdc-3250-cluster::> network interface show -vserver svm_aci_copper_shared -lif nfs1
-fields home-node,home-port,curr-node,curr-port
vserver               lif  home-node  home-port curr-node  curr-port
--------------------- ---- ---------- --------- ---------- ---------
svm_aci_copper_shared nfs1 vmdc-3250c a0a-1000  vmdc-3250b a0a-1000

Notice that the interface group has no ports in a state of up, and the LIF has migrated to a port on the
other node of the HA pair.
HA Pair Failure
vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250b -ifgrp a0a
Node: vmdc-3250b
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:3f:b2:b0
Port Participation: none
Network Ports: e3a, e3b, e4a, e4b
Up Ports: -
Down Ports: e3a, e3b, e4a, e4b
vmdc-3250-cluster::>
vmdc-3250-cluster::> network port ifgrp show -node vmdc-3250a -ifgrp a0a
Node: vmdc-3250a
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: 02:a0:98:40:bd:9a
Port Participation: none
Network Ports: e3a, e3b, e4a, e4b
Up Ports: -
Down Ports: e3a, e3b, e4a, e4b
vmdc-3250-cluster::> network interface show -vserver svm_aci_copper_shared -lif nfs1
-fields home-node,home-port,curr-node,curr-port
vserver               lif  home-node  home-port curr-node  curr-port
--------------------- ---- ---------- --------- ---------- ---------
svm_aci_copper_shared nfs1 vmdc-3250c a0a-1000  vmdc-3250c a0a-1000

The interface group has no ports in a state of up in the HA pair consisting of nodes
vmdc-3250a/vmdc-3250b, and the LIF has migrated to a port on a node of the second HA pair.

VMware ESXi NFS Configuration

In this implementation each ESXi host has a VMK interface on the NFS subnet, as shown in Figure 4-16.
This VMK interface belongs to the AVS distributed virtual switch and is created automatically as part of
the ACI storage tenant configuration, as described in the ACI DC Fabric chapter.


Figure 4-16    ESXi VMK Interface Layout

Figure 4-17 shows NFS tenant data stores created on each ESXi host in this implementation.
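For reference, an NFS data store can also be mounted manually from the ESXi shell with esxcli. The sketch
below assumes the Gold tenant LIF address and junction path shown earlier; the data store name
gold2_dedicated_ds1 is purely illustrative and is not taken from this implementation.
esxcli storage nfs add --host=10.0.40.212 --share=/svm_aci_gold_tenant2 --volume-name=gold2_dedicated_ds1
esxcli storage nfs list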


Figure 4-17    NFS Data Stores Mounted on ESXi Hosts


CHAPTER 5

OpenStack Compute and Storage Implementation


In the Cisco Intercloud Data Center Application Centric Infrastructure (ACI) 1.0 solution, the Canonical
Ubuntu 14.04 LTS-based OpenStack Icehouse release provides the compute services for the Copper tenant
container. This implementation uses the Nexus 1000v for KVM for the underlying Layer 2 switching, and it
verifies both Ceph and NetApp-based NFS storage for the compute nodes. This chapter details only the
OpenStack-based implementation.
Figure 5-1    Intercloud Data Center ACI 1.0 OpenStack Setup

Physical Connectivity Layout


Figure 5-2 shows the physical attachment of the OpenStack components required for implementing the
Copper tenant container. Traffic from the OpenStack instances goes into the ASA firewall and exits out
to the Internet via the ASR 9000 network virtualization (nV) edge. The NetApp FAS3250 storage array
provides the NFS storage for OpenStack compute. The Canonical Metal as a Service (MaaS) and Juju
deployment platform provides management of the OpenStack control and compute nodes. Refer to
Chapter 11, Copper Tenant Container, for details on how all these interconnections flow through the
ACI fabric.


Figure 5-2    OpenStack Physical Connectivity

In this implementation, Canonical Ubuntu MaaS is used to manage the Cisco C-series servers that host
OpenStack control, compute, and Cisco Nexus 1000v VSM nodes.
MaaS makes deploying services faster, more reliable, repeatable, and scalable by using the service
orchestration tool Juju. MaaS and Juju services were hosted on separate Cisco C-series servers in the
provider management segments as shown in Figure 5-1.

C-Series Server Attachment


In this implementation Cisco UCS C-Series servers were used to host the OpenStack control nodes as well
as the Nexus 1000v virtual supervisor modules (VSMs). Three C220M3 servers were used to host the
OpenStack control nodes, while two servers were used to host the Nexus 1000v active/standby VSMs. The
Ceph storage cluster was built on three C240M3 servers, which were also used as Nova compute nodes. A
Nexus 93128TX leaf pair was used to connect the majority of the servers, while three servers were attached
to a Nexus 9396 leaf pair to simulate inter-leaf-pair connectivity through the spines.
Figure 5-3 shows the physical NIC connectivity on an OpenStack server connected to the Nexus
93128TX/9396 leaf pairs.


Figure 5-3    OpenStack C-Series Server Connectivity Layout

C-Series Server NIC Layout


Each server connecting to the Nexus 93128TX was equipped with a Cisco VIC 1225T NIC providing 2x 10G
copper connectivity to the ACI fabric. Each server connecting to the Nexus 9396 had a Cisco VIC 1225 with
2x 10G fiber connectivity to the ACI fabric.
In addition, the three onboard 1G NICs provided:
1. CIMC connectivity
2. Management connectivity
3. NFS connectivity
In this implementation a single 1G interface was used for NFS connectivity. It is possible to use a bonded
1G interface for redundancy and higher bandwidth if required.
Figure 5-4 shows the NIC connectivity in more detail.
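If a bonded 1G interface were used for the NFS path, an Ubuntu 14.04 /etc/network/interfaces entry along
the following lines could be applied (it requires the ifenslave package). This is a sketch only, not part
of this implementation: the interface names eth2/eth3 and the bond parameters are assumptions, and the
address shown is simply an address from the NFS subnet used in this design.
# Illustrative bonding sketch for the NFS path (not used in this implementation)
auto bond0
iface bond0 inet static
    address 10.0.40.101
    netmask 255.255.255.0
    bond-mode 802.3ad
    bond-miimon 100
    bond-slaves eth2 eth3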
Figure 5-4    OpenStack Server NIC Layout


OpenStack Services and Access Implementation


OpenStack tenant instances can reach the Internet and can also be managed via the Internet. Usually the
Horizon dashboard is used by the tenant administrator to create, manage, and delete instances. OpenStack
object storage provides direct access to object storage for tenant instances. Figure 5-5, accompanied by
the legend in Table 5-1, shows the logical connectivity model of the OpenStack servers and the tenant
access to the various OpenStack services.
Figure 5-5    OpenStack Logical Layout

Table 5-1    OpenStack Logical Layout

Name                               Network            Purpose
OpenStack API/Control/Management   10.0.45.0/24       Inter-OpenStack node communication, including compute
                                                      to control nodes and control to control nodes. Also
                                                      includes the MaaS and Juju bootstrap servers.
OpenStack instance addresses       10.21.x.0/24       Per-tenant instance addresses.
OpenStack out-of-band network      172.18.116.0/24    Outside world to OpenStack dashboard access. Includes
                                                      NetApp control path access for Cinder as well.
Test tool network                  10.0.47.0/24       Traffic tools management network and instances'
                                                      second vNIC.
NFS data network                   10.0.40.0/24       NetApp NFS data path access for Cinder.
Network device management          10.0.46.0/24       Provider backend network where all network device
                                                      management resides.
Swift/RADOS gateway NAT network    192.168.100.0/24   NAT network presented to instances for Swift/RADOS
                                                      gateway access.

MaaS and Juju Servers

MaaS and Juju are part of the initial provisioning framework and therefore can reside outside the compute
and control nodes. In this implementation the MaaS and Juju services were hosted on C-Series servers
separate from the compute and control nodes, in the provider management segments, as shown in
Figure 5-5. With this layout the MaaS server managed all C-Series servers used to host the Juju, compute,
control, and Nexus 1000v VSM nodes.

OpenStack Horizon Dashboard Access

OpenStack tenant administrators require access to the OpenStack Horizon dashboard to manage their
instances and resources. Horizon dashboard access is only available in the provider management segment
and is not exposed to the tenant instances, as shown in Figure 5-5. Therefore a separate access mechanism
is required to give tenant administrators out-of-band access to the Horizon dashboard. This can be
implemented with a separate routed data path from the Internet, where the tenant administrators reside,
into the provider management segments. This type of access can be filtered and protected by a firewall,
and access can be allowed explicitly to the Horizon dashboard VIP IP address/port pair.

OpenStack SWIFT/RADOS Gateway Object Store Access

In this implementation the Ceph RADOS gateway provides the OpenStack object storage services. The
RADOS gateway also provides SWIFT API emulation for OpenStack instances. The network requirements of
the OpenStack object storage services are unique when compared to other OpenStack services, because the
object services should be directly accessible from the OpenStack instances, unlike the image and block
storage services. This requires the OpenStack management network to be exposed to the OpenStack
instances.
In this implementation, the ASA 5585 cluster was used to secure and control the access from the OpenStack
instances to the RADOS gateway backend with NAT. In this way the IP addressing of the OpenStack
management network segments was not directly exposed to the tenant instances, and the ASA provided
granular control of what traffic is allowed from the instances to the backend management network.
Figure 5-5 shows the logical connectivity of instances to the RADOS gateway through the ASA firewall.
The ASA cluster configuration, including the NAT configuration, is explained in detail in the Copper
tenant configuration chapter.
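As a rough illustration of the NAT pattern described above (the authoritative configuration is in the
Copper tenant chapter), an ASA object NAT rule of the following shape maps the HAProxy VIP for the RADOS
gateway to the address presented to the instances. The interface names are assumptions and do not
necessarily match the interfaces used in this implementation.
object network radosgw-vip
 host 10.0.45.78
 nat (mgmt,tenant) static 192.168.100.100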


OpenStack Host Access to NFS Storage


In this implementation the OpenStack hosts have access to both Ceph and NFS block storage. Each
OpenStack server has a separate connection for NFS storage access. These connections are in the same
subnet as the NetApp NFS storage virtual machine (SVM) for the Copper tenant. Refer to the compute
and storage chapter for detailed information.
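For context, NetApp NFS access for Cinder is typically wired up through the NetApp unified driver in
cinder.conf. The snippet below is an illustrative sketch only: the option names are the standard
Icehouse-era NetApp driver options, while the management address, credentials, and export list are
placeholders rather than values taken from this implementation.
[netapp-nfs]
volume_backend_name = netapp-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = <cluster-mgmt-ip>
netapp_login = <api-user>
netapp_password = <api-password>
netapp_vserver = svm_aci_copper_shared
nfs_shares_config = /etc/cinder/nfs_shares
# /etc/cinder/nfs_shares holds one export per line, e.g. <nfs-lif-ip>:/<junction-path>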

Canonical OpenStack Implementation


The Canonical Ubuntu OpenStack implementation is based on two main components, MaaS and Juju. The
following sections give a quick overview of these components.

Metal as a Service (MaaS)


MaaS is the bare-metal provisioning tool that turns hardware servers into easily manageable components,
which in turn can be used to build the OpenStack services. It lets physical servers be treated much like
virtual machines in the cloud. Rather than having to manage each server individually, MaaS turns
bare-metal servers into an elastic, cloud-like resource.
Servers provisioned through MaaS are provisioned dynamically. Through a simple web interface or API,
physical servers can be added, commissioned, updated, deployed, and recycled at will. As scale needs
change, new nodes can be added and dynamically redeployed between services.
When a new node boots up, MaaS steps in, supplies all the information the node requires, reboots it,
and provides an Ubuntu image to install.
In addition, MaaS can take care of hardware-specific tasks such as burn-in tests and firmware and RAID
upgrades.
The key components of the MaaS software are:
- Region controller
- Cluster controller(s)
- Nodes
The nodes are the physical servers managed using MaaS. These can range from just a handful to many
thousands of systems.
Nodes can be controlled in a hierarchical way to facilitate different control policies, as shown in
Figure 5-6. A region controller is responsible for managing the cluster and consists of a web user
interface, an API, the metadata server for cloud-init, and an optional DNS server.
A cluster controller is responsible for provisioning and consists of a TFTP server and an optional DHCP
server. It is also responsible for powering servers on and off via IPMI.
Regional controllers can be used to separate clusters of nodes that belong to different subnets. In this
implementation both the region and cluster controllers are hosted on a single server.
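Once the MaaS CLI session described later in this chapter is logged in, the region can be queried with
the same maas <session> <endpoint> <operation> pattern used throughout the installation; the commands
below are illustrative checks only, using the vmdc-maas session name from this implementation.
maas vmdc-maas node-groups list
maas vmdc-maas nodes list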


Figure 5-6    MaaS Hierarchy
For more information refer to https://maas.ubuntu.com/docs/orientation.html.

Juju
Juju is a service orchestration tool that lets users quickly deploy OpenStack on Ubuntu. Its library of
charms makes it simple to deploy, configure, and scale out cloud services with only a few simple commands.
The magic behind Juju is a collection of software components called charms that encapsulate the knowledge
of how to properly deploy and configure services on resources.
Juju needs a separate bootstrap node from which all of the Juju orchestration is driven. This Juju
bootstrap node should be provided as one of the MaaS-managed hosts. Figure 5-7 shows the Juju client
deploying a service on a node provisioned by MaaS, based on MaaS API calls made by the Juju client.
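For orientation, once MaaS is configured as the Juju environment provider, the basic client workflow is
only a few commands; the mysql charm below is just an arbitrary example service, not one required by this
solution.
juju bootstrap          # MaaS allocates and installs the bootstrap node
juju deploy mysql       # MaaS provisions a node; Juju installs and configures the service
juju status             # shows machines, services, and units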
Figure 5-7    Juju Deploying a Service on a MaaS Managed Node

For more information refer to http://www.ubuntu.com/cloud/tools/juju.


Charms
Each charm is a structured bundle of files. Conceptually, charms are composed of metadata, configuration
data, and hooks, with some extra support files.
Hooks are executable files in a charm's hooks directory; hooks with particular names (see below) will be
invoked by the Juju unit agent at particular times.
There are five "unit hooks" with predefined names that can be implemented by any charm:
- install: runs just once, before any other hook. It should be used to perform one-time setup operations
  only.
- config-changed: runs in several different situations, such as immediately after "install", immediately
  after "upgrade-charm", and at least once when the unit agent is restarted.
- start: runs immediately after the first config-changed hook. It should be used to ensure the charm's
  software is running.
- upgrade-charm: runs immediately after any upgrade operation.
- stop: runs immediately before the end of the unit's destruction sequence. It should be used to ensure
  that the charm's software is not running, and will not start again on reboot.
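To make the hook model concrete, the sketch below shows what minimal install and start hooks might look
like for a hypothetical charm; the package and service names are invented for illustration and do not
correspond to any charm used in this solution.
#!/bin/bash
# hooks/install -- one-time setup only (hypothetical example charm)
set -e
apt-get update
apt-get install -y example-service   # "example-service" is an invented package name

# --- separate file: hooks/start ---
#!/bin/bash
# hooks/start -- ensure the charm's software is running
set -e
service example-service start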

Nexus1000v for KVM Implementation

The Nexus 1000v for KVM is a reliable virtual network infrastructure that provides a scalable, secure
environment for building OpenStack clouds, based on the NX-OS architecture. At the time of this
validation, the Nexus 1000v for KVM was implemented on Canonical Ubuntu and required a MaaS/Juju
deployment environment. The Nexus 1000v for KVM on Red Hat is now available; refer to the following link
for more details.
http://www.cisco.com/c/en/us/support/switches/nexus-1000v-kvm/products-release-notes-list.html


Figure 5-8    Nexus 1000v for KVM Solution Components

The Nexus 1000v has the following major components, shown in Figure 5-8:
- Virtual Supervisor Module (VSM), which can run as a VM on KVM or on the Nexus 1110-X cloud services
  appliance.
- Virtual Ethernet Module (VEM), which is deployed on each KVM host.
- VXLAN Gateway, which is deployed as a VM on any KVM host.
- Neutron plugin, which is deployed in the OpenStack Neutron service.
- Horizon router tab, which is deployed in the OpenStack Horizon service.

The Cisco Virtual Networking Solution for OpenStack is available in two editions. The Essential Edition
is available at no cost for up to 20 physical hosts and includes all the basic switching features. The
Advanced Edition adds the Cisco VXLAN Gateway to the base functionality of the Essential Edition. In this
release the Essential Edition has been used.

Nexus 1000v High Availability Model

The Nexus 1000v for KVM supports dual VSMs (active/standby), similar to standard NX-OS, and a VSM failure
is non-disruptive to the traffic flows. The active VSM runs all applications that control the system,
provides the management interface, and interacts with the VEMs and the standby VSM.


Figure 5-9    Nexus 1000v VSM High Availability Model

Applications on the standby VSM keep their runtime contexts in sync with the active VSM and are ready to
run. At switchover, the standby is ready to take over as active. When started, standby services get an
initial state snapshot from the active VSM using Sysmgr; this retrieves a snapshot of the persistent
context from the active peer, including the runtime configuration and runtime information.
Subsequent syncing of the persistent context consists of the standby receiving persistent/log messages via
sync events whenever an active VSM application sends, receives, or drops messages (over a TCP sync
connection). The standby service receives only updates not already included in the initial snapshot.
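The HA state can be confirmed from the active VSM with standard NX-OS show commands such as the following
(output omitted here); this is an illustrative verification step rather than part of the documented
workflow.
show system redundancy status
show module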

Nexus 1000v Architecture

Figure 5-10 gives an overview of the Nexus 1000v solution for the OpenStack environment.
Figure 5-10    Nexus 1000v VSM and VEM Connectivity Model
The key components of the system are explained in the following sections.

Virtual Supervisor Module (VSM)

The Nexus 1000v virtual supervisor module (VSM) provides the management and control plane functionality
for the system. It is based on the NX-OS operating system and makes network management of the virtual
environment consistent with the physical environment. The VSM is a virtual machine that can run on a
general-purpose compute node or on a dedicated network appliance, the Nexus 1110-X. In this implementation
Cisco C-Series servers have been used to host the VSMs.


Virtual Ethernet Module (VEM)

The VEM is a software data path module present in the hypervisor hosts and is responsible for the network
functionality of the hypervisor. In a traditional environment, the primary role of the software in a line
card module is to program the hardware based on notifications from the supervisor module. The data path
module in the Nexus 1000v not only receives the notifications from the VSM but also provides
packet-processing functionality. In addition to switching frames, packet processing includes enforcing
the data path features on the frames. The Nexus 1000v VEM components are shown in Figure 5-11.
Figure 5-11    Nexus 1000v VEM Components

The sub-component that is responsible for packet processing is called the data path (DP). The DP registers
as a controller to Open vSwitch and gets involved in switching the incoming packets through an efficient
DP scheduler. The VEM works in tandem with Open vSwitch (OVS), which notifies the Nexus 1000v VEM of port
events. Together they provide a distributed virtual switch solution for KVM. OVS is not engaged in the
fast switching path; the Nexus 1000v VEM kernel module is responsible for fast path switching.
The VEM does not directly interact with any host-side OpenStack module (Nova compute agent or Neutron
agent) or with the VM management entity, libvirt. It interacts with the OVS DB entity for port events. It
includes feature code that works in the DP, such as ACL/NetFlow, and is a user-space process with multiple
threads.
The sub-component that interacts with the VSM is called the data path agent (DPA). The DPA communicates
with the VSM and downloads configuration. It also sends notifications on port attach/detach to the VSM,
and applies policies such as ACL/NetFlow on the DP.
A VSM and a group of VEMs together form one distributed switch system. In the current model, the VSM
communicates with the VEMs over an L3 network, and there is no control plane communication among the
VEMs. Unlike a physical switch that uses a dedicated, ultra-reliable backplane channel for communication,
the VSM uses the network as the communication fabric. Only L3-based communication modes are supported in
the current release of the Nexus 1000v.

OpenStack Nexus 1000v Components

The OpenStack network component (Neutron) provides a plugin mechanism for integration with various
networking products. The plugin is platform specific, and each platform requires its own plugin. The Nexus
1000v Neutron plugin configures port-profiles and vEthernet ports on the Nexus 1000v, as shown in
Figure 5-12. In addition, the OpenStack dashboard component, Horizon, is enhanced with a Nexus
1000v-specific tab to support Nexus 1000v-specific functionality.


Figure 5-12    OpenStack Neutron Plugin for Nexus 1000v

VXLAN Gateway (VXGW)

The VXGW is an optional element used for mapping VXLAN segments to VLAN segments. The VXGW is available in
a VM form factor; it can be deployed as a VM in OpenStack or as a virtual service blade (VSB) on the Nexus
1110-X platform. In this implementation the VXGW is not configured.

Nexus 1000v Packet Flow

The Nexus 1000v Kernel Loadable Module (KLM) receives all incoming packets from VM ports, internal ports,
and physical ports, because the Nexus 1000v KLM takes over the port completely from the OVS.
If all the information necessary to switch the packet is available in the KLM, the packet is handled
completely in the kernel (Figure 5-13). If necessary, the packet is sent to the VEM-DP in user space to be
processed.
The VEM-DP supports multiple punt queues so that packets can be punted at different priorities. Each punt
queue is serviced by a thread running at a different nice level:
- High priority: for intercepted protocols.
- Low priority: for broadcast traffic.
- Normal priority: for all other traffic, for example, an unknown MAC address or a missing flow.


Figure 5-13    Nexus 1000v for KVM Packet Flow

Nexus 1000v Charms

The Nexus 1000v for KVM introduces two new Juju charms, the VSM and VEM charms. In addition, the Nova
cloud controller, quantum gateway (Neutron), OpenStack dashboard, and Nova compute charms have been
modified to accommodate Nexus 1000v-related changes.

VSM Charm
The VSM is required to be installed as a virtual machine on a bare-metal server. Both the primary and
secondary VSM use the same charm. Figure 5-14 shows the three hooks implemented in the VSM charm. In
summary, the VSM charm installs OVS, creates the VSM virtual machine, and brings it up.
Figure 5-14    Nexus 1000v VSM Charm Hooks

The following parameters are defined and passed to the VSM charm during the install via a configuration
YAML file:
- n1kv-source: "ppa:cisco-n1kv/icehouse-updates"
- n1kv-vsm-name: Nexus 1000v VSM virtual machine (VM) name
- n1kv-phy-intf-bridge: physical uplink interface for the bridge
- n1kv-vsm-role: Nexus 1000v VSM role (primary or secondary)
- n1kv-vsm-domain-id: domain ID this Nexus 1000v VSM will control
- n1kv-vsm-password: password of the admin user for the VSM VM
- n1kv-vsm-mgmt-ip: VSM virtual machine management IP address
- n1kv-vsm-mgmt-netmask: VSM virtual machine management netmask
- n1kv-vsm-mgmt-gateway: VSM virtual machine management gateway
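A hedged example of how these options might be supplied is shown below: a Juju service configuration file
keyed by the service name, followed by the deploy command. The charm and service names and all values
(domain ID, password, addresses, bridge interface) are illustrative only and are not taken from this
implementation.
# vsm-config.yaml (illustrative values only)
vsm-p:
  n1kv-source: "ppa:cisco-n1kv/icehouse-updates"
  n1kv-vsm-name: "vsm-p"
  n1kv-vsm-role: "primary"
  n1kv-vsm-domain-id: "100"
  n1kv-vsm-password: "Cisco12345"
  n1kv-vsm-mgmt-ip: "10.0.45.60"
  n1kv-vsm-mgmt-netmask: "255.255.255.0"
  n1kv-vsm-mgmt-gateway: "10.0.45.1"
  n1kv-phy-intf-bridge: "eth0"

juju deploy --config vsm-config.yaml vsm vsm-p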

VEM Charm
The VEM charm is designed to be a subordinate charm. It should be deployed on all Nova compute and
quantum-gateway (Neutron network) nodes.
Similar to the VSM, configuration parameters are defined by the administrator and passed to the VEM charm;
they are applied to all compute/Neutron hosts.
Part of the configuration is a string option that takes in the content of a mapping file (in YAML format).
This mapping file can be used by the administrator to specify host-specific configuration, such as the
interfaces used for VEM/VSM communication.
After install, when the config-changed hook is run (Figure 5-15), it picks up the parameters entered by
the administrator for each host. If, for any host, a configuration is not specified in the mapping file,
then the general configuration is applied. Following is a list of parameters that can be passed to the
VEM charm:
the general configuration is applied. Following are a list of parameters that can be passed on to VEM,

- n1kv-source: "ppa:cisco-n1kv/icehouse-updates"
- n1kv-vsm-ip: management IP of the VSM
- n1kv-vsm-domain-id: domain ID this VSM will be controlling
- host_mgmt_intf: management interface of the host
- uplink_profile: port-profile mapping for all UVEM-managed interfaces
- mapping: host-specific configuration mapping

If the VSM is charm-based, an add-relation between the VSM and VEM charms will configure the VEM nodes to
connect to the VSM. For an appliance-based (non-charm) VSM, configuration parameters can be set on the VEM
charm to connect to the VSM.
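Because the VEM charm is subordinate, it only gets units once it is related to a principal service. A
hedged sketch of the deployment sequence follows; the charm and service names are illustrative rather than
the exact names used in this implementation.
juju deploy --config vem-config.yaml vem
juju add-relation nova-compute vem
juju add-relation quantum-gateway vem
juju add-relation vsm vem        # only when the VSM is charm-based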
Figure 5-15    Nexus 1000v VEM Charm Hooks

Quantum Gateway (Neutron) Charm

The quantum gateway (Neutron) charm was modified to accommodate the changes required for the Nexus 1000v.
The following Nexus 1000v-specific parameters are configured and passed to the charm during the install:
- plugin: "n1kv" (network configuration plugin to use for Quantum)
- openstack-origin: "ppa:cisco-n1kv/icehouse-updates"


Nova Cloud Controller Charm

The Nova cloud controller charm was modified to accommodate the changes required for the Nexus 1000v. The
following Nexus 1000v-specific parameters are configured and passed to the charm during the install:
- openstack-origin: "ppa:cisco-n1kv/icehouse-updates"
- quantum-plugin: Quantum plugin to use for network management (n1kv)
- n1kv-config-flags: comma-separated list of key=value config flags to be set in cisco_plugins.ini
- n1kv-restrict-policy-profiles: if disabled, all tenants are able to access all policy profiles; if
  enabled, tenants can access only those policy profiles that are explicitly assigned to them by the admin
- n1kv-vsm-ip: IP address to connect to the Cisco Nexus 1000v VSM
- n1kv-vsm-password: password to connect to the Cisco Nexus 1000v VSM
- n1kv-vsm-username: username to connect to the Cisco Nexus 1000v VSM
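These options can also be changed after deployment with juju set; the values below are placeholders for
illustration only, not the ones used in this implementation.
juju set nova-cloud-controller quantum-plugin=n1kv \
    n1kv-vsm-ip=10.0.45.60 n1kv-vsm-username=admin n1kv-vsm-password=Cisco12345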

Nova Compute Charm

The Nova compute charm was modified to accommodate the changes required for the Nexus 1000v. The following
Nexus 1000v-specific parameters are configured and passed to the charm during the install:
- openstack-origin: "ppa:cisco-n1kv/icehouse-updates"
- virt-type: virtualization flavor; supports kvm, xen, uml, lxc, qemu

OpenStack Dashboard Charm

The OpenStack dashboard charm was modified to accommodate the changes required for the Nexus 1000v. The
following Nexus 1000v-specific parameters are configured and passed to the charm during the install:
- openstack-origin: "ppa:cisco-n1kv/icehouse-updates"
- profile: default profile for the dashboard (for example, "cisco")

Nexus 1000v for KVM Work Flow

The following sections describe the configuration workflow between the Nexus 1000v VSM and the OpenStack
Horizon dashboard.

OpenStack to Nexus 1000v Object Mapping

The Nexus 1000v replaces the OVS-based L2 switching in OpenStack. Table 5-2 compares the networking
constructs used in the traditional Nexus 1000v based configuration with the OpenStack networking
constructs.
Table 5-2    Nexus 1000v to OpenStack Components

OS Controller (Neutron) config item    VSM (NSM) config item
Cisco-policy-profile                   Veth (policy) port-profile
Cisco-network-profile                  Network segment pool
Network                                Network segment
Subnet                                 IP pool template
Port                                   Network Veth port

All configuration items above have associated CRUD (Create/Read/Update/Delete) operations.
OpenStack supports multiple tenants (projects), and each configuration item can be configured and assigned
to a particular tenant.
CRUD operations are triggered from the OpenStack controller and cause corresponding changes in the Nexus
1000v VSM configuration. Veth policy profiles are only configured through the VSM.
CRUD operations can be performed via the Python Neutron client CLI commands or via the Horizon dashboard.
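For example, the standard Python Neutron client calls below create a network, subnet, and port matching
the objects shown in the following sections. The Nexus 1000v-specific profile arguments are omitted here,
and while the values are taken from the examples in this chapter, treat this as an illustrative sketch
rather than the exact commands used in this implementation.
neutron net-create copper2_data
neutron subnet-create copper2_data 10.0.47.0/24 --name 10.0.47_net \
    --gateway 10.0.47.253 --dns-nameserver 64.102.6.247
neutron port-create copper2_data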

Configuration Work Flow

Figure 5-16 shows the workflow during creation of new networks based on the Nexus 1000v for OpenStack
tenants. OpenStack configuration can be done with the Horizon dashboard or with the Python CLI.

Figure 5-16    Nexus 1000v for KVM Configuration Work Flow

Create Policy Profile

Policy profiles are vEthernet port-profiles created on the Nexus 1000v VSM. All CRUD operations for policy
profiles are performed on the VSM by the network administrator.
VM policies using features such as ACL/NetFlow can be configured in a policy profile. The OpenStack
controller (N1Kv Neutron plugin) periodically polls the VSM to get the current set of profiles. To remove
a policy profile, the network administrator needs to delete the profile on the VSM. Following is an
example of a port profile used in this implementation.
port-profile type vethernet copper_template
mtu 9216
storm-control broadcast percent 20
ip flow monitor FLOW_MON input
ip flow monitor FLOW_MON output
ip port access-group test_acl in


ip port access-group test_acl out
no shutdown
guid 55b6099b-0525-4c73-be21-ef28a98017f5
state enabled
publish port-profile

Create Network Profiles

A network profile is a logical collection of network segments, created from the Horizon dashboard. A
corresponding network segment pool is created on the VSM. Following is an example of the configuration
created on the Nexus 1000v for a network profile created in the dashboard:
nsm network segment pool 54e0418a-0ca3-4670-b954-4c7399283cac
description copper2_data
uuid 54e0418a-0ca3-4670-b954-4c7399283cac
member-of logical network 54e0418a-0ca3-4670-b954-4c7399283cac_log_net

Create Networks
A network represents an L2 segment, with the segment ID chosen from the corresponding range defined in the
network profile, and is created from the Horizon dashboard. Following is an example of the configuration
created on the Nexus 1000v for a network created in the dashboard:
nsm network segment e9e3e757-8647-47d3-9f25-69a52d735cf7
description copper2_data
uuid e9e3e757-8647-47d3-9f25-69a52d735cf7
member-of network segment pool 54e0418a-0ca3-4670-b954-4c7399283cac
switchport mode access
switchport access vlan 502
ip pool import template bd9a3387-dddd-4942-94d7-45b5eaffaf4b uuid
bd9a3387-dddd-4942-94d7-45b5eaffaf4b
publish network segment

Create Subnet
A subnet represents a block of IPv4 addresses, with an option to enable/disable DHCP for that IP range,
and is created from the Horizon dashboard. Following is an example of the configuration created on the
Nexus 1000v for a subnet created in the dashboard:
nsm ip pool template 054ef0e3-1549-41c9-a91a-092ba3b93110
description 10.0.47_net
ip address 10.0.47.1 10.0.47.252
network 10.0.47.0 255.255.255.0
default-router 10.0.47.253
dhcp
dns-server 64.102.6.247

Create Port
A Neutron port represents a specific instance of a network segment + policy profile combination and is
created from the Horizon dashboard or from the Python Neutron CLI. A corresponding network vEthernet
object and port is created on the VSM.
nsm network vethernet
vmn_1357022c-9112-4908-a045-4e0a94577ecc_749d699a-51c3-48f9-9ee8-df97ead49d17
import port-profile copper2_data uuid 1357022c-9112-4908-a045-4e0a94577ecc
allow network segment 749d699a-51c3-48f9-9ee8-df97ead49d17 uuid
749d699a-51c3-48f9-9ee8-df97ead49d17
state enabled
port uuid 73d5635b-dd75-4e32-af54-7744a6db77ff mac fa:16:3e:c4:e3:83
port uuid 81995fcf-d027-4304-b485-f451786f6b53 mac fa:16:3e:4a:1b:16


OpenStack Installation
OpenStack software controls large pools of compute, storage, and networking resources throughout a data
center, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and
open source technologies, making it ideal for heterogeneous infrastructure.
This deployment is OpenStack Icehouse with High Availability for all services and uses the Cisco Nexus
1000v for the networking aspects. The services are deployed with Ubuntu MaaS, which manages the
commissioning of physical servers, and Ubuntu Juju, which instantiates services using Juju charms. LXC
containers (https://linuxcontainers.org/) are used to better manage resource usage on nodes where multiple
services are co-located. As part of the scope of this project, two storage backends were configured to
work with the deployment: both object-based Ceph storage and the more traditional NetApp storage have been
configured to work with Cinder and Glance.

High Availability

Table 5-3    Deployment Nodes and Services

Node #   Services                                                              Node Type
1        Ubuntu MaaS                                                           MaaS Node
2        Ubuntu Juju Bootstrap                                                 Juju Node
3-5      Neutron Gateway, RabbitMQ (LXC), MySQL (LXC), Ceph RADOS gateway      Control Node
         (LXC), Keystone (LXC), Glance (LXC), Cinder (LXC), Horizon (LXC)
6-8      Nova Compute, Ceph                                                    Compute Node
9        VSM Primary Nexus 1000v                                               VSM Node
10       VSM Secondary Nexus 1000v                                             VSM Node

Node 1

Ubuntu MaaS runs on its own node and will provision servers for use with Juju. This is the only service
that will run on this node, and it shall be known as the MaaS Node in this document.
Node 2

Ubuntu Juju will be bootstrapped to this node. This will allow Juju to be run in conjunction with MaaS to
provision new servers with a fresh Ubuntu OS installation and then configure and install services, such as
OpenStack components. This is the only service that will run on this node, and it shall be known as the
Juju Node in this document.
Nodes 3-5

These nodes will contain the Neutron Gateway on the bare-metal OS installation, while RabbitMQ Server,
MySQL (Percona XtraDB Cluster), Ceph RADOS Gateway, Keystone, Glance, Cinder, and Horizon are created and
run in LXC containers. Placing the other services in containers allows their operations to be
compartmentalized; as of now Ubuntu calls co-locating some services "hulk-smashing", which could lead to
unintended errors. The three nodes will have the same services on them, establishing High Availability
with some extra configuration. These nodes shall be known as Control Nodes in this document.


Note    The Ceph RADOS gateway does not currently support HA via the hacluster charm. A bug has been
opened in Launchpad to request this feature:
https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1328927
Nodes 6-8

These nodes will contain the Nova Compute and Ceph services on the bare-metal OS installation. Nova
Compute and Ceph can be co-located according to Ubuntu's recommendations. The three nodes will have the
same services on them, establishing High Availability with some extra configuration. These nodes shall be
known as Compute Nodes in this document.
Nodes 9-10

These nodes will contain the Nexus 1000v component to control the networking aspect of the setup; Neutron
Gateway functionality will not be used. Node 9 will be the primary VSM node and node 10 will be the
secondary VSM node, in case the primary VSM fails. This allows the Nexus 1000v to be in High Availability
and continue to operate during any single failure. These nodes shall be known as the VSM Nodes in this
document.
Table 5-4    Package and Operating System Versions

Package / OS    Version
Ubuntu OS       Ubuntu 14.04.1 LTS
maas            1.5.4+bzr2294-0ubuntu1.1
maas-dns        1.5.4+bzr2294-0ubuntu1.1
maas-dhcp       1.5.4+bzr2294-0ubuntu1.1
juju-core       1.20.11-0ubuntu1~14.04.1~juju1
juju-deployer   0.3.6-0ubuntu2
cloud-init      0.7.5-0ubuntu1.3

High Availability Components

The website https://wiki.ubuntu.com/ServerTeam/OpenStackHAUbuntu has recommendations and descriptions of
an entire HA setup. Summaries and excerpts describing High Availability and how it works for each service
are provided here.
The service unit that is to be the leader is selected according to the following election procedures:
- Pre-clustering: Leaders are elected by selecting the oldest peer within a given service deployment. This
  service unit will undertake activities such as creating underlying databases, issuing usernames and
  passwords, and configuring HA services prior to full clustering.
- Post-clustering: Once a set of service units has been clustered using Corosync and Pacemaker, leader
  election is determined by which service unit holds the VIP through which the service is accessed. This
  service unit then takes ownership of singleton activity within the cluster.
hacluster is the charm used to configure HA for many services; the following is a quick rundown of its
behavior:


The hacluster charm deals with installing and configuring Corosync and Pacemaker based on the relation
data provided by its related principal charm. This includes services to control from the cluster, shared
block devices from Ceph, file systems on those block devices, and VIPs.
If you need to check the cluster status of any service that utilizes the hacluster charm (Glance in this
example):
juju ssh glance/0
sudo crm status
sudo corosync-quorumtool -s
sudo corosync-cfgtool -s

This will output the current status of resources controlled by Corosync and Pacemaker.
There are two HA models:
- Stateless API Server: For stateless API services, the OpenStack service is reconfigured to listen on
  [default port - 10], haproxy is installed and configured to listen on the default service port and to
  load balance across all service units of the service, and a virtual IP is floated on top of the primary
  service unit. This ensures that the full capacity of all service units in the service is used to service
  incoming API requests - an Active/Active model.
- Stateful Server: For services where state must be stored, such as MySQL or RabbitMQ, state is stored on
  a shared block device provided by Ceph; this is mapped on one (and only one) server at a time using the
  Ceph RBD kernel module. The device, and its associated file system and contents, are placed under the
  control of Corosync and Pacemaker using the hacluster charm; this ensures that the persistent data is
  only writable from one service unit within a service at any point in time. Services of this type are
  described as Active/Passive.
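In charm terms, HA for a stateless API service is typically wired by deploying hacluster as a subordinate,
setting a VIP on the principal service, and relating the two. The sketch below uses Glance as the example;
the service name and VIP are illustrative, not the values used in this implementation.
juju deploy hacluster glance-hacluster
juju set glance vip=10.0.45.90
juju add-relation glance glance-hacluster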

Table 5-5    Services High Availability Descriptions

MySQL (Percona XtraDB Cluster)
  HA Model: Stateful Server
  Description: Percona XtraDB Cluster provides a pure user-space Active/Active MySQL solution with no
  reliance on shared storage. Writes are synchronously replicated across the cluster of MySQL servers, so
  it is not a scale-out solution. Downtime in the event of servers dropping should be better.

RabbitMQ Server
  HA Model: Stateful Server
  Description: RabbitMQ provides a centralized message broker which the majority of OpenStack components
  use to communicate control plane requests around an OpenStack deployment. RabbitMQ does provide a native
  Active/Active architecture.

Keystone
  HA Model: Stateless API Server
  Description: Keystone is generally stateless; in the reference architecture it can be scaled
  horizontally - requests are load balanced across all available service units. Some state is stored on
  local disk (specifically service usernames and passwords); these are synced between service units during
  hook execution using SSH + unison.

Nova Cloud Controller
  HA Model: Stateless API Server
  Description: The cloud controller provides the API endpoints for the Nova (Compute) and Quantum
  (Networking) services. The APIs are stateless; in the reference architecture this service can be scaled
  horizontally, with API requests load balanced across all available service units.

Glance
  HA Model: Stateless API Server
  Description: Uses Ceph/NetApp in conjunction with Glance to provide highly available object storage; the
  design relegates Glance to being a stateless API and image registry service.

Cinder
  HA Model: Stateless API Server
  Description: Uses Ceph/NetApp in conjunction with Cinder to provide highly available, massively scalable
  block storage for tenant instances. Ceph/NetApp block devices are accessed directly from compute nodes;
  this design relegates Cinder to being a stateless API and storage allocation service.

Neutron Gateway (charm is named after Quantum Gateway)
  HA Model: See description
  Description: Quantum in an HA mode is only supported in >= Grizzly due to the provision of an
  agent/scheduler infrastructure in this release. Some aspects of Neutron (the API server, for example)
  are integrated into other OpenStack charms; to complete the networking topology a Neutron Gateway is
  required to provide Layer 3 network routing and DHCP services for Layer 2 networks. Currently HA is
  implemented by re-allocating network resources on a failed service unit to good service units; this is
  orchestrated using the cluster-relation-departed hook in the quantum-gateway charm. Fail-over of services
  can take between 10-30 seconds. Actual L3 gateway functionality is not used in this implementation, as
  the Cisco Nexus 1000v based provider network is used instead.

Nova Compute
  HA Model: Not available yet; there is alternate behavior to aid managing instances
  Description: Full HA is not possible on Nova Compute service units; however, the nova-compute charm can
  be configured to support secure live migration of running instances between compute service units,
  supporting a managed, minimal-disruption approach to maintenance of the underlying operating system.

Horizon
  HA Model: API Server (although this service is not an API service, it uses the same model for HA)
  Description: This service is completely stateless and can be scaled horizontally, with requests being
  load balanced across all available service units.

Ubuntu MaaS Installation

Ubuntu MaaS allows the provisioning of bare-metal servers on which services can be deployed
(https://maas.ubuntu.com/).
This link (https://maas.ubuntu.com/docs1.5/install.html) is Ubuntu's install page for the MaaS version
used in this documentation.


Step 1    Install Ubuntu 14.04.1 LTS on the physical server meant to be the MaaS server.

Step 2    Configure DNS.

Step 3    Configure proxy settings, if necessary.
a. Set the proxy variables in /etc/wgetrc:
   https_proxy = http://proxy-wsa.esl.cisco.com:80/
   http_proxy = http://proxy-wsa.esl.cisco.com:80/
b. Update /etc/apt/apt.conf with the proxy address:
   Acquire::http::Proxy "http://proxy-wsa.esl.cisco.com:80";
c. Set the following shell variables:
   export http_proxy=http://proxy-wsa.esl.cisco.com:80/
   export https_proxy=http://proxy-wsa.esl.cisco.com:80/
   export no_proxy="10.0.45.1,10,10.0.45.0/24,localhost"

Note    http://proxy-wsa.esl.cisco.com:80/ should be replaced with the proxy address of the setup.

Step 4    Update and upgrade packages.
sudo apt-get update
sudo apt-get dist-upgrade

Step 5    Install MaaS and other needed packages such as NTP and a Juju tool.
sudo apt-get install maas maas-dhcp maas-dns cloud-init ntp

Step 6    Create the MaaS super user.
sudo maas-region-admin createsuperuser

Step 7    Obtain the MaaS API key.
sudo maas-region-admin apikey --username root

Step 8    Create a MaaS profile and log into the MaaS API using the key created in Step 7. The MaaS
session identifies this login with MaaS commands (Figure 5-17).
maas login <maas-session> http://<MaaS-Server-IP>/MAAS/api/1.0 <Generated-API-Key>

Figure 5-17

Note    In Steps 9 and 10, the <maas-session> is vmdc-maas.

Step 9    Log into the MaaS node at http://<MaaS-Server-IP>/MAAS with super user credentials. Unset the
environment proxy variables or use a new terminal before Step 10.


Step 10    Set the network configuration through the MaaS GUI. Select the Settings option on the right
side of the top menu bar. Scroll down to the Network Configuration section, set the values, and then save
(Figure 5-18).

Figure 5-18    Example of Network Configuration Section

Step 11    Import images into the MaaS server.
a. Using the CLI:
   maas <maas-session> node-groups import-boot-images
b. When logged into the web page (Figure 5-19):
   1. Navigate to the Clusters tab.
   2. Select the Import boot images button.


Figure 5-19

Note

Step 12

There is no way as of this writing to check status of import. Reload the page above occasionally to check
on status. In our system under test, it has been observed to possibly take > 30 minutes to download all
images for the latest architectures of the Ubuntu Operating Systems.
Create an SSH key and add to MaaS Server.
a.

Run the command ssh-keygen and press enter until finished.

b.

Copy the key which was created in /home/<user>/.ssh/id_rsa.pub.

c.

Navigate to the MaaS node web page.

d.

Select root (or your own username) -> preferences from the top right of the web page (Figure 5-20).

Figure 5-20

Screen Capture of Preferences Option

e.

Select + Add SSH key under the SSH keys section.

f.

Paste the created key into the field and select + Add key.
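The SSH key can also be added from the MaaS server CLI rather than the web page. This is an optional,
hedged sketch only: the sshkeys endpoint and its key= argument are assumed to be available in the installed
MaaS release, so check the MaaS CLI help for the version in use before depending on it.

maas <maas-session> sshkeys new key="$(cat ~/.ssh/id_rsa.pub)"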

Step 13

Set DNS options on the MaaS server (Figure 5-21).


a.

Navigate to the Clusters tab on the MaaS web page.

b.

Select the Cluster Master cluster to be taken to the Edit Cluster Controller page.

c.

Enter the DNS zone name and select Save cluster controller.


Figure 5-21

Screen Capture of the DNS Zone Name Section

d.

Select the interface which will serve DHCP and DNS under the Interfaces section and edit that
interface (Figure 5-22).

Figure 5-22

Screen Capture of Selecting Edit for DHCP and DNS Management Interface

e.

Enter the correct information on the Edit Cluster Interface page for the interface to serve as DHCP and
DNS (Figure 5-23).


Figure 5-23

Screen Capture of Example Edit Cluster Interface Page

Step 14

Add nodes to the MaaS server. Repeat for each node to be added to MaaS. (A CLI alternative is sketched
after the sub-steps below.)
a.

Navigate to the Nodes tab.

b.

Select + Add Node.

c.

Enter a hostname for the machine to be added to MaaS.

d.

Select an Ubuntu OS release from the Release drop down to be installed on the node.

e.

Use the Default (master) cluster.

f.

Select the architecture of the node from the Architecture drop down.

g.

Select the Power type from the drop down; this option will bring up another area to enter
information based on the selection. This is how MaaS will control the node.
If IPMI is selected, an IP address, power user, and power password must be entered.

h.

Enter the Mac address of each network interface of the node, selecting + Add additional MAC
address if more spaces are needed.

i.

Select Add Node when finished entering information.
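For environments with many servers, nodes can also be registered from the MaaS CLI instead of the GUI. The
command below is a hedged sketch only: it assumes an IPMI-managed node, the parameter names follow the
MaaS 1.x CLI, and every placeholder value (hostname, architecture, MAC addresses, IPMI address and
credentials) must be replaced to match the actual hardware. Verify the exact syntax against the MaaS CLI
documentation for the installed version.

maas <maas-session> nodes new \
    hostname=<node-hostname> \
    architecture=amd64/generic \
    mac_addresses=<nic1-mac> mac_addresses=<nic2-mac> \
    power_type=ipmi \
    power_parameters_power_address=<ipmi-or-cimc-ip> \
    power_parameters_power_user=<power-user> \
    power_parameters_power_pass=<power-password>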


Ubuntu Juju Installation


Ubuntu Juju (https://juju.ubuntu.com) uses software packages called charms to deploy and scale services in a cloud
infrastructure. When used together with Ubuntu MaaS, services such as OpenStack can be deployed on
physical servers easily and with minimal user interaction.
This link (https://maas.ubuntu.com/docs1.5/juju-quick-start.html) is Ubuntu's Juju Quick Start web
page.
The following instructions are to be completed through the MaaS server CLI.
Step 1

Prepare an environment yaml file (http://www.yaml.org/) with specific configuration information. This
allows the Juju installation to connect to the MaaS server.
Within the user's home directory on the MaaS server, ensure .juju/environments.yaml is created with the
following information:
default: maas
environments:
maas:
type: maas
maas-server: 'http://172.18.112.140:80/MAAS'
maas-oauth: '9KyGP99ZQaffEaD7cp:uSdhpkHRL9gGRV4aXm:5ZRvWCWQPTnRCgYVmKbE7aqLWreRJVHX'
admin-secret: Cisco12345
default-series: trusty
bootstrap-timeout: 3600
http-proxy: http://proxy-wsa.esl.cisco.com:80
https-proxy: http://proxy-wsa.esl.cisco.com:80
no-proxy:
10.0.45.1,10.0.45.0/24,localhost,192.168.125.0/24,192.168.125.10,10.10.10.0/24,10.10.1
0.10

Note

The maas-oauth configuration option's value is the API key created in Step 7 of the Ubuntu MaaS
Installation section.

Step 2

MaaS can tag servers with an identifying string; these identifiers must be utilized to ensure selected
services are deployed on specific hardware (https://maas.ubuntu.com/docs/tags.html). The following
tags were used in this documentation:
Table 5-6        MaaS Tags

Node Type             Tag
Juju Node             juju
Compute Node          compute
Control Node          control
VSM Primary Node      vsmp
VSM Secondary Node    vsms


a.

Run the following command to create a tag in MaaS:


maas <maas-session> tags new name='<tag-name>'

b.

Run the command below to get a list of all the MaaS nodes and copy the system_id of the nodes to
tag in MaaS (Figure 5-24):


maas <maas-session> nodes list

Figure 5-24

Screen Capture of maas <maas-session> nodes list Output

c.

Run the following command to add a tag to a specific machine; repeat this step for each machine
that needs a tag (a loop example follows below):
maas <maas-session> tag update-nodes <tag-name> add=<system_id>
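When several machines need the same tag, the two commands above can be wrapped in a small shell loop. This
is a convenience sketch only; it assumes the system IDs have been copied from the maas <maas-session> nodes
list output into a file named nodes-to-tag.txt (the file name is arbitrary), and it uses the compute tag from
Table 5-6 as an example.

maas <maas-session> tags new name='compute'
while read system_id; do
    maas <maas-session> tag update-nodes compute add="${system_id}"
done < nodes-to-tag.txt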

Step 3

Bring up the Juju bootstrap. This will install the Juju service on a node provided from the MaaS server
with the configurations in the environments.yaml file. Proxies must be set if needed to correctly
bootstrap Juju.
sudo -i
export http_proxy=http://proxy-wsa.esl.cisco.com:80
export https_proxy=http://proxy-wsa.esl.cisco.com:80
export no_proxy="10.0.45.1,10,10.0.45.0/24,localhost"
add-apt-repository ppa:juju/stable
exit
export http_proxy=http://proxy-wsa.esl.cisco.com:80
export https_proxy=http://proxy-wsa.esl.cisco.com:80
export no_proxy="10.0.45.1,10,10.0.45.0/24,localhost"
sudo apt-get install juju-core
juju sync-tools --debug
juju bootstrap --constraints tags=juju --debug
juju status

The juju status output should look something like the following:
environment: maas
machines:
"0":
agent-state: started


agent-version: 1.20.11
dns-name: vmdc-juju1.icdc.sdu.cisco.com
instance-id: /MAAS/api/1.0/nodes/node-f23a8d48-691c-11e4-9921-4403a74abe42/
series: trusty
hardware: arch=amd64 cpu-cores=16 mem=196608M tags=use-fastpath-installer,juju
state-server-member-status: has-vote
services: {}

If it does, Juju has been successfully bootstrapped and charms can now be deployed.

Installation of OpenStack Icehouse using Ubuntu MaaS/Juju


Juju Charms define applications as services. Along with MaaS, Juju deploys charms on physical servers
with specific configurations, in this case OpenStack Icehouse.

Cisco Nexus 1000v Configuration Guide

Ubuntu OpenStack HA Guide

Juju-Deployer and Configuration File


Juju-Deployer provides automation of the deployment of application topologies with Juju. The deployer
file declares complex Juju service deployments that include constraints, relations, configurations, and
resources.
The Juju-Deployer is retrieved from https://launchpad.net/juju-deployer.
For N1kv charms, reference the section Nexus 1000v Charms above for configuration options.
Configuration options for the rest of the charms can be found at https://manage.jujucharms.com/charms.
On each charms page, search for Config details > to view configuration options and their values.
Ensure the updated yaml file is created in the home directory.
The file can be downloaded from the following link:
https://github.com/CiscoSystems/n1kv-openstack-tools/blob/master/canonical_setup_files/SDU_Physi
cal_OpenStack_Ceph_N1kv.yaml

Note

The following information is specific to this system. It is important to verify the information and modify
it to the necessary requirements. This configuration is specific to the HA scenario explained in previous
sections.
Table 5-7        Charm Branch Revisions (As of Last Verified Deployment)

Charm                    Branch Revision    Source
ceph                     86                 Charm Store
ceph-radosgw             30                 Charm Store
cinder                   56                 Charm Store
glance                   78                 Charm Store
hacluster                37                 Charm Store
haproxy                  86                 Charm Store
keystone                 87                 Charm Store
mysql                    130                Charm Store
nova-cloud-controller    70                 N1kv
nova-compute             59                 N1kv
openstack-dashboard      27                 N1kv
percona-cluster          39                 Charm Store
quantum-gateway          47                 N1kv
rabbitmq-server          68                 Charm Store
vem                      155                N1kv
vsm                      46                 N1kv

OpenStack_HA_N1kv.yaml
# OpenStack Options
openstack-common:
series: trusty
services:
nova-compute:
charm: nova-compute
options:
config-flags:
"auto_assign_floating_ip=False,compute_driver=libvirt.LibvirtDriver"
enable-live-migration: True
enable-resize: True
migration-auth-type: 'ssh'
virt-type: kvm
openstack-origin: ppa:cisco-n1kv/icehouse-updates
neutron-gateway:
charm: quantum-gateway
options:
instance-mtu: 1350
ext-port: eth3
plugin: n1kv
openstack-origin: ppa:cisco-n1kv/icehouse-updates
mysql:
charm: percona-cluster
options:
root-password: ubuntu
sst-password: ubuntu
vip: 10.0.45.201
vip_iface: eth0
vip_cidr: 24
ha-bindiface: eth0
max-connections: 500
mysql-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.1
corosync_key:
"3r8Y1zILzqADvJB7eLJGPrCI4g5Tg+uZ0+qq1kXNe0273yZlee9k2VT1twsyaSx3tNDDIcfuM/ykQNFRLw6dO
WdXPbzgqIM5M5FExYQlXv2+s3kowRL0xuanVWXucaKu+t3jDDxmVnhj0SY/ixl3Gg0XrW4qXFoK05uMoIhK8Js
="
rabbitmq-server:


charm: rabbitmq-server
options:
vip: 10.0.45.202
vip_iface: eth0
vip_cidr: 24
ha-bindiface: eth0
ha-vip-only: True
keystone:
charm: keystone
options:
admin-password: openstack
debug: 'True'
log-level: DEBUG
enable-pki: 'False'
vip: 10.0.45.203
ha-bindiface: eth0
keystone-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.5
corosync_key:
"6aVson6XvaprzAppLB6UA4OUgZIyNtW+qVwbanQta0aLMagwbPNomTniLr3ZyVGtEL7A0c48tJvaA+lafL2Hz
Gq+43/aKnUbG5k7d4sKaQXP/sKLhCpyj+04DddBRAVsBJ6r9tG45CGF+H+qUykL1rOT0EesZhDqBiBGrV+DXes
="
openstack-dashboard:
charm: openstack-dashboard
expose: true
options:
profile: cisco
secret: openstack
vip: 10.0.45.204
vip_iface: eth0
vip_cidr: 24
ha-bindiface: eth0
openstack-origin: ppa:cisco-n1kv/icehouse-updates
dashboard-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.9
corosync_key:
"9aNUFk+o0Hqt/6i46ltcycMogHm+bgOkhsIwBwuXX3YQZfvioZZZqggi9R9Ccj1OqIrxLA+GTstghYcc/hjUL
hIl3BIX6HAdePhX7sI8khTCiPTN/w4MIy3nW1CjFaeWW31CIhrXnTcq11l0MEB3vKNlN5/b7/kqvagB6oSjw4s
="
nova-cloud-controller:
charm: nova-cloud-controller
options:
network-manager: Neutron
neutron-external-network: Public_Network
quantum-security-groups: 'False'
n1kv-vsm-ip: 10.0.45.208
n1kv-vsm-username: admin
n1kv-vsm-password: Cisco12345
openstack-origin: ppa:cisco-n1kv/icehouse-updates
quantum-plugin: n1kv
vip: 10.0.45.205
vip_iface: eth0
vip_cidr: 24
ha-bindiface: eth0
ncc-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.6


corosync_key:
"xZP7GDWV0e8Qs0GxWThXirNNYlScgi3sRTdZk/IXKDqkNFcwdCWfRQnqrHU/6mb6sz6OIoZzX2MtfMQIDcXuP
qQyvKuv7YbRyGHmQwAWDUA4ed759VWAO39kHkfWp9y5RRk/wcHakTcWYMwm70upDGJEP00YT3xem3NQy27AC1w
="
cinder:
charm: cinder
options:
block-device: "None"
overwrite: 'True'
ceph-osd-replication-count: 3
glance-api-version: 2
vip: 10.0.45.206
ha-bindiface: eth0
cinder-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.8
corosync_key:
"wllBMGAfdCsotmXGbCbJ0LhAuOPQ9ZEIIAXIWWeNLwrmC7C9jmm92RSL1kYGCRRWaL7W7AziA6aBy//rZxeZ3
z0YkM0QFD+4Vg7vtM6JaBoOFlJgVd6mbYUfVbI6IMqGiUDJ8hh5sKmN7kwQLNNwASGlJiMo5s9ErWviVM6/OrQ
="
glance:
charm: glance
options:
ceph-osd-replication-count: 3
vip: 10.0.45.207
ha-bindiface: eth0
glance-hacluster:
charm: hacluster
options:
corosync_mcastaddr: 226.94.1.7
corosync_key:
"eO34WuxbQ/FaQvYb/ffTtX+0phNfNZlmhRrC8gLYJMf/b52Ny3cRXjgp5P1lEfZFHjrhQ3lWQOqENuBVcejS1
OYt574Xq2l1XLEHoEPbktovDhaS9yxIU7SYULdlx7j/BNtW7evY0pRBr23MYWEI3hETHVdtOeqgW1IB3zgoyco
="
ceph:
charm: ceph
options:
monitor-count: 3
fsid: 6547bd3e-1397-11e2-82e5-53567c8d32dc
monitor-secret: AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==
osd-devices: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
/dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn
osd-reformat: "yes"
ceph-radosgw:
charm: ceph-radosgw
haproxy:
charm: haproxy
vsm-p:
charm: vsm
options:
n1kv-source: ppa:cisco-n1kv/n1kv-updates
n1kv-vsm-domain-id: 500
n1kv-vsm-password: "Cisco12345"
n1kv-vsm-mgmt-ip: "10.0.45.208"
n1kv-phy-intf-bridge: "eth0"
n1kv-vsm-mgmt-gateway: "10.0.45.253"
n1kv-vsm-mgmt-netmask: "255.255.255.0"
n1kv-vsm-role: "primary"
n1kv-vsm-name: "vsm-p"
vsm-s:
charm: vsm
options:
n1kv-source: ppa:cisco-n1kv/n1kv-updates


n1kv-vsm-domain-id: 500
n1kv-vsm-password: "Cisco12345"
n1kv-phy-intf-bridge: "eth0"
n1kv-vsm-role: "secondary"
n1kv-vsm-name: "vsm-s"
vem:
charm: vem
options:
host_mgmt_intf: eth2
n1kv-vsm-domain-id: 500
uplink_profile: phys eth3 profile sys-uplink
n1kv-source: ppa:cisco-n1kv/n1kv-updates
n1kv-vsm-ip: 10.0.45.208
relations:
- [ haproxy, ceph-radosgw ]
- [ nova-cloud-controller, mysql ]
- [ nova-cloud-controller, rabbitmq-server ]
- [ nova-cloud-controller, glance ]
- [ nova-cloud-controller, keystone ]
- [ nova-compute, nova-cloud-controller ]
- [ nova-compute, mysql ]
- [ nova-compute, 'rabbitmq-server:amqp' ]
- [ nova-compute, glance ]
- [ nova-compute, ceph ]
- [ glance, mysql ]
- [ glance, keystone ]
- [ glance, ceph ]
- [ glance, cinder ]
- [ glance, rabbitmq-server ]
- [ cinder, mysql ]
- [ cinder, rabbitmq-server ]
- [ cinder, nova-cloud-controller ]
- [ cinder, keystone ]
- [ cinder, ceph ]
- [ neutron-gateway, mysql ]
- [ neutron-gateway, rabbitmq-server ]
- [ neutron-gateway, nova-cloud-controller ]
- [ openstack-dashboard, keystone ]
- [ ceph, ceph-radosgw ]
- [ ceph-radosgw, keystone ]
- [ mysql, mysql-hacluster ]
- [ keystone, keystone-hacluster ]
- [ nova-cloud-controller, ncc-hacluster ]
- [ glance, glance-hacluster ]
- [ cinder, cinder-hacluster ]
- [ openstack-dashboard, dashboard-hacluster ]
- [ keystone, mysql ]
trusty-icehouse-ha-lxc:
inherits: openstack-common
series: trusty
services:
neutron-gateway:
num_units: 3
constraints: "tags=control"
nova-compute:
num_units: 3
constraints: "tags=compute"
vsm-p:
num_units: 1
constraints: "tags=vsmp"
vsm-s:
num_units: 1
constraints: "tags=vsms"
vem:


num_units: 1
nova-cloud-controller:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
rabbitmq-server:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
mysql:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
openstack-dashboard:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
keystone:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
cinder:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
glance:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
ceph-radosgw:
num_units: 3
to:
- lxc:neutron-gateway=0
- lxc:neutron-gateway=1
- lxc:neutron-gateway=2
haproxy:
num_units: 1
to:
- lxc:neutron-gateway=0
ceph:
num_units: 3
to:
- nova-compute=0
- nova-compute=1
- nova-compute=2


Deploying OpenStack Charms


Perform the following procedure to deploy OpenStack Charms.
Step 1

Download the OpenStack charms edited for Cisco Nexus 1000v. (Source: N1kv).
Enter the following commands:
sudo add-apt-repository -y ppa:cisco-n1kv/icehouse-updates
sudo apt-get update
sudo apt-get install jujucharm-n1k
tar zxf /opt/cisco/n1kv/charms/jujucharm-n1k-precise_5.2.1.sk3.1.1.YYYYMMDDhhmm.tar.gz

Once the file is untarred, copy the trusty directory from /jujucharm-n1k/charms/trusty directory into
the home directory.
Step 2

Download the rest of the charms from the charm store found at https://manage.jujucharms.com/charms.
(Source: Charm Store)
Place the downloaded charms into the trusty folder in the home directory.
For NetApp deployment the charm needs to be customized. See the Block Storage with NetApp section
below for more information. The charm will be in the trusty directory.
a.

Do the following in '/cinder/templates/icehouse/cinder.conf':


1. Add the following lines at the bottom of the file:
[cdot-nfs]
volume_backend_name=cdot-nfs
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=10.0.40.203
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=admin
netapp_password=c1sc0Vmdc
netapp_vserver=svm_aci_copper_shared
nfs_shares_config=/etc/cinder/nfs.shares

2. Add the following under '[DEFAULT]':


enabled_backends=cdot-nfs,ceph

3.

Change this:
{% if volume_driver -%}
volume_driver = {{ volume_driver }}
{% endif -%}
{% if rbd_pool -%}
rbd_pool = {{ rbd_pool }}
host = {{ host }}
rbd_user = {{ rbd_user }}
{% endif -%}

to this:
{% if rbd_pool -%}
[ceph]
{% if volume_driver -%}
volume_driver = {{ volume_driver }}
{% endif -%}
volume_backend_name=ceph
rbd_pool = {{ rbd_pool }}
host = {{ host }}


rbd_user = {{ rbd_user }}
{% endif -%}

b.

Ensure the path and file '/cinder/exec.d/nfs/charm-pre-install' exist, with 'charm-pre-install'
containing the following data. This will be executed when the charm is run:
#!/bin/bash
mkdir -p /etc/cinder
cat > /etc/cinder/nfs.shares <<EOF
10.0.40.203:/svm_aci_copper_shared_tenant1
EOF

Step 3

Change the current working directory to the home directory where the yaml file and the charms for
Juju-Deployer are contained.

Note

The script requires manual intervention to complete successfully. Read the following instructions before
continuing.

a.

Run the following command to begin deploying OpenStack:

sudo juju-deployer -c ./OpenStack_HA_N1kv.yaml -d -s 30 -w 300 -r 3 -t 9999
trusty-icehouse-ha-lxc

Table 5-8        Juju-Deployer Arguments

Argument           Behavior
-c CONFIGS         File containing deployment(s) json/yaml config. This option can be repeated,
                   with later files overriding values in earlier ones.
-d                 Enable debugging to stdout.
-s DEPLOY_DELAY    Time in seconds to sleep between 'deploy' commands, to allow the machine
                   provider to process requests. On terminate machines this also signals waiting
                   for machine removal.
-w REL_WAIT        Number of seconds to wait before checking for relation errors after all relations
                   have been added and subordinates started (default: 60).
-r RETRY_COUNT     Resolve unit errors via retry, either standalone or in a deployment.
-t TIMEOUT         Timeout (sec) for the entire deployment (45min default).

Step 4

The script must be manually stopped with Ctrl-c after the services requiring bare metal servers to be
commissioned by MaaS are deployed. It is necessary to wait for the nodes to come up and start
successfully; then commands must be run on the machines before the rest of the services can be
deployed. Interfaces on these nodes must be set to correctly interact with the LXC containers and the
Nexus 1000v modules. To ensure the correct functionality of LXC containers, eth0 must be set up as the
management interface. Additionally, some packages must be installed for the services to work.

Note

This section is a recommendation to eliminate possible issues. In our testing, issues were encountered
with interfaces on some machines during deployment that this workaround consistently avoided.

With the previous yaml file, the output should look like the following when breaking the script. The
vsm-s service in this case is the last service to be deployed directly to a bare metal server; any service
after it will be co-located on bare metal with a previously deployed service or will be deployed in an
LXC container:


2014-08-14 13:41:29 [INFO] deployer.import: Deploying service neutron-gateway using local:trusty/quantum-gateway
2014-08-14 13:41:30 [DEBUG] deployer.import: Waiting for deploy delay
2014-08-14 13:41:40 [INFO] deployer.import: Deploying service nova-compute using local:trusty/nova-compute
2014-08-14 13:41:41 [DEBUG] deployer.import: Waiting for deploy delay
2014-08-14 13:41:40 [INFO] deployer.import: Deploying service vsm-p using local:trusty/vsm
2014-08-14 13:41:41 [DEBUG] deployer.import: Waiting for deploy delay
2014-08-14 13:41:40 [INFO] deployer.import: Deploying service vsm-s using local:trusty/vsm
2014-08-14 13:41:41 [DEBUG] deployer.import: Waiting for deploy delay

With the script stopped, wait for the machines to come up and get into the 'started' state. The status of
the commissioned machines can be viewed with the command 'juju status.'
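One convenient way to watch for this is to poll juju status and count the started agents; the one-liner below is
only a sketch (the 30-second interval is arbitrary), and the count should match the number of machines expected
for this phase of the deployment.

watch -n 30 "juju status | grep -c 'agent-state: started'"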
Step 5

Once all machines are in the started state, the following two scripts are used to set the necessary interfaces
and install packages on the compute and control nodes for this setup.
From here, 'control node' references the node with quantum-gateway on bare metal and LXC
containers holding the rest of the services, while 'compute node' references the node with
nova-compute and Ceph co-located on bare metal. The scripts may need to be edited to match an alternate
setup. Ultimately, the control nodes should have br0 as a bridge for the management interface (in our
case, eth2); this allows LXC containers to communicate over the management network successfully
through their host machines. Also on the control nodes, eth0 and eth1 will be up as data interfaces (they
do not need IP addresses or other information). The compute nodes need to have eth0 and eth1 up as data
interfaces; eth2 will already be acting as the management interface.
The scripts can be run in two ways:
juju ssh <Juju Machine #> 'bash -s' < <Script Path>.sh

or copy the script to the necessary machine and run it with:


bash <Script Path>.sh

set_compute.sh
sudo su
echo -e "\nauto eth0\niface eth0 inet manual\n\nauto eth1\niface eth1 inet manual" >>
/etc/network/interfaces
ifup eth0
ifup eth1
exit

In this script, eth0 and eth1 are the data interfaces for OpenStack instances to communicate with each
other and eth2 is the management interface. They will be port channeled via the Nexus 1000v. They do
not communicate over the management network that OpenStack services use to operate; data and
management traffic are separate. If run with the juju ssh <Juju Machine #> 'bash -s' < <Script Path>.sh
command, press Ctrl-c to exit and return control to the MaaS node.
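If the same script needs to be applied to several machines, the juju ssh form above lends itself to a small loop.
This is a convenience sketch only; the machine numbers are placeholders and must be replaced with the Juju
machine numbers of the actual compute nodes reported by juju status.

# Replace the placeholders with the compute node machine numbers from 'juju status'
for m in <compute-machine-1> <compute-machine-2> <compute-machine-3>; do
    juju ssh "$m" 'bash -s' < set_compute.sh
done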
set_control.sh
sudo su
apt-get -y update > /dev/null
apt-get -y install lxc bridge-utils debootstrap ntp > /dev/null
sed -i "8,9d" /etc/network/interfaces
echo -e "auto eth2\niface eth2 inet manual\n\nauto br0\niface br0 inet
dhcp\nbridge_ports eth2\nbridge_stp off\nbridge_fd 0\nbridge_maxwait 0\n\nauto
eth0\niface eth0 inet manual\n\nauto eth1\niface eth1 inet manual" >>
/etc/network/interfaces
brctl addbr br0
ufw disable


reboot

This script sets up interfaces and a bridge for LXC to run correctly. If the machine already has a br0,
then it is unnecessary to add the br0 lines into the interfaces file. In some setups, if eth0 is not the default
management interface (in this case eth2 is the management interface), then br0 will not be automatically
created, but it is still needed. The line sed -i "8,9d" /etc/network/interfaces removes the previous
management interface information and replaces it with a manual interface which will be bridged under
br0. It is important to make sure these lines are changed for a specific setup. In this script, eth0 and eth1
are also the data interfaces for OpenStack and eth2 is the management interface. The reboot of the
machine will bring up all the changes made. The script should make the interfaces file look something
like the following:
ubuntu@neutron-node:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
auto eth2
iface eth2 inet manual
auto br0
iface br0 inet dhcp
bridge_ports eth2
bridge_stp off
bridge_fd 0
bridge_maxwait 0
auto eth0
iface eth0 inet manual
auto eth1
iface eth1 inet manual

If run with the juju ssh <Juju Machine #> 'bash -s' < set_control.sh command, the script may need to be run twice on the
target machine to successfully reboot and apply all the settings. If the script hangs, press Ctrl-c to go
back to the MaaS node and rerun the command; this should reboot the machine and return control to the MaaS
node.
Step 6

Once all of the interfaces are successfully set and the machines are all in the started state according to
juju status, run the following command on every node to ensure NTP is installed and running:
sudo apt-get install -y ntp

Step 7

The Juju-deployer script can now be run again until it finishes without any manual intervention:
sudo juju-deployer -c ./OpenStack_HA_N1kv.yaml -d -s 30 -w 300 -r 3 -t 9999
trusty-icehouse-ha-lxc

Use juju status to check that all services and machines are in the started state. If the following
commands return nothing, then the services should have come up correctly; if not see the
Troubleshooting section below:
juju status | grep hook
juju status | grep warning
juju status | grep down

It is only possible to ssh into each machine using Juju ssh commands; this can be done in the following
ways:
juju ssh <machine #>                      ex: juju ssh 1
juju ssh <service name>/<service #>       ex: juju ssh nova-compute/0


Post Juju-Deployer
The OpenStack installation is not finished; there are more steps and workarounds to complete before the
environment is fully functional.
Step 1

The VEM service also needs some more configuration to connect to each machine. The Nexus 1000v needs
to get information about each host's specific data interfaces (uplink_profile), in addition to which
interface will be used for management (host_mgmt_intf). The specific configurations will be placed in
a file called mapping.yaml:
Table 5-9        Machine Hostnames, Node Type and Interfaces for Mapping.yaml File

Hostname            Node Type       Data Interfaces (uplink_profile)    Management Interface (host_mgmt_intf)
vmdc-OpenStack7     Control Node    eth0 / eth1                         br0
vmdc-OpenStack3     Control Node    eth0 / eth1                         br0
vmdc-OpenStack4     Control Node    eth0 / eth1                         br0
vmdc-ceph1          Compute Node    eth0 / eth1                         eth2
vmdc-ceph2          Compute Node    eth0 / eth1                         eth2
vmdc-ceph3          Compute Node    eth0 / eth1                         eth2
vmdc-OpenStack1     VSM Node        N/A                                 br0
vmdc-OpenStack2     VSM Node        N/A                                 br0

mapping.yaml
vmdc-OpenStack7:
  host_mgmt_intf: br0
  uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-OpenStack3:
  host_mgmt_intf: br0
  uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-OpenStack4:
  host_mgmt_intf: br0
  uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-ceph1:
  host_mgmt_intf: eth2
  uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-ceph2:
  host_mgmt_intf: eth2
  uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-ceph3:
  host_mgmt_intf: eth2
  uplink_profile: 'phys eth0 profile sys-uplink,phys eth1 profile sys-uplink'
vmdc-OpenStack1:
  host_mgmt_intf: br0
vmdc-OpenStack2:
  host_mgmt_intf: br0

Step 2

Run the following command to set the configurations:

juju set vem mapping="$(cat mapping.yaml)"
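To confirm the mapping was applied, the charm configuration can be read back from Juju. This is only a
verification sketch; it assumes the juju get command of Juju 1.x and the vem service name used in this
deployment, and the grep context length is arbitrary.

juju get vem | grep -A 12 mapping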


Step 3

Nexus 1000v relations must now be established to create networking connections:


juju add-relation vsm-p vem
juju add-relation vem nova-compute
juju add-relation vem neutron-gateway

Step 4

Log into the Nexus 1000v using ssh admin@<vsm_ip> and run the following commands in
configuration mode to set up the port profiles:
feature lacp
port-profile type ethernet sys-uplink
switchport mode trunk
switchport trunk allowed vlan 500-549
channel-group auto mode active
no shutdown
mtu 9216
state enabled
publish port-profile
port-profile type vethernet default-pp
no shutdown
mtu 9216
state enabled
publish port-profile

This should complete the Nexus 1000v setup and all the nodes should be successfully up and recognized
by the VSM.
Step 5

The RADOS gateway needs to have a key ring copied from any Ceph node to work correctly.
Copy /etc/ceph/ceph.client.admin.keyring from any Ceph machine and ensure
/etc/ceph/ceph.client.admin.keyring exists on each RADOS gateway machine with the same data.
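One way to distribute the key ring without logging into each node individually is juju scp/juju ssh, which address
units by name. The sketch below is an assumption-heavy convenience: the unit names ceph/0 and ceph-radosgw/0
through ceph-radosgw/2 must be replaced with the unit names shown by juju status, and sudo is used because the
key ring on the Ceph nodes may be readable by root only.

# Read the key ring from one Ceph unit, then push it to each RADOS gateway unit
juju ssh ceph/0 'sudo cat /etc/ceph/ceph.client.admin.keyring' > ceph.client.admin.keyring
for unit in ceph-radosgw/0 ceph-radosgw/1 ceph-radosgw/2; do
    juju scp ceph.client.admin.keyring "$unit":/tmp/
    juju ssh "$unit" 'sudo mv /tmp/ceph.client.admin.keyring /etc/ceph/'
done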

Troubleshooting
Juju has built-in commands that can help troubleshoot errors found in the environment. By using a
combination of the following, it may be possible to solve issues without fully redeploying the
environment from scratch.

If a service has a hook error, first run the following command to rerun the hook that failed:
juju resolved -r <service-name>

The -r option tells Juju to rerun the entire hook. If run without -r, the failed hook will be skipped
and there may be unexpected behavior due to this.

Note

It's been observed that juju resolved -r may sometimes return text saying the hook/error has been
resolved while juju status still shows a hook error. This issue has been opened as a bug here: https://bugs.launchpad.net/juju-core/+bug/1393434.

Sometimes a hook will fail between two services (ex: keystone and mysql). It's possible that the
timing of the hooks being installed caused an issue. Removing and redoing the relations between
services is a possible fix. Configurations that are shared between services will be
fully reestablished by Juju if a relation is removed and then added once again. If an error is seen between
services, the following commands can be run to try and reestablish the correct relations:
juju destroy-relation keystone mysql
juju add-relation keystone mysql


These commands are done for the user via Juju-deployer, but they are established in the relations:
section of the yaml file for the deployment.

If a machine gets stuck in the pending stage, it may have to be removed from the environment and
added once again to get it correctly started. The services on said machine will also need to be
destroyed due to the deployment configuration. The following commands show how to complete
this:
juju destroy-machine <machine-number> --force
juju destroy-service <service-on-machine>

Once these commands are run, rerun the Juju-deployer command and ctrl-c once the service that
resides on the destroyed machine is deployed. Once every machine is started, the installation can
continue as normal.

If there are unsolvable issues then the entire environment may have to be redeployed. Juju will have
to be bootstrapped again after this command. The following command will remove the Juju
bootstrap from your environment:
juju destroy-environment maas

MySQL has an issue where an extremely large amount of connections can build up and disrupt
services if left for too long. This issue has been opened as a bug in Launchpad here
(https://bugs.launchpad.net/charms/+source/mysql/+bug/1389837). A workaround for this issue has
been produced by Canonical and will keep the connections at workable levels.
On every Nova-compute node, add the following to the end of /etc/nova/nova.conf then run the

command sudo service nova-compute restart:


[database]
pool_timeout=120
idle_timeout=120
min_pool_size=10
max_pool_size=25
retry_interval=3
max_retries=-1
db_max_retries=3
db_retry_interval=1
connection_debug=10

Log into a MYSQL node and run the following commands:


 
mysql -u root -p # Password is ubuntu in this setup
mysql> SET GLOBAL interactive_timeout=30;
mysql> SET GLOBAL wait_timeout=30;

After running these commands, the MySQL connections should be limited to workable levels.

Install Python OpenStack Clients


Perform the following procedure to install Python OpenStack clients.
Step 1

Run the following commands to install the OpenStack command line clients:
export http_proxy=http://proxy-wsa.esl.cisco.com:80/
export https_proxy=http://proxy-wsa.esl.cisco.com:80/
sudo apt-get install python-keystoneclient python-glanceclient python-novaclient
python-keystoneclient

The client commands can be found here: https://wiki.openstack.org/wiki/OpenStackClients.


Step 2

Create an admin.rc file with OS admin credentials to run OpenStack commands. Additionally, a file can
be created with the specific information for each project/tenant to run commands within a specific
project. Run the following command with the example admin.rc to set these variables in your terminal:
source admin.rc

admin.rc
export OS_NO_CACHE=true
export OS_TENANT_NAME=admin        # Change to match tenant/project name
export OS_USERNAME=admin           # Change to match tenant/project user name
export OS_PASSWORD=OpenStack       # Change to match tenant/project user password
export OS_AUTH_URL=http://<Horizon-VIP>:5000/v2.0/
export OS_AUTH_STRATEGY=keystone
export OS_REGION_NAME=RegionOne
export CINDER_ENDPOINT_TYPE=publicURL
export GLANCE_ENDPOINT_TYPE=publicURL
export KEYSTONE_ENDPOINT_TYPE=publicURL
export NOVA_ENDPOINT_TYPE=publicURL
export QUANTUM_ENDPOINT_TYPE=publicURL
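After sourcing the file, a quick way to confirm that the credentials and endpoints are valid is to run a read-only
command from a couple of the clients installed in Step 1; both commands below are standard Icehouse-era CLI
calls and should list the existing tenants and instances without errors.

source admin.rc
keystone tenant-list
nova list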

OpenStack Configuration
The following sections describe the important implementation details with respect to the OpenStack
environment built in this implementation.

Tenant Configurations
In this implementation each Copper tenant is mapped to an Openstack project. Each tenant is assigned
a single VLAN for all their instances. Each tenant will have their own tenant admin with the access rights
restricted to resources of that tenant.
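The tenant-to-project mapping can be illustrated with the Keystone v2 CLI. The commands below are a hedged
sketch rather than the exact procedure used in this implementation: the copper1 and copper1-admin names, the
password, and the e-mail address are placeholders, and the role assigned (_member_ here) depends on how roles
and access policies are defined in the deployment. They assume admin credentials have already been sourced
(see the admin.rc example earlier).

keystone tenant-create --name copper1 --description "Copper tenant 1"
keystone user-create --name copper1-admin --pass <tenant-admin-password> --email copper1-admin@example.com
keystone user-role-add --user copper1-admin --role _member_ --tenant copper1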

Networking Configuration
In this implementation of OpenStack, VLAN-backed provider networking was used. The Nova compute
nodes as well as the Neutron gateway nodes were connected to the ACI fabric for the data path. However, all
OpenStack instances had their default gateway set to the ASA firewall, and therefore used the ACI fabric
to go straight out to the Internet. The Neutron gateway was only used for L2 DHCP and metadata provider
functionality. Logical connectivity of the OpenStack networking is shown in Figure 5-25.


Figure 5-25

Openstack VLAN-based Provider Network

[Figure: one VLAN/subnet per Copper tenant (for example, Copper Tenant1 10.21.1.0/24 and Copper Tenant2
10.21.2.0/24); all tenants share the same context; all tenant instances have their default gateway on the ASA
cluster, which reaches the Internet through the ACI fabric; the Neutron gateway nodes provide DHCP and
instance metadata.]

Server Networking Configuration


All compute and control nodes had two 10G NICs for data path connectivity and were managed by the
Nexus 1000v. In addition, 1G interfaces for API/management, NFS, and CIMC were allocated as shown
in Figure 5-26.
All servers that hosted OpenStack control nodes implemented two bridges, one for LXCs and one for
bridging LXC traffic to the API/management physical network. All interfaces are configured with an MTU of
9000.
Figure 5-26

OpenStack Control Node NIC Layout

[Figure: control node NIC layout showing Eth0-1 as data interfaces toward the ACI fabric/Internet, Eth2 bridged
into br0 (10.0.45.15) for OpenStack API/management and the LXC bridge serving the Nova Cloud Controller,
Neutron, and Cinder LXCs (container eth0 addresses 10.0.45.25 and 10.0.45.30), Eth3 (10.0.40.101) for NFS, and
CIMC at 10.0.35.11.]

Note

It is required to turn on multicast receiving support on the OpenStack API/management network segment
for HA components such as corosync to work correctly. In this implementation IP PIM sparse mode was
configured on the management network segment's SVI.


Tenant Networking Configuration


The following procedure describes the tenant network configuration workflow, including configuring the
Nexus 1000v and OpenStack Neutron via the Horizon dashboard.
Step 1

Create Nexus 1000v base configuration.


The following Nexus 1000v configuration is automatically generated during the VSM charm deployment.
hostname vsm-p
feature network-segmentation-manager
username admin password 5 $1$TTqxuIj8$2CQ6ORDx0SQrtZRBmHYbb1 role network-admin
svs-domain
domain id 500
control vlan 1
packet vlan 1
svs mode L3 interface mgmt0
switch-guid fe294f35-4465-423e-97b6-04038c8fa417
interface mgmt0
ip address 10.0.45.208/24

The configuration above is generated by the VSM charm based on the following parameters set in the
yaml configuration file described in the OpenStack installation section.
vsm-p:
charm: vsm
options:
n1kv-source: ppa:cisco-n1kv/n1kv-updates
n1kv-vsm-domain-id: 500
n1kv-vsm-password: "Cisco12345"
n1kv-vsm-mgmt-ip: "10.0.45.208"
n1kv-phy-intf-bridge: "eth0"
n1kv-vsm-mgmt-gateway: "10.0.45.253"
n1kv-vsm-mgmt-netmask: "255.255.255.0"
n1kv-vsm-role: "primary"
n1kv-vsm-name: "vsm-p"

Step 2

Create Nexus 1000v uplink port profile.


Create a Nexus 1000v uplink port profile to be used in VEM.
feature lacp
vlan 500-549
port-profile type ethernet sys-uplink
switchport mode trunk
switchport trunk allowed vlan 500-549
mtu 9216
channel-group auto mode active
no shutdown
state enabled
publish port-profile

Note

When changing the MTU on Nexus 1000v uplink port profiles, the physical uplink interfaces flap; this is
expected behavior.

Note

In this implementation, all VEMs use the port channel LACP protocol mode active, and the ACI vPC
port channels are configured accordingly.
The uplink profile name in the configuration should be the same as the one mentioned in the configuration
yaml file for the VEM, similar to below.


vem:
charm: vem
options:
uplink_profile: phys eth3 profile sys-uplink

Step 3

Create Nexus 1000v default port profile and tenant port profiles.
In this implementation, each Copper tenant has its own port-profile. This allows different policies to be
applied on different tenants. A default port profile is supported on Nexus 1000v when a tenant network
is created without specifying a port profile.

Note

In this Nexus 1000v release, policy profile UUID is now optional. If no policy profile UUID is entered
while creating a port, a default policy profile will be used. This behavior is in line with ports created for
dhcp and routers.
port-profile type vethernet copper_template
mtu 9216
no shutdown
state enabled
port-profile type vethernet copper1_data
inherit port-profile copper_template
no shutdown
guid 66916079-5e3f-43a2-bef8-2fece1efad49
description copper tenant 1
state enabled
publish port-profile
port-profile type vethernet default-pp
mtu 9216
no shutdown
guid 1650ddbe-dca8-4948-81e9-f92194de2b7d
state enabled
publish port-profile

Figure 5-27, from the OpenStack dashboard, shows that port profiles created in the Nexus 1000v appear
in the dashboard.

Figure 5-27

Cisco Nexus 1000v Policy Profiles

Step 4

Create a network profile on the OpenStack dashboard.


Create a network profile for each tenant which specifies which VLANs are assigned to each tenant
(Figure 5-28).
Figure 5-28

Creating Openstack Dashboard Network Profile

The following configuration on the Nexus 1000v will be created by the network segmentation manager,
corresponding to the configuration above in the dashboard.
nsm network segment 542ed831-aac2-43f0-9c8f-6c9007d26f5d
description copper1_data
uuid 542ed831-aac2-43f0-9c8f-6c9007d26f5d
member-of network segment pool b382d17b-2a1b-4972-a00e-51f6b6360aa7
switchport mode access
switchport access vlan 501
publish network segment

Step 5

Create a network for tenant.


Create a network under the admin tab and associate the correct network profile for each tenant
(Figure 5-29).


Figure 5-29

Creating Network in Openstack Dashboard

The configuration above in dashboard will create the following NSM configuration in the Nexus 1000v
VSM.
nsm logical network b382d17b-2a1b-4972-a00e-51f6b6360aa7_log_net
description copper1_data
nsm network segment pool b382d17b-2a1b-4972-a00e-51f6b6360aa7
description copper1_data
uuid b382d17b-2a1b-4972-a00e-51f6b6360aa7
member-of logical network b382d17b-2a1b-4972-a00e-51f6b6360aa7_log_net

Step 6

Create a subnet under the network.


Add a subnet for the network created in the step above (Figure 5-30).


Figure 5-30

Adding Subnet to Network in Openstack Dashboard

Adding the subnet in to the network creates the following configuration in the VSM:
nsm ip pool template 8b3d61a1-64fc-424c-8355-7b02c298e29b
description vlan501
ip address 10.21.1.1 10.21.1.253
network 10.21.1.0 255.255.255.0
default-router 10.21.1.254
dhcp
dns-server 64.102.6.247
nsm network segment 542ed831-aac2-43f0-9c8f-6c9007d26f5d
description copper1_data
ip pool import template 8b3d61a1-64fc-424c-8355-7b02c298e29b uuid
8b3d61a1-64fc-424c-8355-7b02c298e29b

Once the steps above are completed, the newly created network is ready to be used by the tenant instances.

Additional Nexus 1000v configurations


NetFlow was used in this implementation to gather and analyze the tenant data flow information. The
following configuration was used on the Nexus 1000v. The NetFlow collector was reachable over the
management network.
feature netflow
flow timeout active 600
flow exporter NFDMP_HOST
destination 172.18.115.50 use-vrf management
source lc-exp 10.0.45.208/24
version 9
flow record IPV4_DATA
match ipv4 source address
match ipv4 destination address
match ip protocol
match ip tos


match transport source-port


match transport destination-port
collect transport tcp flags
collect counter bytes
collect counter packets
collect timestamp sys-uptime first
collect timestamp sys-uptime last
flow monitor FLOW_MON
record IPV4_DATA
exporter NFDMP_HOST
interface Vethernet3
ip flow monitor FLOW_MON input
ip flow monitor FLOW_MON output

Storage Configuration and Implementation


The following sections describe the setup and configuration of the various OpenStack storage services.
In this implementation, Cinder is configured as a multi-backend block storage provider to provide both
Ceph and NFS block storage services to the compute hosts. The Glance image service uses Ceph.
The SWIFT object storage services are provided with the Ceph RADOS gateway, as described in the
following sections.

Block Storage with Ceph


In this implementation, Ceph and Nova compute services were cohosted on the same C240M3 servers.
Three Ceph servers each had 13x 1TB SATA drives making up the Ceph block storage.

Note

As of the time of this verification, both management (Nova, Cinder, and Juju traffic) and Ceph storage
traffic (replication and crush data) are bundled onto a single network called management. A new feature
where these two types of traffic can be separated into two networks is available in the Juno/Utopic release.
Ceph is installed and configured entirely through Juju using the options set in the configuration yaml
file. There must be a minimum of 3 nodes for Ceph to be started; this is an official Ceph requirement
(http://ceph.com/). The following commands and outputs show the status of the Ceph cluster if it
was created successfully:
sudo ceph -s
sudo ceph df
ubuntu@vmdc-ceph2:~$ sudo ceph -s
cluster 6547bd3e-1397-11e2-82e5-53567c8d32dc
health HEALTH_OK
monmap e1: 3 mons at
{vmdc-ceph1=10.0.45.20:6789/0,vmdc-ceph2=10.0.45.21:6789/0,vmdc-ceph3=10.0.45.22:6789/
0}, election epoch 6, quorum 0,1,2 vmdc-ceph1,vmdc-ceph2,vmdc-ceph3
osdmap e132: 39 osds: 39 up, 39 in
pgmap v1424: 2832 pgs, 10 pools, 770 MB data, 158 objects
4015 MB used, 36224 GB / 36228 GB avail
2832 active+clean
ubuntu@vmdc-ceph2:~$ sudo ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED
    36228G    36224G    4015M       0.01
POOLS:
    NAME            ID    USED      %USED    OBJECTS
    data            0     0         0        0
    metadata        1     0         0        0
    rbd             2     0         0        0
    glance          3     706M      0        94
    .rgw.root       4     840       0        3
    .rgw.control    5     0         0        8
    .rgw            6     0         0        0
    .rgw.gc         7     0         0        32
    cinder          8     65536k    0        21
    .users.uid      9     0         0        0

Create a volume backend for Ceph:


cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph

Ceph can now be selected through Horizon. When creating a volume, select Ceph as the volume type to
use Ceph storage (Figure 5-31).
Figure 5-31

How to Create a Ceph Volume
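The same operation is available from the Cinder CLI once admin.rc (or a tenant-specific rc file) has been
sourced; the volume name and the 10 GB size below are placeholders.

cinder create --volume-type ceph --display-name test-ceph-vol 10
cinder list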

Block Storage with NetApp


Each compute and control node hosting Cinder needs access to the NetApp-based NFS storage. The
management connection to NFS storage virtual machine (SVM) is required from the Cinder nodes for
control connectivity. Both Cinder and Nova compute nodes require data path connections to the SVM to
mount the NFS shares. The logical connectivity model for NetApp NFS with OpenStack is shown below.


Figure 5-32

NetApp Storage Connectivity

[Figure: the Cinder LXCs on the control nodes and the Nova compute nodes reach the Copper shared NFS SVM
on the NetApp cluster over the ACI fabric 10.0.40.0/24 (data path), while control traffic uses the management
network 172.18.116.0/24 (control path).]

Perform the following procedure to set up NetApp NFS as a multi-backend Cinder.


Step 1

On the Cinder hosts, change the AppArmor rules to allow LXCs to mount NFS.
By default AppArmor doesn't allow LXC containers to mount NFS shares, so it's necessary to explicitly
allow it by adding the following lines to /etc/apparmor.d/abstractions/lxc/container-base.
mount fstype=nfs,
mount fstype=nfs4,
mount fstype=rpc_pipefs,

Step 2

Configure iptables rules on control nodes.


On the physical server hosting the Cinder LXC, iptables NAT needs to be configured to forward traffic from
the LXC to the NFS network segment. The eth3 interface in the command below is the interface connecting
the physical server to the NFS mount via the ACI fabric. Please see the server NIC layout diagram shown
earlier in the section.
iptables -t nat -A POSTROUTING -o eth3 -j MASQUERADE
root@vmdc-openstack4:~# ifconfig br0 | grep inet
inet addr:10.0.45.15 Bcast:10.0.45.255 Mask:255.255.255.0
inet6 addr: fe80::8af0:31ff:feb3:6f6a/64 Scope:Link

Step 3

Add static route on LXC for NFS data network.


Adding a static route in LXC to NFS subnet pointing to the host interface towards the NFS subnet:
route add -net 10.0.40.0/24 gw 10.0.45.15

Step 4

Install nfs-common on all Cinder LXCs and compute nodes.

Step 5

Add host entries to /etc/hosts


10.0.45.108 juju-machine-123-lxc-1.icdc.sdu.cisco.com juju-machine-123-lxc-1

Step 6

Create Cinder configuration on Cinder LXCs.


For each NFS share /etc/cinder/cinder.conf has to be edited. A new file is required for each NFS share.
As an example following configuration is required to be added to the cinder.conf file.
[cdot-nfs1]
volume_backend_name=cdot-nfs
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=172.18.116.33
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=admin
netapp_password=c1sc0
netapp_vserver=svm_aci_copper_shared
nfs_shares_config=/etc/cinder/nfs.share1


The configuration above refers to the NetApp cluster management IP address. It also requires the login
to the SVM. The file /etc/cinder/nfs.share1 contains the actual NFS mount point information
as shown below; it holds the IP address of the SVM and the junction path for the NFS volume.
root@juju-machine-122-LXC -1:/tmp# cat /etc/cinder/nfs.share1
10.0.40.203:/svm_aci_copper_shared_tenant1

Once the /etc/cinder/cinder.conf and /etc/cinder/nfs.share1 files are configured Cinder volume services
should be restarted as shown below:
service cinder-volume restart

Once this is done the NFS mounts appear under the /var/lib/cinder/mnt/ as shown below:
root@juju-machine-121-LXC -1:~# mount
10.0.40.203:/svm_aci_copper_shared_tenant1 on
/var/lib/cinder/mnt/446d1154b7aded6d478f79de396fd513 type nfs (rw,addr=10.0.40.203)
10.0.40.203:/svm_aci_copper_shared_tenant2 on
/var/lib/cinder/mnt/fa9d93b729574cd40cb02b1b42739f70 type nfs (rw,addr=10.0.40.203)
root@juju-machine-121-LXC -1:~# ls -ltr
/var/lib/cinder/mnt/446d1154b7aded6d478f79de396fd513/
total 17397624
drwxr-xr-x 2 root root
4096 Oct 8 00:17 nfs_test
-rw-rw-rw- 1 root root
10737418240 Oct 8 00:36
img-cache-8d094419-43fa-4d5d-b5bf-1fbd680e4430
-rw-rw-rw- 1 root root
10737418240 Oct 8 00:41
volume-eda4e2c3-9d39-463f-84fb-9c6537f55a33
-rw-rw-rw- 1 root root
10737418240 Oct 16 10:43
img-cache-c62498e8-c14c-4554-9436-821822831300
-rw-rw-rw- 1 root root
21474836480 Oct 16 11:52
volume-ddea005c-10b7-40a6-bf52-4d70dec58821
-rw-rw-rw- 1 root root
10737418240 Oct 16 11:55
volume-7e739c91-aa8e-4457-8f19-8fda477a4826
-rw-rw-rw- 1 root root
10737418240 Oct 21 19:53
volume-b054002b-45a6-4efc-be61-dd3054ee44e9

Nova compute nodes mount these NFS shares as needed. When an instance has a volume created on
NFS, Nova compute mounts the NFS share while launching the instance.
root@vmdc-ceph1:~# ls -l /var/lib/nova/mnt/
total 8
drwxr-xr-x 3 root root 4096 Oct 23 05:14 446d1154b7aded6d478f79de396fd513
drwxr-xr-x 2 nova nova 4096 Oct 22 21:31 fa9d93b729574cd40cb02b1b42739f70
root@vmdc-ceph1:~# ls -l /var/lib/nova/mnt/446d1154b7aded6d478f79de396fd513/
total 17397624
-rw-rw-rw- 1 root
root 10737418240 Oct 8 00:36
img-cache-8d094419-43fa-4d5d-b5bf-1fbd680e4430
-rw-rw-rw- 1 root
root 10737418240 Oct 16 10:43
img-cache-c62498e8-c14c-4554-9436-821822831300
drwxr-xr-x 2 root
root
4096 Oct 8 00:17 nfs_test
-rw-rw-rw- 1 libvirt-qemu kvm 10737418240 Nov 6 19:22
volume-0bf63ad4-178f-4053-b358-0fbe6fa0f3ec
-rw-rw-rw- 1 root
root 1073741824 Oct 23 05:14
volume-0d7f5c76-05ae-46ea-8e66-e5867d0be7de
-rw-rw-rw- 1 root
root 10737418240 Oct 23 05:14
volume-27e2bd5a-6c62-4f33-aced-4275dfe3ebba
-rw-rw-rw- 1 root
root 10737418240 Oct 22 21:00
volume-3512a0c9-134d-4957-a385-ed1df013f4d3

To create multiple NFS backends, repeat this process as needed.
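For example, a second NetApp backend could be declared by repeating the stanza under a new name with its own
shares file. The snippet below is an illustrative sketch only: the [cdot-nfs2] name, the /etc/cinder/nfs.share2 file,
and the tenant2 export simply follow the pattern of the configuration shown above, and the new backend name
must also be added to enabled_backends under [DEFAULT] before cinder-volume is restarted.

[cdot-nfs2]
volume_backend_name=cdot-nfs2
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=172.18.116.33
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=admin
netapp_password=c1sc0
netapp_vserver=svm_aci_copper_shared
nfs_shares_config=/etc/cinder/nfs.share2

# /etc/cinder/nfs.share2
10.0.40.203:/svm_aci_copper_shared_tenant2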


Step 7

Create volume types for NFS share with Cinder Python client.
cinder type-create nfs
cinder type-key nfs set volume_backend_name=cdot-nfs


NetApp NFS shares can now be selected through Horizon. When creating a volume, select NFS as the
volume type to use NetApp storage.
Figure 5-33

Creating a NetApp Volume

Live Migration with NFS Backend


Any type of instance migration usually requires shared storage. By default, when Ceph is not used, the
/var/lib/nova/instances files for KVM instances are created locally and not on shared storage such as
NFS. It is required to create a new NFS share just for this purpose. Newly created VMs will then have
their instance files created on the new shared storage folder.
The following steps are required for NFS live migration and must be done on all compute nodes.
Step 1

Create new shared NFS mount on compute nodes ex. /mnt/nova_instances_shared.

Step 2

Mount the NFS share:


mount -t nfs 10.0.40.203:/nova_instances_shared /mnt/nova_instances_shared

Step 3

Edit /etc/nova/nova.conf and add instances_path=/mnt/nova_instances_shared under [DEFAULT].

Step 4

Restart nova-compute with service nova-compute restart. From now on, all instances will have their
ephemeral disks pointing to this shared location.
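To keep the share from Step 2 mounted across compute node reboots, it can also be added to /etc/fstab. This is
an optional convenience and not part of the verified procedure; the mount options below are generic defaults.

echo '10.0.40.203:/nova_instances_shared /mnt/nova_instances_shared nfs defaults 0 0' | sudo tee -a /etc/fstab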


Image Storage
In this implementation, Glance image storage uses Ceph as the backend. This configuration is built
during creation of the relations between the charms. The following configuration shows the relation
between the Ceph and Glance.
vmdc-admin@vmdc-maas1:~$ juju status ceph
services:
ceph:
charm: local:trusty/ceph-105
exposed: false
relations:
client:
- cinder
- glance
- nova-compute
mon:
- ceph
radosgw:
- ceph-radosgw

Figure 5-34 shows the Glance pool in Ceph being used.


Figure 5-34

Ceph Backend for Glance Pool

Object Storage
In this implementation, the Ceph RADOS gateway provides the object storage services. The RADOS gateway
implements the SWIFT API and allows object manipulation with SWIFT-compatible clients.

Note

As of this writing the RADOS gateway charm doesn't have hacluster support, therefore no Juju charm based
automatic HA is available. In this implementation a single haproxy subordinate charm has been placed
manually in front of the three ceph-radosgw LXC nodes. Launchpad bug id 1328927 is used to track this
enhancement request.
Perform the following procedure to configure the RADOS gateway once the charm installation is complete.

Note

During this implementation, object storage access and creation through the Horizon dashboard encountered
errors, and all configuration was done through the Python CLI. This is tracked by an existing launchpad
bug, 1271570.

Step 1

Copy Ceph keys onto the RADOS gateway nodes.
Currently Ceph client.admin key rings are not automatically copied onto the RADOS gateway nodes during
charm deployment. Copy the ceph.client.admin.keyring file from /etc/ceph on the Ceph nodes onto all three
RADOS gateway nodes.


Step 2

Create RADOS users from one of the RADOS nodes.


radosgw-admin user create --uid="copper1" --display-name="copper1"
--email=copper1@sdu.com

Step 3

Allow swift access to the user.


Allowing access to the object store with SWIFT compatible clients requires an additional step, the
creation of a sub-user and a SWIFT access key. The secret key generated should be used for all the object
storage access.
radosgw-admin subuser create --uid=copper1 --subuser=copper1:swift --key-type=swift
--access=full
radosgw-admin key create --subuser=copper1:swift --key-type=swift --gen-secret

Step 4

Use SWIFT Python API or curl to manipulate objects.


For example, upload the object from an instance to object store using Python SWIFT client.
swift -V 1.0 -A http://192.168.100.100/auth -U copper1:swift -K
wn9R6AQl5q1JQP6RGbaPjjkIXJ0sxMCjXxhifsZY upload copper1_objects
trusty-server-cloudimg-amd64-disk1.img

List the files in an object container:


root@test:~/temp# swift -V 1.0 -A http://192.168.100.100/auth -U copper1:swift -K
wn9R6AQl5q1JQP6RGbaPjjkIXJ0sxMCjXxhifsZY list copper1_objects
trusty-server-cloudimg-amd64-disk1.img
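The same operations can also be driven with curl against the RADOS gateway SWIFT-compatible API. The
sketch below uses the auth endpoint and credentials from the steps above; the token and storage URL
placeholders are returned by the auth call and are not literal values.

# Obtain a token; the response carries X-Auth-Token and X-Storage-Url headers
curl -i -H "X-Auth-User: copper1:swift" \
     -H "X-Auth-Key: wn9R6AQl5q1JQP6RGbaPjjkIXJ0sxMCjXxhifsZY" \
     http://192.168.100.100/auth

# Upload an object and list the container using the returned values (placeholders below)
curl -i -X PUT -H "X-Auth-Token: <token>" -T trusty-server-cloudimg-amd64-disk1.img \
     "<storage-url>/copper1_objects/trusty-server-cloudimg-amd64-disk1.img"
curl -i -H "X-Auth-Token: <token>" "<storage-url>/copper1_objects"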

The following shows the content of RADOS gateway containers stored in Ceph:
root@vmdc-ceph2:~# rados --pool=.rgw ls
.bucket.meta.trusty-server-cloudimg-amd64-disk1.img:default.4358.2
.bucket.meta.copper1_container:default.4358.1
copper1_container
.bucket.meta.copper1:default.4355.1
.bucket.meta.copper1_objects:default.4355.2
copper1_objects

Instance Migration
Two types of instance migration in OpenStack are described: cold and live.

Cold Migration
Cold migration in OpenStack allows a user to move instances within a host aggregate. Before the
migration begins, the instance is shut down; it is then started on the new host once the process is
completed. Access to the instance is lost while the migration is occurring. Once migration is finished,
OpenStack prompts the user to verify the move. Canceling moves the instance back to its original host,
while accepting completes the migration.
Cold migration works via the command line or through Horizon.

Command Line
Run the following command to cold migrate an instance within a host aggregate. The ID of the
instance must be retrieved using the nova list command:
nova migrate <instance-id>

Once the migration is complete, you will need to confirm the migration with another command. Use
nova migration-list to check if the migration has a status of finished. Once it does, run the
following command:
nova resize-confirm <instance-id>

This completes the cold migration to the new host.
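Put together, a cold migration from the CLI looks like the following sequence; the instance ID is a
placeholder taken from nova list.

nova list                                  # note the ID of the instance to move
nova migrate <instance-id>
nova migration-list                        # wait for the migration status to reach finished
nova resize-confirm <instance-id>          # accept the move (nova resize-revert cancels it)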

Horizon
Cold migration can also be done through Horizon by navigating to the Instances tab. In the pull-down
menu, select Migrate Instance and confirm. When the instance is finished migrating, another button
comes up asking the user to confirm the resize/migration. Once this is confirmed, the migration
is complete.

Live Migration
Live migration in OpenStack creates minimal downtime for instances as they are moved between hosts.
The instance is only down for a few seconds as it is transferred over the network and started on the
specified host. If a host will be going down for maintenance or some other reason, live migration is the
best choice to move instances and reduce any downtime.
Live migration works via the command line using the following command (once again, use nova list to
retrieve the ID of the instance to migrate):
nova live-migration <instance-id> <destination-host-name>

This command moves the specified instance to the destination host with minimal downtime.

Note

For NFS-backed instances, shared storage must be configured as described in the Block Storage with
NetApp section above.

Note

There are issues using live migration in Horizon. Using the CLI gives consistent working behavior; a
Launchpad bug has been opened here:
https://bugs.launchpad.net/charms/+source/openstack-dashboard/+bug/1393445.
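After a live migration completes, the hosting compute node can be confirmed from the CLI; the instance ID
below is a placeholder.

nova show <instance-id> | grep -E "OS-EXT-SRV-ATTR:host|status"   # verify the new host and ACTIVE status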

Host Failure Scenarios


The following host failure scenarios are detailed.

Compute Nodes, page 5-56

Control Nodes, page 5-57

Compute Nodes
Taking a compute node down, even with High Availability (HA), needs some manual intervention to
ensure no information is lost. The following link contains the Canonical description of Nova-compute
HA.

https://wiki.ubuntu.com/ServerTeam/OpenStackHA#Compute_.28Nova.29
Fully automated HA and instance migration is not possible with Nova Compute services out of the
box. Live migration is the recommended functionality to maintain Nova Compute hosts and keep
instances running with minimal downtime. A process to ensure instances are always available can be
scripted via OpenStack CLI commands. As of this implementation, if a compute host goes down without
notice, the following manual intervention is necessary to get instances back up:

Restart compute host. Instances will come up in shutdown state, and will need a hard reboot to start
up again.

The Ceph/NFS volume can be detached from the shutdown instance and reattached to a new instance
on a running host. The administrator needs to change any static information on the volume, such as
IP addresses in /etc/network/interfaces, to match the new instance information allocated by OpenStack.
(The new IP address is visible via the Instances tab.)

Use 'nova evacuate <instance-id> <host-id>' to move instances from a dead host to a live host via the CLI.
The commands nova list and nova host-list can be used to get both of those parameters (see the sketch
after this list).

If the Nova-compute host shutdown is planned, Canonical recommends using live migration to move
instances to hosts that will stay running while the one host is shut down.
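A minimal sketch of the evacuation workflow follows; the instance ID and host names are placeholders,
and --on-shared-storage applies when the instance disks live on the shared NFS/Ceph storage described
earlier.

nova list                                    # note the IDs of instances that were on the failed host
nova host-list                               # pick a running compute host as the target
nova evacuate <instance-id> <target-host> --on-shared-storage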

Control Nodes
A control node failing should not affect the stability of the OpenStack environment. There should be no
loss of information or access to OpenStack services due to a node being removed or failed. The following
commands can be used to get the status of a cluster:
sudo crm_mon -1
sudo corosync-quorumtool -s
sudo corosync-cfgtool -s

The following outputs of the commands above show an example of a healthy keystone cluster:
root@juju-machine-3-lxc-3:~# crm_mon -1
Last updated: Fri Nov 14 18:04:20 2014
Last change: Thu Nov 13 17:10:44 2014 via crmd on juju-machine-1-lxc-4
Stack: corosync
Current DC: juju-machine-2-lxc-3 (167783727) - partition with quorum
Version: 1.1.10-42f2063
3 Nodes configured
4 Resources configured
Online: [ juju-machine-1-lxc-4 juju-machine-2-lxc-3 juju-machine-3-lxc-3 ]
 Resource Group: grp_ks_vips
     res_ks_eth0_vip    (ocf::heartbeat:IPaddr2):    Started juju-machine-1-lxc-4
 Clone Set: cl_ks_haproxy [res_ks_haproxy]
     Started: [ juju-machine-1-lxc-4 juju-machine-2-lxc-3 juju-machine-3-lxc-3 ]

root@juju-machine-3-lxc-3:~# corosync-quorumtool -s
Quorum information
------------------
Date:             Fri Nov 14 18:04:21 2014
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          167783718
Ring ID:          16
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
 167783708          1 10-0-45-28.icdc.sdu.cisco.com
 167783718          1 10-0-45-38.icdc.sdu.cisco.com (local)
 167783727          1 10-0-45-47.icdc.sdu.cisco.com

root@juju-machine-3-lxc-3:~# corosync-cfgtool -s
Printing ring status.
Local node ID 167783718
RING ID 0
        id      = 10.0.45.38
        status  = ring 0 active with no faults

If one node fails, the behavior of the environment is as expected: there is no loss, and the command
juju status shows no errors.
However, during testing we did sometimes observe some adverse behavior; using updated charms
produced better results. Upon returning the machine to the started state, errors can be seen with
corosync/pacemaker, which are the two services installed via the hacluster charm. There are
issues returning the node to the corosync cluster, in addition to triggering hook errors shown by juju
status that cannot be resolved. OpenStack services are still available after the restore, but multiple
consecutive failures would increase the chance of an irreversible failure or reduction of service
availability. A workaround to this issue was found and is documented in the following Launchpad bug
(https://bugs.launchpad.net/charms/+source/hacluster/+bug/1392438).
The following procedure should correctly cluster the services again if they failed to automatically do so
after a restore:
Step 1

Stop each corosync and pacemaker service on the nodes that contain the service whose cluster failed to
rejoin correctly. For example, if keystone failed to cluster correctly the following commands would be
run on each machine running keystone:
sudo service corosync stop
sudo service pacemaker stop

Step 2

Once the services are stopped on each machine, start the corosync service on each node using the
following command:
sudo service corosync start

Step 3

Once every corosync service is started, start pacemaker on each node to complete the workaround.
sudo service pacemaker start

Step 4

Run the following commands to ensure the cluster has been formed correctly once again:
sudo crm_mon -1
sudo corosync-quorumtool -s
sudo corosync-cfgtool -s

Step 5

Run the following command to try to fix the hook errors seen in juju status; as of this writing, the hook
errors may be incorrectly shown. This issue has been opened in Launchpad
(https://bugs.launchpad.net/juju-core/+bug/1393434). The most important thing is ensuring the cluster
is created successfully; the hook error issue appears to be only cosmetic.
juju resolved -r <service-with-hook-error>
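If many units are involved, the same workaround can be pushed from the Juju client with juju run instead
of logging in to each LXC unit; this is a sketch only, and the keystone service name is just an example of
a clustered service.

juju run --service keystone 'sudo service corosync stop'
juju run --service keystone 'sudo service pacemaker stop'
juju run --service keystone 'sudo service corosync start'
juju run --service keystone 'sudo service pacemaker start'
juju run --unit keystone/0 'sudo crm_mon -1'      # spot-check the cluster state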

CHAPTER 6

WAN Edge Implementation with ASR9K


In this implementation, the Aggregation Services Router (ASR) 9000 sits on the border between the Data
Center and the WAN network based on the Multiprotocol Label Switching (MPLS) core. The Data
Center WAN Edge ASR 9000 provides L3VPN provider edge service for tenants accessing cloud
services over MPLS-VPNs, and also connects to the Internet for services accessed over the Internet. The
ASR 9000 is implemented as a network virtualization (nV) Edge cluster: two physical ASR 9000 chassis
are combined to form a single logical switching or routing entity. This reduces the complexity of the
design and implementation and provides high availability.
This chapter provides details on the ASR 9000 Data Center Provider edge implementation using an nV
Edge cluster for the Cisco Intercloud ACI 1.0 solution.

Network Virtualization Edge on the ASR 9000


A Cisco ASR 9000 Network Virtualization (nV) Edge cluster consists of two Cisco ASR 9000 router
chassis that are combined to form a single logical routing entity.

Benefits
The following benefits are provided by Network Virtualization Edge on the ASR 9000.

Simplified management (operate two Cisco ASR 9000 router platforms as a single virtual Cisco
ASR 9000 Series system).

Double the bandwidth capacity.

Elimination of the need for complex protocol-based High Availability (HA) schemes.

Devices attaching can dual home to both racks. For example, a device can have a bundled Ethernet
connection to the ASR 9000, and the members of the bundle on the ASR 9000 can be in two racks; only
one routing peer and no Equal Cost Multi Path (ECMP) is needed.

No need for Multi-Chassis Link Aggregation (MC-LAG) or other complexities for L2 environments.

Requirements
The following hardware and software requirements are detailed.

Hardware: Cisco ASR 9000 Series SPA Interface Processor-700 and Cisco ASR 9000 Enhanced
Ethernet line cards are supported. Cisco ASR 9000 Enhanced Ethernet line card 10 GbE links are
used as inter-rack links. The individual racks must be of the same type (both 10-slot or both 6-slot,
and so on); mixed chassis types are not supported.

Software: Cisco IOS-XR Software Release 4.2.x or later.

Restrictions
The following restrictions are emphasized for the Cisco ASR 9001 Series nV Edge System.
Refer to Restrictions of the Cisco ASR 9001 Series nV Edge System for more information.
Figure 6-1    Control Plane EOBC Extension

Control-plane Extension
The Route-Switch Processors (RSPs) communicate using a Layer 1 (L1) Ethernet Out-of-Band Channel
(EOBC) extension to create a single, virtual control plane. The control-plane packets are forwarded from
chassis to chassis over these EOBC links. Each chassis has two RSPs, and each RSP has two EOBC ports,
giving four connections between the chassis for high redundancy. If any of the links goes down, there
are three possible backup links. Only one of the links is used for forwarding control-plane data; all of
the other links are in the "standby" state. The EOBC links can only run at 1 Gbps; the SFP has to be a
1 GE SFP, as 10 GE SFPs are not supported.

Data Plane Extension


The Inter-Rack Link (IRL) connections are required for forwarding traffic that enters one nV chassis
and exits an interface on the other nV chassis. The IRL links must be 10 GE links and they have to be
directly L1 connected.

Link Distribution
In the case of an ECMP or bundle-ether scenario, the ASR 9000 nV by default uses "Source IP,
Destination IP, Source port (TCP/UDP only), Destination port (TCP/UDP only), Router ID" to determine
which link to take for IPv4 traffic or MPLS traffic (fewer than four labels).
Refer to the ASR 9000 nV Edge Guide for more details.


Sample Config for IRL


interface preconfigure TenGigE1/2/0/0
 nv
  edge
   interface
  !
 !
!

IRL Link Verification


RP/0/RSP1/CPU0:v6-pe-NV#show nv edge data forwarding location 0/RSP0/CPU0
Thu Nov 13 16:27:52.689 EST
nV Edge Data interfaces in forwarding state: 4
TenGigE0_0_0_3    <-->    TenGigE1_0_0_3
TenGigE0_0_1_3    <-->    TenGigE1_0_1_3
TenGigE0_1_0_3    <-->    TenGigE1_1_0_3
TenGigE0_1_1_3    <-->    TenGigE1_1_1_3

nV Edge Data interfaces in configured state: 8
TenGigE0_0_0_3
TenGigE0_0_1_3
TenGigE1_0_1_3
TenGigE1_0_0_3
TenGigE0_1_0_3
TenGigE0_1_1_3
TenGigE1_1_0_3
TenGigE1_1_1_3

EOBC Link Verification


RP/0/RSP1/CPU0:v6-pe-NV#admin show dsc
Thu Nov 13 16:28:24.446 EST
---------------------------------------------------------
          Node (     Seq)     Role   Serial       State
---------------------------------------------------------
    0/RSP0/CPU0 (55307975)  STANDBY  FOX1446GQXS  NON-DSC
    0/RSP1/CPU0 (       0)   ACTIVE  FOX1446GQXS  PRIMARY-DSC
    1/RSP0/CPU0 (31594755)  STANDBY  FOX1439GY4F  NON-DSC
    1/RSP1/CPU0 (72798281)   ACTIVE  FOX1439GY4F  BACKUP-DSC

RP/0/RSP1/CPU0:v6-pe-NV#show nv edge control control-link-protocols location 0/RSP0/CPU0
Priority  lPort          Remote_lPort   UDLD  STP
========  =====          ============   ====  ========
0         0/RSP0/CPU0/0  1/RSP0/CPU0/0  UP    Forwarding
1         0/RSP0/CPU0/1  1/RSP0/CPU0/1  UP    Blocking
2         0/RSP1/CPU0/0  1/RSP1/CPU0/0  UP    On Partner RSP
3         0/RSP1/CPU0/1  1/RSP1/CPU0/1  UP    On Partner RSP

ASR 9000 as the Data Center Provider Edge router


This section details the ASR 9000 configuration as a provider edge router. The following topics are
covered in this section:

MPLS Core Configuration

Provider edge-Customer edge BGP for Silver and L3 Bronze Tenants

Provider edge-Customer edge static routing to L3 Bronze and E-Gold (via ASA)

Provider edge-Customer edge connection with default gateway for L2 Bronze Tenants

ASR 9000 as Internet Router

ASR 9000 Data Center Provider Edge Implementation Toward MPLS Core
The ASR 9000 nV cluster is a Provider Edge (PE) router in the MPLS network. It is a best
practice to have devices dual home to the ASR 9000. In this implementation, Cisco CRS-1 core routers
connect to the ASR 9000 using bundle-ether interfaces. All of these interfaces are dual homed to the
ASR 9000.
The routing protocol used in MPLS core networks is usually OSPF or ISIS; in this implementation
OSPF was used with the core in area 0. LDP is configured for exchanging labels, and best practices for
fast convergence are implemented.
Multiprotocol Interior BGP (IBGP) is configured to peer with the remote Provider edge for both the IPv4
address family for Internet prefixes and the VPNv4 address family for MPLS-VPNs. Normally a route
reflector is used to distribute prefixes between all Provider edges and is therefore highly recommended.
Figure 6-2    MPLS to Core Topology

interface Bundle-Ether11
mtu 4114
ipv4 address 10.254.11.1 255.255.255.0
mac-address 4055.3943.f93
load-interval 30
!
interface Bundle-Ether12
mtu 4114
ipv4 point-to-point
ipv4 address 10.254.12.1 255.255.255.0
mac-address 4055.3934.f92
load-interval 30
!
router ospf 1
nsr
router-id 10.255.255.1
mpls ldp auto-config
nsf cisco
area 0
interface Bundle-Ether11
!
interface Bundle-Ether12
!
interface Loopback0


!
!
!
router bgp 200
address-family ipv4 unicast
!
address-family vpnv4 unicast
!
neighbor 10.255.255.201
remote-as 200
update-source Loopback0
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
!
address-family vpnv4 unicast
route-policy allow-all in
route-policy allow-all out
!
!
mpls ldp
nsr
interface Bundle-Ether11
!
interface Bundle-Ether12
!
!

Provider Edge and Customer Edge BGP to Tenant


Routers and devices connecting to the ASR 9000 nV Edge cluster peer with it as a single device, since
there is a single control plane running on the cluster of two ASR 9000 chassis. This greatly reduces the
complexity of the network design and implementation. The ASR 9000 nV Edge cluster runs two IBGP
sessions for each tenant in the tenant VRF, to peer with two of the Nexus 9300 leaves in the ACI Fabric,
for the tenants that use BGP for external routing. The ACI leaf nodes connecting to the Data Center
Provider edge are referred to as border leafs, as they form the boundary between the ACI Fabric and the
external network; pairs of leafs are used to provide high availability. Each border leaf attaches to
both ASR 9000 chassis using LACP port channels/bundle-ethernet to provide resiliency during
single link, ASR 9000 chassis, or line card failures.

Note

In this implementation, Interior BGP (IBGP) is used as Provider edge-Customer edge protocol since the
Fabric does not support Exterior BGP (EBGP). Currently IOS-XR on ASR 9000 does not support RFC
6368 to use ATTR_SET to send customer BGP attributes to remote Provider edge. However, prefixes
are advertised from Data Center Provider edge towards remote Provider edge. This will be supported in
a future release.

Note

If ASR 1000 is used as a Provider edge, local or remote, it is recommended to use internal-vpn-client
towards Customer edge along with route-reflector-client to forward the prefixes using IBGP as Provider
edge-Customer edge.

Note

If the ASR 1000 remote Provider edge uses EBGP to connect to remote Customer edge,
internal-vpn-client and route-reflector-client configurations are not required to advertise a route
originated from remote Customer edge.
From the ASR 9000 perspective, there are two sub-interfaces, one per Nexus 9300 leaf node. Please note
the BGP sessions are per tenant; each tenant needs a sub-interface pair on the ASR 9000 side, mapped
to the VLAN on the Nexus 9300 leaf node side where the external connection is configured as L2 with
SVI mode.
Please see the Data Center ACI Fabric implementation chapter for more details on configuring external
L3 and L2 connections on the ACI Nexus 9300 leaf nodes.
Please note that loopback interfaces are created in the tenant VRF and used for IBGP peering between
the ASR 9000 and the ACI Fabric border leafs. The loopback on the ACI Fabric side needs to be reachable
from the ASR 9000, and hence a static route is needed towards the ACI Fabric from the ASR 9000. Similar
configuration is required on the ACI Fabric side as well. The actual IBGP peering is between the loopback
on the ASR 9000 and the loopback on each of the Nexus 9300 leaf nodes. Update source loopback is
configured on the ASR 9000. Figure 6-3 shows the ASR 9000 running BGP over the port channels per
tenant.
Figure 6-3    ASR 9000 Running BGP with ACI Fabric for Tenants

ASR 9000 Tenant Configuration for IBGP as Provider edge-Customer edge Routing Protocol
Silver Tenant as well as L3-Bronze tenant types use this implementation.
!# VRF definition for silver tenant s001
vrf s001
address-family ipv4 unicast
import route-target
2:417
export route-target
2:417
!
!#LoopBack interface on asr9k for IBGP peering
interface loopback 411
vrf s001
ipv4 address 10.2.200.1/32
!
!#sub-interface for portchannel1
interface Bundle-Ether 5.411
vrf s001
ipv4 address 10.2.201.1 255.255.255.0

encapsulation dot1q 411


!
!#sub-interface for portchannel2
interface Bundle-Ether 6.411
vrf s001
ipv4 address 10.2.202.1 255.255.255.0
encapsulation dot1q 411
!
!#static routes for BGP peers, one per Nexus9300 leaf
router static
vrf s001
address-family ipv4 unicast
10.2.200.105/32 10.2.201.2
10.2.200.106/32 10.2.202.2
!
!#neighbor definition
router bgp 200
vrf s001
rd 2:417
address-family ipv4 unicast
!
neighbor 10.2.200.105
remote-as 200
update-source loopback411
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
!
neighbor 10.2.200.106
remote-as 200
update-source loopback411
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
commit
end
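A few IOS-XR show commands can confirm that the per-tenant IBGP sessions and prefixes are in place;
the VRF name and neighbor address are the ones from the s001 example above, and the exact output will
vary per deployment.

show bgp vrf s001 summary
show bgp vrf s001 neighbors 10.2.200.105 routes
show route vrf s001 bgp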

L3 Bronze Configuration
For the Bronze tenant, the border leaves on the ACI Fabric have IBGP peering with the ASR 9000 nV Edge
device. Each border leaf has an L2 port channel carrying the external VLAN for each tenant, and it
terminates on a port channel sub-interface on the ASR 9000. Loopback interfaces configured on both the
border leaves and the ASR 9000 act as BGP Router IDs. The Bronze tenant BGP configuration on the ASR
9000 is identical to the Silver tenant configuration specified in ASR 9000 Tenant Configuration for
IBGP as Provider edge-Customer edge Routing Protocol, page 6-6.

Provider Edge-Customer Edge using Static Routing


Tenants can use static routing between the ASR 9000 and the ACI Fabric. This is applicable to the E-Gold
and L3-Bronze types of containers. Each Nexus 9300 border leaf has an L2 port channel carrying the
tenant's external VLAN, and it terminates on a port channel sub-interface on the ASR 9000. Use of a port
channel allows for dual homing to both ASR 9000 chassis in the nV Edge cluster. Please note many tenants
can be carried on the same port channel, with each tenant using a different VLAN.


For redundancy purposes, each Nexus 9300 border leaf has the following:
1. A default route pointing to the upstream ASR 9000 nV Edge device for north-bound traffic.
2. The ASR 9000 has static routes pointing to the SVI interfaces in the Fabric as the next hop for
south-bound (Data Center bound) traffic.

The connected and static routes are redistributed into VPNv4 BGP so that other remote Provider edge
sites can reach the tenant server subnets. The following configuration is done on the ASR 9000.
! Tenant b002 uses static routing
conf t
vrf b002
address-family ipv4 unicast
import route-target
3:422
export route-target
3:422
!
interface loopback 422
vrf b002
ipv4 address 10.3.200.1/32
!
interface Bundle-Ether 5.422
vrf b002
ipv4 address 10.3.201.1 255.255.255.0
encapsulation dot1q 422
!
interface Bundle-Ether 6.422
vrf b002
ipv4 address 10.3.202.1 255.255.255.0
encapsulation dot1q 422
!
router static
vrf b002
address-family ipv4 unicast
10.3.1.0/24 10.3.201.2
10.3.1.0/24 10.3.202.2
router bgp 200
vrf b002
rd 3:422
address-family ipv4 unicast
redistribute connected
redistribute static
commit
end
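To verify the static routing based container, the tenant VRF routing table and the prefixes redistributed
into BGP can be checked with the following IOS-XR commands; the VRF b002 follows the example above.

show route vrf b002 static
show bgp vrf b002 ipv4 unicast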

ASR 9000 as Internet Router


The ASR 9000 is also used to provide Internet access to tenants. An L2 bridge domain runs from the
ASR 9000 to the border leaves in the ACI Fabric. The ASR 9000 provides Internet connectivity to the
service provider IP/MPLS network for the Expanded Gold tenant container and for the Copper tenant
shared outside connection. For the Expanded Gold tenant container, logical connectivity is from the ASA
to the ASR 9000 using an L2 external bridge domain on the ACI Fabric border leaves; the L2 external is
mapped to a VLAN on a vPC. Each tenant uses a different VLAN, but can share the vPC to the ASR
9000. On the ASR 9000 Data Center Provider edge side, there is a sub-interface per tenant in the internet
VRF, and hence a /30 subnet per tenant is required as well. This can be from the RFC 1918 block and does
not have to be routable.


Static routes for tenant subnets that are to be reachable from the Internet are configured on the ASR 9000
with the next hop set to the ASA external sub-interface on the tenant context for the corresponding Gold
tenant, which then forwards the traffic to the tenant VMs on the inside interfaces; all tenant VMs have
their default gateway on the tenant ASA context (or alternatively ASAv). The Copper container has a
similar configuration; the only difference is that in the Copper case there is only one ASA context shared
by all Copper tenants, and only one external connection to the ASR 9000 Data Center Provider edge.
These static routes for tenant subnets are redistributed into IPv4 IBGP towards the Service Provider's
route reflectors, to distribute to all other Internet routers in the network.

E-Gold Tenant Internet Connection Configuration on ASR 9000 Data Center Provider Edge
Figure 6-4 shows the E-Gold Tenant Internet Connection.
Figure 6-4    E-Gold Tenant Internet Connection

interface Bundle-Ether10.3080
description g008 internet
vrf internet
ipv4 address 11.1.8.254 255.255.255.252
encapsulation dot1q 3080
!
router static
vrf internet
address-family ipv4 unicast
10.1.6.101/32 11.1.4.5
11.1.4.0/29 11.1.4.5
11.1.4.0/29 11.1.4.253
11.1.6.0/29 11.1.6.253
11.1.8.0/29 11.1.8.253
12.1.1.3/32 11.1.3.5
12.1.1.4/32 11.1.4.253
12.1.1.6/32 11.1.6.5
12.1.1.6/32 11.1.6.253
12.1.1.7/32 11.1.7.5
12.1.1.8/32 11.1.8.253
!
router bgp 200

!
vrf internet
rd 9:999
address-family ipv4 unicast
redistribute static
!
!

Copper Tenant Configuration

The ASR 9000 nV Edge provides the Copper tenants access to and from the Internet. Figure 6-5 shows the
connectivity of the ASR 9000 to the ASA cluster.
Figure 6-5    ASR 9000 nV Edge Connectivity and Configuration

Interface Configuration
The ASA outside interface is mapped to a sub-interface of the ASR 9000 bundle Ethernet interface. All
Copper tenants' traffic leaving the ASA to and from the Internet shares the same sub-interface. Copper
traffic is then put into a VRF, which carries the traffic out to the MPLS core, separating it from other
tenants' traffic.
The following IOS-XR configuration snippet shows the interface-related configuration.
interface Bundle-Ether10.500
description copper tenants asa outside interface
mtu 9216
vrf internet
ipv4 address 10.4.101.1 255.255.255.0
encapsulation dot1q 500
!
interface Bundle-Ether10
description copper_vpc_103_104
mtu 9216
!
interface TenGigE0/0/1/2
bundle id 10 mode active
!
interface TenGigE1/1/1/2
bundle id 10 mode active


Routing Configuration
The ASR 9000 configures static routes for the NAT subnets used on the ASA for traffic coming from the
Internet into the Copper container.
router static
vrf internet
address-family ipv4 unicast
111.21.0.0/16 10.4.101.2
!

EBGP is used to exchange routes between the ASR 9000 and ASA. A default route is injected by the
ASR 9000 towards the ASA.
router bgp 200
bgp router-id 200.200.200.1
address-family ipv4 unicast
!
vrf internet
rd 9:999
address-family ipv4 unicast
redistribute connected
!
neighbor 10.4.101.2
remote-as 65101
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
default-originate
!
!
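The EBGP session to the ASA cluster and the advertised default route can be verified as follows; the
neighbor address is the ASA cluster outside interface from Figure 6-5.

show bgp vrf internet summary
show bgp vrf internet neighbors 10.4.101.2 advertised-routes
show route vrf internet bgp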

Deployment Considerations
The following considerations are recommended.

Provider edge-Customer edge implementation in the data center uses IBGP, as the ACI Fabric does
not support EBGP at this time.

In a typical implementation, the remote Provider edge-Customer edge connection uses EBGP.

Currently IOS-XR on the ASR 9000 does not support RFC 6368 to use ATTR_SET to send Customer
edge attributes to the remote Provider edge using IBGP. This support will be available in a future
release. Although the RFC 6368 support is not there, the prefixes are advertised from the Data Center
Provider edge towards the remote Provider edges.

If ASR 1000 is used as a Provider edge, local or remote, it is recommended to use internal-vpn-client
towards the Customer edge along with route-reflector-client to forward the prefixes using IBGP as the
Provider edge-Customer edge protocol.

If the ASR 1000 remote Provider edge uses EBGP to connect to the remote Customer edge, the
internal-vpn-client and route-reflector-client configurations are not required to advertise a route
originated from the remote Customer edge.


CHAPTER 7

End-to-End QoS Implementation


Quality of Service (QoS) is implemented to support differentiated Service Level Agreements (SLA) with
tenants. The SLAs typically include parameters such as bandwidth guarantees and rate limits, latency
bounds, and different tenants are offered different SLAs depending on their subscription levels. The
benefit to the cloud Service Provider is that they are able to charge a premium to provide higher SLAs
to tenants and the QoS implementation supports these SLAs. For end tenants, they get more predictable
performance based on the higher SLAs for their business critical applications, or for lower end
applications they may choose lower priced SLA offerings.
In this system implementation, QoS has the following characteristics:
1. Edge policies enforce the agreed upon contractual limits. The Service Provider identifies traffic
based on agreed upon markings, and classifies and enforces contractual limits at customer
attachment points.
2. Customer Attachment (CA) is a location, or multiple locations, where customer sites attach to the
MPLS-VPN WAN network of the provider. For remote sites, the customer edge QoS enforcement is
implemented at the remote Provider Edge (PE) devices from where tenants attach to the SP network
using Layer 3 VPN (L3VPN).
3. Within the data center, the tenant Virtual Machines (VMs) and bare metal servers attach to the Data
Center network based on the Application Centric Infrastructure (ACI) Fabric. This is another edge
enforcement point for per-VM/vNIC limit enforcement; however, policing/rate-limiting is not
supported on the ACI Fabric in this release, so in this implementation classification is based on the
customers' DSCP and mapped to the ACI Traffic class. Tenant Virtual Machines attach via the
Application Virtual Switch (AVS); bare metal servers attach directly or via the Fabric Extender
(FEX). North-bound traffic reaches the Data Center PE, where tenant policy enforcement is done for
tenant aggregate bandwidth limits per class, with traffic identified using IP/DSCP.
4. The ACI fabric offers 3 classes of traffic for tenants, and depending on the tenant type, tenant traffic
is mapped to one or more of these classes. These 3 classes are configured so that one class (Level-1)
is low latency switched, Level-2 is given the bulk of the remaining bandwidth to carry premium data,
and Level-3 is the best effort class, also called standard data, with a small amount of bandwidth
reserved. All east-west tenant traffic rides on the appropriate ACI Traffic class.
5. The ACI Fabric does not mark dot1p bits on the wire. IP/DSCP is used to map tenant traffic to
different queues on the external nodes attached to the fabric.


This chapter provides details on the end-to-end QoS implementation.


QoS Domains and Trust Boundaries


There are three QoS domains to be considered for end-to-end QoS:
1. The end tenant network (for example, an enterprise customer of an SP-hosted Infrastructure as a
Service, or IaaS) is in its own QoS domain and implements policies independently from the Data
Center and WAN network. This implementation does not cover this topic.
2. The MPLS-Core network (for example, a Service Provider Next Generation Network (SP-NGN) or
an Enterprise-wide MPLS-Core WAN) implements a QoS that supports the different offered
services for WAN transport. The end tenant customer traffic is mapped to one of the WAN/SP-NGN
service classes based on the contractual SLA between the tenant and the Enterprise-WAN or
SP-NGN.
3. Inside the Data Center, another QoS domain exists to support Data Center service offerings. The
tenant customer's traffic is mapped into one of the Data Center classes of service to implement the
contractual SLA.

The remote provider equipment is the boundary between the tenant network and the provider network
(that is, the WAN/SP-NGN) and classifies and marks traffic incoming into the WAN/SP-NGN from the
tenant. This is also the enforcement point for traffic entering the WAN/SP-NGN, and hence traffic is
conditioned to enforce the contractual agreement and support the agreed upon SLA by
policing/rate-limiting and mark down. Traffic that is allowed into an SP Traffic class is marked with a
WAN/SP-NGN class marking, so that the rest of the Enterprise/SP-NGN QoS domain and the Data
Center QoS domain can trust this marking and use it to classify and provide appropriate treatment.
The Data Center PE is the boundary between the WAN/SP-NGN and the Data Center. While the
WAN/SP-NGN and Data Center can also be two independent Service Providers/Operators, in this
implementation they are assumed to be one. For the ingress direction from the WAN/NGN to the Data
Center, the Data Center PE trusts the WAN/NGN markings and classifies traffic into similar classes
within the Data Center. The meaning of the markings in the MPLS network, which use the MPLS Traffic
Class (MPLS-TC) field, is kept consistent with the dot1p Class of Service (CoS) markings used within
the Data Center. In the egress direction, i.e., from the Data Center to the MPLS network, the Data
Center PE implements tenant aggregate policy enforcement, as well as mapping from the Data Center
classes to the WAN/NGN classes. Figure 7-1 shows the end-to-end QoS domains.
Figure 7-1    End-to-End QoS Domains


QoS Transparency
QoS transparency is an important requirement for many customers; this is when the QoS labels used by
end users are preserved when traffic transits a provider network. Typically, the IP/DSCP or Type of
Service (ToS) bits are used by end users to mark QoS labels; however, when this traffic transits a Cloud
Service Provider or WAN provider network, the SP classifies the tenant traffic and marks SP QoS labels.
To provide full QoS transparency, the SP QoS labels should be independent of the tenant QoS labels.
This can be achieved by using outer header QoS markings for SP/WAN QoS labels.
In the MPLS WAN network, the SP classifies traffic coming in from customer sites, and after
classification and traffic conditioning, the SP class is marked using the MPLS-TC bits in the MPLS core.
The SP MPLS network trusts the MPLS-TC field and provides the PHB of the class that is marked.
When the traffic reaches the Data Center, the data center provider edge (DC-PE) maps the MPLS
QoS domain labels into Data Center domain labels. This is done using the Ethernet dot1p bits on the
Data Center PE towards the data center.
From the Data Center PE, traffic enters the ACI Fabric, and the ACI Fabric classifies based on the EPG
type to ACI classes. If a tenant type has multiple classes of traffic, classification based on dot1p bits to
select one of the 3 ACI Traffic classes can be done using Custom QoS; however, this is only supported
on L3-external connections (used in the L3-Bronze and Silver containers). On L2 external-based
connections (used in E-Gold, Copper and L2 Bronze), all of the traffic is mapped to the same ACI Traffic
class, as Custom QoS is not available at this time.

Note

You can't configure Custom QoS policy at the "External Bridged Network" level. An enhancement
(CSCur79905) is filed to enable this capability.
In the opposite direction, traffic originating from the tenant Virtual Machines is mapped to ACI classes
using DSCP and EPGs. Traffic that is north-bound, that is, moving towards remote customer sites, exits
the ACI fabric and reaches the Data Center PE. The Data Center PE then does the edge enforcement
based on the tenant type and the IP/DSCP bits, selects the appropriate SP class, and marks the
MPLS-TC bits.
This implementation does not support AVS policing for edge conditioning and rate limiting at the virtual
access layer.

Trust Boundaries
The SP network identifies the traffic at its edges and marks it with SP QoS labels. These labels are trusted
from that point on within the SP network. The edges of the SP WAN network form a trust boundary for
traffic entering the SP WAN network. Similarly, there is another trust boundary at the Data Center PE.
At the trust boundaries, contract SLA enforcement and admission control is done. This is typically done
using policing: traffic under the agreed upon bandwidth limit is allowed and marked to the correct class,
while traffic exceeding the limit is either dropped or marked down. The PHB in terms of bandwidth
guarantee or low latency is implemented on every node in the SP network based on the markings selected.
To identify the tenant traffic, for instance VoIP traffic, the customer's DSCP can be used; this is agreed
upon between the tenant and the SP. For instance, up to a certain bandwidth, traffic with DSCP=EF is
accepted, marked with MPLS-TC=5, and given low latency behavior.
Figure 7-2 shows the enforcement point for customer to Data Center traffic and the trust boundary.


Figure 7-2    Enforcement Point: Customer Premises to Data Center

Figure 7-3 shows the enforcement point for Data Center to customer premises.
Figure 7-3    Data Center to Customer Premises

QoS per Service Tier


This implementation uses the following markings, as shown in Table 7-1, for different tiers of traffic:
Table 7-1    Markings for Traffic Tiers

QoS Traffic Class                            Dot1p bits in Data Center   MPLS TC in WAN   ACI Traffic Class
Management (ssh inband/vmware etc.)          -                           -                Low Latency
Network Control (routing protocols etc.)     -                           -                Low Latency
VoIP                                         -                           -                Low Latency
Storage/Infra                                -                           -                Bandwidth Guarantee
Call Signaling                               -                           -                Bandwidth Guarantee
Premium Data                                 2/1                         2/1              Bandwidth Guarantee, WRED in WAN
Standard Data                                -                           -                Best effort, WRED in WAN

Note

COS=6 and COS=7: Tenant EPGs cannot send traffic in these classes. Only Cloud Service Provider in-band
management and backend EPGs can send traffic with these markings, to protect traffic going in Traffic
class 1.

Note

Tenant VoIP traffic is also sent in the low latency ACI class; at this time the AVS does not support rate
limiting this traffic. CoS 4 is used for Network File System (NFS) traffic, and this traffic is seen in the
ACI fabric between UCS/compute and storage. This marking is also used for any Data Center infrastructure
traffic such as vMotion.

Note

CoS 3 is used for tenant call signaling traffic. It is also used for Fibre Channel over Ethernet (FCoE)
traffic inside the UCS, where both call signaling and FCoE traffic share the same queue. This
implementation uses IP-based storage instead of Fibre Channel/Fibre Channel over Ethernet (FC/FCoE).
If using FC/FCoE from UCS to a separate SAN, then the UCS queue for COS=3 needs to be configured for
FC/no-drop treatment.

Note

Provider-marked IP/DSCP can be used if QoS transparency is not required. This rewrites IP/DSCP to SP
markings. This implementation preserves the customers' original IP/DSCP markings except for the
traffic flowing through the SLB Citrix NetScaler 1000v. For traffic flowing through the NS1000V, the
connection opening and closing SYN-ACK and FIN-ACK don't have IP/DSCP preserved.

Tenant Type Mapping to QoS Traffic Classes


The Data Center IaaS offering uses four different tenant types with differing service levels. Each service
level is priced differently, and offers a range of options from premium services to standard lower priced
services. These differentiated service levels are also mapped to a set of different Data Center and MPLS
network QoS service classes for traffic treatment.
In terms of end tenant offerings, the following traffic classes are implemented:
1. Low Latency Switched Traffic. For real time apps such as VoIP.
2. Call Signaling Class. Bandwidth is guaranteed for signaling for VoIP and other multimedia.
3. Bandwidth-guaranteed Data Class. Premium data class.
4. Best Effort Data Class. Standard data class.


To make the offerings simple, these traffic classes are bundled and mapped to tenant types.
Table 7-1 shows that Gold tenants can send traffic with DSCP=EF, and the Data Center/MPLS network
classifies it as VoIP in its domain to provide a low latency guarantee by switching it in the priority
queue. Call control is also allowed for Gold tenants and is recognized by the marking DSCP=AF31. All
other traffic from Gold tenants is treated as premium data QoS service class traffic. For Silver tenants,
all traffic is treated as premium data; there is no VoIP or call signaling class offered. For Bronze tenants,
all traffic is treated as standard data class.
Table 7-2    Data Center and WAN Traffic Classes

Data Center and WAN Traffic Class   Customer Marking   Gold   Silver   Bronze/Copper
VoIP (Low Latency)                  DSCP=EF or CS5     x
Call Signaling                      DSCP=AF31          x
Premium Data                        Any                x      x
Standard Data                       Any                                x

On the ASR 9000 Data Center PE, the tenant aggregate SLA can be implemented to allow per tenant
aggregate bandwidth and rate limiting.

Note

Per-VM rate limiting is not possible with this implementation as the AVS does not support rate-limiting
or policing at this time.
Table 7-3    Tenant Aggregate Sample Contractual SLA

Traffic Type: VoIP
Enforcement at the ASR 9000 Data Center PE: 1R2C policer
Gold: 50 Mbps

Traffic Type: Call control
Enforcement at the ASR 9000 Data Center PE: 1R2C policer
Gold: 10 Mbps

Traffic Type: Premium Data, Data Center to sites (implemented at the Data Center PE to limit the total
bandwidth admitted into the WAN from the Data Center; the remote PE enforces the total bandwidth
allowed from the WAN to the customer)
Enforcement at the ASR 9000 Data Center PE: 2R3C policer
Gold: CIR=500 Mbps, PIR=3 Gbps; WRED in the MPLS core drops out-of-contract traffic before dropping CIR
Silver: CIR=250 Mbps, PIR=2 Gbps

Traffic Type: Premium Data, Customer to Data Center (the Data Center PE enforces the total traffic
admitted into the Data Center; the remote PE enforces the total bandwidth admitted into the WAN)
Enforcement at the ASR 9000 Data Center PE: 2R3C policer
Gold: CIR=500 Mbps, PIR=3 Gbps; WRED in the MPLS core, and at the Data Center PE between the Data
Center PE and the ACI fabric, drops out-of-contract traffic before dropping CIR
Silver: CIR=250 Mbps, PIR=2 Gbps; WRED in the MPLS core drops out-of-contract traffic before dropping CIR

Traffic Type: Standard Data
Enforcement at the ASR 9000 Data Center PE: 1R2C policer
Bronze/Copper: Rate limited to 100 Mbps, no CIR per tenant

ACI Fabric QoS


ACI Fabric allows for 3 traffic classes that are user configurable and referred to as Level-1, Level-2 and
Level-3. Level-3 forms the default class if nothing is specified.

Note

In the APIC GUI, Level-3 and the Unspecified class refer to the same class.


Additionally, there are 3 reserved system classes used for internal control and management of the fabric:
1. IFC Class
All the IFC originated or destined traffic is classified into this class
Strict Priority class
In Flowlet Prioritization mode, prioritized packets use the IFC class
2. Control Class (SUP Class)
Strict Priority class
All the SUP generated traffic uses this class
All control traffic such as protocol packets uses this class across the fabric
3. Span Class
Best Effort class, DWRR mode, least possible weight
All SPAN and ERSPAN traffic is classified into this class
Can be starved

This implementation uses the 3 user configurable traffic classes to permit three types of PHB. Tenant
EPGs are mapped to this QoS level directly for Silver, Bronze, and Copper because the traffic from these
tenants is mapped to one traffic class. In the case of E-Gold/Gold tenants, the custom QoS option is used
to map traffic based on DSCP to classify into VoIP class versus non-VoIP class. In this case, a single
EPG can generate traffic of different Traffic classes, and tenant IP/DSCP marking is used to pick the
traffic class to be used. For DSCP=EF, VoIP traffic, ACI traffic class Level-1 is selected, which has low
latency PHB. However, please note at this time there is no rate-limiting capability on the fabric and low
latency queue packets are switched first before switching Level-2 and Level-3 class packets based on
their bandwidth remaining weights.


Note

In this implementation, the Level-2 ACI traffic class is used for bandwidth-guaranteed tenant traffic as
well as all infrastructure traffic such as IP storage (NFS) and vMotion.
Also note that there is currently no policing or rate limiting at the AVS virtual edge, which should be a
consideration for offering any Level-1 tenant traffic class.
This implementation does not provide remarking for any of the traffic at the ACI fabric edge. Incoming
DSCP is trusted across the fabric.
The traffic classes in APIC map to specific port classes on the fabric.
Table 7-4 shows the QoS classes in APIC and the corresponding classes at the switch port.

Table 7-4    QoS Class Mapping between APIC and Fabric Switch Port

QoS Class in APIC GUI       Port Class in "sh plat int counter port x"
Level3 / Unspecified        Class 0
Level2                      Class 1
Level1                      Class 2
(reserved system class)     Class 3 (IFC)
(reserved system class)     SPAN
(reserved system class)     SUP

Table 7-5 shows sample bandwidth allocation in the fabric for the various tenant containers and traffic
classes.

Table 7-5    Bandwidth Allocation for Tenant Traffic

Level-1 (Low Latency): DSCP EF; Bandwidth Weight -NA-; Tenant EPGs E-Gold/Gold; Remarks: no rate
limiting at ACI ingress.
Level-2 (Bandwidth Guaranteed): DSCP any (including call signaling); Dot1p CoS 3 for call signaling, 2/1;
Bandwidth Weight 85%; Tenant EPGs E-Gold/Gold/Silver; Remarks: bandwidth weight kept high to
guarantee this class over Level-3.
Level-3 (Best Effort): DSCP any; Dot1p CoS any; Bandwidth Weight 15%; Tenant EPGs Bronze/Copper;
Remarks: bandwidth weight kept low to allow Level-2 to go first during congestion.


Bandwidth allocation in APIC for traffic classes is done under QoS Class Policies. This is a global
configuration setting that affects the whole fabric. The total bandwidth available on a fabric switch port
is divided among the three traffic classes as shown in Figure 7-4. The scheduling algorithm is set as
Strict Priority for Level-1 and WRR for other classes.
Figure 7-4    Global QoS Policy

Note

The GUI allows bandwidth percentage configuration for the Level-1 class when Strict Priority is selected.
The Strict Priority algorithm does not look at the bandwidth percentage, and rate-limiting is not allowed.
An enhancement is filed to grey out the bandwidth allocation field when Strict Priority is configured
(CSCur84469).

Classification
In the ACI Fabric, you can configure classification policies using the QoS Class drop-down list at the EPG
level or External Network level. When you set the QoS class at the EPG level or External Network level,
all incoming traffic is classified into the specified level (Level-1, Level-2 or Level-3) within the
fabric. Figure 7-5 shows an EPG level classification policy using the "QoS Class" configuration for a
Bronze tenant.

Figure 7-5    EPG Level QoS Classification for Bronze Traffic

For EPGs and External Routed Networks, you can also configure Custom QoS policies. Custom QoS
policies allow classification of traffic into different classes based on the incoming CoS or DSCP. With a
Custom QoS policy, you can also remark the DSCP if needed.


The External Bridged Networks configuration does not support Custom QoS at this time. This means
that all incoming traffic needs to be classified into the same class irrespective of the type of traffic. In
the case of Gold and L2 Bronze tenants, Bridged External Networks are used for external connectivity.
All Gold traffic entering the Fabric from the ASR 9000 nV Edge is classified into Level-2 at the ingress
of the fabric. This means that the low latency voice traffic and the data traffic go into the same Level-2
class in the north-south direction. An enhancement bug is filed to address this limitation:
CSCur79905 (ENH): Need to support custom QoS for External Bridged Networks
Figure 7-6 shows that all traffic received on the external bridged network is classified into the Level-2
class.

Figure 7-6    QoS Classification of North-South Gold Traffic

At the EPG level, a custom QoS policy can be applied to separate VoIP traffic and data traffic into
different traffic classes. Figure 7-7 shows a custom QoS policy at the EPG level.

Figure 7-7    Custom QoS Policy

Figure 7-8 shows a custom QoS policy for classifying the Gold tenant's south-north traffic. In this example,
the VoIP traffic is classified into Level-1 based on DSCP CS5 and EF. Traffic that falls into the DSCP
range CS1 to AF43 is classified into the Level-2 bucket.


Figure 7-8    Custom QoS Policy for South-North Gold Traffic

Trust
This implementation uses a simplified trust model in which the tenant EPG is mapped to a single class
or set of classes.

Tenants with One Type of Traffic


In this model, a tenant is a set of EPGs, and EPGs are mapped to ACI traffic levels at ingress to the fabric
from external routers or compute by mapping the EPG to the ACI QoS level. No further marking is
required. All traffic of these tenants is mapped to the same ACI QoS level. This is applicable to the
Silver, Bronze and Copper containers.
This applies to both External Routed Network and External Bridged Network connections for these
tenants.

Tenants with More Than One Type of Traffic


In this model, tenants have multiple traffic types, for example VoIP, call signaling and data traffic
within the same EPG. In this case, the tenant EPGs are mapped to a base class for all traffic, and the
Custom QoS option under the EPG definition is used to map the tenant traffic to a different traffic level.
For the E-Gold tenant, this implementation matches VoIP traffic based on the tenant-set IP/DSCP and
promotes it to Level-1 using Custom QoS.
Currently rate-limiting is not supported, hence this model may not be desirable in deployments that want
to limit the tenant injected low latency queuing traffic. The tenant VMs have to be trusted or SP managed
to inject traffic in this class. In the future, rate-limiting/policing at the AVS/Nexus 9300 level will allow
tenants to inject traffic in this manner as well.
This model is also used for External Routed Networks. However, Custom QoS is not supported for
External Bridged Networks, and an enhancement request is filed for that feature.

Marking
Traffic inside the fabric is expected to use the traffic class assigned by configuring the EPG, external
routed network or external bridged network to an ACI traffic class or level. Additionally, marking of the
DSCP bits can be done if needed.


Note

In this implementation, the customer IP/DSCP is not modified, to provide QoS transparency.
Traffic exiting the ACI Fabric towards the UCS fabric interconnects or external devices has dot1p set to 0.
An enhancement request has been filed to provide dot1p marking corresponding to the ACI Traffic
Level.

Note

CSCuq78913: Need to preserve CoS across ACI Fabric for IP packets.

UCS QoS
The UCS unified fabric unifies LAN and SAN traffic on a single Ethernet transport for all blade servers
within a UCS instance. In a typical compute implementation, UCS uses FCoE to carry Fibre Channel
and Ethernet traffic on the same physical Ethernet connection between the Fabric Interconnect and the
server. This connection terminates at a converged network adapter on the server, and the unified fabric
terminates on the uplink ports of the Fabric Interconnect. Separate uplink ports are used for Ethernet and
FC/FCoE traffic on the uplink side, and also separate vEth and vHbas are created on the server.

Note

This implementation uses only IP-based storage not FC/FCoE; all traffic is on Ethernet NICs.

AVS Encapsulation
Tenant VMs attach to the AVS, and the ACI fabric extends up to the virtual port-group where the VM is connected. VMs are considered endpoints by the ACI fabric and belong to an EPG. The AVS is used with local switching enabled and VLAN encapsulation in this implementation. EPGs are mapped to virtual port-groups on the VMM, and a VLAN from the VLAN pool is used to transport traffic based on the policy configured on the APIC. Currently, the AVS does not support marking dot1p QoS corresponding to the EPG QoS level; hence all traffic is marked with dot1p bits set to 0.
The following enhancement is filed to track marking capabilities in AVS.

Note

CSCuq74957: Need to support AVS CoS marking.

QoS System Class


The UCS unified fabric is divided into eight virtual lanes. Two virtual lanes are reserved for the internal
system and management traffic, and the other six virtual lanes are user configurable. The QoS system
class defines the overall bandwidth allocation for the six user configurable virtual lanes. Each system
class reserves a specific segment of the bandwidth for a specific type of traffic. This provides a level of
traffic management, even in an oversubscribed system.

Note

For this implementation, the dot1p bits are not set by the AVS; hence all traffic uses the default queue. It is recommended to set the bandwidth weight for this queue to a high value.


QoS Policy
The QoS policy determines the QoS treatment for the outgoing traffic from a vNIC of the UCS blade server. For UCS servers deployed with the AVS, a QoS policy with the Host Control Full setting is attached to all vNICs on the service profile (logical blade server). The policy allows the UCS to preserve the CoS markings assigned by the AVS in the future, when supported. If the egress packet has a valid CoS value assigned by the host (that is, marked by AVS QoS policies), then the UCS uses that value. Otherwise, the UCS uses the CoS value associated with the Best Effort priority selected in the Priority drop-down list. Figure 7-9 shows the QoS policy configuration.
Figure 7-9    QoS Policy Configuration

ASR 9000 Data Center PE QoS


The ASR 9000 Data Center PE forms the boundary between the WAN QoS domain and the Data Center
QoS domain.
On the WAN-facing links, a flat QoS policy is used. In the egress direction it provides a priority class, bandwidth-guaranteed classes with WRED, and a best effort class.
Traffic from the WAN to the Data Center is treated on the WAN ingress interface with a flat ingress policy to implement a full-pipe L3VPN QoS model, with a QoS-group marked for each class. This allows the original class to be preserved for the Data Center egress QoS policy.
The Data Center egress policy is set up using hierarchical QoS to enforce a per-tenant maximum and to provide differentiation for multiple child classes, such as low latency for VoIP, a bandwidth guarantee for call signaling, and WRED for premium data so that out-of-contract traffic is dropped before in-contract traffic.
The Data Center ingress policy classifies traffic based on tenant IP/DSCP markings and rate limits (polices) it based on the agreed-upon SLAs. For Gold/E-Gold tenants, VoIP is rate limited using a 1R2C policer. Similarly, premium data is limited to the in-contract maximum; above that limit, traffic is still allowed but marked down using a 2R3C policer so that out-of-contract traffic can be dropped first at points of congestion (an illustrative configuration sketch follows at the end of this section).
The ASR 9000 Data Center PE policy is very similar to the previously implemented Data Center PE policies shown in the VMDC VSA 1.0 Implementation Guide at the link below, with one change: for the Data Center ingress QoS on traffic coming from the ACI fabric, since the ACI fabric does not mark the QoS level with dot1p bits in the current release, the ASR 9000 Data Center PE classifies using the customer's IP/DSCP markings to identify the traffic type and class for enforcing the SLA, and then marks the MPLS-TC bits of the appropriate SP/WAN QoS class.
http://www.cisco.com/c/dam/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/VSA/1-0/IG/VMDC-VSA-1-IG.pdf
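As an illustration of the Data Center ingress treatment described above, the following IOS-XR sketch shows DSCP-based classification with a 1R2C policer for VoIP and a 2R3C policer that marks down out-of-contract premium data; the class names and rates are placeholders rather than the validated tenant SLA values:

class-map match-any tenant-voip
 match dscp ef
!
class-map match-any tenant-premium-data
 match dscp af21
!
policy-map dc-ingress-tenant
 class tenant-voip
  ! 1R2C policer: conforming VoIP is admitted and marked, excess is dropped
  police rate 10 mbps
   conform-action set mpls experimental imposition 5
   exceed-action drop
  !
 !
 class tenant-premium-data
  ! 2R3C policer: in-contract kept, out-of-contract marked down, violating traffic dropped
  police rate 50 mbps peak-rate 100 mbps
   conform-action set mpls experimental imposition 2
   exceed-action set mpls experimental imposition 1
   violate-action drop
  !
 !
 class class-default
  set mpls experimental imposition 0
 !
end-policy-map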

Deployment Considerations
The following considerations are recommended.

ACI Fabric currently allows three traffic classes that are user configurable. These classes are referred to as Level-1, Level-2, and Level-3. Level-1 is typically configured as the strict priority class.

You can classify the traffic to Level-1, Level-2 or Level-3 class at the EPG, "External Bridged
Network" and "External Routed Network" level.

Custom QoS policy configuration at the EPG and External Routed Network level allows
classification based on incoming CoS or DSCP. This policy also allows remarking DSCP based on
incoming CoS/DSCP however this capability is not implemented in this solution to allow DSCP
based QoS transparency.

If the incoming traffic has a CoS marking, the ACI Fabric resets it to CoS 0. An enhancement (CSCuq78913) is filed to preserve CoS across the ACI fabric.

You can't configure Custom QoS policy at the "External Bridged Network" level. An enhancement
(CSCur79905) is filed to enable this capability.

In this implementation, premium data traffic shares the Level-2 class with IP storage (NFS) and vMotion.

By default, ACI fabric trusts DSCP settings in the packet.

Cisco AVS does not offer marking capabilities. An enhancement CSCuq74957 is filed to track this.

ACI Fabric does not support rate-limiting or policing.

The global QoS policy in APIC allows bandwidth configuration for Strict Priority class. Since there
is no rate-limiting capability for strict priority class, bandwidth modification should not be allowed.
CSCur84469 is filed to grey out this field.

In the current implementation with UCS, all traffic falls into the default queue since dot1p marking
is not supported on AVS and Leaf switches.

CHAPTER 8

Expanded Gold Tenant Container


The Expanded Gold Tenant Container is a template that combines hardware and software network resources, network functionalities, and services into a logical software-based entity that enables an administrator to exercise domain-based access control. The template enables a level of abstraction and allows a service provider to offer secured premium services built atop a shared network infrastructure, based upon pre-defined network and services specifications. Figure 8-1 shows the physical network topology of the shared network infrastructure.
Figure 8-1    Physical Network Topology

The Expanded Gold Tenant Container makes use of the Cisco ASA 5585 adaptive security appliance as
the perimeter firewall for the protected workload virtual machines (VMs) and virtual appliances. Cisco
ASA 5585 delivers superior scalability and effective, always-on security designed to meet the needs of
an array of deployments. The Cisco ASA 5585 security appliance is partitioned into multiple virtual
devices, known as security contexts. Each ASA security context is an independent virtual device, with
its own security policies, interfaces, and administrators. For each Expanded Gold Tenant Container, two
ASA security contexts are deployed.


The Citrix NetScaler 1000v is an application delivery controller that offers web and application load
balancing, acceleration, security and offload feature set in a simple and easy to install virtual appliance
form factor. The NetScaler 1000v is used to intelligently distribute and optimize Layer 4 to Layer 7
(L4L7) network traffic for the generic 3-tier application with web, application, and database tiers. The
Expanded Gold Tenant Container has two NetScaler 1000v pairs (each configured in active-standby high availability mode), one for the private zone and one for the DMZ.
The Application Centric Infrastructure (ACI) fabric provides the robust network fabric to create a highly
flexible, scalable, and resilient architecture of low-latency, high-bandwidth links. The ACI Fabric
simplifies and flattens the data center network, with centralized automation and policy-driven
application profiles. The Cisco Application Policy Infrastructure Controller (APIC) is the unifying point
of automation and management for the ACI Fabric. APIC provides centralized access to all fabric
information, optimizes the application lifecycle for scale and performance, and supports flexible
application provisioning across physical and virtual resources.
The ASR 9000 Aggregation Services Router provides access to external networks for the Expanded Gold
Tenant Container, typically via IP or Multi Protocol Label Switching (MPLS) based connectivity to the
private intranet and Internet. The ASR 9000 is utilized in network virtualization (nV) mode, where two
or more physical ASR 9000 chassis are combined to form a single logical switching or routing entity.
The ASR 9000 is used as an MPLS Provider Edge (PE) router, providing Layer 3 VPN (L3VPN) and
Internet connectivity to the service provider IP/MPLS network for the Expanded Gold Tenant Container.
The workload VMs and virtual appliances for the Expanded Gold Tenant Container are hosted on
VMware vSphere infrastructure. The VMware vSphere ESXi hypervisors are hosted on the hardware
platform provided by the Cisco Unified Computing System (UCS) System with B-Series blade servers.
VMware vSphere is VMware's cloud operating system that virtualizes, aggregates, and manages a large
collection of infrastructure resources (CPUs, memory, networking, and so on) to provide pools of virtual
resources to the data centers, transforming them into dramatically simplified cloud computing
infrastructures. VMware vSphere consists of several technologies that provide live migration, disaster
recovery protection, power management and automatic resource balancing for data centers.
Cisco UCS is the next generation data center platform that unites compute, network, storage access,
virtualization and management software into a single, highly available, cohesive, and energy efficient
system. UCS is designed and optimized for various layers of virtualization to provide an environment in
which applications run on one or more uniform pools of server resources. The system integrates
low-latency, lossless Ethernet unified network fabric with x86-architecture servers.
The Cisco Application Virtual Switch (AVS), which is a hypervisor-resident virtual network edge switch
designed for the ACI Fabric, serves as the distributed virtual network switch on the VMware vSphere
ESXi hypervisors. Cisco AVS provides consistent virtual networking across multiple hypervisors to
simplify network operations and provide consistency with the physical infrastructure.
Persistent disk storage for the workload VMs and virtual appliances is provided by the NetApp FAS3200 series storage system. The NetApp FAS3200 family offers a robust enterprise-grade feature set with tools
to make management easier over the life of the system. The NetApp storage system is configured in
cluster-mode (or c-mode), which supports multi-controller configurations with a global namespace and
clustered file system. Access to the storage system is via the Network File System (NFS) protocol.

Dual Zones Layout for Workload VMs


The Expanded Gold Tenant Container is designed with two zones for hosting workload VMs, the private zone and the demilitarized zone (DMZ). Each zone is fronted by a separate ASA firewall, providing defense-in-depth protection for the tenant container. Figure 8-2 shows the logical network
topology of the Expanded Gold Tenant Container.


Figure 8-2    Expanded Gold Tenant Container Logical Network Topology

The private zone has three private subnets, designed for the generic 3-tier application profile with web, application, and database tiers, plus a dedicated subnet for the Citrix NetScaler 1000v operating in one-arm mode. The tenant connects to the ASR 9000 PE to access the private intranet over the service
provider MPLS L3VPN network. Each tenant has its own Virtual Routing and Forwarding (VRF)
instance on the ASR 9000. The tenant VRF instance on the ASR 9000 is extended to the ASA firewall
via a dedicated tenant VLAN carried over the ACI Fabric.
The DMZ has one public subnet for DMZ workload VMs, as well as a dedicated subnet for the Citrix NetScaler 1000v. The DMZ allows the tenant to isolate and secure access from the Internet to services on workload VMs hosted in the DMZ. The workload VMs in the DMZ have restricted access to the private zone workload VMs, to reach services (such as business logic processing on the application tier) that are not offered in the DMZ; there is no direct access from the Internet to the private zone. For each tenant, the ASA firewall connects to the ASR 9000 on a point-to-point VLAN sub-interface.
Both ASA firewalls are configured in L3 routing mode, serving as default gateway for the workload
VMs and virtual appliances in each zone. The ACI Fabric only provides L2 switching functionality; all
L3 routing functions are disabled.

Note

As of ACI version 1.0, a traffic redirection mechanism is not supported; as such, when inserting the ASA in L3 routing mode via the service graph, the ACI Fabric must be configured in L2 switching mode, with the ASA serving as the default gateway for the workload VMs and virtual appliances.

High Availability
Application availability is inversely proportional to the total application downtime in a given time period
(typically a month), and the total downtime is simply the sum of the duration of each outage. To increase
a system's availability, and achieve high availability (HA), the duration of the outages, the frequency of
outages, or both must be decreased.


The ACI Fabric, MPLS core network, UCS compute platform, and storage systems are designed and
deployed with HA features and capabilities. Table 8-1 defines the HA features incorporated into the
components that make up the Expanded Gold Tenant Container.
Table 8-1    Components and HA Implementation

Component: ASA Security Context
HA Implementation: ASA clustering on the physical ASA security appliances; Spanned EtherChannel with Link Aggregation Control Protocol (LACP)

Component: NetScaler 1000v
HA Implementation: NetScaler HA pair; Cisco UCS B-Series blade servers and Fabric Interconnects; VMware vSphere HA

Component: Tenant Workload VMs
HA Implementation: VMware vSphere HA; Cisco UCS B-Series blade servers and Fabric Interconnects; Server load balancing with NetScaler 1000v

Traffic Flows
The following traffic flows are defined:

Private Zone, page 8-4

Demilitarized Zone, page 8-5

Private Zone
Figure 8-3 shows the traffic flow from private intranet to the 3-tier application on the private zone. In
this traffic flow, the web and application tier VMs are fronted by the NetScaler 1000v server load
balancer, while the database VM is accessed directly.

Note

The ASA and NetScaler 1000v are depicted multiple times to keep the diagram simple and uncluttered.
Both the workload VMs and the NetScaler 1000v virtual appliances have only one data vNIC/interface.


Figure 8-3    Private Zone Traffic Flow

Demilitarized Zone
Figure 8-4 shows the traffic flow from the Internet to a DMZ VM in the DMZ. In this traffic flow, the DMZ NetScaler 1000v server load balancer fronts the DMZ VMs. Upon receiving the user request from the Internet, the DMZ VMs send a request to the private zone VMs for further processing; the private zone VMs are fronted by the NetScaler 1000v in the private zone.

Note

The ASA and NetScaler are depicted multiple times to keep the diagram simple and uncluttered. Both
the workload VMs and the NetScaler 1000v virtual appliances have only one data vNIC/interface.


Figure 8-4    DMZ Traffic Flow

Expanded Gold Tenant Container Configuration


The Expanded Gold Tenant Container is modeled as a tenant on APIC. Figure 8-5 shows the mapping of the logical network topology of the Expanded Gold Tenant Container into the logical model on the ACI Fabric, including the major logical constructs, such as context, bridge domains, end-point groups (EPGs), and contracts, that are used to construct/configure the logical model.


Figure 8-5    ACI Model of Expanded Gold Tenant Container
The L2 segment for each ASA interface is modeled as a bridge domain. All bridge domains for the tenant are placed under one private network/context (or VRF). Only one context is required for the tenant on the ACI Fabric, since the fabric is operating in L2 mode, with the L2 segments providing the IP address space isolation. The EPGs for the tenant are organized into two application profiles, one for the private zone and one for the DMZ. The IP subnets in the private zone are allocated from the private IP address space defined in RFC 1918; the DMZ uses a mixture of both public and private IP address space.
The tenant container connects to outside networks (private intranet over L3VPN, and the Internet) with the external bridged network logical construct, which bridges the external interface of the ASAs to the VLAN sub-interfaces on the ASR 9000 router. On the ASR 9000 router, each tenant has its own VRF for connection to the L3VPN private intranet. The Internet connection is set up as a common VRF shared among multiple tenants on the ASR 9000 router.
Figure 8-6 shows the managed objects (MOs) that are constructed/configured for the Expanded Gold
Tenant Container; the name of the tenant container is g008. Generic MO names are used within the tenant
container.


Figure 8-6    Managed Objects of Expanded Gold Tenant Container

Prerequisites
The following are prerequisites to constructing/configuring the Expanded Gold Tenant Container:
1. Configure the APICs and Nexus 9000 Series switches to operate in ACI Fabric mode.
2. Configure the ASR 9000 to serve as PE router. The ASR 9000 PE should have bundle Ethernet interfaces that are connected to the ACI Fabric virtual port-channel (VPC) interfaces.
3. Configure the ASA physical security appliances into an active/active highly available cluster operating in multi context mode. The data interfaces of the ASA physical appliances should be bundled and connected to the ACI Fabric VPC interface.
4. Configure the UCS B-Series blade servers for hosting the VMware vSphere infrastructure. The UCS fabric interconnects should have port-channel bundles that connect to the ACI Fabric VPC interfaces.
5. Configure VMware vSphere for hosting workload VMs and virtual appliances.
6. Configure the NetApp storage system to provide persistent disk storage for the workload VMs and virtual appliances.
7. APIC should have either in-band or out-of-band management access to the management network. Specifically, APIC should have management access to vSphere vCenter, the ASA security appliance, and the virtual service appliances via the management network.
8. Configure Virtual Machine Management (VMM) domains for the vSphere virtual datacenters (vDCs) that will host the workload VMs and virtual appliances.


9. Configure and assign VLAN pools to VPCs and VMM domains.
10. Configure Cisco AVS or VMware vSphere Distributed Switch (VDS) to provide the distributed virtual network switch for workload VMs and virtual appliances hosted on the VMware vSphere infrastructure.
11. Upload the device packages for Cisco ASA and Citrix NetScaler 1000v to APIC.

Summary of Steps
Table 8-2 provides an overview of steps required to construct/configure the Expanded Gold Tenant
Container.
Table 8-2    Configuration Procedure

1.  Create Security Domain
    The Security Domain identifies the portions of the management information tree the user can access. Use of a security domain is optional.
2.  Create Tenant
    A tenant is a logical container for application policies that enable an administrator to exercise domain-based access control.
3.  Create Private Network (or Context) and Bridge Domains
    A context is a unique L3 forwarding and application policy domain (a private network or VRF) that provides IP address space isolation for tenants. A bridge domain represents a L2 forwarding construct within the fabric. A bridge domain must be linked to a context.
4.  Create Application Profiles and EPGs
    An application profile models application requirements. Application profiles contain one or more EPGs. An Endpoint Group (EPG) is a managed object that is a named logical entity that contains a collection of endpoints. Endpoints are devices that are connected to the network directly or indirectly.
5.  Create External Bridged Networks
    The external network managed object controls connectivity to the outside. External Bridged Network specifies the relevant L2 properties that control communications between an outside public/private network and the ACI Fabric.
6.  Configure ASR 9000
    Configure VRF, VLAN sub-interfaces, BGP and static routes, and so on, on the ASR 9000 router.
7.  Create Filters and Contracts
    A contract governs the communication between EPGs that are labeled providers, consumers, or both. EPGs can only communicate with other EPGs according to contract rules. A contract makes use of filters (which are organized into subjects) to specify the type of traffic that can be communicated and how it occurs.
8.  Associate Contracts to EPGs
    The ACI Fabric security policy model is based on contracts. Contracts contain the specifications for security policies that are enforced on traffic between EPGs. The relationship between an EPG and a contract can be either a provider or consumer. When an EPG provides a contract, communication with that EPG can be initiated from other EPGs as long as the communication complies with the provided contract. When an EPG consumes a contract, the end-points in the consuming EPG may initiate communication with any endpoint in an EPG that is providing that contract.
9.  Attach vNIC to EPG port-group
    Attach the vNIC of workload VMs to EPG backed port-groups on vSphere.
10. Create ASA security contexts
    ASA security context configuration allocates physical resources to the contexts. Two ASA security contexts are required for the Expanded Gold Tenant Container, one for the private zone, and one for the DMZ. Each ASA security context should have basic initial configuration to allow APIC access.
11. Create VLAN pools
    The VLAN pools define the range of VLAN IDs assigned to each ASA logical device.
12. Create Physical Domain for private zone ASA
    A physical domain comprises physical resources (ports, port-channels) and encap resources (VLAN/VXLAN pool) allowed for use by the EPGs.
13. Create Logical Device for private zone ASA
    A logical device (also known as a device cluster) is one or more concrete devices that act as a single device. A logical device is addressed and managed through a management IP address that is assigned to the cluster.
14. Create Concrete Device for private zone ASA
    A concrete device identifies an instance of a service device, which can be physical or virtual. A concrete device has its own management IP address to configure and monitor through the APIC.
15. Create Physical Domain for DMZ ASA
    See above.
16. Create Logical Device for DMZ ASA
    See above.
17. Create Concrete Device for DMZ ASA
    See above.
18. Configure ASA Logical Device parameters
    Configuration for the logical devices: DNS domain, NTP server, logging, and so on.
19. Deploy NetScaler 1000v virtual appliances on vSphere
    Four NetScaler 1000v virtual appliances are deployed out of band per Expanded Gold Tenant Container, one HA-pair for the private zone, and one HA-pair for the DMZ. Each NetScaler 1000v should have basic initial configuration to allow APIC access.
20. Create Logical Device for private zone NetScaler 1000v
    See above.
21. Create Concrete Devices for private zone NetScaler 1000v
    See above.
22. Enable HA for private zone NetScaler 1000v
    Active/Standby HA is used for NetScaler 1000v.
23. Create Logical Device for DMZ NetScaler 1000v
    See above.
24. Create Concrete Devices for DMZ NetScaler 1000v
    See above.
25. Enable HA for DMZ NetScaler 1000v
    See above.
26. Create Service Graph Templates
    A service graph template identifies the set of network or service functions that are needed by an application; each function is represented as a node in the service graph template. A service graph is inserted between consumer and provider EPGs by a contract. Three service graph templates are required for the Expanded Gold Tenant Container:
    Consumer EPG - ASA Firewall - Provider EPG
    Consumer EPG - Private ASA Firewall - DMZ ASA - Provider EPG
    Consumer EPG - NetScaler 1000v - Provider EPG
27. Configure ASA objects
    These are the ASA network and service objects/groups. These objects identify the IP subnets, SLB virtual IPs, and services that are used in both private and DMZ ASA configurations.
28. Create Logical Device Contexts for private zone ASA
    The logical device context (also known as device selection policy) specifies the policy for selecting a logical device. The policy also maps the function node connectors to the logical device interfaces and the bridge domain of each interface.
29. Model the private zone ASA with L4-L7 service parameters
    Configure L4-L7 service parameters for the ASA: interface IP addresses and security levels; static routes; security access control lists; attach access control lists to interfaces.
30. Model the private zone NetScaler 1000v with L4-L7 service parameters
    Configure L4-L7 service parameters for the NetScaler 1000v: subnet IP address; static routes; service monitor, service groups, and virtual servers.
31. Create Logical Device Contexts for DMZ ASA
    See above.
32. Model the DMZ ASA with L4-L7 service parameters
    See above.
33. Model the DMZ NetScaler 1000v with L4-L7 service parameters
    See above.
34. Create Logical Device Contexts for inter-zone communication
    See above.
35. Model the inter-zone ASA setup with L4-L7 service parameters
    See above.
36. Associate service graphs to contracts
    A service graph is inserted between consumer and provider EPGs by a contract. APIC automatically pushes the configuration specified in the L4-L7 service parameters to the service device once the service graphs are deployed.

Detailed Steps
The following sections detail the steps to construct/configure the ACI logical model of the Expanded Gold Tenant Container. The API requests make use of XML data structures, instead of JSON.
For steps involving APIC API requests, it is possible to merge some of the steps, and hence the XML data structures, instead of sending multiple API requests with separate XML data structures; for example, the API requests for creating the tenant, context, bridge domains, application profiles, and EPGs can be combined into one API request, with the XML data structures merged. The API requests are shown separately, to make the documentation clearer and easier to digest.
Unless specified otherwise, all the API requests detailed in the following sections use the HTTP POST method with the following normalized URL:
http://{apic_ip_or_hostname}/api/mo/uni.xml
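For reference, one way to send these requests is to authenticate against the APIC REST API first and then post the XML payload with the returned session cookie. In the following sketch, the credentials and the payload file name are placeholders:

# Authenticate to APIC; the session token is returned in the APIC-cookie cookie
curl -s -X POST -c cookie.txt -d '<aaaUser name="admin" pwd="password"/>' http://{apic_ip_or_hostname}/api/aaaLogin.xml

# Post a tenant configuration payload (for example, the XML from one of the steps below saved to a file)
curl -s -X POST -b cookie.txt -d @g008_config.xml http://{apic_ip_or_hostname}/api/mo/uni.xml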

Step 1

Create Security Domain.


The Security Domain (aaa:Domain) identifies the portions of the management information tree the user can access. The core APIC internal data access control system provides multi-tenant isolation and prevents information privacy from being compromised across tenants. Read/write restrictions prevent any tenant from seeing any other tenant's or the fabric's configuration, statistics, faults, or event data. The following shows the XML data structure to create a security domain.
<aaaUserEp>
<aaaDomain name="g008_sd" />
</aaaUserEp>

Note

The use of security domain for a tenant container is optional.

Step 2

Create Tenant.


A tenant (fv:Tenant) is a logical container for application policies that enable an administrator to
exercise domain-based access control. The security domain is associated with the tenant to scope the
access of users to the tenant. The following shows the XML data structure to create a tenant, and
associate a security domain to the tenant.
<fvTenant name="g008" descr="gold container with asa">
<aaaDomainRef name="g008_sd" />
</fvTenant>

Step 3

Create Context and Bridge Domains.


A context (fv:Ctx) is a unique L3 forwarding and application policy domain (a private network or VRF)
that provides IP address space isolation for tenants. A bridge domain (fv:BD) represents a L2 forwarding
construct within the fabric. A bridge domain must be linked to a context/VRF. The following XML data
structure creates a context and the bridge domains; each bridge domain does not have any subnet MO
(constructing/configuring L2 only construct), and the context is associated to the bridge domains.
<fvTenant name="g008">
<fvCtx name="net01" />
<fvBD name="pvt_external_bd" arpFlood="yes" unkMacUcastAct="flood"
unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="bd01" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="bd02" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="bd03" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="pvt_ns_bd" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="dmz_external_bd" arpFlood="yes" unkMacUcastAct="flood"
unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="dmz_bd" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="dmz_ns_bd" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="inter_asa_bd" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
</fvTenant>

Note

On the Expanded Gold Tenant Container, unicast routing is disabled for each bridge domain, since the ACI Fabric only provides L2 forwarding functionality. ARP and unknown unicast flooding are enabled; the flooding settings are required for the ACI Fabric to operate properly with the ASAs inserted by the service graphs.

Step 4

Create Application Profiles and EPGs.


An application profile (fv:Ap) models application requirements. Application profiles contain one or
more EPGs. An EPG (fv:AEPg) is a managed object that is a named logical entity that contains a
collection of endpoints. Endpoints are virtual or physical devices that are connected to the network
directly or indirectly.
For the Expanded Gold Tenant Container, two application profiles are created, one for the private zone,
and one for the DMZ. Each EPG is associated with a bridge domain; constructing/configuring a 1:1
relationship between EPG and bridge domain to provide isolation of IP address space and traffic between
the EPGs. For EPGs that will have VM endpoints attachment, a VMM domain where the VMs will reside
is associated with the EPG.
The XML data structure below creates the applications profiles and the EPGs for each application
profile.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate" resImedcy="immediate" />
</fvAEPg>
<fvAEPg name="epg02">
<fvRsBd tnFvBDName="bd02" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate" resImedcy="immediate" />
</fvAEPg>
<fvAEPg name="epg03">
<fvRsBd tnFvBDName="bd03" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate" resImedcy="immediate" />
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<!-- no vmm domain for private ns epg -->
<fvRsBd tnFvBDName="pvt_ns_bd" />
</fvAEPg>
</fvAp>
<fvAp name="app02">
<fvAEPg name="dmz_epg">
<fvRsBd tnFvBDName="dmz_bd" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate" resImedcy="immediate" />
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<!-- no vmm domain for dmz ns epg -->
<fvRsBd tnFvBDName="dmz_ns_bd" />
</fvAEPg>
</fvAp>
</fvTenant>

Note

The pvt_ns_epg and dmz_ns_epg EPGs do not have a VMM domain association; APIC will insert the NetScaler 1000v virtual appliances when the service graphs are deployed.
For each EPG with VMM domain association, APIC creates a VDS port-group on vSphere vCenter. The
vSphere administrator attaches the vNIC of the workload VM to the VDS port-group to make it an
endpoint of an EPG. The EPG backed VDS port-group has the following naming convention:
{tenant_name}|{app_profile_name}|{epg_name}

Step 5

Create External Bridged Networks.


The external network managed object (l2ext:Out or l3ext:Out) controls connectivity to an outside network. External Bridged Network (l2ext:Out) specifies the relevant L2 properties that control communications between an outside network and the ACI Fabric.
For the Expanded Gold Tenant Container, two external bridged networks are configured, one for access to/from the private intranet (via L3VPN on the ASR 9000 PE), and one for access to/from the Internet (via the ASR 9000 PE).
The following XML data structure creates the external bridged networks. The pvt_external MO bridges the pvt_external_bd bridge domain (connected to the outside interface of the private zone ASA) to the VLAN 1008 sub-interface on the ASR 9000 PE, while the dmz_external MO bridges the dmz_external_bd bridge domain (connected to the outside interface of the DMZ ASA) to the VLAN 2998 sub-interface on the ASR 9000 PE.
<fvTenant name="g008">
<l2extOut name="pvt_external">
<l2extRsEBd tnFvBDName="pvt_external_bd" encap="vlan-1008" />
<l2extLNodeP name="l2_nodes">
<l2extLIfP name="l2_interface">
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asr9k]" />
</l2extLIfP>
</l2extLNodeP>
<l2extInstP name="pvt_external" />
</l2extOut>
<l2extOut name="dmz_external">
<l2extRsEBd tnFvBDName="dmz_external_bd" encap="vlan-2998" />
<l2extLNodeP name="l2_nodes">
<l2extLIfP name="l2_interface">
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-103-104/pathep-[vpc_n103_n104_asr9knv]" />
</l2extLIfP>
</l2extLNodeP>
<l2extInstP name="dmz_external" />
</l2extOut>
</fvTenant>

Note

For this implementation, two virtual port-channels (VPCs) are configured from the ACI Fabric to the ASR 9000; the VPCs are configured as Ethernet bundles on the ASR 9000 PE. It is also possible to use only one VPC.

Step 6

Configure ASR 9000 PE.
The ASR 9000 PE is not part of the ACI Fabric. It is considered the external network in the ACI
framework. The configuration of the ASR 9000 is included here for completeness. The following shows
the configuration of VRF, VLAN sub-interfaces, MP-BGP and static routes, and so on, on ASR 9000
router. Each Expanded Gold Tenant Container has one dedicated VRF for connection to private intranet
via L3VPN, and a shared Internet VRF. Private IP addresses are used on the private zone; the DMZ uses
public IP addresses.
vrf g008
address-family ipv4 unicast
import route-target
1:1008
export route-target
1:1008
!
interface Bundle-Ether 9.1008
description g008 private
vrf g008
ipv4 address 10.1.11.254 255.255.255.0

encapsulation dot1q 1008


!
interface Bundle-Ether 10.2998
description g008 internet
vrf internet
ipv4 address 11.1.8.254 255.255.255.252
encapsulation dot1q 2998
!
router bgp 200
vrf g008
rd 1:1008
address-family ipv4 unicast
redistribute static
!
router static
vrf g008
address-family ipv4 unicast
10.1.1.0/24 10.1.11.253
10.1.2.0/24 10.1.11.253
10.1.3.0/24 10.1.11.253
10.1.4.0/24 10.1.11.253
10.1.5.0/24 10.1.11.253
10.1.7.0/24 10.1.11.253
11.1.8.0/29 10.1.11.253
!
vrf internet
address-family ipv4 unicast
12.1.1.8/32 11.1.8.253
11.1.8.0/29 11.1.8.253
commit

Note

Static routes are used between ASR 9000 PE and the ASA firewalls. ASA device package version 1.0(1) does not support dynamic routing protocols.

Step 7

Create Filters and Contracts.
A contract (vz:BrCP) governs the communication between EPGs that are labeled providers, consumers, or both. EPGs can only communicate with other EPGs according to contract rules. A contract makes use of filters (vz:Filter), which are organized into one or more subjects (vz:Subj), to specify the type of traffic that can be communicated and how it occurs (unidirectional or bidirectional traffic). Filters are L2 to Layer 4 fields (TCP/IP header fields such as the L3 protocol type, Layer 4 ports, and so forth) used to categorize traffic flows.
For this implementation, four filters are used on the contracts to allow incoming traffic from the external bridged networks (three filters, http, https, and ssh, are created below; the icmp filter is inherited from the common tenant); SSH is only allowed from the private intranet.
<fvTenant name="g008">
<vzFilter name="http">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="http" dToPort="http"
/>
</vzFilter>
<vzFilter name="https">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="https"
dToPort="https" />
</vzFilter>
<vzFilter name="ssh">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="22" dToPort="22" />
</vzFilter>
<vzBrCP name="contract01">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="ssh" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract02">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="ssh" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract03">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="ssh" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
<vzBrCP name="pvt_ns_contract">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
<vzBrCP name="dmz_contract">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
<vzBrCP name="dmz_ns_contract">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
<vzBrCP name="inter_asa_contract">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
</fvTenant>

Step 8

Associate Contracts to EPGs.


The ACI Fabric security policy model is based on contracts. Contracts contain the specifications for
security policies that are enforced on traffic between EPGs. The relationship between an EPG and a
contract can be either a provider or consumer. When an EPG provides a contract, communication with
that EPG can be initiated from other EPGs as long as the communication complies with the provided
contract. When an EPG consumes a contract, the end-points in the consuming EPG may initiate
communication with any endpoint in an EPG that is providing that contract.
For the Expanded Gold Tenant Container, Table 8-3 lists the contract relationships that are used.


Table 8-3    Contract Relationships

Contract              Consumer EPG     Provider EPG
contract01            pvt_external     epg01
contract02            pvt_external     epg02
contract03            pvt_external     epg03
pvt_ns_contract       pvt_external     pvt_ns_epg
dmz_contract          dmz_external     dmz_epg
dmz_ns_contract       dmz_external     dmz_ns_epg
inter_asa_contract    pvt_external     dmz_epg

The following XML data structure sets up the consumer/provider relationships between EPGs and contracts.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsProv tnVzBrCPName="contract01" />
</fvAEPg>
<fvAEPg name="epg02">
<fvRsProv tnVzBrCPName="contract02" />
</fvAEPg>
<fvAEPg name="epg03">
<fvRsProv tnVzBrCPName="contract03" />
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<fvRsProv tnVzBrCPName="pvt_ns_contract" />
</fvAEPg>
</fvAp>
<fvAp name="app02">
<fvAEPg name="dmz_epg">
<fvRsProv tnVzBrCPName="dmz_contract" />
<fvRsProv tnVzBrCPName="inter_asa_contract" />
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<fvRsProv tnVzBrCPName="dmz_ns_contract" />
</fvAEPg>
</fvAp>
<l2extOut name="pvt_external">
<l2extInstP name="pvt_external">
<fvRsCons tnVzBrCPName="contract01" />
<fvRsCons tnVzBrCPName="contract02" />
<fvRsCons tnVzBrCPName="contract03" />
<fvRsCons tnVzBrCPName="pvt_ns_contract" />
<fvRsCons tnVzBrCPName="inter_asa_contract" />
</l2extInstP>
</l2extOut>
<l2extOut name="dmz_external">
<l2extInstP name="dmz_external">
<fvRsCons tnVzBrCPName="dmz_contract" />
<fvRsCons tnVzBrCPName="dmz_ns_contract" />
</l2extInstP>
</l2extOut>
</fvTenant>

Note

The contract relationships do not adhere to the generic 3-tier application traffic profile (external -> web tier -> app tier -> database tier). As of APIC version 1.0, a traffic redirection mechanism is not supported, and the ASA and NetScaler 1000v service devices are not aware of the ACI contracts. The generic 3-tier application traffic profile is enforced by the firewall rules on the ASAs. The ACI contracts are enforced only on traffic entering and leaving the pvt_external_bd and dmz_external_bd bridge domains from/to the ASR 9000 PE.

Step 9

Attach vNIC to EPG Port-Group.
APIC will not attach the vNIC of workload VMs to EPG backed VDS port-groups on vSphere; the
operation has to be done manually or programmatically on vSphere. The following shows the vSphere
PowerCLI Cmdlets to attach the first vNIC of a workload VM to an EPG backed VDS port-group.
$vdsPG = Get-VirtualSwitch -Distributed -Name "ics3_prod_vc" | Get-VirtualPortGroup -Name "g008|app01|epg01"
Get-VM -Name "g008-vm01" | Get-NetworkAdapter -Name "Network adapter 1" | Set-NetworkAdapter -NetworkName $vdsPG.Name -Confirm:$false

Step 10

Create ASA Security Contexts.


ASA security context configuration allocates physical resources to the contexts. Each ASA security
context is an independent virtual device, with its own security policy, interfaces, and administrators. Two
ASA security contexts are required for the Expanded Gold Tenant Container, one for private zone, and
one for DMZ.
ASA device package version 1.0(1) does not support a physical ASA operating in multi context mode (the virtual ASA does not support multi context mode). To use a physical ASA in multi context mode with APIC and the ACI Fabric:
1. The ASA security context must be created out of band (that is, the configuration is performed either manually or programmatically outside of APIC).
2. Each ASA security context must have its own management interface and IP address; the management interface must be reachable by APIC.
3. All data interfaces allocated to an ASA security context must be VLAN sub-interfaces belonging to the same physical interface.
4. The interface names of the data interfaces must not be mapped to other names; the physical main interface name must be retained within the ASA security context.
5. Some features and configuration are not supported in an ASA security context; refer to the ASA documentation, http://www.cisco.com/c/en/us/td/docs/security/asa/asa90/configuration/guide/asa_90_cli_config/ha_contexts.html#91406, for more details.
6. If clustering or active/standby failover is required for the physical ASA, the configuration must be performed out of band; APIC will have no awareness that the ASA is clustered or has failover enabled.
7. APIC is not aware of the ASA security contexts on the physical ASA; on APIC, each ASA security context is modeled as a physical ASA operating in single context mode.


Each ASA security context should have initial configuration to allow APIC access via the management
interface. The following shows the ASA configuration to create two ASA security contexts, allocate
physical resources, and make the initial configuration on the ASA security contexts to allow
management access by APIC.
changeto context system
conf t

interface port-channel2.3011
description g008-pvt
vlan 3011
!
interface port-channel2.3012
description g008-pvt
vlan 3012
!
interface port-channel2.3013
description g008-pvt
vlan 3013
!
interface port-channel2.3014
description g008-pvt
vlan 3014
!
interface port-channel2.3015
description g008-pvt
vlan 3015
!
interface port-channel2.3016
description g008-pvt
vlan 3016
!
interface port-channel2.3017
description g008-dmz
vlan 3017
!
interface port-channel2.3018
description g008-dmz
vlan 3018
!
interface port-channel2.3019
description g008-dmz
vlan 3019
!
interface port-channel2.3020
description g008-dmz
vlan 3020
!
context g008-pvt
allocate-interface Management0/1 management0
allocate-interface port-channel2.3011-port-channel2.3016
config-url disk0:/contexts/g008-pvt.cfg
!
context g008-dmz
allocate-interface Management0/1 management0
allocate-interface port-channel2.3017-port-channel2.3020
config-url disk0:/contexts/g008-dmz.cfg
!
end
write memory
!
changeto context g008-pvt
conf t
crypto key generate rsa modulus 2048
!
ip local pool mgmt-pool 10.0.32.114-10.0.32.115 mask 255.255.255.0
!
interface management0
management-only
nameif management
ip address 10.0.32.113 255.255.255.0 cluster-pool mgmt-pool
!


route management 10.0.0.0 255.255.0.0 10.0.32.1 1


route management 172.18.0.0 255.255.0.0 10.0.32.1 1
!
aaa authentication ssh console LOCAL
aaa authentication http console LOCAL
!
http server enable
http 10.0.0.0 255.255.0.0 management
http 172.18.0.0 255.255.0.0 management
!
ssh 10.0.0.0 255.255.0.0 management
ssh 172.18.0.0 255.255.0.0 management
ssh version 2
!
username admin password Cisco12345
username apic password Cisco12345
!
enable password Cisco12345
!
end
write memory
!
changeto context g008-dmz
conf t
crypto key generate rsa modulus 2048
!
ip local pool mgmt-pool 10.0.32.117-10.0.32.118 mask 255.255.255.0
!
interface management0
management-only
nameif management
ip address 10.0.32.116 255.255.255.0 cluster-pool mgmt-pool
!
route management 10.0.0.0 255.255.0.0 10.0.32.1 1
route management 172.18.0.0 255.255.0.0 10.0.32.1 1
!
aaa authentication ssh console LOCAL
aaa authentication http console LOCAL
!
http server enable
http 10.0.0.0 255.255.0.0 management
http 172.18.0.0 255.255.0.0 management
!
ssh 10.0.0.0 255.255.0.0 management
ssh 172.18.0.0 255.255.0.0 management
ssh version 2
!
username admin password Cisco12345
username apic password Cisco12345
!
enable password Cisco12345
!
end
write memory

The management interfaces of the ASA security contexts share the physical management interface with the system context. The interface name of the physical Management0/1 interface is mapped to management0 within the ASA security context. All data (sub-)interfaces allocated to the ASA security contexts are from the same physical main interface. The interface names of the data interfaces are not mapped.


Note

It is advisable to store the configurations of the ASA security contexts in a sub-directory of the flash device. On some ASA models, if the ASA security context configurations are stored in the root directory of the flash device, the root directory might run out of directory entry space, even though there is available space on the flash device. This is because some ASA models use the FAT 16 file system for the internal flash device. See http://support.microsoft.com/kb/120138/en-us for more details.

Step 11

Create VLAN Pools.
The VLAN pools (fvns:VlanInstP) define the range of VLAN IDs assigned to each ASA logical device.
Two VLAN pools are used, one pool for the VLANs allocated to the private zone ASA, and another pool
for the VLANs allocated to the DMZ ASA. The following XML data structure creates the VLAN pools; the VLAN ranges (fvns:EncapBlk) should match those configured for the ASA security contexts in the previous steps.
<infraInfra>
<fvnsVlanInstP name="g008_pvt_asa_pool" allocMode="dynamic">
<fvnsEncapBlk from="vlan-3011" to="vlan-3016" />
</fvnsVlanInstP>
<fvnsVlanInstP name="g008_dmz_asa_pool" allocMode="dynamic">
<fvnsEncapBlk from="vlan-3017" to="vlan-3020" />
</fvnsVlanInstP>
</infraInfra>

Step 12

Create Physical Domain for Private Zone ASA.


A physical domain (phys:DomP) comprises physical resources (ports, port-channels) and encap resources (VLAN/VXLAN pool) allowed for use by the EPGs or L4-L7 logical devices. The physical domain ensures that the ASA logical device uses the correct VLAN range.
The XML data structure below creates the physical domain for private zone ASA, and associates the
VLAN pool to the physical domain.
<physDomP name="g008_pvt_asa_phy">
<infraRsVlanNs tDn="uni/infra/vlanns-[g008_pvt_asa_pool]-dynamic" />
</physDomP>

Step 13

Create Logical Device for Private Zone ASA.


A logical device (vns:LDevVip, also known as a device cluster) is one or more concrete devices that act
as a single device. A logical device is addressed and managed through a management IP address that is
assigned to it. The service functions offered by the service device are always rendered on a logical
device. Typically, a logical device represents a cluster of service devices deployed in active-active or
active-standby HA mode.
The following XML data structure configures the logical device for the private zone ASA. The logical
device is associated with the physical domain, which specifies the VLAN range allocated to the ASA.

Note

A logical device has logical interfaces (vns:LIf), which describe the interface information for the logical
device. During service graph instantiation, function node connectors are associated with logical
interfaces. The XML data structure below does not create the logical interfaces for the logical device;
the logical interfaces are created with the XML data structure in the next step. This is done to minimize the number of faults APIC raises when the logical device is created.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa" contextAware="single-Context" devtype="PHYSICAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-CISCO-ASA-1.0.1" />
<vnsCMgmt host="10.0.32.113" port="443" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsRsALDevToPhysDomP tDn="uni/phys-g008_pvt_asa_phy" />
</vnsLDevVip>
</fvTenant>

Note

The username/password credential configured should have read/write administrative access to the ASA security context, as APIC will use the credential to push the configuration to the ASA.

Step 14

Create Concrete Device for Private Zone ASA.
A concrete device (vns:CDev) identifies an instance of a service device, which can be physical or virtual.
Each concrete device has its own management IP address for configuration and monitoring through the
APIC. A concrete device has concrete interfaces (vns:CIf); when a concrete device is added to a logical
device, concrete interfaces are mapped to the logical interfaces. During service graph instantiation,
VLANs (from the VLAN pool) are programmed on concrete interfaces based on their association with
logical interfaces.
The XML data structure below creates the concrete device, and the logical interfaces of the logical
device for the private zone ASA. With the physical ASA, the concrete device has only one concrete
interface (port-channel2 in this case); all logical interfaces are mapped to the same concrete interface.
APIC creates a VLAN sub-interface on the concrete interface for each logical interface, using the VLAN
IDs specified in the assigned VLAN pool.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa">
<vnsCDev name="asa01">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.113" port="443" />
<vnsCIf name="port-channel2">
<!-- this cIf is the main interface; APIC will create the
sub-interfaces based on the VLAN allocation -->
<vnsRsCIfPathAtt
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asa5585_data]" />
</vnsCIf>
</vnsCDev>
<vnsLIf name="pvt_outside">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external"
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_inside1">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal"
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_inside2">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal"
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_inside3">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal"
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_ns">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal"

/>

/>

/>

/>

/>

Intercloud Data Center Application Centric Infrastructure 1.0


Implementation Guide

8-23

Chapter 8

Expanded Gold Tenant Container

Summary of Steps

<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="pvt_inter_asa">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[port-channel2]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>

Note

The ASA device package does not support a physical ASA operating in multiple-context mode; see
CSCuq96552 for more details. When operating the physical ASA in multiple-context mode with APIC, each
ASA security context is modeled as a physical ASA operating in single-context mode. The ASA device
is deployed in standalone mode, that is, the logical device contains only one concrete device, and the
management IP address of the logical device and the concrete device is the same.

Step 15

Create Physical Domain for DMZ ASA.

The XML data structure below creates the physical domain for the DMZ ASA, and associates the VLAN
pool with the physical domain.
<physDomP name="g008_dmz_asa_phy">
<infraRsVlanNs tDn="uni/infra/vlanns-[g008_dmz_asa_pool]-dynamic" />
</physDomP>

Step 16

Create Logical Device for DMZ ASA.


The following XML data structure configures the logical device for the DMZ ASA. The logical device
is associated with the physical domain, which specifies the VLAN range allocated to the ASA.
<fvTenant name="g008">
<vnsLDevVip name="dmz_asa" contextAware="single-Context" devtype="PHYSICAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-CISCO-ASA-1.0.1" />
<vnsCMgmt host="10.0.32.116" port="443" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsRsALDevToPhysDomP tDn="uni/phys-g008_dmz_asa_phy" />
</vnsLDevVip>
</fvTenant>

Step 17

Create Concrete Device for DMZ ASA.


The XML data structure below creates the concrete device, and the logical interfaces of the logical
device for the DMZ ASA. With the physical ASA, the concrete device has only one concrete interface
(port-channel2 in this case); all logical interfaces are mapped to the same concrete interface. APIC
creates a VLAN sub-interface on the concrete interface for each logical interface, using the VLAN
IDs specified in the assigned VLAN pool.
<fvTenant name="g008">
<vnsLDevVip name="dmz_asa">
<vnsCDev name="asa02">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.116" port="443" />
<vnsCIf name="port-channel2">
<!-- this cIf is the main interface; APIC will create the
sub-interfaces based on the VLAN allocation -->
<vnsRsCIfPathAtt
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asa5585_data]" />
</vnsCIf>


</vnsCDev>
<vnsLIf name="dmz_outside">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external"
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa02/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="dmz_inside1">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal"
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa02/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="dmz_ns">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal"
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa02/cIf-[port-channel2]" />
</vnsLIf>
<vnsLIf name="dmz_inter_asa">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal"
<vnsRsCIfAtt
tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa02/cIf-[port-channel2]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>

Step 18


Configure ASA Logical Device Parameters.


The XML data structure below configures device-wide parameters, such as DNS domain, NTP server,
logging, and so on, for the ASA logical devices.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa">
<vnsDevFolder key="DNS" name="dns">
<vnsDevParam key="domain_name" name="domain_name"
value="aci.icdc.sdu.cisco.com" />
</vnsDevFolder>
<vnsDevFolder key="LoggingConfig" name="logging">
<vnsDevParam key="enable_logging" name="enable" value="enable" />
<vnsDevParam key="buffered_level" name="level" value="warnings" />
<vnsDevParam key="buffer_size" name="size" value="1048576" />
<vnsDevParam key="facility" name="facility" value="18" />
</vnsDevFolder>
</vnsLDevVip>
<vnsLDevVip name="dmz_asa">
<!-- ntp config not supported with asa security context -->
<vnsDevFolder key="DNS" name="dns">
<vnsDevParam key="domain_name" name="domain_name"
value="aci.icdc.sdu.cisco.com" />
</vnsDevFolder>
<vnsDevFolder key="LoggingConfig" name="logging">
<vnsDevParam key="enable_logging" name="enable" value="enable" />
<vnsDevParam key="buffered_level" name="level" value="warnings" />
<vnsDevParam key="buffer_size" name="size" value="1048576" />
<vnsDevParam key="facility" name="facility" value="18" />
</vnsDevFolder>
</vnsLDevVip>
</fvTenant>

APIC pushes the following configuration to each of the ASA service devices:
domain-name aci.icdc.sdu.cisco.com
dns server-group DefaultDNS
domain-name aci.icdc.sdu.cisco.com
!
logging enable


logging buffer-size 1048576
logging buffered warnings
logging facility 18

Note

NTP server configuration is not supported with ASA security context; NTP server for the physical ASA
must be configured on the ASA system context out of band.

Step 19

Deploy NetScaler 1000v Virtual Appliances on vSphere.
Four NetScaler 1000v virtual appliances are deployed per Expanded Gold Tenant Container: one HA pair
for the private zone and one HA pair for the DMZ. APIC does not deploy the NetScaler 1000v virtual
appliances on vSphere; the virtual appliances must be deployed out of band.
Once the NetScaler 1000v virtual appliances are deployed, the following initial configuration should be
made to allow management access by APIC.
add route 10.0.0.0 255.255.0.0 10.0.39.253
add route 172.18.0.0 255.255.0.0 10.0.39.253
rm route 0.0.0.0 0.0.0.0 10.0.39.253
!
set system user nsroot Cisco12345
add system user apic Cisco12345
bind system user apic superuser 100
add system user admin Cisco12345
bind system user admin superuser 100
!
save ns config
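If this bootstrap configuration is applied by script rather than from the NetScaler console, the sketch
below is one possible approach; it assumes SSH access to each appliance's management address with the
factory-default nsroot credentials and the Python paramiko library, none of which are mandated by this
implementation.

import paramiko

NSIP = "10.0.39.221"                     # management address of one NetScaler 1000v
USER, PASSWORD = "nsroot", "nsroot"      # assumed factory-default credentials

commands = [
    "add route 10.0.0.0 255.255.0.0 10.0.39.253",
    "add route 172.18.0.0 255.255.0.0 10.0.39.253",
    "rm route 0.0.0.0 0.0.0.0 10.0.39.253",
    "set system user nsroot Cisco12345",
    "add system user apic Cisco12345",
    "bind system user apic superuser 100",
    "add system user admin Cisco12345",
    "bind system user admin superuser 100",
    "save ns config",
]

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(NSIP, username=USER, password=PASSWORD)
for cmd in commands:
    stdin, stdout, stderr = ssh.exec_command(cmd)   # run one NetScaler CLI command
    print(cmd, "->", stdout.read().decode().strip())
ssh.close()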

Note

The management subnets for this implementation are 10.0.0.0/16 and 172.18.0.0/16. Static routes are
configured on the NetScaler 1000v to allow access to/from the management network. The default route
0.0.0.0/0 is added during the deployment of the NetScaler 1000v virtual appliance, and is set to point
toward the management network; this default route should be removed.

Step 20

Create Logical Device for Private Zone NetScaler 1000v.
The following XML data structure configures the logical device for the private zone NetScaler 1000v.
The logical device is associated with a VMM domain, which specifies the vSphere virtual datacenter
where the NetScaler 1000v virtual appliances reside.
<fvTenant name="g008">
<vnsLDevVip name="pvt_ns" contextAware="single-Context" devtype="VIRTUAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5" />
<vnsCMgmt host="10.0.39.225" port="80" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsRsALDevToDomP tDn="uni/vmmp-VMware/dom-ics3_prod_vc" />
<vnsDevFolder key="enableMode" name="modes">
<vnsDevParam key="L3" name="l3mode" value="ENABLE" />
<vnsDevParam key="DRADV" name="dradv" value="ENABLE" />
<vnsDevParam key="USNIP" name="usnip" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="enableFeature" name="features">
<vnsDevParam key="SSL" name="ssl" value="ENABLE" />
<vnsDevParam key="LB" name="lb" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="ntpserver" name="ntpserver1">
<vnsDevParam key="serverip" name="ip" value="172.18.114.20" />
<vnsDevParam key="preferredntpserver" name="preferred" value="YES" />
</vnsDevFolder>


</vnsLDevVip>
</fvTenant>

Note

The NetScaler features, modes, and NTP configurations are performed here to avoid a race condition. If
those configurations are performed right after the concrete devices are created (as was done for the
ASAs), some of those configurations might not get pushed to the NetScaler 1000v service devices.
The XML data structure above also configures modes, features, and NTP parameters for the NetScaler
1000v logical device, which is equivalent to the following configuration on the NetScaler 1000v:
enable ns feature LB SSL
enable ns mode L3 DRADV USNIP
add ntp server 172.18.114.20
set ntp server 172.18.114.20 -preferredNtpServer YES

Note

The NetScaler device package does not push the NTP configuration to the NetScaler 1000v; see Citrix
BUG0503304.

Step 21

Create Concrete Devices for Private Zone NetScaler 1000v.
The XML data structure below creates two concrete devices, and the logical interfaces of the logical
device for the private zone NetScaler 1000v. The private zone NetScaler 1000v is operating in one-arm
mode; both the inside and outside logical interfaces are mapped to the same concrete interface (interface 1/1
in this case, but referred to as 1_1 with the Citrix NetScaler 1000v device package).
<fvTenant name="g008">
<vnsLDevVip name="pvt_ns">
<vnsCDev name="ns01" vcenterName="ics3_vc_tenant_cluster" vmName="g008-ns01">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.39.221" port="80" />
<vnsCIf name="1_1" vnicName="Network adapter 2"/>
</vnsCDev>
<vnsCDev name="ns02" vcenterName="ics3_vc_tenant_cluster" vmName="g008-ns02">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.39.222" port="80" />
<vnsCIf name="1_1" vnicName="Network adapter 2"/>
</vnsCDev>
<vnsLIf name="outside">
<vnsRsMetaIf tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mIfLbl-outside"
/>
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_ns/cDev-ns01/cIf-[1_1]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_ns/cDev-ns02/cIf-[1_1]" />
</vnsLIf>
<vnsLIf name="inside">
<vnsRsMetaIf tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mIfLbl-inside" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_ns/cDev-ns01/cIf-[1_1]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_ns/cDev-ns02/cIf-[1_1]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>


The name of the VMM controller (the vcenterName attribute above is not the vCenter hostname or IP
address, but rather the VMM controller name of the VMM domain), the VM name, and the vNIC name of
the NetScaler 1000v virtual appliance are part of the concrete device configuration, so that APIC can
attach the appliance's vNIC to the shadow EPG-backed VDS port group created when the service graphs
are deployed.
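After the concrete devices and logical interfaces are created, the device cluster can be read back from
APIC to confirm that the objects exist and that no faults were raised against them. The Python sketch
below is one way to do this over the REST API; the APIC address and credentials are placeholders, as
before.

import requests

APIC = "https://10.0.32.1"               # placeholder APIC address
USER, PASSWORD = "admin", "password"     # placeholder credentials

session = requests.Session()
session.verify = False
login = "<aaaUser name='{0}' pwd='{1}'/>".format(USER, PASSWORD)
session.post(APIC + "/api/aaaLogin.xml", data=login).raise_for_status()

# Read the private zone NetScaler logical device (vns:LDevVip) and everything below it,
# including the vns:CDev and vns:LIf children created above.
dn = "uni/tn-g008/lDevVip-pvt_ns"
resp = session.get(APIC + "/api/node/mo/" + dn + ".xml",
                   params={"query-target": "subtree"})
resp.raise_for_status()
print(resp.text)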
Step 22

Enable HA for Private Zone NetScaler 1000v.


Figure 8-7 shows the active/standby HA setup of the private zone NetScaler 1000v, consisting of two
NetScaler 1000v virtual appliances. An algorithm determines the primary and secondary nodes after the
two NetScaler 1000v virtual appliances are configured as HA peers of each other. The management
interface 0/1 (referred to as 0_1 in the L4-L7 device parameters) is used to exchange HA information
between the HA peers. A subnet IP (SNIP), 10.0.39.225/24 from the management subnet with the mgmtAccess
option enabled, is configured on each concrete device. This SNIP serves as the HA cluster management IP
address, and is used as the management IP address of the NetScaler 1000v logical device.
Figure 8-7

NetScaler 1000v Active/Standby HA Setup

(The figure shows ns01 at 10.0.39.221 and ns02 at 10.0.39.222 on the 10.0.39.0/24 management network,
configured as HA peers of each other over interface 0/1 and sharing the cluster IP 10.0.39.225; ns02 is
the active unit and ns01 the standby unit.)

The XML data structure below configures the two NetScaler 1000v concrete devices as HA peers of each
other.
<fvTenant name="g008">
<vnsLDevVip name="pvt_ns">
<vnsCDev name="ns01">
<vnsDevFolder key="HAPeer" name="HAPeer">
<vnsDevParam key="ipaddress" name="ipaddress" value="10.0.39.222" />
<vnsDevParam key="id" name="id" value="1" />
</vnsDevFolder>
<vnsDevFolder key="HighAvailability" name="HighAvailability">
<vnsDevParam key="interface" name="interface" value="0_1" />
<vnsDevParam key="snip" name="snip" value="10.0.39.225" />
<vnsDevParam key="netmask" name="netmask" value="255.255.255.0" />
<vnsDevParam key="mgmtaccess" name="mgmtaccess" value="ENABLE" />
</vnsDevFolder>
</vnsCDev>
<vnsCDev name="ns02">
<vnsDevFolder key="HAPeer" name="HAPeer">
<vnsDevParam key="ipaddress" name="ipaddress" value="10.0.39.221" />
<vnsDevParam key="id" name="id" value="1" />
</vnsDevFolder>
<vnsDevFolder key="HighAvailability" name="HighAvailability">
<vnsDevParam key="interface" name="interface" value="0_1" />
<vnsDevParam key="snip" name="snip" value="10.0.39.225" />
<vnsDevParam key="netmask" name="netmask" value="255.255.255.0" />
<vnsDevParam key="mgmtaccess" name="mgmtaccess" value="ENABLE" />
</vnsDevFolder>
</vnsCDev>
</vnsLDevVip>
</fvTenant>

The XML data structure above causes APIC to push the following configuration to ns01:


add HA node 1 10.0.39.222
add ns ip 10.0.39.225 255.255.255.0 -vServer DISABLED -mgmtAccess ENABLED

And to ns02:
add HA node 1 10.0.39.221
add ns ip 10.0.39.225 255.255.255.0 -vServer DISABLED -mgmtAccess ENABLED

Step 23

Create Logical Device for DMZ NetScaler 1000v.


The following XML data structure configures the logical device for the DMZ NetScaler 1000v.
<fvTenant name="g008">
<vnsLDevVip name="dmz_ns" contextAware="single-Context" devtype="VIRTUAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5" />
<vnsCMgmt host="10.0.39.226" port="80" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsRsALDevToDomP tDn="uni/vmmp-VMware/dom-ics3_prod_vc" />
<vnsDevFolder key="enableMode" name="modes">
<vnsDevParam key="L3" name="l3mode" value="ENABLE" />
<vnsDevParam key="DRADV" name="dradv" value="ENABLE" />
<vnsDevParam key="USNIP" name="usnip" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="enableFeature" name="features">
<vnsDevParam key="SSL" name="ssl" value="ENABLE" />
<vnsDevParam key="LB" name="lb" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="ntpserver" name="ntpserver1">
<vnsDevParam key="serverip" name="ip" value="172.18.114.20" />
<vnsDevParam key="preferredntpserver" name="preferred" value="YES" />
</vnsDevFolder>
</vnsLDevVip>
</fvTenant>

Step 24

Create Concrete Devices for DMZ NetScaler 1000v.


The XML data structure below creates two concrete devices and the logical interfaces of the logical
device for the DMZ NetScaler 1000v. The DMZ NetScaler 1000v is operating in one-arm mode; both
the inside and outside logical interfaces are mapped to the same concrete interface (interface 1/1 in this case,
but referred to as 1_1 with the Citrix NetScaler 1000v device package).
<fvTenant name="g008">
<vnsLDevVip name="dmz_ns">
<vnsCDev name="ns03" vcenterName="ics3_vc_tenant_cluster" vmName="g008-ns03">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.39.223" port="80" />
<vnsCIf name="1_1" vnicName="Network adapter 2"/>
</vnsCDev>
<vnsCDev name="ns04" vcenterName="ics3_vc_tenant_cluster" vmName="g008-ns04">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.39.224" port="80" />
<vnsCIf name="1_1" vnicName="Network adapter 2"/>
</vnsCDev>
<vnsLIf name="outside">
<vnsRsMetaIf tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mIfLbl-outside"
/>
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_ns/cDev-ns03/cIf-[1_1]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_ns/cDev-ns04/cIf-[1_1]" />
</vnsLIf>
<vnsLIf name="inside">
<vnsRsMetaIf tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mIfLbl-inside" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_ns/cDev-ns03/cIf-[1_1]" />


<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_ns/cDev-ns04/cIf-[1_1]" />


</vnsLIf>
</vnsLDevVip>
</fvTenant>

Step 25

Enable HA for DMZ NetScaler 1000v.


The XML data structure below configures the two NetScaler 1000v concrete devices as HA peers of each
other. The management interface 0/1 (referred to as 0_1 in the L4-L7 device parameters) is used to exchange
HA information between the HA peers. A subnet IP (SNIP), 10.0.39.226/24 from the management subnet
with the mgmtAccess option enabled, is configured on each concrete device. This SNIP serves as the HA
cluster management IP address, and is used as the management IP address of the NetScaler 1000v logical
device.
<fvTenant name="g008">
<vnsLDevVip name="dmz_ns">
<vnsCDev name="ns03">
<vnsDevFolder key="HAPeer" name="HAPeer">
<vnsDevParam key="ipaddress" name="ipaddress" value="10.0.39.224" />
<vnsDevParam key="id" name="id" value="1" />
</vnsDevFolder>
<vnsDevFolder key="HighAvailability" name="HighAvailability">
<vnsDevParam key="interface" name="interface" value="0_1" />
<vnsDevParam key="snip" name="snip" value="10.0.39.226" />
<vnsDevParam key="netmask" name="netmask" value="255.255.255.0" />
<vnsDevParam key="mgmtaccess" name="mgmtaccess" value="ENABLE" />
</vnsDevFolder>
</vnsCDev>
<vnsCDev name="ns04">
<vnsDevFolder key="HAPeer" name="HAPeer">
<vnsDevParam key="ipaddress" name="ipaddress" value="10.0.39.223" />
<vnsDevParam key="id" name="id" value="1" />
</vnsDevFolder>
<vnsDevFolder key="HighAvailability" name="HighAvailability">
<vnsDevParam key="interface" name="interface" value="0_1" />
<vnsDevParam key="snip" name="snip" value="10.0.39.226" />
<vnsDevParam key="netmask" name="netmask" value="255.255.255.0" />
<vnsDevParam key="mgmtaccess" name="mgmtaccess" value="ENABLE" />
</vnsDevFolder>
</vnsCDev>
</vnsLDevVip>
</fvTenant>

The XML data structure above causes APIC to push the following configuration to ns03:
add HA node 1 10.0.39.224
add ns ip 10.0.39.226 255.255.255.0 -vServer DISABLED -mgmtAccess ENABLED

And to ns04:
add HA node 1 10.0.39.223
add ns ip 10.0.39.226 255.255.255.0 -vServer DISABLED -mgmtAccess ENABLED

Step 26

Create Service Graph Templates.


A service graph template (vns:AbsGraph) identifies the set of network or service functions that are needed
by an application; each function is represented as a node (vns:AbsNode) in the service graph template.
A service graph is inserted between consumer and provider EPGs by a contract. Figure 8-8, Figure 8-9,
and Figure 8-10 show the three service graph templates required by the Expanded Gold Tenant Container.


Figure 8-8

Service Graph Template with ASA Firewall

Figure 8-9

Service Graph Template with ASA Firewall & Citrix Load Balancing Function Nodes

Figure 8-10

Service Graph Template with Two ASA Firewall Function Nodes

Note

ASA device package version 1.0(1) supports only the firewall function. The NetScaler 1000v
device package version 10.5 specifies a number of functions, but only the LoadBalancing function is
officially supported.
The following XML data structure creates the service graph template with one ASA firewall function
node. The connection on the service graph is configured with L2 adjacency type and unicast routing
disabled, since the ACI Fabric provides only L2 forwarding service.
<fvTenant name="g008">
<vnsAbsGraph name="single_asa_graph">
<vnsAbsNode name="asa_fw" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall" />
</vnsAbsNode>
<vnsAbsTermNodeCon name="consumer">
<vnsAbsTermConn name="1" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeCon>
<vnsAbsTermNodeProv name="provider">
<vnsAbsTermConn name="2" />


<vnsInTerm name="input-terminal" />


<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeProv>
<vnsAbsConnection name="connection1" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-single_asa_graph/AbsNode-asa_fw/AbsFConn-external" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-single_asa_graph/AbsTermNodeCon-consumer/AbsTConn" />
</vnsAbsConnection>
<vnsAbsConnection name="connection2" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-single_asa_graph/AbsNode-asa_fw/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-single_asa_graph/AbsTermNodeProv-provider/AbsTConn" />
</vnsAbsConnection>
</vnsAbsGraph>
</fvTenant>

The following XML data structure creates the service graph template with an ASA firewall function
node and a NetScaler load balancing function node.
<fvTenant name="g008">
<vnsAbsGraph name="asa_ns_graph">
<vnsAbsNode name="asa_fw" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall" />
</vnsAbsNode>
<vnsAbsNode name="slb" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing" />
</vnsAbsNode>
<vnsAbsTermNodeCon name="consumer">
<vnsAbsTermConn name="1" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeCon>
<vnsAbsTermNodeProv name="provider">
<vnsAbsTermConn name="2" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeProv>
<vnsAbsConnection name="connection1" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsNode-asa_fw/AbsFConn-external" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsTermNodeCon-consumer/AbsTConn" />


</vnsAbsConnection>
<vnsAbsConnection name="connection2" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsNode-asa_fw/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsNode-slb/AbsFConn-external" />
</vnsAbsConnection>
<vnsAbsConnection name="connection3" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsNode-slb/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-asa_ns_graph/AbsTermNodeProv-provider/AbsTConn" />
</vnsAbsConnection>
</vnsAbsGraph>
</fvTenant>

The following XML data structure creates the service graph template with two ASA firewall function
nodes.
<fvTenant name="g008">
<vnsAbsGraph name="dual_asa_graph">
<vnsAbsNode name="pvt_asa" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall" />
</vnsAbsNode>
<vnsAbsNode name="dmz_asa" funcType="GoTo">
<vnsAbsFuncConn name="external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name="internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToMFunc tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mFunc-Firewall" />
</vnsAbsNode>
<vnsAbsTermNodeCon name="consumer">
<vnsAbsTermConn name="1" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeCon>
<vnsAbsTermNodeProv name="provider">
<vnsAbsTermConn name="2" />
<vnsInTerm name="input-terminal" />
<vnsOutTerm name="output-terminal" />
</vnsAbsTermNodeProv>
<vnsAbsConnection name="connection1" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsNode-pvt_asa/AbsFConn-external" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsTermNodeCon-consumer/AbsTConn" />
</vnsAbsConnection>
<vnsAbsConnection name="connection2" adjType="L2" unicastRoute="no"
connType="external">


<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsNode-pvt_asa/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsNode-dmz_asa/AbsFConn-external" />
</vnsAbsConnection>
<vnsAbsConnection name="connection3" adjType="L2" unicastRoute="no"
connType="external">
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsNode-dmz_asa/AbsFConn-internal" />
<vnsRsAbsConnectionConns
tDn="uni/tn-g008/AbsGraph-dual_asa_graph/AbsTermNodeProv-provider/AbsTConn" />
</vnsAbsConnection>
</vnsAbsGraph>
</fvTenant>

Note


The XML data structures for the service graphs of the Expanded Gold Tenant Container were created on
APIC version 1.0(1k). For APIC version 1.0(2j) and above, APIC requires the service graph to be
associated with a function profile.
The XML data structures presented above (without the function profile association) still work for creating
the Expanded Gold Tenant Container service graphs on APIC version 1.0(2x) when configured through the
REST API, but using a function profile going forward is recommended, and it is required when configuring
via the APIC GUI.
Refer to the Silver tenant configuration in ASR 9000 Tenant Configuration for IBGP as Provider
Edge-Customer Edge Routing Protocol, page 6-6, for function profile-based service graphs.

Step 27

Configure ASA Network and Service Objects.


Objects and object groups are reusable components of the ASA configuration that make the configuration
easier to maintain. They can be defined and then used in place of inline IP addresses, services, names,
and so on. The ASA device package requires the use of objects or object groups when configuring access
control lists (ACLs) or network address translation (NAT) rules.
The following XML data structure creates the network objects and service object groups that identify the
IP subnets, NetScaler vServer IP addresses, and services for the Expanded Gold Tenant Container. The
L4-L7 service parameters for the objects/groups are configured on the tenant MO, with the
ctrctNameOrLbl, graphNameOrLbl, and nodeNameOrLbl attributes (representing the contract, service graph,
and function node respectively) set to any, which makes the L4-L7 service parameters applicable to all
service graph instances.
<fvTenant name="g008">
<vnsFolderInst key="NetworkObject" name="inside1_subnet" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="network_ip_address" name="network_ip_address"
value="10.1.1.0/255.255.255.0" />
</vnsFolderInst>
<vnsFolderInst key="NetworkObject" name="inside2_subnet" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="network_ip_address" name="network_ip_address"
value="10.1.2.0/255.255.255.0" />
</vnsFolderInst>
<vnsFolderInst key="NetworkObject" name="inside3_subnet" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="network_ip_address" name="network_ip_address"
value="10.1.3.0/255.255.255.0" />
</vnsFolderInst>
<vnsFolderInst key="NetworkObject" name="epg01_vip" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">


<vnsParamInst key="host_ip_address" name="host_ip_address" value="10.1.4.111"


/>
</vnsFolderInst>
<vnsFolderInst key="NetworkObject" name="epg02_vip" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="host_ip_address" name="host_ip_address" value="10.1.4.112"
/>
</vnsFolderInst>
<vnsFolderInst key="NetworkObject" name="epg03_vip" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="host_ip_address" name="host_ip_address" value="10.1.4.113"
/>
</vnsFolderInst>
<vnsFolderInst key="NetworkObject" name="dmz_subnet" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="network_ip_address" name="network_ip_address"
value="11.1.8.0/255.255.255.248" />
</vnsFolderInst>
<vnsFolderInst key="NetworkObject" name="dmz_vip" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="host_ip_address" name="host_ip_address" value="10.1.7.111"
/>
</vnsFolderInst>
<vnsFolderInst key="NetworkObject" name="public_dmz_vip" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="host_ip_address" name="host_ip_address" value="12.1.1.8" />
</vnsFolderInst>
<vnsFolderInst key="ServiceObjectGroup" name="web_https" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsFolderInst key="tcp" name="web" ctrctNameOrLbl="any" graphNameOrLbl="any"
nodeNameOrLbl="any">
<vnsFolderInst key="destination" name="destination" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="80" />
<vnsParamInst key="high_port" name="high_port" value="80" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="tcp" name="https" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsFolderInst key="destination" name="destination" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="443" />
<vnsParamInst key="high_port" name="high_port" value="443" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="ServiceObjectGroup" name="web_https_mysql"
ctrctNameOrLbl="any" graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsFolderInst key="tcp" name="web" ctrctNameOrLbl="any" graphNameOrLbl="any"
nodeNameOrLbl="any">
<vnsFolderInst key="destination" name="destination" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="80" />
<vnsParamInst key="high_port" name="high_port" value="80" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="tcp" name="https" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsFolderInst key="destination" name="destination" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="operator" name="operator" value="eq" />


<vnsParamInst key="low_port" name="low_port" value="443" />


<vnsParamInst key="high_port" name="high_port" value="443" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="tcp" name="mysql" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsFolderInst key="destination" name="destination" ctrctNameOrLbl="any"
graphNameOrLbl="any" nodeNameOrLbl="any">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="3306" />
<vnsParamInst key="high_port" name="high_port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvTenant>

The XML data structure above configures the following CLI equivalent on the ASA service devices:
object network inside1_subnet
subnet 10.1.1.0 255.255.255.0
object network inside2_subnet
subnet 10.1.2.0 255.255.255.0
object network inside3_subnet
subnet 10.1.3.0 255.255.255.0
object network dmz_subnet
subnet 11.1.8.0 255.255.255.248
object network epg01_vip
host 10.1.4.111
object network epg02_vip
host 10.1.4.112
object network epg03_vip
host 10.1.4.113
object network dmz_vip
host 10.1.7.111
object network public_dmz_vip
host 12.1.1.8
!
object-group service web_https
service-object tcp destination eq www
service-object tcp destination eq https
object-group service web_https_mysql
service-object tcp destination eq www
service-object tcp destination eq https
service-object tcp destination eq 3306

Step 28

Create Logical Device Contexts for Private Zone ASA.


The logical device context (vns:LDevCtx, also known as the device selection policy) specifies the policy
for selecting a logical device based on the contract, service graph template, and function node. The policy
also maps the function node connectors to the logical device's interfaces and the bridge domain for each
interface. Table 8-4 shows the logical device contexts for the private zone ASA of the Expanded Gold
Tenant Container.


Table 8-4

Logical Device Contexts

Logical Device Context                       Logical   Connector  Logical Interface/
(contract/graph_template/function_node)      Device               Bridge Domain

contract01/single_asa_graph/asa_fw           pvt_asa   external   pvt_outside/pvt_external_bd
                                                       internal   pvt_inside1/bd01

contract02/single_asa_graph/asa_fw           pvt_asa   external   pvt_outside/pvt_external_bd
                                                       internal   pvt_inside2/bd02

contract03/single_asa_graph/asa_fw           pvt_asa   external   pvt_outside/pvt_external_bd
                                                       internal   pvt_inside3/bd03

pvt_ns_contract/asa_ns_graph/asa_fw          pvt_asa   external   pvt_outside/pvt_external_bd
                                                       internal   pvt_ns/pvt_ns_bd

pvt_ns_contract/asa_ns_graph/slb             pvt_ns    external   outside/pvt_ns_bd
                                                       internal   inside/pvt_ns_bd

Note

Table 8-4 shows the mapping of multiple logical device contexts (hence multiple service graph instances)
to the same logical device, with the connectors of each logical device context mapped to different logical
interfaces, effectively creating a service device with more than two interfaces.

Note

The connectors in Table 8-4 are the names of the function node connectors configured in the respective
service graph template.

The following XML data structure creates the logical device contexts specified in the table above.
<fvTenant name="g008">
<vnsLDevCtx ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-bd01" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_inside1" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-bd02" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_inside2" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_outside" />
</vnsLIfCtx>
</vnsLDevCtx>



<vnsLDevCtx ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph"


nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-bd03" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_inside3" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_ns" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_ns" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_ns/lIf-inside" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_ns/lIf-outside" />
</vnsLIfCtx>
</vnsLDevCtx>
</fvTenant>

The name attribute for each of the logical device contexts is not specified in the XML data structure
above. Unlike other MOs, in which the name attribute of the MO is used as part of the DN (such as
uni/tn-g008, where g008 is the name of the tenant), the name attribute is not mandatory for the logical
device context MO; instead, APIC automatically constructs the DN of the logical device context MO in
the following format:
uni/tn-{tenant}/ldevCtx-c-{contract}-g-{service_graph}-n-{function_node}
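As a quick illustration of that format (a sketch only, using names already defined in this chapter), the DN
that APIC derives for the contract01 context above can be reproduced as follows:

def ldev_ctx_dn(tenant, contract, graph, node):
    # Builds a logical device context DN following the format documented above.
    return "uni/tn-{0}/ldevCtx-c-{1}-g-{2}-n-{3}".format(tenant, contract, graph, node)

print(ldev_ctx_dn("g008", "contract01", "single_asa_graph", "asa_fw"))
# prints: uni/tn-g008/ldevCtx-c-contract01-g-single_asa_graph-n-asa_fw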

Step 29

Modeling Private Zone ASA with L4-L7 Service Parameters.


The private zone ASA provides security protection for the VM endpoints in the private zone. Figure 8-11
shows the private zone ASA configuration that will be modeled with the L4-L7 service parameters on
APIC.


Figure 8-11

Private Zone ASA Configuration

(The figure shows the private zone ASA (pvt_asa) with its interface names, IP addresses, and security
levels: pvt_outside_if 10.1.11.253, security level 30, facing the ASR 9000 at 10.1.11.254; pvt_inside1_if
10.1.1.253, security level 50, for the web subnet 10.1.1.0/24; pvt_inside2_if 10.1.2.253, security level 60,
for the app subnet 10.1.2.0/24; pvt_inside3_if 10.1.3.253, security level 70, for the database subnet
10.1.3.0/24; and pvt_ns_if 10.1.4.253, security level 40, for the SLB subnet 10.1.4.0/24. Ingress ACLs are
applied to each interface: the outside and SLB interfaces permit ICMP and SSH, with HTTP/HTTPS allowed
from the outside to the web and DMZ subnets/VIPs and HTTP/HTTPS/MySQL allowed from the SLB interface
to the web, app, and database subnets; the web and app interfaces permit ICMP plus HTTP/HTTPS/MySQL to
the next tier's subnet/VIP; the database interface permits ICMP only. A static route to 10.0.0.0/8 points
to the ASR 9000.)

The private zone ASA has one outside interface and three inside interfaces to support a generic 3-tier
application, as well as an interface for hosting the private zone NetScaler 1000v operating in one-arm
mode. Private IP addresses from subnets of the 10.1.0.0/16 supernet are assigned to all ASA
interfaces. It is assumed that the private intranet reachable via the ASR 9000 PE consists of subnets of
the 10.0.0.0/8 supernet. Static routing is used, as the ASA device package does not support dynamic
routing protocols. Security access control lists (ACLs) are attached to all ASA interfaces in the ingress
direction to filter application traffic in accordance with the generic 3-tier application traffic profile:
external -> web tier -> app tier -> database tier. To ease troubleshooting, ICMP packets are allowed
to/from all ASA interfaces.
The following L4-L7 service parameters are configured:

Interfaces' IP address and security level

Static routes

Security access control lists

Attach access control lists to interfaces

Interfaces' IP Address and Security Level


The XML data structure below configures the L4-L7 service parameters to model the ASA interface
name, IP address and security level on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<vnsFolderInst key="Interface" name="pvt_outside_if" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.11.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level" value="30" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="ExIntfConfigRelFolder" name="extConfig"
ctrctNameOrLbl="any" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">


<vnsCfgRelInst key="ExIntfConfigRel" name="extConfigRel"


targetName="pvt_outside_if" />
</vnsFolderInst>
<fvAEPg name="epg01">
<vnsFolderInst key="Interface" name="pvt_inside1_if"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.1.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level"
value="50" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="pvt_inside1_if" />
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg02">
<vnsFolderInst key="Interface" name="pvt_inside2_if"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.2.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level"
value="60" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="pvt_inside2_if" />
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg03">
<vnsFolderInst key="Interface" name="pvt_inside3_if"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.3.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level"
value="70" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="pvt_inside3_if" />
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="Interface" name="pvt_ns_if"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.4.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level"
value="40" />
</vnsFolderInst>


</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="pvt_ns_if" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The ASA outside interface, pvt_outside_if, is the common interface of all the service graph instances.
The L4-L7 service parameters for the outside interface are configured on the application profile MO, with
the contract (ctrctNameOrLbl attribute) set to any to allow any contract to pick up the parameters.
The L4-L7 service parameters for the other ASA interfaces are configured on the EPG MO, with the
ctrctNameOrLbl attribute set to the contract that instantiates the service graph.
The XML data structure above configures the following CLI equivalent on the ASA service device:
interface Port-channel2.3013
nameif pvt_outside_if
security-level 30
ip address 10.1.11.253 255.255.255.0
!
interface Port-channel2.3012
nameif pvt_inside1_if
security-level 50
ip address 10.1.1.253 255.255.255.0
!
interface Port-channel2.3011
nameif pvt_inside2_if
security-level 60
ip address 10.1.2.253 255.255.255.0
!
interface Port-channel2.3015
nameif pvt_inside3_if
security-level 70
ip address 10.1.3.253 255.255.255.0
!
interface Port-channel2.3014
nameif pvt_ns_if
security-level 40
ip address 10.1.4.253 255.255.255.0

Note

APIC randomly assigns VLAN IDs to the ASA named interfaces during service graph instantiation, using
the VLAN IDs in the VLAN pool assigned to the ASA logical device. For example, the interface
pvt_inside2_if was assigned VLAN ID 3011 in the CLI output above; if the service graph is re-instantiated
(by disassociating the graph from the contract and re-associating it), pvt_inside2_if might be assigned
another VLAN ID.
Static Routes

The XML data structure below configures the L4-L7 service parameters to model the ASA static route
on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<vnsFolderInst key="Interface" name="pvt_outside_if" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="StaticRoute" name="staticRoute" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">


<vnsFolderInst key="route" name="route01" ctrctNameOrLbl="any"


graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="network" name="network" value="10.0.0.0" />
<vnsParamInst key="netmask" name="netmask" value="255.0.0.0" />
<vnsParamInst key="gateway" name="gateway" value="10.1.11.254" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the ASA service device:
route pvt_outside_if 10.0.0.0 255.0.0.0 10.1.11.254 1

Security Access Control Lists

The XML data structure below configures the L4-L7 service parameters to model the ASA security
access control lists on APIC. The configuration makes use of the network and service objects/groups that
are created in Step 27, Configure ASA Network and Service Objects, page 8-34.
<fvTenant name="g008">
<fvAp name="app01">
<vnsFolderInst key="AccessList" name="pvt_outside_if_acl"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp" ctrctNameOrLbl="contract01"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_ssh"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="tcp" />
</vnsFolderInst>
<vnsFolderInst key="destination_service" name="destination_service"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="22" />
<vnsParamInst key="high_port" name="high_port" value="22" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside1_subnet" />
</vnsFolderInst>
</vnsFolderInst>


<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01_vip"


ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="40" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="epg01_vip" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="50" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz_vip"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="60" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
<fvAEPg name="epg01">
<vnsFolderInst key="AccessList" name="pvt_inside1_if_acl"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg02"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">


<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="contract01"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside2_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg02_vip"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="contract01"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="epg02_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg02">
<vnsFolderInst key="AccessList" name="pvt_inside2_if_acl"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg03"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="contract02"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside3_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg03_vip"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />

</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="contract02"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="epg03_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg03">
<vnsFolderInst key="AccessList" name="pvt_inside3_if_acl"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="AccessList" name="pvt_ns_if_acl"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside1_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg02"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">

<vnsCfgRelInst key="object_name" name="object_name"


targetName="inside2_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg03"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="40" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside3_subnet" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the ASA service device:
access-list pvt_outside_if_acl extended permit icmp any any
access-list pvt_outside_if_acl extended permit tcp any any eq ssh
access-list pvt_outside_if_acl extended permit object-group web_https any object
inside1_subnet
access-list pvt_outside_if_acl extended permit object-group web_https any object
epg01_vip
access-list pvt_outside_if_acl extended permit object-group web_https any object
dmz_subnet
access-list pvt_outside_if_acl extended permit object-group web_https any object
dmz_vip
!
access-list pvt_inside1_if_acl extended permit icmp any any
access-list pvt_inside1_if_acl extended permit object-group web_https_mysql any object
inside2_subnet
access-list pvt_inside1_if_acl extended permit object-group web_https_mysql any object
epg02_vip
!
access-list pvt_inside2_if_acl extended permit icmp any any
access-list pvt_inside2_if_acl extended permit object-group web_https_mysql any object
inside3_subnet
access-list pvt_inside2_if_acl extended permit object-group web_https_mysql any object
epg03_vip
!
access-list pvt_inside3_if_acl extended permit icmp any any
!
access-list pvt_ns_if_acl extended permit icmp any any
access-list pvt_ns_if_acl extended permit object-group web_https_mysql any object
inside1_subnet
access-list pvt_ns_if_acl extended permit object-group web_https_mysql any object
inside2_subnet
access-list pvt_ns_if_acl extended permit object-group web_https_mysql any object
inside3_subnet
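
Each of the XML data structures in this chapter is posted to APIC through the REST API. The sketch below shows one way to do that with Python; it is illustrative only, and the APIC address, credentials, and file name are placeholder assumptions rather than values from this implementation.

# Hypothetical helper, not part of the guide: push a tenant XML payload to APIC.
# The APIC address, credentials, and file name below are placeholder assumptions.
import requests

APIC = "https://apic.example.com"      # assumed APIC management address
USER, PASSWORD = "admin", "password"   # assumed credentials

session = requests.Session()

# Authenticate; APIC returns a session cookie that the Session object keeps.
login = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()

# Post the <fvTenant name="g008"> payload under the policy universe (uni);
# APIC merges the L4-L7 service parameters into the existing tenant tree.
with open("g008_l4l7_params.xml") as f:
    payload = f.read()
response = session.post(f"{APIC}/api/mo/uni.xml", data=payload,
                        headers={"Content-Type": "application/xml"}, verify=False)
response.raise_for_status()
print(response.text)   # <imdata> reply; error attributes indicate a rejected post

The same pattern applies to every XML data structure shown in this section.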

Attach Access Control Lists to Interfaces

The XML data structure below configures the L4-L7 service parameters to attach security access control
lists to ASA interfaces. Each ASA interface has an ingress security access control list attached.


<fvTenant name="g008">
<fvAp name="app01">
<vnsFolderInst key="Interface" name="pvt_outside_if"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_outside_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
<fvAEPg name="epg01">
<vnsFolderInst key="Interface" name="pvt_inside1_if"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="contract01" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_inside1_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg02">
<vnsFolderInst key="Interface" name="pvt_inside2_if"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="contract02" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_inside2_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="epg03">
<vnsFolderInst key="Interface" name="pvt_inside3_if"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="contract03" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_inside3_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="Interface" name="pvt_ns_if"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="accessGroup"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_ns_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the ASA service device:
access-group pvt_outside_if_acl in interface pvt_outside_if
access-group pvt_inside1_if_acl in interface pvt_inside1_if
access-group pvt_inside2_if_acl in interface pvt_inside2_if
access-group pvt_inside3_if_acl in interface pvt_inside3_if
access-group pvt_ns_if_acl in interface pvt_ns_if

Step 30    Modeling Private Zone NetScaler 1000v with L4-L7 Service Parameters.


The NetScaler 1000v in the private zone is configured to load balance application traffic such as HTTP
and MySQL. The private zone NetScaler 1000v is configured in one-arm mode with only a single data
interface. It load balances application traffic for the web, application, and database tiers, and each
tier has its own vServer IP. The following L4-L7 service parameters are configured:

Subnet IP address

Static routes

Service groups and vServers

Note

SSL offload configuration of the NetScaler 1000v is not implemented here. Refer to Silver Tenant
Container for details of modeling SSL offload on NetScaler 1000v with L4-L7 service parameters.

Subnet IP Address

Only a single SNIP is configured for the data interface of the private zone NetScaler 1000v. The SNIP
is used for health monitoring, and as the source IP address to proxy client connections to the real
servers/VMs. The XML data structure below configures the L4-L7 service parameters to model the
NetScaler 1000v SNIP on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="Network" name="network"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsFolderInst key="nsip" name="snip" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ipaddress" name="ip" value="10.1.4.21" />
<vnsParamInst key="netmask" name="netmask" value="255.255.255.0"
/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="internal_network" name="snip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="internal_network_key" name="snip_key"
targetName="network/snip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add vlan 1508
add ns ip 10.1.4.21 255.255.255.0 -vServer DISABLED
bind vlan 1508 -ifnum 1/1
bind vlan 1508 -IPAddress 10.1.4.21 255.255.255.0

Note

The NetScaler 1000v is a virtual appliance hosted on the VMM domain configured on the logical device.
When the service graphs are instantiated, APIC selects VLAN ID 1508 at random from the VLAN pool
assigned to the VMM domain. The interface ID 1/1 is the concrete interface configured on the concrete
device.


Static Routes

Because the private zone NetScaler 1000v is configured in one-arm mode with only a single data
interface, only a single default route is required. The XML data structure below configures the
L4-L7 service parameters to model the NetScaler 1000v default route on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="Network" name="network"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="gateway" name="gateway" value="10.1.4.253" />
<vnsParamInst key="netmask" name="netmask" value="0.0.0.0" />
<vnsParamInst key="network" name="network" value="0.0.0.0" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="external_route" name="ext_route"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="external_route_rel" name="ext_route_rel"
targetName="network/route01" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add route 0.0.0.0 0.0.0.0 10.1.4.253

Note

The NetScaler 1000v virtual appliance actually has two interfaces: one for management-only traffic and
one for tenant data traffic. The default route is configured for the data interface. Static routes for
the management interface are configured out of band.
Service Groups and vServers

Table 8-5 shows the vServer IPs, service groups and real servers for the private zone NetScaler 1000v.
The same vServer IP address is used for both HTTP and MySQL services on each tier of the 3-tier
application.
Table 8-5    Service Groups and vServers

vServer IP   Service / Port   Service Group        Real Servers
10.1.4.111   HTTP / 80        web_service_grp1     10.1.1.11, 10.1.1.12, 10.1.1.13
10.1.4.112   HTTP / 80        web_service_grp2     10.1.2.11, 10.1.2.12, 10.1.2.13
10.1.4.113   HTTP / 80        web_service_grp3     10.1.3.11, 10.1.3.12, 10.1.3.13
10.1.4.111   MySQL / 3306     mysql_service_grp1   10.1.1.11, 10.1.1.12, 10.1.1.13
10.1.4.112   MySQL / 3306     mysql_service_grp2   10.1.2.11, 10.1.2.12, 10.1.2.13
10.1.4.113   MySQL / 3306     mysql_service_grp3   10.1.3.11, 10.1.3.12, 10.1.3.13
The XML data structure below configures the L4-L7 service parameters to model the server load
balancing of the HTTP service on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="lbmonitor" name="http_mon"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="monitorname" name="monitorname" value="http_mon" />
<vnsParamInst key="type" name="type" value="http" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="web_service_grp1"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="web_service_grp1" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="http_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.11" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.12" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.13" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="web_service_grp2"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">

<vnsParamInst key="servicegroupname" name="srv_grp_name"


value="web_service_grp2" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="http_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.11" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.12" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.13" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="web_service_grp3"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="web_service_grp3" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="http_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.11" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.12" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.13" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg01_web_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg01_web_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.111" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsParamInst key="port" name="port" value="80" />

<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />


<vnsParamInst key="persistencetype" name="persistencetype"
value="COOKIEINSERT" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="web_service_grp1" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="web_service_grp1" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg02_web_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg02_web_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.112" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsParamInst key="port" name="port" value="80" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsParamInst key="persistencetype" name="persistencetype"
value="COOKIEINSERT" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="web_service_grp2" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="web_service_grp2" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg03_web_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg03_web_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.113" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsParamInst key="port" name="port" value="80" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsParamInst key="persistencetype" name="persistencetype"
value="COOKIEINSERT" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="web_service_grp3" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="web_service_grp3" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="mFCnglbmonitor" name="lbmonitor_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key"
targetName="http_mon" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="web_service1_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="web_service_grp1" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="web_service2_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="web_service_grp2" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="web_service3_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="web_service_grp3" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg01_web_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">

<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"


targetName="epg01_web_vip" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg02_web_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="epg02_web_vip" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg03_web_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="epg03_web_vip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure below configures the L4-L7 service parameters to model the server load
balancing of MySQL service on APIC.
<fvTenant name="g008">
<fvAp name="app01">
<fvAEPg name="pvt_ns_epg">
<vnsFolderInst key="lbmonitor" name="mysql_mon"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="monitorname" name="monitorname" value="mysql_mon"
/>
<vnsParamInst key="type" name="type" value="TCP" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="mysql_service_grp1"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="mysql_service_grp1" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="mysql_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.11" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.12" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="msql_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.1.13" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="mysql_service_grp2"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="mysql_service_grp2" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />

<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="mysql_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.11" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.12" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="msql_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.2.13" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="mysql_service_grp3"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="mysql_service_grp3" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="mysql_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.11" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="mysql_service_binding2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.12" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="msql_service_binding3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="10.1.3.13" />
<vnsParamInst key="port" name="port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg01_mysql_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg01_mysql_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.111" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsParamInst key="port" name="port" value="3306" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />

<vnsFolderInst key="lbvserver_servicegroup_binding"
name="mysql_service_grp1" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="mysql_service_grp1" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg02_mysql_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg02_mysql_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.112" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsParamInst key="port" name="port" value="3306" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="mysql_service_grp2" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="mysql_service_grp2" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="epg03_mysql_vip"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="epg03_mysql_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.4.113" />
<vnsParamInst key="servicetype" name="servicetype" value="TCP" />
<vnsParamInst key="port" name="port" value="3306" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="mysql_service_grp3" ctrctNameOrLbl="pvt_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="mysql_service_grp3" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="mFCnglbmonitor" name="mysql_mon_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key"
targetName="mysql_mon" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="mysql_service1_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="mysql_service_grp1" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="mysql_service2_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="mysql_service_grp2" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="mysql_service3_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="mysql_service_grp3" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg01_mysql_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="epg01_mysql_vip" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg02_mysql_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="epg02_mysql_vip" />

</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="epg03_mysql_vip_cfg"
ctrctNameOrLbl="pvt_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="epg03_mysql_vip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structures create the following CLI equivalent on the NetScaler 1000v service device:
add lb monitor http_mon HTTP
add lb monitor mysql_mon TCP
!
add server 10.1.1.11 10.1.1.11
add server 10.1.1.12 10.1.1.12
add server 10.1.1.13 10.1.1.13
add server 10.1.2.11 10.1.2.11
add server 10.1.2.12 10.1.2.12
add server 10.1.2.13 10.1.2.13
add server 10.1.3.11 10.1.3.11
add server 10.1.3.12 10.1.3.12
add server 10.1.3.13 10.1.3.13
!
add serviceGroup web_service_grp1 HTTP
add serviceGroup web_service_grp2 HTTP
add serviceGroup web_service_grp3 HTTP
add serviceGroup mysql_service_grp1 TCP
add serviceGroup mysql_service_grp2 TCP
add serviceGroup mysql_service_grp3 TCP
!
bind serviceGroup web_service_grp1 10.1.1.11 80
bind serviceGroup web_service_grp1 10.1.1.12 80
bind serviceGroup web_service_grp1 10.1.1.13 80
bind serviceGroup web_service_grp1 -monitorName http_mon
bind serviceGroup web_service_grp2 10.1.2.11 80
bind serviceGroup web_service_grp2 10.1.2.12 80
bind serviceGroup web_service_grp2 10.1.2.13 80
bind serviceGroup web_service_grp2 -monitorName http_mon
bind serviceGroup web_service_grp3 10.1.3.11 80
bind serviceGroup web_service_grp3 10.1.3.12 80
bind serviceGroup web_service_grp3 10.1.3.13 80
bind serviceGroup web_service_grp3 -monitorName http_mon
bind serviceGroup mysql_service_grp1 10.1.1.11 3306
bind serviceGroup mysql_service_grp1 10.1.1.12 3306
bind serviceGroup mysql_service_grp1 10.1.1.13 3306
bind serviceGroup mysql_service_grp1 -monitorName mysql_mon
bind serviceGroup mysql_service_grp2 10.1.2.11 3306
bind serviceGroup mysql_service_grp2 10.1.2.12 3306
bind serviceGroup mysql_service_grp2 10.1.2.13 3306
bind serviceGroup mysql_service_grp2 -monitorName mysql_mon
bind serviceGroup mysql_service_grp3 10.1.3.11 3306
bind serviceGroup mysql_service_grp3 10.1.3.12 3306
bind serviceGroup mysql_service_grp3 10.1.3.13 3306
bind serviceGroup mysql_service_grp3 -monitorName mysql_mon
!
add lb vserver epg01_mysql_vip TCP 10.1.4.111 3306 -persistenceType NONE -lbMethod
ROUNDROBIN
add lb vserver epg02_mysql_vip TCP 10.1.4.112 3306 -persistenceType NONE -lbMethod
ROUNDROBIN
add lb vserver epg03_mysql_vip TCP 10.1.4.113 3306 -persistenceType NONE -lbMethod
ROUNDROBIN

add lb vserver epg01_web_vip HTTP 10.1.4.111 80 -lbMethod ROUNDROBIN -persistenceType
COOKIEINSERT
add lb vserver epg02_web_vip HTTP 10.1.4.112 80 -lbMethod ROUNDROBIN -persistenceType
COOKIEINSERT
add lb vserver epg03_web_vip HTTP 10.1.4.113 80 -lbMethod ROUNDROBIN -persistenceType
COOKIEINSERT
!
bind lb vserver epg01_mysql_vip mysql_service_grp1
bind lb vserver epg02_mysql_vip mysql_service_grp2
bind lb vserver epg03_mysql_vip mysql_service_grp3
!
bind lb vserver epg01_web_vip web_service_grp1
bind lb vserver epg02_web_vip web_service_grp2
bind lb vserver epg03_web_vip web_service_grp3
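
The service group and vServer folders above repeat the same structure for every tier in Table 8-5. A small generator can keep that repetition consistent; the sketch below is illustrative only. It reuses the vnsFolderInst/vnsParamInst keys shown in the listings, while the SERVICE_GROUPS dictionary and the binding names are hypothetical assumptions.

# Hypothetical sketch: emit servicegroup member-binding folders from Table 8-5 data.
# Only the key/name attribute values mirror the listings above; the binding names
# and the SERVICE_GROUPS dictionary are illustrative assumptions.
import xml.etree.ElementTree as ET

CTRCT, GRAPH, NODE = "pvt_ns_contract", "asa_ns_graph", "slb"

SERVICE_GROUPS = {  # group name -> (port, real servers), per Table 8-5
    "web_service_grp1": (80, ["10.1.1.11", "10.1.1.12", "10.1.1.13"]),
    "mysql_service_grp1": (3306, ["10.1.1.11", "10.1.1.12", "10.1.1.13"]),
}

def member_bindings(group, port, servers):
    """Build one servicegroup_servicegroupmember_binding folder per real server."""
    for idx, ip in enumerate(servers, start=1):
        folder = ET.Element("vnsFolderInst", {
            "key": "servicegroup_servicegroupmember_binding",
            "name": f"{group}_binding{idx}",     # hypothetical naming scheme
            "ctrctNameOrLbl": CTRCT, "graphNameOrLbl": GRAPH, "nodeNameOrLbl": NODE,
        })
        ET.SubElement(folder, "vnsParamInst", {"key": "ip", "name": "ip", "value": ip})
        ET.SubElement(folder, "vnsParamInst", {"key": "port", "name": "port", "value": str(port)})
        yield folder

for group, (port, servers) in SERVICE_GROUPS.items():
    for folder in member_bindings(group, port, servers):
        print(ET.tostring(folder, encoding="unicode"))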

Note

The NetScaler 1000v is configured to load balance the MySQL service as simple TCP sockets instead of
as a MySQL application, because Citrix NetScaler 1000v device package version 10.5 does not officially
support the DataStream function required for load balancing the MySQL application.

Step 31    Create Logical Device Contexts for DMZ ASA.

Table 8-6 shows the logical device contexts for the DMZ ASA of the Expanded Gold Tenant Container.
Table 8-6    Logical Device Contexts for DMZ ASA

Logical Device Context                       Logical   Connector  Logical Interface/
(contract/graph_template/function_node)      Device               Bridge Domain
dmz_contract/single_asa_graph/asa_fw         dmz_asa   external   dmz_outside/dmz_external_bd
                                                       internal   dmz_inside1/dmz_bd
dmz_ns_contract/asa_ns_graph/asa_fw          dmz_asa   external   dmz_outside/dmz_external_bd
                                                       internal   dmz_ns/dmz_ns_bd
dmz_ns_contract/asa_ns_graph/slb             dmz_ns    external   outside/dmz_ns_bd
                                                       internal   inside/dmz_ns_bd

The following XML data structure creates the logical device contexts specified in the table above.
<fvTenant name="g008">
<vnsLDevCtx ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-dmz_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_inside1" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="asa_fw">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-dmz_asa" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_ns" />

</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_outside" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-dmz_ns" />
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_ns/lIf-inside" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_ns_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_ns/lIf-outside" />
</vnsLIfCtx>
</vnsLDevCtx>
</fvTenant>
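
Once the logical device contexts are posted, one way to confirm that APIC accepted them is to query the tenant subtree for vnsLDevCtx objects. The sketch below is illustrative only; it assumes an authenticated requests.Session like the one in the earlier login sketch, and the APIC address is a placeholder.

# Hypothetical verification sketch: list the logical device contexts under tenant g008.
# Assumes `session` is an authenticated requests.Session (see the earlier login sketch).
import requests

APIC = "https://apic.example.com"   # assumed APIC address

def list_ldev_contexts(session: requests.Session, tenant: str = "g008") -> None:
    # Subtree query scoped to the tenant, filtered to the vnsLDevCtx class.
    url = (f"{APIC}/api/mo/uni/tn-{tenant}.json"
           "?query-target=subtree&target-subtree-class=vnsLDevCtx")
    reply = session.get(url, verify=False)
    reply.raise_for_status()
    for item in reply.json()["imdata"]:
        attrs = item["vnsLDevCtx"]["attributes"]
        print(attrs["ctrctNameOrLbl"], attrs["graphNameOrLbl"], attrs["nodeNameOrLbl"])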

Step 32    Modeling DMZ ASA with L4-L7 Service Parameters.

The DMZ ASA provides security protection for the VM endpoints in the DMZ. Figure 8-12 shows the
DMZ ASA configuration to model with the L4-L7 service parameters on APIC.
Figure 8-12    DMZ ASA Configuration

(Figure: the DMZ ASA interfaces with their IP addresses, security levels, ingress ACLs, and static
routes toward the ASR 9000 and the DMZ SLB subnet; the values are detailed in the parameter listings
that follow.)

The DMZ ASA has one outside interface, one inside interface for the workload VMs accessible from the
Internet, and an interface for the NetScaler 1000v operating in one-arm mode. Both public and private
IP addresses are used on the ASA interfaces; public IP addresses are used on subnets that require
access to/from the Internet. Static routing is used because the ASA device package does not support
dynamic routing protocols. The ASA has a default route to the ASR 9000 for Internet access and a
static route to the public vServer IP on the NetScaler 1000v.

Note

NAT is not configured because ASA device package version 1.0(1) has limited support for configuring
NAT rules on APIC; see CSCuq16294 for more details.

Security access control lists are attached to all ASA interfaces in the ingress direction to filter
application traffic. The ingress ACL of the dmz_inside1_if interface allows the DMZ servers/VMs to
initiate requests to the application servers/VMs in the private zone. To ease troubleshooting, ICMP
packets are allowed to/from all ASA interfaces. The following L4-L7 service parameters are configured:

Interfaces IP address and security level

Static routes

Security access control lists

Attach access control lists to interfaces

Interfaces IP Address and Security Level

The XML data structure below configures the L4-L7 service parameters to model the ASA interface
name, IP address and security level on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<vnsFolderInst key="Interface" name="dmz_outside_if" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="11.1.8.253/255.255.255.252" />
<vnsParamInst key="security_level" name="security_level" value="10" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="ExIntfConfigRelFolder" name="extConfig"
ctrctNameOrLbl="any" graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="ExIntfConfigRel" name="extConfigRel"
targetName="dmz_outside_if" />
</vnsFolderInst>
<fvAEPg name="dmz_epg">
<vnsFolderInst key="Interface" name="dmz_inside1_if"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="11.1.8.6/255.255.255.248" />
<vnsParamInst key="security_level" name="security_level"
value="50" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="dmz_inside1_if" />
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Interface" name="dmz_ns_if"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.7.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level"
value="40" />
</vnsFolderInst>

</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="dmz_ns_if" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The ASA outside interface, dmz_outside_if, is the common interface of all the service graph instances.
The L4-L7 service parameters for the outside interface are configured on the application profile MO,
with the contract (ctrctNameOrLbl attribute) set to any so that any contract can pick up the parameters.
The L4-L7 service parameters for the other ASA interfaces are configured on the EPG MO, with the
ctrctNameOrLbl attribute set to the contract that instantiates the service graph.
The XML data structure above configures the following CLI equivalent on the ASA service device:
interface port-channel2.3018
nameif dmz_outside_if
security-level 10
ip address 11.1.8.253 255.255.255.252
!
interface port-channel2.3017
nameif dmz_inside1_if
security-level 50
ip address 11.1.8.6 255.255.255.248
!
interface port-channel2.3019
nameif dmz_ns_if
security-level 40
ip address 10.1.7.253 255.255.255.0

Static Routes

The XML data structure below configures the L4-L7 service parameters to model the ASA static routes
on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<vnsFolderInst key="Interface" name="dmz_outside_if" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="StaticRoute" name="staticRoute" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="route" name="route01" ctrctNameOrLbl="any"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="network" name="network" value="0.0.0.0" />
<vnsParamInst key="netmask" name="netmask" value="0.0.0.0" />
<vnsParamInst key="gateway" name="gateway" value="11.1.8.254" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Interface" name="dmz_ns_if"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="StaticRoute" name="staticRoute"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="network" name="network" value="12.1.1.8" />
<vnsParamInst key="netmask" name="netmask"
value="255.255.255.255" />
<vnsParamInst key="gateway" name="gateway" value="10.1.7.21"
/>

</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the ASA service device:
route dmz_outside_if 0.0.0.0 0.0.0.0 11.1.8.254 1
route dmz_ns_if 12.1.1.8 255.255.255.255 10.1.7.21 1

Security Access Control Lists

The XML data structure below configures the L4-L7 service parameters to model the ASA security
access control lists on APIC. The configuration makes use of the network and service objects/groups
created in Step 27, Configure ASA Network and Service Objects, page 8-34.
<fvTenant name="g008">
<fvAp name="app02">
<vnsFolderInst key="AccessList" name="dmz_outside_if_acl"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz_vip"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>

<vnsFolderInst key="destination_address" name="destination_address"


ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="public_dmz_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
<fvAEPg name="dmz_epg">
<vnsFolderInst key="AccessList" name="dmz_inside1_if_acl"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="tcp" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="dmz_contract"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside1_subnet" />
</vnsFolderInst>
<vnsFolderInst key="destination_service"
name="destination_service" ctrctNameOrLbl="dmz_contract"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="3306" />
<vnsParamInst key="high_port" name="high_port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01_vip"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="tcp" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="dmz_contract"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="epg01_vip" />
</vnsFolderInst>

<vnsFolderInst key="destination_service"
name="destination_service" ctrctNameOrLbl="dmz_contract"
graphNameOrLbl="single_asa_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="3306" />
<vnsParamInst key="high_port" name="high_port" value="3306" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="AccessList" name="dmz_ns_if_acl"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_group_name"
name="object_group_name" targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address"
name="destination_address" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the ASA service device:
access-list dmz_outside_if_acl extended permit icmp any any
access-list dmz_outside_if_acl extended permit object-group web_https any object
dmz_subnet
access-list dmz_outside_if_acl extended permit object-group web_https any object
public_dmz_vip
!
access-list dmz_inside1_if_acl extended permit icmp any any
access-list dmz_inside1_if_acl extended permit tcp any object inside1_subnet eq 3306
access-list dmz_inside1_if_acl extended permit tcp any object epg01_vip eq 3306
!
access-list dmz_ns_if_acl extended permit icmp any any
access-list dmz_ns_if_acl extended permit object-group web_https any object dmz_subnet

Attach Access Control Lists to Interfaces

The XML data structure below configures the L4-L7 service parameters to attach security access control
lists to ASA interfaces. Each ASA interface has an ingress security access control list attached.
<fvTenant name="g008">

<fvAp name="app02">
<vnsFolderInst key="Interface" name="dmz_outside_if"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="dmz_outside_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
<fvAEPg name="dmz_epg">
<vnsFolderInst key="Interface" name="dmz_inside1_if"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="dmz_contract" graphNameOrLbl="single_asa_graph"
nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="dmz_inside1_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Interface" name="dmz_ns_if"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="asa_fw">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="dmz_ns_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the ASA service device:
access-group dmz_outside_if_acl in interface dmz_outside_if
access-group dmz_inside1_if_acl in interface dmz_inside1_if
access-group dmz_ns_if_acl in interface dmz_ns_if
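
The tenant XML payloads shown throughout this chapter can also be pushed to APIC programmatically rather than through the GUI. The following Python sketch is a minimal illustration, assuming the standard APIC REST endpoints /api/aaaLogin.json and /api/mo/uni.xml, placeholder credentials and hostname, and that the payload above has been saved to a local file named g008_access_groups.xml:

import requests

APIC = "https://apic.example.com"        # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
# Authenticate; APIC returns a session cookie that the Session object reuses on later calls
resp = session.post(APIC + "/api/aaaLogin.json", json=AUTH, verify=False)
resp.raise_for_status()

# Post the tenant XML payload against the policy universe
with open("g008_access_groups.xml") as f:
    resp = session.post(APIC + "/api/mo/uni.xml", data=f.read(), verify=False)
resp.raise_for_status()
print(resp.status_code)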

Step 33

Modeling DMZ NetScaler 1000v with L4-L7 Service Parameters.


The NetScaler 1000v in the DMZ is configured to load balance application traffic such as HTTP. The
DMZ NetScaler 1000v is configured in one-arm mode operation with only a single data interface. The
following L4-L7 Service Parameters are configured:

Subnet IP address

Static routes

Service groups and vServers

Subnet IP Address

Only a single SNIP is configured for the data interface of the DMZ NetScaler 1000v. The SNIP is used
for health monitoring and as the source IP address for proxying client connections to the real
servers/VMs. The XML data structure below configures the L4-L7 service parameters to model the
NetScaler 1000v SNIP on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<fvAEPg name="dmz_ns_epg">

<vnsFolderInst key="Network" name="network"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsFolderInst key="nsip" name="snip" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ipaddress" name="ip" value="10.1.7.21" />
<vnsParamInst key="netmask" name="netmask" value="255.255.255.0"
/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="internal_network" name="snip"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="internal_network_key" name="snip_key"
targetName="network/snip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add vlan 1865
add ns ip 10.1.7.21 255.255.255.0 -vServer DISABLED
bind vlan 1865 -ifnum 1/1
bind vlan 1865 -IPAddress 10.1.7.21 255.255.255.0

Static Routes

Because the DMZ NetScaler 1000v is configured in one-arm mode with only a single data interface, a
single default route is sufficient in this setup. The XML data structure below configures the L4-L7
service parameters to model the NetScaler 1000v default route on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="Network" name="network"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="gateway" name="gateway" value="10.1.7.253" />
<vnsParamInst key="netmask" name="netmask" value="0.0.0.0" />
<vnsParamInst key="network" name="network" value="0.0.0.0" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="external_route" name="ext_route"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="external_route_rel" name="ext_route_rel"
targetName="network/route01" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add route 0.0.0.0 0.0.0.0 10.1.7.253

Service Groups and vServers

Table 8-7 shows the vServer IPs, service groups, and real servers for the DMZ NetScaler 1000v. The
DMZ has only web servers/VMs and no MySQL servers/VMs. There are two vServer IP addresses
configured for the same set of real servers: one for access from the Internet, and one for local private use.

Table 8-7

Service Groups and vServers - DMZ

vServer IP   Service / Port   Service Group      Real Server
10.1.7.111   HTTP / 80        web_service_grp1   11.1.8.1
12.1.1.8                                         11.1.8.2
                                                 11.1.8.3

The XML data structure below configures the L4-L7 service parameters to model server load balancing
of HTTP service on APIC.
<fvTenant name="g008">
<fvAp name="app02">
<fvAEPg name="dmz_ns_epg">
<vnsFolderInst key="lbmonitor" name="http_mon"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="monitorname" name="monitorname" value="http_mon" />
<vnsParamInst key="type" name="type" value="http" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup" name="web_service_grp1"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="web_service_grp1" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsFolderInst key="servicegroup_lbmonitor_binding"
name="monitor_binding" ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="http_mon" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding1" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="11.1.8.1" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding2" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="11.1.8.2" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
<vnsFolderInst key="servicegroup_servicegroupmember_binding"
name="web_service_binding3" ctrctNameOrLbl="dmz_ns_contract"
graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="ip" name="ip" value="11.1.8.3" />
<vnsParamInst key="port" name="port" value="80" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="dmz_private_vip"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="dmz_private_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="10.1.7.111" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsParamInst key="port" name="port" value="80" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsParamInst key="persistencetype" name="persistencetype"
value="COOKIEINSERT" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="web_service_grp1" ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">

<vnsCfgRelInst key="servicename" name="srv_grp_name" targetName="web_service_grp1" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="lbvserver" name="dmz_public_vip"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsParamInst key="name" name="name" value="dmz_public_vip" />
<vnsParamInst key="ipv46" name="ipv46" value="12.1.1.8" />
<vnsParamInst key="servicetype" name="servicetype" value="HTTP" />
<vnsParamInst key="port" name="port" value="80" />
<vnsParamInst key="lbmethod" name="lbmethod" value="ROUNDROBIN" />
<vnsParamInst key="persistencetype" name="persistencetype"
value="COOKIEINSERT" />
<vnsFolderInst key="lbvserver_servicegroup_binding"
name="web_service_grp1" ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph"
nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicename" name="srv_grp_name"
targetName="web_service_grp1" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="mFCnglbmonitor" name="lbmonitor_cfg"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key"
targetName="http_mon" />
</vnsFolderInst>
<vnsFolderInst key="mFCngservicegroup" name="web_service1_cfg"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="servicegroup_key" name="service_key"
targetName="web_service_grp1" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="dmz_private_vip_cfg"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="dmz_private_vip" />
</vnsFolderInst>
<vnsFolderInst key="mFCnglbvserver" name="dmz_public_vip_cfg"
ctrctNameOrLbl="dmz_ns_contract" graphNameOrLbl="asa_ns_graph" nodeNameOrLbl="slb">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="dmz_public_vip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

The XML data structure creates the following CLI equivalent on the NetScaler 1000v service device:
add lb monitor http_mon HTTP
!
add server 11.1.8.1 11.1.8.1
add server 11.1.8.2 11.1.8.2
add server 11.1.8.3 11.1.8.3
!
add serviceGroup web_service_grp1 HTTP
!
bind serviceGroup web_service_grp1 11.1.8.1 80
bind serviceGroup web_service_grp1 11.1.8.2 80
bind serviceGroup web_service_grp1 11.1.8.3 80
bind serviceGroup web_service_grp1 -monitorName http_mon
!
add lb vserver dmz_private_vip HTTP 10.1.7.111 80 -lbMethod ROUNDROBIN
-persistenceType COOKIEINSERT
add lb vserver dmz_public_vip HTTP 12.1.1.8 80 -lbMethod ROUNDROBIN -persistenceType
COOKIEINSERT
!

bind lb vserver dmz_private_vip web_service_grp1
bind lb vserver dmz_public_vip web_service_grp1
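
Because each additional real server only adds another servicegroup_servicegroupmember_binding folder with the same structure, the repetitive part of the payload above can be generated instead of hand-written. The following Python sketch is an illustration only, using the contract, graph, node, and server values from this section and the standard library xml.etree.ElementTree:

import xml.etree.ElementTree as ET

SERVERS = ["11.1.8.1", "11.1.8.2", "11.1.8.3"]      # real servers from Table 8-7
COMMON = {"ctrctNameOrLbl": "dmz_ns_contract",
          "graphNameOrLbl": "asa_ns_graph",
          "nodeNameOrLbl": "slb"}

grp = ET.Element("vnsFolderInst", key="servicegroup", name="web_service_grp1", **COMMON)
ET.SubElement(grp, "vnsParamInst", key="servicegroupname", name="srv_grp_name",
              value="web_service_grp1")
ET.SubElement(grp, "vnsParamInst", key="servicetype", name="servicetype", value="HTTP")

# One member-binding folder per real server, mirroring the hand-written payload above
for i, ip in enumerate(SERVERS, start=1):
    binding = ET.SubElement(grp, "vnsFolderInst",
                            key="servicegroup_servicegroupmember_binding",
                            name="web_service_binding%d" % i, **COMMON)
    ET.SubElement(binding, "vnsParamInst", key="ip", name="ip", value=ip)
    ET.SubElement(binding, "vnsParamInst", key="port", name="port", value="80")

print(ET.tostring(grp, encoding="unicode"))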

Step 34

Create Logical Device Contexts for Inter-Zones Communication.


The private zone and DMZ ASA firewalls are connected by the inter-zone link to allow inter-zone
communication. The service graph for the inter-zone setup has two ASA firewall function nodes,
pvt_asa and dmz_asa. Table 8-8 shows the logical device contexts for the inter-zone setup of the
Expanded Gold Tenant Container.
Table 8-8

Logical Device Contexts for Inter-Zones Communication

Logical Device Context                       Logical Device   Connector   Logical Interface/Bridge Domain
(contract/graph_template/function_node)
inter_asa_contract/dual_asa_graph/pvt_asa    pvt_asa          external    pvt_outside/pvt_external_bd
                                                              internal    pvt_inter_asa/inter_asa_bd
inter_asa_contract/dual_asa_graph/dmz_asa    dmz_asa          external    dmz_inter_asa/inter_asa_bd
                                                              internal    dmz_inside1/dmz_bd

The following XML data structure creates the logical device contexts specified in the table above.
<fvTenant name="g008">
<vnsLDevCtx ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-pvt_asa" />
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-pvt_external_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_outside" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-inter_asa_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-pvt_asa/lIf-pvt_inter_asa" />
</vnsLIfCtx>
</vnsLDevCtx>
<vnsLDevCtx ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsRsLDevCtxToLDev tDn="uni/tn-g008/lDevVip-dmz_asa" />
<vnsLIfCtx connNameOrLbl="external" name="external">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-inter_asa_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_inter_asa" />
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="internal" name="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-g008/BD-dmz_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-g008/lDevVip-dmz_asa/lIf-dmz_inside1" />
</vnsLIfCtx>
</vnsLDevCtx>
</fvTenant>

Step 35

Modeling Inter Zones ASAs Setup with L4-L7 Service Parameters.


The inter-zone setup allows VMs in the private zone and VMs in the DMZ to communicate with each
other. Figure 8-13 illustrates the inter-zone ASA configuration that will be modeled with the L4-L7
service parameters on APIC; only the ASAs and the inter-zone link are depicted.

Figure 8-13

ASA Configuration for Inter-Zone Communication

[Figure: The ASR 9000, pvt_asa, and dmz_asa, with pvt_asa and dmz_asa connected by the inter-zone
link. The pvt_asa interface pvt_inter_asa_if uses 10.1.5.253 with security level 20, and the dmz_asa
interface dmz_inter_asa_if uses 10.1.5.252 with security level 20. Static routes on pvt_asa send the
dmz_ns subnet and the default route via dmz_asa; a static route on dmz_asa sends the private intranet
via pvt_asa. The pvt_asa ingress ACL permits icmp and http/https/mysql to the epg01 subnet/VIP; the
dmz_asa ingress ACL permits icmp, ssh, and http/https to the DMZ subnet/VIP.]

One additional interface is added to each of pvt_asa and dmz_asa to facilitate inter-zone communication.
Static routes are configured on each ASA firewall to route the traffic flows to the correct destinations.
Ingress security access control lists are attached to the inter-zone ASA interfaces to filter the inter-zone
traffic. The following L4-L7 Service Parameters are configured:

Interfaces IP address and security level

Static routes

Security access control lists

Attach access control lists to interfaces

Interfaces IP Address and Security Level

The XML data structure below configures the L4-L7 service parameters to model the ASA interface
name, IP address and security level on APIC. The configuration is applied to both pvt_asa and dmz_asa
service devices.
<fvTenant name="g008">
<vnsFolderInst key="Interface" name="pvt_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.5.253/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level" value="20" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="InIntfConfigRelFolder" name="intConfig"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="InIntfConfigRel" name="intConfigRel"
targetName="pvt_inter_asa_if" />
</vnsFolderInst>
<vnsFolderInst key="Interface" name="dmz_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">

<vnsFolderInst key="InterfaceConfig" name="ifcfg"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="ipv4_address" name="ipv4_addr"
value="10.1.5.252/255.255.255.0" />
<vnsParamInst key="security_level" name="security_level" value="20" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="ExIntfConfigRelFolder" name="extConfig"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsCfgRelInst key="ExIntfConfigRel" name="extConfigRel"
targetName="dmz_inter_asa_if" />
</vnsFolderInst>
</fvTenant>

The L4-L7 service parameters are configured on the tenant MO, since they are applicable to both the
app01 and app02 application profiles.

Note

The service graph of the inter-zone setup, dual_asa_graph, has four logical interfaces, namely
pvt_outside_if, pvt_inter_asa_if, dmz_inter_asa_if, and dmz_if. The XML data structure above only
configures the L4-L7 service parameters for the pvt_inter_asa_if and dmz_inter_asa_if interfaces; the
L4-L7 service parameters for the other two interfaces are already configured by other service graph
instances.
The XML data structure above configures the following CLI equivalent on pvt_asa service device:
interface port-channel2.3016
nameif pvt_inter_asa_if
security-level 20
ip address 10.1.5.253 255.255.255.0

And on dmz_asa service device:


interface port-channel2.3020
nameif dmz_inter_asa_if
security-level 20
ip address 10.1.5.252 255.255.255.0

Static Routes

The XML data structure below configures the L4-L7 service parameters to model the ASA static routes
on APIC. The configuration is applied to both pvt_asa and dmz_asa service devices.
<fvTenant name="g008">
<vnsFolderInst key="Interface" name="pvt_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="StaticRoute" name="StaticRoute"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="network" name="network" value="10.1.7.0" />
<vnsParamInst key="netmask" name="netmask" value="255.255.255.0" />
<vnsParamInst key="gateway" name="gateway" value="10.1.5.252" />
</vnsFolderInst>
<vnsFolderInst key="route" name="route02"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">

<vnsParamInst key="network" name="network" value="0.0.0.0" />
<vnsParamInst key="netmask" name="netmask" value="0.0.0.0" />
<vnsParamInst key="gateway" name="gateway" value="10.1.5.252" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="Interface" name="dmz_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsFolderInst key="StaticRoute" name="StaticRoute"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsFolderInst key="route" name="route01"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="network" name="network" value="10.0.0.0" />
<vnsParamInst key="netmask" name="netmask" value="255.0.0.0" />
<vnsParamInst key="gateway" name="gateway" value="10.1.5.253" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvTenant>

The XML data structure above configures the following CLI equivalent on pvt_asa service device:
route pvt_inter_asa_if 10.1.7.0 255.255.255.0 10.1.5.252 1
route pvt_inter_asa_if 0.0.0.0 0.0.0.0 10.1.5.252 1

And on dmz_asa service device:


route dmz_inter_asa_if 10.0.0.0 255.0.0.0 10.1.5.253 1

Security Access Control Lists

The XML data structure below configures the L4-L7 service parameters to model the ASA security
access control lists on APIC. The configuration makes use of the network and service objects/groups
that are created in Step 27, Configure ASA Network and Service Objects, page 8-34. The configuration is
applied to both pvt_asa and dmz_asa service devices.
<fvTenant name="g008">
<vnsFolderInst key="AccessList" name="pvt_inter_asa_if_acl"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https_mysql" />

</vnsFolderInst>
<vnsFolderInst key="source_address" name="source_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="inside1_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_epg01_vip"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https_mysql" />
</vnsFolderInst>
<vnsFolderInst key="source_address" name="source_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="epg01_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessList" name="dmz_inter_asa_if_acl"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsFolderInst key="AccessControlEntry" name="permit_icmp"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="10" />
<vnsFolderInst key="protocol" name="icmp"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="name_number" name="name" value="icmp" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_ssh"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="20" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="name_number" name="name" value="tcp" />
</vnsFolderInst>

<vnsFolderInst key="destination_service" name="destination_service"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="operator" name="operator" value="eq" />
<vnsParamInst key="low_port" name="low_port" value="22" />
<vnsParamInst key="high_port" name="high_port" value="22" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_to_dmz"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="30" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_subnet" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="AccessControlEntry" name="permit_dmz_vip"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsParamInst key="action" name="action" value="permit" />
<vnsParamInst key="order" name="order" value="40" />
<vnsFolderInst key="protocol" name="protocol"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsCfgRelInst key="object_group_name" name="object_group_name"
targetName="web_https" />
</vnsFolderInst>
<vnsFolderInst key="destination_address" name="destination_address"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsCfgRelInst key="object_name" name="object_name"
targetName="dmz_vip" />
</vnsFolderInst>
</vnsFolderInst>
</vnsFolderInst>
</fvTenant>

The XML data structure above configures the following CLI equivalent on pvt_asa service device:
access-list pvt_inter_asa_if_acl extended permit icmp any any
access-list pvt_inter_asa_if_acl extended permit object-group web_https_mysql object
dmz_subnet object inside1_subnet
access-list pvt_inter_asa_if_acl extended permit object-group web_https_mysql object
dmz_subnet object epg01_vip

And on dmz_asa service device:


access-list dmz_inter_asa_if_acl extended permit icmp any any
access-list dmz_inter_asa_if_acl extended permit tcp any any eq ssh
access-list dmz_inter_asa_if_acl extended permit object-group web_https any object dmz_subnet
access-list dmz_inter_asa_if_acl extended permit object-group web_https any object dmz_vip

Attach Access Control Lists to Interfaces

The XML data structure below configures the L4-L7 service parameters to attach security access control
lists to ASA interfaces. Each ASA interface has an ingress security access control list attached. The
configuration is applied to both pvt_asa and dmz_asa service devices.
<fvTenant name="g008">
<vnsFolderInst key="Interface" name="pvt_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="pvt_asa">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="pvt_inter_asa_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst key="Interface" name="dmz_inter_asa_if"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsFolderInst key="AccessGroup" name="AccessGroup"
ctrctNameOrLbl="inter_asa_contract" graphNameOrLbl="dual_asa_graph"
nodeNameOrLbl="dmz_asa">
<vnsCfgRelInst key="inbound_access_list_name" name="ingress_acl"
targetName="dmz_inter_asa_if_acl" />
</vnsFolderInst>
</vnsFolderInst>
</fvTenant>

The XML data structure above configures the following CLI equivalent on pvt_asa service device:
access-group pvt_inter_asa_if_acl in interface pvt_inter_asa_if

And on dmz_asa service device:


access-group dmz_inter_asa_if_acl in interface dmz_inter_asa_if

Step 36

Associate Service Graphs to Contracts.


A service graph is inserted between a consumer EPG and a provider EPG by a contract. APIC
automatically applies the configuration specified in the L4-L7 service parameters to the service devices
once the service graphs are deployed. Table 8-9 shows the service graph associated with each contract.
Table 8-9

Service Graphs Associated to Contracts

Contract             Service Graph
contract01           single_asa_graph
contract02           single_asa_graph
contract03           single_asa_graph
pvt_ns_contract      asa_ns_graph
dmz_contract         single_asa_graph
dmz_ns_contract      asa_ns_graph
inter_asa_contract   dual_asa_graph
The XML data structure below associates the service graphs to the respective contract.
<fvTenant name="g008">
<vzBrCP name="contract01">
<vzSubj name="subject01">

<vzRsSubjGraphAtt tnVnsAbsGraphName="single_asa_graph" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract02">
<vzSubj name="subject01">
<vzRsSubjGraphAtt tnVnsAbsGraphName="single_asa_graph" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract03">
<vzSubj name="subject01">
<vzRsSubjGraphAtt tnVnsAbsGraphName="single_asa_graph" />
</vzSubj>
</vzBrCP>
<vzBrCP name="pvt_ns_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt tnVnsAbsGraphName="asa_ns_graph" />
</vzSubj>
</vzBrCP>
<vzBrCP name="dmz_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt tnVnsAbsGraphName="single_asa_graph" />
</vzSubj>
</vzBrCP>
<vzBrCP name="dmz_ns_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt tnVnsAbsGraphName="asa_ns_graph" />
</vzSubj>
</vzBrCP>
<vzBrCP name="inter_asa_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt tnVnsAbsGraphName="dual_asa_graph" />
</vzSubj>
</vzBrCP>
</fvTenant>

Once the service graphs are deployed, APIC creates shadow EPG backed VDS port-groups for the virtual
service appliances; the VDS port-groups have the following naming convention:
{tenant}|{logical_device}ctx{context}{bridge_domain}|{connector}

APIC automatically attaches the data vNICs of the virtual service appliances (in this case, the NetScaler
1000v virtual appliances) to these VDS port-groups once the service graphs are deployed.
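
Deployment of the graphs can be verified over the API as well as in the GUI. The short Python sketch below reuses the authenticated requests session from the earlier example and assumes the vnsGraphInst class name for deployed graph instances; it lists the graph instances rendered under the g008 tenant:

# Query the tenant subtree for deployed service graph instances
url = APIC + "/api/mo/uni/tn-g008.json?query-target=subtree&target-subtree-class=vnsGraphInst"
resp = session.get(url, verify=False)
resp.raise_for_status()

for obj in resp.json().get("imdata", []):
    print(obj["vnsGraphInst"]["attributes"]["dn"])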

Decommission the Expanded Gold Tenant Container


The following sections detail the steps to decommission the Expanded Gold Tenant Container after it
is no longer required.
Step 1

Detaching vNIC from EPG Backed DVS Port-Group.


Before the service graphs and tenant are decommissioned, the vNICs attached to the EPG backed VDS
port-groups must be detached, because APIC will try to delete the VDS port-groups when the service graphs
and EPGs/tenants are decommissioned. This also includes the vNICs of the data interfaces of the NetScaler
1000v virtual appliances.
The following vSphere PowerCLI Cmdlets detach the vNIC from the EPG backed VDS port-group, and
attach it to the dummy vNetwork Standard Switch (VSS) port-group.

Get-VM -name "g008-vm0" | Get-NetworkAdapter -name "Network adapter 1" |
Set-NetworkAdapter -NetworkName "dummy" -confirm:$false

Note

It is assumed that the VSS port-group dummy already exists on all vSphere ESXi hosts.

Step 2

De-associate Service Graphs from Contracts.
The service graphs must be de-associated from the contracts before the tenant is decommissioned. When
a service graph is de-associated from a contract, APIC (via the device package) removes the configuration
pushed by the device script when the service graph was deployed. If the tenant is decommissioned
without de-associating the service graphs from the contracts, the configuration might not be removed;
see CSCur05367 and CSCuq90719 for more details.
The following XML data structure de-associates the service graphs from the contracts.
<fvTenant name="g008">
<vzBrCP name="contract01">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract02">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract03">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="pvt_ns_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="dmz_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="dmz_ns_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
<vzBrCP name="inter_asa_contract">
<vzSubj name="subject01">
<vzRsSubjGraphAtt status="deleted" />
</vzSubj>
</vzBrCP>
</fvTenant>

Step 3

Decommission Tenant.
The following XML data structure decommissions the tenant from the APIC MIT. When the tenant is
decommissioned, all MOs contained within the tenant container are deleted from the APIC MIT.
<polUni>
<fvTenant name="g008" status="deleted" />
</polUni>

The XML data structure below deletes the security domain, which is no longer required.

<aaaUserEp>
<aaaDomain name="g008_sd" status="deleted" />
</aaaUserEp>
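
Because the ordering matters (graphs are de-associated first, the tenant is deleted next, and the security domain is removed last), the decommissioning payloads lend themselves to a small script. The sketch below reuses the requests session from the earlier example and assumes the payloads above are saved to local files with placeholder names; the tenant-deletion payload is posted as the bare fvTenant element, since the post already targets the policy universe:

# Post the decommissioning payloads in order; APIC returns an HTTP error with a fault body on failure
for payload_file in ("deassociate_graphs.xml", "delete_tenant.xml", "delete_security_domain.xml"):
    with open(payload_file) as f:
        resp = session.post(APIC + "/api/mo/uni.xml", data=f.read(), verify=False)
    resp.raise_for_status()
    print(payload_file, "applied, HTTP", resp.status_code)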

Step 4

Delete VLAN Pools.


The following XML data structure deletes the VLAN pools.
<infraInfra>
<fvnsVlanInstP name="g008_pvt_asa_pool" allocMode="dynamic" status="deleted" />
<fvnsVlanInstP name="g008_dmz_asa_pool" allocMode="dynamic" status="deleted" />
</infraInfra>

Step 5

Delete Physical Domains.


APIC does not allow removal of both physical domains in one API request. The following XML data
structure deletes the first physical domain.
<physDomP name="g008_pvt_asa_phy" status="deleted"/>

The XML data structure below deletes the second physical domain.
<physDomP name="g008_dmz_asa_phy" status="deleted" />

Step 6

Remove Configuration on ASR 9000.


The following commands remove the configuration related to the Expanded Gold Tenant Container from
the ASR 9000.
router static
no vrf g008
!
vrf internet
address-family ipv4 unicast
no 12.1.1.8/32 11.1.8.5
exit
!
router bgp 200
no vrf g008
exit
!
no interface Bundle-Ether 9.1008
no interface Bundle-Ether 10.2998
no vrf g008
commit

Step 7

Remove ASA Security Contexts and VLAN Sub-interfaces.


If the ASA security contexts are no longer required after the Expanded Gold Tenant Container is
decommissioned, the following commands remove the ASA security contexts and VLAN
sub-interfaces. The configuration commands are executed on the system security context of the physical
ASA security appliance.
changeto context system
conf t
no context g008-pvt
no context g008-dmz
delete disk0:/contexts/g008-pvt.cfg
delete disk0:/contexts/g008-dmz.cfg
no interface port-channel2.3011
no interface port-channel2.3012
no interface port-channel2.3013
no interface port-channel2.3014
no interface port-channel2.3015
no interface port-channel2.3016

no interface port-channel2.3017
no interface port-channel2.3018
no interface port-channel2.3019
no interface port-channel2.3020
end

Expanded Gold Tenant Container with ASAv


The previous section detailed the steps to construct and configure the Expanded Gold Tenant Container
using a physical ASA security appliance (configured in multiple context mode) as the perimeter firewalls.
The Expanded Gold Tenant Container can also utilize ASAv virtual appliances as perimeter firewalls.
Most of the detailed configuration steps are the same when ASAv virtual appliances are utilized,
with the exception of Step 10, Create ASA Security Contexts, page 8-19, through Step 18, Configure ASA
Logical Device Parameters, page 8-25, which are replaced with the following steps.
Step 1

Deploy ASAv Virtual Appliances on vSphere.


Four ASAv virtual appliances are required per Expanded Gold Tenant Container, one HA-pair for private
zone, and one HA-pair for DMZ. APIC will not deploy the ASAv virtual appliances on vSphere; the
virtual appliances must be deployed out of band.
Once the ASAv virtual appliances are deployed, the following initial configuration should be made to
allow management access by APIC.
crypto key generate rsa modulus 2048
!
route management 10.0.0.0 255.255.0.0 10.0.32.1 1
route management 172.18.0.0 255.255.0.0 10.0.32.1 1
!
aaa authentication ssh console LOCAL
aaa authentication http console LOCAL
!
http server enable
http 10.0.0.0 255.255.0.0 management
http 172.18.0.0 255.255.0.0 management
!
ssh 10.0.0.0 255.255.0.0 management
ssh 172.18.0.0 255.255.0.0 management
ssh version 2
!
username admin password Cisco12345
username apic password Cisco12345
!
enable password Cisco12345
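
Before registering the ASAv units with APIC, it is worth confirming that their management addresses answer on SSH (22) and HTTPS (443), which the initial configuration above enables. A minimal Python reachability check using the management addresses from this implementation:

import socket

# Management addresses of the four ASAv units in this implementation
ASAV_MGMT = ["10.0.32.241", "10.0.32.242", "10.0.32.243", "10.0.32.244"]

for host in ASAV_MGMT:
    for port in (22, 443):                    # SSH and HTTPS
        try:
            with socket.create_connection((host, port), timeout=5):
                print("%s:%d reachable" % (host, port))
        except OSError as err:
            print("%s:%d NOT reachable (%s)" % (host, port, err))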

Step 2

Create Logical Device for Private Zone ASA.


The following XML data structure configures the logical device for the private zone ASA. The logical
device is associated with a VMM domain, which specifies the vSphere virtual datacenter where the
ASAv virtual appliances reside.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa" contextAware="single-Context" devtype="VIRTUAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-CISCO-ASA-1.0.1" />
<vnsCMgmt host="10.0.32.241" port="443" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />

<vnsRsALDevToDomP tDn="uni/vmmp-VMware/dom-ics3_prod_vc" />
</vnsLDevVip>
</fvTenant>

Step 3

Create Concrete Devices for Private Zone ASA.


Unlike deployment with physical ASA, which has only one concrete interface, deployment with ASAv
makes use of multiple concrete interfaces, as shown in Table 8-10.
Table 8-10

ASAv Deployment and Interfaces

ASA Interface   ASA Interface Name   Logical Interface   vNIC
Gig0/0          pvt_outside_if       pvt_outside         Network adapter 2
Gig0/1          pvt_inside1_if       pvt_inside1         Network adapter 3
Gig0/2          pvt_inside2_if       pvt_inside2         Network adapter 4
Gig0/3          pvt_inside3_if       pvt_inside3         Network adapter 5
Gig0/4          pvt_ns_if            pvt_ns              Network adapter 6
Gig0/5          pvt_inter_asa_if     pvt_inter_asa       Network adapter 7
Gig0/7          failover_lan         failover_lan        Network adapter 9
Gig0/8          failover_link        failover_link       Network adapter 10

Note

The first vNIC of the ASAv virtual appliance, Network adapter 1, is for management purposes only; APIC
does not model the management interface of the ASAv virtual appliance.
The XML data structure below creates two concrete devices, and the logical interfaces of the logical
device for the private zone ASA.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa">
<vnsCDev name="asa01" vcenterName="ics3_vc_tenant_cluster"
vmName="g008-asa01">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.241" port="443" />
<vnsCIf name="Gig0/0" vnicName="Network adapter 2" />
<vnsCIf name="Gig0/1" vnicName="Network adapter 3" />
<vnsCIf name="Gig0/2" vnicName="Network adapter 4" />
<vnsCIf name="Gig0/3" vnicName="Network adapter 5" />
<vnsCIf name="Gig0/4" vnicName="Network adapter 6" />
<vnsCIf name="Gig0/5" vnicName="Network adapter 7" />
<vnsCIf name="Gig0/7" vnicName="Network adapter 9" />
<vnsCIf name="Gig0/8" vnicName="Network adapter 10" />
</vnsCDev>
<vnsCDev name="asa02" vcenterName="ics3_vc_tenant_cluster"
vmName="g008-asa02">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.242" port="443" />
<vnsCIf name="Gig0/0" vnicName="Network adapter 2" />
<vnsCIf name="Gig0/1" vnicName="Network adapter 3" />
<vnsCIf name="Gig0/2" vnicName="Network adapter 4" />
<vnsCIf name="Gig0/3" vnicName="Network adapter 5" />
<vnsCIf name="Gig0/4" vnicName="Network adapter 6" />
<vnsCIf name="Gig0/5" vnicName="Network adapter 7" />

<vnsCIf name="Gig0/7" vnicName="Network adapter 9" />
<vnsCIf name="Gig0/8" vnicName="Network adapter 10" />
</vnsCDev>
<vnsLIf name="pvt_outside">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[Gig0/0]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa02/cIf-[Gig0/0]" />
</vnsLIf>
<vnsLIf name="pvt_inside1">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[Gig0/1]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa02/cIf-[Gig0/1]" />
</vnsLIf>
<vnsLIf name="pvt_inside2">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[Gig0/2]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa02/cIf-[Gig0/2]" />
</vnsLIf>
<vnsLIf name="pvt_inside3">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[Gig0/3]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa02/cIf-[Gig0/3]" />
</vnsLIf>
<vnsLIf name="pvt_ns">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[Gig0/4]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa02/cIf-[Gig0/4]" />
</vnsLIf>
<vnsLIf name="pvt_inter_asa">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[Gig0/5]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa02/cIf-[Gig0/5]" />
</vnsLIf>
<vnsLIf name="failover_lan">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-failover_lan" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[Gig0/7]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa02/cIf-[Gig0/7]" />
</vnsLIf>
<vnsLIf name="failover_link">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-failover_link" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa01/cIf-[Gig0/8]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-pvt_asa/cDev-asa02/cIf-[Gig0/8]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>

The name of the VMM controller (the vcenterName attribute above is not the vCenter hostname or IP
address, but the VMM controller name of the VMM domain), the VM name, and the vNIC names of
the ASAv virtual appliance are part of the concrete device configuration, so that APIC can attach the
appliance's vNICs to the shadow EPG backed VDS port-groups created when the service graphs are
deployed.
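
If the correct vcenterName value is not known, the VMM controller names defined on APIC can be listed over the API. The sketch below reuses the requests session from the earlier example and assumes the vmmCtrlrP class for VMM controllers; the name attribute printed here is the value to use for vcenterName:

# List VMM controllers so the vcenterName attribute can be filled in correctly
resp = session.get(APIC + "/api/node/class/vmmCtrlrP.json", verify=False)
resp.raise_for_status()

for obj in resp.json().get("imdata", []):
    attrs = obj["vmmCtrlrP"]["attributes"]
    print("name:", attrs["name"], " host:", attrs.get("hostOrIp", ""), " dn:", attrs["dn"])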
Step 4

Enable HA for Private Zone ASA.


Figure 8-14 shows the active/standby HA setup of the private zone ASA consisting of two ASAv virtual
appliances.

Figure 8-14

ASAv Active/Standby Failover Setup

[Figure: asa01 is the primary unit and asa02 the secondary unit on the management network
10.0.32.0/24, with 10.0.32.241 as the cluster IP and 10.0.32.242 as the standby management address.
The failover LAN and failover link between the units carry configuration and state sync, using
10.255.8.1/29 on the active unit and 10.255.8.2/29 on the standby unit.]

The XML data structure below configures the HA implementation in Figure 8-14.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa">
<vnsCDev name="asa01">
<vnsDevFolder key="FailoverConfig" name="failover_config">
<vnsDevParam key="failover" name="enable_failover" value="enable" />
<vnsDevParam key="lan_unit" name="primary" value="primary" />
<vnsDevParam key="key_secret" name="secret" value="Cisco12345" />
<vnsDevParam key="http_replication" name="http_replication"
value="enable" />
<vnsDevFolder key="mgmt_standby_ip" name="mgmt_standby_ip">
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.0.32.242" />
</vnsDevFolder>
<vnsDevFolder key="failover_ip" name="failover_ip">
<vnsDevParam key="active_ip" name="active_ip" value="10.255.8.1"
/>
<vnsDevParam key="netmask" name="netmask" value="255.255.255.248"
/>
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
<vnsDevParam key="standby_ip" name="standby_ip" value="10.255.8.2"
/>
</vnsDevFolder>
<vnsDevFolder key="failover_lan_interface"
name="failover_lan_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
</vnsDevFolder>
<vnsDevFolder key="failover_link_interface"
name="failover_link_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_link" />
</vnsDevFolder>
</vnsDevFolder>
</vnsCDev>
<vnsCDev name="asa02">
<vnsDevFolder key="FailoverConfig" name="failover_config">
<vnsDevParam key="failover" name="enable_failover" value="enable" />
<vnsDevParam key="lan_unit" name="primary" value="secondary" />
<vnsDevParam key="key_secret" name="secret" value="Cisco12345" />
<vnsDevParam key="http_replication" name="http_replication"
value="enable" />
<vnsDevFolder key="mgmt_standby_ip" name="mgmt_standby_ip">
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.0.32.242" />
</vnsDevFolder>
<vnsDevFolder key="failover_ip" name="failover_ip">
<vnsDevParam key="active_ip" name="active_ip" value="10.255.8.1"
/>

<vnsDevParam key="netmask" name="netmask" value="255.255.255.248" />
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
<vnsDevParam key="standby_ip" name="standby_ip" value="10.255.8.2"
/>
</vnsDevFolder>
<vnsDevFolder key="failover_lan_interface"
name="failover_lan_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
</vnsDevFolder>
<vnsDevFolder key="failover_link_interface"
name="failover_link_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_link" />
</vnsDevFolder>
</vnsDevFolder>
</vnsCDev>
</vnsLDevVip>
</fvTenant>

Note

ASA device package version 1.0(1) does not support using the same vNIC for failover LAN and failover
link.

Step 5

Create Logical Device for DMZ ASA.

The following XML data structure configures the logical device for the DMZ ASA. The logical device
is associated with a VMM domain, which specifies the vSphere virtual datacenter where the ASAv
virtual appliances reside.
<fvTenant name="g008">
<vnsLDevVip name="dmz_asa" contextAware="single-Context" devtype="VIRTUAL"
funcType="GoTo" mode="legacy-Mode">
<vnsRsMDevAtt tDn="uni/infra/mDev-CISCO-ASA-1.0.1" />
<vnsCMgmt host="10.0.32.243" port="443" />
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsRsALDevToDomP tDn="uni/vmmp-VMware/dom-ics3_prod_vc" />
</vnsLDevVip>
</fvTenant>

Step 6

Create Concrete Devices for DMZ ASA.


Unlike deployment with physical ASA, which has only one concrete interface, deployment with ASAv
makes use of multiple concrete interfaces, as shown in Table 8-11.
Table 8-11

Concrete Devices for DMZ ASA

ASA Interface   ASA Interface Name   Logical Interface   vNIC
Gig0/0          dmz_outside_if       dmz_outside         Network adapter 2
Gig0/1          dmz_inside1_if       dmz_inside1         Network adapter 3
Gig0/2          dmz_ns_if            dmz_ns              Network adapter 4
Gig0/3          dmz_inter_asa_if     dmz_inter_asa       Network adapter 5
Gig0/7          failover_lan         failover_lan        Network adapter 9
Gig0/8          failover_link        failover_link       Network adapter 10

The XML data structure below creates two concrete devices, and the logical interfaces of the logical
device for the DMZ ASA.
<fvTenant name="g008">
<vnsLDevVip name="dmz_asa">
<vnsCDev name="asa03" vcenterName="ics3_vc_tenant_cluster"
vmName="g008-asa03">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.243" port="443" />
<vnsCIf name="Gig0/0" vnicName="Network adapter 2" />
<vnsCIf name="Gig0/1" vnicName="Network adapter 3" />
<vnsCIf name="Gig0/2" vnicName="Network adapter 4" />
<vnsCIf name="Gig0/3" vnicName="Network adapter 5" />
<vnsCIf name="Gig0/7" vnicName="Network adapter 9" />
<vnsCIf name="Gig0/8" vnicName="Network adapter 10" />
</vnsCDev>
<vnsCDev name="asa04" vcenterName="ics3_vc_tenant_cluster"
vmName="g008-asa04">
<vnsCCred name="username" value="apic" />
<vnsCCredSecret name="password" value="Cisco12345" />
<vnsCMgmt host="10.0.32.244" port="443" />
<vnsCIf name="Gig0/0" vnicName="Network adapter 2" />
<vnsCIf name="Gig0/1" vnicName="Network adapter 3" />
<vnsCIf name="Gig0/2" vnicName="Network adapter 4" />
<vnsCIf name="Gig0/3" vnicName="Network adapter 5" />
<vnsCIf name="Gig0/7" vnicName="Network adapter 9" />
<vnsCIf name="Gig0/8" vnicName="Network adapter 10" />
</vnsCDev>
<vnsLIf name="dmz_outside">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/0]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/0]" />
</vnsLIf>
<vnsLIf name="dmz_inside1">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/1]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/1]" />
</vnsLIf>
<vnsLIf name="dmz_ns">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-internal" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/2]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/2]" />
</vnsLIf>
<vnsLIf name="dmz_inter_asa">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-external" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/3]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/3]" />
</vnsLIf>
<vnsLIf name="failover_lan">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-failover_lan" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/7]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/7]" />
</vnsLIf>
<vnsLIf name="failover_link">
<vnsRsMetaIf tDn="uni/infra/mDev-CISCO-ASA-1.0.1/mIfLbl-failover_link" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa03/cIf-[Gig0/8]" />
<vnsRsCIfAtt tDn="uni/tn-g008/lDevVip-dmz_asa/cDev-asa04/cIf-[Gig0/8]" />
</vnsLIf>
</vnsLDevVip>
</fvTenant>

Step 7

Enable HA for DMZ ASA.

The XML data structure below configures the active/standby HA setup for the DMZ ASA consisting of
two ASAv virtual appliances.
<fvTenant name="g008">
<vnsLDevVip name="dmz_asa">
<vnsCDev name="asa03">
<vnsDevFolder key="FailoverConfig" name="failover_config">
<vnsDevParam key="failover" name="enable_failover" value="enable" />
<vnsDevParam key="lan_unit" name="primary" value="primary" />
<vnsDevParam key="key_secret" name="secret" value="Cisco12345" />
<vnsDevParam key="http_replication" name="http_replication"
value="enable" />
<vnsDevFolder key="mgmt_standby_ip" name="mgmt_standby_ip">
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.0.32.244" />
</vnsDevFolder>
<vnsDevFolder key="failover_ip" name="failover_ip">
<vnsDevParam key="active_ip" name="active_ip" value="10.255.8.11"
/>
<vnsDevParam key="netmask" name="netmask" value="255.255.255.248"
/>
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.255.8.12" />
</vnsDevFolder>
<vnsDevFolder key="failover_lan_interface"
name="failover_lan_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
</vnsDevFolder>
<vnsDevFolder key="failover_link_interface"
name="failover_link_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_link" />
</vnsDevFolder>
</vnsDevFolder>
</vnsCDev>
<vnsCDev name="asa04">
<vnsDevFolder key="FailoverConfig" name="failover_config">
<vnsDevParam key="failover" name="enable_failover" value="enable" />
<vnsDevParam key="lan_unit" name="primary" value="secondary" />
<vnsDevParam key="key_secret" name="secret" value="Cisco12345" />
<vnsDevParam key="http_replication" name="http_replication"
value="enable" />
<vnsDevFolder key="mgmt_standby_ip" name="mgmt_standby_ip">
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.0.32.244" />
</vnsDevFolder>
<vnsDevFolder key="failover_ip" name="failover_ip">
<vnsDevParam key="active_ip" name="active_ip" value="10.255.8.11"
/>
<vnsDevParam key="netmask" name="netmask" value="255.255.255.248"
/>
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
<vnsDevParam key="standby_ip" name="standby_ip"
value="10.255.8.12" />
</vnsDevFolder>
<vnsDevFolder key="failover_lan_interface"
name="failover_lan_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_lan" />
</vnsDevFolder>

<vnsDevFolder key="failover_link_interface"
name="failover_link_interface">
<vnsDevParam key="interface_name" name="interface_name"
value="failover_link" />
</vnsDevFolder>
</vnsDevFolder>
</vnsCDev>
</vnsLDevVip>
</fvTenant>

Step 8

Configure ASA Logical Device Parameters.


The XML data structure below configures device-wide parameters such as the DNS domain, NTP server,
logging, and so on, for the ASA logical devices.
<fvTenant name="g008">
<vnsLDevVip name="pvt_asa">
<vnsDevFolder key="NTP" name="ntp">
<vnsDevFolder key="NTPServer" name="ntp1">
<vnsDevParam key="server" name="ntp1" value="172.18.114.20" />
<vnsDevParam key="prefer" name="prefer" value="enable" />
</vnsDevFolder>
</vnsDevFolder>
<vnsDevFolder key="DNS" name="dns">
<vnsDevParam key="domain_name" name="domain_name"
value="aci.icdc.sdu.cisco.com" />
</vnsDevFolder>
<vnsDevFolder key="LoggingConfig" name="logging">
<vnsDevParam key="enable_logging" name="enable" value="enable" />
<vnsDevParam key="buffered_level" name="level" value="warnings" />
<vnsDevParam key="buffer_size" name="size" value="1048576" />
<vnsDevParam key="facility" name="facility" value="18" />
</vnsDevFolder>
</vnsLDevVip>
<vnsLDevVip name="dmz_asa">
<vnsDevFolder key="NTP" name="ntp">
<vnsDevFolder key="NTPServer" name="ntp1">
<vnsDevParam key="server" name="ntp1" value="172.18.114.20" />
<vnsDevParam key="prefer" name="prefer" value="enable" />
</vnsDevFolder>
</vnsDevFolder>
<vnsDevFolder key="DNS" name="dns">
<vnsDevParam key="domain_name" name="domain_name"
value="aci.icdc.sdu.cisco.com" />
</vnsDevFolder>
<vnsDevFolder key="LoggingConfig" name="logging">
<vnsDevParam key="enable_logging" name="enable" value="enable" />
<vnsDevParam key="buffered_level" name="level" value="warnings" />
<vnsDevParam key="buffer_size" name="size" value="1048576" />
<vnsDevParam key="facility" name="facility" value="18" />
</vnsDevFolder>
</vnsLDevVip>
</fvTenant>

CHAPTER 9

Silver Tenant Container


While providing Infrastructure as a Service (IaaS) solutions, cloud providers want a tiered model that
accounts for a variety of applications. Based on customer requirements, services can be differentiated
into a multi-tier infrastructure. Such a model provides flexibility in expanding services by adding
resources. The Silver tenant is one such container, which provides application availability with a
dedicated load balancing service.
This chapter covers the following topics:
•	Silver Tenant Container Layout
•	Server Load Balancing (SLB) using NetScaler 1000v
•	NetScaler 1000v Implementation using Service Graph
•	Work flow to Create Silver Tenant in ACI

Silver Tenant Container Layout


The Silver tenant container has the capability to provide various application services with Layer 3 (L3) support while maintaining logical separation from other network containers in a shared infrastructure. Dedicating a unique VRF to each Silver tenant maintains this logical isolation. Figure 9-1 shows an overview of the Silver container model.
Figure 9-1	Silver Tenant Model

[Figure: The MPLS L3 VPN connects through the ASR 9000 to the ACI Fabric, which hosts a NetScaler 1000V HA pair and three tiers of tenant VMs (Tier01, Tier02, and Tier03).]


Each tenant can host different applications based on the requirements of the customer. This may require a number of application tiers of virtual machines (VMs) to be implemented, such as web, application, and database. In our implementation, the Silver tenant is defined with three application tiers. Each tier is assigned a unique VLAN and hosts web, application, or database services. The Silver tenant also provides load-balancing services for the application tiers using the Citrix NetScaler 1000v. The NetScaler VMs sit on a different VLAN, maintaining a logical separation from the application tiers. The number of application tiers can be expanded easily by assigning a new VLAN to the new tier to provide a multi-tier service. This section covers the following topics:
•	Physical Topology
•	Logical Topology
•	Tenant Construction
•	Traffic Flow Paths

Physical Topology
The Silver tenant physical topology is shown in Figure 9-2.
Figure 9-2

Silver Physical Topology


[Figure: UCS B-Series blade servers hosting the tenant and NetScaler 1000V VMs, dual UCS 6296 Fabric Interconnects, ACI leafs (Leaf101 through Leaf106) and spines (Spine201 through Spine204), three APICs, NetApp FAS3200 series storage, the ASR 9000 nV Edge system, and an ASA 5585 cluster.]

All the tiers hosting applications and the NetScaler VMs are deployed on UCS B-Series servers. The ASR 9000 provides external connectivity to the applications. Leafs 101 and 102 are access leafs, and the other leafs are used to connect to the edge device.


Logical Topology
In this section, the physical topology is translated into a logical layout. Figure 9-3 depicts how the Silver container is constructed logically. The logical topology can be divided into two sections: first, ACI Fabric to application servers, and second, ACI Fabric to the external MPLS cloud.
Figure 9-3	Silver Logical Topology

[Figure: The ASR 9000 nV Edge (loopback 10.2.200.1) connects over 10.2.201.1 and 10.2.202.1 to Border Leaf-1 (10.2.201.2, loopback 10.2.200.105) and Border Leaf-2 (10.2.202.2, loopback 10.2.200.106). Behind the access leafs, a NetScaler 1000V HA pair provides SLB (VIP for Web: 11.2.1.0/24, VIP for App and DB: 10.2.4.128/25, SNIP: 10.2.4.0/25) in front of the Web (10.2.1.0/24), App (10.2.2.0/24), and Database (10.2.3.0/24) VM tiers on the UCS chassis.]

A unique VRF is assigned to each Silver tenant and is defined on the access leafs in the fabric. Each application tier and the load balancer are assigned a specific VLAN, and these VLANs are part of the VRF assigned to the Silver tenant. The fabric serves as the default gateway for each of the tiers and for the NetScaler. With the ACI Fabric as the default gateway, it can route packets from one tier to another for both load-balanced and non-load-balanced flows. For external connectivity, two leafs in the fabric are used as border leafs to connect to the ASR 9000 nV Edge router using port channels. Switched virtual interfaces (SVIs) are configured on the leaf switches, and static routes are used to reach the edge router. Interior BGP (iBGP) is configured between the two devices to advertise the routes so that traffic can reach the application tiers; loopback interfaces are configured on both ends for this peering.

APIC Tenant Construction


The previous sections detailed how the Silver tenant is constructed physically and logically. It can be mapped to a tenant in APIC by putting a number of pieces together. Figure 9-4 shows the different pieces put together to create a tenant through APIC. The following sections describe how tenant s001 is created.


Figure 9-4	Silver Tenant - APIC

[Figure: APIC view of the Silver tenant: an external routed network (outside_network) reached over port channels to the ASR 9000, VRF net01, bridge domains slb_bd, bd01, bd02, and bd03, EPGs epg01, epg02, and epg03, and contracts contract01 (http, https, icmp), contract02 (ftp-data, ftp-control, icmp), and contract03 (mysql, icmp) with their consumer and provider relationships and associated filters.]

User Roles and Security Domain


Authentication, Authorization, and Accounting (AAA) functions for the ACI Fabric are managed by APIC policies. User privileges, roles, and security domains together provide this functionality. By assigning read/write access to users, the administrator can prevent a tenant from seeing any other tenant's details, which enables isolation among the tenants. A set of roles is defined in the ACI Fabric, such as aaa, access-admin, fabric-admin, admin, tenant-admin, vmm-admin, and so on. These roles have no-access, read-only, and read-write privileges associated with them; by assigning specific privileges to a user, access to functions in the system can be restricted. A security domain is a tag applied to objects in the ACI Management Information Tree (MIT). A tenant can be linked to a security domain, so access to that tenant object is restricted to the security domain and thus to the users that are part of it. This can be configured in the GUI or through the REST API.

Configuring User Access and Security Domain from APIC GUI


The following steps are involved in configuring user access and Security domains for AAA through GUI.
Step 1

Create security domain.


a.

Navigate to Admin Tab> AAA> Security Management.

b.

Right-click on the Security Domain tab and click on Create Security Domain.

Figure 9-5

Create Security Domain 1


Figure 9-6	Create Security Domain 2

Step 2	Create local user.


a.	Right-click on Local Users and click Create Local User. The first step is to add the user to a security domain; s001_sd is the security domain selected here.

b.	Next, assign access roles. Because the user accesses only tenant s001, the tenant-admin role is assigned, and the user information is entered. Select Submit.

Figure 9-7

Create Local User 1


Figure 9-8

Create Local User 2

Figure 9-9

Create Local User 3
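Configuring User Access and Security Domain using XML

The security domain and local user can also be created through the REST API. The fragment below is a minimal sketch based on the APIC AAA object model (aaaDomain and aaaUser under uni/userext); the user name s001_user and the password shown are example values only, and the fragment would typically be posted to /api/node/mo/uni/userext.xml.

<aaaUserEp>
    <aaaDomain name="s001_sd" />                                      <!-- security domain -->
    <aaaUser name="s001_user" pwd="Cisco12345">                       <!-- local user; name and password are examples -->
        <aaaUserDomain name="s001_sd">                                <!-- add the user to the security domain -->
            <aaaUserRole name="tenant-admin" privType="writePriv" />  <!-- read/write tenant-admin role -->
        </aaaUserDomain>
    </aaaUser>
</aaaUserEp>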

Create Tenant
All the service tier configurations are done inside the tenant container. A tenant (fvTenant) is essentially a logical container for application policies that represent a customer, an organization, or simply a group of policies. Adding a security domain to this container enables the use of domain-based access control. Using such a construct helps to maintain isolation between policies for different customers.

Creating Tenant from APIC GUI


Perform the following procedures to create a tenant from an APIC GUI.
Step 1

Create tenant.
a.

Navigate to Tenant Tab>Add Tenant.


b.

Enter the required fields and make sure you select the security domain s001_sd.

c. The GUI prompts you to add a private network. This is an optional step that can be done later.
d.

Select the Finish button.

Figure 9-10

Create Tenant 1

Figure 9-11

Create Tenant 2

Creating Tenant using XML


To create the tenant using XML:
<fvTenant name="s001" descr="bgp routed external">
<aaaDomainRef name="s001_sd" />
</fvTenant>
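The tenant fragment above, like the other XML fragments in this chapter, can be pushed to the fabric with any REST client. The sketch below shows one way to do it; the APIC hostname and the admin credentials are placeholders, and the session cookie (APIC-cookie) returned by the login call must accompany the configuration POST.

<!-- 1. Authenticate: POST https://<apic>/api/aaaLogin.xml with this body -->
<aaaUser name="admin" pwd="password" />

<!-- 2. Push configuration: POST https://<apic>/api/node/mo/uni.xml with the
     APIC-cookie from step 1 and the desired fragment as the body -->
<fvTenant name="s001" descr="bgp routed external">
    <aaaDomainRef name="s001_sd" />
</fvTenant>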

Private Network
A private network (fvCtx) is an L3 context, more commonly referred to as a Virtual Routing and Forwarding (VRF) instance. It provides IP address space isolation for the different tenants defined in the ACI Fabric. Because tenants do not share VRF space in the ACI Fabric, their IP addresses can overlap.

Create Private Network from APIC GUI


Perform the following procedures to create a private network from an APIC GUI.
Step 1

Navigate to Tenant and select the tenant s001.


Figure 9-12	Create Private Network 1

Step 2	Expand the Networking folder and right-click on Private Networks.

a.	Select Create Private Network.

Figure 9-13	Create Private Network 2

Step 3

Enter the name for the network.


a.

If you wish to create a bridge domain, select the check box Create a Bridge Domain. The bridge
domain can be created later as well.


Figure 9-14

Create Private Network 3

Create Private Network using XML


To create the private network using XML:
<fvTenant name="s001">
<fvCtx name="net01" />
</fvTenant>

Bridge Domain
The bridge domain (fvBD) is a Layer 2 (L2) forwarding construct defined in the fabric. To define a subnet under the bridge domain, it must be linked to the L3 context (private network). A bridge domain has a unique L2 MAC address space and, if enabled, an L2 flooding domain. The private network or VRF can have multiple subnets in its address space, which in turn can be part of one or more bridge domains. A subnet (fvSubnet) defined inside a bridge domain is contained within the bridge domain itself.

Create Bridge Domain from APIC GUI


Perform the following procedures to create a bridge domain from an APIC GUI.
Step 1

Navigate to Tenant and select the tenant s001.


a.

Expand the Networking folder, right-click on Bridge Domains and select Create Bridge Domain.

b.

Enter the name for the bridge domain.

c.

Select the private network net01 created in the previous section.

d. To add subnets, click on Add in the subnet section.


Figure 9-15	Create the Bridge Domain

Step 2	Enter the gateway IP address and the subnet mask. If the subnet needs to be advertised to the external world, mark the scope as public; to limit the subnet to the tenant itself, mark the scope as private.
Figure 9-16

Add Subnets to Bridge Domain


Create Bridge Domain using XML


To create a bridge domain using XML:
<fvTenant name="s001">
<fvBD name="bd01" arpFlood="yes">
<fvSubnet ip="10.2.1.254/24" scope="public" />
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="bd02" arpFlood="yes">
<fvSubnet ip="10.2.2.254/24" scope="private" />
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="bd03" arpFlood="yes">
<fvSubnet ip="10.2.3.254/24" scope="private" />
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
<fvBD name="slb_bd" arpFlood="yes">
<fvSubnet ip="10.2.4.254/25" scope="private" />
<fvSubnet ip="11.2.1.254/24" scope="public" />
<fvSubnet ip="10.2.4.126/25" scope="private" />
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
</fvTenant>

Application Profile
An application profile (fvAp) can be considered a logical container for endpoint groups (EPGs) that caters to the application requirements. Depending on the capability of the application, the number of endpoint groups in an application profile can vary. For the Silver tenant implementation, there are three EPGs, one each for the web, application, and database servers. Based on requirements, there can be multiple application profiles.

Creating Application Profile from APIC GUI


Perform the following procedures to create an application profile from an APIC GUI.
Step 1

Navigate to Tenant and select the tenant s001.


a.

Right click on Application Profiles and select Create Application Profile.

b.

Enter the name for the profile app01.

c. The dialog box also provides the capability to create contracts and EPGs, but this can be done as a

later step.
d.

Click Finish.


Figure 9-17

Create Application Profile 1

Figure 9-18

Create Application Profile 2

Creating Application Profile using XML


To create an application profile using XML:
<fvTenant name="s001">
<fvAp name="app01"/>
</fvTenant>

End Point Groups


Endpoints are physical or virtual end devices that are connected to the network directly or indirectly; examples are servers, virtual machines, storage, Internet clients, and so on. An endpoint group (EPG) is a logical entity that is essentially a collection of endpoints. An EPG can be deployed statically or dynamically. When an EPG is deployed dynamically, a VLAN is assigned to the group from the specified pool, whereas with static deployment the VLAN encapsulation must be specified explicitly. It is important to keep in mind that policies always apply to EPGs, not to individual endpoints. The ACI Fabric supports the following types of EPGs:
•	Application endpoint group (fvAEPg)
•	Layer 2 external outside network instance endpoint group (l2extInstP)
•	Layer 3 external outside network instance endpoint group (l3extInstP)
•	Management endpoint group for out-of-band (mgmtOoB) or in-band (mgmtInB) access


Creating End Point Groups from APIC GUI


Perform the following procedures to create end point groups from an APIC GUI.
Step 1

Navigate to Tenant and select the tenant s001.


a.

Expand the Application Profiles folder, right click on Application EPGs.

b.

Select Create Application EPG.

c.

Enter the name for the Application EPG.

d.

Select the bridge domain to be associated.

e.

Select the VMM domain to dynamically deploy the EPG.

Figure 9-19

Create End-point Groups 1


Figure 9-20

Create End-point Groups 2

Creating End Point Groups using XML


<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
<fvAEPg name="epg02">
<fvRsBd tnFvBDName="bd02" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc"
instrImedcy="immediate" resImedcy="immediate" />
</fvAEPg>
<fvAEPg name="epg03">
<fvRsBd tnFvBDName="bd03" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc"
instrImedcy="immediate" resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>
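The EPGs above are deployed dynamically through the VMM domain. As noted earlier, an EPG can also be deployed statically by binding it to a specific leaf port (or port channel) with an explicit VLAN encapsulation. The fragment below is a minimal sketch of such a static binding; the leaf port eth1/10 and VLAN 100 are example values only and do not correspond to this implementation.

<fvTenant name="s001">
    <fvAp name="app01">
        <fvAEPg name="epg01">
            <!-- static binding: explicit VLAN on a specific leaf port (example values) -->
            <fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]"
                encap="vlan-100" instrImedcy="immediate" />
        </fvAEPg>
    </fvAp>
</fvTenant>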

Filters
Filters match Layer 2 to Layer 4 fields such as TCP/IP header fields, the Layer 3 protocol type, allowed Layer 4 ports, and so on. Filters are associated with the contracts defined for EPG communication; traffic between EPGs is permitted based on the matching criteria defined in the filters.


Creating Filters using APIC GUI


Perform the following procedures to create filters from an APIC GUI.
Step 1

Navigate to Tenant and select the tenant s001.


a.

Expand the Security Policies folder and select Filters.

b.

Right-click on Filters and select Create Filter.

Figure 9-21	Create Filter 1

Step 2	Enter the name for the filter and click on Entries to add a rule. A list of EtherType and IP protocol values is provided in the GUI. Source and destination ports are user-defined based on the application under consideration; for example, port 80 can be configured for a web server and port 3306 for a database. The APIC GUI also provides a pre-defined list of common ports.
Figure 9-22

Create Filter 2


Figure 9-23

Create Filter 3

Creating Filters using XML


To create filters using XML:
<fvTenant name="s001">
<vzFilter name="http">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="http" dToPort="http"
/>
</vzFilter>
<vzFilter name="https">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="https"
dToPort="https" />
</vzFilter>
<vzFilter name="ftp-data">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="10100"
dToPort="10500" />
</vzFilter>
<vzFilter name="ftp-control">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="21" dToPort="21" />
</vzFilter>
<vzFilter name="mysql">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="3306" dToPort="3306"
/>
</vzFilter>
<vzFilter name="ssh">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="22" dToPort="22" />
</vzFilter>
</fvTenant>

Contracts
A contract (vzBrCP) is needed for inter-EPG communication. Subjects defined within the contract use filters to dictate the traffic that can pass between the EPGs, and a subject can define whether its filters are unidirectional or bidirectional. A contract can have multiple rules: one for HTTP, another for HTTPS, and so on. When a contract is assigned to an EPG, the EPG is labeled as either a consumer or a provider. When an EPG consumes a contract, the endpoints in the consumer EPG initiate the communication (as clients) with endpoints in the provider EPG. An EPG can consume and provide the same contract. When no contract is established between two EPGs, communication between them is disabled.


Creating Contracts from APIC GUI


Perform the following procedures to create contracts from an APIC GUI.
Step 1

Navigate to Tenant and select the tenant s001.


a.

Expand the Security Policies folder and select Contracts.

b.

Right-click on Contracts and select Create Contract.

Figure 9-24	Create Contract 1

Step 2	Enter the contract name and add subjects. For example, to add the http filter to the contract, click on Add Subject, which opens a new dialog box. Select the filter http from the drop-down option and set the direction of the filter.
Figure 9-25

Create Contract 2


Figure 9-26

Create Contract 3

Creating Contracts using XML


To create contracts using XML:
<fvTenant name="s001">
<vzBrCP name="contract01" descr="http/https

based

traffic directed from External epg to Tier01">


<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
<vzSubj name="subject02">
<vzRsSubjFiltAtt tnVzFilterName="https" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
<vzSubj name="subject03">
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>

<vzBrCP name="contract02" descr="ftp based traffic


directed from tier01 epg to tier02 epg">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="ftp-data" />
<vzRsSubjFiltAtt tnVzFilterName="ftp-control" />
<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
<vzBrCP name="contract03" descr="mysql DB based traffic
directed from tier02 epg to tier03 epg">
<vzSubj name="subject01">


<vzRsSubjFiltAtt tnVzFilterName="mysql" />


<vzRsSubjFiltAtt tnVzFilterName="icmp" />
</vzSubj>
</vzBrCP>
</fvTenant>

Then assign the contracts to the EPGs:

<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsProv tnVzBrCPName="contract01" />
<fvRsCons tnVzBrCPName="contract02" />
</fvAEPg>
<fvAEPg name="epg02">
<fvRsProv tnVzBrCPName="contract02" />
<fvRsCons tnVzBrCPName="contract03" />
</fvAEPg>
<fvAEPg name="epg03">
<fvRsProv tnVzBrCPName="contract03" />
</fvAEPg>
</fvAp>
</fvTenant>

External Routed Networks


An outside network object defined in the tenant enables external connectivity. The ACI Fabric supports both L2 and L3 connectivity between external public/private networks and the fabric. Leafs that are configured to support external connectivity are called border leafs, and the interfaces on these leafs can be configured as bridged or routed. Border leafs can also be used to connect to storage, compute, and so on, like any other leaf. In the Silver tenant implementation, an L3 external routed network is used, and iBGP is the routing protocol used between the ACI Fabric border leaf switches and the external router. Figure 9-27 shows the configuration implemented in the Silver tenant.
Figure 9-27	Silver Tenant - External Routed Network

[Figure: The ASR 9000 nV Edge (loopback 10.2.200.1) peers over 10.2.201.1 and 10.2.202.1 with Border Leaf-1 (10.2.201.2, loopback 10.2.200.105) and Border Leaf-2 (10.2.202.2, loopback 10.2.200.106) in the ACI Fabric; each side has static routes to reach the peer loopbacks.]

BGP sessions are established to the ASR 9000 nV Edge router through two port channels. As seen in Figure 9-27, two border leafs are used. Three loopback interfaces are configured: one on the ASR 9000 router and one on each border leaf. Static routes provide reachability to the peer loopbacks. The L3 external routed outside network is created for the private network defined in the tenant.
<l3extOut name="l3_outside">                  #External Routed network
<bgpExtP descr="enable bgp" />                #Select BGP Protocol
<l3extRsEctx tnFvCtxName="net01" />           #Associate Private Network
</l3extOut>


If any address space needs to be advertised to the outside network, the external routed network needs to
be bound to the corresponding bridge domain where the subnet is defined.
<fvBD name="bd01">
#Tier1 BridgeDomain
<fvRsBDToOut tnL3extOutName="l3_outside" />
</fvBD>

Once the BGP sessions are established successfully, the tenant routes are visible on the edge router. It is important to note that the private network (VRF) is instantiated dynamically; it is deployed on the access leaf nodes when an endpoint is attached to an EPG, and on the border leafs only when the L3 external network is associated with the private network.

Creating External Routed Outside from APIC GUI


Perform the following procedures to create an external routed outside from an APIC GUI.
Step 1

Navigate to Tenant and select the tenant s001.


a.

Expand the Networking folder and select External Routed Networks.

b.

Right-click on External Routed Networks and select Create Routed Outside.

Figure 9-28	Create External Routed Outside 1

Step 2	In the dialog box, enter the name and select the protocol BGP.
a.

From the drop-down list for Private Network, select the private network to associate the external
network with. In this implementation, it is net01.

b.

Next, create a Node and Interface Profile.

c.

Click the + button to add a new profile.


Figure 9-29	Create External Routed Outside 2

Step 3	Enter a name for the Node Profile. As seen in Figure 9-30, there are two border leafs connecting to the core router. A node (with a loopback interface) is configured on each leaf, and the ASR 9000 peer information is also provided. This also configures the static routes to reach the peer. Interface profiles are used to configure SVIs on the leaf nodes.

Figure 9-30	Create Node Profile 1


Figure 9-31	Create Nodes 1

Figure 9-32	Create Nodes 2

Step 4	Create two SVI Interface Profiles.


Figure 9-33

Select SVI Interface 1

Figure 9-34

Create SVI Interface 2


Figure 9-35	Create SVI Interface 3

Figure 9-36	Create SVI Interface 4

Step 5	Configure external EPG networks. These are the networks that the EPG members can see and reach. Since the client network subnet can be anything, leave it as 0.0.0.0/0, which allows all subnets.


Figure 9-37	Create External Network

Step 6	Assign contracts to the external EPG.

Figure 9-38	Add Contracts to External Network EPG

Step 7	Associate the routed outside network to the bridge domain.


Figure 9-39

Associate L3 Outside to Bridge Domain

Create External Routed Outside Using XML


1.	External routed network configuration for ACI Fabric:

Note	While using the XML script, make sure you remove the #comments.
<fvTenant name="s001">
#TenantName
<l3extOut name="l3_outside">
#External Routed network
<bgpExtP descr="enable bgp" />
#Select BGP Protocol
<l3extRsEctx tnFvCtxName="net01" />
#PrivateNetwork / VRF
<l3extLNodeP name="bgp_nodes">
#BGP nodes for Peering
<bgpPeerP addr="10.2.200.1" />
#PeerNode IP on asr9k
<l3extRsNodeL3OutAtt rtrId="10.2.200.105"
#BGP node1 on ACI fabric
tDn="topology/pod-1/node-105">
<ipRouteP ip="10.2.200.1/32">
#static-route to Peer
<ipNexthopP nhAddr="10.2.201.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extRsNodeL3OutAtt rtrId="10.2.200.106"
#Node2
tDn="topology/pod-1/node-106">
<ipRouteP ip="10.2.200.1/32">
#static-route
<ipNexthopP nhAddr="10.2.202.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extLIfP name="svi01">
#svi for portchannel1
<l3extRsPathL3OutAtt addr="10.2.201.2/24"
encap="vlan-411" ifInstT="ext-svi"


tDn="topology/pod-1/paths-105/pathep-[pc_n105_asr9k]" />
</l3extLIfP>
<l3extLIfP name="svi02">
#svi for port-channel2
<l3extRsPathL3OutAtt addr="10.2.202.2/24"
encap="vlan-411" ifInstT="ext-svi"
tDn="topology/pod-1/paths-106/pathep-[pc_n106_asr9k]" />
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="outside_network">
#Layer3 ext-EPG
<fvRsCons tnVzBrCPName="contract01" />
#consume Tier1 contract
<l3extSubnet ip="10.2.201.0/24" />
#external allowed subnet
<l3extSubnet ip="10.2.202.0/24" />
#external allowed subnet
<l3extSubnet ip="100.2.201.0/24" />
#external allowed subnet
</l3extInstP>
</l3extOut>
</fvTenant>

2. Associate L3 outside network to Bridge domain:


<fvTenant name="s001">
#TenantName
<fvBD name="bd01">
#Tier1 BridgeDomain
<fvRsBDToOut tnL3extOutName="l3_outside" />
</fvBD>
<fvBD name="slb_bd">
#NetScaler BridgeDomain
<fvRsBDToOut tnL3extOutName="l3_outside" />
</fvBD>
</fvTenant>

3.

Configure ASR 9000:


conf t
!# VRF definition for silver tenant s001
vrf s001
address-family ipv4 unicast
import route-target
2:417
export route-target
2:417
!
!#LoopBack interface on asr9k for iBGP peering
interface loopback 411
vrf s001
ipv4 address 10.2.200.1/32
!
!#sub-interface for portchannel1
interface Bundle-Ether 5.411
vrf s001
ipv4 address 10.2.201.1 255.255.255.0
encapsulation dot1q 411
!
!#sub-interface for portchannel2
interface Bundle-Ether 6.411
vrf s001
ipv4 address 10.2.202.1 255.255.255.0
encapsulation dot1q 411
!
!#static routes for BGP peers defined on ACI fabric
router static
vrf s001
address-family ipv4 unicast
10.2.200.105/32 10.2.201.2
10.2.200.106/32 10.2.202.2


!
!# neighbor peer information neighbor <ip-address> update-source <interface>
router bgp 200
vrf s001
rd 2:417
address-family ipv4 unicast
!
neighbor 10.2.200.105
remote-as 200
update-source loopback411
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
!
neighbor 10.2.200.106
remote-as 200
update-source loopback411
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
commit
end

Traffic Flow Paths


The traffic flows defined in the Silver tenant implementation are as follows:
•	Load-balanced flow from the external network to Tier1 (HTTP/HTTPS)
•	Non-load-balanced flow from the external network to Tier1 (HTTP/HTTPS)
•	Load-balanced flow from Tier1 to Tier2 (FTP)
•	Load-balanced flow from Tier2 to Tier3 (MySQL)

For all the load-balanced flows, traffic hits the VIP configured on the NetScaler, and based on the load-balancing algorithm the request is forwarded to the corresponding real server.


Figure 9-40	Silver Tenant - External to Tier1 Load-Balanced Traffic Flow

[Figure: Traffic from the MPLS L3 VPN is routed by the ASR 9000 to the border leafs (leaf105/leaf106) and on to the access leafs (leaf101/leaf102), which route the request to the Web SLB VIP on the NetScaler (pvt_ns); the NetScaler load balances the request and sends it to a Tier01 Web VM.]

Figure 9-41	Silver Tenant - External to Tier1 Non-Load-Balanced Traffic Flow

[Figure: Traffic from the MPLS L3 VPN is routed by the ASR 9000 to the border leafs (leaf105/leaf106) and on to the access leafs (leaf101/leaf102), which route the request directly to a Tier01 Web server VM.]


Figure 9-42	Silver Tenant - Tier1 to Tier2 Load-Balanced Traffic Flow

[Figure: A Tier01 Web VM sends the request through the access leafs (leaf101/leaf102) to the NetScaler VIP (pvt_ns), which load balances it to a Tier02 App VM.]

Figure 9-43	Silver Tenant - Tier2 to Tier3 Load-Balanced Traffic Flow

[Figure: A Tier02 App VM sends the request through the access leafs to the NetScaler VIP (pvt_ns), which load balances it to a Tier03 Database VM to retrieve the data.]

Server Load Balancing using NetScaler 1000v


Citrix NetScaler 1000v is a virtual appliance that provides the NetScaler feature set. It is an application delivery controller (ADC) solution providing application performance enhancements, QoS, offloading of application servers, and so on. It can run on multiple hypervisor platforms and can be integrated with the Cisco Nexus 1000v. It is a cost-effective solution to support applications and integrate networking services with application delivery. This section covers the following topics:
•	Citrix NetScaler 1000v Overview
•	Citrix NetScaler 1000v Implementation
•	Citrix NetScaler 1000v L4-7 Load-Balancing Policies

NetScaler 1000v Overview


Citrix NetScaler 1000v is available in three editions: Standard, Enterprise, and Platinum, with the feature set growing from the Standard to the Platinum edition. The Platinum edition offers web application delivery for critical applications with added security and fast performance. Citrix NetScaler 1000v offers the following throughput licenses: 10 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 3 Gbps, and 4 Gbps. Features provided by NetScaler include L4-L7 traffic management (L4 load balancing, L7 content switching, database load balancing), application acceleration, application security, and network integration.


NetScaler 1000v Implementation


In this release, the Citrix NetScaler 1000v virtual appliance is installed on the VMware ESXi hypervisor. Virtual machines can be spun up from a pre-installed OVA template. Two NetScaler 1000v appliances are configured in Active/Standby mode to form a high-availability cluster. In the network topology, the NetScaler is implemented in one-arm mode, as shown in Figure 9-44.

One-Arm Mode
In one-arm mode, a single data interface on the NetScaler 1000v is used for both internal and external communication, and a VLAN is assigned to the arm attached to the load balancer. In such a configuration, the default gateway for the NetScaler is the upstream router. From a traffic-flow standpoint, traffic destined to a web server in the server farm first hits the load-balanced server VIP on the NetScaler device. Once the load-balancing algorithm is applied, the request is forwarded from the load balancer through the same interface to the upstream router and then on to the real server.
Figure 9-44	NetScaler One-Arm Mode

[Figure: The NetScaler 1000V attaches to the ACI Fabric with a single data arm and load balances traffic to a server farm of VMs (VM1 through VM4).]

High Availability (HA) Configuration


Two Citrix NetScaler 1000v instances are configured together as an active/standby pair, and all configurations and subsequent changes are synced from the active instance to the standby. HA pairing is done between the instances: from the GUI, the active instance is configured first, and as soon as the second node is added using HA, it automatically takes over as standby and syncs the configuration. During a failover, the standby instance takes over as active, and when the former active node comes back online it becomes the standby. The configuration on both instances is exactly the same except for the NSIP and the HA configuration. HA hello packets are exchanged at the interval set by helloInterval, and the dead interval (deadInterval) determines when an instance decides that its peer is down.


Figure 9-45	Citrix NetScaler 1000v HA Configuration

[Figure: An HA cluster of an active NetScaler 1000V node and a standby node exchanging heartbeats and synchronizing configuration.]

Network Setup
Setting up the network for the NetScaler 1000v appliance includes configuring the NetScaler IP (NSIP) for management connectivity, the Subnet IP (SNIP) address for communication on the data plane, static routes for reachability to external subnets, the default gateway, the source NAT IP, VLANs, and so on. When implementing with ACI, all of these are automated except for the NSIP. When the NetScaler appliance is created from an OVA template, configuring the NSIP is a mandatory step; once the NSIP is configured, management access to the instance is established. All of the remaining network configuration is done through APIC.

NetScaler 1000v L4-7 Load Balancing Policies


In this release, NetScaler 1000v is configured to load balance the HTTP, HTTPS, FTP, and MySQL applications. Figure 9-46 shows the various components involved in configuring a load-balanced server. A virtual IP address (VIP) sits in front of the real servers; all requests from the client are directed to the VIP, which then proxies them to a real server based on the load-balancing algorithm. Unless USIP mode is enabled on the NetScaler, it changes the client IP to one of its SNIPs. When a single server hosts an application, it is configured as a service in NetScaler; a group of servers can be put together into a service group. NetScaler uses health monitor probes to keep track of the health of the servers.
Figure 9-46	Citrix NetScaler 1000v L4-7 Load Balancing

[Figure: A client connects to the load-balanced server VIP on the NetScaler, which applies source NAT and health monitoring and forwards requests to the server farm services/service group (VM1 through VM3).]

Server
The first step in creating a load-balanced server (LBS) is to define the application server. The only information needed is the IP address of the server. To create an LBS for a group of servers, all the server information needs to be added.
add server <server-name> <server-ipaddress>


Services / Service Groups


Once the servers hosting the application are added, a service (for a single server) or a service group (for multiple servers) needs to be configured for the application. While configuring the service or service group, additional information needs to be provided about the application, the port used, and the associated server.
add serviceGroup <group-name> <service-type>
bind serviceGroup <group-name> <server1-name> <port>

Load Balanced Virtual Server


The final step is to configure a load balanced server. A virtual IP is assigned to the server to which the
client connects. Unique load balanced server is assigned for a unique application. Services configured
for the application is bound to the server.
add lb vserver <lbvserver-name> <vip> <port> -lbMethod ROUNDROBIN -persistenceType
COOKIEINSERT
bind lb vserver <lbvserver> <service-group>

Health Monitoring
To check connectivity to the servers, monitor probes are used. Probes are sent at regular intervals to check the health of the server, and based on the result the state of the services/service groups is marked as up or down. Depending on the application, there can be different types of health monitors, such as ping, TCP, HTTP, and so on.

NetScaler 1000v Implementation using Service Graph


ACI technology provides the capability to insert L4-L7 functions using a service graph. There are many benefits to using the ACI Fabric to provision L4-L7 services: services can be provisioned easily through the GUI, the REST API, or Python scripts, and functions offered by different discrete devices can be concatenated. The ACI Fabric supports both physical devices connected to any leaf and virtual appliances running on blade servers. APIC talks to concrete devices such as firewalls and load balancers to push the defined configurations. A device package is used by ACI to talk to the device API; the Citrix NetScaler device package enables the communication between APIC and the NetScaler NITRO API. The device package plays an intermediary role: it collects configuration from APIC and pushes it to the device, and it collects statistics from the device and sends them back to APIC, which helps in raising faults and updating health scores. Sets of functions are defined in the device package. Before implementing the service graph configuration (load-balanced server configuration, and so on), the device itself needs to be configured and registered with the APIC; this is implemented using the device cluster configuration. This section covers the following topics:
•	Citrix NetScaler Device Package
•	Device Cluster Implementation
•	Service Graph
•	Network Parameter Configuration
•	Load Balancing Implementation
•	SSL Offload Implementation


Citrix NetScaler Device Package


For provisioning L4-L7 services, APIC needs to have a method by which it can communicate to the API
of concrete devices. The plugin that enables this communication between Cisco APIC and the device is
called as a device package. The NetScaler device package includes a description of the device and lists
the parameters it is exposing for Cisco APIC configuration and the scripts that allow Cisco ACI to talk
to this device.
Figure 9-47

NetScaler 1000v Device Package 1

The device description includes the model, version, and package version. Interface labels are used by APIC to bind an interface to a connector for the specific functions provided by the device: the inside interface can be considered the interface used for secure internal communication, whereas the outside interface is used for less secure external communication. Interface labels are mapped to the physical interfaces on the registered device. The L4-L7 service functions provided by the NetScaler device package are shown in Figure 9-48; the two main functions used here are LoadBalancing and SSLOffload.


Figure 9-48

NetScaler 1000v Device Package Functions

Figure 9-49

NetScaler 1000v Load Balancing Function

These functions are called function nodes; they apply the functions defined in the node to the traffic flowing through them. Each function node has two connectors, called meta-connectors, which define the input and output connections for a node. The load-balancing function node connectors are called external and internal.
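Once the device package has been imported into the APIC, it appears in the management information tree under uni/infra. A quick way to confirm the import, and to find the exact distinguished name referenced by the vnsRsMDevAtt and vnsRsMetaIf relations later in this section, is to query the vnsMDev class; the sketch below assumes the NetScaler 10.5 package used in this implementation, and the representation shown is approximate.

<!-- GET https://<apic>/api/node/class/vnsMDev.xml -->
<!-- An imported NetScaler 10.5 package is represented roughly as: -->
<vnsMDev dn="uni/infra/mDev-Cisco-NetScaler1KV-10.5"
    vendor="Cisco" model="NetScaler1KV" version="10.5" />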

L4-L7 Devices Implementation


In ACI, there are three major steps involved while configuring NetScaler in one-arm mode. This includes
creation of L4-L7 devices also called as Logical Device Cluster, creating Concrete Devices and finally
creating logical interfaces. Figure 9-48 shows the overall structure of an L4-L7 device.


Figure 9-50	Citrix NetScaler 1000v L4-7 Load Balancing

[Figure: NetScaler 1000V in one-arm mode: a logical device cluster containing two concrete devices in an HA cluster; each concrete interface (1/1) maps to the custom-named logical interfaces, which map to the inside and outside logical interface labels from the device package.]

Device Cluster (Logical Devices)


One or more concrete devices put together form a device cluster, also known as a logical device (vnsLDevVip). An administrator can set up a maximum of two concrete devices in active/standby mode. While creating the logical device, other information is needed, including the name, device package, cluster or single mode, connectivity (virtual or physical), and access information. When a service graph is deployed, it is linked to a device cluster through the device selection policy (also known as the logical device context).

Concrete Devices
Concrete devices (vnsCDev) are the actual devices running the service function; they can be virtual or physical. In our implementation, two concrete devices are defined as an HA cluster. While configuring a concrete device, provide information such as the access credentials, the VM name, the virtual interface details, and so on. When concrete devices are added to the logical device cluster, the physical (concrete) interfaces (vnsCIf) on the concrete device are mapped to the logical interfaces.

Logical Interfaces
For each logical device, there are two logical interfaces (vnsLIf), internal and external, that are mapped to the logical interface labels defined in the device package. During service graph rendering, the function node connectors are mapped to these logical interfaces. Figure 9-51 shows the mapping between the interfaces at the different levels. The service graph connectivity is explained in detail in the next section.


Figure 9-51	Interface Mapping

[Figure: Mapping between the service graph function node meta-connectors (vnsAbsFuncConn external/internal), the logical device interfaces (vnsLIf external/internal), the concrete device interfaces (vnsCIf 1_1), and the device package meta interface labels (mIfLbl inside/outside).]

Configuring NetScaler 1000v


Before adding the NetScaler devices to the ACI Fabric, a few things must be configured on the devices. This includes adding static routes for specific subnets, deleting the default route, and adding specific users for administration.

Note

The default gateway for the NetScaler device is the ACI Fabric that is added as a part of the service graph
configuration.
add route 10.0.0.0 255.255.0.0 10.0.39.253
add route 172.0.0.0 255.0.0.0 10.0.39.253
rm route 0.0.0.0 0.0.0.0 10.0.39.253
add system user apic Cisco12345
bind system user apic superuser 100
add system user admin Cisco12345
bind system user admin superuser 100
save ns config

Creating L4-L7 Device from APIC GUI


Perform the following procedures to create a L4-L7 device from an APIC GUI.
Step 1

Navigate to Tenant tab and select the corresponding tenant.


a.

Navigate to L4-L7 Services tab.

b.

Right click on L4-L7 Devices and select Create L4-L7 Devices.


Figure 9-52	Create L4-L7 Devices 1

Step 2	Under the General tab, enter the name for the logical device.
a.	Select the device package from the drop-down. The mode can be set to HA Cluster and the device type to Virtual; this is a single-context implementation. Once the device type is set to Virtual, the connectivity is set to the VMM domain.

b.	Select Configure VMM Domain.

c.	Enter the access credentials; these are common to both the logical and the concrete devices. For the concrete device configuration, provide the management IP address and management port, and select the VM from the list of VMs. The data interface 1_1 is used for data-plane communication. The direction for the interface can be provider, consumer, or provider and consumer. Since the NetScaler is deployed in one-arm mode, the same interface is used for both provider and consumer communication.

Note	APIC does not support the syntax 1/1 for a virtual interface; replace / with _, that is, 1_1.

Note	The Direction parameter simply indicates the type of logical interface to which the virtual concrete interface is mapped: Provider refers to the internal interface and Consumer refers to the external interface. As seen in Figure 9-53, 1_1 is mapped to both external and internal.


Figure 9-53	Create L4-L7 Devices 2

Step 3	Configure HA on the devices and enable the required modes and features. Management interface 0_1 is used for the HA heartbeat exchange. As management access is enabled over the SNIP, the user should be able to access the cluster IP through the user interface or SSH. Once the device-specific parameters are configured, the cluster features can be enabled; the features enabled are SSL, SSLOffload, LoadBalancing, and LB.


Figure 9-54

L4-L7 Device Parameter Configuration

Figure 9-55

L4-L7 Device Features and Modes Configuration


Creating L4-L7 Device using XML


1.

Create L4-L7 devices (Logical Device Cluster):


<fvTenant name="s001">
<vnsLDevVip name="ns01_ns02" devtype="VIRTUAL">
<vnsRsMDevAtt tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5"/>
<vnsRsALDevToDomP tDn="uni/vmmp-VMware/dom-ics3_prod_vc"/>
<vnsCMgmt name="devMgmt" host="10.0.39.102" port="80"/>
<vnsCCred name="username" value="nsroot"/>
<vnsCCredSecret name="password" value="nsroot"/>
<vnsDevFolder key="enableMode" name="EnableMode">
<vnsDevParam key="L3" name="L3Mode" value="ENABLE" />
<vnsDevParam key="DRADV" name="dradv" value="ENABLE" />
<vnsDevParam key="USNIP" name="usnip" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="enableFeature" name="EnableFeature">
<vnsDevParam key="LB" name="lb" value="ENABLE"/>
<vnsDevParam key="LoadBalancing" name="Loadbalancing" value="ENABLE"/>
<vnsDevParam key="SSL" name="ssl" value="ENABLE"/>
<vnsDevParam key="SSLOffload" name="ssloffload" value="ENABLE"/>
</vnsDevFolder>
</vnsLDevVip>
</fvTenant>

2.

Create concrete device:


<fvTenant name="s001">
<vnsLDevVip name="ns01_ns02">
<vnsCDev name="ns01" vcenterName="ics3_vc_tenant_cluster" vmName="s001-ns01">
<vnsCIf name="1_1" vnicName="Network adapter 2">
</vnsCIf>
<vnsCMgmt name="devMgmt" host="10.0.39.2" port="80" />
<vnsCCred name="username" value="nsroot" />
<vnsCCredSecret name="password" value="nsroot" />
<vnsDevFolder key="HighAvailability" name="HighAvailability">
<vnsDevParam key="snip" name="snip" value="10.0.39.102" />
<vnsDevParam key="netmask" name="netmask" value="255.255.255.0" />
<vnsDevParam key="interface" name="interface" value="0_1" />
<vnsDevParam key="mgmtaccess" name="mgmtaccess" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="HAPeer" name="HAPeer">
<vnsDevParam key="id" name="id" value="1" />
<vnsDevParam key="ipaddress" name="ipaddress" value="10.0.39.12" />
</vnsDevFolder>
</vnsCDev>
<vnsCDev name="ns02" vcenterName="ics3_vc_tenant_cluster" vmName="s001-ns02">
<vnsCIf name="1_1" vnicName="Network adapter 2">
</vnsCIf>
<vnsCMgmt name="devMgmt" host="10.0.39.12" port="80" />
<vnsCCred name="username" value="nsroot" />
<vnsCCredSecret name="password" value="nsroot" />
<vnsDevFolder key="HighAvailability" name="HighAvailability">
<vnsDevParam key="snip" name="snip" value="10.0.39.102" />
<vnsDevParam key="netmask" name="netmask" value="255.255.255.0" />
<vnsDevParam key="interface" name="interface" value="0_1" />
<vnsDevParam key="mgmtaccess" name="mgmtaccess" value="ENABLE" />
</vnsDevFolder>
<vnsDevFolder key="HAPeer" name="HAPeer">
<vnsDevParam key="id" name="id" value="1" />
<vnsDevParam key="ipaddress" name="ipaddress" value="10.0.39.2" />


</vnsDevFolder>
</vnsCDev>
</vnsLDevVip>
</fvTenant>

Note

The vcenterName is the controller defined under the VMM domain.


3.

Create logical interfaces and map them to concrete interfaces:


<fvTenant name="s001">
<vnsLDevVip name="ns01_ns02">
<vnsLIf name="internal">
<vnsRsMetaIf tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mIfLbl-inside"/>
<vnsRsCIfAtt tDn="uni/tn-s001/lDevVip-ns01_ns02/cDev-ns01/cIf-[1_1]"/>
<vnsRsCIfAtt tDn="uni/tn-s001/lDevVip-ns01_ns02/cDev-ns02/cIf-[1_1]"/>
</vnsLIf>
<vnsLIf name="external">
<vnsRsMetaIf tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mIfLbl-outside"/>
<vnsRsCIfAtt tDn="uni/tn-s001/lDevVip-ns01_ns02/cDev-ns01/cIf-[1_1]"/>
<vnsRsCIfAtt tDn="uni/tn-s001/lDevVip-ns01_ns02/cDev-ns02/cIf-[1_1]"/>
</vnsLIf>
</vnsLDevVip>
</fvTenant>
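With the logical device cluster, concrete devices, and logical interfaces in place, one more object ties a deployed service graph (described in the next section) to this cluster: the device selection policy, or logical device context, mentioned earlier. The fragment below is a minimal sketch; the graph name slb_graph is an example that must match the graph actually created later, the node label any is a wildcard, and the logical interfaces are steered into the slb_bd bridge domain used for the NetScaler in this container.

<fvTenant name="s001">
    <!-- device selection policy: match contract/graph/node and point at the cluster -->
    <vnsLDevCtx ctrctNameOrLbl="contract01" graphNameOrLbl="slb_graph" nodeNameOrLbl="any">
        <vnsRsLDevCtxToLDev tDn="uni/tn-s001/lDevVip-ns01_ns02" />
        <vnsLIfCtx connNameOrLbl="external">
            <vnsRsLIfCtxToLIf tDn="uni/tn-s001/lDevVip-ns01_ns02/lIf-external" />
            <vnsRsLIfCtxToBD tDn="uni/tn-s001/BD-slb_bd" />
        </vnsLIfCtx>
        <vnsLIfCtx connNameOrLbl="internal">
            <vnsRsLIfCtxToLIf tDn="uni/tn-s001/lDevVip-ns01_ns02/lIf-internal" />
            <vnsRsLIfCtxToBD tDn="uni/tn-s001/BD-slb_bd" />
        </vnsLIfCtx>
    </vnsLDevCtx>
</fvTenant>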

Service Graph
In ACI, services are inserted using a service graph that is instantiated on the ACI Fabric through APIC. The user defines the services, while the service graph caters to the network and service functions needed by the application. A service graph can provide a single function or can concatenate several functions together. The basic functionality can be thought of as a firewall sitting between two application tiers: traffic running between the two EPGs has to pass through the function defined in the service graph, which is established when the graph is rendered. The service graph can be configured either using XML scripts or through the APIC GUI, where the function required for the service can be chosen. From a device package point of view, the function node can be regarded as a meta device; the meta device is associated with the actual device when the graph is rendered. Figure 9-56 shows two examples of a service graph: the first supports just a single firewall function, whereas in the second a firewall function and a load-balancer function are concatenated together. The main components that make up the graph are the function nodes, the terminal nodes (provider and consumer), and the function node connectors.

Note	A service graph stitches functions together, not actual network devices.
Figure 9-56	Service Graph

[Figure: Two example service graphs between a consumer terminal node and a provider terminal node: one with a single firewall function node, and one with a firewall function node followed by a load balancer function node.]


Figure 9-57

Service Graph Template

The function node represents the service function and contains the parameters to be configured for the service. These parameters can be configured at the EPG, application profile, or tenant level, or as part of the service graph itself. The function node connectors have a VLAN or VXLAN associated with them and can be considered an EPG. A service graph is inserted for traffic between two application EPGs; these consumer and provider EPGs are referred to as terminal nodes.
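Once a graph template exists, it is rendered by attaching it to the contract whose consumer and provider EPGs form the terminal nodes. The fragment below is a minimal sketch of that attachment; slb_graph is an example graph name and must match the template actually created in the steps that follow.

<fvTenant name="s001">
    <vzBrCP name="contract01">
        <vzSubj name="subject01">
            <!-- render the service graph for traffic matched by this subject -->
            <vzRsSubjGraphAtt tnVnsAbsGraphName="slb_graph" />
        </vzSubj>
    </vzBrCP>
</fvTenant>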

Service Graph Configuration


As mentioned earlier, a service graph is like a template that is used to render services. Creating a service graph is a two-step process: first create a function profile, which consists of the parameters to be configured for the service, and then create a service graph template and assign the function profile to it. A simple service graph can be created using the following steps.

Creating Service Graph from APIC GUI


Perform the following procedures to create a service graph from an APIC GUI.
Step 1

Navigate to the Tenant Tab.


a.

Select the tenant and then click on L4-L7 services.

b.

Right-click on Function Profiles and select Create L4-L7 Services Function Profile.


Figure 9-58	Create Function Profile 1

Step 2	Enter a name for the Function Profile.


a.

Since simple load-balancing for applications like HTTP, HTTPS, FTP, and MySQL is performed, a
single profile can be created.

b.

Since there are multiple profiles based on the function node, a profile can be created and added to
the function profile.

Figure 9-59

Create Function Profile 2

c.

Make sure you uncheck the Copy Existing Profile Parameters field. There are predefined profile parameters; however, when creating a custom profile, the existing ones do not need to be copied.

d.

Select the device function from the drop-down menu. The list contains all the functions provided in
the device package. Select Load Balancing.

e.

Select the Submit button.


Figure 9-60

Function Profile Parameters

Note	None of the parameters under the function profile is configured at this point. This can be done at a later stage. An empty profile is useful when the service graph parameters are configured at the EPG, application profile, or tenant level.

Note	Function profiles and function profile groups that are created within a tenant cannot be used in another tenant; all such configurations are local to the tenant.

Step 3

Create the Service Graph. There are two options available for creating the Service Graph. Option one
is creating a graph using pre-existing templates. Option two is to create a custom template to select the
function node and define the properties. The following steps explore option one first.
a.

Right-click on L4-L7 service graph template.

b.

Click Create a L4-L7 Service Graph Template.


Figure 9-61

Create Service Graph Template 1

c.

Enter a name for the graph and, from the Type drop-down field, select a pre-defined template. In
this implementation, there is a single load balancer between two EPGs, which is configured in
one-arm mode.

d.

Select Single Node ADC in One-Arm Mode.

Figure 9-62

Create Service Graph Template 2

e.

Select the corresponding ADC function.

f.

Select the function profile created in Step 2.


Figure 9-63

Create Service Graph Template 3

g. This creates a graph as shown in Figure 9-64. Note that the function node name is given as ADC by

default since the graph was created using a pre-defined template.


Figure 9-64

Create Service Graph Template 4

Step 4

In the second option, the Service Graph is created using the advanced template option.
a.

Right-click on Create a L4-L7 Service Graph Template (Advanced).


Figure 9-65

Create Service Graph Template (Advanced) 1

b.

From the list of functions under the device package on the left-hand side, drag and drop the
load-balancing function. This creates three nodes on the screen: the function node and the two terminal
nodes, provider and consumer.

c.

Enter a name for the graph.

d.

Once the function node is inserted, a screen pops up to associate a function profile to the node. Select
the function profile created in step 2. Since there are no parameters configured under the profile, the
function node will not have any parameters configured.

Figure 9-66

Create Service Graph Template (Advanced) 2


Figure 9-67

Associate Function Profile to Service Graph

e. Add connections between the terminal nodes and the function node. Once the connections are made,

APIC asks for the properties as shown below.


f.

Select L3 and check Unicast Route.

g.

Select Ok.

Figure 9-68

Connection Properties

h. While creating a service graph in the advanced mode, the function node and connector names can

be renamed. To change them, double-click on the name and enter any custom name. Then select the
Submit button.
Figure 9-69

Service Graph Template


Creating service graph using XML


1.

Create the Function Profile and Function Profile Group:


<fvTenant name="s001">
<vnsAbsFuncProfContr>
<vnsAbsFuncProfGrp name="test_aci1.0">
#Function Profile Group
<vnsAbsFuncProf name="icdc_aci1.0_test">
#Function profile
<vnsRsProfToMFunc
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing" />
#Device Function
<vnsAbsDevCfg
dn="uni/tn-s001/absFuncProfContr/absFuncProfGrp-test_aci1.0/absFuncProf-icdc_aci1.0_te
st/absDevCfg" name="devConfig" />
<vnsAbsFuncCfg
dn="uni/tn-s001/absFuncProfContr/absFuncProfGrp-test_aci1.0/absFuncProf-icdc_aci1.0_te
st/absFuncCfg" name="funcConfig" />
</vnsAbsFuncProf>
</vnsAbsFuncProfGrp>
</vnsAbsFuncProfContr>
</fvTenant>

2.

Create the Service Graph:


<fvTenant name="s001">
<vnsAbsGraph name = "lb_epg_graph">
<vnsAbsTermNodeProv name = "Provider">
<vnsAbsTermConn name = "P"
dn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsTermNodeProv-Provider/AbsTConn"/>
</vnsAbsTermNodeProv>
<vnsAbsTermNodeCon name = "Consumer">
<vnsAbsTermConn name = "C"
dn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsTermNodeCon-Consumer/AbsTConn"/>
</vnsAbsTermNodeCon>
<!-- LoadBalancing Provides LoadBalancing functionality -->
<vnsAbsNode name = "LoadBalancing" funcType="GoTo" >
<vnsAbsFuncConn name = "external">
<vnsRsMConnAtt
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing/mConn-external" />
</vnsAbsFuncConn>
<vnsAbsFuncConn name = "internal">
<vnsRsMConnAtt
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing/mConn-internal" />
</vnsAbsFuncConn>
<vnsRsNodeToAbsFuncProf
tDn="uni/tn-s001/absFuncProfContr/absFuncProfGrp-icdc_aci1.0/absFuncProf-icdc_aci1.0_L
oadBalancing"/>
<vnsRsNodeToMFunc
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing"/>
</vnsAbsNode>
<vnsAbsConnection name = "CON1" adjType = "L3" >
<vnsRsAbsConnectionConns
tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsTermNodeCon-Consumer/AbsTConn" />
<vnsRsAbsConnectionConns
tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsNode-LoadBalancing/AbsFConn-external" />
</vnsAbsConnection>
<vnsAbsConnection name = "CON2" adjType = "L3" >
<vnsRsAbsConnectionConns
tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsNode-LoadBalancing/AbsFConn-internal" />


<vnsRsAbsConnectionConns
tDn="uni/tn-s001/AbsGraph-lb_epg_graph/AbsTermNodeProv-Provider/AbsTConn" />
</vnsAbsConnection>
</vnsAbsGraph>
</fvTenant>

Configuring device/function parameters under Service Graph


When configuring the service graph parameters, there are two folders assigned for this purpose: device
config and function config. All the device-specific configurations for an application are done under the
device config folder and then need to be associated with a defined function under the function config
folder. This section provides an example of configuring a SNIP from both the APIC GUI and XML scripts.
The same procedure needs to be followed for configuring other parameters such as service, service group,
lbvserver, lbmonitor, and so on; an illustrative lbvserver sketch is shown below.
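As an illustration of how the same device config pattern extends to one of those objects, the following sketch defines an lbvserver folder under the function profile's device config. This is a hypothetical sketch rather than part of the validated configuration: the folder key lbvserver and the parameter keys name, ipv46, servicetype, and port are the NetScaler device package keys used in the EPG-level examples later in this chapter, while the folder name VServer1, the parameter names (vip_name and so on), and the values are illustrative. As with the SNIP example that follows, the folder would then be related to the LoadBalancing function under the function config.

<fvTenant name="s001">
    <vnsAbsFuncProfContr>
        <vnsAbsFuncProfGrp name="test_aci1.0">
            <vnsAbsFuncProf name="icdc_aci1.0_test">
                <vnsAbsDevCfg>
                    <!-- Illustrative virtual server definition using device package keys -->
                    <vnsAbsFolder key="lbvserver" name="VServer1">
                        <vnsAbsParam key="name" name="vip_name" value="http_11.2.1.1"/>
                        <vnsAbsParam key="ipv46" name="vip_ip" value="11.2.1.1"/>
                        <vnsAbsParam key="servicetype" name="vip_proto" value="HTTP"/>
                        <vnsAbsParam key="port" name="vip_port" value="80"/>
                    </vnsAbsFolder>
                </vnsAbsDevCfg>
            </vnsAbsFuncProf>
        </vnsAbsFuncProfGrp>
    </vnsAbsFuncProfContr>
</fvTenant>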

Configuring Parameters from the GUI


Perform the following procedures to configure parameters from the GUI.
Step 1

Click on the Function Profile associated with the service graph and click on the pencil icon. This enters
into the edit mode.
Figure 9-70

Configure Service Graph Parameters 1

Step 2

As mentioned earlier, there are two main folders to be configured: Device Config and Function Config.

Figure 9-71

Configure Service Graph Parameters 2

Step 3

Open up the Device Config folder. IP addresses are defined under the Network folder.
a.

Double-click on network folder and assign a name to this parent folder and hit update.


Figure 9-72

Configure Service Graph Parameters 3

Step 4

Browse the Network folder to locate the nsip folder. This is where all the SNIP addresses are configured.

a.

Double-click on the nsip folder, name it, and select the Update button.

Figure 9-73

Configure Service Graph Parameters 4

Step 5

Configure the ipaddress and netmask parameters.


a.

Double-click on these parameters, and assign names and values. Make sure the name is unique and does
not overlap with any other name already present. Both value fields are the actual ipaddress and
netmask to be configured.


Figure 9-74

Configure Service Graph Parameters 5

Step 6

Configure the function parameters. Once the device parameters are configured, the function parameters
can be configured.
a. As shown in Figure 9-73, there are predefined folders for a specific function; that is, the folder for

the external communication is called external_network, the folder for the internal communication
is called internal_network, and so on. The SNIP address configured in the previous step is used
for internal communication between the NetScaler device and real servers.
b.

Double-click on the parent folder internal_network, and assign a name.

c. The SNIP address needs to be associated with the internal_network_key.


d. When finished, Update to submit the configuration.
Figure 9-75

Configure Service Graph Parameters 6


Step 7

Once the parameters are configured, they are visible under Function profile>All parameters.
Figure 9-76

Configure Service Graph Parameters 7

Configuring Parameters using XML


To configure Parameters using XML:
<fvTenant name="s001">
<vnsAbsFuncProfContr>
<vnsAbsFuncProfGrp name="test_aci1.0">
<vnsAbsFuncProf name="icdc_aci1.0_test">
<vnsRsProfToMFunc
tDn="uni/infra/mDev-Cisco-NetScaler1KV-10.5/mFunc-LoadBalancing" />
<vnsAbsDevCfg>
<vnsAbsFolder key="Network" name="network" scopedBy="epg">
<vnsAbsFolder key="nsip" name="snip">
<vnsAbsParam key="ipaddress" name="ip1" value="10.2.4.21"/>
<vnsAbsParam key="netmask" name="netmask1"
value="255.255.255.128"/>
</vnsAbsFolder>
</vnsAbsFolder>
</vnsAbsDevCfg>
<vnsAbsFuncCfg>
<vnsAbsFolder key="internal_network" locked="no" name="snipip">
<vnsAbsCfgRel key="internal_network_key" name="snip_key"
targetName="network/snip" />
</vnsAbsFolder>
</vnsAbsFuncCfg>
</vnsAbsFuncProf>
</vnsAbsFuncProfGrp>
</vnsAbsFuncProfContr>
</fvTenant>

Configuring L4-L7 Parameters under EPG


Apart from configuring the service graph parameters as part of the function profile, the parameters can
also be configured at the EPG, application profile, and tenant levels, as shown in Figure 9-77. In the
Silver tenant implementation, there are three contracts, which are provided by the three EPGs. All of the
application-specific configurations are done at the corresponding EPG level.


Figure 9-77

Configure L4-L7 Parameters 1

Step 1

Navigate to the Tenant tab and select tenant s001.


a.

Select the EPG to configure.

b.

Click on L4-L7 Service Parameters.

c.

Click on the edit icon to configure the parameters.

d.

Select the contract, graph, and node name. As seen in the function profile, there are two configuration
folders: the device configuration and the function configuration.

Figure 9-78

Configure L4-L7 Parameters 2


Figure 9-79

Configure L4-L7 Parameters 3

Step 2

Open up the Device Config folder. IP addresses are defined under the Network folder.

a.

Double-click on the Network folder and assign a name to this parent folder.

b.

Select Update.

Figure 9-80

Configure L4-L7 Parameters 4

Step 3

Browse the Network folder and locate the nsip folder. This is where all the SNIP addresses are configured.

a.

Double-click on the nsip folder and assign a name to it.

b.

Select Update.

Figure 9-81

Configure L4-L7 Parameters 5

Step 4

The parameters to configure are ipaddress and netmask.


a.

Double-click on these parameters, and assign names and values. Make sure the name is unique and does
not overlap with any other name already present. Both value fields are the actual ipaddress and
netmask to be configured.

b.

Select Update.


Figure 9-82

Configure L4-L7 Parameters 6

Step 5

Configure the function parameters. Once the device parameters are configured, the function parameters
can be configured.
a. As shown in Figure 9-83, there are predefined folders for a specific function; that is, the folder for

the external communication is called external_network, the folder for the internal communication
is called internal_network, and so on. The SNIP address configured in the previous step is used
for internal communication between the NetScaler device and real servers.
b.

Double-click on the parent folder internal_network, and assign a name.

c. The SNIP address needs to be associated with the internal_network_key.


d. When finished, Update to submit the configuration.
Figure 9-83

Configure L4-L7 Parameters 7

Figure 9-84

Configure L4-L7 Parameters 8

Configuring EPG-Level L4-L7 Parameters using XML


To configure EPG level L4-L7 Parameters using XML:
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg01">
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="Network" name="network">


<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="nsip" name="snip">
<vnsParamInst key="ipaddress" name="ip1" value="10.2.4.21" />
<vnsParamInst key="netmask" name="netmask1" value="255.255.255.128"
/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="internal_network"
locked="no" name="snipip">
<vnsCfgRelInst key="internal_network_key" name="snip_key"
targetName="network/snip" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

Device Selection Policies


A device selection policy, also known as a logical device context, associates a device cluster with a
service graph. Using this feature, an administrator can bind multiple service graphs to the same device
cluster. It also maps the logical interfaces to the function connectors of the graph, as shown in Figure 9-85.
A device selection policy should be created for a service graph before it is deployed.

Configuring Device Selection Policy from APIC GUI


Perform the following procedures to configure a device selection policy from the APIC GUI.
Step 1

Navigate to the Tenant and then to L4-L7 Services.


a.

Right-click on Device Selection Policy and select Create Logical Device Context.


Figure 9-85

Configure Device Selection Policy 1

Step 2

Select the Contract to apply to the service.


a.

Select the corresponding Graph and Node name.

b.

Next, select the Device Cluster to which the Service Graph configurations are pushed. While
mapping the logical interface contexts, the connector names refer to the function node connector
names in the service graph. If the names were left at their defaults, they are external and
internal.

Figure 9-86

Configure Device Selection Policy 2


Configuring device selection policy using XML


To configure the device selection policy using XML:
<fvTenant name="s001">
<vnsLDevCtx ctrctNameOrLbl="contract01" graphNameOrLbl="lb_graph"
nodeNameOrLbl="LoadBalancing">
<vnsRsLDevCtxToLDev tDn="uni/tn-s001/lDevVip-ns01_ns02"/>
<vnsLIfCtx connNameOrLbl="internal">
<vnsRsLIfCtxToBD tDn="uni/tn-s001/BD-slb_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-s001/lDevVip-ns01_ns02/lIf-internal"/>
</vnsLIfCtx>
<vnsLIfCtx connNameOrLbl="external">
<vnsRsLIfCtxToBD tDn="uni/tn-s001/BD-slb_bd" />
<vnsRsLIfCtxToLIf tDn="uni/tn-s001/lDevVip-ns01_ns02/lIf-external"/>
</vnsLIfCtx>
</vnsLDevCtx>
</fvTenant>

Deploying Service Graph


Once all the parameters are configured and a corresponding device selection policy is created, the
service graph can be deployed. To deploy a graph from the APIC GUI, perform the following steps:
Step 1

Navigate to the Tenant and then Security Policies.


a.

Select the Contract and then select the Subject to which the Service Graph is being applied.

b.

In the Service Graph field, select the Graph to be deployed.

c.

Once the Graph is deployed successfully, entries appear under the Deployed Graph Instances and
Deployed Devices.


Figure 9-87

Deploy Service Graph 1

Figure 9-88

Deploy Service Graph 2

XML Script to Deploy Service Graph


To deploy the Service Graph using XML:
<fvTenant name="s001">
<vzBrCP name="contract01">
<vzSubj name="subject01">
<vzRsSubjGraphAtt tnVnsAbsGraphName="lb_epg_graph" />
</vzSubj>
</vzBrCP>
</fvTenant>


Network Parameter Configuration


From a network configuration point of view, there are only two things to be configured. The first is a
SNIP, which is used by the NetScaler device for health monitoring, source NAT, and other data
communication; when the service graph is deployed, APIC automatically creates a VLAN and associates
the SNIP with data interface 1/1 and that VLAN, and this IP is used for all data communications. The
second is the default route. The default gateway for the NetScaler device is defined on the ACI Fabric,
so any communication between the NetScaler and the real application servers goes through the fabric.
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg01">
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="Network" name="network">
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="nsip" name="snip">
<vnsParamInst key="ipaddress" name="ip1" value="10.2.4.21" />
<vnsParamInst key="netmask" name="netmask1" value="255.255.255.128"
/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="route"
name="df_route">
<vnsParamInst key="gateway" name="gw" value="10.2.4.126" />
<vnsParamInst key="netmask" name="netmask" value="0.0.0.0" />
<vnsParamInst key="network" name="network" value="0.0.0.0" />
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="internal_network"
locked="no" name="snipip">
<vnsCfgRelInst key="internal_network_key" name="snip_key"
targetName="network/snip" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="external_route"
locked="no" name="ext_route">
<vnsCfgRelInst key="external_route_rel" name="ext_route_df"
targetName="network/df_route" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

Load-Balancing Implementation
This validation tested four applications: HTTP and HTTPS web services, an FTP service, and a MySQL
service, with load balancing provided for each of them. In the Cisco NetScaler device package, load
balancing for the HTTPS application is provided using the SSLOffload function node; load-balancing
functionality for the remaining applications can be provided using the LoadBalancing function node.
Servers can be added to the NetScaler as single services when only a single server hosts the application,
or as a service group when multiple servers host the application. Both implementations are covered in the
following sections.


Application: HTTP
Two servers are configured to host the HTTP web application. These servers are added as a service group,
and a VIP is configured to load balance the application. An HTTP monitor is created to check the health
of the servers. Figure 9-89 summarizes the configuration; the following configuration is done
through the Service Graph.
add server 10.2.1.1 10.2.1.1
add server 10.2.1.2 10.2.1.2
add serviceGroup servicegroup_web HTTP
bind serviceGroup servicegroup_web 10.2.1.1 80
bind serviceGroup servicegroup_web 10.2.1.2 80
add lb monitor aci_http HTTP
bind serviceGroup servicegroup_web -monitorName aci_http
add lb vserver http_11.2.1.1 HTTP 11.2.1.1 80 -persistenceType COOKIEINSERT
bind lb vserver http_11.2.1.1 servicegroup_web

Figure 9-89

HTTP Load-balancing Configuration


[Figure: the NetScaler presents VIP 11.2.1.1 on port 80 and load balances to a service group containing Server1 (10.2.1.1) and Server2 (10.2.1.2), with health checks from lb monitor aci_http.]

XML Configuration
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg01">
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbvserver" name="VServer1" scopedBy="epg">
<vnsParamInst name="name" key="name" value="http_11.2.1.1"/>
<vnsParamInst name="ipv46" key="ipv46" value="11.2.1.1"/>
<vnsParamInst name="TCP" key="servicetype" value="HTTP"/>
<vnsParamInst name="port" key="port" value="80"/>
<vnsParamInst name="persistencetype" key="persistencetype"
value="COOKIEINSERT"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="lbvserver_servicegroup_binding" name="lbService1">
<vnsCfgRelInst key="servicename"
name="WebServiceGroup1" targetName="ServiceGroup1"/>
</vnsFolderInst>

</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="servicegroup" name="ServiceGroup1" scopedBy="epg">
<vnsParamInst key="servicegroupname" name="srv_grp_name"
value="servicegroup_web"/>
<vnsParamInst key="servicetype" name="servicetype"
value="HTTP"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="servicegroup_servicegroupmember_binding" name="servbind1" scopedBy="epg">
<vnsParamInst key="ip" name="ip1" value="10.2.1.1"/>


<vnsParamInst key="port" name="port1" value="80"/>


</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="servicegroup_servicegroupmember_binding" name="servbind2" scopedBy="epg">
<vnsParamInst key="ip" name="ip2" value="10.2.1.2"/>
<vnsParamInst key="port" name="port2" value="80"/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="servicegroup_lbmonitor_binding" name="servMon1"
scopedBy="epg">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="lbMon1"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbmonitor" name="lbMon1" scopedBy="epg">
<vnsParamInst name="monitorname" key="monitorname"
value="aci_http"/>
<vnsParamInst name="type" key="type" value="http"/>
</vnsFolderInst>
<!-- Function Param Config -->

<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="mFCnglbmonitor"
locked="no" name="lbmonitor">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key"
targetName="lbMon1" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCngservicegroup" locked="no"
name="servicegroup_cfg">
<vnsCfgRelInst key="servicegroup_key" name="servicegroup_key"
targetName="ServiceGroup1" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing" key="mFCnglbvserver"
locked="no" name="lbvserver_cfg">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="VServer1"/>
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

Application: FTP
Tier02 servers host the FTP application. Traffic from tier01 VMs to tier02 VMs is governed using contracts.
Passive FTP is configured on the real servers. The port range 10100-10500 is used for FTP data
communication.


Note

To support passive FTP, configure the FTP port range on the primary NetScaler 1000v. Go to
System>Settings>Global Setting Parameters and configure the FTP port range with start 10100 and end
10500.
add server 10.2.2.1 10.2.2.1
add service service_ftp 10.2.2.1 FTP 21 -gslb NONE -healthMonitor NO -maxClient 0
-maxReq 0 -cip DISABLED -usip NO -useproxyport NO -sp ON -cltTimeout 120 -svrTimeout
120 -CKA NO -TCPB YES -CMP NO
add lb monitor aci_ftp FTP -userName aci -password Cisco12345
bind service service_ftp -monitorName aci_ftp
add lb vserver ftp_10.2.4.132 FTP 10.2.4.132 21
bind lb vserver ftp_10.2.4.132 service_ftp

Figure 9-90

FTP Load-balancing Configuration

[Figure: the NetScaler presents VIP 10.2.4.132 (FTP control port 21, FTP data ports 10100-10500) and load balances to a single service on server 10.2.2.1, with health checks from lb monitor aci_ftp of type FTP.]

XML Configuration
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg02">
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbvserver" name="VServer3" scopedBy="epg">
<vnsParamInst name="name" key="name" value="ftp_10.2.4.132"/>
<vnsParamInst name="ipv46" key="ipv46" value="10.2.4.132"/>
<vnsParamInst name="TCP" key="servicetype" value="FTP"/>
<vnsParamInst name="port" key="port" value="21"/>
<vnsFolderInst ctrctNameOrLbl="contract02"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="lbvserver_service_binding" name="lbService2">
<vnsCfgRelInst key="servicename" name="ftpService2"
targetName="Service3"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="service" name="Service3" scopedBy="epg">
<vnsParamInst name="name" key="name" value="service_ftp"/>
<vnsParamInst name="ip" key="ip" value="10.2.2.1"/>
<vnsParamInst name="TCP" key="servicetype" value="FTP"/>
<vnsParamInst name="port" key="port" value="21"/>
<vnsParamInst name="maxclient" key="maxclient" value="0"/>
<vnsParamInst name="maxreq" key="maxreq" value="0"/>
<vnsParamInst name="cip" key="cip" value="DISABLED"/>
<vnsParamInst name="usip" key="usip" value="NO"/>
<vnsParamInst name="useproxyport" key="useproxyport"
value="YES"/>
<vnsParamInst name="sp" key="sp" value="ON"/>
<vnsParamInst name="clttimeout" key="clttimeout" value="180"/>
<vnsParamInst name="svrtimeout" key="svrtimeout" value="360"/>
<vnsParamInst name="cka" key="cka" value="NO"/>
<vnsParamInst name="tcpb" key="tcpb" value="NO"/>
<vnsParamInst name="cmp" key="cmp" value="NO"/>


<vnsFolderInst ctrctNameOrLbl="contract02"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="service_lbmonitor_binding" name="servMon1" scopedBy="epg">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="lbMon3"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbmonitor" name="lbMon3" scopedBy="epg">
<vnsParamInst name="monitorname" key="monitorname"
value="aci_ftp"/>
<vnsParamInst name="type" key="type" value="ftp"/>
<vnsParamInst name="username" key="username" value="aci"/>
<vnsParamInst name="password" key="password" value="Cisco12345"/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCnglbmonitor" locked="no" name="lbmonitor">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key_ftp"
targetName="lbMon3" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCngservice" locked="no" name="service_cfg">
<vnsCfgRelInst key="service_key" name="service_key_ftp" targetName="Service3"
/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract02" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCnglbvserver" locked="no" name="lbvserver_cfg">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key_ftp"
targetName="VServer3" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

Application: MySQL
Tier03 servers host the MySQL application. Traffic from tier02 VMs to tier03 VMs is governed using
contracts. To support a service of type MySQL, the Database User on the NetScaler needs to be
configured manually.

Note

For MySQL, from the NetScaler GUI, go to System>User Administration>Database Users and add a
new entry. The login credentials are the ones used to log in to the back-end server (root/Cisco12345).
add server 10.2.3.1 10.2.3.1
add service service_mysql 10.2.3.1 MYSQL 3306 -gslb NONE -healthMonitor NO -maxClient
0 -maxReq 0 -cip DISABLED -usip NO -useproxyport NO -sp ON -cltTimeout 120
-svrTimeout 120 -CKA NO -TCPB YES -CMP NO
add lb monitor aci_mysql MYSQL -userName root -password Cisco12345 -database tenant
-sqlQuery "Select * from tenant"
bind service service_mysql -monitorName aci_mysql
add lb vserver mysql_10.2.4.133 MYSQL 10.2.4.133 3306
bind lb vserver mysql_10.2.4.133 service_mysql


Figure 9-91

MySQL Load-balancing Configuration

[Figure: the NetScaler presents VIP 10.2.4.133 on MySQL port 3306 and load balances to a single service on server 10.2.3.1, with health checks from lb monitor aci_mysql of type MYSQL.]

XML Configuration
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg03">
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbvserver" name="VServer4" scopedBy="epg">
<vnsParamInst name="name" key="name"
value="mysql_10.2.4.133"/>
<vnsParamInst name="ipv46" key="ipv46" value="10.2.4.133"/>
<vnsParamInst name="TCP" key="servicetype" value="MYSQL"/>
<vnsParamInst name="port" key="port" value="3306"/>
<vnsFolderInst ctrctNameOrLbl="contract03"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="lbvserver_service_binding" name="lbService2">
<vnsCfgRelInst key="servicename" name="mysqlService"
targetName="Service4"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="service" name="Service4" scopedBy="epg">
<vnsParamInst name="name" key="name" value="service_mysql"/>
<vnsParamInst name="ip" key="ip" value="10.2.3.1"/>
<vnsParamInst name="TCP" key="servicetype" value="MYSQL"/>
<vnsParamInst name="port" key="port" value="3306"/>
<vnsParamInst name="maxclient" key="maxclient" value="0"/>
<vnsParamInst name="maxreq" key="maxreq" value="0"/>
<vnsParamInst name="cip" key="cip" value="DISABLED"/>
<vnsParamInst name="usip" key="usip" value="NO"/>
<vnsParamInst name="useproxyport" key="useproxyport"
value="YES"/>
<vnsParamInst name="sp" key="sp" value="ON"/>
<vnsParamInst name="clttimeout" key="clttimeout" value="180"/>
<vnsParamInst name="svrtimeout" key="svrtimeout" value="360"/>
<vnsParamInst name="cka" key="cka" value="NO"/>
<vnsParamInst name="tcpb" key="tcpb" value="NO"/>
<vnsParamInst name="cmp" key="cmp" value="NO"/>
<vnsFolderInst ctrctNameOrLbl="contract03"
graphNameOrLbl="lb_epg_graph" nodeNameOrLbl="LoadBalancing"
key="service_lbmonitor_binding" name="servMon1" scopedBy="epg">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="lbMon4"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="lbmonitor" name="lbMon4" scopedBy="epg">
<vnsParamInst name="monitorname" key="monitorname"
value="aci_mysql"/>
<vnsParamInst name="type" key="type" value="MYSQL"/>


<vnsParamInst name="username" key="username" value="root"/>


<vnsParamInst name="password" key="password" value="Cisco12345"/>
<vnsParamInst name="database" key="database" value="tenant"/>
<vnsParamInst name="sqlquery" key="sqlquery" value="Select *
from tenant"/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCnglbmonitor" locked="no" name="lbmonitor">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key_ftp"
targetName="lbMon4" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCngservice" locked="no" name="service_cfg">
<vnsCfgRelInst key="service_key" name="service_key_ftp" targetName="Service4"
/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract03" graphNameOrLbl="lb_epg_graph"
nodeNameOrLbl="LoadBalancing" key="mFCnglbvserver" locked="no" name="lbvserver_cfg">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key_ftp"
targetName="VServer4" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

SSLOffload Implementation
A simple SSL offloading setup terminates SSL traffic (HTTPS), decrypts the SSL records, and forwards
the clear text (HTTP) traffic to the back-end web servers. However, the clear text traffic is vulnerable to
being spoofed, read, stolen, or compromised by individuals who succeed in gaining access to the
back-end network devices or web servers. You can, therefore, configure SSL offloading with end-to-end
security by re-encrypting the clear text data and using secure SSL sessions to communicate with the
back-end Web servers. To configure SSL Offloading with end-to-end encryption, add SSL based services
that represent secure servers with which the NetScaler appliance will carry out end-to-end encryption.
Then create an SSL based virtual server, and create and bind a valid certificate-key pair to the virtual
server. Bind the SSL services to the virtual server to complete the configuration.

Note

Before configuring the SSL services and virtual server, create the SSL key and certificates. The
certificate-key pair is created by APIC.
create ssl rsakey /nsconfig/ssl/acikey.pem 2048 -exp F4
create ssl certReq /nsconfig/ssl/acireq.pem -keyFile /nsconfig/ssl/acikey.pem
-countryName US -stateName NC -organizationName Cisco
create ssl cert /nsconfig/ssl/acicert.pem /nsconfig/ssl/acireq.pem ROOT_CERT -keyFile
/nsconfig/ssl/acikey.pem -days 365

XML Configuration
<fvTenant name="s001">
<fvAp name="app01">
<fvAEPg name="epg01">
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="lbvserver" name="VServer2" scopedBy="epg">
<vnsParamInst name="name" key="name" value="https_11.2.1.2"/>


<vnsParamInst name="ipv46" key="ipv46" value="11.2.1.2"/>
<vnsParamInst name="TCP" key="servicetype" value="SSL"/>
<vnsParamInst name="port" key="port" value="443"/>
<vnsParamInst name="persistencetype" key="persistencetype" value="COOKIEINSERT"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload"key="lbvserver_service_binding" name="lbService2">
<vnsCfgRelInst key="servicename" name="Service2"
targetName="Service2"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="service" name="Service2" scopedBy="epg">
<vnsParamInst name="name" key="name" value="service_https"/>
<vnsParamInst name="ip" key="ip" value="10.2.1.11"/>
<vnsParamInst name="TCP" key="servicetype" value="SSL"/>
<vnsParamInst name="port" key="port" value="443"/>
<vnsParamInst name="maxclient" key="maxclient" value="0"/>
<vnsParamInst name="maxreq" key="maxreq" value="0"/>
<vnsParamInst name="cip" key="cip" value="DISABLED"/>
<vnsParamInst name="usip" key="usip" value="NO"/>
<vnsParamInst name="useproxyport" key="useproxyport"
value="YES"/>
<vnsParamInst name="sp" key="sp" value="ON"/>
<vnsParamInst name="clttimeout" key="clttimeout" value="180"/>
<vnsParamInst name="svrtimeout" key="svrtimeout" value="360"/>
<vnsParamInst name="cka" key="cka" value="NO"/>
<vnsParamInst name="tcpb" key="tcpb" value="NO"/>
<vnsParamInst name="cmp" key="cmp" value="NO"/>
<vnsFolderInst ctrctNameOrLbl="contract01"
graphNameOrLbl="ssl_epg_graph" nodeNameOrLbl="SSLOffload"
key="service_lbmonitor_binding" name="servMon1" scopedBy="epg">
<vnsCfgRelInst name="monitor_name" key="monitor_name"
targetName="lbMon2"/>
</vnsFolderInst>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="lbmonitor" name="lbMon2" scopedBy="epg">
<vnsParamInst name="monitorname" key="monitorname"
value="aci_https"/>
<vnsParamInst name="type" key="type" value="tcp"/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="sslcertkey" name="sslcertkey" scopedBy="epg">
<vnsParamInst key="certkey" name="certkey" value="acisslcert"/>
<vnsParamInst key="cert" name="certfile" value="acicert.pem"/>
<vnsParamInst key="key" name="keyfile" value="acikey.pem"/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="sslvserver" name="WebSSLVServer2" scopedBy="epg">
<vnsParamInst key="clientauth" name="clienthauth" value="ENABLED"/>
<vnsParamInst key="vservername" name="vservername"
value="https_11.2.1.2"/>
<vnsParamInst key="sendclosenotify" name="sendclosenotify" value="NO"/>


<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"


nodeNameOrLbl="SSLOffload" key="sslvserver_sslcertkey_binding" name="SSLWebCertBind"
scopedBy="epg">
<vnsCfgRelInst name="certkey1" key="certkeyname"
targetName="sslcertkey"/>
</vnsFolderInst>
</vnsFolderInst>

<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"


nodeNameOrLbl="SSLOffload" key="mFCnglbmonitor" locked="no" name="lbmonitor">
<vnsCfgRelInst key="lbmonitor_key" name="lbmonitor_key" targetName="lbMon2"
/>
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="mFCngservice" locked="no" name="service_cfg">
<vnsCfgRelInst key="service_key" name="service_key" targetName="Service2" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="mFCnglbvserver" locked="no" name="lbvserver_cfg">
<vnsCfgRelInst key="lbvserver_key" name="lbvserver_key"
targetName="VServer2" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="mFCngsslvserver" locked="no" name="sslvserver_cfg">
<vnsCfgRelInst key="sslvserver_key" name="sslvserver_key"
targetName="WebSSLVServer2" />
</vnsFolderInst>
<vnsFolderInst ctrctNameOrLbl="contract01" graphNameOrLbl="ssl_epg_graph"
nodeNameOrLbl="SSLOffload" key="mFCngsslcertkey" locked="no" name="sslcertkey_cfg">
<vnsCfgRelInst key="sslcertkey_key" name="sslcertkey_key"
targetName="sslcertkey" />
</vnsFolderInst>
</fvAEPg>
</fvAp>
</fvTenant>

References
The following references are provided for your convenience.

http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/2-2/collateral/vmdcConsumerModels.html

http://sdu-wiki.cisco.com/display/wat579/Service+Tiers (VMDC VSA 1.0 IG)

http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/2-0/large_pod_design_guide/vmdc20Lpdg/VMDC_2-0_DG_1.html

http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/2-3/implementation_guide/VMDC_2-3_IG/VMDC2-3_IG1.html#wp2270214

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACIFundamentals.html

http://www.cisco.com/c/en/us/products/collateral/switches/citrix-netscaler-1000v/datasheet-c78-731508.pdf


http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-732493.html


CHAPTER 10

Bronze Tenant Container


This chapter provides detailed steps for implementing the Bronze tenant container within the Cisco
Intercloud Data Center Application Centric Infrastructure (ACI) 1.0 solution.

Overview
The Bronze tenant container is one of the simplest container models defined in the Cisco Virtualized
Multi-Service Data Center (Cisco VMDC) architecture. This container provides a single subnet per tenant
for resource placement, such as virtual machines or bare metal servers. This container can be
implemented in two different ways:

• Layer 3 (L3) ACI, called L3 Bronze

• Layer 2 (L2) ACI, called L2 Bronze

These two differ in the way the default gateway is implemented within the container, which has scale
implications. The L2 Bronze container uses only Layer 2 constructs in ACI and scales much higher than
the L3 Bronze container, which is limited to a verified scalability of 100 tenants in the current release.
The following sections detail the implementation of L3 and L2 Bronze tenant containers in the ACI
Fabric.

Layer 3 Bronze
The L3 Bronze container has the virtual machine default gateway configured on the ACI Fabric. The
ACI Fabric, in turn, routes to an upstream ASR 9000 network virtualization (nV) edge device using
routing protocols or static routes.

Note

The ACI Fabric utilizes Interior Border Gateway Protocol (IBGP), Open Shortest Path First (OSPF), and
static routing for L3 external connectivity.

Physical Topology
Figure 10-1 details the L3 Bronze container physical topology. The Cisco Integrated Compute Stack
(ICS) is connected to a pair of leaf switches over virtual port channels (vPCs). The border leaf switches
(105 and 106) connect to the edge device over vPCs.


Note

ACI leaf switches do not support L3 port channel interface or port channel sub-interface.
Figure 10-1

Physical Topology for L3 Bronze


[Figure: four Nexus 9508 spines (Spine201-204) and Nexus 9396 leaves (Leaf101-106, with Leaf105 and Leaf106 as border leaves); APIC1-APIC3 attach to the leaves; two UCS 6296 fabric interconnect pairs connect UCS B-Series blade servers in Integrated Compute Stack-3 and Integrated Compute Stack-4; the border leaves connect to the ASR 9000 nV edge router over bundles BE-5 and BE-6.]

Logical Topology
Figure 10-2 details the L3 Bronze container logical topology. In this figure, the virtual machines reside
in the 10.3.1.0/24 subnet and the ACI Fabric acts as the default gateway for this subnet. The ACI Fabric
connects to the ASR 9000 nV over L3 paths via leaf 105 and leaf 106. On each border leaf, an L3 logical
interface is defined and mapped to an external VLAN that is carried over the L2 port channel to the
ASR 9000. On the ASR 9000, an L3 bundle Ethernet interface with sub-interfaces separates the tenant
traffic. On each border leaf, IBGP or static routes implement routing to the external network.


Figure 10-2

Logical Topology for L3 Bronze

[Figure: two variants are shown. In the IBGP variant, the ASR 9000 nV (router ID 10.3.200.1, sub-interfaces BE-5.421 and BE-6.421 at 10.3.201.1 and 10.3.202.1) forms IBGP adjacencies with border leaves Node-105 (router ID 10.3.200.105, SVI 10.3.201.2) and Node-106 (router ID 10.3.200.106, SVI 10.3.202.2). In the static route variant, the ASR 9000 (sub-interfaces BE-5.422 and BE-6.422) uses ip route 10.3.1.0/24 via 10.3.201.2 and 10.3.202.2 toward the fabric, while the fabric uses ip route 0.0.0.0/0 via 10.3.201.1 and 10.3.202.1. In both variants the ACI Fabric (L3) is the default gateway 10.3.1.253 for the web/app/database VMs on leaves Node-101 and Node-102 in subnet 10.3.1.0/24, and the ASR 9000 connects to the MPLS L3 VPN.]

Figure 10-3 shows the L3 Bronze logical construct in Cisco Application Policy Infrastructure Controller
(Cisco APIC).
Figure 10-3

L3 Bronze Logical Construct in Cisco APIC


[Figure: tenant b001 with context net01, bridge domain bd01, and EPG epg01; contract01 (filters http, https, icmp) between epg01 and the external routed network outside_network, which connects through logical SVI interfaces svi01 and svi02 on the border leaves to the ASR 9000.]

Each tenant is identified by a name in the APIC. The tenant has a private network (net01) that
corresponds to an L3 context, or Virtual Routing and Forwarding (VRF) instance, in a traditional network.
The bridge domain (bd01) that identifies the boundary of the bridged traffic is similar to a VLAN in a
traditional network. The bridge domain has an endpoint group (EPG), epg01, that identifies a collection
of endpoints such as virtual machines. A subnet defined as part of the bridge domain configures the
default gateway within the fabric. An application profile (app01) defines the policies associated with the
EPG. The tenant container connects to the outside network over an external routed network. On each of
the border leaf switches, a logical switch virtual interface (SVI) routes to external networks. A contract
(contract01) is defined between the EPG and the external routed network; epg01 is the provider and
outside_network is the consumer of this contract. Filters such as HTTP or ICMP define the traffic allowed
by the contract.
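The following XML consolidates these constructs into a single sketch. It is assembled from the per-step XML snippets in the procedure below, using the same names (b001, net01, bd01, app01, epg01, contract01) and showing only the http filter for brevity, so it is a summary view rather than an additional configuration step; the external routed network and its contract binding are covered in Steps 9 through 11.

<fvTenant name="b001">
    <fvCtx name="net01"/>
    <fvBD name="bd01">
        <fvRsCtx tnFvCtxName="net01"/>
        <fvSubnet ip="10.3.1.253/24" scope="public"/>
    </fvBD>
    <fvAp name="app01">
        <fvAEPg name="epg01">
            <fvRsBd tnFvBDName="bd01"/>
        </fvAEPg>
    </fvAp>
    <vzFilter name="http">
        <vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="http" dToPort="http"/>
    </vzFilter>
    <vzBrCP name="contract01">
        <vzSubj name="subject01">
            <vzRsSubjFiltAtt tnVzFilterName="http"/>
        </vzSubj>
    </vzBrCP>
</fvTenant>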


Prerequisites
For implementing this solution, these are the prerequisites:

Physical connectivity as shown in Figure 10-1.

Virtual Port Channels (vPC) should be configured and connectivity established to ACI Fabric.

ASR 9000 nV cluster with bundle Ethernet connectivity to ACI Fabric.

Virtual Machine Manager (VMM) domain for vSphere.

L3 Bronze Tenant Configuration Procedure


This section describes how to configure a L3 Bronze tenant container in ACI. Follow this procedure:
1.

Create a security domain

2.

Create a tenant container

3.

Create a private network

4.

Create a bridge domain

5.

Create an application EPG

6.

Create filters

7.

Create a contract

8.

Configure ASR 9000 nV edge device

9.

Create an external routed network

10. Associate L3 outside policy to bridge domain


11. Add the contract to EPG

Note

Steps 8 and 9 are different for the Interior Border Gateway Protocol (IBGP) and static route
implementations of the L3 Bronze container.

Step 1

Create a security domain.
a.

In the APIC GUI, log in using administrative privileges.

b.

On the menu bar, choose the Admin tab.

c.

In the navigation pane, right-click on Security Management and choose Create Security Domain.

d.

Enter a name for the security domain and click on Submit (Figure 10-4).


Figure 10-4

Create Security Domain

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<aaaUserEp>
<aaaDomain name="b001_sd" />
</aaaUserEp>

Note

A security domain is required for the tenant administrator to log into APIC and manage the tenant's
resources.

Step 2

Create a tenant container.
a. To create a logical tenant container, click on Tenants in the main menu bar and from the submenu,

select Add Tenant.


b.

In the pop-up window, provide a name, and select the Security Domain that was created in the
previous step.

c.

Click Next button to go to the next screen and click Submit to finish the task. The tenant
configuration window opens.

d.

Note that the APIC GUI allows configuration of private network information before submitting the
task (Figure 10-5).


Figure 10-5

Create a Tenant

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001" descr="bgp routed external">
<aaaDomainRef name="b001_sd" />
</fvTenant>

Step 3

Create a private network.


a.

In the Tenant navigation pane, right-click on the Networking folder and select Create Private
Network (Figure 10-6).

b.

In the pop-up window, specify a name for the private network.

c. To minimize these steps, you may choose to configure a bridge domain in the next window.
Figure 10-6

Create a Private Network

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">


<fvCtx name="net01" />


</fvTenant>

Step 4

Create a bridge domain.


a. To create a bridge domain, expand the Networking folder and right-click on the Bridge Domains.
b.

Provide a name for the bridge domain.

c.

Select the Network that was created in the previous step.

d.

Click on the + next to the Subnets field (Figure 10-7), which opens a window to enter subnet-specific
information.

e.

Enter the default gateway and select the Public Scope. This scope allows advertising the subnet
outside the fabric.

Figure 10-7

Create a Bridge Domain

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<fvBD name="bd01">
<fvSubnet ip="10.3.1.253/24" scope="public" />
<fvRsCtx tnFvCtxName="net01" />
</fvBD>
</fvTenant>

Step 5

Create an application EPG.


a. To create an application EPG, expand the Application Profile folder and right-click on Application

EPG.
b.

Provide a name for this EPG.

c.

In the Associated Domain Profile box (Figure 10-8), click on the + and select the VMM domain
where the virtual machine resides.

d.

Change the deployment and resolution immediacy to Immediate.


e.

Click on Update and then, click the Finish button to finish the configuration.

Figure 10-8

Create Application EPG

Sub-step d creates a port-profile on the VMware vCenter Server. You can assign the port-profile to a
virtual machine that resides in the same EPG (Figure 10-9).
Figure 10-9

Port-Profile Creation in vCenter

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-ics3_prod_vc" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>

Step 6

Create filters.
a.

Expand Security Policies and right-click on Filters to create one or more filters.


b.

Provide a name for the filter and update the Entries box for parameters, such as EtherType, IP
Protocol, L4 port numbers, and so on (Figure 10-10).

Figure 10-10

Create Filters

You can use the following XML to do the same task. The value of the variable is highlighted in bold. In
this example, multiple filters are created.
<fvTenant name="b001">
<vzFilter name="http">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="http" dToPort="http" />
</vzFilter>
<vzFilter name="https">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="https" dToPort="https" />
</vzFilter>
<vzFilter name="ftp-data">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="10100" dToPort="10500" />
</vzFilter>
<vzFilter name="ftp-control">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="21" dToPort="21" />
</vzFilter>
<vzFilter name="mysql">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="3306" dToPort="3306" />
</vzFilter>
<vzFilter name="ssh">
<vzEntry name="rule01" etherT="ip" prot="tcp" dFromPort="22" dToPort="22" />
</vzFilter>
</fvTenant>

Step 7

Create a contract.
a.

Expand Security Policies folder and right-click on Contracts to create a contract.

b.

Provide a name for the contract. The default scope is Context (Figure 10-11).


Figure 10-11

Create a Contract

c.

Create a contract subject by clicking on the + in the Subjects box.

d.

Provide a name for the contract subject.

e.

Click on the + in the filter chain field and select Update to add the filters created earlier
(Figure 10-12).

Figure 10-12

Create a Contract Subject

You can use the following XML to do the same task. The value of the variable is highlighted in bold. In
this example, multiple filters are used with the contract.
<fvTenant name="b001">
<vzBrCP name="contract01">
<vzSubj name="subject01">
<vzRsSubjFiltAtt tnVzFilterName="http" />
<vzRsSubjFiltAtt tnVzFilterName="https" />


<vzRsSubjFiltAtt tnVzFilterName="icmp" />
<vzRsSubjFiltAtt tnVzFilterName="ftp-data" />
<vzRsSubjFiltAtt tnVzFilterName="ftp-control" />
<vzRsSubjFiltAtt tnVzFilterName="mysql" />
</vzSubj>
</vzBrCP>
</fvTenant>

Step 8

Configure ASR 9000 nV edge device.


a. This step describes the configuration required on the ASR 9000 for IBGP peering as well as for static
routing to the ACI Fabric.


b.

In this example, tenant b001 uses IBGP between ACI Fabric and ASR 9000 while tenant b002 uses
static routes.

c.

IBGP Configuration: The ASR 9000 nV edge device has two Bundle-Ethernet interfaces with
sub-interfaces for IBGP peering to border leaf nodes.

d. The ASR 9000 configuration for tenant b001 is shown below:


!
vrf b001
# VRF used for tenant b001
address-family ipv4 unicast
import route-target
3:421
export route-target
3:421
!
interface loopback 421
# BGP Router-ID
vrf b001
ipv4 address 10.3.200.1/32
!
interface Bundle-Ether 5.421
# Sub-interface that peer with leaf node-105
vrf b001
ipv4 address 10.3.201.1 255.255.255.0
encapsulation dot1q 421
# dot1q encapsulation 421, same as ACI fabric
!
interface Bundle-Ether 6.421
# Sub-interface that peer with leaf node-106
vrf b001
ipv4 address 10.3.202.1 255.255.255.0
encapsulation dot1q 421
!
router static
vrf b001
address-family ipv4 unicast
10.3.200.105/32 10.3.201.2
# Static route to reach leaf node's Router ID
10.3.200.106/32 10.3.202.2
!
router bgp 200
vrf b001
rd 3:421
address-family ipv4 unicast
!
neighbor 10.3.200.105
# IBGP adjacency to Leaf 105
remote-as 200
update-source loopback421
address-family ipv4 unicast
route-policy allow-all in
route-policy allow-all out
!
neighbor 10.3.200.106
# IBGP adjacency to Leaf 106
remote-as 200
update-source loopback421
address-family ipv4 unicast


route-policy allow-all in
route-policy allow-all out
commit
end
!

e.

Static Routing to ACI border Leaf: The ASR 9000 has static routes pointing to the SVIs on the
border leaf switches to reach the server subnet. The connected and static routes are redistributed into
BGP so that the remote provider edge (PE) device can reach the tenant server subnets.

f. ASR 9000 configuration for tenant b002 is shown below:


!
conf t
vrf b002
# VRF used for tenant b002
address-family ipv4 unicast
import route-target
3:422
export route-target
3:422
!
interface loopback 422
vrf b002
ipv4 address 10.3.200.1/32
!
interface Bundle-Ether 5.422
vrf b002
ipv4 address 10.3.201.1 255.255.255.0
encapsulation dot1q 422
# dot1q encapsulation 422, same as ACI fabric
!
interface Bundle-Ether 6.422
vrf b002
ipv4 address 10.3.202.1 255.255.255.0
encapsulation dot1q 422
!
router static
vrf b002
address-family ipv4 unicast
10.3.200.105/32 10.3.201.2
10.3.200.106/32 10.3.202.2
10.3.1.0/24 10.3.201.2
# static route to reach tenant subnet via node-105
10.3.1.0/24 10.3.202.2
# static route to reach tenant subnet via node-106
router bgp 200
vrf b002
rd 3:422
address-family ipv4 unicast
redistribute connected
redistribute static
# redistribute static routes to BGP
commit
end
!

Step 9

Create an externally routed network.


a. This step outlines the steps to create external routed networks using IBGP and static routes.
b.

External routed network configuration with IBGP:


1.

Creating an external routed network with IBGP consists of the following major tasks:

a. Create a routed outside policy and enable BGP.


b. Create a node profile. This includes configuring the loopback address, next hop address, and static routes for each border node. The loopback address is required for fabric route reflection.


c. Create interface profiles. This includes configuring a logical SVI interface on each border leaf

and mapping to the port channel connecting to ASR 9000.


d. Configure the external network such that any external source IP could come in.
c.

In this example, border leaf nodes 105 & 106 are configured as IBGP nodes.

d.

In the tenant navigation pane, right-click on the Networking folder.

e.

Select Create Routed Outside. In the pop-up window, enter a name for the Routed Outside (See
Figure 10-13).

f.

Select the private network from the drop-down list, and then select BGP.

Figure 10-13

Create Routed Outside

g.

Click on + in the previous screen which would open another window to create a node profile.
Enter a name for the node profile.

h.

Click on the + next to Nodes and configure the BGP nodes.

i.

Enter the ASR 9000 loopback interface address in the BGP peer connectivity profile. You need to
enter the next hop address to reach the ASR 9000 loopback address when you configure each node.


Figure 10-14

Create Node Profile

j.

Click on the + in the Interface Profiles to create an interface profile (Figure 10-14).

k.

Enter the configuration for the SVI interface on each node. This includes the name of the port
channel, IP address for the SVI interface and VLAN. You may configure separate interface profiles
for each node.

Figure 10-15

Create Interface Profile

l.

Click the OK button in the Interface Profiles and Node Profile configuration windows. This navigates
back to the Create Routed Outside window (Figure 10-15).

m.

Click on the + under External EPG Networks (Figure 10-16). Provide a name for the external
network.

n.

Open the subnet box and enter the subnet 0.0.0.0/0 that is allowed to come in.


Figure 10-16

Create External Network

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<l3extOut name="l3_outside">
<bgpExtP descr="this node enable bgp" />
<l3extRsEctx tnFvCtxName="net01" />
<l3extLNodeP name="bgp_nodes">
<bgpPeerP addr="10.3.200.1" />
<l3extRsNodeL3OutAtt rtrId="10.3.200.105" tDn="topology/pod-1/node-105">
<ipRouteP ip="10.3.200.1/32">
<ipNexthopP nhAddr="10.3.201.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extRsNodeL3OutAtt rtrId="10.3.200.106" tDn="topology/pod-1/node-106">
<ipRouteP ip="10.3.200.1/32">
<ipNexthopP nhAddr="10.3.202.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extLIfP name="svi01">
<l3extRsPathL3OutAtt addr="10.3.201.2/24" encap="vlan-421"
ifInstT="ext-svi" tDn="topology/pod-1/paths-105/pathep-[pc_n105_asr9k]" />
</l3extLIfP>
<l3extLIfP name="svi02">
<l3extRsPathL3OutAtt addr="10.3.202.2/24" encap="vlan-421"
ifInstT="ext-svi" tDn="topology/pod-1/paths-106/pathep-[pc_n106_asr9k]" />
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="outside_network">
<l3extSubnet ip="0.0.0.0/0" />
<!-- allows any external source IP to come in -->
</l3extInstP>
</l3extOut>
</fvTenant>

o.

External Network Configuration with Static Routes: Creating an external routed network with
static routes consists of the following major tasks:
1.

Create a routed outside policy.


2.

Create a node profile. This includes configuring loopback address, next hop address and static
routes. The loopback address is required for fabric route reflection.

3.

Create interface profiles. This includes configuring a logical SVI interface on each border leaf
and mapping to the port channel connecting to ASR 9000.

4.

Configure the external network such that any external source IP can reach it.

p.

In this example, border leaf nodes 105 & 106 are configured to use static routes to reach external
networks.

q.

In the Tenant navigation pane, right-click on the Networking folder and select Create Routed
Outside. In the pop-up window, enter a name for Routed Outside.

r.

Select the private network from the drop-down list. Figure 10-17 shows how to configure a routed
outside policy.

Figure 10-17

Routed Outside Policy

s.

Click on the + in the previous screen which would open another window to create a node profile.
Enter a name for the node profile.

t.

Click on the + next to Nodes and configure the nodes (Figure 10-18).

u.

Select the node from the drop down list and configure the Router ID to identify the node. Configure
a static route to reach the outside networks.


Figure 10-18

Create Node Profile

Note

The node profile window does not display the next hop address associated with a static route. Currently you need to open each node configuration entry to see the next hop address. An enhancement defect CSCur46784 is filed to address this issue.

v.

Click on the + in the Interface Profiles to create an interface profile.

w.

Enter the configuration for the SVI interface on each node. This includes the name of the port channel, IP address for the SVI interface and VLAN encapsulation (Figure 10-19). You can configure separate interface profiles for each node.

Figure 10-19

Create Interface Profile

x.

Click the OK button in the Create Interface Profile window and again in the Node Profile configuration window. This takes you back to the Create Routed Outside window (Figure 10-20).

y.

Click on the + under External EPG Networks. This opens a pop-up window. Provide a name for
the external network. Open the subnet box and enter the subnet 0.0.0.0/0 that is allowed to come in.

z.

Click OK to close this window or click on Finish button to submit the configuration.


Figure 10-20

Create External EPG Network

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b002">
<l3extOut name="l3_outside">
<l3extRsEctx tnFvCtxName="net01" />
<l3extLNodeP name="static_nodes">
<l3extRsNodeL3OutAtt rtrId="10.3.200.105" tDn="topology/pod-1/node-105">
<ipRouteP ip="0.0.0.0/0">
<ipNexthopP nhAddr="10.3.201.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extRsNodeL3OutAtt rtrId="10.3.200.106" tDn="topology/pod-1/node-106">
<ipRouteP ip="0.0.0.0/0">
<ipNexthopP nhAddr="10.3.202.1" />
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extLIfP name="svi01">
<l3extRsPathL3OutAtt addr="10.3.201.2/24" encap="vlan-422"
ifInstT="ext-svi" tDn="topology/pod-1/paths-105/pathep-[pc_n105_asr9k]" />
</l3extLIfP>
<l3extLIfP name="svi02">
<l3extRsPathL3OutAtt addr="10.3.202.2/24" encap="vlan-422"
ifInstT="ext-svi" tDn="topology/pod-1/paths-106/pathep-[pc_n106_asr9k]" />
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="outside_network">
<l3extSubnet ip="0.0.0.0/0" />
<!-- allows any external source IP to
come in -->
</l3extInstP>
</l3extOut>
</fvTenant>

Step 10

Associate L3 outside policy to bridge domain.


a.

Expand the Networking folder and select the bridge domain that was created in the previous step.

b.

Click the + next to Associated L3 Outs and add the L3 outside policy (Figure 10-21).


Figure 10-21

Associate L3 Outside Policy to Bridge Domain

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<fvBD name="bd01">
<fvRsBDToOut tnL3extOutName="l3_outside" />
</fvBD>
</fvTenant>

Step 11

Add the contract to EPG.


a.

Expand the Application Profiles folder and, under the EPG, right-click Contracts.

b.

Choose Add Provided Contract and in this window, select the contract from the pull-down list
(Figure 10-22).

c.

Expand the Networking, External Routed Networks, and Networks folders.

d.

Select the External Network Instance named outside_network.

e. Add the contract to Consumed Contracts (Figure 10-22).

The following screen shot shows how to add a provided contract to the EPG.


Figure 10-22

Add Provided Contract to EPG

Figure 10-23 shows how to add a consumed contract to External Network.


Figure 10-23

Add Consumed Contract to External Network

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b001">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsProv tnVzBrCPName="contract01" />
</fvAEPg>
</fvAp>
<l3extOut name="l3_outside">
<l3extInstP name="outside_network">
<fvRsCons tnVzBrCPName="contract01" />
</l3extInstP>
</l3extOut>
</fvTenant>


Verify Configuration
To verify the tenant subnet reachability from the ASR 9000 nV edge device, use the following show
CLIs.
RP/0/RSP1/CPU0:v6-pe-NV# show vrf b001
Fri Oct 17 09:58:01.790 EDT
VRF        RD        RT              AFI   SAFI
b001       3:421     import 3:421    IPV4  Unicast
                     export 3:421    IPV4  Unicast
RP/0/RSP1/CPU0:v6-pe-NV# show interface loopback 421
Fri Oct 17 09:58:02.101 EDT
Loopback421 is up, line protocol is up
Interface state transitions: 1
Hardware is Loopback interface(s)
Internet address is 10.3.200.1/32
MTU 1500 bytes, BW 0 Kbit
reliability Unknown, txload Unknown, rxload Unknown
Encapsulation Loopback, loopback not set,
Last input Unknown, output Unknown
Last clearing of "show interface" counters Unknown
Input/output data rate is disabled.
RP/0/RSP1/CPU0:v6-pe-NV# show interface Bundle-Ether 5.421
Fri Oct 17 09:58:02.604 EDT
Bundle-Ether5.421 is up, line protocol is up
Interface state transitions: 1
Hardware is VLAN sub-interface(s), address is 4055.3943.0f93
Internet address is 10.3.201.1/24
MTU 9004 bytes, BW 20000000 Kbit (Max: 20000000 Kbit)
reliability 255/255, txload 0/255, rxload 0/255
Encapsulation 802.1Q Virtual LAN, VLAN Id 421, loopback not set,
ARP type ARPA, ARP timeout 04:00:00
Last input 00:00:21, output 00:00:21
Last clearing of "show interface" counters never
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
5486 packets input, 370024 bytes, 0 total input drops
0 drops for unrecognized upper-level protocol
Received 4 broadcast packets, 0 multicast packets
10883 packets output, 1421734 bytes, 0 total output drops
Output 3 broadcast packets, 5365 multicast packets
RP/0/RSP1/CPU0:v6-pe-NV# show interface Bundle-Ether 6.421
Fri Oct 17 09:58:02.693 EDT
Bundle-Ether6.421 is up, line protocol is up
Interface state transitions: 1
Hardware is VLAN sub-interface(s), address is 4055.3943.1f93
Internet address is 10.3.202.1/24
MTU 9004 bytes, BW 20000000 Kbit (Max: 20000000 Kbit)
reliability 255/255, txload 0/255, rxload 0/255
Encapsulation 802.1Q Virtual LAN, VLAN Id 421, loopback not set,
ARP type ARPA, ARP timeout 04:00:00
Last input 00:00:21, output 00:00:21
Last clearing of "show interface" counters never
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
5500 packets input, 371543 bytes, 0 total input drops
0 drops for unrecognized upper-level protocol
Received 4 broadcast packets, 0 multicast packets
10819 packets output, 1416950 bytes, 0 total output drops
Output 3 broadcast packets, 5365 multicast packets
RP/0/RSP1/CPU0:v6-pe-NV# show ip route vrf b001
Fri Oct 17 09:58:03.003 EDT



Codes: C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path


D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, su - IS-IS summary null, * - candidate default
U - per-user static route, o - ODR, L - local, G - DAGR
A - access/subscriber, a - Application route, (!) - FRR Backup path
Gateway of last resort is not set
B    10.3.1.0/24 [200/0] via 10.3.200.105, 1d20h
L    10.3.200.1/32 is directly connected, 1d20h, Loopback421
S    10.3.200.105/32 [1/0] via 10.3.201.2, 1d20h
S    10.3.200.106/32 [1/0] via 10.3.202.2, 1d20h
C    10.3.201.0/24 is directly connected, 1d20h, Bundle-Ether5.421
L    10.3.201.1/32 is directly connected, 1d20h, Bundle-Ether5.421
C    10.3.202.0/24 is directly connected, 1d20h, Bundle-Ether6.421
L    10.3.202.1/32 is directly connected, 1d20h, Bundle-Ether6.421
B    100.3.201.0/24 [200/0] via 10.255.255.201 (nexthop in vrf default), 1d20h
RP/0/RSP1/CPU0:v6-pe-NV# show ip bgp vrf b001
Fri Oct 17 09:58:03.300 EDT
BGP VRF b001, state: Active
BGP Route Distinguisher: 3:421
VRF ID: 0x60000051
BGP router identifier 200.200.200.1, local AS number 200
BGP table state: Active
Table ID: 0xe0000060
RD version: 2267
BGP main routing table version 2286
Status codes: s suppressed, d damped, h history, * valid, > best
i - internal, r RIB-failure, S stale, N Nexthop-discard
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network            Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 3:421 (default for vrf b001)
*>i10.3.1.0/24        10.3.200.105             0    100      0 ?
* i                   10.3.200.106             0    100      0 ?
*>i100.3.201.0/24     10.255.255.201           0    100      0 ?

Processed 2 prefixes, 3 paths
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV# ping vrf b001 10.3.1.1    # Ping a VM
Fri Oct 17 15:19:40.505 EDT
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.3.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
RP/0/RSP1/CPU0:v6-pe-NV#

Figure 10-24 and Figure 10-25 are taken from the APIC INVENTORY submenu.
Figure 10-24 shows the BGP adjacency on the border leaf.


Figure 10-24

Verify BGP Adjacency

Figure 10-25 shows IP Routes on the border leaf Node-105.


Figure 10-25

Verify IP Routes on Leaf Node
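In addition to the ASR 9000 show commands and the APIC GUI screens above, the same objects can be read back programmatically from the APIC REST API. The following Python sketch is illustrative only: the APIC address and credentials are placeholders (not values from this implementation), and it simply dumps the configured subtree of tenant b001 so that the routed outside, node profiles, and contract associations can be checked in one query.

# Minimal sketch: read back tenant b001 from the APIC REST API.
# The APIC address and credentials below are placeholders, not values from this guide.
import requests

APIC = "https://apic.example.com"          # hypothetical APIC address
USER, PASSWORD = "admin", "password"       # hypothetical credentials

session = requests.Session()
session.verify = False                     # lab-only; use proper certificates in production

# Authenticate; the returned token is kept by the session as a cookie.
login_xml = "<aaaUser name='%s' pwd='%s' />" % (USER, PASSWORD)
session.post(APIC + "/api/aaaLogin.xml", data=login_xml).raise_for_status()

# Retrieve tenant b001 and everything configured under it.
resp = session.get(APIC + "/api/mo/uni/tn-b001.xml?query-target=subtree")
resp.raise_for_status()
print(resp.text)

The same query can be pointed at tenant b002 (or narrowed with target-subtree-class) to confirm the static-route variant of the external routed network.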

L2 Bronze with Default Gateway on ASR 9000 nV Edge


The L2 Bronze container has a default gateway configured outside the ACI Fabric. The ACI Fabric is
acting as a L2 network and the default gateway is configured on the ASR 9000 nV edge device.


Physical Topology
The physical topology is the same as shown in Figure 10-1 except for the connection between the border
leaves and the ASR 9000. A vPC is configured between the border leaf switches and ASR 9000 nV to
extend the L2 subnet. The vPC connectivity is shown in Figure 10-26. A bundle-Ethernet interface,
labeled BE-9, is configured on the ASR 9000 nV and it terminates on a vPC between leaf 105 and leaf
106.
Figure 10-26

vPC Attachment to ASR 9000 (a Bundle-Ethernet interface, BE-9, on the ASR 9000 nV edge router terminates on a vPC to border leaves Leaf105 and Leaf106, Nexus 9396, in the ACI Fabric)

Logical Topology
The L2 Bronze logical topology is shown in Figure 10-27. The ACI Fabric is in L2 mode and the ASR
9000 is the default gateway for the tenant server subnet.
Figure 10-27

L2 Bronze Logical Topology (MPLS L3 VPN; ASR 9000 nV sub-interface BE-9.430 at 10.3.1.254/24 is the default gateway; the ACI Fabric, Node-105/Node-106 border leaves, operates in L2 mode; Web/App/Database VMs reside in subnet 10.3.1.0/24)


Figure 10-28 shows the APIC construct for L2 Bronze container b010. A bridge domain and EPG are
defined and are mapped to the server subnet. An external bridged network connects the ACI Fabric to
the upstream ASR 9000. A logical interface maps to the vPC connection between the border leaves and
the ASR 9000. In this example, a default contract that is defined under the common tenant is used. This
contract allows all traffic between epg01 and outside network.
Figure 10-28

L2 Bronze Logical Construct in APIC (Tenant b010, Context net01, Bridge Domain bd01, EPG epg01, default contract, External Bridged Network outside_network, and logical interface L2_interface mapped to the vPC towards the ASR 9000)

L2 Bronze Tenant Configuration Procedure


The L2 Bronze tenant configuration procedure is described below.

Note

Steps 1 to 5 are identical to the L3 Bronze process.


1.

Create a security domain.

2.

Create a tenant container.

3.

Create a private network.

4.

Create a bridge domain.

5.

Create an application EPG.

6. Add external VLAN in the VLAN pool.


7.

Create contracts and filters.

8.

Configure ASR 9000 nV edge device.

9.

Create an external bridge network.

10. Add the contract to EPG.

Perform the following procedure to configure L2 Bronze Tenants.


Step 1

Create a security domain.

Step 2

Create a tenant container.


Step 3

Create a private network.

Step 4

Create a bridge domain.

Step 5

Create an application EPG.

Step 6

Add an external VLAN in the VLAN pool.


a. The ACI Fabric provides the L2 path to the ASR 9000 over a specific external VLAN. This VLAN is defined in a VLAN pool and later assigned when the external bridged network is created. In this example, VLAN 430 is added to asr9k_vlan_pool, which belongs to the asr9k_phy domain (Figure 10-29).
Figure 10-29

Bridge External VLAN Pool

Step 7

Create contracts and filters.


a. ACI allows you to configure unique contracts and filters on a per-tenant basis or use the default contracts and filters defined under the common tenant. If you would like to create unique contracts and filters, refer to the L3 Bronze section. In this step, the default contracts and filters previously defined under the common tenant are used while setting up a new tenant.
Figure 10-30 shows the default contract in APIC GUI. By default, all traffic is permitted.


Figure 10-30

Default Contract

Figure 10-31 shows a default filter configuration.


Figure 10-31

Default Filter

Step 8

Configure ASR 9000 nV edge device.


a. The ASR 9000 is configured as the default gateway for the tenant subnet. In the following configuration snippet, Bundle-Ether 9.430 is configured as the default gateway for tenant b010. The subnet is redistributed into BGP so that it can be advertised to the remote PE device.

The ASR 9000 configuration for tenant b010 is shown below:
!
conf t
vrf b010
# VRF for tenant b010
address-family ipv4 unicast
import route-target
3:430
export route-target
3:430
!
interface loopback 430
vrf b010
ipv4 address 10.3.200.1/32
!
interface Bundle-Ether 9.430
vrf b010
ipv4 address 10.3.1.254 255.255.255.0
# Default gateway for tenant subnet
encapsulation dot1q 430


!
router bgp 200
vrf b010
rd 3:430
address-family ipv4 unicast
redistribute connected
# Redistributing connected interfaces to BGP
commit
end
!

Step 9

Create an external bridge network.


a.

Expand the Networking folder and right-click on External Bridged Networks.

b.

Select Create Bridge Outside (Figure 10-32).

c.

Notice that VLAN-430 is used as the external VLAN connecting to ASR 9000.

Figure 10-32

Create Bridged Outside

d.

Click on the + under Nodes and Interfaces Protocol Profiles (Figure 10-32). The Create Node
Profile window opens (Figure 10-33).

e.

Configure a Node Profile.

Figure 10-33

Create Node Profile

f.

Click on the + sign under the Interface Profiles to configure an interface profile. The vPC interface is chosen as the outside interface to connect to ASR 9000 (Figure 10-33). The Create Interface Profile window opens (Figure 10-34).


Figure 10-34

Create Interface Profile

g.

Next, you need to configure an external EPG network so that a contract can be assigned (Figure 10-35).

Figure 10-35

Create External EPG Network

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b010">
<l2extOut name="l2_outside">
<l2extRsEBd tnFvBDName="bd01" encap="vlan-430" />
<l2extLNodeP name="l2_nodes">
<l2extLIfP name="l2_interface">
<l2extRsPathL2OutAtt
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asr9k]" />
</l2extLIfP>
</l2extLNodeP>
<l2extInstP name="outside_network" />
</l2extOut>
</fvTenant>

Step 10

Add the contract to EPG.


a.

Expand the Application Profiles folder and right-click on Contracts under Application EPG
(epg01) and select Add Provided Contract (Figure 10-36).

b.

Select common/default and then select Submit.


Figure 10-36

Add Provided Contract

c.

Expand the Networking and External Bridged Networks folders, and select outside_network
under Networks.

d.

Click on the + next to Consumed Contracts and select the Default contract (Figure 10-37).

Figure 10-37

Add Consumed Contract

You can use the following XML to do the same task. The value of the variable is highlighted in bold.
<fvTenant name="b010">
<fvAp name="app01">
<fvAEPg name="epg01">
<fvRsProv tnVzBrCPName="default" />
</fvAEPg>
</fvAp>
<l2extOut name="l2_outside">
<l2extInstP name="outside_network">
<fvRsCons tnVzBrCPName="default" />
</l2extInstP>
</l2extOut>
</fvTenant>


Verify Configuration
To verify the tenant subnet reachability from the ASR 9000 nV, use the following show CLIs. In this
example, tenant b010 is used.
RP/0/RSP1/CPU0:v6-pe-NV# show vrf b010
Tue Oct 21 15:25:40.464 EDT
VRF        RD        RT              AFI   SAFI
b010       3:430     import 3:430    IPV4  Unicast
                     export 3:430    IPV4  Unicast
RP/0/RSP1/CPU0:v6-pe-NV# show interface loopback 430
Tue Oct 21 15:25:40.762 EDT
Loopback430 is up, line protocol is up
Interface state transitions: 1
Hardware is Loopback interface(s)
Internet address is 10.3.200.1/32
MTU 1500 bytes, BW 0 Kbit
reliability Unknown, txload Unknown, rxload Unknown
Encapsulation Loopback, loopback not set,
Last input Unknown, output Unknown
Last clearing of "show interface" counters Unknown
Input/output data rate is disabled.
RP/0/RSP1/CPU0:v6-pe-NV# show interface Bundle-Ether 9.430
Tue Oct 21 15:25:41.064 EDT
Bundle-Ether9.430 is up, line protocol is up
Interface state transitions: 1
Hardware is VLAN sub-interface(s), address is f025.72a9.b274
Internet address is 10.3.1.254/24
MTU 1518 bytes, BW 20000000 Kbit (Max: 20000000 Kbit)
reliability 255/255, txload 0/255, rxload 0/255
Encapsulation 802.1Q Virtual LAN, VLAN Id 430, loopback not set,
ARP type ARPA, ARP timeout 04:00:00
Last input 00:00:00, output 00:00:00
Last clearing of "show interface" counters never
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
1873 packets input, 177988 bytes, 0 total input drops
0 drops for unrecognized upper-level protocol
Received 20 broadcast packets, 0 multicast packets
3007 packets output, 517966 bytes, 0 total output drops
Output 1 broadcast packets, 2208 multicast packets
RP/0/RSP1/CPU0:v6-pe-NV# show ip route vrf b010
Tue Oct 21 15:25:41.370 EDT
Codes: C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
I - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, su - IS-IS summary null, * - candidate default
U - per-user static route, o - ODR, L - local, G - DAGR
A - access/subscriber, a - Application route, (!) - FRR Backup path
Gateway of last resort is not set
C    10.3.1.0/24 is directly connected, 18:18:28, Bundle-Ether9.430
L    10.3.1.254/32 is directly connected, 18:18:28, Bundle-Ether9.430
L    10.3.200.1/32 is directly connected, 18:18:28, Loopback430
B    100.3.201.0/24 [200/0] via 10.255.255.201 (nexthop in vrf default), 18:18:26
RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV# ping vrf b010 10.3.1.1    # Ping a VM in EPG01
Tue Oct 21 15:26:43.277 EDT
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.3.1.1, timeout is 2 seconds:
!!!!!


Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms


RP/0/RSP1/CPU0:v6-pe-NV#
RP/0/RSP1/CPU0:v6-pe-NV# sh arp vrf b010
Tue Oct 21 15:27:38.131 EDT
-------------------------------------------------------------------------------
0/0/CPU0
-------------------------------------------------------------------------------
Address         Age         Hardware Addr   State      Type  Interface
10.3.1.1        00:00:09    0050.56a5.180b  Dynamic    ARPA  Bundle-Ether9.430
10.3.1.254      -           f025.72a9.b274  Interface  ARPA  Bundle-Ether9.430
RP/0/RSP1/CPU0:v6-pe-NV#

Deployment Considerations
The following considerations are recommended.

There are two ways to implement Cisco Bronze container; they are
1.

L3 Bronze using ACI Layer 3

2.

L2 Bronze using ACI Layer 2 only

The L2 Bronze model currently provides a higher tenancy scale in the ACI Fabric than the L3
Bronze provides. For L3 Bronze, the verified scale is 100 tenants.
Refer to the scalability matrix in the following document for more details on supported scale

numbers:
http://mishield-bld.insieme.local/documentation/pdf/ACI_Verified_Scalability_Guide.pdf

As of APIC software version 1.0.2, the L3 routed external policy may use IBGP, OSPF, or static
connectivity to upstream ASR 9000 edge device.

L3 over vPC is not supported between the border leaves and the upstream router. To work around this limitation, you can configure separate L3 paths from each border leaf to the ASR 9000 nV edge.

On the border leaf node, L3 port channel or port channel sub-interfaces are not supported as of 1.0.2
release.

For L3 Bronze, only one L3 external connection per border leaf node per tenant is currently
supported.


CHAPTER 11

Copper Tenant Container


This chapter provides detailed steps for implementing the Copper tenant container within the Cisco ACI
1.0 solution.
The ACI Copper tenant container provides high-scale, economical compute services to tenants. The
components of the Copper tenant container infrastructure include the following:

The Copper tenant container underlying network infrastructure is provided by the Cisco ACI fabric.

The Nexus 93128-TX leaf switches offer high-port density.

The Cisco ASA 5585-based ASA cluster technology provides a proven security solution.

The Cisco ASR 9000 nV edge offers highly-available connectivity outside of the data center.

In this implementation, OpenStack is based on Canonical Ubuntu 14.04/Icehouse and provides the
compute services to the tenants.

The Cisco Unified Computing System (UCS) C-Series servers build the compute pods that consist of both compute and storage resources.

This implementation allows Copper tenants access to both traditional block storage via NetApp storage and to Red Hat's Ceph software-defined storage.

The Nexus 1000v for KVM provides the virtual switching capabilities as well as virtual
machine-level security to tenants.

Copper Tenant Logical Layout


This section details the logical layout of the Copper tenant, their traffic flow, and implementation of the
OpenStack host.

Logical Topology
Figure 11-1 shows the Copper container logical topology with respect to the IP addressing, routing,
security, and Network Address Translation (NAT).


Figure 11-1

Copper Container Logical Layout (Internet; ASR 9000 in AS 200 with static routes pointing to the NAT subnets; eBGP between the ASR 9000 at 10.4.101.1 and the ASA cluster at 10.4.101.2 in AS 65101 over the outside subnet 10.4.101.0/24; OS management subnet 10.0.46.0/24 towards the OpenStack Swift/RADOS GW servers; one VLAN/subnet per Copper tenant, for example Copper Tenant1 10.21.1.0/24 and Copper Tenant2 10.21.2.0/24, all sharing the same context; static NAT between tenant private addresses and public addresses such as 111.21.1.x/111.21.2.x; all tenant instances have their default gateway on the ASA)

All Copper tenants share the same context on the ASA cluster and have their own VLAN. Since all Copper tenants share the same context, non-overlapping IP addressing is used for each tenant. The connectivity from each tenant to the ASA inside is based on sub-interfaces created on the ASA clustered data port channel. Traffic going out to the Internet shares the same outside interface.
External BGP (EBGP) is used to exchange routing information between the ASR 9000 edge router and the ASA cluster. Static routes on the ASR 9000 force traffic from the Internet destined to the NAT subnets to be directed to the ASA. BGP-learned routes on the ASA direct traffic out of the container onto the ASR 9000 edge router.
Static NAT is used to allow tenant access to and from the Internet while maintaining private addresses in the tenant address space. Depending on the Internet access requirements of the OpenStack instances, each tenant may require multiple static NAT IPs.
Static NAT allows tenants access to the provider-backend OpenStack object storage. This enables the
provider to conceal the backend servers from the tenants while still allowing access to the OpenStack
SWIFT/RADOS services hosted in the provider backend.

Copper Container Traffic Flow


Traffic that flows to and from Copper tenants goes through multiple devices. Traffic originating from the tenant OpenStack instances on the Top of Rack (ToR) leaf pair 107/108 first reaches that leaf pair. From there it is forwarded through the spines to the leaf switch pair that connects the ASA cluster, since this is the instances' default gateway. Traffic leaving the container is then forwarded through the spines to the leaf switch pair that acts as the ToR for the ASR 9000 nV edge router. Figure 11-2 shows the traffic flow between the Internet and a Copper tenant.


Figure 11-2

Copper Container Traffic Flow (spines 201-204; leaf pair 103/104 connects the ASR 9000 nV edge router and the Internet; leaf pair 105/106 connects the ASA 5585 cluster; leaf pairs 105/106 and 107/108 connect the OpenStack C-Series servers hosting the tenant instances)

ACI Fabric Configuration


This section details the Copper container ACI fabric configuration. You can configure the ACI fabric
with the browser-based APIC GUI or with the REST API calls. This implementation uses a mix of XML
REST API scripting and GUI. Unless redundant, the implementation steps for both methods are
explained in this chapter. User-configurable text is highlighted in bold.
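The XML snippets shown throughout this chapter can be pushed to the APIC through its northbound REST API. The following Python sketch is a minimal illustration of that workflow, assuming an APIC reachable over HTTPS with administrator credentials; the hostname and credentials are placeholders, not values from this implementation, and the payload shown is the Copper tenant XML from the base tenant configuration below.

# Minimal sketch: authenticate to the APIC and post an XML configuration snippet.
# The APIC address and credentials are placeholders; adapt them to your fabric.
import requests

APIC = "https://apic.example.com"          # hypothetical APIC address
USER, PASSWORD = "admin", "password"       # hypothetical credentials

CONFIG_XML = """
<fvTenant name="copper" descr="Intercloud ACI 1.0 Copper tenant">
  <aaaDomainRef name="copper_sd" />
</fvTenant>
"""

session = requests.Session()
session.verify = False                     # lab-only; use proper certificates in production

# Log in; the APIC returns an authentication token that the session keeps as a cookie.
login_xml = "<aaaUser name='%s' pwd='%s' />" % (USER, PASSWORD)
session.post(APIC + "/api/aaaLogin.xml", data=login_xml).raise_for_status()

# Post the configuration under the policy universe (uni).
resp = session.post(APIC + "/api/mo/uni.xml", data=CONFIG_XML)
resp.raise_for_status()
print(resp.status_code)

Each subsequent XML snippet in this chapter can be posted the same way, either individually or combined under a single fvTenant element.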

Overview
The Copper container uses the ACI fabric as an L2 transport medium. Each Copper tenant has a corresponding application EPG and its own bridge domain. A unique L3 private network is not needed in a pure L2 environment, so all tenants share the same private network. This implementation does not use contracts and applies all security policing at the ASA cluster.
The Copper container uses the ASA cluster configuration instead of using the ACI fabric service
graphing. The ASA cluster control links (CCL) and the data links use the ACI fabric to reach both the
tenant instances and the ASR 9000 nV edge router.
All connections use static binding, including the connections from C-Series servers, ASA Cluster, ASR
9000, and NetApp NFS storage.
The process steps for configuring the Copper tenant container configuration are divided in to two main
sections, as follows:
1. ACI Link Configuration

You must configure physical connectivity from the various devices to the ACI fabric first so that it can be referenced in the tenant configurations. This includes configuring the physical ports-to-vPC port channel mapping and defining which protocols (such as LACP, CDP, LLDP, STP, and so on) to configure on each vPC.
In addition to VLAN pools and physical domains, you need to configure the mapping for each connection beforehand so that it can be used in the tenant configurations.
2. ACI Tenant Configuration


Once the physical connectivity configuration is completed, tenant-specific configuration can be started. Tenant configuration covers creating EPGs, L2 bridge domains, and allowing VLANs across the vPCs.
The following sections detail the Copper tenant container configuration steps.

ACI Link Configuration


All physical connections in the Copper container are configured as vPC port channels, including the connections from the eight C-Series servers, the ASA cluster control and data links, the ASR 9000, and the management Nexus 7000. Please refer to the DC Fabric chapter for details on configuring vPC links.

ACI Tenant Configuration


In this implementation, the Copper tenant container is built with a single ACI tenant. Configuration for
the ACI tenant encompasses the configuration for all Copper tenants, OpenStack servers, ASA cluster,
and ASR 9000 WAN and Management.
The Copper tenant container's access to the NetApp NFS storage is configured under a common storage
tenant.

Base Tenant Configuration


Base configuration for the Copper tenant includes creating the ACI tenant and the application profile.
An application profile models application requirements such as traffic filtering (contracts), services, and
VLAN constraints. Application profiles contain one or more end-point of groups (EPGs) made up of a
collection of end points that require similar policies.
The Copper tenant uses a single application profile in the same way that ACI fabric is used as a single,
large L2 domain. EPGs are created for each Copper tenant.
The following steps create the base Copper tenant configuration.
Step 1

Create a security domain which can be tied to AAA authentication/authorization methods for securing
tenant access to the APIC configuration.
<aaaUserEp>
<aaaDomain name="copper_sd" />
</aaaUserEp>

Step 2

Create the tenant.


<fvTenant name="copper" descr="Intercloud ACI 1.0 Copper tenant">
<aaaDomainRef name="copper_sd" />
</fvTenant>

Step 3

Create the application profile that provides end point groups (EPGs) later on.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
</fvAp>
</fvTenant>


Server-to-ASA Configuration
Each Copper tenant in this implementation has a unique bridge domain where their OpenStack instances
reside. These ACI bridge domains have their unicast routing disabled because the ACI fabric only
provides Layer 2 forwarding functionality to Copper tenants. Address Resolution Protocol (ARP) and
unknown unicast flooding are enabled for the proper operation of the bridge domains. Subnets are not
required for each bridge domain because the OpenStack instances have their default gateway pointing
to the ASA cluster.
Each bridge domain must be associated with a L3 context or a private network which provides IP
addressing isolation and an attachment point for L3 policies for ACI tenants. In this implementation, all
Copper tenants share a single private network and context.
Each Copper tenant has a unique EPG. EPG configures the links permitted to carry tenant VLANs.
Once the base configuration is completed, the following steps create a private network, bridge domain,
and EPG for a given Copper tenant. The private network is configured only once and it is shared by all
tenants.
Step 1

Create the context/private network.


<fvTenant name="copper">
<fvCtx name="copper_tenant_network" />
</fvTenant>

Step 2

Create the bridge domain and associate it with private network.


<fvTenant name="copper">
<fvBD name="bd01" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="copper_tenant_network" />
</fvBD>
</fvTenant>

Figure 11-3 shows the configuration of Copper tenant bridge domain.


Figure 11-3

Copper Tenant Bridge Domain

Step 3

Create an EPG under the previously created application profile and associate it with the tenant bridge domain.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
</fvAEPg>
</fvAp>
</fvTenant>

Step 4

Create static bindings for tenant VLAN.


Static bindings specify the VLAN-allowed list for vPC links configured earlier. All OpenStack server
vPCs allow each tenant VLAN because the tenant instances can reside on any OpenStack server. Because
they carry tenant traffic out of the container, all tenant VLANs should be allowed on the vPC to the ASA
cluster data links.
Figure 11-4 shows the Copper Tenant 1's L2 domain, with VLAN 501 as the tenant VLAN.


Figure 11-4

Copper Tenant L2 Domain (VLAN 501 carried on the vPC to the ASA 5585 cluster on leaf pair 105/106 and on the vPCs to the OpenStack C-Series servers hosting the tenant instances on leaf pairs 103/104 and 107/108)

There are two types of static bindings, static paths and static leaves. Static paths allow the VLANs per
vPC and the static leaves permit the VLANs per the whole switch. With static leaves the number of
entries required for VLAN is minimal compared to static paths.
The following XML REST API snippet shows how to use static paths to create static binding of VLAN 501:

1.

n103_n104_os_vpc, which is one of the vPCs towards an OpenStack server.

2.

vpc_n105_n106_asa5585_data, which is the vPC towards the ASA cluster data links.


<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="epg01">
<fvRsPathAtt encap="vlan-501" instrImedcy="immediate"
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asa5585_data]" />
<fvRsPathAtt encap="vlan-501" instrImedcy="immediate"
tDn="{{n103_n104_os_vpc}}" />
</fvAEPg>
</fvAp>
</fvTenant>

Figure 11-5 shows the EPG static path binding.


Figure 11-5

Static Path Binding for Copper Tenant

The following XML REST API snippet shows how to use static leaves to create static binding of VLAN 501 to the leaf pair 107/108. Leaf switch pair 107/108 has eight OpenStack servers, which would each require a static path entry; using the static leaves binding instead requires only two entries.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="epg01">
<fvRsNodeAtt encap="vlan-501" instrImedcy="immediate"
tDn="topology/pod-1/node-107"/>
<fvRsNodeAtt encap="vlan-501" instrImedcy="immediate"
tDn="topology/pod-1/node-108"/>
</fvAEPg>
</fvAp>
</fvTenant>

Figure 11-6 shows the EPG static leaves binding.


Figure 11-6

Static Leaves Binding for Copper Tenant

Step 5

Associate physical domains with the EPG.


Domains associate the physical domains for external devices created earlier with the EPG.


<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="epg01">
<fvRsBd tnFvBDName="bd01" />
<fvRsDomAtt tDn="uni/phys-asa_data_phy" instrImedcy="immediate"
resImedcy="immediate" />
<fvRsDomAtt tDn="uni/phys-OpenStack_phy" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>

Figure 11-7 shows the EPG domain configuration.


Figure 11-7

EPG Domain Configurations for Copper Tenant
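Because every Copper tenant VLAN must be bound to the same set of vPCs, the static binding XML lends itself to simple scripting. The following Python sketch is illustrative only: it generates the fvRsPathAtt entries for one tenant EPG from a list of vPC path names, and the server vPC names used in the example (other than vpc_n105_n106_asa5585_data) are hypothetical placeholders that depend on how the vPC interface policy groups were named in your fabric.

# Minimal sketch: generate static path binding XML for one Copper tenant EPG.
# Server vPC names other than vpc_n105_n106_asa5585_data are hypothetical placeholders.

PATH_TEMPLATE = ('<fvRsPathAtt encap="vlan-{vlan}" instrImedcy="immediate" '
                 'tDn="topology/pod-1/protpaths-{pair}/pathep-[{pathep}]" />')

def epg_static_bindings(tenant, epg, vlan, paths):
    """Return an fvTenant XML snippet binding the tenant VLAN to each vPC path."""
    atts = "\n      ".join(
        PATH_TEMPLATE.format(vlan=vlan, pair=pair, pathep=pathep)
        for pair, pathep in paths)
    return ('<fvTenant name="%s">\n'
            '  <fvAp name="copper_app_profile">\n'
            '    <fvAEPg name="%s">\n'
            '      %s\n'
            '    </fvAEPg>\n'
            '  </fvAp>\n'
            '</fvTenant>' % (tenant, epg, atts))

# Copper tenant 1: VLAN 501 on the ASA data vPC and two (hypothetical) OpenStack server vPCs.
paths = [
    ("105-106", "vpc_n105_n106_asa5585_data"),
    ("103-104", "vpc_n103_n104_os"),       # placeholder name
    ("107-108", "vpc_n107_n108_os"),       # placeholder name
]
print(epg_static_bindings("copper", "epg01", 501, paths))

The generated snippet can then be posted to the APIC REST API in the same way as the hand-written XML shown above.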

ASA-to-ASR 9000 Configuration


Traffic leaving the Copper container flows from the ASA outside interface to the ASR 9000 nV edge. The traffic coming into the container flows in the opposite direction. Because of this traffic flow, the VLAN used on the ASA outside interface needs to be allowed on the vPC from the ASA data links to the ACI fabric and on the vPC from the ASR 9000 to the ACI fabric.
Figure 11-8 shows the traffic path between the ASA cluster and ASR 9000 nV edge router.


Figure 11-8

ASA-to-ASR 9000 Connectivity for Copper Tenant (VLAN 500 carried between the ASA 5585 cluster behind leaf pair 105/106 and the ASR 9000 nV edge router behind leaf pair 103/104, towards the Internet; sub-interfaces Po1.500 and Po2.500)

The following steps detail how to configure the connectivity between the ASA data path and the ASR
9000 edge router.
Step 1

Create the context/private network.


<fvTenant name="copper">
<fvCtx name="copper_ext_network" />
</fvTenant>

Step 2

Create the bridge domain and associate it with private network. Because of the server-to-ASA
configuration, the bridge domain requires ARP flooding and unknown unicast flooding to be enabled.
The subnet is not required since EPG is bridged to the external network and the VM default gateway
points to ASR 9000.
<fvTenant name="copper">
<fvBD name="copper_ext_bd" arpFlood="yes" unkMacUcastAct="flood"
unicastRoute="no">
<fvRsCtx tnFvCtxName="copper_ext_network" />
</fvBD>
</fvTenant>

Step 3

Create an EPG under the previously created application profile and associate it with the tenant bridge
domain.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="copper_ext_epg">
<fvRsBd tnFvBDName="copper_ext_bd" />
<fvRsDomAtt tDn="uni/phys-asa_data_phy" instrImedcy="immediate"
resImedcy="immediate" />
<fvRsDomAtt tDn="uni/phys-asr9k_copper_phy" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>

Step 4

Create static bindings for the tenant's VLAN and associate them to the ASA data links vPC and ASR
9000 vPC.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="copper_ext_epg">


<fvRsPathAtt encap="vlan-500" instrImedcy="immediate"


tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asa5585_data]" />
<fvRsPathAtt encap="vlan-500" instrImedcy="immediate"
tDn="topology/pod-1/protpaths-103-104/pathep-[vpc_n103_n104_asr9knv]" />
</fvAEPg>
</fvAp>
</fvTenant>

Step 5

Associate physical domains with the EPG


<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="copper_ext_epg">
<fvRsDomAtt instrImedcy="immediate" resImedcy="immediate"
tDn="uni/phys-asa_data_phy"/>
<fvRsDomAtt instrImedcy="immediate" resImedcy="immediate"
tDn="{{asr9kPhyDomain}}"/>
</fvAEPg>
</fvAp>
</fvTenant>

Object Storage (swift/RADOS GW) Access Configuration


In this implementation, the provider backend management segments that provide the OpenStack swift/RADOS GW object storage services are reachable via two Nexus 7009 management gateways. All tenant traffic to the management segments reaches the ASA cluster first and then leaves the ASA via the data vPC towards the management gateways connected to nodes 107/108. The management VLAN (for example, VLAN 549) is allowed between the ASA and the management gateways.
Figure 11-9

Management Connectivity to Provider Backend (VLAN 549 carried across the spines 201-204 from the ASA 5585 cluster on leaf pair 105/106 to the Nexus 7009-A/B management gateways and management servers behind leaf pair 107/108)

The following steps detail how to configure the ACI tenant to access the Nexus 7009 management
gateways.


Step 1

Create the bridge domain and associate it with private network. Management EPG shares the same
private network defined for tenant EPGs. Bridge domain requires ARP flooding and unknown unicast
flooding to be enabled.
<fvTenant name="copper">
<fvBD name="mgmt_bd" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
<fvRsCtx tnFvCtxName="copper_tenant_network" />
</fvBD>
</fvTenant>

Step 2

Create an EPG under the previously created application profile and associate it with the tenant bridge
domain.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="mgmt_epg">
<fvRsBd tnFvBDName="mgmt_bd" />
</fvAEPg>
</fvAp>
</fvTenant>

Step 3

Create the static bindings for the management VLAN and associate them with the ASA data links vPC
and Nexus 7009 Switch vPCs.
<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="mgmt_epg">
<fvRsPathAtt encap="vlan-549" instrImedcy="immediate"
tDn="topology/pod-1/protpaths-105-106/pathep-[vpc_n105_n106_asa5585_data]" />
<fvRsPathAtt encap="vlan-549" instrImedcy="immediate"
tDn="topology/pod-1/protpaths-107-108/pathep-[vpc_n107_n108_vmi]" />
<fvRsNodeAtt encap="vlan-549" instrImedcy="immediate"
tDn="topology/pod-1/node-107"/>
</fvAEPg>
</fvAp>
</fvTenant>

Step 4

Associate the physical domains with the EPG.


<fvTenant name="copper">
<fvAp name="copper_app_profile">
<fvAEPg name="mgmt_epg">
<fvRsDomAtt tDn="uni/phys-asa_data_phy" instrImedcy="immediate"
resImedcy="immediate" />
<fvRsDomAtt tDn="uni/phys-vmi_phy" instrImedcy="immediate"
resImedcy="immediate" />
</fvAEPg>
</fvAp>
</fvTenant>

NFS Storage Access Configuration


The NFS storage for the Copper tenant container is configured under the storage tenant and can be
divided in to two parts:
1.

NetApp Controller Physical connectivity

2.

Copper Tenant-specific configuration


NetApp controller physical connectivity is described under the storage tenant section in Chapter 2. The following configuration snippet describes the Copper tenant-specific EPG configuration under the storage tenant.
This snippet creates an EPG and associates it with the bridge domain "ip_storage." The static path bindings
are created for the four OpenStack Nova compute hosts as well as for the three OpenStack control nodes
hosting the OpenStack cinder service. These ports are configured as access ports with "untagged"
encapsulation type. These servers are connected to leaf pair 107/108 as shown in Figure 2. Next, the
corresponding physical domain is associated with the EPG.
<fvAEPg name="os_nfs_hosts">
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-108/pathep-[eth1/31]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-107/pathep-[eth1/29]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-107/pathep-[eth1/30]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-108/pathep-[eth1/29]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-108/pathep-[eth1/30]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-108/pathep-[eth1/32]"/>
<fvRsPathAtt encap="vlan-91" instrImedcy="immediate" mode="untagged"
tDn="topology/pod-1/paths-107/pathep-[eth1/31]"/>
<fvRsDomAtt instrImedcy="immediate" resImedcy="immediate"
tDn="uni/phys-OpenStack_phy"/>
<fvRsBd tnFvBDName="ip_storage"/>
</fvAEPg>

ASA Firewall Configuration


The ASA cluster is shared between different tenants with the ASA context construct. All Copper tenants
share the same ASA context. This section details the Copper tenant-specific configuration. Figure 11-10
below shows the logical connectivity of the Copper tenants to the ASA cluster Firewall.


Figure 11-10

ASA Connectivity and Configuration (ASR 9000 in AS 200 with static routes pointing to the NAT subnets; eBGP to the ASA cluster in AS 65101 over the outside subnet 10.4.101.0/24; one inside VLAN/subnet per Copper tenant with the ASA as default gateway and static NAT to public addresses 111.21.1.x/111.21.2.x; OS management subnet 10.0.46.0/24 towards the OpenStack Swift/RADOS GW servers; example NAT flow: Src 10.21.1.x / Dest 192.168.100.100 translated to Src 10.0.46.x / Dest 10.0.45.78, the HA proxy VIP for the RADOS GW)

ASA System Context


The Copper tenant is assigned a set of port channel sub-interfaces from the ASA cluster data port
channel. Each Copper tenant has a single inside sub-interface used as the default gateway for the tenant
instances. All Copper tenants' outside traffic shares a single sub-interface. The Copper tenants require
another sub-interface for the RADOS/SWIFT object storage traffic going to the management segments.
For this implementation, certain instances are created with dual NICs to allow access to a shared
management segment.
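The per-tenant allocation in this implementation follows a regular pattern: Copper tenant N uses inside VLAN 500+N and subnet 10.21.N.0/24, with .254 configured on the ASA as the default gateway, and the sub-interface is mapped into the copper context as ethernet(N+1). The following Python sketch only illustrates that pattern; the actual configuration is shown in the sections that follow, and extending the loop beyond the four documented tenants is hypothetical.

# Minimal sketch: print per-tenant ASA sub-interface stanzas that follow the
# VLAN 500+N / 10.21.N.0/24 pattern used in this implementation. Tenants beyond
# the four documented here are hypothetical.

def system_context_subif(n):
    vlan = 500 + n
    return "\n".join([
        "interface Port-channel2.%d" % vlan,
        " description tenant %d inside" % n,
        " vlan %d" % vlan,
    ])

def copper_context_interface(n):
    # ethernet(n+1) maps to Port-channel2.(500+n) via the allocate-interface statement.
    vlan = 500 + n
    return "\n".join([
        "interface ethernet%d" % (n + 1),
        " nameif copper_vlan%d" % vlan,
        " security-level 100",
        " ip address 10.21.%d.254 255.255.255.0" % n,
    ])

for tenant in range(1, 5):
    print(system_context_subif(tenant))
    print("!")
for tenant in range(1, 5):
    print(copper_context_interface(tenant))
    print("!")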

Interface Configuration
The following configuration snippet details the interface configuration inside the system context.
interface TenGigabitEthernet0/7
channel-group 2 mode active vss-id 1
!
interface TenGigabitEthernet0/9
channel-group 2 mode active vss-id 2
!
interface Port-channel2
description Data Uplinks
lacp max-bundle 8
port-channel load-balance src-dst-ip-port
port-channel span-cluster vss-load-balance
!
interface Port-channel2.500
description copper outside
vlan 500


!
interface Port-channel2.501
description tenant 1 inside
vlan 501
!
interface Port-channel2.502
description tenant 2 inside
vlan 502
!
interface Port-channel2.503
description tenant 3 inside
vlan 503
!
interface Port-channel2.504
description tenant 4 inside
vlan 504
!
interface Port-channel2.541
description Spirent traffic from 2nd nic
vlan 541
!
interface Port-channel2.549
description To Management for Swift/rados
vlan 549
The interfaces from this snippet are then used by the Copper context as follows:
context copper
allocate-interface Management0/1 management0
allocate-interface Port-channel2.500-Port-channel2.504 ethernet1-ethernet5
allocate-interface Port-channel2.541 ethernet7
allocate-interface Port-channel2.549 ethernet6
config-url disk0:/aci-copper1.cfg
!

BGP Configuration
The ASA BGP configuration requires all contexts to share the same BGP autonomous number; therefore,
this is configured inside the system context as shown below:
router bgp 65101
bgp log-neighbor-changes

Once the autonomous number is configured in the system context, additional configurations can be
added under the individual contexts.

ASA Copper Context


In this implementation, all Copper tenants share the same context. The following sections go through the
Copper tenant-specific configurations.

Base Configuration
The following configuration snippet shows the inside interfaces configured for four tenants and the
interfaces for management and outside.
interface ethernet1
description asa outside to asr9k po10.500
nameif outside
security-level 0


ip address 10.4.101.2 255.255.255.0


!
interface ethernet2
nameif copper_vlan501
security-level 100
ip address 10.21.1.254 255.255.255.0
!
interface ethernet3
nameif copper_vlan502
security-level 100
ip address 10.21.2.254 255.255.255.0
!
interface ethernet4
nameif copper_vlan503
security-level 100
ip address 10.21.3.254 255.255.255.0
!
interface ethernet5
nameif copper_vlan504
security-level 100
ip address 10.21.4.254 255.255.255.0
!
interface ethernet6
nameif vmi
security-level 0
ip address 10.0.46.254 255.255.255.0

BGP Configuration
The following configuration snippet shows the BGP configuration for Copper tenants.
router bgp 65101
address-family ipv4 unicast
neighbor 10.4.101.1 remote-as 200
neighbor 10.4.101.1 activate
no auto-summary
no synchronization
exit-address-family

None of the internal Copper tenant VLANs are advertised out to the ASR 9000 because the static routes
are configured on the ASR 9000 for the NAT subnets.

NAT Configuration
The following sections detail the NAT configurations for Copper tenants.

NAT for Internet Access


The static NAT grants access to and from the Internet for the Copper tenant OpenStack instances. It allows the instances to reach out to the Internet and allows remote access to the instances, as shown in Figure 27. Each Copper tenant instance is configured to use a NAT translation mapping to a unique NAT
IP address. The configuration snippets below show the NAT objects and NAT configuration for a single
Copper tenant. In this example, the Copper tenant instance with a private IP address of 10.21.1.29 is
translated to a public IP address of 111.21.1.29.
object network copper1_avalanche2
host 10.21.1.29
object network copper1_avalanche2_nat
host 111.21.1.29
nat (copper_vlan501,outside) source static copper1_avalanche2 copper1_avalanche2_nat


NAT for RADOS/SWIFT object storage access


The provider management segments that provide the object storage services must be concealed from any
direct exposure to OpenStack instances. This is achieved through NAT, which translates the RADOS gateway VIP address to a private IP address. This private IP address is then presented to the
OpenStack instances. Figure 27 shows an example of the NAT translation that occurs through the ASA
for a given packet originating from the Copper Tenant 1 instance.
The RADOS gateway server is located in the provider management segment with an IP address of
10.0.45.78. This is presented to the OpenStack servers as 192.168.100.100 and conceals the actual IP
address of the server.
object network NAT_Pool_copper_mgmt
range 10.0.46.10 10.0.46.200
description Copper swift access
!
object network swift_vip
host 192.168.100.100
!
object network swift_vip_actual
host 10.0.45.78
!
nat (copper_vlan501,vmi) source static any any destination static swift_vip
swift_vip_actual
nat (vmi,copper_vlan501) source static NAT_Pool_copper_mgmt swift_vip

Deployment Considerations
The following considerations are recommended.

The Copper container uses ACI fabric as a layer 2 transport. The default gateway for tenant VMs is
configured on the ASA context.

Each tenant has unique BD/EPG in the ACI fabric with unicast routing disabled while ARP and
unknown unicast flooding enabled.

For OpenStack integration with ACI fabric, VLAN to EPG mapping is done statically.

The copper container implementation does not use contracts for policy enforcement; instead it uses
security policies defined on the ASA context.

Non-overlapping IP addressing is used for copper tenants since all tenants share the same ASA
context.

Static NAT is used on the ASA to provide access from the Internet to the tenant subnets.

Static NAT is used on the ASA to provide OpenStack instances access to object storage services.

