The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable,
and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR
OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT
THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY
DEPENDING ON FACTORS NOT TESTED BY CISCO.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of
California, Berkeley (UCB) as part of UCB’s public domain version of the UNIX operating system. All rights reserved.
Copyright © 1981, Regents of the University of California.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of
Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The
use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses
and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in
the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative
content is unintentional and coincidental.
Introduction
Enterprise Layer 3 (L3) network virtualization enables one physical network to support multiple L3
virtual private networks (L3VPNs). To a group of end users, it appears as if each L3VPN is connected
to a dedicated network with its own routing information, quality of service (QoS) parameters, and
security and access policies.
This functionality has numerous applications, including:
• Requirements to separate departments and functions within an organization for security or
compliance with statutes such as the Sarbanes-Oxley Act or Health Insurance Portability and
Accountability Act (HIPAA).
• Mergers and acquisitions in which consolidating disparate networks into one physical infrastructure
that supports existing IP address spaces and policies provides economic benefits.
• Airports in which multiple airlines each require an independent network with unique policies, but
the airport operator provides only one network infrastructure.
• Requirements to separate guest networks from internal corporate networks.
For each use case requiring network separation, an L3VPN infrastructure offers the following key benefits
over non-virtualized infrastructures or separate physical networks:
• Reduced costs—Multiple user groups sharing virtual networks benefit from greater statistical
multiplexing, which delivers bandwidth with higher utilization of expensive WAN links.
• A single network enables simpler management and operation of operations, administration, and
maintenance (OAM) protocols.
• Security between virtual networks is built in without needing complex access control lists (ACLs)
to restrict access for each user group.
• Consolidating network resources into one higher-scale virtualized infrastructure enables more
options for improved high availability (HA), including device clustering and multi-homing.
End-to-end virtualization of an enterprise network infrastructure relies upon the following primary
components:
• Virtual routing instances in edge routers, delivering service to each group that uses a virtualized
infrastructure instance
• Route-distinguishers, added to IPv4 addresses to support overlapping address spaces in the virtual
infrastructure
• Label-based forwarding in the network core so that forwarding does not rely on IP addresses in a
virtual network, which can overlap with other virtual networks
Figure 2-1 summarizes the three most common options used to virtualize enterprise Layer 3 (L3) WANs.
Figure 2-1 Options for Virtualizing Enterprise L3 WANs: (1) a customer-deployed backbone (IP and/or MPLS) with enterprise-owned P and PE routers, (2) a provider Ethernet service interconnecting enterprise CE routers, and (3) a provider MPLS VPN service.
This guide focuses on Option 1 in Figure 2-1, the enterprise-owned and operated Multiprotocol Label
Switching (MPLS) L3VPN model.
Terminology
The following terminology is used in the MPLS L3VPN architecture:
• Virtual routing and forwarding instance (VRF)—This entity in a physical router enables the
implementation of separate routing and control planes for each client network in the physical
infrastructure.
• Label Distribution Protocol (LDP)—This protocol is used on each link in the MPLS L3VPN
network to distribute labels associated with prefixes; labels are locally significant to each link.
• Multiprotocol BGP (MP-BGP)—This protocol is used to append route distinguisher values to
ensure unique addressing in the virtualized infrastructure, and to import and export routes to each
VRF based on route target community values.
• P (provider) router—This type of router, also called a Label Switching Router (LSR), runs an
Interior Gateway Protocol (IGP) and LDP.
• PE (provider edge) router—This type of router, also called an edge router, imposes and removes
MPLS labels and runs IGP, LDP, and MP-BGP.
• CE (customer edge) router—This type of router is the demarcation device in a provider-managed
VPN service. It is possible to connect a LAN to the PE directly. However, if multiple networks exist
at a customer location, a CE router simplifies the task of connecting the networks to an L3VPN
instance.
The PE router must import all client routes served by the associated CE router into the VRF of the PE
router associated with that virtual network instance. This enables the MPLS L3VPN to distribute route
information and provide connectivity among branch, data center, and campus locations.
Figure 2-2 shows how the components combine to create an MPLS L3VPN service and support multiple
L3VPNs on the physical infrastructure. In the figure, a P router connects two PE routers. The packet flow
is from left to right.
Figure 2-2 Major MPLS L3VPN Components and Packet Flow. The original IP packet enters the left PE and crosses the P core carrying a 4-byte outer IGP label and a 4-byte inner VPN label ahead of the VPN data.
The PE on the left has three groups, each using its own virtual network. Each PE has three VRFs (red,
green and blue); each VRF is for the exclusive use of one group using a virtual infrastructure.
When an IP packet comes to the PE router on the left, the PE appends two labels to the packet. BGP
appends the inner (VPN) label and its value is constant as the packet traverses the network. The inner
label value identifies the interface on the egress PE out of which the IP packet will be sent. LDP assigns
the outer (IGP) label; its value changes as the packet traverses the network to the destination PE.
For more information about MPLS VPN configuration and operation, refer to “Configuring a Basic
MPLS VPN” at:
• http://www.cisco.com/c/en/us/support/docs/multiprotocol-label-switching-mpls/mpls/13733-mpls-vpn-basic.html
This Cisco Validated Design (CVD) focuses on the role of Cisco ASR 9000 Series Aggregation Services
Routers (ASR 9000) as P and PE devices in the Multiprotocol Label Switching (MPLS) L3VPN
architecture described in Figure 2-2 on page 2-2. Providers can use this architecture to implement
network infrastructures that connect virtual networks among data centers, branch offices, and campuses
using all types of WAN connectivity.
In this architecture, data center, branch, and campus routers are considered customer edge (CE) devices. The
design considers provider (P) and provider edge (PE) router configuration with the following
connectivity control and data plane options between PE and CE routers:
• Ethernet hub-and-spoke or ring
• IP
• Network virtualization (nV)
• Pseudowire Headend (PWHE) for MPLS CE routers
Two options are considered for the MPLS L3VPN infrastructure incorporating P and PE routers:
• A flat LDP domain option, which is appropriate for smaller MPLS VPN deployments (700-1000
devices).
• A hierarchical design using RFC 3107-labeled BGP to segment P and PE domains into IGP domains
to help scale the infrastructure well beyond 50,000 devices.
This chapter first examines topics common to small and large network implementations. These topics
are discussed in the context of small network design. Later, it looks at additional technologies needed to
enable small networks to support many more users. This chapter includes the following major topics:
• Small Network Design and Implementation, page 3-1
• Large Scale Network Design and Implementation, page 3-16
Figure: Small network design—the data center connects through core and pre-aggregation nodes in a single core and aggregation IP/MPLS domain, with Ethernet and nV access toward campus/branch locations.
• Core and aggregation networks form one IGP and LDP domain.
– Scale target for this architecture is less than 700 IGP/LDP nodes
• All VPN configuration is on the PE nodes.
• Connectivity between the PE Node and the branch/campus router includes the following options:
– Ethernet hub-and-spoke or ring
– IP between PE and CE
– Network virtualization
– PWHE to collapse CE into PE as nV alternative
The domain of P and PE routers, which comprises no more than a few hundred nodes, can be implemented
using single IGP and LDP instances. On the left of the figure is the data center, with the network extending
across the WAN to branch and campus locations.
VRF Configuration
VRF configuration comprises the following major steps, which are described in detail in the subsequent
sections:
• Defining a unique VRF name on the PE.
• Configuring a route distinguisher value for the VRF under router BGP so that VRF prefixes can be
appended with the RD value to form VPNv4 prefixes.
• Importing and exporting route targets corresponding to the VPN in the VRF configuration so that the
PE can advertise routes with the assigned export route target and download prefixes tagged with the
configured import route target into the VRF table.
• Applying the VRF on the corresponding interface connected to the CPE.
PE VRF Configuration
Step 3 Configure the import route target to selectively import IPv4 routes into the VRF matching the route
target.
import route-target
8000:8002
Step 4 Configure the export route target to tag IPv4 routes having this route target while advertising to remote
PE routers.
export route-target
8000:8002
Step 6 Configure the import route target to selectively import IPv6 routes into the VRF matching the route
target.
import route-target
8000:8002
!
Step 7 Configure the export route target to tag IPv6 routes having this route target while advertising to remote
PE routers.
export route-target
8000:8002
!
!
Step 10 Define the route distinguisher value for the VRF. The route distinguisher is unique for each VRF in each
PE router.
rd 8000:8002
At this stage, the L3 VRF is defined and the route distinguisher is configured to be appended to routes
coming into the VRF. The route distinguisher enables multiple VPN clients to use overlapping IP address
spaces. The L3VPN core can differentiate overlapping addresses because each IP address is appended with
a route distinguisher and therefore is globally unique. Combined client IP addresses and route distinguishers
are referred to as VPNv4 addresses.
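Assembled from the steps above, a minimal IOS XR VRF definition might look like the following sketch. The route-target and RD value 8000:8002 come from the steps above; the VRF name BUS-VPN2, BGP AS 101, and the CE-facing interface and address are assumptions reused from examples elsewhere in this guide.
vrf BUS-VPN2
 address-family ipv4 unicast
  import route-target
   8000:8002
  !
  export route-target
   8000:8002
  !
 !
 address-family ipv6 unicast
  import route-target
   8000:8002
  !
  export route-target
   8000:8002
  !
 !
!
router bgp 101
 vrf BUS-VPN2
  rd 8000:8002
  address-family ipv4 unicast
  !
 !
!
interface GigabitEthernet0/0/1/7
 vrf BUS-VPN2
 ipv4 address 100.192.30.1 255.255.255.0
!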
To get routes from a client site at the CE (branch or campus router) into the VRF, either static routing or
a routing protocol is used. Examples of the most common static routing and eBGP scenarios follow.
Step 3 Configure the CPE IP address as a BGP peer and its autonomous system (AS) as remote-as.
neighbor 100.192.30.3 remote-as 65002
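As a sketch (not the complete validated configuration), the eBGP peering toward the CPE sits under the VRF in the PE BGP process. In IOS XR an ingress and egress route policy must also be attached before an eBGP peer accepts or advertises routes; the route-policy name PASS-ALL is an assumption for illustration.
router bgp 101
 vrf BUS-VPN2
  address-family ipv4 unicast
  !
  neighbor 100.192.30.3
   remote-as 65002
   address-family ipv4 unicast
    route-policy PASS-ALL in
    route-policy PASS-ALL out
   !
  !
 !
!
route-policy PASS-ALL
 pass
end-policy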
Alternatively, the PE is configured with static routes in the VRF, using the CPE address as the next hop. The
configuration uses the IPv4 address family for IPv4 static routes. The static routes are then advertised to
remote PEs by redistributing them under BGP.
The following procedure illustrates the configuration.
Step 4 Redistribute Static Prefixes under BGP VRF address-family IPv4 so that they are advertised to remote
PEs.
redistribute static
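For the static option, the two pieces together might be sketched as follows; the client prefix and next hop are illustrative, reusing addresses that appear in the native IP access example later in this guide.
router static
 vrf BUS-VPN2
  address-family ipv4 unicast
   100.192.193.0/24 100.192.30.3
  !
 !
!
router bgp 101
 vrf BUS-VPN2
  address-family ipv4 unicast
   redistribute static
  !
 !
!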
After routes from the branch or campus router are in the client VRF, the routes must be advertised to
other sites in the L3VPN to enable reachability. Reachability is delivered using MP-BGP to advertise
VPNv4 addresses, associated with the VRF at the branch location, to members of the same VPN.
PE MP-BGP Configuration
MP-BGP configuration comprises BGP peering with route reflector for VPNv4 and VPNv6 address
families to advertise and receive VPNv4 and VPNv6 prefixes. MP-BGP uses session-group to configure
address-family independent (global) parameters; peers requiring the same parameters can inherit its
configuration.
Session-group includes update-source, which specifies the interface whose address is used for BGP
communication, and remote-as, which specifies the AS number of the BGP peer.
Neighbor-group is configured to import session-group for address-family independent parameters, and
to configure address-family dependent parameters, such as next-hop-self, in the corresponding
address-family.
The following procedure illustrates MP-BGP configuration on PE.
Step 10 Enable vpnv4 address-family for the neighbor group and configure address-family dependent parameters under the VPNv4 address-family.
address-family vpnv4 unicast
!
Step 11 Enable vpnv6 address-family for the neighbor group and configure address-family dependent parameters under the VPNv6 AF.
address-family vpnv6 unicast
!
Step 12 Import the neighbor-group route-reflector to define the route-reflector address as a VPNv4 and VPNv6
peer.
neighbor 100.111.4.3
use neighbor-group rr
!
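Putting the MP-BGP pieces together, the PE-side peering toward the route reflector might be sketched as follows. The session-group and neighbor-group names and the RR address 100.111.4.3 follow the fragments above; remote-as 101 is assumed from the router bgp 101 examples used elsewhere in this guide.
router bgp 101
 address-family vpnv4 unicast
 !
 address-family vpnv6 unicast
 !
 session-group intra-as
  remote-as 101
  update-source Loopback0
 !
 neighbor-group rr
  use session-group intra-as
  address-family vpnv4 unicast
  !
  address-family vpnv6 unicast
  !
 !
 neighbor 100.111.4.3
  use neighbor-group rr
 !
!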
The preceding sections described how to configure virtual networks on a PE router. The network can
have hundreds of PE routers connecting to campus/branch routers and data centers. A PE router in one
location learns VRF prefixes of remote locations using Multiprotocol iBGP. PEs cannot advertise a VPNv4
prefix received from one iBGP peer to another iBGP peer because of the iBGP split-horizon rule, so iBGP
requires a full mesh between all iBGP-speaking PEs. A full mesh causes scalability and overhead issues,
because each PE router must maintain iBGP sessions with all remote PEs and send updates to all iBGP
peers, which duplicates effort. To address this issue, route reflectors can be deployed, as explained below.
Step 9 Configure the RR to send both standard and extended communities (RT) to peer-group members.
neighbor rr-client send-community both
Step 10 Activate the PE as peer for VPNv4 peering under VPNv4 address-family.
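A route reflector running classic IOS might be configured along these lines. This is a sketch assuming an IOS-based RR with a peer group named rr-client, as the fragments above suggest; the PE loopback 100.111.11.1 is reused from the PE examples in this guide.
router bgp 101
 neighbor rr-client peer-group
 neighbor rr-client remote-as 101
 neighbor rr-client update-source Loopback0
 neighbor 100.111.11.1 peer-group rr-client
 !
 address-family vpnv4
  neighbor rr-client send-community both
  neighbor rr-client route-reflector-client
  neighbor 100.111.11.1 activate
 exit-address-family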
After configuring PE with the required virtual network configuration described above, transport must be
set up to carry virtual network traffic from one location to another. The next section describes how we
can implement transport and optimize it with fast detection and convergence for seamless service
delivery.
• LFA FRR calculates the backup path for each prefix in the IGP routing table; if a failure is detected,
the router immediately switches to the appropriate backup path in about 50 ms. Only loop-free paths
are candidates for backup paths.
• rLFA FRR works differently because it is designed for cases in which a physical backup path exists
but no loop-free alternate path. In the rLFA case, automatic LDP tunnels are set up to provide LFAs
for all network nodes.
Without LFA or rLFA FRR, a router calculates the alternate path after a failure is detected, which results
in delayed convergence. However, LFA FRR calculates the alternate paths in advance to enable faster
convergence. P and PE devices have alternate paths calculated for all prefixes in the IGP table, and use
rLFA FRR to fast reroute in case of failure in a primary path.
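The distinction shows up as two related IOS XR commands under the IS-IS interface address family, sketched below; the IS-IS instance name CORE is an assumption for illustration.
router isis CORE
 interface TenGigE0/0/0/0
  address-family ipv4 unicast
   ! Per-prefix LFA: precompute a local loop-free alternate for each prefix
   fast-reroute per-prefix
   ! Remote LFA: build automatic LDP tunnels to remote nodes when no local LFA exists
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp
  !
 !
!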
PE Transport Configuration
PE configuration includes enabling IGP (IS-IS or OSPF can be used) to exchange core and aggregation
reachability, and enabling LDP to exchange labels on core facing interfaces. A loopback interface is also
advertised in IGP as the BGP VPNv4 session is created, using update-source Loopback0 as mentioned
in PE Operation and Configuration, page 3-2. Using the loopback address to source updates and target
updates to remote peers improves reliability; the loopback interface is always up when the router is up,
unlike physical interfaces that can have link failures.
BFD is configured on core-facing interfaces using a 15 ms hello interval and multiplier 3 to enable fast
failure detection in the transport network. rLFA FRR is used under IS-IS level 2 for fast convergence if
a transport network failure occurs. BGP PIC is configured under VPNv4 address-family for fast
convergence of VPNv4 Prefixes if a remote PE becomes unreachable.
The following procedure describes PE transport configuration.
Step 6 Metric-style wide generates new-style TLVs with wider metric fields for IPv4.
metric-style wide
!
Step 7 Enter IPv6 address-family for IS-IS.
address-family ipv6 unicast
Step 8 Metric-style Wide generates new-style TLV with wider metric fields for IPv6.
metric-style wide
!
Step 9 Configure IS-IS for Loopback interface.
interface Loopback0
Step 15 Configure Minimum Interval between sending BFD hello packets to the neighbor.
bfd minimum-interval 15
Step 17 Enable BFD to detect failures in the path between adjacent forwarding engines.
bfd fast-detect ipv4
Step 21 Configure an FRR path that redirects traffic to a remote LFA tunnel.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Step 22 Enable MPLS LDP sync to ensure LDP comes up on a link before the link is used for forwarding, to avoid packet loss.
mpls ldp sync
!
!
Step 23 Enter MPLS LDP configuration mode.
mpls ldp
log
graceful-restart
!
Step 24 Configure router-id for LDP.
router-id 100.111.11.1
!
Step 25 Enable LDP on TenGig0/0/0/0.
interface TenGigE0/0/0/0
address-family ipv4
!
Step 26 Enter BGP configuration mode.
router bgp 101
Step 28 Configure receive capability of multiple paths for a prefix to the capable peers.
additional-paths receive
Step 29 Configure send capability of multiple paths for a prefix to the capable peers.
additional-paths send
Step 30 Enable BGP PIC functionality with appropriate route-policy to calculate back up paths.
additional-paths selection route-policy add-path-to-ibgp
!
Step 31 Configure route-policy used in BGP PIC.
route-policy add-path-to-ibgp
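The policy body itself is not shown above; a typical BGP PIC additional-path selection policy (a sketch, not necessarily the exact validated policy) installs one backup path:
route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy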
P Transport Configuration
P transport configuration includes enabling IGP (IS-IS or OSPF) to exchange core and aggregation
reachability, and enabling LDP to exchange labels on core-facing interfaces. P routers are not required to
run MP-BGP because VRFs are not configured on them, so they do not need VPNv4 and VPNv6 prefixes.
P routers know only core and aggregation prefixes in the transport network and do not need to know
prefixes belonging to VPNs. P routers swap labels based on the top label of received packets, which
corresponds to remote PEs, and use the LFIB to build the PE-to-PE LSP. rLFA FRR is used under IS-IS
level 2 for fast convergence if a transport network failure occurs.
Step 6 Metric-style Wide generates new-style TLV with wider metric fields for IPv4.
metric-style wide
!
Step 7 Configure IS-IS for Loopback interface.
interface Loopback0
Step 12 Configure Minimum Interval between sending BFD hello packets to the neighbor.
bfd minimum-interval 15
Step 14 Enable BFD to detect failures in the path between adjacent forwarding engines.
bfd fast-detect ipv4
Step 17 Enable MPLS LDP sync to ensure LDP comes up on a link before the link is used for forwarding, to avoid packet loss.
mpls ldp sync
!
!
Step 18 Configure IS-IS for TenGigE0/0/0/1 interface.
interface TenGigE0/0/0/1
Step 20 Configure Minimum Interval between sending BFD hello packets to the neighbor.
bfd minimum-interval 15
Step 22 Enable BFD to detect failures in the path between adjacent forwarding engines.
bfd fast-detect ipv4
Step 26 Configure an FRR path that redirects traffic to a remote LFA tunnel.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Step 27 Enable MPLS LDP sync to ensure LDP comes up on a link before the link is used for forwarding, to avoid packet loss.
mpls ldp sync
!
!
Step 28 Enter MPLS LDP configuration mode.
mpls ldp
log
neighbor
graceful-restart
The QoS configuration includes class-maps for the different traffic classes mentioned above, each
matching the corresponding MPLS EXP value. In the policy maps, the real-time traffic class CMAP-RT-EXP
is configured with the highest priority (level 1); it is also policed to ensure low-latency expedited
forwarding (EF). The remaining classes are assigned their respective required bandwidth. WRED is used
as the congestion avoidance mechanism for EXP 1 and 2 traffic in the enterprise-critical class
CMAP-EC-EXP. The policy map is applied to the PE and P core interfaces in the egress direction across
the MPLS network.
end-class-map
!
Step 9 Class-map for control traffic.
class-map match-any CMAP-CTRL-EXP
policy-map PMAP-NNI-E
Step 15 Define top priority 1 for the class for low-latency queuing.
priority level 1
Figure: Large-scale network design—independent aggregation IP/MPLS domains and a core IP/MPLS domain are interconnected by a hierarchical LSP (iBGP/eBGP labeled unicast), with intra-domain LDP LSPs in each domain and Ethernet/nV access toward campus/branch locations.
• The core and aggregation networks add hierarchy, with RFC 3107 ABRs at the border between core and aggregation.
• The core and aggregation networks are organized as independent IGP/LDP domains.
• The network domains are interconnected with hierarchical LSPs based on RFC 3107, BGP
IPv4+labels. Intra-domain connectivity is based on LDP LSPs.
• Topologies between the PE Node and branch router can be Ethernet hub-and-spoke, IP, Ethernet
ring, or nV.
Core route reflectors (RRs) reflect labeled routes to their clients without changing the next hop or other
attributes. ABRs learn PE loopback addresses and labels from other aggregation domains and advertise
them to PEs in their local aggregation domain. ABRs use next-hop-self while advertising routes to PEs in
the local aggregation domain and to RRs in the core domain. As a result, PEs learn remote PE loopback
addresses and labels with the local ABR as the BGP next hop, and ABRs learn remote PE loopback
addresses with the remote ABR as the BGP next hop. PEs use two transport
labels when sending labeled VPN traffic to the MPLS cloud: one label for remote PE and another label
for its BGP next-hop (local ABR). The top label for BGP next-hop local ABR is learned from local
IGP/LDP. The label below that, for remote PE, is learned through labeled IBGP with the local ABR.
Intermediate devices across different domains perform label swapping based on the top label in received
MPLS packets. This achieves end-to-end hierarchical LSP without running the entire network in a single
IGP/LDP domain. Devices learn only necessary information, such as prefixes in local domains and
remote PE loopback addresses, which makes labeled BGP scalable for large networks.
Figure: Hierarchical labeled BGP design—ABRs at the aggregation/core borders run BGP IPv4+label with PEs in their local aggregation domain and with the core RR, setting next-hop-self in both directions; each domain provides an intra-domain LDP LSP, and packets carry a VPN label, a remote-PE label, and a local-ABR label toward the destination.
• Aggregation domains run ISIS level-1/OSPF non-backbone area and core domain runs ISIS
level-2/backbone area.
• ABR connects to both aggregation and core domains.
• ABR runs Labeled iBGP with PEs in local aggregation domain and core RR in core domain.
• ABR uses next-hop-self while advertising routes to PEs and core RR.
ABR loopbacks must be reachable in both the aggregation and core domains because they are used for
labeled BGP peering with PEs in the local aggregation domain as well as with the RR in the core domain.
To achieve this, the ABR loopbacks are placed in IS-IS Level-1-2 or the OSPF backbone area.
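For orientation, the ABR side of the labeled BGP (RFC 3107) hierarchy might be sketched as follows. The group names are assumptions, AS 101 and next-hop-self follow the description above, and allocate-label all causes the ABR to advertise its IPv4 routes with labels.
router bgp 101
 address-family ipv4 unicast
  allocate-label all
 !
 session-group intra-as
  remote-as 101
  update-source Loopback0
 !
 neighbor-group agg-pe
  use session-group intra-as
  address-family ipv4 labeled-unicast
   route-reflector-client
   next-hop-self
  !
 !
 neighbor-group core-rr
  use session-group intra-as
  address-family ipv4 labeled-unicast
   next-hop-self
  !
 !
!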
PE Transport Configuration
Step 5 Metric-style Wide generates new-style TLV with wider metric fields for IPv4.
metric-style wide
!
Step 6 Configure IS-IS for Loopback interface.
interface Loopback0
Step 10 Configure minimum interval between sending BFD hello packets to the neighbor.
bfd minimum-interval 15
Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.
bfd fast-detect ipv4
Step 16 Configure an FRR path that redirects traffic to a remote LFA tunnel.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Step 18 Enable mpls LDP sync to ensure LDP comes up on link before link is used for forwarding to avoid packet
loss.
mpls ldp sync
!
!
Step 19 Enter router BGP configuration mode.
router bgp 101
!
Step 20 Enter IPv4 address-family.
address-family ipv4 unicast
Step 21 Configure receive capability of multiple paths for a prefix to the capable peers.
additional-paths receive
Step 22 Configure send capability of multiple paths for a prefix to the capable peers.
additional-paths send
Step 23 Enable BGP PIC functionality with appropriate route-policy to calculate back up paths.
additional-paths selection route-policy add-path-to-ibgp
!
Step 24 Configure session-group to define parameters that are address-family independent.
session-group intra-as
!
Step 32 Configure route-policy used in BGP PIC.
route-policy add-path-to-ibgp
Step 4 Metric-style Wide generates new-style TLV with wider metric fields for IPv4.
metric-style wide
!
Step 5 Configure IS-IS for Loopback interface.
interface Loopback0
circuit-type level-1
Step 10 Configure minimum interval between sending BFD hello packets to the neighbor.
bfd minimum-interval 15
Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.
bfd fast-detect ipv4
Step 15 Configure an FRR path that redirects traffic to a remote LFA tunnel.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
Step 17 Enable MPLS LDP sync to ensure LDP comes up on link before link is used for forwarding to avoid
packet loss.
mpls ldp sync
!
!
Step 18 Configure IS-IS for TenGigE0/2/0/1 interface.
interface TenGigE0/2/0/1
Step 20 Configure minimum interval between sending BFD hello packets to the neighbor.
bfd minimum-interval 15
Step 22 Enable BFD to detect failures in the path between adjacent forwarding engines.
bfd fast-detect ipv4
Step 25 Configure an FRR path that redirects traffic to a remote LFA tunnel.
Step 27 Enable mpls LDP sync to ensure LDP comes up on link before link is used for forwarding to avoid packet
loss.
mpls ldp sync
!
!
Step 28 Enter Router BGP configuration mode.
router bgp 101
!
Step 29 Enter IPv4 address-family.
address-family ipv4 unicast
Step 30 Configure receive capability of multiple paths for a prefix to the capable peers.
additional-paths receive
Step 31 Configure send capability of multiple paths for a prefix to the capable peers.
additional-paths send
Step 32 Enable BGP PIC functionality with appropriate route-policy to calculate back up paths.
additional-paths selection route-policy add-path-to-ibgp
!
Step 33 Configure session-group to define parameters that are address-family independent.
session-group intra-as
Step 4 Metric-style Wide generates new-style TLV with wider metric fields for IPv4.
metric-style wide
!
Step 10 Configure minimum interval between sending BFD hello packets to the neighbor.
bfd minimum-interval 15
Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.
bfd fast-detect ipv4
Step 15 Configure an FRR path that redirects traffic to a remote LFA tunnel.
fast-reroute per-prefix remote-lfa tunnel mpls-ldp
metric 10
Step 17 Enable MPLS LDP sync to ensure LDP comes up on link before link is used for forwarding to avoid
packet loss.
mpls ldp sync
!
!
Step 18 Enter router BGP configuration mode.
router bgp 101
!
Step 19 Enter IPv4 address-family.
address-family ipv4 unicast
Step 20 Configure receive capability of multiple paths for a prefix to the capable peers.
additional-paths receive
Step 21 Configure send capability of multiple paths for a prefix to the capable peers.
additional-paths send
Step 22 Enable BGP PIC functionality with appropriate route-policy to calculate back-up paths.
additional-paths selection route-policy add-path-to-ibgp
!
Step 23 Configure session-group to define parameters that are address-family independent.
session-group intra-as
mpls ldp
log
neighbor
graceful-restart
This section described how to implement a hierarchical transport network using labeled BGP as a scalable
solution for large-scale networks, with fast failure detection and fast convergence mechanisms. This
solution avoids unnecessary resource usage, simplifies network implementation, and achieves faster
convergence for large networks.
Virtual network implementation on PE including VRF creation, MP BGP, BGP PIC, rLFA, VPNv4 RR,
Transport QoS, and P configuration will remain the same in concept and configuration as described in
Small Network Design and Implementation, page 3-1.
While the domain creating the MPLS L3 service, consisting of P and PE routers, remains the same
regardless of access technology, the technologies and designs used to connect the PE to the CE device
vary considerably based on technology preference, installed base, and operational expertise.
Common characteristics, however, exist for each of the options. Each design needs to consider the
following:
• The topology implemented, either hub-and-spoke or rings
• How redundancy is configured
• The type of QoS implementation
Network availability is critical for enterprises because network outages often lead to loss of revenue. In
order to improve network reliability, branch/campus routers and data centers are multihomed to PE
devices using one of several access topologies to achieve PE node redundancy. Each topology
should, however, be reliable and resilient to provide seamless connectivity. This is achieved as described
in this chapter, which includes the following major topics:
• Inter-Chassis Communication Protocol, page 4-1
• Ethernet Access, page 4-2
• nV (Network Virtualization) Access, page 4-16
• Native IP-Connected Access, page 4-25
• MPLS Access using Pseudowire Headend, page 4-28
The Inter-Chassis Communication Protocol (ICCP) provides a control connection between PEs; applications
such as Multichassis Link Aggregation Group (MC-LAG) and Network Virtualization (nV), described in the
next sections, use this control connection to share state information. ICCP is configured as described below.
ICCP Configuration
Step 1 Add an ICCP redundancy group with the mentioned group-id.
redundancy
iccp
group group-id
Step 2 This is the ICCP peer for this redundancy group. Only one neighbor can be configured per redundancy
group. The IP address is the LDP router-ID of the neighbor. This configuration is required for ICCP to
function.
member
neighbor neighbor-ip-address
!
Step 3 Configure ICCP backbone interfaces to detect isolation from the network core and trigger switchover
to the peer PE if core isolation occurs on the active PE. Multiple backbone interfaces can be configured
for each redundancy group. When all the backbone interfaces are down, this is an indication of core
isolation.
backbone
backbone interface interface-type-id
!
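Filled in with values used elsewhere in this guide (group 210 and peer 100.111.11.2), and with two core-facing TenGigE interfaces assumed as backbone links, the ICCP skeleton looks like this sketch:
redundancy
 iccp
  group 210
   member
    neighbor 100.111.11.2
   !
   backbone
    interface TenGigE0/0/0/0
    interface TenGigE0/0/0/2
   !
  !
 !
!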
The preceding section described how ICCP provides a control channel between PEs to communicate state
information, supporting a resilient access infrastructure that can be used by different topologies. The next
section discusses the various access topologies that can be implemented among branch, campus, or data
center devices and the enterprise L3VPN network. Each topology provides redundancy and fast failure
detection and convergence mechanisms for seamless last-mile connectivity.
Ethernet Access
Ethernet access can be implemented as hub-and-spoke or ring access, as described below.
• Link failure—A port or link between the CE and one of the PEs fails.
• Device failure—Meltdown or reload of one of the PEs, with total loss of connectivity to the CE, the
core and the other PE.
• Core isolation—A PE loses its connectivity to the core network and therefore is of no value, being
unable to forward traffic to or from the CE.
Figure: MC-LAG hub-and-spoke access—the CPE (branch/campus router) bundles two links (Po1, all VLANs) toward two ASR 9000 PEs that run MC-LAG (Bundle-Ether 222) with ICCP between them; one PE port is active and the other is hot standby toward the MPLS network.
A loss of connectivity between the PEs may lead both devices to assume that the other has experienced a
device failure; both then attempt to take on the active role, which causes a loop. The CE can mitigate this
situation by limiting the number of active links so that only links connected to one PE are active at a time.
Hub-and-spoke access configuration is described in Table 4-1.
Table 4-1 Hub-and-Spoke Access Configuration
CE Configuration:
interface gig 0/10
 channel-group 1 mode active
!
interface gig 0/11
 channel-group 1 mode active
!
interface port-channel 1
 lacp max-bundle 1
!
Explanation: Configures the CE interfaces toward the PEs in a port channel. The lacp max-bundle 1 command defines the maximum number of active bundled LACP ports allowed in the port channel; in this design each PE has one link to the CPE, so only one link remains active at a time.
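On the PE side (not shown in Table 4-1), MC-LAG pairs the ICCP redundancy group with an mLACP-enabled bundle. The following is a sketch only; the ICCP group number, mLACP node ID, system MAC, and member interface are assumptions for illustration, and the bundle number follows the Bundle-Ether 222 shown in the figure above.
redundancy
 iccp
  group 1
   mlacp node 1
   mlacp system mac 0000.0001.0001
   mlacp system priority 1
   member
    neighbor 100.111.11.2
   !
  !
 !
!
interface Bundle-Ether222
 mlacp iccp-group 1
 mlacp switchover type revertive
!
interface TenGigE0/0/0/0
 bundle id 222 mode active
!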
MC-LAG provides interchassis redundancy based on the active/standby PE model. In order to achieve
the active/active PE model for both load balancing and redundancy, we can use VRRP as described
below.
Hub-and-spoke with VRRP configuration includes configuring a bundle interface on both PE devices on
the links connecting to the CE. In this case, although bundle interfaces are used, in contrast to MC-LAG
they are not aggregated across the two PEs. On the PE ASR 9000s, bundle subinterfaces are configured to
match the data VLANs, and VRFs are configured on them for L3VPN service. VRRP is configured on these
L3 interfaces. To achieve ECMP, one PE is configured with a higher priority for one VLAN's VRRP group
and the other PE for the other VLAN's VRRP group. VRRP hello timers can be changed and set to the
minimum available value of 100 ms. BFD is configured for VRRP for fast failover and recovery. For core
isolation tracking, VRRP is configured with backbone interface tracking for each group so that if all
backbone interfaces go down, the overall VRRP priority is lowered below the peer PE's VRRP priority and
the peer PE can take over as master.
Figure: Hub-and-spoke access with VRRP—each PE terminates bundle subinterfaces toward the CE (for example, Bundle-Ether 1.12 with address 112.1.1.2); one PE is VRRP active for group 113 and standby for group 112, and the other PE the reverse.
PE Configuration
Step 5 Set the priority for VRRP group 112 to 254 so that this PE becomes VRRP active for this group.
priority 254
Step 6 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.
preempt delay 15
address 112.1.1.1
Step 7 Configure millisecond timers for advertisement with force keyword to force the timers.
timer msec 100 force
Step 9 Enable backbone interface tracking so that if one interface goes down, the VRRP priority is lowered
by 100, and if both interfaces go down (core isolation), the priority is lowered by 200; the resulting priority
is lower than the peer's default priority, so a switchover takes place.
track interface TenGigE0/0/0/0 100
track interface TenGigE0/0/0/0 100
!
!
Step 12 Set the priority for VRRP group 112 to 254 so that this PE becomes VRRP active for this group.
priority 254
Step 13 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.
preempt delay 15
Step 15 Configure millisecond timers for advertisement with force keyword to force the timers.
timer msec 100 force
address linklocal autoconfig
Step 18 Configure VRRP group 113 with the default priority so that the other PE, with priority 254, becomes VRRP active for this group.
vrrp 113
Step 20 Configure millisecond timers for advertisement with force keyword to force the timers.
timer msec 100 force
Step 23 Configure VRRP group 113 with the default priority so that the other PE becomes VRRP active for this group.
vrrp 113
Step 25 Configure millisecond timers for advertisement with force keyword to force the timers.
address linklocal autoconfig
interface Bundle-Ether1.12
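Assembled, the VRRP portion of the PE configuration for the two groups might look like the following sketch. The priorities, timers, tracked interface, and group 112 addresses follow the fragments above; the group 113 subinterface (Bundle-Ether1.13), its addresses, and the BFD peer command are assumptions for illustration. This PE is active for group 112 and standby for group 113.
router vrrp
 interface Bundle-Ether1.12
  address-family ipv4
   vrrp 112
    priority 254
    preempt delay 15
    timer msec 100 force
    address 112.1.1.1
    track interface TenGigE0/0/0/0 100
    bfd fast-detect peer ipv4 112.1.1.2
   !
  !
 !
 interface Bundle-Ether1.13
  address-family ipv4
   vrrp 113
    timer msec 100 force
    address 113.1.1.1
   !
  !
 !
!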
The access switch is configured with the data VLANs allowed on the PE- and CE-connecting interfaces.
Spanning tree is disabled because Pseudo MLACP takes care of loop prevention.
Step 1 Disable spanning tree for data VLANs used in Pseudo MLACP.
no spanning-tree vlan 112-113
Step 2 The trunks connecting to the CE and PEs have the same configuration, allowing the data VLANs on the trunks.
interface GigabitEthernet0/1
switchport trunk allowed vlan 100-103,112,113
CPE Configuration
Step 3 IPv4 and IPv6 static routes are configured with the VRRP address as the next hop. One PE is master for one VRRP address and the other PE is master for the other VRRP address.
ip route 112.2.1.0 255.255.255.0 112.1.1.1
ip route 113.2.1.0 255.255.255.0 113.1.1.1
ipv6 route 2001:112:2:1::/64 2001:112:1:1::1
ipv6 route 2001:113:2:1::/64 2001:113:1:1::1
In the G.8032 configuration, the PE devices, which act as RPL owner nodes for one of the two instances
each, are configured with the interface connected to the ring. Two instances are configured, one for odd
and one for even VLANs; each PE is RPL owner for one instance to achieve load balancing and redundancy.
Both instances are configured with a dot1q subinterface for the respective APS channel communication.
The PEs are configured with BVI interfaces for the VLANs in both instances, and a VRF is configured on
the BVI interfaces for L3VPN service. The CE interface connecting to the G.8032 ring is configured as a
trunk allowing all VLANs, with SVIs configured on the CE for L3 communication. The BVIs are configured
with a First Hop Redundancy Protocol (FHRP) and the CE uses the FHRP address as its default gateway.
In this example, VRRP is used on the PEs as the FHRP, although any available FHRP protocol can be used.
Each PE is configured with a high VRRP priority for the VLANs for which it is not the RPL owner. Because
VRRP communication between the PEs is blocked along the ring by the G.8032 loop prevention mechanism,
a pseudowire configured between the PEs enables VRRP communication. Under normal conditions, the CE
sends traffic directly along the ring to the VRRP active PE gateway. Two failure conditions exist:
• In the case of a link failure in the ring, both PEs open their RPL links for both instances and retain
their VRRP states because VRRP communication between them is still up over the pseudowire. Due
to the broken ring, the CE has direct connectivity to only one PE along the ring, depending on which
section (right or left) of the G.8032 ring has failed. In that case, traffic from the CE to the other PE
follows the path to the reachable PE along the ring and then uses the pseudowire between the PEs.
• In the case of a PE node failure, pseudowire connectivity between the PEs goes down, causing VRRP
communication to also go down. The PE that remains up becomes VRRP active for all VLANs, and
all traffic from the CE is sent to that PE.
Figure: G.8032 ring access—the CPE (branch/campus router, G0/15) connects through Ethernet access nodes in a G.8032 ring to two ASR 9000 PEs (TenGigE0/3/0/0) facing the MPLS network; the ring is blocked for instance 2 (odd VLANs) at one access node.
The PE's dot1q subinterface for data-VLAN communication with the CE, the pseudowire connecting both
PEs, and the BVI interface are configured in the same bridge domain, which places both PEs and the CE in
the same broadcast domain for that data VLAN. If a link fails, the CE can therefore still communicate with
both PEs along the available path and the pseudowire.
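In IOS XR terms, the arrangement described above can be sketched as one bridge domain per data VLAN containing the ring-facing subinterface, the inter-PE pseudowire, and the routed BVI. The bridge-group name, pseudowire ID, and peer address are assumptions for illustration; the bridge-domain and BVI names follow the steps below.
l2vpn
 bridge group CE-L3VPN
  bridge-domain CE-L3VPN-118
   interface TenGigE0/3/0/0.118
   !
   neighbor 100.111.11.2 pw-id 118
   !
   routed interface BVI118
  !
 !
!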
PE Configuration
interface TenGigE0/3/0/0
!
Step 4 Symmetrically pop one tag when receiving packets and push one tag when sending traffic from the interface.
rewrite ingress tag pop 1 symmetric
!
Step 13 Set the priority for VRRP group 118 to 254 so that this PE becomes VRRP active for this group.
priority 254
Step 14 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.
preempt delay 15
Step 16 Configure millisecond timers for advertisement with force keyword to force the timers.
timer msec 100 force
Step 20 Set the priority for VRRP group 118 to 254 so that this PE becomes VRRP active for this group.
priority 254
Step 21 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.
preempt delay 15
Step 23 Configure millisecond timers for advertisement with force keyword to force the timers.
timer msec 100 force
Step 26 Configure VRRP group 119 with the default priority so that the other PE, with priority 254, becomes VRRP active for this group.
vrrp 119
Step 28 Configure millisecond timers for advertisement with force keyword to force the timers.
timer msec 100 force
Step 31 Configure VRRP group 119 with the default priority so that the other PE becomes VRRP active for this group.
vrrp 119
Step 33 Configure millisecond timers for advertisement with force keyword to force the timers
timer msec 100 force
!
!
Step 37 Enable subinterface connected to ring towards CE under bridge domain CE-L3VPN-118.
interface TenGigE0/3/0/0.118
Step 41 Enable subinterface connected to ring towards CE under same bridge domain CE-L3VPN-119.
interface TenGigE0/3/0/0.119
Step 43 Associate routed interface BVI119 with the bridge domain.
routed interface BVI119
!
port1 none
open-ring
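The G.8032 ring definition itself (only the port1 none and open-ring fragments appear above) might be sketched as follows on the RPL-owner PE; the ring name, the VLAN inclusion list, and the APS-channel subinterface are assumptions for illustration.
l2vpn
 ethernet ring g8032 ring-access
  port0 interface TenGigE0/3/0/0
  !
  port1 none
  open-ring
  instance 1
   inclusion-list vlan-ids 118,99
   rpl port0 owner
   aps-channel
    port0 interface TenGigE0/3/0/0.99
    port1 none
   !
  !
 !
!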
CE Configuration
Step 5 Configure IPv4 Static route towards VRRP address for VLAN 118.
ip route 118.2.1.0 255.255.255.0 118.1.1.1
Step 6 Configure IPv4 Static route towards VRRP address for VLAN 119.
ip route 119.2.1.0 255.255.255.0 119.1.1.1
Step 7 Configure IPv6 Static route towards VRRP address for VLAN 118.
ipv6 route 2001:118:2:1::/64 2001:118:1:1::1
Step 8 Configure IPv6 Static route towards VRRP address for VLAN 119.
ipv6 route 2001:119:2:1::/64 2001:119:1:1::1
Step 13 Assign the service instance for APS messages on port0 and port1.
port0 service instance 99
port1 service instance 99
!
!
Step 14 Configure Instance 2.
instance 2
Step 19 Assign the service instance for APS messages on port0 and port1.
port0 service instance 199
port1 service instance 199
!
!
!
Step 21 Configure service instance used for APS messages on G.8032 ring for both instances.
service instance 99 ethernet
encapsulation dot1q 99
rewrite ingress tag pop 1 symmetric
bridge-domain 99
!
service instance 199 ethernet
encapsulation dot1q 199
rewrite ingress tag pop 1 symmetric
bridge-domain 199
!
Step 23 Configure service instance used for APS messages on G.8032 ring for both instances.
service instance 99 ethernet
encapsulation dot1q 99
rewrite ingress tag pop 1 symmetric
bridge-domain 99
!
service instance 199 ethernet
encapsulation dot1q 199
rewrite ingress tag pop 1 symmetric
bridge-domain 199
!
!
In the nV satellite architecture, traffic is mapped between each satellite node physical port and its virtual
counterpart at the host for traffic flowing in the upstream and downstream directions. Satellite access ports
are mapped as local ports at the host using the following naming convention:
<port type><Satellite-ID>/<satellite-slot>/<satellite-bay>/<satellite-port>
where:
• <port type> is GigabitEthernet for all existing satellite models
• <Satellite-ID> is the satellite number as defined at the Host
• <satellite-slot>/<satellite-bay>/<satellite-port> are the access port information as known at the
satellite node.
These satellite virtual interfaces on the Host PEs are configured with VRF to enable L3VPN service.
The satellite architecture encompasses multiple connectivity models between the host and the satellite
nodes. This guide discusses release support for:
• nV Satellite Simple Rings
• nV Satellite Layer 2 Fabric
In all nV access topologies, host nodes load share traffic on a per-satellite basis. The active/standby role
of a host node for a specific satellite is determined by a locally-defined priority and negotiated between
the hosts via ICCP.
ASR9000v and ASR901 are implemented as satellite devices:
• ASR9000v has four 10 GbE ports that can be used as ICL.
• ASR901 has two GbE ports that can be used as ICL, and ASR903 can have up to two 10 GbE ports that can be used as ICL.
Figure: nV satellite simple ring—satellite nodes in a ring connect over host fabric ports to two ASR 9000 nV hosts; for each satellite one host is active and the other standby, and the CPE (branch/campus router) connects to a satellite access port (G0/0/40).
The PE device advertises multicast discovery messages periodically over a dedicated VLAN over fabric
links. Each satellite access device in the ring listens for discovery messages on all its ports and
dynamically detects the Fabric link port toward the host.
The satellite uses this auto-discovered port for the establishment of a management session and for the
exchange of all the upstream and the downstream traffic with each of the hosts (data and control). At the
host, incoming and outgoing traffic is associated with the corresponding satellite node using the satellite
MAC address, which was also dynamically learned during the discovery process. Discovery messages are
propagated from one satellite node to another and from either side of the ring so that all nodes can
establish a management session with both hosts. nV L1 fabric access configuration is described below.
nV L1 Fabric Configuration
Step 3 Define fabric link connectivity to the simple ring using the network keyword.
satellite-fabric-link network
Step 8 Virtual Interface configuration corresponding to satellite 100. Interface is configured with the VRF for
L3VPN service.
interface GigabitEthernet100/0/0/40
negotiation auto
load-interval 30
!
interface GigabitEthernet100/0/0/40.502
vrf BUS-VPN2
ipv4 address 51.1.1.1 255.255.255.252
encapsulation dot1q 49
!
!
Step 9 Configure ICCP redundancy group 210 and define the peer PE address in the redundancy group.
redundancy
iccp
group 210
member
neighbor 100.111.11.2
!
host-priority 20
!
Figure: nV satellite Layer 2 fabric—each satellite reaches the two nV hosts' fabric ports over a point-to-point emulated (PWE3) connection through the L2 fabric.
In the case of L2 fabric, a unique VLAN is allocated for the point-to-point emulated connection between
the host and each satellite device. The host uses this VLAN for the advertisement of multicast discovery
messages.
Satellite devices listen for discovery messages on all the ports and dynamically create a subinterface
based on the port and VLAN pair on which the discovery messages were received. VLAN configuration
at the satellite is not required.
The satellite uses this auto-discovered subinterface for the establishment of a management session and
for the exchange of all upstream and downstream traffic with each of the hosts (data and control). At the
host, incoming and outgoing traffic is associated to the corresponding satellite node based on VLAN
assignment. nV L2 fabric access configuration is described below.
nV L2 Fabric Configuration
Step 5 Configure Ethernet CFM to detect connectivity failures on the fabric link.
ethernet cfm
continuity-check interval 10ms
!
Step 8 Virtual Interface configuration corresponding to satellite 100. Interface is configured with the VRF for
L3VPN service.
interface GigabitEthernet210/0/0/0
negotiation auto
load-interval 30
!
interface GigabitEthernet210/0/0/0.49
vrf BUS-VPN2
ipv4 address 51.1.1.1 255.255.255.252
encapsulation dot1q 49
!
Step 9 Configure ICCP redundancy group 210 and define the peer PE address in the redundancy group.
redundancy
iccp
group 210
member
neighbor 100.111.11.2
!
Step 12 Define the Satellite ID 210 and type of platform ASR 901.
satellite 210
type asr901
ipv4 address 27.27.27.40
redundancy
nV Cluster
The ASR 9000 nV cluster system is designed to simplify L3VPN, L2VPN, and multicast dual-homing
topologies and resiliency designs by making two ASR9k systems operate as one logical system. An nV
cluster system has the following properties and covers some of the use cases (partial list) described in Figure 4-6.
• Without an ASR9k cluster, a typical MPLS-VPN dual-homing scenario has a CE dual-homed to two
PEs where each PE has its own BGP router ID, PE-CE peering, security policy, routing policy maps,
QoS, and redundancy design, all of which can be quite complex from a design perspective.
• With an ASR9k cluster system, both PEs share a single control plane, a single management plane,
and a fully distributed data plane across two physical chassis, and support one universal solution for
any service including L3VPN, L2VPN, MVPN, Multicast, and so on. The two clustered PEs can be
geographically redundant by connecting the cluster ports on the RSP440 faceplate, which extends the
EOBC channel between rack 0 and rack 1 so that the two chassis operate as a single IOS XR ASR9k
router. For L3VPN, one BGP router ID is used, with the same L3VPN instance configured on both
rack 0 and rack 1, and a single peering with CEs and remote PEs.
Figure 4-6 ASR 9000 nV Cluster Use Cases for Universal Resiliency Scheme
(The figure shows one ASR 9000 nV system, joined over EOBC links, serving as a universal resiliency solution for any service: video distribution router, cloud gateway router, data center interconnect, Internet edge/peering, carrier Ethernet, and business services PE.)
In the topology depicted and described in Figure 4-7, we tested and measured L3VPN convergence time
using a clustered system and compared it against VRRP/HSRP. We tested both cases with identical scale
and configuration as shown in the table in Figure 4-7. We also measured access-to-core and
core-to-access traffic convergence time separately for better convergence visibility.
Figure 4-7 Cluster convergence test topology—carrier Ethernet access connects over MC-LAG to an ASR 9006 cluster PE (two racks joined by EOBC and inter-rack links, IRLs) carrying L2VPN and VRFs toward the core. Test scale: 3k IPv4 eBGP sessions, 500 IPv6 eBGP sessions, 3k VRF bundle subinterfaces, and 1M advertised prefixes.
The convergence results of L3VPN cluster system versus VRRP/HSRP are summarized in Figure 4-8.
We covered the five types of failure tests listed below.
Note We repeated each test three times and reported the worst-case numbers of three trials.
Figure 4-8 Convergence test points on the cluster topology—failures were applied to the access-side MC-LAG link, the inter-rack links (IRLs), the EOBC links, and an entire rack (Rack 0 or Rack 1); L2VPN convergence in the cluster tests measured 0 ms.
An nV cluster PE with L3VPN service can be implemented on ASR 9000 Rack 0 and Rack 1 as described below.
nV Cluster Configuration
!
data
minimum 0
Step 7 Configure Inter Rack Links (L1 links). Used for forwarding packets whose ingress and egress interfaces
are on separate racks.
interface TenGigE0/3/0/1
Step 12 nV Edge requires a manual configuration of mac-address under the Bundle interface.
mac-address f866.f217.5d23
Figure: Native IP-connected access—the CPE (branch/campus router) connects over the Ethernet network to two ASR 9000 PEs (GigabitEthernet0/0/1/7, subnets 100.192.3.x) facing the MPLS network, running BGP or static routing with BFD.
PE Configuration (the same on both PEs):
!***Enables BFD for BGP to neighbor for VRF***
bfd fast-detect
bfd multiplier 3
bfd minimum-interval 50
address-family ipv4 unicast
!
bfd
 interface GigabitEthernet0/0/1/7
  !***Disables BFD echo mode on interface***
  echo disable
CE Configuration:
neighbor 100.192.30.1 remote-as 101
!***Enable BFD to this BGP Peer***
neighbor 100.192.30.1 fall-over bfd
!***eBGP peering towards Backup PE***
neighbor 100.192.30.2 remote-as 101
!***Enable BFD to this BGP Peer***
neighbor 100.192.30.2 fall-over bfd
!
address-family ipv4
 no synchronization
 redistribute connected
 !***Advertise prefix facing the LAN side of the CE router***
 network 100.192.193.0 mask 255.255.255.0
 neighbor 100.192.30.1 activate
 !***Prefer this neighbor PE1 as the primary PE***
 neighbor 100.192.30.1 weight 100
 neighbor 100.192.30.2 activate
 no auto-summary
exit-address-family
Figure: MPLS access using Pseudowire Headend—the CPE (branch/campus router, G0/2) connects to an access device (G0/4) in the MPLS access network; a PWE3 pseudowire from the access device terminates on interface PW-Ether 100 on the ASR 9000 PE (TenG0/0/0/0, TenG0/0/0/3) toward the MPLS network, with BGP or static routing and BFD between CPE and PE.
Step 3 Configure XConnect on the Access device towards PE with encapsulation MPLS and PW-class
BUS_PWHE to inherit its parameters.
xconnect 100.111.11.1 130901100 encapsulation mpls pw-class BUS_PWHE
!
mtu 1500
!
PE Configuration
Step 1 Configure PWHE interface.
interface PW-Ether100
Step 10 Enable BFD to detect failures in the path between adjacent forwarding engines.
bfd fast-detect
Step 12 Configure Minimum Interval between sending BFD hello packets to the neighbor.
bfd minimum-interval 50
Step 18 Configure the xconnect on the PWHE interface PW-Ether100, specifying the access device as the neighbor.
xconnect group BUS_PWHE100
p2p PWHE-K1309-Static
interface PW-Ether100
neighbor 100.111.13.9
!
!
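Putting the PE-side pieces together, the PWHE interface is bound to a generic interface list of core-facing links and stitched to the access pseudowire. The following sketch assumes the interface-list name PWHE-CORE and the PW-Ether address; the neighbor address and pw-id reuse values from the access-device example above.
generic-interface-list PWHE-CORE
 interface TenGigE0/0/0/0
 interface TenGigE0/0/0/3
!
interface PW-Ether100
 vrf BUS-VPN2
 ipv4 address 100.13.9.9 255.255.255.252
 attach generic-interface-list PWHE-CORE
!
l2vpn
 xconnect group BUS_PWHE100
  p2p PWHE-K1309-Static
   interface PW-Ether100
   neighbor 100.111.13.9 pw-id 130901100
  !
 !
!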
CE Configuration
Step 1 Interface connecting to the Access device.
interface GigabitEthernet0/2.110
encapsulation dot1Q 110
ip address 100.13.9.10 255.255.255.252
ipv6 address 2001:13:9:9::2/64
ipv6 enable
To achieve PE level redundancy, another link can be used between the CPE and the access node and on
that link, the access node can be configured with another pseudowire terminating at another PE.
PE configuration for QoS includes configuring class-maps for the respective traffic classes and mapping
them to the appropriate DSCP values. Two-level ingress QoS polices traffic in the individual classes of the
child policy. The parent policy is configured with the keyword "child-conform-aware" to prevent the parent
policer from dropping any ingress traffic that conforms to the maximum rate specified in the child policer.
In the egress policy map, the real-time traffic class CMAP-RT-dscp is configured with the highest priority
(level 1) and is policed to ensure low-latency expedited forwarding. The remaining classes are assigned
their respective required bandwidth. WRED is used as the congestion avoidance mechanism for EXP 1 and
2 traffic in the enterprise-critical class CMAP-EC-EXP. Shaping is configured on the parent egress policy
to ensure overall traffic does not exceed the committed bit rate (CBR). The ingress and egress policy maps
are applied to the PE interface connecting to the CE in the respective directions.
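As a compact illustration of this structure (a sketch, not the full validated policy; the policy-map names, the DSCP match, the rates, and the interface are assumptions, while CMAP-RT-dscp follows the text above), an egress hierarchy on the CE-facing interface could look like this:
class-map match-any CMAP-RT-dscp
 match dscp ef
 end-class-map
!
policy-map PMAP-UNI-E-CHILD
 class CMAP-RT-dscp
  priority level 1
  police rate 100 mbps
  !
 !
 class class-default
  bandwidth remaining percent 40
 !
 end-policy-map
!
policy-map PMAP-UNI-E
 class class-default
  shape average 500 mbps
  service-policy PMAP-UNI-E-CHILD
 !
 end-policy-map
!
interface GigabitEthernet0/0/1/7
 service-policy output PMAP-UNI-E
!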
Step 19 Configure shaping to ensure egress traffic does not exceed CBR.
shape average 500 mbps
Step 35 Configure policing to ensure ingress traffic does not exceed CBR.
police rate 500 mbps
In case of PWHE access, QoS is implemented on PE based on MPLS EXP bits as the received traffic is
labeled.
policy-map PMAP-PWHE-NNI-C-I
Step 22 Configure policing to ensure ingress traffic does not exceed CBR.
police rate 500 mbps
Step 39 Configure shaping to ensure egress traffic does not exceed CBR.
shape average 500000000 bps
end-policy-map
Feature and scale:
• eBGP sessions with 3 BGP instances: 5k
• eBGP routes with 3 BGP instances: total route scale = 14M routes (IPv4 = 6M, VPNv4 = 5M, IPv6 = 1.5M, VPNv6 = 1.5M)
• iBGP sessions with 2 BGP instances: 5k
• iBGP routes with 2 BGP instances: total route scale = 10M (IPv4 = 402k, VPNv4 = 7.6M, VPNv6 = 2M)
In the Internet Peering and Inter-Connect profile, we used the topology described in Figure 6-1 to test
Enterprise, Data Center and SP peering and inter-connect use cases with scalability. The following key
features were tested in this profile:
• Inter-AS option B and C Unicast Routing
• BGP Flowspec
• NetFlow 1:10k Sampling for IPv4, IPv6 and MPLS
• VXLAN L3VPN/L2VPN Gateway handoff between Inter-AS Core
• RFC 3107 PIC, BGP PIC edge for VPNv4, 6VPE, 6PE etc.
• LFA, rLFA
• Inter-AS option C L2VPN VPWS/VPLS with BGP AD, Inter-AS MS-PW, FAT-PW
• Inter-Area/Inter-AS MPLS TE, P2MP TE
• Inter-AS Native IPv4/v6 Multicast, Rosen-mGRE-MVPNv4/v6, mLDP-MVPNv4/v6
• Native IPv4/v6, VPNv4/v6, VPWS/VPLS, Native IPv4/v6 Multicast, mGRE-MVPNv4/v6,
PBB-EVPN over CsC
• Next-generation Routing LISP, LISP-MPLS Gateway
• Next-generation MVPN LSM with BGP C-mcast, Dynamic P2MP-TE MVPN, BGP SAFI 2, 129, 5
• Next-generation L2VPN PBB-EVPN
• Next-generation L2 Multicast: VPLS LSM
• TI-MoFRR, MPLS-TP, Bi-Directional TE LSPs (aka. Flex-LSP)
Figure 6-1 Internet peering and interconnect test topology—CE1, CE2, and CE3 with IXIA traffic generators attach to autonomous systems (including AS200) interconnected by eBGP + RFC 3107 PIC, with IP/LDP LFA + 3107 PIC inside each AS and route reflectors RR1 and RR2.
The ASR9k scalability test results of Internet Peering and Inter-Connect Profile are shown in Table 6-3.
Table 6-3 ASR9k Internet Peering and Inter-Connect Profile Scale Numbers
Table 6-4 Summary of 100G Support for UNI and NNI on ASR9K
Parameter Typhoon
No. of 100G ports per slot 2X100G line rate
SW support XR 4.2.1
No. of 100G ports per slice 1x100G
Bi-directional bandwidth 200Gbps 100Gbps per NPU
Bi-directional PPS 90Mpps/direction
UNI or Edge-facing service termination on 100G Yes
NNI or Core-facing for 100G transport Yes
nV cluster Yes
nV satellite Yes
MACSEC Suite B+ No
MACSEC over Cloud No
100G Pro-active Protection Yes
CPAK Optics No
L2FIB MAC address 2M
L3FIB IPv4/IPv6 address 4M/2M
Bridge domain 64k
We have validated the 100G line card throughput and latency of ASR9k Typhoon line cards in the
following two roles and summarized the performance in Table 6-5.
• UNI or edge-facing L2/L3/Multicast VPN services with features
• NNI or core-facing transport with features
The 100G deployment profiles we covered included MPLS, IPv4, and IPv6 in these applications: Internet
peering, DCI PE, SP edge PE, Metro Ethernet PE and P, WAN core PE and P, and general-purpose core
P router.
Table 6-5 Typhoon 100G Forwarding Chain Performance
SW Ver, Feature, Facing Role (UNI/Edge or NNI/Core), Sub-Feature, Linecard, Linerate Packet Size (bytes), Min Latency (us)
5.1.0 MPLS NNI/Core mpls_swap A9K-2x100GE-SE 130 15
5.1.0 MPLS NNI/Core mpls_depo A9K-2x100GE-SE 176 14
5.1.0 MPLS NNI/Core mpls_impo A9K-2x100GE-SE 175 14
5.1.0 IPv4 NNI/Core IPv4 10K BGP route A9K-2x100GE-SE 136 14
5.1.0 IPv4 NNI/Core IPv4 500K BGP+uRPF A9K-2x100GE-SE 212 15
5.1.0 IPv4 NNI/Core IPv4 non recursive A9K-2x100GE-SE 114 14
5.1.0 IPv4 NNI/Core IPv4 500K BGP route A9K-2x100GE-SE 160 16
5.1.0 IPv6 NNI/Core IPv6_50K BGP route + QoS A9K-2x100GE-SE 384 18
5.1.0 IPv6 NNI/Core IPv6_nonrcur udp NH A9K-2x100GE-SE 196 14
5.1.0 IPv6 NNI/Core IPv6_50K BGP route A9K-2x100GE-SE 361 17
5.1.0 IPv6 NNI/Core IPv6_10K BGP route + QoS A9K-2x100GE-SE 359 17
5.1.0 IPv6 NNI/Core IPv6_50K BGP route + QoS A9K-2x100GE-SE 384 18
5.1.0 L3VPN NNI/Edge L3VPN_30vrf A9K-2x100GE-SE 232 15
5.1.0 IPv4 ACL UNI/Edge output_acl A9K-2x100GE-SE 140 15
5.1.0 IPv4 ACL NNI/Core input_acl A9K-2x100GE-SE 199 15
5.1.0 IPv4 ACL NNI/Core in+out_acl A9K-2x100GE-SE 333 16
5.1.0 IPv4 QoS NNI/Core in+out_policy A9K-2x100GE-SE 230 16
5.1.0 IPv4 QoS NNI/Core out shaper A9K-2x100GE-SE 168 15
5.1.0 IPv4 QoS NNI/Core inpol+outshap A9K-2x100GE-SE 218 16
5.1.0 IPv4 QoS NNI/Core IPv4 500K BGP route_inpol+outshap A9K-2x100GE-SE 264 16
5.1.0 IPv4 QoS NNI/Core input_policy A9K-2x100GE-SE 223 16
5.1.0 IPv4 QoS NNI/Core output_policy A9K-2x100GE-SE 209 15
5.1.0 L2 UNI/Edge Bridge A9K-2x100GE-SE 129 14
5.1.0 L2 UNI/Edge xconnect A9K-2x100GE-SE 113 13
5.1.0 Multicast UNI/Edge mcast_IPv4 A9K-2x100GE-SE 277 15
5.1.0 Multicast UNI/Edge mcast_IPv6 A9K-2x100GE-SE 516 14
5.1.0 BVI UNI/Edge L2 EFP BVI L3_2K BVI A9K-2x100GE-SE 592 17
5.1.0 mVPN UNI/Edge mVPN 12vrf_100mroute A9K-2x100GE-SE 507 15
5.1.0 L2VPN UNI/Edge VPLS+qos A9K-2x100GE-SE 596 17
5.1.0 L2VPN UNI/Edge VPWS 3ac+3pw A9K-2x100GE-SE 319 15
5.1.0 L2VPN UNI/Edge VPLS_9BD+9ac+27pw A9K-2x100GE-SE 374 16
5.1.0 L2VPN UNI/Edge VPWS_3ac+3pw+inpol+outshap A9K-2x100GE-SE 326 15
The Cisco Enterprise L3 Virtualization Design and Implementation Guide is part of a set of resources
that comprise the Cisco EPN System documentation suite. The resources include:
• EPN 3.0 System Concept Guide: Provides general information about Cisco's EPN 3.0 System
architecture, its components, service models, and the functional considerations, with specific focus
on the benefits it provides to operators.
• EPN 3.0 System Brochure: At-a-glance brochure of the Cisco Evolved Programmable Network
(EPN).
• EPN 3.0 MEF Services Design and Implementation Guide: Design and implementation guide with
configurations for deploying the Metro Ethernet Forum service transport models and use cases
supported by the Cisco EPN System concept.
• EPN 3.0 Transport Infrastructure Design and Implementation Guide: Design and implementation
guide with configurations for the transport models and cross-service functional components
supported by the Cisco EPN System concept.
• EPN 3.0 Mobile Transport Services Design and Implementation Guide: Design and implementation
guide with configurations for deploying the mobile backhaul service transport models and use cases
supported by the Cisco EPN System concept.
• EPN 3.0 Residential Services Design and Implementation Guide: Design and implementation guide
with configurations for deploying the consumer service models and the unified experience use cases
supported by the Cisco EPN System concept.
• EPN 3.0 Enterprise Services Design and Implementation Guide: Design and implementation guide
with configurations for deploying the enterprise L3VPN service models over any access and the
personalized use cases supported by the Cisco EPN System concept.
Note All of the documents listed above, with the exception of the System Concept Guide and System
Brochure, are considered Cisco Confidential documents. Copies of these documents may be obtained
under a current Non-Disclosure Agreement with Cisco. Please contact a Cisco Sales account team
representative for more information about acquiring copies of these documents.