Design Guide
September 2013
Table of Contents
Introduction
    Executive Summary
    Release Notes
Requirements
    Service Provider Architectures
    Fixed and Mobile Converged Transport Characteristics
System Overview
    System Concept
    Transport Models
        Flat LDP Core and Aggregation
        Hierarchical-Labeled BGP LSP Core-Aggregation and Access
        Labeled BGP Redistribution into Access IGP
        Hierarchical-Labeled BGP LSP Core and Aggregation
        Hierarchical-Labeled BGP LSP Core, Aggregation, and Access
        Hierarchical-Labeled BGP Redistribution into Access IGP
    Residential Wireline Service Models
    Community Wi-Fi Service Models
    Business Service Models
    Mobile Service Models
System Architecture
    Transport Architecture
        Large Network, Multi-Area IGP Design with IP/MPLS Access
        Large Network, Inter-AS Design with IP/MPLS Access
        Large Network, Multi-Area IGP Design with non-IP/MPLS Access
        Large Network, Inter-AS Design with non-IP/MPLS Access
        Small Network, Integrated Core and Aggregation with IP/MPLS Access
        Small Network, Integrated Core and Aggregation with non-IP/MPLS Access
    Residential Service Architecture
        Residential Wireline Service Architecture
        Community Wi-Fi Service Architecture
        Subscriber Experience Convergence
    Business Service Architecture
        MPLS VRF Service Model for L3VPN
        H-VPLS Service Model for L2VPN
        PBB-EVPN Service Model for L2VPN
        PW Transport for X-Line Services
    Mobile Service Architecture
        L3 MPLS VPN Service Model for LTE
        Multicast Service Model for LTE eMBMS
        L2 MPLS VPN Service Model for 2G and 3G
    Inter-Domain Hierarchical LSPs
        Inter-Domain LSPs for Multi-Area IGP Design
        Inter-Domain LSPs for Inter-AS Design
        Inter-Domain LSPs for Integrated Core and Aggregation Design
    Transport and Service Control Plane
        BGP Control Plane for Multi-Area IGP Design
        BGP Control Plane for Inter-AS Design
        BGP Control Plane for Integrated Core and Aggregation Design
Scale Considerations
Conclusion
Glossary
Introduction
Executive Summary
Infused with intelligence and select solutions for scalability, agile transport, security, and more, the Cisco® Fixed
Mobile Convergence (FMC) system gives operators a proven architecture, platforms, and solutions to address
the dramatic changes in subscriber behavior and consumption of communications services over both fixed and
mobile access, and to provide operational simplification, all at optimized cost points.
The Cisco FMC system defines a multi-year ongoing development program by Cisco’s Systems Development
Unit (SDU) that builds towards a flexible, programmable, and cost-optimized network infrastructure, all targeted
to deliver in-demand fixed wireline and mobile network services. As the market leader in providing network
equipment in both fixed and mobile networks, Cisco is uniquely positioned to help providers transition network
operations, technologies, and services to meet these new demands. Cisco is delivering proven architectures
with detailed design and implementation guides as proof points of our strategy to service fixed and mobile
subscribers.
Through a sequence of graceful transitions, Cisco enables transition from legacy circuit-oriented architectures
towards powerful, efficient, flexible, and intelligent packet-based transport with the following proof points:
• 2012: Unified MPLS for Mobile Transport (UMMT) defines a Unified Multiprotocol Label Switching (MPLS)
Transport solution for any mobile backhaul service at any scale.
• 2013: Cisco FMC builds the network and service infrastructure convergence.
• 2014: Cisco FMC enables the unified and seamless fixed and mobile subscriber experience and its
extension to Bring Your Own Device (BYOD) access.
The Cisco FMC system defines MPLS-based transport services and couples that transport closely to the
service delivery architecture. The MPLS transport aspects of the system validation are also directly applicable to
providers offering Layer 2 (L2) and Layer 3 (L3) transport as a service. To expand the transport protocol offerings
beyond MPLS, a separate carrier Ethernet transport system is being planned that will provide validated options
for native Ethernet (G.8032 control plane), network virtualization with satellites, and MPLS-TP.
The context for these developments is one of dramatic growth and change. Whereas a fixed line operator traditionally
did not need to care about mobility, developments such as Wi-Fi, hotspots, and stadium technology are
broadening the definition of mobile solutions beyond traditional mobile handset voice and data. Likewise in the
enterprise space, mobility of devices is a baseline requirement, with more and more users requiring secure
access to corporate data on their own tablet or other mobile device. This pervasive mobility across all services,
access types, and end-user devices poses challenges like the following:
• How to apply appropriate access policies
• How to keep data secure
• How to build a comprehensive network access strategy
• How to extend the right user experience to all these situations
Many of these challenges are being characterized into the BYOD definition. Initially, BYOD conversations in an
enterprise related to how the IT organization enabled an employee to use their own iPad at work. This created
challenges such as how to connect this device to the network, secure company data and applications, and deal
with lost or stolen devices. This initial conversation has since expanded to broader considerations.
BYOD will transform how every business provides IT to its employees, interacts with its customers, and
provides IT services. Challenges of this scale also represent opportunities for SPs to expand their list of offerings
and deliver new, innovative, and in-demand services to enhance revenue streams. The Cisco FMC system
addresses all challenges and positions the network as a platform to meet service and transport growth with
accompanying higher returns and operator profitability. The more functions that support this emerging BYOD
movement that can be incorporated into the SP offerings, the more quickly businesses can adopt them and the
more quickly SPs can grow their revenue.
Solution
The Cisco FMC system provides reliable, scalable, and high-density packet processing that addresses mass-
market adoption of a wide variety of fixed and mobile legacy services, while reducing the operator's total cost
of operations (TCO) and enabling the delivery of new, innovative, and in-demand services. It also handles
the complexities of multiple access technologies, including seamless handover and mobility between access
networks (2G, 3G, 4G LTE, and Wi-Fi) to meet demands for convergence, product consolidation, and a common
end-user service experience.
[Figure: FMC converged fixed and Wi-Fi edge and Evolved Packet Core (EPC), with shared PCRF, DPI, and CGN serving both fixed and mobile devices]
Cisco FMC introduces key technologies from Cisco’s Unified MPLS suite of technologies to deliver highly
scalable and simple-to-operate MPLS-based networks for the delivery of fixed wireline and mobile backhaul
services.
Unified MPLS resolves legacy challenges such as scaling MPLS to support tens of thousands of end nodes,
providing the required MPLS functionality on cost-effective platforms, and avoiding the complexity of technologies
like Traffic Engineering Fast Reroute (TE-FRR) to meet transport SLAs.
By addressing the scale, operational simplification, and cost of the MPLS platform, Cisco FMC resolves the
immediate need to deploy an architecture that is suitable for a converged deployment and supports fixed
residential and business wireline services as well as legacy and future mobile service backhaul.
Service Infrastructure Convergence
Until now, fixed network infrastructures have been limited to wireline service delivery and mobile network
infrastructures have been composed of a mixture of many legacy technologies that have reached the end of
their useful life. The Cisco FMC system architecture provides the first integrated, tested, and validated converged
network architecture, meeting all the demands of wireline service delivery and mobile service backhaul.
UMMT 3.0
Release 3.0 of the Cisco UMMT system architecture further builds upon the architecture defined in the first two
releases with the addition of the following improvements:
• New Unified MPLS models:
◦◦ Labeled BGP access, which provides highest scalability plus wireline coexistence
◦◦ v6VPN for LTE transport
• IEEE 1588v2 Boundary clock (BC) and SyncE/1588v2 Hybrid models:
◦◦ Greater scalability and resiliency for packet-based timing in access and aggregation
• ATM/TDM transport end-to-end:
◦◦ ATM provides transport for legacy 3G services
◦◦ PW redundancy with Multirouter Automatic Protection Switching (MR-APS)
• New network availability models:
◦◦ Remote LFA FRR
◦◦ Labeled BGP PIC core and edge
◦◦ BGP PIC edge for MPLS VPN
◦◦ Most comprehensive resiliency functionality
• ME3600X-24CX platform:
◦◦ 2RU fixed-configuration 40Gb/s platform
◦◦ Supports Ethernet and TDM interfaces
• Network management, service management, and assurance with Prime
[Figure: SP traffic mix shifting from circuit to packet. Legacy Layer 1/Layer 2 traffic (fixed and mobile voice, legacy Layer 2) falls from ~50-70% (approx. 2008) to 20-30% and then 0-10% (est. 2015), while private/public IP traffic (Layer 3 transport and services) grows from ~30-50% to 70-80% and then 90+%.]
• SP revenue is shifting from circuits to packet services (Cisco Research 2010), with approximately 80% of
revenue to be derived from packet services in five years
• Packet traffic is increasing at 23% compound annual growth rate (CAGR) (Cisco VNI 2013)
• SP traffic make-up is expected to change massively in the next five years (ACG Research 2011)
The economic realities depicted in Figure 4 show how this shift towards packet-based services and traffic drives
a preference for packet-based transport. Essentially, on economic grounds, the statistical multiplexing benefits of
carrying packet traffic over packet transport outweigh any case for carrying it over legacy TDM transport.
This point is illustrated in Figure 4. The figure takes an example of how to provision bandwidth for ten 1-Gigabit
per second flows. If bandwidth is provisioned for each flow by using TDM technology, a gigabit of bandwidth is
permanently allocated for each flow because there is no way to share unused bandwidth between containers
in a TDM hierarchy. Contrast that to provisioning those flows on a transport that can share unused bandwidth
via statistical multiplexing, and it is possible to provision much less bandwidth on a core link. For networks that
transport primarily bursty data traffic, this is now the norm, rather than the exception.
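The provisioning arithmetic above can be sketched in a few lines of Python; the per-flow average rate and the number of simultaneous peaks are assumed illustrative values, not figures from this guide:

```python
# Illustrative sketch of TDM vs. statistical-multiplexing provisioning.
# Per-flow average rate and burst concurrency are assumed example values.
peak_gbps = 1.0          # each flow can burst to 1 Gb/s
avg_gbps = 0.2           # assumed average rate of a bursty flow
n_flows = 10

# TDM: a full peak-rate container is permanently allocated per flow,
# since unused bandwidth cannot be shared between containers.
tdm_provisioned = n_flows * peak_gbps

# Statistical multiplexing: provision for the sum of average flows
# plus headroom for a few simultaneous peaks.
simultaneous_peaks = 2   # assumed burst concurrency
statmux_provisioned = ((n_flows - simultaneous_peaks) * avg_gbps
                       + simultaneous_peaks * peak_gbps)

print(tdm_provisioned)      # 10.0 Gb/s permanently allocated
print(statmux_provisioned)  # 3.6 Gb/s on the shared core link
```

With these assumed traffic profiles, the shared packet core link needs roughly a third of the bandwidth that a TDM hierarchy would permanently allocate.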
[Figure: Provisioning by sum of peak flows (TDM) versus sum of average flows plus a few peak flows (statistical multiplexing). Chart from Infonetics, text from DT.]
Beyond simple efficiencies of transport, greater intelligence within the network is needed in order to cope
efficiently with the avalanche of data traffic. AT&T, for example, calculates caching at the edge of their network
can save 30% of core network traffic, which represents tens of millions of dollars of savings every year. With the
dynamic nature of traffic demands in today’s network, IP and packet transport is adept at adjusting very quickly
to new traffic flow demands via dynamic routing protocols. TDM and Layer 2 approaches, however, are slow to
adapt because paths must be manually reprovisioned in order to accommodate new demands.
Future Directions
Starting today, there is convergence of transport across all services, leading towards convergence of edge
functions and ultimately a seamless and unified user experience enabling any service on any screen in
any location. This will be accomplished over a network with standardized interfaces enabling fine-grained
programmatic control of per-user services. Cisco’s FMC program meets all of the demands and challenges
defined for cost-optimized packet transport, while offering sophisticated programmability and service
enablement.
Annually, Cisco Systems publishes the Cisco Visual Networking Index (VNI), an ongoing initiative to track and
forecast the impact of visual networking applications. This section presents highlights from the 2012 to 2017
VNI and other sources to give context to trends in the SP space that are driving increases in network capacity
consolidation of services in a unified architecture.
Executive Overview
• Annual global IP traffic will surpass the zettabyte threshold (1.4 zettabytes) by the end of 2017. In
2017, global IP traffic will reach 1.4 zettabytes per year or 120.6 exabytes per month.
• Global IP traffic has increased more than fourfold over the past 5 years, and will increase threefold
over the next 5 years. Overall, IP traffic will grow at a Compound Annual Growth Rate (CAGR) of 23
percent from 2012 to 2017.
• Metro traffic will surpass long-haul traffic in 2014, and will account for 58 percent of total IP traffic
by 2017. Metro traffic will grow nearly twice as fast as long-haul traffic from 2012 to 2017. The higher
growth in metro networks is due in part to the increasingly significant role of content delivery networks,
which bypass long-haul links and deliver traffic to metro and regional backbones.
• Content Delivery Networks (CDNs) will carry over half of Internet traffic in 2017. Globally, 51 percent of
all Internet traffic will cross content delivery networks in 2017, up from 34 percent in 2012.
• The number of devices connected to IP networks will be nearly three times as high as the global
population in 2017. There will be nearly three networked devices per capita in 2017, up from nearly two
networked devices per capita in 2012. Accelerated in part by the increase in devices and the capabilities
of those devices, IP traffic per capita will reach 16 gigabytes per capita in 2017, up from 6 gigabytes per
capita in 2012.
• Traffic from wireless and mobile devices will exceed traffic from wired devices by 2016. By 2017,
wired devices will account for 45 percent of IP traffic, while Wi-Fi and mobile devices will account for 55
percent of IP traffic. In 2012, wired devices accounted for the majority of IP traffic at 59 percent.
• Globally, consumer Internet video traffic will be 69 percent of all consumer Internet traffic in 2017,
up from 57 percent in 2012. Video exceeded half of global consumer Internet traffic by the end of 2011.
Note that this percentage does not include video exchanged through point-to-point (P2P) file sharing.
The sum of all forms of video (TV, video on demand (VoD), Internet, and P2P) will be in the range of 80
to 90 percent of global consumer traffic by 2017.
• Internet video to TV doubled in 2012. Internet video to TV will continue to grow at a rapid pace,
increasing fivefold by 2017. Internet video to TV traffic will be 14 percent of consumer Internet video
traffic in 2017, up from 9 percent in 2012.
• VoD traffic will nearly triple by 2017. The amount of VoD traffic in 2017 will be equivalent to 6 billion
DVDs per month.
• Business IP traffic will grow at a CAGR of 21 percent from 2012 to 2017. Increased adoption of
advanced video communications in the enterprise segment will cause business IP traffic to grow by a
factor of 3 between 2012 and 2017.
• Business Internet traffic will grow at a faster pace than IP WAN. IP WAN will grow at a CAGR of 13
percent, compared to a CAGR of 21 percent for fixed business Internet and 66 percent for mobile
business Internet.
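As a quick consistency check of the growth figures above, a minimal Python sketch uses the 2012 and 2017 monthly traffic volumes from the VNI chart (44 EB and 121 EB) as assumed endpoints:

```python
# Sanity-check the VNI growth claims: "threefold over the next 5 years"
# and a 23 percent CAGR describe the same growth curve.
traffic_2012_eb = 44.0   # exabytes per month, 2012 (from the VNI chart)
traffic_2017_eb = 121.0  # exabytes per month, 2017 (from the VNI chart)
years = 5

growth_factor = traffic_2017_eb / traffic_2012_eb
cagr = growth_factor ** (1 / years) - 1

print(round(growth_factor, 2))  # 2.75, i.e. roughly threefold
print(round(cagr * 100, 1))     # 22.4, consistent with the ~23% CAGR
```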
[Figure: Global IP traffic growth in exabytes (EB) per month: 44 EB (2012), 56 EB (2013), 69 EB (2014), 84 EB (2015), 101 EB (2016), 121 EB (2017). Source: Cisco VNI, 2013]
These two factors will force operators to adopt small cell architectures, resulting in an exponential increase in cell
sites deployed in the network. In large networks covering large geographies, the scale is expected to be on the
order of several tens of thousands to a few hundred thousand LTE eNodeBs and associated CSGs.
[Figure: Average macro cell capacity, 26x growth from 1990 to 2015, driven by spectrum and macro cell efficiency gains. Source: Agilent]
The 2G/3G hierarchical architecture consists of a logical hub-and-spoke connectivity between base station
controller/radio network controller (BSC/RNC) and the base transceiver station (BTS)/NodeBs. This hierarchical
architecture lent itself naturally to the circuit-switched paradigm of having point-to-point connectivity between
the cell sites and controllers. The reach of the RAN backhaul was also limited in that it extended from the radio
access network to the local aggregation/distribution location where the controllers were situated.
While the Serving Gateway (SGW) nodes may be deployed in a distributed manner closer to the aggregation
network, the Mobility Management Entities (MME) are usually fewer in number and centrally located in the
core. This extends the reach of the Radio Access Network (RAN) backhaul from the cell site deep into the core
network.
Important consideration also needs to be given to System Architecture Evolution (SAE) concepts like MME
pooling and SGW pooling in the EPC that allow for geographic redundancy and load sharing. The RAN backhaul
service model must provide for eNodeB association to multiple gateways in the pool and migration of eNodeB
across pools without having to re-architect the underlying transport architecture.
[Figure: 2G/3G hierarchical backhaul (BTS/NodeB connected over Abis/Iub to the BSC/RNC, then to the MSC, SGSN, and GGSN) contrasted with flat LTE backhaul (eNodeB connected over S1-C to the MME, over S1-U to the SGW/PGW, and over X2 to neighboring eNodeBs)]
In these scenarios, the network has to not only support multiple services concurrently, but also support all these
services across disparate endpoints. Typical examples are:
• L3 transport for LTE and Internet-High Speed Packet Access (I-HSPA) controller-free architectures: from
RAN to SAE gateways in the core network
• L3 transport for 3G UMTS/IP: from RAN to BSC in the aggregation network
• L2 transport for 2G GSM and 3G UMTS/ATM: from RAN to RNC/BSC in the aggregation network
• L2 transport for residential wireline: from access to BNG in the aggregation network
• L3/L2 transport for business wireline: from access to remote access networks across the core network
• L2 transport for wireline wholesale: from access to retail wireline SP peering point
• L3 transport for RAN sharing: from RAN to retail mobile SP peering point
The transport technology used in the RAN backhaul and the network architecture must be carefully engineered
to be scalable and flexible enough to meet the requirements of various services being transported across a
multitude of locations in the network.
The system is designed to concurrently support residential triple play, business L2VPN and L3VPN, and multiple
generation mobile services on a single converged network infrastructure. In addition, it supports:
• Graceful introduction of long-term evolution (LTE) with existing 2G/3G services with support for
pseudowire emulation (PWE) for 2G GSM and 3G UMTS/ATM transport.
• L2VPNs for 3G UMTS/IP, and L3VPNs for 3G UMTS/IP and 4G LTE transport.
• Broadband Network Gateway (BNG) co-located with Carrier-Grade NAT for residential services.
• Multiservice Edge (MSE) pseudowire headend (PWHE) termination for business services.
• Multicast transport.
• Network synchronization (physical layer and packet based).
• Hierarchical-QoS (H-QoS).
• Operations, administrations, and maintenance (OAM).
• Performance management (PM).
• Fast convergence.
The Cisco FMC system meets the Broadband Forum TR-101 requirements for residential services and supports
all MEF requirements for business services. The FMC system also meets all Next-Generation Mobile Network
(NGMN) requirements for next-generation mobile backhaul, and innovates on the Broadband Forum TR-221
specification for MPLS in mobile backhaul networks by unifying the MPLS transport across the access,
aggregation, and core domains.
It simplifies the control plane by providing seamless MPLS Label-Switched Paths (LSPs) across access, pre-
aggregation, aggregation/distribution, and core domains of the network. In doing so, a fundamental attribute
of decoupling the transport and service layers of the network and eliminating intermediate touchpoints in the
backhaul is achieved. By eliminating intermediate touchpoints, it simplifies the operation and management of the
service. Service provisioning is restricted to the edges of the network, where it is required. Simple carrier-class
operations with end-to-end OAM and performance monitoring services are made possible.
Access nodes (AN) transport multipoint business services to these service edge nodes via Ethernet over MPLS
(EoMPLS) pseudowires, and connect to the proper service transport: Virtual Private LAN services (VPLS) virtual
forwarding instance (VFI) for E-LAN and MPLS VPN for L3VPN. PW to L3VPN interworking on the service edge
node is accomplished via PWHE functionality. VPWS services, such as E-Line and Circuit Emulation over Packet
(CEoPs), are transported directly between ANs via pseudowires.
In 2G and 3G networks, the hub-and-spoke connectivity requirement between the BSC/RNC and the BTS/NodeB
makes L2 transport using Ethernet bridging with VLANs or P2P PWs with MPLS PWE3 appealing. In contrast, an
L3 transport option is much better suited to meet the myriad connectivity requirements of 4G LTE. The UMMT
architecture provides both L2 and L3 MPLS VPN transport options that provide the necessary virtualization
functions to support the coexistence of LTE S1-U/C and X2 interfaces with GSM Abis TDM and UMTS Iub ATM
backhaul. The decoupling of the transport and service layers of the network infrastructure and the seamless
connectivity across network domains makes the system a natural fit for the flat all-IP LTE architecture by allowing
for the flexible placement of 2G/3G/4G gateways in any location of the network to meet all the advanced backhaul
requirements listed above.
Deliver New Levels of Scale for MPLS Transport with RFC-3107 Hierarchical-Labeled BGP LSPs
As described in “Fixed and Mobile Converged Transport Characteristics,” supporting the convergence of fixed
wireline and mobile services will introduce unprecedented levels of scale in terms of number of ANs and services
connected to those nodes. While L2 and L3 MPLS VPNs are well suited to provide the required virtualization
functions for service transport, inter-domain connectivity requirements for business and mobile services present
challenges of scale to the transport infrastructure. This is because IP aggregation with route summarization
usually performed between access, aggregation, and core regions of the network does not work for MPLS,
as MPLS is not capable of aggregating Forwarding Equivalence Classes (FECs). RFC 5283 provides a
mechanism for aggregating FECs via a longest-match lookup in LDP, but it is not widely deployed and requires
significant reallocation of IP addressing in existing deployments to implement. In normal MPLS deployments, the
FEC is typically the PE’s /32 loopback IP address. Exposing the loopback addresses of all the nodes (10k -100k)
across the network introduces two main challenges:
• Large flat routing domains adversely affect the stability and convergence time of the Interior Gateway
Protocol (IGP).
• The sheer size of the routing and MPLS label information control plane and forwarding plane state
will easily overwhelm the technical scaling limits on the smaller nodes (ANs and PANs) involved in the
network.
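To make these two challenges concrete, a minimal sketch with assumed, illustrative node and domain counts (not figures from this design) shows how partitioning bounds the per-domain routing state:

```python
# Illustrative comparison of flat vs. partitioned routing state.
# All node and domain counts are assumed example values.
total_nodes = 100_000   # ANs, PANs, AGNs, and core nodes combined
num_domains = 100       # isolated access/aggregation IGP domains

# Flat design: every node's /32 loopback sits in one IGP/LDP domain,
# so every router carries state for the whole network.
flat_igp_routes = total_nodes

# Partitioned design: each IGP carries only its own domain's loopbacks;
# inter-domain reachability moves into BGP-labeled unicast, and BGP is
# designed to scale to the order of millions of routes.
per_domain_igp_routes = total_nodes // num_domains
bgp_labeled_routes = total_nodes

print(flat_igp_routes)        # 100000 routes in a single flat IGP
print(per_domain_igp_routes)  # 1000 routes per isolated IGP domain
```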
Transport Models
The Cisco FMC System incorporates a network architecture designed to consolidate transport of fixed wireline
and mobile services in a single network. Continued growth in residential and business services, combined with
ubiquitous mobile broadband adoption driven by LTE, will introduce unprecedented levels of scale in terms of
eNodeBs and ANs into the FMC network. This factor, combined with services requiring connectivity from the
access domain all the way to and across the core network, introduces challenges in scaling the MPLS network.
As previously mentioned, the endpoint identifier in MPLS is the PE’s /32 loopback IP address, so IP aggregation
with route summarization cannot be performed between the access, aggregation, and core regions of the
network. All network technologies meet a scale challenge at some point and the solution is always some form of
hierarchy to scale. The Unified MPLS Transport basis of the FMC System is no different, and uses a hierarchical
approach to solve the scaling problem in MPLS-based end-to-end deployments.
Unified MPLS adopts a divide-and-conquer strategy where the core, aggregation, and access networks are
partitioned in different MPLS/IP domains. The network segmentation between the core and aggregation domains
could be based on a single autonomous system (AS) multi-area design, or utilize a multi-AS design with inter-AS
organization. Regardless of the type of segmentation, the Unified MPLS transport concept involves partitioning
the core, aggregation, and access layers of the network into isolated IGP and LDP domains. Partitioning these
network layers into such independent and isolated IGP domains helps reduce the size of routing and forwarding
tables on individual routers in these domains, which leads to better stability and faster convergence. LDP is used
for label distribution to build LSPs within each independent IGP domain. This enables a device inside an access,
aggregation, or core domain to have reachability via intra-domain LDP LSPs to any other device in the same
domain. Reachability across domains is achieved using RFC 3107 procedures, whereby BGP-labeled unicast acts
as an inter-domain label distribution protocol to build hierarchical LSPs across domains. This allows the link state database of the
IGP in each isolated domain to remain as small as possible, while all external reachability information is carried via
BGP, which is designed to scale to the order of millions of routes.
• In Single AS Multi-Area designs, interior Border Gateway Protocol (iBGP)-labeled unicast is used to build
inter-domain LSPs.
• In Inter-AS designs, iBGP-labeled unicast is used to build inter-domain LSPs inside the AS, and exterior
Border Gateway Protocol (eBGP)-labeled unicast is used to extend the end-to-end LSP across the AS
boundary.
In both cases, the Unified MPLS Transport across domains will use hierarchical LSPs that rely on a BGP-
distributed label used to transit the isolated MPLS domains, and on a LDP-distributed label used within the AS to
reach the inter-domain area border router (ABR) or autonomous system boundary router (ASBR) corresponding
to the labeled BGP next hop.
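As a hedged sketch, the RFC 3107 labeled-unicast session that builds these hierarchical LSPs is enabled per BGP neighbor; the AS number, neighbor address, and loopback prefix below are illustrative placeholders, not values from this design:

```
! Illustrative IOS-style configuration on a labeled-BGP speaker
! (ABR/ASBR). ASN, neighbor, and prefix are assumed example values.
router bgp 100
 neighbor 10.0.0.1 remote-as 100
 neighbor 10.0.0.1 update-source Loopback0
 address-family ipv4
  network 10.255.0.1 mask 255.255.255.255
  neighbor 10.0.0.1 activate
  ! Advertise an MPLS label with each IPv4 prefix (RFC 3107)
  neighbor 10.0.0.1 send-label
```

The BGP-distributed label transits the isolated domains, while the intra-domain LDP label reaches the ABR or ASBR that is the labeled BGP next hop.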
The Cisco FMC system integrates key technologies from Cisco’s Unified MPLS suite of technologies to deliver a
highly scalable and simple-to-operate MPLS-based converged transport and service delivery network. It enables
a comprehensive and flexible transport framework structured around the most common layers in SP networks:
the access network, the aggregation network, and the core network. The transport architecture structuring takes
into consideration the type of access and the size of the network.
Network Size
• Small Network:
◦◦ Applies to network infrastructures in small geographies where the core and aggregation network
layers are integrated in a single domain.
◦◦ The single IGP/LDP domain includes fewer than 1000 core and aggregation nodes (AGNs).
• Large Network:
◦◦ Applies to network infrastructures built over large geographies.
◦◦ The core and aggregation network layers have hierarchical physical topologies that enable IGP/
LDP segmentation.
This transport architecture structuring based on access type and network size leads to six architecture models
that fit various customer deployments and operator preferences as shown in the following table, and described in
the sections below.
[Figure: Flat LDP Core and Aggregation — core nodes and pre-aggregation nodes in a single core and aggregation IP/MPLS domain (single IGP area, one IGP/LDP domain); mobile access over TDM or packet microwave, fixed and mobile access over Ethernet/SDH.]
The small-scale core and aggregation network is assumed to consist of core nodes and AGNs integrated
in a Single IGP/LDP domain of fewer than 1000 nodes. Since no segmentation between network layers
exists, a flat LDP LSP provides end-to-end reachability across the network. All mobile (and wireline) services are
enabled by the AGNs. The mobile access is based on TDM and packet microwave links aggregated in AGNs that
provide TDM/ATM/Ethernet VPWS and MPLS VPN transport.
[Figure: Hierarchical-Labeled BGP LSP Core-Aggregation and Access — access IP/MPLS domains on either side of a combined core and aggregation IP/MPLS domain (single IGP area), joined end to end by an iBGP hierarchical LSP.]
By utilizing BGP community filtering for mobile services and dynamic IP prefix filtering for wireline services, the
ANs perform inbound filtering in BGP in order to learn the required remote destinations for the configured mobile
and wireline services. All other unwanted prefixes are dropped in order to keep the BGP tables small and prevent
unnecessary updates.
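A minimal sketch of such an inbound filter, assuming IOS XR-style routing policy language and hypothetical community values for the mobile and wireline services:

```
! Hypothetical community values (100:1000 = mobile, 100:2000 = wireline)
route-policy AN-INBOUND-FILTER
  if community matches-any (100:1000, 100:2000) then
    pass                     ! learn only the required remote loopbacks
  else
    drop                     ! discard all other labeled BGP prefixes
  endif
end-policy
!
router bgp 100
 neighbor 10.0.0.100
  address-family ipv4 labeled-unicast
   route-policy AN-INBOUND-FILTER in
```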
[Figure: Labeled BGP Redistribution into Access IGP — RAN IP/MPLS domains attached to a combined core and aggregation IP/MPLS domain (single IGP area); labeled BGP service communities are redistributed into the access IGP at the pre-aggregation nodes, with LDP LSPs providing transport.]
The network infrastructure organization in this architecture model is the same as the one described in
“Hierarchical-Labeled BGP LSP Core-Aggregation and Access.” This model differs from the aforementioned
one in that the hierarchical-labeled BGP LSP spans only the combined core/aggregation network and does not
extend to the access domain. Instead of using BGP for inter-domain label distribution in the access domain,
the end-to-end Unified MPLS LSP is extended into the access by using LDP with redistribution. The IGP scale
in the access domain is kept small by selective redistribution of required remote prefixes from iBGP based on
communities. Because there is no mechanism for using dynamic IP prefix lists for filtering in this model, the ANs
support only mobile services. Both mobile and wireline services can be supported by the PANs or AGNs.
[Figure: Hierarchical-Labeled BGP LSP Core and Aggregation — separate aggregation and core IP/MPLS domains (core as its own IGP area) joined by an i/eBGP hierarchical LSP over per-domain LDP LSPs; mobile access over TDM or packet microwave, fixed and mobile access over Ethernet/SDH.]
The network infrastructure is organized by segmenting the core and aggregation networks into independent IGP/
LDP domains. The segmentation between the core and aggregation domains could be based on a Single AS
Multi-Area design, or utilize a multi-AS design with an inter-AS organization. In the Single AS Multi-Area option,
the separation can be enabled by making the aggregation network part of a different IGP area from the core
network, or by running a different IGP process on the core ABR nodes corresponding to the aggregation and
core networks. The access network is based on native IP or Ethernet links in point-to-point or ring topologies
over fiber and newer Ethernet microwave-based access, or point-to-point TDM+Ethernet links over hybrid
microwave.
All mobile and wireline services are enabled by the AGNs. LDP is used to build intra-area LSPs within each
segmented domain. The aggregation and core networks are integrated with labeled BGP LSPs. In the Single AS
Multi-Area option, the core ABRs perform BGP NHS function to extend the iBGP-hierarchical LSP across the
aggregation and core domains. When the core and aggregation networks are organized in different ASs, iBGP is
used to build the hierarchical LSP from the PAN to the ASBRs and eBGP is used to extend the end-to-end LSP
across the AS boundary.
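For the inter-AS option, the ASBR hand-off could be sketched as follows (hypothetical AS numbers and addresses; in IOS XR-style configuration, eBGP sessions require explicit policies):

```
route-policy PASS-ALL
  pass
end-policy
!
router bgp 100
 neighbor 192.0.2.2           ! directly connected ASBR in AS 200
  remote-as 200
  address-family ipv4 labeled-unicast
   route-policy PASS-ALL in   ! accept remote-AS loopbacks with labels
   route-policy PASS-ALL out  ! advertise local-AS loopbacks with labels
```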
BGP community-based egress filtering is performed by the Core Route Reflector (RR) towards the core ABRs,
so that the aggregation networks learn only the required remote destinations for mobile and wireline service
routing, and all unwanted prefixes are dropped. This helps reduce the size of BGP tables on these nodes and
also prevents unnecessary updates.
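A hedged sketch of that egress filtering on the core RR (hypothetical community values and addresses), mirroring the community logic used elsewhere in the design:

```
! Hypothetical service communities
route-policy RR-TO-ABR-EGRESS
  if community matches-any (100:1000, 100:2000) then
    pass                      ! only service-relevant loopbacks reach ABRs
  else
    drop
  endif
end-policy
!
router bgp 100
 neighbor 10.0.1.1            ! a core ABR client of the RR
  remote-as 100
  address-family ipv4 labeled-unicast
   route-reflector-client
   route-policy RR-TO-ABR-EGRESS out
```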
[Figure: Hierarchical-Labeled BGP LSP Core, Aggregation, and Access — access, aggregation, and core IP/MPLS domains joined by an iBGP (eBGP across ASs) hierarchical LSP over per-domain LDP LSPs.]
The network infrastructure is organized by segmenting the core, aggregation, and access networks into
independent IGP/LDP domains. The segmentation between the core, aggregation, and access domains could be
based on a Single AS Multi-Area design or utilize a multi-AS design with an inter-AS organization. In the Single
AS Multi- Area option, the separation between core and aggregation networks can be enabled by making the
aggregation network part of a different IGP area from the core network, or by running a different IGP process on
the core ABR nodes corresponding to the aggregation and core networks. The separation between aggregation
and access networks is typically enabled by running a different IGP process on the PANs corresponding to
the aggregation and access networks. In the inter-AS option, while the core and aggregation networks are
in different ASs, the separation between aggregation and access networks is enabled by making the access
network part of a different IGP area from the aggregation network, or by running a different IGP process on the
PANs corresponding to the aggregation and RAN access networks.
The mobile and wireline services can be enabled by the ANs in the access as well as the PANs and AGNs. LDP
is used to build intra-area LSPs within each segmented domain. The access, aggregation, and core networks
are integrated with labeled BGP LSPs. In the Single AS Multi-Area option, the PANs and core ABRs act as ABRs
for their corresponding domains and extend the iBGP hierarchical LSP across the access, aggregation, and
core domains. When the core and aggregation networks are organized in different ASs, the PANs act as ABRs
performing BGP NHS function in order to extend the iBGP hierarchical LSP across the access and aggregation
domains. At the ASBRs, eBGP is used to extend the end-to-end LSP across the AS boundary.
By utilizing BGP community filtering for mobile services and dynamic IP prefix filtering for wireline services, the
ANs perform inbound filtering in BGP in order to learn the required remote destinations for the configured mobile
and wireline services. All other unwanted prefixes are dropped in order to keep the BGP tables small and to
prevent unnecessary updates.
[Figure: Hierarchical-Labeled BGP Redistribution into Access IGP — labeled BGP service communities are redistributed into the access IGP at the pre-aggregation nodes, with LDP LSPs in each domain.]
The network infrastructure organization in this architecture model is the same as the one described in
“Hierarchical-Labeled BGP LSP Core-Aggregation and Access,” with options for both Single AS Multi-Area
and Inter-AS designs. This model differs from the aforementioned one in that the hierarchical-labeled BGP LSP
spans only the core and aggregation networks and does not extend to the access domain. Instead of using
BGP for inter-domain label distribution in the access domain, the end-to-end Unified MPLS LSP is extended
into the access by using LDP with redistribution. The IGP scale in the access domain is kept small by selective
redistribution of required remote prefixes from iBGP based on communities. Because there is no mechanism for
using dynamic IP prefix lists for filtering in this model, only mobile services are currently supported by the ANs.
Both mobile and wireline services can be supported by the PANs or AGNs.
The readiness of fiber-based access and the consequent increase of bandwidth availability at the last mile
have driven a steep rise in the number of subscribers that can be aggregated at the access layers of the
network. New Ethernet-based access technologies such as PON allow for the aggregation of thousands of
subscribers on a single AN, with per-subscriber speeds that average 20 Mbps, further justifying the distribution
of subscriber management functions as close as possible to the subscriber-facing edge of the network to satisfy
scale and total bandwidth demands.
Following these trends, the Cisco FMC system has selected products from the Cisco ASR 9000 family for
deployment at pre-aggregation and aggregation sites, allowing BNG functions to reside at any layer of the
aggregation network. Figure 14 and Figure 15 depict the supported models.
[Figure 14: BNG placement options — service edge at the AGN (AGN-SE) or PAN (PAN-SE) with a non-trunk UNI; IP or L3 VPN over Unified MPLS for triple-play unicast; PIM with MPLS/Multicast VPN (mLDP); MVR-enabled ANs for multicast.]
[Figure 15: Platform mapping — MPLS access node (ASR-901, ME 3600), pre-aggregation node (ASR-9001, ASR-903), aggregation/service edge node (ASR-9000), and core node (ASR-9000), over fiber/microwave access, DWDM and fiber rings with hub-and-spoke hierarchical aggregation topology, and a DWDM mesh core.]
To adapt to the preferred deployment model of a given provider, connectivity between the subscriber customer
premises equipment (CPE) and the BNG can be modeled by using both 1:1 and N:1 subscriber aggregation
models, also known as 1:1 VLAN and N:1 VLAN, while the User-Network Interface (UNI) remains non trunk,
keeping the provisioning of the local loop simple on both CPE and ANs.
A 1:1 VLAN indicates a one-to-one mapping between a user port on the AN and a VLAN. The uniqueness of the
mapping is maintained in the AN and across the aggregation network. An N:1 VLAN, on the other hand, refers to
a many-to-one mapping between user ports and a VLAN. The user ports may be located in the same or different
ANs, and a common VLAN is used to carry the users' traffic across the aggregation network.
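On an IOS XR-based BNG, the two models map naturally onto explicit and ambiguous VLAN sub-interfaces. The following is a hedged sketch with hypothetical interface names and VLAN IDs:

```
! N:1 - one shared VLAN carries many subscribers from an AN
interface Bundle-Ether1.10
 encapsulation dot1q 10
 ipsubscriber ipv4 l2-connected
  initiator dhcp              ! DHCP-triggered IPoE session creation
!
! 1:1 - each subscriber arrives on its own VLAN from a range
interface Bundle-Ether1.100
 encapsulation ambiguous dot1q 100-1999
 ipsubscriber ipv4 l2-connected
  initiator dhcp
```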
Subscriber access is supported via native IP over Ethernet (IPoE) for providers who prefer a cohesive transport
across all residential services and between residential, business, and mobile applications, or through legacy
Point-to-Point Protocol over Ethernet (PPPoE) for those who desire stronger subscriber authentication
mechanisms and have a long-standing PPPoE incumbency. For operators who choose IPoE as the subscriber
access protocol, the architecture leverages DHCP-based address assignment procedures to discover subscriber
presence, using a single network layer protocol for both flexible IP address management and subscriber
detection.
Orthogonal to the subscriber access protocol is the address family used at the network layer to carry
subscribers' traffic. Depletion of the IPv4 address space has been an area of concern for operators for several
years. Techniques such as IPv4 network address translation (NAT44) have been widely deployed to reduce
the number of globally-routable addresses assigned to subscribers. However, law enforcement regulations
mandating the ability to identify a subscriber uniquely by his or her IP address have largely limited the
effectiveness of these techniques in certain countries.
The system will provide a complete migration solution. While the first Cisco FMC system release addressed
support for CG NAT and dual-stack subscribers at the BNG, the second release focuses on an IPv6-only
access network (CPE to BNG) for unicast services. IPv4-capable household devices (single or dual stacked)
are granted end-to-end connectivity through mapping of address and port using translation (MAP-T) functions
performed at the residential CPE and at the BNG device. Among the various NAT464 (IVI) technologies, MAP-T
has been selected because of its simplicity and transparency, in addition to providing effective IPv4 address
savings. By not requiring network equipment to keep stateful IVI translation entries, it optimizes resource
utilization and performance, while an intelligent translation logic preserves a packet's original source and
destination addresses and ports, allowing for effective QoS and security applications throughout the
network.
Within the core network, Unified MPLS will offer seamless transport for both address families, fully separating
IPv6 enablement in the residential access from the core transport.
Given the lack of maturity of IPv6-enabled multicast applications, multicast services are delivered by using IPv4
end-to-end, forcing the access network to remain dual stacked. However, multicast forwarding does not impose
any constraint over the receiver IP addressing logic, allowing for the CPE IPv4 address not to be routable or even
unique within the IPv4 domain and therefore preserving the IPv4 address savings achieved by MAP-T.
For PON/FTTH access, Internet Group Management Protocol (IGMP) v2/v3 is used in the Layer-2 access
network, and Protocol Independent Multicast (PIM) Source Specific Multicast (SSM) is implemented at the BNG.
For DSL access, the access network is routed and IGMPv2/v3 reports, proxied by CPE and Digital Subscriber
Line Access Multiplexer (DSLAM), are converted into PIM SSM messages at the last hop multicast router.
In the aggregation/core network, multicast delivery trees are signaled and established by using (recursive)
Multicast Label Distribution Protocol (MLDP), and multicast traffic is forwarded over flat MPLS LSPs. Multicast
forwarding can be isolated in the same residential VPN used for unicast services, or handled globally according
to the operator’s preference and desire for a common multicast transport across multiple service categories
(e.g., residential and mobile).
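At a high level, enabling this in the aggregation/core might look like the following hedged sketch (IOS XR-style syntax; interface selection deliberately simplified):

```
mpls ldp
 mldp                         ! mLDP capability for multicast LSPs
!
multicast-routing
 address-family ipv4
  interface all enable        ! enable multicast on all interfaces
!
router pim
 address-family ipv4          ! PIM SSM toward the multicast sources
```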
The optical line terminal (OLT) and optical network unit (ONU) share the responsibility for performing the role of
an AN, with the ONU facing the user through the unit (U) reference point, and the OLT facing the aggregation
network through the V reference point.
Under those assumptions, and regardless of the broadband access technology chosen by a given
implementation (DSL, FTTH, or PON), the first release of the Cisco FMC system aligns to TR-101 Non Trunk UNI
support at the U reference point, and for both 1:1 and N:1 VLAN aggregation models.
A non-trunk UNI uses a shared VLAN between the AN and the residential gateway for all of a subscriber's
services, while relative priority across services is preserved by properly setting the Differentiated Services Code
Point (DSCP) field in the IP packet header or the 802.1p CoS values carried in an Ethernet priority-tagged frame.
A 1:1 or N:1 VLAN model is then used to aggregate subscribers' traffic into the operator's network toward the
associated service insertion point. The N:1 VLAN uses a shared VLAN to aggregate all subscribers and all
services to and from a particular AN, while a 1:1 model dedicates a unique VLAN to each subscriber.
The subscriber aggregation models are described in detail in “Subscriber Aggregation Models.”
At the same time, telco and cable operators who do not own a portion of the licensed spectrum are trying to
improve customer retention by devising creative ways to provide “on the go” connectivity to their clients in
residential and metropolitan areas.
Wi-Fi has become ubiquitous in nearly all personal mobile devices, including smartphones, tablets, cameras, and
game consoles. What’s more, Wi-Fi technology is improving every day. Robust carrier-grade Wi-Fi networks
have the ability to outperform 4G networks and are secure, while next-generation hotspots offer roaming that is
as transparent as cellular roaming. To meet the spectrum challenge, Wi-Fi provides 680 MHz of new spectrum to
operators.
Carrier-grade Wi-Fi therefore has become a central element in strategies for ubiquitous capacity and coverage
across networks for both fixed and mobile operators. While other systems in Cisco focus more toward Metro SP
Wi-Fi architectures, Cisco FMC Release 2.0 introduces community Wi-Fi.
Under this model, operator-owned residential CPEs announce a private Service Set Identifier (SSID) used
by members of the household, and a public, well-known SSID shared among all customers of the same
operator. The Private SSID uses Wi-Fi Protected Access (WPA)/WPA2 security protocols in order to secure
communication for the household equipment, while the public SSID is open. Public access is authenticated
via web logon procedures or transparently using dynamically learnt network identities associated with the
connecting device (e.g., MAC address).
The separation between household and public Wi-Fi traffic in the access network is achieved by VLAN
segmentation, requiring the CPE UNI to become trunked. VLAN-based segmentation simplifies H-QoS modeling
for aggregated rate limiting based on service category (pure residential wireline vs. public Wi-Fi), and it allows for
flexible and independent positioning of the gateway functions. Based on the scale and performance capabilities
of the selected devices, and mindful of operators' need for cost optimization, the Cisco FMC system implements
wireline and Wi-Fi gateway functions on the same aggregation node as shown in the following figure.
[Figure: Community Wi-Fi service model — trunk UNI carrying wireline traffic (N:1 or 1:1 VLANs; explicit (N:1) and ambiguous (1:1) access interfaces; IPv6 IPoE and PPPoE sessions; IPv6 routed CPE with MAP-T) and public Wi-Fi traffic (N:1 VLAN; explicit (N:1) access interface; IPv4 IPoE sessions; bridged CPE), terminating at the AGN-SE or PAN-SE, which also acts as MAP-T border router; IP or L3 VPN over Unified MPLS for triple-play unicast; PIM with MPLS/Multicast VPN (mLDP); MVR-enabled ANs for multicast; fiber access, DWDM/fiber-ring hierarchical aggregation, DWDM mesh core.]
Fiber DWDM, Fiber Rings, H&S, Hierarchical Topology DWDM, Mesh Topology
The residential CPE operational mode is routed over the household VLAN, and bridged over the Wi-Fi VLAN.
CPE bridged mode is necessary to preserve visibility over the public handset’s MAC address throughout the
access network for authorization purposes.
All public Wi-Fi subscribers connecting from the same AN share the same N:1 Wi-Fi VLAN, regardless of
whether the subscriber aggregation model implemented over the household VLAN is 1:1 or N:1.
Connectivity over the public Wi-Fi network uses IPv4. IPv4 remains the leading address family in this space, while
IPv6-capable operating systems and applications on handsets have only just started making their appearance in
the market.
In the aggregation and core network, the same level of segmentation between pure residential and public Wi-Fi
traffic can be achieved by isolating community Wi-Fi services in a dedicated L3 VPN through the virtualization
means enabled by Unified MPLS.
The Cisco FMC system supports the following business wireline services on a single converged network:
• L3VPN services via Ethernet over Multiprotocol Label Switching (EoMPLS) PW with Pseudowire Headend
(PWHE) connectivity to MPLS VPN VRFs at the service edge node.
• Multipoint E-LAN services via Provider Backbone Bridging Ethernet VPN (PBB-EVPN) or Hierarchical
Virtual Private LAN Service (H-VPLS).
• Point-to-point X-Line via Any Transport over MPLS (AToM) pseudowires: TDM, ATM, and Ethernet.
The Cisco FMC solution supports MPLS-based access networks for those operators seeking to deploy a
converged architecture to transport all service types with a uniform control plane. Native Ethernet and TDM
access networks are also supported for those operators seeking to cap investments in legacy network
deployments and to facilitate migration to a packet-switched network architecture.
[Figure: Business services over an efficient large-scale multiservice MPLS access network — L3VPN via Ethernet PWE3 with PWHE into MPLS VPN (v4) at the AGN-SE; X-Line via Ethernet, CESoPSN, SAToP, and ATM VC/VP PWE3 at the PAN-SE, over Ethernet port/802.1q and TDM, ATM IMA E1, STM1 UNIs; aggregation node ASR-9001/9006; xWDM and fiber rings in access, DWDM fiber rings with hub-and-spoke hierarchical aggregation topology, DWDM mesh core.]
Pseudowire Headend (PWHE) is a technology that allows termination of access PWs into a L3 (VRF or global)
domain or into a L2 domain. PWs provide an easy and scalable mechanism for tunneling customer traffic into a
common IP/MPLS network infrastructure. PWHE supports features such as H-QoS and access lists (ACL) for an
L3VPN on a per-PWHE interface basis. PWHE introduces the construct of a “pw-ether” interface on the service
edge node. This virtual pw-ether interface terminates the PWs carrying traffic from the subscriber CPE device
and maps directly to an MPLS VPN VRF on the service edge node. Per-subscriber H-QoS and any required
subscriber ACLs are applied to the pw-ether interface.
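A hedged IOS XR-style sketch of the pw-ether construct; neighbor addresses, PW IDs, and names are hypothetical:

```
generic-interface-list PWHE-LIST
 interface TenGigE0/0/0/0     ! physical paths the PW may arrive on
!
interface PW-Ether1           ! virtual interface terminating the access PW
 vrf CUST-A                   ! maps directly into the L3VPN VRF
 ipv4 address 192.0.2.1 255.255.255.252
 attach generic-interface-list PWHE-LIST
!
l2vpn
 xconnect group BUSINESS
  p2p CUST-A-SITE1
   interface PW-Ether1
   neighbor ipv4 10.0.0.10 pw-id 100   ! AN loopback and PW ID
```

Per-subscriber H-QoS policies and ACLs would then be applied to PW-Ether1 like any other L3 interface.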
For an L2VPN service, such as a port-based Ethernet Private LAN (EP-LAN) or a VLAN-based Ethernet Virtual
Private LAN (EVP-LAN) service, the subscriber CPE device is connected to the SP network via an Ethernet port
UNI or 802.1Q-tagged UNI on the FAN or CSG. The Cisco FMC system supports two mechanisms for providing
L2VPN services: traditional H-VPLS virtual forwarding instances (VFI), or PBB-EVPN.
A VPLS VFI automatically creates a full mesh of pseudowires to transport L2VPN services between service edge
nodes. To minimize the number of neighbors involved in the VPLS VFI and to avoid any potential MAC address
scaling issues on the AN, the VPLS VFI is not configured on the ANs. Transport of the L2VPN service from the
AN to the service edge in the PAN or AGN is again accomplished via Ethernet Pseudowire Emulation Edge to
Edge (PWE3). The PWE3 from the AN is connected to a VPLS VFI providing the L2VPN service on the service
edge node.
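The stitching of the access PWE3 into the service-edge VFI could be sketched as follows (hypothetical neighbors and IDs, IOS XR-style syntax):

```
l2vpn
 bridge group ELAN
  bridge-domain CUST-B
   neighbor 10.0.0.10 pw-id 200        ! spoke PW from the AN (no VFI there)
   !
   vfi CUST-B-VFI                      ! full mesh between service edges only
    neighbor 10.0.2.1 pw-id 200
    neighbor 10.0.2.2 pw-id 200
```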
PBB-EVPN is a new draft in the IETF L2VPN working group that combines PBB and E-VPN functionality in a
single device. While still relying on MPLS forwarding, E-VPN uses BGP for distributing MAC address reachability
information over an MPLS cloud. In existing L2VPN solutions, MAC addresses are always learned in the data
plane, i.e., MAC bridging. In comparison, in E-VPN the learning of MAC addresses over the core is done via the
control plane, i.e., MAC routing. Control-plane-based learning brings flexible BGP-based policy control to MAC
addresses, similar to the policy control available for IP prefixes in L3VPNs. Customers can build any topology by
using route targets. A full mesh of pseudowires is no longer required, which is often a scalability concern in
VPLS as the number of provider edge (PE) routers increases. Another key feature of E-VPN is its multi-homing
capability. VPLS offers only limited multi-homing support, with active-standby or active-active per-service dual
homing. E-VPN, on the other hand, supports both active-active per service and active-active per flow, leading to
better load balancing across peering PEs. It also supports multi-homed device (MHD) and multi-homed network
(MHN) topologies with two or more routers, which can be geographically disjoint, in the same redundancy group.
PBB-EVPN takes this a step further by combining Provider Backbone Bridging (PBB) and E-VPN functions in a
single device. PBB is defined by IEEE 802.1ah, where MAC tunneling (MAC-in-MAC) is employed to improve
service instance and MAC address scalability in Ethernet. Using PBB's MAC-in-MAC encapsulation, PBB-EVPN
separates the customer MAC address (C-MAC) space from the backbone MAC address (B-MAC) space. In
contrast to E-VPN, PBB-EVPN uses BGP to advertise only B-MAC reachability, while data-plane learning is still
used for remote C-MAC to remote B-MAC binding. As a result, the number of MAC addresses carried in the
provider backbone is reduced to the number of PEs, which is usually in the hundreds and thus far fewer than the
millions of customer MAC addresses typical of large service provider networks. Should any MAC mobility occur
in the access layer, it is completely transparent to BGP and is instead handled by re-learning the moved C-MAC
against a new B-MAC.
The route scale in the access domain is kept to a minimum by ingress filtering on the AN. The ANs that enable
wireline services tag their loopbacks in internal BGP (iBGP)-labeled unicast with a common FAN community,
which is imported by all service edge nodes for wireline services. The ANs' ingress filtering for business
services depends on the type of service.
[Figure: Wireline VPWS filtering — FAN and CSG access networks (OSPF 0/IS-IS L2) attached to PANs, with inline RRs at the PANs and CN-ABRs across the IS-IS L1 aggregation and IS-IS L2 core domains; an AToM pseudowire carries the wireline VPWS between FANs. When a VPWS service is activated, the inbound filter is automatically updated for the remote FAN.]
For E-LAN and L3VPN services, all service edge functionality is handled by the PAN or AGN nodes, and the
loopback prefixes are marked with the FSE community in BGP. Thus, connectivity from the AN to these nodes is
achieved by permitting this community in the inbound filter.
For E-Line services, a dynamic IP prefix list is used for inbound filtering. When a wireline service is activated to a
new destination, the route-map used for inbound filtering has to be updated. Since adding a new wireline service
on the device results in a change in the routing policy of a BGP neighbor, the dynamic inbound soft reset function
is used to initiate a non-disruptive exchange of route refresh requests between the AN and the PAN.
Tech Tip
Both BGP peers must support the route refresh capability in order to use dynamic
inbound soft reset capability.
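As an illustration, the activation workflow might look like this (hypothetical prefix-set, addresses, and device prompt; IOS XR-style syntax):

```
! Prefix-set referenced by the AN's inbound route policy; extended when
! an E-Line service to a new remote FAN is activated
prefix-set WIRELINE-REMOTE
  10.0.0.21/32,                ! existing remote FAN loopback
  10.0.0.22/32                 ! newly activated E-Line destination
end-set
```

Committing the policy change and issuing a soft inbound clear (e.g., `clear bgp ipv4 unicast 10.0.0.100 soft in`) then triggers a route refresh rather than a session reset, assuming both peers advertised the route refresh capability.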
[Figure: Business services over a legacy large-scale multiservice access network — L3VPN over Ethernet 1q/QinQ into MPLS VPN/Multicast VPN (mLDP) at the AGN-SE; E-LAN via VPLS (+ 802.1ah PBB) or PBB-EVPN over Ethernet port, 802.1q, or 802.1ad UNIs at the PAN-SE/AGN-SE; X-Line via Ethernet, CESoPSN, SAToP, and ATM VC/VP PWE3 over Ethernet and TDM, ATM IMA E1, STM1 UNIs; SONET/SDH access, DWDM fiber rings with hub-and-spoke hierarchical aggregation topology, DWDM mesh core; aggregation node ASR-9001/9006.]
For an L3VPN service, the subscriber CPE device is typically connected to the SP network via an Ethernet
802.1Q-tagged user network interface (UNI) on the AN. Transport of the L3VPN service from the AN to the
service edge in the PAN or AGN is accomplished via native Ethernet. The AN may translate the VLAN tag of the
customer UNI to a unique VLAN tag on the SP network or may push an S-VLAN tag on the C-VLAN, creating
a Q-in-Q network-to-network (NNI). Whether single- or double-tagged, the Ethernet NNI will be terminated on
the service edge node. The VLANs carrying the L3VPN service are mapped to an MPLS VPN VRF, which is then
transported over the Unified MPLS Transport network. H-QoS and any required subscriber ACLs are applied to
the Ethernet NNI interface.
For an L2VPN service, such as a port-based Ethernet Private LAN (EP-LAN) or a VLAN-based Ethernet Virtual
Private LAN (EVP-LAN) service, the subscriber CPE device is connected to the SP network via an Ethernet port
UNI, 802.1Q-tagged UNI, or 802.1ad double-tagged UNI on the FAN. Transport of the L2VPN service from the
AN to the service edge in the PAN or AGN is accomplished via native Ethernet.
The AN may translate the VLAN tag of the customer UNI to a unique VLAN tag on the SP network or may push
an S-VLAN tag on the C-VLAN, creating a Q-in-Q NNI. Whether single- or double-tagged, the Ethernet NNI
will be terminated on the service edge node. The VLANs are connected to a VPLS VFI or PBB-EVPN providing
the L2VPN service on the service edge node. Per-subscriber H-QoS and any required subscriber ACLs are
applied to the Ethernet NNI interface.
For a wireline VPWS, like an Ethernet Private Line (EPL) or Ethernet Virtual Private Line (EVPL) business service,
the customer CPE devices on either end are connected via an Ethernet port UNI, 802.1Q-tagged UNI, or 802.1ad
double-tagged UNI.
The Cisco FMC system provides a comprehensive mobile service backhaul solution for transport of LTE, legacy
2G GSM, and existing 3G UMTS services. An overview of the models supported for the transport of mobile
services is illustrated in Figure 21 and Figure 22:
[Figure 21: Mobile backhaul over fiber/microwave packet access — AToM pseudowires carry TDM BTS and ATM Node B traffic toward the BSC/RNC, and MPLS VPN (v4/v6) carries eNB S1-U traffic toward the SGSN/GGSN, S/PGW, and MME behind the mobile transport gateway (covered by the MPC system); cell site gateway ASR-901, pre-aggregation node ASR-903/ASR-9001, aggregation node ASR-9000, mobile transport PE ASR-9000, core node CRS-3; fiber or microwave links and rings in access, DWDM/fiber rings with hub-and-spoke hierarchical aggregation, DWDM mesh core.]
[Figure 22: Mobile backhaul over SDH/SONET and TDM/Ethernet microwave access — microwave systems from partners (NSN, NEC, SIAE) feed pre-aggregation nodes (ASR-903/9001); AToM pseudowires carry TDM BTS and ATM Node B traffic toward the BSC/RNC, and MPLS VPN (v4/v6) extends toward the SGSN/GGSN and S/PGW (LMA) behind the mobile transport gateway in the mobile packet core (covered by the MPC system); aggregation node ASR-9000, mobile transport PE ASR-9000, core node CRS-3; DWDM/fiber rings with hub-and-spoke hierarchical aggregation, DWDM mesh core.]
The system proposes a highly-scaled MPLS L3VPN-based service model to meet the immediate needs of LTE
transport and accelerate its deployment. The MPLS VPN model provides the required transport virtualization
for the graceful introduction of LTE into an existing 2G/3G network, and also satisfies future requirements of
RAN sharing in a wholesale scenario. It is well suited to satisfy the mesh connectivity and stringent latency
requirements of the LTE X2 interface. Simple MPLS VPN route-target import/export mechanisms can be used to
enable multipoint connectivity:
• within the local RAN access for intra-RAN-access X2 handoff.
• with adjacent RAN access regions for inter-RAN-access region X2 handoff.
• with EPC gateways (SGWs, MMEs) in the MPC for the S1-u/c interface.
• with more than one MME and SGW for MME and SGW pooling scenarios.
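These import/export relationships can be sketched as a CSG VRF definition in IOS XR-style syntax (all route-target and AS values are hypothetical):

```
vrf LTE-TRANSPORT
 address-family ipv4 unicast
  import route-target
   100:1                      ! local RAN access region (intra-region X2)
   100:2                      ! adjacent RAN access region (inter-region X2)
   100:100                    ! EPC gateways - SGW/MME pool (S1-u/c)
  !
  export route-target
   100:1                      ! advertise local eNB subnets to the region
```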
The MPLS VPN-based service model allows for eNodeBs and associated CSGs to be added to the RAN at any
location in the network. EPC gateways can be added in the MPC and have instant connectivity to each other
without additional configuration overhead. It allows seamless migration of eNodeBs initially mapped to centralized
EPC gateways to more distributed ones in order to accommodate capacity and scale demands without having to
re-provision the transport infrastructure. "L3 MPLS VPN Service Model for LTE" and "L2 MPLS VPN Service
Model for 2G and 3G" cover these aspects in detail.
For the above service models, the system supports physical layer synchronization of frequency based on SyncE,
or packet-based synchronization of frequency as well as phase and time of day (ToD) based on 1588 Precision
Time Protocol (PTP), as described in “Synchronization Distribution.”
[Figure: Single AS multi-area organization — RAN IGP processes (OSPF/IS-IS) subtending the PANs, aggregation areas/levels (OSPF x/IS-IS L1), and the core backbone (OSPF 0/IS-IS L2), with CN-ABRs between the aggregation and core IP/MPLS domains; FAN and CSG access IP/MPLS domains attach to AGNs and PANs; LDP LSPs within each domain.]
From a multi-area IGP organization perspective, the core network is either an Intermediate System-to-
Intermediate System (IS-IS) Level 2 domain or an Open Shortest Path First (OSPF) backbone area. The
aggregation domains, in turn, are IS-IS Level 1 domains or OSPF non-backbone areas. No redistribution
occurs between the core and aggregation IGP levels/areas, thereby containing the route scale within each
domain. The MPLS/IP access networks subtending from AGNs or PANs are based
on a different IGP process, restricting their scale to the level of the local access network. To accomplish this,
the PANs run two distinct IGP processes, with the first process corresponding to the core-aggregation network
(IS-IS Level 1 or OSPF non-backbone area) and the second process corresponding to the Mobile RAN access
network. The second IGP process could be an OSPF backbone area or an IS-IS L2 domain. All nodes belonging
to the access network subtending from a pair of PANs are part of this second IGP process.
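As a hedged IOS-XR-style sketch, the two distinct IGP processes on a PAN might look as follows; the process names, NETs, and interface names are assumptions for illustration only.

```
! PAN: first IGP process - core-aggregation network (IS-IS Level 1)
router isis AGG
 is-type level-1
 net 49.0001.0000.0000.0011.00
 interface TenGigE0/0/0/0           ! uplink toward the AGN/core
  address-family ipv4 unicast
!
! PAN: second IGP process - mobile RAN access network (IS-IS Level 2
! in this sketch; an OSPF backbone area would work equally well)
router isis RAN
 is-type level-2-only
 net 49.0002.0000.0000.0011.00
 interface GigabitEthernet0/0/0/1   ! downlink toward the CSG access ring
  address-family ipv4 unicast
```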
Partitioning these network layers into such independent and isolated IGP domains helps reduce the size of
routing and forwarding tables on individual routers in these domains, which, in turn, leads to better stability and
faster convergence within each of these domains. Label Distribution Protocol (LDP) is used for label distribution
to build intra-domain LSPs within each independent access, aggregation, and core IGP domain. Inter-domain
reachability is enabled by hierarchical LSPs using BGP-labeled unicast as per RFC 3107 procedures, where iBGP
is used to distribute labels in addition to remote prefixes, and LDP is used to reach the labeled BGP next-hop.
Tech Tip
This model supports transport of fixed wireline and mobile services. The following
figure shows the example for RAN transport. The deployment considerations for both
RAN transport and fixed wireline transport are covered in this guide.
Figure 24 - Inter-Domain Transport for Multi-Area IGP Design with Labeled BGP Access
In this option, the access, aggregation, and core networks are integrated with Unified MPLS LSPs by extending
labeled BGP from the core all the way to the nodes in the access network. Any node in the network that requires
inter-domain LSPs to reach nodes in a remote domain acts as a labeled BGP PE and runs iBGP IPv4 unicast+label
sessions with its corresponding local RR.
• The core point of presence (POP) nodes, referred to in this design as Core Node–Area Border Routers
(CN-ABR), are labeled BGP ABRs and act as inline RRs for their local aggregation network PAN clients.
The CN-ABRs peer with other CN-ABRs using iBGP-labeled unicast in either a full mesh configuration
or using a centralized core-node route reflector (CN-RR) within the core domain. The centralized RR
deployment option is shown in Figure 24. Note that the CN-RR applies an egress filter towards the
CN-ABRs in order to drop prefixes with the common RAN community, which eliminates unnecessary
prefixes from being redistributed.
• For mobile service transport, the MTGs residing in the core network are labeled BGP PEs. They connect
to the EPC gateways (SGW, Packet Data Network Gateway [PGW], and MME) in the MPC. The MTGs
peer either directly with the closest CN-ABR RRs, in the case of a CN-ABR full-mesh configuration, or
with the CN-RR, depending on the deployment setting. The MTGs advertise their loopbacks into iBGP-
labeled unicast with the global MSE BGP community representing the MSE, and then import the global
MSE and common RAN communities.
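The CN-RR egress filter mentioned above could be approximated with an IOS-XR-style route policy. The community value, neighbor address, and policy names below are hypothetical.

```
! CN-RR: drop prefixes carrying the common RAN community on egress
community-set COMMON-RAN
 100:500
end-set
!
route-policy CN-RR-OUT
 if community matches-any COMMON-RAN then
  drop
 else
  pass
 endif
end-policy
!
router bgp 100
 neighbor 10.0.0.1                   ! a CN-ABR client (illustrative)
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast
   route-policy CN-RR-OUT out
```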
Since routes between the core IS-IS Level 2 (or OSPF backbone) and aggregation IS-IS Level 1 (or OSPF non-
backbone area) are not redistributed, the CN-ABRs have to reflect the labeled BGP prefixes with the next-hop
changed to self in order to be inserted into the data path, which enables the inter-domain LSP switching and
allows the aggregation and core IGP routing domains to remain isolated. This CN-ABR NHS function is applied
by the CN-ABRs towards its PAN clients in its local aggregation domain only for prefixes from other remote
domains, not for locally-learned prefixes. The purpose is to prevent the CN-ABR from inserting itself into the path
of inter-area X2 interface routing. The CN-ABR applies this NHS function for all updates towards the CN-RR in
the core domain. Similarly, since the access and aggregation networks are in different IGP processes, the PANs
have to reflect the labeled BGP prefixes with the next hop changed to self in order for the PANs to be inserted
into the data path, thus enabling the inter-domain LSP switching. This PAN NHS function is symmetrically
applied by the PANs towards nodes in the local access domain, and the higher level CN-ABR inline-RR in the
aggregation domain.
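The NHS behavior can be sketched in IOS-XR style as follows; on IOS XR, the `ibgp policy out enforce-modifications` knob is what allows an RR to rewrite attributes on reflected routes. Addresses and the AS number are illustrative.

```
! CN-ABR: inline RR setting next-hop-self on labeled BGP routes
router bgp 100
 ibgp policy out enforce-modifications  ! permit attribute changes on reflected routes
 address-family ipv4 unicast
  allocate-label all                    ! RFC 3107: advertise labels with prefixes
 !
 neighbor 10.1.0.1                      ! PAN client in the local aggregation domain
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast
   route-reflector-client
   next-hop-self
```

Restricting NHS to remote-domain prefixes only, as described above, would in practice be done with an outbound route policy keyed on BGP communities rather than the blanket `next-hop-self` shown here.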
For mobile service transport, the MTGs in the core network are capable of handling large scale and will learn
all BGP-labeled unicast prefixes since they need connectivity to all the ANs carrying mobile services in the
entire network. Simple prefix filtering based on BGP communities is performed on the CN-RRs for constraining
IPv4+label routes from remote access regions from proliferating into neighboring aggregation domains, where
they are not needed. The PANs only learn labeled BGP prefixes marked with the common RAN BGP community
and the MSE BGP community. This allows the PANs to enable inter-metro wireline services across the core and
also reflects the MSE prefix to their local access networks. Using a separate IGP process for the access enables
the access network to have limited control plane scale, since the ANs only learn local IGP routes and labeled
BGP prefixes marked with the MSE BGP community.
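The community-constrained learning on the PANs could be sketched as an IOS-XR-style inbound policy; the community values and neighbor address are assumptions.

```
! PAN: accept only the common RAN and global MSE communities
community-set RAN-OR-MSE
 100:500,     ! common RAN community
 100:1000     ! global MSE community
end-set
!
route-policy PAN-IN
 if community matches-any RAN-OR-MSE then
  pass
 else
  drop
 endif
end-policy
!
router bgp 100
 neighbor 10.2.0.1                   ! CN-ABR inline RR (illustrative)
  remote-as 100
  address-family ipv4 labeled-unicast
   route-policy PAN-IN in
```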
Tech Tip
The filtering mechanisms necessary for fixed wireline service deployment are not
currently available in this option, so it supports only mobile service transport. Wireline
service support will be added for this option in a future release.
Figure 25 - Inter-Domain Transport for Multi-Area IGP Design with IGP/LDP Access
Since routes between the core IS-IS Level 2 (or OSPF backbone) and aggregation IS-IS Level 1 (or OSPF non-
backbone area) are not redistributed, the CN-ABRs have to reflect the labeled BGP prefixes with the next-hop
changed to self in order to insert themselves into the data path to enable the inter-domain LSP switching and
allow the aggregation and core IGP routing domains to remain isolated. This CN-ABR NHS function is applied
by the CN-ABRs only for prefixes from other remote domains towards its PAN clients in its local aggregation
domain. It is not applied for locally-learned prefixes to prevent the CN-ABR from inserting itself into the path of
inter-area X2 interface routing. The CN-ABR applies this NHS function for all updates towards the CN-RR in the
core domain.
The MTGs in the core network are capable of handling large scale and will learn all BGP-labeled
unicast prefixes to provide connectivity to all the CSGs in the entire network. Simple prefix filtering based on
BGP communities is performed on the CN-RRs for constraining IPv4+label routes from remote RAN access
regions from proliferating into neighboring aggregation domains, where they are not needed. The PANs only
learn labeled BGP prefixes marked with the common RAN and MSE BGP communities. This allows the PANs to
enable inter-metro wireline services across the core, and also redistribute the MSE prefix to their local access
networks. Using a separate IGP process for the RAN access enables the mobile access network to have limited
control plane scale, because the CSGs learn only local IGP routes and labeled BGP prefixes marked with the
MSE BGP community.
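For this IGP/LDP access option, the mutual redistribution on a PAN between labeled BGP and the RAN IGP might be sketched as follows, in IOS-XR style. Process names, community values, and policy names are illustrative assumptions.

```
! Only MSE prefixes are redistributed down into the RAN IGP
community-set MSE
 100:1000
end-set
!
route-policy BGP-TO-RAN
 if community matches-any MSE then
  pass
 else
  drop
 endif
end-policy
!
route-policy RAN-TO-BGP
 set community (100:500)             ! tag CSG loopbacks with the common RAN community
 pass
end-policy
!
router isis RAN
 address-family ipv4 unicast
  redistribute bgp 100 route-policy BGP-TO-RAN
!
router bgp 100
 address-family ipv4 unicast
  redistribute isis RAN route-policy RAN-TO-BGP
  allocate-label all
```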
[Figure: IGP organization for the inter-AS design - RAN areas/levels (OSPF x/IS-IS L1) and aggregation and core areas/levels (OSPF 0/IS-IS L2) in separate ASs, with AGNs at the aggregation edge; LDP LSPs within each domain.]
The core and aggregation networks are segmented into different ASs. Within each aggregation domain, the
aggregation and access networks are segmented into different IGP areas or levels, where the aggregation
network is either an IS-IS Level 2 or an OSPF backbone area, and subtending access networks are IS-IS Level
1 or OSPF non-backbone areas. No redistribution occurs between the aggregation and access IGP levels/areas,
thereby containing the route scale within each domain. Partitioning these network layers into such independent
and isolated IGP domains helps reduce the size of routing and forwarding tables on individual routers in these
domains, which, in turn, leads to better stability and faster convergence within each of these domains. LDP is
used for label distribution to build intra-domain LSPs within each independent access, aggregation, and core IGP
domain.
Inter-domain reachability is enabled with hierarchical LSPs using BGP-labeled unicast as per RFC 3107
procedures. Within each AS, iBGP is used to distribute labels in addition to remote prefixes, and LDP is used
to reach the labeled BGP next-hop. At the ASBRs, the Unified MPLS LSP is extended across the aggregation
and core AS boundaries using eBGP-labeled unicast. The Unified MPLS LSP can be extended into the access
domain using two different options as presented below to accommodate different operator preferences.
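At the AS boundary, the eBGP-labeled unicast session could look like this minimal IOS-XR-style sketch. Addresses, AS numbers, and policy names are hypothetical; note that IOS XR requires an explicit route policy on eBGP sessions before routes are exchanged.

```
! AGN-ASBR: eBGP IPv4+label session to the CN-ASBR in the core AS
route-policy PASS-ALL
 pass
end-policy
!
router bgp 200
 address-family ipv4 unicast
  allocate-label all                  ! bind labels to advertised prefixes
 !
 neighbor 192.0.2.1                   ! directly connected CN-ASBR (illustrative)
  remote-as 100
  address-family ipv4 labeled-unicast
   route-policy PASS-ALL in
   route-policy PASS-ALL out
```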
Tech Tip
This model supports transport of fixed wireline and mobile services. The following
figure shows the example for RAN transport. The deployment considerations for both
RAN transport and fixed wireline transport are covered in this guide.
Figure 27 - Inter-Domain Transport for Inter-AS Design with Labeled BGP Access
In this option, the access, aggregation, and core networks are integrated with Unified MPLS LSPs by extending
labeled BGP from the core all the way to the nodes in the access network. Any node in the network that requires
inter-domain LSPs to reach nodes in a remote domain acts as a labeled BGP PE and runs iBGP IPv4 unicast+label
sessions with its corresponding local RR.
• The core POP nodes in this model are labeled BGP Autonomous System Boundary Routers (ASBR),
and are referred to as Core Node ASBRs (CN-ASBR). They peer with iBGP-labeled-unicast sessions
with the centralized CN-RR within the core AS, and also peer with eBGP-labeled unicast sessions with
the neighboring aggregation ASBRs. The CN-ASBRs insert themselves into the data path to enable
inter-domain LSPs by setting NHS on all iBGP updates towards their local CN-RRs and eBGP updates
towards the neighboring aggregation ASBRs. Note that the CN-RR applies an egress filter towards the
CN-ASBRs in order to drop prefixes with the common RAN community, which eliminates unnecessary
prefixes from being redistributed.
• For mobile service transport, the MTGs residing in the core network are labeled BGP PEs, which connect
to the EPC gateways (SGW, PGW, and MME) in the MPC. The MTGs peer with iBGP-labeled unicast
sessions with the CN-RR, advertising loopbacks into iBGP-labeled unicast with the global MSE BGP
community, representing the MSE, and importing the global MSE and common RAN communities.
• For fixed wireline service transport, the network nodes providing FSE functions, such as PWHE or
H-VPLS, are labeled BGP PEs. These FSE nodes will peer with the closest RR in the network, usually in
the aggregation network, depending upon the deployment setting. The FSE nodes advertise loopbacks
into the iBGP-labeled unicast with the global FSE BGP community, representing the FSE, and import the
common FAN community and global FSE and IGW communities. The IGW community represents any
Internet Gateway node, providing internet peering functionality in the SP network.
For mobile service transport, the MTGs in the core network are capable of handling large scale and will learn all
BGP-labeled unicast prefixes since they need connectivity to all the CSGs in the entire network. Simple prefix
filtering based on BGP communities is performed on the CN-RRs in order to constrain IPv4+label routes from
remote access regions from proliferating into neighboring aggregation domains, where they are not needed.
The PANs learn only labeled BGP prefixes marked with the common RAN BGP community and the MSE BGP
community. This allows the PANs to enable inter-metro wireline services across the core, and also reflect the
MPC prefixes to their local access networks. Isolating the aggregation and access domain by preventing the
default redistribution enables the mobile access network to have limited route scale since the CSGs learn only
local IGP routes and labeled BGP prefixes marked with the MSE BGP community.
For fixed wireline service transport, the IGWs in the core network are capable of handling large scale and will
learn all BGP-labeled unicast prefixes since they need connectivity to all the FSEs (and possibly ANs) carrying
fixed services in the entire network. Any nodes providing service edge functionality are also capable of handling
large scale, and will learn the common FAN community for AN access, the FSE community for service transport
to other FSE nodes, and the IGW community for internet access. Again, using a separate IGP process for the
access enables the access network to have limited control plane scale, since the ANs only learn local IGP routes
and labeled BGP prefixes marked with the FSE BGP community or permitted via a dynamically-updated IP prefix
list.
Tech Tip
This option supports only mobile service transport because the filtering mechanisms
necessary for fixed wireline service deployment are not currently available in this
option. Wireline service support will be added for this option in a future release.
[Figure: Inter-domain transport for inter-AS design with redistribution into the access IGP - CSGs in RAN IGP processes redistributed at the PANs; AGN-ASBRs and CN-ASBRs exchanging eBGP IPv4+label across AS boundaries; AGN-RRs and CN-RR; MTG in the MPC; iBGP hierarchical LSPs stitched by per-domain LDP LSPs.]
This option follows the approach of enabling labeled BGP across the core and aggregation networks and extends
the Unified MPLS LSP to the access by redistribution between labeled BGP and the access domain IGP. All
nodes in the core and aggregation network that require inter-domain LSPs to reach nodes in remote domains act
as labeled BGP PEs and run iBGP IPv4 unicast+label sessions with their corresponding local RRs.
• The core POP nodes in this model are labeled BGP ASBRs, referred to as CN-ASBRs. They peer with
iBGP-labeled unicast sessions with the centralized CN-RR within the core AS, and peer with eBGP-
labeled unicast sessions with the neighboring aggregation ASBRs. The CN-ASBRs insert themselves
into the data path in order to enable inter-domain LSPs by setting NHS on all iBGP updates towards
their local CN-RRs and eBGP updates towards the neighboring aggregation ASBRs. Note that the
CN-RR applies an egress filter towards the CN-ASBRs in order to drop prefixes with the common RAN
community, which eliminates unnecessary prefixes from being redistributed.
• The MTGs residing in the core network are labeled BGP PEs, which connect to the EPC gateways (SGW,
PGW, and MME) in the MPC. The MTGs peer with iBGP-labeled unicast sessions with the CN-RR,
advertising loopbacks into iBGP-labeled unicast with the global MSE BGP community, representing the
MSE, and importing the global MSE and common RAN communities.
• The aggregation POP nodes in this model act as labeled BGP ASBRs in the aggregation AS, referred
to as AGN-ASBRs. They peer with iBGP-labeled unicast sessions with the centralized AGN-RR within
the aggregation AS, and peer with eBGP-labeled unicast sessions to the CN-ASBR in the core AS. The
AGN-ASBRs insert themselves into the data path to enable inter-domain LSPs by setting NHS on all
iBGP updates towards their local AGN-RRs and eBGP updates towards neighboring CN-ASBRs.
The MTGs in the core network are capable of handling large scale and will learn all BGP-labeled unicast
prefixes since they need connectivity to all the CSGs in the entire network. Simple prefix filtering based on BGP
communities is performed on the CN-RRs for constraining IPv4+label routes from remote RAN access regions
from proliferating into neighboring aggregation domains, where they are not needed. The PANs learn only labeled
BGP prefixes marked with the common RAN BGP community and the MSE BGP community. This allows the
PANs to enable inter-metro wireline services across the core, and also reflect the MPC prefixes to their local
access networks. Using a separate IGP process for the RAN access enables the mobile access network to have
limited control plane scale, since the CSGs only learn local IGP routes and labeled BGP prefixes marked with the
MSE BGP community.
[Figure: Inter-domain transport for the single-AS hierarchical labeled BGP core and aggregation design - TDM/packet microwave RAN access and IP/Ethernet fixed access aggregated at the PANs; CN-ABRs, CN-RR, and MTG in the core; iBGP IPv4+label hierarchical LSPs over per-domain LDP LSPs.]
RFC 3107 procedures based on iBGP IPv4 unicast+label are used as an inter-domain LDP to build hierarchical
LSPs across domains. All nodes in the core and aggregation network that require inter-domain LSPs act as
labeled BGP PEs and run iBGP-labeled unicast peering with designated RRs depending on their location in the
network.
• The core POP nodes are labeled BGP ABRs between the aggregation and core areas, referred to in
this model as CN-ABRs, and act as inline RRs for their local aggregation area-labeled BGP PEs. The
CN-ABRs peer with other CN-ABRs using iBGP-labeled unicast in either a full mesh configuration or
using centralized RRs over the core network. The centralized RR deployment option is shown in Figure
30. Note that the CN-RR applies an egress filter towards the CN-ABRs in order to drop prefixes with the
common RAN community, which eliminates unnecessary prefixes from being redistributed.
• For mobile service transport, the MTGs residing in the core network are labeled BGP PEs and peer
either directly with the closest CN-ABR RRs, in the case of a CN-ABR full-mesh configuration, or with
the centralized RRs, depending on the deployment setting. The MTGs advertise their loopbacks into
BGP-labeled unicast with a global MSE BGP community representing the MPC. They learn all the labeled
BGP prefixes from the common RAN BGP community and have reachability across the entire network.
• For fixed wireline service transport, the network nodes providing FSE functions, such as PWHE or
H-VPLS, are labeled BGP PEs. These FSE nodes will peer with the closest RR in the network, usually in
the aggregation network, depending upon the deployment setting. The FSE nodes advertise loopbacks
into the iBGP-labeled unicast with the global FSE BGP community, representing the FSE, and import the
common FAN community and global FSE and IGW communities. The IGW community represents any
Internet Gateway node, providing internet peering functionality in the SP network.
All MPLS services are enabled by the PANs in the aggregation network. These include:
• GSM Abis, ATM IuB, IP IuB, and IP S1/X2 interfaces for 2G/3G/LTE services for RAN access domains
with point-to-point connectivity over TDM or hybrid (TDM+Packet) microwave
• IP IuB, and IP S1/X2 interfaces for 3G/LTE services for RAN access domains with point-to-point or ring
topologies over fiber or packet microwave.
• Business Ethernet Line (E-Line) and E-LAN Layer 2 VPN (L2VPN) services and Layer 3 VPN (L3VPN)
services.
• Residential triple play services with Ethernet connectivity from the access nodes (FANs, PON OLTs, etc.)
to the PAN-SE nodes.
[Figure: Inter-domain transport for the inter-AS hierarchical labeled BGP core and aggregation design - TDM/packet microwave RAN access and IP/Ethernet fixed access at the PANs; AGN-ASBRs and CN-ASBRs exchanging eBGP IPv4+label at the AS boundaries; AGN-RRs, CN-RR, and MTG; iBGP hierarchical LSPs over per-domain LDP LSPs.]
RFC 3107 procedures based on iBGP IPv4 unicast+label are used as an inter-domain LDP to build hierarchical LSPs
across domains. All nodes in the core and aggregation network that require inter-domain LSPs act as labeled BGP
PEs and run iBGP-labeled unicast peering with designated RRs, depending on their location in the network.
• For mobile service backhaul, the MTGs residing in the core network are labeled BGP PEs and peer with
iBGP-labeled unicast sessions with the centralized CN-RR. The MTGs advertise their loopbacks into
iBGP-labeled unicast with the global MSE BGP community representing the MSE, and then import the
global MSE and common RAN communities, providing reachability across the entire network down to the
PANs at the edge of the aggregation network.
• The core POP nodes act as labeled BGP CN-ASBRs in the core AS. They peer with iBGP-labeled
unicast sessions with the CN-RR within the core AS, and peer with eBGP-labeled unicast sessions with
the neighboring aggregation ASBRs. The CN-ASBRs insert themselves into the data path to enable
inter-domain LSPs by setting NHS on all BGP updates towards their local CN-RRs and neighboring
aggregation ASBRs. Note that the CN-RR applies an egress filter towards the CN-ASBRs in order to
drop prefixes with the common RAN community, which eliminates unnecessary prefixes from being
redistributed.
• The aggregation POP nodes act as labeled BGP AGN-ASBRs in the aggregation AS. They peer with
iBGP-labeled unicast sessions with the centralized AGN-RR within the aggregation AS, and peer
with eBGP-labeled unicast sessions to the CN-ASBR in the neighboring AS. The AGN-ASBRs insert
themselves into the data path to enable inter-domain LSPs by setting NHS on all BGP updates towards
their local AGN-RRs and neighboring core ASBRs.
• All PANs in the aggregation networks that require inter-domain LSPs to either reach nodes in another
remote aggregation network, or that need to cross the core network to reach the MTGs, act as labeled
BGP PEs, and peer with iBGP-labeled unicast sessions to the local AGN-RR. The PANs advertise their
loopbacks into BGP-labeled unicast with a common BGP community that represents any services
configured locally on the PAN or on the attached access network, such as the RAN or FAN community.
The PANs learn labeled BGP prefixes marked with these common BGP communities as necessary and
also any required service communities, such as those for FSE, MSE, or IGW nodes.
[Figure: Collapsed core and aggregation - single AS with one integrated core+aggregation IP/MPLS domain (CNs and AGNs); access IP/MPLS domains with FANs and CSGs subtending from the AGNs; LDP LSPs within each domain.]
From a multi-area IGP organization perspective, the integrated core+aggregation networks and the access
networks are segmented into different IGP areas or levels, where the integrated core+aggregation network is
either an IS-IS Level 2 or an OSPF backbone area, and access networks subtending from the AGNs are in IS-IS
Level 1 or OSPF non-backbone areas. No redistribution occurs between the integrated core+aggregation and
access IGP levels/areas, thereby containing the route scale within each domain. Partitioning these network
layers into such independent and isolated IGP domains helps reduce the size of routing and forwarding tables on
individual routers in these domains, which, in turn, leads to better stability and faster convergence within each of
these domains.
LDP is used for label distribution to build intra-domain LSPs within each independent IGP domain. Inter-domain
reachability is enabled with hierarchical LSPs using BGP-labeled unicast as per RFC 3107 procedures, where
iBGP is used to distribute labels in addition to remote prefixes, and LDP is used to reach the labeled BGP
next-hop.
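A minimal sketch of such an iBGP-labeled unicast peering, assuming IOS-XR conventions and illustrative addresses:

```
! AGN: advertise the local loopback into iBGP labeled unicast (RFC 3107)
router bgp 100
 address-family ipv4 unicast
  network 10.10.10.1/32               ! local loopback
  allocate-label all                  ! attach MPLS labels to BGP routes
 !
 neighbor 10.0.0.100                  ! route reflector (illustrative)
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast
```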
[Figure: Labeled BGP transport for the collapsed core+aggregation design - AGNs as inline RRs for their access clients, centralized RR, MTG, and FANs; iBGP hierarchical LSP over per-domain LDP LSPs.]
The collapsed core+aggregation and access networks are integrated with labeled BGP LSPs. Any node in the
network that requires inter-domain LSPs to reach nodes in a remote domain acts as a labeled BGP PE and runs
iBGP IPv4 unicast+label sessions with its corresponding local RR.
• For mobile service backhaul, the MTGs residing in the core network are labeled BGP PEs. They connect
to the EPC gateways (SGW, PGW, and MME) in the MPC. The MTGs peer with iBGP-labeled unicast
sessions with the CN-RR, and advertise their loopbacks into iBGP-labeled unicast with a common MSE
BGP community representing the MPC as shown in Figure 34.
• For fixed wireline service transport, the network nodes providing FSE functions, such as PWHE
or H-VPLS, are labeled BGP PEs. These FSE nodes will peer with the CN-RR in the network. The
FSE nodes advertise loopbacks into the iBGP-labeled unicast with the global FSE BGP community,
representing the FSE, and import the global FSE and IGW communities.
• The AGNs act as inline-RRs for their local access network clients. Each access network subtending from
a pair of AGNs is part of a unique IS-IS Level 1 domain. All access rings and hub-and-spoke chains subtending
from the same pair of AGNs are part of the same IS-IS Level 1 domain, where the ANs are IS-IS L1 nodes and the
AGNs are L1/L2 nodes. Since routes between the integrated core+aggregation IS-IS Level 2 (or OSPF
backbone) and access IS-IS Level 1 (or OSPF non- backbone area) are not redistributed, the AGNs have
to reflect the labeled BGP prefixes with the next-hop changed to self in order to insert themselves into
the data path to enable the inter-domain LSP switching and allow the two IGP routing domains to remain
isolated. This AGN NHS function is symmetrically applied by the AGNs towards its clients in its local
access domain, and the higher level CN-RR in the integrated core+aggregation domain.
• The nodes in the access networks are labeled BGP PEs. Nodes carrying mobile services are referred
to as RAN nodes, and nodes carrying fixed wireline services are referred to as FAN nodes. They peer
with iBGP-labeled unicast sessions with their local AGN inline-RRs. The ANs advertise their loopbacks
into BGP-labeled unicast with a common BGP community that represents the local access community:
RAN for mobile services and FAN for fixed wireline services. For mobile service transport, labeled BGP
prefixes marked with the MSE BGP community are learned for reachability to the MPC, and the adjacent
access network BGP communities if inter-access X2 connectivity is desired. For business wireline
service transport, the ANs selectively learn the required FSE and remote FAN prefixes for configured
VPWS services.
For fixed wireline service transport, the IGWs in the core network are capable of handling large scale and will
learn all BGP-labeled unicast prefixes since they need connectivity to all the FSEs (and possibly ANs) carrying
fixed services in the entire network. Any nodes providing service edge functionality are also capable of handling
large scale, and will learn the common FAN community for AN access, the FSE community for service transport
to other FSE nodes, and the IGW community for internet access. Again, using a separate IGP process for the
access enables the access network to have limited control plane scale, since the ANs only learn local IGP routes
and labeled BGP prefixes marked with the FSE BGP community or permitted via a dynamically-updated IP prefix
list.
[Figure: Flat LDP core and aggregation - single AS with one core+aggregation IP/MPLS domain containing CNs, AGNs, PANs, and Mobile Transport GWs; IP/Ethernet fixed access (FAN) and CSG mobile access; a single end-to-end LDP LSP.]
This model assumes that the core and aggregation networks form a single IGP/LDP domain consisting of fewer
than 1000 nodes. Since there is no segmentation between network layers, a flat LDP LSP provides end-to-end
reachability across the network. The mobile access is based on TDM and packet microwave links aggregated in
AGNs that provide TDM/ATM/Ethernet VPWS and MPLS VPN transport.
In Ethernet-based networks, traffic aggregation and isolation are achieved by means of VLAN tagging, thus
promoting the natural development of two VLAN-based models for the deployment of subscriber aggregation:
• 1:1 Aggregation: indicating a one-to-one mapping between the subscriber and the VLAN.
• N:1 Aggregation: indicating a many-to-one mapping between subscribers and VLANs, with subscribers
that may be located in the same or different AN.
These aggregation options, once inherent to a Layer-2 Ethernet access network, have been preserved over the
MPLS-based access in order to provide continuity in how subscriber aggregation is modeled, while allowing the
access network to evolve toward more robust transport technologies.
In addition, a non-trunk interface toward the residential CPE enables all services for all subscribers on a particular
AN to also be mapped to that single shared VLAN. This may include services that are delivered by using
either a unicast or a multicast transport. Relative priority across services is preserved by properly setting the
differentiated services code point (DSCP) field in an IP packet header or the 802.1p class of service (CoS) values
carried in an Ethernet priority-tagged frame.
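The two aggregation models can be sketched as subinterfaces on an AN-facing port, in IOS-XR l2transport style; the interface names and VLAN IDs are illustrative assumptions.

```
! 1:1 aggregation - one dedicated VLAN per subscriber
interface GigabitEthernet0/0/0/2.101 l2transport
 encapsulation dot1q 101
!
! N:1 aggregation - one shared service VLAN for all subscribers on the AN;
! relative priority across services is preserved via DSCP or 802.1p CoS
interface GigabitEthernet0/0/0/2.400 l2transport
 encapsulation dot1q 400
```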
Figure 36 and Figure 37 show the N:1 aggregation model deployed with Ethernet and MPLS access, respectively.
[Figure 36: N:1 aggregation with Ethernet/PON access - routed CPE with non-trunk UNI and 802.1q, carrying STB/HSI/VoIP/VoD/TV; PON/FTTH access to PAN or AGN + SE; PPP, IP or MPLS VPN, and IP multicast (PIM-SSM, mLDP or mVPN) across the IP/MPLS aggregation and core.]
Routed CPE
Non
Trunk UNI
IP
IP S-VLAN
POP
BNG I/F
STB HSI/VoIP/ EoMPLS PW PPP
802.1q VoD/TV
IP IP or MPLS VPN
IP Multicast, mLDP or mVPN
Routed CPE
STB
VDSL,
ADSL2+
Ethernet IP/MPLS IP/MPLS IP/MPLS
293590
CPE Ethernet DSLAM MPLS Border Node PAN or AGN + SE Aggregation Core
The CPE device is the demarcation point between the home and the SP network. While a CPE can be configured
to operate in either routed or bridged mode, routed mode is widely preferred for residential wireline applications,
allowing the entire household to be presented as a single entity to the provider for authentication and accounting
purposes.
While both IPv4 and IPv6 address families are supported within the household, the second release of the Cisco
FMC system introduces support for an IPv6-only access network for unicast services.
For IPv6-based access, DHCPv6 prefix delegation (PD) is used at the CPE for addressing end devices. DHCPv6 PD at the CPE differs from a local DHCP server function in that prefixes assigned on the CPE LAN interfaces are obtained directly from the operator. The abundance of IPv6 prefixes makes the ability to manage the subscriber household address space in a centralized manner attractive to providers, who gain better visibility and influence over address assignment within the household without the cost of running expensive routing protocols or managing static addresses to guarantee proper downstream forwarding. It also helps improve CPE performance by removing the need for expensive inline packet manipulation such as NAT.
For IPv4-based access, household end devices obtain private addresses from a local DHCP server function
enabled on the CPE. Among the numerous NAT464 technologies, MAP-T is then leveraged to map those IPv4 end devices onto a single CPE-wide IPv4 address first (MAP-T Port Address Translation 44 [PAT44] stage) and then to a CPE-wide IPv6 address (MAP-T NAT46 stage).
The CPE-wide IPv4 and IPv6 addresses are created from a combination of information, derived as follows:
• a delegated prefix assigned to the CPE via regular DHCPv6 PD procedures
• MAP-T Rules received in MAP-T specific DHCPv6 options
While the CPE-wide IPv6 address is unique throughout, the CPE-wide IPv4 address can be shared among multiple CPEs, requiring unique Layer-4 source ports to be assigned and used for proper routing of return traffic in the IPv4 core domain. The IPv4 address sharing ratio and the number of unique ports per CPE are algorithmically tied and affect each other.
For example, support for roughly 64,000 uniquely routable CPE devices within the MAP-T domain can be achieved with a single /24 IPv4 subnet by setting the CPE address sharing ratio to 1:256 and limiting the number of unique Layer-4 ports per CPE to 256.
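The arithmetic behind this example can be checked directly. The sketch below is illustrative only and is simplified: it ignores the reserved low port range that real MAP-T deployments typically exclude, which is why the usable CPE count is slightly below the raw product.

```python
def mapt_capacity(prefix_len: int, sharing_ratio: int) -> tuple:
    """Routable CPE count and Layer-4 ports per CPE for an IPv4 pool
    of the given prefix length shared at the given ratio (simplified:
    the reserved well-known port range is not excluded)."""
    addresses = 2 ** (32 - prefix_len)      # IPv4 addresses in the pool
    ports_per_cpe = 65536 // sharing_ratio  # the 16-bit port space split
    return addresses * sharing_ratio, ports_per_cpe

cpes, ports = mapt_capacity(24, 256)
# a /24 shared 1:256 yields 65,536 CPE slots (~64,000 usable), 256 ports each
assert (cpes, ports) == (65536, 256)
```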
Assignment of a non-temporary IPv6 address to the CPE WAN interface is achieved via DHCPv6 for both PPPoE
and IPoE subscribers.
The CPE also implements IGMP querier and proxy functions for multicast services. By acting as an IGMP querier
toward household appliances, the CPE is able to maintain an updated view of the multicast membership status
for the customer’s end devices, while the proxy function allows it to report that information as an aggregate
toward the AN and ultimately the BNG.
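The querier/proxy behavior described above amounts to set aggregation: the CPE tracks which household devices joined which groups, and reports only the union of groups upstream. A toy sketch (hypothetical model, not CPE firmware):

```python
class IgmpProxy:
    """Toy model of CPE IGMP querier/proxy state: track per-device
    memberships learned from query responses, report the aggregate."""

    def __init__(self):
        self.members = {}  # multicast group -> set of household devices

    def join(self, device: str, group: str) -> None:
        self.members.setdefault(group, set()).add(device)

    def leave(self, device: str, group: str) -> None:
        devices = self.members.get(group, set())
        devices.discard(device)
        if not devices:
            self.members.pop(group, None)  # last member gone: prune group

    def upstream_report(self) -> set:
        """Groups reported toward the AN/BNG: one entry per group,
        regardless of how many household devices joined it."""
        return set(self.members)

proxy = IgmpProxy()
proxy.join("stb-1", "232.1.1.1")
proxy.join("stb-2", "232.1.1.1")   # second receiver, same group
proxy.leave("stb-1", "232.1.1.1")
assert proxy.upstream_report() == {"232.1.1.1"}  # one member remains
```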
Although the Cisco FMC system delivers multicast services to subscribers via IPv4, the CPE does not require a dedicated IPv4 address to be assigned to the WAN interface. Depending on the CPE implementation, proxied IGMP membership reports can be sent from an all-zeros address, from the shared MAP-T IPv4 address, or from a common, shared IPv4 WAN address statically or dynamically (via Technical Report 069 [TR-069]) provisioned by the operator. This ensures that the address-saving goal promoted by an IPv6-only access network is still met.
The AN is responsible for aggregating all CPEs in the same local area and implements a number of critical
functions, such as line identification, security, efficient multicast transport, and QoS.
For IPoE subscribers, line identification is based on DHCPv4 snooping and Lightweight DHCPv6 Relay Agent (LDRA) functions that insert location-specific options in DHCP messages forwarded to servers. These options encompass Option 82 with its Remote and Circuit ID for IPv4, and the corresponding Options 37 and 18 for IPv6. Insertion of line information is essential not only as a way of tracking the subscriber's location, but also as a way of uniquely identifying the subscriber with the operator's operation support system (OSS) for the deployment of transparent authorization mechanisms.
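The two DHCPv6 options share the simple TLV layout of all DHCPv6 options: a 2-byte option code, a 2-byte length, then the data; the Remote-ID additionally carries a 4-byte IANA enterprise number before the identifier. The sketch below builds the wire format; the line-identifier strings are hypothetical examples, not values prescribed by this guide.

```python
import struct

OPTION_INTERFACE_ID = 18  # RFC 3315: opaque interface identifier
OPTION_REMOTE_ID = 37     # RFC 4649: enterprise number + remote-id

def interface_id_option(line_id: bytes) -> bytes:
    """DHCPv6 option TLV: 2-byte code, 2-byte length, opaque value."""
    return struct.pack("!HH", OPTION_INTERFACE_ID, len(line_id)) + line_id

def remote_id_option(enterprise: int, remote_id: bytes) -> bytes:
    """Remote-ID carries a 4-byte IANA enterprise number before the ID."""
    data = struct.pack("!I", enterprise) + remote_id
    return struct.pack("!HH", OPTION_REMOTE_ID, len(data)) + data

# hypothetical line identifiers, mirroring DHCPv4 Option 82 sub-options
opt18 = interface_id_option(b"dslam-7/slot-2/port-14")
opt37 = remote_id_option(9, b"subscriber-0042")  # 9 = Cisco's enterprise number
assert opt18[:4] == struct.pack("!HH", 18, 22)
```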
While the first release of the Cisco FMC system focused on a dual-stack access, bringing relevance to the line identifiers of both address families, the second release of the FMC system focuses only on an IPv6-only access network. The Access Node, therefore, is only required to implement LDRA functions.
For PPPoE, subscriber line identification is carried in the PPPoE Intermediate Agent Line ID tag inserted by the
AN in the PPPoE header.
Efficient multicast forwarding is achieved by using an N:1 VLAN-based transport that delegates multicast replication toward subscribers to the AN, which runs IGMPv2/v3 snooping and proxy functions.
For an Ethernet-based access and depending on the capabilities of the AN, the multicast VLAN can co-reside with the unicast N:1 VLAN or be dedicated, requiring Multicast VLAN Registration (MVR) functions to also be implemented, given the non-trunk delineation of the subscriber's UNI.
The Cisco FMC system implements the latter behavior, deemed applicable to a wider range of devices; the N:1 VLAN aggregation model therefore appears slightly modified, as shown in the following figure.
[Figure: N:1 aggregation with MVR and Ethernet/PON access — routed CPE with non-trunk UNI, the residential 802.1q N:1 VLAN carrying HSI/VoD/VoIP (PPP or IP) to the BNG access interface on the PAN or AGN + SE, a dedicated MVR TV VLAN for multicast, and IP or MPLS VPN plus IP multicast (mLDP or mVPN, PIM-SSM) across the IP/MPLS aggregation and core]
For an MPLS-based access and N:1 aggregation, MVR functionality at the Access Node is not needed. Multicast traffic, both data and control, can be separated from unicast at the Access Switch by inserting an L3 interface in the bridging domain and running multicast routing protocols over it. This is discussed further in the next section.
In the MPLS based access network, the DSLAM connects directly to an Access Switch in charge of providing
gateway functions into the MPLS domain.
The Access Switch is responsible for establishing Ethernet over MPLS pseudowires that emulate Layer-2
connectivity between subscribers and the BNG over the routed network.
To provision for proper H-QoS modeling at the BNG, the Access Switch establishes an Ethernet pseudowire for
each residential N:1 VLAN and Access Node. These pseudowires are dedicated to residential services and are in
addition to those in use by other service categories, such as business L2VPNs. This allows for the N:1 VLAN to
be popped prior to traffic entering the pseudowire, with the advantage of reducing packet sizes across the access
network.
For Multicast forwarding, the Access Switch behaves as the last hop router in the multicast distribution tree built
over the routed IP access domain. The N:1 VLAN is terminated on a Layer-3 interface—switch virtual interface
(SVI)—that acts as the IGMP querier toward the receivers. The remainder of the multicast delivery tree is built by
using PIM Source Specific Multicast running on the Access Switch network-facing interfaces, as shown in the
following figure.
[Figure: N:1 aggregation with MPLS access and Access Switch multicast termination — routed CPE behind a VDSL/ADSL2+ Ethernet DSLAM, an EoMPLS pseudowire carrying HSI/VoIP/VoD in the S-VLAN from the MPLS border node to the BNG on the PAN or AGN + SE, with TV multicast terminated at the border node and carried over IP multicast (mLDP or mVPN) toward the core]
The Cisco FMC system has selected this model to provide an alternate architectural approach to MVR running at the Access Node, when such a function is not available or not desired because of the operational complexity it adds. The MVR-based alternative is implemented for 1:1 VLAN aggregation and is discussed in the “1:1 VLAN Aggregation” section of this guide.
The Broadband Network Gateway (BNG) node is the network device that enables subscriber management
functions for the residential PPPoE and IPoE subscribers.
A single 802.1Q interface matching the shared N:1 VLAN aggregates all subscribers connected to the same AN device. Such an interface could be the L3 termination of a bridged network, in the case of L2 Ethernet access, or of a pseudowire, in the case of MPLS access.
BNG capabilities enabled on that interface allow subscribers to be tracked and managed individually, and
individual constructs, known as sessions, to be created.
The same device implementing BNG functions also operates as a MAP-T border router, reconstructing the original IPv4 source and destination addresses from the 4-to-6 translation performed by the CPE.
Multicast forwarding toward the aggregation/core of the network is achieved over MLDP-signaled multicast LSPs and follows similar traffic isolation models as for unicast services. Depending on the operator's preference, the subscriber's traffic can be routed within the global routing domain, or isolated within the same L3VPN used for residential unicast services or a different one.
Multicast forwarding toward subscribers happens over native IPv4 multicast and uses a dedicated N:1 VLAN
transport to simplify forwarding at the AN. Regular IGMPv2/v3 functions are performed at the BNG.
[Figure: multicast QoS marking — IP TV on the shared multicast VLAN carried with PHB AF, DSCP 32, CoS 4. PHB = per-hop behavior; BE = best effort; AF = assured forwarding; EF = expedited forwarding]
Figure 41 and Figure 42 show the 1:1 aggregation model deployed with an Ethernet and an MPLS access, respectively.
[Figure 41: 1:1 aggregation with Ethernet/PON access — routed CPE with non-trunk UNI, per-subscriber 802.1q or QinQ VLANs carrying HSI/VoD/VoIP (PPP or IP) to the BNG access interfaces on the PAN or AGN + SE, a dedicated MVR TV VLAN for multicast, and IP or MPLS VPN plus IP multicast (mLDP or mVPN, PIM-SSM) across the aggregation and core]
[Figure 42: 1:1 aggregation with MPLS access — routed CPE behind a VDSL/ADSL2+ Ethernet DSLAM, pseudowires from the MPLS border node to the BNG on the PAN or AGN + SE, with the MVR TV VLAN terminated at the border node and PIM-SSM (v4) toward the core]
The CPE device is the demarcation point between the home and the SP network. While a CPE can be configured
to operate in either routed or bridged mode, routed mode is widely preferred because it allows the entire
household to be presented as a single entity to the provider for authentication and accounting purposes.
While both IPv4 and IPv6 address families are supported within the household, the second release of the Cisco
FMC system introduces support for an IPv6-only access network for unicast services.
For IPv6-based access, DHCPv6 prefix delegation (PD) is used at the CPE for addressing end devices. DHCPv6 PD at the CPE differs from a local DHCP server function in that prefixes assigned on the CPE LAN interfaces are obtained directly from the operator. The abundance of IPv6 prefixes makes the ability to manage the subscriber household address space in a centralized manner attractive to providers, who gain better visibility and influence over address assignment within the household without the cost of running expensive routing protocols or managing static addresses to guarantee proper downstream forwarding. It also helps improve CPE performance by removing the need for expensive inline packet manipulation such as NAT.
For IPv4-based access, household end devices obtain private addresses from a local DHCP server function enabled on the CPE. Among the numerous NAT464 technologies, MAP-T is then leveraged to map those IPv4 end devices onto a single CPE-wide IPv4 address first (MAP-T PAT44 stage) and then to a CPE-wide IPv6 address (MAP-T NAT46 stage).
The CPE-wide IPv4 and IPv6 addresses are created from a combination of information, derived as follows:
• a delegated prefix assigned to the CPE via regular DHCPv6 PD procedures
• MAP-T Rules received in MAP-T specific DHCPv6 options
While the CPE-wide IPv6 address is unique throughout, the CPE-wide IPv4 address can be shared among multiple CPEs, requiring unique Layer-4 source ports to be assigned and used for proper routing of return traffic in the IPv4 domain. The IPv4 address sharing ratio and the number of unique ports per CPE are algorithmically tied and affect each other.
For example, support for roughly 64,000 uniquely routable CPE devices within the MAP-T domain can be achieved with a single /24 IPv4 subnet by setting the sharing ratio to 1:256 and limiting the number of unique Layer-4 ports per CPE to 256.
Assignment of a non-temporary IPv6 address to the CPE WAN interface is achieved via DHCPv6 for both PPPoE
and IPoE subscribers.
The CPE also implements IGMP querier and proxy functions for multicast services. By acting as an IGMP querier
toward household appliances, the CPE is able to maintain an updated view of the multicast membership status
for the customer’s end devices, while the proxy function allows it to report that information as an aggregate to
the BNG. Although the Cisco FMC system delivers multicast services to subscribers via IPv4, the CPE does not require a dedicated IPv4 address to be assigned to the WAN interface. Depending on the CPE implementation, proxied IGMP membership reports can be sent from an all-zeros address, from the shared MAP-T IPv4 address, or from a common, shared IPv4 WAN address statically or dynamically (via TR-069) provisioned by the operator. This ensures that the address-saving goal promoted by an IPv6-only access network is still met.
The AN is responsible for aggregating all CPEs in the same local area and implements a number of critical
functions, such as line identification, security, efficient multicast transport, and QoS.
For IPoE subscribers, line identification is based on DHCPv4 snooping and Lightweight DHCPv6 Relay Agent (LDRA) functions that insert location-specific options in DHCP messages forwarded to servers. These options encompass Option 82 with its Remote and Circuit ID for IPv4, and the corresponding Options 37 and 18 for IPv6. Insertion of line information is essential not only as a way of tracking the subscriber's location, but also as a way of uniquely identifying the subscriber with the operator's OSS for the deployment of transparent authorization mechanisms.
While the first release of the Cisco FMC system focused on a dual-stack access, bringing relevance to the line identifiers of both address families, the second release of the FMC system focuses only on an IPv6-only access network. The Access Node, therefore, is required only to implement LDRA functions.
For PPPoE, subscriber line identification is carried in the PPPoE Intermediate Agent Line ID tag inserted by the
AN in the PPPoE header.
Efficient multicast forwarding is achieved by using a dedicated N:1 VLAN-based transport that delegates multicast replication toward subscribers to the AN. The AN implements IGMPv2/v3 snooping and proxy functions for membership tracking and reporting, and it runs Multicast VLAN Registration (MVR) in order to transition multicast forwarding into the dedicated multicast VLAN.
Access Switch
In the MPLS based access network, the DSLAM connects directly to an Access Switch in charge of providing
gateway functions into the MPLS domain.
The Access Switch is responsible for establishing Ethernet over MPLS pseudowires that emulate Layer-2
connectivity between subscribers and the BNG over the routed network.
To provision for proper hierarchical quality of service (H-QoS) modeling at the BNG, the Access Switch
establishes an Ethernet pseudowire for each residential Access Node. These pseudowires are dedicated to
residential services and are in addition to those in use by other service categories, such as business L2 VPNs.
This allows for the service provider VLAN (S-VLAN) to be popped prior to traffic entering the pseudowire, with
the advantage of reducing packet sizes across the access network. The customer VLANs (C-VLANs) must be
preserved to be able to rebuild the 1:1 VLAN model at the BNG side of the pseudowire.
For Multicast forwarding, the Access Switch behaves as the last hop router in the multicast distribution tree
built over the routed IP access domain. A switch virtual interface (SVI) terminates the dedicated multicast VLAN that originates from the MVR functions performed by the Access Node, and it acts as the IGMP querier toward the receivers. The use of an SVI allows for the provisioning of a single routed entity across all Access Nodes, thus simplifying IPv4 address planning and operations, but it mandates that the IGMP snooping function be enabled to control flooding of multicast traffic in the Layer-2 domain.
The remainder of the multicast delivery tree is built using PIM Source Specific Multicast running on the Access Switch network-facing interfaces, as shown in Figure 42.
The BNG node is the network device that enables subscriber management functions for the residential PPPoE
and IPoE subscribers.
In the Cisco FMC system, a single 802.1Q interface aggregates all subscribers connected to the same AN device regardless of their VLAN tagging, while BNG capabilities enabled on that interface still allow subscribers to be tracked and managed individually and individual constructs, known as sessions, to be created. Such an interface could be the L3 termination of a bridged network, in the case of L2 Ethernet access, or of a pseudowire, in the case of MPLS access.
The BNG authenticates and authorizes subscribers’ sessions and provides accounting per session and
service via RADIUS AAA requests. The BNG enables dynamic policy control with RADIUS CoA functionality on
subscriber sessions. QoS for residential services is guaranteed at the subscriber level as well as at the aggregate
level for all residential subscribers connected to the same OLT.
Multicast forwarding toward the aggregation/core of the network is achieved over MLDP-signaled multicast LSPs and follows similar traffic isolation models as for unicast services. Depending on the operator's preference, the subscriber's traffic can be routed within the global routing domain, or isolated within the same L3VPN used for residential unicast services or a different one.
Multicast forwarding toward subscribers happens over native IPv4 multicast and uses a dedicated N:1 VLAN transport to prevent per-subscriber replication at the BNG and to minimize the amount of replicated content in the access network. Regular IGMPv2/v3 functions are performed at the BNG.
[Figure: multicast QoS marking — IP TV on the shared multicast VLAN carried with PHB AF, DSCP 32, CoS 4. PHB = per-hop behavior; BE = best effort; AF = assured forwarding; EF = expedited forwarding]
While the first release of the Cisco FMC system encompassed a dual-stack access network, the second release of the FMC system targets operators looking to consolidate unicast services over an IPv6-only transport, while retaining support for both address families at the residential service layer. This empowers operators to take advantage of the benefits offered by IPv6 today, while also allowing IPv4-based services to be phased out over a longer period of time as IPv6 content and applications are developed. To meet the need for coexistence of both address families within the subscriber household and at the service layer, the FMC system implements MAP-T.
MAP-T translates the private IPv4 addresses assigned to household appliances into a single, global IPv6 address
that represents the subscriber’s CPE.
Identity sets differ based on the access model, 1:1 or N:1, and the subscriber access protocol, Native IPoE or
PPPoE. IPv6 is inserted within existing identities and authorization methods, and across all service models in
order to minimize disruption.
In the N:1 VLAN model, the IPoE session identity is associated with the access line. Line ID information is inserted by the AN in DHCPv6 Options 18 and 37, which correspond to the DHCPv4 Option 82 Circuit and Remote ID. To maintain continuity in the subscriber's identity while migrating to an IPv6-only access, it is expected that the AN inserts in the DHCPv6 options the same line identifiers it was using for DHCPv4.
For PPPoE subscribers, the identity will be based on Point-to-Point Protocol (PPP) Challenge Handshake
Authentication Protocol (CHAP) username and password, or alternatively on the access line identifier carried in
PPPoE Intermediate Agent (IA) tags from the AN.
In the 1:1 VLAN model, both IPoE and PPPoE session identities are associated with the access line as identified
by the NAS-Port-ID at the BNG.
For residential wireline access, address assignment happens independently for devices within the household and
for the residential CPE.
[Figure: address assignment for IPoE and PPPoE subscribers — the BNG (also the MAP-T border router) acts as a DHCPv6 proxy with PD toward the local DHCPv6 server and RADIUS AAA, carrying NA and PD to the same subscriber in a single session; the DHCPv4 (MAP-T) address is PD-based, with AAA pool definition on the BNG]
Within the household, address assignment for dual-stack appliances happens according to the rules and
methods defined for IPv4 and IPv6 address families:
• For the IPv4 address family, the residential CPE operates as a DHCPv4 server, allocating IPv4 private
addresses from a locally defined pool.
• For the IPv6 address family, the CPE sends periodic Neighbor Discovery Router Advertisements (ND
RAs) with the Managed Flag set to achieve the following:
◦ Advertise the IPv6 prefix assigned to the LAN segment
◦ Announce itself as the default router on the segment
◦ Solicit the client to get the remaining setup information, including the Domain Name System (DNS) IPv6 address, via DHCPv6
The CPE therefore operates as a DHCPv6 Stateless Server importing relevant information from a
DHCPv6 prefix delegation (PD) exchange.
On the network side, the residential CPE acquires all of its IPv6 addressing information via DHCPv6 for both IPoE
and PPPoE access protocols. Unlike IPv4, IPv6 support for PPPoE subscribers only allows for the negotiation
of link local addressing (interface-id) during the IPv6 Control Protocol (IPv6CP) phase, while global unicast
addressing builds upon the same methods used for IPoE subscribers.
In particular, the system uses DHCPv6 non-temporary addresses for the CPE WAN interface, and DHCPv6 prefix delegation (PD) for the CPE LAN and MAP-T functions. Delegated prefixes are used for IPv6 addressing of CPE LAN interfaces and household appliances through router advertisements, as well as by MAP-T in order to build a unique IPv6 address for its address family translation functions.
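The composition of that MAP IPv6 address can be sketched from the standard MAP interface-identifier format (RFC 7597 section 6: 16 zero bits, the 32-bit IPv4 address, then the 16-bit PSID). The sketch below is a simplified illustration, not the FMC implementation: it assumes a /64 end-user prefix, and the prefix, IPv4 address, and PSID values are hypothetical.

```python
import ipaddress

def map_interface_id(ipv4: str, psid: int) -> int:
    """64-bit MAP interface identifier (RFC 7597 section 6):
    16 zero bits, the 32-bit IPv4 address, then the 16-bit PSID."""
    return (int(ipaddress.IPv4Address(ipv4)) << 16) | psid

def map_ipv6_address(end_user_prefix: str, ipv4: str, psid: int) -> str:
    """Compose the CPE-wide MAP-T IPv6 address from the delegated
    (end-user) prefix plus the shared IPv4 address and PSID.
    Simplified: only handles a /64 end-user prefix, so the interface
    identifier fills the remaining 64 bits exactly."""
    net = ipaddress.IPv6Network(end_user_prefix)
    assert net.prefixlen == 64, "sketch only handles /64 end-user prefixes"
    value = int(net.network_address) | map_interface_id(ipv4, psid)
    return str(ipaddress.IPv6Address(value))

# hypothetical values: shared IPv4 203.0.113.1, PSID 0x2a from the MAP rule
addr = map_ipv6_address("2001:db8:42:1::/64", "203.0.113.1", 0x2A)
assert addr == "2001:db8:42:1:0:cb00:7101:2a"
```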
An obvious benefit of DHCPv6 PD is that it allows for centralized management of address assignment functions
of IPv6 hosts within the subscriber household, which provides operators with better visibility over traffic from
different household appliances and removes the need for expensive packet manipulation at the CPE.
In these new offerings, the residential CPE role in the operator’s network is two-fold. It provides private access
to the subscriber’s household via wired connections or secured Wi-Fi SSIDs (authenticated and encrypted via
WPA/WPA2), and hotspot-like public access to roaming users through a dedicated Wi-Fi interface, based on a
shared, open SSID.
To accomplish this on a single transport infrastructure, the system implements VLAN-based isolation in the
access network, and MPLS service virtualization in the aggregation and core network.
Residential wireline services can still be offered by using a 1:1 or an N:1 aggregation model, according to the operator's preference, and VLAN-based traffic separation originates from a trunk UNI at the residential CPE. Such separation allows for easy prioritization as well as different authentication and authorization procedures for the two access types.
To account for the additional bandwidth and scale requirements brought by the Wi-Fi access overlay, the Cisco
FMC system promotes such unified deployments with Fiber to the Home (FTTH)/PON access and distributed
BNG. The following figure shows such models, in the case of 1:1 VLAN aggregation for wireline subscribers.
[Figure: unified wireline and community Wi-Fi with 1:1 VLAN aggregation — a routed/bridged CPE with trunk UNI carries per-subscriber QinQ VLANs for residential HSI/VoD/VoIP (PPP or IP) and a shared Wi-Fi VLAN, both terminating on BNG access interfaces across the Ethernet/PON access and the IP/MPLS aggregation and core]
The remainder of this section discusses the differences and additions to the traditional residential access.
In the unified wireline and community Wi-Fi architecture, the CPE device assumes the dual role of demarcation
point between the SP network and the home for fixed access, and the public access area for wireless access.
In the home, appliances attach to the CPE via wired connections or associate to a secure wireless SSID that
ensures authentication and encryption of household’s traffic. Such traffic is then routed at the CPE UNI and
carried over the residential 1:1 or N:1 VLAN.
In the public access area, roaming Wi-Fi handsets associate to the residential CPE via a well-known, shared, open SSID provided by the operator. The resulting data traffic is bridged over a dedicated Wi-Fi VLAN into the operator's network, which ensures handsets can still be individually tracked via their MAC and IP addresses. The use of different VLANs for residential and community Wi-Fi traffic implies that the CPE UNI becomes a trunk.
While both IPv4 and IPv6 address families are supported within the household for both PPPoE and IPoE access
protocols, Wi-Fi access happens over IPv4 IPoE only, in line with the capabilities of existing handsets.
DHCPv4 is used between the subscriber end device and the BNG node for IP address allocation. This exchange
is bridged by the CPE transparently.
The AN is responsible for aggregating all CPEs in the same local area and implements a number of critical
functions, such as line identification, security, and QoS.
For Wi-Fi subscribers, line identification is based on DHCPv4 snooping functions that insert location-specific information in DHCP messages forwarded to servers. Insertion of line information for community Wi-Fi access is essential for monitoring the subscriber's access locations for tracking purposes.
The AN may implement additional security measures such as Address Resolution Protocol (ARP) inspection and
traffic throttling on the Wi-Fi VLAN in order to block possible attacks from the open access network.
In the downstream direction, QoS is used to provide relative priority between the residential and Wi-Fi VLANs at the trunk UNI, and across classes of traffic within each VLAN.
Split-horizon forwarding is implemented at the trunk UNI on the N:1 Wi-Fi VLAN and any N:1 residential VLAN in order to prevent subscribers from cross-talking before reaching the BNG, and thus bypassing it.
The Broadband Network Gateway (BNG) node is the network device that enables subscriber management
functions for the residential subscribers as well as the public Wi-Fi users.
For Community Wi-Fi access, a single 802.1Q interface matching the shared N:1 VLAN aggregates all Wi-Fi
users connected to the same AN device.
A separate access interface is used for aggregating the residential wireline subscribers.
The BNG allows Wi-Fi subscriber access using a combination of MAC-based authorization and web logon
procedures and provides per session accounting via Remote Authentication Dial-In User Service (RADIUS) AAA
requests. The BNG enables dynamic policy control with RADIUS CoA functionality on subscriber sessions.
Similar to residential access, QoS is guaranteed at the subscriber level as well as at the aggregate level for all
community Wi-Fi subscribers connected to the same OLT.
Two forwarding options are available for Wi-Fi subscriber’s traffic past the BNG. Subscriber’s traffic can be
routed within the global routing domain, leveraging labeled Unicast MPLS forwarding, or can be isolated within a
L3VPN, for complete separation between services delivered over the Unified MPLS fabric, such as mobile and
business, and in a dedicated address space.
Such L3VPN, in turn, can be dedicated or shared with the residential users.
For community Wi-Fi, the CPE operates in bridged mode and DHCPv4 is enabled on the subscriber handset in order to acquire IPv4 configuration parameters (such as address, DNS, and so on) from an external DHCPv4 server. The BNG operates as a DHCPv4 proxy, overseeing the address allocation exchange between the client and the server.
[Figure: the BNG acting as a DHCPv4 proxy in the IPoE DHCP exchange between the Wi-Fi client and the external DHCPv4 server]
User identity is based on the username and password associated with the subscriber’s account as well as the
handset MAC address information.
The MAC address is dynamically learned upon an initial successful web logon and used for transparent
authorization during subsequent accesses.
Transparent Authorization
Transparent Authorization enables users who have already registered with the operator to be automatically
signed into the network without the need for any user intervention, such as redirection to a portal for web-based
authentication.
For community Wi-Fi and mobile access, the number of simultaneous active sessions from multiple devices in
the same account is capped. When the threshold is breached, user attempts to establish additional sessions are
denied and the user is redirected to a notification page on the self-management portal.
Call flows are different depending on access type. The following sections discuss the behavior for residential
wireline, community Wi-Fi, and mobile access.
[Figure: residential wireline session setup — DHCPv6 (and DHCPv4) exchanges between the CPE, AN, and BNG, with AAA/PCRF interaction, followed by user traffic]
When a wireline subscriber first accesses the network, a new session is triggered at the BNG; depending on the subscriber access protocol, this happens as part of a PPP session negotiation exchange or upon receipt of a DHCPv6 Solicit message.
While session establishment is in process, the BNG attempts to authenticate the subscriber based on credentials collected from different sources. These range from information coming directly from the client, such as the PPP username or the subscriber MAC address, to line identifiers inserted in DHCPv6 Options 18/37 by the AN, or taken from the BNG access port, such as slot, port, and VLAN information. How the user gets authenticated depends on the subscriber aggregation model, 1:1 or N:1, and the subscriber access protocol, PPPoE or IPoE.
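That credential selection can be summarized as a simple dispatch on the aggregation model and access protocol. The sketch below is a hypothetical condensation of the rules stated in this guide; a real BNG drives this through configurable policy, not hard-coded logic.

```python
def identity_source(model: str, protocol: str) -> str:
    """Which credential anchors the session identity, given the
    aggregation model ('1:1' or 'N:1') and protocol ('IPoE'/'PPPoE'),
    as described for the FMC residential access models."""
    if model == "1:1":
        # both IPoE and PPPoE: the access line, via NAS-Port-ID at the BNG
        return "NAS-Port-ID (BNG slot/port/VLAN)"
    if protocol == "IPoE":
        return "line ID from DHCPv6 Options 18/37"
    return "PPP CHAP username/password or PPPoE IA line ID"

assert identity_source("N:1", "IPoE") == "line ID from DHCPv6 Options 18/37"
assert identity_source("1:1", "PPPoE").startswith("NAS-Port-ID")
```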
For an existing subscriber, this network-based authentication will succeed and the BNG will receive from RADIUS
all the features that should be activated on the subscriber session to reflect his subscription.
[Figure: wireline session call flow — the DHCP Offer/Request/Ack exchange completes, and a RADIUS Accounting-Start carries the accounting-session-ID, which is cached and used in subsequent RADIUS CoA messages to identify the subscriber, before user traffic flows]
When a Wi-Fi subscriber first accesses the network, a new session is triggered at the BNG based on the receipt
of a DHCPv4 Discover message.
While session establishment is in process, BNG attempts to authenticate the subscriber based on the wireless
handset MAC address.
[Figure: community Wi-Fi session limit enforcement — after the DHCP Offer/Request/Ack exchange and a RADIUS Accounting-Start (whose accounting-session-ID is cached for subsequent RADIUS CoA messages), an HTTP GET is intercepted and redirected (HTTP 307) to an error page on the portal carrying the NAS IPv4 address, and the session is terminated]
The call flow in the following figure shows the behavior the Cisco FMC system implements for an IPoE IPv4 Wi-Fi
subscriber.
[Figure: IPoE IPv4 Wi-Fi web logon call flow — the BNG sends a RADIUS Access-Request with the handset MAC as username; since the user is not yet known, AAA answers with an Access-Reject and the HTTP-Redirect service is applied while DHCP Offer/Request/Ack completes. Once the user enters credentials on the portal, the portal identifies the session by its IP address (SUB_IP) and VRF and sends a RADIUS CoA Account Logon (SUB_IPv4, vrf, username, password); the BNG re-authenticates via RADIUS Access-Request/Access-Accept, which returns the subscriber's services. A RADIUS Accounting-Start carries the accounting-session-ID and Calling-Station-ID, cached for subsequent CoA messages and as an additional credential; the CoA Account Logon is acknowledged and user traffic flows]
While session establishment is in progress, BNG attempts to authenticate the Wi-Fi device based on its MAC
address.
For a new Wi-Fi device, this network-based authentication will fail, and the BNG enables redirection of HTTP
traffic to a Web Logon portal page.
Subscriber address assignment is allowed to complete so that the client can gain limited access to the network.
After the user starts a web browser, the BNG responds to the HTTP GET request with an HTTP redirect (HTTP
307 message) requesting that the browser replace the original URL with the URL of the self-management portal.
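The redirect exchange can be sketched as follows. This is an illustrative Python sketch of the HTTP 307 response a redirecting element could return, not the Cisco BNG implementation; the portal URL and NAS_IP query parameter simply mirror the call flow described here.

```python
# Illustrative sketch of the HTTP 307 redirect returned to an unauthenticated
# subscriber. The portal base URL and NAS_IP parameter mirror the call flow
# in the text; all names here are assumptions, not a real BNG API.

def build_redirect(portal_base: str, nas_ipv4: str) -> str:
    """Build a minimal HTTP 307 response steering the client to the web-logon portal."""
    location = f"{portal_base}?NAS_IP={nas_ipv4}"
    return (
        "HTTP/1.1 307 Temporary Redirect\r\n"
        f"Location: {location}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

response = build_redirect("http://www.portal.com/Error_Active.htm/", "192.0.2.1")
```

The client's browser follows the Location header to the self-management portal, where the web logon described next takes place.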
The user is then presented with a web logon screen where he is asked to provide his username and password.
The portal propagates user-entered credentials to the BNG via RADIUS CoA, to be used for a second round of
authentication. A successful authentication exchange with AAA includes all the features that should be activated
on the subscriber session to reflect his subscription.
Web logon redirection is removed from the session and an accounting start is sent to AAA to signal the full
establishment of the session. The accounting message includes information about device’s identity such as its
MAC address. This information is recorded and used for subsequent accesses in order to enable transparent
authorization on subsequent logins.
If the maximum number of registered devices has been reached, automatic credential registration fails and
access is denied, with a redirection service directing the user to a notification page.
All of a subscriber’s active sessions across all access types feed from the same credit pool, and consumption
from the pool is weighted by access type in order to steer subscribers toward cheaper access technologies. As
an example, weights for each byte of actual traffic can be set at 5x for mobile, 3x for Wi-Fi, and 1x for wireline
access, making the latter the most cost-effective way for subscribers to enter the operator’s network.
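The weighted consumption model can be illustrated with a short sketch using the example weights above (5x mobile, 3x Wi-Fi, 1x wireline). The weights and the simple byte-accounting model are taken from the example in the text; everything else is illustrative.

```python
# Minimal sketch of weighted credit-pool consumption across access types,
# using the example weights from the text. The data model is illustrative.

ACCESS_WEIGHTS = {"mobile": 5, "wifi": 3, "wireline": 1}

def charge(pool_bytes: int, usage: dict[str, int]) -> int:
    """Deduct weighted usage (bytes per access type) from a shared credit pool."""
    weighted = sum(ACCESS_WEIGHTS[access] * nbytes for access, nbytes in usage.items())
    return max(pool_bytes - weighted, 0)

# 50 KB mobile counts 5x, 100 KB Wi-Fi counts 3x, 200 KB wireline counts 1x:
remaining = charge(1_000_000, {"mobile": 50_000, "wifi": 100_000, "wireline": 200_000})
```

Because each mobile byte drains the pool five times faster than a wireline byte, the same traffic volume lasts longest on wireline access, which is the steering effect described above.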
Call flows are different depending on access type. The following sections discuss the behavior for residential
wireline, community Wi-Fi, and mobile access.
(Call flow between CPE, BNG, portal, and AAA/PCRF: when the credit pool is replenished, the original rate is restored.)
Per-session accounting is enabled to periodically report time and volume utilization for the subscriber’s actively
monitored session. An external online charging function or quota manager then adjusts the user’s credit
availability based on the statistics reported in the accounting records and the weight based on the access type.
When the credit utilization reaches 100%, a RADIUS CoA message is sent to the BNG requesting activation of
a lower-tier service, and the user experience degrades until the next credit replenishment. Optionally, the user
may also be redirected to the subscriber registration portal to be notified of his credit status reaching a critical
threshold. Redirection is disabled after the user acknowledges reading the notification, or after a predefined time
period.
To minimize revenue leakage caused by the detection delay inherent in periodic accounting records, the interval
at which accounting messages are sent is incrementally reduced as credit is eroded. When a large percentage
of credit has been consumed, the accounting interval is shortened and accounting records are transmitted more
frequently. The new interim interval is calculated as a function of the rate at which the subscriber is transmitting
and his service tier. Changes to the active feature set for the subscriber session are requested via RADIUS CoA
messages.
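The shrinking interim interval can be modeled with a simple function of remaining credit and transmit rate. The formula and bounds below are assumptions for illustration only, not the system's actual algorithm.

```python
# Illustrative model of reducing the interim-accounting interval as credit
# erodes: the interval scales down with remaining credit and shrinks further
# for subscribers transmitting near their tier rate, so the heaviest, lowest-
# credit sessions report most often. Formula and floor value are assumptions.

def interim_interval(base_s: int, credit_left: float, rate_bps: int, tier_bps: int) -> int:
    """Return the next interim-accounting interval in seconds.

    credit_left is the fraction of credit remaining (1.0 = full pool).
    """
    rate_factor = min(rate_bps / tier_bps, 1.0) if tier_bps else 1.0
    # Shrink proportionally to remaining credit, and faster for heavy senders.
    interval = base_s * credit_left * (1.0 - 0.5 * rate_factor)
    return max(int(interval), 60)  # never report more often than once a minute

fresh = interim_interval(900, credit_left=1.0, rate_bps=0, tier_bps=10_000_000)
eroded = interim_interval(900, credit_left=0.1, rate_bps=10_000_000, tier_bps=10_000_000)
```

An idle session with a full pool keeps the base interval, while a nearly exhausted, fully loaded session reports at the floor, bounding the amount of unreported consumption.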
Tiered offerings help operators generate additional revenue by capturing a larger portion of the market through
differentiated offerings that cater to specific needs. To encourage customer adoption of higher-tier plans, each
plan allows a different number of simultaneously active devices in the network, in addition to different maximum
bandwidth settings and total credit available to the subscriber’s account.
Call flows are different depending on access type. The following sections discuss the behavior for residential
wireline, community Wi-Fi, and mobile access.
(Call flow between CPE, BNG, portal, and AAA/PCRF: the user submits username and password to the portal.)
In the scenario depicted in the previous figure, the user logs on to the subscriber self-management portal and
requests activation of a new service tier. Whenever the user requests a change of subscription, the portal sends
a RADIUS CoA message indicating the changes that should be applied to the subscriber session. This affects all
the active subscriber sessions across all access types. Both upgrades and downgrades, permanent and transient,
are supported.
Figure 54 - Wireline L3VPN Service in Cisco FMC System with Unified MPLS Access
(Topology: access network (OSPF 0/IS-IS L2), aggregation networks (IS-IS L1), core network (IS-IS L2), and a mobile access network (OSPF 0/IS-IS L2); PANs and CN-ABRs act as inline RRs, with a FAN and a CSG at the respective access edges.)
The preceding figure shows a wireline L3VPN service enabled using an MPLS VRF between two PAN-SEs
across the core network with an EoMPLS PW between each PAN-SE and the respective FAN.
As detailed in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global FAN and IGW BGP communities, to provide routing to all possible prefixes for the service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community, and import the loopbacks of the other nodes from the same community.
• Implement PWHE interfaces to terminate the PW from the AN and map it to the MPLS VRF for the
L3VPN service.
As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Announce their loopbacks to a global FAN BGP community.
• Transport all services to the PAN-SE.
The PAN-SE will implement SLA enforcement through per-subscriber QoS policies and any required access
control lists (ACLs). The FAN will provide aggregate class enforcement through QoS.
Figure 55 - L3VPN Service in Cisco FMC System with Fixed and Mobile Access
(Topology: IGP/LDP in every domain. Business L3VPN (v4/v6) services terminate on FSE nodes with AToM pseudowires from the FAN; the S1 and X2 L3VPN extends from the CSGs to the MSEs, carrying the LTE/3G IP bearer. PANs and CN-ABRs act as inline RRs, and enterprise Ethernet CPEs attach at both the fixed and mobile edges.)
The preceding figure shows a business L3VPN service enabled by using an MPLS VRF that spans across both
the fixed wireline and mobile networks. The figure shows an LTE deployment for the mobile-attached CPE
device, but 3G deployments are also supported.
On the fixed wireline side, the VRF is created on a PAN-SE with an EoMPLS PW transporting the service
between the PAN-SE and the respective FAN.
As detailed in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will do the
following:
• Import the global FAN and IGW BGP communities in order to provide routing to all possible prefixes for
the service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community, and import the loopbacks of the other nodes from the same community.
• Implement PWHE interfaces in order to terminate the PW from the AN and map it to the MPLS VRF for
the L3VPN service.
As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Announce their loopback to a global FAN BGP community.
• Transport all services to the PAN-SE.
The PAN-SE will implement service SLA enforcement through per-subscriber QoS policies, and any required
ACLs. The FAN will provide aggregate class enforcement through QoS.
In the Mobile Network, the L3 MPLS VPN transport handling is dependent upon the technology being deployed.
In the case of a 3G deployment, the gateway General Packet Radio Service (GPRS) support node (GGSN)
handles the transport establishment and routing with the mobile CPE router. In the case of an LTE deployment,
the PDN gateway (PGW) performs the equivalent role.
The GGSN supports a RADIUS Framed Route attribute-value pair (AVP) in order to enable mobile router
functionality. The mobile router enables a router to create a Packet Data Protocol (PDP) context that the GGSN
authorizes via RADIUS. The RADIUS server authenticates the router and includes a Framed-Route attribute (RFC
2865) in the RADIUS Access-Accept response, specifying the subnet routing information to be installed in the
GGSN for the mobile router.
If the GGSN receives a packet with a destination address matching the Framed-Route, the packet is forwarded
to the mobile router through the associated PDP context. Framed-Routes received via RADIUS in the Access-
Accept will be installed for the subscriber (for a GGSN call) and will be deleted once the call is terminated.
This feature is implemented using aggregate VPN APIs. The framed-route attribute also works in combination
with an MPLS/BGP solution, meaning the framed route will be installed in a particular VRF (ip vrf) of a corporate
access point name (APN) and its routing table.
If the VRF has configured BGP and route distribution, routes will be announced over multiprotocol external BGP
(MP-eBGP) to the gateway to the fixed wireline network. The GGSN generates a new label for a framed route
subnet and distributes it over MP-eBGP. The Framed-Routes may be advertised if dynamic routing is in use as
dictated by the routing protocol and its configuration.
Framed-Routes can overlap. They are added in the routing table of a particular VRF. Since each routing table is
separated by VRF, the same IP subnets can coexist among different VRFs.
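The reason overlapping Framed-Routes can coexist is that each VRF maintains its own routing table, so identical subnets keyed under different VRFs never collide. A minimal sketch, with illustrative data structures that are not the GGSN implementation:

```python
# Sketch of per-VRF route tables: a Framed-Route is installed into one VRF's
# table for the lifetime of the call and removed at teardown. VRF and context
# names are hypothetical.

from collections import defaultdict

vrf_tables: dict[str, dict[str, str]] = defaultdict(dict)  # vrf -> {subnet: context}

def install_framed_route(vrf: str, subnet: str, pdp_context: str) -> None:
    """Install a Framed-Route from the Access-Accept into the VRF's table."""
    vrf_tables[vrf][subnet] = pdp_context

def delete_framed_route(vrf: str, subnet: str) -> None:
    """Remove the route when the GGSN call terminates."""
    vrf_tables[vrf].pop(subnet, None)

# The same subnet installed under two corporate-APN VRFs does not conflict:
install_framed_route("corp-a", "10.1.0.0/16", "pdp-1")
install_framed_route("corp-b", "10.1.0.0/16", "pdp-2")
```

Lookup is always scoped to one VRF, which is why the text notes that the same IP subnets can coexist among different VRFs.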
The Framed-Routes assigned at context setup remain in effect for the lifetime of the context and need not be
modified.
Framed Routes can be public or private. The IP address assigned to the mobile router itself need not be part
of the Framed-Routes assigned to the context. For example, the mobile router may be assigned a private IP
address while the Framed-Route may be a public IP subnet. However, an IP address can be associated with a
Framed-Route attribute via RADIUS (auth accept).
Figure 56 - Wireline L3VPN Service in Cisco FMC System with Ethernet Access
(Topology: TDM/packet microwave access, IS-IS L1 aggregation, IS-IS L2 core, and Ethernet access.)
The preceding figure shows a wireline L3VPN service enabled using an MPLS VRF between two PAN-SEs
across the core network with an 802.1q or Q-in-Q-tagged Ethernet NNI between one PAN-SE and its respective
FAN.
As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Map the UNI to the proper 802.1Q or Q-in-Q Ethernet NNI for transport to the PAN-SE.
The PAN-SE will map the S-VLAN and/or C-VLAN(s) from the UNI or Ethernet NNI to the MPLS VRF. This service
edge node will implement service SLA enforcement through per-subscriber QoS policies, and any required
ACLs. The FAN, if utilized, will provide aggregate class enforcement through QoS.
Figure 57 - Wireline VPLS Service in FMC System with Unified MPLS Access
(Topology: access (OSPF 0/IS-IS L2), aggregation (IS-IS L1), core (IS-IS L2), and mobile access (OSPF 0/IS-IS L2); PANs and CN-ABRs act as inline RRs, with CPEs attached behind the FAN and CSG.)
The previous figure shows a wireline VPLS, such as an EP-LAN or EVP-LAN business service enabled using a
VPLS VFI between two PAN-SEs across the core network, with an EoMPLS PW between each PAN-SE and the
respective FAN.
As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global FAN and IGW BGP communities to provide routing to all possible prefixes for the
service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community, and import the loopbacks of the other nodes from the same community.
As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Announce their loopbacks to a global FAN BGP community.
• Transport all services to the PAN-SE.
Figure 58 - Wireline VPLS Service in Cisco FMC System with TDM/Ethernet Access
(Topology: TDM/packet microwave access, IS-IS L1 aggregation, IS-IS L2 core, and Ethernet access.)
The preceding figure shows a wireline VPLS, such as an Ethernet Private LAN (EP-LAN) or Ethernet Virtual Private
LAN (EVP-LAN) business service enabled using a VPLS VFI between two PAN-SEs across the core network,
with an 802.1q or Q-in-Q tagged Ethernet NNI between each PAN-SE and the respective FAN.
As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global IGW BGP communities to provide routing to all possible prefixes for the service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community and import the loopbacks of the other nodes from the same community.
As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Map the UNI to the proper 802.1Q or Q-in-Q Ethernet NNI for transport to the PAN-SE.
The PAN-SE will map the S-VLAN and/or C-VLAN(s) from the UNI or Ethernet NNI to the VPLS VFI. This service
edge node will implement service SLA enforcement through per-subscriber QoS policies and any required
ACLs, and learn the MAC addresses in the service. The FAN, if utilized, will provide aggregate class enforcement
through QoS. Since the FAN has only two connections to the VPLS service, the UNI and the NNI to the PAN-SE,
MAC address learning is disabled for the service.
The business customer’s CPE equipment may either be connected directly to this service edge node or via an
AN, such as FANs or CSGs located in an urban area with fiber access from the business to the CSG. The AN
may be connected to the service edge node either via an Ethernet NNI, an Ethernet ring network, or an MPLS
Access network.
(Topology: TDM/packet microwave and Ethernet access across IS-IS L1 aggregation and IS-IS L2 core domains; PANs and CN-ABRs act as inline RRs. PBB-EVPN runs between the PAN-SEs, with 802.1q or 802.1ad NNIs toward the FAN and the attached CPEs.)
The preceding figure shows a wireline L2VPN service, such as an Ethernet Private LAN (EP-LAN) or Ethernet
Virtual Private LAN (EVP-LAN) business service, enabled using an EVI with PBB configured between two PAN-
SEs across the core network, with an 802.1q or Q-in-Q tagged Ethernet NNI between each PAN-SE and the
respective FAN.
As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global IGW BGP communities in order to provide routing to all possible prefixes for the
service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community and import the loopbacks of the other nodes from the same community.
As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Map the UNI to the proper 802.1Q or Q-in-Q Ethernet NNI for transport to the PAN-SE.
The PAN-SE will group the S-VLAN and/or C-VLANs, defining the service from the UNI or Ethernet NNI into a
bridge domain, referred to as the PBB-Edge bridge domain (BD). Through the PBB functionality in the service
edge node, this PBB-Edge BD is connected to a second PBB-Core BD, on which the EVPN service is configured
with an EVI. This step encapsulates all Customer-MAC (C-MAC) addresses in a Bridge-MAC (B-MAC) address,
and only B-MAC information is shared between service edge nodes. This same EVI is configured on all service
edge nodes participating in this PBB-EVPN. Service traffic between the service edge nodes is routed via BGP
“address-family l2vpn evpn” information.
The service edge nodes will implement service SLA enforcement through per-subscriber QoS policies and any
required ACLs, and learn the MAC addresses in the service. The FAN, if utilized, will provide aggregate class
enforcement through QoS.
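The C-MAC/B-MAC hiding described above can be sketched in a few lines. The data structures and MAC values are illustrative, not the service edge implementation; the point is that remote edges learn only the B-MAC, however many customer MACs sit behind it.

```python
# Minimal sketch of PBB address hiding: customer MACs (C-MACs) learned on the
# PBB-Edge bridge domain stay local, while only the node's bridge MAC (B-MAC)
# is advertised to remote service edges over the EVPN EVI. Values illustrative.

LOCAL_BMAC = "00:11:22:00:00:01"  # assumed B-MAC of this service edge

cmac_table: dict[str, str] = {}    # C-MAC -> local attachment circuit
bmac_advertised: set[str] = set()  # what remote edges learn via BGP EVPN

def learn_cmac(cmac: str, circuit: str) -> None:
    """Learn a customer MAC on the PBB-Edge BD; advertise only the B-MAC."""
    cmac_table[cmac] = circuit
    bmac_advertised.add(LOCAL_BMAC)

learn_cmac("aa:bb:cc:dd:ee:01", "attachment-circuit-1")
learn_cmac("aa:bb:cc:dd:ee:02", "attachment-circuit-1")
```

No matter how many C-MACs are learned locally, the EVPN control plane carries a single B-MAC per edge, which is what keeps remote MAC scale bounded.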
Figure 60 - Wireline PBB-EVPN Service in Cisco FMC System with MPLS Access
(Topology: access (OSPF 0/IS-IS L2), aggregation (IS-IS L1), core (IS-IS L2), and mobile access (OSPF 0/IS-IS L2); PANs and CN-ABRs act as inline RRs, with CPEs attached behind the FAN and CSG.)
The preceding figure shows a wireline L2VPN service, such as an Ethernet Private LAN (EP-LAN) or Ethernet
Virtual Private LAN (EVP-LAN) business service enabled using an EVI with PBB configured between two PAN-
SEs across the core network. The FAN utilizes an EoMPLS PW to transport traffic from the CPE UNI to the
PAN-SE.
As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global IGW BGP communities in order to provide routing to all possible prefixes for the
service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community and import the loopbacks of the other nodes from the same community.
As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Announce their loopback to a global FAN BGP community.
• Transport all services to the PAN-SE via an EoMPLS PW.
The PAN-SE will group the S-VLAN and/or C-VLANs, defining the service from the spoke PW into a bridge
domain, referred to as the PBB-Edge bridge domain (BD). Through the PBB functionality in the service edge
node, this PBB-Edge BD is connected to a second PBB-Core BD, on which the EVPN service is configured
with an EVI. This step encapsulates all Customer-MAC (C-MAC) addresses in a Bridge-MAC (B-MAC) address,
and only B-MAC information is shared between service edge nodes. This same EVI is configured on all service
edge nodes participating in this PBB-EVPN. Service traffic between the service edge nodes is routed via BGP
“address-family l2vpn evpn” information.
The service edge nodes will implement service SLA enforcement through per-subscriber QoS policies and any
required ACLs, and learn the MAC addresses in the service. The FAN will provide aggregate class enforcement
through QoS.
Figure 61 - Wireline VPWS Service between CSG and FAN across the Core Network
(Topology: access (OSPF 0/IS-IS L2), aggregation (IS-IS L1), core (IS-IS L2), and mobile access (OSPF 0/IS-IS L2); PANs and CN-ABRs act as inline RRs. An AToM pseudowire connects CPEs behind a FAN and a CSG in remote access networks. Both ANs advertise their loopbacks in iBGP with the Local RAN, Global RAN, and Global FAN communities; when the VPWS service is activated, the inbound filter on each AN is automatically updated for the remote FAN.)
The preceding figure shows a wireline VPWS, such as an Ethernet Private Line (EPL) or Ethernet Virtual Private Line (EVPL) business service,
enabled using a pseudowire between CSGs in the access and a FAN in a remote access network across the
core network. The CSG and FAN enabling the VPWS learn each other’s loopbacks via BGP labeled-unicast that
is extended to the access network using the PANs as inline RR, as described in “Hierarchical-Labeled BGP LSP
Core, Aggregation, and Access.”
The following figure shows a variation of VPWS service deployment with native Ethernet, TDM, or Microwave
access where the service utilizes a pseudowire between the PANs.
Figure 62 - Wireline VPWS Service between CSG and FAN with non-MPLS Access
(Topology: one access network with native TDM/packet microwave or Ethernet access and one MPLS access network (OSPF 0/IS-IS L2), connected across IS-IS L1 aggregation and IS-IS L2 core domains; PANs and CN-ABRs act as inline RRs. The wireline VPWS uses an AToM pseudowire toward the remote FAN and CSG; when the VPWS service is activated, the inbound filter is automatically updated for the remote FAN.)
The CSGs and FANs perform inbound filtering on a per-PAN RR neighbor basis using a route-map that:
• Accepts the FSE community.
• Accepts loopbacks of remote destinations for which wireline services are configured on the device.
• Drops all other prefixes.
When a wireline service is activated to a new destination, the route-map used for inbound filtering of remote
destinations is updated automatically. Since adding a new wireline service on the device results in a change in
the routing policy of a BGP neighbor, the dynamic inbound soft reset function is used to initiate a non-disruptive
dynamic exchange of route refresh requests between the ANs and the PAN.
Tech Tip
Both BGP peers must support the route refresh capability to use dynamic inbound soft
reset capability.
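The inbound filter logic above can be sketched as a small decision function: accept the FSE community, accept loopbacks of configured service destinations, drop everything else, and extend the accept list when a new VPWS is activated. Community values and prefixes are illustrative, not the system's actual policy syntax.

```python
# Sketch of the per-neighbor inbound filter on a CSG/FAN. Activating a new
# wireline service adds the remote loopback to the accept list; in the real
# system this triggers a BGP route refresh (dynamic inbound soft reset).
# The FSE community value and prefixes here are assumptions.

FSE_COMMUNITY = "1000:1000"              # assumed global FSE community
service_destinations: set[str] = set()   # /32 loopbacks of remote service endpoints

def accept_prefix(prefix: str, communities: set[str]) -> bool:
    """Inbound route-map: True if the prefix is accepted from the PAN inline RR."""
    if FSE_COMMUNITY in communities:
        return True
    return prefix in service_destinations  # all other prefixes are dropped

def activate_vpws(remote_loopback: str) -> None:
    """Activating a wireline service updates the inbound filter automatically."""
    service_destinations.add(remote_loopback)

assert not accept_prefix("10.0.0.5/32", set())  # dropped before activation
activate_vpws("10.0.0.5/32")
```

After activation, the previously dropped remote loopback is accepted on the next route refresh, which is why both peers must support the route refresh capability.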
(Figure: mobile access, aggregation, and packet core networks; eNodeBs backhaul the S1-U interface to the SGWs and the S1-C interface to the MME through the MTG.)
The Mobile RAN includes cell sites with enhanced NodeBs (eNB) that are connected either:
• directly in a point-to-point fashion to the PANs utilizing Ethernet fiber or microwave
or
• through CSGs connected in ring topologies by using MPLS/IP packet transport over Ethernet fiber or
microwave transmission
The cell sites in the RAN access are aggregated into an MPLS/IP pre-aggregation/aggregation network, which
may comprise physical hub-and-spoke or ring connectivity, and which interfaces with the MPLS/IP core network
that hosts the EPC gateways.
(Figure: LTE MPLS VPN RT scheme; the MTG VRF exports the MPC RT and imports the MPC and Common RTs, while each RAN-region VRF exports its own RAN RT (for example, RAN X or RAN Z) plus the Common RT, and imports its own RAN RT plus the MPC RT.)
The FMC System proposes a simple and efficient L3 service model that addresses the LTE backhaul
requirements described above. The L3 service model is built over a Unified MPLS Transport with a common,
highly scaled MPLS VPN that covers LTE S1 interfaces from all CSGs across the network and an LTE X2 interface
per RAN access region. The single MPLS VPN per operator is built across the network with VRFs on the MTGs
connecting the EPC gateways (SGW, MME) in the MPC, down to the RAN access with VRFs on the CSGs
connecting the eNodeBs. Prefix filtering across the VPN is done using simple multiprotocol BGP (MP-BGP) route
target (RT) import and export statements on the CSGs and MTGs.
In every RAN access region, all CSGs import the MPC RT and the RAN x RT. The CSGs export the Common
RT and the RAN x RT. Here x denotes the unique RT assigned to that RAN access region. With this importing
and exporting of RTs, the route scale in the VRF of the CSGs is kept to a minimum since VPNv4 prefixes
corresponding to CSGs in other RAN access regions – either in the local aggregation domain, or RAN access
regions in remote aggregation domain across the core – are not learnt. The CSGs have reachability to every MTG
and the corresponding EPC gateways (SGW, MME) that they connect anywhere in the MPC. They also have
shortest path mesh connectivity among themselves for the X2 interface.
In the MPC, the MTGs import the MPC RT and the Common RT. They export only the MPC RT. With this
importing and exporting of RTs, the MTGs have connectivity to all other gateways in the MPC, as well as
connectivity to the CSGs in the RAN access regions across the entire network. The MTGs are capable of
handling large scale and learn all VPNv4 prefixes in the LTE VPN.
(Figures: Unified MPLS transport for inter-access X2 connectivity. CSG loopbacks are advertised in iBGP labeled-unicast with a common RAN community plus a per-region community (for example, 10:10 and 10:0102), while MSE prefixes carry their own community (for example, 1001:1001). CN-RRs, CN-ASBRs/CN-ABRs, and AGN-ASBRs act as inline RRs, and the CN-ABR sets next-hop-self in RPL only on prefixes whose communities match other RAN access regions. At the service layer, each access region exports its own RAN RT plus the Common RT and imports the MPC RT, its own RAN RT, and the RAN RTs of adjacent regions; for example, Access-3 imports the RAN-2, RAN-3, and RAN-4 RTs plus the MPC RT, enabling S1 traffic to the MTGs and inter-access X2 traffic between neighboring regions.)
In some cases, depending on the spread of the macro cell footprint, it might be desirable to provide X2
interfaces between CSGs located in neighboring RAN access regions. This connectivity can easily be
accomplished using the BGP community-based coloring of prefixes used in the Unified MPLS Transport.
• As described in “Transport Architecture,” the CSG loopbacks are colored in BGP labeled-unicast with
a common BGP community that represents the RAN community and a BGP community that is unique
to that RAN access region. This tagging can be done when the CSGs advertise their loopbacks in
iBGP labeled-unicast as shown in Figure 66 if labeled BGP is extended to the access or at the PANs
when redistributing from the RAN IGP to iBGP when IGP/LDP is used in the RAN access using the
redistribution approach.
• The adjacent RAN access domain CSG loopbacks can be identified at the PAN based on the unique
RAN access region BGP community and be selectively propagated into the access based on egress
filtering as shown in Figure 66, if labeled BGP is extended to the access or be selectively redistributed
into the RAN IGP if IGP/LDP is used in the RAN access using the redistribution approach.
It is important to note that X2 interfaces are based on eNodeB proximity and therefore a given RAN access
domain only requires connectivity to the ones immediately adjacent. This filtering approach allows for
hierarchical-labeled BGP LSPs to be set up across neighboring access regions while preserving the low route
scale in the access. At the service level, any CSG in a RAN access domain that needs to establish inter-access
X2 connectivity will import its neighboring CSG access region RT in addition to its own RT in the LTE MPLS VPN.
The CN-ABR inline-RR applies selective NHS function using route policy in the egress direction towards its
local PAN neighbor group in order to provide shortest-path connectivity for the X2 interface between CSGs
across neighboring RAN access regions. The routing policy language (RPL) logic changes the next-hop
towards the PANs only for prefixes that do not match the local RAN access region, based on a simple
regular expression matching BGP communities. This allows the CN-ABR to change the BGP next-hop
selectively while preserving shortest-path forwarding for intra-region traffic.
(Figure: eMBMS reference architecture; the UE attaches to the eNB, the MME connects to the eNB over M3 and to the MBMS-GW over Sm, and the MBMS-GW delivers content to the eNB over M1 and connects to the BM-SC over SGmb and SGi-mb.)
The following interfaces, which are within the scope of the Cisco FMC system design, are involved in eMBMS
service delivery:
• M3 interface—A unicast interface between the MME and MCE (assumed to be integrated into the eNB
for the sake of Cisco FMC), which primarily carries Multimedia Broadcast Multicast Service (MBMS)
session management signaling.
• M1 interface—A downstream user-plane interface between the MBMS Gateway (MBMS-GW) and the eNB,
which delivers content to the user endpoint. IP Multicast is used to transport the M1 interface traffic.
In the context of the Cisco FMC system design, transport of the eMBMS interfaces is conducted based on the
interface type. This is illustrated in the following figure:
(Figure: the unicast interfaces, S1-U, S1-C, S11, X2, M3, and Sm, are carried in the MPLS VPN (v4/v6) between the CSG VRFs and the MTG VRFs (MTG-1/2/3, with VRRP toward the MME), while the M1 interface from the MBMS-GW is carried in the global routing table.)
The multicast mechanism utilized for transporting the M1 interface traffic depends upon the location in the
network:
• From the MTG attached to the MBMS-GW, through the Core and Aggregation domains to the AGN node,
Label-Switched Multicast (LSM) is utilized to transport the M1 interface traffic. This provides efficient and
resilient transport of the multicast traffic within these regions.
• From the PAN to the CSG, Native IP Multicast is utilized to transport the M1 interface traffic. This
provides efficient and resilient transport of the multicast traffic while utilizing the lowest amount of
resources on these smaller nodes.
On the UNI from the CSG to the eNB, two VLANs are utilized to deliver the various interfaces to the eNB. One
VLAN handles unicast interface (S1, X2, M3) delivery, while the other handles M1 multicast traffic delivery.
(Figure: mobile access, aggregation, and packet core networks; TDM BTS and ATM NodeB traffic is carried over AToM pseudowires from the CSG through the PAN to the MTGs, which hand off TDM to the BSC and ATM to the RNC.)
Typical GSM (2G) deployments consist of cell sites that don’t require a full E1/T1. In such cell sites, a fractional
E1/T1 is used. The operator can deploy these cell sites in a daisy-chain fashion (for example, down a highway) or
aggregate them at the BSC location. To reduce the CAPEX investment in channelized STM-1/OC-3 ports on the
BSC, the operator uses a digital XConnect to merge multiple fractional E1/T1 links into a full E1/T1. This reduces
the number of T1/E1s needed on the BSC, which in turn reduces the number of channelized STM-1/OC-3 ports
required. Deploying CESoPSN PWs from the CSG to the RAN distribution node supports these fractional T1/E1s
and their aggregation at the BSC site. In this type of deployment, the default CESoPSN alarm behavior needs
to be changed. Typically, if a T1/E1 on the AN goes down, the PW forwards the alarm indication signal (AIS)
to the distribution node, which then propagates the AIS to the BSC by taking the T1/E1 down. In this
multiplexed scenario, timeslot (TS) alarming must be enabled on the CESoPSN PW so that AIS is propagated
only on the affected timeslots, leaving the other timeslots (for example, other cell sites) on the same T1/E1
unaffected.
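The timeslot-scoped alarming can be sketched as a mapping from timeslots to cell sites: when one fractional circuit fails, AIS is raised only on its timeslots. The timeslot ranges and site names below are illustrative.

```python
# Sketch of TS alarming on a multiplexed E1: two cell sites share one E1 via
# fractional timeslot groups, and a failure at one site raises AIS only on
# that site's timeslots. Mapping and alarm model are illustrative.

# Timeslot -> cell site, for one E1 at the distribution node (assumed split)
timeslot_map = {ts: "site-A" for ts in range(1, 9)}
timeslot_map.update({ts: "site-B" for ts in range(9, 17)})

def ais_timeslots(failed_site: str) -> set[int]:
    """Return only the timeslots that should carry AIS toward the BSC."""
    return {ts for ts, site in timeslot_map.items() if site == failed_site}

alarmed = ais_timeslots("site-A")
```

With the default (whole-circuit) behavior, the failure of one fractional link would take the entire E1 down toward the BSC; with TS alarming, site-B's timeslots keep carrying traffic.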
The same BGP-based control plane and label distribution implemented for the L3VPN services is also used for
circuit emulation services. For hub-and-spoke access topologies, Bidirectional Forwarding Detection (BFD)-
protected static routes can be used to eliminate the need for an IGP at the cell site. The CSGs utilize MPLS/IP
routing in this system release when deployed in a physical ring topology. TDM and ATM PWE3 can be overlaid in
either deployment model.
The CSGs, PAN, AGNs, and MTGs enforce the contracted ATM CoS SLA and mark the ATM and TDM PWE3
traffic with the corresponding per-hop behavior (PHB) inside the access, aggregation, and core DiffServ
domains. The MTG enables multi-router automatic protection switching (MR-APS) or single-router automatic
protection switching (SR-APS) redundancy for the BSC or RNC interface, as well as pseudowire redundancy and
two-way pseudowire redundancy for transport protection.
Hierarchical LSPs between Remote PAN-SEs or AGN-SEs for Multi-Area IGP Design
This scenario applies to inter-domain LSPs between the loopback addresses of remote PAN or AGN-SE Nodes,
connected across the core network. It is relevant to wireline L2/L3 MPLS VPN business services deployed
between remote service edges across the core network that use the /32 loopback address of the remote PEs
as the endpoint identifier for the Targeted Label Distribution Protocol (T-LDP) or multiprotocol internal BGP
(MP-iBGP) sessions. The business wireline services are delivered to the service edge in one of three ways:
• Directly connected to the PAN.
• Transported from a FAN to the PAN or AGN service edge via native Ethernet network.
• Transported from a FAN to the PAN or AGN service edge via a PW in an MPLS access network scenario,
which is terminated via PW Headend on the SE.
The service edges are labeled BGP PEs and advertise their loopback using labeled IPv4 unicast address family
(AFI/SAFI=1/4).
Figure 71 - Hierarchical LSPs between Remote PANs for Multi-Area IGP Design
Hierarchical LSPs between CSG and MTG for Multi-Area IGP Design with Labeled BGP Access
The inter-domain hierarchical LSP described here applies to the Option-1: Multi-Area IGP Design with Labeled
BGP Access transport model described in “Large Network, Multi-Area IGP Design with IP/MPLS Access.” This
scenario applies to inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs in
the core network. It is relevant to 4G LTE and 3G UMTS/IP services deployed using MPLS L3 VPNs or 2G GSM
and 3G UMTS/ATM services deployed using MPLS L2 VPNs that use the /32 loopback address of the remote
PEs as the endpoint identifier for the T-LDP or MP-iBGP sessions. The MTGs and CSGs are labeled BGP PEs and
advertise their loopback using labeled IPv4 unicast address family (AFI/SAFI=1/4).
This scenario is also applicable to point-to-point VPWS services between CSGs and/or FANs in different labeled
BGP access areas. In this scenario, the /32 loopback address of the remote AN is added to the inbound prefix
filter list at the time of service configuration on the local AN, as described in “PW Transport for X-Line Services.”
For this scenario, the label stacking is the same as that illustrated in Figure 71, with the access
network illustrated in Figure 72 appended at either end.
Figure 72 - Hierarchical LSPs between CSGs and MTGs for Multi-Area IGP Design with Labeled BGP Access
The CSG in the RAN access learns the loopback address of the MTG through BGP-labeled unicast. For traffic
flowing between the CSG in the RAN and the MTG in the MPC, as shown in the previous figure, the following
sequence occurs:
1. The downstream CSG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the PAN that is the labeled BGP next hop.
2. The CSGs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the PAN.
3. The PAN will swap the BGP label corresponding to the remote prefix and then push the LDP label used
to reach the CN-ABR that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the local CN-ABR.
5. Since the local CN-ABR has reachability to the MTG via the core IGP, it will swap the BGP label with an
LDP label corresponding to the upstream MTG intra-domain core LDP LSP.
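The five steps above can be traced as operations on the MPLS label stack. The following is a minimal Python sketch; the node roles follow the text, but the label names are placeholders and do not represent real label allocations:

```python
# Illustrative walk of the hierarchical LSP label stack for
# CSG -> MTG traffic (Multi-Area IGP design, labeled BGP access).
# Label names are arbitrary placeholders, not real labels.

def push(stack, label):
    """Impose a new label on top of the MPLS stack."""
    return [label] + stack

def swap(stack, label):
    """Replace the top label."""
    return [label] + stack[1:]

def pop(stack):
    """Remove the top label (e.g., PHP before a domain boundary)."""
    return stack[1:]

# 1. CSG pushes the BGP label for the MTG prefix, then the LDP
#    label toward the PAN (the labeled BGP next hop).
stack = push(push([], "BGP-to-MTG"), "LDP-to-PAN")

# 2. Transit CSGs swap the LDP label; the last hop performs PHP.
stack = pop(swap(stack, "LDP-to-PAN'"))

# 3. The PAN swaps the BGP label and pushes the LDP label to the CN-ABR.
stack = push(swap(stack, "BGP-to-MTG'"), "LDP-to-CN-ABR")

# 4. Transit AGNs swap the LDP label; PHP before the CN-ABR.
stack = pop(swap(stack, "LDP-to-CN-ABR'"))

# 5. The CN-ABR swaps the BGP label with the LDP label of the
#    MTG intra-domain core LSP.
stack = swap(stack, "LDP-to-MTG")

print(stack)  # ['LDP-to-MTG']
```

Note how the stack never exceeds two labels on transit nodes: one intra-domain LDP label and one BGP label carrying the end-to-end context.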
The MTG in the MPC learns the loopback address of the remote RAN CSG through BGP-labeled unicast. For
traffic flowing between the MTG and the CSG in the RAN as shown in Figure 72, the following sequence occurs:
1. The downstream MTG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the CN-ABR that is the labeled BGP next hop.
Hierarchical LSPs between CSG and MTG for Multi-Area IGP Design with IGP/LDP Access
The inter-domain hierarchical LSP described here applies to the Option-2: Multi-Area IGP Design with IGP/LDP
Access transport model described in “Large Network, Multi-Area IGP Design with IP/MPLS Access.” This
scenario applies to inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs in
the core network. It is relevant to 4G LTE and 3G UMTS/IP services deployed using MPLS L3 VPNs or 2G GSM
and 3G UMTS/ATM services deployed using MPLS L2VPNs that use the /32 loopback address of the remote
PEs as the endpoint identifier for the T-LDP or MP-iBGP sessions. The MTGs are labeled BGP PEs and advertise
their loopback using labeled IPv4 unicast address family (AFI/SAFI=1/4). The CSGs do not run labeled BGP, but
have connectivity to the MPC via the redistribution between RAN IGP and BGP-labeled unicast done at the local
PANs, which are the labeled BGP PEs.
Figure 73 - Hierarchical LSPs between CSGs and MTGs for Multi-Area IGP Design with IGP/LDP Access
The MTG in the MPC learns the loopback address of the remote RAN CSG through BGP-labeled unicast. For
traffic flowing between the MTG and the CSG in the RAN as shown in Figure 73, the following sequence occurs:
1. The downstream MTG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the CN-ABR that is the labeled BGP next hop.
2. The core nodes that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing off to the CN-ABR.
3. The CN-ABR will swap the BGP label corresponding to the remote prefix and then push the LDP label
used to reach the PAN that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the PAN connecting the RAN.
5. The PAN will swap the locally-assigned BGP label and forward to the upstream CSG using the local RAN
intra-domain LDP-based LSP label.
The PANs are labeled BGP PEs and advertise their loopback using labeled IPv4 unicast address family (AFI/
SAFI=1/4).
Figure 74 - Hierarchical LSPs between Remote Service Edges for Inter-AS Design
The remote services edges learn each other’s loopbacks through BGP-labeled unicast. iBGP-labeled unicast is
used to build the inter-domain hierarchical LSP inside each AS, and eBGP-labeled unicast is used to extend the
LSP across the AS boundary. For traffic flowing between the two service edges as shown in the previous figure,
the following sequence occurs:
1. The downstream service edge pushes the iBGP label corresponding to the remote prefix and then
pushes the LDP label that is used to reach the local AGN-ASBR that is the labeled BGP next hop.
2. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the local AGN-ASBR.
3. The local AGN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by
the neighboring CN-ASBR.
4. The CN-ASBR will swap the eBGP label with the iBGP inter-domain LSP label and then push the LDP
label that is used to reach the remote CN-ASBR that is the labeled BGP next hop.
5. The core nodes that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing off to the remote CN-ASBR.
6. The remote CN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by
the neighboring aggregation domain AGN-ASBR.
7. Since the remote AGN-ASBR has reachability to the destination service edge via IGP, it will swap the
eBGP label with an LDP label corresponding to the upstream service edge intra-domain LDP LSP.
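The seven steps above can be encoded as a sequence of label-stack operations. In the sketch below, the node roles follow the text while the label names are placeholders; the point it demonstrates is that even across two aggregation domains, the core, and an AS boundary, the stack depth on any transit node never exceeds two:

```python
# The inter-AS hierarchical LSP as (node, operation, label) actions.
# Label names are placeholders, not real allocations.
actions = [
    ("SE",         "push", "iBGP-1"),   # 1. iBGP label for remote SE prefix
    ("SE",         "push", "LDP-1"),    # 1. LDP label toward local AGN-ASBR
    ("AGN",        "swap", "LDP-1b"),   # 2. intra-domain swap
    ("AGN",        "pop",  None),       # 2. PHP before the AGN-ASBR
    ("AGN-ASBR",   "swap", "eBGP-1"),   # 3. iBGP label -> eBGP label
    ("CN-ASBR",    "swap", "iBGP-2"),   # 4. eBGP label -> iBGP label
    ("CN-ASBR",    "push", "LDP-2"),    # 4. LDP label toward remote CN-ASBR
    ("CN",         "swap", "LDP-2b"),   # 5. intra-domain swap
    ("CN",         "pop",  None),       # 5. PHP before the remote CN-ASBR
    ("CN-ASBR-2",  "swap", "eBGP-2"),   # 6. iBGP label -> eBGP label
    ("AGN-ASBR-2", "swap", "LDP-3"),    # 7. eBGP label -> LDP label to the SE
]

stack, max_depth = [], 0
for node, op, label in actions:
    if op == "push":
        stack.insert(0, label)
    elif op == "swap":
        stack[0] = label
    elif op == "pop":
        stack.pop(0)
    max_depth = max(max_depth, len(stack))

print(stack, max_depth)  # ['LDP-3'] 2
```

The bounded stack depth is what makes the hierarchy scale: transit nodes only ever handle their own intra-domain label plus one BGP label.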
This scenario is also applicable to point-to-point VPWS services between CSGs and/or FANs in different labeled
BGP access areas. In this scenario, the /32 loopback address of the remote AN is added to the inbound prefix
filter list at the time of service configuration on the local AN, as described in “PW Transport for X-Line Services.”
For this scenario, the label stacking is the same as that illustrated in Figure 74, with the access
network illustrated in Figure 75 appended at either end.
Figure 75 - Hierarchical LSPs between CSGs and MTGs for Inter-AS Design with Labeled BGP Access
Hierarchical LSPs between CSG and MTG for Inter-AS Design with IGP/LDP Access
The inter-domain hierarchical LSP described here applies to the Option-2: Inter-AS Design with IGP/LDP Access
transport model described in “Large Network, Inter-AS Design with IP/MPLS Access.” This scenario applies to
inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs in the core network. It
is relevant to 4G LTE and 3G UMTS/IP services deployed using MPLS L3 VPNs or 2G GSM and 3G UMTS/ATM
services deployed using MPLS L2 VPNs that use the /32 loopback address of the remote PEs as the endpoint
identifier for the T-LDP or MP-iBGP sessions. The MTGs are labeled BGP PEs and advertise their loopback using
labeled IPv4 unicast address family (AFI/SAFI=1/4). The CSGs do not run labeled BGP, but have connectivity to
the MPC via the redistribution between RAN IGP and BGP-labeled unicast done at the local PANs, which are the
labeled BGP PEs.
Figure 76 - Hierarchical LSPs between CSGs and MTGs for Inter-AS Design with IGP/LDP Access
The CSG in the RAN access learns the loopback address of the MTG through the redistribution of BGP-labeled
unicast into the RAN IGP performed at the local PAN. For traffic flowing between the CSG in the RAN and the
MTG in the MPC, as shown in the previous figure, the following sequence occurs:
1. The downstream CSG will push the LDP label used to reach the PAN that redistributed the labeled iBGP
prefix into the RAN IGP.
2. The CSGs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label towards the
PAN.
3. The PAN will first swap the LDP label with the iBGP label corresponding to the remote prefix and then
push the LDP label used to reach the AGN-ASBR that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the local AGN-ASBR.
5. The local AGN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by
the neighboring CN-ASBR.
6. Since the CN-ASBR has reachability to the MTG via the core IGP, it will swap the eBGP label with an LDP
label corresponding to the upstream MTG intra-domain core LDP LSP.
Hierarchical LSPs between CSG and MTG for Integrated Core and Aggregation Design
This scenario applies to inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs
in the integrated core and aggregation network. It is relevant to 4G LTE and 3G UMTS/IP services deployed
using MPLS L3 VPNs or 2G GSM and 3G UMTS/ATM services deployed using MPLS L2 VPNs that use the /32
loopback address of the remote PEs as the endpoint identifier for the T-LDP or MP-iBGP sessions. The MTGs
and CSGs are labeled BGP PEs and advertise their loopback using labeled IPv4 unicast address family (AFI/
SAFI=1/4).
This scenario is also applicable to point-to-point VPWS services between CSGs and/or FANs in different labeled
BGP access areas. In this scenario, the /32 loopback address of the remote AN is added to the inbound prefix
filter list at the time of service configuration on the local AN, as described in “PW Transport for X-Line Services.”
Figure 77 - Hierarchical LSPs between CSGs and MTGs for Integrated Core and Aggregation Design
The CSG in the RAN access learns the loopback address of the MTG through BGP-labeled unicast. For traffic
flowing between the CSG in the RAN and the MTG in the MPC, as shown in the previous figure, the following
sequence occurs:
1. The downstream CSG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the AGN that is the labeled BGP next hop.
2. The CSGs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the AGN.
3. Since the AGN has reachability to the MTG via the aggregation IGP, it will swap the BGP label with an
LDP label corresponding to the upstream MTG intra-domain aggregation LDP LSP.
The MTG in the MPC learns the loopback address of the remote RAN CSG through BGP-labeled unicast. For
traffic flowing between the MTG and the CSG in the RAN as shown in Figure 77, the following sequence occurs:
1. The downstream MTG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the AGN that is the labeled BGP next hop.
2. The CNs and AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing off to the AGN connecting the RAN access.
3. Since the AGN has reachability to the CSG via the RAN IGP area-x/level-1, it will swap the BGP label
with an LDP label corresponding to the upstream CSG intra-domain RAN LDP LSP.
Figure 78 - BGP Control Plane for Multi-Area IGP Design with Labeled BGP Access
The CN-ABRs are inline-RRs to the PAN clients for the MP-iBGP IPv4 labeled unicast address-family and form
the next level of the RR hierarchy:
• They form iBGP session neighbor groups with the PAN RR-clients that are the labeled BGP PEs
implementing the inter-domain iBGP hierarchical LSPs in the local aggregation network.
• They either form neighbor groups towards other non-client ABRs in the core if a full-mesh configuration
is used or form neighbor groups towards higher level CN-RRs in the core network at the top level of the
hierarchy as shown in Figure 78.
• If the full mesh option is used, the CN-ABRs also act as RRs serving the closest MTG RR clients in the
core network that are labeled BGP PEs implementing the inter-domain iBGP hierarchical LSPs.
• The CN-ABRs reflect the labeled BGP prefixes with the next-hop changed to self in order to insert
themselves into the data path to enable the inter-domain LSP across the aggregation and core domains.
The CN-ABRs are inline RRs for the MP-iBGP VPNv4 and VPNv6 address-family and form the next level of the
RR hierarchy:
• They form iBGP session neighbor groups towards the local aggregation network to serve the PAN RR
clients.
• They either form neighbor groups towards other non-client CN-ABRs in the core if a full-mesh
configuration is used, or form neighbor groups towards higher level CN-RRs in the core network at the
top level of the hierarchy as shown in Figure 78.
• If the full-mesh option is used, the core ABR RRs also form neighbor groups for the closest MTG RR
clients in the core network that are the PEs implementing the LTE MPLS VPN.
Figure 79 - BGP Control Plane for Multi-Area IGP Design with IGP/LDP Access
Figure 80 - BGP Control Plane for Inter-AS Design with Labeled BGP Access
The AGN-RRs are external RRs for the MP-iBGP IPv4 labeled unicast address-family and form the next level of
the RR hierarchy:
• They form iBGP session neighbor groups towards the AGN-ASBR and PAN RR-clients in the aggregation
network.
• The AGN-ASBRs insert themselves into the data path to enable inter-domain LSPs by setting NHS on all
iBGP updates towards their local AGN-RRs and eBGP updates towards neighboring CN-ASBRs.
The AGN-RRs are external RRs for the MP-iBGP VPNv4 and VPNv6 address-family in the aggregation network
and form the next level of the RR hierarchy:
• They form iBGP session neighbor groups towards the local aggregation network to serve the PAN RR clients.
• They enable the LTE VPN service with an eBGP multi-hop session towards the CN-RR in the core
network to exchange VPNv4/v6 prefixes over the inter-domain transport LSP.
The CN-RRs are external RRs for the MP-iBGP VPNv4 and VPNv6 address-family in the core network:
• They form iBGP session neighbor groups in the core network to serve the MTG RR clients that are the
PEs implementing the LTE MPLS VPN.
• They enable the LTE VPN service with an eBGP multi-hop session towards the AGN-RRs in the
neighboring aggregation network ASs to exchange VPNv4/v6 prefixes over the inter-domain transport LSP.
Figure 81 - BGP Control Plane for Inter-AS Design with IGP/LDP Access
Figure 82 - BGP Control Plane for Integrated Core and Aggregation Design with Labeled BGP Access
Scale Considerations
This section describes the route scale and the control plane scaling aspects involved in setting up the Unified
MPLS Transport across the network domains.
As an example, consider a large scale deployment following the Inter-AS design described in “Large Network,
Inter-AS Design with IP/MPLS Access,” including support for Residential, Business, and Mobile Services.
For Mobile Services, the network includes 60,000 CSGs across 20 POPs in a SP network. In the core network,
consider around 10 EPC locations, with each location connected to a pair of redundant MTGs. This leads to a
total of 20 MTGs providing transport connectivity from the core to the CSGs in the RAN access domain. If you
consider that each RAN access domain comprises 30 CSGs connected in physical ring topologies of five nodes
each to the pre-aggregation network, and (for the purpose of illustration) you assume an even distribution of
RAN backhaul nodes across the 20 POPs, you end up with the network sizing shown in the following table.
For Residential and Business wireline services, the network includes 3000 FANs across the same 20 POPs in
the SP network. In addition, there are 20 OLTs per POP providing PON access for wireline services. The access
rings are divided among 100 pairs of PANs per POP, which are configured in rings to 5 pairs of AGNs and 5
pairs of AGN-SE nodes.
The entire POP is aggregated by a pair of AGN-ASBR nodes, which connect to a pair of CN-ASBR nodes for
handling all service traffic transport between the core and aggregation domains.
Large Network
Node type          Access   Per POP   Network (20 POPs)   Comments
CSGs (1-5% FAN)    30       3000      60000               Assuming 100 access rings in each POP with 30 CSGs in each ring (100*30=3000; 20*3000=60000)
FANs (30% RAN)     30       150       3000                Assuming 5 access rings in each POP with 30 FANs in each ring (5*30=150; 20*150=3000)
OLTs               20       200       4000                Assuming 10 access rings in each POP with 20 OLTs in each ring (10*20=200; 20*200=4000)
PANs               2        200       4000                Assuming 10 aggregation rings in each POP with 20 PANs in each ring (10*20=200; 20*200=4000)
AGNs               -        10        200                 Assuming 10 AGNs in each POP (20*10=200)
AGN/PAN-SE         -        10        200                 Assuming 10 AGN/PAN-SEs in each POP (20*10=200)
AGN-ASBR           -        2         40                  (20*2=40)
CN-ASBR            -        2         40                  (20*2=40)
Core Node          -        -         10
MTG                -        -         20                  Assuming 20 MTGs network-wide
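The sizing arithmetic in the table above can be checked mechanically. The per-POP ring counts below are the stated assumptions from the table:

```python
# Sanity check of the large-network sizing: 20 POPs, with the
# per-POP ring assumptions stated in the sizing table.
pops = 20

per_pop = {
    "CSG": 100 * 30,   # 100 access rings x 30 CSGs
    "FAN": 5 * 30,     # 5 access rings x 30 FANs
    "OLT": 10 * 20,    # 10 access rings x 20 OLTs
    "PAN": 10 * 20,    # 10 aggregation rings x 20 PANs
    "AGN": 10,
    "AGN/PAN-SE": 10,
    "AGN-ASBR": 2,
    "CN-ASBR": 2,
}

totals = {node: pops * count for node, count in per_pop.items()}
print(totals["CSG"], totals["FAN"], totals["PAN"])  # 60000 3000 4000
```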
As another example, consider a smaller scale deployment following the single-AS, multi-area design described
in “Small Network, Integrated Core and Aggregation with IP/MPLS Access,” including support for Residential,
Business, and Mobile Services.
For Mobile Services, the network includes 7,000 CSGs across 10 POPs in a SP network. In the core network,
consider around 5 EPC locations, with each location connected to a pair of redundant MTGs. This leads to a
total of 10 MTGs providing transport connectivity from the core to the CSGs in the RAN access domain. If you
consider that each RAN access domain comprises 30 CSGs connected in physical ring topologies of five nodes
each to the pre-aggregation network, and (for the purpose of illustration) you assume an even distribution of
RAN backhaul nodes across the 10 POPs, you end up with the network sizing shown in the following table.
For Residential and Business wireline services, the network includes 300 FANs across the same 10 POPs in the SP
network. In addition, there are 20 OLTs per POP providing PON access for wireline services. The access rings are
divided among 25 pairs of PANs per POP, which are configured in rings to a pair of AGNs and a pair of AGN-SE nodes.
The entire POP is aggregated by a pair of Core nodes for handling all service traffic transport between the core
and aggregation domains.
Small Network
Node type    Access   Per POP   Network (10 POPs)   Comments
CSGs         30       700       7000                Assuming 23 access rings in each POP with 30 CSGs in each ring (23*30=690, rounded to 700; 10*700=7000)
FANs         30       30        300                 Assuming 1 access ring in each POP with 30 FANs in each ring (10*30=300)
OLTs         20       20        200                 Assuming 1 ring with 20 OLTs in each POP (10*20=200)
PANs         2        50        500                 Assuming 2 PANs per access ring and 25 rings per POP
AGNs         -        2         20                  2 AGNs per POP
AGN-SE       -        2         20                  2 AGN-SEs per POP
Core Node    -        2         20                  2 core nodes per POP
MTG          -        -         10
The Cisco FMC system architecture provides a scalable solution to this problem by adopting a divide-and-
conquer approach of isolating the access, aggregation, and core network layers into independent and isolated
IGP/LDP domains. While LDP is used to set up intra-domain LSPs, the isolated IGP domains are connected to
form a unified MPLS network in a hierarchical fashion by using RFC 3107 procedures based on iBGP to exchange
loopback addresses and MPLS label bindings for transport LSPs across the entire MPLS network. This approach
prevents the flooding of unnecessary routing and label binding information into domains or parts of the network
that do not need them. This allows scaling the network to hundreds of thousands of LTE cell sites without
overwhelming any of the smaller nodes, such as the CSGs, in the network. Since the route scale in each
independent IGP domain is kept to a minimum, and all remote prefixes are learned via BGP, each domain can
easily achieve subsecond IGP fast convergence.
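A back-of-the-envelope comparison makes the point, using the totals from the earlier large-network sizing table. Which nodes count as carrying a transport loopback here is an assumption for illustration:

```python
# Loopbacks a single flat IGP would have to carry (large-network
# example) versus what one RAN ring's IGP carries when the domains
# are isolated and remote prefixes arrive via labeled BGP.
flat_igp_loopbacks = (
    60000      # CSGs
    + 3000     # FANs
    + 4000     # PANs
    + 200      # AGNs
    + 200      # AGN/PAN-SEs
    + 40 + 40  # AGN-ASBRs + CN-ASBRs
    + 10 + 20  # core nodes + MTGs
)
ran_ring_igp = 30 + 2  # one ring's CSGs plus its two PANs

print(flat_igp_loopbacks, ran_ring_igp)  # 67510 32
```

The three-orders-of-magnitude gap in IGP route scale is what makes subsecond IGP convergence achievable in each domain.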
Figure 83 - End-to-End IGP Domain Layout: RAN Area/Level (OSPF x/IS-IS L1), Aggregation and Core Areas/Levels (OSPF 0/IS-IS L2)
Figure 84 - BGP Peering Organization with FSE, MSE, and Common BGP Communities
For the example network sizing shown in Table 7, if you consider the peering organization illustrated in Figure 84,
you have the following BGP session scale on different elements in the network (see Table 7).
Notes:
• CSGs in each RAN access domain peer with their two redundant local PAN inline-RRs.
• PANs in each aggregation domain peer with their CSG clients and with the AGN-RR for that domain.
• AGN-SEs in each aggregation domain peer with the AGN-RR for that domain.
• AGN-ASBRs in each aggregation domain peer with the AGN-RR for that domain.
• CN-ASBRs peer with the redundant external CN-RRs in the core domain.
• MTGs in the core domain that connect with regional EPC GWs peer with the redundant external CN-RRs.
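The notes above imply a rough per-node session count. The sketch below assumes two redundant PAN inline-RRs per access ring and two redundant AGN-RRs per POP; those redundancy counts are illustrative assumptions, not figures from the notes:

```python
# Rough iBGP session counts implied by the peering notes above,
# using the large-network sizing (30 CSGs per access ring).
csgs_per_ring = 30
pan_rrs_per_ring = 2   # assumed: 2 redundant PAN inline-RRs per ring
agn_rrs_per_pop = 2    # assumed: 2 redundant AGN-RRs per POP

# Each CSG peers only with its two local PAN inline-RRs.
csg_sessions = pan_rrs_per_ring

# Each PAN peers with its ring's CSG clients and the AGN-RRs.
pan_sessions = csgs_per_ring + agn_rrs_per_pop

print(csg_sessions, pan_sessions)  # 2 32
```

The key scaling property is that the session count on a CSG stays constant regardless of network size; only the RR layers see growth.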
Quality of Service
The Cisco FMC system applies the IETF DiffServ Architecture (RFC 2475) across all network layers, utilizing
classification mechanisms like MPLS Experimental (EXP) bits, IP DSCP, IEEE 802.1p, and ATM CoS for
implementing the DiffServ PHBs in use.
In a transport network, congestion can occur anywhere. However, congestion is more likely where statistical
estimates of peak demand are conservative (that is, where links are under-provisioned), which is more often
the case for access and aggregation links. Because congestion occurs whenever the instantaneous ingress
bandwidth at a node exceeds its egress bandwidth (assuming the node can process all ingress traffic), all
nodes must be able to implement DiffServ scheduling functions. With DiffServ, the under-provisioning is
deliberately distributed unevenly among the transported services: higher quality services (like voice over IP
[VoIP] and video) are effectively over-provisioned, while other services absorb differing levels of
under-provisioning. This is in line with the functional requirements defined by standards bodies, such as the
NGMN specifications, the Broadband Forum TR-221 specification for mobile backhaul, and TR-101 for
Ethernet-based aggregation networks for residential and business services.
Each network layer defines an administrative boundary, where traffic remarking may be required in order to
correlate the PHBs between different administrative domains. A critical administrative and trust boundary is
required for enforcing subscriber SLAs. Subscriber SLAs are enforced with sound capacity management
techniques and functions, such as policing/shaping, marking, and hierarchical scheduling mechanisms.
This administrative boundary is implemented by the access devices for traffic received (upstream) from the
subscribers and by the core nodes for traffic sent (downstream) to the subscribers.
For mobile services, the access device performing this administrative boundary function ranges from the CSG
to the NodeB equipment to the radio controllers, depending on the service model; for business and residential
services, the function is delegated exclusively to the ANs.
Figure 85 and Figure 86 depict the QoS model implemented for the upstream and downstream directions. Within
the aggregation and core networks, where strict control over residential and business subscribers' SLAs is not
required, a flat QoS policy with a single-level scheduler is sufficient to provide the desired DiffServ
functionality among the different classes of traffic, as all links are operated at full line rate transmission.
Hierarchical QoS policies are required whenever the relative priorities across the different classes of traffic are
significant only within the level of service offered to a given subscriber, and/or within a given service category,
such as residential, business, or mobile.
In the downstream direction, H-QoS for a given subscriber should be performed at the service edge node whenever
possible to ensure the most efficient usage of link bandwidth throughout the access network.
For an Ethernet-based access NNI and residential services, the service edge node acting as the BNG is
capable of applying QoS at the subscriber level, with per-subscriber queuing and scheduling, as well as at the
aggregate level for all residential subscribers sharing the same N:1 VLAN or a range of 1:1 VLANs.
Mobile services also require the implementation of H-QoS for access bandwidth sharing. Moreover, in the case
of microwave links in the access, where the wireless portion of the link is only capable of sub-gigabit speeds
(typically 400 Mbps sustained), a parent shaper may be used to throttle transmission to the sustained microwave
link speed.
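The microwave case can be sketched numerically. Everything below (class names, offered rates, and the proportional sharing of the post-priority leftover) is an illustrative assumption, not a rate plan from this design:

```python
# Sketch of the H-QoS parent-shaper idea on a microwave access link:
# the parent shaper caps aggregate transmission at the sustained
# radio rate, even though the physical port is faster. Rates in Mbps
# are illustrative assumptions.
PHYSICAL_PORT_MBPS = 1000        # GigE toward the radio
MICROWAVE_SUSTAINED_MBPS = 400   # sustained over-the-air rate

offered = {"EF": 100, "AF4": 150, "AF2": 150, "BE": 600}  # offered load

parent_rate = min(PHYSICAL_PORT_MBPS, MICROWAVE_SUSTAINED_MBPS)

# EF (priority) is served first; the other classes share the leftover
# in proportion to their offered load (a simplified fair-share model).
leftover = parent_rate - offered["EF"]
nonpriority = {k: v for k, v in offered.items() if k != "EF"}
weight_total = sum(nonpriority.values())

served = {"EF": offered["EF"]}
for name, rate in nonpriority.items():
    served[name] = min(rate, leftover * rate / weight_total)

print(parent_rate, round(served["BE"]))  # 400 200
```

Without the parent shaper, the port would transmit at 1 Gbps and the radio itself would drop traffic indiscriminately; shaping at 400 Mbps moves the drop decision into the DiffServ scheduler.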
Whenever subscriber SLAs are managed at the service edge and the access UNI is not multiplexed, a flat QoS
policy can be applied to the AN in order to manage relative priority among the classes of traffic at each UNI
port. Multiplexed UNIs, typical of business services, require an H-QoS policy for relative prioritization among
services first and then among classes of traffic within each service. In those scenarios, H-QoS on the service
edge nodes may enforce peak information rate (PIR) levels, while the access UNI may enforce the committed
information rate (CIR) levels.
For an MPLS-based NNI, most services do not have a corresponding attachment point at the service edge node,
and therefore the majority of the service-level H-QoS logic happens at the AN. The exception is the L3VPN
business services, for which the customer-edge to provider-edge (CE-PE) LSP is terminated over a PWHE
interface at the service edge node, which becomes the injection point for H-QoS.
Figure 85 - QoS Model, Upstream Direction
Figure 86 - QoS Model, Downstream Direction
Traffic Types                                                PHB   DSCP   EXP   802.1p   ATM CoS
Residential Voice; Business Real-time; Network Sync
(1588 PTP); Mobility & Signaling; Mobile
Conversation/Streaming                                       EF    46     5     5        CBR
Residential TV and Video Distribution; VQE Fast
Channel Change, Repair                                       AF    32     4     4        NA
Business Telepresence                                        AF    24     3     3        NA
Business Critical, In Contract                               AF    16     2     2        VBR-nrt
Business Critical, Out of Contract                           AF    8      1     1        VBR-nrt
Residential HSI; Business Best Effort; Mobile Background     BE    0      0     0        UBR
Traffic marked as expedited forwarding (EF) is grouped in a single class serviced with priority treatment to satisfy
stringent latency and delay variation requirements. The EF PHB defines a scheduling logic able to guarantee an
upper limit to the per hop delay variation caused by packets from non-EF services.
This category includes residential voice and business real-time traffic, mobile network timing synchronization
(1588 PTP), and mobile signaling and conversation traffic (GSM Abis, UMTS Iub control plane and voice user
plane, LTE S1c, X2c, and the LTE guaranteed bit rate [GBR] user plane).
Traffic marked as assured forwarding (AF) is divided over multiple classes. Each class is guaranteed a predefined
amount of bandwidth, establishing relative priorities while maintaining fairness among classes and limiting
the latency that traffic in each class may experience.
The Cisco FMC system defines five AF classes, two of which are reserved for network traffic, control and
management, and the remaining three are dedicated to traffic from residential and business services, such as
residential TV and video distribution, and business TelePresence and mission-critical applications.
For Ethernet UNI interfaces, upstream traffic classification is based on IP DSCP or 802.1P CoS markings. The
ingress QoS service policy will match on these markings and map them to the corresponding DSCP and/or
MPLS EXP value, depending on the access NNI being Ethernet or MPLS based. In the downstream direction, IP
DSCP markings are preserved through the Unified MPLS Transport and may be used for queuing and scheduling
at the UNI as well as for restoring 802.1P CoS values.
Specific to mobile services, TDM UNI interfaces transported via CEoP pseudowires require all traffic to be
classified as real-time with EF PHB. The ingress QoS service policy matches all traffic inbound to the interface,
and applies an MPLS EXP value of 5. No egress service policy is required for TDM UNI interfaces. For ATM UNI
interfaces to be transported via CEoP pseudowires or used for business services, traffic is classified according
to the ATM CoS on a particular VC. The ingress QoS service policy is applied to the ATM permanent virtual circuit
(PVC) subinterface and imposes an MPLS EXP value that corresponds to the type of traffic carried on the VC and
proper ATM CoS. For further distinction, the ingress QoS service policy may also match on the
cell loss priority (CLP) bit of the incoming ATM traffic and map it to two different MPLS EXP values.
For egress treatment, the PVC interface is configured with the proper ATM CoS. If the CLP-to-EXP mapping
is being used, then an egress QoS service policy applied to the ATM PVC subinterface can map an EXP value
back to a CLP value for proper egress treatment of the ATM cells.
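As an illustrative sketch of the classification just described (policy and class-map names are hypothetical placeholders, not taken from this guide), an IOS-style MQC configuration might look like:

```
! TDM UNI over CEoP: classify all inbound traffic as real-time (EF)
policy-map CEOP-TDM-IN
 class class-default
  set mpls experimental imposition 5
!
! ATM UNI: map the CLP bit to two different MPLS EXP values
class-map match-all ATM-CLP-SET
 match atm-clp
policy-map ATM-PVC-IN
 class ATM-CLP-SET
  set mpls experimental imposition 1
 class class-default
  set mpls experimental imposition 2
```

A matching egress policy on the PVC subinterface would reverse the EXP-to-CLP mapping when that option is in use.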
At the service edge node, classification performed at the access-facing NNI uses a different set of markings
depending on the technology used. For an Ethernet-based access NNI in the upstream direction, classification is
based on IP DSCP or 802.1P CoS markings. The ingress QoS service policy will match on these markings and
map them to the corresponding MPLS EXP value for transport toward the core. In the downstream direction, IP
DSCP markings are preserved through the Unified MPLS Transport and may be used for queuing and scheduling
as well as for restoring 802.1P CoS values before forwarding.
For an MPLS-based access NNI and in upstream direction, classification is based on IP DSCP or MPLS EXP
markings. The ingress QoS service policy will match on these markings, which are retained when forwarding
toward the core. In the downstream direction, IP DSCP or MPLS EXP markings preserved through the Unified
MPLS Transport can be used for queuing and scheduling toward the access NNI.
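A hedged sketch of the Ethernet access NNI upstream behavior (class, policy, and interface names are placeholders; the DSCP-to-EXP values follow the marking plan earlier in this chapter):

```
class-map match-any REALTIME
 match dscp ef
class-map match-any VIDEO
 match dscp cs4
policy-map ACCESS-NNI-IN
 class REALTIME
  set mpls experimental imposition 5
 class VIDEO
  set mpls experimental imposition 4
 class class-default
  set mpls experimental imposition 0
!
interface GigabitEthernet0/1
 service-policy input ACCESS-NNI-IN
```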
All the remaining core, aggregation, and access network traffic classification is based on MPLS EXP or DSCP.
The core network may use different traffic marking and simplified PHB behaviors, therefore requiring traffic
remarking in between the aggregation and core networks.
Synchronization Distribution
Every mobile technology deployment has synchronization requirements in order to enable aspects such as radio
framing accuracy, user endpoint handover between cell towers, and interference control on cell boundaries.
Some technologies only require frequency synchronization across the transport network, while others require
phase and time-of-day (ToD) synchronization as well. The Cisco FMC system delivers a comprehensive model
for providing network-wide synchronization of all three aspects with an accuracy that exceeds the threshold
requirements of any mobile technology deployed across the system.
The primary target for the current system release is to provide frequency synchronization by using the Ethernet
physical layer (SyncE) and phase and ToD synchronization by using IEEE 1588-2008 PTP. SyncE operates on
a link-by-link basis and will provide a high quality frequency reference similar to that provided by SONET and
SDH networks. SyncE is complemented by the Ethernet Synchronization Message Channel (ESMC), which allows
a quality level (QL) value to be transmitted over SyncE-enabled links, much like the synchronization status message in
SONET and SDH. This allows a SyncE node to select the timing signal from the best available source and helps
detect timing loops, which is essential for the deployment of SyncE in ring topologies.
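A minimal SyncE configuration sketch (IOS style, as on an ASR 901; the interface and input-source priority are hypothetical) illustrating QL-based source selection:

```
! Enable automatic clock selection driven by ESMC quality levels
network-clock synchronization automatic
network-clock synchronization ssm option 1
!
! Nominate a SyncE-capable interface as a timing input
network-clock input-source 1 interface GigabitEthernet0/1
!
interface GigabitEthernet0/1
 synchronous mode
```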
The Cisco FMC system also supports a combination of SyncE and PTPv2 in a hybrid synchronization
architecture, aiming to improve the stability and accuracy of the phase and frequency synchronization delivered
to the client for deployments such as Time Division Duplex (TDD)-LTE eNodeBs. In such an architecture, the
packet network infrastructure is frequency synchronized by SyncE. The phase signal is delivered by 1588-
2008 PTPv2. The CSG, acting as a PTP ordinary clock or as a Boundary Clock (BC), may combine the two
synchronization methods, using the SyncE input as the frequency reference clock for the 1588-2008 PTP
engine. The combined recovered frequency and phase can be delivered to clients via 1 pulse per second (PPS),
10 MHz, and Building Integrated Timing Supply (BITS) timing interfaces, SyncE, and PTP. For access networks that
do not support SyncE, the hybrid 1588 BC function may be moved to the PANs.
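A hedged sketch of the hybrid clock on the CSG (IOS style; the PTP domain, clock-port name, and master address are hypothetical):

```
! Ordinary clock in hybrid mode: SyncE supplies frequency,
! while PTP supplies phase/ToD
ptp clock ordinary domain 0 hybrid
 clock-port SLAVE slave
  transport ipv4 unicast interface Loopback0 negotiation
  clock source 10.0.0.1
```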
Figure 88 illustrates how synchronization distribution is achieved for Mobile RAN services over both fiber and
microwave access networks in the Cisco FMC architecture.
[Figure 88: Synchronization distribution. A Global Navigation Satellite System (e.g., GPS, GLONASS, GALILEO) Primary Reference Time Clock (PRTC) and a PRC/PRS feed the IP/MPLS transport network. The Cell Site Gateway (ASR 901) operates toward the client as a 1588 BC+OC in hybrid mode with SyncE; the pre-aggregation node (ASR 903) and aggregation node (ASR 9000) act as 1588 BCs; core nodes are CRS-3/ASR 9000. Access uses fiber or microwave links and rings; aggregation uses DWDM, fiber rings, H&S, hierarchical topology; core uses DWDM, fiber rings, mesh topology.]
The time (phase and time-of-day [ToD]) source for the mobile backhaul network is the Primary Reference Time
Clock (PRTC), which is usually based on a GNSS receiver that derives time synchronization from one or more
satellite systems with traceability to Coordinated Universal Time (UTC). A PRC provides a frequency signal
of G.811/Stratum-1 quality (traceable to UTC frequency if coming from GNSS) to the AGNs via G.703-
compliant dedicated external interfaces (also known as BITS input) or a 10 MHz interface. A PRTC provides time via a 1PPS
signal for phase and a serial ToD interface. The DOCSIS Timing Interface (DTI) is an alternative to the frequency,
1PPS, and ToD interfaces. A PRTC can also provide frequency, as a PRC does. If required by the architecture, the IEEE
1588 Primary Master Clock (PMC) also derives synchronization from the PRC or PRTC. From this point, three
models of synchronization distribution are supported:
• For mobile services that only require frequency synchronization, where all network nodes support SyncE,
frequency is carried to the NodeB via SyncE. The ESMC provides source traceability between
nodes through the Quality Level (QL) value, which helps select the best signal and prevent timing loops
in SyncE topologies.
• For mobile services that require frequency synchronization over an infrastructure that does not support SyncE,
1588v2 PTP is utilized for frequency distribution. The PMC generates a PTP stream for each PTP slave,
routed globally from the regional MTG to the CSG, which then provides sync to
the eNodeB. The PMC can be a network node that receives the frequency source signal via the physical
layer (e.g., SyncE). Proper network engineering must prevent excessive packet delay variation (PDV) so that the timing
network can deliver a packet-based quality signal to the slaves.
• For mobile services that require frequency and phase and/or time-of-day (ToD) synchronization, IEEE
1588-2008 PTP can be used in conjunction with SyncE to provide a hybrid synchronization solution,
where SyncE provides accurate and stable frequency distribution and PTPv2 delivers
phase and/or ToD synchronization. In this Cisco FMC system release, the PTPv2 streams are routed
globally from the regional MTG to the CSG, which, combined with the SyncE frequency, then provides
synchronization to the eNodeBs.
In general, a packet-based timing mechanism such as IEEE 1588 PTP has strict packet delay variation
requirements, which restrict the number and type of hops over which the timing recovered from the source
remains valid. With the globally routed model, strict priority queuing of the PTP streams is necessary. With a good
implementation of 1588 BC on intermediate transit nodes, it is possible to provide better guarantees over more
hops from the PMC to the NodeB.
Scalability and reliability of PTPv2 in the Cisco FMC system is enhanced by enabling BC in some or all of the
following: the aggregation node, the PAN, and the CSG. Implementing BC functionality in these nodes serves
two purposes:
• Increases scaling of PTPv2 phase/frequency distribution, by replicating a single stream from the PMC to
multiple destinations, thus reducing the number of PTP streams needed from the PMC.
• Improves the phase stability of PTPv2, by stabilizing the frequency of the PTP servo with SyncE or
another physical frequency source as described in the hybrid synchronization architecture.
The Cisco FMC system implements the following baseline transport mechanisms for improving network
availability:
• For intra-domain LSPs, remote LFA FRR is utilized for unicast MPLS/IP traffic in both hub-and-spoke
and ring topologies. Remote LFA FRR pre-calculates a backup path for every prefix in the IGP routing
table, allowing the node to rapidly switch to the backup path when a failure is encountered, providing
recovery times on the order of 50 msec. More information regarding LFA FRR can be found in IETF RFCs
5286, 5714, and 6571. Also integrated are BFD rapid failure detection and ISIS/OSPF extensions for
incremental shortest-path first (SPF) and LSA/SPF throttling (Cisco IOS XR defaults should be applied to
IOS devices).
• For inter-domain LSPs, network reconvergence is accomplished via BGP core and edge FRR throughout
the system, allowing for deterministic network reconvergence on the order of 100 msec, regardless of the
number of BGP prefixes. BGP FRR is similar to remote LFA FRR in that it pre-calculates a backup path
for every prefix in the BGP forwarding table, relying on a hierarchical Label Forwarding Information Base
(LFIB) structure to allow for multiple paths to be installed for a single BGP next hop. BGP core and edge
FRR each handle different failure scenarios within the transport network:
◦◦ Core FRR is used when the BGP next hop is still active, but there is a failure in the path to that
next hop. As soon as the IGP has reconverged, the pointer in BGP is updated to use the new IGP
next hop and forwarding resumes. Thus, the reconvergence time for BGP is the same as the IGP
reconvergence, regardless of the number of BGP prefixes in the RIB.
◦◦ Edge FRR is used for redundant BGP next hops, like the case where there are redundant ABRs.
Additional path functionality is configured on the PE routers and RRs to install both ABR paths in
the RIB and LFIB instead of just the best path. When the primary ABR fails, BGP forwarding simply
switches to the path of the backup ABR instead of having to wait for BGP to reconverge.
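The mechanisms above could be sketched in IOS XR as follows (the IGP instance name, AS number, interfaces, and route-policy name are hypothetical placeholders):

```
! Remote LFA FRR for intra-domain LSPs
router isis ACCESS
 interface GigabitEthernet0/0/0/1
  address-family ipv4 unicast
   fast-reroute per-prefix
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp
!
! BGP FRR (PIC): install a backup path for every prefix
route-policy BACKUP-PATH
  set path-selection backup 1 install
end-policy
!
router bgp 100
 address-family ipv4 unicast
  additional-paths receive
  additional-paths selection route-policy BACKUP-PATH
```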
[Figure: End-to-end resiliency architecture. Mobile access networks (IS-IS L1) connect through aggregation networks (OSPF 0/IS-IS L2) to the core network (IS-IS L2). iBGP IPv4+label sessions, reflected by aggregation RRs and CN-RRs, stitch the per-domain LDP LSPs into an iBGP hierarchical LSP from the CSGs to the MTG, which attaches to the Mobile Packet Core (MME, SGW/PGW). BGP PIC edge and core provide reconvergence of <100 msec in the core and aggregation domains; LFA FRR and remote LFA FRR provide <50 msec in the access.]
Mobile Services
The LTE transport MPLS VPN services between the CSG and MTG implement the following mechanisms for
improving network availability:
• For UNI connections at the CSG to the eNodeB, static routes to the MTG are utilized in the eNodeB.
• For UNI connections to the MPC from the MTG, fast IGP convergence with BFD keep-alive checks
or multichassis Link Aggregation Control Protocol (mLACP) port-bundles are utilized. Virtual Router
Redundancy Protocol (VRRP) between MTGs allows for a single IP address to be configured in the
eNodeB and MPC.
• For the MPLS VPN transport between the CSG and MTG, network convergence is handled by BGP FRR,
similar to the base transport infrastructure.
For PWE3-based circuit emulation services providing transport of TDM-based 2G and ATM-based 3G services,
the following mechanisms are implemented:
• For TDM and ATM connections from the MTG to the BSC or RNC, MR-APS allows for redundant
connections.
• For the Circuit Emulation PWE3 between the CSG and MTG, backup pseudowires provide failover
protection in the transport network.
Business Services
Business L3VPN services implement the following mechanisms for improving network availability:
• For UNI connections at the FAN to the CPE, static routes to the service edges are utilized in the CPE,
transported from the FAN by PWs.
• For the service MPLS VPN between service edges, network convergence is handled by BGP FRR,
similar to the base transport infrastructure.
Business L2VPN services implement the following mechanisms for improving network availability:
• For TDM and ATM connections at the FAN, MR-APS allows for redundant connections.
• For Ethernet connections at the FAN, mLACP allows for redundant connections.
• Transport redundancy for VPLS services is provided via backup pseudowires.
• Transport redundancy for VPWS services is provided via two-way backup pseudowires.
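As a hedged IOS XR sketch of pseudowire redundancy for a VPWS service (the group name, attachment interface, neighbor addresses, and pw-ids are hypothetical):

```
l2vpn
 xconnect group BUSINESS
  p2p CUSTOMER-A
   interface GigabitEthernet0/0/0/2.100
   neighbor 10.0.0.1 pw-id 100
    backup neighbor 10.0.0.2 pw-id 100
```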
Residential Services
The second release of the Cisco FMC system focuses on both Ethernet (FTTH/PON) and MPLS access (DSL)
with BNG functions deployed at the aggregation devices that connect directly to the ANs via physical links or
through pseudowire overlay. As in the typical residential architecture, subscribers are single-homed to the ANs,
while the AN is homed to the BNG over multiple paths.
Toward the core, transport redundancy for residential services in both models leverages remote LFA FRR
techniques at each intra-domain site and BGP PIC core and edge for inter-domain connectivity.
[Figure: Residential resiliency. The AN is homed to the BNG over a link bundle with active/standby links, while iBGP provides the inter-domain IP connectivity toward the core.]
Multicast
Resiliency for multicast transport in the Cisco FMC system is handled via multicast LDP convergence for both
residential and business services.
To meet these needs, operators must be able to offer multiple tiers of service, perform real-time metering
for pre-paid charging and fair use policies, and capitalize on the network infrastructure to offer more than just
internet access services, adding voice and video on top of the basic data offerings. A policy-driven approach is
instrumental for implementing the business rules that govern subscriber data usage and application entitlements
needed to support the different service plans and meet the unique requirements of individual SPs.
A typical policy management infrastructure consists of a number of distinct functions that can co-reside in the
same appliance or be spread across multiple devices. These include web portals, subscriber databases, and
charging and billing functions, all orchestrated by a policy controller device that acts as a single point of contact
for the policy enforcement points (PEPs) in the network, such as the BNG devices.
[Figure: Policy management infrastructure. The policy controller is the single point of contact between the northbound systems (subscriber database, charging and billing systems, and the OSS/BSS portal, reached via SOAP) and the southbound BNG policy enforcement point, reached via RADIUS and RADIUS CoA, which sits between the access network and the internet/application servers.]
The second release of the Cisco FMC system has selected Cisco Quantum Policy Suite (QPS) as a policy
controller that integrates policy management (Quantum Policy Server [PS]), subscriber database (Quantum
Unified Subscriber Manager [USuM]), and charging/billing function (Quantum Charging Server [CS]) in a single
appliance.
The Cisco FMC system implements a number of use cases requiring subscriber interaction with the portal that are
described in detail in the “Subscriber Experience Convergence” section of the “System Architecture” chapter.
Subscriber Databases
Subscriber databases are data storage engines that maintain subscriber profiles and policy information, such as
credential, purchased service packages and billing information. This data is used for subscriber authentication
and policy determination for subscriber provisioning in the network, as well as for billing purposes.
When a new or returning subscriber connects into the network, the BNG initiates a RADIUS authentication exchange
toward the policy controller that, by acting as an AAA server, performs a look up in the subscriber databases in order
to validate user credentials and to download the user profile. The user profile, containing the subscriber policies to be
activated, is then returned to the BNG as part of the same RADIUS authentication exchange.
User profiles in the subscriber database can be updated at any time for administrative reasons or as a result of
user activities on the self-management portal.
The Cisco FMC system leverages the integrated subscriber database available in Cisco Quantum Policy Suite.
Offline monitoring is based on the post processing of charging information as part of a billing cycle and does not
affect in real time the service rendered to subscribers.
Online monitoring happens in real time. Functionality includes transaction handling, rating, and online correlation
and management of subscriber accounts/balances. Charging information can affect, in real time, the service
being offered to a subscriber and therefore requires dynamic modifications to the policies active on the
subscriber’s session at the BNG. This happens with the involvement of the Policy Controller. OCS functions can
be leveraged both for pre-paid charging for network access services and for the deployment of fair use policies.
The Cisco FMC system leverages the integrated OCS and OfCS functions available in Cisco Quantum Policy
Suite.
Policy Controller
The policy controller is the policy decision point in the Cisco FMC network. It includes northbound interfaces to OSS/
business support system (BSS) platforms, web servers, OCS/OfCS systems, and subscriber databases, and southbound
interfaces to PEPs, such as BNGs.
Northbound interfaces allow the policy manager to communicate with a number of appliances to compute in real
time subscriber’s service entitlement, based on predefined information (for example, subscriber’s subscription)
as well as dynamically triggered events, that could be of administrative-, user- or network-driven nature.
Southbound interfaces are the vehicle by which the dynamic provisioning of the subscriber happens on the PEP.
Embedded AAA functions enable the Policy Controller to provide RADIUS-based authentication and authorization
services to the BNG, while sophisticated rule-based engines allow for the implementation of dynamic policy
modifications to BNG’s subscriber sessions via RADIUS CoA interfaces.
The Policy Controller is also responsible for the processing, manipulation, and format conversion of the RADIUS
accounting messages generated by the BNG to report subscriber network usage, which are then consumed by the OCS/
OfCS systems.
Dynamic session states are maintained for each subscriber for tracking purposes and for the execution of
advanced rules based on uptime, time of day, current active service, network usage, or other triggers.
Cisco Quantum Policy Suite embeds a policy controller, an AAA server, OCS and OfCS systems, and subscriber
databases in the same appliance.
In order to provide efficient transport of multicast-based services via MPLS, the Multicast Label Distribution Protocol
(MLDP) provides extensions to LDP, enabling the setup of multipoint Label-Switched Paths (MP LSPs) without
requiring multicast routing protocols such as Protocol Independent Multicast (PIM) in the MPLS core. The two
types of MP LSPs that can be set up are point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP) type
LSPs. MLDP constructs the P2MP or MP2MP LSPs without interacting with or relying upon any other multicast
tree construction protocol. The benefit of using MLDP is that it utilizes the MPLS infrastructure for transporting
IP multicast packets, providing a common data plane (based on label switching) for both unicast and multicast
traffic while maintaining service separation.
[Figure: Inter-domain label-switched multicast. The access PE sends an mLDP request with an opaque TLV pointing to the BGP next hop for the source; each ABR (aggregation node) performs a recursive lookup, building a flat MP/P2MP LSM based on recursive mLDP over the BGP hierarchical P2P LSP and the per-domain LDP LSPs spanning the access, aggregation, and core IP/MPLS domains.]
The following considerations should be taken into account when deploying multicast support in the Cisco FMC
system:
• MLDP configuration is required in all MPLS nodes utilized in transporting an MLDP-based MVPN. Such
MPLS nodes are mainly PE and P-routers for Intra-AS. For Inter-AS, MLDP must also be enabled in the
ASBRs.
• MLDP uses the LDP-enabled interfaces by default. Use the mldp disable command to explicitly disable
a particular LDP interface from running MLDP.
• Considering that MLDP uses the LDP-enabled interfaces, the ASBR interfaces that connect the two
different ASes (in the Inter-AS scenario) must be enabled for LDP as well.
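A minimal IOS XR sketch of the considerations above (interface names are placeholders); mLDP runs over the LDP-enabled interfaces, so the ASBR links must be LDP-enabled as well:

```
mpls ldp
 mldp
  address-family ipv4
 !
 interface GigabitEthernet0/0/0/0
 ! ASBR link toward the neighboring AS: LDP (and hence
 ! mLDP) must be enabled here for the Inter-AS scenario
 interface GigabitEthernet0/0/0/1
```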
The Cisco FMC system validates multicast only for IPoE subscribers. The support of Native IPoE multicast
with PPPoE subscribers requires coexistence of PPP and native IP on the same CPE WAN port, which is not
supported by the CPEs used in the architecture.
On the core/aggregation side, transport of multicast traffic for both residential and business services follows
a Rosen-based multicast VPNv4 (mVPNv4) approach, with multicast LDP (MLDP) signaling in the core of the
network for the setup of P2MP LSPs and PIM SSM at the service edge only. Forwarding of multicast traffic in the
core network is therefore based on label switching. Both global and VPN based forwarding of multicast traffic are
explored for MPLS and Ethernet Access respectively.
The multicast distribution trees (MDT) between service edge routers are built leveraging BGP Auto Discovery
(BGP-AD) and BGP Customer Multicast (C-MCAST) signaling. BGP-AD allows for the automatic discovery of the
PEs involved in the MVPN, while BGP C-MCAST signaling translates IGMP/PIM joins from the access network
into BGP joins for C-mroutes advertisement among PEs in the MPLS core.
Specific to residential services, default MDTs (Multidirectional Inclusive Provider Multicast Service Instance
[MI-PMSI]) and data MDTs (Selective Provider Multicast Service Instance [S-PMSI]) are used for multicast delivery
through the core. The default-MDT connects all PEs in a MVPN in a full mesh fashion. The data-MDT is used to
transport high-rate multicast flows in order to offload traffic from the default MDT, thus avoiding unnecessary
waste of bandwidth and resources to PEs that did not explicitly join the high-rate multicast stream.
BGP also plays an important role in the above mentioned MVPN profile. The BGP-AD is used to discover the PEs
involved in the MVPN, while the BGP C-MCAST signaling translates PIM joins coming from the CPE side into BGP
joins to distribute C-mroutes among PEs. The use of BGP unifies the signaling protocol in the MPLS core wherein
BGP is used for both unicast and multicast rather than BGP for unicast and PIM for multicast in the core.
For business services, the Cisco FMC system supports MVPNv4 for transporting IPv4 multicast, and MVPNv6 for
transporting IPv6 multicast. An MLDP IPv4 (MLDPv4) core tree is used for both MVPNv4 and MVPNv6 services.
In the Core and Aggregation networks down to the service edge node, Label-Switched Multicast (LSM) is
utilized for transport of eMBMS, which in turn utilizes the mLDP-Global in-band signaling profile. In this profile,
PIM is required only at the edge of the network domain, eliminating the requirement of deploying PIM in the core
network. In the Cisco FMC system design, PIM Source Specific Multicast (PIM-SSM) is used to integrate the
multicast transport with the access networks.
In this release of the Cisco FMC system design, only Single-AS, Multi-area models
support LSM with mLDP-Global signaling. A future release of FMC will extend support
to Inter-AS models as well.
In the access network, and from the PAN to the AGN-SE node if the service edge functionality is not in the PAN,
native IP PIM with SSM is utilized for the transport of IPv4 and IPv6 multicast for eMBMS. This shift permits lower
cost and lower power devices to be utilized in the access network by not requiring recursion processing for
MPLS encapsulation of the multicast traffic.
On the UNI from the CSG at the edge of the access network to the eNB, two VLANs are utilized to deliver the
various interfaces to the eNB. One VLAN handles unicast interface (S1, X2, M3) delivery, while the other handles
M1 multicast traffic delivery.
When a multicast service is requested from a user endpoint device, the eNodeB will signal the transport network
to start the requested eMBMS service. The Cisco FMC system design supports both IGMPv2 and IGMPv3
signaling from the eNodeB to the CSG.
• For IGMPv2, the CSG will statically map the IGMP requests to the proper PIM-SSM groups.
• For IGMPv3, the CSG supports dynamic IGMP to PIM-SSM mapping.
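The IGMPv2 static mapping on the CSG might be sketched as follows (IOS style; the ACL name, group range, and source address are hypothetical):

```
! Map IGMPv2 (*,G) reports to PIM-SSM (S,G) joins
ip pim ssm default
ip igmp ssm-map enable
no ip igmp ssm-map query dns
ip igmp ssm-map static EMBMS-GROUPS 10.255.0.1
!
ip access-list standard EMBMS-GROUPS
 permit 232.1.1.0 0.0.0.255
```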
The service edge node acts as a leaf node for the mLDP-Global domain. It will dynamically map the PIM requests
from the CSG into mLDP in-band signaling in order to eliminate the need for PIM within the Aggregation and
Core network domains.
The MTG node uses PIM-SSM for the connection to the MBMS-GW and acts as a root node for the mLDP-
Global domain. The MTG node dynamically maps the mLDP in-band signaling into PIM-SSM requests to the
MBMS-GW.
The typical deployment within the Cisco FMC architecture is to use the microwave gear to provide wireless
links between MPLS-enabled ANs, such as CSGs. The interconnection between the CSG and the microwave
equipment is a Gigabit Ethernet connection. As most microwave equipment used in this context supports sub-Gigabit
transmission rates, typically 400 Mbps under normal conditions, certain accommodations are made.
Namely, H-QoS policies are implemented in the egress direction on either side of the microwave link, providing
the ability to limit the flow of traffic to the bandwidth supported across the link, while providing PHB enforcement
for EF and AF classes of traffic. Also, IGP metrics can be adjusted to account for the microwave links in a hybrid
fiber-microwave deployment, allowing the IGP to properly understand the weights between true gigabit links, and
gigabit ports connected to sub-gigabit microwave links.
Regardless of the ACM status of the microwave link, the Gigabit Ethernet connection to the MPLS-enabled ANs
is constant, so the nodes are unaware of any changes to the bandwidth on the microwave link. To ensure that
optimal routing and traffic transport is maintained through the access network, a mechanism is needed to notify
the MPLS ANs of any ACM events on the microwave links. Cisco and microwave vendors (NSN and SIAE) have
implemented a vendor-specific message (VSM) in Y.1731 to allow for the microwave equipment to notify Cisco
routers of ACM events, and the bandwidth available with the current modulation on the microwave link.
[Figure: Microwave fading between aggregation nodes. A Y.1731 VSM signals the current microwave link speed to the adjacent routers.]
The Cisco FMC system has implemented three actions to be taken on the MPLS ANs, which can be enacted
depending upon the bandwidth available on the microwave link:
• Adjustment of the H-QoS policy to match the current bandwidth on the microwave link.
• Adjustment of the IGP metric on the microwave link, triggering an IGP recalculation.
• Removal of link from the IGP.
If the bandwidth available is less than the total bandwidth required by the total of EF+AF classes, then the
operator can choose to have AF class traffic experience loss in addition to BE traffic, or to have the link removed
from service.
Link Removal
At a certain threshold of degradation, determined by the operator, which will impact all service classes across
the microwave link, the MPLS AN will remove the microwave link from the IGP. This instigates the resiliency
mechanisms in the access network to bypass the degraded link, resulting in minimal traffic loss.
The link is not brought administratively down so that the microwave equipment can signal to the AN once the
microwave link is restored.
Service OAM is a service-oriented mechanism that operates and manages the end-to-end services carried
across the network. It is provisioned only at the touch points associated with the end-to-end service, and is
primarily used for monitoring the health and performance of the service. Service OAM ensures services are up
and functional, and that the SLA is being met. When services are affected due to network events, it provides the
mechanisms to detect, verify, and isolate the network faults. The following protocols are the building blocks of
Service OAM:
• ATM Service OAM:
◦◦ F4/F5 VC/VP ATM OAM
• Ethernet Service OAM and PM:
◦◦ 802.1ag Connectivity Fault Management (CFM)
◦◦ MEF Ethernet Local Management Interface (E-LMI)
◦◦ ITU-T Y.1731: OAM/PM for Ethernet-based networks
◦◦ Cisco IP SLA PM based on CFM
Transport OAM is a network-oriented mechanism that operates and manages the network infrastructure. It is
ubiquitous in the network elements that make up the network infrastructure, and it is primarily used for monitoring
health and performance of the underlying transport mechanism on which the services are carried. The primary
purpose of Transport OAM is to keep track of the state of the transport entities (MPLS LSP, Ethernet VLAN, etc.).
It monitors the transport entities to ensure that they are up and functional and performing as expected, and
provides the mechanisms to detect, verify, and isolate the faults during negative network events. The following
protocols are the building blocks of Transport OAM:
• Ethernet Transport OAM and PM:
◦◦ IEEE 802.3ah: Ethernet Link OAM
◦◦ 802.1ag CFM
◦◦ International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Y.1731:
OAM/PM for Ethernet-based networks
◦◦ Cisco IP SLA PM based on CFM
• IP/MPLS Transport OAM and PM:
◦◦ BFD single and multi-hop failure detection
◦◦ IP and MPLS LSP ping and traceroute
◦◦ Cisco IP SLA PM
◦◦ Future releases of the FMC architecture will support G-ACh-based OAM and PM for MPLS LSPs
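Operationally, the IP/MPLS Transport OAM building blocks above map onto familiar CLI checks; a hedged sketch (target prefix, IGP instance, interface, and timers are placeholders):

```
! On-demand LSP verification
ping mpls ipv4 10.0.0.1/32 repeat 5
traceroute mpls ipv4 10.0.0.1/32
!
! BFD single-hop failure detection on an IGP link (IOS XR)
router isis CORE
 interface TenGigE0/0/0/0
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
```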
[Figure: Mobile service OAM. IP SLA PM probes run between the CSG and MTG VRFs for 3G UMTS and LTE IP transport (Service OAM), over the end-to-end Unified MPLS LSP (Transport OAM, with inter-domain LSP OAM in a future release), spanning Node B, CSG, MTG, and RNC/BSC/SAE GW.]
[Figure: Ethernet service OAM. IP SLA and Y.1731 PM probes run between CSG/FAN/PAN-SE/AGN-SE nodes, with up MEPs at Ethernet CFM level 6 and E-LMI toward the unmanaged CPEs (Service OAM over the Ethernet CFM Transport OAM).]
[Figure: MPLS business service OAM. IP SLA PM provides end-to-end Service OAM, with link OAM on the CPE access links and MPLS LSP OAM between the CSG/FAN/PAN and PAN-SE/AGN-SE nodes (Transport OAM).]
Autonomic Networking
Autonomic Networking makes devices more intelligent and simplifies network operations for the service
provider's operational staff by automating device initialization, provisioning, and day-2 operations.
Reader Tip
Autonomic networking functionality is currently available for Early Field Trial (EFT) and Proof
of Concept (POC) validation on the ASR 901 platform with Cisco IOS release 15.3(3). This
functionality allows service providers to start experimenting with aspects of Autonomic
Networking. Production Autonomic Networking support will be available on the ASR 901
platform with the next IOS release. Support for other Cisco platforms as well as enhanced
AN functionality will be available in a future release of the Cisco FMC system.
For more information about Autonomic Networking support with the ASR 901, see the
following website:
http://www.cisco.com/en/US/docs/wireless/asr_901/Release/Notes/
asr901_rn_15_3_3_S.html#wp30866
An IETF draft framework describing the concepts covered by Autonomic Networking is available at the following
link: http://tools.ietf.org/html/draft-behringer-autonomic-network-framework-00. The following diagram provides
an illustration of the high-level architecture of the Autonomic Networking system.
Figure: High-level Autonomic Networking architecture. The Autonomic Process embedded in each Device OS interacts with its peer on the neighboring device over an autonomic interaction channel, alongside the traditional interactions (e.g., routing) between the devices.
Autonomic Networking is a software process integrated into Cisco IOS software that runs independently of
traditional networking processes such as IP and OSPF. Those traditional processes are typically unaware of
the AN process, which uses the normal interfaces that the traditional networking components expose. Just as
the traditional networking components of different devices interact with each other, so do the AN components.
The autonomic components of different devices cooperate securely in order to add intelligence to the devices,
so that devices in an AN can configure, manage, protect, and heal themselves with minimal operator
intervention. The AN components running across the devices also securely consolidate their operations in order
to present a simplified and abstracted view of the network to the operator.
The benefits of the Autonomic Networking infrastructure, as delivered from Cisco IOS release 15.3(3), are as
follows:
• Autonomic discovery of Layer 2 (L2) topology and connectivity by discovering how to reach autonomic
neighbors.
• Secure and zero touch identity bootstrap of new devices. In this process, each autonomic device
receives a domain-based certificate from the registrar, which is used to secure subsequent transactions
and to establish the autonomic control plane.
• An autonomic control plane is created that enables secure communications between autonomic nodes.
Autonomic behavior is enabled by default. You can disable the behavior by using the no autonomic command.
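For example, on a platform where AN is supported, disabling and re-enabling the autonomic process is a single global configuration command:

```
! Autonomic behavior is on by default; disable it globally...
Router(config)# no autonomic

! ...and re-enable it later
Router(config)# autonomic
```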
Registrar—An autonomic registrar is a domain-specific registration authority in a given enterprise that validates
new devices in the domain and makes policy decisions. The policy decisions include whether a new device can
join a given domain. The registrar also has a database of all devices that have joined a given domain, as well as
device details.
Channel Discovery—This applies to Layer 2 networks and is used to discover communication channels between
autonomic nodes, for example, VLANs. In some networks the autonomic nodes operating on Layer 3 may be
connected through a Layer 2 network, on which only certain VLANs are available. Channel Discovery finds those
VLANs.
Autonomic Control Plane—The autonomic control plane is established between autonomic neighbors, even when
they are separated by non-autonomic Layer 2 devices. All autonomic nodes communicate securely over this
autonomic control plane.
Step 1: Nodes exchange their identity using autonomic adjacency discovery packets. If a device is new, it
uses its Unique Device Identifier to identify itself; if a device is already enrolled in a domain, it uses its domain
certificate to identify itself. Nodes must be directly connected on Layer 3 (non-autonomic Layer 2 devices in
between are transparent to this discovery).
Step 2: The domain device acts as a proxy and allows the new device to join its AN domain. It forwards the
information about the new device to the registrar.
Step 3: The registrar validates whether the new device is allowed to join the domain. If so, the new device
receives a domain certificate from the registrar.
Step 4: The new device now advertises its domain certificate in its hello messages to all neighbors. Neighbor
information is exchanged every 30 seconds, and the neighbor table is refreshed with the time stamp of
the last update.
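Once enrollment completes, the operator can verify domain membership and the adjacency table with show commands such as the following (command names as documented for the ASR 901 EFT release; exact syntax and output fields may vary by release):

```
! Verify the device's autonomic identity and domain certificate status
Router# show autonomic device

! List discovered autonomic neighbors, including the time stamp
! of the last hello received from each
Router# show autonomic neighbors
```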
The autonomic control plane provides a virtual out-of-band (OOB) management channel to allow reachability
from the network operations center to the new device for initial configuration and provisioning. This eliminates
the need for field technicians to have any knowledge of device configuration when bringing up new nodes in the
Cisco FMC network.
The Unified MPLS concept at the heart of the Cisco FMC system resolves legacy challenges such as scaling
MPLS to support tens of thousands of end nodes, and provides the required MPLS functionality on cost-effective
platforms without the complexity of technologies like Traffic Engineering FRR (TE-FRR) to meet transport SLAs.
By addressing the scale, operational simplification, and cost of the MPLS platform, the FMC system provides a
comprehensive solution to the converged operator seeking an immediately deployable architecture suitable for
deployment of residential, business, and mobile services on a converged platform.
Reader Tip
All of the documents listed above, with the exception of this Design Guide, are
considered Cisco Confidential documents. Copies of these documents may be
obtained under a current Non-Disclosure Agreement with Cisco. Please contact a
Cisco Sales account team representative for more information about acquiring copies
of these documents.
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, “DESIGNS”) IN THIS MANUAL ARE PRESENTED “AS IS,”
WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR
A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS
SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR
DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES. THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS
DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL
ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
Any Internet Protocol (IP) addresses used in this document are not intended to be actual addresses. Any examples, command display output, and figures included in the
document are shown for illustrative purposes only. Any use of actual IP addresses in illustrative content is unintentional and coincidental.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this
URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership
relationship between Cisco and any other company. (1110R)
B-0000140F-1 09/13