
Fixed Mobile Convergence 2.0
Design Guide
September 2013
Table of Contents
Introduction
    Executive Summary
    Release Notes

Requirements
    Service Provider Architectures
    Fixed and Mobile Converged Transport Characteristics

System Overview
    System Concept
    Transport Models
        Flat LDP Core and Aggregation
        Hierarchical-Labeled BGP LSP Core-Aggregation and Access
        Labeled BGP Redistribution into Access IGP
        Hierarchical-Labeled BGP LSP Core and Aggregation
        Hierarchical-Labeled BGP LSP Core, Aggregation, and Access
        Hierarchical-Labeled BGP Redistribution into Access IGP
    Residential Wireline Service Models
    Community Wi-Fi Service Models
    Business Service Models
    Mobile Service Models

System Architecture
    Transport Architecture
        Large Network, Multi-Area IGP Design with IP/MPLS Access
        Large Network, Inter-AS Design with IP/MPLS Access
        Large Network, Multi-Area IGP Design with non-IP/MPLS Access
        Large Network, Inter-AS Design with non-IP/MPLS Access
        Small Network, Integrated Core and Aggregation with IP/MPLS Access
        Small Network, Integrated Core and Aggregation with non-IP/MPLS Access
    Residential Service Architecture
        Residential Wireline Service Architecture
        Community Wi-Fi Service Architecture
        Subscriber Experience Convergence
    Business Service Architecture
        MPLS VRF Service Model for L3VPN
        H-VPLS Service Model for L2VPN
        PBB-EVPN Service Model for L2VPN
        PW Transport for X-Line Services
    Mobile Service Architecture
        L3 MPLS VPN Service Model for LTE
        Multicast Service Model for LTE eMBMS
        L2 MPLS VPN Service Model for 2G and 3G
    Inter-Domain Hierarchical LSPs
        Inter-Domain LSPs for Multi-Area IGP Design
        Inter-Domain LSPs for Inter-AS Design
        Inter-Domain LSPs for Integrated Core and Aggregation Design
    Transport and Service Control Plane
        BGP Control Plane for Multi-Area IGP Design
        BGP Control Plane for Inter-AS Design
        BGP Control Plane for Integrated Core and Aggregation Design
    Scale Considerations

Functional Components
    Quality of Service
    Synchronization Distribution
    Redundancy and High Availability
    Subscriber and Service Control and Support
    Multicast
    Transport Integration with Microwave ACM
    OAM and Performance Monitoring
    Autonomic Networking

Conclusion

Related Documents

Glossary

Introduction
Executive Summary
Infused with intelligence and select solutions for scalability, agile transport, security, and more, the Cisco® Fixed
Mobile Convergence (FMC) system gives operators a proven architecture, platforms, and solutions to address
the dramatic changes in subscriber behavior and consumption of communications services over both fixed and
mobile access, and to provide operational simplification, all at optimized cost points.

The Cisco FMC system defines a multi-year ongoing development program by Cisco’s Systems Development
Unit (SDU) that builds towards a flexible, programmable, and cost-optimized network infrastructure, all targeted
to deliver in-demand fixed wireline and mobile network services. As the market leader in providing network
equipment in both fixed and mobile networks, Cisco is uniquely positioned to help providers transition network
operations, technologies, and services to meet these new demands. Cisco is delivering proven architectures
with detailed design and implementation guides as proof points of our strategy to service fixed and mobile
subscribers.

Through a sequence of graceful transitions, Cisco enables transition from legacy circuit-oriented architectures
towards powerful, efficient, flexible, and intelligent packet-based transport with the following proof points:
• 2012: Unified MPLS for Mobile Transport (UMMT) defines a Unified Multiprotocol Label Switching (MPLS)
Transport solution for any mobile backhaul service at any scale.
• 2013: Cisco FMC builds the network and service infrastructure convergence.
• 2014: Cisco FMC enables the unified and seamless fixed and mobile subscriber experience and its
extension to Bring Your Own Device (BYOD) access.

The key program benefits of the Cisco FMC system include:


• Lowering the cost of operations compared to competing architectures by leveraging converged and
integrated transport.
• Leveraging common service nodes (e.g., Carrier Grade NAT (CGNAT), Deep Packet Inspection (DPI),
etc.) across all classes of subscribers, regardless of access type.
• Creating a unique user identity within the network that enables personalized and customized application
of policies to enabled services, including personalized access controls and personalized firewalls.
• Delivering a clear system progression toward enabling per-subscriber cloud-based services accessed
from any device.
• Creating more flexible business models by enabling capabilities like optimized business services over
long-term evolution (LTE) access.
• Opening the network to programmable control via robust APIs.

The Cisco FMC system defines MPLS-based transport services and couples that transport closely to the
service delivery architecture. The MPLS transport aspects of the system validation are also directly applicable to
providers offering Layer 2 (L2) and Layer 3 (L3) transport as a service. To expand the transport protocol offerings
beyond MPLS, a separate carrier Ethernet transport system is being planned that will provide validated options
for native Ethernet (G.8032 control plane), network virtualization with satellites, and MPLS-TP.

Challenge
The pace of change faced by operators in highly competitive markets continues to accelerate and using old
models of service specific networks or adding patches to networks based on legacy technologies no longer
makes economic sense. Whereas previously the most pressing questions for operators centered around meeting
point-to-point bandwidth demands, more complex questions now dominate discussions, such as the following:
• How do I innovate services delivered by my network?
• How do I lower cost in the face of exponential traffic growth?
• How can I simplify operations while adding new services when my network has grown over a long period
of time to utilize multiple technologies and standards?
• How can I personalize services in an automated fashion without arduous operational procedures?
• How can I achieve the “any service on any device at any location in a secure manner with consistent
quality of experience” that my customers expect?
• How can I monetize my network assets while enabling subscribers to use any device they want to
access services?

The context for these questions is one of dramatic growth and change. Whereas a fixed line operator traditionally
did not need to care about mobility, developments such as Wi-Fi, hotspots, and stadium technology are
broadening the definition of mobile solutions beyond traditional mobile handset voice and data. Likewise in the
enterprise space, mobility of devices is a baseline requirement, with more and more users requiring secure
access to corporate data on their own tablet or other mobile device. This pervasive mobility across all services,
access types, and end user devices poses challenges like the following:
• How to apply appropriate access policies
• How to keep data secure
• How to build a comprehensive network access strategy
• How to extend the right user experience to all these situations

Many of these challenges are now grouped under the BYOD label. Initially, BYOD conversations in an
enterprise related to how the IT organization enabled an employee to use their own iPad at work. This created
challenges such as how to connect the device to the network, secure company data and applications, and deal
with lost or stolen devices. This initial conversation has since expanded: BYOD is now about enabling:


• Any person
• Any device
• Any ownership
• Any access

This creates the following challenges for IT:


• Providing pervasive, reliable mobility
• Supporting many devices and operating systems
• Accommodating many user types on the network
• Making on-boarding easy
• Making applications portable
• Extending the right experience to every situation

Beyond these foundational BYOD challenges, businesses like retailers and hotels are recognizing the huge
opportunity that customized services, pushed to the mobile devices their customers carry, represent for
influencing and improving customer experience and thereby increasing revenue.

BYOD will transform how every business provides IT services to its employees and interacts with its customers.
Challenges of this scale also represent opportunities for SPs to expand their list of offerings
and deliver new, innovative, and in-demand services to enhance revenue streams. The Cisco FMC system
addresses these challenges and positions the network as a platform to meet service and transport growth with
accompanying higher returns and operator profitability. The more of the functions supporting this emerging
BYOD movement that can be incorporated into SP offerings, the more quickly businesses can adopt them and
the more quickly SPs can grow their revenue.

Solution
The Cisco FMC system provides reliable, scalable, and high-density packet processing that addresses mass
market adoption of a wide variety of fixed and mobile legacy services, while reducing the operator's total cost
of operations (TCO) and delivering the capability to offer new, innovative, and in-demand services. It also handles
the complexities of multiple access technologies, including seamless handover and mobility between access
networks (2G, 3G, 4G LTE, and Wi-Fi), to meet demands for convergence, product consolidation, and a common
end-user service experience.

Figure 1 - Cisco Fixed Mobile Convergence System

[Figure: three convergence layers. Subscriber Service Convergence spans enterprise fixed, residential fixed, Wi-Fi, and mobile devices, pairing business convergence (MPLS VPN services over fixed and mobile (LTE) access) with residential convergence (common service experience and community Wi-Fi service). Service Infrastructure Convergence shows the converged fixed and Wi-Fi edge, FMC PCRF, DPI, CGN, EPC, and the fixed and mobile edges. Transport Infrastructure Convergence rests on the Unified MPLS transport.]

Cisco FMC introduces key technologies from Cisco's Unified MPLS suite to deliver highly scalable and
simple-to-operate MPLS-based networks for the delivery of fixed wireline and mobile backhaul services.

For RAN backhaul of LTE services, operators are adopting MPLS over pure IP for two main reasons:
• Investment in packet-based networks delivers an economic solution to the exponential growth in packet
traffic that needs transport. While the future lies with LTE, the present only offers 2G and 3G cell site
connectivity. Support for ATM and TDM traffic inherent in legacy networks must exist in order to move
traffic to the new higher-capacity LTE networks. The Multiprotocol Label Switching (MPLS) pseudowire is
the industry choice for achieving this over a packet infrastructure.
• L3 MPLS VPNs in the RAN backhaul, which facilitate virtualization of the transport infrastructure, are
becoming common in LTE designs. This is useful when offering wholesale transport, and it also allows the
RAN backhaul network to be leveraged for transport of other services for business and residential consumers.

Unified MPLS resolves legacy challenges such as scaling MPLS to support tens of thousands of end nodes,
providing the required MPLS functionality on cost-effective platforms, and avoiding the complexity of
technologies like Traffic Engineering Fast Reroute (TE-FRR) while still meeting transport SLAs.

By addressing the scale, operational simplification, and cost of the MPLS platform, Cisco FMC resolves the
immediate need to deploy an architecture that is suitable for a converged deployment and supports fixed
residential and business wireline services as well as legacy and future mobile service backhaul.

Figure 2 - Cisco FMC System Components

[Figure: the FMC system components mapped onto the same three layers as Figure 1. Subscriber and service layer components include Cisco PNR for DHCP and BroadHop QPS for AAA and PCRF. Service infrastructure components include a virtualized route reflector alongside the converged fixed and Wi-Fi edge, DPI, CGN, EPC, and the fixed and mobile edges. Transport infrastructure components include the CSG (ASR 901), pre-aggregation and aggregation service edge nodes (PAN-SE and AGN-SE on the ASR 903, ASR 9001, and ASR 9000 family), the core (CRS-3), fixed access nodes for PON, DSL, and Ethernet (ME 4600, ME 3600X, ME 2600X) serving FTTB/FTTH, and open RGs and IOS CPEs, all over the Unified MPLS transport.]

FMC Highlights
• Decoupling of transport and service layers. Enables end-to-end MPLS transport for any service, at
any scale. Optimal service delivery to any location in the network is unrestricted by physical topological
boundaries.
• Scaling of the MPLS infrastructure using RFC 3107 hierarchical LSPs. RFC 3107 procedures define the
use of Border Gateway Protocol (BGP) to distribute labels, so that BGP can split large routing domains
into manageable sizes yet still retain end-to-end connectivity (a configuration sketch follows this list).
• Optimal integration of wireline SE aspects in transport network. Residential Broadband Network
Gateway (BNG) and business Multiservice Edge (MSE) functions are integrated into the nodes
comprising the transport network, allowing for optimal distribution of the service edge and subscriber
SLA and policy enforcement in the network.
• Common service experience. Enabled by a common Policy and Charging Rules Function (PCRF)
for both fixed and mobile networks, the service experience is consistent for consumer and business
subscribers over fixed and mobile access, with mediated subscriber identities and common services
transport and policies.
• MPLS VPN over fixed and mobile access. Provides expanded addressable market to the service
provider for business service delivery to locations without fixed wireline access via 3G- or LTE-attached
services.
• Service provider-managed public Wi-Fi services in the residential home. Provides the service
provider with expanded Wi-Fi service coverage through deployment via residential service connections.
• Improved high availability. Multi-Router Automatic Protection Switching (MR-APS), pseudowire
redundancy, remote Loop-Free Alternate (LFA) supporting arbitrary topologies in access and aggregation
and delivering zero-configuration 50 msec convergence, and labeled BGP Prefix-Independent Convergence
(PIC) for edge and core.
• Simplified provisioning of mobile and wireline services. New service activation requires only endpoint
configuration.
• Virtualization of network elements. Implementation of virtualized route reflector functionality on a Cisco
Unified Computing System (UCS) platform provides scalable control plane functionality without requiring
a dedicated router platform.
• Highly-scaled MPLS VPNs support transport virtualization. A single fiber infrastructure can be
re-utilized to deliver transport to multiple entities, including mobile for retail and wholesale
applications, residential, and business services, with the one physical infrastructure supporting
multiple VPNs for LTE and wireline services.
• Comprehensive Multicast support. Efficient and highly-scalable multicast support for residential,
business, and mobile services.
• TDM circuit support. Addition of time-division multiplexing (TDM) circuit transport over packet for legacy
business TDM services and the Global System for Mobile Communications (GSM) Abis interface.
• ATM circuit support. Addition of ATM transport over packet for legacy business ATM services and 3G
Iub support.
• Microwave support. Full validation and deployment recommendations for Cisco’s microwave partners:
NEC (with their iPASOLINK product), SIAE (with their ALCPlus2e and ALFOplus products), and Nokia
Siemens Networks (NSN) (with their FlexiPacket offering).

• Synchronization distribution. A comprehensive synchronization scheme is supported for both frequency
and phase synchronization. Synchronous Ethernet (SyncE) is used in the core, aggregation, and access domains
where possible. Where SyncE may not be possible, based on the transmission medium, a hybrid
mechanism is deployed that converts SyncE to IEEE 1588v2 timing, with an IEEE 1588v2 boundary clock
function in the aggregation layer to provide greater scalability. Cisco FMC now supports hybrid SyncE and
Precision Time Protocol (PTP) with 1588 BC across all network layers.
• QoS. Cisco FMC leverages Differentiated Services (DiffServ) quality of service (QoS) for core and
aggregation, H-QoS for microwave access and customer-facing service-level agreements (SLAs),
and support for LTE QoS class identifier (QCIs) and wireline services, to deliver a comprehensive QoS
design.
• OAM and Performance Monitoring. Operations, administration, and maintenance (OAM) and
performance management (PM) for Label-Switched Path (LSP) Transport, MPLS VPN, and Virtual Private
Wire Service (VPWS) services are based on IP SLA, pseudowire (PW) OAM, MPLS and MPLS OAM, and
future IETF MPLS PM enhancements.
• LFA for Fast Reroute (FRR) capabilities. The required 50ms convergence time inherent in Synchronous
Optical Networking/Synchronous Digital Hierarchy (SONET/SDH) operations used to be achieved in
packet networks with MPLS TE-FRR. This has been successfully deployed in core networks, but not in
access networks due to the complexity of additional required protocols and overall design. LFA delivers
the same fast convergence for link or node failures without any new protocols or explicit configuration on
a network device. Hub-and-spoke topologies are currently supported, with a later release extending LFA
coverage to arbitrary topologies.
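
To make the RFC 3107 building block referenced above concrete, the following fragment is a minimal sketch of how an IOS XR node might advertise its loopback as a labeled BGP prefix toward a route reflector. The AS number, addresses, and neighbor role are illustrative assumptions rather than configuration from this validated design, and exact commands vary by platform and release.

router bgp 64512
 address-family ipv4 unicast
  ! Advertise the local loopback into BGP (illustrative prefix)
  network 10.255.0.1/32
  ! Bind MPLS labels to advertised prefixes (RFC 3107 behavior)
  allocate-label all
 !
 neighbor 10.255.0.100
  remote-as 64512
  update-source Loopback0
  ! Exchange labeled IPv4 unicast routes with the route reflector
  address-family ipv4 labeled-unicast

On IOS XE platforms, the equivalent behavior is typically enabled with the send-label option on the neighbor under the IPv4 address family.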

Until now, fixed network infrastructures have been limited to wireline service delivery and mobile network
infrastructures have been composed of a mixture of many legacy technologies that have reached the end of
their useful life. The Cisco FMC system architecture provides the first integrated, tested, and validated converged
network architecture, meeting all the demands of wireline service delivery and mobile service backhaul.

Cisco FMC Benefits


• Flexible deployment options for multiple platforms to optimally meet size and throughput requirements
of differing networks.
• High-performance solution, utilizing the highest capacity Ethernet aggregation routers in the industry.
The components of this system can be in service for decades to come.
• Tested and validated reference architecture that allows operators to leverage a pre-packaged
framework for different traffic profiles and subscriber services.
• Promotes significant capital savings from various unique features such as pre-tested solutions,
benchmarked performance levels, and robust interoperability, all of which are validated and pre-
packaged for immediate deployment.
• Enables accelerated time-to-market based on a pre-validated, turnkey system for wireline service
delivery and mobile service backhaul.
• Complementary system support, with mobile video transport optimization integration; I-WLAN
untrusted offload support on the same architecture; Mobile Packet Core (MPC); and cost-optimized
performance for Voice over LTE (VoLTE), plus additional services such as Rich Communication Suite
(RCS).
• Cisco’s IP expertise is available to operators deploying Cisco FMC through Cisco Services. These
solutions include physical tools, applications, and resources plus training and annual assessments
designed to suggest improvements to the operator’s network.

Release Notes
These release notes outline the hardware and software versions validated as part of the Cisco FMC system
effort and the preceding UMMT system efforts, along with the key advancements of each system release.

Cisco FMC 2.0


Release 2.0 of the Cisco FMC system architecture further builds upon the architecture defined in the first release,
with the addition of the following improvements:
• Transport and Fixed Service Edge (FSE) convergence:
◦ Remote LFA FRR and BGP PIC enhancements
◦ Virtualized Route Reflector on Cisco Unified Computing System (UCS) platform
◦ Unified MPLS transport for legacy access nodes
• Residential Services:
◦ Single stack IPv6 PPPoE and IPoE, N:1 and 1:1 residential access
◦ MPLS transport for DSL access
◦ Dual stack triple play services (IPv4 and IPv6 coexistence within the household)
◦ Carrier Grade NAT MAP-T functions for IPv4 services, co-located with residential BNG SE
◦ Overlay of community Wi-Fi access over traditional wireline access transport
◦ Unified Subscriber Experience use cases for residential wireline, community Wi-Fi, and mobile access
◦ PCRF provided by Cisco Quantum Policy Suite
• Business Services:
◦ L2VPN services via Ethernet VPN (EVPN) with Provider Backbone Bridge (PBB)
◦ MPLS L3VPN Business Services over Fixed and Mobile (3G and LTE) Access
• Mobile Services:
◦ Enhanced Multimedia Broadcast Multicast Service (eMBMS) support
◦ Enhancements to Hybrid Synchronization distribution model

Table 1 - Cisco FMC 2.0 Platforms and Software Versions

Architectural Role Hardware Software Revision


Core node ASR 9000 XR 4.3.1
CRS-3 XR 4.3.1
Aggregation node + service edge ASR 9006 XR 4.3.2
Pre-aggregation node + service edge ASR 9001 XR 4.3.2
Pre-aggregation node ASR 903 XE 3.10
ME3600X-24CX XE 3.10
Fixed access node ME3600X-24CX XE 3.10
Gigabit passive optical network (GPON) optical line terminal (OLT) ME4600 3.1.0
Cell Site Gateway (CSG) ASR 901 XE 3.11
Mobile Transport Gateway (MTG) ASR 9000 XR 4.3.1
Virtualized route reflector IOS-XR Virtual Router XR 4.3.2
DHCP Prime Network Registrar 8.1
PCRF QPS PCRF 5.3.5
Service management QPS Portal/PCRF/SPR 5.3.5
Subscriber management QPS SPR/AAA 5.3.5

Cisco FMC 1.0


Release 1.0 of the Cisco FMC system architecture expands upon the Unified MPLS models first developed in the
UMMT System to include residential and business wireline service delivery alongside mobile service backhaul.
Some of the key areas of coverage are listed here:
• Transport and fixed service edge convergence:
◦ Unified MPLS Transport with MPLS access for mobile and business, and Ethernet access for residential
◦ Fixed edge convergence and optimal placement: dual stack fixed edge and policy integration for residential services
• Residential Services:
◦ Dual stack PPPoE and IPoE, N:1 and 1:1 residential triple play services
◦ EVC edge for access and service edge
◦ Carrier Grade NAT co-location with residential BNG SE
◦ PCRF provided by Bridgewater Services
• Business Services:
◦ Dual stack MPLS VPN, VPWS, and VPLS services
◦ Converged fixed edge with optimal placement and integration of the business edge
◦ Pseudowire headend (PWHE) provides direct MPLS-based transport of services to the business service edge node
• Mobile Services:
◦ Maintain mobile backhaul established in UMMT
• Microwave ACM integration with IP/MPLS transport.
• Microwave partnerships with SIAE, NEC, and NSN.
• Comprehensive NMS with Cisco Prime.

Table 2 - Cisco FMC 1.0 Platforms and Software Versions

Architectural Role Hardware Software Revision


Core node ASR 9000 XR 4.3.1
CRS-3 XR 4.3.1
Aggregation node + service edge ASR 9006 XR 4.3.1
Pre-aggregation node + service edge ASR 9001 XR 4.3.1
Pre-aggregation node ASR 903 XE 3.9
ME3600X-24CX XE 3.9
Fixed access node ME3600X-24CX XE 3.9
Cell Site Gateway (CSG) ASR 901 Release 2.2
Mobile Transport Gateway (MTG) ASR 9000 XR 4.3.1
Network management Prime Management Suite 1.1

UMMT 3.0
Release 3.0 of the Cisco UMMT system architecture further builds upon the architecture defined in the first two
releases with the addition of the following improvements:
• New Unified MPLS models:
◦ Labeled BGP access, which provides the highest scalability plus wireline coexistence
◦ v6VPN for LTE transport
• IEEE 1588v2 Boundary Clock (BC) and SyncE/1588v2 hybrid models:
◦ Greater scalability and resiliency for packet-based timing in access and aggregation
• ATM/TDM transport end-to-end:
◦ ATM provides transport for legacy 3G services
◦ PW redundancy with Multirouter Automatic Protection Switching (MR-APS)
• New network availability models:
◦ Remote LFA FRR
◦ Labeled BGP PIC core and edge
◦ BGP PIC edge for MPLS VPN
◦ Most comprehensive resiliency functionality
• ME3600X-24CX platform:
◦ 2RU fixed-configuration 40 Gb/s platform
◦ Supports Ethernet and TDM interfaces
• Network management, service management, and assurance with Prime

Table 3 - Cisco UMMT 3.0 Platforms and Software Versions

Architectural Role Hardware Software Revision


Core node ASR 9000 XR 4.2.1 / 4.3.0
CRS-3 XR 4.2.1
Aggregation node ASR 9000 XR 4.2.1 / 4.3.0
Pre-aggregation node ASR 903 XE 3.7 / 3.8
ME 3600X-24CX 15.2(2)S1
Cell Site Gateway (CSG) ASR 901 15.2(2)SNG
Mobile Transport Gateway (MTG) ASR 9000 XR 4.2.1 / 4.3.0
Network management Prime Management Suite 1.1

Cisco UMMT 2.0


Release 2.0 of the Cisco UMMT system architecture continues to build upon the baseline established by release
1.0 by implementing the following improvements:
• Introduction of ASR 903 modular platform as a pre-aggregation node (PAN).
• Any Transport over MPLS (AToM): Complete TDM transport capabilities in access and aggregation
domains with Circuit Emulation over Packet Switching Network (CESoPSN) and Structure Agnostic
Transport over Packet (SAToP) TDM Circuit Emulation over Packet (CEoP) services on the ASR 903 and
ASR 9000 platforms.
• 200Gbps/Slot line cards and new supervisor cards for the ASR 9000, which bring increased scalability,
100G Ethernet support, and synchronization enhancements with 1588 BC support.
• BGP PIC edge and core on ASR 9000 for labeled Unicast.
• Microwave partnerships with NEC and NSN.

Table 4 - Cisco UMMT 2.0 Platforms and Software Versions

Architectural Role Hardware Software Revision


Core node ASR 9000 XR 4.2
CRS-3 XR 4.2
Aggregation node ASR 9000 XR 4.2
Pre-aggregation node ASR 903 XE 3.5.1.S
ME3800X 15.1(2.0.47)EY0
Cell Site Gateway (CSG) ASR 901 15.1(2)SNH
Mobile Transport Gateway (MTG) ASR 9000 XR 4.2
Packet microwave NSN FlexiPacket 2.4
Hybrid microwave NEC iPASOLINK 2.02.29

Cisco UMMT 1.0

Release 1.0 of the Cisco UMMT system architecture formed the baseline for a highly scalable and
operationally-simplified system architecture to deliver mobile backhaul services:
• Introduction of RFC3107-compliant-labeled BGP control plane in the access, aggregation, and core
network domains
• Introduction of hierarchical LSP functionality to provide abstraction between transport and service layers
• MPLS L3VPN-based backhaul of S1 and X2 interfaces for LTE deployment.
• Simplified provisioning of mobile backhaul services enabled by BGP-based control plane. Only endpoint
configuration needed for service enablement
• LFA functionality provides FRR capabilities in a greatly operationally-simplified manner
• End-to-end OAM and PM functionality for mobile backhaul services and transport layer

Table 5 - Cisco UMMT 1.0 Platforms and Software Versions

Architectural Role Hardware Software Revision


Core node ASR 9000 IOS-XR 4.1.1
CRS IOS-XR 4.1.1
Aggregation node ASR 9000 IOS-XR 4.1.1
Pre-aggregation node ME3800X IOS 15.1(2)EY1
Cell Site Gateway (CSG) MWR2941 IOS 15.1(1)MR
ASR 901 IOS 15.1(2)SNG
Mobile Transport Gateway (MTG) ASR 9000 XR 4.1.1

Requirements
Service Provider Architectures
Over the past two decades, with the pace of change constantly accelerating, the services offered by service
providers (SPs) have altered dramatically. Fifteen or more years ago, the SP was mainly concerned with offering
point-to-point transport. Now, dozens of VPN offerings to enterprises, along with rich residential offerings, create
hundreds of options for different services to be carried on SP networks. This explosion of service offerings has
typically not been matched by an equivalent restructuring of the SP network. The most common practice has
been to add stove-piped networks or new protocols that enable offering new services. While each individual
decision to patch the existing infrastructure has made sense, in many situations the collection of decisions has
created a complex, unwieldy, and difficult-to-manage network in which provisioning new services is time consuming.
It is apparent that the network environment of the last decade doesn't reflect the conditions SPs face today.

Older networks were created based on the following environmental conditions:


• Initially sparse customer take rates for broadband
• Business services dominating the bandwidth
• Relatively low data bandwidth usage per user
• Reuse of Synchronous Digital Hierarchy (SDH) and Synchronous Optical Networking (SONET) TDM
transport infrastructure
• Aggregation traffic dominated by self-tuning Internet access
• Conservative growth assumptions
• Low availability of IP operations expertise

Looking at today’s environment, none of those conditions apply. In addition to environmental factors, there are
more demands placed on SP networks, and these networks are in the midst of dramatic change. Consider the
following figure.

Figure 3 - Revenue Split and Traffic Predictions

[Figure: stacked bars of percent total SP revenue for 2011, 2013, and 2016, with private line TDM/OTN traffic shrinking and private/public IP traffic growing toward 90+% of the total; alongside, the traffic mix shifts from roughly 50-70% circuit and 30-50% packet around 2008 to an estimated 0-10% legacy TDM versus 90+% IP traffic by around 2015, as legacy Layer 1/Layer 2 transport and fixed and mobile voice give way to Layer 3 transport and services.]

• SP revenue is shifting from circuits to packet services (Cisco Research 2010), with approximately 80% of
revenue to be derived from packet services in five years
• Packet traffic is increasing at a 23% compound annual growth rate (CAGR) (Cisco VNI 2013)
• SP traffic make-up is expected to change massively over the next five years (ACG Research 2011)

The economic realities depicted in Figure 4 show how this shift towards packet-based services and traffic drives
a preference for packet-based transport. Essentially, on economic grounds, the statistical multiplexing benefits
of carrying packet traffic over packet transport outweigh any remaining considerations in favor of legacy TDM
transport.

This point is illustrated in Figure 4. The figure takes an example of how to provision bandwidth for ten 1-Gigabit
per second flows. If bandwidth is provisioned for each flow by using TDM technology, a gigabit of bandwidth is
permanently allocated for each flow because there is no way to share unused bandwidth between containers
in a TDM hierarchy. Contrast that to provisioning those flows on a transport that can share unused bandwidth
via statistical multiplexing, and it is possible to provision much less bandwidth on a core link. For networks that
transport primarily bursty data traffic, this is now the norm, rather than the exception.
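
A rough worked example makes the contrast explicit. Assume, purely for illustration, that each 1 Gb/s flow averages 20 percent utilization and that headroom is kept for two simultaneous peaks; neither figure comes from the chart itself:

    C_TDM    = sum of peak flows            = 10 x 1 Gb/s = 10 Gb/s
    C_packet ~ sum of average flows + a few peak flows
             ~ (10 x 0.2 Gb/s) + (2 x 1 Gb/s) = 4 Gb/s

Under these assumptions the statistically multiplexed link needs less than half the provisioned capacity of the TDM hierarchy, and the gap widens as traffic becomes burstier.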

Figure 4 - Economic Realities

[Figure: provisioning for circuit/OTN versus packet/router aggregation. Ten 1 GE flows of bursty actual traffic, when aggregated on a circuit network, require provisioning for the sum of peak flows; aggregated on a packet network, they require provisioning only for the sum of average flows plus a few peak flows. Chart from Infonetics, text from DT.]

This analysis indicates the following:


• TDM transport of packets lacks statistical multiplexing, which makes it very expensive and no longer
economically viable.
• Full transformation to Next Generation Networks (NGN) needs to occur from core to customer.
• Long term vision is critical because this will be the network for the next decade.
• Packet transport with MPLS to enable virtualization of the infrastructure and support for legacy protocols
via pseudowires is the most effective technology choice because it will:
◦◦ Minimize capital expenditure (CAPEX) and operating expenses (OPEX).
◦◦ Provide carrier class service delivery.
◦◦ Maximize service agility.

Beyond simple efficiencies of transport, greater intelligence within the network is needed in order to cope
efficiently with the avalanche of data traffic. AT&T, for example, calculates that caching at the edge of its network
can save 30% of core network traffic, which represents tens of millions of dollars of savings every year.
dynamic nature of traffic demands in today’s network, IP and packet transport is adept at adjusting very quickly
to new traffic flow demands via dynamic routing protocols. TDM and Layer 2 approaches, however, are slow to
adapt because paths must be manually reprovisioned in order to accommodate new demands.

Future Directions
Starting today, there is convergence of transport across all services, leading towards convergence of edge
functions and ultimately a seamless and unified user experience enabling any service on any screen in
any location. This will be accomplished over a network with standardized interfaces enabling fine-grained
programmatic control of per-user services. Cisco’s FMC program meets all of the demands and challenges
defined for cost-optimized packet transport, while offering sophisticated programmability and service
enablement.

Fixed and Mobile Converged Transport Characteristics
Networks are an essential part of business, education, government, and home communications. Many residential,
business, and mobile IP networking trends are being driven largely by a combination of video, social networking
and advanced collaboration applications, termed “visual networking.”

Annually, Cisco Systems publishes the Cisco Visual Networking Index (VNI), an ongoing initiative to track and
forecast the impact of visual networking applications. This section presents highlights from the 2012 to 2017
VNI and other sources to give context to trends in the SP space that are driving increases in network capacity
and consolidation of services in a unified architecture.

Executive Overview
• Annual global IP traffic will surpass the zettabyte threshold (1.4 zettabytes) by the end of 2017. In
2017, global IP traffic will reach 1.4 zettabytes per year or 120.6 exabytes per month.
• Global IP traffic has increased more than fourfold over the past 5 years, and will increase threefold
over the next 5 years. Overall, IP traffic will grow at a Compound Annual Growth Rate (CAGR) of 23
percent from 2012 to 2017 (a worked check of this rate follows this list).
• Metro traffic will surpass long-haul traffic in 2014, and will account for 58 percent of total IP traffic
by 2017. Metro traffic will grow nearly twice as fast as long-haul traffic from 2012 to 2017. The higher
growth in metro networks is due in part to the increasingly significant role of content delivery networks,
which bypass long-haul links and deliver traffic to metro and regional backbones.
• Content Delivery Networks (CDNs) will carry over half of Internet traffic in 2017. Globally, 51 percent of
all Internet traffic will cross content delivery networks in 2017, up from 34 percent in 2012.
• The number of devices connected to IP networks will be nearly three times as high as the global
population in 2017. There will be nearly three networked devices per capita in 2017, up from nearly two
networked devices per capita in 2012. Accelerated in part by the increase in devices and the capabilities
of those devices, IP traffic per capita will reach 16 gigabytes per capita in 2017, up from 6 gigabytes per
capita in 2012.
• Traffic from wireless and mobile devices will exceed traffic from wired devices by 2016. By 2017,
wired devices will account for 45 percent of IP traffic, while Wi-Fi and mobile devices will account for 55
percent of IP traffic. In 2012, wired devices accounted for the majority of IP traffic at 59 percent.
• Globally, consumer Internet video traffic will be 69 percent of all consumer Internet traffic in 2017,
up from 57 percent in 2012. Video exceeded half of global consumer Internet traffic by the end of 2011.
Note that this percentage does not include video exchanged through point-to-point (P2P) file sharing.
The sum of all forms of video (TV, video on demand (VoD), Internet, and P2P) will be in the range of 80
to 90 percent of global consumer traffic by 2017.
• Internet video to TV doubled in 2012. Internet video to TV will continue to grow at a rapid pace,
increasing fivefold by 2017. Internet video to TV traffic will be 14 percent of consumer Internet video
traffic in 2017, up from 9 percent in 2012.
• VoD traffic will nearly triple by 2017. The amount of VoD traffic in 2017 will be equivalent to 6 billion
DVDs per month.
• Business IP traffic will grow at a CAGR of 21 percent from 2012 to 2017. Increased adoption of
advanced video communications in the enterprise segment will cause business IP traffic to grow by a
factor of 3 between 2012 and 2017.
• Business Internet traffic will grow at a faster pace than IP WAN. IP WAN will grow at a CAGR of 13
percent, compared to a CAGR of 21 percent for fixed business Internet and 66 percent for mobile
business Internet.

• Business IP traffic will grow fastest in the Middle East and Africa. Business IP traffic in the Middle
East and Africa will grow at a CAGR of 29 percent, a faster pace than the global average of 21 percent.
In volume, Asia Pacific will have the largest amount of business IP traffic in 2017 at 8.3 exabytes per
month. North America will be the second at 5.4 exabytes per month.
• Globally, mobile data traffic will increase 13-fold between 2012 and 2017. Mobile data traffic will grow
at a CAGR of 66 percent between 2012 and 2017, reaching 11.2 exabytes per month by 2017.
• Global mobile data traffic will grow three times faster than fixed IP traffic from 2012 to 2017. Global
mobile data traffic was 2 percent of total IP traffic in 2012, and will be 9 percent of total IP traffic in 2017.
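
As a quick sanity check of the quoted 23 percent CAGR, apply the standard compound growth formula to the endpoints shown in Figure 5 (44 EB per month in 2012, 121 EB per month in 2017):

    CAGR = (121 / 44)^(1/5) - 1 ~ 0.224, i.e., approximately 23 percent

and 121 / 44 ~ 2.75, consistent with the roughly threefold growth over five years stated above.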

High Capacity Requirements from Edge to Core


The landscape is changing for consumer behavior in both wireline and mobile services. Increases in wireline
demands will come primarily from video applications. Powerful new mobile devices, increasing use of mobile
Internet access, and a growing range of data-hungry applications for music, video, gaming, and social networking
are driving huge increases in data traffic. As shown in the Cisco VNI projections, these exploding bandwidth
requirements are driving high capacity requirements from the edge to the core with typical rates of 100 Mbps
per eNodeB, 1Gbps access for mobile and wireline, 10-Gbps aggregation, and 100-Gbps core networks.

Figure 5 - Cisco VNI: Global IP Traffic, 2012 to 2017

[Figure: global IP traffic in exabytes (EB) per month, growing at a 23% CAGR from 44 EB in 2012 through 56, 69, 84, and 101 EB to 121 EB in 2017. Source: Cisco VNI, 2013.]

Support for Multiple and Mixed Topologies


Many options exist for physical topologies in the SP transport network, with hub-and-spoke and ring being
the most prevalent. Capacity requirements driven by subscriber density, CAPEX of deploying fiber in large
geographies, and physical link redundancy considerations could lead to a combination of fiber and microwave
rings in access, fiber rings, and hub-and-spoke in aggregation and core networks. The transport technology
that implements these networks must be independent of the physical topology, or combination thereof, used
in various layers of the network, and must cost-effectively scale to accommodate the explosive increase in
bandwidth requirements imposed by growth in mobile and wireline services.

Exponential Increase in Scale Driven by LTE Deployments
LTE will drive ubiquitous mobile broadband with its quantum leap in uplink and downlink transmission speeds.
• In denser populations, the increased data rates delivered to each subscriber will force division of the cell
capacity among fewer users. Because of this, cells must be much smaller than they are today.
• Another factor to consider is the macro cell capacity. The spectrum allotted to mobile networks has
been increasing over the years, roughly doubling over a five year period. With advancements in radio
technology, a corresponding increase in average macro cell efficiency has occurred over the same
period. As a result, the macro cell capacity, which is a product of these two entities, will see a four-fold
increase over a five year period. This increase, however, is nowhere close to the projected 26-fold
increase in mobile data (as stated above), and will force mobile operators to deploy a small-cell network
architecture.

These two factors will force operators to adopt small cell architectures, resulting in an exponential increase in cell
sites deployed in the network. In large networks covering large geographies, the scale is expected to be in the
order of several tens of thousands to a few hundred thousands of LTE eNodeBs and associated CSGs.
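
The arithmetic behind this conclusion can be made explicit with a deliberately simplified model that ignores offload and scheduling gains. If spectrum and average macro cell efficiency each roughly double over five years while traffic grows 26-fold, the shortfall must be absorbed by additional cells:

    capacity gain                 = 2 (spectrum) x 2 (efficiency) = 4x
    required growth in cell count ~ 26 / 4 = 6.5x

Even this coarse estimate implies a several-fold increase in cell sites, which is the small-cell effect described above.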

Figure 6 - Macro Cell Capacity

[Figure: log-scale plot from 1990 to 2015 of spectrum growth and average macro cell efficiency growth, whose product, macro capacity, grows far more slowly than the 26x growth in mobile data. Source: Agilent.]

Seamless Interworking with the Mobile Packet Core


As mentioned in the previous section, the flattened all-IP LTE/EPC architecture is a significant departure from
previous generations of mobile standards and should be an important consideration in designing the RAN
backhaul for 4G mobile transport.

The 2G/3G hierarchical architecture consists of a logical hub-and-spoke connectivity between base station
controller/radio network controller (BSC/RNC) and the base transceiver station (BTS)/NodeBs. This hierarchical
architecture lent itself naturally to the circuit-switched paradigm of having point-to-point connectivity between
the cell sites and controllers. The reach of the RAN backhaul was also limited in that it extended from the radio
access network to the local aggregation/distribution location where the controllers were situated.

In contrast, the flat LTE architecture does away with the hierarchy by getting rid of the intermediate controller
like the BSC/RNC and letting the eNodeB communicate directly with the EPC gateways. It also does away with
the point-to-point relationship of 2G, 3G architectures and imposes multipoint connectivity requirements at the
cell site. This multipoint transport requirement from the cell site not only applies to the LTE X2 interface, which
introduces direct communication between eNodeBs requiring any-to-any mesh network connectivity, but also
to the LTE S1 interface, which requires a one-to-many relationship between the eNodeB and multiple Evolved
Packet Core (EPC) gateways.

While the Serving Gateway (SGW) nodes may be deployed in a distributed manner closer to the aggregation
network, the Mobility Management Entities (MME) are usually fewer in number and centrally located in the
core. This extends the reach of the Radio Access Network (RAN) backhaul from the cell site deep into the core
network.

Important consideration also needs to be given to System Architecture Evolution (SAE) concepts like MME
pooling and SGW pooling in the EPC that allow for geographic redundancy and load sharing. The RAN backhaul
service model must provide for eNodeB association to multiple gateways in the pool and migration of eNodeB
across pools without having to re-architect the underlying transport architecture.

Figure 7 - RAN Backhaul Architecture

[Figure: two backhaul architectures spanning RAN, aggregation, and core. Top, the 2G/3G hierarchical backhaul architecture: BTS/NodeBs connect over Abis/Iub to the BSC/RNC, which in turn connects to the MSC, SGSN, and GGSN. Bottom, the LTE/EPC flattened backhaul architecture: eNodeBs connect over S1-U and S1-C directly to SGW/PGW and MME gateways in the core, and over X2 to neighboring eNodeBs.]

Transport of Multiple Services from All Locations
LTE has to co-exist with other services on a common network infrastructure that could include:
• Existing mobile services:
◦ 3G UMTS IP/ATM
◦ 2G GSM and SP Wi-Fi in a mobile-only deployment
• A multitude of other services:
◦ Residential broadband triple play
◦ Metro Ethernet Forum (MEF) E-Line and E-LAN
◦ L3VPN business services
◦ RAN sharing and wireline wholesale in a converged mobile and wireline deployment

In these scenarios, the network has to not only support multiple services concurrently, but also support all these
services across disparate endpoints. Typical examples are:
• L3 transport for LTE and Internet-High Speed Packet Access (I-HSPA) controller-free architectures: from
RAN to SAE gateways in the core network
• L3 transport for 3G UMTS/IP: from RAN to BSC in the aggregation network
• L2 transport for 2G GSM and 3G UMTS/ATM: from RAN to RNC/BSC in the aggregation network
• L2 transport for residential wireline: from access to BNG in the aggregation network
• L3/L2 transport for business wireline: from access to remote access networks across the core network
• L2 transport for wireline wholesale: from access to retail wireline SP peering point
• L3 transport for RAN sharing: from RAN to retail mobile SP peering point

The transport technology used in the RAN backhaul and the network architecture must be carefully engineered
to be scalable and flexible enough to meet the requirements of various services being transported across a
multitude of locations in the network.

System Overview
System Concept
The Cisco Fixed Mobile Convergence (FMC) system defines a multi-year ongoing development effort, building
a flexible, programmable and cost-optimized network infrastructure targeted to deliver in-demand fixed wireline
and mobile network services. FMC provides the architectural baseline for creating a scalable, resilient, and
manageable network infrastructure that optimally integrates the fixed wireline service edge and interworks with
the Mobile Packet Core (MPC).

The system is designed to concurrently support residential triple play, business L2VPN and L3VPN, and multiple
generation mobile services on a single converged network infrastructure. In addition, it supports:
• Graceful introduction of long-term evolution (LTE) with existing 2G/3G services with support for
pseudowire emulation (PWE) for 2G GSM and 3G UMTS/ATM transport.
• L2VPNs for 3G UMTS/IP, and L3VPNs for 3G UMTS/IP and 4G LTE transport.
• Broadband Network Gateway (BNG) co-located with Carrier-Grade NAT for residential services.
• Multiservice Edge (MSE) pseudowire headend (PWHE) termination for business services.
• Multicast transport.
• Network synchronization (physical layer and packet based).
• Hierarchical-QoS (H-QoS).
• Operations, administration, and maintenance (OAM).
• Performance management (PM).
• Fast convergence.

The Cisco FMC system meets the Broadband Forum TR-101 requirements for residential services and supports
all MEF requirements for business services. The FMC system also meets all Next-Generation Mobile Network
(NGMN) requirements for next-generation mobile backhaul, and innovates on the Broadband Forum TR-221
specification for MPLS in mobile backhaul networks by unifying the MPLS transport across the access,
aggregation, and core domains.

Simplification of the End-to-End Mobile Transport and Service Architecture


A founding principle of the Cisco FMC system is the simplification of the transport architecture by eliminating the
control and management plane translations that are inherent in legacy designs. As described in “Service Provider
Architectures,” traditional backhaul architectures relying on L2 transport are not optimized for converged service
delivery, nor for a flat all-IP architecture to support LTE transport. Furthermore, backhaul architectures built over
mixed L2 and L3 transport are inherently complex to operate. The FMC System enables a Unified L3 MPLS/IP
Transport extending end-to-end across the system.

It simplifies the control plane by providing seamless MPLS Label-Switched Paths (LSPs) across the access, pre-
aggregation, aggregation/distribution, and core domains of the network. In doing so, a fundamental attribute
of decoupling the transport and service layers of the network and eliminating intermediate touchpoints in the
backhaul is achieved. By eliminating intermediate touchpoints, it simplifies the operation and management of the
service. Service provisioning is required only at the edges of the network. Simple carrier class operations with
end-to-end OAM and performance monitoring services are made possible.

Convergence
A key aspect of the Cisco FMC System is service convergence, enabling any service to be delivered to any
part of the network. Some examples of service convergence provided by the Cisco FMC solution are:
• Convergence of fixed residential and business service edges in a single node.
• Optimal placement of fixed-access residential service edge role in the Unified MPLS Transport.
• Optimal integration and placement of Fixed-access business service edge role in the Unified MPLS
Transport.
• Integration of CGN address translation for fixed access residential services with other residential service
edge functions.
• Integration of fixed and mobile transport services in the Unified MPLS Transport, including support of all
services on a single Access Node (AN).

Optimal Integration of Wireline Service Edge Nodes


The Cisco FMC system integrates service edge aspects directly with the transport functions of the network
design. Such integration allows for optimal placement of edge functions within the network in order to address
a particular type of service. Service nodes providing residential BNG and CGN functions can be co-located
in the central office (CO) with the Access Nodes: Passive Optical Network (PON) optical line terminal (OLT)
equipment, Fiber to the Home (FTTH) access nodes, etc. Business MSE functions can be located optimally in
the network, either in an aggregation node (AGN) or Pre-Aggregation Node (PAN), depending upon operator
preference and scalability needs.

Access nodes (AN) transport multipoint business services to these service edge nodes via Ethernet over MPLS
(EoMPLS) pseudowires, and connect to the proper service transport: Virtual Private LAN services (VPLS) virtual
forwarding instance (VFI) for E-LAN and MPLS VPN for L3VPN. PW to L3VPN interworking on the service edge
node is accomplished via PWHE functionality. VPWS services, such as E-Line and Circuit Emulation over Packet
(CEoPs), are transported directly between ANs via pseudowires.
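
As an illustrative sketch of the pseudowire transport just described (the interface, neighbor address, and pw-id below are hypothetical, and syntax varies by platform and release), an IOS XR access node might cross-connect an access-facing subinterface into an EoMPLS pseudowire toward the service edge node as follows:

interface GigabitEthernet0/0/0/1.100 l2transport
 ! Match customer traffic on VLAN 100 and strip the tag toward the PW
 encapsulation dot1q 100
 rewrite ingress tag pop 1 symmetric
!
l2vpn
 xconnect group BUSINESS
  p2p ELINE-100
   interface GigabitEthernet0/0/0/1.100
   ! Pseudowire to the service edge node that terminates it
   neighbor 10.255.1.5 pw-id 100

On the service edge node, the pseudowire would then be stitched into the appropriate service instance: a VPLS VFI for E-LAN, or a PWHE interface placed into an MPLS VPN VRF for L3VPN.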

Flexible Placement of L3 and L2 Transport Virtualization Functions for Mobile Backhaul


The hierarchical RAN backhaul architecture of 2G and 3G releases involved an intermediate agent like the BSC/
RNC, which mostly resided at the aggregation/distribution layer of the transport network.

This simplified the requirements on the transport in that it only required connectivity between the RAN access
and aggregation network layers. In comparison, 4G LTE imposes many new requirements on the backhaul:
• Because of the any-to-any relationship between eNodeBs for the X2 interface and the one-to-many
relationship between eNodeBs and EPC gateways (SGWs, MMEs) for the S1-u/c interface, the eNodeBs
and associated CSGs in the RAN access need both local connectivity and direct connectivity to the EPC
gateways in the MPC.
• The stringent latency requirements of the X2 interface require logical mesh connectivity between CSGs that introduces minimal delay, on the order of 30 ms. This delay budget is expected to shrink further, to around 10 ms, for features such as collaborative multiple input multiple output (MIMO) in 3GPP LTE Release 10 and beyond.
• The Evolved Universal Terrestrial Radio Access Network (E-UTRAN)/EPC architecture supports MME
pooling and SGW pooling to enable geographic redundancy, capacity increase, load sharing, and
signaling optimization. This requires the transport infrastructure to provide connectivity from eNodeBs in
the RAN access to multiple MME and SGWs within these pools in the core network.
• The introduction of LTE into an existing 2G/3G network has to be graceful and the transition will take
time. During this period, it is natural for a few centralized EPC gateways to be initially deployed and
shared across different regions of the network. As capacity demands and subscriber densities increase,
it is expected that new gateways will be added closer to the regions and subscribers will have to be
migrated. While the migration across gateways within the packet core could be done seamlessly based on gateway pooling, it is imperative that the underlying transport infrastructure require minimal or no provisioning changes to allow the migration.

In 2G and 3G releases, the hub-and-spoke connectivity requirement between the BSC/RNC and the BTS/NodeB
makes L2 transport using Ethernet bridging with VLANs or P2P PWs with MPLS PWE3 appealing. In contrast, a
L3 transport option is much better suited to meet the myriad of connectivity requirements of 4G LTE. The UMMT
architecture provides both L2 and L3 MPLS VPN transport options that provide the necessary virtualization
functions to support the coexistence of LTE S1-u/c, X2, interfaces with GSM Abis TDM and UMTS IuB ATM
backhaul. The decoupling of the transport and service layers of the network infrastructure and the seamless
connectivity across network domains makes the system a natural fit for the flat all-IP LTE architecture by allowing
for the flexible placement of 2G/3G/4G gateways in any location of the network to meet all the advanced backhaul requirements listed above.

Deliver New Levels of Scale for MPLS Transport with RFC-3107 Hierarchical-Labeled BGP LSPs
As described in “Fixed and Mobile Converged Transport Characteristics,” supporting the convergence of fixed
wireline and mobile services will introduce unprecedented levels of scale in terms of number of ANs and services
connected to those nodes. While L2 and L3 MPLS VPNs are well suited to provide the required virtualization
functions for service transport, inter-domain connectivity requirements for business and mobile services present
challenges of scale to the transport infrastructure. This is because IP aggregation with route summarization, usually performed between the access, aggregation, and core regions of the network, does not work for MPLS, as MPLS is not capable of aggregating Forwarding Equivalence Classes (FECs). RFC 5283 defines a longest-match mechanism that allows FEC aggregation in LDP, but it is not widely deployed and requires significant reallocation of IP addressing in existing deployments. In typical MPLS deployments, the FEC is the PE’s /32 loopback IP address. Exposing the loopback addresses of all the nodes (10k-100k) across the network introduces two main challenges:
• Large flat routing domains adversely affect the stability and convergence time of the Interior Gateway
Protocol (IGP).
• The sheer size of the routing and MPLS label information control plane and forwarding plane state
will easily overwhelm the technical scaling limits on the smaller nodes (ANs and PANs) involved in the
network.

Unified MPLS elegantly solves this problem with a divide-and-conquer strategy of isolating the access,
aggregation, and core network layers into independent and isolated IGP domains. Label Distribution Protocol
(LDP) is used for setting up LSPs within these domains, and RFC-3107 BGP-labeled unicast is used for setting
up LSPs across domains. This BGP-based inter-domain hierarchical LSP approach helps scale the network to hundreds of thousands of AN sites without overwhelming any of the smaller nodes in the network and, unlike RFC 5283, does not require any address reallocation. At the same time, the stability and fast convergence of the small isolated IGP domains corresponding to the various network layers are maintained.

Transport Models
The Cisco FMC System incorporates a network architecture designed to consolidate transport of fixed wireline
and mobile services in a single network. Continued growth in residential and business services, combined with
ubiquitous mobile broadband adoption driven by LTE, will introduce unprecedented levels of scale in terms of
eNodeBs and ANs into the FMC network. This factor, combined with services requiring connectivity from the
access domain all the way to and across the core network, introduces challenges in scaling the MPLS network.
As previously mentioned, the endpoint identifier in MPLS is the PE’s /32 loopback IP address, so IP aggregation
with route summarization cannot be performed between the access, aggregation, and core regions of the
network. All network technologies meet a scale challenge at some point, and the solution is invariably some form of hierarchy. The Unified MPLS Transport basis of the FMC System is no different, and uses a hierarchical approach to solve the scaling problem in MPLS-based end-to-end deployments.

Unified MPLS adopts a divide-and-conquer strategy where the core, aggregation, and access networks are
partitioned in different MPLS/IP domains. The network segmentation between the core and aggregation domains
could be based on a single autonomous system (AS) multi-area design, or utilize a multi-AS design with inter-AS
organization. Regardless of the type of segmentation, the Unified MPLS transport concept involves partitioning
the core, aggregation, and access layers of the network into isolated IGP and LDP domains. Partitioning these
network layers into such independent and isolated IGP domains helps reduce the size of routing and forwarding
tables on individual routers in these domains, which leads to better stability and faster convergence. LDP is used
for label distribution to build LSPs within each independent IGP domain. This enables a device inside an access,
aggregation, or core domain to have reachability via intra-domain LDP LSPs to any other device in the same
domain. Reachability across domains is achieved using RFC 3107 procedures, whereby BGP-labeled unicast serves as the inter-domain label distribution mechanism to build hierarchical LSPs across domains. This allows the link-state database of the IGP in each isolated domain to remain as small as possible, while all external reachability information is carried in BGP, which is designed to scale to the order of millions of routes.
• In Single AS Multi-Area designs, interior Border Gateway Protocol (iBGP)-labeled unicast is used to build
inter-domain LSPs.
• In Inter-AS designs, iBGP-labeled unicast is used to build inter-domain LSPs inside the AS, and exterior
Border Gateway Protocol (eBGP)-labeled unicast is used to extend the end-to-end LSP across the AS
boundary.

In both cases, the Unified MPLS Transport across domains will use hierarchical LSPs that rely on a BGP-distributed label used to transit the isolated MPLS domains, and on an LDP-distributed label used within the AS to
reach the inter-domain area border router (ABR) or autonomous system boundary router (ASBR) corresponding
to the labeled BGP next hop.
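
To make the mechanics concrete, the following is a minimal, classic-IOS-style sketch of an ABR advertising its loopback in BGP-labeled unicast with next-hop-self toward an aggregation-facing neighbor. The AS number, addresses, and loopback are hypothetical placeholders; inline-RR deployments may need additional configuration to apply next-hop-self to reflected routes, and exact syntax varies by platform and release.

    router bgp 100
     neighbor 10.255.1.1 remote-as 100
     neighbor 10.255.1.1 update-source Loopback0
     !
     address-family ipv4
      ! Advertise the ABR loopback as a labeled prefix (RFC 3107)
      network 10.255.0.1 mask 255.255.255.255
      neighbor 10.255.1.1 activate
      ! Exchange MPLS labels along with the BGP routes
      neighbor 10.255.1.1 send-label
      ! Rewrite the next hop so the prefix resolves over the local intra-domain LDP LSP
      neighbor 10.255.1.1 next-hop-self

With this in place, a remote PE forwards traffic with two transport labels: the BGP label identifying the destination loopback and the LDP label reaching the next ABR along the path.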

The Cisco FMC system integrates key technologies from Cisco’s Unified MPLS suite of technologies to deliver a
highly scalable and simple-to-operate MPLS-based converged transport and service delivery network. It enables
a comprehensive and flexible transport framework structured around the most common layers in SP networks:
the access network, the aggregation network, and the core network. The transport architecture structuring takes
into consideration the type of access and the size of the network.

Access Type
• MPLS Packet Access:
 ◦ Covers point-to-point links, rings, and hierarchical topologies.
 ◦ Applies to both fiber and newer Ethernet microwave-based access technologies, with the MPLS access network enabled by the ANs.
 ◦ Services include both mobile and wireline services and can be enabled by the ANs in the access network and the PANs or AGNs in the aggregation network.
• IP/Ethernet/TDM Access:
 ◦ Includes native IP or Ethernet links in point-to-point or ring topologies over fiber and newer Ethernet microwave-based access.
 ◦ Supports Central Office (CO) located PON OLT access.
 ◦ Covers point-to-point TDM+Ethernet links over hybrid microwave access.
 ◦ The MPLS services are enabled by the aggregation network and include residential; business X-Line, E-LAN, and L3VPN; and mobile GSM Abis, ATM IuB, IP IuB, and IP S1/X2 interfaces, aggregated in MPLS PANs or AGNs.

Network Size
• Small Network:
 ◦ Applies to network infrastructures in small geographies where the core and aggregation network layers are integrated in a single domain.
 ◦ The single IGP/LDP domain includes fewer than 1000 core nodes and AGNs.
• Large Network:
 ◦ Applies to network infrastructures built over large geographies.
 ◦ The core and aggregation network layers have hierarchical physical topologies that enable IGP/LDP segmentation.

This transport architecture structuring based on access type and network size leads to six architecture models
that fit various customer deployments and operator preferences as shown in the following table, and described in
the sections below.

Table 6 - FMC Transport Models

• Ethernet/TDM access: small networks use a flat LDP core and aggregation network; large networks use a hierarchical-labeled BGP core and aggregation network.
• MPLS access: both small and large networks use a hierarchical-labeled BGP LSP access network.
• MPLS access (mobile only): small networks use labeled BGP redistribution into the access IGP/LDP (optional LDP Downstream-on-Demand [DoD]); large networks use hierarchical-labeled BGP redistribution into the access IGP/LDP (optional LDP DoD).

Flat LDP Core and Aggregation
This architecture model applies to small geographies where core and aggregation networks may not have distinct
physical topologies, are integrated under common operations, and where network segmentation is not required
for availability reasons. It assumes a non-MPLS IP/Ethernet or TDM access being aggregated in a small scale
network.

Figure 8 - Flat LDP Core and Aggregation

[Diagram: core nodes and pre-aggregation nodes integrated in a single core-and-aggregation IP/MPLS domain (one IGP area, one IGP/LDP domain); mobile access over TDM or packet microwave and fixed/mobile access over Ethernet/SDH terminate on the pre-aggregation nodes.]

The small-scale aggregation network is assumed to be composed of core nodes and AGNs that are integrated in a single IGP/LDP domain consisting of fewer than 1000 nodes. Since no segmentation between network layers exists, a flat LDP LSP provides end-to-end reachability across the network. All mobile (and wireline) services are enabled by the AGNs. The mobile access is based on TDM and packet microwave links aggregated in AGNs that provide TDM/ATM/Ethernet VPWS and MPLS VPN transport.
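
Since the whole network is one IGP/LDP domain, enabling the flat LSP amounts to running a single IGP instance and turning on LDP on every core and aggregation link. A minimal, classic-IOS-style sketch follows; the IS-IS instance name, NET, and addressing are hypothetical placeholders.

    router isis AGG-CORE
     net 49.0001.0102.5500.1001.00
     is-type level-2-only
     metric-style wide
    !
    interface GigabitEthernet0/1
     ip address 10.0.12.1 255.255.255.252
     ip router isis AGG-CORE
     ! LDP builds the flat end-to-end LSP over the IGP topology
     mpls ip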

Hierarchical-Labeled BGP LSP Core-Aggregation and Access


This architecture model applies to small geographies. It assumes an MPLS-enabled access network with fiber and packet microwave links being aggregated in a small-scale network.

Figure 9 - Hierarchical-Labeled BGP LSP Core-Aggregation and Access

[Diagram: access IP/MPLS domains subtend the pre-aggregation nodes of a single core-and-aggregation IP/MPLS domain (one IGP area); the LDP LSPs within each domain are stitched together by an iBGP hierarchical LSP.]

The small-scale aggregation network is assumed to be composed of core nodes and AGNs that are integrated in a single IGP/LDP domain consisting of fewer than 1000 nodes. The access network comprises a separate IGP domain. The separation can be enabled by making the access network part of a different IGP area from the aggregation and core nodes, or by running a different IGP process on the PANs corresponding to the aggregation/core and RAN access networks. LDP is used to build intra-area LSPs within each segmented domain. The aggregation/core and access networks are integrated with labeled BGP LSPs, with the PANs acting as ABRs performing a BGP next-hop-self (NHS) function to extend the iBGP hierarchical LSP across the two domains. The mobile and wireline services can be enabled by the ANs in the access as well as the PANs/AGNs.

By utilizing BGP community filtering for mobile services and dynamic IP prefix filtering for wireline services, the
ANs perform inbound filtering in BGP in order to learn the required remote destinations for the configured mobile
and wireline services. All other unwanted prefixes are dropped in order to keep the BGP tables small and prevent
unnecessary updates.
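
A minimal, classic-IOS-style sketch of such inbound filtering on an AN follows. The community values standing for the mobile and fixed service edges and the PAN neighbor address are hypothetical placeholders; an equivalent route-policy would be used on IOS-XR platforms.

    ! Hypothetical communities: 100:1000 = mobile service edges, 100:2000 = fixed service edges
    ip community-list standard SERVICE-EDGES permit 100:1000
    ip community-list standard SERVICE-EDGES permit 100:2000
    !
    route-map LABELED-UNICAST-IN permit 10
     match community SERVICE-EDGES
    ! The implicit deny drops all other labeled prefixes
    !
    router bgp 100
     address-family ipv4
      neighbor 10.255.1.1 route-map LABELED-UNICAST-IN in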

Labeled BGP Redistribution into Access IGP


This architecture model applies to networks deployed in small geographies. It assumes an MPLS-enabled access network with fiber and packet microwave links being aggregated in a small-scale network.

Figure 10 - Labeled BGP Redistribution into Access IGP

[Diagram: RAN IP/MPLS domains subtend the pre-aggregation nodes of a single core-and-aggregation IP/MPLS domain; labeled BGP service communities are redistributed into the access IGP, and the access IGP into labeled BGP, at the pre-aggregation nodes; access LDP LSPs extend the iBGP hierarchical LSP of the core/aggregation domain.]

The network infrastructure organization in this architecture model is the same as the one described in
“Hierarchical-Labeled BGP LSP Core-Aggregation and Access.” This model differs from the aforementioned
one in that the hierarchical-labeled BGP LSP spans only the combined core/aggregation network and does not
extend to the access domain. Instead of using BGP for inter-domain label distribution in the access domain,
the end-to-end Unified MPLS LSP is extended into the access by using LDP with redistribution. The IGP scale
in the access domain is kept small by selective redistribution of required remote prefixes from iBGP based on
communities. Because there is no mechanism for using dynamic IP prefix lists for filtering in this model, the ANs
support only mobile services. Both mobile and wireline services can be supported by the PANs or AGNs.

Hierarchical-Labeled BGP LSP Core and Aggregation
This architecture model applies to networks deployed in medium-to-large geographies. It assumes a non-MPLS IP/Ethernet or TDM access being aggregated in a relatively large-scale network.

Figure 11 - Hierarchical-Labeled BGP LSP Core-Aggregation

[Diagram: aggregation network IP/MPLS domains on either side of a core network IP/MPLS domain; Ethernet/SDH and TDM or packet microwave access subtend the aggregation nodes; the LDP LSPs within each domain are stitched together by an iBGP (or eBGP) hierarchical LSP.]

The network infrastructure is organized by segmenting the core and aggregation networks into independent IGP/
LDP domains. The segmentation between the core and aggregation domains could be based on a Single AS
Multi-Area design, or utilize a multi-AS design with an inter-AS organization. In the Single AS Multi-Area option,
the separation can be enabled by making the aggregation network part of a different IGP area from the core
network, or by running a different IGP process on the core ABR nodes corresponding to the aggregation and
core networks. The access network is based on native IP or Ethernet links in point-to-point or ring topologies
over fiber and newer Ethernet microwave-based access, or point-to-point TDM+Ethernet links over hybrid
microwave.

All mobile and wireline services are enabled by the AGNs. LDP is used to build intra-area LSPs within each
segmented domain. The aggregation and core networks are integrated with labeled BGP LSPs. In the Single AS
Multi-Area option, the core ABRs perform BGP NHS function to extend the iBGP-hierarchical LSP across the
aggregation and core domains. When the core and aggregation networks are organized in different ASs, iBGP is
used to build the hierarchical LSP from the PAN to the ASBRs and eBGP is used to extend the end-to-end LSP
across the AS boundary.

BGP community-based egress filtering is performed by the Core Route Reflector (RR) towards the core ABRs,
so that the aggregation networks learn only the required remote destinations for mobile and wireline service
routing, and all unwanted prefixes are dropped. This helps reduce the size of BGP tables on these nodes and
also prevents unnecessary updates.
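
An IOS-XR-style sketch of this egress filtering at the core RR is shown below; the community values and the ABR neighbor address are hypothetical placeholders.

    route-policy AGG-EGRESS
      ! Pass only prefixes the aggregation domain needs (hypothetical communities)
      if community matches-any (100:1000, 100:2000) then
        pass
      else
        drop
      endif
    end-policy
    !
    router bgp 100
     neighbor 10.255.0.1
      address-family ipv4 labeled-unicast
       route-policy AGG-EGRESS out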

Hierarchical-Labeled BGP LSP Core, Aggregation, and Access
This architecture model applies to networks deployed in large geographies. It assumes an MPLS-enabled access network with fiber and packet microwave links being aggregated in a large-scale network.

Figure 12 - Hierarchical-Labeled BGP LSP Core, Aggregation, and Access

[Diagram: access IP/MPLS domains subtend the aggregation network IP/MPLS domains, which attach to the core network IP/MPLS domain; the LDP LSPs in the access, aggregation, and core domains are stitched together by an iBGP hierarchical LSP (eBGP across AS boundaries).]

The network infrastructure is organized by segmenting the core, aggregation, and access networks into
independent IGP/LDP domains. The segmentation between the core, aggregation, and access domains could be
based on a Single AS Multi-Area design or utilize a multi-AS design with an inter-AS organization. In the Single
AS Multi-Area option, the separation between core and aggregation networks can be enabled by making the
aggregation network part of a different IGP area from the core network, or by running a different IGP process on
the core ABR nodes corresponding to the aggregation and core networks. The separation between aggregation
and access networks is typically enabled by running a different IGP process on the PANs corresponding to
the aggregation and access networks. In the inter-AS option, while the core and aggregation networks are
in different ASs, the separation between aggregation and access networks is enabled by making the access
network part of a different IGP area from the aggregation network, or by running a different IGP process on the
PANs corresponding to the aggregation and RAN access networks.

The mobile and wireline services can be enabled by the ANs in the access as well as the PANs and AGNs. LDP is used to build intra-area LSPs within each segmented domain. The access, aggregation, and core networks
are integrated with labeled BGP LSPs. In the Single AS Multi-Area option, the PANs and core ABRs act as ABRs
for their corresponding domains and extend the iBGP hierarchical LSP across the access, aggregation, and
core domains. When the core and aggregation networks are organized in different ASs, the PANs act as ABRs
performing BGP NHS function in order to extend the iBGP hierarchical LSP across the access and aggregation
domains. At the ASBRs, eBGP is used to extend the end-to-end LSP across the AS boundary.

By utilizing BGP community filtering for mobile services and dynamic IP prefix filtering for wireline services, the
ANs perform inbound filtering in BGP in order to learn the required remote destinations for the configured mobile
and wireline services. All other unwanted prefixes are dropped in order to keep the BGP tables small and to
prevent unnecessary updates.

Hierarchical-Labeled BGP Redistribution into Access IGP
This architecture model applies to networks deployed in large geographies. It assumes an MPLS-enabled access
network with fiber and packet microwave links being aggregated in a large-scale network.

Figure 13 - Hierarchical-Labeled BGP Redistribution into Access IGP

[Diagram: RAN MPLS/IP domains (separate IGP area/process) subtend the pre-aggregation nodes of the aggregation networks, which attach to the core network; labeled BGP service communities are redistributed into the access IGP, and the access IGP into labeled BGP, at the pre-aggregation nodes; access LDP LSPs extend the iBGP (eBGP across AS) hierarchical LSP.]

The network infrastructure organization in this architecture model is the same as the one described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” with options for both Single AS Multi-Area and Inter-AS designs. This model differs from the aforementioned one in that the hierarchical-labeled BGP LSP
spans only the core and aggregation networks and does not extend to the access domain. Instead of using
BGP for inter-domain label distribution in the access domain, the end-to-end Unified MPLS LSP is extended
into the access by using LDP with redistribution. The IGP scale in the access domain is kept small by selective
redistribution of required remote prefixes from iBGP based on communities. Because there is no mechanism for
using dynamic IP prefix lists for filtering in this model, only mobile services are currently supported by the ANs.
Both mobile and wireline services can be supported by the PANs or AGNs.

Residential Wireline Service Models


With network devices becoming increasingly powerful, residential architectures have experienced a shift. The additional computing capacity and better hardware performance of today’s equipment have made multi-service capabilities within a single network node possible and fueled a transition toward distributed and semi-centralized models. These new models simplify the architecture by removing the entire IP edge layer. They also reduce
costs by eliminating application-specific nodes, such as dedicated BNGs, and consolidate transport and service
functions within a single device. They allow for optimal placement of the residential service edge based on
subscriber distribution, empowering SPs with the ability to provision subscribers, bandwidth, and service access
according to the specific patterns of their networks.

The readiness of fiber-based access and the consequent increase in bandwidth availability at the last mile have driven a steep rise in the number of subscribers that can be aggregated at the access layers of the
network. New Ethernet-based access technologies such as PON allow for the aggregation of thousands of
subscribers on a single AN, with per-subscriber speeds that average 20 Mbps, further justifying the distribution
of subscriber management functions as close as possible to the subscriber-facing edge of the network to satisfy
scale and total bandwidth demands.

At the same time, the economy of scale and incumbency of legacy access technologies such as DSL, which
is characterized by limited bandwidth and subscriber fan-out at the AN, mandate the positioning of subscriber management functions in a more centralized location. To cater to those needs while guaranteeing Layer 2-like
connectivity between subscribers and subscriber management devices over a scalable transport infrastructure,
operators have abandoned the traditional access network design based on a flat Layer-2 domain in favor of
a more flexible MPLS access, which can use Ethernet over MPLS pseudowires for the transport of residential
traffic.

Following these trends, the Cisco FMC system has selected products from the Cisco ASR 9000 family for
deployment at pre-aggregation and aggregation sites, allowing BNG functions to reside at any layer of the
aggregation network. Figure 14 and Figure 15 depict the supported models.

Figure 14 - Residential Wireline Service Models—FTTH Access

[Diagram: FTTH/PON access with a CO-located OLT (ME-4600) or FTTH AN (ME-2600) connecting over Ethernet to the aggregation network; the BNG service edge resides on the AGN (AGN-SE, ASR-9001/9006/9010) or PAN (PAN-SE, ASR-9001/ASR-903); triple-play unicast is carried as IP or L3VPN over Unified MPLS with multicast VPN (mLDP); N:1 and 1:1 VLANs on a non-trunk UNI, IPv6 IPoE/PPPoE sessions, IPv6 routed CPE with MAP-T client, and a MAP-T border router at the optimal service edge; core nodes are CRS-3 over a DWDM mesh.]

Figure 15 - Residential Wireline Service Models—DSL Access

[Diagram: DSL access with a legacy DSLAM in a remote CO; EoMPLS PWs from MPLS access nodes (ASR-901, ME 3600) carry subscriber traffic to a BNG with PWHE (VPWS plus loopback to BNG) on the AGN-SE or PAN-SE (ASR-9000 family); triple-play unicast is carried as IP or L3VPN over Unified MPLS with multicast VPN (mLDP), with IPv6 IPoE/PPPoE sessions and a MAP-T border router at the optimal service edge.]

To adapt to the preferred deployment model of a given provider, connectivity between the subscriber customer
premises equipment (CPE) and the BNG can be modeled by using both 1:1 and N:1 subscriber aggregation
models, also known as 1:1 VLAN and N:1 VLAN, while the User-Network Interface (UNI) remains non-trunk, keeping the provisioning of the local loop simple on both the CPE and the ANs.

The 1:1 VLAN model indicates a one-to-one mapping between a user port on the AN and a VLAN. The uniqueness of the mapping is maintained in the AN and across the aggregation network. An N:1 VLAN, on the other hand, refers to a many-to-one mapping between user ports and a VLAN. The user ports may be located in the same or different ANs, and a common VLAN is used to carry the users’ traffic across the aggregation network.
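
On an IOS-XR-based BNG, the two aggregation models map to different subinterface encapsulations on the access-facing port. A minimal sketch follows, with hypothetical interface names and VLAN ranges:

    ! N:1 model: one shared VLAN carries all subscribers of an AN
    interface TenGigE0/0/0/1.10
     encapsulation dot1q 10
    !
    ! 1:1 model: a block of per-subscriber VLANs terminates on one
    ! "ambiguous" subinterface; each VLAN identifies one subscriber
    interface TenGigE0/0/0/1.100
     encapsulation ambiguous dot1q 101-200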

Subscriber access is supported via native IP over Ethernet (IPoE) for providers who prefer a cohesive transport
across all residential services and between residential, business, and mobile applications, or through legacy Point-to-Point Protocol over Ethernet (PPPoE) for those who desire stronger subscriber authentication mechanisms and have a long-standing incumbency of PPPoE. For operators who choose IPoE as the subscriber
access protocol, the architecture will leverage DHCP-based address assignment procedures in order to discover
subscriber presence, leveraging a single network layer protocol for flexible IP address management as well as
subscriber detection.

Orthogonal to the subscriber access protocol is the address family used at the network layer to carry subscriber traffic. Depletion of the IPv4 address space has been an area of concern for operators for several years. Techniques such as IPv4 network address translation (NAT44) have been widely deployed in order to reduce the number of globally-routable addresses assigned to subscribers. However, law enforcement regulations mandating the ability to uniquely identify a subscriber by his or her IP address have largely limited the effectiveness of these techniques in certain countries.

With IPv6 reaching maturity, within the network and at the subscriber site alike, an increasing number of
providers are now actively looking at turning on IPv6 access to subscribers. The migration, however, can’t be
instantaneous. A number of network services are still offered exclusively via IPv4, making a coexistence model
and a step-by-step migration necessary.

The system provides a complete migration solution. While the first Cisco FMC system release addressed support for CGN and dual-stack subscribers at the BNG, the second release focuses on an IPv6-only access network (CPE to BNG) for unicast services. IPv4-capable household devices (single- or dual-stacked) are granted end-to-end connectivity through Mapping of Address and Port using Translation (MAP-T) functions performed at the residential CPE and at the BNG device. Among the various NAT464 (IVI) technologies, MAP-T has been selected because of its simplicity and transparency, in addition to providing effective IPv4 address savings. By not requiring network equipment to keep stateful IVI translation entries, it optimizes resource utilization and performance, while an intelligent translation logic preserves the packet’s original source and destination ports and addresses, allowing for effective QoS and security applications throughout the network.

Within the core network, Unified MPLS offers seamless transport for both address families, fully separating IPv6 enablement in the residential access from the core transport.

Given the lack of maturity of IPv6-enabled multicast applications, multicast services are delivered by using IPv4 end-to-end, forcing the access network to remain dual-stacked. However, multicast forwarding does not impose any constraint on the receiver IP addressing logic, allowing the CPE IPv4 address to be non-routable or even non-unique within the IPv4 domain, thereby preserving the IPv4 address savings achieved by MAP-T.

For PON/FTTH access, Internet Group Management Protocol (IGMP) v2/v3 is used in the Layer-2 access
network, and Protocol Independent Multicast (PIM) Source Specific Multicast (SSM) is implemented at the BNG.
For DSL access, the access network is routed and IGMPv2/v3 reports, proxied by CPE and Digital Subscriber
Line Access Multiplexer (DSLAM), are converted into PIM SSM messages at the last hop multicast router.

In the aggregation/core network, multicast delivery trees are signaled and established by using (recursive)
Multicast Label Distribution Protocol (MLDP), and multicast traffic is forwarded over flat MPLS LSPs. Multicast
forwarding can be isolated in the same residential VPN used for unicast services, or handled globally according
to the operator’s preference and desire for a common multicast transport across multiple service categories
(e.g., residential and mobile).
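
The sketch below gives a rough idea of how the VRF-isolated variant might look on an IOS-XR service edge, with mLDP signaling the default MDT; the VRF name, root address, and interface are hypothetical, and exact syntax depends on platform and release.

    mpls ldp
     mldp
    !
    multicast-routing
     vrf RESIDENTIAL
      address-family ipv4
       ! Default MDT signaled via mLDP (hypothetical root address)
       mdt default mldp ipv4 10.255.0.1
       interface all enable
    !
    router igmp
     vrf RESIDENTIAL
      interface GigabitEthernet0/0/0/2
       version 3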

Alignment to Broadband Forum TR-101 and TR-200 Technical Specifications


Broadband Forum TR-101 provides architectural and topological models for an Ethernet-based aggregation
network and has become the global standard for triple-play deployments for residential and business customers.
A large part of the specification is technology-agnostic, allowing broadband access technologies such as DSL and fiber-to-the-x (FTTx) to align easily. Others, such as PON, require the definition of additional, more detailed requirements. TR-200 strengthens TR-101 in that it deepens the characterization of TR-101’s topological models and VLAN architecture options to better adapt to EPON’s point-to-multipoint and split-node nature.

Figure 16 - TR-101 and TR-200 Architectural Model Comparison

[Diagram: the TR-101 reference architecture (NSP/ASP networks, BNG, Ethernet aggregation, Access Node, MDF, loop, and RG, with A10, V, U, and T reference points) compared with the TR-200 reference architecture, in which the AN role is split between the OLT and ONU across the ODN (S/R and R/S reference points).]

The OLT and optical network unit (ONU) share the responsibility for performing the role of an AN, with the ONU
facing the user through the unit (U) reference point, and the OLT facing the aggregation network through the V
reference point.

Under those assumptions, and regardless of the broadband access technology chosen by a given implementation (DSL, FTTH, or PON), the first release of the Cisco FMC system aligns to TR-101 non-trunk UNI support at the U reference point, and to both 1:1 and N:1 VLAN aggregation models.

A non-trunk UNI uses a shared VLAN between the AN and the residential gateway for all of a subscriber’s services, while relative priority across services is preserved by properly setting the Differentiated Services Code Point (DSCP) field in the IP packet header or the 802.1p CoS values carried in an Ethernet priority-tagged frame.

A 1:1 or N:1 VLAN model is then used to aggregate subscriber traffic into the operator’s network toward the associated service insertion point. The N:1 VLAN uses a shared VLAN to aggregate all subscribers and all services to and from a particular AN, while a 1:1 model dedicates a unique VLAN to each subscriber.

The subscriber aggregation models are described in detail in “Subscriber Aggregation Models.”

Community Wi-Fi Service Models
Increases in mobile data traffic, constraints on radio spectrum and coverage, and the attractive economics of offload have caused mobile operators to incorporate small cell solutions into their network infrastructure plans. Those solutions include licensed and unlicensed (Wi-Fi) spectrum and technologies, such as Femto and Wi-Fi small cells.

At the same time, telco and cable operators who do not own a portion of the licensed spectrum are trying to
improve customer retention by devising creative ways to provide “on the go” connectivity to their clients in
residential and metropolitan areas.

Wi-Fi has become ubiquitous in nearly all personal mobile devices, including smartphones, tablets, cameras, and
game consoles. What’s more, Wi-Fi technology is improving every day. Robust carrier-grade Wi-Fi networks
have the ability to outperform 4G networks and are secure, while next-generation hotspots offer roaming that is
as transparent as cellular roaming. To meet the spectrum challenge, Wi-Fi provides 680 MHz of new spectrum to
operators.

Carrier-grade Wi-Fi therefore has become a central element in strategies for ubiquitous capacity and coverage
across networks for both fixed and mobile operators. While other Cisco systems focus more on Metro SP Wi-Fi architectures, Cisco FMC Release 2.0 introduces community Wi-Fi.

Under this model, operator-owned residential CPEs announce a private Service Set Identifier (SSID) used
by members of the household, and a public, well-known SSID shared among all customers of the same
operator. The private SSID uses Wi-Fi Protected Access (WPA)/WPA2 security protocols in order to secure communication for the household equipment, while the public SSID is open. Public access is authenticated via web logon procedures or transparently using dynamically learned network identities associated with the connecting device (e.g., MAC address).

The separation between household and public Wi-Fi traffic in the access network is achieved by VLAN
segmentation, requiring the CPE UNI to become trunked. VLAN-based segmentation simplifies H-QoS modeling
for aggregated rate limiting based on service category (pure residential wireline vs. public Wi-Fi), and it allows for
flexible and independent positioning of the gateway functions. Based on the scale and performance capabilities of the selected devices, and mindful of operators’ need for cost optimization, the Cisco FMC system implements the wireline and Wi-Fi gateway functions on the same aggregation node, as shown in the following figure.

Figure 17 - Community Wi-Fi Service Models

[Diagram: community Wi-Fi over FTTH/PON access with a trunk UNI; the wireline service uses N:1 or 1:1 VLANs with an IPv6 routed CPE and MAP-T, while public Wi-Fi uses a bridged CPE over a shared N:1 VLAN with IPv4 IPoE sessions; both terminate on the AGN-SE or PAN-SE, with triple-play unicast as IP or L3VPN over Unified MPLS and multicast VPN (mLDP); OLT PTIN 360 or FTTH ME-2600 access, PANs ASR-9001/ASR-903, AGNs ASR-9001/9006/9010, core CRS-3.]

The residential CPE operates in routed mode over the household VLAN and in bridged mode over the Wi-Fi VLAN. Bridged mode is necessary to preserve visibility of the public handset’s MAC address throughout the access network for authorization purposes.

All public Wi-Fi subscribers connecting from the same AN share the same N:1 Wi-Fi VLAN, regardless of
whether the subscriber aggregation model implemented over the household VLAN is 1:1 or N:1.

Connectivity over the public Wi-Fi network uses IPv4, which remains the leading address family in this space; IPv6-capable handset operating systems and applications have only started appearing in the market.

In the aggregation and core network, the same level of segmentation between pure residential and public Wi-Fi
traffic can be achieved by isolating community Wi-Fi services in a dedicated L3 VPN through the virtualization
means enabled by Unified MPLS.

Business Service Models
Business, as well as residential, service architectures have experienced a shift. The additional computing capacity and better hardware performance of today’s equipment enable multi-service capabilities within a single network node, spurring a transition toward more distributed models. By consolidating transport and service
functions for L2VPNs and L3VPNs within a single device, optimal placement of the business service edge is
enabled based on subscriber distribution.

The Cisco FMC system supports the following business wireline services on a single converged network:
• L3VPN services via Ethernet over Multiprotocol Label Switching (EoMPLS) PW with Pseudowire Headend
(PWHE) connectivity to MPLS VPN VRFs at the service edge node.
• Multipoint E-LAN services via Provider Backbone Bridging Ethernet VPN (PBB-EVPN) or Hierarchical
Virtual Private LAN Service (H-VPLS).
• Point-to-point X-Line via Any Transport over MPLS (AToM) pseudowires: TDM, ATM, and Ethernet.

The Cisco FMC solution supports MPLS-based access networks for those operators seeking to deploy a
converged architecture to transport all service types with a uniform control plane. Native Ethernet and TDM
access networks are also supported for those operators seeking to cap investments in legacy network
deployments and to facilitate migration to a packet-switched network architecture.

Unified MPLS Access


The delivery models for X-Line, E-LAN, and L3VPN business services with MPLS-based access networks are
illustrated in the following figure.

Figure 18 - Business Services Overview - Unified MPLS Access

[Diagram: business services over Unified MPLS access. L3VPN: Ethernet 802.1q UNI on the AN, Ethernet PWE3 to PWHE on the AGN-SE or PAN-SE, then MPLS VPN (v4). E-LAN: Ethernet port/802.1q UNI, H-VPLS PWE3 into VPLS (+ 802.1ah PBB) or PBB-EVPN on the service edge. X-Line: Ethernet, CESoPSN, SAToP, or ATM VC/VP PWE3 end-to-end between ANs, with Ethernet port/802.1q and TDM/ATM IMA E1/STM1 UNIs. Remote fixed ANs ME-3600X/ASR-901, PANs ASR-9001/ASR-903, AGNs ASR-9001/9006/9010.]

For an L3VPN service, the subscriber CPE device is connected to the SP network via an Ethernet 802.1Q-tagged
UNI on the FAN or CSG. Since the scalability characteristics of a business service L3VPN are very different from those of the transport L3VPN utilized for LTE service backhaul, both in terms of the number of VRFs involved and
number of prefixes in each VRF, the VRF for the L3VPN service is not placed on the AN. Transport of the L3VPN
service from the AN to the service edge in the PAN or AGN is accomplished via Ethernet PWE3. This PWE3 is
mapped by the service edge node to the proper L3VPN VRF by implementing PWHE functionality.

Pseudowire Headend (PWHE) is a technology that allows termination of access PWs into an L3 (VRF or global) domain or into an L2 domain. PWs provide an easy and scalable mechanism for tunneling customer traffic into a
common IP/MPLS network infrastructure. PWHE supports features such as H-QoS and access lists (ACL) for an
L3VPN on a per-PWHE interface basis. PWHE introduces the construct of a “pw-ether” interface on the service
edge node. This virtual pw-ether interface terminates the PWs carrying traffic from the subscriber CPE device
and maps directly to an MPLS VPN VRF on the service edge node. Per-subscriber H-QoS and any required
subscriber ACLs are applied to the pw-ether interface.
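
A minimal IOS-XR-style sketch of PWHE on the service edge node follows; the VRF, interface-list, addresses, and PW ID are hypothetical placeholders.

    ! Pin the PWHE virtual interface to a set of physical ports
    generic-interface-list PWHE-PORTS
     interface TenGigE0/0/0/0
    !
    interface PW-Ether100
     vrf BIZ-CUST1
     ipv4 address 192.0.2.1 255.255.255.252
     attach generic-interface-list PWHE-PORTS
     ! Per-subscriber H-QoS policies and ACLs are applied on this interface
    !
    l2vpn
     xconnect group BUSINESS
      p2p CUST1
       ! The PW from the AN terminates on the pw-ether interface above
       interface PW-Ether100
       neighbor ipv4 10.255.2.11 pw-id 100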

For an L2VPN service, such as a port-based Ethernet Private LAN (EP-LAN) or a VLAN-based Ethernet Virtual
Private LAN (EVP-LAN) service, the subscriber CPE device is connected to the SP network via an Ethernet port
UNI or 802.1Q-tagged UNI on the FAN or CSG. The Cisco FMC system supports two mechanisms for providing
L2VPN services: traditional H-VPLS virtual forwarding instances (VFI), or PBB-EVPN.

A VPLS VFI automatically creates a full mesh of pseudowires to transport L2VPN services between service edge
nodes. To minimize the number of neighbors involved in the VPLS VFI and to avoid any potential MAC address
scaling issues on the AN, the VPLS VFI is not configured on the ANs. Transport of the L2VPN service from the
AN to the service edge in the PAN or AGN is again accomplished via Ethernet Pseudowire Emulation Edge to
Edge (PWE3). The PWE3 from the AN is connected to a VPLS VFI providing the L2VPN service on the service
edge node.
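
A minimal IOS-XR-style sketch of this arrangement on the service edge node follows; neighbor addresses and PW IDs are hypothetical placeholders.

    l2vpn
     bridge group BUSINESS
      bridge-domain ELAN-CUST1
       ! Access PW from the AN carrying the customer UNI traffic
       neighbor 10.255.2.11 pw-id 200
       ! VFI: full mesh of core PWs between service edge nodes
       vfi CUST1
        neighbor 10.255.0.21 pw-id 200
        neighbor 10.255.0.22 pw-id 200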

PBB-EVPN is a new draft in the IETF L2VPN working group that combines PBB and E-VPN functionality in a
single device. While still relying on MPLS forwarding, E-VPN uses BGP for distributing MAC address reachability
information over an MPLS cloud. In existing L2VPN solutions, MAC addresses are always learned in the data
plane, i.e., MAC bridging. In comparison, in E-VPN the learning of MAC addresses over the core is done via
control plane, i.e., MAC routing. Control-plane based learning brings flexible BGP-based policy control to MAC
address, similar to the policy control available for IP prefixes in L3VPNs. Customers can build any topology by
using route targets. A full mesh of pseudowires is no longer required, which is often a scalability concern in
VPLS as the number of provider edge (PE) routers increases. Another key feature of E-VPN is its multi-homing capability. In VPLS, there is limited support for multi-homing, with only active-standby or per-service active-active dual homing supported. E-VPN, on the other hand, supports both per-service and per-flow active-active multi-homing, leading to better load balancing across peering PEs. It also supports multi-homed device (MHD) and multi-homed network (MHN) topologies with two or more routers, which can be geographically disjoint, in the same redundancy group.

PBB-EVPN goes a step further by combining Provider Backbone Bridging (PBB) and E-VPN functions in a single device. PBB is defined by IEEE 802.1ah, where MAC tunneling (MAC-in-MAC) is employed to improve service instance and MAC address scalability in Ethernet. Using PBB’s MAC-in-MAC encapsulation, PBB-EVPN separates the customer MAC address (C-MAC) and backbone MAC address (B-MAC) spaces. In contrast to E-VPN, PBB-EVPN uses BGP to advertise B-MAC reachability, while data-plane learning is still used for remote C-MAC to remote B-MAC binding. As a result, the number of MAC addresses in the provider backbone is reduced to the number of PEs, which is usually in the hundreds and thus far fewer than the millions of customer MAC addresses typically found in large service provider networks. Should there be any MAC mobility in the access layer, it is completely transparent to BGP and is instead handled by re-learning the moved C-MAC against a new B-MAC.
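
A hedged IOS-XR-style sketch of a PBB-EVPN service edge follows; the EVI, I-SID, route-target, and interface values are hypothetical, and exact syntax varies by release.

    evpn
     evi 100
      bgp
       route-target import 100:100
       route-target export 100:100
    !
    l2vpn
     bridge group PBB
      ! Edge bridge domain: customer-facing, learns C-MACs in the data plane
      bridge-domain CUST1-EDGE
       interface GigabitEthernet0/0/0/2.100
       pbb edge i-sid 1000 core-bridge PBB-CORE
      ! Core bridge domain: B-MAC reachability advertised via E-VPN BGP routes
      bridge-domain PBB-CORE
       pbb core
        evpn evi 100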

For a wireline VPWS, like an Ethernet Private Line (EPL) or Ethernet Virtual Private Line (EVPL) business service, an EoMPLS pseudowire is created between two FANs or CSGs to transport the service across the necessary access and aggregation domains and the core network. The ANs enabling the VPWS learn each other’s loopbacks via BGP labeled unicast that is extended to the access network by using the PANs as inline route reflectors.

The route scale in the access domain is kept to a minimum by ingress filtering on the AN. The ANs that enable wireline services tag their loopbacks in internal BGP (iBGP)-labeled unicast with a common FAN community, which is imported by all service edge nodes for wireline services. The ANs’ ingress filtering for business services depends upon the type of service.

Figure 19 - Unified MPLS Access Scale for Business Services

[Diagram: wireline VPWS scale across access (OSPF 0/IS-IS L2), aggregation (IS-IS L1), and core (IS-IS L2) domains; PANs and CN-ABRs act as inline RRs; FANs and CSGs advertise their loopbacks in iBGP-labeled unicast with Local RAN, Global RAN, and Global FAN communities; when a VPWS service is activated, the inbound filter is automatically updated for the remote FAN.]

For E-LAN and L3VPN services, all service edge functionality is handled by the PAN or AGN nodes, and the
loopback prefixes are marked with the FSE community in BGP. Thus, connectivity from the AN to these nodes is
achieved by permitting this community in the inbound filter.

For E-Line services, a dynamic IP prefix list is used for inbound filtering. When a wireline service is activated to a new destination, the route-map used for inbound filtering has to be updated. Since adding a new wireline service on the device results in a change in the routing policy of a BGP neighbor, the dynamic inbound soft reset function is used to initiate a non-disruptive exchange of route refresh requests between the AN and the PAN.

Tech Tip

Both BGP peers must support the route refresh capability in order to use dynamic
inbound soft reset capability.
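
Putting the pieces together, a classic-IOS-style sketch of the AN inbound policy for wireline services follows; community values, prefixes, and the PAN neighbor address are hypothetical placeholders. The FSE community entry is static, while the prefix list grows as E-Line services are activated.

    ! Service edge loopbacks tagged with the (hypothetical) FSE community
    ip community-list standard FSE permit 100:2000
    ! Per-service entry added when an E-Line to a new remote FAN is provisioned
    ip prefix-list REMOTE-FAN seq 10 permit 10.255.2.21/32
    !
    route-map LABELED-UNICAST-IN permit 10
     match community FSE
    route-map LABELED-UNICAST-IN permit 20
     match ip address prefix-list REMOTE-FAN
    !
    router bgp 100
     address-family ipv4
      neighbor 10.255.1.1 route-map LABELED-UNICAST-IN in
      neighbor 10.255.1.1 send-label

After the prefix list is updated, issuing clear ip bgp 10.255.1.1 in requests a route refresh from the PAN and re-applies the inbound policy without tearing down the session.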

Native Ethernet and TDM Access
The delivery models for X-Line, E-LAN, and L3VPN business services with native access networks are illustrated
in the following figure.

Figure 20 - Business Services Overview - TDM and Ethernet Access

[Diagram: business services over native TDM and Ethernet access. L3VPN: Ethernet 1q/QinQ NNI terminated in MPLS VPN/multicast VPN (mLDP) on the AGN-SE or PAN-SE. E-LAN: Ethernet port, 802.1q, or 802.1ad NNI into VPLS (+ 802.1ah PBB) or PBB-EVPN. X-Line: Ethernet, CESoPSN, SAToP, or ATM VC/VP PWE3 from the service edge, with Ethernet and TDM/ATM IMA E1/STM1 UNIs over SONET/SDH access. PANs ASR-9001/ASR-903, AGNs ASR-9001/9006/9010.]

For an L3VPN service, the subscriber CPE device is connected to the SP network typically via an Ethernet 802.1Q-tagged user network interface (UNI) on the AN. Transport of the L3VPN service from the AN to the service edge in the PAN or AGN is accomplished via native Ethernet. The AN may translate the VLAN tag of the customer UNI to a unique VLAN tag on the SP network or may push an S-VLAN tag on the C-VLAN, creating a Q-in-Q network-to-network interface (NNI). Whether single- or double-tagged, the Ethernet NNI will be terminated on the service edge node. The VLANs carrying the L3VPN service are mapped to an MPLS VPN VRF, which is then transported over the Unified MPLS Transport network. H-QoS and any required subscriber ACLs are applied to the Ethernet NNI interface.

For an L2VPN service, such as a port-based Ethernet Private LAN (EP-LAN) or a VLAN-based Ethernet Virtual Private LAN (EVP-LAN) service, the subscriber CPE device is connected to the SP network via an Ethernet port UNI, 802.1Q-tagged UNI, or 802.1ad double-tagged UNI on the FAN. Transport of the L2VPN service from the AN to the service edge in the PAN or AGN is accomplished via native Ethernet.

The AN may translate the VLAN tag of the customer UNI to a unique VLAN tag on the SP network or may push an S-VLAN tag on the C-VLAN, creating a Q-in-Q NNI. Whether single- or double-tagged, the Ethernet NNI will be terminated on the service edge node. The VLANs are connected to a VPLS VFI or PBB-EVPN providing the L2VPN service on the service edge node. Per-subscriber H-QoS and any required subscriber ACLs are applied to the Ethernet NNI interface.

For a wireline VPWS, like an Ethernet Private Line (EPL) or Ethernet Virtual Private Line (EVPL) business service,
the customer CPE devices on either end are connected via an Ethernet port UNI, 802.1Q-tagged UNI, or 802.1ad double-tagged UNI to the ANs. The AN may translate the VLAN tag of the customer UNI to a unique VLAN tag
on the SP network, or may push an S-VLAN tag on the C-VLAN, creating a Q-in-Q NNI. Whether single- or
double-tagged, the Ethernet NNI will be terminated on the service edge node. The service edge node will map
the VLAN(s) to a PW to be transported across the aggregation domains and the core network. Per-subscriber
H-QoS and any required subscriber ACLs are applied to the Ethernet NNI interface of the service edge node.

Mobile Service Models


A fundamental goal of the Cisco FMC system is the simplification of the end-to-end mobile transport and service
architecture. The system achieves this goal by decoupling the transport and service layers of the network,
thereby allowing these two distinct entities to be provisioned and managed independently. As described in
“Transport Models,” Unified MPLS Transport seamlessly interconnects the access, aggregation, and core MPLS
domains of the network infrastructure with hierarchical LSPs at the transport layer. Once this Unified MPLS
Transport is established (a task that only needs to be undertaken once), a multitude of wireline and mobile
services can be deployed on top of it. These services can span any location in the network without being restricted by topological boundaries.

The Cisco FMC system provides a comprehensive mobile service backhaul solution for transport of LTE, legacy
2G GSM, and existing 3G UMTS services. An overview of the models supported for the transport of mobile
services is illustrated in Figure 21 and Figure 22:

Figure 21 - Mobile Services Overview - Unified MPLS Access

[Diagram: mobile services over Unified MPLS access (MPC coverage provided by the MPC System). TDM BTS and ATM NodeB traffic is carried over AToM PWs from the CSG (ASR-901) to the MTG (ASR-9000) for handoff to the BSC/RNC, SGSN, and GGSN; eNodeB S1-u/c and X2-c/u traffic rides an MPLS VPN (v4/v6) from the CSG to the MTG and EPC gateways (S/PGW, MME) in the MPC; PANs ASR-903/ASR-9001, core CRS-3 over DWDM fiber rings and mesh.]

Figure 22 - Mobile Services Overview - TDM and Ethernet Access

[Diagram: mobile services over native TDM and Ethernet microwave access (microwave partners: NSN, NEC, SIAE; MPC coverage provided by the MPC System). TDM BTS and ATM NodeB traffic enters AToM PWs at the PAN for handoff to the BSC/RNC, SGSN, and GGSN; Ethernet eNodeB traffic enters a VFI with IRB (one IP subnet per VFI) mapped into the MPLS VPN (v4/v6) toward the MTG and EPC gateways (S/PGW LMA, MME) in the MPC; PANs ASR-903/9001, AGNs ASR-9000, core CRS-3.]

The system proposes a highly scalable MPLS L3VPN-based service model to meet the immediate needs of LTE
transport and accelerate its deployment. The MPLS VPN model provides the required transport virtualization
for the graceful introduction of LTE into an existing 2G/3G network, and also satisfies future requirements of
RAN sharing in a wholesale scenario. It is well suited to satisfy the mesh connectivity and stringent latency
requirements of the LTE X2 interface. Simple MPLS VPN route-target import/export mechanisms can be used to
enable multipoint connectivity:
• within the local RAN access for intra-RAN-access X2 handoff.
• with adjacent RAN access regions for inter-RAN-access region X2 handoff.
• with EPC gateways (SGWs, MMEs) in the MPC for the S1-u/c interface.
• with more than one MME and SGW for MME and SGW pooling scenarios.

The MPLS VPN-based service model allows for eNodeBs and associated CSGs to be added to the RAN at any
location in the network. EPC gateways can be added in the MPC and have instant connectivity to each other
without additional configuration overhead. It allows seamless migration of eNodeBs initially mapped to centralized
EPC gateways to more distributed ones in order to accommodate capacity and scale demands without having to
re-provision the transport infrastructure. “L3 MPLS VPN Service Model for LTE” covers these aspects in detail.
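
As a rough illustration of the route-target logic described above, a classic-IOS-style VRF sketch for a CSG follows; all RD and RT values are hypothetical placeholders.

    ip vrf LTE
     rd 100:4001
     ! Export the routes of the local RAN access region
     route-target export 100:4001
     ! Import the local RAN region for intra-access X2 handoff
     route-target import 100:4001
     ! Import adjacent RAN regions for inter-region X2 handoff
     route-target import 100:4002
     ! Import the EPC gateway routes for S1-u/c and pooling
     route-target import 100:9000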

Service virtualization with MPLS-based L2 and L3 VPNs also allows legacy 2G GSM and existing 3G UMTS
services to coexist with LTE on the same transport infrastructure. The system supports mobile service providers (MSPs) with GSM and ATM-based UMTS deployments wishing to remove, reduce, or cap investments in SONET/
SDH and ATM transport infrastructure by using MPLS-based CEoP services.
• For the MSPs who want to reduce the SONET/SDH infrastructure used for GSM, FMC enables PWE3-based transport of emulated TDM circuits, as sketched after this list. Structured circuit emulation is achieved with CESoPSN, and unstructured emulation is achieved with SAToP. E1/T1 circuits from BTS equipment connected to the CSG or to the PAN are transported to the MTG, where they are bundled into channelized STM1/OC-3 interfaces for handoff to the BSC.
• For the MSPs who want to reduce their ATM infrastructure used for ATM-based UMTS, Cisco FMC
enables ATM VC (AAL0 or AAL5) or VP (AAL0) PWE3-based transport. ATM E1/T1 or IMA interfaces
from NodeB equipment connected to the CSG or PAN can be transported to the MTG, where they are
bundled into STM1 ATM interfaces for handoff to the RNC. Cell packing may be used to optimize the
bandwidth used for this transport.

“L2 MPLS VPN Service Model for 2G and 3G” covers these aspects in detail.
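For illustration, an unstructured (SAToP) emulation of an E1 from a BTS might be configured on an IOS XE based CSG along these lines; the controller numbering, peer loopback, and pseudowire ID are assumptions for the sketch, and a CESoPSN variant would replace the unframed cem-group with a timeslot-based one (for example, cem-group 0 timeslots 1-31):

controller E1 0/0/0
 ! SAToP: emulate the whole E1 as one unstructured circuit
 cem-group 0 unframed
!
interface CEM0/0/0
 no ip address
 cem 0
  ! Carry the emulated circuit over an MPLS pseudowire to the
  ! MTG loopback (PW ID 100 is an example value).
  xconnect 10.0.0.1 100 encapsulation mpls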

For the above service models, the system supports physical layer synchronization of frequency based on SyncE,
or packet-based synchronization of frequency as well as phase and time of day (ToD) based on 1588 Precision
Time Protocol (PTP), as described in “Synchronization Distribution.”
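A minimal SyncE sketch on an IOS XR node follows, assuming a single upstream synchronous Ethernet feed; the interface, priority, and quality-level values are illustrative, and the corresponding 1588 PTP configuration is platform-specific and omitted here:

! Enable frequency synchronization globally (ITU-T option 1 networks)
frequency synchronization
 quality itu-t option 1
!
interface GigabitEthernet0/0/0/0
 ! Recover frequency from the upstream SyncE-capable neighbor
 frequency synchronization
  selection input
  priority 10
  wait-to-restore 0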

System Architecture
Transport Architecture
Large Network, Multi-Area IGP Design with IP/MPLS Access
This section details the system architecture for a transport model where the network organization between
the core and aggregation domains is based on a single autonomous system (AS), multi-area IGP design. This model
follows the approach of enabling a Unified MPLS LSP using hierarchical-labeled Border Gateway Protocol (BGP)
LSPs across the core and aggregation network, and presents two approaches for extending the Unified MPLS
LSP into the access domain.

Figure 23 - Multi-Area IGP/LDP Domain Organization

[Figure content: a single AS partitioned, left to right, into a RAN IGP process (OSPF/IS-IS), an aggregation area/level (OSPF x/IS-IS L1), the core area/level (OSPF 0/IS-IS L2), a second aggregation area/level, and a second RAN IGP process. CSGs and FANs sit in the access IP/MPLS domains, PANs and AGNs in the aggregation IP/MPLS domains, and CN-ABRs at the core boundary; an independent LDP LSP spans each of the five domains.]

From a multi-area IGP organization perspective, the core network is either an Intermediate System-to-
Intermediate System (IS-IS) Level 2 or an Open Shortest Path First (OSPF) backbone area. The aggregation
domains, in turn, are Intermediate System-to-Intermediate System (IS-IS) Protocol Level 1 or OSPF non-
backbone areas. No redistribution occurs between the core and aggregation IGP levels/areas, thereby containing
the route scale within each domain. The MPLS/IP access networks subtending from AGNs or PANs are based
on a different IGP process, restricting their scale to the level of the local access network. To accomplish this,
the PANs run two distinct IGP processes, with the first process corresponding to the core-aggregation network
(IS-IS Level 1 or OSPF non-backbone area) and the second process corresponding to the Mobile RAN access
network. The second IGP process could be an OSPF backbone area or an IS-IS L2 domain. All nodes belonging
to the access network subtending from a pair of PANs are part of this second IGP process.

Partitioning these network layers into such independent and isolated IGP domains helps reduce the size of
routing and forwarding tables on individual routers in these domains, which, in turn, leads to better stability and
faster convergence within each of these domains. Label Distribution Protocol (LDP) is used for label distribution
to build intra-domain LSPs within each independent access, aggregation, and core IGP domain. Inter-domain
reachability is enabled by hierarchical LSPs using BGP-labeled unicast as per RFC 3107 procedures, where iBGP

is used to distribute labels in addition to remote prefixes, and LDP is used to reach the labeled BGP next-hop.
Two options are presented below to extend the Unified MPLS LSP into the access domain to accommodate
different operator preferences.

Option-1: Multi-Area IGP Design with Labeled BGP Access


This option is based on the transport model described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and
Access.”

Tech Tip

This model supports transport of fixed wireline and mobile services. The following
figure shows the example for RAN transport. The deployment considerations for both
RAN transport and fixed wireline transport are covered in this guide.

Figure 24 - Inter-Domain Transport for Multi-Area IGP Design with Labeled BGP Access

[Figure content: CSGs and FANs in each RAN region run iBGP IPv4+label to their PAN inline RRs, PANs to their CN-ABR inline RRs, and CN-ABRs to the centralized CN-RR; the MTG peers into the same labeled BGP mesh in front of the MPC. RAN-region, aggregation, and MPC BGP communities scope prefix distribution, and the end-to-end iBGP hierarchical LSP is stitched over the per-domain LDP LSPs.]

In this option, the access, aggregation, and core networks are integrated with Unified MPLS LSPs by extending
labeled BGP from the core all the way to the nodes in the access network. Any node in the network that requires
inter-domain LSPs to reach nodes in remote domains acts as a labeled BGP PE and runs iBGP IPv4 unicast+labels with its corresponding local RR.
• The core point of presence (POP) nodes, referred to in this design as Core Node–Area Border Routers
(CN-ABR), are labeled BGP ABRs and act as inline RRs for their local aggregation network PAN clients.
The CN-ABRs peer with other CN-ABRs using iBGP-labeled unicast in either a full mesh configuration
or using a centralized core-node route reflector (CN-RR) within the core domain. The centralized RR
deployment option is shown in Figure 24. Note that the CN-RR applies an egress filter towards the
CN-ABRs in order to drop prefixes with the common RAN community, which eliminates unnecessary
prefixes from being redistributed.
• For mobile service transport, the MTGs residing in the core network are labeled BGP PEs. They connect
to the EPC gateways (SGW, Packet Data Network Gateway [PGW], and MME) in the MPC. The MTGs
peer either directly with the closest CN-ABR RRs, in the case of a CN-ABR full-mesh configuration, or
with the CN-RR, depending on the deployment setting. The MTGs advertise their loopbacks into iBGP-
labeled unicast with the global MSE BGP community representing the MSE, and then import the global
MSE and common RAN communities.

• For fixed wireline service transport, the network nodes providing FSE functions, such as PWHE or
H-VPLS, are labeled BGP PEs. These FSE nodes will peer with the closest RR in the network, usually in
the aggregation network, depending upon the deployment setting. The FSE nodes advertise loopbacks
into the iBGP-labeled unicast with the global FSE BGP community, representing the FSE, and import
the global FSE and Internet Gateway (IGW) communities. The IGW community represents any Internet
Gateway node, providing internet peering functionality in the SP network.
• The PANs are labeled BGP PEs and act as inline RRs for the local access network nodes. All the
PANs in the aggregation network that require inter-domain LSPs to reach remote PANs in another
aggregation network, or the core network (to reach the MTGs or IGWs, for example), run BGP-labeled
unicast sessions with their local CN-ABR inline-RRs. The PANs advertise their loopbacks into BGP-
labeled unicast with common BGP communities that represent any services configured locally on the
PAN or on the attached access network, such as the RAN or FAN community. They learn labeled BGP
prefixes marked with these common AN BGP communities as necessary and also any required service
communities, such as those for FSE, MSE, or IGW nodes.
• The nodes in the access networks are labeled BGP PEs. Nodes carrying mobile services are referred
to as RAN nodes, and nodes carrying fixed wireline services are referred to as FAN nodes. They peer
with iBGP-labeled unicast sessions with their local PAN inline RRs. The ANs advertise their loopbacks
into BGP-labeled unicast with a common BGP community that represents the local access community:
RAN for mobile services and FAN for fixed wireline services. For mobile service transport, labeled BGP
prefixes marked with the MSE BGP community are learned for reachability to the MPC, and the adjacent
access network BGP communities, if inter-access X2 connectivity is desired. For business wireline
service transport, the ANs selectively learn the required FSE and remote FAN prefixes for configured
VPWS services.
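For instance, a CSG's side of this advertisement might be sketched as follows in IOS XR (the community and loopback values are assumptions for illustration):

route-policy SET-RAN
 ! Mark the loopback with the common RAN community plus this
 ! node's RAN-region community (example values).
 set community (100:1000, 100:510)
 pass
end-policy
!
router bgp 100
 address-family ipv4 unicast
  ! Advertise the CSG loopback into labeled BGP (RFC 3107)
  network 10.10.0.1/32 route-policy SET-RAN
  allocate-label all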

Since routes between the core IS-IS Level 2 (or OSPF backbone) and aggregation IS-IS Level 1 (or OSPF non-
backbone area) are not redistributed, the CN-ABRs have to reflect the labeled BGP prefixes with the next-hop
changed to self in order to be inserted into the data path, which enables the inter-domain LSP switching and
allows the aggregation and core IGP routing domains to remain isolated. This CN-ABR NHS function is applied
by the CN-ABRs towards its PAN clients in its local aggregation domain only for prefixes from other remote
domains, not for locally-learned prefixes. The purpose is to prevent the CN-ABR from inserting itself into the path
of inter-area X2 interface routing. The CN-ABR applies this NHS function for all updates towards the CN-RR in
the core domain. Similarly, since the access and aggregation networks are in different IGP processes, the PANs
have to reflect the labeled BGP prefixes with the next hop changed to self in order for the PANs to be inserted
into the data path, thus enabling the inter-domain LSP switching. This PAN NHS function is symmetrically
applied by the PANs towards nodes in the local access domain, and the higher level CN-ABR inline-RR in the
aggregation domain.
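A skeletal IOS XR sketch of this arrangement on a CN-ABR follows; the AS number, neighbor addresses, and policy names are assumptions, and applying next-hop-self only to remote-domain prefixes (as described above) would add a community condition to the outbound policy:

route-policy NHS
 ! Rewrite the BGP next hop to this CN-ABR so it stitches the
 ! intra-domain LDP LSPs into the hierarchical BGP LSP.
 set next-hop self
 pass
end-policy
!
router bgp 100
 bgp router-id 10.0.0.10
 ! Permit outbound policy to modify attributes of reflected iBGP routes
 ibgp policy out enforce-modifications
 address-family ipv4 unicast
  ! Allocate labels for BGP-advertised prefixes (RFC 3107)
  allocate-label all
 !
 ! PAN client in the local aggregation domain
 neighbor 10.1.0.1
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast
   route-reflector-client
   route-policy NHS out
 !
 ! Centralized CN-RR in the core domain
 neighbor 10.0.0.100
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast
   route-policy NHS out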

For mobile service transport, the MTGs in the core network are capable of handling large scale and will learn
all BGP-labeled unicast prefixes since they need connectivity to all the ANs carrying mobile services in the
entire network. Simple prefix filtering based on BGP communities is performed on the CN-RRs for constraining
IPv4+label routes from remote access regions from proliferating into neighboring aggregation domains, where
they are not needed. The PANs only learn labeled BGP prefixes marked with the common RAN BGP community
and the MSE BGP community. This allows the PANs to enable inter-metro wireline services across the core and also reflect the MSE prefix to their local access networks. Using a separate IGP process for the access enables
the access network to have limited control plane scale, since the ANs only learn local IGP routes and labeled
BGP prefixes marked with the MSE BGP community.
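The CN-RR egress filter mentioned above could be expressed in IOS XR route policy language roughly as follows (community values and names are illustrative):

community-set RAN-COMMON
 ! Common RAN access community (example value)
 100:1000
end-set
!
route-policy DROP-RAN
 ! Keep common-RAN prefixes from crossing the core into
 ! aggregation domains that do not need them; applied only on
 ! sessions towards the CN-ABRs, not towards the MTGs.
 if community matches-any RAN-COMMON then
  drop
 else
  pass
 endif
end-policy
!
router bgp 100
 neighbor 10.0.0.10
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast
   route-policy DROP-RAN out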

For fixed wireline service transport, the IGWs in the core network are capable of handling a high degree of
scalability and will learn all BGP-labeled unicast prefixes in order to provide connectivity to all the FSEs (and
possibly ANs) in the entire network that carry fixed services. Any nodes providing service edge functionality
are also capable of handling large scale, and will learn the common FAN community for AN access, the FSE
community for service transport to other FSE nodes, and the IGW community for internet access. Again, using a
separate IGP process for the access enables the access network to have limited control plane scale, since the
ANs only learn local IGP routes and labeled BGP prefixes marked with the FSE BGP community or permitted via a
dynamically-updated IP prefix list.

Option-2: Multi-Area IGP Design with IGP/LDP Access


This option is based on the transport model described in “Hierarchical-Labeled BGP Redistribution into Access
IGP.”

Tech Tip

The filtering mechanisms necessary for fixed wireline service deployment are not
currently available in this option, so it supports only mobile service transport. Wireline
service support will be added for this option in a future release.

Figure 25 - Inter-Domain Transport for Multi-Area IGP Design with IGP/LDP Access

[Figure content: PANs run iBGP IPv4+label to their CN-ABR inline RRs, and CN-ABRs to the centralized CN-RR, with the MTG in front of the MPC; at each PAN, iBGP is mutually redistributed with the RAN IGP process, so per-access LDP LSPs extend the iBGP hierarchical LSP to the CSGs. RAN-region, aggregation, and MPC BGP communities scope the redistribution.]

This option follows the approach of enabling labeled BGP across the core and aggregation networks and extends
the Unified MPLS LSP to the access by redistribution between labeled BGP and the access domain IGP. All
nodes in the core and aggregation network that require inter-domain LSPs to reach nodes in remote domains act
as labeled BGP PEs and run iBGP IPv4 unicast+labels with their corresponding local RRs.
• The core point of presence (POP) nodes (CN-ABR) are labeled BGP ABRs and act as inline RRs for their
local aggregation network PAN clients. The CN-ABRs peer with other CN-ABRs using iBGP-labeled
unicast in either a full mesh configuration or using a centralized CN-RR within the core domain. The
centralized RR deployment option is shown in Figure 25. Note that the CN-RR applies an egress filter
towards the CN-ABRs in order to drop prefixes with the common RAN community, which eliminates
unnecessary prefixes from being redistributed.
• The MTGs residing in the core network are labeled BGP PEs. They connect to the EPC gateways (SGW,
PGW, and MME) in the MPC. The MTGs peer either directly with the closest CN-ABR RRs, in the case of
a CN-ABR full-mesh configuration, or with the CN-RR, depending on the deployment setting. The MTGs
advertise their loopbacks into iBGP-labeled unicast with a common MSE BGP community.
• All the PANs in the aggregation network that require inter-domain LSPs to reach remote PANs in another
aggregation network, or the core network (to reach the MTGs, for example), run BGP-labeled unicast
sessions with their local CN-ABR inline RRs. The PANs advertise their loopbacks into BGP-labeled
unicast with a common RAN BGP community. They learn labeled BGP prefixes marked with the RAN and
MSE BGP communities.
• The inter-domain LSPs are extended to the MPLS/IP RAN access with a controlled redistribution
based on IGP tags and BGP communities. Each mobile access network subtending from a pair of PANs
is based on a different IGP process. At the PANs, the inter-domain core and aggregation LSPs are
extended to the RAN access by redistributing between iBGP and RAN IGP. In one direction, the RAN
AN loopbacks (filtered based on IGP tags) are redistributed into iBGP-labeled unicast and tagged with
a RAN access BGP community that is unique to that RAN access region. In the other direction, the MSE
prefixes filtered based on MSE-marked BGP communities, and optionally, adjacent RAN access prefixes
filtered based on RAN-region-marked BGP communities (if inter-access X2 connectivity is desired), are
redistributed into the RAN access IGP process.
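The bidirectional redistribution at a PAN might be sketched as follows in IOS XR; the IGP process name, prefix range, and community values are assumptions, and where the design filters access loopbacks on IGP tags, this sketch matches a loopback prefix set for brevity:

community-set MSE
 ! Global MSE community marking the MTG loopbacks (example value)
 100:2000
end-set
!
prefix-set RAN-LOOPBACKS
 ! CSG loopback range of this RAN access region (example range)
 10.10.0.0/16 ge 32
end-set
!
route-policy RAN-TO-BGP
 ! Lift only CSG loopbacks into labeled BGP, stamped with this
 ! region's unique RAN access community.
 if destination in RAN-LOOPBACKS then
  set community (100:510)
  pass
 else
  drop
 endif
end-policy
!
route-policy BGP-TO-RAN
 ! Hand only MSE (EPC-facing) prefixes down into the RAN IGP.
 if community matches-any MSE then
  pass
 else
  drop
 endif
end-policy
!
router isis RAN
 address-family ipv4 unicast
  redistribute bgp 100 route-policy BGP-TO-RAN
!
router bgp 100
 address-family ipv4 unicast
  allocate-label all
  redistribute isis RAN route-policy RAN-TO-BGP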

Since routes between the core IS-IS Level 2 (or OSPF backbone) and aggregation IS-IS Level 1 (or OSPF non-
backbone area) are not redistributed, the CN-ABRs have to reflect the labeled BGP prefixes with the next-hop
changed to self in order to insert themselves into the data path to enable the inter-domain LSP switching and
allow the aggregation and core IGP routing domains to remain isolated. This CN-ABR NHS function is applied
by the CN-ABRs only for prefixes from other remote domains towards its PAN clients in its local aggregation
domain. It is not applied for locally-learned prefixes to prevent the CN-ABR from inserting itself into the path of
inter-area X2 interface routing. The CN-ABR applies this NHS function for all updates towards the CN-RR in the
core domain.

The MTGs in the core network are capable of handling a high degree of scalability and will learn all BGP-labeled
unicast prefixes to provide connectivity to all the CSGs in the entire network. Simple prefix filtering based on
BGP communities is performed on the CN-RRs to keep IPv4+label routes from remote RAN access
regions from proliferating into neighboring aggregation domains, where they are not needed. The PANs only
learn labeled BGP prefixes marked with the common RAN and MSE BGP communities. This allows the PANs to
enable inter-metro wireline services across the core, and also redistribute the MSE prefix to their local access
networks. Using a separate IGP process for the RAN access enables the mobile access network to have limited
control plane scale, because the CSGs learn only local IGP routes and labeled BGP prefixes marked with the
MSE BGP community.

Large Network, Inter-AS Design with IP/MPLS Access
This section details the system architecture for a transport model where the core and aggregation networks are
organized as different ASs. This model follows the approach of enabling a Unified MPLS LSP using hierarchical-
labeled BGP LSPs based on iBGP-labeled unicast within each AS, and exterior BGP (eBGP)-labeled unicast to
extend the LSP across AS boundaries. Two approaches are presented for extending the Unified MPLS LSP into
the Mobile RAN access domain.

Figure 26 - Inter-AS IGP/LDP Domain Organization

[Figure content: three autonomous systems, with the core network (AS-A) between two aggregation networks (AS-B, AS-C); the core and aggregation networks are OSPF 0/IS-IS L2 domains, and the subtending RAN/FAN access networks are OSPF x/IS-IS L1 areas/levels. AGN-ASBRs border the core-node ASBRs at each AS boundary, and an independent LDP LSP spans each access, aggregation, and core IP/MPLS domain.]

The core and aggregation networks are segmented into different ASs. Within each aggregation domain, the
aggregation and access networks are segmented into different IGP areas or levels, where the aggregation
network is either an IS-IS Level 2 or an OSPF backbone area, and subtending access networks are IS-IS Level
1 or OSPF non-backbone areas. No redistribution occurs between the aggregation and access IGP levels/areas,
thereby containing the route scale within each domain. Partitioning these network layers into such independent
and isolated IGP domains helps reduce the size of routing and forwarding tables on individual routers in these
domains, which, in turn, leads to better stability and faster convergence within each of these domains. LDP is
used for label distribution to build intra-domain LSPs within each independent access, aggregation, and core IGP
domain.

Inter-domain reachability is enabled with hierarchical LSPs using BGP-labeled unicast as per RFC 3107
procedures. Within each AS, iBGP is used to distribute labels in addition to remote prefixes, and LDP is used
to reach the labeled BGP next-hop. At the ASBRs, the Unified MPLS LSP is extended across the aggregation
and core AS boundaries using eBGP-labeled unicast. The Unified MPLS LSP can be extended into the access
domain using two different options as presented below to accommodate different operator preferences.
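At the AS boundary itself, the eBGP leg on a CN-ASBR might look like this in IOS XR (AS numbers, the link address, and policy names are assumed for illustration; IOS XR requires an explicit inbound and outbound policy on eBGP sessions):

route-policy PASS-LABELED
 pass
end-policy
!
router bgp 100
 address-family ipv4 unicast
  allocate-label all
 !
 ! eBGP-labeled unicast session to the neighboring AGN-ASBR in
 ! AS 64600; the next hop is rewritten to this ASBR implicitly
 ! on the eBGP leg.
 neighbor 192.0.2.2
  remote-as 64600
  address-family ipv4 labeled-unicast
   route-policy PASS-LABELED in
   route-policy PASS-LABELED out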

Option-1: Inter-AS Design with Labeled BGP Access
This option is based on the transport model described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and
Access.”

Tech Tip

This model supports transport of fixed wireline and mobile services. The following
figure shows the example for RAN transport. The deployment considerations for both
RAN transport and fixed wireline transport are covered in this guide.

Figure 27 - Inter-Domain Transport for Inter-AS Design with Labeled BGP Access

[Figure content: within each aggregation AS, CSGs and FANs run iBGP IPv4+label to their PAN inline RRs and PANs to the centralized AGN-RR; AGN-ASBRs run eBGP IPv4+label to the CN-ASBRs, which peer through the CN-RR with the MTG in front of the MPC. RAN-region, aggregation, and MPC BGP communities scope prefix distribution, and the per-AS iBGP hierarchical LSPs are stitched by eBGP LSP segments at the AS boundaries over the per-domain LDP LSPs.]

In this option, the access, aggregation, and core networks are integrated with Unified MPLS LSPs by extending
labeled BGP from the core all the way to the nodes in the access network. Any node in the network that requires
inter-domain LSPs to reach nodes in remote domain acts as a labeled BGP PE and runs iBGP IPv4 unicast+labels
with their corresponding local RRs.
• The core POP nodes in this model are labeled BGP Autonomous System Boundary Routers (ASBR),
and are referred to as Core Node ASBRs (CN-ASBR). They peer with iBGP-labeled-unicast sessions
with the centralized CN-RR within the core AS, and also peer with eBGP-labeled unicast sessions with
the neighboring aggregation ASBRs. The CN-ASBRs insert themselves into the data path to enable
inter-domain LSPs by setting NHS on all iBGP updates towards their local CN-RRs and eBGP updates
towards the neighboring aggregation ASBRs. Note that the CN-RR applies an egress filter towards the
CN-ASBRs in order to drop prefixes with the common RAN community, which eliminates unnecessary
prefixes from being redistributed.
• For mobile service transport, the MTGs residing in the core network are labeled BGP PEs, which connect
to the EPC gateways (SGW, PGW, and MME) in the MPC. The MTGs peer with iBGP-labeled unicast
sessions with the CN-RR, advertising loopbacks into iBGP-labeled unicast with the global MSE BGP
community, representing the MSE, and importing the global MSE and common RAN communities.
• For fixed wireline service transport, the network nodes providing FSE functions, such as PWHE or
H-VPLS, are labeled BGP PEs. These FSE nodes will peer with the closest RR in the network, usually in
the aggregation network, depending upon the deployment setting. The FSE nodes advertise loopbacks
into the iBGP-labeled unicast with the global FSE BGP community, representing the FSE, and import the
common FAN community and global FSE and IGW communities. The IGW community represents any
Internet Gateway node, providing internet peering functionality in the SP network.

• The aggregation POP nodes in this model act as labeled BGP ASBRs in the aggregation AS, and are
referred to as Aggregation Node ASBRs (AGN-ASBR). They peer with iBGP-labeled unicast sessions
with the centralized AGN-RR within the aggregation AS, and peer with eBGP-labeled unicast sessions
to the CN-ASBR in the core AS. The AGN-ASBRs insert themselves into the data path to enable inter-
domain LSPs by setting NHS on all iBGP updates towards their local AGN-RRs and eBGP updates
towards neighboring CN-ASBRs.
• All PANs in the aggregation network requiring inter-domain LSPs to reach remote PANs in other
aggregation networks (acting as FSEs, for example), or the core network (to reach the MTGs, for
example), act as labeled BGP PEs and run BGP-labeled unicast sessions with their local AGN-RRs.
The PANs advertise their loopbacks into BGP-labeled unicast with a common BGP community that
represents any services configured locally on the PAN or on the attached access network, such as
the RAN or FAN community. They learn labeled BGP prefixes marked with these common AN BGP
communities as necessary and any required service communities, such as those for FSE, MSE, or IGW
nodes. In addition to being labeled BGP PEs, the PANs also act as inline RRs for their local access
network clients. Each access network subtending from a pair of PANs is part of a unique IS-IS Level 1
domain. All access rings/hub-and-spokes subtending from the same pair of PANs are part of that IS-IS Level 1 domain, where the ANs are IS-IS L1 nodes and the PANs are L1/L2 nodes. Since routes between the
aggregation IS-IS Level 2 (or OSPF backbone) and access IS-IS Level 1 (or OSPF non-backbone area)
are not redistributed, the PANs have to reflect the labeled BGP prefixes with the next-hop changed
to self in order to insert themselves into the data path to enable the inter-domain LSP switching and allow the
aggregation and access IGP routing domains to remain isolated. This PAN NHS function is symmetrically
applied by the PANs towards its AN clients in its local access domain and the higher level AGN-RR in the
aggregation domain.
• The nodes in the access networks are labeled BGP PEs. Nodes carrying mobile services are referred
to as RAN nodes, and nodes carrying fixed wireline services are referred to as FAN nodes. They peer
with iBGP-labeled unicast sessions with their local PAN inline-RRs. The ANs advertise their loopbacks
into BGP-labeled unicast with a common BGP community that represents the local access community:
RAN for mobile services, and FAN for fixed wireline services. For mobile service transport, labeled BGP
prefixes marked with the MSE BGP community are learned for reachability to the MPC, and the adjacent
access network BGP communities if inter-access X2 connectivity is desired. For business wireline
service transport, the ANs selectively learn the required FSE and remote FAN prefixes for configured
VPWS services.

For mobile service transport, the MTGs in the core network are capable of handling large scale and will learn all
BGP-labeled unicast prefixes since they need connectivity to all the CSGs in the entire network. Simple prefix
filtering based on BGP communities is performed on the CN-RRs in order to keep IPv4+label routes from
remote access regions from proliferating into neighboring aggregation domains, where they are not needed.
The PANs learn only labeled BGP prefixes marked with the common RAN BGP community and the MSE BGP
community. This allows the PANs to enable inter-metro wireline services across the core, and also reflect the
MPC prefixes to their local access networks. Isolating the aggregation and access domain by preventing the
default redistribution enables the mobile access network to have limited route scale since the CSGs learn only
local IGP routes and labeled BGP prefixes marked with the MSE BGP community.

For fixed wireline service transport, the IGWs in the core network are capable of handling large scale and will
learn all BGP-labeled unicast prefixes since they need connectivity to all the FSEs (and possibly ANs) carrying
fixed services in the entire network. Any nodes providing service edge functionality are also capable of handling
large scale, and will learn the common FAN community for AN access, the FSE community for service transport
to other FSE nodes, and the IGW community for internet access. Again, using a separate IGP process for the
access enables the access network to have limited control plane scale, since the ANs only learn local IGP routes
and labeled BGP prefixes marked with the FSE BGP community or permitted via a dynamically-updated IP prefix
list.

Option-2: Inter-AS Design with IGP/LDP Access
This option is based on the transport model described in “Hierarchical-Labeled BGP Redistribution into Access
IGP.”

Tech Tip

This option supports only mobile service transport because the filtering mechanisms
necessary for fixed wireline service deployment are not currently available in this
option. Wireline service support will be added for this option in a future release.

Figure 28 - Inter-Domain Transport for Inter-AS Design with IGP/LDP Access

[Figure content: as in Figure 27, per-AS iBGP IPv4+label meshes (PANs to AGN-RRs, CN-ASBRs to the CN-RR, MTG in front of the MPC) are stitched by eBGP IPv4+label at the AGN-ASBR/CN-ASBR boundaries; at each PAN, iBGP is mutually redistributed with the RAN IGP process so access LDP LSPs extend the hierarchical LSP to the CSGs, with RAN-region, aggregation, and MPC BGP communities scoping the redistribution.]

This option follows the approach of enabling labeled BGP across the core and aggregation networks and extends
the Unified MPLS LSP to the access by redistribution between labeled BGP and the access domain IGP. All
nodes in the core and aggregation network that require inter-domain LSPs to reach nodes in remote domains act
as labeled BGP PEs and run iBGP IPv4 unicast+labels with their corresponding local RRs.
• The core POP nodes in this model are labeled BGP ASBRs, referred to as CN-ASBRs. They peer with
iBGP-labeled unicast sessions with the centralized CN-RR within the core AS, and peer with eBGP-
labeled unicast sessions with the neighboring aggregation ASBRs. The CN-ASBRs insert themselves
into the data path in order to enable inter-domain LSPs by setting NHS on all iBGP updates towards
their local CN-RRs and eBGP updates towards the neighboring aggregation ASBRs. Note that the
CN-RR applies an egress filter towards the CN-ASBRs in order to drop prefixes with the common RAN
community, which eliminates unnecessary prefixes from being redistributed.
• The MTGs residing in the core network are labeled BGP PEs, which connect to the EPC gateways (SGW,
PGW, and MME) in the MPC. The MTGs peer with iBGP-labeled unicast sessions with the CN-RR,
advertising loopbacks into iBGP-labeled unicast with the global MSE BGP community, representing the
MSE, and importing the global MSE and common RAN communities.
• The aggregation POP nodes in this model act as labeled BGP ASBRs in the aggregation AS, referred
to as AGN-ASBRs. They peer with iBGP-labeled unicast sessions with the centralized AGN-RR within
the aggregation AS, and peer with eBGP-labeled unicast sessions to the CN-ASBR in the core AS. The
AGN-ASBRs insert themselves into the data path to enable inter-domain LSPs by setting NHS on all
iBGP updates towards their local AGN-RRs and eBGP updates towards neighboring CN-ASBRs.

• All the PANs in the aggregation network that require inter-domain LSPs to reach remote PANs in another
aggregation network or the core network (to reach the MTGs, for example), act as labeled BGP PEs and
run BGP-labeled unicast sessions with their local AGN-RRs. The PANs advertise their loopbacks into
BGP-labeled unicast with a common RAN BGP community. They learn labeled BGP prefixes marked with
the RAN BGP community and the global MSE community.
• Each mobile access network subtending from a pair of PANs is part of a unique IS-IS Level 1 domain.
All access rings/hub-and-spokes subtending from the same pair of PANs are part of that IS-IS Level 1 domain, where the CSGs are IS-IS L1 nodes and the PANs are L1/L2 nodes. The inter-domain LSPs are extended to the MPLS/IP RAN access with a controlled redistribution based on IGP tags and BGP communities. At the PANs, the inter-domain core and aggregation LSPs are extended to the RAN access by redistributing between the iBGP and RAN IGP level/area. In one direction, the RAN AN loopbacks (filtered based on IGP tags) are redistributed into iBGP-labeled unicast and tagged with a RAN access BGP community that is unique to that RAN access region. In the other direction, the MPC prefixes filtered based on MSE-marked BGP communities, and optionally, adjacent RAN access prefixes filtered based on RAN-region-marked BGP communities (if inter-access X2 connectivity is desired), are redistributed into the
RAN access IGP level/area.

The MTGs in the core network are capable of handling large scale and will learn all BGP-labeled unicast
prefixes since they need connectivity to all the CSGs in the entire network. Simple prefix filtering based on BGP communities is performed on the CN-RRs to keep IPv4+label routes from remote RAN access regions
from proliferating into neighboring aggregation domains, where they are not needed. The PANs learn only labeled
BGP prefixes marked with the common RAN BGP community and the MSE BGP community. This allows the
PANs to enable inter-metro wireline services across the core, and also reflect the MPC prefixes to their local
access networks. Using a separate IGP process for the RAN access enables the mobile access network to have
limited control plane scale, since the CSGs only learn local IGP routes and labeled BGP prefixes marked with the
MSE BGP community.

Large Network, Multi-Area IGP Design with non-IP/MPLS Access


This section details the system architecture for the transport model described in “Hierarchical-Labeled BGP LSP
Core and Aggregation.” It assumes that the network organization between the core and aggregation domains
is based on a single AS, multi-area IGP design. It assumes a non-MPLS IP/Ethernet or TDM access where all
mobile and potentially wireline services are enabled by the AGNs or PANs.

Figure 29 - Multi-Area IGP/LDP Domain Organization

[Figure content: a single AS with two aggregation areas/levels (OSPF x/IS-IS L1) flanking the core area/level (OSPF 0/IS-IS L2); PANs, AGNs, and CN-ABRs form the aggregation and core IP/MPLS domains, while CSGs attach over TDM/packet microwave RAN access and FANs over native IP/Ethernet access. An independent LDP LSP spans each IP/MPLS domain.]

From a multi-area IGP organization perspective, the core network is either an IS-IS Level 2 or an OSPF backbone
area. The aggregation domains, in turn, are IS-IS Level 1 or OSPF non-backbone areas. No redistribution
occurs between the core and aggregation IGP levels/areas. This isolation helps reduce the size of routing
and forwarding tables on individual routers in these domains, which, in turn, leads to better stability and faster
convergence. LDP is used for label distribution to build intra-domain LSPs within each independent aggregation
and core IGP domain. The access network is based on native IP or Ethernet links over fiber or packet microwave
integrated in point-to-point or ring topologies, or based on TDM+Ethernet links over hybrid microwave with point-
to-point connectivity.

Figure 30 - Inter-Domain Transport with Hierarchical LSPs

[Figure content: PANs and AGNs run iBGP IPv4+label to their CN-ABR inline RRs, and CN-ABRs to the centralized CN-RR, with the MTG in front of the MPC; RAN-region, aggregation, and MPC BGP communities scope prefix distribution. The iBGP hierarchical LSP terminates at the PANs/AGNs, which attach the non-MPLS TDM/packet microwave RAN and IP/Ethernet FAN access.]

RFC 3107 procedures based on iBGP IPv4 unicast+label are used as an inter-domain LDP to build hierarchical
LSPs across domains. All nodes in the core and aggregation network that require inter-domain LSPs act as
labeled BGP PEs and run iBGP-labeled unicast peering with designated RRs depending on their location in the
network.
• The core POP nodes are labeled BGP ABRs between the aggregation and core areas, referred to in
this model as CN-ABRs, and act as inline RRs for their local aggregation area-labeled BGP PEs. The
CN-ABRs peer with other CN-ABRs using iBGP-labeled unicast in either a full mesh configuration or
using centralized RRs over the core network. The centralized RR deployment option is shown in Figure
30. Note that the CN-RR applies an egress filter towards the CN-ABRs in order to drop prefixes with the
common RAN community, which eliminates unnecessary prefixes from being redistributed.
• For mobile service transport, the MTGs residing in the core network are labeled BGP PEs and peer
either directly with the closest CN-ABR RRs, in the case of a CN-ABR full-mesh configuration, or with
the centralized RRs, depending on the deployment setting. The MTGs advertise their loopbacks into
BGP-labeled unicast with a global MSE BGP community representing the MPC. They learn all the labeled
BGP prefixes from the common RAN BGP community and have reachability across the entire network.
• For fixed wireline service transport, the network nodes providing FSE functions, such as PWHE or
H-VPLS, are labeled BGP PEs. These FSE nodes will peer with the closest RR in the network, usually in
the aggregation network, depending upon the deployment setting. The FSE nodes advertise loopbacks
into the iBGP-labeled unicast with the global FSE BGP community, representing the FSE, and import the
common FAN community and global FSE and IGW communities. The IGW community represents any
Internet Gateway node, providing internet peering functionality in the SP network.

• All AGNs and PANs in aggregation networks that require inter-domain LSPs to either reach nodes in
another remote aggregation network, or that need to cross the core network to reach the MTGs, act
as labeled BGP PEs, and peer with their local CN-ABR RRs. These AGNs advertise their loopbacks into
BGP-labeled unicast with the common RAN and/or FAN BGP communities, depending upon the services
configured, as well as into the FSE BGP community if the AGNs or PANs are acting as fixed service
edges.
• Since redistribution of routes between the core and aggregation IGP levels/areas is prevented in order to
keep the routing domains isolated, the CN-ABRs have to insert themselves into the data path to enable
inter-domain LSPs. The CN-ABRs acting as inline RRs do this by reflecting the labeled BGP prefixes
with NHS symmetrically towards the PANs in their local aggregation network, and MTGs and remote
CN-ABRs in the core network.

All MPLS services are enabled by the PANs in the aggregation network. These include:
• GSM Abis, ATM IuB, IP IuB, and IP S1/X2 interfaces for 2G/3G/LTE services for RAN access domains
with point-to-point connectivity over TDM or hybrid (TDM+Packet) microwave.
• IP IuB and IP S1/X2 interfaces for 3G/LTE services for RAN access domains with point-to-point or ring
topologies over fiber or packet microwave.
• Business Ethernet Line (E-Line) and E-LAN Layer 2 VPN (L2VPN) services and Layer 3 VPN (L3VPN)
services.
• Residential triple play services with Ethernet connectivity from the access nodes (FANs, PON OLTs, etc.)
to the PAN-SE nodes.

Large Network, Inter-AS Design with non-IP/MPLS Access


This section details the system architecture for transport model described in “Hierarchical-Labeled BGP LSP
Core and Aggregation.” It assumes that the core and aggregation networks are organized as different ASs. It
assumes a non-MPLS IP/Ethernet or TDM access where all mobile and wireline services are enabled by the
AGNs or PANs.

Figure 31 - Inter-AS IGP/LDP Domain Organization

[Figure content: three autonomous systems, each an OSPF 0/IS-IS L2 domain: the core network (AS-A) between two aggregation networks (AS-B, AS-C), with AGN-ASBRs bordering the core-node ASBRs at each AS boundary. The non-MPLS RAN (TDM/packet microwave, CSGs) and FAN (IP/Ethernet) access attaches to the PANs and AGNs, and an independent LDP LSP spans each IP/MPLS domain.]

This model follows the approach of enabling a Unified MPLS LSP using hierarchical-labeled BGP LSPs based on
iBGP-labeled unicast within each AS. The core and aggregation networks are segmented into different ASs, and
eBGP-labeled unicast is used to extend the LSP across AS boundaries. LDP provides label distribution to build
intra-domain LSPs within each independent aggregation and core IGP domain. The access network is based on
native IP or Ethernet links over fiber or packet microwave, integrated in point-to-point or ring topologies, or on
TDM+Ethernet links over hybrid microwave with point-to-point connectivity.

Figure 32 - Inter-Domain Transport with Hierarchical LSPs

[Figure content: per-AS iBGP IPv4+label meshes (PANs to their AGN-RR, CN-ASBRs to the CN-RR, MTG in front of the MPC) are stitched by eBGP IPv4+label at the AGN-ASBR/CN-ASBR boundaries; the iBGP hierarchical LSP terminates at the PANs, which attach the non-MPLS TDM/packet microwave RAN and IP/Ethernet FAN access. RAN-region and aggregation BGP communities scope prefix distribution.]

RFC 3107 procedures based on iBGP IPv4 unicast+label are used as an inter-domain LDP to build hierarchical LSPs
across domains. All nodes in the core and aggregation network that require inter-domain LSPs act as labeled BGP
PEs and run iBGP-labeled unicast peering with designated RRs, depending on their location in the network.
• For mobile service backhaul, the MTGs residing in the core network are labeled BGP PEs and peer with
iBGP-labeled unicast sessions with the centralized CN-RR. The MTGs advertise their loopbacks into
iBGP-labeled unicast with the global MSE BGP community representing the MSE, and then import the
global MSE and common RAN communities, providing reachability across the entire network down to the
PANs at the edge of the aggregation network.
• The core POP nodes act as labeled BGP CN-ASBRs in the core AS. They peer with iBGP-labeled
unicast sessions with the CN-RR within the core AS, and peer with eBGP-labeled unicast sessions with
the neighboring aggregation ASBRs. The CN-ASBRs insert themselves into the data path to enable
inter-domain LSPs by setting NHS on all BGP updates towards their local CN-RRs and neighboring
aggregation ASBRs. Note that the CN-RR applies an egress filter towards the CN-ASBRs in order to
drop prefixes with the common RAN community, which eliminates unnecessary prefixes from being
redistributed.
• The aggregation POP nodes act as labeled BGP AGN-ASBRs in the aggregation AS. They peer with
iBGP-labeled unicast sessions with the centralized AGN-RR within the aggregation AS, and peer
with eBGP-labeled unicast sessions to the CN-ASBR in the neighboring AS. The AGN-ASBRs insert
themselves into the data path to enable inter-domain LSPs by setting NHS on all BGP updates towards
their local AGN-RRs and neighboring core ASBRs.
• All PANs in the aggregation networks that require inter-domain LSPs to either reach nodes in another
remote aggregation network, or that need to cross the core network to reach the MTGs, act as labeled
BGP PEs, and peer with iBGP-labeled unicast sessions to the local AGN-RR. The PANs advertise their
loopbacks into BGP-labeled unicast with a common BGP community that represents any services
configured locally on the PAN or on the attached access network, such as the RAN or FAN community.
The PANs learn labeled BGP prefixes marked with these common BGP communities as necessary and
also any required service communities, such as those for FSE, MSE, or IGW nodes.

All MPLS services are enabled by the PANs in the aggregation network. These include:
• GSM Abis, ATM IuB, IP IuB, and IP S1/X2 interfaces for 2G/3G/LTE services for RAN access domains
with point-to-point connectivity over TDM or hybrid (TDM+Packet) microwave.
• IP IuB and IP S1/X2 interfaces for 3G/LTE services for RAN access domains with point-to-point or ring
topologies over fiber or packet microwave.
• Business E-Line and E-LAN L2VPN services and L3VPN services.
• Residential triple-play services with Ethernet connectivity from the ANs (FANs, PON OLTs, etc.) to the
PAN-SE nodes.

Small Network, Integrated Core and Aggregation with IP/MPLS Access


This section details the system architecture for the transport model described in “Hierarchical-Labeled BGP
LSP Core, Aggregation, and Access.” It assumes that the core and aggregation networks are integrated into a
single IGP/LDP domain consisting of less than 1000 nodes. The AGNs have subtending access networks that are
MPLS-enabled and part of the same AS as the integrated core+aggregation network.

Figure 33 - Integrated Core and Aggregation with IP/MPLS Access

[Figure content: a single AS in which the integrated core+aggregation network is one OSPF 0/IS-IS L2 IP/MPLS domain containing the core nodes (CN) and AGNs, with RAN and FAN IP/MPLS access domains (OSPF x/IS-IS L1) subtending from the AGNs; an independent LDP LSP spans each domain.]

From a multi-area IGP organization perspective, the integrated core+aggregation networks and the access
networks are segmented into different IGP areas or levels, where the integrated core+aggregation network is
either an IS-IS Level 2 or an OSPF backbone area, and access networks subtending from the AGNs are in IS-IS
Level 1 or OSPF non-backbone areas. No redistribution occurs between the integrated core+aggregation and
access IGP levels/areas, thereby containing the route scale within each domain. Partitioning these network
layers into such independent and isolated IGP domains helps reduce the size of routing and forwarding tables on
individual routers in these domains, which, in turn, leads to better stability and faster convergence within each of
these domains.

LDP is used for label distribution to build intra-domain LSPs within each independent IGP domain. Inter-domain
reachability is enabled with hierarchical LSPs using BGP-labeled unicast as per RFC 3107 procedures, where
iBGP is used to distribute labels in addition to remote prefixes, and LDP is used to reach the labeled BGP
next-hop.

Figure 34 - Inter-Domain Transport for Labeled BGP Access with Flat LDP Core and Aggregation

[Figure content: CSGs and FANs in each access region run iBGP IPv4+label to their AGN inline RRs, and the AGNs to the centralized CN-RR in the integrated core+aggregation IP/MPLS domain, with the MTG in front of the MPC; RAN-region and AGN BGP communities scope prefix distribution, and the iBGP hierarchical LSP is stitched over the per-domain LDP LSPs.]

The collapsed core+aggregation and access networks are integrated with labeled BGP LSPs. Any node in the
network that requires inter-domain LSPs to reach nodes in remote domains acts as a labeled BGP PE and runs iBGP IPv4 unicast+labels with its corresponding local RR.
• For mobile service backhaul, the MTGs residing in the core network are labeled BGP PEs. They connect
to the EPC gateways (SGW, PGW, and MME) in the MPC. The MTGs peer with iBGP-labeled unicast
sessions with the CN-RR, and advertise their loopbacks into iBGP-labeled unicast with a common MSE
BGP community representing the MPC as shown in Figure 34.
• For fixed wireline service transport, the network nodes providing FSE functions, such as PWHE
or H-VPLS, are labeled BGP PEs. These FSE nodes will peer with the CN-RR in the network. The
FSE nodes advertise loopbacks into the iBGP-labeled unicast with the global FSE BGP community,
representing the FSE, and import the global FSE and IGW communities.
• The AGNs act as inline-RRs for their local access network clients. Each access network subtending from
a pair of AGNs is part of a unique IS-IS Level 1 domain. All access rings/hub-and-spokes subtending from the same pair of AGNs are part of that IS-IS Level 1 domain, where the ANs are IS-IS L1 nodes and the AGNs are L1/L2 nodes. Since routes between the integrated core+aggregation IS-IS Level 2 (or OSPF backbone) and access IS-IS Level 1 (or OSPF non-backbone area) are not redistributed, the AGNs have to reflect the labeled BGP prefixes with the next-hop changed to self in order to insert themselves into the data path to enable the inter-domain LSP switching and allow the two IGP routing domains to remain isolated. This AGN NHS function is symmetrically applied by the AGNs towards their clients in their local
access domain, and the higher level CN-RR in the integrated core+aggregation domain.
• The nodes in the access networks are labeled BGP PEs. Nodes carrying mobile services are referred
to as RAN nodes, and nodes carrying fixed wireline services are referred to as FAN nodes. They peer
with iBGP-labeled unicast sessions with their local AGN inline RRs. The ANs advertise their loopbacks
into BGP-labeled unicast with a common BGP community that represents the local access community:
RAN for mobile services and FAN for fixed wireline services. For mobile service transport, labeled BGP
prefixes marked with the MSE BGP community are learned for reachability to the MPC, and the adjacent
access network BGP communities if inter-access X2 connectivity is desired. For business wireline
service transport, the ANs selectively learn the required FSE and remote FAN prefixes for configured
VPWS services.


For mobile service transport, the MTGs in the integrated core+aggregation network are capable of handling
large scale and will learn all BGP-labeled unicast prefixes since they need connectivity to all the CSGs in the
entire network. Simple prefix filtering based on BGP communities is performed on the CN-RRs to keep
IPv4+label routes from remote RAN access regions from proliferating into other AGNs, where they are not
needed. Since all the AGNs are part of the same IGP/LDP domain, they can enable wireline services across each
other. The AGNs learn labeled BGP prefixes marked with the MSE BGP community and reflect the MPC prefixes
to their local access networks. Isolating the integrated core+aggregation and RAN access domain by preventing
the default redistribution enables the mobile access network to have limited route scale, since the CSGs only
learn local IGP routes and labeled BGP prefixes marked with the MSE BGP community.

For fixed wireline service transport, the IGWs in the core network are capable of handling large scale and will
learn all BGP-labeled unicast prefixes since they need connectivity to all the FSEs (and possibly ANs) carrying
fixed services in the entire network. Any nodes providing service edge functionality are also capable of handling
large scale, and will learn the common FAN community for AN access, the FSE community for service transport
to other FSE nodes, and the IGW community for internet access. Again, using a separate IGP process for the
access enables the access network to have limited control plane scale, since the ANs only learn local IGP routes
and labeled BGP prefixes marked with the FSE BGP community or permitted via a dynamically-updated IP prefix
list.

Small Network, Integrated Core and Aggregation with non-IP/MPLS Access


This section details the system architecture for the transport model described in “Flat LDP Core and
Aggregation.”

Figure 35 - Single-Area IGP with Flat LDP Core and Aggregation

[Figure content: a single AS whose core+aggregation network is one OSPF 0/IS-IS L2 area/level containing core nodes, AGNs, PANs, and the Mobile Transport GWs; CSG (RAN) and FAN (IP/Ethernet) access attaches at the edge, and a single flat LDP LSP spans the whole IP/MPLS domain.]

This model assumes that the core and aggregation networks form a single IGP/LDP domain consisting of less
than 1000 nodes. Since there is no segmentation between network layers, a flat LDP LSP provides end-to-end
reachability across the network. The mobile access is based on TDM and packet microwave links aggregated in
AGNs that provide TDM/ATM/Ethernet VPWS and MPLS VPN transport.
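In this flat model, intra-domain label distribution reduces to plain LDP on every core and aggregation node, along these lines (IOS XR; the router-id and interfaces are illustrative):

mpls ldp
 router-id 10.0.0.1
 ! Enable LDP on each core/aggregation-facing link so one flat
 ! LSP spans the entire IGP/LDP domain.
 interface TenGigE0/0/0/0
 interface TenGigE0/0/0/1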

All MPLS services are enabled by the AGNs. These include:
• GSM Abis, ATM IuB, IP IuB, and IP S1/X2 interfaces for 2G/3G/LTE services for RAN access domains
with point-to-point connectivity over TDM or hybrid (TDM+Packet) microwave.
• IP IuB and IP S1/X2 interfaces for 3G/LTE services for RAN access domains with point-to-point or ring
topologies over fiber or packet microwave.
• Business E-Line and E-LAN L2VPN services and L3VPN services.
• Residential triple play services with Ethernet connectivity from the ANs (FANs, PON OLTs, etc.) to the
PAN-SE nodes.

Residential Service Architecture


Residential Wireline Service Architecture
The Cisco FMC system design provides transport and service edge functions for residential 1:1 and N:1
subscriber aggregation models, for IPoE and PPPoE subscriber access protocols, and for both IPv4 and IPv6
address families. To accomplish this on a single network, in conjunction with business and mobile services,
MPLS service virtualization is employed, which enables flexible forwarding of subscriber traffic from within the global routing table or within a dedicated L3VPN. The flexibility of Unified MPLS, which builds upon BGP-labeled unicast and MPLS-based L3VPN technologies, presents providers with a single transport paradigm regardless of the desired routing domain for subscriber traffic.

Subscriber Aggregation Models


One of the main architectural choices when deploying residential services is how subscribers are aggregated
and isolated in the access network in order to be presented at the BNG.

In Ethernet-based networks, traffic aggregation and isolation are achieved by means of VLAN tagging, thus
promoting the natural development of two VLAN-based models for the deployment of subscriber aggregation:
• 1:1 Aggregation: indicating a one-to-one mapping between the subscriber and the VLAN.
• N:1 Aggregation: indicating a many-to-one mapping between subscribers and VLANs, with subscribers
that may be located in the same or different AN.

These aggregation options, once inherent to a Layer-2 Ethernet access network, have been preserved over the
MPLS-based access in order to provide continuity in how subscriber aggregation is modeled, while allowing the
access network to evolve toward more robust transport technologies.

N:1 VLAN Aggregation


The N:1 VLAN model uses a shared VLAN to aggregate all subscribers to and from a particular AN.

In addition, a non-trunk interface toward the residential CPE enables all services for all subscribers on a particular
AN to also be mapped to that single shared VLAN. This may include services that are delivered by using
either a unicast or a multicast transport. Relative priority across services is preserved by properly setting the
differentiated services code point (DSCP) field in an IP packet header or the 802.1p class of service (CoS) values
carried in an Ethernet priority-tagged frame.
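On an Ethernet AN, the shared per-AN VLAN might be realized with an EVC-style configuration such as the following (IOS/IOS-XE syntax; the VLAN ID, interfaces, and bridge-domain number are illustrative assumptions):

! Subscriber-facing port: map untagged CPE traffic into the
! shared per-AN VLAN (example VLAN 100).
interface GigabitEthernet0/1
 service instance 10 ethernet
  encapsulation untagged
  bridge-domain 100
!
! Network-facing uplink toward the BNG carrying the shared VLAN.
interface GigabitEthernet0/24
 service instance 100 ethernet
  encapsulation dot1q 100
  rewrite ingress tag pop 1 symmetric
  bridge-domain 100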

Figure 36 and Figure 37 show the N:1 aggregation model deployed with Ethernet and MPLS access, respectively.

Figure 36 - N:1 VLAN Aggregation Model—Ethernet Access

[Figure content: routed CPEs with non-trunk UNIs (STB and HSI/VoIP/VoD/TV services over 802.1q) attach to Ethernet and PON/FTTH access; a shared VLAN per AN carries all subscribers to the PAN or AGN service edge, from which traffic is forwarded in plain IP or an MPLS VPN across the aggregation and core, with multicast delivered via PIM-SSM and IP multicast, mLDP, or mVPN.]

Figure 37 - N:1 VLAN Aggregation Model—MPLS Access

[Figure content: routed CPEs attach over VDSL/ADSL2+ to Ethernet DSLAMs; an MPLS border node carries the shared S-VLAN over an EoMPLS pseudowire to the BNG interface on the PAN or AGN service edge, from which traffic is forwarded in plain IP or an MPLS VPN across the aggregation and core. Multicast uses MVR and PIM-SSM (v4) in the access, and IP multicast, mLDP, or mVPN beyond the service edge.]

The main components of the residential access architecture are:


• the customer premises equipment (CPE)
• the access node (AN)
• the access switch (AS), for MPLS access only
• the Broadband Network Gateway (BNG)

Customer Premises Equipment (CPE)

The CPE device is the demarcation point between the home and the SP network. While a CPE can be configured
to operate in either routed or bridged mode, routed mode is widely preferred for residential wireline applications,
allowing the entire household to be presented as a single entity to the provider for authentication and accounting
purposes.

While both IPv4 and IPv6 address families are supported within the household, the second release of the Cisco
FMC system introduces support for an IPv6-only access network for unicast services.

For IPv6-based access, DHCPv6 prefix delegation (PD) is used at the CPE to address end devices.
DHCPv6 PD at the CPE differs from a local DHCP server function in that prefixes assigned on the CPE LAN
interfaces are obtained directly from the operator. The abundance of IPv6 prefixes makes the capacity to
manage the subscriber household address space in a centralized manner attractive to providers, who have
better visibility and influence over address assignment within the household without the cost of running
expensive routing protocols or managing static addresses to guarantee proper downstream forwarding. It also
helps improve CPE performance by removing the need for expensive inline packet manipulations such as NAT.

For IPv4-based access, household end devices obtain private addresses from a local DHCP server function
enabled on the CPE. Among the numerous NAT464 technologies, MAP-T is then leveraged to map those IPv4
end devices onto a single CPE-wide IPv4 address first (MAP-T Port Address Translation 44 [PAT44] stage) and to
a CPE-wide IPv6 address next (MAP-T NAT46 stage).

The CPE-wide IPv4 and IPv6 addresses are created from a combination of information, derived as follows:
• a delegated prefix assigned to the CPE via regular DHCPv6 PD procedures
• MAP-T Rules received in MAP-T specific DHCPv6 options

While the CPE-wide IPv6 address is unique throughout, the CPE-wide IPv4 address can be shared among
multiple CPEs, requiring unique Layer-4 source ports to be assigned and used for proper routing of return traffic
in the IPv4 core domain. The IPv4 address sharing ratio and the number of unique ports per CPE are algorithmically tied
and affect each other.

For example, support for 64,000 uniquely routable CPE devices within the MAP-T domain can be achieved with
a single /24 IPv4 subnet by setting the CPE address sharing ratio to 1:256 and limiting the number of
unique Layer-4 ports per CPE to 256.
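
The arithmetic behind this trade-off is simple to verify. The following Python sketch (purely illustrative; the
variable names are not part of any MAP-T specification) reproduces the example above, where the exact figure is
65,536 CPEs, which the guide rounds to 64,000:

# Illustrative MAP-T address-sharing arithmetic (hypothetical names).
POOL_PREFIX_LEN = 24       # one /24 IPv4 subnet dedicated to the MAP-T domain
SHARING_RATIO = 256        # 256 CPEs share each public IPv4 address (1:256)
PORT_SPACE = 65536         # total Layer-4 port space per IPv4 address

pool_addresses = 2 ** (32 - POOL_PREFIX_LEN)     # 256 addresses in a /24
routable_cpes = pool_addresses * SHARING_RATIO   # 256 * 256 = 65,536 CPEs
ports_per_cpe = PORT_SPACE // SHARING_RATIO      # 65,536 / 256 = 256 ports

print(routable_cpes, ports_per_cpe)              # -> 65536 256

Note that real MAP deployments commonly exclude the well-known port range from the shareable port space, so
the usable per-CPE port budget is somewhat lower than this idealized figure.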

Assignment of a non-temporary IPv6 address to the CPE WAN interface is achieved via DHCPv6 for both PPPoE
and IPoE subscribers.

The CPE also implements IGMP querier and proxy functions for multicast services. By acting as an IGMP querier
toward household appliances, the CPE is able to maintain an updated view of the multicast membership status
for the customer’s end devices, while the proxy function allows it to report that information as an aggregate
toward the AN and ultimately the BNG.
Although the Cisco FMC system delivers multicast services to subscribers via IPv4, the CPE does not require
a dedicated IPv4 address to be assigned to the WAN interface. Depending on the CPE implementation, proxied
IGMP membership reports can be sent from an all-zeros address, from the shared MAP-T IPv4 address or from
a common, shared IPv4 WAN address statically or dynamically (via Technical Report 069 [TR-069]) provisioned
by the operator. This ensures that the address saving goal promoted by an IPv6-only access network is still
warranted.
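
Conceptually, the proxy maintains a merged membership database and reports only the union of all joined groups
upstream. The following Python sketch is a hypothetical model of that bookkeeping, not an actual CPE
implementation; it omits timers, querier election, and IGMPv3 source lists:

# Minimal sketch of IGMP proxy membership aggregation at the CPE.
memberships = {}  # multicast group -> set of household hosts joined to it

def send_upstream_report(group):
    print(f"IGMP Report for {group} sent toward the AN/BNG")

def send_upstream_leave(group):
    print(f"IGMP Leave for {group} sent toward the AN/BNG")

def host_join(host, group):
    first_join = group not in memberships
    memberships.setdefault(group, set()).add(host)
    if first_join:                 # report each group upstream only once
        send_upstream_report(group)

def host_leave(host, group):
    hosts = memberships.get(group, set())
    hosts.discard(host)
    if not hosts:                  # last household member left the group
        memberships.pop(group, None)
        send_upstream_leave(group)

host_join("stb-1", "232.1.1.1")    # upstream report sent
host_join("tablet", "232.1.1.1")   # aggregated: no new upstream report
host_leave("stb-1", "232.1.1.1")   # still one member: nothing sent
host_leave("tablet", "232.1.1.1")  # last member: upstream leave sent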

Access Node (AN)

The AN is responsible for aggregating all CPEs in the same local area and implements a number of critical
functions, such as line identification, security, efficient multicast transport, and QoS.

For IPoE subscribers, line identification is based on DHCPv4 snooping and DHCPv6 Lightweight Relay Agent
(DLRA) functions that insert location-specific options in DHCP messages forwarded to servers. These options
encompass Option 82 with its remote and circuit ID for IPv4, and the corresponding Options 37 and 18 for IPv6.
Insertion of line information is essential not only as a way of tracking the subscriber's location, but also as a way of
uniquely identifying the subscriber with the operator’s operation support system (OSS) for the deployment of
transparent authorization mechanisms.
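
For illustration, both options share the simple DHCPv6 TLV layout: a 2-byte option code, a 2-byte length, and an
opaque identifier, with Option 37 additionally carrying a 4-byte IANA enterprise number (RFC 4649). The Python
sketch below encodes hypothetical line identifiers the way an AN-resident relay might:

import struct

OPTION_INTERFACE_ID = 18   # DHCPv6 Interface-Id (circuit identification)
OPTION_REMOTE_ID = 37      # DHCPv6 Remote-Id (RFC 4649)

def dhcpv6_option(code: int, payload: bytes) -> bytes:
    # DHCPv6 options are plain TLVs: 2-byte code, 2-byte length, payload.
    return struct.pack("!HH", code, len(payload)) + payload

def remote_id_option(enterprise_number: int, identifier: bytes) -> bytes:
    # The Remote-Id payload starts with the vendor's IANA enterprise number.
    return dhcpv6_option(OPTION_REMOTE_ID,
                         struct.pack("!I", enterprise_number) + identifier)

# Hypothetical line identifiers an AN might insert:
circuit = dhcpv6_option(OPTION_INTERFACE_ID, b"DSLAM01/slot2/port7")
remote = remote_id_option(9, b"subscriber-0042")  # 9 = ciscoSystems

print(circuit.hex(), remote.hex())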

While the first release of the Cisco FMC system focused on a Dual Stacked access, bringing relevance to the line
identifiers of both address families, the second release of the FMC system only focuses on an IPv6-only access
network. The Access Node, therefore, is only required to implement DLRA functions.

For PPPoE, subscriber line identification is carried in the PPPoE Intermediate Agent Line ID tag inserted by the
AN in the PPPoE header.

Efficient multicast forwarding is achieved by using a N:1 VLAN-based transport that delegates multicast
replication toward subscribers to the AN, which runs IGMPv2/v3 snooping and proxy functions.

For an Ethernet-based access, and depending on the capabilities of the AN, the multicast VLAN can co-reside
with the unicast N:1 VLAN or be dedicated, requiring Multicast VLAN Registration (MVR) functions to also be
implemented, given the non-trunk delineation of the subscriber's UNI.

The Cisco FMC system implements the latter behavior, deemed applicable to a wider range of devices, and the
N:1 VLAN aggregation model therefore appears slightly modified, as shown in the following figure.

Figure 38 - N:1 VLAN Aggregation Model (Modified)

[Diagram: as in Figure 36, but HSI/VoD/VoIP use the shared unicast 802.1q VLAN while TV is moved by MVR at
the AN onto a dedicated multicast 802.1q VLAN. PPP/IP sessions terminate at the BNG access interface (AI/F) on
the PAN or AGN + SE; unicast is forwarded over IP or MPLS VPN and IP multicast over mLDP or mVPN with
PIM-SSM.]

For an MPLS based access and N:1 aggregation, MVR functionalities at the Access Node are not needed.
Multicast traffic, both data and control, can be separated from unicast at the Access Switch by inserting an L3
interface in the bridging domain and running multicast routing protocols over it. This is discussed further in the
next section.

Access Switch

In the MPLS based access network, the DSLAM connects directly to an Access Switch in charge of providing
gateway functions into the MPLS domain.

The Access Switch is responsible for establishing Ethernet over MPLS pseudowires that emulate Layer-2
connectivity between subscribers and the BNG over the routed network.

To provision for proper hierarchical QoS (H-QoS) modeling at the BNG, the Access Switch establishes an Ethernet
pseudowire for each residential N:1 VLAN and Access Node. These pseudowires are dedicated to residential
services and are in addition to those in use by other service categories, such as business L2VPNs. This allows for
the N:1 VLAN to be popped prior to traffic entering the pseudowire, with the advantage of reducing packet sizes across the access
network.

For Multicast forwarding, the Access Switch behaves as the last hop router in the multicast distribution tree built
over the routed IP access domain. The N:1 VLAN is terminated on a Layer-3 interface—switch virtual interface
(SVI)—that acts as the IGMP querier toward the receivers. The remainder of the multicast delivery tree is built by
using PIM Source Specific Multicast running on the Access Switch network-facing interfaces, as shown in the
following figure.

Figure 39 - N:1 VLAN Aggregation Model—MPLS Access (Modified)

[Diagram: as in Figure 37, but the Access Switch terminates the N:1 VLAN on an L3 interface that separates TV
multicast from the HSI/VoIP/VoD traffic carried in the EoMPLS pseudowire toward the BNG. PIM-SSM (v4) builds
the multicast tree over the routed access, with IP multicast over mLDP or mVPN and PIM-SSM toward the
aggregation and core.]

The Cisco FMC system has selected this model to provide an alternate architectural approach to MVR running
at the Access Node, when such a function is not available, or not desired because of the operational complexity it
adds. The MVR-based alternative is implemented for 1:1 VLAN aggregation and discussed in the "1:1 VLAN
Aggregation" section of this guide.

Broadband Network Gateway (BNG)

The Broadband Network Gateway (BNG) node is the network device that enables subscriber management
functions for the residential PPPoE and IPoE subscribers.

A single 802.1Q interface matching the shared N:1 VLAN aggregates all subscribers connected to the same AN
device. Such an interface can be the L3 termination of a bridged network, in the case of L2 Ethernet access, or
of a pseudowire, in the case of MPLS access.

BNG capabilities enabled on that interface allow subscribers to be tracked and managed individually, and
individual constructs, known as sessions, to be created.

The BNG authenticates and authorizes subscribers’ sessions and provides accounting per session and service
via Remote Authentication Dial-In User Service (RADIUS) authentication, authorization, and accounting (AAA)
requests. The BNG enables dynamic policy control with RADIUS Change of Authorization (CoA) functionality
on subscriber sessions. QoS for residential services is guaranteed at the subscriber level, as well as at the
aggregate level for all residential subscribers connected to the same OLT.

Four levels of H-QoS available at the BNG allow for the following (a conceptual sketch of this hierarchy appears after the list):


• Classification and queuing of traffic at the subscriber level, for separation of subscriber’s traffic into the
various subscriber services. Minimum and maximum bandwidth settings and priority handling can be
assigned to each service.
• Scheduling of traffic across classes at the subscriber level, delivering differentiated QoS handling for the
different subscriber services.
• Scheduling of traffic across subscribers on the same N:1 residential VLAN, allowing operators to offer
different levels of service to different subscribers.
• Scheduling of traffic across VLANs on the same physical port, enabling controlled partitioning of
interface bandwidth among residential and business services in the case of Ethernet access, and even
among Access Nodes (ANs) in the case of MPLS access.
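
The hierarchy can be pictured as nested schedulers, from the physical port down to the per-class queue. The
fragment below is a purely conceptual Python model of the four levels (names and values are illustrative
assumptions; it performs no actual scheduling):

# Conceptual model of the four H-QoS levels at the BNG (no real scheduling).
# Level 1: physical port      Level 2: VLAN (AN or service partition)
# Level 3: subscriber shaper  Level 4: per-service class queue
hqos = {
    "TenGigE0/0/0/0": {                    # level 1: physical port
        "residential-vlan-100": {          # level 2: N:1 VLAN toward one AN
            "subscriber-A": {              # level 3: per-subscriber scheduler
                "voip":  {"priority": 1, "min_kbps": 256},      # level 4
                "video": {"min_kbps": 4000, "max_kbps": 8000},  # level 4
                "hsi":   {"max_kbps": 20000},                   # level 4
            },
        },
        "business-vlan-200": {},           # bandwidth partitioned per VLAN
    },
}
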
Two forwarding options are available for subscriber’s traffic past the BNG. Subscriber’s traffic can be routed
within the global routing domain, leveraging labeled Unicast MPLS forwarding, or can be isolated within a L3VPN,
for complete separation between services delivered over the Unified MPLS fabric, such as mobile and business,
and in a dedicated address space.

The same device implementing BNG functions also operates as a MAP-T border router reconstructing the
original IPv4 source and destination addresses from the 4to6 translation performed by the CPE.

Multicast forwarding toward the aggregation/core of the network is achieved over MLDP-signaled multicast
LSPs and follows similar traffic isolation models as for unicast services. Depending on operator’s preference,
subscriber’s traffic can be routed within the global routing domain, or can be isolated within the same or different
L3VPN used for residential unicast services.

Multicast forwarding toward subscribers happens over native IPv4 multicast and uses a dedicated N:1 VLAN
transport to simplify forwarding at the AN. Regular IGMPv2/v3 functions are performed at the BNG.

Figure 40 - N:1 VLAN Aggregation Subscriber's Services Transport Summary

Service                  CPE-Access Node   Access Node-BNG                  Service Differentiation
High Speed Internet      Non-trunk UNI     Shared unicast VLAN (802.1Q)     PBH=BE, DSCP=0, CoS=0
Voice over IP (VoIP)     Non-trunk UNI     Shared unicast VLAN (802.1Q)     PBH=EF, DSCP=46, CoS=5
Video on Demand (VoD)    Non-trunk UNI     Shared unicast VLAN (802.1Q)     PBH=AF, DSCP=32, CoS=4
IP TV                    Non-trunk UNI     Shared multicast VLAN (802.1Q)   PBH=AF, DSCP=32, CoS=4

PBH = Per-Hop Behavior; BE = Best Effort; AF = Assured Forwarding; EF = Expedited Forwarding

1:1 VLAN Aggregation
The 1:1 VLAN model uses a dedicated VLAN to carry subscriber’s traffic to and from a particular AN. In addition,
a non-trunk UNI enables all services for that subscriber to also be mapped to that single VLAN. Although this
may include services that are delivered using both a unicast and multicast transport, the Cisco FMC system
chooses to separate multicast into its own shared N:1 VLAN, to minimize the amount of replicated content crossing
the access network. Relative priority across services is preserved by properly setting the DSCP field in an IP
packet header or the 802.1p CoS values carried in an Ethernet priority-tagged frame.

Figure 41 and Figure 42 show the 1:1 aggregation model deployed with Ethernet and MPLS access, respectively.

Figure 41 - 1:1 VLAN Aggregation Model—Ethernet Access

[Diagram: routed CPEs with non-trunk UNIs; each subscriber's HSI/VoD/VoIP is carried on a dedicated 802.1q or
QinQ VLAN through the Ethernet/PON (PON/FTTH) access to a BNG access interface (AI/F) at the PAN or
AGN + SE, while TV uses a shared multicast 802.1q VLAN created by MVR at the AN. Unicast is forwarded over
IP or MPLS VPN and IP multicast over mLDP or mVPN with PIM-SSM.]

Figure 42 - 1:1 VLAN Aggregation Model—MPLS Access

[Diagram: QinQ tagging (S-VLAN per AN, C-VLAN per subscriber) from the Ethernet DSLAM; the MPLS border
node pops the S-VLAN into an EoMPLS pseudowire toward BNG subinterfaces and sessions at the PAN or
AGN + SE. MVR and PIM-SSM (v4) carry TV multicast on a shared 802.1q VLAN; toward the core, unicast rides
IP or MPLS VPN and IP multicast rides mLDP or mVPN with PIM-SSM.]

The main components of the residential access architecture are:
• the customer premises equipment (CPE)
• the access node (AN)
• the access switch (AS), for MPLS access only
• the Broadband Network Gateway (BNG)

Customer Premises Equipment (CPE)

The CPE device is the demarcation point between the home and the SP network. While a CPE can be configured
to operate in either routed or bridged mode, routed mode is widely preferred because it allows the entire
household to be presented as a single entity to the provider for authentication and accounting purposes.

While both IPv4 and IPv6 address families are supported within the household, the second release of the Cisco
FMC system introduces support for an IPv6-only access network for unicast services.

For IPv6-based access, DHCPv6 prefix delegation (PD) is used at the CPE to address end devices.
DHCPv6 PD at the CPE differs from a local DHCP server function in that prefixes assigned on the CPE LAN
interfaces are obtained directly from the operator. The abundance of IPv6 prefixes makes the capacity to
manage the subscriber household address space in a centralized manner attractive to providers, who have
better visibility and influence over address assignment within the household without the cost of running
expensive routing protocols or managing static addresses to guarantee proper downstream forwarding. It also
helps improve CPE performance by removing the need for expensive inline packet manipulations such as NAT.

For IPv4-based access, household end devices obtain private addresses from a local DHCP server function
enabled on the CPE. Among the numerous NAT464 technologies, MAP-T is then leveraged to map those IPv4
end devices onto a single CPE-wide IPv4 address first (MAP-T PAT44 stage) and then to a CPE-wide IPv6
address (MAP-T NAT46 stage).

The CPE-wide IPv4 and IPv6 addresses are created from a combination of information, derived as follows:
• a delegated prefix assigned to the CPE via regular DHCPv6 PD procedures
• MAP-T Rules received in MAP-T specific DHCPv6 options

While the CPE-wide IPv6 address is unique throughout, the CPE-wide IPv4 address can be shared among
multiple CPEs, requiring unique Layer-4 source ports to be assigned and used for proper routing of return traffic
in the IPv4 domain. The IPv4 address sharing ratio and the number of unique ports per CPE are algorithmically tied and
affect each other.

For example, support for 64,000 uniquely routable CPE devices within the MAP-T domain can be achieved with
a single /24 IPv4 subnet by setting the sharing ratio to 1:256 and limiting the number of unique Layer-4
ports per CPE to 256.

Assignment of a non-temporary IPv6 address to the CPE WAN interface is achieved via DHCPv6 for both PPPoE
and IPoE subscribers.

The CPE also implements IGMP querier and proxy functions for multicast services. By acting as an IGMP querier
toward household appliances, the CPE is able to maintain an updated view of the multicast membership status
for the customer’s end devices, while the proxy function allows it to report that information as an aggregate to
the BNG. Although the Cisco FMC system delivers multicast services to subscribers via IPv4, the CPE does not
require a dedicated IPv4 address to be assigned to the WAN interface. Depending on the CPE implementation,
proxied IGMP membership reports can be sent from an all-zeros address, from the shared MAP-T IPv4 address
or from a common, shared IPv4 WAN address statically or dynamically (via TR-069) provisioned by the operator.
This ensures that the address saving goal promoted by an IPv6-only access network is still warranted.

Access Node (AN)

The AN is responsible for aggregating all CPEs in the same local area and implements a number of critical
functions, such as line identification, security, efficient multicast transport, and QoS.

For IPoE subscribers, line identification is based on DHCPv4 snooping and DHCPv6 Lightweight Relay Agent
(DLRA) functions that insert location-specific options in DHCP messages forwarded to servers. These options
encompass Option 82 with its remote and circuit ID for IPv4, and the corresponding Options 37 and 18 for IPv6.
Insertion of line information is essential not only as a way of tracking the subscriber's location, but also as a way of uniquely identifying
the subscriber with the operator’s OSS for the deployment of transparent authorization mechanisms.

While the first release of the Cisco FMC system focused on a Dual-Stacked access, bringing relevance to the
line identifiers of both address families, the second release of the FMC system focuses only on an IPv6-only
access network. The Access Node, therefore, is required only to implement DLRA functions.

For PPPoE, subscriber line identification is carried in the PPPoE Intermediate Agent Line ID tag inserted by the
AN in the PPPoE header.

Efficient multicast forwarding is achieved by using a dedicated N:1 VLAN based transport that delegates
multicast replication toward subscribers to the AN. The AN implements IGMPv2/v3 snooping and proxy functions
for membership tracking and reporting, and it runs Multicast VLAN Registration (MVR) in order to transition
multicast forwarding into the dedicated multicast VLAN.

Access Switch

In the MPLS based access network, the DSLAM connects directly to an Access Switch in charge of providing
gateway functions into the MPLS domain.

The Access Switch is responsible for establishing Ethernet over MPLS pseudowires that emulate Layer-2
connectivity between subscribers and the BNG over the routed network.

To provision for proper hierarchical quality of service (H-QoS) modeling at the BNG, the Access Switch
establishes an Ethernet pseudowire for each residential Access Node. These pseudowires are dedicated to
residential services and are in addition to those in use by other service categories, such as business L2 VPNs.
This allows for the service provider VLAN (S-VLAN) to be popped prior to traffic entering the pseudowire, with
the advantage of reducing packet sizes across the access network. The customer VLANs (C-VLANs) must be
preserved to be able to rebuild the 1:1 VLAN model at the BNG side of the pseudowire.

For Multicast forwarding, the Access Switch behaves as the last hop router in the multicast distribution tree
built over the routed IP access domain. A switch virtual interface (SVI) terminates the dedicated multicast VLAN
that originates from the MVR functions performed by the Access Node, and it acts as the IGMP querier toward the
receivers. The use of an SVI interface allows for the provisioning of a single routed entity across all Access
Nodes, thus simplifying IPv4 address planning and operations, but it mandates that the IGMP snooping function be
enabled to control flooding of multicast traffic in the Layer-2 domain.

The remainder of the multicast delivery tree is built using PIM Source Specific Multicast running on the Access
Switch network facing interfaces, as shown in Figure 42.

Broadband Network Gateway (BNG)

The BNG node is the network device that enables subscriber management functions for the residential PPPoE
and IPoE subscribers.

In the Cisco FMC system, a single 802.1Q interface aggregates all subscribers connected to the same AN device
regardless of their VLAN tagging, while BNG capabilities enabled on that interface still allow subscribers to
be tracked and managed individually and individual constructs, known as sessions, to be created. Such an interface
can be the L3 termination of a bridged network, in the case of L2 Ethernet access, or of a pseudowire, in the
case of MPLS access.

The VLAN associated with a given subscriber is also dynamically discovered within the range allowed by the
access interface. The ability to aggregate multiple VLANs over the same access interface largely simplifies
deployments of 1:1 VLAN models at the BNG, allowing a single interface to represent thousands of VLANs.

The BNG authenticates and authorizes subscribers’ sessions and provides accounting per session and
service via RADIUS AAA requests. The BNG enables dynamic policy control with RADIUS CoA functionality on
subscriber sessions. QoS for residential services is guaranteed at the subscriber level as well as at the aggregate
level for all residential subscribers connected to the same OLT.

Four levels of hierarchical QoS available at the BNG allow for:


• Classification and queuing of traffic at the subscriber level for separation of subscriber’s traffic into the
various subscriber services. Minimum and maximum bandwidth settings as well as priority handling can
be assigned to each service.
• Scheduling of traffic across classes at the subscriber level delivering differentiated QoS handling for the
different subscriber services.
• Scheduling of traffic across subscribers aggregated over the same access interface allowing operators
to offer different levels of service to different subscribers.
• Scheduling of traffic across VLANs on the same physical port, enabling controlled partitioning of
interface bandwidth among residential and business services in the case of Ethernet access, and even
among Access Nodes (ANs) in the case of MPLS access.
Two forwarding options are available for subscriber’s traffic past the BNG. Subscriber’s traffic can be routed
within the global routing domain, leveraging labeled Unicast MPLS forwarding, or can be isolated within a L3VPN
for complete separation between services delivered over the Unified MPLS fabric, such as mobile and business,
and in a dedicated address space.

The same device implementing BNG functions also operates as a MAP-T border router, reconstructing the
original IPv4 source and destination addresses from the 4to6 translation performed by the CPE.

Multicast forwarding toward the aggregation/core of the network is achieved over MLDP signaled multicast
LSPs and follows similar traffic isolation models as for unicast services. Depending on operator’s preference,
subscriber’s traffic can be routed within the global routing domain or can be isolated within the same or different
L3VPN used for residential unicast services.

Multicast forwarding toward subscribers happens over native IPv4 multicast and uses a dedicated N:1 VLAN
transport to prevent per subscriber replication at the BNG and to minimize the amount of replicated content in the
access network. Regular IGMPv2/v3 functions are performed at the BNG.

Figure 43 - 1:1 VLAN Aggregation Subscriber's Services Transport Summary

Service                  CPE-Access Node   Access Node-BNG                                  Service Differentiation
High Speed Internet      Non-trunk UNI     Per-subscriber dedicated unicast VLAN (802.1Q)   PBH=BE, DSCP=0, CoS=0
Voice over IP (VoIP)     Non-trunk UNI     Per-subscriber dedicated unicast VLAN (802.1Q)   PBH=EF, DSCP=46, CoS=5
Video on Demand (VoD)    Non-trunk UNI     Per-subscriber dedicated unicast VLAN (802.1Q)   PBH=AF, DSCP=32, CoS=4
IP TV                    Non-trunk UNI     Shared multicast VLAN (802.1Q)                   PBH=AF, DSCP=32, CoS=4

PBH = Per-Hop Behavior; BE = Best Effort; AF = Assured Forwarding; EF = Expedited Forwarding

Subscriber Address Families and Identities
The Cisco FMC system aims to provide a complete solution for operators who want to move away from the plague
of IPv4 address space exhaustion and who are actively pursuing insertion of IPv6 at different levels in their
network.

While the first release of the Cisco FMC system encompassed a Dual-Stack access network, the second release
of the FMC system targets operators looking to consolidate unicast services over an IPv6-only transport, while
retaining support for both address families at the residential service layer. This empowers operators to take
advantage of the benefits offered by IPv6 today, while also allowing IPv4-based services to be phased out over
a longer period of time as IPv6 content and applications are developed. To meet the need for coexistence of
both address families within the subscriber household and at service layer, the FMC system implements MAP-T.
MAP-T translates the private IPv4 addresses assigned to household appliances into a single, global IPv6 address
that represents the subscriber’s CPE.

Identity sets differ based on the access model, 1:1 or N:1, and the subscriber access protocol, Native IPoE or
PPPoE. IPv6 is inserted within existing identities and authorization methods, and across all service models in
order to minimize disruption.

In the N:1 VLAN model, the IPoE session identity is associated with the access line. Line ID information is
inserted by the AN in DHCPv6 Options 18 and 37, which correspond to DHCPv4 Option 82 Circuit and Remote
ID. To maintain continuity in subscriber’s identity while migrating to an IPv6-only access, it is expected that the
AN inserts in the DHCPv6 options the same line identifiers it was using for DHCPv4.

For PPPoE subscribers, the identity will be based on Point-to-Point Protocol (PPP) Challenge Handshake
Authentication Protocol (CHAP) username and password, or alternatively on the access line identifier carried in
PPPoE Intermediate Agent (IA) tags from the AN.

In the 1:1 VLAN model, both IPoE and PPPoE session identities are associated with the access line as identified
by the NAS-Port-ID at the BNG.

Figure 44 - Residential Subscriber Identity

Aggregation Model   Subscriber Identity
1:1 PPPoE           NAS-Port-Id
1:1 IPoE            NAS-Port-Id
N:1 PPPoE           PPP CHAP username
N:1 IPoE            DHCPv6 Option 18 + Option 37

Subscriber Address Assignment
Successful address assignment procedures are a prerequisite to any network access.

For residential wireline access, address assignment happens independently for devices within the household and
for the residential CPE.

Figure 45 - Subscriber Address Assignment

[Diagram contrasting client address assignment models with BNG local address assignment functions:
- IPoE: the client uses a single DHCPv6 session for NA and PD addresses to the same subscriber, with the
DHCPv4 (MAP-T) address derived from PD; the BNG acts as a DHCPv6 proxy with PD toward an external
DHCP server.
- PPPoE: the client model is the same; the BNG (+ MAP-T BR) runs local DHCPv6 with PD, with pool definitions
on the BNG referenced via RADIUS/AAA.]

Within the household, address assignment for dual-stack appliances happens according to the rules and
methods defined for IPv4 and IPv6 address families:
• For the IPv4 address family, the residential CPE operates as a DHCPv4 server, allocating IPv4 private
addresses from a locally defined pool.
• For the IPv6 address family, the CPE sends periodic Neighbor Discovery Router Advertisements (ND
RAs) with the Other Configuration flag set to achieve the following:
◦ Advertise the IPv6 prefix assigned to the LAN segment
◦ Announce itself as the default router on the segment
◦ Solicit the client to get the remaining setup information, including the Domain Name System
(DNS) IPv6 address, via DHCPv6
The CPE therefore operates as a DHCPv6 Stateless Server importing relevant information from a
DHCPv6 prefix delegation (PD) exchange.

On the network side, the residential CPE acquires all of its IPv6 addressing information via DHCPv6 for both IPoE
and PPPoE access protocols. Unlike IPv4, IPv6 support for PPPoE subscribers only allows for the negotiation
of link local addressing (interface-id) during the IPv6 Control Protocol (IPv6CP) phase, while global unicast
addressing builds upon the same methods used for IPoE subscribers.

In particular, the system exploits DHCPv6 non-temporary address assignment for the CPE WAN interface,
and DHCPv6 prefix delegation (PD) for the CPE LAN and MAP-T functions. Delegated prefixes are used for
IPv6 addressing of CPE LAN interfaces and household appliances through router advertisements, as well as by
MAP-T in order to build a unique IPv6 address for its address family translation functions.

An obvious benefit of DHCPv6 PD is that it allows for centralized management of address assignment functions
of IPv6 hosts within the subscriber household, which provides operators with better visibility over traffic from
different household appliances and removes the need for expensive packet manipulation at the CPE.
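
As a concrete illustration of the mechanics, the sketch below uses Python's standard ipaddress module to split a
hypothetical delegated /56 into per-LAN /64s, setting one /64 aside for the MAP-T translation functions described
earlier (the prefix value and the allocation policy are assumptions made for the example):

import ipaddress

# Hypothetical prefix delegated to the CPE via DHCPv6 PD.
delegated = ipaddress.ip_network("2001:db8:ab00::/56")

# A /56 yields 256 possible /64s for the CPE to assign.
subnets = list(delegated.subnets(new_prefix=64))

lan_prefixes = subnets[:3]   # e.g., wired LAN, private SSID, STB segment
mapt_prefix = subnets[-1]    # e.g., reserved for MAP-T translation

for lan in lan_prefixes:
    print("advertise in RA:", lan)
print("MAP-T end-user prefix:", mapt_prefix)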

Address assignment functions at BNG align with industry established models. For the PPPoE subscriber, the
system will use locally defined pools referenced via RADIUS attributes, while for IPoE the BNG will act as a
DHCPv6 proxy with the support of an external DHCPv6 server.

Community Wi-Fi Service Architecture


The second release of the Cisco FMC system complements traditional fixed subscriber aggregation models with
public Wi-Fi community access.

In these new offerings, the residential CPE role in the operator’s network is two-fold. It provides private access
to the subscriber’s household via wired connections or secured Wi-Fi SSIDs (authenticated and encrypted via
WPA/WPA2), and hotspot-like public access to roaming users through a dedicated Wi-Fi interface, based on a
shared, open SSID.

To accomplish this on a single transport infrastructure, the system implements VLAN-based isolation in the
access network, and MPLS service virtualization in the aggregation and core network.

Wi-Fi Aggregation Models


The nature of community Wi-Fi access is to allow users to enter the operator’s network from a number of
different locations, or hotspots. As a result, 1:1 VLAN aggregation models, which call for subscribers to be
identifiable by a well-known and dedicated set of transport VLANs, are not applicable to this kind of offering. The
N:1 VLAN model is therefore employed to aggregate all Wi-Fi subscribers to and from a particular AN.

Residential wireline services can still be offered by using a 1:1 or a N:1 aggregation model, according to
operator’s preference, and VLAN-based traffic separation originates from a trunk UNI at the residential CPE.

Such separation allows for easy prioritization as well as different authentication and authorization procedures for
the two access types.

To account for the additional bandwidth and scale requirements brought by the Wi-Fi access overlay, the Cisco
FMC system promotes such unified deployments with Fiber to the Home (FTTH)/PON access and distributed
BNG. The following figure shows such models, in the case of 1:1 VLAN aggregation for wireline subscribers.

Figure 46 - Unified Wireline Wi-Fi Aggregation Model

[Diagram: a routed/bridged CPE with a trunk UNI carries residential HSI/VoD/VoIP on QinQ VLANs and
community Wi-Fi on a separate shared VLAN through the Ethernet/PON (PON/FTTH) access. Separate BNG
access interfaces (AI/F) at the PAN or AGN + SE terminate the wireline PPP/IP sessions and the Wi-Fi IP
sessions, with forwarding over IP or MPLS VPN across the IP/MPLS aggregation and core.]

The main components of the community Wi-Fi access architecture are:
• the customer premises equipment (CPE)
• the access node (AN)
• the Broadband Network Gateway (BNG)

The remainder of this section discusses the differences and additions to the traditional residential access.

Customer Premises Equipment (CPE)

In the unified wireline and community Wi-Fi architecture, the CPE device assumes the dual role of demarcation
point between the SP network and the home for fixed access, and the public access area for wireless access.

In the home, appliances attach to the CPE via wired connections or associate to a secure wireless SSID that
ensures authentication and encryption of household’s traffic. Such traffic is then routed at the CPE UNI and
carried over the residential 1:1 or N:1 VLAN.

In the public access area, roaming Wi-Fi handsets associate to the residential CPE via a well-known, shared, and
open SSID provided by the operator. The resulting data traffic is bridged over a dedicated Wi-Fi VLAN into the
operator’s network, which ensures handsets can still be individually tracked via their MAC and IP addresses. The
use of different VLANs for residential and community Wi-Fi traffic implies the CPE UNI becomes a trunk.

While both IPv4 and IPv6 address families are supported within the household for both PPPoE and IPoE access
protocols, Wi-Fi access happens over IPv4 IPoE only, in line with the capabilities of existing handsets.

DHCPv4 is used between the subscriber end device and the BNG node for IP address allocation. This exchange
is bridged by the CPE transparently.

Access Node (AN)

The AN is responsible for aggregating all CPEs in the same local area and implements a number of critical
functions, such as line identification, security, and QoS.

For Wi-Fi subscribers, line identification is based on DHCPv4 snooping functions inserting location-specific
information in DHCP messages forwarded to servers. Insertion of line information for community Wi-Fi access is
essential to monitoring subscriber’s access locations for tracking purposes.

The AN may implement additional security measures such as Address Resolution Protocol (ARP) inspection and
traffic throttling on the Wi-Fi VLAN in order to block possible attacks from the open access network.

In the downstream direction, QoS is used to provide relative priority between the residential and Wi-Fi VLANs at the
trunk UNI, and across classes of traffic within each VLAN.

Split horizon forwarding is implemented at the trunk UNI on the N:1 Wi-Fi VLAN and on any N:1 residential VLAN
in order to prevent subscribers from communicating directly with each other, and thus bypassing the BNG.

Broadband Network Gateway (BNG)

The Broadband Network Gateway (BNG) node is the network device that enables subscriber management
functions for the residential subscribers as well as the public Wi-Fi users.

For Community Wi-Fi access, a single 802.1Q interface matching the shared N:1 VLAN aggregates all Wi-Fi
users connected to the same AN device.

A separate access interface is used for aggregating the residential wireline subscribers.

The BNG allows Wi-Fi subscriber access using a combination of MAC-based authorization and web logon
procedures and provides per session accounting via Remote Authentication Dial-In User Service (RADIUS) AAA
requests. The BNG enables dynamic policy control with RADIUS CoA functionality on subscriber sessions.

Similar to residential access, QoS is guaranteed at the subscriber level as well as at the aggregate level for all
community Wi-Fi subscribers connected to the same OLT.

Two forwarding options are available for Wi-Fi subscriber’s traffic past the BNG. Subscriber’s traffic can be
routed within the global routing domain, leveraging labeled Unicast MPLS forwarding, or can be isolated within a
L3VPN, for complete separation between services delivered over the Unified MPLS fabric, such as mobile and
business, and in a dedicated address space.

Such L3VPN, in turn, can be dedicated or shared with the residential users.

Community Wi-Fi Subscriber Address Assignment and Identities


While the Cisco FMC system addresses Dual-Stack capable devices within the subscriber household, it focuses
on IPv4 only for community Wi-Fi access.

For community Wi-Fi, the CPE operates in bridged mode and DHCPv4 is enabled on the subscriber handset in
order to acquire IPv4 configuration parameters (such as Address, DNS, etc.) from an external DHCPv4 server.
The BNG operates as a DHCPv4 proxy, overseeing the address allocation exchange between the client and the
server.

Figure 47 - Community Wi-Fi Subscriber Address Assignment

[Diagram: the handset runs DHCPv4 (IPoE) through the bridged CPE; the BNG operates as a DHCPv4 proxy
toward an external DHCPv4 server.]

User identity is based on the username and password associated with the subscriber’s account as well as the
handset MAC address information.

The MAC address is dynamically learned upon an initial successful web logon and used for transparent
authorization during subsequent accesses.

Subscriber Experience Convergence
To support operators in their effort of retaining and expanding their customer base, the second release of Cisco
FMC system presents the concept of unified subscriber experience. It introduces the availability of multi-access
and multi-device plans, allowing end-users to attain simultaneous access in the operator network from a number
of different devices, locations, and access types, including traditional “fixed” wireline, and “wireless” methods,
such as community Wi-Fi and mobile.

The following use cases are addressed:


• Transparent authorization—Enables users to be automatically signed into the network without need for
redirection to a portal for web-based authentication. It applies to wireline, mobile and Wi-Fi access.
• Web logon for Wi-Fi access with dynamic learning of credentials—Allows dynamic learning of network
credentials of unregistered Wi-Fi devices by redirecting the user to a web logon portal the first time
he accesses the operator’s network from an unknown appliance. The device MAC address is then
automatically registered in the subscriber’s account and used for transparent authorization of subsequent
accesses from the same device. It applies to Wi-Fi access only. It assumes credentials for other access
types to be pre-provisioned in subscriber’s account.
• Weighted fair usage—Provides cost-effective sharing of total purchased credit across multiple devices
in the same subscriber’s account and reduces operator’s costs by steering subscribers toward cheaper
access types through more advantageous monetary conditions (metering rules). It applies to wireline,
mobile and Wi-Fi access.
• Tiered weighted fair usage—Generates additional revenue by capturing a larger portion of the market
through differentiated offerings that cater to specific usage needs. This encourages customer adoption
of higher tier plans by setting different limits to the number of simultaneous active devices in the
network.

Transparent Authorization
Transparent Authorization enables users who have already registered with the operator to be automatically
signed into the network without the need for any user intervention, such as redirection to a portal for web-based
authentication.

For community Wi-Fi and mobile access, the number of simultaneous active sessions from multiple devices in
the same account is capped. When the threshold is breached, user attempts to establish additional sessions are
denied and the user is redirected to a notification page on the self-management portal.
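
Enforcement of the cap amounts to a counter check at authorization time. The following minimal Python sketch
models that AAA-side decision; the limit, names, and return values are hypothetical:

# Hypothetical AAA-side decision for transparent authorization with a
# per-account cap on simultaneous sessions.
MAX_ACTIVE_SESSIONS = 3    # illustrative plan limit

active_sessions = {}       # account id -> number of currently active sessions

def authorize(account_id: str, registered: bool) -> str:
    if not registered:
        return "reject"                    # unknown user or device
    if active_sessions.get(account_id, 0) >= MAX_ACTIVE_SESSIONS:
        return "redirect-to-notification"  # cap breached: deny and notify
    active_sessions[account_id] = active_sessions.get(account_id, 0) + 1
    return "accept"                        # transparent sign-in succeeds

print(authorize("sub-0042", registered=True))  # -> accept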

Call flows are different depending on access type. The following sections discuss the behavior for residential
wireline, community Wi-Fi, and mobile access.

Wireline
The following figure shows the behavior the Cisco FMC system implements for an IPoE IPv6 wireline subscriber.

Figure 48 - Wireline Transparent Authorization Call Flows

[Call flow between CPE, AN, BNG, and DHCPv6/AAA-PCRF:
1. The CPE sends a DHCPv6 Solicit; the AN inserts the line identifiers in Option 18/37.
2. The BNG sends a RADIUS Access-Request with Username = Option 18/37.
3. AAA/PCRF recognizes the user and verifies that the number of active sessions is within the allowed limit; it
returns an Access-Accept carrying the subscriber's services.
4. The DHCPv6 Advertise/Request/Reply exchange completes.
5. The BNG sends a RADIUS Accounting-Start; the Accounting-Session-ID is cached from the accounting start
and used in subsequent RADIUS CoA messages to identify the subscriber.
6. User traffic flows.]

When a wireline subscriber first accesses the network, a new session is triggered at the BNG; depending on the
subscriber access protocol, this happens as part of a PPP session negotiation exchange or upon receipt of a
DHCPv6 Solicit message.

While session establishment is in process, BNG attempts to authenticate the subscriber based on credentials
collected from different sources. These range from information coming directly from the client, such as the PPP
username or the subscriber MAC address, to line identifiers inserted in DHCPv6 Option 18/37 by the AN, or taken
from the BNG access port, such as slot, port, and VLAN information. How the user gets authenticated depends
on the subscriber aggregation model, 1:1 or N:1, and the subscriber access protocol PPPoE or IPoE.

For an existing subscriber, this network-based authentication will succeed and the BNG will receive from RADIUS
all the features that should be activated on the subscriber session to reflect his subscription.

Community Wi-Fi
The following figure shows the behavior the Cisco FMC system implements for an IPoE IPv4 Wi-Fi subscriber.

Figure 49 - Community Wi-Fi Transparent Authorization Call Flows

[Call flow between the bridged CPE, BNG, and DHCPv4/AAA-PCRF:
1. The handset sends a DHCP Discover (MAC).
2. The BNG sends a RADIUS Access-Request with Username = MAC.
3. AAA/PCRF recognizes the user and returns an Access-Accept carrying the subscriber's services.
4. The DHCP Offer/Request/Ack exchange completes.
5. The BNG sends a RADIUS Accounting-Start; the Accounting-Session-ID is cached and used in subsequent
RADIUS CoA messages to identify the subscriber.
6. User traffic flows.]

When a Wi-Fi subscriber first accesses the network, a new session is triggered at the BNG based on the receipt
of a DHCPv4 Discover message.

While session establishment is in process, BNG attempts to authenticate the subscriber based on the wireless
handset MAC address.

For a returning subscriber, this network-based authentication will succeed and the BNG will receive from RADIUS
all the features that should be activated on the subscriber session to reflect his subscription. Depending on
the number of already active sessions in the user account, these features may grant actual network access
or may redirect the user to a notification page that explains why access is being denied, after which the session is
disconnected. The following figure describes the latter scenario.

Figure 50 - Community Wi-Fi—Maximum Number of Active Sessions Breached

[Call flow:
1. A DHCP Discover (MAC) triggers a RADIUS Access-Request with Username = MAC.
2. AAA/PCRF recognizes the user but finds too many devices active for the plan; the Access-Accept carries a
redirection service (Max Active) and HTTP-Redirect is enabled on the session.
3. The DHCP Offer/Request/Ack exchange completes and an Accounting-Start is sent (Accounting-Session-ID
cached for subsequent CoA messages).
4. The first HTTP GET is intercepted and answered with an HTTP 307 redirect to the portal's error page,
carrying the NAS IPv4 address.
5. The subscriber reads the "too many devices" notification.
6. A RADIUS CoA Account Log Off (Accounting-Session-ID) terminates the session.]

Web-Logon for Wi-Fi access with Dynamic Learning of Credentials
This use case allows Wi-Fi users to automatically register a new device and to gain network access after being
redirected to a web logon screen on a self-management portal. The number of Wi-Fi devices that can be
registered in a given account is capped, and access is denied when the limit is breached.

The call flow in the following figure shows the behavior the Cisco FMC system implements for an IPoE IPv4 Wi-Fi
subscriber.

Figure 51 - Community Wi-Fi—Web-Logon Access with Device Self-Registration

[Call flow:
1. A DHCP Discover (MAC) triggers a RADIUS Access-Request with Username = MAC; the user is not known
yet, so AAA returns an Access-Reject and the BNG applies the HTTP-Redirect service.
2. The DHCP Offer/Request/Ack exchange completes, granting limited access.
3. The first HTTP GET is intercepted and answered with an HTTP 307 redirect to the Web Logon portal page,
carrying the NAS IPv4 address.
4. The subscriber logs in with username and password; the portal sends a RADIUS CoA Account Logon
(subscriber IPv4, VRF, username, password), identifying the user by IP address and VRF.
5. The BNG re-authenticates via a RADIUS Access-Request (username, password); the Access-Accept returns
the subscriber's services.
6. A RADIUS Accounting-Start carries the Accounting-Session-ID and Calling-Station-ID; the MAC address is
cached as an additional credential.
7. The CoA Account Logon is acknowledged and user traffic flows.]

When a Wi-Fi subscriber first accesses the network, a new session is triggered at the BNG upon receipt of a
DHCPv4 Discover.

While session establishment is in progress, BNG attempts to authenticate the Wi-Fi device based on its MAC
address.

For a new Wi-Fi device, this network-based authentication will fail and the BNG enables redirection of HTTP
traffic to a Web Logon portal page. Subscriber address assignment is allowed to complete so that the client can
gain limited access in the network.

After the user starts a web browser, the BNG responds to the HTTP GET request with an HTTP redirect (HTTP
307 message) that replaces the original URL with the URL of the self-management portal.
The user is then presented with a web logon screen where he is asked to provide his username and password.
The portal propagates user-entered credentials to the BNG via RADIUS CoA, to be used for a second round of
authentication. A successful authentication exchange with AAA includes all the features that should be activated
on the subscriber session to reflect his subscription.

Web logon redirection is removed from the session and an accounting start is sent to AAA to signal the full
establishment of the session. The accounting message includes information about the device's identity, such as
its MAC address. This information is recorded and used to enable transparent authorization on subsequent
logins.

If the maximum number of registered devices has been breached, the automatic credential registration fails and
access is denied by enabling a redirection service to a user notification page.

Weighted Fair Policies


Volume-controlled services are an alternative to all-you-can-eat offerings whenever the subscriber requires
additional cost control on his subscription, or as an additional tool for monetization and usage control for
operators. In this model, the subscriber’s service usage is monitored in real time, and when credit is exhausted
service levels are decreased until the next automatic replenishment cycle.

All active subscriber sessions across all access types feed from the same credit pool, and consumption from the
pool is weighted based on access type in order to steer subscribers toward cheaper access technologies. As
an example, weights for each byte of actual traffic can be set at 5x for mobile, 3x for Wi-Fi, and 1x for wireline
access, making the latter the most cost effective way for subscribers to enter the operator’s network.
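
Debiting under this scheme reduces to a weighted multiplication against the shared pool. The sketch below
mirrors the example weights above; the pool structure and function name are illustrative:

# Illustrative weighted debiting of a shared credit pool.
# Example weights from the text: mobile 5x, Wi-Fi 3x, wireline 1x.
WEIGHTS = {"mobile": 5, "wifi": 3, "wireline": 1}

def debit(pool_bytes: int, reported_bytes: int, access_type: str) -> int:
    # Each reported byte erodes the pool by its access-type weight.
    return max(pool_bytes - reported_bytes * WEIGHTS[access_type], 0)

pool = 50_000_000_000                          # 50 GB monthly allowance
pool = debit(pool, 1_000_000_000, "mobile")    # 1 GB over mobile costs 5 GB
pool = debit(pool, 1_000_000_000, "wireline")  # 1 GB over wireline costs 1 GB
print(pool)                                    # -> 44000000000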

Call flows are different depending on access type. The following sections discuss the behavior for residential
wireline, community Wi-Fi, and mobile access.

Residential Wireline and Community Wi-Fi
The following figure shows the behavior the Cisco FMC system implements for wireline and Wi-Fi subscribers.

Figure 52 - Wireline and Community Wi-Fi Weighted Fair Policy

[Call flow: the BNG sends periodic RADIUS Accounting Interim records (Accounting-Session-ID, subscriber IP,
NAS IP address, byte/packet counts) that AAA/PCRF uses to process the subscriber's total network usage. While
usage is below the monthly allowance, no action is taken. When usage reaches 80% of the allowance, a RADIUS
CoA Account-Update requests faster accounting records (accounting-interim=300). When usage reaches 100%,
a RADIUS CoA Service-Activate applies SERVICE_REDUCED_SPEED. At the start of a new billing cycle,
SERVICE_NORMAL_SPEED is re-activated, the rate is restored, and the credit pool is replenished.]

Per-session accounting is enabled to periodically report time and volume utilization for the subscriber’s actively
monitored session. An external online charging function or quota manager then adjusts the user’s credit
availability based on the statistics reported in the accounting records and the weight associated with the access type.

When the credit utilization reaches 100%, a RADIUS CoA message is sent to the BNG requesting activation of
a lower-tier service, and the user experience degrades until the next credit replenishment. Optionally, the user
may also be redirected to the subscriber registration portal to be notified of his credit status reaching a critical
threshold. Redirection is disabled after the user acknowledges reading the notification, or after a predefined time
period.

To minimize revenue leakage caused by a delay in detection due to the periodic nature of accounting records,
the interval in which accounting messages are sent is incrementally reduced as credit is eroded. While ample
credit remains, the accounting interval is set to larger values to minimize the load on accounting servers.
This is expected to be the state in which the vast majority of subscribers and their sessions will be.

When a large percentage of credit is eroded, the accounting interval is reduced and accounting records
transmitted more frequently. The new interim interval is calculated as a function of the rate at which the
subscriber is transmitting and of his service tier. Changes to the active feature set for the subscriber session are
requested via RADIUS CoA messages.
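
The guide does not prescribe the exact formula, but one plausible shape of such a function is sketched below:
the interval shrinks as the subscriber's current rate could exhaust the remaining credit sooner. The thresholds
and the ten-report target are hypothetical assumptions:

# Hypothetical calculation of the RADIUS interim-accounting interval.
# The faster the remaining credit could be burned, the shorter the interval.
def interim_interval(remaining_bytes: int, rate_bps: int,
                     floor_s: int = 300, ceiling_s: int = 3600) -> int:
    if rate_bps <= 0:
        return ceiling_s
    seconds_to_exhaustion = remaining_bytes * 8 / rate_bps
    # Aim for roughly ten reports before the credit could run out,
    # bounded by a floor and a ceiling.
    return int(min(max(seconds_to_exhaustion / 10, floor_s), ceiling_s))

print(interim_interval(remaining_bytes=40_000_000_000, rate_bps=20_000_000))
# 40 GB at 20 Mbps lasts ~16,000 s -> interval of 1,600 s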

Tiered Weighted Fair Policy


Tiered Weighted Fair Policy use cases fall into the category of subscriber self-management and empower users
to take charge of their experience in the network by allowing them to change their subscription at any time with
the touch of a button. Subscribers can modify their service tier and changes are automatically provisioned to
reflect the new level of service requested.

Tiered offerings help operators generate additional revenue by capturing a larger portion of the market through
differentiated offerings that cater to specific needs. To encourage customer adoption of higher tier plans,
different limits are set to the number of simultaneous active devices in the network that each plan allows, in
addition to different maximum bandwidth settings and total credit available to the subscriber’s account.

Call flows are different depending on access type. The following sections discuss the behavior for residential
wireline, community Wi-Fi, and mobile access.

Residential Wireline and Community Wi-Fi


The following figure shows the behavior the Cisco FMC system implements for wireline and Wi-Fi subscribers.

Figure 53 - Residential Wireline and Community Wi-Fi

[Call flow: the user logs on to the portal (username, password) to change subscription. The portal sends a
RADIUS CoA Service De-Activate (Accounting-Session-ID, SERVICE_TIER2_SPEED), which the BNG
acknowledges, followed by a RADIUS CoA Service-Activate (Accounting-Session-ID, SERVICE_TIER1_SPEED),
also acknowledged.]

In the scenario depicted in the previous figure, the user logs on to the subscriber self-management portal and
requests activation of a new service tier. Whenever the user requests a change of subscription, the portal sends
a RADIUS CoA message indicating the changes that should be applied to the subscriber session. This affects all
the active subscriber sessions across all access types.

Business Service Architecture
The Cisco FMC system design provides transport and service edge functions for business point-to-point ATM/
TDM, E-Line, E-LAN, and L3VPN wireline services. To accomplish this on a single network, in conjunction with
residential and mobile services, MPLS service virtualization is employed, which provides emulated circuit services
for TDM and ATM transport, emulated Ethernet L2VPN services for E-Line and E-LAN, and MPLS Virtual Route
Forwarding (VRF) services for L3VPN services. Service models for each type of service are outlined in this section.

MPLS VRF Service Model for L3VPN


This section describes how multipoint L3VPN wireline services are deployed in the Cisco FMC system. The
AGN-SE or PAN-SE node implements the MPLS VRF for the L3VPN service. The business customer’s CPE
equipment may be connected directly to this service edge node, via an Access Node (AN) such as a
FAN, or via a CSG located in an urban area with fiber access from the business to the CSG. The AN may be
connected to the service edge node either via an Ethernet Network-to-Network Interface (NNI) or spoke PWs,
which transport the service between the service edge node and any MPLS-enabled AN subtending from the
service edge node. In the latter case, PWHE functionality in the service edge node will allow for mapping of the
PW directly to the VRF for the L3VPN service.

Wireline L3VPN Service with Unified MPLS Access

Figure 54 - Wireline L3VPN Service in Cisco FMC System with Unified MPLS Access

[Diagram: access networks (OSPF 0/IS-IS L2), aggregation networks (IS-IS L1), and core network (IS-IS L2),
with inline RRs on the PANs and CN-ABRs. Pseudowires run from the FAN/CSG to the PAN-SE on each side, and
the MPLS VPN (v4) spans the PAN-SEs across the core, connecting the business CPEs.]

The preceding figure shows a wireline L3VPN service enabled using an MPLS VRF between two PAN-SEs
across the core network with an EoMPLS PW between each PAN-SE and the respective FAN.

As detailed in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global FAN and IGW BGP communities, to provide routing to all possible prefixes for the service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community, and import the loopbacks of the other nodes from the same community.
• Implement PWHE interfaces to terminate the PW from the AN and map it to the MPLS VRF for the
L3VPN service.

As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Announce their loopbacks to a global FAN BGP community.
• Transport all services to the PAN-SE.

The PAN-SE will implement SLA enforcement through per-subscriber QoS policies and any required access control
lists (ACLs). The FAN will provide aggregate class enforcement through QoS.

L3VPN Service with Fixed and Mobile Access

Figure 55 - L3VPN Service in Cisco FMC System with Fixed and Mobile Access

[Diagram: IGP/LDP domains across access, aggregation, and core, with inline RRs on the PANs and CN-ABRs.
On the fixed side, AToM pseudowires carry the enterprise Ethernet access from the FAN/CSG to the FSE. The
business L3VPN (v4/v6) services span the FSEs and the MSE; on the mobile side, the S1 and X2 L3VPN and the
LTE/3G IP bearer reach the enterprise CPE behind the CSG.]

The preceding figure shows a business L3VPN service enabled by using an MPLS VRF that spans across both
the fixed wireline and mobile networks. The figure shows an LTE deployment for the mobile-attached CPE
device, but 3G deployments are also supported.

On the fixed wireline side, the VRF is created on a PAN-SE with an EoMPLS PW transporting the service
between the PAN-SE and the respective FAN.

As detailed in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will do the
following:
• Import the global FAN and IGW BGP communities in order to provide routing to all possible prefixes for
the service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community, and import the loopbacks of the other nodes from the same community.
• Implement PWHE interfaces in order to terminate the PW from the AN and map it to the MPLS VRF for
the L3VPN service.

As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Announce their loopbacks to a global FAN BGP community.
• Transport all services to the PAN-SE.

The PAN-SE will implement service SLA enforcement through per-subscriber QoS policies and any required
ACLs. The FAN will provide aggregate class enforcement through QoS.

In the Mobile Network, the L3 MPLS VPN transport handling is dependent upon the technology being deployed.
In the case of a 3G deployment, the gateway General Packet Radio Service (GPRS) support node (GGSN)
handles the transport establishment and routing with the mobile CPE router. In the case of an LTE deployment,
the Cisco Packet Data Network Gateway (PGW) handles the transport establishment and routing with the mobile
CPE router. The GGSN use case is described in this section, and deployment using a PGW for LTE is exactly the
same.

The GGSN supports the RADIUS Framed-Route attribute-value pair (AVP) in order to enable mobile router
functionality. This feature enables a router to create a Packet Data Protocol (PDP) context that the GGSN
authorizes via RADIUS. The RADIUS server authenticates the router and includes a Framed-Route attribute (RFC
2865) in the RADIUS Access-Accept response, specifying the subnet routing information to be installed in the
GGSN for the mobile router.

If the GGSN receives a packet with a destination address matching a Framed-Route, the packet is forwarded
to the mobile router through the associated PDP context. Framed-Routes received via RADIUS in the Access-
Accept are installed for the subscriber (for a GGSN call) and are deleted once the call is terminated.

This feature is implemented using aggregate VPN APIs. The framed-route attribute also works in combination
with an MPLS/BGP solution, meaning the framed route will be installed in a particular VRF (ip vrf) of a corporate
access point name (APN) and its routing table.

If BGP and route redistribution are configured for the VRF, the routes are announced over multiprotocol
external BGP (MP-eBGP) to the gateway to the fixed wireline network. The GGSN generates a new label for a
framed-route subnet and distributes it over MP-eBGP. The Framed-Routes may be advertised if dynamic routing
is in use, as dictated by the routing protocol and its configuration.

Framed-Routes can overlap. They are added to the routing table of a particular VRF, and since each routing
table is separated by VRF, the same IP subnets can coexist among different VRFs.

The Framed-Routes assigned at context setup remain in effect for the lifetime of the context and need not be
modified.

Framed-Routes can be public or private. The IP address assigned to the mobile router itself need not be part
of the Framed-Routes assigned to the context. For example, the mobile router may be assigned a private IP
address while the Framed-Route may be a public IP subnet. However, an IP address can be associated with a
Framed-Route attribute via the RADIUS Access-Accept.
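For illustration, a hypothetical RADIUS profile in FreeRADIUS users-file syntax could deliver the Framed-Route
as follows; the username, password, and subnet are placeholders, and the Framed-Route value format follows
RFC 2865 (prefix, gateway, metric):

# Mobile router on a corporate APN; the framed route is installed in the
# VRF associated with that APN when the PDP context is authorized
mobile-router-01  Cleartext-Password := "example-secret"
        Service-Type = Framed-User,
        Framed-Route = "198.51.100.0/24 0.0.0.0 1"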

Wireline L3VPN Service with Ethernet Access

Figure 56 - Wireline L3VPN Service in Cisco FMC System with Ethernet Access

[Diagram: a CPE attaches over TDM or packet microwave to a PAN-SE in one IS-IS L1 aggregation domain;
across the IS-IS L2 core, the remote PAN-SE delivers the MPLS VPN (v4) service to a FAN over an 802.1q or
802.1ad Ethernet NNI, with the CN-ABRs acting as inline RRs.]

The preceding figure shows a wireline L3VPN service enabled using an MPLS VRF between two PAN-SEs
across the core network with an 802.1q or Q-in-Q-tagged Ethernet NNI between one PAN-SE and its respective
FAN.

As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global IGW BGP communities to provide Internet connectivity for the service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community and import the loopbacks of the other nodes from the same community.

As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Map the UNI to the proper 802.1Q or Q-in-Q Ethernet NNI for transport to the PAN-SE.

The PAN-SE will map the S-VLAN and/or C-VLAN(s) from the UNI or Ethernet NNI to the MPLS VRF. This service
edge node will implement service SLA enforcement through per-subscriber QoS policies and any required
ACLs. The FAN, if utilized, will provide aggregate class enforcement through QoS.
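A minimal sketch of this VLAN-to-VRF mapping on the service edge, in IOS XR style, follows; the VRF name, RT
values, VLAN ID, and addresses are hypothetical:

vrf BIZ-VPN
 address-family ipv4 unicast
  import route-target
   65000:100
  !
  export route-target
   65000:100
!
! 802.1Q subinterface on the UNI or Ethernet NNI, mapped into the VRF
! (for a Q-in-Q NNI, the encapsulation would add a second-dot1q tag)
interface TenGigE0/0/0/1.100
 vrf BIZ-VPN
 ipv4 address 203.0.113.1 255.255.255.252
 encapsulation dot1q 100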

H-VPLS Service Model for L2VPN


This section describes how multipoint wireline services are deployed in the Cisco FMC system by employing a
hierarchical VPLS model. The AGN-SE or PAN-SE node implements the VPLS VFI for the E-LAN service. The
business customer’s CPE equipment may be connected directly to this service edge node, via an AN such as a
FAN, or via a CSG located in an urban area with fiber access from the business to the CSG. The AN may be
connected to the service edge node either via an Ethernet NNI or via spoke PWs, which transport the service
between the service edge node and any MPLS-enabled AN subtending from the service edge node.

Wireline VPLS Service with Unified MPLS Access

Figure 57 - Wireline VPLS Service in FMC System with Unified MPLS Access

[Diagram: CPEs attach through FAN and CSG access networks (OSPF 0/IS-IS L2) to PAN-SEs in the IS-IS L1
aggregation domains; pseudowires from the ANs feed the VPLS instance that spans the IS-IS L2 core between
the two PAN-SEs, with the PANs and CN-ABRs acting as inline RRs.]

The previous figure shows a wireline VPLS, such as an EP-LAN or EVP-LAN business service, enabled using a
VPLS VFI between two PAN-SEs across the core network, with an EoMPLS PW between each PAN-SE and the
respective FAN.

As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global FAN and IGW BGP communities to provide routing to all possible prefixes for the
service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community, and import the loopbacks of the other nodes from the same community.

As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Announce their loopbacks to a global FAN BGP community.
• Transport all services to the PAN-SE.

The PAN-SE will map the spoke PW(s) to the VPLS VFI. This service edge node will implement service SLA
enforcement through per-subscriber QoS policies and any required ACLs, and will learn the MAC addresses in
the service. The FAN will provide aggregate class enforcement through QoS. Since the FAN has only two
connections to the VPLS service (the UNI and the PW to the PAN-SE), MAC address learning is disabled for the
service.
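A minimal sketch of the PAN-SE side of this H-VPLS model, in IOS XR style, is shown below; the bridge-domain
names, neighbor addresses, and pw-ids are hypothetical:

l2vpn
 bridge group BUSINESS
  bridge-domain ELAN-200
   ! Spoke PW from the MPLS-enabled FAN, terminated as an access PW
   neighbor 10.255.1.10 pw-id 200
   !
   ! VFI holding the core mesh of PWs toward the other PAN-SEs;
   ! split horizon applies among the VFI (core) pseudowires
   vfi ELAN-200-CORE
    neighbor 10.255.3.1 pw-id 200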

Wireline VPLS Service with TDM/Ethernet Access

Figure 58 - Wireline VPLS Service in Cisco FMC System with TDM/Ethernet Access

[Diagram: a CPE attaches over TDM or packet microwave to a PAN-SE in one IS-IS L1 aggregation domain;
across the IS-IS L2 core, the remote PAN-SE delivers the VPLS service to a FAN over an 802.1q or 802.1ad
Ethernet NNI, with the CN-ABRs acting as inline RRs.]

The preceding figure shows a wireline VPLS, such as an Ethernet Private LAN (EP-LAN) or Ethernet Virtual
Private LAN (EVP-LAN) business service, enabled using a VPLS VFI between two PAN-SEs across the core
network, with an 802.1q or Q-in-Q tagged Ethernet NNI between each PAN-SE and the respective FAN.

As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global IGW BGP communities to provide routing to all possible prefixes for the service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community and import the loopbacks of the other nodes from the same community.

As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Map the UNI to the proper 802.1Q or Q-in-Q Ethernet NNI for transport to the PAN-SE.

The PAN-SE will map the S-VLAN and/or C-VLAN(s) from the UNI or Ethernet NNI to the VPLS VFI. This service
edge node will implement service SLA enforcement through per-subscriber QoS policies and any required
ACLs, and learn the MAC addresses in the service. The FAN, if utilized, will provide aggregate class enforcement
through QoS. Since the FAN has only two connections to the VPLS service, the UNI and the NNI to the PAN-SE,
MAC address learning is disabled for the service.

PBB-EVPN Service Model for L2VPN


This section describes how multipoint wireline services are deployed in the Cisco FMC system by employing an
Ethernet VPN (EVPN) with Provider Backbone Bridge (PBB) model. PBB-EVPN brings enhanced functionality,
scalability, flexibility, and greater operational simplification to L2VPN services versus the VPLS VFI model. In this
model, the AGN-SE or PAN-SE node implements the E-LAN service through an EVPN Instance (EVI), which is
then routed between PE nodes by utilizing the address-family l2vpn evpn construct in BGP.

The business customer’s CPE equipment may either be connected directly to this service edge node or via an
AN, such as FANs or CSGs located in an urban area with fiber access from the business to the CSG. The AN
may be connected to the service edge node either via an Ethernet NNI, an Ethernet ring network, or an MPLS
Access network.

Wireline PBB-EVPN Service with TDM/Ethernet Access

Figure 59 - PBB-EVPN Service in Cisco FMC System with TDM/Ethernet Access

[Diagram: CPEs attach over TDM or packet microwave and, via a FAN, over an 802.1q or 802.1ad Ethernet NNI
to PAN-SEs in the IS-IS L1 aggregation domains; the PBB-EVPN service spans the IS-IS L2 core between the
PAN-SEs, with the CN-ABRs acting as inline RRs.]

The preceding figure shows a wireline L2VPN service, such as an Ethernet Private LAN (EP-LAN) or Ethernet
Virtual Private LAN (EVP-LAN) business service, enabled using an EVI with PBB configured between two PAN-
SEs across the core network, with an 802.1q or Q-in-Q tagged Ethernet NNI between each PAN-SE and the
respective FAN.

As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global IGW BGP communities in order to provide routing to all possible prefixes for the
service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community and import the loopbacks of the other nodes from the same community.

As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Map the UNI to the proper 802.1Q or Q-in-Q Ethernet NNI for transport to the PAN-SE.

The PAN-SE will group the S-VLAN and/or C-VLANs, defining the service from the UNI or Ethernet NNI into a
bridge domain, referred to as the PBB-Edge bridge domain (BD). Through the PBB functionality in the service
edge node, this PBB-Edge BD is connected to a second PBB-Core BD, on which the EVPN service is configured
with an EVI. This step encapsulates all Customer-MAC (C-MAC) addresses in a Bridge-MAC (B-MAC) address,
and only B-MAC information is shared between service edge nodes. This same EVI is configured on all service
edge nodes participating in this PBB-EVPN. Service traffic between the service edge nodes is routed via BGP
“address-family l2vpn evpn” information.
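A minimal PBB-EVPN sketch of a service edge node, in IOS XR style, follows; the EVI, I-SID, RT values,
interface, and AS number are hypothetical:

l2vpn
 bridge group PBB
  ! PBB-Edge BD groups the customer-facing S-VLAN/C-VLANs
  bridge-domain EDGE-300
   interface TenGigE0/0/0/1.300
   pbb edge i-sid 10300 core-bridge CORE-300
  !
  ! PBB-Core BD carries only B-MAC information and hosts the EVI
  bridge-domain CORE-300
   pbb core
    evpn evi 300
!
evpn
 evi 300
  bgp
   route-target import 65000:300
   route-target export 65000:300
!
router bgp 65000
 address-family l2vpn evpn
 !
 neighbor 10.255.0.1
  remote-as 65000
  update-source Loopback0
  address-family l2vpn evpn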

The service edge nodes will implement service SLA enforcement through per-subscriber QoS policies and any
required ACLs, and learn the MAC addresses in the service. The FAN, if utilized, will provide aggregate class
enforcement through QoS.

Wireline PBB-EVPN Service with MPLS Access

Figure 60 - Wireline PBB-EVPN Service in Cisco FMC System with MPLS Access

[Diagram: CPEs attach through FAN and CSG access networks (OSPF 0/IS-IS L2) to PAN-SEs in the IS-IS L1
aggregation domains; pseudowires from the ANs feed the PBB-EVPN service that spans the IS-IS L2 core
between the PAN-SEs, with the PANs and CN-ABRs acting as inline RRs.]

The preceding figure shows a wireline L2VPN service, such as an Ethernet Private LAN (EP-LAN) or Ethernet
Virtual Private LAN (EVP-LAN) business service enabled using an EVI with PBB configured between two PAN-
SEs across the core network. The FAN utilizes an EoMPLS PW to transport traffic from the CPE UNI to the
PAN-SE.

As described in “Hierarchical-Labeled BGP LSP Core, Aggregation, and Access,” the PAN-SEs will:
• Import the global IGW BGP communities in order to provide routing to all possible prefixes for the
service.
• In order to provide routing to all other FSE nodes, the node will announce its loopback into a global FSE
BGP community and import the loopbacks of the other nodes from the same community.

As detailed in “Transport Architecture,” the nodes with fixed access UNIs will:
• Announce their loopbacks to a global FAN BGP community.
• Transport all services to the PAN-SE via an EoMPLS PW.

The PAN-SE will group the S-VLAN and/or C-VLANs, defining the service from the spoke PW into a bridge
domain, referred to as the PBB-Edge bridge domain (BD). Through the PBB functionality in the service edge
node, this PBB-Edge BD is connected to a second PBB-Core BD, on which the EVPN service is configured
with an EVI. This step encapsulates all Customer-MAC (C-MAC) addresses in a Bridge-MAC (B-MAC) address,
and only B-MAC information is shared between service edge nodes. This same EVI is configured on all service
edge nodes participating in this PBB-EVPN. Service traffic between the service edge nodes is routed via BGP
“address-family l2vpn evpn” information.

The service edge nodes will implement service SLA enforcement through per-subscriber QoS policies and any
required ACLs, and learn the MAC addresses in the service. The FAN will provide aggregate class enforcement
through QoS.

PW Transport for X-Line Services
This section describes how a point-to-point wireline service is deployed in the Cisco FMC system. The FMC
system supports E-Line services on any MPLS-enabled AN subtending from the pre-aggregation node, such as
FANs or CSGs located in an urban area with fiber access from the business to the CSG. A CSG is used as the
AN in this example.

Figure 61 - Wireline VPWS Service between CSG and FAN across the Core Network

[Diagram: CPEs attach to a CSG and a FAN in different access networks (OSPF 0/IS-IS L2); an AToM pseudowire
runs end to end between the CSG and the FAN across the IS-IS L1 aggregation and IS-IS L2 core domains, with
the PANs and CN-ABRs acting as inline RRs. Each AN advertises its loopback in iBGP with the Local RAN, Global
RAN, and Global FAN communities; when the VPWS service is activated, the inbound filter on each AN is
automatically updated for the remote FAN.]

The preceding figure shows a wireline VPWS, such as an Ethernet Private Line (EPL) or Ethernet Virtual Private
Line (EVPL) business service, enabled using a pseudowire between CSGs in the access and a FAN in a remote
access network across the core network. The CSG and FAN enabling the VPWS learn each other’s loopbacks
via BGP labeled-unicast that is extended to the access network using the PANs as inline RRs, as described in
“Hierarchical-Labeled BGP LSP Core, Aggregation, and Access.”

The following figure shows a variation of VPWS service deployment with native Ethernet, TDM, or Microwave
access where the service utilizes a pseudowire between the PANs.

Figure 62 - Wireline VPWS Service between CSG and FAN with non-MPLS Access

[Diagram: a CPE attaches over TDM or packet microwave to a PAN-SE in one IS-IS L1 aggregation domain;
across the IS-IS L2 core, the remote PAN-SE hands the service to a FAN over an 802.1q or 802.1ad Ethernet
NNI, with the pseudowire running between the two PAN-SEs and the CN-ABRs acting as inline RRs.]

As detailed in “Transport Architecture,” the route scale in the access domains is kept to a minimum by ingress
filtering on the AN nodes. The ANs that enable wireline services advertise their loopbacks in iBGP labeled-
unicast with a common FAN community. The filtering mechanism used to maintain the low route scale in the
access domains while enabling this fixed mobile deployment is explained below.

Figure 63 - Scalability Control for Fixed Mobile Converged Deployments

[Diagram: a wireline VPWS runs over an AToM pseudowire between a CSG and a FAN in different access
networks (OSPF 0/IS-IS L2), across the IS-IS L1 aggregation and IS-IS L2 core domains, with the PANs and
CN-ABRs acting as inline RRs. Each AN advertises its loopback in iBGP with the Local RAN, Global RAN, and
Global FAN communities; when the VPWS service is activated, the inbound filter on each AN is automatically
updated for the remote FAN.]

The CSGs and FANs perform inbound filtering on a per-PAN RR neighbor basis using a route-map that:
• Accepts the FSE community.
• Accepts the loopbacks of remote destinations for which wireline services are configured on the device.
• Drops all other prefixes.

When a wireline service is activated to a new destination, the route-map used for inbound filtering of remote
destinations is updated automatically. Since adding a new wireline service on the device results in a change in
the routing policy of a BGP neighbor, the dynamic inbound soft reset function is used to initiate a non-disruptive
dynamic exchange of route refresh requests between the ANs and the PAN.
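A minimal sketch of such an inbound filter on an AN, in IOS style, is shown below; the community value, remote
loopback, neighbor address, and AS number are hypothetical:

! Hypothetical FSE community and remote FAN loopback for an active service
ip community-list standard FSE-COMM permit 65000:2001
ip prefix-list WIRELINE-PEERS seq 10 permit 10.255.4.22/32
!
route-map LU-FROM-PAN permit 10
 match community FSE-COMM
route-map LU-FROM-PAN permit 20
 match ip address prefix-list WIRELINE-PEERS
! The implicit deny drops all other labeled-unicast prefixes
!
router bgp 65000
 address-family ipv4
  neighbor 10.255.2.1 send-label
  neighbor 10.255.2.1 route-map LU-FROM-PAN in

When a new wireline service adds an entry to the prefix list, a soft inbound clear of the PAN neighbor triggers
the route refresh exchange without tearing down the BGP session.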

Tech Tip

Both BGP peers must support the route refresh capability in order to use the dynamic
inbound soft reset capability.

Mobile Service Architecture
The Cisco FMC system design provides transport for both legacy and current mobile services. To accomplish
this on a single network, MPLS service virtualization is employed: emulated circuit services via L2VPN for 2G
and 3G, and L3VPN services for IP-enabled 3G and 4G/LTE. Both service models are outlined in this section.

L3 MPLS VPN Service Model for LTE


The Cisco FMC system supports mobile SPs that are introducing 3G Universal Mobile Telecommunications
Service (UMTS)/IP and 4G LTE-based next generation mobile access in order to scale their mobile subscriber
base and optimize their network infrastructure cost for mobile broadband growth. To this end, the system
proposes a highly-scaled MPLS VPN-based service model to meet the immediate needs of the LTE S1 and X2
interfaces and accelerate LTE deployment.

Figure 64 - LTE Backhaul Service

[Diagram: eNodeBs in the mobile access network reach the MTGs in the mobile packet core through the MPLS
VPN (v4 or v6) spanning the mobile aggregation network; the S1-U interface terminates on the SGWs, the S1-C
interface on the MME, and the X2-C/X2-U interfaces run between eNodeBs.]

The Mobile RAN includes cell sites with evolved NodeBs (eNBs) that are connected either:
• directly, in a point-to-point fashion, to the PANs utilizing Ethernet fiber or microwave, or
• through CSGs connected in ring topologies by using MPLS/IP packet transport over Ethernet fiber or
microwave transmission.
The cell sites in the RAN access are aggregated into an MPLS/IP pre-aggregation/aggregation network, built
over physical hub-and-spoke or ring connectivity, that interfaces with the MPLS/IP core network hosting the
EPC gateways.

From the E-UTRAN backhaul perspective, the most important LTE/SAE reference points are the X2 and S1
interfaces. The eNodeBs are interconnected with each other via the X2 interface, and towards the EPC via the
S1 interface.
• The S1-c or S1-MME interface is the reference point for the control plane between E-UTRAN and
MME. The S1-MME interface is based on the S1 Application Protocol (S1AP) and is transported over the
Stream Control Transmission Protocol (SCTP). The EPC architecture supports MME pooling to enable
geographic redundancy, capacity increase, and load sharing. This requires the eNodeB to connect to
multiple MMEs. The L3 MPLS VPN service model defined by the Cisco FMC system allows eNodeBs in
the RAN access to be connected to multiple MMEs that may be distributed across regions of the core
network for geographic redundancy.
• The S1-u interface is the reference point between E-UTRAN and SGW for the per-bearer user plane
tunneling and inter-eNodeB path switching during handover. The application protocol used on this
interface is GPRS Tunneling Protocol (GTP) v1-U, transported over User Datagram Protocol (UDP). SGW
locations affect u-plane latency, and the best practice for LTE is to place S/PGWs in regions closer
to the aggregation networks that they serve so that the latency budget of the eNodeBs to which they
connect is not compromised. The EPC architecture supports SGW pooling to enable load balancing,
resiliency, and signaling optimization by reducing the handovers. This requires the eNodeB to connect to
multiple SGWs. The L3 MPLS VPN service model allows eNodeBs in the RAN access to be connected
to multiple SGWs, which include ones in the core close to the local aggregation network and SGWs that
are part of the pool serving neighboring core POPs.
• The X2 interface, comprising the X2-c and X2-u reference points for the control and bearer planes, provides
direct connectivity between eNodeBs. It is used to hand over user equipment from a source eNodeB
to a target eNodeB during the inter-eNodeB handover process. For the initial phase of LTE, the traffic
passed over this interface is mostly control plane related to signaling during handover. This interface also
carries bearer traffic for a short period (<100 ms) between the eNodeBs during handovers.
The stringent latency requirements of the X2 interface require that the mesh connectivity between
CSGs introduce minimal delay, on the order of 30 ms. The L3 MPLS VPN service model provides
shortest path connectivity between eNodeBs so as to not introduce unnecessary latency.
• During initial deployments in regions with low uptake and smaller subscriber scale, MME and SGW/
PGW pooling can be used to reuse mobile gateways serving neighboring core POPs. Gradually, as
capacity demands and subscriber scale increase, newer gateways can be added closer to the region.
The L3 MPLS VPN service model for LTE backhaul defined by the FMC system allows migrations to
newer gateways to take place without re-provisioning the service model or re-architecting the
underlying transport network.
• With the distribution of the new spectrum made available for 3G and 4G services, many new SPs
have entered the mobility space. These new entrants would like to monetize the spectrum they have
acquired, but lack the national infrastructure coverage owned by the incumbents. The LTE E-UTRAN-sharing
architecture allows different core network operators to connect to a shared radio access network. The
sharing of cell site infrastructure can be based on:
◦ A shared eNodeB: a shared backhaul model where different operators are presented on different
VLANs by the eNodeB to the CSG, or
◦ A different eNodeB: a shared backhaul model where the foreign operator’s eNodeB is connected
on a different interface to the CSG.

Regardless of the shared model, the Cisco FMC system provides per-mobile SP-based L3 MPLS VPNs that are
able to identify, isolate, and provide secure backhaul for different operator traffic over a single converged network.

Figure 65 - L3 MPLS VPN Service Model

[Diagram: the LTE MPLS VPN spans VRFs on CSGs in four RAN access regions (W, X, Y, Z) and VRFs on the
MTGs connecting the MME and SGW/PGW gateways in the core. Each RAN region exports its own RAN RT plus
the Common RT and imports its own RAN RT plus the MPC RT; the MTGs export the MPC RT and import the
MPC RT plus the Common RT.]

The FMC system proposes a simple and efficient L3 service model that addresses the LTE backhaul
requirements described above. The L3 service model is built over a Unified MPLS Transport with a common
highly-scaled MPLS VPN that covers LTE S1 interfaces from all CSGs across the network and an LTE X2
interface per RAN access region. A single MPLS VPN per operator is built across the network, with VRFs on the
MTGs connecting the EPC gateways (SGW, MME) in the MPC, down to the RAN access with VRFs on the CSGs
connecting the eNodeBs. Prefix filtering across the VPN is done using simple multiprotocol BGP (MP-BGP)
route target (RT) import and export statements on the CSGs and MTGs.

As denoted in Figure 65:
• A unique RT, denoted as the Common RT, is assigned to the LTE backhaul MPLS VPN. It is either imported or
exported at various locations of the VPN, depending on the role of the node implementing the VRF.
• A unique RT, denoted as the MPC RT, is assigned to the MTGs in the MPC.
• Each RAN access region in the network is assigned a unique RT. These RTs are denoted as the RAN X,
RAN Y, and RAN Z RTs.

In every RAN access region, all CSGs import the MPC RT and the RAN x RT, and export the Common RT and
the RAN x RT, where x denotes the unique RT assigned to that RAN access region. With this importing and
exporting of RTs, the route scale in the VRF of the CSGs is kept to a minimum, since VPNv4 prefixes
corresponding to CSGs in other RAN access regions (either in the local aggregation domain or in remote
aggregation domains across the core) are not learned. The CSGs have reachability to every MTG and the
corresponding EPC gateways (SGW, MME) that they connect to anywhere in the MPC. They also have shortest
path mesh connectivity among themselves for the X2 interface.

In the MPC, the MTGs import the MPC RT and the Common RT. They export only the MPC RT. With this
importing and exporting of RTs, the MTGs have connectivity to all other gateways in the MPC, as well as
connectivity to the CSGs in the RAN access regions across the entire network. The MTGs are capable of
handling large scale and learn all VPNv4 prefixes in the LTE VPN.
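A minimal sketch of the resulting VRF on a CSG, in IOS style, follows; the RD and RT values are hypothetical
placeholders for the Common RT, the MPC RT, and this region’s RAN RT:

vrf definition LTE
 rd 65000:201
 address-family ipv4
  ! Export the Common RT and this region's RAN RT
  route-target export 65000:9999
  route-target export 65000:201
  ! Import this region's RAN RT and the MPC RT
  route-target import 65000:201
  route-target import 65000:500
 exit-address-family

The MTG side mirrors this scheme, exporting only the MPC RT and importing the MPC RT plus the Common RT.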

The rapid adoption of LTE and the massive increase in subscriber growth are leading to an exponential
increase in the number of cell sites deployed in the network. This strains the pool of IP addresses that must be
assigned to the eNodeBs at the cell sites. For mobile SPs that are running out of public IPv4 addresses, or that
cannot obtain additional public IPv4 addresses from the registries for eNodeB assignment, the Cisco FMC
system enables carrying IPv6 traffic over an IPv4 Unified MPLS Transport infrastructure using 6VPE as defined
in RFC 4659. The eNodeBs and EPC gateways can be IPv6-only or dual stack-enabled to support IPv6 for the
S1 and X2 interfaces while using IPv4 for network management functions, if desired. The dual stack-enabled
eNodeBs and EPC gateways connect to CSGs and MTGs configured with a dual stack VRF carrying VPNv4 and
VPNv6 routes for the LTE MPLS VPN service. The IPv6 reachability between the eNodeBs at the cell site and the
EPC gateways in the MPC is exchanged between the CSGs and MTGs, acting as MPLS VPN PEs, using the BGP
VPNv6 address family [address family identifier (AFI)=2, subsequent address family identifier (SAFI)=128].

Figure 66 - Inter-Access X2 Connectivity, Labeled BGP Access

[Diagram: labeled BGP access example for Metro-1 with access regions 2, 3, and 4, interconnected through
AGN-ASBR and CN-ASBR inline RRs, the AGN-RR, and the CN-RR toward the MTGs (MSE BGP community
1001:1001). Unified MPLS transport: each region advertises its CSG loopbacks in iBGP labeled-unicast with the
common community 10:10 plus a per-region community (10:102, 10:103, 10:104). LTE MPLS VPN service: each
region exports its own RAN RT plus the Common RT and imports the MPC RT, its own RAN RT, and the RAN RTs
of the adjacent regions (for example, Access-3 imports the RAN-2, RAN-3, and RAN-4 RTs), carrying S1 traffic
to the core and inter-access X2 traffic between adjacent regions.]

Figure 67 - Inter-Access X2 Connectivity, IGP/LDP Redistribution

[Diagram: IGP/LDP access variant of the Metro-1 example. Each PAN redistributes its RAN IGP into iBGP,
marking the common community 10:10 plus a per-region community (10:0102, 10:0103, 10:0104), and
redistributes into its RAN IGP the prefixes carrying the MTG community 1000:1000 plus the communities of the
adjacent regions (for example, RAN IGP-3 receives 1000:1000, 10:0102, and 10:0104). The CN-ABR inline RRs
apply selective next-hop-self in RPL, setting next-hop-self only for prefixes whose communities do not match
10:01(*). The VRF RT import/export scheme matches the labeled BGP case, carrying S1 traffic to the core and
inter-access X2 traffic between adjacent regions.]

In some cases, depending on the spread of the macro cell footprint, it might be desirable to provide X2
interfaces between CSGs located in neighboring RAN access regions. This connectivity can easily be
accomplished using the BGP community-based coloring of prefixes used in the Unified MPLS Transport.
• As described in “Transport Architecture,” the CSG loopbacks are colored in BGP labeled-unicast with
a common BGP community that represents the RAN community and a BGP community that is unique
to that RAN access region. This tagging can be done when the CSGs advertise their loopbacks in iBGP
labeled-unicast, as shown in Figure 66, if labeled BGP is extended to the access, or at the PANs when
redistributing from the RAN IGP into iBGP if IGP/LDP is used in the RAN access with the redistribution
approach.
• The adjacent RAN access domain CSG loopbacks can be identified at the PAN based on the unique
RAN access region BGP community and be selectively propagated into the access based on egress
filtering, as shown in Figure 66, if labeled BGP is extended to the access, or be selectively redistributed
into the RAN IGP if IGP/LDP is used in the RAN access with the redistribution approach.

It is important to note that X2 interfaces are based on eNodeB proximity and therefore a given RAN access
domain only requires connectivity to the ones immediately adjacent. This filtering approach allows for
hierarchical-labeled BGP LSPs to be set up across neighboring access regions while preserving the low route
scale in the access. At the service level, any CSG in a RAN access domain that needs to establish inter-access
X2 connectivity will import its neighboring CSG access region RT in addition to its own RT in the LTE MPLS VPN.

The CN-ABR inline RR applies the selective NHS function using route policy in the egress direction towards its
local PAN neighbor group in order to provide shortest-path connectivity for the X2 interface between CSGs
across neighboring RAN access regions. The routing policy language (RPL) logic changes the next-hop towards
the PANs for only those prefixes that do not match the local RAN access regions, based on a simple regular
expression matching BGP communities. This allows the CN-ABR to change the BGP next-hop and insert itself in
the data path for all prefixes that originate in the core corresponding to the S1 interface, while keeping the
next-hop set by the PANs unchanged for all prefixes from local RAN regions. With this, the inter-access X2
traffic flows across adjacent access regions along the shortest path interconnecting the two PANs, without
having to loop through the inline-RR CN-ABR node.
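A minimal sketch of such a selective NHS policy on the CN-ABR, in IOS XR RPL style, is shown below; the
community scheme follows the hypothetical values of Figure 67, where 10:010x identifies RAN access region x
and regions 2 through 4 are local to this metro:

! Communities of the RAN access regions local to this CN-ABR
community-set LOCAL-RAN-REGIONS
  ios-regex '^10:010[234]$'
end-set
!
route-policy SELECTIVE-NHS
  ! Keep the PAN-assigned next hop for local RAN prefixes so that
  ! inter-access X2 traffic follows the shortest inter-PAN path
  if community matches-any LOCAL-RAN-REGIONS then
    pass
  else
    ! Insert the CN-ABR into the data path for core-originated (S1) prefixes
    set next-hop self
  endif
end-policy
!
router bgp 65000
 neighbor-group PAN-CLIENTS
  address-family ipv4 labeled-unicast
   route-policy SELECTIVE-NHS out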

Multicast Service Model for LTE eMBMS


The Cisco FMC system architecture includes support for transport of enhanced Multimedia Broadcast Multicast
Service (eMBMS). The 3rd Generation Partnership Project (3GPP) has standardized eMBMS services in the
LTE releases as a mechanism for effectively delivering the same content to a number of end users, such
as broadcast video or file push. Content delivered via eMBMS services uses a multicast-based transport
mechanism, minimizing packet duplication within the transport network.

An overview of eMBMS service implementation is illustrated in the following figure.

Figure 68 - Overview of eMBMS Service Implementation in LTE

[Diagram: the UE attaches to the eNB; the M1 user-plane interface runs from the MBMS-GW to the eNB, the
M3 interface from the MME to the eNB, the Sm interface between the MME and the MBMS-GW, and the
SGmb/SGi-mb interfaces between the BM-SC and the MBMS-GW.]

The following interfaces, which are within the scope of the Cisco FMC system design, are involved in eMBMS
service delivery:
• M3 interface—A unicast interface between the MME and MCE (assumed to be integrated into the eNB
for the sake of Cisco FMC), which primarily carries Multimedia Broadcast Multicast Service (MBMS)
session management signaling.
• M1 interface—A downstream user-plane interface between the MBMS Gateway (MBMS-GW) and the eNB,
which delivers content to the user endpoint. IP Multicast is used to transport the M1 interface traffic.

In the context of the Cisco FMC system design, transport of the eMBMS interfaces is conducted based on the
interface type. This is illustrated in the following figure:

Figure 69 - eMBMS Service Interfaces in Cisco FMC System Design

[Diagram: the CSG carries the unicast S1-C, S1-U, X2, and M3 interfaces from the eNB inside the MPLS VPN
(v4/v6) to VRFs on the MTGs (MTG-1 and MTG-2 with VRRP, and MTG-3) fronting the SGW and MME; the M1
interface from the MBMS-GW is carried in the global table outside the VPN, while the Sm and S11 interfaces
remain within the core.]

• The M3 interface is transported within the same L3 MPLS VPN as other unicast traffic, namely the S1
and X2 interfaces. Since both the S1 and M3 interfaces are between the eNB and the MME, it makes
logical sense to carry both in the same VPN.
• The M1 interface transport is handled via IP Multicast. This transport is conducted outside the L3 MPLS
VPN, which carries all the unicast interfaces. Since only M1 interface traffic is transported from the
MBMS-GW to the eNB, having a separate transport mechanism is acceptable.

The multicast mechanism utilized for transporting the M1 interface traffic depends upon the location in the
network:
• From the MTG attached to the MBMS-GW, through the Core and Aggregation domains to the AGN node,
Label-Switched Multicast (LSM) is utilized to transport the M1 interface traffic. This provides efficient and
resilient transport of the multicast traffic within these regions.
• From the PAN to the CSG, Native IP Multicast is utilized to transport the M1 interface traffic. This
provides efficient and resilient transport of the multicast traffic while utilizing the lowest amount of
resources on these smaller nodes.

On the UNI from the CSG to the eNB, two VLANs are utilized to deliver the various interfaces to the eNB. One
VLAN handles unicast interface (S1, X2, M3) delivery, while the other handles M1 multicast traffic delivery.
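A minimal sketch of this dual-VLAN UNI on the CSG, in IOS style, follows; the VLAN IDs, VRF name, and
addresses are hypothetical:

! VLAN 10: unicast S1, X2, and M3 interfaces inside the LTE L3VPN
interface GigabitEthernet0/1.10
 encapsulation dot1Q 10
 vrf forwarding LTE
 ip address 10.10.10.1 255.255.255.252
!
! VLAN 20: M1 eMBMS multicast delivery in the global table
interface GigabitEthernet0/1.20
 encapsulation dot1Q 20
 ip address 10.10.20.1 255.255.255.252
 ip pim sparse-mode
 ip igmp version 3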

L2 MPLS VPN Service Model for 2G and 3G


The Cisco FMC system architecture allows mobile service providers (MSPs) with TDM-based 2G GSM and
ATM-based 3G UMTS infrastructures to remove, reduce, or cap investments in SONET/SDH and ATM transport
infrastructure by using MPLS-based CEoP services.
• For the MSPs that want to reduce SONET/SDH infrastructure used for GSM, the FMC System enables
PWE3-based transport of emulated TDM circuits. Structured circuit emulation is achieved with CESoPSN
and unstructured emulation is achieved with SAToP. E1/T1 circuits from BTS equipment connected to
the CSG or to the PAN are transported to MTG, where they are bundled into channelized Synchronous
Transport Module level-1 (STM1) or Optical Carrier 3 (OC-3) interfaces for handoff to the BSC.
Synchronization is derived from the BSC via TDM links, or from a Primary Reference Clock (PRC), and
transported across the core, aggregation, and access domains via SyncE, or via 1588 across domains
where SyncE is not supported.
• For the MSPs that want to reduce their ATM infrastructure used for ATM-based UMTS, the Cisco FMC
system enables PWE3-based transport of ATM virtual circuit (VC) (AAL0 or AAL5) or virtual path (VP)
(AAL0) circuits. ATM E1/T1 or inverse multiplexing over ATM (IMA) interfaces from NodeB equipment
connected to the CSG or PAN are transported to the MTG, where they are bundled into STM1 ATM
interfaces for handoff to the Radio Network Controller (RNC). Cell packing may be used to optimize the
bandwidth used for this transport. Synchronization is derived from the RNC via ATM links or from a PRC
and is then transported across the core, aggregation, and access domains via SyncE, or via 1588 across
domains where SyncE is not supported.

Figure 70 - ATM/TDM Transport Services

[Diagram: TDM BTS and ATM NodeB equipment attach to CSGs and PANs in the mobile access network; AToM
pseudowires carry the emulated TDM and ATM circuits across the mobile aggregation network to the MTGs,
which hand off to the BSC and RNC in the mobile packet core.]

Typical GSM (2G) deployments will include cell sites that do not require a full E1/T1. In such cell sites, a
fractional E1/T1 is used. The operator can deploy these cell sites in a daisy-chain fashion (for example, down a
highway) or aggregate them at the BSC location. To save in the CAPEX investment on the number of
channelized STM-1/OC-3 ports required on the BSC, the operator utilizes a digital cross-connect to merge
multiple fractional E1/T1 links into a full E1/T1. This reduces the number of T1/E1s needed on the BSC, which
results in fewer channelized STM-1/OC-3 ports being needed. Deploying CESoPSN PWs from the CSG to the
RAN distribution node supports these fractional T1/E1s and their aggregation at the BSC site. In this type of
deployment, the default alarm behavior of CESoPSN needs to be changed. Typically, if a T1/E1 on the AN goes
down, the PW forwards the alarm indication signal (AIS) through the PW to the distribution node, which then
propagates the AIS to the BSC by taking the T1/E1 down. In this multiplexed scenario, time slot (TS) alarming
must be enabled on a CESoPSN PW to propagate the AIS only on the affected time slots, thus not affecting the
other time slots (for example, other cell sites) on the same T1/E1.
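A minimal sketch of a fractional-E1 CESoPSN PW on a CSG, in IOS style, is shown below; the controller,
timeslots, peer address, and VC ID are hypothetical:

controller E1 0/0
 ! Fractional E1: only timeslots 1-8 serve this cell site
 cem-group 0 timeslots 1-8
!
interface CEM0/0
 no ip address
 cem 0
  ! PW to the RAN distribution node that grooms the fractional E1s toward the BSC
  xconnect 10.255.3.1 500 encapsulation mpls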

The same BGP-based control plane and label distribution implemented for the L3VPN services is also used for
circuit emulation services. For hub-and-spoke access topologies, Bidirectional Forwarding Detection (BFD)-
protected static routes can be used to eliminate the need for an IGP at the cell site. The CSGs utilize MPLS/IP
routing in this system release when deployed in a physical ring topology. TDM and ATM PWE3 can be overlaid in
either deployment model.
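For the hub-and-spoke case, a minimal sketch of a BFD-protected static route on a CSG, in IOS style, follows;
the interface, next hop, and aggregate prefix are hypothetical:

interface GigabitEthernet0/0
 bfd interval 50 min_rx 50 multiplier 3
!
! Associate BFD with the static next hop; the routes below are withdrawn
! if the BFD session to 10.0.0.1 fails
ip route static bfd GigabitEthernet0/0 10.0.0.1
ip route 10.255.0.0 255.255.0.0 GigabitEthernet0/0 10.0.0.1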

The CSGs, PANs, AGNs, and MTGs enforce the contracted ATM CoS SLA and mark the ATM and TDM PWE3
traffic with the corresponding per-hop behavior (PHB) inside the access, aggregation, and core DiffServ
domains. The MTG enables multi-router automatic protection switching (MR-APS) or single-router automatic
protection switching (SR-APS) redundancy for the BSC or RNC interface, as well as pseudowire redundancy
and two-way pseudowire redundancy for transport protection.

Inter-Domain Hierarchical LSPs
The Cisco FMC system uses hierarchical LSPs for inter-domain transport. The hierarchical LSP is built with a
BGP-distributed label that transits the isolated MPLS domains and an intra-domain, LDP-distributed label that
reaches the labeled BGP next hop. This section describes the different hierarchical LSP structures that apply to
various transport architecture options and the corresponding service models.

Inter-Domain LSPs for Multi-Area IGP Design


This section describes inter-domain hierarchical LSPs that apply to Large Network, single-AS multi-area IGP
designs where the core and aggregation networks are part of the same AS, but segmented into isolated IGP areas.

Hierarchical LSPs between Remote PAN-SEs or AGN-SEs for Multi-Area IGP Design
This scenario applies to inter-domain LSPs between the loopback addresses of remote PAN-SE or AGN-SE
nodes, connected across the core network. It is relevant to wireline L2/L3 MPLS VPN business services deployed
between remote service edges across the core network that use the /32 loopback address of the remote PEs
as the endpoint identifier for the Targeted Label Distribution Protocol (T-LDP) or multiprotocol internal BGP
(MP-iBGP) sessions. The business wireline services are delivered to the service edge in one of three ways:
• Directly connected to the PAN.
• Transported from a FAN to the PAN or AGN service edge via native Ethernet network.
• Transported from a FAN to the PAN or AGN service edge via a PW in an MPLS access network scenario,
which is terminated via PW Headend on the SE.

The service edges are labeled BGP PEs and advertise their loopback using labeled IPv4 unicast address family
(AFI/SAFI=1/4).

Figure 71 - Hierarchical LSPs between Remote PANs for Multi-Area IGP Design

[Diagram: control plane: each PAN-SE advertises its loopback in iBGP IPv4+label through the CN-ABR inline
RRs and the CN-RR, with next-hop-self (NHS) applied at each CN-ABR and implicit-null advertised by the
destination PE. Forwarding plane: the LDP label is pushed, swapped, and popped within each of the aggregation
and core IGP domains, while the BGP label of the iBGP hierarchical LSP is pushed at the ingress PAN-SE,
swapped at the CN-ABRs, and popped toward the egress PAN-SE.]

The remote service edges learn each other’s loopbacks through BGP-labeled unicast. For traffic flowing
between the two service edges as shown in the previous figure, the following sequence occurs:
1. The downstream service edge pushes the BGP label corresponding to the remote prefix and then
pushes the LDP label that is used to reach the local core ABR (CN-ABR) that is the labeled BGP next
hop.
2. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
penultimate hop popping (PHP) before handing off to the local CN-ABR.
3. The local CN-ABR will swap the BGP-based inter-domain LSP label and push the LDP label used to
reach the remote CN-ABR that is the labeled BGP next hop.
4. The core nodes that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing off to the remote CN-ABR.
5. Since the remote CN-ABR has reachability to the destination service edge via IGP, it will swap the BGP
label with an LDP label corresponding to the upstream service edge intra-domain LDP LSP.

Hierarchical LSPs between CSG and MTG for Multi-Area IGP Design with Labeled BGP Access
The inter-domain hierarchical LSP described here applies to the Option-1: Multi-Area IGP Design with Labeled
BGP Access transport model described in “Large Network, Multi-Area IGP Design with IP/MPLS Access.” This
scenario applies to inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs in
the core network. It is relevant to 4G LTE and 3G UMTS/IP services deployed using MPLS L3 VPNs or 2G GSM
and 3G UMTS/ATM services deployed using MPLS L2 VPNs that use the /32 loopback address of the remote
PEs as the endpoint identifier for the T-LDP or MP-iBGP sessions. The MTGs and CSGs are labeled BGP PEs and
advertise their loopback using labeled IPv4 unicast address family (AFI/SAFI=1/4).

This scenario is also applicable to point-to-point VPWS services between CSGs and/or FANs in different labeled
BGP access areas. In this scenario, the /32 loopback address of the remote AN is added to the inbound prefix
filter list at the time of service configuration on the local AN, as described in “PW Transport for X-Line Services.”
For this scenario, the stacked label scenario illustrated is the same as that illustrated in Figure 71, with the access
network illustrated in Figure 72 tacked onto either end.

Figure 72 - Hierarchical LSPs between CSGs and MTGs for Multi-Area IGP Design with Labeled BGP Access

[Diagram: the same hierarchical LSP structure extended into the RAN, shown in both directions. Control plane:
iBGP IPv4+label runs from the CSG through the PAN and CN-ABR inline RRs (each applying next-hop-self) to
the MTG. Forwarding plane: the LDP label is pushed, swapped, and popped within the RAN, aggregation, and
core IGP domains, while the BGP label is pushed at the ingress PE (CSG or MTG), swapped at the PAN and
CN-ABRs, and popped toward the egress PE.]

The CSG in the RAN access learns the loopback address of the MTG through BGP-labeled unicast. For traffic
flowing between the CSG in the RAN and the MTG in the MPC, as shown in the previous figure, the following
sequence occurs:
1. The downstream CSG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the PAN that is the labeled BGP next hop.
2. The CSGs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing to the PAN.
3. The PAN will swap the BGP label corresponding to the remote prefix and then push the LDP label used
to reach the CN-ABR that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the local CN-ABR.
5. Since the local CN-ABR has reachability to the MTG via the core IGP, it will swap the BGP label with an
LDP label corresponding to the upstream MTG intra-domain core LDP LSP.

The MTG in the MPC learns the loopback address of the remote RAN CSG through BGP-labeled unicast. For
traffic flowing between the MTG and the CSG in the RAN as shown in Figure 72, the following sequence occurs:
1. The downstream MTG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the CN-ABR that is the labeled BGP next hop.

2. The core nodes that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing to the CN-ABR.
3. The CN-ABR will swap the BGP label corresponding to the remote prefix and then push the LDP label
used to reach the PAN that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the PAN.
5. Since the PAN has reachability to the CSG via the RAN IGP process, it will swap the BGP label with an
LDP label corresponding to the upstream CSG intra-domain RAN LDP LSP.

Hierarchical LSPs between CSG and MTG for Multi-Area IGP Design with IGP/LDP Access
The inter-domain hierarchical LSP described here applies to the Option-2: Multi-Area IGP Design with IGP/
LDP Access transport model described in “Large Network, Multi-Area IGP Design with IP/MPLS Access.” This
scenario applies to inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs in
the core network. It is relevant to 4G LTE and 3G UMTS/IP services deployed using MPLS L3 VPNs or 2G GSM
and 3G UMTS/ATM services deployed using MPLS L2VPNs that use the /32 loopback address of the remote
PEs as the endpoint identifier for the T-LDP or MP-iBGP sessions. The MTGs are labeled BGP PEs and advertise
their loopback using labeled IPv4 unicast address family (AFI/SAFI=1/4). The CSGs do not run labeled BGP, but
have connectivity to the MPC via the redistribution between RAN IGP and BGP-labeled unicast done at the local
PANs, which are the labeled BGP PEs.

Figure 73 - Hierarchical LSPs between CSGs and MTGs for Multi-Area IGP Design with IGP/LDP Access

[Diagram: as in Figure 72, but the CSGs do not run labeled BGP; the PAN performs mutual redistribution
between the RAN IGP and iBGP labeled-unicast. Toward the core, the CSG pushes only the LDP label of the
RAN LSP; the iBGP hierarchical LSP begins at the PAN. Toward the RAN, the PAN swaps the BGP label for the
LDP label of the RAN intra-domain LSP.]

The CSG in the RAN access learns the loopback address of the MTG through the BGP-labeled unicast to RAN
IGP redistribution done at the PAN. For traffic flowing between the CSG in the RAN and the MTG in the MPC, as
shown in the previous figure, the following sequence occurs:
1. The downstream CSG will push the LDP label used to reach the PAN that redistributed the labeled BGP
prefix into the RAN IGP.
2. The CSGs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label towards the
PAN.
3. The PAN will first swap the LDP label with the BGP label corresponding to the remote prefix and then
push the LDP label used to reach the local CN-ABR that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the local CN-ABR.
5. Since the local CN-ABR has reachability to the MTG via the core IGP, it will swap the BGP label with an
LDP label corresponding to the upstream MTG intra-domain core LDP LSP.

The MTG in the MPC learns the loopback address of the remote RAN CSG through BGP-labeled unicast. For
traffic flowing between the MTG and the CSG in the RAN as shown in Figure 73, the following sequence occurs:
1. The downstream MTG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the CN-ABR that is the labeled BGP next hop.
2. The core nodes that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing to the CN-ABR.
3. The CN-ABR will swap the BGP label corresponding to the remote prefix and then push the LDP label
used to reach the PAN that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the PAN connecting the RAN.
5. The PAN will swap the locally-assigned BGP label and forward to the upstream CSG using the local RAN
intra-domain LDP-based LSP label.

Inter-Domain LSPs for Inter-AS Design


This section describes inter-domain hierarchical LSPs that apply to inter-AS designs where the core and
aggregation networks are segmented into different ASs.

Hierarchical LSPs between Remote PAN-SEs or AGN-SEs for Inter-AS Design


This scenario applies to inter-domain LSPs between the loopback addresses of remote PAN or AGN service
edge nodes, connected across the core network. It is relevant to wireline L2/L3 MPLS VPN business services
deployed between remote service edges across the core network that use the /32 loopback address of the
remote PEs as the endpoint identifier for the T-LDP or MP-iBGP sessions. The business wireline services are
delivered to the service edge in one of three ways:
• Directly connected to the PAN.
• Transported from a FAN to the PAN or AGN service edge via native Ethernet network.
• Transported from a FAN to the PAN or AGN service edge via a PW in an MPLS access network scenario,
which is terminated via PW Headend on the SE.

The service edge nodes are labeled BGP PEs and advertise their loopbacks using the labeled IPv4 unicast
address family (AFI/SAFI=1/4).

Figure 74 - Hierarchical LSPs between Remote Service Edges for Inter-AS Design

[Diagram: control plane: iBGP IPv4+label inside each AS (aggregation AS-1, core AS-2, aggregation AS-3), with
next-hop-self applied at the AGN-ASBRs and CN-ASBRs and the LSP stitched across the AS boundaries by
eBGP IPv4+label. Forwarding plane: the LDP label is pushed, swapped, and popped within each IGP domain,
while the BGP label is pushed at the ingress PAN-SE, swapped at each ASBR, and popped toward the egress
PAN-SE.]

The remote service edges learn each other’s loopbacks through BGP-labeled unicast. iBGP-labeled unicast is
used to build the inter-domain hierarchical LSP inside each AS, and eBGP-labeled unicast is used to extend the
LSP across the AS boundary. For traffic flowing between the two service edges as shown in the previous figure,
the following sequence occurs:
1. The downstream service edge pushes the iBGP label corresponding to the remote prefix and then
pushes the LDP label that is used to reach the local AGN-ASBR that is the labeled BGP next hop.
2. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing to the local AGN-ASBR.
3. The local AGN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by
the neighboring CN-ASBR.
4. The CN-ASBR will swap the eBGP label with the iBGP inter-domain LSP label and then push the LDP
label that is used to reach the remote CN-ASBR that is the labeled BGP next hop.
5. The core nodes that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing off to the remote CN-ASBR.
6. The remote CN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by
the neighboring aggregation domain AGN-ASBR.
7. Since the remote AGN-ASBR has reachability to the destination service edge via IGP, it will swap the
eBGP label with an LDP label corresponding to the upstream service edge intra-domain LDP LSP.

Hierarchical LSPs between CSG and MTG for Inter-AS Design with Labeled BGP Access
The inter-domain hierarchical LSP described here applies to the Option-1: Inter-AS Design with Labeled BGP
Access transport model described in “Large Network, Inter-AS Design with IP/MPLS Access.” This scenario
applies to inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs in the core
network. It is relevant to 4G LTE and 3G UMTS/IP services deployed using MPLS L3 VPNs or 2G GSM and 3G
UMTS/ATM services deployed using MPLS L2 VPNs that use the /32 loopback address of the remote PEs as the
endpoint identifier for the T-LDP or MP-iBGP sessions. The MTGs and CSGs are labeled BGP PEs and advertise
their loopback using labeled IPv4 unicast address family (AFI/SAFI=1/4).

This scenario is also applicable to point-to-point VPWS services between CSGs and/or FANs in different labeled
BGP access areas. In this scenario, the /32 loopback address of the remote AN is added to the inbound prefix
filter list at the time of service configuration on the local AN, as described in “PW Transport for X-Line Services.”
For this scenario, the stacked label scenario illustrated is the same as that illustrated in Figure 74, with the access
network illustrated in Figure 75 stacked onto either end.

Figure 75 - Hierarchical LSPs between CSGs and MTGs for Inter-AS Design with Labeled BGP Access

[Diagram: the inter-AS hierarchical LSP extended into a labeled BGP RAN, shown in both directions. iBGP
IPv4+label runs from the CSG through the PAN and AGN-ASBR in AS-1, is stitched by eBGP IPv4+label at the
AS boundary to the CN-ASBR in AS-2, and continues in iBGP to the MTG. The LDP label is pushed, swapped,
and popped within the RAN, aggregation, and core IGP domains, while the BGP label is pushed at the ingress
PE (CSG or MTG), swapped at the PAN and ASBRs, and popped toward the egress PE.]

The CSG in the RAN access learns the loopback address of the MTG through BGP-labeled unicast. For traffic
flowing between the CSG in the RAN and the MTG in the MPC, as shown in the previous figure, the following
sequence occurs:
1. The downstream CSG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the PAN that is the labeled BGP next hop.
2. The CSGs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing to the PAN.
3. The PAN will swap the BGP label corresponding to the remote prefix and then push the LDP label used
to reach the AGN-ASBR that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the local AGN-ASBR.
5. The local AGN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by
the neighboring CN-ASBR.
6. Since the CN-ASBR has reachability to the MTG via the core IGP, it will swap the eBGP label with an LDP
label corresponding to the upstream MTG intra-domain core LDP LSP.
The MTG in the MPC learns the loopback address of the remote RAN CSG through BGP-labeled unicast. For
traffic flowing between the MTG and the CSG in the RAN as shown in Figure 75, the following sequence occurs:
1. The downstream MTG node will first push the iBGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the CN-ASBR that is the labeled BGP next hop.
2. The core nodes that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing to the CN-ASBR.
3. The CN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by the
neighboring aggregation domain AGN-ASBR.
4. The AGN-ASBR will swap the eBGP label with the iBGP inter-domain LSP label corresponding to the
remote prefix and then push the LDP label that is used to reach the PAN that is the labeled BGP next
hop.
5. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label performing
a PHP before handing off to the PAN.
6. Since the PAN has reachability to the CSG via the RAN IGP area/level, it will swap the BGP label with an
LDP label corresponding to the upstream CSG intra-domain RAN LDP LSP.
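
The labeled BGP behavior in these sequences maps to a small set of configuration constructs. The following is a minimal sketch, in Cisco IOS XR-style syntax with a hypothetical AS number and addresses, of how a CSG might advertise its loopback via the labeled IPv4 unicast address family (RFC 3107) toward its PAN inline RR; actual platform syntax varies.

  router bgp 100
   address-family ipv4 unicast
    ! Advertise the CSG loopback and allocate a local label for it
    network 10.1.1.1/32
    allocate-label all
   !
   neighbor 10.1.0.1
    ! PAN acting as inline RR for this RAN access domain
    remote-as 100
    update-source Loopback0
    address-family ipv4 labeled-unicast
    !
   !

With this in place, the CSG loopback is reflected up the RR hierarchy with a label binding at every next-hop-self point, producing the stacked LSP shown in Figure 75.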

Hierarchical LSPs between CSG and MTG for Inter-AS Design with IGP/LDP Access
The inter-domain hierarchical LSP described here applies to the Option-2: Inter-AS Design with IGP/LDP Access
transport model described in “Large Network, Inter-AS Design with IP/MPLS Access.” This scenario applies to
inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs in the core network. It
is relevant to 4G LTE and 3G UMTS/IP services deployed using MPLS L3 VPNs or 2G GSM and 3G UMTS/ATM
services deployed using MPLS L2 VPNs that use the /32 loopback address of the remote PEs as the endpoint
identifier for the T-LDP or MP-iBGP sessions. The MTGs are labeled BGP PEs and advertise their loopback using
labeled IPv4 unicast address family (AFI/SAFI=1/4). The CSGs do not run labeled BGP, but have connectivity to
the MPC via the redistribution between RAN IGP and BGP-labeled unicast done at the local PANs, which are the
labeled BGP PEs.

Figure 76 - Hierarchical LSPs between CSGs and MTGs for Inter-AS Design with IGP/LDP Access

[Figure content: same structure as Figure 75, but without labeled BGP on the CSGs. The PAN performs RAN IGP <> iBGP redistribution; iBGP IPv4+label sessions with NHS run from the PAN through the AGN-ASBR, with eBGP IPv4+label between the AS-1 and AS-2 ASBRs. In forwarding, a plain LDP LSP covers the RAN segment, while LDP LSPs stitched by iBGP hierarchical LSPs and an eBGP LSP cover the aggregation and core domains.]

The CSG in the RAN access learns the loopback address of the MTG through the redistribution of BGP-labeled
unicast into the RAN IGP done at the local PAN. For traffic flowing between the CSG in the RAN and the MTG in
the MPC, as shown in the previous figure, the following sequence occurs:
1. The downstream CSG will push the LDP label used to reach the PAN that redistributed the labeled iBGP
prefix into the RAN IGP.
2. The CSGs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label towards the
PAN.
3. The PAN will first swap the LDP label with the iBGP label corresponding to the remote prefix and then
push the LDP label used to reach the AGN-ASBR that is the labeled BGP next hop.
4. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the local AGN-ASBR.
5. The local AGN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by
the neighboring CN-ASBR.
6. Since the CN-ASBR has reachability to the MTG via the core IGP, it will swap the eBGP label with an LDP
label corresponding to the upstream MTG intra-domain core LDP LSP.

The MTG in the MPC learns the loopback address of the remote RAN CSG through BGP-labeled unicast. For
traffic flowing between the MTG and the CSG in the RAN as shown in Figure 76, the following sequence occurs:
1. The downstream MTG node will first push the iBGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the CN-ASBR that is the labeled BGP next hop.
2. The core nodes that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing to the CN-ASBR.
3. The CN-ASBR will swap the iBGP-based inter-domain LSP label with the eBGP label assigned by the
neighboring aggregation domain AGN-ASBR.
4. The AGN-ASBR will swap the eBGP label with the iBGP inter-domain LSP label corresponding to the
remote prefix and then push the LDP label that is used to reach the PAN that is the labeled BGP next
hop.
5. The AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing off to the PAN connecting the RAN.
6. The PAN will swap the locally-assigned BGP label and forward to the upstream CSG using the local RAN
intra-domain LDP-based LSP label.
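
The redistribution behavior described in this sequence is configured only on the PANs. The following is a minimal IOS XR-style sketch, assuming an IS-IS process named RAN, AS 100, and hypothetical route-policy names that would constrain which prefixes leak in each direction:

  router isis RAN
   address-family ipv4 unicast
    ! Leak MTG loopbacks learned via labeled BGP into the RAN IGP
    redistribute bgp 100 route-policy MTG-LOOPBACKS
   !
  !
  router bgp 100
   address-family ipv4 unicast
    ! Advertise CSG loopbacks from the RAN IGP into labeled BGP
    redistribute isis RAN route-policy CSG-LOOPBACKS
    allocate-label all
   !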

Inter-Domain LSPs for Integrated Core and Aggregation Design


This section describes inter-domain hierarchical LSPs that apply to Small Network, Integrated Core and
Aggregation designs where core and aggregation networks are integrated into a single IGP/LDP domain. The
AGNs have subtending access networks that are MPLS-enabled and part of the same AS.

Hierarchical LSPs between CSG and MTG for Integrated Core and Aggregation Design
This scenario applies to inter-domain LSPs between the loopback addresses of CSGs in the RAN and the MTGs
in the integrated core and aggregation network. It is relevant to 4G LTE and 3G UMTS/IP services deployed
using MPLS L3 VPNs or 2G GSM and 3G UMTS/ATM services deployed using MPLS L2 VPNs that use the /32
loopback address of the remote PEs as the endpoint identifier for the T-LDP or MP-iBGP sessions. The MTGs
and CSGs are labeled BGP PEs and advertise their loopback using labeled IPv4 unicast address family (AFI/
SAFI=1/4).

This scenario is also applicable to point-to-point VPWS services between CSGs and/or FANs in different labeled
BGP access areas. In this scenario, the /32 loopback address of the remote AN is added to the inbound prefix
filter list at the time of service configuration on the local AN, as described in “PW Transport for X-Line Services.”

Figure 77 - Hierarchical LSPs between CSGs and MTGs for Integrated Core and Aggregation Design

[Figure content: control and forwarding planes for the integrated core and aggregation design. iBGP IPv4+label sessions run from the CSG to the AGN inline RR, to the AGN-RR, and to the MTG, with NHS applied at the AGNs on either side of the integrated domain. In forwarding, each direction uses an LDP LSP in the RAN, an iBGP hierarchical LSP across the integrated aggregation and core domain, and an LDP LSP at the far end, with the BGP label pushed, swapped at each NHS node, and popped.]

The CSG in the RAN access learns the loopback address of the MTG through BGP-labeled unicast. For traffic
flowing between the CSG in the RAN and the MTG in the MPC, as shown in the previous figure, the following
sequence occurs:
1. The downstream CSG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the AGN that is the labeled BGP next hop.
2. The CSGs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label, performing
a PHP before handing to the AGN.
3. Since the AGN has reachability to the MTG via the aggregation IGP, it will swap the BGP label with an
LDP label corresponding to the upstream MTG intra-domain aggregation LDP LSP.

The MTG in the MPC learns the loopback address of the remote RAN CSG through BGP-labeled unicast. For
traffic flowing between the MTG and the CSG in the RAN as shown in Figure 77, the following sequence occurs:
1. The downstream MTG node will first push the BGP label corresponding to the remote prefix and then
push the LDP label that is used to reach the AGN that is the labeled BGP next hop.
2. The CNs and AGNs that transit the inter-domain LSP will swap the intra-domain LDP-based LSP label,
performing a PHP before handing to the AGN connecting the RAN access.
3. Since the AGN has reachability to the CSG via the RAN IGP area-x/level-1, it will swap the BGP label
with an LDP label corresponding to the upstream CSG intra-domain RAN LDP LSP.

Transport and Service Control Plane
The Cisco FMC system proposes a hierarchical RR design for setting up the Unified MPLS Transport BGP control
plane. This hierarchical RR design is also utilized for LTE transport MPLS VPN services, business wireline L3VPN
services between service edge nodes, and L3VPNs providing residential wireline service transport between
BNGs and sources. The hierarchical RR approach is used to reduce the number of iBGP peering sessions on
the RRs across different domains of the FMC network. The following sections describe the BGP control plane
aspects for network designs based on Multi-Area IGP and Inter-AS organizations.

BGP Control Plane for Multi-Area IGP Design


This section details the hierarchical RR design enabling the BGP control plane for the Unified MPLS Transport in
a single AS, multi-area IGP-based network. It also illustrates the BGP control plane for an LTE transport MPLS
VPN as a service example. The need for standalone RRs in the lower layers of the network to support this
hierarchy is eliminated by making use of the inline-RR functionality on the PANs and CN-ABRs. At the top level
of the hierarchy, the CN-ABRs can either peer in a full mesh configuration across the core or peer as RR clients
to a centralized external CN-RR. In deployments with only a few POPs, a full-mesh configuration between the
CN-ABRs would suffice. In larger deployments, an external CN-RR to support the top level of the hierarchy helps
simplify provisioning as well as prefix filtering across various POPs. An external CN-RR based deployment is
shown in the following figure.

Figure 78 - BGP Control Plane for Multi-Area IGP Design with Labeled BGP Access

[Figure content: top half shows the Unified MPLS Transport control plane, with PANs and CN-ABRs as inline RRs (NHS) and an external CN-RR at the top of the hierarchy; iBGP IPv4+label sessions connect CSG/FAN PEs, PANs, CN-ABRs, and service edge nodes (BNG, MSE). Bottom half shows an IP RAN VPNv4 service example, with iBGP VPNv4 sessions from CSG PEs through the PAN and CN-ABR inline RRs and the external RR to the MTG (EPC GW) PEs, across the access (fiber or uWave links/rings), aggregation (DWDM, fiber rings, H&S, hierarchical topology), and core (DWDM, fiber rings, mesh topology) networks.]

BGP Control Plane for Unified MPLS Transport
For the Unified MPLS Transport layer, the PANs are inline-RRs to the CSG and FAN clients for the MP-iBGP
IPv4-labeled unicast address-family:
• They form iBGP session neighbor groups with the CSG and FAN RR-clients that are the labeled BGP PEs
implementing the inter-domain iBGP hierarchical LSPs in the local access network.
• They also form iBGP session neighbor groups towards the local CN-ABR inline RRs.
• The PANs reflect the labeled BGP prefixes with the next-hop changed to self in order to insert
themselves into the data path to enable the inter-domain LSP across the access and aggregation
domains.

The CN-ABRs are inline-RRs to the PAN clients for the MP-iBGP IPv4 labeled unicast address-family and form
the next level of the RR hierarchy:
• They form iBGP session neighbor groups with the PAN RR-clients that are the labeled BGP PEs
implementing the inter-domain iBGP hierarchical LSPs in the local aggregation network.
• They either form neighbor groups towards other non-client ABRs in the core if a full-mesh configuration
is used or form neighbor groups towards higher level CN-RRs in the core network at the top level of the
hierarchy as shown in Figure 78.
• If the full mesh option is used, the CN-ABRs also act as RRs serving the closest MTG RR clients in the
core network that are labeled BGP PEs implementing the inter-domain iBGP hierarchical LSPs.
• The CN-ABRs reflect the labeled BGP prefixes with the next-hop changed to self in order to insert
themselves into the data path to enable the inter-domain LSP across the aggregation and core domains.
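
The inline-RR pattern described in these bullets can be sketched in Cisco IOS XR-style syntax as follows; the AS number, addresses, and the assumption that both the clients and the higher-level RR use the labeled IPv4 unicast address family are illustrative only.

  router bgp 100
   neighbor 10.0.2.1
    ! PAN RR client in the subtended aggregation domain
    remote-as 100
    update-source Loopback0
    address-family ipv4 labeled-unicast
     route-reflector-client
     ! Insert this CN-ABR into the data path for the inter-domain LSP
     next-hop-self
    !
   !
   neighbor 10.0.3.1
    ! Higher-level CN-RR (or a non-client CN-ABR in a full-mesh design)
    remote-as 100
    update-source Loopback0
    address-family ipv4 labeled-unicast
     next-hop-self
    !
   !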

BGP Control Plane for LTE MPLS VPN Service


For the LTE MPLS VPN service, the PANs are inline-RRs to the CSG clients for the MP-iBGP VPNv4 and VPNv6
address-family:
• They form iBGP session neighbor groups towards the local RAN access network to serve the CSG
RR-clients that are the PEs implementing the LTE MPLS VPN.
• They also form iBGP session neighbor groups towards the local CN-ABR inline RRs.

The CN-ABRs are inline RRs for the MP-iBGP VPNv4 and VPNv6 address-family and form the next level of the
RR hierarchy:
• They form iBGP session neighbor groups towards the local aggregation network to serve the PAN RR
clients.
• They either form neighbor groups towards other non-client CN-ABRs in the core if a full-mesh
configuration is used, or form neighbor groups towards higher level CN-RRs in the core network at the
top level of the hierarchy as shown in Figure 78.
• If the full-mesh option is used, the core ABR RRs also form neighbor groups for the closest MTG RR
clients in the core network that are the PEs implementing the LTE MPLS VPN.

Figure 79 - BGP Control Plane for Multi-Area IGP Design with IGP/LDP Access

[Figure content: as in Figure 78, but labeled BGP terminates at the PANs, which perform RAN IGP <> labeled BGP redistribution instead of acting as inline RRs toward the CSGs; the LTE VPNv4 service control plane (CSG VPNv4 PE through the PAN and CN-ABR inline RRs to the MTG VPNv4 PEs) is unchanged.]

BGP Control Plane for Unified MPLS Transport


At the top layers of the network, namely the core and aggregation domains, the BGP control plane for Unified
MPLS Transport is exactly the same as in the multi-area design with end-to-end labeled BGP described above.
The only difference is in the access domain of the network. In this case, labeled BGP is terminated at the PANs
in the aggregation domain. The PANs are labeled BGP PEs and do not have to perform the inline-RR function
toward the CSGs in the RAN access. The end-to-end Unified MPLS LSP is extended into the RAN access using
LDP, with redistribution at the PANs.

BGP Control Plane for LTE MPLS VPN Service


For the LTE MPLS VPN service, the BGP control plane is exactly the same as that in the multi-area design with
end-to-end labeled BGP case described above.

BGP Control Plane for Inter-AS Design


This section details the hierarchical RR design to enable the BGP control plane for the Unified MPLS Transport
in a network where the core and aggregation networks are organized as different ASs. It also shows the
hierarchical RR design for an LTE MPLS VPN service as an example. The hierarchical RR approach is used to
reduce the number of iBGP peering sessions on the RRs across different domains of the backhaul network.
The need for standalone RRs in the access network to support this hierarchy is eliminated by making use of the
inline-RR functionality on the PANs.

Figure 80 - BGP Control Plane for Inter-AS Design with Labeled BGP Access

[Figure content: Unified MPLS Transport control plane for the inter-AS design. PANs are inline RRs (NHS) toward the CSG PEs; AGN-RRs and CN-RRs are external RRs in the aggregation and core ASs; iBGP IPv4+label runs within each AS and eBGP IPv4+label runs between the AGN-ASBR and CN-ASBR. The LTE VPNv4 service example shows iBGP VPNv4 sessions within each AS and an eBGP multi-hop session with next-hop unchanged (NHU) between the AGN-RR and CN-RR, terminating at the MTG PEs.]

BGP Control Plane for Unified MPLS Transport


In the aggregation AS, the PANs are inline-RRs to the CSG clients for the MP-iBGP IPv4-labeled unicast
address-family:
• They form iBGP session neighbor groups with the CSG RR-clients that are the labeled BGP PEs
implementing the inter-domain iBGP hierarchical LSPs in the local RAN access network.
• They also form iBGP session neighbor groups towards the local AGN-RRs.
• The PANs insert themselves into the data path to enable inter-domain LSPs by setting NHS on all iBGP
updates towards their local AGN-RRs and CSG RR-clients.

The AGN-RRs are external RRs for the MP-iBGP IPv4 labeled unicast address-family and form the next level of
the RR hierarchy:
• They form iBGP session neighbor groups towards the AGN-ASBR and PAN RR-clients in the aggregation
network.
• The AGN-ASBRs insert themselves into the data path to enable inter-domain LSPs by setting NHS on all
iBGP updates towards their local AGN-RRs and eBGP updates towards neighboring CN-ASBRs.

In the core AS, the CN-RRs are external RRs for the MP-iBGP IPv4-labeled unicast address-family:
• They form iBGP session neighbor groups towards the MTG and CN-ASBR RR-clients in the core network.
• The CN-ASBRs insert themselves into the data path to enable inter-domain LSPs by setting NHS on all
iBGP updates towards their local CN-RRs and eBGP updates towards the neighboring aggregation ASBRs.
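
A hedged IOS XR-style sketch of the ASBR role described above, with hypothetical AS numbers and addresses: the CN-ASBR runs eBGP IPv4 labeled unicast to the directly connected AGN-ASBR and sets next-hop-self toward its local RR, anchoring the inter-domain LSP at the AS boundary.

  router bgp 100
   neighbor 192.168.1.2
    ! Directly connected AGN-ASBR in the neighboring aggregation AS
    remote-as 200
    address-family ipv4 labeled-unicast
    !
   !
   neighbor 10.0.3.1
    ! Local CN-RR
    remote-as 100
    update-source Loopback0
    address-family ipv4 labeled-unicast
     next-hop-self
    !
   !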

BGP Control Plane for LTE MPLS VPN Service


For the LTE MPLS VPN service, the PANs are inline RRs for the MP-iBGP VPNv4 and VPNv6 address-family:
• They form iBGP session neighbor groups towards the local RAN access network to serve the CSG
RR-clients that are the PEs implementing the LTE MPLS VPN.
• They also form iBGP session neighbor groups towards the local aggregation network AGN-RR external RRs.

The AGN-RRs are external RRs for the MP-iBGP VPNv4 and VPNv6 address-family in the aggregation network
and form the next level of the RR hierarchy:
• They form iBGP session neighbor groups towards the local aggregation network to serve the PAN RR clients.
• They enable the LTE VPN service with an eBGP multi-hop session towards the CN-RR in the core
network to exchange VPNv4/v6 prefixes over the inter-domain transport LSP.

The CN-RRs are external RRs for the MP-iBGP VPNv4 and VPNv6 address-family in the core network:
• They form iBGP session neighbor groups in the core network to serve the MTG RR clients that are the
PEs implementing the LTE MPLS VPN.
• They enable the LTE VPN service with an eBGP multi-hop session towards the AGN-RRs in the
neighboring aggregation network ASs to exchange VPNv4/v6 prefixes over the inter-domain transport LSP.
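
The RR-to-RR service peering in these bullets might look as follows on an AGN-RR, again as an IOS XR-style sketch with hypothetical values; the key elements are the multi-hop eBGP session (the RRs are not adjacent) and next-hop-unchanged (the NHU of Figure 80), so that VPN traffic follows the inter-domain transport LSP between the PEs rather than the RRs.

  router bgp 200
   neighbor 10.0.10.1
    ! CN-RR loopback in the core AS, reached over the inter-AS transport LSP
    remote-as 100
    ebgp-multihop 255
    update-source Loopback0
    address-family vpnv4 unicast
     ! Leave the PE next hop intact; the RR stays out of the data path
     next-hop-unchanged
    !
   !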

Figure 81 - BGP Control Plane for Inter-AS Design with IGP/LDP Access

[Figure content: as in Figure 80, but with IGP/LDP access: the PANs perform RAN IGP <> labeled BGP redistribution instead of acting as inline RRs toward the CSGs; the inter-AS transport (eBGP IPv4+label between the ASBRs) and the LTE VPNv4 service control plane (eBGP multi-hop with NHU between the AGN-RR and CN-RR) are unchanged.]

BGP Control Plane for Unified MPLS Transport
At the top layers of the network, namely the core and aggregation domains, the BGP control plane for Unified
MPLS Transport is exactly the same as in the inter-AS design with end-to-end labeled BGP described above.
The only difference is in the access domain of the network. In this case, labeled BGP is terminated at the PANs
in the aggregation domain. The PANs are labeled BGP PEs and do not have to perform the inline-RR function
toward the CSGs in the RAN access. The end-to-end Unified MPLS LSP is extended into the RAN access using
LDP, with redistribution at the PANs.

BGP Control Plane for LTE MPLS VPN Service


For the LTE MPLS VPN service, the BGP control plane is exactly the same as that in the inter-AS design with
the end-to-end labeled BGP case described above.

BGP Control Plane for Integrated Core and Aggregation Design


This section details the hierarchical RR design to enable the BGP control plane for the Unified MPLS transport
in a network where the core and aggregation networks are integrated into a single IGP/LDP domain. It also
illustrates the LTE MPLS VPN service as an example. The need for standalone RRs in the lower layers of the
network to support this hierarchy is eliminated by making use of the inline-RR functionality on the AGNs. At the
top level of the hierarchy, the AGNs peer as RR clients to a centralized external AGN-RR.

Figure 82 - BGP Control Plane for Integrated Core and Aggregation Design with Labeled BGP Access

[Figure content: for the integrated core and aggregation design, the AGNs act as inline RRs (next-hop-self) toward the CSG PEs on both sides and peer as RR clients of a centralized external AGN-RR; iBGP IPv4+label sessions build the Unified MPLS Transport, and an equivalent iBGP VPNv4/v6 hierarchy (CSG VPN PE, AGN inline RR, external AGN-RR, MTG VPN PE) supports the LTE MPLS VPN service.]

BGP Control Plane for Unified MPLS Transport
For the Unified MPLS Transport layer, the AGNs are inline-RRs to the CSG clients for the MP-iBGP IPv4-labeled
unicast address-family:
• They form iBGP session neighbor groups with the CSG RR-clients that are the labeled BGP PEs
implementing the inter-domain iBGP hierarchical LSPs in the local RAN access network.
• They also form iBGP session neighbor groups towards the local AGN-RRs.
• The AGNs insert themselves into the data path to enable inter-domain LSPs by setting NHS on all iBGP
updates towards their local CSG RR-clients and higher level AGN-RR.

BGP Control Plane for LTE MPLS VPN Service


For the LTE MPLS VPN service, the AGNs are inline RRs for the MP-iBGP VPNv4 and VPNv6 address-family:
• They form iBGP session neighbor groups towards the local RAN access network to serve the CSG
RR-clients that are the PEs implementing the LTE MPLS VPN.
• They also form iBGP session neighbor groups towards the higher level external AGN-RR in the
aggregation network.

Scale Considerations
This section describes the route scale and the control plane scaling aspects involved in setting up the Unified
MPLS Transport across the network domains.

As an example, consider a large scale deployment following the Inter-AS design described in “Large Network,
Inter-AS Design with IP/MPLS Access,” including support for Residential, Business, and Mobile Services.

For Mobile Services, the network includes 60,000 CSGs across 20 POPs in a SP network. In the core network,
consider around 10 EPC locations, with each location connected to a pair of redundant MTGs. This leads to a
total of 20 MTGs for transport connectivity from the core to the CSGs in the RAN access domain. If you consider
that each RAN access domain comprises 30 CSGs connected in physical ring topologies of five nodes each
to the pre-aggregation network, and (for the purpose of illustration) you assume an even distribution of RAN
backhaul nodes across the 20 POPs, you end up with the network sizing shown in the following table.

For Residential and Business wireline services, the network includes 3000 FANs across the same 20 POPs in
the SP network. In addition, there are 20 OLTs per POP providing PON access for wireline services. These rings
are divided among 100 pairs of PANs per POP, which are configured in rings to 5 pairs of AGNs and 5 pairs of
AGN-SE nodes.

The entire POP is aggregated by a pair of AGN-ASBR nodes, which connect to a pair of CN-ASBR nodes for
handling all service traffic transport between the core and aggregation domains.

Table 7 - Large Network Sizing

Large Network
Network element   Access   Aggregation   Network (20 POPs)   Comments
CSGs              30       3000          60,000              Assuming 100 access rings in each POP with 30 CSGs in each ring (100*30=3000; 20*3000=60,000) (1-5% FAN)
FANs              30       150           3,000               Assuming 5 access rings in each POP with 30 FANs in each ring (5*30=150; 20*150=3,000) (30% RAN)
OLTs              20       200           4,000               Assuming 10 access rings in each POP with 20 OLTs in each ring (10*20=200; 20*200=4,000)
PANs              2        200           4,000               Assuming 10 aggregation rings in each POP with 20 PANs in each ring (10*20=200; 20*200=4,000)
AGNs              -        10            200                 Assuming 10 AGNs in each POP (20*10=200)
AGN/PAN-SE        -        10            200                 Assuming 10 AGN/PAN-SEs in each POP (20*10=200)
AGN-ASBR          -        2             40                  (20*2=40)
CN-ASBR           -        2             40                  (20*2=40)
Core Node         -        -             10
MTG               -        -             20                  Assuming 20 MTGs network wide

As another example, consider a smaller scale deployment following the single-AS, multi-area design described
in “Small Network, Integrated Core and Aggregation with IP/MPLS Access,” including support for Residential,
Business, and Mobile Services.

For Mobile Services, the network includes 7,000 CSGs across 10 POPs in a SP network. In the core network,
consider around 5 EPC locations, with each location connected to a pair of redundant MTGs. This leads to a total
of 10 MTGs for transport connectivity from the core to the CSGs in the RAN access domain. If you consider that
each RAN access domain comprises 30 CSGs connected in physical ring topologies of five nodes each
to the pre-aggregation network, and (for the purpose of illustration) you assume an even distribution of RAN
backhaul nodes across the 10 POPs, you end up with the network sizing shown in the following table.

For Residential and Business wireline services, the network includes 300 FANs across the same 10 POPs in the SP
network. In addition, there are 20 OLTs per POP providing PON access for wireline services. These rings are divided
among 25 pairs of PANs per POP, which are configured in rings to a pair of AGNs and a pair of AGN-SE nodes.

The entire POP is aggregated by a pair of Core nodes for handling all service traffic transport between the core
and aggregation domains.

Table 8 - Small Network Sizing

Small Network
Network element   Access   Aggregation   Network (10 POPs)   Comments
CSGs              30       700           7,000               Assuming 23 access rings in each POP with 30 CSGs in each ring (23*30=690), rounded to 700 (10*700=7,000)
FANs              30       30            300                 Assuming 1 access ring in each POP with 30 FANs in each ring (10*30=300)
OLTs              20       20            200                 Assuming 1 ring with 20 OLTs in each POP (20*10=200)
PANs              2        50            500                 Assuming 2 PANs per access ring and 25 rings per POP
AGNs              -        2             20                  2 AGNs per POP
AGN-SE            -        2             20                  2 AGN-SEs per POP
Core Node         -        2             20                  2 Core Nodes per POP
MTG               -        -             10

Route and Control Plane Scale
RAN backhaul for LTE requires connectivity between the CSGs in the RAN access and the MTGs in the core
network. In an MPLS environment, since route summarization of the PEs' /32 loopback addresses cannot be
performed, a flat IGP/LDP network design would imply that the core network would have to learn all 60,000
loopback addresses corresponding to the CSGs deployed across the entire network. While this level of route
scale in an IGP domain may be technically feasible, it is an order of magnitude higher than typical deployments
and would present huge challenges in IGP convergence when a topology change event is encountered.

The Cisco FMC system architecture provides a scalable solution to this problem by adopting a divide-and-
conquer approach of isolating the access, aggregation, and core network layers into independent and isolated
IGP/LDP domains. While LDP is used to set up intra-domain LSPs, the isolated IGP domains are connected to
form a unified MPLS network in a hierarchical fashion by using RFC 3107 procedures based on iBGP to exchange
loopback addresses and MPLS label bindings for transport LSPs across the entire MPLS network. This approach
prevents the flooding of unnecessary routing and label binding information into domains or parts of the network
that do not need them. This allows scaling the network to hundreds of thousands of LTE cell sites without
overwhelming any of the smaller nodes, like the CSGs, in the network. Since the route scale in each independent
IGP domain is kept to a minimum, and all remote prefixes are learnt via BGP, each domain can easily achieve
subsecond IGP fast convergence.
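
The prefix containment described above is typically enforced with BGP communities, as illustrated later in Figure 84. The following is an illustrative IOS XR-style sketch, with hypothetical community values and policy names: service edge loopbacks are tagged on advertisement, and an RR sends each access domain only the communities it needs.

  route-policy TAG-MSE
   ! Applied outbound on the MTGs: tag mobile service edge loopbacks
   set community (100:100)
  end-policy
  !
  route-policy TO-RAN-CLIENTS
   ! Applied outbound toward RAN RR clients: MSE plus this region's prefixes only
   if community matches-any (100:100, 100:201) then
    pass
   else
    drop
   endif
  end-policy
  !
  router bgp 100
   neighbor 10.1.1.1
    address-family ipv4 labeled-unicast
     route-policy TO-RAN-CLIENTS out
    !
   !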

Table 9 - Route Scaling for Transport and Mobile Services

                     Unified MPLS Transport                                              Mobile Services
Large Network
Devices              Core IGP   Access IGP   BGP IPv4+label                              L3VPN VRF x Routes   VPWS PWs
CSGs                 -          30           20 MSE + 10 FXX                             3 x (20+30)          3 x SAToP
FANs                 -          30           20 MSE + 20 FXX                             3 x (20+10)          3 x SAToP
OLTs                 -          -            -                                           -                    -
PANs                 212        30 RAN       4000 FAN + 200 FSE + 20 MSE                 -                    -
AGNs +Service edge   212        -            4000 FAN + 200 FSE                          -                    -
AGN-ASBR             212        -            4000 FAN + 200 FSE + 20 MSE + 3000 RAN      -                    -
CN-ASBR              70         -            4000 FAN + 200 FSE + 20 MSE + 3000 RAN      -                    -
Core Node            70         -            -                                           -                    -
MTG                  70         -            61,000                                      -                    20

For Residential and Business Services, the Cisco FMC system design employs a similar hierarchical mechanism
as for the Mobile Service transport. All Access Nodes providing wireline service require connectivity to the
service edge nodes and any remote Access Nodes for which E-Line services are configured. This allows for
scaling to handle the required service scale without overwhelming the smaller Access Nodes.

Table 10 - Route Scaling for Residential and Business Services

                     Residential Services                                   Business Services
Large Network        Sessions from Cisco Optical        Multicast   VPWS                 VPLS                  L3VPN
Devices              Network Terminators (ONTs)         Groups      UNI                  VFIs / UNI            VRF x Routes
CSGs                 -                                  -           5                    2 / N/A               3 x N/A
FANs                 -                                  -           10                   5 / 5                 -
OLTs                 3000                               300         300                  200                   500
PANs                 -                                  -           -                    -                     -
AGNs +Service edge   60,000 (3 services, 1 account)     500         300 ONTs + 30        20 VFIs; 200 ONTs +   50 x 500-1000; 500 ONTs
                                                                    Ethernet UNI         20 Ethernet UNI       + 50 Ethernet UNI
                                                                    (5x30 + 2x60 PWs)                          (3x60 + 5x50 PWs)
AGN-ASBR             -                                  -           -                    -                     -
CN-ASBR              -                                  -           -                    -                     -
Core Node            -                                  -           -                    -                     -
MTG                  -                                  -           -                    -                     -

Figure 83 - Cisco FMC System Hierarchical Network


[Figure content: hierarchical network spanning AS-B, AS-A, and AS-C. Each RAN area/level (OSPF x / IS-IS L1) holds 30 FANs or CSGs homed to PANs; each aggregation area/level (OSPF 0 / IS-IS L2) holds 200 PANs, 10 AGNs, and 10 SE nodes behind redundant AGN-ASBRs; the core area/level holds 40 CN-ASBRs, 20 MTGs, and the core nodes. End to end, the LDP LSPs of each domain are stitched by iBGP hierarchical LSPs within each AS and eBGP LSPs between the ASBRs.]

If you consider a hierarchical network design as shown in Figure 83, you end up with the route scale across the
various domains of the Unified MPLS transport network shown in Table 9 and Table 10, as explained in the following notes:
• On CSGs, there are 30 IGP routes that correspond to the local RAN access CSG loopbacks + 2 local
PANs, and 30 iBGP routes that correspond to the 20 MTG nodes + a total of 10 nodes between remote
FANs and FSEs.
• On PANs, AGNs/-SEs, and AGN-ASBRs, the 212 IGP routes in Core-AGG IGP process correspond to
200 local PANs + 10 local AGN-SEs + 2 local AGN-ASBRs. The iBGP routes are explained in the table.
• On MTGs, the 70 IGP routes correspond to 40 core ABRs + 20 MTG nodes + 10 other core nodes.
The 61,000 iBGP IPv4-labeled routes correspond to 60,000 CSG loopbacks + 1000 PANs with locally-
attached eNBs.

Control Plane Scale


As described in “Transport and Service Control Plane,” the Cisco FMC system uses a hierarchical RR design,
utilizing a top-level RR in each AS, with inline RR functionality on the CN-ASBRs, AGN-ASBRs, and PANs to
greatly reduce the number of iBGP peering sessions across different domains of the backhaul network.

Figure 84 - Cisco FMC System Hierarchical RR Topology

[Figure content: hierarchical RR topology. CSGs and FANs peer via iBGP IPv4+label with their PAN or PAN-SE inline RRs, which peer with regional AGN-RRs; AGN-ASBRs and CN-ASBRs interconnect the ASs via eBGP IPv4+label, and the CN-RR serves the core. BGP communities (a common community plus per-RAN-region, FSE, MSE, and FAN communities) tag prefixes so that each domain imports only what it needs; LDP LSPs are stitched by iBGP hierarchical LSPs and eBGP LSPs end to end.]

For the example network sizing shown in Table 7, if you consider the peering organization illustrated in Figure 84,
you have the following BGP session scale on the different elements in the network:

Notes:
• CSGs in each RAN access domain peer with their two redundant local PAN inline-RRs.
• PANs in each aggregation domain peer with their CSG clients and with the AGN-RR for that domain.
• AGN-SEs in each aggregation domain peer with the AGN-RR for that domain.
• AGN-ASBRs in each aggregation domain peer with the AGN-RR for that domain.
• CN-ASBRs peer with the redundant external CN-RRs in the core domain.
• MTGs in the core domain that connect with regional EPC GWs peer with the redundant external CN-RRs.

Residential Services Scale
For the first release of the Cisco FMC system, the scale limits supported by the BNG nodes for residential
services are described in the following table.

Table 11 - Residential Services Scale

BNG Node Limit


Dual stack PPPoE sessions per system 64,000
Dual stack IPoE sessions per system 64,000
Dual stack IPoE + PPPoE sessions per system 64,000
Sessions per line card 64,000
Sessions per N:1 VLAN 32,000

Functional Components
Up to this point in the design guide, we have addressed the base transport design, control plane, data plane, and
service model aspects of the Cisco FMC system architecture. This chapter looks at additional aspects required
for delivering and operating a comprehensive FMC system architecture.

Quality of Service
The Cisco FMC system applies the IETF DiffServ Architecture (RFC 2475) across all network layers, utilizing
classification mechanisms like MPLS Experimental (EXP) bits, IP DSCP, IEEE 802.1p, and ATM CoS for
implementing the DiffServ PHBs in use.

In a transport network, congestion can occur anywhere. However, it is more likely where statistical estimates
of peak demand are conservative (that is, where links are under-provisioned), which is more often the case for
access and aggregation links. Because congestion occurs whenever instantaneous ingress bandwidth to a
node exceeds egress bandwidth (assuming the node can process all ingress traffic), all nodes must be able to
implement DiffServ scheduling functions. With DiffServ, the effects of under-provisioning are deliberately
distributed unevenly among the transported services: higher quality services (like voice over IP [VoIP] and video)
are effectively over-provisioned, while other services experience differing levels of under-provisioning. This is in
line with the functional requirements defined by standards bodies, such as the NGMN and the Broadband Forum
TR-221 specification for mobile backhaul and TR-101 for Ethernet-based aggregation networks for residential
and business services.

Each network layer defines an administrative boundary, where traffic remarking may be required in order to
correlate the PHBs between different administrative domains. A critical administrative and trust boundary is
required for enforcing subscriber SLAs. Subscriber SLAs are enforced with sound capacity management
techniques and functions, such as policing/shaping, marking, and hierarchical scheduling mechanisms.
This administrative boundary is implemented by the access devices for traffic received (upstream) from the
subscribers and by the core nodes for traffic sent (downstream) to the subscribers.
For mobile services, the access devices performing the administrative boundary function range from the CSG
to the NodeB equipment to the radio controllers, depending on the service model; for business and residential
services, the function is delegated solely to the ANs.

Figure 85 and Figure 86 depict the QoS model implemented for the upstream and downstream directions. Within
the aggregation and core networks, where strict control over residential and business subscribers' SLAs is not
required, a flat QoS policy with a single-level scheduler is sufficient for the desired DiffServ functionality among
the different classes of traffic, as all links are operated at full line rate.

Hierarchical QoS policies are required whenever the relative priorities across the different classes of traffic are
significant only within the level of service offered to a given subscriber, and/or within a given service category,
such as residential, business, or mobile.

In the downstream direction, H-QoS for a given subscriber should be performed at the service edge node whenever
possible, to guarantee optimal usage of link bandwidth throughout the access network.

For an Ethernet-based access NNI and residential services, the service edge node acting as BNG device is
capable of applying QoS at the subscriber level, with per-subscriber queuing and scheduling, as well as at the
aggregate level for all residential subscribers sharing the same N:1 VLAN, or a range of 1:1 VLANs.

Aggregated QoS at the residential service level is beneficial to manage oversubscription of the AN from
residential traffic, as well as to control sharing of the access-facing NNI bandwidth with mobile and business
services. Similarly, business services interfaces at the service edge implement H-QoS for the deployment of
subscriber level SLAs as well as access bandwidth sharing.

Mobile services also require the implementation of H-QoS for access bandwidth sharing. Moreover, in the case
of microwave links in the access, where the wireless portion of the link is only capable of sub-gigabit speeds
(typically 400 Mbps sustained) a parent shaper may be used to throttle transmission to the sustained microwave
link speed.

Whenever subscriber SLAs are managed at the service edge and the access UNI is not multiplexed, a flat QoS
policy can be applied to the AN in order to manage relative priority among the classes of traffic at each UNI
port. Multiplexed UNIs, typical of business services, require an H-QoS policy for relative prioritization among
services first and then between classes of traffic within each service. In those scenarios, H-QoS on the service
edge nodes may enforce peak information rate (PIR) levels, while the access UNI may enforce the committed
information rate (CIR) levels.

For an MPLS-based NNI, most services do not have a corresponding attachment point at the service edge node,
and therefore the majority of the service-level H-QoS logic happens at the AN. The exception is the L3VPN
business services, for which the customer edge-to-provider edge (CE-PE) LSP is terminated over a PWHE
interface at the service edge node, which becomes the injection point for H-QoS.
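
As an illustration of the two-level pattern described in this section, the following IOS XR-style MQC sketch shows a parent shaper with a child queuing policy on an access-facing subinterface; the class names, rates, and interface are hypothetical, and platform scheduler capabilities vary.

  class-map match-any REALTIME
   match mpls experimental topmost 5
  end-class-map
  !
  class-map match-any VIDEO
   match mpls experimental topmost 4
  end-class-map
  !
  policy-map SUBSCRIBER-CHILD
   class REALTIME
    ! Low-latency queue, bounded so it cannot starve other classes
    priority level 1
    police rate percent 20
   !
   class VIDEO
    bandwidth remaining percent 50
   !
   class class-default
    bandwidth remaining percent 20
   !
  end-policy-map
  !
  policy-map SUBSCRIBER-PARENT
   class class-default
    ! Shape to the subscriber or access line rate, then run the child scheduler
    shape average 400 mbps
    service-policy SUBSCRIBER-CHILD
   !
  end-policy-map
  !
  interface TenGigE0/0/0/1.100
   service-policy output SUBSCRIBER-PARENT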

Figure 85 - Downstream QoS Model

[Figure content: downstream QoS model across the CO AN, remote AN, PAN-SE/AGN-SE, AGN, and CN. Residential subscriber UNIs, business subscriber UNIs, mobile NNIs, and L3 business PWHE interfaces are each shaped to at most the access line rate; the service edge applies per-session and per-interface queuing and scheduling with oversubscription, priority propagation, marking, policing, and bandwidth-remaining WRR on the MPLS NNI, across the Ethernet/MPLS access (ME-4600 OLT, ASR-9001/903 PANs), aggregation (ASR-9001/9006/9010), and core (CRS-3) networks.]

Upstream QoS mainly involves flat egress QoS policies applied to the various network nodes for relative
prioritization among the different classes of traffic. Additionally, ingress QoS is required at the AN UNI and at
the service edge node access NNI to enforce per-subscriber SLAs when an attachment point for the policy is
available. At the service edge nodes, ingress coupled policers can be used to throttle the overall subscriber
transmission rate to the committed speed, while providing a minimum bandwidth guarantee to the several
traffic classes, up to the full subscriber transmission rate.

Figure 86 - Upstream QoS Model

[Figure content: upstream QoS model, mirroring Figure 85: per-UNI shaping at the access nodes, with queuing and scheduling, marking, policing, priority propagation, and bandwidth-remaining WRR applied at the subscriber UNIs, mobile NNIs, PWHE interfaces, and MPLS NNIs across the access, aggregation, and core networks.]

DiffServ QoS Domain
The traffic classification, marking, and DiffServ PHB behaviors considered in the system architecture, which are
depicted in Figure 87, are targeted to fit the deployment of residential, business, and mobile services. Traffic
across all three services is divided into three main categories:
• Expedited forwarding (EF)
• Assured forwarding (AF)
• Best effort (BE)

Figure 87 - Differentiated Services QoS Domain

                                          Unified MPLS Transport   Service Edge   Fixed/Mobile Access UNI
                                          Core/Agg/Access          PWHE           Ethernet (Bus)   Ethernet (Res/Bus)   TDM/ATM
Traffic Class                       PHB   DSCP   EXP               DSCP   EXP     802.1P           DSCP   802.1P        ATM
Network Management                  AF    56     7                 56     7       7                56     (7)           VBR-nrt
Network Control Protocols           AF    48     6                 48     6       6                48     (6)           VBR-nrt
Residential Voice, Business
Real-time, Network Sync (1588
PTP), Mobility and Signaling,
Mobile Conversation/Streaming       EF    46     5                 46     5       5                46     5             CBR
Residential TV and Video
Distribution                        AF    32     4                 32     4       4                NA     4             NA
Business TelePresence               AF    24     3                 24     3       3                NA     3             NA
Business Critical - In Contract     AF    16     2                 16     2       2                16     2             VBR-nrt
Business Critical - Out of Contract AF    8      1                 8      1       1                8      1             VBR-nrt
Residential HSI, Business Best
Effort, Mobile Background, VQE
Fast Channel Change and Repair      BE    0      0                 0      0       0                0      0             UBR

Traffic marked as expedited forwarding (EF) is grouped in a single class serviced with priority treatment to satisfy
stringent latency and delay variation requirements. The EF PHB defines a scheduling logic able to guarantee an
upper limit to the per hop delay variation caused by packets from non-EF services.

This category includes residential voice and business real time traffic, mobile Network Timing Synchronization
(1588 PTP) and mobile signaling and conversation traffic (GSM Abis, UMTS Iub control plane and voice user
plane, LTE S1c, X2c, and the LTE guaranteed bit rate (GBR) user plane).

Traffic marked as assured forwarding (AF) is divided over multiple classes. Each class is guaranteed a predefined
amount of bandwidth, thus establishing relative priorities while maintaining fairness among classes and
limiting the amount of latency that traffic in each class may experience.

The Cisco FMC system defines five AF classes, two of which are reserved for network traffic, control and
management, and the remaining three are dedicated to traffic from residential and business services, such as
residential TV and video distribution, and business TelePresence and mission-critical applications.

The third category, best effort (BE), encompasses all traffic that can be transmitted only after all other classes have
been served within their fair share. This is traffic that is neither time nor delay sensitive and includes residential
high-speed Internet (HSI), business best effort, mobile background, and video quality experience (VQE) control traffic.

For Ethernet UNI interfaces, upstream traffic classification is based on IP DSCP or 802.1P CoS markings. The
ingress QoS service policy will match on these markings and map them to the corresponding DSCP and/or
MPLS EXP value, depending on the access NNI being Ethernet or MPLS based. In the downstream direction, IP
DSCP markings are preserved through the Unified MPLS Transport and may be used for queuing and scheduling
at the UNI as well as for restoring 802.1P CoS values.
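
A minimal IOS XR-style sketch of the upstream classification just described, assuming an Ethernet UNI subinterface and hypothetical class and policy names: DSCP or 802.1P markings from the subscriber are mapped to the MPLS EXP value imposed toward the core.

  class-map match-any VOICE-IN
   ! Trust either DSCP EF or 802.1P CoS 5 from the UNI
   match dscp ef
   match cos 5
  end-class-map
  !
  policy-map UNI-INGRESS
   class VOICE-IN
    set mpls experimental imposition 5
   !
   class class-default
    set mpls experimental imposition 0
   !
  end-policy-map
  !
  interface GigabitEthernet0/0/0/2.10
   service-policy input UNI-INGRESS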

Specific to mobile services, TDM UNI interfaces transported via CEoP pseudowires require all traffic to be
classified as real-time with EF PHB. The ingress QoS service policy matches all traffic inbound to the interface,
and applies an MPLS EXP value of 5. No egress service policy is required for TDM UNI interfaces. For ATM UNI
interfaces to be transported via CEoP pseudowires or used for business services, traffic is classified according
to the ATM CoS on a particular VC. The ingress QoS service policy is applied to the ATM permanent virtual circuit
(PVC) subinterface and imposes an MPLS EXP value that corresponds to the type of traffic carried on the VC and
proper ATM CoS. For further distinction, the ingress QoS service policy may also match on the cell loss priority
(CLP) bit of the incoming ATM traffic and map to two different MPLS EXP values based on this. For egress
treatment, the PVC interface is configured with the proper ATM CoS. If the CLP-to-EXP mapping is being used,
then an egress QoS service policy applied to the ATM PVC subinterface can map an EXP value back to a CLP
value for proper egress treatment of the ATM cells.

At the service edge node, classification performed at the access-facing NNI uses a different set of markings
depending on the technology used. For an Ethernet-based access NNI in the upstream direction, classification is
based on IP DSCP or 802.1P CoS markings. The ingress QoS service policy will match on these markings and
map them to the corresponding MPLS EXP value for transport toward the core. In the downstream direction, IP
DSCP markings are preserved through the Unified MPLS Transport and may be used for queuing and scheduling
as well as for restoring 802.1P CoS values before forwarding.

For an MPLS-based access NNI in the upstream direction, classification is based on IP DSCP or MPLS EXP
markings. The ingress QoS service policy will match on these markings, which are retained when forwarding
toward the core. In the downstream direction, IP DSCP or MPLS EXP markings preserved through the Unified
MPLS Transport can be used for queuing and scheduling toward the access NNI.

All the remaining core, aggregation, and access network traffic classification is based on MPLS EXP or DSCP.
The core network may use different traffic marking and simplified PHB behaviors, therefore requiring traffic
remarking in between the aggregation and core networks.

Synchronization Distribution
Every mobile technology deployment has synchronization requirements in order to enable aspects such as radio
framing accuracy, user endpoint handover between cell towers, and interference control on cell boundaries.
Some technologies only require frequency synchronization across the transport network, while others require
phase and time-of-day (ToD) synchronization as well. The Cisco FMC system delivers a comprehensive model
for providing network-wide synchronization of all three aspects with an accuracy that exceeds the threshold
requirements of any mobile technology deployed across the system.

The primary target for the current system release is to provide frequency synchronization by using the Ethernet
physical layer (SyncE) and phase and ToD synchronization by using IEEE 1588-2008 PTP. SyncE operates on
a link-by-link basis and will provide a high quality frequency reference similar to that provided by SONET and
SDH networks. SyncE is complemented by the Ethernet Synchronization Message Channel (ESMC), which allows
a quality level value to be transmitted over SyncE-enabled links, as is done with synchronization status messages
in SONET and SDH. This allows a SyncE node to select a timing signal from the best available source and helps
detect timing loops, which is essential for the deployment of SyncE in ring topologies.
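
Enabling SyncE with ESMC is largely per-link configuration. The following is a hedged IOS XR-style sketch with hypothetical interface and priority values; QL-based selection and wait-to-restore behavior are what make ring deployments safe.

  frequency synchronization
   quality itu-t option 1
  !
  interface GigabitEthernet0/0/0/0
   frequency synchronization
    ! Consider this ring link as a candidate frequency input and exchange ESMC QL
    selection input
    priority 10
    wait-to-restore 0
   !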

Because not all links on the network may be SyncE-capable or support synchronization distribution at the
physical layer, IEEE 1588-2008 PTPv2 may also be used for frequency distribution. IEEE 1588 packet-based
synchronization distribution is overlaid across the entire system infrastructure; third-party master and third-party
IP-NodeB client equipment are considered outside the scope of the system. The mechanism is standards-based
and can provide frequency and/or phase distribution, relying on unicast or multicast packet-based transport. As
with any packet-based mechanism, IEEE 1588 PTP traffic is subject to loss, delay, and delay variation. However,
the packet delay variation (PDV) is the main factor to control. To minimize the effects of these factors and meet
the requirements for synchronization delivery utilizing PTP, EF PHB treatment across the network is required.

The Cisco FMC system also supports a combination of SyncE and PTPv2 in a hybrid synchronization
architecture, aiming to improve the stability and accuracy of the phase and frequency synchronization delivered
to the client for deployments such as Time Division Duplex (TDD)-LTE eNodeBs. In such an architecture, the
packet network infrastructure is frequency synchronized by SyncE. The phase signal is delivered by 1588-
2008 PTPv2. The CSG, acting as a PTP ordinary clock or as a Boundary Clock (BC), may combine the two
synchronization methods, using the SyncE input as the frequency reference clock for the 1588-2008 PTP
engine. The combined recovered frequency and phase can be delivered to clients via 1 pulse per second (1PPS),
10 MHz, and Building Integrated Timing Supply (BITS) timing interfaces, SyncE, and PTP. For access networks that
don’t support SyncE, the hybrid 1588 BC function may be moved to the PANs.
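
On the CSG, the hybrid clock described above amounts to running a PTP slave port whose servo is frequency-assisted by SyncE. The following is a hedged IOS XR-style sketch with a hypothetical PMC address and domain; note that the CSG platforms in this system use their own platform-specific PTP syntax, including an explicit hybrid mode.

  ptp
   clock
    domain 24
   !
  !
  interface GigabitEthernet0/0/0/1
   ptp
    transport ipv4
    ! Recover phase/ToD from the PMC; frequency is taken from SyncE (hybrid mode)
    port state slave-only
    master ipv4 10.255.0.1
    !
   !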

Figure 88 illustrates how synchronization distribution is achieved for Mobile RAN services over both fiber and
microwave access networks in the Cisco FMC architecture.

Figure 88 - Synchronization Distribution

[Figure content: synchronization distribution across the mobile access (CSG: ASR-901), pre-aggregation (ASR-903), aggregation (ASR-9000), and core (CRS-3/ASR-9000, MTG) networks. A GNSS-based Primary Reference Time Clock (PRTC) and a PRC/PRS feed the 1588 Packet Master Clock (PMC) and external synchronization interfaces (frequency, ToD, and phase); SyncE with ESMC runs over fiber links, 1588 PTP carries phase (and optionally frequency) over microwave or other non-SyncE segments, and 1588 BC (optionally combined with SyncE in hybrid mode at the CSG) regenerates timing toward the clients.]

The frequency source for the mobile backhaul network is the Primary Reference Clock (PRC), which can be
based on a free-running atomic clock (typically cesium), a global navigation satellite system (GNSS) receiver that
derives frequency from signals received from one or more satellite systems, or a combination of both.

The time (phase and time of day [ToD]) source for the mobile backhaul network is the Primary Reference Time
Clock (PRTC), which is usually based on a GNSS receiver that derives time synchronization from one or more
satellite systems with traceability to Coordinated Universal Time (UTC). A PRC provides a frequency signal of
G.811/Stratum-1 quality (traceable to UTC frequency if coming from GNSS) to the AGNs via G.703-compliant
dedicated external interfaces (aka BITS input) or a 10 MHz interface. A PRTC provides time via a 1PPS signal for
phase and a serial ToD interface. The DOCSIS Timing Interface (DTI) is an alternative to the frequency, 1PPS,
and ToD interfaces. A PRTC can also provide frequency, as a PRC does. If required by the architecture, the IEEE
1588 Primary Master Clock (PMC) will also derive synchronization from the PRC or PRTC. From this point, three
models of synchronization distribution are supported:
• For mobile services that only require frequency synchronization, where all network nodes support SyncE,
frequency is carried to the NodeB via SyncE. The ESMC provides source traceability between nodes
through the Quality Level (QL) value, which helps select the best signal and prevent timing loops
in SyncE topologies.
• For mobile services that require synchronization over an infrastructure that does not support SyncE,
1588v2 PTP is utilized for frequency synchronization distribution. The PMC generates PTP streams
for each PTP slave that are routed globally from the regional MTG to the CSG, which then provides sync
to the eNodeB. The PMC can be a network node that receives the frequency source signal via the physical
layer (e.g., SyncE). Proper network engineering shall prevent excessive PDV, to allow the timing network to
provide a packet-based quality signal to the slaves.
• For mobile services that require frequency and phase and/or time of day (ToD) synchronization, IEEE
1588-2008 PTP can be used in conjunction with SyncE to provide a hybrid synchronization solution,
where SyncE provides accurate and stable frequency distribution, and PTPv2 is used for allowing
phase and/or ToD synchronization. In this Cisco FMC system release, the PTPv2 streams are routed
globally from the regional MTG to the CSG, which, combined with SyncE frequency, then provides
synchronization to the eNodeBs.

In general, a packet-based timing mechanism such as IEEE 1588 PTP has strict packet delay variation
requirements, which restrict the number and type of hops over which the recovered timing from the source
is still valid. With the globally routed model, strict priority queuing of the PTP streams is necessary. With a good
implementation of 1588 BC on intermediate transit nodes, it is possible to provide a better guarantee over more
hops from the PMC to the NodeB.

Scalability and reliability of PTPv2 in the Cisco FMC system are enhanced by enabling BC in some or all of the
following: the aggregation node, the PAN, and the CSG. Implementing BC functionality in these nodes, as
sketched after this list, serves two purposes:
• Increases scaling of PTPv2 phase/frequency distribution, by replicating a single stream from the PMC to
multiple destinations, thus reducing the number of PTP streams needed from the PMC.
• Improves the phase stability of PTPv2, by stabilizing the frequency of the PTP servo with SyncE or
another physical frequency source as described in the hybrid synchronization architecture.
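The following is a minimal sketch of how a PAN or CSG acting as a 1588 boundary clock with a SyncE-assisted
(hybrid) servo might be configured on an IOS-based platform such as the ASR 903; the domain number, port
names, and master address 10.10.10.1 are assumptions, and exact syntax varies by platform and release:

  ptp clock boundary domain 0 hybrid
   ! Port facing the PMC (upstream master)
   clock-port UPSTREAM slave
    transport ipv4 unicast interface Loopback0 negotiation
    clock source 10.10.10.1
   ! Port facing downstream PTP slaves (e.g., CSGs)
   clock-port DOWNSTREAM master
    transport ipv4 unicast interface Loopback0 negotiation

With the hybrid keyword, the node disciplines frequency from the physical-layer (SyncE) source and uses PTP
only for phase/ToD, matching the hybrid model described above.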

Functional Components September 2013


128
Redundancy and High Availability
The Cisco FMC system provides a highly resilient and robust network architecture, enabling rapid recovery from
any link or node failure within the network. The system design targets sub-second convergence, and typically
much less, for any failure within the network, meeting or exceeding the demands of wireline service SLAs and
the NGMN requirement of 50-200 msec for LTE real-time services. A hierarchical implementation of mechanisms
at the network and service levels achieves this resilient design. The mechanisms described in this section apply
to both single-AS and multiple-AS models.

The Cisco FMC system implements the following baseline transport mechanisms for improving network
availability:
• For intra-domain LSPs, remote LFA FRR is utilized for unicast MPLS/IP traffic in both hub-and-spoke
and ring topologies. Remote LFA FRR pre-calculates a backup path for every prefix in the IGP routing
table, allowing the node to rapidly switch to the backup path when a failure is encountered, providing
recovery times on the order of 50 msec. More information regarding LFA FRR can be found in IETF RFCs
5286, 5714, and 6571. Also integrated are BFD rapid failure detection and IS-IS/OSPF extensions for
incremental shortest-path first (SPF) and LSA/SPF throttling (Cisco IOS XR defaults should be applied to
IOS devices).
• For inter-domain LSPs, network reconvergence is accomplished via BGP core and edge FRR throughout
the system, allowing for deterministic network reconvergence on the order of 100 msec, regardless of the
number of BGP prefixes. BGP FRR is similar to remote LFA FRR in that it pre-calculates a backup path
for every prefix in the BGP forwarding table, relying on a hierarchical Label Forwarding Information Base
(LFIB) structure to allow multiple paths to be installed for a single BGP next hop. BGP core and edge
FRR each handle different failure scenarios within the transport network (see the sketch after this list):
◦ Core FRR is used when the BGP next hop is still active, but there is a failure in the path to that
next hop. As soon as the IGP has reconverged, the pointer in BGP is updated to use the new IGP
next hop and forwarding resumes. Thus, the reconvergence time for BGP is the same as the IGP
reconvergence, regardless of the number of BGP prefixes in the RIB.
◦ Edge FRR is used for redundant BGP next hops, as in the case of redundant ABRs. Additional-path
functionality is configured on the PE routers and RRs to install both ABR paths in the RIB and LFIB
instead of just the best path. When the primary ABR fails, BGP forwarding simply switches to the
path of the backup ABR instead of having to wait for BGP to reconverge.
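As an illustration, a minimal Cisco IOS XR sketch of remote LFA FRR and BGP PIC edge follows; the IGP
instance name, interface, AS number, and route-policy name are assumptions:

  router isis agg-acc
   interface TenGigE0/0/0/0
    address-family ipv4 unicast
     ! Pre-compute per-prefix backup paths, including remote LFAs via LDP tunnels
     fast-reroute per-prefix
     fast-reroute per-prefix remote-lfa tunnel mpls-ldp
  !
  route-policy BGP-PIC-EDGE
    ! Install one backup path per prefix in addition to the best path
    set path-selection backup 1 install
  end-policy
  !
  router bgp 64500
   address-family ipv4 unicast
    additional-paths receive
    additional-paths selection route-policy BGP-PIC-EDGE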

Figure 89 illustrates the end-to-end Cisco FMC system architecture and where the resiliency mechanisms are
utilized for various failures.

Figure 89 - Cisco FMC High Availability Overview

[Figure shows the end-to-end design: CSGs sit in IS-IS L1 mobile access domains on either side, with PANs and
CN-ABRs acting as inline RRs setting next-hop-self between the access, aggregation (IS-IS L1), and core
(OSPF 0/IS-IS L2) domains. iBGP IPv4+label sessions, reflected through the CN-RR, stitch the per-domain LDP
LSPs into an iBGP hierarchical LSP from the CSGs to the MTG, which attaches the Mobile Packet Core (MME,
SGW/PGW). BGP PIC edge and core provide <100 msec recovery; LFA FRR and remote-LFA FRR provide
<50 msec recovery.]

Mobile Services
The LTE transport MPLS VPN services between the CSG and MTG implement the following mechanisms for
improving network availability:
• For UNI connections at the CSG to the eNodeB, static routes to the MTG are utilized in the eNodeB.
• For UNI connections to the MPC from the MTG, fast IGP convergence with BFD keep-alive checks
or multichassis Link Aggregation Control Protocol (mLACP) port-bundles are utilized. Virtual Router
Redundancy Protocol (VRRP) between MTGs allows for a single IP address to be configured in the
eNodeB and MPC.
• For the MPLS VPN transport between the CSG and MTG, network convergence is handled by BGP FRR,
similar to the base transport infrastructure.

For PWE3-based circuit emulation services providing transport of TDM-based 2G and ATM-based 3G services,
the following mechanisms are implemented:
• For TDM and ATM connections from the MTG to the BSC or RNC, MR-APS allows for redundant
connections.
• For the circuit emulation PWE3 between the CSG and MTG, backup pseudowires provide failover
protection in the transport network, as sketched below.
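For instance, a CEM pseudowire with a backup peer might be configured on an IOS-based CSG as follows; the
CEM interface, peer addresses, and VC IDs are illustrative:

  interface CEM0/0
   cem 0
    ! Primary PW to the active MTG, backup PW to the standby MTG
    xconnect 10.0.0.1 100 encapsulation mpls
     backup peer 10.0.0.2 100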

In order to protect critical synchronization functions, the following mechanisms are implemented within the Cisco
FMC system for synchronization distribution:
• For SyncE frequency synchronization, the ESMC is used to pass frequency quality between nodes. In
ring topologies, SSMs allow nodes to avoid timing loops and to switch from one side of the ring to the
other if a network failure is encountered within the ring (a SyncE configuration sketch follows this list).
• For frequency, phase, and ToD synchronization via 1588 PTP, active and standby streams from two
different PRCs are received at each 1588 client. If the active stream becomes unavailable, then the
backup stream can be utilized. As 1588 is a packet-based protocol, L2 or IGP resiliency mechanisms will
prevent loops within a ring topology. 1588 BC implementation at the various network levels enhances
scalability and resiliency as well.
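A minimal SyncE/ESMC sketch for an IOS-based AN such as the ASR 901 follows; interface names and
priorities are assumptions:

  network-clock synchronization automatic
  network-clock synchronization ssm option 1
  ! Prefer the SyncE signal received on Gi0/1
  network-clock input-source 1 interface GigabitEthernet0/1
  !
  interface GigabitEthernet0/1
   synchronous mode
  !
  ! Run ESMC so QL values are exchanged and timing loops avoided
  esmc process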

Business Services
Business L3VPN services implement the following mechanisms for improving network availability:
• For UNI connections at the FAN to the CPE, static routes to the service edges are utilized in the CPE,
transported from the FAN by PWs.
• For the service MPLS VPN between service edges, network convergence is handled by BGP FRR,
similar to the base transport infrastructure.

Business L2VPN services implement the following mechanisms for improving network availability:
• For TDM and ATM connections at the FAN, MR-APS allows for redundant connections.
• For Ethernet connections at the FAN, mLACP allows for redundant connections.
• Transport redundancy for VPLS services is provided via backup pseudowires.
• Transport redundancy for VPWS services is provided via two-way backup pseudowires.

Residential Services
The second release of the Cisco FMC system focuses on both Ethernet (FTTH/PON) and MPLS access (DSL),
with BNG functions deployed at the aggregation devices that connect directly to the ANs via physical links or
through a pseudowire overlay. As is typical for residential architectures, subscribers are single-homed to the
ANs, while each AN is homed to the BNG over multiple paths.

The following redundancy models are implemented for residential subscribers:


• Ethernet Access—Single BNG at the aggregation site, with dual uplinks between ANs and BNG. AN
uplinks are bundled together in order to offer link-level resiliency to access link failures. LACP is used as
the link bundling protocol and runs these links in active/standby mode in order to guarantee subscriber
SLAs in both upstream and downstream directions (see the sketch after this list).
• MPLS Access—Network redundancy-based technologies for fast reroute of the pseudowire upon
transport failure.
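On the BNG side, an active/standby LACP bundle of this kind might be sketched as follows in Cisco IOS XR;
bundle and member interface names are assumptions:

  interface Bundle-Ether100
   description AN uplink bundle
   ! Keep one link active; the second member stays in standby
   bundle maximum-active links 1
  !
  interface GigabitEthernet0/0/0/1
   bundle id 100 mode active
  interface GigabitEthernet0/0/0/2
   bundle id 100 mode active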

Toward the core, transport redundancy for residential services in both models leverages remote LFA FRR
techniques at each intra-domain site and BGP PIC core and edge for inter-domain connectivity.

Figure 90 - Residential Services Redundancy Models—Ethernet Access Example

[Figure shows IP subscribers single-homed to ANs, each AN dual-homed to the BNG at the aggregation node
over a link bundle with active/standby links, and iBGP from the aggregation node toward the core node.]

Multicast
Resiliency for multicast transport in the Cisco FMC system is handled via multicast LDP convergence for both
residential and business services.

Subscriber and Service Control and Support


As the demand for broadband, mobile, and wireless networking grows worldwide, SPs continue to search
for solutions that can help them maximize their investments in these technologies and allow them to expand
revenues and margins while the markets evolve. SPs need solutions that can differentiate their offerings and
efficiently deliver services to attract and retain customers and increase per-customer revenue while reducing
costs.

To meet these needs, operators must be able to offer multiple tiers of service, perform real-time metering
for pre-paid charging and fair-use policies, and capitalize on the network infrastructure to offer more than just
Internet access services, adding voice and video on top of the basic data offering. A policy-driven approach is
instrumental for implementing the business rules that govern subscriber data usage and application entitlements
needed to support the different service plans and meet the unique requirements of individual SPs.

A typical policy management infrastructure consists of a number of distinct functions that can co-reside in the
same appliance or be spread across multiple devices. These include web portals, subscriber databases, and
charging and billing functions, all orchestrated by a policy controller device that acts as a single point of contact
for the policy enforcement points (PEPs) in the network, such as the BNG devices.

A typical architecture for subscriber policy control is shown in the following figure.

Figure 91 - Subscriber Policy Control Architecture

[Figure shows the policy controller at the center, connected northbound via SOAP to the self-management
portal, subscriber database, charging/billing systems, and OSS/BSS, and southbound via RADIUS and RADIUS
CoA to the BNG, which sits between the access network and the Internet/application servers.]

The second release of the Cisco FMC system has selected Cisco Quantum Policy Suite (QPS) as a policy
controller that integrates policy management (Quantum Policy Server [PS]), subscriber database (Quantum
Unified Subscriber Manager [USuM]), and charging/billing function (Quantum Charging Server [CS]) in a single
appliance.

Subscriber Self-Management Portal


A subscriber self-management portal is an HTTP server providing a web-based interface to subscribers for self-
provisioning activities, such as self-registration, service selection, and quota management. Subscriber activity
on the portal is reflected in web service API calls toward the policy controller through a Simple Object Access
Protocol (SOAP)-based interface, to properly update the subscriber state in the network. This may include a
change in the subscriber's active policies at the BNG, updates to the subscriber's profile stored in the subscriber
databases, or restoration of credit at the charging and billing system.

The Cisco FMC system implements a number of use cases requiring subscriber interaction with the portal that are
described in detail in the “Subscriber Experience Convergence” section of the “System Architecture” chapter.

Subscriber Databases
Subscriber databases are data storage engines that maintain subscriber profiles and policy information, such as
credentials, purchased service packages, and billing information. This data is used for subscriber authentication
and policy determination for subscriber provisioning in the network, as well as for billing purposes.

When a new or returning subscriber connects to the network, the BNG initiates a RADIUS authentication exchange
toward the policy controller which, acting as an AAA server, performs a lookup in the subscriber databases in order
to validate user credentials and to download the user profile. The user profile, containing the subscriber policies to be
activated, is then returned to the BNG as part of the same RADIUS authentication exchange.

User profiles in the subscriber database can be updated at any time for administrative reasons or as a result of
user activities on the self-management portal.

The Cisco FMC system leverages the integrated subscriber database available in Cisco Quantum Policy Suite.

Charging and Billing Systems
Billing and charging systems are responsible for the management of subscribers' credit, defined in time or
volume, and for billing. Both offline and online monitoring are available, differentiating the billing servers into
online charging systems (OCS) and offline charging systems (OfCS).

Offline monitoring is based on the post processing of charging information as part of a billing cycle and does not
affect in real time the service rendered to subscribers.

Online monitoring happens in real time. Functionality includes transaction handling, rating, and online correlation
and management of subscriber accounts/balances. Charging information can affect, in real time, the service
being offered to a subscriber and therefore requires dynamic modifications to the policies active on the
subscriber's session at the BNG. This happens with the involvement of the policy controller. OCS functions can
be leveraged both for pre-paid charging for network access services and for the deployment of fair-use policies.

The Cisco FMC system leverages the integrated OCS and OfCS functions available in Cisco Quantum Policy
Suite.

Policy Controller
The policy controller is the Cisco Policy Decision Point in the network. It includes northbound interfaces to OSS/
BSS systems, web servers, OCS/OfCS systems, and subscriber databases, and southbound interfaces to PEPs,
such as BNGs.

Northbound interfaces allow the policy manager to communicate with a number of appliances to compute the
subscriber's service entitlement in real time, based on predefined information (for example, the subscriber's
subscription) as well as dynamically triggered events, which can be administrative-, user-, or network-driven.

Southbound interfaces are the vehicle by which the dynamic provisioning of the subscriber happens on the PEP.
Embedded AAA functions enable the Policy Controller to provide RADIUS-based authentication and authorization
services to the BNG, while sophisticated rule-based engines allow for the implementation of dynamic policy
modifications to BNG’s subscriber sessions via RADIUS CoA interfaces.

The policy controller is also responsible for the processing, manipulation, and format conversion of the RADIUS
accounting messages generated by the BNG to report the subscriber's network usage, to be consumed by the
OCS/OfCS systems.

Dynamic session states are maintained for each subscriber for tracking purposes and for the execution of
advanced rules based on uptime, time of day, current active service, network usage, or other triggers.

Cisco Quantum Policy Suite embeds a policy controller, an AAA server, OCS and OfCS systems, and subscriber
databases in the same appliance.

Table 12 - Policy Controller Interfaces

Component              | Implementation    | External Interface                              | Specification
Self-management portal | External          | SOAP                                            | Cisco proprietary
OSS/BSS                | External          | SOAP                                            | Cisco proprietary
Subscriber databases   | External/internal | RADIUS authentication (proxy); Diameter Sh/Ud   |
OCS/OfCS               | External/internal | RADIUS accounting (proxy); Diameter Gx, Gy, Sy  |
BNG                    | External          | RADIUS authentication/authorization             | RFC 2865
                       |                   | RADIUS accounting                               | RFC 2866
                       |                   | RADIUS CoA                                      | RFC 3576 with Cisco extensions

Multicast
The Cisco FMC system supports services delivered via multicast transport as well as unicast. Such services
include residential broadcast video, financial trading, and Evolved Multimedia Broadcast Multicast Service
(eMBMS) for mobile. An operator's network may carry multiple multicast services concurrently on a single
infrastructure, which necessitates proper transport of the multicast services in the FMC system in order to
provide the required separation between the disparate services.

In order to provide efficient transport of multicast-based services via MPLS, Multicast Label Distribution Protocol
(MLDP) provides extensions to LDP, enabling the setup of multipoint Label-Switched Paths (MP LSPs) without
requiring multicast routing protocols such as Protocol Independent Multicast (PIM) in the MPLS core. The two
types of MP LSPs that can be set up are point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP)
LSPs. MLDP constructs the P2MP or MP2MP LSPs without interacting with or relying upon any other multicast
tree construction protocol. The benefit of using MLDP is that it utilizes the MPLS infrastructure for transporting
IP multicast packets, providing a common data plane (based on label switching) for both unicast and multicast
traffic while maintaining service separation.

End-to-end deployment of multicast transport is illustrated in the following figure.

Figure 92 - Unified MPLS Multicast

[Figure shows an access PE issuing an mLDP request with an opaque TLV pointing to the BGP next hop for the
source. A flat P2MP/MP2MP label-switched multicast tree is built end-to-end via recursive mLDP, with each ABR
performing a recursive lookup, across the access, aggregation, and core IP/MPLS domains whose per-domain
LDP LSPs otherwise underlie the BGP hierarchical P2P LSP.]

The following considerations should be taken into account when deploying multicast support in the Cisco FMC
system:
• MLDP configuration is required in all MPLS nodes involved in transporting an MLDP-based MVPN,
mainly the PE and P routers in the intra-AS case. For inter-AS, MLDP must also be enabled on the
ASBRs (see the sketch after this list).
• MLDP runs on the LDP-enabled interfaces by default. Use the mldp disable command to explicitly
exclude a particular LDP interface from running MLDP.
• Because MLDP uses the LDP-enabled interfaces, the ASBR interfaces that connect the two ASs (in the
inter-AS scenario) must be enabled for LDP as well.
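A minimal Cisco IOS XR sketch of MLDP enablement on a PE or P node follows; the interface name is an
assumption, and the per-interface mldp disable knob mentioned above would be applied under an LDP interface
that should not run MLDP:

  mpls ldp
   mldp
    logging notifications
   !
   ! MLDP runs automatically on LDP-enabled interfaces such as this one
   interface TenGigE0/0/0/0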

Residential Multicast Services
Multicast services for residential subscribers are implemented natively over IPoE IPv4 by using a dedicated N:1
VLAN between the BNG and the ANs for Ethernet-based access, and IPv4 multicast routing for MPLS-based access.

The Cisco FMC system validates multicast only for IPoE subscribers. Support of native IPoE multicast
with PPPoE subscribers requires coexistence of PPP and native IP on the same CPE WAN port, which is not
supported by the CPEs used in the architecture.

On the core/aggregation side, transport of multicast traffic for both residential and business services follows
a Rosen-based multicast VPNv4 (mVPNv4) approach, with multicast LDP (MLDP) signaling in the core of the
network for the setup of P2MP LSPs, and PIM SSM at the service edge only. Forwarding of multicast traffic in the
core network is therefore based on label switching. Global and VPN-based forwarding of multicast traffic are
explored for MPLS and Ethernet access, respectively.

The multicast distribution trees (MDT) between service edge routers are built leveraging BGP Auto Discovery
(BGP-AD) and BGP Customer Multicast (C-MCAST) signaling. BGP-AD allows for the automatic discovery of the
PEs involved in the MVPN, while BGP C-MCAST signaling translates IGMP/PIM joins from the access network
into BGP joins for C-mroutes advertisement among PEs in the MPLS core.

Specific to residential services, default MDTs (Multidirectional Inclusive Provider Multicast Service Instance
[MI-PMSI]) and data MDTs (Selective Provider Multicast Service Instance [S-PMSI]) are used for multicast delivery
through the core. The default MDT connects all PEs in an MVPN in a full-mesh fashion. The data MDT is used to
transport high-rate multicast flows in order to offload traffic from the default MDT, thus avoiding an unnecessary
waste of bandwidth and resources on PEs that did not explicitly join the high-rate multicast stream. A
configuration sketch follows.
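For illustration, a Rosen-style mLDP MVPN of this shape might be sketched as follows in Cisco IOS XR; the
VRF name, default-MDT root address, and data-MDT count are assumptions, and the exact mdt data syntax
varies by release:

  multicast-routing
   vrf RESIDENTIAL
    address-family ipv4
     ! BGP-AD discovers the PEs participating in this MVPN
     bgp auto-discovery mldp
     ! MP2MP default MDT rooted at 10.255.0.1 connects all PEs full mesh
     mdt default mldp ipv4 10.255.0.1
     ! Allow data MDTs to be spawned for high-rate flows
     mdt data 100
     interface all enable
  !
  router pim
   vrf RESIDENTIAL
    address-family ipv4
     ! Translate customer PIM/IGMP joins into BGP C-multicast routes
     mdt c-multicast-routing bgp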

Business Multicast Services


Specific to business services, the MLDP MVPN is based on MS-PMSI mLDP MP2MP with BGP-AD and BGP
C-MCAST signaling to build the MDTs between PE routers. The specific MDT used for business services is a
partitioned MDT (also known as Multidirectional Selective Provider Multicast Service Instance [MS-PMSI]), which
connects only those PEs (a partial mesh) that have active receivers. With the partitioned MDT, the MDT is built
per-ingress-PE and per-VPN, and only when customer traffic needs to be transported across the core. In
addition to the partitioned MDT, a data MDT is also used to transport high-rate multicast flows in order to offload
traffic from the partitioned MDT, thus avoiding an unnecessary waste of bandwidth and resources on PEs that
did not explicitly join the high-rate multicast stream.

BGP also plays an important role in this MVPN profile. BGP-AD is used to discover the PEs involved in the
MVPN, while BGP C-MCAST signaling translates PIM joins coming from the CPE side into BGP joins to distribute
C-mroutes among PEs. The use of BGP unifies the signaling protocol in the MPLS core: BGP serves both unicast
and multicast, rather than BGP for unicast and PIM for multicast.

For business services, the Cisco FMC system supports MVPNv4 for transporting IPv4 multicast, and MVPNv6 for
transporting IPv6 multicast. An MLDP IPv4 (MLDPv4) core tree is used for both MVPNv4 and MVPNv6 services.
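Relative to the residential sketch above, the business profile would swap the default MDT for a partitioned one;
again an illustrative Cisco IOS XR fragment with assumed names:

  multicast-routing
   vrf BUSINESS
    address-family ipv4
     bgp auto-discovery mldp
     ! Partitioned (MS-PMSI) MP2MP MDT: built per ingress PE, only toward
     ! PEs with active receivers
     mdt partitioned mldp ipv4 mp2mp
     mdt data 100
     interface all enable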

Mobile Multicast Services


Mobile operators are starting to deploy enhanced Multimedia Broadcast Multicast Services (eMBMS) in mobile
networks for delivery of broadcast video and file push services. Deployment of this type of service has been
discussed previously in the “System Architecture” chapter of this guide. This section focuses on the multicast
mechanisms utilized to deliver eMBMS in the Cisco FMC system design.

In the Core and Aggregation networks down to the service edge node, Label-Switched Multicast (LSM) is
utilized for transport of eMBMS, which in turn utilizes the mLDP-Global in-band signaling profile. In this profile,
PIM is required only at the edge of the network domain, eliminating the requirement of deploying PIM in the core
network. In the Cisco FMC system design, PIM Source Specific Multicast (PIM-SSM) is used to integrate the
multicast transport with the access networks.

Tech Tip

In this release of the Cisco FMC system design, only Single-AS, Multi-area models
support LSM with mLDP-Global signaling. A future release of FMC will extend support
to Inter-AS models as well.

In the access network, and from the PAN to the AGN-SE node if the service edge functionality is not in the PAN,
native IP PIM with SSM is utilized for the transport of IPv4 and IPv6 multicast for eMBMS. This permits lower-cost
and lower-power devices to be utilized in the access network by not requiring the recursion processing needed
for MPLS encapsulation of the multicast traffic.

On the UNI from the CSG at the edge of the access network to the eNB, two VLANs are utilized to deliver the
various interfaces to the eNB. One VLAN handles unicast interface (S1, X2, M3) delivery, while the other handles
M1 multicast traffic delivery.

When a multicast service is requested from a user endpoint device, the eNodeB will signal the transport network
to start the requested eMBMS service. The Cisco FMC system design supports both IGMPv2 and IGMPv3
signaling from the eNodeB to the CSG.
• For IGMPv2, the CSG statically maps the IGMP requests to the proper PIM-SSM groups, as sketched below.
• For IGMPv3, the CSG supports dynamic IGMP to PIM-SSM mapping.
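A minimal IOS sketch of static IGMPv2-to-SSM mapping on the CSG; the group range and MBMS-GW source
address are illustrative:

  ip multicast-routing
  ip pim ssm default
  !
  ip igmp ssm-map enable
  no ip igmp ssm-map query dns
  ! Map IGMPv2 joins for the eMBMS group range to the MBMS source
  ip igmp ssm-map static 10 192.0.2.10
  access-list 10 permit 232.0.0.0 0.255.255.255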

The service edge node acts as a leaf node for the mLDP-Global domain. It will dynamically map the PIM requests
from the CSG into mLDP in-band signaling in order to eliminate the need for PIM within the Aggregation and
Core network domains.

The MTG node uses PIM-SSM for the connection to the MBMS-GW and acts as a root node for the mLDP-
Global domain. The MTG node dynamically maps the mLDP in-band signaling into PIM-SSM requests to the
MBMS-GW.

Transport Integration with Microwave ACM


Nearly half of all mobile backhaul access networks worldwide utilize microwave links, necessitating inclusion of
microwave technology in the Cisco FMC System architecture. The FMC System integrates microwave radios
in the access network to validate transport of traffic over microwave links, including such aspects as QoS,
resiliency, OAM, and performance management. System efforts have validated microwave equipment from
multiple vendors.

The typical deployment within the Cisco FMC architecture uses the microwave gear to provide wireless
links between MPLS-enabled ANs, such as CSGs. The interconnection between the CSG and the microwave
equipment is a Gigabit Ethernet connection. As most microwave equipment used in this context supports sub-
Gigabit transmission rates, typically 400 Mbps under normal conditions, certain accommodations are made.
Namely, H-QoS policies are implemented in the egress direction on either side of the microwave link, providing
the ability to limit the flow of traffic to the bandwidth supported across the link while providing PHB enforcement
for EF and AF classes of traffic. Also, IGP metrics can be adjusted to account for the microwave links in a hybrid
fiber-microwave deployment, allowing the IGP to properly weight true Gigabit links against Gigabit ports
connected to sub-Gigabit microwave links.

Adaptive Code Modulation (ACM)
If the bandwidth provided by a microwave link were constant, then IGP weights and H-QoS shaper rates could be
set once and perform correctly. However, the bandwidth supported at a given time by a microwave link depends
upon environmental factors: weather such as fog, rain, and snow can drastically affect the link. To enable the
microwave link to support the optimal amount of bandwidth for the current conditions, the equipment supports
Adaptive Code Modulation (ACM) functionality. ACM allows the radio equipment on either end of the microwave
link to assess the current environmental conditions and automatically change the modulation being utilized to
provide the optimal amount of bandwidth for the given environment.

Regardless of the ACM status of the microwave link, the Gigabit Ethernet connection to the MPLS-enabled ANs
is constant, so the nodes are unaware of any changes to the bandwidth on the microwave link. To ensure that
optimal routing and traffic transport are maintained through the access network, a mechanism is needed to notify
the MPLS ANs of any ACM events on the microwave links. Cisco and microwave vendors (NSN and SIAE) have
implemented a vendor-specific message (VSM) in Y.1731 that allows the microwave equipment to notify Cisco
routers of ACM events and of the bandwidth available with the current modulation on the microwave link.

Figure 93 - Overview of ACM Event Signaling to MPLS Access Node

[Figure shows a microwave link, subject to fading, between two aggregation nodes; the microwave equipment
signals the current link speed to the router via a Y.1731 VSM, and policy logic on the router updates the IGP
metric on the IP/MPLS interface accordingly.]

The Cisco FMC system has implemented three actions to be taken on the MPLS ANs, which can be enacted
depending upon the bandwidth available on the microwave link:
• Adjustment of the H-QoS policy to match the current bandwidth on the microwave link.
• Adjustment of the IGP metric on the microwave link, triggering an IGP recalculation.
• Removal of link from the IGP.

H-QoS Policy Adjustment
The first action to be taken on an ACM event notification is to adjust the parameters of the egress H-QoS policy
on the MPLS AN connected to the microwave link. The AN modifies the parent shaper rate to match the
current bandwidth of the microwave link and adjusts child class parameters to ensure that the proper amount
of priority and bandwidth-guaranteed traffic is maintained. The goal is that all loss of bandwidth is absorbed by
best-effort (BE) class traffic.

If the bandwidth available is less than the total bandwidth required by the total of EF+AF classes, then the
operator can choose to have AF class traffic experience loss in addition to BE traffic, or to have the link removed
from service.
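For illustration, a baseline egress H-QoS policy of this shape might look as follows on an IOS-based AN; class
names and rates are assumptions, with the parent shaper set to the nominal 400 Mbps microwave rate:

  policy-map MW-CHILD
   class EF
    priority
    police cir 50000000            ! priority (EF) traffic policed at 50 Mbps
   class AF
    bandwidth remaining percent 60 ! guaranteed share for AF traffic
   class class-default             ! BE absorbs any ACM-driven bandwidth loss
  !
  policy-map MW-PARENT
   class class-default
    shape average 400000000        ! parent shaper tracks the microwave link rate
    service-policy MW-CHILD
  !
  interface GigabitEthernet0/1
   service-policy output MW-PARENT

On an ACM event, it is the parent shape average value that would be revised downward to the newly signaled
link bandwidth.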

IGP Metric Adjustment


In addition to H-QoS adjustments, the MPLS AN will adjust the IGP metric on the microwave link to correlate with
the current bandwidth available. This will trigger an IGP SPF recalculation, allowing the IGP to take the correct
bandwidth into account for routing of traffic in the access network.
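Illustration only: where such a reaction is scripted rather than handled by the embedded policy logic, an IOS
EEM applet along the following lines could raise the IS-IS metric; the syslog trigger pattern and metric value are
purely hypothetical:

  event manager applet MW-BW-DEGRADE
   ! Hypothetical trigger; the validated system reacts to the Y.1731 VSM notification
   event syslog pattern "MW-LINK.*BANDWIDTH-DEGRADED"
   action 1.0 cli command "enable"
   action 2.0 cli command "configure terminal"
   action 3.0 cli command "interface GigabitEthernet0/1"
   action 4.0 cli command "isis metric 500 level-2"
   action 5.0 syslog msg "Raised IS-IS metric on Gi0/1 after microwave ACM event"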

Link Removal
At a certain operator-determined threshold of degradation, one that would impact all service classes across
the microwave link, the MPLS AN removes the microwave link from the IGP. This instigates the resiliency
mechanisms in the access network to bypass the degraded link, resulting in minimal traffic loss.
The link is not brought administratively down, so that the microwave equipment can signal to the AN once the
microwave link is restored.

OAM and Performance Monitoring


The Cisco FMC system defines an operations, administration, and maintenance (OAM) subsystem that is broadly
classified into two categories: Service OAM and Transport OAM. Service OAM and Transport OAM rely on the
same set of protocols to provide end-to-end OAM capabilities, including fault and performance management, but
focus on different functional areas.

Service OAM is a service-oriented mechanism that operates and manages the end-to-end services carried
across the network. It is provisioned only at the touch points associated with the end-to-end service, and is
primarily used for monitoring the health and performance of the service. Service OAM ensures services are up
and functional, and that the SLA is being met. When services are affected due to network events, it provides the
mechanisms to detect, verify, and isolate the network faults. The following protocols are the building blocks of
Service OAM:
• ATM Service OAM:
◦ F4/F5 VC/VP ATM OAM
• Ethernet Service OAM and PM:
◦ 802.1ag Connectivity Fault Management (CFM)
◦ MEF Ethernet Local Management Interface (E-LMI)
◦ ITU-T Y.1731: OAM/PM for Ethernet-based networks
◦ Cisco IP SLA PM based on CFM

• MPLS VPWS Service OAM and PM:
◦ Virtual Circuit Connectivity Verification (VCCV) PW OAM: PW ping, BFD failure detection
◦ Cisco IP SLA PM based on CFM. A future release of the FMC architecture will support PW OAM-
based PM
• MPLS VPLS Service OAM and PM:
◦ VCCV PW OAM: PW ping, BFD failure detection
◦ Cisco IP SLA PM based on CFM
• IP/MPLS VPN Services OAM and PM:
◦ IP and VRF ping and traceroute
◦ BFD single-hop and multi-hop failure detection
◦ Cisco IP SLA PM based on CFM
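As a sketch of the CFM-based PM building block, an IOS-style configuration might resemble the following; the
domain name, EVC, MEP IDs, and CoS value are invented for illustration:

  ethernet cfm ieee
  ethernet cfm domain SERVICE level 6
   service CUST1 evc EVC-CUST1
    continuity-check
    continuity-check interval 1s
  !
  ! Two-way delay measurement (Y.1731 DMM) between local MEP 100 and remote MEP 200
  ip sla 10
   ethernet y1731 delay DMM domain SERVICE evc EVC-CUST1 mpid 200 cos 5 source mpid 100
  ip sla schedule 10 life forever start-time now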

Transport OAM is a network-oriented mechanism that operates and manages the network infrastructure. It is
ubiquitous in the network elements that make up the network infrastructure, and it is primarily used for monitoring
health and performance of the underlying transport mechanism on which the services are carried. The primary
purpose of Transport OAM is to keep track of the state of the transport entities (MPLS LSP, Ethernet VLAN, etc.).
It monitors the transport entities to ensure that they are up and functional and performing as expected, and
provides the mechanisms to detect, verify, and isolate the faults during negative network events. The following
protocols are the building blocks of Transport OAM:
• Ethernet Transport OAM and PM:
◦ IEEE 802.3ah: Ethernet Link OAM
◦ 802.1ag CFM
◦ International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Y.1731:
OAM/PM for Ethernet-based networks
◦ Cisco IP SLA PM based on CFM
• IP/MPLS Transport OAM and PM:
◦ BFD single-hop and multi-hop failure detection
◦ IP and MPLS LSP ping and traceroute
◦ Cisco IP SLA PM
◦ Future releases of the FMC architecture will support G-ACh-based OAM and PM for MPLS LSPs
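Transport OAM on-demand verification is typically exercised with commands like the following; addresses and
VC ID are illustrative:

  ! Verify the LSP toward a remote loopback, and trace its label-switched path
  ping mpls ipv4 100.111.0.1/32
  traceroute mpls ipv4 100.111.0.1/32
  !
  ! VCCV ping for pseudowire VC 100 toward peer 100.111.0.1
  ping mpls pseudowire 100.111.0.1 100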

OAM Implementation for Mobile Services
For mobile backhaul services, the Cisco FMC system utilizes a subset of these protocols to provide the required
service and transport OAM and PM functionality between the CSG and the MTG. The details of the required
mechanism are highlighted in the following figure.

Figure 94 - OAM Implementation for Mobile Services

[Figure shows, between the Node B/CSG and the MTG/RNC/BSC/SAE GW: for LTE and 3G IP (UMTS)
transport, IP SLA probes in the VRFs providing PM over MPLS VRF OAM at the service layer; for 2G TDM and
3G ATM (UMTS) transport, IP SLA PM (future PW PM) over MPLS VCCV PW OAM; and, at the transport layer,
IP OAM over the end-to-end inter-domain Unified MPLS LSP (future LSP OAM).]

OAM Implementation for Business L2VPN Services
The Cisco FMC system covers service and transport OAM and PM functionality for business L2VPN services
for both managed and unmanaged CPE devices. The details of the mechanism utilized are highlighted in the
following figure.

Figure 95 - OAM Implementation for Business L2VPN Services

[Figure shows, between CPEs attached via CSG/FAN/PAN-SE/AGN-SE nodes: for managed CPE, up MEPs at
level 6 with Y.1731 AIS/LCK and down MEPs/MIPs at level 7 running Ethernet CFM with IP SLA PM probes over
MPLS VCCV PW OAM at the service layer; for unmanaged CPE, up MEPs at level 6 running Ethernet CFM with
Y.1731 PM probes, plus E-LMI toward the CPE; at the transport layer, link OAM on the access links and MPLS
LSP OAM across the core.]

OAM Implementation for Business L3VPN Services
The Cisco FMC system covers service and transport OAM and PM functionality for business L3VPN services for
both managed and unmanaged CPE devices. Service transport from the FAN or CSG to the service edge node
is handled via PW, which is terminated and mapped to the MPLS VRF via PWHE. This results in the combination
of both PW OAM and VRF OAM at the service layer. The details of the mechanism utilized are highlighted in the
following figure.

Figure 96 - OAM Implementation for Business Services

[Figure shows E-LMI and link OAM toward the CPE, MPLS VCCV PW OAM from the CSG/FAN/PAN to the
PWHE termination at the service edge (PAN-SE/AGN-SE), MPLS VRF OAM with IP SLA PM probes between the
service edges, and MPLS LSP OAM at the transport layer.]

Autonomic Networking
Autonomic Networking makes devices more intelligent and simplifies network operations for the service
provider's staff by automating various aspects of device initialization, provisioning, and day-2 operations.

Reader Tip

Autonomic networking functionality is currently available for Early Field Trial (EFT) and Proof
of Concept (POC) validation on the ASR 901 platform with Cisco IOS release 15.3(3). This
functionality allows service providers to start experimenting with aspects of Autonomic
Networking. Production Autonomic Networking support will be available on the ASR 901
platform with the next IOS release. Support for other Cisco platforms as well as enhanced
AN functionality will be available in a future release of the Cisco FMC system.

For more information about Autonomic Networking support with the ASR 901, see the
following website:
http://www.cisco.com/en/US/docs/wireless/asr_901/Release/Notes/
asr901_rn_15_3_3_S.html#wp30866

The aim of Autonomic Networking is to create self-managing networks to overcome the rapidly growing
complexity of the Internet and other networks and to enable further growth of these networks. In a self-managing
autonomic system, user intervention takes on a new role: instead of controlling the system directly, the user
defines policies and rules that guide the self-management process.

An IETF draft framework describing the concepts covered by Autonomic Networking is available at the following
link: http://tools.ietf.org/html/draft-behringer-autonomic-network-framework-00. The following diagram provides
an illustration of the high-level architecture of the Autonomic Networking system.

Figure 97 - High-Level Architecture of an Autonomic System

[Figure shows two devices, each running an autonomic process alongside its device OS. The autonomic
processes of the devices interact with each other and present an abstract, global network view to simple
management tools, while the device OSs continue their traditional interactions (e.g., routing).]

Autonomic Networking is a software process integrated into Cisco IOS software that runs independently of
other traditional networking processes, such as IP, OSPF, etc. The traditional networking processes are typically
unaware of the presence of the AN process. The AN components use the normal interfaces exposed by the
traditional networking components. In the same way that the traditional networking components of different
devices interact with each other, the AN components of different devices also interact with each other. The
autonomic components of different devices securely cooperate in order to add more intelligence to the
devices so that the devices in AN can configure, manage, protect, and heal themselves with minimal operator
intervention. Also, the AN components running across the devices securely consolidate their operations in order
to present a simplified and abstracted view of the network to the operator.

The benefits of the Autonomic Networking infrastructure, as delivered from Cisco IOS release 15.3(3), are as
follows:
• Autonomic discovery of Layer 2 (L2) topology and connectivity by discovering how to reach autonomic
neighbors.
• Secure and zero touch identity bootstrap of new devices. In this process, each autonomic device
receives a domain-based certificate from the registrar, which is used to secure subsequent transactions
and to establish the autonomic control plane.
• An autonomic control plane is created that enables secure communications between autonomic nodes.

Autonomic behavior is enabled by default. You can disable the behavior by using the no autonomic command.
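The enable/disable knob is the one named above; the show commands below are assumptions based on
AN-capable IOS releases and may differ:

  ! Autonomic behavior is on by default; to opt a node out:
  configure terminal
   no autonomic
  !
  ! Assumed verification commands:
  show autonomic device
  show autonomic neighbors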

The components of Autonomic Networking are as follows:

Registrar—An autonomic registrar is a domain-specific registration authority in a given enterprise that validates
new devices in the domain and makes policy decisions. The policy decisions include whether a new device can
join a given domain. The registrar also has a database of all devices that have joined a given domain, as well as
device details.

Channel Discovery—This applies to Layer 2 networks and is used to discover communication channels between
autonomic nodes, for example, VLANs. In some networks the autonomic nodes operating on Layer 3 may be
connected through a Layer 2 network, on which only certain VLANs are available. Channel Discovery finds those
VLANs.

Adjacency Discovery—Autonomic nodes discover neighbors using Layer 3 discovery packets.

Autonomic Control Plane—The autonomic control plane is established between the neighbors crossing non-
autonomic Layer 2 devices. All autonomic nodes communicate securely over this autonomic control plane.

New Device Joining the Autonomic Network


When adding a new AN-capable device to an existing AN domain, the following actions occur for the device to
join the autonomic network:

Step 1:  Nodes exchange their identity using autonomic adjacency discovery packets. If a device is new, it
uses its Unique Device Identifier to identify itself; if a device is already enrolled in a domain, it uses its domain
certificate to identify itself. Nodes must be directly connected on Layer 3 (non-autonomic Layer 2 devices in
between are transparent to this discovery).

Step 2:  The domain device acts as a proxy and allows the new device to join its AN domain. It forwards the
information about the new device to the registrar.

Step 3:  The registrar validates whether the new device is allowed to join the domain. If so, the new device
receives a domain certificate from the registrar.

Step 4:  The new device now advertises its domain certificate in its hello message with all neighbors. The
neighbor information is exchanged every 30 seconds and the neighbor table is refreshed with the time stamp of
the last update.

Autonomic Control Plane


Once the new device receives the domain certificate, it exchanges the domain certificate in the adjacency
discovery messages with its neighbors. This leads to the creation of an autonomic control plane between two
autonomic devices of the same domain. The autonomic control plane is a segment-based system of IPv6 tunnels
between AN devices.

The autonomic control plane provides a virtual out-of-band (OOB) management channel to allow reachability
from the network operations center to the new device for initial configuration and provisioning. This eliminates
the need for field technicians to have any knowledge of device configuration when bringing up new nodes in the
Cisco FMC network.

Conclusion
As explained in depth in this Design Guide, the Cisco FMC system gives operators a proven architecture,
platforms, and solutions to address the dramatic changes in subscriber behavior and in the consumption of
communications services over both fixed and mobile access, and provides operational simplification, all at
optimized cost points. Expanding on the Unified MPLS concept originally developed in the UMMT system program,
the FMC system encompasses residential and business fixed wireline service transport in addition to mobile
backhaul. The FMC system also provides optimal integration of SE functionality for fixed wireline services into the
transport network, providing tighter integration of transport and service aspects. Finally, this system incorporates
key components from third party vendors to supply integrated microwave and passive optical network (PON)
access support, as well as wireline Policy and Charging Rules Function (PCRF) server support.

The Unified MPLS concept at the heart of the Cisco FMC system resolves legacy challenges such as scaling
MPLS to support tens of thousands of end nodes, and provides the required MPLS functionality on cost-effective
platforms without the complexity of technologies like Traffic Engineering FRR (TE-FRR) to meet transport SLAs.
By addressing the scale, operational simplification, and cost of the MPLS platform, the FMC system provides a
comprehensive solution to the converged operator seeking an immediately deployable architecture suitable for
deployment of residential, business, and mobile services on a converged platform.

Cisco FMC Highlights


To recap, here are some highlights of the key aspects addressed in the Cisco FMC system:
• Decoupling of transport and service layers. Enables end-to-end MPLS transport for any service, at
any scale. Optimal service delivery to any location in the network is unrestricted by physical topological
boundaries.
• Scaling of the MPLS infrastructure using RFC 3107 hierarchical LSPs. RFC 3107 procedures define
the use of Border Gateway Protocol (BGP) to distribute labels such that BGP can split up large routing
domains to manageable sizes, yet still retain end-to-end connectivity.
• Optimal integration of wireline service edge aspects in transport network. Residential Broadband
Network Gateway (BNG) and business Multiservice Edge (MSE) functions are integrated into the nodes
comprising the transport network, allowing for optimal distribution of the SE and subscriber SLA and
policy enforcement in the network.
• Common Service Experience. Enabled by a common PCRF for both fixed and mobile networks for
consumers and business subscribers over fixed and mobile access, with mediated subscriber identities
and common services transport and policies.
• MPLS VPN over fixed and mobile access. Provides expanded addressable market to service providers
for business service delivery to locations without fixed wireline access via 3G- or LTE-attached services.
• SP-managed public Wi-Fi services in the residential home. Provides service providers with expanded
Wi-Fi service coverage through deployment via residential service connections.
• Improved high availability. Multi-Router Automatic Protection Switching (MR-APS), pseudowire
redundancy, and remote Loop-Free Alternate (LFA) FRR supporting arbitrary topologies in access and
aggregation deliver zero-configuration 50 msec convergence, complemented by labeled BGP Prefix-
Independent Convergence (PIC) for edge and core.
• Simplified provisioning of mobile and wireline services. New service activation requires only endpoint
configuration.

• Virtualization of network elements. Implementation of virtualized route reflector functionality on a Cisco
Unified Computing System (UCS) platform provides scalable control plane functionality without requiring
a dedicated router platform.
• Highly-scaled MPLS VPNs support transport virtualization. This enables a single fiber infrastructure to
be re-utilized in order to deliver transport to multiple entities, including mobile for retail and wholesale
applications, residential, and business services. This enables the one physical infrastructure to support
multiple VPNs for LTE and wireline services.
• Comprehensive multicast support. Efficient and highly-scalable multicast support for residential,
business, and mobile services.
• TDM circuit support. Addition of TDM transport over packet for legacy business TDM services and the
Global System for Mobile Communications (GSM) Abis interface.
• ATM circuit support. Addition of ATM transport over packet for legacy business ATM services and 3G
Iub support.
• Microwave support. Full validation and deployment recommendations for Cisco’s microwave partners:
NEC (with their iPASOLINK product), SIAE (with their ALCPlus2e and ALFOplus products), and NSN (with
their FlexiPacket offering). Microwave support includes integration between Cisco routers and microwave
equipment (from NSN and SIAE) in order to correlate microwave transmission speed changes to IGP
routing metrics and QoS policy values.
• Synchronization distribution. A comprehensive synchronization scheme is supported for both frequency
and phase synchronization. Synchronous Ethernet is used in the core, aggregation, and access domains
where possible. Where SyncE may not be possible, based on the transmission medium, a hybrid
mechanism is deployed converting SyncE to IEEE 1588v2 timing, with the IEEE 1588v2 Boundary Clock
(BC) function in the aggregation providing greater scalability. Cisco FMC now supports hybrid SyncE and
Precision Time Protocol (PTP) with 1588 BC across all network layers.
• QoS. In order to deliver a comprehensive QoS design, Cisco FMC leverages DiffServ QoS for core and
aggregation, H-QoS for microwave access and customer-facing SLAs, and support for LTE QoS class
identifiers (QCIs) and wireline services.
• OAM and Performance Monitoring. Operation, Administration, and Maintenance (OAM) and
Performance Management for Label-Switched Path Transport, MPLS VPN, and virtual private wire
service (VPWS) services are based on IP SLA, PW OAM, MPLS and MPLS OAM, and future Internet
Engineering Task Force (IETF) MPLS PM enhancements.
• LFA for FRR capabilities. The required 50ms convergence time inherent in Synchronous Optical
Networking/Synchronous Digital Hierarchy (SONET/SDH) operations used to be achieved in packet
networks with MPLS TE-FRR. This has been successfully deployed in core networks, but not in access
networks due to the complexity of additional required protocols and overall design. LFA delivers the
same fast convergence for link or node failures without any new protocols or explicit configuration on a
network device. Hub-and-spoke topologies are currently supported, with a later release extending LFA
coverage to arbitrary topologies.

Until now, fixed network infrastructures have been limited to wireline service delivery, and mobile network
infrastructures have been composed of a mixture of many legacy technologies that have reached the end of
their useful life. The Cisco FMC system architecture provides the first integrated, tested, and validated converged
network architecture, meeting all the demands of wireline service delivery and mobile service backhaul.

Cisco FMC Benefits Summary


• Flexible deployment options for multiple platforms to optimally meet size and throughput requirements
of differing networks.
• High-performance solution utilizing the highest capacity Ethernet aggregation routers in the industry.
The components of this system can be in service for decades to come.
• Tested and validated reference architecture that allows operators to leverage a pre-packaged
framework for different traffic profiles and subscriber services.
• Promotes significant capital savings from various unique features such as pre-tested solutions,
benchmarked performance levels, and robust interoperability, all of which are validated and prepackaged
for immediate deployment.
• Enables accelerated time-to-market based on a pre-validated, turnkey system for wireline service
delivery and mobile service backhaul.
• Complementary system support, with mobile video transport optimization integration; I-WLAN
untrusted offload support on the same architecture; Mobile Packet Core (MPC); and cost-optimized
performance for Voice over LTE (VoLTE), plus additional services such as Rich Communication Suite
(RCS).
• Cisco’s IP expertise is available to operators deploying Cisco FMC through Cisco Services. These
solutions include physical tools, applications, and resources plus training and annual assessments
designed to suggest improvements to the operator’s network.

Related Documents
The FMC 2.0 Design Guide is part of a set of resources that comprise the Cisco FMC system documentation
suite. The resources include:
• FMC 2.0 System Brochure: At-a-glance brochure of the Cisco Fixed Mobile Convergence System.
• FMC 2.0 Transport Implementation Guide: Implementation guide with configurations for the transport
models and cross-service functional components supported by the Cisco FMC system design.
Document structuring, based on access type and network size, leads to six architecture models that
fit various customer deployments and operator preferences: Small Network Design Base with Labeled
BGP Access, Small Network Design with non-MPLS Access, Large Network Inter-AS Design Base with
Labeled BGP Access, Large Network Inter-AS Design with non-MPLS Access, Large Network Single-AS
Design Base with Labeled BGP Access, and Large Network Single-AS Design with non-MPLS Access.
• FMC 2.0 Residential Services Implementation Guide: Implementation guide with configurations for
deploying the residential service models and use cases supported by the Cisco FMC system design.
• FMC 2.0 Business L2VPN and L3VPN Services Implementation Guide: Implementation guide with
configurations for deploying the business L2VPN and L3VPN service models and use cases supported
by the Cisco FMC system design.
• FMC 2.0 Mobile Backhaul Services Implementation Guide: Implementation guide with configurations
for deploying the Mobile Backhaul service transport models and use cases supported by the Cisco FMC
system design.

Reader Tip

All of the documents listed above, with the exception of this Design Guide, are
considered Cisco Confidential documents. Copies of these documents may be
obtained under a current Non-Disclosure Agreement with Cisco. Please contact a
Cisco Sales account team representative for more information about acquiring copies
of these documents.

Glossary
#
3GPP 3rd Generation Partnership Project
A
AAA authentication, authorization, and accounting
ABR Area Border Router
ACL access control list
ACM Adaptive Code Modulation
AF assured forwarding
AFI address family identifier
AGN aggregation node
AGN-ASBR aggregation node-Autonomous System Boundary Router
AGN-RR aggregation-node route reflector
AGN-SE aggregation node service edge
AIS alarm indication signal
AN access node
ANRA Autonomic Networking Registration Authority
APN access point name
ARP Address Resolution Protocol
AS access switch
ASBR Autonomous System Boundary Router
AToM Any Transport over MPLS
AVP attribute-value pair
B
BC boundary clock
BD bridge domain
BE best effort
BFD Bidirectional Forwarding Detection
BGP Border Gateway Protocol
BGP-AD BGP Auto Discovery
BITS Building Integrated Timing Supply
B-MAC Bridge-MAC
BNG Broadband Network Gateway
BSC/RNC base station controller/radio network controller
BTS base transceiver station
BYOD Bring Your Own Device

C
CAGR compound annual growth rate
CAPEX capital expenditure
CDN Content delivery network
CE customer edge
CEoP Circuit Emulation over Packet
CE-PE customer-edge to provider-edge
CESoPSN Circuit Emulation over Packet Switching Network
CFM Connectivity Fault Management
CGNAT Carrier Grade NAT
CHAP Challenge Handshake Authentication Protocol
CIR committed information rate
CLP cell loss priority
C-MAC Customer-MAC
C-MCAST Customer Multicast
CN core node
CN-ABR core node-Area Border Routers
CN-ASBR core node-Autonomous System Boundary Router
CN-RR core-node route reflector
CoA change of authorization
CoS class of service
CPE customer premises equipment
CSG Cell Site Gateway
c-tag customer tag
C-VLAN customer VLAN
D
DHCP Dynamic Host Configuration Protocol
DLRA DHCPv6 Light-weighted Relay Agent
DNS Domain Name System
DoD downstream-on-demand
DPI Deep Packet Inspection
DSCP differentiated services code point
DSLAM Digital Subscriber Line Access Multiplexer
DTI DOCSIS Timing Interface

E
eBGP exterior BGP
EF expedited forwarding
EFT early field trial
E-LAN Ethernet LAN
E-Line Ethernet Line
E-LMI Ethernet Local Management Interface
eMBMS Enhanced Multimedia Broadcast Multicast Service
eNB enhanced NodeB
EoMPLS Ethernet over Multiprotocol Label Switching
EPC Evolved Packet Core
EPL Ethernet Private Line
EP-LAN Ethernet Private LAN
ESMC Ethernet Synchronization Message Channel
E-UTRAN Evolved Universal Terrestrial Radio Access Network
EVI EVPN Instance
E-VLAN Ethernet VLAN
EVPL Ethernet Virtual Private Line
EVP-LAN Ethernet Virtual Private LAN
EVPN Ethernet VPN
EXP bits Experimental bits
F
FAN fixed access node
FEC Forwarding Equivalence Class
FMC Fixed Mobile Convergence
FRR Fast Reroute
FSE fixed service edge
FTTH fiber to the home
FTTx fiber to the home/building/business
G
GBR guaranteed bit rate
GGSN gateway GPRS support node
GNSS global navigation satellite system
GPON Gigabit Passive Optical Network
GPRS General Packet Radio Service
GRE generic routing encapsulation
GSM Global System for Mobile Communications
GTP GPRS Tunneling Protocol

Glossary September 2013


152
H
H-QoS hierarchical quality of service
HSI High-Speed Internet
HSRP Hot Standby Router Protocol
H-VPLS Hierarchical Virtual Private LAN Service
I
IA Intermediate Agent
iBGP internal BGP
IGMP Internet Group Management Protocol
IGP Interior Gateway Protocol
IGW Internet Gateway
I-HSPA Internet-High Speed Packet Access
IMA inverse multiplexing over ATM
IPoE IP over Ethernet
IPv6CP IPv6 Control Protocol
IS-IS Intermediate System-to-Intermediate System (IS-IS) Protocol
ITU-T International Telecommunication Union Telecommunication Standardization Sector
L
L2VPN Layer 2 VPN
L3VPN Layer 3 VPN
LDP Label Distribution Protocol
LFA Loop-Free Alternate
LFIB Label Forwarding Information Base
LRs logical routers
LSM Label-Switched Multicast
LSP Label-Switched Path
LTE long-term evolution
M
MAP-T mapping of address and port using translation
MBMS Multimedia Broadcast Multicast Service
MBMS-GW MBMS Gateway
MDT multicast distribution trees
MEF Metro Ethernet Forum
MHD multi-homed device
MHN multi-homed network
MI-PMSI Multidirectional Inclusive Provider Multicast Service Instance
mLACP multichassis Link Aggregation Control Protocol
MLDP Multicast Label Distribution Protocol
MME Mobility Management Entities
MP2MP multipoint-to-multipoint
MP-BGP multiprotocol BGP

Glossary September 2013


153
MPC Mobile Packet Core
MP-EBGP multiprotocol external BGP
MP-iBGP multiprotocol internal BGP
MPLS Multiprotocol Label Switching
MP-LSP multipoint Label-Switched Path
MR-APS Multirouter Automatic Protection Switching
MSE Mobile Service Edge
MSP mobile service provider
MS-PMSI Multidirectional Selective Provider Multicast Service Interface
MS-PW multisegment pseudowire
MTG Mobile Transport Gateway
MVPN Multicast Virtual Private Network
MVR Multicast VLAN Registration
N
NAS-Port-ID Network Access Server Port Identifier
NAT network address translation
ND RA Neighbor Discovery Router Advertisements
NGMN Next-Generation Mobile Network
NGN Next Generation Network
NHS next hop self
NNI Network-to-Network Interface
NSN Nokia Siemens Networks
O
OAM operations, administration, and maintenance
OC-3 Optical Carrier 3
OCS online charging systems
OfCS offline charging systems
OLT optical line terminal
ONT optical network terminal
ONU optical network unit
OOB out-of-band
OPEX operating expenses
OSPF Open Shortest Path First
OSS operations support system

P
P2MP point-to-multipoint
P2P point-to-point
PAN pre-aggregation node
PAN-SE pre-aggregation node service edge
PAT Port Address Translation
PBB Provider Backbone Bridge
PCRF Policy and Charging Rules Function
PD prefix delegation
PDP Packet Data Protocol
PDV packet delay variation
PE provider edge
PEP policy enforcement point
PGW Packet Data Network Gateway
PHB per-hop behavior
PHP penultimate hop popping
PIC Prefix-Independent Convergence
PIM Protocol Independent Multicast
PIM-SSM PIM Source Specific Multicast
PIR peak information rate
PM performance management
PMC Primary Master Clock
POC Proof of Concept
PON Passive Optical Network
POP point of presence
PPP Point-to-Point Protocol
PPPoE Point-to-Point Protocol over Ethernet
PPS pulse per second
PRC Primary Reference Clock
PRTC Primary Reference Time Clock
PTP Precision Time Protocol
PVC permanent virtual circuit
PW pseudowire
PWE pseudowire emulation
PWE3 pseudowire emulation edge-to-edge
PWHE pseudowire headend
Q
QCI QoS class identifier
QL quality level
QoS quality of service
QPS queries per second

R
RADIUS Remote Authentication Dial-In User Service
RAN Radio Access Network
RCS Rich Communication Suite
RFSS Radio Frequency Subsystem
RNC Radio Network Controller
RPL routing policy language
RR route reflector
RT route target
S
SAE System Architecture Evolution
SAFI subsequent address family identifier
SAToP Structure-Agnostic TDM over Packet
SCTP Stream Control Transmission Protocol
SDH Synchronous Digital Hierarchy
SDU Systems Development Unit
SE service edge
SGW Serving Gateway
SLA service-level agreement
SOAP Simple Object Access Protocol
SONET Synchronous Optical Networking
SP service provider
SPF shortest-path first
S-PMSI Selective Provider Multicast Service Interface
SPR Subscriber Profile Repository
SR-APS single-router automatic protection switching
SSID Service Set Identifier
SSM synchronization status message
S-tag service tag
STM1 Synchronous Transport Module level 1
SUDI Secure Unique Device Identifier
SVI switch virtual interface
S-VLAN service provider VLAN
T
TCO total cost of ownership
TDD time-division duplex
TDM time-division multiplexing
T-LDP Targeted Label Distribution Protocol
ToD time of day
TR technical report

U
UCS Cisco Unified Computing System
UDP User Datagram Protocol
UMMT Unified MPLS for Mobile Transport
UMTS Universal Mobile Telecommunications Service
UNI User-Network Interface
USuM Cisco Quantum Unified Subscriber Manager
UTC Coordinated Universal Time
V
VC virtual circuit
VFI virtual forwarding instance
VoD video on demand
VoIP voice over IP
VoLTE voice over LTE
VP virtual path
VPLS Virtual Private LAN Service
VPNv4 Virtual Private Network IP Version 4
VPNv6 Virtual Private Network IP Version 6
VPWS Virtual Private Wire Service
VRF Virtual Routing and Forwarding
VRRP Virtual Router Redundancy Protocol
W
WPA Wi-Fi Protected Access

Feedback

Please use the feedback form to send comments and suggestions about this guide.

Americas Headquarters: Cisco Systems, Inc., San Jose, CA
Asia Pacific Headquarters: Cisco Systems (USA) Pte. Ltd., Singapore
Europe Headquarters: Cisco Systems International BV, Amsterdam, The Netherlands

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, “DESIGNS”) IN THIS MANUAL ARE PRESENTED “AS IS,”
WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR
A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS
SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR
DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES. THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS
DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL
ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

Any Internet Protocol (IP) addresses used in this document are not intended to be actual addresses. Any examples, command display output, and figures included in the
document are shown for illustrative purposes only. Any use of actual IP addresses in illustrative content is unintentional and coincidental.

© 2013 Cisco Systems, Inc. All rights reserved.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this
URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership
relationship between Cisco and any other company. (1110R)

B-0000140F-1 09/13
