• Single memory pool is used for Linux, IOSd, and the services plane
IOS XE in enterprise next generation networks
• Deployment roles: corporate HQ WAN aggregation, DCI, regional office, Internet gateway, branch, cloud
• Platforms: ASR 1000 (up to 200G per slot), ISR G1 & G2 Series, 7200 Series, ISR 4k Series (40G per slot)
• Service use cases: Carrier Ethernet + BNG, IP RAN, L2/L3 VPNs, Vidmon, hosted firewall (20–360G per system), SBC/VoIP, IPSec, broadband, DPI, route reflector, distributed PE
Software architecture

IOS XE software architecture
• IOS + IOS XE middleware + platform software
• IOS runs as its own Linux process for the control plane
• Linux kernel with multiple processes running in protected memory
• Fault containment, re-startability
• ISSU of individual SW packages
• With redundant data plane hardware, packet loss can be as low as 50 ms at failover

Control plane components: IOS (active/standby), Platform Adaptation Layer (PAL), chassis manager, and forwarding manager-RP on a Linux kernel. Data plane components: forwarding engine client and driver, chassis manager, and forwarding manager-FP on a Linux kernel. Control messaging links the two planes.
IOS XE software architecture: ASR1000 and ISR4000 implementations
• Both implementations share the same layering: a control plane (IOS active/standby, PAL, chassis manager, forwarding manager) over a Linux kernel, control messaging, and a data plane (forwarding engine client and driver) over a Linux kernel
IOS XE architecture building blocks
• IOS (control plane)
  • Runs the control plane; generates configurations; maintains routing tables (RIB, FIB…)
• Platform Adaptation Layer (PAL)
  • Provides abstraction layer between hardware & IOS
• Forwarding manager-RP
  • Manages ESP redundancy; maintains copy of FIB and interface list; communicates FIB status to active & standby data plane FM
• Chassis manager
  • Initialization of RP processes and installed cards; detects and manages OIR of cards; manages system status, environmentals, power, EOBC
• Forwarding manager-FP (data plane)
  • Maintains copy of FIBs; programs forwarding plane and forwarding engine DRAM; statistics collection
QoS
• HQF support; MQC classification / marking; egress queuing; 3-level hierarchical scheduling; egress classification on QoS group
• Dual/single rate 3 color policing; bandwidth remaining ratio; policy aggregation
• 2 PQs, 128K queues; 16K policy-maps; 1000 class-maps
• ATM service policies (VP/VC); ATM shaping per VP/VC; NBAR; FPM
IOS XE features (for your reference)
Security
• Hardware assisted IPSec; IPSec VPN 3DES/AES; IPv6 IPSec; static VTI; VRF-aware IPSec
• Zone-based Firewall; VRF-aware zone-based Firewall; RTSP Firewall ALG
• FIPS compliance; Control Plane Policing
• DMVPN; DMVPN Hierarchical Hub; GETVPN
• NAT; VRF-aware NAT

SBC
• Distributed and Integrated SBC; Megaco/H.248; DBE control interface H.248 v4; H.248 ACK 3-way; H.248 interim accounting
• NAPT; Twice NAT for IPv4; No-NAT for IPv6; IPv6 support
• Flexible header manipulation; Privacy Header; topology/identity hiding
• DoS protection; pinhole/filter control; flow-based QoS control; signaling congestion control
• SIP signaling/latching transport (UDP, TCP, etc.); SIP–H.323 and H.323–H.323; SBC endpoint switching
Network management
• LAN Management Solution; Cisco Information Center; QoS Policy Manager; IP Solution Center
• MPLS Diagnostics Expert; Netflow Collector; Cisco Security Manager; Cisco Multicast Manager
• Traffic Engineering Manager; MPLS LSP Ping / Traceroute; MIBs; SNMP
• Syslog; VRF-aware NetFlow
IOS XE chassis manager
• Initializes hardware and boots other processes
• Manages EOBC switch on RP in ASR1000
• Manages ESI links on RP/ESP/SIP in ASR1000
• Manages timing circuitry
• Communicates with IOS to make it aware of the hardware components
IOS XE forwarding manager
• FM in control plane communicates with peer FM processes in data plane
• Distributed control function: propagates control plane operations to the data plane; collects statistics
• FM on the active control plane maintains state for both active and standby data plane hardware
• Facilitates NSF after re-start with bulk-download of state information
IOS XE forwarding engine control
• Forwarding engine client
  • Allocates and manages resources on forwarding engine (data structures, memory, scheduling hierarchy)
  • Receives requests from IOS via FM processes
  • Re-initializes FE and memory if a software error occurs
• Forwarding engine (runs μcode)
  • ASR1000 QFP implements the data plane on PPEs
  • ISR4000 platforms use other multicore chips running the same microcode compiled for an alternate processor
Feature Invocation Array – FIA
• Per-protocol feature chains (IPv6, IPv4, MPLS, X-connect, L2 switch) are invoked after L2/L3 classification
• Example IPv4 chain: IPv4 validation → BGP policy accounting → WCCP → FPM → queuing, plus MLP where configured
• Inspect with: show platform hardware qfp active interface if-name <name>
System Architecture – ASR1000 Modular Platforms

ASR 1000 series
• Compact, powerful router: line-rate performance 2.5G to 200G+ with services enabled; hardware QoS engine with up to 128K queues per ASIC; investment protection with modular engines, IOS CLI and SPAs for I/O
• Business-critical resiliency: fully separated control and forwarding planes; hardware and software redundancy; in-service software upgrades
• Instant-on service delivery: integrated firewall, VPN, encryption, NBAR, CUBE; scalable on-chip service provisioning through software licensing
• Bandwidth range across the family, up to the ASR 1013: 2.5–20, 2.5–36, 10–40, 10–100+, and 10–200 Gbps
ASR1000 building blocks
• Route Processor (RP): handles control plane traffic; manages the system
• Embedded Service Processor (ESP): handles data plane traffic (QFP with crypto assist, PPEs, and BQS)
• SPA Interface Processor (SIP): houses SPAs; queues packets in & out; Shared Port Adapters provide interface connectivity
• Centralized forwarding architecture: all traffic flows through the active ESP; the standby is synchronized with all flow state via a dedicated 10-Gbps link
• Distributed control architecture: all major system components have a powerful control processor dedicated to the control and management planes (FECP on the ESP, IOCP on each SIP), connected over the midplane via the embedded GE switch and interconnects
ASR1000 data plane architecture
• Enhanced SerDes Interconnect (ESI): serial communication via the midplane; can run at 11.5 Gbps or 23 Gbps
• Provides data packet communication
  • Data packets between ESPs and other linecards
  • Punt/inject traffic to/from the RP
  • State synchronization between ESPs
• Two ESI links between each ESP and all linecards
• Additional CRC protection of packet contents
Ethernet Out-of-Band Channel (EOBC)
• Used by the RP to pass control messages, images, statistics, and messages to program the QFP
• GE switch on each RP connects RPs, ESPs, and SIPs over the midplane

Inter-Integrated Circuit (I2C) bus
• Monitors health of hardware components; controls resets
• Communicates active/standby state, real-time presence and ready indicators
• Controls the other RP (reset, power-down, etc.)
• Reports power-supply status
• Slow (few kbps); EEPROM access
• Used for system monitoring (temp., OIR, fan speed, …)

SPA control links
• Work between the SPAs and the SIP
• Detect SPA OIR; reset SPAs (via I2C); power-control SPAs (via I2C); read EEPROMs
ASR1000 chassis configuration
• RP0 and RP1 handle control plane processing
• ASR1006 supports redundant control and data planes via active/standby hardware
• ASR1004 supports redundant control planes via dual IOS process redundancy on a single RP card
ASR1000 – power supplies
• ASR1001-X, ASR1002, ASR1004: 3x multispeed fans per PEM, 2 PEMs total
• ASR1006: 3x multispeed fans per PEM, 2 PEMs total
• ASR1013: 3x multispeed fans per PEM, 4 PEMs total
ASR1000 systems
          ASR1001-X    ASR1002-X    ASR1004    ASR1006      ASR1013
Height    1.75" (1RU)  3.5" (2RU)   7" (4RU)   10.5" (6RU)  22.7" (13RU)
Airflow   front to back on all models
ASR1000 SPA interface processor (aka SIP)
• Supports up to 4 SPAs
• full OIR support
• Does not participate in forwarding decisions
• Preliminary QoS
• Ingress packet classification – high & low priority
• Ingress over-subscription buffering
• 128MB of ingress oversubscription buffering
• Capture stats on dropped packets
• Network clock distribution to SPAs, reference selection from SPAs
• IOCP manages midplane links, SPA OIR, SPA drivers
• SIP40 supports minimally disruptive restart (MDR) for ISSU
• SIP reboot times of 25 seconds or less
• SPA reboot times of 10 seconds or less
ASR1000 SIP40 advantages
• Packet classification enhancements to support additional media
• PPP, HDLC, FR, ATM…
• 96 vs 64 per-port priority queues, for higher density SPA support
• Full 3 parameter priority scheduler (Strict, Min, Excess) vs 2 parameter (Min, Excess)
• Addition of per-port and per-VLAN/VC ingress policers
• Network clocking support
• DTI clock distribution to SPAs
• Timestamp and time-of-day clock distribution
SIP40 block diagram
• Links for data packets: ESI (11.2 Gbps) to the active and standby ESPs, SPA-SPI (11.2 Gbps), HyperTransport (10 Gbps); GE (1 Gbps) to the RPs
• IO control processor running Linux; I2C, SPA control, and SPA bus links
• Interconnect guarantees bandwidth to all interfaces
• Aggregation ASIC with ingress classifier and ingress/egress buffers: 128 MB of input buffering, 8 MB of output buffering
• Network clock distribution: input and output reference clocks to/from the SPAs
• Housekeeping: boot flash (OBFL, …), JTAG control, reset/power control, temp sensor, EEPROM

Chassis management
• RP2
• 2.66Ghz Intel dual-core architecture
• 64-bit IOS XE
• Up to 16GB IOS Memory
• 2GB Bootflash (eUSB)
• 33MB NVRAM
• Hot swappable 80GB Hard Drive
RP2 block diagram
• GE (1 Gbps), I2C, SPA control, and SPA bus links
• RP control processor: Intel 2.66 GHz dual core, CPU memory, 2GB bootdisk
• Stratum-3 network clock circuit
• I2C chassis management bus, interconnect, EOBC switch
• Multiple communications paths to other cards (for control and for network control packets)
• Miscellaneous control functions for card presence detection, card ID, power/reset control, alarms,
redundancy, etc.
• Stratum-3 network clock circuitry and BITS reference input (for synchronizing SONET
links, etc.)
• RP2 adds BITS output capability
Modular route processors: RP1 and RP2
• ROMMON can load IOS XE distribution from multiple sources
• ROMMON has an IPv4 IP/UDP stack for TFTP booting
• SIP/ESPs tftpboot over EOBC from active RP using this same facility
• ftp, rcp not supported (but are supported by IOS XE once booted)
CPU: dual core, 2.2GHz / quad core, 2.0GHz / quad core, 2.13GHz / single core, 1.5GHz / dual core, 2.66GHz
Default memory: 4GB (4x1GB) / 8GB (4x2GB) / 4GB / 2GB (2x1GB) / 8GB (4x2GB)
SPI MUX
Interconnect
ASIC
TCAM
Crypto Engine
QFP Subsystem
PPE + BQS
FECP CPU
PPE DRAM
FECP DRAM
EEPROM
QFP complex
QFP complex (block diagram)
• Packet Processor Engines (PPE1 … PPE5 …), dispatcher, BQS, crypto coprocessor
• FE control processor with DDRAM, boot flash, EEPROM, E-CSR
• GE (1 Gbps), I2C, SPA control, and SPA bus links
SPI Mux
• System bandwidth is 10 Gb/sec full duplex (10 Gb/sec up plus 10 Gb/sec down)
• Reset / power control; SA table DRAM; interconnect
ESP100/ESP200 QFP complexes (block diagram)
• Two or four QFP complexes, each with PPE1 … PPE64, BQS, dispatcher, and crypto coprocessor
• Shared FE control processor with DDRAM, boot flash, EEPROM, JTAG control, PCI*, E-CSR
NVRAM: 32 MB on all listed platforms
• Downgrading
• Need to ensure that the ESP100 config does not exceed the scaling limits of ESP40 in
any respect
TCAM uses (for your reference)
Definition
Ternary Content-Addressable Memory is designed for rapid, hardware-based table
lookups of Layer 3 and Layer 4 information. In the TCAM, a single lookup provides
all Layer 2 and Layer 3 forwarding information.
Which ASR 1000 features use TCAM?
• Security Access Control Lists (ACL)
• Firewall
• IPSec
• Ethernet Flow Point for Ethernet Virtual Circuits
• Flexible Packet Matching
• Lawful Intercept
• Local Packet Transport Services (LPTS)
• Multi Topology Routing
• NAT
• Policy Based Routing
• QoS
• NBAR / SCE
• Web Cache Control Protocol
• Edge Switching Services
• Event Monitoring
Available SIP bandwidth depends on ESP & chassis
• Each ESP has a different interconnect ASIC with different numbers of ESI ports
• ESP-10: 10G to all slots; 1 x 11.5G ESI to each SIP slot (1004, 1006)
• ESP-20: 10G to all slots; 1 x 11.5G ESI to each SIP slot (1004, 1006)
• QFP uses 16 byte memory access sizes
  • Minimizes wasted memory reads and increases memory access efficiency
  • For the same raw memory bandwidth, a 16B read allows 4-8 times the number of memory accesses/sec as a CPU using 64/128B accesses
• Hardware assists further boost performance
  • TCAM, PLU, HMR…
  • Trade-off: power requirement vs. board space
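The 4-8x claim above is simple arithmetic. A back-of-the-envelope sketch (the 100 GB/s raw bandwidth figure is a made-up illustrative number, not a QFP specification):

```python
# At a fixed raw memory bandwidth, the number of discrete accesses per second
# scales inversely with the access size.
def accesses_per_sec(raw_bw_bytes: float, access_bytes: int) -> float:
    """Upper bound on memory accesses per second at a given access granularity."""
    return raw_bw_bytes / access_bytes

BW = 100e9  # hypothetical raw memory bandwidth, bytes/sec (illustrative only)

print(accesses_per_sec(BW, 16) / accesses_per_sec(BW, 64))   # 4.0 vs 64B lines
print(accesses_per_sec(BW, 16) / accesses_per_sec(BW, 128))  # 8.0 vs 128B lines
```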
Cisco Quantum Flow Processor: 2nd generation details
• Used in ASR1002-X, ESP100 & ESP200
• 2nd gen QFP integrates both the PPE engine and the traffic manager into a single ASIC
• 64 PPEs
• 116K queues per 2nd gen QFP (128K queues for 1st gen QFP)
• Can be used in a matrix of 2 or 4: ESP100 has 232K queues; ESP200 has 464K queues
• 1st and 2nd gen QFPs run the same code
  • Maintains identical feature behavior between QFP hardware releases (NAT, FW, etc.)
  • Full configuration consistency
• In-service hardware upgrade from ESP40 to ESP100 supported
• Differences: minor behavioral show-command differences; deployment differences where large numbers of BQS schedules are used
Quantum Flow Processor video (for your reference)
http://www.cisco.com/cdc_content_elements/flash/netsol/sp/quantum_flow/demo.html
40 PPEs (1st Gen); 64 PPEs (2nd Gen)
• Tensilica (MIPS-like) instruction set architecture
• Distributor assigns each packet to a PPE/context
  • QFP is not doing flow-based load-balancing among processors; distribution is to any eligible PPE/context
  • Hardware locks for ordering and mutual exclusion
• Memory resources (DRAM0–DRAM7, INFRA, SRAM) and memory access resources (WRC, HMR, TCM, RLB, ARL, PLU) sit behind the memory and resource interconnects
• High-performance memory: TCAM4 at 200M searches/second with QFP; DRAM at 1.6 billion cache line accesses per second
• Data path resources: IPM, GPM, gather, and OPM over SPI/HT
• Buffering, Queuing, & Scheduling (BQS)
  • HQF/MQC compatible
  • 128K queues; flexible allocation of schedule resources
  • 5+ levels of scheduling hierarchy
QFP Hardware Assists
• RLB = Regular Lock Block
• TCM = TCAM Controller
• ARL = ACL Range Lookup
• INFRA = DMA Engine, HT access, CSR access
• PLU = Pointer Lookup Unit (Tree Bitmap lookup)
• HMR = Hash Mod Read
• WRC = Weighted RED Controller
• Gather = gathers fragments
• FLB = Flow lock block
• Packets are given an internal ID based on source/destination interface, packet header fields, etc.
• The ID is then used internally to ensure packet sequencing
ESI – Enhanced SerDes Interconnect
• Consists of 4 or 8 SerDes in each direction
  • SerDes operates at either 3.125 or 6.25 Gbps
  • Provides 11.5 Gb/sec usable bandwidth per SIP10 or 2x23 Gb/sec per SIP40
• 24b/26b encoding/framing/scrambler protocol derived from CAT6K Chico EFCP (efficient fabric channel protocol)
• All transmission in form of 4 x 26b segments
  • 24 bytes of data per frame, byte-striped across 4 SerDes lanes
  • 8 bits of control for sync/alignment and state signaling
• Supports 2 priorities of packet frame plus control messages with interleaving and preemption
SPI interconnect (SPA–SIP)
• Up to OC-192 rates
• Offers channelization, programmable burst sizes
• 16b data bus, 1b control bus; DDR clocking
• FIFO buffer status portion (2b status channel)
• Up to 256 port addresses with independent flow control for each
Hypertransport (HT)
• Bi-directional serial/parallel high-bandwidth interconnect
• Packet based
• Connects the QFP complex (PPE6 … PPE40, E-RP*, PCI*) to its resources: TCAM (10 Mbit), resource DRAM, packet buffer DRAM (512MB / 128MB), part len / BW SRAM
Performance (default/max): 50/100 Mbit/s | 100/300 Mbit/s | 200/400 Mbit/s | 500/1000 Mbit/s | 1000/2000 Mbit/s
Default/max DRAM: 4/8 GB for CP/SP/DP | 4/16 GB for CP/SP/DP | 4/16 GB for CP/SP/DP | 4/16 GB for CP/SP, 2 GB for DP | 4/16 GB for CP/SP, 2 GB for DP
Default/max flash: 4/8 GB | 4/16 GB | 4/16 GB | 8/32 GB | 8/32 GB
ISR4000 architecture (block diagram)
• IOSd runs on the main CPU: control plane (1 core) and services plane (3 cores); note: the 4321 uses 2 DP, 1 CP & 1 SP cores
• Service containers on the services plane: WAAS, KVM, EnergyWise
• Platform system controller with 4xPCIe, 10G XAUI, and 1 Gb/sec SGMII links; management Ethernet, console/aux, USB hub, FPGA, flash, mSATA, DRAM
• Multigigabit Fabric: 10 Gb/sec per SM-X slot, 2 Gb/sec per NIM slot
Maximum interface termination
• Blade hosting options: external server, feature or application container, blade (NIM, UCS-E, SM-X)
• Non-blocking fabric for internal transport
• Egress SIP has high and low priority buffers in Ingress Egress
case there is backpressure from a SPA Buffers Buffers
Classifiers
SPA
SPA
SPA
SPAs
ASR1000 SIP egress path (for your reference)
• Egress path on SIP has two queues per interface, high and low priorities
• All packets in high priority queue for an interface must be drained before any low priority
packets will be sent to the SPA for egress
• show platform hard slot X plim buffer settings detail
• Provides details on egress buffer utilization on SIP. These parameters are not user configurable.
ISR4000 only
• All high priority traffic from all SIPs is processed before low priority traffic is
handed to the Cisco QFP
• These parameters are not user-configurable.
ASR1000 only
• After all the above QoS functions (along with other packet forwarding features such as NAT, Netflow, etc.) are handled, the packet is put in packet buffer memory and handed off to the Traffic Manager
ASR1000 only
• Priority propagation (via minimum) ensures that high priority packets are forwarded first
without loss
Traffic Manager Processing
• Various models of hardware have differing amounts of packet memory
• Packet memory is one large pool
• Interfaces do not reserve a specific amount of packet memory
• Packet memory may be over-provisioned
• Out-of-resources (memory exhaustion) conditions:
• Non-priority user data dropped at 85% packet memory utilization
• Priority user data dropped at 97% packet memory utilization
• PAK_PRI and internal control packets only dropped at 100% memory utilization
Traffic Manager statistics (for your reference)
• show plat hard qfp active stat drop all | inc BqsOor
• This gives a counter which shows if any packets have been dropped because of packet buffer
memory exhaustion.
• show plat hard qfp active infra bqs status
• Gives metrics on how many active queues and schedules are in use. Also gives statistics on QFP
QoS hierarchies that are under transition.
• show plat hard qfp active bqs 0 packet-buffer util
• Gives metrics on current utilization of packet buffer memory
ESP specific specifications (for your reference)
Card | Packet memory | Maximum queues | TCAM
• Marking
• IPv4 precedence/DSCP, IPv6 precedence/DSCP, MPLS EXP, FR-DE, discard-class, qos-group,
ATM CLP, COS, inner/outer COS
• Detailed match and marker stats may be enabled with a global configuration option
• platform qos marker-statistics
• platform qos match-statistics per-filter
• platform qos match-statistics per-ace
• Detailed statistics will show per line match statistics in class-maps. For marking, the detailed stats
show the number of packets marked per action.
IOS XE QoS – non-queuing
• Policing
• 1R2C – 1 rate 2 color
• 1R3C – 1 rate 3 color
• 2R2C – 2 rate 2 color
• 2R3C – 2 rate 3 color
• color blind and aware in XE 3.2 and higher software
• supports RFC 2697 and RFC 2698
• explicit rate and percent based configuration
• dedicated policer block in QFP hardware on ASR1000
• Policing order of operation (not configurable)
• XE 3.1 and earlier software evaluates from the parent down to the child
• XE 3.2 and later software evaluates from the child up through to the parent (the same as queuing functions)
IOS XE QoS – non-queuing
• WRED
• precedence (implicit MPLS EXP), dscp, and discard-class based
• ECN marking
• byte, packet, and time based CLI on ASR1000
• packet based only on ISR4000
• packet based configurations limited to exponential constant values 1 through 6 on
ASR1000
• dedicated WRED block in QFP hardware on ASR1000
IOS XE QoS – queuing
• Up to 3 layers of queuing configured with MQC QoS
• Two levels of priority traffic (1 and 2), followed by non-priority traffic
• Strict and conditional priority rate limiting
• 3 parameter scheduler (minimum, maximum, & excess)
• Priority propagation (via the minimum parameter) to ensure no-loss priority forwarding
• burst parameters are accepted but not used by scheduler
• Backpressure mechanism between hardware components to deal with external flow
control
• fair-queue consumes 16 queues for each class configured with it
• Allows configuration of aggregate queue depth and per-flow queue depth
IOS XE QoS – queuing
• Queue-limit may be manually configured with various units on ASR1000
• packets, time, or bytes (packets only on ISR4000)
• Within a policy-map, all classes must use the same type of units for all features
• Using packets based queue-limit deals well with bursts of variable size packets while
providing a maximum limit to introduced latency when all packets are MTU sized
• Using time or byte based queue-limits provides more exact control over maximum latency
but will hold a variable number of packets based on the size of the packets enqueued
• Simplifies use of the same policy-map on interfaces of different speeds
• Time based configuration results in bytes programmed in hardware when policy-map is attached to
egress interface
ASR1000 only
• bandwidth remaining percent based allocations remain the same as classes are added to
a policy
• base is a consistent value of 100
• bandwidth remaining ratio based allocations adjust as more classes are added to
a policy with their own remaining ratio values
• base changes as new classes are added with their own ratios defined or with a default of 1
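The fixed-base-vs-moving-base distinction can be shown numerically (an illustrative sketch of the share arithmetic only):

```python
# 'bandwidth remaining percent' shares are taken against a fixed base of 100;
# 'bandwidth remaining ratio' shares are taken against the sum of ratios,
# so adding a class re-bases every existing ratio share.
def remaining_share_percent(percents):
    return [p / 100 for p in percents]

def remaining_share_ratio(ratios):
    base = sum(ratios)
    return [r / base for r in ratios]

# Adding a class does not change existing percent shares...
print(remaining_share_percent([40, 10]))      # [0.4, 0.1]
print(remaining_share_percent([40, 10, 10]))  # [0.4, 0.1, 0.1]
# ...but it does re-base ratio shares:
print(remaining_share_ratio([4, 1]))          # [0.8, 0.2]
print(remaining_share_ratio([4, 1, 1]))       # [0.666..., 0.166..., 0.166...]
```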
3 parameter scheduler – minimum, with injected traffic

policy-map child
 class voice
  priority level 1
  police cir 1000000
 class critical_services
  bandwidth 10000
 class internal_services
  bandwidth 10000
 class class-default
  bandwidth 1000
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

(police cir is in bit/sec; bandwidth is in kbit/sec)

With 10 Mb/sec offered to every class under the 25 Mb/sec parent shaper, the scheduler first satisfies minimums, then calculates excess sharing:
• voice: policed to 1 Mb/sec (9 Mb/sec dropped)
• critical_services: receives its 10 Mb/sec minimum
• internal_services: receives its 10 Mb/sec minimum
• class-default: receives its 1 Mb/sec minimum plus the remaining 3 Mb/sec of excess, for 4 Mb/sec total (6 Mb/sec dropped)
3 parameter scheduler – excess sharing, with injected traffic

Same child and parent policy-maps as the minimum example (bandwidth 10000, 10000, and 1000 kbit/sec under shape average 25000000), now with offered loads of 10, 5, 15, and 10 Mb/sec:
• voice: policed to 1 Mb/sec (9 Mb/sec dropped)
• critical_services: offers only 5 Mb/sec, below its 10 Mb/sec minimum, and is fully served
• internal_services: receives its 10 Mb/sec minimum plus 4 Mb/sec of excess = 14 Mb/sec (1 Mb/sec dropped)
• class-default: receives its 1 Mb/sec minimum plus 4 Mb/sec of excess = 5 Mb/sec (5 Mb/sec dropped)

After minimums are satisfied (1 + 5 + 10 + 1 = 17 Mb/sec), the remaining 8 Mb/sec of excess is shared equally between the two classes that still have traffic.
3 parameter scheduler – excess sharing with bandwidth remaining ratio

policy-map child
 class voice
  priority level 1
  police cir 1000000
 class critical_services
  bandwidth remaining ratio 4
 class internal_services
  bandwidth remaining ratio 1
 class class-default
  bandwidth remaining ratio 1
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

With offered loads of 10, 5, 15, and 10 Mb/sec:
• voice: policed to 1 Mb/sec (9 Mb/sec dropped)
• critical_services: ratio 4 entitles it to more than its 5 Mb/sec offered load, so it is fully served
• internal_services: receives 9.5 Mb/sec (5.5 Mb/sec dropped)
• class-default: receives 9.5 Mb/sec (0.5 Mb/sec dropped)

After voice and critical_services are served (6 Mb/sec), the remaining 19 Mb/sec is split 1:1 between the two remaining classes.
3 parameter scheduler – excess sharing with bandwidth remaining percent

policy-map child
 class voice
  priority level 1
  police cir 1000000
 class critical_services
  bandwidth remaining percent 40
 class internal_services
  bandwidth remaining percent 10
 class class-default
!
policy-map parent
 class class-default
  shape average 25000000
  service-policy child

With offered loads of 10, 5, 15, and 20 Mb/sec: voice is policed to 1 Mb/sec (9 Mb/sec dropped); critical_services (5 Mb/sec offered) is fully served; internal_services receives 9 Mb/sec (6 Mb/sec dropped); class-default receives 10 Mb/sec (10 Mb/sec dropped). Percent-based shares are taken against a fixed base of 100, so class-default implicitly owns the unallocated remainder.
Bandwidth Remaining Ratio Example

interface FastEthernet1/1/3.100
 encapsulation dot1Q 100
 service-policy output vlan100
!
interface FastEthernet1/1/3.101
 encapsulation dot1Q 101
 service-policy output vlan101
!
interface FastEthernet1/1/3.102
 encapsulation dot1Q 102
 service-policy output vlan102
!
policy-map vlan100
 class class-default
  bandwidth remaining ratio 89
!
policy-map vlan101
 class class-default
  bandwidth remaining ratio 8
!
policy-map vlan102
 class class-default
  bandwidth remaining ratio 1

(The accompanying throughput chart is omitted; under congestion the three VLANs receive bandwidth in roughly an 89:8:1 proportion.)
IOS XE QoS hierarchies
• Generally, MQC based policy-maps with queuing functions may be attached to a physical
interface or sub-interface
• It is possible to attach a non-queuing policy-map to one location and then a queuing
policy-map to the other
• Some scenarios are supported with 2 level hierarchical policy-maps on tunnels and a
class-default shaper on the physical interface
• Broadband applications have their own set of supported scenarios which support queuing
policy-maps on sub-interfaces and then on the dynamically created sessions which
traverse that sub-interface
• Innovative hierarchies which move beyond strict parent-child hierarchies can be built using
service-fragment CLI
IOS XE QoS hierarchy (ASR 1000 example)

policy-map level1
 class class-default
  shape average 100000000
  service-policy level2
!
policy-map level2
 class user1
  shape average 60000000
  service-policy level3
 class class-default
  shape average 60000000
  service-policy level3
!
policy-map level3
 class prec0
  priority
  police cir 10000000
 class class-default
!
interface gig0/0/0.2
 service-policy out level1
!
interface gig0/0/0.3
 service-policy out level2

Resulting schedule tree: SIP root schedule → interface schedule → level1 schedule → level2 schedules → level3 queues. Interfaces without a queuing policy use the interface default queue.
ASR 1000 QoS service-fragments 4

policy-map sub-int-mod4
 class EF
  police 1000000
 class AF4
  police 2000000
 class class-default fragment BE
  shape average 75000000
!
policy-map int-mod4
 class data service-fragment BE
  shape average 128000000
 class EF
  priority level 1
 class AF4
  priority level 2
 class AF1
  shape average 50000000
  random-detect
!
interface GigabitEthernet0/0/0
 service-policy output int-mod4
!
interface GigabitEthernet0/0/0.2
 service-policy output sub-int-mod4

Multiple instances of the sub-interface "fragment BE" class feed the interface-level "service-fragment BE" data class; interface priority queues and the interface class-default queue hang off the interface schedule under the SIP root schedule.
ASR 1000 QoS service-fragments 3

policy-map int-mod3
 class data service-fragment BE
  shape average 128000000
 class class-default
  shape average 100000000
!
policy-map sub-int-mod3
 class EF
  police 1000000
  priority level 1
 class AF4
  police 2000000
  priority level 2
 class class-default fragment BE
  shape average 115000000
  service-policy sub-int-child-mod3
!
policy-map sub-int-child-mod3
 class AF1
  shape average 100000000
 class class-default
  shape average 25000000

Multiple instances of the sub-interface "fragment BE" class, together with their sub-interface priority queues, feed the interface-level "service-fragment BE" class under the interface schedule and SIP root schedule.
IOS XE observed packet and frame sizes
• The traffic manager calculates the Ethernet packet size over everything between the MAC L2 header and the end of the payload
• IFG, preamble, and FCS are not included
• For queuing features, the packet size can be adjusted manually
• ATM cell overhead compensation is available so that ATM L2 links downstream are not overdriven
• The traffic manager includes the 4-byte CRC in packet sizes for Frame Relay
• MQC-based QoS for ATM performs L3 shaping and only compensates for ATM cell overhead with the above atm directive in shaped classes
• ATM VC rate configurations are ATM L2 based shaping
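The ATM cell-overhead compensation mentioned above is configured per shaped class with the `account` option on the shaper; a minimal sketch (the policy name, rate, and user-defined byte offset are placeholders, and the exact `account` options vary by release):

```
policy-map atm-shaper
 class class-default
  ! Shape to 10 Mbps; the atm keyword tells the traffic manager to
  ! round each packet up to ATM cell multiples (cell tax) so the
  ! downstream ATM L2 link is not overdriven
  shape average 10000000 account user-defined 0 atm
```

The same `account` directive can be applied to policers in the same policy so that policing and shaping see a consistent adjusted packet length.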
For your reference
Selected ASR1000 scalability attributes

Attribute                    ASR1001   ASR1001-X  ASR1002-X  RP1     RP2      CSR1000V
Configured class-maps        4,096     4,096      4,096      4,096   4,096    256
Configured policy-maps       16,000    4,096      4,096      1,000   4,096    30
Class-maps per policy-map    1,000 *   1,000 *    1,000 *    256     1,000 *  32
Match rules per class-map    32        32         32         16      32       8
Policer / shaper granularity 8 Kb/s (all platforms)
Policer / shaper accuracy    +/- 1% (all platforms)
Aggregate Etherchannel QoS with LACP and flow based load balancing
• For VLAN based QoS, a policy-map using VLAN classification should be applied to the aggregate Port-channel main-interface
• Supports 3 levels of hierarchical policy-maps
  • Including 3 levels of policers and/or queueing
• Up to 8 aggregate Port-channel interfaces supported
• If flow based load balancing overloads a given physical interface, backpressure will be exerted to the aggregate hierarchy
  • Causes queue build-up for the entire hierarchy across all member-links in the aggregate Port-channel
• Important that there be a variety of flows running over the interface so the hashing algorithm can distribute traffic amongst all interfaces
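A sketch of the VLAN-classified aggregate Port-channel policy described above (the port-channel number, VLAN ID, and rate are placeholders; the `platform qos port-channel-aggregate` command enables aggregate QoS mode and must be configured before the Port-channel interface is created — exact syntax may vary by release):

```
platform qos port-channel-aggregate 1
!
class-map match-any VLAN100
 match vlan 100
!
policy-map pc-aggregate
 ! VLAN classification applied at the aggregate Port-channel main-interface
 class VLAN100
  shape average 200000000
!
interface Port-channel1
 service-policy output pc-aggregate
```

Because the policy is attached to the aggregate interface, a single hierarchy governs the sum of traffic across all member links.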
• In IOS XE 2.5S and later, IPSec uses the LLQ configuration on the GRE/sVTI tunnel if one is present; otherwise, IPSec uses the LLQ configuration on the matching egress interface
QoS-Group Based LLQ for IPSec Example

platform ipsec llq qos-group 5
platform ipsec llq qos-group 6
platform ipsec llq qos-group 8
!
! Same class-map used to classify high
! priority traffic in egress QoS policy
!
class-map match-all c1
 match precedence 5 6 7
class-map match-all c2
 match precedence 0 1 2 3
class-map match-all default
!
policy-map input-policy-1
 class c1
  set qos-group 5
 class c2
  set qos-group 8
!
policy-map input-policy-2
 class default
  set qos-group 6
!
interface TenGigabitEthernet0/2/0
 service-policy input input-policy-1
!
interface TenGigabitEthernet0/3/0
 service-policy input input-policy-2
!
interface TenGigabitEthernet0/1/0
 service-policy output egress-policy-1
QoS Aware Accounting
• QoS Aware Accounting reports QoS policy statistics of successfully transmitted packets to the RADIUS server in AAA accounting packets for a PPP session
• It can be used as an alternative to ISG Traffic Class accounting. Traffic Class classification is currently based on IP ACLs and therefore cannot classify IPv6 traffic or classify packets on a LAC; QoS, on the other hand, is supported for IPv6 traffic and on a LAC, so it provides unique use cases in these scenarios
• Accounting statistics can be sent either on a per-QoS-class basis or for a group of QoS classes. This introduces the notion of class grouping on a per-session basis. A group is an entity for which a single accounting record is provided, so that a customer can group a number of QoS classes (such as voice and voice-control) belonging to the same service (such as voice) into the same accounting record
• Ability to send an accounting START or STOP record for a class/group when the feature is enabled or disabled in a QoS class/group
Traffic being counted — QoS Aware Accounting: accounts traffic matching a class or a group of classes of the QoS policy applied on the session. ISG Traffic Class accounting: accounts traffic matching TC ACLs since service activation. Session accounting: accounts all traffic on the session since service activation.
• Redundant ESP / RP on ASR 1006 & ASR 1013
• Max 50 ms loss for ESP fail-over
• IOS XE also provides full support for Network Resiliency
  • NSF/GR for BGP, OSPFv2/v3, IS-IS, EIGRP, LDP
  • IP Event Dampening; BFD (BGP, IS-IS, OSPF)
  • First hop redundancy protocols: GLBP, HSRP, VRRP
• Stateful inter-chassis redundancy available for NAT, SBC, Firewall
(Diagram: redundant RPs and ESPs — each ESP with QFP, crypto assist, PPE, and BQS — connected to SIPs with IOCP and SPA aggregation.)
ASR1000 data plane redundancy
(Diagram: two redundant RP/ESP pairs; each RP has a CPU and an interconnect GE switch, each ESP has a FECP, crypto assist, and QFP with PPE and BQS; all connect across the midplane to SIPs with IOCP and SPA aggregation.)
(Diagram: the RP runs the chassis manager and a forwarding manager holding the FIB/MFIB; the ESP runs its own chassis manager, forwarding manager, and the QFP driver.)
Failover Triggers: Hardware Failures
• CPUs: RP-CPU, QFP, FECP, IOCP, interconnect CPU, I2C Mux, ESP Crypto Chip, heat sinks, …
• Memory: NVRAM, TCAM, Bootflash, RP SDRAM, FECP SDRAM, resource DRAM, packet buffer DRAM, particle length DRAM, IOCP SDRAM, …
• Interconnects: ESI links, I2C links, EOBC links, SPA-SPI bus, local RP bus, local FP bus, …
• Detected using
  • Software crashes: software running on the failed hardware will crash
  • Watchdog timers: low level watchdogs that can time out
  • Notification from other field-replaceable units (FRUs)
  • CPLD interrupts / register bits controlled by CMRP; these initiate fail-over events
• Hardware failures are typically fatal such that modules need to be replaced
• JTAG: the RP can program the CPLD on other modules and test interconnects and other boards (primarily for RMA-ed hardware)
Failover Triggers: Software Failures
• What software failures?
  • Kernel: Linux on RP / ESP / SIP
  • Middleware: chassis manager, forwarding manager
  • IOS, SPA drivers
• Detected using the process manager (PMAN)
  • PMAN: every software process has a corresponding PMAN process to check its health
  • If a software process crashes, PMAN will detect it via a signal from the kernel
  • IPC keepalives between the 2 IOS instances (and only for IOS)
  • Hardware watchdog timers supervise Linux and the software stack
  • The kernel will take the module down in a controlled manner
• IOS, CMESP, CMSIP, FMESP, and the QFP driver/client are not re-startable
  • PMAN-initiated failover using CPLD register bits for ESP or RP (failover within 3 ms)
• Some processes are re-startable (CMRP, FMRP, SSH, telnet, …)
  • The kernel will try to re-start the process in this case
  • If unsuccessful, the process will be held down and a console message logged
RPact Failover Procedure (1 of 2)
(Sequence diagram across SIP, ESPact/ESPstby, and RPact/RPstby — CMSIP, kernel, FMESP, CMESP, CMRP, FMRP, IOS: the standby RP monitors status and state information from the active RP; if it is not received in time, a restart message is sent and failover occurs. The restarted RP re-initializes its hardware and the EOBC, starts its kernel, and joins as standby, while the ESPs continue forwarding based on ESI link status and state info and service is restored.)
ESPact Failover Procedure (1 of 2)
(Sequence diagram: the RP detects the ESPact failure, collects state information of the failed ESP, disables the ESI link to the failed ESP, and reconfigures the ESI links with the RPs so the standby ESP becomes active. The failed ESP re-initializes its hardware and the EOBC, learns the RPact, activates its ESI link, downloads software packages, starts its kernel, FM, and CM, and exchanges other-ESP info such as mastership.)
• RF is the master of state changes and informs its clients about each state change
• Clients are e.g. network (i.e. interfaces), OSPF, IS-IS, EIGRP, multicast
• Timing
  • Interface transitions from DOWN to UP after switchover can take O(10 s)
  • OSPF cannot send hellos until it is told that the corresponding interface state is UP
  • Total time may take O(minutes) at large scale
Stateful Application Inter-Chassis Redundancy
• Current intra-chassis HA typically protects against
  • Control plane (RP) failures
  • Forwarding plane (ESP) failures
  • Interface failures can be mitigated using link bundling (e.g. GEC)
(Diagram: RP, ESP, and fabric/linecard crash scenarios within a single chassis running a firewall.)
Introduction to RG-Infra
• RG Infra is the IOS Redundancy Group Infrastructure to enable the synchronization of
application state data between different physical systems
• Does the job of RF/CF between chassis
• Assumptions
• Application state has to be supported by RG infra (ASR 1000 currently supports NAT, Firewall, SBC)
• Connectivity redundancy solved at the architectural level (need to ‘externalize’ the redundant ESI
links of the intra-chassis redundancy solution)
Redundancy Groups Functions
• Registers applications as clients
• Registers (sub)interfaces / {SA/DA} tuples in the case of firewall
• Determines if traffic needs to be processed or not
• Communicates control information between RGs using a redundancy group protocol
  • Advertisement of RGs and RG state
  • Determination of peer IP address
  • Determination of presence of active RG
• Synchronizes application state data using a transport protocol
• Manages failovers
(Diagram: RG control and RG state exchanged between the active and standby RG infra instances on two chassis running firewalls.)
Physical Topology Requirements
• 2 ASR 1000 chassis with single RP / single ESP
  • Co-existence of inter- and intra-chassis redundancy is NOT supported
• Maximum of 2 cluster members
• Physical connectivity to both members from adjacent routers / switches
  • Need to direct traffic to either member system in case of failover (HSRP, VRRP)
• L1/L2 connectivity between the member systems for RG control traffic
  • RG instances exchange control traffic (RG hellos, state, fail-over signaling etc.)
  • Guaranteed communication required to avoid split-brain condition
  • L3 connectivity on roadmap
• L1/L2 connectivity between member systems for application state data
  • Synchronization of NAT/Firewall/SBC state tables
  • FIBs are NOT synchronized by RG Infra
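A sketch of the RG Infra configuration described above (interface numbers, group name, priorities, and the NAT binding are placeholders; exact syntax varies by release):

```
redundancy
 application redundancy
  group 1
   name RG1
   ! Higher priority wins the active role; failover below the threshold
   priority 150 failover threshold 140
   ! Dedicated L1/L2 links: one for RG control, one for state sync
   control GigabitEthernet0/0/1 protocol 1
   data GigabitEthernet0/0/2
!
! Bind a NAT mapping to the redundancy group so its session
! state is synchronized to the peer chassis
ip nat inside source list NAT-ACL pool NAT-POOL redundancy 1 mapping-id 10
```

Applications not bound to a redundancy group continue to operate without inter-chassis state synchronization.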
ISSU
IOS XE Software packaging - terminology
• IOS XE software for the ASR 1000 is released every 4 months, 3 times a year
• Software posted on cisco.com is called a 'Consolidated Package'
• A Consolidated Package contains several 'sub-packages', which can be extracted from the Consolidated Package
• The sub-packages can be used to individually upgrade a specific software component of the ASR 1000
ASR1000 Software Packaging Overview
• ASR1000 Software images are dependent on the Route Processor
• 3 sets of images – IPBase, AIS and AES are available for each of the different RPs
• RP1
• RP2
• Universal images are provided for the fixed configuration chassis and features are license based
• ASR1001 (integrated RP)
• ASR1001-X (integrated RP)
• ASR1002-X (integrated RP)
• The slides in this section cover images/packaging related to the modular RP1 and RP2 systems
• A following section covers the unique packaging for ASR1001, ASR1001-X, and ASR1002-X
Software Sub-packages
• RPBase: RP OS
  • Upgrading the OS requires a reload of the RP; expect minimal changes
• RPIOS: IOS
  • Facilitates the Software Redundancy feature
• RPAccess (K9 & non-K9):
  • Software required for router access; 2 versions will be available: one that contains open SSH & SSL and one without
  • To facilitate software packaging for export-restricted countries
• RPControl:
  • Control plane processes that interface between IOS and the rest of the platform
  • IOS XE middleware
• ESPBase:
  • ESP OS + control processes + QFP client/driver/ucode
  • Any software upgrade of the ESP requires a reload of the ESP
• SIPBase:
  • SIP OS + control processes
  • OS upgrade requires a reload of the SIP
• SIPSPA:
  • SPA drivers and FPD (SPA FPGA image)
  • Facilitates SPA driver upgrade of specific SPA slots
Consolidated images sub-packages
Example — Consolidated Package: Cisco ASR1000 Series RP1 ADVANCED ENTERPRISE SERVICES (SASR1R1-AESK9), descriptor 'Advanced Enterprise Services', contains the sub-packages RPBase, RPControl, RPIOS, RPAccess, RPAccess-K9, SIPBase, SIPSPA, and ESPBase.
Some of the features require Feature Licenses in addition to the software image. Data current to IOS XE 3.7; always check Cisco Feature Navigator for the most up-to-date information regarding features included in releases and feature sets.
Universal image licensing
• ASR1001 has its own IOS XE software release
• For customers to get any of the following 6 feature sets: IPB, IPBK9, AIS, AISK9, AES, AESK9
• Customers need to order one of the 3 universal images listed in table A below and the respective IPB, AIS, or AES feature license, see table B
• The feature set licenses are enforced via software activation prior to XE3.6; starting with XE3.6 they are changed to monitoring mode
• The performance upgrade license (2.5 Gbps – 5 Gbps) is enforced; starting with XE3.7 it is changed to monitoring mode

Table A: IOS XE Release | PID | Description | List Price
3.2.0S | SASR1001U-32S | Cisco ASR 1001 IOS XE UNIVERSAL | $0
3.2.0S | SASR1001NPEK9-32S | Cisco ASR 1001 IOS XE - NO PAYLOAD ENCRYPTION UNIVERSAL | $0
3.2.0S | SASR1001UK9-32S | Cisco ASR 1001 IOS XE - ENCRYPTION UNIVERSAL | $0

Table B: PID | Description | List Price
SLASR1-IPB | Cisco ASR 1000 IP BASE License | $5,000
SLASR1-AIS | Cisco ASR 1000 Advanced IP Services License | $10,000
SLASR1-AES | Cisco ASR 1000 Advanced Services License | $10,000
ASR 1001 Licenses - "Feature Sets"
(Table: maps each ASR 1001 universal software image part number and technology package license combination part number to the equivalent feature set on the ASR 1000 Series modular platforms (1002-F, 1002, 1004, 1006, 1013).)
ISSU Compatibility Summary
• ISSU compatibility is determined by the CONTENT of what went into a release, not the type of release (i.e. release, rebuild, etc.)
• ISSU supported: across IOS XE rebuilds (example: 2.1.0 to 2.1.1)
• ISSU goal: ISSU to work across IOS XE feature releases (example: 2.1.x to 2.2.x)
• The compatibility matrix will refer to IOS XE releases and rebuilds using 'Consolidated Packages' only
  • Heterogeneous packages are not used for ISSU compatibility testing
ISSU upgrade path – Example

Source \ Target: 2.1.0 | 2.1.1 | 2.1.2 | 2.2.1 | 2.2.2 | 2.2.3 | 2.3.0 | 2.3.1
2.1.0: N/A | SSO Tested | SSO | SSO via 2.1.2 | SSO via 2.1.2 | SSO via 2.1.2 | SSO via 2.1.2 | SSO via 2.1.2
2.1.1: SSO Tested | N/A | SSO Tested | SSO via 2.1.2 | SSO via 2.1.2 | SSO via 2.1.2 | SSO via 2.1.2 | SSO via 2.1.2
2.1.2: SSO | SSO Tested | N/A | SSO Tested | SSO Tested | SSO Tested | SSO Tested | SSO Tested
2.2.1: SSO via 2.1.2 | SSO via 2.1.2 | SSO Tested | N/A | SSO Tested | SSO | SSO | SSO
2.2.2: SSO via 2.1.2 | SSO via 2.1.2 | SSO Tested | SSO Tested | N/A | SSO Tested | SSO | SSO
2.2.3: SSO via 2.1.2 | SSO via 2.1.2 | SSO Tested | SSO | SSO Tested | N/A | SSO Tested | SSO Tested
2.3.0: SSO via 2.1.2 | SSO via 2.1.2 | SSO Tested | SSO | SSO | SSO Tested | N/A | SSO Tested
2.3.1: SSO via 2.1.2 | SSO via 2.1.2 | SSO Tested | SSO | SSO | SSO Tested | SSO Tested | N/A

Legend: "SSO Tested" = SSO, explicitly tested; "SSO" = SSO, not explicitly tested; "SSO via 2.1.2" = SSO, requires a 2-step upgrade via 2.1.2.
(Timeline: IOS XE releases 2.1.0 through 2.3.1, May 2008 – May 2009.)
ISSU for new hardware boards
• ISSU for new hardware revisions of boards will be supported when the change is incremental and uses the same image
  • e.g. ESP10 -> ESP20 uses the same chipsets and the same images
  • e.g. ESP40 -> ESP100 is supported across QFP generations
• The main differences between ESP10 and ESP20 are the clock speeds of the chips and the amount of memory populated on the boards
  • Generally ISSU support is planned for this type of hardware difference
• When there is a new architecture/chipset, a separate image for the new hardware is required and ISSU will be supported for each architecture separately, but not between architectures. Example: RP1 to RP2
  • RP2 uses an x86 chipset instead of PPC
  • Addressing of all software running on RP2 is 64-bit instead of the 32-bit of RP1
Advantages of using Software Redundancy
• Software failure
  • Software redundancy helps when there is an RP-IOS failure/crash; the active process will switch over to the standby, while forwarding continues with zero packet loss
  • Other software crashes (example: SIP or ESP) cannot benefit from software redundancy
• Software upgrade
  • The software upgrade procedure for the ASR1002/ASR1004 allows customers to upgrade only the RP-IOS package and defer all the other steps to a later time – for example, a maintenance window
  • This allows customers to take advantage of any bug fixes of RP-IOS (or, in the case of a PSIRT, security fixes) available in the next rebuild while keeping the router in service
  • The heterogeneous configuration of RP-IOS in one version versus the rest of the sub-packages in a different version is a supported configuration. It is however required that the configuration become homogeneous (i.e. all sub-packages in the same version) before upgrading to the next software version
Procedure | Intended Use | Prerequisites | High Level Procedure | Impact (total time for module to be ready to process packets)

Sub-package mode 1 'Sliding' — minimum disruption to s/w redundant 2/4RU chassis.
  Prerequisites: homogeneous build, standby HOT, RP booted in sub-pkg mode.
  Procedure and impact:
  1. Upgrade standby 'bay' & switchover — 0 traffic loss
  2. Rolling upgrade of SIPs (if possible) — 100 sec traffic loss per SIP
  3. Upgrade ESP (you take a hit) — 100 sec traffic loss
  4. Upgrade remaining RP sub-pkgs — X sec, depends on configuration

Sub-package mode 2 — PSIRT upgrade of RPIOS only, on any chassis type.
  Prerequisites: homogeneous build, standby HOT, booted in sub-pkg mode.
  Procedure and impact:
  1. Upgrade standby bay RPIOS sub-pkg & switchover (end here) — 0 packet loss
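The RPIOS-only upgrade (sub-package mode 2) might look like the following sketch (the package filename is a placeholder; the exact filename depends on the release and image set):

```
! Install only the RPIOS sub-package onto the standby RP,
! then force a switchover so the upgraded IOS becomes active
Router# request platform software package install rp 1 file bootflash:asr1000rp2-rpios-adventerprisek9.<version>.pkg
Router# redundancy force-switchover
```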
One shot ISSU procedure
• The existing ISSU procedure is a multiple step process. This enhancement greatly simplifies the ISSU process with a single CLI that executes the multiple steps
• CLI: request platform software package install node file <filename> sip-delay <1-172800>
• sip-delay introduces a delay before each SIP upgrade in the sub-package mode
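A usage sketch of the one-shot command above (the consolidated package filename is a placeholder):

```
! One-shot ISSU from a consolidated package, waiting 300 seconds
! before each SIP upgrade so that traffic can drain per SIP
Router# request platform software package install node file bootflash:<consolidated-package>.bin sip-delay 300
```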
Minimal Disruptive Restart (MDR)
• MDR reboot time is 25 s for a SIP, and 10 s for SPAs
• SIP/SPA software upgrade can be done with minimal interruption to packet flow
• Requires hardware (RP, ESP) redundancy
• Supported for SIP40 (SIP10 does not support MDR)
• CPLD or FPGA upgrades require a full reload of the SPA
• 'from' and 'to' software versions must support MDR
• Statistics counters will be reset after the software upgrade
MDR Demonstration Results – Only 4 Packets Lost!
Performance
CPU Evolution
Impact of Packet size
• One route decision = One packet served
• Routing capacity = Number of packets per second served for a given service.
• Big packets
• Many bits switched for each route decision
• = High Mbps number
• Small packets
• Few bits switched for each route decision.
• = Low Mbps number
Mbps or PPS?
• 25 Mbps or 2.8 Gbps - which one is true?
• Answer: both. It depends on how it was tested
(Chart: stateless FW throughput in Mbps vs PPS for ISR models 800 through 3945, with line-rate FE, VDSL2+/sub-rate FE, and EFM/sub-rate FE tiers.)
• ASR1002-X: 5-36 Gbps
• ASR1001-X: 2.5-20 Gbps
• ISR 4451: 1-2 Gbps
• ISR 4431: 500-1000 Mbps
• ISR 4351: 200-400 Mbps
• ISR 4331: 100-300 Mbps
• ISR 4321: 50-100 Mbps
4-10X faster; add performance and services anytime; flexible consumption options
Performance license limit – ISR4k example
• Notice that many of the results are at the exact licensed max limit
• This means the router hit the shaper before bottoming out
• How much CPU is then left?
(Chart: CPU utilization at the licensed limit ranges from about 20% to 89%, depending on platform and test.)
ISR Portfolio Performance Overview
(Chart: aggregate throughput in Mbit/s for CEF, ACL, QoS, NAT, L4 FW, and IPSec with 1, 2, and 4 vCPUs; data-point labels show CPU utilization ranging from about 21% to 100%.)
Challenge: single features don't load-balance well across multiple CPUs
Testing parameters:
• IMIX traffic at 0.01% Drop Rate
• IOS-XE Image 3.14
• Platform: UCSC-C240-M3S with Intel Xeon E5-2643 v2 running ESXi 5.5
• VM-FEX results are on average 17% higher
ASR1000 Performance
(Chart: throughput in Gbit/s for CEF, NAT, FW, and IPSEC on ASR1001X, ASR1002X, ESP40, ESP100, and ESP200.)
ASR1ks perform to their advertised limits in all single feature tests except IPSEC
ASR1000 Performance
(Chart: bandwidth in Gb/s and millions of PPS versus packet size — 76, 132, 260, 516, 1028, and 1518 bytes — for base, QoS, Netflow, ACL, uRPF, and PR2650 feature tests.)
• Individual features have a small impact at small packet sizes (76 B and 132 B)
• Individual features have very little impact at large packet sizes (above 260 B)
• QFP has excellent behavior even with combined features at larger packet sizes!
How to verify current CPU Load
IOS Router with single CPU
IOS-ROUTER#sh proc cpu his
11111
88888999999999999999000009999999999888883333399999
222222222277777999999999999999000009999999999000005555599999
100 ****************************** *****
90 *********************************** *****
80 **************************************** *****
70 **************************************** *****
60 **************************************** *****
50 **************************************** *****
40 **************************************************
30 **************************************************
20 **************************************************
10 **************************************************
0....5....1....1....2....2....3....3....4....4....5....5....6
0 5 0 5 0 5 0 5 0 5 0
CPU% per second (last 60 seconds)
4351: 1 CPU with 4 cores, each with 2 hyper-threads, hence we see 8 "CPUs".
Cores can be used for CP, DP, or SP; there is no dedication, just scheduling reservation.
How to verify current CPU Load
IOS-XE Router with dedicated CP/SP and DP CPUs (ISR4400 & ASR1k Series)

The classic command shows average CP/SP utilization:
ISR4451#sh proc cpu his
(histogram output omitted)

The IOS-XE command shows per-core utilization:
ISR4451#show platform software status control-processor brief
                 Load Average
 Slot  Status  1-Min  5-Min 15-Min
  RP0 Healthy   0.02   0.25   0.15

                 Memory (kB)
 Slot  Status    Total     Used (Pct)    Free (Pct) Committed (Pct)
  RP0 Healthy  3972052  3928184 (99%)   43868 ( 1%)  2584140 (65%)

                 CPU Utilization
 Slot  CPU   User System   Nice   Idle    IRQ   SIRQ IOwait
  RP0    0   1.10   1.10   0.00  97.70   0.00   0.10   0.00
         1   0.70   3.50   0.00  95.80   0.00   0.00   0.00
         2   0.30   1.70   0.00  98.00   0.00   0.00   0.00
         3   0.30   0.70   0.00  98.99   0.00   0.00   0.00
         4   0.50   0.30   0.00  99.20   0.00   0.00   0.00
         5   3.10   1.90   0.00  95.00   0.00   0.00   0.00
ISR4400 and ASR1k have a second command to monitor the data plane cores:
show platform hardware qfp active datapath utilization
This shows only average DP utilization, with no breakdown per core.
Uni-dimensional scale for select features
Columns: ASR1001 | ASR1001-X | ASR1002-X | RP1/ESP10 | RP2/ESP20 | RP2/ESP40 | RP2/ESP100 | RP2/ESP200
VLANs (per port/SPA/system) 4K/8K/16K 4K/8K/16K 4K/8K/16K 4K/32K/32K 4K/32K/64K 4K/32K/64K 4K/32K/64K 4K/32K/64K
IPv4 routes 1M 1M 3.5M 1.0M 4M 4M 4M 4M
IPv6 routes 1M 1M 3M 0.5M 4M 4M 4M 4M
Sessions 8K not avail 29K 24K 32K 64K 58K 58K
L2TP tunnels 4K 4K 4K 12K 16K 16K 16K 16K
Session setup rate (PTA/L2TP) in cps 150/100 150/100 150/100 100/50 150/100 150/100 150/100 150/100
BGP neighbors 8K 8K 8K 4K 8K 8K 8K 8K
OSPF neighbors 1K 1K 2K 1K 2K 2K 2K 2K
Unique QoS policy- / class-maps 1K/4K 1K/4K 4K/4K 1K/4K 4K/4K 4K/4K 4K/4K 4K/4K
ACL/ACE 4K/25K 4K/50K 4K/120K 4K/50K 4K/100K 4K/100K 4K/400K 4K/400K
Multicast groups 1000 2000 4000 1000 4000 4000 44K 44K
IPv4/IPv6 mroutes 64K 64K 64K 64K 100K 100K 100K 100K
Firewall sessions 250K 2M 2M 1M 2M 2M 6M 6M
NAT + firewall sessions 125K 2M 1M 500K 1M 1M 6M 6M
Netflow cache entries 500K 2M 2M 1M 2M 2M 2M 2M
VRFs 4K 4K 8K 1K 8K 8K 8K 8K
BFD sessions (offloaded) 4095 4095 4095 2047 4095 4095 4095 4095
AVC throughput (Mpps/Gbps) 1.3/5 not avail 6/20 2.5/10 3/20 3.4/20 3.6/40 not avail
ISR DRAM Memory Demystified
What is DP and CP memory used for?
• Control Plane Memory:
  • Used for the IOS daemon (IOSd)
    • This daemon holds the IOS system as well as control plane tables (i.e. the Routing Information Base)
  • Used for Linux
    • This manages the entire device and also allocates memory to service containers
    • The Linux portion grows as IOS grows, due to information replication into other processes
(Memory-layout bar, 4 GB CP: 750 MB Linux OS (used), 750 MB Linux cache (free), 1000 MB IOS dHeap (free), 750 MB IOSd (used), 750 MB free — total ~62.5% free. Data plane: 750 MB packet buffer, 40 MB EXMEM used, 472 MB EXMEM free.)
IOS Memory (includes dHeap)
ISR4451#show memory
                 Head        Total(b)     Used(b)     Free(b)   Lowest(b)  Largest(b)
Processor  7F4A5B545010  1728363504  284041616  1444321888  679710664  1048575908
 lsmpi_io  7F4A5AE431A8     6295128     6294304         824         824         412
Dynamic heap limit(MB) 1000    Use(MB) 0
(Annotations: Total(b) is the total available IOS memory including the dHeap; Used(b) and Free(b) are total used and total free; the dynamic heap limit and its current use appear on the last line.)
ISR4451#show platform software status control-processor brief
                 Load Average
 Slot  Status  1-Min  5-Min 15-Min
  RP0 Healthy   0.00   0.04   0.06

                 Memory (kB)
 Slot  Status    Total     Used (Pct)    Free (Pct) Committed (Pct)
  RP0 Healthy  3972052  3259444 (82%)  712608 (18%)  1506452 (38%)

                 CPU Utilization
 Slot  CPU   User System   Nice   Idle    IRQ   SIRQ IOwait
  RP0    0   2.39   0.39   0.00  97.00   0.09   0.09   0.00
         1   0.40   0.10   0.00  99.49   0.00   0.00   0.00
         2   0.80   0.30   0.00  98.90   0.00   0.00   0.00
         3   1.80   3.50   0.00  94.70   0.00   0.00   0.00
         4   0.60   1.80   0.00  97.50   0.00   0.10   0.00
         5   0.20   0.40   0.00  99.39   0.00   0.00   0.00
(Used here is the total used memory: it excludes cache and dHeap but includes the full 750 MB IOS.)
How to monitor CP and DP (4400, 4GB CP, 2GB DP, IOS-XE 3.13.1)
ISR4451#show platform hardware qfp active infrastructure exmem statistics
QFP exmem statistics
(Memory-layout bar: 950 MB Linux OS (used), 750 MB Linux cache (free), 1000 MB IOS dHeap (free), 750 MB IOSd — ~42% free in total. Data plane: 256 MB packet buffer plus EXMEM segments of 300 MB and 100 MB free.)

ISR4451#show platform software status control-processor brief
                 Memory (kB)
 Slot  Status    Total     Used (Pct)   Free (Pct) Committed (Pct)
  RP0 Healthy  3950588  3881932 (98%)  68656 ( 2%)  2302648 (58%)

                 CPU Utilization
 Slot  CPU    User  System   Nice   Idle    IRQ   SIRQ IOwait
  RP0    0    1.29    1.59   0.00  97.10   0.00   0.00   0.00
         1    0.10    0.20   0.00  99.70   0.00   0.00   0.00
         2    2.70    9.00   0.00  88.28   0.00   0.00   0.00
         3    0.30    0.80   0.00  98.89   0.00   0.00   0.00
         4    0.50    0.00   0.00  99.49   0.00   0.00   0.00
         5    0.00    0.00   0.00 100.00   0.00   0.00   0.00
         6   23.27   76.72   0.00   0.00   0.00   0.00   0.00
         7    0.00    0.00   0.00 100.00   0.00   0.00   0.00
(Used here is the total used memory: it excludes cache and dHeap but includes the full 750 MB IOS.)
How to monitor CP and DP (4300, 4GB CP&DP, IOS-XE 3.13.1)
ISR4351#show platform hardware qfp active infrastructure exmem statistics
QFP exmem statistics
Memory (kB)
Slot Status Total Used (Pct) Free (Pct) Committed (Pct)
RP0 Healthy 16337120 3547568 (22%) 12789552 (78%) 6015204 (37%)
CPU Utilization
Slot CPU User System Nice Idle IRQ SIRQ IOwait
RP0 0 2.29 3.19 0.00 94.40 0.00 0.09 0.00
1 0.80 0.90 0.00 98.30 0.00 0.00 0.00
2 0.20 0.50 0.00 99.30 0.00 0.00 0.00
3 0.19 0.39 0.00 99.40 0.00 0.00 0.00
How to monitor CP and DP (1002-X, 16GB CP, 2GB DP, IOS-XE 3.13.1)
ASR1002-X# show platform hardware qfp active infrastructure exmem statistics
QFP exmem statistics
In comparison to 4400 the IOSd memory limit was probably not reached.
The overall memory consumption identified by “committed memory” is the limitation.
Conclusion
There are 3 possible memory bottlenecks:
• 1. IOSd memory
  • Even including the dHeap, there is a limit to how big IOSd can grow
• 2. Overall Linux memory
  • Because Linux grows at about the same rate as IOSd and constantly reduces its cache, this absence of cache eventually becomes an issue
• 3. EXMEM (data plane)
  • This is unrelated to the CP memory but can still pose a limitation, especially as it can't be increased as of 3.13.1
Bottlenecks in graphics
ISR4400, 4GB CP, 2 GB DP, IOS-XE 3.13.1
(Memory-layout bars: the CP splits into 750 MB Linux OS (used), 750 MB Linux cache (free), 750 MB free, 1000 MB IOS dHeap (470 MB free / 280 MB used by IOSd), and 750 MB IOSd; the DP splits into a 750 MB packet buffer plus EXMEM with 40 MB used and 472 MB free. A second layout shows 950 MB Linux OS, 750 MB cache, 1000 MB dHeap (530 MB free / 220 MB used), a packet buffer of 300 MB with 100 MB free, and EXMEM with 236 MB free and 20 MB used.)
Scaling up with bigger memory (IOS-XE 3.13.1)
Platform | CP & DP Memory | Linux | IOS dHeap | IOS static | EXMEM | Service Containers
4400 | 4GB, 2GB | 2.25 GB | 1 GB | 750 MB | 512 MB | 0 GB
Note: the 4400 has separate Control Plane (CP) and Data Plane (DP) modules; other platforms combine CP and DP in one module.
DRAM options per Platform
                  ISR4321  ISR4331  ISR4351  ISR4431  ISR4451
CP 1333 DIMMs        0        0        0        2        2
DP 1333 DIMMs        0        0        0        1        1
CP + DP 1600 onboard
2 x CP 1333 DIMMs
Spare DRAM Product IDs
Product ID Description
MEM-4400-2G= 2G DRAM (1 DIMM) 1333 for Cisco ISR 4400
MEM-4400-4G= 4G DRAM (1 DIMM) 1333 for Cisco ISR 4400
MEM-4400-8G= 8G DRAM (1 DIMM) 1333 for Cisco ISR 4400
MEM-4300-2G= 2G DRAM (1 DIMM) 1600 for Cisco ISR 4300
MEM-4300-4G= 4G DRAM (1 DIMM) 1600 for Cisco ISR 4300
MEM-4300-8G= 8G DRAM (1 DIMM) 1600 for Cisco ISR 4300
DRAM Compatibility
ISR4321 ISR4331 ISR4351 ISR4431 ISR4451
MEM-4400-2G= 2x 1x 2x 1x
MEM-4400-8G= 2x 2x
MEM-4300-2G= 2x 2x
MEM-4300-4G= 1x 2x 2x
MEM-4300-8G= 2x 2x
* Official Support pending
Configuration specifics
Management Ethernet
• ASR1000 and ISR4000 have a dedicated GigE Management Ethernet port
  • Not usable for 'normal' traffic
  • Supports only basic ACLs
  • Most forwarding features do not work on this port (traffic is not processed by the QFP)
  • Intended for out-of-band router access; has software support for rate limiting, but that takes CPU cycles to drop packets
• Assign the Gi0 interface an IP address, and set the default route in the VRF
  ip route vrf Mgmt-intf 0.0.0.0 0.0.0.0 <gateway_ip_address>
• Multiple options for file storage and booting when transferring images to the RP
  • bootflash: 1-8 GB — recommended; larger on systems without harddisk:
  • harddisk: 40-80 GB — not on all platforms
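Putting the bullets above together, a minimal out-of-band management configuration might look like the following sketch (the address and gateway are placeholders; the Mgmt-intf VRF exists by default on these platforms):

```
interface GigabitEthernet0
 ! Management port lives in the dedicated Mgmt-intf VRF,
 ! isolated from the 'normal' forwarding plane
 vrf forwarding Mgmt-intf
 ip address 192.0.2.10 255.255.255.0
 no shutdown
!
! Default route for management traffic, scoped to the VRF
ip route vrf Mgmt-intf 0.0.0.0 0.0.0.0 192.0.2.1
```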
Configuring Management Ethernet
• IOSd generates crashinfo files into bootflash: when it crashes, like other IOS-based platforms
IOS XE System Health Monitoring
• Standard IOS CPU utilization and memory usage commands, e.g. “show process cpu”, are not sufficient to determine ASR1000 health
• Monitoring the CPU and memory utilization of the following system elements is strongly recommended:
• RP CPU and memory utilization
• ESP CPU and memory utilization
• QFP utilization
• NOTE: On fixed configuration platforms it is critical to understand that the RP/ESP/SIP are actually sharing the same CPU and memory. Therefore checking the RP values reports for all three.
• Relevant MIBs:
• CISCO-PROCESS-MIB
• CISCO-ENTITY-QFP-MIB
[Figure: ASR1000 hardware blocks: RP (control processor, Linux kernel, control-plane memory), ESP (FECP, QFP with QFP memory, crypto, packet buffer), SIP (IOCP, SPA aggregation), joined by interconnects]
IOS XE Control-Processor
• Key data to monitor for BB/ISG deployments:
• RP/ESP Load Averages
• Committed Memory
• RP/ESP CPU Utilization

ASR1000# show platform software status control-processor brief
                          Memory (kB)
Slot  Status   Total     Used (Pct)      Free (Pct)       Committed (Pct)
RP0   Healthy  16343792  4509516 (28%)   11834276 (72%)   11627180 (71%)
RP1   Healthy  16343792  3962260 (24%)   12381532 (76%)   11621352 (71%)
ESP0  Healthy  16338208   990200 ( 6%)   15348008 (94%)     484804 ( 3%)
ESP1  Healthy  16338208  1450756 ( 9%)   14887452 (91%)    1094048 ( 7%)
SIP0  Healthy    449336   350208 (78%)      99128 (22%)     359060 (80%)
SIP1  Healthy    449336   281628 (63%)     167708 (37%)     250948 (56%)
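For periodic monitoring, the committed-memory column of this output can be parsed and checked against a threshold. The sketch below is an assumption-laden example (the exact column layout can vary by release; the 70% threshold is illustrative, not a Cisco recommendation):

```python
import re

# Sample lines in the shape of
# 'show platform software status control-processor brief' output.
SAMPLE = """\
 Slot  Status   Total     Used (Pct)    Free (Pct)     Committed (Pct)
 RP0   Healthy  16343792  4509516 (28%) 11834276 (72%) 11627180 (71%)
 ESP0  Healthy  16338208  990200 ( 6%)  15348008 (94%) 484804 ( 3%)
"""

def committed_pct(brief_output):
    """Map slot name -> committed-memory percentage."""
    result = {}
    for line in brief_output.splitlines():
        m = re.match(r"\s*(\S+)\s+(\S+)\s+\d+\s+\d+\s*\(\s*(\d+)%\)"
                     r"\s+\d+\s*\(\s*(\d+)%\)\s+\d+\s*\(\s*(\d+)%\)", line)
        if m:
            result[m.group(1)] = int(m.group(5))  # committed percentage
    return result

pcts = committed_pct(SAMPLE)
alerts = [slot for slot, pct in pcts.items() if pct >= 70]
print(pcts)    # {'RP0': 71, 'ESP0': 3}
print(alerts)  # ['RP0']
```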
• This command has a large amount of output related to QoS actions and events.
• The elements to look for are in the summary table listing the number of active queues and
schedules in the system.
IOS XE QoS sorter memory
ASR1001# show platform hardware qfp active infrastructure bqs sorter memory available
Level:Class Total Available Remaining
============= ====== ========= =========
ROOT:ONCHIP 64 64 100%
ROOT:COS_L2 448 448 100%
ROOT:NORMAL 0 0 0%
BRANCH:ONCHIP 128 122 95%
BRANCH:COS_L2 384 384 100%
BRANCH:NORMAL 15872 15872 100%
STEM:ONCHIP 992 877 88%
STEM:COS_L2 1024 1024 100%
STEM:NORMAL 260064 259934 99%
• Shows memory utilization by all the active elements in the BQS system, primarily used for QoS.
• The last line “STEM:NORMAL” is the primary element to monitor. Keep the % Remaining at a reasonable level (> 10%) to leave headroom for dynamic system events.
• Note: This command is dependent on an actual BQS ASIC being present and as such is
not operational on ISR or CSR platforms.
IOS XE QFP memory statistics

ASR1001# show platform hardware qfp active infrastructure exmem statistics
QFP exmem statistics
Type: Name: DRAM, QFP: 0
  Total: 268435456
  InUse: 96961536
  Free: 171473920
  Lowest free water mark: 171438080
Type: Name: SRAM, QFP: 0
  Total: 32768
  InUse: 14880
  Free: 17888
  Lowest free water mark: 17888
Type: Name: IRAM, QFP: 0
  Total: 134217728
  InUse: 7027712
  Free: 127190016
  Lowest free water mark: 127190016

• This command shows the specific QFP memory usage; this memory is used for ALL the features that are processed by the QFP.
• The SRAM memory is fixed and should never change.
• The DRAM memory is the main memory used; when it reaches near 100% utilization, the IRAM memory usage will increase to handle the extra requirements.
• The IRAM should be monitored for a reasonable amount of free memory (20-30% free) to handle dynamic events.
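As a quick sanity check, the free percentage of each region can be computed from the Total and Free counters; the figures below are the DRAM and IRAM values from the sample output above:

```python
def free_pct(total, free):
    """Percentage of an exmem region still free."""
    return 100.0 * free / total

# DRAM region from the sample output above
print(round(free_pct(268435456, 171473920), 1))  # 63.9
# IRAM region: well above the suggested 20-30% free floor
print(round(free_pct(134217728, 127190016), 1))  # 94.8
```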
IOS XE datapath utilization

ASR1001# show platform hardware qfp active datapath utilization
  CPP 0: Subdev 0          5 secs    1 min    5 min   60 min
Input:  Priority (pps)          1        1        1        1
                 (bps)        680     1160     1144     1152
    Non-Priority (pps)          1        4        4        4
                 (bps)        584     3040     2992     3000
           Total (pps)          2        5        5        5
                 (bps)       1264     4200     4136     4152
Output: Priority (pps)          0        1        1        1
                 (bps)        496      864      856      856
    Non-Priority (pps)          1        4        4        4
                 (bps)       3184     9168     9064     9200
           Total (pps)          1        5        5        5
                 (bps)       3680    10032     9920    10056
Processing: Load (pct)          0        0        0        0

• This output shows the actual processing load at the QFP from all interfaces.
• The Input/Output Priority/Non-Priority pps and bps counts should be the aggregate from all interfaces.
• The Processing Load (pct) needs to be monitored; a consistently high value should be investigated.
• This output (show platform hardware qfp active statistics drop) shows the reason for any drops in the QFP complex. There are many reasons for drops, but the command only shows non-zero statistics (use the all keyword to see all reasons).
• If there are drops outside the QFP, they will show up in other places:
• “show interface” output: queue overload
• “show controller” output: drops due to flow control because of the ingress overflow
IOS XE platform shell
• Used when there is not enough information from the IOS CLI
• Fully functional shell as ‘root’; you can see/break everything from here

asr1000# request platform software system shell r0
Activity within this shell can jeopardize the functioning of the system.
Are you sure you want to continue? [y/n] y
2009/06/27 16:58:44 : Shell access was granted to user <anon>; Trace file: /harddisk/tracelogs/system_shell_R0.log.20090627165844
**********************************************************************
Activity within this shell can jeopardize the functioning of the system.
Use this functionality only under supervision of Cisco Support.
ESP 100/200 show command differences
• show platform hardware qfp active infrastructure exmem statistics
• On ESP 100 the SRAM reports 0 values (no SRAM)
• show platform hardware qfp active datapath utilization
• must be executed twice on ESP100/200, once for each 2nd gen QFP
• Use the summary option to see multiple QFP ASIC details compressed into one output.
• show platform hardware qfp active infrastructure bqs sorter memory
[active, free, available, utilization]
• different output due to two 2nd gen QFP but same fundamental info for active, free, available
• Utilization is not implemented for 2nd gen QFP
• show platform hardware qfp active infrastructure bqs status
• slightly different, ESP100/200 does not report memory size
IOS XE packet tracing
• Introduced in XE3.10 as part of the IOS-XE serviceability initiative.
• Pactrac provides visibility into the treatment of packets on an IOS-XE platform with simple-to-use commands. It is intended to be used externally (TAC, customers) and internally (DE, DT) to troubleshoot, diagnose, or gain a deeper understanding of the actions taken on a packet during packet processing.
• Pactrac limits its inspection to the packets matched by the debug platform condition
statements making it a viable option even under heavy traffic situations seen in customer
environments.
• Three specific levels of inspection are provided by pactrac: accounting, per packet
summary and per packet path data. Each level adds a deeper look into the packet
processing at the expense of some packet processing capability.
Packet-Trace: Configuration Example
• The following shows how one would trace the first 128 packets entering
GigabitEthernet0/0/0 including FIA trace and a copy of up to the first 2048 octets of the
input packet.
debug platform condition interface g0/0/0 ingress
debug platform packet-trace enable
debug platform packet-trace packet 128 fia-trace
debug platform packet-trace copy packet input size 2048
debug platform condition start
Packet-Trace: Configuration Highlights
• Be mindful of how much QFP DRAM memory a config needs and how much memory is available
• memory needed = (stats overhead) + num pkts * (summary size + path data size + copy size)
• Stats overhead and summary size are fixed and about 2KB and 128B respectively
• Path data size and copy size (in/out/both) are user configurable
• Configure as much detail as you want, but more detail means more performance impact for matched packets (reading/writing memory costs!)
• Each config change temporarily disables pactrac and clears counts/buffers
• “Cheap” way of ‘debug plat cond stop’, ‘clear plat pack stats’ and ‘debug plat cond
start’
• Some configs require a ‘stop’ in order to display summary or per-packet data
• Currently: circular and drop tracing
• REMINDER: Conditions define where and when filters are applied to a packet
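The memory formula above can be turned into a quick estimator. This is a sketch using the stated ~2 KB stats overhead and 128 B summary size; the path-data size used in the example is a placeholder assumption, not a documented FIA-trace figure:

```python
def pactrac_memory_bytes(num_pkts, path_data_size, copy_size,
                         stats_overhead=2048, summary_size=128):
    """Estimate QFP DRAM needed by a packet-trace configuration:
    (stats overhead) + num_pkts * (summary + path data + copy)."""
    return stats_overhead + num_pkts * (summary_size + path_data_size + copy_size)

# Matching the configuration example slide: 128 packets with a
# 2048-byte input copy; 2048 B of path data is an assumed figure.
need = pactrac_memory_bytes(num_pkts=128, path_data_size=2048, copy_size=2048)
print(need)  # 2048 + 128 * (128 + 2048 + 2048) = 542720
```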
Packet-Trace: Show Commands
• Show commands are used to display pactrac configuration and each level of data:
• show platform packet-trace configuration
• Displays packet-trace configuration including any defaults
• show platform packet-trace statistics
• Displays accounting data for all pactrac packets
• show platform packet-trace summary
• Displays summary data for the number of packets specified by debug platform packet-trace packet
• show platform packet-trace packet { all | <pkt-num>} [decode]*
• Displays all path data for all packets or the packet specified
• Decode attempts to display packets captured by debug platform packet-trace copy in user
friendly way
• Only a few protocol headers are supported initially (ARPA, IP, TCP, UDP, ICMP)
• decode was introduced in XE3.11
Deployment use cases
IOS XE for Intelligent WAN
[Figure: branch connected over the WAN (IP-VPN) and the Internet to the private cloud, virtual private cloud, and public cloud]
Intelligent WAN – Leveraging the Internet
[Figure: branch reaching private, virtual private, and public clouds over the WAN (IP-VPN), the Internet, and 3G/4G-LTE; callouts: SLAs for business-critical applications, centralized security policy for Internet access, AVC]
• Consistent operational model
• Simple provider migrations
• Scalable and modular design
• DMVPN IPsec overlay design
• Improved network availability
• Application best path based on delay, loss, jitter, path preference
• Load balancing for full utilization of all bandwidth
• Performance Routing (PfR)
• Application monitoring with Application Visibility & Control (AVC)
• Application acceleration and bandwidth savings with WAAS
• Certified strong encryption
• Comprehensive threat defense with ASA & IOS Firewall/IPS
• Cloud Web Security (CWS) for scalable secure direct Internet access
Cisco Intelligent WAN
Solution Components
[Figure: branch with Internet backhaul and direct Internet access; secure WAN transport across MPLS and/or Internet for private cloud / DC access; local Internet path leveraged for public cloud and Internet access via Cisco Cloud Web Security]
• Increase WAN Capacity
• Improve App Performance
• Scale Security at the Branch
Currently Deployed Virtualization Solutions
• Control Plane (Virtual): RR, LISP MS/MR.. (CSR 1000V)
• Private Cloud / DC and Public Cloud: CE/PE functionality (ISR/ASR, CSR 1000V)
[Figure: campus ISR/ASR and Internet connectivity to CSR 1000V instances in the private cloud / DC and public cloud (VPC)]
ASR 1000 as Services PE

Functions:
• ASR 1006 / 1013 as MSE
• L3VPN / L2VPN / VPLS
• CsC, Extranet
• ASR 1002-X as PE+LNS
• Hierarchical QoS
• High Availability / ISSU
• FRR, Fast Convergence
• PE-CE BFD + NSR
• Routed PW into VRF
• EOAM / SLA

Services:
• Dual-stack
• Multicast / mVPN
• EVC
• RA-MPLS with MLP
• Firewall / CGN / NAT64
• IPSec

Scalability*:
• 2.5 – 100 Gbps
• 4M Routes
• 8000 VRF
• 16000 PW / L2TP
• 8000 PPP / 1000 MLP
• 2000 eBGP + NSR
[Figure: MSE deployment: residential subscribers via DSLAM / OLT over Ethernet, FR/Serial, POS, and ATM access into the ASR 1000 acting as LAC/LTS and PE+LNS, with NAT/NAPT, QoS, IPSec, L2TP, PW into bridge domain / VRF, H.248 / SIP-ALG, RACS, and TV / VOD / HSI services over integrated Ethernet/MPLS/IP]
ASR 1000 as Enterprise Edge

Functions:
• ASR 1001 / 1002 / 1006 as WAN Edge
• Secure VPN functions
• Internet Edge
• H-QoS
• IPSec: S2S, DMVPN, GETVPN

Services:
• FNF
• PfR
• Multilink FR / PPP
• VRF-aware PBR
• NAT / Firewall with inter-chassis redundancy
• USGv6
• TrustSec
• Application Visibility & Control

Scalability*:
• 10 Gbps encryption + 30 Gbps clear traffic
• 4000+ IPSec tunnels
• 60K ACEs in 4K ACLs
• 4K policy maps in 4K class maps
• 4K GRE
[Figure: branch ASR 1000 applying QoS, NAT, firewall, and IPSec / GRE toward the Internet / MPLS VPN and the corporate network]
ASR1000 with AppNav-XE
• Virtualize WAN optimization resources into pools of elastic resources with business-driven bindings, greatly simplifying deployment and management of WAAS.
• Application persistence, path affinity, custom affinity rules
• Operational excellence
• QoS, high availability, easy service enablement
• Platform management
• Multiple processors, memories, busses to be monitored
• Common code and feature sets across multiple locations in the network
• Eases deployments, decreases incompatibilities
Thank you
Complete Your Online Session Evaluation
• Give us your feedback to be
entered into a Daily Survey
Drawing. A daily winner
will receive a $750 Amazon
gift card.
• Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.
Don’t forget: Cisco Live sessions will be available
for viewing on-demand after the event at
CiscoLive.com/Online
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings
• Related sessions