
Q&A

Cisco ASR 1000 QoS

The Cisco ASR 1000 Aggregation Services Router platform has a robust and
scalable quality-of-service (QoS) implementation. It adheres to the modular QoS CLI
(MQC) interface, so the configuration is familiar to Cisco IOS and IOS XE Software
users from other platforms. Because QoS on the Cisco ASR 1000 is implemented in
hardware, certain details of operation may vary from other Cisco platforms.
General
Q. How does the Cisco ASR 1000 calculate packet sizes?
A.

Please refer to Table 1 for general information about queuing policy maps applied to physical interfaces, sub-interfaces, ATM virtual circuits, virtual templates, or tunnel interfaces. Please refer to Table 2 for general information about policing policy maps applied to the same targets.

Table 1.    Packet Size Calculation for Queuing Functions and Counters

QoS Target: Ethernet main and sub-interfaces
What Is Not Included: Inter-frame gap (IFG)/preamble and cyclic redundancy check (CRC); Layer 1 overheads
What Is Included: Layer 2 headers and Layer 2 payload; all Layer 3 and up payloads

QoS Target: ATM virtual circuits and ATM virtual paths
What Is Not Included: Layer 1 overheads; 802.1q header
What Is Included: 5-byte ATM cell headers; all ATM Adaptation Layer (AAL) headers; AAL CRC values; ATM cell tax and ATM cell padding; all Layer 3 and up payloads

QoS Target: Serial and Packet over SONET (PoS) main interfaces
What Is Not Included: CRC and High-Level Data Link Control (HDLC) bit stuffing
What Is Included: Layer 2 headers and Layer 2 payload; all Layer 3 and up payloads

QoS Target: Virtual access, broadband virtual template, and sessions
What Is Not Included: IFG/preamble and CRC; Layer 1 overheads
What Is Included: Layer 2 headers and Layer 2 payload; 802.1q header; Layer 2 Tunneling Protocol (L2TP) headers; Point-to-Point Protocol over X (PPPoX) headers; all Layer 3 and up payloads

QoS Target: Tunnels (generic routing encapsulation [GRE], Dynamic Multipoint VPN [DMVPN], Dynamic Virtual Tunnel Interface [dVTI], IPsec Site-to-Site VPN [sVTI], and IP Security [IPsec])
What Is Not Included: IFG/preamble and CRC; Layer 1 overheads
What Is Included: Layer 2 headers and Layer 2 payload; 802.1q header; GRE headers; cryptographic headers and trailer; all Layer 3 and up payloads

QoS Target: PPP multilink bundle
What Is Not Included: IFG/preamble and CRC; Layer 1 overheads; L2TP headers; ATM cell tax and ATM cell padding
What Is Included: Layer 2 multilink PPP headers; Layer 2 PPP headers
Table 2.    Packet Size Calculation for Classification and Policing Functions and Counters in the Egress Direction

QoS Target: Ethernet main and sub-interfaces
What Is Not Included: IFG/preamble and CRC; Layer 1 overheads
What Is Included: Layer 2 headers, Layer 2 payload, 802.1q header, and all Layer 3 and up payloads

QoS Target: ATM virtual circuits and ATM virtual paths
What Is Not Included: 5-byte cell headers; all AAL headers; AAL CRC values; ATM cell tax and ATM cell padding
What Is Included: All Layer 3 and up payloads

QoS Target: Serial and PoS main interfaces
What Is Not Included: CRC and HDLC bit stuffing
What Is Included: Layer 2 headers, Layer 2 payload, and all Layer 3 and up payloads

QoS Target: Virtual access, broadband virtual template, and sessions
What Is Not Included: IFG/preamble and CRC; Layer 1 overheads
What Is Included: Layer 2 headers* and Layer 2 payload; 802.1q header; L2TP headers; PPPoX headers; all Layer 3 and up payloads

QoS Target: Tunnel (GRE, DMVPN, dVTI, sVTI, and IPsec)
What Is Not Included: IFG/preamble and CRC; Layer 1 overheads; Layer 2 headers and Layer 2 payload; 802.1q header; cryptographic headers and trailers
What Is Included: GRE headers; all Layer 3 and up payloads

QoS Target: PPP multilink bundle
What Is Not Included: IFG/preamble and CRC; Layer 1 overheads; ATM cell tax and ATM cell padding; L2TP headers
What Is Included: Layer 2 multilink PPP headers; Layer 2 PPP headers
* Note that for broadband L2TP Network Server (LNS) scenarios, QoS policers configured on sessions will not observe the Layer 2 overhead, so the 14 bytes for the Layer 2 source/destination addresses and Layer 2 type, as well as any 802.1q headers, will not be included. As a result, any policers used for priority traffic would not include any overhead accounting offsets that are used for queuing or scheduling decisions.

Topic: Multilink PPP Support for the Cisco ASR 1000 Series Aggregation Services Routers
Reference: http://www.cisco.com/c/en/us/td/docs/routers/asr1000/configuration/guide/chassis/asrswcfg/multilink_ppp.html#pgfId-1097011

Q. Is it possible to account for downstream changes in packet size?


A.

Yes, with the overhead accounting feature, all queuing functions can adjust the size of packets for the
purposes of scheduling packets for transmission by using the account keyword with the queuing feature. You
can configure custom offsets ranging from -64 to 64 bytes. Additionally, you can use some predefined offsets.
Note that queuing features are supported only in the egress direction; therefore, overhead accounting is supported only on egress policy maps with queuing functions. An example of the command-line interface (CLI) follows:
policy-map test
class class-default
shape average account user-defined -4


Additionally, with the atm keyword, queuing functions can compensate for ATM cell division and cell padding
(sometimes called the ATM cell tax). This function compensates for the 5-byte header of each cell and the
padding of the last cell to fill a full 48 bytes of payload. If additional AAL5, Subnetwork Access Protocol
(SNAP), or other headers need to be accounted for, they should be included with the user-defined parameter
or some of the predefined keywords.
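For illustration only (this example is not part of the original document), a minimal sketch combining a user-defined offset with the atm keyword might look like the following; the shaper rate and the 12-byte offset are assumed values chosen for the example:
policy-map atm-downstream
 class class-default
  shape average 10000000 account user-defined 12 atm
With this configuration the shaper schedules each packet as if it carried 12 additional bytes of encapsulation and then also compensates for ATM cell division and cell padding.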
There is no support at this time for overhead accounting for policing features, including priority queues that are rate limited with policers (conditional or strict).
For more information, please reference:

MQC Traffic Shaping Overhead Accounting for ATM: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-mqc-ts-ohead-actg-atm.html

Ethernet overhead accounting: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-xe-3s-asr-1000-book/qos-plcshp-ether-ohead-actg.html

Q. Can QoS be configured on the management interface, GigabitEthernet0?


A.

No, you cannot configure QoS on the management interface. The management interface is handled entirely
within the route processor, and traffic to and from the management interface does not move through the Cisco
ASR 1000 Series Embedded Services Processor (ESP). Because all QoS functions are performed on the
ESP, QoS cannot be applied.

Q. Can QoS manage control-plane traffic that is destined for Cisco IOS Software running on the route processor?
A.

Yes, a nonqueuing QoS policy map is supported on the control plane in Cisco IOS Software configuration
mode. This feature is known as CoPP (Control Plane Policing). Usually, a policy map is applied to the control
plane to protect the route processor from denial-of-service (DoS) attacks. A policy map applied in the input
direction on the control plane will affect traffic that is destined for the route processor from regular interfaces. It
is possible to classify packets such that some are rate limited and others are not.
When using show plat hardware qfp commands on the control-plane interface, keep in mind that even
though the policy map is configured as ingress to the control plane, it is egress from the ESP card. Thus, the
show plat hardware qfp commands must use the output direction.
For more information about Control-Plane Policing (CoPP), please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-ctrl-pln-plc.html.
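As a hedged illustration (not taken from the original document), a basic CoPP policy might be sketched as follows; the access list, class and policy names, and the policer rate are assumptions for the example:
ip access-list extended COPP-ICMP
 permit icmp any any
!
class-map match-all COPP-ICMP-CLASS
 match access-group name COPP-ICMP
!
policy-map COPP-POLICY
 class COPP-ICMP-CLASS
  police cir 64000 conform-action transmit exceed-action drop
!
control-plane
 service-policy input COPP-POLICY
Here ICMP destined for the route processor is rate limited to 64 kbps, while traffic falling into class-default is unaffected.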

Q. How do QFP complexes map to physical interfaces for egress queuing with Cisco ASR 1000 Series 100- and
200-Gbps Embedded Services Processors (ESP100 and ESP200, respectively)?
A.

For the purposes of egress queuing, a given QFP complex has responsibility for the queuing functions on
certain shared-port-adapter (SPA) bays in the Cisco ASR 1000 chassis. For systems with one QFP complex,
this situation is not a concern because all interfaces are handled by a single QFP complex. For systems with
multiple QFPs, it is important to distribute interfaces among the QFPs if there will be a large number of queues
or schedules or if there is concern about high packet-buffer-memory usage. Note that this queuing
responsibility is independent of other feature processing. For example, a packet could have its ingress and
egress features handled by QFP 0 while the egress queuing responsibility is handled by QFP 1.


Figures 1 and 2 show how interfaces are distributed in the Cisco ASR 1006 and ASR 1013 chassis:
Cisco ASR 1006 chassis with ESP100:

SPA slots in green serviced by QFP 0

SPA slots in blue serviced by QFP 1

It is not possible for multiple QFPs to service a Cisco ASR 1000 Series SPA Interface Processor 10 (SIP10)
installed in any slot. If a SIP10 is used in a slot that is normally divided among QFPs, the QFP that normally
owns the left side of the SIP will service all interfaces. SIP40 cards can be serviced by multiple QFPs.
For the Cisco ASR 1000 Series Fixed Ethernet Line Card (ASR1000-2T+20X1GE), the two 10 Gigabit Ethernet interfaces are owned by the right-side QFP and the twenty 1 Gigabit Ethernet interfaces are owned by the left-side QFP (Figure 1). For the Cisco ASR 1000 Series Fixed Ethernet Line Card (ASR1000-6TGE), the even-numbered ports are owned by the left-side QFP and the odd-numbered ports are owned by the right-side QFP (Figure 1).
Figure 1.

Cisco ASR 1006 QFP Distribution with ESP100

Cisco ASR 1013 chassis with ESP100 or ESP200*:

SPA slots in green serviced by QFP 0

SPA slots in blue serviced by QFP 1

SPA slots in purple serviced by QFP 2

SPA slots in orange serviced by QFP 3

Figure 2.

QFP Interface Ownership Distribution Using ESP100 and ESP200


Note that Figure 2 assumes SIP40 line cards are used in a Cisco ASR 1013 chassis. If SIP10 line cards are used, all egress
queues are handled by the QFP that owns the left side (even numbered SPA bays) in the figure. For example, if a SIP10 was
installed in slot 2 (third from the bottom), all queues for all ports on that SIP10 would be serviced by QFP 0 (green) with ESP100
and QFP 1 (blue) with ESP200.
** For the Cisco ASR 1000 Series Fixed Ethernet Line Card (ASR1000-2T+20X1GE), the two 10 Gigabit Ethernet interfaces are owned by the right-side QFP and the twenty 1 Gigabit Ethernet interfaces are owned by the left-side QFP. For the Cisco ASR 1000 Series Fixed Ethernet Line Card (ASR1000-6TGE), the even-numbered ports are owned by the left-side QFP and the odd-numbered ports are owned by the right-side QFP (Figure 1).

Q. How does the three-parameter scheduler used by the Cisco ASR 1000 differ from two-parameter schedulers
used by other platforms?
A.

The Cisco ASR 1000 QoS scheduler uses three parameters: maximum, minimum, and excess. Most other platforms use only two parameters: maximum and minimum.
Both models handle maximum (shape) and minimum (bandwidth) the same way. The difference is how they distribute excess (bandwidth remaining). Maximum is an upper limit on the bandwidth that a class is allowed to forward. Minimum is a guarantee that the given amount of traffic will always be available, even if the interface or hierarchy is congested.
Excess is the difference between the maximum possible rate (parent shaper) and all the used minimums (priority and bandwidth-guaranteed traffic). A two-parameter scheduler distributes the excess bandwidth proportionally according to the minimum rates. A three-parameter scheduler has a programmable parameter to control that sharing. By default, the Cisco ASR 1000 uses equal sharing, or excess values of 1 for every class.
Because of restrictions in Cisco IOS Software, you cannot configure the minimum and excess parameters at the same time in a class.
For more information, please reference:

Policing and shaping overview: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-oview.html

Distribution of remaining bandwidth using ratio: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-dist-rem-bw.html

Leaky bucket algorithm as a queue:


http://en.wikipedia.org/wiki/Leaky_bucket#The_Leaky_Bucket_Algorithm_as_a_Queue
(Note: This document is not controlled or endorsed by Cisco. It is provided only as a convenience.)
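As an illustrative sketch (not from the original document), the excess parameter is typically expressed with the bandwidth remaining ratio command; the class maps, match criteria, and ratios below are assumptions:
class-map match-any gold
 match dscp af41
class-map match-any silver
 match dscp af21
!
policy-map EXCESS-SHARE
 class gold
  bandwidth remaining ratio 4
 class silver
  bandwidth remaining ratio 1
 class class-default
  bandwidth remaining ratio 1
During congestion, after any priority and minimum guarantees are satisfied, the leftover bandwidth is shared 4:1:1 among these classes rather than proportionally to their minimum rates.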

Q. What do the non-MQC bandwidth and bandwidth qos-reference commands do, and where are they useful?
A.

Typically the interface bandwidth command is used on an interface to influence the bandwidth metric that
routing protocols use for their path decisions. In certain situations, however, the value given for the bandwidth
command can influence QoS. The bandwidth qos-reference interface command was intended to convey
to the QoS infrastructure how much bandwidth is available for the downstream tunnel bandwidth. Table 3
details when bandwidth and bandwidth qos-reference are applicable.


Table 3.    Uses for Interface bandwidth and bandwidth qos-reference

Command: bandwidth
Target: Any generic main interface for a physical interface
Effect: Any top-level QoS MQC references for percent-based configuration will use this value for the interface throughput instead of the actual throughput. For example, if bandwidth 5000 is configured on a Gigabit Ethernet interface and a top-level class-default shaper is configured for shape average percent 50, the interface will be limited to 2.5 Mbps of traffic.

Command: bandwidth
Target: Any generic sub-interface for a physical interface
Effect: This command does not affect QoS. QoS applied on a sub-interface is affected by a bandwidth command configured on the corresponding main interface.

Command: bandwidth
Target: Multilink Point-to-Point Protocol (MLP) bundle
Effect: Configuring bandwidth on the actual bundle interface rate limits traffic even without the application of a QoS MQC configuration. Any percent-based configuration that is part of a policy map applied to the bundle uses the bandwidth value for calculations.

Command: bandwidth qos-reference
Target: GRE tunnel, sVTI tunnel, dVTI tunnel, and virtual template for broadband
Effect: Any top-level QoS MQC references for percent-based configuration use this value for the maximum throughput instead of the actual throughput of the underlying physical interface. For example, if bandwidth qos-reference 5000 is configured on an sVTI tunnel interface and a top-level class-default shaper is configured for shape average percent 50, the tunnel will be limited to 2.5 Mbps of traffic.

Command: bandwidth qos-reference
Target: Tunnel interface used for DMVPN
Effect: This command does not affect the QoS MQC configuration. It is essentially ignored for QoS purposes.
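For illustration only (not from the original document), the following sketch shows bandwidth qos-reference driving a percent-based shaper on an sVTI tunnel; the interface number, reference rate, and policy name are assumptions:
policy-map TUNNEL-SHAPER
 class class-default
  shape average percent 50
!
interface Tunnel100
 bandwidth qos-reference 50000
 service-policy output TUNNEL-SHAPER
With a reference value of 50,000 kbps, shape average percent 50 limits the tunnel to approximately 25 Mbps.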

Q. What are PAK_PRIORITY packets and how are they handled?


A.

Certain packets are considered so important that they are treated as no-drop and given a special designation called PAK_PRIORITY. They are generated by Cisco IOS Software on the route processor. PAK_PRIORITY packets are typically associated with various protocols such as Border Gateway Protocol (BGP), Enhanced IGRP (EIGRP), Open Shortest Path First (OSPF), Label Distribution Protocol (LDP), L2TP, PPP, etc. Not all packets for a given protocol will be PAK_PRIORITY.
In order to achieve the no-drop behavior, PAK_PRIORITY packets are not run through the queues created by MQC policy maps. PAK_PRIORITY packets are run through the interface default queue, with few exceptions. If a PAK_PRIORITY packet is classified to a priority (low-latency) queue by an MQC policy map, it will move through the user-defined priority queue instead of the interface default queue. Otherwise the packet will increment the classification counters (but not the queuing counters) for the matching class and then be enqueued in the interface default queue.
For non-ATM interfaces, there is a single interface default queue per physical interface. It carries PAK_PRIORITY and non-PAK_PRIORITY traffic that doesn't move through an MQC policy map. For ATM interfaces, there is a single interface default queue, but in addition, each ATM virtual circuit has a default queue associated with it. The per-ATM virtual-circuit default queue carries the non-PAK_PRIORITY traffic of a given virtual circuit without MQC applied. All PAK_PRIORITY traffic (not otherwise classified into a low-latency priority queue by an MQC policy map) moves through the ATM interface default queue.
The interface default queue exists outside of the queues created when a queuing QoS policy map is applied to
an interface. The interface default queue has guaranteed minimum bandwidth to service PAK_PRIORITY
packets. Moving the traffic through this queue helps ensure that the PAK_PRIORITY packets are not starved
by user-defined priority packets.
PAK_PRIORITY packets appear in the classification counters for a policy map applied to an egress interface.
The packets do not show up in the queuing counters, however, because they are actually enqueued through
the interface default queue. In order to observe the number of packets that have moved through the interface
default queue, use the following command (note that the interface name must be fully expressed with
matching capitalization):
show plat hard qfp active feature bqs queue out def int GigabitEthernet0/0/0


The values for tail drops and total_enqs give the number of packets that were dropped because of a
full queue and the number of packets that were enqueued.
PAK_PRIORITY packets are not tail dropped just because the interface default queue is full. These packets are added to the interface default queue even if the queue depth is greater than the queue limit. Non-PAK_PRIORITY packets targeted for the interface default queue are tail dropped like any other packet if the queue limit is exceeded. PAK_PRIORITY packets classified to a low-latency queue are also protected from tail dropping by the same logic. Only if the overall ESP packet memory is very full (more than 98 percent) are PAK_PRIORITY packets tail dropped.
It is not possible to mark packets as PAK_PRIORITY through the CLI. This function is reserved for packets
generated and marked by Cisco IOS Software. There are no Cisco IOS Software counters specific to
PAK_PRIORITY packets.
Following is a list of protocols with packets that are marked as PAK_PRIORITY. This list is subject to change without notice and is not considered comprehensive or exhaustive:

Layers 1 and 2

ATM Address Resolution Protocol Negative Acknowledgement (ARP NAK)


ATM ARP requests
ATM host ping operations, administration, and management (OA&M) cells
ATM Interim Local Management Interface (ILMI)
ATM OA&M
ATM ARP replies
Cisco Discovery Protocol
Dynamic Trunking Protocol (DTP)
Ethernet loopback packet
Frame Relay End2End Keepalive
Frame Relay inverse ARP
Frame Relay Link Access Procedure (LAPF)
Frame Relay Local Management Interface (LMI)
Hot Standby Connection-to-Connection Control (HCCP) packets
High-Level Data Link Control (HDLC) keepalives
Link Aggregation Control Protocol (LACP) (802.3ad)
Port Aggregation Protocol (PAgP)
PPP keepalives
Link Control Protocol (LCP) Messages
PPP LZS-DCP
Serial Line Address Resolution Protocol (SLARP)
Some Multilink Point-to-Point Protocol (MLPP) control packets (LCP)


IPv4 Layer 3

Protocol Independent Multicast (PIM) hellos


Interior Gateway Routing Protocol (IGRP) hellos
OSPF hellos
EIGRP hellos
Intermediate System-to-Intermediate System (IS-IS) hellos, complete sequence number PDU (CSNP),
PSNP, and label switched paths (LSPs)

ES-IS hellos
Triggered Routing Information Protocol (RIP) Ack
TDP and LDP hellos
Resource Reservation Protocol (RSVP)
Some L2TP control packets
Some L2F control packets
GRE IP Keepalive
IGRP CLNS
Q. The Cisco ASR 1000 isn't showing class-map filter or access control entry (ACE) matches. How can I access this information?
A.

By default, the ASR 1000 does not track per class-map filter or per-ACE matches for QoS. However, you can
access these statistics by enabling one of the following CLIs:
platform qos match-statistics per-filter   (supported in Cisco IOS XE Software Release 3.3)
platform qos match-statistics per-ace      (supported in Cisco IOS XE Software Release 3.10)

Note that these commands will not be effective if added to the configuration while any QoS policies are attached to any interfaces. To take effect, all QoS policies must be removed and then reapplied, or the router must be rebooted.
For more information about QoS packet-matching statistics configuration, please visit:
http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-match.html.
Q. The Cisco ASR 1000 isn't showing packet-marking statistics. How can I access this information?
A.

By default, the ASR 1000 does not track marking statistics for QoS. However, you can enable these statistics
by configuring the following CLI:
platform qos marker-statistics   (supported in Cisco IOS XE Software Release 3.3)

Note that this command will not take effect if added to the configuration while any QoS policies are attached to
any interfaces. To become effective, all QoS policies must be removed and then reapplied or the router must
be rebooted.
For more information about QoS packet-marking statistics, please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-mrkg.html.


Q. How many class maps, policy maps, or match rules are supported?
A.

Support as of Cisco IOS XE Software 3.10 is listed in Table 4.

Table 4.    Number of Class Maps, Policy Maps, and Match Rules Supported

Cisco IOS XE Software Version        2.0S-2.2S   2.3S    3.5S-3.9S   3.10S
Number of unique policy maps         1,024       4,096   4,096       16,000 or 4,096*
Number of unique class maps          4,096       4,096   4,096       4,096
Number of classes per policy map     256         1,000   1,000       1,000
Number of filters per class map      16          16      32          32

* 16,000 for Cisco ASR 1000 Series Route Processor 2 (RP2) with ESP40, ESP100, or ESP200. All other platform combinations support 4,096.

For more information about applying QoS features using the MQC, please visit:
http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-apply.html.
Q. What are the causes for FMFP_QOS-6-QOS_STATS_PROGRESS messages in the system log?
A.

The FMFP_QOS-6-QOS_STATS_PROGRESS message is simply an informational message indicating that the statistics upload from the ESP card to the RP card is not progressing as quickly as normally expected. There are no long-term ill effects from this condition other than that QoS statistics in Cisco IOS Software may not be updated as quickly as expected. This would affect statistics gathered from the CLI as well as from SNMP. This message can occur during a heavy processing load on the RP, for example during a large BGP routing update or during a period of high-rate session bring-up.

Q. What are the details of the packet counters in the show policy-map interface output?
A.

The output is divided into several different sections. Typically there are sections for each of the following:

Classification

Policing

Queuing

WRED, random-detect

Fair queue

Marking

The following configuration was used to generate the output for the example being documented:
platform qos marker-statistics
platform qos match-statistics per-filter
platform qos match-statistics per-ace
!
policy-map reference
class p12
police cir 5000000 pir 75000000
conform-action transmit
exceed-action set-dscp-transmit 0
violate-action drop
shape average 40000000
random-detect

random-detect precedence 0 10 20 10
random-detect precedence 1 12 20 10
random-detect precedence 2 14 20 10
fair-queue
class class-default
!
class-map match-any p12
match precedence 1
match precedence 2
!
interface GigabitEthernet1/0/2
service-policy output reference


Queue Memory
Q. How is packet memory managed?
A.

On all Cisco ASR 1000 platforms, the packet buffer memory on the ESP is one large pool that is used on an as-needed basis for all interfaces in the chassis. Interfaces do not reserve sections of memory. If 85 percent of all packet memory is used, nonpriority packets are dropped. At 98-percent packet memory usage, priority packets are dropped. The remaining 2 percent is reserved for internal control packet information. It is recommended that no more than 50 percent of packet buffer memory be allocated with configured queue-limit commands. Although not enforced, this is a best-practice recommendation; for certain special applications it may not apply. Only under unusual circumstances would you expect to see the packet buffer memory highly used. When the 85- and 98-percent thresholds are crossed, Cisco IOS Software generates a console log message.

Q. How can I monitor packet buffer memory usage?


A.

The following command can show how much of the packet buffer memory is used at any given time. Note that
on systems with multiple QFP complexes (ESP100 and ESP200), you can vary the number after the bqs
keyword to check the different QFP complexes.
ASR1000#show plat hard qfp active bqs 0 packet-buffer utilization
Packet buffer memory utilization details:
  Total:        256.00 MB
  Used :        2003.00 KB
  Free :        254.04 MB
  Utilization:  0 %

 Threshold Values:
  Out of Memory (OOM)   : 255.96 MB, Status: False
  Vital (> 98%)         : 253.44 MB, Status: False
  Out of Resource (OOR) : 217.60 MB, Status: False

Q. What is the scalability of packet memory, ternary content addressable memory (TCAM), and queues for various Cisco ASR 1000 hardware devices?
A.

Table 5 details that information:

Table 5.    Packet Memory, Queue, and TCAM Scalability

ESP Hardware    Packet Memory        Maximum Queues    TCAM Size
ASR1001         64 MB                16,000            5 Mb
ASR1001-X       512 MB               16,000            10 Mb
ASR1002-F       64 MB                64,000            5 Mb
ASR1002-X       512 MB               116,000           40 Mb
ESP5            64 MB                64,000            10 Mb
ESP10           128 MB               128,000           10 Mb
ESP20           256 MB               128,000           40 Mb
ESP40           256 MB               128,000           40 Mb
ESP100          1 GB (two 512-MB)    232,000*          80 Mb
ESP200          2 GB (four 512-MB)   464,000*          160 Mb

* Note that for ESP100 and ESP200, physical ports are associated with a particular QFP complex on the ESP card. In order to fully use all queues, the queues must be distributed among different slots and SPAs in the chassis. Additional information is included in this Q&A under the question: "How do QFP complexes map to physical interfaces for egress queuing with Cisco ASR 1000 Series 100- and 200-Gbps Embedded Services Processors (ESP100 and ESP200, respectively)?"

Queue Limits
Q. How are default queue limits calculated on the Cisco ASR 1000 when QoS is applied?
A.

By default, the ASR 1000 assigns a default queue limit based on the greater of the following two items:

Sixty-four packets

The number of packets of interface maximum-transmission-unit (MTU) size that would pass through the
interface at the configured rate for 50 milliseconds. If only a shape average rate or shape percent
value is used, then the rate is the shaper. If a bandwidth rate or bandwidth percent value is
included, then it is used instead of the shaper rate. If bandwidth remaining ratio value is used, then
the parent maximum rate (policy map or interface) is used.

Here are some examples for a Gigabit Ethernet interface with a default MTU of 1500 bytes:
For example, a class with a shape rate of 500 Mbps on a Gigabit Ethernet interface would give a default queue limit of (500,000,000 b/s x 0.05 s) / (1500 bytes x 8 bits/byte), or approximately 2083 packets.
A class with a shape rate of 300 Mbps on a Gigabit Ethernet interface would give a default queue limit of (300,000,000 b/s x 0.05 s) / (1500 bytes x 8 bits/byte) = 1250 packets.
A class with a shape rate of 2 Mbps and a minimum bandwidth of 1000 kbps on a Gigabit Ethernet interface would use the minimum rate for the calculation: (1,000,000 b/s x 0.05 s) / (1500 bytes x 8 bits/byte) is approximately 4 packets, which is less than 64, so the default queue limit is 64 packets.

Q. If QoS is not configured, what is the queue limit for the interface?
A. Typically on Cisco IOS Software platforms, the output for show interface will give you the number of
packets in the output hold queue. On the Cisco ASR 1000, even if QoS is not configured, the QFP complex
still manages the interface queuing. The output hold-queue value does not apply on the ASR 1000. When QoS
is not configured on an interface, all the traffic for that physical interface moves through the interface default
queue. The interface default queue is by default configured to handle 50 msec worth of traffic at 105 percent of the interface bandwidth for interfaces of 100 Mbps or faster. (Note that there are two exceptions: interfaces slower than 100 Mbps are based on 100 percent of interface bandwidth, and some platforms base the calculation on 25 msec for all interface speeds.) For ESP5 through ESP40, if the default calculation comes up with a value that is less than
9280 bytes, then the default queue size is set to 9280 bytes. For the Cisco ASR 1002-X and ESP100 and
higher, if the default calculation comes up with a value that is less than 9218 bytes, then the default queue size
is set to 9218 bytes.
You can use the following command to check the actual interface queue limit for a given physical interface
(note that the interface name must be fully expressed with matching capitalization):
show plat hard qfp active infra bqs queue output default interface
GigabitEthernet1/1/0 | inc qlimit
Note that traffic for sub-interfaces with queuing QoS configured moves through the MQC-created queues,
whereas traffic forwarded through other sub-interfaces or the main interface moves through the interface
default queue.
The interface default queue is always handled in byte mode instead of packet mode, which is the default for
MQC policy maps.
Q. Can I change the units (packets, time, and bytes) of the queue limit in real time?
A.

No, you cannot change units used for a given policy map in real time. You would have to remove the policy
map from any interfaces, reconfigure it, and then reattach it. If you have a feature such as WRED configured
with a given type of units for the minth and maxth values, you would have to remove WRED, change the
queue-limit command units, and then reapply WRED. Also keep in mind that all classes in a given policy map
must use the same units.


Q. From time to time, drops are seen in various queues. I do not suspect that the maximum rate is being overdriven. How should I address this problem?
A.

The class showing the drops may be experiencing microbursts. Microbursts are small bursts of traffic that are
long enough to fill up the queue for the class but not sustained long enough for network management to see
the bandwidth as high enough to tail drop. The first thing to try is to increase the queue limit for the class. You
can make this change in real time without affecting forwarding traffic. Try doubling the queue limit and then
monitor for drops. If you still observe drops, you can increase the queue limit again. Eventually the drops
should become less frequent or stop altogether. During nonburst times, traffic will have the same behavior.
During the microbursts, there will be periods of higher latency as packets drain from the deeper queue. Note
that if WRED is on the class, you will need to also adjust the minth and maxth values accordingly or
temporarily remove WRED and reapply it so that WRED can be installed with minth and maxth values based
on the increased queue limit.

Q. When should I use time-, byte-, or packet-based queue limits?


A.

By default, queue limits are defined in units of packets, giving a predictable number of MTU-sized packets that
can be queued for the class. However, the queue could also fill up with the same number of very small packets and start to tail drop packets even though the latency of a packet at the end of the full queue is quite small. For
most applications, the use of packet-based queue limits works well. If you prefer to have a tightly controlled
and predictable latency, you should switch to byte- or time-based queue limits. When you use time or bytes,
the maximum latency is fixed and the number of packets that can be queued is variable. Note that all classes
in a policy map must use the same units and WRED must be configured using the same units that the queue
limit is specified in. Operationally, time- and byte-based configuration is the same. If you use time units, the
system will use the maximum allowed bandwidth for the class to convert the time value into a number of bytes
and use that value to program the QFP hardware.
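A minimal sketch (not from the original document) of a class using time-based units for both the queue limit and WRED follows; the class map, bandwidth, and threshold values are assumptions:
class-map match-any data
 match precedence 0
!
policy-map TIME-BASED-QL
 class data
  bandwidth 50000
  queue-limit 20 ms
  random-detect
  random-detect precedence 0 5 ms 10 ms
Because the queue limit is expressed in milliseconds, the maximum latency of the class is fixed while the number of packets that can be queued varies with packet size.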

Q. When should I use small or large queue limits?


A.

You should use large queue limits as a mechanism to deal with bursty traffic. Having the available queue
space minimizes the chance of dropping packets when there are short bursts of high-data-rate traffic in an
otherwise slower stream of traffic. Queues that normally function well but occasionally show packet drops are good
candidates for an increased queue limit. If a traffic class is constantly overdriven, a large queue limit is doing
nothing other than increasing latency for most of the packets delivered. It would be better to have a smaller
queue limit because just as many packets would be forwarded and they would have spent less time sitting idly
in a queue. Priority queues by default have a queue limit of 512 packets, helping keep latency low but allowing
buffering if the need arises. Typically, there is no need to tune the priority queue limits because only rarely are
more than one or two packets waiting in the priority queue. If maximum latency and bursts of small packets are
of concern, you should consider changing the queue limit to units of time or bytes.

WRED - Random-Detect
Q. Why do WRED configurations ported to the Cisco ASR 1000 have restrictive queue limits?
A.

Cisco ASR 1000 calculates default queue limits differently from other platforms. Often older platforms have a
higher default queue-limit value than the ASR 1000. You need to either manually increase the queue limit for
the QoS class with the queue-limit value command or reconfigure your WRED minth and maxth values
according to the default ASR 1000 queue-limit value for the given class.
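As an illustration (not from the original document), the following sketch raises the class queue limit and restates the WRED thresholds relative to that new limit; the class map, rates, and thresholds are assumptions:
class-map match-any data
 match precedence 0 1
!
policy-map WRED-PORTED
 class data
  bandwidth 100000
  queue-limit 3200 packets
  random-detect
  random-detect precedence 0 800 1600
  random-detect precedence 1 900 1600
These threshold values mirror the defaults shown in Table 6 for a queue limit of 3200 packets.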


Q. What are the default minth and maxth values used by WRED?
A.

The default minth and maxth values are based on the queue limit for the class. For all precedence and
differentiated services code point (DSCP) values, maxth values are by default half of the queue limit.
Headroom between the maxth values and the hard queue limit is important because WRED is based on the mean (average) queue depth, which trails the instantaneous queue depth. The headroom between maxth and the hard queue limit may be needed as the mean queue depth catches up with the instantaneous queue depth.
Table 6 presents the default minth values for all precedence and DSCP values. It is easiest to think of minth
values as a fraction of the corresponding maxth value. The example values given are based on a queue limit
of 3200.

Table 6.    WRED Defaults for Queue Limit (Example with Queue Limit of 3200)

DSCP or Precedence         Minimum    Maximum    Minimum as Fraction of Maximum
af11                       1400       1600       14/16
af12                       1200       1600       12/16
af13                       1000       1600       10/16
af21                       1400       1600       14/16
af22                       1200       1600       12/16
af23                       1000       1600       10/16
af31                       1400       1600       14/16
af32                       1200       1600       12/16
af33                       1000       1600       10/16
af41                       1400       1600       14/16
af42                       1200       1600       12/16
af43                       1000       1600       10/16
ef                         1500       1600       15/16
Default or Precedence 0    800        1600       8/16
cs1/prec 1                 900        1600       9/16
cs2/prec 2                 1000       1600       10/16
cs3/prec 3                 1100       1600       11/16
cs4/prec 4                 1200       1600       12/16
cs5/prec 5                 1300       1600       13/16
cs6/prec 6                 1400       1600       14/16
cs7/prec 7                 1500       1600       15/16

Q. How is the average or mean queue depth calculated?


A.

The average or mean queue size is calculated according to the following formula, where n is the exponential weighting constant, current_queue_size is the instantaneous queue size when the drop decision is being made, and old_average_queue_size is the average queue size the previous time this calculation was performed:

average_queue_size = old_average_queue_size x (1 - 1/2^n) + current_queue_size x (1/2^n)

As n increases, the mean queue depth is slower to respond to changes in the instantaneous queue depth.


Fair-Queue Behavior
Q. What are the queue limits for the queues created by the fair-queue feature?
A.

By default, each of the 16 queues created by the fair-queue feature has a limit of 25 percent of the queue limit
of the class. For example, if a class is configured to have a queue limit of 1000 packets and fair queue is
configured, each of the 16 underlying queues has a limit of 250 packets. For this reason, it is important to
consider the per-flow queue limit when manually adjusting the WRED minth and maxth values.

Q. Is it possible to specifically change the queue limit for the queues created by fair queuing?
A.

Yes, you can adjust the queue limits for the 16 queues created by fair queuing but only when using packetbased queue limits. As of Cisco IOS XE Software Release 3.11, the CLI is limited such that it is not possible to
adjust the queue limits for the 16 queues using time- or byte-based queue-limit configurations. The
workaround is to manipulate the overall class queue limit in byte or packet mode such that the fair queues are
at the desired value. So if the desired per-flow queue limit is 100 ms, you should configure the class queue
limit to be 400 ms.
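For illustration only (not from the original document), the following sketch shows both approaches; the shaper rates and limit values are assumptions:
policy-map FQ-PACKET-UNITS
 class class-default
  shape average 100000000
  fair-queue
  fair-queue queue-limit 64
!
policy-map FQ-TIME-UNITS
 class class-default
  shape average 100000000
  fair-queue
  queue-limit 400 ms
In the first policy the per-flow limit is set directly to 64 packets; in the second, a 400-ms class queue limit yields roughly 100 ms per flow queue (25 percent).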

Q. How does fair queue divide traffic into different flows?


A.

The Cisco ASR 1000 uses a 5-tuple from the packet's contents to hash the traffic into a given queue. The 5-tuple
consists of:

Source and destination IP address

Protocol (TCP, UDP, etc.)

Source and destination protocol ports

There are some special considerations when using fair-queue with tunnel traffic. Specifically, fair-queue will
use the outermost IP addresses as part of the tuple calculation. For tunnel traffic moving across a class with
fair-queue, all the traffic for a given tunnel will use only one of the 16 fair queues even if the inner IP addresses
are different. If there are multiple tunnels using the class-map with fair-queue configured, then the tunnels will
be distributed amongst the 16 queues based on the tunnel source and destination addresses. Fair-queue may
not be the best choice to use on a main interface or sub-interface that is carrying a number of tunnel connections.
Q. How does fair queuing interact with random detect?
A.

Adding fair queue to random detect introduces some additional checks and considerations for applying custom
random-detect configurations. Figure 3 shows a flow diagram of the decision-making process when the two
features are configured together.


Figure 3.

Decision-Making Process (WRED with Fair Queue)

FQD (flow-queue depth): Per-flow queue depth, which is the number of packets in a particular flow queue
FQL (flow-queue limit): Per-flow individual queue limit, set by the fair-queue queue-limit <x>
command on the CLI
AQD (aggregate queue depth): Virtual queue depth, which is the sum of all individual flow-queue depths
AQL (aggregate queue limit): Virtual queue limit, set by the queue-limit <x> command on the CLI
Q. How does fair queuing interact with queue limits when random detect is not configured?
A.

Having only fair queue configured without random detect significantly changes how the QFP decides when to
drop a packet. The flow diagram in Figure 4 describes the process. The key difference in this scenario is that
the decision to drop is based solely on the comparison with the per-flow queue limit. There is no comparison
against the aggregate queue limit. This can be misleading because it is still possible to manipulate the aggregate queue limit to effect changes to the per-flow queue limit (which is 25 percent of the aggregate).


Figure 4.

Decision-Making Process (Fair Queue without WRED)

FQD (flow-queue depth): Per-flow queue depth, which is the number of packets in a particular flow queue
FQL (flow-queue limit): Per-flow individual queue limit, set by the fair-queue queue-limit <x>
command on the CLI

Cisco EtherChannel QoS


Please note that some documents refer to EtherChannel, while others may refer to Port-channel, Gigabit EtherChannel (GEC), or Link Aggregation (LAG). All of these terms refer to the same technology. This document uses the term EtherChannel.

For information about QoS policy aggregation, please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-agg.html.

For information about QoS for Cisco EtherChannel interfaces, please visit:
http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-eth-int.html.

For information about Point-to-Point Protocol over Gigabit EtherChannel (PPPoGEC), please visit:
http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-pppgec.html.

Q. What modes are supported for Cisco EtherChannel QoS?


A.

Cisco EtherChannel QoS on the Cisco ASR 1000 is supported in numerous configurations. There are
requirements for coordinated configuration of VLAN load-balancing mode and QoS configurations. Following
are the combinations of load balancing and QoS that are supported on a given port channel:

With VLAN-based load balancing:

Egress MQC queuing configuration on port-channel sub-interfaces


Egress MQC queuing configuration on port-channel member
Policy aggregation: Egress MQC queuing on sub-interface
Ingress policing and marking on port-channel sub-interface
Egress policing and marking on port-channel member link
Policy aggregation for multiple queues (Cisco IOS XE Software Release 2.6 and later)

Active/standby with LACP (1 + 1)

Egress MQC queuing configuration on port-channel member link (Cisco IOS XE Software Release 2.4
and later)

Egress MQC queuing configuration on Point-to-Point Protocol over Ethernet (PPPoE) sessions
Policy map on session only (model D.2, Cisco IOS XE Software Release 3.7 and later)
Policy maps on sub-interface and session (model F, Cisco IOS XE Software Release 3.8 and later)

Cisco EtherChannel with LACP and load balancing (active/active)

Egress MQC queuing configuration supported on port-channel member link (Cisco IOS XE Software
Release 2.5 and later)
Q. Can different port channels in the same router have different supported QoS combinations?
A.

Yes, each port channel is independent. If a global load-balancing method is configured, it could be necessary to configure a unique load-balancing method on a given port channel to allow certain QoS configurations. For example, if the global mode is configured for flow-based load balancing, you would need to configure VLAN-based load balancing on a specific port channel to configure ingress port-channel sub-interface policy maps, as in the sketch below.
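As a hedged sketch (not from the original document), the per-port-channel override and a sub-interface ingress policy might look like the following; the interface numbers, VLAN, and policy contents are assumptions:
policy-map MARK-INGRESS
 class class-default
  set dscp af21
!
interface Port-channel1
 load-balancing vlan
!
interface Port-channel1.100
 encapsulation dot1Q 100
 service-policy input MARK-INGRESS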

Q. Can I configure egress and ingress QoS simultaneously on a port-channel interface?


A.

With VLAN-based load balancing, you can configure ingress QoS (nonqueuing) on port-channel sub-interfaces
and egress policy map on the member links or port-channel sub-interfaces (but not both simultaneously).

Q. Is egress policing or marking supported on port-channel sub-interfaces?


A.

No, policing and marking for port-channel configurations are limited to ingress port-channel sub-interfaces and
egress member-link interfaces.

Q. Can I configure egress queuing on a port-channel main interface to rate limit the aggregate bandwidth for the
port channel?
A.

No, currently this function is not supported. However, this function is targeted for Cisco IOS XE Software
Release 3.12. Targeted functions include egress queuing hierarchical policy maps with support for all queuing
features.

Tunnel QoS
Topic

Reference

Inbound Policy Marking for dVTI

http://www.cisco.com/en/US/docs/ios-xml/ios/qos_classn/configuration/xe-3s/asr1000/qos-classnipm-dvti.html

QoS Tunnel Marking for GRE tunnels

http://www.cisco.com/en/US/docs/ios-xml/ios/qos_classn/configuration/xe-3s/asr1000/qos-classntunnel-gre.html

QoS for dVTI

http://www.cisco.com/en/US/docs/ios-xml/ios/qos_classn/configuration/xe-3s/asr1000/qos-classnqos-dvti.html

Per-Tunnel QoS for DMVPN

http://www.cisco.com/en/US/docs/ios-xml/ios/sec_conn_dmvpn/configuration/xe-3s/asr1000/secconn-dmvpn-per-tunnel-qos.html

Q. Are tunnels (GRE/IPSEC/dVTI/sVTI) configured with queuing QoS supported over port-channel interfaces?
A.

No. When a tunnel has a queuing QoS service policy attached and is routed over a port-channel interface, the service policy will be suspended. In this state the tunnel traffic will egress the router via the port-channel interface, but the tunnel policy map will not affect the traffic and will not collect any statistics.


Q. Are QoS policies supported on both the tunnel interface and the physical/sub-interface over which the tunnel is
routed?
A.

Only in certain well-defined scenarios. See the question "When is it acceptable to configure multiple policy maps for traffic?" in the General Recommendations section of this document.

Q. Is GRE tunnel marking (marking the tunnel header) supported for IPSEC tunnels?
A.

No. GRE tunnel marking is supported only for non-IPsec tunnels. The configuration is not blocked by the CLI for IPsec tunnels; however, it simply does not work when configured.

Q. Is IPv6 supported together with DMVPN and NHRP?


A.

Yes, in Cisco IOS XE Software Release 3.11, support was added for IPv6 DMVPN. As a result, the ip nhrp commands used on the tunnel interface were changed so that the preceding ip keyword is no longer required.

Priority (Low-Latency) Behavior


Q. What is the difference in strict priority (priority with policer) and conditional priority (priority with a rate)?
A.

Strict priority is always rate limited by the explicitly configured policer. The configuration looks like this:
policy-map test
class voice
police cir 1000000
priority
With strict priority, even if there is available bandwidth from the parent (that is, it is not congested), the policed
Low-Latency Queuing (LLQ) class forwards only up to the policer rate. The policer always rate limits the traffic.
Conditional priority configuration looks like this:
policy-map test
class voice
priority 1000
Conditional priority rate limits traffic with a policer only if there is congestion at the parent (policy map or
physical interface). The parent is congested if more than the configured maximum rate of traffic attempts to
move through the class (and/or interface). A conditional priority class can use more than its configured rate,
but only if there is no contention with other classes in the same policy. As soon as there is congestion at the
parent, the priority class(es) throttle back to the configured rate until there is no longer any congestion.

Q. How many levels of priority does the Cisco ASR 1000 support?
A.

Two levels of high-priority traffic are supported. Priority level 1 is serviced first, then priority level 2. After all
priority traffic is forwarded, nonpriority traffic is serviced.

Q. How are queues for multiple priority classes in a single policy map managed?
A.

If there is more than a single priority-level class at the same level in a policy map, there will be a single queue
for that level of priority traffic. Individual classes track packet matches, policer statistics, etc., but when the
packets are queued they are consolidated into a single queue for that policy map. So all priority-level 1 traffic
across multiple classes is consolidated into a single queue. All priority-level 2 traffic across multiple classes is
consolidated into a separate queue.
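To illustrate (not from the original document), in a sketch such as the following, voice and signaling share one priority-level 1 queue while video uses a separate priority-level 2 queue; the class names, match criteria, and policer rates are assumptions:
class-map match-any voice
 match dscp ef
class-map match-any signaling
 match dscp cs3
class-map match-any video
 match dscp af41
!
policy-map TWO-PRIORITY-LEVELS
 class voice
  priority level 1
  police cir 10000000
 class signaling
  priority level 1
  police cir 1000000
 class video
  priority level 2
  police cir 50000000
 class class-default
  fair-queue
Each class keeps its own classification and policer counters, but voice and signaling packets are enqueued into the same priority-level 1 queue.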


Hierarchical Policy Maps


Q. How many levels of hierarchical policy maps are supported?
A.

In general, three levels of hierarchy are supported. If you mix queuing and nonqueuing policies together in a
hierarchy, the nonqueuing policy maps must be at the leaf level of the policy map (child policy beneath
grandparent and parent queuing policies, for example).
In a three-level queuing policy map, the highest level (grandparent), can consist only of class default.
If the policy map is applied to a virtual interface (such as a tunnel or session), there may be additional
restrictions limiting the hierarchy to two levels of queuing, depending on the configuration.
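A sketch for illustration (not from the original document) of a three-level queuing hierarchy in which the grandparent contains only class-default, for example a port-level shaper (grandparent), a circuit shaper (parent), and per-application child classes; all names and rates are assumptions:
class-map match-any VOICE
 match dscp ef
!
policy-map CHILD
 class VOICE
  priority level 1
  police cir 20000000
 class class-default
  fair-queue
!
policy-map PARENT
 class class-default
  shape average 100000000
  service-policy CHILD
!
policy-map GRANDPARENT
 class class-default
  shape average 500000000
  service-policy PARENT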

Interaction with Cryptography


Q. How is QoS low-latency priority queuing acknowledged as traffic is sent to the cryptography engine?
A.

There are high- and low-priority queues for traffic being sent to the cryptography engine. Any traffic that
matches an egress high-priority QoS class is sent through the high-priority queue to the cryptography engine.
Priority-levels 1 and 2 traffic move through a single high-priority queue to the cryptography hardware. All other
traffic is sent through the low-priority queue to the cryptography hardware. After the traffic has returned from
the cryptography hardware, the priority-levels 1 and 2 are honored in independent queues, followed by
nonpriority traffic. PAK_PRIORITY traffic will move through the low-priority queue for cryptography by default. Only if the PAK_PRIORITY traffic is classified into a high-priority class via an MQC policy map will it use the high-priority queue for cryptography.

Q. How does cryptography affect the size of packets that QoS observes?
A.

Queuing functions on physical interfaces or tunnel interfaces see the complete packet size including any
cryptography overhead that was added to the packet. If the policy map is applied to the tunnel interface,
policers do not observe the Layer 2 and/or cryptography overhead. Note that if a policer is used on a priority
class, it is advisable to adjust the policer rate down accordingly because the observed rate for the priority
policer will be different from the rates used for classes configured with other queuing functions.

Q. Why do cryptographic connections sometimes fail when QoS is configured?


A.

Cryptography happens before egress QoS queuing. When encryption occurs a sequence number is
sometimes included in the encryption headers. If the packets are subsequently delayed significantly because
of high queue depths, the remote router can declare the packets outside of the anti-replay window and drop
the encrypted connection. Potential workarounds include increasing the available bandwidth with QoS (to decrease latency) or increasing the replay window size.
For information about IPsec anti-replay window expanding and disabling, please visit:
http://www.cisco.com/en/US/docs/ios-xml/ios/sec_conn_dplane/configuration/xe-3s/asr1000/sec-ipsec-antireplay.html.
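As a hedged example (not from the original document), the anti-replay window can be enlarged with a global command such as the following; the window size shown is only an example value:
crypto ipsec security-association replay window-size 1024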

Q. How can packet drops to the cryptography engine be monitored?


A.

There are high and low priority queues for traffic destined for cryptography. Those queues can be monitored
via platform hardware commands. The following gives an example of how to monitor those queues. You can
see statistics for packet and byte drops with the tail drop statistic.
plevel 0 is low priority traffic and plevel 1 is high priority traffic.
ASR1000#show plat hardware qfp active infrastructure bqs queue output default all
| inc crypto


Interface: internal1/0/crypto:0 QFP: 0.0 if_h: 6 Num Queues/Schedules: 2

ASR1000#show plat hardware qfp active infrastructure bqs queue output default interface-string internal1/0/crypto:0
Interface: internal1/0/crypto:0 QFP: 0.0 if_h: 6 Num Queues/Schedules: 2
  Queue specifics:
    Index 0 (Queue ID:0x88, Name: i2l_if_6_cpp_0_prio0)
      Software Control Info:
        (cache) queue id: 0x00000088, wred: 0x88b168c2, qlimit (bytes): 73125056
        parent_sid: 0x261, debug_name: i2l_if_6_cpp_0_prio0
        sw_flags: 0x08000001, sw_state: 0x00000c01, port_uidb: 0
        orig_min: 0, min: 0
        min_qos: 0, min_dflt: 0
        orig_max: 0, max: 0
        max_qos: 0, max_dflt: 0
        share: 1
        plevel: 0, priority: 65535
        defer_obj_refcnt: 0
      Statistics:
        tail drops  (bytes): 0, (packets): 0
        total enqs  (bytes): 0, (packets): 0
        queue_depth (bytes): 0
  Queue specifics:
    Index 1 (Queue ID:0x89, Name: i2l_if_6_cpp_0_prio1)
      Software Control Info:
        (cache) queue id: 0x00000089, wred: 0x88b168d2, qlimit (bytes): 73125056
        parent_sid: 0x262, debug_name: i2l_if_6_cpp_0_prio1
        sw_flags: 0x18000001, sw_state: 0x00000c01, port_uidb: 0
        orig_min: 0, min: 0
        min_qos: 0, min_dflt: 0
        orig_max: 0, max: 0
        max_qos: 0, max_dflt: 0
        share: 0
        plevel: 1, priority: 0
        defer_obj_refcnt: 0
      Statistics:
        tail drops  (bytes): 0, (packets): 0
        total enqs  (bytes): 0, (packets): 0
        queue_depth (bytes): 0


General Recommendations
Q. In what order should I add commands to a class map?
A.

Although there is no strict requirement that you add commands in a particular order, the following describes
the best practice:
For queuing classes, add commands in this order:

Queuing features (shape, bandwidth, bandwidth remaining, and priority)

account

queue-limit

set actions

police

fair-queue

random-detect

service-policy

For nonqueuing classes ordering is not as important, but the following order is preferred:

set actions

police

service-policy
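For illustration only (not from the original document), a queuing class entered in the recommended order might look like the following; the class map, rates, and values are assumptions:
class-map match-any video
 match dscp af41 af42
!
policy-map ORDER-EXAMPLE
 class video
  shape average 20000000 account user-defined 4
  queue-limit 1000 packets
  set dscp af41
  fair-queue
  random-detect dscp-based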

Q. When is it acceptable to configure multiple policy maps for traffic?


A.

First it is important to understand the difference between queuing and nonqueuing policy maps. Queuing policy maps
include the following features in at least one class:

shape

bandwidth

bandwidth remaining

random-detect

queue-limit

priority

The practice of configuring multiple queuing policy maps for traffic to traverse is sometimes called multiple policy maps (MPOL). In general on the Cisco ASR 1000, it is acceptable to configure only one queuing policy map that traffic will be forwarded through in the egress direction. For example, if a Gigabit Ethernet sub-interface has a queuing policy map configured, it is not possible to configure another queuing policy map on the main interface.
Certain configurations do not carry this limitation, however. Here is a list of those scenarios where multiple
queuing policy maps are supported:

Broadband QoS, class default-only queuing policy map on Ethernet sub-interface, and two-level hierarchical
queuing policy map on session (through virtual template or RADIUS configuration) (sometimes referred to
as model F broadband QoS).


Tunnels (GRE, DMVPN, sVTI, and dVTI) with two-level hierarchical queuing policy map and the targeted
egress physical interface with a class default-only flat queuing policy map with a maximum rate configured
(shape): The tunnels may target the physical interface directly or depend on the routing table to point
toward the egress interface. This feature is supported as of Cisco IOS XE Software Release 3.6.

Policy aggregation where priority queues are configured on the sub-interfaces and nonpriority queues are
configured on the main interface: This scenario requires the use of service fragments.

Policy aggregation where priority queues are configured on the main interface and nonpriority queues are
configured on the sub-interfaces: This scenario requires the use of service fragments.

Printed in USA


C67-731655-00

08/14

