Discovery 22: Configure QoS
Introduction
After reviewing the performance of FIFO and fair queuing, you determine that although fair queuing improved
network performance, it did not provide the required priority for the VoIP traffic or the hard bandwidth guarantees for
the key applications on the network. In this lab, you will implement a Class-Based Weighted Fair Queueing
(CBWFQ) policy with a low-latency queue (LLQ) to achieve these two goals.
Actual lab devices are not used in this activity. This activity is a simulation based on a series of tasks.
Because of the nature and type of the technology, it is not always possible to provide real lab equipment.
These lab simulations are based on real equipment and actual lab tasks. There are no setup or
initialization time requirements, and the simulation is available immediately.
Topology
Task 1: Configure a CBWFQ Policy with LLQ
Measure Network Performance with Traffic Generator Enabled
You will measure network performance using ping and Cisco IP SLA tools. The traffic generator has been enabled
and configured to create variable traffic rates.
This lab is a simulation, and tab completion of commands does not work, so type the full command as
shown in each step of the guide.
Activity
Step 1
Click on ROUTER-1. Examine the queuing parameters for the Serial 0/1/0:0 interface using the show
interfaces serial 0/1/0:0 command.
ROUTER-1 show interfaces serial 0/1/0:0
Serial0/1/0:0 is up, line protocol is up
Hardware is NIM-2MFT-T1/E1
Internet address is 172.16.1.1/24
MTU 1500 bytes, BW 1536 Kbit/sec, DLY 20000 usec,
reliability 255/255, txload 84/255, rxload 129/255
Encapsulation PPP, LCP Open
Open: IPCP, CDPCP, crc 16, loopback not set
Keepalive set (10 sec)
Last input 00:00:08, output 00:00:09, output hang never
Last clearing of "show interface" counters 1d14h
Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 737
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 783000 bits/sec, 265 packets/sec
5 minute output rate 506000 bits/sec, 452 packets/sec
4733477 packets input, 988193229 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
4806380 packets output, 972689430 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
1 carrier transitions no alarm present
Timeslot(s) Used:1-24, subrate: 64Kb/s, transmit delay is 0 flags
Step 2
Click on ROUTER-2 and examine the queuing parameters for the Serial 0/1/0:0 interface using the show
interfaces serial 0/1/0:0 command.
ROUTER-2 show interfaces serial 0/1/0:0
Serial0/1/0:0 is up, line protocol is up
Hardware is NIM-2MFT-T1/E1
Internet address is 172.16.1.2/24
MTU 1500 bytes, BW 1536 Kbit/sec, DLY 20000 usec,
reliability 255/255, txload 142/255, rxload 92/255
Encapsulation PPP, LCP Open
Open: IPCP, CDPCP, crc 16, loopback not set
Keepalive set (10 sec)
Last input 00:00:01, output 00:00:01, output hang never
Last clearing of "show interface" counters 1d14h
Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 140783
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 556000 bits/sec, 497 packets/sec
5 minute output rate 860000 bits/sec, 298 packets/sec
4831887 packets input, 976190702 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
1 input errors, 1 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
4746517 packets output, 994144604 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
1 carrier transitions no alarm present
Timeslot(s) Used:1-24, subrate: 64Kb/s, transmit delay is 0 flag
Step 3
Which queuing strategy is enabled by default for the interface? (Click the Show Me button to view the answer.)
Queueing strategy: fifo
Step 4
Click back to ROUTER-1 and perform an extended ping to the Serial 0/1/0:0 interface of ROUTER-2
(172.16.1.2) using the ping command, a repeat count of 50, and a datagram size of 160.
Make a note of your min/avg/max ping response times.
You should see that many of the pings have a long response time due to congestion on the low-bandwidth, 1536
Kbps PPP serial link. You should also see some dropped pings.
ROUTER-1 ping
Protocol [ip]:<enter>
Target IP address: 172.16.1.2
Repeat count [5]: 50
Datagram size [100]: 160
Timeout in seconds [2]: <enter>
Extended commands [n]: <enter>
Sweep range of sizes [n]: <enter>
Type escape sequence to abort.
Sending 50, 160-byte ICMP Echos to 172.16.1.2, timeout is 2 seconds:
.!!....!..!........!...!.!...!.!...!..!........!..
Success rate is 24 percent (12/50), round-trip min/avg/max = 215/359/593 ms
The traffic that is sent by the traffic generator through your pod is set up so that the traffic rate varies
constantly and differs between download and upload. As a result, the drop rate and RTTs may
be different for each of your routers. Your values may also differ from the values in the sample output
that is provided here.
Step 5
Click on ROUTER-2 and perform an extended ping, but this time perform a ping from ROUTER-2 to the Serial
0/1/0:0 interface of ROUTER-1 (172.16.1.1).
Make a note of your min/avg/max ping response times.
ROUTER-2 ping
Protocol [ip]: <enter>
Target IP address: 172.16.1.1
Repeat count [5]: 50
Datagram size [100]: 160
Timeout in seconds [2]: <enter>
Extended commands [n]: <enter>
Sweep range of sizes [n]: <enter>
Type escape sequence to abort.
Sending 50, 160-byte ICMP Echos to 172.16.1.1, timeout is 2 seconds:
!.......!!..!.......!.!.!......!...!.!..!.!.....!.
Success rate is 26 percent (13/50), round-trip min/avg/max = 154/426/595 ms
Step 6
ROUTER-1 and ROUTER-2 are preconfigured with the Cisco IP SLA tool and a UDP jitter IP SLA operation.
ROUTER-2 is configured as an IP SLA responder. On ROUTER-1, review the network statistics that the IP SLA
UDP jitter operation has gathered using the show ip sla statistics 10 command.
Make a note of the following values:
Min/Avg/Max RTT
Min/Avg/Max Source-to-Destination Latency
Min/Avg/Max Source-to-Destination Jitter
Source-to-Destination Packet Loss
MOS Score
You should see excessive RTT, latency, and jitter values and a MOS score of 1.0.
ROUTER-1 show ip sla statistics 10
IPSLAs Latest Operation Statistics
IPSLA operation id: 10
Type of operation: udp-jitter
Latest RTT: 333 milliseconds
Latest operation start time: 03:01:12 EST Thu May 10 2018
Latest operation return code: OK
RTT Values:
Number Of RTT: 313 RTT Min/Avg/Max: 94/333/607 milliseconds
Latency one-way time:
Number of Latency one-way Samples: 313
Source to Destination Latency one way Min/Avg/Max: 8/13/82 milliseconds
Destination to Source Latency one way Min/Avg/Max: 86/320/593 milliseconds
Jitter Time:
Number of SD Jitter Samples: 138
Number of DS Jitter Samples: 138
Source to Destination Jitter Min/Avg/Max: 0/5/16 milliseconds
Destination to Source Jitter Min/Avg/Max: 0/13/69 milliseconds
Over Threshold:
Number Of RTT Over Threshold: 0 (0%)
Packet Loss Values:
Loss Source to Destination: 16
Source to Destination Loss Periods Number: 84
Source to Destination Loss Period Length Min/Max: 1/6
Source to Destination Inter Loss Period Length Min/Max: 1/330
Loss Destination to Source: 670
Destination to Source Loss Periods Number: 175
Destination to Source Loss Period Length Min/Max: 1/25
Destination to Source Inter Loss Period Length Min/Max: 1/12
Out Of Sequence: 0 Tail Drop: 1
Packet Late Arrival: 0 Packet Skipped: 0
Voice Score Values:
Calculated Planning Impairment Factor (ICPIF): 93
MOS score: 1.00
Number of successes: 54
Number of failures: 6
Operation time to live: Forever
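The IP SLA operation and responder are already configured for you in this lab. For reference only, a UDP jitter
operation of this general kind could be set up along the following lines; the destination port, codec, and frequency
shown here are illustrative assumptions rather than the lab's actual preconfigured values.
On the responder (ROUTER-2):
ip sla responder
On the sender (ROUTER-1):
ip sla 10
 ! UDP jitter probe toward ROUTER-2; the codec keyword makes the operation report ICPIF and MOS values
 udp-jitter 172.16.1.2 16384 codec g711ulaw
 frequency 60
ip sla schedule 10 life forever start-time now
The operation number (10) matches the show ip sla statistics 10 command used in this step.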
Step 7
The switch is preconfigured with a marking policy, so your routers expect incoming traffic with proper markings
already set. On ROUTER-1, configure the VOICE class map according to the table below. The other six class
maps have been preconfigured for you. Note that for the VOICE class, in addition to DSCP EF marking, you
need to match ICMP for testing purposes so that you can see how LLQ will influence this traffic.
Class Name (Class-Map Name)    Match Criteria
VOICE                          DSCP EF or ICMP or Cisco IP SLA
VIDEO                          DSCP AF41 or DSCP AF42
NETWORK_CONTROL                DSCP CS6
SIGNALING                      DSCP CS3
MISSION_CRITICAL               DSCP AF31
LOW_LATENCY_DATA               DSCP AF21
HIGH_THROUGHPUT_DATA           DSCP AF11
ROUTER-1 config t
ROUTER-1(config) class-map match-any VOICE
ROUTER-1(config-cmap) match dscp ef
ROUTER-1(config-cmap) match protocol icmp
ROUTER-1(config-cmap) end
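The table above also lists Cisco IP SLA as a match criterion for the VOICE class, and the show policy-map
interface output in Task 2 includes a Match: protocol cisco-ip-sla counter for this class. That match is handled
for you in this simulation; for reference, it would correspond to a command of the following form (not required
in this step):
class-map match-any VOICE
 ! protocol name assumed from the show policy-map interface output shown later in this lab
 match protocol cisco-ip-sla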
Step 8
Verify that all seven class maps are configured on your router using the show class-map command.
ROUTER-1 show class-map
Class Map match-all LOW_LATENCY_DATA (id 6)
Match dscp af21 (18)
Class Map match-all SIGNALING (id 4)
Match dscp cs3 (24)
Class Map match-all HIGH_THROUGHPUT_DATA (id 7)
Match dscp af11 (10)
Class Map match-all MISSION_CRITICAL (id 5)
Match dscp af31 (26)
Class Map match-any class-default (id 0)
Match any
Class Map match-all NETWORK_CONTROL (id 3)
Match dscp cs6 (48)
Class Map match-all VIDEO (id 2)
Match dscp af41 (34) af42 (36)
Class Map match-any VOICE (id 1)
Match dscp ef (46)
Match protocol icmp
Step 9
Enable the QoS Packet-Matching Statistics feature on ROUTER-1 using the platform qos match-statistics per-
filter command. This feature enables per-filter match counters, which will help you fully monitor policy-map
statistics once the policy is applied to the interface.
ROUTER-1 config t
ROUTER-1(config) platform qos match-statistics per-filter
ROUTER-1(config)
Configuring CBWFQ and LLQ
The individual traffic classes in a CBWFQ policy are characterized by their bandwidth assignment, or weight, and the
maximum number of packets that the queue can hold, or queue limit.
The bandwidth policy-map class configuration command is used to specify or modify the amount of bandwidth
that is assigned to the traffic class in kilobits per second or as a percentage of the available bandwidth. The value
that is assigned represents the minimum bandwidth that is guaranteed for this traffic class. Classes can use extra
bandwidth that is not used by other classes.
All classes belonging to one policy map should use the same type of fixed bandwidth guarantee.
These restrictions apply to the bandwidth command:
If the percent keyword is used, the sum of the class bandwidth percentages cannot exceed 100 percent.
The amount of bandwidth that is configured should be large enough to also accommodate Layer 2 overhead.
The class bandwidths can be specified in kilobits per second or in percentages in a policy map, but not a mix of
both. However, the unit for the priority command in the priority class can be different from the bandwidth unit of
the nonpriority classes (see the sketch after this list).
CBWFQ is not supported on subinterfaces. If a service policy is applied to a subinterface in which the service
policy directly references a policy map that includes a bandwidth statement, this error message will be
displayed:
router(config-subif) service-policy output CBWFQ
CBWFQ : NOT supported on subinterfaces
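To make these rules concrete, here is a minimal sketch (with hypothetical class names) in which all nonpriority
classes use the same percentage-based guarantee and only the priority class is expressed in kbps, which is also
the style used by the LLQ_POLICY later in this lab:
policy-map PERCENT_EXAMPLE
 class voice
  ! the priority bandwidth may be stated in kbps even though the other classes use percentages
  priority 256
 class gold
  bandwidth remaining percent 40
 class silver
  bandwidth remaining percent 30
 class class-default
  ! the percentages assigned to the classes must not add up to more than 100 percent
  bandwidth remaining percent 20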
You can change the default queue limit of 64 packets by using the queue-limit command and specifying the queue
limit in bytes, milliseconds, microseconds, or packets. It is recommended that you do not change the default value.
policy-map policy1
class class1
bandwidth 3000
queue-limit 10 packets
class class2
bandwidth 2000
fair-queue
class class-default
bandwidth 1000
queue-limit 20 packets
In the example, the configuration of policy1 reserves 3000 kbps of bandwidth for class1 and lowers the
default queue limit to 10 packets. Class class2 has a bandwidth reservation of 2000 kbps, uses the default queue
limit, and has fair queuing enabled. The class-default class has been assigned 1000 kbps of bandwidth, a queue
limit of 20 packets, and uses the default FIFO queuing.
LLQ is enabled in a traffic class using the priority command. When you specify the priority command for a class, it
takes a bandwidth argument that sets a maximum bandwidth, in kilobits per second, for the traffic class. The
bandwidth parameter both guarantees bandwidth to the priority class and restrains the flow of packets from the
priority class. When determining the bandwidth for the priority command, you should account for Layer 2
encapsulation overhead and any other bandwidth overhead.
If no bandwidth parameter is supplied, the LLQ is configured as a strict-priority queue with no implicit policer. If the
LLQ is configured in this fashion, all other user-defined classes can only be configured with the bandwidth
remaining percent command. It is recommended that you configure an explicit policer to put a maximum limit on
the amount of bandwidth that the LLQ can use.
For example, the following policy map enables conditional traffic policing. In this example, the priority queue is
conditionally policed to 400 kbps and there is a minimum bandwidth of 400 kbps that is guaranteed to class gold.
policy-map my_policy
class voice
priority 400
class gold
bandwidth 400
With conditional traffic policing on the queue, you run the risk of sudden degradation in priority service when an
interface becomes congested. You can go from an instance in which a priority class uses the entire link, to traffic
suddenly being policed to the configured value. You need to know the available bandwidth and use some form of
admission control to ensure that your offered loads do not exceed the available bandwidth.
With conditional policing, traffic policing does not engage unless the interface is congested.
The following policy map enables unconditional traffic policing. In this example, the priority command indicates
priority scheduling for the voice class, and voice traffic is policed to 400 kbps. Class gold gets 400 kbps of minimum
guaranteed bandwidth.
policy-map my_policy
class voice
priority
police 400000
class gold
bandwidth 400
The priority class is configured with an “always on” (unconditional) policer. In this case, the priority class is always
policed to the configured value regardless of whether the interface is congested. The advantage of an
unconditional policer is that you always know how much priority traffic will be offered to the downstream devices,
thus making your bandwidth planning much simpler. This approach is the recommended choice.
If traffic policing is not configured, the priority traffic may consume the entire interface bandwidth.
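For comparison, the VOICE priority class that you will configure in Task 2 uses conditional policing (priority 170).
If you instead wanted the recommended unconditional policer for that class, the configuration would look roughly
like the following sketch (not used in this lab):
policy-map LLQ_POLICY
 class VOICE
  ! strict priority with an explicit, always-on policer at 170 kbps (170000 bps)
  priority
  police 170000
Because the priority command carries no rate in this form, the remaining classes must use the bandwidth
remaining percent command, which the LLQ_POLICY classes in this lab already do.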
Task 2: Configuring and Testing the LLQ Policy
Activity
You will configure and apply the LLQ policy and test network performance using ping and Cisco IP SLA tools.
Step 1
Configure the VOICE and VIDEO bandwidth guarantees in the LLQ_POLICY policy map on ROUTER-1. The other
traffic classes in the table below have been preconfigured for you. For lab purposes, use conditional policing for
the VOICE class.
Traffic Class            Bandwidth Guarantee
VOICE                    170 kbps maximum priority bandwidth
VIDEO                    30% of remaining bandwidth minimum
NETWORK_CONTROL          5% of remaining bandwidth minimum
SIGNALING                5% of remaining bandwidth minimum
MISSION_CRITICAL         15% of remaining bandwidth minimum, fair-queue
LOW_LATENCY_DATA         10% of remaining bandwidth minimum, fair-queue
HIGH_THROUGHPUT_DATA     10% of remaining bandwidth minimum, fair-queue
class-default            25% of remaining bandwidth minimum, fair-queue
ROUTER-1(config) policy-map LLQ_POLICY
ROUTER-1(config-pmap) class VOICE
ROUTER-1(config-pmap-c) priority 170
ROUTER-1(config-pmap-c) exit
ROUTER-1(config-pmap) class VIDEO
ROUTER-1(config-pmap-c) bandwidth remaining percent 30
ROUTER-1(config-pmap-c) exit
ROUTER-1(config-pmap) exit
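For reference only, the preconfigured classes listed in the table correspond to commands along these lines; this is
a sketch inferred from the show policy-map output in Step 3, and you do not need to enter it in this lab:
policy-map LLQ_POLICY
 class NETWORK_CONTROL
  bandwidth remaining percent 5
 class SIGNALING
  bandwidth remaining percent 5
 class MISSION_CRITICAL
  bandwidth remaining percent 15
  ! fair queuing among the flows within this class
  fair-queue
 class LOW_LATENCY_DATA
  bandwidth remaining percent 10
  fair-queue
 class HIGH_THROUGHPUT_DATA
  bandwidth remaining percent 10
  fair-queue
 class class-default
  bandwidth remaining percent 25
  fair-queue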
Step 2
Apply the new LLQ_POLICY policy on the Serial 0/1/0:0 interface in the outbound direction.
ROUTER-1(config) interface serial 0/1/0:0
ROUTER-1(config-if) service-policy output LLQ_POLICY
ROUTER-1(config-if) end
Step 3
Verify the configuration of the LLQ_POLICY policy map on your router using the show policy-map command.
ROUTER-1 show policy-map
Policy Map LLQ_POLICY
Class VOICE
priority 170 (kbps)
Class VIDEO
bandwidth remaining 30 (%)
Class NETWORK_CONTROL
bandwidth remaining 5 (%)
Class SIGNALING
bandwidth remaining 5 (%)
Class MISSION_CRITICAL
bandwidth remaining 15 (%)
fair-queue
Class LOW_LATENCY_DATA
bandwidth remaining 10 (%)
fair-queue
Class HIGH_THROUGHPUT_DATA
bandwidth remaining 10 (%)
fair-queue
Class class-default
bandwidth remaining 25 (%)
fair-queue
The class maps and policy map have been preconfigured on ROUTER-2.
Step 4
Display and verify the operation of your attached service policy using the show policy-map interface command.
The traffic generator is still generating traffic, so you should see that many packets have been matched and
queued for each traffic class. For which traffic class or classes, if any, are there still drops?
ROUTER-1 show policy-map interface
Serial0/1/0:0
Service-policy output: LLQ_POLICY
queue stats for all priority classes:
Queueing
queue limit 512 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 6006/1224504
Class-map: VOICE (match-any)
6006 packets, 1224504 bytes
5 minute offered rate 31000 bps, drop rate 0000 bps
Match: dscp ef (46)
0 packets, 0 bytes
5 minute rate 0 bps
Match: protocol icmp
0 packets, 0 bytes
5 minute rate 0 bps
Match: protocol cisco-ip-sla
6006 packets, 1224504 bytes
5 minute rate 31000 bps
Priority: 170 kbps, burst bytes 4250, b/w exceed drops: 0
Class-map: VIDEO (match-all)
0 packets, 0 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: dscp af41 (34) af42 (36)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
bandwidth remaining 30%
Class-map: NETWORK_CONTROL (match-all)
27 packets, 2464 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: dscp cs6 (48)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 8/640
bandwidth remaining 5%
Class-map: SIGNALING (match-all)
384 packets, 147120 bytes
5 minute offered rate 6000 bps, drop rate 0000 bps
Match: dscp cs3 (24)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 384/147120
bandwidth remaining 5%
Class-map: MISSION_CRITICAL (match-all)
5632 packets, 934276 bytes
5 minute offered rate 24000 bps, drop rate 0000 bps
Match: dscp af31 (26)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops/flowdrops) 0/0/0/0
(pkts output/bytes output) 5632/934276
bandwidth remaining 15%
Fair-queue: per-flow queue limit 16 packets
Class-map: LOW_LATENCY_DATA (match-all)
3049 packets, 229075 bytes
5 minute offered rate 9000 bps, drop rate 0000 bps
Match: dscp af21 (18)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops/flowdrops) 0/8/0/8
(pkts output/bytes output) 3041/228308
bandwidth remaining 10%
Fair-queue: per-flow queue limit 16 packets
Class-map: HIGH_THROUGHPUT_DATA (match-all)
303 packets, 344997 bytes
5 minute offered rate 2000 bps, drop rate 0000 bps
Match: dscp af11 (10)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops/flowdrops) 0/103/0/103
(pkts output/bytes output) 194/191060
bandwidth remaining 10%
Fair-queue: per-flow queue limit 16 packets
Class-map: class-default (match-any)
136132 packets, 17729963 bytes
5 minute offered rate 363000 bps, drop rate 0000 bps
Match: any
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops/flowdrops) 0/136/0/136
(pkts output/bytes output) 135957/17719232
bandwidth remaining 25%
Fair-queue: per-flow queue limit 16 packets
The class-default class should account for all or most of the packet drops on the interfaces. Other traffic
classes may have some packet drops from momentary traffic bursts by the applications that match that
class during times of congestion.
Step 5
From ROUTER-1, perform an extended ping to the Serial 0/1/0:0 interface of ROUTER-2 once again. For the
extended ping, use a repeat count of 50 and a datagram size of 160.
Make a note of your min/avg/max ping response times.
ROUTER-1 ping
Protocol [ip]:<enter>
Target IP address: 172.16.1.2
Repeat count [5]: 50
Datagram size [100]: 160
Timeout in seconds [2]: <enter>
Extended commands [n]: <enter>
Sweep range of sizes [n]: <enter>
Type escape sequence to abort.
Sending 50, 160-byte ICMP Echos to 172.16.1.2, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (50/50), round-trip min/avg/max = 4/8/14 ms
Step 6
Compare the resulting statistics (ping drops and RTT) between FIFO and LLQ. Do you see any difference? Why?
You configured a strict-priority queue for ICMP traffic, and therefore your ping does not experience drops and has
a very low RTT.
Step 7
On ROUTER-1, review the network statistics that the IP SLA UDP jitter operation has gathered.
Make a note of the following values:
Min/Avg/Max RTT
Min/Avg/Max Source-to-Destination Latency
Min/Avg/Max Source-to-Destination Jitter
Source-to-Destination Packet Loss
MOS Score
ROUTER-1 show ip sla statistics 10
IPSLAs Latest Operation Statistics
IPSLA operation id: 10
Type of operation: udp-jitter
Latest RTT: 8 milliseconds
Latest operation start time: 14:53:59 EST Wed May 16 2018
Latest operation return code: OK
RTT Values:
Number Of RTT: 1000 RTT Min/Avg/Max: 3/8/19 milliseconds
Latency one-way time:
Number of Latency one-way Samples: 342
Source to Destination Latency one way Min/Avg/Max: 8/10/16 milliseconds
Destination to Source Latency one way Min/Avg/Max: 0/1/4 milliseconds
Jitter Time:
Number of SD Jitter Samples: 999
Number of DS Jitter Samples: 999
Source to Destination Jitter Min/Avg/Max: 0/2/8 milliseconds
Destination to Source Jitter Min/Avg/Max: 0/3/8 milliseconds
Over Threshold:
Number Of RTT Over Threshold: 0 (0%)
Packet Loss Values:
Loss Source to Destination: 0
Source to Destination Loss Periods Number: 0
Source to Destination Loss Period Length Min/Max: 0/0
Source to Destination Inter Loss Period Length Min/Max: 0/0
Loss Destination to Source: 0
Destination to Source Loss Periods Number: 0
Destination to Source Loss Period Length Min/Max: 0/0
Destination to Source Inter Loss Period Length Min/Max: 0/0
Out Of Sequence: 0 Tail Drop: 0
Packet Late Arrival: 0 Packet Skipped: 0
Voice Score Values:
Calculated Planning Impairment Factor (ICPIF): 1
MOS score: 4.34
Number of successes: 5
Number of failures: 0
Operation time to live: Forever
Step 8
Compare the resulting IP SLA statistics (RTT, latency, jitter, packet loss values, and MOS) between FIFO and LLQ.
Do you see any difference? Why? (Click the Show Me button to view the answer.)
You should see that the RTT, latency, jitter, and packet loss values significantly decreased after you enabled LLQ
because IP SLA traffic gets strict-priority treatment. The IP SLA operation takes delay, jitter, and packet loss into
consideration when calculating the MOS score. For this reason, you see a MOS score of 4.34, which is the highest
possible value for this operation.
Step 9
Select the Enter key to end this lab.