
Quality of Service for Converged Data and Voice over IP Networks

Casimer DeCusatis
Distinguished Engineer, Network Hardware Development IBM Corporation

Lawrence Jacobowitz
Development Manager IBM Corporation
Introduction

The datacom and telecom industries are going through a stage of unprecedented change, as new communication ecosystems evolve from isolated, application-specific networks. The industry is moving beyond telecommunications as a utility service, toward a converged voice and data network based on the Ethernet protocol and the Internet protocol (IP). Quality of service (QoS) has been a feature of voice communication networks almost since their inception. The extension of traditional voice QoS methods to data communication networks and the Internet has been a longstanding research topic, although for many years it was not considered a critical issue because of the inherent differences between data and voice traffic and the relatively low cost of over-provisioning bandwidth.

In particular, over-provisioning of network bandwidth has been common practice since the earliest fiber-optic local-area networks (LANs) were deployed, when bandwidth on an optical network was found to be very economical. This practice has extended into wide-area networks (WANs) and metropolitan-area networks (MANs), despite the initially higher capital equipment cost. Over-provisioned networks are considered simpler to manage (since there is no QoS mechanism to be configured), easier to troubleshoot, and resistant to certain types of denial-of-service (DoS) attacks from the network edge. Typically, such networks will not exhibit problems unless there is an unexpected, sustained burst of traffic (exceeding perhaps 60 percent of the provisioned network capacity) or a network failure that significantly increases the load on a particular network segment beyond its nominal design parameters. However, the cost of over-provisioning has increased with the migration to higher data rates (10–40 Gbps in many networks), which offsets any savings in operational or management costs. Since over-provisioning does not scale well to larger, higher-data-rate networks, alternative approaches to providing QoS have received increased attention.

Despite its popularity, the Internet has not lent itself to any single, widely deployed QoS mechanism. Because of inter-domain QoS requirements and the associated operational and security issues, QoS for the Internet is a much more complex problem than for telecom networks. Since many Internet applications are based on loss-tolerant protocols such as transmission control protocol (TCP)/IP and are designed to adapt to changing network traffic conditions by dropping packets and recovering through retransmission, there has been little need for a global QoS mechanism of the kind found in the voice network. There has been some notable progress in this direction, however, beginning in the mid-1990s, when the Internet Engineering Task Force (IETF) tried to standardize the resource reservation protocol (RSVP) as a QoS mechanism.

This effort did not lead to widespread implementation or deployment of that technology. Early commercial implementations of RSVP experienced serious scaling problems, and the operational networking communities such as the North American Network Operators Group (NANOG) and the European Operators Forum (EOF) concluded that RSVP lacked sufficient scalability to be practical for QoS. RSVP remains in use for a very different purpose: as a signaling protocol option for multiprotocol label switching (MPLS) deployments. For WANs, MPLS remains an important routing protocol and can play an important role in IP network deployments. However, RSVP for per-flow resource reservation is neither widely available nor a good design option for IP network deployments today.

In this paper, we provide a tutorial discussion of the two principal open standards that are widely deployed in commercial voice over IP (VoIP) networks, namely Ethernet precedence and IP type of service (ToS). We then compare several deployment models using these standards, including simple and fine-grained QoS, which apply to both IP version 4 (IPv4) and the more recently deployed IPv6. Both methods adopt the general approach of tagging data frames at the edge of a packet network and using this information to enforce the desired QoS policy. Although based on industry standards, specific implementations of QoS can vary widely in quality and performance. For this reason, it is strongly recommended that networking test beds be used to validate a particular implementation in the laboratory before deployment in a production-level network. The availability of network simulation tools to model proposed design or topology changes is also important. We will highlight implementation details that can impact the quality of a VoIP deployment and should be considered in any practical implementation. It is important to keep in mind that these are best practices and typical deployment examples, not hard and fast design rules. Each organization needs to consider what kind of QoS policy is appropriate for it and then deploy a configuration consistent with that locally designed policy. Each organization has different needs and different network designs, and therefore will probably have a different QoS policy.

Ethernet Precedence

Originally specified in Institute of Electrical and Electronics Engineers (IEEE) 802.1P during the late 1990s, Ethernet precedence added a QoS mechanism to one of the most widely used networking technologies deployed today [1–3]. This standard also specified the use of virtual LANs (VLANs), which are created at an edge device such as an Ethernet switch/router through additional packet header information. The VLAN is a logical construct that allows efficient routing of traffic above the physical layer of the network; traffic may be re-routed if a logical node fails or if a redundant VLAN has been implemented (care must be taken that redundant VLANs do not share a common potential failure point at the underlying physical layer). Ethernet precedence specifies a three-bit field in the VLAN tag header that is used to carry data frame precedence information. There are eight precedence values, numbered 0 through 7, with precedence 7 being the highest priority. This scheme maps directly to the IP precedence bits. Not all Ethernet equipment supports eight queues per port; some offer only four queues per port or fewer.
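As an illustration of the tag layout described above, the following Python sketch packs and unpacks the 16-bit tag control information field of an 802.1Q VLAN tag, in which the three-bit priority code point carries the Ethernet precedence value. The helper names are ours and the example values are arbitrary.

```python
import struct

def pack_tci(pcp: int, dei: int, vid: int) -> bytes:
    """Pack an 802.1Q tag control information (TCI) field: a 3-bit
    priority code point (PCP), a 1-bit drop-eligible indicator (DEI,
    formerly CFI), and a 12-bit VLAN identifier (VID)."""
    if not (0 <= pcp <= 7 and 0 <= dei <= 1 and 0 <= vid <= 0x0FFF):
        raise ValueError("field out of range")
    return struct.pack("!H", (pcp << 13) | (dei << 12) | vid)

def unpack_tci(tci: bytes) -> tuple[int, int, int]:
    """Recover (pcp, dei, vid) from a two-byte TCI field."""
    value, = struct.unpack("!H", tci)
    return value >> 13, (value >> 12) & 0x1, value & 0x0FFF

# Example: a voice frame tagged with precedence 5 on VLAN 100.
tci = pack_tci(pcp=5, dei=0, vid=100)
print(unpack_tci(tci))  # (5, 0, 100)
```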
In practice, supporting traffic in a realistic commercial network will require at least three queues per port, while some critical applications (including government and military networks) will require the full eight queues per port. Ethernet switches that implement this feature can optionally guarantee a minimum bandwidth for each precedence value. Such implementations typically reassign any unused guaranteed capacity to traffic with other precedence values. Some implementations will evaluate the precedence tag for inbound data and use it to control access to the switch backplane, which helps prevent lower-priority traffic from producing bottlenecks and delaying backplane access for higher-precedence data frames. The form of packet queuing implemented in these switches varies widely, although common implementations include strict priority queuing and weighted round-robin algorithms.

IP ToS

As an alternative to Ethernet precedence, the IP precedence scheme uses a long-standing message-handling scheme first employed by the U.S. Department of Defense. This has largely replaced the original ToS field used in some early IP routers (in which each packet was tagged with an eight-bit field, of which three bits supported eight precedence levels). In the more modern IP ToS implementations, there are six precedence levels for user traffic, ranging from ROUTINE, used for most traffic, to FLASH OVERRIDE, used only in an emergency. In addition, the IP precedence model has two precedence values higher than those used for any user traffic. The highest precedence value is called INTERNET CONTROL and is normally used for control traffic that can affect network availability and stability across multiple administrative domains (e.g., border gateway protocol [BGP], which carries inter-domain routing information). The second-highest precedence value is called NETWORK CONTROL and is normally used for control traffic that can affect network availability and stability in a single administrative domain (e.g., open shortest path first [OSPF], which carries intra-domain routing information) [2]. The IP precedence model gives higher precedence to network control traffic than to user traffic, based on the assumption that control traffic is critical to the network's capability to detect and route around traffic-impacting failures (e.g., power failures, fiber cuts) [4].

In many implementations, convergence of the higher-level protocols should not be the only mechanism used for fault recovery, because convergence of the network at Layer 3 or higher can take on the order of seconds or minutes for a very large VoIP network (thousands of users or more). It is recommended that some form of very fast physical-layer fault detection and recovery be implemented in addition to any Layer 3 (L3) reconvergence provided by either of the precedence models discussed so far. This may include forms of traditional 1+1 synchronous digital hierarchy (SDH) protection switching, as specified by the synchronous optical network (SONET) or SDH standards, which can restore the physical layer in 50 ms or less [1].

Service Granularity

Recently, the IETF has proposed the differentiated services (DiffServ) specification, which provides an alternate set of interpretations for the eight-bit ToS field and defines packet-handling specifications for use with DiffServ. There are two IETF specifications for DiffServ packet processing: assured forwarding (AF) and expedited forwarding (EF). In particular, EF was originally designed for carrying voice traffic in the U.S. Department of Energy's research network, ESnet. The EF specification includes recommendations for calculating delay and jitter budgets, which are not explicitly included in the AF specification. Although both EF and AF can be used effectively by VoIP networks, they support only a few service levels. Both EF and AF offer two options for traffic queuing, although higher-quality DiffServ implementations that provide much finer granularity and more service levels are available. For example, better implementations offer the network administrator a choice of queuing algorithms, commonly including priority queuing (PQ), weighted fair queuing (WFQ), and weighted random early drop (WRED).
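Of the queuing disciplines named above, weighted round-robin is the simplest to illustrate. The following Python sketch shows one scheduling round, not a production scheduler; the precedence values and weights are assumed for the example.

```python
from collections import deque

# Illustrative per-queue weights: higher-precedence queues release
# more packets per scheduling round (the values are arbitrary).
queues = {7: deque(), 5: deque(), 0: deque()}
weights = {7: 4, 5: 2, 0: 1}

def wrr_dequeue():
    """One weighted round-robin round: visit each queue in descending
    precedence and release up to `weight` packets from it."""
    released = []
    for prec in sorted(queues, reverse=True):
        for _ in range(weights[prec]):
            if queues[prec]:
                released.append(queues[prec].popleft())
    return released

# Example: enqueue a few tagged packets and run one round.
queues[7].extend(["ospf-hello"])
queues[5].extend(["rtp-1", "rtp-2", "rtp-3"])
queues[0].extend(["http-1", "http-2"])
print(wrr_dequeue())  # ['ospf-hello', 'rtp-1', 'rtp-2', 'http-1']
```

Strict priority queuing, by contrast, always drains the higher-precedence queues first, which is one reason the bandwidth guarantees discussed next are important.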
It is also desirable to permit each queue an allocation of minimum and maximum bandwidth, so that some capacity is always available for traffic in that queue and no single queue can monopolize the link.
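A maximum-bandwidth allocation of this kind is commonly enforced with a token bucket. The following Python sketch shows the general idea under assumed rates; it is an illustration, not a description of any particular vendor's implementation.

```python
import time

class TokenBucket:
    """Caps a queue's throughput at `rate_bps` (bits per second),
    while allowing short bursts of up to `burst_bits`."""
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate, self.capacity = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, time.monotonic()

    def allow(self, frame_bits: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed interval, up to the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits
            return True   # transmit now
        return False      # hold (or drop) until tokens refill

# Example: cap the "other" queue at 10 Mbps with a 100 kbit burst allowance.
other_queue_cap = TokenBucket(rate_bps=10e6, burst_bits=100e3)
print(other_queue_cap.allow(1500 * 8))  # True for the first full-size frame
```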

Ethernet and IP QoS Deployment Considerations

Since Ethernet precedence and IP precedence both specify eight precedence values or QoS queues, it is straightforward to use these mechanisms in tandem to provide end-to-end QoS in the packet network (provided that the deployed network equipment supports both mechanisms). The Ethernet precedence and IP precedence values map directly to each other; in other words, priority 7 traffic in an Ethernet precedence implementation should be handed off as priority 7 traffic in an IP precedence implementation, and so forth.

While the standards define how a QoS tag is represented in an IP packet header or Ethernet frame header, they do not define how to ensure that a given packet or frame carries the correct QoS marking. Preferred implementations of these standards support filtering incoming traffic using access control lists and then re-marking the incoming packet or frame with the correct QoS marking. This marking is then used in the switch or router to apply the appropriate packet or frame processing and implement the desired QoS. It is important that networking equipment provide flexible access control list properties so that only authorized traffic is able to obtain higher preferred service quality. It is preferred that access control lists be implemented in dedicated hardware, capable of operating at wire speed regardless of the packet load or number of access lists, rather than in the switch central processor. Processor-based lists do not scale well with increasing numbers of lists or higher data traffic and can cause congestion or decreased switch performance. Equipment that does not support the full eight queues per port will generally need to be configured so that traffic with the correct set of QoS markings is sent to the correct queue. Equipment having eight queues per port will be more successful in applying the desired QoS handling to packets or frames passing through it.

As noted earlier, bandwidth over-provisioning remains the simplest approach to avoiding network congestion, and it should be used in conjunction with the previously discussed QoS mechanisms to the extent that this option is available and economically practical. Recognizing that this will not always be the case and that congestion may occur in an under-provisioned network, a combination of QoS methods can be deployed to reduce the risk of traffic-affecting network disruptions.

One example of a simple QoS deployment might break down network traffic into four coarse-grained categories: Internet control, network control, voice, and other. This breakdown requires four queues per port in the network equipment. The control categories have the highest precedence, and other has the lowest precedence in this scheme. A minimum bandwidth is provisioned for these QoS queues to prevent high-precedence traffic from totally starving lower-precedence traffic of bandwidth. Inter-domain control traffic (e.g., BGP) belongs in Internet control. Intra-domain control traffic (e.g., simple network-management protocol [SNMP], OSPF, or routing information protocol [RIP]) belongs in network control. Voice traffic includes not only actual voice packets sent using the real-time transport protocol (RTP), but also any telephony signaling protocols that are deployed. The last category contains all other traffic, probably consisting mainly of hypertext transfer protocol (HTTP) for Web access and simple mail transfer protocol (SMTP), post office protocol (POP), or Internet message access protocol (IMAP) for e-mail access.
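To make the coarse-grained model concrete, the Python sketch below classifies traffic into the four categories and returns the precedence value a frame would be re-marked with at the network edge. The protocol-to-category table and the precedence numbers are illustrative assumptions; a production device would match on addresses, ports, and protocol numbers in hardware access control lists.

```python
# Illustrative mapping of the four coarse-grained categories to
# precedence values (assumed numbers, chosen only for this sketch).
CATEGORY_PRECEDENCE = {
    "internet-control": 7,   # e.g., BGP
    "network-control": 6,    # e.g., OSPF, SNMP, RIP
    "voice": 5,              # RTP media plus telephony signaling
    "other": 0,              # HTTP, SMTP, POP, IMAP, everything else
}

# Hypothetical protocol-to-category table standing in for an ACL.
PROTOCOL_CATEGORY = {
    "bgp": "internet-control",
    "ospf": "network-control", "snmp": "network-control", "rip": "network-control",
    "rtp": "voice", "sip": "voice",
}

def classify(protocol: str) -> int:
    """Return the precedence value to re-mark a frame with at the edge."""
    category = PROTOCOL_CATEGORY.get(protocol.lower(), "other")
    return CATEGORY_PRECEDENCE[category]

print(classify("rtp"), classify("http"))  # 5 0
```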
In this model, no class of traffic should be starved of bandwidth by other classes of traffic during normal operation; therefore, a minimum reserved bandwidth must be configured for each class. For example, 30 percent of bandwidth may be reserved for voice, 5 percent for other, and 15 percent for each type of control traffic, with the remaining bandwidth dynamically allocated among the four QoS values based on the relative precedence of the traffic being received (a sketch of such an allocation appears after the fine-grained example below). A shortcoming of this model is that it lumps all non-voice user traffic into a single QoS class. In most enterprises, not all data traffic is equally important. For example, file server access is typically very important, with database access only slightly less so. Further, interactive traffic (e.g., instant messaging) normally should get higher priority than background traffic (e.g., file transfer).

To address these shortcomings, we might break the network traffic into more fine-grained categories by adding four QoS levels with precedence between other and voice. Furthermore, the voice category can be broken down into two categories: voice control (which contains only voice control or telephony signaling protocols, such as the session initiation protocol [SIP]) and voice traffic, which contains only the actual voice content. The voice traffic subcategory has a lower priority than the voice control category. This leaves three additional categories for higher-precedence, non-voice user traffic. For example, we might use one of these categories for file-server and remote-procedure-call traffic and the other two for business-critical applications (e.g., remote database access). Web content, e-mail, instant messaging, and similar applications would typically be split into the interactive and other categories. As before, some categories (control and voice) would probably be given guaranteed capacity, but the percentage that is guaranteed would probably decrease for most categories. At the other extreme, some highly critical applications, or those responding to an emergency situation, may employ a different QoS model in which the highest-priority traffic is delivered at all costs, even if less important traffic cannot be sent at all. Such a policy deliberately accepts bandwidth starvation of lower-priority protocols rather than designing around it. This example illustrates the range of QoS options available, depending on end-user requirements.
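The guaranteed-minimum-plus-shared-remainder behavior described for these models can be summarized in a short calculation. The Python sketch below uses the illustrative percentages from the simple four-category example (30 percent for voice, 15 percent for each control class, 5 percent for other) and divides the leftover capacity among the classes in proportion to assumed precedence weights; all figures are examples rather than recommendations.

```python
# Illustrative guaranteed minimum share of link capacity per class,
# taken from the simple four-category example in the text.
GUARANTEED = {"internet-control": 0.15, "network-control": 0.15,
              "voice": 0.30, "other": 0.05}

# Assumed relative precedence weights for sharing the leftover capacity.
WEIGHTS = {"internet-control": 4, "network-control": 3, "voice": 2, "other": 1}

def allocate(link_mbps: float, active: list[str]) -> dict[str, float]:
    """Give each active class its guaranteed minimum, then split the
    remaining capacity in proportion to its precedence weight."""
    alloc = {c: GUARANTEED[c] * link_mbps for c in active}
    leftover = link_mbps - sum(alloc.values())
    total_weight = sum(WEIGHTS[c] for c in active)
    for c in active:
        alloc[c] += leftover * WEIGHTS[c] / total_weight
    return alloc

# Example: a fully loaded 1000 Mbps link with all four classes active.
print(allocate(1000, list(GUARANTEED)))
# {'internet-control': 290.0, 'network-control': 255.0, 'voice': 370.0, 'other': 85.0}
```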
Telephone handsets designed for use with VoIP should support setting the IP precedence and Ethernet precedence bits appropriately, based on the precedence associated with each phone call placed using that handset. The policing and authorization mechanisms used with traditional telephone systems can also be used with VoIP telephone systems, and the IETF continues to enhance telephony signaling protocol standards such as SIP. It is preferred practice to segregate VoIP from other data traffic, although this may not always be practical. For example, shared WAN links may be difficult to partition in this manner, but Ethernet networks can more easily use different VLANs for this purpose. It is important that VLANs do not accidentally overlap and place traffic on the incorrect VLAN; this risk can be reduced by using network equipment that implements VLAN capability in dedicated application-specific integrated circuit (ASIC) hardware. It is also preferred that each Ethernet port used for VoIP be locked down to the specific media access control (MAC) address of the device that is supposed to be connected to that port. This will help reduce the risk of common misconfiguration errors, such as swapping the VoIP and Ethernet data ports.

Traditional network engineering concerns must be given additional importance when voice or other real-time services are deployed. Conventional telecom network designs are based on SONET/SDH rings with either unidirectional or bidirectional path/line-switched protection; these network architectures have served the telecom industry well for many decades and should not be completely abandoned in a VoIP or converged network environment. Network core switches and routers should have high-availability capabilities, such as redundant power, redundant switch fabrics, redundant management modules, and fast physical-layer failover. Edge switches should have at least redundant power options wired to separate power sources. When using Ethernet, ring-oriented topologies (e.g., Ethernet automatic protection switching [EAPS]) offer higher resiliency to common failures such as fiber cuts than strict branching-tree or cascaded point-to-point topologies. Deploying networking equipment that has lower jitter and lower latency in the switching/routing fabric will also help provide higher-quality VoIP services.

Security Considerations

When a network offers differing service quality to different packets, this creates an incentive for users to improperly alter their traffic to get the best service quality. This stands in contrast to a best-effort-only network, in which all packets are treated equally and there is no incentive to improperly mark traffic.

Thus, security becomes a significant consideration in the deployment of VoIP and converged networking [5]. In most converged networks, it is preferable to enable data encryption and authentication functions at the host server, rather than at the network edge or router. However, due to the complexity associated with scanning every frame on a high-data-rate link (10 Gbps or above), cryptographic methods are not considered practical for validating either the Ethernet precedence or IP ToS bits. Instead, the preferred security approach is to deploy access control lists at the edges of the network. This causes all packets to become properly tagged when passing through the network edge, whether they were unmarked or improperly marked to begin with. This combination of edge-based access control and host-based data encryption provides an effective layered defense against QoS spoofing. The primary drawback to this approach is that the local QoS policy must be implemented consistently at each edge of the network. This can represent a significant operational cost for the network administrator, not only for initial configuration but also for configuration maintenance over time.

Note that VoIP configurations may offer lower performance than conventional voice systems during initial deployment, as both users and administrators become trained in the proper use of QoS mechanisms. It is important to set expectations properly during the transition from a conventional voice network to a VoIP network; in the long term, VoIP networks can deliver equivalent or better QoS than conventional networks if properly managed. Automated configuration management tools are becoming available to help reduce management costs and should be considered for any moderate or large network. These automation tools are also useful for ensuring that the network is actually providing the intended QoS levels to various types of traffic, particularly under heavy load or failure conditions.

Conclusions

The deployment of VoIP and converged networks can deliver significant capital and operational cost savings in a scalable network, but care must be taken to properly deploy QoS to deliver sustained performance. A combination of bandwidth over-provisioning with QoS mechanisms such as Ethernet precedence or IP ToS should be considered, using networking equipment with advanced features such as rate-adaptive voice-encoding algorithms, access control lists enabled in dedicated hardware, and VLANs implemented in dedicated ASIC hardware. Either coarse- or fine-grained QoS queues may be employed depending on the application, although a full eight queues per port is the most flexible. Assigning minimum and maximum bandwidths for each queue and providing access control lists at the network edge are useful practices to guarantee proper implementation of the QoS policy.

References

1) C. DeCusatis, Ed., Handbook of Fiber Optic Data Communication, 2nd Edition, Academic Press, New York, 900 pp. (2002).
2) "Quality of Service in Voice over IP Networks," white paper, Extreme Networks (www.extremenetworks.com).
3) Ethernet VLAN standard, IEEE 802.1P (1994).
4) L. Jacobowitz and C. DeCusatis, "On-Demand Network Architectures for Triple-Play Convergence," IEC Comprehensive Report Series, Achieving the Triple Play: Technologies and Business Models for Success, pp. 245–252 (2006).

5) C. DeCusatis, "Developing a Threat Model for Enterprise Storage-Area Networks," Proc. 7th Annual IEEE Workshop on Information Assurance, U.S. Military Academy, West Point, N.Y. (June 21–23, 2006).
