
EE7234: ADVANCED DATA COMMUNICATION

TUTORIAL NO: 01 - HIGH SPEED PACKET SWITCHING

NAME : SMLA SENANAYAKE
REG No : RU/E/2008/165
DATE : 02/10/2012

Q1. I. The Internet was designed for best-effort, non-guaranteed delivery of packets, and this behavior is still predominant on the Internet today. If QoS policies are not implemented, traffic is forwarded using the best-effort model and all network packets are treated exactly the same: an emergency voice message is treated exactly like a digital photograph attached to an e-mail. Without the implementation of QoS, the network cannot tell the difference between packets and, as a result, cannot treat packets preferentially. Standard postal mail uses the same best-effort model: a letter dropped in the post is treated exactly the same as every other letter, and it gets there when it gets there. The letter may actually never arrive and, unless a separate notification arrangement exists with the recipient, no one knows that it did not arrive. The best-effort model also has these drawbacks: nothing is guaranteed; packets arrive whenever they can, in any order, if they arrive at all; packets are not given preferential treatment; and critical data is treated the same as casual e-mail.

II. a. Throughput - Throughput is the amount of work a computer can complete in a given period of time. In data transmission, it is the amount of data moved successfully from one device to another in a given period of time. The average rate of successful data transmission over a communication channel, for example an Ethernet link, is also referred to as network throughput. The data may be transmitted over a physical or a logical link, and throughput is usually measured in bits per second (bit/s or bps). It is synonymous with digital bandwidth consumption.

b. Delay

Delay is caused when packets of data (voice) take more time than expected to reach their destination, which causes some disruption in the voice quality. However, if it is dealt with properly, its effects can be minimized. When packets are sent over a network towards a destination machine or phone, some of them might be delayed. Reliability features in the voice-quality mechanism ensure that a conversation is not deadlocked waiting for a packet that has gone astray somewhere along the way. In fact, many factors affect the journey of packets from source to destination, one of them being the underlying network. A delayed packet may arrive late or may not arrive at all if it is lost. QoS (Quality of Service) considerations for voice are relatively tolerant of packet loss compared with text. When a packet is delayed, the voice is heard later than it should be; if the delay is small and constant, the conversation can still be acceptable.

c. Delay-jitter - Jitter is a variation in packet transit delay caused by queuing, contention and serialization effects on the path through the network. In general, higher levels of jitter are more likely to occur on either slow or heavily congested links. The increasing use of QoS control mechanisms such as class-based queuing and bandwidth reservation, and of higher-speed links such as 100 Mbit/s Ethernet, E3/T3 and SDH, is expected to reduce the incidence of jitter-related problems at some stage in the future; however, jitter will remain a problem for some time to come. Jitter has identifiable root causes and statistical characteristics, it can be measured and modeled, and its effect at the receiver is absorbed by a jitter buffer, so the interaction between jitter and jitter buffers needs to be understood. The context here is Voice over IP, but the same applies equally to packet video and other forms of real-time, jitter-sensitive traffic.

d. Packet loss

Packet loss occurs in every kind of network, and all network protocols are designed to cope with the loss of packets in one way or another. The TCP protocol, for example, guarantees packet delivery by requesting retransmission of lost packets. RTP, as used by VoIP, does not provide a delivery guarantee, so VoIP must implement its own handling of lost packets. While a data-transfer protocol can simply request re-delivery of a lost packet, VoIP has no time to wait for the packet to arrive; in order to maintain call quality, lost packets are substituted with interpolated data. Nevertheless, voice is quite predictable, and if the packet loss is isolated the speech can still be reproduced reasonably well. The problem is greater when packet loss occurs in bursts.
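As a rough illustration of how these four parameters can be quantified, the following Python sketch computes throughput, mean one-way delay, jitter (taken here simply as the mean absolute delay variation) and packet-loss ratio from a hypothetical packet trace; the trace values and field layout are made up for the example.

```python
# Illustrative sketch only: computing throughput, delay, jitter and loss
# from a hypothetical packet trace. The numbers below are invented.

packets = [
    # (sequence number, send time [s], receive time [s] or None if lost, size [bytes])
    (1, 0.000, 0.020, 200),
    (2, 0.020, 0.041, 200),
    (3, 0.040, None,  200),   # lost packet
    (4, 0.060, 0.085, 200),
]

received = [p for p in packets if p[2] is not None]
duration = max(p[2] for p in received) - min(p[1] for p in packets)

throughput_bps = sum(p[3] for p in received) * 8 / duration          # bits per second
delays = [rx - tx for _, tx, rx, _ in received]                      # one-way delays
mean_delay = sum(delays) / len(delays)
jitter = sum(abs(d2 - d1) for d1, d2 in zip(delays, delays[1:])) / (len(delays) - 1)
loss_ratio = 1 - len(received) / len(packets)

print(f"throughput = {throughput_bps:.0f} bit/s, mean delay = {mean_delay*1000:.1f} ms, "
      f"jitter = {jitter*1000:.1f} ms, loss = {loss_ratio:.0%}")
```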

Q2. I.

GFC - 4 bits of generic flow control that are used to provide local functions, such as identifying multiple stations that share a single ATM interface. The GFC field is typically not used and is set to a default value.

VPI - 8 bits of virtual path identifier that are used, in conjunction with the VCI, to identify the next destination of a cell as it passes through a series of switch routers on its way to its destination.
VCI - 16 bits of virtual channel identifier that are used, in conjunction with the VPI, to identify the next destination of a cell as it passes through a series of switch routers on its way to its destination.
PT - 3 bits of payload type. The first bit indicates whether the cell contains user data or control data; if the cell contains user data, the second bit indicates congestion, and the third bit indicates whether the cell is the last in a series of cells that represent a single AAL5 frame.
CLP - 1 bit of cell loss priority that indicates whether the cell should be discarded if it encounters extreme congestion as it moves through the network.
HEC - 8 bits of header error control, a checksum calculated only on the header itself.
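As an illustration of this layout, the sketch below packs the five UNI header fields into the 5-byte cell header using the field widths listed above; the HEC byte is filled in separately (see the next part), and the function name and values are invented for the example.

```python
def pack_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int, hec: int = 0) -> bytes:
    """Pack an ATM UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8) = 5 bytes."""
    first_word = (gfc & 0xF) << 28 | (vpi & 0xFF) << 20 | (vci & 0xFFFF) << 4 \
                 | (pt & 0x7) << 1 | (clp & 0x1)
    return first_word.to_bytes(4, "big") + bytes([hec & 0xFF])

header = pack_uni_header(gfc=0, vpi=1, vci=42, pt=0, clp=0)
print(header.hex())   # 001002a000
```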

II. Header Error Control (HEC) provides the capability to correct all single-bit errors in the cell header, as well as to detect the majority of multiple-bit errors. The use of this field is left to the interpretation of the equipment designers: if most errors are likely to be single-bit errors, it can be used for error correction. Using the field for error correction does, however, carry some risk of introducing unwanted errant traffic onto the network should a mistake be made in the correction process.
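The HEC itself is a CRC-8 computed over the first four header bytes with the generator polynomial x^8 + x^2 + x + 1, and per ITU-T I.432 the remainder is XORed with 0x55 before transmission. A minimal sketch of the computation (the error detection/correction state machine is omitted):

```python
def atm_hec(header4: bytes) -> int:
    """CRC-8 over the 4 header bytes (polynomial x^8 + x^2 + x + 1), then XOR with 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

header = bytes.fromhex("001002a0")      # first 4 bytes from the previous sketch
print(hex(atm_hec(header)))             # value to place in the 5th header byte
```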

III. ATM provides transport for higher-layer packets that are variable in length and longer than the 48-byte payload section of the ATM cell. For ATM to work in the communications layers, it must segment the larger packets into multiple ATM cells and later reassemble the multiple cells back into the original packet. This process is called Segmentation and Reassembly, or SAR for short.
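A minimal sketch of the segmentation half of this process is shown below; reassembly is the reverse. The real AALs described next add their own headers, trailers and length fields rather than plain zero padding.

```python
def segment(packet: bytes, cell_payload: int = 48):
    """Split a higher-layer packet into fixed-size ATM cell payloads (last one zero-padded)."""
    return [packet[i:i + cell_payload].ljust(cell_payload, b"\x00")
            for i in range(0, len(packet), cell_payload)]

print(len(segment(bytes(130))))   # a 130-byte packet needs 3 cells
```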

The ATM layer responsible for the SAR process is the ATM Adaptation Layer, or AAL. The three AALs discussed here are AAL-1, AAL-3/4 and AAL-5.

AAL-1 - AAL-1, a connection-oriented service, is suitable for handling circuit-emulation applications, such as voice and video conferencing. Circuit-emulation service also accommodates the attachment of equipment currently using leased lines to an ATM backbone network. AAL-1 requires timing synchronization between the source and destination; for this reason, AAL-1 depends on a medium, such as SONET, that supports clocking. The AAL-1 process prepares a cell for transmission in three steps. First, synchronous samples (for example, 1 byte of data every 125 microseconds) are inserted into the Payload field. Second, Sequence Number (SN) and Sequence Number Protection (SNP) fields are added to provide information that the receiving AAL-1 uses to verify that it has received cells in the correct order. Third, the remainder of the Payload field is filled with enough single bytes to equal 48 bytes. To ensure the cells are reassembled properly, a sequence number is used. The AAL-1 SAR PDU contains the following fields.

SN - Sequence number field, consisting of the CSI bit and a 3-bit sequence count; numbers the stream of SAR PDUs (modulo 8).
CSI - Convergence sublayer indicator. Used to carry the residual time stamp for clocking.
SC - Sequence count.
SNP - Sequence number protection, consisting of the CRC and parity bits.
CRC - Cyclic redundancy check calculated over the SAR header.
Parity - Parity bit calculated over the CRC.
SAR PDU payload - 47-byte user information field.

AAL-3/4 - AAL-3/4 supports both connection-oriented and connectionless data. It was designed for network service providers and is closely aligned with Switched Multimegabit Data Service (SMDS); AAL-3/4 is used to transmit SMDS packets over an ATM network. AAL-3/4 prepares a cell for transmission in four steps:
1. The convergence sublayer (CS) creates a protocol data unit (PDU) by prepending a beginning/end tag header to the frame and appending a length field as a trailer.
2. The segmentation and reassembly (SAR) sublayer fragments the PDU and prepends a header to it.
3. The SAR sublayer appends a CRC-10 trailer to each PDU fragment for error control.
4. The completed SAR PDU becomes the Payload field of an ATM cell, to which the ATM layer prepends the standard ATM header.
An AAL-3/4 SAR PDU header consists of type, sequence number and multiplexing identifier fields. The type field identifies whether a cell is the beginning, continuation or end of a message. The sequence number field identifies the order in which cells should be reassembled. The multiplexing identifier determines which cells from different traffic sources are interleaved on the same VCC, so that the correct cells are reassembled at the destination.

AAL-3/4 CS PDU - The basis of AAL-3/4 is the CPCS PDU, whose fields are as follows:
CPI - Common part indicator. Indicates the units of the BAsize and Length fields; the only defined value, zero, indicates bytes.
Btag - Beginning tag. This field must have the same value as the Etag of the same CPCS PDU and a different value from the Btag/Etag of the preceding and following CPCS PDUs.
BAsize - Buffer allocation size. In message mode this is equal to the Length field; in streaming mode it is equal to or greater than the Length field.
PAD - Up to 3 bytes of padding to achieve 32-bit alignment of the information field.
AL - Alignment. A filler byte coded as zero.
Etag - End tag. Refer to Btag.
Length - Length of the information field. This value is used to detect loss or gain of information.
Functions of the AAL-3/4 SAR include identification of SAR SDUs; error indication and handling; SAR SDU sequence continuity; and multiplexing and demultiplexing.

AAL-3/4 SAR PDU

The AAL-3/4 SAR PDU has the following fields:
ST - Segment type. Values: 10 = beginning of message; 00 = continuation of message; 01 = end of message; 11 = single-segment message.
SN - Sequence number. Numbers the stream of SAR PDUs of a CPCS PDU (modulo 16).
MID - Multiplexing identification. Used for multiplexing several AAL-3/4 connections over one ATM link.
LI - Length indication. Contains the length of the SAR SDU in bytes.
CRC - Cyclic redundancy check (CRC-10) calculated over the SAR PDU.
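As a rough illustration of how these fields fit into the 48-byte cell payload, the sketch below packs a 2-byte header (ST, SN, MID), a 44-byte payload and a 2-byte trailer (LI, CRC-10). The generator polynomial and the bit coverage of the CRC are as commonly cited for AAL-3/4 but are not verified against ITU-T I.363 here, so treat the CRC details as illustrative.

```python
CRC10_POLY_LOW = 0x233   # x^10 + x^9 + x^5 + x^4 + x + 1, leading term implicit (assumed)

def crc10(bits):
    """Bit-serial CRC-10, MSB first, zero initial value."""
    reg = 0
    for b in bits:
        feedback = b ^ (reg >> 9)
        reg = (reg << 1) & 0x3FF
        if feedback:
            reg ^= CRC10_POLY_LOW
    return reg

def to_bits(data: bytes):
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def aal34_sar_pdu(st: int, sn: int, mid: int, sdu: bytes) -> bytes:
    """48-byte AAL-3/4 SAR PDU: ST(2) SN(4) MID(10) | 44-byte payload | LI(6) CRC-10(10)."""
    assert len(sdu) <= 44
    header = (st & 0x3) << 14 | (sn & 0xF) << 10 | (mid & 0x3FF)
    payload = sdu.ljust(44, b"\x00")
    li = len(sdu) & 0x3F
    covered = to_bits(header.to_bytes(2, "big") + payload) + [(li >> i) & 1 for i in range(5, -1, -1)]
    trailer = (li << 10) | crc10(covered)
    return header.to_bytes(2, "big") + payload + trailer.to_bytes(2, "big")

cell_payload = aal34_sar_pdu(st=0b10, sn=0, mid=5, sdu=b"hello")  # beginning-of-message segment
print(len(cell_payload))   # 48, i.e. exactly one ATM cell payload
```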

AAL-5 - AAL-5 is the primary AAL for data and supports both connection-oriented and connectionless data. It is used to transfer most non-SMDS data, such as classical IP over ATM. AAL-5 is also known as the simple and efficient adaptation layer (SEAL) because the SAR sublayer simply accepts the CS PDU and segments it into 48-octet SAR PDUs without adding any additional fields. AAL-5 prepares a cell for transmission in three steps:
1. The CS sublayer appends a variable-length pad and an 8-byte trailer to a frame. The pad ensures that the resulting PDU falls on the 48-byte boundary of an ATM cell. The trailer includes the length of the frame and a 32-bit cyclic redundancy check (CRC) computed across the entire PDU, which allows the receiving AAL-5 process to detect bit errors, lost cells, or cells that are out of sequence.
2. The SAR sublayer segments the CS PDU into 48-byte blocks. A header and trailer are not added (as is done in AAL-3/4), so messages cannot be interleaved.
3. The ATM layer places each block into the Payload field of an ATM cell. For all cells except the last, a bit in the Payload Type (PT) field is set to zero to indicate that the cell is not the last cell in a series that represents a single frame; for the last cell, the bit in the PT field is set to one.

AAL-5 is used to carry computer data such as TCP/IP. It is the most popular AAL and, as noted above, is sometimes referred to as SEAL (simple and efficient adaptation layer). The basis of AAL-5 is the CS PDU, which is composed of the following fields:

AAL-5 CS PDU - The structure of the AAL-5 CS PDU is as follows:
PAD - Padding of 0 to 47 bytes, used to align the CS PDU (including the trailer) to a multiple of 48 bytes.
UU - CPCS user-to-user indication, used to transfer one byte of user information.
CPI - Common part indicator, a filler byte (of value 0). This field may be used in the future for layer-management message indication.
Length - Length of the information field.
CRC-32 - Cyclic redundancy check computed over the information field, PAD, UU, CPI and Length fields, using a 32-bit generator polynomial.
AAL-5 SAR PDU - The AAL-5 SAR PDU is simply a 48-byte payload, i.e. the payload of a standard ATM cell; no per-cell fields are added.
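To make the trailer and padding arithmetic concrete, here is a small sketch that builds an AAL-5 CS PDU and then segments it into 48-byte blocks. The CRC shown uses Python's zlib.crc32, which shares the AAL-5 generator polynomial but whose bit-ordering conventions may not match the standard exactly, so it is illustrative only.

```python
import zlib

def aal5_cs_pdu(frame: bytes, uu: int = 0, cpi: int = 0) -> bytes:
    """Frame + PAD + 8-byte trailer (UU, CPI, Length, CRC-32), padded to a multiple of 48 bytes."""
    pad_len = (-(len(frame) + 8)) % 48
    body = frame + bytes(pad_len) + bytes([uu, cpi]) + len(frame).to_bytes(2, "big")
    crc = zlib.crc32(body)               # illustrative stand-in for the AAL-5 CRC-32
    return body + crc.to_bytes(4, "big")

pdu = aal5_cs_pdu(b"an IP packet, for example")
cells = [pdu[i:i + 48] for i in range(0, len(pdu), 48)]   # SAR: one 48-byte block per cell
print(len(pdu), len(cells))              # 48 1
```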

IV.

AAL Type 1 supports constant bit rate (CBR), synchronous, connection-oriented traffic. Examples include T1 (DS1), E1 and n x 64 kbit/s circuit emulation.
AAL Type 2 supports time-dependent, variable bit rate (VBR-RT), connection-oriented, synchronous traffic. An example is voice over ATM. AAL2 is also widely used in wireless applications because of its ability to multiplex voice packets from different users onto a single ATM connection.
AAL Type 3/4 supports VBR, connection-oriented, asynchronous data traffic (e.g. X.25 data) or connectionless packet data (e.g. SMDS traffic), with an additional 4-byte header in the information payload of the cell. Examples include Frame Relay and X.25.
AAL Type 5 is similar to AAL 3/4 but with a simplified information header scheme. This AAL assumes that the data is sequential from the end user and uses the Payload Type Indicator (PTI) bit to indicate the last cell of a transmission. Examples of services that use AAL 5 are classical IP over ATM, Ethernet over ATM, SMDS and LAN Emulation (LANE).

Q3. I.

The LFIB is used by the core MPLS routers (those that are not ingress or egress MPLS routers). They compare the label in the incoming packet with the labels they have in their LFIB. If a match is found, the router forwards the packet based on that match; if not, the packet is dropped. The LFIB is built from the LIB and FIB tables.
1. After OSPF convergence, all routers have information about the network, and this information is placed in the RIB.
2. On R1, the label distribution protocol (LDP) assigns the label 11 to the network and advertises the label to its neighbors.
3. Other routers running OSPF also have information about this network, so they use their own LDP to assign a label to it and advertise it to their neighbors using LDP. Labels are stored in the LIB.
4. Each router uses the information about the network, the local label and the outgoing label to build the LFIB.

Penultimate Hop Popping (PHP) is useful in cases where the egress router has a large number of packets leaving MPLS tunnels and thus spends an inordinate amount of CPU time popping labels. With PHP, transit routers connected directly to this egress router effectively offload it by popping the last label themselves.
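A toy sketch of an LFIB lookup on a core LSR is given below; the labels, operations and next hops are hypothetical and only illustrate the match/forward/drop behavior described above.

```python
# Hypothetical LFIB: incoming label -> (operation, outgoing label, next hop).
LFIB = {
    11: ("swap", 24, "R3"),
    17: ("pop", None, "R4"),    # penultimate-hop popping towards the egress router
}

def forward(in_label: int, payload: bytes):
    entry = LFIB.get(in_label)
    if entry is None:
        return None                          # no match: the packet is dropped
    operation, out_label, next_hop = entry
    if operation == "swap":
        return (out_label, payload, next_hop)
    return (None, payload, next_hop)         # pop: forward the payload without a label

print(forward(11, b"ip-packet"))   # (24, b'ip-packet', 'R3')
print(forward(99, b"ip-packet"))   # None -> dropped
```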
II.

III.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+  Label
|                Label                  | Exp |S|      TTL      |  Stack
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+  Entry

Label - Label Value, 20 bits
Exp - Experimental Use, 3 bits
S - Bottom of Stack, 1 bit
TTL - Time to Live, 8 bits

1. Bottom of Stack (S): This bit is set to one for the last entry in the label stack (i.e., for the bottom of the stack), and to zero for all other label stack entries.

2. Time to Live (TTL): This eight-bit field is used to encode a time-to-live value. The processing of this field is described in Section 2.4 of RFC 3032.
3. Experimental Use: This three-bit field is reserved for experimental use.
4. Label Value: This 20-bit field carries the actual value of the label. When a labeled packet is received, the label value at the top of the stack is looked up. As a result of a successful lookup one learns:
a. the next hop to which the packet is to be forwarded;
b. the operation to be performed on the label stack before forwarding; this operation may be to replace the top label stack entry with another, to pop an entry off the label stack, or to replace the top label stack entry and then push one or more additional entries onto the label stack.
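The 32-bit layout above can be illustrated with a short encode/decode sketch (the label, EXP, S and TTL values below are arbitrary examples):

```python
def pack_label_entry(label: int, exp: int, s: int, ttl: int) -> bytes:
    """Encode one 32-bit MPLS label stack entry: Label(20) Exp(3) S(1) TTL(8)."""
    word = (label & 0xFFFFF) << 12 | (exp & 0x7) << 9 | (s & 0x1) << 8 | (ttl & 0xFF)
    return word.to_bytes(4, "big")

def parse_label_entry(entry: bytes):
    """Decode the entry back into (label, exp, s, ttl)."""
    word = int.from_bytes(entry, "big")
    return word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

entry = pack_label_entry(label=24, exp=0, s=1, ttl=64)
print(entry.hex(), parse_label_entry(entry))   # bottom-of-stack entry with TTL 64
```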

IV. MPLS supports a range of access technologies, including T1/E1, ATM, Frame Relay and DSL.


Unicast IP routing
Multicast IP routing
VPN
Traffic engineering
MPLS QoS

V.

Recently, as the Internet and its services have grown rapidly, a new switching mechanism, Multi-Protocol Label Switching (MPLS), has been introduced by the IETF [2]. MPLS is a high-speed technology that overlays IP and simplifies the backbone of wide-area IP networks [3]. It substitutes conventional packet forwarding within a network, or a part of a network, with a faster operation of label look-up and switching [4]. The ATM cell-switching mechanism and label switching in MPLS networks are very similar to each other. In order to send packets rapidly, MPLS decreases complexity by integrating Layer-2 switching and Layer-3 routing into a complete integrated solution [5, 6]. Integration of IP routers and ATM switching mechanisms provides IP scalability over ATM networks, where packet forwarding and path control are provided by routers [5, 6]. MPLS uses the control-driven model to initiate the assignment and distribution of label bindings for the establishment of Label Switched Paths (LSPs). An LSP is created by concatenating one or more label-switched hops, allowing a packet to be forwarded from one label-switching router (LSR) to another LSR across the MPLS domain. The MPLS network architecture consists of label-switching routers (LSRs) in the core of the network and label edge routers (LERs) at the edge. The label edge routers have the task of analyzing the IP header of each arriving packet in order to find the corresponding forwarding equivalence class (FEC) and label-switched path, which facilitates the label-swapping function in the LSR nodes. Inside an MPLS domain, packet forwarding, classification and QoS are determined by the labels and the class of service (CoS) fields, which keeps the core LSRs simple. Each MPLS packet has a header that contains a 20-bit label, a 3-bit experimental field, a 1-bit label stack indicator and an 8-bit TTL field in a non-ATM environment, and holds only a label encoded in the VPI/VCI field in an ATM environment.

Q4. I. Quality of Service (QoS) is a set of technologies for managing network traffic in a cost effective manner to enhance user experiences for home and enterprise environments. Achieving the required Quality of Service (QoS) by managing the delay, delay variation (jitter), bandwidth, and packet loss parameters on a network becomes the secret to a successful end-to-end business solution. Thus, QoS is the set of techniques to manage network resources.
II. 1. Classification and marking - Packet classification features allow traffic to be partitioned into multiple priority levels or CoSs.
2. Congestion management - Congestion-management features control congestion after it occurs.
3. Congestion avoidance - Congestion-avoidance techniques monitor network traffic loads in an effort to anticipate and avoid congestion at common network and internetwork bottlenecks before it becomes a problem.
4. Traffic conditioning - Traffic entering a network can be conditioned (operated on for QoS purposes) by using a policer or a shaper.

III. Classification - Packets can be classified based on the incoming interface, source or destination addresses, IP protocol type and port, application type (network-based application recognition [NBAR]), IPP or DSCP value, 802.1p priority, MPLS EXP field, and other criteria.
Marking - Marking is the QoS feature component that colors a packet (frame) so that it can be identified and distinguished from other packets (frames) in QoS treatment. Policies can then be associated with these classes to perform traffic shaping, rate limiting/policing, priority transmission, and other operations to achieve the desired end-to-end QoS for the particular application or class. The marking can be carried in the Layer 2 CoS bits or in the Layer 3 ToS/DSCP field.
Congestion management - Queuing algorithms are used to sort the traffic and then determine some method of prioritizing it onto an output link. Congestion-management techniques include weighted fair queuing (WFQ), CBWFQ and low-latency queuing (LLQ). WFQ is a flow-based queuing algorithm that does two things simultaneously: it schedules interactive traffic to the front of the queue to reduce response time, and it fairly shares the remaining bandwidth between high-bandwidth flows.

CBWFQ guarantees bandwidth to data applications.


LLQ is used for the highest-priority traffic, which is especially suited for voice over IP (VoIP).

Congestion avoidance - The WRED algorithm avoids congestion and controls latency at a coarse level by establishing control over buffer depths on both low- and high-speed data links. WRED is primarily designed to work with TCP applications: when WRED is used and the TCP source detects the dropped packet, the source slows its transmission. WRED can selectively discard lower-priority traffic when the interface begins to get congested; a simplified sketch of the drop decision is given below.
Policing and shaping - Traffic shaping involves smoothing traffic to a specified rate through the use of buffers. A policer, on the other hand, does not smooth or buffer traffic; it simply re-marks (IPP/DSCP), transmits, or drops packets, depending on the configured policy. Legacy tools such as committed access rate (CAR) let network operators define bandwidth limits and specify actions to perform when traffic conforms to, exceeds, or completely violates the rate limits. Generic traffic shaping (GTS) provides a mechanism to control traffic by buffering it and transmitting it at a specified rate. Frame Relay traffic shaping (FRTS) provides mechanisms for shaping traffic based on Frame Relay service parameters such as the committed information rate (CIR) and the backward explicit congestion notification (BECN) provided by the Frame Relay switch.
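The WRED drop decision mentioned above can be sketched roughly as follows; the thresholds and maximum drop probability are made-up example values, and real WRED additionally smooths the queue depth with an exponentially weighted moving average and keeps a separate profile per IP precedence or DSCP class.

```python
def wred_drop_probability(avg_queue: float, min_th: float, max_th: float, max_p: float) -> float:
    """0 below min_th, a linear ramp up to max_p at max_th, then tail drop (1.0) above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Lower-priority traffic is typically given a lower minimum threshold so it is discarded first.
print(wred_drop_probability(avg_queue=35, min_th=20, max_th=40, max_p=0.1))   # 0.075
```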

Policers and shapers are the oldest forms of QoS mechanisms. These tools have the same objective: to identify and respond to traffic violations. Policers and shapers usually identify traffic violations in an identical manner; their main difference is the manner in which they respond to violations. Shaping - A shaper typically delays excess traffic, using a buffer to hold packets and shape the flow when the source's data rate is higher than expected.

The principal drawback of strict traffic policing is that TCP retransmits dropped packets and throttles flows up and down until all the data is sent (or the connection times out). Such TCP ramping behavior results in inefficient use of bandwidth, both over-utilizing and under-utilizing the WAN links. Since shaping (usually) delays packets rather than dropping them, it smooths flows and allows for more efficient use of expensive WAN bandwidth. Therefore, shaping is more suitable in the WAN than policing.
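Both policing and shaping are usually described in terms of a token bucket. The sketch below is a minimal single-rate token-bucket policer; the rate and burst values are arbitrary, and a shaper would differ only in queuing non-conforming packets until tokens accumulate instead of dropping or re-marking them.

```python
import time

class TokenBucket:
    """Minimal single-rate token bucket: tokens accrue at `rate_bps` up to `burst_bytes`."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0            # token refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, packet_len: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len         # conforming: transmit unchanged
            return True
        return False                          # exceeding: a policer drops or re-marks it

policer = TokenBucket(rate_bps=128_000, burst_bytes=8_000)
print([policer.conforms(1500) for _ in range(8)])   # roughly five packets fit the burst
```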

IV. a. IntServ is another model for providing QoS in networks. IntServ is based on building a virtual circuit in the Internet using the bandwidth-reservation technique. Requests for reserving bandwidth come from the applications that require some level of service. Under this model, each router in the network has to implement IntServ, and each application that requires a service guarantee has to make a reservation. When bandwidth is reserved for a certain application, it cannot be reassigned to another application. Routers between the sender and the receiver determine whether they can support the reservation made by the application; if they cannot support it, they notify the receiver, otherwise they route the traffic to the receiver. Therefore, in this method, routers remember the properties of the traffic flow and also supervise it. The task of reserving paths would be very tedious in a busy network such as the Internet.

b. DiffServ is a model for providing QoS in the Internet by differentiating the traffic. The best-effort method used in the Internet tries to provide the best possible service under the varying traffic load, rather than trying to differentiate the flows and provide a higher level of service to some of the traffic. DiffServ tries to provide an improved level of service in the existing best-effort environment by differentiating the traffic flows. For example, DiffServ will reduce the latency of traffic containing voice or streaming video, while providing best-effort service to traffic containing file transfers. Packets are marked by the DiffServ devices at the borders of the network with information about the level of service required by them; other nodes in the network read this information and respond accordingly to provide the requested level of service.
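As a small, hypothetical illustration of this border marking, the sketch below sets the 6-bit DSCP in the IP ToS/Traffic Class byte, marking voice with the Expedited Forwarding code point (46) and leaving other traffic as best effort; the classification rule and the packet representation are invented for the example.

```python
DSCP_EF = 46        # Expedited Forwarding, typically used for voice
DSCP_DEFAULT = 0    # best effort

def tos_byte(dscp: int, ecn: int = 0) -> int:
    """The DSCP occupies the upper 6 bits of the ToS/Traffic Class byte."""
    return (dscp & 0x3F) << 2 | (ecn & 0x3)

def classify_and_mark(packet: dict) -> dict:
    """Illustrative border rule: mark voice EF, everything else best effort."""
    packet["tos"] = tos_byte(DSCP_EF if packet.get("app") == "voice" else DSCP_DEFAULT)
    return packet

print(classify_and_mark({"app": "voice"}))          # {'app': 'voice', 'tos': 184}
print(classify_and_mark({"app": "file-transfer"}))  # {'app': 'file-transfer', 'tos': 0}
```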
