
UNIT IV (TRANSPORT LAYER)


Transport layer: The transport layer is responsible for process-to-process delivery. Although there are several ways to achieve process-to-process communication, the most common one is the client/server paradigm. A process on the local host, called a client, needs services from a process, usually on a remote host, called a server. Both processes (client and server) have the same name. For example, to get the day and time from a remote machine, we need a Daytime client process running on the local host and a Daytime server process running on the remote machine.
Transport layer duties
 Packetizing
 Sender side: breaks application messages into segments and passes them to the network layer
 Receiver side: the transport layer at the receiving host delivers the data to the receiving process
 Connection control
 Connection-oriented
 Connectionless
 Addressing
 Port numbers identify which network application should receive the data
 Reliability
 Flow control
 Error control
Addressing
At the data link layer, we need a MAC address to choose one node among several nodes if the connection is
not point-to-point. A frame in the data link layer needs a destination MAC address for delivery and a source
address for the next node's reply. At the network layer, we need an IP address to choose one host among
millions. A datagram in the network layer needs a destination IP address for delivery and a source IP address
for the destination's reply.
At the transport layer, we need a transport layer address, called a port number, to choose among multiple
processes running on the destination host. The destination port number is needed for delivery; the source
port number is needed for the reply.


In the Internet model, the port numbers are 16-bit integers between 0 and 65,535. The client program defines
itself with a port number, chosen randomly by the transport layer software running on the client host. This is
the ephemeral port number. The server process must also define itself with a port number. This port number,
however, cannot be chosen randomly. The Internet has decided to use universal port numbers for servers;
these are called well-known port numbers.

FIG: Types of delivery

IANA Ranges
The IANA (Internet Assigned Numbers Authority) has divided the port numbers into three ranges:
Well-known ports: The ports ranging from 0 to 1023 are assigned and controlled by IANA.
Registered ports: The ports ranging from 1024 to 49,151 are not assigned or controlled by IANA; they can only be registered with IANA to prevent duplication.
Dynamic ports: The ports ranging from 49,152 to 65,535 are neither controlled nor registered. They can be used by any process. These are the ephemeral ports.


SOCKET ADDRESS:
Process-to-process delivery needs two identifiers, IP address and the port number, at each end to make a
connection. The combination of an IP address and a port number is called a socket address. The client socket
address defines the client process uniquely just as the server socket address defines the server process
uniquely.
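A socket address is simply the pairing of an IP address with a port number. A minimal Python sketch (the addresses and ports are invented for illustration):

```python
# A socket address is the pair (IP address, port number).
server_socket_address = ("192.168.1.10", 80)     # server at well-known port 80 (HTTP)
client_socket_address = ("192.168.1.25", 52341)  # client at an ephemeral port

# The pair of socket addresses identifies one connection uniquely.
connection = (client_socket_address, server_socket_address)
print(connection)
```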

Multiplexing and Demultiplexing


The addressing mechanism allows multiplexing and demultiplexing by the transport layer, as shown in
Figure 23.6.

Multiplexing
At the sender site, there may be several processes that need to send packets. However, there is only one
transport layer protocol at any time. This is a many-to-one relationship and requires multiplexing. The
protocol accepts messages from different processes, differentiated by their assigned port numbers. After
adding the header, the transport layer passes the packet to the network layer.
Demultiplexing
At the receiver site, the relationship is one-to-many and requires demultiplexing. The transport layer
receives datagrams from the network layer. After error checking and dropping of the header, the transport
layer delivers each message to the appropriate process based on the port number.
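As a rough illustration of demultiplexing, the transport layer can be pictured as a table mapping each destination port to the message queue of the process bound to it. This Python sketch is hypothetical (the port table and function names are invented for illustration):

```python
from queue import SimpleQueue

# One message queue per bound port; here, ports for DNS (53) and HTTP (80).
port_table = {53: SimpleQueue(), 80: SimpleQueue()}

def demultiplex(dst_port, message):
    queue = port_table.get(dst_port)
    if queue is None:
        return  # no process bound to this port; a real stack would signal an error
    queue.put(message)  # deliver to the process that owns this port

demultiplex(53, b"dns query")
print(port_table[53].get())  # b'dns query'
```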
Common properties that a transport protocol can be expected to provide:

 Guarantees message delivery
 Delivers messages in the same order they were sent
 Delivers at most one copy of each message
 Supports arbitrarily large messages
 Supports synchronization between the sender and the receiver
 Allows the receiver to apply flow control to the sender
 Supports multiple application processes on each host
Typical limitations of the network on which a transport protocol will operate:
 Drop messages
 Reorder messages
 Deliver duplicate copies of a given message
 Limit messages to some finite size
 Deliver messages after an arbitrarily long delay
Challenge for Transport Protocols
Develop algorithms that turn the less-than-desirable properties of the underlying network into the high level of service required by application programs.
Simple Demultiplexer (UDP – User Datagram Protocol)
 UDP is a connectionless, unreliable transport protocol that extends IP's host-to-host delivery service into a process-to-process communication service.
 UDP provides best-effort service; that is, it does not guarantee reliable delivery.
 UDP does not guarantee reliability or ordering in the way that TCP does.
 Datagrams may arrive out of order, appear duplicated, or go missing without notice.
 Avoiding the overhead of checking whether every packet actually arrived makes UDP faster and more efficient for applications that do not need guaranteed delivery.
 Time-sensitive applications often use UDP because dropped packets are preferable to delayed packets. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. Unlike TCP, UDP is compatible with packet broadcast (sending to all hosts on the local network) and multicasting (sending to all subscribers).
 Common network applications that use UDP include the Domain Name System (DNS), streaming media applications such as IPTV, Voice over IP (VoIP), the Trivial File Transfer Protocol (TFTP), and online games.
 UDP adds a level of demultiplexing, which allows multiple application processes on each host to share the network.


Format for UDP header


 The source port, much like the source port in TCP, identifies the process on the originating system.
 The destination port identifies the receiving process on the receiving machine.
 Generally, a server accepts messages at a well-known port; that is, each server receives its messages at some fixed port that is widely published. For example, the Domain Name System (DNS) server receives messages at well-known port 53 on each host, the mail service listens for messages at port 25, the Unix talk program accepts messages at well-known port 517, HTTP at port 80, and so on. Client processes can use ephemeral port numbers.
 The length field contains the length of the UDP datagram.
 The checksum field is used by UDP to verify the correctness of the UDP header and data (the contents of the message body and something called the pseudoheader).
 The pseudoheader consists of three fields from the IP header (protocol number, source IP address, and destination IP address) plus the UDP length field. The header layout is sketched below.
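The UDP header is just four 16-bit fields: source port, destination port, length, and checksum. A small Python sketch (assumption: raw header bytes in network byte order; the names are illustrative):

```python
import struct

# Unpack the four 16-bit fields of the 8-byte UDP header (network byte order).
def parse_udp_header(segment: bytes):
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

# Example: a query from ephemeral port 50000 to DNS port 53, with 20 data bytes.
header = struct.pack("!HHHH", 50000, 53, 8 + 20, 0)  # length covers header + data
print(parse_udp_header(header))
```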
UDP Message Queue
When a message arrives, the protocol (e.g., UDP) appends the message to the end of the queue. Should the
queue be full, the message is discarded. There is no flow-control mechanism that tells the sender to slow
down. When an application process wants to receive a message, one is removed from the front of the
queue. If the queue is empty, the process blocks until a message becomes available.


FIG: UDP message queue

Well-Known Ports for UDP


Use of UDP
The following lists some uses of the UDP protocol:
 UDP is suitable for a process that requires simple request-response communication with little concern for flow and error control. It is not usually used for a process such as FTP that needs to send bulk data.


 UDP is suitable for a process with internal flow and error control mechanisms. For example, the
Trivial File Transfer Protocol (TFTP) process includes flow and error control. It can easily use UDP.
 UDP is a suitable transport protocol for multicasting. Multicasting capability is embedded in the
UDP software but not in the TCP software.
 UDP is used for management processes such as SNMP.
 UDP is used for some route-updating protocols such as the Routing Information Protocol (RIP). A minimal request-response exchange is sketched below.
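A minimal UDP request-response sketch in Python (the loopback address and port 9999 are arbitrary choices for illustration):

```python
import socket
import threading

# Server side: bind to a fixed, "well-known" port and answer one request.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

def serve_one_request():
    data, client_addr = server.recvfrom(1024)   # blocks until a datagram arrives
    server.sendto(b"reply: " + data, client_addr)

t = threading.Thread(target=serve_one_request)
t.start()

# Client side: the OS picks an ephemeral source port automatically.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 9999))
reply, _ = client.recvfrom(1024)
print(reply)  # b'reply: hello'

t.join()
client.close()
server.close()
```

Note there is no connection setup: each sendto is a self-contained datagram, which is why the exchange is cheap but unreliable.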

Reliable Byte Stream (TCP)


In contrast to UDP, the Transmission Control Protocol (TCP) offers the following services:
 Reliable
 Connection oriented; it creates a virtual connection between two TCPs to send data.
 Byte-stream service
 TCP guarantees the reliable, in-order delivery of a stream of bytes. It is a full-duplex protocol,
meaning that each TCP connection supports a pair of byte streams, one flowing in each direction
 Flow control involves preventing senders from overrunning the capacity of the receivers
 TCP supports a demultiplexing mechanism that allows multiple application programs on any
given host to simultaneously carry on a conversation with their peers.
 TCP also implements a highly tuned congestion-control mechanism. The idea of this mechanism
is to throttle how fast TCP sends data, not for the sake of keeping the sender from overrunning
the receiver, but to keep the sender from overloading the network.
Difference between flow control and congestion control:
Flow control involves preventing senders from overrunning the capacity of receivers. Congestion control
involves preventing too much data from being injected into the network, thereby causing switches or links to
become overloaded. Thus, flow control is an end-to-end issue, while congestion control is concerned with
how hosts and networks interact.
End-to-end Issues
At the heart of TCP is the sliding window algorithm. As TCP runs over the Internet rather than a point-to-point link, the following issues need to be addressed by the sliding window algorithm:
 TCP supports logical connections between processes that are running on two different
computers in the Internet
 TCP connections are likely to have widely different round-trip times (RTTs)
 Packets may get reordered in the Internet


 TCP needs a mechanism by which each side of a connection learns what resources the other side is able to apply to the connection
 TCP needs a mechanism by which the sending side learns the capacity of the network
TCP Segment
 TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP connection
and the receiver reads bytes out of the TCP connection.
 TCP on the source host buffers enough bytes from the sending process to fill a reasonably sized
packet and then sends this packet to its peer on the destination host.
 TCP on the destination host then empties the contents of the packet into a receive buffer, and the
receiving process reads from this buffer at its leisure.
 The packets exchanged between TCP peers are called segments.

How TCP manages a byte stream


TCP Header

 The SrcPort and DstPort fields identify the source and destination ports, respectively.
 The Acknowledgment, SequenceNum, and AdvertisedWindow fields are all involved in TCP’s
sliding window algorithm.

R.M.K Engineering College CS6551- Computer Networks / Unit IV


9

 The SequenceNum field contains the sequence number for the first byte of data carried in that
segment.
 The Acknowledgment and AdvertisedWindow fields carry information about the flow of data
going in the other direction.
 The 6-bit Flags field is used to relay control information between TCP peers.
 The possible flags include SYN, FIN, RESET, PUSH, URG, and ACK.
 The SYN and FIN flags are used when establishing and terminating a TCP connection,
respectively.
 The ACK flag is set any time the Acknowledgment field is valid, implying that the receiver
should pay attention to it.
 The URG flag signifies that this segment contains urgent data. When this flag is set, the UrgPtr
field indicates where the nonurgent data contained in this segment begins.
 The urgent data is contained at the front of the segment body, up to and including a value of
UrgPtr bytes into the segment.
 The PUSH flag signifies that the sender invoked the push operation, which indicates to the
receiving side of TCP that it should notify the receiving process of this fact.
 The RESET flag signifies that the receiver has become confused (for example, it received a segment it did not expect to receive) and so wants to abort the connection.
 The Checksum field is used in exactly the same way as for UDP: it is computed over the TCP header, the TCP data, and the pseudoheader, which is made up of the source address, destination address, and length fields from the IP header. The header layout is sketched below.
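To make the header layout concrete, here is a Python sketch that unpacks the fixed 20-byte TCP header and decodes the six flag bits (the values are illustrative; this is not a full TCP implementation):

```python
import struct

# Bit positions of the six TCP flags within the flags byte.
FLAGS = {0x01: "FIN", 0x02: "SYN", 0x04: "RST",
         0x08: "PSH", 0x10: "ACK", 0x20: "URG"}

def parse_tcp_header(segment: bytes):
    (src, dst, seq, ack, offset_reserved, flags,
     window, checksum, urg_ptr) = struct.unpack("!HHIIBBHHH", segment[:20])
    set_flags = [name for bit, name in FLAGS.items() if flags & bit]
    return {"SrcPort": src, "DstPort": dst, "SequenceNum": seq,
            "Acknowledgment": ack, "Flags": set_flags,
            "AdvertisedWindow": window, "UrgPtr": urg_ptr}

# Example: a SYN segment (flags byte = 0x02, header length = 5 words).
syn = struct.pack("!HHIIBBHHH", 50000, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))
```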

TCP Connection Management


In TCP, connection-oriented transmission requires three phases:
1. Connection establishment
2. Data transfer
3. Connection termination
Connection Establishment (three-way handshake)


• The three steps in this phase are as follows.
1. The client sends the first segment, a SYN segment, in which only the SYN flag is set. This segment is for synchronization of sequence numbers. The SYN segment carries no real data, but we can think of it as containing one imaginary byte: it cannot carry data, yet it consumes one sequence number. When the data transfer starts, the sequence number is incremented by 1.
2. The server sends the second segment, a SYN+ACK segment, with two flag bits set: SYN and ACK. This segment has a dual purpose. It is a SYN segment for communication in the other direction, and it serves as the acknowledgment for the client's SYN segment. A SYN+ACK segment cannot carry data, but it does consume one sequence number.
3. The client sends the third segment. This is just an ACK segment. It acknowledges the receipt of the second segment with the ACK flag and acknowledgment number field. The sequence number in this segment is the same as the one in the SYN segment; an ACK segment, if carrying no data, consumes no sequence number.
The sequence-number bookkeeping is sketched below.
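The sequence/acknowledgment arithmetic can be illustrated in a few lines of Python (the initial sequence numbers 8000 and 15000 are arbitrary examples):

```python
# Three-way handshake bookkeeping; each SYN consumes one sequence number.
client_isn, server_isn = 8000, 15000

syn     = {"flags": {"SYN"},        "seq": client_isn}
syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn, "ack": client_isn + 1}
ack     = {"flags": {"ACK"},        "seq": client_isn + 1, "ack": server_isn + 1}

# The pure ACK carries no data and consumes no sequence number.
for segment in (syn, syn_ack, ack):
    print(segment)
```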

DATA TRANSFER PHASE


• In this example, after connection is established (not shown in the figure), the client sends 2000 bytes
of data in two segments. The server then sends 2000 bytes in one segment. The client sends one more
segment. The first three segments carry both data and acknowledgment, but the last segment carries
only an acknowledgment because there are no more data to be sent.
• The data segments sent by the client have the PSH (push) flag set so that the server TCP knows to
deliver data to the server process as soon as they are received. The segment from the server, on the
other hand, does not set the push flag.
• Pushing Data: The sending TCP uses a buffer to store the stream of data coming from the
sending application program. The sending TCP can select the segment size. The receiving TCP also
buffers the data when they arrive and delivers them to the application program when the application
program is ready or when it is convenient for the receiving TCP. This type of flexibility increases the
efficiency of TCP.
• On occasion the application program has no need for this flexibility. For example, consider an application program that communicates interactively with another application program on the other end. The application program on one site wants to send a keystroke to the application at the other site and receive an immediate response. Delayed transmission and delayed delivery of data may not be acceptable to the application program. TCP can handle such a situation.
• Urgent Data: TCP is a stream-oriented protocol. This means that the data are presented from the
application program to TCP as a stream of bytes. Each byte of data has a position in the stream.

However, on occasion an application program needs to send urgent bytes. This means that the
sending application program wants a piece of data to be read out of order by the receiving
application program.
As an example, suppose that the sending application program is sending data to be processed by the receiving application program. When the result of processing comes back, the sending application program finds that everything is wrong. It wants to abort the process, but it has already sent a huge amount of data. If it issues an abort command (Ctrl+C), these two characters will be stored at the end of the receiving TCP buffer and delivered to the receiving application program only after all the data have been processed. The solution is to send a segment with the URG bit set.
CONNECTION TERMINATION

• Either of the two parties involved in exchanging data (client or server) can close the connection, although it is usually initiated by the client. Most implementations today allow two options for connection termination: three-way handshaking and four-way handshaking with a half-close option.
• Three-Way Handshaking: Three-way handshaking for connection termination is shown in Figure
4.19.
1. In a normal situation, the client TCP, after receiving a close command from the client process, sends the first segment, a FIN segment in which the FIN flag is set. A FIN segment can include the last chunk of data sent by the client, or it can be just a control segment. If it does not carry data, it consumes only one sequence number.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends the second segment, a FIN+ACK segment, to confirm the receipt of the FIN segment from the client and at the same time to announce the closing of the connection in the other direction. This segment can also contain the last chunk of data from the server. If it does not carry data, it consumes only one sequence number.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN segment from
the TCP server. This segment contains the acknowledgment number, which is 1 plus the sequence number
received in the FIN segment from the server. This segment cannot carry data and consumes no sequence
numbers.
FLOW CONTROL:
• TCP uses a sliding window to handle flow control. The sliding window protocol used by TCP is
something between the Go-Back-N and Selective Repeat sliding window.
• The sliding window protocol in TCP looks like the Go-Back-N protocol because it does not use
NAKs; it looks like Selective Repeat because the receiver holds the out-of-order segments until
the missing ones arrive.
• There are two big differences between this sliding window and the one used at the data link
layer. First, the sliding window of TCP is byte-oriented; in the data link layer it is frame-
oriented. Second, the TCP's sliding window is of variable size; in the data link layer it is of
fixed size.
Flow control defines the amount of data a source can send before receiving an acknowledgement from the receiver.
 The flow control protocol must make sure that the receiver does not get overwhelmed with data (the sender cannot simply send all of its data without worrying about acknowledgements)
 TCP uses a sliding window protocol to accomplish flow control
 For each TCP connection (always duplex), the sending and receiving TCP peer use this window to
control the flow.
 TCP’s variant of the sliding window algorithm serves several purposes:
(1) it guarantees the reliable delivery of data,
(2) it ensures that data is delivered in order, and
(3) it enforces flow control between the sender and the receiver.

R.M.K Engineering College CS6551- Computer Networks / Unit IV


14

FIG: TCP Sliding Window for flow control


• The window spans a portion of the buffer containing bytes received from the process. The bytes
inside the window are the bytes that can be in transit; they can be sent without worrying about
acknowledgment. The imaginary window has two walls: one left and one right. The window is
opened, closed, or shrunk. These three activities are in the control of the receiver (and depend on
congestion in the network), not the sender.

• The sender must obey the commands of the receiver. Opening a window means moving the right
wall to the right. This allows more new bytes in the buffer that are eligible for sending.

• Closing the window means moving the left wall to the right. This means that some bytes have been acknowledged and the sender need not worry about them anymore.

• Shrinking the window means moving the right wall to the left. This is strongly discouraged and not allowed in some implementations, because it means revoking the eligibility of some bytes for sending; this is a problem if the sender has already sent those bytes. The left wall cannot move to the left, because that would revoke some of the previously sent acknowledgments.

• The size of the window at one end is determined by the lesser of two values: the receiver window (rwnd) and the congestion window (cwnd).

• The receiver window is the value advertised by the opposite end in a segment containing an acknowledgment. It is the number of bytes the other end can accept before its buffer overflows and data are discarded.

• The congestion window is a value determined by the network to avoid congestion.

Buffer at Sender
 Maintains data sent but not ACKed
 Data written by application but not sent.

Three pointers are maintained at Sender


LastByteAcked, LastByteSent, LastByteWritten.

Sender maintains
LastByteAcked ≤ LastByteSent
LastByteSent ≤ LastByteWritten

Buffer at Receiver
 Data that arrives out of order
 Data that is in correct order but not yet read by application.
Three pointers are maintained at Receiver
LastByteRead, NextByteExpected ,LastByteRcvd
Receiver maintains
LastByteRead < NextByteExpected
NextByteExpected ≤ LastByteRcvd + 1

How is Flow Control done?


 Receiver “advertises” a window size to the sender based on the buffer size allocated for the connection
through “Advertised Window” field in the TCP header.
 Sender cannot have more than “Advertised Window” bytes of unacknowledged data.
 Buffers are of finite size - i.e., there is a MaxRcvBuffer and MaxSendBuffer.

Setting the Advertised Window


On the TCP receive side, clearly,
LastByteRcvd - LastByteRead ≤ MaxRcvBuffer
Thus, it advertises the space left in the buffer, i.e.,
Advertised Window = MaxRcvBuffer - (LastByteRcvd - LastByteRead)

Sender Side Response


At the sender side, the TCP sender should ensure that:
LastByteSent - LastByteAcked ≤ Advertised Window.
The “Effective Window”, which limits the amount of data that TCP can send, is:
Effective Window = Advertised Window - (LastByteSent - LastByteAcked)
In order to prevent the overflow of the Sender Side buffer:
LastByteWritten - LastByteAcked ≤ MaxSendBuffer

If the application tries to write more, TCP blocks writing into the buffer. The pointer arithmetic is illustrated below.
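A small Python sketch of this bookkeeping (all byte counts form an invented scenario, not values from the source):

```python
MaxRcvBuffer, MaxSendBuffer = 4096, 4096

# Receiver side: advertise the space left in the buffer.
LastByteRcvd, LastByteRead = 3000, 1500
AdvertisedWindow = MaxRcvBuffer - (LastByteRcvd - LastByteRead)

# Sender side: respect the invariants and the advertised window.
LastByteAcked, LastByteSent, LastByteWritten = 2000, 2500, 3200
assert LastByteAcked <= LastByteSent <= LastByteWritten
assert LastByteSent - LastByteAcked <= AdvertisedWindow

EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
print("advertised:", AdvertisedWindow, "effective:", EffectiveWindow)

# The application blocks if writing more would overflow the send buffer.
can_write = (LastByteWritten - LastByteAcked) < MaxSendBuffer
print("application may write more:", can_write)
```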

Silly Window Syndrome


• Silly window syndrome is a problem in computer networking caused by poorly implemented TCP
flow control. A serious problem can arise in the sliding window operation when the sending
application program creates data slowly, the receiving application program consumes data slowly, or
both.

• MSS (Maximum Segment Size) is the largest chunk of data TCP will send to the other side

 MSS can be announced in the options field of the TCP header.


 Serious problems can arise in the sliding window operation when:
 the sending application creates data slowly, or
 the receiving application consumes data slowly (or both).
– Suppose an MSS worth of data is collected and the advertised window is MSS/2. What should the sender do: transmit a half-full segment, or wait to send a full MSS when the window opens? Early implementations were aggressive and transmitted MSS/2. Aggressively doing this consistently results in small segment sizes, a condition called the silly window syndrome.
Consequences of Silly Window Syndrome
– Poor use of network bandwidth
– Unnecessary computational overhead
Solution:
– Use heuristics at sender to avoid transmitting a small amount of data in each segment
– Use heuristics at receiver to avoid sending small window advisements
Receive-side silly window avoidance
– Monitor receive window size
– Delay advertising an increase until a “significant” increase is possible
R.M.K Engineering College CS6551- Computer Networks / Unit IV
17

– “Significant” = min(half the window, maximum segment size)


Send-Side Silly Window Avoidance
– Avoid sending small segments. TCP must delay sending a segment until it contains a reasonable amount of data.
– How long should TCP wait before transmitting data? This is given by Nagle’s algorithm (sketched in code below).
• Nagle’s Algorithm
If both available data and Window ≥ MSS, send full segment.
Else, if there is unACKed data in flight, buffer new data until ACK returns.
Else, send new data now.
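Nagle’s decision rule, as stated above, can be sketched in a few lines of Python (the MSS value and function name are illustrative):

```python
MSS = 1460  # a typical maximum segment size, in bytes

def nagle_send_decision(available_bytes, window_bytes, unacked_in_flight):
    if available_bytes >= MSS and window_bytes >= MSS:
        return "send a full segment"
    if unacked_in_flight:
        return "buffer new data until an ACK returns"
    return "send the new data now"

print(nagle_send_decision(2000, 8000, False))  # send a full segment
print(nagle_send_decision(100, 8000, True))    # buffer new data until an ACK returns
print(nagle_send_decision(100, 8000, False))   # send the new data now
```

The effect is self-clocking: at most one small segment is ever in flight, so the sender cannot flood the network with a stream of tiny packets.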
Quality of Service [QoS]:
Real-time applications imply that the network will treat some packets differently from others, something that is not done in the best-effort model. A network that can provide these different levels of service is often said to support quality of service (QoS).

Quality of service can be informally defined as something a flow seeks to attain.

• Reliability is a characteristic that a flow needs.


• Lack of reliability means losing a packet or acknowledgment, which entails retransmission.
• The sensitivity of application programs to reliability is not the same. For example, it is more
important that electronic mail, file transfer, and Internet access have reliable transmissions than
telephony or audio conferencing.
• Source-to-destination delay is another flow characteristic. Again applications can tolerate delay in
different degrees.
• In this case, telephony, audio conferencing, video conferencing, and remote log-in need minimum
delay, while delay in file transfer or e-mail is less important.
• Jitter is defined as the variation in the packet delay. High jitter means the difference between
delays is large; low jitter means the variation is small. If the jitter is high, some action is needed in
order to use the received data.
• Different applications need different bandwidths


Four Techniques to improve QoS:


Four common methods used to improve the quality of service are:
1. Scheduling
2. Traffic shaping
3. Admission control
4. Resource reservation
Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling technique
treats the different flows in a fair and appropriate manner.
Three Scheduling Techniques to Improve the Quality of Service.
1. FIFO Queuing: In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until
the node (router or switch) is ready to process them. If the average arrival rate is higher than
the average processing rate, the queue will fill up and new packets will be discarded. A FIFO
queue is familiar to those who have had to wait for a bus at a bus stop. Figure shows a
conceptual view of a FIFO queue.


Priority Queuing

• In priority queuing, packets are first assigned to a priority class. Each priority class has its
own queue.
• The packets in the highest-priority queue are processed first. Packets in the lowest-priority
queue are processed last.
• The system does not stop serving a queue until it is empty. Figure shows priority queuing
with two priority levels (for simplicity).
• A priority queue can provide better QoS than the FIFO queue because higher priority traffic,
such as multimedia, can reach the destination with less delay.
• However, there is a potential drawback. If there is a continuous flow in a high-priority
queue, the packets in the lower-priority queues will never have a chance to be processed.
This is a condition called starvation.
Weighted Fair Queuing


• A better scheduling method is weighted fair queuing. In this technique, the packets are still
assigned to different classes and admitted to different queues.
• The queues are weighted based on the priority of the queues; higher priority means a higher
weight.
• The system processes packets in each queue in a round-robin fashion with the number of
packets selected from each queue based on the corresponding weight.
• For example, if the weights are 3, 2, and 1, three packets are processed from the first queue, two from the second queue, and one from the third queue. If the system does not impose priority on the classes, all weights can be equal. In this way, there is fair queuing with priority. The figure shows the technique with three classes, and a small sketch of the round-robin service follows.
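A minimal Python sketch of weighted round-robin service across three class queues (the packet names and weights are illustrative):

```python
from collections import deque

# Three class queues with weights 3, 2, and 1, as in the example above.
queues = [deque(["a1", "a2", "a3", "a4"]),  # weight 3
          deque(["b1", "b2", "b3"]),        # weight 2
          deque(["c1", "c2"])]              # weight 1
weights = [3, 2, 1]

def one_round(queues, weights):
    """Serve up to `weight` packets from each queue, round-robin."""
    sent = []
    for q, w in zip(queues, weights):
        for _ in range(min(w, len(q))):
            sent.append(q.popleft())
    return sent

print(one_round(queues, weights))  # ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']
```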
Traffic Shaping

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the
network. Two techniques can shape traffic: leaky bucket and token bucket.

FIG: Leaky bucket algorithm

• If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is water in the bucket. The rate at which the water leaks does not depend on the rate at which water is poured into the bucket, unless the bucket is empty. The input rate can vary, but the output rate remains constant.
• Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic.
Bursty chunks are stored in the bucket and sent out at an average rate. Figure 4.36 shows a
leaky bucket and its effects.
• In the figure, the network has committed a bandwidth of 3 Mbps for a host. The use of the
leaky bucket shapes the input traffic to make it conform to this commitment. The host sends
a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data.
• The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6
Mbits of data. In all, the host has sent 30 Mbits of data in 10s.
• The leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s.
• Without the leaky bucket, the beginning burst may have hurt the network by consuming
more bandwidth than is set aside for this host. The leaky bucket may prevent congestion.
• A simple leaky bucket implementation is shown in Figure 4.37.

Fig: Implementation of leaky bucket algorithm.
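A Python sketch of the leaky bucket using the numbers from the example above (the bucket capacity and the unit of "Mbits per tick" are illustrative assumptions):

```python
from collections import deque

def leaky_bucket(arrivals, rate, capacity):
    """Queue arriving units in the bucket; release at most `rate` per tick."""
    bucket, output = deque(), []
    for tick, burst in enumerate(arrivals):
        for _ in range(burst):
            if len(bucket) < capacity:   # a full bucket discards the excess
                bucket.append(tick)
        sent = 0
        while bucket and sent < rate:
            bucket.popleft()
            sent += 1
        output.append(sent)
    return output

# Input: 12 Mbits/s for 2 s, 5 s idle, then 2 Mbits/s for 3 s (30 Mbits total).
print(leaky_bucket([12, 12, 0, 0, 0, 0, 0, 2, 2, 2], rate=3, capacity=30))
# -> [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]: a steady 3 Mbps over the same 10 s
```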

Token Bucket algorithm:


• The leaky bucket is very restrictive. It does not credit an idle host. For example, if a
host is not sending for a while, its bucket becomes empty. Now if the host has bursty
data, the leaky bucket allows only an average rate. The time when the host was idle is
not taken into account.
• The token bucket algorithm allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system adds n tokens to the bucket. The system removes one token for every cell (or byte) of data sent.
• For example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000
tokens. Now the host can consume all these tokens in one tick with 10,000 cells, or the
host takes 1000 ticks with 10 cells per tick.

In other words, the host can send bursty data as long as the bucket is not empty, as the sketch below illustrates.
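A Python sketch of the token-bucket accounting, using the numbers from the example above (the function and parameter names are illustrative):

```python
def token_bucket(n, idle_ticks, cells_to_send):
    """n tokens are added per tick; one token is spent per cell sent."""
    tokens = n * idle_ticks            # credit accumulated while idle
    sent = min(tokens, cells_to_send)  # a burst is limited only by the tokens held
    return sent, tokens - sent

# n = 100 and 100 idle ticks yield 10,000 tokens: a 10,000-cell burst is allowed.
sent, remaining = token_bucket(n=100, idle_ticks=100, cells_to_send=10_000)
print(sent, remaining)  # 10000 0
```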

Application Requirements
 Data is generated by collecting samples from a microphone and digitizing them using an A-to-D converter. The digital samples are placed in packets, which are transmitted across the network and received at the other end.
 At the receiving host the data must be played back at some appropriate rate.
 If data arrives after its appropriate playback time, either because it was delayed in the network or because it was dropped and subsequently retransmitted, it is useless.
 It is difficult to guarantee that all data traversing a packet-switched network will experience exactly the same delay.


 Delay tends to vary with time and is different for each packet in the audio stream. The way to deal with this at the receiver end is to buffer up some amount of data in reserve, thereby always providing a store of packets waiting to be played back at the right time.
 If a packet is delayed a short time, it goes in the buffer until its playback time arrives. If it gets delayed a long time, then it will not need to be stored for very long in the receiver's buffer before being played back.
 Thus, we have effectively added a constant offset to the playback time of all packets. This offset is called the playback point.
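A Python sketch of playback-point buffering (the 100 ms offset and the packet timings are invented for illustration):

```python
offset = 100  # ms; the playback point

# (generation time, arrival time) pairs, in ms.
packets = [(0, 40), (20, 90), (40, 130), (60, 180)]

for gen, arrival in packets:
    playback = gen + offset            # every packet plays at gen + offset
    if arrival <= playback:
        print(f"gen={gen}: buffered {playback - arrival} ms, played at t={playback}")
    else:
        print(f"gen={gen}: arrived after its playback time, discarded")
```

Note how the packet that was delayed longest (arrival 130 for generation 40) waits the least time in the buffer, and the one arriving after its playback point is useless.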

FIG: A playback buffer
 Left-hand diagonal line: shows packets being generated at a steady rate.
 Wavy line: shows when the packets arrive, some variable amount of time after they were sent, depending on what they encountered in the network.
 Right-hand diagonal line: shows the packets being played back at a steady rate, after sitting in the playback buffer for some period of time.

Taxonomy of Real time applications


Applications are categorized into:
 Elastic applications
 Real-time applications
Elastic or non-real-time applications
 Non-real time is a term used to describe a process or event that does not occur immediately.
 For example, a forum can be considered non-real time because responses often do not occur immediately and can sometimes take hours or days.
Real time applications
o A real-time application (RTA) is an application program that functions within a time frame that
the user senses as immediate or current.
o Examples:
1. Videoconferencing applications and online gaming.
2. A robot control program is likely to be an example of a real-time application that cannot tolerate loss: losing the packet that contains the command instructing the robot arm to stop is unacceptable.
Tolerant: can tolerate occasional loss of data.
Intolerant: cannot tolerate such losses.
Delay-adaptive: applications that can adjust their playback point (delay or advance over time).
Rate-adaptive: can alter the bit rate depending on available bandwidth and BER.


 A second way to characterize real-time applications is by their adaptability. For example, an audio application might be able to adapt to the amount of delay that packets experience as they traverse the network.
 Playback point adjustment is easy, and it has been effectively implemented for several voice applications such as the audio teleconferencing program known as vat.
 Two Approaches to QoS Support
1) Fine-grained: provides QoS to individual applications or flows. Example: Integrated Services [RSVP].
2) Coarse-grained: provides QoS to aggregated traffic. Example: Differentiated Services.
