
Packet Reordering in TCP

Project report

Submitted in partial fulfillment of the requirements for EL 9953:


Advanced Project I

Department of Electrical & Computer Engineering


Polytechnic Institute of NYU

Acknowledgement
The success of this project depended not only on my own efforts but largely on the encouragement
and guidance of many others. I take this opportunity to express my gratitude to the people who
have been instrumental in the successful completion of this project.

I would like to show my greatest appreciation to Prof. Kang Xi. I cannot thank him enough for
his tremendous support and help. I feel motivated and encouraged every time I attend his
meetings.

The guidance and support received from all my classmates was vital for the success of the
project. I am grateful for their constant support and help.

Abstract
Packet reordering can affect the performance of both the network and the receiver of the packets.
This report first presents the causes and effects of packet reordering on TCP. It then discusses
metrics for measuring packet reordering, describes techniques for improving TCP's performance
when packet reordering occurs, and finally illustrates methods for preventing packet reordering
in TCP.

Introduction
In this project I studied different research papers addressing the packet reordering problem in
Internet traffic. My research started with the basics of packet reordering in TCP. To gain
in-depth knowledge of the topic, I studied metrics for measuring packet reordering. In addition,
some papers carefully addressed critical issues such as preventing packet reordering in TCP,
which were a great source of knowledge for me.

Packet reordering occurs naturally as a result of the parallelism of the paths a packet can
traverse, and arises mainly from changes in a packet's route. The causes of such route fluttering
range from malfunctioning network components to network congestion. Packet reordering is mainly
of two types: forward-path reordering and reverse-path reordering. Forward-path reordering
concerns the reordering of data packets, while reverse-path reordering concerns the reordering of
acknowledgements. The effects of forward-path reordering include unnecessary retransmissions,
slow growth of TCP's congestion window and erroneous calculation of a packet's round-trip time.
The main effect of reverse-path reordering is the loss of self-clocking. One of the toughest
challenges in detecting packet reordering is deciding whether a packet was reordered or lost on
the path from source to destination.

Research in this field has focused on improving TCP performance in the presence of packet
reordering, while other work has emphasized changing the way routers with parallel buffers
operate. To gain some background on the causes and effects of packet reordering, I started my
research by studying papers that focus on the causes and effects of packet reordering on TCP
flows.

Most of this research analyzed the effect of reordering on TCP performance. To analyze the
important parameters affected by reordering, a metric for measuring packet reordering is
required; several such metrics have been proposed and are discussed below.

Packet Reordering
The main causes of packet reordering are:
• Packet reordering inside routers and switches: two packets belonging to the same flow arrive
at a node and are assigned to different queues with different lengths. Because of the different
queue lengths, the packets may leave the node out of order.
• Retransmission: when a packet is lost, its retransmission may arrive out of order.
• DiffServ scheduling: if a flow exceeds its negotiated service-level constraints, the non-
conformant packets of the flow are either dropped or given a lower priority and placed in a
different queue.
• Load splitting: with multiple paths, different packets of the same flow take different routes
to achieve load balancing.

At the receiver, packet reordering is detected when an arriving packet's sequence number is less
than that of a previously received packet in the same connection. The algorithm shown in
Figure-1 [1] is used to detect reordering at the receiver.

Figure-1 Reorder Detection Algorithm

In Figure-1, the decision whether a packet was reordered is based on the TCP sequence number,
the IP ID and the time lag. The time lag is used to distinguish packet reordering from packet
loss: a particular value of the time lag is established as a threshold separating the two cases.
A careful conclusion drawn by this paper is that packet reordering is site-dependent rather than
prevalent across the whole Internet.
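
As a rough illustration of this idea (not the exact algorithm of Figure-1; the threshold value
and the packet fields used below are assumptions), the classification at the receiver could look
like this:

```python
# Rough illustration of distinguishing reordering from loss using the TCP
# sequence number and a time-lag threshold. The threshold value and the
# packet fields are assumptions, not the exact algorithm of [1].

TIME_LAG_THRESHOLD = 0.1  # seconds (assumed value)

def classify(seq, arrival_time, highest_seq_seen, expected_time):
    """Classify an arriving packet at the receiver."""
    if seq >= highest_seq_seen:
        return "in-order"
    # The packet carries a lower sequence number than one already received:
    # it was either reordered in the network or lost and retransmitted.
    # In [1] the IP ID and the time lag are used to tell the two cases apart;
    # here only the time lag is shown.
    if arrival_time - expected_time < TIME_LAG_THRESHOLD:
        return "reordered"
    return "retransmission after loss"
```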

In the measurement data provided by Yi Wang, Guohan Lu and Xing Li in "A Study of Internet
Packet Reordering" [1], 3.7% of the 3.3 million data packets captured for observation were
reordered. In addition, 5.79% of the 10,647 web sites observed experienced reordering at least
once. This gives a brief idea of how much reordering occurs and how often it happens in a TCP
connection.

Metrics for Packet Reordering


Generally, reordering in a flow is measured as the percentage of reordered packets. However,
this definition is not uniform; packet reordering should be measured according to the order of
delivery.
Reorder Density
Reorder Density captures both the amount and the extent of packet reordering in an arrival
sequence. For example, consider Table-1 shown below.

Table-1 Reorder Density


Table-1 shows an arriving sequence of packets (1, 2, 3, 4, 7, 5, 6, 8); the receive index is
assigned at the receiver, and the displacement of each packet from its expected position is
recorded. For example, packets 5 and 6 are displaced by one unit from their expected positions,
and packet 7 is displaced by two positions. RD is then defined as the histogram of the
displacement values, normalized with respect to the total number of packets.
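
A small sketch of how RD could be computed for the arrival sequence of Table-1 (an illustrative
implementation written for this report, not code from [1]):

```python
from collections import Counter

def reorder_density(arrival_sequence):
    """Reorder Density: histogram of displacements (receive index minus
    sequence number), normalized by the total number of packets.
    Assumes every packet 1..N arrives exactly once (no loss or duplication)."""
    n = len(arrival_sequence)
    displacements = Counter()
    for receive_index, seq in enumerate(arrival_sequence, start=1):
        displacements[receive_index - seq] += 1
    return {d: count / n for d, count in sorted(displacements.items())}

# Example from Table-1: packets arrive as (1, 2, 3, 4, 7, 5, 6, 8).
print(reorder_density([1, 2, 3, 4, 7, 5, 6, 8]))
# {-2: 0.125, 0: 0.625, 1: 0.25}  (packet 7 early by 2, packets 5 and 6 late by 1)
```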

Reorder Buffer Occupancy Density


When a packet arrives out of order, it is stored in a hypothetical buffer until it can be
released in order. The reorder buffer occupancy is evaluated at each arrival, with one buffer
slot allocated per out-of-order packet. The occupancy density of this buffer, RBD, is a measure
of reordering.

Table-2 Reorder Buffer occupancy Density

Table-2 shows that, for the arriving sequence (1, 2, 3, 4, 7, 5, 6, 8), the buffer occupancy
becomes 1 when packet 7 arrives, because it arrives out of order. The occupancy remains 1 when
packet 5 arrives and drops back to zero when packet 6 arrives. The RBD of the packet sequence,
shown in the table, is the normalized histogram of the occupancy of this hypothetical buffer
used to recover from out-of-order arrivals.
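
The same example can be reproduced with a short sketch that simulates the hypothetical buffer
(again an illustration written for this report, not the metric's reference implementation):

```python
def reorder_buffer_occupancy_density(arrival_sequence):
    """RBD: normalized histogram of the occupancy of a hypothetical buffer
    that holds out-of-order packets until they can be released in order.
    Assumes packets 1..N each arrive exactly once."""
    n = len(arrival_sequence)
    buffer, next_expected, hist = set(), 1, {}
    for seq in arrival_sequence:
        if seq == next_expected:
            next_expected += 1
            # Release any buffered packets that are now in order.
            while next_expected in buffer:
                buffer.remove(next_expected)
                next_expected += 1
        else:
            buffer.add(seq)
        hist[len(buffer)] = hist.get(len(buffer), 0) + 1
    return {k: v / n for k, v in sorted(hist.items())}

# Example from Table-2: arrival sequence (1, 2, 3, 4, 7, 5, 6, 8).
print(reorder_buffer_occupancy_density([1, 2, 3, 4, 7, 5, 6, 8]))
# {0: 0.75, 1: 0.25}
```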

Reordering Extent
Reordering extent is a lateness-based metric: it is the maximum distance, in packets, from a
late packet to the earliest received packet with a larger sequence number. The concept is easier
to understand with the example shown in Table-3.

Table-3 Reordering Extent

In Table-3, consider the packet with sequence number 5, i.e. s[i] = 5; the receive index i for
which s[i] = 5 is 6. We then choose the earliest j such that s[j] > s[i], which in this case is
j = 5. The reordering extent is given by i - j, which here is 6 - 5 = 1.
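
The per-packet extents for the running example can be computed with the following sketch (an
illustrative implementation of the lateness-based definition above):

```python
def reordering_extents(arrival_sequence):
    """For each late packet, compute its reordering extent: the distance (in
    packets) back to the earliest earlier-arriving packet with a larger
    sequence number. Illustrative sketch of the lateness-based metric."""
    extents = {}
    for i, seq in enumerate(arrival_sequence, start=1):
        earlier_larger = [j for j, s in enumerate(arrival_sequence[:i - 1], start=1) if s > seq]
        if earlier_larger:
            extents[seq] = i - earlier_larger[0]  # earliest such packet gives the maximum distance
    return extents

# Example from Table-3: packet 5 arrives at receive index 6, and packet 7
# arrived earlier at index 5, so its extent is 6 - 5 = 1.
print(reordering_extents([1, 2, 3, 4, 7, 5, 6, 8]))
# {5: 1, 6: 2}
```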

Application of Reordering Metrics


• TCP flow control: the metrics can be used to decide the value of dupthresh in TCP flow
control.
• Buffer and resource allocation: the reordering extent e is used to estimate the amount of
buffer required.
• Network diagnosis: packet reordering caused by errors and faults in a node can be detected by
measuring the displacement field of the reorder density introduced by that node.

Improving TCP performance on occurrence of Packet Reordering
Today's Internet traffic is largely generated by TCP. A TCP sender uses two different error-
recovery strategies: (1) timeout-based retransmission and (2) DUPACK-based retransmission. In
the context of packet reordering there is another important term, the spurious timeout. Spurious
timeouts are timeouts that would not have occurred had the sender waited long enough. Their
cause is the retransmission ambiguity, i.e. the TCP sender's inability to distinguish an ACK for
the original transmission of a segment from an ACK for its retransmission.

The Eifel Algorithm


To improve TCP performance in the case of spurious retransmissions, the Eifel algorithm [2]
proposes a technique for eliminating the retransmission ambiguity. Adding the TCP timestamp
option to the header solves the ambiguity: the sender stores the timestamp of its retransmission,
and when it receives an ACK it compares the stored timestamp with the timestamp echoed in the
ACK from the receiver.
Figure-2 illustrates the mechanism of Eifel Algorithm.

Figure-2 Eifel Algorithm Mechanism

The importance of this mechanism lies in the fact that implementing the Eifel algorithm requires
only minor changes at the sender; no changes are required at the receiver or to the protocol.
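
The core of the timestamp comparison can be sketched as follows (a simplified illustration with
assumed state and function names; see [2] for the full algorithm):

```python
# Simplified sketch of Eifel-style spurious-retransmission detection.
# State and function names are assumptions for illustration; see [2].

def on_retransmit(state, timestamp_now):
    # The sender remembers the timestamp carried by the retransmission.
    state["retransmit_ts"] = timestamp_now

def on_ack(state, echoed_timestamp):
    """The ACK echoes the timestamp of the segment that triggered it (TCP
    timestamp option). If that timestamp is older than the retransmission's
    timestamp, the ACK belongs to the original transmission, so the
    retransmission was spurious and congestion-control state can be restored."""
    if "retransmit_ts" in state and echoed_timestamp < state["retransmit_ts"]:
        return "spurious retransmission detected: undo cwnd/ssthresh reduction"
    return "ACK for the retransmission: normal loss recovery"
```
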
TCP performs poorly in the presence of packet reordering because it misinterprets out-of-order
delivery as packet loss. The sender responds to out-of-order packets with a fast retransmit even
though no actual loss has occurred. These repeated false fast retransmits keep shrinking the
sender's window and severely degrade throughput. Systems such as DiffServ, multi-path routing
and parallel packet switches can reorder packets, so a TCP that tolerates out-of-order delivery
is needed.

DSACK TCP
DSACK [3] is an extension of SACK TCP that is useful for improving TCP performance in the
presence of reordering. DSACK reports to the sender when duplicate packets arrive at the
receiver.

Figure-3 DSACK TCP

Figure-3 shows packet s1 reaching the destination after packet s4. The destination therefore
sends three duplicate ACKs, A1, A2 and A3, telling the sender to retransmit packet s1. In ACK A4
the destination acknowledges that it has received all the packets s1 through s4, and ACK A5 is
the DSACK indicating the arrival of the duplicate packet s1 at the receiver. One way to avoid
such false fast retransmits is to increase the value of dupthresh, but the increase comes at a
cost: on lossless paths a larger dupthresh improves TCP throughput, whereas on paths where
packets are dropped it can delay loss recovery and trigger a retransmission timeout. Therefore
an approach is needed that dynamically adjusts the value of dupthresh.
To eliminate the effects of false fast retransmits, an enhancement to TCP has been introduced
that improves the protocol's robustness to reordered and delayed packets: the sender detects and
recovers from false fast retransmits using DSACK information, and avoids false fast retransmits
proactively by adaptively varying dupthresh.
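
A rough sketch of the adaptive idea is shown below; the constants and update rules are
illustrative assumptions, not the exact adjustment policies of RR-TCP [3]:

```python
# Rough sketch of adaptively varying dupthresh from DSACK feedback.
# Constants and update rules are illustrative assumptions, not the exact
# RR-TCP [3] algorithm.

class AdaptiveDupthresh:
    def __init__(self):
        self.dupthresh = 3        # standard TCP fast-retransmit threshold
        self.max_dupthresh = 16   # assumed upper bound

    def on_dsack(self, reordering_extent):
        # A DSACK reveals that a fast retransmit was false: the segment was
        # only reordered. Raise dupthresh above the observed reordering extent
        # so a similar event no longer triggers a false fast retransmit.
        self.dupthresh = min(max(self.dupthresh, reordering_extent + 1),
                             self.max_dupthresh)

    def on_timeout(self):
        # A retransmission timeout suggests dupthresh has grown too large to
        # detect genuine losses quickly, so back it off.
        self.dupthresh = max(3, self.dupthresh // 2)
```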

Preventing Reordering of Packets


After considering the effects of packet reordering on TCP and measures taken to improve TCP
performance in case of packet reordering, I studied the techniques implemented for preventing
packet reordering in TCP.

Adaptive Burst Shifting


This scheme balances load among processing units by mapping packet flows to processors using a
weighted hash function.

Figure-4 Adaptive Burst Shifting [4]

The burst distributor looks at the flows currently present, inserts newly arrived flows into the
flow table, and assigns them to the least loaded processor. A flow-table entry contains a flow
ID and the number of that flow's packets currently in the system. The hash splitter hashes over
flow identifiers and outputs the index of the processor to which the packet is to be forwarded.
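
A simplified sketch of this dispatching logic is shown below; the class and function names, and
the decision to reassign only flows with no packets left in the system, are my assumptions for
illustration, not the exact design of [4]:

```python
import hashlib

# Simplified sketch of flow-table based dispatching with a hash splitter.
# Names and data structures are assumptions, not the exact design of [4].

def hash_splitter(flow_id, num_processors):
    """Default mapping: hash the flow identifier to a processor index."""
    return hashlib.md5(str(flow_id).encode()).digest()[0] % num_processors

class BurstDistributor:
    def __init__(self, num_processors):
        self.num_processors = num_processors
        self.loads = [0] * num_processors  # packets currently assigned to each processor
        self.flow_table = {}               # flow_id -> [processor, packets in system]

    def dispatch(self, flow_id):
        entry = self.flow_table.get(flow_id)
        if entry is None:
            # Flow not (or no longer) in the system: it can be (re)assigned to
            # the least loaded processor without reordering its packets.
            proc = min(range(self.num_processors), key=lambda p: self.loads[p])
            entry = self.flow_table[flow_id] = [proc, 0]
        entry[1] += 1
        self.loads[entry[0]] += 1
        return entry[0]

    def packet_left_system(self, flow_id):
        entry = self.flow_table[flow_id]
        entry[1] -= 1
        self.loads[entry[0]] -= 1
        if entry[1] == 0:
            del self.flow_table[flow_id]   # flow has no packets left in the system
```
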
The experimental results in "Sequence-Preserving Adaptive Load Balancers" [4] illustrate that
with this scheme TCP performs efficiently and the packet reorder density is comparatively low.

Ordered Round Robin


ORR (Ordered Round Robin) [5] is another scheme that can be used to prevent packet reordering in
TCP. ORR is a packet scheduling algorithm whose main objective is to ensure both load balancing
across a group of heterogeneous processors and in-order delivery of packets.

Figure-5 Ideal load Distribution

Figure-5 shows the ORR scheme over multiple rounds with a dispatching processor, worker
processors and a transmitting processor. The D step in Figure-5 is the time taken by the
dispatching processor to send a packet to a worker processor, the P step is the time taken by
the worker processor to process the packet, and the T step is the time taken by the worker
processor to transmit the packet to the transmitting processor.

The figure assumes an optimal load distribution. The scheme ensures both sequential delivery and
load balancing within a TCP flow; the paper assumes that packet processing time is proportional
to packet length. Under the ideal load distribution, the output port delivers packets
sequentially, which ensures in-order delivery of data.
However, Figure-5 considers only the ideal case; in a practical scenario, where packets of
variable length reach the network processor, some challenging issues need to be addressed. The
issues concerning a practical implementation of this scheme are as follows.
• How many packets should be dispatched to each processor to produce the desired pattern?
• How should variable-length packets be handled?
• Given a network configuration, how should packets be scheduled to ensure sequential delivery?
• How can fair scheduling be ensured among multiple flows with reservations?

To address these problems, consider the figure shown below, which illustrates the load
distribution over multiple rounds.

Figure-6 Load Distribution in Practical Scenario

Figure-6 demonstrates a practical approach to ORR. Here a time gap is introduced between
adjacent scheduling rounds to ensure that the scheduling of two adjacent rounds does not overlap
and cause resource conflicts. For example, in Figure-6 the dispatch step D4 is complete but
processor P1 is still processing.
The value of GAPd determines the scheduling time.

GAPd = GAPt = (w + 2z - zM) mL,   if M < Msaturate
GAPd = GAPt = 0,                  if M > Msaturate
GAP2 = (zM - w - 2z) mL,          if M > Msaturate
GAP2 = 0,                         if M < Msaturate

where
Msaturate = [w/z + 2]
m = B/I, where B is the batch size
L = the maximal possible packet length
M = the total number of worker processors
w = the processing rate of a processor
z = the link bandwidth connecting a worker processor to the dispatching processor and the
transmitting processor to a worker processor, measured in sec/byte.
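
Under the assumption that the square brackets in Msaturate denote the floor operation and that
the boundary case M = Msaturate falls into the zero-gap branch for GAPd and GAPt (both are my
interpretations, not stated in [5]), the gaps can be computed directly from these definitions:

```python
import math

def orr_gaps(w, z, M, m, L):
    """Compute the ORR inter-round gaps from the definitions above.
    w: processing rate of a processor (sec/byte), z: link transfer time (sec/byte),
    M: number of worker processors, m: batch granularity, L: maximal packet length.
    Interprets [w/z + 2] as a floor and treats M == Msaturate as the zero-gap
    case for GAPd/GAPt; both are assumptions, not taken from [5]."""
    m_saturate = math.floor(w / z + 2)
    if M < m_saturate:
        gap_d = gap_t = (w + 2 * z - z * M) * m * L
        gap_2 = 0.0
    else:
        gap_d = gap_t = 0.0
        gap_2 = (z * M - w - 2 * z) * m * L
    return gap_d, gap_t, gap_2, m_saturate
```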

Batch Size
To find the minimal batch size, at least one packet must be dispatched to a processor. This
gives
B = mI,
where m is a positive integer referred to as the batch granularity, and
I = CL,
where C is the minimal positive integer such that at least one packet fits into the load
fraction.

Fair Scheduling
To illustrate fair scheduling, consider the case where two flows with reservations
(r1, r2) = (0.75, 0.25) are scheduled. To guarantee that flow 1 gets three times the resources
of flow 2, the following configuration is used.

Figure-7 Fair Scheduling among flows

Figure-7 shows that flow r1 is dispatched three times in one scheduling round, whereas flow r2
is dispatched only once per round.
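
One simple way to realize such reservations within a scheduling round is to give each flow a
number of dispatch slots proportional to its reservation; the sketch below only illustrates this
idea and is not the paper's scheduler (slots_per_round is an assumed parameter):

```python
def dispatch_counts(reservations, slots_per_round=4):
    """Map per-flow reservations to dispatch counts within one scheduling round.
    Illustrative sketch only; slots_per_round is an assumed parameter."""
    return {flow: round(r * slots_per_round) for flow, r in reservations.items()}

# With reservations (r1, r2) = (0.75, 0.25) and 4 dispatch slots per round,
# flow r1 is dispatched 3 times and flow r2 once, matching Figure-7.
print(dispatch_counts({"r1": 0.75, "r2": 0.25}))
# {'r1': 3, 'r2': 1}
```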

Conclusion
In this project report, various issues concerning the packet reordering problem in TCP have been
addressed. I learned different standard metrics for measuring packet reordering in the Internet.
Moreover, the various techniques presented for preventing reordering can be beneficial for
increasing TCP throughput.

References

[1] Yi Wang, Guohan Lu, Xing Li. A Study of Internet Packet Reordering. Department of Electronic
Engineering, Tsinghua University, Beijing, 2003.

[2] Reiner Ludwig, Randy H. Katz. The Eifel Algorithm: Making TCP Robust Against Spurious
Retransmissions. Ericsson Research, Germany.

[3] Ming Zhang, Brad Karp, Sally Floyd, Larry Peterson. RR-TCP: A Reordering-Robust TCP with
DSACK, IEEE International Conference on Network Protocols (ICNP’03).

[4] Weiguang Shi, Lukas Kencl. Sequence-Preserving Adaptive Load Balancers. ANCS '06, December
3-5, 2006.

[5] Jingnan Yao, Jiani Guo, Laxmi Narayan Bhuyan. Ordered Round-Robin: An Efficient Sequence
Preserving Packet Scheduler, IEEE Computer Society, May 2008.

