
A SUMMARIZED REPORT ON

TCP PERFORMANCE ENHANCEMENT ON DATA TRANSFER
USING SCHEDULING ALGORITHM

Research paper by:
A. Bharathi, K. Ananda Kumar, A. Shanumugam

Summarized by:
Mahan Malik
MCA Vth Sem
0712814023
ABSTRACT
This paper analyzes TCP performance for bulk transfer traffic and small packet transfer
traffic separately, to reveal some of the issues in TCP performance and their possible
solutions.

The paper develops an efficient scheduling algorithm for TCP bulk data transfer traffic,
which sets up the TCP receive window so as to make data movement as efficient as
possible. Request-reply types of applications typically generate small packet traffic, for
which the latency of each packet matters as much as the throughput delivered. The
scheduling algorithm developed in this paper therefore also provides a technique to
reduce latency.

INTRODUCTION
TCP has the following characteristics:-
1) Unicast Protocol
2) Connection State
3) Reliable
4) Full Duplex
5) Streaming
6) Rate Adaptation

LITERATURE REVIEW
Categorizing TCP traffic:-

TCP traffic basically falls into two types:


 Bulk Transfer Traffic: the payload size of most segments from sender to receiver is
1460 bytes. (Ethernet has a maximum frame size of 1518 bytes, which typically
includes 14 bytes of header and 4 bytes of CRC; from the remaining 1500
bytes, 20 bytes of IP header and 20 bytes of TCP header are subtracted, leaving
1460 bytes.) Examples: FTP transfers and downloading web pages with a large
amount of graphics.

 Small Packet Traffic: the payload size of most segments from sender to receiver is
below 1460 bytes. For example, a request from the client and a short reply by
the server.
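The payload arithmetic in the bulk transfer case above can be written out directly; this is a minimal sketch of that calculation, with the header sizes taken from the text (IP and TCP headers without options):

```python
# Derive the 1460-byte full-segment payload from the 1518-byte Ethernet frame,
# as described above: strip the Ethernet header/CRC, then the IP and TCP headers.

ETHERNET_FRAME_MAX = 1518   # bytes, maximum Ethernet frame
ETHERNET_HEADER = 14        # bytes
ETHERNET_CRC = 4            # bytes
IP_HEADER = 20              # bytes (no options)
TCP_HEADER = 20             # bytes (no options)

mtu = ETHERNET_FRAME_MAX - ETHERNET_HEADER - ETHERNET_CRC   # 1500 bytes
tcp_payload = mtu - IP_HEADER - TCP_HEADER                  # 1460 bytes

print(mtu, tcp_payload)  # 1500 1460
```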

Bottlenecks:-
A high-speed line sends small messages so quickly that it creates massive interrupt
pressure within the kernel space of the receiving host. Furthermore, the kernel
must process the packets and move their payloads into user space after receipt and
error checking. High communication overhead can overwhelm the processor and
prevent it from spending valuable time on computation. This processing time also
prevents messages from being delivered to the application quickly. Due to these
blockades, applications cannot harness the bandwidth and speed the network provides.

Transmission Latency:-
Many high-performance, scientific computing applications depend on rapid, low-latency
transmission of messages between processors. High message latency leads to CPU
idling and wasted resources, because an application may have to wait for a message to
arrive before continuing its computation. Consistently low message latency is therefore
essential: processes that frequently send messages between nodes suffer performance
loss in a high-latency environment.

PROBLEM DEFINITION

Two problems have been observed:


Small Packet Problem: - this problem occurs when we transfer small data packets.
Suppose we want to send 1 byte of data; we still have to attach 40 bytes of header (20
bytes of IP header and 20 bytes of TCP header). This increases the overhead, which can
result in congestion, lost datagrams and retransmissions. In practice, throughput may
drop so low that TCP connections are aborted.
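The overhead in the 1-byte case above is easy to quantify; this short sketch works through that arithmetic:

```python
# Small-packet overhead, as described above: 1 byte of data still carries
# 40 bytes of IP + TCP headers, so nearly the whole packet is header.

payload = 1            # bytes of application data
headers = 20 + 20      # IP header + TCP header, in bytes

total = payload + headers                  # 41 bytes on the wire (above IP)
overhead_fraction = headers / total

print(f"{overhead_fraction:.1%} of the packet is header")  # 97.6%
```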

Bulk Data Transfer Problem: - this problem occurs when the amount of data to
move from one computer to another is very large, e.g. FTP transfers or heavy graphics
downloads from the web.

PHASES

Small packet transfer and scheduling algorithm: - To reduce transmission latency
during small packet transfer, an inter-user scheduling priority model is used, deploying
fair queuing with strict priority or rate priority. Each user reports its measured channel
condition to the proportional fair (PF) scheduler. The user with the best channel is
selected to transmit in a given time slot: PF weights the rate currently achievable by
each user against the average rate that user has received, and selects the user with the
best ratio.
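The PF selection rule described above can be sketched as follows. This is a minimal, generic proportional-fair scheduler, not the paper's implementation; the function names, the smoothing constant alpha, and the example rates are illustrative assumptions:

```python
# Proportional fair (PF) scheduling sketch: pick the user maximizing the ratio
# of currently achievable rate to smoothed average received rate, then update
# the chosen user's average with an exponential moving average.

def pf_select(current_rates, avg_rates):
    """Return the index of the user with the highest PF metric r_i / R_i."""
    metrics = [r / avg for r, avg in zip(current_rates, avg_rates)]
    return max(range(len(metrics)), key=metrics.__getitem__)

def pf_update(avg_rates, chosen, current_rates, alpha=0.1):
    """EWMA update: the scheduled user averages in its served rate, others decay."""
    return [
        (1 - alpha) * avg + alpha * (current_rates[i] if i == chosen else 0.0)
        for i, avg in enumerate(avg_rates)
    ]

# One time slot: user 1 has the best channel relative to its average rate.
current = [5.0, 8.0, 3.0]    # rates each user could achieve now
average = [4.0, 4.0, 4.0]    # smoothed rates each user has received so far
chosen = pf_select(current, average)
average = pf_update(average, chosen, current)
print(chosen)  # 1
```

The ratio metric is what gives PF its fairness: a user with a poor channel is still eventually served once its average rate decays far enough.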

Bulk data transfer and scheduling algorithm: - The objective of this phase is to
maximize the efficiency of data transfer. TCP should endeavour to locate the dynamic
equilibrium point of maximum network efficiency, where the sending rate is maximized
just prior to the onset of sustained packet loss. Increasing the rate beyond that point
risks creating a congestion condition within the network, with rapidly increasing packet
loss levels.
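The equilibrium-seeking behaviour described above resembles TCP's standard additive-increase / multiplicative-decrease (AIMD) probing: raise the rate until loss appears, then back off. The sketch below is that generic AIMD loop, not the paper's algorithm; the rates, capacity, and loss condition are illustrative assumptions:

```python
# Generic AIMD rate probing: add a constant each step without loss,
# halve the rate when loss is detected. The sender oscillates around
# the bottleneck capacity, i.e. the equilibrium point described above.

def aimd_step(rate, loss_detected, increase=1.0, decrease=0.5):
    """One control step: additive increase, multiplicative decrease."""
    return rate * decrease if loss_detected else rate + increase

rate = 10.0        # current sending rate (arbitrary units)
capacity = 14.0    # hypothetical bottleneck capacity
history = []
for _ in range(8):
    loss = rate > capacity       # loss appears once we exceed the bottleneck
    rate = aimd_step(rate, loss)
    history.append(rate)

print(history)  # rises past 14, halves, then climbs again
```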

METHODOLOGY

Size of TCP window: -The size of the window can be determined by the formula
(X*T)/8, where X = rate in bps
and T = latency time in seconds.
To carry one gigabit per second on Ethernet, the system must deliver
1,000,000,000/8/1518 = 82,345 packets per second. This is equivalent to delivering one
full-sized packet roughly every 12 microseconds. If the latency is 100 microseconds, the
size of the window needs to be at least 1,000,000,000*0.0001/8 = 12,500 bytes.
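The two calculations above, written out as a small sketch (function names are illustrative):

```python
# Window size and packet rate from the formula above:
# window = (X * T) / 8, with X the rate in bps and T the latency in seconds.

def tcp_window_bytes(rate_bps, latency_s):
    """Minimum receive-window size in bytes to keep the link full."""
    return rate_bps * latency_s / 8

def packets_per_second(rate_bps, frame_bytes=1518):
    """Full-sized Ethernet frames per second a link can carry."""
    return rate_bps / 8 / frame_bytes

window = tcp_window_bytes(1_000_000_000, 0.0001)   # 1 Gbps, 100 us latency
pps = packets_per_second(1_000_000_000)

print(int(window), int(pps))  # 12500 82345
```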
Overhead to move data from the sender to the receiver: - The network latency is
calculated as the measurement time divided by the number of packets transferred;
hence, if the measurement time is 1 second, latency is the reciprocal of the packet rate
per second. Since the packet rate also reflects the server's capability to process packets,
besides latency, this paper uses this metric for the processing of data in small packet
traffic.
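The reciprocal relationship above is simple enough to sketch directly; the packet count used here is a hypothetical figure tied to the 1 Gbps example earlier, not a measurement from the paper:

```python
# Latency metric from above: with a 1-second measurement interval,
# average per-packet latency is the reciprocal of the packet rate.

def mean_latency_s(measurement_time_s, packets_transferred):
    """Average per-packet latency = measurement time / packets moved."""
    return measurement_time_s / packets_transferred

# Hypothetical: 82,345 packets transferred in a 1-second measurement window.
latency = mean_latency_s(1.0, 82_345)
print(f"{latency * 1e6:.1f} microseconds per packet")  # 12.1
```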

PERFORMANCE EVALUATION AND RESULTS: - The ns-2 simulator was used to
evaluate the proposed mechanism. In order to determine how well the proposed
algorithm performs under various conditions, three different scenarios were generated
by considering the following factors:
• Throughput
• Data loss
• Delay
Case 1: For larger packets

1. Throughput

No. of packets sent    No. of packets received
1270                   793
1670                   918
2070                   1168
2470                   1168

2. Data loss

No. of packets sent    Data loss
1270                   477
1670                   752
2070                   1027
2470                   1302

3. Delay

No. of packets sent    Delay
1270                   10
1670                   12
2070                   14
2470                   16

Case 2: For small packet transfer

1. Throughput

No. of packets sent    No. of packets received
1270                   773
1670                   898
2070                   1023
2470                   1148

2. Data loss

No. of packets sent    Data loss
1270                   477
1670                   752
2070                   1027
2470                   1577

3. Delay

No. of packets sent    Delay
1270                   10
1670                   12
2070                   14
2470                   16

Conclusion:-
1. By developing an efficient scheduling algorithm for small packet and bulk data
transfer, enough overhead can be removed from TCP/IP.

2. By appropriately offloading small parts of the protocol's functionality onto a NIC,
latency can be reduced and bandwidth utilization optimized for distributed computing
environments.

REFERENCES

 1. Marco Mellia, Michela Meo and Claudio Casetti, "TCP Smart Framing: A
Segmentation Algorithm to Reduce Latency", IEEE/ACM Transactions on
Networking.
 2. Internet sources
 3. Karpagam JCS, Vol. 3, Issue 2, Jan-Feb 2009