the sender by advertising a smaller window size. In cases where the receiver throttles the sender by advertising a smaller window size than what the sender assumes to be the ideal congestion window, the congestion window is constant during the congestion avoidance period (it does not increase by one MSS every RTT). In cases where the congestion window is very stable in the congestion avoidance period, Iperf Quick mode will report near perfect bandwidth values.
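The receiver-limited case above has a simple closed form: when a constant advertised window, rather than congestion, caps the sender, throughput is roughly window/RTT. A minimal sketch of this relationship (the function name is ours, not from the paper):

```python
def window_limited_throughput(window_bytes: int, rtt_seconds: float) -> float:
    """Approximate steady-state TCP throughput (bits/s) when a constant
    receiver-advertised window caps the sender: throughput ~= window / RTT."""
    return window_bytes * 8 / rtt_seconds

# A 300 KB window on a 10 ms RTT path sustains roughly 240 Mbit/s,
# so a link whose BW * Delay is about 300 KB is kept fully utilized.
rate = window_limited_throughput(300 * 1000, 0.010)
```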
[Figure 3: CW for link with BW * Delay ≈ 300 KB for RTT = 10 ms]

If the sender gets out of slow-start due to a congestion signal, then

    cur_cwnd = ss_cwnd / 2        (1)
[Figure: congestion window (KB) over time for RTT = 50 ms]

The congestion window will rise from cur_cwnd to m_cwnd by one MSS every RTT. This means that during the first RTT after slow-start, cur_cwnd
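The post-slow-start behavior just described — halve to cur_cwnd per equation (1), then add one MSS per RTT toward m_cwnd — can be sketched as a small simulation. Variable names follow the text; the MSS value is illustrative:

```python
MSS = 1460  # bytes; illustrative Ethernet-sized segment

def congestion_avoidance_trace(ss_cwnd, m_cwnd, rtts):
    """Simulate cwnd per RTT after a congestion signal ends slow-start:
    start at cur_cwnd = ss_cwnd / 2 (equation (1)), then grow linearly
    by one MSS per RTT until m_cwnd is reached."""
    cwnd = ss_cwnd / 2
    trace = [cwnd]
    for _ in range(rtts):
        cwnd = min(cwnd + MSS, m_cwnd)
        trace.append(cwnd)
    return trace

def rtts_to_recover(ss_cwnd, m_cwnd):
    """RTTs needed for cur_cwnd to climb back to m_cwnd (0 if already there)."""
    return max(0, (m_cwnd - ss_cwnd / 2) / MSS)
```

The linear climb is what makes a measurement taken too soon after slow-start understate the achievable bandwidth.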
[Figure 5: measurable bandwidth vs RTT (ms) for link bandwidths of 15 Kbps, 120 Kbps, 960 Kbps, 7.68 Mbps, 61.44 Mbps and 491.52 Mbps]
[Figure 6: measurable bandwidth vs RTT (ms) for the same link bandwidths, 15 Kbps to 491.52 Mbps]
plot the values of the bandwidth measurable as a percentage of the bandwidth achievable for various RTTs and link bandwidths. Figure 5 shows the bandwidths measurable by running the tests for 1 second after the end of slow-start, and Figure 6 shows the bandwidths measurable by running the tests for 10 seconds after the end of slow-start. We assume that we can measure the bandwidth very accurately if the time of measurement after slow-start is greater than or equal to id_time_after_ss. We see that the difference between the 1 second and 10 second measurements is not very high. For a bandwidth of about 1 Mbps it makes no difference at all; even at a bandwidth of 491 Mbps, the difference is less than 10%.

4 Implementation and Results

4.1 Implementation

Iperf runs in client-server mode. All the code changes in Iperf are confined to the client, and only the host running the client needs a Web100-enabled kernel. The remote hosts, running Iperf servers, can run the standard Iperf available for various operating systems. Using the Web100 user library (userland), we read the variables out of the Linux proc file system. The total code change in Iperf is less than 150 lines⁶. We maintain compatibility with all the modes in which Iperf previously ran. The Web100 polling is done in a separate thread, since Web100 calls are blocking.

⁶ The 150 lines include only the Iperf modifications; scripts written for data collection are not counted here.

4.2 Testing environment

For the testing environment, Iperf servers were deployed as part of the IEPM bandwidth measurement framework [7] on various nodes across the world. Iperf runs in client-server mode, and a version of Iperf which runs in a secure mode is available as part of Nettest [15]. We measured bandwidth from SLAC to twenty high-performance sites across the world. These sites included nodes spread across the US, Asia (Japan) and Europe (UK, Switzerland, Italy, France). The operating systems on the remote hosts were either Solaris or Linux. The bandwidths to these nodes
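One way to picture the client-side arrangement from Section 4.1 — Web100 variables polled in a dedicated thread because the Web100 calls block — is the sketch below. The proc path, file format, and polling interval are placeholders of ours, not the real Web100 layout:

```python
import threading
import time

def poll_web100(proc_path, samples, stop, interval=0.1):
    """Poll a (hypothetical) per-connection Web100 proc file at a fixed
    interval; runs in its own thread so a blocking read cannot stall
    the main Iperf sender loop."""
    while not stop.is_set():
        try:
            with open(proc_path) as f:  # the blocking call is isolated here
                samples.append(f.read())
        except OSError:
            pass  # connection entry may not exist yet
        time.sleep(interval)

samples, stop = [], threading.Event()
poller = threading.Thread(target=poll_web100,
                          args=("/proc/web100/...", samples, stop),
                          daemon=True)
poller.start()
# ... run the bandwidth test in the main thread ...
stop.set()
poller.join()
```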
varied from 1 Mbps to greater than 400 Mbps. A local host at SLAC which ran Linux 2.4.16 (Web100-enabled) with a … 3 processor was used for the client.

4.3 Results

We measured bandwidth by running Iperf for 20 seconds, since running for 10 seconds (the default) would not be adequate for a few long bandwidth-delay networks from SLAC (for example, from SLAC to Japan). We compared the 20 second measurements with the bandwidths obtained by running Iperf in Quick mode (1 second after slow-start). The Quick mode bandwidth estimates differed by less than 10% from the 20 second Iperf test in most cases, as shown in Figure 7⁷.

[Figure 8: Throughput and cwnd (SLAC->Caltech): instantaneous bandwidth (Mbps) and congestion window (KB) vs time (ms); max TCP window = 16 MB]
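A rough model makes the measurement-duration numbers above plausible. Assume throughput tracks cwnd/RTT, with cwnd climbing linearly from cur_cwnd to m_cwnd at one MSS per RTT; the fraction of the achievable bandwidth captured by a measurement of length t is then the time-averaged cwnd over t divided by m_cwnd. This is a back-of-envelope sketch of ours, not the paper's exact computation:

```python
def measurable_fraction(cur_cwnd, m_cwnd, mss, rtt, t):
    """Fraction of achievable bandwidth captured when measuring for
    t seconds after slow-start (windows in bytes, times in seconds)."""
    ramp_time = (m_cwnd - cur_cwnd) / mss * rtt  # id_time_after_ss
    if t >= ramp_time:
        # linear ramp, then flat at m_cwnd for the remainder of the test
        avg = ((cur_cwnd + m_cwnd) / 2 * ramp_time
               + m_cwnd * (t - ramp_time)) / t
    else:
        end = cur_cwnd + mss * (t / rtt)  # still ramping when we stop
        avg = (cur_cwnd + end) / 2
    return avg / m_cwnd
```

Under this model the fraction approaches 1 as t grows, and lengthening a test from 1 to 10 seconds buys only a few percent once t exceeds the ramp time — consistent with the small differences reported above.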
[Figure 7: Bandwidth (Mbps) vs node number]

[Figure 9: Throughput and cwnd (SLAC->Japan)]