
Non Cooperative Path Characterization using Packet Spacing Techniques
Supriyo Chakraborty, Bikramjit Walia and D. Manjunath
Department of Electrical Engineering, IIT-Bombay, Mumbai INDIA
supriyo, bikram, dmanju@ee.iitb.ac.in
Abstract: Non-cooperative network path measurements assume that there is no cooperation from the other end of a path. Such methods have to cleverly exploit standard protocol options to initiate probing traffic. Performing measurements in this framework for the reverse path is particularly challenging. We describe the design of iPathmeter2, a non-cooperative tool to measure capacity and available bandwidth on the reverse path. The probing traffic of iPathmeter2 consists of a chirp of packet-pairs with appropriate spacing. The spacing between the ACKs from the measuring host and the size of the advertised receiver window shape the transmission pattern from the remote host to the measuring node. Thus, iPathmeter2 adapts the existing cooperative packet spacing techniques for reverse path, non-cooperative measurements. A utilization-based estimator for the available bandwidth is also described. iPathmeter2 is validated by performing measurements under controlled conditions. Results from live tests on the Internet are also reported.
I. INTRODUCTION
End-to-end bandwidth estimation on a network path is a useful component of network measurement and monitoring tools. Example
applications of path bandwidth estimates include peer-to-peer
applications, service level monitors and dynamic server selec-
tion [1]. Two bandwidth-related metrics are usually defined for a network path: bottleneck capacity and available bandwidth.
In this paper we describe a non-cooperative technique to obtain
these performance measures for a path from only one end
of the path. An obvious application of such a single-ended measurement method is the monitoring of Internet service quality by an end user, e.g., the bandwidth received by the user's node from a popular web server, where the user typically does not have access to the far end of any Internet path. Further, we remark that for most users the service quality on a few Internet paths will dominate the Internet service quality that they see. Hence, by measuring the characteristics of these paths, a user can quantify the Internet service quality received.
An important requirement of a path bandwidth estimation
technique is that it be able to measure forward and reverse
path characteristics from the measuring node. Most techniques
available in the literature can be easily extended to obtain
the non-cooperative forward path characteristics while reverse
path measurements require clever techniques. We describe
one such technique in this paper. Since we adapt existing
techniques for reverse path, non-cooperative measurement, a
brief survey of the cooperative measurement techniques is
given in the next section. We discuss generic issues in the
design of non cooperative estimators in Section III and the
design issues that need to be addressed when developing such
a tool for the public Internet in Section IV. These design issues
are used in iPathmeter2 and Section V describes its basic
design. Some experimental results are presented in Section VI.
II. COOPERATIVE BANDWIDTH ESTIMATION
Many estimators are available to estimate the bandwidth
metrics [1]-[11]. Almost all of these estimators work in a cooperative framework, which requires access to both ends of the path being measured. These estimators work as follows: the sender transmits probing packets according to a specified pattern, and the receiver node timestamps these packets and obtains the deviation from the pattern. These deviations are used in the bandwidth estimation. Four basic packet transmission patterns are used by these estimators: (1) packet pair dispersion, (2) variable packet size probing, (3) self-induced congestion and (4) trains of packet pairs.
In packet pair dispersion (PPD) based bandwidth estimators,
two packets (a packet-pair) are transmitted back-to-back to
cause them to queue together at the bottleneck link. If the links
were rate-based servers then, when the packets arrive at the
destination, the dispersion will be the same as the dispersion
when they exit the bottleneck link. Thus, if L is the length of the probing packet and the dispersion observed is D, the path capacity C can be estimated as C = L/D. A modification of the basic packet pair technique is packet train probing, which sends multiple back-to-back packets. The dispersion of the packet train is shown to be asymptotically equal to the available capacity even in the presence of cross traffic. Pathrate [4], IGI [8], Cprobe [1] and Spruce [11] use this technique.
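As a concrete illustration (our own minimal sketch, not code from any of the cited tools; the function name and the min-based noise filtering are assumptions), a capacity estimate from a set of measured packet-pair dispersions can be computed as follows:

```python
def ppd_capacity_bps(dispersions_s, packet_len_bytes):
    """Packet-pair dispersion estimator: C = L / D.

    dispersions_s: packet-pair dispersions measured at the receiver, in seconds.
    packet_len_bytes: size L of each probe packet.
    The minimum dispersion is used as a crude filter against cross-traffic
    induced expansion; real tools (e.g., Pathrate) use mode-based filtering.
    """
    if not dispersions_s:
        raise ValueError("no dispersion samples")
    d_min = min(dispersions_s)
    return packet_len_bytes * 8 / d_min   # capacity in bits per second

# Example: 1500-byte probes and a 6 ms minimum dispersion give 2 Mbps.
print(ppd_capacity_bps([0.0062, 0.0060, 0.0071], 1500) / 1e6, "Mbps")
```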
In variable packet size (VPS) probing, the capacity of each
hop along a path is measured by exploiting the fact that the
number of links that a packet traverses on a path can be limited
by the TTL field in the IP packet header. On reception of a packet with an expired TTL, a router responds with an ICMP error message to the sender node. By varying the TTL field within a packet, the minimum RTT (round-trip time) for each hop along the path is obtained as a function of the packet size. The capacity of each link on the path is obtained from the estimate of the capacity of the preceding link and a plot of the minimum RTT (for packets returned from the receiver side of the link) against the packet size. Pathchar
[2], Clink [3] and Pchar [10] use this technique.
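For illustration (hypothetical data and helper names, not taken from Pathchar or Clink), the per-hop capacity follows from the slope of minimum RTT versus probe size: if the cumulative slope up to hop i is s_i seconds per byte, the capacity of link i is roughly 8/(s_i - s_{i-1}) bits per second.

```python
def slope_s_per_byte(sizes_bytes, min_rtts_s):
    """Least-squares slope of minimum RTT (s) versus probe size (bytes)."""
    n = len(sizes_bytes)
    mx = sum(sizes_bytes) / n
    my = sum(min_rtts_s) / n
    num = sum((x - mx) * (y - my) for x, y in zip(sizes_bytes, min_rtts_s))
    den = sum((x - mx) ** 2 for x in sizes_bytes)
    return num / den

def link_capacity_bps(slope_prev_hop, slope_this_hop):
    """Capacity of link i from the cumulative slopes up to hops i-1 and i."""
    return 8.0 / (slope_this_hop - slope_prev_hop)

# Hypothetical minimum RTTs for TTL=1 and TTL=2 probes of various sizes.
sizes = [200, 600, 1000, 1400]
rtt_hop1 = [0.0013, 0.0029, 0.0045, 0.0061]   # slope 4e-6 s/byte
rtt_hop2 = [0.0024, 0.0056, 0.0088, 0.0120]   # extra 4e-6 s/byte on link 2
s1 = slope_s_per_byte(sizes, rtt_hop1)
s2 = slope_s_per_byte(sizes, rtt_hop2)
print(link_capacity_bps(0.0, s1) / 1e6, link_capacity_bps(s1, s2) / 1e6)  # ~2.0 2.0 Mbps
```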
In the third technique to estimate the bottleneck capacity and
available bandwidth, a binary search kind of approach is used.
The goal here is to build up a queue of the probing packets at
the bottleneck link, thereby causing a self induced congestion
(SIC) at the link, and infer its bandwidth from the queueing
signature. By varying the packet rate in the packet-train at
the source, the available bandwidth can be estimated as the
rate for which the queue length begins to increase. The inter-
packet spacing corresponds to the transmission rate. Pathload
[9] and Pathchirp [6] are examples of this approach.
A fourth approach uses a train of packet-pairs (TOPP) [5]
in which the spacing between the packets in the train is
progressively decreased. When the spacing becomes less than
the service time of the bottleneck link, the second probing
packet is queued at the bottleneck link and the spacing between
the packets at the output of the link starts to increase. Thus
the packet spacings in the received train can be used as an
estimator of the available bandwidth.
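A minimal sketch of the TOPP idea (our own simplified illustration; the actual method in [5] uses a regression over offered and measured rates): the available bandwidth can be read off as the highest probing rate at which the received spacing has not yet expanded beyond the sent spacing.

```python
def topp_available_bw_bps(probe_rates_bps, sent_gaps_s, recv_gaps_s, tol=0.05):
    """Return the largest probing rate whose received spacing is not expanded.

    The three lists are aligned, one entry per probing rate in the train,
    with rates assumed sorted in increasing order. A pair is considered
    'expanded' if the received gap exceeds the sent gap by more than tol.
    """
    available = None
    for rate, s, r in zip(probe_rates_bps, sent_gaps_s, recv_gaps_s):
        if r <= s * (1.0 + tol):
            available = rate        # no queueing signature yet
        else:
            break                   # spacing starts to increase: stop
    return available

# Example: spacing expands once the probing rate exceeds 1.5 Mbps.
rates = [0.5e6, 1.0e6, 1.5e6, 2.0e6]
sent  = [0.024, 0.012, 0.008, 0.006]
recv  = [0.024, 0.012, 0.008, 0.009]
print(topp_available_bw_bps(rates, sent, recv) / 1e6, "Mbps")
```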
Observe from the above discussion that, except for Pathchar
and Clink, the other bandwidth estimators are designed for a
cooperative framework. Although Pathchar and Clink do not
assume a cooperative framework, they depend on the prompt
generation of ICMP messages, which, in these days of DoS attacks, is not a reasonable assumption. Further, Pathchar and
Clink can only measure forward path characteristics.
III. NON COOPERATIVE BANDWIDTH ESTIMATION
A non-cooperative tool has to cleverly exploit standard
protocol options to initiate probing traffic in the network according to a pattern specified by the measurement algorithm, e.g., like that used by Pathload or Pathchirp. Note, though, that this pattern has to be initiated in the direction in which the measurements are to be carried out; for reverse path characterization, the required pattern must therefore be initiated at the far end. We exploit the features of TCP, and the fact that
at the other end of a path, there will likely be open public TCP
ports, e.g., HTTP and FTP, in the design of a non-cooperative,
reverse path bandwidth estimator. The TCP feature that we
exploit is as follows. Recall that the TCP sending window is
affected by the acknowledgments (ACKs) from the receiver
and by the advertised receiver window (rwin). The rate at
which ACKs are sent by the receiver (the spacing between them) and the size of rwin are used to shape the incoming traffic as required by the measurement algorithm.
IV. DESIGN ISSUES
It is possible to emulate any of the cooperative algorithms in the non-cooperative framework. For illustration, consider using packet chirps as in Pathchirp, where the receiver sends exponentially spaced ACKs with rwin equal to one MTU-sized packet. One would then expect the remote host to send data packets with inter-packet spacing similar to the ACK spacing, and hence emulate a chirp from the far end. In our experiments we could initiate a chirp (exponentially spaced packets) from the far end on the local network, but we could not achieve it over the WAN. The problem is that, because of the WAN path delays, the receiver may not be able to send a sufficient number of ACKs
with the required spacing: enough packets have not yet been received to allow ACKs to be sent at a high rate. See Fig. 1 for an illustration.

Fig. 1. S1, S2 and S3 are the exponentially decreasing spacings between the ACKs sent from the measuring node. By the time the spacing should be S4, there is no packet left to ACK at the local host; the ACK that should have been sent according to the algorithm, but could not be, is shown as a dotted line. Thus, long chirps cannot be initiated from the far end of the path.

Fig. 2. The buffering phase is shown on the left (a) and the probing phase on the right (b). The ACKs sent in the probing phase correspond to the packets captured during the buffering phase. Here S1, S2, S3 and S4 are the exponentially decreasing spacings between the packets sent from the local host to the remote host. As the spacing at the local host decreases (i.e., the rate of probing increases), the packets from the remote host start getting queued.

To overcome this limitation, the following two-phase approach, illustrated in Fig. 2, was attempted.
1) Buffering Phase: We first buffer a sufficient number of packets by not acknowledging them. These will be ACKed in the probing phase as per the requirements of the measuring algorithm. After establishing the TCP connection with the remote host, we gradually increase rwin over the first few ACKs. When rwin is doubled, the remote host responds with two back-to-back packets. The last of the two packets is acknowledged with rwin equal to four times the original rwin. The remote host now responds with four back-to-back packets. These four packets are buffered, to be ACKed in the probing phase (as explained below). It is important to ensure that the number of packets so buffered does not cause retransmissions by the remote host, as this would reduce cwnd. See Fig. 2 for an illustration of these events.
2) Probing Phase: The measuring node transmits ACKs for
the buffered packets with spacings and rwin values that
would cause the remote host to transmit new packets
resembling a packet chirp. This is illustrated in Fig. 2.
While experimenting with the above two-phase approach, we found that the probing phase did not perform as expected. This was because, even when rwin and (our estimate of) cwnd were such as to allow these transmissions, many hosts would not transmit new packets until all their previous transmissions were acknowledged. This, we believe, is due to the use of Nagle's algorithm [12] on the remote host.
We also encountered the following unexpected behavior
with respect to a sender's response to rwin. Many hosts would send two packets, each of size equal to half of rwin, for each ACK sent. This behavior was seen for all values of rwin. However, if the ACK spacing was reduced, we received packets of size equal to rwin, or equal to the path MTU when rwin was greater
than the path MTU. The reason for this is not clear.
The above experience leads us to the following outline of
the final design. The packets from the remote host should be shaped so that they resemble a chirp of packet-pairs (COPP), in which the spacing between the packets in each packet-pair is successively reduced. This chirp pattern, to be initiated by the remote host, has similarities to both TOPP and
Pathchirp. We reiterate that ours is a non-cooperative tool and
the challenge is to be able to initiate a packet pair from the
remote host at any time within the experiment by exploiting the
current TCP implementation. We have successfully achieved
this in iPathmeter2.
V. IPATHMETER2: DESIGN AND IMPLEMENTATION
We make the following reasonable assumptions in the
design of iPathmeter2.
1) An HTTP daemon is running on a remote host at the
far-end of the path.
2) A file of a size that will sustain the burst of probe packets is available for download via HTTP at the remote host.
3) As in Clink and Pathchar, we assume that the ACKs do not experience any congestion.
The implementation details are as follows:
1) Initialize by setting up firewall rules in the INPUT chain of iptables to block incoming probe packets from reaching the kernel TCP stack directly. This prevents the kernel TCP stack from sending ACKs for these probe packets. Note that the probe packets arrive as HTTP packets from the remote host, with the destination port the same as that used by iPathmeter2. (An illustrative rule is sketched after this list.)
2) We then bind to a randomly chosen port on the local
host, establish a TCP session with the remote host
using the 3-way handshake, and then initiate an HTTP
connection with it.
3) The two-phase measurement described in the previous
section is then initiated.
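For illustration only (the paper does not give the exact rule; the addresses, port number and the use of Python's subprocess module here are our assumptions), a firewall rule of the following form keeps the kernel from responding to the probe packets:

```python
import subprocess

def block_kernel_acks(remote_ip, local_port):
    """Drop the remote host's HTTP packets before they reach the kernel TCP
    stack, so the kernel does not generate its own responses (e.g., ACKs) for
    them; iPathmeter2 still sees them via its packet capture. Needs root."""
    rule = ["iptables", "-A", "INPUT",
            "-p", "tcp",
            "-s", remote_ip,            # probe packets come from the remote host
            "--sport", "80",            # HTTP source port
            "--dport", str(local_port), # port used by iPathmeter2
            "-j", "DROP"]
    subprocess.run(rule, check=True)

# Hypothetical usage: remote web server 202.41.97.50, local port 40000.
# block_kernel_acks("202.41.97.50", 40000)
```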
The code for iPathmeter2 uses two independent processes: one to send appropriately spaced ACKs and the other to receive the packets from the remote host. Clearly, neither of these should block, because the ACK spacings have to be correct and the received packets must be timestamped accurately. Thus we need to poll the NIC for received packets in a non-blocking manner. In iPathmeter2 this is achieved using a system-independent library called libpcap, which provides a portable framework for low-level network monitoring. We use it to poll the NIC regularly to capture the probe packets sent by the remote host. This is explained in detail below.
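iPathmeter2 itself does this through libpcap; as a rough stand-in, the following Linux-only sketch (our own, using a raw AF_PACKET socket in non-blocking mode) shows the shape of such a capture loop:

```python
import socket
import time

ETH_P_ALL = 0x0003  # capture all protocols (Linux)

def open_capture(ifname="eth0"):
    """Open a non-blocking raw socket so the receive loop never stalls the
    process that is pacing the ACKs. Requires root privileges."""
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((ifname, 0))
    s.setblocking(False)
    return s

def poll_once(sock, timestamps):
    """Poll the NIC once; timestamp any probe packet that has arrived."""
    try:
        frame = sock.recv(2048)
    except BlockingIOError:
        return              # nothing waiting; go back to pacing ACKs
    timestamps.append((time.time(), len(frame)))
```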
The Buffering Phase: Recall that in this phase we accu-
mulate packets by not acknowledging them. This is achieved
as follows. Initially, send an ACK with rwin = 1500 bytes.
The remote host responds with two back-to-back packets of
750 bytes each. Do a cumulative acknowledgment by sending
an ACK for the second packet, but changing rwin to four
times its original value to account for the fact that it can now
accept four MTU sized packets. The remote host responds with
a series of four back-to-back packets. We did, however, observe variation in the number of back-to-back packets received from different sites. This may be attributed to the various burst mitigation techniques adopted in TCP implementations [13]. In some Linux implementations a variable MAXBURST, normally set to 3, is used for this purpose. We must also mention that this does not affect our
experiment, because we require only about three packets in
our buffer to be able to continue with the probing phase. The
buffering phase is illustrated in Fig. 3.
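A minimal sketch of the buffering-phase control logic described above (illustrative only; send_ack() is a placeholder for the tool's raw-socket ACK transmission, and the byte counts simply follow the example values in the text):

```python
def send_ack(acked_bytes, rwin):
    """Placeholder: in iPathmeter2 this crafts and transmits a raw TCP ACK
    acknowledging acked_bytes of data and advertising a window of rwin."""
    print(f"ACK covering {acked_bytes} bytes, rwin={rwin}")

def buffering_phase():
    """Accumulate un-ACKed packets by acknowledging only the last packet of
    each burst while growing rwin (values as in the description above)."""
    received = 0
    send_ack(received, 1500)    # remote replies with two 750-byte packets
    received += 2 * 750
    send_ack(received, 6000)    # ACK the 2nd packet, rwin = 4 x 1500
    # The remote now sends up to four back-to-back packets (MAXBURST may cap
    # this at 3); these are captured and left unacknowledged for the probing
    # phase.
```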
The Probing Phase: In this phase we do cumulative
ACKing for the packets. In the buffering phase, the sender's cwnd increases to 3 MSS or more. In this phase we ensure that the cwnd does not decrease in size. Since the amount of data received is always limited by the minimum of cwnd and rwin, we use rwin to control the data being received. At the start
of this phase the last packet of the packet train accumulated
in the previous phase is acknowledged and rwin = 1500 is
advertised. This opens up the cwnd at the sender side and it
sends us a packet of size rwin = 1500.
We use duplicate ACKs (DUPACKs) to generate an ACK-
pair which in turn will generate a corresponding packet-pair
from the remote host. Here we exploit the property that TCP
enters into a Fast Retransmit state only on the reception of 3
DUPACKs [14]. By using only one DUPACK, we ensure that
the remote host does not enter into a Fast Retransmit state.
This eliminates the possibility of a decrease in cwnd. The
DUPACK sent carries an rwin increased to twice its current value, causing the remote host to send a new packet. These two packets constitute a packet pair. From our assumptions,
these packet pairs are injected into the network at the same
rate at which their respective ACKs are sent. The above two
data packets (forming the packet pair) are received by the
measuring host and timestamped. The last of these two packets
is used for generating the next ACK pair and, correspondingly, the next packet-pair from the remote host. The time difference between the original ACK and the duplicate ACK is decreased in each new iteration, and thus the rate of probing is increased. This is illustrated in Fig. 3.

Fig. 3. iPathmeter2: the buffering phase is shown on the left and the probing phase on the right. S_i are the spacings between the ACK pairs at the measuring node and R_i are the spacings between the corresponding packet-pairs as received at the measuring node. S3 is less than the service time on the bottleneck link of the path; hence R3 > S3.
As we decrease the spacing between the packets of a pair, it can happen that this spacing corresponds to a probing rate greater than the available bandwidth on the path. In this case, the second packet of the pair gets queued up behind the first packet at the bottleneck link, and the spacing between them is increased at the output. Thus the spacing between this
packet pair as observed at the local host will be more than
what it was between the corresponding ACK pair.
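The probing-phase pacing can be sketched as follows (illustrative only, reusing the send_ack() placeholder from the buffering-phase sketch; the number of pairs, initial gap and reduction factor are assumptions, not the tool's actual parameters):

```python
import time

def probing_phase(last_acked_bytes, num_pairs=8, first_gap_s=0.020, factor=0.7):
    """Emit a chirp of ACK pairs: each pair is one cumulative ACK followed,
    after a gap, by a single DUPACK with rwin doubled. Using one DUPACK
    (not three) avoids triggering fast retransmit at the remote host. The gap
    within the pair shrinks by 'factor' each iteration, so successive
    packet-pairs probe at increasing rates."""
    gap = first_gap_s
    acked = last_acked_bytes
    for j in range(num_pairs):
        send_ack(acked, 1500)   # remote sends one packet of size rwin = 1500
        time.sleep(gap)         # intra-pair spacing S_j
        send_ack(acked, 3000)   # DUPACK, rwin doubled: remote sends one more
        # The local host timestamps the two returning packets; their spacing
        # R_j is compared against S_j by the inference engine.
        acked += 2 * 1460       # advance past the packet-pair just elicited
        gap *= factor           # next pair probes at a higher rate
        time.sleep(0.1)         # let the path drain between pairs (assumed)
```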
The experiment is repeated a number of times. After col-
lecting a sufficient number of samples, the TCP session with
the remote host is closed. The data collected is fed to the
inference engine described below.
Estimator: As can be seen from above, there is no correla-
tion between the packet pairs corresponding to the different
ACK pairs and packets from different packet pairs do not
queue up at the bottleneck link together. Hence we cannot
use the inference engine of SIC-based bandwidth estimators.
The inference engine of iPathmeter2 estimates both the capacity and the available bandwidth of the path. We consider capacity first. Assume that the remote host initiates N chirps
of J packet-pairs and that each packet has L bits. At the
measuring node, iPathmeter2 obtains the packet-pair spacings
for all the packet-pairs in the chirp. Recall that each packet-
pair in a chirp corresponds to a different probing rate. Let d_{j,n} denote the spacing measured at the local host for the j-th packet-pair in the n-th chirp. Define D_j = min_{n=1,...,N} d_{j,n}, i.e., the minimum spacing at the local end of the reverse path for the j-th packet-pair across the chirps. It is easy to see that if, for packet-pair j, the spacing is less than the transmission time on the bottleneck link, then D_j will approach the packet transmission time on the bottleneck link, i.e., D_j approaches a constant for increasing j (the packet-pair spacing decreases with increasing j). Denote this constant by D. The bottleneck capacity is then estimated as C = L/D.
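A sketch of this capacity estimator over the collected spacings (our own code; d[n][j] is assumed to hold the measured spacing of the j-th packet-pair in the n-th chirp, in seconds, and the plateau value is taken from the last few pairs):

```python
def copp_capacity_bps(d, packet_len_bytes=1460):
    """Estimate bottleneck capacity from a chirp-of-packet-pairs experiment.

    d: list of N chirps, each a list of J packet-pair spacings (seconds),
       ordered so that the intended spacing decreases with j.
    D_j = min over chirps of d[n][j]; for large j, D_j flattens at the
    bottleneck transmission time D, and C = L / D.
    """
    J = len(d[0])
    D = [min(chirp[j] for chirp in d) for j in range(J)]
    D_hat = min(D[-3:]) if J >= 3 else D[-1]   # simple plateau estimate
    return packet_len_bytes * 8 / D_hat

# Hypothetical spacings (s) from three chirps of five pairs each.
d = [[0.030, 0.020, 0.012, 0.0061, 0.0060],
     [0.031, 0.021, 0.013, 0.0060, 0.0059],
     [0.030, 0.020, 0.012, 0.0059, 0.0058]]
print(copp_capacity_bps(d) / 1e6, "Mbps")   # about 2 Mbps
```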
The available bandwidth is estimated by considering the
fraction of packets that are queued at the bottleneck link.
For this we use the estimate of the capacity from above. The
local host sends ACK pairs to the remote host with spacing
of 2L/C (corresponding to a probing rate of C/2), which in
turn will generate the packet pairs from the remote host with
the above spacing. In each pair, if the first packet is queued
at the bottleneck link, the packet spacing will be less than the
2L/C that we started with. Thus we can estimate the number
of packets that are queued. The utilization of the bottleneck
link, U, is estimated as

U = N_queued / (N_queued + N_not_queued),

where N_queued and N_not_queued are the numbers of probe packets that were, respectively, queued and not queued at the bottleneck link.
We will call this the utilization-based estimate (UBE) of the available bandwidth. An estimator similar to that of Spruce [11] is also used in iPathmeter2. Here the ACKs from the measuring node are spaced at L/C, corresponding to a probing rate of C, and the spacing between the received packets is used in Eqn. 2 of [11]. This estimator is denoted by ESS.
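A sketch of the UBE computation (our own illustration; the final conversion of the utilization U into an available bandwidth C(1 - U) is an assumption consistent with the controlled-experiment numbers in Table I, not an equation stated above):

```python
def ube_available_bw_bps(pair_spacings_s, capacity_bps, packet_len_bytes=1460,
                         tol=0.05):
    """Utilization-based estimate of available bandwidth.

    pair_spacings_s: received packet-pair spacings when ACK pairs were sent
    with spacing 2L/C (probing rate C/2). A received spacing noticeably
    smaller than 2L/C indicates that the first packet of the pair was queued
    at the bottleneck link.
    """
    sent_gap = 2 * packet_len_bytes * 8 / capacity_bps
    queued = sum(1 for s in pair_spacings_s if s < sent_gap * (1.0 - tol))
    not_queued = len(pair_spacings_s) - queued
    u = queued / (queued + not_queued)      # bottleneck utilization estimate
    return capacity_bps * (1.0 - u)         # assumed: available = C * (1 - U)

# Example: 3 of 12 pairs show a compressed spacing -> U = 0.25, so about
# 1.5 Mbps available on a 2 Mbps bottleneck.
gaps = [0.0117] * 9 + [0.009] * 3
print(ube_available_bw_bps(gaps, 2e6) / 1e6, "Mbps")
```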
VI. EXPERIMENTAL RESULTS
We first show results from experiments under controlled conditions to validate iPathmeter2. The testbed setup is shown in Fig. 4. Cross traffic of UDP packets, generated as a Poisson process of a specified rate, is introduced on the path being measured. A sample trace from this experiment is shown in Fig. 5, with timestamps obtained on the respective machines using tcpdump.

Fig. 4. Testbed setup. All the WAN links are of 2 Mbps.

All the links have a capacity of 2 Mbps. The utilization of all the links is low, i.e., there is very little cross traffic on this network path. Table I shows the estimates of the capacity and of the available bandwidth (using both UBE and ESS). The available bandwidth estimates from Pathchirp are also shown; these are obtained by averaging all the per-chirp estimates from Pathchirp [6].

We now report some results from measurements on the Internet. iPathmeter2 was used to estimate the bandwidth that the IIT-Bombay network receives from web servers. Several web servers were probed, and the estimates of capacity and available
bandwidth are provided in Table II. A comparison is not possible, as Pathchirp and the other tools discussed above all work in cooperative setups.

Fig. 5. Timing diagram of the data flow between the remote host (202.41.97.50) and the local host (202.141.154.131), covering the buffering and probing phases. The two clocks are not synchronized: at the local host, times are referenced to the transmission of the first SYN packet, while at the remote host they are referenced to the reception of this SYN packet. The times shown are in minutes.
VII. DISCUSSION AND FUTURE WORK
There were some observations of unexpected behavior from
our testing of iPathmeter2 on the Internet. Not all hosts in the
Internet behaved as expected with respect to ACK spacing and
varying rwin. When rwin = 1500 is advertised, some of the servers respond with MTU-sized packets while others send two back-to-back packets of 750 bytes each; iPathmeter2 adapts to such behavior. More surprisingly, a few of the web servers, when probed, respond with packets of a different size for each ACK pair. The methods that we develop must work in the real world with a variety of possibly non-standard TCP implementations, and iPathmeter2 has been designed with this objective. With extensive testing we expect to be able
to discover more unexpected behaviors and adapt to them.
TABLE I
BANDWIDTH ESTIMATES UNDER CONTROLLED CONDITIONS

Cross traffic (Mbps) | Capacity (Mbps) | Available bandwidth, UBE (Mbps) | Available bandwidth, ESS (Mbps) | Pathchirp estimate (Mbps)
0    | 1.98 | 1.94 | 1.91 | 2.7
0.25 | 1.98 | 1.72 | 1.66 | 1.75
0.50 | 1.98 | 1.58 | 1.51 | 1.46
1.0  | 1.98 | 1.25 | 1.26 | 1.32
TABLE II
BANDWIDTH ESTIMATES ON THE GLOBAL INTERNET

Web server     | Capacity (Mbps) | Available bandwidth (Mbps)
Yahoo          | 1.98  | 1.53
Rediff         | 1.98  | 1.18
Nokia          | 2.015 | 1.23
Shockwave.com  | 1.99  | 0.62
HPSR.com       | 2.01  | 1.03
Stanford.edu   | 1.96  | 1.16
Infosys.com    | 0.04  | 0.016
Finally, we remark that iPathmeter2 is a network-friendly tool and does not significantly alter the network load.
ACKNOWLEDGMENTS
The authors thank K. Anil Kumar for many perceptive
remarks and pointers to literature.
REFERENCES
[1] R. Carter and M. Crovella, "Server selection using dynamic path characterization in wide-area networks," in Proc. of IEEE INFOCOM, Kobe, Japan, Apr. 1997, pp. 1014-1021.
[2] V. Jacobson. (1997, Apr.) Pathchar: A tool to infer characteristics of Internet paths. [Online]. Available: ftp://ftp.ee.lbl.gov/pathchar/
[3] A. B. Downey, "Using pathchar to estimate Internet link characteristics," in Proc. of ACM SIGCOMM, Sept. 1999, pp. 222-223.
[4] C. Dovrolis, P. Ramanathan, and D. Moore, "What do packet dispersion techniques measure?" in Proc. of IEEE INFOCOM, Apr. 2001, pp. 905-914.
[5] B. Melander, M. Bjorkman, and P. Gunningberg, "A new end-to-end probing and analysis method for estimating bandwidth bottlenecks," in Proc. of IEEE GLOBECOM, San Francisco, CA, USA, Nov. 2000.
[6] V. Ribeiro, R. Riedi, R. Baraniuk, J. Navratil, and L. Cottrell, "pathChirp: Efficient available bandwidth estimation for network paths," in Proc. of Passive and Active Measurements (PAM) Workshop, Apr. 2003.
[7] (2004, Dec.) CAIDA. [Online]. Available: http://www.caida.org/tools/taxonomy
[8] N. Hu and P. Steenkiste, "Evaluation and characterization of available bandwidth probing techniques," IEEE Journal on Selected Areas in Communications, vol. 21, no. 6, pp. 879-894, Aug. 2003.
[9] M. Jain and C. Dovrolis, "End-to-end available bandwidth: Measurement methodology, dynamics, and relation with TCP throughput," in Proc. of ACM SIGCOMM, Aug. 2002, pp. 295-308.
[10] B. A. Mah. (1999, Feb.) pchar: A tool for measuring Internet path characteristics. [Online]. Available: http://www.employees.org/~bmah/Software/pchar/
[11] J. Strauss, D. Katabi, and F. Kaashoek, "A measurement study of available bandwidth estimation tools," in Proc. of ACM SIGCOMM, 2003, pp. 39-44.
[12] J. Nagle, "Congestion control in IP/TCP internetworks," RFC 896, Jan. 1984.
[13] S. Floyd, "HighSpeed TCP for large congestion windows," RFC 3649, Dec. 2003.
[14] W. Stevens, "TCP slow start, congestion avoidance, fast retransmit, and fast recovery algorithms," RFC 2001, Jan. 1997.
