
The Transport Layer

application

Provides a service to the


application layer
Obtains a service from the
network layer

transport
network
link
physical

The Transport Layer


Principles behind transport layer services

multiplexing/demultiplexing
reliable data transfer
flow control
congestion control

Transport layer protocols used in the Internet

UDP: connectionless
TCP: connection-oriented
TCP congestion control

Transport services and protocols

Provide logical communication


between application processes
running on different hosts
Transport protocols run on end
systems
send side: break app messages
into segments, pass to network
layer
recv side: reassemble
segments into messages, pass
to app layer
End-to-end transport between
sockets
network layer provides end-to-end delivery between hosts

Internet transport-layer protocols

TCP
connection-oriented, reliable
in-order stream of bytes
congestion control, flow control, connection setup
users see stream of bytes; TCP breaks it into segments
UDP
unreliable, unordered
users supply chunks to UDP,
which wraps each chunk into
a segment / datagram
Both TCP and UDP use IP,
which is best-effort: no delay or
bandwidth guarantees

Multiplexing
Goal: put several transport-layer connections over
one network-layer connection
Demultiplexing at rcv host:
delivering received segments
to correct socket

Multiplexing at send host:


gathering data from multiple
sockets, enveloping data with
header (used for demultiplexing)

Demultiplexing

Host receives IP datagrams


each datagram has source IP
address, destination IP
address
each datagram carries one
transport-layer segment
each segment has source,
destination port numbers
Host uses IP addresses and
port numbers to direct
segment to the appropriate
socket

TCP/UDP segment format (32 bits wide):
  source port # | dest port #
  other header fields
  application data (message)

Connectionless (UDP) demultiplexing


Create sockets with port numbers
DatagramSocket mySocket1 = new DatagramSocket(9911);
DatagramSocket mySocket2 = new DatagramSocket(9922);

UDP socket identified by two-tuple:


(destination IP address, destination port number)

When host receives UDP segment

checks destination port number in segment


directs UDP segment to socket with that port number

IP datagrams with different source IP addresses and/or
source port numbers, but same dest address/port,
are directed to the same socket
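To make the two-tuple rule concrete, a minimal sketch (the port 9911 and the printing are assumptions, not from the slides): every datagram addressed to that destination port is handed to the same DatagramSocket, whatever its source IP/port.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Sketch: a UDP receiver demultiplexes on destination port only, so datagrams
// from any (source IP, source port) arrive on the same socket bound to that port.
public class UdpDemux {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(9911)) {   // bind dest port 9911
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket pkt = new DatagramPacket(buf, buf.length);
                socket.receive(pkt);    // could come from any source IP/port
                System.out.println("from " + pkt.getAddress() + ":" + pkt.getPort()
                        + " -> " + new String(pkt.getData(), 0, pkt.getLength()));
            }
        }
    }
}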

Connection-oriented (TCP) demultiplexing


TCP socket identified by four-tuple:
(source IP address, source port number, destination IP
address, destination port number)

Receiving host uses all four values to direct segment to the


correct socket

Server host may support many simultaneous TCP


sockets

each socket identified by own 4-tuple

Web servers have different sockets for each


connecting client

non-persistent HTTP has a different socket for each request
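A rough sketch of the four-tuple rule (the listening port 6789 is an assumption): accept() returns a fresh Socket per connecting client, and arriving segments are steered to the socket whose 4-tuple they match.

import java.net.ServerSocket;
import java.net.Socket;

// Sketch: a TCP server gets one connected socket per client; each is
// identified by its own (src IP, src port, dst IP, dst port) 4-tuple.
public class TcpDemux {
    public static void main(String[] args) throws Exception {
        try (ServerSocket welcomeSocket = new ServerSocket(6789)) {  // listening socket
            while (true) {
                Socket connectionSocket = welcomeSocket.accept();    // new socket per client
                System.out.println("new connection from "
                        + connectionSocket.getInetAddress() + ":"
                        + connectionSocket.getPort());                // client's source IP/port
                // segments from this client are demultiplexed to connectionSocket
            }
        }
    }
}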

TCP demultiplexing

User Datagram Protocol (UDP)

RFC 768
The no frills Internet
transport protocol
best effort: UDP segments
may be:
lost
delivered out of order
connectionless
no handshaking between
sender and receiver
each segment handled
independently of others

Why have UDP?


no connection establishment (means lower delay)
simple; no connection state at sender and receiver
small segment header
no congestion control: UDP can blast away as fast as desired
no retransmits: useful for some applications (lower delay)

UDP

Often used for streaming


multimedia, games
loss-tolerant
rate or delay-sensitive
Also used for DNS, SNMP
If you need reliable transfer
over UDP, can add reliability at
application-layer
application-specific error
recovery
but think about what you are
doing...

UDP segment format (32 bits wide):
  source port # | dest port #
  length (in bytes of UDP segment, including header) | checksum
  application data (message)

UDP checksum
Purpose: to detect errors (e.g., flipped bits) in a
transmitted segment
Sender
  treat segment contents as a sequence of 16-bit integers
  checksum: addition (1's complement sum) of segment contents
  sender puts checksum value into UDP checksum field

Receiver
  compute checksum of received segment
  check if computed checksum equals checksum field value
    NO = error detected
    YES = no error detected (but maybe errors anyway?)

UDP checksum example


e.g., add two 16-bit integers
NB: when adding, the carry out of the MSB is wrapped around and added back into the result

    1110011001100110
  + 1101010101010101
  __________________
   11011101110111011   (17 bits: carry out of MSB)

  wraparound (add the carry back in):
    1011101110111100   sum
    0100010001000011   checksum (1's complement of sum)
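The same arithmetic, sketched in Java (this shows only the 16-bit one's-complement sum; the real UDP checksum also covers a pseudo-header, which is omitted here).

// Sketch: 16-bit one's-complement checksum over a byte array.
public class Checksum {
    public static int onesComplementChecksum(byte[] data) {
        long sum = 0;
        // add the data as 16-bit big-endian words
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xFF;
            int lo = (i + 1 < data.length) ? (data[i + 1] & 0xFF) : 0;  // pad odd length
            sum += (hi << 8) | lo;
            // fold any carry out of bit 16 back into the sum (wraparound)
            while ((sum >> 16) != 0) {
                sum = (sum & 0xFFFF) + (sum >> 16);
            }
        }
        // checksum is the one's complement of the sum
        return (int) (~sum & 0xFFFF);
    }

    public static void main(String[] args) {
        // the two 16-bit words from the slide example: 0xE666 and 0xD555
        byte[] words = { (byte) 0xE6, 0x66, (byte) 0xD5, 0x55 };
        System.out.printf("checksum = 0x%04X%n", onesComplementChecksum(words));
        // expected: 0x4443 = 0100010001000011
    }
}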

Reliable data transfer


Principles are important in app, transport, link layers
Complexity of the reliable data transfer (rdt) protocol
determined by characteristics of unreliable channel

rdt_send():

called from above
(e.g., by app). Passed data to
deliver to receiver's upper layer

SEND
SIDE

udt_send():

called by
rdt to transfer packet
over unreliable channel
to receiver

deliver_data():

called by
rdt to deliver data to upper
layer

RECV
SIDE

rdt_recv():

called
when packet arrives
on rcv side of
channel

Developing an rdt protocol


Develop sender and receiver sides of rdt protocol
Consider only unidirectional data transfer

although control information will flow in both directions

Use nite state machines (FSM) to specify sender and


receiver
event causing state transition
actions taken on state transition
state: when in
this state, next
state is uniquely
determined by
next event

state
1

state
2
event
actions

initial
state

rdt1.0
No bit errors, no packet loss, no packet reordering

SENDER

RECEIVER

rdt2.0
What if channel has bit errors (flipped bits)?

Use a checksum to detect bit errors

How to recover?

acknowledgements (ACKs): receiver explicitly tells sender that


packet was received (OK)
negative acknowledgements (NAKs): receiver explicitly tells
sender that packet had errors (Pardon?)
sender retransmits packet on receipt of a NAK
ARQ (Automatic Repeat reQuest)

New mechanisms needed in rdt2.0 (vs. rdt1.0)

error detection
receiver feedback: control messages (ACK, NAK)
retransmission

rdt2.0
SENDER

RECEIVER

But rdt2.0 doesn't always work...


SENDER                              RECEIVER
data1              -------------->  data1 delivered
                   <--------------  ACK
data2              ---> corrupted!
                   <--------------  NAK
data2 (resend)     -------------->  data2 delivered
                   <--------------  ACK
data3              -------------->  data3 delivered
                   <-- garbled ---  ACK/NAK corrupted
data3 (resend)     -------------->  data3 delivered again: duplicate!

If ACK/NAK corrupted
  sender doesn't know what happened at receiver
  shouldn't just retransmit: possible duplicate
Solution:
  sender adds sequence number to each packet
  sender retransmits current packet if ACK/NAK garbled
  receiver discards duplicates
Stop and wait
  Sender sends one packet, then waits for receiver response

rdt2.1 sender

rdt2.1 receiver

rdt2.1
Sender
seq # added
two seq #s (0,1) sufficient
must check if received ACK/
NAK is corrupted
2x state
state must remember
whether current packet
has 0 or 1 seq #

Receiver
 must check if received packet is
duplicate
  state indicates whether 0 or 1
is expected seq #
 receiver can not know if its last
ACK/NAK was received OK at
sender

rdt2.1 works!
SENDER                              RECEIVER
data1/0            -------------->  data1 delivered
                   <--------------  ACK
data2/1            ---> corrupted!
                   <--------------  NAK
data2/1 (resend)   -------------->  data2 delivered
                   <--------------  ACK
data3/0            -------------->  data3 delivered
                   <-- garbled ---  ACK corrupted
data3/0 (resend)   -------------->  duplicate seq # 0 detected: discarded
                   <--------------  ACK

Do we need NAKs? rdt2.2


Instead of NAK, receiver sends ACK for last packet
received OK

receiver explicitly includes seq # of packet being ACKed

rdt2.2 sender
duplicate ACK at sender results in the same action as
a NAK: retransmit current packet

rdt2.2 works!
SENDER                              RECEIVER
data1/0            -------------->  data1 delivered
                   <--------------  ACK0
data2/1            ---> corrupted!
                   <--------------  ACK0 (duplicate)
data2/1 (resend)   -------------->  data2 delivered
                   <--------------  ACK1
data3/0            -------------->  data3 delivered
                   <-- garbled ---  ACK0 corrupted
data3/0 (resend)   -------------->  duplicate: discarded
                   <--------------  ACK0

What about loss? rdt3.0

Assume:
Underlying channel can also
lose packets (both data and
ACKs)
checksum, seq #, ACKs,
retransmissions will help, but
not enough

Approach:
  sender waits a reasonable amount of time for ACK
  retransmits if no ACK received in this time
  if pkt (or ACK) just delayed (not lost):
    retransmission is a duplicate, but seq # handles this
    receiver must specify seq # of packet being ACKed
  requires a countdown timer
(a stop-and-wait sender along these lines is sketched below)
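A rough stop-and-wait sender in this style, using UDP as the unreliable channel and a socket timeout as the countdown timer. Names, framing, and the 1-second timeout are illustrative assumptions, not from the slides; the checksum and the receiver side are omitted.

import java.net.*;

// Sketch of an rdt3.0-style stop-and-wait sender over UDP.
public class StopAndWaitSender {
    private static final int TIMEOUT_MS = 1000;   // countdown timer
    private int seq = 0;                          // alternating-bit sequence number

    public void send(byte[] data, InetAddress dst, int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(TIMEOUT_MS);
            // packet = [seq][payload]; a real protocol would also carry a checksum
            byte[] pkt = new byte[data.length + 1];
            pkt[0] = (byte) seq;
            System.arraycopy(data, 0, pkt, 1, data.length);

            while (true) {
                socket.send(new DatagramPacket(pkt, pkt.length, dst, port));
                try {
                    byte[] buf = new byte[1];
                    DatagramPacket ack = new DatagramPacket(buf, buf.length);
                    socket.receive(ack);          // wait for an ACK carrying a seq #
                    if ((buf[0] & 1) == seq) {    // ACK for the current packet?
                        seq ^= 1;                 // flip 0 <-> 1 and stop waiting
                        return;
                    }
                    // duplicate ACK: keep waiting; timeout will trigger a retransmit
                } catch (SocketTimeoutException e) {
                    // timeout: loop around and retransmit the current packet
                }
            }
        }
    }
}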

rdt3.0 sender

rdt3.0 in action
SENDER                              RECEIVER
data1/0            -------------->  data1 delivered
                   <--------------  ACK0
data2/1            ---> lost!
  (timeout)
data2/1 (resend)   -------------->  data2 delivered
                   <--------------  ACK1
data3/0            -------------->  data3 delivered
                   <--- lost!       ACK0
  (timeout)
data3/0 (resend)   -------------->  duplicate: discarded
                   <--------------  ACK0

rdt3.0 works, but not very well...


e.g., 1 Gbps link, 15 ms propagation delay, 1 KB (8 kb) packet:

    T_transmit = L / R = 8000 bits / 10^9 bits/sec = 8 microsec

    U_sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

L = packet length in bits, R = transmission rate in bps
U_sender = utilisation: the fraction of time the sender is busy sending

1 KB packet every 30 ms → 33 kB/s throughput over a 1 Gbps link
good value for money upgrading to Gigabit Ethernet!
the network protocol limits the use of the physical resources!
because rdt3.0 is stop-and-wait

Pipelining

Pipelined protocols

send multiple packets without waiting


number of outstanding packets > 1, but still limited
range of sequence numbers needs to be increased
buffering at sender and/or receiver

Two generic forms

go-back-N and selective repeat

Go-back-N

Sender
k-bit sequence number in packet header
window of up to N, consecutive unACKed packets allowed
ACK(n): ACKs all packets up to, and including, sequence number n
  = cumulative ACK
  (so the sender may receive duplicate ACKs)
timer for each packet in flight
timeout(n): retransmit packet n and all higher sequence #
packets in window

Go-back-N sender

Go-back-N receiver

ACK-only: always send ACK


for correctly-received packet
with highest in-order sequence
number
may generate duplicate
ACKs
only need to remember
expectedseqnum

out-of-order packet:
  discard (don't buffer), i.e., no receiver buffering
  re-ACK packet with highest in-order sequence number
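A minimal sketch of the sender-side bookkeeping described above (hypothetical names; actual transmission is stubbed out): a window of unACKed packets, cumulative ACKs advancing the base, and one timeout that resends everything still outstanding.

import java.util.ArrayList;
import java.util.List;

// Go-back-N sender bookkeeping (illustrative sketch).
public class GbnSender {
    private final int N;                 // window size
    private int base = 1;                // oldest unACKed sequence number
    private int nextSeqNum = 1;
    private final List<byte[]> unacked = new ArrayList<>();  // packets sent but not yet ACKed

    public GbnSender(int windowSize) { this.N = windowSize; }

    public boolean send(byte[] data) {
        if (nextSeqNum >= base + N) return false;   // window full: refuse (or block)
        unacked.add(data);
        transmit(nextSeqNum, data);
        nextSeqNum++;
        return true;
    }

    public void onAck(int n) {           // cumulative ACK: everything up to n is ACKed
        while (base <= n) {
            unacked.remove(0);
            base++;
        }
    }

    public void onTimeout() {            // go back N: resend every outstanding packet
        for (int i = 0; i < unacked.size(); i++) {
            transmit(base + i, unacked.get(i));
        }
    }

    private void transmit(int seq, byte[] data) {
        System.out.println("send pkt " + seq);   // stand-in for udt_send()
    }
}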

go-back-N in action
SENDER                              RECEIVER
data1              -------------->  data1 delivered
data2              -------------->  data2 delivered
data3              ---> lost!
                   <--------------  ACK1
                   <--------------  ACK2
data4              -------------->  out of order: discarded, re-ACK 2
data5              -------------->  out of order: discarded, re-ACK 2
                   <--------------  ACK2
                   <--------------  ACK2
  (data3 timeout: go back N)
data3 (resend)     -------------->  data3 delivered
data4 (resend)     -------------->  data4 delivered
                   <--------------  ACK3
                   <--------------  ACK4

Selective Repeat
If we lose one packet in go-back-N

must send all N packets again

Selective Repeat (SR)

only retransmit packets that didn't make it

Receiver individually acknowledges all correctly-received packets

buffers packets as needed for eventual in-order delivery to
upper layer

Sender only resends pkts for which ACK not received

sender timer for each unACKed packet

Sender window

N consecutive sequence numbers


as in go-back-N, limits seq numbers of sent, unACKed pkts

Selective repeat windows

Selective Repeat
Sender
if next available seq # is in
window, send packet
timeout(n): resend pkt n,
restart timer
ACK(n) in [sendbase, sendbase+N]:
mark packet n as received
if n is smallest unACKed
packet, advance window base
to next unACKed seq #
Need >= 2N sequence numbers
or reuse may confuse
receiver

Receiver
pkt n in [rcvbase, rcvbase+N-1]:
  send ACK(n)
  if out of order: buffer
  if in order: deliver (also deliver any buffered, in-order pkts),
    advance window to next not-yet-received pkt
pkt n in [rcvbase-N, rcvbase-1]:
  send ACK(n), even though already ACKed
otherwise:
  ignore
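A small sketch of the receiver rules above (hypothetical names; delivery is stubbed): packets inside the window are ACKed and buffered, and when the packet at rcvbase arrives, it and any consecutive buffered packets are delivered and the window advances.

import java.util.HashMap;
import java.util.Map;

// Selective-repeat receiver bookkeeping (illustrative sketch).
public class SrReceiver {
    private final int N;                                  // window size
    private int rcvBase = 1;                              // lowest not-yet-delivered seq #
    private final Map<Integer, byte[]> buffer = new HashMap<>();

    public SrReceiver(int windowSize) { this.N = windowSize; }

    public void onPacket(int seq, byte[] data) {
        if (seq >= rcvBase && seq <= rcvBase + N - 1) {   // inside the receive window
            sendAck(seq);
            buffer.putIfAbsent(seq, data);
            // deliver rcvBase and any consecutive buffered packets, advancing the window
            while (buffer.containsKey(rcvBase)) {
                deliver(buffer.remove(rcvBase));
                rcvBase++;
            }
        } else if (seq >= rcvBase - N && seq < rcvBase) { // already delivered: re-ACK
            sendAck(seq);
        }                                                 // otherwise: ignore
    }

    private void sendAck(int seq)  { System.out.println("ACK" + seq); }
    private void deliver(byte[] d) { /* pass data up to the application layer */ }
}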

SR in action

SENDER                              RECEIVER
data1              -------------->  data1 delivered
data2              -------------->  data2 delivered
data3              ---> lost!
  (window full)
                   <--------------  ACK1
  (ACK1 received, window slides)
data4              -------------->  out of order: buffered
                   <--------------  ACK2
data5              -------------->  out of order: buffered
data6              -------------->  out of order: buffered
                   <--------------  ACK4
                   <--------------  ACK5
  (data3 timeout)
data3 (resend)     -------------->  data3 received, so data3-6 delivered up
                   <--------------  ACK3

TCP

point-to-point:
  one sender, one receiver
reliable, in-order byte stream:
  no message boundaries
pipelined:
  TCP congestion control and flow control set window size
  send and receive buffers
flow-controlled:
  sender will not overwhelm receiver
full-duplex data:
  bi-directional data flow in same connection
  MSS: maximum segment size
connection-oriented:
  handshaking initialises sender, receiver state before data exchange

TCP segment structure


32 bits wide:
  source port #        | dest port #
  sequence number
  acknowledgement number
  head len | not used | U A P R S F | receive window
  checksum             | urgent data pointer
  options (variable length)
  application data (message)

U = urgent data (not often used)
A = ACK # valid
P = push data (not often used)
R, S, F = RST, SYN, FIN: connection setup/teardown commands
checksum: Internet checksum (like UDP)
sequence and acknowledgement numbers are counted in bytes (not segments)
receive window: # bytes receiver is willing to accept

TCP sequence numbers & ACKs

Sequence numbers
  byte-stream # of first byte in segment's data
ACKs
  seq # of next byte expected from other side
  cumulative ACK
How does receiver handle out-of-order segments?
  Spec doesn't say; up to implementor
  Most buffer and wait for the missing bytes to be retransmitted

TCP RTT & timeout

How to set TCP timeout?


longer than RTT
  but RTT can vary
too short → premature timeout
  unnecessary retransmissions
too long → slow reaction to loss
So estimate RTT

How to estimate RTT?
SampleRTT: measured time from segment transmission until ACK receipt
  ignore retransmissions
SampleRTT will vary, but we want a smooth estimated RTT
  average several recent measurements, not just the current SampleRTT

    EstimatedRTT = (1 - α) · EstimatedRTT + α · SampleRTT

Exponentially-weighted moving average
  influence of past samples decreases exponentially fast
  typical α = 0.125

TCP RTT estimation

TCP timeout
Timeout = EstimatedRTT + safety margin

if timeouts too short, too many retransmissions


if margin is too large, timeouts take too long
larger the variation in EstimatedRTT, the larger the margin

First, estimate the deviation of SampleRTT from EstimatedRTT:

    DevRTT = (1 - β) · DevRTT + β · |SampleRTT - EstimatedRTT|

(typical β = 0.25)

Then set the timeout interval:

    TimeoutInterval = EstimatedRTT + 4 · DevRTT
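The two estimators and the timeout rule above translate directly into code; a sketch with the typical α = 0.125 and β = 0.25 (the initial EstimatedRTT is an arbitrary assumption, values in milliseconds).

// Sketch of the RTT estimator described above.
public class RttEstimator {
    private static final double ALPHA = 0.125;
    private static final double BETA  = 0.25;

    private double estimatedRtt = 100.0;   // initial guess (assumption, not from the slides)
    private double devRtt       = 0.0;

    // Call once per segment whose ACK was measured (ignore retransmissions).
    public void onSample(double sampleRtt) {
        // deviation first, using the previous EstimatedRTT (as RFC 6298 does)
        devRtt = (1 - BETA) * devRtt + BETA * Math.abs(sampleRtt - estimatedRtt);
        estimatedRtt = (1 - ALPHA) * estimatedRtt + ALPHA * sampleRtt;
    }

    public double timeoutInterval() {
        return estimatedRtt + 4 * devRtt;  // EstimatedRTT + 4 * DevRTT
    }
}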

RDT in TCP

TCP provides RDT on top of unreliable IP

Pipelined segments
Cumulative ACKs
Single retransmission timer

Retransmissions triggered by :

timeout events
duplicate ACKs

TCP sender events


Data received from app:
Create segment with seq #
seq # is byte-stream number
of first data byte in segment
start timer if not already
running (timer for oldest
unACKed segment)
expiration interval =
TimeOutInterval

Timeout:
  retransmit segment that caused
timeout
  restart timer
ACK received:
  If ACK acknowledges
previously-unACKed segments
   update what is known to be
ACKed
   start timer if there are
outstanding segments

TCP sender (simplified)

NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with
      smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }
} /* end of loop forever */

SendBase-1 = last cumulatively ACKed byte
e.g., SendBase-1 = 71 and y = 73, so the receiver wants 73+;
y > SendBase, so the new data is ACKed

TCP retransmissions - lost ACK

HOST A                                HOST B
SEQ=92, 8 bytes data  ------------>   received
                      <--- lost ---   ACK=100
  (timeout)
SEQ=92, 8 bytes data  ------------>   received (duplicate)
                      <------------   ACK=100
SendBase = 100
                                      time

TCP retransmissions - premature timeout

HOST A                                  HOST B
SEQ=92, 8 bytes data    ------------>   received
SEQ=100, 20 bytes data  ------------>   received
  (timeout before the ACKs arrive)
                        <------------   ACK=100
SendBase = 100
SEQ=92, 8 bytes data    ------------>   received (duplicate)
                        <------------   ACK=120
SendBase = 120
                        <------------   ACK=120
SendBase = 120
                                        time

TCP retransmissions - saving retransmits

HOST A                                  HOST B
SEQ=92, 8 bytes data    ------------>   received
SEQ=100, 20 bytes data  ------------>   received
                        <--- lost ---   ACK=100
                        <------------   ACK=120
SendBase = 120   (cumulative ACK=120 covers both segments: no retransmission needed)
                                        time

TCP ACK generation


Event at receiver → TCP receiver action

Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed
  → Delayed ACK: wait up to 500 ms for next segment. If no next segment, send ACK.

Arrival of in-order segment with expected seq #; one other segment has ACK pending
  → Immediately send single cumulative ACK, ACKing both in-order segments.

Arrival of out-of-order segment higher than expected seq #; gap detected
  → Immediately send duplicate ACK, indicating seq # of next expected byte.

Arrival of segment that partially or completely fills gap
  → Immediately send ACK, provided that segment starts at lower end of gap.

TCP Fast Retransmit


Timeout is often quite long

so long delay before resending lost packet

Lost segments are detected via DUP ACKs

sender often sends many segments back-to-back (pipeline)


if segment is lost, there will be many DUP ACKs

If a sender receives 3 ACKs for the same data, it


assumes that the segment after the ACKed data was
lost

fast retransmit: resend segment before the timer expires
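A rough sketch of the sender-side ACK handling this implies (hypothetical names): count duplicate ACKs for the same data and, on the third one, retransmit the presumed-lost segment without waiting for the timer.

// Sketch of duplicate-ACK counting for fast retransmit (illustrative only).
public class FastRetransmit {
    private int sendBase = 0;        // lowest unACKed byte
    private int dupAckCount = 0;

    public void onAck(int ackNum) {
        if (ackNum > sendBase) {
            sendBase = ackNum;       // new data ACKed: reset the duplicate counter
            dupAckCount = 0;
        } else {                     // duplicate ACK: re-acknowledges already-ACKed data
            dupAckCount++;
            if (dupAckCount == 3) {
                // 3 duplicate ACKs: assume the segment starting at ackNum was lost
                retransmitSegmentStartingAt(ackNum);   // fast retransmit, before timeout
            }
        }
    }

    private void retransmitSegmentStartingAt(int seq) {
        System.out.println("fast retransmit segment with seq " + seq);
    }
}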

TCP flow control
Flow control: prevent sender from overwhelming receiver
Receiver side of TCP connection has a receive buffer

Application process may be slow at reading from buffer

Need speed-matching service: match send rate to
receiving application's drain rate

TCP flow control

Spare room in buffer
  = RcvWindow
  = RcvBuffer - [LastByteRcvd - LastByteRead]

Receiver advertises spare room by including value of RcvWindow
in segments
  value is dynamic

Sender limits unACKed data to RcvWindow
  guarantees receive buffer will not overflow
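A small sketch of the bookkeeping above (field names hypothetical, buffer size an assumption): the receiver advertises RcvWindow, and the sender keeps its unACKed data below that value.

// Flow-control bookkeeping sketch: receiver-side window advertisement and
// sender-side limit on unACKed ("in flight") data.
public class FlowControl {
    // receiver side
    int rcvBuffer = 64 * 1024;      // total receive buffer size (assumed value)
    int lastByteRcvd = 0;
    int lastByteRead = 0;

    int rcvWindow() {                // spare room advertised to the sender
        return rcvBuffer - (lastByteRcvd - lastByteRead);
    }

    // sender side
    int lastByteSent = 0;
    int lastByteAcked = 0;

    boolean maySend(int segmentLength, int advertisedRcvWindow) {
        // keep unACKed data <= RcvWindow so the receive buffer cannot overflow
        return (lastByteSent - lastByteAcked) + segmentLength <= advertisedRcvWindow;
    }
}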

TCP connection management


Three way handshake
TCP sender and receiver
establish connection before
exchanging data segments
Initialise TCP variables:
  sequence numbers
  buffers, flow control info
Client: initiates connection
  Socket clientSocket = new Socket("hostname", port);

Server: contacted by client


Socket connectionSocket =
welcomeSocket.accept();

1. Client host sends TCP SYN


segment to server
  specifies initial sequence #
  no data
2. Server receives SYN, replies
with SYNACK segment
  server allocates buffers
  specifies server initial seq #
3. Client receives SYNACK,
replies with ACK
  may contain data

TCP connection management


Closing a connection
1. Client host sends TCP FIN segment to server
   specifies sequence #
   no data
2. Server receives FIN, replies with ACK segment, closes connection, sends FIN
3. Client receives FIN, replies with ACK, enters timed wait
   during timed wait, will respond with ACK to FINs
4. Server receives ACK, closes.

CLIENT                            SERVER
close:   FIN   ------------>
         <------------   ACK
         <------------   FIN      (close)
         ACK   ------------>      (closed)
(timed wait)
closed
                                  time

TCP connection management

TCP server
lifecycle

TCP client
lifecycle

Other TCP flags


RST = reset the connection

used e.g., to reset non-synchronised handshakes


or if host tries to connect to server on non-listening port

PSH = push

receiver should pass data to upper layer immediately


receiver pushes all data in window up

URG = urgent

sender's upper layer has marked data in segment as urgent


location of last byte of urgent data indicated by urgent data
pointer

URG and PSH are hardly ever used

except Blitzmail, which appears to use PSH for every segment

See RFC 793 for more info (also 1122, 1323, 2018, 2581)

Congestion control
What is congestion?
Too many sources sending too much data too fast for

the network to handle


Not flow control!

network, not end systems

Manifestations

lost packets (buffer overflow at routers)

long delays (queueing in router buffers)

One of the most important problems in networking

Congestion control: scenario 1

2 senders, 2 receivers
link capacity R
1 router, infinite buffers
no retransmissions





large delays when


congested
maximum achievable
throughput = R/2

Congestion control: scenario 2


1 router, finite buffers
sender retransmits lost packets
λ_in = original sending rate; λ'_in = offered load (incl. retransmits)

Congestion control: scenario 2


λ_in = λ_out (goodput)
"perfect" retransmission, only when loss: λ'_in > λ_out
retransmissions of delayed (not lost) packets make λ'_in
greater than in the perfect case


So congestion causes

more work (retransmits) for given goodput

unnecessary retransmissions: link carries multiple copies

Congestion control: scenario 3


4 senders: flows A→C and B→D
finite buffers
multihop paths
timeouts/retransmits

Congestion control: scenario 3

A→C is limited by the R1→R2 link

B→D traffic saturates R2
A→C end-to-end throughput goes to zero
  may as well have used R1 for something else
So congestion causes:
  when a packet is dropped, any upstream transmission
  capacity used for that packet is wasted

Approaches to congestion control


End-to-end congestion control
  no explicit feedback from network
  congestion inferred from end-system observed loss and delay
  this is what TCP does

Network-assisted congestion control
  routers provide feedback to end systems
  direct feedback, e.g. choke packet
  mark single bit indicating congestion
  tells sender the explicit rate at which it should send
  ATM, DECbit, TCP/IP ECN

ATM ABR congestion control


ATM (Asynchronous Transfer Mode)
  alternative network architecture
  virtual circuits, fixed-size cells
ABR (Available Bit Rate)
  elastic service
  if sender's path is underloaded, sender should use available bandwidth
  if sender's path is congested, sender is throttled to the minimum guaranteed rate

RM (Resource Management) cells
  sent by sender, interspersed with data cells
  bits in RM cell set by switches (i.e., network-assisted CC)
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells are returned to the sender by the receiver, with bits intact

ATM ABR congestion control

2-byte ER (Explicit Rate) field in RM cell
  congested switch may lower ER value in cell
  sender's send rate is thus the minimum supportable rate on path

EFCI bit in data cell is set to 1 by a congested switch
  if data cell preceding RM cell has EFCI set, receiver sets CI bit in
  returned RM cell

TCP congestion control


end-to-end (no network assist)
sender limits transmission:

    LastByteSent - LastByteAcked <= min{CongWin, RcvWindow}

    rate ≈ CongWin / RTT   bytes/sec

CongWin is dynamic function of perceived congestion

How does sender perceive congestion?

loss event: timeout or 3 DUP ACKs


TCP sender reduces rate (CongWin) after loss event
3 mechanisms: AIMD, slow start, conservative after timeouts

TCP congestion control is self-clocking

TCP AIMD
Additive increase: increase CongWin by 1 MSS every RTT
in the absence of loss events (probing)
Multiplicative decrease: halve CongWin after loss event

TCP sawtooth

TCP Slow Start


When connection begins, CongWin = 1 MSS

e.g., MSS = 500 bytes, RTT = 200 ms


initial rate = 20 kbps

But available bandwidth may be >> MSS/RTT

want to quickly ramp up to respectable rate

When connection begins, increase rate exponentially


until the first loss event

double CongWin every RTT


increment CongWin for every ACK received

Slow start: sender starts sending at slow rate, but


quickly speeds up

TCP slow start


HOST A                      HOST B
1 segment     ---------->
              <----------   ACK          (one RTT)
2 segments    ---------->
              <----------   ACKs         (one RTT)
4 segments    ---------->
...            (CongWin doubles every RTT)
time

TCP - reaction to timeout events


After 3 DUP ACKs

CongWin halved

window then grows linearly

But after timeout

CongWin set to 1 MSS

window then grows exponentially


to threshold, then grows linearly (AIMD: congestion
avoidance)

Why?

3 DUP ACKs means network capable of delivering some


segments, so do Fast Recovery (TCP Reno)
timeout before 3 DUP ACKs is more troubling

TCP - reaction to timeout events

When to switch from exponential to linear growth?
  When CongWin gets to 1/2 of its value before timeout
Implementation:
  Threshold variable
  At loss event, Threshold is set to 1/2 of CongWin just before the loss event

TCP congestion control - summary


When CongWin is below Threshold, sender in slow

start phase; window grows exponentially


When CongWin is above Threshold, sender in
congestion-avoidance phase; window grows linearly
When a triple duplicate ACK occurs, Threshold set to
CongWin/2 and CongWin set to Threshold
When timeout occurs, Threshold set to CongWin/2
and CongWin set to 1 MSS
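The four rules above map almost directly onto code; a compact sketch with CongWin and Threshold in units of MSS (the initial Threshold is an assumption, and fast-recovery details are omitted).

// Congestion-window update rules from the summary above (illustrative sketch).
public class CongestionControl {
    private double congWin = 1.0;      // start in slow start with 1 MSS
    private double threshold = 64.0;   // initial ssthresh (assumed value)

    public void onAck() {
        if (congWin < threshold) {
            congWin += 1.0;            // slow start: +1 MSS per ACK => doubles each RTT
        } else {
            congWin += 1.0 / congWin;  // congestion avoidance: ~ +1 MSS per RTT
        }
    }

    public void onTripleDupAck() {     // loss inferred from 3 duplicate ACKs
        threshold = congWin / 2;
        congWin = threshold;           // halve the window, continue in congestion avoidance
    }

    public void onTimeout() {          // more severe: back to slow start
        threshold = congWin / 2;
        congWin = 1.0;
    }
}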

TCP throughput

W = window size when loss occurs


When window = W, throughput = W/RTT
After loss, window = W/2, throughput = W/2RTT
Average throughput = 0.75W/RTT

(ignoring slow start, assume throughput increases linearly


between W/2 and W)
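The 0.75 factor is just the average of the sawtooth endpoints under that linear-growth assumption:

    Average throughput = 1/2 · (W/2RTT + W/RTT) = (3/4) · W/RTT = 0.75 W/RTT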

High-speed TCP
assume: 1500 byte MSS (common for Ethernet),
100ms RTT, 10Gbps desired throughput
W = 83,333 segments

a big CongWin! What if loss?

Throughput in terms of loss rate L:

    Throughput = (1.22 · MSS) / (RTT · √L)

so to sustain 10 Gbps, L (loss rate) must be about 2 × 10⁻¹⁰ (roughly 1 loss every 5 billion segments)

is this realistic?
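As a quick check of that loss rate (not from the slides), rearranging the throughput formula with MSS = 1500 bytes = 12,000 bits, RTT = 0.1 s, throughput = 10^10 b/s:

    L = (1.22 · MSS / (RTT · Throughput))²
      = (1.22 · 12000 / (0.1 · 10^10))²
      ≈ (1.46 × 10⁻⁵)² ≈ 2 × 10⁻¹⁰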

Lots of people working on modifying TCP for high-speed networks

Fairness
If k TCP sessions share same bottleneck link of
bandwidth R, each should have an average rate of R/k

How is TCP fair?


Two competing sessions

additive increase gives slope of 1 as throughput increases


multiplicative decrease decreases throughput proportionally










suppose we are at A: total < R, so both increase
at B: total > R, so loss; both decrease window by a factor of 2
at C: total < R, so both increase
etc...

Fairness
multimedia apps often use UDP

do not want congestion/flow control to throttle rate


pump A/V at constant rate, tolerate packet loss

How to enforce fairness in UDP?

application-layer congestion control


long-term throughput of a UDP flow is equivalent to a TCP
flow on the same link

Parallel TCP connections

nothing to stop application from opening parallel connections


between two hosts (web browser, download accelerator)
e.g., link of rate R with 9 connections
new app asks for 1 TCP, gets rate R/(9+1) = R/10
new app asks for 11 TCPs, gets rate 11R/(9+11) ≈ R/2 (!)

What is fair?

Max-min fairness

  Give the flow with the lowest rate the largest possible share

Proportional fairness

  TCP favours short flows
  Proportional fairness: flows allocated bandwidth in proportion
  to the number of links traversed

Pareto-fairness

  can't give any flow more bandwidth without taking
  bandwidth away from another flow

Per-link fairness

  each flow gets a fair share of each link traversed

Utility functions/pricing

  I pay/want more, I get more

Quick history of the Internet


[Timeline figure: Internet milestones from 1957 (Sputnik) and early-1960s packet-switching work (Baran, Davies, Kleinrock), through the first ARPANET IMPs and hosts (1969-1971), e-mail (1971-1973), Ethernet and TCP/IP (1973-1983), DNS (1985), the first worm and CERT (1988), the WWW at CERN (1991), Mosaic and the commercial Internet (1993-1995), to Google, P2P, and worms such as Code Red and Slammer (1998-2003); host counts grow from 4 to roughly 300 million over the period.]
