
Data Link Layer

Niranjan Baral
IOE, Central Campus
Link Layer Services

Framing, link access:
  - Encapsulate the datagram into a frame, adding a header and trailer.
  - Channel access if the medium is shared.
  - MAC addresses are used in frame headers to identify source and destination; these are different from IP addresses!

Reliable delivery between adjacent nodes:
  - We learned how to do this already (Chapter 3)!
  - Seldom used on low bit-error links (fiber, some twisted pair).
  - Wireless links have high error rates.
  - Q: why both link-level and end-to-end reliability?
Link Layer Services (more)

Flow control:
  - Pacing between adjacent sending and receiving nodes.
Error detection:
  - Errors are caused by signal attenuation and noise.
  - The receiver detects the presence of errors and signals the sender for retransmission, or drops the frame.
Error correction:
  - The receiver identifies and corrects bit error(s) without resorting to retransmission.

The DLL provides an interface for the Network Layer to send information from one machine to another.
DLL Services

The Data Link Layer can offer many different services, and these services can vary from system to system. Common services:
  - Unacknowledged connectionless service.
  - Acknowledged connectionless service.
  - Acknowledged connection-oriented service.
Unacknowledged Connectionless Service

  - No acknowledgement from the receiving machine.
  - No logical connection is set up between the two machines.
  - The DLL makes no attempt to detect or recover a lost frame.
  - This service is useful for low-error-rate networks and for real-time traffic, where late data is worse than no data.
Acknowledged Connectionless Service

  - The receiver acknowledges the arrival of each frame.
  - If a frame hasn't arrived correctly (or within the correct time), it can be resent.
  - This is a useful service when the connection is unreliable (such as wireless).
  - There is no requirement for such an acknowledgement service to be implemented by the Data Link Layer.
Acknowledged Connection-Oriented Service

  - A connection is established between the two machines.
  - The frames are then transmitted and each frame is acknowledged.
  - The frames are guaranteed to arrive exactly once and in order.
  - This is the same as a reliable bit stream.
  - The connection is released once the communication is complete.
Adaptors Communicating

The link layer is implemented in an adaptor (NIC), e.g., an Ethernet card or an 802.11 card. A datagram travels from the sending node to the receiving node inside a frame, adapter to adapter.

Sending side:
  - Encapsulates the datagram in a frame.
  - Adds error-checking bits, flow control, etc.
Receiving side:
  - Looks for errors, handles flow control, and so on.
  - Extracts the datagram and passes it to the receiving node.

5: DataLink Layer
Framing

Framing translates the physical layer's raw bit stream into discrete units called frames.

(Figure: frames #1 .. #n flowing from sender to receiver.)

How can the receiver detect frame boundaries? That is, how can the receiver recognize the start and end of a frame? Four methods:
  1. Character count.
  2. Flag bytes with byte stuffing.
  3. Starting and ending flags, with bit stuffing.
  4. Physical layer coding violations.
Character Count

  - A field in the header specifies the number of characters in the frame.
  - This method can cause problems if the count is garbled in transit.
  - The receiver will be able to tell the frame is bad (due to a bad checksum), but it will not know where to pick up, and the sender will not know how much to resend.
  - If the header gets scrambled, the whole received data stream is erroneous.
  - This method is rarely used anymore.

Character Count Example
Starting and Ending Flags with Byte Stuffing

This method uses two distinct starting and ending bytes called flags. The starting and ending bytes may be the same or different; they isolate each frame from the rest of the stream. If the receiver ever loses synchronization, it can simply search for a flag byte to find the end of the current frame. Two consecutive flag bytes indicate the end of one frame and the start of another.

A drawback of this scheme is that if binary data such as a floating-point number is transmitted, the flag byte pattern might occur in the data and interfere with the framing.

One way to solve this is by stuffing a special escape byte (e.g., an ESC byte) just before each accidental flag byte in the data. The data link layer on the receiving end removes the escape bytes before the data are passed to the network layer.

A major disadvantage of this method is that it is closely tied to the use of 8-bit characters. Unicode uses 16-bit characters, so a new technique needs to be developed to allow arbitrary-sized characters.
Example of Byte Stuffing
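The escape-byte scheme described above can be sketched in a few lines of Python. The FLAG and ESC values here are assumptions (the HDLC/PPP-style values 0x7E and 0x7D); any two reserved byte values would work:

```python
FLAG, ESC = 0x7E, 0x7D   # assumed delimiter and escape byte values

def byte_stuff(payload: bytes) -> bytes:
    """Wrap payload in flags, escaping any accidental FLAG/ESC bytes."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # stuff an escape byte first
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the delimiting flags and remove the escape bytes."""
    out, esc = bytearray(), False
    for b in frame[1:-1]:            # drop the two flag bytes
        if esc or b != ESC:
            out.append(b)
            esc = False
        else:
            esc = True               # next byte is data, not a flag
    return bytes(out)
```

Round-tripping data that happens to contain the flag and escape values shows why the stuffing is needed.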
Starting and Ending Flags with Bit Stuffing

  - Each frame begins and ends with a special bit pattern, for example 01111110.
  - Whenever the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing stream.
  - When the receiver sees five consecutive 1s followed by a 0 bit, it automatically de-stuffs the 0 bit to recover the actual sequence.
  - Here the stuffed 0 acts as an escape bit, similar to the ESC byte in byte stuffing.
  - If the receiver loses track of where it is, all it has to do is scan the input for the flag sequence, since it can only occur at frame boundaries and never within the data.
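The stuffing and de-stuffing rules above can be sketched directly on bit strings (a minimal illustration, not a production framing layer):

```python
FLAG = "01111110"   # frame delimiter from the text

def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, stuff a 0 so FLAG never appears in data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        i += 1
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
    return "".join(out)
```

Note that after stuffing, the flag pattern cannot occur inside the data, which is exactly what lets the receiver resynchronize on it.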
Physical Layer Coding Violations

This method is applicable only to networks in which the encoding on the physical medium contains some redundancy. The technique uses a symbol that never occurs in the data to mark the frame boundary.

Example: Manchester code
  1: low-to-high transition
  0: high-to-low transition

In this case the boundary symbols, high-high or low-low transitions, never occur in data and may be used to locate the start and end of a frame:

  01 -> 1
  10 -> 0
  11 -> violation, used to mark a frame boundary
  00 -> violation, used to mark a frame boundary
Error Detection and Correction

Errors may be single-bit errors or burst errors (i.e., errors in a number of bits).

Network designers have developed two basic strategies for dealing with errors. One way is to include enough redundancy with each block of data sent so that the receiver can deduce what the transmitted data must have been; these are called error-correcting codes. The other way is to include only enough redundancy to allow the receiver to deduce that an error occurred, but not which error, and to send a request for retransmission; these are called error-detecting codes.

Error detection merely detects errors: it states that there is an error in the data sent, whether a burst error or a single-bit error. For error correction, the exact number of erroneous bits and their positions in the data must be known.
Types of Errors

Single-bit error: only 1 bit of the data is changed from 1 to 0 or from 0 to 1.
Burst error: 2 or more bits in the data have been changed from 1 to 0 or from 0 to 1.

Redundancy:
Redundancy is the central concept in detecting or correcting errors. To be able to detect or correct errors, some extra redundant bits need to be added to the data. These extra bits are added at the sender side and removed at the receiver side. Redundancy is achieved through various coding schemes.
Error-Detecting Codes

In copper wire or fiber, the error rate is much lower than in wireless transmission, so error detection and retransmission is usually more efficient for such media, where errors are only occasional. The total overhead of an error detection and retransmission scheme is less than that of error correction schemes.

Three main error detection methods:
  - Parity check
  - Checksum
  - Cyclic redundancy check (CRC)

Parity Check:
A bit called a parity bit is added to every data unit so that the total number of 1s in the unit becomes odd or even, according to whether the system uses odd or even parity.
Types:
  - Simple parity check
  - Longitudinal parity check (2-D parity check)
Simple Parity Check

In this technique, a redundant bit called a parity bit is added to every data unit so that the total number of 1s in the unit (including the parity bit) becomes even (or odd).

Simple parity check can detect all single-bit errors. It can also detect burst errors, as long as the total number of bits changed is odd. It cannot detect errors where the total number of bits changed is even: if any two bits change in transmission, the changes cancel each other and the data unit passes the parity check even though it is damaged. The same holds true for any even number of errors.
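A small sketch of even parity makes the limitation above concrete (function names are my own):

```python
def add_even_parity(data: str) -> str:
    """Append a parity bit so the total number of 1s is even."""
    return data + str(data.count("1") % 2)

def parity_ok(unit: str) -> bool:
    """A received unit passes the check if its count of 1s is even."""
    return unit.count("1") % 2 == 0
```

Flipping one bit of a protected unit is caught, but flipping two bits passes the check, exactly as described above.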
2-D Parity Check

A better approach is the two-dimensional parity check. In this method, a block of bits is organized as a table (rows and columns). First we calculate the parity bit for each data unit, then we organize the units into a table. For example, with four data units shown in four rows and eight columns, we then calculate the parity bit for each column and create a new row of 8 bits; these are the parity bits for the whole block.

Performance:
  - Two-dimensional parity check increases the likelihood of detecting burst errors.
  - A burst error of more than n bits is also detected by this method with very high probability.
  - If 2 bits in one data unit are damaged and 2 bits in exactly the same positions in another data unit are also damaged, the checker will not detect the error. For example, take the two data units 11110000 and 11000011: if the receiver receives 01110001 and 01000010, the errors cannot be detected.
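Building the block described above (row parity bits plus a final column-parity row) can be sketched as:

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then add a column-parity row."""
    body = [r + str(r.count("1") % 2) for r in rows]
    parity_row = "".join(str(col.count("1") % 2) for col in zip(*body))
    return body + [parity_row]
```

Every column of the resulting block (including the parity column) then has an even number of 1s, which is what the receiver checks.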
Parity Checking

Single-bit parity can detect single-bit errors; two-dimensional bit parity can detect and correct single-bit errors.


Checksum
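A checksum adds the data up in fixed-size words rather than counting 1s. As a hedged sketch, here is the widely used 16-bit ones'-complement (Internet) checksum; the body of this slide is not spelled out above, so this is an illustration of the general technique, not necessarily the exact variant intended:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum of 16-bit words (a sketch)."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF                        # complement = checksum
```

The receiver sums the data together with the appended checksum; a result of 0 means the unit is accepted.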
Cyclic Redundancy Check

In CRC, instead of adding bits to achieve a desired parity, a sequence of redundant bits, called the CRC or the CRC remainder, is appended to the end of a data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number. At its destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is assumed to be intact and is therefore accepted. A remainder indicates that the data unit has been damaged in transit and must therefore be rejected.
Cyclic Redundancy Check: Polynomial Representation

Assume k message bits and n bits of redundancy:

  xxxxxxxxxx yyyy    <- block of length k + n
  (k bits)  (n bits)

Associate the bits with the coefficients of a polynomial. For example:

  1 0 1 1 0 1 1
  = 1·x^6 + 0·x^5 + 1·x^4 + 1·x^3 + 0·x^2 + 1·x + 1
  = x^6 + x^4 + x^3 + x + 1

(ECE 766, Computer Interfacing and Protocols, Winter 2005)
Cyclic Redundancy Check: M(x) and P(x)

  - Let M(x) be the message polynomial.
  - Let P(x) be the generator polynomial.
  - P(x) is fixed for a given CRC scheme and is known by both sender and receiver.
  - Create a block polynomial F(x) based on M(x) and P(x) such that F(x) is divisible by P(x):

      F(x) / P(x) = Q(x), remainder 0
Cyclic Redundancy Check: Procedure

Sending:
  1. Multiply M(x) by x^n.
  2. Divide x^n·M(x) by P(x).
  3. Ignore the quotient and keep the remainder C(x).
  4. Form and send F(x) = x^n·M(x) + C(x).

Receiving:
  1. Receive F(x).
  2. Divide F(x) by P(x).
  3. Accept if the remainder is 0; reject otherwise.
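The divisions in the steps above are modulo-2 (XOR) long division on bit strings. A minimal sketch, with helper names of my own choosing:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 long division of bit strings; returns the remainder."""
    buf = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if buf[i] == "1":
            for j, d in enumerate(divisor):      # XOR the divisor in
                buf[i + j] = "0" if buf[i + j] == d else "1"
    return "".join(buf[-(len(divisor) - 1):])

def crc_send(msg: str, poly: str) -> str:
    """F(x) = x^n M(x) + C(x): append the remainder to the message."""
    n = len(poly) - 1
    return msg + mod2_div(msg + "0" * n, poly)

def crc_ok(frame: str, poly: str) -> bool:
    """Accept the frame if it divides evenly by P(x)."""
    return set(mod2_div(frame, poly)) <= {"0"}
```

With M = 110011 and P = 11001, crc_send reproduces the worked example on the following slides.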
Proof of CRC Generation

Prove that x^n·M(x) + C(x) is divisible by P(x):

  x^n·M(x) / P(x) = Q(x), remainder C(x)
  so  x^n·M(x) = P(x)·Q(x) + C(x)

  (x^n·M(x) + C(x)) / P(x) = (P(x)·Q(x) + C(x) + C(x)) / P(x)

The first term has remainder 0, and C(x) + C(x) = 0, so the total remainder is 0.

Note: binary modulo-2 addition is equivalent to binary modulo-2 subtraction, hence C(x) + C(x) = 0.
Example

Send side:
  M(x) = 110011 (6 bits) = x^5 + x^4 + x + 1
  P(x) = 11001 (5 bits) = x^4 + x^3 + 1, so n = 4 bits of redundancy
  Form x^n·M(x) = 110011 0000 = x^9 + x^8 + x^5 + x^4
  Divide x^n·M(x) by P(x): 1100110000 ÷ 11001 leaves remainder C(x) = 1001
  Send the block F(x) = 1100111001

Receive side:
  Divide the received block 1100111001 by 11001: no remainder, so accept.
Error-Correcting Codes

Error-correcting codes are widely used on wireless links, which are notoriously noisy and error-prone compared to copper wire or optical fiber. Without error-correcting codes, it would be hard to get anything through. Over copper wire or fiber, however, the error rate is much lower, so error detection and retransmission is usually more efficient there for dealing with the occasional error.

Example: Hamming code.
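The Hamming code mentioned above can be sketched for the classic (7,4) case, where three parity bits protect four data bits and the syndrome directly names the flipped position. The bit layout assumed here is the textbook one, p1 p2 d1 p3 d2 d3 d4, with even parity:

```python
def hamming74_encode(data: str) -> str:
    """Encode 4 data bits as 7 bits: p1 p2 d1 p3 d2 d3 d4 (even parity)."""
    d1, d2, d3, d4 = (int(b) for b in data)
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return "".join(map(str, (p1, p2, d1, p3, d2, d3, d4)))

def hamming74_correct(code: str) -> str:
    """Recompute the three checks; the syndrome names the bad position."""
    b = [int(x) for x in code]
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    pos = s1 + 2 * s2 + 4 * s3     # 0 means no single-bit error
    if pos:
        b[pos - 1] ^= 1            # flip the erroneous bit back
    return "".join(map(str, b))
```

Any single flipped bit, parity or data, is located and corrected; this is the "exact position must be known" requirement from the error-correction discussion made concrete.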
Flow Control

We must deal with situations where the sender is sending data at a higher rate than the receiver can receive it. For reliable communication, the sender should transmit no more frames than the receiver can actually receive. This is called flow control, and it can be accomplished by two approaches:

Rate-based flow control:
  - The protocol has a built-in mechanism that limits the rate of data flowing per second from the sender's DLL, generally matched to the flow rate at the receiver's DLL.
  - The sender transmits without using any feedback from the receiver; this approach is not often used in the DLL.

Feedback-based flow control:
  - Feedback is used to tell the sender how the receiver is doing or when to send another frame, i.e., the sender transmits only after permission from the receiver.

Methods of flow control:
  - Unrestricted simplex protocol
  - Stop-and-wait
  - Sliding window
Unrestricted Simplex Protocol

  - Data is transmitted in a single direction only, and both the sending and receiving ends are always ready.
  - Processing time can be ignored.
  - Infinite buffer space is assumed to be available.
  - The communication channel between data link layers never loses or damages frames.
  - No sequence numbers or acknowledgements are needed; the only possible event is the arrival of an undamaged frame.
  - An unrealistic protocol.
Simplex Stop-and-Wait Protocol

  - One of the easiest ways of controlling the speed at which data is transmitted.
  - The communication channel is assumed to be error-free.
  - Once a frame has been received by the data link layer on the receiving end, the receiver sends an empty frame back to the sender to notify it that it is ready for another frame.
  - Acknowledging each frame means that the sender must stop and wait.
  - The major drawback of stop-and-wait flow control is that only one frame can be in transit at a time, which leads to inefficiency when the propagation delay is much longer than the transmission delay.

Protocols in which the sender sends one frame and waits for an acknowledgement before proceeding are stop-and-wait protocols. The figure shows an example of communication using this protocol. It is still very simple: the sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame. Note that sending two frames in this protocol involves the sender in four events and the receiver in two events.
Stop-and-Wait in Noisy Channels

  - Need timers, retransmissions, and duplicate detection.
  - Use sequence numbers. Why? To distinguish frames.
  - How large (e.g., in number of bits) do sequence numbers need to be?
Error Control

Error control refers to mechanisms to detect and correct errors that have occurred during transmission. Besides delaying data arrival, there is the possibility of two types of errors:
  - Lost frames (a frame fails to arrive at the other side)
  - Damaged frames (a frame arrives with some error)

Most error control techniques are based on (1) an error detection scheme (e.g., parity checks, CRC) and (2) a retransmission scheme. Error control schemes that involve error detection and retransmission of lost or corrupted frames are referred to as Automatic Repeat Request (ARQ) error control.

Versions of ARQ:
  - Stop-and-wait ARQ
  - Go-Back-N ARQ
  - Selective-Reject ARQ
Stop-and-Wait ARQ

The source transmits a single frame and then waits for an acknowledgement. Two types of errors may occur: in the first case, the frame is damaged on the way to the destination; in the second case, the acknowledgement is damaged, i.e., the data is received but the acknowledgement does not arrive successfully.

If the sent frame is damaged, the receiver simply rejects the frame. In this case a timer comes into action: the sender waits for an acknowledgement only for a certain period of time, as defined by the timer. When the timer expires, the frame is sent again.

In the second case, the frame is received properly by the receiver but the ACK does not reach the sender. Here too the sender retransmits the frame, so multiple copies of the same frame can arrive at the receiver. To avoid this, frames are alternately labelled 0 and 1, and positive acknowledgements are of the form ACK0 and ACK1.
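The alternating 0/1 labelling can be seen in a toy simulation. This is an illustration only, not a protocol implementation: loss is modelled as a coin flip, timeouts are implicit in the retry loop, and all names are my own:

```python
import random

def stop_and_wait_arq(frames, loss_rate=0.3, seed=42):
    """Deliver frames over a lossy link: the sender alternates seq 0/1,
    the receiver discards duplicates caused by lost ACKs."""
    random.seed(seed)
    delivered, expected, seq = [], 0, 0
    for payload in frames:
        while True:
            if random.random() < loss_rate:   # data frame lost: timeout, resend
                continue
            if seq == expected:               # new frame: deliver it
                delivered.append(payload)
                expected ^= 1
            # else: duplicate (our ACK was lost earlier) -> discard, re-ACK
            if random.random() < loss_rate:   # ACK lost: timeout, resend
                continue
            break                             # ACK received: next frame
        seq ^= 1
    return delivered
```

Because duplicates are recognized by their sequence number and discarded, every frame is delivered exactly once and in order, no matter how often the link drops frames or ACKs.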
Stop and Wait ARQ
Go-Back-N ARQ

  - In stop-and-wait ARQ the sender has to wait for an acknowledgement before transmitting the next frame.
  - In this approach, a number of frames can be sent without waiting for acknowledgements. A copy of each transmitted frame is kept until its acknowledgement is received.
  - If the receiver detects an error in a frame, it sends a reject acknowledgement for that frame and discards all frames received after it.
  - The transmitter must then go back and retransmit that frame and all subsequent frames.
Go Back N ARQ
Selective Reject ARQ

  - This transmission scheme is also known as selective retransmission.
  - Only the rejected frames are retransmitted in this approach, in contrast to all succeeding frames in Go-Back-N ARQ.
  - More efficient than Go-Back-N ARQ, as it minimizes the amount of retransmission.
  - But the receiver needs a buffer large enough to hold frames until the frame in error is retransmitted, and must contain logic for reinserting that frame in the proper sequence.
Selective Reject ARQ
Data Link Protocols

HDLC (High-level Data Link Control)
  - A bit-oriented data link layer protocol which supports both connectionless and connection-oriented service.
  - It can have point-to-point and point-to-multipoint connections, and implements the stop-and-wait protocol discussed earlier.
  - Although this protocol is more a theoretical reference than a practical one, most of the concepts defined in it are the basis for practical protocols such as PPP.

HDLC defines three types of frames:
  - I-frame (Information frame): used for data link user data and control information relating to user data (piggybacking).
  - S-frame (Supervisory frame): used to transport control information; also provides ARQ when piggybacking is not used.
  - U-frame (Unnumbered frame): reserved for system management; used for managing the link itself.
Figure: HDLC frames
  - Flag: marks the beginning and end of the frame. It has the unique bit pattern 01111110, which identifies the start and end of a frame.
  - Address field: identifies the station that transmits or is to receive the frame. Not needed for a point-to-point link, but always included for uniformity. It is 8 bits long but can be extended as needed.
  - Information field: contains user data from the network layer; present only in I-frames.
  - FCS (Frame Check Sequence): an error detection field of either 2 or 4 bytes.
  - Control field: one or two bytes used for flow and error control, sequence numbering, acknowledgement, and other control purposes. There are 3 control field formats, one per frame type.

Figure: Control field format for the different frame types
Control field for I-frames:
  - First bit = 0: indicates an I-frame.
  - The next three bits give the sequence number of the frame, N(S).
  - P/F (poll/final): provides dialogue control between primary and secondary stations. Called poll when set by the primary station to obtain a response from a secondary station; called final when set by a secondary station to indicate a response or the end of transmission.
  - N(R): the piggybacked acknowledgement (receive sequence number).

Control field for S-frames:
  - The first two bits, 10, indicate an S-frame.
  - Two code bits define the type of S-frame; with 2 bits there are four types:
      Type 0: Receive Ready
      Type 1: Reject (negative ACK)
      Type 2: Receive Not Ready
      Type 3: Selective Reject (used in selective repeat ARQ)

Control field for U-frames:
  - The five code bits combined can create up to 32 different types of U-frames, used to exchange session management and control information.
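Decoding a control byte from these layouts can be sketched as follows. One interpretive assumption here: HDLC transmits the low-order bit of each byte first, so the "first two bits 10" above correspond to the two low-order bits 01 of the stored byte:

```python
def hdlc_frame_type(control: int) -> str:
    """Classify a one-byte control field: LSB 0 -> I, low bits 01 -> S, 11 -> U."""
    if control & 0b1 == 0:
        return "I"
    return "S" if control & 0b11 == 0b01 else "U"

def parse_i_control(control: int):
    """Split an I-frame control byte into N(S), P/F, and N(R)."""
    ns = (control >> 1) & 0b111   # send sequence number
    pf = (control >> 4) & 0b1     # poll/final bit
    nr = (control >> 5) & 0b111   # piggybacked acknowledgement N(R)
    return ns, pf, nr
```

Three-bit sequence numbers give the 0-7 window used by basic HDLC.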
PPP (Point-to-Point Protocol)

  - A common protocol for point-to-point access. It is used by Internet users to connect home computers to an ISP's server.
  - PPP is a byte-oriented protocol using byte stuffing with the escape byte 01111101.
  - PPP defines the format of the frames to be exchanged between devices.
  - A newer version of PPP, Multilink PPP, provides connections over multiple links.
  - PPP also provides network address configuration: it helps a home user get a temporary network address to connect to the Internet.
  - PPP does not provide flow control, but it has an error control mechanism. PPP handles error detection, supports multiple protocols, allows IP addresses to be negotiated at connection time, and also permits authentication.
PPP Data Frame

  - Flag: delimiter (framing).
  - Address: does nothing (only one option).
  - Control: does nothing; multiple control fields are possible in the future.
  - Protocol: the upper-layer protocol to which the frame is delivered (e.g., PPP-LCP, IP, etc.).
  - Info: the upper-layer data being carried.
  - Check: cyclic redundancy check for error detection.

PPP frame format (continued)

  - Flag: a PPP frame starts and ends with a 1-byte flag with bit pattern 01111110.
  - Address: constant in this protocol, set to 11111111 (the broadcast address).
  - Protocol: defines what is being carried in the data field, either user data or other information.
  - Payload: carries either user data or other information.
  - Check (FCS, Frame Check Sequence): an error detection field of either 2 or 4 bytes.

(Note on byte stuffing: since PPP is a byte-oriented protocol, the flag in PPP is a byte that needs to be escaped whenever it appears in the data section of a frame. The escape byte is 01111101; every time the flag-like pattern appears in the data, this extra byte is stuffed to tell the receiver that the next byte is not a flag.)
PPP connection and transition phases:

  - Dead state: no active carrier at the physical layer.
  - Established state: one of the two nodes starts the communication.
  - Authentication: performed if the two parties agreed that authentication is required for the connection.
  - Open state: data transfer takes place in the open state. The connection remains open until one side decides to terminate it.
  - Terminate state: the connection remains in this state until the carrier (physical layer) signal is dropped, moving the system back to the dead state.
MAC Sublayer

Questions to be answered:
  - In broadcast networks, how is the channel divided between competing users?
  - What is Medium Access Control (MAC)?
  - What protocols are used for allocating a multiple-access channel?
Computer Networks
MAC Sublayer

Networks can be divided into two categories: those using point-to-point connections and those using broadcast channels. The medium access sublayer deals with broadcast networks and their protocols.

In any broadcast network, the key issue is how to determine who gets to use the channel when there is competition for it; the MAC sublayer fulfils this need. When two or more users try to use a shared single channel, there must be an algorithm to control access. This problem occurs in broadcast networks, which are also known as multiaccess channels.
MAC Sublayer

What is MAC?
  - Medium Access Control (MAC) is a sublayer of the data link layer.
  - The protocols used to determine who goes next on a multiaccess channel belong to the MAC sublayer.
  - MAC is important in LANs, which use a multiaccess channel as the basis for communication.
MAC Sublayer: The Channel Allocation Problem

There are two schemes to allocate a single channel among competing users:
  1) Static channel allocation.
  2) Dynamic channel allocation.
Static Channel Allocation

Two common static channel allocation techniques are TDMA and FDMA.

Time Division Multiple Access (TDMA):
With TDMA the time axis is divided into time slots of a fixed length. Each user is allocated a fixed set of time slots in which it can transmit. TDMA requires that users be synchronized to a common clock; typically extra overhead bits are required for synchronization.

Frequency Division Multiple Access (FDMA):
With FDMA the available frequency bandwidth is divided into disjoint frequency bands, and a fixed band is allocated to each user. FDMA requires a guard band between user frequency bands to avoid cross-talk.
Static Channel Allocation (FDM example):
In this scheme, Frequency Division Multiplexing (FDM) is used to allocate a single channel among competing users. For example, if there are N users, the bandwidth is divided into N equal-size portions.
  ++ FDM is a simple and efficient allocation mechanism.
  -- It wastes resources when the traffic is bursty or the channel is lightly loaded.
Static Channel Allocation (continued)

The performance of static channel allocation depends on:
  - The variation in the number of users over time.
  - The nature of the traffic sent by the users.

If the traffic on a shared medium is from a fixed number of sources, each transmitting at a fixed rate, static channel allocation can be very efficient. Voice and video (in their fixed-rate forms) have this property and are commonly placed on a shared channel using static allocation.

The variation in the number of users over time impacts the performance of a static allocation, because some method is needed to allocate slots to users as they come and go. When the traffic sent by a user is bursty, that user's portion of the channel may sit empty under a static allocation while another user could be using it. This suggests that a dynamic allocation will perform better in such cases.
Dynamic Channel Allocation
With a static approach, the channel's capacity is essentially
divided into fixed portions; each user is then allocated a portion
for all time. If the user has no traffic to use in its portion,
then it goes unused.
With a dynamic approach the allocation of the channel
changes based on the traffic generated by the users.
Generally, a static allocation performs better when the traffic is
predictable. A dynamic channel allocation tries to get better
utilization and lower delay on a channel when the traffic is
unpredictable.
Dynamic Channel Allocation:
Before discussing the algorithms used for dynamic allocation, we need to state the following assumptions:
  1) Station model: N independent stations generate frames for transmission (generate -> block -> transmit).
  2) Single channel assumption: a single channel is available for all communication.
  3) Collision assumption.
  4) Continuous time, or slotted time.
  5) Carrier sense, or no carrier sense.
MAC Protocols: a taxonomy

Three broad classes:
  - Channel partitioning:
      divide the channel into smaller pieces (time slots, frequency, code);
      allocate a piece to each node for exclusive use.
  - Random access:
      the channel is not divided; collisions are allowed;
      recover from collisions.
  - Taking turns:
      nodes take turns, but nodes with more to send can take longer turns.
Random Access Protocols

  - When a node has a packet to send, it transmits at the full channel data rate R, with no a priori coordination among nodes.
  - Two or more transmitting nodes -> collision.
  - A random access MAC protocol specifies:
      how to detect collisions;
      how to recover from collisions (e.g., via delayed retransmissions).
  - Examples of random access MAC protocols:
      ALOHA, slotted ALOHA;
      CSMA, CSMA/CD, CSMA/CA.
Taking Turns MAC Protocols

Polling:
  - A master node invites slave nodes to transmit in turn.
  - Typically used with "dumb" slave devices.
  - Concerns: polling overhead, latency, single point of failure (the master).
Taking Turns MAC Protocols

Token passing:
  - A control token is passed from one node to the next sequentially.
  - A node holding the token may transmit; with nothing to send, it passes the token message on.
  - Concerns: token overhead, latency, single point of failure (the token).
Summary of MAC Protocols

  - Channel partitioning, by time, frequency, or code: Time Division, Frequency Division.
  - Random access (dynamic): ALOHA, S-ALOHA, CSMA, CSMA/CD.
      Carrier sensing is easy in some technologies (wired), hard in others (wireless).
      CSMA/CD is used in Ethernet; CSMA/CA is used in 802.11.
  - Taking turns: polling from a central site, or token passing. Examples: Bluetooth, FDDI, Token Ring.
Ethernet

Ethernet is a family of technologies that provides data-link and physical-layer specifications for controlling access to a shared network medium. It has emerged as the dominant technology used in LAN networking.

Ethernet was originally developed by Xerox in the 1970s. In the mid-1980s, the Institute of Electrical and Electronics Engineers (IEEE) published a formal standard for Ethernet, defined as the IEEE 802.3 standard. The original 802.3 Ethernet operated at 10 Mbps and successfully supplanted competing LAN technologies such as Token Ring. Ethernet supports two topology types: bus and star.

Ethernet can also be defined as a LAN protocol that is used in bus and star topologies and implements CSMA/CD as the medium access method.

Ethernet has several benefits over other LAN technologies: it is simple to install and manage, inexpensive, flexible and scalable, and easy to interoperate between vendors.

Ethernet can be deployed over three types of cabling:
  - Coaxial cabling (almost entirely deprecated in Ethernet networking)
  - Twisted-pair cabling
  - Fiber-optic cabling
IEEE 802.3

IEEE 802.3 is a working group, and a collection of IEEE standards produced by that working group, defining the physical layer and the data link layer's media access control (MAC) for wired Ethernet. This is generally a local area network technology, with some wide area network applications. Physical connections are made between nodes and/or infrastructure devices (hubs, switches, routers) by various types of copper or fiber cable.

  - 802.3 specifies the physical media and the working characteristics of Ethernet.
  - 802.3 also defines the LAN access method using CSMA/CD.

802.3 frame header contents:
  - Preamble: tells the receiving node that a data frame is coming. This field is simply a 56-bit (7-byte) alternating pattern of 1s and 0s.
  - SOF: start-of-frame delimiter.
  - Destination address: the MAC address of the machine to which the frame is to be delivered; a 6-byte (48-bit) address.
IEEE 802.3 frame format

  - Source address: like the destination address, a 48-bit field. However, this value is always a unicast address and always reflects the MAC address of the sending node.
  - Length: a 16-bit field containing the number of bytes of information in the following data field.
  - Data field: contains the actual data to be processed by the upper-level protocols of the recipient node. The length of the data must be between 46 and 1500 bytes.
  - FCS: the frame check sequence, a 4-byte (32-bit) cyclic redundancy check (CRC) value. This value is calculated by the transmitting node and appended to the frame. On the receiving end, the receiving node also calculates this value; if the values do not match, there has been a transfer error and the frame is discarded.
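Extracting these header fields from a captured frame can be sketched as follows (a hedged illustration: the function name is my own, and it assumes the preamble and SFD have already been stripped, as network hardware normally does):

```python
import struct

def parse_802_3(frame: bytes):
    """Split an 802.3 frame into destination, source, length, data, and FCS."""
    dest, src = frame[0:6], frame[6:12]
    (length,) = struct.unpack("!H", frame[12:14])   # 16-bit length, network order
    data = frame[14:14 + length]
    fcs = frame[14 + length:14 + length + 4]        # trailing 32-bit CRC
    as_mac = lambda b: ":".join(f"{x:02x}" for x in b)
    return as_mac(dest), as_mac(src), length, data, fcs
```

The length field tells the parser exactly where the data ends and the 4-byte FCS begins.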
IEEE 802.4

In a token bus network, a station must have possession of a token before it can transmit. The IEEE 802.4 committee defined token bus standards as broadband networks, as opposed to Ethernet's baseband transmission technique. Physically, the token bus is a linear or tree-shaped cable to which the stations are attached.

Token bus topology is well suited to groups of users that are separated by some distance. IEEE 802.4 token bus networks are constructed with 75-ohm coaxial cable using a bus topology. The broadband characteristics of the 802.4 standard support transmission over several different channels simultaneously.

In token bus, each station receives each frame; the station whose address is specified in the frame processes it, and the other stations discard the frame.

Due to difficulties handling device failures and adding new stations to the network, token bus gained a reputation for being unreliable and difficult to upgrade.
IEEE 802.4

Preamble: This. Field is at least 1 byte long. It is used for bit synchronization.
Start Delimiter: This one byte field marks the beginning of frame.
Frame Control: This one byte field specifies the type of frame. It distinguishes data frames from control frames.
For data frames it carries the frame's priority. For control frames, it specifies the frame type. The control frame types
include token passing and various ring maintenance frames, including the mechanism for letting a new station enter
the ring and the mechanism for allowing stations to leave the ring.
Destination address: It specifies a 2- or 6-byte destination address.
Source address: It specifies a 2- or 6-byte source address.
Data: This field may be up to 8182 bytes long when 2-byte addresses are used and up to 8174 bytes long when
6-byte addresses are used.
Checksum: This 4 byte field detects transmission errors.
End Delimiter: This one byte field marks the end of frame.
IEEE 802.5
IEEE 802.5 Token Ring: Token Ring is the IEEE 802.5 standard for a token-passing ring in
communication networks. A ring consists of a collection of ring interfaces connected by point-to-
point lines, i.e. the ring interface of one station is connected to the ring interfaces of its left station as
well as its right station. Internally, signals travel around the network from one station
to the next in a ring.
IEEE 802.5 does not specify a topology, although virtually all IEEE 802.5 implementations are
based on a star.
Token Ring is a LAN protocol defined in the IEEE 802.5 where all stations are connected in a ring
and each station can directly hear transmissions only from its immediate neighbor. Permission to
transmit is granted by a message (token) that circulates around the ring. A token is a special bit
pattern (3 bytes long). There is only one token in the network.
Operation: Token-passing networks move a small frame, called a token, around the network.
Possession of the token grants the right to transmit. If a node receiving the token has no
information to send, it passes the token to the next end station. Each station can hold the token for
a maximum period of time.
Frame Format
Token Ring and IEEE 802.5 support two basic frame types: tokens and data/command
frames. Tokens are 3 bytes in length and consist of a start delimiter, an access control byte,
and an end delimiter. Data/command frames vary in size, depending on the size of the
Information field. Data frames carry information for upper-layer protocols, while
command frames contain control information and have no data for upper-layer protocols
Token Frame Fields
Start delimiter - Alerts each station of the arrival of a token (or data/command frame). This
field includes signals that distinguish the byte from the rest of the frame by violating the
encoding scheme used elsewhere in the frame.
Access-control byte - Contains the Priority field (the most significant 3 bits) and the
Reservation field (the least significant 3 bits), as well as a token bit (used to differentiate a token
from a data/command frame) and a monitor bit (used by the active monitor to determine
whether a frame is circling the ring endlessly).
End delimiter - Signals the end of the token or data/command frame. This field also contains
bits to indicate a damaged frame and identify the frame that is the last in a logical sequence.
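The bit layout of the access-control byte can be made concrete with a small Python decoder. This is a sketch based on the field positions listed above — priority in the most significant three bits, then the token and monitor bits, and the reservation field in the least significant three:

```python
def parse_access_control(ac: int) -> dict:
    """Decode an IEEE 802.5 access-control byte laid out as PPP T M RRR."""
    return {
        "priority":    (ac >> 5) & 0b111,  # most significant 3 bits
        "token_bit":   (ac >> 4) & 0b1,    # distinguishes a token from a data/command frame
        "monitor_bit": (ac >> 3) & 0b1,    # set by the active monitor to catch circling frames
        "reservation": ac & 0b111,         # least significant 3 bits
    }
```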
Data/Command Frame Fields
Frame-control byte - Indicates whether the frame contains data or control information. In
control frames, this byte specifies the type of control information.
Destination and source addresses - Consists of two 6-byte address fields that identify the
destination and source station addresses.
Data - A variable-length field whose length is limited by the ring token holding time, which
defines the maximum time a station can hold the token.
Frame-check sequence (FCS) - Is filled in by the source station with a calculated value
dependent on the frame contents. The destination station recalculates the value to determine
whether the frame was damaged in transit. If so, the frame is discarded
Frame Status - Is a 1-byte field terminating a command/data frame. The Frame Status
field includes the address-recognized indicator and frame-copied indicator.
Pure ALOHA
The basic idea of an ALOHA system is simple: let users transmit whenever they have
data to be sent. There will be collisions, of course, and the colliding frames will be
damaged. If a frame is destroyed, the sender just waits a random amount of time
and sends it again. How does a sender know that there was a collision?
- Due to the feedback property of broadcasting, a sender can always find out whether its
frame was destroyed by listening to the channel, the same way other users do.
- If listening while transmitting is not possible for some reason, acknowledgements are
needed.
- Pure ALOHA dictates that when the time-out period passes, each station waits a
random amount of time before resending its frame. The randomness will help avoid
more collisions. We call this time the back-off time TB. Pure ALOHA has a second
method to prevent congesting the channel with retransmitted frames: after a maximum
number of retransmission attempts, a station must give up and try again later.
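The retransmission rule just described — wait a random back-off time TB, and give up after a maximum number of attempts — can be sketched as follows. The transmit callback and the numeric limits are hypothetical, chosen only for illustration:

```python
import random

MAX_ATTEMPTS = 15        # give up after this many tries (illustrative value)
MAX_BACKOFF = 16.0       # upper bound on the random back-off TB, in frame times

def send_with_backoff(transmit, frame):
    """transmit(frame) returns True on success, False on collision (hypothetical callback)."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if transmit(frame):
            return attempt               # number of tries it took
        tb = random.uniform(0, MAX_BACKOFF)
        # a real station would now wait tb frame times before resending;
        # the timed wait is omitted in this sketch
    return None                          # congestion control: give up and try later
```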
Procedure for ALOHA protocol
Pure ALOHA
In pure ALOHA, frames are transmitted at completely arbitrary times.
The throughput of ALOHA systems is maximized by having a uniform frame size
rather than by allowing variable-length frames.
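Quantitatively, with Poisson-offered traffic G (transmission attempts per frame time), the classic textbook analysis gives pure ALOHA's throughput S as:

```latex
S = G e^{-2G}, \qquad S_{\max} = \frac{1}{2e} \approx 0.184 \quad \text{at } G = \tfrac{1}{2}
```

Any frame started within one frame time before or after ours collides with it, which is why the exponent carries the factor of 2.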
Slotted Aloha
In 1972, Roberts published a method for doubling the capacity of an ALOHA system (Roberts,
1972). His proposal was to divide time into discrete intervals, each interval corresponding to one
frame.
It is an improved version of ALOHA, with a reduced probability of collision.
Assumptions:
time is divided into slots of size X=L/R (one frame time)
nodes start to transmit only at the beginning of a slot
nodes are synchronized so that each node knows when the slots begin
Operation:
when a node has a fresh frame to send, it waits until the beginning of the next slot and transmits
if there is a collision, the node retransmits the frame after a back-off time
This approach requires the users to agree on slot boundaries. One way to achieve synchronization
would be to have one special station emit a pip at the start of each interval, like a clock.
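The slotted-ALOHA gain can be checked with a short Monte-Carlo sketch: n stations each transmit in a slot with probability G/n, and a slot carries a frame successfully only when exactly one station transmits. The station and slot counts are arbitrary simulation parameters, not part of the protocol.

```python
import math
import random

def slotted_aloha_throughput(G: float, n_stations: int = 50,
                             n_slots: int = 20000, seed: int = 1) -> float:
    """Estimate throughput S (successful frames per slot) at offered load G."""
    rng = random.Random(seed)
    p = G / n_stations                    # per-station transmission probability
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(rng.random() < p for _ in range(n_stations))
        if transmitters == 1:             # exactly one sender: no collision
            successes += 1
    return successes / n_slots

# Theory predicts S = G * e^(-G): about 0.368 frames/slot at G = 1,
# double pure ALOHA's maximum of about 0.184.
```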
Pure vs Slotted ALOHA
[Figure: throughput S versus offered traffic G for pure and slotted ALOHA]
CSMA(Carrier Sense Multiple Access)
Carrier sense means that a transmitter uses feedback from a receiver to determine
whether another transmission is in progress before initiating a transmission. That is, it
tries to detect the presence of a carrier wave from another station before attempting to
transmit. If a carrier is sensed, the station waits for the transmission in progress to finish
before initiating its own transmission. In other words, CSMA is based on the principle
"sense before transmit" or "listen before talk".
Multiple access means that multiple stations send and receive on the medium.
Transmissions by one node are generally received by all other stations connected to the
medium.
The key feature of CSMA is that each transmitter senses the medium and transmits a
packet only if the medium is sensed idle. Due to its simple
and distributed nature, it has been regarded as one of the most practical MAC protocols
in wireless networks.
1-Persistent CSMA
Sense the channel.
If busy, keep listening to the channel and transmit immediately when the channel
becomes idle.
If idle, transmit a packet immediately.
If a collision occurs,
Wait a random amount of time and start over again.
The protocol is called 1-persistent because the host transmits with a probability of 1
whenever it finds the channel idle.
Even if the propagation delay is zero, there will be collisions.
Example:
If stations B and C become ready in the middle of A's transmission, B and C will wait
until the end of A's transmission and then both will begin transmitting simultaneously,
resulting in a collision. If B and C were not so greedy, there would be fewer collisions.
Non-Persistent CSMA
In this protocol, a conscious attempt is made to be less greedy than in the previous one.
Before sending, a station senses the channel. If no one else is
sending, the station begins doing so itself.
However, if the channel is already in use, the station does not
continually sense it for the purpose of seizing it immediately upon
detecting the end of the previous transmission.
Instead, it waits a random period of time and then repeats the
algorithm.
Consequently, this algorithm leads to better channel utilization but
longer delays than 1-persistent CSMA.
Tradeoff between 1-Persistent and Non-Persistent CSMA
If B and C become ready in the middle of A's transmission,
1-Persistent: B and C collide
Non-Persistent: B and C probably do not collide
If only B becomes ready in the middle of A's transmission,
1-Persistent: B succeeds as soon as A ends
Non-Persistent: B may have to wait
p-Persistent CSMA
This is a sort of trade-off between the 1-persistent and non-persistent CSMA
access modes.
When the sender is ready to send data, it checks continually if the
medium is busy. If the medium becomes idle, the sender transmits a
frame with a probability p.
If the station chooses not to transmit (the probability of this event
is 1-p), the sender waits until the next available time slot and
transmits again with the same probability p.
This process repeats until the frame is sent or some other sender
starts transmitting. In the latter case the sender monitors the channel,
and when idle, transmits with a probability p, and so on.
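Side by side, the three persistence strategies differ only in what a station does around a busy channel. In this sketch, channel_busy() and transmit() are hypothetical stand-ins for real carrier sensing and frame transmission, and the timed waits are elided:

```python
import random

def one_persistent(channel_busy, transmit):
    """Sense continuously; transmit the instant the channel goes idle."""
    while channel_busy():
        pass                              # keep listening (sends with probability 1 when idle)
    transmit()

def non_persistent(channel_busy, transmit, max_wait=1.0):
    """If busy, back off a random time before sensing again."""
    while channel_busy():
        backoff = random.uniform(0, max_wait)
        # a real station sleeps for `backoff` here before re-sensing
    transmit()

def p_persistent(channel_busy, transmit, p=0.5, rng=random):
    """When idle, transmit with probability p; otherwise defer one slot and retry."""
    while True:
        while channel_busy():
            pass                          # wait for the channel to become idle
        if rng.random() < p:
            transmit()
            return
        # with probability 1-p: defer to the next slot, then sense again
```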
CSMA/CD
Carrier Sense Multiple Access/Collision Detect (CSMA/CD) is the protocol for carrier
transmission access in Ethernet networks. On Ethernet, any device can try to send a frame
at any time. Each device senses whether the line is idle and therefore available to be used.
If it is, the device begins to transmit its first frame. If another device has tried to send at the
same time, a collision is said to occur and the frames are discarded. Each device then
waits a random amount of time and retries until successful in getting its transmission sent.
CSMA/CD is used to improve CSMA performance by terminating transmission as soon as
a collision is detected, reducing the probability of a second collision.
Use one of the CSMA persistence algorithms (non-persistent, 1-persistent, p-persistent)
for transmission
If a collision is detected during transmission, cease transmission and transmit a jam signal
to notify other stations of collision
After sending the jam signal, back off for a random amount of time, then start to transmit
again
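Ethernet's concrete rule for the last step is binary exponential back-off: after the n-th collision a station waits a random whole number of slot times drawn uniformly from 0 to 2^min(n,10) − 1, and gives up after 16 attempts. A sketch:

```python
import random

MAX_ATTEMPTS = 16      # 802.3: report an error after 16 failed attempts
BACKOFF_CEILING = 10   # the exponent stops growing after the 10th collision

def backoff_slots(n_collisions: int, rng=random) -> int:
    """Pick the random back-off, in slot times, after the n-th collision."""
    if n_collisions >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: give up and report an error")
    k = min(n_collisions, BACKOFF_CEILING)
    return rng.randrange(0, 2 ** k)      # uniform over [0, 2^k - 1]
```

Note how the expected wait doubles with each collision, automatically adapting to however many stations are contending for the channel.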
CSMA/CD is specified in the IEEE 802.3 standard.
CSMA/CA
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a modification of CSMA.
Collision avoidance is used to improve the performance of CSMA by attempting to be
less "greedy" on the channel.
If the channel is sensed busy before transmission then the transmission is deferred for
a "random" interval. This reduces the probability of collisions on the channel. It just
checks whether the medium is in use: if it is busy, then the transmitter waits until it is
idle before it starts transmitting.
With wireless installations, it is not possible for the transmitter to detect whether a
collision has occurred or not. That is why wireless installations often use CSMA/CA
instead of CSMA/CD.
CSMA/CA reduces the possibility of a collision, while CSMA/CD only minimizes the
recovery time.
CSMA/CD is typically used in wired networks, while CSMA/CA is used in wireless
networks.
802.11