
 Framing techniques
 Character count
 Starting and ending characters with character stuffing
 Start and end flags with bit stuffing
 Line discipline
 Enquiry/Acknowledge (ENQ/ACK)
 Poll/Select
 Error control
 ARQ methods
 Flow control
 Stop and wait
 Sliding window
 Error detection
 VRC
 LRC
 CRC
 Checksum
 Queuing modes in communications networks
Framing
 Frames are the small data units created by the data link layer, and the process of creating frames by the data link layer is known as framing.
 Framing methods implemented by the data link layer are:
 Character count
 Starting and ending characters with character stuffing
 Starting and ending flags with bit stuffing
 Physical layer coding violations
Character Count.
 This method specifies the number of characters that are present in a particular frame.
 This information is carried in a special field in the frame header.
 Drawback: a problem occurs when the count is lost or corrupted, which makes it difficult to resynchronize after the loss/error.
Character count

3 0 1  |  4 1 7 5  |  7 2 0 1 4 2 6  |  2 1
Frame 1   Frame 2     Frame 3           Frame 4

Character count method: the count at the start of each frame includes the count byte itself.


Starting & Ending Characters With
Character Stuffing.
 In this method a frame starts and ends with special characters that mark the beginning and end of the frame.
 Each frame begins with the ASCII character sequence DLE STX (Data Link Escape, Start of Text) and ends with the ASCII character sequence DLE ETX (Data Link Escape, End of Text).
Starting & Ending Characters With
Character Stuffing.
A B C D                              data from the network layer

DLE STX  A B C D  DLE ETX            starting and ending characters added by the data link layer
(start of frame | data | end of frame)

Character Stuffing.
A B DLE C D                          data on the sender side

DLE STX  A B DLE DLE C D  DLE ETX    transmitted frame
(start of frame | data with the DLE doubled | end of frame)

A B DLE C D                          data on the receiver side (after destuffing)
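 Illustration (not part of the original slides): a minimal Python sketch of character stuffing and destuffing; the byte values for DLE, STX and ETX are the standard ASCII control codes.

```python
DLE, STX, ETX = 0x10, 0x02, 0x03  # standard ASCII control codes

def stuff(payload: bytes) -> bytes:
    """Frame the payload: DLE STX <data, with every DLE doubled> DLE ETX."""
    body = bytearray()
    for b in payload:
        body.append(b)
        if b == DLE:                      # escape a DLE inside the data by doubling it
            body.append(DLE)
    return bytes([DLE, STX]) + bytes(body) + bytes([DLE, ETX])

def destuff(frame: bytes) -> bytes:
    """Strip the delimiters and collapse doubled DLEs back to one."""
    assert frame[:2] == bytes([DLE, STX]) and frame[-2:] == bytes([DLE, ETX])
    body, out, i = frame[2:-2], bytearray(), 0
    while i < len(body):
        out.append(body[i])
        i += 2 if body[i] == DLE else 1   # skip the duplicated DLE
    return bytes(out)

data = bytes([ord('A'), ord('B'), DLE, ord('C'), ord('D')])
assert destuff(stuff(data)) == data
```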
Starting & ending Flags
With Bit Stuffing.
 In this method, each frame begins and ends with a special bit pattern 01111110, called a flag.
 Therefore each frame starts with 01111110 and also ends with 01111110.
 The main problem with this method arises when the flag byte 01111110 appears in the data.
 This problem is handled by a technique called bit stuffing, which is similar to character stuffing: after every five consecutive 1 bits in the data, the sender inserts a 0 bit.
Starting & ending Flags With
Bit Stuffing.
Transmitted frame (stuffing performed by the data link layer):

01111110   0101 0011111 0 101   01111110
(starting flag | data with the stuffed 0 bit after five consecutive 1s | ending flag)

Data received by the network layer on the receiver side, after destuffing by the data link layer:

0101 001111110101
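 Illustration (my own sketch, not from the slides): bit stuffing and destuffing on a string of '0'/'1' characters; a 0 is inserted after every run of five 1s and removed again on the receiver side.

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s, then add the flags."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return FLAG + "".join(out) + FLAG

def bit_destuff(frame: str) -> str:
    """Strip the flags and drop the 0 that follows every run of five 1s."""
    body = frame[len(FLAG):-len(FLAG)]
    out, run, i = [], 0, 0
    while i < len(body):
        out.append(body[i])
        run = run + 1 if body[i] == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "0101001111110101"
assert bit_destuff(bit_stuff(data)) == data
```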
Physical Layer Coding Violation.
 This framing method is used only in those networks in which the encoding on the physical medium contains some redundancy.
 Some LANs encode each bit of data by using two physical bits, i.e., Manchester coding is used.
 In this method bit 1 is encoded as a high-low (10) pair and bit 0 is encoded as a low-high (01) pair, as shown in the figure. Because the combinations high-high and low-low do not occur in normal data, they can be used to mark frame boundaries.
Physical Layer Coding Violation.

[Figure: Manchester encoding of the bit sequence 0 1 0 1 1 0]
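 A small Python sketch (assuming the 1 → 10, 0 → 01 convention stated above) of Manchester encoding and decoding; decoding fails on the 11/00 pairs that would signal a coding violation.

```python
def manchester_encode(bits: str) -> str:
    # bit 1 -> high-low (10), bit 0 -> low-high (01)
    return "".join("10" if b == "1" else "01" for b in bits)

def manchester_decode(signal: str) -> str:
    out = []
    for i in range(0, len(signal), 2):
        pair = signal[i:i + 2]
        if pair == "10":
            out.append("1")
        elif pair == "01":
            out.append("0")
        else:
            raise ValueError("coding violation at pair %d: %s" % (i // 2, pair))
    return "".join(out)

assert manchester_decode(manchester_encode("010110")) == "010110"
```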
Line discipline

 Select - the primary selects the secondary to which it wants to send data (e.g. C); if the secondary is ready to receive, it sends an ACK, otherwise a NAK.
 Polling - this is used when the primary wants data from a secondary. The primary sends a poll signal, and the secondary replies with an ACK/NAK within the time slot fixed by the primary.
Flow control

 Flow control is a technique for speed-matching of transmitter and receiver. Flow control ensures that a transmitting station does not overwhelm a receiving station with data.
 We will discuss two protocols for flow control:
 Stop-and-Wait
 Sliding Window
 For the time being, we assume that we have a perfect
channel (no errors)
Stop-and-Wait
 Simplest form of flow control
 In Stop-and-Wait flow control, the receiver indicates
its readiness to receive data for each frame
 Operations:
 1. Sender: Transmit a single frame
 2. Receiver: Transmit acknowledgment (ACK)
 3. Go to 1.
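 Illustration (my own toy example, not from the slides): a Python sketch of the stop-and-wait loop over an assumed error-free channel; the sender transmits one frame and blocks until the receiver's ACK comes back.

```python
import queue
import threading

data_ch, ack_ch = queue.Queue(), queue.Queue()   # perfect (error-free) channels

def sender(frames):
    for f in frames:
        data_ch.put(f)        # 1. transmit a single frame
        ack_ch.get()          # wait for the ACK, then go to 1.
        print("sender: got ACK for", f)

def receiver(n):
    for _ in range(n):
        f = data_ch.get()     # 2. receive the frame ...
        print("receiver: got", f)
        ack_ch.put("ACK")     # ... and transmit an acknowledgment

frames = ["F0", "F1", "F2"]
t = threading.Thread(target=receiver, args=(len(frames),))
t.start()
sender(frames)
t.join()
```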
Sliding Window
 Sending Window:
 At any instant, the sender is permitted to send frames with
sequence numbers in a certain range (the sending window)
Sliding Window
 Receiving Window:
 The receiver maintains a receiving window
corresponding to the sequence numbers of frames that
are accepted.
Sliding Window
 How is “flow control” achieved?
 Receiver can control the size of the sending window
 By limiting the size of the sending window, the data flow from sender to receiver can be limited
 Interpretation of an ACK N message:
 The receiver acknowledges all packets up to (but not including) sequence number N
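 A rough Python sketch (illustrative only; the 3-bit sequence space and window size of 7 follow the example on the next slide) of the sender-side window bookkeeping: ACK N acknowledges everything before N and slides the window forward.

```python
MOD = 8            # 3-bit sequence numbers: 0..7
WINDOW = 7         # maximum window size (MOD - 1)

class SlidingWindowSender:
    def __init__(self):
        self.base = 0          # oldest unacknowledged sequence number
        self.next_seq = 0      # next sequence number to use
        self.in_flight = 0     # frames sent but not yet acknowledged

    def can_send(self):
        return self.in_flight < WINDOW

    def send(self):
        assert self.can_send(), "window closed, must wait for an ACK"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % MOD
        self.in_flight += 1
        return seq                     # the frame with this number goes on the wire

    def on_ack(self, n):
        """ACK n acknowledges all frames up to (but not including) n."""
        acked = (n - self.base) % MOD
        self.in_flight -= acked
        self.base = n

s = SlidingWindowSender()
sent = [s.send() for _ in range(3)]     # send F0, F1, F2
s.on_ack(3)                             # "RR 3": ready for frame 3, window re-opens
print(sent, s.can_send(), s.in_flight)  # [0, 1, 2] True 0
```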
Sliding Window - Example
 The above example assumes a 3-bit sequence number field and a maximum window size of seven frames. Initially, A and B have windows indicating that A may transmit seven frames, beginning with frame 0 (F0).
 After transmitting three frames (F0, F1, F2) without acknowledgment, A has shrunk its window to four frames. The window indicates that A may transmit four frames, beginning with frame number 3.
 B then transmits an RR (receive ready) 3, which means: "I have received all frames up through frame number 2 and am ready to receive frame number 3; in fact, I am prepared to receive seven frames, beginning with frame number 3." With this acknowledgment, A is back up to permission to transmit seven frames, still beginning with frame 3. A proceeds to transmit frames 3, 4, 5, and 6. B returns an RR 4, which allows A to send up to and including frame F2.
Error detection and control
Basic concepts
 Networks must be able to transfer data from one device to
another with complete accuracy.
 Data can be corrupted during transmission.
 For reliable communication, errors must be detected and
corrected.
 Error detection and correction are implemented
either at the data link layer or the transport layer of
the OSI model.
Types of Errors
Single-bit error
Single-bit errors are the least likely type of error in serial data transmission, because the noise must have a very short duration, which is very rare. However, this kind of error can happen in parallel transmission.
Example:
 If data is sent at 1Mbps then each bit lasts only
1/1,000,000 sec. or 1 μs.
 For a single-bit error to occur, the noise must have
a duration of only 1 μs, which is very rare.
Burst error
The term burst error means that two or more
bits in the data unit have changed from 1 to 0 or
from 0 to 1.

A burst error does not necessarily mean that the errors occur in consecutive bits; the length of the burst is measured from the first corrupted bit to the last corrupted bit. Some bits in between may not have been corrupted.
 Burst errors are most likely to happen in serial transmission, since the duration of the noise is normally longer than the duration of a bit.
 The number of bits affected depends on the data rate and the duration of the noise.
Error detection
Error detection means to decide whether the received
data is correct or not without having a copy of the
original message.

Error detection uses the concept of redundancy, which means adding extra bits for detecting errors at the destination.
Redundancy
Error Detection
Four types of redundancy checks are used in data communications: vertical redundancy check (VRC), longitudinal redundancy check (LRC), cyclic redundancy check (CRC), and checksum.
Error Detection –VRC
Vertical redundancy check/Parity check
 Append a parity bit to the end of a block of data.
 A typical example is ASCII transmission, in which a
parity bit is attached to each 7-bit ASCII character. The
value of this bit is selected so that the character has an
even number of 1s (even parity) or an odd number of 1s
(odd parity).
 Ex. Data:1110001
Transmitted data: 11100011(Odd parity)

Drawback: cannot detect an error in which an even number of bits (e.g. two bits) has changed.
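 A small Python sketch (illustrative) of appending and checking a VRC/parity bit on a 7-bit ASCII character, and of the two-bit error that slips through.

```python
def add_parity(bits7, odd=False):
    """Append a parity bit so the total number of 1s is even (or odd)."""
    ones = bits7.count("1")
    parity = (ones % 2) if not odd else 1 - (ones % 2)
    return bits7 + str(parity)

def check_parity(bits8, odd=False):
    want = 1 if odd else 0
    return bits8.count("1") % 2 == want

word = add_parity("1110001", odd=True)    # -> "11100011", as in the slide's example
print(word, check_parity(word, odd=True))

corrupted = "00100011"                    # first two bits flipped
print(check_parity(corrupted, odd=True))  # still True -> two-bit error not detected
```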


Longitudinal Redundancy Check
(LRC)
 In this error detection method, a block of bits is organized in a table with rows and columns. Then the parity bit for each column is calculated, and a new row of eight bits, which are the parity bits for the whole block, is created. After that the newly calculated parity bits are attached to the original data and the whole block is sent to the receiver.
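 A minimal sketch (the data block is my own example) of computing an LRC row over 8-bit data units using even column parity.

```python
def lrc(units):
    """Column-wise even parity over a block of 8-bit units."""
    return "".join(
        str(sum(int(u[i]) for u in units) % 2) for i in range(8)
    )

block = ["11100111", "11011101", "00111001", "10101001"]
check_row = lrc(block)
transmitted = block + [check_row]

# Receiver recomputes the LRC over the data units and compares it with the received one:
received_ok = lrc(transmitted[:-1]) == transmitted[-1]
print(check_row, received_ok)
```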
LRC
 LRC increases the likelihood of detecting burst errors. An LRC of n bits can easily detect a burst error of n bits.
 However, if two bits in one data unit are damaged and two bits in exactly the same positions in another data unit are also damaged, the LRC checker will not detect the error.
 For example, if the 5th and 7th bits of both the 1st and 2nd data units are changed, the LRC calculated by the receiver is still the same as the LRC received, so the receiver's checker cannot detect this burst error.
Cyclic Redundancy Check (CRC)
 Given a k-bit block of bits, or message, the transmitter
generates an n-bit sequence, known as a frame check
sequence (FCS), so that the resulting frame,
consisting of k + n bits, is exactly divisible by some
predetermined number. The receiver then divides the
incoming frame by that number and, if there is no
remainder, assumes there was no error.
 T = (k + n)-bit frame to be transmitted, with n < k
 M = k-bit message, the first k bits of T
 F = n-bit FCS, the last n bits of T
 P = pattern of n + 1 bits; this is the predetermined
divisor
CRC - Example
[Figure: worked example of the modulo-2 division that generates the FCS; a code sketch follows.]
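 A small Python sketch (illustrative; the message M and divisor P below are example values, not taken from the slides' figure) of generating and checking the FCS by modulo-2 (XOR) division.

```python
def mod2_remainder(bits, divisor):
    """Modulo-2 (XOR) long division; returns the n-bit remainder (n = len(divisor) - 1)."""
    bits = list(bits)
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if bits[i] == "1":
            for j in range(n):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return "".join(bits[-(n - 1):])

def crc_sender(message, divisor):
    """Append n zero bits, divide by P, and replace the zeros with the remainder (FCS)."""
    n = len(divisor) - 1
    fcs = mod2_remainder(message + "0" * n, divisor)
    return message + fcs

def crc_receiver(frame, divisor):
    """The frame is accepted if it is exactly divisible by P (zero remainder)."""
    return "1" not in mod2_remainder(frame, divisor)

M, P = "1010001101", "110101"
T = crc_sender(M, P)          # -> "101000110101110" (FCS = 01110)
assert crc_receiver(T, P)
```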

CRC
 The pattern P is chosen to be one bit longer than the desired
FCS, and the exact bit pattern chosen depends on the type of
errors expected. At minimum, both the high- and low-order
bits of P must be 1.
 It can be shown that all of the following errors are not divisible
by a suitably chosen P(X) and, hence, are detectable:
 All single-bit errors.
 All double-bit errors, as long as P(X) has at least three 1s.
 Any odd number of errors, as long as P(X) contains a factor (X + 1).
 Any burst error for which the length of the burst is less than the length
of the divisor polynomial; that is, less than or equal to the length of the
FCS.
 Most larger burst errors.
 Reading assignment: CRC Using Digital Logic
Checksum
At the sender
The unit is divided into k sections, each of n bits.
All sections are added together using one’s
complement to get the sum.
The sum is complemented and becomes the
checksum.
The checksum is sent with the data.
At the receiver
The unit is divided into k sections, each of n bits.
All sections are added together using one’s
complement to get the sum.
The sum is complemented.
If the result is zero, the data are accepted: otherwise,
they are rejected.
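 A rough Python sketch (assuming 16-bit sections) of the one's-complement checksum procedure described above.

```python
def ones_complement_sum(sections, bits=16):
    mask = (1 << bits) - 1
    total = 0
    for s in sections:
        total += s
        total = (total & mask) + (total >> bits)   # wrap the carry around
    return total

def make_checksum(sections, bits=16):
    # sender: add all sections, complement the sum
    return ~ones_complement_sum(sections, bits) & ((1 << bits) - 1)

def verify(sections, checksum, bits=16):
    # receiver: sum of sections plus checksum, complemented, must be zero
    total = ones_complement_sum(list(sections) + [checksum], bits)
    return (~total & ((1 << bits) - 1)) == 0

data = [0x4500, 0x0073, 0x0000, 0x4000]   # arbitrary example 16-bit sections
csum = make_checksum(data)
print(hex(csum), verify(data, csum))       # True
```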
Performance
The checksum detects all errors involving an odd
number of bits.
It detects most errors involving an even number of
bits.
If one or more bits of a segment are damaged and the
corresponding bit or bits of opposite value in a second
segment are also damaged, the sums of those columns
will not change and the receiver will not detect a
problem.
Error-Control
 Error control refers to mechanisms to detect and
correct errors that occur in the transmission of frames.
 The most common techniques for error control are
based on some or all of the following ingredients:
 Error detection
 Positive acknowledgment
 Retransmission after timeout
 Negative acknowledgment and retransmission

 Collectively, these mechanisms are all referred to as automatic repeat request (ARQ).
Error-Control
 Positive acknowledgment. The destination returns a positive
acknowledgment to successfully received, error-free frames.
 Retransmission after timeout. The source retransmits a frame that
has not been acknowledged after a predetermined amount of time.
 Negative acknowledgment and retransmission. The destination
returns a negative acknowledgment to frames in which an error is
detected. The source retransmits such frames.

 Three versions of ARQ have been standardized:
 Stop-and-wait ARQ
 Go-back-N ARQ
 Selective-reject ARQ
Stop-and-wait ARQ

 The source station transmits a single frame and then must await an acknowledgment (ACK). No other data frames can be sent until the destination station's reply arrives at the source station.
 The principal advantage of stop-and-wait ARQ is its simplicity. Its principal disadvantage is that stop-and-wait is an inefficient mechanism.
Stop-and-wait ARQ
 Two sorts of errors could occur.
 First, the frame that arrives at the destination could be damaged; the receiver detects this by using the error detection technique referred to earlier and simply discards the frame. To account for this possibility, the source station is equipped with a timer. After a frame is transmitted, the source station waits for an acknowledgment.
 If no acknowledgment is received by the time the timer expires, then the same frame is sent again. Note that this method requires that the transmitter maintain a copy of a transmitted frame until an acknowledgment is received for that frame.
Stop and wait ARQ
 The second sort of error is a damaged acknowledgment. Consider the following situation. Station A sends a frame. The frame is received correctly by station B, which responds with an acknowledgment (ACK). The ACK is damaged in transit and is not recognizable by A, which will therefore time out and resend the same frame.
 This duplicate frame arrives and is accepted by B, which has therefore accepted two copies of the same frame as if they were separate. To avoid this problem, frames are alternately labelled with 0 or 1, and positive acknowledgments are of the form ACK0 and ACK1. In keeping with the sliding-window convention, an ACK0 acknowledges receipt of a frame numbered 1 and indicates that the receiver is ready for a frame numbered 0.
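 A simplified Python sketch (illustrative only; the lossy-channel model and its probability are assumptions) of stop-and-wait ARQ with the alternating 0/1 frame numbers and retransmission on timeout.

```python
import random

random.seed(1)

def got_through(p_ok=0.7):
    """Return True if a transmission arrives undamaged (assumed lossy channel)."""
    return random.random() < p_ok

def stop_and_wait_arq(frames):
    seq = 0
    for payload in frames:
        while True:
            frame_ok = got_through()              # the frame may be lost/damaged
            ack_ok = frame_ok and got_through()   # the ACK may also be lost/damaged
            if ack_ok:
                print(f"frame {seq} ({payload}) acknowledged with ACK{1 - seq}")
                break
            print(f"frame {seq} ({payload}): timeout, retransmitting")
            # the transmitter keeps a copy of the frame and resends the same number
        seq = 1 - seq                             # alternate 0/1 numbering

stop_and_wait_arq(["A", "B", "C"])
```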
Go-back-N ARQ
 The stop-and-wait ARQ mechanism does not utilize the transmission resources well: after sending a frame, the sender sits idle and does nothing until the acknowledgement is received. In Go-Back-N ARQ, both sender and receiver maintain a window.
 It is based on sliding-window flow control.
 While no errors occur, the destination will acknowledge (RR = receive ready) incoming frames as usual. If the destination station detects an error in a frame, it sends a negative acknowledgment (REJ = reject) for that frame.
Go-back-N ARQ
 The sending-window size enables the sender to send multiple frames without receiving the acknowledgement of the previous ones. The receiving window enables the receiver to receive multiple frames and acknowledge them. The receiver keeps track of the incoming frames' sequence numbers. When the sender has sent all the frames in its window, it checks up to what sequence number it has received positive acknowledgement. If all frames are positively acknowledged, the sender sends the next set of frames. If the sender finds that it has received a NACK, or has not received any ACK for a particular frame, it retransmits all the frames from that frame onward.
 Because of the propagation delay on the line, by the time that an acknowledgment (positive or negative) arrives back at the sending station, it has already sent two additional frames beyond the one being acknowledged. Thus, when an REJ is received for frame 5, not only frame 5 but also frames 6 and 7 must be retransmitted. Thus, the transmitter must keep a copy of all unacknowledged frames.
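 A compact Python sketch (my own simplified model, not the slides' figure) of the Go-Back-N rule: on an REJ for frame i, the sender goes back and retransmits frame i and everything it had sent after it.

```python
def go_back_n(frames, window=3, error_at=None):
    """Simulate Go-Back-N; error_at is the index of one frame the receiver rejects once."""
    base = 0                   # oldest unacknowledged frame
    rejected_once = False
    log = []
    while base < len(frames):
        # send up to `window` frames starting from base
        for i in range(base, min(base + window, len(frames))):
            log.append(f"send F{i}")
        # receiver processes them in order
        for i in range(base, min(base + window, len(frames))):
            if i == error_at and not rejected_once:
                rejected_once = True
                log.append(f"REJ {i}")      # negative ack: go back to frame i
                base = i
                break
            log.append(f"RR {i + 1}")       # positive ack: ready for frame i+1
            base = i + 1
    return log

for line in go_back_n(["a", "b", "c", "d", "e"], window=3, error_at=1):
    print(line)   # F1 is rejected, so F1 and F2 are both sent again
```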
Go-back-N ARQ
 The go-back-N technique takes into account the following contingencies:
 1. Damaged frame. There are three subcases:
 A. A transmits frame i. B detects an error and has previously successfully received frame (i - 1). B sends REJ i, indicating that frame i is rejected.
 B. Frame i is lost in transit. A subsequently sends frame (i + 1). B receives frame (i + 1) out of order and sends an REJ i. A must retransmit frame i and all subsequent frames.
 C. Frame i is lost in transit, and A does not soon send additional frames. B receives nothing and returns neither an RR nor an REJ. When A's timer expires, it transmits an RR frame that includes a bit known as the P bit, which is set to 1.
Go-back-N ARQ
 2. Damaged RR. There are two subcases:
 A. B receives frame i and sends RR (i + 1), which is lost in transit. Because acknowledgments are cumulative, a subsequent RR may arrive before A's timer for frame i expires.
 B. If A's timer expires, it transmits an RR command as in case 1C.
 3. Damaged REJ. If an REJ is lost, this is equivalent to case 1C.
Selective-reject ARQ
 In Go-Back-N ARQ, it is assumed that the receiver does not have any buffer space for its window and has to process each frame as it comes. This forces the sender to retransmit all the frames which have not been acknowledged.
 In Selective-Repeat (selective-reject) ARQ, the receiver, while keeping track of sequence numbers, buffers frames in memory and sends a NACK only for the frame which is missing or damaged. The sender, in this case, retransmits only the frame for which the NACK is received.
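 A brief Python sketch (illustrative only) contrasting the retransmission cost: with selective reject, the receiver buffers the out-of-order frames it received correctly and only the damaged frame is resent.

```python
def selective_repeat(frames, lost_first_try):
    """frames: list of payloads; lost_first_try: indices damaged on the first attempt."""
    received = {}                                   # receiver buffer, keyed by sequence number
    sends = []
    nacks = []
    # first pass: send every frame once; damaged ones trigger a NACK
    for i, f in enumerate(frames):
        sends.append(i)
        if i in lost_first_try:
            nacks.append(i)                         # receiver NACKs only this frame
        else:
            received[i] = f                         # out-of-order frames are buffered
    # retransmission pass: resend only the NACKed frames
    for i in nacks:
        sends.append(i)
        received[i] = frames[i]
    in_order = [received[i] for i in range(len(frames))]
    return in_order, sends

print(selective_repeat(["a", "b", "c", "d"], lost_first_try={1}))
# (['a', 'b', 'c', 'd'], [0, 1, 2, 3, 1])  -> only frame 1 is sent twice
```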
Queuing modes in communications
networks (introduction)

Queuing System
 A queueing system can be described as customers
arriving for service, waiting for service if it is not
immediate, and if having waited for service, leaving
the system after being served.
Why Queuing Theory
 Performance Measurement
 Average waiting time of customer / distribution of
waiting time.
 Average number of customers in the system /
distribution of queue length / current work backlog.
 Measurement of the idle time of server / length of an
idle period.
 Measurement of the busy time of server / length of a
busy period.
 System utilization.
Why Queuing Theory (cont’d)
 Delay Analysis
Network Delay = Queuing Delay
              + Propagation Delay (depends on the distance)
              + Node Delay, where Node Delay = Processing Delay (independent of packet length, e.g. header CRC check) + Adapter Delay (constant)
Characteristics of Queuing Process
 Arrival Pattern of Customers
 Probability distribution
 Patient / impatient (balked) arrival
 Stationary / non-stationary
 Service Patterns
 Probability distribution
 State dependent / independent service
 Stationary / non-stationary
Characteristics of Queuing Process
(cont’d)
 Queuing Disciplines
 First come, first served (FCFS)
 Last come, first served (LCFS)
 Random selection for service (RSS)
 Priority queue
 Preemptive / non-preemptive
 System Capacity
 Finite / infinite waiting room.
First-in, First-Out Queuing
 FIFO queuing is the most basic of strategies. In essence, it is the first-
come, first-served approach to data forwarding. In FIFO, packets are
transmitted in the order in which they are received. Keep in mind that
this process occurs on each interface in a router, not in the router as a
whole.
 On high-speed interfaces (greater than 2 Mbps), FIFO is the default queuing strategy on a router. Normally, such high-bandwidth interfaces do not have problems getting traffic out the door.

 Figure displays the basic model of FIFO. Notice that there are three
different sizes of packets. One potential problem of FIFO is that the
small packets must wait in line for the larger packets to get dispatched.
In the figure, the smallest packet is actually ready to leave before the
largest packet is finished arriving. However, because the largest packet
started to arrive at the interface first, it gets to leave the interface first.
This actually causes gaps between data on the wire, which decreases
efficiency.
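 A tiny Python sketch (illustrative; the packet sizes are made up) of FIFO dispatch on one interface: packets leave strictly in arrival order, so a small packet that arrived just after a large one must wait behind it.

```python
from collections import deque

def fifo_dispatch(arrivals):
    """arrivals: list of (name, size_bytes) in the order they finished arriving."""
    q = deque(arrivals)             # one queue per interface
    order = []
    while q:
        order.append(q.popleft())   # first in, first out
    return order

packets = [("large", 1500), ("medium", 600), ("small", 64)]
print([name for name, _ in fifo_dispatch(packets)])
# ['large', 'medium', 'small'] -> the small packet waits behind the large one
```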
Fair Queuing
 Fair Queuing is a methodology that allows packets that are
ready to be transmitted to leave, even if they started to arrive
after another packet. Note that FQ is not an option in Cisco
routers, but understanding FQ will help you to understand
WFQ.
 Using the same example as before, the effects of FQ are shown
in Figure 15-3. The same data flow is sent to the egress
interface, only this time the smallest packets are allowed to
leave first because they are ready to leave before the larger
packet.
 FQ allows smaller packets to "cut the line" in front of larger
packets that are still in the process of arriving. This process
solves the FIFO problem of gaps between packets on the wire
caused by the blocking by the large packets.
Weighted Fair Queuing
 WFQ differs from FQ because it uses the ToS (Type of Service) bits that travel within each IP header.
 Remember that FQ looks at when a packet finished arriving (relative time) to determine when it actually is dispatched. In WFQ, the priority of the packet specified in the ToS bits becomes a "weight" when dispatching packets through an egress interface.
 Packets within a flow are handled FIFO.
Weighted Fair Queuing
 For FIFO, the largest packet would be dispatched first,
followed by the medium one, followed by the smallest. FQ
corrects this by sending the smallest first, then the
medium one, then the largest. But in this new example, the
medium packet has a much higher priority (ToS = 5) than
the small packet (ToS = 0). Thus, WFQ adjusts the dispatch
accordingly.
 Remember that all values shown here for the "multiplier"
are adjusted for simple mathematical examples. Real
numbers are much larger, but on a similar scale.
Weighted Fair Queuing
 In this example, the third flow has two packets. However, the
second packet is a high-priority packet (ToS = 5). It is quite
possible to have packets of various ToS in a single flow.
Remember that dynamic flow selection is not based on ToS.
 The problem here is that the high-priority packet in flow #3
cannot be dispatched until after the large packet in front of it
(same flow) leaves. Packets within a flow are handled FIFO.
The WFQ algorithm only works with the first packets in each
of the dynamically created flows. And as mentioned, the
administrator has no control over how packets get sorted into
the flows.
 Thus, in the scenario shown, although it would be nice (and
probably desired) to have the high-priority packets leave first,
it is not the case. The high-priority packet in flow #3 is
actually the last one out the door.
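 A loose Python sketch (my own approximation, not Cisco's algorithm; the weight formula is a simplified stand-in for the "multiplier" mentioned above) of weighted fair dispatch: each packet gets a finish value based on its size scaled down by its ToS priority, flows stay internally FIFO, and the scheduler picks the smallest finish value among the head packets of the flows.

```python
import heapq
from collections import defaultdict, deque

def wfq_dispatch(packets):
    """packets: list of (flow_id, size, tos) in arrival order."""
    flows = defaultdict(deque)
    for flow_id, size, tos in packets:
        finish = size / (8 * tos + 1)            # made-up weight: higher ToS shrinks the size
        flows[flow_id].append((finish, flow_id, size, tos))   # per-flow FIFO

    order = []
    heads = [flows[f].popleft() for f in flows]  # head packet of each flow
    heapq.heapify(heads)
    while heads:
        finish, flow_id, size, tos = heapq.heappop(heads)
        order.append((flow_id, size, tos))
        if flows[flow_id]:                       # promote the next packet of that flow
            heapq.heappush(heads, flows[flow_id].popleft())
    return order

pkts = [("f1", 1500, 0), ("f2", 600, 5), ("f3", 64, 0)]
print(wfq_dispatch(pkts))
# the ToS-5 medium packet is dispatched before the ToS-0 small packet
```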
Reading assignment
 M/M/1 Model- (The Classical Queuing System)
 HDLC-High level data link control
