6.02 Fall 2012
Lecture #3
Communication network architecture
Analog channels
The digital abstraction
Binary symmetric channels
Hamming distance
Channel codes
The System, End-to-End
[Diagram: Digitize (if needed) → source binary digits (message bits) → Source coding → bit stream → COMMUNICATION NETWORK → bit stream → Source decoding → Render/display, etc. Images from Wikipedia.]
[Diagram: the same path with the network expanded into a single physical link. Source coding → bit stream → Channel coding (bit error correction) → Mapper (bits → signals: voltage samples) → Xmit samples over physical link → Recv samples → Demapper (signals → bits) → Channel decoding (reducing or removing bit errors) → bit stream → Source decoding.]
[Diagram: the same path with the network expanded into links and switches. Bit stream → Packetize → packets across LINKs and Switches → Buffer + stream → bit stream. Along the path: Packets → Bits → Signals → Bits → Packets.]
Digital Signaling: Map Bits to Signals
Key Idea: Code or map or modulate the desired bit sequence
onto a (continuous-time) analog signal, communicating at
some bit rate (in bits/sec).
Example: bit 0 is sent as voltage V0 (e.g., 0 volts) and bit 1 as voltage V1. At the receiver: if the received voltage lies between V0 and the midpoint (V0 + V1)/2, decode 0; else decode 1.
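A minimal sketch of this scheme in Python; V0, V1, and the samples-per-bit count are assumed example values, not the course's parameters.

```python
# Illustrative bit-to-voltage mapping and midpoint-threshold decoding.
V0, V1 = 0.0, 1.0      # voltage for bit 0, voltage for bit 1 (assumed)
SAMPLES_PER_BIT = 4    # samples transmitted per bit (assumed)

def bits_to_signal(bits):
    """Map each bit to SAMPLES_PER_BIT samples at V0 or V1."""
    return [V1 if b else V0 for b in bits for _ in range(SAMPLES_PER_BIT)]

def signal_to_bits(samples):
    """Decode each bit from its middle sample via the midpoint threshold."""
    threshold = (V0 + V1) / 2
    mid = SAMPLES_PER_BIT // 2
    return [1 if samples[i + mid] > threshold else 0
            for i in range(0, len(samples), SAMPLES_PER_BIT)]

assert signal_to_bits(bits_to_signal([1, 0, 1, 1])) == [1, 0, 1, 1]
```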
[Diagram: binary symmetric channel. Each transmitted bit (0 or 1) is flipped with probability p, like a biased coin coming up heads or tails.]
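A minimal simulation sketch of this channel (helper names are illustrative, not course code):

```python
# Binary symmetric channel: flip each bit independently with probability p.
import random

def bsc(bits, p, rng=None):
    """Return bits with each one flipped independently with probability p."""
    rng = rng or random.Random()
    return [bit ^ (rng.random() < p) for bit in bits]

# About a fraction p of a long transmission arrives flipped:
received = bsc([0] * 10_000, 0.01, random.Random(0))
print(sum(received) / len(received))  # close to 0.01
```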
6.02 Fall 2012 Lecture 3, Slide #10
Replication Code to reduce decoding error
[Plot: Prob(decoding error) over a BSC with p = 0.01. Diagram: X → Channel → Y, with noise entering the channel.]
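A sketch of the replication code under the stated assumptions (repeat each bit n times, n odd, decode by majority vote); decoding fails exactly when more than half of the copies get flipped:

```python
# n-fold replication code over a BSC: encode, majority-vote decode, and the
# exact Prob(decoding error). Helper names are illustrative.
from math import comb

def rep_encode(bits, n):
    """Repeat each message bit n times (n odd)."""
    return [b for b in bits for _ in range(n)]

def rep_decode(bits, n):
    """Majority vote over each group of n received bits."""
    return [1 if sum(bits[i:i + n]) > n // 2 else 0
            for i in range(0, len(bits), n)]

def rep_error_prob(n, p):
    """Decoding fails iff more than n//2 of the n copies are flipped."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

assert rep_decode(rep_encode([1, 0], 3), 3) == [1, 0]
for n in (1, 3, 5):
    print(n, rep_error_prob(n, 0.01))
# 1e-2, ~3e-4, ~1e-5: the error rate falls fast, but so does the bit rate
```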
Evaluating conditional entropy and mutual information
To compute conditional entropy:
H(X \mid Y = y_j) = \sum_{i=1}^{m} p(x_i \mid y_j) \log_2 \frac{1}{p(x_i \mid y_j)}

H(X \mid Y) = \sum_{j} H(X \mid Y = y_j)\, p(y_j)
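A direct (illustrative) transcription of these two formulas in Python, for a finite joint distribution given as joint[(x, y)] = p(x, y):

```python
# Conditional entropy from a joint distribution {(x, y): p(x, y)}.
from math import log2

def H_X_given_y(joint, y):
    """H(X | Y = y) = sum_i p(x_i | y) log2(1 / p(x_i | y))."""
    p_y = sum(p for (x, yy), p in joint.items() if yy == y)
    return sum((p / p_y) * log2(p_y / p)
               for (x, yy), p in joint.items() if yy == y and p > 0)

def H_X_given_Y(joint):
    """H(X | Y) = sum_j H(X | Y = y_j) p(y_j)."""
    ys = {y for (x, y) in joint}
    return sum(H_X_given_y(joint, y) *
               sum(p for (x, yy), p in joint.items() if yy == y)
               for y in ys)

# BSC, equally likely inputs, crossover 0.1: H(X | Y) = h(0.1) ~ 0.469 bits
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
print(H_X_given_Y(joint))
```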
because

p(x_i, y_j) = p(x_i)\, p(y_j \mid x_i) = p(y_j)\, p(x_i \mid y_j)

the joint entropy can be expanded either way:

H(X, Y) = H(X) + H(Y \mid X) = H(Y) + H(X \mid Y)

so the mutual information

I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X)
For a low-noise channel, there is a significant reduction in uncertainty
about the input after observing the output.
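A small numerical check of the identity above; the joint distribution is an assumed example (a BSC with equally likely inputs and crossover probability 0.1):

```python
# Verify H(X) - H(X|Y) = H(Y) - H(Y|X) on a small joint distribution,
# using the chain rule H(X|Y) = H(X,Y) - H(Y).
from math import log2

joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

def H(dist):
    """Entropy in bits of a distribution {outcome: probability}."""
    return sum(p * log2(1 / p) for p in dist.values() if p > 0)

def marginal(joint, axis):
    m = {}
    for xy, p in joint.items():
        m[xy[axis]] = m.get(xy[axis], 0.0) + p
    return m

HX, HY, HXY = H(marginal(joint, 0)), H(marginal(joint, 1)), H(joint)
print(HX - (HXY - HY))  # I(X;Y) via H(X|Y): ~0.531 bits
print(HY - (HXY - HX))  # same value via H(Y|X)
```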
To characterize the channel, rather than the input and output, define the channel capacity

C = \max I(X; Y)

over all possible probability distributions for X. Easiest to compute as C = \max \{ H(Y) - H(Y \mid X) \}, still over all possible probability distributions for X. The second term doesn't depend on this distribution, and the first term is maximized when 0 and 1 are equally likely at the input. So, invoking our mutual information example earlier:
Channel capacity of the BSC:

C = 1 - h(p)

where h(p) = p \log_2(1/p) + (1-p) \log_2(1/(1-p)) is the binary entropy function.

[Plot: C = 1 - h(p) vs. crossover probability p, for p from 0 to 1.0.]
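A sketch evaluating this capacity formula at a few crossover probabilities (plain Python, no course code assumed):

```python
# Capacity of the BSC, C = 1 - h(p), with h the binary entropy function.
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else (
        p * log2(1 / p) + (1 - p) * log2(1 / (1 - p)))

for p in (0.0, 0.01, 0.1, 0.5):
    print(f"p = {p:<4}  C = {1 - h(p):.3f} bits/use")
# C = 1 at p = 0 (noiseless); C = 0 at p = 0.5 (output independent of input)
```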
What channel capacity tells us about how fast
and how accurately we can communicate
The Hamming Distance (HD) between a valid binary codeword and
the same codeword with e errors is e.
The problem with no coding is that the two valid codewords (0
and 1) also have a Hamming distance of 1. So a single-bit error
changes a valid codeword into another valid codeword.
[Diagram: codewords 0 (heads) and 1 (tails), one single-bit error apart.]
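A minimal sketch of Hamming distance over bit sequences, matching the definition above:

```python
def hamming_distance(a, b):
    """Number of positions at which equal-length bit sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance([0], [1]))              # 1: the two uncoded codewords
print(hamming_distance([0, 0, 0], [1, 0, 1]))  # a codeword with 2 errors: HD 2
```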
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.