Error-Correcting Codes
Timothy J. Schulz
Professor and Chair
Engineering Exploration
Fall, 2004
Digital Data
ASCII Text
Each character is represented by an 8-bit ASCII code, e.g.:
A -> 01000001
B -> 01000010
C -> 01000011
D -> 01000100
E -> 01000101
F -> 01000110
...
producing a bit stream such as 00101001110101101010101000.
Digital Sampling
[Figure: an analog waveform is sampled and quantized to 3-bit levels 000-111, producing a bit stream such as 00001000100000101101101101000011111011111111.]
Digital Coding for Error Correction
Digital Communication
Example: Frequency-Shift Keying (FSK)
Transmit a tone whose frequency is determined by each bit:
s(t) = b cos(2*pi*f0*t) + (1 - b) cos(2*pi*f1*t)
Digital Channels
Binary Symmetric Channel
A transmitted 0 or 1 is received correctly with probability 1-p and flipped with probability p.
Error probability: p
Repetition coding (rate 1/3)
Code book (information bit -> channel bits):
0 -> 000
1 -> 111
Decode book (channel bits -> information bit, majority vote):
000 -> 0
001 -> 0
010 -> 0
011 -> 1
100 -> 0
101 -> 1
110 -> 1
111 -> 1
Example: information bits 0 0 1 0 1 are encoded and transmitted as channel bits
000 000 111 000 111
and received as
010 000 100 001 110.
Majority decoding gives 0 0 0 0 1: the third information bit is decoded incorrectly because its codeword suffered two channel errors.
Probability of each channel error pattern in a 3-bit codeword (c = correct, e = error):
pattern | errors | probability    | expanded
cce     | one    | (1-p)(1-p)(p)  | p - 2p^2 + p^3
cec     | one    | (1-p)(p)(1-p)  | p - 2p^2 + p^3
cee     | two    | (1-p)(p)(p)    | p^2 - p^3
ecc     | one    | (p)(1-p)(1-p)  | p - 2p^2 + p^3
ece     | two    | (p)(1-p)(p)    | p^2 - p^3
eec     | two    | (p)(p)(1-p)    | p^2 - p^3
eee     | three  | (p)(p)(p)      | p^3
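To make the table concrete, here is a minimal Python sketch (Python and the helper names are my own, not from the slides) that evaluates the pattern probabilities and sums the two- and three-error cases; the result matches the closed form 3p^2 - 2p^3.

```python
def pattern_probability(pattern: str, p: float) -> float:
    """Probability of a channel pattern such as 'cce' (c = correct, e = error)."""
    prob = 1.0
    for symbol in pattern:
        prob *= p if symbol == "e" else (1.0 - p)
    return prob

def decoded_bit_error_probability(p: float) -> float:
    """Majority voting fails exactly when 2 or 3 of the 3 channel bits are in error."""
    patterns = ["ccc", "cce", "cec", "cee", "ecc", "ece", "eec", "eee"]
    return sum(pattern_probability(pat, p) for pat in patterns if pat.count("e") >= 2)

if __name__ == "__main__":
    for p in (0.01, 0.1, 0.3, 0.5):
        pe = decoded_bit_error_probability(p)
        print(f"p = {p}: decoded bit error probability = {pe:.6f} "
              f"(3p^2 - 2p^3 = {3*p*p - 2*p**3:.6f})")
```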
[Figure: decoded bit error probability versus channel error probability (0 to 0.5) for the error patterns ccc, cce, cec, cee, ecc, ece, eec, eee of the 3-bit repetition code.]
Basic Definitions
Let u be a k-bit information sequence and v the corresponding n-bit codeword.
A total of 2^k n-bit codewords constitutes an (n,k) code.
Linear code: the sum of any two codewords is a codeword.
Observation: the all-zero sequence is a codeword in every linear block code.
Generator Matrix
The k x n generator matrix collects the codewords associated with the unit information sequences:
G = [ g_{0,0}   g_{0,1}   ... g_{0,n-1}
      ...
      g_{k-1,0} g_{k-1,1} ... g_{k-1,n-1} ]
v = uG
Systematic Codes
Any linear block code can be put in systematic form: n-k check bits followed by k information bits.
G = [ P  I_k ]
This matrix corresponds to the set of k codewords corresponding to the information sequences that have a single nonzero element. Clearly this set is linearly independent.
Example (the (7,4) code used in the following slides), with the k x (n-k) block P and the k x k identity I_k:
G = [ P | I_k ] = [1 1 0 1 0 0 0
                   0 1 1 0 1 0 0
                   1 1 1 0 0 1 0
                   1 0 1 0 0 0 1]
Parity-Check Matrix
For G = [ P | I_k ], define the matrix H = [ I_{n-k} | P^T ] (the size of H is (n-k) x n).
It follows that GH^T = 0. Since v = uG, then vH^T = uGH^T = 0.
The parity-check matrix of a code C is the generator matrix of another code C_d, called the dual of C.
For the (7,4) example,
H = [1 0 0 1 0 1 1
     0 1 0 1 1 1 0
     0 0 1 0 1 1 1]
For the (7,4) code, write the codeword as v = (v1, v2, ..., v7) with information bits (v4, v5, v6, v7) and check bits (v1, v2, v3). The parity-check equations vH^T = 0 give
v1 + v4 + v6 + v7 = 0
v2 + v4 + v5 + v6 = 0
v3 + v5 + v6 + v7 = 0
so the check bits are computed from the information bits as
v1 = v4 + v6 + v7
v2 = v4 + v5 + v6
v3 = v5 + v6 + v7
Encoding Circuit
Minimum Distance
DF: The Hamming weight of a codeword v, denoted by w(v), is the number of nonzero elements in the codeword.
DF: The minimum weight of a code, w_min, is the smallest weight of the nonzero codewords in the code:
w_min = min { w(v) : v in C, v != 0 }.
DF: The Hamming distance between v and w, denoted by d(v,w), is the number of locations where they differ.
Note that d(v,w) = w(v + w).
DF: The minimum distance of the code is
d_min = min { d(v,w) : v, w in C, v != w }.
TH 3.1: In any linear code, d_min = w_min.
Example codeword: 0 0 1 0 1 1 1.
The error pattern e has e_i = 1 if an error occurred in position i and e_i = 0 otherwise; the received word is r = v + e.
Error Detection
Define the syndrome
s = rH^T = (s_0, s_1, ..., s_{n-k-1}).
If s = 0, the receiver assumes r = v and e = 0.
If e is identical to some nonzero codeword, then s = 0 as well, and the error is undetectable.
EX 3.4: For the (7,4) code with
H^T = [1 0 0
       0 1 0
       0 0 1
       1 1 0
       0 1 1
       1 1 1
       1 0 1]
the syndrome components of r = (r_0, ..., r_6) are
s_0 = r_0 + r_3 + r_5 + r_6
s_1 = r_1 + r_3 + r_4 + r_5
s_2 = r_2 + r_4 + r_5 + r_6
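A short Python sketch (an illustration, not part of the original slides) that evaluates these syndrome equations for any received 7-bit vector:

```python
# Syndrome s = r * H^T for the (7,4) code of EX 3.4; rows of H^T as listed above.
H_T = [
    (1, 0, 0), (0, 1, 0), (0, 0, 1),              # identity part
    (1, 1, 0), (0, 1, 1), (1, 1, 1), (1, 0, 1),   # parity part
]

def syndrome(r):
    """r is a sequence of 7 bits; returns (s0, s1, s2) over GF(2)."""
    s = [0, 0, 0]
    for bit, row in zip(r, H_T):
        if bit:
            s = [a ^ b for a, b in zip(s, row)]
    return tuple(s)

print(syndrome([1, 0, 0, 1, 0, 0, 1]))   # r = 1001001 -> (1, 1, 1), as in Example 3.5
```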
Error Correction
s = rH^T = (v + e)H^T = vH^T + eH^T = eH^T
The syndrome depends only on the error pattern. Can we use the syndrome to find e and hence do the correction?
The syndrome digits are linear combinations of the error digits, so they provide information about the error locations.
Unfortunately, with n-k equations and n unknowns there are 2^k solutions. Which one should we use?
Example 3.5
Let r = 1001001, so s = 111:
s_0 = e_0 + e_3 + e_5 + e_6 = 1
s_1 = e_1 + e_3 + e_4 + e_5 = 1
s_2 = e_2 + e_4 + e_5 + e_6 = 1
There are 16 error patterns that satisfy these equations; some of them are
0000010, 1101010, 1010011, 1111101.
The most probable one is the one with minimum weight.
Hence v* = 1001001 + 0000010 = 1001011.
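As a sanity check on this example, the following Python sketch (names are my own) enumerates all 2^7 error patterns, keeps those with syndrome 111, and picks the one of minimum weight:

```python
from itertools import product

H_T = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (0,1,1), (1,1,1), (1,0,1)]

def syndrome(bits):
    s = (0, 0, 0)
    for bit, row in zip(bits, H_T):
        if bit:
            s = tuple(a ^ b for a, b in zip(s, row))
    return s

target = (1, 1, 1)                       # syndrome of r = 1001001
candidates = [e for e in product((0, 1), repeat=7) if syndrome(e) == target]
print(len(candidates))                   # 16 error patterns satisfy the equations
best = min(candidates, key=sum)          # most probable = minimum weight
r = (1, 0, 0, 1, 0, 0, 1)
v = tuple(a ^ b for a, b in zip(r, best))
print(best, v)                           # (0,0,0,0,0,1,0) and 1001011
```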
Standard Array
v_1 = 0       v_2            v_3            ...  v_{2^k}
e_2           e_2 + v_2      e_2 + v_3      ...  e_2 + v_{2^k}
e_3           e_3 + v_2      e_3 + v_3      ...  e_3 + v_{2^k}
...
e_{2^(n-k)}   e_{2^(n-k)}+v_2  e_{2^(n-k)}+v_3  ...  e_{2^(n-k)}+v_{2^k}
TH 3.3
No two n-tuples in the same row are identical.
Every n-tuple appears in one and only one row.
Example: a (6,3) code with parity-check matrix
H = [1 0 0 0 1 1
     0 1 0 1 0 1
     0 0 1 1 1 0]
Check equations: v1 = v5 + v6, v2 = v4 + v6, v3 = v4 + v5.
The eight codewords are
000000, 110001, 101010, 011011, 011100, 101101, 110110, 000111
and d_min = 3.
The Syndrome
Correctable error patterns and their syndromes for the (7,4) code:
error pattern | syndrome
0000000       | 000
1000000       | 100
0100000       | 010
0010000       | 001
0001000       | 110
0000100       | 011
0000010       | 111
0000001       | 101
Syndrome Decoding
Decoding procedure:
1. For the received vector r, compute the syndrome s = rH^T.
2. Using the table, identify the coset leader (error pattern) e_l.
3. Add e_l to r to recover the transmitted codeword v.
EX: r = 1110101 ==> s = 001 ==> e = 0010000, so v = 1100101.
Syndrome decoding reduces the storage requirement from n*2^n to (2n-k)*2^(n-k), and it also reduces the search time considerably.
Hardware Implementation
Let r = r_0 r_1 r_2 r_3 r_4 r_5 r_6. From the H matrix:
s_0 = r_0 + r_3 + r_5 + r_6
s_1 = r_1 + r_3 + r_4 + r_5
s_2 = r_2 + r_4 + r_5 + r_6
and s = (s_0, s_1, s_2).
From the table of syndromes and their corresponding correctable error patterns, a truth table can be constructed.
A combinational logic circuit with s_0, s_1, s_2 as inputs and e_0, e_1, ..., e_6 as outputs can then be designed.
Probability of undetected error:
P_u = sum_{i=d_min..n} A_i p^i (1-p)^(n-i),
where A_i is the number of codewords of weight i.
Define the weight enumerator
A(z) = sum_{i=0..n} A_i z^i.
Then
P_u = sum_{i=1..n} A_i p^i (1-p)^(n-i)
    = (1-p)^n sum_{i=1..n} A_i (p/(1-p))^i
    = (1-p)^n [ A(p/(1-p)) - 1 ].
Equivalently, in terms of the weight enumerator B(z) of the dual code,
P_u = 2^-(n-k) B(1-2p) - (1-p)^n.
A code with minimum distance d_min can correct all error patterns of weight
t = floor((d_min - 1)/2).
It may be able to correct some higher-weight error patterns, but not all. The total number of patterns it can correct is 2^(n-k).
If
sum_{i=0..t} C(n,i) = 2^(n-k)
the code is perfect.
The probability of decoding error is bounded by
P <= sum_{i=t+1..n} C(n,i) p^i (1-p)^(n-i) = 1 - sum_{i=0..t} C(n,i) p^i (1-p)^(n-i).
Hamming Codes
Hamming codes constitute a family of single-error-correcting codes defined by
n = 2^m - 1, k = n - m, m >= 3.
The minimum distance of the code is d_min = 3.
Construction rule for H: the columns of H are all the nonzero m-bit vectors, e.g. H = [ I_m  Q ].
Hamming codes are perfect: sum_{i=0..1} C(n,i) = 1 + n = 2^(n-k).
The weight enumerator of the Hamming code is
A(z) = [ (1+z)^n + n (1-z)(1-z^2)^((n-1)/2) ] / (n+1).
History
In the late 1940s Richard Hamming recognized that the further evolution of computers required greater reliability, in particular the ability not only to detect errors but to correct them. His search for error-correcting codes led to the Hamming codes, perfect 1-error-correcting codes, and the extended Hamming codes, which are 1-error-correcting and 2-error-detecting.
Uses
Hamming codes are still widely used in computing, telecommunications, and other applications. Hamming codes are also applied in:
data compression
some solutions to the popular puzzle "The Hat Game"
block turbo codes
Syndrome Decoding
Let y = (y_1 y_2 ... y_n) be a received word. The syndrome of y is S := L_r y^T. If S = 0 then there was no error. If S != 0 then S is the binary representation of some integer 1 <= t <= n = 2^r - 1, and the intended codeword is obtained from y by complementing bit y_t.
Example Using L_3
Suppose (1 0 1 0 0 1 0) is received. Computing S = L_3 y^T gives the position of the single error, and flipping that bit recovers the codeword.
(From a ternary example:) decode (0 1 1 1) as (0 1 1 1) - (0 0 2 0) = (0 1 2 1).
Perfect
A sphere of radius e centered at x is S_e(x) = { y in A^n : d_H(x,y) <= e }, where A is the alphabet F_q and d_H is the Hamming distance.
A sphere of radius e contains
sum_{i=0..e} C(n,i)(q-1)^i
words.
If C is an e-error-correcting code, the spheres of radius e around its codewords are disjoint, so
|C| * sum_{i=0..e} C(n,i)(q-1)^i <= q^n.
Perfect
This last inequality is called the sphere-packing bound for an e-error-correcting code C of length n over F_q (for the Hamming codes considered here, e = 1).
A code for which equality holds is called perfect.
Applications
Data compression
Turbo codes
The Hat Game
Data Compression
Hamming codes can be used for a form of lossy compression.
If n = 2^r - 1 for some r, then any n-tuple of bits x is within distance at most 1 from a Hamming codeword c. Let G be a generator matrix for the Hamming code, and write mG = c.
For compression, store x as m. For decompression, decode m as c. This saves r bits of space but corrupts (at most) 1 bit.
Zhu Han
Department of Electrical and Computer Engineering
Class 25, Dec. 6th, 2007
Outline
Project 2
ARQ Review
Linear Codes
Hamming Code Revisited
Reed-Muller Codes
Cyclic Codes
CRC Codes
BCH Codes
RS Codes
ARQ
[Figure: the transmitter (tx) sends a frame, the receiver (rx) returns an ACK or NACK, and the frame is retransmitted on a NACK.]
Hamming Code
n=2^m-1, k=2^m-m-1:
H(7,4)
Transmission vector x
Received vector r
Error Correction
Exercise
Same problem as the previous slide, but p=(1001) and the error
occurs at location 4 instead.
Reed-Muller code
Cyclic code
EXAMPLE
Divider
Example of CRC
Capability of CRC
All single-bit errors, if G(x) has more than one nonzero term
All double-bit errors, if G(x) has a factor with three terms
Any odd number of errors, if G(x) contains the factor x + 1
Any error burst of length less than or equal to n - k
A fraction of error bursts of length n - k + 1; the fraction is 1 - 2^-(n-k-1)
A fraction of error bursts of length greater than n - k + 1; the fraction is 1 - 2^-(n-k)
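A minimal Python sketch of CRC generation by polynomial long division over GF(2) (the generator polynomial below is an illustrative choice, not one mandated by the slides):

```python
def crc_remainder(data_bits, generator_bits):
    """Append len(generator)-1 zeros to the data and return the remainder of
    division by the generator polynomial over GF(2) (bits are MSB first)."""
    padded = list(data_bits) + [0] * (len(generator_bits) - 1)
    for i in range(len(data_bits)):
        if padded[i]:                       # only divide when the leading bit is 1
            for j, g in enumerate(generator_bits):
                padded[i + j] ^= g
    return padded[-(len(generator_bits) - 1):]

data = [1, 0, 1, 1, 0, 0, 1]
gen  = [1, 0, 1, 1]                          # x^3 + x + 1 (illustrative generator)
crc  = crc_remainder(data, gen)
codeword = data + crc
# Receiver side: dividing the transmitted codeword again leaves a zero remainder.
print(crc, crc_remainder(codeword, gen))
```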
BCH Code
Industry standards:
(511, 493) BCH code in ITU-T Rec. H.261, a video codec for audiovisual services (a video coding standard used for video conferencing and video phones)
(40, 32) BCH code in ATM (Asynchronous Transfer Mode)
BCH Performance
Reed-Solomon Codes
Page 654
Examples
1971: Mariner 9
camera rate:
100,000 bits/second
transmission speed:
16,000 bits/second
Modern Codes
More recently, turbo codes were invented; they are used in 3G cell phones, (future) satellites, and in the Cassini-Huygens space probe [1997].
Satellite Communications
GPS: position calculation
Example
(i) Code C = {000, 101, 011, 110} is cyclic.
(ii) Hamming code Ham(3,2), with the generator matrix
G = [1 0 0 0 0 1 1
     0 1 0 0 1 0 1
     0 0 1 0 1 1 0
     0 0 0 1 1 1 1]
(a) Is it cyclic?
(b) Is it equivalent to a cyclic code?
Compared with linear codes, cyclic codes are quite scarce. For example, there are 11,811 linear (7,3) binary codes, but only two of them are cyclic.
Trivial cyclic codes. For any field F and any integer n >= 3 there are always the following cyclic codes of length n over F:
No-information code - the code consisting of just the all-zero codeword.
Repetition code - the code consisting of the codewords (a, a, ..., a) for a in F.
Single-parity-check code - the code consisting of all codewords with parity 0.
No-parity code - the code consisting of all codewords of length n.
For some cases, for example for n = 19 and F = GF(2), these four trivial cyclic codes are the only cyclic codes.
The code with generator matrix
G = [1 0 1 1 1 0 0
     0 1 0 1 1 1 0
     0 0 1 0 1 1 1]
has codewords
c1 = 1011100, c2 = 0101110, c3 = 0010111,
c1 + c2 = 1110010, c1 + c3 = 1001011, c2 + c3 = 0111001, c1 + c2 + c3 = 1100101,
and it is cyclic because the right shifts have the following effects:
c1 -> c2,  c2 -> c3,  c3 -> c1 + c3,
c1 + c2 -> c2 + c3,  c1 + c3 -> c1 + c2 + c3,  c2 + c3 -> c1,  c1 + c2 + c3 -> c1 + c2.
RING of POLYNOMIALS
The set of polynomials in F_q[x] of degree less than deg(f(x)), with addition and multiplication modulo f(x), forms a ring denoted F_q[x]/f(x).
Example: Calculate (x+1)^2 in F_2[x]/(x^2+x+1). It holds that
(x+1)^2 = x^2 + 2x + 1 = x^2 + 1 = x (mod x^2 + x + 1).
How many elements does F_q[x]/f(x) have?
Result: | F_q[x]/f(x) | = q^deg(f(x)).
Example: Addition and multiplication in F_2[x]/(x^2+x+1):
 +   | 0    1    x    1+x        *   | 0   1    x    1+x
 0   | 0    1    x    1+x        0   | 0   0    0    0
 1   | 1    0    1+x  x          1   | 0   1    x    1+x
 x   | x    1+x  0    1          x   | 0   x    1+x  1
 1+x | 1+x  x    1    0          1+x | 0   1+x  1    x
Definition: A polynomial f(x) in F_q[x] is said to be reducible if f(x) = a(x)b(x), where a(x), b(x) in F_q[x] and deg(a(x)) < deg(f(x)), deg(b(x)) < deg(f(x)). If f(x) is not reducible, it is irreducible in F_q[x].
Theorem The ring Fq[x] / f(x) is a field if f(x) is irreducible in Fq[x].
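A small Python sketch (illustrative, my own representation) that models elements of F_2[x]/(x^2+x+1) as bit-masks and reproduces the tables above, e.g. confirming (x+1)^2 = x:

```python
MOD = 0b111          # x^2 + x + 1, bit i = coefficient of x^i
DEG = 2

def poly_mul_mod(a, b):
    """Multiply two GF(2) polynomials and reduce modulo x^2 + x + 1."""
    prod = 0
    for i in range(DEG):
        if (b >> i) & 1:
            prod ^= a << i
    for i in range(2 * DEG - 2, DEG - 1, -1):   # reduce any term of degree >= 2
        if (prod >> i) & 1:
            prod ^= MOD << (i - DEG)
    return prod

def poly_add(a, b):
    return a ^ b       # addition over GF(2) is coefficient-wise XOR

x_plus_1 = 0b11
print(bin(poly_mul_mod(x_plus_1, x_plus_1)))    # 0b10 = x, i.e. (x+1)^2 = x
elements = [0b00, 0b01, 0b10, 0b11]             # 0, 1, x, 1+x
print([[poly_mul_mod(a, b) for b in elements] for a in elements])  # multiplication table
```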
FIELD R_n, R_n = F_q[x]/(x^n - 1)
Computation modulo x^n - 1:
Since x^n = 1 (mod x^n - 1), we can compute f(x) mod (x^n - 1) as follows: in f(x), replace x^n by 1, x^(n+1) by x, x^(n+2) by x^2, x^(n+3) by x^3, ...
Identification of words with polynomials:
a_0 a_1 ... a_{n-1}  <->  a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^(n-1)
Proof
(1) Let C be a cyclic code. C is linear, so (i) holds.
(ii) Let a(x) in C and r(x) = r_0 + r_1 x + ... + r_{n-1} x^(n-1).
Example: C = <1 + x^2>, n = 3, q = 2.
We have to compute r(x)(1 + x^2) for all r(x) in R_3, where
R_3 = {0, 1, x, 1+x, x^2, 1+x^2, x+x^2, 1+x+x^2}.
Result: C = {0, 1+x, x+x^2, 1+x^2}.
We show that all cyclic codes C have the form C = <g(x)> for some g(x) in R_n.
Theorem: Let C be a non-zero cyclic code in R_n. Then
(i) there exists a unique monic polynomial g(x) of smallest degree such that C = <g(x)>;
(ii) g(x) is a factor of x^n - 1.
Proof
(i) Suppose g(x) and h(x) are two monic polynomials in C of smallest degree. Then the polynomial g(x) - h(x) is in C, has smaller degree, and multiplication by a scalar makes it monic. If g(x) != h(x) we get a contradiction.
(ii) Suppose a(x) in C. Then
a(x) = q(x)g(x) + r(x)   (deg r(x) < deg g(x))
and
r(x) = a(x) - q(x)g(x) is in C.
By minimality, r(x) = 0 and therefore a(x) is in <g(x)>.
(iii) Clearly,
x^n - 1 = q(x)g(x) + r(x) with deg r(x) < deg g(x),
and therefore r(x) is in C; by minimality r(x) = 0, so g(x) divides x^n - 1.
GENERATOR POLYNOMIALS
Definition: If for a cyclic code C it holds that C = <g(x)>, then g is called the generator polynomial for the code C.
The last claim of the previous theorem gives a recipe for obtaining all cyclic codes of a given length n: all we need to do is find all factors of x^n - 1.
Problem: Find all binary cyclic codes of length 3.
Solution: Since
x^3 - 1 = (x + 1)(x^2 + x + 1),
the generator polynomials and the corresponding codes are:
Generator polynomial | Code in R_3                  | Code in V(3,2)
1                    | R_3                          | V(3,2)
x + 1                | {0, 1+x, x+x^2, 1+x^2}       | {000, 110, 011, 101}
x^2 + x + 1          | {0, 1+x+x^2}                 | {000, 111}
x^3 - 1 ( = 0)       | {0}                          | {000}
Theorem: Suppose C is a cyclic code of codewords of length n with generator polynomial
g(x) = g_0 + g_1 x + ... + g_r x^r.
Then a generator matrix for C is
G_1 = [ g_0 g_1 g_2 ... g_r 0   0  ... 0
        0   g_0 g_1 g_2 ... g_r 0  ... 0
        0   0   g_0 g_1 ... ... g_r ... 0
        ...
        0   0   ... 0   g_0 ... ... g_r ]
(each row is the previous row shifted one position to the right).
EXAMPLE
The task is to determine all ternary codes of length 4 and generators for them.
Factorization of x^4 - 1 over GF(3) has the form
x^4 - 1 = (x - 1)(x^3 + x^2 + x + 1) = (x - 1)(x + 1)(x^2 + 1).
Therefore there are 2^3 = 8 divisors of x^4 - 1, and each generates a cyclic code.
Generator polynomial                | Generator matrix
1                                   | I_4
x - 1                               | [-1 1 0 0; 0 -1 1 0; 0 0 -1 1]
x + 1                               | [1 1 0 0; 0 1 1 0; 0 0 1 1]
x^2 + 1                             | [1 0 1 0; 0 1 0 1]
(x - 1)(x + 1) = x^2 - 1            | [-1 0 1 0; 0 -1 0 1]
(x - 1)(x^2 + 1) = x^3 - x^2 + x - 1 | [-1 1 -1 1]
(x + 1)(x^2 + 1)                    | [1 1 1 1]
x^4 - 1 = 0                         | [0 0 0 0]
Let C be a cyclic [n,k]-code with generator polynomial g(x) (of degree n - k). By the last theorem, g(x) is a factor of x^n - 1. Hence
x^n - 1 = g(x)h(x)
for some h(x) of degree k (h(x) is called the check polynomial of C).
Theorem: Let C be a cyclic code in R_n with generator polynomial g(x) and check polynomial h(x). Then c(x) in R_n is a codeword of C if and only if c(x)h(x) = 0 (this and the following congruences are modulo x^n - 1).
Proof (<=): Write c(x) = q(x)g(x) + r(x), deg r(x) < n - k = deg g(x). Then
c(x)h(x) = 0  =>  r(x)h(x) = 0 (mod x^n - 1).
Since deg(r(x)h(x)) < n - k + k = n, we have r(x)h(x) = 0 in F[x], and therefore r(x) = 0, so c(x) = q(x)g(x) is in C.
Since dim(<h(x)>) = n - k = dim(C_dual), we might easily be fooled into thinking that the check polynomial h(x) of the code C generates the dual code. Reality is slightly different:
Theorem: Suppose C is a cyclic [n,k]-code with check polynomial
h(x) = h_0 + h_1 x + ... + h_k x^k.
Then
(i) a parity-check matrix for C is
H = [ h_k h_{k-1} ... h_0 0   ... 0
      0   h_k     ... h_1 h_0 ... 0
      ...
      0   0  ... 0  h_k  ...  ... h_0 ]
(ii) the dual code is the cyclic code generated by the reciprocal polynomial of h(x),
h_bar(x) = h_k + h_{k-1} x + ... + h_0 x^k.
ENCODING with CYCLIC CODES I
Encoding using a cyclic code can be done by a multiplication of two polynomials: a message polynomial and the generator polynomial of the cyclic code.
Let C be an (n,k)-code over a field F with generator polynomial
g(x) = g_0 + g_1 x + ... + g_r x^r of degree r = n - k.
If a message vector m is represented by a polynomial m(x) of degree less than k and m is encoded by
m -> c = mG_1,
then the following relation between m(x) and c(x) holds:
c(x) = m(x)g(x).
Such an encoding can be realized by the shift register shown in the figure below, where the input is the k-bit message to be encoded followed by n - k zeros, and the output is the encoded message.
[Figure: shift-register encodings of cyclic codes. Small circles represent multiplication by the corresponding constant, nodes represent modular addition, squares are delay elements.]
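The multiplication c(x) = m(x)g(x) can also be sketched directly in software; a short illustrative Python version (assuming binary coefficients, lowest degree first):

```python
def poly_mul_gf2(m, g):
    """Multiply two GF(2) polynomials given as coefficient lists, lowest degree first."""
    c = [0] * (len(m) + len(g) - 1)
    for i, mi in enumerate(m):
        if mi:
            for j, gj in enumerate(g):
                c[i + j] ^= gj
    return c

# Example: the (7,4) cyclic code with g(x) = 1 + x + x^3 used later in these notes.
g = [1, 1, 0, 1]            # 1 + x + x^3
m = [1, 0, 1, 1]            # m(x) = 1 + x^2 + x^3
codeword = poly_mul_gf2(m, g)
print(codeword)             # the 7 coefficients of c(x) = m(x) g(x)
```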
Another method for encoding cyclic codes is based on the following (so-called systematic) representation of the generator and parity-check matrices for cyclic codes.
Theorem: Let C be an (n,k)-code with generator polynomial g(x) and r = n - k. For i = 0, 1, ..., k-1, let G_{2,i} be the length-n vector whose polynomial is
G_{2,i}(x) = x^(r+i) - (x^(r+i) mod g(x)).
Then the k x n matrix G_2 with row vectors G_{2,i} is a generator matrix for C.
Moreover, if H_{2,j} is the length-n vector corresponding to the polynomial H_{2,j}(x) = x^j mod g(x), then the r x n matrix H_2 with row vectors H_{2,j} is a parity-check matrix for C. If the message vector m is encoded by
m -> c = mG_2,
then the relation between the corresponding polynomials is
c(x) = x^r m(x) - [x^r m(x)] mod g(x).
On this basis one can construct the following shift-register encoder for the systematic representation of the generator of a cyclic code:
[Figure: shift-register encoder for systematic representation of cyclic codes. Switch A is closed for the first k ticks and open for the last r ticks; switch B is down for the first k ticks and up for the last r ticks.]
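A sketch of the systematic rule c(x) = x^r m(x) - [x^r m(x) mod g(x)] in Python (illustrative; over GF(2) subtraction equals addition):

```python
def poly_mod_gf2(a, g):
    """Remainder of a(x) divided by g(x) over GF(2); coefficient lists, lowest degree first."""
    a = list(a)
    for i in range(len(a) - 1, len(g) - 2, -1):   # from the highest degree downwards
        if a[i]:
            shift = i - (len(g) - 1)
            for j, gj in enumerate(g):
                a[shift + j] ^= gj
    return a[: len(g) - 1]

def encode_systematic(m, g, n):
    r = n - len(m)
    shifted = [0] * r + list(m)                   # x^r m(x)
    parity = poly_mod_gf2(shifted, g)             # x^r m(x) mod g(x)
    return parity + list(m)                       # parity bits followed by the message

g = [1, 1, 0, 1]                                  # g(x) = 1 + x + x^3, as in the later example
print(encode_systematic([1, 0, 1, 1], g, 7))      # m = 1011 -> [1, 0, 0, 1, 0, 1, 1]
```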
Hamming codes as cyclic codes
Example: The polynomial x^3 + x + 1 is irreducible over GF(2) and x is a primitive element of the field F_2[x]/(x^3+x+1):
F_2[x]/(x^3+x+1) = {0, 1, x, x^2, x^3 = x+1, x^4 = x^2+x, x^5 = x^2+x+1, x^6 = x^2+1}.
The parity-check matrix for a cyclic version of Ham(3,2) is
H = [1 0 0 1 0 1 1
     0 1 0 1 1 1 0
     0 0 1 0 1 1 1]
PROOF of THEOREM
The binary Hamming code Ham(r,2) is equivalent to a cyclic code.
It is known from algebra that if p(x) is an irreducible polynomial of degree r, then the ring F_2[x]/p(x) is a field of order 2^r.
In addition, every finite field has a primitive element. Therefore, there exists an element alpha of F_2[x]/p(x) such that every nonzero element is a power of alpha, and we can take
H = [ 1  alpha  alpha^2  ...  alpha^(2^r - 2) ].
Let now C be the binary linear code having H as a parity-check matrix. Since the columns of H are all distinct non-zero vectors of V(r,2), C = Ham(r,2).
Putting n = 2^r - 1, we get
C = { f_0 f_1 ... f_{n-1} in V(n,2) | f_0 + f_1 alpha + ... + f_{n-1} alpha^(n-1) = 0 }.
For f(x) in C and r(x) in R_n, evaluating at alpha gives r(alpha)f(alpha) = r(alpha)*0 = 0, so r(x)f(x) is in C, and therefore, by one of the previous theorems, this version of Ham(r,2) is cyclic.
To the most important cyclic codes for applications belong BCH codes and Reed-Solomon codes.
Definition: A polynomial p is said to be minimal for an element x over Z_q if p(x) = 0 and p is irreducible over Z_q.
BCH stands for Bose, Ray-Chaudhuri and Hocquenghem, who discovered these codes.
CONVOLUTION CODES
Very often it is important to encode an infinite stream, or several streams, of data - say bits.
Convolution codes, with simple encoding and decoding, are a quite simple generalization of linear codes and have encodings analogous to those of cyclic codes.
For example,
G_1 = [ x^2 + 1,  x^2 + x + 1 ]
G_2 = [ 1  0  x+1
        0  1  x   ]   (the form used in Example 1 below)
A k-tuple of input polynomials (information streams)
I = (I_0(x), I_1(x), ..., I_{k-1}(x))
is encoded into an n-tuple of output polynomials
C = (C_0(x), C_1(x), ..., C_{n-1}(x))
as follows: C = I . G.
EXAMPLES
EXAMPLE 1
(x^2 + x, x + 1) . G_2 = (x^2 + x, x + 1) . [ 1 0 x+1 ; 0 1 x ]
The way infinite streams are encoded using convolution codes will be illustrated on the code CC_1 with G_1 = [x^2 + 1, x^2 + x + 1].
An input stream I = (I_0, I_1, I_2, ...) is mapped into two output streams C0 and C1, with
C0(x) = I(x)(x^2 + 1)  and  C1(x) = I(x)(x^2 + x + 1).
The first multiplication can be done by the first shift register in the figure below; the second multiplication can be performed by the second shift register, and it holds that
C0_i = I_i + I_{i-2},
C1_i = I_i + I_{i-1} + I_{i-2}.
That is, the output streams C0 and C1 are obtained by convolving the input stream with the polynomials of G_1.
ENCODING
[Figure: the first shift register (delay elements 1, x, x^2) multiplies the input stream by x^2 + 1; the second shift register multiplies it by x^2 + x + 1, producing the output streams C0_0, C0_1, C0_2, ... and C1_0, C1_1, C1_2, ...]
OUTLINE
Polynomials over K of degree deg(f(x)) < n.
2. Eg 4.1.1
f(x) = x + x^2 + x^6 + x^8,  h(x) = 1 + x + x^2 + x^4
q(x) = x^3 + x^4,  r(x) = x + x^2 + x^3
f(x) = h(x)(x^3 + x^4) + (x + x^2 + x^3),  deg(r(x)) < deg(h(x)) = 4
A word c = a_0 a_1 a_2 ... a_{n-1} of length n in K^n corresponds to a polynomial of degree < n.
6. Eg 4.1.12
8. Eg 4.1.15
f(x) mod h(x) = r(x) = p(x) mod h(x), i.e. f(x) = p(x) (mod h(x)).
f(x) = 1 + x^4 + x^9 + x^11,  h(x) = 1 + x^5,  p(x) = 1 + x^6
f(x) mod h(x) = r(x) = 1 + x = p(x) mod h(x)
=> f(x) and p(x) are equivalent mod h(x).
9. Eg 4.1.16
f(x) = 1 + x^2 + x^6 + x^9 + x^11,  h(x) = 1 + x^2 + x^5,  p(x) = x^2 + x^8
f(x) mod h(x) = x + x^4,  p(x) mod h(x) = 1 + x^3
=> f(x) and p(x) are NOT equivalent mod h(x).
pi(v) denotes the cyclic shift of v.
2. Cyclic code
A code C is a cyclic code (or linear cyclic code) if (1) the cyclic shift of each codeword is also a codeword and (2) C is a linear code.
C1 = {000, 110, 101, 011} is a cyclic code.
C2 = {000, 100, 011, 111} is NOT a cyclic code: v = 100, pi(v) = 010 is not in C2.
Cyclic shifts of v = 1101000, i.e. v(x) = 1 + x + x^3, as polynomials (mod 1 + x^7):
0110100 | x v(x)   = x + x^2 + x^4
0011010 | x^2 v(x) = x^2 + x^3 + x^5
0001101 | x^3 v(x) = x^3 + x^4 + x^6
1000110 | x^4 v(x) = x^4 + x^5 + x^7 = 1 + x^4 + x^5 mod(1 + x^7)
0100011 | x^5 v(x) = x^5 + x^6 + x^8 = x + x^5 + x^6 mod(1 + x^7)
1010001 | x^6 v(x) = x^6 + x^7 + x^9 = 1 + x^2 + x^6 mod(1 + x^7)
9. Corollary 4.2.18
The generator polynomial g(x) for the smallest cyclic code of length n containing the word v (polynomial v(x)) is g(x) = gcd(v(x), 1 + x^n).
10. Eg 4.2.19
n = 8, v = 11011000, so v(x) = 1 + x + x^3 + x^4.
g(x) = gcd(1 + x + x^3 + x^4, 1 + x^8) = 1 + x^2.
Thus g(x) = 1 + x^2 generates the smallest cyclic linear code containing v(x); it has dimension 8 - 2 = 6.
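This gcd can be computed with the Euclidean algorithm on GF(2) polynomials; a short illustrative Python sketch (polynomials stored as integer bit-masks):

```python
def deg(p):
    """Degree of a GF(2) polynomial stored as an integer (bit i = coefficient of x^i)."""
    return p.bit_length() - 1

def poly_mod(a, b):
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def poly_gcd(a, b):
    while b:
        a, b = b, poly_mod(a, b)
    return a

v = 0b11011              # v(x) = 1 + x + x^3 + x^4 (word 11011000, n = 8)
xn_plus_1 = 0b100000001  # 1 + x^8
g = poly_gcd(v, xn_plus_1)
print(bin(g))            # 0b101 = 1 + x^2, so the code has dimension n - deg(g) = 6
```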
A generator matrix for the cyclic code with generator polynomial g(x) has rows g(x), x g(x), ..., x^(k-1) g(x).
For g(x) = 1 + x + x^3 (n = 7, k = 4):
g(x)     = 1 + x + x^3        -> 1101000
x g(x)   = x + x^2 + x^4      -> 0110100
x^2 g(x) = x^2 + x^3 + x^5    -> 0011010
x^3 g(x) = x^3 + x^4 + x^6    -> 0001101
G = [1 1 0 1 0 0 0
     0 1 1 0 1 0 0
     0 0 1 1 0 1 0
     0 0 0 1 1 0 1]
A parity-check matrix is obtained from r_i(x) = x^i mod g(x):
r_0(x) = 1 mod g(x) = 1             -> 100
r_1(x) = x mod g(x) = x             -> 010
r_2(x) = x^2 mod g(x) = x^2         -> 001
r_3(x) = x^3 mod g(x) = 1 + x       -> 110
r_4(x) = x^4 mod g(x) = x + x^2     -> 011
r_5(x) = x^5 mod g(x) = 1 + x + x^2 -> 111
r_6(x) = x^6 mod g(x) = 1 + x^2     -> 101
H = [1 0 0
     0 1 0
     0 0 1
     1 1 0
     0 1 1
     1 1 1
     1 0 1]
1. To construct idempotent polynomials, use the cyclotomic cosets C_i.
3. Coro 4.4.4
An idempotent polynomial has the form I(x) = sum_i a_i c_i(x), a_i in {0,1}, where c_i(x) = sum_{j in C_i} x^j.
For n = 7:
C_0 = {0},                    so c_0(x) = x^0 = 1
C_1 = {1, 2, 4} = C_2 = C_4,  so c_1(x) = x + x^2 + x^4
C_3 = {3, 5, 6} = C_5 = C_6,  so c_2(x) = x^3 + x^5 + x^6
6. Theorem 4.4.13
Every cyclic code contains a unique idempotent polynomial which generates the code.
For n = 9 the idempotent polynomials I(x) include 1, x + x^2 + x^4 + x^5 + x^7 + x^8, x^3 + x^6, 1 + x + x^3 + x^4 + x^6 + x^7, 1 + x + x^2 + x^4 + x^5 + x^7 + x^8, ...; for each, the generator polynomial is g(x) = gcd(I(x), 1 + x^9) (for example 1, 1 + x^3, 1 + x + x^2, ...).
3. Theorem 4.5.2
C: a linear cyclic code of length n and dimension k with generator g(x).
If 1 + x^n = g(x)h(x), then the dual code is a linear cyclic code of dimension n - k with generator x^k h(x^-1).
5. Eg 4.5.4
g(x) = 1 + x + x^2, n = 6, k = 6 - 2 = 4
h(x) = (1 + x^6)/(1 + x + x^2) = 1 + x + x^3 + x^4
The generator for the dual code is g'(x) = x^4 h(x^-1) = 1 + x + x^3 + x^4.
Coding gain:
For a given bit-error probability, the coding gain is the reduction in Eb/N0 that can be realized through the use of the code:
G [dB] = (Eb/N0)_u [dB] - (Eb/N0)_c [dB]
[Figure: bit-error probability versus Eb/N0 (dB) for coded and uncoded transmission; the horizontal gap between the curves is the coding gain.]
Channel models
Discrete memoryless channels: discrete input, discrete output
Binary symmetric channels: binary input, binary output
Gaussian channels: discrete input, continuous output
Binary field:
The set {0,1}, under modulo-2 addition and multiplication, forms a field.
Addition (XOR):        Multiplication (AND):
0 + 0 = 0              0 . 0 = 0
0 + 1 = 1              0 . 1 = 0
1 + 0 = 1              1 . 0 = 0
1 + 1 = 0              1 . 1 = 1
Fields:
For all a, b, c in F:
a + b = b + a in F
a . b = b . a in F
a . (b + c) = (a . b) + (a . c)
Vector space:
Let V be a set of vectors and F a field of elements called scalars. V forms a vector space over F if:
1. Commutativity: u, v in V => u + v = v + u in V
2. Closure: a in F, v in V => a.v in V
3. Distributivity: (a + b).v = a.v + b.v and a.(u + v) = a.u + a.v
4. Associativity: a, b in F, v in V => (a.b).v = a.(b.v)
5. For all v in V, 1.v = v
Vector subspace:
A subset S of the vector space V_n is called a subspace if:
the all-zero vector is in S;
the sum of any two vectors in S is also in S.
A collection of vectors G = {v_1, v_2, ..., v_n}, the linear combinations of which include all vectors in a vector space V, is said to be a spanning set for V, or to span V.
Bases:
A spanning set for V that has minimal cardinality is called a basis for V. (The cardinality of a set is the number of objects in the set.)
Linear block codes
A set C in V_n with cardinality 2^k is called a linear block code if, and only if, it is a k-dimensional subspace of the vector space V_n.
Members of C are called codewords.
The all-zero codeword is a codeword.
Any linear combination of codewords is a codeword.
[Figure: encoding as a one-to-one mapping from V_k onto C, a subspace of V_n, defined by the bases of C.]
Data block (k bits) -> Channel encoder -> Codeword (n bits = k message bits + n-k redundant bits)
Code rate: R_c = k/n
Error-detection capability: e_d = d_min - 1
Error-correction capability: t = floor((d_min - 1)/2)
[Figure: binary symmetric channel between transmitted and received bits, with crossover probability p.]
For coded M-ary modulation the channel crossover probability is given by the usual Q-function expression with the energy per coded bit
E_c = R_c E_b
used in place of E_b.
Encoding is a mapping from V_k to C defined by the bases of C, {V_1, V_2, ..., V_k}:
G = [ V_1 ]   [ v_11 v_12 ... v_1n
    [ V_2 ] =   v_21 v_22 ... v_2n
    [ ... ]     ...
    [ V_k ]     v_k1 v_k2 ... v_kn ]
U = mG:
(u_1, u_2, ..., u_n) = (m_1, m_2, ..., m_k) . [ V_1 ; V_2 ; ... ; V_k ]
(u_1, u_2, ..., u_n) = m_1 V_1 + m_2 V_2 + ... + m_k V_k
Example: (6,3) code with
G = [ V_1 ]   [ 1 1 0 1 0 0
    [ V_2 ] =   0 1 1 0 1 0
    [ V_3 ]     1 0 1 0 0 1 ]
message | codeword
000     | 000000
100     | 110100
010     | 011010
110     | 101110
001     | 101001
101     | 011101
011     | 110011
111     | 000111
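The codeword table can be regenerated with a few lines of Python (an illustrative sketch, not part of the slides):

```python
G = [
    [1, 1, 0, 1, 0, 0],   # V1
    [0, 1, 1, 0, 1, 0],   # V2
    [1, 0, 1, 0, 0, 1],   # V3
]

def encode(m):
    """U = mG over GF(2): XOR together the rows of G selected by the message bits."""
    u = [0] * len(G[0])
    for bit, row in zip(m, G):
        if bit:
            u = [a ^ b for a, b in zip(u, row)]
    return u

for m in [(0,0,0), (1,0,0), (0,1,0), (1,1,0), (0,0,1), (1,0,1), (0,1,1), (1,1,1)]:
    print(m, "".join(map(str, encode(m))))   # e.g. (1,1,0) -> 101110, as in the table
```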
Systematic form:
G = [ P | I_k ]
where I_k is the k x k identity matrix and P is a k x (n-k) matrix.
U = (u_1, u_2, ..., u_n) = (p_1, p_2, ..., p_{n-k}, m_1, m_2, ..., m_k)
      (parity bits)            (message bits)
The parity-check matrix H satisfies G H^T = 0.
[Block diagram: Format -> Channel encoding -> Modulation -> Channel -> Demodulation/Detection -> Channel decoding -> Format -> Data sink]
Received vector: r = U + e
Syndrome testing: S is the syndrome of r, corresponding to the error pattern e:
S = r H^T = e H^T
Standard array
1. For row i = 2, 3, ..., 2^(n-k), find a vector in V_n of minimum weight which is not already listed in the array.
2. Call this pattern e_i and form the i-th row as the corresponding coset:
U_1 (= 0)     U_2               ...  U_{2^k}              <- codewords
e_2           e_2 + U_2         ...  e_2 + U_{2^k}
...
e_{2^(n-k)}   e_{2^(n-k)} + U_2 ...  e_{2^(n-k)} + U_{2^k}
(the first column contains the coset leaders)
Note that U_hat = r + e_hat = (U + e) + e_hat = U + (e + e_hat).
If e_hat = e, the error is corrected.
If e_hat != e, an undetectable decoding error occurs.
[Table: the standard array for the (6,3) code - the first row lists the eight codewords (000000, 110100, 011010, 101110, 101001, 011101, 110011, 000111) and each subsequent row is a coset whose leader (a correctable error pattern) appears in the first column.]
Coset leaders and their syndromes:
error pattern | syndrome
000000        | 000
000001        | 101
000010        | 011
000100        | 110
001000        | 001
010000        | 010
100000        | 100
010001        | 111
Example: U = (101110) is transmitted and r = (001110) is received.
The syndrome of r is computed:
S = r H^T = (001110) H^T = (100).
The error pattern corresponding to this syndrome is e_hat = (100000), so the corrected vector is
U_hat = r + e_hat = (001110) + (100000) = (101110) = U.
Hamming codes
Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
Hamming codes are expressed as a function of a single integer m >= 2.
Code length: n = 2^m - 1
Example: systematic Hamming code (7,4)
H = [1 0 0 0 1 1 1     = [ I_{3x3} | P^T ]
     0 1 0 1 0 1 1
     0 0 1 1 1 0 1]
G = [0 1 1 1 0 0 0     = [ P | I_{4x4} ]
     1 0 1 0 1 0 0
     1 1 0 0 0 1 0
     1 1 1 0 0 0 1]
Cyclic codes
U = (u_0, u_1, u_2, ..., u_{n-1}),  U^(i) = i-th cyclic shift of U.
Example: U = (1101): U^(1) = (1110), U^(2) = (0111), U^(3) = (1011), U^(4) = (1101) = U.
In polynomial form, U(X) = u_0 + u_1 X + u_2 X^2 + ... + u_{n-1} X^(n-1) (degree n-1), and
X U(X) = u_{n-1} + u_0 X + u_1 X^2 + ... + u_{n-2} X^(n-1) + u_{n-1}(X^n + 1),
so
U^(1)(X) = X U(X) modulo (X^n + 1).
By extension,
U^(i)(X) = X^i U(X) modulo (X^n + 1).
A generator matrix of the cyclic code has rows g(X), X g(X), ..., X^(k-1) g(X):
G = [ g_0 g_1 ... g_r 0   ... 0
      0   g_0 g_1 ... g_r ... 0
      ...
      0   ... 0   g_0 g_1 ... g_r ]
1. Example: m = (1011), n = 7, k = 4, n - k = 3.
m = (1011) => m(X) = 1 + X^2 + X^3
X^(n-k) m(X) = X^3 m(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6
Divide X^(n-k) m(X) by g(X) = 1 + X + X^3:
X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)(1 + X + X^3) + 1
                    quotient q(X)     generator g(X)   remainder p(X) = 1
In systematic form, the generator and parity-check matrices of this (7,4) cyclic code are
G = [1 1 0 1 0 0 0     = [ P | I_{4x4} ]
     0 1 1 0 1 0 0
     1 1 1 0 0 1 0
     1 0 1 0 0 0 1]
H = [1 0 0 1 0 1 1     = [ I_{3x3} | P^T ]
     0 1 0 1 1 1 0
     0 0 1 0 1 1 1]
Syndrome computation for cyclic codes: dividing the received polynomial by g(X),
r(X) = q(X)g(X) + S(X),
the remainder S(X) is the syndrome.
[Figure: bit-error probability P_B versus Eb/N0 (dB) for 8-PSK and QPSK.]
A systematic generator matrix has the form
G = [ P | I_k ] = [ p_11 p_12 ... p_1,(n-k)  1 0 ... 0
                    p_21 p_22 ... p_2,(n-k)  0 1 ... 0
                    ...
                    p_k1 p_k2 ... p_k,(n-k)  0 0 ... 1 ]
and the codeword U = (u_1, ..., u_n) = mG has
u_i = m_1 p_1i + m_2 p_2i + ... + m_k p_ki   for i = 1, ..., (n-k)        (parity bits)
u_i = m_{i-(n-k)}                            for i = (n-k)+1, ..., n      (message bits),
i.e. the systematic code vector is U = (p_1, p_2, ..., p_{n-k}, m_1, m_2, ..., m_k).
Example: For a (6,3) code the code vectors are described as
U = (m_1, m_2, m_3) . [1 1 0 1 0 0
                       0 1 1 0 1 0
                       1 0 1 0 0 1] = (u_1, u_2, u_3, u_4, u_5, u_6),
where the left 3x3 block is P and the right block is I_3.
The parity-check matrix satisfies G H^T = 0; hence
H = [ I_{n-k} | P^T ],
i.e. H^T consists of the (n-k) x (n-k) identity matrix stacked on top of P.
Syndrome Testing
Let r = (r_1, r_2, ..., r_n) be a received code vector (one of 2^n n-tuples) resulting from the transmission of U = (u_1, u_2, ..., u_n) (one of the 2^k codewords). Then
r = U + e,
where e is the error pattern. The syndrome of r is
S = r H^T = (U + e) H^T = U H^T + e H^T = e H^T,
since U H^T = 0 for all codewords.
An important property of linear block codes, fundamental to the decoding process, is that the mapping between correctable error patterns and syndromes is one-to-one.
Example:
Suppose the code vector U = [1 0 1 1 1 0] is transmitted and the vector r = [0 0 1 1 1 0] is received; note that one bit is in error. Find the syndrome vector S and verify that it is equal to e H^T.
The (6,3) code has the generator matrix G seen before:
G = [1 1 0 1 0 0
     0 1 1 0 1 0
     1 0 1 0 0 1]
so
H^T = [1 0 0
       0 1 0
       0 0 1
       1 1 0
       0 1 1
       1 0 1]
S = r H^T = [0 0 1 1 1 0] H^T = [1, 1+1, 1+1] = [1 0 0]
(syndrome of the corrupted code vector).
Now we verify that the syndrome of the corrupted code vector is the same as the syndrome of the error pattern:
S = e H^T = [1 0 0 0 0 0] H^T = [1 0 0]
(syndrome of the error pattern).
Error Correction
Since there is a one-to-one correspondence between correctable error patterns and syndromes, we can correct such error patterns.
Assume the 2^n n-tuples that represent possible received vectors are arranged in an array called the standard array:
1. The first row contains all the code vectors, starting with the all-zeros vector.
2. The first column contains all the correctable error patterns.
The standard array for an (n,k) code is:
U_1          U_2              ...  U_i              ...  U_{2^k}
e_2          U_2 + e_2        ...  U_i + e_2        ...  U_{2^k} + e_2
...
e_j          U_2 + e_j        ...  U_i + e_j        ...  U_{2^k} + e_j
...
e_{2^(n-k)}  U_2 + e_{2^(n-k)} ... ...              ...  U_{2^k} + e_{2^(n-k)}
Each row is a coset.
Syndrome of a Coset
If e_j is the coset leader of the j-th coset, every vector in that coset has the same syndrome, e_j H^T.
Error-Correction Decoding
The procedure for error-correction decoding is as follows:
1. Calculate the syndrome of r: S = r H^T.
2. Locate the coset leader e_j whose syndrome equals S.
3. This error pattern is assumed to be the corruption caused by the channel.
4. The corrected received vector is U_hat = r + e_j.
Example: standard array for the (6,3) code.
[Table: the first row lists the eight code vectors (starting with 000000) and each subsequent row is a coset; the eight coset leaders in the first column are the correctable error patterns.]
The valid code vectors are the eight vectors in the first row, and the correctable error patterns are the eight coset leaders in the first column.
Decoding will be correct if and only if the error pattern caused by the channel is one of the coset leaders.
We now compute the syndrome corresponding to each of the correctable error sequences by computing e_j H^T for each coset leader:
error pattern | syndrome
000000        | 000
000001        | 101
000010        | 011
000100        | 110
001000        | 001
010000        | 010
100000        | 100
010001        | 111
Error Correction
We receive the vector r and calculate its syndrome S. We then use the syndrome look-up table to find the corresponding error pattern. This error pattern is an estimate of the error; we denote it e_hat.
The decoder then adds e_hat to r to obtain an estimate of the transmitted code vector:
U_hat = r + e_hat = (U + e) + e_hat = U + (e + e_hat).
If the estimated error pattern is the same as the actual error pattern, that is, if e_hat = e, then U_hat = U.
Example
Assume the code vector U = [1 0 1 1 1 0] is transmitted and the vector r = [0 0 1 1 1 0] is received.
The syndrome of r is computed as S = [0 0 1 1 1 0] H^T = [1 0 0].
From the look-up table, 100 has the corresponding error pattern e_hat = [1 0 0 0 0 0].
The corrected vector is U_hat = r + e_hat = [0 0 1 1 1 0] + [1 0 0 0 0 0] = [1 0 1 1 1 0] (corrected).
In this example the actual error pattern is the estimated error pattern, hence U_hat = U.
Introduction
Error Control Coding (ECC)
Extra bits are added to the data at the transmitter
(redundancy) to permit error detection or
correction at the receiver
Done to prevent the output of erroneous bits
despite noise and other imperfections in the
channel
The positions of the error control coding and
decoding are shown in the transmission model
Transmission Model
[Block diagram: Digital source -> Source encoder -> Error control coding -> Line coding -> Modulator (transmit filter, etc.) -> Channel Hc(w) + noise N(w) -> Demodulator (receive filter, etc.) -> Line decoding -> Error control decoding -> Source decoder -> Digital sink.]
Error Models
Binary Symmetric Memoryless Channel
Assumes transmitted symbols are binary.
Errors affect 0s and 1s with equal probability (i.e., symmetric).
Errors occur randomly and are independent from bit to bit (memoryless).
[Figure: BSC - an input 0 or 1 is delivered unchanged with probability 1-p and flipped with probability p.]
p is the probability of bit error, or the Bit Error Rate (BER), of the channel.
Error Models
Many other types
Burst errors, i.e., contiguous bursts of bit errors
output from DFE (error propagation)
common in radio channels
Insertion, deletion and transposition errors
We will consider mainly random errors
Block Codes
Dataword length k = 4, codeword length n = 7: this is a (7,4) block code with code rate 4/7.
For example, d = (1101), c = (1101001).
[Block diagram: source data is chopped into datawords (k bits); the channel coder maps each dataword to a codeword (n bits); the channel may add errors; the channel decoder recovers the dataword (k bits) and raises error flags.]
Parity Codes
Example of a simple block code: the Single Parity Check Code.
In this case n = k + 1, i.e., the codeword is the dataword with one additional bit. For even parity the additional bit is
q = sum_{i=1..k} d_i (mod 2).
Even parity examples:
(i) d = (10110), so c = (101101)
(ii) d = (11011), so c = (110110)
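A tiny illustrative Python sketch of even-parity encoding and checking (function names are my own):

```python
def add_even_parity(dataword):
    """Append one bit so that the total number of 1s in the codeword is even."""
    return dataword + [sum(dataword) % 2]

def check_even_parity(codeword):
    """Return True if no error (more precisely, no odd number of errors) is detected."""
    return sum(codeword) % 2 == 0

print(add_even_parity([1, 0, 1, 1, 0]))        # [1, 0, 1, 1, 0, 1] -> c = (101101)
print(check_even_parity([1, 0, 1, 1, 0, 1]))   # True  (accepted)
print(check_even_parity([1, 1, 1, 1, 0, 1]))   # False (single error detected)
```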
[Table: datawords and the corresponding even-parity codewords.]
Parity Codes
To decode: calculate the sum of the received bits in the block (mod 2).
If the sum is 0 for even parity (1 for odd parity), the dataword is the first k bits of the received codeword; otherwise an error is flagged.
The code can detect single errors but cannot correct them, since the error could be in any bit. For example, if the received codeword is (100000), the transmitted codeword could have been (000000) or (110000), with the error being in the first or second place respectively. Note the error could also lie in other positions, including the parity bit.
This is known as a single-error-detecting (SED) code. It is only useful if the probability of getting 2 errors is small, since parity then becomes correct again. It is used in serial communications: low overhead but not very powerful. The decoder can be implemented efficiently using a tree of XOR gates.
Hamming Distance
[Figure: valid codewords (X) separated by invalid words (O); X is a valid codeword, O is an invalid codeword.]
The maximum number of detectable errors is d_min - 1.
The maximum number of correctable errors is given by
t = floor((d_min - 1)/2).
[Table: the list of generator-row codewords of a (7,4) code.]
So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in the list are added together, giving 1011010. This process will now be described in more detail.
The generator matrix is
G = [ a_11 a_12 ... a_1n     [ a_1
      a_21 a_22 ... a_2n  =    a_2
      ...                      ...
      a_k1 a_k2 ... a_kn ]     a_k ]
Thus,
c = dG = sum_{i=1..k} d_i a_i.
Linearity: if d_3 = d_1 + d_2, then
c_3 = sum_i d_3i a_i = sum_i (d_1i + d_2i) a_i = sum_i d_1i a_i + sum_i d_2i a_i = c_1 + c_2,
and the all-zeros dataword maps to c = sum_i 0 . a_i = 0.
[Worked example: with generator rows such as a_1 = [1011] and a_2 = [0101], each codeword is the modulo-2 sum of the rows selected by the dataword bits.]
Systematic Codes
For a systematic block code the dataword appears unaltered in the codeword, usually at the start.
The generator matrix has the structure G = [ I | P ], where I is the k x k identity matrix (ensuring the dataword appears at the beginning of the codeword) and P is a k x R matrix, with R = n - k. P is often referred to as the parity bits.
Decoding can be done with a look-up table (e.g. a ROM addressed by the received codeword). The data output is the error flag, i.e., 0 if the codeword is OK; if there is no error, the dataword is the first k bits of the codeword. For an error-correcting code the ROM can also store datawords.
Another possibility is algebraic decoding, i.e., the error flag is computed from the received codeword (as in the case of simple parity codes).
How can this method be extended to more complex error detection and correction codes?
Let the basis vectors (generator matrix H) denote the generator matrix for S_null, the null space, of dimension n - k = R. This matrix is called the parity-check matrix of the code defined by G, where G is the generator matrix for S_sub, of dimension k.
Note that the number of vectors in the basis defines the dimension of the subspace, so the dimension of H is n - k (= R), and all vectors in the null space are orthogonal to the rows of G:
H = [ b_11 b_12 ... b_1n     [ b_1
      b_21 b_22 ... b_2n  =    b_2
      ...                      ...
      b_R1 b_R2 ... b_Rn ]     b_R ]    (R = n - k)
Since c = sum_{i=1..k} d_i a_i, we have for each row b_j of H
b_j . c = b_j . sum_{i=1..k} d_i a_i = sum_{i=1..k} d_i (a_i . b_j) = 0.
This means that a codeword is valid (but not necessarily correct) only if c H^T = 0. To ensure this it is required that the rows of H are independent and are orthogonal to the rows of G; that is, the b_i span the remaining R (= n - k) dimensions of the codespace.
[Figure: geometric picture - the codewords c_1, c_2, c_3 lie in the subspace spanned by the generator rows a_1, a_2, while b_1 spans the orthogonal (null) space.]
Error Syndrome
For error-correcting codes we need a method to compute the required correction. To do this we use the error syndrome s of a received codeword c_r:
s = c_r H^T.
If c_r is corrupted by the addition of an error vector e, then c_r = c + e and
s = (c + e) H^T = c H^T + e H^T = 0 + e H^T,
so the syndrome depends only on the error.
Error Syndrome
That is, we can add the same error pattern to different codewords and get the same syndrome.
There are 2^(n-k) syndromes but 2^n error patterns. For example, for a (3,2) code there are 2 syndromes and 8 error patterns - clearly no error correction is possible in this case.
Another example: a (7,4) code has 8 syndromes and 128 error patterns. With 8 syndromes we can provide a different value to indicate single errors in any of the 7 bit positions, as well as the zero value to indicate no errors.
We now need to determine which error pattern caused the syndrome.
Error Syndrome
For systematic linear block codes, H is constructed as follows:
G = [ I | P ]  and so  H = [ -P^T | I ],
where I is the k x k identity for G and the R x R identity for H.
Example, (7,4) code, d_min = 3:
G = [ I | P ] = [1 0 0 0 0 1 1
                 0 1 0 0 1 0 1
                 0 0 1 0 1 1 0
                 0 0 0 1 1 1 1]
H = [ -P^T | I ] = [0 1 1 1 1 0 0
                    1 0 1 1 0 1 0
                    1 1 0 1 0 0 1]
[Worked example: two received codewords are multiplied by H^T above to give their syndromes s = c_r H^T; a zero syndrome indicates a valid codeword.]
Comments about H
c_r H^T = [c_r0, c_r1, ..., c_r,n-1] . [d_0 ; d_1 ; ... ; d_{n-1}] = c_r0 d_0 + c_r1 d_1 + ... + c_r,n-1 d_{n-1},
where d_i denotes the i-th column of H; the syndrome is therefore the sum of the columns of H selected by the nonzero bits of c_r.
Comments about H
For the example code, a codeword with min weight (dmin = 3) is
given by the first row of G, i.e., [1000011]
Now form linear combination of first and last 2 cols in H, i.e.,
[011]+[010]+[001] = 0
So need min of 3 columns (= dmin) to get a zero value of cHT in
this example
Standard Array
From the standard array we can find the most likely transmitted
codeword given a particular received codeword without having
to have a look-up table at the decoder containing all possible
codewords in the standard array
Not surprisingly it makes use of syndromes
Standard Array
The Standard Array is constructed as follows:
c_1 (all zero)  c_2        c_3        ...  c_M        | s_0
e_1             c_2+e_1    c_3+e_1    ...  c_M+e_1    | s_1
e_2             c_2+e_2    c_3+e_2    ...  c_M+e_2    | s_2
e_3             c_2+e_3    c_3+e_3    ...  c_M+e_3    | s_3
...
e_N             c_2+e_N    c_3+e_N    ...  c_M+e_N    | s_N
All patterns in a row have the same syndrome; different rows have distinct syndromes.
Standard Array
The standard array is formed by initially choosing e_i to be all 1-bit error patterns, then all 2-bit error patterns, ensuring that each error pattern not already in the array has a new syndrome. Stop when all syndromes are used.
Standard Array
[Block diagram of the proposed implementation: the received codeword c_r enters a syndrome-computation block (s = c_r H^T); the syndrome s addresses a look-up table that outputs the error pattern e; adding e to c_r yields the decoded codeword c.]
Standard Array
For the same received codeword c2 + e3, note that the
unique syndrome is s3
This syndrome identifies e3 as the corresponding error
pattern
So if we calculate the syndrome as described previously,
i.e., s = crHT
All we need to do now is to have a relatively small table
which associates s with their respective error patterns. In
the example s3 will yield e3
Finally we subtract (or equivalently add in modulo 2
arithmetic) e3 from the received codeword (c2 + e3) to yield
the most likely codeword, c2
Hamming Codes
We will consider a special class of SEC codes (i.e.,
Hamming distance = 3) where,
Number of parity bits R = n k and n = 2R 1
Syndrome has R bits
0 value implies zero errors
2R 1 other syndrome values, i.e., one for each bit
that might need to be corrected
This is achieved if each column of H is a different
binary word remember s = eHT
Hamming Codes
The systematic form of the (7,4) Hamming code is
G = [ I | P ] = [1 0 0 0 0 1 1
                 0 1 0 0 1 0 1
                 0 0 1 0 1 1 0
                 0 0 0 1 1 1 1]
H = [ -P^T | I ] = [0 1 1 1 1 0 0
                    1 0 1 1 0 1 0
                    1 1 0 1 0 0 1]
A non-systematic version in which the columns of H form a binary count is
H = [0 0 0 1 1 1 1
     0 1 1 0 0 1 1
     1 0 1 0 1 0 1]
Compared with the systematic code, the column orders of both G and H are swapped so that the columns of H are a binary count.
Hamming Codes
The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., column 1 in the non-systematic H is column 7 in the systematic H.
Hamming Codes
Double errors will always result in wrong bit being
corrected, since
A double error is the sum of 2 single errors
The resulting syndrome will be the sum of the
corresponding 2 single error syndromes
This syndrome will correspond with a third single
bit error
Consequently the corrected codeword will now
contain 3 bit errors, i.e., the original double bit
error plus the incorrectly corrected bit!
For a given channel bit error rate (BER), what is the BER after correction (assuming a memoryless channel, i.e., no burst errors)?
To do this we compute the probability of receiving 0, 1, 2, 3, ... errors, and then compute their effect.
Example: a (7,4) Hamming code with a channel BER of 1%, i.e., p = 0.01:
P(0 errors received) = (1 - p)^7 = 0.9321
P(1 error received) = 7p(1 - p)^6 = 0.0659
P(2 errors received) = (7*6/2) p^2 (1 - p)^5 = 0.002
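The same calculation in Python (an illustrative sketch). A single-error-correcting (7,4) code fails whenever 2 or more of the 7 bits are in error, so the block failure probability is 1 minus the first two terms:

```python
from math import comb

def prob_i_errors(n, i, p):
    return comb(n, i) * p**i * (1 - p)**(n - i)

n, p = 7, 0.01
p0 = prob_i_errors(n, 0, p)    # ~0.9321
p1 = prob_i_errors(n, 1, p)    # ~0.0659
p2 = prob_i_errors(n, 2, p)    # ~0.002
block_failure = 1 - p0 - p1    # probability that more than one error hits the block
print(p0, p1, p2, block_failure)
```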
Perfect Codes
If a codeword has n bits and we wish to correct up to t errors, how many parity bits (R) are needed? Clearly we need sufficient error syndromes (2^R of them) to identify all error patterns of up to t errors:
1 syndrome to represent 0 errors
n syndromes to represent all 1-bit errors
n(n-1)/2 syndromes to represent all 2-bit errors
nCe = n!/((n-e)! e!) syndromes to represent all e-bit errors
So,
2^R >= 1 + n                                to correct up to 1 error
2^R >= 1 + n + n(n-1)/2                     to correct up to 2 errors
2^R >= 1 + n + n(n-1)/2 + n(n-1)(n-2)/6     to correct up to 3 errors
If equality holds, the code is perfect.
The only known perfect codes are the SEC Hamming codes and the TEC Golay (23,12) code (d_min = 7). Using the previous equation yields:
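A quick numerical check of this equality (illustrative Python): the (7,4) and (15,11) Hamming codes meet the bound with t = 1, and the (23,12) Golay code meets it with t = 3.

```python
from math import comb

def is_perfect(n, k, t):
    """Binary sphere-packing bound with equality: sum_{i<=t} C(n,i) == 2^(n-k)."""
    return sum(comb(n, i) for i in range(t + 1)) == 2 ** (n - k)

print(is_perfect(7, 4, 1))     # True  -- (7,4) Hamming, corrects 1 error
print(is_perfect(15, 11, 1))   # True  -- (15,11) Hamming
print(is_perfect(23, 12, 3))   # True  -- (23,12) Golay: 1+23+253+1771 = 2048 = 2^11
print(is_perfect(23, 12, 2))   # False -- equality requires t = 3
```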
Summary
In this section we have
Used block codes to add redundancy to messages
to control the effects of transmission errors
Encoded and decoded messages using Hamming
codes
Determined overall bit error rates as a function of
the error control strategy
Agenda
Shannon Theory
History of Error Correction Codes
Linear Block Codes
Decoding
Convolution Codes
Multiple-Access Techniques
Capacity of Multiple Access
Random Access Methods
Shannon Theory:
R < C => reliable communication is possible. Redundancy (parity bits) in the transmitted data stream provides error-correction capability.
Encoding: block codes, convolutional codes.
Decoding: hard decoding or soft decoding (digital or analog channel information).
Muller (1954):
Combinatorial Digital Function and
Error Correction Code
Elias (1954): Tree Code, Convolutional Code
Reed and Solomon (1960):
Reed-Solomon Code (Maximal Separable Code)
Hocquenghem (1959) and Bose and
Chaudhuri (1960):
BCH Code (Multiple Error Correction)
Peterson (1960):
Binary BCH Decoding, Error Location Polynomial
Basics of Decoding
t errors are correctable when 2t + 1 <= d_min.
n: code length; k: number of information bits; d_min: minimum distance; k/n: coding rate.
Arithmetic operations (+, -, *, /) for encoding and decoding are performed over a finite field GF(Q), where Q = p^r, p a prime number and r a positive integer.
Example GF(2):
addition (XOR):  + | 0 1        multiplication (AND):  * | 0 1
                 0 | 0 1                               0 | 0 0
                 1 | 1 0                               1 | 0 1
[Encoder]  C = XG
[Decoder]  s = r H^t = (c + e) H^t = e H^t; the decoding process maps s to an estimate of e.
[Minimum Distance]
Singleton Bound: any d_min - 1 columns of H are linearly independent, hence
d_min <= n - k + 1 (Singleton bound).
Maximum distance separable code: d_min = n - k + 1, e.g. Reed-Solomon codes.
Easy Encoding: Cyclic Codes
C = (c_{n-1}, ..., c_0) is a codeword => (c_{n-2}, ..., c_0, c_{n-1}) is also a codeword.
Codeword polynomial: C(p) = c_{n-1} p^(n-1) + ... + c_0.
Soft-decision decoding:
r = s + n = (s_1, ..., s_n) + (n_1, ..., n_n) = (r_1, ..., r_n)
Maximizing the likelihood Prob{r | s} is equivalent to minimizing the distance ||r - s||, i.e. to maximizing the correlation <r, C_i> over the codewords C_i (i-th codeword).
Coding gain (soft decision, approximate):
Cg = 10 log10[ Rc d_min - (k ln 2)/gamma_b ]  [dB]
Hard-Decision Decoding
Discrete-time channel = modulator + AWGN channel + demodulator, modeled as a BSC with crossover probability
p = Q(sqrt(2 gamma_b Rc))   for coherent PSK,
p = Q(sqrt(gamma_b Rc))     for coherent FSK.
Maximum-Likelihood Decoding <=> Minimum-Distance Decoding
Syndrome calculation with the parity-check matrix H:
S = Y H^t = (C_m + e) H^t = e H^t,
where C_m is the transmitted codeword, Y the received codeword at the demodulator output, and e the binary error vector.
There is roughly a 2 dB difference between hard-decision and soft-decision decoding performance.
Bounds relating code rate to the relative minimum distance A = d_min/n:
Rc <= 1 + A log2 A + (1 - A) log2(1 - A) = 1 - H(A)  (upper bound)
Gilbert-Varshamov lower bound: Rc >= 1 - H(d_min/n).
Convolution Codes
The performance of convolution codes exceeds that of comparable block codes, as shown by the analysis of Viterbi's algorithm:
P(e) ~ 2^(-n E(R)),
and for soft-decision Viterbi decoding
P_e ~ sum_{d >= d_free} a_d Q(sqrt(2 gamma_b Rc d)).
Turbo Coding
2006/07/07
304
RSC Encoder
2006/07/07
305
2006/07/07
306
Multi-user Communications
Normalized capacity per unit bandwidth:
C_n = C/W = log2(1 + C_n Eb/N0),
where W is the bandwidth, Eb the energy per bit, and N0 the noise power spectral density.
As C_n -> 0, the required Eb/N0 approaches ln 2 = 1/log2 e (the Shannon limit).
Chapter 10: Error Detection and Correction
10.1 INTRODUCTION
Error detection/correction
Error detection
Error correction
Modular Arithmetic
10.2 BLOCK CODING
Example 10.1
Hamming Distance
Example 10.5
Solution: We first find all Hamming distances. The d_min in this case is 2.
Example 10.6
Solution: We first find all the Hamming distances.
10.340
Example 10.7
The minimum Hamming distance for our first code scheme (Table 10.1) is 2. This code guarantees
detection of only a single error.
For example, if the third codeword (101) is sent and one error occurs, the received codeword does not
match any valid codeword. If two errors occur, however, the received codeword may match a valid
codeword and the errors are not detected.
Example 10.8
Example 10.9
10.3 LINEAR BLOCK CODES
Example 10.10
Example 10.12
Figure 10.12: The structure of the encoder and decoder for a Hamming code.
Encoder (parity bits):       Checker (syndrome bits):
r0 = a2 + a1 + a0            s0 = b2 + b1 + b0 + q0
r1 = a3 + a2 + a1            s1 = b3 + b2 + b1 + q1
r2 = a1 + a0 + a3            s2 = b1 + b0 + b3 + q2
Example 10.13
Let us trace the path of three datawords from the sender to the destination:
1. The dataword 0100 becomes the codeword 0100011. The codeword 0100011 is received. The syndrome is 000; the final dataword is 0100.
2. The dataword 0111 becomes the codeword 0111001. The codeword 0011001 is received. The syndrome is 011. After flipping b2 (changing the 1 to 0), the final dataword is 0111.
3. The dataword 1101 becomes the codeword 1101000. The codeword 0001000 is received. The syndrome is 101. After flipping b0, we get 0000, the wrong dataword. This shows that our code cannot correct two errors.
10.4 CYCLIC CODES
10.5 CHECKSUM
Example 10.18
Example 10.19
Example 10.20
Solution: The number 21 in binary is 10101 (it needs five bits). We can wrap the leftmost bit and add it to the four rightmost bits. We have (0101 + 1) = 0110 or 6.
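The wrap-around step (one's-complement style addition) shown in this example can be written as a small Python helper (illustrative):

```python
def wrap_add(value, bits=4):
    """Fold any carry bits above `bits` back into the low bits (one's-complement addition)."""
    mask = (1 << bits) - 1
    while value > mask:
        value = (value & mask) + (value >> bits)
    return value

print(wrap_add(21))        # 21 = 0b10101 -> 0b0101 + 0b1 = 0b0110 = 6, as in the example
print(bin(wrap_add(21)))
```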
Example 10.21
Note: Sender site: ...
Note: Receiver site: ...
Example 10.23
Hossein Pishro-Nik
Outline
[Figure: data stored on a disk track/sector as a bit stream such as 010101011110010101001.]
Sender -> BSC -> Receiver: information bits (e.g. 1001010101) are corrupted by the binary symmetric channel (crossover probability p, 0 < p < 0.5) into, e.g., 1011000101.
[Photos: Richard Hamming and Claude Shannon.]
Encoder: repeat each bit three times:
(y1, y2, y3) = (x1, x1, x1)  (codeword)
The codeword passes through the BSC; the decoder applies majority voting to the corrupted codeword (z1, z2, z3).
Example: (0) -> (0,0,0) -> BSC -> (1,0,0) -> majority vote -> (0): successful decoding. If two or three bits are flipped, a decoding error occurs.
Decoding error probability:
p_e = Prob{2 or 3 bits in the codeword received in error} = p^3 + 3p^2(1 - p);
for p = 0.01, p_e ~ 3 * 10^-4.
Advantage: reduced bit error rate. Disadvantage: we lose bandwidth because each bit is sent three times.
In general, an (n,k) block code is a one-to-one mapping
(x1, x2, ..., xk) -> (y1, y2, ..., yn),  n > k,
from k information bits to an n-bit codeword; the corrupted codeword (z1, z2, ..., zn) is decoded back to (x1, x2, ..., xk).
Code rate:
R = k/n = dimension / code length,  0 <= R <= 1.
R shows the amount of redundancy in the codeword: higher R means lower redundancy.
For the repetition code, k = 1 and n = 3, so R = 1/3.
A block code of dimension k has 2^k valid codewords out of the 2^n possible n-tuples.
[Table: the 8 valid codewords of a (5,3) example code, one for each of the 2^3 messages, e.g. (0 1 1) -> (0 1 0 1 1) and (1 0 0) -> (1 0 1 1 1).]
[Figure: the 2^k valid codewords are spread among the 2^n n-tuples.]
Good Codes:
High rates = lower redundancy (depends on the
channel error rate p)
Low error rate at the decoder
Simple and practical encoding and decoding
Linear block codes are defined by a generator matrix G:
(y1, y2, ..., yn) = (x1, x2, ..., xk) . [ g11 g12 ... g1n
                                          g21 g22 ... g2n
                                          ...
                                          gk1 gk2 ... gkn ]
Important families: Hamming codes, cyclic codes, Reed-Solomon codes, BCH codes.
Channel Capacity
(x1, ..., xk) -> Encoder -> (y1, ..., yn) -> noisy channel -> (z1, ..., zn) -> Decoder -> (x1, ..., xk)
Shannon Codes
[Figure: the 2^k valid codewords spread among the 2^n n-tuples.]
398
399
t-Error-Correcting Codes
The repetition codes can correct one error in the
codeword; however, it fails to correct higher number
of errors.
Minimum Distance
The minimum distance of a code is the minimum
Hamming distance between its codewords:
402
d min . 3
403
Iterative Decoding
[Figure: a noisy channel (BSC or BEC) produces corrupted bits such as 10e10e01e1; over the binary erasure channel (BEC) a transmitted bit is either received correctly or erased (output e).]
For the BEC, a parity check such as y1 + y2 + y3 = 0 over the code bits y1, ..., yn can be used to recover an erased bit from the others.
Shokrollahi et al.: capacity-achieving LDPC codes for the binary erasure channel (BEC).
Received word: 01101001 -> 01e0ee01 (e = erasure).
Standard Iterative Algorithm:
Repeat for every check node:
{ if only one of its neighbors is missing, recover it from the others }
[Figure: a check node connected to neighbors y1, y2, y3; if y3 is erased it is recovered as y3 = y1 + y2 = 0 + 1 = 1.]
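A minimal Python sketch of this peeling rule on the binary erasure channel (illustrative; the parity checks below are a toy example, not the code used in the slides):

```python
def peel_decode(received, checks, max_iters=100):
    """received: list of 0/1/None (None = erasure); checks: list of index tuples,
    each requiring the XOR of its positions to be 0. Repeatedly fill any check
    that has exactly one erased neighbor."""
    y = list(received)
    for _ in range(max_iters):
        progress = False
        for check in checks:
            missing = [i for i in check if y[i] is None]
            if len(missing) == 1:
                i = missing[0]
                y[i] = sum(y[j] for j in check if j != i) % 2   # recover from the others
                progress = True
        if not progress:
            break
    return y

# Toy example: 6 bits, three parity checks.
checks = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
received = [0, 1, None, None, 1, None]
print(peel_decode(received, checks))   # all erasures recovered, checks satisfied
```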
Each erased bit is recovered in turn (e = 0, e = 1, e = 1): decoding is successful.
Algorithm A (cont.): decoding fails exactly when the erased positions contain a stopping set S.
[Figure: BER versus SNR - capacity-approaching LDPC codes suffer from an error-floor problem, with the BER flattening out around 1e-7 to 1e-9.]
[Figure: Tanner graph with variable nodes and check nodes c1, c2, ..., ck.]
Ensemble properties: threshold effect, concentration theorem, density evolution, stability condition (BEC).
Design Methodology
The performance of the decoder is not directly related to the minimum distance.
[Figure: performance on the VHM channel - a rate-0.85 LDPC code with average degree 6; the gap from capacity at BER 1e-9 is 0.6 dB, shown for block lengths n = 1e4 and n = 1e5.]
Storage Capacity
Information-theoretic capacity for soft-decision decoding: 0.95 Gb
LDPC, soft decision: 0.84 Gb
LDPC, hard decision: 0.76 Gb
RS, hard decision:   0.52 Gb
[Figure: achievable storage versus number of pages (2000-6000).]
Conclusion
Carefully designed LDPC codes can result in
significant increase in the storage capacity.
By incorporating channel information in
design of LDPC codes
- Small gap from capacity
- Error floor reduction
- More efficient decoding
425
426
Chapter 4: Digital Transmission
Some Characteristics
Line Coding Schemes
Some Other Schemes
Figure 4.1: Line coding
Figure 4.2: Signal level versus data level
Figure 4.3: DC component
Example 1
A signal has two data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:
Pulse rate = 1/10^-3 = 1000 pulses/s
Bit rate = pulse rate x log2 L = 1000 x log2 2 = 1000 bps
Example 2
A signal has four data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:
Pulse rate = 1/10^-3 = 1000 pulses/s
Bit rate = pulse rate x log2 L = 1000 x log2 4 = 2000 bps
Example 3
In a digital transmission, the receiver clock is 0.1 percent faster than the sender clock. How many extra bits per second does the receiver receive if the data rate is 1 Kbps? How many if the data rate is 1 Mbps?
Solution
At 1 Kbps: 1000 bits sent, 1001 bits received -> 1 extra bps.
At 1 Mbps: 1,000,000 bits sent, 1,001,000 bits received -> 1000 extra bps.
Figure 4.5: Line coding schemes
Figure 4.6: Unipolar encoding
Figure 4.7: Types of polar encoding
Note: In NRZ-L the level of the signal depends on the state of the bit.
Note: In NRZ-I the signal is inverted if a 1 is encountered.
Figure 4.8: NRZ-L and NRZ-I encoding
Figure 4.9: RZ encoding
Note: A good encoded digital signal must contain a provision for synchronization.
Figure 4.10: Manchester encoding
Note: In Manchester encoding, the transition at the middle of the bit is used for both synchronization and bit representation.
Figure 4.12: Bipolar AMI encoding
Figure 4.13: 2B1Q
Figure 4.14: MLT-3 signal
Steps in Transformation
Some Common Block Codes
Figure 4.15: Block coding
4B/5B encoding:
Data  Code    Data  Code
0000  11110   1000  10010
0001  01001   1001  10011
0010  10100   1010  10110
0011  10101   1011  10111
0100  01010   1100  11010
0101  01011   1101  11011
0110  01110   1110  11100
0111  01111   1111  11101
Control codes:
Q (Quiet)            00000
I (Idle)             11111
H (Halt)             00100
J (start delimiter)  11000
K (start delimiter)  10001
T (end delimiter)    01101
S (Set)              11001
R (Reset)            00111
Figure 4.17 Example of 8B/6T encoding
4.3 Sampling
Figure 4.18 PAM
Note: Pulse amplitude modulation has some applications, but it is not used by itself in data communication. However, it is the first step in another very popular conversion method called pulse code modulation.
Figure 4.19 Quantized PAM signal
Figure 4.21 PCM
Note: According to the Nyquist theorem, the sampling rate must be at least 2 times the highest frequency.
Figure 4.23 Nyquist theorem
Example 4
What sampling rate is needed for a signal with a bandwidth of 10,000 Hz (1000 to 11,000 Hz)?
Solution
The sampling rate must be twice the highest frequency in the signal:
Sampling rate = 2 x 11,000 = 22,000 samples/s
Example 5
A signal is sampled. Each sample requires at least 12 levels of precision (+0 to +5 and -0 to -5). How many bits should be sent for each sample?
Solution
We need 4 bits: 1 bit for the sign and 3 bits for the value. A 3-bit value can represent 2^3 = 8 levels (000 to 111), which is more than what we need. A 2-bit value is not enough since 2^2 = 4. A 4-bit value is too much because 2^4 = 16.
Example 6
We want to digitize the human voice. What is the bit rate, assuming 8 bits per sample?
Solution
The human voice normally contains frequencies from 0 to 4000 Hz.
Sampling rate = 4000 x 2 = 8000 samples/s
Bit rate = sampling rate x number of bits per sample = 8000 x 8 = 64,000 bps = 64 Kbps
Parallel Transmission
Serial Transmission
Figure 4.24 Data transmission
Note: In synchronous transmission, we send bits one after another without start/stop bits or gaps. It is the responsibility of the receiver to group the bits.
PART III: Data Link Layer
Chapters:
Chapter 10
Chapter 11
Chapter 12 Point-To-Point Access
Chapter 13 Multiple Access
Chapter 14
Chapter 15 Wireless LANs
Chapter 16 Connecting LANs
Chapter 17
Chapter 18
Chapter 10: Error Detection and Correction
Note: Data can be corrupted during transmission. For reliable communication, errors must be detected and corrected.
Single-Bit Error
Burst Error
Note: In a single-bit error, only one bit in the data unit has changed.
10.1 Single-bit error
Note: A burst error means that 2 or more bits in the data unit have changed.
10.2 Burst error
10.2 Detection
Redundancy
Parity Check
Cyclic Redundancy Check (CRC)
Checksum
Note: Error detection uses the concept of redundancy, which means adding extra bits for detecting errors at the destination.
10.3 Redundancy
10.4 Detection methods
10.5 Even-parity concept
Note: In parity check, a parity bit is added to every data unit so that the total number of 1s is even (or odd for odd-parity).
Example 1
Suppose the sender wants to send the word "world". In ASCII the five characters are coded as
1110111 1101111 1110010 1101100 1100100
The following shows the actual bits sent (an even-parity bit is appended to each character):
11101110 11011110 11100100 11011000 11001001
Example 2
Now suppose the word "world" in Example 1 is received by the receiver without being corrupted in transmission.
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers (6, 6, 4, 4, 4). The data are accepted.
Example 3
Now suppose the word "world" in Example 1 is corrupted during transmission.
11111110 11011110 11101100 11011000 11001001
The receiver counts the 1s in each character and comes up with even and odd numbers (7, 6, 5, 4, 4). The receiver knows that the data are corrupted, discards them, and asks for retransmission.
Note:
Simple parity check can detect all single-bit errors. It can detect burst
errors only if the total number of errors in each data unit is odd.
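A minimal Python sketch of the even-parity scheme used in Examples 1-3 (the helper names are ours):

# Even parity: append one bit so the total number of 1s is even.
def add_even_parity(bits7: str) -> str:
    parity = bits7.count("1") % 2          # 0 if already even, else 1
    return bits7 + str(parity)

def parity_ok(bits8: str) -> bool:
    return bits8.count("1") % 2 == 0       # receiver re-counts the 1s

chars = ["1110111", "1101111", "1110010", "1101100", "1100100"]  # w o r l d
sent = [add_even_parity(c) for c in chars]
print(sent)                                # 11101110 11011110 ... 11001001
print(all(parity_ok(c) for c in sent))     # True: the data are accepted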
10.6 Two-dimensional parity
Example 4
Suppose the following block is sent:
10101001 00111001 11011101 11100111 10101010
However, it is hit by a burst noise of length 8, and some bits are corrupted.
10100011 10001001 11011101 11100111 10101010
When the receiver checks the parity bits, some of the bits do not follow the even-parity rule and the whole block is discarded.
10.10 A polynomial

Name     Polynomial                          Application
CRC-8    x^8 + x^2 + x + 1                   ATM header
CRC-10   x^10 + x^9 + x^5 + x^4 + x^2 + 1    ATM AAL
ITU-16   x^16 + x^12 + x^5 + 1               HDLC
ITU-32                                       LANs
Example 5
It is obvious that we cannot choose x (binary 10) or x^2 + x (binary 110) as the polynomial because both are divisible by x. However, we can choose x + 1 (binary 11) because it is not divisible by x, but is divisible by x + 1. We can also choose x^2 + 1 (binary 101) because it is divisible by x + 1 (binary division).
Example 6
The CRC-12
x^12 + x^11 + x^3 + x + 1
which has a degree of 12, will detect all burst errors affecting an odd number of bits, will detect all burst errors with a length less than or equal to 12, and will detect, 99.97 percent of the time, burst errors with a length of 12 or more.
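Modulo-2 division by the generator polynomial can be sketched in Python. The 4-bit divisor below (x^3 + x + 1, binary 1011) is only an illustrative choice, not one of the standard polynomials in the table:

# CRC generation by modulo-2 (XOR) long division (sketch).
def crc_remainder(data: str, divisor: str) -> str:
    n = len(divisor) - 1
    padded = list(data + "0" * n)              # append n zeros (degree of divisor)
    for i in range(len(data)):
        if padded[i] == "1":                   # leading terms cancel: subtract divisor
            for j, d in enumerate(divisor):
                padded[i + j] = str(int(padded[i + j]) ^ int(d))
    return "".join(padded[-n:])                # remainder = CRC bits

data = "1101011011"
crc = crc_remainder(data, "1011")              # "100" with this divisor
codeword = data + crc
# Dividing a multiple of the divisor (shifted or not) leaves a zero remainder:
print(crc, crc_remainder(codeword, "1011"))    # 100 000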
10.12 Checksum
Note:
The sender follows these steps:
The unit is divided into k sections, each of n bits.
All sections are added using one's complement arithmetic to get the sum.
The sum is complemented and becomes the checksum.
The checksum is sent with the data.
Note:
The receiver follows these steps:
The unit is divided into k sections, each of n bits.
All sections are added using one's complement arithmetic to get the sum.
The sum is complemented.
If the result is zero, the data are accepted; otherwise, they are rejected.
Example 7
Suppose the following block of 16 bits is to be sent using a checksum of 8 bits.
10101001 00111001
The numbers are added using one's complement arithmetic:
  10101001
  00111001
Sum       11100010
Checksum  00011101
The pattern sent is 10101001 00111001 00011101
Example 8
Now suppose the receiver receives the pattern sent in Example 7 and there is no error.
10101001 00111001 00011101
When the receiver adds the three sections, it will get all 1s, which, after complementing, is all 0s and shows that there is no error.
  10101001
  00111001
  00011101
Sum        11111111
Complement 00000000   means that the pattern is OK.
Example 9
Now suppose there is a burst error of length 5 that affects 4 bits.
10101111 11111001 00011101
When the receiver adds the three sections, it gets
  10101111
  11111001
  00011101
Partial Sum  1 11000101
Carry        1
Sum          11000110
Complement   00111001   means that the pattern is corrupted.
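A short Python sketch of the 8-bit one's-complement checksum from Examples 7-9 (function names are ours):

# One's-complement checksum over 8-bit sections.
def ones_complement_sum(sections, bits=8):
    total, mask = 0, (1 << bits) - 1
    for s in sections:
        total += s
        total = (total & mask) + (total >> bits)   # wrap the carry around
    return total

def make_checksum(sections, bits=8):
    return (~ones_complement_sum(sections, bits)) & ((1 << bits) - 1)

sender = [0b10101001, 0b00111001]
checksum = make_checksum(sender)                   # 00011101, as in Example 7
ok = ones_complement_sum(sender + [checksum])      # all 1s, so its complement is 0
print(format(checksum, "08b"), format(ok, "08b"))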
10.3 Correction
Retransmission
Forward Error Correction
Burst Error Correction
[Table: number of data bits m, number of redundancy bits r, and total bits m + r.]
Chapter 11: Error-Control Coding
Chapter 11 goals
Chapter 11 contents
Introduction
Discrete Memoryless Channels
Linear Block Codes
Cyclic Codes
Maximum Likelihood decoding of Convolutional Codes
Trellis-Coded Modulation
Coding for Compound-Error Channels
Introduction
A cost-effective facility for transmitting information at a given rate and level of reliability and quality.
The reliability is governed by the signal energy per bit-to-noise power density ratio, and is achieved practically via error-control coding.
Error-control methods
Error-correcting codes
Cyclic Codes
Cyclic codes form a subclass of linear block codes.
A binary code is said to be cyclic if it exhibits the following two properties:
the sum of any two code words in the code is also a code word (linearity), which means we are dealing with linear block codes;
any cyclic shift of a code word in the code is also a code word (the cyclic property), expressed mathematically in polynomial notation.
Cyclic Codes
The generator polynomial plays a major role in the generation of cyclic codes.
If we have a generator polynomial g(x) of an (n,k) cyclic code, we can create the generator matrix G from the k polynomials associated with it.
The syndrome polynomial of the received code word corresponds to the error polynomial.
Cyclic Codes
Example: a (7,4) cyclic code with a block length of 7; let us find the polynomials that generate the code (see Example 3 in the book):
find the code polynomials
find the generator matrix (G) and the parity-check matrix (H)
Convolutional Codes
Convolutional codes work in a serial manner, which suits streaming applications better.
The encoder of a convolutional code can be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders.
Convolutional codes are portrayed in graphical form using three different diagrams:
Code Tree
Trellis
State Diagram
Trellis-Coded Modulation
Here coding is described as a process of imposing certain patterns on the transmitted signal.
Trellis-coded modulation has three features:
The number of signal points is larger than what is required, thereby allowing redundancy without sacrificing bandwidth.
Convolutional coding is used to introduce a certain dependency between successive signal points.
Soft-decision decoding is done in the receiver.
Telecommunications Technology: Linear Block Codes (Fall 2007)
The Exclusive OR
If A and B are binary variables, the XOR of A and B is defined as:
0 + 0 = 1 + 1 = 0
0 + 1 = 1 + 0 = 1
Hamming Distance
The weight of a code word is the number of 1s in it.
The Hamming Distance between two code words is
equal to the number of digits in which they differ.
The distance dij between xi = 1110010 and xj =
1011001 is 4.
x1 = 1110010
x2 = 1100001
y  = 1100010
d(x1, y) = w(1110010 + 1100010) = w(0010000) = 1
d(x2, y) = w(1100001 + 1100010) = w(0000011) = 2
[Figure: binary symmetric channel; input x0 is received as y0 with probability 1 - p and as y1 with probability p, and similarly for x1.]
Geometric Point-of-View
[Figure: the set of all 8 three-digit words viewed as the corners of a cube (000, 001, ..., 111); if the code uses all 8 words the minimum distance is 1.]
In the BSC, any error changes a code word into another code word.
[Figure: a code occupying a subset of the cube's corners, with minimum distance 2.]
[Figure: single parity-check code; the check bit is c = m1 ⊕ m2 ⊕ ... ⊕ mn (addition is modulo 2), with the modulo-2 addition table 0+0=0, 0+1=1, 1+0=1, 1+1=0.]
The (7,4) code word is (x7, x6, x5, x4, x3, x2, x1) = (m4, m3, m2, m1, c3, c2, c1), with parity bits
c1 = m1 ⊕ m2 ⊕ m4
c2 = m1 ⊕ m3 ⊕ m4
c3 = m2 ⊕ m3 ⊕ m4
Equivalently, each code-word bit contributes to (c3, c2, c1) as follows:
m4: 1 1 1
m3: 1 1 0
m2: 1 0 1
m1: 0 1 1
c3: 1 0 0
c2: 0 1 0
c1: 0 0 1
In systematic form, the generator matrix consistent with these parity equations is

G =
| 1 0 0 0 1 1 1 |
| 0 1 0 0 1 1 0 |
| 0 0 1 0 1 0 1 |
| 0 0 0 1 0 1 1 |

(rows correspond to m4, m3, m2, m1; the last three columns give c3, c2, c1).
Codes
Message: (m3, m2, m1, m0)
Code word: (x6, x5, x4, x3, x2, x1, x0)
x = mH
y = x + e   (e is the error event)
Syndrome Calculation
y = mH + e
s = yG^T
s = mHG^T + eG^T
s = eG^T   (since HG^T = 0)
Hamming Codes
Hamming codes are (n,k) group codes where n = k + r is the length of the code words, k is the number of data bits, r is the number of parity-check bits, and 2^r = n + 1.
Typical codes are
(7,4), r = 3
(15,11), r = 4 (2^4 = 16)
(63,57), r = 6
Hamming codes are ideal, single-error-correcting codes.
p_u = 1 - (1 - p_c)^7 - 7 p_c (1 - p_c)^6
(the probability that more than one channel error occurs in a block of 7 bits, with channel error probability p_c).
Cyclic Codes
Cyclic codes are algebraic group codes in which the code words
form an ideal.
If the bits are considered coefficients of a polynomial, every code
word is divisible by a generator polynomial.
The rows of the generator matrix are cyclic permutations of one
another.
Cyclic codes (or cyclic redundancy check, CRC) are used routinely to detect errors in data transmission.
Typical codes are:
CRC-16: P(X) = X^16 + X^15 + X^2 + 1
CRC-CCITT: P(X) = X^16 + X^12 + X^5 + 1
Convolutional Codes
Block codes are memoryless codes: each output depends only on the current k-bit block being coded.
The bits in a convolutional code depend on previous source bits; the source bits are convolved with the impulse response of a filter.
Why convolutional codes? Because the code set grows exponentially with code length, the hypothesis being that the rate could be maintained as n grew, unlike all block codes (the Wozencraft contribution).
Convolutional Coder
[Figure: the input 1 1 0 1 0 1 ... enters a shift register with stages Xi, Xi-1, Xi-2; two modulo-2 adders form the outputs O1 and O2, giving the encoded output 11 10 11 01 01 01 ...]
Trellis Diagram
[Figure: trellis with states 00, 01, 10, 11.]
[Figure: trellis branches labelled with input bit and output pair (e.g. 1/11, 1/10, 0/11, 0/01, 1/01, ...).]
Decoding
[Figure: trellis with states 00, 01, 10, 11.]
Insert an error in a sequence of transmitted bits and try to decode it.
Sequential Decoding
Viterbi Decoding
Turbo Codes
[Figure: concatenated coding; an outer coder (convolutional) and an inner coder (block) at the transmitter, with the corresponding inner and outer decoders at the receiver.]
[Figure: a transmitter sends the message [1 1 1 1]; the noise source adds [0 0 1 0], so the destination receives the corrupted message [1 1 0 1].]
568
W
h
a
Poor solutions
Single CheckSum Truth table:
A
0
0
1
1
B
0
1
0
1
X-OR
0
1
1
0
Repeats
Data = [1 1 1 1]
Message=
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
General form:
Data=[1 1 1 1]
Message=[1 1 1 1 0]
EE576
Dr. Kousa
569
Repeat 3 times:
This divides W by 3, so it divides the overall capacity by at least a factor of 3.
(C is the channel capacity, W is the raw channel capacity, S/N is the signal-to-noise ratio.)
Single checksum:
Allows an error to be detected, but requires the message to be discarded and resent.
Each error reduces the channel capacity by at least a factor of 2 because of the thrown-away message.
Hamming's Solution
Encoding: multiple checksums.
Message = [a b c d]
r = (a + b + d) mod 2
s = (a + b + c) mod 2
t = (b + c + d) mod 2
Code = [r s a t b c d]
Example: Message = [1 0 1 0]
r = (1 + 0 + 0) mod 2 = 1
s = (1 + 0 + 1) mod 2 = 0
t = (0 + 1 + 0) mod 2 = 1
Code = [1 0 1 1 0 1 0]
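Hamming's multiple-checksum construction translates directly into code. This Python sketch encodes [a b c d] into [r s a t b c d] and corrects a single flipped bit by matching the recomputed checks against each position's check pattern (the pattern table is derived from the three equations above):

# Hamming (7,4) with the layout [r s a t b c d] from the slide (sketch).
def encode(a, b, c, d):
    r = (a + b + d) % 2
    s = (a + b + c) % 2
    t = (b + c + d) % 2
    return [r, s, a, t, b, c, d]

def correct(code):
    r, s, a, t, b, c, d = code
    e1 = (r + a + b + d) % 2            # checksum r fails if e1 == 1
    e2 = (s + a + b + c) % 2            # checksum s fails if e2 == 1
    e3 = (t + b + c + d) % 2            # checksum t fails if e3 == 1
    # which checks each codeword position participates in, as (e1, e2, e3):
    patterns = [(1,0,0), (0,1,0), (1,1,0), (0,0,1), (1,1,1), (0,1,1), (1,0,1)]
    fixed = list(code)
    if (e1, e2, e3) != (0, 0, 0):
        fixed[patterns.index((e1, e2, e3))] ^= 1   # flip the corrupted bit
    return fixed

code = encode(1, 0, 1, 0)               # -> [1, 0, 1, 1, 0, 1, 0] as in the slide
code[4] ^= 1                            # corrupt bit b
print(correct(code))                    # the original codeword is recovered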
Simulation
Stochastic simulation: 100,000 iterations; add errors to (7,4) data (no repeated random positions); measure error detection.
Results (error detection):
One error: 100%
Two errors: 100%
Three errors: 83.43%
Four errors: 79.76%
Hamming Distance
Definition: the number of elements that need to be changed (corrupted) to turn one codeword into another.
The Hamming distance from:
[0101] to [0110] is 2 bits
[1011101] to [1001001] is 2 bits
butter to ladder is 4 characters
roses to toned is 3 characters
Another Dot
The code space is now 4; the Hamming distance is still 1.
Allows: error DETECTION for Hamming distance = 1.
Allows: error DETECTION for Hamming distance = 2.
Multi-dimensional Codes
Code space: 2-dimensional, 5 element states.
Circle packing makes more efficient use of the code space.
Cannon Balls
Efficient circle packing is the same as efficient 2-d code spacing.
http://wikisource.org/wiki/Cannonball_stacking
http://mathworld.wolfram.com/SpherePacking.html
More on Codes
Hamming (11,7)
Golay Codes
Convolutional Codes
Reed-Solomon Error Correction
Turbo Codes
Digital Fountain Codes
An Example
We will:
encode a message,
add noise to the transmission,
detect the error,
repair the error.
H =
| 1 0 0 0 0 1 1 |
| 0 1 0 0 1 0 1 |
| 0 0 1 0 1 1 0 |
| 0 0 0 1 1 1 1 |
But why? You can verify that:
Hamming[1 0 0 0] = [1 0 0 0 0 1 1]
Hamming[0 1 0 0] = [0 1 0 0 1 0 1]
Hamming[0 0 1 0] = [0 0 1 0 1 1 0]
Hamming[0 0 0 1] = [0 0 0 1 1 1 1]
code = message · H, where multiplication is the logical AND and addition is the logical XOR.
Add noise
If our message is Message = [0 1 1 0], multiplying by H yields
Code = [0 1 1 0 0 1 1]
(the XOR of rows 2 and 3 of H: [0 1 0 0 1 0 1] ⊕ [0 0 1 0 1 1 0]).
After the noise flips one bit, the received word becomes
Code => [0 1 0 0 0 1 1]
Decoder =
| 0 0 0 1 1 1 1 |
| 0 1 1 0 0 1 1 |
| 1 0 1 0 1 0 1 |
Decoder · code^T = [0 1 1], which identifies the corrupted bit (binary 011 = position 3).
Channel Coding in
IEEE802.16e
Student: Po-Sheng Wu
Advisor: David W. Lin
Outline
Overview
RS code
Convolution code
LDPC code
Future Work
Overview
RS code
The RS code in 802.16a is derived from a systematic
RS (N=255, K=239, T=8) code on GF(2^8)
RS code
This code is then shortened and punctured to enable variable block sizes and variable error-correction capability.
Shortening: (n, k) becomes (n-l, k-l)
Puncturing: (n, k) becomes (n-l, k)
In general, the generator polynomial is parameterized by h; in IEEE 802.16a, h = 0.
RS code
Decoding: the Euclidean (or Berlekamp) algorithm is a common decoding algorithm for RS codes.
Four steps:
compute the syndrome values
compute the error-locator polynomial
compute the error locations
compute the error values
Convolution code
Each RS block is encoded by a binary convolutional encoder, which has a native rate of 1/2 and a constraint length equal to 7.
In the puncturing pattern, 1 means a transmitted bit and 0 denotes a removed bit; note that the pattern has been changed from that of the native rate-1/2 convolutional code.
Convolution code
Decoding: Viterbi algorithm.
The convolutional code in IEEE 802.16a needs to be terminated in a block, and thus becomes a block code.
Three methods achieve this termination:
Direct truncation
Zero tail
Tail biting
RS-CC code
Outer code: RS code
Inner code: convolutional code
Input data streams are divided into RS blocks; each RS block is then encoded by a tail-biting convolutional code.
Between the convolutional coder and the modulator is a bit interleaver.
LDPC code
Low-density parity-check matrix.
LDPC codes are also linear codes: the codewords form the null space of H, i.e. Hx = 0.
Low density enables efficient decoding.
Better decoding performance than Turbo codes.
Close to the Shannon limit at long block lengths.
n is the length of the code; m is the number of parity-check bits.
LDPC code
Base model
If p(f,i,j) = -1, the entry is replaced by a z x z zero matrix; otherwise p(f,i,j) is the circular shift size:
p(f,i,j) = p(i,j) for p(i,j) <= 0, and p(f,i,j) = floor(p(i,j) * z_f / z_0) for p(i,j) > 0.
LDPC code
Encoding: the codeword has the form [u p1 p2].
Decoding: Tanner graph, sum-product algorithm.
Future Work
Chapter 11: Data Link Control and Protocols
Error Control
Error control in the data link layer is based on automatic repeat
request, which is the retransmission of data.
Operation
Bidirectional Transmission
11.1 Normal operation
Note: In Stop-and-Wait ARQ, numbering frames prevents the retaining of duplicate frames.
Note: Numbered acknowledgments are needed if an acknowledgment is delayed and the next frame is lost.
11.5 Piggybacking
11.8 Control variables
Note: In Go-Back-N ARQ, the size of the sender window must be less than 2^m; the size of the receiver window is always 1.
Note: In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
Example 1
In a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization percentage of the link?
Solution
The bandwidth-delay product is
(1 x 10^6) x (20 x 10^-3) = 20,000 bits
The system can send 20,000 bits during the time it takes for the data to go from the sender to the receiver and then back again. However, the system sends only 1000 bits. We can say that the link utilization is only 1000/20,000, or 5%. For this reason, for a link with high bandwidth or long delay, the use of Stop-and-Wait ARQ wastes the capacity of the link.
Example 2
What is the utilization percentage of the link in Example 1 if the link uses Go-Back-N ARQ with a 15-frame window?
Solution
The bandwidth-delay product is still 20,000 bits. The system can send up to 15 frames or 15,000 bits during a round trip. This means the utilization is 15,000/20,000, or 75 percent. Of course, if there are damaged frames, the utilization percentage is much less because frames have to be resent.
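The utilization figures in Examples 1 and 2 can be reproduced with a few lines of Python (a sketch; the variable names are ours):

# Link utilization for Stop-and-Wait and Go-Back-N ARQ (Examples 1 and 2).
bandwidth = 1_000_000            # bps
round_trip = 20e-3               # seconds
frame_bits = 1000
bdp = bandwidth * round_trip     # bandwidth-delay product = 20,000 bits

stop_and_wait_util = frame_bits / bdp              # 0.05 -> 5%
go_back_n_util = min(1.0, 15 * frame_bits / bdp)   # 0.75 -> 75% with a 15-frame window
print(bdp, stop_and_wait_util, go_back_n_util)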
11.5 HDLC
Configurations and Transfer Modes
Frames
Frame Format
Examples
Data Transparency
11.15 NRM
11.16 ABM
11.17 HDLC frame
11.19 I-frame
Command  Meaning
SNRM     Set normal response mode
SNRME    Set normal response mode (extended)
SABM     Set asynchronous balanced mode
SABME    Set asynchronous balanced mode (extended)
UP       Unnumbered poll
UI       Unnumbered information
UA       Unnumbered acknowledgment
RD       Request disconnect
DISC     Disconnect
DM       Disconnect mode
RIM      Request initialization mode
SIM      Set initialization mode
RSET     Reset
XID      Exchange ID
FRMR     Frame reject
Example 3
Figure 11.22 shows an exchange using piggybacking where there is no error. Station A begins the exchange of information with an I-frame numbered 0 followed by another I-frame numbered 1. Station B piggybacks its acknowledgment of both frames onto an I-frame of its own. Station B's first I-frame is also numbered 0 [N(S) field] and contains a 2 in its N(R) field, acknowledging the receipt of A's frames 1 and 0 and indicating that it expects frame 2 to arrive next. Station B transmits its second and third I-frames (numbered 1 and 2) before accepting further frames from station A. Its N(R) information, therefore, has not changed: B's frames 1 and 2 indicate that station B is still expecting A's frame 2 to arrive next.
11.22 Example 3
Example 4
In Example 3, suppose frame 1 sent from station B to station A has an error. Station A informs station B to resend frames 1 and 2 (the system is using the Go-Back-N mechanism). Station A sends a reject supervisory frame to announce the error in frame 1. Figure 11.23 shows the exchange.
11.23 Example 4
Note:
Bit stuffing is the process of adding one
extra 0 whenever there are five
consecutive 1s in the data so that the
receiver does not mistake the
data for a flag.
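A small Python sketch of bit stuffing and unstuffing as described in the note above (function names are ours):

# HDLC bit stuffing: add a 0 after every run of five consecutive 1s.
def stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")            # stuffed bit so data never mimics a flag
            run = 0
    return "".join(out)

def unstuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                       # drop the bit that follows five 1s
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

data = "0111111100111110"
print(stuff(data))                     # "011111011001111100"
print(unstuff(stuff(data)) == data)    # True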
Chapter 11: Data Link Control
11.1 Framing
11.2 Flow and Error Control
Note
Flow control refers to a set of procedures used to
restrict the amount of data that the sender can
send before waiting for acknowledgment.
Error control in the data link layer is based on
automatic repeat request, which is the
retransmission of data.
11.3 Protocols
Now let us see how the data link layer can combine framing, flow control, and error control to achieve the delivery of data from one node to another. The protocols are normally implemented in software by using one of the common programming languages. To make our discussions language-free, we have written in pseudocode a version of each protocol that concentrates mostly on the procedure instead of delving into the details of language rules.
11.4 Noiseless Channels
Figure 11.6 The design of the simplest protocol with no flow or error control
11.5 Noisy Channels
Note
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting the frame when the timer expires.
In Stop-and-Wait ARQ we use sequence numbers to number the frames; the sequence numbers are based on modulo-2 arithmetic.
In Stop-and-Wait ARQ, the acknowledgment number always announces, in modulo-2 arithmetic, the sequence number of the next frame expected.
Note
In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.
Note
The send window is an abstract concept defining an imaginary box of size 2^m - 1 with three variables: Sf, Sn, and Ssize.
The send window can slide one or more slots when a valid acknowledgment arrives.
Note
The receive window is an abstract concept defining an imaginary box of size 1 with one single variable Rn.
The window slides when a correct frame has arrived; sliding occurs one slot at a time.
Note
In Go-Back-N ARQ, the size of the send window must be less than 2^m; the size of the receiver window is always 1.
This is an example of a case where the forward channel is reliable, but the reverse is not: no data frames are lost.
Scenario showing what happens when a frame is lost.
Note
Stop-and-Wait ARQ is a special case of Go-Back-N ARQ in which the size of the send window is 1.
Note
In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
11.6 HDLC
Control field format for the different frame types
11.7 Point-to-Point Protocol
A Survey of Advanced
FEC Systems
Eric Jacobsen
Minister of Algorithms, Intel Labs
Communication Technology Laboratory/
Radio Communications Laboratory
July 29, 2004
With a lot of material from Bo Xia, CTL/RCL
Outline
What is Forward Error Correction?
The Shannon Capacity formula and what it means
A simple Coding Tutorial
C = W log2(1 + P/N)
where C is the channel capacity (bps), W is the channel bandwidth (Hz), P is the transmit power, and N is the noise power.
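The capacity formula is easy to evaluate; a Python sketch with illustrative numbers:

import math

# Shannon capacity C = W * log2(1 + P/N).
def capacity(bandwidth_hz: float, signal_power: float, noise_power: float) -> float:
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# e.g. a 1 MHz channel at an SNR of 15 (about 11.8 dB):
print(capacity(1e6, 15.0, 1.0))   # -> 4,000,000 bps, since log2(16) = 4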
A simple example
A system transmits messages of two bits each through a channel
that corrupts each bit with probability Pe.
Tx Data = { 00, 01, 10, 11 }
Rx Data = 00
In this case a single bit error has corrupted the received symbol, but
it is still a valid symbol in the list of possible symbols. The most
fundamental coding trick is just to expand the number of bits
transmitted so that the receiver can determine the most likely
transmitted symbol just by finding the valid codeword with the
minimum Hamming distance to the received symbol.
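The minimum-distance rule described above can be sketched in a few lines of Python; the tiny 5-bit codebook is an illustrative assumption, not the code used in the example:

# Minimum Hamming-distance decoding (sketch).
def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def decode(received: str, codebook: dict) -> str:
    best = min(codebook, key=lambda cw: hamming(received, cw))
    return codebook[best]              # most likely transmitted message

# Two-bit messages mapped to 5-bit codewords (minimum distance 3, assumed example).
codebook = {"00000": "00", "01011": "01", "10101": "10", "11110": "11"}
print(decode("01001", codebook))       # a single bit error still decodes to "01"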
Coding Gain
The difference in performance between an uncoded and a coded system, considering the additional overhead required by the code, is called the coding gain. In order to normalize the power required to transmit a single bit of information (not a coded bit), Eb/No is used as a common metric, where Eb is the energy per information bit and No is the noise power in a unit-Hertz bandwidth.
[Figure: uncoded symbols of duration Tb versus coded symbols with rate R.]
[Figure: BER versus Eb/No for uncoded QPSK, a Viterbi-RS system at R = 3/4, and Turbo Codes at R = 3/4 and R = 9/10, with the uncoded matched-filter bound and the capacity limits marked.]
These curves compare the performance of two Turbo Codes with a concatenated Viterbi-RS system. The TC with R = 9/10 appears to be inferior to the Viterbi-RS system, but is actually operating closer to capacity.
[Timeline, 1948-1970:]
Shannon's paper (1948)
Hamming defines basic binary codes
BCH codes proposed
Reed and Solomon define ECC technique
Gallager's thesis on LDPCs
Viterbi's paper on decoding convolutional codes
Early practical implementations of RS codes for tape and disk drives
Berlekamp and Massey rediscover Euclid's polynomial technique and enable practical algebraic decoding
Forney suggests concatenated codes
[Timeline, 1980s-2000s:]
Ungerboeck's TCM paper (1982)
RS codes appear in CD players
First integrated Viterbi decoders (late 1980s)
TCM heavily adopted into standards
Renewed interest in LDPCs due to Turbo Code research
LDPC beats Turbo Codes for the DVB-S2 standard (2003)
[Figure: a block codeword consists of a data field followed by a parity field.]
The parity portion can be actual parity bits, or generated by some other means, like a polynomial function or a generator matrix. The decoding algorithms differ greatly. The code rate, R, can be adjusted by shortening the data field (using zero padding) or by puncturing the parity field.
Examples of block codes: BCH, Hamming, Reed-Solomon, Turbo Codes, Turbo Product Codes, LDPCs.
Essentially all iteratively-decoded codes are block codes.
[Figure: classic concatenated system; Data -> RS Encoder (outer code) -> Interleaver -> Convolutional Encoder (inner code) -> Channel -> Viterbi Decoder -> De-Interleaver -> RS Decoder -> Data.]
[Figure: serial concatenation; CC Encoder 1 -> Interleaver -> CC Encoder 2 -> Channel, decoded by two Viterbi/APP decoders with a de-interleaver between them. Parallel concatenation; the data enter CC Encoder 1 directly and CC Encoder 2 through an interleaver, the outputs are combined for transmission, and two Viterbi/APP decoders exchange information at the receiver.]
711
Viterbi/APP
Decoder
Interleaver
DeInterleaver
Viterbi/APP
Decoder
Data
Turbo Codes add coding diversity by encoding the same data twice through
concatenation. Soft-output decoders are used, which can provide reliability update
information about the data estimates to the each other, which can be used during a
subsequent decoding pass.
The two decoders, each working on a different codeword, can iterate and continue
to pass reliability update information to each other in order to improve the probability
of converging on the correct solution. Once some stopping criterion has been met,
the final data estimate is provided for use.
These Turbo Codes provided the first known means of achieving decoding
performance close to the theoretical Shannon capacity.
www.intel.com/labs
712
MAP/APP decoders
Maximum A Posteriori/A Posteriori Probability
Two names for the same thing
Basically runs the Viterbi algorithm across the data sequence in both
directions
~Doubles complexity
[Figure: BER versus Eb/No for uncoded transmission, Vit-RS at R = 1/2, 3/4 and 7/8, and Turbo Codes at R = 1/2, 3/4 and 7/8.]
[Figure: an R = 1/2 outer code feeds an interleaver and an R = 1 inner accumulate section (differential encoder).]
Since the differential encoder has R = 1, the final code rate is determined by the amount of repetition used.
[Figure: a Turbo Product Code; a 2-dimensional data field with row parity, column parity, and parity-on-parity.]
Since the constituent codes are Hamming codes, which can be decoded simply, the decoder complexity is much less than Turbo Codes. The performance is close to capacity for code rates around R = 0.7-0.8, but is not great for low code rates or short blocks. TPCs have enjoyed commercial success in streaming satellite applications.
[Figure: an example bipartite graph for an irregular LDPC code, with variable nodes (codeword bits) connected by edges to check nodes.]
[Figure: message-passing notation; check nodes (one per parity bit) send messages r_i to variable nodes (one per code bit), which combine them with the channel message (mV = mV0 + the sum of the incoming r's) and return q_i = mV - r_i; the check-node update uses the max* operation.]
Current State-of-the-Art
Block Codes
Reed-Solomon widely used in CD-ROM and communications standards; the fundamental building block of basic ECC.
Convolutional Codes
K = 7 CC is very widely adopted across many communications standards.
K = 9 appears in some limited low-rate applications (cellular telephones).
Often concatenated with RS for streaming applications (satellite, cable, DTV).
Turbo Codes
Limited use due to complexity and latency (cellular and DVB-RCS).
TPCs used in satellite applications (reduced complexity).
LDPCs
Recently adopted in DVB-S2 and ADSL; being considered in 802.11n and 802.16e.
Complexity concerns, especially memory; expect broader consideration.
Definition
A code is called cyclic if [x_{n-1} x_0 x_1 ... x_{n-2}] is a codeword whenever [x_0 x_1 ... x_{n-2} x_{n-1}] is a codeword.
Notations
Detection of a Burst-Error
But b <= n - k. Therefore P(X) is of higher degree than E1(X), which implies that P(X) cannot divide E1(X). Q.E.D.
Theorem 6: The fraction of bursts of length b > n - k that are undetected is
2^-(n-k)   if b > n - k + 1
2^-(n-k-1) if b = n - k + 1
E(X) = X^i + X^j
E(X) = (X^i + X^{i+1}) + X^j
E(X) = X^i + (X^j + X^{j+1})
E(X) = (X^i + X^{i+1}) + (X^j + X^{j+1})
Implementation
Briefly, to encode a message G(X), n-k zeros are appended (i.e. the multiplication X^(n-k) G(X) is performed) and then X^(n-k) G(X) is divided by the polynomial P(X) of degree n-k. The remainder is then subtracted from X^(n-k) G(X); it replaces the n-k zeros. The encoded message is divisible by P(X), which is used to check for errors.
Implementation (cont'd)
As the first one (the coefficient of the high-order term of the dividend) shifts off the end, we subtract the divisor by the following procedure:
1. In the subtraction, the high-order terms of the divisor and dividend always cancel. As the high-order term of the dividend is shifted off the end of the register, this part of the subtraction is done automatically.
2. Modulo-two adders are placed so that when a one shifts off the end of the register, the divisor is subtracted from the contents of the register. The register then contains a difference that is shifted until another one comes off the end, and then the process is repeated. This continues until the entire dividend is shifted into the register.
[Table: shift-register contents after each input bit for the input 100010001101011.]
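The shift-register division described above can be sketched in Python: each time a one shifts off the end of the register, the divisor is XORed into the register contents. The divisor below (x^4 + x + 1) is an assumed example, so the register states will not match the table above.

# Shift-register (LFSR-style) modulo-2 division (sketch).
def shift_register_remainder(dividend: str, divisor: str) -> str:
    n = len(divisor) - 1
    reg = [0] * n                         # the n-bit register
    for bit in dividend + "0" * n:        # shift the dividend (plus n zeros) in
        msb = reg[0]                      # the bit about to shift off the end
        reg = reg[1:] + [int(bit)]
        if msb == 1:                      # subtract (XOR) the divisor's low-order terms
            reg = [r ^ int(d) for r, d in zip(reg, divisor[1:])]
    return "".join(map(str, reg))

print(shift_register_remainder("100010001101011", "10011"))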
Conclusion
Cyclic codes for error detection provide high efficiency and ease of implementation.
They also provide standardization, e.g. CRC-8 and CRC-32.
Image restoration
Rainfall prediction
Gene sequencing
Character recognition
Milestones
Example: Principle of Optimality
Find the optimal path to each bridge: Professor X chooses an optimum path on his trip to lunch from the EE building to the Faculty Club.
[Figure: a graph of candidate paths with edge weights such as 1.2, 0.5, 0.7, 0.8, 0.2, 0.3, 1.0.]
Optimal: 6 adds; brute force: 8 adds.
With N bridges: optimal: 4(N+1) adds; brute force: (N-1)2^N adds.
[Figure: an information source produces a1, a2, ..., aN; a convolutional encoder maps them to c1, c2, ..., cN, which pass through a BSC with crossover probability p; the Viterbi algorithm produces b1, b2, ..., bN for the information sink.]
Maximum-likelihood decoding over the BSC: choose a1, a2, ..., aN to maximize
P(B^N | A^N) = p^D(A^N,B^N) (1-p)^(N - D(A^N,B^N)).
Equivalently, since log(p/(1-p)) < 0 for p < 1/2, minimize the Hamming distance D(A^N, B^N).
Convolutional codes: encoding a sequence
Example: a (3,1) code (output, input); efficiency = input/output = 1/3.
Input sequence: 110100. Initial state: s1 = s2 = 0.
Encoder outputs:
o1 = i
o2 = i ⊕ s1
o3 = i ⊕ s1 ⊕ s2
[Fig. 2.14: state diagram with states 00, 01, 10, 11; branches labelled input - output, e.g. 1 - 111, 0 - 001, 0 - 010, 1 - 110, 0 - 011, 1 - 100, 1 - 101.]
Trellis Representation
State s1s2   Output (input 0)   Output (input 1)   Next state (input 0)   Next state (input 1)
00           000                111                00                     10
01           001                110                00                     10
10           011                100                01                     11
11           010                101                01                     11
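A compact Python sketch of the Viterbi decoder for the (3,1) code tabulated above. The branch outputs follow the encoder equations; the received triplets (one bit corrupted) are an illustrative assumption.

# Viterbi decoder for the (3,1) code whose trellis is tabulated above (sketch).
def branch(state, i):
    s1, s2 = state
    out = (i, i ^ s1, i ^ s1 ^ s2)        # (o1, o2, o3) from the encoder equations
    return out, (i, s1)                   # output triplet and next state

def viterbi(received):
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    dist = {s: (0 if s == (0, 0) else float("inf")) for s in states}  # start in 00
    paths = {s: [] for s in states}
    for r in received:
        new_dist = {s: float("inf") for s in states}
        new_paths = {}
        for s in states:
            if dist[s] == float("inf"):
                continue                  # state not yet reachable
            for i in (0, 1):
                out, nxt = branch(s, i)
                d = dist[s] + sum(a != b for a, b in zip(out, r))   # Hamming metric
                if d < new_dist[nxt]:
                    new_dist[nxt] = d
                    new_paths[nxt] = paths[s] + [i]
        dist, paths = new_dist, new_paths
    best = min(dist, key=dist.get)        # survivor with the smallest distance
    return paths[best]

# Encode 1 1 0 1 0 0, flip one received bit, and decode (assumed example).
rx = [(1, 1, 1), (1, 0, 1), (0, 1, 0), (1, 1, 0), (0, 1, 1), (0, 0, 1)]
print(viterbi(rx))                        # -> [1, 1, 0, 1, 0, 0]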
The decoder searches over the shift-register (state) sequences s1, s2, ..., sN. Because the BSC is memoryless,
D(A^N, B^N) = sum over i of d(a_i, b_i),
so
min over s1..sN of D(A^N, B^N) = min over s1..sN of [ D(A^{N-1}, B^{N-1}) + d(a_N, b_N) ]
= min over s_N of { min over s1..s_{N-1} given s_N of [ D(A^{N-1}, B^{N-1}) + d(a_N, b_N) ] }.
Key step: conditioning on both s_{N-1} and s_N is redundant, so
min over s1..s_{N-1} given s_N of D(A^{N-1}, B^{N-1})
= min over s_{N-1} given s_N of { d(a_{N-1}, b_{N-1}) + min over s1..s_{N-2} given s_{N-1} of D(A^{N-2}, B^{N-2}) },
i.e. an incremental distance plus an accumulated distance, so the work grows only linearly in N.
In general,
min over s1..s_i given s_{i+1} of D(A^i, B^i)
= min over s_i given s_{i+1} of { d(a_i, b_i) + min over s1..s_{i-1} given s_i of D(A^{i-1}, B^{i-1}) }.
[Figure: at each state at time i the decoder searches the previous states at time i-1, adding the branch distance d(a_i, b_i) (for received b_i = 010 and candidate outputs a_i = 000 or 001) to the accumulated distances, and keeps the minimum.]
Inter-symbol Interference
z(t) = sum over i = 1..N of a_i p(t - iT)
[Figure: transmitter -> channel -> equalizer -> decisions; the Viterbi algorithm (VA) performs the equalization.]
Minimizing the Euclidean distance between the received and possible signals simplifies to
min over a1..aN of sum over i of [ -2 a_i Z_i + sum over j of a_i a_j r_{i-j} ],
where Z_i are the sampled matched-filter outputs and r_{i-j} are the pulse correlations.
Define a branch metric d(Z_k; s_{k-1}, s_k) that depends on a_k, Z_k, and the previous symbols held in the state. Then
min over s1..s_{k-1} given s_k of D(Z_1, ..., Z_k)
= min over s_{k-1} given s_k of { d(Z_k; s_{k-1}, s_k) + min over s1..s_{k-2} given s_{k-1} of D(Z_1, ..., Z_{k-1}) },
so the Viterbi algorithm applies as before.
Magnetic Recording
The read-back signal is the derivative of the magnetization pattern convolved with the head response:
e(t) = (d m(t)/dt) * h(t) = 2 sum over k >= 0 of x_k h(t - kT), where x_k = a_k - a_{k-1}
(h(t) is a Nyquist pulse, sampled once per signaling interval).
[Figure: sampled read-back waveform over several signaling intervals.]
Effect of Blurring
s(i,j) = sum over l = -L..L and m = -L..L of a(i-l, j-m) h(l,m) + n(i,j)
(the input pixels pass through the optical channel h and are corrupted by AWGN n).
Row Scan
Rainfall Prediction
[Figure: states rainy/wet, rainy/dry, showery/wet, showery/dry, and no rain, linked by rainfall observations.]
DNA Sequencing
DNA is a double helix: sequences of four nucleotides, A, T, C and G.
Pairing (bonding) between strands: A-T and C-G.
Genes are made up of codons, i.e. triplets of adjacent nucleotides; genes may overlap.
[Figure: nucleotide sequence CGGATTC... spanning overlapping genes 1, 2 and 3.]
[Figure: a hidden Markov model with states and probabilities P1-P4; the initial and transition probabilities are known.]
Results
All possible segmentation paths
Removal of overlapping paths
Discarding near paths