Channel Coding
Linear block codes
Cyclic codes, Cyclic Redundancy Codes (CRC)
Reed Solomon codes
Convolutional codes, Turbo codes
DIGITAL COMMUNICATIONS
Lectured by Assoc. Prof. Thuong Le-Tien
October 2013
Chapter Outline
Error Detection and Correction: Repetition and Parity-Check Codes, Interleaving, Code Vectors and Hamming Distance, FEC Systems, ARQ Systems

Linear Block Codes: Matrix Representation of Block Codes, Syndrome Decoding, Cyclic Codes, Cyclic Redundancy Codes, M-ary Codes (Reed-Solomon Codes)

Convolutional Codes: Convolutional Encoding, Free Distance and Coding Gain, Decoding Methods, Turbo Codes
1. Error detection and correction
Coding for error detection, without correction, is
simpler than error-correction coding.
When a two-way channel exists between source
and destination, the receiver can request
retransmission of information containing detected
errors. This error-control strategy, called Automatic
Repeat Request (ARQ), particularly suits data
communication systems such as computer networks.
However, when retransmission is impossible or
impractical, error control must take the form of
Forward Error Correction (FEC) using an error-
correcting code.
Repetition and Parity-Check Codes
If transmission errors occur randomly and independently with probability P_e = α, then the binomial frequency function gives the probability of i errors in an n-bit codeword as

P(i, n) = (n choose i) α^i (1 − α)^(n−i)
Consider a triple-repetition code with codewords 000 and 111. Single and double errors in a word are thereby detected, but triple errors result in an undetected word error with probability

P_we = P(3, 3) = α^3

For error correction, we use majority-rule decoding based on the assumption that at least two of the three bits are correct. Thus, 001 and 101 are decoded as 000 and 111, respectively. This rule corrects words with single errors, but double or triple errors result in a decoding error with probability

P_we = P(2, 3) + P(3, 3) = 3α^2 − 2α^3

Since P_e = α would be the error probability without coding, the repetition code improves reliability substantially when α << 1, at the cost of tripling the transmission time per message bit.
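These word-error probabilities follow directly from the binomial frequency function. A minimal sketch (the value α = 0.01 is an assumed example, not from the text):

```python
from math import comb

def P(i, n, alpha):
    """Binomial probability of exactly i errors in an n-bit word."""
    return comb(n, i) * alpha**i * (1 - alpha)**(n - i)

alpha = 0.01  # assumed per-bit transmission error probability

# Detection only: an undetected word error requires all three bits wrong.
P_undetected = P(3, 3, alpha)            # alpha^3

# Majority-rule correction: double or triple errors cause decoding errors.
P_we = P(2, 3, alpha) + P(3, 3, alpha)   # 3*alpha^2 - 2*alpha^3
print(P_we)                              # ~2.98e-04, well below alpha
```
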
Figure 1-1: Square array for error correction by parity checking
Code Vectors and Hamming Distance
The triple-repetition code vectors have greater separation than the parity-code vectors. This separation, measured in terms of the Hamming distance, has a direct bearing on the error-control power of a code. The Hamming distance d(X, Y) between two vectors X and Y is defined as the number of positions in which their elements differ.
Detect up to l errors per word:                   d_min ≥ l + 1
Correct up to t errors per word:                  d_min ≥ 2t + 1
Correct up to t errors and detect l > t errors:   d_min ≥ t + l + 1
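These distance conditions are easy to check numerically. A small sketch using the two triple-repetition codewords:

```python
def hamming_distance(x, y):
    """Number of positions in which two equal-length vectors differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

# Triple-repetition code: d_min = 3, so it detects up to l = 2 errors
# (d_min >= l + 1) or corrects t = 1 error (d_min >= 2t + 1).
d_min = hamming_distance([0, 0, 0], [1, 1, 1])
print(d_min)  # 3
```
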
FEC system
The information source emits message bits at rate r_b. The encoder takes blocks of k message bits and constructs an (n, k) block code with code rate R_c = k/n < 1. The bit rate on the channel therefore must be greater than r_b, namely

r = (n/k) r_b = r_b / R_c

The code has d_min = 2t + 1 ≤ n − k + 1, and the decoder operates strictly in an error-correction mode.
ARQ system
Each codeword constructed by the encoder is stored temporarily
and transmitted to the destination where the decoder looks for errors.
The decoder issues a positive acknowledgement (ACK) if no errors
are detected, or a negative acknowledgement (NAK) if errors are
detected.
ARQ schemes (a) stop-and-wait (b) go-back (c) selective repeat
2. Linear Block Codes
Matrix Representation of Block Codes
An (n, k) block code consists of n-bit vectors, each vector corresponding to a unique block of k < n message bits. The fundamental strategy of block coding is to choose the 2^k code vectors such that the minimum distance is as large as possible.
A systematic block code consists of vectors whose first k elements (or last k elements) are identical to the message bits, the remaining q = n − k elements being check bits. A code vector then takes the form

X = (m_1 m_2 … m_k c_1 c_2 … c_q) = (M | C)

where M is a k-bit message vector and C is a q-bit check vector. Partitioned notation lends itself to the matrix representation of block codes, with

X = M G
The matrix G is a k × n generator matrix of the partitioned form

G = (I_k | P)

where I_k is the k × k identity matrix and P is a k × q submatrix of binary digits represented by

P = ( p_11  p_12  …  p_1q )
    ( p_21  p_22  …  p_2q )
    (  ⋮     ⋮         ⋮  )
    ( p_k1  p_k2  …  p_kq )

The check vector is then given by

C = M P

This binary matrix multiplication follows the usual rules with mod-2 addition instead of conventional addition. Hence, the jth element of C is computed using the jth column of P, and

c_j = m_1 p_1j ⊕ m_2 p_2j ⊕ … ⊕ m_k p_kj
Since q = n − k, the code rate can be written as

R_c = k/n = 1 − q/n

Hamming Codes

A Hamming code is an (n, k) linear block code with q ≥ 3 check bits and

n = 2^q − 1        k = n − q

and thus R_c → 1 if q >> 1. Independent of q, the minimum distance is fixed at

d_min = 3

so a Hamming code can be used for single-error correction or double-error detection. To construct a systematic Hamming code, you simply let the k rows of the P submatrix consist of all q-bit words with two or more 1s, arranged in any order.

For example, consider a systematic Hamming code with q = 3, so n = 2^3 − 1 = 7 and k = 7 − 3 = 4. According to the previously stated rule, an appropriate generator matrix is

G = ( 1 0 0 0 | 1 0 1 )
    ( 0 1 0 0 | 1 1 1 )
    ( 0 0 1 0 | 1 1 0 )
    ( 0 0 0 1 | 0 1 1 )
Encoder for (7, 4) Hamming code

Given a block of message bits M = (m_1 m_2 m_3 m_4), the check bits are determined from the set of equations

c_1 = m_1 ⊕ m_2 ⊕ m_3
c_2 = m_2 ⊕ m_3 ⊕ m_4
c_3 = m_1 ⊕ m_2 ⊕ m_4
Table 13.2-1 lists the resulting 2^4 = 16 codewords and their weights. The smallest nonzero weight equals 3, confirming that d_min = 3.
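The codeword table can be reproduced in a few lines; a sketch that enumerates all 2^4 = 16 codewords from the check equations above and confirms the smallest nonzero weight:

```python
from itertools import product

def encode_74(m1, m2, m3, m4):
    """Systematic (7, 4) Hamming encoder built from the check equations."""
    c1 = m1 ^ m2 ^ m3
    c2 = m2 ^ m3 ^ m4
    c3 = m1 ^ m2 ^ m4
    return (m1, m2, m3, m4, c1, c2, c3)

codewords = [encode_74(*m) for m in product((0, 1), repeat=4)]
nonzero_weights = [sum(x) for x in codewords if any(x)]
print(len(codewords), min(nonzero_weights))  # 16 3  -> d_min = 3
```
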
Syndrome Decoding

Let Y stand for the received vector when a particular code vector X has been transmitted. Any transmission errors will result in Y ≠ X.

Associated with any systematic linear (n, k) block code is a q × n matrix H called the parity-check matrix. This matrix is defined by

H = (P^T | I_q)

where I_q is the q × q identity matrix. Relative to error detection, the parity-check matrix has the crucial property

X H^T = (0 0 … 0)   [9]

where H^T denotes the transpose of H, provided that X belongs to the set of code vectors. However, when Y is not a code vector, the product Y H^T contains at least one nonzero element. Therefore, given H^T and a received vector Y, error detection can be based on

S = Y H^T   [10]

a q-bit vector called the syndrome.
Error correction necessarily entails more circuitry but it, too, can be based on the syndrome. Let E be an n-bit error vector whose nonzero elements mark the positions of transmission errors in Y. For instance, if X = (1 0 1 1 0) and Y = (1 0 0 1 1), then E = (0 0 1 0 1). In general,

Y = X + E        X = Y + E

and hence

S = (X + E) H^T = X H^T + E H^T = E H^T

which reveals that the syndrome depends entirely on the error pattern, not the specific transmitted vector.

However, there are only 2^q different syndromes generated by the 2^n possible n-bit error vectors, including the no-error case. Consequently, a given syndrome does not uniquely determine E; we can correct just 2^q − 1 patterns with one or more errors, and the remaining patterns are uncorrectable. We therefore design the decoder to correct the 2^q − 1 most likely error patterns, a strategy known as maximum-likelihood decoding. Maximum-likelihood decoding corresponds to choosing the code vector that has the smallest Hamming distance from the received vector.
The table-lookup decoder.

The decoder calculates S from the received vector Y and looks up the assumed error vector Ê stored in the table. The sum Y + Ê generated by exclusive-OR gates finally constitutes the decoded word. If there are no errors, or if the errors are uncorrectable, then S = (0 0 0), so Y + Ê = Y.
Example:
Let's apply table-lookup decoding to a (7, 4) Hamming code used for single-error correction. The 3 × 7 parity-check matrix is

H = (P^T | I_q) = ( 1 1 1 0 | 1 0 0 )
                  ( 0 1 1 1 | 0 1 0 )
                  ( 1 1 0 1 | 0 0 1 )

There are 2^3 − 1 = 7 correctable single-error patterns, and the corresponding syndromes listed in the next table follow directly from the columns of H. To accommodate this table the decoder needs to store only (q + n) 2^q = 80 bits.

Suppose a received word has two errors, such that E = (1 0 0 0 0 1 0). The decoder calculates S = Y H^T = E H^T = (1 1 1), and the syndrome table gives Ê = (0 1 0 0 0 0 0). The decoded word Y + Ê therefore contains three errors: the two transmission errors plus the erroneous correction added by the decoder.
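A sketch of the table-lookup decoder for this (7, 4) code; the syndrome table is built from the single-error patterns exactly as described:

```python
def encode_74(m1, m2, m3, m4):
    """Systematic (7, 4) Hamming encoder from the check equations above."""
    c1 = m1 ^ m2 ^ m3
    c2 = m2 ^ m3 ^ m4
    c3 = m1 ^ m2 ^ m4
    return [m1, m2, m3, m4, c1, c2, c3]

H = [[1, 1, 1, 0, 1, 0, 0],   # parity-check matrix from above
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def syndrome(y):
    """S = Y H^T, computed row by row with mod-2 addition."""
    return tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

# Syndrome table for the 2^3 - 1 = 7 correctable single-error patterns.
table = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    table[syndrome(e)] = e

def decode(y):
    e_hat = table.get(syndrome(y), [0] * 7)   # S = (0 0 0) -> assume no error
    return [a ^ b for a, b in zip(y, e_hat)]  # Y + E-hat via exclusive-OR

x = encode_74(1, 0, 1, 1)
y = x.copy()
y[2] ^= 1                    # single transmission error
print(decode(y) == x)        # True: the error is corrected
```

A double error, as in the worked example, produces a valid-looking syndrome that points at the wrong single-bit pattern, so the decoder miscorrects.
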
Cyclic Codes

The code for a FEC system must be capable of correcting t ≥ 1 errors per word. It should also have a reasonably efficient code rate R_c = k/n. These two parameters are related by the inequality

n(1 − R_c) ≥ log2 [ Σ_{i=0}^{t} (n choose i) ]

where q = n − k = n(1 − R_c). This inequality underscores the fact that if we want R_c → 1, we must use codewords with n >> 1 and k >> 1. However, the hardware requirements for encoding and decoding long codewords may be prohibitive unless we impose further structural conditions on the code. Cyclic codes are a subclass of linear block codes with a cyclic structure leading to more practical implementation. Thus, block codes used in FEC systems are almost always cyclic codes.

The defining property of a cyclic code is that every cyclic shift of a code vector X = (x_{n−1} x_{n−2} … x_1 x_0) is also a code vector: one shift produces X' = (x_{n−2} … x_1 x_0 x_{n−1}), a second shift produces X'' = (x_{n−3} … x_1 x_0 x_{n−1} x_{n−2}), and so forth.

In polynomial notation, the code vector X is described by

X(p) = x_{n−1} p^{n−1} + x_{n−2} p^{n−2} + … + x_1 p + x_0

Multiplying by p and comparing with the shifted vector gives

p X(p) = x_{n−1} p^n + x_{n−2} p^{n−1} + … + x_1 p^2 + x_0 p
X'(p) = x_{n−2} p^{n−1} + … + x_0 p + x_{n−1}

so that

p X(p) + X'(p) = x_{n−1} p^n + x_{n−1} = x_{n−1}(p^n + 1)
X'(p) = p X(p) + x_{n−1}(p^n + 1)        (mod-2)

The polynomial p^n + 1 and its factors play major roles in cyclic codes. Specifically, an (n, k) cyclic code is defined by a generator polynomial of the form

G(p) = p^q + g_{q−1} p^{q−1} + … + g_1 p + 1   [19]

where the coefficients g_i are such that G(p) is a factor of p^n + 1. Every code polynomial is then a multiple of G(p),

X(p) = Q_M(p) G(p)

where Q_M(p) is determined by the message polynomial

M(p) = m_{k−1} p^{k−1} + … + m_1 p + m_0

and the check bits are represented by

C(p) = c_{q−1} p^{q−1} + … + c_1 p + c_0
In systematic form, the code vector has the message bits at the left, so

X(p) = p^q M(p) + C(p)

Dividing p^q M(p) by G(p) gives

p^q M(p) / G(p) = Q_M(p) + C(p) / G(p)

so the check-bit polynomial is the remainder

C(p) = rem [ p^q M(p) / G(p) ]

Syndrome calculation at the receiver is equally simple. Given a received vector Y, the syndrome is determined from

S(p) = rem [ Y(p) / G(p) ]
Example

Consider the cyclic (7, 4) Hamming code generated by G(p) = p^3 + 0 + p + 1. We'll use long division to calculate the check-bit polynomial C(p) when M = (1 1 0 0). The message-bit polynomial is

M(p) = p^3 + p^2 + 0 + 0

so p^q M(p) = p^3 M(p) = p^6 + p^5 + 0 + 0 + 0 + 0 + 0. Next, we divide G(p) into p^q M(p), keeping in mind that subtraction is the same as addition in mod-2 arithmetic; the remainder is C(p) = p. Hence

X(p) = p^3 M(p) + C(p) = p^6 + p^5 + 0 + 0 + 0 + p + 0

X = (1 1 0 0 | 0 1 0)
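The long division above can be checked mechanically. A sketch of mod-2 polynomial division reproducing C(p) = p for M = (1 1 0 0):

```python
def mod2_remainder(dividend, divisor):
    """Remainder of mod-2 polynomial division (bit lists, highest power first)."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i]:                          # leading bit set: subtract divisor
            for j, d in enumerate(divisor):
                rem[i + j] ^= d             # mod-2 subtraction = XOR
    return rem[len(dividend) - len(divisor) + 1:]

G = [1, 0, 1, 1]                         # G(p) = p^3 + p + 1
M = [1, 1, 0, 0]                         # M(p) = p^3 + p^2
C = mod2_remainder(M + [0, 0, 0], G)     # divide p^3 M(p) by G(p)
print(M + C)                             # [1, 1, 0, 0, 0, 1, 0] = (1 1 0 0 | 0 1 0)
```
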
(a) Shift-register encoder for (7, 4) Hamming code; (b) register bits when M = (1 1 0 0)
A class of cyclic codes, the Cyclic Redundancy Codes (CRCs), is designed for burst-error detection. The following error types will be detected:
(a) all single-bit errors;
(b) any odd number of errors, assuming p + 1 is a factor of G(p);
(c) burst errors of length not exceeding q, the number of check bits;
(d) double errors, if G(p) contains at least three 1s.
Example: The ASCII letter J is 1001010 and is to be checked for errors using the CRC-8 code (in the table). Determine the transmitted sequence X and show how the receiver can detect errors in the two left-most message bits.
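The CRC-8 table entry is not reproduced here, so as an assumption for illustration the sketch below uses the common CRC-8 generator G(p) = p^8 + p^2 + p + 1; the detection logic is the same for any table polynomial:

```python
def crc_remainder(bits, gen):
    """Mod-2 division remainder (bit lists, highest power first)."""
    rem = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if rem[i]:
            for j, g in enumerate(gen):
                rem[i + j] ^= g
    return rem[len(bits) - len(gen) + 1:]

G = [1, 0, 0, 0, 0, 0, 1, 1, 1]    # assumed CRC-8: p^8 + p^2 + p + 1
M = [1, 0, 0, 1, 0, 1, 0]          # ASCII letter J = 1001010
C = crc_remainder(M + [0] * 8, G)  # q = 8 check bits
X = M + C                          # transmitted sequence

# Receiver: an error-free word leaves a zero remainder ...
print(crc_remainder(X, G) == [0] * 8)   # True

# ... while errors in the two left-most message bits give a nonzero syndrome
# (detected, since G(p) contains at least three 1s).
Y = X.copy()
Y[0] ^= 1
Y[1] ^= 1
print(any(crc_remainder(Y, G)))         # True: errors detected
```
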
M-ary Codes: Reed-Solomon (RS) Codes

Reed-Solomon codes are a subset of the BCH codes that perform well under burst-error conditions. These are non-binary codes whose symbols are members of an M-ary alphabet: with an m-bit encoder, the alphabet contains M = 2^m symbols. The minimum distance is

d_min = n − k + 1

where n is the total number of symbols in the code block and k is the number of message symbols. RS codes are capable of correcting t or fewer symbol errors, where

t = (d_min − 1)/2 = (n − k)/2

With an M-symbol alphabet, n = 2^m − 1 and k = 2^m − 1 − 2t.

M-ary code distance
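These parameter relations are easy to exercise. As an illustrative assumption (not from the slides), the popular choice m = 8, t = 16 gives the widely used (255, 223) RS code:

```python
m, t = 8, 16             # 8-bit symbols, correct up to 16 symbol errors
M = 2 ** m               # alphabet size: 256 symbols
n = 2 ** m - 1           # block length: 255 symbols
k = n - 2 * t            # message symbols: 223
d_min = n - k + 1        # minimum distance: 33
print(n, k, d_min, (d_min - 1) // 2)   # 255 223 33 16
```
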
3. Convolutional Codes
Convolutional codes effectively extend over the entire transmitted bit stream, rather than being limited to codeword blocks. The convolutional structure is especially well suited to space and satellite communication systems that require simple encoders and achieve high performance by sophisticated decoding methods.

Each encoded bit is a mod-2 convolution of the current and L previous message bits with the encoder's generator sequence g_0, g_1, …, g_L:

x_j = m_j g_0 ⊕ m_{j−1} g_1 ⊕ … ⊕ m_{j−L} g_L = Σ_{i=0}^{L} m_{j−i} g_i        (mod-2)
Convolutional encoder with n=2, k=1, and L=2
x'_j = m_j ⊕ m_{j−2}        x''_j = m_j ⊕ m_{j−1} ⊕ m_{j−2}

X = (x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 …)

The output bit rate is therefore 2 r_b and the code rate is R_c = 1/2, like an (n, k) block code with R_c = k/n = 1/2.
Code tree for the (2, 1, 2) encoder
(a) Code Trellis (b) State Diagram for (2, 1, 2)
Encoder (c) Illustrative sequence
Termination of (2, 1, 2) code trellis
Each branch has been labeled with the number of 1s in the encoded bits
Free Distance and Coding Gain

The free distance of a convolutional code is defined to be

d_f = [w(X)]_min

the minimum weight over all transmitted sequences X generated by nonzero messages. The value of d_f serves as a measure of the code's error-control power.
(a) Modified state diagram for the (2, 1, 2) encoder; (b) equivalent block diagram

Each branch is labeled with a factor D^w I^u: the exponent of D equals the branch weight (the number of 1s in the encoded bits), and the exponent of I equals the corresponding number of nonzero message bits. For example, a branch with x'_j x''_j = 11 and m_j = 0 is labeled with D^2 I^0 = D^2.
Our modified state diagram now looks like a signal-flow graph of the type sometimes used to analyze feedback systems. Specifically, if we treat the nodes as summing junctions and the DI terms as branch gains, the figure represents the set of algebraic state equations

W_b = D^2 I W_a + I W_c        W_c = D W_b + D W_d
W_d = D I W_b + D I W_d        W_e = D^2 W_c        [6a]

The encoder's generating function T(D, I) can then be defined by the input-output equation

T(D, I) = W_e / W_a = D^5 I / (1 − 2 D I)
        = D^5 I + 2 D^6 I^2 + 4 D^7 I^3 + …        [7]

The smallest exponent of D in this expansion shows that the free distance of this code is d_f = 5.
As a generalization of Eq. (7), the generating function for an arbitrary convolutional code takes the form

T(D, I) = Σ_{d=d_f}^{∞} Σ_{i=1}^{∞} A(d, i) D^d I^i

Here, A(d, i) denotes the number of different input-output paths through the modified state diagram that have weight d and are generated by messages containing i nonzero bits.
If transmission errors occur with equal and independent probability α per bit, then the probability of a decoded message-bit error is upper-bounded by

P_be ≤ (1/k) ∂T(D, I)/∂I  evaluated at I = 1, D = 2√(α(1 − α))        [9]

When α is sufficiently small, series expansion of T(D, I) yields the approximation

P_be ≈ (M(d_f)/k) 2^{d_f} [α(1 − α)]^{d_f/2} ≈ (M(d_f)/k) 2^{d_f} α^{d_f/2}        [10]

where

M(d_f) = Σ_{i=1}^{∞} i A(d_f, i)

The quantity M(d_f) simply equals the total number of nonzero message bits over all minimum-weight input-output paths in the modified state diagram.
Equation (10) supports our earlier assertion that the error-control power of a convolutional code depends upon its free distance. For a performance comparison with uncoded transmission, we'll make the usual assumption of Gaussian white noise and (S/N)_R = 2 R_c γ_b ≥ 10, so the transmission error probability is

α ≈ (4π R_c γ_b)^{−1/2} e^{−R_c γ_b}

The decoded error probability then becomes

P_be ≈ (M(d_f)/k) 2^{d_f} (4π R_c γ_b)^{−d_f/4} e^{−R_c d_f γ_b / 2}        [11]

whereas uncoded transmission would yield

P_ube ≈ (4π γ_b)^{−1/2} e^{−γ_b}        [12]
Decoding Methods

Suppose the (2, 1, 2) encoder is used at the transmitter, and the received sequence starts with Y = 11 01 11. The number in parentheses beneath each branch is the branch metric, obtained by counting the differences between the encoded bits and the corresponding bits in Y. The circled number at the right-hand end of each branch is the running path metric.
Illustration of the Viterbi Algorithm for
Maximum-Likelihood Decoding
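A hard-decision sketch of the Viterbi algorithm for this (2, 1, 2) code: one survivor is kept per state, branch metrics count bit disagreements with Y, and the smallest running path metric wins.

```python
def conv_encode(msg):
    """(2,1,2) encoder from above: x'_j = m_j ^ m_{j-2}, x''_j = m_j ^ m_{j-1} ^ m_{j-2}."""
    m1 = m2 = 0
    out = []
    for m in msg:
        out += [m ^ m2, m ^ m1 ^ m2]
        m1, m2 = m, m1
    return out

def viterbi_decode(received):
    """Hard-decision maximum-likelihood decoding over the (2,1,2) trellis."""
    survivors = {(0, 0): (0, [])}   # state (m_{j-1}, m_{j-2}) -> (path metric, message)
    for j in range(0, len(received), 2):
        y = received[j:j + 2]
        best = {}
        for (m1, m2), (metric, path) in survivors.items():
            for m in (0, 1):                       # hypothesized message bit
                branch = (m ^ m2, m ^ m1 ^ m2)     # encoder output on this branch
                bm = (branch[0] != y[0]) + (branch[1] != y[1])   # branch metric
                cand = (metric + bm, path + [m])
                state = (m, m1)
                if state not in best or cand[0] < best[state][0]:
                    best[state] = cand             # keep the surviving path
        survivors = best
    return min(survivors.values())[1]              # smallest path metric wins

msg = [1, 0, 1, 1, 0, 0]           # two tail zeros terminate the trellis
y = conv_encode(msg)
y[3] ^= 1                          # one transmission error
print(viterbi_decode(y) == msg)    # True: the error is corrected
```

With d_f = 5, any single transmission error leaves the correct path strictly closer to Y than every competitor, so the decoder recovers the message.
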
Turbo Codes
Turbo codes, or parallel concatenated codes (PCC), are a relatively new class of convolutional codes first introduced in 1993 by Berrou et al.; see also Berrou (1996), Hagenauer et al. (1996), and Johannesson and Zigangirov (1999). They have enabled channel capacities to approach the Shannon limit.

Shannon's theorem for channel capacity assumes random coding, with the BER approaching zero as the code's block or constraint length approaches infinity.
Turbo Encoder

The RSC is a Recursive Systematic Convolutional encoder with rate 1/2. Both RSCs produce parity-check bits, so the overall rate is 1/3. However, it can be reduced to 1/2 by the process of puncturing: eliminating the odd parity-check bits of the first RSC and the even parity-check bits of the second RSC.
For the particular encoder in the figure, the polynomial describing the feedback connections is 1 + D^3 + D^4 = 10011 = 23_8, and the polynomial for the output is 1 + D + D^2 + D^4 = 11101 = 35_8. Hence, the literature often refers to this as G_1 = 23, G_2 = 35, or simply a (23, 35) encoder.

RSC encoder with R = 1/2, G_1 = 23, G_2 = 35, L = 2
Turbo Decoder: consists of two Maximum a Posteriori (MAP) decoders and a feedback path. The first decoder takes the information from the received signal and calculates the a posteriori probability (APP) value. This value is then used as the a priori value for the second decoder.
Turbo Decoder
Instead of using the Viterbi algorithm, the MAP decoder uses a modified form of the BCJR (Bahl, Cocke, Jelinek, and Raviv, 1974) algorithm that takes into account the recursive character of the RSC codes and computes a log-likelihood ratio to estimate the APP for each bit.

The results by Berrou et al. are impressive. When encoding with rate R = 1/2, G_1 = 37 and G_2 = 21, 65,537-bit interleaving, and 18 iterations, they were able to achieve a BER of 10^−5 at E_b/N_0 = 0.7 dB.

The main disadvantage of turbo codes, with their relatively large codewords and iterative decoding process, is their long latency. A system with 65,537-bit interleaving and 18 iterations may have too long a latency for voice telephony.