
LDPC Codes

Definition of LDPC codes
Factor graphs for use in decoding
Decoding for binary erasure channels
EXIT charts
Soft-output decoding
Turbo principle applied to LDPC codes
Slides originally from I. Land p.1
LDPC Codes
Invented by Robert G. Gallager in his PhD thesis, MIT, 1963
Re-invented by David J.C. MacKay from Cambridge,
England in 1999
Defined by the parity-check matrix (which has a low density
of ones)
Iteratively decoded on a factor graph of the check matrix
Advantages
Good codes and
low decoding complexity
Slides originally from I. Land p.2
LDPC Codes
Linear block codes.
Defined by the parity-check matrix.
Used as long block codes.
Advantage: low-complexity error correction.
Slides originally from I. Land p.3
Regular LDPC Codes
Remember:
For every linear binary (N, K) code C with code rate R = K/N:

There is a generator matrix G ∈ F_2^(K×N) such that codewords
x ∈ F_2^N and info words u ∈ F_2^K are related by

    x = uG

There is a parity-check matrix H ∈ F_2^(M×N) with
rank H = N − K, such that

    x ∈ C  ⟺  xH^T = 0
Slides originally from I. Land p.4
Regular LDPC Codes
Definition
A regular (d_v, d_c) LDPC code of length N is defined by a
parity-check matrix H ∈ F_2^(M×N), with d_v ones in each column
and d_c ones in each row. The dimension of the code (info word
length) is K = N − rank H.

Example 1

    H = [ 1 0 0 1 1 1 0 0 ]
        [ 1 0 1 0 1 1 0 0 ]
        [ 1 0 1 1 0 1 0 0 ]
        [ 0 1 0 1 0 0 1 1 ]
        [ 0 1 0 0 1 0 1 1 ]
        [ 0 1 1 0 0 0 1 1 ]

Parameters: d_v = 3, d_c = 4, N = 8, M = 6, K = 4 (!), R = 1/2 (!)
Slides originally from I. Land p.5
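The parameters above can be checked numerically. The following is an illustrative sketch (not from the slides; the `gf2_rank` helper is my own) that computes the column weights, row weights, and GF(2) rank of the Example 1 check matrix:

```python
import numpy as np

# Parity-check matrix of Example 1
H = np.array([
    [1, 0, 0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 0, 0, 1, 1],
])

def gf2_rank(mat):
    """Rank over GF(2) via Gaussian elimination."""
    A = mat.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]   # move pivot row up
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]               # eliminate the rest of the column
        rank += 1
    return rank

M, N = H.shape
d_v = set(H.sum(axis=0))   # distinct column weights
d_c = set(H.sum(axis=1))   # distinct row weights
K = N - gf2_rank(H)
print(d_v, d_c, K)
```

Note that K = 4 > N − M = 2: two of the six check rows are linear combinations of the others, which is exactly the "(!)" in the parameter list.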
Regular LDPC Codes
Example 2
    H = [ 0 1 0 0 0 0 0 0 0 0 0 1 1 0 0 1 ]
        [ 0 0 0 1 0 1 1 0 0 1 0 0 0 0 0 0 ]
        [ 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 ]
        [ 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 ]
        [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ]
        [ 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 ]
        [ 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 ]
        [ 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 ]
        [ 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 0 ]
        [ 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 ]
Parameters: ?
Slides originally from I. Land p.6
Regular LDPC Codes
Design Rate
The true rate R and the design rate R_d are defined as

    R := K/N,    R_d := 1 − d_v/d_c,

and they are related by

    R ≥ R_d

Proof
The number of ones in the check matrix is M d_c = N d_v. Some
check equations may be redundant, i.e., M ≥ N − K, and thus

    K/N = 1 − (N − K)/N ≥ 1 − M/N = 1 − d_v/d_c
Slides originally from I. Land p.7
Regular LDPC Codes
The check matrices can be constructed randomly or
deterministically.

Encoding
LDPC codes are usually systematically encoded, i.e., by a
systematic generator matrix

    G = [ I_K | A ]

The matrix A can be found by transforming H, using row
operations and column permutations, into another check matrix
of the code that has the form

    H' = [ A^T | I_{N−K} ]
Slides originally from I. Land p.8
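The transformation can be sketched in code. This is an illustrative implementation (all names are my own): row-reduce H over GF(2), drop the redundant zero rows, permute the pivot columns to the right to obtain H' = [A^T | I_{N−K}], and assemble G = [I_K | A] for the column-permuted (equivalent) code:

```python
import numpy as np

def gf2_rref(mat):
    """Reduced row echelon form over GF(2); returns (nonzero rows, pivot cols)."""
    A = mat.copy() % 2
    pivots, r = [], 0
    for c in range(A.shape[1]):
        p = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if p is None:
            continue
        A[[r, p]] = A[[p, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        pivots.append(c)
        r += 1
    return A[:r], pivots          # redundant (all-zero) rows are dropped here

# Example 1 check matrix
H = np.array([
    [1, 0, 0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 0, 0, 1, 1],
])

Hr, piv = gf2_rref(H)
N = H.shape[1]
K = N - len(piv)
perm = [c for c in range(N) if c not in piv] + piv    # non-pivot columns first
Hp = Hr[:, perm]                          # Hp = [ A^T | I_{N-K} ]
A = Hp[:, :K].T                           # K x (N-K)
G = np.hstack([np.eye(K, dtype=int), A])  # systematic generator matrix

assert not (G @ Hp.T % 2).any()           # every row of G satisfies all checks
```

Since columns were permuted, G encodes an equivalent code; to encode the original code, apply the inverse permutation to each codeword.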
Encoding LDPC Codes
Practical Issues
We would like long codes; however, since G is not low
density, encoding complexity is an issue.
After error correction, the codewords are mapped back to
info words as for any linear block code.
Slides originally from I. Land p.9
Factor Graphs
Definition
A factor graph of a code is a graphical representation of the
code constraints defined by a parity-check matrix of this code:

    xH^T = 0

The factor graph is a bipartite graph with
a variable node for each code symbol,
a check node for each check equation,
an edge between a variable node and a check node if the
code symbol participates in the check equation.
Notice that each edge corresponds to one 1 in the check matrix.
Slides originally from I. Land p.10
Factor Graphs
Example
With the check matrix H from Example 1:

    xH^T = [x_0 x_1 ... x_7] [ 1 0 0 1 1 1 0 0 ]^T
                             [ 1 0 1 0 1 1 0 0 ]
                             [ 1 0 1 1 0 1 0 0 ]   = 0
                             [ 0 1 0 1 0 0 1 1 ]
                             [ 0 1 0 0 1 0 1 1 ]
                             [ 0 1 1 0 0 0 1 1 ]

    ⟺

    x_0 + x_3 + x_4 + x_5 = 0
    x_0 + x_2 + x_4 + x_5 = 0
    x_0 + x_2 + x_3 + x_5 = 0
    x_1 + x_3 + x_6 + x_7 = 0
    x_1 + x_4 + x_6 + x_7 = 0
    x_1 + x_2 + x_6 + x_7 = 0
Slides originally from I. Land p.11
Factor Graphs
Example (cont.)

    xH^T = 0  ⟺

    x_0 + x_3 + x_4 + x_5 = 0   (chk_0)
    x_0 + x_2 + x_4 + x_5 = 0   (chk_1)
    x_0 + x_2 + x_3 + x_5 = 0   (chk_2)
    x_1 + x_3 + x_6 + x_7 = 0   (chk_3)
    x_1 + x_4 + x_6 + x_7 = 0   (chk_4)
    x_1 + x_2 + x_6 + x_7 = 0   (chk_5)

[Figure: factor graph with variable nodes X_0, ..., X_7 and
check nodes chk_0, ..., chk_5]
Slides originally from I. Land p.12
Message Passing Algorithm
The factor graph represents a factorization of the global
code constraint

    xH^T = 0

Variable nodes and check nodes represent local code constraints.
This is made explicit by the edge interleaver.

[Figure: factor graph with variable nodes X_0, ..., X_7 and
check nodes chk_0, ..., chk_5]
Slides originally from I. Land p.13
Message Passing Algorithm
LDPC codes can be iteratively decoded on the factor graph:
Nodes perform local decoding operations
Nodes exchange extrinsic soft-values called messages
Iteration is terminated when the code symbol estimates form
a valid codeword, i.e., if x̂H^T = 0
Advantages
The overall decoding complexity is only
linear (!) in the code length
The decoder performs close to the ML decoder
But nevertheless: This decoding algorithm is sub-optimal due to
cycles in the graph
Slides originally from I. Land p.14
Message Passing Algorithm
Assume an LDPC code of length N defined by the M × N check
matrix

    H = [ H_{m,n} ],   m = 0, ..., M−1,  n = 0, ..., N−1
Ingredients of the MPA:
Sets of nodes describing the connections of nodes
Messages that are exchanged between the nodes
Node operations that combine incoming messages to
outgoing messages
Slides originally from I. Land p.15
Variable Node Sets
All variable nodes:

    N := {0, 1, ..., N−1}

All variable nodes that are connected to check node m:

    N(m) := {n ∈ N : H_{m,n} = 1}

All variable nodes that are connected to check node m, but
excluding variable node n (leads to extrinsic values):

    N(m)\n := N(m) \ {n}
Slides originally from I. Land p.16
Check Node Sets
All check nodes:

    M := {0, 1, ..., M−1}

All check nodes that are connected to variable node n:

    M(n) := {m ∈ M : H_{m,n} = 1}

All check nodes that are connected to variable node n, but
excluding check node m:

    M(n)\m := M(n) \ {m}
Slides originally from I. Land p.17
Example for Node Sets
Variable node sets

    N = {0, 1, ..., 7}
    N(0) = {0, 3, 4, 5}
    N(0)\3 = {0, 4, 5}

Check node sets

    M = {0, 1, ..., 5}
    M(1) = {3, 4, 5}
    M(1)\4 = {3, 5}

[Figure: factor graph with variable nodes X_0, ..., X_7 and
check nodes chk_0, ..., chk_5]
Slides originally from I. Land p.18
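The node sets are immediate to compute from H: N(m) collects the ones in row m, M(n) the ones in column n. A small sketch (the dictionary names are my own) that reproduces the example sets for the Example 1 check matrix:

```python
# Example 1 check matrix as nested lists
H = [
    [1, 0, 0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 0, 0, 1, 1],
]

# N(m): variable nodes connected to check node m (ones in row m)
N_of = {m: {n for n, h in enumerate(row) if h} for m, row in enumerate(H)}
# M(n): check nodes connected to variable node n (ones in column n)
M_of = {n: {m for m, row in enumerate(H) if row[n]} for n in range(len(H[0]))}

print(N_of[0])          # N(0)
print(N_of[0] - {3})    # N(0)\3
print(M_of[1])          # M(1)
print(M_of[1] - {4})    # M(1)\4
```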
Messages and Node Operations
Initial message to variable node X_n:

    μ_ch(n) = fct( y_n )

Message from variable node X_n to check node chk_m:

    μ_vc(n, m) = fct( μ_ch(n), μ_cv(m', n) : m' ∈ M(n)\m )

Message from check node chk_m to variable node X_n:

    μ_cv(m, n) = fct( μ_vc(n', m) : n' ∈ N(m)\n )

Variable-to-check messages and check-to-variable
messages are extrinsic messages.
Slides originally from I. Land p.19
Example for Variable Node
Variable node X_0

Incoming messages:
    μ_ch(0), μ_cv(0, 0), μ_cv(1, 0), μ_cv(2, 0)

Outgoing messages:
    μ_vc(0, 0), μ_vc(0, 1), μ_vc(0, 2)

Example operation:
    μ_vc(0, 0) = fct( μ_ch(0), μ_cv(1, 0), μ_cv(2, 0) )

[Figure: factor graph with variable nodes X_0, ..., X_7 and
check nodes chk_0, ..., chk_5]
Slides originally from I. Land p.20
Example for Check Node
Check node chk_1

Incoming messages:
    μ_vc(0, 1), μ_vc(2, 1), μ_vc(4, 1), μ_vc(5, 1)

Outgoing messages:
    μ_cv(1, 0), μ_cv(1, 2), μ_cv(1, 4), μ_cv(1, 5)

Example operation:
    μ_cv(1, 4) = fct( μ_vc(0, 1), μ_vc(2, 1), μ_vc(5, 1) )

[Figure: factor graph with variable nodes X_0, ..., X_7 and
check nodes chk_0, ..., chk_5]
Slides originally from I. Land p.21
Messages and Node Operations (cont.)
Final message generated by variable node X_n:

    μ_v(n) = fct( μ_ch(n), μ_cv(m', n) : m' ∈ M(n) )

Estimate for code symbol X_n:

    x̂_n = 0  if Pr( X_n = 0 | μ_v(n) ) > Pr( X_n = 1 | μ_v(n) )
    x̂_n = 1  otherwise
Slides originally from I. Land p.22
Optimality, Cycles, and Girth
Nodes assume that incoming messages are independent.
The independence assumption holds only for the first few
iterations, due to cycles in the graph.
The length of the smallest cycle is called the girth.
For N → ∞, the girth tends to ∞ and the MPA becomes
optimal.

[Figure: factor graph with variable nodes X_0, ..., X_7 and
check nodes chk_0, ..., chk_5]
Slides originally from I. Land p.23
MPA for the BEC
If the communication channel is a BEC, the MPA becomes very
simple. All (!) messages are from the set {0, 1, Δ} (Δ denotes
an erasure).

Message updating rules:

Variable node operation
An outgoing message is Δ only if all relevant incoming
messages are Δ.

Check node operation
An outgoing message is different from Δ only if all relevant
incoming messages are different from Δ.

The iteration is terminated when all code symbols are recovered.
Slides originally from I. Land p.24
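These rules amount to a peeling decoder: any check equation with exactly one erased symbol resolves that symbol as the XOR of the others. A minimal sketch, using the Example 1 check matrix and a hypothetical received word (None marks an erasure; the underlying codeword is [1,0,1,1,1,1,0,1] with positions 3 and 6 erased):

```python
H = [
    [1, 0, 0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 0, 0, 1, 1],
]

y = [1, 0, 1, None, 1, 1, None, 1]   # BEC output, None = erasure

x = list(y)
progress = True
while progress and None in x:
    progress = False
    for row in H:
        idx = [n for n, h in enumerate(row) if h]     # N(m)
        erased = [n for n in idx if x[n] is None]
        if len(erased) == 1:                          # this check resolves it
            n = erased[0]
            x[n] = sum(x[i] for i in idx if i != n) % 2
            progress = True

print(x)   # both erasures recovered
```

If the loop exits with erasures left, the remaining erased positions contain a stopping set (see the remarks at the end of the deck).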
L-values, repetition
Boxplus Operator I
For two L-values l_1 and l_2, the boxplus operation is defined
as

    l_1 ⊞ l_2 = 2 tanh⁻¹( tanh(l_1/2) · tanh(l_2/2) )

It can be approximated by

    l_1 ⊞ l_2 ≈ sgn(l_1) · sgn(l_2) · min( |l_1|, |l_2| )
Slides originally from I. Land p.25
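Both forms can be sketched directly (function names are my own):

```python
import math

def boxplus(l1, l2):
    """Exact boxplus: 2 * atanh( tanh(l1/2) * tanh(l2/2) )."""
    return 2 * math.atanh(math.tanh(l1 / 2) * math.tanh(l2 / 2))

def boxplus_min(l1, l2):
    """Min-sum approximation: sgn(l1) * sgn(l2) * min(|l1|, |l2|)."""
    return math.copysign(min(abs(l1), abs(l2)), l1 * l2)

# The exact result is always smaller in magnitude than either input;
# the approximation keeps the sign rule and the smaller magnitude.
print(boxplus(2.0, 3.0))       # approx 1.69
print(boxplus_min(2.0, -3.0))  # -2.0
```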
L-values, repetition
Boxplus Operator II
Meaning of boxplus: let X_1 + X_2 = X_3 mod 2; then

    L(X_3) = L(X_1) ⊞ L(X_2)

For more than two L-values, the operator can be evaluated
successively, or as

    ⊞_i l_i = 2 tanh⁻¹( Π_i tanh(l_i/2) )
            ≈ Π_i sgn(l_i) · min_i |l_i|
Slides originally from I. Land p.26
Single Parity-Check Codes
Soft-output decoding rules when using L-values

Extrinsic L-values:

    l_{e,n} = ⊞_{i=0, i≠n}^{N−1}  l_{ch,i}

A-posteriori L-values:

    l_{p,n} = l_{ch,n} + l_{e,n}
Slides originally from I. Land p.27
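These two rules can be sketched for a single parity-check code, computing each extrinsic value as the boxplus over all the other channel L-values (function names are my own):

```python
import math
from functools import reduce

def boxplus(l1, l2):
    """Exact boxplus of two L-values."""
    return 2 * math.atanh(math.tanh(l1 / 2) * math.tanh(l2 / 2))

def spc_soft_out(l_ch):
    """Extrinsic and a-posteriori L-values of a single parity-check code."""
    l_e = [reduce(boxplus, [l for i, l in enumerate(l_ch) if i != n])
           for n in range(len(l_ch))]
    l_p = [c + e for c, e in zip(l_ch, l_e)]
    return l_e, l_p

l_e, l_p = spc_soft_out([2.0, -3.0, 4.0])
# The sign of each extrinsic value is the product of the other signs,
# and its magnitude is bounded by the smallest of the other magnitudes.
print(l_e)
```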
L-values, repetition
Binary symmetric channel (BSC) with crossover probability ε:

    L(y|X) = +ln((1−ε)/ε)   for y = 0
    L(y|X) = −ln((1−ε)/ε)   for y = 1

Binary erasure channel (BEC) with erasure probability δ:

    L(y|X) = +∞   for y = 0
    L(y|X) = 0    for y = Δ
    L(y|X) = −∞   for y = 1

Binary-input AWGN channel with SNR E_s/N_0 (0 → +1, 1 → −1):

    L(y|X) = 4 (E_s/N_0) y
Slides originally from I. Land p.28
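The three channel L-value rules as code (function names are my own; None marks a BEC erasure):

```python
import math

def llr_bsc(y, eps):
    """BSC with crossover probability eps: L = +/- ln((1-eps)/eps)."""
    mag = math.log((1 - eps) / eps)
    return mag if y == 0 else -mag

def llr_bec(y):
    """BEC: +inf for y = 0, 0 for an erasure, -inf for y = 1."""
    return 0.0 if y is None else (math.inf if y == 0 else -math.inf)

def llr_awgn(y, es_n0):
    """Binary-input AWGN with BPSK mapping 0 -> +1, 1 -> -1."""
    return 4 * es_n0 * y

print(llr_bsc(0, 0.1))     # ln(0.9/0.1), approx 2.20
print(llr_bec(None))       # 0.0: an erasure carries no information
print(llr_awgn(0.5, 1.0))  # 2.0
```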
MPA based on L-values I
The symbol μ (for message) is commonly replaced by l (for
L-value) when the messages are L-values.

Initial messages:

    l_ch(n) = L(X_n | y_n)

Variable-node operation:

    l_vc(n, m) = L( X_n | l_ch(n), l_cv(m', n) : m' ∈ M(n)\m )
               = l_ch(n) + Σ_{m' ∈ M(n)\m} l_cv(m', n)

Check-node operation:

    l_cv(m, n) = L( X_n | l_vc(n', m) : n' ∈ N(m)\n )
               = ⊞_{n' ∈ N(m)\n} l_vc(n', m)
Slides originally from I. Land p.29
MPA based on L-values II
Final variable-node operation:

    l_v(n) = L( X_n | l_ch(n), l_cv(m', n) : m' ∈ M(n) )
           = l_ch(n) + Σ_{m' ∈ M(n)} l_cv(m', n)

Hard decision:

    x̂_n = 0   if l_v(n) > 0
    x̂_n = 1   if l_v(n) < 0

Termination criterion:

    x̂H^T = 0
Slides originally from I. Land p.30
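Putting the pieces together, the full sum-product MPA over L-values can be sketched as follows. This is a didactic flooding-schedule implementation, not an optimized decoder; the channel L-values are hypothetical, corresponding to the Example 1 codeword [1,0,1,1,1,1,0,1] with one weak, wrongly signed value at position 3:

```python
import math
import numpy as np

def boxplus_list(ls):
    """Boxplus over a list of L-values via the tanh rule (clipped)."""
    p = 1.0
    for l in ls:
        p *= math.tanh(l / 2)
    p = max(min(p, 1 - 1e-12), -(1 - 1e-12))   # keep atanh finite
    return 2 * math.atanh(p)

def decode_mpa(H, l_ch, max_iter=50):
    M, N = H.shape
    edges = [(m, n) for m in range(M) for n in range(N) if H[m, n]]
    l_vc = {(m, n): l_ch[n] for (m, n) in edges}   # initial messages
    l_cv = {}
    for _ in range(max_iter):
        # check-node update: boxplus over N(m)\n
        for (m, n) in edges:
            l_cv[(m, n)] = boxplus_list(
                [l_vc[(m, k)] for k in range(N) if H[m, k] and k != n])
        # variable-node update: channel value plus messages from M(n)\m
        for (m, n) in edges:
            l_vc[(m, n)] = l_ch[n] + sum(
                l_cv[(j, n)] for j in range(M) if H[j, n] and j != m)
        # final L-values, hard decision, and termination test
        l_v = [l_ch[n] + sum(l_cv[(m, n)] for m in range(M) if H[m, n])
               for n in range(N)]
        x_hat = np.array([0 if l > 0 else 1 for l in l_v])
        if not (H @ x_hat % 2).any():   # valid codeword: stop early
            break
    return x_hat

H = np.array([
    [1, 0, 0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 0, 0, 1, 1],
])
l_ch = [-4.0, 4.0, -4.0, 1.0, -4.0, -4.0, 4.0, -4.0]
print(decode_mpa(H, l_ch))   # the weak wrong value at position 3 is corrected
```

Replacing `boxplus_list` with a sign-and-minimum computation gives the min-sum variant from the boxplus slides.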
Some Remarks
The MPA gives the a-posteriori probabilities (APPs) of the
code symbols if the graph is cycle-free.
Otherwise, it approximates maximum-likelihood (ML)
decoding of the codeword.
For the BEC, the MPA may get stuck: a set of erased code
symbols that cannot be resolved is called a stopping set.
The performance of a code of infinite length can be
determined by tracking the evolution of the probability
density function of the messages over the iterations.
This method is called density evolution.
Slides originally from I. Land p.31
