Information and Coding Theory
Lecture-09: The Channel Coding Theorem
Chadi Abou-Rjeily
March 6, 2018
Introduction (1)
The probability of error is defined as:
$$P_e = \Pr(V \neq U)$$
where $U$ denotes the transmitted message and $V$ its estimate at the receiver.
Introduction (3)
Cost Constraint (1)
In some situations, an additional cost constraint is imposed. For discrete channels, this constraint is quantified by the Hamming weight.
Designate by $x = (x_1, \ldots, x_n)$ one realization of the transmitted codeword $X$. The Hamming weight of $x$ is given by:
$$w_H(x) = \sum_{i=1}^{n} w_H(x_i)$$
where:
$$w_H(x_i) = \begin{cases} 0, & x_i = 0; \\ 1, & x_i \neq 0. \end{cases}$$
In other words, $w_H(x)$ is the number of nonzero components of $x$.
Note that:
$$w_H(x) = 0 \;\Leftrightarrow\; x = [0, \ldots, 0]$$
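As a quick illustration, the Hamming weight defined above can be computed in a few lines of Python; this is a minimal sketch (the helper name hamming_weight is ours, not from the slides):

```python
import numpy as np

def hamming_weight(x):
    """w_H(x): the number of nonzero components of the codeword x."""
    return int(np.count_nonzero(x))

# w_H(x) = 0 if and only if x is the all-zero codeword
print(hamming_weight([0, 0, 0, 0]))  # 0
print(hamming_weight([1, 0, 2, 0]))  # 2
```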
Cost Constraint (2)
Cost Constraint (3)
Channel Coding Theorem (1)
Maximizing $R_c$ is equivalent to maximizing the quantity $\frac{1}{n} I(X, Y)$, which corresponds to the amount of information transmitted over the channel $X \rightarrow Y$.
The maximization must be performed under the cost constraint: $\frac{1}{n} E[w_H(X)] \le P$.
For a given sequence length $n$, we define the capacity-cost function by:
$$C_n(P) = \max_{p(x,y)} \left\{ \frac{1}{n} I(X, Y) \;\middle|\; \frac{1}{n} E[w_H(X)] \le P \right\}$$
Note that $p(x, y) = p(x)p(y|x)$. Since $p(y|x)$ is fixed by the channel (it is independent of the channel code), the maximization can be performed over $p(x)$ rather than $p(x, y)$.
Consequently, $C_n(P)$ can be written as:
$$C_n(P) = \max_{p(x)} \left\{ \frac{1}{n} I(X, Y) \;\middle|\; \frac{1}{n} E[w_H(X)] \le P \right\}$$
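To make this definition concrete, here is a minimal numerical sketch of the single-letter case $n = 1$: a grid search over the input distribution, using a BSC with crossover probability $p$ as the test channel (the helper names H2 and C1 are ours, not from the slides):

```python
import numpy as np

def H2(q):
    """Binary entropy function in bits, with H2(0) = H2(1) = 0."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

def C1(P, p, grid=10_000):
    """C_1(P): maximize I(X;Y) over q = Pr(X=1) subject to E[w_H(X)] = q <= P.
    For a BSC with crossover probability p: I(X;Y) = H2(q(1-p) + (1-q)p) - H2(p)."""
    q = np.linspace(0.0, min(P, 1.0), grid)
    return float(np.max(H2(q * (1 - p) + (1 - q) * p) - H2(p)))

print(C1(P=0.0, p=0.1))  # 0: no cost budget, no information (see Property 1 below)
print(C1(P=0.5, p=0.1))  # ~0.531 = 1 - H2(0.1), the unconstrained BSC capacity
```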
Channel Coding Theorem (2)
$$C = \max_{n} C_n$$
Channel Coding Theorem (3)
Channel Coding Theorem (4)
Properties of C(P) (1)
Property 1: Reliable transmissions are not possible over any channel when no resources (such as power) are available:
$$C(0) = C_n(0) = 0$$
Proof:
$C_n(P)$ is given by:
$$C_n(P) = \max_{p(x)} \left\{ \frac{1}{n} I(X, Y) \;\middle|\; \frac{1}{n} E[w_H(X)] \le P \right\}$$
For $P = 0$, the constraint forces $E[w_H(X)] = 0$, so $X = [0, \ldots, 0]$ with probability 1. A deterministic input conveys no information, hence $I(X, Y) = 0$ and $C_n(0) = C(0) = 0$.
Properties of C(P) (3)
Calculating C(P) (1)
Calculating C(P) (2)
Since $I(X, Y) = H(Y) - H(Y|X)$, $C(P)$ can be written as:
$$C(P) = \max_{p(x)} \left\{ I(X, Y) \;\middle|\; E[w_H(X)] = P \right\}$$
For instance, for an unconstrained BSC with crossover probability $p$, this approach yields the well-known capacity:
$$C = 1 - H_2(p)$$
Example-2 (1)
Calculate the capacity of a BSC with parameter $p$ under a constraint $P$ on the Hamming weight.
For a BSC, $H(Y|X) = H_2(p)$, and the cost constraint implies that:
$$E[w_H(X)] = P \;\Rightarrow\; \Pr(X=0)\underbrace{w_H(X=0)}_{=0} + \Pr(X=1)\underbrace{w_H(X=1)}_{=1} = P$$
$$\Rightarrow \Pr(X=1) = P \;\Rightarrow\; \Pr(X=0) = 1 - P$$
Example-2 (2)
The distribution of $Y$ can be calculated from:
$$\Pr(Y=1) = \Pr(Y=1|X=0)\Pr(X=0) + \Pr(Y=1|X=1)\Pr(X=1) = p(1-P) + (1-p)P$$
$$\Pr(Y=0) = \Pr(Y=0|X=0)\Pr(X=0) + \Pr(Y=0|X=1)\Pr(X=1) = (1-p)(1-P) + pP$$
Consequently, since $Y$ is binary:
$$C(P) = H(Y) - H(Y|X) = H_2\big(p(1-P) + (1-p)P\big) - H_2(p)$$
[Figure: capacity $C(P)$ of the cost-constrained BSC plotted versus $P$, for $0 \le P \le 1$.]
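The resulting capacity-cost curve is easy to evaluate numerically; a minimal sketch of the closed form derived above (helper names are ours):

```python
import numpy as np

def H2(q):
    """Binary entropy function in bits."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

def bsc_capacity_cost(p, P):
    """C(P) = H2(p(1-P) + (1-p)P) - H2(p) for the cost-constrained BSC."""
    return H2(p * (1 - P) + (1 - p) * P) - H2(p)

# C(0) = 0, and the curve peaks at P = 0.5 where C = 1 - H2(p)
for P in (0.0, 0.1, 0.3, 0.5):
    print(P, round(bsc_capacity_cost(0.1, P), 4))
```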
Example-3 (1)
Calculate the capacity of an M-ary symmetric channel $X \rightarrow Y$ with parameter $p$ characterized by the following conditional probabilities:
$$p(y|x) = \begin{cases} 1-p, & y = x; \\ \frac{p}{M-1}, & y \neq x. \end{cases}$$
where the input $X$ and output $Y$ are both M-ary.
For this channel, $H(Y|X) = H_2(p) + p\log_2(M-1)$ (entropy of an M-ary symmetric r.v.).
Consequently:
$$C = \max_{p(x)} \{H(Y)\} - H_2(p) - p\log_2(M-1)$$
$H(Y)$ is maximized when $Y$ is uniform, which is achieved by a uniform input, so that $\max H(Y) = \log_2 M$ and:
$$C = \log_2 M - H_2(p) - p\log_2(M-1)$$
[Figure: capacity $C$ of the M-ary symmetric channel plotted versus $p$, for $0 \le p \le 1$.]
Example-3 (4)
Note that $C = 0$ when:
$$H_2(p) + p\log_2(M-1) = \log_2 M$$
This occurs at $p = \frac{M-1}{M}$, where all $M$ outputs are equally likely regardless of the input.
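A minimal numerical check of this capacity expression (the helper name msc_capacity is ours):

```python
import numpy as np

def msc_capacity(p, M):
    """C = log2(M) - H2(p) - p*log2(M-1) for the M-ary symmetric channel."""
    pc = np.clip(p, 1e-12, 1 - 1e-12)
    h2 = -pc * np.log2(pc) - (1 - pc) * np.log2(1 - pc)
    return np.log2(M) - h2 - p * np.log2(M - 1)

print(round(msc_capacity(0.0, 4), 4))   # noiseless channel: log2(4) = 2 bits
print(round(msc_capacity(0.75, 4), 4))  # p = (M-1)/M = 0.75: capacity 0
```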
Example-4 (1)
Calculate the capacity of a BEC $X \rightarrow Y$ with erasure probability $\pi$.
Method 1:
We have seen in lecture-5 that the mutual information between the input and the output of a BEC is given by:
$$I(X, Y) = (1 - \pi)H(X)$$
Consequently:
$$C = H_2(\pi) + 1 - \pi - H_2(\pi) = 1 - \pi$$
This capacity is achieved when the input has a uniform distribution.
The capacity reaches its maximum value of $C = 1$ for $\pi = 0$ (ideal channel with no erasures).
For $\pi = 1$, $C = 0$ and the channel is opaque; in this case, $Y = e$ independently of the value taken by the input.
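The relation $I(X, Y) = (1-\pi)H(X)$ can be verified numerically by computing $I(X;Y) = H(Y) - H(Y|X)$ directly from the BEC transition probabilities; a small sketch (helper names are ours):

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a discrete distribution given as a list of probabilities."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def bec_mutual_info(pi, q):
    """I(X;Y) for a BEC with erasure probability pi and Pr(X=1) = q."""
    p_y = [(1 - q) * (1 - pi), pi, q * (1 - pi)]  # Pr(Y=0), Pr(Y=e), Pr(Y=1)
    return entropy(p_y) - entropy([1 - pi, pi])   # H(Y|X) = H2(pi) for both inputs

pi, q = 0.3, 0.5
print(bec_mutual_info(pi, q))          # 0.7
print((1 - pi) * entropy([q, 1 - q]))  # (1 - pi) * H(X) = 0.7
```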
Example-5 (1)
Calculate the capacity of an AWGN channel $X \rightarrow Y$ under the average power constraint $E[X^2] \le P$.
For this AWGN channel:
$Y = X + Z$ where $Z$ stands for noise.
$X$ and $Z$ are independent.
$Z$ is a Gaussian r.v.: we denote by $N = E[Z^2]$ the power of the Gaussian noise; consequently, $Z \sim \mathcal{N}(0, N)$.
$H(Y|X)$ can be calculated from:
$$H(Y|X) = H(X+Z|X) = H(Z|X) = H(Z) = \frac{1}{2}\log(2\pi e N)$$
The second equality follows since, conditioned on $X$, there is a one-to-one relation between $Z$ and $X + Z$.
The third equality follows since $X$ and $Z$ are independent.
The fourth equality follows from the expression of the differential entropy of a Gaussian r.v.
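The Gaussian differential-entropy formula can be sanity-checked by Monte Carlo, since $H(Z) = E[-\ln p(Z)]$; a small sketch (in nats, assuming natural logarithms):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2.0                                    # noise power E[Z^2]
z = rng.normal(0.0, np.sqrt(N), 1_000_000)

# H(Z) = E[-ln p(Z)]: average the negative log-density over the samples
neg_log_pdf = 0.5 * np.log(2 * np.pi * N) + z**2 / (2 * N)
print(neg_log_pdf.mean())                  # Monte Carlo estimate, ~1.766 nats
print(0.5 * np.log(2 * np.pi * np.e * N))  # closed form (1/2)ln(2*pi*e*N) = 1.766
```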
Example-5 (2)
Consequently:
$$C(P) = \max_{p(x)} \left\{ H(Y) \;\middle|\; E[X^2] = P \right\} - \frac{1}{2}\log(2\pi e N)$$
The output power is:
$$E[Y^2] = E[(X+Z)^2] = E[X^2] + E[Z^2] = P + N$$
where the cross term $2E[XZ]$ vanishes since $X$ and $Z$ are independent and $Z$ is zero-mean. Under this power constraint, $H(Y)$ is maximized when $Y$ is Gaussian, which is achieved by a Gaussian input $X \sim \mathcal{N}(0, P)$. Therefore:
$$C(P) = \frac{1}{2}\log\big(2\pi e(P+N)\big) - \frac{1}{2}\log(2\pi e N) = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)$$
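A minimal sketch of this formula (base-2 logarithm, so the result is in bits per channel use; with the natural logarithm it would be in nats):

```python
import numpy as np

def awgn_capacity(snr):
    """C(P) = 0.5 * log2(1 + P/N) in bits per channel use."""
    return 0.5 * np.log2(1.0 + snr)

# The capacity grows without bound as the SNR P/N increases
for snr_db in (0, 10, 20, 30):
    snr = 10.0 ** (snr_db / 10.0)
    print(snr_db, "dB:", round(awgn_capacity(snr), 3), "bits/channel use")
```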
Example-5 (4)
[Figure: AWGN capacity $C(P)$ plotted versus the SNR $P/N$.]
Note that:
$\lim_{P \to +\infty} C(P) = +\infty$.
$P/N$ is the signal-to-noise ratio (SNR).
Example-5 (5)
With $T$ denoting the duration of one channel use, the capacity expressed per unit time becomes:
$$C(P) = \frac{1}{2T}\log(1 + \mathrm{SNR})$$