
Chapter 8

Channel Capacity
Channel Capacity
Define C = max {I(A; B) : p(a)}
= the amount of useful information per bit actually sent
= the change in entropy in going through the channel (the drop in uncertainty)

I(A; B) = H(A) - H(A | B) = \sum_{A} \sum_{B} P(a, b) \log \frac{P(a, b)}{p(a)\, p(b)}

H(A) is the average uncertainty of what is being sent before receiving; H(A | B) is the uncertainty remaining after receiving.

I(A; B) = H(B) - H(B | A) = H(B) - \sum_{A} \sum_{B} P(a, b) \log \frac{1}{P(b | a)}
        = H(B) - \sum_{A} p(a) \sum_{B} P(b | a) \log \frac{1}{P(b | a)}
8.1
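As a sanity check on these two identities, here is a small sketch (the helper names entropy and mutual_information are mine, not from the chapter) that computes I(A; B) both ways from p(a) and the transition matrix P(b | a):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; zero-probability terms contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(p_a, P_b_given_a):
    """I(A;B) computed both as H(B) - H(B|A) and as H(A) - H(A|B)."""
    p_a = np.asarray(p_a, dtype=float)
    P = np.asarray(P_b_given_a, dtype=float)   # rows indexed by a, columns by b
    joint = p_a[:, None] * P                   # P(a, b) = p(a) P(b | a)
    p_b = joint.sum(axis=0)
    h_b_given_a = sum(p_a[i] * entropy(P[i]) for i in range(len(p_a)))
    i_forward = entropy(p_b) - h_b_given_a     # H(B) - H(B|A)
    i_backward = entropy(p_a) - (entropy(joint.ravel()) - entropy(p_b))  # H(A) - H(A|B)
    assert abs(i_forward - i_backward) < 1e-9  # the two identities agree
    return i_forward

# Binary symmetric channel with P = 0.9 and uniform input: about 0.531 bits.
print(mutual_information([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]]))
```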
Uniform Channel
Channel probabilities do not change from symbol to symbol, i.e., the rows of the probability matrix are permutations of each other. So the following is independent of a:
W = H(B | a) = \sum_{B} P(b | a) \log \frac{1}{P(b | a)}   (the entropy of any row)

I(A; B) = H(B) - \sum_{A} p(a)\, W = H(B) - W
Consider no noise: P(b | a) = 1 for some b and 0 for all others. Then W = 0 and

I(A; B) = H(B) = H(A)

(which conforms to intuition only if the channel matrix is a permutation matrix).

8.2
All noise (every row identical, so the output is independent of the input) implies H(B) = W, hence I(A; B) = 0.
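A minimal sketch of the W shortcut, covering the two extremes just described (the helper name uniform_channel_info is an assumed name, not from the chapter):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def uniform_channel_info(p_a, P):
    """For a uniform channel (every row of P a permutation of the first):
    W = entropy of any row and I(A;B) = H(B) - W."""
    p_a = np.asarray(p_a, dtype=float)
    P = np.asarray(P, dtype=float)
    W = entropy(P[0])            # independent of the row by assumption
    p_b = p_a @ P                # output distribution
    return entropy(p_b) - W, W

# No noise (permutation matrix): W = 0 and I(A;B) = H(B) = H(A), about 0.811 bits here.
print(uniform_channel_info([0.25, 0.75], [[0, 1], [1, 0]]))

# All noise (identical rows): H(B) = W, so I(A;B) = 0.
print(uniform_channel_info([0.25, 0.75], [[0.5, 0.5], [0.5, 0.5]]))
```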
Capacity of Binary Symmetric Channel


C = max {I(A; B) : p(a)} = max {H(B) - W : p(a)} = max {H(B) : p(a)} - W
[BSC diagram: input a = 0 is received as b = 0 and a = 1 as b = 1, each with probability P; the crossovers 0 -> 1 and 1 -> 0 occur with probability Q; input probabilities p(a = 0), p(a = 1), output probabilities p(b = 0), p(b = 1).]

H(B) = p(b = 0) \log_2 \frac{1}{p(b = 0)} + p(b = 1) \log_2 \frac{1}{p(b = 1)}
     = x \log_2 \frac{1}{x} + (1 - x) \log_2 \frac{1}{1 - x}

W = P \log_2 \frac{1}{P} + Q \log_2 \frac{1}{Q} = H_2(P)   (by definition)

where x = pP + (1 - p)Q and p = p(a = 0).

The maximum occurs when x = 1/2, hence p = 1/2 as well (unless the channel is all noise), giving

C = 1 - H_2(P)
8.5
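The maximization can be checked numerically: sweep p = p(a = 0) over a grid for a fixed P and confirm the maximum of I(A; B) falls at p = 1/2 with value 1 - H_2(P). The helper names below are mine.

```python
import numpy as np

def h2(p):
    """Binary entropy H_2(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def bsc_mutual_info(p, P):
    """I(A;B) = H(B) - H_2(P) for the BSC, with p = p(a = 0) and x = p(b = 0)."""
    x = p * P + (1 - p) * (1 - P)
    return h2(x) - h2(P)

P = 0.9
grid = np.linspace(0.0, 1.0, 1001)
values = [bsc_mutual_info(p, P) for p in grid]
best = int(np.argmax(values))
# The maximum sits at p = 1/2 and equals 1 - H_2(P), about 0.531 bits for P = 0.9.
print(grid[best], values[best], 1 - h2(P))
```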
Numerical Examples
If P = 1/2 + ε, then C(P) ≈ 3ε² is a great approximation.

Probability                  Capacity
P = 1/2      Q = 1/2         C = 0 %
P = 0.6      Q = 0.4         C ~ 3 %
P = 0.7      Q = 0.3         C ~ 12 %
P = 0.8      Q = 0.2         C ~ 28 %
P = 0.9      Q = 0.1         C ~ 53 %
P = 0.99     Q = 0.01        C ~ 92 %
P = 0.999    Q = 0.001       C ~ 99 %
8.5
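A short sketch reproducing the capacity column and comparing it against the 3ε² rule of thumb (function names are mine):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(f"{'P':>6}  {'C(P)':>7}  {'3*eps^2':>8}")
for P in (0.5, 0.6, 0.7, 0.8, 0.9, 0.99, 0.999):
    eps = P - 0.5
    print(f"{P:>6}  {1 - h2(P):>7.1%}  {3 * eps * eps:>8.1%}")
```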
Error Detecting Code
A uniform channel with equiprobable input: p(a_1) = ... = p(a_q). Apply this to n-bit single error detection, with one parity bit among the c_i ∈ {0, 1}:
W = \sum_{B} P(b | a) \log_2 \frac{1}{P(b | a)} = \sum_{k=0}^{n} \binom{n}{k} P^{n-k} Q^{k} \log_2 \frac{1}{P^{n-k} Q^{k}}
8.3
Per-bit channel matrix: \begin{pmatrix} P & Q \\ Q & P \end{pmatrix}
|A| = q = 2^{n-1},   a = c_1 \cdots c_n (even parity),   H_2(A) = n - 1
|B| = 2^{n} = 2q,    c_1 \cdots c_n = b (any parity),    H_2(B) = ??
For blocks of size n, we know the probability of exactly k errors is \binom{n}{k} P^{n-k} Q^{k}, and every b ∈ B can be obtained from any a ∈ A by some k = 0, ..., n errors:


\log_2 \frac{1}{P^{n-k} Q^{k}} = (n - k) \log_2 \frac{1}{P} + k \log_2 \frac{1}{Q},  so

W = \log_2 \frac{1}{P} \sum_{k=0}^{n} (n - k) \frac{n!}{k!\,(n-k)!} P^{n-k} Q^{k}
  + \log_2 \frac{1}{Q} \sum_{k=0}^{n} k \frac{n!}{k!\,(n-k)!} P^{n-k} Q^{k}

(the k = n term of the first sum and the k = 0 term of the second are both 0)

  = nP \log_2 \frac{1}{P} \sum_{k=0}^{n-1} \frac{(n-1)!}{k!\,(n-1-k)!} P^{n-1-k} Q^{k}
  + nQ \log_2 \frac{1}{Q} \sum_{k=1}^{n} \frac{(n-1)!}{(k-1)!\,(n-k)!} P^{n-k} Q^{k-1}

  = n \left( P \log_2 \frac{1}{P} + Q \log_2 \frac{1}{Q} \right)   (each remaining sum is (P + Q)^{n-1} = 1)

  = n H_2(P),

i.e. n times the W of a single bit. Since every b can be reached, |B| = 2^{n}, and

I(A; B) = H(B) - n H_2(P)
8.3
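The closed form W = n H_2(P) can be verified against the binomial sum directly; this sketch does so for one arbitrary choice of n and P (helper names are mine):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def block_W(n, P):
    """W for an n-bit block of a binary symmetric channel, summed term by term:
    sum over k of C(n, k) P^(n-k) Q^k [(n-k) log2(1/P) + k log2(1/Q)]."""
    Q = 1 - P
    return sum(math.comb(n, k) * P ** (n - k) * Q ** k
               * ((n - k) * math.log2(1 / P) + k * math.log2(1 / Q))
               for k in range(n + 1))

n, P = 8, 0.9
print(block_W(n, P), n * h2(P))   # both about 3.752: W = n * H_2(P)
```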
Error Correcting Code
[Diagram: each bit is encoded by triplicating it, the three copies pass through the noisy channel with per-bit matrix \begin{pmatrix} P & Q \\ Q & P \end{pmatrix}, and the decoder takes a majority vote. Think of encode + channel + decode together as the channel: original vs. new.]
prob. of no errors = P^3      }
prob. of 1 error   = 3P^2 Q   }  = probability of no error

prob. of 2 errors  = 3P Q^2   }
prob. of 3 errors  = Q^3      }  = probability of an error

The new (coded) channel matrix is

\begin{pmatrix} P^3 + 3P^2 Q & 3P Q^2 + Q^3 \\ 3P Q^2 + Q^3 & P^3 + 3P^2 Q \end{pmatrix}

let P' = P^3 + 3P^2 Q = P^2 (P + 3Q)
8.4
        uncoded                   coded
P       C(P)              P'        C(P')/3
.99     92%               .9997     33.2%
.9      53%               .972      27%
.8      28%               .896      17%
.7      12%               .784      8.2%
.6      3%                .648      2%
.51     .03%              .512      .014%
Shannon's Theorem will say that as n (= 3 here) → ∞, there are codes that take P' → 1 while C(P')/n → C(P).
8.4
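A short sketch recomputing the quantities compared above, P' = P²(P + 3Q) and C(P')/3, alongside C(P) for the same values of P (helper names are mine):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def capacity(p):
    """Capacity of a binary symmetric channel: C(p) = 1 - H_2(p)."""
    return 1 - h2(p)

print(f"{'P':>5}  {'C(P)':>7}  {'P_prime':>8}  {'C(P_prime)/3':>12}")
for P in (0.99, 0.9, 0.8, 0.7, 0.6, 0.51):
    Q = 1 - P
    P_prime = P ** 2 * (P + 3 * Q)   # probability the majority vote decodes correctly
    print(f"{P:>5}  {capacity(P):>7.2%}  {P_prime:>8.4f}  {capacity(P_prime) / 3:>12.3%}")
```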
