Information Theory: Principles and Applications
Tiago T. V. Vinhoza
March 19, 2010
1. Course Information
2. What is Information Theory?
3. Review of Probability Theory
4. Information Measures
Course Information
Information Theory: Principles and Applications
Prof. Tiago T. V. Vinhoza
Office: FEUP Building I, Room I322
Office hours: Wednesdays, 14h30-15h30.
Email: tiago.vinhoza@ieee.org
Prof. José Vieira
Prof. Paulo Jorge Ferreira
Course Information
Information Theory: Principles and Applications
http://paginas.fe.up.pt/vinhoza (link for Info Theory)
Homeworks
Other notes
My Evaluation: (Almost) Weekly Homeworks + Final Exam
References:
Elements of Information Theory, Cover and Thomas, Wiley
Information Theory and Reliable Communication, Gallager
Information Theory, Inference, and Learning Algorithms, MacKay
(available online)
What is Information Theory?
IT is a branch of mathematics (a strictly deductive system). (C. Shannon, "The Bandwagon")
A general statistical concept of communication. (N. Wiener, "What is Information Theory?")
It was built upon the work of Shannon (1948).
It answers two fundamental questions in communication theory:
What is the fundamental limit for information compression?
What is the fundamental limit on information transmission rate over a
communications channel?
What is Information Theory?
Mathematics: Inequalities
Computer Science: Kolmogorov Complexity
Statistics: Hypothesis Testing
Probability Theory: Limit Theorems
Engineering: Communications
Physics: Thermodynamics
Economics: Portfolio Theory
What is Information Theory?
Communications Systems
"The fundamental problem of communication is that of reproducing at one
point either exactly or approximately a message selected at another
point." (Claude Shannon, A Mathematical Theory of Communication, 1948)
What is Information Theory?
Digital Communications Systems
Source
Source Coder: Converts an analog or digital source into bits.
Channel Coder: Protects against errors/erasures in the channel.
Modulator: Assigns each binary sequence to a waveform.
Channel: Physical Medium to send information from transmitter to
receiver. Source of randomness.
Demodulator, Channel Decoder, Source Decoder, Sink.
What is Information Theory?
Digital Communications Systems
Modulator + Channel = Discrete Channel.
Binary Symmetric Channel.
Binary Erasure Channel.
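As an illustration (not part of the original slides), the following Python sketch simulates a binary symmetric channel and a binary erasure channel with hypothetical parameter values, and estimates the empirical crossover and erasure rates:

```python
import random

def bsc(bits, p):
    """Pass a list of bits through a binary symmetric channel
    that flips each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def bec(bits, eps):
    """Pass a list of bits through a binary erasure channel
    that erases each bit (returns None) with probability eps."""
    return [None if random.random() < eps else b for b in bits]

if __name__ == "__main__":
    random.seed(0)
    tx = [random.randint(0, 1) for _ in range(100_000)]
    rx = bsc(tx, p=0.1)
    errors = sum(a != b for a, b in zip(tx, rx))
    print("empirical crossover rate:", errors / len(tx))   # close to 0.1
    erased = sum(b is None for b in bec(tx, eps=0.25))
    print("empirical erasure rate:", erased / len(tx))     # close to 0.25
```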
Review of Probability Theory
Axiomatic Approach
Relative Frequency Approach
Review of Probability Theory
Axiomatic Approach
An application of a mathematical theory called measure theory.
It is based on a triplet (\Omega, \mathcal{F}, P), where:
\Omega is the sample space, the set of all possible outcomes.
\mathcal{F} is the \sigma-algebra, the set of all possible events (or
combinations of outcomes).
P is the probability function, a set function whose domain is \mathcal{F} and whose
range is the closed unit interval [0, 1]. It must obey the following rules:
P(\Omega) = 1.
For any event A in \mathcal{F}, P(A) \geq 0.
For any two events A and B in \mathcal{F} such that A \cap B = \emptyset,
P(A \cup B) = P(A) + P(B).
Review of Probability Theory
Axiomatic Approach: Other properties
Probability of the complement: P(\bar{A}) = 1 - P(A).
P(A) \leq 1.
P(\emptyset) = 0.
P(A \cup B) = P(A) + P(B) - P(A \cap B).
Review of Probability Theory
Conditional Probability
Let A and B be two events, with P(A) > 0. The conditional
probability of B given A is defined as:
P(B|A) = \frac{P(A \cap B)}{P(A)}
Hence, P(A \cap B) = P(B|A)P(A) = P(A|B)P(B).
If A \cap B = \emptyset, then P(B|A) = 0.
If A \subseteq B, then P(B|A) = 1.
Review of Probability Theory
Bayes' Rule
If A and B are events with P(B) > 0:
P(A|B) = \frac{P(B|A)P(A)}{P(B)}
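A quick numerical illustration (not from the original slides), using a hypothetical diagnostic-test scenario with made-up probabilities; it also uses the total probability theorem from the next slide to obtain P(B):

```python
# Hypothetical numbers: a condition with prior P(A) = 0.01, a test that
# detects it with P(B|A) = 0.95 and false-alarms with P(B|not A) = 0.05.
p_A = 0.01
p_B_given_A = 0.95
p_B_given_notA = 0.05

# Total probability theorem gives P(B).
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' rule: P(A|B) = P(B|A) P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(f"P(A|B) = {p_A_given_B:.3f}")  # roughly 0.161
```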
Review of Probability Theory
Total Probability Theorem
A set of events B_i, i = 1, \ldots, n, is a partition of \Omega when:
\bigcup_{i=1}^{n} B_i = \Omega.
B_i \cap B_j = \emptyset, if i \neq j.
Theorem: If A is an event and B_i, i = 1, \ldots, n, is a partition of \Omega,
then:
P(A) = \sum_{i=1}^{n} P(A \cap B_i) = \sum_{i=1}^{n} P(A|B_i) P(B_i)
Review of Probability Theory
Independence between Events
Two events A and B are statistically independent when
P(A \cap B) = P(A)P(B)
Supposing that both P(A) and P(B) are greater than zero, from the
above definition we have that:
P(A|B) = P(A) and P(B|A) = P(B)
Independent events and mutually exclusive events are different!
Review of Probability Theory
Independence between Events
N events are statistically independent if the intersection of the events
contained in any subset of those N events has probability equal to
the product of the individual probabilities.
Example: Three events A, B and C are independent if:
P(A \cap B) = P(A)P(B), P(A \cap C) = P(A)P(C), P(B \cap C) = P(B)P(C)
P(A \cap B \cap C) = P(A)P(B)P(C)
Review of Probability Theory
Random Variables
A random variable (rv) is a function that maps each outcome \omega \in \Omega to a
real number.
X : \Omega \to \mathbb{R}
\omega \mapsto X(\omega)
Through a random variable, subsets of \Omega are mapped to subsets
(intervals) of the real numbers.
P(X \in I) = P(\{\omega \mid X(\omega) \in I\})
Review of Probability Theory
Random Variables
A real random variable is a function whose domain is \Omega and such that
for every real number x, the set A_x = \{\omega \mid X(\omega) \leq x\} is an event.
P(\{\omega \mid X(\omega) = \pm\infty\}) = 0.
Review of Probability Theory
Cumulative Distribution Function
F_X : \mathbb{R} \to [0, 1]
x \mapsto F_X(x) = P(X \leq x) = P(\{\omega \mid X(\omega) \leq x\})
F_X(+\infty) = 1
F_X(-\infty) = 0
If x_1 < x_2, then F_X(x_2) \geq F_X(x_1).
F_X(x^+) = \lim_{\epsilon \to 0} F_X(x + \epsilon) = F_X(x) (continuous from the right).
F_X(x) - F_X(x^-) = P(X = x).
Review of Probability Theory
Types of Random Variables
Discrete: Cumulative distribution function is a step function (sum of unit
step functions)
F_X(x) = \sum_{i} P(X = x_i) u(x - x_i)
where u(x) is the unit step function.
Example: X is the random variable that describes the outcome of the
roll of a die, X \in \{1, 2, 3, 4, 5, 6\}.
Review of Probability Theory
Types of Random Variables
Continuous: Cumulative distribution function is a continuous function.
Mixed: Neither discrete nor continuous.
Review of Probability Theory
Probability Density Function
It is the derivative of the cumulative distribution function:
p_X(x) = \frac{d}{dx} F_X(x)
\int_{-\infty}^{x} p_X(u)\,du = F_X(x).
p_X(x) \geq 0.
\int_{-\infty}^{+\infty} p_X(x)\,dx = 1.
\int_{a}^{b} p_X(x)\,dx = F_X(b) - F_X(a) = P(a \leq X \leq b).
P(X \in I) = \int_{I} p_X(x)\,dx, \quad I \subseteq \mathbb{R}.
Review of Probability Theory
Discrete Random Variables
Let us now focus only on discrete random variables.
Let X be a random variable with sample space (alphabet) \mathcal{X}.
The probability mass function (probability distribution function) of X
is a mapping p_X(x) : \mathcal{X} \to [0, 1] satisfying:
\sum_{x \in \mathcal{X}} p_X(x) = 1
The number p_X(x) := P(X = x).
Review of Probability Theory
Discrete Random Vectors
Let Z = [X, Y] be a random vector with sample space \mathcal{Z} = \mathcal{X} \times \mathcal{Y}.
The joint probability mass function (probability distribution function)
of Z is a mapping p_Z(z) : \mathcal{Z} \to [0, 1] satisfying:
\sum_{z \in \mathcal{Z}} p_Z(z) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p_{XY}(x, y) = 1
The number p_Z(z) := p_{XY}(x, y) = P(Z = z) = P(X = x, Y = y).
Review of Probability Theory
Discrete Random Vectors
Marginal Distributions
p_X(x) = \sum_{y \in \mathcal{Y}} p_{XY}(x, y)
p_Y(y) = \sum_{x \in \mathcal{X}} p_{XY}(x, y)
Review of Probability Theory
Discrete Random Vectors
Conditional Distributions
p_{X|Y=y}(x) = \frac{p_{XY}(x, y)}{p_Y(y)}
p_{Y|X=x}(y) = \frac{p_{XY}(x, y)}{p_X(x)}
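For concreteness (this example is not in the original slides), here is a short Python sketch that stores a small hypothetical joint PMF as a dictionary and computes the marginals and a conditional distribution from it:

```python
from collections import defaultdict

# Hypothetical joint PMF p_XY(x, y) over X in {0, 1} and Y in {'a', 'b'}.
p_xy = {(0, 'a'): 0.3, (0, 'b'): 0.2,
        (1, 'a'): 0.1, (1, 'b'): 0.4}
assert abs(sum(p_xy.values()) - 1.0) < 1e-12  # must sum to 1

# Marginals: p_X(x) = sum_y p_XY(x, y), p_Y(y) = sum_x p_XY(x, y).
p_x, p_y = defaultdict(float), defaultdict(float)
for (x, y), p in p_xy.items():
    p_x[x] += p
    p_y[y] += p

# Conditional: p_{X|Y=y}(x) = p_XY(x, y) / p_Y(y).
def conditional_x_given_y(y):
    return {x: p_xy[(x, y)] / p_y[y] for x in p_x}

print(dict(p_x))                   # {0: 0.5, 1: 0.5}
print(dict(p_y))                   # {'a': 0.4, 'b': 0.6}
print(conditional_x_given_y('a'))  # {0: 0.75, 1: 0.25}
```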
Review of Probability Theory
Discrete Random Vectors
Random variables X and Y are independent if and only if
p_{XY}(x, y) = p_X(x) p_Y(y)
Consequences:
p_{X|Y=y}(x) = p_X(x)
p_{Y|X=x}(y) = p_Y(y)
Review of Probability Theory
Moments of a Discrete Random Variable
The nth-order moment of a discrete random variable X is defined as:
E[X^n] = \sum_{x \in \mathcal{X}} x^n p_X(x)
If n = 1, we have the mean of X, m_X = E[X].
The mth-order central moment of a discrete random variable X is
defined as:
E[(X - m_X)^m] = \sum_{x \in \mathcal{X}} (x - m_X)^m p_X(x)
If m = 2, we have the variance of X, \sigma_X^2.
Review of Probability Theory
Moments of a Discrete Random Vector
The joint moment of order n in X and order k in Y:
m_{nk} = E[X^n Y^k] = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} x^n y^k p_{XY}(x, y)
The joint central moment of order n in X and order k in Y:
\mu_{nk} = E[(X - m_X)^n (Y - m_Y)^k] = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} (x - m_X)^n (y - m_Y)^k p_{XY}(x, y)
Review of Probability Theory
Correlation and Covariance
The correlation of two random variables X and Y is the expected value
of their product (joint moment of order 1 in X and order 1 in Y):
Corr(X, Y) = m_{11} = E[XY]
The covariance of two random variables X and Y is the joint central
moment of order 1 in X and order 1 in Y:
Cov(X, Y) = \mu_{11} = E[(X - m_X)(Y - m_Y)]
Cov(X, Y) = Corr(X, Y) - m_X m_Y
Correlation coefficient:
\rho_{XY} = \frac{Cov(X, Y)}{\sigma_X \sigma_Y}, \quad -1 \leq \rho_{XY} \leq 1
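As a small check of these definitions (not part of the original slides), the sketch below computes the means, covariance, and correlation coefficient directly from a hypothetical joint PMF:

```python
import math

# Hypothetical joint PMF over X in {0, 1} and Y in {0, 1}.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def expect(f):
    """E[f(X, Y)] under the joint PMF."""
    return sum(p * f(x, y) for (x, y), p in p_xy.items())

m_x = expect(lambda x, y: x)                      # mean of X
m_y = expect(lambda x, y: y)                      # mean of Y
corr = expect(lambda x, y: x * y)                 # correlation E[XY]
cov = expect(lambda x, y: (x - m_x) * (y - m_y))  # covariance
var_x = expect(lambda x, y: (x - m_x) ** 2)
var_y = expect(lambda x, y: (y - m_y) ** 2)
rho = cov / math.sqrt(var_x * var_y)              # correlation coefficient

print(cov, corr - m_x * m_y)  # both 0.15: Cov = Corr - m_X m_Y
print(rho)                    # 0.6, within [-1, 1]
```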
Information Measures
What is Information?
It is a measure that quantifies the uncertainty of an event with given
probability (Shannon, 1948).
For a discrete source with finite alphabet \mathcal{X} = \{x_0, x_1, \ldots, x_{M-1}\},
where the probability of each symbol is given by P(X = x_k) = p_k:
I(x_k) = \log \frac{1}{p_k} = -\log(p_k)
If the logarithm is base 2, information is given in bits.
Information Measures
What is Information?
It represents the surprise of seeing the outcome (a highly probable
outcome is not surprising).
event                                    probability      surprise
one equals one                           1                0 bits
wrong guess on a 4-choice question       3/4              0.415 bits
correct guess on true-false question     1/2              1 bit
correct guess on a 4-choice question     1/4              2 bits
seven on a pair of dice                  6/36             2.58 bits
win any prize at Euromilhões             1/24             4.585 bits
win Euromilhões jackpot                  1/76 million     26 bits
gamma-ray burst mass extinction today    < 2.7 x 10^-12   > 38 bits
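The table entries can be reproduced with a one-line self-information function; this short sketch (not from the original slides) recomputes a few of them in bits:

```python
import math

def surprise(p):
    """Self-information I(x) = -log2(p) in bits."""
    return -math.log2(p)

for label, p in [("wrong guess on a 4-choice question", 3/4),
                 ("correct guess on a 4-choice question", 1/4),
                 ("seven on a pair of dice", 6/36),
                 ("win Euromilhões jackpot", 1/76e6)]:
    print(f"{label}: {surprise(p):.3f} bits")
```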
Information Measures
Entropy
Expected value of the information from a source:
H(X) = E[I(X)] = \sum_{x \in \mathcal{X}} p_X(x) I(x) = -\sum_{x \in \mathcal{X}} p_X(x) \log p_X(x)
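A minimal entropy function in Python (not from the original slides; it assumes the PMF is given as a list of probabilities and uses base-2 logarithms, so the result is in bits):

```python
import math

def entropy(pmf):
    """H(X) = -sum p log2 p, in bits; terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

print(entropy([1/6] * 6))    # fair die: log2(6), about 2.585 bits
print(entropy([0.5, 0.5]))   # fair coin: 1 bit
print(entropy([1.0, 0.0]))   # deterministic source: 0 bits
```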
Information Measures
Entropy of binary source
Let X be a binary source with p_0 and p_1 = 1 - p_0 being the probabilities
of symbols x_0 and x_1, respectively.
H(X) = -p_0 \log p_0 - p_1 \log p_1
     = -p_0 \log p_0 - (1 - p_0) \log(1 - p_0)
Information Measures
Entropy of binary source
[Figure: the binary entropy function H(X) plotted against p_0 on [0, 1]; it is 0 at p_0 = 0 and p_0 = 1 and reaches its maximum of 1 bit at p_0 = 0.5.]
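The plot can be regenerated with a few lines of Python; this sketch (not from the original slides) assumes numpy and matplotlib are available:

```python
import numpy as np
import matplotlib.pyplot as plt

p0 = np.linspace(1e-9, 1 - 1e-9, 500)  # avoid log2(0) at the endpoints
h = -p0 * np.log2(p0) - (1 - p0) * np.log2(1 - p0)

plt.plot(p0, h)
plt.xlabel("$p_0$")
plt.ylabel("$H(X)$ [bits]")
plt.title("Entropy of a binary source")
plt.grid(True)
plt.show()
```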
Information Measures
Joint Entropy
The joint entropy of a pair of random variables X and Y is given by:
H(X, Y) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p_{XY}(x, y) \log p_{XY}(x, y)
Information Measures
Conditional Entropy
Average amount of information of a random variable given the
occurrence of another.
H(X|Y) = \sum_{y \in \mathcal{Y}} p_Y(y) H(X|Y = y)
       = -\sum_{y \in \mathcal{Y}} p_Y(y) \sum_{x \in \mathcal{X}} p_{X|Y=y}(x) \log p_{X|Y=y}(x)
       = -\sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p_{XY}(x, y) \log p_{X|Y=y}(x)
Information Measures
Chain Rule of Entropy
The entropy of a pair of random variables is equal to the entropy of
one of them plus the conditional entropy of the other given the first.
H(X, Y) = H(X) + H(Y|X)
Corollary
H(X, Y|Z) = H(X|Z) + H(Y|X, Z)
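As a numerical check (not part of the original slides), the sketch below computes H(X), H(Y|X), and H(X, Y) from a small hypothetical joint PMF and verifies that H(X, Y) = H(X) + H(Y|X):

```python
import math

# Hypothetical joint PMF p_XY over X in {0, 1} and Y in {0, 1}.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def H(pmf):
    """Entropy in bits of a PMF given as an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}

H_xy = H(p_xy.values())   # joint entropy H(X, Y)
H_x = H(p_x.values())     # marginal entropy H(X)
# Conditional entropy H(Y|X) = -sum_{x,y} p(x, y) log2 p(y|x)
H_y_given_x = -sum(p * math.log2(p / p_x[x]) for (x, y), p in p_xy.items())

print(H_xy, H_x + H_y_given_x)  # the two numbers agree (chain rule)
```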
Information Measures
Chain Rule of Entropy: Generalization
H(X_1, X_2, \ldots, X_M) = \sum_{j=1}^{M} H(X_j | X_1, \ldots, X_{j-1})
Information Measures
Relative Entropy: Kullback-Leibler Distance
It is a measure of the distance between two distributions.
The relative entropy between two probability mass functions p_X(x)
and q_X(x) is defined as:
D(p_X || q_X) = \sum_{x \in \mathcal{X}} p_X(x) \log \frac{p_X(x)}{q_X(x)}
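A direct implementation of this definition (not from the original slides; it assumes both PMFs are lists over the same alphabet and that q(x) > 0 wherever p(x) > 0):

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) log2( p(x) / q(x) ), in bits."""
    return sum(px * math.log2(px / qx) for px, qx in zip(p, q) if px > 0)

p = [0.5, 0.25, 0.25]
q = [1/3, 1/3, 1/3]
print(kl_divergence(p, q))  # > 0
print(kl_divergence(q, p))  # a different value: D is not symmetric
print(kl_divergence(p, p))  # 0, equality iff p = q
```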
Information Measures
Relative Entropy: Kullback-Leibler Distance
D(p_X || q_X) \geq 0, with equality if and only if p_X(x) = q_X(x) for all x.
D(p_X || q_X) \neq D(q_X || p_X) in general (relative entropy is not symmetric).
Information Measures
Mutual Information
The mutual information of two random variables X and Y is defined
as the relative entropy between the joint probability mass function p_{XY}(x, y)
and the product of the marginals p_X(x) and p_Y(y):
I(X; Y) = D(p_{XY}(x, y) || p_X(x) p_Y(y))
        = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p_{XY}(x, y) \log \frac{p_{XY}(x, y)}{p_X(x) p_Y(y)}
Information Measures
Mutual Information: Relations with Entropy
Reduction in the uncertainty of X due to knowledge of Y:
I (X; Y) = H(X) H(X|Y)
Symmetry of the relation above: I (X; Y) = H(Y) H(Y|X)
Sum of entropies:
I (X; Y) = H(X) + H(Y) H(X, Y)
Self Mutual Information:
I (X; X) = H(X) H(X|X) = H(X)
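A short sketch (not from the original slides) computing I(X; Y) from a hypothetical joint PMF in two ways, directly from the definition and via H(X) + H(Y) - H(X, Y):

```python
import math

p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}  # hypothetical joint PMF
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

def H(pmf):
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# Definition: relative entropy between p_XY and the product of the marginals.
mi_def = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items())

# Entropy relation: I(X; Y) = H(X) + H(Y) - H(X, Y).
mi_ent = H(p_x.values()) + H(p_y.values()) - H(p_xy.values())

print(mi_def, mi_ent)  # both give the same (positive) value
```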
Information Measures
Mutual Information: Other Relations
Conditional Mutual Information:
I(X; Y|Z) = H(X|Z) - H(X|Y, Z)
Chain Rule for Mutual Information:
I(X_1, X_2, \ldots, X_M ; Y) = \sum_{j=1}^{M} I(X_j ; Y | X_1, \ldots, X_{j-1})
Information Measures
Convex and Concave Functions
A function f(\cdot) is convex over an interval (a, b) if for every
x_1, x_2 \in [a, b] and 0 \leq \lambda \leq 1:
f(\lambda x_1 + (1 - \lambda) x_2) \leq \lambda f(x_1) + (1 - \lambda) f(x_2)
A function f(\cdot) is convex over an interval (a, b) if its second derivative
is non-negative over that interval (a, b).
A function f(\cdot) is concave if -f(\cdot) is convex.
Examples of convex functions: x^2, |x|, e^x, and x \log x for x \geq 0.
Examples of concave functions: \log x and \sqrt{x}, for x \geq 0.
Information Measures
Jensen's Inequality
If f(\cdot) is a convex function and X is a random variable:
E[f(X)] \geq f(E[X])
Used to show that relative entropy and mutual information are
non-negative.
Also used to show that H(X) \leq \log |\mathcal{X}|.
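A quick numerical illustration (not from the original slides), using the convex function f(x) = x^2 and a hypothetical discrete PMF:

```python
# Hypothetical discrete X with values and probabilities.
values = [1, 2, 3, 4]
probs = [0.1, 0.2, 0.3, 0.4]

E_X = sum(p * x for p, x in zip(probs, values))      # E[X] = 3.0
E_fX = sum(p * x**2 for p, x in zip(probs, values))  # E[f(X)] with f(x) = x^2

print(E_fX, E_X**2)  # 10.0 >= 9.0, as Jensen's inequality predicts
```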
Information Measures
Log-Sum Inequality
For n positive numbers a_1, a_2, \ldots, a_n and b_1, b_2, \ldots, b_n:
\sum_{i=1}^{n} a_i \log \frac{a_i}{b_i} \geq \left( \sum_{i=1}^{n} a_i \right) \log \frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i}
with equality if and only if a_i / b_i = c (a constant) for all i.
This inequality is used to prove the convexity of the relative entropy
and the concavity of the entropy.
Convexity/concavity of mutual information.
Information Measures
Data Processing Inequality
Random variables X, Y, Z are said to form a Markov chain in that
order, X \to Y \to Z, if the conditional distribution of Z depends only
on Y and is conditionally independent of X:
p_{XYZ}(x, y, z) = p_X(x) p_{Y|X=x}(y) p_{Z|Y=y}(z)
If X \to Y \to Z, then
I(X; Y) \geq I(X; Z)
Let Z = g(Y). Since X \to Y \to g(Y), we have I(X; Y) \geq I(X; g(Y)).
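The inequality can be checked numerically; the sketch below (not part of the original slides) builds a hypothetical Markov chain X → Y → Z from two cascaded binary symmetric channels and compares I(X; Y) with I(X; Z):

```python
import math
from collections import defaultdict

def mutual_information(p_joint):
    """Mutual information in bits from a joint PMF given as {(a, b): prob}."""
    p_a, p_b = defaultdict(float), defaultdict(float)
    for (a, b), p in p_joint.items():
        p_a[a] += p
        p_b[b] += p
    return sum(p * math.log2(p / (p_a[a] * p_b[b]))
               for (a, b), p in p_joint.items() if p > 0)

def bsc_joint(p_in, eps):
    """Joint PMF of (input, output) of a BSC with crossover eps,
    given the input PMF p_in = {bit: prob}."""
    joint = defaultdict(float)
    for x, px in p_in.items():
        joint[(x, x)] += px * (1 - eps)
        joint[(x, 1 - x)] += px * eps
    return joint

p_x = {0: 0.5, 1: 0.5}              # uniform input
p_xy = bsc_joint(p_x, 0.1)          # X -> Y through a BSC(0.1)
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}
p_yz = bsc_joint(p_y, 0.1)          # Y -> Z through another BSC(0.1)

# Joint of (X, Z): sum over y of p(x, y) * p(z|y), with p(z|y) = p_yz[(y, z)] / p_y[y].
p_xz = defaultdict(float)
for (x, y), pxy in p_xy.items():
    for (yy, z), pyz in p_yz.items():
        if yy == y:
            p_xz[(x, z)] += pxy * pyz / p_y[y]

print(mutual_information(p_xy))  # I(X; Y)
print(mutual_information(p_xz))  # I(X; Z), no larger than I(X; Y)
```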
Information Measures
Fano's Inequality
Suppose we know a random variable Y and we wish to guess the value
of a correlated random variable X.
Fano's inequality relates the probability of error in guessing X from Y
to the conditional entropy H(X|Y).
Let \hat{X} = g(Y). If P_e = P(\hat{X} \neq X), then
H(P_e) + P_e \log(|\mathcal{X}| - 1) \geq H(X|Y)
where H(P_e) is the binary entropy function evaluated at P_e.
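A small numerical sanity check of the bound (not from the original slides), using a hypothetical symmetric channel where the guess is simply the observation:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical setup: X uniform on {0, 1, 2}, observed through a channel that
# reports X correctly with probability 0.8 and one of the other two symbols
# with probability 0.1 each; the guess is Xhat = Y, so P_e = 0.2.
M = 3
P_e = 0.2

# By symmetry, the posterior of X given any y is (0.8, 0.1, 0.1).
H_X_given_Y = -(0.8 * math.log2(0.8) + 2 * 0.1 * math.log2(0.1))

fano_bound = h2(P_e) + P_e * math.log2(M - 1)
# Here the two values coincide: the bound is tight when, given an error,
# X is uniform over the remaining |X| - 1 symbols.
print(H_X_given_Y, fano_bound)   # H(X|Y) <= bound, as Fano's inequality states
```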