
Elements of Network Information Theory

Abbas El Gamal and Young-Han Kim


Stanford University and UC San Diego
Tutorial, ISIT 2011

Slides available at http://isl.stanford.edu/abbas

Introduction

Networked Information Processing System

[Figure: a communication network connecting source and destination nodes]

- System: Internet, peer-to-peer network, sensor network, ...
- Sources: data, speech, music, images, video, sensor data
- Nodes: handsets, base stations, processors, servers, sensor nodes, ...
- Network: wired, wireless, or a hybrid of the two
- Task: communicate the sources, or compute/make a decision based on them

Network Information Flow Questions

[Figure: a communication network connecting a sender and a receiver]

- What is the limit on the amount of communication needed?
- What are the coding schemes/techniques that achieve this limit?
- Shannon (1948): noisy point-to-point communication
- Ford–Fulkerson, Elias–Feinstein–Shannon (1956): graphical unicast networks

Network Information Theory


- The simplistic model of a network as a graph with point-to-point links and forwarding nodes does not capture many important aspects of real-world networks:
  - Networked systems have multiple sources and destinations
  - The network task is often to compute a function or to make a decision
  - Many networks allow for feedback and interactive communication
  - The wireless medium is a shared broadcast medium
  - Network security is often a primary concern
  - Source–channel separation does not hold for networks
  - Data arrival and network topology evolve dynamically
- Network information theory aims to answer the information flow questions while capturing some of these aspects of real-world networks


Brief History
- First paper: Shannon (1961), "Two-way communication channels"
  - He didn't find the optimal rates (capacity region)
  - The problem remains open
- Significant research activity in the '70s and early '80s with many new results and techniques, but
  - Many basic problems remained open
  - Little interest from information and communication theorists
- Wireless communications and the Internet revived interest in the mid '90s
- Some progress on old open problems, and many new models and problems
- Coding techniques, such as successive cancellation, superposition, Slepian–Wolf, Wyner–Ziv, successive refinement, writing on dirty paper, and network coding, are beginning to impact real-world networks


Network Information Theory Book


- The book provides comprehensive coverage of key results, techniques, and open problems in network information theory
- The organization balances the introduction of new techniques and new models
- The focus is on discrete memoryless and Gaussian network models
- We discuss extensions (if any) to many users and large networks
- The proofs use elementary tools and techniques
- We use clean and unified notation and terminology


Book Organization
Part I. Preliminaries (Chapters 2, 3): review of basic information measures, typicality, Shannon's theorems; introduction of key lemmas

Part II. Single-hop networks (Chapters 4 to 14): networks with single-round, one-way communication
- Independent messages over noisy channels
- Correlated (uncompressed) sources over noiseless links
- Correlated sources over noisy channels

Part III. Multihop networks (Chapters 15 to 20): networks with relaying and multiple communication rounds
- Independent messages over graphical networks
- Independent messages over general networks
- Correlated sources over graphical networks

Part IV. Extensions (Chapters 21 to 24): extensions to distributed computing, secrecy, wireless fading channels, and information theory and networking

Tutorial Objectives
- Focus on an elementary and unified approach to coding schemes:
  - Typicality and simple universal lemmas for DM models
  - Lossless source coding as a corollary of lossy source coding
  - Extending achievability proofs from DM to Gaussian models
- Illustrate the approach through proofs of several classical coding theorems


Outline
1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding

10-minute break

6. WynerZiv Coding
7. GelfandPinsker Coding
8. Wiretap Channel

10-minute break

9. Relay Channel
10. Multicast Network
Typical Sequences
- Empirical pmf (or type) of $x^n \in \mathcal{X}^n$:
  \[ \pi(x \mid x^n) = \frac{|\{i : x_i = x\}|}{n} \quad \text{for } x \in \mathcal{X} \]
- Typical set (Orlitsky–Roche 2001): For $X \sim p(x)$ and $\epsilon > 0$,
  \[ T_\epsilon^{(n)}(X) = \{ x^n : |\pi(x \mid x^n) - p(x)| \le \epsilon\, p(x) \text{ for all } x \in \mathcal{X} \} = T_\epsilon^{(n)} \]

Typical Average Lemma
Let $x^n \in T_\epsilon^{(n)}(X)$ and $g(x) \ge 0$. Then
\[ (1-\epsilon)\, \mathrm{E}(g(X)) \le \frac{1}{n} \sum_{i=1}^n g(x_i) \le (1+\epsilon)\, \mathrm{E}(g(X)) \]
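A minimal numerical sketch of this definition (not from the tutorial; the pmf, $\epsilon$, $n$, and trial count below are arbitrary choices, and numpy is assumed): by the LLN, an i.i.d. sequence falls in $T_\epsilon^{(n)}$ with probability approaching 1.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])     # source pmf p(x) (arbitrary choice)
eps, n, trials = 0.1, 10_000, 1_000

def is_typical(xn, p, eps):
    """x^n is in T_eps^(n) iff |pi(x|x^n) - p(x)| <= eps * p(x) for all x."""
    pi = np.bincount(xn, minlength=len(p)) / len(xn)
    return bool(np.all(np.abs(pi - p) <= eps * p))

hits = sum(is_typical(rng.choice(len(p), size=n, p=p), p, eps)
           for _ in range(trials))
print(f"empirical P{{X^n in T_eps^(n)}} = {hits / trials:.3f}")  # close to 1
```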

Properties of Typical Sequences

- Let $x^n \in T_\epsilon^{(n)}(X)$ and $p(x^n) = \prod_{i=1}^n p_X(x_i)$. Then
  \[ 2^{-n(H(X)+\delta(\epsilon))} \le p(x^n) \le 2^{-n(H(X)-\delta(\epsilon))}, \]
  where $\delta(\epsilon) \to 0$ as $\epsilon \to 0$ (notation: $p(x^n) \doteq 2^{-nH(X)}$)
- $|T_\epsilon^{(n)}(X)| \doteq 2^{nH(X)}$ for $n$ sufficiently large
- Let $X^n \sim \prod_{i=1}^n p_X(x_i)$. Then, by the LLN, $\lim_{n\to\infty} \mathrm{P}\{X^n \in T_\epsilon^{(n)}\} = 1$

[Figure: the typical set $T_\epsilon^{(n)}(X)$ inside $\mathcal{X}^n$; $\mathrm{P}\{X^n \in T_\epsilon^{(n)}\} \to 1$, $|T_\epsilon^{(n)}| \doteq 2^{nH(X)}$, and each typical $x^n$ has $p(x^n) \doteq 2^{-nH(X)}$]

Jointly Typical Sequences

- Joint type of $(x^n, y^n) \in \mathcal{X}^n \times \mathcal{Y}^n$:
  \[ \pi(x, y \mid x^n, y^n) = \frac{|\{i : (x_i, y_i) = (x, y)\}|}{n} \quad \text{for } (x, y) \in \mathcal{X} \times \mathcal{Y} \]
- Jointly typical set: For $(X, Y) \sim p(x, y)$ and $\epsilon > 0$,
  \[ T_\epsilon^{(n)}(X, Y) = \{ (x^n, y^n) : |\pi(x, y \mid x^n, y^n) - p(x, y)| \le \epsilon\, p(x, y) \text{ for all } (x, y) \} = T_\epsilon^{(n)}((X, Y)) \]
- Let $(x^n, y^n) \in T_\epsilon^{(n)}(X, Y)$ and $p(x^n, y^n) = \prod_{i=1}^n p_{X,Y}(x_i, y_i)$. Then
  - $x^n \in T_\epsilon^{(n)}(X)$ and $y^n \in T_\epsilon^{(n)}(Y)$
  - $p(x^n) \doteq 2^{-nH(X)}$, $p(y^n) \doteq 2^{-nH(Y)}$, and $p(x^n, y^n) \doteq 2^{-nH(X,Y)}$
  - $p(x^n \mid y^n) \doteq 2^{-nH(X|Y)}$ and $p(y^n \mid x^n) \doteq 2^{-nH(Y|X)}$

Conditionally Typical Sequences

- Conditionally typical set: For $x^n \in \mathcal{X}^n$,
  \[ T_\epsilon^{(n)}(Y \mid x^n) = \{ y^n : (x^n, y^n) \in T_\epsilon^{(n)}(X, Y) \} \]
- $|T_\epsilon^{(n)}(Y \mid x^n)| \le 2^{n(H(Y|X)+\delta(\epsilon))}$

Conditional Typicality Lemma
Let $(X, Y) \sim p(x, y)$. If $x^n \in T_{\epsilon'}^{(n)}(X)$ and $Y^n \sim \prod_{i=1}^n p_{Y|X}(y_i \mid x_i)$, then for $\epsilon > \epsilon'$,
\[ \lim_{n\to\infty} \mathrm{P}\{ (x^n, Y^n) \in T_\epsilon^{(n)}(X, Y) \} = 1 \]

- If $x^n \in T_{\epsilon'}^{(n)}(X)$ and $\epsilon > \epsilon'$, then for $n$ sufficiently large,
  \[ |T_\epsilon^{(n)}(Y \mid x^n)| \ge 2^{n(H(Y|X)-\delta(\epsilon))} \]
- Let $X \sim p(x)$, $Y = g(X)$, and $x^n \in T_\epsilon^{(n)}(X)$. Then
  $y^n \in T_\epsilon^{(n)}(Y \mid x^n)$ iff $y_i = g(x_i)$, $i \in [1:n]$

Illustration of Joint Typicality

[Figure: the sets $T_\epsilon^{(n)}(X)$ (size $\doteq 2^{nH(X)}$) and $T_\epsilon^{(n)}(Y)$ (size $\doteq 2^{nH(Y)}$), the jointly typical set $T_\epsilon^{(n)}(X, Y)$ (size $\doteq 2^{nH(X,Y)}$), and the conditional slices $T_\epsilon^{(n)}(Y \mid x^n)$ and $T_\epsilon^{(n)}(X \mid y^n)$]

Another Illustration of Joint Typicality

[Figure: joint typicality as a bipartite picture between $\mathcal{X}^n$ and $\mathcal{Y}^n$; each $x^n \in T_\epsilon^{(n)}(X)$ is connected to its conditionally typical set $T_\epsilon^{(n)}(Y \mid x^n) \subseteq T_\epsilon^{(n)}(Y)$ of size $\doteq 2^{nH(Y|X)}$]

Joint Typicality for Random Triples

- Let $(X, Y, Z) \sim p(x, y, z)$. The set of typical sequences is $T_\epsilon^{(n)}(X, Y, Z) = T_\epsilon^{(n)}((X, Y, Z))$

Joint Typicality Lemma
Let $(X, Y, Z) \sim p(x, y, z)$ and $\epsilon' < \epsilon$. Then for some $\delta(\epsilon) \to 0$ as $\epsilon \to 0$:
- If $(\tilde{x}^n, \tilde{y}^n)$ is arbitrary and $\tilde{Z}^n \sim \prod_{i=1}^n p_{Z|X}(\tilde{z}_i \mid \tilde{x}_i)$, then
  \[ \mathrm{P}\{ (\tilde{x}^n, \tilde{y}^n, \tilde{Z}^n) \in T_\epsilon^{(n)}(X, Y, Z) \} \le 2^{-n(I(Y;Z|X)-\delta(\epsilon))} \]
- If $(x^n, y^n) \in T_{\epsilon'}^{(n)}$ and $Z^n \sim \prod_{i=1}^n p_{Z|X}(z_i \mid x_i)$, then for $n$ sufficiently large,
  \[ \mathrm{P}\{ (x^n, y^n, Z^n) \in T_\epsilon^{(n)}(X, Y, Z) \} \ge 2^{-n(I(Y;Z|X)+\delta(\epsilon))} \]

Summary

1. Typical Sequences
   - Typical average lemma
   - Conditional typicality lemma
   - Joint typicality lemma
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

Point-to-Point Communication

Discrete Memoryless Channel (DMC)

- Point-to-point communication system:
  $M \to$ Encoder $\to X^n \to p(y|x) \to Y^n \to$ Decoder $\to \hat{M}$
- Assume a discrete memoryless channel (DMC) model $(\mathcal{X}, p(y|x), \mathcal{Y})$
  - Discrete: finite alphabets
  - Memoryless: when used over $n$ transmissions with message $M$ and input $X^n$,
    $p(y_i \mid x^i, y^{i-1}, m) = p_{Y|X}(y_i \mid x_i)$
  - When used without feedback, $p(y^n \mid x^n, m) = \prod_{i=1}^n p_{Y|X}(y_i \mid x_i)$
- A $(2^{nR}, n)$ code for the DMC:
  - Message set $[1 : 2^{nR}] = \{1, 2, \ldots, 2^{nR}\}$
  - Encoder: a codeword $x^n(m)$ for each $m \in [1 : 2^{nR}]$
  - Decoder: an estimate $\hat{m}(y^n) \in [1 : 2^{nR}] \cup \{e\}$ for each $y^n$

- Assume $M \sim \mathrm{Unif}[1 : 2^{nR}]$
- Average probability of error: $P_e^{(n)} = \mathrm{P}\{\hat{M} \ne M\}$
- Assume a cost $b(x) \ge 0$ with $b(x_0) = 0$
- Average cost constraint:
  \[ \sum_{i=1}^n b(x_i(m)) \le nB \quad \text{for every } m \in [1 : 2^{nR}] \]
- $R$ achievable if there exist $(2^{nR}, n)$ codes that satisfy the cost constraint with $\lim_{n\to\infty} P_e^{(n)} = 0$
- The capacity–cost function $C(B)$ of the DMC $p(y|x)$ with average cost constraint $B$ on $X$ is the supremum of all achievable rates

Channel Coding Theorem (Shannon 1948)
\[ C(B) = \max_{p(x) :\, \mathrm{E}(b(X)) \le B} I(X; Y) \]
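As a concrete check of the theorem (in the unconstrained case, without a cost), the sketch below evaluates $C = \max_{p(x)} I(X; Y)$ for a binary symmetric channel by a grid search over input pmfs; the channel, crossover probability, and grid resolution are arbitrary choices, not from the tutorial.

```python
import numpy as np

def xlog2(v):
    """Elementwise v*log2(v) with the convention 0*log(0) = 0."""
    out = np.zeros_like(v, dtype=float)
    mask = v > 0
    out[mask] = v[mask] * np.log2(v[mask])
    return out

def mutual_information(px, W):
    """I(X;Y) in bits, with channel matrix W[x, y] = p(y|x)."""
    py = px @ W
    H_Y = -xlog2(py).sum()
    H_Y_given_X = -(px[:, None] * xlog2(W)).sum()
    return H_Y - H_Y_given_X

p = 0.1                                       # BSC crossover probability
W = np.array([[1 - p, p], [p, 1 - p]])
C = max(mutual_information(np.array([a, 1 - a]), W)
        for a in np.linspace(0.001, 0.999, 999))
print(f"grid-search C ~ {C:.4f} bits; closed form 1 - H2(p) = "
      f"{1 + (1 - p) * np.log2(1 - p) + p * np.log2(p):.4f}")
```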

Proof of Achievability

We use random coding and joint typicality decoding.

Codebook generation:
- Fix $p(x)$ that attains $C(B/(1+\epsilon))$
- Randomly and independently generate $2^{nR}$ sequences $x^n(m) \sim \prod_{i=1}^n p_X(x_i)$, $m \in [1 : 2^{nR}]$

Encoding:
- To send message $m$, the encoder transmits $x^n(m)$ if $x^n(m) \in T_\epsilon^{(n)}$
  (by the typical average lemma, $\sum_{i=1}^n b(x_i(m)) \le nB$)
- Otherwise it transmits $(x_0, \ldots, x_0)$

Decoding:
- The decoder declares that $\hat{m}$ is sent if it is the unique message such that $(x^n(\hat{m}), y^n) \in T_\epsilon^{(n)}$
- Otherwise it declares an error

Analysis of the Probability of Error

- Consider the probability of error $\mathrm{P}(\mathcal{E})$ averaged over $M$ and codebooks
- Assume $M = 1$ (by the symmetry of codebook generation)
- The decoder makes an error iff one or both of the following events occur:
  \[ \mathcal{E}_1 = \{ (X^n(1), Y^n) \notin T_\epsilon^{(n)} \}, \qquad \mathcal{E}_2 = \{ (X^n(m), Y^n) \in T_\epsilon^{(n)} \text{ for some } m \ne 1 \} \]
- Thus, by the union of events bound,
  \[ \mathrm{P}(\mathcal{E}) = \mathrm{P}(\mathcal{E} \mid M = 1) = \mathrm{P}(\mathcal{E}_1 \cup \mathcal{E}_2) \le \mathrm{P}(\mathcal{E}_1) + \mathrm{P}(\mathcal{E}_2) \]

Analysis of the Probability of Error

- Consider the first term:
\begin{align*}
\mathrm{P}(\mathcal{E}_1) &= \mathrm{P}\{ (X^n(1), Y^n) \notin T_\epsilon^{(n)} \} \\
&= \mathrm{P}\{ X^n(1) \in T_{\epsilon'}^{(n)},\, (X^n(1), Y^n) \notin T_\epsilon^{(n)} \} + \mathrm{P}\{ X^n(1) \notin T_{\epsilon'}^{(n)},\, (X^n(1), Y^n) \notin T_\epsilon^{(n)} \} \\
&\le \sum_{x^n \in T_{\epsilon'}^{(n)}} \prod_{i=1}^n p_X(x_i) \sum_{y^n \notin T_\epsilon^{(n)}(Y \mid x^n)} \prod_{i=1}^n p_{Y|X}(y_i \mid x_i) + \mathrm{P}\{ X^n(1) \notin T_{\epsilon'}^{(n)} \}
\end{align*}
- By the LLN, each term $\to 0$ as $n \to \infty$

Analysis of the Probability of Error

- Consider the second term:
  \[ \mathrm{P}(\mathcal{E}_2) = \mathrm{P}\{ (X^n(m), Y^n) \in T_\epsilon^{(n)} \text{ for some } m \ne 1 \} \]
- For $m \ne 1$, $X^n(m) \sim \prod_{i=1}^n p_X(x_i)$, independent of $Y^n \sim \prod_{i=1}^n p_Y(y_i)$

[Figure: the wrong codewords $X^n(2), \ldots, X^n(m), \ldots$ are generated independently of $Y^n$, so each is jointly typical with $Y^n$ only with small probability]

- To bound $\mathrm{P}(\mathcal{E}_2)$, we use the packing lemma

Packing Lemma

- Let $(U, X, Y) \sim p(u, x, y)$
- Let $(\tilde{U}^n, \tilde{Y}^n) \sim p(\tilde{u}^n, \tilde{y}^n)$ be arbitrarily distributed
- Let $X^n(m) \sim \prod_{i=1}^n p_{X|U}(x_i \mid \tilde{u}_i)$, $m \in \mathcal{A}$, where $|\mathcal{A}| \le 2^{nR}$, be pairwise conditionally independent of $\tilde{Y}^n$ given $\tilde{U}^n$

Packing Lemma
There exists $\delta(\epsilon) \to 0$ as $\epsilon \to 0$ such that
\[ \lim_{n\to\infty} \mathrm{P}\{ (\tilde{U}^n, X^n(m), \tilde{Y}^n) \in T_\epsilon^{(n)} \text{ for some } m \in \mathcal{A} \} = 0 \]
if $R < I(X; Y \mid U) - \delta(\epsilon)$

Analysis of the Probability of Error

- Consider the second term: $\mathrm{P}(\mathcal{E}_2) = \mathrm{P}\{ (X^n(m), Y^n) \in T_\epsilon^{(n)} \text{ for some } m \ne 1 \}$
- For $m \ne 1$, $X^n(m) \sim \prod_{i=1}^n p_X(x_i)$, independent of $Y^n \sim \prod_{i=1}^n p_Y(y_i)$
- Hence, by the packing lemma with $\mathcal{A} = [2 : 2^{nR}]$ and $U = \emptyset$, $\mathrm{P}(\mathcal{E}_2) \to 0$ if
  \[ R < I(X; Y) - \delta(\epsilon) = C(B/(1+\epsilon)) - \delta(\epsilon) \]
- Since $\mathrm{P}(\mathcal{E}) \to 0$ as $n \to \infty$, there must exist a sequence of $(2^{nR}, n)$ codes with $\lim_{n\to\infty} P_e^{(n)} = 0$ if $R < C(B/(1+\epsilon)) - \delta(\epsilon)$
- By the continuity of $C(B)$ in $B$, $C(B/(1+\epsilon)) \to C(B)$ as $\epsilon \to 0$, which implies the achievability of every rate $R < C(B)$

Gaussian Channel

- Discrete-time additive white Gaussian noise channel:
  \[ Y = gX + Z \]
  - $g$: channel gain (path loss)
  - $\{Z_i\}$: WGN($N_0/2$) process, independent of $M$
- Average power constraint: $\sum_{i=1}^n x_i^2(m) \le nP$ for every $m$
- Assume $N_0/2 = 1$ and label the received power $g^2 P$ as $S$ (SNR)

Theorem (Shannon 1948)
\[ C = \max_{F(x) :\, \mathrm{E}(X^2) \le P} I(X; Y) = \frac{1}{2} \log(1 + S) \]

Proof of Achievability

- We extend the proof for the DMC using a discretization procedure (McEliece 1977)
- First note that the capacity is attained by $X \sim \mathrm{N}(0, P)$, i.e., $I(X; Y) = C$
- Let $[X]_j$ be a finite quantization of $X$ such that $\mathrm{E}([X]_j^2) \le \mathrm{E}(X^2) = P$ and $[X]_j \to X$ in distribution
- Let $Y_j = g[X]_j + Z$ and $[Y_j]_k$ be a finite quantization of $Y_j$
- By the achievability proof for the DMC, $I([X]_j; [Y_j]_k)$ is achievable for every $j, k$
- By the data processing inequality and the maximum differential entropy lemma,
  \[ I([X]_j; [Y_j]_k) \le I([X]_j; Y_j) = h(Y_j) - h(Z) \le h(Y) - h(Z) = I(X; Y) \]
- By weak convergence and the dominated convergence theorem,
  \[ \liminf_{j\to\infty} \lim_{k\to\infty} I([X]_j; [Y_j]_k) = \liminf_{j\to\infty} I([X]_j; Y_j) \ge I(X; Y) \]
- Combining the two bounds, $I([X]_j; [Y_j]_k) \to I(X; Y)$ as $j, k \to \infty$
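The discretization argument can be watched numerically. The sketch below (a rough Monte Carlo plug-in estimate, not from the tutorial; the gain $g = 1$, bin range $[-4, 4]$, and sample size are arbitrary choices, and plug-in estimates carry a small bias) quantizes both $X \sim \mathrm{N}(0, P)$ and $Y = X + Z$ and shows $I([X]_j; [Y_j]_k)$ climbing toward $C = \frac{1}{2}\log(1+S)$.

```python
import numpy as np

rng = np.random.default_rng(0)
P, S, n = 1.0, 1.0, 2_000_000      # unit noise power, so S = P with g = 1

def quantized_mi(levels):
    """Plug-in estimate of I([X]; [Y]) with `levels` uniform bins on [-4, 4]."""
    x = rng.normal(0.0, np.sqrt(P), n)
    y = x + rng.normal(0.0, 1.0, n)            # Y = X + Z, Z ~ N(0, 1)
    edges = np.linspace(-4, 4, levels + 1)
    xi = np.clip(np.digitize(x, edges) - 1, 0, levels - 1)
    yi = np.clip(np.digitize(y, edges) - 1, 0, levels - 1)
    joint = np.zeros((levels, levels))
    np.add.at(joint, (xi, yi), 1.0)
    joint /= n
    px, py = joint.sum(1), joint.sum(0)
    mask = joint > 0
    return np.sum(joint[mask] *
                  np.log2(joint[mask] / (px[:, None] * py[None, :])[mask]))

for levels in (2, 4, 8, 16, 32):
    print(f"{levels:2d} levels: I ~ {quantized_mi(levels):.3f} bits")
print(f"C = {0.5 * np.log2(1 + S):.3f} bits")   # the limit being approached
```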

Summary

1. Typical Sequences
2. Point-to-Point Communication
   - Random coding
   - Joint typicality decoding
   - Packing lemma
   - Discretization procedure for Gaussian models
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

Multiple Access Channel

DM Multiple Access Channel (MAC)

- Multiple access communication system (uplink):
  $M_1 \to$ Encoder 1 $\to X_1^n$; $M_2 \to$ Encoder 2 $\to X_2^n$; $(X_1^n, X_2^n) \to p(y|x_1, x_2) \to Y^n \to$ Decoder $\to (\hat{M}_1, \hat{M}_2)$
- Assume a 2-sender DM-MAC model $(\mathcal{X}_1 \times \mathcal{X}_2, p(y|x_1, x_2), \mathcal{Y})$
- A $(2^{nR_1}, 2^{nR_2}, n)$ code for the DM-MAC:
  - Message sets: $[1 : 2^{nR_1}]$ and $[1 : 2^{nR_2}]$
  - Encoder $j = 1, 2$: $x_j^n(m_j)$
  - Decoder: $(\hat{m}_1(y^n), \hat{m}_2(y^n))$
- Assume $(M_1, M_2) \sim \mathrm{Unif}([1 : 2^{nR_1}] \times [1 : 2^{nR_2}])$, so $x_1^n(M_1)$ and $x_2^n(M_2)$ are independent
- Average probability of error: $P_e^{(n)} = \mathrm{P}\{ (\hat{M}_1, \hat{M}_2) \ne (M_1, M_2) \}$
- $(R_1, R_2)$ achievable if there exist $(2^{nR_1}, 2^{nR_2}, n)$ codes with $\lim_{n\to\infty} P_e^{(n)} = 0$
- Capacity region: closure of the set of achievable $(R_1, R_2)$

Theorem (Ahlswede 1971, Liao 1972, Slepian–Wolf 1973b)
The capacity region of the DM-MAC $p(y|x_1, x_2)$ is the set of rate pairs $(R_1, R_2)$ such that
\begin{align*}
R_1 &\le I(X_1; Y \mid X_2, Q), \\
R_2 &\le I(X_2; Y \mid X_1, Q), \\
R_1 + R_2 &\le I(X_1, X_2; Y \mid Q)
\end{align*}
for some pmf $p(q)p(x_1|q)p(x_2|q)$, where $Q$ is an auxiliary (time-sharing) r.v.

[Figure: the pentagonal capacity region in the $(R_1, R_2)$ plane, with corner points determined by $C_1$, $C_2$, and $C_{12}$]
- Individual capacities: $C_1 = \max_{p(x_1),\, x_2} I(X_1; Y \mid X_2 = x_2)$ and $C_2 = \max_{p(x_2),\, x_1} I(X_2; Y \mid X_1 = x_1)$
- Sum-capacity: $C_{12} = \max_{p(x_1)p(x_2)} I(X_1, X_2; Y)$
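A small sketch (not from the tutorial; numpy is assumed, and the binary adder MAC $Y = X_1 + X_2$ with uniform inputs is a standard textbook example chosen here for illustration) that evaluates the three mutual information bounds for a fixed product input pmf, with $Q$ held constant:

```python
import numpy as np

def H(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mac_bounds(p1, p2, W):
    """I(X1;Y|X2), I(X2;Y|X1), I(X1,X2;Y) for inputs p1(x1)p2(x2) and
    channel W[x1, x2, y] = p(y | x1, x2)."""
    HY_X1X2 = sum(p1[a] * p2[b] * H(W[a, b])
                  for a in range(len(p1)) for b in range(len(p2)))
    py_x2 = np.einsum('a,aby->by', p1, W)       # p(y | x2)
    py_x1 = np.einsum('b,aby->ay', p2, W)       # p(y | x1)
    py = np.einsum('b,by->y', p2, py_x2)
    HY_X2 = sum(p2[b] * H(py_x2[b]) for b in range(len(p2)))
    HY_X1 = sum(p1[a] * H(py_x1[a]) for a in range(len(p1)))
    return HY_X2 - HY_X1X2, HY_X1 - HY_X1X2, H(py) - HY_X1X2

# Binary adder MAC: Y = X1 + X2 in {0, 1, 2}, uniform inputs.
W = np.zeros((2, 2, 3))
for a in (0, 1):
    for b in (0, 1):
        W[a, b, a + b] = 1.0
u = np.array([0.5, 0.5])
print(mac_bounds(u, u, W))    # (1.0, 1.0, 1.5): individual vs. sum bounds
```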

Proof of Achievability (Han–Kobayashi 1981)

We use simultaneous decoding and coded time sharing.

Codebook generation:
- Fix $p(q)p(x_1|q)p(x_2|q)$
- Randomly generate a time-sharing sequence $q^n \sim \prod_{i=1}^n p_Q(q_i)$
- Randomly and conditionally independently generate $2^{nR_1}$ sequences $x_1^n(m_1) \sim \prod_{i=1}^n p_{X_1|Q}(x_{1i} \mid q_i)$, $m_1 \in [1 : 2^{nR_1}]$
- Similarly generate $2^{nR_2}$ sequences $x_2^n(m_2) \sim \prod_{i=1}^n p_{X_2|Q}(x_{2i} \mid q_i)$, $m_2 \in [1 : 2^{nR_2}]$

Encoding:
- To send $(m_1, m_2)$, transmit $x_1^n(m_1)$ and $x_2^n(m_2)$

Decoding:
- Find the unique message pair $(\hat{m}_1, \hat{m}_2)$ such that $(q^n, x_1^n(\hat{m}_1), x_2^n(\hat{m}_2), y^n) \in T_\epsilon^{(n)}$

Analysis of the Probability of Error

- Assume $(M_1, M_2) = (1, 1)$
- Joint pmfs induced by different $(m_1, m_2)$ (an asterisk denotes a wrong message):

  m1  m2  joint pmf
  1   1   $p(q^n)\,p(x_1^n|q^n)\,p(x_2^n|q^n)\,p(y^n|x_1^n, x_2^n, q^n)$
  *   1   $p(q^n)\,p(x_1^n|q^n)\,p(x_2^n|q^n)\,p(y^n|x_2^n, q^n)$
  1   *   $p(q^n)\,p(x_1^n|q^n)\,p(x_2^n|q^n)\,p(y^n|x_1^n, q^n)$
  *   *   $p(q^n)\,p(x_1^n|q^n)\,p(x_2^n|q^n)\,p(y^n|q^n)$

- We divide the error event into the following 4 events:
  - $\mathcal{E}_1 = \{ (Q^n, X_1^n(1), X_2^n(1), Y^n) \notin T_\epsilon^{(n)} \}$
  - $\mathcal{E}_2 = \{ (Q^n, X_1^n(m_1), X_2^n(1), Y^n) \in T_\epsilon^{(n)} \text{ for some } m_1 \ne 1 \}$
  - $\mathcal{E}_3 = \{ (Q^n, X_1^n(1), X_2^n(m_2), Y^n) \in T_\epsilon^{(n)} \text{ for some } m_2 \ne 1 \}$
  - $\mathcal{E}_4 = \{ (Q^n, X_1^n(m_1), X_2^n(m_2), Y^n) \in T_\epsilon^{(n)} \text{ for some } m_1 \ne 1, m_2 \ne 1 \}$
- Then $\mathrm{P}(\mathcal{E}) \le \mathrm{P}(\mathcal{E}_1) + \mathrm{P}(\mathcal{E}_2) + \mathrm{P}(\mathcal{E}_3) + \mathrm{P}(\mathcal{E}_4)$

- By the LLN, $\mathrm{P}(\mathcal{E}_1) \to 0$ as $n \to \infty$
- By the packing lemma ($\mathcal{A} = [2 : 2^{nR_1}]$, $U \leftarrow Q$, $X \leftarrow X_1$, $Y \leftarrow (X_2, Y)$), $\mathrm{P}(\mathcal{E}_2) \to 0$ as $n \to \infty$ if $R_1 < I(X_1; X_2, Y \mid Q) - \delta(\epsilon) = I(X_1; Y \mid X_2, Q) - \delta(\epsilon)$
- Similarly, $\mathrm{P}(\mathcal{E}_3) \to 0$ as $n \to \infty$ if $R_2 < I(X_2; Y \mid X_1, Q) - \delta(\epsilon)$



- By the packing lemma ($\mathcal{A} = [2 : 2^{nR_1}] \times [2 : 2^{nR_2}]$, $U \leftarrow Q$, $X \leftarrow (X_1, X_2)$), $\mathrm{P}(\mathcal{E}_4) \to 0$ as $n \to \infty$ if $R_1 + R_2 < I(X_1, X_2; Y \mid Q) - \delta(\epsilon)$
- Remark: the pairs $(X_1^n(m_1), X_2^n(m_2))$, $m_1 \ne 1$, $m_2 \ne 1$, are not mutually independent, but each of them is pairwise independent of $Y^n$ (given $Q^n$)

Summary

1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
   - Coded time sharing
   - Simultaneous decoding
   - Systematic procedure for decomposing the error event
4. Broadcast Channel
5. Lossy Source Coding
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

Broadcast Channel

DM Broadcast Channel (BC)

- Broadcast communication system (downlink):
  $(M_1, M_2) \to$ Encoder $\to X^n \to p(y_1, y_2|x)$; $Y_1^n \to$ Decoder 1 $\to \hat{M}_1$; $Y_2^n \to$ Decoder 2 $\to \hat{M}_2$
- Assume a 2-receiver DM-BC model $(\mathcal{X}, p(y_1, y_2|x), \mathcal{Y}_1 \times \mathcal{Y}_2)$
- A $(2^{nR_1}, 2^{nR_2}, n)$ code for the DM-BC:
  - Message sets: $[1 : 2^{nR_1}]$ and $[1 : 2^{nR_2}]$
  - Encoder: $x^n(m_1, m_2)$
  - Decoder $j = 1, 2$: $\hat{m}_j(y_j^n)$
- Assume $(M_1, M_2) \sim \mathrm{Unif}([1 : 2^{nR_1}] \times [1 : 2^{nR_2}])$
- Average probability of error: $P_e^{(n)} = \mathrm{P}\{ (\hat{M}_1, \hat{M}_2) \ne (M_1, M_2) \}$
- $(R_1, R_2)$ achievable if there exist $(2^{nR_1}, 2^{nR_2}, n)$ codes with $\lim_{n\to\infty} P_e^{(n)} = 0$
- Capacity region: closure of the set of achievable $(R_1, R_2)$

Superposition Coding Inner Bound

- The capacity region of the DM-BC is not known in general
- There are several inner and outer bounds that are tight in some cases

Superposition Coding Inner Bound (Cover 1972, Bergmans 1973)
A rate pair $(R_1, R_2)$ is achievable for the DM-BC $p(y_1, y_2|x)$ if
\begin{align*}
R_1 &< I(X; Y_1 \mid U), \\
R_2 &< I(U; Y_2), \\
R_1 + R_2 &< I(X; Y_1)
\end{align*}
for some pmf $p(u, x)$, where $U$ is an auxiliary random variable

- This bound is tight for several special cases, including:
  - Degraded: $X \to Y_1 \to Y_2$ physically or stochastically
  - Less noisy: $I(U; Y_1) \ge I(U; Y_2)$ for all $p(u, x)$
  - More capable: $I(X; Y_1) \ge I(X; Y_2)$ for all $p(x)$
- Degraded $\subseteq$ Less noisy $\subseteq$ More capable
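For the classical degraded binary symmetric BC (a standard example, not worked on this slide), the boundary of the superposition region has a closed form; the sketch below assumes $Y_1, Y_2$ are BSCs with crossovers $p_1 < p_2$, $U \sim \mathrm{Bern}(1/2)$, and $X = U \oplus V$ with $V \sim \mathrm{Bern}(\alpha)$.

```python
import numpy as np

def H2(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

p1, p2 = 0.1, 0.2                 # degraded BSC-BC, p1 < p2
for alpha in np.linspace(0, 0.5, 6):
    a1 = alpha * (1 - p1) + (1 - alpha) * p1   # binary convolution alpha * p1
    a2 = alpha * (1 - p2) + (1 - alpha) * p2
    R1 = H2(a1) - H2(p1)          # I(X; Y1 | U)
    R2 = 1 - H2(a2)               # I(U; Y2); sum bound is implied when p1 <= p2
    print(f"alpha={alpha:.1f}: R1 <= {R1:.3f}, R2 <= {R2:.3f}")
```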

Proof of Achievability

We use superposition coding and simultaneous nonunique decoding.

Codebook generation:
- Fix $p(u)p(x|u)$
- Randomly and independently generate $2^{nR_2}$ sequences (cloud centers) $u^n(m_2) \sim \prod_{i=1}^n p_U(u_i)$, $m_2 \in [1 : 2^{nR_2}]$
- For each $m_2 \in [1 : 2^{nR_2}]$, randomly and conditionally independently generate $2^{nR_1}$ sequences (satellite codewords) $x^n(m_1, m_2) \sim \prod_{i=1}^n p_{X|U}(x_i \mid u_i(m_2))$, $m_1 \in [1 : 2^{nR_1}]$

[Figure: cloud centers $u^n(m_2)$ in $\mathcal{U}^n$ and their satellite codewords $x^n(m_1, m_2)$ in $\mathcal{X}^n$]

Encoding:
- To send $(m_1, m_2)$, transmit $x^n(m_1, m_2)$

Decoding:
- Decoder 2 finds the unique message $\hat{m}_2$ such that $(u^n(\hat{m}_2), y_2^n) \in T_\epsilon^{(n)}$
  (by the packing lemma, $\mathrm{P}(\mathcal{E}_2) \to 0$ as $n \to \infty$ if $R_2 < I(U; Y_2) - \delta(\epsilon)$)
- Decoder 1 finds the unique message $\hat{m}_1$ such that $(u^n(m_2), x^n(\hat{m}_1, m_2), y_1^n) \in T_\epsilon^{(n)}$ for some $m_2$

Analysis of the Probability of Error for Decoder 1

- Assume $(M_1, M_2) = (1, 1)$
- Joint pmfs induced by different $(m_1, m_2)$ (an asterisk denotes a wrong message):

  m1  m2  joint pmf
  1   1   $p(u^n, x^n)\,p(y_1^n \mid x^n)$
  *   1   $p(u^n, x^n)\,p(y_1^n \mid u^n)$
  *   *   $p(u^n, x^n)\,p(y_1^n)$
  1   *   $p(u^n, x^n)\,p(y_1^n)$

- The last case does not result in an error
- So we divide the error event into the following 3 events:
  - $\mathcal{E}_{11} = \{ (U^n(1), X^n(1, 1), Y_1^n) \notin T_\epsilon^{(n)} \}$
  - $\mathcal{E}_{12} = \{ (U^n(1), X^n(m_1, 1), Y_1^n) \in T_\epsilon^{(n)} \text{ for some } m_1 \ne 1 \}$
  - $\mathcal{E}_{13} = \{ (U^n(m_2), X^n(m_1, m_2), Y_1^n) \in T_\epsilon^{(n)} \text{ for some } m_1 \ne 1, m_2 \ne 1 \}$
- Then $\mathrm{P}(\mathcal{E}_1) \le \mathrm{P}(\mathcal{E}_{11}) + \mathrm{P}(\mathcal{E}_{12}) + \mathrm{P}(\mathcal{E}_{13})$

- By the LLN, $\mathrm{P}(\mathcal{E}_{11}) \to 0$ as $n \to \infty$
- By the packing lemma ($\mathcal{A} = [2 : 2^{nR_1}]$), $\mathrm{P}(\mathcal{E}_{12}) \to 0$ as $n \to \infty$ if $R_1 < I(X; Y_1 \mid U) - \delta(\epsilon)$
- By the packing lemma ($\mathcal{A} = [2 : 2^{nR_1}] \times [2 : 2^{nR_2}]$, $U \leftarrow \emptyset$, $X \leftarrow (U, X)$), $\mathrm{P}(\mathcal{E}_{13}) \to 0$ as $n \to \infty$ if $R_1 + R_2 < I(U, X; Y_1) - \delta(\epsilon) = I(X; Y_1) - \delta(\epsilon)$
- Remark: $\mathrm{P}(\mathcal{E}_{14}) = \mathrm{P}\{ (U^n(m_2), X^n(1, m_2), Y_1^n) \in T_\epsilon^{(n)} \text{ for some } m_2 \ne 1 \} \to 0$ as $n \to \infty$ if $R_2 < I(U, X; Y_1) - \delta(\epsilon) = I(X; Y_1) - \delta(\epsilon)$
- Hence, the inner bound continues to hold when decoder 1 is also to recover $M_2$

Summary

1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
   - Superposition coding
   - Simultaneous nonunique decoding
5. Lossy Source Coding
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

Lossy Source Coding

- Point-to-point compression system:
  $X^n \to$ Encoder $\to M \to$ Decoder $\to (\hat{X}^n, D)$
- Assume a discrete memoryless source (DMS) $(\mathcal{X}, p(x))$ and a distortion measure $d(x, \hat{x})$, $(x, \hat{x}) \in \mathcal{X} \times \hat{\mathcal{X}}$
- Average per-letter distortion between $x^n$ and $\hat{x}^n$:
  \[ d(x^n, \hat{x}^n) = \frac{1}{n} \sum_{i=1}^n d(x_i, \hat{x}_i) \]
- A $(2^{nR}, n)$ lossy source code:
  - Encoder: an index $m(x^n) \in [1 : 2^{nR}) = \{1, 2, \ldots, 2^{nR}\}$
  - Decoder: an estimate (reconstruction sequence) $\hat{x}^n(m) \in \hat{\mathcal{X}}^n$

- Expected distortion associated with the $(2^{nR}, n)$ code:
  \[ D = \mathrm{E}\, d(X^n, \hat{X}^n) = \sum_{x^n} p(x^n)\, d(x^n, \hat{x}^n(m(x^n))) \]
- $(R, D)$ achievable if there exist $(2^{nR}, n)$ codes with $\limsup_{n\to\infty} \mathrm{E}(d(X^n, \hat{X}^n)) \le D$
- Rate–distortion function $R(D)$: infimum of rates $R$ such that $(R, D)$ is achievable

Lossy Source Coding Theorem (Shannon 1959)
\[ R(D) = \min_{p(\hat{x}|x) :\, \mathrm{E}(d(X, \hat{X})) \le D} I(X; \hat{X}) \quad \text{for } D \ge D_{\min} = \mathrm{E}[\min_{\hat{x}(x)} d(X, \hat{x}(X))] \]
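For a Bernoulli($p$) source with Hamming distortion, this minimization has the well-known closed form $R(D) = H_2(p) - H_2(D)$ for $0 \le D \le \min\{p, 1-p\}$; a quick evaluation (the parameters below are arbitrary choices):

```python
import numpy as np

def H2(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

p = 0.3   # Bern(p) source, Hamming distortion
for D in (0.0, 0.05, 0.1, 0.2, 0.3):
    print(f"D={D:.2f}  R(D)={max(H2(p) - H2(D), 0.0):.3f} bits/symbol")
```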

Proof of Achievability

We use random coding and joint typicality encoding.

Codebook generation:
- Fix $p(\hat{x}|x)$ that attains $R(D/(1+\epsilon))$ and compute $p(\hat{x}) = \sum_x p(x)p(\hat{x}|x)$
- Randomly and independently generate sequences $\hat{x}^n(m) \sim \prod_{i=1}^n p_{\hat{X}}(\hat{x}_i)$, $m \in [1 : 2^{nR}]$

Encoding:
- Find an index $m$ such that $(x^n, \hat{x}^n(m)) \in T_\epsilon^{(n)}$
- If there is more than one, choose the smallest index among them
- If there is none, choose $m = 1$

Decoding:
- Upon receiving $m$, set the reconstruction sequence $\hat{x}^n = \hat{x}^n(m)$

Analysis of Expected Distortion

- We bound the expected distortion averaged over codebooks
- Define the encoding error event
  \[ \mathcal{E} = \{ (X^n, \hat{X}^n(M)) \notin T_\epsilon^{(n)} \} = \{ (X^n, \hat{X}^n(m)) \notin T_\epsilon^{(n)} \text{ for all } m \in [1 : 2^{nR}] \} \]
- $\hat{X}^n(m) \sim \prod_{i=1}^n p_{\hat{X}}(\hat{x}_i)$, independent of each other and of $X^n \sim \prod_{i=1}^n p_X(x_i)$

[Figure: the reconstruction codewords $\hat{X}^n(1), \ldots, \hat{X}^n(m), \ldots$ must cover the typical set $T_\epsilon^{(n)}(X)$]

- To bound $\mathrm{P}(\mathcal{E})$, we use the covering lemma

Covering Lemma

- Let $(U, X, \hat{X}) \sim p(u, x, \hat{x})$ and $\epsilon' < \epsilon$
- Let $(U^n, X^n) \sim p(u^n, x^n)$ be arbitrarily distributed such that
  \[ \lim_{n\to\infty} \mathrm{P}\{ (U^n, X^n) \in T_{\epsilon'}^{(n)}(U, X) \} = 1 \]
- Let $\hat{X}^n(m) \sim \prod_{i=1}^n p_{\hat{X}|U}(\hat{x}_i \mid u_i)$, $m \in \mathcal{A}$, where $|\mathcal{A}| \ge 2^{nR}$, be conditionally independent of each other and of $X^n$ given $U^n$

Covering Lemma
There exists $\delta(\epsilon) \to 0$ as $\epsilon \to 0$ such that
\[ \lim_{n\to\infty} \mathrm{P}\{ (U^n, X^n, \hat{X}^n(m)) \notin T_\epsilon^{(n)} \text{ for all } m \in \mathcal{A} \} = 0 \]
if $R > I(X; \hat{X} \mid U) + \delta(\epsilon)$

Analysis of Expected Distortion

- By the covering lemma ($U = \emptyset$), $\mathrm{P}(\mathcal{E}) \to 0$ as $n \to \infty$ if
  \[ R > I(X; \hat{X}) + \delta(\epsilon) = R(D/(1+\epsilon)) + \delta(\epsilon) \]
- Now, by the law of total expectation and the typical average lemma,
  \[ \mathrm{E}\, d(X^n, \hat{X}^n) = \mathrm{P}(\mathcal{E})\, \mathrm{E}[d(X^n, \hat{X}^n) \mid \mathcal{E}] + \mathrm{P}(\mathcal{E}^c)\, \mathrm{E}[d(X^n, \hat{X}^n) \mid \mathcal{E}^c] \le \mathrm{P}(\mathcal{E})\, d_{\max} + \mathrm{P}(\mathcal{E}^c)(1+\epsilon)\, \mathrm{E}(d(X, \hat{X})) \]
- Hence, $\limsup_{n\to\infty} \mathrm{E}[d(X^n, \hat{X}^n)] \le D$ and there must exist a sequence of $(2^{nR}, n)$ codes that satisfies the asymptotic distortion constraint
- By the continuity of $R(D)$ in $D$, $R(D/(1+\epsilon)) + \delta(\epsilon) \to R(D)$ as $\epsilon \to 0$

Lossless Source Coding

- Suppose we wish to reconstruct $X^n$ losslessly, i.e., $\hat{X}^n = X^n$
- $R$ achievable if there exist $(2^{nR}, n)$ codes with $\lim_{n\to\infty} \mathrm{P}\{\hat{X}^n \ne X^n\} = 0$
- Optimal rate $R^*$: infimum of achievable rates $R$

Lossless Source Coding Theorem (Shannon 1948)
\[ R^* = H(X) \]

- We prove this theorem as a corollary of the lossy source coding theorem
- Consider the lossy source coding problem for a DMS $X$, $\hat{\mathcal{X}} = \mathcal{X}$, and the Hamming distortion measure ($d(x, \hat{x}) = 0$ if $\hat{x} = x$, and $d(x, \hat{x}) = 1$ otherwise)
- At $D = 0$, the rate–distortion function is $R(0) = H(X)$
- We now show operationally that $R^* = R(0)$, without using the fact that $R^* = H(X)$

Proof of the Lossless Source Coding Theorem

- Proof of $R^* \ge R(0)$:
  - First note that
    \[ \lim_{n\to\infty} \mathrm{E}(d(X^n, \hat{X}^n)) = \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^n \mathrm{P}\{\hat{X}_i \ne X_i\} \le \lim_{n\to\infty} \mathrm{P}\{\hat{X}^n \ne X^n\} \]
  - Hence, any sequence of $(2^{nR}, n)$ codes with $\lim_{n\to\infty} \mathrm{P}\{\hat{X}^n \ne X^n\} = 0$ achieves $D = 0$
- Proof of $R^* \le R(0)$:
  - We can still use random coding and joint typicality encoding!
  - Fix $p(\hat{x}|x) = 1$ if $\hat{x} = x$ and $0$ otherwise (so $p_{\hat{X}}(\hat{x}) = p_X(\hat{x})$)
  - As before, generate a random code $\hat{x}^n(m)$, $m \in [1 : 2^{nR}]$
  - Then $\mathrm{P}(\mathcal{E}) = \mathrm{P}\{ (X^n, \hat{X}^n) \notin T_\epsilon^{(n)} \} \to 0$ as $n \to \infty$ if $R > I(X; \hat{X}) + \delta(\epsilon) = R(0) + \delta(\epsilon)$
  - Now recall that if $(x^n, \hat{x}^n) \in T_\epsilon^{(n)}$, then $\hat{x}^n = x^n$ (equivalently, if $\hat{x}^n \ne x^n$, then $(x^n, \hat{x}^n) \notin T_\epsilon^{(n)}$)
  - Hence, $\mathrm{P}\{\hat{X}^n \ne X^n\} \to 0$ as $n \to \infty$ if $R > R(0) + \delta(\epsilon)$

Summary

1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding
   - Joint typicality encoding
   - Covering lemma
   - Lossless as a corollary of lossy
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

Wyner–Ziv Coding

Lossy Source Coding with Side Information at the Decoder

- Lossy compression system with side information:
  $X^n \to$ Encoder $\to M \to$ Decoder $\to (\hat{X}^n, D)$, with $Y^n$ available at the decoder
- Assume a 2-DMS $(\mathcal{X} \times \mathcal{Y}, p(x, y))$ and a distortion measure $d(x, \hat{x})$
- A $(2^{nR}, n)$ lossy source code with side information available at the decoder:
  - Encoder: $m(x^n)$
  - Decoder: $\hat{x}^n(m, y^n)$
- Expected distortion, achievability, rate–distortion function: defined as before

Theorem (Wyner–Ziv 1976)
\[ R_{\mathrm{SI\text{-}D}}(D) = \min\, [I(X; U) - I(Y; U)] = \min I(X; U \mid Y), \]
where the minimum is over all $p(u|x)$ and $\hat{x}(u, y)$ such that $\mathrm{E}(d(X, \hat{X})) \le D$

Proof of Achievability

We use binning in addition to joint typicality encoding and decoding.

[Figure: the sequences $u^n(1), \ldots, u^n(l), \ldots, u^n(2^{n\tilde{R}})$ partitioned into bins $\mathcal{B}(1), \ldots, \mathcal{B}(m), \ldots, \mathcal{B}(2^{nR})$; the decoder intersects the received bin with the sequences jointly typical with $y^n$ in $T_\epsilon^{(n)}(U, Y)$]

Codebook generation:
- Fix $p(u|x)$ and $\hat{x}(u, y)$ that attain $R_{\mathrm{SI\text{-}D}}(D/(1+\epsilon))$
- Randomly and independently generate $2^{n\tilde{R}}$ sequences $u^n(l) \sim \prod_{i=1}^n p_U(u_i)$, $l \in [1 : 2^{n\tilde{R}}]$
- Partition $[1 : 2^{n\tilde{R}}]$ into bins $\mathcal{B}(m) = [(m-1)2^{n(\tilde{R}-R)} + 1 : m2^{n(\tilde{R}-R)}]$, $m \in [1 : 2^{nR}]$

Encoding:
- Find an $l$ such that $(x^n, u^n(l)) \in T_{\epsilon'}^{(n)}$
  - If there is more than one, pick one of them uniformly at random
  - If there is none, choose $l \in [1 : 2^{n\tilde{R}}]$ uniformly at random
- Send the index $m$ such that $l \in \mathcal{B}(m)$

Decoding:
- Upon receiving $m$, find the unique $\hat{l} \in \mathcal{B}(m)$ such that $(u^n(\hat{l}), y^n) \in T_\epsilon^{(n)}$, where $\epsilon > \epsilon'$
- Compute the reconstruction sequence as $\hat{x}_i = \hat{x}(u_i(\hat{l}), y_i)$, $i \in [1 : n]$

Analysis of Expected Distortion

- We bound the distortion averaged over the random codebook and encoding
- Let $(L, M)$ denote the chosen indices and $\hat{L}$ the index estimate at the decoder
- Define the error event
  \[ \mathcal{E} = \{ (U^n(\hat{L}), X^n, Y^n) \notin T_\epsilon^{(n)} \} \]
  and consider
  - $\mathcal{E}_1 = \{ (U^n(l), X^n) \notin T_{\epsilon'}^{(n)} \text{ for all } l \in [1 : 2^{n\tilde{R}}] \}$
  - $\mathcal{E}_2 = \{ (U^n(L), X^n, Y^n) \notin T_\epsilon^{(n)} \}$
  - $\mathcal{E}_3 = \{ (U^n(\tilde{l}), Y^n) \in T_\epsilon^{(n)} \text{ for some } \tilde{l} \in \mathcal{B}(M),\, \tilde{l} \ne L \}$
- The probability of error is bounded as
  \[ \mathrm{P}(\mathcal{E}) \le \mathrm{P}(\mathcal{E}_1) + \mathrm{P}(\mathcal{E}_1^c \cap \mathcal{E}_2) + \mathrm{P}(\mathcal{E}_3) \]

- By the covering lemma, $\mathrm{P}(\mathcal{E}_1) \to 0$ as $n \to \infty$ if $\tilde{R} > I(X; U) + \delta(\epsilon')$
- Since $\mathcal{E}_1^c = \{ (U^n(L), X^n) \in T_{\epsilon'}^{(n)} \}$, $\epsilon > \epsilon'$, and
  \[ Y^n \mid \{U^n(L) = u^n, X^n = x^n\} \sim \prod_{i=1}^n p_{Y|U,X}(y_i \mid u_i, x_i) = \prod_{i=1}^n p_{Y|X}(y_i \mid x_i), \]
  by the conditional typicality lemma, $\mathrm{P}(\mathcal{E}_1^c \cap \mathcal{E}_2) \to 0$ as $n \to \infty$
- To bound $\mathrm{P}(\mathcal{E}_3)$, it can be shown that
  \[ \mathrm{P}(\mathcal{E}_3) \le \mathrm{P}\{ (U^n(\tilde{l}), Y^n) \in T_\epsilon^{(n)} \text{ for some } \tilde{l} \in \mathcal{B}(1) \} \]
  Since each $U^n(\tilde{l}) \sim \prod_{i=1}^n p_U(u_i)$, independent of $Y^n$, by the packing lemma, $\mathrm{P}(\mathcal{E}_3) \to 0$ as $n \to \infty$ if $\tilde{R} - R < I(Y; U) - \delta(\epsilon)$
- Combining the bounds, we have shown that $\mathrm{P}(\mathcal{E}) \to 0$ as $n \to \infty$ if
  \[ R > I(X; U) - I(Y; U) + \delta(\epsilon) + \delta(\epsilon') = R_{\mathrm{SI\text{-}D}}(D/(1+\epsilon)) + \delta(\epsilon) + \delta(\epsilon') \]

Lossless Source Coding with Side Information

- What is the minimum rate $R^*_{\mathrm{SI\text{-}D}}$ needed to recover $X$ losslessly?

Theorem (Slepian–Wolf 1973a)
\[ R^*_{\mathrm{SI\text{-}D}} = H(X \mid Y) \]

- We prove the Slepian–Wolf theorem as a corollary of the Wyner–Ziv theorem
- Let $d$ be the Hamming distortion measure and consider the case $D = 0$
- Then $R_{\mathrm{SI\text{-}D}}(0) = H(X \mid Y)$
- As before, we can show operationally that $R^*_{\mathrm{SI\text{-}D}} = R_{\mathrm{SI\text{-}D}}(0)$:
  - $R^*_{\mathrm{SI\text{-}D}} \ge R_{\mathrm{SI\text{-}D}}(0)$ since $(1/n) \sum_{i=1}^n \mathrm{P}\{\hat{X}_i \ne X_i\} \le \mathrm{P}\{\hat{X}^n \ne X^n\}$
  - $R^*_{\mathrm{SI\text{-}D}} \le R_{\mathrm{SI\text{-}D}}(0)$ by Wyner–Ziv coding with $\hat{X} = U = X$
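A quick numerical illustration with the standard doubly symmetric binary source (an example choice, not from the slide): with $X \sim \mathrm{Bern}(1/2)$ and $Y = X \oplus N$, $N \sim \mathrm{Bern}(p)$, side information drops the required rate from $H(X) = 1$ to $H(X \mid Y) = H_2(p)$.

```python
import numpy as np

def H2(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

for p in (0.05, 0.1, 0.25):
    print(f"p={p}: H(X)=1.000, H(X|Y)={H2(p):.3f} bits/symbol")
```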

Summary

1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding
6. Wyner–Ziv Coding
   - Binning
   - Application of the conditional typicality lemma
   - Channel coding techniques in source coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

Gelfand–Pinsker Coding

DMC with State Information Available at the Encoder

- Point-to-point communication system with state:
  $S^n \sim p(s)$; $(M, S^n) \to$ Encoder $\to X^n \to p(y|x, s) \to Y^n \to$ Decoder $\to \hat{M}$
- Assume a DMC with DM state model $(\mathcal{X} \times \mathcal{S}, p(y|x, s)p(s), \mathcal{Y})$
  - DMC: $p(y^n \mid x^n, s^n, m) = \prod_{i=1}^n p_{Y|X,S}(y_i \mid x_i, s_i)$
  - DM state: $(S_1, S_2, \ldots)$ i.i.d. with $S_i \sim p_S(s_i)$
- A $(2^{nR}, n)$ code for the DMC with state information available at the encoder:
  - Message set: $[1 : 2^{nR}]$
  - Encoder: $x^n(m, s^n)$
  - Decoder: $\hat{m}(y^n)$

- Expected average cost constraint:
  \[ \sum_{i=1}^n \mathrm{E}[b(x_i(m, S^n))] \le nB \quad \text{for every } m \in [1 : 2^{nR}] \]
- Probability of error, achievability, capacity–cost function: defined as for the DMC

Theorem (Gelfand–Pinsker 1980)
\[ C_{\mathrm{SI\text{-}E}}(B) = \max_{p(u|s),\, x(u,s) :\, \mathrm{E}(b(X)) \le B} [I(U; Y) - I(U; S)] \]

Proof of Achievability (Heegard–El Gamal 1983)

We use multicoding.

[Figure: subcodebooks $\mathcal{C}(1), \ldots, \mathcal{C}(m), \ldots, \mathcal{C}(2^{nR})$ of $u^n(l)$ sequences, $l \in [1 : 2^{n\tilde{R}}]$; to send $m$ given $s^n$, the encoder looks for a $u^n(l) \in \mathcal{C}(m)$ jointly typical with $s^n$ in $T_\epsilon^{(n)}(U, S)$]

Codebook generation:
- Fix $p(u|s)$ and $x(u, s)$ that attain $C_{\mathrm{SI\text{-}E}}(B/(1+\epsilon))$
- For each $m \in [1 : 2^{nR}]$, generate a subcodebook $\mathcal{C}(m)$ consisting of $2^{n(\tilde{R}-R)}$ randomly and independently generated sequences $u^n(l) \sim \prod_{i=1}^n p_U(u_i)$, $l \in [(m-1)2^{n(\tilde{R}-R)} + 1 : m2^{n(\tilde{R}-R)}]$

Encoding:
- To send $m \in [1 : 2^{nR}]$ given $s^n$, find $u^n(l) \in \mathcal{C}(m)$ such that $(u^n(l), s^n) \in T_{\epsilon'}^{(n)}$
- Then transmit $x_i = x(u_i(l), s_i)$ for $i \in [1 : n]$
  (by the typical average lemma, $\sum_{i=1}^n b(x_i(m, s^n)) \le nB$)
- If no such $u^n(l)$ exists, transmit $(x_0, \ldots, x_0)$

Decoding:
- Find the unique $\hat{m}$ such that $(u^n(l), y^n) \in T_\epsilon^{(n)}$ for some $u^n(l) \in \mathcal{C}(\hat{m})$, where $\epsilon > \epsilon'$

Analysis of the Probability of Error

- Assume $M = 1$
- Let $L$ denote the index of the chosen $U^n$ sequence for $M = 1$ and $S^n$
- The decoder makes an error only if one or more of the following events occur:
  - $\mathcal{E}_1 = \{ (U^n(l), S^n) \notin T_{\epsilon'}^{(n)} \text{ for all } U^n(l) \in \mathcal{C}(1) \}$
  - $\mathcal{E}_2 = \{ (U^n(L), Y^n) \notin T_\epsilon^{(n)} \}$
  - $\mathcal{E}_3 = \{ (U^n(l), Y^n) \in T_\epsilon^{(n)} \text{ for some } U^n(l) \notin \mathcal{C}(1) \}$
- Thus, the probability of error is bounded as
  \[ \mathrm{P}(\mathcal{E}) \le \mathrm{P}(\mathcal{E}_1) + \mathrm{P}(\mathcal{E}_1^c \cap \mathcal{E}_2) + \mathrm{P}(\mathcal{E}_3) \]

- By the covering lemma, $\mathrm{P}(\mathcal{E}_1) \to 0$ as $n \to \infty$ if $\tilde{R} - R > I(U; S) + \delta(\epsilon')$
- Since $\epsilon > \epsilon'$, $\mathcal{E}_1^c = \{ (U^n(L), S^n) \in T_{\epsilon'}^{(n)} \} = \{ (U^n(L), X^n, S^n) \in T_{\epsilon'}^{(n)} \}$, and
  \[ Y^n \mid \{U^n(L) = u^n, X^n = x^n, S^n = s^n\} \sim \prod_{i=1}^n p_{Y|U,X,S}(y_i \mid u_i, x_i, s_i) = \prod_{i=1}^n p_{Y|X,S}(y_i \mid x_i, s_i), \]
  by the conditional typicality lemma, $\mathrm{P}(\mathcal{E}_1^c \cap \mathcal{E}_2) \to 0$ as $n \to \infty$
- Since each $U^n(l) \notin \mathcal{C}(1)$ is distributed according to $\prod_{i=1}^n p_U(u_i)$, independent of $Y^n$, by the packing lemma, $\mathrm{P}(\mathcal{E}_3) \to 0$ as $n \to \infty$ if $\tilde{R} < I(U; Y) - \delta(\epsilon)$
  - Remark: $Y^n$ here is not i.i.d.
- Combining the bounds, we have shown that $\mathrm{P}(\mathcal{E}) \to 0$ as $n \to \infty$ if
  \[ R < I(U; Y) - I(U; S) - \delta(\epsilon) - \delta(\epsilon') = C_{\mathrm{SI\text{-}E}}(B/(1+\epsilon)) - \delta(\epsilon) - \delta(\epsilon') \]

Multicoding versus Binning

Multicoding:
- Channel coding technique
- Given a set of messages
- Generate many codewords for each message
- To communicate a message, send a codeword from its subcodebook

Binning:
- Source coding technique
- Given a set of indices (sequences)
- Map indices into a smaller number of bins
- To communicate an index, send its bin index

Wyner–Ziv versus Gelfand–Pinsker

- Wyner–Ziv theorem: rate–distortion function for a DMS $X$ with side information $Y$ available at the decoder:
  \[ R_{\mathrm{SI\text{-}D}}(D) = \min\, [I(U; X) - I(U; Y)] \]
  We proved achievability using binning, covering, and packing
- Gelfand–Pinsker theorem: capacity–cost function of a DMC with state information $S$ available at the encoder:
  \[ C_{\mathrm{SI\text{-}E}}(B) = \max\, [I(U; Y) - I(U; S)] \]
  We proved achievability using multicoding, covering, and packing
- Dualities: min vs. max; binning vs. multicoding; covering rate minus packing rate vs. packing rate minus covering rate

Writing on Dirty Paper

- Gaussian channel with additive Gaussian state available at the encoder:
  \[ Y = X + S + Z \]
  - Noise $Z \sim \mathrm{N}(0, N)$
  - State $S \sim \mathrm{N}(0, Q)$, independent of $Z$
- Assume an expected average power constraint: $\sum_{i=1}^n \mathrm{E}(x_i^2(m, S^n)) \le nP$ for every $m$
- For reference:
  \[ C = \frac{1}{2} \log\Bigl(1 + \frac{P}{N+Q}\Bigr), \qquad C_{\mathrm{SI\text{-}ED}} = \frac{1}{2} \log\Bigl(1 + \frac{P}{N}\Bigr) = C_{\mathrm{SI\text{-}D}} \]

Writing on Dirty Paper (Costa 1983)
\[ C_{\mathrm{SI\text{-}E}} = \frac{1}{2} \log\Bigl(1 + \frac{P}{N}\Bigr) \]
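The following worked Gaussian computation (standard, and consistent with the theorem above) shows how Costa's auxiliary $U = X + \alpha S$ with $\alpha = P/(P+N)$ makes the Gelfand–Pinsker expression collapse to $\frac{1}{2}\log(1+P/N)$:

```latex
% U = X + \alpha S with X ~ N(0,P) independent of S ~ N(0,Q); Y = X + S + Z, Z ~ N(0,N).
\begin{align*}
I(U;Y) - I(U;S) &= h(U \mid S) - h(U \mid Y), \\
h(U \mid S) &= h(X + \alpha S \mid S) = h(X) = \tfrac{1}{2}\log 2\pi e P, \\
h(U \mid Y) &= \tfrac{1}{2}\log 2\pi e \Bigl(\sigma_U^2 - \tfrac{\operatorname{Cov}(U,Y)^2}{\sigma_Y^2}\Bigr),
\quad \sigma_U^2 = P + \alpha^2 Q,\ \sigma_Y^2 = P + Q + N,\ \operatorname{Cov}(U,Y) = P + \alpha Q.
\end{align*}
% Substituting \alpha = P/(P+N) gives \sigma_U^2 - Cov(U,Y)^2/\sigma_Y^2 = PN/(P+N), so
\[
I(U;Y) - I(U;S) = \tfrac{1}{2}\log\frac{P}{PN/(P+N)} = \tfrac{1}{2}\log\Bigl(1 + \frac{P}{N}\Bigr),
\]
% independent of the state power Q.
```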

Proof of Achievability

- The proof involves a clever choice of $F(u|s)$, $x(u, s)$ and a discretization procedure
- Let $X \sim \mathrm{N}(0, P)$ independent of $S$, and $U = X + \alpha S$, where $\alpha = P/(P+N)$. Then
  \[ I(U; Y) - I(U; S) = \frac{1}{2} \log\Bigl(1 + \frac{P}{N}\Bigr) \]
- Let $[U]_j$ and $[S]_{j'}$ be finite quantizations of $U$ and $S$
- Let $[X]_{jj'} = [U]_j - \alpha [S]_{j'}$ and $[Y_{jj'}]_k$ be a finite quantization of the corresponding channel output $Y_{jj'} = [U]_j - \alpha [S]_{j'} + S + Z$
- We use Gelfand–Pinsker coding for the DMC with DM state $p([y_{jj'}]_k \mid [x]_{jj'}, [s]_{j'})\, p([s]_{j'})$
  - Joint typicality encoding: $\tilde{R} - R > I(U; S) \ge I([U]_j; [S]_{j'})$
  - Joint typicality decoding: $\tilde{R} < I([U]_j; [Y_{jj'}]_k)$
- Thus $R < I([U]_j; [Y_{jj'}]_k) - I(U; S)$ is achievable for any $j, j', k$
- Following arguments similar to the discretization procedure for Gaussian channel coding,
  \[ \lim_{j\to\infty} \lim_{j'\to\infty} \lim_{k\to\infty} I([U]_j; [Y_{jj'}]_k) = I(U; Y) \]

Summary

1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
   - Multicoding
   - Packing lemma with non-i.i.d. $Y^n$
   - Writing on dirty paper
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

Wiretap Channel

DM Wiretap Channel (WTC)

- Point-to-point communication system with an eavesdropper:
  $M \to$ Encoder $\to X^n \to p(y, z|x)$; $Y^n \to$ Decoder $\to \hat{M}$; $Z^n \to$ Eavesdropper
- Assume a DM-WTC model $(\mathcal{X}, p(y, z|x), \mathcal{Y} \times \mathcal{Z})$
- A $(2^{nR}, n)$ secrecy code for the DM-WTC:
  - Message set: $[1 : 2^{nR}]$
  - Randomized encoder: $X^n(m) \sim p(x^n|m)$ for each $m \in [1 : 2^{nR}]$
  - Decoder: $\hat{m}(y^n)$

- Assume $M \sim \mathrm{Unif}[1 : 2^{nR}]$
- Average probability of error: $P_e^{(n)} = \mathrm{P}\{\hat{M} \ne M\}$
- Information leakage rate: $R_L^{(n)} = (1/n) I(M; Z^n)$
- $(R, R_L)$ achievable if there exist $(2^{nR}, n)$ codes with $\lim_{n\to\infty} P_e^{(n)} = 0$ and $\limsup_{n\to\infty} R_L^{(n)} \le R_L$
- Rate–leakage region $\mathscr{R}$: closure of the set of achievable $(R, R_L)$
- Secrecy capacity: $C_S = \max\{R : (R, 0) \in \mathscr{R}\}$

Theorem (Wyner 1975, Csiszár–Körner 1978)
\[ C_S = \max_{p(u, x)} [I(U; Y) - I(U; Z)] \]
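For the degraded binary symmetric wiretap channel (Wyner's original setting; a standard example, not taken from this slide), taking $U = X \sim \mathrm{Bern}(1/2)$ is optimal and the maximization reduces to a closed form:

```python
import numpy as np

def H2(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

# Legitimate channel Y = BSC(p1), eavesdropper Z = BSC(p2), p1 < p2:
# Cs = I(X;Y) - I(X;Z) = (1 - H2(p1)) - (1 - H2(p2)) = H2(p2) - H2(p1).
p1, p2 = 0.05, 0.2
print(f"Cs = {H2(p2) - H2(p1):.3f} bits/transmission")
```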

Proof of Achievability

We use multicoding and two-step randomized encoding.

Codebook generation:
- Assume $C_S > 0$ and fix $p(u, x)$ that attains it (so $I(U; Y) - I(U; Z) > 0$)
- For each $m \in [1 : 2^{nR}]$, generate a subcodebook $\mathcal{C}(m)$ consisting of $2^{n(\tilde{R}-R)}$ randomly and independently generated sequences $u^n(l) \sim \prod_{i=1}^n p_U(u_i)$, $l \in [(m-1)2^{n(\tilde{R}-R)} + 1 : m2^{n(\tilde{R}-R)}]$

[Figure: the $2^{n\tilde{R}}$ sequences $u^n(l)$ arranged into subcodebooks $\mathcal{C}(1), \mathcal{C}(2), \ldots, \mathcal{C}(2^{nR})$ of size $2^{n(\tilde{R}-R)}$ each]

Encoding:
- To send $m$, choose an index $L \in [(m-1)2^{n(\tilde{R}-R)} + 1 : m2^{n(\tilde{R}-R)}]$ uniformly at random
- Then generate $X^n \sim \prod_{i=1}^n p_{X|U}(x_i \mid u_i(L))$ and transmit it

Decoding:
- Find the unique $\hat{m}$ such that $(u^n(\hat{l}), y^n) \in T_\epsilon^{(n)}$ for some $u^n(\hat{l}) \in \mathcal{C}(\hat{m})$
- By the LLN and the packing lemma, $\mathrm{P}(\mathcal{E}) \to 0$ as $n \to \infty$ if $\tilde{R} < I(U; Y) - \delta(\epsilon)$

Analysis of the Information Leakage Rate

- For each $\mathcal{C}(m)$, the eavesdropper has $\doteq 2^{n(\tilde{R}-R-I(U;Z))}$ sequences $u^n(l)$ jointly typical with $z^n$
- If $\tilde{R} - R > I(U; Z)$, the eavesdropper has roughly the same number of such sequences in each subcodebook, providing it with no information about the message
- Let $M$ be the message sent and $L$ be the randomly selected index
- Every codebook $\mathcal{C}$ induces a pmf of the form
  \[ p(m, l, u^n, z^n \mid \mathcal{C}) = 2^{-nR}\, 2^{-n(\tilde{R}-R)}\, p(u^n \mid l, \mathcal{C}) \prod_{i=1}^n p_{Z|U}(z_i \mid u_i) \]
- In particular, $p(u^n, z^n) = \prod_{i=1}^n p_{U,Z}(u_i, z_i)$

Analysis of the Information Leakage Rate

- Consider the amount of information leakage averaged over codebooks:
\begin{align*}
I(M; Z^n \mid \mathcal{C}) &= H(M \mid \mathcal{C}) - H(M \mid Z^n, \mathcal{C}) \\
&= nR - H(M, L \mid Z^n, \mathcal{C}) + H(L \mid Z^n, M, \mathcal{C}) \\
&= nR - H(L \mid Z^n, \mathcal{C}) + H(L \mid Z^n, M, \mathcal{C})
\end{align*}
- The first equivocation term:
\begin{align*}
H(L \mid Z^n, \mathcal{C}) &= H(L \mid \mathcal{C}) - I(L; Z^n \mid \mathcal{C}) \\
&= n\tilde{R} - I(L; Z^n \mid \mathcal{C}) \\
&= n\tilde{R} - I(U^n, L; Z^n \mid \mathcal{C}) \\
&\ge n\tilde{R} - I(U^n, L, \mathcal{C}; Z^n) \\
&\stackrel{(a)}{=} n\tilde{R} - I(U^n; Z^n) \\
&= n\tilde{R} - nI(U; Z)
\end{align*}
  where (a) follows since $(L, \mathcal{C}) \to U^n \to Z^n$ form a Markov chain

Analysis of the Information Leakage Rate

- Combining, the leakage averaged over codebooks satisfies
  \[ I(M; Z^n \mid \mathcal{C}) \le nR - n\tilde{R} + nI(U; Z) + H(L \mid Z^n, M, \mathcal{C}) \]
- The remaining equivocation term can be upper bounded as follows:

Lemma
If $\tilde{R} - R \ge I(U; Z)$, then
\[ \limsup_{n\to\infty} \frac{1}{n} H(L \mid Z^n, M, \mathcal{C}) \le \tilde{R} - R - I(U; Z) + \delta(\epsilon) \]

- Substituting (and recalling that $\tilde{R} < I(U; Y) - \delta(\epsilon)$ for decoding), we have shown that
  \[ \limsup_{n\to\infty} \frac{1}{n} I(M; Z^n \mid \mathcal{C}) \le \delta(\epsilon) \quad \text{if } R < I(U; Y) - I(U; Z) - \delta(\epsilon) \]
- Thus, there must exist a sequence of $(2^{nR}, n)$ codes such that $P_e^{(n)} \to 0$ and $\limsup_{n\to\infty} R_L^{(n)} \le \delta(\epsilon)$

Summary

1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
   - Randomized encoding
   - Bound on equivocation (list size)
9. Relay Channel
10. Multicast Network

Relay Channel

DM Relay Channel (RC)

- Point-to-point communication system with a relay:
  $M \to$ Encoder $\to X_1^n \to p(y_2, y_3|x_1, x_2) \to Y_3^n \to$ Decoder $\to \hat{M}$; the relay encoder observes $Y_2^n$ and transmits $X_2^n$
- Assume a DM-RC model $(\mathcal{X}_1 \times \mathcal{X}_2, p(y_2, y_3|x_1, x_2), \mathcal{Y}_2 \times \mathcal{Y}_3)$
- A $(2^{nR}, n)$ code for the DM-RC:
  - Message set: $[1 : 2^{nR}]$
  - Encoder: $x_1^n(m)$
  - Relay encoder: $x_{2i}(y_2^{i-1})$, $i \in [1 : n]$
  - Decoder: $\hat{m}(y_3^n)$
- Probability of error, achievability, capacity: defined as for the DMC

- The capacity of the DM-RC is not known in general
- There are upper and lower bounds that are tight in some cases
- We discuss two lower bounds: decode–forward and compress–forward

Multihop Lower Bound

- The relay recovers the message received from the sender in each block and retransmits it in the following block

[Figure: sender $X_1 \to$ relay $(Y_2 : X_2) \to$ receiver $Y_3$]

Multihop Lower Bound
\[ C \ge \max_{p(x_1)p(x_2)} \min\{ I(X_2; Y_3),\; I(X_1; Y_2 \mid X_2) \} \]

- Tight for a cascade of two DMCs, i.e., $p(y_2, y_3|x_1, x_2) = p(y_2|x_1)p(y_3|x_2)$:
  \[ C = \min\Bigl\{ \max_{p(x_2)} I(X_2; Y_3),\; \max_{p(x_1)} I(X_1; Y_2) \Bigr\} \]
- The scheme uses block Markov coding, where codewords in a block can depend on the message sent in the previous block
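A tiny numerical instance of the cascade case (the channels and numbers are arbitrary choices, not from the slide): with BSC hops, each hop's capacity is $1 - H_2(p)$ and the multihop rate is limited by the weaker hop.

```python
import numpy as np

def H2(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

p12, p23 = 0.05, 0.15     # sender->relay and relay->receiver crossovers
c1, c2 = 1 - H2(p12), 1 - H2(p23)
print(f"hop 1: {c1:.3f}, hop 2: {c2:.3f}, C = {min(c1, c2):.3f} bits")
```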

Proof of Achievability

- Send $b - 1$ messages $m_1, \ldots, m_{b-1}$ in $b$ blocks of length $n$ each, using independently generated codebooks

Codebook generation:
- Fix $p(x_1)p(x_2)$ that attains the lower bound
- For each $j \in [1 : b]$, randomly and independently generate $2^{nR}$ sequences $x_1^n(m_j) \sim \prod_{i=1}^n p_{X_1}(x_{1i})$, $m_j \in [1 : 2^{nR}]$
- Similarly, generate $2^{nR}$ sequences $x_2^n(m_{j-1}) \sim \prod_{i=1}^n p_{X_2}(x_{2i})$, $m_{j-1} \in [1 : 2^{nR}]$
- Codebooks: $\mathcal{C}_j = \{ (x_1^n(m_j), x_2^n(m_{j-1})) : m_{j-1}, m_j \in [1 : 2^{nR}] \}$, $j \in [1 : b]$

Encoding:
- To send $m_j$ in block $j$, transmit $x_1^n(m_j)$ from $\mathcal{C}_j$

Relay encoding:
- At the end of block $j$, find the unique $\tilde{m}_j$ such that $(x_1^n(\tilde{m}_j), x_2^n(\tilde{m}_{j-1}), y_2^n(j)) \in T_\epsilon^{(n)}$
- In block $j+1$, transmit $x_2^n(\tilde{m}_j)$ from $\mathcal{C}_{j+1}$

Decoding:
- At the end of block $j+1$, find the unique $\hat{m}_j$ such that $(x_2^n(\hat{m}_j), y_3^n(j+1)) \in T_\epsilon^{(n)}$

Analysis of the Probability of Error

- We analyze the probability of decoding error for $M_j$ averaged over codebooks
- Assume $M_j = 1$ and let $\tilde{M}_j$ be the relay's decoded message at the end of block $j$
- Since $\{\hat{M}_j \ne 1\} \subseteq \{\tilde{M}_j \ne 1\} \cup \{\hat{M}_j \ne \tilde{M}_j\}$, the decoder makes an error only if one of the following events occurs:
  - $\tilde{\mathcal{E}}_1(j) = \{ (X_1^n(1), X_2^n(\tilde{M}_{j-1}), Y_2^n(j)) \notin T_\epsilon^{(n)} \}$
  - $\tilde{\mathcal{E}}_2(j) = \{ (X_1^n(m_j), X_2^n(\tilde{M}_{j-1}), Y_2^n(j)) \in T_\epsilon^{(n)} \text{ for some } m_j \ne 1 \}$
  - $\mathcal{E}_1(j) = \{ (X_2^n(\tilde{M}_j), Y_3^n(j+1)) \notin T_\epsilon^{(n)} \}$
  - $\mathcal{E}_2(j) = \{ (X_2^n(m_j), Y_3^n(j+1)) \in T_\epsilon^{(n)} \text{ for some } m_j \ne \tilde{M}_j \}$
- Thus, the probability of error is upper bounded as
  \[ \mathrm{P}(\mathcal{E}(j)) = \mathrm{P}\{\hat{M}_j \ne 1\} \le \mathrm{P}(\tilde{\mathcal{E}}_1(j)) + \mathrm{P}(\tilde{\mathcal{E}}_2(j)) + \mathrm{P}(\mathcal{E}_1(j)) + \mathrm{P}(\mathcal{E}_2(j)) \]

- By the independence of the codebooks, $\tilde{M}_{j-1}$, which is a function of $Y_2^n(j-1)$ and codebook $\mathcal{C}_{j-1}$, is independent of the codewords $X_1^n(1), X_2^n(\tilde{M}_{j-1})$ in $\mathcal{C}_j$
- Thus, by the LLN, $\mathrm{P}(\tilde{\mathcal{E}}_1(j)) \to 0$ as $n \to \infty$
- By the packing lemma, $\mathrm{P}(\tilde{\mathcal{E}}_2(j)) \to 0$ as $n \to \infty$ if $R < I(X_1; Y_2 \mid X_2) - \delta(\epsilon)$
- By the independence of the codebooks and the LLN, $\mathrm{P}(\mathcal{E}_1(j)) \to 0$ as $n \to \infty$
- By the same independence and the packing lemma, $\mathrm{P}(\mathcal{E}_2(j)) \to 0$ as $n \to \infty$ if $R < I(X_2; Y_3) - \delta(\epsilon)$
- Thus we have shown that, under the given constraints on the rate, $\mathrm{P}\{\hat{M}_j \ne M_j\} \to 0$ as $n \to \infty$ for each $j \in [1 : b-1]$

Coherent Multihop Lower Bound

- In the multihop coding scheme, the sender knows what the relay transmits in each block
- Hence, the multihop coding scheme can be improved via coherent cooperation between the sender and the relay

Coherent Multihop Lower Bound
\[ C \ge \max_{p(x_1, x_2)} \min\{ I(X_2; Y_3),\; I(X_1; Y_2 \mid X_2) \} \]

Proof of Achievability

- We again use a block Markov coding scheme: send $b - 1$ messages in $b$ blocks using independently generated codebooks

Codebook generation:
- Fix $p(x_1, x_2)$ that attains the lower bound
- For $j \in [1 : b]$, randomly and independently generate $2^{nR}$ sequences $x_2^n(m_{j-1}) \sim \prod_{i=1}^n p_{X_2}(x_{2i})$, $m_{j-1} \in [1 : 2^{nR}]$
- For each $m_{j-1} \in [1 : 2^{nR}]$, randomly and conditionally independently generate $2^{nR}$ sequences $x_1^n(m_j \mid m_{j-1}) \sim \prod_{i=1}^n p_{X_1|X_2}(x_{1i} \mid x_{2i}(m_{j-1}))$, $m_j \in [1 : 2^{nR}]$
- Codebooks: $\mathcal{C}_j = \{ (x_1^n(m_j \mid m_{j-1}), x_2^n(m_{j-1})) : m_{j-1}, m_j \in [1 : 2^{nR}] \}$, $j \in [1 : b]$

Block:  1             2              3              ...  b-1                      b
X1:     x1^n(m1|1)    x1^n(m2|m1)    x1^n(m3|m2)    ...  x1^n(m_{b-1}|m_{b-2})    x1^n(1|m_{b-1})
Y2:     m̃1           m̃2            m̃3            ...  m̃_{b-1}
X2:     x2^n(1)       x2^n(m̃1)      x2^n(m̃2)      ...  x2^n(m̃_{b-2})           x2^n(m̃_{b-1})
Y3:                   m̂1            m̂2            ...  m̂_{b-2}                 m̂_{b-1}

Encoding:
- In block $j$, transmit $x_1^n(m_j \mid m_{j-1})$ from codebook $\mathcal{C}_j$

Relay encoding:
- At the end of block $j$, find the unique $\tilde{m}_j$ such that $(x_1^n(\tilde{m}_j \mid \tilde{m}_{j-1}), x_2^n(\tilde{m}_{j-1}), y_2^n(j)) \in T_\epsilon^{(n)}$
- In block $j+1$, transmit $x_2^n(\tilde{m}_j)$ from codebook $\mathcal{C}_{j+1}$

Decoding:
- At the end of block $j+1$, find the unique message $\hat{m}_j$ such that $(x_2^n(\hat{m}_j), y_3^n(j+1)) \in T_\epsilon^{(n)}$

Analysis of the Probability of Error

- We analyze the probability of decoding error for $M_j$ averaged over codebooks
- Assume $M_{j-1} = M_j = 1$ and let $\tilde{M}_j$ be the relay's decoded message at the end of block $j$
- The decoder makes an error only if one of the following events occurs:
  - $\tilde{\mathcal{E}}(j) = \{ \tilde{M}_j \ne 1 \}$
  - $\mathcal{E}_1(j) = \{ (X_2^n(\tilde{M}_j), Y_3^n(j+1)) \notin T_\epsilon^{(n)} \}$
  - $\mathcal{E}_2(j) = \{ (X_2^n(m_j), Y_3^n(j+1)) \in T_\epsilon^{(n)} \text{ for some } m_j \ne \tilde{M}_j \}$
- Thus, the probability of error is upper bounded as
  \[ \mathrm{P}(\mathcal{E}(j)) = \mathrm{P}\{\hat{M}_j \ne 1\} \le \mathrm{P}(\tilde{\mathcal{E}}(j)) + \mathrm{P}(\mathcal{E}_1(j)) + \mathrm{P}(\mathcal{E}_2(j)) \]
- Following the same steps as in the multihop coding scheme, the last two terms $\to 0$ as $n \to \infty$ if $R < I(X_2; Y_3) - \delta(\epsilon)$

Analysis of the Probability of Error

- To upper bound $\mathrm{P}(\tilde{\mathcal{E}}(j)) = \mathrm{P}\{\tilde{M}_j \ne 1\}$, define
  - $\tilde{\mathcal{E}}_1(j) = \{ (X_1^n(1 \mid \tilde{M}_{j-1}), X_2^n(\tilde{M}_{j-1}), Y_2^n(j)) \notin T_\epsilon^{(n)} \}$
  - $\tilde{\mathcal{E}}_2(j) = \{ (X_1^n(m_j \mid \tilde{M}_{j-1}), X_2^n(\tilde{M}_{j-1}), Y_2^n(j)) \in T_\epsilon^{(n)} \text{ for some } m_j \ne 1 \}$
- Then
  \[ \mathrm{P}(\tilde{\mathcal{E}}(j)) \le \mathrm{P}(\tilde{\mathcal{E}}(j-1)) + \mathrm{P}(\tilde{\mathcal{E}}_1(j) \cap \tilde{\mathcal{E}}^c(j-1)) + \mathrm{P}(\tilde{\mathcal{E}}_2(j)) \]
- Consider the second term:
\begin{align*}
\mathrm{P}(\tilde{\mathcal{E}}_1(j) \cap \tilde{\mathcal{E}}^c(j-1)) &= \mathrm{P}\{ (X_1^n(1 \mid \tilde{M}_{j-1}), X_2^n(\tilde{M}_{j-1}), Y_2^n(j)) \notin T_\epsilon^{(n)},\, \tilde{M}_{j-1} = 1 \} \\
&\le \mathrm{P}\{ (X_1^n(1 \mid 1), X_2^n(1), Y_2^n(j)) \notin T_\epsilon^{(n)} \mid \tilde{M}_{j-1} = 1 \},
\end{align*}
  which, by the independence of the codebooks and the LLN, $\to 0$ as $n \to \infty$
- By the packing lemma, $\mathrm{P}(\tilde{\mathcal{E}}_2(j)) \to 0$ as $n \to \infty$ if $R < I(X_1; Y_2 \mid X_2) - \delta(\epsilon)$
- Since $\tilde{M}_0 = 1$, by induction, $\mathrm{P}(\tilde{\mathcal{E}}(j)) \to 0$ as $n \to \infty$ for every $j \in [1 : b-1]$
- Thus we have shown that, under the given constraints on the rate, $\mathrm{P}\{\hat{M}_j \ne M_j\} \to 0$ as $n \to \infty$ for every $j \in [1 : b-1]$

Decode–Forward Lower Bound

- Coherent multihop can be further improved by combining the information received through the direct path with the information from the relay

Decode–Forward Lower Bound (Cover–El Gamal 1979)
\[ C \ge \max_{p(x_1, x_2)} \min\{ I(X_1, X_2; Y_3),\; I(X_1; Y_2 \mid X_2) \} \]

- Tight for a physically degraded DM-RC, i.e.,
  \[ p(y_2, y_3|x_1, x_2) = p(y_2|x_1, x_2)\, p(y_3 \mid y_2, x_2) \]

Proof of Achievability (Zeng–Kuhlmann–Buzo 1989)

- We use backward decoding (Willems–van der Meulen 1985)
- Codebook generation, encoding, and relay encoding are the same as in the coherent multihop scheme
- Codebooks: $\mathcal{C}_j = \{ (x_1^n(m_j \mid m_{j-1}), x_2^n(m_{j-1})) : m_{j-1}, m_j \in [1 : 2^{nR}] \}$, $j \in [1 : b]$

[Block schedule as in the coherent multihop scheme]

Decoding:
- Decoding at the receiver is done backwards, after all $b$ blocks are received
- For $j = b-1, \ldots, 1$, the receiver finds the unique message $\hat{m}_j$ such that $(x_1^n(\hat{m}_{j+1} \mid \hat{m}_j), x_2^n(\hat{m}_j), y_3^n(j+1)) \in T_\epsilon^{(n)}$, successively with the initial condition $\hat{m}_b = 1$

Analysis of the Probability of Error

- We analyze the probability of decoding error for $M_j$ averaged over codebooks
- Assume $M_j = M_{j+1} = 1$
- The decoder makes an error only if one or more of the following events occur:
  - $\tilde{\mathcal{E}}(j) = \{ \tilde{M}_j \ne 1 \}$
  - $\mathcal{E}(j+1) = \{ \hat{M}_{j+1} \ne 1 \}$
  - $\mathcal{E}_1(j) = \{ (X_1^n(\hat{M}_{j+1} \mid \tilde{M}_j), X_2^n(\tilde{M}_j), Y_3^n(j+1)) \notin T_\epsilon^{(n)} \}$
  - $\mathcal{E}_2(j) = \{ (X_1^n(\hat{M}_{j+1} \mid m_j), X_2^n(m_j), Y_3^n(j+1)) \in T_\epsilon^{(n)} \text{ for some } m_j \ne \tilde{M}_j \}$
- Thus, the probability of error is upper bounded as
\begin{align*}
\mathrm{P}(\mathcal{E}(j)) = \mathrm{P}\{\hat{M}_j \ne 1\} &\le \mathrm{P}(\tilde{\mathcal{E}}(j) \cup \mathcal{E}(j+1) \cup \mathcal{E}_1(j) \cup \mathcal{E}_2(j)) \\
&\le \mathrm{P}(\tilde{\mathcal{E}}(j)) + \mathrm{P}(\mathcal{E}(j+1)) + \mathrm{P}(\mathcal{E}_1(j) \cap \tilde{\mathcal{E}}^c(j) \cap \mathcal{E}^c(j+1)) + \mathrm{P}(\mathcal{E}_2(j))
\end{align*}
- As in the coherent multihop scheme, the first term $\to 0$ as $n \to \infty$ if $R < I(X_1; Y_2 \mid X_2) - \delta(\epsilon)$

- The third term is upper bounded as
\begin{align*}
\mathrm{P}(\mathcal{E}_1(j) \cap \{\hat{M}_{j+1} = 1\} \cap \{\tilde{M}_j = 1\}) &= \mathrm{P}\{ (X_1^n(1 \mid 1), X_2^n(1), Y_3^n(j+1)) \notin T_\epsilon^{(n)},\, \hat{M}_{j+1} = 1,\, \tilde{M}_j = 1 \} \\
&\le \mathrm{P}\{ (X_1^n(1 \mid 1), X_2^n(1), Y_3^n(j+1)) \notin T_\epsilon^{(n)} \mid \tilde{M}_j = 1 \},
\end{align*}
  which, by the independence of the codebooks and the LLN, $\to 0$ as $n \to \infty$
- By the same independence and the packing lemma, the fourth term $\mathrm{P}(\mathcal{E}_2(j)) \to 0$ as $n \to \infty$ if $R < I(X_1, X_2; Y_3) - \delta(\epsilon)$
- Finally, for the second term, since $\hat{M}_b = M_b = 1$, by induction, $\mathrm{P}\{\hat{M}_j \ne M_j\} \to 0$ as $n \to \infty$ for every $j \in [1 : b-1]$ if the given constraints on the rate are satisfied

Compress–Forward Lower Bound

- In the decode–forward coding scheme, the relay recovers the entire message
- If the channel from the sender to the relay is worse than the direct channel to the receiver, this requirement can reduce the rate below that of direct transmission (in which case the relay is not used)
- In the compress–forward coding scheme, the relay helps communication by sending a description of its received sequence to the receiver

Compress–Forward Lower Bound (Cover–El Gamal 1979, El Gamal–Mohseni–Zahedi 2006)
\[ C \ge \max_{p(x_1)p(x_2)p(\hat{y}_2|y_2, x_2)} \min\{ I(X_1, X_2; Y_3) - I(Y_2; \hat{Y}_2 \mid X_1, X_2, Y_3),\; I(X_1; \hat{Y}_2, Y_3 \mid X_2) \} \]

Proof of Achievability

- We use block Markov coding, joint typicality encoding, binning, and simultaneous nonunique decoding
- At the end of block $j$, the relay chooses a reconstruction sequence $\hat{y}_2^n(j)$ of its received sequence $y_2^n(j)$
- Since the receiver has the side information $y_3^n(j)$, we use binning to reduce the rate
- The bin index is sent to the receiver in block $j+1$ via $x_2^n(j+1)$
- At the end of block $j+1$, the receiver recovers the bin index, and then recovers $m_j$ and the compression index simultaneously



Codebook generation:
Fix p(x1)p(x2)p(ŷ2 | y2, x2) that attains the lower bound
For j ∈ [1 : b], randomly and independently generate 2^{nR} sequences x1^n(m_j) ~ ∏_{i=1}^n p_{X1}(x1i), m_j ∈ [1 : 2^{nR}]
Similarly generate 2^{nR2} sequences x2^n(l_{j−1}) ~ ∏_{i=1}^n p_{X2}(x2i), l_{j−1} ∈ [1 : 2^{nR2}]
For each l_{j−1} ∈ [1 : 2^{nR2}], randomly and conditionally independently generate 2^{nR̂2} sequences ŷ2^n(k_j | l_{j−1}) ~ ∏_{i=1}^n p_{Ŷ2|X2}(ŷ2i | x2i(l_{j−1})), k_j ∈ [1 : 2^{nR̂2}]
Codebooks: C_j = {(x1^n(m_j), x2^n(l_{j−1})) : m_j ∈ [1 : 2^{nR}], l_{j−1} ∈ [1 : 2^{nR2}]}, j ∈ [1 : b]
Partition the set [1 : 2^{nR̂2}] into 2^{nR2} equal-size bins B(l_j), l_j ∈ [1 : 2^{nR2}] (a small sketch of this partition follows)
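A minimal sketch of the equal-size binning step; the block length and rates below are hypothetical, and any choice with R̂2 > R2 works the same way:

    import numpy as np

    n, R2, Rhat2 = 20, 0.3, 0.5          # hypothetical block length and rates
    num_k = 2 ** int(n * Rhat2)          # number of compression indices k_j
    num_bins = 2 ** int(n * R2)          # number of bin (forwarding) indices l_j

    # Equal-size partition of [0, num_k) into num_bins bins B(l):
    bin_of = np.arange(num_k) // (num_k // num_bins)

    def B(l):
        """Return the compression indices in bin l (the set B(l) on the slide)."""
        return np.where(bin_of == l)[0]

    assert all(len(B(l)) == num_k // num_bins for l in range(num_bins))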



Block:  1    2    3    ...    b−1    b
X1 :  x1^n(m1)    x1^n(m2)    x1^n(m3)    ...    x1^n(mb−1)    x1^n(1)
Y2 :  ŷ2^n(k1|1), l1    ŷ2^n(k2|l1), l2    ŷ2^n(k3|l2), l3    ...    ŷ2^n(kb−1|lb−2), lb−1    —
X2 :  x2^n(1)    x2^n(l1)    x2^n(l2)    ...    x2^n(lb−2)    x2^n(lb−1)
Y3 :  —    l̂1, k̂1, m̂1    l̂2, k̂2, m̂2    ...    l̂b−2, k̂b−2, m̂b−2    l̂b−1, k̂b−1, m̂b−1

Encoding:
To send m_j, transmit x1^n(m_j) from codebook C_j
Relay encoding:
At the end of block j, find an index k_j such that (y2^n(j), ŷ2^n(k_j | l_{j−1}), x2^n(l_{j−1})) ∈ T_ε′^(n)
In block j+1, transmit x2^n(l_j), where l_j is the bin index of k_j
Decoding:
At the end of block j+1, find the unique l̂_j such that (x2^n(l̂_j), y3^n(j+1)) ∈ T_ε^(n)
Then find the unique m̂_j such that (x1^n(m̂_j), x2^n(l̂_{j−1}), ŷ2^n(k̂_j | l̂_{j−1}), y3^n(j)) ∈ T_ε^(n) for some k̂_j ∈ B(l̂_j)



Analysis of the Probability of Error

Assume M_j = 1, and let L_{j−1}, L_j, K_j denote the indices chosen by the relay
The decoder makes an error only if one or more of the following events occur:
Ẽ(j) = {(X2^n(L_{j−1}), Ŷ2^n(k_j | L_{j−1}), Y2^n(j)) ∉ T_ε′^(n) for all k_j ∈ [1 : 2^{nR̂2}]}
E1(j−1) = {L̂_{j−1} ≠ L_{j−1}}
E1(j) = {L̂_j ≠ L_j}
E2(j) = {(X1^n(1), X2^n(L_{j−1}), Ŷ2^n(K_j | L_{j−1}), Y3^n(j)) ∉ T_ε^(n)}
E3(j) = {(X1^n(m_j), X2^n(L_{j−1}), Ŷ2^n(K_j | L_{j−1}), Y3^n(j)) ∈ T_ε^(n) for some m_j ≠ 1}
E4(j) = {(X1^n(m_j), X2^n(L_{j−1}), Ŷ2^n(k_j | L_{j−1}), Y3^n(j)) ∈ T_ε^(n) for some k_j ∈ B(L_j), k_j ≠ K_j, m_j ≠ 1}
Thus, the probability of error is upper bounded as
P(E(j)) = P{M̂_j ≠ 1}
≤ P(Ẽ(j)) + P(E1(j−1)) + P(E1(j)) + P(E2(j) ∩ Ẽ^c(j) ∩ E1^c(j−1)) + P(E3(j)) + P(E4(j) ∩ E1^c(j−1) ∩ E1^c(j))



By the independence of the codebooks and the covering lemma (with U ← X2, X ← Y2, X̂ ← Ŷ2), the first term → 0 as n → ∞ if R̂2 > I(Y2; Ŷ2 | X2) + δ(ε′)
As in the multihop coding scheme, the next two terms P{L̂_{j−1} ≠ L_{j−1}} → 0 and P{L̂_j ≠ L_j} → 0 as n → ∞ if R2 < I(X2; Y3) − δ(ε)
The fourth term P{(X1^n(1), X2^n(L_{j−1}), Ŷ2^n(K_j | L_{j−1}), Y3^n(j)) ∉ T_ε^(n) | Ẽ^c(j)} → 0 by the independence of the codebooks and the conditional typicality lemma


Covering Lemma

Let (U, X, X̂) ~ p(u, x, x̂) and ε′ < ε
Let (U^n, X^n) ~ p(u^n, x^n) be arbitrarily distributed such that lim_{n→∞} P{(U^n, X^n) ∈ T_ε′^(n)(U, X)} = 1
Let X̂^n(m) ~ ∏_{i=1}^n p_{X̂|U}(x̂_i | u_i), m ∈ A, where |A| ≥ 2^{nR}, be conditionally independent of each other and of X^n given U^n

Covering Lemma
There exists δ(ε) → 0 as ε → 0 such that lim_{n→∞} P{(U^n, X^n, X̂^n(m)) ∉ T_ε^(n) for all m ∈ A} = 0, if R > I(X; X̂ | U) + δ(ε)




By the same independence and the packing lemma, P(E3(j)) → 0 as n → ∞ if R < I(X1; X2, Ŷ2, Y3) − δ(ε) = I(X1; Ŷ2, Y3 | X2) − δ(ε)
As in Wyner–Ziv coding, the last term ≤ P{(X1^n(m_j), X2^n(L_{j−1}), Ŷ2^n(k_j | L_{j−1}), Y3^n(j)) ∈ T_ε^(n) for some k_j ∈ B(1), m_j ≠ 1}, which, by the independence of the codebooks, the joint typicality lemma, and the union bound, → 0 as n → ∞ if R + R̂2 − R2 < I(X1; Y3 | X2) + I(Ŷ2; X1, Y3 | X2) − δ(ε)
Eliminating R2 and R̂2 from these constraints yields the compress–forward lower bound


Summary

1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

New tools: block Markov coding, coherent cooperation, decode–forward, backward decoding, compress–forward


Multicast Network

DM Multicast Network (MN)


Multicast communication network
2

1
p(y1 , . . . , yN |x1 , . . . , xN )

j
M

N
M

k
M

Assume an N-node DM-MN model ( j=1 X j , p(y N |x N ), j=1 Y j )


N

Topology of the network is defined through p(y N |x N )


A (2nR , n) code for the DM-MN:

Message set: [1 : 2nR ]


Source encoder: x1i (m, y1i1 ), i [1 : n]
Relay encoder j [2 : N]: x ji (y i1
j ), i [1 : n]
k (ykn )
Decoder k D: m
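The defining feature of the code is causality: node j's transmission at time i may depend only on what that node has observed so far. A minimal sketch of these interfaces (the type names are hypothetical, for illustration only):

    from typing import Protocol, Sequence

    class SourceEncoder(Protocol):
        """x_{1i}(m, y_1^{i-1}): symbol i depends on the message and node 1's past outputs."""
        def __call__(self, i: int, m: int, past_y1: Sequence[int]) -> int: ...

    class RelayEncoder(Protocol):
        """x_{ji}(y_j^{i-1}): symbol i depends only on node j's past received symbols."""
        def __call__(self, i: int, past_yj: Sequence[int]) -> int: ...

    class Decoder(Protocol):
        """m_hat_k(y_k^n): node k in D estimates the message from its whole received block."""
        def __call__(self, yk: Sequence[int]) -> int: ...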


Assume M ~ Unif[1 : 2^{nR}]
Average probability of error: P_e^(n) = P{M̂_k ≠ M for some k ∈ D}
R is achievable if there exists a sequence of (2^{nR}, n) codes with lim_{n→∞} P_e^(n) = 0
Capacity C: the supremum of achievable rates R

Special cases:
DMC with feedback (N = 2, Y1 = Y2, X2 = ∅, and D = {2})
DM-RC (N = 3, X3 = Y1 = ∅, and D = {3})
Common-message DM-BC (X2 = ··· = XN = Y1 = ∅ and D = [2 : N])
DM unicast network (D = {N})



Network Decode–Forward

Decode–forward for the RC can be extended to the MN
[Figure: cascade network X1 → (Y2 : X2) → (Y3 : X3) → Y4, with each relay decoding the message]

Network Decode–Forward Lower Bound (Xie–Kumar 2005, Kramer–Gastpar–Gupta 2005)

C ≥ max_{p(x^N)} min_{k ∈ [1 : N−1]} I(X^k; Y_{k+1} | X_{k+1}^N)

For N = 3 and X3 = ∅, this reduces to the decode–forward lower bound for the DM-RC
Tight for a degraded DM-MN, i.e., when p(y_{k+2}^N | x^N, y^{k+1}) = p(y_{k+2}^N | x_{k+1}^N, y_{k+1})
Holds for any D ⊆ [2 : N]
Can be improved by removing some relay nodes and relabeling the nodes


Proof of Achievability
We use block Markov coding and sliding window decoding (Carleial 1982)
We illustrate this scheme for DM-RC
Codebook generation, encoding, and relay encoding: same as before
Block:  1    2    3    ...    b−1    b
X1 :  x1^n(m1|1)    x1^n(m2|m1)    x1^n(m3|m2)    ...    x1^n(mb−1|mb−2)    x1^n(1|mb−1)
Y2 :  m̃1    m̃2    m̃3    ...    m̃b−1    —
X2 :  x2^n(1)    x2^n(m̃1)    x2^n(m̃2)    ...    x2^n(m̃b−2)    x2^n(m̃b−1)
Y3 :  —    m̂1    m̂2    ...    m̂b−2    m̂b−1

Decoding:
At the end of block j+1, find the unique m̂_j such that (x1^n(m̂_j | m̂_{j−1}), x2^n(m̂_{j−1}), y3^n(j)) ∈ T_ε^(n) and (x2^n(m̂_j), y3^n(j+1)) ∈ T_ε^(n) simultaneously



Analysis of the Probability of Error

Assume that M_{j−1} = M_j = 1
The decoder makes an error only if one or more of the following events occur:
Ẽ(j−1) = {M̃_{j−1} ≠ 1}
Ẽ(j) = {M̃_j ≠ 1}
Ê(j−1) = {M̂_{j−1} ≠ 1}
E1(j) = {(X1^n(M_j | M̃_{j−1}), X2^n(M̃_{j−1}), Y3^n(j)) ∉ T_ε^(n) or (X2^n(M̃_j), Y3^n(j+1)) ∉ T_ε^(n)}
E2(j) = {(X1^n(m_j | M̃_{j−1}), X2^n(M̃_{j−1}), Y3^n(j)) ∈ T_ε^(n) and (X2^n(m_j), Y3^n(j+1)) ∈ T_ε^(n) for some m_j ≠ M_j}
Thus, the probability of error is upper bounded as
P(E(j)) ≤ P(Ẽ(j−1) ∪ Ẽ(j) ∪ Ê(j−1) ∪ E1(j) ∪ E2(j))
≤ P(Ẽ(j−1)) + P(Ẽ(j)) + P(Ê(j−1)) + P(E1(j) ∩ Ẽ^c(j−1) ∩ Ẽ^c(j) ∩ Ê^c(j−1)) + P(E2(j) ∩ Ẽ^c(j))
By the independence of the codebooks, the LLN, the packing lemma, and induction, the first four terms tend to zero as n → ∞ if R < I(X1; Y2 | X2) − δ(ε′)



For the last term, write A(m_j) = {(X1^n(m_j | M̃_{j−1}), X2^n(M̃_{j−1}), Y3^n(j)) ∈ T_ε^(n)} and B(m_j) = {(X2^n(m_j), Y3^n(j+1)) ∈ T_ε^(n)}, and consider
P(E2(j) ∩ Ẽ^c(j)) = P{A(m_j) ∩ B(m_j) for some m_j ≠ 1, and M̃_j = 1}
≤ ∑_{m_j ≠ 1} P(A(m_j) ∩ B(m_j) ∩ {M̃_j = 1})
(a)= ∑_{m_j ≠ 1} P(A(m_j) ∩ {M̃_j = 1}) · P(B(m_j) | M̃_j = 1)
≤ ∑_{m_j ≠ 1} P(A(m_j)) · P(B(m_j) | M̃_j = 1)
(b)≤ 2^{nR} · 2^{−n(I(X1; Y3 | X2) − δ(ε))} · 2^{−n(I(X2; Y3) − δ(ε))}
→ 0 as n → ∞ if R < I(X1; Y3 | X2) + I(X2; Y3) − 2δ(ε) = I(X1, X2; Y3) − 2δ(ε)
(a) holds since A(m_j) and B(m_j) are conditionally independent given M̃_j = 1 for m_j ≠ 1
(b) follows from the independence of the codebooks and the joint typicality lemma
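Step (b) is just exponent bookkeeping: the union bound over 2^{nR} wrong messages loses to the two typicality exponents exactly when R < I(X1; Y3 | X2) + I(X2; Y3) − 2δ(ε). A toy numeric check, with hypothetical values of the two mutual informations:

    # 2^{nR} * 2^{-n(Ia - d)} * 2^{-n(Ib - d)} = 2^{-n(Ia + Ib - 2d - R)} -> 0
    # iff R < Ia + Ib - 2d; here the threshold is Ia + Ib - 2d = 1.18 bits.
    Ia, Ib, d = 0.5, 0.7, 0.01
    for R in (1.0, 1.3):     # one rate below, one above the threshold
        print(R, [2.0 ** (-n * (Ia + Ib - 2 * d - R)) for n in (10, 50, 100)])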


Noisy Network Coding

Compress–forward for the DM-RC can be extended to the DM-MN

Theorem (Noisy Network Coding Lower Bound)
C ≥ max min_{k ∈ D} min_{S : 1 ∈ S, k ∈ S^c} { I(X(S); Ŷ(S^c), Y_k | X(S^c)) − I(Y(S); Ŷ(S) | X^N, Ŷ(S^c), Y_k) },
where the maximum is over all ∏_{k=1}^N p(x_k) p(ŷ_k | y_k, x_k), Ŷ1 = ∅ by convention, X(S) denotes the inputs in S, and Ŷ(S^c) denotes the compressed outputs in S^c

Special cases:
Compress–forward lower bound for the DM-RC (N = 3 and X3 = ∅)
Network coding theorem for graphical MNs (Ahlswede–Cai–Li–Yeung 2000)
Capacity of deterministic MNs with no interference (Ratnakar–Kramer 2006)
Capacity of wireless erasure MNs (Dana–Gowaikar–Palanki–Hassibi–Effros 2006)
Lower bound for general deterministic MNs (Avestimehr–Diggavi–Tse 2011)

Can be extended to Gaussian networks (giving the best known gap result) and to multiple messages (Lim–Kim–El Gamal–Chung 2011)
A sketch of the cut-enumeration structure of the bound follows
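The double minimum in the theorem ranges over destinations and over cuts S separating the source from each destination. A small combinatorial sketch of that structure, with the two mutual information terms left as user-supplied oracles (hypothetical function names; evaluating them for a given pmf is the real work):

    from itertools import combinations

    def nnc_bound(N, D, info_term, penalty_term):
        """min over k in D and cuts S (1 in S, k in S^c) of
        info_term(S, k) - penalty_term(S, k), where the oracles return
        I(X(S); Yhat(S^c), Y_k | X(S^c)) and I(Y(S); Yhat(S) | X^N, Yhat(S^c), Y_k)
        for a fixed choice of prod_k p(x_k) p(yhat_k | y_k, x_k)."""
        best = float("inf")
        for k in D:
            middle = [j for j in range(1, N + 1) if j not in (1, k)]
            for r in range(len(middle) + 1):
                for extra in combinations(middle, r):
                    S = frozenset({1, *extra})   # source side of the cut; k is in S^c
                    best = min(best, info_term(S, k) - penalty_term(S, k))
        return best

For the relay channel (N = 3, D = {3}) the enumeration reduces to the two cuts S = {1} and S = {1, 2}, recovering the two terms of the compress–forward bound.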


Proof of Achievability

We use several new ideas beyond compress–forward for the DM-RC:
The source node sends the same message m ∈ [1 : 2^{nbR}] over all b blocks
Relay node j sends the index of the compressed version Ŷ_j^n of Y_j^n without binning
Each receiver node performs simultaneous nonunique decoding of the message and the compression indices from all b blocks
We illustrate this scheme for DM-RC

Codebook generation:
Fix p(x1)p(x2)p(ŷ2 | y2, x2) that attains the lower bound
For each j ∈ [1 : b], randomly and independently generate 2^{nbR} sequences x1^n(j, m) ~ ∏_{i=1}^n p_{X1}(x1i), m ∈ [1 : 2^{nbR}]
Randomly and independently generate 2^{nR2} sequences x2^n(l_{j−1}) ~ ∏_{i=1}^n p_{X2}(x2i), l_{j−1} ∈ [1 : 2^{nR2}]
For each l_{j−1} ∈ [1 : 2^{nR2}], randomly and conditionally independently generate 2^{nR2} sequences ŷ2^n(l_j | l_{j−1}) ~ ∏_{i=1}^n p_{Ŷ2|X2}(ŷ2i | x2i(l_{j−1})), l_j ∈ [1 : 2^{nR2}]
Codebooks: C_j = {(x1^n(j, m), x2^n(l_{j−1}), ŷ2^n(l_j | l_{j−1})) : m ∈ [1 : 2^{nbR}], l_j, l_{j−1} ∈ [1 : 2^{nR2}]}, j ∈ [1 : b]



Block:  1    2    3    ...    b−1    b
X1 :  x1^n(1, m)    x1^n(2, m)    x1^n(3, m)    ...    x1^n(b−1, m)    x1^n(b, m)
Y2 :  ŷ2^n(l1|1), l1    ŷ2^n(l2|l1), l2    ŷ2^n(l3|l2), l3    ...    ŷ2^n(lb−1|lb−2), lb−1    ŷ2^n(lb|lb−1), lb
X2 :  x2^n(1)    x2^n(l1)    x2^n(l2)    ...    x2^n(lb−2)    x2^n(lb−1)
Y3 :  (decodes m̂ only at the end of block b)

Encoding:
To send m ∈ [1 : 2^{nbR}], transmit x1^n(j, m) in block j
Relay encoding:
At the end of block j, find an index l_j such that (y2^n(j), ŷ2^n(l_j | l_{j−1}), x2^n(l_{j−1})) ∈ T_ε′^(n)
In block j+1, transmit x2^n(l_j)
Decoding:
At the end of block b, find the unique m̂ such that (x1^n(j, m̂), x2^n(l_{j−1}), ŷ2^n(l_j | l_{j−1}), y3^n(j)) ∈ T_ε^(n) for all j ∈ [1 : b], for some l_1, l_2, ..., l_b (a brute-force sketch of this search follows)
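Conceptually the receiver searches jointly over the message and all compression-index sequences. A brute-force sketch of that search, with the typicality test left as a hypothetical oracle; the exponential cost in b is the point of the sketch, not a practical algorithm:

    from itertools import product

    def nnc_decode(is_typical, num_m, num_l, b):
        """Simultaneous nonunique decoding sketch: find the unique m such that
        some (l_1, ..., l_b) makes the block-j tuple jointly typical for every j.
        is_typical(j, m, l_prev, l_cur) is a user-supplied oracle; l_0 = 0."""
        found = []
        for m in range(num_m):
            for ls in product(range(num_l), repeat=b):   # num_l ** b candidates
                if all(is_typical(j, m, ls[j - 1] if j > 0 else 0, ls[j])
                       for j in range(b)):
                    found.append(m)    # the l-sequence itself need not be unique
                    break
        return found[0] if len(found) == 1 else None     # else declare an error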



Analysis of the Probability of Error

Assume M = 1 and L_1 = L_2 = ··· = L_b = 1
The decoder makes an error only if one or more of the following events occur:
E1 = {(Y2^n(j), Ŷ2^n(l_j | 1), X2^n(1)) ∉ T_ε′^(n) for all l_j, for some j ∈ [1 : b]}
E2 = {(X1^n(j, 1), X2^n(1), Ŷ2^n(1 | 1), Y3^n(j)) ∉ T_ε^(n) for some j ∈ [1 : b]}
E3 = {(X1^n(j, m), X2^n(l_{j−1}), Ŷ2^n(l_j | l_{j−1}), Y3^n(j)) ∈ T_ε^(n) for all j, for some l^b and m ≠ 1}
Thus, the probability of error is upper bounded as
P(E) ≤ P(E1) + P(E2 ∩ E1^c) + P(E3)
By the covering lemma and the union of events bound (over the b blocks), P(E1) → 0 as n → ∞ if R2 > I(Y2; Ŷ2 | X2) + δ(ε′)
By the conditional typicality lemma and the union of events bound, P(E2 ∩ E1^c) → 0 as n → ∞



Define Ẽ_j(m, l_{j−1}, l_j) = {(X1^n(j, m), X2^n(l_{j−1}), Ŷ2^n(l_j | l_{j−1}), Y3^n(j)) ∈ T_ε^(n)}
Then
P(E3) = P( ⋃_{m ≠ 1} ⋃_{l^b} ⋂_{j=1}^b Ẽ_j(m, l_{j−1}, l_j) )
≤ ∑_{m ≠ 1} ∑_{l^b} P( ⋂_{j=1}^b Ẽ_j(m, l_{j−1}, l_j) )
= ∑_{m ≠ 1} ∑_{l^b} ∏_{j=1}^b P(Ẽ_j(m, l_{j−1}, l_j))
≤ ∑_{m ≠ 1} ∑_{l^b} ∏_{j=2}^b P(Ẽ_j(m, l_{j−1}, l_j))
If l_{j−1} = 1, then by the joint typicality lemma, P(Ẽ_j) ≤ 2^{−n(I1 − δ(ε))}, where I1 = I(X1; Ŷ2, Y3 | X2)
Similarly, if l_{j−1} ≠ 1, then P(Ẽ_j) ≤ 2^{−n(I2 − δ(ε))}, where I2 = I(X1, X2; Y3) + I(Ŷ2; X1, Y3 | X2)
Thus, if l^{b−1} has k 1s, then ∏_{j=2}^b P(Ẽ_j(m, l_{j−1}, l_j)) ≤ 2^{−n(k I1 + (b−1−k) I2 − (b−1)δ(ε))}



Continuing with the bound,
∑_{m ≠ 1} ∑_{l^b} ∏_{j=2}^b P(Ẽ_j(m, l_{j−1}, l_j))
= ∑_{m ≠ 1} ∑_{l_b} ∑_{l^{b−1}} ∏_{j=2}^b P(Ẽ_j(m, l_{j−1}, l_j))
≤ ∑_{m ≠ 1} ∑_{l_b} ∑_{j=0}^{b−1} (b−1 choose j) 2^{n(b−1−j)R2} 2^{−n(j I1 + (b−1−j) I2 − (b−1)δ(ε))}
= ∑_{m ≠ 1} ∑_{l_b} ∑_{j=0}^{b−1} (b−1 choose j) 2^{−n(j I1 + (b−1−j)(I2 − R2) − (b−1)δ(ε))}
≤ ∑_{m ≠ 1} ∑_{l_b} ∑_{j=0}^{b−1} (b−1 choose j) 2^{−n(b−1)(min{I1, I2 − R2} − δ(ε))}
≤ 2^{nbR} 2^{nR2} 2^b 2^{−n(b−1)(min{I1, I2 − R2} − δ(ε))},
which → 0 as n → ∞ if R < ((b−1)(min{I1, I2 − R2} − δ(ε)) − R2)/b
Finally, by eliminating R2 > I(Y2; Ŷ2 | X2) + δ(ε′), substituting I1 and I2, and taking b → ∞, we have shown that P(E) → 0 as n → ∞ if
R < min{ I(X1; Ŷ2, Y3 | X2), I(X1, X2; Y3) − I(Y2; Ŷ2 | X1, X2, Y3) } − δ(ε) − δ(ε′)
This completes the proof of achievability for noisy network coding; the vanishing rate penalty in b is illustrated below
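A quick numeric look at the rate condition R < ((b−1)(min{I1, I2 − R2} − δ(ε)) − R2)/b: the loss from sending one message over b blocks decays like 1/b. The values of I1, I2, R2, δ below are hypothetical:

    I1, I2, R2, delta = 0.9, 1.4, 0.6, 0.01   # hypothetical values
    target = min(I1, I2 - R2) - delta         # limiting rate as b grows
    for b in (2, 5, 10, 100, 1000):
        R = ((b - 1) * target - R2) / b
        print(b, round(R, 4))                 # approaches target = 0.79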

Summary

1. Typical Sequences
2. Point-to-Point Communication
3. Multiple Access Channel
4. Broadcast Channel
5. Lossy Source Coding
6. Wyner–Ziv Coding
7. Gelfand–Pinsker Coding
8. Wiretap Channel
9. Relay Channel
10. Multicast Network

New tools: network decode–forward, sliding window decoding, noisy network coding, sending the same message multiple times using independent codebooks, going beyond the packing lemma

Conclusion
Presented a unified approach to achievability proofs for DM networks:
Typicality and elementary lemmas
Coding techniques: random coding, joint typicality encoding/decoding, simultaneous (nonunique) decoding, superposition coding, binning, multicoding
Results can be extended to Gaussian models via discretization procedures
Lossless source coding is a corollary of lossy source coding

Network Information Theory book:
Comprehensive coverage of this approach
More advanced coding techniques and analysis tools
Converse techniques (DM and Gaussian)
Open problems

Although the theory is far from complete, we hope that our approach will
Make the subject accessible to students, researchers, and communication engineers
Help in the quest for a unified theory of information flow in networks


References
Ahlswede, R. (1971). Multiway communication channels. In Proc. 2nd Int. Symp. Inf. Theory, Tsahkadsor, Armenian SSR, pp. 23–52.
Ahlswede, R., Cai, N., Li, S.-Y. R., and Yeung, R. W. (2000). Network information flow. IEEE Trans. Inf. Theory, 46(4), 1204–1216.
Avestimehr, A. S., Diggavi, S. N., and Tse, D. N. C. (2011). Wireless network information flow: A deterministic approach. IEEE Trans. Inf. Theory, 57(4), 1872–1905.
Bergmans, P. P. (1973). Random coding theorem for broadcast channels with degraded components. IEEE Trans. Inf. Theory, 19(2), 197–207.
Carleial, A. B. (1982). Multiple-access channels with different generalized feedback signals. IEEE Trans. Inf. Theory, 28(6), 841–850.
Costa, M. H. M. (1983). Writing on dirty paper. IEEE Trans. Inf. Theory, 29(3), 439–441.
Cover, T. M. (1972). Broadcast channels. IEEE Trans. Inf. Theory, 18(1), 2–14.
Cover, T. M. and El Gamal, A. (1979). Capacity theorems for the relay channel. IEEE Trans. Inf. Theory, 25(5), 572–584.
Csiszár, I. and Körner, J. (1978). Broadcast channels with confidential messages. IEEE Trans. Inf. Theory, 24(3), 339–348.
Dana, A. F., Gowaikar, R., Palanki, R., Hassibi, B., and Effros, M. (2006). Capacity of wireless erasure networks. IEEE Trans. Inf. Theory, 52(3), 789–804.

El Gamal, A., Mohseni, M., and Zahedi, S. (2006). Bounds on capacity and minimum energy-per-bit for AWGN relay channels. IEEE Trans. Inf. Theory, 52(4), 1545–1561.
Elias, P., Feinstein, A., and Shannon, C. E. (1956). A note on the maximum flow through a network. IRE Trans. Inf. Theory, 2(4), 117–119.
Ford, L. R., Jr. and Fulkerson, D. R. (1956). Maximal flow through a network. Canad. J. Math., 8(3), 399–404.
Gelfand, S. I. and Pinsker, M. S. (1980). Coding for channel with random parameters. Probl. Control Inf. Theory, 9(1), 19–31.
Han, T. S. and Kobayashi, K. (1981). A new achievable rate region for the interference channel. IEEE Trans. Inf. Theory, 27(1), 49–60.
Heegard, C. and El Gamal, A. (1983). On the capacity of computer memories with defects. IEEE Trans. Inf. Theory, 29(5), 731–739.
Kramer, G., Gastpar, M., and Gupta, P. (2005). Cooperative strategies and capacity theorems for relay networks. IEEE Trans. Inf. Theory, 51(9), 3037–3063.
Liao, H. H. J. (1972). Multiple access channels. Ph.D. thesis, University of Hawaii, Honolulu, HI.
Lim, S. H., Kim, Y.-H., El Gamal, A., and Chung, S.-Y. (2011). Noisy network coding. IEEE Trans. Inf. Theory, 57(5), 3132–3152.
McEliece, R. J. (1977). The Theory of Information and Coding. Addison-Wesley, Reading, MA.

Orlitsky, A. and Roche, J. R. (2001). Coding for computing. IEEE Trans. Inf. Theory, 47(3), 903–917.
Ratnakar, N. and Kramer, G. (2006). The multicast capacity of deterministic relay networks with no interference. IEEE Trans. Inf. Theory, 52(6), 2425–2432.
Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Tech. J., 27(3), 379–423, 27(4), 623–656.
Shannon, C. E. (1959). Coding theorems for a discrete source with a fidelity criterion. In IRE Int. Conv. Rec., vol. 7, part 4, pp. 142–163. Reprint with changes (1960). In R. E. Machol (ed.) Information and Decision Processes, pp. 93–126. McGraw-Hill, New York.
Shannon, C. E. (1961). Two-way communication channels. In Proc. 4th Berkeley Symp. Math. Statist. Probab., vol. I, pp. 611–644. University of California Press, Berkeley.
Slepian, D. and Wolf, J. K. (1973a). Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory, 19(4), 471–480.
Slepian, D. and Wolf, J. K. (1973b). A coding theorem for multiple access channels with correlated sources. Bell Syst. Tech. J., 52(7), 1037–1076.
Willems, F. M. J. and van der Meulen, E. C. (1985). The discrete memoryless multiple-access channel with cribbing encoders. IEEE Trans. Inf. Theory, 31(3), 313–327.
Wyner, A. D. (1975). The wire-tap channel. Bell Syst. Tech. J., 54(8), 1355–1387.

Wyner, A. D. and Ziv, J. (1976). The rate–distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory, 22(1), 1–10.
Xie, L.-L. and Kumar, P. R. (2005). An achievable rate for the multiple-level relay channel. IEEE Trans. Inf. Theory, 51(4), 1348–1358.
Zeng, C.-M., Kuhlmann, F., and Buzo, A. (1989). Achievability proof of some multiuser channel coding theorems using backward decoding. IEEE Trans. Inf. Theory, 35(6), 1160–1165.
