Introduction
Wireless Channels
y = Hx + n

x = K-dimensional complex-valued input vector
y = N-dimensional complex-valued output vector
n = N-dimensional additive Gaussian noise
H = N×K random channel matrix known to the receiver

This model applies to a variety of communication problems by simply reinterpreting K, N, and H:
> Fading
> Wideband
> Multiuser
> Multiantenna
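As a quick numerical illustration of the model (not part of the original slides), the sketch below draws one realization of y = Hx + n. The variance-1/N channel normalization and the unit-variance input are assumptions chosen so that the SNR definition used later (N E[||x||²]/(K E[||n||²])) reduces to 1/σ² with σ² the per-component noise variance.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 4, 6          # input (transmit) and output (receive) dimensions
snr = 10.0           # SNR = N E[||x||^2] / (K E[||n||^2])

# Random channel: N x K complex Gaussian entries with variance 1/N (assumed normalization)
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)

# Unit-variance complex input; with E[||x||^2] = K, the SNR definition gives noise variance 1/snr
x = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
n = np.sqrt(1.0 / (2 * snr)) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = H @ x + n

# Normalized mutual information of this realization with Gaussian inputs, in bits
_, logdet = np.linalg.slogdet(np.eye(N) + snr * H @ H.conj().T)
print("y =", np.round(y, 3))
print("(1/N) log2 det(I + SNR HH^H) =", round(logdet / np.log(2) / N, 3))
```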
Multi-Antenna Channels

y = Hx + n

K and N are the numbers of transmit and receive antennas.
H = propagation matrix: an N×K complex matrix whose entries represent the gains between each transmit and each receive antenna.
[Block diagrams of multiuser-channel examples omitted; only the channel equations are retained.]

y = Hx + n with H = SA, where the columns s₁, …, s_K of S are the users' signature waveforms and A = diag(A₁₁, …, A_KK) collects the received amplitudes (users k = 1, …, K).

y = Hx + n with H = GS (n = 1, …, N, k = 1, …, K).
If, as N → ∞, F^N_A(·) converges almost surely (a.s.), the corresponding limit (asymptotic ESD) is simply denoted by F_A(·). F̄^N_A(·) denotes the expected ESD.
I(SNR) = (1/N) log det(I + SNR HH†) = (1/N) Σ_{i=1}^{N} log(1 + SNR λᵢ(HH†)) = ∫₀^∞ log(1 + SNR x) dF^N_{HH†}(x)

with F^N_{HH†}(x) the ESD of HH† and with

SNR = N E[||x||²] / (K E[||n||²]).
E[I(SNR)] = (1/N) E[log det(I + SNR HH†)] = ∫₀^∞ log(1 + SNR x) dF̄^N_{HH†}(x)
The high-SNR slope

S_∞ = lim_{SNR→∞} I(SNR) / log₂ SNR,

which for most channels gives S_∞ = min(K/N, 1), and the power offset

L_∞ = lim_{SNR→∞} ( log₂ SNR − I(SNR)/S_∞ ).

The (normalized) MMSE of estimating x from y:

MMSE = (1/K) Σ_{k=1}^{K} 1 / (1 + SNR λ_k(H†H))
     = ∫₀^∞ 1/(1 + SNR x) dF^K_{H†H}(x)
     = (N/K) ∫₀^∞ 1/(1 + SNR x) dF^N_{HH†}(x) − (N − K)/K.
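The slope/MMSE relation of these slides can be probed numerically by finite differences. The sketch below does so for one channel realization; the base-2 logarithms and the variance-1/N entries are assumptions made to stay consistent with the surrounding slides.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 8, 12
beta = K / N
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)

def mutual_info(snr):
    # (1/N) log2 det(I + SNR H H^H), in bits
    _, logdet = np.linalg.slogdet(np.eye(N) + snr * H @ H.conj().T)
    return logdet / np.log(2) / N

def mmse(snr):
    # (1/K) tr (I + SNR H^H H)^{-1}
    return np.trace(np.linalg.inv(np.eye(K) + snr * H.conj().T @ H)).real / K

snr, d = 4.0, 1e-4
lhs = snr * (mutual_info(snr + d) - mutual_info(snr - d)) / (2 * d) / np.log2(np.e)
rhs = beta * (1.0 - mmse(snr))
print(lhs, rhs)   # the two sides agree up to the finite-difference error
```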
Wishart Matrices

The p.d.f. of an m×m complex Wishart matrix B = HH† (H m×n with i.i.d. zero-mean, unit-variance complex Gaussian entries, n ≥ m) is

f(B) = ( π^{−m(m−1)/2} / Π_{i=1}^{m} (n − i)! ) e^{−tr B} (det B)^{n−m}.   (1)
The joint p.d.f. of the ordered strictly positive eigenvalues of the Wishart matrix HH†:

R. A. Fisher, "The sampling distribution of some statistics obtained from non-linear equations," The Annals of Eugenics, vol. 9, pp. 238–249, 1939.
M. A. Girshick, "On the sampling theory of roots of determinantal equations," The Annals of Math. Statistics, vol. 10, pp. 203–204, 1939.
P. L. Hsu, "On the distribution of roots of certain determinantal equations," The Annals of Eugenics, vol. 9, pp. 250–258, 1939.
S. N. Roy, "p-statistics or some generalizations in the analysis of variance appropriate to multivariate problems," Sankhya, vol. 4, pp. 381–396, 1939.
With t = min(N, K) and r = max(N, K), the joint p.d.f. of the ordered strictly positive eigenvalues λ₁ ≥ ⋯ ≥ λ_t is

e^{−Σᵢ λᵢ} Πᵢ λᵢ^{r−t} Π_{i<j} (λᵢ − λⱼ)² / Π_{i=1}^{t} (t − i)! (r − i)!,

and the marginal p.d.f. of an unordered eigenvalue is

(1/t) Σ_{k=0}^{t−1} ( k! / (k + r − t)! ) [ L_k^{r−t}(λ) ]² λ^{r−t} e^{−λ},

where the Laguerre polynomials are

L_k^{n}(λ) = (1/k!) e^{λ} λ^{−n} (d^k/dλ^k)( e^{−λ} λ^{n+k} ).
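As an illustrative check (assuming unit-variance complex Gaussian entries, the classical Wishart normalization, and the Laguerre-polynomial marginal as reconstructed above), the following sketch compares a Monte Carlo eigenvalue histogram with that marginal density.

```python
import numpy as np
from scipy.special import eval_genlaguerre, factorial

rng = np.random.default_rng(12)
t, r = 3, 5                      # t = min(N, K), r = max(N, K)
trials = 4000

# Eigenvalues of HH^H for a t x r matrix H with i.i.d. CN(0, 1) entries
eig = []
for _ in range(trials):
    H = (rng.standard_normal((t, r)) + 1j * rng.standard_normal((t, r))) / np.sqrt(2)
    eig.extend(np.linalg.eigvalsh(H @ H.conj().T))
eig = np.array(eig)

def marginal(lam):
    # (1/t) sum_k k!/(k+r-t)! [L_k^{r-t}(lam)]^2 lam^{r-t} e^{-lam}
    s = sum(factorial(k) / factorial(k + r - t) * eval_genlaguerre(k, r - t, lam) ** 2
            for k in range(t))
    return s * lam ** (r - t) * np.exp(-lam) / t

edges = np.linspace(0, 15, 8)
emp, _ = np.histogram(eig, bins=edges, density=True)
x = 0.5 * (edges[:-1] + edges[1:])
for xi, e in zip(x, emp):
    print(f"lam={xi:5.2f}  empirical={e:.4f}  formula={marginal(xi):.4f}")
```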
W = (1/√N) ×
    [  0  +1  +1  -1  -1  +1
      +1   0  -1  -1  +1  +1
      +1  -1   0  +1  +1  -1
      -1  -1  +1   0  +1  +1
      -1  +1  +1  +1   0  -1
      +1  +1  -1  +1  -1   0 ]

As the matrix dimension N → ∞, the histogram of the eigenvalues converges to the semicircle law:

f(x) = (1/2π) √(4 − x²),   −2 ≤ x ≤ 2,

provided that, for some constant κ,

max_{1≤i≤j≤N} E[ |W_{i,j}|⁴ ] ≤ κ/N².   (2)
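A minimal Monte Carlo sketch of the semicircle law for the ±1 Wigner ensemble described above; the matrix size and the bin choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1500

# Symmetric +/-1 matrix with zero diagonal, scaled by 1/sqrt(N) as on the slide
A = rng.choice([-1.0, 1.0], size=(N, N))
W = np.triu(A, 1)
W = (W + W.T) / np.sqrt(N)
eig = np.linalg.eigvalsh(W)

# Empirical density in a few bins versus the semicircle density
edges = np.linspace(-2, 2, 9)
emp, _ = np.histogram(eig, bins=edges, density=True)
x = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.maximum(4 - x**2, 0.0)) / (2 * np.pi)
for xi, e, s in zip(x, emp, semicircle):
    print(f"x={xi:+.2f}  empirical={e:.3f}  semicircle={s:.3f}")
```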
[Figure] The semicircle law density function compared with the histogram of the average of 100 empirical density functions for a Wigner matrix of size N = 10.
H = (1/√N) ×
    [ +1  -1  +1  +1  -1  -1
      +1  -1  -1  -1  -1  -1
      -1  -1  +1  -1  -1  +1
      +1  -1  -1  -1  +1  +1
      -1  +1  +1  +1  -1  +1
      +1  +1  -1  +1  -1  +1 ]

[Figure] The full-circle law and the eigenvalues of a realization of a 500×500 matrix.
V. L. Girko, "Circular law," Theory Prob. Appl., vol. 29, pp. 694–706, 1984.
Z. D. Bai, "The circle law," The Annals of Probability, pp. 494–529, 1997.

Theorem 2. Let H be an N×N complex random matrix whose entries are independent random variables with identical mean and variance and finite kth moments for k ≥ 4. Assume that the joint distributions of the real and imaginary parts of the entries have uniformly bounded densities. Then, the asymptotic spectrum of H converges almost surely to the circular law, namely the uniform distribution over the unit disk on the complex plane {ζ ∈ ℂ : |ζ| ≤ 1}, whose density is given by

f_c(ζ) = 1/π,   |ζ| ≤ 1.   (3)

(The result also holds for real matrices, replacing the assumption on the joint distribution of the real and imaginary parts with one on the one-dimensional distribution of the real-valued entries.)
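A sketch of the circular law, using a 500×500 realization as in the figure caption above; the radius checks (fractions of eigenvalues inside the unit disk and inside a half-radius disk) are a simple text stand-in for a scatter plot.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500

# i.i.d. +/-1 entries scaled by 1/sqrt(N), as in the full-circle-law example
H = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N)
lam = np.linalg.eigvals(H)

# Under the circular law these fractions approach 1 and 0.25 respectively
print("fraction with |z| <= 1.0 :", np.mean(np.abs(lam) <= 1.0))
print("fraction with |z| <= 0.5 :", np.mean(np.abs(lam) <= 0.5))
```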
q(x) = (1/π) √(4 − x²),   0 ≤ x ≤ 2.   (4)
Marcenko-Pastur Law

V. A. Marcenko and L. A. Pastur, "Distributions of eigenvalues for some sets of random matrices," Math USSR-Sbornik, vol. 1, pp. 457–483, 1967.
U. Grenander and J. W. Silverstein, "Spectral analysis of networks with random topologies," SIAM J. of Applied Mathematics, vol. 32, pp. 449–519, 1977.
K. W. Wachter, "The strong limits of random matrix spectra for sample matrices of independent elements," The Annals of Probability, vol. 6, no. 1, pp. 1–18, 1978.
J. W. Silverstein and Z. D. Bai, "On the empirical distribution of eigenvalues of a class of large dimensional random matrices," J. of Multivariate Analysis, vol. 54, pp. 175–192, 1995.
Y. Le Cun, I. Kanter, and S. A. Solla, "Eigenvalues of covariance matrices: Application to neural-network learning," Physical Review Letters, vol. 66, pp. 2396–2399, 1991.
Marcenko-Pastur Law

V. A. Marcenko and L. A. Pastur, "Distributions of eigenvalues for some sets of random matrices," Math USSR-Sbornik, vol. 1, pp. 457–483, 1967.

If the N×K matrix H has zero-mean i.i.d. entries with variance 1/N, the asymptotic ESD of HH† found in (Marcenko-Pastur, 1967) is

f(x) = [1 − β]⁺ δ(x) + √( [x − a]⁺ [b − x]⁺ ) / (2πx)

where β = K/N, [z]⁺ = max{0, z}, and

a = (1 − √β)²,   b = (1 + √β)².
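The Marcenko-Pastur density can be compared with an eigenvalue histogram as follows (a sketch: the strictly positive eigenvalues are histogrammed, so the continuous part of the law, which carries mass β, is rescaled by 1/β for the comparison).

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 800, 320
beta = K / N

H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
eig = np.linalg.eigvalsh(H @ H.conj().T)
eig = eig[eig > 1e-9]                              # drop the (1 - beta) mass at zero

a, b = (1 - np.sqrt(beta))**2, (1 + np.sqrt(beta))**2
edges = np.linspace(a, b, 7)
emp, _ = np.histogram(eig, bins=edges, density=True)
x = 0.5 * (edges[:-1] + edges[1:])
mp_rescaled = np.sqrt((x - a) * (b - x)) / (2 * np.pi * x) / beta
for xi, e, m in zip(x, emp, mp_rescaled):
    print(f"x={xi:.2f}  empirical={e:.3f}  MP={m:.3f}")
```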
(Bai, 1999) The result also holds if only a unit second-moment condition is placed on the entries of H and

(1/K) Σ_{i,j} E[ |H_{i,j}|² 1{ |H_{i,j}| ≥ δ } ] → 0

for any δ > 0 (a Lindeberg-type condition on the whole matrix).
Nonzero-Mean Matrices

Lemma (Yin 1986, Bai 1999): For any N×K matrices A and B,

sup_{x≥0} | F^N_{AA†}(x) − F^N_{BB†}(x) | ≤ rank(A − B) / N.

Lemma (Yin 1986, Bai 1999): For any N×N Hermitian matrices A and B,

sup_{x} | F^N_A(x) − F^N_B(x) | ≤ rank(A − B) / N.
Generalizations needed!

> Correlated entries: H = R^{1/2} S T^{1/2}
> Variance profile: H_{i,j} = √(P_{i,j}/N) G_{i,j}, with the G_{i,j} i.i.d.
Transforms

1. Stieltjes transform
2. η-transform
3. Shannon transform
4. R-transform
5. S-transform
S_X(z) = E[ 1 / (X − z) ]
η_X(γ) = E[ 1 / (1 + γX) ]

Note: η_X(γ) = Σ_{k=0}^{∞} (−γ)^k E[X^k].
η^N_{H†H}(γ) = E[ (1/K) Σ_{i=1}^{K} 1/(1 + γ λᵢ(H†H)) ] = (1/K) E[ tr (I + γ H†H)^{-1} ],

and the η-transform of its asymptotic ESD is

η_{H†H}(γ) = ∫₀^∞ 1/(1 + γx) dF_{H†H}(x) = lim_{K→∞} (1/K) tr{ (I + γ H†H)^{-1} }.
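A numerical sketch of the finite-size η-transform, assuming the (1/K) tr(I + γ H†H)^{-1} form written above and the variance-1/N normalization used earlier.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 400, 200
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)

def eta_empirical(gamma):
    # (1/K) tr (I + gamma H^H H)^{-1}
    A = np.eye(K) + gamma * H.conj().T @ H
    return np.trace(np.linalg.inv(A)).real / K

for gamma in (0.1, 1.0, 10.0):
    print(gamma, round(eta_empirical(gamma), 4))
```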
Shannon transform:

V_A(γ) = lim_{N→∞} (1/N) E[ log det(I + γA) ] = ∫₀^∞ log(1 + γx) dF_A(x).
(γ / log e) (d/dγ) V_X(γ) = 1 − (1/γ) S_X(−1/γ) = 1 − η_X(γ)

(SNR / log e) (d/dSNR) I(SNR) = (K/N) (1 − MMSE)
S-transform

Σ_X(x) = − ((x + 1)/x) η_X^{-1}(1 + x)   (6)
η_{AB}(γ) = η_A( γ / Σ_B( η_{AB}(γ) − 1 ) )
S-transform: Example

Let H = CQ where:

> K ≤ N
> Q is an N×K matrix independent of C and uniformly distributed over the Stiefel manifold of complex N×K matrices such that Q†Q = I.
Since Q is bi-unitarily invariant,

η_{CQQ†C†}(SNR) = η_{CC†}( SNR ( β − 1 + η_{CQQ†C†}(SNR) ) / η_{CQQ†C†}(SNR) )

and

V_{CQQ†C†}(SNR) = ∫₀^{SNR} (1/x) ( 1 − η_{CQQ†C†}(x) ) dx.
R-transform

R_X(z) = S_X^{-1}(−z) − 1/z   (7)
For a vector x with i.i.d. entries of variance 1/N, independent of the N×N Hermitian random matrix B,

lim x† (I + γB)^{-1} x = η_B(γ)   a.s.   (8)

lim x† (B − zI)^{-1} x = S_B(z)   a.s.   (9)
Rationale
For finite N and K, with t = min(N, K) and r = max(N, K), the expected Shannon transform admits an exact closed form: a finite triple sum over k ≤ t − 1 and ℓ₁, ℓ₂ ≤ k of terms proportional to

(k + r − t)! (−1)^{ℓ₁+ℓ₂} I_{ℓ₁+ℓ₂+r−t}(SNR) / [ (k − ℓ₂)! (r − t + ℓ₁)! (r − t + ℓ₂)! ℓ₂! ],

where I₀(SNR) = e^{1/SNR} Ei(1/SNR) (exponential integral) and the higher-order I_n(SNR) follow from I₀ through a finite recursion.
Asymptotics

K → ∞, N → ∞, with K/N → β.
Shannon and η-Transform of the Marcenko-Pastur Law

Example: The Shannon transform of the Marcenko-Pastur law is

V(SNR) = β log( 1 + SNR − ¼ F(SNR, β) ) + log( 1 + SNR β − ¼ F(SNR, β) ) − (log e / (4 SNR)) F(SNR, β)

where

F(x, z) = ( √( x(1 + √z)² + 1 ) − √( x(1 − √z)² + 1 ) )²,

and the corresponding η-transform is

η(SNR) = 1 − F(SNR, β) / (4 SNR).
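A sketch comparing the closed-form Shannon transform above (as reconstructed) with a Monte Carlo average of (1/N) log₂ det(I + SNR HH†) for a finite system; the sizes and SNR are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

def F(x, z):
    return (np.sqrt(x * (1 + np.sqrt(z))**2 + 1) - np.sqrt(x * (1 - np.sqrt(z))**2 + 1))**2

def V_mp(snr, beta):
    # Closed-form Shannon transform of the Marcenko-Pastur law (bits), as stated above
    f = F(snr, beta)
    return (beta * np.log2(1 + snr - f / 4)
            + np.log2(1 + snr * beta - f / 4)
            - f / (4 * snr) * np.log2(np.e))

N, K = 200, 100
beta, snr, trials = K / N, 5.0, 50
vals = []
for _ in range(trials):
    H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
    _, logdet = np.linalg.slogdet(np.eye(N) + snr * H @ H.conj().T)
    vals.append(logdet / np.log(2) / N)

print("closed form :", round(V_mp(snr, beta), 4))
print("Monte Carlo :", round(float(np.mean(vals)), 4))
```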
Asymptotics

[Figure: (1/N) Σ_{i=1}^{N} log(1 + SNR λᵢ(HH†)) versus SNR for N = 3, 5, 15, and 50, illustrating the speed of convergence to the asymptotic limit.]
Marcenko-Pastur Law: Applications
Correlated Entries

Consider H with independent entries and a variance profile, H_{i,j} = √(P_{i,j}/N) G_{i,j}, where the G_{i,j} are i.i.d. with zero mean and unit variance.

P is asymptotically row-regular if

lim_{K→∞} (1/K) Σ_{j=1}^{K} P_{i,j}

is independent of i as K/N → β.

If the limits

lim_{K→∞} (1/K) Σ_{j=1}^{K} P_{i,j} = lim_{N→∞} (1/N) Σ_{i=1}^{N} P_{i,j} = 1,

then P is standard asymptotically mean doubly-regular.
Example: H = P^{∘1/2} ∘ S (entrywise square root and entrywise product), i.e. H_{i,j} = √(P_{i,j}) S_{i,j}, with S an N×K matrix with i.i.d. entries of variance 1/N and P an N×K deterministic matrix which is mean doubly regular.
Var[H_{i,j}] = P_{i,j}/N

with P an N×K deterministic standard asymptotically doubly-regular matrix whose entries are uniformly bounded for any N. Then the asymptotic ESD of HH† coincides with that of the i.i.d. case, and in particular

V_{HH†}(γ) = β log( 1/η_{H†H}(γ) ) + log( 1/η_{HH†}(γ) ) + ( η_{HH†}(γ) − 1 ) log e,

with

1 − η_{HH†}(γ) = β ( 1 − η_{H†H}(γ) ).
Correlated Entries

Let H = CSA, where S is an N×K complex random matrix whose entries are i.i.d. with variance 1/N.
where

γ_r γ_t / SNR = 1 − η_T(γ_t) = 1 − η_R(γ_r)

and

η_{HH†}(SNR) = E[ 1 / (1 + R γ_r(SNR)) ].

The Shannon transform is

V_{HH†}(SNR) = β E[ log₂(1 + T γ_t) ] + E[ log₂(1 + R γ_r) ] − β (γ_r γ_t / SNR) log₂ e   (10)

where

γ_r = SNR E[ T / (1 + T γ_t) ],   γ_t = SNR E[ R / (1 + R γ_r) ],

with expectations over nonnegative random variables T and R whose distributions equal the asymptotic ESDs of T and R.
For finite n_R and n_T,

η_{HH†}(SNR) = (1/n_R) Σ_{i=1}^{n_R} 1 / (1 + λᵢ(R) γ_r),

V_{HH†}(SNR) = (1/n_R) Σ_{i=1}^{n_R} log₂(1 + λᵢ(R) γ_r) + (1/n_R) Σ_{j=1}^{n_T} log₂(1 + λⱼ(T) γ_t) − (n_T/n_R) (γ_r γ_t / SNR) log₂ e,

where

γ_r = (SNR/n_T) Σ_{j=1}^{n_T} λⱼ(T) / (1 + λⱼ(T) γ_t),

γ_t = (SNR/n_R) Σ_{i=1}^{n_R} λᵢ(R) / (1 + λᵢ(R) γ_r).
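A Monte Carlo sketch of the quantity these expressions approximate, for an illustrative separable-correlation channel; the exponential correlation coefficients 0.7 and 0.5 are arbitrary choices, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(11)
nR, nT, snr, trials = 8, 8, 10.0, 500

# Illustrative exponential correlation matrices at the receive and transmit sides
R = np.array([[0.7 ** abs(i - j) for j in range(nR)] for i in range(nR)])
T = np.array([[0.5 ** abs(i - j) for j in range(nT)] for i in range(nT)])
Rh, Th = np.linalg.cholesky(R), np.linalg.cholesky(T)

vals = []
for _ in range(trials):
    S = (rng.standard_normal((nR, nT)) + 1j * rng.standard_normal((nR, nT))) / np.sqrt(2 * nR)
    H = Rh @ S @ Th.conj().T            # separable-correlation channel R^{1/2} S T^{1/2}
    _, logdet = np.linalg.slogdet(np.eye(nR) + snr * H @ H.conj().T)
    vals.append(logdet / np.log(2) / nR)

print("E[(1/nR) log2 det(I + SNR HH^H)] ~", round(float(np.mean(vals)), 3), "bits per receive antenna")
```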
[Figure: spectral efficiency (bits/s/Hz) versus E_b/N_0 (dB) for transmit-antenna separations d = 1 and d = 2, together with the i.i.d. case and the low-SNR expansion.]

Caption (partial): "…e^{−0.05 d²(i−j)²} …lated antennas. Power angular spectrum at the transmitter is Gaussian (broadside) with 2° spread. Solid lines indicate analytical solution, circles indicate simulation (Rayleigh fading), dashed lines indicate low-SNR expansion."
The distribution of the singular values of

H = [ (SA₁)ᵀ ⋯ (SA_L)ᵀ ]ᵀ   (the blocks SA₁, …, SA_L stacked)

is the same as the distribution of the singular values of the matrix S̃P.   (11)

Applications: DS-CDMA with flat fading and antenna diversity: {A_{k,ℓ}} are the i.i.d. fading coefficients of the kth user at the ℓth antenna and S is the signature matrix.

Engineering interpretation: the effective spreading gain = the CDMA spreading gain × the number of receive antennas.
Var[H_{i,j}] = P_{i,j}/N

where P is an N×K deterministic matrix whose entries are uniformly bounded.
The Shannon transform V_{HH†}(SNR) again decomposes into a sum of log₂ terms, one per receive antenna and one per transmit antenna, minus a log₂ e correction term; the arguments of the logarithms are determined by coupled fixed-point equations in which the variance profile enters through averages of the form (1/n_T) Σ_{j=1}^{n_T} (P)_{i,j} Γ_j(SNR) and their transmit-side counterparts.

The separable correlation model is recovered as the special case

H = diag(r)^{1/2} S diag(t)^{1/2},   for which (P)_{i,j} = r_i t_j.
> Polarization: H = P^{∘1/2} ∘ H_w, where H_w is zero-mean i.i.d. Gaussian and P is a deterministic matrix with nonnegative entries; (P)_{i,j} is the power gain between the jth transmit and the ith receive antenna, determined by their relative polarizations.

> Non-separable correlations: H = U_R H̃ U_T†, where U_R and U_T are unitary while the entries of H̃ are independent zero-mean Gaussian. A more restrictive case is when U_R and U_T are Fourier matrices.

This model is advocated and experimentally supported in W. Weichselberger et al., "A stochastic MIMO channel model with joint correlation of both link ends," IEEE Trans. on Wireless Com., vol. 5, no. 1, pp. 90–100, 2006.
[Figure: mutual information versus SNR (dB); analytical solution versus simulation.]
Ergodic Regime

The quantity of interest is then the mutual information averaged over the fading, E[ I(SNR, HH†) ].
Non-ergodic Conditions

Often, however, H is held approximately constant during the span of a codeword.

Outage capacity (cumulative distribution of the mutual information):

P_out(R) = P[ log det(I + SNR HH†) < R ]

The normalized mutual information converges a.s. to its expectation as K, N → ∞ (hardening / self-averaging):

(1/N) log det(I + SNR HH†) →(a.s.) V_{HH†}(SNR) = lim_{N→∞} (1/N) E[ log det(I + SNR HH†) ].

However, the non-normalized mutual information

I(SNR, HH†) = log det(I + SNR HH†)

still suffers random fluctuations that, while small relative to the mean, are vital to the outage capacity.
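Outage probability is straightforward to estimate by Monte Carlo; the sketch below does so for an illustrative rate (the 4×4 configuration, the SNR, and R are arbitrary choices, not from the slides).

```python
import numpy as np

rng = np.random.default_rng(7)
K = N = 4
snr, trials, R = 10.0, 10000, 8.0    # R: target rate in bits per channel use (illustrative)

I = np.empty(trials)
for t in range(trials):
    H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
    _, logdet = np.linalg.slogdet(np.eye(N) + snr * H @ H.conj().T)
    I[t] = logdet / np.log(2)        # unnormalized mutual information, bits

print("P_out(R) ~", np.mean(I < R))
print("mean / std of I:", round(I.mean(), 3), "/", round(I.std(), 3))
```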
IID Channel

As K, N → ∞ with K/N → β, the unnormalized mutual information I(SNR, HH†) is asymptotically Gaussian; its fluctuation Δ = I(SNR, HH†) − E[I(SNR, HH†)] satisfies

E[Δ²] = − log( 1 − (1 − η_{HH†}(SNR))² / β ).
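The Gaussian approximation can be probed by comparing empirical lower-tail probabilities of the unnormalized mutual information with those of a normal law fitted to the empirical mean and standard deviation (a sketch; K = N = 2 and the SNR are arbitrary choices).

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(8)
K = N = 2
snr, trials = 10.0, 20000

I = np.empty(trials)
for t in range(trials):
    H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
    _, logdet = np.linalg.slogdet(np.eye(N) + snr * H @ H.conj().T)
    I[t] = logdet / np.log(2)

mu, sigma = I.mean(), I.std()
for k in (1.0, 1.5, 2.0):
    emp = np.mean(I < mu - k * sigma)
    gauss = 0.5 * erfc(k / sqrt(2))
    print(f"P[I < mu - {k} sigma]  empirical={emp:.4f}  Gaussian={gauss:.4f}")
```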
One-Side Correlated Wireless Channel (H = S T^{1/2})
[Tulino-Verdu, 2004]

Theorem: As K, N → ∞ with K/N → β,

E[Δ²] = − log( 1 − β ( E[ T SNR η_{STS†}(SNR) / (1 + T SNR η_{STS†}(SNR)) ] )² )

with expectation over the nonnegative random variable T whose distribution equals the asymptotic ESD of T.
Examples

In the examples that follow, the transmit antennas are correlated with

(T)_{i,j} = e^{−0.2 (i−j)²},

and E[Δ²] is evaluated by replacing the expectation over T with the empirical average over the eigenvalues λⱼ(T), j = 1, …, K.
Example: Histogram

[Histogram figure omitted.]
[Figure: simulation versus Gaussian approximation as a function of SNR (dB), for a transmitter with K = 2 and a receiver with N = 2.]

SNR (dB) | Simul. | Asympt.
0        | 0.52   | 0.50
10       | 2.28   | 2.27
[Figure: the same comparison for a transmitter with K = 4 and a receiver with N = 2.]
Summary

Various wireless communication channels: analysis tackled with the aid of random matrix theory.
Shannon and η-transforms, motivated by the application of random matrices to the theory of noisy communication channels.
Shannon and η-transforms for the asymptotic ESD of several classes of random matrices.
Application of the various findings to the analysis of several wireless channels in both the ergodic and non-ergodic regimes.
Succinct expressions for the asymptotic performance measures.
Applicability of these asymptotic results to finite-size communication systems.
Reference
1. Introduction. Let M(ℝ) denote the collection of all subprobability distribution functions on ℝ. We say that F_N ∈ M(ℝ) converges vaguely to F ∈ M(ℝ), written F_N →v F. For F ∈ M(ℝ), the Stieltjes transform of F is

S_F(z) = ∫ 1/(x − z) dF(x),   z ∈ ℂ⁺ ≡ {z ∈ ℂ : ℑz > 0}.

Properties:
1. S_F is an analytic function on ℂ⁺.
2. ℑ S_F(z) > 0.
3. |S_F(z)| ≤ 1/ℑz.
4. For continuity points a < b of F,
F([a, b]) = (1/π) lim_{η→0⁺} ∫_a^b ℑ S_F(ξ + iη) dξ.
5. If ℑ S_F(x + iη) has a finite limit as η → 0⁺ at x ∈ ℝ, then F has a density at x given by f(x) = (1/π) lim_{η→0⁺} ℑ S_F(x + iη).

Moreover, F_N →v F if and only if S_{F_N}(z) → S_F(z) for all z ∈ S, a set in ℂ⁺ with a limit point.
F^A(x) = (1/N) (number of eigenvalues of A ≤ x).

Then

S_{F^A}(z) = (1/N) tr (A − zI)^{-1}.   (1.1)
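A numerical sketch of (1.1) and of density recovery via the inversion/density properties above, using the Marcenko-Pastur case as a test (the matrix sizes and the smoothing parameter are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(9)
N, K = 800, 400
beta = K / N
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
eig = np.linalg.eigvalsh(H @ H.conj().T)

def stieltjes(z):
    # (1/N) tr (A - zI)^{-1}, computed from the eigenvalues of A = HH^H
    return np.mean(1.0 / (eig - z))

a, b = (1 - np.sqrt(beta))**2, (1 + np.sqrt(beta))**2
eta = 0.02
for x in np.linspace(a + 0.1, b - 0.1, 5):
    approx = stieltjes(x + 1j * eta).imag / np.pi        # (1/pi) Im S(x + i eta)
    exact = np.sqrt((x - a) * (b - x)) / (2 * np.pi * x)  # Marcenko-Pastur density of HH^H
    print(f"x={x:.2f}  (1/pi) Im S={approx:.3f}  MP density={exact:.3f}")
```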
The Stieltjes transform S(z) of the limiting ESD satisfies

S(z) = S₀( z − β E[ T / (1 + T S(z)) ] ),
and a companion theorem characterizes S(z) through the coupled fixed-point equations

S(z) = E[ 1 / ( D E[ T / (1 + ζ(z) T) ] − z ) ],

where ζ(z) satisfies

ζ(z) = E[ D / ( D E[ T / (1 + ζ(z) T) ] − z ) ].   (1.2)
Theorem 1.3 (Dozier and Silverstein). Let H₀ be N×K, random, independent of S, such that the ESD of H₀H₀† converges almost surely in distribution to a nonrandom limit, and let M denote a random variable with this limiting distribution. Let K > 0 be nonrandom. Define

H = S + √K H₀.   (1.3)

Then, almost surely, the ESD of HH† converges in distribution to a nonrandom limit whose Stieltjes transform satisfies

S(z) = E[ 1 / ( K M / (1 + β S(z)) − z (1 + β S(z)) + 1 − β ) ],   (1.4)

and S(z) is the only solution to (1.4) with both S(z) and z S(z) in ℂ⁺.
Inverting,

z = −1/S + β E[ T / (1 + T S) ],

which for T ≡ 1 becomes

z = −1/S + β / (1 + S),
with solution

S = [ −(z + 1 − β) + √( (z + 1 − β)² − 4z ) ] / (2z)
  = [ −(z + 1 − β) + √( z² − 2z(1 + β) + (1 − β)² ) ] / (2z)
  = [ −(z + 1 − β) + √( (z − (1 − √β)²)(z − (1 + √β)²) ) ] / (2z).

We see the imaginary part of S goes to zero when z approaches the real line and lies outside the interval [(1 − √β)², (1 + √β)²], so we conclude from property 5. that for all x ≠ 0 the limiting distribution has a density f given by

f(x) = √( (x − (1 − √β)²)((1 + √β)² − x) ) / (2πx),   x ∈ ((1 − √β)², (1 + √β)²),

and f(x) = 0 otherwise.
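The density just derived can be cross-checked numerically against the explicit root of the quadratic (a sketch; β = 0.5 is an arbitrary choice).

```python
import numpy as np

beta = 0.5
a, b = (1 - np.sqrt(beta))**2, (1 + np.sqrt(beta))**2

def S(z):
    # Root of z S^2 + (z + 1 - beta) S + 1 = 0 with positive imaginary part (z in C+)
    disc = np.sqrt((z + 1 - beta)**2 - 4 * z + 0j)
    r1 = (-(z + 1 - beta) + disc) / (2 * z)
    r2 = (-(z + 1 - beta) - disc) / (2 * z)
    return r1 if r1.imag > 0 else r2

eta = 1e-5
for x in np.linspace(a + 0.1, b - 0.1, 5):
    dens_from_S = S(x + 1j * eta).imag / np.pi
    dens_closed = np.sqrt((x - a) * (b - x)) / (2 * np.pi * x)
    print(f"x={x:.2f}  from S(z): {dens_from_S:.4f}   closed form: {dens_closed:.4f}")
```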
Lemma 2.1. For an N×N matrix A, a vector q ∈ ℂ^N, and t ∈ ℂ such that A and A + t qq† are invertible,

(A + t qq†)^{-1} q = A^{-1} q / (1 + t q† A^{-1} q).
Lemma 2.2. For an N×N Hermitian matrix B, an N×N matrix A, q ∈ ℂ^N, t ∈ ℝ, and z ∈ ℂ⁺,

| t q†(B − zI)^{-1} A (B − zI)^{-1} q / (1 + t q†(B − zI)^{-1} q) | ≤ ||(B − zI)^{-1} q||² ||A|| |t| / |1 + t q†(B − zI)^{-1} q|.

Write B = Σᵢ λᵢ eᵢ eᵢ†. Then

|1 + t q†(B − zI)^{-1} q| ≥ |t| ℑ( q†(B − zI)^{-1} q ) = |t| ℑz Σᵢ |eᵢ†q|² / |λᵢ − z|²,

while ||(B − zI)^{-1} q||² = Σᵢ |eᵢ†q|² / |λᵢ − z|², so the left-hand side is bounded by ||A|| / ℑz.
Lemma 2.3. For X with i.i.d. standardized entries and a deterministic matrix C,

E| X†CX − tr C |^p ≤ K_p [ ( E|X₁|⁴ tr CC† )^{p/2} + E|X₁|^{2p} tr (CC†)^{p/2} ],

where the constant K_p does not depend on N, C, nor on the distribution of X₁. (Proof given in Bai and Silverstein (1998).)

Thus we have

E| (X†CX − tr C)/N |^p ≤ K₀ / N^{p/2},

the constant K₀ depending on a bound on the 2p-th moment of X₁ and on the norm of C. Roughly speaking, for large N, a scaled quadratic form involving a vector consisting of i.i.d. standardized random variables is close to the scaled trace of the matrix. As will be seen below, this is the only place where randomness comes in.
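A small demonstration of the message of Lemma 2.3: the scaled quadratic form concentrates around the scaled trace, with deviations shrinking roughly like 1/√N (the diagonal C below is an arbitrary bounded-norm example).

```python
import numpy as np

rng = np.random.default_rng(10)

def mean_deviation(N, trials=200):
    # Average of |x^H C x - tr C| / N for x with i.i.d. standardized complex entries
    C = np.diag(np.linspace(0.5, 1.5, N))          # fixed Hermitian C with bounded norm
    devs = []
    for _ in range(trials):
        x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        devs.append(abs(x.conj() @ C @ x - np.trace(C)) / N)
    return float(np.mean(devs))

for N in (50, 200, 800):
    print(N, round(mean_deviation(N), 4))          # decreases roughly like 1/sqrt(N)
```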
Before continuing, two more basic properties of matrices are included here.

Lemma 2.4. Let z₁, z₂ ∈ ℂ⁺ with min(ℑz₁, ℑz₂) ≥ v > 0, A and B N×N with A Hermitian, and q ∈ ℂ^N. Then

| tr B( (A − z₁I)^{-1} − (A − z₂I)^{-1} ) | ≤ |z₂ − z₁| N ||B|| / v²,  and

| q† B (A − z₁I)^{-1} q − q† B (A − z₂I)^{-1} q | ≤ |z₂ − z₁| ||q||² ||B|| / v².
Write

W = W₀ + Σ_{i=1}^{K} tᵢ qᵢ qᵢ†.
Then

S_{W₀}(z − x) − S_W(z) = (1/N) tr[ ( Σ_{i=1}^{K} tᵢ qᵢ qᵢ† − xI ) (W − zI)^{-1} (W₀ − (z − x)I)^{-1} ]
= (1/N) Σ_{i=1}^{K} tᵢ qᵢ†(W₍ᵢ₎ − zI)^{-1}(W₀ − (z − x)I)^{-1} qᵢ / ( 1 + tᵢ qᵢ†(W₍ᵢ₎ − zI)^{-1} qᵢ ) − (x/N) tr[ (W − zI)^{-1}(W₀ − (z − x)I)^{-1} ],

where W₍ᵢ₎ = W − tᵢ qᵢ qᵢ†.
Letting

x = x_N = (1/N) Σ_{i=1}^{K} tᵢ / (1 + tᵢ S_W(z)),

we have

S_{W₀}(z − x_N) − S_W(z) = (1/N) Σ_{i=1}^{K} ( tᵢ / (1 + tᵢ S_W(z)) ) dᵢ,

where each dᵢ collects the error incurred when the quadratic form qᵢ†(W₍ᵢ₎ − zI)^{-1} qᵢ is replaced by S_W(z) (and S_W by S_{W₍ᵢ₎}), through factors of the form

(1 + tᵢ S_W(z)) / (1 + tᵢ qᵢ†(W₍ᵢ₎ − zI)^{-1} qᵢ)

and sums Σ_{j=1}^{K} tⱼ / (1 + tⱼ S_{W₍ᵢ₎}(z)).
Using Lemma 2.3 (p = 6 is sufficient) and the fact that all matrix inverses encountered are bounded in spectral norm by 1/ℑz, we have from standard arguments using Boole's and Markov's inequalities and the Borel–Cantelli lemma, almost surely,

| qᵢ†(W₍ᵢ₎ − zI)^{-1}(W₀ − (z − x₍ᵢ₎)I)^{-1} qᵢ − (1/N) tr (W₍ᵢ₎ − zI)^{-1}(W₀ − (z − x₍ᵢ₎)I)^{-1} | → 0   (2.1)

as N → ∞.
max_{i≤K} max{ | (1 + tᵢ S_W(z)) / (1 + tᵢ qᵢ†(W₍ᵢ₎ − zI)^{-1} qᵢ) − 1 |, |x − x₍ᵢ₎| } → 0.   (2.3)

Therefore, from Lemmas 2.2, 2.4, and (2.1)–(2.3), we get max_{i≤K} dᵢ → 0 almost surely, giving us

S_{W₀}(z − x_N) − S_W(z) → 0

almost surely.
Moreover,

x_N = (K/N) (1/K) Σ_{i=1}^{K} tᵢ / (1 + tᵢ S_W(z)) → β E[ T / (1 + T S) ],

so that in the limit

S = S₀( z − β E[ T / (1 + T S) ] ),

i.e.

S = ∫ dW₀(λ) / ( λ − z + β E[ T / (1 + T S) ] ).

Taking imaginary parts,

ℑS = ( ℑz + β E[ T² ℑS / |1 + T S|² ] ) ∫ dW₀(λ) / | λ − z + β E[ T / (1 + T S) ] |².   (3.1)
Uniqueness: suppose S and S̄ both solve

S = ∫ dW₀(λ) / ( λ − z + β E[ T / (1 + T S) ] )

for the same z ∈ ℂ⁺. Subtracting the two equations,

S − S̄ = (S − S̄) β E[ T² / ( (1 + T S)(1 + T S̄) ) ] ∫ dW₀(λ) / ( [ λ − z + β E[ T/(1 + T S) ] ] [ λ − z + β E[ T/(1 + T S̄) ] ] ).   (3.2)

Using Cauchy–Schwarz and (3.1), we have

| β E[ T² / ((1 + T S)(1 + T S̄)) ] ∫ dW₀(λ) / ( [ λ − z + β E[ T/(1 + T S) ] ] [ λ − z + β E[ T/(1 + T S̄) ] ] ) |

≤ ( β E[ T² ℑS / |1 + T S|² ] / ( ℑz + β E[ T² ℑS / |1 + T S|² ] ) )^{1/2} ( β E[ T² ℑS̄ / |1 + T S̄|² ] / ( ℑz + β E[ T² ℑS̄ / |1 + T S̄|² ] ) )^{1/2}

< 1.

Therefore, from (3.2) we must have S = S̄.