
EE603 Class Notes Version 1 John Stensby

Chapter 13 Series Representation of Random Processes

Let X(t) be a deterministic, generally complex-valued, signal defined on [0, T] with

∫₀ᵀ |X(t)|² dt < ∞.    (13-1)

Let φ_k(t), k ≥ 0, be a complete orthonormal basis for the vector space of complex-valued, square-integrable functions on [0, T]. The functions φ_k satisfy

∫₀ᵀ φ_k(t) φ_j*(t) dt = 1, k = j
                      = 0, k ≠ j.    (13-2)

Then, we can expand X(t) in the generalized Fourier series


X(t) = Σ_{m=1}^∞ x_m φ_m(t)
                                           (13-3)
x_m = ∫₀ᵀ X(t) φ_m*(t) dt

for t in the interval [0,T]. In (13-3), convergence is not pointwise. Instead, Equation (13-3)

converges in the mean square sense. That is, we have

limit_{N→∞} ∫₀ᵀ | X(t) − Σ_{k=1}^N x_k φ_k(t) |² dt = 0.    (13-4)
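As a quick numerical illustration (added here; Python, the sine basis, and the test signal are choices of this sketch, not part of the notes), the mean-square convergence in (13-4) can be checked directly. The sketch uses the orthonormal Fourier sine basis φ_k(t) = √(2/T) sin(kπt/T) on [0, T] and the signal X(t) = t(T − t):

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]
w = np.full(t.size, dt)
w[0] = w[-1] = dt / 2                      # trapezoid quadrature weights

def phi(k):
    # orthonormal Fourier sine basis on [0, T] (an assumed choice of basis)
    return np.sqrt(2.0 / T) * np.sin(k * np.pi * t / T)

X = t * (T - t)                            # an assumed square-integrable test signal

def ms_error(N):
    # integrated squared error of the N-term partial sum, as in (13-4)
    S = np.zeros_like(t)
    for k in range(1, N + 1):
        xk = np.dot(w, X * phi(k))         # coefficient x_k, as in (13-3)
        S += xk * phi(k)
    return np.dot(w, (X - S) ** 2)

errors = [ms_error(N) for N in (1, 3, 5, 9)]
```

The integrated squared error shrinks monotonically as terms are added, which is exactly the mean-square (not pointwise) convergence claimed above.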

It is natural to ask if similar results can be obtained for finite power, m.s. Riemann

integrable random processes. The answer is yes. Obviously, for random process X(t), the

expansion coefficients xk will be random variables. In general, the coefficients xk will be pair-wise


correlated. However, by selecting the basis functions as the eigenfunctions of a certain integral

operator, it is possible to ensure that the coefficients are pair-wise uncorrelated, a highly desirable

condition that simplifies many applications. When the basis functions are chosen to make the

coefficients uncorrelated, the series representation of X(t) is known as a Karhunen-Loève

expansion. These types of expansions have many applications in the areas of communication and

control.

Some Important Properties of the Autocorrelation Function

Random process X(t) has an autocorrelation function R(t₁,t₂) which we assume is continuous on [0, T]×[0, T]. Note that R is Hermitian; that is, the function satisfies R(t₁,t₂) = R*(t₂,t₁). Also, it is nonnegative definite, a result that is shown easily. Let f(t) be any function defined on the interval [0, T]. Then, we can define the random variable

x_f = ∫₀ᵀ X(t) f*(t) dt.    (13-5)

The mean of xf is

E[x_f] = ∫₀ᵀ m(t) f*(t) dt,

a result that is zero under the working assumption that m(t) = E[X(t)] = 0. The variance of xf is

Var[x_f] = E[ ∫₀ᵀ f*(t)X(t)dt ∫₀ᵀ X*(τ)f(τ)dτ ] = ∫₀ᵀ∫₀ᵀ f*(t) E[X(t)X*(τ)] f(τ) dt dτ
                                                                            (13-6)
         = ∫₀ᵀ∫₀ᵀ f*(t) R(t,τ) f(τ) dt dτ.

Now, the variance of a random variable cannot be negative, so we conclude


∫₀ᵀ∫₀ᵀ f*(t) R(t,τ) f(τ) dt dτ ≥ 0    (13-7)

for arbitrary function f(t), 0 ≤ t ≤ T. Condition (13-7) implies that autocorrelation function R(t₁,t₂) is nonnegative definite. In most applications, the autocorrelation function is positive definite in that

∫₀ᵀ∫₀ᵀ f*(t) R(t,τ) f(τ) dt dτ > 0    (13-8)

for arbitrary functions f(t) that are not identically zero.

We can define the linear operator A : L²[0,T] → L²[0,T] by the formula

A[x(t)] ≡ ∫₀ᵀ R(t,τ) x(τ) dτ    (13-9)

(recall that L²[0,T] is the vector space of square-integrable functions on [0,T]). Continuous, Hermitian, nonnegative definite autocorrelation function R forms the kernel of linear operator A.

In the world of mathematics, A[·] is a commonly-used Hilbert-Schmidt operator, and it is an

example of a compact, self-adjoint linear operator (for definitions of these terms see the appendix

of R. Ash, Information Theory, the book An Introduction to Hilbert Space, by N. Young, or

almost any book on Hilbert spaces and/or functional analysis).

Eigenfunctions and Eigenvalues of Linear Operator A

The eigenfunctions φ_k and eigenvalues λ_k of linear operator A satisfy A[φ_k(t)] = λ_k φ_k(t), which is the same as

λ_k φ_k(t) = ∫₀ᵀ R(t,τ) φ_k(τ) dτ.    (13-10)

In what follows, we assume that kernel R(t,τ) is


a) Hermitian (i.e., R(t,τ) = R*(τ,t)),

b) at least nonnegative definite (i.e., (13-7) holds),

c) continuous on [0, T]×[0, T],

d) square integrable, ∫₀ᵀ∫₀ᵀ |R(t,τ)|² dt dτ < ∞ (this is a consequence of the continuity condition c).
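For kernels of this type, the integral equation (13-10) is usually solved numerically by discretizing the integral with a quadrature rule (the Nyström method), which turns it into a symmetric matrix eigenproblem. A minimal sketch (added here, not from the notes), assuming the Wiener-process kernel R(t,τ) = min(t,τ) on [0,1], whose exact eigenvalues λ_n = 1/((n − ½)π)² follow from (13-56) with 2D = 1, T = 1:

```python
import numpy as np

# Nystrom discretization of (13-10): the integral over tau becomes a weighted
# sum, and eigenpairs of the symmetrized matrix approximate the operator's.
T, N = 1.0, 800
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]
w = np.full(N, dt)
w[0] = w[-1] = dt / 2                      # trapezoid weights

K = np.minimum.outer(t, t)                 # assumed kernel R(t,tau) = min(t,tau)
sw = np.sqrt(w)
A = sw[:, None] * K * sw[None, :]          # symmetric form preserves eigenvalues
lam = np.sort(np.linalg.eigvalsh(A))[::-1] # numerical eigenvalues, descending
```

The leading numerical eigenvalues should agree closely with the closed-form values 4/π², 4/(9π²), … for this kernel.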

Much is known about the eigenfunctions and eigenvalues of linear operator A. We state a number of properties of the eigenfunctions/eigenvalues. Proofs that are not given here can be found in the

references cited above.

1. For a Hermitian, nonnegative definite, continuous kernel R(t,τ), there exists at least one square-integrable eigenfunction and one nonzero eigenvalue.

2. It is obvious that eigenfunctions are defined up to a multiplicative constant. So, we normalize

them according to (13-2).

3. If φ₁(t) and φ₂(t) are eigenfunctions corresponding to the same eigenvalue λ, then αφ₁(t) + βφ₂(t) is an eigenfunction corresponding to λ.

4. Distinct eigenvalues correspond to eigenfunctions that are orthogonal.

5. The eigenvalues are countable (i.e., a 1-1 correspondence can be established between the eigenvalues and the integers). Furthermore, the eigenvalues are bounded. In fact, each eigenvalue λ_k must satisfy the inequality

inf_{‖f‖=1} ∫₀ᵀ∫₀ᵀ f*(t) R(t,τ) f(τ) dt dτ ≤ λ_k ≤ sup_{‖f‖=1} ∫₀ᵀ∫₀ᵀ f*(t) R(t,τ) f(τ) dt dτ < ∞.    (13-11)

6. Every nonzero eigenvalue has a finite-dimensional eigenspace. That is, there are a finite number of linearly independent eigenfunctions that correspond to a given eigenvalue (φ_k, 1 ≤ k ≤ n, are linearly independent if α₁φ₁ + α₂φ₂ + … + α_nφ_n = 0 implies that α₁ = α₂ = … = α_n = 0).


7. The eigenfunctions form a complete orthonormal basis of the vector space L²[0,T], the set of all square integrable functions on [0, T]. If R is not positive definite, there is a zero eigenvalue, and you must include its orthonormalized eigenfunction(s) to get a complete orthonormal basis of L²[0,T] (use the Gram-Schmidt procedure here).

8. The eigenvalues are nonnegative. For a positive definite kernel R(t,τ), the eigenvalues are positive. To establish this claim, use (13-8) and (13-2) and write

λ_i = ∫₀ᵀ φ_i*(t)[λ_i φ_i(t)] dt = ∫₀ᵀ φ_i*(t) [ ∫₀ᵀ R(t,τ) φ_i(τ) dτ ] dt
                                                                            (13-12)
    = ∫₀ᵀ∫₀ᵀ φ_i*(t) R(t,τ) φ_i(τ) dτ dt.

This result is strictly positive if kernel R(t,τ) is positive definite.

9. The sum of the eigenvalues is the expected value of the process energy in the interval [0, T].

That is

E[ ∫₀ᵀ |X(t)|² dt ] = ∫₀ᵀ R(t,t) dt = Σ_{k=1}^∞ λ_k.    (13-13)
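Property 9 can be sanity-checked in closed form for the kernel R(t,τ) = min(t,τ) on [0,1] (the Wiener-process kernel of Example 13-1 with 2D = 1), where the eigenvalues are known exactly. This check is an added illustration, not part of the notes:

```python
import numpy as np

# For R(t,tau) = min(t,tau) on [0,1], the expected energy is
# int_0^1 R(t,t) dt = int_0^1 t dt = 1/2, and the eigenvalues (from (13-56)
# with 2D = 1, T = 1) are lam_n = 1/((n - 1/2) pi)^2.  Per (13-13), the
# eigenvalue sum should also equal 1/2.
energy = 0.5
lam = [1.0 / (((n - 0.5) * np.pi) ** 2) for n in range(1, 200001)]
total = sum(lam)
```

The partial sum approaches 1/2 slowly (the tail of the series behaves like 1/(π²N)), which is why a large number of terms is used here.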

With items 10 through 15, we want to establish Mercer's theorem. This theorem states that you can represent the autocorrelation function R(t,τ) by the expansion

R(t,τ) = Σ_{k=1}^∞ λ_k φ_k(t) φ_k*(τ).    (13-14)

We will not give a rigorous proof of this result, but we will come close.

10. Let φ₁(t) and λ₁ be an eigenfunction and eigenvalue pair for kernel R(t,τ), the nonnegative definite autocorrelation of random process X. Then

R₁(t,τ) ≡ R(t,τ) − λ₁ φ₁(t) φ₁*(τ)    (13-15)

is the nonnegative-definite autocorrelation of the random process

X₁(t) ≡ X(t) − φ₁(t) ∫₀ᵀ X(τ) φ₁*(τ) dτ.    (13-16)

To show this, first compute the intermediate result

E[X₁(t)X₁*(τ)] = E[ ( X(t) − φ₁(t) ∫₀ᵀ X(s₁)φ₁*(s₁)ds₁ ) ( X*(τ) − φ₁*(τ) ∫₀ᵀ X*(s₂)φ₁(s₂)ds₂ ) ]

= E[X(t)X*(τ)] − φ₁*(τ) ∫₀ᵀ E[X(t)X*(s₂)] φ₁(s₂) ds₂ − φ₁(t) ∫₀ᵀ E[X(s₁)X*(τ)] φ₁*(s₁) ds₁

  + φ₁(t)φ₁*(τ) ∫₀ᵀ∫₀ᵀ E[X(s₁)X*(s₂)] φ₁*(s₁)φ₁(s₂) ds₁ds₂    (13-17)

= R(t,τ) − φ₁*(τ) ∫₀ᵀ R(t,s₂) φ₁(s₂) ds₂ − φ₁(t) ∫₀ᵀ R(s₁,τ) φ₁*(s₁) ds₁

  + φ₁(t)φ₁*(τ) ∫₀ᵀ∫₀ᵀ R(s₁,s₂) φ₁*(s₁)φ₁(s₂) ds₁ds₂.

Use R(t,τ) = R*(τ,t), and take the complex conjugate of the eigenfunction relationship to obtain

λ₁ φ₁*(τ) = ∫₀ᵀ R(s,τ) φ₁*(s) ds.    (13-18)

With (13-18), the two cross terms on the right-hand-side of (13-17) become

φ₁*(τ) ∫₀ᵀ R(t,s) φ₁(s) ds = φ₁(t) ∫₀ᵀ R(s,τ) φ₁*(s) ds = λ₁ φ₁(t) φ₁*(τ).    (13-19)


On the right-hand-side of (13-17), the double integral can be evaluated as

∫₀ᵀ∫₀ᵀ R(s₁,s₂) φ₁*(s₁) φ₁(s₂) ds₁ds₂ = ∫₀ᵀ φ₁*(s₁) [ ∫₀ᵀ R(s₁,s₂) φ₁(s₂) ds₂ ] ds₁

= ∫₀ᵀ φ₁*(s₁) λ₁ φ₁(s₁) ds₁    (13-20)

= λ₁.

Finally, use (13-19) and (13-20) in (13-17) to obtain

E[X₁(t)X₁*(τ)] = R(t,τ) − λ₁ φ₁(t) φ₁*(τ),    (13-21)

and this establishes the validity of (13-15).
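The deflation step (13-15) can also be checked numerically. In the discretized (Nyström) setting, subtracting λ₁φ₁φ₁* corresponds to a rank-one update of the symmetrized kernel matrix; the result should remain nonnegative definite, with the second eigenvalue of the original kernel promoted to the top. A sketch (added illustration; the min kernel is an assumed example):

```python
import numpy as np

T, N = 1.0, 400
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]
w = np.full(N, dt)
w[0] = w[-1] = dt / 2
sw = np.sqrt(w)

# symmetrized, discretized kernel R(t,tau) = min(t,tau) (assumed example)
A = sw[:, None] * np.minimum.outer(t, t) * sw[None, :]
vals, vecs = np.linalg.eigh(A)            # eigenvalues in ascending order
lam1, u1 = vals[-1], vecs[:, -1]          # top eigenpair

A1 = A - lam1 * np.outer(u1, u1)          # discrete analogue of (13-15)
vals1 = np.linalg.eigvalsh(A1)
```

After deflation, the smallest eigenvalue of the new matrix is (numerically) nonnegative and its largest eigenvalue equals the second eigenvalue of the original, mirroring Properties 10 and 11.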

11. As defined by (13-15), R₁(t,τ) may be zero for all t, τ. If not, R₁(t,τ) can be used as the kernel of integral equation (13-10). This reformulated operator equation has a new eigenfunction φ₂(t) and new nonzero eigenvalue λ₂ (this follows from Property #1 above). They can be used to define the new nonnegative definite autocorrelation function

R₂(t,τ) ≡ R₁(t,τ) − λ₂ φ₂(t) φ₂*(τ).    (13-22)

Furthermore, the new eigenfunction φ₂(t) is orthogonal to the old eigenfunction φ₁(t). That R₂ is nonnegative definite follows immediately from application of Property 10 with R replaced by R₁. That φ₁(t) ⊥ φ₂(t) follows from an argument that starts by noting

λ₂ φ₂(t) = ∫₀ᵀ R₁(t,τ) φ₂(τ) dτ.    (13-23)

Plug (13-15) into (13-23) and obtain


λ₂ φ₂(t) = ∫₀ᵀ R(t,τ) φ₂(τ) dτ − λ₁ φ₁(t) ∫₀ᵀ φ₁*(τ) φ₂(τ) dτ.    (13-24)

Multiply both sides of this equation by φ₁*(t) and integrate to obtain

λ₂ ∫₀ᵀ φ₁*(t) φ₂(t) dt = ∫₀ᵀ∫₀ᵀ R(t,τ) φ₁*(t) φ₂(τ) dτ dt − λ₁ ∫₀ᵀ |φ₁(t)|² dt ∫₀ᵀ φ₁*(τ) φ₂(τ) dτ
                                                                            (13-25)
= ∫₀ᵀ φ₂(τ) [ ∫₀ᵀ R(t,τ) φ₁*(t) dt ] dτ − λ₁ ∫₀ᵀ φ₁*(τ) φ₂(τ) dτ.

Use (13-18) (which results from the Hermitian symmetry of R) to evaluate the term in the bracket

on the right-hand-side of Equation (13-25). This evaluation results in

λ₂ ∫₀ᵀ φ₁*(t) φ₂(t) dt = ∫₀ᵀ φ₂(τ) [ ∫₀ᵀ R(t,τ) φ₁*(t) dt ] dτ − λ₁ ∫₀ᵀ φ₁*(τ) φ₂(τ) dτ

= ∫₀ᵀ φ₂(τ) λ₁* φ₁*(τ) dτ − λ₁ ∫₀ᵀ φ₁*(τ) φ₂(τ) dτ    (13-26)

= ( λ₁* − λ₁ ) ∫₀ᵀ φ₁*(τ) φ₂(τ) dτ.

Since λ₁* − λ₁ = 0 (by Property 8, the eigenvalues are real valued), we have

λ₂ ∫₀ᵀ φ₁*(t) φ₂(t) dt = 0.    (13-27)

Since λ₂ ≠ 0, we conclude that φ₁(t) ⊥ φ₂(t), as claimed. In addition to being an eigenfunction-eigenvalue pair for kernel R₁, φ₂(t) and λ₂ are an eigenfunction and eigenvalue, respectively, for kernel R (as can be seen from (13-24) and the fact that φ₁ ⊥ φ₂).

12. Clearly, as long as the resulting autocorrelation function is nonzero, the process outlined in

Property 11 can be repeated. After N such repetitions, we have orthonormal eigenfunctions φ₁, …, φ_N and nonzero eigenvalues λ₁, …, λ_N. Furthermore, the Nth-stage autocorrelation function is

R_N(t,τ) ≡ R(t,τ) − Σ_{k=1}^N λ_k φ_k(t) φ_k*(τ).    (13-28)

13. R_N(t,τ) may vanish, and the algorithm for computing eigenvalues may terminate, for some finite N. In this case, there exists a finite number of nonzero eigenvalues, and autocorrelation R(t,τ) has a finite dimensional expansion of the form

R(t,τ) = Σ_{k=1}^N λ_k φ_k(t) φ_k*(τ)    (13-29)

for some N. In this case, the kernel R(t,τ) is said to be degenerate; also, it is easy to show that R(t,τ) is not positive definite.

14. If the case outlined by 13) does not hold, there exists a countably infinite number of nonzero eigenvalues. However, R_N(t,τ) converges as N → ∞. First, we show convergence for the special case t = τ; next, we use this special case to establish convergence for the general case, t ≠ τ. To reduce notational complexity in what follows, define the partial sum

S_{n,m}(t,τ) ≡ Σ_{k=n}^m λ_k φ_k(t) φ_k*(τ),    (13-30)

which can be viewed as the inner product of the vector with elements √λ_k φ_k(t), n ≤ k ≤ m, and the vector with elements √λ_k φ_k(τ), n ≤ k ≤ m.

Consider the special case t = τ. Since R_N(t,t) ≥ 0, Equation (13-28) implies

0 ≤ S_{1,N}(t,t) ≤ R(t,t) < ∞.    (13-31)


As a function of index N, the sequence S_{1,N}(t,t) is increasing but always bounded above by R(t,t), as shown by (13-31). Hence, as N → ∞, both S_{1,N}(t,t) and R_N(t,t) must converge to some limit.

For the general case t ≠ τ, convergence of S_{1,N}(t,τ) can be shown by establishing the fact that partial sum S_{n,m}(t,τ) → 0 as n, m → ∞ (in any order). To establish this fact, consider partial sum S_{n,m}(t,τ) to be the inner product of two vectors as shown by (13-30); one vector contains the elements √λ_k φ_k(t), n ≤ k ≤ m, and the second vector contains the elements √λ_k φ_k(τ), n ≤ k ≤ m. Now, apply the Cauchy-Schwarz inequality (see Theorem 11-4) to inner product S_{n,m}(t,τ) and obtain

| S_{n,m}(t,τ) | = | Σ_{k=n}^m λ_k φ_k(t) φ_k*(τ) | ≤ √( S_{n,m}(t,t) S_{n,m}(τ,τ) ).    (13-32)

As N → ∞, the convergence of S_{1,N}(t,t) implies that partial sum S_{n,m}(t,t) → 0 as n, m → ∞ (in any order). Hence, the right-hand-side of (13-32) approaches zero as n, m → ∞ (in any order), and this establishes the convergence of S_{1,N}(t,τ) and (13-28) for the general case t ≠ τ.

15. As it turns out, R_N(t,τ) converges to zero as N → ∞, a claim that is supported by the following argument. For each m ≤ N and fixed t, multiply R_N(t,τ) by φ_m(τ) and integrate to obtain

∫₀ᵀ R_N(t,τ) φ_m(τ) dτ = ∫₀ᵀ R(t,τ) φ_m(τ) dτ − ∫₀ᵀ [ Σ_{k=1}^N λ_k φ_k(t) φ_k*(τ) ] φ_m(τ) dτ

= λ_m φ_m(t) − λ_m φ_m(t)    (13-33)

= 0.

For each fixed t and all m ≤ N, R_N(t,τ) has zero component in the φ_m(τ) direction. Equation (13-33) leads to the conclusion


limit_{N→∞} ∫₀ᵀ R_N(t,τ) φ_m(τ) dτ = 0    (13-34)

for each m ≥ 1. By the continuity of the inner product, we can interchange limit and integration in (13-34) to see that the limit function R_∞(t,τ) ≡ lim_{N→∞} R_N(t,τ) has no component in the φ_m(τ) direction, m ≥ 1. Since the eigenfunctions φ_m(τ) span the vector space L²[0,T] of square-integrable functions, we see that R_∞(t,τ) = 0. The argument we have presented supports the claim

R(t,τ) = Σ_{k=1}^∞ λ_k φ_k(t) φ_k*(τ),    (13-35)

a result known as Mercer's theorem. In fact, the sum in (13-35) can be shown to converge uniformly on the rectangle 0 ≤ t, τ ≤ T (see R. Ash, Information Theory, Interscience Publishers, 1965).
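Mercer's theorem can be visualized numerically. For the kernel R(t,τ) = min(t,τ) on [0,1], the eigenpairs are known in closed form (they follow from Example 13-1 with 2D = 1, T = 1), so the truncated sum in (13-35) can be compared directly against the kernel. This sketch is an added illustration:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)
R = np.minimum.outer(t, t)                 # assumed kernel min(t,tau)

def mercer_error(K):
    # sup-norm distance between R and the K-term Mercer sum (13-35)
    S = np.zeros_like(R)
    for n in range(1, K + 1):
        lam = 1.0 / (((n - 0.5) * np.pi) ** 2)
        phi = np.sqrt(2.0) * np.sin((n - 0.5) * np.pi * t)
        S += lam * np.outer(phi, phi)
    return np.max(np.abs(R - S))

errs = [mercer_error(K) for K in (1, 5, 25, 125)]
```

The sup-norm error decreases steadily with the number of terms, consistent with the uniform convergence noted above.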

Karhunen-Loève Expansion

In an expansion of the form (13-3), we show that the coefficients x_k will be pair-wise uncorrelated if, and only if, the basis functions φ_k are eigenfunctions of (13-10). Then, we show that the series converges in a mean-square sense.

Theorem 13-1: Suppose that finite-power random process X(t) has an expansion of the form


X(t) = Σ_{m=1}^∞ x_m φ_m(t)
                                           (13-36)
x_m = ∫₀ᵀ X(τ) φ_m*(τ) dτ

for some complete orthonormal set φ_k(t), k ≥ 1, of basis functions. If the coefficients x_n satisfy



E[x_n x_m*] = λ_n, n = m
            = 0,  n ≠ m    (13-37)

(i.e., the coefficients are pair-wise uncorrelated and x_n has a variance equal to eigenvalue λ_n), then the basis functions φ_n(t) must be eigenfunctions of (13-9); that is, they must satisfy

∫₀ᵀ R(t,τ) φ_n(τ) dτ = λ_n φ_n(t), 0 ≤ t ≤ T.    (13-38)

Proof: Multiply the expansion in (13-36) (the first equation in (13-36)) by x_n*, take the expectation, and use (13-37) to obtain

E[X(t) x_n*] = Σ_{m=1}^∞ E[x_m x_n*] φ_m(t) = E[|x_n|²] φ_n(t) = λ_n φ_n(t).    (13-39)
m =1

Now, multiply the complex conjugate of the second equation in (13-36) by X(t), and take the

expectation, to obtain

E[X(t) x_n*] = ∫₀ᵀ E[X(t) X*(τ)] φ_n(τ) dτ = ∫₀ᵀ R(t,τ) φ_n(τ) dτ.    (13-40)

Finally, equate (13-39) and (13-40) to obtain

∫₀ᵀ R(t,τ) φ_n(τ) dτ = λ_n φ_n(t), 0 ≤ t ≤ T,

where λ_n is given by (13-37). In addition to this result, the K-L coefficients will be orthogonal if the orthonormal basis functions satisfy (13-38).

Theorem 13-2: If the orthonormal basis functions φ_n(t) are eigenfunctions of (13-38), the coefficients x_k will be orthogonal.

Proof: Suppose the orthonormal basis functions φ_n(t) satisfy integral equation (13-38). Compute the expected value

E[x_n x_m*] = E[ { ∫₀ᵀ X(t) φ_n*(t) dt } x_m* ] = ∫₀ᵀ E[X(t) x_m*] φ_n*(t) dt.    (13-41)

Now, use (13-39) to replace the expectation in (13-41) and obtain

E[x_n x_m*] = λ_m ∫₀ᵀ φ_m(t) φ_n*(t) dt = λ_m δ_mn,

which shows that the coefficients are pair-wise uncorrelated. Theorems 13-1 and 13-2 establish

the claim that the xk will be uncorrelated if, and only if, the basis functions satisfy integral equation

(13-38). Next, we show mean square convergence of the K-L series.
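The uncorrelatedness claimed by Theorems 13-1 and 13-2 can be observed in simulation. A sketch (an added illustration; the Wiener process with 2D = 1, T = 1 and its eigenfunctions from Example 13-1 are the assumed test case): project simulated sample paths onto the first few eigenfunctions and inspect the sample covariance of the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, M = 1000, 5000
dt = 1.0 / Nt
t = (np.arange(Nt) + 1) * dt

# Wiener sample paths with 2D = 1 (independent increments of variance dt)
X = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(M, Nt)), axis=1)

idx = np.array([0.5, 1.5, 2.5])                        # n - 1/2 for n = 1, 2, 3
Phi = np.sqrt(2.0) * np.sin(idx[:, None] * np.pi * t)  # eigenfunctions (13-59)
coef = X @ Phi.T * dt                                  # x_n = int X(t) phi_n(t) dt

C = np.cov(coef, rowvar=False)                         # sample covariance of x_n
lam = 1.0 / ((idx * np.pi) ** 2)                       # eigenvalues (13-56)
```

The sample covariance matrix should be nearly diagonal, with diagonal entries close to the eigenvalues λ_n, exactly as (13-37) predicts.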

Theorem 13-3: Let X(t) be a finite-power random process on [0, T]. The Karhunen-Loève expansion

X(t) = Σ_{m=1}^∞ x_m φ_m(t)
                                           (13-42)
x_m = ∫₀ᵀ X(τ) φ_m*(τ) dτ,

where the coefficients are pair-wise uncorrelated and the basis functions satisfy the integral equation (13-38), converges in the mean square sense.

Proof: Evaluate the mean-square error between the series and the process to obtain

E[ | X(t) − Σ_{m=1}^∞ x_m φ_m(t) |² ]

= E[ X(t) ( X*(t) − Σ_{m=1}^∞ x_m* φ_m*(t) ) ] − E[ ( Σ_{n=1}^∞ x_n φ_n(t) ) ( X*(t) − Σ_{m=1}^∞ x_m* φ_m*(t) ) ].    (13-43)

On the right-hand side of (13-43), the first term is

E[ X(t) ( X*(t) − Σ_{m=1}^∞ x_m* φ_m*(t) ) ] = R(t,t) − Σ_{m=1}^∞ λ_m φ_m(t) φ_m*(t) = 0    (13-44)

(E[X(t) x_m*] = λ_m φ_m(t), first established by (13-39), was used here). The fact that the right hand side of (13-44) is zero follows from Mercer's Theorem (discussed in Property 15 above). On the right-hand side of (13-43), the second term can be expressed as

E[ ( Σ_{n=1}^∞ x_n φ_n(t) ) ( X*(t) − Σ_{m=1}^∞ x_m* φ_m*(t) ) ]

= Σ_{n=1}^∞ E[x_n X*(t)] φ_n(t) − Σ_{n=1}^∞ Σ_{m=1}^∞ E[x_n x_m*] φ_n(t) φ_m*(t)
                                                                            (13-45)
= Σ_{n=1}^∞ λ_n φ_n*(t) φ_n(t) − Σ_{n=1}^∞ λ_n φ_n(t) φ_n*(t)

= 0.

On the right-hand-side of (13-45), E[x_n X*(t)] was evaluated with the aid of (13-39); also, the fact that the coefficients are uncorrelated was used in (13-45). Equations (13-43) through (13-45) imply

E[ | X(t) − Σ_{m=1}^∞ x_m φ_m(t) |² ] = 0,    (13-46)


so the K-L expansion converges in the mean square sense.

As it turns out, the K-L expansion need contain only eigenfunctions that correspond to nonzero eigenvalues. Suppose φ(t) is an eigenfunction that corresponds to eigenvalue λ = 0. Then the corresponding coefficient x_φ has a second moment given by

E[x_φ x_φ*] = E[ | ∫₀ᵀ X(t) φ*(t) dt |² ] = E[ ∫₀ᵀ∫₀ᵀ X(t) X*(τ) φ*(t) φ(τ) dt dτ ]

= ∫₀ᵀ∫₀ᵀ R(t,τ) φ*(t) φ(τ) dt dτ = ∫₀ᵀ φ*(t) [ ∫₀ᵀ R(t,τ) φ(τ) dτ ] dt    (13-47)

= 0.

That is, in the K-L expansion, the coefficient x_φ of φ(t) has zero variance, and it need not be included in the expansion.

Example 13-1 (K-L Expansion of the Wiener Process): From Chapter 6, recall that the

Wiener process X(t), t ≥ 0, has the autocorrelation function

R(t₁,t₂) = 2D min{t₁,t₂},    (13-48)

where D is the diffusion constant. Substitute (13-48) into (13-38) and obtain

2D ∫₀ᵀ min{t,τ} φ_n(τ) dτ = λ_n φ_n(t)    (13-49)

2D ∫₀ᵗ τ φ_n(τ) dτ + 2Dt ∫ₜᵀ φ_n(τ) dτ = λ_n φ_n(t),    (13-50)

for 0 ≤ t ≤ T. With respect to t, we must differentiate (13-50) twice; the first derivative produces


2D ∫ₜᵀ φ_n(τ) dτ = λ_n φ_n′(t),    (13-51)

where φ_n′ denotes the time derivative of φ_n. Differentiate (13-51) to obtain

φ_n″(t) + (2D/λ_n) φ_n(t) = 0,    (13-52)

a second-order differential equation in the eigenfunction φ_n. A general solution of (13-52) is

φ_n(t) = α_n sin ω_n t + β_n cos ω_n t,  ω_n = √(2D/λ_n),    (13-53)

where α_n, β_n and ω_n are constants that must be chosen so that φ_n satisfies appropriate boundary conditions. Evaluate (13-50) at t = 0 to see

φ_n(0) = 0    (13-54)

for all n. Because of (13-54), Equation (13-53) implies that all β_n ≡ 0. In a similar manner, Equation (13-51) implies that φ_n′(T) = 0, a result that leads to the conclusion

ω_n = √(2D/λ_n) = (2n − 1)π/(2T) = (n − ½)π/T,  n = 1, 2, 3, …    (13-55)

Equation (13-55) implies that the eigenvalues are given by

λ_n = 2D T² / ( (n − ½)² π² ),  n = 1, 2, 3, …    (13-56)

And, the normalization condition (13-2) can be invoked to obtain

∫₀ᵀ (α_n sin ω_n t)² dt = α_n² T / 2 = 1,    (13-57)

so that

α_n = √(2/T).    (13-58)

After using β_n ≡ 0, (13-58) and (13-55) in Equation (13-53), the eigenfunctions can be expressed as

φ_n(t) = √(2/T) sin( (n − ½)πt/T ),  0 ≤ t ≤ T.    (13-59)

Finally, the K-L expansion of the Wiener process is

X(t) = √(2/T) Σ_{n=1}^∞ x_n sin( (n − ½)πt/T ),  0 ≤ t ≤ T,    (13-60)

where the uncorrelated coefficients are given by

x_n = √(2/T) ∫₀ᵀ X(t) sin( (n − ½)πt/T ) dt.    (13-61)

Furthermore, the coefficient x_n has variance λ_n given by (13-56).
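Running (13-60) in reverse gives a way to synthesize approximate Wiener sample paths: draw independent Gaussian coefficients x_n with variance λ_n and sum the series. A short sketch (added illustration; 2D = 1, T = 1, and the truncation level are assumed choices), which checks that the synthesized paths have Var[X(t)] = 2D·min(t,t) = t:

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, M = 1.0, 200, 20000
t = np.linspace(0.0, T, 101)
n = np.arange(1, K + 1)

lam = T**2 / (((n - 0.5) * np.pi) ** 2)                              # (13-56), 2D = 1
Phi = np.sqrt(2.0 / T) * np.sin((n[:, None] - 0.5) * np.pi * t / T)  # (13-59)

# draw independent coefficients x_n ~ N(0, lam_n) and synthesize via (13-60)
coefs = rng.normal(0.0, np.sqrt(lam), size=(M, K))
X = coefs @ Phi                                        # M approximate Wiener paths

emp_var = X.var(axis=0)                                # should approximate t
```

The empirical variance tracks the line Var[X(t)] = t, up to Monte-Carlo noise and the small bias from truncating the series at K terms.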

Example 13-2: Consider the random process

X(t) = A cos(ω₀ t + θ),    (13-62)

where A and ω₀ are constants, and θ is a random variable that is uniformly distributed on (−π, π]. As shown in Chapter 7, the autocorrelation of X(t) is

R(τ) = (A²/2) cos ω₀τ,    (13-63)

a function with period T₀ = 2π/ω₀. Substitute (13-63) into (13-10) to obtain

∫₀^{T₀} (A²/2) cos ω₀(t − τ) φ_n(τ) dτ = λ_n φ_n(t),  0 ≤ t ≤ T₀.    (13-64)

The eigenvalues and eigenfunctions are found easily. First, use Mercer's theorem to write

R(t − τ) = Σ_{k=1}^∞ λ_k φ_k(t) φ_k*(τ)
                                                          (13-65)
= (A²/2) cos ω₀(t − τ) = (A²/2) cos ω₀t cos ω₀τ + (A²/2) sin ω₀t sin ω₀τ.

Note that this kernel is degenerate. After normalization, the eigenfunctions that correspond to nonzero eigenvalues can be written as

φ₁(t) = √(2/T₀) cos ω₀t
                                                          (13-66)
φ₂(t) = √(2/T₀) sin ω₀t.

Both of these eigenfunctions correspond to the eigenvalue λ = T₀A²/4; note that λ = T₀A²/4 has an eigenspace of dimension two. Also, note that there are a countably infinite number of eigenfunctions in the null space of the operator. That is, for k ≠ 1, the eigenfunctions

φ_{1k}(t) = √(2/T₀) cos kω₀t
                                                          (13-67)
φ_{2k}(t) = √(2/T₀) sin kω₀t

correspond to the eigenvalue λ = 0. The K-L expansion of random process X(t) is

X(t) = x₁ √(2/T₀) cos ω₀t + x₂ √(2/T₀) sin ω₀t,    (13-68)

where

x₁ = +A √(T₀/2) cos θ
                                                          (13-69)
x₂ = −A √(T₀/2) sin θ.

As expected, we have

E[x₁x₂] = −E[ (A²T₀/2) sin θ cos θ ] = −E[ (A²T₀/4) sin 2θ ] = −(A²T₀/4)(1/2π) ∫_{−π}^{π} sin(2θ) dθ = 0

E[x₁²] = E[ (A²T₀/2) cos²θ ] = A²T₀/4    (13-70)

E[x₂²] = E[ (A²T₀/2) sin²θ ] = A²T₀/4.
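A quick Monte-Carlo check of (13-69) and (13-70) (added illustration; the values A = 2 and T₀ = 1 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A, T0, M = 2.0, 1.0, 200000                  # assumed amplitude, period, samples
theta = rng.uniform(-np.pi, np.pi, M)        # theta uniform on (-pi, pi]

x1 = A * np.sqrt(T0 / 2) * np.cos(theta)     # coefficients (13-69)
x2 = -A * np.sqrt(T0 / 2) * np.sin(theta)

cross = np.mean(x1 * x2)                     # should be near 0, per (13-70)
v1, v2 = np.var(x1), np.var(x2)              # both should be near A^2 T0 / 4
```

Both sample variances land near A²T₀/4 = 1 for these parameters, and the sample cross-moment is near zero, matching (13-70).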

K-L Expansion for Processes with Rational Spectra

Suppose X(t) is a wide sense stationary process with a rational power spectrum. That is,

the power spectrum of X can be represented as


S(ω) = N(ω²) / D(ω²),    (13-71)

where N and D are polynomials. Such a process occurs if white noise is passed through a linear, time-invariant filter. Hence, many applications are served well by modeling their processes as having a rational power spectrum.

As it turns out, a process with a rational power spectrum can be expanded in a K-L

expansion where the eigenfunctions are non-harmonically related sine and cosine functions. For

such a case, the eigenvalues and eigenfunctions can be found. The example that follows illustrates

a general method for solving for the eigenfunctions.

Example 13-4: Let X(t) be a process with power spectrum

S(ω) = 2αP / (ω² + α²),  −∞ < ω < ∞,  P > 0, α > 0.    (13-72)

Process X(t) has the autocorrelation function

R(τ) = F⁻¹[S(ω)] = P exp(−α|τ|).    (13-73)

For the related eigenfunction/eigenvalue problem, the integral equation is

∫_{−T}^{T} P e^{−α|t−u|} φ(u) du = λ φ(t),  −T ≤ t ≤ T.    (13-74)

An analysis leading to the eigenvalues and eigenfunctions is less complicated if a symmetric interval [−T, T] is used (of course, our expansion will be valid on [0, T]). We can write (13-74) as

λ φ(t) = ∫_{−T}^{t} P e^{−α(t−u)} φ(u) du + ∫_{t}^{T} P e^{−α(u−t)} φ(u) du,  −T ≤ t ≤ T.    (13-75)


With respect to t, differentiate (13-75) to obtain

λ φ′(t) = −αP e^{−αt} ∫_{−T}^{t} e^{αu} φ(u) du + αP e^{αt} ∫_{t}^{T} e^{−αu} φ(u) du.    (13-76)

Once again, differentiate (13-76) to obtain

λ φ″(t) = α²P e^{−αt} ∫_{−T}^{t} e^{αu} φ(u) du − αP e^{−αt} e^{αt} φ(t)
                                                                            (13-77)
        + α²P e^{αt} ∫_{t}^{T} e^{−αu} φ(u) du − αP e^{αt} e^{−αt} φ(t),

which can be written as

λ φ″(t) = α² ∫_{−T}^{T} P e^{−α|t−u|} φ(u) du − 2αP φ(t).    (13-78)

Now, multiply (13-74) by α² and use the product to eliminate the integral in (13-78); this procedure results in

φ″(t) = ( α² − 2αP/λ ) φ(t).    (13-79)

There are no zero eigenvalues since R is positive definite. Inspection of (13-79) reveals that the three cases

i) 0 < λ < 2P/α,

ii) λ = 2P/α,

iii) λ > 2P/α

must be considered.

Case i) 0 < λ < 2P/α



We start by defining

b² ≡ (2αP/λ) − α²,  0 < b² < ∞,    (13-80)

which can be solved for

λ = 2αP / ( (α − jb)(α + jb) ) = 2αP / (α² + b²).    (13-81)

In terms of b, the general, complex-valued solution of (13-79) is

φ(t) = c₁ e^{jbt} + c₂ e^{−jbt},    (13-82)

where c₁ and c₂ are complex constants. Plug (13-82) into integral equation (13-75) to obtain

λ( c₁e^{jbt} + c₂e^{−jbt} )

= P e^{−αt} ∫_{−T}^{t} e^{αu}( c₁e^{jbu} + c₂e^{−jbu} ) du + P e^{αt} ∫_{t}^{T} e^{−αu}( c₁e^{jbu} + c₂e^{−jbu} ) du

= P e^{−αt} [ c₁ e^{(α+jb)u}/(α+jb) + c₂ e^{(α−jb)u}/(α−jb) ]_{u=−T}^{u=t} − P e^{αt} [ c₁ e^{−(α−jb)u}/(α−jb) + c₂ e^{−(α+jb)u}/(α+jb) ]_{u=t}^{u=T}

= P [ c₁e^{jbt}/(α+jb) + c₂e^{−jbt}/(α−jb) + c₁e^{jbt}/(α−jb) + c₂e^{−jbt}/(α+jb) ]    (13-83)

  − P e^{−αt} [ c₁ e^{−(α+jb)T}/(α+jb) + c₂ e^{−(α−jb)T}/(α−jb) ] − P e^{αt} [ c₁ e^{−(α−jb)T}/(α−jb) + c₂ e^{−(α+jb)T}/(α+jb) ].

Now, substitute (13-81) for λ on the left-hand-side of (13-83); then, cancel out like terms to


obtain the requirement

0 = e^{−αt} [ c₁ e^{−(α+jb)T}/(α+jb) + c₂ e^{−(α−jb)T}/(α−jb) ] + e^{αt} [ c₁ e^{−(α−jb)T}/(α−jb) + c₂ e^{−(α+jb)T}/(α+jb) ].    (13-84)

We must find the values of b (i.e., the frequencies of the eigenfunctions) for which equality is achieved in (13-84). Note that both bracket terms must vanish identically to achieve equality for all time t. However, for c₁ ≠ ±c₂, neither bracket will vanish for any real b. Hence, we require c₁ = ±c₂ in order to obtain equality in (13-84). First, consider c₁ = −c₂; to zero the first bracket we must have

e^{−jbT}/(α+jb) − e^{jbT}/(α−jb) = [ e^{−jbT}(α−jb) − e^{jbT}(α+jb) ] / ( (α+jb)(α−jb) )

= [ −2jα sin(bT) − 2jb cos(bT) ] / ( (α+jb)(α−jb) )    (13-85)

= 0.

To obtain zero in this last expression, we must have

α sin(bT) + b cos(bT) = 0.    (13-86)

Finally, this leads to the requirement

tan(bT) = −b/α.    (13-87)

With c₁ = −c₂, the second bracket in (13-84) is zero if (13-87) holds. Hence, the values of b that solve (13-87) are roots of (13-84), and they are frequencies of the eigenfunctions.


Next, we must analyze the case c₁ = c₂ (which is similar to the case c₁ = −c₂ just finished). For c₁ = c₂, we get

tan(bT) = α/b.    (13-88)

Hence, the permissible frequencies of the eigenfunctions are given by the union

{ b ≥ 0 : tan(bT) = −b/α } ∪ { b ≥ 0 : tan(bT) = α/b }.    (13-89)
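The transcendental equations in (13-89) have one root per branch of tan(bT) and are easy to solve numerically. A sketch (added illustration; the parameter choice αT = 2 mirrors Figure 13-1, and the bracketing intervals and bisection routine are assumptions of this sketch) that finds the first frequency of each family and the corresponding eigenvalues from (13-90):

```python
import numpy as np

alpha, T, P = 2.0, 1.0, 1.0                  # assumed parameters (alpha*T = 2)

def bisect(f, lo, hi, iters=200):
    # simple bisection; assumes f changes sign exactly once on [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eps = 1e-9
# cosine-family branch: tan(bT) = alpha/b has a root in (0, pi/(2T))
b1 = bisect(lambda b: np.tan(b * T) - alpha / b, eps, np.pi / (2 * T) - eps)
# sine-family branch: tan(bT) = -b/alpha has a root in (pi/(2T), pi/T)
b2 = bisect(lambda b: np.tan(b * T) + b / alpha, np.pi / (2 * T) + eps, np.pi / T - eps)

lam1 = 2 * alpha * P / (alpha**2 + b1**2)    # eigenvalues via (13-90)
lam2 = 2 * alpha * P / (alpha**2 + b2**2)
```

Higher frequencies come from the same two equations on successive branches of the tangent, and the eigenvalues decrease as the frequencies b_k increase.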

These frequencies can be found numerically. Figure 13-1 depicts graphical solutions of

(13-89) for the first nine frequencies. A value of αT = 2 was used to construct the figure.

[Figure 13-1: Graphical display of the b_k, the frequencies of the eigenfunctions (αT = 2).]

Note


that bk, k odd, form a decreasing sequence of positive numbers, while bk, k even, form a

decreasing sequence of negative numbers.

Once the frequencies b_k are found, they can be used to determine the eigenvalues

λ_k = 2αP / (α² + b_k²),  k = 1, 2, 3, …    (13-90)

The frequencies b_k, k odd, were obtained by setting c₁ = c₂. For this case, (13-82) yields

φ_k(t) = l_k cos b_k t,  k odd,    (13-91)

where constant l_k is chosen to normalize the eigenfunction. That is, l_k must satisfy

∫_{−T}^{T} l_k² cos²(b_k t) dt = 1,    (13-92)

which leads to

l_k = 1 / √( T[1 + Sa(2b_kT)] ),  k odd,    (13-93)

where Sa(x) ≡ sin(x)/x. Hence, for k odd, we have the eigenfunctions

φ_k(t) = cos(b_k t) / √( T[1 + Sa(2b_kT)] ),  −T ≤ t ≤ T, k odd.    (13-94)

The frequencies bk, k even, were obtained by setting c1 = -c2. An analysis similar to the

case just presented yields the eigenfunctions

φ_k(t) = sin(b_k t) / √( T[1 − Sa(2b_kT)] ),  −T ≤ t ≤ T, k even.    (13-95)

Observations:

1. Eigenfunctions are cosines and sines at frequencies that are not harmonically related.

2. For each n, the value of bnT is independent of T. Hence, as T increases, the value of bn

decreases, so the frequencies are inversely related to T.

3. As bT increases, the upper intersections (k odd) occur at approximately b_kT ≈ (k−1)π/2, and the lower intersections occur at approximately b_kT ≈ (k−1)π/2, k even. Hence, the higher index eigenfunctions are approximately a set of harmonically related sines and cosines. For large k we have

φ_k(t) ≈ cos( (k−1)πt/(2T) ) / √( T[1 + Sa(2b_kT)] ),  −T ≤ t ≤ T, k odd
                                                                            (13-96)
φ_k(t) ≈ sin( (k−1)πt/(2T) ) / √( T[1 − Sa(2b_kT)] ),  −T ≤ t ≤ T, k even.
This concludes the case 0 < λ < 2P/α.

Case ii) λ = 2P/α

For this case, Equation (13-79) becomes

d²φ(t)/dt² = 0.    (13-97)

Two independent solutions to this equation are φ(t) = t and φ(t) = 1. By direct substitution, it is seen that neither of these satisfies integral equation (13-74). Hence, this case yields no eigenfunctions and eigenvalues.

Case iii) λ > 2P/α

For this case, Equation (13-79) becomes

d²φ(t)/dt² = ( α² − 2αP/λ ) φ(t).    (13-98)

This equation has two independent solutions given by

φ₁(t) = e^{βt}
                                                          (13-99)
φ₂(t) = e^{−βt},

where

β ≡ √( α² − 2αP/λ ) > 0.    (13-100)

By direct substitution, it is seen that neither of these satisfies integral equation (13-74). Hence, this case yields no eigenfunctions and eigenvalues.

Example 13-5: In radar detection theory, we must detect the presence of a signal given a T-

second record of receiver output data. There are two possibilities (termed hypotheses). First, the

record may consist only of receiver noise; no target is present for this case. The second possibility

is that the data record contains a target reflection embedded in the receiver noise; in this case, a

target is present. You must filter the record of data and make a decision regarding the

presence/absence of a target.

Let y(t), 0 ≤ t ≤ T, denote the record of receiver output data. After receiving the complete time record, we must decide between the hypotheses


H₀: y(t) = n(t),         only noise - target not present
                                                          (13-101)
H₁: y(t) = s(t) + n(t),  signal + noise - target is present.

Here, n(t) is zero-mean Gaussian noise that is described by positive definite correlation function R(t,τ). Note that we allow non-white and non-stationary noise in this example. s(t) is the reflected signal, which we assume to be known (usually, s(t) is a scaled and time-shifted version of the transmitted signal). At time T, we must decide between H₀ and H₁.

We expand the received signal y(t) in a K-L expansion of the form

y(t) = Σ_{k=1}^∞ y_k φ_k(t)
                                                          (13-102)
y_k ≡ ∫₀ᵀ y(t) φ_k*(t) dt,

where φ_k(t) are the eigenfunctions of (13-10), an integral equation that utilizes the kernel R(t,τ) describing the receiver noise. The y_k are uncorrelated Gaussian random variables with variance equal to the positive eigenvalues of the integral equation; that is, VAR[y_k] = λ_k.

The received signal y(t) may be only noise, or it may be signal + noise. Hence, the conditional mean of y_k is

E[y_k | H₀] = E[ ∫₀ᵀ n(t) φ_k*(t) dt ] = 0
                                                          (13-103)
E[y_k | H₁] = E[ ∫₀ᵀ ( s(t) + n(t) ) φ_k*(t) dt ] = s_k,

where

s_k = ∫₀ᵀ s(t) φ_k*(t) dt

are the coefficients in the expansion

s(t) = Σ_{k=1}^∞ s_k φ_k(t)    (13-104)

of signal s(t).

of signal s(t). Under both hypotheses, k has a variance given by

Y Y
Var k H 0 = Var k H 1 = k (13-105)

To start with, our statistical test will use only the first n K-L coefficients k, 1 k n.

We form the vector

r T
V = 1 2 L n (13-106)

and the two densities

r r
Y LM n (2 ) O expFG n 2 / IJ
P0 ( V) P( V H 0 ) =
MNk=1 k PQ H k=1 k k K
. (13-107)
L O F n
I
P1 ( V) P( VYH 1 ) = M (2 k ) P expG ( k sk ) 2 / k J
r r n

MNk =1 Q H k =1 K

P0 (alternatively, P1) is the density for the n coefficients when H0 (alternatively, H1) is true.

We will use a classical likelihood ratio test (see C.W. Helstrom, Statistical Theory of
r
Signal Detection, 2nd edition) to make a decision between H0 and H1. First, given V, we compute

the likelihood ratio

603CH13.DOC 13-29
EE603 Class Notes Version 1 John Stensby

r
( V)
r
P1( V)
r
n LM OP
= exp (2s k k s2k ) / 2 k . (13-108)
P0 ( V) k =1 MN PQ

in terms of the known $s_k$ and $\lambda_k$. Then, we compare the computed $\Lambda$ to a user-defined threshold $\Lambda_0$ to make our decision (there are several well-known methods for setting the threshold $\Lambda_0$). We decide hypothesis $H_1$ if $\Lambda$ exceeds the threshold, and $H_0$ if $\Lambda$ is less than the threshold. Stated tersely, our test can be expressed as

$$\Lambda(\mathbf{V}) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \Lambda_0$$  (13-109)

The inequality (13-109) will be unchanged, and the decision process will not be affected, if we apply any monotone increasing function to both sides. Due to the exponential function in (13-108), we take the logarithm and obtain

$$G_n \equiv \sum_{k=1}^{n} \frac{s_k}{\lambda_k}\,\eta_k \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\Lambda_0 + \frac{1}{2}\sum_{k=1}^{n} \frac{s_k^2}{\lambda_k} \equiv G_n^0$$  (13-110)
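For finite $n$, the test (13-110) reduces to comparing one number against another. A minimal sketch, using hypothetical values for $s_k$, $\lambda_k$, and the observed coefficients $\eta_k$:

```python
import numpy as np

# Hypothetical known quantities and observed K-L coefficients.
lam = np.array([2.0, 1.0, 0.5])          # eigenvalues lambda_k
s = np.array([1.5, -0.8, 0.3])           # signal coefficients s_k
eta = np.array([1.9, -0.5, 0.4])         # observed eta_k
Lambda0 = 1.0                            # threshold (so ln Lambda0 = 0)

q = s / lam                              # q_k = s_k / lambda_k
G_n = np.dot(q, eta)                     # statistic, left side of (13-110)
G_n0 = np.log(Lambda0) + 0.5 * np.dot(q, s)  # threshold, right side of (13-110)

decision = "H1" if G_n > G_n0 else "H0"
print(G_n, G_n0, decision)               # G_n ~ 2.065 > G_n0 ~ 0.9725: decide H1
```

Note that the threshold $G_n^0$ depends only on the known signal and noise description, so it can be computed once, before any data arrive.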

To simplify (13-110), we define $q_k \equiv s_k/\lambda_k$. The $q_k$ are coefficients in the generalized Fourier series expansion of a function $q(t)$; that is, the coefficients $q_k$ determine the function

$$q(t) \equiv \sum_{k=1}^{\infty} q_k \phi_k(t)$$  (13-111)

As will be discussed shortly, function $q(t)$ is the solution of an integral equation based on the kernel $R(t,\tau)$. In terms of the coefficients $q_k$, (13-110) can be written as

$$G_n \equiv \sum_{k=1}^{n} q_k \eta_k \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\Lambda_0 + \frac{1}{2}\sum_{k=1}^{n} q_k s_k \equiv G_n^0$$  (13-112)

The two sums in (13-112) converge as $n \to \infty$. By a general form of Parseval's theorem, we have

$$\lim_{n\to\infty} \sum_{k=1}^{n} q_k \eta_k = \int_0^T q(t)\,\eta(t)\,dt$$
$$\lim_{n\to\infty} \sum_{k=1}^{n} q_k s_k = \int_0^T q(t)\,s(t)\,dt$$  (13-113)
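The Parseval relation (13-113) holds for any complete orthonormal basis. The sketch below checks it numerically with an illustrative cosine basis on $[0, T]$ and two stand-in square-integrable functions (none of these choices come from the detection problem itself):

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 4001)
w = np.full_like(t, t[1] - t[0])     # trapezoidal quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

# Orthonormal cosine basis on [0, T] (an arbitrary illustrative basis).
def phi(k):
    if k == 0:
        return np.ones_like(t) / np.sqrt(T)
    return np.sqrt(2.0 / T) * np.cos(k * np.pi * t / T)

# Two illustrative square-integrable functions standing in for q(t) and s(t).
q = t**2
s = np.sin(2.0 * np.pi * t)

n = 200
qk = np.array([np.dot(q * phi(k), w) for k in range(n)])
sk = np.array([np.dot(s * phi(k), w) for k in range(n)])

lhs = np.dot(qk, sk)                 # partial sum of q_k * s_k
rhs = np.dot(q * s, w)               # quadrature of the integral of q(t) s(t)
print(lhs, rhs)                      # the two agree as n grows
```

Here the integral evaluates to $-1/2\pi$, and the partial sum of coefficient products approaches the same value, as (13-113) asserts.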

Use (13-113), and take the limit of (13-112), to obtain the decision criterion

$$G \equiv \int_0^T q(t)\,\eta(t)\,dt \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\Lambda_0 + \frac{1}{2}\int_0^T q(t)\,s(t)\,dt$$  (13-114)

As shown on the left-hand side of Equation (13-114), the statistic $G$ can be computed once the data record $\eta(t)$, $0 \le t \le T$, is known. Then, to make a decision between hypotheses $H_0$ and $H_1$, $G$ is compared to the threshold obtained by computing the right-hand side of (13-114).

Statistic $G$ can be obtained by a filtering operation, as illustrated by Figure 13-3. Simply pass received signal $\eta(t)$ through a filter with impulse response

$$h(t) \equiv q(T - t), \quad 0 \le t \le T,$$  (13-115)


[Figure omitted: a) a block diagram in which $\eta(t)$ drives a filter with impulse response $h(t) = q(T - t)$, $0 \le t \le T$, whose output is sampled at $t = T$ to produce $G = \int_0^T q(t)\,\eta(t)\,dt$; b) the statistical test of (13-114).]

Figure 13-3: a) Statistic $G$ generated by a filtering operation that is matched to the signal and noise environment. b) Statistical test description.

and sample the filter output at $t = T$ (the end of the integration period) to obtain the statistic $G$. This is the well-known matched filter for signal $s(t)$ embedded in Gaussian, nonstationary, correlated noise described by the correlation function $R(t,\tau)$.
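In discrete time, the equivalence between the correlation form of $G$ in (13-114) and the filter-and-sample form of Figure 13-3 is easy to verify. The waveforms $q(t)$ and $\eta(t)$ below are hypothetical stand-ins, not solutions of the detection problem:

```python
import numpy as np

T = 1.0
N = 1000
dt = T / N
t = np.arange(N) * dt

# Hypothetical q(t) and received record eta(t) sampled on [0, T).
q = np.exp(-t)                       # stand-in for the Fredholm solution
eta = np.sin(2 * np.pi * t) + 0.1 * np.cos(5 * t)

# Correlation form: G = integral of q(t) * eta(t) dt, as in (13-114).
G_direct = np.sum(q * eta) * dt

# Filtering form: pass eta through h(t) = q(T - t) and sample at t = T.
h = q[::-1]                          # discrete version of q(T - t)
G_filter = np.convolve(eta, h)[N - 1] * dt   # output sample at t = T

print(G_direct, G_filter)            # identical up to round-off
```

The two agree because evaluating the convolution at the sample index corresponding to $t = T$ time-reverses $h$ back into $q$, reproducing the correlation sum exactly.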

As described above, function $q(t)$ has expansion (13-111) with coefficients $q_k \equiv s_k/\lambda_k$. However, we now show that $q(t)$ is the solution of a well-known integral equation. First, write (13-111) with $\tau$ as the time variable. Then, multiply the result by $R(t,\tau)$, and integrate from $\tau = 0$ to $\tau = T$ to obtain

$$\int_0^T R(t,\tau)\,q(\tau)\,d\tau = \sum_{k=1}^{\infty} q_k \int_0^T R(t,\tau)\,\phi_k(\tau)\,d\tau = \sum_{k=1}^{\infty} \left[\frac{s_k}{\lambda_k}\right] \lambda_k\,\phi_k(t),$$  (13-116)

where $s_k/\lambda_k$ has been substituted for $q_k$. On the right-hand side, cancel the eigenvalue $\lambda_k$, and use (13-104) to obtain the integral equation

$$\int_0^T R(t,\tau)\,q(\tau)\,d\tau = s(t), \quad 0 \le t \le T,$$  (13-117)

for the function $q(t)$ that determines the matched filter impulse response through (13-115). Equation (13-117) is the well-known Fredholm integral equation of the first kind.
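Numerically, (13-117) can be approximated by sampling $t$ and $\tau$ on a grid, which turns the kernel into a matrix and the integral into a matrix-vector product. A sketch with a hypothetical exponential covariance kernel and a hypothetical signal:

```python
import numpy as np

T = 1.0
N = 200
dt = T / N
t = (np.arange(N) + 0.5) * dt        # midpoint grid on [0, T]

# Hypothetical noise covariance kernel R(t, tau) = exp(-|t - tau|).
R = np.exp(-np.abs(t[:, None] - t[None, :]))

s = np.sin(np.pi * t)                # hypothetical known signal

# Discretize: integral of R(t, tau) q(tau) dtau = s(t)  becomes  (R dt) q = s.
q = np.linalg.solve(R * dt, s)

# Check that the quadrature of R(t, tau) q(tau) reproduces s(t).
resid = np.max(np.abs(R @ q * dt - s))
print(resid)                         # near machine precision
```

First-kind Fredholm equations are ill-conditioned in general, so in practice the grid size and any regularization must be chosen with care; this sketch only demonstrates the discretization itself.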


Special Case: Matched Filter for Signal in White Gaussian Noise

We consider the special case where the noise is white with correlation

$$R(t,\tau) = \sigma^2 \delta(t - \tau)$$  (13-118)

The Fredholm integral equation is solved easily for this case; simply substitute (13-118) into (13-117) and obtain

$$q(t) = s(t)/\sigma^2, \quad 0 \le t \le T.$$  (13-119)

So, according to (13-115), the matched filter for the white Gaussian noise case is

$$h(t) = s(T - t)/\sigma^2,$$  (13-120)

a folded, shifted, and scaled version of the original signal. Figure 13-4 illustrates a) the signal $s(t)$, b) the matched filter impulse response $h(t)$, and c) the filter output, including the sample point $t = T$, for the case $\sigma^2 = 1$.

[Figure omitted: a) $s(t)$, of unit peak amplitude on $[0, T]$; b) the matched filter impulse response $h(t)$, also of unit peak amplitude on $[0, T]$; c) the filter output on $[0, 2T]$, which reaches its peak value $T/3$ at the sampling time $t = T$ that produces the statistic $G$.]

Figure 13-4: a) Signal $s(t)$, b) matched filter impulse response $h(t)$ and c) filter output for the case $\sigma^2 = 1$. Note that the filter output is sampled at $t = T$ to produce the decision statistic $G$.
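For the white-noise case the whole receiver collapses to correlating against the known signal. The noise-free sketch below assumes a unit-slope ramp $s(t) = t/T$, which is consistent with the peak output value $T/3$ shown in Figure 13-4 (since $\int_0^T (t/T)^2\,dt = T/3$), and shows that the matched-filter output peaks at the sampling instant $t = T$:

```python
import numpy as np

T = 1.0
N = 500
dt = T / N
t = np.arange(N) * dt

sigma2 = 1.0
s = t / T                            # assumed ramp signal rising to 1 at t = T

q = s / sigma2                       # per (13-119)
h = q[::-1]                          # h(t) = s(T - t) / sigma^2, per (13-120)

eta = s.copy()                       # noise-free received record (signal alone)
y = np.convolve(eta, h) * dt         # filter output on [0, 2T)

G = y[N - 1]                         # output sample at t = T
print(G)                             # ~ T/3, the energy integral of s(t)
print(int(np.argmax(y)))             # peak index coincides with t = T
```

By the Cauchy-Schwarz inequality, the output of a filter matched to $s(t)$ is maximized at zero lag, which is why sampling at $t = T$ captures the peak.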
