Dept. of Telecomm. Eng., Faculty of EEE, HCMUT (SSP2013, DHT)
Chapter 3:
Spectrum Analysis
References

[1] Simon Haykin, Adaptive Filter Theory, Prentice Hall, 1996 (3rd Ed.), 2001 (4th Ed.).
[2] Steven M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, 1993.
[3] Alan V. Oppenheim, Ronald W. Schafer, Discrete-Time Signal Processing, Prentice Hall, 1989.
[4] Athanasios Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, 1991 (3rd Ed.), 2001 (4th Ed.).

3. Power Spectrum (1)


For a deterministic signal x(t), the spectrum is well defined: if X(\omega) represents its Fourier transform, i.e., if

X(\omega) = \int_{-\infty}^{+\infty} x(t)\, e^{-j\omega t}\, dt,    (3-1)

then |X(\omega)|^2 represents its energy spectrum. This follows from Parseval's theorem, since the signal energy is given by

E = \int_{-\infty}^{+\infty} |x(t)|^2\, dt = \frac{1}{2\pi} \int_{-\infty}^{+\infty} |X(\omega)|^2\, d\omega.    (3-2)

Thus |X(\omega)|^2 \Delta\omega represents the signal energy in the band (\omega, \omega + \Delta\omega).

[Figure: the signal x(t) and its energy spectrum |X(\omega)|^2; the shaded strip of width \Delta\omega indicates the energy in the band (\omega, \omega + \Delta\omega).]
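As a quick numerical illustration of (3-1)-(3-2) (not part of the original notes), the sketch below approximates the Fourier transform of a finite-energy pulse with an FFT and checks Parseval's relation; the pulse shape, time step and window are arbitrary choices.

```python
import numpy as np

dt = 1e-3                        # time step (s), arbitrary choice
t = np.arange(-5.0, 5.0, dt)     # observation window
x = np.exp(-t**2)                # deterministic finite-energy pulse

# Riemann-sum approximation of (3-1): X(omega) ~ sum x(t) e^{-j omega t} dt
X = np.fft.fft(x) * dt
domega = 2 * np.pi / (t.size * dt)           # frequency-grid spacing

E_time = np.sum(np.abs(x)**2) * dt                    # left-hand side of (3-2)
E_freq = np.sum(np.abs(X)**2) * domega / (2 * np.pi)  # right-hand side of (3-2)
print(E_time, E_freq)            # the two energies agree (Parseval)
```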
3. Power Spectrum (2)


However, for stochastic processes, a direct application of (3-1) generates a sequence of random variables for every \omega. Moreover, for a stochastic process, E\{|X(t)|^2\} represents the ensemble average power (instantaneous energy) at the instant t.

To obtain the spectral distribution of power versus frequency for stochastic processes, it is best to avoid infinite intervals to begin with, and start with a finite interval (-T, T) in (3-1). Formally, the partial Fourier transform of a process X(t) based on (-T, T) is given by

X_T(\omega) = \int_{-T}^{T} X(t)\, e^{-j\omega t}\, dt,    (3-3)

so that

\frac{|X_T(\omega)|^2}{2T} = \frac{1}{2T} \left| \int_{-T}^{T} X(t)\, e^{-j\omega t}\, dt \right|^2    (3-4)

represents the power distribution associated with that realization based on (-T, T). Notice that (3-4) represents a random variable for every \omega, and its ensemble average gives the average power distribution based on (-T, T). Thus
3. Power Spectrum (3)
P_T(\omega) = E\left\{ \frac{|X_T(\omega)|^2}{2T} \right\}
            = \frac{1}{2T} \int_{-T}^{T}\!\int_{-T}^{T} E\{X(t_1) X^*(t_2)\}\, e^{-j\omega(t_1 - t_2)}\, dt_1\, dt_2
            = \frac{1}{2T} \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1, t_2)\, e^{-j\omega(t_1 - t_2)}\, dt_1\, dt_2    (3-5)

represents the power distribution of X(t) based on (-T, T). If X(t) is assumed to be w.s.s., then R_{XX}(t_1, t_2) = R_{XX}(t_1 - t_2), and (3-5) simplifies to

P_T(\omega) = \frac{1}{2T} \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1 - t_2)\, e^{-j\omega(t_1 - t_2)}\, dt_1\, dt_2.

Letting \tau = t_1 - t_2, we get

P_T(\omega) = \frac{1}{2T} \int_{-2T}^{2T} R_{XX}(\tau)\, e^{-j\omega\tau}\, (2T - |\tau|)\, d\tau
            = \int_{-2T}^{2T} R_{XX}(\tau) \left(1 - \frac{|\tau|}{2T}\right) e^{-j\omega\tau}\, d\tau \ge 0    (3-6)

to be the power distribution of the w.s.s. process X(t) based on (-T, T). Finally, letting T \to \infty in (3-6), we obtain
3. Power Spectrum (4)
S_{XX}(\omega) = \lim_{T \to \infty} P_T(\omega) = \int_{-\infty}^{+\infty} R_{XX}(\tau)\, e^{-j\omega\tau}\, d\tau \ge 0    (3-7)

to be the power spectral density of the w.s.s. process X(t). Notice that

R_{XX}(\tau) \;\xleftrightarrow{\;\mathcal{F}\;}\; S_{XX}(\omega) \ge 0,    (3-8)

i.e., the autocorrelation function and the power spectrum of a w.s.s. process form a Fourier transform pair, a relation known as the Wiener-Khinchin theorem. From (3-8), the inverse formula gives

R_{XX}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} S_{XX}(\omega)\, e^{j\omega\tau}\, d\omega,    (3-9)

and in particular for \tau = 0 we get

\frac{1}{2\pi} \int_{-\infty}^{+\infty} S_{XX}(\omega)\, d\omega = R_{XX}(0) = E\{|X(t)|^2\} = P, \text{ the total power.}    (3-10)

From (3-10), the area under S_{XX}(\omega) represents the total power of the process X(t), and hence S_{XX}(\omega) truly represents the power spectrum.
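The construction in (3-4)-(3-7) suggests a direct estimator of S_XX(\omega): average |X_T(\omega)|^2/2T over independent realizations. The sketch below does this in discrete time, assuming (arbitrarily) a first-order autoregressive test process whose theoretical spectrum is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
a, N, trials = 0.8, 1024, 200        # AR(1) pole, record length, number of realizations

acc = np.zeros(N)
for _ in range(trials):
    w = rng.standard_normal(N + 100)
    x = np.empty_like(w)
    x[0] = w[0]
    for n in range(1, w.size):
        x[n] = a * x[n - 1] + w[n]   # AR(1) realization driven by unit-variance white noise
    x = x[100:]                      # discard the start-up transient
    acc += np.abs(np.fft.fft(x))**2 / N   # periodogram |X_N(omega)|^2 / N of this realization
P_est = acc / trials                 # ensemble average, the discrete-time analogue of (3-5)

omega = 2 * np.pi * np.arange(N) / N
S_true = 1.0 / np.abs(1 - a * np.exp(-1j * omega))**2   # known PSD of the AR(1) process
print(np.mean(np.abs(P_est - S_true) / S_true))         # shrinks as `trials` grows
```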
3. Power Spectrum (5)
The nonnegative-definiteness property of the autocorrelation function translates into the nonnegative property of its Fourier transform (the power spectrum):

\sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j^* R_{XX}(t_i - t_j) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} S_{XX}(\omega) \left| \sum_{i=1}^{n} a_i e^{j\omega t_i} \right|^2 d\omega \ge 0.    (3-11)

From (3-11), it follows that

R_{XX}(\tau) \text{ nonnegative-definite} \;\Longleftrightarrow\; S_{XX}(\omega) \ge 0.    (3-12)

[Figure: the power spectrum S_{XX}(\omega); the area S_{XX}(\omega)\Delta\omega represents the power in the band (\omega, \omega + \Delta\omega).]
3. Power Spectrum (6)
If X(t) is a real w.s.s. process, then R_{XX}(\tau) = R_{XX}(-\tau), so that

S_{XX}(\omega) = \int_{-\infty}^{+\infty} R_{XX}(\tau)\, e^{-j\omega\tau}\, d\tau
             = \int_{-\infty}^{+\infty} R_{XX}(\tau) \cos(\omega\tau)\, d\tau
             = 2 \int_{0}^{\infty} R_{XX}(\tau) \cos(\omega\tau)\, d\tau = S_{XX}(-\omega) \ge 0,    (3-13)

so that the power spectrum is an even function (in addition to being real and nonnegative).

3. Power Spectra and Linear Systems (1)
If a w.s.s. process X(t) with autocorrelation function R_{XX}(\tau) \leftrightarrow S_{XX}(\omega) is applied to a linear system with impulse response h(t), producing the output Y(t), then the cross-correlation function R_{XY}(\tau) and the output autocorrelation function R_{YY}(\tau) can be determined. From there,

R_{XY}(\tau) = R_{XX}(\tau) * h^*(-\tau), \qquad R_{YY}(\tau) = R_{XX}(\tau) * h^*(-\tau) * h(\tau).    (3-14)

[Figure: X(t) applied to the system h(t), producing Y(t).]

If

f(t) \leftrightarrow F(\omega), \qquad g(t) \leftrightarrow G(\omega),    (3-15)

then

f(t) * g(t) \leftrightarrow F(\omega)\, G(\omega),    (3-16)

since

\mathcal{F}\{f(t) * g(t)\} = \int_{-\infty}^{+\infty} \big(f(t) * g(t)\big)\, e^{-j\omega t}\, dt
 = \int_{-\infty}^{+\infty} f(\tau) \int_{-\infty}^{+\infty} g(t - \tau)\, e^{-j\omega t}\, dt\, d\tau
 = \int_{-\infty}^{+\infty} f(\tau)\, e^{-j\omega\tau}\, d\tau \int_{-\infty}^{+\infty} g(t')\, e^{-j\omega t'}\, dt'
 = F(\omega)\, G(\omega).    (3-17)
3. Power Spectra and Linear Systems (2)
Using (3-15)-(3-17) in (3-14), we get

S_{XY}(\omega) = \mathcal{F}\{R_{XY}(\tau)\} = S_{XX}(\omega)\, H^*(\omega),    (3-18)

since

\mathcal{F}\{h^*(-\tau)\} = \int_{-\infty}^{+\infty} h^*(-\tau)\, e^{-j\omega\tau}\, d\tau = \left( \int_{-\infty}^{+\infty} h(t)\, e^{-j\omega t}\, dt \right)^* = H^*(\omega),

where

H(\omega) = \int_{-\infty}^{+\infty} h(t)\, e^{-j\omega t}\, dt    (3-19)

represents the transfer function of the system, and

S_{YY}(\omega) = \mathcal{F}\{R_{YY}(\tau)\} = S_{XY}(\omega)\, H(\omega) = S_{XX}(\omega)\, |H(\omega)|^2.    (3-20)

From (3-18), the cross spectrum need not be real or nonnegative. However, the output power spectrum is real and nonnegative and is related to the input spectrum and the system transfer function as in (3-20). Eq. (3-20) can be used for system identification as well.

Example of thermal noise (Example 11.1, p. 351, [4]).
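The relations (3-18)-(3-20) can be checked with standard spectral estimators. The sketch below is one way to do it in discrete time (the FIR filter and record length are arbitrary choices); scipy's Welch and cross-spectral-density routines stand in for S_XX, S_YY and S_XY. Note that scipy's csd uses the convention conj(X)*Y, so the second check corresponds to the conjugate of (3-18).

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)             # (approximately) white w.s.s. input
b = np.array([1.0, -0.5, 0.25])              # FIR impulse response h(n), arbitrary choice
y = signal.lfilter(b, [1.0], x)              # Y = h * X

f, Sxx = signal.welch(x, nperseg=1024)
_, Syy = signal.welch(y, nperseg=1024)
_, Sxy = signal.csd(x, y, nperseg=1024)      # cross spectral density estimate
_, H = signal.freqz(b, worN=f, fs=1.0)       # transfer function on the same frequency grid

print(np.allclose(Syy, Sxx * np.abs(H)**2, rtol=0.1))    # output spectrum, cf. (3-20)
print(np.allclose(Sxy, Sxx * H, rtol=0.1, atol=1e-3))    # cross spectrum, conjugate form of (3-18)
```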
3. Power Spectra and Linear Systems (3)
W.s.s. white noise process: If W(t) is a w.s.s. white noise process, then

R_{WW}(\tau) = q\, \delta(\tau) \;\Longrightarrow\; S_{WW}(\omega) = q.    (3-21)

Thus the spectrum of a white noise process is flat, justifying its name. Notice that a white noise process is unrealizable, since its total power is indeterminate.

From (3-20), if the input to an unknown system is a white noise process, then the output spectrum is given by

S_{YY}(\omega) = q\, |H(\omega)|^2.    (3-22)

Notice that the output spectrum captures the system transfer function characteristics entirely, and for rational systems Eq. (3-22) may be used to determine the pole/zero locations of the underlying system.
3. Power Spectra and Linear Systems (4)
Example 3.1: A w.s.s. white noise process W(t) is passed through a low pass filter (LPF) with bandwidth B/2. Find the autocorrelation function of the output process.

Solution: Let X(t) represent the output of the LPF. Then from (3-22)

S_{XX}(\omega) = q\, |H(\omega)|^2 = \begin{cases} q, & |\omega| \le B/2 \\ 0, & |\omega| > B/2. \end{cases}    (3-23)

The inverse transform of S_{XX}(\omega) gives the output autocorrelation function:

R_{XX}(\tau) = \frac{1}{2\pi} \int_{-B/2}^{B/2} S_{XX}(\omega)\, e^{j\omega\tau}\, d\omega = \frac{1}{2\pi} \int_{-B/2}^{B/2} q\, e^{j\omega\tau}\, d\omega
            = \frac{qB}{2\pi}\, \frac{\sin(B\tau/2)}{B\tau/2} = \frac{qB}{2\pi}\, \mathrm{sinc}(B\tau/2).    (3-24)

[Figure: |H(\omega)|^2 equal to 1 on (-B/2, B/2), and the resulting sinc-shaped autocorrelation R_{XX}(\tau) with peak qB/2\pi.]
3. Power Spectra and Linear Systems (5)
Eq. (3-23) represents a colored noise spectrum, and (3-24) its autocorrelation function.

Example 3.2: Let

Y(t) = \frac{1}{2T} \int_{t-T}^{t+T} X(\tau)\, d\tau    (3-25)

represent a smoothing operation using a moving window on the input process X(t). Find the spectrum of the output Y(t) in terms of that of X(t).

Solution: If we define an LTI system with impulse response h(t) as in the figure (h(t) = 1/2T for |t| \le T, and zero otherwise), then in terms of h(t), (3-25) reduces to

Y(t) = \int_{-\infty}^{+\infty} h(t - \tau)\, X(\tau)\, d\tau = h(t) * X(t),    (3-26)

so that

S_{YY}(\omega) = S_{XX}(\omega)\, |H(\omega)|^2,    (3-27)

where

H(\omega) = \frac{1}{2T} \int_{-T}^{T} e^{-j\omega t}\, dt = \mathrm{sinc}(\omega T),    (3-28)
3. Power Spectra and Linear Systems (6)
so that

S_{YY}(\omega) = S_{XX}(\omega)\, \mathrm{sinc}^2(\omega T).    (3-29)

[Figure: the input spectrum S_{XX}(\omega) multiplied by \mathrm{sinc}^2(\omega T) to give the output spectrum S_{YY}(\omega).]

Notice that the effect of the smoothing operation in (3-25) is to suppress the high-frequency components of the input (beyond \pi/T), so the equivalent linear system acts as a low-pass filter (a continuous-time moving average) with bandwidth 2\pi/T in this case.
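A discrete-time sanity check of Example 3.2 (a sketch; the window length and record length are arbitrary): for white input, the output autocorrelation of a length-L moving average should be triangular, which is exactly the inverse transform of the squared (periodic) sinc appearing in (3-29).

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)           # white input, unit variance
L = 21                                       # moving-average length (discrete stand-in for 2T)
y = np.convolve(x, np.ones(L) / L, mode="valid")

lags = np.arange(L + 1)
Ryy = np.array([np.mean(y[:y.size - k] * y[k:]) for k in lags])   # sample autocorrelation

# For white input, R_YY(k) = (L - |k|)/L^2 for |k| < L and 0 beyond: the triangle whose
# Fourier transform is the squared (periodic) sinc of (3-29).
Ryy_theory = np.where(lags < L, (L - lags) / L**2, 0.0)
print(np.max(np.abs(Ryy - Ryy_theory)))      # small for long records
```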
3. Discrete-Time Processes (1)
For a discrete-time w.s.s. stochastic process X(nT) with autocorrelation sequence \{r_k\}, proceeding as above (or formally defining a continuous-time process X(t) = \sum_n X(nT)\, \delta(t - nT)), we get the corresponding autocorrelation function to be

R_{XX}(\tau) = \sum_{k=-\infty}^{+\infty} r_k\, \delta(\tau - kT).

Its Fourier transform is given by

S_{XX}(\omega) = \sum_{k=-\infty}^{+\infty} r_k\, e^{-j\omega kT} \ge 0,    (3-30)

and it defines the power spectrum of the discrete-time process X(nT). From (3-30),

S_{XX}(\omega) = S_{XX}(\omega + 2\pi/T),    (3-31)

so that S_{XX}(\omega) is a periodic function with period

2B = \frac{2\pi}{T}.    (3-32)
3. Discrete-Time Processes (2)
This gives the inverse relation

r_k = \frac{1}{2B} \int_{-B}^{B} S_{XX}(\omega)\, e^{jk\omega T}\, d\omega,    (3-33)

and

r_0 = E\{|X(nT)|^2\} = \frac{1}{2B} \int_{-B}^{B} S_{XX}(\omega)\, d\omega    (3-34)

represents the total power of the discrete-time process X(nT). The input-output relations for a discrete-time system h(nT) translate into

S_{XY}(\omega) = S_{XX}(\omega)\, H^*(e^{j\omega})    (3-35)

and

S_{YY}(\omega) = S_{XX}(\omega)\, |H(e^{j\omega})|^2,    (3-36)

where

H(e^{j\omega}) = \sum_{n=-\infty}^{+\infty} h(nT)\, e^{-j\omega nT}    (3-37)

represents the discrete-time system transfer function.
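A small sketch of (3-30) and (3-36) for T = 1: the power spectrum is evaluated directly from a truncated autocorrelation sequence and compared with its closed form, and an output spectrum is formed from |H(e^{j\omega})|^2. The AR(1) correlation sequence and the one-tap filter are arbitrary choices.

```python
import numpy as np

a, K = 0.6, 200                              # AR(1) parameter and truncation (arbitrary)
k = np.arange(-K, K + 1)
r = a ** np.abs(k) / (1 - a**2)              # autocorrelation sequence r_k, T = 1

omega = np.linspace(-np.pi, np.pi, 1001)
Sxx = np.real(np.exp(-1j * np.outer(omega, k)) @ r)       # truncated sum (3-30)
Sxx_exact = 1.0 / np.abs(1 - a * np.exp(-1j * omega))**2  # closed form for this r_k
print(np.max(np.abs(Sxx - Sxx_exact)))       # truncation error, negligible for this K

# Passing X(n) through h(n) = delta(n) - 0.9 delta(n-1):  S_YY = S_XX |H(e^{j omega})|^2, (3-36)
H = 1 - 0.9 * np.exp(-1j * omega)
Syy = Sxx_exact * np.abs(H)**2
```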

3. Matched Filter (1)
Let r(t) represent a deterministic signal s(t) corrupted by noise. Thus

r(t) = s(t) + w(t), \qquad 0 < t < t_0,    (3-38)

where r(t) represents the observed data, and it is passed through a receiver with impulse response h(t). The output y(t) is given by

y(t) = y_s(t) + n(t),    (3-39)

where

y_s(t) = s(t) * h(t), \qquad n(t) = w(t) * h(t),    (3-40)

and it can be used to make a decision about the presence or absence of s(t) in r(t). Towards this, one approach is to require that the receiver output signal-to-noise ratio (SNR)_0 at the time instant t_0 be maximized. Notice that

[Figure: r(t) applied to the receiver h(t), producing y(t), which is sampled at t = t_0.]
3. Matched Filter (2)
(SNR)_0 = \frac{\text{output signal power at } t = t_0}{\text{average output noise power}} = \frac{|y_s(t_0)|^2}{E\{|n(t)|^2\}}
        = \frac{\left| \frac{1}{2\pi} \int_{-\infty}^{+\infty} S(\omega) H(\omega)\, e^{j\omega t_0}\, d\omega \right|^2}{\frac{1}{2\pi} \int_{-\infty}^{+\infty} S_{nn}(\omega)\, d\omega}
        = \frac{\left| \frac{1}{2\pi} \int_{-\infty}^{+\infty} S(\omega) H(\omega)\, e^{j\omega t_0}\, d\omega \right|^2}{\frac{1}{2\pi} \int_{-\infty}^{+\infty} S_{WW}(\omega) |H(\omega)|^2\, d\omega}    (3-41)

represents the output SNR, where we have made use of (3-20) to determine the average output noise power, and the problem is to maximize (SNR)_0 by optimally choosing the receiver filter H(\omega).

Optimum receiver for white noise input: The simplest input noise model assumes w(t) to be white noise in (3-38) with spectral density N_0, so that (3-41) simplifies to

(SNR)_0 = \frac{\left| \int_{-\infty}^{+\infty} S(\omega) H(\omega)\, e^{j\omega t_0}\, d\omega \right|^2}{2\pi N_0 \int_{-\infty}^{+\infty} |H(\omega)|^2\, d\omega}.    (3-42)
3. Matched Filter (3)
Direct application of the Cauchy-Schwarz inequality in (3-42) gives

(SNR)_0 \le \frac{1}{2\pi N_0} \int_{-\infty}^{+\infty} |S(\omega)|^2\, d\omega = \frac{\int_{-\infty}^{+\infty} s^2(t)\, dt}{N_0} = \frac{E}{N_0},    (3-43)

and equality in (3-43) is guaranteed if and only if

H(\omega) = S^*(\omega)\, e^{-j\omega t_0},    (3-44)

or

h(t) = s(t_0 - t).    (3-45)

From (3-45), the optimum receiver that maximizes the output SNR at t = t_0 is given by (3-44)-(3-45). Notice that (3-45) need not be causal, and the corresponding SNR is given by (3-43).
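A minimal Monte Carlo check of (3-43)-(3-45) (pulse shape, noise level and t_0 are arbitrary choices): the matched filter h(t) = s(t_0 - t) is applied to s(t) + w(t), and the simulated output SNR at t = t_0 is compared with the bound E/N_0.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1e-3
t = np.arange(0.0, 0.5, dt)
s = np.sin(2 * np.pi * 5 * t)                 # known pulse on [0, 0.5 s) (arbitrary shape)
N0 = 0.01                                     # white-noise spectral density
E = np.sum(s**2) * dt                         # pulse energy

# Matched filter (3-45): h(t) = s(t0 - t) with t0 = 0.5 s, so h(t0 - tau) = s(tau).
# Signal component of the receiver output at t = t0:
ys_t0 = np.sum(s * s) * dt                    # equals E

# Noise component at t = t0 over many realizations of w(t) (sample variance N0/dt)
trials = 2000
n_t0 = np.array([np.sum(rng.normal(0.0, np.sqrt(N0 / dt), s.size) * s) * dt
                 for _ in range(trials)])

snr_sim = ys_t0**2 / np.mean(n_t0**2)
print(snr_sim, E / N0)                        # both close to the bound E/N0 in (3-43)
```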

3. Matched Filter (4)
[Figure: (a) the signal s(t) of duration T; (b) the optimum h(t) for t_0 = T/2, which is noncausal; (c) the optimum h(t) for t_0 = T, which is causal.]

The figure shows the optimum h(t) for two different values of t_0. In figure (b) the receiver is noncausal, whereas in figure (c) the receiver represents a causal waveform.

If the receiver is required to be causal, the optimum causal receiver can be shown to be

h_{opt}(t) = s(t_0 - t)\, u(t),    (3-46)

and the corresponding maximum (SNR)_0 in that case is given by

(SNR)_0 = \frac{1}{N_0} \int_{0}^{t_0} s^2(t)\, dt.    (3-47)
3. Matched Filter (5)
Optimum transmit signal: In practice, the signal s(t) in (3-38) may be the output of a target that has been illuminated by a transmit signal f(t) of finite duration T. In that case

s(t) = f(t) * q(t) = \int_{0}^{T} f(\tau)\, q(t - \tau)\, d\tau,    (3-48)

where q(t) represents the target impulse response.

[Figure: the transmit signal f(t) of duration T applied to the target q(t), producing s(t).]

One interesting question in this context is to determine the optimum transmit signal f(t) with normalized energy that maximizes the receiver output SNR at t = t_0 in the matched filter. Notice that for a given s(t), (3-45) represents the optimum receiver, and (3-43) gives the corresponding maximum (SNR)_0. To maximize (SNR)_0 in (3-43), we may substitute (3-48) into (3-43). This gives
3. Matched Filter (6)
(SNR)_0 = \frac{1}{N_0} \int_{0}^{\infty} \left| \int_{0}^{T} q(t - \tau) f(\tau)\, d\tau \right|^2 dt
        = \frac{1}{N_0} \int_{0}^{T}\!\int_{0}^{T} \left( \int_{0}^{\infty} q(t - \tau_1)\, q^*(t - \tau_2)\, dt \right) f(\tau_2)\, d\tau_2\, f^*(\tau_1)\, d\tau_1
        = \frac{1}{N_0} \int_{0}^{T} \left\{ \int_{0}^{T} \Lambda(\tau_1, \tau_2)\, f(\tau_2)\, d\tau_2 \right\} f^*(\tau_1)\, d\tau_1 \;\le\; \frac{\lambda_{\max}}{N_0},    (3-49)

where the kernel \Lambda(\tau_1, \tau_2) is given by

\Lambda(\tau_1, \tau_2) = \int_{0}^{\infty} q(t - \tau_1)\, q^*(t - \tau_2)\, dt,    (3-50)

and \lambda_{\max} is the largest eigenvalue of the integral equation

\int_{0}^{T} \Lambda(\tau_1, \tau_2)\, f(\tau_2)\, d\tau_2 = \lambda_{\max}\, f(\tau_1), \qquad 0 < \tau_1 < T,    (3-51)

and

\int_{0}^{T} |f(t)|^2\, dt = 1.    (3-52)
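The integral equation (3-51) can be approximated numerically by discretizing \tau on a grid, which turns \Lambda(\tau_1, \tau_2) into a matrix and (3-51) into an ordinary Hermitian eigenvalue problem. The sketch below does this for a hypothetical target impulse response q(t) (an arbitrary choice, not from the notes).

```python
import numpy as np

dt = 0.01
T = 1.0                                     # transmit-signal duration
tau = np.arange(0.0, T, dt)                 # grid for tau_1, tau_2 in (0, T)
t = np.arange(0.0, 4.0, dt)                 # large upper limit standing in for infinity in (3-50)
                                            # (for the causal kernel (3-53), truncate t at t_0 instead)

def q(x):                                   # hypothetical target impulse response (arbitrary)
    return np.where(x >= 0, np.exp(-3 * x) * np.sin(8 * x), 0.0)

# Discretized kernel: Lambda[i, j] ~ integral q(t - tau_i) q(t - tau_j) dt, cf. (3-50)
Qmat = q(t[:, None] - tau[None, :])
Lam = Qmat.T @ Qmat * dt

# (3-51) becomes a matrix eigenproblem for (Lam * dt); take the largest eigenvalue
evals, evecs = np.linalg.eigh(Lam * dt)
f_opt = evecs[:, -1] / np.sqrt(dt)          # rescale so that integral |f|^2 dt = 1, cf. (3-52)
lam_max = evals[-1]
print(lam_max)                              # the achievable maximum (SNR)_0 is lam_max / N_0, cf. (3-49)
```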
3. Matched Filter (7)
Observe that the kernel \Lambda(\tau_1, \tau_2) in (3-50) captures the target characteristics so as to maximize the output SNR at the observation instant, and the optimum transmit signal is the solution of the integral equation in (3-51) subject to the energy constraint in (3-52). The figure below shows the optimum transmit signal and the companion receiver pair for a specific target with impulse response q(t) as shown there.

[Figure: (a) the target impulse response q(t); (b) the optimum transmit signal f(t) of duration T; (c) the corresponding receiver h(t).]

If the causal solution in (3-46)-(3-47) is chosen, the kernel in (3-50) simplifies to

\Lambda(\tau_1, \tau_2) = \int_{0}^{t_0} q(t - \tau_1)\, q^*(t - \tau_2)\, dt,    (3-53)

and the optimum transmit signal is again given by (3-51). Notice that in the causal case, information beyond t = t_0 is not used.
3. Matched Filter (8)
What if the additive noise in (3-38) is not white?

Let S_{WW}(\omega) represent a (non-flat) power spectral density. In that case, what is the optimum matched filter?

If the noise is not white, one approach is to whiten the input noise first by passing it through a whitening filter, and then proceed with the whitened output as before.

[Figure: r(t) = s(t) + w(t) (colored noise) applied to the whitening filter g(t), producing s_g(t) + n(t) (white noise).]

Notice that the signal part of the whitened output s_g(t) equals

s_g(t) = s(t) * g(t),    (3-54)

where g(t) represents the whitening filter, and the output noise n(t) is white with unit spectral density.
3. Matched Filter (9)
Whitening filter: What is a whitening filter? From the discussion above, the output spectral density of the whitened noise process, S_{nn}(\omega), equals unity, since it represents normalized white noise by design. But from (3-20),

1 = S_{nn}(\omega) = S_{WW}(\omega)\, |G(\omega)|^2,

which gives

|G(\omega)|^2 = \frac{1}{S_{WW}(\omega)},    (3-55)

i.e., the whitening filter transfer function G(\omega) satisfies the magnitude relationship in (3-55). To be useful in practice, the whitening filter should be stable and causal as well. Moreover, at times its inverse transfer function also needs to be implementable, so it needs to be stable too. How does one obtain such a filter (if any)?
3. Matched Filter (10)
From there, any spectral density that satisfies the finite power constraint

\int_{-\infty}^{+\infty} S_{XX}(\omega)\, d\omega < \infty    (3-56)

and the Paley-Wiener constraint (see Eq. (12-4), p. 402, [4])

\int_{-\infty}^{+\infty} \frac{|\log S_{XX}(\omega)|}{1 + \omega^2}\, d\omega < \infty    (3-57)

can be factorized as

S_{XX}(\omega) = |H(j\omega)|^2 = H(s)\, H(-s)\big|_{s = j\omega},    (3-58)

where H(s) together with its inverse 1/H(s) represent two filters that are both analytic in Re s > 0. Thus H(s) and its inverse 1/H(s) can be chosen to be stable and causal in (3-58). Such a filter is known as the Wiener factor, and since it has all its poles and zeros in the left half plane, it represents a minimum phase factor. In the rational case, if X(t) represents a real process, then S_{XX}(\omega) is even and hence (3-58) reads
3. Matched Filter (11)
S_{XX}(\omega) = S_{XX}(-s^2)\big|_{s = j\omega} = H(s)\, H(-s)\big|_{s = j\omega} \ge 0.    (3-59)

Example 18.3: Consider the spectrum

S_{XX}(\omega) = \frac{(\omega^2 + 1)(\omega^2 - 2)^2}{\omega^4 + 1},

which translates into

S_{XX}(-s^2) = \frac{(1 - s^2)(s^2 + 2)^2}{s^4 + 1}.

[Figure: pole-zero plot in the s-plane. Poles at s = \pm(1 + j)/\sqrt{2} and s = \pm(1 - j)/\sqrt{2}; zeros at s = \pm 1 and double zeros at s = \pm j\sqrt{2}.]

The poles and zeros of this function are shown in the figure. From there, to maintain the symmetry condition in (3-59), we may group together the left-half factors (taking one zero from each j\omega-axis pair) as

H(s) = \frac{(s + 1)\left(s + j\sqrt{2}\right)\left(s - j\sqrt{2}\right)}{\left(s + \frac{1+j}{\sqrt{2}}\right)\left(s + \frac{1-j}{\sqrt{2}}\right)} = \frac{(s + 1)(s^2 + 2)}{s^2 + \sqrt{2}\, s + 1},
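For rational spectra, the root grouping used in Example 18.3 can be automated: factor the numerator and denominator of S_XX(-s^2), keep the roots with negative real part, and take half of each (even-multiplicity) j\omega-axis group. The sketch below does this for the polynomials of this example; the helper function and its tolerances are ad hoc choices.

```python
import numpy as np

# S_XX(-s^2) = N(s)/D(s) from Example 18.3:
# N(s) = (1 - s^2)(s^2 + 2)^2 = -s^6 - 3 s^4 + 4,   D(s) = s^4 + 1
num = np.array([-1.0, 0, -3.0, 0, 0, 0, 4.0])
den = np.array([1.0, 0, 0, 0, 1.0])

def wiener_half(coeffs, tol=1e-6):
    """Roots with Re < 0, plus each j-axis root taken with half its (even) multiplicity."""
    r = np.roots(coeffs)
    keep = list(r[r.real < -tol])
    jax = r[np.abs(r.real) <= tol]
    used = np.zeros(jax.size, dtype=bool)
    for i, z in enumerate(jax):
        if used[i]:
            continue
        grp = np.where(np.abs(jax - z) < 1e-4)[0]   # group numerically equal j-axis roots
        used[grp] = True
        keep += [z] * (grp.size // 2)
    return np.array(keep)

Hnum = np.real(np.poly(wiener_half(num)))   # -> (s + 1)(s^2 + 2) = s^3 + s^2 + 2s + 2
Hden = np.real(np.poly(wiener_half(den)))   # -> s^2 + sqrt(2) s + 1

# Verify |H(jw)|^2 = S_XX(w) on a few frequencies (the overall gain works out to 1 here)
w = np.linspace(0.0, 3.0, 7)
H = np.polyval(Hnum, 1j * w) / np.polyval(Hden, 1j * w)
S = np.polyval(num, 1j * w).real / np.polyval(den, 1j * w).real
print(Hnum, Hden, np.allclose(np.abs(H)**2, S))
```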
3. Matched Filter (12)
and it represents the Wiener factor for the spectrum S_{XX}(\omega) above. Observe that the poles and zeros (if any) on the j\omega-axis appear in even multiples in S_{XX}(\omega), and hence half of them may be paired with H(s) (and the other half with H(-s)) to preserve the factorization condition in (3-58). Notice that H(s) is stable, and so is its inverse.

More generally, if H(s) is minimum phase, then ln H(s) is analytic in the right half plane, so that writing

H(\omega) = A(\omega)\, e^{j\varphi(\omega)}

gives

\ln H(\omega) = \ln A(\omega) + j\varphi(\omega) = \int_{0}^{\infty} b(t)\, e^{-j\omega t}\, dt.    (3-60)

Thus

\ln A(\omega) = \int_{0}^{\infty} b(t) \cos(\omega t)\, dt, \qquad \varphi(\omega) = -\int_{0}^{\infty} b(t) \sin(\omega t)\, dt.
3. Matched Filter (13)
Since cos \omega t and sin \omega t are Hilbert transform pairs, it follows that the phase function \varphi(\omega) in (3-60) is given by the Hilbert transform of ln A(\omega). Thus

\varphi(\omega) = \mathcal{H}\{\ln A(\omega)\}.    (3-61)

Eq. (3-60) may therefore be used to generate the unknown phase function of a minimum phase factor from its magnitude.
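A discrete-time illustration of (3-60)-(3-61) (a sketch; the minimum-phase FIR test filter is an arbitrary choice): the phase of a minimum-phase response is recovered from its log-magnitude alone by folding the real cepstrum onto n >= 0, which is the FFT-based counterpart of the Hilbert-transform relation.

```python
import numpy as np

b = np.array([1.0, -0.6, 0.08])             # minimum-phase FIR test filter (zeros at 0.2, 0.4)
N = 4096                                    # dense frequency grid
H = np.fft.fft(b, N)                        # H(e^{j omega}) on the grid
logmag = np.log(np.abs(H))                  # ln A(omega)

# Fold the real cepstrum of ln A onto n >= 0 and exponentiate: the discrete counterpart
# of reconstructing ln H from the causal b(t) in (3-60).
c = np.fft.ifft(logmag).real
fold = np.zeros(N)
fold[0] = c[0]
fold[1:N // 2] = 2.0 * c[1:N // 2]
fold[N // 2] = c[N // 2]
H_rec = np.exp(np.fft.fft(fold))

print(np.max(np.abs(np.angle(H_rec) - np.angle(H))))   # phase recovered from the magnitude alone
```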
For discrete-time processes, the factorization conditions take the form

\int_{-\pi}^{\pi} S_{XX}(\omega)\, d\omega < \infty \quad \text{and} \quad \int_{-\pi}^{\pi} \ln S_{XX}(\omega)\, d\omega > -\infty.    (3-62)

In that case

S_{XX}(\omega) = |H(e^{j\omega})|^2,    (3-63)

where the discrete-time system

H(z) = \sum_{k=0}^{\infty} h(k)\, z^{-k}
3. Matched Filter (14)
is analytic together with its inverse in |z| > 1. This unique minimum phase function represents the Wiener factor in the discrete case.

Matched filter in colored noise: Returning to the matched filter problem in colored noise, the design can be completed as shown in the figure below.

[Figure: r(t) = s(t) + w(t) applied to the whitening filter G(j\omega) = 1/L(j\omega), producing s_g(t) + n(t), which is applied to the matched filter h_0(t) = s_g(t_0 - t) and sampled at t = t_0.]

Here G(j\omega) represents the whitening filter associated with the noise spectral density S_{WW}(\omega) as in (3-55)-(3-58). Notice that G(s) is the inverse of the Wiener factor L(s) corresponding to the spectrum S_{WW}(\omega), i.e.,

L(s)\, L(-s)\big|_{s = j\omega} = |L(j\omega)|^2 = S_{WW}(\omega).    (3-64)

The whitened output s_g(t) + n(t) in the figure is similar to (3-38), and from (3-45) the optimum receiver is given by

h_0(t) = s_g(t_0 - t),
3. Matched Filter (15)
where

s_g(t) \;\leftrightarrow\; S_g(\omega) = G(j\omega)\, S(\omega) = \frac{S(\omega)}{L(j\omega)}.

If we insist on obtaining the receiver transfer function H(\omega) for the original colored noise problem, we can deduce it easily from the figure below.

[Figure: r(t) applied to the receiver H(\omega) and sampled at t = t_0, drawn equivalently as r(t) applied to the cascade L^{-1}(s), L(s), H(\omega) and sampled at t = t_0.]

Here

H(\omega) = L^{-1}(j\omega)\, H_0(\omega) = L^{-1}(j\omega)\, S_g^*(\omega)\, e^{-j\omega t_0}
          = L^{-1}(j\omega)\, \{ L^{-1}(j\omega)\, S(\omega) \}^*\, e^{-j\omega t_0}.    (3-65)
3. AM/FM Noise Analysis (1)
Consider the noisy AM signal

X(t) = m(t) \cos(\omega_0 t + \varphi) + n(t),    (3-66)

and the noisy FM signal

X(t) = A \cos(\omega_0 t + \psi(t) + \varphi) + n(t),    (3-67)

where

\psi(t) = \begin{cases} c \int_{-\infty}^{t} m(\tau)\, d\tau, & \text{FM} \\ c\, m(t), & \text{PM.} \end{cases}    (3-68)

Here m(t) represents the message signal and \varphi a random phase jitter in the received signal. In the case of FM, \psi'(t) = c\, m(t), so that the instantaneous frequency is proportional to the message signal. We will assume that both the message process m(t) and the noise process n(t) are w.s.s. with power spectra S_{mm}(\omega) and S_{nn}(\omega) respectively. We wish to determine whether the AM and FM signals are w.s.s., and if so, their respective power spectral densities.
3. AM/FM Noise Analysis (2)
Solution. AM signal: In this case, from (3-66), if we assume \varphi \sim U(0, 2\pi), then

R_{XX}(\tau) = \frac{1}{2} R_{mm}(\tau) \cos(\omega_0 \tau) + R_{nn}(\tau),    (3-69)

so that (see the figure below)

S_{XX}(\omega) = \frac{S_{mm}(\omega - \omega_0) + S_{mm}(\omega + \omega_0)}{4} + S_{nn}(\omega).    (3-70)

[Figure: (a) the message spectrum S_{mm}(\omega); (b) the AM spectrum S_{XX}(\omega), consisting of copies of S_{mm}(\omega) shifted to \pm\omega_0.]

Thus AM represents a stationary process under the above conditions.
3. AM/FM Noise Analysis (3)
FM signal: In this case (suppressing the additive noise component in (3-67)), we obtain

R_{XX}(t + \tau/2,\, t - \tau/2)
  = A^2 E\{\cos(\omega_0 (t + \tau/2) + \psi(t + \tau/2) + \varphi)\, \cos(\omega_0 (t - \tau/2) + \psi(t - \tau/2) + \varphi)\}
  = \frac{A^2}{2} E\{\cos[\omega_0 \tau + \psi(t + \tau/2) - \psi(t - \tau/2)] + \cos[2\omega_0 t + \psi(t + \tau/2) + \psi(t - \tau/2) + 2\varphi]\}
  = \frac{A^2}{2} \big[ E\{\cos(\psi(t + \tau/2) - \psi(t - \tau/2))\} \cos\omega_0\tau - E\{\sin(\psi(t + \tau/2) - \psi(t - \tau/2))\} \sin\omega_0\tau \big],    (3-71)

since

E\{\cos(2\omega_0 t + \psi(t + \tau/2) + \psi(t - \tau/2) + 2\varphi)\}
  = E\{\cos(2\omega_0 t + \psi(t + \tau/2) + \psi(t - \tau/2))\}\, E\{\cos 2\varphi\}
  - E\{\sin(2\omega_0 t + \psi(t + \tau/2) + \psi(t - \tau/2))\}\, E\{\sin 2\varphi\} = 0.
3. AM/FM Noise Analysis (4)
Eq. (3-71) can be rewritten as

R_{XX}(t + \tau/2,\, t - \tau/2) = \frac{A^2}{2} \big[ a(t, \tau) \cos\omega_0\tau - b(t, \tau) \sin\omega_0\tau \big],    (3-72)

where

a(t, \tau) = E\{\cos(\psi(t + \tau/2) - \psi(t - \tau/2))\}    (3-73)

and

b(t, \tau) = E\{\sin(\psi(t + \tau/2) - \psi(t - \tau/2))\}.    (3-74)

In general, a(t, \tau) and b(t, \tau) depend on both t and \tau, so that noisy FM is not w.s.s. in general, even if the message process m(t) is w.s.s. In the special case when m(t) is a stationary Gaussian process, from (3-68), \psi(t) is also a stationary Gaussian process, with autocorrelation function R_{\psi\psi}(\tau) satisfying

-\frac{d^2 R_{\psi\psi}(\tau)}{d\tau^2} = R_{\psi'\psi'}(\tau) = c^2 R_{mm}(\tau)    (3-75)

for the FM case.
3. AM/FM Noise Analysis (5)
In that case the random variable

Y = \psi(t + \tau/2) - \psi(t - \tau/2) \;\sim\; N(0, \sigma_Y^2),    (3-76)

where

\sigma_Y^2 = 2\big(R_{\psi\psi}(0) - R_{\psi\psi}(\tau)\big).    (3-77)

Hence its characteristic function is given by

E\{e^{j\omega Y}\} = e^{-\omega^2 \sigma_Y^2 / 2} = e^{-\omega^2 (R_{\psi\psi}(0) - R_{\psi\psi}(\tau))},    (3-78)

which for \omega = 1 gives

E\{e^{jY}\} = E\{\cos Y\} + j E\{\sin Y\} = a(t, \tau) + j\, b(t, \tau),    (3-79)

where we have made use of (3-76) and (3-73)-(3-74). On comparing (3-79) with (3-78), we get

a(t, \tau) = e^{-(R_{\psi\psi}(0) - R_{\psi\psi}(\tau))}    (3-80)

and

b(t, \tau) \equiv 0,    (3-81)
3. AM/FM Noise Analysis (6)
so that the FM autocorrelation function in (3-72) simplifies into

R_{XX}(\tau) = \frac{A^2}{2}\, e^{-(R_{\psi\psi}(0) - R_{\psi\psi}(\tau))} \cos\omega_0\tau.    (3-82)

Notice that for a stationary Gaussian message input m(t) (or \psi(t)), the nonlinear output X(t) is indeed strict-sense stationary, with autocorrelation function as in (3-82).

Exercise: Find R_{XX}(\tau) for narrowband FM and wideband FM.
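Given a model for R_psi_psi(\tau), (3-82) can be evaluated and transformed numerically to inspect the FM spectrum. The sketch below assumes (arbitrarily) a Gaussian-shaped phase autocorrelation; scaling R_psi_psi(0) up or down moves the result between the narrowband and wideband FM regimes asked about in the exercise.

```python
import numpy as np

A, w0, dt = 1.0, 2 * np.pi * 10.0, 1e-3     # carrier amplitude/frequency (arbitrary)
tau = np.arange(-5.0, 5.0, dt)
var_psi = 4.0                               # R_psi_psi(0); increase for "wideband" behaviour
Rpsi = var_psi * np.exp(-tau**2)            # assumed phase autocorrelation (Gaussian shape)

# FM autocorrelation (3-82) and its spectrum via the Wiener-Khinchin relation (3-7)
Rxx = (A**2 / 2) * np.exp(-(var_psi - Rpsi)) * np.cos(w0 * tau)
Sxx = np.abs(np.fft.fftshift(np.fft.fft(Rxx))) * dt     # magnitude of the transform
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(tau.size, d=dt))
print(abs(omega[np.argmax(Sxx)]))           # the spectrum peaks near w0
# Small var_psi: S_XX stays concentrated near +/- w0 (narrowband FM);
# large var_psi: the exponential decays quickly in tau and S_XX spreads out (wideband FM).
```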
Chapter 4:
Eigenanalysis
4. Eigenanalysis: Definitions (1)
Let R denote the (M x M) correlation matrix of a wide-sense stationary discrete-time stochastic process represented by the (M x 1) observation vector u(n). We seek (M x 1) vectors q such that

R q = \lambda q    (4.1)

for some constant \lambda. Rewriting this in the form

(\lambda I - R)\, q = 0,    (4.2)

(4.2) has a nonzero solution q if and only if

\det(\lambda I - R) = 0.    (4.3)

Equation (4.3) is called the characteristic equation; its roots \lambda_1, \lambda_2, ..., \lambda_M are called eigenvalues. In general, (4.3) has M distinct roots, giving M solutions for the vector q.
4. Eigenanalysis: Definitions (2)
Let \lambda_i denote the i-th eigenvalue of the matrix R, and let q_i be a nonzero vector such that

R q_i = \lambda_i q_i.    (4.4)

The vector q_i is called the eigenvector associated with \lambda_i.

Example: see pp. 161-162, [1].
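A numerical check of (4.1)-(4.4) (the Toeplitz correlation matrix below, with r_k = a^{|k|}, is an arbitrary example): numpy's Hermitian eigensolver returns the eigenvalues and orthonormal eigenvectors of R, and the basic properties listed on the following pages can be verified directly.

```python
import numpy as np

M, a = 5, 0.7
R = a ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))   # Toeplitz, r_k = a^|k|

lam, Q = np.linalg.eigh(R)                   # eigenvalues (ascending) and eigenvectors
print(np.allclose(R @ Q, Q * lam))           # R q_i = lambda_i q_i, cf. (4.4)
print(np.allclose(Q.T @ Q, np.eye(M)))       # eigenvectors are orthonormal (see the properties below)
print(lam.min() > 0, np.isclose(lam.sum(), np.trace(R)))   # real, positive; sum equals tr R, cf. (4.7)
```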
4. Eigenanalysis: Properties (1)
- If \lambda_1, \lambda_2, ..., \lambda_M are the eigenvalues of the correlation matrix R, then the eigenvalues of the matrix R^k equal \lambda_1^k, \lambda_2^k, ..., \lambda_M^k for any k > 0.
- Let q_1, q_2, ..., q_M be the eigenvectors corresponding to the distinct eigenvalues \lambda_1, \lambda_2, ..., \lambda_M of the (M x M) correlation matrix R. Then q_1, q_2, ..., q_M are linearly independent.
- Let \lambda_1, \lambda_2, ..., \lambda_M be the eigenvalues of the (M x M) correlation matrix R. Then all these eigenvalues are real and nonnegative.
- Let q_1, q_2, ..., q_M be the eigenvectors corresponding to the distinct eigenvalues \lambda_1, \lambda_2, ..., \lambda_M of the (M x M) correlation matrix R. Then q_1, q_2, ..., q_M are orthogonal to each other.
4. Eigenanalysis: Properties (2)
Let q_1, q_2, ..., q_M be the eigenvectors corresponding to the distinct eigenvalues \lambda_1, \lambda_2, ..., \lambda_M of the (M x M) correlation matrix R. Define the (M x M) matrix

Q = [q_1, q_2, \ldots, q_M],

where

q_i^H q_j = \begin{cases} 1, & i = j \\ 0, & i \ne j. \end{cases}

Define the (M x M) diagonal matrix

\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_M).

Then the original matrix R may be diagonalized as

Q^H R Q = \Lambda,    (4.5)

where Q is nonsingular with Q^{-1} = Q^H, i.e., Q is a unitary matrix.
4. Eigenanalysis: Properties (3)
(4.5) can be written as

R = Q \Lambda Q^H = \sum_{i=1}^{M} \lambda_i q_i q_i^H.    (4.6)

Let \lambda_1, \lambda_2, ..., \lambda_M be the eigenvalues of the (M x M) correlation matrix R. Then the sum of these eigenvalues equals the trace of R:

\mathrm{tr}[R] = \sum_{i=1}^{M} \lambda_i,    (4.7)

where the trace of a square matrix is defined as the sum of the diagonal elements of the matrix.
4. Eigenanalysis: Properties (4)
Condition number (also called the eigenvalue spread or eigenvalue factor):

\chi(R) = \frac{\lambda_{\max}}{\lambda_{\min}}.    (4.8)

The eigenvalues of the correlation matrix of a discrete-time stationary process are bounded by the minimum and maximum values of the power spectral density of the process:

S_{\min} \le \lambda_i \le S_{\max}, \qquad i = 1, 2, \ldots, M.    (4.9)

Hence the condition number is bounded by

\chi(R) = \frac{\lambda_{\max}}{\lambda_{\min}} \le \frac{S_{\max}}{S_{\min}}.
4. Eigenanalysis: Properties (5)
Minimax theorem: Let the (M x M) correlation matrix R have eigenvalues \lambda_1, \lambda_2, ..., \lambda_M arranged in decreasing order:

\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_M.

Then

\lambda_k = \max_{\dim(\mathcal{C}) = k} \; \min_{x \in \mathcal{C},\, x \ne 0} \frac{x^H R x}{x^H x}, \qquad k = 1, 2, \ldots, M,    (4.10)

where C is a subspace of the vector space of all (M x 1) complex vectors, and dim(C) denotes the dimension of the subspace C. A subspace of dimension k is defined as the set of complex vectors that can be written as a linear combination of q_1, q_2, ..., q_k (the maximum in (4.10) is attained by exactly this subspace).
4. Eigenanalysis: Properties (6)
Karhunen-Loeve expansion: Let the (M x 1) vector u(n) be drawn from a wide-sense stationary process of zero mean and correlation matrix R. Let q_1, q_2, ..., q_M be the eigenvectors corresponding to the M eigenvalues of R. The vector u(n) may be expanded as a linear combination of these eigenvectors:

u(n) = \sum_{i=1}^{M} c_i(n)\, q_i.    (4.11)

The coefficients of the expansion are zero-mean, uncorrelated random variables defined by

c_i(n) = q_i^H u(n).    (4.12)
4. Eigenanalysis: Properties (7)
Here

E[c_i(n)] = 0, \qquad i = 1, 2, \ldots, M,

and

E[c_i(n)\, c_j^*(n)] = \begin{cases} \lambda_i, & i = j \\ 0, & i \ne j. \end{cases}    (4.13)

Physical interpretation: viewing the eigenvectors q_1, q_2, ..., q_M as the coordinate axes of an M-dimensional space, the random vector u(n) is represented by the set of its projections c_1(n), c_2(n), ..., c_M(n) onto these axes.

From (4.11):

\|u(n)\|^2 = \sum_{i=1}^{M} |c_i(n)|^2.    (4.14)

From (4.12) and (4.13):

E\big[|c_i(n)|^2\big] = \lambda_i, \qquad i = 1, 2, \ldots, M.    (4.15)
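A Monte Carlo check of (4.11)-(4.15) (a sketch; the Gaussian process and its correlation matrix are arbitrary choices): the Karhunen-Loeve coefficients c_i(n) = q_i^H u(n) come out uncorrelated, with variances equal to the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(4)
M, a, n = 6, 0.8, 200_000
R = a ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))   # assumed correlation matrix

lam, Q = np.linalg.eigh(R)
U = rng.multivariate_normal(np.zeros(M), R, size=n)              # realizations of u(n)
C = U @ Q                                                        # c_i(n) = q_i^H u(n), (4.12)

print(np.allclose(C.T @ C / n, np.diag(lam), atol=0.05))         # E[c_i c_j*] = lambda_i delta_ij, (4.13)
print(np.allclose(np.sum(C**2, axis=1), np.sum(U**2, axis=1)))   # ||u||^2 = sum |c_i|^2, (4.14)
```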
4. Eigenanalysis: Low-Rank Modeling (1)
Dimensionality reduction: a transformation such that a data vector (in the data space) can be represented by a reduced number of effective features (in the feature space) and yet retain most of the intrinsic content of the input data.

Consider an M-dimensional data vector u(n) (representing a realization of a wide-sense stationary process) that may be expanded as a linear combination of the eigenvectors q_1, q_2, ..., q_M corresponding to the distinct eigenvalues \lambda_1, \lambda_2, ..., \lambda_M of the (M x M) correlation matrix R of u(n); see (4.11). If prior knowledge tells us that the (M - p) eigenvalues \lambda_{p+1}, ..., \lambda_M are all very small, we may truncate (4.11) at the term i = p. An approximate reconstruction of the data vector u(n) can then be defined:
4. Eigenanalysis: Low-Rank Modeling (2)
\hat{u}(n) = \sum_{i=1}^{p} c_i(n)\, q_i, \qquad p < M.    (4.16)

The vector \hat{u}(n) has rank p < M, lower than that of the original data vector u(n). The model in (4.16) is a low-rank model.

Meaning: the approximation \hat{u}(n) can be reconstructed using a set of p numbers c_1(n), c_2(n), ..., c_p(n). In other words, the new vector c(n) = [c_1(n), c_2(n), ..., c_p(n)] is viewed as a reduced-rank representation of u(n).

Transformation: from the M-dimensional data space to the p-dimensional feature space (a subspace decomposition).
4. Eigenanalysis: Low-Rank Modeling (3)
Reconstruction error vector:

e(n) = u(n) - \hat{u}(n) = \sum_{i=p+1}^{M} c_i(n)\, q_i.    (4.17)

Mean-square error:

\epsilon = E\big[\|e(n)\|^2\big] = \sum_{i=p+1}^{M} \lambda_i.    (4.18)

The error is small, and hence the data reconstruction is good, if the eigenvalues \lambda_{p+1}, ..., \lambda_M are all very small.

Example: application of low-rank modeling (see pp. 179-180, [1]).
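A sketch of (4.16)-(4.18) using the same kind of arbitrary correlation model: reconstructing u(n) from its p leading KL coefficients gives a mean-square error equal to the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(5)
M, p, a, n = 8, 3, 0.9, 100_000
R = a ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

lam, Q = np.linalg.eigh(R)
order = np.argsort(lam)[::-1]                        # decreasing eigenvalue order
lam, Q = lam[order], Q[:, order]

U = rng.multivariate_normal(np.zeros(M), R, size=n)  # realizations of u(n)
C = U @ Q                                            # KL coefficients, (4.12)
U_hat = C[:, :p] @ Q[:, :p].T                        # rank-p reconstruction, (4.16)

mse = np.mean(np.sum((U - U_hat)**2, axis=1))
print(mse, lam[p:].sum())                            # MSE matches the discarded eigenvalues, (4.18)
```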
4. Eigenanalysis: Eigenfilters (1)
Goal: design an optimum FIR filter with the optimization criterion of maximizing the output signal-to-noise ratio. The filter has coefficient vector w; its input consists of a w.s.s. signal with correlation matrix R plus additive white noise of variance \sigma^2 (this is the setting used in the expressions on the next pages).
4. Eigenanalysis: Eigenfilters (2)
Effect of the signal alone: the average power of the signal at the filter output is

P_o = w^H R w.    (4.18)

Effect of the noise alone: the average power of the noise at the filter output is

N_o = \sigma^2 w^H w.    (4.19)

Output signal-to-noise ratio:

(SNR)_o = \frac{P_o}{N_o} = \frac{w^H R w}{\sigma^2 w^H w}.    (4.20)
4. Eigenanalysis: Eigenfilters (3)
Optimization problem: maximize (SNR)_o subject to w^H w = 1. Applying the minimax theorem (4.10) with k = 1,

(SNR)_{o,\max} = \frac{\lambda_{\max}}{\sigma^2}.    (4.21)

The optimum FIR filter that produces (4.21), having coefficient vector w_o = q_max, is called an eigenfilter, where q_max is the eigenvector associated with \lambda_{\max}.

Note: an eigenfilter maximizes the output SNR for a random (w.s.s.) signal in additive white noise, whereas a matched filter maximizes the output SNR for a known signal in additive white noise.
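A sketch of the eigenfilter idea (the signal correlation matrix and the noise variance are arbitrary choices): the principal eigenvector of R maximizes the Rayleigh-quotient SNR of (4.20) over all weight vectors, attaining lambda_max / sigma^2 as in (4.21).

```python
import numpy as np

rng = np.random.default_rng(6)
M, a, sigma2 = 8, 0.85, 0.5
R = a ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))   # signal correlation matrix

lam, Q = np.linalg.eigh(R)
w_o = Q[:, -1]                                    # eigenvector of the largest eigenvalue
snr_o = (w_o @ R @ w_o) / (sigma2 * (w_o @ w_o))  # = lambda_max / sigma^2, cf. (4.21)

# No other weight vector does better (the SNR in (4.20) is scale invariant in w):
others = np.array([(w @ R @ w) / (sigma2 * (w @ w))
                   for w in rng.standard_normal((2000, M))])
print(np.isclose(snr_o, lam[-1] / sigma2), others.max() <= snr_o + 1e-12)
```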
