
ECE 6151, Spring 2017

Lecture Notes 2
Shengli Zhou
January 25, 2017

Outline
The random process
Random passband signals
Basis functions and signal space

1 Random process
Questions to ask
What does it mean for X(t) to be a stochastic process?
How do we compute the autocorrelation and PSD after passing through a linear system?
Two ways of understanding a random process:
Consider the random noise process X(t) (e.g., measure the noise in the classroom).
[Figure: several sample realizations of the noise process X(t) plotted versus time]
At each fixed time t_0, X(t_0) is a random variable.

Consider, for example, trying to characterize the noise in the ocean.

Autocorrelation
Mean:
m_X(t) = E[X(t)]

Autocorrelation function:
R_X(t_1, t_2) = E[X(t_1) X*(t_2)]

WSS

Wide-sense stationary:

m_X(t) = m_X, a constant (often taken to be 0)
R_X(t_1, t_2) depends only on the difference t_1 − t_2:

R_X(t_1 − t_2) = E[X(t_1) X*(t_2)]


Example
Consider the process
X(t) = A cos(2π f_c t + θ)
where θ ~ U(0, 2π).
Find whether this is w.s.s., and if it is, find its power spectrum.

E[X(t) X(t + τ)] = ∫_0^{2π} (1/(2π)) A² cos(2π f_c t + θ) cos(2π f_c (t + τ) + θ) dθ
                 = (A²/2) cos(2π f_c τ)

Hence w.s.s., with

S_X(f) = (A²/4) δ(f + f_c) + (A²/4) δ(f − f_c)
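The closed form above is easy to check numerically. Below is a quick Monte Carlo sketch (the values of A, f_c, t, and τ are arbitrary choices, not from the notes): averaging X(t)X(t + τ) over many random phases should reproduce (A²/2) cos(2π f_c τ), independent of t.

```python
import numpy as np

# X(t) = A*cos(2*pi*fc*t + theta), theta ~ Uniform(0, 2*pi).
# The derivation gives E[X(t)X(t+tau)] = (A^2/2)*cos(2*pi*fc*tau).
rng = np.random.default_rng(0)
A, fc = 2.0, 5.0                 # assumed amplitude and carrier frequency
t, tau = 0.3, 0.12               # arbitrary time and lag
theta = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
x_t = A * np.cos(2 * np.pi * fc * t + theta)
x_t_tau = A * np.cos(2 * np.pi * fc * (t + tau) + theta)
estimate = np.mean(x_t * x_t_tau)            # ensemble average over theta
exact = (A**2 / 2) * np.cos(2 * np.pi * fc * tau)
print(estimate, exact)
```

Changing t leaves the estimate unchanged (up to Monte Carlo noise), which is exactly the w.s.s. property.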
Power
Why do we define the autocorrelation this way?
First, note that R_X(0) = E[|X(t)|²] is the power (the variance, for a zero-mean process), which we want.
What will the autocorrelation be after passing through a linear system?
Second, let us consider

X(t) → h(t) → Y(t),

Y(t) = h(t) ⋆ X(t) = ∫ h(s) X(t − s) ds

We want to know how big Y(t) is.


R_Y(τ) = E[Y(t + τ) Y*(t)]
       = E[ ∫∫ X(t + τ − s) h(s) X*(t − v) h*(v) ds dv ]
       = ∫∫ R_X(τ − s + v) h(s) h*(v) ds dv

Since we are sharp-eyed, we can see convolutions here:

R_Y(τ) = h(τ) ⋆ R_X(τ) ⋆ h*(−τ)

where, as an intermediate step,

R̃(τ) = ∫ R_X(τ − v) h(v) dv = h(τ) ⋆ R_X(τ)

Let us take the Fourier transform (define S_X(f) = F{R_X(τ)} and H(f) = F{h(t)}):

S_Y(f) = F{R_Y(τ)}
       = ∫∫∫ R_X(τ − s + v) h(s) h*(v) e^{−j2πfτ} dτ ds dv
       = ∫∫ S_X(f) h(s) h*(v) e^{−j2πf(s−v)} ds dv
       = |H(f)|² S_X(f)
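This result can be sanity-checked in discrete time. The sketch below uses assumed filter taps and a unit-variance white input, so R_X[k] = δ[k]; the output autocorrelation should then equal the deterministic autocorrelation of the taps, i.e., h ⋆ h*(−·).

```python
import numpy as np

# White input: R_X[k] = delta[k], so R_Y[k] should equal sum_m h[m+k]*h[m].
rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, -0.25])              # assumed example filter taps
x = rng.standard_normal(2_000_000)           # unit-variance white noise
y = np.convolve(x, h, mode="full")[: x.size]

# Estimate R_Y[k] = E[y[n+k] y[n]] for k = 0, 1, 2 by time averaging.
est = np.array([np.mean(y[k:] * y[: y.size - k]) for k in range(3)])
exact = np.array([np.sum(h * h),             # R_Y[0]
                  np.sum(h[1:] * h[:-1]),    # R_Y[1]
                  np.sum(h[2:] * h[:-2])])   # R_Y[2]
print(est, exact)
```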

This S_X(f) is known as the power spectral density (PSD), and is as meaningful a description as R_X(τ).
Intuitive justification
Intuitive justification of the power spectral density: take a narrow bandpass filter of width Δf.

[Figure: X(t) → h(t) → Y(t), where H(f) is an ideal unit-gain bandpass filter over [f, f + Δf]]

E[|Y(t)|²] ≈ Δf · S_X(f)

White noise
Special situation: white noise:

S_X(f) = N_0/2
R_X(τ) = (N_0/2) δ(τ)

Since R_X(0) = E[|X(t)|²] = ∞, white noise cannot physically exist.

But since we usually have some LPF in the system, and the input noise spectrum is flat over the passband of the LPF, white noise is a mathematically convenient model.

[Figure: X(t) → H(f) → Y(t); the flat input spectrum S_X(f) is shaped into the output spectrum S_Y(f) = |H(f)|² S_X(f)]

2 Bandpass random process


X(t) is a bandpass w.s.s. random process.
X_i(t) and X_q(t) are lowpass random processes; they are jointly w.s.s.

X(t) = X_i(t) cos(2π f_c t) − X_q(t) sin(2π f_c t) = Re[X_l(t) e^{j2π f_c t}]

where X_l(t) = X_i(t) + j X_q(t) is the lowpass equivalent.

What are the statistical relations between X(t) and X_i(t), X_q(t), X_l(t)?

Time domain relations

Due to the w.s.s. property, we have

R_Xi(τ) = R_Xq(τ),  R_XiXq(τ) = −R_XqXi(τ)

R_X(τ) = R_Xi(τ) cos(2π f_c τ) − R_XqXi(τ) sin(2π f_c τ)

R_Xl(τ) = E[X_l(t + τ) X_l*(t)] = 2[R_Xi(τ) + j R_XqXi(τ)]

R_X(τ) = (1/2) Re[R_Xl(τ) e^{j2π f_c τ}]
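When the in-phase and quadrature components are independent (so R_XqXi = 0), the relation above reduces to R_X(τ) = R_Xi(τ) cos(2π f_c τ), which can be checked by simulation. The sketch below builds X_i and X_q as assumed moving-average lowpass processes; the shaping taps, carrier frequency, and lag are illustrative choices.

```python
import numpy as np

# X[m] = Xi[m]cos(2*pi*fc*m) - Xq[m]sin(2*pi*fc*m), with Xi, Xq independent
# and identically distributed, so R_XqXi = 0 and R_X(k) = R_Xi(k)cos(2*pi*fc*k).
rng = np.random.default_rng(2)
n = 2_000_000
taps = np.ones(4) / 2.0                      # assumed lowpass shaping filter
xi = np.convolve(rng.standard_normal(n), taps, mode="same")
xq = np.convolve(rng.standard_normal(n), taps, mode="same")
fc = 0.2                                     # carrier, in cycles per sample
m = np.arange(n)
x = xi * np.cos(2 * np.pi * fc * m) - xq * np.sin(2 * np.pi * fc * m)

k = 2                                        # lag, in samples
r_x = np.mean(x[k:] * x[:-k])                # estimate of R_X(k)
r_xi = np.mean(xi[k:] * xi[:-k])             # estimate of R_Xi(k)
print(r_x, r_xi * np.cos(2 * np.pi * fc * k))
```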
 

Additional property

Frequency domain relations (Is S_Xl(f) real?)

S_Xl(f) = 2[S_Xi(f) + j S_XqXi(f)]

Evaluate the PSD at baseband, then translate to passband:

S_X(f) = (1/4)[S_Xl(f − f_c) + S_Xl(−f − f_c)]

Or, know the passband PSD first, then evaluate the components at baseband:

R_Xl(τ) = 4 LPF[R_X(τ) e^{−j2π f_c τ}]

R_Xi(τ) = 2 LPF[R_X(τ) cos(2π f_c τ)]

S_Xi(f) = S_Xq(f) = S_X(f + f_c) + S_X(f − f_c) for |f| < f_c, and 0 otherwise

R_XiXq(τ) = 2 LPF[R_X(τ) sin(2π f_c τ)]

S_XiXq(f) = −S_XqXi(f) = j[S_X(f + f_c) − S_X(f − f_c)] for |f| < f_c, and 0 otherwise
Apply this to the narrowband noise case.

R_Xi(τ) is an even function of τ:

R_Xi(τ) = R_Xi(−τ)

R_XqXi(τ) is an odd function of τ:

R_XqXi(τ) = −R_XqXi(−τ)

S_Xi(f) is even-symmetric and real.

S_XqXi(f) is odd-symmetric and purely imaginary.

Important: If S_X(f) is even-symmetric around f_c, then S_XqXi(f) = 0 and R_XqXi(τ) = 0.

In this case, X_i(t) and X_q(t) are uncorrelated (and independent if Gaussian).

Bandpass noise with a flat spectrum

N(t):
S_N(f) = N_0/2 for | |f| − f_c | ≤ B/2

S_Ni(f) = N_0 for |f| ≤ B/2

Autocorrelation of the lowpass equivalent noise:

R_Ni(τ) = N_0 sin(πBτ)/(πτ) = N_0 B sinc(Bτ)

R_Ni(0) = N_0 B
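The sinc form of R_Ni can be verified by numerically inverse-transforming the flat lowpass spectrum (the values of N_0 and B below are assumed):

```python
import numpy as np

# R_Ni(tau) = integral of S_Ni(f) e^{j 2 pi f tau} df over |f| <= B/2,
# which should equal N0*B*sinc(B*tau).  Note np.sinc(u) = sin(pi u)/(pi u).
N0, B = 2.0, 4.0
f = np.linspace(-B / 2, B / 2, 100_001)
df = f[1] - f[0]
for tau in (0.0, 0.1, 0.37):
    numeric = np.sum(N0 * np.cos(2 * np.pi * f * tau)) * df  # Riemann sum
    exact = N0 * B * np.sinc(B * tau)
    print(tau, numeric, exact)
```

The τ = 0 line recovers the total power R_Ni(0) = N_0 B.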

3 Discrete Representation of Continuous Signals
Given an orthonormal set {φ_k(t)}_{k∈I}, the best approximation of s(t) is

ŝ(t) = Σ_k <s, φ_k> φ_k(t)

such that the error energy E_e = ∫ |s(t) − ŝ(t)|² dt is minimized.

Scalar product:
<x, y> = ∫ x(t) y*(t) dt

If <x, y> = 0, x(t) and y(t) are orthogonal.

Orthonormal basis functions {φ_k(t)}_{k∈I}:
<φ_i, φ_j> = 1 if i = j, and 0 if i ≠ j

Approximate s(t) by ŝ(t) = Σ_k s_k φ_k(t) such that

E_e = ∫ |s(t) − ŝ(t)|² dt

is minimized.

Optimal projections:
s_k = <s, φ_k> = ∫ s(t) φ_k*(t) dt

(E_e)_min = E_s − Σ_k |s_k|²

If (E_e)_min = 0, the set {φ_k(t)}_{k∈I} is complete:

s(t) = Σ_{k∈I} s_k φ_k(t)

s(t) ↔ {s_k}_{k∈I}
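These projection formulas are easy to verify on a discretized grid, where integrals become sums times dt. The signal and the two-function orthonormal set below are illustrative assumptions:

```python
import numpy as np

# Check s_k = <s, phi_k> and (Ee)_min = Es - sum_k |s_k|^2 numerically.
T, n = 1.0, 10_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
# Two orthonormal functions on [0, T): scaled cosine and sine.
phi = np.array([np.sqrt(2 / T) * np.cos(2 * np.pi * t / T),
                np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)])
# s has components inside span{phi} plus one orthogonal leftover term.
s = 3.0 * phi[0] - 1.5 * phi[1] + np.sqrt(2 / T) * np.cos(4 * np.pi * t / T)

sk = phi @ s * dt                    # projections s_k = <s, phi_k>
s_hat = sk @ phi                     # best approximation in span{phi_k}
Es = np.sum(s**2) * dt               # signal energy
Ee = np.sum((s - s_hat) ** 2) * dt   # residual energy
print(sk, Ee, Es - np.sum(sk**2))    # Ee matches Es - sum |s_k|^2
```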

Why is an orthonormal transformation important?

Distance preserving:

∫ s_i(t) s_j*(t) dt = Σ_k s_ik s_jk* = <s_i, s_j> = s_j^H s_i

∫ |s_i(t)|² dt = Σ_k |s_ik|² = ‖s_i‖²

∫ |s_i(t) − s_j(t)|² dt = Σ_k |s_ik − s_jk|² = ‖s_i − s_j‖²

Ex 1: Representation of a time-limited signal (Fourier series)

s(t) = Σ_k c_k (1/√T) e^{j2πkt/T},  t ∈ [0, T]

c_k = <s, φ_k> = ∫_0^T s(t) (1/√T) e^{−j2πkt/T} dt
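As a numerical illustration (the ramp test signal and truncation order are assumptions), a truncated version of this series already captures most of the signal energy, and the residual energy shrinks as more harmonics are kept:

```python
import numpy as np

# Expand x(t) = t on [0, T) over phi_k(t) = (1/sqrt(T)) e^{j 2 pi k t / T}
# and measure the residual energy of the truncated expansion.
T, n = 1.0, 8192
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
x = t                                        # assumed test signal
K = 50                                       # keep harmonics |k| <= K
x_hat = np.zeros(n, dtype=complex)
for k in range(-K, K + 1):
    phi_k = np.exp(1j * 2 * np.pi * k * t / T) / np.sqrt(T)
    ck = np.sum(x * np.conj(phi_k)) * dt     # c_k = <x, phi_k>
    x_hat += ck * phi_k
err = np.sum(np.abs(x - x_hat) ** 2) * dt    # residual energy E_e
print(err)
```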

Ex 2: Fourier Transform

X(f) = ∫ x(t) e^{−j2πft} dt

x(t) = ∫ X(f) e^{j2πft} df

A limiting view of the Fourier transform (with Δf = 1/T_0):

x(t) = Σ_n X(nΔf) e^{j2πnΔf t} Δf = Σ_n X(n/T_0) (1/T_0) e^{j2πnt/T_0}

Ex 3: Sampling of a band-limited signal

x(t) ↔ X(f), with X(f) band-limited to (−W, W).

Expand X(f) in a Fourier series on (−W, W):

X(f) = Σ_n c_n (1/√(2W)) e^{−j2πnf/(2W)}

c_n = (1/√(2W)) ∫_{−W}^{W} X(f) e^{j2πnf/(2W)} df = (1/√(2W)) x(n/(2W))

Hence

X(f) = (1/(2W)) Σ_{n=−∞}^{∞} x(n/(2W)) e^{−j2πnf/(2W)}

x(t) = ∫_{−W}^{W} X(f) e^{j2πft} df
     = Σ_{n=−∞}^{∞} x(n/(2W)) sin(2πW(t − n/(2W))) / (2πW(t − n/(2W)))
     = Σ_{n=−∞}^{∞} x(n/(2W)) sinc(2W(t − n/(2W)))

So

{ √(2W) sinc(2W(t − n/(2W))) }_n is an orthonormal set.
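The interpolation formula can be exercised directly. In the sketch below, the two-tone test signal, the band limit, and the truncation range are assumptions (the infinite sum is truncated to a finite window of samples):

```python
import numpy as np

# Rebuild a band-limited signal from its samples at rate 2W using
# x(t) = sum_n x(n/2W) sinc(2W(t - n/2W));  np.sinc(u) = sin(pi u)/(pi u).
W = 4.0                                       # band limit: spectrum in (-W, W)
n = np.arange(-500, 501)                      # truncated sample-index range

def signal(t):
    # two tones at 1 Hz and 3 Hz, both below W
    return np.cos(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

samples = signal(n / (2 * W))                 # x(n/2W)

def reconstruct(t):
    return np.sum(samples * np.sinc(2 * W * (t - n / (2 * W))))

t0 = 0.137                                    # arbitrary off-grid time
print(reconstruct(t0), signal(t0))
```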

Whiteboard pictures

[Two pages of whiteboard photographs]
Gram-Schmidt Orthogonalization
How do we find {φ_k(t)} in general?

(i)
φ_1(t) = s_1(t) / √(∫_0^T s_1²(t) dt)

(ii)
g_2(t) = s_2(t) − s_21 φ_1(t), where s_21 = <s_2, φ_1>

φ_2(t) = g_2(t) / √(∫_0^T g_2²(t) dt)

(iii)
g_3(t) = s_3(t) − s_31 φ_1(t) − s_32 φ_2(t)

φ_3(t) = g_3(t) / √(∫_0^T g_3²(t) dt)

and so on.
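The procedure above translates directly to sampled signals, with inner products replaced by sums times dt. A minimal sketch (the grid and test signals are illustrative assumptions):

```python
import numpy as np

# Discretized Gram-Schmidt: <x, y> becomes sum(x*y)*dt.  Signals that are
# linearly dependent on earlier ones produce a (numerically) zero residual
# and contribute no new basis function.
def gram_schmidt(signals, dt, tol=1e-9):
    basis = []
    for s in signals:
        g = s.astype(float).copy()
        for phi in basis:                     # subtract projections s_mk*phi_k
            g -= (np.sum(s * phi) * dt) * phi
        energy = np.sum(g * g) * dt           # ||g||^2
        if energy > tol:                      # keep only nonzero residuals
            basis.append(g / np.sqrt(energy))
    return basis

dt = 0.001
t = np.arange(0.0, 1.0, dt)
s1 = np.ones_like(t)
s2 = t
s3 = 2.0 * t + 3.0                            # linear combination of s1, s2
basis = gram_schmidt([s1, s2, s3], dt)
G = np.array([[np.sum(a * b) * dt for b in basis] for a in basis])
print(len(basis))                             # 2: s3 adds no new dimension
print(np.round(G, 6))                         # Gram matrix ~ identity
```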

[Figure: example signals s_1(t), ..., s_4(t) built from unit-width rectangular pulses on [0, 3], where b(t) denotes the unit pulse of width 1, and the resulting orthonormal functions φ_1(t), φ_2(t), φ_3(t)]

1. Step 1:
   φ_1(t) = s_1(t) / √(∫ s_1²(t) dt) = (1/√2) s_1(t) = b(t);  m = 1, N = 1

2. Step 2: m = 2
   s_m1 = ∫ s_2(t) φ_1(t) dt = 1
   ŝ_m(t) = φ_1(t)
   e(t) = s_m(t) − ŝ_m(t) = b(t − 1)

3. Step 3: N = 2, φ_2(t) = e(t)/‖e(t)‖ = b(t − 1)

4. Step 2: m = 3
   s_m1 = ∫ s_3(t) φ_1(t) dt = 1
   s_m2 = ∫ s_3(t) φ_2(t) dt = 1/2
   ŝ_m(t) = φ_1(t) + (1/2) φ_2(t)
   e(t) = s_m(t) − ŝ_m(t) = −(3/4) b(t − 2)

5. Step 3: N = 3, φ_3(t) = e(t)/‖e(t)‖ = −b(t − 2)

6. Step 2: m = 4, ..., e(t) = 0

7. Step 3: since e(t) = 0, go to step 2.

8. Step 2: m = 5 > M, stop. Output N = 3 and φ_1(t), φ_2(t), φ_3(t).
