
Random Processes, Optimal Filtering and Model-based Signal Processing

Elena Punskaya
www-sigproc.eng.cam.ac.uk/~op205

Overview of the course

A good text for the course: Monson H. Hayes, Statistical Digital Signal Processing and Modeling, Wiley, 1996

Discrete-time Random Processes

A random process is a rule that maps every outcome k of an experiment to a function. It can therefore be viewed as a family of functions: a single member of the family is identified by the outcome k and is simply a function of, for example, time, where t = nT and T is the sampling interval.


4

Random Processes

Ensemble representation of a discrete-time random process


[Figure: ensemble representation of a discrete-time random process. Each point of the sample space (outcome 1, 2, ...) selects one realisation; sampling the ensemble at times n1, n2, n3 gives the random variables Xn1, Xn2, Xn3, each drawn from a pdf f(x), which together form a random vector. Here t = nT, with T the sampling interval.]


6

Discrete-time and Continuous-time Random process

Example: the harmonic process



Xn = A sin(nΩ0 + Θ), where the phase Θ is a random variable, uniformly distributed on (-π, π] with pdf f(θ) = 1/2π.

Here Ω0 = ω0 T is the normalised frequency (ω0 the true frequency, T the sampling period); expressed in terms of Ω0, the discrete-time process is independent of T.

[Figure: the uniform pdf f(θ) = 1/2π and a few members of the random phase sine ensemble.]

10
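As a quick illustration of the ensemble idea (not from the slides; the amplitude, normalised frequency, number of members and random seed are arbitrary choices), a minimal NumPy sketch generating a few members of the random phase sine ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)
A, Omega0 = 1.0, 0.2 * np.pi          # illustrative amplitude and normalised frequency
n = np.arange(100)                    # discrete time index

# Each outcome of the experiment is one draw of the phase Theta ~ U(-pi, pi);
# it selects one member (one realisation) of the ensemble.
thetas = rng.uniform(-np.pi, np.pi, size=5)
ensemble = np.array([A * np.sin(n * Omega0 + theta) for theta in thetas])

print(ensemble.shape)                 # (5, 100): 5 ensemble members, 100 time samples each
```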

Correlation functions: Autocorrelation

11

Cross-Correlation function

12

Stationarity

13

Stationarity
Random vectors, see Fig.1

Joint pdf of the random vector (Xn1, Xn2, Xn3) obtained by sampling the process at times n1, n2, n3 (as in Fig. 1).

14

Stationarity

15

Strict-Sense Stationarity

16

Wide-sense stationarity

17

Wide-sense stationarity

WSS is sometimes known as weakly stationary.

Note that SSS implies WSS only for finite-variance processes: an SSS process with infinite variance is not WSS.

18

Example: random phase sine-wave

19

Example: random phase sine-wave


Take the three WSS conditions in turn.

Condition 1 OK: the mean is constant. Since Xn = A sin(nΩ0 + Θ) and nΩ0 is a constant, expanding the sum of angles gives E[Xn] = A sin(nΩ0) E[cosΘ] + A cos(nΩ0) E[sinΘ], where
E[cosΘ] = ∫ cosθ f(θ) dθ = ∫ cosθ (1/2π) dθ = 0, since sin(π) = 0 and sin(-π) = 0,
E[sinΘ] = ∫ sinθ f(θ) dθ = ∫ sinθ (1/2π) dθ = 0, since sinθ is an odd function.
Hence E[Xn] = 0, a constant.

Example: random phase sine-wave


Condition 2 OK: the autocorrelation depends only on the lag. rXX[n, m] = E[Xn Xm], where Xn and Xm come from the same member of the ensemble (the same Θ). Using sinA sinB = 0.5[cos(A-B) - cos(A+B)]:
rXX[n, m] = 0.5 A² E[cos((n-m)Ω0) - cos((n+m)Ω0 + 2Θ)]
= 0.5 A² cos((n-m)Ω0) - 0.5 A² E[cos((n+m)Ω0 + 2Θ)].
For the second term, use cos(A+B) = cosA cosB - sinA sinB with A = (n+m)Ω0 a fixed constant and B = 2Θ; simplifying and proceeding as before, E[cos(2Θ)] = E[sin(2Θ)] = 0, so this term vanishes.
Hence rXX[n, m] = 0.5 A² cos((n-m)Ω0), a function of the lag n-m only.
21

Example: random phase sine-wave


Condition 3 OK: check the variance is finite. σX² = E[(Xn - μ)²] with μ = 0, so σX² = E[Xn²] (the autocorrelation at zero lag) = rXX[0] = 0.5 A² cos((n-n)Ω0) = 0.5 A² < ∞.

22

Power Spectra

23

Autocorrelation function from power spectrum

24

Power Spectra

The power spectrum gives the contribution to the mean square in a frequency interval: the mean-square contribution of the band [ωl, ωu] is (2/2π) times the integral of SXX(e^{jΩ}) over Ω from ωlT to ωuT, the factor 2 accounting for the matching band of negative frequencies.
25

Example: Power Spectrum

26

Example: Power Spectrum

Write cos as half the sum of complex exponentials:
rXX[m] = 0.5 A² cos(mΩ0) = 0.25 A² [exp(jmΩ0) + exp(-jmΩ0)].
Recall from Part 1B the Fourier series representation of a periodic function of Ω: the coefficient sequences C1m = exp(jmΩ0) and C2m = exp(-jmΩ0) each correspond to an impulse train of period 2π, so SXX(e^{jΩ}) is the sum of two impulse trains of period 2π. E.g., for the train of impulses 2πδ(Ω - Ω0): C2m = (1/2π) ∫ 2πδ(Ω - Ω0) exp(-jmΩ) dΩ = exp(-jmΩ0).
27

Power spectrum of harmonic process

SXX(e^{jΩ}) = (πA²/2) [δ(Ω - Ω0) + δ(Ω + Ω0)], repeated periodically with period 2π.
[Figure: line spectrum of the harmonic process, with impulses at ±Ω0 and their periodic repetitions at Ω0 ± 2π, -Ω0 ± 2π, ...]

28

White Noise

cXX[m] = σX² δ[m], where δ[m] is the unit impulse; the variance is σX² = cXX[0].

29

White Noise
rXX[m] = cXX[m] = σX² δ[m] (zero mean), where δ[m] = 1 only for m = 0.

30
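A short numerical sketch (illustrative only; the variance and sample size are arbitrary choices) checking that the sample autocorrelation of white Gaussian noise approximates σX² δ[m]:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 2.0                                          # chosen variance for the illustration
x = rng.normal(0.0, np.sqrt(sigma2), size=100_000)    # zero-mean white Gaussian noise

def sample_autocorr(x, max_lag):
    # biased estimate r[m] = (1/N) * sum_n x[n] x[n-m]
    N = len(x)
    return np.array([np.dot(x[m:], x[:N - m]) / N for m in range(max_lag + 1)])

print(np.round(sample_autocorr(x, 5), 3))             # close to [2, 0, 0, 0, 0, 0]
```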

Example: white Gaussian noise (WGN)

31

Example: white Gaussian noise (WGN)

fXn(xn) = N(xn | μ = 0, σX²)
[Figure: a WGN realisation; the samples at times n1, n2, n3 form the random vector (Xn1, Xn2, Xn3).]

32

Example: white Gaussian noise

since all the Xni are independent, the joint pdf factorises into a product of N(xni | 0, σX²) terms

33

Example: white Gaussian noise

The statistical characteristics are the same irrespective of shifts along the time axis: an observer looking at the process from sampling time n1 would not be able to detect any difference in its statistical characteristics after moving to a different time n2.

34

Linear systems and random processes

e.g. Digital Filter


35

Linear systems and random processes

36

Linear systems and random processes

37

Linear systems and random processes

38

Linear systems and random processes

39

Linear systems and random processes

Time-reversed impulse response

40
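For reference, the standard correlation results for a stable LTI filter h[n] driven by a WSS input Xn, which is where the time-reversed impulse response enters (stated here in the notation of the slides):

$$ r_{YY}[m] = h[m] * h[-m] * r_{XX}[m], \qquad P_{YY}(e^{j\Omega}) = |H(e^{j\Omega})|^2 \, P_{XX}(e^{j\Omega}). $$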

Linear systems and random processes

41

Linear systems and random processes

42

Example: Filtering white noise

43

Example: Filtering white noise

44

Example: Filtering white noise

45

Example: Filtering white noise

The output autocorrelation is non-zero only at lags m = -1, m = 0, m = 1.

46

Example: Filtering white noise


H(z) has a zero at z = -b0/b1 = -1/0.9. Hence |H(e^{jΩ})| has a minimum at Ω = π + 2πn and a maximum at Ω = 0 + 2πn; the magnitude response is periodic in Ω with period 2π.
[Figure: z-plane plot of the zero and the resulting periodic magnitude response, with maxima at Ω = 2πn and minima at Ω = π + 2πn.]

47
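A numerical sketch of this example (illustrative only: the coefficients b0 = 1, b1 = 0.9 are assumed values, chosen so that the zero lies on the negative real axis and the magnitude response has its maximum at Ω = 0 and minimum at Ω = π, as described above):

```python
import numpy as np
from scipy.signal import lfilter, freqz

rng = np.random.default_rng(2)
b = [1.0, 0.9]                        # assumed first-order FIR filter: y[n] = x[n] + 0.9 x[n-1]
x = rng.normal(size=200_000)          # unit-variance white noise input
y = lfilter(b, [1.0], x)

# Output autocorrelation: non-zero only at lags m = -1, 0, 1
N = len(y)
r = [np.dot(y[abs(m):], y[:N - abs(m)]) / N for m in (-1, 0, 1, 2)]
print(np.round(r, 3))                 # ~ [0.9, 1.81, 0.9, 0.0] = (b0*b1, b0^2 + b1^2, b0*b1, 0)

# Magnitude response: maximum at Omega = 0, minimum at Omega = pi
w, H = freqz(b, worN=[0.0, np.pi])
print(np.abs(H))                      # ~ [1.9, 0.1]
```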

Ergodic Random processes

48

Ergodic Random processes

49

Ergodic Random processes

50

Ergodic Random processes

51
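A quick numerical check (illustrative: the AR(1) process, its parameter and the sample sizes are arbitrary choices) comparing a time average from one long realisation with an ensemble average over many independent realisations, as ergodicity requires:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
a = 0.9                                         # AR(1): x[n] = a x[n-1] + w[n]

# Time average of x[n] x[n-1] over one long realisation
x = lfilter([1.0], [1.0, -a], rng.normal(size=500_000))
time_avg = np.dot(x[1:], x[:-1]) / len(x)

# Ensemble average of X[n0] X[n0-1] over many independent realisations
M, n0 = 5_000, 500
X = lfilter([1.0], [1.0, -a], rng.normal(size=(M, n0 + 1)), axis=-1)
ens_avg = np.mean(X[:, n0] * X[:, n0 - 1])

print(time_avg, ens_avg)                        # both close to a / (1 - a**2), about 4.74
```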

Example

52

Example

which is what we expected to obtain

53

Example

54

Ergodic processes

55

Comment: Complex-valued processes

56

Summary
Looked at discrete-time random processes (most results for continuous-time random processes follow through almost directly). Defined:
- Correlation functions (auto- and cross-)
- Stationarity (strict sense and wide sense, 3 conditions)
- Power spectrum (and calculation of the autocorrelation function using the inverse DTFT)
- White noise (in particular, white Gaussian noise)
- Ergodic processes
- Linear systems and wide-sense stationary processes
Revision: continuous-time random processes, see handouts, not covered during lectures

57

Optimal Filtering

58

The general Wiener filtering problem

[Block diagram: the unobserved desired signal and additive noise are summed to give the observed signal, which is passed through the filter H(z) that we need to design.]

59

The general Wiener filtering problem

60

The Discrete-time Wiener Filter

Need to design this filter: H(z)

61

Filtering observed noisy signal

62

Mean-squared error (MSE)

63

Mean-squared error (MSE)


[Block diagram: the observed signal is passed through the filter H(z) that we need to design; the filter output is compared with the desired signal at the summing junction to form the error signal.]

64

The Discrete-time Wiener Filter

65

Derivation of Wiener filter

66

Derivation of Wiener filter

67

Derivation of Wiener filter

rdx[-q] = rxd[q]


68

Derivation of Wiener filter

69

Derivation of Wiener filter

70

Mean-squared error for the optimal filter

71

Mean-squared error for the optimal filter


Thus, the minimum error is:

72
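The key results of this derivation, restated in display form for reference (these are the standard unconstrained Wiener filter results, using dn for the desired signal and xn for the observation):

$$ \sum_{p} h_p \, r_{xx}[q-p] = r_{dx}[q] \ \text{for all } q \quad\Longrightarrow\quad H(e^{j\Omega}) = \frac{P_{dx}(e^{j\Omega})}{P_{xx}(e^{j\Omega})}, $$

$$ J_{\min} = r_{dd}[0] - \sum_{p} h_p \, r_{dx}[p]. $$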

Important Special Case: Uncorrelated Signal and Noise Processes

73

Important Special Case: Uncorrelated Signal and Noise Processes

74

Important Special Case: Uncorrelated Signal and Noise Processes

75

Important Special Case: Uncorrelated Signal and Noise Processes

This leads to the Wiener filter
H(e^{jΩ}) = Pdd(e^{jΩ}) / (Pdd(e^{jΩ}) + Pvv(e^{jΩ})),
where Pdd(e^{jΩ}) and Pvv(e^{jΩ}) are real and positive.
76

Important Special Case: Uncorrelated Signal and Noise Processes

Equivalently,
H(e^{jΩ}) = 1 / (1 + Pvv(e^{jΩ})/Pdd(e^{jΩ})) = 1 / (1 + 1/SNR(Ω)),
where SNR(Ω) = Pdd(e^{jΩ})/Pvv(e^{jΩ}) is the signal-to-noise ratio at frequency Ω.

77

Example: AR Process
A 1st-order all-pole process, also known as an autoregressive (AR) process, is generated by passing zero-mean white noise through a first-order all-pole IIR filter

H(z) = 1 / (1 - α z^{-1}),
with white noise input wn of variance σw²

78

Example: AR Process

This is our AR process

79

Example: AR Process

80
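A minimal sketch of this example (the values α = 0.8 and σw² = 1 are assumed for illustration), generating the AR(1) process and comparing its sample autocorrelation with the theoretical σw² α^|k| / (1 - α²):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
alpha, sigma_w2 = 0.8, 1.0                       # illustrative AR(1) parameters
w = rng.normal(0.0, np.sqrt(sigma_w2), size=400_000)

# Pass zero-mean white noise through H(z) = 1 / (1 - alpha z^-1)
d = lfilter([1.0], [1.0, -alpha], w)

N = len(d)
r_hat = [np.dot(d[k:], d[:N - k]) / N for k in range(4)]
r_theory = [sigma_w2 * alpha**k / (1 - alpha**2) for k in range(4)]
print(np.round(r_hat, 3))
print(np.round(r_theory, 3))                     # r_dd[k] = sigma_w^2 alpha^|k| / (1 - alpha^2)
```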

Example: Deconvolution
Consider the following process

xn = dn + 0.8 dn-1 + vn

where vn is zero-mean, unit-variance white noise uncorrelated with dn. Assume dn is a WSS AR(1) process with rdd[k] = (0.5)^|k|. Determine the optimal Wiener filter to estimate dn from xn.

81

Example: Deconvolution

82

Example: Deconvolution

and take the DTFT of rxd and rxx to obtain the Wiener filter

83
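A numerical sketch of this example (the correlation expressions below follow from xn = dn + 0.8 dn-1 + vn with rdd[k] = 0.5^|k| and unit-variance white vn uncorrelated with dn; the lag truncation and frequency grid are arbitrary choices):

```python
import numpy as np

r_dd = lambda k: 0.5 ** abs(k)                   # given autocorrelation of d_n

def r_dx(k):   # cross-correlation E[d_n x_{n-k}] for x_n = d_n + 0.8 d_{n-1} + v_n
    return r_dd(k) + 0.8 * r_dd(k + 1)

def r_xx(k):   # autocorrelation of x_n (v_n: unit-variance white, uncorrelated with d_n)
    return 1.64 * r_dd(k) + 0.8 * (r_dd(k - 1) + r_dd(k + 1)) + (1.0 if k == 0 else 0.0)

# Evaluate the DTFTs on a frequency grid and form the (unconstrained) Wiener filter
lags = np.arange(-60, 61)                        # 0.5^60 is negligible, so truncation is safe
Omega = np.linspace(-np.pi, np.pi, 9)
P_dx = np.array([sum(r_dx(k) * np.exp(-1j * W * k) for k in lags) for W in Omega])
P_xx = np.array([sum(r_xx(k) * np.exp(-1j * W * k) for k in lags) for W in Omega])
print(np.round(P_dx / P_xx, 3))                  # H(e^{jOmega}) on the grid
```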

The FIR Wiener filter

84

The FIR Wiener filter

85

The FIR Wiener filter

86

The correlation matrix

87

The FIR Wiener filter

The correlation matrix Rxx is positive definite.

88

The FIR Wiener filter

89
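In summary (the standard FIR Wiener filter result, stated in display form for reference), the P-tap filter h = (h0, ..., hP-1) that minimises the MSE satisfies the normal equations

$$ \mathbf{R}_{xx}\,\mathbf{h} = \mathbf{r}_{dx}, \qquad [\mathbf{R}_{xx}]_{ij} = r_{xx}[i-j], \quad [\mathbf{r}_{dx}]_i = r_{dx}[i], $$

so that h = Rxx^{-1} rdx, with minimum mean-squared error Jmin = rdd[0] - rdx^T h.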

Example: AR process
Consider a 1st-order AR process dn which has autocorrelation function

rdd[k] = α^|k|, with -1 < α < 1

The process is observed in zero-mean white noise vn with variance σv², which is uncorrelated with dn:
xn = dn + vn
Design the 1st-order FIR Wiener filter for estimation of dn. We need to find rxx[k] and rdx[k] from rdd[k] and σv², which gives
rdx[k] = rdd[k] = α^|k|,   rxx[k] = α^|k| + σv² δ[k]
90

Example: AR process

91

Example: AR process
h = Rxx^{-1} rdx = [ 1+σv²  α ; α  1+σv² ]^{-1} [ rdd[0] ; rdd[1] ] = [ 1+σv²  α ; α  1+σv² ]^{-1} [ 1 ; α ]

92
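A numerical sketch of this two-tap filter (α = 0.8 and σv² = 1 are assumed illustrative values):

```python
import numpy as np

alpha, sigma_v2 = 0.8, 1.0                        # assumed example values
r_dd = lambda k: alpha ** abs(k)

# Normal equations R_xx h = r_dx, with r_xx[k] = alpha^|k| + sigma_v2 delta[k], r_dx[k] = r_dd[k]
R_xx = np.array([[1 + sigma_v2, alpha],
                 [alpha, 1 + sigma_v2]])
r_dx = np.array([r_dd(0), r_dd(1)])               # = [1, alpha]

h = np.linalg.solve(R_xx, r_dx)
J_min = r_dd(0) - r_dx @ h                        # minimum mean-squared error
print(np.round(h, 3), round(J_min, 3))            # h ~ [0.405, 0.238], J_min ~ 0.405
```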

Example: Noise cancellation


[Block diagram: a signal source produces dn and a noise source produces vn; their sum is the observation.]
Design a Wiener Filter to estimate dn from

xn = dn + vn

93

Example: Noise cancellation


First, assume dn and vn are stationary and ergodic, so that the required correlation functions can be estimated by time averaging over a large number of samples N. Also assume that a long segment of vn is available during a silent section of the music/speech, and estimate rvv[k] from it; rxx[k] is estimated in the same way from the observed data xn.
94

Example: Noise cancellation


For the FIR Wiener Filter

95

Example: Noise cancellation


For the FIR Wiener Filter

96

Example: Noise cancellation


In this equation rxx and rvv are estimated as explained before, and the resulting set of equations can be solved using, for example, Matlab.

In fact, audio signals are non-stationary. Thus, the filter needs to be applied to short, quasi-stationary batches of data, one by one. A frequency-domain version using the FFT can also be implemented successfully.

97
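A sketch of how the procedure above could be implemented (illustrative only: the synthetic signals, the filter length P and the use of rdx[k] = rxx[k] - rvv[k], which follows from dn and vn being uncorrelated, are assumptions made for this illustration):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(5)
P, N = 16, 200_000

# Synthetic stand-ins: d_n plays the role of the music/speech, v_n is coloured noise.
# x_n = d_n + v_n is observed; v_silent is a noise-only recording from a silent section.
n = np.arange(N)
d = np.sin(0.05 * np.pi * n) + 0.3 * rng.normal(size=N)
v = np.convolve(rng.normal(size=N + 4), np.ones(5) / 5, mode='valid')
x = d + v
v_silent = np.convolve(rng.normal(size=N + 4), np.ones(5) / 5, mode='valid')

def acorr(s, max_lag):
    # biased time-average estimate of the autocorrelation (ergodicity assumed)
    return np.array([np.dot(s[m:], s[:len(s) - m]) / len(s) for m in range(max_lag + 1)])

r_xx = acorr(x, P - 1)                            # estimated from the noisy observation
r_vv = acorr(v_silent, P - 1)                     # estimated from the silent section
h = np.linalg.solve(toeplitz(r_xx), r_xx - r_vv)  # normal equations with r_dx = r_xx - r_vv
d_hat = np.convolve(x, h)[:N]                     # filtered estimate of d_n
print(np.round(h[:4], 3))
```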

Model-based Signal Processing

98

Autoregressive moving-average model

99

Model-based Signal Processing

100

ARMA modelling

101

ARMA modelling

102

ARMA modelling

103

ARMA modelling

104

ARMA modelling

105
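To make the ARMA model concrete, a minimal sketch (the orders and coefficients are arbitrary illustrative values, chosen with the poles inside the unit circle) generating an ARMA(2, 1) process by filtering white noise with H(z) = B(z)/A(z):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(6)

a = [1.0, -1.5, 0.7]            # A(z) = 1 - 1.5 z^-1 + 0.7 z^-2 (stable: poles inside unit circle)
b = [1.0, 0.5]                  # B(z) = 1 + 0.5 z^-1
w = rng.normal(size=10_000)     # zero-mean white noise input

# Difference equation: x[n] = 1.5 x[n-1] - 0.7 x[n-2] + w[n] + 0.5 w[n-1]
x = lfilter(b, a, w)
print(x[:5])
```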

Autoregressive models

106

AR model

107

Autocorrelation function of AR model

108

Autocorrelation function of AR model

109

Autocorrelation function of AR model

The inverse z-transform of 1/A(z)

110

Yule-Walker Equations

111

Yule-Walker Equations

E[wn Xn-r] = σw² δ[r], i.e. the driving-noise term is non-zero only for r = 0

112

Yule-Walker Equations

113

Yule-Walker Equations

Writing the equations for r = 0, r = 1, ..., r = P gives P+1 simultaneous equations, which can be stacked in matrix form.

114

Yule-Walker Equations

115

Yule-Walker Equations

116
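For reference, the Yule-Walker equations written out in the sign convention that matches the numbers in the worked example below (AR model xn + a1 xn-1 + ... + aP xn-P = wn):

$$ r_{XX}[r] + \sum_{i=1}^{P} a_i \, r_{XX}[r-i] = \sigma_w^2 \, \delta[r], \qquad r = 0, 1, \dots, P. $$

For r = 1, ..., P this gives R a = -r (with [R]_{ij} = rXX[i-j] and [r]_i = rXX[i]), and the r = 0 equation gives the driving-noise variance σw² = rXX[0] + Σ ai rXX[i].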

Solving for AR coefficients

117

Example: AR coefficients estimation


Take p = 2. We have measured rXX[0] = 6.14, rXX[1] = 3.08, rXX[2] = -2.55. In practice these would be measured using the ergodicity of the process (time averages over a large number of samples).

118

Example: AR coefficients estimation


Yule-Walker Equations:

[ 6.14  3.08 ; 3.08  6.14 ] [ a1 ; a2 ] = -[ 3.08 ; -2.55 ]

giving a1 = -0.95, a2 = 0.89.


119

Example: AR coefficients estimation

σw² = rXX[0] + a1 rXX[1] + a2 rXX[2] = 6.14 + 3.08 × (-0.95) + (-2.55) × 0.89 ≈ 0.94

120
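A quick check of these numbers (same measured autocorrelation values as above; NumPy is used only to solve the 2x2 system):

```python
import numpy as np

r = np.array([6.14, 3.08, -2.55])            # measured r_XX[0], r_XX[1], r_XX[2]

R = np.array([[r[0], r[1]],
              [r[1], r[0]]])
a = np.linalg.solve(R, -r[1:])               # Yule-Walker: R a = -[r_XX[1], r_XX[2]]
sigma_w2 = r[0] + a[0] * r[1] + a[1] * r[2]  # r_XX[0] + a1 r_XX[1] + a2 r_XX[2]

print(np.round(a, 2), round(sigma_w2, 2))    # a ~ [-0.95, 0.89], sigma_w2 ~ 0.95
```

The exact solve gives a noise variance of about 0.95; the 0.94 above comes from substituting the already-rounded coefficients -0.95 and 0.89.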

Thank you!

121
