
# Random Processes, Optimal Filtering and Model-based Signal Processing

Elena Punskaya
www-sigproc.eng.cam.ac.uk/~op205

## Overview of the course

A good text for the course: Monson H. Hayes, Statistical Digital Signal Processing and Modeling, Wiley, 1996

## Discrete-time Random Processes

A random process is a family (ensemble) of functions: a single member function is identified by the outcome k of the underlying random experiment, and is then simply a function of, for example, time, where t = nT and T is the sampling interval.

## Ensemble representation of a discrete-time random process

[Figure: ensemble of realisations drawn from the sample space; the random variable X_n at a fixed time n has pdf f_X(x), and the random vector (X_n1, X_n2, X_n3) collects samples at times n1, n2, n3.]

## Example: the harmonic process

The harmonic process is X_n = A sin(n ω0 + Θ), with ω0 = Ω0 T, where T is the sampling period (the discrete-time results below are otherwise independent of T). The random phase Θ is uniformly distributed with pdf f_Θ(θ) = 1/(2π) for -π < θ ≤ π.

## Correlation functions

Autocorrelation: r_XX[n1, n2] = E[X_n1 X_n2]

Cross-correlation: r_XY[n1, n2] = E[X_n1 Y_n2]

Stationarity

A process is stationary if its statistical characteristics are unchanged by a shift of the time origin. For random vectors, see Fig. 1: the joint pdf of (X_n1, X_n2, X_n3) is compared with the joint pdf of the samples taken at the shifted times.

[Figure: random vector (X_n1, X_n2, X_n3) taken at times n1, n2, n3.]

Strict-Sense Stationarity

Wide-Sense Stationarity

WSS is sometimes known as weak stationarity. Note that the implication (SSS implies WSS) only applies for finite-variance processes: an SSS process with infinite variance is not WSS.

## Example: random phase sine-wave

Take the three WSS conditions in turn.

Condition 1 (constant mean): OK.
E[X_n] = A E[sin(n ω0 + Θ)] = A sin(n ω0) E[cos Θ] + A cos(n ω0) E[sin Θ], using the sum-of-angles formula with n ω0 a constant.
E[cos Θ] = ∫ cos(θ) f_Θ(θ) dθ = ∫ cos(θ) (1/2π) dθ = 0, since sin(π) = 0 and sin(-π) = 0.
E[sin Θ] = ∫ sin(θ) f_Θ(θ) dθ = ∫ sin(θ) (1/2π) dθ = 0, since sin is an odd function.
Hence E[X_n] = 0, a constant.

Condition 2 (autocorrelation depends only on the lag): OK.
r_XX[n, n+m] = E[X_n X_{n+m}], with the nth and (n+m)th samples taken from the same member of the ensemble.
Using sin A sin B = 0.5[cos(A - B) - cos(A + B)]:

r_XX[n, n+m] = 0.5 A² { cos(m ω0) - E[cos((2n + m) ω0 + 2Θ)] }

Expanding cos(A + B) = cos A cos B - sin A sin B with B = 2Θ and A = (2n + m) ω0 a fixed constant, we can simplify and show as before that E[cos(2Θ)] = E[sin(2Θ)] = 0, so the second term vanishes and

r_XX[m] = 0.5 A² cos(m ω0), a function of the lag m only.

Condition 3 (finite variance): OK.
σx² = E[(X_n - μ)²], with μ = 0, so σx² = E[X_n²] = r_XX[0] (the autocorrelation at lag 0) = 0.5 A² cos((n - n) ω0) = 0.5 A² < ∞.

All three conditions hold, so the random-phase sine-wave is WSS.
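The three checks can also be verified numerically with a small Monte Carlo sketch; the amplitude, frequency and time indices below are hypothetical choices, not values from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
A, w0, n, m = 2.0, 0.7, 5, 3          # hypothetical amplitude, frequency, time indices
theta = rng.uniform(-np.pi, np.pi, size=1_000_000)  # ensemble of uniform phases

x_n = A * np.sin(n * w0 + theta)          # X_n across the ensemble
x_nm = A * np.sin((n + m) * w0 + theta)   # X_{n+m}, same ensemble member

print(round(x_n.mean(), 3))               # ~0: constant mean (Condition 1)
print(round(np.mean(x_n * x_nm), 3))      # ~0.5*A**2*cos(m*w0): depends on lag only (Condition 2)
print(round(x_n.var(), 3))                # ~0.5*A**2 = 2.0: finite variance (Condition 3)
```

The sample averages over the ensemble of phases match the analytical results above to within Monte Carlo error.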

Power Spectra

The power spectrum of a WSS process is the DTFT of its autocorrelation function. The power of the process within a frequency band is obtained by integrating the power spectrum between the band-edge frequencies Ω_l T and Ω_u T.

## cos as half sum of complex exponentials

cos(m ω0) = (1/2) e^{j m ω0} + (1/2) e^{-j m ω0}

Recall from Part 1B the Fourier series representation of a function of ω: the coefficient sequences C1_m ∝ e^{j m ω0} and C2_m ∝ e^{-j m ω0} each correspond to an impulse train of period 2π in ω, so the cosine corresponds to the sum of two impulse trains of period 2π. For example,

C2_m = (1/2π) ∫ δ(ω - ω0) e^{-j m ω} dω = (1/2π) e^{-j m ω0}

## Power spectrum of harmonic process

The DTFT of r_XX[m] = 0.5 A² cos(m ω0) is a pair of spectral lines (impulses) at ω = ±ω0, repeated with period 2π (so lines also appear at ω0 - 2π, ω0 + 2π, and so on).

White Noise

A zero-mean WSS process is white if samples at distinct times are uncorrelated, so that the autocorrelation (equal to the autocovariance) is an impulse at lag zero:

rXX[m] = cXX[m] = σX² δ[m]
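A quick numerical sketch (the noise variance below is an arbitrary choice) confirms that the sample autocorrelation of white noise is an impulse at lag zero:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 2.0                                # hypothetical noise variance
x = rng.normal(0.0, np.sqrt(sigma2), size=200_000)

def autocorr(x, m):
    """Time-average estimate of r_xx[m] (biased, fine for large records)."""
    return np.mean(x[:len(x) - m] * x[m:])

print(round(autocorr(x, 0), 2))   # ~sigma2 = 2.0 at lag 0
print(round(autocorr(x, 1), 2))   # ~0 at lag 1
print(round(autocorr(x, 5), 2))   # ~0 at lag 5
```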

## Example: white Gaussian noise (WGN)

fXn(xn) = N(xn | μ = 0, σX²)

[Figure: realisations of WGN; the random vector (Xn1, Xn2, Xn3) collects samples at times n1, n2, n3.]

The statistical characteristics are the same irrespective of shifts along the time axis: an observer looking at the process from sampling time n1 would not be able to tell any difference in the statistical characteristics of the process after moving to a different time n2.

## Example: Filtering white noise

When white noise is passed through a first-order FIR filter, the output autocorrelation is non-zero only at lags m = -1, m = 0, m = 1.

Maxima and minima of the magnitude response: H(z) has a zero at z = -b0/b1 = -1/0.9, on the negative real axis of the z-plane. Hence |H(e^{jω})| has a minimum at ω = π + 2πn and a maximum at ω = 0 + 2πn; the response is periodic in ω with period 2π.

[Figure: z-plane pole-zero plot and periodic magnitude response.]
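As a sketch, the magnitude response can be evaluated on a frequency grid; the coefficients b0 = 1, b1 = 0.9 below are an assumed pair consistent with a zero on the negative real axis:

```python
import numpy as np

b = [1.0, 0.9]                          # assumed FIR coefficients: H(z) = b0 + b1*z^-1
w = np.linspace(-np.pi, np.pi, 1001)
H = b[0] + b[1] * np.exp(-1j * w)       # frequency response H(e^{jw})
mag = np.abs(H)

print(w[np.argmax(mag)])                # maximum of |H| occurs at w = 0
print(round(mag.max(), 2))              # |b0 + b1| = 1.9 at w = 0
print(round(mag.min(), 2))              # |b0 - b1| = 0.1 at w = +/- pi
```

The maximum at ω = 0 and minimum at ω = ±π match the pole-zero argument above.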



Ergodic processes

For an ergodic process, time averages over a single realisation converge to the corresponding ensemble averages, so correlation functions can be estimated from one long data record.

## Comment: Complex-valued processes

Summary

- Looked at discrete-time random processes (most results for continuous-time random processes follow through almost directly)
- Defined correlation functions (auto- and cross-)
- Stationarity (strict sense, and wide sense with its 3 conditions)
- Power spectrum (and calculation of the autocorrelation function using the inverse DTFT)
- White noise (in particular, white Gaussian noise)
- Ergodic processes
- Linear systems and wide-sense stationary processes
- Revision: continuous-time random processes (see handouts; not covered during lectures)

Optimal Filtering

[Figure: the observed signal is the sum of an unobserved signal and noise; the observation is passed through a filter H(z) to produce an estimate of the unobserved signal.]

Error signal

[Figure: the error signal is the difference between the desired (unobserved) signal and the output of the filter H(z).]

## Mean-squared error for the optimal filter

Thus, the minimum error is obtained by substituting the optimal filter back into the mean-squared-error expression.

## Important Special Case: Uncorrelated Signal and Noise Processes

When the signal and noise are uncorrelated, the Wiener filter reduces to

H(e^{jω}) = P_dd(e^{jω}) / (P_dd(e^{jω}) + P_vv(e^{jω}))

where the power spectra P_dd and P_vv are real and positive. Dividing numerator and denominator by P_dd:

H(e^{jω}) = 1 / (1 + 1/SNR(ω))

where SNR(ω) = P_dd(e^{jω}) / P_vv(e^{jω}) is the signal-to-noise ratio at frequency ω: the gain approaches 1 where the SNR is large and 0 where it is small.
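A numerical sketch of this gain formula (the PSD shapes below are hypothetical, chosen only to give a high SNR at low frequencies):

```python
import numpy as np

w = np.linspace(0, np.pi, 5)      # a few frequency points
P_dd = 4.0 / (1.0 + w**2)         # hypothetical signal PSD, strongest at low frequency
P_vv = np.ones_like(w)            # hypothetical flat (white) noise PSD
SNR = P_dd / P_vv
H = 1.0 / (1.0 + 1.0 / SNR)       # Wiener gain at each frequency
print(np.round(H, 3))             # gain falls from 0.8 towards 0 as SNR drops
```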

Example: AR Process

A 1st-order all-pole process, also known as an autoregressive (AR) process, is generated by passing zero-mean white noise w_n (variance σ²) through a first-order all-pole IIR filter:

H(z) = 1 / (1 - α z^{-1})

so that x_n = α x_{n-1} + w_n. This is our AR process.
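The generation step can be sketched directly from the difference equation; α = 0.5 and the record length below are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, N = 0.5, 200_000          # assumed pole position and record length
w = rng.normal(0.0, 1.0, N)      # zero-mean, unit-variance white noise

x = np.zeros(N)                  # generate x_n = alpha*x_{n-1} + w_n
for i in range(1, N):
    x[i] = alpha * x[i - 1] + w[i]

# Theory: r_xx[k] = alpha^|k| / (1 - alpha^2) for unit-variance input noise
r0 = np.mean(x * x)
r1 = np.mean(x[:-1] * x[1:])
print(round(r0, 2))              # ~1/(1 - 0.25) = 1.33
print(round(r1 / r0, 2))         # ~alpha = 0.5
```

The time-averaged (ergodic) estimates agree with the theoretical AR(1) autocorrelation.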

Example: Deconvolution

Consider the following process:

x_n = d_n + 0.8 d_{n-1} + v_n

where v_n is zero-mean, unit-variance white noise uncorrelated with d_n. Assume d_n is a WSS AR(1) process with r_dd[k] = (0.5)^|k|. Determine the optimal Wiener filter to estimate d_n from x_n.

Form the correlation functions r_xd[k] and r_xx[k], and take the DTFT of r_xd and r_xx to obtain the Wiener filter.
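A numerical sketch of the frequency-domain construction for this example (non-causal Wiener filter evaluated on a dense ω grid; the spectra follow from the model above, with the DTFT of 0.5^|k| being 0.75 / |1 - 0.5 e^{-jω}|²):

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 4097)                    # frequency grid
G = 1.0 + 0.8 * np.exp(-1j * w)                         # channel d_n + 0.8*d_{n-1}
P_dd = 0.75 / np.abs(1.0 - 0.5 * np.exp(-1j * w))**2    # DTFT of r_dd[k] = 0.5^|k|
P_xx = np.abs(G)**2 * P_dd + 1.0                        # observation PSD (+ unit white noise)
P_dx = np.conj(G) * P_dd                                # cross power spectrum of d and x
H = P_dx / P_xx                                         # non-causal Wiener filter

print(round(np.abs(H[len(w) // 2]), 3))                 # gain at w = 0 -> 0.504
```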

## The FIR Wiener filter

The optimal FIR coefficients are found by solving the Wiener-Hopf (normal) equations, in which the autocorrelation matrix of the observations is positive definite (and hence invertible).

Example: AR process

Consider a 1st-order AR process which has autocorrelation function

rdd[k] = α^|k|, with -1 < α < 1

The process is observed in zero-mean white noise v_n with variance σv², which is uncorrelated with d_n:

x_n = d_n + v_n

Design the 1st-order FIR Wiener filter for estimation of d_n. We need to find r_xx from r_dd, which gives

r_xx[k] = r_dd[k] + σv² δ[k] = α^|k| + σv² δ[k]

and r_xd[k] = r_dd[k]. The Wiener-Hopf equations are then

[ 1 + σv²   α       ] [ h0 ]   [ 1 ]
[ α         1 + σv² ] [ h1 ] = [ α ]

and the coefficients follow by inverting the 2×2 autocorrelation matrix.
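For concreteness, the 2×2 system can be solved numerically; α = 0.6 and σv² = 0.5 below are assumed values for illustration:

```python
import numpy as np

alpha, sv2 = 0.6, 0.5             # assumed AR coefficient and noise variance
R = np.array([[1 + sv2, alpha],
              [alpha, 1 + sv2]])  # r_xx[0] on the diagonal, r_xx[1] off-diagonal
r_xd = np.array([1.0, alpha])     # r_xd[k] = r_dd[k] = alpha^|k|
h = np.linalg.solve(R, r_xd)      # Wiener-Hopf equations: R h = r_xd
print(np.round(h, 3))             # optimal FIR coefficients h0, h1
```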

## Example: Noise cancellation

A signal source d_n (music/speech) is corrupted by an additive noise source v_n, so the observation is

x_n = d_n + v_n

Design a Wiener filter to estimate d_n.

First, assume d_n and v_n are stationary and ergodic, so that r_xx can be estimated by time-averaging a long (large N) record of x_n. Also assume that a long segment of v_n alone is available, recorded during a silent section of the music/speech, from which r_vv can be estimated in the same way.

For the FIR Wiener filter, since d_n and v_n are uncorrelated, the required cross-correlation is r_xd[k] = r_dd[k] = r_xx[k] - r_vv[k].

In the above equations, r_xx and r_vv are estimated as explained before, and the resulting system can be solved using, for example, Matlab.

In fact, audio signals are non-stationary, so the filter must be applied to short quasi-stationary batches of data, one by one. A frequency-domain version using the FFT can also be implemented successfully.
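An end-to-end sketch of this procedure; the sinusoidal "music" signal, filter length and noise statistics below are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 100_000, 8                      # record length and FIR filter length (assumed)
n = np.arange(N)
d = np.sin(0.05 * np.pi * n)           # hypothetical narrowband "music" signal
x = d + rng.normal(0.0, 1.0, N)        # observed signal: x_n = d_n + v_n
v_silent = rng.normal(0.0, 1.0, N)     # noise-only record from a silent section

def acf(s, p):
    """Time-average (ergodic) estimate of the autocorrelation for lags 0..p-1."""
    return np.array([np.mean(s[:len(s) - k] * s[k:]) for k in range(p)])

r_xx = acf(x, p)
r_vv = acf(v_silent, p)
r_xd = r_xx - r_vv                     # d_n and v_n uncorrelated
idx = np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])
h = np.linalg.solve(r_xx[idx], r_xd)   # FIR Wiener-Hopf equations on the Toeplitz matrix
d_hat = np.convolve(x, h)[:N]          # filtered estimate of d_n

print(round(np.mean((x - d) ** 2), 2))      # noise power before filtering (~1.0)
print(round(np.mean((d_hat - d) ** 2), 2))  # error power after filtering (smaller)
```

The filter passes the narrow signal band and attenuates the rest of the noise spectrum, reducing the error power well below the input noise power.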


## Model-based Signal Processing

ARMA modelling

An ARMA model represents the observed signal as the output of a pole-zero (IIR) filter driven by white noise.

Autoregressive models

AR model

An AR(p) model uses an all-pole filter 1/A(z), so that x_n = a_1 x_{n-1} + ... + a_p x_{n-p} + w_n.

## The inverse z-transform of 1/A(z)

Yule-Walker Equations

Multiplying the AR equation by x_{n-r} and taking expectations gives

r_XX[r] = Σ_{k=1}^{P} a_k r_XX[r - k] + σw² δ[r]

(the cross-correlation between x_n and the white noise satisfies r_xw[n] = σw² δ[n], so the noise contributes only for r = 0).

Writing this out for r = 0, r = 1, ..., r = P gives P + 1 simultaneous equations: the P equations for r = 1, ..., P determine the coefficients a_1, ..., a_P from the autocorrelation values, and the r = 0 equation then gives the noise variance σw².

## Example: AR coefficients estimation

Take p = 2. We have measured

r_XX[0] = 6.14, r_XX[1] = 3.08, r_XX[2] = -2.55

In practice these would be measured using the ergodicity of the process, time-averaging over a large number of samples N.

Yule-Walker equations:

[ 6.14  3.08 ] [ a1 ]   [  3.08 ]
[ 3.08  6.14 ] [ a2 ] = [ -2.55 ]
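The 2×2 system above can be solved directly, and the r = 0 Yule-Walker equation then gives the excitation variance:

```python
import numpy as np

r = [6.14, 3.08, -2.55]           # measured r_XX[0], r_XX[1], r_XX[2]

R = np.array([[r[0], r[1]],
              [r[1], r[0]]])      # Yule-Walker matrix for p = 2
a = np.linalg.solve(R, np.array([r[1], r[2]]))
print(np.round(a, 3))             # AR coefficients a1, a2

# The r = 0 equation then gives the excitation noise variance
sigma_w2 = r[0] - a[0] * r[1] - a[1] * r[2]
print(round(sigma_w2, 3))
```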

Thank you!