
Department of Electronic and Computer Engineering M513 Advanced DSP Techniques

Tutorial Sheet 4 Optimal and Adaptive Filters

Question 1. Consider a least squares method to design an IIR filter to approximate some ideal impulse response hd(n). Prove that in order to design this filter, the following set of equations needs to be solved:

h_d(n) + \sum_{k=1}^{p} a_k h_d(n-k) = \begin{cases} b_n & \text{for } n = 0, 1, \ldots, q \\ 0 & \text{for } n = q+1, \ldots, q+p \end{cases}
where ak and bk represent IIR filter coefficients to be determined in order to design the filter.

Solution:
A general difference equation for the IIR filter can be expressed as:

y(n) = \sum_{k=0}^{q} b_k x(n-k) - \sum_{k=1}^{p} a_k y(n-k)

The transfer function of this filter can be obtained by applying the z-transform to the above difference equation:

Y(z) = \sum_{k=0}^{q} b_k z^{-k} X(z) - \sum_{k=1}^{p} a_k z^{-k} Y(z)

Y(z) \left( 1 + \sum_{k=1}^{p} a_k z^{-k} \right) = X(z) \sum_{k=0}^{q} b_k z^{-k}

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{q} b_k z^{-k}}{1 + \sum_{k=1}^{p} a_k z^{-k}} = \frac{B(z)}{A(z)}

where A(z) and B(z) are two polynomials involving all ak and bk filter coefficients. Note that if h(n) represents the impulse response of the IIR filter we also have:
H(z) = \sum_{n=0}^{\infty} h(n) z^{-n} = \frac{\sum_{k=0}^{q} b_k z^{-k}}{1 + \sum_{k=1}^{p} a_k z^{-k}}

branislav.vuksanovic@port.ac.uk December 2010

Department of Electronic and Computer Engineering M513 Advanced DSP Techniques

To get the required set of equations, we can write H(z) = B(z)/A(z) as:

A(z) H(z) = B(z)

Recognising that the multiplication on the left side of the equation corresponds to convolution in the time domain, we can revert back to the time domain as:

a_n * h(n) = h(n) + \sum_{k=1}^{p} a_k h(n-k) = b_n

We have a set of p+q+1 unknowns to be determined, so we need p+q+1 linear equations to achieve this. This can be done by setting h(n) = hd(n) for n = 0, 1, ..., p+q. This results in the following set of equations:

h_d(n) + \sum_{k=1}^{p} a_k h_d(n-k) = b_n

Noting that bn is a finite-length sequence, i.e. that bn = 0 for n < 0 and n > q, the above can also be written as:

h_d(n) + \sum_{k=1}^{p} a_k h_d(n-k) = \begin{cases} b_n & \text{for } n = 0, 1, \ldots, q \\ 0 & \text{for } n = q+1, \ldots, q+p \end{cases}
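This relation can be checked numerically. Below is a minimal sketch in Python (the coefficient values for p = 2, q = 1 are illustrative assumptions, not taken from the sheet): the impulse response is generated from the difference equation, and the left-hand side of the design equations is then evaluated for n = 0, ..., q+p.

```python
# Verify: h(n) + sum_k a_k h(n-k) = b_n for n <= q, and 0 for n > q.
# Illustrative coefficients (p = 2, q = 1), assumed for this example.
a = [0.4, -0.2]          # a_1, a_2  (denominator A(z) = 1 + 0.4 z^-1 - 0.2 z^-2)
b = [1.0, 0.5]           # b_0, b_1  (numerator  B(z) = 1 + 0.5 z^-1)
p, q = len(a), len(b) - 1

# Impulse response from the difference equation h(n) = b_n - sum_k a_k h(n-k)
N = 20
h = []
for n in range(N):
    bn = b[n] if n <= q else 0.0
    h.append(bn - sum(a[k - 1] * h[n - k] for k in range(1, p + 1) if n - k >= 0))

# Left-hand side of the design equations for n = 0, ..., q+p
for n in range(q + p + 1):
    lhs = h[n] + sum(a[k - 1] * h[n - k] for k in range(1, p + 1) if n - k >= 0)
    expected = b[n] if n <= q else 0.0
    assert abs(lhs - expected) < 1e-12
```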


Question 2:
The Prony method solves the problem of IIR filter design described in the previous question in two steps. In the first step, the coefficients ak are found by minimising the squared error defined as:

\epsilon = \sum_{n=q+1}^{\infty} e^2(n)

where e(n) represents the design error on the impulse response segment where n \ge q+1, i.e. since:

h_d(n) + \sum_{k=1}^{p} a_k h_d(n-k) = 0 \quad \text{for } n \ge q+1

we have:

e(n) = h_d(n) + \sum_{k=1}^{p} a_k h_d(n-k)

Derive the set of equations used to obtain coefficients ak using this approach.

Solution:
To find the coefficients ak according to this approach, we differentiate \epsilon with respect to each ak and set the derivatives equal to zero as follows:

\frac{\partial \epsilon}{\partial a_k} = \sum_{n=q+1}^{\infty} 2 e(n) \frac{\partial e(n)}{\partial a_k} = \sum_{n=q+1}^{\infty} 2 e(n) h_d(n-k) = 0

Dividing by two and substituting for e(n) we have:

\sum_{n=q+1}^{\infty} e(n) h_d(n-k) = \sum_{n=q+1}^{\infty} \left[ h_d(n) + \sum_{l=1}^{p} a_l h_d(n-l) \right] h_d(n-k) = 0

i.e.


\sum_{n=q+1}^{\infty} \sum_{l=1}^{p} a_l h_d(n-l) h_d(n-k) = -\sum_{n=q+1}^{\infty} h_d(n) h_d(n-k)

\sum_{l=1}^{p} a_l \sum_{n=q+1}^{\infty} h_d(n-l) h_d(n-k) = -\sum_{n=q+1}^{\infty} h_d(n) h_d(n-k)

\sum_{l=1}^{p} a_l r_h(l-k) = -r_h(k)

where r_h(l-k) = \sum_{n=q+1}^{\infty} h_d(n-l) h_d(n-k) and r_h(k) = \sum_{n=q+1}^{\infty} h_d(n) h_d(n-k) are autocorrelations of hd(n).

The set of equations given by

\sum_{l=1}^{p} a_l r_h(l-k) = -r_h(k) \quad \text{for } k = 1, 2, \ldots, p

is also known as the Prony normal equations. Using the symmetry property of the autocorrelation sequence, the matrix form of those equations is:

\begin{bmatrix} r_h(0) & r_h(1) & \cdots & r_h(p-1) \\ r_h(1) & r_h(0) & \cdots & r_h(p-2) \\ \vdots & \vdots & & \vdots \\ r_h(p-1) & r_h(p-2) & \cdots & r_h(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = -\begin{bmatrix} r_h(1) \\ r_h(2) \\ \vdots \\ r_h(p) \end{bmatrix}

or in the more compact form:

R_h a = -r_h
After determining the coefficients ak from the above system of equations, the coefficients bk can then be determined using:

h_d(n) + \sum_{k=1}^{p} a_k h_d(n-k) = b_n \quad \text{for } n = 0, 1, \ldots, q

(Note that the only unknowns in the above q+1 equations are the q+1 coefficients bk, so they can be easily determined.)
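When hd(n) is exactly the impulse response of a filter of the assumed order, the Prony normal equations recover the coefficients exactly. A minimal sketch, assuming the first-order example hd(n) = 0.5^n (the impulse response of 1/(1 - 0.5 z^{-1}), chosen here for illustration):

```python
# Prony's method for p = 1, q = 0 on hd(n) = 0.5**n (an assumed example).
# Normal equation: a1 * sum_{n>=1} hd(n-1)**2 = -sum_{n>=1} hd(n)*hd(n-1)
N = 50                                   # truncation of the infinite sums
hd = [0.5 ** n for n in range(N)]

rh_11 = sum(hd[n - 1] ** 2 for n in range(1, N))        # sum hd(n-1)^2
rh_1 = sum(hd[n] * hd[n - 1] for n in range(1, N))      # sum hd(n)*hd(n-1)
a1 = -rh_1 / rh_11                       # solve the 1x1 Prony normal equation

# Second step: b0 from hd(0) + a1*hd(-1) = b0, with hd(-1) = 0
b0 = hd[0]
```

For this signal a1 comes out as -0.5 and b0 as 1, i.e. the designed filter is exactly 1/(1 - 0.5 z^{-1}).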


Question 3.
We would like to build a predictor of digital waveforms. Such a system forms an estimate of a later sample (say, n0 samples later) by observing p consecutive data samples. Thus we would set:

\hat{x}(n+n_0) = \sum_{k=1}^{p} a_p(k) x(n-k)

The predictor coefficients ap(k) are to be chosen to minimise the cost function defined as

\epsilon_p = \sum_{n=0}^{\infty} \left[ x(n+n_0) - \hat{x}(n+n_0) \right]^2

a) Determine the equations that define the optimum set of coefficients ap(k).
b) If n0 = 0, how is your formulation of this problem different from Prony's method?

Solution:
a) We want to find the predictor coefficients ap(k) that minimise the linear prediction error

\epsilon_p = \sum_{n=0}^{\infty} e^2(n) \quad \text{where } e(n) = x(n+n_0) - \hat{x}(n+n_0)

To find the coefficients, differentiate \epsilon_p with respect to ap(k), and set the derivatives equal to zero as follows:

\frac{\partial \epsilon_p}{\partial a_p(k)} = -\sum_{n=0}^{\infty} 2 e(n) \frac{\partial \hat{x}(n+n_0)}{\partial a_p(k)} = 0

Since

\hat{x}(n+n_0) = \sum_{k=1}^{p} a_p(k) x(n-k)

then

\frac{\partial \hat{x}(n+n_0)}{\partial a_p(k)} = x(n-k)

so we have

\frac{\partial \epsilon_p}{\partial a_p(k)} = -2 \sum_{n=0}^{\infty} e(n) x(n-k) = 0

Dividing by two and substituting for e(n) we have

\sum_{n=0}^{\infty} \left[ x(n+n_0) - \sum_{l=1}^{p} a_p(l) x(n-l) \right] x(n-k) = 0, \quad k = 1, 2, \ldots, p

Therefore the normal equations are:


\sum_{l=1}^{p} a_p(l) r_{xx}(l-k) = r_{xx}(k+n_0), \quad k = 1, 2, \ldots, p

where

r_{xx}(l-k) = \sum_{n=0}^{\infty} x(n-l) x(n-k) \quad \text{and} \quad r_{xx}(k+n_0) = \sum_{n=0}^{\infty} x(n+n_0) x(n-k)

b) With n0 = 0 these equations are the same as the all-pole normal equations of Prony's method (the difference in notation, h(n) instead of x(n), makes no difference to the algorithm), except that the right-hand side does not have a minus sign. Therefore, the solution differs in sign.
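As a numerical illustration (the signal x(n) = 0.8^n and the choices p = 1, n0 = 1 are assumptions made for this example), the normal equations give the exact one-step-ahead predictor for a geometric signal:

```python
# One-step-ahead predictor (n0 = 1, p = 1) on x(n) = 0.8**n -- assumed data.
# Normal equation: a(1) * sum_n x(n-1)**2 = sum_n x(n+1)*x(n-1)
N = 60
x = [0.8 ** n for n in range(N)]

num = sum(x[n + 1] * x[n - 1] for n in range(1, N - 1))
den = sum(x[n - 1] * x[n - 1] for n in range(1, N - 1))
a1 = num / den                          # optimal predictor coefficient

# The prediction x_hat(n+1) = a1 * x(n-1) is exact for this geometric signal
err = max(abs(a1 * x[n - 1] - x[n + 1]) for n in range(1, N - 1))
```

Here a1 = 0.8^2 = 0.64, so the predictor bridges the two-sample gap between x(n-1) and x(n+1) exactly.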


Questions 4.

Derive the Wiener-Hopf equations for the Wiener FIR filter working as a noise canceller according to the setup given in the figure below.
[Figure: noise-cancellation setup. A signal source produces x(n) = d(n) + v1(n) at the primary sensor; a noise source produces v2(n) at the secondary sensor, which feeds the adaptive filter W(z); the filter output is subtracted from x(n) to form e(n).]

Solution:

In this setup, the task of the Wiener filter working as a noise canceller is to estimate the signal d(n) from a noise-corrupted observation x(n) = d(n) + v1(n) recorded by a primary sensor. The additional information about the corrupting noise is obtained from a secondary sensor that is placed elsewhere in the noise field, away from the source of the signal d(n). Although the noise v2(n) measured by the secondary sensor is correlated with the noise in the primary sensor signal, the two processes will not be equal. The Wiener filter is designed to estimate the noise v1(n) from the signal v2(n) received by the secondary sensor. The estimate \hat{v}_1(n) is then subtracted from the primary signal x(n) to form the estimate of d(n), which is given by:

\hat{d}(n) = x(n) - \hat{v}_1(n) = e(n)

The Wiener filter is designed by minimising the sum of squares of the error e(n), defined as:

\epsilon = \sum_{n=0}^{\infty} e^2(n) = \sum_{n=0}^{\infty} \left[ x(n) - \sum_{k=0}^{N-1} w(k) v_2(n-k) \right]^2 = \sum_{n=0}^{\infty} \left[ d(n) + v_1(n) - \sum_{k=0}^{N-1} w(k) v_2(n-k) \right]^2

where N represents the order of the FIR Wiener filter with coefficients w(0), w(1), ..., w(N-1). To minimise \epsilon, it is necessary and sufficient to obtain the derivatives of \epsilon with respect to the filter coefficients w(k) and equate them to zero:

\frac{\partial \epsilon}{\partial w(k)} = \sum_{n=0}^{\infty} 2 e(n) \frac{\partial e(n)}{\partial w(k)} = -2 \sum_{n=0}^{\infty} e(n) v_2(n-k) = 0


Dividing the above equation by 2 and substituting the expression for e(n) we have:
\sum_{n=0}^{\infty} e(n) v_2(n-k) = \sum_{n=0}^{\infty} \left[ d(n) + v_1(n) - \sum_{l=0}^{N-1} w(l) v_2(n-l) \right] v_2(n-k)

= \sum_{n=0}^{\infty} \left( d(n) + v_1(n) \right) v_2(n-k) - \sum_{l=0}^{N-1} w(l) \sum_{n=0}^{\infty} v_2(n-l) v_2(n-k)

= r_{dv_2}(k) + r_{v_1 v_2}(k) - \sum_{l=0}^{N-1} w(l) r_{v_2 v_2}(l-k) = 0

where

r_{dv_2}(k) = \sum_{n=0}^{\infty} d(n) v_2(n-k) represents the cross-correlation between the information-carrying signal d(n) and the noise v2(n); in the general case those two sequences are uncorrelated, so r_{dv_2}(k) = 0;

r_{v_1 v_2}(k) = \sum_{n=0}^{\infty} v_1(n) v_2(n-k) represents the cross-correlation between the two versions of the noise signal shown in the figure;

r_{v_2 v_2}(l-k) = \sum_{n=0}^{\infty} v_2(n-l) v_2(n-k) represents the autocorrelation of the v2(n) version of the noise signal shown in the figure.

Assuming no correlation between the noise and information-carrying signals, i.e. r_{dv_2}(k) = 0, the set of Wiener-Hopf equations becomes:
r_{dv_2}(k) + r_{v_1 v_2}(k) - \sum_{l=0}^{N-1} w(l) r_{v_2 v_2}(l-k) = r_{v_1 v_2}(k) - \sum_{l=0}^{N-1} w(l) r_{v_2 v_2}(l-k) = 0

i.e.

r_{v_1 v_2}(k) = \sum_{l=0}^{N-1} w(l) r_{v_2 v_2}(l-k)

for each Wiener filter coefficient, i.e. for k = 0, 1, ..., N-1.

In matrix form we have:

r_{v_1 v_2} = R_{v_2} w

From this equation the set of optimal filter coefficients is easily obtained as:

w_{opt} = R_{v_2}^{-1} r_{v_1 v_2}


Notice that in the noise cancellation example analysed in this question, the r_{v_1 v_2}(k) sequence is not usually obtainable, as the v1(n) signal is not directly accessible, i.e. it is contained in the signal x(n). We can, however, calculate and use the cross-correlation sequence r_{x v_2}(k) instead of r_{v_1 v_2}(k), since for the uncorrelated signals d(n) and v2(n) we have:

r_{x v_2}(k) = E[x(n) v_2(n-k)] = E[(d(n) + v_1(n)) v_2(n-k)] = E[d(n) v_2(n-k)] + E[v_1(n) v_2(n-k)] = 0 + E[v_1(n) v_2(n-k)] = r_{v_1 v_2}(k)
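The whole procedure can be sketched with synthetic signals. Everything below is an illustrative assumption: a sinusoidal d(n), white Gaussian v2(n), and a hypothetical two-tap noise path v1(n) = 0.8 v2(n) + 0.2 v2(n-1); sample averages replace the expectations, and r_xv2(k) is used in place of the inaccessible r_v1v2(k), as discussed above.

```python
import math
import random

# Noise-canceller sketch (N = 2). Assumed setup: d(n) sinusoid, v2(n) white
# Gaussian noise, and an assumed noise path v1(n) = 0.8*v2(n) + 0.2*v2(n-1).
rng = random.Random(0)
M = 50_000
v2 = [rng.gauss(0.0, 1.0) for _ in range(M)]
d = [math.sin(0.1 * n) for n in range(M)]
v1 = [0.8 * v2[n] + (0.2 * v2[n - 1] if n > 0 else 0.0) for n in range(M)]
x = [d[n] + v1[n] for n in range(M)]

def corr(u, v, k):
    """Sample estimate of (1/M) * sum_n u(n) v(n-k)."""
    return sum(u[n] * v[n - k] for n in range(k, M)) / M

# r_xv2(k) is used in place of r_v1v2(k); R_v2 is 2x2 symmetric Toeplitz.
r = [corr(x, v2, 0), corr(x, v2, 1)]
r00, r01 = corr(v2, v2, 0), corr(v2, v2, 1)
R = [[r00, r01], [r01, r00]]

# Solve the 2x2 Wiener-Hopf system R_v2 w = r_v1v2 by Cramer's rule.
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
w = [(R[1][1] * r[0] - R[0][1] * r[1]) / det,
     (R[0][0] * r[1] - R[0][1] * r[0]) / det]
# w should be close to the assumed noise path taps [0.8, 0.2]
```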

Question 5.

Show how the expression for \epsilon obtained in the previous question,

\epsilon = \sum_{n=0}^{\infty} \left[ d(n) + v_1(n) - \sum_{k=0}^{N-1} w(k) v_2(n-k) \right]^2

can be modified to get the cost function:

J = \epsilon = \text{const.} - 2 \sum_{k=0}^{N-1} w(k) r_{v_1 v_2}(k) + \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} w(l) w(k) r_{v_2 v_2}(l-k)

and prove that the minimization of this cost function results in the same set of Wiener-Hopf equations.

Solution:

Another, slightly different approach to obtaining the derivatives \partial \epsilon / \partial w(k) and the Wiener-Hopf equations is to first square the term in the brackets of the above equation and identify the autocorrelation and cross-correlation terms. Assuming no correlation between d(n) and the v1(n) or v2(n) signals, we have:



\epsilon = \sum_{n=0}^{\infty} \left[ d(n) + v_1(n) - \sum_{k=0}^{N-1} w(k) v_2(n-k) \right]^2

= \sum_{n=0}^{\infty} \left[ d^2(n) + 2 d(n) v_1(n) + v_1^2(n) \right] - 2 \sum_{n=0}^{\infty} \left( d(n) + v_1(n) \right) \sum_{k=0}^{N-1} w(k) v_2(n-k) + \sum_{n=0}^{\infty} \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} w(l) v_2(n-l) w(k) v_2(n-k)

= r_{dd} + r_{v_1 v_1} - 2 \sum_{k=0}^{N-1} w(k) r_{v_1 v_2}(k) + \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} w(l) w(k) r_{v_2 v_2}(l-k)

Taking the derivative of the above expression with respect to each w(k) and equating it to zero, we have:

\frac{\partial \epsilon}{\partial w(k)} = -2 r_{v_1 v_2}(k) + 2 \sum_{l=0}^{N-1} w(l) r_{v_2 v_2}(l-k) = 0

i.e.

r_{v_1 v_2}(k) = \sum_{l=0}^{N-1} w(l) r_{v_2 v_2}(l-k) \quad \text{for } k = 0, 1, \ldots, N-1

so in matrix form we have

r_{v_1 v_2} = R_{v_2} w

which is the same equation as obtained in the previous question.


Question 6.

Consider the standard expression for the mean square error of the Wiener filter, given in vector form:

J = E[d^2(n)] - 2 r_{dx}^T w + w^T R_x w

a) Starting with the Wiener-Hopf equations, obtain the expression for the minimum mean square error (MMSE), i.e. Jmin.

b) Using the obtained expression, prove that the MSE can be rewritten in the following way:

J = J_{min} + v^T R_x v

where v represents a so-called misalignment vector, defined as v = w - w_{opt}.

Solution:

a) Jmin is the value of the mean square error for w = w_{opt}, so it can be obtained by inserting the Wiener-Hopf solution for w_{opt} into the starting MSE equation:

J_{min} = E[d^2(n)] - 2 r_{dx}^T w_{opt} + w_{opt}^T R_x w_{opt}

From the Wiener-Hopf equations, for the optimal set of filter weights we have:

r_{dx} = R_x w_{opt}

Combining the above two equations, and noting that the scalar w_{opt}^T r_{dx} equals r_{dx}^T w_{opt}, the expression for the MMSE is obtained as:

J_{min} = E[d^2(n)] - 2 r_{dx}^T w_{opt} + w_{opt}^T r_{dx} = E[d^2(n)] - 2 r_{dx}^T w_{opt} + r_{dx}^T w_{opt} = E[d^2(n)] - r_{dx}^T w_{opt}

b) Now, since J = E[d^2(n)] - 2 r_{dx}^T w + w^T R_x w, we have:


J = E[d^2(n)] - 2 r_{dx}^T w + w^T R_x w

= J_{min} + r_{dx}^T w_{opt} - 2 r_{dx}^T w + w^T R_x w

= J_{min} + (R_x w_{opt})^T w_{opt} - 2 (R_x w_{opt})^T w + w^T R_x w

= J_{min} + w_{opt}^T R_x^T w_{opt} - 2 w_{opt}^T R_x^T w + w^T R_x w

Rx is a symmetric matrix, so R_x^T = R_x. Using this, and the fact that the scalar w_{opt}^T R_x w equals its transpose w^T R_x w_{opt}, we finally get to the required form of the expression for J:

J = J_{min} + w_{opt}^T R_x w_{opt} - w_{opt}^T R_x w - w^T R_x w_{opt} + w^T R_x w

= J_{min} + (w - w_{opt})^T R_x (w - w_{opt})

= J_{min} + v^T R_x v
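The decomposition can be checked numerically on a small example (the 2x2 values of R_x and r_dx below are illustrative assumptions):

```python
# Check J(w) = Jmin + v^T R_x v for an assumed 2x2 example with
# E[d^2] = 1, r_dx = [1, 0.6], R_x = [[2, 0.6], [0.6, 2]].
Ed2 = 1.0
rdx = [1.0, 0.6]
Rx = [[2.0, 0.6], [0.6, 2.0]]

def quad(A, u, v):                      # u^T A v
    return sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

def J(w):                               # J = E[d^2] - 2 r_dx^T w + w^T R_x w
    return Ed2 - 2 * sum(rdx[i] * w[i] for i in range(2)) + quad(Rx, w, w)

# w_opt from R_x w_opt = r_dx (2x2 solve by Cramer's rule)
det = Rx[0][0] * Rx[1][1] - Rx[0][1] * Rx[1][0]
wopt = [(Rx[1][1] * rdx[0] - Rx[0][1] * rdx[1]) / det,
        (Rx[0][0] * rdx[1] - Rx[0][1] * rdx[0]) / det]
Jmin = Ed2 - sum(rdx[i] * wopt[i] for i in range(2))

w = [0.3, -0.2]                         # an arbitrary weight vector
v = [w[i] - wopt[i] for i in range(2)]
lhs, rhs = J(w), Jmin + quad(Rx, v, v)  # the two sides of the decomposition
```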


Question 7.
We want to use a two-coefficient Wiener filter to filter the noisy data x(n) = d(n) + v(n). The noise v(n) has zero mean value, unit variance, and is uncorrelated with the desired signal d(n). Furthermore, assume r_d(m) = 0.6^{|m|} and r_v(m) = \delta(m). Find the following:

a) The cross-correlation vector r_{dx}.
b) The Wiener equation for this system and the two coefficients of the Wiener filter, i.e. w0 and w1.
c) The minimum mean square error value, i.e. Jmin.

Solution:

a) Since d(n) and v(n) are independent, WSS processes:

r_{dx}(m) = E[d(n) x(n-m)] = E[d(n) d(n-m)] + E[d(n) v(n-m)] = r_d(m)

so

r_{dx} = \begin{bmatrix} r_d(0) \\ r_d(1) \end{bmatrix} = \begin{bmatrix} 1 \\ 0.6 \end{bmatrix}


b) Also,

r_x(m) = E[x(n) x(n-m)] = E[(d(n) + v(n))(d(n-m) + v(n-m))] = E[d(n) d(n-m)] + E[d(n) v(n-m)] + E[v(n) d(n-m)] + E[v(n) v(n-m)] = r_d(m) + r_v(m)

Combining the above equation, R_x = R_d + R_v, with the Wiener filter solution r_{dx} = R_x w_{opt}, we have:

r_{dx} = (R_d + R_v) w_{opt}

For a two-coefficient Wiener filter, the autocorrelation matrix for the signal d is a 2x2 Toeplitz matrix, i.e.

R_d = \begin{bmatrix} r_d(0) & r_d(1) \\ r_d(1) & r_d(0) \end{bmatrix} = \begin{bmatrix} 1 & 0.6 \\ 0.6 & 1 \end{bmatrix}

The autocorrelation matrix for the unit-variance noise v is the 2x2 identity matrix, i.e.


R_v = \begin{bmatrix} \sigma_v^2 & 0 \\ 0 & \sigma_v^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

The Wiener equation r_{dx} = (R_d + R_v) w_{opt} in expanded form is therefore:

\begin{bmatrix} 2 & 0.6 \\ 0.6 & 2 \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0.6 \end{bmatrix}

The coefficients of the Wiener filter are the solution of the above matrix equation, i.e.

w_{opt} = (R_d + R_v)^{-1} r_{dx}

\begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = \begin{bmatrix} 0.549 & -0.165 \\ -0.165 & 0.549 \end{bmatrix} \begin{bmatrix} 1 \\ 0.6 \end{bmatrix} = \begin{bmatrix} 0.451 \\ 0.165 \end{bmatrix}


c) The minimum mean square error Jmin can be obtained from J_{min} = E[d^2(n)] - r_{dx}^T w_{opt} (see the previous question). Since the autocorrelation function for the signal d gives E[d^2(n)] = r_d(0) = 1:

J_{min} = E[d^2(n)] - r_{dx}^T w_{opt} = r_d(0) - r_{dx}^T w_{opt} = 1 - \begin{bmatrix} 1 & 0.6 \end{bmatrix} \begin{bmatrix} 0.451 \\ 0.165 \end{bmatrix} = 1 - 0.549 = 0.451
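The arithmetic in this question can be reproduced with a few lines:

```python
# Two-coefficient Wiener filter of this question: solve (R_d + R_v) w = r_dx.
Rd = [[1.0, 0.6], [0.6, 1.0]]
Rv = [[1.0, 0.0], [0.0, 1.0]]
rdx = [1.0, 0.6]
R = [[Rd[i][j] + Rv[i][j] for j in range(2)] for i in range(2)]

# 2x2 solve by Cramer's rule
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
w = [(R[1][1] * rdx[0] - R[0][1] * rdx[1]) / det,
     (R[0][0] * rdx[1] - R[0][1] * rdx[0]) / det]

# Jmin = E[d^2] - rdx^T w_opt, with E[d^2] = r_d(0) = 1
Jmin = 1.0 - (rdx[0] * w[0] + rdx[1] * w[1])
# w is approximately [0.451, 0.165] and Jmin approximately 0.451
```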


Question 8.

With reference to the figure given below, state, discuss and derive three methods used to find the MMSE (minimum mean square error) by iterative adjustment of FIR filter coefficients: the Newton, Steepest Descent and LMS methods.

[Figure: adaptive FIR filter. The input x(n) drives the FIR filter w(n), producing y(n); the error e(n) = d(n) - y(n) drives the LMS update of the filter coefficients.]

Solution:

Newton's method uses the gradient of the MSE surface to find the MMSE in one iteration. Starting from the equation for the gradient of the MSE surface:

\nabla(\xi) = -2 r_{dx} + 2 R_x w

the optimum weight vector can be obtained at the point where \nabla(\xi) = 0, i.e.

-2 r_{dx} + 2 R_x w_{opt} = 0

i.e.

w_{opt} = R_x^{-1} r_{dx}

We can multiply both sides of the gradient equation by -\frac{1}{2} R_x^{-1} from the left-hand side to obtain:

-\frac{1}{2} R_x^{-1} \nabla(\xi) = R_x^{-1} r_{dx} - w = w_{opt} - w

or, after rearranging the terms in the equation:

w_{opt} = w - \frac{1}{2} R_x^{-1} \nabla(\xi)

This can be easily modified into an adaptive algorithm:

w_{k+1} = w_k - \frac{1}{2} R_x^{-1} \nabla(\xi)

This equation still requires the demanding calculation of the R_x^{-1} matrix, but can cope with non-quadratic MSE surfaces and nonstationarity of the signals.

The steepest descent technique does not require knowledge of R_x. This algorithm searches for the MMSE point in the direction of the negative gradient of the MSE surface:

w_{k+1} = w_k - \mu \nabla(\xi(w_k))

where \mu is a constant that regulates the step size. This equation is quite similar to the one for Newton's method. The only differences are that \frac{1}{2} R_x^{-1} has been replaced by the user-defined parameter \mu, and that R_x no longer appears in the equation. Notice, however, that the steepest descent method still requires knowledge of the performance function \xi(w_k) for the gradient computation.

The LMS algorithm is similar to the steepest descent method but does not require a priori knowledge of \xi(w_k). The gradient in this method is computed by differentiating the instantaneous squared error e^2(n) instead of the mean/expected value of the squared error, E[e^2(n)]. This estimate of the gradient of the MSE surface is therefore obtained as:

\hat{\nabla}(\xi(w_k)) = \frac{\partial e^2(n)}{\partial w_k} = 2 e(n) \frac{\partial e(n)}{\partial w_k} = 2 e(n) \frac{\partial \left( d(n) - x_k^T w_k \right)}{\partial w_k} = -2 e(n) x_k

Replacing this expression for the gradient of \xi(w_k) in the steepest descent algorithm expression, the LMS algorithm is obtained:

w_{k+1} = w_k - \mu \hat{\nabla}(\xi(w_k)) = w_k + 2 \mu e(n) x_k

In this case the shape of the error surface and the calculation of the autocorrelation matrix are not required. The LMS algorithm proceeds according to the following steps:

1. The weight vector is initialised by setting all filter coefficients to some random values. A common choice is to set all the taps to 0.
2. A choice of step size \mu is made. Although it is possible to determine theoretical limits for this step size within which convergence is ensured, \mu is usually picked by trial and error.
3. The vector of previous and current input signal samples x_k is formed and the filter output calculated: y(n) = x_k^T w_k
4. The error is calculated: e(n) = d(n) - y(n)
5. The weight vector is updated, i.e. the w_{k+1} vector is calculated according to the weight update equation: w_{k+1} = w_k + 2 \mu e(n) x_k
6. k = k + 1 and the algorithm jumps to step 3.


Question 9.

a) Use diagrams and relevant equations to explain and discuss the application of adaptive filters in system identification.

b) An LMS-based FIR adaptive filter working in the system identification configuration uses N = 4. If the exact transfer function of the system to be identified is:

H(z) = \frac{1.25 + 0.35 z^{-1}}{1 - 0.5 z^{-1}}

write the equations for the signals d(n) and e(n) and the update equation for each adaptive filter coefficient, i.e. w0(n) to w3(n).

Solution:

a) The unknown system to be identified is c(n), and h(n) is a digital filter used to model c(n). The basic concept is that the adaptive filter (model) adjusts itself, intending to make its output match the output of the unknown system. When the difference (the error signal e(n)) between the physical system response d(n) and the adaptive model response y(n) has been minimised, the adaptive model reproduces the unknown system or provides an approximation to it. Provided that the order of the adaptive filter matches that of the unknown system, the unknown system is identified in an optimum sense. In actual applications, however, there will normally be additive noise present at the adaptive filter input, so the filter structure will not exactly match that of the unknown system. Once a good model of the unknown system is obtained, the DFT (of the estimated impulse response h(n)) can be performed in order to obtain the frequency response of the system. When the plant is time varying, the plant output is nonstationary. In such a situation, the adaptive filtering algorithm has the task of keeping the modelling error small by continually tracking time variations of the plant dynamics.


b)

\frac{D(z)}{X(z)} = H(z) = \frac{1.25 + 0.35 z^{-1}}{1 - 0.5 z^{-1}}

D(z) \left( 1 - 0.5 z^{-1} \right) = X(z) \left( 1.25 + 0.35 z^{-1} \right)

D(z) = X(z) \left( 1.25 + 0.35 z^{-1} \right) + 0.5 z^{-1} D(z)

Taking the inverse z-transform we have:

d(n) = 1.25 x(n) + 0.35 x(n-1) + 0.5 d(n-1)

The adaptive FIR filter output is:

y(n) = w(0) x(n) + w(1) x(n-1) + w(2) x(n-2) + w(3) x(n-3)

e(n) = d(n) - y(n)

The weight update equations are

w(i) = w(i) + 2 \mu e(n) x(n-i) \quad \text{for } i = 0, \ldots, 3

or

w(0) = w(0) + 2 \mu e(n) x(n)
w(1) = w(1) + 2 \mu e(n) x(n-1)
w(2) = w(2) + 2 \mu e(n) x(n-2)
w(3) = w(3) + 2 \mu e(n) x(n-3)
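The difference equation for d(n) implied by the question's H(z) can be sanity-checked by exciting it with a unit impulse, which reproduces the impulse response of H(z):

```python
# Impulse response of H(z) = (1.25 + 0.35 z^-1)/(1 - 0.5 z^-1) via the recursion
# d(n) = 1.25 x(n) + 0.35 x(n-1) + 0.5 d(n-1), with x(n) = delta(n).
N = 10
x = [1.0] + [0.0] * (N - 1)
d = []
for n in range(N):
    xn1 = x[n - 1] if n >= 1 else 0.0
    dn1 = d[n - 1] if n >= 1 else 0.0
    d.append(1.25 * x[n] + 0.35 * xn1 + 0.5 * dn1)
# d = [1.25, 0.975, 0.4875, ...]: h(0) = 1.25, h(n) = 0.975 * 0.5**(n-1) for n >= 1
```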


Question 10.
a) Use diagrams and relevant equations to explain and discuss the application of adaptive filters in inverse system modelling.

b) An LMS-based FIR adaptive filter working in the inverse system modelling configuration uses N = 2. The system to be inverted has an impulse response given by:

g(n) = \delta(n) - \alpha \delta(n-1)

Discuss the properties of the estimated inverse model with respect to the value of the parameter \alpha in the above expression.

Solution:
a) The inverse system identification configuration is very similar to system identification, but the unknown system is now placed in series with the adaptive filter.

The adaptive filter becomes the inverse of the unknown system when the error e(n) between the delayed reference signal and the adaptive filter output gets very small. So the goal of the inverse identification (equalisation) procedure is to obtain the transversal filter that satisfies the equation:

C(z) H(z) = z^{-\Delta}

where z^{-\Delta} is the delay inserted in the desired-signal d(n) path, as shown in the figure, to keep the data at the summation synchronised. Adding the delay keeps the system causal. Without the delay element, the adaptive algorithm tries to match the output from the adaptive filter, y(n), to input data x(n) that has not yet reached the adaptive elements because it is still passing through the unknown system. In essence, the filter ends up trying to look ahead in time. As hard as it tries, the filter can never adapt: e(n) never reaches a very small value, and the adaptive filter never compensates for the unknown system response. The adaptive filter therefore never provides a true inverse response to the unknown system. Including a delay equal or similar to the delay caused by the unknown system prevents this condition.


b) The system to be inverted has the transfer function G(z) = 1 - \alpha z^{-1}, i.e. a zero at z = \alpha. If |\alpha| > 1, G(z) has a zero outside the unit circle, i.e. G(z) is not a minimum phase filter, and a causal and stable realisation of G^{-1}(z) is not possible. If |\alpha| < 1, G(z) is minimum phase and G^{-1}(z) is easily obtained as:

G^{-1}(z) = \frac{1}{1 - \alpha z^{-1}}

and the inverse filter has the impulse response:

g^{-1}(n) = \alpha^n u(n)
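A quick check of the minimum-phase case (alpha = 0.5 is an assumed value): convolving g(n) with the truncated inverse alpha^n u(n) yields a near-perfect unit impulse, with only a tail term of size alpha^M left over from the truncation:

```python
# Convolve g(n) = delta(n) - a*delta(n-1) with the truncated inverse
# g_inv(n) = a**n for n = 0..M-1; the result should approximate delta(n).
a = 0.5                                  # assumed value; |a| < 1 (minimum phase)
M = 25
g = [1.0, -a]
g_inv = [a ** n for n in range(M)]

conv = [0.0] * (M + 1)
for i, gi in enumerate(g):
    for j, hj in enumerate(g_inv):
        conv[i + j] += gi * hj
# conv = [1, 0, ..., 0, -a**M]: a unit impulse up to a truncation tail of a**M
```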
