
Adaptive Filters Background

All of the filters that we have been designing in the lab have been simple FIR or IIR fixed-coefficient
filters. We have used programs like MATLAB to compute filter coefficients for a particular situation, like
low-pass or high-pass filters with some fixed frequency cutoff. Once the filter was designed and coded, we
ran it on our DSP boards and used simple inputs to test the frequency response. Unfortunately, interference
and the signals of interest can change over time.

What would happen if we designed a low-pass filter with a cutoff frequency of 500 Hz, for example, and
started to process data? Suppose that during processing we discovered a noise signal at 450 Hz that was
sneaking through our filter. We could go back into MATLAB, redesign the filter coefficients, paste them
into our code, and the noise at 450 Hz would be eliminated. This would be effective only if the interference
does not change in time.

One solution to this problem is an adaptive filter. The coefficients of an adaptive filter change in time. In
this lab, we will look at a few ways to implement adaptive algorithms. Take a look at the block diagram
below, which compares the setup of a standard filter, like the ones we are used to, with that of an adaptive
filter. The first thing that should strike you is the appearance of a feedback loop.

[Figure 1: Adaptive filter structure. The block diagram shows a Filter Structure block in the signal path, with an Adaptation Algorithm block and a Criterion of Performance block forming a feedback loop that updates the filter coefficients.]

Let's take a look at some of the terminology that will be used when we talk about adaptive filters.

Filter Structure – This is the implementation of the filtering algorithm. It is set by the designer of the filter,
and may be something like a Direct Form implementation of an FIR filter. This block computes the filter
output based on the input. The filter coefficients can be updated by the Adaptive Algorithm.

Criterion of Performance – This block looks at the output of the filter and compares it with some other
signal. This other signal is the desired output of the filter. If we know what the desired response is, we can
compare it to the actual response and then indicate to the Adaptive Algorithm that something needs to be
changed. From the example in the introduction, the Criterion of Performance would detect the noise at
450 Hz and tell the Adaptive Algorithm that it needs to change the filter’s cutoff frequency.

Adaptive Algorithm – This is the main part of an adaptive filter. This algorithm decides how to change the
filter coefficients in response to the signal given by the Criterion of Performance. This is the most difficult
part of an adaptive filter to design.

In order to design an adaptive filter, we need to know some information about the environment that the
filter will be in. We call this the Signal Operating Environment (SOE). The Signal Operating Environment
gives us statistical information about what the input signal might look like. There are basically two
things of interest: (1) What is the average value (mean) of the signal? and (2) How far does the signal
deviate from this mean value (its variance)? Once we have this information, we can start to design the filter.
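
As a concrete illustration, here is a minimal C sketch (C is assumed only because the lab runs on DSP boards; this is not the lab's actual code) that estimates these two quantities from a hypothetical buffer of samples:

```c
#include <stdio.h>

#define N 8   /* buffer length; example value */

int main(void)
{
    /* Hypothetical input samples, invented for the example. */
    float x[N] = { 1.0f, 1.2f, 0.9f, 1.1f, 1.0f, 0.8f, 1.2f, 1.0f };
    float mean = 0.0f, var = 0.0f;
    int n;

    /* (1) the average value of the signal */
    for (n = 0; n < N; n++)
        mean += x[n];
    mean /= N;

    /* (2) how far the signal deviates from the mean (variance) */
    for (n = 0; n < N; n++)
        var += (x[n] - mean) * (x[n] - mean);
    var /= N;

    printf("mean = %f, variance = %f\n", mean, var);
    return 0;
}
```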

In this lab we will be talking about a very specific case of a Signal Operating Environment. In this
particular case, the average value of our signal will be constant. You can think of the average value of the
signal as its DC value. Also, in this particular case of Signal Operating Environment, our signal will stay
relatively close to its average value. In other words, the input signal will not have big swings in one
direction or the other.

The last thing we need to talk about is the difference between Supervised Adaptation and Unsupervised
Adaptation. In Supervised Adaptation, the expected output of the filter is a known quantity, and the
Criterion of Performance can simply take the difference between the filter output and the expected output.
It then uses that information to tell the Adaptation Algorithm what adjustments to make.

In Unsupervised Adaptation, the expected output of the filter is an unknown quantity. This case is more
difficult because of the work that the Criterion of Performance has to do. We cannot simply take the
difference of two signals as in Supervised Adaptation. Much of the time, Unsupervised Adaptation
consists of looking for known signal qualities. For example, the Criterion of Performance may look for a
particular signal envelope that we know should be present.

In this lab, we will deal only with Supervised Adaptation because it is easier to implement.

Adaptive Algorithm Theory

Let's first focus on the Adaptive Algorithm. This is the part of an adaptive filter that updates the filter
coefficients. The filter itself is an FIR filter. We will discuss two ways to calculate new filter coefficients,
both built around a performance criterion based on the mean squared error.

Mean Squared Error (MSE) Criterion:


Take a look at the modified block diagram below.

[Figure 2: Filter structure for the Mean Squared Error algorithm. The desired output d(n) and the filter output y(n) are subtracted at a summing node; the resulting error e(n) drives the Adaptive Algorithm, which updates the Filter Structure.]

Looking at the figure, we can immediately do some calculations. The output of the FIR filter is:

y(n) = \sum_{k=0}^{N-1} h(k) \, x(n-k)
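
As a minimal sketch of how this direct-form sum might look in C (the function name fir_output and the tap count are illustrative, not the lab's actual code):

```c
#define N 4   /* number of filter taps; example value */

/* Direct-form FIR: x[k] holds x(n-k), so x[0] is the newest sample. */
float fir_output(const float h[N], const float x[N])
{
    float y = 0.0f;
    int k;
    for (k = 0; k < N; k++)
        y += h[k] * x[k];   /* accumulate h(k) * x(n-k) */
    return y;
}
```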
The error signal is the desired output signal, d(n), minus the filter output, y(n):

e(n) = d(n) - y(n) = d(n) - \sum_{k=0}^{N-1} h(k) \, x(n-k)
The error criterion is the mean or average squared error. The goal is to find the filter
coefficients, the h(k)'s, that minimize the mean squared error:

E\{e^2(n)\} = E\{d^2(n) - 2 d(n) y(n) + y^2(n)\}
            = E\{d^2(n)\} - 2 E\{d(n) y(n)\} + E\{y^2(n)\}

The term E{·} denotes the expected value. In this case we make the assumption that the
signal statistics do not change over time, which lets us replace the expected value with a
time average.

A common method of minimization is to take the derivative with respect to the h(k)'s and
set the result equal to zero:

\frac{\partial E\{e^2(n)\}}{\partial h(k)} = 0, \quad k = 0, 1, \ldots, N-1
This equation can be solved to give the result

\mathbf{h}_{\text{opt}} = \mathbf{R}_x^{-1} \mathbf{r}_{dx}

where

\mathbf{R}_x = E\{\mathbf{x}(n) \, \mathbf{x}^T(n)\} \qquad \text{and} \qquad \mathbf{r}_{dx} = E\{d(n) \, \mathbf{x}(n)\}

are the autocorrelation matrix of the input and the cross-correlation vector between the
desired signal and the input. This result is known as the Wiener-Hopf equation. It is
usually not practical to compute, since the signal statistics are not usually known, and it
involves a matrix inversion, which is difficult and time-consuming to calculate.
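
Even so, it can help to see the computation at a toy size. The C sketch below solves a hypothetical 2-tap case of h_opt = R_x^{-1} r_dx; the correlation values are invented for the example, and the closed-form 2x2 inverse used here only works at this tiny size, which hints at why the direct solution does not scale:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed 2x2 autocorrelation matrix R_x and cross-correlation
     * vector r_dx; in practice these statistics are rarely known. */
    float R[2][2] = { { 1.0f, 0.5f },
                      { 0.5f, 1.0f } };
    float r[2]    = { 0.7f, 0.3f };
    float h[2];

    /* Closed-form inverse of a 2x2 matrix, applied to r. */
    float det = R[0][0] * R[1][1] - R[0][1] * R[1][0];
    h[0] = ( R[1][1] * r[0] - R[0][1] * r[1]) / det;
    h[1] = (-R[1][0] * r[0] + R[0][0] * r[1]) / det;

    printf("h_opt = [%f, %f]\n", h[0], h[1]);
    return 0;
}
```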

Least Mean Square (LMS)

The Least Mean Square method iteratively estimates the solution to the Wiener-Hopf
equations using a method called gradient descent. This minimization method finds a
minimum by stepping against an estimate of the gradient. The basic equation of gradient
descent is:

h_n(k) = h_{n-1}(k) - \beta \, \nabla_{h(k)} E\{e^2(n)\}

where β is the step size parameter and \nabla_{h(k)} E\{e^2(n)\} is the gradient of the
mean squared error; stepping against the gradient makes h_n(k) approach h_opt.

The LMS method uses this technique to compute new coefficients that minimize the
difference between the computed output and the expected output, under a few key
assumptions about the nature of the signal. The key simplification is to replace the
expected squared error with the instantaneous squared error, e^2(n). Under this
assumption, the gradient can be estimated as:

\nabla_{h(k)} e^2(n) = -2 \, e(n) \, x(n-k)

Absorbing the factor of 2 into the step size, the equation for updating the filter
coefficients becomes:

h_n(k) = h_{n-1}(k) + \beta \, e(n) \, x(n-k)

The basic algorithm takes the form:

1. Read in the next sample, x(n), and perform the filtering operation with the previous
   version of the coefficients: y(n) = \sum_{k=0}^{N-1} h_{n-1}(k) \, x(n-k)
2. Take the computed output and compare it with the expected output: e(n) = d(n) - y(n)
3. Update the coefficients: h_n(k) = h_{n-1}(k) + \beta \, e(n) \, x(n-k)

This algorithm is performed in a loop so that with each new sample a new coefficient
vector, h_n(k), is created. In this way, the filter coefficients change and adapt.
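
Here is a minimal C sketch of this loop, assuming a supervised setup where the desired output d(n) is available at every sample. The names (NTAPS, BETA, lms_step) and the toy driver are illustrative, not the lab's actual code:

```c
#define NTAPS 4        /* number of filter taps; example value           */
#define BETA  0.01f    /* step size; must be small enough for stability  */

static float h[NTAPS]; /* adaptive coefficients, start at zero           */
static float x[NTAPS]; /* delay line: x[k] holds x(n-k)                  */

/* One LMS iteration: filter the new sample, then adapt. */
float lms_step(float x_new, float d)
{
    float y = 0.0f, e;
    int k;

    /* Shift the delay line and insert the new sample x(n). */
    for (k = NTAPS - 1; k > 0; k--)
        x[k] = x[k - 1];
    x[0] = x_new;

    /* Step 1: filter with the previous coefficients. */
    for (k = 0; k < NTAPS; k++)
        y += h[k] * x[k];

    /* Step 2: error against the expected output. */
    e = d - y;

    /* Step 3: h_n(k) = h_{n-1}(k) + beta * e(n) * x(n-k). */
    for (k = 0; k < NTAPS; k++)
        h[k] += BETA * e * x[k];

    return y;
}

int main(void)
{
    /* Toy driver: hypothetical input with desired output d(n) = 0.5 x(n). */
    float in[6] = { 1.0f, -1.0f, 0.5f, 0.25f, -0.5f, 1.0f };
    int n;
    for (n = 0; n < 6; n++)
        lms_step(in[n], 0.5f * in[n]);
    return 0;
}
```

Note that the step size β trades off adaptation speed against stability: too large a value makes the coefficient updates diverge rather than converge.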
