
PEOPLE'S DEMOCRATIC REPUBLIC OF ALGERIA

MINISTRY OF HIGHER EDUCATION AND SCIENTIFIC RESEARCH


DJILLALI BOUNAAMA UNIVERSITY KHEMIS MILIANA-ALGERIA
FACULTY OF TECHNOLOGY
DEPARTMENT OF ELECTRICAL ENGINEERING

A dissertation presented
for the Licence degree in

« Electrical Engineering »

Option:

« Control System »

The Kalman filter

Done by:
 Soltani Yacine
 Zaknoune Mohamed
 Naas Said

Overseen by:
Mr. Himour

University Year 2020/2021


Résumé
Au fur et à mesure que la technologie avance, nous nous trouvons constamment dans le besoin
de prendre des mesures précises dans tous les domaines de la société. Nous fournissons une
description de type didacticiel du filtre de Kalman. Ce chapitre s'adresse à ceux qui ont besoin
d'enseigner les filtres de Kalman à d'autres, ou à ceux qui n'ont pas une solide expérience en
théorie de l'estimation. Suite à une définition de problème d'estimation d'état, les algorithmes
de filtrage seront présentés avec un exemple à l'appui pour aider les lecteurs à comprendre
facilement le fonctionnement des filtres de Kalman. Une implémentation sur le suivi de cible est
donnée. Dans cet exemple, nous expliquons comment choisir, implémenter, régler et modifier
les algorithmes pour les pratiques du monde réel. Des codes sources pour la mise en œuvre de
l'exemple sont également fournis. En conclusion, ce chapitre deviendra un préalable aux autres
contenus du livre.
Mots-clés : filtre de Kalman, suivi de cible, algorithmes
Abstract
As technology advances, we constantly find ourselves needing to take precise
measurements in all fields of society. We provide a tutorial-like description of the Kalman filter.
This chapter is aimed at those who need to teach Kalman filters to others, or at those who do
not have a strong background in estimation theory. Following a problem definition of state
estimation, the filtering algorithms are presented with a supporting example to help readers
easily grasp how the Kalman filter works. An implementation on target tracking is given. In this
example, we discuss how to choose, implement, tune, and modify the algorithms for real-world
practice. Source code for implementing the example is also provided. In conclusion, this
chapter serves as a prerequisite for the other contents of the book.
Keywords: Kalman filter, target tracking, algorithms
‫ملخص‬
‫ نجد أنفسنا باستمرار في حاجة ألخذ قياسات دقيقة في جميع مجاالت المجتمع نحن نقدم برنامج تعليمي‬، ‫مع تقدم التكنولوجيا‬
‫ أو ألولئك الذين ليس لديهم‬، ‫ يهدف هذا الفصل إلى أولئك الذين يحتاجون إلى تعليم مرشح كالمان لآلخرين‬.‫لمرشح كالمان‬
‫ سيتم تقديم خوارزميات التصفية مع مثال داعم لمساعدة القراء‬، ‫ بعد تعريف مشكلة تقدير الحالة‬.‫خلفية قوية في نظرية التقدير‬
‫ نناقش كيفية اختيار الخوارزميات وتنفيذها وضبطها وتعديلها‬، ‫ في هذا المثال‬.‫على فهم كيفية عمل مرشح كالمان بسهولة‬
‫ سيصبح هذا الفصل متطلًبا أساسًيا‬،‫ في االخير‬.‫ يتم أيًض ا توفير رموز المصدر لتنفيذ المثال‬.‫للتطبيق في العالم الحقيقي‬
‫لمحتويات أخرى في الكتاب‬.
‫ الخوارزميات‬، ‫ تتبع الهدف‬، ، ‫ مرشح كالمان‬:‫الكلمات الرئيسية‬

Acknowledgments
We thank Allah for all the strength and might that he bestowed upon us, the
courage and will that he blessed us with to accomplish this work.
We thank our Professor Mister HIMOUR for guiding us and providing us with rich
and valuable advice and information.
We thank every teacher and comrade from this faculty who aided us, taught us,
and put us on the path that led us to this point. Thank you to everyone who
made this possible.

Table of Contents
1 CHAPTER I
1.1 Fourier series
1.1.1 Definition
1.1.2 Periodic Signals
1.2 Fourier theorem
1.3 Fourier transform
1.3.1 Definition
1.3.2 Inverse Fourier transform
1.3.3 Fourier Transformation Properties
1.4 The relationship between Fourier series and Fourier transform
1.5 The relationship between Fourier transform and Laplace transform
1.5.1 Definition
1.5.2 Laplace transform
1.5.3 Link with the Fourier transform
1.6 Frequential Analysis of Linear Systems
1.6.1 The frequential response
1.6.2 The complex transfer function
1.6.3 Bode and Nyquist plots
1.7 Filtering
1.7.1 Filter Definition
1.7.2 Types of Filters and Functions
1.7.3 Filtering Functions
1.8 Filter Classifications Analysis
1.8.1 Passive Filter & Active Filter
2 CHAPTER II
2.1 What is the Kalman filter?
2.2 What is it used for?
2.3 Estimating the State of Dynamic Systems
2.4 Performance analysis of estimators
2.5 Advantages and Disadvantages of the Kalman Filter
2.5.1 Advantages
2.5.2 Disadvantages
2.6 Mathematical development
2.6.1 Linear formulation
2.6.2 Matrix formulation
2.7 How the Kalman filter operates
2.7.1 The prediction phase
2.7.2 The update phase (correction)
2.8 Kalman filter algorithm
2.9 Signal Noise
2.9.1 DEFINING SIGNAL NOISE
2.9.2 COMMON CAUSES OF SIGNAL NOISE
2.9.3 PROBLEMS ASSOCIATED WITH SIGNAL NOISE
3 CHAPTER III
3.1 The Implementation with MATLAB
3.1.1 Global variable declaration and common variables
3.1.2 Kalman Filter: The prediction part, the estimation part
3.1.3 Averaging the results to eliminate uncertainty
3.1.4 Statistics (Root mean square error)
3.1.5 Plotting
3.2 The plots
3.2.1 Target position
3.2.2 Target root mean square error

Table of Figures

Figure 1-1: Characteristics of a periodic signal
Figure 1-2: Temporal representation
Figure 1-3: Spectral representation
Figure 1-4: A frequency plot
Figure 1-5: Nyquist plot
Figure 1-6: Bode gain plot
Figure 1-7: Bode phase plot (in degrees)
Figure 1-8: Filtering out the noise (signal processing)
Figure 1-9: Electronic filter
Figure 2-1: Kalman filter application on a ship
Figure 2-2: Kalman filter application on a spacecraft
Figure 2-3: Kalman filter steps
Figure 2-4: Actual and estimated standard deviation for x-axis estimate errors
Figure 2-5: Time history of estimation errors
Figure 2-6: Isolated noise from signal
Figure 2-7: Signal noise causing miscommunication between devices
Figure 3-1: Thermal camera installed on an airborne platform
Figure 3-2: Variable declaration in MATLAB
Figure 3-3: The prediction part in MATLAB
Figure 3-4: The estimation part in MATLAB
Figure 3-5: Averaging the results to eliminate uncertainty
Figure 3-6: Statistics in MATLAB
Figure 3-7: Plotting in MATLAB
Figure 3-8: Target position
Figure 3-9: Target root mean square error

Symbols and abbreviations

$\int_R$ — Lebesgue integral
$F(f)$ or $\hat f$ — Fourier transform of $f$
$F^{-1}(f)$ or $\bar F$ — Inverse Fourier transform
$L^2(R)$ — $\{f : f \text{ measurable and } \int_R |f|^2 < \infty\}$
$\mathrm{supp}(f)$ — Support of the function $f$
$C([a;b])$ — Space of continuous functions on the interval $[a;b]$
$[a;b]$ — Real interval
$\varphi$ — Unknown function
$A$ — Linear operator
$k(x,y)$ — Integral kernel
$x(k)$ — State vector at time k, containing the quantities to be estimated; size n×1
$A_k$ — Transition matrix; describes the evolution of the state vector from instant k−1 to instant k; size n×n
$B_k$ — Control matrix at instant k; depends on the modeling of the system
$H_k$ — Observation (measurement) matrix; links the system state to the measurements; size m×n
$u_k$ — Vector of the commands applied to the system at time k
$w_k$ — Modeling noise, related to the uncertainty in the process model
$Q_k$ — Variance-covariance matrix of the process noise at time k
$y_k$ — Measurement vector at time k; size m×1
$r_k$ — Measurement noise; size m×1
$R_k$ — Variance-covariance matrix of the measurement noise at time k

General
Introduction

General Introduction:
One of the most challenging questions today is how to make meaningful guesses
from past observations. Obviously, no one can predict with one hundred percent
accuracy where, for instance, an unknown flying object will end up. However,
between a purely blind guess and a perfectly accurate forecast, there is room for
improvement. If, in addition, we are able to model the dynamics of the observed
quantity and factor in some noise due to unpredictable external or internal
behavior, we can leverage this model to make an informed guess. It will not be
100 percent accurate, but it is much better than a purely random one.
Indeed, this scientific question of combining a model with noisy measurements
has been extensively examined in various fields; in control theory it led to the
Kalman filter. This work first covers Fourier series, the Fourier transform, and
frequency analysis, and their relationship with the core of our project, the
Kalman filter. The Kalman filter is an algorithm that provides estimates of
unknown variables given measurements observed over time. It has demonstrated
its usefulness in various applications despite its relatively simple form and small
computational requirements. However, it is still not easy for people who are not
familiar with estimation theory to understand and implement it. We therefore
introduce the algorithms of the Kalman filter and the extended Kalman filter,
respectively, including their applications. For linear models with additive
Gaussian noise, the Kalman filter provides optimal estimates. Tracking a
stationary target is taken as an example, and how to implement the filtering
algorithms for such applications will be presented in detail.

1 CHAPTER I

Frequency Analysis &


Fourier Transforms

Introduction:
Named in honor of the renowned French mathematician Jean-Baptiste Joseph, Baron Fourier
(1768-1830), and introduced initially for the purpose of solving the heat equation in a metal
plate, yet since expanded into a living field in our present day, the Fourier series represents a
periodic function f(x) as an infinite sum of sines and cosines.

1.1 Fourier series:


The Fourier series is a way of rewriting functions as a series of trigonometric functions. Read
on below to learn how this series is constructed.[ 1 ]

1.1.1 Definition:

The Fourier series of a periodic function f(x) of period T is:

$$f(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty} a_k \cos\frac{2\pi kx}{T} + \sum_{k=1}^{\infty} b_k \sin\frac{2\pi kx}{T}$$

for a set of coefficients $a_k$ and $b_k$ defined by the integrals:

$$a_k = \frac{2}{T}\int_0^T f(x)\cos\frac{2\pi kx}{T}\,dx, \qquad b_k = \frac{2}{T}\int_0^T f(x)\sin\frac{2\pi kx}{T}\,dx$$
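These coefficient integrals are easy to check numerically. The sketch below is our own illustration (a rectangle-rule approximation in Python, not code from the cited reference) applied to a unit-period square wave, whose odd symmetry gives $a_1 \approx 0$ and $b_1 \approx 4/\pi$:

```python
import math

def fourier_coeffs(f, T, k, n=10000):
    """Rectangle-rule approximation of the coefficients a_k and b_k
    of a T-periodic function f, sampled at n points over one period."""
    dx = T / n
    ak = (2 / T) * sum(f(i * dx) * math.cos(2 * math.pi * k * i * dx / T) for i in range(n)) * dx
    bk = (2 / T) * sum(f(i * dx) * math.sin(2 * math.pi * k * i * dx / T) for i in range(n)) * dx
    return ak, bk

# Square wave of period 1: +1 on the first half-period, -1 on the second.
square = lambda x: 1.0 if (x % 1.0) < 0.5 else -1.0

a1, b1 = fourier_coeffs(square, 1.0, 1)   # expect a1 ≈ 0 and b1 ≈ 4/π ≈ 1.273
```

The analytic Fourier series of this square wave contains only odd sine harmonics with amplitude 4/(πk), which the numerical integral reproduces.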

1.1.2 Periodic Signals:

Many phenomena are characterized by signals of different kinds presenting a periodic
form. We can think of the sunspot cycle, biological observables of the human body (aortic
pressure, electrocardiogram, ...), electronic signals, complex sounds produced by musical
instruments, etc. We denote this signal by f(t), with t a real variable. To fix ideas, we can
imagine that t is the temporal variable, although this is not necessary; t can also be a spatial
variable. The signal admits T as a period when we can write: [ 2 ]

$$f(t+T) = f(t) \quad \forall t \in \mathbb{R}, \; T > 0$$

All the useful information about the signal is therefore found in a pattern of duration T. The
number ν of patterns found in an interval of one second is called the frequency and is
expressed in hertz (Hz). Since the pattern extends over a duration T, we have:

Figure 1-1: Characteristics of a periodic signal

Frequency: $\nu = \dfrac{1}{T}$
The pattern has characteristics that can be easily measured once the signal is converted into an
electrical signal:

 The continuous (DC) component represents the mean value of the signal:

$$f_{cc} = \frac{1}{T}\int_0^T f(t)\,dt$$

 The peak-to-peak value corresponds to the difference between the maximum and the
minimum of f:

$$f_{pp} = \max(f) - \min(f)$$

 Signals encountered in physics have a finite mean square. Indeed, the power of a
signal is proportional to f²(t), so its average must be finite. The rms value is related
to the mean square via the relation:

$$f_{rms} = \sqrt{\overline{f^2}} = \sqrt{\frac{1}{T}\int_0^T f^2(t)\,dt}$$
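The three characteristics can be checked on a sampled signal. The sketch below (our illustration, not from the source) uses an offset sine $f(t) = 2 + 3\sin(2\pi t/T)$, for which $f_{cc} = 2$, $f_{pp} = 6$, and $f_{rms} = \sqrt{2^2 + 3^2/2}$:

```python
import math

T, n = 1.0, 100000
# One period of f(t) = 2 + 3 sin(2πt/T), sampled uniformly.
samples = [2.0 + 3.0 * math.sin(2 * math.pi * (i * T / n) / T) for i in range(n)]

f_cc  = sum(samples) / n                               # DC component, expect 2
f_pp  = max(samples) - min(samples)                    # peak-to-peak, expect 6
f_rms = math.sqrt(sum(s * s for s in samples) / n)     # rms, expect sqrt(4 + 9/2)
```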

1.2 Fourier theorem:


Under certain mathematical conditions, which are not very restrictive for physical quantities,
it can be shown that a periodic signal f(t) is developable in a Fourier series, as follows: [ 1 ]

$$f(t) = a_0 + \sum_{n=1}^{\infty} a_n \cos(n 2\pi\nu t) + b_n \sin(n 2\pi\nu t), \qquad n \in \mathbb{N}$$

The term $a_n \cos(n 2\pi\nu t) + b_n \sin(n 2\pi\nu t)$ represents the harmonic of order n. The harmonic of
rank n = 1 is also called the fundamental of f.

Figure 1-3: Spectral representation
The Fourier series converges pointwise to f(t) if the signal is continuous and of finite energy
over a period.

The set of Fourier coefficients $(a_n, b_n)$ completely determines the shape of the periodic pattern.
This is why another way of representing a signal is to provide the histogram of the Fourier
coefficients: we obtain what we call the spectral representation, or Fourier spectrum, of f.
For example, two notes of the same pitch played by two different musical instruments present
two spectra made up of the same harmonics but with different relative weights. These notes
are of identical pitch but of distinct timbre.
If the function f(t) is known, we can determine the Fourier coefficients by integration. For
example, if we take the mean of the Fourier series, we find $a_0$. The first Fourier coefficient
therefore represents the continuous component of f:

Calculation of $a_0$:

$$a_0 = f_{cc} = \frac{1}{T}\int_0^T f(t)\,dt$$

1.3 Fourier transform:


The Fourier transform F is an operation which transforms an integrable function on R into
another function, describing the frequency spectrum of the former. We give a set of necessary
definitions and properties of the Fourier transform. [ 3 ]

1.3.1 Definition:

We call the Fourier transform of f the application, denoted $\hat f$ or $F(f)$, defined for
every $\xi \in \mathbb{R}$ by:

$$F(f) = \hat f(\xi) = \int_{-\infty}^{+\infty} f(x)\, e^{-2\pi i \xi x}\, dx$$

1.3.2 Inverse Fourier transform:

Let $f \in L^1(\mathbb{R})$. We call the inverse (conjugate) Fourier transform of f the function:

$$F^{-1}(f)(x) = \bar F f(x) = \int_{-\infty}^{+\infty} f(\xi)\, e^{2\pi i \xi x}\, d\xi$$

1.3.3 Fourier Transformation Properties:

We say that f(t) and $\hat f(\nu)$ form a pair of Fourier transforms. We go from one to the other by a
Fourier transformation (FT) or an inverse Fourier transformation (FT⁻¹):

$$f(t) \rightleftharpoons \hat f(\nu)$$

Some properties of the Fourier transform of real signals:

 Linearity - By virtue of the linearity of integration, the Fourier transform is also a linear
operation:

$$a f(t) + b g(t) \rightleftharpoons a \hat f(\nu) + b \hat g(\nu)$$

 Parity - If f(t) is an even function then $\hat f(\nu)$ is real and even. If f(t) is an odd function
then $\hat f(\nu)$ is imaginary and odd. In all cases $|\hat f(\nu)|$ is an even function, which is why we
sometimes restrict its representation to $\mathbb{R}^+$.

 Translation - Translating a signal in time amounts to phase-shifting its Fourier
transform:

$$f(t-\tau) \rightleftharpoons e^{-2\pi i \nu \tau}\, \hat f(\nu)$$

 Dilation - Any expansion of the time scale leads to an inverse contraction of the
frequency scale, and vice versa. Mathematically, we have:

$$f(t/a) \rightleftharpoons a\, \hat f(a\nu)$$

 Duality - This property makes it easy to obtain new pairs of Fourier transforms from
already known pairs. Indeed,

if $f(t) \rightleftharpoons \hat f(\nu)$ then $\hat f(-t) \rightleftharpoons f(\nu)$
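The translation property has an exact discrete analogue that can be verified numerically: for the discrete Fourier transform, delaying a sequence by m samples multiplies bin k by $e^{-2\pi i k m / N}$. The sketch below is our own check (a naive DFT in Python, not code from the source):

```python
import cmath

def dft(x):
    """Naive O(N²) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 64
x = [float(n % 7) for n in range(N)]            # arbitrary real test sequence
X = dft(x)

# Circularly delay x by m samples; each bin k should pick up exp(-2πi k m / N).
m = 5
x_shift = [x[(n - m) % N] for n in range(N)]
X_shift = dft(x_shift)

err = max(abs(X_shift[k] - cmath.exp(-2j * cmath.pi * k * m / N) * X[k])
          for k in range(N))
```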

1.4 The relationship between Fourier series and Fourier
transform:
Let us first consider a periodic signal f(t) with period $T_0$ which can be decomposed into a Fourier
series. So we have: [ 3 ]

$$f(t) = a_0 + \sum_{n=1}^{\infty} a_n \cos(n 2\pi \nu_0 t) + b_n \sin(n 2\pi \nu_0 t)$$

This signal is not square-summable and does not have a Fourier transform in the classical
sense of the term. However, we can define a Fourier transform of such a signal in the sense of
distributions. Indeed, we have the following pairs of transforms:

$$\cos(n 2\pi \nu_0 t) \rightleftharpoons \tfrac{1}{2}\left[\delta(\nu - n\nu_0) + \delta(\nu + n\nu_0)\right]$$
$$\sin(n 2\pi \nu_0 t) \rightleftharpoons \tfrac{1}{2i}\left[\delta(\nu - n\nu_0) - \delta(\nu + n\nu_0)\right]$$

We can therefore write, by linearity:

$$\hat f(\nu) = a_0\,\delta(\nu) + \sum_{n=1}^{\infty} \frac{a_n}{2}\left[\delta(\nu - n\nu_0) + \delta(\nu + n\nu_0)\right] + \frac{b_n}{2i}\left[\delta(\nu - n\nu_0) - \delta(\nu + n\nu_0)\right]$$

$$= a_0\,\delta(\nu) + \sum_{n=1}^{\infty} \frac{a_n - i b_n}{2}\,\delta(\nu - n\nu_0) + \frac{a_n + i b_n}{2}\,\delta(\nu + n\nu_0)$$

$$\hat f(\nu) = \sum_{n=-\infty}^{+\infty} c_n\,\delta(\nu - n\nu_0)$$
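This line-spectrum statement can be illustrated with a DFT (a sketch of ours, not from the source): the discrete spectrum of a periodic signal is nonzero only at the DC bin and at multiples of the fundamental.

```python
import cmath, math

N = 128
nu0 = 4                                   # fundamental: 4 whole cycles over the window
x = [1.0 + 2.0 * math.cos(2 * math.pi * nu0 * n / N) for n in range(N)]

# Normalized DFT: X[k] plays the role of the line weights c_n.
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)) / N
     for k in range(N)]

# All the energy sits on the "lines" k = 0 (the a0 term) and k = ±nu0.
lines = {k for k in range(N) if abs(X[k]) > 1e-8}
```

The bin at N − nu0 is the discrete counterpart of the negative-frequency line $\delta(\nu + \nu_0)$.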

1.5 The relationship between Fourier transform and Laplace transform: [ 3 ]
1.5.1 Definition

Let $f : \mathbb{R}^+ \to \mathbb{C}$ be piecewise continuous. We call the Laplace transform of the function f the function:

$$F(p) = L(f(t))(p) = \int_0^{+\infty} f(t)\, e^{-pt}\, dt, \qquad p \in \mathbb{C}$$

1.5.2 Laplace transform:

The Laplace transform maps the vector space of functions of time to a vector space of functions of the complex variable p.

Conditions of existence: F(p) is defined by a generalized integral, so it is necessary that:

 f be piecewise continuous on $\mathbb{R}^+$;
 there exist $\beta \in \,]0, 1[$ such that $t^\beta f(t)$ remains bounded near the origin, so that the integral does not diverge at 0;
 f be of exponential order: $|f(t)| \le M e^{\alpha t}$, so that $|f(t)e^{-pt}| \le M e^{-(\mathrm{Re}\,p - \alpha)t}$ and $\int_0^{+\infty} e^{-(\mathrm{Re}\,p - \alpha)t}\,dt$ converges for $\mathrm{Re}\,p > \alpha$.

Notes: Some functions do not have a Laplace transform, for example the function $f(t) = 1/t^2$,
which does not respect the second condition of existence, and $f(t) = e^{t^2}$, which is not of
exponential order.
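The definition can be checked numerically for a function that satisfies the conditions. The sketch below (our illustration) approximates the Laplace integral by a truncated rectangle rule and compares it with the known closed form $L\{e^{-at}\}(p) = 1/(p+a)$:

```python
import math

def laplace(f, p, T=60.0, n=200000):
    """Truncated rectangle-rule approximation of ∫_0^∞ f(t) e^{-pt} dt (real p)."""
    dt = T / n
    return sum(f(i * dt) * math.exp(-p * i * dt) for i in range(n)) * dt

# L{e^{-2t}}(p) = 1/(p + 2), valid for Re p > -2; at p = 1 the exact value is 1/3.
F = laplace(lambda t: math.exp(-2.0 * t), p=1.0)
```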

1.5.3 Link with the Fourier transform:

The Fourier transform of an absolutely integrable function over the set of real numbers is given
by:

$$\hat f(\alpha) = \int_{-\infty}^{+\infty} f(t)\, e^{-i\alpha t}\, dt$$

$$= \int_{-\infty}^{0} f(t)\, e^{-i\alpha t}\, dt + \int_{0}^{+\infty} f(t)\, e^{-i\alpha t}\, dt$$

$$= \int_{0}^{+\infty} f(-t)\, e^{i\alpha t}\, dt + \int_{0}^{+\infty} f(t)\, e^{-i\alpha t}\, dt$$

$$\hat f(\alpha) = L f(-t)(-i\alpha) + L f(t)(i\alpha)$$

From this link, we can say that the Laplace transform is a generalization of the Fourier
transform. In addition, since the Laplace transform is a linear and bijective operator, we
deduce that the Fourier transform is too.
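The Laplace/Fourier link can be verified numerically. The sketch below (ours, with an illustrative helper `laplace_c` for complex p) uses $f(t) = e^{-|t|}$, whose Fourier transform in the angular-frequency convention is $2/(1+\alpha^2)$:

```python
import cmath, math

def laplace_c(f, p, T=50.0, n=100000):
    """Truncated numerical Laplace transform ∫_0^T f(t) e^{-pt} dt, complex p allowed."""
    dt = T / n
    return sum(f(i * dt) * cmath.exp(-p * i * dt) for i in range(n)) * dt

f = lambda t: math.exp(-abs(t))          # even, absolutely integrable
alpha = 1.5

# f̂(α) = L f(−t)(−iα) + L f(t)(iα); here f(−t) = f(t) by evenness.
ft = laplace_c(lambda t: f(-t), -1j * alpha) + laplace_c(f, 1j * alpha)
exact = 2.0 / (1.0 + alpha ** 2)
```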

1.6 Frequential Analysis Of Linear Systems:
1.6.1 The frequential response:

The frequential analysis aims to study the behavior and the response of a linear system that
consists of a sinusoidal excitation , it consists of studying the input and output’s signal of the
amplitudes of the report’s variations, also the
phase shift between them while varying
the frequency.
In this analysis, the amplitude of the input
signal is constant while the variable is the
frequency or the pulsation ω=2πf. [ 4 ]

Figure 1-4: a frequency plot

There are two methods for obtaining the frequential response:

 Practical experimentation: running the system with a sinusoidal signal of constant
amplitude and variable frequency. For different values of the frequency, we measure
the amplitude of the output signal, and we calculate the ratio of the amplitudes i(0)
and o(0) and the phase shift Φ between the input signal and the output signal.
 Theoretically: using the concept called the transfer function.

1.6.2 The complex transfer function:

The transfer function for a continuous linear system comes in the form: H(s) = o(s)/i(s),

where o(s) and i(s) are the Laplace transforms of o(t) and i(t).
We replace the Laplace variable s with the term jω, so that the complex transfer function is
H(jω); the frequential analysis studies H(jω) in terms of ω.
Evaluated on the imaginary axis s = jω, H(jω) determines the entire frequency response. This
helps to explain its importance: by widening our view to all complex s, we can get a better view
of the frequency response, which is our true interest. [ 4 ]

1.6.3 Bode and Nyquist plots:

Bode plot:
The Bode plots show the frequency response of a system. There are, however, two separate
Bode plots: one for the gain and the other for the phase.
Nyquist plot:
The Nyquist plot combines both gain and phase into one plot; it is drawn by plotting the
complex gain H(jω) for all frequencies ω. [ 5 ]

 An example of Bode and Nyquist plots:

Consider the system $\ddot x + 2\dot x + x = 2f(t)$.
The system has complex gain $H(j\omega) = 2/P(j\omega)$, where $P(s) = s^2 + 2s + 1$. So the gain and phase are:

$$|H(j\omega)| = \frac{2}{|1-\omega^2+2j\omega|} = \frac{2}{\sqrt{(1-\omega^2)^2+(2\omega)^2}}, \qquad -\varphi = -\mathrm{Arg}(1-\omega^2+2j\omega)$$

Note: we write -φ because we consider φ to be the phase lag, so -φ is the phase. [ 5 ]
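The gain and phase can be evaluated numerically at any frequency. The sketch below is our own, and it assumes the system $\ddot x + 2\dot x + x = 2f(t)$ with $P(s) = s^2 + 2s + 1$ (our reading of the worked example); at ω = 1 the denominator is 2j, so the gain is 1 and the phase lag is π/2:

```python
import cmath, math

def H(omega):
    """Complex gain H(jω) = 2 / P(jω) with P(s) = s² + 2s + 1."""
    s = 1j * omega
    return 2.0 / (s * s + 2 * s + 1)

w = 1.0
gain  = abs(H(w))                 # 2 / sqrt((1-ω²)² + (2ω)²)
phase_lag = -cmath.phase(H(w))    # φ, the phase lag

dc_gain = abs(H(0.0))             # static gain: 2
```

Sweeping ω and plotting 20·log10(gain) and the phase would reproduce the Bode plots; plotting H(jω) in the complex plane gives the Nyquist plot.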

Figure 1-5: Nyquist plot. Figure 1-6: Bode gain plot. Figure 1-7: Bode phase plot (in degrees).

1.7 Filtering
1.7.1 Filter Definition:

In electronics, a filter (in the signal-processing sense) is a device or process that removes some
unwanted components or features from a signal. Filtering is a class of signal processing, the
defining feature of filters being the complete or partial suppression of some aspect of the
signal. Most often, this means removing some frequencies or frequency bands. However, filters
do not act exclusively in the frequency domain; especially in the field of image processing, many
other targets for filtering exist. In general, electronic filters remove unwanted frequency
components from the applied signal, enhance wanted ones, or both. [ 6 ]

1.7.2 Types of Filters and Functions


1.7.2.1 Types of Filters:
Filters have different effects on signals of different frequencies. According to this fact, the basic filter
types can be classified into four categories: low-pass, high-pass, band-pass, and band-stop. Each of
them has a specific application in DSP. One of the objectives may involve digital filter design in
applications. Generally, a filter is designed based on specifications primarily for the passband,
stopband, and transition band of the filter frequency response. The filter passband is the frequency
range over which the amplitude gain of the filter response is approximately unity. The filter stopband
refers to the frequency range over which the filter magnitude response is attenuated, to eliminate the
input signal components whose frequencies are within that range. The transition band means the
frequency range between the passband and the stopband.

Figure 1-8: Filtering out the noise (signal processing)

Because there are many different standards of classifying filters, and these overlap in many
different ways, there is no clearly distinctive classification. Filters may be:
 non-linear or linear
 analog or digital
 time-variant or time-invariant (also known as shift invariance)
 discrete-time (sampled) or continuous-time
 passive or active type of continuous-time filter
 infinite impulse response (IIR) or finite impulse response (FIR) type
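As a concrete illustration of the low-pass, IIR variety (a sketch of ours in Python, not code from the thesis), a first-order recursion y[n] = α·x[n] + (1−α)·y[n−1] attenuates wideband noise riding on a slow sine:

```python
import math, random

def lowpass(x, alpha):
    """First-order IIR low-pass filter: y[n] = α·x[n] + (1-α)·y[n-1]."""
    y, out = 0.0, []
    for s in x:
        y = alpha * s + (1 - alpha) * y
        out.append(y)
    return out

random.seed(0)
n = 5000
clean = [math.sin(2 * math.pi * i / 500) for i in range(n)]   # slow component
noisy = [c + random.gauss(0, 0.5) for c in clean]             # plus wideband noise

filtered   = lowpass(noisy, alpha=0.05)
err_before = math.sqrt(sum((a - b) ** 2 for a, b in zip(noisy, clean)) / n)
err_after  = math.sqrt(sum((a - b) ** 2 for a, b in zip(filtered, clean)) / n)
```

The RMS error against the clean signal drops substantially after filtering, at the cost of a small gain loss and phase lag on the slow component, exactly the passband/stopband trade-off described above.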

1.7.3 Filtering Functions:

 Separate useful signals from noise to improve noise immunity and the signal-to-noise ratio.
 Filter out unwanted frequencies to improve signal analysis accuracy.
 Separate a single frequency from a complex mixture of frequencies. [ 6 ]

Figure 1-9: Electronic filter

1.8 Filter Classifications Analysis

1.8.1 Passive Filter & Active Filter


1.8.1.1 Passive filter:
A passive filter is composed of passive components only. It is based on the principle that the
reactance of capacitive and inductive components changes with frequency. The advantages of
this type of filter are: a simple circuit, no power supply required, and high reliability. There are
also disadvantages: the signal in the pass-band suffers energy loss, the load effect is relatively
obvious, and electromagnetic induction is easily caused when using inductive components.
When the inductance is large, the size and weight of the filter are relatively large, which makes
it impractical in the low-frequency range.
The passive filter circuit has a simple structure and is easy to design, but its pass-band
magnification and cut-off frequency change with the load, so it is not suitable for occasions with
large signal processing requirements. Passive filter circuits are usually used in power
circuits, such as filtering after DC power rectification, or LC (inductance, capacitor) circuit
filtering when high current loads are used.

1.8.1.2 Active filter:

Active filters are composed of passive components and active devices. The advantages of this
type of filter are that the signal in the pass-band suffers no energy loss and can even be
amplified; the load effect is not obvious, and the mutual influence is small when multiple
stages are cascaded. The simple method of cascading makes it easy to form high-order filters,
and the device is small, lightweight, and does not require magnetic shielding. Their
disadvantages are that the pass-band range is limited by the bandwidth of the active device
and that a DC power supply is required; the reliability is not as high as that of a passive filter,
and it is not suitable for high-voltage, high-frequency, and high-power applications. [ 6 ]
The load of an active filter circuit does not affect the filtering characteristics, so it is often used
where demanding signal processing is required. An active filter circuit is generally composed of
an RC network and an integrated operational amplifier, so it can only be used with a suitable
DC power supply, and it can also amplify. However, the composition and design of the circuit
are more complicated. Active filter circuits are not suitable for high-voltage and high-current
applications.

2 CHAPTER II

Kalman Filter
Introduction:
The Kalman filter is a powerful mathematical tool that is playing an increasingly important role in
computer graphics as we include sensing of the real world in our systems. The good news is that you
don't have to be a mathematical genius to understand and effectively use Kalman filters. This
tutorial is designed to provide developers of graphical systems with a basic understanding of
this important mathematical tool.
While the Kalman filter has been around since 1960, it (and related optimal estimators) have
recently started popping up in a wide variety of computer graphics applications. These
applications span from simulating musical instruments in VR, to head tracking, to extracting lip
motion from video sequences of speakers, to fitting spline surfaces over collections of points.

The Kalman filter is the best possible (optimal) estimator for a large class of problems and a
very effective and useful estimator for an even larger class. With a few conceptual tools, the
Kalman filter is actually very easy to use. We will present an intuitive approach to this topic that
will enable developers to approach the extensive literature with confidence.

2.1 What is the Kalman filter?


Theoretically the Kalman Filter is an estimator for what is called the linear-quadratic problem,
which is the problem of estimating the instantaneous ‘‘state’’ of a linear dynamic system
perturbed by white noise—by using measurements linearly related to the state but corrupted
by white noise. The resulting estimator is statistically optimal with respect to any quadratic
function of estimation error. [ 7 ]
Practically, it is certainly one of the greatest discoveries in the history of statistical estimation theory and possibly the greatest discovery of the twentieth century. It has enabled humankind to do many things that could not have been done without it, and it has become as indispensable as silicon in the makeup of many electronic systems. Its most immediate applications have been the control of complex dynamic systems such as continuous manufacturing processes, aircraft, ships, or spacecraft. To control a dynamic system, you must first know what it is doing. For these applications, it is not always possible or desirable to measure every variable that you want to control, and the Kalman filter provides a means of inferring the missing information from indirect (and noisy) measurements. The Kalman filter is also used for predicting the likely future courses of dynamic systems that people are not likely to control, such as the flow of rivers during floods, the trajectories of celestial bodies, or the prices of traded commodities.
You can use a Kalman filter anywhere you have uncertain information about a dynamic system and can make an educated guess about what the system is going to do next. Even if messy reality comes along and interferes with the clean motion you guessed about, the Kalman filter will often do a very good job of figuring out what actually happened. And it can take advantage of correlations between phenomena that you maybe wouldn't have thought to exploit! Kalman filters are ideal for systems which are continuously changing. They have the advantage that they are light on memory (they don't need to keep any history other than the previous state), and they are very fast, making them well suited for real-time problems and embedded systems.

The math for implementing the Kalman filter appears pretty scary and opaque in most places
you find on Google. That’s a bad state of affairs, because the Kalman filter is actually super
simple and easy to understand if you look at it in the right way. Thus it makes a great article
topic. The prerequisites are simple; all you need is a basic understanding of probability and
matrices.[ 7 ]

2.2 What is it used for?
The applications of Kalman filtering encompass many fields, but its use as a tool is almost
exclusively for two purposes: estimation and performance analysis of estimators.

2.3 Estimating the State of Dynamic Systems:


What is a dynamic system? Almost everything: there is hardly anything in the universe that is truly constant. The orbital parameters of the asteroid Ceres are not constant, and even the ''fixed'' stars and continents are moving. Nearly all physical systems are dynamic to some degree. If one wants very precise estimates of their characteristics over time, then one has to take their dynamics into consideration. The problem is that one does not always know their dynamics very precisely either. Given this state of partial ignorance, the best one can do is express this ignorance more precisely using probabilities. The Kalman filter allows us to estimate the state of dynamic systems with certain types of random behavior by using such statistical information. A few examples of such systems are:
 The process control of a chemical plant
 The flood prediction of a river system
 The tracking of a spacecraft
 The navigation of a ship

2.4 Performance analysis of estimators


Below are some sensor types that might be used in estimating the state of the corresponding dynamic systems. The objective of design analysis is to determine how best to use these sensor types for a given set of design criteria. These criteria are typically related to estimation accuracy and system cost.[ 7 ]
The Kalman filter uses a complete description of the probability distribution of its estimation errors in determining the optimal filtering gains, and this probability distribution may be used in assessing its performance as a function of the ''design parameters'' of an estimation system, such as the types of sensors to be used and the locations and orientations of the various sensor types with respect to the system to be estimated.

 Examples of some sensor types:


In the process control of a chemical plant:
 Pressure
 Temperature
 Flow rate
 Gas analyzer
In the flood prediction of a river system:

 Water level
 Rain gauge
 Weather radar
In the tracking of a spacecraft:

 Radar
 Imaging system
In the navigation of a ship:

 Sextant
 Log
 Gyroscope
 Accelerometer
 Global Positioning Systems (GPS) receiver

Figure 2-10: Kalman filter application on a ship

Figure 2-11: Kalman filter application on a spacecraft

2.5 Advantages and Disadvantages of the Kalman Filter:


2.5.1 Advantages:
 The error covariance serves as a proxy for the prediction error, which in itself is an indicator of precision.

 Its algorithm works in the time domain with a recursive nature and is an optimal estimator in the least-squares sense.

 Another aspect of its optimality is the incorporation of all the information available on the system, measurements and errors, in an adaptive operator which is reset each time a new measurement becomes available.

 The big advantage of the method is that it provides at each iteration an estimate of the measurement and analysis error covariance matrices. However, it is necessary to correctly initialize these matrices at time $t_0$, and to have an estimate of the model error and observation error covariance matrices.

2.5.2 Disadvantages:

 The Kalman filter was developed only for linear Gaussian models.

 The Gaussian noise hypothesis is not essential for the operation of the Kalman filter: the filter approximates the density of the state given the observation (the conditional density) by a Gaussian density, determined by its mean and its covariance matrix. Nonlinearity of the model can make the conditional law of the state multi-modal, which renders the Kalman filter unsuitable.[ 7 ]

 When the system is strongly nonlinear, the extended Kalman filter can diverge (diverging: the estimate it provides is marred by errors that become larger and larger; the filter then becomes unstable and therefore unsatisfactory).

2.6 Mathematical development:

2.6.1 Linear formulation: [ 8 ]

2.6.2 Matrix formulation:

The basic equations of the Kalman filter can be written in expanded matrix form.

The equation of state:

$$\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}_{k+1} = \begin{bmatrix} A_{1,1} & \cdots & A_{1,n} \\ A_{2,1} & \cdots & A_{2,n} \\ \vdots & & \vdots \\ A_{n,1} & \cdots & A_{n,n} \end{bmatrix}_{k+1/k} \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}_{k} + \begin{bmatrix} W_1 \\ W_2 \\ \vdots \\ W_n \end{bmatrix}_{k}$$

where the terms are, from left to right, the state vector, the state transition matrix applied to the state, and the system noise vector.

The measurement equation:

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}_{k+1} = \begin{bmatrix} H_{1,1} & \cdots & H_{1,n} \\ H_{2,1} & \cdots & H_{2,n} \\ \vdots & & \vdots \\ H_{n,1} & \cdots & H_{n,n} \end{bmatrix}_{k+1/k} \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}_{k} + \begin{bmatrix} V_1 \\ V_2 \\ \vdots \\ V_n \end{bmatrix}_{k}$$

where $y$ is the vector of measurements, $H$ is the measurement matrix, and $V$ is the measurement noise vector.

2.7 How the Kalman filter operates:

The optimal estimate consists in finding the best estimate $\hat{x}_k$ of the state $x_k$ by minimizing a criterion which is the variance of the estimation error:

$$e_k = x_k - \hat{x}_k$$

Kalman filtering has two distinct phases: prediction and update (correction). The prediction phase uses the estimated state of the previous instant to produce a current estimate. In the update step, the observations of the current instant are used to correct the predicted state in order to obtain a more accurate estimate.

Figure 2-12: Kalman filter steps

2.7.1 The prediction phase:

Consider instant $k$. At this moment, we have an initial estimate based on the knowledge of the process and of the measurements up to the previous instant, $k-1$. This estimate is called the a priori estimate.

If we write $\hat{x}_{k/k-1}$ for the a priori state estimate, the a priori error is given by:

$$e_{k/k-1} = x_k - \hat{x}_{k/k-1}$$

The prediction equations are:

$$\hat{x}_{k/k-1} = A_{k-1}\,\hat{x}_{k-1/k-1} + B_{k-1}\,u_{k-1}, \qquad P_{k/k-1} = A_{k-1}\,P_{k-1/k-1}\,A_{k-1}^{T} + Q_{k-1}$$

2.7.2 The update phase (correction):

We now use the measurement $y_k$ to correct the prior estimate $\hat{x}_{k/k-1}$ and obtain the posterior estimate $\hat{x}_{k/k}$. [ 8 ]

The a posteriori error is:

$$e_{k/k} = x_k - \hat{x}_{k/k}$$

The equations of the correction phase are:

$$\hat{x}_{k/k} = \hat{x}_{k/k-1} + K_k\,(y_k - H_k\,\hat{x}_{k/k-1}), \qquad P_{k/k} = (I - K_k H_k)\,P_{k/k-1}$$

where $I$ is the identity matrix of the same size as $P_{k/k-1}$.

The optimal Kalman gain is:

$$K_k = P_{k/k-1}\,H_k^{T}\,(H_k\,P_{k/k-1}\,H_k^{T} + R_k)^{-1}$$
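To make these equations concrete, here is a minimal scalar sketch of one prediction/correction cycle, with $A = H = 1$ (i.e. repeatedly measuring a constant). The numeric values at the end are illustrative only:

```python
def kalman_step(x_est, p_est, y_meas, q, r):
    """One scalar predict/update cycle of a Kalman filter with A = H = 1."""
    # Prediction: x_{k/k-1} = A x_{k-1/k-1},  P_{k/k-1} = A P A + Q
    x_pred = x_est
    p_pred = p_est + q
    # Correction: optimal gain, corrected state, reduced covariance
    k_gain = p_pred / (p_pred + r)
    x_new = x_pred + k_gain * (y_meas - x_pred)
    p_new = (1.0 - k_gain) * p_pred
    return x_new, p_new, k_gain

# With equal prior and measurement variance, the gain is 0.5 and the
# corrected estimate lands halfway between prediction and measurement.
x, p, k = kalman_step(x_est=0.0, p_est=1.0, y_meas=1.0, q=0.0, r=1.0)
```

Note how the corrected covariance (0.5) is smaller than the predicted one (1.0): the measurement always reduces uncertainty.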

2.8 Kalman filter algorithm:

The Kalman filter algorithm consists of two stages: prediction and update. Note that the terms "prediction" and "update" are often called "propagation" and "correction," respectively, in the literature. The Kalman filter algorithm is summarized as follows:
Prediction:

Predicted state estimate: $\hat{x}_k^- = F\,\hat{x}_{k-1}^+ + B\,u_{k-1}$

Predicted error covariance: $P_k^- = F\,P_{k-1}^+\,F^T + Q$

Update:

Measurement residual: $\tilde{y}_k = z_k - H\,\hat{x}_k^-$

Kalman gain: $K_k = P_k^-\,H^T\,(R + H\,P_k^-\,H^T)^{-1}$

Updated state estimate: $\hat{x}_k^+ = \hat{x}_k^- + K_k\,\tilde{y}_k$

Updated error covariance: $P_k^+ = (I - K_k H)\,P_k^-$

In the above equations, the hat operator, $\hat{\ }$, means an estimate of a variable; that is, $\hat{x}$ is an estimate of $x$. The superscripts $-$ and $+$ denote predicted (prior) and updated (posterior) estimates, respectively.
The predicted state estimate is evolved from the previous updated state estimate. The new term $P$ is called the state error covariance. It encodes the error covariance that the filter thinks the estimation error has. Note that the covariance of a random variable $x$ is defined as $\mathrm{cov}(x) = E[(x - \hat{x})(x - \hat{x})^T]$, where $E$ denotes the expected (mean) value of its argument. One can observe that the error covariance becomes larger at the prediction stage due to the summation with $Q$, which means the filter is more uncertain of the state estimate after the prediction step.
In the update stage, the measurement residual $\tilde{y}_k$ is computed first. The measurement residual, also known as the innovation, is the difference between the true measurement, $z_k$, and the estimated measurement, $H\hat{x}_k^-$. The filter estimates the current measurement by multiplying the predicted state by the measurement matrix. The residual, $\tilde{y}_k$, is then multiplied by the Kalman gain, $K_k$, to provide the correction, $K_k\tilde{y}_k$, to the predicted estimate $\hat{x}_k^-$. After it obtains the updated state estimate, the Kalman filter calculates the updated error covariance, $P_k^+$, which will be used in the next time step. Note that the updated error covariance is smaller than the predicted error covariance, which means the filter is more certain of the state estimate after the measurement is utilized in the update stage.
We need an initialization stage to implement the Kalman filter. As initial values, we need the initial guess of the state estimate, $\hat{x}_0^+$, and the initial guess of the error covariance matrix, $P_0^+$. Together with $Q$ and $R$, $\hat{x}_0^+$ and $P_0^+$ play an important role in obtaining the desired performance. There is a rule of thumb called "initial ignorance," which means that the user should choose a large $P_0^+$ for quicker convergence. Finally, one can implement a Kalman filter by running the prediction and update stages for each time step, $k = 1, 2, 3, \ldots$, after the initialization of estimates. Note that Kalman filters are derived based on the assumption that the process and measurement models are linear, i.e., they can be expressed with the matrices $F$, $B$, and $H$, and that the process and measurement noise are additive Gaussian. Hence, a Kalman filter provides an optimal estimate only if these assumptions are satisfied.
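The two stages summarized above can be written as two short functions. The sketch below uses Python/NumPy rather than MATLAB, purely for illustration; the symbols $F$, $B$, $H$, $Q$, $R$ follow the notation of this section:

```python
import numpy as np

def predict(x, P, F, Q, B=None, u=None):
    """Prediction stage: x^- = F x^+ + B u,  P^- = F P^+ F^T + Q."""
    x_pred = F @ x if (B is None or u is None) else F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z, H, R):
    """Update stage: residual, optimal gain, corrected state and covariance."""
    y = z - H @ x_pred                      # measurement residual (innovation)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny one-dimensional check: prior variance 1, measurement variance 1,
# so the update splits the difference between prediction and measurement.
xp, Pp = predict(np.array([0.0]), np.array([[1.0]]), F=np.eye(1), Q=np.zeros((1, 1)))
xn, Pn = update(xp, Pp, z=np.array([1.0]), H=np.eye(1), R=np.eye(1))
```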
Example
An example of implementing the Kalman filter is navigation, where the vehicle state, position, and velocity are estimated by using sensor output from an inertial measurement unit (IMU) and a global navigation satellite system (GNSS) receiver. In this example, we consider only position and velocity, omitting attitude information. The three-dimensional position and velocity comprise the state vector:

$$x = [p^T, v^T]^T$$

where $p = [p_x, p_y, p_z]^T$ is the position vector and $v = [v_x, v_y, v_z]^T$ is the velocity vector, whose elements are defined in the x, y, z axes. The state at time $k$ can be predicted from the previous state at time $k-1$ as:

$$x_k = \begin{bmatrix} p_k \\ v_k \end{bmatrix} = \begin{bmatrix} p_{k-1} + v_{k-1}\,\Delta t + \frac{1}{2}\,\tilde{a}_{k-1}\,\Delta t^2 \\ v_{k-1} + \tilde{a}_{k-1}\,\Delta t \end{bmatrix}$$

where $\tilde{a}_{k-1}$ is the acceleration applied to the vehicle. The above equation can be rearranged as:

$$x_k = \begin{bmatrix} I_{3\times3} & I_{3\times3}\,\Delta t \\ 0_{3\times3} & I_{3\times3} \end{bmatrix} x_{k-1} + \begin{bmatrix} \frac{1}{2}\,I_{3\times3}\,\Delta t^2 \\ I_{3\times3}\,\Delta t \end{bmatrix} \tilde{a}_{k-1}$$

where $I_{3\times3}$ and $0_{3\times3}$ denote the $3\times3$ identity and zero matrices, respectively. The process noise comes from the accelerometer output, $a_{k-1} = \tilde{a}_{k-1} + e_{k-1}$, where $e_{k-1}$ denotes the noise of the accelerometer output. Suppose $e_{k-1} \sim N(0, I_{3\times3}\,\sigma_e^2)$. From the covariance relationship $\mathrm{cov}(Ax) = A \Sigma A^T$, where $\mathrm{cov}(x) = \Sigma$, we get the covariance matrix of the process noise (neglecting the off-diagonal cross terms, as is done throughout this chapter) as:

$$Q = \begin{bmatrix} \frac{1}{2}\,I_{3\times3}\,\Delta t^2 \\ I_{3\times3}\,\Delta t \end{bmatrix} I_{3\times3}\,\sigma_e^2 \begin{bmatrix} \frac{1}{2}\,I_{3\times3}\,\Delta t^2 \\ I_{3\times3}\,\Delta t \end{bmatrix}^T = \begin{bmatrix} \frac{1}{4}\,I_{3\times3}\,\Delta t^4 & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3}\,\Delta t^2 \end{bmatrix} \sigma_e^2$$

Now, we have the process model as:

$$x_k = F\,x_{k-1} + B\,a_{k-1} + w_{k-1}$$

where

$$F = \begin{bmatrix} I_{3\times3} & I_{3\times3}\,\Delta t \\ 0_{3\times3} & I_{3\times3} \end{bmatrix}, \qquad B = \begin{bmatrix} \frac{1}{2}\,I_{3\times3}\,\Delta t^2 \\ I_{3\times3}\,\Delta t \end{bmatrix}, \qquad w_{k-1} \sim N(0, Q)$$
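The block matrices $F$, $B$ and $Q$ of this process model can be assembled directly from $3\times3$ blocks. A NumPy sketch, using $\Delta t = 1$ and $\sigma_e = 0.3$ as in the simulation described later (the block-diagonal $Q$ follows the simplified form used in this chapter):

```python
import numpy as np

dt = 1.0
sigma_e = 0.3                      # accelerometer noise standard deviation
I3, Z3 = np.eye(3), np.zeros((3, 3))

# State transition matrix of the constant-velocity model
F = np.block([[I3, I3 * dt],
              [Z3, I3]])

# Input matrix mapping acceleration into position and velocity
B = np.vstack([0.5 * I3 * dt**2,
               I3 * dt])

# Simplified (block-diagonal) process noise covariance of this chapter
Q = np.block([[0.25 * dt**4 * I3, Z3],
              [Z3, dt**2 * I3]]) * sigma_e**2
```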

The GNSS receiver provides position and velocity measurements corrupted by measurement noise $v_k$ as:

$$z_k = \begin{bmatrix} p_k \\ v_k \end{bmatrix} + v_k$$

It is straightforward to derive the measurement model as:

$$z_k = H\,x_k + v_k$$

where $H = I_{6\times6}$ and $v_k \sim N(0, R)$.

In order to conduct a simulation to see how it works, let us consider $N = 20$ time steps ($k = 1, 2, 3, \ldots, N$) with $\Delta t = 1$. It is recommended to generate a time history of the true state, or a true trajectory, first. The most convenient way is to generate the series of true accelerations over time and integrate them to get the true velocity and position. In this example, the true acceleration is set to zero and the vehicle is moving with a constant velocity, $v_k = [5, 5, 0]^T$ for all $k = 1, 2, 3, \ldots, N$, from the initial position $p_0 = [0, 0, 0]$. Note that one who uses the Kalman filter to estimate the vehicle state is usually not aware whether the vehicle has a constant velocity or not. This case is no different from the nonzero-acceleration case from the perspective of the Kalman filter models. If the filter designer (you) has some prior knowledge of the vehicle maneuver, process models can be designed in different forms to best describe various maneuvers, as in [9].
We need to generate the noise of the acceleration output and the GNSS measurements for every time step. Suppose the acceleration output, GNSS position, and GNSS velocity are corrupted with noise with variances of $0.3^2$, $3^2$, and $0.03^2$, respectively. For each axis, one can use the MATLAB function randn or normrnd to generate the Gaussian noise. The process noise covariance matrix, $Q$, and the measurement noise covariance matrix, $R$, can be constructed following the real noise statistics described above to get the best performance. However, keep in mind that in real applications we do not know the real statistics of the noises, and the noises are often not Gaussian. Common practice is to conservatively set $Q$ and $R$ slightly larger than the expected values to get robustness. Let us start filtering with the initial guesses
$$\hat{x}_0^+ = [2, -2, 0, 5, 5.1, 0.1]^T, \qquad P_0^+ = \begin{bmatrix} I_{3\times3}\,4^2 & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3}\,0.4^2 \end{bmatrix}$$

and the noise covariance matrices

$$Q = \begin{bmatrix} \frac{1}{4}\,I_{3\times3}\,\Delta t^4 & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3}\,\Delta t^2 \end{bmatrix} 0.3^2, \qquad R = \begin{bmatrix} I_{3\times3}\,3^2 & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3}\,0.03^2 \end{bmatrix}$$

where $Q$ and $R$ are constant for every time step. The more uncertain your initial guess for the state is, the larger the initial error covariance should be.
In this simulation, $M = 100$ Monte-Carlo runs were conducted. A single run is not sufficient for verifying the statistical characteristics of the filtering result, because each sample of a noise differs whenever the noise is sampled from a given distribution, and therefore every simulation run results in a different state estimate. The repetitive Monte-Carlo runs enable us to test a number of different noise samples for each time step.
The time history of the estimation errors of two Monte-Carlo runs is depicted in Figure 2-13. We observe that the estimation results of different simulation runs are different even if the initial guess for the state estimate is the same. You can also run the Monte-Carlo simulation with different initial guesses (sampled from a distribution) for the state estimate.
The standard deviation of the estimation errors and the estimated standard deviation for the x-axis position and velocity are drawn in Figure 2-14. The standard deviation of the estimation error, or the root mean square error (RMSE), can be obtained by computing the standard deviation of the $M$ estimation errors for each time step. The estimated standard deviation was obtained by taking the square root of the corresponding diagonal term of $P_k^+$. Drawing the estimated standard deviation for each axis is possible because the state estimates are independent of each other in this example. Care is needed if $P_k^+$ has nonzero off-diagonal terms. The estimated standard deviation and the actual standard deviation of the estimation errors are very similar; in this case, the filter is called consistent. Note that the estimated error covariance matrix is affected solely by $P_0^+$, $Q$, and $R$, judging from the Kalman filter algorithm. Different settings of these matrices will result in a different $P_k^+$ and therefore different state estimates. In real applications, you will be able to acquire only the estimated covariance, because you will hardly have a chance to conduct Monte-Carlo runs. Also, getting a good estimate of $Q$ and $R$ is often difficult. One practical approach to estimate the noise covariance matrices is the autocovariance least-squares (ALS) technique; alternatively, an adaptive Kalman filter where the noise covariance matrices are adjusted in real time can be used. Source code of a MATLAB implementation for this example can be found in the reference. It is recommended that readers change the parameters and the trajectory and see what happens.
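A single run of the whole simulation can be sketched in a few dozen lines. The version below is a Python/NumPy transcription of the setup described above ($N = 20$ steps, constant velocity $[5, 5, 0]$, the initial guesses and covariances given earlier); it is an illustrative re-implementation, not the original MATLAB source, and the random seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, N = 1.0, 20
I3, Z3 = np.eye(3), np.zeros((3, 3))

# Model matrices from the text
F = np.block([[I3, I3 * dt], [Z3, I3]])
B = np.vstack([0.5 * I3 * dt**2, I3 * dt])
H = np.eye(6)
Q = np.block([[0.25 * dt**4 * I3, Z3], [Z3, dt**2 * I3]]) * 0.3**2
R = np.block([[I3 * 3**2, Z3], [Z3, I3 * 0.03**2]])

# True trajectory: zero acceleration, constant velocity [5, 5, 0]
x_true = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 0.0])

# Initial guesses from the text
x_est = np.array([2.0, -2.0, 0.0, 5.0, 5.1, 0.1])
P = np.block([[I3 * 4**2, Z3], [Z3, I3 * 0.4**2]])
trace_P0 = np.trace(P)

for k in range(N):
    x_true = F @ x_true                          # propagate truth (true a = 0)
    a_meas = rng.normal(0.0, 0.3, 3)             # noisy accelerometer output
    z = H @ x_true + np.concatenate([rng.normal(0, 3, 3),
                                     rng.normal(0, 0.03, 3)])
    # Prediction
    x_est = F @ x_est + B @ a_meas
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(6) - K @ H) @ P

pos_error = np.linalg.norm(x_est[:3] - x_true[:3])
```

Averaging the squared errors of many such runs (with different seeds) reproduces the RMSE curves discussed above.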

Figure 2-13: Time history of estimation errors.

Figure 2-14: Actual and estimated standard deviation for x-axis estimate errors.

2.9 Signal Noise

2.9.1 DEFINING SIGNAL NOISE

Signal noise, in its most basic sense, is any unwanted interference that degrades a communication signal. Signal noise can interfere with both analog and digital signals; however, the amount of noise necessary to affect a digital signal is much higher. This is because digital signals communicate using a set of discrete electrical pulses to convey digital "bits." Those electrical pulses would require a lot of noise in order to be confused with one another.
Conversely, analog signals represent an infinite range of possible values using an established range, such as 4-20 mA or 0-10 V. In this case, any unwanted voltage or current spikes will cause a fluctuation in the message being communicated. Minuscule variations along analog signals, on the order of millivolts or microamps, typically do not result in a significant (or even perceptible) discrepancy. High levels of electrical noise, however, can produce large variations and therefore lead to substantial discrepancies, making communication between process control devices utterly impossible.[ 10 ]

Figure 2-15: Isolated noise from signal

As seen in Figure 2-15, signal noise injected onto an electrical communication line will add to or subtract from the expected signal value. In an industrial situation where vital processes are automatically controlled based on the measurement of that signal, any variation can lead to unpredictable and potentially damaging results.

2.9.2 COMMON CAUSES OF SIGNAL NOISE

Noise injection can occur anywhere in the system and at any physical location in which the
network is exposed. It can be the result of various factors at any location on the network. It may
seem a daunting task to troubleshoot signal noise; nonetheless, there are some causes that are
more common than others. These common causes account for the vast majority of signal noise
interfering with process control networks.

 GROUND LOOPS AND IMPROPER GROUNDING:


Ground loops inject additional current onto the signal loop via a voltage differential between two grounding locations in a multi-ground system. This and other grounding issues can lead to an influx of signal noise on an otherwise functional network.

 POOR WIRING PRACTICES:


Poorly wired networks, such as those not utilizing shielded twisted-pair and conduit, are more
susceptible to ambient electrical noise.

 POORLY DESIGNED PRODUCT CIRCUITRY:

Poorly designed electronic circuitry within devices, which does not provide adequate shielding
against internal and external sources of noise, will also be more likely to have signal issues.

 CLOSE PROXIMITY TO OTHER ELECTRICAL EQUIPMENT:


Devices or wires placed in close proximity to electrical equipment that generates strong
magnetic fields, such as generators, motors, or power lines, can pick up some of that
interference, which can contribute to fluctuations in communications signals.

 LONG WIRE LEADS PICKING UP RADIO FREQUENCY:


Long segments of wire (especially unshielded wire) essentially act as antennae; they pick up
radio waves and convert them to electrical signals, contributing to additional noise in the
system.

2.9.3 PROBLEMS ASSOCIATED WITH SIGNAL NOISE

 PROCESS SIGNAL DISTORTION:


The most common and obvious problem caused by signal noise is the distortion of the process signal, causing incorrect interpretation or display of a process condition by the equipment. The addition to and/or subtraction from the process signal translates into an incorrect process variable. To put this into context, see the example in Figure 2-16 below.

 APPARENT SIGNAL LOSS:


Though uncommon, extreme signal noise can lead to an apparent loss of signal. Most modern electronic equipment has built-in noise filtering; however, in extremely noisy environments this filtering will not be enough, which can lead to the equipment not receiving a signal and no communication taking place at all.

 IMPROPER CONTROL OF PROCESS:


In the example discussed in Figure 2-16, each device on the network is functioning exactly as intended; however, the signal noise caused a miscommunication between devices. Consequently, the tank remained empty. A system experiencing signal noise fluctuations could inadvertently turn relays and alarms on and off at irregular intervals because the noisy signals are being misunderstood. A situation like this results in the improper control of an industrial process.[ 10 ]

 THE APPLICATION: A radar level transmitter is measuring tank liquid level. It outputs a 4-20 mA signal (4 mA when empty and 20 mA when full) to a mechanical relay that, when triggered at 4.5 mA, activates a pump to begin filling the tank.
 THE PROBLEM: The tank empties and the transmitter outputs a 4 mA signal but, because of extreme signal noise, the relay receives a 5 mA signal and never triggers or activates the pump. The tank remains empty and the process grinds to a halt.

Figure 2-16: Signal noise causing miscommunication between devices
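This failure mode can be sketched numerically. The sample currents below are hypothetical values made up for illustration: two noise spikes push the raw 4 mA reading above the 4.5 mA trip point (so the relay fails to trip), while even a simple averaging filter brings the reading back below the threshold:

```python
TRIP_MA = 4.5            # relay trips (pump on) at or below this current
TRUE_SIGNAL_MA = 4.0     # tank empty: transmitter outputs 4 mA

# Noisy current samples as seen by the relay (hypothetical values);
# the 5.0 and 4.6 mA spikes are moments where the relay would fail to trip.
samples = [4.0, 4.2, 3.9, 5.0, 4.1, 3.8, 4.6, 4.0, 3.9, 4.1]

raw_failures = sum(1 for s in samples if s > TRIP_MA)   # readings above the trip point
filtered = sum(samples) / len(samples)                  # simple averaging filter

# The averaged reading falls back below 4.5 mA, so the relay trips correctly.
```

Real instrumentation uses better filters than a plain average, but the principle (attenuating spikes before a threshold decision) is the same.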

3 CHAPTER III

Kalman Filter Implementation with MATLAB

Introduction:
In this section we implement the Kalman filter using MATLAB in a system which consists of a thermal camera installed on an airborne platform observing the position of a stationary target. The obtained position is noisy because the thermal camera is affected by vibrations due to a malfunctioning platform. The objective is to estimate the position (along the x-axis) of the target. The measurement noise is additive, with a Gaussian probability density function with zero mean and a standard deviation of 1 m. The observation time is 5 seconds and the sampling time is 0.1 second. The steps are as follows:
1. Global variable declaration and common variables
2. Kalman filter
2.1. The prediction part
2.2. The estimation part
3. Averaging the results to eliminate uncertainty (Monte-Carlo runs)
4. Statistics (root mean square error)
5. Plotting

Figure 3-17: Thermal camera installed on an airborne platform

3.1 The Implementation with MATLAB
3.1.1 Global variable declaration and common variables:

We define and declare all variables: the sampling time, the time vector, the number of measurements, the identity matrix, the measurement noise mean, the measurement noise standard deviation, the initial position x, the transition matrix, the output coefficient matrix, the measurement covariance matrix, the generated measurement noise (Gaussian noise), the noisy measurement vector, the initial state, and the initial state covariance.

Figure 3-18: Variable declaration in MATLAB

3.1.2 Kalman filter: the prediction part and the estimation part

Then we implement the prediction part, followed by the estimation part of the Kalman filter.

3.1.2.1 The prediction part

3.1.2.2 The estimation part

Figure 3-20: The estimation part in MATLAB

3.1.3 Averaging the results to eliminate uncertainty

We run these parts multiple times and average the results to eliminate the uncertainty; these repetitions are called Monte-Carlo runs.

3.1.4 Statistics (root mean square error)

At the end we compute the root mean square error.

Figure 3-22: Statistics in MATLAB

3.1.5 Plotting

Then we plot the results.
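For readers without access to the MATLAB screenshots, the whole pipeline (declaration, prediction/estimation, Monte-Carlo averaging, RMSE) can be sketched as follows. This is an illustrative Python/NumPy transcription of the scenario described above (stationary target, 1 m noise standard deviation, 5 s at 0.1 s sampling); the true target position, the initial guesses, and the random seed are assumed values, not taken from the original code:

```python
import numpy as np

rng = np.random.default_rng(42)
dt, t_obs = 0.1, 5.0
n = int(t_obs / dt)        # 50 measurements over 5 seconds
true_x = 10.0              # true (stationary) target position, assumed value
sigma = 1.0                # measurement noise standard deviation (1 m)
runs = 100                 # number of Monte-Carlo runs

sq_err = np.zeros(n)
for _ in range(runs):
    z = true_x + sigma * rng.standard_normal(n)   # noisy position measurements
    x_est, p = 0.0, 100.0                         # initial state and covariance
    for k in range(n):
        # Stationary target: the prediction step leaves x and P unchanged
        # (transition matrix A = 1, process noise Q = 0).
        k_gain = p / (p + sigma**2)               # Kalman gain
        x_est += k_gain * (z[k] - x_est)          # estimation (correction) part
        p *= (1.0 - k_gain)                       # covariance update
        sq_err[k] += (x_est - true_x) ** 2

rmse = np.sqrt(sq_err / runs)   # RMSE per time step, averaged over the runs
```

With these parameters the RMSE falls from roughly the measurement noise level toward a few tens of centimetres over the 5-second window; the exact final value depends on the process noise assumed in the model.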

3.2 The plots:

Figure 3-23: Plotting in MATLAB

3.2.1 Target position

As we can see here, the red curve represents the target's true position, which is what we need to find, while the measured position is shown in green and, lastly, the estimated position is shown in blue.

3.2.2 Target root mean square error

Here we can see that the target root mean square error decreases with time, descending to almost 0.5 meters after 5 seconds.

45
General conclusion:
In This project we were introduced to the Kalman filter and we could comprehend it's
functionality and the concepts of prediction and estimation, through this filter we could study
how the noise affected a system and how to minimize these effects on this system in order to
study it with more accuracy. We have seen the Kalman filter's uses in the target tracking
domain in an example that demonstrated to us how it works on an fundamental basis and how
important this filter is in achieving results that seem impossible on first glance. We can only

46
imagine how more important it is in all the other various different domains in today's society
and it will surely keep being developed to reach mind blowing results.

Bibliographical references

[ 1 ] Renaux-Petel, R. (2015). L'analyse de Fourier en physique.
[ 2 ] Houchmandzadeh, B. (2010). Mathématiques pour la Physique.
[ 3 ] Laamri, E. H. (n.d.). Mesures, intégration, convolution, et transformée de Fourier des fonctions.
[ 4 ] Régime fréquentiel. (n.d.). Retrieved from www.specialautom.net: http://www.specialautom.net/regime-frequentiel.htm
[ 5 ] Orloff, H. M. (n.d.). Frequency response; frequency domain; Bode and Nyquist plots; transfer function. Retrieved from https://math.mit.edu/stoopn/18.031/class2-reading.pdf
[ 6 ] Apogeeweb. (n.d.). Filter (signal processing). Retrieved from https://www.apogeeweb.net/electron/FilterSignal-Processing.html
[ 7 ] Grewal, M. S., & Andrews, A. P. Kalman Filtering: Theory and Practice. New York, Singapore, Toronto, Weinheim.
[ 8 ] Harkat, S. (2016). Application du filtre de Kalman sur la variabilité pluviométrique dans le bassin versant du Chéliff, Algérie. Université de Chlef, 188 pp.
[ 9 ] Li, X. R., & Jilkov, V. P. (2003). Survey of maneuvering target tracking. Part I: Dynamic models. IEEE Transactions on Aerospace and Electronic Systems, 39(4), 1333-1364.
[ 10 ] Paonessa, S. (n.d.). Reducing signal noise in practice. Predig. Retrieved from https://www.predig.com/whitepaper/reducing-signal-noise-practice

