A dissertation presented
for the Licence degree in
« Electrical Engineering »
Option:
« Control Systems »
Acknowledgments
We thank Allah for all the strength and might that he bestowed upon us, the
courage and will that he blessed us with to accomplish this work.
We thank our professor, Mr. HIMOUR, for guiding us and providing us with rich
and valuable advice and information.
We thank every teacher and comrade from this faculty who aided us, taught us,
and put us on the path that led us to this moment. Thank you to everyone who
made this possible.
Table of Contents
1 CHAPTER I
1.1 Fourier series
1.1.1 Definition
1.1.2 Periodic Signals
1.2 Fourier theorem
1.3 Fourier transform
1.3.1 Definition
1.3.2 Inverse Fourier transform
1.3.3 Fourier Transform Properties
1.4 The relationship between Fourier series and Fourier transform
1.5 The relationship between Fourier transform and Laplace transform
1.5.1 Definition
1.5.2 Laplace transform
1.5.3 Link with the Fourier transform
1.6 Frequency Analysis of Linear Systems
1.6.1 The frequency response
1.6.2 The complex transfer function
1.6.3 Bode and Nyquist plots
1.7 Filtering
1.7.1 Filter Definition
1.7.2 Types of Filters and Functions
1.7.3 Filtering Functions
1.8 Filter Classifications Analysis
1.8.1 Passive Filter & Active Filter
2 CHAPTER II
2.1 What is the Kalman filter?
2.2 What is it used for?
2.3 Estimating the State of Dynamic Systems
2.4 Performance analysis of estimators
2.5 Advantages and Disadvantages of the Kalman Filter
2.5.1 Advantages
2.5.2 Disadvantages
2.6 Mathematical development
2.6.1 Linear formulation
2.6.2 Matrix formulation
2.7 How the Kalman filter operates
2.7.1 The prediction phase
2.7.2 The update phase (correction)
2.8 Kalman filter algorithm
2.9 Signal Noise
2.9.1 Defining Signal Noise
2.9.2 Common Causes of Signal Noise
2.9.3 Problems Associated with Signal Noise
3 CHAPTER III
3.1 The Implementation with MATLAB
3.1.1 Global variable declaration and common variables
3.1.2 Kalman Filter: the prediction part, the estimation part
3.1.3 Averaging the results to eliminate uncertainty
3.1.4 Statistics (Root Mean Square Error)
3.1.5 Plotting
3.2 The plots
3.2.1 Target position
3.2.2 Target Root Mean Square Error
Table of Figures
Symbols and abbreviations
∫_R : Lebesgue integral
F(f) or f̂ : Fourier transform of f
F^{−1}(f) or F̄ : inverse Fourier transform of f
L²(R) : { f : f measurable and ∫_R |f|² < ∞ }
φ : unknown function
A : linear operator
k(x, y) : integral kernel
A_k : transition matrix; describes the evolution of the instantaneous state vector from instant k−1 to instant k, of size n × n
B_k : control matrix at instant k; depends on the modeling of the system
H_k : observation (measurement) matrix; the link between the system parameters and the measurements, of size m × n
u_k : vector representing the commands applied to the system at time k
w_k : modeling noise, related to the uncertainty in the process model
Q_k : process noise variance-covariance matrix at time k
y_k : measurement vector at time k, of size m × 1
r_k : measurement noise, of size m × 1
General Introduction:
One of the most challenging questions today is how to make meaningful guesses
from past observations. Obviously, no one can ultimately predict with one
hundred percent accuracy where, for instance, an unknown flying object will end
up. However, between a purely blind guess and a perfectly accurate forecast,
there is room for improvement and work. If, in addition, we are able somehow to
model the dynamics of the observed quantity and factor in some noise due to
unpredictable external or internal behavior, we can leverage this model
information to make an informed guess. It will not be 100 percent accurate, but
it is much better than a purely random guess. Indeed, this scientific question
of using a model and filtering noise has been extensively examined in various
fields; in control theory it led to the Kalman filter. This work emphasizes
Fourier series, the Fourier transform, and frequency analysis, and their
relationship with the core of our project, the Kalman filter. The latter is an
algorithm that provides estimates of some unknown variables given the
measurements observed over time. It has demonstrated its usefulness in various
applications despite having a relatively simple form and requiring little
computational power. However, it is still not easy for people who are not
familiar with estimation theory to understand and implement Kalman filters. We
therefore introduce the Kalman filter and extended Kalman filter algorithms,
respectively, including their applications. With linear models and additive
Gaussian noise, the Kalman filter provides optimal estimates. Tracking a
stationary target can be done, and how to implement the filtering algorithms
for such applications will be presented in detail.
1 CHAPTER I
Introduction:
Named in honor of the renowned French mathematician Jean-Baptiste Joseph, Baron Fourier
(1768–1830), and introduced initially for the purpose of solving the heat equation in a metal
plate, yet expanded upon to become a living field in our present day, the Fourier series
basically represents a periodic function f(x) as an infinite sum of sines and cosines.
1.1.1 Definition:
Many phenomena are characterized by a set of signals of different kinds presenting a periodic
form. We can think of the sunspot cycle, biological observables of the human body (aortic
pressure, electrocardiogram ...), electronic signals, complex sounds produced by musical
instruments, etc. We denote this signal by f(t), with t a real variable. To fix ideas, we can
imagine that t is the temporal variable, although this is not necessary; t can also be a spatial
variable. The signal admits T as a period when we can write: [2]

f(t + T) = f(t)  ∀ t ∈ R, with T > 0

All the useful information about the signal is therefore found in a pattern of duration T. The
number ν of patterns found in an interval of one second is called the frequency and is
expressed in hertz (Hz). Since the pattern extends over a duration T, we have:
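The defining property f(t + T) = f(t) can be verified numerically; a tiny sketch with f(t) = sin(t), which admits T = 2π as a period (the function and the sample points are illustrative):

```python
import math

T = 2 * math.pi  # a period of the sine function

def f(t):
    return math.sin(t)

# f(t + T) must equal f(t) for every t, up to floating-point error
for t in [0.0, 0.5, 1.3, -2.7]:
    assert abs(f(t + T) - f(t)) < 1e-9
```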
Figure 1-1: Characteristics of a periodic signal
Frequency: ν = 1/T
The pattern has characteristics that can be easily measured once the signal is converted into an
electrical signal:

The peak-to-peak value corresponds to the difference between the maximum and the
minimum of f:

f_pp = max(f) − min(f)

Signals encountered in physics have a finite root mean square. Indeed, the power of a
signal is proportional to f²(t), so its average must be finite. The rms value is related
to the mean square via the relation:
f_rms = √(⟨f²⟩) = √( (1/T) ∫_0^T f²(t) dt )
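This relation can be checked numerically; a small sketch approximating the integral with a Riemann sum for a sinusoid of amplitude A, for which the rms value is A/√2 (the amplitude, period, and sample count are illustrative):

```python
import math

A, T, N = 2.0, 1.0, 100000  # amplitude, period, number of samples (illustrative)

def f(t):
    return A * math.sin(2 * math.pi * t / T)

# Approximate (1/T) * integral of f^2 over one period with a Riemann sum
mean_square = sum(f(k * T / N) ** 2 for k in range(N)) / N
f_rms = math.sqrt(mean_square)

# For a sinusoid, f_rms = A / sqrt(2)
assert abs(f_rms - A / math.sqrt(2)) < 1e-3
```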
The term a_n cos(2πnνt) + b_n sin(2πnνt) represents the harmonic of order n. The harmonic of
rank n = 1 is also called the fundamental of f.
Figure 1-3: Spectral representation
The Fourier series converges pointwise to f(t) if the signal is continuous and of finite energy
over a period.
The set of Fourier coefficients (a_n, b_n) completely determines the shape of the periodic pattern.
This is why another way of representing a signal is to provide the histogram of the Fourier
coefficients: we obtain what we call the spectral representation, or the Fourier spectrum, of f.
For example, two notes of the same pitch played by two different musical instruments present
two spectra made up of the same harmonics but with different relative weights. These notes
are of identical pitch but of distinct timbre.
If the function f(t) is known, we can determine the Fourier coefficients by integration. For
example, if we take the mean of the Fourier series we find a₀. The first Fourier coefficient
therefore represents the continuous (DC) component of f:
Calculation of a₀:

a₀ = f_cc = (1/T) ∫_0^T f(t) dt
T 0
1.3.1 Definition:
We call the Fourier transform of f the map, denoted f̂ or F(f), defined for every ξ ∈ R by:

F(f)(ξ) = f̂(ξ) = ∫_{−∞}^{+∞} f(x) e^{−2πiξx} dx
Let f ∈ L¹(R). We call the conjugate (inverse) Fourier transform of f the function:

F^{−1}(f)(x) = ∫_{−∞}^{+∞} f(ξ) e^{+2πiξx} dξ
We say that f(t) and f̂(ν) form a pair of Fourier transforms. We go from one to the other by a
Fourier transformation (FT) or an inverse Fourier transformation (FT^{−1}):

f(t) ⇌ f̂(ν)
Linearity - By virtue of the linearity of integration, the Fourier transform is also a linear
operation:
af ( t ) +bg ( t ) ⇌ a f^ ( v )+ b g^ (v)
Parity – If f(t) is an even function then f̂(ν) is real and even. If f(t) is an odd function
then f̂(ν) is imaginary and odd. In all cases |f̂(ν)| is an even function, which is why we
sometimes restrict its representation to R⁺.
Translation – A time shift multiplies the transform by a phase factor:

f(t − τ) ⇌ e^{−2πiντ} f̂(ν)

Scaling – A time dilation contracts the spectrum:

f(t/a) ⇌ a f̂(aν)
Duality – This property makes it easy to obtain new pairs of Fourier transforms from
already known pairs. Indeed,

if f(t) ⇌ f̂(ν) then f̂(−t) ⇌ f(ν)
1.4 The relationship between Fourier series and Fourier transform:
Let us first consider a periodic signal f(t) with period T₀ which can be decomposed into a
Fourier series. So we have: [3]

f(t) = a₀ + Σ_{n=1}^{∞} [ a_n cos(2πnν₀t) + b_n sin(2πnν₀t) ]

This signal is not square-integrable and does not have a Fourier transform in the classical
sense of the term. However, we can define a Fourier transform of such a signal in the sense of
distributions. Indeed, we have just encountered the following pairs of transforms:
cos(2πnν₀t) ⇌ ½ [ δ(ν − nν₀) + δ(ν + nν₀) ]
sin(2πnν₀t) ⇌ (1/2i) [ δ(ν − nν₀) − δ(ν + nν₀) ]

so that:

f̂(ν) = Σ_{n=−∞}^{+∞} c_n δ(ν − nν₀)
Let f : R⁺ → C be piecewise continuous. We call the Laplace transform of the function f the function:

F(p) = L(f(t))(p) = ∫_0^{+∞} f(t) e^{−pt} dt,  p ∈ C
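The definition can be checked numerically; a sketch approximating the integral for f(t) = e^{−at}, whose Laplace transform is 1/(p + a) (the values of a and p are illustrative, and the infinite upper bound is truncated):

```python
import math

a, p = 1.0, 2.0          # decay rate and (real) Laplace variable, illustrative
dt, t_max = 1e-4, 40.0   # integration step and truncation of the infinite bound

def f(t):
    return math.exp(-a * t)

# F(p) = integral from 0 to infinity of f(t) e^{-p t} dt, as a Riemann sum
F_p = sum(f(k * dt) * math.exp(-p * k * dt) * dt for k in range(int(t_max / dt)))

# Closed form: L{e^{-a t}}(p) = 1 / (p + a)
assert abs(F_p - 1.0 / (p + a)) < 1e-3
```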
The Laplace transform maps the vector space of functions of time into a vector space of
functions of the complex variable p. For the transform to exist, f must be of exponential
order, i.e. |f(t)| ≤ M e^{αt}, so that

|f(t) e^{−pt}| ≤ M e^{−(Re p − α)t}  and  ∫_0^{+∞} e^{−(Re p − α)t} dt

converges for Re p > α. A function such as f(t) = e^{t²}, which is not of exponential order,
does not respect this condition of existence and has no Laplace transform.
The Fourier transform of an absolutely integrable function over the set of real numbers is given
by:

f̂(α) = ∫_{−∞}^{+∞} f(t) e^{−iαt} dt
     = ∫_{−∞}^{0} f(t) e^{−iαt} dt + ∫_{0}^{+∞} f(t) e^{−iαt} dt
     = ∫_{0}^{+∞} f(−t) e^{+iαt} dt + ∫_{0}^{+∞} f(t) e^{−iαt} dt
From this link, we can say that the Laplace transformation is a generalization of the Fourier
transformation. In addition, and since the Laplace transform is a linear and bijective operator,
we deduce that the Fourier transform is too.
1.6 Frequency Analysis of Linear Systems:
1.6.1 The frequency response:
Frequency analysis aims to study the behavior and response of a linear system subjected to a
sinusoidal excitation. It consists of studying the variation of the ratio of the output and
input signal amplitudes, as well as the phase shift between them, while varying the frequency.
In this analysis, the amplitude of the input signal is constant while the variable is the
frequency, or the pulsation ω = 2πf. [4]
1.6.2 The complex transfer function:
The transfer function of a continuous linear system takes the form H(s) = O(s)/I(s), where
O(s) and I(s) are the Laplace transforms of the output o(t) and the input i(t).
We replace the Laplace variable s with the term jω so that the complex transfer function is
H(jω); frequency analysis studies H(jω) in terms of ω.
Evaluated on the imaginary axis, s = jω, H(s) determines the entire frequency response. This
helps to explain its importance: by widening our view to all complex s we can get a better view
of the frequency response, which is our true interest. [4]
Bode plot:
Bode plots show the frequency response of a system. There are, however, two separate Bode
plots: one for the gain and the other for the phase.
Nyquist plot:
The Nyquist plot combines both gain and phase into one plot; it is drawn by plotting the
complex gain H(jω) for all frequencies ω. [5]
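The quantities plotted in the Bode and Nyquist diagrams can be computed directly from H(jω); a minimal sketch for a first-order low-pass H(s) = 1/(1 + τs), an illustrative system rather than one taken from this chapter:

```python
import cmath
import math

tau = 0.1  # time constant of an illustrative first-order low-pass H(s) = 1/(1 + tau*s)

def H(w):
    """Complex gain H(jw), obtained by substituting s = jw."""
    return 1 / (1 + 1j * w * tau)

for w in [1.0, 10.0, 100.0]:
    g = H(w)
    gain_db = 20 * math.log10(abs(g))         # ordinate of the Bode gain plot
    phase_deg = math.degrees(cmath.phase(g))  # ordinate of the Bode phase plot
    print(f"w={w:6.1f}  gain={gain_db:7.2f} dB  phase={phase_deg:7.2f} deg")
# The Nyquist plot is the curve traced by H(jw) in the complex plane as w varies.

# At the cutoff w = 1/tau: gain is -3 dB and phase is -45 degrees
assert abs(abs(H(1 / tau)) - 1 / math.sqrt(2)) < 1e-9
assert abs(math.degrees(cmath.phase(H(1 / tau))) + 45.0) < 1e-9
```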
Figure 1-7: Bode gain plot   Figure 1-6: Bode phase plot (in degrees)   Figure 1-5: Nyquist plot

1.7 Filtering
1.7.1 Filter Definition:
In electronics, a filter (in signal processing) is a device or process that removes unwanted
components or features from a signal. Filtering is a class of signal processing, the defining
feature of filters being the complete or partial suppression of some aspect of the signal.
Most often, this means removing some frequencies or frequency bands. However, filters do not
exclusively act in the frequency domain; especially in the field of image processing, many
other targets for filtering exist. Electronic filters remove unwanted frequency components
from the applied signal, enhance wanted ones, or both. [6]
Figure 1-8: Filtering Out the Noise (signal processing)
1.7.2 Types of Filters and Functions
Because there are many different standards for classifying filters, and these overlap in many
ways, there is no clearly distinctive classification. Filters may be:
non-linear or linear
analog or digital
time-variant or time-invariant (also known as shift-invariant)
discrete-time (sampled) or continuous-time
passive or active (for continuous-time filters)
infinite impulse response (IIR) or finite impulse response (FIR)
1.7.3 Filtering Functions:
Separate useful signals from noise to improve signal immunity and signal-to-noise ratio.
Filter out unwanted frequencies to improve signal analysis accuracy.
Separate a single frequency from a complex set of frequencies. [6]
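As a minimal illustration of separating a useful signal from noise, here is a sketch of a first-order digital low-pass filter (a simple exponential smoother; the signal, noise level, and smoothing factor are all illustrative, not taken from this chapter):

```python
import math
import random

random.seed(0)
alpha = 0.1  # smoothing factor of the low-pass, 0 < alpha <= 1 (illustrative)

def lowpass(samples, alpha):
    """y[n] = y[n-1] + alpha * (x[n] - y[n-1]): attenuates fast (noisy) variations."""
    y, out = samples[0], []
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

# A slow sinusoid corrupted by additive Gaussian noise
clean = [math.sin(2 * math.pi * n / 200) for n in range(1000)]
noisy = [c + random.gauss(0, 0.3) for c in clean]
filtered = lowpass(noisy, alpha)

# The filtered signal is closer to the clean one than the noisy input was
err_noisy = sum((n - c) ** 2 for n, c in zip(noisy, clean))
err_filt = sum((f - c) ** 2 for f, c in zip(filtered, clean))
assert err_filt < err_noisy
```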
Figure 1-9: Electronic Filter
1.8 Filter Classifications Analysis
1.8.1 Passive Filter & Active Filter
Compared with a passive filter, the reliability of an active filter is not as high, and it is
not suitable for high-voltage, high-frequency, and high-power applications. [6]
The load of an active filter circuit does not affect its filtering characteristics, so it is
often used where superior signal processing is required. An active filter circuit is generally
composed of an RC network and an integrated operational amplifier, so it can only be used with
a suitable DC power supply; it can also provide amplification. However, the composition and
design of the circuit are more complicated. Active filter circuits are not suitable for
high-voltage and high-current applications.
2 CHAPTER II
Kalman Filter
Introduction:
The Kalman filter is a powerful mathematical tool that is playing an increasingly important
role in computer graphics as we include sensing of the real world in our systems. The good
news is you don't have to be a mathematical genius to understand and effectively use Kalman
filters. This tutorial is designed to provide developers of graphical systems with a basic
understanding of this important mathematical tool.
While the Kalman filter has been around for about 30 years, it (and related optimal
estimators) have recently started popping up in a wide variety of computer graphics
applications. These applications span from simulating musical instruments in VR, to head
tracking, to extracting lip motion from video sequences of speakers, to fitting spline
surfaces over collections of points.
2.1 What is the Kalman filter?
The Kalman filter is the best possible (optimal) estimator for a large class of problems and a
very effective and useful estimator for an even larger class. With a few conceptual tools, the
Kalman filter is actually very easy to use. We will present an intuitive approach to this
topic that will enable developers to approach the extensive literature with confidence.
The math for implementing the Kalman filter appears pretty scary and opaque in most places you
find on Google. That is a bad state of affairs, because the Kalman filter is actually super
simple and easy to understand if you look at it in the right way. The prerequisites are
simple; all you need is a basic understanding of probability and matrices. [7]
2.2 What is it used for?
The applications of Kalman filtering encompass many fields, but its use as a tool is almost
exclusively for two purposes: estimation and performance analysis of estimators.
Water level
Rain gauge
Weather radar
In the tracking of a spacecraft:
Radar
Imaging system
In the navigation of a ship:
Sextant
Log
Gyroscope
Accelerometer
Global Positioning Systems (GPS) receiver
Figure 2-10: Kalman filter application on a ship

2.5 Advantages and Disadvantages of the Kalman Filter:
2.5.1 Advantages:
Its algorithm works in the time domain, is recursive in nature, and is an optimal estimator
in the least-squares sense.
Another aspect of its optimality is the incorporation of all the information available on the
system, measurements, and errors into an adaptive operator which is reset each time a new
measurement becomes available.
The big advantage of the method is to provide at each iteration an estimate of the measurement
and analysis error covariance matrices. However, it is necessary to correctly initialize these
matrices at time t₀, and to have an estimate of the model error and observation error
covariance matrices.
2.5.2 Disadvantages:
The Kalman filter was developed only for Gaussian linear models.
The Gaussian noise hypothesis is not essential for the operation of the Kalman filter: the
latter approximates the density of the state given the observation (the conditional density)
by a Gaussian density, determined by its mean and its covariance matrix. The nonlinearity of
the model can cause multi-modality of the conditional law of the state, and thus makes the
Kalman filter unsuitable. [7]
When the system is strongly nonlinear, the extended Kalman filter can diverge (diverging:
the estimate it provides is marred by errors that become larger and larger; the filter then
becomes unstable and therefore unsatisfactory).
The basic equations of the Kalman filter can be written in matrix form:
The state equation:

[X₁, X₂, …, Xₙ]ᵀ_{k+1} = [A_{i,j}]_{k+1/k} [X₁, X₂, …, Xₙ]ᵀ_k + [W₁, W₂, …, Wₙ]ᵀ_k

The measurement equation:

[y₁, y₂, …, yₙ]ᵀ_{k+1} = [H_{i,j}]_{k+1/k} [X₁, X₂, …, Xₙ]ᵀ_k + [V₁, V₂, …, Vₙ]ᵀ_k
Figure 2-12: Kalman filter steps

The prediction phase:

x̂_{k/k−1} = A_{k−1} x̂_{k−1/k−1} + B_{k−1} u_{k−1}
P_{k/k−1} = A_{k−1} P_{k−1/k−1} A_{k−1}ᵀ + Q_{k−1}
We will now use the measurement y_k to correct the prior estimate x̂_{k/k−1} and get the
posterior estimate x̂_{k/k}. [8]
Update:

Innovation: ỹ_k = y_k − H_k x̂_{k/k−1}
Kalman gain: K_k = P_{k/k−1} H_kᵀ (H_k P_{k/k−1} H_kᵀ + R_k)^{−1}
Updated state estimate: x̂_{k/k} = x̂_{k/k−1} + K_k ỹ_k
Updated error covariance: P_{k/k} = (I − K_k H_k) P_{k/k−1}
The superscripts − and + denote the predicted (prior) and updated (posterior) estimates,
respectively. The predicted state estimate is evolved from the previous updated state
estimate. The term P is called the state error covariance; it encodes the error covariance
that the filter thinks the estimate error has. Note that the covariance of a random variable
x is defined as cov(x) = E[(x − x̂)(x − x̂)ᵀ], where E denotes the expected (mean) value of its
argument. One can observe that the error covariance becomes larger at the prediction stage
due to the summation with Q, which means the filter is more uncertain of the state estimate
after the prediction step.
In the update stage, the measurement residual ỹ_k is computed first. The measurement
residual, also known as the innovation, is the difference between the true measurement, z_k,
and the estimated measurement, H x̂_k⁻. The filter estimates the current measurement by
multiplying the predicted state by the measurement matrix. The residual, ỹ_k, is then
multiplied by the Kalman gain, K_k, to provide the correction, K_k ỹ_k, to the predicted
estimate x̂_k⁻. After it obtains the updated state estimate, the Kalman filter calculates the
updated error covariance, P_k⁺, which will be used in the next time step. Note that the
updated error covariance is smaller than the predicted error covariance, which means the
filter is more certain of the state estimate after the measurement is utilized in the update
stage.
We need an initialization stage to implement the Kalman filter. As initial values, we need
the initial guess of the state estimate, x̂₀⁺, and the initial guess of the error covariance
matrix, P₀⁺. Together with Q and R, x̂₀⁺ and P₀⁺ play an important role in obtaining the
desired performance. There is a rule of thumb called "initial ignorance," which means that
the user should choose a large P₀⁺ for quicker convergence. Finally, one can implement a
Kalman filter by running the prediction and update stages for each time step,
k = 1, 2, 3, …, after the initialization of the estimates. Note that Kalman filters are
derived based on the assumption that the process and measurement models are linear, i.e.,
they can be expressed with the matrices F, B, and H, and that the process and measurement
noise are additive Gaussian. Hence, a Kalman filter provides an optimal estimate only if
these assumptions are satisfied.
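As a minimal illustration of the prediction and update stages described above, here is a scalar (one-state) Kalman filter estimating a constant value from noisy measurements; the state model, noise variances, and initial values are all illustrative:

```python
import random

random.seed(1)

# Scalar Kalman filter for a constant state: F = 1, B = 0, H = 1 (illustrative)
Q, R = 1e-5, 0.5 ** 2   # process and measurement noise variances
x_est, P = 0.0, 10.0    # "initial ignorance": large initial covariance P0+

true_x = 3.7
for _ in range(200):
    z = true_x + random.gauss(0, 0.5)   # noisy measurement

    # Prediction: x_k- = x_{k-1}+, P_k- = P_{k-1}+ + Q
    x_pred, P_pred = x_est, P + Q

    # Update: Kalman gain, innovation, corrected state, corrected covariance
    K = P_pred / (P_pred + R)
    x_est = x_pred + K * (z - x_pred)
    P = (1 - K) * P_pred

assert abs(x_est - true_x) < 0.2   # the estimate converges near the true value
assert P < 0.01                    # and the filter is confident about it
```

Note how the gain K shrinks as P decreases: early measurements move the estimate a lot, later ones only slightly, exactly the behavior described for the update stage.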
Example
An example of implementing the Kalman filter is navigation, where the vehicle state, position,
and velocity are estimated by using sensor output from an inertial measurement unit (IMU) and
a global navigation satellite system (GNSS) receiver. In this example, we consider only
position and velocity, omitting attitude information. The three-dimensional position and
velocity comprise the state vector:

x = [pᵀ, vᵀ]ᵀ
Where p = [p_x, p_y, p_z]ᵀ is the position vector and v = [v_x, v_y, v_z]ᵀ is the velocity
vector, whose elements are defined in the x, y, z axes. The state at time k can be predicted
from the previous state at time k−1 as:

x_k = [p_k; v_k] = [ p_{k−1} + v_{k−1} Δt + ½ ã_{k−1} Δt² ;  v_{k−1} + ã_{k−1} Δt ]
Where ã_{k−1} is the acceleration applied to the vehicle. The above equation can be
rearranged as:

x_k = [[I₃ₓ₃, I₃ₓ₃ Δt], [0₃ₓ₃, I₃ₓ₃]] x_{k−1} + [[½ I₃ₓ₃ Δt²], [I₃ₓ₃ Δt]] ã_{k−1}
Where I₃ₓ₃ and 0₃ₓ₃ denote the 3×3 identity and zero matrices, respectively. The process
noise comes from the accelerometer output, a_{k−1} = ã_{k−1} + e_{k−1}, where e_{k−1} denotes
the noise of the accelerometer output. Suppose e_{k−1} ~ N(0, I₃ₓ₃ σ_e²). From the covariance
relationship cov(Ax) = A Σ Aᵀ, where cov(x) = Σ, we get the covariance matrix of the process
noise as:

Q = [[½ I₃ₓ₃ Δt²], [I₃ₓ₃ Δt]] (I₃ₓ₃ σ_e²) [[½ I₃ₓ₃ Δt²], [I₃ₓ₃ Δt]]ᵀ
  ≈ [[¼ I₃ₓ₃ Δt⁴, 0₃ₓ₃], [0₃ₓ₃, I₃ₓ₃ Δt²]] σ_e²

(neglecting the off-diagonal cross terms)
Where:

F = [[I₃ₓ₃, I₃ₓ₃ Δt], [0₃ₓ₃, I₃ₓ₃]],  B = [[½ I₃ₓ₃ Δt²], [I₃ₓ₃ Δt]],  w_{k−1} ~ N(0, Q)

The measurement model is:

z_k = [p_k; v_k] + v_k

Where H = I₆ₓ₆ and v_k ~ N(0, R).
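The process model above can be checked with a small sketch that propagates the state component-wise (equivalent to x_k = F x_{k−1} + B ã_{k−1}, written with plain lists instead of block matrices; the velocity and step count are the ones used in the simulation below):

```python
dt = 1.0  # time step, as in the simulation

def predict(p, v, a, dt):
    """One step of x_k = F x_{k-1} + B a_{k-1} for the position/velocity model."""
    p_next = [p[i] + v[i] * dt + 0.5 * a[i] * dt ** 2 for i in range(3)]
    v_next = [v[i] + a[i] * dt for i in range(3)]
    return p_next, v_next

# Constant velocity [5, 5, 0] and zero acceleration, starting from the origin
p, v = [0.0, 0.0, 0.0], [5.0, 5.0, 0.0]
for _ in range(3):
    p, v = predict(p, v, [0.0, 0.0, 0.0], dt)

# After 3 steps the position is 3 * dt * [5, 5, 0]
assert p == [15.0, 15.0, 0.0] and v == [5.0, 5.0, 0.0]
```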
In order to conduct a simulation to see how it works, let us consider N = 20 time steps
(k = 1, 2, 3, …, N) with Δt = 1. It is recommended to generate a time history of the true
state, or a true trajectory, first. The most convenient way is to generate the series of true
accelerations over time and integrate them to get the true velocity and position. In this
example, the true acceleration is set to zero and the vehicle is moving with a constant
velocity, v_k = [5, 5, 0]ᵀ for all k = 1, 2, 3, …, N, from the initial position
p₀ = [0, 0, 0]. Note that one who uses the Kalman filter to estimate the vehicle state is
usually not aware of whether the vehicle has a constant velocity or not. From the perspective
of this Kalman filter model, this case is no different from a nonzero-acceleration case. If
the filter designer (you) has some prior knowledge of the vehicle maneuver, process models
can be designed in different forms to best describe various maneuvers, as in [9].
We need to generate the noise of the acceleration output and the GNSS measurements for every
time step. Suppose the acceleration output, GNSS position, and GNSS velocity are corrupted
with noise with variances of 0.3², 3², and 0.03², respectively. For each axis, one can use
the MATLAB function randn or normrnd to generate the Gaussian noise.
The process noise covariance matrix, Q, and the measurement noise covariance matrix, R, can
be constructed following the real noise statistics described above to get the best
performance. However, keep in mind that in real applications we do not know the real
statistics of the noises, and the noises are often not Gaussian. Common practice is to
conservatively set Q and R slightly larger than the expected values to get robustness. Let us
start filtering with the initial guesses:

x̂₀⁺ = [2, −2, 0, 5, 5.1, 0.1]ᵀ

P₀⁺ = [[I₃ₓ₃ · 4², 0₃ₓ₃], [0₃ₓ₃, I₃ₓ₃ · 0.4²]]

Q = [[¼ I₃ₓ₃ Δt⁴, 0₃ₓ₃], [0₃ₓ₃, I₃ₓ₃ Δt²]] · 0.3²

R = [[I₃ₓ₃ · 3², 0₃ₓ₃], [0₃ₓ₃, I₃ₓ₃ · 0.03²]]
where Q and R are constant for every time step. The more uncertain your initial guess for the
state is, the larger the initial error covariance should be.
In this simulation, M = 100 Monte-Carlo runs were conducted. A single run is not sufficient
for verifying the statistical characteristics of the filtering result, because each noise
sample differs whenever the noise is drawn from a given distribution, and therefore every
simulation run results in a different state estimate. The repetitive Monte-Carlo runs enable
us to test a number of different noise samples for each time step.
The time history of the estimation errors of two Monte-Carlo runs is depicted in Figure 2-5.
We observe that the estimation results of different simulation runs are different even if the
initial guess for the state estimate is the same. You can also run the Monte-Carlo simulation
with different initial guesses (sampled from a distribution) for the state estimate.
The standard deviation of the estimation errors and the estimated standard
deviation for x-axis position and velocity are drawn in Figure 2-14. The standard
deviation of the estimation error, or the root mean square error (RMSE), can be
obtained by computing the standard deviation of the M estimation errors at each time
step. The estimated standard deviation is obtained by taking the square root of the
corresponding diagonal term of P_k^+. Drawing the estimated standard deviation for
each axis is possible because the state estimates are independent of each other in this
example; care is needed if P_k^+ has nonzero off-diagonal terms. The estimated standard
deviation and the actual standard deviation of the estimation errors are very similar. In this case,
the filter is called consistent. Note that the estimated error covariance matrix is determined solely by
P_0^+, Q, and R, as can be seen from the Kalman filter algorithm. Different settings of these matrices
will result in a different P_k^+ and therefore different state estimates. In real applications, you will
be able to acquire only the estimated covariance, because you will rarely have the chance to
conduct Monte-Carlo runs. Moreover, obtaining a good estimate of Q and R is often difficult. One
practical approach to estimating the noise covariance matrices is the autocovariance least-
squares (ALS) technique; alternatively, an adaptive Kalman filter, in which the noise covariance matrices are
adjusted in real time, can be used. Source code of the MATLAB implementation for this example can
be found in. Readers are encouraged to change the parameters and the aircraft trajectory
and see what happens.
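The consistency check described above can be sketched in a few lines of Python/NumPy (the original example is implemented in MATLAB; the scalar random-walk model and all parameter values below are illustrative assumptions, not the thesis's setup):

```python
# Compare the actual RMSE of M Monte-Carlo runs against the estimated
# standard deviation sqrt(P_k^+) produced by a scalar Kalman filter.
import numpy as np

rng = np.random.default_rng(0)
M, N = 500, 50          # Monte-Carlo runs, time steps (assumed)
Q, R = 0.01, 1.0        # process and measurement noise variances (assumed)
P0 = 4.0                # initial error covariance (assumed)

errors = np.zeros((M, N))
P_hist = np.zeros(N)
for m in range(M):
    x_true, x_est, P = 0.0, 0.0, P0
    for k in range(N):
        x_true += np.sqrt(Q) * rng.standard_normal()   # random-walk truth
        # prediction step
        x_pred, P_pred = x_est, P + Q
        # update step with a noisy measurement
        z = x_true + np.sqrt(R) * rng.standard_normal()
        K = P_pred / (P_pred + R)
        x_est = x_pred + K * (z - x_pred)
        P = (1.0 - K) * P_pred
        errors[m, k] = x_est - x_true
        P_hist[k] = P   # identical for every run: P depends only on P0, Q, R

rmse = np.sqrt(np.mean(errors**2, axis=0))   # actual error std per time step
est_std = np.sqrt(P_hist)                    # estimated std per time step
print(rmse[-1], est_std[-1])                 # similar values => consistent filter
```

Note that `P_hist` is the same in every run, which is exactly the point made above: the estimated covariance depends only on P_0^+, Q, and R, not on the measurements.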
Figure 2-13: Time history of estimation errors.
Figure 2-14: Actual and estimated standard deviation for x-axis estimate errors.
2.9 Signal Noise
2.9.1 DEFINING SIGNAL NOISE
Signal noise, in its most basic sense, is any unwanted interference that degrades a
communication signal. Signal noise can interfere with both analog and digital signals; however,
the amount of noise necessary to affect a digital signal is much higher. This is because digital
signals communicate using a set of discrete electrical pulses to convey digital “bits.” As can be
seen in Figure 1, those electrical pulses would require a lot of noise in order to be confused
with one another.
Conversely, analog signals represent an infinite range of possible values within an established
range, such as 4-20 mA or 0-10 V.
Figure 2-15: Isolated noise from signal
In this case, any unwanted voltage or current spikes will cause a fluctuation in the message
being communicated. Minuscule variations along analog signals, on the order of millivolts or
microamps, typically do not result in a significant (or even perceptible) discrepancy. High levels
of electrical noise, however, can produce large variations and therefore lead to substantial
discrepancies, making communication between process control devices utterly impossible. [10]
As seen in Figure 2, signal noise injected onto electrical communication will add or detract from
the expected signal value. In an industrial situation where vital processes are automatically
controlled based on the measurement of that signal, any variation can lead to unpredictable
and potentially damaging results.
Noise injection can occur anywhere in the system and at any physical location where the
network is exposed, and it may be the result of various factors. Troubleshooting signal noise
may seem a daunting task; nonetheless, some causes are more common than others, and these
common causes account for the vast majority of signal noise interfering with process control
networks.
Devices with poorly designed electronic circuitry, which does not provide adequate shielding
against internal and external sources of noise, are also more likely to have signal issues.
THE APPLICATION: A radar level transmitter is measuring tank liquid level. It outputs a 4-20
mA signal (4 mA when empty and 20 mA when full) to a mechanical relay that, when triggered at
4.5 mA, activates a pump to begin filling the tank.
THE PROBLEM: The tank empties and the transmitter outputs a 4 mA signal but,
because of extreme signal noise, the relay receives a 5 mA signal and never triggers or
activates the pump. The tank remains empty and the process grinds to a halt.
3 CHAPTER III
Kalman Filter Implementation with MATLAB
Introduction:
In this section we implement the Kalman filter in MATLAB for a system consisting of a thermal
camera installed on an airborne platform, which observes the position of a stationary target.
The obtained position is noisy because the thermal camera is affected by vibrations from the
malfunctioning platform. The objective is to estimate the position (along the x-axis) of the target.
The measurement noise is additive, with a Gaussian probability density function with zero mean
and a standard deviation of 1 m. The observation time is 5 seconds and the sampling time is 0.1 second.
The steps used throughout are as follows:
1. Global variable declaration and common variables
2. Kalman filter
2.1. The prediction part
2.2. The estimation part
3. Averaging the results to eliminate uncertainty (Monte-Carlo runs)
4. Statistics (root mean square error)
5. Plotting
3.1 The Implementation with MATLAB
3.1.1 Global variable declaration and common variables:
We define and declare all variables: the sampling time, the time vector, the number of
measurements, the identity matrix, the measurement noise mean, the measurement noise
standard deviation, the initial position x, the transition matrix, the output coefficient matrix,
the measurement covariance matrix, the generated measurement noise (Gaussian noise), the
noisy measurement vector, the initial state, and the initial state covariance.
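These declarations might be sketched as follows in Python/NumPy (the thesis uses MATLAB; the variable names, the true position value, and the scalar state model below are assumptions for illustration):

```python
# Declare the common variables for the stationary-target example:
# 5 s observation at 0.1 s sampling, Gaussian measurement noise (std 1 m).
import numpy as np

dt = 0.1                        # sampling time [s]
N = 51                          # number of measurements: 0 to 5 s at 0.1 s
t = np.linspace(0.0, 5.0, N)    # time vector
I = np.eye(1)                   # identity matrix
noise_mean, noise_std = 0.0, 1.0     # measurement noise parameters
x_true = 10.0                   # true (stationary) target position [m], assumed
F = np.array([[1.0]])           # transition matrix: position stays constant
H = np.array([[1.0]])           # output coefficient (measurement) matrix
R = np.array([[noise_std**2]])  # measurement covariance matrix
v = np.random.default_rng(1).normal(noise_mean, noise_std, N)  # Gaussian noise
z = x_true + v                  # noisy measurement vector
x0 = np.array([[0.0]])          # initial state estimate (assumed)
P0 = np.array([[100.0]])        # initial state covariance (assumed)
```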
3.1.2 Kalman Filter
3.1.2.1 The prediction part
3.1.2.2 The estimation part
3.1.3 Averaging the results (Monte-Carlo runs)
We run the prediction and estimation parts multiple times and then average their results to
eliminate uncertainty; this procedure is called Monte-Carlo simulation.
3.1.4 Statistics (root mean square error)
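Under the assumptions of the setup in the introduction (stationary target, Gaussian measurement noise with standard deviation 1 m, 5 s observation at 0.1 s sampling), the prediction/estimation loop with Monte-Carlo averaging might be sketched as follows in Python/NumPy (the thesis's own implementation is in MATLAB; the true position, initial guess, and number of runs are assumed values):

```python
# Scalar Kalman filter for a stationary target, repeated over M Monte-Carlo
# runs; the RMSE per time step is then computed across the runs.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
N, M = 51, 200                # measurements per run, Monte-Carlo runs (M assumed)
x_true = 10.0                 # stationary target position [m], assumed
R, Q = 1.0, 0.0               # measurement variance (std 1 m); no process noise

errors = np.zeros((M, N))
for m in range(M):
    x_est, P = 0.0, 100.0     # initial guess and covariance (assumed)
    for k in range(N):
        # prediction part: the target is stationary, so the state carries over
        x_pred, P_pred = x_est, P + Q
        # estimation part: correct the prediction with the noisy measurement
        z = x_true + rng.normal(0.0, 1.0)
        K = P_pred / (P_pred + R)          # Kalman gain
        x_est = x_pred + K * (z - x_pred)
        P = (1.0 - K) * P_pred
        errors[m, k] = x_est - x_true

rmse = np.sqrt(np.mean(errors**2, axis=0))  # RMSE at each time step
print(rmse[0], rmse[-1])      # the error shrinks as measurements accumulate
```

Averaging over the M runs smooths out the randomness of individual noise samples, which is the purpose of the Monte-Carlo procedure described above.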
3.1.5 Plotting
Then we plot the results. As we can see here, the red color represents the target's true position,
which is what we need to find, while the measured position is presented in green and, lastly,
the estimated position is represented in blue.
3.2.2
Here we can see that the target's root mean square error has been reduced over time,
descending to almost 0.5 meters after 5 seconds.
General conclusion:
In this project we were introduced to the Kalman filter, and we came to comprehend its
functionality and the concepts of prediction and estimation. Through this filter we could study
how noise affects a system and how to minimize its effects in order to study the system with
more accuracy. We have seen the Kalman filter's use in the target tracking domain, in an
example that demonstrated to us how it works on a fundamental basis and how important this
filter is in achieving results that seem impossible at first glance. We can only imagine how much
more important it is in all the other various domains in today's society, and it will surely keep
being developed to reach ever more impressive results.
Bibliographical references
[1] Renaux-Petel, R. (2015). L'analyse de Fourier en physique.
[2] Houchmandzadeh, B. (2010). Mathématiques pour la Physique.
[3] Laamri, E. H. (n.d.). Mesures, intégration, convolution, et transformée de Fourier des fonctions.
[4] Régime fréquentiel. (n.d.). Retrieved from www.specialautom.net:
http://www.specialautom.net/regime-frequentiel.htm
[5] Orloff, H. M. (n.d.). Frequency response; frequency domain; Bode and Nyquist plots; transfer
function. Retrieved from https://math.mit.edu/stoopn/18.031/class2-reading.pdf
[6] Apogeeweb. (n.d.). Filter Signal Processing. Retrieved from
https://www.apogeeweb.net/electron/FilterSignal-Processing.html
[7] Grewal, M. S., & Andrews, A. P. (n.d.). Kalman Filtering: Theory and Practice. New York,
Singapore, Toronto, Weinheim.
[8] Harkat, S. (2016). Application du Filtre de Kalman sur la variabilité pluviométrique dans le
bassin versant du Chellif, Algérie. Université de Chlef. 188 pp.
[9] Li, X. R., & Jilkov, V. P. (2003). Survey of maneuvering target tracking. Part I: Dynamic models.
IEEE Transactions on Aerospace and Electronic Systems, 39(4), 1333-1364.
[10] Paonessa, S. (n.d.). Reducing signal noise in practice. Retrieved from
https://www.predig.com/whitepaper/reducing-signal-noise-practice