Lecture 7
The Kalman filter, first introduced by R. Kalman (1960), is a stochastic state estimator. A linear dynamic system with noise may be described by:

x(k+1) = A x(k) + B u(k) + G w(k)
y(k)   = C x(k) + v(k)                                    (1)

where

x : n-dimensional state vector
u : m-dimensional control vector
y : p-dimensional vector of measurements
w : r-dimensional vector of process noise
v : p-dimensional vector of measurement noise
w is white noise with zero mean and covariance matrix Q = E{w w^T}, which we write w ~ (0, Q). It represents modelling uncertainties and disturbances.

v is white noise, v ~ (0, R), and it represents sensor inaccuracy. It is assumed that v and w are mutually uncorrelated, so that E{v w^T} = 0.

The initial state of the process x0 is also unknown, but we do have some knowledge about it in the form of its mean (expected) value x̄0 and covariance P0: x0 ~ (x̄0, P0). It is assumed that x0 is independent of v and w. The objective is to design an estimator that provides estimates of the state x(k), taking into account the known dynamics expressed in equation (1) and the output measurements y(k).
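As a minimal sketch of this setup, the system (1) can be simulated in Python. All matrices and noise levels below are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Sketch: simulating the noisy linear system (1).
# A, B, C, G, Q, R are assumed example values, not from the lecture.
rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])   # state transition matrix (assumed)
B = np.array([[0.0],
              [1.0]])        # control input matrix (assumed)
G = np.eye(2)                # process-noise input matrix
C = np.array([[1.0, 0.0]])   # measurement matrix: only x1 is measured

Q = 0.01 * np.eye(2)         # process noise covariance, w ~ (0, Q)
R = np.array([[0.04]])       # measurement noise covariance, v ~ (0, R)

x = np.array([1.0, 0.0])     # initial state x0
xs, ys = [], []
for k in range(50):
    u = np.array([0.1])                              # constant input (assumed)
    w = rng.multivariate_normal(np.zeros(2), Q)      # process noise w(k)
    v = rng.multivariate_normal(np.zeros(1), R)      # measurement noise v(k)
    y = C @ x + v                                    # y(k) = C x(k) + v(k)
    xs.append(x)
    ys.append(y)
    x = A @ x + B @ u + G @ w                        # x(k+1) from equation (1)
xs = np.array(xs)
ys = np.array(ys)
```

The recorded trajectories xs and ys play the roles of the (unknown) state and the available measurements in what follows.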
Propagation of means and covariances

The mean of the state x propagates as follows:

x̄(k+1) = A x̄(k) + B u(k) + G w̄(k)

and, since w̄(k) = 0,

x̄(k+1) = A x̄(k) + B u(k),    x̄(0) = x̄0        (2)
This implies that the mean of x propagates according to deterministic dynamics. To find how the covariance of x propagates, write:
P_x(k+1) = E{ [x(k+1) - x̄(k+1)] [x(k+1) - x̄(k+1)]^T }
         = E{ [A(x(k) - x̄(k)) + G w(k)] [A(x(k) - x̄(k)) + G w(k)]^T }
         = A E{ (x(k) - x̄(k))(x(k) - x̄(k))^T } A^T + G E{ w(k) w(k)^T } G^T
           + G E{ w(k)(x(k) - x̄(k))^T } A^T + A E{ (x(k) - x̄(k)) w(k)^T } G^T

Since w(k) is uncorrelated with x(k), the cross terms vanish, leaving

P_x(k+1) = A P_x(k) A^T + G Q G^T        (3)
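The propagation rules (2)-(3) can be verified by a Monte Carlo experiment: propagate an ensemble of sampled states through the noisy dynamics and compare the sample covariance with the analytic recursion. A, G, Q and the initial distribution below are assumed example values:

```python
import numpy as np

# Sketch: checking covariance propagation P(k+1) = A P(k) A^T + G Q G^T
# against a Monte Carlo ensemble. All numbers are assumed examples.
rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
G = np.eye(2)
Q = 0.01 * np.eye(2)

P0 = 0.1 * np.eye(2)          # initial covariance (assumed)
xbar0 = np.array([1.0, 0.0])  # initial mean (assumed)

# Analytic propagation over 10 steps
P = P0.copy()
for _ in range(10):
    P = A @ P @ A.T + G @ Q @ G.T

# Monte Carlo propagation of a large ensemble (u = 0 for simplicity)
X = rng.multivariate_normal(xbar0, P0, size=200_000)
for _ in range(10):
    W = rng.multivariate_normal(np.zeros(2), Q, size=X.shape[0])
    X = X @ A.T + W @ G.T
P_mc = np.cov(X.T)            # empirical covariance of the ensemble
err = np.abs(P - P_mc).max()  # should be small (sampling error only)
```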
The cross-covariance between state and output follows similarly:

P_{x(k)y(k)} = E{ (x(k) - x̄(k)) (C(x(k) - x̄(k)) + v(k))^T }
P_{x(k)y(k)} = P_x(k) C^T        (4)
Optimal linear estimate of x(k) given y(k)

Assume that the state estimate is a linear combination of the output plus a constant:

x̂(k) = F y(k) + g

for some matrix F and vector g. Define x̂(k|k-1) as the estimate of x(k) given information up to time k-1.
To find the best choice of F and g, let us minimize the mean square error

J = E{ (x(k) - x̂(k))^T (x(k) - x̂(k)) } = E{ x̃(k)^T x̃(k) }
  = tr E{ (x(k) - x̂(k)) (x(k) - x̂(k))^T }
  = tr E{ (x(k) - F y(k) - g) (x(k) - F y(k) - g)^T }
  = tr E{ [x(k) - x̂(k|k-1) - (F y(k) + g - x̂(k|k-1))] [x(k) - x̂(k|k-1) - (F y(k) + g - x̂(k|k-1))]^T }

where tr{·} denotes the trace of a matrix. After some algebra we have:

J = tr{ P(k|k-1) + F (P_{y(k)} + ȳ(k) ȳ(k)^T) F^T + (g - x̂(k|k-1))(g - x̂(k|k-1))^T
        + 2 F ȳ(k) (g - x̂(k|k-1))^T - 2 F P_{y(k)x(k)} }
Setting the derivatives of J with respect to g and F to zero gives

g = x̂(k|k-1) - F ȳ(k),    F = P_{x(k)y(k)} P_{y(k)}^-1

so that the optimal linear estimate is

x̂(k|k) = x̂(k|k-1) + P_{x(k)y(k)} P_{y(k)}^-1 (y(k) - ȳ(k))

Note that P_{x(k)y(k)} = P_{y(k)x(k)}^T.
A posteriori error covariance

To see how accurate this estimate x̂(k|k) is, the a posteriori error covariance P(k|k) may be computed:

P(k|k) = E{ [x(k) - x̂(k|k)] [x(k) - x̂(k|k)]^T }
       = E{ [x(k) - x̂(k|k-1) - P_{x(k)y(k)} P_{y(k)}^-1 (y(k) - ȳ(k))]
            [x(k) - x̂(k|k-1) - P_{x(k)y(k)} P_{y(k)}^-1 (y(k) - ȳ(k))]^T }

P(k|k) = P(k|k-1) - P_{x(k)y(k)} P_{y(k)}^-1 P_{y(k)x(k)}
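This a posteriori covariance can be checked numerically: sample (x, y) jointly, apply the linear estimate, and compare the empirical error covariance with the formula. The prior mean and covariance below are assumed example values:

```python
import numpy as np

# Sketch: Monte Carlo check of the linear estimate x̂ = x̄ + F (y - ȳ)
# and of P(k|k) = P(k|k-1) - P_xy Py^-1 P_yx. All numbers are assumed.
rng = np.random.default_rng(2)
C = np.array([[1.0, 0.0]])
R = np.array([[0.04]])
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])             # prior covariance P(k|k-1), assumed
xbar = np.array([1.0, -1.0])           # prior mean x̂(k|k-1), assumed

Pxy = P @ C.T                          # eq. (4): P_xy = P C^T
Py = C @ P @ C.T + R                   # output covariance P_y
F = Pxy @ np.linalg.inv(Py)            # optimal gain F = P_xy P_y^-1

N = 200_000
X = rng.multivariate_normal(xbar, P, size=N)
V = rng.multivariate_normal(np.zeros(1), R, size=N)
Y = X @ C.T + V                        # y = C x + v
Xhat = xbar + (Y - xbar @ C.T) @ F.T   # x̂ = x̄ + F (y - ȳ), ȳ = C x̄

P_post = P - F @ Pxy.T                 # analytic a posteriori covariance
P_emp = np.cov((X - Xhat).T)           # empirical error covariance
err = np.abs(P_post - P_emp).max()     # sampling error only
```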
Discrete-time Kalman filter algorithm

Initialization: set x̂(0) = x̄0, P(0) = P0, k = 1.

Step 1 (Time update): given x̂(k-1|k-1) and P(k-1|k-1), apply the time update (2) and (3) (effect of the system dynamics):

x̂(k|k-1) = A x̂(k-1|k-1) + B u(k-1)
P(k|k-1) = A P(k-1|k-1) A^T + G Q G^T

to obtain x̂(k|k-1) and P(k|k-1).

Step 2 (Measurement update): then, after obtaining the new measurement y(k), apply the following measurement update (effect of the measurement):

K(k) = P(k|k-1) C^T (C P(k|k-1) C^T + R)^-1
P(k|k) = (I - K(k) C) P(k|k-1)
x̂(k|k) = x̂(k|k-1) + K(k) (y(k) - C x̂(k|k-1))

to obtain the optimal estimates x̂(k|k) and P(k|k). K(k) is called the Kalman gain. Set k = k+1 and go to Step 1.
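The two steps above can be sketched in Python as one function and exercised on a simulated system. The matrices, noise levels, and initial guesses are assumed example values, not values from the lecture:

```python
import numpy as np

# Sketch of the discrete-time Kalman filter algorithm above.
def kalman_step(xhat, P, u_prev, y, A, B, C, G, Q, R):
    """One cycle: time update (2)-(3), then measurement update."""
    # Step 1: time update (effect of system dynamics)
    xpred = A @ xhat + B @ u_prev                # x̂(k|k-1)
    Ppred = A @ P @ A.T + G @ Q @ G.T            # P(k|k-1)
    # Step 2: measurement update (effect of measurement)
    S = C @ Ppred @ C.T + R                      # innovation covariance
    K = Ppred @ C.T @ np.linalg.inv(S)           # Kalman gain K(k)
    xhat = xpred + K @ (y - C @ xpred)           # x̂(k|k)
    P = (np.eye(len(xhat)) - K @ C) @ Ppred      # P(k|k)
    return xhat, P

# Small demo with assumed values
rng = np.random.default_rng(3)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
G = np.eye(2)
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])

x = np.array([1.0, 0.0])      # true state
xhat = np.zeros(2)            # deliberately wrong initial estimate
P = np.eye(2)
for k in range(200):
    u = np.array([0.1])
    x = A @ x + B @ u + G @ rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)
    xhat, P = kalman_step(xhat, P, u, y, A, B, C, G, Q, R)
rmse = float(np.linalg.norm(x - xhat))   # small after convergence
```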
[Block diagram: the Kalman filter as a model-based observer. The dynamic model x̂(k|k-1) = A x̂(k-1|k-1) + B u(k-1) is driven by u(k-1), corrected through the gain K(k), with a unit delay q^-1 closing the loop.]
Sub-optimal steady-state Kalman filter

Even when the plant model A, B, C is time-invariant, the optimal Kalman filter is time-varying, because the Kalman gain K(k) is a function of time. It is often satisfactory to use a simplified time-invariant filter with a constant gain K. At statistical steady state, the a priori error covariance P(k|k-1) reaches a steady state, which we call P. The update equation for the a priori covariance may then be written as:

P = A [P - P C^T (C P C^T + R)^-1 C P] A^T + G Q G^T
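This fixed-point (Riccati) equation can be solved numerically. A sketch in Python using SciPy's solve_discrete_are, with assumed example matrices (not the lecture's values):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Sketch: steady-state Kalman gain from the discrete algebraic Riccati
# equation above. A, C, G, Q, R are assumed example values.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
G = np.eye(2)
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])

# solve_discrete_are solves the dual (estimation) Riccati equation when
# called with A^T and C^T; the result is the steady-state a priori
# covariance P.
P = solve_discrete_are(A.T, C.T, G @ Q @ G.T, R)

# Constant gain K = P C^T (C P C^T + R)^-1
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

# Check the fixed-point equation
# P = A [P - P C^T (C P C^T + R)^-1 C P] A^T + G Q G^T
residual = P - (A @ (P - P @ C.T @ np.linalg.inv(C @ P @ C.T + R) @ C @ P) @ A.T
                + G @ Q @ G.T)
```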
The steady-state Kalman filter is the time-invariant system given by:

x̂(k|k) = x̂(k|k-1) + K (y(k) - C x̂(k|k-1))

Matlab expression to find the Kalman filter gain K using the Control Systems Toolbox:
with

A = [ 0.8  0.1      B = [ 0        C = [ 1  0 ],    h = 0.1
      0    0.2 ],         1 ],

which gives the steady-state gain

K = [ 0.0358
      0.0237 ]
There are control laws which require availability of the state vector x. If the state vector is not measured, it is necessary to use an estimate: u(t) = g(x̂(t)).

[Block diagram: the Kalman filter produces the estimate x̂, which the controller g(x̂) uses to compute the input u.]
Extended Kalman Filter

The Extended Kalman Filter (EKF) estimates the states of a nonlinear dynamic system, given a nonlinear dynamic model. In addition, the filter can estimate parameters within the nonlinear model by treating them as additional states. Given the nonlinear dynamic model:

dx/dt = f(x, u, t) + G(t) w(t)
y(t) = g(x(t)) + v(t)
x(0) = x0

where w ~ (0, Q), v ~ (0, R) and x0 ~ (x̄0, P0) are independent random variables, dim x = n, dim y = p, dim u = m.
The EKF algorithm is based on the linearization of the nonlinear dynamics f(·,·) and output function g(·) as follows:

A(x, u) = ∂f/∂x,    C(x) = ∂g/∂x

Algorithm initialization (Q, R, P0 and G given): set t = 0, k = 0, P(0) = P0, x̂(0) = x̄0.
EKF algorithm

Step 1 (Time update, effect of dynamics): integrate from t_{k-1} to t_k the following differential equations to obtain x̂(k|k-1) and P(k|k-1):

dx̂/dt = f(x̂(t), u(t))
dP/dt = A(x̂, u) P(t) + P(t) A(x̂, u)^T + G Q G^T

Step 2 (Measurement update, effect of measurement): measure the current output y(k) and then compute:

K(k) = P(k|k-1) C(x̂(k|k-1))^T [C(x̂(k|k-1)) P(k|k-1) C(x̂(k|k-1))^T + R]^-1
P(k|k) = [I - K(k) C(x̂(k|k-1))] P(k|k-1) [I - K(k) C(x̂(k|k-1))]^T + K(k) R K(k)^T
x̂(k|k) = x̂(k|k-1) + K(k) [y(k) - g(x̂(k|k-1))]
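The measurement update above (in its symmetric, so-called Joseph form) can be sketched as a small Python function. The output function, Jacobian, and numbers in the usage example are illustrative assumptions:

```python
import numpy as np

# Sketch of the EKF measurement update above, for a generic nonlinear
# output function g with Jacobian C(x̂). All names here are illustrative.
def ekf_measurement_update(xpred, Ppred, y, g, C_jac, R):
    C = C_jac(xpred)                             # C(x̂(k|k-1))
    S = C @ Ppred @ C.T + R                      # innovation covariance
    K = Ppred @ C.T @ np.linalg.inv(S)           # Kalman gain K(k)
    I_KC = np.eye(len(xpred)) - K @ C
    # Joseph form: preserves symmetry/positivity better than (I-KC)P
    P = I_KC @ Ppred @ I_KC.T + K @ R @ K.T      # P(k|k)
    xhat = xpred + K @ (y - g(xpred))            # x̂(k|k)
    return xhat, P

# Tiny usage example with an assumed scalar measurement y = x1^2 + v
g = lambda x: np.array([x[0] ** 2])
C_jac = lambda x: np.array([[2 * x[0], 0.0]])    # ∂g/∂x
xhat, P = ekf_measurement_update(np.array([1.0, 0.0]), np.eye(2),
                                 np.array([1.2]), g, C_jac,
                                 np.array([[0.1]]))
```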
Estimating parameters using the EKF

In order to estimate a vector of parameters θ (dim θ = n_θ) from a nonlinear state-space model

dx/dt = f(x, θ)

the state vector is augmented with the parameters,

X = [ x ; θ ]
and the parameters are modelled as constants with uncertain initial conditions:

dθ/dt = w_θ(t),    θ(0) = θ0
w_θ ~ (0, Q_θ),    θ0 ~ (θ̄0, P_θ0)
dX/dt = F(X, u)

The original matrix G is augmented to include the parameter noise effect, and the original covariance matrices P and Q are augmented to include the parameter covariances P_θ and Q_θ. Then the standard EKF algorithm can be applied.
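The augmentation step can be sketched as follows; the dynamics f and all numerical values are illustrative assumptions, not the lecture's model:

```python
import numpy as np

# Sketch: augmenting the state with parameters theta modelled as random
# constants, as described above. f is an illustrative placeholder.
def f(x, theta):
    # assumed example nonlinear dynamics dx/dt = f(x, theta)
    return np.array([x[1], -theta[0] * np.sin(x[0])])

def F(X, n_x):
    """Augmented dynamics dX/dt = F(X): X = [x; theta], dtheta/dt = 0."""
    x, theta = X[:n_x], X[n_x:]
    return np.concatenate([f(x, theta), np.zeros_like(theta)])

# Augmented covariances: block-diagonal with the parameter blocks
P0, P0_theta = 0.1 * np.eye(2), np.array([[0.5]])
Q, Q_theta = 0.01 * np.eye(2), np.array([[1e-6]])
P0_aug = np.block([[P0, np.zeros((2, 1))],
                   [np.zeros((1, 2)), P0_theta]])
Q_aug = np.block([[Q, np.zeros((2, 1))],
                  [np.zeros((1, 2)), Q_theta]])

# Evaluate the augmented dynamics at an assumed augmented initial state
X0 = np.concatenate([np.array([1.0, 0.0]), np.array([9.8])])
dX = F(X0, n_x=2)   # last component is zero: parameters are constants
```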
Simplified extended Kalman filter with discrete measurements and continuous dynamics

If measurements are taken as discrete samples with an interval Ts, a useful simplification of the EKF algorithm is achieved by assuming that the Jacobians remain constant between samples:

A(k) = e^{A_c(t_k) Ts},    C(k) = C_c(t_k)

Algorithm initialization (Q, R, P0 and G given): set t = 0, k = 0, P(0) = P0, x̂(0) = x̄0.
Step 1 (Time update, effect of dynamics): integrate from t_{k-1} to t_k the following differential equation to obtain x̂(k|k-1):

dx̂/dt = f(x̂(t), u(t))

The a priori state covariance, which is a measure of the state-estimation error at time k calculated using measured information up to time k-1, is obtained from:

P(k|k-1) = A(k) P(k-1|k-1) A(k)^T + Q

Step 2 (Measurement update, effect of measurement): measure the current output y(k) and then compute:

(a) the Kalman filter gain:

K(k) = P(k|k-1) C(k)^T (C(k) P(k|k-1) C(k)^T + R)^-1

(b) the corrected state estimate:

x̂(k|k) = x̂(k|k-1) + K(k) [y(k) - g(x̂(k|k-1))]

Set k = k+1 and return to Step 1.
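The simplified loop above can be sketched on an assumed single-link pendulum model (the lecture's own model equations are not reproduced here; the dynamics, a, b, Q, and the initial guesses below are assumptions, while Ts = 0.1 s and R = 0.0016 match the example that follows):

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the simplified EKF, with an assumed pendulum model
# dx1/dt = x2, dx2/dt = -a*sin(x1) - b*x2, measurement y = x1 + v.
a, b, Ts = 9.8, 0.5, 0.1

def f(x):
    return np.array([x[1], -a * np.sin(x[0]) - b * x[1]])

def A_jac(x):
    # continuous-time Jacobian A_c = df/dx at the current estimate
    return np.array([[0.0, 1.0],
                     [-a * np.cos(x[0]), -b]])

C = np.array([[1.0, 0.0]])        # y = g(x) = x1, so C is constant here
Q = 5e-3 * np.eye(2)              # assumed tuning value
R = np.array([[0.0016]])          # measurement noise variance (from text)

rng = np.random.default_rng(4)
x_true = np.array([1.0, 0.0])     # "real" pendulum state
xhat = np.array([0.5, 0.0])       # deliberately wrong initial estimate
P = np.eye(2)

for k in range(100):
    # simulate the real system over one sample (Euler substeps) + noise
    for _ in range(10):
        x_true = x_true + (Ts / 10) * f(x_true)
    y = C @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    # Step 1: time update with frozen Jacobian A(k) = exp(A_c Ts)
    xhat = xhat + Ts * f(xhat)            # Euler integration of dx̂/dt = f
    Ak = expm(A_jac(xhat) * Ts)
    P = Ak @ P @ Ak.T + Q
    # Step 2: measurement update
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    xhat = xhat + K @ (y - C @ xhat)
    P = (np.eye(2) - K @ C) @ P

angle_err = float(abs(x_true[0] - xhat[0]))   # small after convergence
```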
Simplified EKF example

This simulation study uses the dynamic model of a single-link manipulator, where x1 is the angle and x2 is the angular velocity. It is assumed that the angle is measured, so that y = x1 + v(t).
Two models with the same structure and parameters were used in the simulation: one to represent the real pendulum, and one as the model used by the Kalman filter. Gaussian random noise with variance 0.0016 was added to the measurement. The sampling time of the measurements was 0.1 s, so the updating period of the extended Kalman filter was also 0.1 s. The pendulum started from the initial state x0 = [1 0]^T. The following tuning parameters were used for the extended Kalman filter: initial state covariance P0, and measurement noise covariance R = 0.0016.
[Figure: measured output y and estimate ŷ ("y and yhat") versus time, over 0-10 s.]