
Filtering and Identification

Lecture 4:
Kalman Filtering

Michel Verhaegen and Jan-Willem van Wingerden



Delft Center for Systems and Control


Delft University of Technology
An illustrative Exercise
Given: x = x0 + Sν (ν ∼ (0, I)) and y = F x + Lǫ (ǫ ∼ (0, I)) with
E[(x − x0)ǫ^T] = Γ.
Determine: the unbiased minimum-variance estimate of x! Solution:
Step 1: Estimator: x̃ = a linear transformation of the “known” data:

x̃ = [ M  N ] [ x0 ]
             [ y  ]

Step 2: Unbiasedness, E[x̃ − x] = 0:

M x0 + N F x0 − x0 = 0  ⇒  M + N F − I = 0

Step 3: Minimize the covariance matrix E[(x̃ − x)(x̃ − x)^T].

Let us first try to minimize the covariance matrix E[(x̃ − x0)(x̃ − x0)^T].


An illustrative Exercise
Given: x = x0 + Sν (ν ∼ (0, I)) and y = F x + Lǫ (ǫ ∼ (0, I)) with
E[(x − x0)ǫ^T] = Γ.
Determine: the unbiased minimum-variance estimate of x! Solution:
Step 1: Estimator: x̃ = a linear transformation of the “known” data:

x̃ = [ M  N ] [ x0 ]
             [ y  ]

Step 2: Unbiasedness, E[x̃ − x] = 0:

x̃ − x = [ M  N ] ( [ I ] x + [ −Sν ] ) − x
                   [ F ]     [ Lǫ  ]

E[x̃ − x] = [ M  N ] [ I ] x0 − x0 = 0  ⇒  M + N F − I = 0
                    [ F ]


An illustrative Exercise
Given: x = x0 + Sν (ν ∼ (0, I)) and y = F x + Lǫ (ǫ ∼ (0, I)) with
E[(x − x0)ǫ^T] = Γ.
Determine: the unbiased minimum-variance estimate of x! Solution:
Step 3: Wrong!
Minimize the covariance matrix E[(x̃ − x0)(x̃ − x0)^T]:

x̃ = (M + N F)x − M Sν + N Lǫ   (M + N F − I = 0)
  = Ix + (N F − I)(x − x0) + N Lǫ
x̃ − x0 = I(x − x0) + (N F − I)(x − x0) + N Lǫ
       = N F (x − x0) + N Lǫ

With P = E[(x − x0)(x − x0)^T] = SS^T,

E[(x̃ − x0)(x̃ − x0)^T] = N F P F^T N^T + N LL^T N^T + N F ΓL^T N^T + N LΓ^T F^T N^T

⇒ N = 0 ⇒ M = I ⇒ x̃ = x0: the data y would be ignored entirely, so this criterion is the wrong one.



An illustrative Exercise
Given: x = x0 + Sν (ν ∼ (0, I)) and y = F x + Lǫ (ǫ ∼ (0, I)) with
E[(x − x0)ǫ^T] = Γ.
Determine: the unbiased minimum-variance estimate of x! Solution:
Step 3: Minimize the covariance matrix E[(x̃ − x)(x̃ − x)^T]:

x̃ = Ix + (N F − I)(x − x0) + N Lǫ
x̃ − x = (N F − I)(x − x0) + N Lǫ

E[(x̃ − x)(x̃ − x)^T] = (N F − I)P (N F − I)^T + N LL^T N^T
                      + (N F − I)ΓL^T N^T + N LΓ^T (N F − I)^T
  = (N F P F^T N^T + P − N F P − P F^T N^T) + N LL^T N^T
    + (N F ΓL^T N^T − ΓL^T N^T) + (N LΓ^T F^T N^T − N LΓ^T)
  = P − N (F P + LΓ^T) − (P F^T + ΓL^T)N^T
    + N (F P F^T + LL^T + F ΓL^T + LΓ^T F^T)N^T
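As a numerical sanity check (not on the original slide), completing the square in this last expression suggests the minimizer N = (P F^T + ΓL^T)(F P F^T + LL^T + F ΓL^T + LΓ^T F^T)^{-1}. A short NumPy sketch, with made-up S, F, L and (for sampling simplicity) Γ = 0, confirms that this gain gives a smaller error covariance than the N = 0 answer of the wrong criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up problem data (assumptions, purely illustrative)
n, p = 2, 3
S = np.array([[1.0, 0.2], [0.0, 0.5]])      # x = x0 + S nu
F = rng.standard_normal((p, n))             # y = F x + L eps
L = np.diag([0.5, 0.4, 0.3])
P = S @ S.T                                 # P = E[(x - x0)(x - x0)^T]
# Gamma = E[(x - x0) eps^T] = 0 here (nu and eps drawn independently)

# Candidate optimal gain from completing the square in the slide's expression
W = F @ P @ F.T + L @ L.T
N_opt = P @ F.T @ np.linalg.inv(W)

# Monte-Carlo comparison of the two unbiased estimators (M = I - N F)
x0 = np.array([1.0, -1.0])
num = 200_000
x = x0 + rng.standard_normal((num, n)) @ S.T
y = x @ F.T + rng.standard_normal((num, p)) @ L.T

def mse(N):
    M = np.eye(n) - N @ F                   # unbiasedness: M + N F = I
    err = x0 @ M.T + y @ N.T - x            # x~ - x
    return np.trace(err.T @ err) / num

print(mse(N_opt), mse(np.zeros((n, p))))    # using y beats x~ = x0
```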


Overview
• The 3-key problems in sc4040
• Review: State observer
• The Stochastic Least Squares Problem
• The Conventional Kalman filter (SLS
framework)
• Properties and Use of the conventional
Kalman filter.





The 3-key problems in sc4040
• Estimate linear static models from data:

min_x ‖y − F x‖²_W

Relevant in itself, and it introduces the numerical fundamentals of this course!


• Estimate the state of an LTI state space model:

x(k + 1) = Ax(k) + Bu(k) + Ke(k) e(k) ∼ (0, Re )


y(k) = Cx(k) + Du(k) + e(k)

given the model (A, B, C, D) and K and input-output data.


• “Identify” state space model matrices (A, B, C, D, K) from
input-output data {u(k), y(k)} for k = 1, …, N.
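For the first problem, the weighted least-squares solution x* = (F^T W F)^{-1} F^T W y can be sketched in a few lines of NumPy (the data and weights below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up static model y = F x + noise
F = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = F @ x_true + 0.01 * rng.standard_normal(20)
W = np.eye(20)                               # weighting matrix (identity here)

# Solve min_x ||y - F x||^2_W via the normal equations
x_hat = np.linalg.solve(F.T @ W @ F, F.T @ W @ y)
print(np.round(x_hat, 2))                    # close to x_true
```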



The asymptotic observer
Given the (Plant) model Σ:

Σ : x(k + 1) = Ax(k) + Bu(k)
    y(k) = Cx(k) + Du(k)

with x(k) ∈ R^n, y(k) ∈ R^ℓ, u(k) ∈ R^m.

For the case x(0) unknown, we seek a ’state-observer’ of the form,

x̂(k + 1) = Ax̂(k) + Bu(k) + K( y(k) − Cx̂(k) − Du(k) )

(the term Ax̂(k) + Bu(k) is the predictor, the K(·) term the corrector) such that

lim_{k→∞} ( x(k) − x̂(k) ) = 0



The asymptotic observer
Given the dynamics of the system and observer state,

x(k + 1) = Ax(k) + Bu(k)
x̂(k + 1) = Ax̂(k) + Bu(k) + K( y(k) − Cx̂(k) − Du(k) )
          = Ax̂(k) + Bu(k) + K( Cx(k) + Du(k) − Cx̂(k) − Du(k) )
          = Ax̂(k) + Bu(k) + KC( x(k) − x̂(k) )

the dynamics of the state error, x_e(k) = x(k) − x̂(k), satisfy,

x_e(k + 1) = (A − KC)x_e(k)

The goal is to design K such that:

(A − KC) is asymptotically stable
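A tiny NumPy sketch of these error dynamics, using the double-tank A and C matrices from the example later in this lecture and an arbitrarily chosen stabilizing gain K (an assumption, not a designed gain):

```python
import numpy as np

# Error dynamics x_e(k+1) = (A - K C) x_e(k) for the double-tank matrices
A = np.array([[0.9512, 0.0], [0.0476, 0.9512]])
C = np.array([[0.0, 1.0]])
K = np.array([[0.1], [0.25]])                # some stabilizing gain (assumed)

Acl = A - K @ C
assert np.max(np.abs(np.linalg.eigvals(Acl))) < 1   # asymptotically stable

xe = np.array([1.5, 1.5])                    # initial estimation error
for k in range(200):
    xe = Acl @ xe
print(np.linalg.norm(xe))                    # -> essentially zero
```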


Design of the asymptotic observer
Recall the observer structure,

x̂(k + 1) = (A − KC)x̂(k) + Bu(k) + K( y(k) − Du(k) )

Theorem: If the pair (A, C) is observable, then there exists a matrix K ∈ R^{n×ℓ} such that A − KC is asymptotically stable.

Design in Matlab: given the pair (A, C) and a desired spectrum Sp of the poles of the matrix A − KC, the gain matrix K is obtained by,

K = place(A',C',Sp)';



Acker_test.m
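A pure-NumPy sketch of the same observer design via Ackermann's formula, K = p(A) O^{-1} [0 … 0 1]^T, which is presumably what Acker_test.m demonstrates in Matlab (an assumption); here it is applied to the double-tank matrices of the example:

```python
import numpy as np

A = np.array([[0.9512, 0.0], [0.0476, 0.9512]])   # double-tank example
C = np.array([[0.0, 1.0]])
poles = [0.7, 0.8]                                 # desired observer spectrum

n = A.shape[0]
# Observability matrix O = [C; CA; ...; CA^(n-1)]
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
coeffs = np.poly(poles)                            # desired char. polynomial
pA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
K = pA @ np.linalg.solve(O, np.eye(n)[:, [-1]])    # observer gain (n x 1)

print(np.sort(np.linalg.eigvals(A - K @ C).real))  # -> approx. [0.7, 0.8]
```

For comparison, `scipy.signal.place_poles(A.T, C.T, poles).gain_matrix.T` would give an equivalent K, mirroring the Matlab one-liner above.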



Example: Double tank
Linearized discrete-time state-space model:

x(k + 1) = [ 0.9512  0      ] x(k) + [ 0.0975 ] u(k)
           [ 0.0476  0.9512 ]        [ 0.0024 ]

y(k) = [ 0  1 ] x(k)

(Figure: double-tank setup with pump input u and tank levels x1 and x2.)

Determine levels in both tanks: estimate states.




Example: asymptotic observer
Selecting the eigenvalues of the asymptotic observer equal to {0.7, 0.8} with initial state vector [1.5, 1.5]^T yields,

(Figure: “Asymptotic Observer - x1” and “Asymptotic Observer - x2”: estimated and true state trajectories over 60 samples.)



Example: Double tank with noise
Signal generating model:

x(k + 1) = Ax(k) + Bu(k) + [ 0.0975  0      ] w(k)
                           [ 0.0024  0.0975 ]

y(k) = [ 0  1 ] x(k) + v(k)

with w(k) and v(k) zero-mean white noise with,

E[ [ v(k) ] [ v(j)^T  w(j)^T ] ] = [ 0.0125  0     0.005 ] ∆(k − j)
   [ w(k) ]                        [ 0       0.01  0     ]
                                   [ 0.005   0     0.01  ]


Example: asymptotic observer
Selecting the eigenvalues of the asymptotic observer equal to {0.7, 0.8} with initial state vector [1.5, 1.5]^T yields,

(Figure: “Asymptotic Observer - x1” and “Asymptotic Observer - x2”: estimated and true state trajectories over 60 samples, now with process and measurement noise.)



The SLS Problem
Given the prior on the RV x ∼ (x̄, P) with P ≥ 0 and given the observations:

y = F x + Lǫ   ǫ ∼ (0, I)   E[(x − x̄)ǫ^T] = 0

with F, L deterministic and L full rank. Then seek, among the linear estimators:

x̃ = [ M  N ] [ y ]
             [ x̄ ]

the unbiased minimum-variance estimate, i.e.:

E[(x − x̃)(x − x̃)^T] is minimized and E[x̃] = x̄



Overview
• The 3-key problems in sc4040
• Review: State observer
• The Stochastic Least Squares Problem
• Key Notation
• The Conventional Kalman filter (SLS
framework)
• Properties and Use of the conventional
Kalman filter.



Notation
For k the current time instant: the estimate of y(k + 1) (resp. x(k + 1)) given the past measurements …, y(k − 1), y(k) is called the one-step predicted estimate ỹ(k + 1|k) (resp. x̃(k + 1|k)).

The estimate of x(k) given the past measurements …, y(k − 1), y(k) is called the filtered estimate x̃(k|k).




The signal generating (SGM) model
The signal generating model (SGM) is governed (a) by the state space model:

x(k + 1) = A(k)x(k) + B(k)u(k) + w(k)
y(k) = C(k)x(k) + v(k)

[ v(k) ] ∼ ( [ 0 ] , [ R(k)  S(k)^T ] ≥ 0 ) & white,  R(k) > 0
[ w(k) ]     [ 0 ]   [ S(k)  Q(k)   ]

and (b) at time k − 1 an estimate of x(k) is given such that:

1. x(k) ∼ ( x̂(k|k − 1), P(k|k − 1) ≥ 0 ) and,
2. E[ ( x(k) − x̂(k|k − 1) ) [ v(k)^T  w(k)^T ] ] = 0


The Kalman Filter Problem
Let for the SGM at time instant k the measurements u(k), y(k) become available; then we seek a filtered and a one-step predicted estimate of x(k) and x(k + 1) of the form,

[ x̂(k|k)     ] = M [ y(k)       ] + N x̂(k|k − 1)
[ x̂(k + 1|k) ]     [ −B(k)u(k)  ]

such that both estimates are minimum-variance unbiased estimates, i.e. estimates with the following properties:

E[x̂(k|k)] = E[x(k)]   E[x̂(k + 1|k)] = E[x(k + 1)]

E[ ( x(k) − x̂(k|k) )( x(k) − x̂(k|k) )^T ] and E[ ( x(k+1) − x̂(k+1|k) )( x(k+1) − x̂(k+1|k) )^T ] are minimal.

Putting the dynamic model in the y = F x + Lǫ format

The collected data u(k), y(k) define the following DATA EQUATION:

[ y(k)       ] = [ C(k)   0  ] [ x(k)     ] + [ v(k) ]
[ −B(k)u(k)  ]   [ A(k)  −I  ] [ x(k + 1) ]   [ w(k) ]
      y               F            x            L(k)ǫ,  ǫ ∼ (0, I)

Combining this with our linear estimate of x(k) and x(k + 1),

[ x̂(k|k)     ] = [ M11  M12 ] [ y(k)       ] + [ N1 ] x̂(k|k − 1)
[ x̂(k + 1|k) ]   [ M21  M22 ] [ −B(k)u(k)  ]   [ N2 ]

yields,



Sketch of the proof (step 1)
1. Unbiased estimation property: using the y = F x + Lǫ format,

[ x̂(k|k)     ] = [ M11 C(k) + M12 A(k)   −M12 ] [ x(k)     ]
[ x̂(k + 1|k) ]   [ M21 C(k) + M22 A(k)   −M22 ] [ x(k + 1) ]
               + [ M11  M12 ] [ v(k) ] + [ N1 ] x̂(k|k − 1)
                 [ M21  M22 ] [ w(k) ]   [ N2 ]

Taking the mean, and using the notation E[x(k)] = x̄k, C(k) = C, etc.,

[ x̄k   ] = [ M11 C + M12 A   −M12 ] [ x̄k   ] + [ N1 ] x̄k
[ x̄k+1 ]   [ M21 C + M22 A   −M22 ] [ x̄k+1 ]   [ N2 ]

⇔ [ I − M11 C − M12 A − N1     M12     ] [ x̄k   ] = 0
  [ −M21 C − M22 A − N2        I + M22 ] [ x̄k+1 ]


Conclusion from the Unbiasedness Condition (step 1)

[ I − M11 C − M12 A − N1     M12     ] [ x̄k   ] = 0
[ −M21 C − M22 A − N2        I + M22 ] [ x̄k+1 ]

1. M12 = 0
2. I − M11 C − N1 = 0 ⇒ N1 = I − M11 C
3. M22 = −I
4. −M21 C + A − N2 = 0 ⇒ N2 = A − M21 C



Sketch of the proof (step 2)
2. Expression for the covariance matrices to be minimized:

[ x(k) − x̂(k|k)         ] = [ I − M11 C(k)    ] ( x(k) − x̂(k|k − 1) )
[ x(k + 1) − x̂(k + 1|k) ]   [ A(k) − M21 C(k) ]
                          − [ M11   0  ] [ v(k) ]
                            [ M21  −I  ] [ w(k) ]

Since ǫ and ( x(k) − x̂(k|k − 1) ) are uncorrelated,

E[ [ x(k) − x̂(k|k)         ] [ x(k) − x̂(k|k)         ]^T ] =
   [ x(k + 1) − x̂(k + 1|k) ] [ x(k + 1) − x̂(k + 1|k) ]



Sketch of the proof (step 2, Ct’d)

[ I − M11 C(k)    ] P(k|k − 1) [ I − M11 C(k)    ]^T
[ A(k) − M21 C(k) ]            [ A(k) − M21 C(k) ]
+ [ M11   0  ] [ R(k)  S(k)^T ] [ M11^T  M21^T ]
  [ M21  −I  ] [ S(k)  Q(k)   ] [ 0      −I    ]

Therefore, the covariance matrix to be minimized for the measurement update is,

E[ ( x(k) − x̂(k|k) )( x(k) − x̂(k|k) )^T ] =
( I − M11 C(k) ) P(k|k − 1) ( I − M11 C(k) )^T + M11 R(k) M11^T


Sketch of the proof (step 3, Ct’d)
3. Minimizing the covariance matrix w.r.t. M11 (and M21) by an application of the completion-of-squares argument yields,

M11 = P(k|k − 1)C(k)^T ( C(k)P(k|k − 1)C(k)^T + R(k) )^{-1}

M21 = ( A(k)P(k|k − 1)C(k)^T + S(k) ) ( C(k)P(k|k − 1)C(k)^T + R(k) )^{-1}


The conventional KF
Theorem: Let the conditions of the KF problem hold; then the minimum-variance unbiased estimate for x(k + 1) is given by

x̂(k + 1|k) = K(k)y(k) + B(k)u(k) + ( A(k) − K(k)C(k) ) x̂(k|k − 1)

with gain

K(k) = ( A(k)P(k|k − 1)C(k)^T + S(k) )( C(k)P(k|k − 1)C(k)^T + R(k) )^{-1}

and has covariance matrix E[ ( x(k+1) − x̂(k+1|k) )( x(k+1) − x̂(k+1|k) )^T ] = P(k + 1|k) given by

P(k + 1|k) = A(k)P(k|k − 1)A(k)^T + Q(k)
           − ( A(k)P(k|k − 1)C(k)^T + S(k) )( C(k)P(k|k − 1)C(k)^T + R(k) )^{-1} ( A(k)P(k|k − 1)C(k)^T + S(k) )^T


Summary: The conventional KF for LTI systems

K(k) = (AP(k|k − 1)C^T + S)(CP(k|k − 1)C^T + R)^{-1}

x̂(k + 1|k) = K(k)y(k) + Bu(k) + (A − K(k)C)x̂(k|k − 1)
P(k + 1|k) = AP(k|k − 1)A^T + Q − K(k)(AP(k|k − 1)C^T + S)^T

Recall the Asymptotic Observer structure:

x̂(k + 1) = Ky(k) + Bu(k) + (A − KC)x̂(k)

Observations: (1) the optimal linear estimator (Kalman filter, KF) has the asymptotic observer structure; (2) the KF is time-varying even if the SGM is LTI; (3) is (A − K(k)C) asymptotically stable?
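This recursion is only a few lines of NumPy. The sketch below runs it on the double-tank example (noise covariances from the earlier noise slide; the cross-covariance S is taken as zero here for simplicity, an assumption):

```python
import numpy as np

A = np.array([[0.9512, 0.0], [0.0476, 0.9512]])
B = np.array([[0.0975], [0.0024]])
C = np.array([[0.0, 1.0]])
G = np.array([[0.0975, 0.0], [0.0024, 0.0975]])  # process-noise input matrix
Q = G @ (0.01 * np.eye(2)) @ G.T                 # cov of G w(k), E[w w^T] = 0.01 I
R = np.array([[0.0125]])                         # cov of v(k)
S = np.zeros((2, 1))                             # cross-covariance assumed zero

def kf_step(xhat, P, u, y):
    """One step of the conventional KF: gain, state update, Riccati update."""
    M = A @ P @ C.T + S
    K = M @ np.linalg.inv(C @ P @ C.T + R)
    xhat = K @ y + B @ u + (A - K @ C) @ xhat
    P = A @ P @ A.T + Q - K @ M.T
    return xhat, P

rng = np.random.default_rng(1)
x, xhat, P = np.array([[1.5], [1.5]]), np.zeros((2, 1)), np.eye(2)
for k in range(300):
    u = np.array([[1.0]])
    y = C @ x + rng.normal(0.0, np.sqrt(R[0, 0]), (1, 1))
    xhat, P = kf_step(xhat, P, u, y)             # predicts x(k+1)
    x = A @ x + B @ u + G @ rng.normal(0.0, 0.1, (2, 1))
print(np.linalg.norm(x - xhat), np.trace(P))     # small error, converged P
```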



Overview
• Review: State observer
• Recall: Case study AO - new TUD approach
• The Stochastic Least Squares Problem
• The Conventional Kalman filter (SLS
framework)
• Properties and Use of the conventional
Kalman filter.



Properties and Use of the conventional KF

1. (Optimality?) Comparison of the KF with the asymptotic observer: double tank example.
2. Asymptotic stability of (A − K(k)C)?
3. Checking optimality from data?
4. Innovation form representation of the SGM
5. Estimating a constant?



Comp_Obs_SRCF1.m and Comp_Obs_SRCF2.m



The Asym. Stability of the stationary Kalman filter

Theorem: Consider the time-invariant signal generating model and statistical data. If the pair (A, C) is observable and the pair (A, Q^{1/2}) is controllable, then the covariance matrix of the UMV estimate, i.e. P(k|k − 1), satisfies lim_{k→∞} P(k|k − 1) = P > 0, for any symmetric initial condition P(0| − 1) > 0, where P satisfies

P = APA^T + Q − (S + APC^T)(CPC^T + R)^{-1}(S + APC^T)^T   (DARE).

If this matrix P is used to define the Kalman gain matrix K as K = (S + APC^T)(CPC^T + R)^{-1}, then the matrix A − KC is asymptotically stable.
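A numerical illustration of the theorem (toy Q, R values assumed; `scipy.linalg.solve_discrete_are` applied to the dual pair would give P directly): simply iterating the Riccati recursion converges to P, and the resulting A − KC is stable:

```python
import numpy as np

A = np.array([[0.9512, 0.0], [0.0476, 0.9512]])
C = np.array([[0.0, 1.0]])
Q = 0.01 * np.eye(2)                         # (A, Q^(1/2)) controllable
R = np.array([[0.0125]])
S = np.zeros((2, 1))

P = np.eye(2)                                # any symmetric P(0|-1) > 0
for _ in range(2000):                        # P(k+1|k) recursion until converged
    M = A @ P @ C.T + S
    P = A @ P @ A.T + Q - M @ np.linalg.inv(C @ P @ C.T + R) @ M.T

K = (S + A @ P @ C.T) @ np.linalg.inv(C @ P @ C.T + R)
print(np.max(np.abs(np.linalg.eigvals(A - K @ C))))   # < 1: stable
```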



Checking optimality from data?
The state update of the stationary KF is given by:

x̂(k + 1|k) = Ky(k) + Bu(k) + (A − KC)x̂(k|k − 1)

This can also be written in the predictor/corrector form:

x̂(k + 1|k) = Ax̂(k|k − 1) + Bu(k) + K( y(k) − Cx̂(k|k − 1) )

The quantity between brackets is called the “innovation signal”, with properties:

• e(k) = y(k) − Cx̂(k|k − 1) becomes a zero-mean, white-noise sequence with minimum variance.
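This suggests a data-driven optimality check. A sketch (toy noise values, no input; all numbers assumed): simulate the SGM, run the stationary KF, and verify that the sample autocorrelation of e(k) at nonzero lags is negligible:

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0.9512, 0.0], [0.0476, 0.9512]])
C = np.array([[0.0, 1.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.0125]])

P = np.eye(2)                                # stationary gain via Riccati iteration
for _ in range(2000):
    M = A @ P @ C.T
    P = A @ P @ A.T + Q - M @ np.linalg.inv(C @ P @ C.T + R) @ M.T
K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

N = 20_000
x, xhat = np.zeros((2, 1)), np.zeros((2, 1))
e = np.empty(N)
for k in range(N):
    y = C @ x + rng.normal(0.0, np.sqrt(R[0, 0]), (1, 1))
    e[k] = (y - C @ xhat)[0, 0]              # innovation signal
    xhat = A @ xhat + K @ (y - C @ xhat)     # stationary KF update (no input)
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q).reshape(2, 1)

rho1 = np.corrcoef(e[:-1], e[1:])[0, 1]
print(e.mean(), rho1)                        # both close to zero
```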



Comp_Obs_SRCF3.m




Innovation form representation of SGM
Recall the SGM:

x(k + 1) = Ax(k) + Bu(k) + w(k)   E[w(k)] = 0, etc.
y(k) = Cx(k) + v(k)

And the stationary KF:

x̂(k + 1|k) = Ax̂(k|k − 1) + Bu(k) + K( y(k) − Cx̂(k|k − 1) )
y(k) = Cx̂(k|k − 1) + e(k)

with e(k) the innovation signal. This allows us to rewrite the KF as:

x̂(k + 1|k) = Ax̂(k|k − 1) + Bu(k) + Ke(k)
y(k) = Cx̂(k|k − 1) + e(k)

This is the innovation form.



Example: Estimating a constant
Consider estimation of the ’almost’ constant temperature x(k):

x(k + 1) = x(k) + w(k)
y(k) = x(k) + v(k)

If we assume that S ≡ 0 and Q ≠ 0, then the DARE reads:

P = P + Q − P(P + R)^{-1}P,   P ∈ R

or equivalently: P² − QP − QR = 0.


Example: Estimating a constant
The equation P² − QP − QR = 0 has two solutions:

P+ = ( Q + √(Q² + 4QR) ) / 2
P− = ( Q − √(Q² + 4QR) ) / 2

The value P+ (the maximal root) is strictly smaller than R provided that R > 2Q.



Example: Estimating a constant
For P+, the stationary Kalman filter has update equation:

x̂(k + 1|k) = ( 1 − P+/(P+ + R) ) x̂(k|k − 1) + ( P+/(P+ + R) ) y(k)

which is a stable filter since

0 < 1 − P+/(P+ + R) < 1

If we take the negative root P− (minimal), the stationary Kalman filter is anti-stable.
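These claims are easy to verify numerically (Q and R below are arbitrary test values with R > 2Q):

```python
import numpy as np

Q, R = 0.01, 0.1                             # assumed test values, R > 2Q
disc = np.sqrt(Q**2 + 4 * Q * R)
Pp, Pm = (Q + disc) / 2, (Q - disc) / 2      # the two roots

assert abs(Pp**2 - Q * Pp - Q * R) < 1e-12   # both solve P^2 - QP - QR = 0
assert abs(Pm**2 - Q * Pm - Q * R) < 1e-12
assert 0 < Pp < R                            # maximal root below R since R > 2Q

a_plus = 1 - Pp / (Pp + R)                   # filter pole for P+
a_minus = 1 - Pm / (Pm + R)                  # filter pole for P-
print(a_plus, a_minus)                       # stable (in (0,1)) vs anti-stable
```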


Preparation and next lecture
Preparation:
Study Chapter 5 (5.1-5.4, 5.5.5, and 5.7-5.9)
Download Homework 3

Next lecture:
Lecture 5: Subspace Identification
Wednesday December 7, 2016, 15.30-17.30

Keep your eyes focussed on “Guidelines/Rules/Schedule sc4040: 2016-2017” for the correct deadlines!
