
Extended Kalman Filter Modifications Based on an Optimization View Point

Martin Skoglund, Gustaf Hendeby and Daniel Axehill

Linköping University Post Print

N.B.: When citing this work, cite the original article.

Original Publication:
Martin Skoglund, Gustaf Hendeby and Daniel Axehill, Extended Kalman Filter Modifications
Based on an Optimization View Point, 2015, 18th International Conference of Information
Fusion.
http://dx.doi.org/

Copyright: The Authors and IEEE

Preprint ahead of publication available at: Linköping University Electronic Press


http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-120383
Extended Kalman Filter Modifications Based on an Optimization View Point
Martin A. Skoglund∗, Gustaf Hendeby∗†, Daniel Axehill∗
∗Division of Automatic Control, Linköping University, SE-581 83 Linköping, Sweden,
Email: {ms,hendeby,daniel}@isy.liu.se
† Dept. of Sensor & EW Systems, Swedish Defence Research Agency (FOI), Linköping, Sweden

Abstract—The extended Kalman filter (EKF) has been an important tool for state estimation of nonlinear systems since its introduction. However, the EKF does not possess the same optimality properties as the Kalman filter, and may perform poorly. By viewing the EKF as an optimization problem it is possible to, in many cases, improve its performance and robustness. The paper derives three variations of the EKF by applying different optimisation algorithms to the EKF cost function and relates these to the iterated EKF. The derived filters are evaluated in two simulation studies which exemplify the presented filters.

I. INTRODUCTION

The Kalman filter (KF, [1]) is a minimum variance estimator if the involved models are linear and the noise is additive Gaussian. A common situation is however that dynamic models and/or measurement models are nonlinear. The extended Kalman filter (EKF, [2]) approximates these nonlinearities using a first-order Taylor expansion around the current estimate, and then the time and measurement update formulas as in the KF can be used. Higher-order terms can be included in the Taylor expansions if the degree of nonlinearity is high and thus not well approximated using only first-order information. Many suggestions have been made to improve the EKF. In the modified EKF [3] higher-order moments are included, [4, 5] use sigma-point approximations, which is related to including second-order moments [6], while [7] proposes a robust EKF using H∞ theory.

Another viewpoint is that the EKF is a special case of Gauss-Newton (GN) optimization without iterations, see e.g., [8, 9] for longer discussions. Thus, iterations may improve the performance with just a little added effort. The idea explored here is that it is not only linearisation errors that cause filter inconsistencies, but rather the non-iterative way of including new information, as it may not be sufficient for staying close to the true state. There is of course nothing that prevents the inclusion of higher-order terms when re-linearising in the iterations. Notice that the Fisher scoring algorithm is also known as Gauss-Newton when applied to nonlinear least squares, see e.g., [10].

An iterative version of the EKF is the iterated extended Kalman filter (IEKF, [2]), which re-linearises the measurement Jacobian at the updated state and applies the measurement update repeatedly until some stopping criterion is met. It should be noted that the iterations might not reduce the state error or the associated cost function. It is actually not even guaranteed that a single update in the EKF improves the estimate, since the full step may be too long. In [11] a bistatic ranging example illustrates how the IEKF converges to the true state as measurements become more accurate, whereas the EKF is biased even though the state error covariance converges.

Yet the EKF often works satisfactorily, which can be understood in the light of it being a GN method, in which the first correction often gives a large improvement compared to the following, often smaller, corrections. Furthermore, the cost is often decreased for the full step.

Hence, both the IEKF and the EKF could easily be improved using step control, e.g., a backtracking line search. That is, with step control, or at least checking for cost decrease, unnecessary divergent behaviour of the EKF can be avoided.

This paper shows that line search can be used in the EKF and IEKF to improve the overall performance. Furthermore, a quasi-Newton approximation for the EKF update with a Hessian correction term is derived. This matrix can be successively built during the iterations using a symmetric rank-2 update. A slight modification of the Levenberg-Marquardt IEKF (LM-IEKF, [12]) is made to include a diagonal damping matrix, which could further speed up convergence.

The paper is organized as follows. In Sec. II the cost function for the measurement update of the EKF is derived. Sec. III describes the EKF as an optimization problem via Gauss-Newton and introduces line search as a way to improve the convergence rate. The IEKF is further modified to include an explicit step size. A quasi-Newton EKF is then introduced and a slight modification of the LM-IEKF is derived. Sec. IV contains two simulations exemplifying the derived filters and the paper ends with conclusions and future directions in Sec. V.

II. THE MEASUREMENT UPDATE AND ITS COST FUNCTION

Consider systems having linear dynamics and nonlinear measurement models according to

    x_t = F x_{t-1} + w_t,    (1a)
    y_t = h(x_t) + e_t,       (1b)

where x_t is the state, y_t the measurement, w_t ∼ N(0, Q_t) and e_t ∼ N(0, R_t), all at time t.
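As a minimal illustration of the model class (1), the following Python sketch simulates a scalar instance; the choices of F, h, and the noise levels are placeholder assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

F = np.array([[1.0]])        # linear dynamics (1a), scalar for simplicity
Q = np.array([[0.1]])        # process noise covariance Q_t
R = np.array([[0.01]])       # measurement noise covariance R_t

def h(x):
    # nonlinear measurement model (1b); an arbitrary example choice
    return np.array([np.sin(x[0])])

x = np.array([0.5])
states, measurements = [], []
for t in range(20):
    x = F @ x + rng.multivariate_normal(np.zeros(1), Q)   # w_t ~ N(0, Q_t)
    y = h(x) + rng.multivariate_normal(np.zeros(1), R)    # e_t ~ N(0, R_t)
    states.append(x.copy())
    measurements.append(y.copy())
```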
In a Bayesian setting, the nonlinear Gaussian filtering problem propagates the following densities

    p(x_{t-1} | Y_{t-1}) ≈ N(x_{t-1}; x̂_{t-1|t-1}, P_{t-1|t-1}),    (2a)
    p(x_t | Y_{t-1}) ≈ N(x_t; x̂_{t|t-1}, P_{t|t-1}),                (2b)
    p(x_t | Y_t) ≈ N(x_t; x̂_{t|t}, P_{t|t}),                        (2c)

where p(x_{t-1}|Y_{t-1}) is the initial prior density, p(x_t|Y_{t-1}) is the prediction density, p(x_t|Y_t) is the posterior density and Y_t = {y_i}_{i=1}^{t} are the measurements. The dynamic model (1a) is linear, which means that the prediction density is exactly propagated in the KF time update equations as given in Algorithm 1.

Algorithm 1 Kalman Filter
Available measurements are Y = {y_1, ..., y_N}. Require an initial state, x̂_{0|0}, and an initial state covariance, P_{0|0}; Q_t and R_t are the covariance matrices of w_t and e_t, respectively.
1) Time Update
    x̂_{t|t-1} = F x̂_{t-1|t-1},             (3a)
    P_{t|t-1} = F P_{t-1|t-1} F^T + Q_t.    (3b)
2) Measurement Update
    K = P_{t|t-1} H^T (H P_{t|t-1} H^T + R_t)^{-1},    (4a)
    x̂_{t|t} = x̂_{t|t-1} + K (y_t - H x̂_{t|t-1}),      (4b)
    P_{t|t} = (I - K H) P_{t|t-1}.                      (4c)
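Algorithm 1 translates directly into numpy; a minimal sketch for the linear case, with the model matrices passed as arguments:

```python
import numpy as np

def kf_time_update(x, P, F, Q):
    """Time update (3a)-(3b)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_measurement_update(x_pred, P_pred, y, H, R):
    """Measurement update (4a)-(4c) for a linear measurement model."""
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain (4a)
    x = x_pred + K @ (y - H @ x_pred)            # state update (4b)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred   # covariance update (4c)
    return x, P
```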
The measurement update is however nonlinear and is given as the maximum a posteriori (MAP) estimate according to

    x̂_{t|t} = arg max_{x_t} p(x_t | Y_t).    (5)

The posterior is proportional to the product of the likelihood and the prior,

    p(x_t | Y_t) ∝ p(y_t | x_t) p(x_t | Y_{t-1})
                ∝ exp( -1/2 [ (y_t - h(x_t))^T R_t^{-1} (y_t - h(x_t)) + (x̂_{t|t-1} - x_t)^T P_{t|t-1}^{-1} (x̂_{t|t-1} - x_t) ] ),

where terms not depending on x_t have been dropped. Since the exponential function is monotone and increasing, we can minimise the negative log instead, which gives

    x̂_{t|t} = arg min_x 1/2 (y_t - h(x))^T R_t^{-1} (y_t - h(x)) + 1/2 (x̂_{t|t-1} - x)^T P_{t|t-1}^{-1} (x̂_{t|t-1} - x)
            = arg min_x 1/2 r^T(x) r(x)
            = arg min_x V(x),    (6)

where

    r(x) = [ R_t^{-1/2} (y_t - h(x)) ; P_{t|t-1}^{-1/2} (x̂_{t|t-1} - x) ],    (7)

and we have dropped the time dependence on the variable x. The first term in the objective function corresponds to the unknown sensor noise and the second term corresponds to the importance of the prior to the estimate. Furthermore, the optimization problem should return a covariance approximation P̂_{t|t}. We simply base it on the assumption that x̂_{t|t} is optimal, resulting in

    P̂_{t|t} = (I - K_i H_i) P̂_{t|t-1},    (8)

since it should reflect the state uncertainty when (5) has converged. A general formulation for the MAP estimation problem is then

    {x̂_{t|t}, P̂_{t|t}} = arg min_x V(x),    (9)

where P̂_{t|t} is computed using (8) while K_i and H_i depend on the optimization method used.
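The stacked residual (7) and cost (6) are straightforward to code; a sketch, where the matrix square roots are taken as Cholesky factors of the (assumed positive definite) covariances:

```python
import numpy as np

def residual(x, y, h, x_pred, R, P):
    """Stacked residual r(x) in (7); the Cholesky factor acts as matrix square root."""
    LR = np.linalg.cholesky(R)                   # R = LR LR^T
    LP = np.linalg.cholesky(P)                   # P = LP LP^T
    r_meas = np.linalg.solve(LR, y - h(x))       # ~ R^{-1/2} (y - h(x))
    r_prior = np.linalg.solve(LP, x_pred - x)    # ~ P^{-1/2} (x_pred - x)
    return np.concatenate([r_meas, r_prior])

def cost(x, y, h, x_pred, R, P):
    """V(x) = 0.5 r(x)^T r(x) as in (6)."""
    r = residual(x, y, h, x_pred, R, P)
    return 0.5 * r @ r
```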
III. EKF VIEWED AS AN OPTIMIZATION PROBLEM

There is no general method for solving (6), and the nonlinear nature implies that iterative methods are needed to obtain an approximate estimate [11]. The Newton step p is the solution to the equation

    ∇²V(x) p = -∇V(x).    (10)

Starting from an initial guess x_0, Newton's method iterates the following equation

    x_{i+1} = x_i - (∇²V(x_i))^{-1} ∇V(x_i),    (11)

where ∇²V(x_i) and ∇V(x_i) are the Hessian and the gradient of the cost function, respectively. Note that if V(x) is quadratic then (11) gives the minimum in a single step. Gauss-Newton can be applied when the minimisation problem is on nonlinear least-squares form as in (6). Then the gradient has a simple form,

    ∇V(x) = J^T(x) r(x),  where  J(x) = ∂r(s)/∂s |_{s=x},    (12)

and the Hessian is approximated as

    ∇²V(x) ≈ J(x)^T J(x),    (13)

avoiding the need for second-order derivatives. The GN step then becomes

    x_{i+1} = x_i - (J_i^T J_i)^{-1} J_i^T r_i,    (14)

where J_i = J(x_i) and r_i = r(x_i) are introduced to ease notation. Furthermore, the Jacobian evaluates to

    J_i = - [ R_t^{-1/2} H_i ; P_{t|t-1}^{-1/2} ],  where  H_i = ∂h(s)/∂s |_{s=x_i}    (15)

is the measurement Jacobian.
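One GN iteration (14) with the stacked Jacobian (15) could be sketched as follows; H_jac is an assumed user-supplied Jacobian of h, and residual() is the sketch from Sec. II:

```python
import numpy as np

def gauss_newton_step(x, y, h, H_jac, x_pred, R, P):
    """One Gauss-Newton step (14) using the stacked r and J of (7) and (15)."""
    LR = np.linalg.cholesky(R)
    LP = np.linalg.cholesky(P)
    H = H_jac(x)
    J = -np.vstack([np.linalg.solve(LR, H),                  # ~ R^{-1/2} H_i
                    np.linalg.solve(LP, np.eye(len(x)))])    # ~ P^{-1/2}
    r = residual(x, y, h, x_pred, R, P)
    p = np.linalg.solve(J.T @ J, -J.T @ r)                   # GN direction
    return x + p
```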
A. Line Search

The EKF measurement update should ideally give a cost decrease

    V(x_{i+1}) < V(x_i),    (16)

at each iteration. This will not always be the case, and this is why a full step in the EKF, and consequently in the IEKF, may cause problems. A remedy to this behaviour is to introduce a line search such that the estimate actually results in a cost decrease. Without an explicit step-size rule the EKF should be expected to have sub-linear convergence [13] even though it should be linear [9]. A step-size condition was proposed in [8] for smoothing passes over the data batch, 0 ≤ 1 - (α_i)_t ≤ c/t, where c > 0 and (α_i)_t is initially small but should approach unity. There are many strategies for defining a sufficient decrease for the iteration procedure. The general iteration for Gauss-Newton with step-size parameter α is

    x_{i+1} = x_i - α_i (J_i^T J_i)^{-1} J_i^T r_i,    (17)

where each α_i is chosen so as to satisfy some criterion, e.g., the cost decrease condition (16). For the EKF update the GN step with step-size (17) evaluates to

    x̂_{t|t} = x̂_{t|t-1} + α K (y_t - h(x̂_{t|t-1}))    (18)
            = x̂_{t|t-1} + α p.                          (19)

The cost decrease condition is then

    V(x̂_{t|t-1} + α p) < V(x̂_{t|t-1}),    (20)

and it is only needed to find a suitable α over 0 < α ≤ 1, starting with α = 1. Furthermore, an exact line search, where

    α = arg min_{0<s≤1} V(x̂_{t|t-1} + s p),    (21)

is rather cheap since the search direction p is fixed. A simple line search strategy can be to multiply the current step length by some factor less than unity until sufficient decrease is obtained. More advanced methods can of course also be used, such as the Wolfe conditions, see e.g., [14],

    V(x_i + α_i p_i) ≤ V(x_i) + c_1 α_i ∇V(x_i)^T p_i,    (22a)
    ∇V(x_i + α_i p_i)^T p_i ≥ c_2 ∇V(x_i)^T p_i,          (22b)

with 0 < c_1 < c_2 < 1, which also require that the slope has been reduced sufficiently by α_i. The backtracking line search, see e.g., [15], is another effective inexact line search method which simply reduces the step by a factor α := αβ until

    V(x + α p) < V(x) + α γ ∇V(x)^T p,    (23)

is satisfied, starting with α = 1, with 0 < γ < 0.5 and 0 < β < 1.
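A backtracking loop implementing (23) is short; a sketch, with γ and β as tuning constants in the stated ranges:

```python
def backtracking_step(x, p, V, grad_V, gamma=0.1, beta=0.5):
    """Backtracking line search: shrink alpha by beta until (23) holds.

    V and grad_V are callables for the cost and its gradient, p is the
    search direction (e.g., the GN step); gamma in (0, 0.5), beta in (0, 1).
    """
    alpha = 1.0
    slope = grad_V(x) @ p                   # directional derivative
    while V(x + alpha * p) >= V(x) + alpha * gamma * slope:
        alpha *= beta
        if alpha < 1e-10:                   # guard against an endless loop
            break
    return x + alpha * p, alpha
```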
Algorithm 2 Iterated Extended Kalman Measurement Update
Require an initial state, x̂_{0|0}, and an initial state covariance, P̂_{0|0}.
1. Measurement update iterations
    H_i = ∂h(s)/∂s |_{s=x_i},    (24a)
    K_i = P̂_{t|t-1} H_i^T (H_i P̂_{t|t-1} H_i^T + R_t)^{-1},    (24b)
    x_{i+1} = x̂_{t|t-1} + K_i (y_t - h(x_i) - H_i (x̂_{t|t-1} - x_i)).    (24c)
2. Update the state and the covariance
    x̂_{t|t} = x_{i+1},    (25a)
    P̂_{t|t} = (I - K_i H_i) P̂_{t|t-1}.    (25b)

B. Iterated EKF

The IEKF was derived in [9, 11, 16] and is summarised in Algorithm 2. The iterated update for the IEKF from (24c) is

    x_{i+1} = x̂ + K_i (y_t - h(x_i) - H_i (x̂ - x_i))
            = x_i + x̂ - x_i + K_i (y_t - h(x_i) - H_i (x̂ - x_i)),    (26)

where x̂ = x̂_{t|t-1} and the iterations are initialized with x_0 = x̂. With (17) the step parameter α can be introduced. The modified measurement update of the IEKF is then of the form

    x_{i+1} = x_i + α_i (x̂ - x_i + K_i (y_t - h(x_i) - H_i (x̂ - x_i)))
            = x_i + α_i p_i,    (27)

where the step length 0 < α_i ≤ 1 and the search direction p_i is chosen such that (16), or a more sophisticated convergence criterion, is satisfied in each step. Note that for α_i = 1 the first iteration is exactly the same as for the EKF. This step size should always be attempted first since, if it is allowed, it will result in faster convergence. We will refer to this method as IEKF-L.

As a special case, the standard EKF is obtained in the first iteration of the measurement update (24). Note that the state covariance is updated using the last Kalman gain and measurement Jacobian when the iterations have finished (25), as discussed in connection with (8).
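A sketch of the IEKF-L iteration (27), where the simple halving rule from Sec. III-A stands in for a full line search and cost() is the sketch from Sec. II:

```python
import numpy as np

def iekf_l_update(x_pred, P_pred, y, h, H_jac, R, n_iter=5):
    """IEKF measurement update with step-size control (IEKF-L), eqs (24), (27)."""
    x = x_pred.copy()
    for _ in range(n_iter):
        H = H_jac(x)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # (24b)
        p = x_pred - x + K @ (y - h(x) - H @ (x_pred - x))       # direction p_i in (27)
        alpha = 1.0                                              # try the full (EKF) step first
        while (cost(x + alpha * p, y, h, x_pred, R, P_pred)
               >= cost(x, y, h, x_pred, R, P_pred) and alpha > 1e-6):
            alpha *= 0.5
        x = x + alpha * p                                        # (27)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred                   # (25b), last K and H
    return x, P
```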
C. Quasi-Newton EKF

Quasi-Newton type methods try to avoid exact Hessian and/or Jacobian computations for efficiency. The Hessian approximation described in (13) may be bad far from the optimum, resulting in, e.g., slow convergence. A quite successful approach to avoid this is to introduce a correction

    ∇²V(x_i) ≈ J_i^T J_i + T_i,    (28)

where T_i should mimic the dropped second-order terms. The step p with Hessian correction is the solution to the problem

    (J^T J + T) p = -J^T r,    (29)

which is iteratively solved by

    x_{i+1} = x_i - (J_i^T J_i + T_i)^{-1} J_i^T r_i.    (30)

The state update for the quasi-Newton EKF (QN-EKF) becomes

    x_{i+1} = x̂ + K_i^q (y_t - h_i - H_i x̃_i) - S_i^q T_i x̃_i,    (31a)
    S_i^q = (H_i^T R^{-1} H_i + P^{-1} + T_i)^{-1},                 (31b)
    K_i^q = S_i^q H_i^T R^{-1},                                      (31c)

where P = P_{t|t-1}, h_i = h(x_i) and x̃_i = x̂ - x_i have been introduced. The derivation can be found in Appendix A.

The matrix T_i can be built during the iterations [17] using

    T_i = T_{i-1} + (w_i v_i^T + v_i w_i^T) / (v_i^T s_i) - (w_i^T s_i) / (v_i^T s_i)² v_i v_i^T,    (32)

where

    v_i = J_i^T r_i - J_{i-1}^T r_{i-1}
        = -H_i^T R^{-1} (y - h_i) + H_{i-1}^T R^{-1} (y - h_{i-1}),    (33a)
    z_i = (J_i - J_{i-1})^T r_i = (H_{i-1} - H_i)^T R^{-1} (y - h_i),  (33b)
    s_i = x_i - x_{i-1},    (33c)
    w_i = z_i - T_{i-1} s_i.    (33d)

Furthermore, the quasi-Newton relation T_i s_i = z_i is satisfied by construction. The iterations are initialized with T_0 = 0 and T_i should be replaced with τ_i T_i before the update, where

    τ_i = min( 1, |s_i^T z_i| / |s_i^T T_i s_i| ),    (34)

is a heuristic to ensure that T_i converges to zero when the residual vanishes.

The covariance with this approach can be updated using (8) for the last iterate,

    P̂_{t|t} = (I - K_i^q H_i) P̂_{t|t-1}.    (35)

An explicit line-search parameter can further be introduced,

    x_{i+1} = x_i + α_i (x̃_i + K_i^q (y_t - h_i - H_i x̃_i) - S_i^q T_i x̃_i),    (36)

using the same argumentation as in Section III-B.
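The rank-2 recursion (32)-(34) translates almost line for line into numpy; a sketch, where the zero-division guards and the exact ordering of the τ scaling relative to (33d) are interpretation choices, not the paper's pseudocode:

```python
import numpy as np

def qn_correction(T_prev, H, H_prev, R_inv, y, h_i, h_prev, s):
    """One symmetric rank-2 update (32) of T_i, with v, z, w, s from (33)."""
    v = -H.T @ R_inv @ (y - h_i) + H_prev.T @ R_inv @ (y - h_prev)   # (33a)
    z = (H_prev - H).T @ R_inv @ (y - h_i)                            # (33b)
    # Heuristic scaling (34), applied to T_{i-1} before the update.
    sTs = abs(s @ (T_prev @ s))
    tau = min(1.0, abs(s @ z) / sTs) if sTs > 0 else 1.0
    T_scaled = tau * T_prev
    w = z - T_scaled @ s                                              # (33d)
    vs = v @ s
    if abs(vs) < 1e-12:                  # guard against division by zero
        return T_scaled
    return (T_scaled
            + (np.outer(w, v) + np.outer(v, w)) / vs                  # rank-2 term
            - (w @ s) / vs**2 * np.outer(v, v))                       # (32)
```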
D. Levenberg-Marquardt Iterated EKF

Trust-region methods for GN are similar to the Hessian correction above in the way that the Newton step is computed, the main difference being that only a single parameter is adapted at runtime. The well-known Levenberg-Marquardt method introduced by [18, 19] is such a method and it is obtained by adding a damping parameter µ to the GN problem

    (J^T J + µ I) p = -J^T r.    (37)

Applying this to (14) results in

    x_{i+1} = x̂ + K_i (y_t - h_i - H_i x̃_i) - µ_i (I - K_i H_i) P̃ x̃_i,    (38a)
    P̃ = (I - P (P + (1/µ_i) I)^{-1}) P,                                    (38b)
    K_i = P̃ H_i^T (H_i P̃ H_i^T + R)^{-1},                                  (38c)

which is referred to as the Levenberg-Marquardt IEKF (LM-IEKF). The damping parameter works as an interpolation between GN and steepest descent when µ is small and large, respectively. The parameter also affects the step size, where a large µ means shorter steps, reflecting the size of the trust region, and it should be adapted at each iteration. The relations in (38) were derived in [12] but, just as in [20], it was never indicated how µ is selected; in fact, they never seem to perform any iterations. Therefore their solution rather works as Tikhonov regularization for the linearized problem with fixed µ. In [12, Example 2] the damping has to be selected as µ = cond(J^T J) ≈ 10⁶ in order to obtain roughly the same update. The update, counterintuitively, has a larger cost, V, than the EKF update but a lower RMSE. This example is extremely ill-conditioned and the Hessian approximation is thus not very useful. Furthermore, the condition number can serve as a simple heuristic for selecting the damping automatically, resulting in cond(J^T J + µ² I) ≈ 1. Another option, due to [19], is to start with some initial µ = µ_0 (typically µ_0 = 0.01) and a factor v > 1. The factor is used to either increase or decrease the damping if the cost is increased or decreased, respectively.

The LM-IEKF can be improved by replacing the diagonal damping with µ diag(J^T J) in (37), as suggested by [19]. By doing so, larger steps are made in directions where the gradient is small, further speeding up convergence. The resulting equations for the problem (6) then become

    x_{i+1} = x̂ + K_i (y_t - h_i - H_i x̃_i) - µ_i (I - K_i H_i) P̃ B_i x̃_i,    (39a)
    P̃ = (I - P (P + (1/µ_i) B_i^{-1})^{-1}) P,                                 (39b)
    K_i = P̃ H_i^T (H_i P̃ H_i^T + R)^{-1},                                      (39c)
    B_i = diag(J_i^T J_i) = diag(H_i^T R^{-1} H_i + P^{-1}),                    (39d)

which is shown in Appendix B.
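The µ-adaptation attributed to [19] above is simple to sketch: after each trial step, decrease µ by the factor if the cost went down, otherwise increase it and retry. The constants and the loop structure here are illustrative assumptions, not the paper's pseudocode:

```python
def lm_damping_schedule(V, x_trial_fn, x, mu=0.01, v=10.0, max_tries=10):
    """Adapt the LM damping: accept a step if the cost decreases, else raise mu.

    x_trial_fn(x, mu) should return the candidate iterate from (38) or (39).
    """
    for _ in range(max_tries):
        x_new = x_trial_fn(x, mu)
        if V(x_new) < V(x):
            return x_new, mu / v      # success: relax the damping
        mu *= v                       # failure: shorten the step and retry
    return x, mu                      # give up, keep the current iterate
```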
IV. RESULTS

The filters are evaluated in two simulations highlighting the effect of line search and the overall better performance obtained with the derived optimisation-based methods in a bearings-only tracking problem.

A. Line Search

To illustrate the effect of using line search in the EKF and IEKF, consider the measurement function

    h(x) = -x².    (40)

This results in a cost function with two minima at ±√|y| and a local maximum at x = 0, which makes the problem difficult. Fig. 1 shows the estimates obtained with the EKF and IEKF initialized with x_0 = 0.1, R = 0.1 and P = 1, where V(x_0) = 4.90, while the true state is x* = 1 and a perfect measurement y = -1 is given. In the IEKF the step length is reduced to α = 0.5 for the first iteration, resulting in x_IEKF^1 = 0.807, V(x_IEKF^1) = 0.860, while the EKF (equivalent to the IEKF without step-length adaption) yields x_EKF^1 = 1.51, V(x_EKF^1) = 9.36. Another two iterations in the IEKF result in x_IEKF^3 = 0.978, V(x_IEKF^3) = 0.395. Two things can be noted from this simple example. First, the adaptive step length improves the EKF noticeably at a relatively low cost. Second, the iterations in the IEKF further improve the estimate.

Fig. 1 (plot omitted; shows the cost V(x) with the iterates x_0, x_EKF^1, x_IEKF^1 and x_IEKF^3 marked). EKF with full step increases the cost while half the step size results in a reduced cost. Remember that EKF and IEKF are identical in the first step. Three IEKF iterations with line search significantly reduce the cost and further improve the estimate.

Furthermore, with a smaller measurement covariance, e.g., R = 0.01, the EKF performance degrades even more, giving x_EKF^1 = 4.01 (not shown in Fig. 1), while the IEKF performs well. This illustrates that the EKF struggles with large residuals when the linearisation point differs much from its optimal value.
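The toy problem (40) is easy to reproduce; a sketch, not the paper's code, that takes one full EKF/GN step and one halved step from x_0 = 0.1 and compares the costs:

```python
import numpy as np

h = lambda x: -x**2                 # measurement model (40)
H = lambda x: -2.0 * x              # its Jacobian dh/dx
y, R, P, x0 = -1.0, 0.1, 1.0, 0.1   # setup from the text; x0 is the prior mean

def V(x):
    """Scalar instance of the cost (6)."""
    return 0.5 * (y - h(x))**2 / R + 0.5 * (x0 - x)**2 / P

K = P * H(x0) / (H(x0) * P * H(x0) + R)   # Kalman gain (4a), scalar case
p = K * (y - h(x0))                        # full EKF/GN step, cf. (18)

print(V(x0))              # 4.90
print(V(x0 + p))          # approx. 9.36: the full (EKF) step increases the cost
print(V(x0 + 0.5 * p))    # approx. 0.86: the halved step decreases it
```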
B. Bearings Only Tracking

The next example, a bearings-only tracking problem involving several sensors, which is also studied in [21, Ch. 5] and [22, Ch. 4], is used to illustrate IEKF-L, QN-EKF and LM-IEKF. In a first stage the target with state x = [X, Y]^T is stationary, i.e., w_t = 0, at the true position x* = [1.5, 1.5]^T. The bearing measurement function from the j-th sensor S^j at time t is

    y_t^j = h^j(x_t) + e_t = arctan2(Y_t - S_Y^j, X_t - S_X^j) + e_t,    (41)

where arctan2() is the two-argument arctangent function, and S_Y and S_X denote the Y and X coordinates of the sensors, respectively. The noise is e_t ∼ N(0, R_t). The Jacobians of the dynamics and measurements are

    F = I,    H^j = 1 / ((X - S_X^j)² + (Y - S_Y^j)²) [ -(Y - S_Y^j)   X - S_X^j ].    (42)

The two sensors have positions S^1 = [0, 0]^T and S^2 = [1.5, 0]^T. The filters are initialised with x̂_{0|0} = [0.5, 0.1]^T, P_{0|0} = 0.1 I_2, R = π² 10⁻⁵ I_2 and Q = 0.1 I_2.
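The measurement model (41) and its Jacobian (42) in numpy; a sketch, with the two sensor positions from the text stacked in an array:

```python
import numpy as np

S = np.array([[0.0, 0.0], [1.5, 0.0]])     # sensor positions S^1 and S^2

def h_bearings(x, S=S):
    """Bearing from each sensor to the target x = [X, Y], eq (41)."""
    d = x - S                               # rows: [X - S_X^j, Y - S_Y^j]
    return np.arctan2(d[:, 1], d[:, 0])

def H_bearings(x, S=S):
    """Stacked measurement Jacobian, eq (42), one row per sensor."""
    d = x - S
    r2 = np.sum(d**2, axis=1)               # (X - S_X^j)^2 + (Y - S_Y^j)^2
    return np.column_stack([-d[:, 1], d[:, 0]]) / r2[:, None]
```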
Fig. 2 shows the four filters' posteriors with 2σ covariances after a single perfect measurement and 5 iterations. All filters, except for the EKF, result in acceptable estimates of the true target state and its uncertainty. After 10 iterations IEKF-L, LM-IEKF and QN-EKF produce nearly identical results.

Fig. 2 (plot omitted; shows the posteriors of the EKF, IEKF-L, LM-IEKF and QN-EKF in the X-Y plane). All filters except for the EKF are iterated 5 times with line search and come closer to the true state with a covariance that captures the uncertainty.

For the final example 10 iterations are used to ensure that all filters have converged (less would have sufficed), and the result is clearly improved estimates compared to the EKF. A small Monte Carlo study is done with 10 trajectories generated by w_t ∼ N(0, 0.1 I_2), e_t ∼ N(0, π² 10⁻⁵ I_2) for 20 time steps. The filters are initialized with the true position x̂_{0|0} = [1.5, 1.5]^T with P_{0|0} = 0.1 I_2. The results are summarized in Table I, where it can again be seen that all filters perform better than the EKF, with IEKF-L and LM-IEKF being slightly better than QN-EKF.

TABLE I
MONTE CARLO SIMULATION RESULTS FOR THE FOUR FILTERS.

    Method    | EKF  | IEKF-L | LM-IEKF | QN-EKF
    Mean-RMSE | 0.45 | 0.19   | 0.19    | 0.24

V. CONCLUSIONS

The cost function used in the extended Kalman filter (EKF) has been studied from an optimisation point of view. Applying optimisation techniques to the cost function motivates using an adaptive step length in the EKF, and three iterated EKF variations are derived. In simulations it is shown that these filters outperform the standard EKF, providing more accurate and consistent results. It remains to investigate the convergence rates obtained this way, as well as how to incorporate other EKF improvements such as the higher-order EKF, square-root implementations for improved numerical stability, and how to deal with colored noise.
ACKNOWLEDGMENT

This work was funded by the Vinnova Industry Excellence Center LINK-SIC together with the Swedish Research Council through the grant Scalable Kalman Filters and the Swedish strategic research center Security Link.

APPENDIX A
DERIVATION OF THE QUASI-NEWTON EKF

Using (7), (15) gives the simplified notation

    r_i = [ R^{-1/2} (y - h_i) ; P^{-1/2} (x̂ - x_i) ],    (43)
    J_i = - [ R^{-1/2} H_i ; P^{-1/2} ].                   (44)

The right-hand side of (30) evaluates to

    x_i - (J_i^T J_i + T_i)^{-1} J_i^T r_i
      = x_i + (H_i^T R^{-1} H_i + P^{-1} + T_i)^{-1} (H_i^T R^{-1} (y - h_i) + P^{-1} (x̂ - x_i))
      = x̂ + (H_i^T R^{-1} H_i + P^{-1} + T_i)^{-1} (H_i^T R^{-1} (y - h_i - H_i (x̂ - x_i)) - T_i (x̂ - x_i))
      = x̂ + S_i^q H_i^T R^{-1} (y - h_i - H_i (x̂ - x_i)) - S_i^q T_i (x̂ - x_i)
      = x̂ + K_i^q (y_t - h_i - H_i x̃_i) - S_i^q T_i x̃_i,    (45)

which is the expression in (31) with x̃_i = x̂ - x_i.
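The identity between (30) and (31) is easy to check numerically; a small sketch comparing both sides for an assumed random instance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2                                     # assumed state/measurement dimensions
H = rng.standard_normal((m, n))
A = rng.standard_normal((n, n)); P = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, m)); R = B @ B.T + m * np.eye(m)
C = rng.standard_normal((n, n)); T = C @ C.T    # symmetric PSD correction
x_hat, x_i = rng.standard_normal(n), rng.standard_normal(n)
y, h_i = rng.standard_normal(m), rng.standard_normal(m)

Ri, Pi = np.linalg.inv(R), np.linalg.inv(P)

# Left-hand side: the corrected GN step (30), written out as in (45).
lhs = x_i + np.linalg.solve(H.T @ Ri @ H + Pi + T,
                            H.T @ Ri @ (y - h_i) + Pi @ (x_hat - x_i))

# Right-hand side: the Kalman-gain form (31).
S_q = np.linalg.inv(H.T @ Ri @ H + Pi + T)
K_q = S_q @ H.T @ Ri
x_tilde = x_hat - x_i
rhs = x_hat + K_q @ (y - h_i - H @ x_tilde) - S_q @ T @ x_tilde

assert np.allclose(lhs, rhs)    # (30) and (31) agree
```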
APPENDIX B
DERIVATION OF THE MODIFIED LM-IEKF

The LM-IEKF with modified step is evaluated using (37), (14), (43) and (44) as

    x_i - (J_i^T J_i + µ_i B_i)^{-1} J_i^T r_i
      = x_i + (H_i^T R^{-1} H_i + P^{-1} + µ_i B_i)^{-1} (H_i^T R^{-1} (y - h_i) + P^{-1} (x̂ - x_i))
      = x̂ + (H_i^T R^{-1} H_i + P^{-1} + µ_i B_i)^{-1} (H_i^T R^{-1} (y - h_i - H_i x̃_i) - µ_i B_i x̃_i)
      = x̂ + K_i (y_t - h_i - H_i x̃_i) - µ_i (I - K_i H_i) P̃ B_i x̃_i,    (46)

which is the expression in (39), where we have used the relations

    K_i = (H_i^T R^{-1} H_i + P̃^{-1})^{-1} H_i^T R^{-1}
        = P̃ H_i^T (H_i P̃ H_i^T + R)^{-1},    (47)
    (H_i^T R^{-1} H_i + P̃^{-1})^{-1} = (I - K_i H_i) P̃,    (48)

and

    P̃ = (P^{-1} + µ_i B_i)^{-1} = (I - P (P + (1/µ_i) B_i^{-1})^{-1}) P.    (49)

REFERENCES

[1] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME—Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960.
[2] A. H. Jazwinski, Stochastic Processes and Filtering Theory, ser. Mathematics in Science and Engineering. Academic Press, Inc, 1970, vol. 64.
[3] N. U. Ahmed and S. M. Radaideh, "Modified extended Kalman filtering," IEEE Transactions on Automatic Control, vol. 39, no. 6, pp. 1322–1326, Jun. 1994.
[4] S. Julier, J. Uhlmann, and H. F. Durrant-Whyte, "A new method for the nonlinear transformation of means and covariances in filters and estimators," IEEE Transactions on Automatic Control, vol. 45, no. 3, pp. 477–482, Mar. 2000.
[5] S. J. Julier and J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, Mar. 2004.
[6] F. Gustafsson and G. Hendeby, "Some relations between extended and unscented Kalman filters," IEEE Transactions on Signal Processing, vol. 60, no. 2, pp. 545–555, Feb. 2012.
[7] G. A. Einicke and L. B. White, "Robust extended Kalman filtering," IEEE Transactions on Signal Processing, vol. 47, no. 9, pp. 2596–2599, Sep. 1999.
[8] D. Bertsekas, "Incremental least squares methods and the extended Kalman filter," in Proceedings of the 33rd IEEE Conference on Decision and Control, vol. 2, Lake Buena Vista, FL, USA, Dec. 1994, pp. 1211–1214.
[9] P. J. G. Teunissen, "On the local convergence of the iterated extended Kalman filter," in Proceedings of the XX General Assembly of the IUGG, IAG Section IV. Vienna: IAG, Aug. 1991, pp. 177–184.
[10] G. K. Smyth, Encyclopedia of Environmetrics. John Wiley & Sons, Ltd, 2006. [Online]. Available: http://dx.doi.org/10.1002/9780470057339.vao011
[11] B. M. Bell and F. W. Cathey, "The iterated Kalman filter update as a Gauss-Newton method," IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 294–297, Feb. 1993.
[12] R. L. Bellaire, E. W. Kamen, and S. M. Zabin, "New nonlinear iterated filter with applications to target tracking," in SPIE Signal and Data Processing of Small Targets, vol. 2561, San Diego, CA, USA, Sep. 1995, pp. 240–251.
[13] H. Moriyama, N. Yamashita, and M. Fukushima, "The incremental Gauss-Newton algorithm with adaptive stepsize rule," Computational Optimization and Applications, vol. 26, no. 2, pp. 107–141, 2003.
[14] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. New York: Springer, 2006.
[15] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[16] Y. Bar-Shalom and X.-R. Li, Estimation and Tracking: Principles, Techniques and Software. Boston: Artech House, 1993.
[17] J. E. Dennis, D. M. Gay, and R. E. Welsch, "Algorithm 573 — NL2SOL, an adaptive nonlinear least-squares algorithm," ACM Transactions on Mathematical Software, vol. 7, pp. 348–368, 1981.
[18] K. Levenberg, "A method for the solution of certain non-linear problems in least squares," Quarterly Journal of Applied Mathematics, vol. II, no. 2, pp. 164–168, 1944.
[19] D. W. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," SIAM Journal on Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.
[20] M. Fatemi, L. Svensson, L. Hammarstrand, and M. Morelande, "A study of MAP estimation techniques for nonlinear filtering," in Proceedings of the International Conference on Information Fusion (FUSION), 2013.
[21] G. Hendeby, "Performance and implementation aspects of nonlinear filtering," Dissertations No 1161, Linköping Studies in Science and Technology, Mar. 2008.
[22] ——, "Fundamental estimation and detection limits in linear non-Gaussian systems," Licentiate Thesis No 1199, Department of Electrical Engineering, Linköpings universitet, Sweden, Nov. 2005.