Jieqing Shi
November 4, 2015
Examiner
Univ.-Prof. Dr.-Ing. Christian Endisch
Supervisor
Dipl.-Ing. Simon Altmannshofer
1 Introduction
With recent advancements in vehicle automation, advanced driver assistance systems have
emerged as an important tool to facilitate energy-efficient driving. For the driver assistance
systems to function, an accurate model of the vehicle's longitudinal dynamics is needed, for
which vehicle parameters such as the mass and the driving resistances are required. In general,
a straightforward approach is to use sensors to measure these parameters. This, however, is
not always applicable, or it can be costly [1]. Furthermore, vehicle parameters such as
mass and driving resistances can vary depending on the load, the attachment of trailers,
road conditions etc. In this respect, an efficient alternative to a sensor-based approach is to
estimate these parameters adaptively using a model-based approach [22]. For this, methods
from data fusion can be applied to available vehicle data retrieved from existing sensors and,
using a mathematical model of the vehicle dynamics, the unknown parameters can then be
reconstructed. The estimation algorithm is developed as a new software function based on various
rapid prototyping development tools and can be continuously validated on existing vehicle data.
To guarantee reliable system performance, the estimation algorithm has to be robust and accurate.
Many estimators have been proposed in the literature, amongst which the recursive least squares
(RLS) estimator is one of the most popular algorithms. However, in situations where the system
excitation is poor (e.g. on highways where the vehicle travels at nearly constant speed), a
problem called estimator windup can occur in the RLS estimator. As a result, the estimator
is unable to produce accurate estimates of the unknown parameters, which can severely inhibit
the system's performance. This is why the RLS algorithm is not well-suited for the estimation
of vehicle parameters.
The objective of this work is to find modifications of and alternatives to the RLS estimator
which show better performance regarding the estimation of vehicle parameters
during periods of poor excitation. More specifically, the following tasks are targeted in this
work:
- Analysis of estimator windup including its manifestation and consequences
- Research of alternative/modified estimators which target the problem of estimator windup
- Selection of suitable estimator candidates for the estimation of parameters in the vehicle dynamics model
- Implementation of the estimators in MATLAB/Simulink
- Validation and evaluation of the algorithms based on data obtained from various test drives
The remainder of this work is organized as follows: chapter 2 discusses the basics of real-time
parameter identification. The algorithm of the RLS estimator is introduced and the problem
of estimator windup is explained. Moreover, the mathematical model of a vehicle's longitudinal
dynamics based on [1] is described. Chapter 3 introduces several estimation techniques which can
be classified by their defining principles. The estimators' algorithms as well as their key properties
are stated and discussed. Subsequently, a computational study is performed. The introduced
estimators are implemented in MATLAB/Simulink and, based on the data obtained from test
drives, the performance of the algorithms is evaluated regarding the quality of the parameter
estimates of the vehicle's longitudinal dynamics model. The results of the simulation as well as
key observations are described in chapter 4. Finally, this work is concluded with a summary and
discussion in chapter 5.
Consider the linear regression model

y(k) = φ^T(k) θ + e(k)   (2.1)

where φ(k) = [φ_1(k) . . . φ_n(k)]^T denotes the regression vector, θ = [θ_1 . . . θ_n]^T the vector of unknown parameters and e(k) an error term. Stacking the measurements up to time instance k,

Y(k) = [y(1) y(2) . . . y(k)]^T
Φ(k) = [φ^T(1) φ^T(2) . . . φ^T(k)]^T
E(k) = [e(1) e(2) . . . e(k)]^T = Y(k) − Φ(k)θ̂

the least squares loss function can be written as

V(θ̂, k) = (1/2) Σ_{i=1}^{k} (y(i) − φ^T(i)θ̂)² = (1/2) (Y(k) − Φ(k)θ̂)^T (Y(k) − Φ(k)θ̂) = (1/2) E^T(k) E(k)   (2.2)

Minimizing V with respect to θ̂ yields the least squares estimate

θ̂ = (Φ^T Φ)^{-1} Φ^T Y   (2.3)
The matrix Φ^T Φ is often denoted as the information matrix R; its inverse (Φ^T Φ)^{-1} is called
the covariance matrix P [2].
P(k) = (Φ^T(k) Φ(k))^{-1} = ( Σ_{i=1}^{k} φ(i) φ^T(i) )^{-1}   (2.4)
A geometric interpretation of the least squares estimate can be obtained when considering a
two-dimensional case where two parameters θ_1 and θ_2 are estimated (see fig. 2.1). With the
regression variables spanning a subspace in which the predicted output Ŷ lies, the least squares
estimation can be seen as finding the parameters θ̂ such that the distance between the real
output Y and its best approximation Ŷ is minimal. The minimal distance is only achieved when
[2]

E = Y − Ŷ ⊥ span(φ_1, φ_2, . . . , φ_n)
In order to enable online parameter estimation, the least squares algorithm can also be formulated
in a recursive manner, i.e. the results obtained until time instance k−1 are used to determine
the estimates at the current time instance k. For this purpose, it is assumed that the information
matrix R(k) = Φ^T(k)Φ(k) is regular for all k. Using the fact that R(k) can be decomposed as

R(k) = P^{-1}(k) = Φ^T(k)Φ(k) = Σ_{i=1}^{k−1} φ(i)φ^T(i) + φ(k)φ^T(k) = P^{-1}(k−1) + φ(k)φ^T(k)   (2.5)

the least squares estimate can be written as

θ̂(k) = P(k) Φ^T(k) Y(k) = P(k) Σ_{i=1}^{k} φ(i)y(i) = P(k) ( Σ_{i=1}^{k−1} φ(i)y(i) + φ(k)y(k) )   (2.6)

where

Σ_{i=1}^{k−1} φ(i)y(i) = P^{-1}(k−1) θ̂(k−1)
From (2.5) it can be deduced that P^{-1}(k−1) = P^{-1}(k) − φ(k)φ^T(k). Plugging this expression
into (2.6) and, after some algebraic reformulations, applying the matrix inversion lemma
(A + BCD)^{-1} = A^{-1} − A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} to P^{-1}(k) = P^{-1}(k−1) + φ(k)φ^T(k), one obtains the
recursive least squares (RLS) algorithm [2]:

θ̂(k) = θ̂(k−1) + K(k)ε(k)   (2.7)

K(k) = P(k)φ(k) = P(k−1)φ(k) / (1 + φ^T(k)P(k−1)φ(k))   (2.8)

P(k) = P(k−1) − P(k−1)φ(k)φ^T(k)P(k−1) / (1 + φ^T(k)P(k−1)φ(k))   (2.9)

ε(k) = y(k) − φ^T(k)θ̂(k−1)   (2.10)

with K(k) = P(k)φ(k) denoting a correction factor. Interpreting ε(k) as the error which occurs
when predicting the measurement y(k) with the previous estimate θ̂(k−1), the estimate θ̂(k) is
obtained by correcting the previous estimate proportionally to this prediction error. To initialize
the recursion, it is possible to use a first batch of data to compute initial estimates of θ̂(0)
and P(0) or to simply assume appropriate initial values [14].
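The recursion (2.7)-(2.10) can be sketched in a few lines; the following is an illustrative Python/NumPy translation (the thesis implementation itself is in MATLAB/Simulink), and the simulated two-parameter system is made up for the example.

```python
# Minimal sketch of the RLS recursion (2.7)-(2.10), without forgetting.
# The simulated system and initial values are illustrative assumptions.
import numpy as np

def rls_step(theta, P, phi, y):
    """One RLS update."""
    eps = y - phi @ theta                       # prediction error (2.10)
    denom = 1.0 + phi @ P @ phi
    K = P @ phi / denom                         # correction gain (2.8)
    theta = theta + K * eps                     # parameter update (2.7)
    P = P - np.outer(P @ phi, phi @ P) / denom  # covariance update (2.9)
    return theta, P

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), 1e3 * np.eye(2)         # assumed initial values
for _ in range(500):
    phi = rng.normal(size=2)                    # persistently exciting regressor
    y = phi @ theta_true + 0.01 * rng.normal()  # noisy measurement
    theta, P = rls_step(theta, P, phi, y)
print(theta)
```

With a persistently exciting regressor the estimates settle close to the true parameters.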
An interesting observation is the similarity between the RLS and the standard Kalman filter
recursive algorithm. The Kalman filter algorithm is usually associated with a random walk
parameter variation model and a linear regression model that can be described by [8], [20]:
θ(k) = θ(k−1) + w(k)   (2.11)

y(k) = φ^T(k)θ(k) + e(k)   (2.12)

The corresponding estimator equations are

θ̂(k) = θ̂(k−1) + K(k)ε(k)

K(k) = P(k−1)φ(k) / (r(k) + φ^T(k)P(k−1)φ(k))   (2.13)

P(k) = P(k−1) − P(k−1)φ(k)φ^T(k)P(k−1) / (r(k) + φ^T(k)P(k−1)φ(k)) + Q(k)   (2.14)

ε(k) = y(k) − φ^T(k)θ̂(k−1)

where w(k) is a random walk sequence with covariance matrix Q(k) and r(k) denotes the variance of the measurement noise e(k).
The similarity between the standard RLS and the Kalman filter estimator becomes apparent when
comparing the algorithm equations. In fact, it can be shown that the RLS estimator is a special
case of the Kalman filter if specific assumptions about Q(k) and r(k) are made [8].
To enable the tracking of time-varying parameters, the loss function (2.2) can be modified to

V(θ̂, k) = (1/2) Σ_{i=1}^{k} λ^{k−i} (y(i) − φ^T(i)θ̂)²   (2.15)

The constant 0 < λ ≤ 1 is called the forgetting factor. Evidently, the modified loss function
assigns exponentially less weight to data that is far away from the current time instance k while
new incoming data is considered with more weight. This way, parameter estimation using a
forgetting factor can simply be interpreted as averaging the data over a certain amount of data
points, while the forgetting factor sets the memory length of the algorithm [3], [20]. Repeating the
calculations of the previous subsection for the modified loss function leads to the RLS algorithm
with exponential forgetting [2]:
θ̂(k) = θ̂(k−1) + K(k)ε(k)   (2.16)

K(k) = P(k−1)φ(k) / (λ + φ^T(k)P(k−1)φ(k))   (2.17)

P(k) = (1/λ) ( P(k−1) − P(k−1)φ(k)φ^T(k)P(k−1) / (λ + φ^T(k)P(k−1)φ(k)) )   (2.18)

ε(k) = y(k) − φ^T(k)θ̂(k−1)   (2.19)

Alternatively, the algorithm can be formulated in terms of the information matrix, in which case the update equation

R(k) = λ R(k−1) + φ(k)φ^T(k)   (2.20)

is used instead of (2.9) [6]. However, from the perspective of implementation, it is often computationally more efficient to use the covariance matrix in the update equations so as to avoid a
matrix inversion operation at each update.
The quality of the estimates depends directly on the choice of the forgetting factor. Choosing
λ ≈ 1 leads to robust, smooth trajectories; however, the algorithm loses its capability to track
parameter changes since old data is discarded at a relatively slow rate. Conversely, a small
value for λ enables fast tracking of parameter changes, but the estimates become more
sensitive to noise, ultimately causing fluctuating trajectories and decreased robustness. Therefore,
the trade-off between adaption rate and robustness needs to be taken into account when deciding
on the forgetting factor [14]. This relationship is illustrated in fig. 2.2. The left figure shows the
estimation of four parameters using λ = 0.9 while the right figure displays the same estimation
using λ = 0.95. It is apparent that a smaller value of λ causes more noise-sensitive estimates and
larger measurement variances; however, changes in the parameters can be tracked relatively fast.
In comparison, a larger λ leads to smoother, less noise-sensitive estimates at the cost of slower
parameter tracking [14].
Figure 2.2 Comparison of different forgetting factors λ = 0.9 (left) and λ = 0.95 (right) in the RLS
estimator with exponential forgetting (true values: dashed lines, estimated values: solid lines) [14]
If the process is not sufficiently excited, e.g. φ(k) = 0, the covariance matrix update (2.18) reduces to

P(k) = (1/λ) P(k−1)

As λ < 1, P(k) grows exponentially with increasing time horizon, which leads to a blow-up or
windup of the covariance matrix [2]. Then, as soon as the process becomes properly exciting again,
the correction gain K(k) = P(k)φ(k) becomes very large due to the large covariance matrix.
This effect is illustrated in fig. 2.3, which shows the estimates of two parameters a
and b (solid line) drifting away from their true values (dashed line). As soon as proper excitation
occurs (such as after t ≈ 130 s), the covariance matrix decreases towards 0, resulting in the estimates
converging to the true parameters [2].
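The blow-up is easy to reproduce numerically: feeding the covariance update (2.18) a constant regressor leaves one direction of the parameter space unexcited, and the corresponding part of P grows by a factor 1/λ per step. The following Python sketch (with arbitrary λ and horizon) illustrates this:

```python
# Sketch of estimator windup: the forgetting update (2.18) with a constant
# regressor; the unexcited direction of P blows up exponentially.
import numpy as np

lam = 0.95
P = np.eye(2)
phi = np.array([1.0, 0.0])   # constant regressor: only one direction excited
for _ in range(200):
    denom = lam + phi @ P @ phi
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam  # (2.18)
trace_after = np.trace(P)
print(trace_after)           # dominated by the blown-up, unexcited direction
```

Here the excited element of P converges to a small value while the unexcited one grows as (1/λ)^k, so after 200 steps the trace is on the order of 10^4.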
Figure 2.3 Effects of estimator windup caused by a constant regressor: control variable (top left), covariance
matrix element (top right), trajectories of estimates (bottom) [2]
In such situations the robustness of the algorithm cannot be guaranteed. Thus, as estimator windup causes an
unbounded increase of the covariance matrix, the estimates become unreliable [7]. This is why
estimation algorithms relying on constant forgetting factors are only suitable for persistently
excited processes [22].
The concept of persistent excitation can be defined by the following expression. A sequence of
regressors is called persistently exciting in m steps if there exist constants c, C and m such that
[9]

cI ≤ Σ_{i=k+1}^{k+m} φ(i)φ^T(i) ≤ CI   (2.21)

for any m > n. This condition implies that if φ(k) is persistently exciting, the entire R^n space
can be spanned by φ(k) uniformly in m steps. On the contrary, if the input is not sufficiently
exciting, only a subspace with a dimension smaller than n is spanned [9].
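Condition (2.21) can be checked numerically by looking at the smallest eigenvalue of the windowed sum of outer products; in the Python sketch below the window length, the lower bound c and the test signals are arbitrary choices for illustration:

```python
# Sketch of the persistent-excitation test (2.21): over a window of m steps,
# the smallest eigenvalue of the summed outer products must exceed some c > 0.
import numpy as np

def is_persistently_exciting(phis, c=1e-3):
    """phis: (m, n) array of regressors over one window."""
    S = sum(np.outer(p, p) for p in phis)
    return bool(np.linalg.eigvalsh(S).min() >= c)

rng = np.random.default_rng(1)
rich = rng.normal(size=(20, 3))                      # varying regressors span R^3
poor = np.tile(np.array([1.0, 0.0, 0.0]), (20, 1))   # constant regressor: rank 1
print(is_persistently_exciting(rich))   # True
print(is_persistently_exciting(poor))   # False
```

The constant regressor only spans a one-dimensional subspace, so the smallest eigenvalue stays at zero and the test fails, exactly the situation in which windup occurs.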
To decide if a signal is exciting or not, it can be useful to exploit the fact that a lack of excitation
is reflected in the behavior of the covariance matrix in several ways. Since the covariance matrix is
a symmetric, real-valued, positive semidefinite matrix which possesses orthogonal eigenvectors,
it can be diagonalized using an eigenvalue decomposition. In that way, a covariance matrix can be
transformed into canonical form, i.e. factorized as

P = U Λ U^{-1}

where U is a square matrix whose i-th column is the eigenvector q_i of P and Λ is a diagonal
matrix with the eigenvalues of P as its diagonal elements. As a result, each covariance matrix
can be fully represented in terms of its eigenvalues and eigenvectors.
Adopting a statistical interpretation, the eigenvalues of a covariance matrix represent the
magnitude of the data spread in the direction of the respective eigenvectors.

Figure 2.4 Snapshot representation of a 2 × 2 covariance matrix as an ellipse at time k (semi-axes a and b aligned with the eigenvectors EV1 and EV2)

This implies that the windup phenomenon corresponds to an unbounded growth of the covariance
matrix and is caused by a lack of excitation in the input signal, i.e. no new information is
incoming regarding the parameters or, more precisely, the incoming data does not contain enough
information all along the parameter space [5]. Since an estimator such as the RLS discards old
information, the uncertainty grows. Because each covariance matrix can be represented in terms
of its eigenvalues and eigenvectors, the windup phenomenon caused by insufficient excitation can
therefore be detected by an unbounded increase of its eigenvalues. Referring to fig. 2.4, a lack of
excitation would be represented by an increase of the eigenvalues, thus changing the shape of the
ellipse during the period of poor excitation [20]. Furthermore, using the fact that the trace of a
matrix is defined as the sum of its eigenvalues, another possibility to detect a lack of excitation
is by simply measuring the trace of the P matrix. Thus, an increase of the covariance matrix
eigenvalues would be represented by an increase of the matrix trace.
Similarly, another method of detection is by interpreting the windup phenomenon from the
perspective of the information matrix. As seen from (2.20), poor excitation will lead to [6]
R(k) = λ R(k−1)

As λ < 1, information is discarded in every computation step, which leads to the information
matrix tending to 0. In this case, the information matrix becomes almost singular and P as its
inverse grows without bound.
In today's driver assistance systems, accurate models of the vehicle's longitudinal dynamics are
required to enable automated control schemes for e.g. fuel-efficient driving, vehicle following or
range prediction of electric vehicles. Since the longitudinal dynamics is mainly characterized by
the mass of the vehicle as well as the rolling resistance and air resistance, obtaining an accurate
model depends mostly on good estimates of these parameters. Since a sensor-based measurement
is often not possible, an adaptive, real-time estimation of these parameters can be considered
a rational alternative.
In order to derive a system for online parameter identification, a physical model of a vehicle's
longitudinal dynamics is required. The dynamics can be modeled as [1], [22]:

(m + m_rot) v̇ = F_A − (1/2) ρ_air A c_w v² − m g sin(α) − m g cos(α) f_R   (2.22)
In the above equation, m is the vehicle mass, m_rot is the equivalent mass of the rotating components
and v̇ the vehicle acceleration. F_A is the driving force and is computed as F_A = T_wheel / r_dyn, i.e. the
wheel torque divided by the dynamic rolling radius of the tire. The resisting forces consist of the
aerodynamic resistance, slope resistance and rolling resistance. The air resistance is defined as
F_air = (1/2) ρ_air A c_w v², with ρ_air denoting the air density, A being the frontal area of the vehicle, c_w
being the drag coefficient and v representing the vehicle speed. The slope resistance is defined
as F_S = m g sin(α), with g denoting the gravitational constant and α being the slope of the
road, where α > 0 and α < 0 correspond to uphill and downhill grades respectively and α = 0 means no
inclination. Lastly, the rolling resistance is computed as F_R = m g cos(α) f_R, with f_R denoting the
rolling resistance coefficient of the road [1], [22].
The unknown parameters in the dynamic model to be estimated are the vehicle mass, the rolling
resistance coefficient and the drag coefficient. For the estimation of these parameters, (2.22) can
be further simplified. Using the small angle approximation cos(α) ≈ 1 for small road slopes, the
rolling resistance can be approximated as F_R ≈ m g f_R. Furthermore, in order to obtain a linear
representation, the rolling resistance coefficient is estimated together with the vehicle mass
(m f_R). Likewise, the drag coefficient is estimated in combination with the frontal surface
area of the vehicle (A c_w). Lastly, the vehicle acceleration is combined with the g sin(α) term of the slope
resistance, so that v̇ + g sin(α) = a_x is used. With these simplifications, (2.22) can be expressed
using the regressor notation [1]:

F_A − m_rot v̇ = [a_x   g   (1/2) ρ_air v²] [m   m f_R   A c_w]^T   (2.23)

with the output y = F_A − m_rot v̇, the regression vector φ = [a_x, g, (1/2) ρ_air v²]^T and the parameter vector θ = [m, m f_R, A c_w]^T.
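The regressor form (2.23) can be sketched as follows; the Python code below uses made-up signal and parameter values purely to check that y = φ^T θ reproduces the model (2.22) under the small-angle approximation:

```python
# Sketch of assembling output y and regressor phi per (2.23).
# All numeric values (density, masses, signals) are illustrative assumptions.
import numpy as np

rho_air = 1.2    # air density [kg/m^3]
g = 9.81         # gravitational constant [m/s^2]
m_rot = 80.0     # equivalent mass of rotating components [kg] (assumed)

def regressor(F_A, v, v_dot, alpha):
    """Return output y and regressor phi for theta = [m, m*f_R, A*c_w]."""
    y = F_A - m_rot * v_dot
    a_x = v_dot + g * np.sin(alpha)          # combined acceleration term
    phi = np.array([a_x, g, 0.5 * rho_air * v**2])
    return y, phi

# Consistency check: build F_A from known parameters (small-angle model),
# then verify that y = phi @ theta holds.
theta = np.array([1800.0, 18.0, 0.7])        # [m, m*f_R, A*c_w] (illustrative)
v, v_dot, alpha = 25.0, 0.3, 0.01
F_A = ((theta[0] + m_rot) * v_dot + 0.5 * rho_air * theta[2] * v**2
       + theta[0] * g * np.sin(alpha) + theta[1] * g)
y, phi = regressor(F_A, v, v_dot, alpha)
err = abs(y - phi @ theta)
print(err)
```

The residual is zero up to rounding, confirming that the simplified model is linear in the combined parameters.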
V(θ̂) = (Y − Φθ̂)^T (Y − Φθ̂)   (3.1)

V(θ̂) = (Y − Φθ̂)^T (Y − Φθ̂) + λ_TR ‖L θ̂‖²   (3.2)

By adding the penalty term λ_TR ‖L θ̂‖² to the least squares loss function (3.1), the least squares
problem can be ensured to stay well-posed at the price of biasing the obtained solution. Thus,
the regularization parameter λ_TR has to be chosen considering the trade-off that a small value
does not effectively prevent the ill-conditioning of the problem while a large value leads to a
larger bias of the obtained solution. Ideally, λ_TR can be chosen such that the residual is small
and the penalty is moderate [10].
Since in many cases L is chosen as I (the so-called standard form), the solution of (3.2) is obtained
as:

θ̂ = (Φ^T Φ + λ_TR I)^{-1} Φ^T Y   (3.3)
Similar to the standard RLS, the derivation of the solution can be formulated in recursive form.
Thus, one obtains the Tikhonov regularization estimator (TR) [23]:

θ̂(k) = θ̂(k−1) + K(k)ε(k) − (1 − λ) λ_TR P(k) θ̂(k−1)   (3.4)

K(k) = P(k)φ(k)   (3.5)

R(k) = λ R(k−1) + φ(k)φ^T(k) + (1 − λ) λ_TR I   (3.6)

ε(k) = y(k) − φ^T(k)θ̂(k−1)   (3.7)
Note that while with the RLS estimator it is possible to formulate the algorithm entirely without
the notion of the information matrix, due to the additional term λ_TR I it is not possible to
apply the matrix inversion lemma to obtain a recursive expression for P(k). Hence, the algorithm
is formulated using R(k) = P^{-1}(k).
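One TR update step per (3.4)-(3.7), kept in information-matrix form as described above, can be sketched as follows (illustrative Python; λ, λ_TR and the simulated system are arbitrary assumptions):

```python
# Sketch of one Tikhonov-regularization (TR) step (3.4)-(3.7), maintained in
# information-matrix form R(k) = P^{-1}(k); an explicit inversion is needed
# at each step since the matrix inversion lemma does not apply here.
import numpy as np

def tr_step(theta, R, phi, y, lam=0.98, lam_tr=1e-6):
    n = len(theta)
    R = lam * R + np.outer(phi, phi) + (1 - lam) * lam_tr * np.eye(n)  # (3.6)
    P = np.linalg.inv(R)
    eps = y - phi @ theta                                              # (3.7)
    K = P @ phi                                                        # (3.5)
    theta = theta + K * eps - (1 - lam) * lam_tr * (P @ theta)         # (3.4)
    return theta, R

rng = np.random.default_rng(2)
theta_true = np.array([1.0, 0.5])
theta, R = np.zeros(2), np.eye(2)
for _ in range(400):
    phi = rng.normal(size=2)
    theta, R = tr_step(theta, R, phi, phi @ theta_true)  # noise-free data
print(theta)
```

With a small λ_TR the regularization bias is negligible and the estimates converge to the true parameters.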
The Levenberg-Marquardt regularization estimator (LMR) is given by:

θ̂(k) = θ̂(k−1) + K(k)ε(k)   (3.8)

K(k) = P(k)φ(k)   (3.9)

R(k) = λ R(k−1) + φ(k)φ^T(k) + (1 − λ) λ_LMR I   (3.10)

ε(k) = y(k) − φ^T(k)θ̂(k−1)   (3.11)
The LMR is formally very similar to the TR algorithm. The only difference is that, unlike the
LMR, the TR algorithm includes an additional covariance-matrix term in the parameter update equation. Similar
to TR, the adjustable parameter in the LMR is λ_LMR, which is to be chosen as a
positive constant while considering the same trade-off as in the TR estimator.
A different category of estimators targets the windup problem from the perspective of the forgetting
factor. Since the standard RLS uses a constant, time-invariant forgetting factor, old data is
discarded uniformly in each iteration step. This means that the same forgetting factor is applied
to the covariance matrix at all times, regardless of the level of excitation of the input or the
variation rate of the parameters. Therefore, the windup problem can be targeted by designing an
estimator which uses a variable forgetting factor.
An estimation technique which is based on a variable forgetting factor has been proposed by
Fortescue et al., 1981. The basic idea of the method is to enforce time-variant forgetting where
the forgetting factor is chosen approximately as 1 when the process is not properly excited, in order to
avoid windup, and to decrease the forgetting factor during periods of rich excitation in order to
enable parameter tracking. This way, during periods of poor excitation the large forgetting factor
ensures that old data is not discarded and, conversely, when the excitation is rich λ can be decreased
[14].
In the proposed algorithm, the variable forgetting factor is computed as a function of the noise
variance level and the current estimation error. The idea is to choose λ(k) such that the a
posteriori error remains constant in time (E(k) = E(k−1) = E(0)). In [14] this condition is
achieved by using

λ(k) = 1 − (1 − φ^T(k)K(k−1)) ε²(k) / E(0)   (3.12)
σ_n²(k) = γ σ_n²(k−1) + (1 − γ) ε²(k)   (3.13)

N02 < N01   (3.14)

E(0) = σ_n²(k) N02   if σ_n²(k) > σ_n²(0) and |σ_n²(k) − σ_n²(k−1)| > Δσ_n²   (3.15)

E(0) = σ_n²(k) N01   else   (3.16)

λ(k) = 1 − (1 − φ^T(k)K(k−1)) ε²(k) / E(0)
In the above algorithm, σ_n²(0) and Δσ_n² are the threshold parameters which need to be chosen.
Equation (3.15) states that if the noise variance exceeds its initial level and if there is a
significant increase or decrease of the noise variance level compared to the previous sample time, it
can be deduced that the process is sufficiently exciting. Thus, to enable good adaptation to
parameter changes, λ can be decreased, which is achieved by choosing the smaller N02 to compute
the forgetting factor. Otherwise, when the two thresholds are not exceeded, λ should be chosen
as a large value in order to restrict the effects of estimator windup. In this case, the forgetting
factor is computed using the larger N01 so that λ ≈ 1. Another parameter to be tuned is γ, which
determines how the previous noise variance level and the current estimation error should be
weighted in the recursive computation of σ_n²(k) [14].
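The behaviour of (3.12) can be illustrated with a small Python sketch; E(0), the regressor and the gain values are made up, and the clipping to a lower bound is an added safeguard for the sketch, not part of the printed formula:

```python
# Sketch of the variable forgetting factor (3.12): lambda(k) stays near 1 when
# the prediction error is small and drops when it is large.
# E0, phi and K_prev below are illustrative assumptions.
import numpy as np

def variable_lambda(phi, K_prev, eps, E0, lam_min=0.5):
    lam = 1.0 - (1.0 - phi @ K_prev) * eps**2 / E0
    return max(lam, lam_min)   # guard against very small/negative values

phi = np.array([1.0, 0.5])
K_prev = np.array([0.1, 0.05])
lam_small_err = variable_lambda(phi, K_prev, eps=0.01, E0=1.0)
lam_large_err = variable_lambda(phi, K_prev, eps=1.0, E0=1.0)
print(lam_small_err, lam_large_err)
```

A small prediction error leaves λ(k) ≈ 1 (slow forgetting, windup avoided), while a large error, indicating rich excitation or a parameter change, pushes λ(k) down to the lower bound.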
With the above expression, the error term can distinguish between the error caused by the first
estimate and the second estimate. Similar to the standard RLS, the least squares estimates can be
calculated using the individual gradients ∂V_i/∂θ_i(k), i = 1, 2. Therefore, repeating the calculations
for the standard RLS, the update equations for each individual parameter can be obtained (here
presented for two parameters i = 1, 2).
θ̂_i(k) = θ̂_i(k−1) + k_i(k)ε(k)

θ̂(k) = θ̂(k−1) + k_new(k)ε(k)   (3.17)

ε(k) = y(k) − φ^T(k)θ̂(k−1)   (3.18)

k_new(k) = [ (1/λ_1) p_1(k−1)φ_1(k)   (1/λ_2) p_2(k−1)φ_2(k) ]^T / ( 1 + (1/λ_1) p_1(k−1)φ_1²(k) + (1/λ_2) p_2(k−1)φ_2²(k) )   (3.19)

p_i(k) = (1 − k_new,i(k)φ_i(k)) (1/λ_i) p_i(k−1),   i = 1, 2   (3.20)

P(k) = [ p_1(k)   0 ;   0   p_2(k) ]   (3.21)
with k_new,i denoting the i-th row of k_new. In this case, the covariance matrix is a diagonal matrix
with p_1 and p_2 as the diagonal elements. Thus, for each diagonal element an individual forgetting
factor is applied and each element of the covariance matrix is also updated separately.
The first directional forgetting based algorithm, henceforth referred to as DFB, is given by:

θ̂(k) = θ̂(k−1) + K(k)ε(k)

ε(k) = y(k) − φ^T(k)θ̂(k−1)

a(k) = φ^T(k)P(k−1)φ(k)   (3.22)

K(k) = P(k−1)φ(k) / (1 + a(k))   (3.23)

β(k) = λ − (1−λ)/a(k)   if a(k) > 0
β(k) = 1   if a(k) = 0   (3.24)

P(k) = P(k−1) − P(k−1)φ(k)φ^T(k)P(k−1) / (β^{-1}(k) + a(k))   (3.25)
In case windup is caused by a regression vector of 0, the term a(k) becomes 0 as well and the
equation for the correction gain (3.23) is the same as the corresponding equation in the standard
RLS without forgetting (2.8). Furthermore, the update equation of the covariance matrix is
almost the same as in (2.9) except for the difference in sign of the denominator expression.
Consequently, the influence of the forgetting factor is eliminated from the update equations
and the effects of estimator windup can be limited. On the other hand, if the regression vector is
constant but different from 0, a(k) > 0 applies and β(k) is set to λ − (1−λ)/a(k), which can be
positive or negative. Thus, the denominator β^{-1}(k) + a(k) in the update equation of P(k) can
shift in sign, depending on the chosen forgetting factor and the magnitude of a(k). As a result,
when windup occurs due to a constant regression vector, the covariance matrix can either increase
or decrease.
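The degenerate behaviour for a zero regressor can be checked with a sketch of one DFB step (3.22)-(3.25); as the text states, a(k) = 0 then reduces the update to RLS without forgetting, so P is left untouched:

```python
# Sketch of one DFB step (3.22)-(3.25). With a zero regressor a(k) = 0 and
# the covariance update degenerates to the windup-free RLS form.
import numpy as np

def dfb_step(theta, P, phi, y, lam=0.95):
    eps = y - phi @ theta
    a = phi @ P @ phi                                # (3.22)
    K = P @ phi / (1.0 + a)                          # (3.23)
    theta = theta + K * eps
    if a > 0:
        beta = lam - (1.0 - lam) / a                 # (3.24)
        P = P - np.outer(P @ phi, phi @ P) / (1.0 / beta + a)   # (3.25)
    else:
        P = P - np.outer(P @ phi, phi @ P) / (1.0 + a)  # reduces to (2.9)
    return theta, P

theta, P = np.zeros(2), np.eye(2)
trace0 = np.trace(P)
for _ in range(100):                                 # zero regressor throughout
    theta, P = dfb_step(theta, P, np.zeros(2), 0.0)
unchanged = np.isclose(np.trace(P), trace0)
print(unchanged)
```

Unlike the exponential-forgetting update, one hundred steps without excitation leave the covariance matrix exactly where it started.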
An alternative directional forgetting based algorithm has been proposed by [7], [6]. The fundamental idea can be explained by examining the update equation of the information matrix of the
standard RLS in situations of poor excitation, R(k) = λ R(k−1). It can be observed that in this
case the entire matrix R(k) will tend to 0 because information is forgotten uniformly. However,
a better performance can be achieved when the information content of the regression vector φ(k)
is taken into account, i.e. forgetting is only applied to the specific part of R(k) which is affected
by the new information [7].
This leads to the modification of the information matrix update equation from (2.20) to the more
generalized form

R(k) = F(k) R(k−1) + φ(k)φ^T(k)   (3.26)

where F(k) denotes the forgetting matrix. As stated in [6], the forgetting matrix should be
designed to apply forgetting only on the excited subspace of the parameter space.¹ By introducing
the decomposition R(k−1) = R1(k−1) + R2(k−1), the update equation can be written as

R(k) = R̄(k−1) + φ(k)φ^T(k)   (3.27)

R̄(k−1) = F(k) R(k−1) = R1(k−1) + λ R2(k−1)   (3.28)

R(k−1) = R1(k−1) + R2(k−1)   (3.29)

where R2(k−1) is the part to which forgetting is applied. This way it can be stated that

R1(k−1) φ(k) = 0   (3.30)

which establishes an orthogonal relationship between R1(k−1) and φ(k), and (3.29) becomes

R(k−1) φ(k) = R2(k−1) φ(k)   (3.31)

¹ As one can see, by setting F(k) = λI the update equation of the standard RLS is obtained.
The decomposition can be computed as [6]

R2(k−1) = R(k−1)φ(k)φ^T(k)R(k−1) / (φ^T(k)R(k−1)φ(k))   if φ^T(k)φ(k) > ε   (3.32)

R2(k−1) = 0   if φ^T(k)φ(k) ≤ ε   (3.33)

R1(k−1) = R(k−1) − R2(k−1)   (3.34)

where a deadzone for φ(k) is introduced in which the decomposition is not performed (i.e.
R2(k−1) = 0, R1(k−1) = R(k−1)) [6].
If forgetting is only applied to R2(k−1), the recursive update equation of the information matrix
can be expressed as

R(k) = R1(k−1) + λ R2(k−1) + φ(k)φ^T(k)   (3.35)

where R1(k−1) refers to the part that is orthogonal to the regression vector, which carries
information not to be discarded, R2(k−1) is the part of the information matrix to which
forgetting is applied and φ(k)φ^T(k) denotes the new incoming information.
After some reformulations and application of the matrix inversion lemma, the estimation algorithm
which is henceforth referred to as Directional Forgetting by Cao (DFC) can be formulated [7],
[6]:

θ̂(k) = θ̂(k−1) + K(k)ε(k)

ε(k) = y(k) − φ^T(k)θ̂(k−1)

K(k) = P(k)φ(k) = P̄(k−1)φ(k) / (1 + φ^T(k)P̄(k−1)φ(k))   (3.36)

P̄(k−1) = P(k−1) + (1/λ − 1) φ(k)φ^T(k) / (φ^T(k)R(k−1)φ(k))   if φ^T(k)φ(k) > ε
P̄(k−1) = P(k−1)   if φ^T(k)φ(k) ≤ ε   (3.37)

P(k) = P̄(k−1) − P̄(k−1)φ(k)φ^T(k)P̄(k−1) / (1 + φ^T(k)P̄(k−1)φ(k))   (3.38)
where ε is the threshold parameter for the deadzone of the covariance matrix update, which
needs to be adjusted. Therefore, when the excitation is poor (i.e. the threshold is not exceeded),
the covariance matrix is not updated, i.e. P(k) = P(k−1), thus preventing the blow-up of the
covariance matrix. In this case the update equations are exactly the same as for the RLS without
forgetting and the effects of windup can be restricted since old data is not discarded [6].
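A sketch of one DFC step following (3.36)-(3.38); the dead-zone threshold and λ are illustrative assumptions, and for a regressor below the threshold the covariance matrix is frozen as described above:

```python
# Sketch of one DFC step (3.36)-(3.38): forgetting is injected only along the
# excited direction, and a dead zone freezes P when excitation is poor.
import numpy as np

def dfc_step(theta, P, phi, y, lam=0.95, eps_thresh=1e-6):
    err = y - phi @ theta
    if phi @ phi > eps_thresh:
        R_phi = np.linalg.solve(P, phi)   # R(k-1) phi without forming R explicitly
        P_bar = P + (1.0 / lam - 1.0) * np.outer(phi, phi) / (phi @ R_phi)  # (3.37)
    else:
        P_bar = P                         # dead zone: no forgetting applied
    denom = 1.0 + phi @ P_bar @ phi
    K = P_bar @ phi / denom               # (3.36)
    theta = theta + K * err
    P = P_bar - np.outer(P_bar @ phi, phi @ P_bar) / denom   # (3.38)
    return theta, P

theta, P = np.zeros(2), 10.0 * np.eye(2)
for _ in range(50):                       # poor excitation: zero regressor
    theta, P = dfc_step(theta, P, np.zeros(2), 0.0)
trace_P = np.trace(P)
print(trace_P)                            # unchanged: windup prevented
```

Fifty unexcited steps leave the trace of P at its initial value, in contrast to the exponential growth of the standard forgetting update.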
θ̂(k) = θ̂(k−1) + K(k)ε(k)

ε(k) = y(k) − φ^T(k)θ̂(k−1)

K(k) = P(k−1)φ(k) / (r + φ^T(k)P(k−1)φ(k))

P(k) = P(k−1) − P(k−1)φ(k)φ^T(k)P(k−1) / (r + φ^T(k)P(k−1)φ(k)) + β φ(k)φ^T(k) / (α + φ^T(k)φ(k))   (3.39)
The estimator is denoted by the authors as the Kalman filter based algorithm (KFBI) since it
has very similar properties to a standard Kalman filter (see (2.7), (2.10), (2.13), (2.14)). Here
r, β and α are adjustable parameters. For instance, β is a parameter that determines the tracking
speed of the algorithm, while α can often be chosen as a very small value to ensure that the
covariance matrix is well-defined. Interpreting the algorithm as a modification of the standard
Kalman filter, r represents the variance of the measurement noise, which can e.g. be
assumed as Gaussian and known, i.e. r(k) = r [8].
In the Kalman filter, the covariance matrix is updated as

P(k) = P(k−1) − P(k−1)φ(k)φ^T(k)P(k−1) / (r(k) + φ^T(k)P(k−1)φ(k)) + Q(k)

where Q(k) is the covariance matrix of the random walk sequence vector w(k). Since in real
applications Q(k) is never known exactly, it is possible to compute Q(k) recursively. Thus, the
so-called Modified Kalman Filter based algorithm (KFBII) can be obtained by simply modifying
(3.39) as [8]
P(k) = P(k−1) − P(k−1)φ(k)φ^T(k)P(k−1) / (r + φ^T(k)P(k−1)φ(k)) + Q(k)   (3.40)

Q(k) = Q(k−1) + β φ(k)φ^T(k) / (α + φ^T(k)φ(k))   (3.41)
It can be observed that the modified version differs from the original version simply in the
choice of Q(k). That is, in the KFBI estimator the variance matrix is chosen directly as
Q(k) = β φ(k)φ^T(k) / (α + φ^T(k)φ(k)), whereas in the KFBII estimator Q(k) is accumulated
recursively from the increments β φ(k)φ^T(k) / (α + φ^T(k)φ(k)).
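The covariance updates of KFBI (3.39) and KFBII (3.40)-(3.41) differ only in how Q(k) enters; the Python sketch below (with arbitrary r, α, β) shows that the first KFBII step starting from Q(0) = 0 coincides with a KFBI step:

```python
# Sketch of the covariance updates of KFBI (3.39) and KFBII (3.40)-(3.41).
# r, alpha, beta are tuning parameters chosen here purely for illustration.
import numpy as np

def q_matrix(phi, alpha, beta):
    return beta * np.outer(phi, phi) / (alpha + phi @ phi)

def kfb1_P(P, phi, r=0.1, alpha=1e-4, beta=0.01):
    gain = np.outer(P @ phi, phi @ P) / (r + phi @ P @ phi)
    return P - gain + q_matrix(phi, alpha, beta)          # (3.39)

def kfb2_P(P, Q, phi, r=0.1, alpha=1e-4, beta=0.01):
    Q = Q + q_matrix(phi, alpha, beta)                    # recursive Q (3.41)
    gain = np.outer(P @ phi, phi @ P) / (r + phi @ P @ phi)
    return P - gain + Q, Q                                # (3.40)

phi = np.array([1.0, 0.5])
P1 = kfb1_P(np.eye(2), phi)
P2, Q = kfb2_P(np.eye(2), np.zeros((2, 2)), phi)
print(np.trace(P1), np.trace(P2))
```

The added term β φφ^T/(α + φ^Tφ) vanishes when φ(k) = 0 and is bounded by β otherwise, which is what keeps the covariance from collapsing to zero while limiting its growth.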
In summary, it can be proven that the properties of both Kalman filter based algorithms as well
as DFC ensure that the covariance matrix is bounded from both below and above [9], [8]. This
represents a desirable property of any estimation algorithm, since the boundedness from below
ensures good tracking abilities (since P(k) does not tend to zero), while the boundedness
from above restricts the effects of estimator windup, indicating that the covariance matrix cannot
increase infinitely. In contrast, DFB only shows upper boundedness of the covariance matrix [6].
Hence, although all directional forgetting based algorithms should be able to restrict the effects
of windup, one should expect that the latter three DF based algorithms provide better tracking
abilities than the first algorithm.
Q(k) = P_d φ(k)φ^T(k) P_d / (r + φ^T(k) P_d φ(k))   (3.42)

where r denotes the error covariance and P_d ∈ R^{n×n} is a matrix to be adjusted. This way, by
adding Q(k) to the update equation, P_d becomes the matrix to which P(k) converges in periods
of poor excitation [1]. This indicates that the covariance matrix stays bounded. As a result, the
algorithm which is henceforth denoted as Stenlund-Gustafsson anti-windup (SG) can be obtained by adding (3.42) to the standard Kalman filter estimator given by (2.7), (2.10), (2.13) [20].
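The defining property of the SG choice (3.42), namely that P_d is a fixed point of the Kalman covariance update, can be verified directly (illustrative Python; P_d, φ and r are arbitrary assumptions):

```python
# Sketch of the Stenlund-Gustafsson choice of Q(k) in (3.42): with this Q,
# the desired matrix Pd is a fixed point of the Kalman covariance update.
import numpy as np

def sg_Q(Pd, phi, r):
    return Pd @ np.outer(phi, phi) @ Pd / (r + phi @ Pd @ phi)   # (3.42)

def kalman_P(P, phi, r, Q):
    return P - np.outer(P @ phi, phi @ P) / (r + phi @ P @ phi) + Q

Pd = 0.5 * np.eye(2)                 # desired covariance (illustrative)
phi = np.array([1.0, 2.0])
P_next = kalman_P(Pd, phi, r=0.1, Q=sg_Q(Pd, phi, 0.1))
print(np.allclose(P_next, Pd))       # True: P stays at Pd
```

Starting the Kalman update exactly at P_d, the added Q(k) cancels the rank-one downdate, so P remains at P_d; away from P_d, the update drives P towards it.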
3.4 Estimation algorithms based on limiting or scaling the covariance matrix trace

The idea of the constant trace approach is to separate the forgetting from the published covariance matrix by working with a
recursively calculated matrix P̄(k) and by calculating P(k) as a function of P̄(k). The Constant
trace algorithm (CT) can be described by the following equations [2]:
θ̂(k) = θ̂(k−1) + K(k)ε(k)

ε(k) = y(k) − φ^T(k)θ̂(k−1)

P̄(k) = (1/λ) ( P̄(k−1) − P̄(k−1)φ(k)φ^T(k)P̄(k−1) / (λ + φ^T(k)P̄(k−1)φ(k)) )   (3.43)

P(k) = c_1 P̄(k) / tr{P̄(k)} + c_2 I   (3.44)
The key principle of the estimator can be explained through (3.43) and (3.44). Assuming that
excitation is poor, φ(k) = 0 leads to the exponential increase of P̄(k) due to P̄(k) = (1/λ) P̄(k−1).
However, by dividing the matrix by its trace, the ratio P̄(k)/tr{P̄(k)} remains constant, no matter
how large P̄(k) becomes. Therefore, the covariance matrix P(k) stays bounded even in periods
of poor excitation. The optional term c_2 I is added as a regularization mechanism [2], [18].
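The normalization (3.44) can be sketched as follows; c_1 and c_2 are arbitrary here, and the point is that the published trace is independent of how large the internal matrix has become:

```python
# Sketch of the constant-trace normalization (3.44): whatever the internal
# matrix P_bar does, the published P(k) keeps a fixed trace.
# c1, c2 are illustrative tuning values.
import numpy as np

def ct_normalize(P_bar, c1=1.0, c2=0.01):
    n = P_bar.shape[0]
    return c1 * P_bar / np.trace(P_bar) + c2 * np.eye(n)   # (3.44)

P_small = ct_normalize(1e-3 * np.eye(2))   # well-behaved internal matrix
P_large = ct_normalize(1e6 * np.eye(2))    # blown-up internal matrix
print(np.trace(P_small), np.trace(P_large))
```

Both published matrices have trace c_1 + n c_2, so the gain computed from P(k) can never blow up even if P̄(k) does.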
λ(k) = λ_0^(1 − tr{P(k)} / tr{P_max})   (3.45)

Thus, the Maximum trace algorithm (MT) is given by substituting the constant forgetting
factor of the standard RLS algorithm by (3.45). Using said expression, λ tends to 1 once the
trace of the matrix P approaches the predefined maximum value tr{P_max}, since then
1 − tr{P(k)}/tr{P_max} = 0. Conversely, when the covariance matrix converges to 0, λ tends to the
specified lower bound λ_0, which ensures the algorithm's adaptability to parameter changes.
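A sketch of the trace-dependent forgetting factor; the exponential form λ(k) = λ_0^(1 − tr{P(k)}/tr{P_max}) used here is an assumption chosen to reproduce the two limit cases described in the text, not a formula confirmed by the source:

```python
# Sketch of a maximum-trace forgetting factor. The exponential form below is
# an assumed reconstruction matching the described limits: lambda -> 1 as
# tr(P) -> tr(Pmax), and lambda -> lambda0 as P -> 0.
import numpy as np

def mt_lambda(P, tr_pmax, lam0=0.9):
    return lam0 ** (1.0 - np.trace(P) / tr_pmax)

P = np.eye(2)
lam_at_max = mt_lambda(P, tr_pmax=2.0)            # trace == tr_pmax -> 1
lam_near_zero = mt_lambda(1e-9 * P, tr_pmax=2.0)  # P ~ 0 -> ~ lambda0
print(lam_at_max, lam_near_zero)
```

As the covariance approaches its allowed maximum, forgetting is switched off (λ = 1), which halts further growth; with a small covariance, fast forgetting (λ ≈ λ_0) keeps the estimator adaptive.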
For the implementation, the regression variables are scaled as

φ̃(k) = T φ(k)   (4.1)

where the transformation matrix T is chosen so that the transformed variables lie within the
range of −1 to 1.
In order to compare and evaluate the estimators, reference values of the vehicle mass and
the driving resistances are required for each test drive. A block named Parameters contains all parameters needed for the adjustment or initialization of
the algorithms, such as values of forgetting factors or initial values for estimates or covariance
matrices. Another block, CAN2input, processes all data obtained from the vehicle bus system.
The various data are subsequently sent to the orange colored estimator blocks as well as another
block called A/B Generator 3P, which contains the model for the longitudinal dynamics. Finally,
an Evaluation block aggregates the simulation results regarding some quality measures and
sends the data to the MATLAB workspace for further processing.
All algorithm blocks have the same structure, which is displayed in fig. 4.2 for the RLS algorithm.
The estimators are implemented as enabled subsystems since they should only compute
when the conditions stated in 2.2.3 are fulfilled. The fulfillment of the conditions is evaluated in
a specific subsystem and the evaluation result is transmitted to each algorithm block using a
Goto block Valid. This way, the estimators are activated only if Valid = 1. Each algorithm is
implemented as a MATLAB function. Upon simulation, the results are sent to the workspace
for further processing. Furthermore, a number of quality measures are used to evaluate the
algorithms' performances.
One quality measure is the recursive sum of squares (RSoS) of the estimation error, which is updated whenever the conditions for an estimator update are fulfilled:

E(k) = θ̂(k) − θ_ref(k)   (4.2)

RSoS(k) = RSoS(k') + E²(k)   (4.3)

with k and k' denoting the current and previous sample time, respectively, and E denoting
the error between estimated value and actual/reference value. Otherwise, when the update conditions
are not fulfilled, k = k' and RSoS(k) = RSoS(k'). The root mean square error is subsequently
computed as the square root of the RSoS value.
Regarding the calculation of the quality measures, a timer is used to count the time instances
during which the validity conditions of 2.2.3 are fulfilled and stops when the requirements
are violated. When the duration of the valid time instances passes a predefined threshold, the
calculation of the quality measures is activated. The reason for setting such a threshold is that
at the beginning of a test drive an estimator is still in its learning phase. Thus, the obtained
estimates may fluctuate drastically in the beginning and may also show large deviations from
the actual values. This would lead to large error values which should not be counted in the
evaluation of the estimators. Therefore, by setting a duration threshold it is ensured that the
quality measures are only calculated once the learning phase has passed. However, the duration
threshold must be set to an appropriate value, since a threshold too large can cause the error
calculation to not be activated at all while a threshold too small can lead to biased results for
the evaluation.
A forgetting factor of λ = 0.9999 is used. The algorithms are evaluated based on the data obtained from
a total of 30 test drives. During some of the test drives the vehicle travels mostly through
inner cities. Thus, due to the acceleration and braking maneuvers in inner city traffic, the input
signals from φ^T = [a_x, g, (1/2) ρ_air v²] can be seen as persistently exciting and consequently, the
RLS estimator performs reasonably well. However, during test drives on highways or freeways,
where the vehicle speed and acceleration remain constant for long periods of time, the insufficient
excitation in the regressor can lead to noticeable estimator windup (see fig. 4.3). For instance,
[Figure: estimates of m [kg], f0 [kg] and f2 [m²] for RLS together with velocity v [km/h] and acceleration ax [m/s²] over t = 0 s–1800 s]
Figure 4.3 Estimator windup during t = 100 s–800 s in the RLS algorithm (test drive 9); velocity v; acceleration ax
test drive no. 9 takes place mostly on a freeway or highway, which can be deduced from the vehicle speeds v > 100 km/h. Due to the poorly exciting signals of vehicle speed v and acceleration ax, which is most noticeable during t = 100 s–800 s, a significant drift in the parameter estimates can be observed. This is especially noticeable in the estimates of f0, whose values reach the bottom thresholds defined by the saturation limits. However, once the signals are properly exciting again (t > 800 s), the estimates converge again to their reference values. This is also reflected in the eigenvalues as well as the trace of the covariance matrix. Fig. 4.4 shows the covariance matrix eigenvalues and the trace in logarithmic scale. The observed peaks in the eigenvalues (such as around t ≈ 800 s) occur when the eigenvalues switch order, i.e. the largest eigenvalue becomes the second largest or smallest. Evidently, the inaccurate parameter estimates correspond to the relatively large eigenvalue/trace values of the covariance matrix during t = 100 s–800 s. Once the input is sufficiently exciting again, the eigenvalues and thus the trace decrease and the estimates become more accurate (see figs. 4.3, 4.4).
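The windup mechanism itself can be reproduced with a minimal scalar RLS sketch: with a forgetting factor λ < 1 and a vanishing regressor, the covariance grows by a factor 1/λ per step, mirroring the trace growth in fig. 4.4. Function names and values are illustrative, not the thesis implementation:

```python
def rls_step(theta, P, phi, y, lam=0.9999):
    """One step of scalar RLS with exponential forgetting."""
    k = P * phi / (lam + phi * P * phi)    # gain
    theta = theta + k * (y - phi * theta)  # parameter update
    P = (P - k * phi * P) / lam            # covariance update
    return theta, P

theta, P = 0.0, 1.0
for _ in range(1000):          # poor excitation: phi = 0, no new information
    theta, P = rls_step(theta, P, phi=0.0, y=0.0)
# P has grown to (1/0.9999)**1000 ≈ 1.105 although nothing was learned
```

With phi = 0 the gain is zero and no data arrive, yet old information is still discounted by 1/λ each step, so the covariance grows without bound: exactly the windup seen during t = 100 s–800 s.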
[Figure: eigenvalues EV(m), EV(f0), EV(f2) and trace of the covariance matrix for RLS (logarithmic scale) over t = 0 s–1800 s]
Figure 4.4 Eigenvalues and trace of the covariance matrix for RLS in logarithmic scale (test drive 9)
same value as for the RLS estimator, while the regularization parameter of TR is chosen as 2·10⁶ and that of LMR as 10⁸.
As already illustrated, the regularization based algorithms TR and LMR aim at making the least squares problem well-conditioned so as to prevent the information matrix R from becoming singular. As a direct consequence, it is to be expected that the covariance matrices of the regularization based algorithms have better condition numbers than regular RLS. This is shown in fig. 4.5 for test drive no. 6, where one can see that the condition numbers of TR and, especially, LMR are significantly lower than those of RLS. This is also reflected in the parameter estimates for test drive no. 6, shown in fig. 4.6. One can observe that during said test drive poor excitation occurs noticeably during t = 100 s–200 s as well as t = 300 s–400 s. During this time, the TR and especially the LMR estimator manage to keep the condition number lower, which is directly reflected in more robust and accurate estimates compared to regular RLS.
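The conditioning effect of such a regularization term can be illustrated numerically: adding δ·I to an information matrix shifts every eigenvalue up by δ and thus caps the condition number. The sketch below builds a near-singular R from an almost constant regressor; the value of δ and the 2-parameter toy regressor are assumptions for illustration only:

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues (low, high) of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    m = 0.5 * (a + c)
    d = math.sqrt(((a - c) * 0.5) ** 2 + b * b)
    return m - d, m + d

# Information matrix R = sum(phi * phi^T) for phi = (1, v) with v nearly constant
R00 = R01 = R11 = 0.0
for i in range(100):
    v = 100.0 + 0.1 * (i % 2)        # almost constant speed: poor excitation
    R00 += 1.0
    R01 += v
    R11 += v * v

lo, hi = eig_sym2(R00, R01, R11)
cond_plain = hi / lo                  # huge: R is nearly singular

delta = 1.0                           # regularization strength (illustrative)
lo_r, hi_r = eig_sym2(R00 + delta, R01, R11 + delta)
cond_reg = hi_r / lo_r                # orders of magnitude smaller
```

Because the smallest eigenvalue of R + δ·I is at least δ, the regularized condition number is bounded by roughly trace(R)/δ, which is the mechanism behind the lower TR and LMR curves in fig. 4.5.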
The implementation of the VF and MF algorithms is described in 3.2.1 and 3.2.2. As already stated, the VF algorithm is basically a modified RLS with a noise-variance based detection mechanism for windup, based on which λ is varied. In the described model, the algorithm has
[Figure: condition numbers of RLS, LMR and TR (logarithmic scale) over t = 0 s–600 s]
Figure 4.5 Condition numbers of TR and LMR in logarithmic scale (test drive 6)
[Figure: estimates of m [kg], f0 [kg] and f2 [m²] for RLS, TR and LMR together with velocity v and acceleration ax over t = 0 s–600 s (fig. 4.6, test drive 6)]
Furthermore, a lower bound for λ has been set at 0.99, below which the forgetting factor may not fall. The key concept of the VF estimator is best illustrated in fig. 4.7 for test drive no. 9, where one can clearly see the variation of λ.
[Figure: forgetting factor λ between 0.9994 and 1.0001 over t = 160 s–180 s]
Figure 4.7 Variation of the forgetting factor in the VF estimator (test drive 9)
Upon further examination of the simulation results it can be concluded that, with the given parameters, the VF estimator behaves very similarly to the RLS estimator. In fact, for almost all test drives the estimation signals of the VF and RLS estimators overlap almost completely. A small difference can be observed when using a more detailed scale of the signals (see fig. 4.8). Regarding the MF estimator, the algorithm adopted from [22] only discusses the case of estimating two parameters. Therefore, in order to estimate three parameters, the algorithm is adapted accordingly to feature an additional forgetting factor.
33
m[kg]
18.6
18.4
18.2
18
17.8
17.6
RLS
VF
2075
RLS
VF
0.76
RLS
VF
0.755
v [km/h]
f2[m 2 ]
2080
f0[kg]
2085
0.75
100
a x [m/s 2 ]
5
0
5
800
850
900
950
1000
1050
1100
1150
1200
time[s]
Figure 4.8 Excerpt of the parameter estimates of VF; velocity v; acceleration ax (test drive 9)
Thus, there are three forgetting factors which need to be adjusted. These have been determined as λm = 0.9999, λf0 = 0.9999 and λf2 = 0.99999, respectively.
The performance of the MF algorithm with the given parameters is somewhat unique, as it is capable of providing accurate estimates for some test drives while for others the estimates are noise sensitive. This is best illustrated in the following figure, which shows the MF estimator's performance for test drives no. 13, 9 and 6 (from left to right), all three of which are cases with a noticeable lack of excitation. One can deduce that for test drive 13 the MF estimator performs better than RLS with regard to the f0 estimation but slightly worse with regard to the m estimation (larger deviations during t = 500 s–1000 s). For test drive 9, MF provides significantly more accurate estimates for all three parameters. However, for test drive 6, the MF estimates of f2 are very noisy, although the estimates for m and f0 are fairly accurate.
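A decoupled sketch of per-parameter forgetting conveys the idea behind MF: each parameter carries its own forgetting factor and its own scalar covariance, so information about slowly varying parameters can be retained longer. This is a simplified illustration, not the exact adaptation implemented in the thesis; the toy model, gains and values are assumptions:

```python
def mf_step(theta, P, phi, y, lams):
    """One step of a decoupled RLS variant with one forgetting factor per parameter."""
    err = y - sum(t * p for t, p in zip(theta, phi))
    theta_new, P_new = [], []
    for t, Pi, pi, lam in zip(theta, P, phi, lams):
        k = Pi * pi / (lam + pi * pi * Pi)     # per-parameter gain
        theta_new.append(t + k * err)
        P_new.append(Pi / (lam + pi * pi * Pi))  # equals (Pi - k*pi*Pi)/lam
    return theta_new, P_new

# toy model y = m*ax + f0*g with m = 2000 kg, f0 = 20 kg (f2 omitted for brevity)
theta, P = [0.0, 0.0], [1000.0, 1000.0]
lams = (0.9999, 0.99999)                       # separate forgetting per parameter
for i in range(2000):
    ax = 1.0 if i % 2 else -1.0                # persistently exciting toy input
    phi = (ax, 9.81)
    y = 2000.0 * phi[0] + 20.0 * phi[1]
    theta, P = mf_step(theta, P, phi, y, lams)
```

Under this recursion each covariance decays monotonically toward its own floor (1 − λᵢ)/φᵢ²; the forgetting factor closer to 1 (here λf0 = 0.99999) yields the lower floor and thus a smaller adaptation gain for that parameter, which is the intended effect of the additional forgetting factor.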
34
m[kg]
500
1000
1500
2000
2500
5
5
time[s]
3000
0.5
1.5
20
40
100
40
50
2000
2200
100
a x [m/s 2 ]
v [km/h]
f2[m 2 ]
f0[kg]
3800
4000
200
400
600
1000
RLS
MF
time[s]
800
1200
1400
1600
1800
5
100
0.5
24
22
20
18
16
2050
2100
2150
100
200
300
time[s]
400
500
600
Figure 4.9 Estimates of MF; velocity v; acceleration ax : left = test drive 13, middle = test drive 9, right =
test drive 6
35
10
RLS
DFB
11
12
log(trace)
13
14
15
16
17
18
0
500
1000
1500
2000
2500
3000
time[s]
Figure 4.10 Trace of the covariance matrix of DFB estimator in logarithmic scale(test drive 13)
Next, the DFC and KFBI algorithms are analyzed. The DFC algorithm has two adjustable parameters, while KFBI has three. For the simulation these are determined as 0.9999 and 1000 for DFC, and as 10¹³, 10¹⁰ and r = 1 for KFBI.
[Figure: estimates of m, f0 and f2 for RLS and DFB together with velocity v and acceleration ax over t = 0 s–3000 s]
Figure 4.11 Estimates of DFB; velocity v; acceleration ax (test drive 13)
[Figure: estimates of m, f0 and f2 for RLS, DFC and KFB1 together with velocity v and acceleration ax over t = 0 s–3000 s]
Figure 4.12 Estimates of DFC and KFBI; velocity v; acceleration ax (test drive 13)
37
10
RLS
DFC
KFB1
11
12
log(trace)
13
14
15
16
17
18
0
500
1000
1500
2000
2500
3000
time[s]
Figure 4.13 Trace of the covariance matrix of DFC and KFBI in logarithmic scale(test drive 13)
38
m[kg]
2200
RLS
KFB2
2100
2000
f0[kg]
40
RLS
KFB2
20
0
v [km/h]
f2[m 2 ]
1
RLS
KFB2
0.5
100
a x [m/s 2 ]
10
0
10
0
1000
2000
3000
4000
5000
6000
time[s]
Figure 4.14 Estimates of KFBII algorithm; velocity v; acceleration ax (test drive 10)
Lastly, in the category of directional forgetting based algorithms, the behavior of the SG anti-windup estimator is discussed. For this algorithm only the convergence matrix Pd needs to be specified, which is chosen as Pd = diag(10¹⁰, 10¹², …). It has been observed that the SG estimator behaves similarly to the above-mentioned directional forgetting based algorithms (with the exception of KFBII). Using the example of test drive no. 13, the SG estimator achieves estimates similar to those of the other directional forgetting based estimators. This is illustrated in fig. 4.15, where the estimates of DFB, DFC, KFB1 and SG are displayed. Evidently, the similarity of the estimates results in an almost complete overlap of the signals, which is why only the estimates of SG can be seen.
[Figure: estimates of m, f0 and f2 for DFB, DFC, KFB1 and SG together with velocity v and acceleration ax over t = 0 s–3000 s]
Figure 4.15 Estimates of DFB, DFC, KFB1 and SG; velocity v; acceleration ax (test drive 13)
Finally, the CT and MT algorithms, which are based on a limitation or scaling of the covariance matrix, are evaluated.
The core concept of the CT algorithm is to keep the trace of the covariance matrix constant at all times. This is illustrated in the plot of the trace of the P-matrices for the RLS and CT algorithms for test drive no. 9 (fig. 4.16), where one can see the varying trace of the RLS covariance matrix and, in contrast, the constant trace of the CT covariance matrix. Aside from P(0) and
40
10
RLS
CT
11
12
log(trace)
13
14
15
16
17
18
0
200
400
600
800
1000
1200
1400
1600
1800
time[s]
Figure 4.16 Covariance matrix trace for CT in comparison to RLS in logarithmic scale (test drive 9)
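The constant-trace idea can be sketched as a plain RLS update followed by a rescaling of P so that its trace stays at a fixed value c. This 2-parameter pure-Python version is an illustration; c, the forgetting factor and the toy signals are assumptions, not the thesis settings:

```python
def ct_step(theta, P, phi, y, lam=0.9999, c=1.0):
    """RLS step (2 parameters) followed by rescaling P to a constant trace c."""
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    k = [Pphi[0] / denom, Pphi[1] / denom]          # gain vector
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
    P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    tr = P[0][0] + P[1][1]
    P = [[c * P[i][j] / tr for j in range(2)] for i in range(2)]  # constant trace
    return theta, P

theta, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for i in range(500):
    phi = (1.0, 0.5) if i <= 100 else (0.0, 0.0)    # excitation vanishes later
    y = 2.0 * phi[0] + 3.0 * phi[1]
    theta, P = ct_step(theta, P, phi, y)
# trace(P) is held at c = 1 even after the excitation has vanished
```

Because the trace is pinned to c, the covariance can neither collapse nor blow up during the unexcited phase, which is why CT avoids the drift of plain exponential forgetting.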
41
m[kg]
2500
RLS
DFB
CT
2000
f0[kg]
40
RLS
DFB
CT
20
0
f2[m 2 ]
2
RLS
DFB
CT
v [km/h]
100
a x [m/s 2 ]
5
0
5
0
200
400
600
800
1000
1200
1400
1600
1800
time[s]
Figure 4.17 Estimates of CT in comparison to DFB and RLS; velocity v; acceleration ax (test drive 9)
Nevertheless, the estimates of the CT algorithm are not always accurate. For test drive no. 6, e.g., the CT algorithm produces rather inaccurate estimates of the vehicle mass, while the estimates of f0 and f2 remain smooth and accurate. Although CT does not exhibit a noticeable parameter drift in the f0 and f2 signals like RLS during the poorly excited periods, the mass estimates are noisy right from the start (see fig. 4.18). Judging from other test drives as well, noisy m estimates combined with accurate f0 and f2 estimates are a defining characteristic of the CT algorithm.
[Figure: estimates of m, f0 and f2 for RLS and CT together with velocity v and acceleration ax over t = 0 s–600 s (fig. 4.18, test drive 6)]
Finally, the MT algorithm is examined, which has two adjustable parameters that are determined as λ0 = 0.999 and Pmax = 10⁸·I3×3. As already described, the defining principle of the MT estimator is to bound the trace of the covariance matrix from above. This way, once the trace of the P-matrix approaches the specified upper bound, the forgetting factor is set to 1 to avoid a further increase of the matrix trace. In other words, the forgetting factor varies with the development of the covariance matrix trace. This is demonstrated in fig. 4.19, where one can see that the trajectory of λ varies according to the changes of the covariance matrix trace.
Generally, it has been observed that the MT algorithm shows similar properties to CT in the sense that for both algorithms the accuracy of the estimates depends on the characteristics of the considered test drive. A noticeable difference, however, is that the estimates of the MT algorithm are generally less noisy than those of CT. For instance, the side-by-side comparison of test drives no. 13 and no. 9 shows that the MT algorithm can provide fairly exact estimates for test drive no. 9 (right), where the effects of windup are restricted such that the estimates are very close to the actual values. On the other hand, the left figure shows that the MT algorithm can also produce inaccurate results, since the estimates of m for test drive 13 are too small while the f2 estimates are consistently too large over the entire simulation horizon (see fig. 4.20).
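A scalar sketch of this trace-bounding rule: forgetting (λ0 < 1) is applied only while the covariance stays below the bound; at the bound λ is set to 1 so the trace cannot grow further. The values of λ0 and p_max are illustrative, not the thesis settings:

```python
def mt_step(p, phi, lam0=0.999, p_max=10.0):
    """Scalar covariance update with an upper bound p_max on the trace."""
    lam = lam0 if p < p_max else 1.0       # freeze forgetting at the bound
    k = p * phi / (lam + phi * p * phi)    # RLS gain (zero for phi = 0)
    p = (p - k * phi * p) / lam            # covariance update
    return p, lam

p = 1.0
for _ in range(20000):                     # poor excitation: phi = 0
    p, lam = mt_step(p, phi=0.0)
# p has stopped growing just above p_max and lam has switched to 1,
# whereas plain exponential forgetting would grow without bound
```

During unexcited phases the covariance grows by 1/λ0 per step only until it reaches the bound; afterwards λ = 1 holds it there, which is exactly the switching behavior visible in fig. 4.19.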
43
16.5
MT
MT
1.0006
1.0004
log(trace)
forgetting factor
17
17.5
1.0002
0.9998
0.9996
18
0.9994
0
200
400
600
800
1000
1200
1400
1600
200
400
600
800
1000
1200
1400
1600
time[s]
time[s]
Figure 4.19 Covariance matrix trace in logarithmic scale (left) and (k) (right) for MT
[Figure: estimates of m, f0 and f2 for RLS and MT together with velocity v and acceleration ax; left panel t up to 3000 s, right panel t up to 1800 s]
Figure 4.20 Estimates of MT; velocity v; acceleration ax: left = test drive 13, right = test drive 9
Algorithm  RMSE    RMSE_m  pRMSE_m  nRMSE_m  Δm_max  RMSE_v  pRMSE_v  nRMSE_v
CT         108,82  134,14  99,71    67,91    263,40  0,75    0,52     0,59
DFB        125,13  100,44  52,18    55,67    133,87  0,82    0,31     0,84
DFC        126,08  100,11  51,46    55,32    132,84  0,84    0,30     0,86
KFBI       125,09  99,87   53,54    54,59    142,14  0,82    0,30     0,84
KFBII      119,98  108,83  72,58    63,89    191,46  0,78    0,32     0,76
LMR        123,10  99,00   49,97    57,44    136,96  0,79    0,35     0,80
MF         119,69  108,96  50,05    63,82    143,47  1,40    1,07     0,87
MT         124,99  109,28  49,99    64,82    143,25  0,81    0,33     0,84
RLS        124,19  101,87  53,59    56,90    142,26  0,80    0,32     0,82
SG         126,03  100,43  51,94    55,60    134,68  0,83    0,30     0,86
TR         126,61  101,97  55,24    56,53    143,30  0,86    0,32     0,89
VF         124,28  101,78  53,49    56,84    141,53  0,80    0,32     0,00
represented in fig. 4.21. The columns in Tab. 4.1 display the different quality measures. For each
[Figure: spider chart over the axes RMSE, RMSE_m, pRMSE_m, nRMSE_m, Δm_max, RMSE_v, pRMSE_v and nRMSE_v for RLS, CT, DFB, DFC, KFB1, KFB2, LMR, MT, TR, VF, SG and MF]
Figure 4.21 Spider chart of achieved quality measures for all estimators
[Figure: mass estimates m [kg] of CT for two parameter sets, c1 = 10⁷, c2 = 10¹¹ and c1 = 10⁸, c2 = 10¹²; left panel t up to 1800 s, right panel t up to 600 s]
Figure 5.1 Comparison of the mass estimates of CT using two different sets of parameters; left = test drive 9, right = test drive 6
mass estimates for test drive no. 6. However, the parameters are now too insensitive given the data of test drive no. 9. As a consequence, the mass is barely tracked and the algorithm constantly overestimates it. This is also reflected in the quality measures: for test drive no. 6 the overall RMSE_m score improves significantly from 103 to 62 when the new parameter settings are used, whereas for test drive no. 9 the results worsen significantly as the error measure increases from 35 to 59. This example illustrates that there is no parameter setting that yields the best results under all possible circumstances.
In a similar fashion, the initial covariance matrix as well as initial regressor can influence an
Bibliography
[1] S. Altmannshofer: Robuste, online-fähige Schätzung von Fahrzeugmasse und Fahrwiderständen. In: AUTOREG; Baden-Baden, 2015.
[2] K. J. Åström and B. Wittenmark: Adaptive Control. Second Edition. Dover, 2008; pp. 1–574.
[3] S. Bittanti and M. Campi: Adaptive RLS Algorithms under Stochastic Excitation: Strong Consistency Analysis. In: Systems & Control Letters, vol. 17, no. 1, 1991; pp. 3–8.
[4] S. Bittanti, P. Bolzern, and M. Campi: Convergence and Exponential Convergence of Identification Algorithms with Directional Forgetting Factor. In: Automatica, vol. 26, no. 5, 1990; pp. 929–932.
[5] S. Bittanti, P. Bolzern, and M. Campi: Recursive Least Squares Identification Algorithms with Incomplete Excitation: Convergence Analysis and Application to Adaptive Control. In: IEEE Transactions on Automatic Control, vol. 35, no. 12, 1990; pp. 1371–1373.
[6] L. Cao and H. Schwartz: A Directional Forgetting Algorithm Based on the Decomposition of the Information Matrix. In: Automatica, vol. 36, no. 11, 2000; pp. 1725–1731.
[7] L. Cao and H. Schwartz: A Novel Recursive Algorithm for Directional Forgetting. In: Proceedings of the 1999 American Control Conference. Vol. 2. 1999; pp. 1334–1338.
[8] L. Cao and H. Schwartz: Analysis of the Kalman Filter Based Estimation Algorithm: An Orthogonal Decomposition Approach. In: Automatica, vol. 40, no. 1, 2004; pp. 5–19.
[9] L. Cao and H. Schwartz: The Kalman Filter Based Recursive Algorithm: Windup and Its Avoidance. In: Proceedings of the American Control Conference; Arlington, 2011; pp. 3606–3611.
[10] G. Golub, P. C. Hansen, and D. O'Leary: Tikhonov Regularization and Total Least Squares. In: SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 1, 1999; pp. 185–194.
[11] S. Gunnarsson: Combining Tracking and Regularization in Recursive Least Squares Identification. In: Linköping University Electronic Press, 1996.
[12] L. Gustafsson and M. Olsson: Robust Online Estimation. Master Thesis. Lund Institute of Technology, 1999; pp. 1–78.
[13] X. Hu and L. Ljung: New Convergence Results for Least Squares Identification Algorithm. In: Proceedings of the 17th IFAC World Congress; Seoul, 2008; pp. 5030–5035.
[14] R. Isermann and M. Münchhof: Identification of Dynamic Systems. Springer Verlag, 2011; pp. 1–705.
[15] R. Johnstone et al.: Exponential Convergence of Recursive Least Squares with Exponential Forgetting Factor. In: 21st IEEE Conference on Decision and Control; Orlando, 1982; pp. 994–997.
[16] P. Krus and S. Gunnarsson: Adaptive Control of a Hydraulic Crane Using On-line Identification. In: Linköping University Electronic Press, 1993.
[17] J. Parkum, N. K. Poulsen, and J. Holst: Recursive Forgetting Algorithms. In: International Journal of Control, vol. 55, no. 1, 1992; pp. 109–128.
[18] M. Salgado, G. Goodwin, and R. Middleton: Modified Least Squares Algorithm Incorporating Exponential Resetting and Forgetting. In: International Journal of Control, vol. 47, no. 2, 1988; pp. 477–491.
[19] T. Seidman and C. Vogel: Well-Posedness and Convergence of Some Regularisation Methods for Non-linear Ill-Posed Problems. In: Inverse Problems, vol. 5, no. 2, 1989; pp. 227–241.
[20] B. Stenlund and F. Gustafsson: Avoiding Windup in Recursive Parameter Estimation. In: Preprints of Reglermöte, 2002; pp. 148–153.
[21] A. Tikhonov and V. Arsenin: Solutions of Ill-Posed Problems. In: Mathematics of Computation, vol. 32, no. 144, 1978; pp. 1320–1322.
[22] A. Vahidi, A. Stefanopoulou, and H. Peng: Recursive Least Squares with Forgetting for Online Estimation of Vehicle Mass and Road Grade: Theory and Experiments. In: International Journal of Vehicle Mechanics and Mobility, vol. 43, no. 1, 2005; pp. 31–55.
[23] T. Van Waterschoot, G. Rombouts, and M. Moonen: Optimally Regularized Recursive Least Squares for Acoustic Echo Cancellation. In: Proceedings of the Second Annual IEEE BENELUX/DSP Valley Signal Processing Symposium (SPS-DARTS 2006); Antwerp, 2005; pp. 28–29.