AIAA Guidance, Navigation and Control Conference, Keystone, CO, 21 - 24 Aug 2006
In this paper, several issues arising in the context of actuator Failure Detection, Identification and Reconfiguration (FDIR) in flight control are addressed. These include algorithms, performance, implementation and metrics. Several FDIR algorithms are reviewed from the point of view of their complexity, ease of implementation and robustness properties. It is shown that an FDIR system based on the actuator failure parameterization achieves excellent performance in the presence of multiple simultaneous flight-critical failures, and results in improved convergence of the failure-related parameter estimates. A new self-diagnostics procedure is described that results in guaranteed convergence of the parameter errors to zero, and improved performance of the overall closed-loop system. Following that, performance metrics for the evaluation of FDIR systems are discussed. As a first step towards a quantitative performance metric, we introduce the concept of failure criticality and present a grouping of actuator failures according to their severity for a given task-controller pair. Failure criticality is based on comparing the achievable dynamic performance after failure with the dynamic performance required to accomplish a given task with a given controller. We present computational methods and illustrate the concept with numerical examples.
I. Introduction
In the last several years a large number of different approaches to Failure Detection, Identification and Reconfiguration (FDIR) in flight control have been proposed. Most of the research has focused on failures in flight control actuators and effectors.1,2,4,6,7,19-21 Instead of facilitating the implementation of fault-tolerant control algorithms, such an abundance of approaches has increased the confusion among potential users. Among the many questions clouding the current state of research are the guaranteed computational and system-theoretic properties of the approaches, the assumptions under which they hold, and the practical problems for which one approach is better suited than another. The main contributing factor is the lack of performance metrics to evaluate the extent of faults and failures that can be accommodated using the proposed approaches, and the flight performance that can be achieved under failure accommodation.
Probably the first question that arises prior to FDIR system design is that of determining the class of failures for which the system should be designed. Different actuator and control effector failures affect the closed-loop system in different ways, and FDIR system design should address the following questions in the initial stage: (i) Which failures can be accommodated without affecting the performance of the resulting closed-loop system? (ii) Which failures can be accommodated so that the accommodation results in graceful performance degradation? (iii) Which failures cannot be accommodated? This issue is closely related to the effect of a failure and its accommodation on the resulting closed-loop control system. For instance, a failure can be such that its accommodation can be achieved without saturating the remaining healthy actuators. In such a case, the FDIR system can (at least theoretically) achieve the same level of performance as in the no-failure case. In other cases, the failure severity can be such that the FDIR system achieves performance that is worse than the original one, but closed-loop stability can still be guaranteed. In case (iii) the severity of the failure is such that the use of the remaining healthy actuators cannot guarantee closed-loop stability, and switching to some type of fail-safe mode is necessary. These cases need to be studied in the context of both single and multiple failure cases.
∗Copyright © 2006 by Scientific Systems Company, Inc. Published by the American Institute of Aeronautics and Astronautics, Inc. with permission.
†This research was supported by NASA Langley Research Center under contract No. NNL06AA26P to Scientific Systems Company.
‡Intelligent & Autonomous Control Systems Group Leader, 500 W. Cummings Park, Suite 3000, AIAA Senior Member, jovan@ssci.com
§Principal Research Scientist, 500 W. Cummings Park, Suite 3000, AIAA Member, prasanth@ssci.com
¶President & CEO, 500 W. Cummings Park, Suite 3000, AIAA Senior Member, rkm@ssci.com
1 of 24
ẋ = Ax + BDu,
where x ∈ Rn is the system state vector, u ∈ Rm is the control input vector, matrices A and B are known, and
D = diag[d1 d2 ... dm] is an unknown actuator effectiveness matrix, where di ∈ [εi, 1] and 0 < εi ≪ 1. The objective
is to design a controller such that the plant state follows the state of the reference model:
ẋm = Am xm + Bm r,
where Am is asymptotically stable, and r is a vector of bounded piece-wise continuous reference inputs (commands).
The direct adaptive fault-tolerant controller is of the form
u = Θx + Kr,
where the adaptive laws for the controller gains Θ and K are:
Θ̇ = −Γθ exT
K̇ = −Γk erT ,
where Γθ = ΓθT > 0, Γk = ΓkT > 0, and e = x − xm denotes the tracking error. It is well known23 that the above adaptive algorithms
result in a stable closed-loop system in which asymptotic convergence of the tracking error to zero is guaranteed. It is
important to note that in the case of direct adaptive control 2nm parameters need to be adjusted even though only m
parameters are unknown.
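For illustration, the direct adaptive laws can be exercised on a small numerical example (the 2-state plant, reference model, adaptation gains, and commands below are assumptions chosen for this sketch, not aircraft data):

```python
import numpy as np

# Plant: xdot = A x + B D u, with D an unknown actuator-effectiveness matrix.
# Reference model: xmdot = Am xm + Bm r.  All matrices are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.eye(2)
D = np.diag([0.5, 0.8])                  # unknown to the controller
Am, Bm = -np.eye(2), np.eye(2)           # asymptotically stable reference model
G_theta, G_k = 5.0 * np.eye(2), 5.0 * np.eye(2)   # adaptation gains

x, xm = np.zeros(2), np.zeros(2)
Theta, K = np.zeros((2, 2)), np.zeros((2, 2))
dt, T = 1e-3, 40.0
for i in range(int(T / dt)):
    t = i * dt
    r = np.array([np.sin(0.5 * t), np.cos(0.3 * t)])   # bounded command
    e = x - xm                                         # tracking error
    u = Theta @ x + K @ r                              # u = Theta x + K r
    x = x + dt * (A @ x + B @ (D @ u))                 # plant (Euler step)
    xm = xm + dt * (Am @ xm + Bm @ r)                  # reference model
    Theta = Theta - dt * (G_theta @ np.outer(e, x))    # Thetadot = -G_theta e x^T
    K = K - dt * (G_k @ np.outer(e, r))                # Kdot = -G_k e r^T

print(np.linalg.norm(x - xm))      # final tracking error (small after adaptation)
```

Note that all entries of Θ and K are adapted even though only the m diagonal entries of D are unknown, which is the point made above about direct adaptive control.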
In FDIR based on indirect adaptive control, the failure-related parameters are estimated first using a suitably de-
signed on-line observer, and the resulting estimates are in turn used in the adaptive reconfigurable control law assuring
the closed-loop stability. It is well known23 that the resulting closed-loop system is stable even when the signals in
the system are not sufficiently persistently exciting (PE), i.e. even when the parameters do not converge to their true
values.
The indirect adaptive controller is implemented in two steps. The first step is the design of an adaptive observer
of the form:
where x̂ denotes an estimate of x, and the corresponding stable adaptive laws for adjusting D̂ (see e.g. Ref. 6) are of the
form:
where Γ = ΓT > 0, and d̂ = diag(D̂). In the above expression the Projection Operator is used (see e.g. Ref. 6) to keep
the estimates of di within [εi, 1]. The use of the projection algorithm also makes the overall closed-loop system robust
to parameter variations and external disturbances, and assures the invertibility of D̂(t) at every instant.
The adaptive reconfigurable control law is of the form:
and can readily be demonstrated to result in a stable system. It is seen that, in the case of FDIR based on indirect
adaptive control, only m parameters need to be adjusted to achieve the control objective. Hence indirect adaptive
control appears better suited for FDIR implementation, since the failure-related parameters are estimated explicitly,
which can be beneficial for failure diagnostics and prognostics.
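A minimal sketch of such a projection step (componentwise clipping against the known bounds [εi, 1]; the smooth projection operator used in Ref. 6 differs in detail, so this is only illustrative):

```python
import numpy as np

def project_update(d_hat, d_dot, eps, dt):
    """One projected Euler step for the effectiveness estimates: freeze any
    component whose update would leave [eps_i, 1], then clip for safety."""
    at_low = (d_hat <= eps) & (d_dot < 0.0)     # at lower bound, pushing down
    at_high = (d_hat >= 1.0) & (d_dot > 0.0)    # at upper bound, pushing up
    d_dot = np.where(at_low | at_high, 0.0, d_dot)
    return np.clip(d_hat + dt * d_dot, eps, 1.0)

eps = np.array([0.05, 0.05])                    # known minimum effectiveness
d_hat = np.array([0.06, 0.9])
d_hat = project_update(d_hat, np.array([-5.0, 5.0]), eps, dt=0.01)
print(d_hat)     # both estimates stay inside [eps_i, 1]
# diag(d_hat) remains invertible at every step, since d_hat_i >= eps_i > 0
```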
In our previous work we developed failure models (i.e. the models that describe the effect of failures on the plant
and/or actuator dynamics) for a large class of failures in terms of two parameters. The class of failures includes: lock-
in-place, loss of effectiveness, hard-over, float, and also models failure recoveries. These models were used to design
the corresponding FDIR system for the case of actuator dynamics of zeroth-order, 14 first-order,6 and second-order with
measurable5 and non-measurable4 actuator rates. These algorithms were further simplified by the recent development
of a new failure parameterization3 that models the above class of failures using a single parameter θ. The FDIR system
based on the new parameterization was shown to result in improved performance and better parameter convergence
properties.
Based on the criteria for the FDIR system design that include computation, ease of implementation, and robustness,
it appears that the FDIR based on indirect adaptive control and the new failure parameterization is superior to other
algorithms for accommodation of failures in flight control actuators.
[Figure: block diagram linking the baseline controller, actuators, and airframe, with per-actuator observers and a decentralized FDI block feeding the controller]
Figure 1. Structure of the Fast on-Line Actuator Reconfiguration Enhancement (FLARE) System (©2006 Scientific Systems Company, Inc.)
The structure of the FLARE system is shown in Figure 1, and is seen to include the following subsystems: (i) Decentralized FDIR system that consists of local FDI
observers that are run at each of the actuators, (ii) Global FDI algorithms whose role is to detect and identify control
effector damage, and (iii) Disturbance Rejection Subsystem that attenuates the effect of the external disturbances. The
FDI information is passed on to the Supervisory Block that activates the Adaptive Reconfigurable Controller as needed
to effectively compensate for failures or control effector damage. The system is designed in a retrofit fashion, i.e. the
baseline controller is maintained and the FDIR system is active only in the case of failure.
A. Decentralized Estimation
The FLARE system is well suited for first- and second-order actuator dynamics. For a second-order actuator model,
several types of failures are modeled using only two parameters. One is the actuator mobility coefficient σ, which
indicates whether or not the actuator is locked at a specific value. If σ = 1, the actuator is operational, while during
lock-in-place and hard-over failures, σ = 0. The second parameter is the actuator effectiveness coefficient k, which
models loss-of-effectiveness failures, with k ∈ [ε, 1], where ε ≪ 1. The actuator failure model is written as:
u̇1 = σu2
u̇2 = −2ζωu2 + σω2 (kuc − u1 )
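The role of the two parameters can be seen by integrating this model directly; ζ, ω, and the command below are assumed values chosen for the sketch:

```python
# Two-parameter second-order actuator failure model:
#   u1dot = sigma * u2
#   u2dot = -2*zeta*omega*u2 + sigma*omega**2*(k*uc - u1)
zeta, omega, dt = 0.7, 20.0, 1e-3   # assumed actuator damping and bandwidth

def simulate(sigma, k, t_end=2.0, uc=5.0):
    u1, u2 = 0.0, 0.0               # position and rate
    for _ in range(int(t_end / dt)):
        u1 += dt * sigma * u2
        u2 += dt * (-2.0 * zeta * omega * u2
                    + sigma * omega**2 * (k * uc - u1))
    return u1

print(simulate(sigma=1.0, k=1.0))   # healthy: settles near uc = 5
print(simulate(sigma=1.0, k=0.5))   # LOE: settles near k*uc = 2.5
print(simulate(sigma=0.0, k=1.0))   # LIP: position frozen at its value at failure
```

With σ = 0 the position state u1 is frozen, reproducing the lock-in-place behavior, while k < 1 rescales the command that the healthy actuator tracks.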
[Figure: actuators A1 ... AN, each with its own Local FDI System, feeding the adaptive reconfigurable controller]
Figure 2. The Decentralized FDIR Scheme (©2006 Scientific Systems Company, Inc.)
B. Retrofit Reconfiguration for the Lock-In-Place (LIP) and Hard-Over (HOF) Failures
The retrofit algorithm for the LIP and HOF failures redistributes the control effort assigned to the failed actuator using
the algorithm described below.
In the case of fast actuator dynamics, the effect of LIP, LOE, float and hard-over failures can be expressed in terms
of the overall aircraft model as:
ẋ = Ax + BKΣuc + BK(I − Σ)ū (1)
where ū is a vector containing the current positions of any failed actuators.
The control signal uc (t) that is sent to the actuators is defined as a sum of the retrofit signal v(t), and the signal
ucN (t), which is the output of the baseline controller, as:
uc (t) = ucN (t) + v(t). (2)
The objective is to design the signal v(t) immediately following a failure so that its effect, combined with that of
the baseline controller, is close to that of ucN (t) in the no-failure case. Hence, in the case of known failures, the retrofit
reconfigurable control objective is to find the signal v∗ (t) such that:
BucN = BΣ(ucN + v∗ ) + B(I − Σ)ū. (3)
From the above expression we obtain that a v∗ (t) that achieves the objective is of the form:
v∗ = ΣT BT (BΣΣT BT )−1 B(I − Σ)(ucN − ū) (4)
Since the failures are unknown, the actual algorithm is implemented by replacing the failure-related parameters with
their estimates. It is shown7 that such a strategy results in a stable closed-loop system and asymptotic convergence of the
tracking error to zero. This algorithm may also be implemented by solving a corresponding constrained optimization
problem to take into account the position and rate limits on the control effectors.
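The reallocation can be checked numerically on a small example (B, Σ, and the signals are illustrative values; the retrofit signal is computed as the pseudoinverse, i.e. minimum-norm, solution of the objective equation):

```python
import numpy as np

# Illustrative control-effectiveness matrix and a lock-in-place failure of
# the third actuator (all numbers made up for the sketch).
B = np.array([[1.0, 0.8, 0.6],
              [0.2, -0.5, 0.9]])
Sigma = np.diag([1.0, 1.0, 0.0])     # actuator 3 failed (sigma_3 = 0)
ucN = np.array([0.3, -0.2, 0.5])     # baseline controller output
ubar = np.array([0.0, 0.0, 0.8])     # position at which actuator 3 locked
I = np.eye(3)

# v* = Sigma^T B^T (B Sigma Sigma^T B^T)^(-1) B (I - Sigma)(ucN - ubar)
M = B @ Sigma @ Sigma.T @ B.T
v_star = Sigma.T @ B.T @ np.linalg.inv(M) @ B @ (I - Sigma) @ (ucN - ubar)

lhs = B @ ucN                                          # no-failure effect
rhs = B @ Sigma @ (ucN + v_star) + B @ (I - Sigma) @ ubar
print(np.allclose(lhs, rhs))                           # True: objective met
```

Note that v_star has a zero entry for the failed actuator, since commands to a locked surface have no effect.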
Cooper-Harper (CH) pilot ratings. Tasks: Dbl = doublet task, Trk = tracking task, 4G = 4G turn, Roll = 360° roll.
Rating scale: 2 to 3 = excellent response; 3.5 to 4.5 = good response, low pilot load; 5 to 6 = poor response, high pilot load; 7 and up = unacceptable response.

CH ratings in the case without failures (baseline controller):
                      Dbl   Trk   4G    Roll
Flight Condition A     2     3     2     2
Flight Condition B     3     4     3.5   3

CH ratings with failures, Flight Condition A (Mach 0.7, 20,000 ft):
Baseline Controller:  4, 6, 6, 5.5, 7, 7+, 5, 7, 6, 5, 8, 7-8, 2.5, 3.5, 4
FLARE System:         2.5, 4.5, 4, 4, 3.5, 4, 3.5, 3.5, 3-4, 3, 4, 4, 2, 2, 2.25
As mentioned earlier, we recently developed a new failure parameterization and the corresponding FDIR system
referred to as the θ-FLARE.3 Due to a relatively small number of parameters that need to be adjusted on-line, the
resulting system is highly robust, and most of the parameters converge close to their true values even in the case when
the commands are not persistently exciting. As a result, the θ-FLARE achieves excellent performance in the presence of
severe multiple flight-critical failures and failure recoveries, and hence appears superior to other available approaches.
The features of θ-FLARE are illustrated through a simulation example below.
Simulations using θ-FLARE: As the simulation example, the linearized dynamics of the F/A-18 aircraft during a 30°
lateral doublet are chosen. The simulation uses linear A and B matrices for a straight flight regime, and includes actuator
dynamics and position and rate limits on the control effectors.
The states of the model are: Total velocity V, pitch rate q, pitch angle θ, angle-of-attack α, altitude h, side-slip angle
β, roll rate p, yaw rate r, roll angle φ, and yaw angle ψ. The control surfaces include: Left and right Leading-Edge Flaps
(LEF); Left and right Trailing-Edge Flaps (TEF); Left and right Ailerons (AIL); Left and right Stabilators (STAB); and
Left and right Rudders (RUD). Control inputs also include left and right engine (PLA).
The following failure scenario is chosen:
• All right-wing surfaces lock at t = 4 seconds. The resulting actuator mobility matrix is Σ = diag[0 0 0 0 0 1],
[Figure panels (Longitudinal States, actual vs. model): V [ft/s], q [deg/s], θ [deg], α [deg], h [ft] vs. time, 0-40 sec]
Figure 4. Longitudinal state response with θ-FLARE in case of multiple failures & failure recoveries
The parameter convergence obtained in the case of θ-FLARE can be further improved using the Integrated Self-
Diagnostics (ISD) procedure described next.
[Figure panels: β [deg], p [deg/s], r [deg/s], φ [deg], ψ [deg] vs. time, 0-40 sec]
Figure 5. Lateral state response with θ-FLARE in case of multiple failures & failure recoveries
The Integrated Self-Diagnostics (ISD) procedure provides accurate estimates of failure parameters.3 The ISD procedure consists of injecting a self-diagnostics signal into one or
more actuators for which a failure is indicated, where the failure information is provided either by a Health Monitoring
System (HMS) or by a suitably designed internal mechanism. The procedure also includes applying a compensation
signal through the remaining actuators to minimize the effect of self-diagnostics (SD) on the system state. The ISD
procedure is applied to the FDIR system based on the new failure parameterization,3 which results in a minimum
number of adjustable parameters for accommodating a large class of failures. Due to the small number of failure parameters, the frequency content of the SD signal is relatively low, and the system can rapidly arrive at accurate failure
parameter estimates.3 The ISD system is shown in Figure 8. The key benefit provided by the ISD is that, after the
failure parameters have been accurately identified, the system becomes completely known, and the convergence of the
tracking error in the presence of initial conditions and subsequent disturbances is exponential. This also simplifies the
V&V of the system.
Another benefit of having accurate estimates of failure-related parameters available is that the pilot can be informed
about the health status in a timely manner, and the failure information can be used effectively in the loss-of-control prevention and recovery system. Hence strategies based on accurate estimation of unknown parameters in a reconfigurable
control system are of great importance in the area of flight safety.
[Figure panels: LEF, TEF, AIL, STAB, RUD, PLA [deg] vs. time, 0-40 sec]
Figure 6. Actuator response with θ-FLARE in case of multiple failures & failure recoveries
In some cases, the remaining healthy actuators cannot both compensate for a critical failure and achieve the desired
performance. Hence techniques that result in graceful performance degradation in the presence of critical failures are
of great interest.
[Figure: estimated failure parameters k̂ and θ̂ for LEF, TEF, AIL, STAB, and RUD vs. time, 0-40 sec]
failure cases. To decrease the number of simulation cases, the Failure Criticality concept can be used to test only the
most critical failures.
In situations where acceptable performance can be achieved in the case of known failures, one of the questions that
arises is whether an adaptive FDIR system, in which the parameters are adjusted on-line, can actually achieve
comparable performance. This issue is closely related to the following: (i) The number of parameters that are adjusted
on-line. Adjustment of a large number of parameters may result in large transients and saturation of the control surfaces,
yielding poor closed-loop performance due to the lack of persistent excitation of the flight control commands. This
makes the resulting closed-loop system non-robust. (ii) Adaptive laws for parameter adjustment. The key requirement
here is that the adaptive laws assure closed-loop stability. This is a difficult problem, particularly in the presence of
actuator dynamics and position and rate limits on the flight control effectors. In the case of flight-critical failures, the
adaptive laws for parameter update also need to be fast and robust, since a delay in calculating the failure parameter
estimates or controller parameters can have a detrimental effect on closed-loop stability and performance; (iii) Adaptive reconfigurable control law. Robustness of the baseline controller is of great importance for control reconfiguration.
The baseline controller can be thought of either as the algorithm used as a basis for reconfigurable control design (e.g.
LQR, inverse dynamics), or as a baseline legacy controller to which the adaptive reconfigurable part is added in the
form of a retrofit signal.
[Figure: commands → controller → actuators → airframe, with an FDI block in the feedback path]
Figure 8. Structure of the Integrated Self-Diagnostics (ISD) System (©2006 Scientific Systems Company, Inc.)
Computational methods and numerical examples are given in Appendices A-B. We use an LQ-tracker and a model
reference tracker to show how failure criticality can be used to compare controllers as well as to compare different
failures. The comparison results should not be construed as justification for preferring one controller design method
over another. They merely serve to illustrate how different controllers can be compared in regard to their ability to
accommodate failures.
ẋ = Ax + Bu + w, x(0) ∈ X0 (5)
where x(t) ∈ Rn is the state at time t, u(t) ∈ Rm is the control input at time t and w is an exogenous input belonging to a
specified set W and X0 is a given set of initial conditions. The control inputs are subject to constraints:
u(t) ∈ U (6)
where U is a convex subset of Rm with non-empty interior and containing 0. We will use the F-18 model given in
Appendix A for numerical work. When the exogenous input set W is white noise of known mean w̄ and known
covariance W ≥ 0, the plant is a stochastic system. In this case, we may take the initial condition set X0 as an ellipsoid
that specifies the 1σ region for a Gaussian initial state. We may also take W as the set of all L2 signals whose L2
norm is strictly less than 1. All analytical and numerical results are the same in these cases; only the interpretation of
the results is different. Throughout, in the stochastic case, we use 1σ ellipsoids in the definitions. Other classes of inputs
and initial conditions that can be handled are the set of bounded exogenous inputs and polyhedral initial conditions.
For the LTI system, we consider tasks in which the control objectives are to attain and maintain some desired flight
condition in a stable manner. An example is given below. When stating a control task verbally, we assume that the
initial condition of the nonlinear aircraft is equal to some trim value (and, hence, the LTI system initial state is 0)
and that the exogenous input is zero. Later on, we shall define the control task more precisely, taking into account the
effects of initial conditions and exogenous inputs.
Example VI.1 (Glide slope tracking) The aircraft is in steady level flight at airspeed V0 and altitude h0. The task
is to acquire a 3° glide slope and maintain steady descent at airspeed V0. The task is said to be completed when the
aircraft trajectory reaches the desired condition: all the state derivatives except ḣ are zero, all the lateral states are
zero, V = V0, and θ − α = −3°.
In the example, task completion is given in terms of some linear equations on states and state derivatives. The
conditions on state derivatives can be expressed as conditions on states and controls using the state space equations.
We will refer to the set of states obtained from these task completion conditions as the ideal desired final state set.
When exogenous inputs are non-zero or when initial state is non-zero, it may not be possible to achieve any ideal
desired final state. In this case, we specify a desired final state set X f that contains the ideal desired final states. The
task is said to be completed if, for any initial condition in X0 and for any exogenous input in W, the corresponding
aircraft trajectory enters X f after a finite time and stays inside thereafter. The size of X f will depend on the sizes of X0
and W. In addition, when the system is stochastic, we interpret task accomplishment in a probabilistic sense.
It is easily checked that this system for the F-18 model has at least one solution and that all solutions can be written
as:
[α θ u]T = N+ [0 −π/60]T + M v,
where N+ is the Moore-Penrose inverse of N, M is such that its columns form a basis for the null space of N, and v is
a free vector. For the F-18 model,
N+ [0 −π/60]T = 10−4 [31.52 −492 35.6 35.6 −15.9 −15.9 −2.3 2.3 −1302 0 0]T
and
M = 10−3 [ 33    33    881   461  −58  −58  −7.6   7.6   26    0    0
           10.6  10.6  −466  884  −17  −17  −2.5   2.5   8.1   0    0
          −9.6  −9.6  −10.9 −4.4  −40  −40   498  −498   2.2   500  500
           9.6   9.6   10.9  4.4   40   40  −498   498  −2.2   500  500 ]T
Hence, the ideal desired final state set for glide slope tracking (see Appendix A) is given by:
Suppose that the wind disturbances w are zero-mean white noise inputs with covariance given in Appendix A. In this
case, it is difficult, if not impossible, to maintain a state in Xideal and we need to enlarge Xideal to a desired final state
set X f .
Definition VI.1 (Control task) Let X0 be a given non-empty open set of initial conditions, Xideal be a given non-empty
set of ideal desired final states, W be a given set of disturbance inputs containing zero disturbance, and X f be an open
set of desired final states containing Xideal . The control task is to
1. take the system state x(t), starting from any initial condition in X0 with disturbance input w = 0, to some ideal
desired final state in Xideal asymptotically, i.e.
and
2. take the system state x(t), starting from any initial condition in X0 and for any disturbance input in W, to some
state inside the desired final state set X f in finite time:
for some finite time T ≥ 0 (with high probability if the disturbance input is stochastic)
We next characterize control tasks in a way that facilitates actuator failure analysis. Fix a state feedback control
law C that accomplishes the task. For each initial state x0 ∈ X0 and disturbance input w ∈ W, there is a set of values
of control inputs that C generates over time during closed loop system operation. Let it be denoted by S u (C; x0 , w).
Definition VI.2 (Required dynamic performance set) Let R be a control task and C be a controller that accom-
plishes the task. The required dynamic performance (RDP) set needed to accomplish R using C is:
∪x0 ∈X0 , w∈W Su (C; x0 , w)
where the union is taken over all initial conditions in X0 and exogenous inputs in W.
It is the set of all quantities of the form Bu that are generated during closed loop system operation for different
initial conditions and exogenous inputs. The computational tools for calculating RDP set are given in Appendix B. The
computations involve well-known convex programming methods for calculating and approximating reach sets. The
RDP set is a subset of:
BU = {Bu : u ∈ U} (9)
if controller C accomplishes the task. We will refer to BU as the nominal achievable dynamic performance (ADP) set.
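For intuition, when the constraint set U is a symmetric box (an assumption made only for this sketch; the text requires only that U be convex with 0 in its interior), BU is a zonotope whose two-dimensional area has a closed form, which makes the effect of failures on the ADP set easy to quantify:

```python
import numpy as np
from itertools import combinations

# For |u_i| <= umax_i, the ADP set BU = {B u : u in U} is a zonotope with
# generators g_i = umax_i * B[:, i]; its area in 2-D is
#   4 * sum over pairs i < j of |det[g_i g_j]|.
# B and the limits below are illustrative, not the F-18 data.
B = np.array([[1.0, 0.8, 0.6],
              [0.2, -0.5, 0.9]])
umax = np.array([0.5, 0.4, 0.3])

def adp_area(B, umax):
    G = B * umax                     # scale columns by the position limits
    return 4.0 * sum(abs(np.linalg.det(G[:, [i, j]]))
                     for i, j in combinations(range(B.shape[1]), 2))

print(adp_area(B, umax))             # nominal ADP area
K = np.diag([1.0, 0.3, 1.0])         # loss of effectiveness on actuator 2
print(adp_area(B @ K, umax))         # ADP area shrinks under the failure
```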
Example VI.3 (ADP and RDP sets for glide slope tracking with an LQ-tracker) Let
X0 = {0}
and W be zero-mean white noise. Consider the LQ-tracker design given in Appendix A. For this example,
ūf = 10−2 [1.12 1.12 8.15 8.15 0.58 −0.58 −5.8 0 0]T
We will also consider an LQR gain with R = 1000I instead of R = 100I as in Appendix A. Large values of R result in
less aggressive controllers.
[Figure: two panels, α̇ (deg/s) vs. V̇ (ft/s²)]
Figure 9. Union of projections of 2σ required dynamic performance set onto the (V̇, α̇) plane for the glide slope tracking task for two
different LQ trackers. The interior of the polytope shown in blue is the nominal achievable dynamic performance set BU.
Figure 9 shows the 2σ required dynamic performance set for T = 3. The RDP sets for both controllers are located
within the nominal ADP set (shown in blue). So, with high probability, both controllers accomplish the glide slope
tracking task.
1. Failure Parametrization
Flight control actuator failures can be broadly divided into two categories: (i) failures that result in a total loss of
effectiveness of the control effector; and (ii) failures that cause partial loss of effectiveness. The former includes
lock-in-place (LIP), float, and hard-over failures (HOF), while the latter describes general loss-of-effectiveness (LOE) types
of failure. In the case of LIP failures, the actuator "freezes" at a certain position and does not respond to subsequent
commands. HOF is characterized by the actuator moving to and remaining at its upper or lower position limit regardless
of the command. Float failure occurs when the actuator contributes zero moment to the control authority. LOE is
characterized by a lowering of the actuator gain with respect to its nominal value. These actuator failures can be expressed
mathematically as:
where tFi denotes the time instant of failure of the ith actuator, ki denotes its effectiveness coefficient such that ki ∈
[εki , 1], and εki > 0 denotes its minimum effectiveness.
To parametrize these failures, define the following sets:
where ui min and ui max are specified lower and upper bounds respectively. The control input in the presence of failures
is given by:
u = KΣuc + (I − Σ) ū (11)
where K ∈ K is the LoE gain, Σ ∈ S specifies failed actuators and ū ∈ U specifies the actuator position at (and after)
failure. The case of no failure corresponds to K = I, Σ = I and ū = 0.
For computational purposes, the lower and upper bounds on the control inputs will be assumed to satisfy the
following inequality:
The set A (K, Σ, ū) is defined as the set of all η for which the equation
B (KΣuc + (I − Σ) ū) = η
has a solution uc ∈ U. It is easy to see that A (K, Σ, ū) is not empty for any K ∈ K, Σ ∈ S and ū ∈ U. In particular,
A (I, I, 0) = BU
corresponds to the no failure case. Further, since A (K, Σ, ū) is the image of the polytope U under a linear map, it is a
polytope for any K ∈ K, Σ ∈ S and ū ∈ U.
A property of A (K, Σ, ū) is given next. This property will motivate the use of A (K, Σ, ū) as a means to compare the
criticality of failures.
Proposition VI.1 (LoE failures) Let K1 ∈ K, K2 ∈ K, Σ ∈ S and ū ∈ U be given. Suppose that the loss-of-
effectiveness (LoE) gains K1 and K2 are such that:
K1 ≤ K2
Proof: Pick any η ∈ A (K1 , Σ, ū). By definition, there exists uc ∈ U such that
Now, define ûc as follows: the ith component of ûc is given by:
ûci = uci if K2i = 0, and ûci = (K1i /K2i ) uci otherwise,
where K1i and K2i are the ith diagonal entries of K1 and K2 respectively. Note that K1 ≤ K2 implies that if K2i = 0 then
K1i = 0. Therefore, it is easy to verify that
K2 ûc = K1 uc
and, using the diagonal structure of Σ, K1 and K2 , that
K2 Σûc = K1 Σuc
Therefore,
B (K2 Σûc + (I − Σ) ū) = B (K1 Σuc + (I − Σ) ū) = η
Finally, since 0 is in the interior of U and the ratio K1i /K2i ≤ 1 whenever K2i > 0, we see that ûc ∈ U. So,
η ∈ A (K2 , Σ, ū).
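The construction used in the proof is easy to verify numerically (B, the LoE gains, Σ, ū and the command are illustrative values, with the box constraint |uci | ≤ 1 standing in for U):

```python
import numpy as np

# Given eta achieved under the more severe gain K1, the rescaled command
# uhat_c achieves the same eta under the milder gain K2 >= K1.
B = np.array([[1.0, 0.8, 0.6],
              [0.2, -0.5, 0.9]])
K1 = np.diag([0.3, 0.0, 0.5])        # more severe LoE (K1 <= K2)
K2 = np.diag([0.6, 0.0, 1.0])
Sigma = np.diag([1.0, 1.0, 0.0])     # actuator 3 locked
ubar = np.array([0.0, 0.0, 0.4])
uc = np.array([0.7, -0.6, 0.2])      # admissible command, |uc_i| <= 1
I = np.eye(3)

eta = B @ (K1 @ Sigma @ uc + (I - Sigma) @ ubar)

# uhat_ci = uc_i * K1_i / K2_i where K2_i > 0, and uc_i where K2_i = 0
k1, k2 = np.diag(K1), np.diag(K2)
ratio = np.divide(k1, k2, out=np.zeros(3), where=k2 > 0)
uhat = np.where(k2 > 0, uc * ratio, uc)

eta2 = B @ (K2 @ Sigma @ uhat + (I - Sigma) @ ubar)
print(np.allclose(eta, eta2), np.all(np.abs(uhat) <= 1.0))   # True True
```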
Figure 10 shows Proposition VI.1 graphically. So, at least in the case of LoE failures, it is natural to associate criticality
with the size of A (K1 , Σ, ū) in relation to the size of A (K2 , Σ, ū). However, if the control task can be completed with
an η belonging to (or with a subset of) A (K1 , Σ, ū), then the failures (K1 , Σ, ū) and (K2 , Σ, ū) should have the same level
of criticality relative to the control task.
To motivate further, let (Ki , Σi , ūi ), i = 1, · · · , 5 define different failures. Suppose that a non-empty set E:
E ⊂ A (I, I, 0) = BU,
[Figure: nested achievable sets labeled "No Failure" and "LoE 2"]
Figure 10. The set A gets smaller with the severity of LoE failures.
that specifies all the vectors η of the form Bu that may be needed to complete a given control task, is given. Figure 11
shows the set A(Ki , Σi , ūi ) for each failure and the control task set E. Failure 1 is such that
E ⊂ A (K1 , Σ1 , ū1 )
which implies that the control task can be accomplished despite the failure. Therefore, failure 1 is not as critical as
failure 2 for the control task because, under failure 2
E ⊄ A (K2 , Σ2 , ū2 ) ,
Figure 11. Different failure scenarios and their relationship with the "control task set" - Rectangular region is the achievable set
A (I, I, 0) = BU with no failure, circular region is the control task set E, and hexagonal region is the achievable set A (Ki , Σi , ūi )
after failure.
The area of A (K2 , Σ2 , ū2 ) corresponding to failure 2 is smaller than the area of A (K3 , Σ3 , ū3 ) corresponding to
failure 3. But one would say that failure 3 is more critical than failure 2 for the given control task because
A (K3 , Σ3 , ū3 ) ∩ E
is empty, implying that there is no chance of performing the control task under failure 3, whereas
A (K2 , Σ2 , ū2 ) ∩ E
is not empty, implying that there is still some potential for performing the control task under failure 2. On further
examination of the figures, one would say that failure 3 and failure 4 have the same level of criticality, even though
Definition VI.3 (Relatively critical) For i = 1, 2, let (Ki , Σi , ūi ) be given failures.
1. Let E be a k-dimensional convex subset of BU representing the required dynamic performance set. (K1 , Σ1 , ū1 )
is said to be a more critical failure than (K2 , Σ2 , ū2 ) relative to E if
2. (K1 , Σ1 , ū1 ) is said to be a more critical failure than (K2 , Σ2 , ū2 ) if (K1 , Σ1 , ū1 ) is a more critical failure than
(K2 , Σ2 , ū2 ) relative to any convex subset of BU.
The definition gives a computational procedure to compare criticality of two failures. The numerical steps are as
follows:
2. Calculate the required dynamic performance set E using the method given in Appendix B.
3. Calculate the achievable dynamic performance sets A (Ki , Σi , ūi ) and the volumes vi of their intersection with the
required dynamic performance set.
4. If v1 is less than v2 , declare that failure 1 is more critical for the task-controller pair than failure 2.
The major computational steps are the calculation of E and the calculation of volumes. The set E is almost never
convex. But, it can be approximated by convex sets or, better yet, by the union of a finite number of convex sets. The
calculation of volume of sets can be very difficult even for convex sets.15 We may therefore consider using the volume
of the smallest volume ellipsoid containing the set instead of the volume of the set itself. Another approach is to project
the sets onto two-dimensional subspaces by considering two state variables at a time and then to use the area of the
projected region in the definition. This is the approach used for the numerical examples in this paper.
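The projection-and-area approach can be sketched as follows, assuming the sets are represented by sample points. The samples below are hypothetical; the hull and area routines are standard (Andrew's monotone chain and the shoelace formula):

```python
def hull_area_2d(points):
    """Area of the convex hull of 2-D points (Andrew's monotone chain
    hull followed by the shoelace formula)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    area2 = sum(x1 * y2 - x2 * y1
                for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]))
    return abs(area2) / 2.0

def projected_area(samples, i, j):
    """Project sample points of a high-dimensional set onto coordinates
    (i, j) and return the area of the projection's convex hull."""
    return hull_area_2d([(p[i], p[j]) for p in samples])

# Hypothetical 3-D samples of an achievable set, projected onto two coordinates:
samples = [(-1, -1, 5), (1, -1, -5), (1, 1, 5), (-1, 1, 0), (0, 0, 2)]
area = projected_area(samples, 0, 1)   # convex hull is the square [-1, 1]^2
```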
Example VI.4 (LOE/LIP failures with model reference controller) In this example we use the model reference controller for glide
slope tracking given in Appendix A. Figures 12 and 13 show the RDP set E and the ADP sets for various
loss of effectiveness (LOE) and lock-in-place (LIP) failures. To compute the sets, we used X0 and the wind covariance W
given in Appendix A. The regions enclosed by the black lines are the 2σ RDP sets.
Since the 2σ RDP set is within the ADP set for LOE failures except for rudder failure (Figure 12), we can say with
high likelihood that these failures are not critical for the glide slope tracking task. In comparison, LIP failures shown
in Figure 13 are more critical.
The definition of relatively critical failure induces a pre-order ≺fc on actuator failures for a task-controller pair.
That is,

f1 ≺fc f2

if f2 = (K2, Σ2, ū2) is relatively more critical than f1 = (K1, Σ1, ū1). A pre-order is a binary relation that is reflexive
(f ≺fc f) and transitive (f1 ≺fc f2 and f2 ≺fc f3 imply f1 ≺fc f3). We use this pre-ordering to group failures for a
task-controller pair into different categories ranging from least critical to most critical.
[Two panels: α̇ (deg/s) and β̇ (deg/s) versus V̇ (ft/s²).]
Figure 12. Comparison of ADP sets for various loss of effectiveness (LOE) failures with the 2σ RDP set (shown in black). The polytope in blue
is the nominal ADP set BU.
5. Category 4, provided that there is a strictly positive distance between E and the ADP set A(K, Σ, ū).
These categories allow us not only to compare different actuator failures for the same task-controller pair but also to
compare controllers for the same task-failure pair. If a controller C1 for a task has a certain failure in category 1 and
controller C2 for the same task has the same failure in category 2, then C1 is a better controller than C2. Our idea is to
compare controllers in this way for failures that are highly likely to occur.
Example VI.5 (Comparison of LQ-tracker and model reference controller) We compare the LQ-tracker and
the model reference tracker for the F-18 landing task given in Appendix A, considering the following single loss of
effectiveness (LOE) failures:
[Two panels: α̇ (deg/s) and β̇ (deg/s) versus V̇ (ft/s²).]
Figure 13. Comparison of ADP sets for various lock-in-place (LIP) failures with the 2σ RDP set (shown in black). The polytope in blue is the
nominal ADP set BU.
Table 1. Failure categories of actuator failures for glide slope tracking task using LQ and MRC trackers (controllers are not adaptive).
The failure categories shown in Table 1 indicate the level of criticality of a failure when a controller is used for
glide slope tracking. An important point to remember is that the controllers used to build Table 1 are not adaptive.
With this caveat, we make a number of observations:
• Compound failures (fi, gj) are more critical than both single failures fi and gj, no matter which controller is
used. For example, for the LQ tracker, the single LOE failure f3 is somewhat critical and the single LIP failure
g2 is non-critical, but the compound failure (f3, g2) is critical. This is a desirable consequence of our pre-order.
• LIP failures seem to be more critical than LOE failures for both LQ and MRC controllers. The reason is that,
when a LIP failure occurs, the number of independent control inputs is reduced by 1 and the remaining control
inputs must compensate for the potentially non-zero effect of the locked actuator.
• The LQ tracker seems to be a better controller overall than the MRC tracker. We do not mean that the LQ tracker
design method is a better method for glide slope tracking than MRC tracker design. The problem may lie in the
reference model (not exponentially stable) used by MRC.
In future work, we will use reconfigurable controllers to develop failure category ratings as shown in Table 1.
VIII. Conclusions
In this paper, several issues arising in the context of actuator Failure Detection, Identification and Reconfiguration
(FDIR) in flight control are addressed. These include algorithms, performance, implementation and metrics. Several
FDIR algorithms are reviewed from the point of view of their complexity, ease of implementation and robustness prop-
erties. It is shown that an FDIR system based on the actuator failure parameterization achieves excellent performance
in the presence of multiple simultaneous flight-critical failures, and results in improved convergence of the failure-
related parameter estimates. A new self-diagnostics procedure is described that results in guaranteed convergence of
the parameter errors to zero, and improved performance of the overall closed-loop system. Following that, perfor-
mance metrics for evaluation of FDIR systems are discussed. As a first step towards a quantitative performance metric,
we introduce the concept of failure criticality and present a grouping of actuator failures according to their severity
for a given task-controller pair. Failure criticality is based on comparing the achievable dynamic performance after
failure with the required dynamic performance needed to accomplish a given task by a given controller. We present
computational methods and illustrate the concept with numerical examples.
Future work in this area will be focused on the extensions of the proposed Failure Criticality concept to the case
of reconfigurable controllers. We will also pursue on-line implementation of the algorithms arising in the context of
Failure Criticality, and their use in the design of a comprehensive loss-of-control prevention and recovery system that
will be robust not only to severe actuator failures, but also to control effector damage, sensor failures, control computer
faults, and pilot errors. The tools described in this paper will facilitate the development of such a system.
References
1 M. Bodson and J. Groszkiewicz, “Multivariable Adaptive Algorithms for Reconfigurable Flight Control”, IEEE Transactions on Control
Systems Technology, Vol. 5, No. 2, pp. 217-229, March 1997.
2 Boeing Phantom Works, ”Reconfigurable Systems for Tailless Fighter Aircraft - RESTORE (First Draft)”, Contract No. F33615-96-C-3612,
System Design Report, No. A007, St. Louis, MO, May 1998.
3 J. D. Bošković, J. Redding and R. K. Mehra, ”Integrated Health Monitoring and Fast on-Line Actuator Reconfiguration Enhancement
(IHM-FLARE) System”, Final Report for NASA Langley Phase I SBIR, Contract No. NNL06AA26P, July 2006.
4 J. D. Bošković and R. K. Mehra, ”A Multiple Model-based Decentralized System for Accommodation of Failures in Second-Order Flight
Control Actuators”, to be presented at the 2006 American Control Conference, Minneapolis, MN, June 14-16, 2006.
5 J. D. Bošković, S. E. Bergstrom and R. K. Mehra, ”Adaptive Accommodation of Failures in Second-Order Flight Control Actuators with
Measurable Rates”, Proc. 2005 American Control Conference, Portland, OR, June 8-10, 2005.
6 J. D. Bošković and R. K. Mehra, ”Robust Fault-Tolerant Control Design for Aircraft Under State-Dependent Disturbances”, AIAA Journal
of Guidance, Control & Dynamics, Vol. 28, No. 5, pp. 902-917, September-October 2005.
7 J. D. Bošković, S. E. Bergstrom, R. K. Mehra, J. Urnes, Sr., M. Hood, and Y. Lin, ”Fast on-Line Actuator Reconfiguration Enabling (FLARE)
System”, in Proceedings of the 2005 AIAA Guidance, Navigation and Control Conference, San Francisco, CA, August 15-18, 2005.
8 J. D. Bošković, S. E. Bergstrom and R. K. Mehra, ”Aircraft Prognostics and Health Management (PHM) and Adaptive Reconfigurable
Control (ARC) System”, Final Report for NASA DFRC Phase II SBIR, Contract No. NAS4-02017, March 2004.
9 S. E. Bergstrom, J. D. Bošković and R. K. Mehra, ”Development of an Adaptive Reconfigurable Control Analysis, Design & Evaluation
(ARCADE) Toolbox”, AIAA-2003-5378, in Proceedings of the 2003 AIAA Guidance, Navigation and Control Conference, Austin, TX, August
11-14, 2003.
10 J. D. Bošković and R. K. Mehra, "A Multiple Model Adaptive Flight Control Scheme for Accommodation of Actuator Fail-
ures", in Proceedings of the 2002 AIAA Guidance, Navigation and Control Conference, Monterey, California, 5-8 August 2002.
of Guidance, Control & Dynamics, Vol. 23, No. 5, pp. 876-884, September-October 2000.
13 J. D. Bošković and R. K. Mehra, “Stable Multiple Model Adaptive Flight Control for Accommodation of a Large Class of Control Effector
Failures”, in Proceedings of the 1999 American Control Conference, San Diego, CA, June 1999.
14 J. D. Bošković, S.-H. Yu, and R. K. Mehra, ”A Stable Scheme for Automatic Control Reconfiguration in the Presence of Actuator Failures”,
in Proceedings of the 1998 American Control Conference, Vol. 4, pp. 2455-2459, Philadelphia, PA, June 24-26, 1998.
15 B. Büeler, A. Enge, and K. Fukuda, "Exact volume computation for polytopes: A practical study", in G. Kalai and G. M.
Ziegler (Eds.), Polytopes – Combinatorics and Computation, Volume 29 of DMV Seminar, pp. 131-154, Birkhäuser, 2000. (see also
lix.polytechnique.fr/Labo/Andreas.Enge/Vinci.html)
16 A. P. Loh, A. M. Annaswamy and F. P. Skanze, ”Adaptation in the Presence of a General Nonlinear Parameterization: An Error Model
Approach”, IEEE Transactions on Automatic Control, Vol. 44, 1999, pp. 1634–1652.
17 J. D. Bošković, ”Adaptive Control of a Class of Nonlinearly-Parametrized Plants”, IEEE Transaction on Automatic Control, Vol. 43, No. 7,
where A is the system matrix from (5). This corresponds to standard deviations of 2 ft/sec in airspeed V, 1° in α and β,
and 1°/sec in p, q and r. The initial states are assumed to have a Gaussian density with zero mean and standard deviation

diag [2, π/180, π/180, π/180, 2, π/180, π/180, π/180, π/180, π/180]
g(x, u) ≤ 0

involving states and control inputs. The ideal desired final state set is given by:

Xideal = { x : g(x, u) ≤ 0 for some u ∈ U }

where U is the control constraint set. Any controller that accomplishes the task, in the absence of exogenous in-
puts, must reach some state in Xideal. The calculation of this set for the glide slope tracking problem is shown in
Example VI.2.
For the glide slope tracking problem, we consider an LQ-tracker and a model reference tracker. The LQ-tracker
has the form:
u = F x + ū (15)
ū(t) = (t/T) ūf if 0 ≤ t ≤ T, and ū(t) = ūf otherwise    (16)
where ūf is the steady state value of ū and T > 0. The steady state value is obtained from the ideal desired final state
set Xideal as follows. Choose any xf ∈ Xideal. There exists uf ∈ U such that

g(xf, uf) ≤ 0

Define:

ūf = −F xf + uf

Note that, under the above choices, for t > T, we have:

u(t) = F (x(t) − xf) + uf
and, using a change of variables and the fact that F is a stabilizing gain, we can show that the LQ-tracker accomplishes
the control task when exogenous inputs are set to zero, provided that the control constraints are met during the transient
period t ≤ T. We typically adjust T to avoid control saturation. For the numerical examples, we use T = 3 and the
following state and control weight matrices, respectively:
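The LQ-tracker schedule of (15)-(16) can be sketched numerically on a scalar plant. The plant, gain and target below are illustrative values, not the paper's F-18 model:

```python
# Scalar sketch of the LQ-tracker u = F x + ubar with the ramped
# feedforward term of Eq. (16).  All numbers are illustrative.
a, b = 0.5, 1.0            # unstable scalar plant: xdot = a x + b u
F = -2.5                   # stabilizing gain: a + b F = -2 < 0
x_f, u_f = 1.0, -0.5       # steady-state pair: a x_f + b u_f = 0
ubar_f = -F * x_f + u_f    # ubar_f = -F x_f + u_f, as in the text
T = 3.0                    # ramp duration, tuned to limit transient control

def ubar(t):
    """Ramped feedforward input of Eq. (16)."""
    return (t / T) * ubar_f if t <= T else ubar_f

# Forward-Euler simulation with zero exogenous inputs
x, dt = 0.0, 0.001
for k in range(10_000):            # simulate 10 seconds
    t = k * dt
    u = F * x + ubar(t)            # Eq. (15)
    x += dt * (a * x + b * u)
# x settles near x_f, so the tracker accomplishes the (scalar) task
```

Increasing T slows the ramp and reduces the peak transient control, which is the saturation-avoidance adjustment mentioned above.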
The model reference tracker uses the following reference model:
[ẋlon; ẋlat] = [Alon 0; 0 Alat] [xlon; xlat] + [Blon 0; 0 Blat] [rlon; rlat]

where

Alon = [ −0.08    0      0     0    0
          0     −0.37    1     0    0
          0       0     −2.8  −4    0
          0       0      1     0    0
          0    −228.5    0    228   0 ],    Blon = [ 0.08  0     0
                                                     0     0.37  0
                                                     0     0     4
                                                     0     0     0
                                                     0     0     0 ],

Alat = [ −5   0     0     0    0
          0  −2.8   0    −4    0
          0   0    −2.8   0   −4
          0   1     0     0    0
          0   0     1     0    0 ],    Blat = [ 5  0  0
                                                0  4  0
                                                0  0  4
                                                0  0  0
                                                0  0  0 ],
rlon(t) = (t/T) rf if 0 ≤ t ≤ T, and rlon(t) = rf otherwise    (17)
u = (B′QB + R)⁻¹ B′Q (−Ax + Am x + Bm r)
As in the case of the LQ-tracker, we can show that the model reference controller accomplishes the control task when
exogenous inputs are set to zero, provided that the control constraints are met during the transient period t ≤ T. For the
numerical examples, we use T = 3 and the following state and control weight matrices, respectively:
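The model-reference control law above is a Q-weighted least-squares match of the plant's rate to the reference-model rate, with R penalizing control effort. A minimal sketch with illustrative matrices (not the paper's F-18 data):

```python
import numpy as np

# Sketch of u = (B'QB + R)^{-1} B'Q (-A x + Am x + Bm r): u is chosen so
# that B u approximates the desired rate (Am - A) x + Bm r in a Q-weighted
# least-squares sense.  All matrices are illustrative.
A = np.array([[0.0, 1.0], [2.0, -1.0]])     # open-loop plant matrix
B = np.array([[0.0], [1.0]])                # single control input
Am = np.array([[0.0, 1.0], [-4.0, -2.0]])   # stable reference model
Bm = np.array([[0.0], [4.0]])
Q = np.eye(2)
R = 1e-6 * np.eye(1)                        # small control-effort penalty

def mrc_input(x, r):
    desired = -A @ x + Am @ x + Bm @ r      # rate mismatch to be matched by B u
    return np.linalg.solve(B.T @ Q @ B + R, B.T @ Q @ desired)

u = mrc_input(np.array([0.5, -0.2]), np.array([1.0]))
# Here B u can match the mismatch exactly in the second state equation:
# u ≈ -6(0.5) - 1(-0.2) + 4(1) = 1.2
```

When B has full column rank and R is small, the closed loop approximately follows the reference model; a larger R trades model-following accuracy for control effort.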
ẋ = F x + G 1 w + G2 r, x(0) ∈ X0 (18)
where X0 is a specified set of initial conditions, w is an exogenous input belonging to a specified set W, and r is a
given reference input. Here, F, G1 and G2 are real matrices of appropriate dimensions. Since (18) is the closed-loop
system, we may assume that all the eigenvalues of F have strictly negative real parts. For a linear open-loop plant with
a linear controller, the control input can be written as:
u = H1 x + H2 r
where H1 and H2 are known matrices. We make no assumption about the plant, measurements or the controller (except
that they are all linear and that the closed loop system is asymptotically stable with w = 0 and r = 0).
Fix an initial condition x0 ∈ X0 and an exogenous input w ∈ W. The state trajectory emanating from x0 under the
influence of w and r is:
x(t) = e^{Ft} x0 + ∫_0^t e^{F(t−τ)} G1 w(τ) dτ + ∫_0^t e^{F(t−τ)} G2 r(τ) dτ    (19)
for all t ≥ 0. We make great use of the superposition principle apparent in the above formula. The principle extends to
reach sets as well where the plus sign is taken to be Minkowski sum. Define the following reach sets of the LTI system
(18):
R_t^ic(X0) = { e^{Ft} x0 + ∫_0^t e^{F(t−τ)} G2 r(τ) dτ : x0 ∈ X0 }    (20a)

R_t^exo(W) = { ∫_0^t e^{F(t−τ)} G1 w(τ) dτ : w ∈ W }    (20b)
for t ≥ 0. R_t^ic(X0) is the set of all states that can be reached at time t starting from some initial condition in X0 and
evolving under the influence of the reference input r only (zero exogenous input). R_t^exo(W) is the set of all states
that can be reached at time t starting at zero and evolving under the influence of the exogenous input w only (zero initial
condition and reference input). The set of all states that can be reached at time t by starting from some initial condition
and evolving under both reference input and exogenous input is:

Rt = R_t^ic(X0) + R_t^exo(W) = { y + z : y ∈ R_t^ic(X0), z ∈ R_t^exo(W) }    (21)
Our aim is to first characterize these sets and then to characterize the corresponding set of required control inputs and
RDP set.
A positive (or positive semi-definite) matrix is a symmetric matrix whose eigenvalues are all greater than or equal
to zero. A positive matrix is strictly positive (or positive definite) if all of its eigenvalues are strictly greater than zero.
The unique positive square root of a positive matrix M is denoted by M^{1/2}. The open ellipsoid centered at z and defined
by a positive M is denoted by E(z, M). That is,

E(z, M) = { z + M^{1/2} v : |v| < 1 }

We do not require M to be strictly positive. L2 is the Hilbert space of all Lebesgue-measurable signals taking values
in some appropriate Euclidean space that are square integrable. The set of all signals in L2 whose norm is strictly
less than 1 is denoted by BL2. These notations are introduced because we consider ellipsoidal initial condition sets
and finite-energy exogenous inputs in detail. The techniques can be applied to discrete-time systems, polyhedral initial
condition sets and L∞ inputs with some minor changes.
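With this notation, the reference-driven reach set of an ellipsoidal initial set under (19) can be sketched directly: with zero reference input, x(t) = e^{Ft} x0, so an initial ellipsoid maps to another ellipsoid under the linear flow. The matrix F below is a hypothetical diagonal stable matrix so that e^{Ft} has a closed form without a general matrix exponential:

```python
import numpy as np

# Sketch: with r = 0 and w = 0, x(t) = e^{Ft} x0, so the ellipsoid
# E(xbar0, X0) = {xbar0 + X0^{1/2} v : |v| < 1} maps to
# E(e^{Ft} xbar0, e^{Ft} X0 e^{F't}).  F, xbar0, X0 are illustrative.
F = np.diag([-1.0, -2.0])           # stable closed-loop matrix (diagonal)
xbar0 = np.array([1.0, 1.0])        # initial ellipsoid center
X0 = np.diag([0.25, 0.04])          # initial ellipsoid shape matrix

def reach_ellipsoid(t):
    """Center and shape matrix of the reach set at time t."""
    eFt = np.diag(np.exp(np.diag(F) * t))   # closed form for diagonal F
    return eFt @ xbar0, eFt @ X0 @ eFt.T

xbar_t, X_t = reach_ellipsoid(1.0)
# The center decays along the flow and the ellipsoid contracts:
# xbar_t = (e^-1, e^-2), X_t = diag(0.25 e^-2, 0.04 e^-4)
```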
Then, R_t^ic(X0) = E(x̄t, Xt).
3. Suppose that the initial condition set is E(x̄0, X0) and that the exogenous input set is BL2. Then,

Rt = E(x̄t, Xt) + E(0, Pt)
The set Rt of all states that can be reached at time t is a convex set. But the set of all reachable states, given by

∪_{t≥0} Rt

is not necessarily convex. It can be outer-approximated by a convex set (or by the union of a finite number of convex
sets) using well-known convex programming methods.
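When Rt is the sum of two ellipsoids, as above, a convex outer approximation can itself be taken ellipsoidal. The bound used below is a standard ellipsoidal-calculus result, stated here as an assumption rather than taken from the paper, and all matrices are illustrative:

```python
import numpy as np

# The Minkowski sum of two ellipsoids is generally not an ellipsoid, but
# (standard ellipsoidal-calculus bound, assumed here):
#   E(c1, M1) + E(c2, M2) ⊆ E(c1 + c2, (1 + 1/s) M1 + (1 + s) M2),  s > 0,
# with s = sqrt(tr M1 / tr M2) a common trace-minimizing choice.
rng = np.random.default_rng(0)
c1, M1 = np.array([1.0, 0.0]), np.diag([4.0, 1.0])    # illustrative ellipsoids
c2, M2 = np.array([0.0, -1.0]), np.diag([1.0, 9.0])
s = np.sqrt(np.trace(M1) / np.trace(M2))
M_out = (1.0 + 1.0 / s) * M1 + (1.0 + s) * M2
M_out_inv = np.linalg.inv(M_out)

def sample(c, M):
    """Random point of E(c, M) = {c + M^{1/2} v : |v| < 1}."""
    v = rng.normal(size=2)
    v *= rng.random() / np.linalg.norm(v)   # random direction, |v| < 1
    return c + np.sqrt(M) @ v               # entrywise sqrt; M is diagonal here

# Empirical containment check: every sampled sum lies in the outer ellipsoid
inside = True
for _ in range(1000):
    q = sample(c1, M1) + sample(c2, M2) - (c1 + c2)
    inside = inside and bool(q @ M_out_inv @ q < 1.0)
```

Such outer ellipsoids keep the representation of Rt closed under the sum in (21), at the cost of some conservatism.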