
Proc. AIAA Guidance, Navigation and Control Conference, Keystone, CO, 21-24 Aug 2006

Reconfigurable Fault-Tolerant Flight Control: Algorithms, Implementation and Metrics ∗†

Jovan D. Bošković‡, Ravi K. Prasanth§, and Raman K. Mehra¶


Scientific Systems Company, Inc.

In this paper, several issues arising in the context of actuator Failure Detection, Identification and Recon-
figuration (FDIR) in flight control are addressed. These include algorithms, performance, implementation and
metrics. Several FDIR algorithms are reviewed from the point of view of their complexity, ease of implementa-
tion and robustness properties. It is shown that a FDIR system based on a new actuator failure parameterization
achieves excellent performance in the presence of multiple simultaneous flight critical failures, and results in
improved convergence of the failure-related parameter estimates. A new self-diagnostics procedure is described
that results in guaranteed convergence of the parameter errors to zero, and improved performance of the over-
all closed-loop system. Following that, performance metrics for evaluation of FDIR systems are discussed. As a
first step towards a quantitative performance metric, we introduce the concept of failure criticality and present
a grouping of actuator failures according to their severity for a given task-controller pair. Failure criticality is
based on comparing the achievable dynamic performance after failure with the required dynamic performance
needed to accomplish a given task by a given controller. We present computational methods and illustrate the
concept with numerical examples.

I. Introduction
In the last several years a large number of different approaches to Failure Detection, Identification and Reconfiguration (FDIR) in flight control have been proposed. Most of the research has been focused on failures in flight control
actuators and effectors.1, 2, 4, 6, 7, 19–21 Instead of facilitating the implementation of fault-tolerant control algorithms, such
an abundance of approaches has increased the confusion among potential users. Among the many questions cloud-
ing the current state of research are the guaranteed computational and system theoretic properties of the approaches,
the assumptions under which they hold and the practical problems for which an approach is better suited than an-
other. The main contributing factor is the lack of performance metrics to evaluate the extent of faults and failures that
can be accommodated using the proposed approaches, and the flight performance that can be achieved under failure
accommodation.
Probably the first question that arises prior to FDIR system design is to determine a class of failures for which
the system should be designed. Different actuator and control effector failures affect the closed-loop system in dif-
ferent ways, and FDIR system design should address the following questions in the initial stage: (i) Which failures
can be accommodated without affecting performance of the resulting closed-loop system? (ii) Which failures can be
accommodated so that the accommodation results in graceful performance degradation? (iii) Which failures cannot be
accommodated? This issue is closely related to the effect of failure and its accommodation on the resulting closed-loop
control system. For instance, a failure can be such that its accommodation can be achieved without saturating the
remaining healthy actuators. In such a case, the FDIR system can (at least theoretically) achieve the same level of
performance as in the no-failure case. In other cases, the failure severity can be such that the FDIR system achieves
performance that is worse than the original one, but closed-loop stability can still be guaranteed. In case (iii) the sever-
ity of failure is such that the use of remaining healthy actuators cannot guarantee closed-loop stability, and switching
to some type of fail-safe mode is necessary. These cases need to be studied in the context of both single and multiple
∗ Copyright © 2006 by Scientific Systems Company, Inc. Published by the American Institute of Aeronautics and Astronautics, Inc. with permission.
† This research was supported by NASA Langley Research Center under contract No. NNL06AA26P to Scientific Systems Company.
‡ Intelligent & Autonomous Control Systems Group Leader, 500 W. Cummings Park, Suite 3000, AIAA Senior Member, jovan@ssci.com
§ Principal Research Scientist, 500 W. Cummings Park, Suite 3000, AIAA Member, prasanth@ssci.com
¶ President & CEO, 500 W. Cummings Park, Suite 3000, AIAA Senior Member, rkm@ssci.com



failures and recoveries, and measures that aid in distinguishing between cases (i)-(iii) are of interest in FDIR system
design. This analysis is additionally important because cases (i) and (ii) are amenable to loss-of-control prevention, while case (iii) can lead to loss of control, in which case loss-of-control recovery techniques need to be used.
As a first step towards addressing these issues, in this paper we introduce the concept of actuator failure criticality.
For a given control task, such as glide slope tracking during landing, and a given controller to accomplish the task,
different failures affect closed loop aircraft performance differently. The main idea behind failure criticality is to
describe the effect of failure using an achievable dynamic performance (ADP) set and to compare ADP set with a
required dynamic performance (RDP) set needed by the controller to accomplish the task. When the RDP set is a
subset of the post-failure ADP set, the control task can be achieved despite failure and control reconfiguration. On the
other hand, when the RDP set is not completely contained in the ADP set, there is some likelihood that the control task
cannot be accomplished at the specified performance level and a degradation in performance may be needed. These
correspond to cases (i) and (ii) mentioned earlier. In case (iii), the RDP set and post-failure ADP set are separated by
some positive distance.
For a given task-controller pair, failure criticality allows us to group actuator failures into categories ranging from
non-critical to severe. The number and type of failures in each category is an indicator of the quality of the controller for
the given task. Similarly, we can compare different controllers for the same task-failure pair by determining the failure
category under the different controllers. In this way, failure criticality provides a metric with which the performance of
different controllers can be evaluated.
The ADP and RDP sets are calculated under an assumption that the failure is completely known. Since this is
not a realistic assumption, the paper discusses ways in which accurate failure information can be obtained. Related
algorithmic, performance, and implementation issues are also discussed. Several FDIR algorithms are reviewed from
the point of view of their simplicity, ease of implementation and robustness properties. It is shown that a FDIR
system based on a new actuator failure parameterization achieves excellent performance in the presence of multiple
simultaneous flight critical failures, and results in improved convergence of the failure-related parameter estimates.
A new self-diagnostics procedure is described that results in accurate estimation of failure-related parameters and
improved performance of the overall closed-loop system. This, combined with the failure criticality measure, aids in
determining the maximum performance that can be achieved under failure and control reconfiguration. Performance
metrics and FDIR algorithm implementation are also discussed, and several new problems arising in this context are stated in preliminary form.

II. FDIR Algorithms


In the current literature there is a large number of Failure Detection, Identification and Reconfiguration (FDIR)
approaches that guarantee closed-loop stability in the presence of actuator failures. 1, 2, 4, 6, 7, 19–21 Such FDIR techniques
are primarily based on direct or indirect adaptive control approaches23 in which the adaptation is with respect to
failures that are modeled as parametric uncertainty.
In FDIR based on direct adaptive control, controller parameters are adjusted directly based on the response of the
overall control system. Such adaptive control systems suffer from the difficulty in establishing a relationship between
controller parameters and failure-related parameters. In addition, a large number of parameters commonly need to be
adjusted on-line, which makes these algorithms difficult to tune and implement. For instance, let the plant dynamics be
of the form:

ẋ = Ax + BDu,

where x ∈ IRn is the system state vector, u ∈ IRm is the control input vector, matrices A and B are known, and D = diag[d1 d2 ... dm] is an unknown actuator effectiveness matrix, where di ∈ [εi, 1] and 0 < εi ≪ 1. The objective
is to design a controller such that the plant state follows the state of the reference model:

ẋm = Am xm + Bm r,

where Am is asymptotically stable, and r is a vector of bounded piece-wise continuous reference inputs (commands).
The direct adaptive fault-tolerant controller is of the form

u = Θx + Kr,



where Θ ∈ IRn×m and K ∈ IRn×m , and where the adaptive laws for adjusting Θ and K are of the form:

Θ̇ = −Γθ exT
K̇ = −Γk erT ,

where Γ(·) = ΓT(·) > 0, and e = x − xm denotes the tracking error. It is well known23 that the above adaptive algorithms
result in a stable closed-loop system in which asymptotic convergence of the tracking error to zero is guaranteed. It is
important to note that in the case of direct adaptive control 2nm parameters need to be adjusted even though only m
parameters are unknown.
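To make this concrete, the following minimal Python sketch simulates the direct adaptive law in scalar form (n = m = 1, so that the stated laws apply verbatim) under a hypothetical 50% loss of effectiveness at t = 5 s; the plant, reference model, adaptation gains, and failure time are illustrative placeholders, not values from the paper, and stability of the scalar loop assumes b·d > 0.

# Minimal scalar (n = m = 1) sketch of the direct adaptive law above,
# simulated with forward Euler.  All numerical values are illustrative.
import numpy as np

a, b = -1.0, 2.0            # plant:      xdot = a*x + b*d*u   (assumes b*d > 0)
a_m, b_m = -3.0, 3.0        # reference:  xmdot = a_m*xm + b_m*r
g_theta, g_k = 5.0, 5.0     # adaptation gains (Gamma_theta, Gamma_k)

dt, T = 1e-3, 20.0
x = xm = 0.0
theta, k = 0.0, 0.0

for i in range(int(T / dt)):
    t = i * dt
    d = 1.0 if t < 5.0 else 0.5           # actuator effectiveness drops at t = 5 s
    r = 1.0 if (t % 8.0) < 4.0 else -1.0  # square-wave command
    e = x - xm                            # tracking error
    u = theta * x + k * r                 # direct adaptive controller
    # adaptive laws:  thetadot = -g_theta*e*x,  kdot = -g_k*e*r
    theta += dt * (-g_theta * e * x)
    k     += dt * (-g_k * e * r)
    x  += dt * (a * x + b * d * u)
    xm += dt * (a_m * xm + b_m * r)

print(f"final tracking error e = {x - xm:+.4f}, theta = {theta:+.3f}, k = {k:+.3f}")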
In FDIR based on indirect adaptive control, the failure-related parameters are estimated first using a suitably de-
signed on-line observer, and the resulting estimates are in turn used in the adaptive reconfigurable control law assuring
the closed-loop stability. It is well known23 that the resulting closed-loop system is stable even when the signals in
the system are not sufficiently persistently exciting (PE), i.e. even when the parameters do not converge to their true
values.
The indirect adaptive controller is implemented in two steps. First step includes the design of an adaptive observer
of the form:

x̂˙ = Ax + BD̂u + Λ( x̂ − x),

where x̂ denotes an estimate of x, and corresponding stable adaptive laws for adjusting D̂ (see e.g. Ref.6 ) are of the
form:

d̂˙ = Proj[ε,1] {−Γ diag(u) BT ê},

where Γ = ΓT > 0, and d̂ = diag(D̂). In the above expression the Projection Operator is used (see e.g. Ref. 6) to keep the estimates of di within [εi, 1]. The use of the projection algorithm also makes the overall closed-loop system robust
to parameter variation and external disturbances, and assures the invertibility of D̂(t) at every instant.
The adaptive reconfigurable control law is of the form:

u = (BD̂)T (BD̂(BD̂)T )−1 (−(A − Am )x + Bm r),

and can be readily demonstrated to result in a stable system. It is seen that in the case of FDIR based on indirect
adaptive control only m parameters need to be adjusted to achieve the control objective. Hence indirect adaptive
control appears better suited for FDIR implementation since the failure-related parameters are estimated explicitly,
which can be beneficial for failure diagnostics and prognostics.
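As an illustration of the two-step indirect scheme, the Python sketch below runs the adaptive observer, a simple clipping stand-in for the projection operator, and the pseudo-inverse reconfigurable control law above for a made-up two-state, three-actuator plant; the matrices, gains, and simulated failure are assumptions for illustration only.

# Sketch of the indirect scheme above (adaptive observer + projection +
# pseudo-inverse reallocation) for an illustrative 2-state, 3-actuator plant.
import numpy as np

A  = np.array([[0.0, 1.0], [-2.0, -0.5]])
B  = np.array([[0.2, 0.0, 0.1], [1.0, 0.8, -0.6]])
Am = np.array([[0.0, 1.0], [-4.0, -2.0]])
Bm = np.array([[0.0], [4.0]])
Lam = -5.0 * np.eye(2)                 # observer gain Lambda (Hurwitz)
Gam = 10.0 * np.eye(3)                 # adaptation gain Gamma
eps = 0.05                             # lower bound on effectiveness

dt, T = 1e-3, 15.0
x = np.zeros(2); xhat = np.zeros(2); xm = np.zeros(2)
dhat = np.ones(3)                      # estimate of diag(D), start healthy
d_true = np.ones(3)

for i in range(int(T / dt)):
    t = i * dt
    if t >= 5.0:
        d_true = np.array([1.0, 0.3, 1.0])        # actuator 2 loses 70% effectiveness
    r = np.array([np.sin(0.5 * t)])
    BD = B * dhat                                  # B @ diag(dhat)
    # reconfigurable law  u = (BD)^T (BD (BD)^T)^{-1} ( -(A - Am) x + Bm r )
    u = BD.T @ np.linalg.solve(BD @ BD.T, -(A - Am) @ x + Bm @ r)
    ehat = xhat - x
    # adaptive law; np.clip is a simple stand-in for the projection operator
    ddot = -Gam @ (np.diag(u) @ B.T @ ehat)
    dhat = np.clip(dhat + dt * ddot, eps, 1.0)
    xhat += dt * (A @ x + BD @ u + Lam @ ehat)     # adaptive observer
    x    += dt * (A @ x + B @ (d_true * u))        # actual plant
    xm   += dt * (Am @ xm + Bm @ r)                # reference model

print("dhat =", np.round(dhat, 3), " tracking error =", np.round(x - xm, 4))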
In our previous work we developed failure models (i.e. the models that describe the effect of failures on the plant
and/or actuator dynamics) for a large class of failures in terms of two parameters. The class of failures includes: lock-
in-place, loss of effectiveness, hard-over, float, and also models failure recoveries. These models were used to design
the corresponding FDIR system for the case of actuator dynamics of zeroth-order, 14 first-order,6 and second-order with
measurable5 and non-measurable4 actuator rates. These algorithms were further simplified by the recent development
of a new failure parameterization3 that models the above class of failures using a single parameter θ. The FDIR system
based on the new parameterization was shown to result in improved performance and better parameter convergence
properties.
Based on the criteria for the FDIR system design that include computation, ease of implementation, and robustness,
it appears that the FDIR based on indirect adaptive control and the new failure parameterization is superior to other
algorithms for accommodation of failures in flight control actuators.

III. FDIR System Implementation & Performance


In our previous work we developed the FLARE (Fast on-Line Actuator Reconfiguration Enhancement) system
that achieves very fast detection of failures in flight control actuators, and effective control reconfiguration in the
presence of single or multiple failures and recoveries even while rejecting external disturbances. The FLARE system
combines decentralized FDIR algorithms with a disturbance rejection mechanism within a retrofit control architecture.
In collaboration with Boeing Phantom Works, the performance of the FLARE system was extensively evaluated using
high-fidelity and piloted simulators. The FLARE system achieved excellent response in the presence of severe flight-
critical control effector failures, and received excellent HQ ratings from the pilot. The FLARE system is shown in



[Figure 1 block diagram omitted. The FLARE system comprises decentralized FDI observers (one per actuator), global FDIR and disturbance estimation, and an adaptive retrofit reconfigurable controller; its add-on signal is summed with the baseline control signal before the actuator commands are sent to the airframe.]

Figure 1. Structure of the Fast on-Line Actuator Reconfiguration Enhancement (FLARE) System (©2006 Scientific Systems Company, Inc.)

Figure 1, and is seen to include the following subsystems: (i) Decentralized FDIR system that consists of local FDI
observers that are run at each of the actuators, (ii) Global FDI algorithms whose role is to detect and identify control
effector damage, and (iii) Disturbance Rejection Subsystem that attenuates the effect of the external disturbances. The
FDI information is passed on to the Supervisory Block that activates the Adaptive Reconfigurable Controller as needed
to effectively compensate for failures or control effector damage. The system is designed in a retrofit fashion, i.e. the
baseline controller is maintained and the FDIR system is active only in the case of failure.

A. Decentralized Estimation
The FLARE system is well suited for first- and second-order actuator dynamics. For a second-order actuator model,
several types of failures are modeled using only two parameters. One is the actuator mobility coefficient σ, which
indicates whether or not the actuator is locked at a specific value. If σ = 1, the actuator is operational, while during
lock-in-place and hard-over failures, σ = 0. The second parameter is the actuator effectiveness coefficient k, which
models loss-of-effectiveness failures, with k ∈ [ε, 1], where ε ≪ 1. The actuator failure model is written as:

u̇1 = σ u2
u̇2 = −2ζω u2 + σ ω² (k uc − u1)

where uc is the command, u1 is actuator position, and u2 is actuator rate.
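The role of the two parameters can be seen directly by simulating this model; the short Python sketch below uses illustrative values of ζ, ω, and the command (not values from the paper) and prints the final surface position for a healthy actuator, a 60% loss of effectiveness, and a lock-in-place failure.

# Sketch of the two-parameter second-order actuator failure model above.
import numpy as np

def simulate_actuator(sigma, k, zeta=0.7, omega=30.0, dt=1e-3, T=1.0):
    """Second-order actuator: u1 = position, u2 = rate.
       sigma = 0 freezes the surface (LIP/HOF); k < 1 models LOE."""
    u1, u2 = 0.0, 0.0
    for _ in range(int(T / dt)):
        uc = 5.0                                   # 5-deg step command
        u1 += dt * (sigma * u2)
        u2 += dt * (-2 * zeta * omega * u2 + sigma * omega**2 * (k * uc - u1))
    return u1

print("healthy      :", round(simulate_actuator(sigma=1, k=1.0), 2), "deg")
print("60% LOE      :", round(simulate_actuator(sigma=1, k=0.4), 2), "deg")
print("lock-in-place:", round(simulate_actuator(sigma=0, k=1.0), 2), "deg")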


Based on this parameterization, the on-line FDI observers are designed for each of the actuators within the Decen-
tralized FDIR scheme7 shown in Figure 2.
In this case the FDI algorithms are local in nature since they are based on the actuator dynamics only, and the
estimation of the failure-related parameters is carried out using local signals (actuator inputs and outputs). A separate
adaptive observer is run for each actuator, and estimates of σ and k are generated on-line. The algorithms for on-
line estimation of the failure-related parameters and the corresponding proofs of stability for the case of second-order
actuator dynamics are given in detail in.4, 5, 8 In our recent work3 we derived a new failure parameterization where a
large class of failures is described using a single parameter θ, resulting in the θ-FLARE architecture.



[Figure 2 block diagram omitted. Local FDI systems attached to each actuator A1, ..., AN provide failure estimates to the adaptive reconfigurable controller, which closes the loop around the aircraft.]

Figure 2. The Decentralized FDIR Scheme (©2006 Scientific Systems Company, Inc.)

B. Retrofit Reconfiguration for the Lock-In-Place (LIP) and Hard-Over (HOF) Failures
The retrofit algorithm for the LIP and HOF failures redistributes the control effort assigned to the failed actuator using
the algorithm described below.
In the case of fast actuator dynamics, the effect of LIP, LOE, float and hard-over failures can be expressed in terms
of the overall aircraft model as:
ẋ = Ax + BKΣuc + BK(I − Σ)ū (1)
where ū is a vector containing the current positions of any failed actuators.
The control signal uc (t) that is sent to the actuators is defined as a sum of the retrofit signal v(t), and the signal
ucN (t), which is the output of the baseline controller, as:
uc (t) = ucN (t) + v(t). (2)
The objective is to design the signal v(t) immediately following a failure so that its effect, combined with that of
the baseline controller, is close to that of ucN (t) in the no-failure case. Hence, in the case of known failures, the retrofit
reconfigurable control objective is to find the signal v∗ (t) such that:
BucN = BΣ(ucN + v∗) + B(I − Σ)ū. (3)
From the above expression we obtain that a v∗(t) that achieves the objective is of the form:
v∗ = ΣT BT (BΣΣT BT )−1 B(I − Σ)(ucN − ū) (4)
Since the failures are unknown, the actual algorithm is implemented by replacing the failure-related parameters with
their estimates. It is shown7 that such a strategy results in a stable closed-loop system and asymptotic convergence of the
tracking error to zero. This algorithm may also be implemented by solving a corresponding constrained optimization
problem to take into account the position and rate limits on the control effectors.
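A minimal Python sketch of this redistribution, assuming the failure is known exactly (the estimated-parameter version simply substitutes the estimates of Σ and the failed positions), is given below; the B matrix, the failed actuator, and all numerical values are illustrative placeholders.

# Sketch of the retrofit redistribution (4) for a known failure: given the
# baseline command ucN and the stuck positions ubar of failed surfaces,
# compute the add-on signal v* routed to the healthy actuators.
import numpy as np

B     = np.array([[0.2, 0.0, 0.1], [1.0, 0.8, -0.6]])   # control effectiveness (placeholder)
Sigma = np.diag([1.0, 0.0, 1.0])                         # actuator 2 has failed
ucN   = np.array([0.5, 0.3, -0.2])                       # baseline controller output
ubar  = np.array([0.0, 0.15, 0.0])                       # failed actuator stuck at 0.15

# v* = Sigma^T B^T (B Sigma Sigma^T B^T)^{-1} B (I - Sigma)(ucN - ubar)
M = B @ Sigma @ Sigma.T @ B.T
v_star = Sigma.T @ B.T @ np.linalg.solve(M, B @ (np.eye(3) - Sigma) @ (ucN - ubar))
uc = ucN + v_star                                        # command actually sent
# note: the failed channel of v* is zero, so only healthy actuators are retasked;
# check that the achieved moment matches the no-failure moment B*ucN
print("B ucN                  :", B @ ucN)
print("B(Sigma uc + (I-S)ubar):", B @ (Sigma @ uc + (np.eye(3) - Sigma) @ ubar))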

C. Performance Evaluation of FLARE


In this section we describe the results of extensive performance evaluations of the FLARE system. Our initial design
of FDIR algorithms used under FLARE was carried out in Matlab using the linearized simulation of F/A–18 dynamics.



Upon successful testing of the algorithms, these were transitioned to Fortran and sent to Boeing for evaluation on their
high-fidelity simulation of F/A–18 dynamics. Once acceptable results were obtained on that simulation, the algorithms
were transitioned to the piloted simulator at Boeing. In this section we give a detailed description of the results.
Design, tuning and performance evaluation of the FLARE system were carried out in three steps, described below.
Medium-Fidelity Simulation Testing: This simulation was developed by Boeing and contains linearized stability and
control derivatives, position and rate limits on the actuators, actuator dynamics, pure time delay in sending the signals
from the controller to the actuators, and a linearized version of Boeing's baseline controller referred to as the CAS
(Control Augmentation System). It was shown by Boeing that the medium-fidelity simulation is very close to their
high-fidelity simulator over a reasonable range of perturbations from the trim. All our initial design, testing and tuning
was carried out on this simulation, and the results of extensive simulations are presented in Ref. 8.
High-Fidelity Simulation Testing: In this step, the FDIR algorithms were tested and tuned on Boeing's high-fidelity 6-DOF simulation of F/A–18 dynamics, referred to as the ModSDF. The simulation captures the most dominant dynamic effects and calculates forces and moments using an extensive aerodynamic database. Since it is written in
Fortran, all our algorithms were transitioned from Matlab to Fortran, and extensively tested by Boeing.
Piloted Simulation Testing: In this step, the FDIR algorithms that were successfully tested and tuned on the ModSDF
simulation were transitioned to the piloted simulator at Boeing. The piloted simulator is used for pilot training as well
as for testing advanced control algorithms.
The results of the piloted simulation with FLARE are summarized in Figure 3. All tests were evaluated by the
pilot using the Cooper-Harper Rating scale, which classifies aircraft handling into ten categories ranging from (1) No
compensation required by pilot to obtain desired performance; to (9) Intense compensation required by pilot to obtain
control; and (10) Control is lost. The results with failures and FLARE ranged from 2–4.5, with most values near 3.
This is a substantial improvement over performance with the baseline controller, which yielded results ranging from
2.5–8. In multiple cases, the pilot commented that, after a minor initial transient, the damaged aircraft with FLARE
behaved identically to the nominal aircraft.

[Figure 3 table omitted. It lists Cooper-Harper ratings for doublet, tracking, 4G turn, and 360° roll tasks under stabilator, aileron, and rudder failures of varying severity at Flight Condition A (Mach 0.7, 20,000 ft) and Flight Condition B (Mach 0.6, 30,000 ft), comparing the baseline controller with failures (ratings roughly 2.5-8) against the FLARE system with failures (ratings roughly 2-4.5); no-failure baseline ratings were 2-4. Rating legend: 2-3 excellent response; 3.5-4.5 good response, low pilot load; 5-6 poor response, high pilot load; 7 and up unacceptable response.]

Figure 3. Table of Cooper-Harper Ratings from Piloted Simulations using FLARE

As mentioned earlier, we recently developed a new failure parameterization and the corresponding FDIR system
referred to as the θ-FLARE.3 Due to a relatively small number of parameters that need to be adjusted on-line, the
resulting system is highly robust, and most of the parameters converge close to their true values even in the case when
the commands are not persistently exciting. As a result, the θ-FLARE achieves excellent performance in the presence of
severe multiple flight-critical failures and failure recoveries, and hence appears superior to other available approaches.
The features of θ-FLARE are illustrated through a simulation example below.
Simulations using θ-FLARE: As the simulation example, the linearized dynamics of the F/A-18 aircraft during a 30° lateral doublet is chosen. The simulation consists of linear A and B matrices for a straight flight regime, and actuator
dynamics and position and rate limits on the control effectors.
The states of the model are: Total velocity V, pitch rate q, pitch angle θ, angle-of-attack α, altitude h, side-slip angle
β, roll rate p, yaw rate r, roll angle φ, and yaw angle ψ. The control surfaces include: Left and right Leading-Edge Flaps
(LEF); Left and right Trailing-Edge Flaps (TEF); Left and right Ailerons (AIL); Left and right Stabilators (STAB); and
Left and right Rudders (RUD). Control inputs also include left and right engine (PLA).
The following failure scenario is chosen:
• All right-wing surfaces lock at t = 4 seconds. The resulting Σ matrix is Σ = diag[0 0 0 0 0 1],



• All left-wing surfaces undergo Loss-of-Effectiveness (LOE) at t = 4 seconds. The resulting K matrix is K =
diag[0.35 0.65 0.01 0.6 0.8 1],
• Right TEF and AIL recover from failure at t = 12 seconds.
The performance of the θ-FLARE system under the above failure scenario is shown on Figures 4 and 5. It is seen
that, despite multiple severe flight-critical failures and failure recoveries, the tracking performance is excellent and the
error between the aircraft states and those of a reference model are small. In addition, the estimates of the failure-
related parameters converge close to their true values despite limited persistent excitation arising from a single doublet.
The lack of persistent excitation is evident from the response of the θ̂ parameters for the LEF, while all other estimates
converge close to their true values.

[Figure 4 plots omitted: longitudinal states V, q, θ, α, and h versus time, actual response vs. reference model.]

Figure 4. Longitudinal state response with θ-FLARE in case of multiple failures & failure recoveries

The parameter convergence obtained in the case of θ-FLARE can be further improved using the Integrated Self-
Diagnostics (ISD) procedure described next.

IV. FDIR System Robustness


Robustness and performance of FDIR systems based on indirect adaptive control strongly depends on the con-
vergence properties of the estimates of failure-related parameters. Since flight control commands are commonly not
persistently exciting, the parameters may converge to values that are far from the true ones, but the estimation and
tracking errors still converge to zero asymptotically due to the inherent stability properties of the indirect adaptive
control systems.23 Since the steady state values of parameter estimates strongly depend on the command that is applied
at the time of failure, different subsequent commands will cause those estimates to move again (even when there are no
further failures), falsely indicating another fault condition. In addition, in such a case there may be large state transients
due to continued adaptation, which makes the resulting system non-robust.
To address this problem, we recently developed an Integrated Self-Diagnostics (ISD) procedure for arriving at ac-



[Figure 5 plots omitted: lateral states β, p, r, φ, and ψ versus time, actual response vs. reference model.]

Figure 5. Lateral state response with θ-FLARE in case of multiple failures & failure recoveries

curate estimates of failure parameters.3 The ISD procedure consists of injecting a self-diagnostics signal to one or
more actuators for which a failure is indicated, where the failure information is provided either by a Health Monitoring
System (HMS), or by a suitably designed internal mechanism. The procedure also includes applying a compensation
signal to use the remaining actuators for minimizing the effect of self diagnostics (SD) on the system state. The ISD
procedure is applied to the FDIR system based on the new failure parameterization from 3 that results in a minimum
number of adjustable parameters for accommodating a large class of failures. Due to a small number of failure pa-
rameters, the frequency content of the SD signal is relatively low, and the system can rapidly arrive at accurate failure
parameter estimates.3 The ISD system is shown in Figure 8. The key benefit provided by the ISD is that, after the
failure parameters have been accurately identified, the system becomes completely known, and the convergence of the
tracking error in the presence of initial conditions and subsequent disturbances is exponential. This also simplifies the
V&V of the system.
Other benefits of having accurate estimates of failure-related parameters available are that the pilot can be informed
about the health status in a timely manner, and the failure information can be effectively used in the loss-of-control pre-
vention and recovery system. Hence strategies based on accurate estimation of unknown parameters in a reconfigurable
control system are of great importance in the area of flight safety.
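As a rough illustration only (this is not the ISD design of Ref. 3, which also shapes a compensation signal for the healthy actuators), the Python sketch below injects a low-frequency self-diagnostics signal into a hypothetical first-order actuator and recovers its effectiveness coefficient by least squares; the actuator model, excitation signal, and noise level are assumptions.

# Illustrative sketch only: estimate actuator effectiveness k from an injected
# self-diagnostics (SD) signal using least squares on the input/output record.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.01, 5.0
t = np.arange(0.0, T, dt)
k_true = 0.4                                   # actual (unknown) effectiveness
tau = 0.1                                      # first-order actuator time constant

uc = 0.05 * np.sin(2 * np.pi * 0.5 * t)        # low-frequency SD signal
u = np.zeros_like(t)                           # actuator position
for i in range(1, len(t)):                     # udot = (k*uc - u)/tau
    u[i] = u[i-1] + dt * (k_true * uc[i-1] - u[i-1]) / tau
u_meas = u + 0.001 * rng.standard_normal(len(t))

# least-squares fit of k from  tau*udot + u = k*uc
udot = np.gradient(u_meas, dt)
k_hat = np.sum(uc * (tau * udot + u_meas)) / np.sum(uc * uc)
print(f"estimated effectiveness k_hat = {k_hat:.3f} (true {k_true})")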

V. Performance Metrics for Evaluation


The performance of the closed-loop system that can be achieved after failure and control reconfiguration is closely
related to the concept of Failure Criticality. For less critical failures, the performance that can be achieved is close
to the nominal (no-failure) one. For critical failures the desired dynamics that should be followed during control
reconfiguration needs to change to reflect the new (lowered) capabilities of the system since the control law needs to



[Figure 6 plots omitted: left and right LEF, TEF, AIL, STAB, RUD, and PLA responses versus time.]

Figure 6. Actuator response with θ-FLARE in case of multiple failures & failure recoveries

both compensate for a critical failure, and achieve the desired performance. Hence techniques that result in graceful
performance degradation in the presence of critical failures are of great interest.

A. The known-failure case as a metric


One of the issues in the context of reconfigurable control is that of metrics that can be used to compare different
approaches. Probably the best performance can be achieved under an assumption that the failure is known. In such
a case the performance bounds can be computed based on the testing of a corresponding fixed controller through
simulations.
It should be noted that the performance comparison should be made with respect to the case of known failure, and
not to the no-failure case since the maximum performance that the system can achieve under failures is that obtained in
the known failure case. In other words, even when the failure is completely known, the reconfigurable controller may
not be able to achieve the same level of performance as in the no-failure case. This is closely related to the available
control authority and the current Achievable Dynamic Performance (ADP) measure that may be insufficient to achieve
the original performance after control reconfiguration. In such a case the reference model, or the commands, or both
need to be changed resulting in graceful performance degradation in the closed-loop system.

B. Achieving the Achievable Performance


Based on the above discussion, it can be concluded that one metric for comparing different reconfigurable control
algorithms can be based on a measure of closeness of the performance in the unknown failure case with respect to
that in the case when the failure is known. Different cases of known failures can then be simulated and an appropriate
performance index calculated (e.g. integral of the tracking error), and compared with that obtained in the unknown



[Figure 7 plots omitted: estimates of the failure-related parameters (σ, k, and θ̂) for the left and right LEF, TEF, AIL, STAB, and RUD versus time.]

Figure 7. Response of parameter estimates with θ-FLARE in case of failure

failure case. To decrease the number of simulation cases, the Failure Criticality concept can be used to test for the most
critical failures only.
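One simple way to carry out such a comparison is sketched below in Python: an integral-of-tracking-error index is computed for a known-failure run and an unknown-failure run and reported as a ratio. The error traces here are synthetic placeholders; in practice they would be logged from simulations of the reconfigurable controller with and without exact failure knowledge.

# Sketch of the comparison suggested above: an integral-of-tracking-error index
# computed from logged trajectories, normalized by the known-failure baseline.
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
e_known   = 0.5 * np.exp(-1.5 * t)          # placeholder error trace, failure known
e_unknown = 0.8 * np.exp(-0.8 * t)          # placeholder error trace, failure estimated on-line

J_known   = np.sum(e_known**2) * dt          # J = integral of e^2 dt
J_unknown = np.sum(e_unknown**2) * dt
print(f"J_known = {J_known:.3f}, J_unknown = {J_unknown:.3f}, "
      f"ratio = {J_unknown / J_known:.2f}")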
In situations where acceptable performance can be achieved in the case of known failures, one of the questions that
arises is whether an adaptive FDIR system in which there is on-line adjustment of parameters can actually achieve
comparable performance. This issue is closely related to the following: (i) Number of parameters that are adjusted on-
line. Adjustment of a large number of parameters may result in large transients and saturation of the control surfaces,
yielding poor closed-loop performance due to the lack of persistent excitation of the flight control commands. This
makes the resulting closed-loop system non-robust. (ii) Adaptive laws for parameter adjustment. The key requirement
here is that the adaptive laws assure closed-loop stability. This is a difficult problem, particularly in the presence of
actuator dynamics and position and rate limits on the flight control effectors. In the case of flight-critical failures, the
adaptive laws for parameter update also need to be fast and robust since the delay in calculating the failure parameter
estimates or controller parameters can have a detrimental effect on the closed-loop stability and performance; (iii) Adap-
tive reconfigurable control law. Robustness of the baseline controller is of great importance for control reconfiguration.
The baseline controller can be thought of either as the algorithm that is used as a basis for reconfigurable control design (e.g.
LQR, inverse dynamics), or as a baseline legacy controller to which the adaptive reconfigurable part is added in the
form of a retrofit signal.

VI. Failure Criticality


This section defines and illustrates the concept of failure criticality. We begin with the definitions of control
task, achievable dynamic performance (ADP) set and required dynamic performance (RDP) set. It is followed by
the definition of failure criticality and of failure categories. The details of computational procedures and numerical



[Figure 8 block diagram omitted. A Health Monitoring System (HMS) and the FDI block trigger injection of the self-diagnostics signal at the actuators, while a compensator acts through the controller to offset its effect on the airframe.]

Figure 8. Structure of the Integrated Self-Diagnostics (ISD) System (©2006 Scientific Systems Company, Inc.)

examples are given in Appendices A-B. We use an LQ-tracker and a model reference tracker to show how failure
criticality can be used to compare controllers as well as to compare different failures. The comparison results should not
be construed as justification for preferring one controller design method over another. They merely serve to illustrate
how different controllers can be compared in regard to their ability to accommodate failures.

A. Control tasks and dynamic performance sets


Consider a linear time-invariant (LTI) open loop aircraft model:

ẋ = Ax + Bu + w, x(0) ∈ X0 (5)

where x(t) ∈ Rn is the state at time t, u(t) ∈ Rm is the control input at time t and w is an exogenous input belonging to a
specified set W and X0 is a given set of initial conditions. The control inputs are subject to constraints:

u(t) ∈ U (6)

where U is a convex subset of Rm with non-empty interior and containing 0. We will use the F-18 model given in
Appendix A for numerical work. When the exogenous input is white noise of known mean w̄ and known covariance W ≥ 0, the plant is a stochastic system. In this case, we may take the initial condition set X0 as an ellipsoid that specifies the 1σ region for a Gaussian initial state. We may also take W as the set of all L2 signals whose L2 norm is strictly less than 1. All analytical and numerical results are the same in these cases; only the interpretation of the results is different. Throughout, in the stochastic case, we use 1σ ellipsoids in the definitions. Other classes of inputs and initial conditions that can be handled are the set of bounded exogenous inputs and polyhedral initial conditions.
For the LTI system, we consider tasks in which the control objectives are to attain and maintain some desired flight
condition in a stable manner. An example is given below. When stating a control task verbally, we assume that the
initial condition of the nonlinear aircraft is equal to some trim value (and, hence, the LTI system initial state condition
is 0) and that the exogenous input is zero. Later on, we shall define control task more clearly taking into account the
effects of initial condition and exogenous inputs.

Example VI.1 (Glide slope tracking) The aircraft is in steady level flight at airspeed V0 and altitude h0. The task is to acquire a 3◦ glide slope and maintain steady descent at airspeed V0. The task is said to be completed when the aircraft trajectory reaches the desired condition: airspeed is V0, all the state derivatives except ḣ are zero, all the lateral states are zero, V = V0 and θ − α = −3◦.

In the example, task completion is given in terms of some linear equations on states and state derivatives. The
conditions on state derivatives can be expressed as conditions on states and controls using the state space equations.
We will refer to the set of states obtained from these task completion conditions as the ideal desired final state set.
When exogenous inputs are non-zero or when initial state is non-zero, it may not be possible to achieve any ideal
desired final state. In this case, we specify a desired final state set X f that contains the ideal desired final states. The
task is said to be completed if, for any initial condition in X0 and for any exogenous input in W, the corresponding
aircraft trajectory enters X f after a finite time and stays inside thereafter. The size of X f will depend on the sizes of X0
and W. In addition, when the system is stochastic, we interpret task accomplishment in a probabilistic sense.



Example VI.2 (Desired final state set for glide slope tracking) The ideal desired final conditions for glide slope track-
ing are:
V̇ = 0, α̇ = 0, q̇ = 0, θ̇ = 0, β̇ = 0, ṗ = 0, ṙ = 0, φ̇ = 0, ψ̇ = 0
and
V = 0, θ − α = −π/60, q = 0, β = 0, p = 0, r = 0, φ = 0, ψ = 0
where the conditions on state derivatives can be expressed in terms of both states and controls via (5). After some
algebraic manipulations, we can write the above equations as a linear system of equations:

N [α; θ; u] = [0; −π/60]

It is easily checked that this system for the F-18 model has at least one solution and that all solutions can be written as:

[α; θ; u] = N+ [0; −π/60] + Mv

where N+ is the Moore-Penrose inverse of N, M is such that its columns form a basis for the null space of N, and v is a free vector. For the F-18 model,

N+ [0; −π/60] = 10−4 [31.52 −492 35.6 35.6 −15.9 −15.9 −2.3 2.3 −1302 0 0]T ,

and M is the corresponding numerical basis matrix for the null space of N (with v ∈ R4).
Hence, the ideal desired final state set for glide slope tracking (see Appendix A) is given by:

Xideal = { x ∈ R10 : V = 0, q = 0, β = 0, p = 0, r = 0, φ = 0, ψ = 0, and there exist u ∈ U and v ∈ R4 such that [α; θ; u] = N+ [0; −π/60] + Mv }    (7)

Suppose that the wind disturbances w are zero-mean white noise inputs with covariance given in Appendix A. In this
case, it is difficult, if not impossible, to maintain a state in Xideal and we need to enlarge Xideal to a desired final state
set X f .

Definition VI.1 (Control task) Let X0 be a given non-empty open set of initial conditions, Xideal be a given non-empty set of ideal desired final states, W be a given set of disturbance inputs containing zero disturbance, and Xf be an open set of desired final states containing Xideal. The control task is to

1. take the system state x(t), starting from any initial condition in X0 with disturbance input w = 0, to some ideal desired final state in Xideal asymptotically, i.e.,

lim_{t→∞} x(t) = xf for some xf ∈ Xideal,

and

2. take the system state x(t), starting from any initial condition in X0 and for any disturbance input in W, to some state inside the desired final state set Xf in finite time:

x(t) ∈ Xf for all t ≥ T

for some finite time T ≥ 0 (with high probability if the disturbance input is stochastic),



by means of a stabilizing state feedback control law subject to the control constraint (6).

We next characterize control tasks in a way that facilitates actuator failure analysis. Fix a state feedback control
law C that accomplishes the task. For each initial state x0 ∈ X0 and disturbance input w ∈ W, there is a set of values
of control inputs that C generates over time during closed loop system operation. Let it be denoted by Su(C; x0, w).

Definition VI.2 (Required dynamic performance set) Let R be a control task and C be a controller that accomplishes the task. The required dynamic performance (RDP) set needed to accomplish R using C is:

E(R; C) = ∪_{x0,w} {Bu : u ∈ Su(C; x0, w)}    (8)

where the union is taken over all initial conditions in X0 and exogenous inputs in W.

It is the set of all quantities of the form Bu that are generated during closed loop system operation for different
initial conditions and exogenous inputs. The computational tools for calculating RDP set are given in Appendix B. The
computations involve well-known convex programming methods for calculating and approximating reach sets. The
RDP set is a subset of:

BU = {Bu : u ∈ U} (9)

if controller C accomplishes the task. We will refer to BU as the nominal achievable dynamic performance (ADP) set.
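Appendix B computes the RDP set with convex reach-set methods; as a cruder alternative for illustration, the Python sketch below builds a sampled inner approximation of E by simulating the closed loop for random initial conditions and disturbances and collecting the values Bu. The plant, the feedback gain, and the noise level are made-up placeholders, not the F-18 model of Appendix A.

# Crude Monte-Carlo sketch (not the reach-set method of Appendix B) of a sampled
# inner approximation of the RDP set (8).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.2, 0.0, 0.1], [1.0, 0.8, -0.6]])
Kfb = np.array([[0.4, 0.3], [0.2, 0.5], [-0.1, 0.2]])   # u = -Kfb x (placeholder stabilizing gain)

dt, T, n_runs = 0.01, 3.0, 200
rdp_samples = []
for _ in range(n_runs):
    x = rng.uniform(-0.1, 0.1, size=2)                  # random initial condition in X0
    for _ in range(int(T / dt)):
        w = 0.05 * rng.standard_normal(2)                # white-noise disturbance
        u = -Kfb @ x
        rdp_samples.append(B @ u)                        # element of the RDP set
        x = x + dt * (A @ x + B @ u + w)

rdp = np.array(rdp_samples)
print("sampled RDP bounding box per axis:")
print(" min:", rdp.min(axis=0).round(3), " max:", rdp.max(axis=0).round(3))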

Example VI.3 (ADP and RDP sets for glide slope tracking with an LQ-tracker) Let X0 = {0} and W be zero-mean white noise. Consider the LQ-tracker design given in Appendix A. For this example,

ūf = 10−2 [1.12 1.12 8.15 8.15 0.58 −0.58 −5.8 0 0]T .

We will also consider an LQR gain with R = 1000I instead of R = 100I as in Appendix A. Large values of R result in less aggressive controllers.
[Figure 9 plots omitted: (a) control weight R = 1000I; (b) control weight R = 100I.]

Figure 9. Union of projections of 2σ required dynamic performance set onto (V̇, α̇) plane for glide slope tracking task for two different LQ trackers. The interior of the polytope shown in Blue is the nominal achievable dynamic performance set BU.

Figure 9 shows the 2σ required dynamic performance set for T = 3. The RDP sets for both controllers are located
within the nominal ADP set (shown in Blue). So, with high probability, both controllers accomplish the glide slope



tracking task in the presence of initial condition errors and wind disturbances. Note that the RDP set for the less
aggressive controller (larger R) is farther away from the boundaries of the nominal ADP set compared to the RDP set
for the more aggressive controller. This means that changes in ADP set due to failure would have a greater impact on
the performance of the more aggressive controller than the less aggressive one.

B. Pre-Ordering of Critical Failures


We present a notion of criticality of actuator failures relative to a control task. The notion leads to a pre-ordering of actuator failures from which the failures that are critical for the control task can be determined. Throughout, the required
dynamic performance set associated with a task-controller pair (R; C) is denoted by E without explicitly showing R and
C.

1. Failure Parametrization
Flight control actuator failures can be broadly divided into two categories: (i) failures that result in a total loss of
effectiveness of the control effector; and (ii) failures that cause partial loss of effectiveness. The former includes lock-
in-place (LIP), float, and hard-over failure (HOF), while the latter describes general loss-of-effectiveness (LOE) types
of failure. In the case of LIP failures, the actuator “freezes” at a certain condition and does not respond to subsequent
commands. HOF is characterized by the actuator moving to and remaining at its upper or lower position limit regardless
of the command. Float failure occurs when the actuator contributes zero moment to the control authority. LOE is
characterized by lowering the actuator gain with respect to its nominal value. The actuator failures can be expressed
mathematically as:

ui(t) =
    uci(t),                  ki(t) = 1,               for all t ≥ 0       (No-Failure Case)
    ki(t) uci(t),            0 < εi ≤ ki(t) < 1,      for all t ≥ tFi     (Loss of Effectiveness)
    0,                       ki(t) = 1,               for all t ≥ tFi     (Float Type of Failure)
    uci(tFi),                ki(t) = 1,               for all t ≥ tFi     (Lock-in-Place Failure)
    (ui)min or (ui)max,      ki(t) = 1,               for all t ≥ tFi     (Hard-Over Failure)

where tFi denotes the time instant of failure of the ith actuator, ki denotes its effectiveness coefficient such that ki ∈ [εki, 1], and εki > 0 denotes its minimum effectiveness.
To parametrize these failures, define the following sets:

K = { diagonal [k1, · · · , km] : 0 ≤ ki ≤ 1, i = 1, · · · , m }    (10a)
S = { diagonal [σ1, · · · , σm] : σi ∈ {0, 1}, i = 1, · · · , m }    (10b)
U = { col [u1, · · · , um] : ui min ≤ ui ≤ ui max, i = 1, · · · , m }    (10c)

where ui min and ui max are specified lower and upper bounds respectively. The control input in the presence of failures
is given by:

u = KΣuc + (I − Σ) ū (11)

where K ∈ K is the LoE gain, Σ ∈ S specifies failed actuators and ū ∈ U specifies the actuator position at (and after)
failure. The case of no failure corresponds to K = I, Σ = I and ū = 0.
For computational purposes, the lower and upper bounds on the control inputs will be assumed to satisfy the
following inequality:

ui min < 0 < ui max for i = 1, · · · , m (12)

so that U has a non-empty interior (0 is in the interior of U).



2. Some Motivational Arguments
For each K ∈ K, Σ ∈ S and ū ∈ U, define:

A (K, Σ, ū) = {B (KΣuc + (I − Σ) ū) : uc ∈ U} (13)

which is the set of all vectors η in Rn for which the equation:

B (KΣuc + (I − Σ) ū) = η

has a solution uc ∈ U. It is easy to see that A (K, Σ, ū) is not empty for any K ∈ K, Σ ∈ S and ū ∈ U. In particular,

A (I, I, 0) = BU

corresponds to the no failure case. Further, since A (K, Σ, ū) is the image of the polytope U under a linear map, it is a
polytope for any K ∈ K, Σ ∈ S and ū ∈ U.
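Because A (K, Σ, ū) is the image of the polytope U under an affine map, it can be computed by mapping the vertices of U and taking their convex hull; the Python sketch below does this for an illustrative two-state, three-actuator example (B, the control limits, and the failure are placeholders) and compares the areas of the nominal and post-failure sets.

# Sketch of computing the post-failure ADP set A(K, Sigma, ubar) in (13) as the
# image of the box U under uc -> B(K Sigma uc + (I - Sigma) ubar).
import itertools
import numpy as np
from scipy.spatial import ConvexHull

B     = np.array([[0.2, 0.0, 0.1], [1.0, 0.8, -0.6]])
u_min = np.array([-0.5, -0.4, -0.3])
u_max = np.array([ 0.5,  0.4,  0.3])

def adp_vertices(K, Sigma, ubar):
    """Map every vertex of the box U; the ADP set is their convex hull."""
    verts = []
    for corner in itertools.product(*zip(u_min, u_max)):
        uc = np.array(corner)
        verts.append(B @ (K @ Sigma @ uc + (np.eye(3) - Sigma) @ ubar))
    return np.array(verts)

# nominal ADP set BU versus a 50% LOE on actuator 2 plus actuator 3 locked at -0.3
V_nom  = adp_vertices(np.eye(3), np.eye(3), np.zeros(3))
V_fail = adp_vertices(np.diag([1.0, 0.5, 1.0]), np.diag([1.0, 1.0, 0.0]),
                      np.array([0.0, 0.0, -0.3]))
print("area of nominal ADP set :", round(ConvexHull(V_nom).volume, 4))
print("area of post-failure ADP:", round(ConvexHull(V_fail).volume, 4))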
A property of A (K, Σ, ū) is given next. This property will motivate the use of A (K, Σ, ū) as a means to compare
criticality of failures.

Proposition VI.1 (LoE failures) Let K1 ∈ K, K2 ∈ K, Σ ∈ S and ū ∈ U be given. Suppose that the loss-of-
effectiveness (LoE) gains K1 and K2 are such that:

K1 ≤ K2

i.e. K1 corresponds to a more severe LoE than K2 . Then,

A (K1 , Σ, ū) ⊂ A (K2 , Σ, ū)

Proof: Pick any η ∈ A (K1 , Σ, ū). By definition, there exists uc ∈ U such that

η = B (K1 Σuc + (I − Σ) ū)

Now, define ûc as follows: the ith component of ûc is given by:

ûci = uci                  if K2i = 0
ûci = uci (K1i / K2i)      otherwise

where K1i and K2i are the ith diagonal entries of K1 and K2 respectively. Note that K1 ≤ K2 implies that if K2i = 0 then
K1i = 0. Therefore, it is easy to verify that
K2 ûc = K1 uc
and, using the diagonal structure of Σ, K1 and K2 , that

K2 Σûc = K1 Σuc

Therefore,
B (K2 Σûc + (I − Σ) ū) = B (K1 Σuc + (I − Σ) ū) = η
Finally, since 0 is in the interior of U and the ratio K1i /K2i ≤ 1 whenever K2i > 0, we see that ûc ∈ U. So, η ∈ A (K2 , Σ, ū).
Figure 10 shows Proposition 1 graphically. So, at least in the case of LoE failures, it is natural to associate criticality
with the size of A (K1 , Σ, ū) in relation to the size of A (K2 , Σ, ū). However, if the control task can be completed with
an η belonging to (or with a subset of) A (K1 , Σ, ū), then the failures (K1 , Σ, ū) and (K2 , Σ, ū) should have the same level
of criticality relative to the control task.
To motivate further, let (Ki , Σi , ūi ), i = 1, · · · , 5 define different failures. Suppose that a non-empty set E:

E ⊂ A (I, I, 0) = BU,



[Figure 10 illustration omitted: nested achievable sets for the no-failure case, a mild LoE failure (LoE 2), and a more severe LoE failure (LoE 1).]

Figure 10. The set A gets smaller with severity of LoE failures.

that specifies all the vectors η of the form Bu that may be needed to complete a given control task, is given. Figure 11
shows the set A(Ki , Σi , ūi ) for each failure and the control task set E. Failure 1 is such that

E ⊂ A (K1 , Σ1 , ū1 )

which implies that the control task can be accomplished despite the failure. Therefore, failure 1 is not as critical as
failure 2 for the control task because, under failure 2

E ⊄ A (K2 , Σ2 , ū2 ),

and the control task may not be accomplished.

[Figure 11 panels omitted: (a) Failure 1, (b) Failure 2, (c) Failure 3, (d) Failure 4, (e) Failure 5.]

Figure 11. Different failure scenarios and their relationship with the “control task set” - Rectangular region is the achievable set A (I, I, 0) = BU with no failure, circular region is the control task set E and hexagonal region is the achievable set A(Ki , Σi , ūi ) after failure.

The area of A (K2 , Σ2 , ū2 ) corresponding to failure 2 is smaller than the area of A (K3 , Σ3 , ū3 ) corresponding to failure 3. But, one would say that failure 3 is more critical than failure 2 for the given control task because

A (K3 , Σ3 , ū3 ) ∩ E

is empty, implying that there is no chance of performing the control task under failure 3, whereas

A (K2 , Σ2 , ū2 ) ∩ E

is not empty, implying that there is still some potential for performing the control task under failure 2. On further examination of the figures, one would say that failure 3 and failure 4 have the same level of criticality, even though



their areas are different, simply because the control task cannot be accomplished when either failure occurs. So, it is the
intersection of E and A (K, Σ, ū) that is a measure of failure criticality. Now, for failure 2 and failure 5, the respective
areas are equal and, without further information about the control task, it is hard to decide which of these failures is
more critical.

3. Definitions and Computational Procedures


The k-dimensional volume of a k-dimensional convex subset of Rn is denoted by volk (·). The dimension of a convex
set is the dimension of the span of the elements of the convex set. The k-dimensional volume of an empty set is zero
for any k.

Definition VI.3 (Relatively critical) For i = 1, 2, let (Ki , Σi , ūi ) be given failures.

1. Let E be a k-dimensional convex subset of BU representing the required dynamic performance set. (K1 , Σ1 , ū1 ) is said to be a more critical failure than (K2 , Σ2 , ū2 ) relative to E if

volk (A (K1 , Σ1 , ū1 ) ∩ E) ≤ volk (A (K2 , Σ2 , ū2 ) ∩ E)

2. (K1 , Σ1 , ū1 ) is said to be a more critical failure than (K2 , Σ2 , ū2 ) if (K1 , Σ1 , ū1 ) is a more critical failure than (K2 , Σ2 , ū2 ) relative to any convex subset of BU.

The definition gives a computational procedure to compare criticality of two failures. The numerical steps are as
follows:

1. Suppose that a task-controller pair is given.

2. Calculate the required dynamic performance set E using the method given in Appendix B.

3. Calculate the achievable dynamic performance sets A (Ki , Σi , ūi ) and the volumes vi of their intersection with the
required dynamic performance set.

4. If v1 is less than v2, declare that failure 1 is more critical for the task-controller pair than failure 2.

The major computational steps are the calculation of E and the calculation of volumes. The set E is almost never
convex. But, it can be approximated by convex sets or, better yet, by the union of a finite number of convex sets. The
calculation of volume of sets can be very difficult even for convex sets.15 We may therefore consider using the volume
of the smallest volume ellipsoid containing the set instead of the volume of the set itself. Another approach is to project
the sets to two-dimensional subspaces by considering two state variables at a time and then to use the area of projected
region in the definition. This is the approach used for numerical examples in this paper.
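A Python sketch of steps 2-4 using this two-dimensional projection approach is given below; it uses an assumed rectangular stand-in for the projected RDP set E and illustrative B, control limits, and failures, with the shapely package used for the polygon intersection (the actual RDP set would come from the Appendix B computation).

# Sketch of the criticality comparison via 2-D projections: build the projected
# ADP polygon for each failure and compare areas of intersection with E.
import itertools
import numpy as np
from scipy.spatial import ConvexHull
from shapely.geometry import Polygon

B     = np.array([[0.2, 0.0, 0.1], [1.0, 0.8, -0.6]])
u_min = np.array([-0.5, -0.4, -0.3]); u_max = -u_min

def adp_polygon(K, Sigma, ubar):
    pts = [B @ (K @ Sigma @ np.array(c) + (np.eye(3) - Sigma) @ ubar)
           for c in itertools.product(*zip(u_min, u_max))]
    hull = ConvexHull(np.array(pts))
    return Polygon(np.array(pts)[hull.vertices])

# hypothetical 2-D projection of the RDP set E for some task-controller pair
E = Polygon([(-0.05, -0.4), (0.05, -0.4), (0.05, 0.4), (-0.05, 0.4)])

failures = {
    "f1: 50% LOE on act. 1": (np.diag([0.5, 1, 1]), np.eye(3), np.zeros(3)),
    "g1: act. 1 LIP at min ": (np.eye(3), np.diag([0.0, 1, 1]), np.array([-0.5, 0, 0])),
}
for name, (K, Sigma, ubar) in failures.items():
    gamma = adp_polygon(K, Sigma, ubar).intersection(E).area / E.area
    print(f"{name}: gamma = {gamma:.2f}")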

Example VI.4 (LOE/LIP failures with model reference controller) We use a model reference controller for glide
slope tracking given in Appendix A in this example. Figures 12-13 show the RDP set E and the ADP sets for various loss of effectiveness (LOE) and lock-in-place (LIP) failures. To compute the sets, we used X0 and the wind covariance W given in Appendix A. The regions enclosed by the Black lines are the 2σ RDP sets.
Since the 2σ RDP set is within the ADP set for LOE failures except for rudder failure (Figure 12), we can say with
high likelihood that these failures are not critical for the glide slope tracking task. In comparison, LIP failures shown
in Figure 13 are more critical.

The definition of relatively critical failure induces a pre-order ≺fc on actuator failures for a task-controller pair. That is, f1 ≺fc f2 if f2 = (K2 , Σ2 , ū2 ) is relatively more critical than f1 = (K1 , Σ1 , ū1 ). A pre-order is a binary relation that is reflexive (f ≺fc f) and transitive (f1 ≺fc f2 and f2 ≺fc f3 imply f1 ≺fc f3). We use this pre-ordering to group failures for a task-controller pair into different categories ranging from least critical to most critical. Note that

volk (A (K, Σ, ū) ∩ E) ≤ volk (E)



[Figure 12 plots omitted: (a) V̇ vs. α̇; (b) V̇ vs. β̇.]

Figure 12. Comparison of ADP sets for various loss of effectiveness (LOE) failures with the 2σ RDP set (shown in Black). The polytope in Blue is the nominal ADP set BU.

always and, since volk (E) ≠ 0, the ratio of volumes:

γ (K, Σ, ū) = volk (A (K, Σ, ū) ∩ E) / volk (E) = 1 − volk (E \ A (K, Σ, ū)) / volk (E)    (14)

is well-defined and lies between 0 and 1. We will use γ (K, Σ, ū) to group failures.
Let a task-controller pair be given. As before, let E be the required dynamic performance set (RDP) associated with
the task-controller pair. We say that an actuator failure (K, Σ, ū) is of:

1. Category 0 (non critical) provided that γ (K, Σ, ū) = 1,

2. Category 1 (somewhat critical) provided that 0.75 ≤ γ (K, Σ, ū) < 1,

3. Category 2 (critical) provided that 0.25 ≤ γ (K, Σ, ū) < 0.75,


4. Category 3 (severely critical) provided that 0 ≤ γ (K, Σ, ū) < 0.25, and

5. Category 4 provided that there is a strictly positive distance between E and the ADP set A (K, Σ, ū).

These categories allow us not only to compare different actuator failures for the same task-controller pair but also to
compare controllers for the same task-failure pair. If a controller C1 for a task has a certain failure in category 1 and controller C2 for the same task has the same failure in category 2, then C1 is a better controller than C2. Our idea is to
compare controllers in this way for failures that are highly likely to occur.
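The category assignment itself is a direct translation of the thresholds above; a minimal Python helper is sketched below, where gamma is the volume ratio in (14) and the separated flag marks Category 4.

# Minimal helper implementing the category thresholds above.
def failure_category(gamma: float, separated: bool = False) -> int:
    if separated:
        return 4            # E and A(K, Sigma, ubar) are a positive distance apart
    if gamma == 1.0:
        return 0            # non-critical
    if gamma >= 0.75:
        return 1            # somewhat critical
    if gamma >= 0.25:
        return 2            # critical
    return 3                # severely critical

print([failure_category(g) for g in (1.0, 0.8, 0.5, 0.1)])   # -> [0, 1, 2, 3]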

Example VI.5 (Comparison of LQ-tracker and model reference controller) We will compare the LQ-tracker and the model reference tracker for F-18 landing given in Appendix A. The following single loss of effectiveness (LOE) failures:

f1 : 50% LOE in δSTABl
f2 : 50% LOE in δRUDl
f3 : 50% LOE in δPLA
single lock in place (LIP) failures:



[Figure 13 plots omitted: (a) V̇ vs. α̇; (b) V̇ vs. β̇.]

Figure 13. Comparison of ADP sets for various lock in place (LIP) failures with the 2σ RDP set (shown in Black). The polytope in Blue is the nominal ADP set BU.

g1 : δSTABl LIP at its minimum value
g2 : δRUDl LIP at its minimum value
g3 : δPLA LIP at its minimum value

and the compound LOE/LIP failures (f1, g2), (f1, g3), (f2, g1), (f2, g3), (f3, g1) and (f3, g2) are considered. Table 1 shows the results of the analysis.

              LOE             LIP             LOE/LIP
Controller    f1  f2  f3      g1  g2  g3      (f1,g2)  (f1,g3)  (f2,g1)  (f2,g3)  (f3,g1)  (f3,g2)
LQ            0   0   1       2   0   3       0        4        3        3        3        2
MRC           0   1   0       3   2   4       2        4        4        4        3        2

Table 1. Failure categories of actuator failures for glide slope tracking task using LQ and MRC trackers (controllers are not adaptive).

The failure categories shown in Table 1 indicate the level of criticality of a failure when a controller is used for
glide slope tracking. An important point to remember is that the controllers used to build Table 1 are not adaptive.
With this caveat, we make a number of observations:
• Compound failures (fi, gj) are more critical than both single failures fi and gj, no matter which controller is
used. For example, for the LQ tracker, the single LOE failure f3 is somewhat critical and the single LIP failure
g2 is non-critical, but the compound failure (f3, g2) is critical. This is a desirable consequence of our pre-order.
• LIP failures seem to be more critical than LOE failures for both LQ and MRC controllers. The reason is that,
when a LIP failure occurs, the number of independent control inputs is reduced by 1 and the remaining control
inputs must compensate for the potentially non-zero effect of the locked actuator.
• The LQ tracker appears to be the better controller overall. This does not mean that LQ-tracker design is a better
method for glide slope tracking than MRC-tracker design; the difference may lie in the reference model used by the
MRC, which is not exponentially stable.
In future work, we will use reconfigurable controllers to develop failure category ratings as shown in Table 1.

VII. Choosing an appropriate testbed for performance evaluation
In our previous work we used several test bed platforms for performance evaluation of our algorithms. Aircraft
dynamics simulations include F/A-18 aircraft, Boeing’s Tailless Advanced Fighter Aircraft (TAFA), and generic models
of spacecraft and transport aircraft. These models are included in our Adaptive Reconfigurable Control Analysis,
Design and Evaluation (ARCADE) software toolbox9 written in Matlab.
Even though the ARCADE toolbox includes several representative vehicle models, and other types of aircraft have
also been used for performance evaluation by other research groups, the main issue in this context is the apparent lack
of a standard testbed of the kind that exists in other areas (e.g. engine health monitoring and control). A standard FDIR
testbed would enable rapid testing of different FDIR techniques and comparison of algorithms in terms of their response
to multiple flight-critical failures, external disturbances and noise, and its availability would be highly beneficial for
future research in this area.

VIII. Conclusions
In this paper, several issues arising in the context of actuator Failure Detection, Identification and Reconfiguration
(FDIR) in flight control are addressed. These include algorithms, performance, implementation and metrics. Several
FDIR algorithms are reviewed from the point of view of their complexity, ease of implementation and robustness prop-
erties. It is shown that a FDIR system based on the actuator failure parameterization achieves excellent performance
in the presence of multiple simultaneous flight critical failures, and results in improved convergence of the failure-
related parameter estimates. A new self-diagnostics procedure is described that results in guaranteed convergence of
the parameter errors to zero, and improved performance of the overall closed-loop system. Following that, perfor-
mance metrics for evaluation of FDIR systems are discussed. As a first step towards a quantitative performance metric,
we introduce the concept of failure criticality and present a grouping of actuator failures according to their severity
for a given task-controller pair. Failure criticality is based on comparing the achievable dynamic performance after
failure with the required dynamic performance needed to accomplish a given task by a given controller. We present
computational methods and illustrate the concept with numerical examples.
Future work in this area will be focused on the extensions of the proposed Failure Criticality concept to the case
of reconfigurable controllers. We will also pursue on-line implementation of the algorithms arising in the context of
Failure Criticality, and their use in the design of a comprehensive loss-of-control prevention and recovery system that
will be robust not only to severe actuator failures, but also to control effector damage, sensor failures, control computer
faults, and pilot errors. The tools described in this paper will facilitate the development of such a system.

References
1. M. Bodson and J. Groszkiewicz, "Multivariable Adaptive Algorithms for Reconfigurable Flight Control", IEEE Transactions on Control Systems Technology, Vol. 5, No. 2, pp. 217-229, March 1997.
2. Boeing Phantom Works, "Reconfigurable Systems for Tailless Fighter Aircraft - RESTORE (First Draft)", Contract No. F33615-96-C-3612, System Design Report No. A007, St. Louis, MO, May 1998.
3. J. D. Bošković, J. Redding and R. K. Mehra, "Integrated Health Monitoring and Fast on-Line Actuator Reconfiguration Enhancement (IHM-FLARE) System", Final Report for NASA Langley Phase I SBIR, Contract No. NNL06AA26P, July 2006.
4. J. D. Bošković and R. K. Mehra, "A Multiple Model-based Decentralized System for Accommodation of Failures in Second-Order Flight Control Actuators", to be presented at the 2006 American Control Conference, Minneapolis, MN, June 14-16, 2006.
5. J. D. Bošković, S. E. Bergstrom and R. K. Mehra, "Adaptive Accommodation of Failures in Second-Order Flight Control Actuators with Measurable Rates", in Proceedings of the 2005 American Control Conference, Portland, OR, June 8-10, 2005.
6. J. D. Bošković and R. K. Mehra, "Robust Fault-Tolerant Control Design for Aircraft Under State-Dependent Disturbances", AIAA Journal of Guidance, Control & Dynamics, Vol. 28, No. 5, pp. 902-917, September-October 2005.
7. J. D. Bošković, S. E. Bergstrom, R. K. Mehra, J. Urnes, Sr., M. Hood, and Y. Lin, "Fast on-Line Actuator Reconfiguration Enabling (FLARE) System", in Proceedings of the 2005 AIAA Guidance, Navigation and Control Conference, San Francisco, CA, August 15-18, 2005.
8. J. D. Bošković, S. E. Bergstrom and R. K. Mehra, "Aircraft Prognostics and Health Management (PHM) and Adaptive Reconfigurable Control (ARC) System", Final Report for NASA DFRC Phase II SBIR, Contract No. NAS4-02017, March 2004.
9. S. E. Bergstrom, J. D. Bošković and R. K. Mehra, "Development of an Adaptive Reconfigurable Control Analysis, Design & Evaluation (ARCADE) Toolbox", AIAA-2003-5378, in Proceedings of the 2003 AIAA Guidance, Navigation and Control Conference, Austin, TX, August 11-14, 2003.
10. J. D. Bošković and R. K. Mehra, "A Multiple Model Adaptive Flight Control Scheme for Accommodation of Actuator Failures", AIAA Journal of Guidance, Control & Dynamics, Vol. 25, No. 4, pp. 712-724, 2002.
11. J. D. Bošković and R. K. Mehra, "A Decentralized Scheme for Autonomous Compensation of Multiple Simultaneous Flight-Critical Failures", in Proceedings of the 2002 AIAA Guidance, Navigation and Control Conference, Monterey, CA, August 5-8, 2002.
12. J. D. Bošković and R. K. Mehra, "Intelligent Adaptive Control of a Tailless Advanced Fighter Aircraft under Wing Damage", AIAA Journal of Guidance, Control & Dynamics, Vol. 23, No. 5, pp. 876-884, September-October 2000.
13. J. D. Bošković and R. K. Mehra, "Stable Multiple Model Adaptive Flight Control for Accommodation of a Large Class of Control Effector Failures", in Proceedings of the 1999 American Control Conference, San Diego, CA, June 1999.
14. J. D. Bošković, S.-H. Yu, and R. K. Mehra, "A Stable Scheme for Automatic Control Reconfiguration in the Presence of Actuator Failures", in Proceedings of the 1998 American Control Conference, Vol. 4, pp. 2455-2459, Philadelphia, PA, June 24-26, 1998.
15. B. Büeler, A. Enge, and K. Fukuda, "Exact Volume Computation for Polytopes: A Practical Study", in G. Kalai and G. M. Ziegler (Eds.), Polytopes - Combinatorics and Computation, Vol. 29 of DMV Seminar, pp. 131-154, Birkhäuser, 2000 (see also lix.polytechnique.fr/Labo/Andreas.Enge/Vinci.html).
16. A. P. Loh, A. M. Annaswamy and F. P. Skantze, "Adaptation in the Presence of a General Nonlinear Parameterization: An Error Model Approach", IEEE Transactions on Automatic Control, Vol. 44, 1999, pp. 1634-1652.
17. J. D. Bošković, "Adaptive Control of a Class of Nonlinearly-Parametrized Plants", IEEE Transactions on Automatic Control, Vol. 43, No. 7, 1998, pp. 930-934.
18. J. Brinker and K. Wise, "Flight Testing of a Reconfigurable Flight Control Law on the X-36 Tailless Fighter Aircraft", Proc. AIAA Guidance, Navigation and Control Conference, Denver, CO, August 2000.
19. J. Brinker and K. Wise, "Reconfigurable Flight Control of a Tailless Advanced Fighter Aircraft", Proc. 1998 AIAA Guidance, Navigation and Control Conference, Vol. 1, pp. 75-87, Boston, MA, August 1998.
20. A. Calise, S. Lee and M. Sharma, "Direct Adaptive Reconfigurable Control of a Tailless Fighter Aircraft", Proc. 1998 AIAA Guidance, Navigation and Control Conference, Vol. 1, pp. 88-97, Boston, MA, August 1998.
21. P. Chandler, M. Pachter and M. Mears, "System Identification for Adaptive and Reconfigurable Control", Journal of Guidance, Control & Dynamics, Vol. 18, No. 3, pp. 516-524, May-June 1995.
22. J. D. Monaco et al., "Implementation and Flight Test Assessment of an Adaptive, Reconfigurable Flight Control System", Proc. AIAA Guidance, Navigation and Control Conference, New Orleans, LA, August 1997.
23. K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems, Prentice Hall, Englewood Cliffs, NJ, 1988.
24. L. Vandenberghe and S. Boyd, "Applications of Semidefinite Programming", available at stanford.edu/~boyd/sdp-apps.html.

A. State space model for numerical example


The state space model used for the numerical examples is a model of the F-18 linearized at 228.5 ft/s airspeed, 500 ft
altitude and 8° angle of attack. The state space model has the form (5). The (perturbation) state vector consists of
airspeed V, angle of attack α, pitch rate q, pitch angle θ, altitude h, sideslip angle β, roll rate p, yaw rate r, roll angle φ
and yaw angle ψ, in that order. The (perturbation) control input vector consists of left and right leading edge flaps
(δLEFl, δLEFr), left and right stabilators (δSTABl, δSTABr), left and right rudders (δRUDl, δRUDr), power lever angle δPLA,
and left and right ailerons (δAILl, δAILr). All angles and control inputs are in radians, speed is in ft/sec and angular
rates are in radians/sec. For simplicity, wind disturbance input w is modeled as a zero-mean white noise input with
covariance:
\[ W = A\,\mathrm{diag}\!\left[\,2,\ \pi/180,\ \pi/180,\ 0,\ 0,\ \pi/180,\ \pi/180,\ \pi/180,\ 0,\ 0\,\right] A' \]
where A is the system matrix from (5). This corresponds to standard deviations of 2 ft/sec in airspeed V, 1° in α and β,
and 1°/sec in p, q and r. The initial states are assumed to have a Gaussian density with zero mean and standard deviation
\[ \mathrm{diag}\!\left[\,2,\ \pi/180,\ \pi/180,\ \pi/180,\ 2,\ \pi/180,\ \pi/180,\ \pi/180,\ \pi/180,\ \pi/180\,\right] \]

The open loop system is unstable.
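A minimal sketch of this disturbance model, assuming the 10×10 system matrix A from (5) is available as a NumPy array (the matrix entries themselves are not reproduced here):

    import numpy as np

    def wind_covariance(A):
        # W = A diag[2, pi/180, pi/180, 0, 0, pi/180, pi/180, pi/180, 0, 0] A'
        d = np.pi / 180.0
        return A @ np.diag([2.0, d, d, 0.0, 0.0, d, d, d, 0.0, 0.0]) @ A.T

    def initial_state_std():
        # standard deviation of the zero-mean Gaussian initial state
        d = np.pi / 180.0
        return np.diag([2.0, d, d, d, 2.0, d, d, d, d, d])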


Consider a task whose objective is to acquire and maintain a certain flight condition in the absence of exogenous
inputs. We assume that this final flight condition is specified by a set of equations and inequalities:

g(x, u) ≤ 0

involving states and control inputs. The ideal desired final state set is given by:

\[ X_{\mathrm{ideal}} = \left\{\, x \in \mathbb{R}^{10} \;:\; \text{there exists } u \in U \text{ such that } g(x,u) \le 0 \,\right\} \]

where U is the control constraint set. Any controller that accomplishes the task, in the absence of exogenous in-
puts, must reach some state in Xideal . The calculation of this set for the glide slope tracking problem is shown in
Example VI.2.
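As an illustrative sketch only, membership in X_ideal can be checked by a small feasibility program in the common case where the task constraints happen to be affine, g(x, u) = Gx x + Gu u − b ≤ 0, and U is a box of actuator limits. The quantities Gx, Gu, b, u_min and u_max below are hypothetical problem data, not quantities defined in this paper.

    import numpy as np
    from scipy.optimize import linprog

    def in_X_ideal(x, Gx, Gu, b, u_min, u_max):
        # x is in X_ideal iff some u with u_min <= u <= u_max satisfies Gu u <= b - Gx x
        res = linprog(c=np.zeros(Gu.shape[1]),
                      A_ub=Gu, b_ub=b - Gx @ x,
                      bounds=list(zip(u_min, u_max)))
        return res.success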
For the glide slope tracking problem, we consider an LQ-tracker and a model reference tracker. The LQ-tracker
has the form:

u = F x + ū (15)

21 of 24

American Institute of Aeronautics and Astronautics


where F is a stabilizing LQR gain and, for simplicity, ū is given by:

\[ \bar{u}(t) = \begin{cases} (t/T)\,\bar{u}_f, & 0 \le t \le T \\ \bar{u}_f, & t > T \end{cases} \tag{16} \]

where ū_f is the steady-state value of ū and T > 0. The steady-state value is obtained from the ideal desired final state
set X_ideal as follows. Choose any x_f ∈ X_ideal. There exists u_f ∈ U such that
\[ g(x_f, u_f) \le 0. \]
Define:
\[ \bar{u}_f = -F x_f + u_f. \]
Note that, under the above choices, for t > T , we have:

\[ u(t) = F\left( x(t) - x_f \right) + u_f \]

and, using a change of variables and the fact that F is a stabilizing gain, we can show that the LQ-tracker accomplishes
the control task when exogenous inputs are set to zero and provided that control constraints are met during the transient
period t ≤ T. We typically adjust T to avoid control saturation. For the numerical examples, we use T = 3 and state
and control weight matrices of:

Q = diag [1, 0, 1, 0, 0, 1, 1, 1, 1, 1] and R = 100I

respectively.
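A minimal sketch of this tracker, under the assumption that a stabilizing gain F and a feasible pair (x_f, u_f) have already been computed (both are treated here as given inputs; this is an illustration, not the toolbox implementation):

    import numpy as np

    def lq_tracker(F, x_f, u_f, T=3.0):
        # Steady-state feedforward from (x_f, u_f):  u_bar_f = -F x_f + u_f
        u_bar_f = -F @ x_f + u_f
        def control(t, x):
            # Ramp the feedforward over [0, T] to avoid control saturation, Eq. (16),
            # then apply the feedback law u = F x + u_bar, Eq. (15).
            u_bar = (min(t, T) / T) * u_bar_f
            return F @ x + u_bar
        return control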
The model reference tracker uses the following reference model:

\[
\begin{bmatrix} \dot{x}_{lon} \\ \dot{x}_{lat} \end{bmatrix}
=
\begin{bmatrix} A_{lon} & 0 \\ 0 & A_{lat} \end{bmatrix}
\begin{bmatrix} x_{lon} \\ x_{lat} \end{bmatrix}
+
\begin{bmatrix} B_{lon} & 0 \\ 0 & B_{lat} \end{bmatrix}
\begin{bmatrix} r_{lon} \\ r_{lat} \end{bmatrix}
\]
where
\[
A_{lon} = \begin{bmatrix}
-0.08 & 0 & 0 & 0 & 0 \\
0 & -0.37 & 1 & 0 & 0 \\
0 & 0 & -2.8 & -4 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & -228.5 & 0 & 228 & 0
\end{bmatrix}, \quad
B_{lon} = \begin{bmatrix}
0.08 & 0 & 0 \\
0 & 0.37 & 0 \\
0 & 0 & 4 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix},
\]
\[
A_{lat} = \begin{bmatrix}
-5 & 0 & 0 & 0 & 0 \\
0 & -2.8 & 0 & -4 & 0 \\
0 & 0 & -2.8 & 0 & -4 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0
\end{bmatrix}, \quad
B_{lat} = \begin{bmatrix}
5 & 0 & 0 \\
0 & 4 & 0 \\
0 & 0 & 4 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix},
\]

the reference input rlat = 0 and

\[ r_{lon}(t) = \begin{cases} (t/T)\,r_f, & 0 \le t \le T \\ r_f, & t > T \end{cases} \tag{17} \]

where r_f = [0, 0, π/60]′. The control input u is generated as:
\[ u = \left( B' Q B + R \right)^{-1} B' Q \left( -A x + A_m x + B_m r \right) \]

As in the case of LQ tracker, we can show that the model reference controller accomplishes the control task when
exogenous inputs are set to zero and provided that control constraints are met during the transient period t ≤ T . For the
numerical examples, we use T = 3 and state and control weight matrices of:

Q = diag [1, 1, 1, 0, 0, 1, 1, 1, 0, 0] and R = 10^{-3} I

respectively.
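A minimal sketch of this control law, assuming the plant matrices A, B from (5), the block-diagonal reference model matrices A_m, B_m, and the weights Q, R above are available as NumPy arrays (an illustration, not code from the paper):

    import numpy as np

    def mrc_control(A, B, Am, Bm, Q, R):
        # Precompute (B'QB + R)^{-1} B'Q once; the returned closure evaluates
        # u = (B'QB + R)^{-1} B'Q (-A x + Am x + Bm r).
        M = np.linalg.solve(B.T @ Q @ B + R, B.T @ Q)
        def control(x, r):
            return M @ (-A @ x + Am @ x + Bm @ r)
        return control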

B. Reach sets and RDP sets for linear systems
This appendix summarizes the results used to calculate RDP sets. These results are easily derived from known results.24
We focus on linear time-invariant (LTI) systems, as the nonlinear case is a little more involved, requiring sum of squares (SOS)
type arguments.
Consider the LTI closed loop system:

ẋ = F x + G1 w + G2 r,    x(0) ∈ X0                                    (18)

where X0 is a specified set of initial conditions, w is an exogenous input belonging to a specified set W, and r is a
given reference input. Here, F, G1 and G2 are real matrices of appropriate dimensions. Since (18) is the closed-loop
system, we may assume that all the eigenvalues of F have strictly negative real parts. For a linear open-loop plant with a
linear controller, the control input can be written as:

u = H1 x + H2 r

where H1 and H2 are known matrices. We make no assumption about the plant, measurements or the controller (except
that they are all linear and that the closed loop system is asymptotically stable with w = 0 and r = 0).
Fix an initial condition x0 ∈ X0 and an exogenous input w ∈ W. The state trajectory emanating from x0 under the
influence of w and r is:
\[ x(t) = e^{Ft} x_0 + \int_0^t e^{F(t-\tau)} G_1 w(\tau)\, d\tau + \int_0^t e^{F(t-\tau)} G_2 r(\tau)\, d\tau \tag{19} \]

for all t ≥ 0. We make extensive use of the superposition principle apparent in the above formula. The principle extends to
reach sets as well, where the plus sign is interpreted as the Minkowski sum. Define the following reach sets of the LTI system
(18):
\[ R^{ic}_t(X_0) = \left\{\, e^{Ft} x_0 + \int_0^t e^{F(t-\tau)} G_2 r(\tau)\, d\tau \;:\; x_0 \in X_0 \,\right\} \tag{20a} \]
\[ R^{exo}_t(W) = \left\{\, \int_0^t e^{F(t-\tau)} G_1 w(\tau)\, d\tau \;:\; w \in W \,\right\} \tag{20b} \]

for t ≥ 0. R^ic_t(X0) is the set of all states that can be reached at time t starting from some initial condition in X0 and
evolving under the influence of the reference input r only (zero exogenous input). R^exo_t(W) is the set of all states
that can be reached at time t starting at zero and evolving under the influence of the exogenous input w only (zero initial
condition and reference input). The set of all states that can be reached at time t by starting from some initial condition
and evolving under both reference input and exogenous input is:

\[ R_t = R^{ic}_t(X_0) + R^{exo}_t(W) = \left\{\, y + z \;:\; y \in R^{ic}_t(X_0),\ z \in R^{exo}_t(W) \,\right\} \tag{21} \]

Our aim is to first characterize these sets and then to characterize the corresponding set of required control inputs and the
RDP set.
A positive (or positive semi-definite) matrix is a symmetric matrix whose eigenvalues are all greater than or equal
to zero. A positive matrix is strictly positive (or positive definite) if all the eigenvalues are strictly greater than zero.
The unique positive square root of a positive matrix M is denoted by M^{1/2}. The open ellipsoid centered at z and defined
by a positive M is denoted by E(z, M). That is,
\[ E(z, M) = \left\{\, z + M^{1/2} \alpha \;:\; \alpha' \alpha < 1 \,\right\} \]

We do not require M to be strictly positive. L2 is the Hilbert space of all Lebesgue-measurable signals, taking values
in some appropriate Euclidean space, that are square integrable. The set of all signals in L2 whose norm is strictly
less than 1 is denoted by BL2. These notations are introduced because we consider ellipsoidal initial condition sets
and finite-energy exogenous inputs in detail. The techniques can be applied to discrete-time systems, polyhedral initial
condition sets and L∞ inputs with some minor changes.

Proposition B.1 Let t ≥ 0 be given. The following statements are true.

1. Suppose that the initial condition set is E(x̄0, X0). Define:
\[ \bar{x}_t = e^{Ft} \bar{x}_0 + \int_0^t e^{F(t-\tau)} G_2 r(\tau)\, d\tau, \qquad X_t = X_0^{1/2}\, e^{F' t} e^{Ft}\, X_0^{1/2} \]
Then, R^ic_t(X0) = E(x̄t, Xt).

2. Suppose that W = BL2. Define:
\[ P_t = \int_0^t e^{F\tau}\, G_1 G_1'\, e^{F'\tau}\, d\tau = P_\infty - e^{Ft} P_\infty e^{F' t} \]
where P∞ is the solution of the Lyapunov equation F P + P F' + G1 G1' = 0. Then, R^exo_t(W) = E(0, Pt)
(see the computational sketch following this proposition).

3. Suppose that the initial condition set is E(x̄0, X0) and that the exogenous input set is BL2. Then,
\[ R_t = E(\bar{x}_t, X_t) + E(0, P_t) \]
where the ellipsoids are as in the previous statements.
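The following is a minimal computational sketch of statement 2, assuming F and G1 from the closed-loop model (18) are available as NumPy arrays and F is Hurwitz (an illustration, not code from the paper):

    import numpy as np
    from scipy.linalg import expm, solve_continuous_lyapunov

    def exo_reach_shape(F, G1, t):
        # P_inf solves F P + P F' + G1 G1' = 0.
        P_inf = solve_continuous_lyapunov(F, -G1 @ G1.T)
        Phi = expm(F * t)                     # state transition matrix e^{Ft}
        return P_inf - Phi @ P_inf @ Phi.T    # P_t: shape matrix of E(0, P_t) = R^exo_t(BL2)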

The set R_t of all states that can be reached at time t is a convex set. But the set of all reachable states, given by
\[ \bigcup_{t \ge 0} R_t, \]

is not necessarily convex. It can be outer-approximated by a convex set (or by the union of a finite number of convex
sets) using well-known convex programming methods.
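Relatedly, the Minkowski sum in statement 3 of Proposition B.1 is not itself an ellipsoid. The sketch below uses a standard ellipsoidal-calculus outer bound (a known result, not taken from this paper): E(0, P1) + E(0, P2) ⊆ E(0, (1 + 1/β)P1 + (1 + β)P2) for any β > 0, with β = sqrt(trace(P1)/trace(P2)) minimizing the trace of the bounding shape matrix; the centers of the two ellipsoids simply add.

    import numpy as np

    def minkowski_outer_shape(P1, P2):
        # Trace-optimal outer-bounding shape matrix for E(., P1) + E(., P2).
        beta = np.sqrt(np.trace(P1) / np.trace(P2))
        return (1.0 + 1.0 / beta) * P1 + (1.0 + beta) * P2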
