
Assessing the Performance of Model Predictive Controllers

Rohit S. Patwardhan1,†, Sirish L. Shah1,* and Kent Z. Qi2

1 Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB T6G 2G6, Canada
2 Shell Canada, Fort Saskatchewan, AB, Canada

The area of performance assessment is concerned with the analysis
of operating controllers. Performance assessment aims at evaluating
controller performance from routine data. Several algorithms are
now available for estimating a performance index from closed loop data.
The conventional estimation procedure compares the existing controller
to a theoretical benchmark such as the minimum variance controller
(MVC). Harris (1989) laid the theoretical foundations for performance
assessment of single loop controllers from routine operating data. Time
series analysis of the output error was used to determine the minimum
variance control for the process. A comparison of the output variance
term with the minimum achievable variance reveals how well the
controller is currently doing. Desborough and Harris (1993) used this idea
for assessing feedback/feedforward control schemes. Harris et al. (1996)
and Huang et al. (1997b) generalized the minimum variance benchmark
to the multivariate case based on the multivariate interpretation of the
delay term, known as the interactor matrix. The interactor matrix plays
a crucial role in determining the control invariant or the minimum
variance control for a MIMO process. However, estimation of the interactor
matrix requires some closed loop excitation and cannot be based on
routine operating data alone. Huang (1997a) showed that the interactor
matrix obtained under closed loop conditions is the same as the open
loop interactor. Kozub and Garcia (1993) proposed user defined
benchmarks based on settling times, rise times, etc. This presents a more
practical method of assessing controller performance. The settling time
or rise time for a process can often be chosen based on process
knowledge. A correlation analysis of the operating data is used to
determine whether the desired closed loop characteristics were
achieved. Tyler and Morari (1995) proposed performance evaluation
based on likelihood methods and hypothesis testing. Performance
assessment of non-minimum phase and open loop unstable systems was
also addressed by Tyler and Morari (1995). Ko and Edgar (2000)
addressed the issue of cascade control system performance assessment.
Huang and Shah (1999) proposed the LQG control as the benchmark
instead of MVC. The main advantage of this technique is that the input
variance is also taken into account. In many processes the input variance
is of major concern as this is often a utility such as steam, power, etc.,
with significant cost. A model of the process and the disturbances is
required to do the LQG benchmarking. Kammer et al. (1996) used
non-parametric modelling in the frequency domain to ascertain the

*Author to whom correspondence may be addressed. E-mail address: Sirish.Shah@ualberta.ca
†Presently with Matrikon Inc., #1800, 10405 Jasper Ave., Edmonton, AB T5J 3N4.
Note: A portion of this work was presented at the 1997 CSChE Meeting, Edmonton, AB.

Performance assessment of model predictive
controllers is a problem of significant industrial
relevance. Model predictive controllers belong to a
class of linear time-varying controllers, which compute
the future control actions by minimizing a
constrained, time-varying objective function. In this
work we propose a performance statistic that takes
into account the time-varying and constrained nature
of model predictive control. The proposed measure
compares the achieved objective function with its
design value, online. Analytical expressions are derived
to calculate the expected value of the design objective
function under closed loop conditions. Simulation and
industrial case studies are used to illustrate the
applicability of the proposed metric.
Résumé: The performance evaluation of model predictive controllers is of great importance to industry. Model predictive controllers belong to the class of linear time-varying controllers, which compute the future control actions by minimizing a constrained, time-varying objective function. In this work, we propose a performance statistic that takes into account the time-varying and constrained nature of model predictive control. The proposed measure compares the achieved objective function with its design value, online. Analytical expressions are derived to calculate the expected value of the design objective function under closed loop conditions. Simulation and industrial case studies illustrate the applicability of the proposed metric.
Keywords: performance assessment, model predictive
controllers, multivariable constrained controllers.

optimality of a LQG controller, based on the comparison
of the optimal and the achieved cost functions. The
reviews by Qin (1998) and Harris et al. (1999) present
the state of research in controller performance
assessment. Huang and Shah (1999) give a detailed
exposition of univariate and multivariate, feedback and
feedforward performance assessment for the case of
both stochastic and deterministic inputs.

The Canadian Journal of Chemical Engineering, Volume 80, October 2002

Model predictive controllers belong to a class of model-based
controllers which compute future control actions by minimizing
a performance objective function over a finite prediction
horizon. This family of controllers is truly multivariate in nature
and has the ability to deal with constraints on the inputs, slew
rates, etc. It is for the above reasons that MPC has been widely
accepted by the process industry. Variations of MPC such as
dynamic matrix control (DMC) and quadratic dynamic matrix
control (QDMC) have become the norm in industry for
processes where interactions are significant and constraints
have to be taken into account. Qin and Badgwell (1996) have
provided an excellent survey of industrial model predictive
control technology. As the number of industrial MPC applications
increases, the role of the control practitioner is becoming one
that is dedicated to the maintenance of these controllers. A
medium-sized MPC application can have 20 to 30 inputs and
up to 50 outputs and there could be several medium-sized
MPCs in an average-sized chemical plant. Intuitive measures of
performance, which can be related to process variables, and
diagnostic tools that can help troubleshoot MPC are of great
value to today's control engineer. In this work we deal with the
former: evaluating model predictive controllers.
A constrained MPC is essentially a nonlinear controller,
especially when operating at the constraints. Conventional
MVC benchmarking, based on linear time series analysis, is
infeasible and alternative techniques have to be developed.
Patwardhan et al. (1998) attempted to address some of these
issues in the context of industrial MPCs. They proposed the use
of the historical objective function as a practical benchmarking
technique. Ko and Edgar (2001a) presented a benchmark based
on the finite horizon minimum variance controller. They derived
this benchmark using closed loop data and the knowledge of
the order of the delay matrix. This idea is extended to the
constrained case in Ko and Edgar (2001b). Based on the
knowledge of the process and noise models, they resorted to
obtaining the lower bounds on constrained performance of a
finite horizon MVC. Though this idea has merit, accurate
process and noise models are rarely available in practice. The
presence of modelling uncertainty will result in inaccurate
estimation of the benchmark (see Patwardhan and Shah, 2002).
Huang and Tamayo (2000) illustrate the importance of model
validation in relation to MPC monitoring.
In this work, we present a novel way of assessing constrained
model predictive controllers. Model predictive controllers
design an optimal sequence of inputs by minimizing an
objective function that quantifies the various design criteria such
as input and output variances, stability, etc. Process models
forecast the behaviour of the plant into the future and form the
basis of this design objective function. We propose a performance
metric based on the comparison of the designed and achieved
objective functions. The metric is free of any assumptions of
linearity and lends itself to real time application quite naturally.
In this paper, key properties of this performance measure are
established. These include analytical estimation of the design
objective function under closed loop conditions and the effect
of the receding horizon implementation. The main advantage
of the design vs. achieved performance measure is that it does
not require any closed loop estimation or time series analysis.
Only weighted time-averaging of appropriate terms is needed.
Moreover, the measure takes into account the structure of the
model predictive control (MPC) application along with its
design specifications. In this sense, it is not an absolute
benchmark like the minimum variance benchmark, which is
known to be unrealistic at times.
The method for MPC assessment presented here is quite
straightforward to implement online. The design objective
function is available to the controller every sampling instant.
The achieved objective function can be calculated with little
effort through appropriate weighting and time-averaging.
There is no estimation step and no restrictive assumptions of linearity,
stationarity, etc. Its inherent simplicity makes it appealing to the
practitioner. Moreover, the presence of time-varying disturbances
or process nonlinearities does not pose any hurdle to the application
of the proposed assessment technique.

MPC Preliminaries
This section introduces the MPC preliminaries and notation
used in later sections. The underlying philosophy of model
predictive control consists of minimization of a performance
objective function with respect to future input moves, over a
finite time horizon. The standard objective function in model-based
predictive controllers consists of the sum of: (i) a weighted norm of
the control errors over a prediction horizon, p; and (ii) a weighted
norm of the control moves over a control horizon, m:

J_k = \sum_{i=1}^{p} \| r(k+i) - \hat{y}(k+i \mid k) \|^2_{\Gamma_i} + \sum_{i=1}^{m} \| \Delta u(k+i-1) \|^2_{\Lambda_i} + \sum_{i=1}^{m} \| u(k+i-1) \|^2_{R_i}   (1)
i

Here, r \in R^{n_y} is the reference signal; \hat{y} \in R^{n_y} is the predicted
output; u \in R^{n_u} is the control input; \Gamma_i \geq 0, \Lambda_i > 0, R_i \geq 0 are
the respective weighting matrices; \| \cdot \| is used to denote the
Euclidean norm (2-norm) of a vector. In MPC terminology, \Gamma_i is
the output weighting matrix; \Lambda_i is the input move suppression
factor; R_i is the input weighting matrix. If the model is linear,
the convexity of the objective function is guaranteed, since \Lambda_i is
positive definite. \Delta u(k) is the differenced input:

\Delta u(k) = u(k) - u(k-1)   (2)
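As a concrete illustration of Equation (1) (with R_i = 0, the common case discussed later), the objective is simply a sum of weighted squared control errors and weighted squared input moves. The sketch below is illustrative only; the function and variable names are my own, not from the paper.

```python
import numpy as np

def mpc_objective(r, y_pred, du, gamma, lam):
    """Evaluate the MPC objective of Equation (1) with R_i = 0:
    sum of Gamma-weighted squared control errors over the prediction
    horizon plus Lambda-weighted squared input moves over the control
    horizon. Illustrative sketch with hypothetical names."""
    # r, y_pred: (p, ny) arrays; du: (m, nu); gamma: (ny, ny); lam: (nu, nu)
    err = r - y_pred
    j_out = sum(e @ gamma @ e for e in err)   # sum_i ||r - y_hat||^2_Gamma
    j_in = sum(d @ lam @ d for d in du)       # sum_i ||Delta u||^2_Lambda
    return j_out + j_in

# single-input, single-output example with identity weights
r = np.ones((3, 1)); y_pred = np.zeros((3, 1)); du = 0.5 * np.ones((2, 1))
J = mpc_objective(r, y_pred, du, np.eye(1), np.eye(1))  # 3*1 + 2*0.25 = 3.5
```

The same time-varying weights Gamma_i, Lambda_i of Equation (1) can be accommodated by passing per-step weight matrices instead of a single one.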

The key component of any model predictive control scheme
is a predictor which is capable of accurately predicting the
process response over the prediction horizon, based on the
current and past measurements. Different model forms can be
used to predict the process behaviour. The commonly used
model forms include: (i) the step response or the finite impulse
response model; (ii) the state space form; and (iii) the transfer
function form. The general model form for the step response
model is illustrated below:
\hat{y}(k \mid k-1) = \sum_{j=1}^{N-1} s(j) \Delta u(k-j) + s(N) u(k-N-1) + e(k)   (3)

where e(k) is a white noise process.

The p-step ahead predictions can be expressed in vector form as follows:

\tilde{y}_k = S \Delta u_k + f_k + d_k   (4a)
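The step-response prediction of Equation (4a) can be sketched numerically. The helper below assembles the forced-response term S Delta u_k from step-response coefficients; the function name and example process are assumptions of mine, not from the paper.

```python
import numpy as np

def dynamic_matrix(s, p, m):
    """Assemble the block lower-triangular dynamic matrix S of
    Equation (4b) from step-response coefficient matrices s[j]
    (each ny x nu). Illustrative sketch; s is the list [s_1, ..., s_p]."""
    ny, nu = s[0].shape
    S = np.zeros((ny * p, nu * m))
    for i in range(p):                    # prediction step i+1
        for j in range(min(i + 1, m)):    # input move j+1
            S[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = s[i - j]
    return S

# SISO example: step response of a first-order process, s_k = 1 - 0.5^k
s = [np.array([[1 - 0.5**k]]) for k in range(1, 5)]   # s_1 .. s_4
S = dynamic_matrix(s, p=4, m=2)
# forced response for a unit first move, as in y_k = S du_k + f_k + d_k (4a)
du = np.array([1.0, 0.0])
forced = S @ du
```

Adding the free response f_k and the disturbance estimate d_k to `forced` would complete the prediction of Equation (4a).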

where the individual terms are given by:

S = \begin{bmatrix} s_1 & 0 & \cdots & 0 \\ s_2 & s_1 & 0 & \cdots \\ s_3 & s_2 & s_1 & 0 \\ \vdots & & & \ddots \\ s_p & s_{p-1} & \cdots & s_{p-m+1} \end{bmatrix} \in R^{n_y p \times n_u m}   (4b)

\Delta u_k = [\Delta u_1(k) \cdots \Delta u_{n_u}(k) \cdots \Delta u_1(k+m-1) \cdots \Delta u_{n_u}(k+m-1)]^T
\tilde{y}_k = [\hat{y}_1(k+1 \mid k) \cdots \hat{y}_{n_y}(k+1 \mid k) \cdots \hat{y}_1(k+p \mid k) \cdots \hat{y}_{n_y}(k+p \mid k)]^T
f_k = [f_1(k+1) \cdots f_{n_y}(k+1) \cdots f_1(k+p) \cdots f_{n_y}(k+p)]^T
d_k = \hat{d}(k+1 \mid k) [I \cdots I]^T

Each s_i is a matrix of dimension n_y \times n_u consisting of the step
response coefficients of the process model. The second term on
the right-hand side of Equation (4a) denotes the free response of
the system, i.e., the response if there were no further changes
in the inputs. This free response is expressed in terms of the past
(N) inputs and can be written as:

f_k = H_1 \Delta u^o + H_2 u^o   (5)

where

H_1 = \begin{bmatrix} s_2 & s_3 & s_4 & \cdots & s_N \\ s_3 & s_4 & s_5 & \cdots & 0 \\ s_4 & s_5 & \cdots & 0 & 0 \\ \vdots & & & & \vdots \end{bmatrix}, \quad H_2 = \begin{bmatrix} s_N \\ s_N \\ \vdots \\ s_N \end{bmatrix}   (6)

\Delta u^o = [\Delta u(k-1) \cdots \Delta u(k-N+1)]^T   (7)

u^o = [u(k-N) \cdots u(k-N+p-1)]^T   (8)

The third term in Equation (4a) is the disturbance estimate \hat{d}(k),
which is generally found by subtracting the model output from
the actual output:

\hat{d}(k+i \mid k) = y(k) - \hat{y}(k \mid k-1)   (9)

The disturbance is assumed to be constant over the prediction
horizon. The objective function, for R_i = 0, can now be
compactly expressed as:

J_k = (r_k - \tilde{y}_k)^T \Gamma (r_k - \tilde{y}_k) + \Delta u_k^T \Lambda \Delta u_k   (10)

where r_k = r(k)[I \cdots I]^T is the setpoint, assumed to be constant
over the prediction horizon. In general, a trajectory can be
computed for the future setpoint. \Gamma = diag(\Gamma_1, \ldots, \Gamma_p) and
\Lambda = diag(\Lambda_1, \ldots, \Lambda_m) are the weighting matrices defined over the
entire prediction and control horizons, respectively. \Gamma, \Lambda are
usually diagonal in nature, with the individual weights indicating
the relative significance of the different inputs/outputs over the
relevant horizons. Using Equation (4a), the objective function
can be further simplified as:

J_k = (r_k - S\Delta u_k - f_k - d_k)^T \Gamma (r_k - S\Delta u_k - f_k - d_k) + \Delta u_k^T \Lambda \Delta u_k
    = (r_k - f_k - d_k)^T \Gamma (r_k - f_k - d_k) + \Delta u_k^T (S^T \Gamma S + \Lambda) \Delta u_k - 2(r_k - f_k - d_k)^T \Gamma S \Delta u_k   (11)

In reality the inputs are limited to a region of allowable moves,
W_k \subset R^{n_u m}. Note that W_k is determined by the constraints imposed
on the inputs/outputs and is time-varying in nature. Thus, in
practice, the following constrained optimization problem is
solved at every sampling instant:

\Delta u_k^* = \arg \min_{\Delta u_k \in W_k} J_k(\Delta u_k)

This is a finite dimensional optimization problem. However,
after computing the optimal move sequence \Delta u_k^*, only the first of
these optimal moves is implemented on the process. This is
known as receding horizon control. The remaining moves are
discarded and a new optimization problem is solved at the next
sampling instant:

u(k) = u(k-1) + \Delta u_k^*(1)   (12)

The steps leading to the optimal input sequence are well known
and can be found in any number of articles on MPC. We refer
the reader to two survey articles, Garcia and Morari (1989)
and Mayne et al. (2000), for further details on MPC.

Figure 1. The optimal performance curve obtained through LQG benchmarking.

Performance Assessment of Model Predictive Controllers
In this section we propose the use of the design case as a
benchmark for model predictive controllers. A performance
measure based on comparison of the design objective function
with the achieved objective function is defined, and some
properties of this measure are established. We will restrict
ourselves to cases where an MPC-type controller is in place and
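For the unconstrained case (W_k the whole space), the quadratic form of Equation (11) has the closed-form minimizer Delta u_k = (S'Gamma S + Lambda)^{-1} S'Gamma (r_k - f_k - d_k), and the receding horizon law of Equation (12) applies only its first block. A minimal sketch, with names of my own choosing:

```python
import numpy as np

def mpc_first_move(S, gamma, lam, r, f, d, nu):
    """Unconstrained minimizer of the quadratic objective (11) and the
    receding-horizon update (12): only the first nu elements of the
    optimal move sequence are applied. Sketch, not the authors' code."""
    # solve (S' Gamma S + Lambda) du = S' Gamma (r - f - d)
    K = np.linalg.solve(S.T @ gamma @ S + lam, S.T @ gamma)
    du_opt = K @ (r - f - d)      # full optimal sequence over the horizon
    return du_opt, du_opt[:nu]    # implement only the first move

# tiny SISO example: p = 2, m = 2
S = np.array([[0.5, 0.0], [0.8, 0.5]])
gamma, lam = np.eye(2), 0.1 * np.eye(2)
r = np.array([1.0, 1.0]); f = np.zeros(2); d = np.zeros(2)
du_opt, du_now = mpc_first_move(S, gamma, lam, r, f, d, nu=1)
```

A constrained solver would replace the linear solve with a QP over W_k, but the receding-horizon step (keep only `du_now`) is unchanged.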

the plant dynamics are linear. Conventionally, when a linear
feedback controller is in place, time series analysis of the closed
performance. However, MPC gives a linear time-varying
feedback law for the constrained case. Evaluation of the MVC
benchmark from the nonlinear closed loop data without resorting
to identification techniques is a non-trivial problem (Harris,
1989). It should be noted here that, for a linear plant, linear
quadratic Gaussian (LQG) control offers the best achievable
nominal performance amongst the class of all stabilizing
controllers.
The LQG benchmark (Huang and Shah, 1999) provides
useful information about the lower bound on the performance of a
model-based predictive controller, given its similarity in structure
and form to an MPC controller. Note that for large prediction and control
horizons, p, m \to \infty, the MPC objective function becomes
identical to that of a LQG controller. The weightings for the
LQG criterion can be chosen to reflect the MPC objective. The
LQG approach is more practical compared to MVC since it
recognizes the importance of input variability. Figure 1 shows a
typical performance curve for a LQG controller. The performance
curve is sensitive to the ratio of the input to measurement noise
variances. Uncertainty in the process and noise models also
results in an uncertain performance curve (Patwardhan and
Shah, 2002). A low performance index, based on comparison of
the LQG and MPC objective functions, does not always imply
poor performance since it is based on comparison of two different
control laws. The model predictive controller may be delivering
the nominal performance it was designed for and yet be far
away from the achievable performance curve. The LQG
controller is an unconstrained controller and provides a useful
lower bound on the achievable performance of a model-based
predictive controller. Huang and Shah (1999) discuss ways of
estimating the LQG benchmark using a process model and
information about the noise variances.

Using the Design Case as a Benchmark
An alternative approach is to evaluate the controller performance
using a criterion commensurate with the actual design
objective(s) of the controller and then compare the achieved
performance. This idea is analogous to the method of Kammer
et al. (1996), which was based on frequency domain comparison
of the achieved and design objective functions for LQG. For an
MPC controller with a quadratic objective function, the design
requirements are quantified by:

J_k = (r_k - \tilde{y}_k)^T \Gamma (r_k - \tilde{y}_k) + \Delta u_k^T \Lambda \Delta u_k   (13)

The model predictive controller calculates the optimal control
moves by minimizing this objective function over the feasible
control moves. If we denote the optimal control moves by \Delta u_k^*,
the optimal value of the design objective function is given by:

J_k^* = J_k(\Delta u_k^*)   (14)

The actual output may differ significantly from the predicted
output due to inadequacy of the model structure, nonlinearities,
modeling uncertainty, etc. Thus, the achieved objective
function is given by:

\hat{J}_k = (r_k - y_k)^T \Gamma (r_k - y_k) + \Delta u_k^T \Lambda \Delta u_k   (15)

where y_k and \Delta u_k denote the measured values of the outputs
and inputs at the corresponding sampling instants, appropriately
vectorized:

\Delta u_k = [\Delta u_1(k) \cdots \Delta u_{n_u}(k) \cdots \Delta u_1(k+m-1) \cdots \Delta u_{n_u}(k+m-1)]^T
y_k = [y_1(k+1) \cdots y_{n_y}(k+1) \cdots y_1(k+p) \cdots y_{n_y}(k+p)]^T   (16)

The inputs will differ from the design value in part due to the
receding horizon nature of the MPC control law. The value of
the achieved objective function cannot be known a priori, but
only p sampling instants later. A simple measure of performance
can then be obtained by taking a ratio of the design and the
achieved objective functions:

\eta(k) = \frac{J_k^*}{\hat{J}_k}   (17)

This performance index will be equal to one when the achieved
performance meets the design requirements. The advantage of
using the design criterion for the purpose of performance
assessment is that it is a measure of the deviation of the
controller performance from the expected or design performance.
Thus, a low performance index truly indicates changes in the
process or the presence of disturbances, resulting in sub-optimal
control, and not merely the choice of an unrealistic
benchmark. The estimation of such an index does not involve
any time series analysis or identification. The design objective is
calculated by the controller at every instant, and only the
measured input and output data are needed to find the achieved
performance. The above performance measure represents an
instantaneous measure of performance and can be driven by
unmeasured disturbances. In order to get a better overall
picture, the following measure is recommended:

\alpha(k) = \frac{\sum_{i=1}^{k} J_i^*}{\sum_{i=1}^{k} \hat{J}_i}   (18)

\alpha(k) is the ratio of the average design performance to the
average achieved performance up to the current sampling
instant. Thus, \alpha(k) = 1 implies that the design performance is
being achieved on average; \alpha(k) < 1 means that the
achieved performance is worse than the design; \alpha(k) > 1
indicates that the achieved performance is better than the
design performance. In subsequent discussion we arrive at
bounds on the achieved objective function, which will eventually
give us the limits on \alpha(k). J_k^* can be obtained from the optimization
step in MPC, and \hat{J}_k can be found from the operating data
and the controller tuning parameters. Consider the quadratic
form which is present in the objective function of MPC. If we let
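Equations (17) and (18) require no estimation, only the logged design and achieved objective values. A minimal sketch of the bookkeeping, with hypothetical helper names:

```python
import numpy as np

def performance_indices(j_design, j_achieved):
    """Instantaneous index eta(k) = J*_k / J_k (Equation 17) and the
    cumulative index alpha(k) = sum(J*_i) / sum(J_i) (Equation 18),
    computed from logged design and achieved objective values.
    Illustrative sketch only."""
    j_design = np.asarray(j_design, dtype=float)
    j_achieved = np.asarray(j_achieved, dtype=float)
    eta = j_design / j_achieved                       # pointwise ratio (17)
    alpha = np.cumsum(j_design) / np.cumsum(j_achieved)  # running ratio (18)
    return eta, alpha

# three sampling instants: achieved worse, equal, then better than design
eta, alpha = performance_indices([1.0, 1.0, 2.0], [2.0, 1.0, 1.0])
```

Here alpha(k) < 1 flags achieved performance worse than design on average, and alpha(k) > 1 better, exactly as the text interprets Equation (18).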

\hat{J}_k(y_k) = (r_k - y_k)^T \Gamma (r_k - y_k)
\hat{J}_k(u_k) = \Delta u_k^T \Lambda \Delta u_k   (19)

then we have,

\hat{J}_k = \hat{J}_k(y_k) + \hat{J}_k(u_k)
\| \bar{J}_k \|_1 = \| \bar{J}_k(y_k) \|_1 + \| \bar{J}_k(u_k) \|_1   (20)

where \| \cdot \|_1 is the 1-norm and \bar{J}_k(y_k) and \bar{J}_k(u_k) are the respective
vectors [\hat{J}_i(y_i)]^T and [\hat{J}_i(u_i)]^T, i = 1, \ldots, k. The equality on the
second line of Equation (20) will always hold since \hat{J}_k(y_k) and
\hat{J}_k(u_k) are both positive terms by definition. By analogy we have
a similar expression for the design objective term:

J_k^* = J_k^*(\tilde{y}_k) + J_k^*(u_k^*)
\| \bar{J}_k^* \|_1 = \| \bar{J}_k^*(\tilde{y}_k) \|_1 + \| \bar{J}_k^*(u_k^*) \|_1   (21)

So, we have expressed the respective objective functions as a
sum of the contributions from the outputs and the inputs.
Comparing each of these terms in the above sum with their
achieved counterparts can give an idea of which quantity, in a
relative sense, contributes the most to a loss in performance. We
also define performance indices based on the individual terms
in the objective function:

\alpha_y(k) = \frac{\| \bar{J}_k^*(\tilde{y}_k) \|_1}{\| \bar{J}_k(y_k) \|_1}, \quad \alpha_u(k) = \frac{\| \bar{J}_k^*(u_k^*) \|_1}{\| \bar{J}_k(u_k) \|_1}   (22)

The indices in Equation (22) compare the design output and
input variances to the respective achieved quantities. These two
numbers can be different from unity while the overall performance
index in Equation (17) is still equal to one. For example,
the design output variance can be greater than the achieved
quantity but the design input variance can be less than the
corresponding achieved quantity. The sum of input and output
variances, however, may be equal for the design and achieved
cases.
The objective function approach has the following
advantages: (i) since it is free of any restrictive assumptions, it
can deal with the multivariate and constrained nature of MPC;
(ii) it reflects the design criterion and indicates whether the
controller is doing what it was designed for; and (iii) it does not
involve any estimation and therefore the measure is exact.

Properties
In the ensuing discussion we prove several useful properties of
the proposed benchmark based on comparison of the design
and achieved objective functions. First, an analytical way of
obtaining the expected value of the design cost function is
described. As with the LQG benchmark (Huang and Shah,
1999), this involves solution of the relevant Lyapunov equations
describing the closed loop relationships. The analytic expression
is derived for the case of stochastic and deterministic inputs.
The state space description of MPC is utilized in obtaining the
desired quantities. The effect of uncertainty on the design cost
function is also captured in the state space framework.
An important implementational aspect of MPC is its receding
horizon nature. It is shown here that the receding horizon
implementation leads to a loss in the design performance as per
the cost function. However, the receding horizon nature is
known to combat disturbance uncertainty and therefore may
lead to a lower achieved cost than a conventional multi-step
implementation.
We give the bounds on achieved MPC performance, under
the assumption of closed loop stability. The bounds provide insight
into the controller relevant mismatch terms that may affect the
achieved performance.

Stochastic Inputs
Here we derive the expected value of the design cost function
for MPC for the case of stochastic inputs described by:

x(k+1) = A x(k) + B u(k) + G w(k)
y(k) = C x(k) + v(k)   (23)

where w(k), v(k) are uncorrelated white noise processes.
The predictions can be expressed in terms of the future
inputs and current state variables as:

\tilde{y}_k = F \hat{x}(k) + S \Delta u_k + d_k   (24)

where F = [(CA)^T (CA^2)^T \cdots (CA^p)^T]^T and S is once again the
dynamic matrix made up of the step response coefficients. The
disturbance estimate is given by:

d_k = [I \cdots I]^T \{ y(k) - \hat{y}(k \mid k-1) \} = [I \cdots I]^T C \{ x(k) - \hat{x}(k) \} = C_1 \{ x(k) - \hat{x}(k) \}   (25)

where \hat{x}(k) is the state estimate obtained through a Kalman
predictor/filter. The state space form of the MPC feedback law
is given by:

\Delta u_k^* = K \{ r_k - F\hat{x}(k) - d_k \} = K r_k - K_1 y(k) - K_2 \hat{x}(k)   (26)

where

K_1 = K [I \cdots I]^T, \quad K_2 = K (F - [I \cdots I]^T C), \quad K = (S^T \Gamma S + \Lambda)^{-1} S^T \Gamma   (27)

The objective function can be expressed as:

J_k = \{ r_k - F\hat{x}(k) - S\Delta u_k - d_k \}^T \Gamma \{ r_k - F\hat{x}(k) - S\Delta u_k - d_k \} + \Delta u_k^T \Lambda \Delta u_k   (28)

For the nominal case the feedback term will correspond to the
noise term. Take the regulatory case where r_k = 0, so that
\Delta u_k^* = -K_1 y(k) - K_2 \hat{x}(k) = -K_1 C x(k) - K_2 \hat{x}(k) - K_1 v(k). Therefore, we have:

r_k - F\hat{x}(k) - S\Delta u_k^* - d_k = -F\hat{x}(k) - S\{ -K_1 y(k) - K_2 \hat{x}(k) \} - C_1 \{ x(k) - \hat{x}(k) \}
= (-F + SK_2 + C_1)\hat{x}(k) - (SK_1 C + C_1)x(k) - SK_1 v(k)   (29)

If we let

\phi(k) \triangleq (-F + SK_2 + C_1)\hat{x}(k) - (SK_1 C + C_1)x(k) - SK_1 v(k)
\theta(k) \triangleq -K_1 C x(k) - K_2 \hat{x}(k) - K_1 v(k)   (30)

then

J_k^* = \phi(k)^T \Gamma \phi(k) + \theta(k)^T \Lambda \theta(k)   (31)

The expected value of the objective function can be easily
expressed in terms of the variances of the pseudo-outputs
defined above:

E[J_k^*] = E[\phi(k)^T \Gamma \phi(k) + \theta(k)^T \Lambda \theta(k)] = tr\{ \Gamma E[\phi(k)\phi(k)^T] \} + tr\{ \Lambda E[\theta(k)\theta(k)^T] \}   (32)

To compute E[\phi(k)\phi(k)^T] and E[\theta(k)\theta(k)^T], note that the Kalman
predictor is given by:

\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + P\{ y(k) - \hat{y}(k) \}   (33)

and the implemented (first) control move is:

\Delta u(k) = -[I\ 0 \cdots 0]\{ K_1 y(k) + K_2 \hat{x}(k) \} = -K_1' y(k) - K_2' \hat{x}(k)

For the integrator we define an extra state of the following
form:

x_I(k+1) = x_I(k) + \Delta u(k) = x_I(k) - K_1' y(k) - K_2' \hat{x}(k)
u(k) = x_I(k)   (34)

Writing the state space description together we have:

\begin{bmatrix} x(k+1) \\ \hat{x}(k+1) \\ x_I(k+1) \end{bmatrix} = \begin{bmatrix} A & 0 & B \\ PC & A - PC & B \\ -K_1'C & -K_2' & I \end{bmatrix} \begin{bmatrix} x(k) \\ \hat{x}(k) \\ x_I(k) \end{bmatrix} + \begin{bmatrix} B_1 & 0 \\ 0 & P \\ 0 & -K_1' \end{bmatrix} \begin{bmatrix} w(k) \\ v(k) \end{bmatrix}   (35)

\begin{bmatrix} \phi(k) \\ \theta(k) \end{bmatrix} = \begin{bmatrix} -(SK_1C + C_1) & (-F + SK_2 + C_1) & 0 \\ -K_1C & -K_2 & 0 \end{bmatrix} \begin{bmatrix} x(k) \\ \hat{x}(k) \\ x_I(k) \end{bmatrix} + \begin{bmatrix} 0 & -SK_1 \\ 0 & -K_1 \end{bmatrix} \begin{bmatrix} w(k) \\ v(k) \end{bmatrix}

or

\chi(k+1) = A_F \chi(k) + B_F \upsilon(k)
y_J(k) = C_F \chi(k) + D_F \upsilon(k)   (36)

where \chi(k) = [x(k)^T\ \hat{x}(k)^T\ x_I(k)^T]^T, \upsilon(k) = [w(k)^T\ v(k)^T]^T and
y_J(k) = [\phi(k)^T\ \theta(k)^T]^T. The state covariance matrices can be obtained by solving the
equivalent covariance equations:

\chi(k+1)\chi(k+1)^T = \{ A_F\chi(k) + B_F\upsilon(k) \}\{ A_F\chi(k) + B_F\upsilon(k) \}^T
E[\chi(k+1)\chi(k+1)^T] = A_F E[\chi(k)\chi(k)^T] A_F^T + B_F E[\upsilon(k)\upsilon(k)^T] B_F^T   (37)

where the cross term vanishes since \upsilon(k) is white and uncorrelated
with \chi(k). At steady state this gives the Lyapunov equation:

\sigma_\chi^2 = A_F \sigma_\chi^2 A_F^T + B_F \sigma_\upsilon^2 B_F^T   (38)

\sigma_{y_J}^2 = C_F \sigma_\chi^2 C_F^T + D_F \sigma_\upsilon^2 D_F^T

where \sigma_\upsilon^2 = E[\upsilon(k)\upsilon(k)^T] is a known covariance matrix. Thus, the
solution to the above Lyapunov equation (Equation 38) will
give us the desired state and state error covariance matrices.
Substituting the appropriate entries from \sigma_{y_J}^2 we get the
expected value of the design objective function for the regulatory
case. The above derivation gives the expected value of the
design cost function for the unconstrained case. This provides
a useful lower bound on the performance for the constrained
case.

The Deterministic Case with No Model-Plant Mismatch
Let us consider the case where there is no model-plant
mismatch and the only external inputs are deterministic
changes in the setpoint. In this case the disturbance estimate
will reduce to zero since there is no mismatch, y(k) - \hat{y}(k \mid k-1) = 0:

J_k^* = \{ r_k - Fx(k) - S\Delta u_k^* \}^T \Gamma \{ r_k - Fx(k) - S\Delta u_k^* \} + \Delta u_k^{*T} \Lambda \Delta u_k^*   (39)

\Delta u_k^* = K \{ r_k - Fx(k) \}
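The Lyapunov equation (38) can be solved numerically to obtain the closed-loop state covariance and hence, via (32), the expected design cost. The sketch below solves it by fixed-point iteration (valid for a stable A_F) rather than a dedicated solver; all names are assumptions of mine.

```python
import numpy as np

def solve_dlyap(A, Q, iters=500):
    """Fixed-point iteration for sigma = A sigma A' + Q (Equation 38);
    converges when A is Schur-stable. Illustrative sketch."""
    sigma = Q.copy()
    for _ in range(iters):
        sigma = A @ sigma @ A.T + Q
    return sigma

def design_cost_expectation(A_F, B_F, C_F, D_F, sigma_v, weight):
    """E[J*] per Equations (32) and (38): propagate the noise covariance
    through the closed loop, then weight the pseudo-output covariance."""
    sigma_x = solve_dlyap(A_F, B_F @ sigma_v @ B_F.T)
    sigma_yj = C_F @ sigma_x @ C_F.T + D_F @ sigma_v @ D_F.T
    return np.trace(weight @ sigma_yj), sigma_x

# scalar sanity check: x(k+1) = 0.5 x(k) + w(k)  =>  var(x) = 1/(1 - 0.25)
A_F = np.array([[0.5]]); B_F = np.array([[1.0]])
C_F = np.array([[1.0]]); D_F = np.array([[0.0]])
EJ, sx = design_cost_expectation(A_F, B_F, C_F, D_F, np.eye(1), np.eye(1))
```

In the paper's setting, `weight` would be the block-diagonal of Gamma and Lambda acting on the stacked pseudo-outputs phi(k), theta(k).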

Substituting, we get:

J_k^* = \{ r_k - Fx(k) \}^T Q \{ r_k - Fx(k) \}   (40)

where Q = (I - SK)^T \Gamma (I - SK) + K^T \Lambda K.
Finding E[J_k^*] in this case requires a different approach:

x(k+1) = Ax(k) + Bu(k)
\Delta u(k) = [I\ 0 \cdots 0] K \{ r_k - Fx(k) \} = K_1 r(k) - K_2 x(k)   (41)

Once again introducing the additional state to take care of the
integrator we have:

x_I(k+1) = x_I(k) + \Delta u(k) = x_I(k) + K_1 r(k) - K_2 x(k)
u(k) = x_I(k)   (42)

The state space can be described by:

\begin{bmatrix} x(k+1) \\ x_I(k+1) \end{bmatrix} = \begin{bmatrix} A & B \\ -K_2 & I \end{bmatrix} \begin{bmatrix} x(k) \\ x_I(k) \end{bmatrix} + \begin{bmatrix} 0 \\ K_1 \end{bmatrix} r(k)   (43)

or

\chi(k+1) = A_F \chi(k) + B_F r(k)
\psi(k) = r_k - Fx(k) = [-F\ \ 0]\chi(k) + [I \cdots I]^T r(k) = C_F \chi(k) + D_F r(k)   (44)

We are interested in finding the weighted root mean
squared (RMS) norm of \psi(k), which is equal to the expected
value of the design objective function for this case:

E[J_k^*] = \lim_{N \to \infty} \frac{1}{N} \sum_{k=1}^{N} \| \psi(k) \|_Q^2   (45)

The two-norm of the closed loop system in Equation (44) will
give the objective function for an impulse change in the setpoint.
Note that the analysis presented here in state-space form
can be performed in the frequency domain as well, via the
transfer function representations of the controller and the
process and the use of Parseval's theorem. However, the
state-space route is more elegant for multivariable analysis.

Loss in Performance due to the Receding Horizon Implementation
The model predictive controller computes up to m input moves
that minimize a performance objective function. However,
having computed the optimal sequence, only the first move is
implemented, which is a sub-optimal solution. The main
reason for doing so is to combat disturbance uncertainty, which
is quite common in chemical processes. However, in doing so,
the controller gives up some performance as per the
objective function. The purpose of the following exercise is to
quantify this loss in performance that results from the receding
horizon implementation for the regulatory case (r_k = 0). Under
the assumption of closed loop stability we will derive, for the
case of no model-plant mismatch:

\lim_{k \to \infty} \nu(k) \triangleq \frac{E[J_k^*]}{E[J_k^{rec}]}   (46)

where J_k^{rec} is obtained by substituting \Delta u_k^r = [\Delta u(k)\ 0 \cdots 0]^T, i.e.,
the receding horizon implementation, instead of the optimal moves:

J_k^{rec} = \{ -F\hat{x}(k) - S\Delta u_k^r - d_k \}^T \Gamma \{ -F\hat{x}(k) - S\Delta u_k^r - d_k \} + \Delta u_k^{rT} \Lambda \Delta u_k^r
= \{ -F\hat{x}(k) - d_k \}^T \Gamma \{ -F\hat{x}(k) - d_k \} + \Delta u_k^{rT}(S^T \Gamma S + \Lambda)\Delta u_k^r - 2\Delta u_k^{rT} S^T \Gamma \{ -F\hat{x}(k) - d_k \}
= \{ -F\hat{x}(k) - d_k \}^T \Gamma \{ -F\hat{x}(k) - d_k \} + \| \Delta u(k) \|_{H_{11}}^2 - 2\Delta u(k)^T J_1 \{ -F\hat{x}(k) - d_k \}   (47)

where H = (S^T \Gamma S + \Lambda) with leading block H_{11}, J = S^T \Gamma, and J_1 is
the first block row of J. The control law can be simplified to:

\Delta u(k) = K_1 \{ -F\hat{x}(k) - d_k \}   (48)

where K_1 denotes the first block row of K. Substituting for \Delta u(k) we
get:

J_k^{rec} = \{ -F\hat{x}(k) - d_k \}^T (\Gamma + Q_1 - Q_2) \{ -F\hat{x}(k) - d_k \}   (49)

where Q_1 = K_1^T H_{11} K_1 and Q_2 = K_1^T J_1 + J_1^T K_1.
If we compare the receding horizon objective function to the
design objective function, we have:

J_k^* = \{ -F\hat{x}(k) - d_k \}^T (\Gamma - Q) \{ -F\hat{x}(k) - d_k \}   (50)

where Q = \Gamma S (S^T \Gamma S + \Lambda)^{-1} S^T \Gamma.
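A quick numeric check of the claim that a control horizon of m = 1 incurs no receding-horizon loss: with the symmetrized cross term Q_2 = K_1'J_1 + J_1'K_1 used above, Q + Q_1 - Q_2 should vanish when K_1 = K. The example data below are arbitrary; this is an illustration, not the paper's example.

```python
import numpy as np

# SISO example with control horizon m = 1: the receding-horizon move IS
# the full optimal sequence, so the loss term Q + Q1 - Q2 should vanish.
S = np.array([[0.4], [0.7], [0.9]])       # dynamic matrix, p = 3, m = 1
gamma, lam = np.eye(3), np.array([[0.2]])
H = S.T @ gamma @ S + lam                 # (S' Gamma S + Lambda)
J = S.T @ gamma                           # S' Gamma
K = np.linalg.solve(H, J)                 # optimal feedback gain (27)
Q = gamma @ S @ np.linalg.solve(H, S.T @ gamma)   # Gamma S H^{-1} S' Gamma
K1, H11, J1 = K, H, J                     # m = 1: first block is everything
Q1 = K1.T @ H11 @ K1                      # as in Equation (49)
Q2 = K1.T @ J1 + J1.T @ K1                # symmetrized cross term
loss_term = Q + Q1 - Q2                   # should be the zero matrix
```

For m > 1, `K1`, `H11`, `J1` would be the leading blocks of `K`, `H`, `J`, and `loss_term` would generally be a nonzero positive semidefinite matrix, quantifying the loss of Equation (53).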

If we define the following:

f(k) = r_k − Fx̂(k) − d̂_k    (51)

then we have:

E[J*_k] = tr{(Γ − Q)σ²_f}

E[J^rec_k] = tr{(Γ − Q_1 − Q_2)σ²_f}    (52)

Therefore:

lim_{k→∞} V(k) = tr{(Γ − Q)σ²_f} / tr{(Γ − Q_1 − Q_2)σ²_f}
              = tr{(Γ − Q)σ²_f} / [tr{(Γ − Q)σ²_f} + tr{(Q − Q_1 − Q_2)σ²_f}]
              = 1 / [1 + tr{(Q − Q_1 − Q_2)σ²_f} / tr{(Γ − Q)σ²_f}]    (53)

It is obvious that (Q − Q_1 − Q_2) is positive semidefinite (since J^rec_k ≥ J*_k). σ²_f can be found by the method described earlier. The above expression quantifies the loss in performance due to the receding horizon nature of the control law. If m = 1, Q = Q_1 + Q_2, and there is no loss in performance due to the receding horizon nature of the control law. Subsequent simulation examples illustrate the loss in performance due to receding horizon implementation.
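The expectations in Equation (52) rest on the identity E[f^T M f] = tr(M Σ_f) for zero-mean f with covariance Σ_f, which a quick Monte Carlo sketch can confirm (all matrices below are illustrative placeholders for (Γ − Q) and σ²_f):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
M = np.diag([2.0, 1.0, 0.5, 0.25])     # stands in for (Gamma - Q), positive semidefinite
L = rng.normal(size=(p, p))
Sigma_f = L @ L.T                      # covariance of the free-response error f

exact = float(np.trace(M @ Sigma_f))   # tr{(Gamma - Q) sigma_f^2}, as in Eq. (52)

# Monte Carlo estimate of E[f^T M f] with f ~ N(0, Sigma_f)
F = rng.multivariate_normal(np.zeros(p), Sigma_f, size=200_000)
mc = float(np.mean(np.einsum('ij,jk,ik->i', F, M, F)))
```

With 200 000 samples the Monte Carlo estimate lands within a fraction of a percent of the trace expression.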

Bounds on Achieved Performance


Here we derive bounds on the instantaneous achieved performance (J_k) via the triangle inequality. This derivation is free of any assumptions on stability or the nature of the stochastic disturbances. The purpose is to provide some insight into which terms contribute to performance degradation and how they can be interpreted. It is important to know what causes the achieved performance to deviate from the designed performance. Let us consider the case where the achieved objective function is evaluated from the true data:

J_k = {r_k − y_k}^T Γ {r_k − y_k} + Δu_k^T Λ Δu_k = ‖r_k − y_k‖²_Γ + ‖Δu_k‖²_Λ    (54)

Adding and subtracting the predicted outputs ŷ_k and the optimal input moves Δu*_k, and expanding, gives the exact relationship between the design and achieved objective functions:

J_k = J*_k + (y_k − ŷ_k)^T Γ (y_k − ŷ_k) + (Δu_k − Δu*_k)^T Λ (Δu_k − Δu*_k) − 2(r_k − ŷ_k)^T Γ (y_k − ŷ_k) − 2Δu*_k^T Λ (Δu*_k − Δu_k)    (55)

Thus, the achieved objective function equals the design objective function plus two terms which are positive and an additional two terms which could be either positive or negative. Whether the achieved performance betters the design performance, α(k) > 1, depends on how these last two terms behave.

Using the triangle inequality, an upper bound on the achieved objective function can be obtained (Van den Hof and Schrama, 1995):

√J_k ≤ √J*_k + √J^CR_k    (57)

where J^CR_k = ‖y_k − ŷ_k‖²_Γ + ‖Δu_k − Δu*_k‖²_Λ. Similarly, a lower bound can be derived:

√J_k ≥ √J*_k − √J^CR_k    (56)

The tighter these bounds are, the closer the achieved performance will be to the design performance. Rearranging the terms in Equations (56) and (57), we get:

1 − √(J^CR_k / J*_k) ≤ √(J_k / J*_k) ≤ 1 + √(J^CR_k / J*_k)    (58)

Equation (58) provides insight into the factors that can cause the achieved performance to deviate from the design objective function. J^CR_k is the control relevant mismatch that determines the extent of the deviation. A non-zero J^CR_k could be the result of a poor model, unmeasured disturbances, etc. A small J^CR_k indicates highly robust performance from the MPC, while a large J^CR_k implies poor robust performance. The areas of control relevant identification and iterative control and identification (see Van den Hof and Schrama, 1995) deal with minimizing J^CR_k to obtain model estimates consistent with the control objective. In the subsequent sections, simulation examples and an industrial application will demonstrate some of the properties proved here.
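The bounds above hold for arbitrary signals, since they only use the triangle inequality on the stacked, weighted vectors. A small numerical sketch (all vectors and weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
ny, nu = 3, 2
Gamma = np.diag([1.0, 2.0, 0.5])   # output weighting
Lam = np.diag([0.3, 0.7])          # input-move weighting

r, y, y_hat = rng.normal(size=(3, ny))   # setpoint, measured and predicted outputs
du, du_star = rng.normal(size=(2, nu))   # implemented and optimal input moves

def wnorm_sq(v, W):
    """Weighted squared norm ||v||_W^2 = v^T W v."""
    return float(v @ W @ v)

J = wnorm_sq(r - y, Gamma) + wnorm_sq(du, Lam)                    # achieved, Eq. (54)
J_star = wnorm_sq(r - y_hat, Gamma) + wnorm_sq(du_star, Lam)      # design
J_cr = wnorm_sq(y - y_hat, Gamma) + wnorm_sq(du - du_star, Lam)   # control relevant mismatch

upper = (np.sqrt(J_star) + np.sqrt(J_cr)) ** 2
lower = (np.sqrt(J_star) - np.sqrt(J_cr)) ** 2 if J_star > J_cr else 0.0
```

Whatever the mismatch, the achieved objective always lands between the two bounds.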

Simulation Examples
The design case benchmark and the LQG benchmark are
applied to evaluate the MPC performance for two simulated
examples in this section. The effects of constraints and model
plant mismatch are highlighted.

Example 1: Mixing Process


The above approach was applied to a simulation example. The system under consideration is a 2 × 2 mixing process. The controlled variables are temperature (y1) and water level (y2), and the manipulated inputs are the inlet hot water (u1) and inlet cold water (u2) flow rates. The following model is available in discrete form:


P(z⁻¹) = [ 0.0235z⁻¹/(1 − 0.8607z⁻¹)    −0.1602z⁻¹/(1 − 0.8607z⁻¹)
           0.2043z⁻¹/(1 − 0.9827z⁻¹)     0.2839z⁻¹/(1 − 0.9827z⁻¹) ]    (59)


Table 1. Effect of constraints on MPC performance.

                  E(J*_k)/E(J_k)    E(J_LQG)/E(J_k)
Unconstrained        0.8426             0.6579
Constrained          1.00               0.4708

Figure 3. The input moves for the constrained controller during the
regulatory run.

Figure 3 shows the input moves during the regulatory run for
the constrained controller. On one hand, the constraints are
active for a large portion of the run and are limiting the
performance of the controller in an absolute sense (LQG). On
the other hand, the controller cannot do any better due to
design constraints as indicated by the design case benchmark.
Figure 2. MPC performance assessment using the LQG benchmark.

An MPC controller was used to control this process in the


presence of unmeasured disturbances. The controller design
parameters were:

p = 10, m = 2, Λi = diag[1, 4], Γi = diag[1, 2]

White noise sequences at the input and output with covariance equal to 0.1·I served as the unmeasured disturbances. 10 000 data points were generated in the simulations and used in the following analysis. First the LQG benchmark was found, and
the performance of a constrained and unconstrained MPC was
evaluated against this benchmark (see Figure 2). The
constraints on input moves were artificially imposed in order to
activate the constraints frequently. Based on the LQG
benchmark, the constrained MPC was performing poorly. This
is a case of unfair comparison, as the LQG benchmark is an
unconstrained lower bound on performance.
Performance assessment of the same controller using the
design case benchmarking approach yields contrasting results.
From Table 1 we see that for the unconstrained controller a performance index of 0.8426 revealed satisfactory performance, while in the presence of constraints the performance index is 1.
The constrained controller showed improvement according to
one benchmark, and deterioration with respect to the LQG
benchmark. The design case approach indicates that the
controller is doing its best under the given constraints while the
LQG approach, which is based on comparison with an
unconstrained controller, shows a fall in performance.
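The design-case index reported in Table 1 can be computed from logged per-sample objective values as a cumulative ratio. A sketch with synthetic data (the arrays are illustrative, not the simulation's actual values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-sample design and achieved objective values; here J*_k <= J_k by construction
J_design = rng.uniform(0.5, 1.0, size=1000)
J_achieved = J_design + rng.uniform(0.0, 0.5, size=1000)

# Cumulative performance index: alpha(k) = sum_{i<=k} J*_i / sum_{i<=k} J_i
alpha = np.cumsum(J_design) / np.cumsum(J_achieved)
overall = float(alpha[-1])   # overall index, analogous to E(J*_k)/E(J_k) in Table 1
```

Plotting alpha against k gives the kind of running index shown for the industrial application later in the paper.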


Example 2: Effect of Reduced Order Modelling


The following example illustrates the effect of model plant
mismatch on the proposed performance index. The true plant is a
third order, overdamped process, G(s) = 1/[(s + 1)(3s + 1)(5s + 1)],
whose discrete equivalent for a sampling time of Ts = 1 is:

G(z⁻¹) = (0.0077z⁻¹ + 0.0212z⁻² + 0.0036z⁻³) / (1 − 1.9031z⁻¹ + 1.1514z⁻² − 0.2158z⁻³)    (60)

This plant is approximated by the following first order model, which is estimated through identification techniques:

Ĝ(z⁻¹) = (0.0419z⁻¹ + 0.0719z⁻²) / (1 − 0.8969z⁻¹)    (61)

In practice, such approximations are common and the


controller design is often based on reduced-order models. It
should also be noted here that both the plant and the model
have non-minimum phase zeros. The presence of non-minimum
phase zeros is known to pose fundamental limitations on
achievable performance. The MPC tuning parameters were:
p = 8, m = 2, Λi = I, Γi = I
−1 ≤ Δu(k) ≤ 1,  −10 ≤ u(k) ≤ 20
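The mismatch between plant (60) and model (61) already shows up in their steady-state gains, obtained by evaluating each transfer function at z = 1. A quick sketch:

```python
# Steady-state (z = 1) gains of the plant, Eq. (60), and the reduced model, Eq. (61)
num_plant = [0.0077, 0.0212, 0.0036]
den_plant = [1.0, -1.9031, 1.1514, -0.2158]
g_plant = sum(num_plant) / sum(den_plant)     # continuous plant has unit gain, G(0) = 1

num_model = [0.0419, 0.0719]
den_model = [1.0, -0.8969]
g_model = sum(num_model) / sum(den_model)

gain_mismatch = g_model / g_plant             # roughly a 10% steady-state model error
```

This static error is one visible component of the model plant mismatch that the performance index responds to in Table 2.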

Once again, 10 000 data points were generated through closed


loop simulation of the above controller. Table 2 shows the
tracking and regulatory performances of the MPC based on the
design case benchmark for: 1) the nominal case or the design


Table 2. Tracking and regulatory performance index, α(k).

              Tracking    Regulatory    Combined
Nominal        0.9036      0.4724       0.8441
Achieved       0.7547      0.4621       0.7474
Constrained    0.8120      0.4621       0.8038

case; 2) the achieved case with model plant mismatch; and 3)


the achieved case for the constrained controller.
The MPC controller shows acceptable tracking performance but poor regulatory performance. For the regulatory case, unmeasured disturbances were added at the input and output with variances equal to 0.01. For the tracking case, a square wave of magnitude 1 was used. A comparison of rows 1 and 2 of Table 2 clearly shows the effect of model plant mismatch on performance. Since the mismatch is mainly in the process model and not the disturbance model, the regulatory performance shows negligible change while, as expected, the tracking performance is significantly affected. Again, the imposition of constraints led to improvement in performance (see rows 2 and 3). A comparison of columns 1 and 2 with column 3 shows that the tracking performance dominates the combined case, leading to a satisfactory overall performance. Figures 4a and 4c show the comparison of the design and achieved objective functions for different control weightings. For λ = 0, the achieved objective function is significantly higher than the design values and the system shows a poorly damped servo response (Figure 4b). For λ = 0.5, the design objective has higher values and is closer to the achieved values. The result is a comparatively satisfactory servo response (Figure 4d). It should be noted here that as a result of the higher λ, the design performance requirements were lowered in order to reduce the difference between the achieved and the design performance (performance degradation). For λ = 0, the design requirements proved to be too stringent and the achieved performance deteriorated considerably.

Figure 4. Comparison of design and achieved performance objectives for λ = 0 (a) and λ = 0.5 (c), and the corresponding servo responses achieved by MPC (b) and (d).

Application to QDMC Assessment


An industrial QDMC application on a recycle surge drum level
control is used to demonstrate the practical nature of the MPC
performance measure. The purpose of this industrial example is
to demonstrate the applicability of the proposed metric in a
real-time environment. Experience gained from a practical
setting is critical to understanding how to use the performance
measure for routine MPC monitoring. A QDMC controller in
Shell's hydrocracker unit (HCU) was chosen to assess QDMC
performance. The QDMC manipulates the mild vacuum column
(MVC) bleed flow and the two second-stage, weighted average
bed temperatures (WABTs) to control the recycle surge drum
level. The real CV in the controller formulation is the change in
level (Δh). Since the application controls one level with two
WABTs, an additional controlled variable, the ΔWABT, was
added. The ΔWABT is the difference between the train 1 and train 2
second-stage WABTs. Its setpoint may be adjusted to balance
the hydrogen consumption gap between the trains. Figure 5
shows a schematic of the process and the position of the mild
vacuum column with respect to the reactors whose WABTs are
the MVs. The QDMC is implemented once every 10 min. This QDMC application currently has two controlled variables (CVs), three manipulated variables (MVs), one permanently constrained manipulated variable (PCMV), three associated variables (AVs) and two feedforward variables (FFs). The prediction horizon p equals 20 and the control horizon m equals 5. Note that an associated variable is an output where only the constraints are important; it will not figure in the objective function under normal conditions.

Figure 5. A schematic overview of the process.
balance whenever the recycle drum level is steady, indicating
that the reaction severity is just enough to crack all recycled
material to extinction (Howie, 1995).
This condition is necessary to properly optimize the unit,
whether through manual adjustments to such variables as CFR
or product cutpoints, or through on-line moves made by the
HCU optimizer. The purpose of this application is to make
adjustments to reaction severity in order to bring the unit back
into balance after a disturbance has occurred.
The bleed flow rate has been included in this control application as an MV so that short-term fluctuations in the level


will be carried out of the unit by adjusting the bleed rather than
being kept within the unit. The application will adjust the
second-stage WABTs slowly over a longer time frame to keep
the daily average bleed flow at the desired target.
Total fresh HCU feed flow and the HGO draw flow are
included as feedforward variables in the QDMC application
currently. The controller will anticipate the effect of changes in
these variables when computing the size and direction of
moves to be made to reactor WABTs. Included amongst the
associated variables are:
1. Recycle drum level with the purpose of providing a means of
level flow smoothing, i.e., the level is allowed to float within
this range with fewer moves by the controller, but once
outside this range the control moves are more aggressive.
The level is constrained to lie within 40% and 60% of the
maximum.
2. To prevent the MVC filter temperature limit trip, the
temperature is constrained by a maximum limit.
3. To maintain MVC bottoms level, the valve position of the
level controller is constrained to lie above 5%. The level
could otherwise be easily lost due to excessive bleed flows.
The degree to which the constraint limits are followed and
the controlled setpoints are relaxed depends on how the
application was tuned. The QDMC application is also
programmed to deal with hard constraints on the manipulated
variables and their rate of change. Figure 6 shows the step
response models between the MVs, and FFs and the CVs and
AVs. A QDMC based on the step response models is used to
control the surge drum level. Data was collected for over two
weeks and the performance of the QDMC was evaluated based
on this data.

Figure 6. The step response models between the active inputs and
outputs.

Design vs. Achieved Objective Function


The on-line predicted values were available for the CVs, AVs and
MVs. These projections were used to calculate the design objective function⁴ and this was compared with the achieved objective function. Figure 7 shows the ratio of the cumulative design and achieved objective functions, α(k). The overall achieved performance is 87.44% of the design performance, amounting to an average of 12.56% degradation in performance. The sudden shift in the performance index was found to correspond to the large drop in the total fresh feed flow rate.
The sudden drop in FF1 was due to the flowmeter being
taken out of service for recalibration. The actual fresh feed is
unchanged but a false reading prompts the QDMC to make
unnecessary moves. From Figure 8 we can see that the design objective function becomes quite large (there is a large change in the predictions due to the change in FF1) while the achieved objective function is relatively unchanged: the change in FF1, not being real, does not appear in the measured CVs.
Figure 9 shows the performance indices based on the individual terms in the objective function. For the CVs (αy) the achieved performance is almost equal to the design performance (94.13%); for the MVs (αu) and the PCMVs (αp), the achieved performance is much worse than the designed performance, at 0.47% and 3% respectively. For the AVs alone (αa), the achieved performance is much better than the design performance (288.4%). Figure 10 shows the ratio of the MPC relevant mismatch and the design objective as a function of time. The ratio settles down to a significant non-zero number (about 2) over the two-week period. As mentioned in the discussion of the properties of the proposed performance index, this is indicative of poor robust performance, i.e., deterioration in future performance can be expected.

Figure 7. Performance measure based on comparison of the design and achieved objective functions.

Figure 8. Comparison of the design and achieved objective functions for the QDMC application.

Figure 9. Performance indices based on comparison of the different components in the objective function.

Figure 10. Ratio of the control relevant mismatch term to the design objective function.
In summary, it was found that the QDMC application for the
recycle drum level control is delivering satisfactory performance.
The performance index is 0.87, according to the objective
function method. The constraint handling, however, is not
satisfactory. Constraints on the drum level are violated 24.5% of
the time. Constraint violations also cause poor performance in
CV2, the difference between the WABT setpoints for train 1 and
train 2. Further investigation revealed that the possible reasons
may include a poor model for AV1 and/or unmeasured
disturbances. There appears to be a false sensor reading for FF1,
the total fresh feed flow at sampling instant 533. This was
confirmed by noticing that the achieved objective function is
unchanged whereas the design objective function undergoes a
large change at this point in time. A prediction error analysis
also led to a similar conclusion since the model predictions
showed a large upset and the measured values of the outputs
were relatively unchanged.

Performance assessment is only the first step in the MPC analysis stage (Patwardhan et al., 1998; Shah et al., 2001). Once poor performance has been established, proper diagnostics are needed to establish and address its root causes.
These could include poor models, unmeasured/measured
disturbances, poor tuning, sensor failures, valve issues amongst
others. Further research in this area is needed to be able to
address the issues in monitoring of model predictive controllers.

Conclusions
Performance assessment of model predictive controllers based
on the design case benchmark is proposed in this work. The
design case benchmark can be treated as a relative measure of
performance. The nominal or the design case serves as the basis
for comparison with the achieved performance. This approach
makes use of the prediction model used by the controller. The
multivariable nature of most chemical processes and the presence
of nonlinearities do not pose a hindrance to the application of
this method. Some theoretical properties of this index were
established. These include analytical estimation of the design
objective function, effect of the receding horizon policy on MPC
performance, and bounds on the proposed index. The
proposed measure of performance has been shown to be
sensitive to model plant mismatch, hard constraints, and
stochastic and deterministic disturbances through two simulation
examples. Finally, an industrial case study was used to illustrate
the usefulness of this performance metric.


Nomenclature
A, B, C, D, G     state space model matrices
A_F, B_F, C_F, D_F  state space model matrices
d̂_k               disturbance estimate in MPC
f_k               free response vector in MPC
H                 matrix formed from step response coefficients
I                 identity matrix of appropriate dimension
Ĵ_k               MPC design objective function
Ĵ*_k              MPC optimal design objective function
J_k               MPC achieved objective function
J^CR_k            control relevant mismatch in the MPC objective function
J^rec_k           MPC receding objective function
J                 vector of objective function values
K, K_1, K_2       model predictive controller gain matrices
m                 control horizon
n_u               number of inputs
n_y               number of outputs
p                 prediction horizon
R_i               input weighting matrix
r_k               setpoint vector
S                 dynamic matrix constructed from step response coefficients
s                 Laplace transform variable
u_k               process inputs
u_o               vector of past process inputs
Δu_k              differenced process inputs
Δu*_k             optimal input moves
Δu_o              vector of past differenced process inputs
x, x_I            appropriate state vectors
x̂                 state estimate
y_k               process outputs
ŷ_k               predicted process outputs
z                 Z-transform variable

Greek Symbols
α                 MPC performance index based on cumulative values of objective functions
α_u               MPC performance index for inputs
α_y               MPC performance index for outputs
σ²_x              covariance matrix for signal x
η                 MPC performance index based on comparison of design and achieved objective functions
ω                 state noise vector
ν                 output noise vector
φ, θ              signals formed from linear combination of state and noise vectors
ξ, ψ              appropriate state vectors
Γ, Γ_i            output weighting matrices
Λ, Λ_i            input move weighting matrices
Ω_k               constraint set

End Notes
1. A matrix A > 0 denotes that A is positive definite and A ≥ 0 that A is positive semi-definite.
2. The one-norm of a signal, y_k, is given by ‖y‖₁ = Σ_{k=1}^N |y_k|.
3. The RMS norm of a signal is defined as: rms(y) = lim_{N→∞} √[(1/N) Σ_{k=1}^N ‖y(k)‖²].
4. For proprietary reasons the details of the objective function are not discussed.

References
Desborough, L. and T.J. Harris, "Performance Assessment Measures for Univariate Feedforward/Feedback Control," Can. J. Chem. Eng. 71, 605–616 (1993).
Garcia, C.E., D.M. Prett and M. Morari, "Model Predictive Control: Theory and Practice – A Survey," Automatica 25, 335–348 (1989).
Harris, T.J., "Assessment of Closed Loop Performance," Can. J. Chem. Eng. 67, 856–861 (1989).
Harris, T.J., C.T. Seppala and L. Desborough, "A Review of Performance Monitoring and Assessment Techniques for Univariate and Multivariate Processes," J. Proc. Control 9, 1–17 (1999).
Harris, T.J., F. Boudreau and J.F. MacGregor, "Performance Assessment of Multivariate Feedback Controllers," Automatica 32, 1505–1518 (1996).
Howie, B., "Application Manual on HCU Recycle Surge Drum Level Control," Technical Report, Shell Canada, Scotford Refinery, Fort Saskatchewan, AB (1995).
Huang, B. and E. Tamayo, "Model Validation for Industrial Model Predictive Control Systems," Chem. Eng. Sci. 55, 2315–2327 (2000).
Huang, B. and S.L. Shah, "Performance Assessment of Control Loops: Theory and Applications," Springer-Verlag, New York, NY (1999).
Huang, B., H. Fujii and S.L. Shah, "The Unitary Interactor Matrix and its Estimation from Closed-loop Data," J. Proc. Control 7, 195–207 (1997a).
Huang, B., S.L. Shah and K.Y. Kwok, "Good, Bad or Optimal? Performance Assessment of Multivariable Processes," Automatica 33, 1175–1183 (1997b).
Kammer, L.C., R.R. Bitmead and P.L. Bartlett, "Signal-based Testing of LQ-Optimality of Controllers," in Proceedings of the 35th Conference on Decision and Control, Kobe, Japan (1996), pp. 3620–3624.
Ko, B.-S. and T.F. Edgar, "Performance Assessment of Cascade Control Loops," AIChE J. 46, 281–291 (2000).
Ko, B.-S. and T.F. Edgar, "Performance Assessment of Constrained Model Predictive Control Systems," AIChE J. 47, 1363–1371 (2001a).
Ko, B.-S. and T.F. Edgar, "Performance Assessment of Multivariable Feedback Control Systems," Automatica 37, 899–905 (2001b).
Kozub, D. and C.E. Garcia, "Monitoring and Diagnosis of Automated Controllers in the Chemical Process Industries," presented at the AIChE Annual Meeting, Chicago, IL (1993).
Mayne, D.Q., J.B. Rawlings, C.V. Rao and P.O.M. Scokaert, "Constrained Model Predictive Control: Stability and Optimality," Automatica 36, 789–814 (2000).
Patwardhan, R.S. and S.L. Shah, "Issues in Diagnostics of Model-based Controllers," J. Proc. Control 12(3), 413–427 (2002).
Patwardhan, R.S., G. Emoto, H. Fujii and S.L. Shah, "Performance Analysis of Model Predictive Controllers: An Industrial Case Study," AIChE Annual Meeting, Miami, FL (1998).
Qin, J.S., "Control Performance Monitoring – A Review and Assessment," Computers and Chemical Engineering 23, 173–186 (1998).
Qin, J.S. and T.A. Badgwell, "An Overview of Industrial Model Predictive Control Technology," in Fifth Int. Conf. on Chem. Proc. Control, J.C. Kantor and C.E. Garcia, Eds., Vol. 93, AIChE Symposium Series (1996).
Shah, S.L., R.S. Patwardhan and B. Huang, "Multivariate Controller Performance Analysis: Methods, Applications and Challenges," in Proceedings of CPC VI, Tucson, AZ (2001).
Tyler, M.L. and M. Morari, "Performance Monitoring of Control Systems using Likelihood Methods," in Proc. American Control Conf. (1995), pp. 1245–1249.
Van den Hof, P.M.J. and R.J.P. Schrama, "Identification and Control – Closed Loop Issues," Automatica 31, 1751–1770 (1995).

Manuscript received December 14, 2000; revised manuscript received October 25, 2001; accepted for publication July 15, 2002.
