
Advanced Regulatory Control (ARC) or Advanced Process Control (APC)?

R. Russell Rhinehart, rrr@okstate.edu


School of Chemical Engineering, Oklahoma State University

This was originally presented as Rhinehart, R. R., “Advanced Regulatory Control (ARC) or Advanced
Process Control (APC)?” Proceedings of the ISA Automation Week Conference, October 4-7, 2010,
Houston, TX. Then jointly authored and published as Rhinehart, R. R., M. L. Darby, and H. L. Wade,
“Editorial – Choosing Advanced Control”, ISA Transactions: The Journal of Automation, Vol. 50, No. 1,
2011, pp 2-10.

Keywords: Process Control, ARC, APC, Selection

Abstract:
Both Advanced Classical and Model-Predictive Control are important, useful, functional,
and powerful. The question many process control professionals ask is, “Where should I use
which?”
There are many control problems, and for each there is a wide range of technical
solutions, from simple to complex. Once the technical details are understood, the audience will
be ready to see the issues related to initial cost, maintenance, personnel training, etc., and will
be able to choose an approach that best addresses the guiding engineering principles of
K.I.S.S., sustainability, and balancing technology within the human situation.
This paper attempts to provide a comprehensive overview of how the progression in
complexity from simple PI to advanced classical to elementary model-based to model predictive
control addresses process, human, and economic features that would make one approach
preferred for a particular application.

Bottom Line:
Use the right tool for the job. Don’t use a wrench to hammer a nail. In process control,
there are many tasks, each with appropriate tools.
Start with process knowledge. This is essential to making the right decisions.

Scope:
ARC (Advanced Regulatory Control) seems to have a common meaning. It refers to
what used to be called advanced process control: gain scheduling, ratio, cascade, feedforward,
decouplers, override, and related and ancillary techniques such as anti-windup, bumpless
transfer, PID modifications, and tuning techniques. The ARC techniques were known prior to
computers and the modern era of state-space and model-predictive control. However, as Wade
indicates, “Most of the techniques included in what is called ARC were known during the time of
analog control. However the problems of individual component cost, component reliability and
consistent performance precluded the use of most of them, except for occasional Cascade
loops. Perhaps the heyday of these ARC techniques was after the introduction of the DCS and
prior to the widespread use of APC.”
By contrast, APC (Advanced Process Control) has many meanings. Within the model
predictive control (MPC) community, APC means MPC. Since the “big ticket” APC item within
the chemical process industry (CPI) is MPC, APC and MPC are synonymous to many of us, much
as Dacron is for polyester and Ping-Pong for table tennis. However, many
control experts recognize that the modern computer era also brings us other advanced
nonlinear and adaptive controllers, automation of supervisory real-time economic optimization of
controller setpoints, computer perception and monitoring of status and health to trigger
corrective action, and computer-based planning and scheduling. Since this session is primarily
about a comparison of feedback control strategies, I will include nonlinear and adaptive
algorithms with MPC in the definition of APC.

Challenges in Process Control:


Attributes of processes within the chemical process industries (CPI) pose particular
challenges, and a unique relative importance of problems, that are different from those
encountered in other control applications such as robotics, aerospace, and communications.
Further, differences in 1) economic and risk considerations and 2) standardization of process
and equipment design, between the CPI and other control applications, restrict the permissible
investment in sensing and control modeling. As a result, only a subset of control algorithms has
found acceptance within the CPI.
There are many other advanced control approaches used outside of the CPI. These
include sliding mode control (an adaptive approach), fractional order (a nonlinear approach),
and state-space (or modern) control and linear quadratic regulator (multivariable linear model-
based predictive approaches).
Understanding the issues that face CPI applications is essential to choosing a right
control strategy. Here is a listing of CPI application attributes that cause difficulty for control.
- Nonlinear Process – The process gain from the manipulated variable (MV) (controller output, process input) to the process variable (PV) (process response) and to the controlled variable (CV) (the process response that is controlled, the controller input) changes with operating conditions.
- Non-Stationary Process – Process attributes change in time (such as time-constants, gain, interactions, and deadtime) due to product grade, piping arrangement, unit switching for maintenance or production, operational stages within units, and fouling or other degradation of equipment or sensors.
- Ill-Behaved Dynamics – Large deadtime relative to other time-constants or the sampling interval, integrating, open-loop unstable, inverse acting, disparate settling times for different variables.
- Multi-Variable – Each of several MVs affects each of several CVs, requiring a coordinated plan for the several MV moves. There may be more MVs than CVs, the extra degree of freedom (DoF) situation, which requires an economic optimization of the best MV combination that meets the CV objectives. Alternatively, there may be fewer MVs than CVs, negative DoF, meaning that optimization is required to best limit deviations from setpoints (SP). The DoF situation changes in time as production, product mix, raw material, process maintenance, and environmental factors
(disturbances) shift active constraints. The economic weights for the DoF>0 optimization change with economic factors affecting the business (waste penalty, raw material cost, energy cost, product price, etc.). And the constraint-violation and SP-violation concern values needed for the DoF<0 optimization change with the same factors, but additionally with political factors (What is the boss’s concern today? How soon is the labor contract to be re-negotiated? How long has it been since a community or regulatory complaint? Etc.).
- Constraints – These include product specifications, operational limits on equipment (cavitation, vibration, pump capacity, valve position, temperature, vacuum), inventory storage (tank level), safety issues (pressure, explosive limits), occupational health issues (noise, fumes, physical exertion), etc. The constraints may be either soft (some violation for a short period is permissible) or hard (violation is either impossible or absolutely not permissible). Constraints may be on the MV, CV, or any PV. Constraints may happen now. Or, they may be encountered in the future due to the eventual expression of control action taken now, or of measured or unmeasured disturbances (any form of upset). Constraints may be contradictory, where increasing the MV relieves one constraint violation, but makes another worse.
- Individuality – No two CPI processes are the same (usually). By contrast, every robot, car, camera, or airplane of the same series has the same behavior (usually). Admittedly, some processes have standardized designs (air separations, fired boilers, water treatment, ion exchange units, refrigeration units, etc.), and standardized process units would be shipped with a unit-specific control system. But, by contrast, nearly every CPI plant has a unique design, with unique behaviors. Even if originally built from a standard design, maintenance procedures, additions, upgrades, and replacements cause processes to evolve individually. This makes each control application unique.
- Sensors – Seeking to minimize capital investment, the number of sensors employed is usually the minimum essential for control and analysis, and the sensors are often selected and located with priority given to cost, safety, and flexibility rather than to goodness of control. Orifice flow meters, for example, have a 3%-7% measurement uncertainty, and we use flange taps because maintenance convenience and safety override concerns about measurement accuracy. As long as the flow meter is internally consistent, the numerical value of flow rate (reflux perhaps) is irrelevant to composition control. Often sensors measure the wrong thing because it is easy to “prove” to regulatory bodies by convenient calibration. There are not enough sensors to measure everything, sensors are often located in a spot that leads to delay or lag, low-cost sensors often have high uncertainty, and sensors often measure a related but not primary value (pressure drop can infer viscosity, which infers polymer average molecular weight).
- Disturbances – These can take many forms: accidental human perturbations or errors, equipment failures, environmental upsets (rain storm, raw material variability), up-stream process changes, aging equipment, catalyst degradation, surging, up-stream controller cycling, valve stiction, etc. Disturbances can have short or long persistence. If their persistence is less than the sampling interval they appear as noise. Most disturbances are not measured, surprising the control system with a CV deviation to fix, and providing little information about the impending magnitude or trend in the CV deviation.
- Noise – This comprises seemingly random and independent perturbations to what might be an average signal. Noise could be due to mixing fluctuations causing variation in a composition measurement, or due to process turbulence impacting a flow rate measurement. Such sources of noise are due to the process-sensor combination. Alternatively, noise may be the result of mechanical vibration or stray electromagnetic corruption of transmitted signals. Often noise is close enough to being Gaussian
distributed that we can treat it as normal for analysis methods. There is a fine line between disturbance and noise. If the sampling frequency is increased, what appears to have been independent noise begins to express autocorrelation, and appears like a disturbance with some persistence (a small sketch after this list illustrates the effect).
- Cause and Effect Relations – Depending on the viewpoint, one can substantially shift the input-output perspective. For example, on a primitive heat exchanger control, the signal from the product temperature controller to the steam valve constitutes the single loop. In this case, steam pressure is a significant unmeasured disturbance to product temperature. However, in either a cascade or ratio arrangement, the steam flow controller operates the valve and gets the flow rate setpoint from the primary temperature controller. In the cascade strategy, any flow rate fluctuation due to steam pressure is “immediately” corrected, eliminating that disturbance from the product temperature.
- Initial Capital Cost – The desire to minimize investment cost often drives the design and equipment decisions. Such “rational” economic decisions often result in control-related undesirables, such as making CVs have a nonlinear and interacting response to MVs, limiting process information available to the control system, providing instruments and final control elements prone to faults, and confounding the measurement with inferential variables, delays, and noise.
- Faults – These take many forms:
  - Measurement Error – Due to sensor failure, fouling, aging, calibration drift …
  - Final Control Element Problems – Valve stiction, steam utility pressure, instrument air failure, capacity, …
  - Process – Bypass, blockage, cavitation, choking, collapsed internals, broken mixer impeller, …
  - Control System – High traffic leads to missed or delayed transmission, CPU overload, alarm/priority overload, …
  - Calibration – Discrimination error dominates a signal, PV is beyond the calibrated range, bias or systematic error, …
- Models – Since processes are nonlinear, multivariable, and unique, model development is expensive. Since processes are non-stationary, models are either short lived, require maintenance, or must be adaptive.
- Plant Staff Experience of the Operator and Process Engineer – Several human issues include: Human-Machine Interface (HMI) understandability, process and control method complexity, easy and understandable process overview, and the training and education required to implement and maintain the controllers and associated aspects (HMI, models).
- Return on Investment – Control systems and algorithms need to be economically justified. The essential minimal control is obvious to the process owner. How does one justify the benefit and pay-out time for each additional level of control complexity?
- Infrastructure – The choice must be compatible with the legacy of field instrumentation, control systems, software, procedures, and training materials for the site.
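To illustrate the noise-versus-disturbance point above, the sketch below simulates a disturbance with roughly 30 s of persistence and samples it at two control intervals; at the slow interval the lag-1 autocorrelation is nearly zero and the signal behaves like independent noise, while at the fast interval the persistence is evident. The time constant, sampling intervals, and seed are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a disturbance with ~30 s persistence (a first-order autoregressive
# series, i.e., a discretized Ornstein-Uhlenbeck process) on a 1 s time grid.
tau = 30.0                      # disturbance time constant, s (assumed)
dt = 1.0                        # simulation step, s
n = 20000
phi = np.exp(-dt / tau)         # AR(1) coefficient implied by tau
d = np.zeros(n)
for k in range(1, n):
    d[k] = phi * d[k - 1] + rng.normal(scale=np.sqrt(1 - phi**2))

def lag1_autocorr(x):
    """Lag-1 autocorrelation coefficient of a 1-D series."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Sample the same disturbance at a slow and a fast control interval.
slow = d[::300]   # 300 s sampling: persistence is lost, looks like noise
fast = d[::5]     # 5 s sampling: autocorrelation (persistence) is visible
print(f"lag-1 autocorrelation at 300 s sampling: {lag1_autocorr(slow):.2f}")
print(f"lag-1 autocorrelation at   5 s sampling: {lag1_autocorr(fast):.2f}")
```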

Control Algorithms:
It seems that no controller solves every one of the problems listed above. Accordingly,
the users must assess which problems are most important within their process context (this
includes technical, economic, safety, and political issues), and choose a control scheme that is
right for that confluence of issues and concerns.

I’ll list control algorithms that seem to have been accepted by industrial practice, and
organize them from simple to complex (in my opinion), and describe their relative advantages.
- On-Off – Sometimes referred to as bang-bang control, and representing the signal to a solenoid valve, relief valve, air compressor, refrigeration, room heater, etc. This is simple to understand and implement. The on-off switch could be initiated by a CV crossing a threshold and tempered by a dead band. In this case, there are no PID tuning parameters (except, perhaps, dead band). Alternatively, the on-off periods may be apportioned as a percent of a fixed cycle time, as in time-proportioning control. In this case, there is a higher-level controller, such as PID, that sets the percent on-time within each time cycle.
- PID – This includes P-only, I-only, PI, and all combinations. This is the classical controller, developed from the beginning of the field and substantially completed and analyzed by the 1940s. It includes anti-windup and bumpless transfer features, and so many variations (rate before reset, parallel gains, velocity mode, proportional band, setpoint softening, bumpless transfer, …) that one wonders why it is mentioned here, on the not-complex part of this list.
- Self-Tuning PID – These products use a range of strategies (expert systems, evolutionary optimization, auto-tune variation, etc.) to periodically adjust the controller tuning values to meet dynamic performance choices as the process changes.
- SISO Linear Model-Based Controllers – Targeted to compensate for deadtime is the Smith predictor, and for more complex dynamics, Internal Model Control (IMC) and other sampled-data model-based controllers. (IMC forms the basis for lambda-tuning of PID control.) In ideal cases of a low-order process model and a simple control objective, these algorithms generate PI or PID rules, and are no different from a PID. Where the model deadtime is a reasonable approximation to the process deadtime these controllers provide excellent results, because they control the model and don’t have to wait to see what the process does. There is only one tuning parameter, usually for CV damping, simplifying on-line tuning. However, generating the model is a step more complicated for the user. These also must have MAN-AUTO bumpless transfer and output limit features, as does PID.
- SISO Nonlinear Controllers – With a linear model, the following all reduce to PID or one of the subsets (P, I, PI, etc.). As with SISO PID, each needs to initialize model values and have other auxiliary operations such as setpoint tracking in the MAN mode, or a gradual return to the prior setpoint in the AUTO mode for bumpless transfer. And, each needs to limit the output between 0 and 100% (or -6 and 106%).
  - Gain Scheduled PID – Either from explicit mathematical models or from experience, it is relatively simple to understand how the process gain, transport delay, and time-constant change with operating conditions. This knowledge can be used to scale the controller gain, integral time, and derivative time by conventional reaction-curve tuning rules. The controller tuning response to the nonlinear or non-stationary process can be placed in a look-up table or calculated by equations (a sketch follows this list).
  - Relatively Simple Process-Model Based Controllers – There are several control approaches that use process models which are relatively simple for an individual to implement. Predictive Functional Control (PFC) (Richalet and O’Donovan, 2009) and Active Disturbance Rejection Control (ADRC) (Han, 1998; Gao, 2006) are among the few that I am aware of that are also marketed as products. The user must define the process model and embed it in the controller software. Alternatively, Generic Model Control (GMC) (Lee and Sullivan, 1988) and Process-Model Based Control (PMBC) (Rhinehart and Riggs, 1990) also provide simple
structures, and any of these can be implemented by an engineer with MS-level skills. In my experience, an engineer who is able to generate a first-principles process model can also write the controller code. My MS students do it regularly as class or lab exercises. PFC uses a coincidence point (a time in the future when it is desired to have the process match the reference trajectory) for the MV calculation. GMC with steady-state models uses a fraction beyond the setpoint as a target to calculate the MV. PMBC and GMC with dynamic models use a desired rate of change toward the setpoint to calculate the MV. Feedback correction could be used to bias the setpoint. In PMBC, feedback correction is used to adjust model parameters, keeping the model locally true to the process, which has the additional benefit of providing information about process health. Using a steady-state model, GMC reduces to a PI controller sending a signal which represents a biased setpoint to an output function that uses the nonlinear model to calculate the MV, which is easily implemented in most any digital controller.
  - Tailored Process-Model Based Controllers – Several companies specialize in implementing process-model based control, and have developed techniques to handle multivariable aspects, model evolution, and constraints.
  - Adaptive and Self-Tuning Controllers – These algorithms observe the MV and CV, and possibly disturbances, and use the information to adjust the controller.
  - Fuzzy Logic (FL) or Expert System (ES) Controllers – This set of controllers uses human knowledge. Heuristics or expert rules are embedded in a logic system that uses human understanding to calculate the MV. This can easily adjust for nonlinearity, and often includes features such as dynamic compensation for measurable disturbances or coordination of up-stream events. Most products show an example FLC as a nonlinear velocity-mode PI. But if nonlinear PI control is desired, I think that gain scheduling would be simpler. I think that FLC and ES are better in higher-level applications, such as coordinated control of parallel streams, changing cycling periods as carbon-bed adsorption capacity degrades, and automation of other supervisory activities of the process operator.
- Advanced Regulatory Control (MISO) – Cascade, Ratio, Feedforward, Override, Decouplers
  - Cascade – One controller sends a setpoint to a lower-level controller. When an intermediate PV can provide early indication that a change will affect the primary CV, use the MV to control the secondary CV, and use the primary CV controller to determine the setpoint for the secondary CV.
  - Ratio – One controller determines a desired ratio between PVs; then one PV times the ratio becomes the setpoint for the second PV. When a wild flow changes, the ratio times the wild flow determines the controlled flow rate setpoint. Ratio could be based on energy or other composite variables. Ratio control detects a change and takes immediate action. If there is a large dynamic difference in how the wild and secondary controlled variables affect the CV then include dynamic compensation. Ratio control can be viewed as feedforward, but without dynamic compensation.
  - Feedforward – A disturbance is observed; when it changes, a time-scheduled compensator (typically lead-lag-delay-gain) is added or multiplied to an MV (see the sketch after this list). The disturbance might be either a single measurement or a calculated composite of several measurements.
  - Override – Control intended to keep one CV at the SP could make an auxiliary PV violate a limit. In this case, control of the MV needs to be taken over by the auxiliary CV controller. Override could be the result of safety, specification,
economic conditions, the boss is watching, maintenance, etc. In any case it is especially important that the non-selected controller not wind up. When there are conflicting constraints, classic override by either high- or low-select blocks is inadequate.
  - Decouplers – In a multivariable interactive process with SISO control loops, one controller changes its MV to fix its CV. However, that MV action upsets another CV, and seems like a measurable disturbance to the other controller. Dynamic compensators can decouple the interaction by adding a time-compensated change on one MV so that the change in the other MV does not upset the one CV.
- SISO or MISO Constraint-Handling Control – Action now could lead to violating a constraint in the future. Model-based constraint-handling controllers forecast the impact of past MV and disturbance influences on the future of the process and choose a future MV sequence that will avoid or minimize constraint violation. If the model is linear the least-squares inverse is easy to obtain. If nonlinear but static, Dynamic Programming can calculate a best path, which can be placed in a look-up table. Either case imposes an inconsequential computational burden for on-line, real-time calculations. However, if the process model is non-stationary the controller must calculate an optimal MV sequence at each stage.
- Linear Multiple Input Multiple Output Control – If interactive, decouplers can solve that problem. If subject to measurable disturbances, feedforward can handle that. Override can handle constraints, when they are hit. But what about situations with a very high number of interacting MVs? What about avoidance of future constraints? What about situations of conflicting constraints? Finally, what about economic optimization when DoF>0, or best balancing CV objectives when DoF<0? Model Predictive Control handles all in a unified framework. MPC typically uses a finite impulse response (FIR) model of the process, a vector of CV values after an MV impulse, with a vector length (time duration) equal to the time the CV takes to return to within a noisy vicinity of its original value. Feedback of process-model mismatch (the residual) corrects all future model predictions by the current residual (a minimal sketch of this prediction-and-correction idea follows this list). Control action is tempered by either CV damping (a CV reference trajectory) or MV damping (a penalty for large MV moves). Optimization handles the DoF≠0 case. Vendors use a variety of optimizers. How many options are there to PID? (velocity mode, rate before reset, proportional band, setpoint softening, bumpless transfer, external reset feedback, …). Well, there are even more versions within the concept of MPC.
- Nonlinear Multiple Input Multiple Output Control – Some processes are nonlinear enough that linear control only has a limited useful range. Nonlinear MPC can accommodate the nonlinearity. There are a variety of approaches, from using neural network static models, to using first-principles models, to using expert estimates, to switching between several linear models as operating conditions change, to using grouped NNs to individually predict select future process values. Most commercial products use linear (stationary) dynamic representations and either nonlinear static gains or multiple linear models. However, some companies are offering true nonlinear control approaches.
There are many other advanced control approaches used outside of the CPI.
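To make the gain-scheduling entry above concrete, here is a minimal sketch of the look-up/interpolation idea. The scheduling variable (production rate), the tuning values, and the simple PI form are illustrative assumptions, not recommendations; in practice the schedule would come from reaction-curve tests or a process model at several operating points.

```python
import numpy as np

# Hypothetical schedule: at low rate the process gain is high and slow,
# so the controller gain is small and the integral time is long.
rate_pts = np.array([20.0, 50.0, 100.0])   # scheduling variable, % of capacity
Kc_pts   = np.array([0.4, 0.8, 1.5])       # controller gain at each point
Ti_pts   = np.array([12.0, 8.0, 5.0])      # integral time, min, at each point

def scheduled_tuning(rate):
    """Interpolate PI tuning from the schedule at the current operating point."""
    Kc = np.interp(rate, rate_pts, Kc_pts)
    Ti = np.interp(rate, rate_pts, Ti_pts)
    return Kc, Ti

def pi_control(error, integral, rate, dt=0.1):
    """One PI execution with gain-scheduled tuning; output clamped to 0-100%."""
    Kc, Ti = scheduled_tuning(rate)
    integral += error * dt / Ti
    out = Kc * (error + integral)
    return np.clip(out, 0.0, 100.0), integral

# Example: the same error produces a gentler move at low production rate.
for r in (25.0, 90.0):
    print(r, pi_control(error=2.0, integral=0.0, rate=r))
```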
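For the feedforward entry above, the following is a sketch of a discrete gain/lead-lag/deadtime compensator whose output would be added to the feedback controller output. The gain, lead, lag, and delay values are assumptions for illustration; they would normally be identified from the disturbance-to-CV and MV-to-CV dynamics.

```python
from collections import deque

class LeadLagDelayFF:
    """Discrete feedforward compensator: gain * (lead s + 1)/(lag s + 1) plus deadtime."""

    def __init__(self, gain, lead, lag, delay, dt):
        self.gain, self.lead, self.lag, self.dt = gain, lead, lag, dt
        self.buf = deque([0.0] * max(1, int(round(delay / dt))))  # deadtime buffer
        self.y = 0.0          # lag-filter state
        self.d_prev = 0.0     # previous (delayed) disturbance value

    def update(self, d_meas):
        # Apply deadtime to the measured disturbance.
        self.buf.append(d_meas)
        d = self.buf.popleft()
        # First-order lag applied to (d + lead * derivative of d), a simple
        # backward-difference discretization of the lead-lag element.
        a = self.dt / (self.lag + self.dt)
        self.y += a * (d + self.lead * (d - self.d_prev) / self.dt - self.y)
        self.d_prev = d
        return self.gain * self.y   # add this to the feedback controller output

# Example: compensator for a measured feed-temperature disturbance (values assumed).
ff = LeadLagDelayFF(gain=-1.8, lead=2.0, lag=6.0, delay=4.0, dt=1.0)
for t in range(10):
    d = 0.0 if t < 3 else 1.0      # step in the measured disturbance at t = 3
    print(t, round(ff.update(d), 3))
```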
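And for the MPC discussion, here is a minimal single-MV, single-CV sketch of the two mechanics named in the text: predicting future CV values by superposition of past MV moves on a convolution-type (step-response) model, a close cousin of the FIR form, and shifting the entire prediction by the current process-model mismatch. Commercial products add multivariable models, constraints, move suppression, and economic optimization; none of that is shown here, and all numbers are assumed.

```python
import numpy as np

# Unit-step-response coefficients of the process (assumed; normally identified
# from a plant test). The response settles in roughly 20 control intervals.
step = 1.0 - np.exp(-np.arange(1, 31) / 6.0)    # s_1 ... s_30

def predict_cv(cv_meas_now, past_dmv, horizon=10):
    """Predict future CV from past MV moves, bias-corrected by the current residual."""
    n = len(step)
    # Model estimate of the current CV from the stored past moves (superposition).
    cv_model_now = sum(step[min(k, n - 1)] * dmv
                       for k, dmv in enumerate(reversed(past_dmv)))
    bias = cv_meas_now - cv_model_now            # process/model mismatch (residual)
    pred = []
    for j in range(1, horizon + 1):              # open-loop prediction, no new moves
        cv_j = sum(step[min(k + j, n - 1)] * dmv
                   for k, dmv in enumerate(reversed(past_dmv)))
        pred.append(cv_j + bias)                 # every future point gets the same bias
    return np.array(pred)

# Example: two past MV moves, and a measurement 0.3 higher than the model expects.
past_moves = [0.5, 0.2]          # oldest ... most recent change in MV
print(predict_cv(cv_meas_now=0.75, past_dmv=past_moves))
```

The controller part (choosing the next MV moves to bring this bias-corrected prediction to the reference trajectory, subject to constraints) is what distinguishes the commercial offerings; the prediction-and-bias step above is common to essentially all of them.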

Comparisons of ARC and APC:

Subawalla, et al. (1996) evaluated MPC (a commercial product [DMC]), PMBC, IMC, and ARC algorithms on a commercial-scale plasma etch reactor and a lab-scale distillation column. Joshi, et al. (1997) evaluated ARC, IMC, MPC, PMBC, GMC, FLC, and MPC-NN control algorithms on pilot-scale fluid flow and heat exchange processes. Collectively the processes express classic problems associated with MIMO interaction, nonlinearity, varying dynamics, measurement noise, and substantial deadtime. Although the test processes expressed many control problems, they all were of low dimension (2 to 4 MVs) with simple constraint and DoF aspects. Both groups evaluated multiple performance criteria related to control implementation and operation.
Their evaluation criteria included:
- Cost (initial, development, equipment, process testing to obtain models)
- Operator issues (education, convenience, understandability, engagement, maintenance, supervision/intervention, ease of adjustment, knowledge development)
- Computational aspects (computer speed, memory requirements, ancillary routines, algorithm robustness, execution errors, guaranteed solutions within a defined time)
- DoF handling (excess and insufficient number of MVs, future constraints, hard and soft)
- Robustness (unexpected upsets, process & instrument faults, calibration errors, stays tuned, product changeover, production rate, …)
- Balance of CV and MV performance (ISE, MV travel, propagation of noise)
- Miscellaneous benefits (process knowledge validation and dissemination, personnel training, process diagnosis and health monitoring, predictive maintenance, politics).
Their conclusions include:
- None of the control approaches are operator-convenient.
- When a controller contains solutions to all process problems, it is technically equivalent to any other controller. With the same features, ARC matches nonlinear-APC.
- DoF handling (future impacts) requires model-predictive control.
- ARC was best in all categories except in Miscellaneous Benefits and DoF (constraint) handling.
- Where a single problem dominates, use the simplest control approach designed to handle that problem.

Evaluation:
You should not just make an APC or ARC decision based on CV performance. There
are many other benefits to consider.
How does advanced control impact the operators’ ability to take manual corrective action
in response to process events? This is grounded in operators’ understanding of both the
advanced controller and the process, and complicated controllers can either diminish or
enhance that ability.

Table 1 Advanced Control Strategy Impact on Humans

| Enhance Operator Ability | Diminish Operator Ability |
| --- | --- |
| A good dynamic model can provide a strong training simulator that enhances operator understanding and ability to respond to abnormal situations. | Advanced control is often overwhelmingly confusing and reduces process understanding, especially for relatively new personnel. |
| Automation can correct human error, by revealing the right way to coordinate MVs. | Control displays may not keep the operators engaged with the process, revealing nothing that improves their understanding. |
| The control system can disentangle interactions and move multiple MVs in a dynamically coordinated manner. | Operators are often not able to make sense of what the controllers are doing. |

The human machine interface (HMI) which displays the process and controller activity to
the operators is a critical element in the success of process management. If operators are not
engaged and the control action is not understandable, then their ability to manage abnormal
events in the process progressively diminishes.
How do the modeling outcomes of advanced control benefit process understanding by
the engineer and subsequent process engineering aspects such as troubleshooting, process
improvement, and abnormal situation management? The better the process knowledge, the better will
be process management decisions related to predictive maintenance, fault/situation diagnosis,
knowledge dissemination, supervisory economic modeling, and process management.
Controller models based on process first-principles are especially useful for knowledge
validation and dissemination, and automated process health monitoring.

Economic Benefits of Control and Advanced Control:


Bauer and Craig (2008) report results of a survey asking industrial APC experts (66
responded, 38 users, 28 suppliers) about how to determine the economic benefits of APC.
They covered all major sectors of the CPI. In their study, APC includes MPC, constraint control,
split range control, linear programming, nonlinear control, deadtime compensation, statistical
process control, FLC and Expert systems, IMC, Self-tuning and adaptive control, and others.
More than 50% of the 38 users indicate that they have in-house APC software and expertise to
implement it.
They report the primary reasons for economic benefits. Listed in decreasing priority:
- Throughput increase (70%)
- Process stability improvement (55%)
- Energy consumption reduction (55%)
- Increased yield of more valuable products (50%)
- Quality giveaway reduction (40%)
- Down-time reduction (18%)
- Better use of raw materials (15%)
- Responsiveness increase (12%)
- Reprocessing cost reduction (10%)
- Safety increase (9%)
- Operating manpower reduction (7%)
- Other (10%)
The numbers in parentheses are my approximations of their bar-graph data, and represent the proportion of their respondents listing that aspect as important. Obviously there are many
economic benefits. In any one particular application, one benefit may have been the critical
issue; but overall, the first 5 benefits dominate.
I have often heard that fixing basic problems with a control system in need of renovation is a major contributor to improvements. For instance, devices and inferential measurements may not be functioning properly, and the strategy may no longer match how the plant is now being operated; fixing such problems may significantly improve
process performance. Kern (2010), Darby, et al. (2009), Ford (2008), and Hugo (2000) each
mention this. And, Wade relates an instance where a process study with the purpose of
designing ARC tuned a controller in order to obtain meaningful study results. This action
substantially improved the process performance; so much so, that the process owner received
more benefit than expected from the ARC upgrade and wanted to end the project. Refreshingly,
even an APC vendor (Honeywell, 2010) credits economic improvement to “... several basic
control loops were modified and optimized, yielding improved plant controllability and stability,
which was necessary for a successful APC realization.”

Estimating Economic Benefits of Advanced Control:


Bauer and Craig (2008) also asked their APC experts about the extent of the economic benefit, the payback period, and how one can estimate the potential impact of an APC application.
In forecasting benefits, several anticipate that improved control will reduce the CV standard deviation by 50%, with respondents citing a range of 35 to 85%. The 50% reduction in variability per each level of advanced control application matches what I have heard over the past 20 years from experts in my circle. A reduction in variability means tighter control. This allows the process owner to choose from the above benefits, and the
popularity of the benefits indicates that owners chose to operate closer to specifications and
process constraints, which means increased throughput and yield, and reduced energy
consumption. Bauer and Craig report that “… throughput and quality, which are directly related,
were two frequently named profit factors.” Canney (2003) estimates that APC increases
throughput by about 3 to 5%. Some of the Bauer and Craig respondents cited 5 to 10%.
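The logic of "tighter control lets the owner move the setpoint closer to a specification or constraint" can be shown with a small, hedged calculation. The sketch below assumes a normally distributed CV and holds the probability of exceeding an upper limit constant while the standard deviation is halved; the freed-up room is the available setpoint shift. The limit, setpoint, and standard deviation are illustrative numbers, not values from the cited survey.

```python
from statistics import NormalDist

limit = 100.0        # upper operating limit on the CV (illustrative)
sp_old = 94.0        # current setpoint, leaving a comfortable margin
sigma_old = 2.0      # current CV standard deviation
sigma_new = 1.0      # the 50% reduction expected from an added level of advanced control

# Probability of exceeding the limit with the original control.
p_violate = 1.0 - NormalDist(sp_old, sigma_old).cdf(limit)

# Keep that violation probability constant: the new setpoint can sit the same
# number of (now smaller) standard deviations below the limit.
z = NormalDist().inv_cdf(1.0 - p_violate)
sp_new = limit - z * sigma_new

print(f"violation probability held at {p_violate:.2%}")
print(f"setpoint can move from {sp_old} to {sp_new:.1f}")
print(f"gain toward the constraint: {sp_new - sp_old:.1f} CV units")
```

With these assumed numbers the setpoint can move three CV units closer to the limit; translating that shift into throughput, yield, or energy benefit is the application-specific part of the estimate.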
Bauer and Craig’s (2008) respondents indicate that APC projects have a payback period
of 3-9 months. Honeywell (2010) reports 6 months on a particular project, giving credit to the
strong involvement and cooperation between the vendor and user staffs. Canney (2010)
estimates 9 months on his web site. This makes APC very high on the investment priority list.
However, it appears that many applications degrade into disuse disappointingly rapidly because resources are not allocated to maintain the APC (Hugo, 2000; Ford, 2008; Bauer and Craig, 2008; and personal contacts). Reasons for disuse include: Process operating conditions
(equipment reconfiguration, equipment revamp, or product mix) change and the models or
relationships become dysfunctional for the new process; rotation of operators and engineers is
not accompanied by adequate training; re-tuning or restructuring of lower level controllers
changes relative gains that the upper-level MPC experiences; changing valve sizes changes
constraints and nonlinearity. Unless the economic incentive remains, companies do not justify
the economic re-investment to maintain the control system. For instance, a change in product
distribution or demand, or an improvement in process capacity could remove the primary APC
justification – throughput increase. This would suggest that a criterion to consider when
justifying an APC application is the expected time over which the economic incentive will remain
strong.

Since the first stage in implementing MPC seems to be fixing the basic control system, it
is difficult to separate the impact of good regulatory control and MPC on the economic benefits.
Hugo (2000), for instance, asks, “Were any measurements added? Benefits are commonly
claimed for MPC pushing the plant to a new measured constraint, but is this benefit due to
MPC, or is it due to new knowledge of the constraint?”

Estimation of Costs of Implementing APC:


Bauer and Craig (2008) report that the main costs associated with implementing APC
are:
- Consultant manpower cost (68%)
- Cost of technology (58%)
- Control software (upgrade) (45%)
- Internal manpower cost (35%)
- Control hardware (upgrade) (35%)
- Maintenance cost (30%)
- Traveling expenditure (5%)
- Production loss due to installation downtime (4%)
- Other (5%)
Again, the numbers in parentheses represent the number of respondents who placed that category as one of the top three contributors to implementation cost.
Canney (2010) estimates an average MPC implementation cost of $450k.

When to use which? (Technical aspects):


Estimates for the number of MPC product units implemented worldwide since the 1980s range from 10,000 to 15,000. Estimates of the proportion of control loops using PID controllers within the CPI are about 95%, with the remaining 5% of loops using something else (MPC, adaptive, fuzzy, etc.). Application demographics favor implementing PID, but certain control
problems make other choices technically best.
MPC successes are often credited to fixing and upgrading the primary control devices
and strategy. Perhaps management buy-in to APC is the mechanism to get the resources
needed to take care of the basics.
Use the right tool for the job: A kitchen worker may need to open a can of touch-up
paint. Being familiar with the shape of a table knife, knowledgeable about where to find one, and understanding how to use it, the worker may choose the knife to open the paint can. A tool-person may use a screwdriver. But a paint-can opener is the best tool, considering
damage to the tool or to the can lid. However, if the tool works once, then there will be a
tendency to use it next time. I know I have a paint can opener somewhere, but I continue to use
the screwdriver, even though it bends the lip on the can top. Interestingly, Bauer and Craig
report that “… half of the APC experts choose a control technology that they are already familiar
with, based on favorable experience with that technology on similar processes.” Some people
say, “When you have a hammer, all the world looks like a nail.”
- PI(D) – Right 95% of the time. It is simple, well understood, pretty functional, and many enhancements are available for gain scheduling, overrides, etc. Products are widely
available from nearly any control vendor for PCs, PLCs, and DCSs. It translates an operator’s experience in one process to another.
- If SISO, Nonlinear, with well-behaved dynamics – Use gain-scheduled PI(D). Alternatively, use GMC or PMBC to clearly embody the process models.
- If SISO, Nonlinear, but only qualitatively understood – Use FLC.
- If SISO, Linear, and ill-behaved dynamics – Use IMC. This requires a model of the ill-behaved dynamics (integrating, inverse, delay).
- If SISO, Nonlinear, and non-stationary – Consider adaptive controllers.
- If MIMO, Nonlinear, Well-Behaved, not subject to constraints – Use GMC or PMBC. I am not aware of commercial products, so you need to structure it yourself (a sketch of the idea follows this list).
- If MIMO, Linear, Interactive, Subject to simple constraints, and 2 or 3 MVs – Consider ARC.
- If MIMO, Linear, Interactive, Subject to constraints (DoF<0) or opportunities (DoF>0), a large number of MVs, loops that switch between MAN and AUTO, and similar dynamics for all CVs – Use MPC (APC, HPC, DMC, …). These are expensive, which requires strong payout rates. Unless the process never changes (product mix, physical structure, operating rate), MPC performance degrades, and they require model re-development. Payout is usually the consequence of throughput enhancement, and 6-month pay-back periods are often indicated. However, without maintenance, staged decommissioning by default by the operators a year later is also often mentioned.
- If MIMO, Linear, Interactive, Subject to constraints (DoF<0) or opportunities (DoF>0), a large number of MVs, and dissimilar dynamics for the CVs – Use a hierarchical control structure with APC supervising a lower-level ARC.
- If MIMO, Nonlinear, Interactive, subject to constraints – Consider nonlinear MPC.
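For the "Use GMC or PMBC ... you need to structure it yourself" entry above, here is a minimal sketch of the steady-state-model form of GMC described earlier: a PI element produces a biased CV target, and the inverted nonlinear steady-state model converts that target into the MV. The square-root steady-state relation, the tuning numbers, and the pretend process response are assumptions for illustration only, not a recommended design.

```python
import math

# Assumed nonlinear steady-state model of the process: cv_ss = K * sqrt(mv)
K_model = 3.0

def model_inverse(cv_target):
    """Invert the steady-state model: the MV that would deliver cv_target."""
    mv = (max(cv_target, 0.0) / K_model) ** 2
    return min(max(mv, 0.0), 100.0)          # clamp to 0-100% output

class SteadyStateGMC:
    """PI element generates a biased CV target; the model inverse computes the MV."""

    def __init__(self, kc=0.8, ti=20.0, dt=1.0):
        self.kc, self.ti, self.dt = kc, ti, dt
        self.integral = 0.0

    def update(self, setpoint, cv_meas):
        error = setpoint - cv_meas
        self.integral += error * self.dt / self.ti
        # Target placed a fraction beyond the setpoint, per the GMC description,
        # then converted to the MV through the nonlinear steady-state model.
        cv_target = setpoint + self.kc * (error + self.integral)
        return model_inverse(cv_target)

# Example: drive a pretend first-order process toward a setpoint of 12.
gmc = SteadyStateGMC()
cv = 6.0
for _ in range(8):
    mv = gmc.update(setpoint=12.0, cv_meas=cv)
    cv += 0.3 * (K_model * math.sqrt(mv) - cv)   # crude first-order process response
    print(round(mv, 2), round(cv, 2))
```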

Related to choosing an appropriate control strategy, Darby, et al. (2009) discuss the “…
significant ‘art’ aspect to the application …” and that “… both technical and organizational issues
that are critical …” Experience with the process, classical control and model-predictive control
is needed to consider whether sensors are in the right locations, measuring enough variables,
and reliable enough to be able to provide accurate, fault-free, and complete information in a
timely manner. Can reliable inferential measurements be generated from time compensation of
the measurements? Are valves adequately sized and functioning to prevent stiction and constraint issues? How should plant tests be designed to reveal plant dynamics without hitting future constraints? When is the model adequate? In a hierarchical structure, which variables should be included in the lower-level ARC and which in the upper-level APC? The answers to these questions depend on what operating conditions will affect the plant today and in the future, which shape the relative importance of the pros and cons associated with each choice.

Why not use MPC?


The reasons to use MPC are grounded in model-predictive action, DoF handling
(constraints and economics), coordinated MIMO control, and reduction in CV variability. But
there are things that MPC is not best at doing, providing a set of reasons to not use MPC.
Including information from Ford (2008) and Hugo (2000), these include:
- Cascade and hierarchical applications, which have intermediate variables and loops with dynamics that are fast relative to the primary or upper-level loops.
- The primary problem is nonlinearity (linear MPC is not the solution; perhaps nonlinear MPC is applicable).
- Processes that cycle through distinct stages, and would need dynamic models for each stage.
- Processes with three or fewer interacting MVs.
- Continual engagement of operators on the process is desired to keep them tuned in and able to react for abnormal event management.
There are some residual reasons that seem to be disappearing.
- Processes that are likely to change frequently or soon, to the extent that the model needs to be changed. Nonlinear MPC and multi-model MPC can cope with this. Regardless, if a significant change is impending, ARC will likely need retuning or restructuring also.
- The controller undesirably causes the process to jump from one set of constraints to another as supervisory optimizers economically adjust setpoints. This is somewhat of a complaint about earlier MPC implementations, disappearing as the technology evolves.
- Critical sensors or analyzers are subject to frequent faults. If ARC has the same functionality as an MPC, it needs the same sensors. For equivalent performance, the problem of sensor faults applies equally to ARC and MPC.

Commercial MPC Products


From the Wikipedia Web site (2010):
“Commercial MPC packages typically contain tools for model identification and analysis,
controller design and tuning, as well as controller performance evaluation. The commercially
available packages include:
- FLSmidth Automation ECS/Process Expert for Cement and Mineral Applications
- Connoisseur control and identification package (Invensys)
- INCA (linear, non-linear, Batch) from IPCOS
- Pavilion8 (Pavilion Technologies)
- ADMC & DMCX1 (both Cutlertech)
- DMC Plus (Aspen Technology)
- RMPCT (Honeywell)
- 3dMPC & Expert Optimizer (both ABB)
- DeltaV Predict and PredictPRO (Emerson)
- APC Library (Siemens|PCS 7)
- MACS (Capstone Technology)
- eMPC (eposC) and Control Station's LOOP-PRO
- ControlMV, PharmaMV and WaterMV from Perceptive Engineering
- MATLAB Model Predictive Control Toolbox
- Prime (RandControls)”

With help from Qin and Badgwell (2003), here are my additions to the Wikipedia list, in alphabetical order by vendor name, including adaptive, nonlinear, process-model based, and MPC products:
- ABB (Optimize IT)
- Adersa (HIECON, and PFC)
- Aspen Technologies (AspenOne, Aspen Target, DMC-Plus)
- B D Payne and Associates
- ControSoft (Mantra)
- CyboSoft (Model Free Adaptive Control)
- Dot Products (Nova, STAR)
- Emerson (Delta-V NN and FLC, and EnTech)
- Expertune (Plant Triage)
- GE (Continental Controls process-model based control, and MVC)
- Gensym (G2 products)
- Honeywell (Profit MAX, Profit Suite)
- Hyperion (DMCplusTM)
- Ipcos (INCA)
- Knowledgescape
- Knowledge Process Solutions (IPC)
- LineStream (ADRC)
- Matrikon (ProcessACT)
- Perceptive Engineering (Perceptive)
- Shell Global (SMOC-II)
- Universal Dynamics (Brainwave)
- Yokogawa (APCSuite)

Additional Perspectives:
Development needs for control:
- Autonomous health monitoring of the process and the control system. Cyber employees that observe, evaluate, and advise operators and engineers.
- Sustainment – monitoring and improving both ARC and APC, perhaps within 6-sigma plans.
- Abnormal event (fault, disturbance) recognition, diagnosis, and compensation.
- Normal event (stage completion) recognition and action trigger. The event to be recognized might be steady state, transient state, draining complete, or emulsion stabilization.
- Control of perceived situations from visual, acoustic, or sniffer diagnosis of phenomena (cavitation, impending log jams, impending undesired confluence of events, impending flooding, clinker formation, foaming, froth, agglomeration, taste, customer satisfaction). This is in contrast to controlling state variables.
- Automation of every routine function of the operators and engineers (data analysis, transition start & stop, balancing feeds, adjusting cycle times, initiating calibration, last night’s loop performance, tweaking setpoints based on control chart data).
- Determine how economic uncertainty can be used to temper control action. Evaluating the value of intermediate products is even more difficult than determining the value of the product (considering Sales activities). But RTO and APC actions are strongly grounded in economic values. How can we prevent RTO and APC from bouncing operating conditions between constraints when there is an appearance of a penny to be saved?
- Develop robust, accurate, in-process cost accounting systems to determine the value of intermediate and final products.
- Control to achieve improvement, not to hold at the status quo. This would be a supervisory recognition of desired and undesired outcomes of control action, scheduling, etc. by monitoring waste, costs, safety, inventory, etc., which would lead to changes in the management rules.
- Automate model development and adjustment by computer observation of data. Automate validation of new observations with data and comparison to historical models. Using utility as the evaluation criterion, only update the model if utility reveals that the update is justified.
- Integration of laboratory analysis as feedback action.

According to Qin and Badgwell (2003), Bauer and Craig (2008), Ford (2008), and Darby, et al. (2009), here are additional development needs for control:
- Standard methodology or tool to perform the cost-benefit analysis of a potential APC or RTO application
- Look-up table of benefits from post-application audits, on a per-unit, per-situation basis, to facilitate estimation of the economic benefit of new applications
- Continuous monitoring of economic benefit, total and for each loop
- Evaluation of the economic impact of model degradation (actually plant changes that make the model less than ideal)
- Integration of planning, scheduling, RTO, and APC. RTO is a SS model sandwiched between dynamic operations. RTO uses instantaneous prices and costs, but APC action is devoid of economics or based on old values. RTO changes make APC bump from one constraint to another, with no assessment of the impact on utilities cost or product variability. Scheduling creates a wave of change that progresses through the plant, but RTO uses SS models.
- Improve the HMI so that operators and process engineers can understand it.
- Use of multiple objective functions, for priority shedding of constraints for instance
- Create adaptive MPC that auto-corrects the model, preventing degradation.
- Create tuning procedures that are easy to implement (prevent ill-conditioning, reflect operational priorities, do not need extensive simulation testing)
- Determine how to know when the model is good enough to stop plant tests.
- Plantwide control
- Diagnosing the problem when a controller is performing poorly.
- Automate the hierarchical structure. What variables should be part of the conventional ARC and what should be inputs and outputs of the supervisory MPC? Should there be one supervisory MPC, or does the plant isolation of effects indicate that several smaller MPCs, one for each section, would be better?
- Can MPC action be made more aggressive when the CV is changed by unmeasured disturbances?
- Automatic and robust updating of inferred property rules (inferential sensors, soft sensors) which estimate the CV value for the controllers.
- Automatic and robust updating of SS models used in RTO to keep them true to the plant.
- Improve robustness to field instrument problems (sensor transmitters, communication network, final elements)
- Static transformations to linearize the process I/O so that linear MPC is applicable (see the sketch after this list)
- Models of the unmeasured disturbance so that future model predictions are corrected by future estimated residuals.
- Including consistency relationships in the model. Include material and energy balances, VLE, unit operation models, steady-state gains, etc. as constraints in model optimization.
- Develop a method for closed-loop plant identification.
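As an illustration of the "static transformations to linearize the process I/O" item above, the sketch below wraps an assumed equal-percentage valve characteristic with its inverse so that a linear controller or linear MPC sees an approximately constant gain from its output to delivered flow. The rangeability value and the characteristic are assumptions; any known static nonlinearity (a pH titration curve, a valve-to-temperature map) could be treated the same way.

```python
import math

R = 50.0   # assumed valve rangeability for an equal-percentage characteristic

def valve_fraction(stem):
    """Equal-percentage characteristic: fraction of maximum flow vs. stem position (0-1)."""
    return R ** (stem - 1.0)

def linearizing_transform(u):
    """Map a linear controller output u (0-1, the desired flow fraction)
    to the stem position that delivers it, i.e., the inverse characteristic."""
    u = min(max(u, valve_fraction(0.0)), 1.0)   # keep inside the achievable range
    return 1.0 + math.log(u) / math.log(R)

# The controller-to-flow gain is now ~1 everywhere, instead of varying ~50:1.
for u in (0.1, 0.3, 0.6, 0.9):
    stem = linearizing_transform(u)
    print(f"u={u:.1f}  stem={stem:.3f}  delivered flow fraction={valve_fraction(stem):.3f}")
```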

The topics discussed in this paper also present a challenge to control educators related
to the preparation of students within undergraduate engineering programs for automation and
control careers. Here are some perspectives on how education should be changed to support
the automation engineering workforce needs:

- Let go of the technology legacy of PID control to a setpoint, and prepare engineers to become the parents and coaches of intelligent controllers who can babysit their process. Students will need to understand the raison d'être of their darling little process. They will need to understand and recognize misbehavior and know how to correct and prevent it. They will need to understand health, recognize symptoms, diagnose the disease, and implement a cure. Control theory is not the essential issue for undergraduates seeking to go to industry. Present the math and analysis that is fundamental to control as a secondary, supporting theme. Don’t let the joy of the mathematics mask the primary course objective. Drop frequency analysis and z-transforms from undergraduate courses. Diminish Laplace transforms to the role of a historical language for communicating process and controller dynamics. For the plant engineer, Laplace transforms need to be understood only as a carrier of information such as process order, deadtime value, and gain value.
- Add process control laboratory experience to the undergraduate program. Automate Unit Operations Laboratory process equipment. Use pilot-scale equipment. Don’t use bench-top engineering-science experiments or computer simulators for the chemical engineering lab. Students need to experience valves, sensors, data logging, loop structure and tuning, signal transmission, etc.
- Add Automation Engineering degree programs to universities, or adequate courses to obtain a minor in automation. The one control course in the ChE program is adequate to reveal the “tip of the iceberg” of PID feedback control to students, but it does not usually cover instrument system calibration, ARC, APC, optimization, DCS structure or operation, electronic aspects (grounding, wiring protocol, isolation), Safety Instrumented Systems, health monitoring, permissible industrial tuning practices, etc.

Acknowledgments:
I greatly appreciate the review of this paper and feedback from Harold Wade, Mark
Darby, Dave Schnelle, Jacques Smuts, and Alan Hugo. They broadened my perspectives with
their experience; however, I fully accept responsibility for any incompleteness that remains.

References:
1. Bauer, M., and I. K. Craig, “Economic assessment of advanced process control – A
survey and framework”, Journal of Process Control, 18 (2008) 2-18.
2. Canney, W. M., “The future of advanced process control promises more benefits and
sustained value”, Oil & Gas Journal, 101 (16) (2003) 48-54
3. Canney, W. M., WMCanney@ModelPredictiveControl.com, accessed 5 June 2010
4. Darby, M. L., M. Harmse, and M. Nikolaou, “MPC: Current Practice and Challenges”,
ADCHEM 2009, Proceedings of the International Symposium on Advanced Control and
Chemical Processes, an IFAC Symposium, July 12-15, 2009, Koc University, Istanbul,
Keynote Lecture 3.1, paper 239.
5. Ford, J. R., “APC: A Status Report (The Patient Is Still Breathing!)”, a white paper,
Maverick Technologies, 12/06/08, www.mavtechglobal.com.
6. Gao, Z., “Active disturbance rejection control: A paradigm shift in feedback control
system design”, Proceedings of the 2006 American Control Conference (pp. 2399–
2405). Minneapolis, MN, USA, 2006.
7. Han, J. Auto-disturbance rejection control and its applications. Control and Decision,
1998, 13(1), 19–23 (In Chinese).
8. Honeywell Process Solutions, BASF Ammonia Plant Increases Production and Achieves
ROI in Six Months with Profit Controller, http://hpsweb.honeywell.com/Cultures/en-
US/NewsEvents/SuccessStories/Success_BaASF, accessed 4/28/2010
9. Hugo, A., “Simpler control methods often provide better results”, Hydrocarbon
Processing, 2000, January, pp 83-88
10. Joshi, N. V., P. Murugan, and R. R. Rhinehart, “Experimental Comparison of Control
Strategies,” Control Engineering Practice, Vol. 5, No. 7, 1997, pp. 885-896.
11. Kern, A., “An inferential update”, InTECH, May/June 2010, Vol. 57, No. 3, 2010, pp. 14-16
12. Lee, P. L., and G. R. Sullivan, “Generic Model Control”, Computers and Chemical
Engineering, 1988, Vol. 12, No. 6, pp. 573-580.
13. Ou, J., and R. R. Rhinehart, “Grouped Neural Network Modeling for Model Predictive
Control”, ISA Transactions, Vol. 41, No. 2, April 2002, pp 195-202.
14. Qin, S. J., and T. A. Badgwell, “A survey of industrial model predictive control
technology”, Control Engineering Practice, 11 (2003) 733-764.
15. Rhinehart, R.R., and J.B. Riggs, "Process Control Through Nonlinear Modeling,” Control,
Vol. III, No. 7, July 1990, pp. 86-90.
16. Richalet, J., and D. O'Donovan, Predictive Functional Control: Principles and Industrial
Applications, Springer, New York, NY, USA, 2009.
17. Subawalla, H., V. P. Paruchurri, A. Gupta, H. G. Pandit and R. R. Rhinehart,
“Comparison of Model-Based and Conventional Control: A Summary of Experimental
Results,” Industrial and Engineering Chemistry Research, Vol. 35, No. 10, October,
1996, pp. 3547-3559.
18. Wade, H. L., Personal communications from 1987 to today.
19. Wikipedia, “Model Predictive Control”,
http://en.wikipedia.org/wiki/Model_predictive_control#Overview, page was last modified
on 15 May 2010 at 11:18, accessed 2010-06-02
