APC Identifier
Users Guide
an Integrated Part of
Profit Design Studio (APCDE)
TPSWIDNT
Revision 1.5
5/01
AP09200
While this information is presented in good faith and believed to be accurate, Honeywell
disclaims the implied warranties of merchantability and fitness for a particular purpose and
makes no express warranties except as may be stated in its written agreement with and for
its customer.
In no event is Honeywell liable to anyone for any indirect, special or consequential
damages. The information and specifications in this document are subject to change without
notice.
Honeywell
Industrial Automation and Control
16404 N. Black Canyon Hwy
Phoenix, AZ 85053
Table of Contents
REFERENCES....................................................................................................................... XIII
Documentation ...................................................................................................................xiii
Title...................................................................................................................................xiii
Number .............................................................................................................................xiii
General .............................................................................................................................xiii
Open .................................................................................................................................xiii
TPS System .......................................................................................................................xiii
Embedded Uniformance ......................................................................................................xiii
Data Operation Tools ............................................................................................................5
Profit Toolkit .........................................................................................................................5
Profit Sensor ........................................................................................................................5
Overview ............................................................................................................................7
In This Section......................................................................................................................7
System and Software Requirements .................................................................................8
Software Requirements .........................................................................................................8
Do I have to Install the Identifier Separately? ............................................................................8
PC Requirements..................................................................................................................8
Quick Reference to Installation..........................................................................................9
How to Use the Quick Reference ............................................................................................9
Quick Reference Table ..........................................................................................................9
Installing the Profit Design Studio and the APC Identifier................................................10
PC Installation ....................................................................................................................10
Installing the Dongle ............................................................................................................11
Check the Log File ..............................................................................................................13
Check the ini File ................................................................................................................14
Other Options .....................................................................................................................15
Caution ..............................................................................................................................15
Starting the PC Application...................................................................................................15
Model Structures..............................................................................................................24
Overview............................................................................................................................24
FIR Models ........................................................................................................................24
FIR Structure ......................................................................................................................24
PEM Models .......................................................................................................................26
PEM Structure ....................................................................................................................26
Model for Order and Variance Reduction................................................................................28
ARX Parametric Models (Discrete Time) ................................................................................28
Output Error Models (Discrete Time) .....................................................................................28
Laplace Domain Parametric Models ......................................................................................29
Final Model Form ................................................................................................................29
3.5 Solutions ..........................................................30
Overview............................................................................................................................30
Linear Solutions – FIR Models .................................................30
Linear Solutions – PFX Models (Pre-Filtered ARX) ....................................32
Nonlinear Solutions .............................................................................................................33
Solution Procedure..............................................................................................................33
PEM Formulation ................................................................................................................35
OE Formulation ..................................................................................................................36
Laplace Formulations ..........................................................................................................37
Starting Conditions ..............................................................................................................37
Delay Estimation.................................................................................................................37
3.6 Model Properties ..............................................40
Overview ...........................................................................................................................40
FIR Bias ............................................................................................................................40
FIR Consistency .................................................................................................................41
PEM ..................................................................................................................................41
Summary ...........................................................................................................................42
Factorizations ..................................................................................................................49
Background........................................................................................................................49
Normal vs. Orthonormal .......................................................................................................49
Sensitivity and Accuracy ......................................................................................................50
An Ill-Conditioned Example ...................................52
QR Solution .......................................................................................................................53
Cholesky Solution ...............................................................................................................54
SVD Solution......................................................................................................................56
Sensitivity of Ill-Conditioned Problem .....................................57
Pseudorank ........................................................................................................................58
A Rank Deficient Example....................................................................................................59
Zero Value Solution.............................................................................................................61
Minimum Norm – Minimum Length QR Solution .........................................62
MATLAB Solutions ..............................................................................................................63
Perturbed Solution and Pseudorank ......................................................................................64
Timing ...............................................................................................................................65
3.9 Summary .........................................................67
Future Perspective ..............................................................................................................68
Overview..........................................................................................................................69
In This Section ...................................................................................................................69
Profit Design Studio (APCDE) ..............................................................................................69
Starting an Identification Session ....................................................................................70
File Types and File Extensions .............................................................................................70
Creating a Profit Controller (RMPCT) Model File.............................................................72
Creating an RMPCT Model File ............................................................................................72
Data Source – Data Files ......................................................73
Data Source – Manually Entered ..........................................74
Entering or Changing Variable Information .............................................................................74
Creating a Robust PID Model File ...................................................................................77
Creating an RPID Model File ................................................................................................77
Data Source – Data Files ......................................................78
Data Source – Manually Entered ..........................................79
Reading in Data...............................................................................................................80
Getting Test Data................................................................................................................80
Single Point Data Files ........................................................................................................80
Single Point Data – An Example File ......................................80
Multiple Point Data File ........................................................................................................81
Multiple Point Data – An Example File ....................................82
Saving an .mdl or .pid File ....................................................................................................82
Overview ..........................................................................................................................97
In This Section....................................................................................................................97
Basic Views .....................................................................................................................98
Primary Functions ...............................................................................................................98
Viewing, Selecting and Marking Data ............................................................................103
Working with different Windows ..........................................................................................103
Plotting Raw Data .............................................................................................................103
Single-Graph Plots ............................................104
Changing the Plot Size .....................................................................................................105
Viewing Single-Graph Plots ................................................105
Reconfigure Single-Graph Plots ..........................................107
Plot Modes .......................................................................................................................108
Selecting Time Ranges ......................................................................................................109
Selecting Ranges ..............................................................................................................109
Reading the Plots..............................................................................................................110
Marking Data Bad at the Global Level ..................................................................................111
Marking Data Bad at the Regression Level ...........................................................................113
Marking Data Bad at the Prediction Level .............................................................................114
Scatter Matrix ...................................................................................................................116
Multi-Graph/Scatter Plots ...................................116
Multi-Graph Mode .............................................116
Scatter Plot Mode .............................................................................................................117
Overview ........................................................................................................................119
In This Section..................................................................................................................119
Data and File Manipulation .................................................................................................120
Edit Functions ................................................................................................................121
Basic Edit Characteristics ..................................................................................................122
Special Edit Functions .......................................................................................................126
Copy Trial Information .......................................................................................................127
Edit Variable Attributes ......................................................................................................129
Entering or Changing Information ........................................................................................130
Document without raw data ................................................................................................131
Empty Document ..............................................................................................................131
Combining Files and Rearranging Variables/Data/Models ...........................133
Copying Models/Data From One File to Another Using Copy/Paste .........................................133
Copying Models/Data From One File to Another Using Drag/Drop ...........................133
Rearranging Models and Variables Within a Given File Using Drag/Drop .................135
Copy Data From One File To Another Using Drag/Drop .........................................135
Merging Data....................................................................................................................136
Merging Data Marks/Selection Ranges ................................................................................138
Overview........................................................................................................................141
In This Section .................................................................................................................141
Basic Functions ................................................................................................................141
Supported Operations .......................................................................................................141
Block Manipulations.......................................................................................................142
Invoking Block Manipulations .............................................................................................142
Options and Their Use .......................................................................................................143
Vector Calculations........................................................................................................144
Source and Destination Variables – Remembering Past Events ................................145
Invoking Vector Calculations ..............................................................................................145
Vector Functions ...............................................................................................................147
Transformations................................................................................................................148
Special Transformations ....................................................................................................151
Polynomial .......................................................................................................................151
Piecewise Linear...............................................................................................................161
Installed Valve Characteristics ............................................................................................171
Transformations without Data .............................................................................................172
Filter................................................................................................................................174
Statistics ..........................................................................................................................180
Edit Data .........................................................................................................................181
Combine Variables ............................................................................................................185
User Notes .......................................................................................................................186
Saving and Recovering Vector Calculations .........................................................................187
Merging Vector Calculations ...............................................................................................193
Saving Transformations for Use with Profit Controller .............................194
Overview........................................................................................................................197
In This Section .................................................................................................................197
Main Functions .................................................................................................................197
Overall Options .................................................................................................................197
Load & Go .......................................................................................................................198
Overall Model Setup ......................................................................................................199
Setting Overall Options ......................................................................................................199
Data Rate / Trial Specification ............................................................................................199
MIMO Discrete Model Specification .....................................................................................201
Initial Conditions and Model Forms .....................................................................................201
FIR Setup ......................................................................................................................202
Configuring FIR Models .....................................................................................................202
Max Settle T (Settling Time) ...............................................................................................202
# of Coefficients ................................................................................................................203
FIR Model Form ................................................................................................................203
FIR Initial Conditions .........................................................................................................204
PEM Setup.....................................................................................................................205
General Guidelines ...........................................................................................................205
Auto Setup .......................................................................................................................207
Detailed Setup ..................................................................................................................207
PEM Initial Conditions and Model Form ...............................................................................209
Overview ........................................................................................................................217
In This Section..................................................................................................................217
About the FIR Model..........................................................................................................217
About the PEM Models ......................................................................................................217
Procedure ......................................................................................................................219
Fitting the FIR/PEM Models ..................................................219
Fit FIR/PEM Models Dialog Box and Associated View .............................219
Show & Select Vars...........................................................................................................219
Set Overall Options ...........................................................................................................220
Set Options per Sub Model ................................................................................................221
Options per MV/DV ...........................................................................................................222
Excluding Data From the Regression ...................................................................................223
Fit FIR/PEM Models ..........................................................................................................225
Model Example .................................................................................................................225
Model Descriptors .............................................................................................................226
Checking Trial-Dependent Information .................................227
FIR/PEM Step Responses .................................................................................................229
Interpreting Results ...........................................................................................................230
Statistics ........................................................................................................................232
Background ......................................................................................................................232
Guidelines ........................................................................................................................233
Special Consideration ........................................................................................................234
Interpretation of Model Rank...............................................................................................235
Overview..........................................................................................................................236
Correlation View MV/DV to MV/DV ......................................................................................237
Correlation View CV to MV/DV ...........................................................................................238
Confidence/Null Hypothesis View ........................................................................................239
Statistical Summary View...................................................................................................240
Descriptors .......................................................................................................................241
Positional Form / 1 Trial .....................................................................................................242
Impact of Exclude Data Options ..........................................244
Overview .......................................................................................................................249
In This Section..................................................................................................................249
What Are Parametric Models Used For? ..............................................................................249
10.2 Procedure .....................................................250
Fitting the Parametric Models .............................................................................................250
Fit Parametric Models Dialog Box and Associated View .........................................................250
Show & Select Submodels ................................................................................................252
Overall Options .................................................................................................................252
Discrete Model Information ................................................................................................253
Individual Options .............................................................................................................256
Dialog Box Information ......................................................................................................258
Parametric Options Per Trial ..............................................................................................259
Viewing the Transfer Function ............................................................................................260
Example of Legal Polynomials ............................................................................................262
Step Response Overview ...................................................................................................262
All Responses ..................................................................................................................263
Overview.......................................................................................................................265
In This Section .................................................................................................................265
Final Models Defined .........................................................................................................265
Searching for the Best Final Models ....................................................................................266
Two Procedures ...............................................................................................................266
11.2 Procedure .....................................................267
Selecting Final Trials/Finding Final Models ...........................................................................267
Trial Source .....................................................................................................................268
Show & Select Submodels ................................................................................................271
Excluding Data From the Prediction Calculations ..................................................................272
Select Trial Manually .........................................................................................................273
Dialog Box Information ......................................................................................................273
Update Trial .....................................................................................................................274
Stop ................................................................................................................................274
Plot Predictions ................................................................................................................274
Load Source to Final .........................................................................................................278
Null Final Model ................................................................................................................281
11.3 Overview.......................................................................................................287
Annotation Access and Update ....................................................................................288
Access Overview ..............................................................................................................288
Detailed Access and Update ..............................................................................................289
Annotation Example ..........................................................................................................291
SECTION 13  TUTORIAL.....................................................................................................299
13.1 Overview.......................................................................................................299
13.2 Rich Input Signals.........................................................................................300
RichDoc1 .........................................................................................................300
WafrDoc1 ........................................................................................................305
13.3 Typical Input Signals.....................................................................................309
TowrDoc1 ........................................................................................................309
ColDoc1 ..........................................................................................................316
13.4 Limited Input Signals ....................................................................................320
LevDoc1 ..........................................................................................................320
BlecDoc2 .........................................................................................................322
13.5 Creating PEM models...................................................................................325
Synthetic Data ..................................................................................................325
Pressure Data ..................................................................................................................328
Furnace Data ...................................................................................................................329
Large disturbance .............................................................................................................330
Demo Data.......................................................................................................................332
ColDoc1...........................................................................................................................333
WafrDoc1 ........................................................................................................................334
BlecDoc2 .........................................................................................................................335
The following table describes the audience, purpose, and scope of this book:

Audience: Anyone responsible for creating process models based on either plant data or existing models. All models identified are structured for seamless integration into Profit Controller (RMPCT), Profit Optimizer (DQP), and Profit PID (RPID).

Profit Course Information: Honeywell offers several courses that explain the math and conceptual underpinnings, as well as application implementation, of the Advanced Process Control suite of products. Engineers wanting a more technical exposure to Profit products can contact:
The following writing conventions have been used throughout this book and
other books in the Profit Suite library.

Words in double quotation marks " " name sections or subsections in this
publication.

Windows pull-down menus and their options are separated by an angle
bracket >. For example, Tools > Point Builders > RMPCT Point Builder.

Command keys appear as they appear on the key, but within angle
brackets. For example, press <Enter>.

Zero as a value, and when there is a chance for confusion with the letter O,
is given as a slashed zero (Ø). In all other cases, zero as a numerical
placeholder is given as 0. For example, 1.0, 10, 101, CV1, parameter P.

The terms screen and display are used interchangeably in discussing the
graphical interfaces. The verbs "display a screen" and "call a screen" are also
used interchangeably.

The names Profit Optimizer (DQP), Profit Optimizer, and DQP may be
used interchangeably.
References
The following comprise the Profit Suite library.
Documentation
Title
Number
General
RM09400
RM11410
PR11400
AP11400
AP09200
RM11100
PS09100
Open
RM20501
RM11401
PR11421
RM11431
AP11401
AP11410
AP13201
AP13101
AP13111
AP11411
AP20401
TPS System
Profit Controller (RMPCT) Installation Reference for AM, AxM and Open LCNSide
Profit Controller (RMPCT) Commissioning
Profit Controller (RMPCT) Users Guide for AM, AxM and Open LCNSide
Profit Optimizer Installation Reference for AM and Open LCNSide
Profit Optimizer Users Guide for AM and Open LCNSide
Profit Suite ToolKit
TDC Data Converter
Data Collector
Step Test Builder
Performance Monitor
RMPCT Cascade
PV Validation
RM20400
RM20410
RM11400
PR20400
PR11420
AP09300
Simulation BackBuilder
Gain Scheduler
AP13100
AP13200
AP13600
AP09700
Embedded Uniformance
AP20510
AP20520
AP20530
If You Need Assistance

Customers Outside of the United States
Outside of the United States, contact your local Honeywell Service Organization.
If you are not sure of the location or telephone number, call your Honeywell
representative for information.

Customers Inside the United States
Within the United States, call the Technical Assistance Center (TAC) at the
toll-free number 1-800-822-7673.

Arizona Customers

Services Provided
Calls to TAC are answered by a dispatcher from 7:00 A.M. to 5:00 P.M.,
Mountain Standard Time (6:00 A.M. to 4:00 P.M. when daylight savings time is
in effect).

It is a good idea to make specific notes about the problem before making the call.
This helps to reduce delays and expedite answers.
Variables

Variables are categorized by type and class. There are two distinct classes of
variables: the Var class and the Aux class. Variables of class Var can be used for
identification and therefore implicitly define distinct rows or columns in the
model matrices. Views associated with models will only display Var variables.
Variables of class Aux provide a permanent home for data but do not appear as
model variables. Both classes of variables are displayed in any views associated
with data. Variables of class Var can of course be converted to variables of class
Aux, and vice versa.

There are three distinct types of variables:

Controlled Variables (CVs): the variables that a controller attempts to keep at
setpoint or within some range. These are dependent variables.

Manipulated Variables (MVs): the variables that a controller adjusts to keep the
CVs within some range. These are independent variables.

Disturbance Variables (DVs): measured variables that are not under the
influence of a specific controller but which affect the values of the CVs. These
are also independent variables.
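The class/type scheme above can be pictured with a small sketch. The names and structures here are illustrative only, not the Identifier's internal data model:

```python
from dataclasses import dataclass
from enum import Enum

class VarClass(Enum):
    VAR = "Var"   # usable for identification; defines a model row/column
    AUX = "Aux"   # permanent home for data; never appears as a model variable

class VarType(Enum):
    CV = "Controlled"    # dependent: held at setpoint or within a range
    MV = "Manipulated"   # independent: adjusted by the controller
    DV = "Disturbance"   # independent: measured but not controller-adjusted

@dataclass
class Variable:
    name: str
    var_type: VarType
    var_class: VarClass = VarClass.VAR

    def appears_in_model(self) -> bool:
        # Model views display only Var-class variables
        return self.var_class is VarClass.VAR

feed = Variable("FeedRate", VarType.MV)
ambient = Variable("AmbientTemp", VarType.DV, VarClass.AUX)
```

A Var-class variable such as `feed` would show up in model views; the Aux-class `ambient` would appear only in data views.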
Models
There are no inherent limitations to problem size. Any number of CVs, MVs, and
DVs can be accommodated. No restrictions are placed on the maximum number of
FIR coefficients (the compression or decimation ratio must, of course, be an
integer >= 1). There are no practical restrictions on model orders.
System model matrices can have any number of elements. Only computer speed
and memory resources (RAM) limit the application.

While the offline design package imposes no size limitations, online controller
dimensions ARE restricted depending on the platform. For AM implementations,
CVs, MVs, and DVs are limited to 20, 40, and 40 respectively. For AxM and NT
nodes, CVs, MVs, and DVs are limited to 40, 80, and 80 respectively.
Collecting Data
Collecting accurate data is crucial. Be very careful to get good data; if you do,
you are virtually assured of getting good models.
First, conduct preliminary tests to make sure that all regulatory loops are properly
tuned, and that all actuators and positioners are performing correctly. Then get
initial, but accurate, estimates on process response times, gains, nonlinearities, and
noise levels.
Once the preliminary test is complete, the full test should be properly designed to
ensure that the variables of interest are properly excited wherever possible.
If the data is sufficiently rich (excited over the required spectrum with appropriate
signal to noise ratios), then the identifier can extract the appropriate models. For a
full discussion on the importance and issues involved with test signal design, the
Profit Controller (RMPCT) Implementation Course: Identification and Control can
be helpful. In addition, the Step Builder tools, optional parts of the APC ToolKit
package, can be used to significantly aid in the identification process.
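A common way to excite a process over a broad spectrum is a pseudo-random binary sequence (PRBS). The sketch below is a generic illustration of the idea, not the algorithm used by the Step Test Builder; register length and amplitude are assumptions that would be chosen from the preliminary estimates of process response times:

```python
import numpy as np

def prbs(n_samples, n_bits=7, seed=0b1111111):
    """Pseudo-random binary test signal from a linear feedback shift register.

    Implements the recurrence a[t+7] = a[t+1] XOR a[t] (primitive trinomial
    x^7 + x + 1 over GF(2)), giving a maximal-length sequence that repeats
    with period 2**7 - 1 = 127 samples.
    """
    reg = seed
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = 1.0 if reg & 1 else -1.0          # output the low bit as +/-1
        new_bit = (reg ^ (reg >> 1)) & 1           # feedback from bits 0 and 1
        reg = (reg >> 1) | (new_bit << (n_bits - 1))
    return out

signal = prbs(500)   # in practice, scale amplitude to the allowed move size
```

The switching rate and sequence period would be matched to the estimated process response time so that power is concentrated in the frequency band of interest.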
Saving Data
Data should be recorded during all plant testing. Many options exist for saving this
data. The Honeywell Data Collector, which runs in the AM and is an optional
part of the APC ToolKit package, can be used to collect this data automatically.
Other collection techniques can also be used. When using alternate methods, it is
important to remember that the identifier uses specifically formatted files. Before
beginning plant tests, make sure that the data can be saved in the correct format.
(See Section 3.3 for a description of data formats.)
Multiple model forms are supported. Final models are saved in Laplace form.
For training on the conceptual and practical aspects of the identifier, Honeywell's
Profit Controller (RMPCT) implementation course (4516s) is recommended.
Although the Identifier can be used as a standalone tool, it is an integral part of
Honeywell's Profit Controller (RMPCT), Profit Optimizer (DQP), Profit PID,
and Profit Loop (future).
Profit Controller
(RMPCT)
Profit Controller (RMPCT) Design software. This software can be easily used
to create an RMPCT controller, based on the model provided by the Identifier.
The controller can be used online to control the actual process, and can also be
tested on a simulated process using the offline software.
Profit Optimizer
(DQP)

Profit PID
(RPID)
Properly tuned PID loops can be maintained by using the Profit PID (RPID)
library. This software determines the proper tuning constants to ensure minimum
loop sensitivity based on parametric uncertainty. Tuning constants for a wide
range of equation types are generated. Calculations are based either on user-entered
transfer functions or transfer functions derived from raw data.
Profit Loop
(Future Product) High quality base-level control can be accomplished by using
the Profit Loop (SPID) library. This software generates a MISO RMPCT
controller specifically designed for regulatory loops. Calculations are based either
on user-entered transfer functions or transfer functions derived from raw data.
Step Test Builder
Effective step test design is provided by the Step Test Builder. This software
allows you to easily create a series of one or more sequences that can be used to
properly excite the actual process. The Step Builder has been designed to work
in conjunction with the APC Identifier. Sequential and/or simultaneous signals
can be readily synthesized and evaluated. Signals are designed for minimum
length and broadband uniform power. The Step Builder is available standalone
or as part of the Profit Suite ToolKit.
Point Builder
By using Point Builder, you can automatically create LCN data structures,
command and configuration files for both Profit Controller and Profit
Optimizer. Point Builder is provided as part of both the Profit Controller and the
Profit Optimizer packages.
TDC Data Converter
You can use the Data Converter library to automatically convert LCN data to be
Profit Design Studio (APCDE) compliant. The Data Converter is available
standalone or as part of the Profit Suite ToolKit.
Model Converter
The Model Converter allows third-party models to be converted to Profit
Design Studio (APCDE) form. The Model Converter is provided as part of the
Profit Controller (RMPCT) package.
Data Operation
Tools
Profit Toolkit
Use this library to design and configure a Profit Toolkit application. Currently,
the library supports both FCCU and Fractionator toolkits.
Profit Sensor
Overview
In This Section
Section 2 Installing Profit Design Studio (APCDE) and the APC Identifier

2.2 System and Software Requirements
Software Requirements
The Profit Design Studio (APCDE) package either comes on the APC Identifier
CD or may be obtained on a separate CD. You may also have received Profit
Design Studio and/or the Identifier as part of another Profit product.

Do I have to Install the Identifier Separately?
If you have purchased Profit Controller (RMPCT), Profit Optimizer (DQP), or
Profit PID, the Identifier is automatically included in those packages. In those
cases, you do NOT have to install the Identifier separately. Please refer to those
products' Users Guides for installation instructions.

If you have purchased the Identifier as a standalone product, please follow the
instructions in this manual for installation.
PC Requirements
The following table lists the recommended and minimum PC system requirements
for using Profit Design Studio (APCDE). Depending on system size, simulation
can be computationally demanding. Although slower systems may function,
maximum computational resources are recommended.
Recommended Configuration          Minimum Configuration
WIN NT (all versions) or WIN 95    WIN NT (all versions) or WIN 95
24 MB RAM                          16 MB RAM
VGA                                VGA
CD-ROM drive                       CD-ROM drive
Mouse                              Mouse
2.3 Quick Reference to Installation
Read and perform the following procedures to install Profit Design Studio
(APCDE). For detailed instructions and help, see the referenced sections.
The following table outlines the major tasks involved with Profit Design Studio
installation. Use this table if you have installed Profit Design Studio (APCDE)
before. If this is your first installation, use the detailed instructions provided in
Section 2.4.
Step   Action                        Section Reference
1.                                   See Section 2.2, System and
                                     Software Requirements.
2.
3.
2.4 Installing the Profit Design Studio and the APC Identifier
To run the SETUP program for Profit Design Studio (APCDE), perform the
following steps:

1. Make sure no other programs are running; otherwise, installation may fail or
take a very long time to complete.

2.
3.
4.
5.
6.
7.
8.

All components (dlls) are now included with the installation of the design
studio. There is no longer a need or capability to select individual
components.
9. Click [Next].

Installing the Dongle

10. After ensuring that the physical Dongle is attached to your computer and you
have System Administration privileges, click [Yes] to proceed with the Dongle
installation.

NOTE
The driver needs to be installed only when you first install the 32-bit version of
Profit Design Studio (APCDE). You do not need to install it again when you
upgrade or add optional products to the Profit Design Studio (APCDE).

11. At this point you will receive what appears to be a blank white screen. Choose
Function > Install Sentinel Driver.
14. After exiting the installation program, restart your computer so the new
driver takes effect.

15. Select Start > Programs > Profit Design Studio 220.00, or, from the desktop,
click the Profit Design Studio icon. If all the software was installed
correctly, the About dialog box will show the appropriate check marks and
version information.
If you would like to check the version numbers of the dlls, check the log file. To
view the log file, open it with a text editor such as Notepad. The log file can be
found at c:\Winnt\APCDE32.log. An example of the information contained in the
log file is shown below.
13:45:09 24Apr01
Profit Design Studio: 32 bit Version 220.00.0000
Loaded Math Library: HMATH32.DLL Version 220.00.0000
Loaded Utility Library: HUTIL32.DLL Version 220.00.0000
Loaded Identification Library: HIDENT32.DLL Version 300.00.0000
Loaded Controller Build Library: HBUILD32.DLL Version 200.00.0000
Loaded Controller Library: HCNTRL32.DLL Version 200.0001
Loaded Run Block Library: HBLOCK32.DLL Version 200.0001
Loaded Process Simulator Library: HSIM32.DLL Version 200.0001
Loaded Advanced Identification Library: HADVID32.DLL Version 110.00.0000
Profit Finder Library: HFINDER32.DLL is not present
Loaded Profit PID Library: HPID32.DLL Version 115.00.0100
Loaded Process Simulator Library: HSIM32.DLL Version 200.0001
Loaded Profit Loop Build Library: HSPID32.DLL Version 100.01.0000
Loaded Profit Loop Controller Library: HSCTRL32.DLL Version 100.0100
Loaded Process Simulator Library: HSIM32.DLL Version 200.0001
Loaded Profit Optimizer Builder Library: HBLDQP32.DLL Version 200.00.0000
Loaded Signal Generation Library: HSIG32.DLL Version 100.02.0000
Loaded RMPCT Point Builder Library: HBLDEB32.DLL Version 200.00.0000
Loaded DQP Point Builder Library: HDBLDEB.DLL Version 200.00.0000
Loaded TDC Data Converter Library: HCONV32.DLL Version 105.00.0000
Loaded Scout File Converter Version 100.00.
Loaded Model Converter Library: HDMCNV32.DLL Version 100.00.0100
Loaded Vector Tool Library: HVTOOL32.DLL Version 110.00.0000
Loaded Profit Toolkit Designer Library: HSTOOL32.DLL Version 200.00.0000
Profit Sensor neural net builder is not present
Any problems with loading libraries will be described here. Inability to locate a
library or library version incompatibility will prevent a library load.
Default parameters that can be adjusted by the user are contained in the apcde.ini
file. An example of the default information contained in this file is shown below.
Note that with release 200 of Profit Design Studio (APCDE) a new .ini structure
is used. Please make a backup copy of your old file and remove it from your
SYSTEM or WINNT directory.
[Color options]
PltMargBkgndClr=48300031
PltMargTextClr=33554560
CustomColor0=16777215
CustomColor1=16777215
CustomColor2=16777215
CustomColor3=16777215
CustomColor4=16777215
CustomColor5=16777215
CustomColor6=16777215
CustomColor7=16777215
CustomColor8=16777215
CustomColor9=16777215
CustomColor10=16777215
CustomColor11=16777215
CustomColor12=16777215
CustomColor13=16777215
CustomColor14=16777215
CustomColor15=16777215
[Recent EB File List]
Field1=D:\Sample\160Test\AM\AMLARGE.ebb
[Recent DEB File List]
Field1=D:\Sample\DQPA_opt.ebd
[Recent File List]
File1=D:\Sample\DQPA_opt.ebd
File2=D:\Sample\160Test\AM\AMLARGE.ebb
File3=D:\Sample\SET1\LARGE.mdl
[Recent EB File List]: the last opened *.ebb file for the Profit Controller
(RMPCT) Point Builder. It is created automatically and is used as the default
starting file the next time an RMPCT Point Builder is opened.

[Recent DEB File List]: the last opened *.ebd file for the Profit Optimizer (DQP)
Point Builder. It is created automatically and is used as the default starting file
the next time a DQP Point Builder is opened.
Other Options

[SSC Options]: You must add this option manually; it supports a Dongle-bypassing
code. Obtain the code from HiSpec, then add the option and enter the code.

[Engine Debug Flags]: You must add this option manually. If CreateChgFile is
set to 1, a change file is generated every time you run an offline RMPCT
simulation.

[Memory Buffer]: You must add this option manually. It pertains directly to
identification and is discussed in a subsequent section.

[User Options]: You must add this option manually. It pertains directly to
identification and is discussed in a subsequent section.

[Toolbar preference]: This option is added automatically and records the user's
identification toolbar preference. Zero selects the standard toolbar; one selects
the detailed toolbar.
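Following the section/key syntax of the apcde.ini excerpt shown earlier, a manually added section might look like the fragment below. Only the CreateChgFile key is named in this guide; the keys for the other manually added sections are not documented here and are therefore not shown:

```ini
; Manually added section (sketch): generate a change file on every
; offline RMPCT simulation run.
[Engine Debug Flags]
CreateChgFile=1
```

Remember to back up apcde.ini before editing, as recommended above for release 200.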
Caution
Starting the PC
Application
Identification: A Science and an Art

A fundamental problem for any controller is the choice of the model that should be
used to represent the system. In general, the model is one of the following:

System identification remains both an art and a science. The science is concerned
with parameter estimation; the art is usually concerned with determining
structure/order, the excitation requirements, and accuracy. System identification
involves two steps:
The Identification
Process
Extracting models from process data for control purposes can require several
steps. The diagram below illustrates the overall procedure.
[Figure 1-1: flowchart of the identification procedure. Starting from
experimental design and execution, perform identification (data processing,
model order/structure selection, parameter estimation), then model validation
(correlation and transformation against plant models; simulation and cross
validation). If the model is good, use it; otherwise return to experimental
design.]
Identification
Environment
The APC Identifier contains a family of automated estimation tools with the
following characteristics:
Identification
Approach
The Identifier combines the best features of state-of-the-art algorithms. Adopting
a hybrid approach, the Identifier fits three models to arrive at a final, unified
controller model for loading onto the control system:
Fitting FIR
Models
FIR models are based on raw plant data and have these characteristics:

Solutions result in an unbiased estimate (when plant tests are conducted in the
open loop with a properly designed signal). This is true even for colored noise
disturbances.

Solutions are extremely fast and exceedingly robust (the Toeplitz structure is
fully exploited, and Cholesky decomposition ensures numerically robust
factorization of the normal equations).

Solutions result in potentially high-order, high-variance estimates (damping
these estimates with weights or smoothing will result in biased estimates, and in
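The FIR regression just described can be sketched numerically with ordinary least squares. This is a simplified illustration, not the Identifier's solver, which additionally exploits the Toeplitz structure via Cholesky factorization of the normal equations:

```python
import numpy as np

def fit_fir(u, y, n_coeff):
    """Estimate FIR coefficients b so that y[t] ~ sum_k b[k] * u[t-k].

    Ordinary least squares on a Toeplitz regressor matrix; unbiased in the
    open loop when the input signal is sufficiently exciting.
    """
    N = len(u)
    # Column k of X holds u delayed by k samples (zero-padded at the start)
    X = np.column_stack(
        [np.r_[np.zeros(k), u[:N - k]] for k in range(n_coeff)]
    )
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# Synthetic check: true impulse response [0.5, 0.3, 0.2] plus small noise
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
true_b = np.array([0.5, 0.3, 0.2])
y = np.convolve(u, true_b)[:500] + 0.01 * rng.standard_normal(500)
b_hat = fit_fir(u, y, 3)
```

With a rich input and low noise, `b_hat` recovers the true impulse response closely; with short, poorly excited data the estimate's variance grows, which is exactly the high-variance behavior noted above.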
Fitting PEM
Models
PEM models are based on raw plant data and have these characteristics:

Fitting
Parametric
Models
Parametric models are based on either FIR or PEM results. The purpose of the
parametric model is to take the FIR or PEM model and fit it with a parametric
model that reduces or eliminates the variance. In addition, the parametric models
provide an extremely effective mechanism for model order reduction that is easily
configured by the user. At this step:
Fitting Final
System Models
Parametric models are automatically selected for the final system model. Based on
the raw data (or cross validation, if selected), the parametric models with the best
long-term open-loop prediction performance are chosen for use in the final
system model. Models can also be automatically eliminated based on FIR
statistics.

With a click of the mouse you can manually override any default or modify any
model fit by default settings.
3.2 Key Topics
In the remainder of this section, the underlying concepts associated with the APC
Identifier will be discussed. Continue reading this section to find out about:

• Models
• Solutions
• Initial estimates
• Transport delay
• Model Properties
• Consistency
• FIR Statistics
• Covariance
• Confidence limits
• An Ill-conditioned Example
• Pseudorank

While not all of these topics will be of interest to all readers, a quick review is
nevertheless recommended.
3.3

Identification
Structure

[Block diagram: the input u(t) passes through G(z), the noise ε(t) passes through
H(z), and the two contributions sum to give the output y(t):]

y(t) = G(z)\,u(t) + H(z)\,\varepsilon(t)
While many formulations are possible, the APC Identifier addresses only two: the
quadratic norm and the so-called robust norm. The quadratic norm formulation can
be written as follows:
\min_{\theta}\; \tfrac{1}{2}\,\|\varepsilon(\theta)\|_2^2 \;+\; \tfrac{\lambda}{2}\,\|\theta\|_2^2

\text{s.t.}\quad \theta_i = \bar{\theta}_i, \quad i \in \mathcal{I}

\varepsilon(\theta) = y - \hat{y}(\theta)
In the above expressions θ, ε, and y are the unknown parameter, error, and
measurement vectors, respectively.
These expressions define the starting point for the quadratic norm formulation.
The smoothing term is included here only for purposes of discussion in later
sections; smoothing is never actually performed in the APC Identifier, for
reasons that will be made apparent later.
Robust Norm
Formulation
While the quadratic norm is the norm most commonly used for identification,
other norms can be used as well. Let l(x_i) be a positive scalar-valued function
of x_i such that the l-norm objective is defined as:

\sum_i l(x_i)
The l-functions for the quadratic and Maximum Likelihood Estimate (MLE)
formulations respectively are:

l_q(x) = \tfrac{1}{2}\,x^2

l_{mle}(x) = -\log f_e(x); \qquad \sigma = 1 \;\Rightarrow\; l_{mle}(x) = \tfrac{1}{2}\,x^2

Note: for Gaussian processes with unit variance, the MLE formulation satisfies
the quadratic norm.
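For reference, the Gaussian case works out as follows (a standard derivation, with the parameter-independent constant dropped):

```latex
f_e(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,
         \exp\!\left(-\frac{x^2}{2\sigma^2}\right)
\;\Longrightarrow\;
l_{mle}(x) = -\log f_e(x)
           = \frac{x^2}{2\sigma^2} + \log\!\left(\sqrt{2\pi}\,\sigma\right)
```

so for σ = 1 the MLE l-function reduces to ½x² up to an additive constant, which does not affect the minimizer.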
It is well known that the quadratic formulation can be sensitive to outliers. MLE,
on the other hand, asymptotically approaches the theoretical Cramer-Rao lower
bound on variance. The MLE formulation, however, may not be the best approach
in all cases. This may be due to small data sets, bias distributions, or sensitivity to
unknown variations in the unknown probability density function f_e(x). A
technique to reduce sensitivity in general, and specifically to the unknown f_e(x),
is the robust norm. In the APC Identifier the robust norm is defined in terms of the
derivative of its l-function as follows:

l_r'(x) =
\begin{cases}
x, & |x| < \rho\sigma \\
\rho\sigma\,\mathrm{sgn}(x), & |x| \ge \rho\sigma
\end{cases}
Here σ is the estimated standard deviation of the prediction error vector ε. This
estimate is given by:

\sigma = \frac{\mathrm{median}\!\left(\,\left|\varepsilon - \mathrm{median}(\varepsilon)\right|\,\right)}{\gamma}

The constants ρ and γ are taken from Ruppert and Ljung to be 1.6 and 0.7
respectively. Clearly, the robust norm is equivalent to the quadratic norm when the
magnitude of x is less than ρσ. When the magnitude of x is larger than ρσ, the
norm is linear in x with a slope of ρσ and the appropriate sign.
With this definition the problem statement for the robust norm becomes:

\min_{\theta}\; \sum_i l_r\!\left(\varepsilon_i(\theta)\right)

\text{s.t.}\quad \theta_i = \bar{\theta}_i, \quad i \in \mathcal{I}

\varepsilon(\theta) = y - \hat{y}(\theta)
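A minimal numerical sketch of the robust scheme, using the stated constants ρ = 1.6 and γ = 0.7, shows the key effect: the robust scale estimate ignores outliers, and clipping the error derivative bounds their influence (as one would do inside an iteratively reweighted least-squares loop). This is an illustration, not the Identifier's implementation:

```python
import numpy as np

RHO, GAMMA = 1.6, 0.7   # constants from Ruppert and Ljung

def robust_scale(eps):
    """Robust estimate of the prediction-error standard deviation:
    sigma = median(|eps - median(eps)|) / gamma."""
    return np.median(np.abs(eps - np.median(eps))) / GAMMA

def psi(eps, sigma):
    """Derivative of the robust l-function: quadratic core, linear tails."""
    cut = RHO * sigma
    return np.clip(eps, -cut, cut)

# Effect on an outlier-contaminated error vector
rng = np.random.default_rng(1)
eps = rng.standard_normal(200)
eps[:5] = 50.0                  # gross outliers
sigma = robust_scale(eps)       # barely affected by the outliers
influence = psi(eps, sigma)     # outlier influence is capped at rho*sigma
```

Under a plain quadratic norm the five outliers would dominate the fit; here their contribution to the gradient is capped at ρσ.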
3.4 Model Structures
Overview
FIR Models
With the proper formulation, the FIR approach can be an exceptionally effective
estimator. Under reasonable conditions, to be discussed later, this approach results
in a statistically unbiased and consistent estimate. To this end, the APC Identifier
uses the FIR model as its base form.
FIR Structure
y_t = \sum_{m=1}^{M} \left( b_0^{i,m} u^m_t + b_1^{i,m} u^m_{t-1}
      + b_2^{i,m} u^m_{t-2} + \cdots + b_n^{i,m} u^m_{t-n} \right)
      + \varepsilon_t + \delta_i
In operator form,

y(t) = \sum_i B^i(z^{-1})\, u_i(t) + \varepsilon(t) + \delta,
\qquad G^i(z^{-1}) = B^i(z^{-1})

while the predictor in positional and velocity form respectively is given by:

\hat{y}(t) = \sum_i B^i(z^{-1})\, u_i(t) + \delta
           = \sum_i \left(b_0 u(t) + b_1 u(t-1) + \cdots + b_{nb} u(t-nb)\right)_i + \delta

\Delta \hat{y}(t) = \sum_i B^i(z^{-1})\, \Delta u_i(t)

where

\Delta y(t) = y(t) - y(t-1), \qquad \Delta u_i(t) = u_i(t) - u_i(t-1)
Any dependent variables that contain integrators corresponding to one or more
independent variables are handled as special cases of the above expression. In
these cases, the dependent and non-integrating independent variables are
differenced while the integrating independent variables are left in standard
positional form.

Intrinsic problems that can result from data that is oversampled or improperly
scaled are eliminated by an automated data compression and scaling routine that
is an integral component of the FIR computations.
This model form encompasses virtually all of the polynomial black-box models. In
its full form, this model can be used in both the open and closed loop, and under
reasonable conditions (to be described later) it is a consistent estimator. Under
slightly more restrictive conditions it yields optimal (minimum variance)
estimates. These desirable features are, however, not without consequence. PEM
calculations are computationally intense. In addition, convergence cannot be
guaranteed in spite of the considerable effort expended in obtaining good initial
estimates.

PEM models are provided with one goal in mind: ease of use. The goal here is to
provide a mechanism that allows truly one-step identification. One click on the
Load & Go button and that's it. While no restrictions are imposed in the use of
PEM, it can quickly become a fiddler's paradise that requires expert intervention.
This is NOT the intent. If the results are not satisfactory after one try, simply
revert to the standard FIR approach. To this extent, it is useful to view the PEM
models as a complement to the standard FIR models.

While PEM models can be used in a general setting, due to computation, structure
and convergence limitations, they can be less effective from a practical perspective
than the standard FIR approach.

The target use of the PEM model is for regression sets on stable processes
when only one or two independent variables are moving simultaneously.
Under these conditions the PEM approach can be an effective one-step method
that requires no a priori user input.
PEM Structure

The general PEM model is

A(z^{-1})\, y(t) = \frac{B(z^{-1})}{F(z^{-1})}\, u(t-d)
                 + \frac{C(z^{-1})}{D(z^{-1})}\, e(t)

so that

G(z^{-1}) = \frac{B(z^{-1})}{A(z^{-1})\, F(z^{-1})}, \qquad
H(z^{-1}) = \frac{C(z^{-1})}{A(z^{-1})\, D(z^{-1})}

and the prediction error is

\varepsilon(t) = \frac{D(z^{-1})}{C(z^{-1})}
\left[ A(z^{-1})\, y(t) - \frac{B(z^{-1})}{F(z^{-1})}\, u(t-d) \right]

with polynomials of the form

F_i(z^{-1}) = \left(1 + f_1 z^{-1} + f_2 z^{-2} + \cdots + f_{nf} z^{-nf}\right)_i

C(z^{-1}) = 1 + c_1 z^{-1} + c_2 z^{-2} + \cdots + c_{nc} z^{-nc}

D(z^{-1}) = 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots + d_{nd} z^{-nd}
Within this context, the PEM structure supports the following standard forms:
FIR
ARX
ARMA
ARMAX
ARIMA(X)
ARARMAX
BJ
OE
While the PEM model is actually solved using the completely general form given
above, the user is currently prevented from specifying both A and F polynomials
in a given regression. There are no other restrictions on orders or structures. With
this model there is no need to specify positional or velocity form; as a matter of
fact, it is invariably disadvantageous to specify velocity form when using PEM
under normal conditions. The one possible exception is the ARIMA(X) model.
This structure requires the velocity flag to be set.

The default model uses the multi-input, single-output Box-Jenkins (BJ) structure.
While all the structures listed above are available, it is not in the best interest of
the average user to select any structure but the default. If there are problems with
the default, revert to FIR.
Intrinsic problems that can result from improperly scaled data are eliminated by an
automated data scaling routine that is an integral component of the PEM
computations. PEM calculations are fully integrated into the design studio and as
such can be used seamlessly with discontinuous, contaminated, and bad data
(NaN). Constraints for null models and the Lock Model options are fully
supported.
ARX Parametric Models (Discrete Time)
In the above paragraphs, data regression related models were presented. The following paragraphs illustrate those models that are used for order reduction and variance reduction/elimination. The discrete time models presented below correspond to structures contained in the PEM model. They should not, however, be confused with the PEM approach. The models presented below are much simpler by design and as such have their own solution methodology.
Parametric models are used for model order reduction and to remove the variance found in the models regressed from raw data. While standard low order ARX models are typically inadequate due to biased estimates, the prefiltered form used in the APC Identifier automatically weights the low frequency fit and hence results in high quality models.
The general form of this model is given by:

    y'_t + p_1 y'_(t-1) + p_2 y'_(t-2) + ... + p_n y'_(t-n)
        = b_1 u'_(t-1-d) + b_2 u'_(t-2-d) + ... + b_n u'_(t-n-d) + e_t

or

    P(z) y'(t) = B(z) u'(t-d) + e(t)

with the corresponding transfer function:

    T(z) = [(b_1 z^-1 + b_2 z^-2 + ... + b_n z^-n) z^-d] / [1 + p_1 z^-1 + p_2 z^-2 + ... + p_n z^-n]
In the above expressions the prime denotes a prefiltered value, while n and d correspond to the order and delay of the subprocess, respectively.
If the discrete time model contains an integrator, then one pole of T(z) is
constrained to lie exactly on the unit circle.
Output Error Models (Discrete Time)
In addition to the ARX form shown above, the identifier also generates discrete
time models with an output error structure. The general form of these models is
given by:
    w_t + f_1 w_(t-1) + f_2 w_(t-2) + ... + f_n w_(t-n)
        = b_1 u_(t-1-d) + b_2 u_(t-2-d) + ... + b_n u_(t-n-d)

and

    y_t = w_t + e_t
Close inspection of these expressions shows that the output y does not appear in
the regression matrix for this model. Consequently, this structure results in an
unbiased estimate if the input u is persistently exciting. The above expressions can
be conveniently written as:
    y(t) = [B(z)/F(z)] u(t-d) + e(t)

where

    B(z)/F(z) = [(b_1 z^-1 + b_2 z^-2 + ... + b_n z^-n) z^-d] / [1 + f_1 z^-1 + f_2 z^-2 + ... + f_n z^-n]
While the output error model has the desirable feature of being unbiased even without prefiltering, this structure requires that quantities which themselves depend on the estimated parameters (the w terms) appear in the regression matrix. Consequently, the estimation problem becomes nonlinear, and more computational effort is required for the output error solution than is required for the ARX solution.
Laplace Domain Parametric Models

    T(s) = k (τ s + 1) e^(-ds) / [s (τ_1 s + 1)(τ_2 s + 1)]

where:
    k   = Gain
    τ   = time constant associated with process zero
    τ_1 = time constant associated with first process pole
    τ_2 = time constant associated with second process pole
    d   = transport delay
This model is guaranteed to be overdamped and open loop stable. If any pole of a discrete time model is outside the unit circle, then the submodel is automatically rebuilt using this structure.
For underdamped subprocesses, the discrete model form is required to capture the complex pole structure. For the best fit, simply select the Best of both option. This uses both the Laplace and discrete forms and returns the model with the lowest prediction error.
Final Model Form
All models are ultimately saved in Laplace form. Discrete models are automatically converted from the z to the s domain. The form of the saved models is:

    T(s) = k (τ s + 1) e^(-ds) / [s (τ_1 s + 1)(τ_2 s + 1)]

The leading s in the denominator is present only if the subprocess contains an integrator. In this case k refers to the integration rate.
3.5
Solutions
Overview
Due to the hybrid nature of the identifier and the variety of model structures
supported, several solution methodologies are used for parameter estimation. Some
of the model types utilize linear strategies while others require nonlinear techniques.
Nonlinear methods have the additional requirement of specifying an initial estimate
to start the algorithm.
In the following paragraphs, each of the various solution techniques will be
described. Linear approaches will be given first. This will be followed by the
nonlinear techniques and associated starting conditions. Finally, a brief description
will be given on the delay estimation algorithm used for all reduced order models.
Linear Solutions
FIR Models
Using the FIR structure defined in the previous section, the prediction equation in vector form can be written as y = Aθ, where A is the m-row regressor matrix of current and lagged inputs (one block of nb(i) columns per input i, the superscript denoting the input index):

    A = | u^1(t)       u^1(t-1)     ...   u^2(t)       ...   u^nu(t-nb(nu))   |
        | u^1(t+1)     u^1(t)       ...   u^2(t+1)     ...   u^nu(t+1-nb(nu)) |
        |    :             :                  :                    :          |
        | u^1(t+m-1)   u^1(t+m-2)   ...   u^2(t+m-1)   ...   u^nu(t+m-nb(nu)) |

and

    θ^T = [ b_0^1  b_1^1  b_2^1  ...  b_0^2  ...  b_nb(nu)^nu ]

The estimates are then obtained from the minimization:

    min_θ (1/2) ‖Aθ - y‖_2^2
A fast correlation update algorithm, which is analytically rigorous, is used to form the normal equations. Solution of the normal equations is accomplished by a highly efficient, numerically robust Cholesky decomposition. This rank-revealing decomposition is a reproduction, with minor enhancements, of the LINPACK algorithm. In this decomposition the normal equations are written as:

    Cθ = d

By inspection:

    C = A^T A ;  d = A^T y

The pivoted Cholesky factorization is:

    P^T C P = R^T R

where R is an upper right triangular matrix and P is a permutation matrix. Multiplying the normal equations by P^T gives:

    P^T Cθ = P^T d

Let Px = θ and P^T d = f. Substituting these expressions into the above equation results in the following relation:

    P^T C P x = P^T d  =>  R^T R x = f
From this expression, the estimates can be calculated in three trivial steps. First, the following equation is solved for z using simple forward substitution:

    R^T z = f

With z known, the following equation is solved for x using simple back substitution:

    R x = z

Finally, the estimates θ are recovered from x using the permutation relationship defined above.
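The three-step solve can be sketched as follows. This is a plain, unpivoted illustration of the normal-equation Cholesky path; the production algorithm's pivoting, rank handling, and fast correlation update are omitted, and the function name is invented for illustration.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def fir_normal_solve(A, y):
    """Solve min ||A theta - y||_2 by forming the normal equations
    C theta = d, factoring C = R^T R (Cholesky, R upper triangular),
    then one forward and one back substitution, as described above."""
    C = A.T @ A
    d = A.T @ y
    R = cholesky(C, lower=False)                # C = R^T R
    z = solve_triangular(R.T, d, lower=True)    # forward substitution: R^T z = d
    return solve_triangular(R, z, lower=False)  # back substitution:    R x = z
```

The two triangular solves are O(n^2) each, which is why the factorization dominates the cost.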
In some cases, accuracy of the normal equation approach might be of concern. The
Cholesky algorithm used here is designed to deal directly with both poorly
conditioned and rank deficient problems. For system identification, in which double
precision accuracy of the plant data cannot be assumed, the approach used is as
accurate as alternative orthonormal solutions but significantly faster. Issues relating
to accuracy, rank, pseudorank and factorizations will be discussed in more detail at
the end of the section.
Implicit in the use of the PFX model is the assumption that the data is the step response result of either an FIR or PEM calculation. With this information at hand, the general procedure is as follows:

    u_f   = filter(n, u);
    y_f   = filter(n, y);
    Pfx   = PfxSolve(n, u_f, y_f);
    Pfx_s = Convert(Pfx);
In the procedure given above, the function PfxSolve returns the conventional ARX solution while the function Convert transforms the model from the z to the s domain. Using the ARX structure defined in the previous section, the prediction equation in vector form can be written as y = Aθ, and the subsequent prediction error becomes ε = y - ŷ = y - Aθ. With this definition of the prediction error and the quadratic norm formulation given previously, the minimization problem becomes:

    min_θ (1/2) ‖Aθ - y‖_2^2    s.t.  γ^T θ = -1
Constraints in the above expression apply only for integrating processes. Solution of this problem (without constraints) is accomplished by using an orthonormal factorization of A. The orthonormal factorization used in the APC Identifier is a rank-revealing QR decomposition which is essentially that of Golub and Van Loan's Matrix Computations. In this decomposition AP = QR (P is a permutation matrix). Let Px = θ and the minimization problem becomes:
    ‖Aθ - y‖_2^2 = ‖APx - y‖_2^2 = ‖QRx - y‖_2^2 = ‖Rx - Q^T y‖_2^2 = ‖Rx - b‖_2^2
Solution of the constrained problem is given by:

    θ = (A^T A)^-1 A^T y + μ (A^T A)^-1 γ
    μ = [-1 - γ^T (A^T A)^-1 A^T y] / [γ^T (A^T A)^-1 γ]

where

    γ^T = [1 1 ... 1 0 0 ... 0]

selects the autoregressive (denominator) coefficients.
Clearly, this solution can exhibit sensitivity problems when A is ill conditioned. In spite of possible sensitivity, use of constraints to ensure one pole is located at (1,0) has proven to be particularly effective. Earlier attempts at removing the constraints by differencing the input step response and using a straightforward QR solution were less accurate, as might be expected due to the differencing operation. When ill conditioning exists, the integrating pole is prescribed to be precisely on the unit circle at (1,0). Hence, the transformation to the s-domain will always result in a pole that is exactly at the origin. Rank deficient problems at this level will return an error message with a null model.
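A minimal sketch of the constrained regression, assuming the γ^T θ = -1 form above. It solves the KKT system directly rather than using the (A^T A)^-1 closed form, which is algebraically equivalent; the function name is illustrative.

```python
import numpy as np

def constrained_arx_solve(A, y, gamma, c=-1.0):
    """Equality-constrained least squares: min ||A th - y||^2 subject
    to gamma^T th = c, solved via the KKT (bordered normal-equation)
    system.  For the integrator constraint, gamma selects the
    autoregressive coefficients and c = -1 pins a pole at z = 1."""
    n = A.shape[1]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = A.T @ A
    K[:n, n] = gamma
    K[n, :n] = gamma
    rhs = np.concatenate([A.T @ y, [c]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]          # the last entry is the Lagrange multiplier
```

The returned estimate satisfies the constraint exactly while minimizing the residual over the remaining degrees of freedom.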
Nonlinear Solutions
Solution Procedure
The remaining models (PEM, OE, and Laplace) all require a nonlinear solution. A general procedure is used for all models. Application of this procedure to a particular model type depends only on the mechanism used to represent the prediction error.
Both robust and quadratic norms are used to define the nonlinear minimization
problem. For the PEM model the user is free to select either norm. Only the
quadratic norm is used to define the OE and Laplace problems. For clarity of
presentation, use of the robust norm and constraints will not be described. Hence,
the problem statement presented previously becomes:
    min_θ (1/2) ‖ε(θ)‖_2^2 ,   ε(θ) = y - ŷ(θ)

The solution of this problem is given by the θ that solves the following equation:

    J^T(θ) ε(θ) = 0
Here the Jacobian J(θ) is the m x n matrix of partial derivatives of the prediction error with respect to the parameters:

    J(θ) = | ∂ε_1/∂θ_1   ∂ε_1/∂θ_2   ...   ∂ε_1/∂θ_n |
           |     :            :                 :     |
           | ∂ε_m/∂θ_1   ∂ε_m/∂θ_2   ...   ∂ε_m/∂θ_n |

and the residual (gradient) is defined as:

    r(θ) = J^T(θ) ε(θ)
Clearly, it is desired to find the value of θ such that r is as close to zero as possible. A straightforward Newton-Raphson technique is used to accomplish this. Initially, the use of a potentially superior quasi-Newton (Broyden-Fletcher-Goldfarb-Shanno, BFGS) algorithm was briefly investigated. Due to the nature and structure of this specific problem, attention was focused on establishing reliable initial estimates rather than on techniques that can potentially enhance the convergence properties of the base algorithm.
By expanding the residual in a Taylor series and neglecting high order terms, the Newton step can be written as:

    H(θ) Δθ = r(θ)

where the Hessian matrix in the above expression is given by:

    H(θ) ≡ ∂r(θ)/∂θ = J^T(θ) J(θ) + ε^T(θ) ∂J(θ)/∂θ
From this definition it can be seen that the Hessian consists of a special
combination of first and secondorder information. Here it will be assumed that
eventually the first order term (the first term on the right hand side of the above
expression) will dominate the second order term. If the magnitude of the prediction
error tends to zero as the solution is approached, then the second order term in the
above expression also tends to zero. Thus the approximate Hessian becomes (dropping the notational dependence on the estimates):

    H ≈ J^T J

Using this approximation, the Gauss-Newton step Δθ becomes:

    J^T J Δθ = J^T ε
Note that the approximate Hessian is semipositive definite and the above equations are fully compatible. By inspection, the solution to the above expression is also a solution to the following minimization problem:

    min_Δθ (1/2) ‖J Δθ - ε‖_2^2
For PEM models, this problem is solved for Δθ using either a QR or Cholesky factorization. The choice is user specified and the corresponding factorization follows that presented previously. For the other models a QR factorization is always used.
Updates of the parameter vector are given in terms of the Gauss-Newton step according to:

    θ_(i+1) = θ_i - λ_i Δθ_i

A line search is used to determine the step length λ_i that will ensure the magnitude of the residual decreases in a monotonic fashion.
PEM Formulation
For PEM models the Jacobian matrix is obtained by analytically differentiating the prediction error with respect to the unknown parameter vector. As presented previously, the prediction error is:

    ε(t) = (D/C) [ A y(t) - (B/F) u(t-d) ]

Defining the intermediate signals

    w(t-d) ≡ (B/F) u(t-d) ;   v(t-d) ≡ A y(t) - w(t-d)

the Jacobian elements are J_(i,j) = ∂ε(t_i)/∂θ_j. Differentiating the prediction error allows the Jacobian to be defined (dropping the notational dependence on the delay, d) by the following:
    ∂ε(t)/∂a_k   = (D/C) y(t-k)
    ∂ε(t)/∂b_k^i = -(D/(C F_i)) u_i(t-k)
    ∂ε(t)/∂f_k^i = (D B_i/(C F_i F_i)) u_i(t-k) = (D/(C F_i)) w_i(t-k)
    ∂ε(t)/∂c_k   = -(D/(C C)) v(t-k) = -(1/C) ε(t-k)
    ∂ε(t)/∂d_k   = (1/C) v(t-k)
where the index k runs over the order of the individual polynomial and the index i runs over the number of inputs. Since the right hand side of the equations presented above can be evaluated in terms of a fast filter (Transposed Direct Form II) operation, Jacobian evaluation is very efficient. Column shifting, where possible, is fully exploited.
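As an example of the filter-plus-shift evaluation, the A-polynomial columns ∂ε(t)/∂a_k = (D/C) y(t-k) require only a single filter pass, with the remaining columns obtained by shifting. A hypothetical sketch, not the identifier's implementation:

```python
import numpy as np
from scipy.signal import lfilter

def jacobian_a_block(y, C, D, na):
    """Columns of the Jacobian for the A-polynomial coefficients:
    d eps(t)/d a_k = (D/C) y(t-k), k = 1..na.  The filtered signal is
    computed once; each column is then a shifted copy (zeros fill the
    pre-sample region), which is the column-shifting remark above."""
    fy = lfilter(D, C, y)                  # (D/C) y(t), one filter pass
    N = len(y)
    J = np.zeros((N, na))
    for k in range(1, na + 1):
        J[k:, k - 1] = fy[:N - k]          # column k holds (D/C) y(t-k)
    return J
```

The same pattern applies to the b, f, c, and d columns, each from its own filtered signal.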
Bad values (NaN) in the Jacobian matrix, indeed in all regression matrices, are handled in a straightforward fashion. Any rows containing bad values, and any corresponding vector elements, are simply removed from the regression. Treating bad values in the filter operations is not so straightforward. Here special filter operations were required. While care was taken in the design of these filter operations, they are nevertheless slightly less efficient than the standard Transposed Direct Form II filter function.
OE Formulation
The analytical approach described above is also used for the Output Error models used for model order and variance reduction. To eliminate any overhead, a separate algorithm tailored to this specific structure is used. For these SISO models the Jacobian is defined by:

    ∂ε(t)/∂b_k = -(1/F) u(t-k)
    ∂ε(t)/∂f_k = (1/F) w(t-k)
Starting Conditions
FIR - None
ARX (PFX) - Delay estimation
OE - Delay estimation and parameters from ARX solution
Laplace - Delay estimation, gain and dynamics from FIR solution
PEM:
- Get initial A and B using a high order ARX solution (determine order based on a modified Akaike information criterion)
- Perform the PFX model reduction step on the high order models
- Use A and B as filters for an Instrumental Variable refinement step to calculate modified A, B and F (this step is not usually required unless the user chooses to ignore the high order ARX/PFX reduction option)
- Stabilize F
- With refined A, B and F, calculate ε
- Prewhiten ε using a high order AR model
- Use the prewhitened ε as an estimate of e and the original ε to estimate C and D
- Stabilize D
- Begin search
Note that the high order ARX solution followed by the PFX model reduction step for parameter initialization serves two purposes not found in conventional PEM approaches (e.g., MATLAB). First, it typically yields better initial estimates than low order instrumental variable or bootstrap methods. Second, it substantially reduces the effect that PEM model order has on the initialization procedure.
Delay Estimation
Accuracy of the reduced order models is heavily influenced by the transport delay.
Unfortunately, this parameter does not lend itself to direct estimation. In addition
the current formulation requires that the delay be an integer multiple of the effective
sample rate. Note that the effective sample rate is not in general the same as the
data sample rate due to internal compression. Since the delay is constrained to be an
integer multiple of the effective sample rate, gradient based searches are not
convenient. Hence, a heuristic approach is used to estimate the delay. In the APC
Identifier a relatively brute force approach is used. Here four likely delays are
defined and each is evaluated with the solution being the delay that delivers the
lowest fit error. Delay estimation is accomplished using the following procedure:
[Figure: a noisy step response (solid line) with a dash-dot threshold band and the four candidate delays d0, d1, d2, and d3 marked.]

Each of the delays is determined based on the FIR or PEM step response shown above by the solid line. Here high variance is assumed. To begin the procedure, the maximum value of the step response, y̅, is determined. The delay threshold shown by the dash-dot lines is computed next; in this calculation, the user-specified threshold in percent (default value is 5%) is applied to y̅. Based on this information the four delays are calculated.
A key step here is determining the maximum slope of the noisy (potentially high variance) step response curve. In the APC Identifier an iterative least squares technique is used. Here the number of points needed to ensure attenuation in the fluctuation of the slope of the straight line is determined.
While this approach is not guaranteed to be optimal, experience has shown it to be
very effective. Since optimality is not guaranteed, a mechanism is provided to allow
the user to easily override these calculations.
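Since the exact four-delay definitions accompany a figure not reproduced in this copy, the following is only a plausible stand-in showing the flavor of the heuristic: candidates taken around the first threshold crossings of the (possibly noisy) step response. The function name and candidate rules are assumptions, not the manual's definitions.

```python
import numpy as np

def delay_candidates(step, delta=0.05):
    """Heuristic delay candidates (in samples) from a step response:
    the first samples at which |response| crosses delta and 2*delta of
    its maximum, plus one sample on either side.  Each candidate would
    then be evaluated, keeping the one with the lowest fit error."""
    step = np.asarray(step, dtype=float)
    ymax = np.max(np.abs(step))
    thresh = delta * ymax
    cross = int(np.argmax(np.abs(step) >= thresh))        # first crossing
    cross2 = int(np.argmax(np.abs(step) >= 2.0 * thresh)) # second threshold
    return sorted({max(cross - 1, 0), cross, cross2, cross2 + 1})
```

In keeping with the text, the candidate set is deliberately small so that a brute-force evaluation of each delay remains cheap.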
As a final note, a preliminary PEM option exists for delay estimation. When this option is invoked, the delay is estimated in a fashion similar to that described here. This estimation is performed as part of the initialization process, prior to the nonlinear search. Superior automated correlation-based techniques are anticipated in the future.
3.6
Model Properties
Overview
Two model characteristics that are highly desirable, if not required, are that the estimated model should be:

    Unbiased:    E(θ̂ - θ_0) = 0
    Consistent:  lim_(N→∞) (θ̂ - θ_0) = 0
where the subscript 0 implies the true process. In the discussion to follow, it will be assumed that the observed data have been generated by the following single input, single output process:

    y = A θ_0 + ν                                   (FIR)
    y(t) = G_0(z^-1) u(t) + H_0(z^-1) ξ(t)          (PEM)

where G and H are as defined previously and ν and ξ are colored and white noise disturbances, respectively. It will also be assumed that the process is quasi-stationary and that disturbances are zero mean.
Discussions on bias and consistency will be limited to these properties as they pertain to the FIR and PEM models. In the PEM discussion, a full structure is assumed. Results are also valid when A = 1. Other substructures need to be evaluated on a case-by-case basis.
FIR Bias
Parameter estimates for the FIR solution can be written directly from the normal equations given previously as:

    θ̂ = [A^T A + Q]^-1 A^T y

Substituting for y and taking the expectation gives:

    E(θ̂) = E([A^T A + Q]^-1 A^T A θ_0) + E([A^T A + Q]^-1 A^T ν)

When A is uncorrelated with the zero mean disturbance, the last term on the right hand side of the above expression becomes zero and the expected value is:

    E(θ̂) = E([A^T A + Q]^-1 A^T A θ_0)

Using the matrix inversion lemma, the expected value can be rewritten in the following form:

    E(θ̂) = [I - B(Q)] θ_0
    B(Q) = [A^T A]^-1 Q [(A^T A)^-1 Q + I]^-1

Bias is clearly a function of the smoothing matrix Q. When there is no smoothing (Q = 0), the FIR estimates are unbiased (given the stated assumptions).
For consistency, the estimates can be written in summation form as:

    θ̂ = { (1/N) Σ_(k=1..N) [a(k) a^T(k) + ψ(k) ψ^T(k)] }^-1 (1/N) Σ_(k=1..N) a(k) [a^T(k) θ_0 + v(k)]

where a(k) is a vector composed of the kth row regressors of the original A matrix and ψ(k) is a vector of the kth row of the original Ψ matrix. Quasi-stationarity implies the mean and covariance converge to constant values. Thus the limit of the summation terms given above, as N goes to infinity, yields the expected value. For example:

    lim_(N→∞) (1/N) Σ_(k=1..N) [a(k) a^T(k) + ψ(k) ψ^T(k)] = E{a(k) a^T(k) + ψ(k) ψ^T(k)}

Clearly, for the stated assumptions, the FIR estimates are consistent when Q equals zero.
PEM
The one-step-ahead predictor for the PEM structure is:

    ŷ(t) = H^-1 G u(t) + [1 - H^-1] y(t)

Here it will be assumed that G and H have the correct structure. Based on the expression given above, the prediction error is:

    ε(t) = H^-1 [y(t) - G u(t)]
The corresponding loss function for the quadratic norm is:

    V = (1/N) Σ_(t=1..N) ε²(t) = (1/N) Σ_(t=1..N) (H^-1 [y(t) - G u(t)])²

with

    lim_(N→∞) (1/N) Σ_(t=1..N) ε²(t) = E{ε²(t)}
Using the definition of the true process for y, the prediction error can be rewritten to give:

    ε(t) = H^-1 [G_0 - G] u(t) + H^-1 H_0 ξ(t)

Since H is monic, ε(t) = ξ(t) + ... . Thus

    lim V = E{ε²(t)} ≥ E{ξ²(t)} = R

and

    E{ε²(t)} = H^-2 [G_0 - G]² E{u²(t)} + H^-2 H_0² R

If the search for G and H converges to a global minimum, then from the expression given above, it must be true that:

    [G_0 - G]² = 0 ,   H_0²/H² = 1

so that

    G → G_0 ,   H → H_0
In summary, these results require that:
- The process is quasi-stationary
- All disturbances are zero mean
- All inputs are uncorrelated with all disturbances
- The model is structurally compatible with the process (number of coefficients is sufficiently large)
- The process is persistently excited
- The model coefficients are unsmoothed (Q equals zero)
The asymptotic model error can be summarized in the frequency domain as:

    Error(ω) ∝ (n/N) · Φ_ν(ω) / [Φ_u(ω) S(e^(iω))]

where:
    n    = model order
    N    = number of data points
    Φ_ν  = disturbance power
    Φ_u  = input power of injected signal
    S    = controller sensitivity function (= 1 for open-loop operation)
This frequency-based expression completes the story. Indeed, this is probably the most important relationship for anyone involved with identification to understand. This expression clearly demonstrates that errors are proportional to the ratio of disturbance to input power. Errors are also proportional to the ratio of model order to test duration and inversely proportional to the sensitivity function (tight control leads to big errors).
Implications are straightforward. If there are significant disturbances for a fixed duration test, then the errors will be high unless the input power is increased to compensate for the disturbances. If input power is restricted (limited movement) in spite of disturbances, then the only alternative is increased test duration. Here it is assumed that n is not a strictly free parameter, since improper adjustment may lead to bias. Closed-loop identification (S not equal to 1) results in increased errors relative to a comparable open-loop test. The tighter the loop, the worse the results. If accurate models are required in a particular frequency range, then the injected signal must have sufficient power in the desired spectrum.
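The proportionality can be turned into a back-of-the-envelope calculator. The absolute scale is unknown (the relation is a proportionality only), so only ratios between candidate test designs are meaningful; the function name is illustrative.

```python
def error_proxy(n, N, phi_v, phi_u, S=1.0):
    """Relative-error proxy from the frequency-domain relation above:
    proportional to (n/N) * phi_v / (phi_u * S).  n = model order,
    N = data points, phi_v/phi_u = disturbance/input power, S = the
    controller sensitivity function (1 in open loop)."""
    return (n / N) * phi_v / (phi_u * S)
```

Doubling the test duration N, or doubling the input power, halves the proxy; tightening the loop (smaller S) inflates it.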
Too often, when poor results are obtained, people look everywhere but at the source. These same people are prone to jump in favor of magic solutions. Most often, problems can be addressed in terms of the expression given above. There simply is NO replacement for proper experiment design.
3.7
Statistical Properties
FIR Statistics
Including the smoothing matrix Q, the covariance of the FIR estimates is:

    P = s² [(A^T A)^-1 - Z] A^T A [(A^T A)^-1 - Z]^T + Z A^T A θ_0 θ_0^T A^T A Z^T

where:

    Z = (A^T A)^-1 Q [(A^T A)^-1 Q + I]^-1 (A^T A)^-1

In this expression, the covariance matrix of the estimates is in fact a function of the actual values of the parameters, which are unknown. This problem is obviously resolved by setting Q equal to zero. When this is done, the expression for the covariance matrix which is used in the APC Identifier is:

    P = s² (A^T A)^-1
Noise variance is simply an indication of how the actual outputs vary about the predicted outputs. Thus, the variance is estimated by:

    s² = (1/(N - d)) Σ_(i=1..N) (y_i - ŷ_i)²

This expression can be written in a more compact and efficient form as:

    s² = (y^T y - θ̂^T A^T y) / (N - d)

Using the above expression, the final form of the covariance matrix is given by:

    P = (1/(N - d)) (y^T y - θ̂^T A^T y) (A^T A)^-1
The diagonal elements of P are the variances of the individual estimates, while the off-diagonal elements P_(i,j) represent the covariance between the ith and jth estimates when i ≠ j. Rather than using the inversion shown above to form the covariance matrix, the APC Identifier makes use of the following identity:

    (A^T A)^-1 = R^-1 (R^-1)^T
Since the regressor matrix is deterministic and the disturbances are assumed to be white, the outputs are themselves random variables. This implies, since the estimates are constructed from these variables, that they too are random variables. To determine the distribution of the estimates, it is further assumed that the disturbances have a Gaussian distribution as N → ∞. This assumption implies that the outputs, and therefore the estimates, will also have a Gaussian distribution as N → ∞. Hence:

    θ̂ ~ N(θ_0, P)
That is, the estimates are normally distributed around the true values with variance P. For sampled data sets, the above expression may be too restrictive. Thus, rather than use the normal distribution, the APC Identifier computes the Student T distribution. To generate the final distribution it will also be assumed that the estimates can be independently parameterized. Thus the ith estimate will have the following distribution:

    θ̂_i ~ T(θ_0i, P_(i,i))
Note that the above expression will only be truly correct for a diagonal covariance matrix. The implications of this assumption will be discussed in a following section.
For large N, the distribution of the estimate about the actual value has the familiar Gaussian distribution shown below:

[Figure: Gaussian probability density function f of the estimate, plotted against the normalized deviation (θ̂_i - θ_0i)/S(θ̂_i).]
With the above expression, the true value of the coefficient lies between a well-defined upper and lower bound as shown below:

    θ̂_i - t* √P_(i,i)  ≤  θ_0i  ≤  θ̂_i + t* √P_(i,i)

Hence, a noise band can be defined as:

    N_b = t* √P_(i,i)

Finally, for the null hypothesis test, it is assumed that the true value of the coefficient is in fact zero. This would imply that:

    |θ̂_i| ≤ t* √P_(i,i)
If the estimate is in fact larger than the bound, then the null hypothesis test fails and the coefficient can be assumed to be non-null. In the Identifier, t* is established based on the t-probability density function and is evaluated by numerically solving the following expression for t*:

    P~ = [2 Γ((ν+1)/2) / (√(νπ) Γ(ν/2))] ∫_0^t* (1 + T²/ν)^(-(ν+1)/2) dT

where:
    P~ = Probability level (i.e. 95%)
    ν  = Degrees of freedom
    Γ  = Gamma function
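In practice the same t* can be obtained from any Student-T inverse CDF rather than by direct numerical integration of the density; a sketch using scipy (not the identifier's own numerical solver):

```python
from scipy.stats import t as student_t

def t_star(prob, dof):
    """Critical value t* such that the central two-sided Student-T
    probability equals prob, i.e. the integral above.  Uses the
    inverse CDF: P(|T| < t*) = prob  <=>  CDF(t*) = 0.5 + prob/2."""
    return student_t.ppf(0.5 + prob / 2.0, dof)
```

For large degrees of freedom t* approaches the familiar Gaussian value (about 1.96 at the 95% level).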
3.8
Factorizations
Background
Normal vs. Orthonormal
Accuracy and sensitivity are closely related. To discuss these topics, consider the following two problems:

    min_x (1/2) ‖Ax - b‖_2^2    and    min_x (1/2) ‖(A + E)x - (b + f)‖_2^2
The first problem represents the true system. The second is a slightly perturbed problem and the one that will actually be solved. The variables E and f are the errors in A and b respectively. These errors may have a number of sources. If the elements of A and b have been measured, as is the case in identification, then they will be inaccurate due to the limitations of the measuring instruments. If they have been computed, then truncation or rounding errors will contaminate them. Even if A and b are known perfectly, they may not be perfectly representable on a digital computer.
While seemingly innocuous, this last statement is particularly important for the identification problem. At this time the design must accommodate single-precision data acquisition devices. This implies an upper bound on the accuracy of the data and a lower bound on the size of E and f. This will impose limitations in spite of the calculation precision (double for the APC Identifier).
Subsequent computations performed on A and b can be considered as another source of initial error. For purposes of discussion these errors will also be lumped into E and f.
With this information at hand, the issues of accuracy and sensitivity can be more reasonably discussed. Accuracy in this discussion implies a solution to a given problem. Here the concern is the relative accuracy of the different factorizations when applied to the quadratic norm problem. Sensitivity, on the other hand, addresses the concern of error magnification in the solution. That is, for the problems given above, what is the expected difference between x and x̂?
Accuracy and sensitivity are always related due to roundoff errors in the computations. If the effects of roundoff errors are neglected, then as described in chapter 19 of Lawson and Hanson's Solving Least Squares Problems, the solutions to a specific quadratic norm problem by SVD, QR, and Cholesky are equivalent and hence have the same accuracy. Indeed, the R obtained by the orthogonal factorization of a given matrix with full column rank is identical (within the signs of rows) to the R obtained by the Cholesky factorization of the same matrix. For a given arithmetic precision, however, the orthogonal approaches will be more accurate than forming the normal equations and using Cholesky factorization. This reduction in accuracy is not the result of the decomposition but is a consequence of forming the normal equations.
With respect to accuracy, short word computations (low precision) would favor
the orthogonal approaches. With double precision computations any differences
in accuracy are effectively moot. For single precision data and double precision
computations, as is the case with the APC Identifier, there is no loss in accuracy
incurred by forming the normal equations, hence the normal approach is
effectively as accurate as the orthonormal approach.
Sensitivity, as described above, is concerned with the possible magnification of errors. Of particular concern is the extent to which the solution to a given problem can change as a result of errors or perturbations to the original problem. This concern can be addressed directly in terms of the condition number associated with the matrix A. When the condition number is large, a matrix is said to be ill-conditioned. A matrix can be ill-conditioned with respect to inversion. A matrix can also be ill-conditioned with respect to its eigenproblem. It is possible for a matrix to be ill-conditioned with respect to inversion but have a well-conditioned eigenproblem, and vice-versa. Here the concern is only the condition number of A with respect to inversion.
Errors for a poorly conditioned problem will be greatly magnified in the solution. The 2-norm condition number is defined as:

    κ_2(A) = ‖A‖_2 ‖A^-1‖_2 = σ_1(A)/σ_n(A)

The perturbation bounds of the various factorizations for the perturbation problem given above can be stated in terms of the condition number κ. These bounds (taken from the LINPACK manual) are:
    Normal (Cholesky):   ‖x̂ - x‖/‖x‖ ≤ φ κ_2(A)² ‖E‖/‖A‖
    Orthonormal (QR):    ‖x̂ - x‖/‖x‖ ≤ φ [κ_2(A) + ρ κ_2(A)² / (‖A‖_2 ‖x‖_2)] ‖E‖/‖A‖
    Orthonormal (SVD):   ‖x̂ - x‖/‖x‖ ≤ μ [κ_2(A) + ρ κ_2(A)² / (‖A‖_2 ‖x‖_2)] ‖E‖/‖A‖

where ρ is the length of the residual vector Ax - b and φ and μ are constants greater than one. Clearly, in general the sensitivity for all methods is proportional
to the square of the condition number. It can also be seen that as the residual approaches zero, the sensitivity of the orthogonal methods becomes linearly related to the condition number, while for the normal approach the sensitivity varies as the square of the condition number. Hence, as ρ approaches zero, the orthonormal methods can be expected to be much less sensitive than the normal approach. Note however that strictly zero-norm residuals are academic with respect to identification, since this implies a perfect model with no external noise or a collocated model (A is n x n and b is n x 1), both of which are not at all realistic.
An Ill-conditioned Example
[The example matrix A and right hand side b appear here; in this copy only the entries .9130, .6590, and .2540 are legible.]
In this example the data can be considered to be single precision. Since the computations are performed in double precision, there will be no loss in accuracy when the normal equations are formed. The known solution to this problem is:

    x = [1, 1]^T

Using MATLAB (with format long), A and b can be displayed in their double precision representation.
This problem is clearly ill-conditioned. Since A has full column rank, there is a unique solution. Since A is square, the residual vector of this unique solution has zero length. That is, Ax = b.
QR Solution
In spite of the fact that κ(C) = κ(A)², the Cholesky solution (obtained by solving R^T z = d and then Rx = z) is no less accurate than that obtained from the QR factorization.
It is obvious that the Cholesky factorization has suffered no loss in accuracy and
in fact for this case yields the exact solution.
Inspection of the various solutions illustrates that the accuracy of both the QR
and Cholesky approaches are comparable to that of the SVD.
[Perturbation with elements .0020 and .0010 applied to the example system.]
Solutions of the perturbed system for the QR, Cholesky, and SVD factorizations respectively are as follows.
Notice how the small perturbation was magnified in the solution. In this case, all methods exhibit similar magnification (in spite of the fact that this is in essence a zero residual solution). In general, the different approaches can have significantly different magnification characteristics, as illustrated by the magnification bounds given previously (these bounds express the maximum magnification that can be expected, not necessarily the actual magnification obtained).
For model predictive control, where constraints may be active, it is possible that at some time individual submodels (CV/MV pairs) may totally dictate controller performance. Thus the true causal relationship between independent and dependent variables is desired. Therefore, it is good practice to never use sensitive models that are the result of poorly conditioned data. Attenuation of sensitivity in this case is NOT recommended. Indeed, even using minimum length (minimum sensitivity) SVD solutions can result in FIR models that are arbitrarily bad relative to control performance. Under the proper conditions, gain reversal is a possibility. Proper excitation (rather than sensitivity attenuation), and hence well-conditioned data, is the goal here.
It is also possible to have models that appear sensitive to perturbations even when
the input signals are properly designed. Invariably this is the result of the
disturbance power being large relative to the input power. These models should
also be considered suspect as the disturbance characteristics become part of the
model.
Sensitivity problems have tremendously influenced the design of the APC Identifier.
It is not possible to obtain a solution that is more accurate than the data upon which it is based.
Therefore, the tolerance τ used in the APC Identifier for any data-based regression
is defined in terms of εs, the single precision accuracy of the resident machine. While this
tolerance is used to define the pseudorank, both Cholesky and QR factorizations
are first computed to machine precision accuracy using pivoting strategies and
reliable condition estimators. It is only after these factorizations are complete that
the tolerance defined above is used to determine the pseudorank. The pseudorank
k is determined by finding k+1, such that |R(k+1,k+1)| < τ. Final solutions are then
based on the pseudorank k.
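The pseudorank logic described above can be sketched as follows. This Python/NumPy fragment uses a plain (unpivoted) QR factorization and an assumed tolerance; the Identifier itself uses pivoting and the tolerance defined from single precision accuracy:

```python
import numpy as np

# Pseudorank determination sketch. The Identifier uses column pivoting and
# a tolerance tied to single precision accuracy; plain QR and an assumed
# tolerance suffice for this small illustration.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [10.0, 11.0, 12.0]])

R = np.linalg.qr(A, mode='r')
diag = np.abs(np.diag(R))
tol = 1e-6 * diag[0]                  # assumed tolerance for illustration

# pseudorank k = number of leading diagonal elements of R above tol
below = diag < tol
k = int(np.argmax(below)) if below.any() else diag.size
# k == 2 here: the third diagonal element of R is numerically zero
```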
A Rank Deficient Example
To illustrate more clearly the discussion on pseudorank and its use in the
identifier, a problem defined in the MATLAB User's Guide will be solved using
the various factorization techniques. In this case it is desired to minimize the
standard quadratic norm problem given above where A and b are defined as
follows.
A = [ 1 2 3; 4 5 6; 7 8 9; 10 11 12 ]   and   b = [ 1; 3; 5; 7 ]
In this example the data can be considered to be single precision. Since the
computations are performed in double precision, there will be no loss in accuracy
when the normal equations are formed. In this problem, the second column of A
is a linear combination of the first and third columns; therefore, this is a rank 2
matrix. The data has been selected such that the solution will result in a zero length
residual.
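The rank and zero-residual properties claimed here can be verified with a few lines of Python/NumPy (an independent check, not the Identifier's implementation):

```python
import numpy as np

# Independent check of the example's properties: column 2 of A is the
# average of columns 1 and 3, so rank(A) = 2, and b lies in the range of
# A, so the least squares residual is zero.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [10.0, 11.0, 12.0]])
b = np.array([1.0, 3.0, 5.0, 7.0])

rank = np.linalg.matrix_rank(A)                     # 2
x = np.linalg.lstsq(A, b, rcond=None)[0]            # minimum norm solution
residual = np.linalg.norm(A @ x - b)                # essentially zero
```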
Because this is a rank deficient problem, column pivoting must be used in both
the Cholesky and QR factorizations. Results directly from the APC Identifier will
be used for the Cholesky discussion. By using A and b to construct input data and
by selecting FIR models with only one coefficient, the solution given by the
identifier is:
where m is the number of input data rows, order is the model order, and x is the
permuted solution vector. The corresponding permutation vector is:
x=
0.16666666666667
0.00000000000000
This solution contains one more unknown than is found in the original problem
statement. This is simply due to the fact that the identifier automatically adds a
bias term in any data regressed FIR or PEM model (see section on model
structure). The solution to the original problem is simply the first three elements
in the vector given above.
Factorization of A using QR gives:
where P is the permutation matrix resulting from the pivot calculations (In the
Identifier all permutations are saved in the JPVT array to save storage). With the
factorization complete it is easy to establish the pseudorank. As discussed
previously, simply find k+1 such that |R(k+1,k+1)| < τ. Here τ can be taken to be approximately 1.e-6.
For the zero valued solution, which is typically implied when performing QR
factorization, the solution in terms of the permuted variable x_p is R x_p = d.
Here, the dimensionality is defined in terms of the pseudorank k, and x_p,i = 0 for i > k.
In these solutions the number of nonzero coefficients is equal to the rank of the
problem. For rank deficient problems, a minimum length solution that also
minimizes the error exists. This minimum length solution can be recovered from
the QR factorization in a straightforward fashion.
Minimum Norm (Minimum Length) QR Solution
MATLAB Solutions
Next the minimum error solution can be obtained using SVD by invoking the
PINV function as follows:
Both the zero value and minimum length solutions are essentially identical to
those given previously. It should be obvious that the MATLAB left division
operation results in what is referred to here as a zero value solution, while
MATLAB's pseudoinverse function gives a minimum length solution.
The choice of which approach to use depends on the application. For MPC
identification using FIR models, there is no advantage to the minimum length
solution. Indeed, as described previously, this does not ensure that the models are
even useful for control purposes. There is, however, a computational penalty. In
addition, it could be argued that length minimization can have a deleterious
effect, since minimum length solutions will distribute coefficient effects over
linearly dependent columns. For example, if truly first order data is fit with a third
order model, all parameters will appear to be pertinent. Conversely, the zero
value approach will discard non-impactive parameters. Currently, the APC
Identifier returns only zero valued solutions.
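The distinction between the two solution styles can be demonstrated numerically. In this Python/NumPy sketch the choice of columns 1 and 3 for the basic solution is illustrative; it mirrors what pivoting selects for this particular A:

```python
import numpy as np

# Contrast of the two solution styles. The choice of columns 1 and 3 for
# the basic ("zero value") solution is illustrative only.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [10.0, 11.0, 12.0]])
b = np.array([1.0, 3.0, 5.0, 7.0])

# Minimum length solution (SVD pseudoinverse, like MATLAB's pinv):
x_min = np.linalg.pinv(A) @ b

# Zero value (basic) solution: solve on rank-many independent columns
# and leave the dependent coefficient at exactly zero.
x_zero = np.zeros(3)
x_zero[[0, 2]] = np.linalg.lstsq(A[:, [0, 2]], b, rcond=None)[0]

# Both reproduce b; pinv spreads the effect over all three coefficients,
# while the basic solution discards the dependent column.
```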
Perturbed Solution and Pseudorank
Next consider the same problem subject to a small perturbation. In this case A is
as defined previously but is perturbed by E, which has small random errors as defined below.
Note that the R(3,3) element falls below the pseudorank threshold and is
therefore treated as being indistinguishable from zero. The pseudorank for this
perturbed problem is still 2, in spite of the fact that the true rank is 3. Using
k = 2, the solution becomes:
Clearly, it is reasonable to account for the inherent limitations in the data. To this
extent the sensitivity can be somewhat attenuated. For identification, further
attenuation or suppression of sensitivity by arbitrarily increasing the pseudorank
tolerance is ill advised, since this will lead to biased solutions or suppress inherent
problems in the data of which the user should be aware. In the Identifier, these
sensitivities are purposely displayed to indicate potential concerns with
information content in the data.
Timing
At the data regression level, primary attention has been focused on accuracy,
numerical stability and the ability to deal directly with rank deficient problems as
described above. With this as a requirement, some care has also been expended to
ensure that the delivered algorithms are reasonably fast. The delivered algorithms
have been tested in comparison with the de facto standard, MATLAB. In all cases
the computational speed was found to be comparable. This comparison was by no
means meant to be comprehensive but simply a reasonableness check. All
computations were performed on the same Pentium II 366 MHz machine.
For the FIR calculations, the rank revealing Cholesky factorization routine used
in the APC Identifier was slightly faster than the chol routine used in
MATLAB. Chol was approximately 30% slower than the Identifier
factorization. A 960x960 matrix takes about 9 seconds to factor using the
Identifier. A direct comparison on the formulation of the normal equations is not
really meaningful since the Identifier uses a fast correlation update. While
intrinsic MATLAB functions are very fast, script (.m) files are not. Nevertheless,
MATLAB can be used as a sanity check based on the theoretical number of
floating point operations (like MATLAB, all computations in the Identifier are
double precision). Consider a 6000x720 A matrix. Using MATLAB, forming AᵀA
takes approximately 410 seconds. While this full formulation makes use of
neither the symmetry nor the Toeplitz structure of the problem, the fast
correlation update does. The operation count using this approach is a function of
the rows and columns in A and the number of independent variables.
Formulation of the normal equations in the Identifier for a 6000x720 matrix takes
less than 2 seconds for 1 independent variable and less than 4 seconds for 6
independent variables.
This computational speed is reasonable relative to the theoretical operation count.
There is some overhead that could be reduced but the effort is hard to justify
considering the existing performance. Overhead is due to the interactive design
(interrupts allow messaging and user intervention while the computations are in
progress), the nested indexing used to implement the fast correlation update and
the support of matrix segmentation. Only the latter can be influenced by the user.
In the Identifier, the parameter, UserMemABuf, defines the maximum size of
the A matrix. If the actual A matrix requires more than this amount of memory,
then A is partitioned and the normal equations are formed looping over the
segmented A .
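The segmented formation of the normal equations can be sketched as follows. The block size stands in for the UserMemABuf limit, and the code is an illustration rather than the Identifier's fast correlation update:

```python
import numpy as np

# Segmented formation of the normal equations: A is processed in row
# blocks (the block size stands in for the UserMemABuf limit) and A'A
# and A'b are accumulated block by block.
rng = np.random.default_rng(0)
A = rng.standard_normal((600, 12))
b = rng.standard_normal(600)

C = np.zeros((12, 12))                # accumulates A'A
d = np.zeros(12)                      # accumulates A'b
block = 100                           # rows per segment (illustrative)
for i in range(0, A.shape[0], block):
    Ai = A[i:i + block]
    C += Ai.T @ Ai
    d += Ai.T @ b[i:i + block]

x = np.linalg.solve(C, d)             # least squares solution
# C and d are identical (to rounding) to forming A'A and A'b at once
```

Because each segment contributes additively to AᵀA and Aᵀb, partitioning changes only memory use and overhead, not the final solution.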
As a final case, consider a problem where 30 independent variables are moved in
a simultaneous fashion. Let there be 6000 rows in A and let there be 30
coefficients for each independent variable. The dimensions of A and AᵀA are
6000x900 and 900x900 respectively. Using the default settings the total time to
obtain an FIR solution for this problem using the Identifier is less than 24
seconds. It takes approximately 16 seconds to form the normal equations. If
UserMemABuf is increased so no partitioning occurs, then it takes only 11
seconds to form the normal equations and the total solution time is less than 18
seconds.
Note that as long as the matrix is full, the cost to compute models for additional
CVs is essentially undetectable relative to the other computations. This example
in no way implies that it is recommended to include all possible independent
variables into a single regression. In fact quite the opposite is usually true. It is
usually very poor practice to simply include all variables and see what happens.
Most often it is more effective to do block testing where the specified
independent variables have been designed to maximize information content in the
data. Large systems can easily be constructed simply by combining smaller subsystems.
For PEM calculations there is almost no need to report timing. If the goal is
ultimate speed, then this is indeed a poor choice. PEM calculations are very slow
relative to any FIR calculations. For a reasonable class of problems however,
solutions can certainly be obtained in a respectable amount of time. For the
intended applications, computations should be less than 10 seconds per CV.
With this model form, the only comparisons made were for the update
calculations. That is, the time required to calculate a full Gauss-Newton step. For
the QR update, direct comparison of the solution implemented in the Identifier
and the qr MATLAB routine is not really meaningful, since MATLAB's qr
routine physically forms Q, while in the Identifier only a compact representation
of the Q factors is used. Thus even using the economy form, MATLAB's qr
routine would be expected to be relatively slow. Hence, the comparison will be
made between the Identifier and MATLAB's left division operation. For this
problem the Jacobian is taken to be 6000x280 and the computation speeds are
comparable. Here, MATLAB is slightly faster than the Identifier. The Identifier
takes about 27-28 seconds to perform the update while MATLAB takes about 22-24 seconds for the equivalent computations. Any further reduction in overhead
associated with the QR algorithm in the Identifier is difficult to justify
considering the stated performance. Note however, if the Cholesky option in the
APC Identifier is used, the GaussNewton step for the same problem takes less
than 2 seconds to compute.
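The two routes to a Gauss-Newton step compared above can be sketched as follows (toy data only; the Identifier's compact Q representation and correlation updates are not reproduced here):

```python
import numpy as np

# Two routes to a Gauss-Newton step delta, which solves J'J delta = J'r.
rng = np.random.default_rng(1)
J = rng.standard_normal((200, 8))     # stand-in for a PEM Jacobian
r = rng.standard_normal(200)          # stand-in for the residual vector

# QR route (more stable): J = QR, then solve R delta = Q'r
Q, R = np.linalg.qr(J)
delta_qr = np.linalg.solve(R, Q.T @ r)

# Cholesky route (faster): J'J = L L', then two triangular solves
L = np.linalg.cholesky(J.T @ J)
z = np.linalg.solve(L, J.T @ r)
delta_chol = np.linalg.solve(L.T, z)
# For well conditioned J the two steps agree to rounding error
```

The Cholesky route works on the much smaller matrix JᵀJ, which is the source of the speed advantage noted above, at the cost of squaring the condition number.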
3.9 Summary
Guidance for the technical discussion is based on the desire to present a
completely open description of the Identifier. In many instances closed or black
box type of algorithms are undesirable from a technical perspective. For many, the
details may be more than necessary. For some, however, the detailed
discussion will help to provide a deeper understanding of the fundamental
operation of the algorithm and hopefully address some basic issues that are often
misunderstood.
In the initial portion of this section, the discussion was limited to a general
overview describing the hybrid approach. A more detailed technical description
was then given with respect to the general identification problem, the various
models available and the corresponding solution techniques. It was pointed out
that both robust and quadratic norms are supported, the former being available
only with PEM models. Both PEM and FIR models can be used for data
regression. PEM models should be considered to complement the FIR models
and are provided for increased ease of use. The intent is one step identification
in instances where there are only a few independent variables moving
simultaneously. If these models are satisfactory, move on. Otherwise use FIR
models.
Model forms used for order reduction and variance attenuation were described
next. These include ARX, which uses a prefiltering algorithm, fixed-form
Laplace, and output error. The solutions used for all of the various models were
then discussed illustrating the pertinent features. Once the solutions were
presented, properties of the FIR and PEM models were delineated showing that
both models have a sound theoretical basis. Under the stated conditions, FIR
models, as used here, are unbiased and consistent. PEM models, when they
converge, are consistent even in the closed loop and are minimum variance if the
noise is Gaussian. Both bias and variance effects were also discussed. Techniques
for quantitative evaluation of FIR models were then outlined. These topics
included correlation, confidence limits, null hypothesis tests and statistical
ranking.
Finally, the factorizations used in the various solutions were described. Examples
were given showing the salient features of both the Cholesky and QR
factorizations. A significant portion of the discussion was focused on the ill-conditioned/rank deficient problem. For properly designed experiments, these
conditions should seldom be encountered. Nevertheless, they always remain a
possibility. The intent of the factorization discussion was to convey the fact that
the solution algorithms are numerically robust. If poor models are obtained it is
not the result of numerical problems. Consequently, switching to an alternate
factorization approach will not result in more reliable models (at least not in a
statistically meaningful sense).
Identification is a relatively involved topic and it can take many forms. The search
for the ultimate identifier can be a lifelong endeavor. These issues were known at the
onset in the design of the APC Identifier. While there are some existing
approaches that are effective, there are always promising new techniques that
need to be evaluated. Clearly any tool needs to continually evolve. This one is no
different.
Open-loop identification of stable and integrating processes has been the objective
in the design and development of the APC Identifier. At the onset, the goal was to
have a tool that would be effective, fast, numerically stable and provide
interactive ease of use even for the relative novice. There certainly are other
approaches, some more elegant, some less. The simple fact remains that this one
works and has proven effective for the intended applications. This in no way
however, implies completion. Indeed as other approaches are proven more
effective they need to be either integrated or used to replace existing
functionality.
It is the intent to provide at least three new quantitative indicators: noise and
confidence bounds on the PEM models, uncertainty spectrum on the final Laplace
models and cross correlation plots of the residuals.
It is also the intent to continuously evaluate promising techniques. Of particular
interest are canonical subspace approaches. They offer minimal order, minimum
variance models, can be used in the closed loop, and don't require a nonlinear
search. Issues on delay and short data sets may, however, be of concern. Also of
interest are design approaches for poorly conditioned plants. Much of the
literature is focused on zero frequency design. Extension to mid frequencies
would aid greatly in control relevant identification.
Another enormously interesting area is closed loop identification. Much appears
in the technical literature. Yet practical success has proven to be an elusive
endeavor. A word of caution is therefore warranted regarding this topic.
Irrespective of the approach, there are the so-called identifiability conditions that
simply cannot be ignored (see, for example, Ljung's System Identification: Theory
for the User or Larimore and Seborg's Automated Multivariable System
Identification: Basic Principles with Control and Monitoring Applications). Even
when the identifiability conditions are satisfied, the variance errors will still be
dictated by the expression given previously in the discussion on bias and
variance.
Overview
In This Section
This section explains how to read input files and begin the model
identification process. Read this section to find out about:
Hierarchical Overview
To invoke Profit Design Studio (APCDE), either click on the APC icon
or double click on the APCDE32.exe file. The about dialog box shown in
Section 1 is displayed illustrating the current configuration. When the
development environment is launched, an APCDE32.log file is
automatically created (rewritten if one already exists). If the operating
system is NT, the file is placed in the WinNT directory. If the operating
system is WIN95, the file is placed in the WINDOWS directory. This
.log file contains Profit Design Studio (APCDE) version compatibility
information. Any problems associated with incorrect versions, etc., are
summarized in this file.
Once the correct configuration has been established, the about dialog box
can be closed. At this point Profit Design Studio (APCDE) can be used to
perform any of the configured functions. An empty environment appears
as shown below.
4.2
Only the Model Converter and TDC Data Converter are associated specifically
with the APC Identifier (the Point Builder is used in conjunction with Profit
Controller (RMPCT) design components of the Profit Design Studio (APCDE)).
The converter tools provide a mechanism for creating files that can be directly
imported into the Profit Design Studio (APCDE). Note, use of these tools does not
result in the start of an identification session. Rather, it creates one or more files
that can be used as input for identification.
Several files are associated with the APC Identifier. A description of these files
and their corresponding extensions are shown in the following table.
File Types and File Extensions

File Extension    File Type
MPT               ASCII
PNT               ASCII
MDL               Binary
PID               Binary
FIR               ASCII
XFR               ASCII
INF               ASCII
Selecting File > New at this level results in the creation of an empty document
(an empty document assumes that either data is not available and the user is
going to enter all pertinent information by hand or the user is going to merge
information into it from one or more existing files). The user can specify the
type of document to be created by selecting from the New dialog box shown
below.
Only Model Dev.File and PID Dev.File are associated with an identification
session. Each identification session is automatically associated with a specific
document or file having the .mdl or .pid extension respectively. Any .mdl file
contains all the information necessary to represent a general Multiple-Input
Multiple-Output (MIMO) identification session, while any .pid file contains all
the information necessary to represent a Multiple-Input Single-Output (MISO)
identification session.
In addition to the File>New option, an identification session can also be started
by selecting File>Open. The environment displayed to the user depends directly
on the type of file or document that is opened.
Depending on the procedure followed, the options can be used to either
open/create an .mdl file (also referred to as a Profit Controller (RMPCT)
model file) or a .pid file (also referred to as a Profit PID (RPID) model file). A
discussion on creating/opening these files is given in the following sections.
4.3 Creating an RMPCT Model File
This allows the specification of the proper document type. To open a file, select
File>Open, then select the desired directory. From the pull down list, choose the
extension of the files that you want to display as shown below.
This dialog box displays all files that can be read into Profit Design Studio
(APCDE). This list expands as new elements are added. If a document type is
selected whose functions have not been installed on the host computer, then a
message is displayed indicating that the document could not be opened.
From the file list, select the desired file. To open an existing model file, select
.mdl. Selecting either .pnt or .mpt implies that a new .mdl file is created from raw
data.
Data Source - Data Files
Choose whether the .mdl file is based on raw data from .pnt or .mpt files.
File Type
Action
.mpt
Select a multipoint (.mpt) file to start from scratch with the variables
and test data in that file (as shown in the above figure).
.pnt
At this point selected data is read into Profit Design Studio (APCDE) and an
RMPCT identification document is opened. If a .mpt file was read in, the
document is titled Filename.mdl (from .mpt), where Filename is the name of the
.mpt file. If .pnt files are read in, the document is titled model*.mdl (from .pnt).
An example of an .mpt file is given below.
After the file is saved, the (from .mpt) or (from .pnt) descriptor is no longer
displayed in the title bar. With data loaded into the .mdl document, identification
may begin. See Sections 4-7 for a detailed description of the identification
procedure.
At this stage the .mdl document can be saved for later use by selecting File>Save
(or Save As) from the main menu, or it can be used to begin the identification.
Data Source - Manually Entered
To create models without data, choose File>New from the main menu. Select
Model Dev.File to create an .mdl file as shown below.
Entering or Changing Variable Information
Choose Edit> Var Info to begin entering descriptive information about the model.
A dialog box will appear allowing the entry of and changes to information for
each variable.
Use the Name field to name the variable. If you do not enter a name, the
Point field is used. At least one CV and one MV must be entered to
proceed with the identification. More will be said about this dialog box
in later sections.
The Variable Info dialog box can be used to change existing variable
information and, if there is no raw data present, to add new variables to
the work space.
If no raw data is present, then the next button will eventually access the
end of the variable list which will be reflected in the Descriptive Info
view as a highlighted empty row. This will result in an empty dialog box
such as that shown above. In this state a new variable will be added once
the pertinent information is entered and the OK, Next or Previous button
is selected. Note that if any variables were selected prior to invoking the
dialog box, all newly created variables will automatically be selected
when the dialog box is closed.
After the pertinent information has been entered, switch to the Model Summary
view. Note that there are no models available at this time. To manually enter
models, double click in the empty grid area to bring up the Transfer Function
dialog box. From here proceed as described in sections 6 and 7.
4.4 Creating an RPID Model File
To create an RPID model, it is necessary to select File>New from the main menu
as shown below.
If PID Dev. File is selected and if the appropriate library has been installed, then
the dialog box illustrated below appears.
You must now choose whether your Robust PID file is based on raw data from
.pnt or .mpt files or whether you want to manually enter the transfer function.
Selecting the Data Files radio button and clicking on [OK] results in the
following dialog box.
You may select either .mpt or .pnt files as long as the total number of variables
is limited to one CV, one MV and up to 10 DVs.
To select a single file, click on the file name in the file name box.
To select additional files, hold <CTRL> and click on the file names
(<CTRL> toggles the selection state).
To select all files in a range, click on the first name and then hold
<SHIFT> and click on the last name.
Click [Open]
At this point selected data is read into Profit Design Studio (APCDE) and a PID
identification document titled PIDDev*.pid is opened as illustrated below.
With data loaded into the proper document (.pid), identification may begin. See
Sections 4-7 for a detailed description of the identification procedure.
If you prefer to enter your transfer function manually, choose the data source
Manually Entered instead of Data Files as indicated below.
Now you have a window representing your empty PID model file. Proceed as
described above for the empty document.
4.5 Reading in Data
The starting point for identification of a multivariable process model is a file that
contains test data obtained from the process.
Test data consists of sampled values for the independent and dependent
variables, taken over a period during which the independent variables are excited
by a test signal. The Identifier can read in test data from files having data from
one point, or from multiple points. The different data file types are described
below.
Single point files (which must have a .pnt extension) contain sampled values for
only one point. The first six records are header information for this point.
The fourth record is the sample rate at which the data was taken.
The fifth record contains the beginning time stamp marking the start of the
data record (Month/Day/Year Hours/Minutes/Seconds).
The sixth record shows the point category (this can be either manipulated,
controlled or disturbance).
Header information is followed by the actual data. One sample value per record is
stored for as long as the test is run. Data not to be included in the analysis is
entered as an NaN.
Single Point Data - An Example File
08/54/30
CONTROLLED
0.0000000e+00
1.2572979e-01
1.5274251e-01
NaN
9.6721695e-02
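A reader for this single point format can be sketched as follows. Only records four through six are specified above, so the first three header lines and the field names used here are illustrative assumptions:

```python
import math

# Hypothetical reader for the .pnt format described above. Only records
# four through six are specified in the text, so the first three header
# lines and the field names here are illustrative assumptions.
def read_pnt(lines):
    header = lines[:6]
    point = {
        "sample_rate": header[3].strip(),   # record 4
        "start_time": header[4].strip(),    # record 5
        "category": header[5].strip(),      # record 6
    }
    samples = [float(v) for v in lines[6:] if v.strip()]
    return point, samples

text = """TAG1
DESCRIPTION
UNITS
60
01/15/01 08/54/30
CONTROLLED
0.0000000e+00
1.2572979e-01
NaN
9.6721695e-02
""".splitlines()

point, samples = read_pnt(text)
good = [v for v in samples if not math.isnan(v)]   # NaN samples excluded
```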
Multipoint files (which must have an .mpt extension) contain sampled values for
multiple points. These files are created by the AM Data Collector and contain one
variable per column (each column is eight characters wide), and each column is
separated by a blank.
The first nine rows contain header information:
Rows one and two allow for sixteen character tagnames for each variable
Row three contains the parameter (OP, SP, PV, etc.) of the point
Rows four, five, and six are used for a twenty four character point
description
This information is followed by rows of the actual data. Each subsequent row
corresponds to a data sample. At the end of each row there is a blank character
followed by the time stamp of the data sample. This time information is used to
determine the sample interval.
If sample intervals are not consistent then the following message will be
displayed.
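The sample interval determination described above can be sketched as follows; the time stamp format used here is an assumption for illustration:

```python
from datetime import datetime, timedelta

# Sketch of the sample interval check: the interval is inferred from
# consecutive time stamps, and inconsistent spacing is flagged. The
# time stamp format here is an assumption for illustration.
stamps = ["01/15/01 08:54:30", "01/15/01 08:55:30", "01/15/01 08:56:30"]
times = [datetime.strptime(s, "%m/%d/%y %H:%M:%S") for s in stamps]
intervals = [t2 - t1 for t1, t2 in zip(times, times[1:])]
consistent = all(dt == intervals[0] for dt in intervals)
```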
(Example .mpt file fragment: four data columns with parameter OP for each
column; engineering units DEG F, PSIG, BPD and DEG F; point categories
controlled and manipulated; the header rows are followed by rows of data
samples, in which NaN marks excluded values.)

Saving an .mdl or .pid File
Select File>Save.
Click the toolbar button that looks like a diskette (same as File>Save).
4.6
The Identifier can read finite impulse response (FIR) models and transfer
function models created by other applications, as long as:
The Identifier needs to build the controller models, so it does not work with these
model forms from other applications.
Non-Native FIR Files
FIR model files created by other applications need an FIR extension, with the
data given in a single column, in this order:
Number of CVs
Number of MVs
Number of DVs
Sample File
Number of CVs
Number of MVs
Number of DVs
Numerator coefficients for each polynomial in CV row (for null models make no
entry; go on to the next model, leaving no blank lines)
Denominator coefficients for each polynomial in CV row (for null models make
no entry; go on to the next model, leaving no blank lines)
Tagname
Engineering units.
Sample
Once an identification session has been started, the main menu will indicate all
high level functions that are available to the user. This menu has been
arranged in a fashion that reflects progression through a typical identification
session, starting at the left and progressing right as functions are completed. The
functions contained on the main menu are illustrated below.
In many cases, this same philosophy is used once a given item is selected. Here
however the progression is usually from top to bottom. User options are also
made available in a logical fashion. Access to commonly changed parameters is
provided through highlevel dialog boxes. Subsequent dialog boxes can be used
to access parameters that are less frequently used.
An overview of the various menu items given above is as follows:
File
Edit
Typical cut, copy, paste and delete type functions are available
using this item. In addition this function also provides a shortcut to the dialog
box used to edit information associated with any variable. Access to the
functions in this menu is highly dependent on the states of the application, the
current view, selection status and past events. Edit options are shown below.
The cut, copy, paste and delete items can be applied to either data or a
combination of data and models. These operations, to be described in detail in a
dedicated section, can be used to move or rearrange data and models within a
given document or to merge data and/or models into one or more documents.
The copy/paste functions can also be performed using the standard drag and
drop operations. Select All can be used as a short cut to select all variables or
models depending on the current view. The procedures designated as Special
are dedicated to operations on sub models within a given matrix. Only one
model at a time can be modified using copy and paste. Delete will work with any
number of selected models. The User, Final, Uniform and Mixed options are for
copying results associated with different selection strategies (details will be
given in a later section). The CopyRegr2Pred and CopyPred2Reg options are for
copying selection ranges between FIR/PEM and prediction ranges.
The item below the separator bar can be used to invoke the dialog box that
supports the editing of descriptive information about each of the variables.
Insert
Special marks that can be used to designate bad data or data
that should not be used for certain operations can be inserted or removed using
this option as shown below.
In certain views data can be marked using special designators. These marks can
subsequently be removed. In addition, these marks can be displayed or not
depending on the user's preference. Marking and unmarking of data can be
accomplished in a more convenient fashion using the dedicated toolbar buttons
discussed in a subsequent section.
Data Operations
All manipulations to be performed on the data with
the exception of the cut, copy, paste and delete functions can be accessed
through this menu option. The following pull down menu contains the data
options.
Views
Different views let you display different information about your
identification: data plots for different ranges, model trials, normalized scaling,
zoom and many other options. The fundamental view options are obtained by
selecting View from the main menu as shown below.
Views preceding the first separator bar correspond to any information associated
with the data. The next group of views pertains to the various models and the
different ways they can be presented. This group is followed by views that
pertain to qualitative and quantitative indicators that can be used to help assess
data and model quality. The next group can be used to configure how various
data is displayed. Finally, the last group can be used to enable/disable the
toolbar and status bar respectively.
Identify This menu option is used to perform all the functions associated
with identification. It also supports overall setup and Load & Go operations.
Selecting Identify from the main menu gives:
Tools
Several tool functions are available during an identification
session. In addition to those available in the empty design studio (discussed
at the beginning of this section), two more options are available when a
session is open. These options are shown below.
At times other applications may interact with the color palette in an undesirable
fashion. Select Default to correct this problem.
The other preference option is the choice of toolbar. Here, either the standard or
detailed identification (ID) toolbar may be chosen. The standard ID toolbar has
the following form.
The detailed ID toolbar has additional buttons, as shown below.
New Window, Cascade, and Tile under the Windows menu refer to the conventional options in a standard application with a multi-document interface. The Arrange Icons option, while enabled, performs no meaningful operation in the current release and can be considered reserved for future releases.
Currently, the Help menu supports only the About Profit Design Studio function. Fully integrated online help is planned for future releases.
Keyboard Selection
Any menu item can be accessed either by using the mouse or by direct keyboard access. To access a menu item via the keyboard, select <Alt * &>, where * is the underlined character in the main menu (selecting <Alt *> will cause the menu drop-down options to be displayed) and & is the underlined character in the drop-down menu. For example, to save the current file select <Alt F S>. Character selection is NOT case sensitive.
Parameters that can be adjusted to alter the identification configuration are listed
below.
[Memory Buffer]
SwapMode=0
UserMemBuf=32
UserMemABuf=8
[UserOptions]
FIRDefault=1
PositionalForm=1
StartSettleT=60
DeltaSettleT=30
FirNumCoeff=30
ConfidenceCalcs=0
StartPemOrder=2
DeltaPemOrder=1
RobustNorm=0
PemBias=1
UsePfxIC=1
AICSearch=1
NoiseModCheck=0
PfxExpRed=10
PZTol=.0001
DTTol=0.001
UsrPrecision=8
AutoAnnotate=0
MaxAnnotate=1
DisParOrder=2
MultiMean=0
AutoSelect=1
ExcludeRangeType=0
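The listing above uses standard INI syntax, so the settings can be read and adjusted with any INI-aware tool. The following is a minimal sketch in Python; the settings shown are excerpted from the listing above, but the real configuration file name and location are product specific and are not assumed here.

```python
from configparser import ConfigParser
from io import StringIO

# A few of the settings listed above, in the same INI syntax.
# In practice these would be read from the Identifier's configuration
# file; the real file name and location are product specific.
settings_text = """
[UserOptions]
FIRDefault=1
FirNumCoeff=30
AutoAnnotate=0
"""

cfg = ConfigParser()
cfg.read_string(settings_text)

# Turn auto-annotation on (default is off, 0).
cfg.set("UserOptions", "AutoAnnotate", "1")

# Serialize the updated settings back to INI text.
buf = StringIO()
cfg.write(buf)
updated = buf.getvalue()
```

Note that ConfigParser lowercases option names when writing, so the round-tripped file may differ cosmetically from the original listing.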
PZTol Defines the tolerance in percent for canceling poles and zeros
in the Laplace domain transfer functions. Cancellation is performed
only through the Transfer function dialog box.
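The manual does not spell out the cancellation rule itself; the following sketch shows one plausible interpretation of a percent-tolerance pole/zero match. The function name and the matching criterion are illustrative assumptions, not the product's actual algorithm.

```python
def cancel_pairs(poles, zeros, tol_pct=0.0001):
    """Cancel pole/zero pairs whose relative distance is within tol_pct
    percent: |p - z| / max(|p|, |z|, 1) * 100 <= tol_pct.

    Illustrative sketch only; not Profit Design Studio's implementation.
    """
    remaining_zeros = list(zeros)
    remaining_poles = []
    for p in poles:
        for z in remaining_zeros:
            if abs(p - z) / max(abs(p), abs(z), 1.0) * 100.0 <= tol_pct:
                remaining_zeros.remove(z)  # cancel this pair
                break
        else:
            remaining_poles.append(p)  # no zero close enough to cancel
    return remaining_poles, remaining_zeros
```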
AutoAnnotate This flag turns the auto-annotation option on and off. Set this option at any time. Default is off (0). For more on this option see Section 12.
Overview
In This Section
While each view is described in this section, focus is directed primarily on the use of those views associated with observing, selecting, and configuring data. Use of other views that deal primarily with models and/or performance will be discussed in subsequent chapters as the need arises.
Information displayed by Profit Design Studio (APCDE) specifies which environment has the current focus. In the case shown below, the environment supports MIMO identification since the active document (the document with the current focus) is of the .mdl type. (Identification documents have the model icon showing multiple response curves followed by the prediction symbol y.) When an identification document is initially opened, the source data file and document type are displayed in the document title as described previously. The default view is the Descriptive Info View.
5.3
Basic Views
Different views let you display different information about your identification: data plots for different ranges, model trials, normalized scaling, zoom, and many other options. The fundamental view options are obtained by selecting View from the main menu as shown below.
Primary Functions
As illustrated, the current view is the Descriptive Info view. The primary views
and their functions are:
SingleGraph Data Plots Use this view to display all selected variables on the same plot. Only the selected variables are plotted; if no variables are selected, all variables are plotted. Select the Multiple Scale option to plot each selected variable full scale on the same plot (use this option to compare variables). Select the Normalized Scale option to plot each selected variable on its own axis. Select the Single Scale option to plot all selected variables using one range; for this option the value of the range will be displayed on the vertical axis. Configure the plot by double-clicking the desired variable. Use the right mouse button in the plot box to display the time, value, and index of the variable closest to the target location. Zoom and unzoom to facilitate selection/edit functions. Use this view to mark/unmark bad data at the global level; this data will be treated as bad for all subsequent operations until the marks are removed. Use this view to cut, remove, or delete data. The title for this view is Trend Plots and will always be displayed in the lower right portion of the vertical margin. This view can be used for the selection and data edit functions used throughout the environment.
Show Regression Ranges Use this view to show all ranges associated with regressions. Select ranges to exclude values from the data used for FIR/PEM regressions. Values to be excluded are set bad on entry to the regression. Values can be excluded using two different approaches:
Block Selection With this option, ranges are selected and these
ranges are applied to all variables used in the regression. All values
within the time range (inclusive) are set bad for any variable being
regressed. Since all variables are bad for each range selected, the
data is collapsed such that each range to be excluded is represented
by a single NaN for each variable.
Variable Selection With this option, data can be excluded for each variable on an individual basis. This type of selection is displayed differently from Block selection to avoid any ambiguity.
This category supports an additional option:
Data marked as bad at the global level (in the Single Graph Data plots
View) can be displayed in this view (and any graphical view) by choosing
the Show Bad Data option. Global marks, however, cannot be altered in this view.
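The block-selection collapse described above can be illustrated with a simple sketch. The data layout and function are hypothetical stand-ins for the real data structures: each row holds one sample per variable, and NaN marks a bad value.

```python
# Sketch of block-selection collapse: every contiguous excluded time
# range is replaced by a single NaN row, since all variables are bad
# over that range. Illustrative only; not the product's implementation.
NAN = float("nan")

def collapse_block_ranges(rows, excluded_ranges):
    """rows: list of [v1, v2, ...] samples; excluded_ranges: list of
    (start_index, end_index) pairs, inclusive on both ends."""
    excluded = set()
    for start, end in excluded_ranges:
        excluded.update(range(start, end + 1))
    out = []
    i = 0
    n = len(rows)
    while i < n:
        if i in excluded:
            # Emit one NaN row for the whole contiguous excluded range.
            out.append([NAN] * len(rows[i]))
            while i < n and i in excluded:
                i += 1
        else:
            out.append(rows[i])
            i += 1
    return out
```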
This view operates in a fashion almost identical to the Single Graph Data
Plots View. Selection, zooming, scaling etc. are as described above. You
can NOT use this view to actually cut or remove data. This can only be
done in the Single Graph Data Plots View. The title for this view is Show
Regr. Ranges and will always be displayed in the lower right portion of the vertical margin. This title will have a red superscript b, v1 or v2. The superscripts b and v designate block and variable selection, respectively, while the 1 and 2 indicate that marks are applied to dependent variables only (1) or to both dependent and independent variables (2).
Show Prediction Ranges Use this view to show all ranges associated
with predictions. Select ranges to exclude values when performing any
prediction calculations. Values can be excluded only by using block
selection. These ranges are applied to all variables used in subsequent
predictions. All values within the time range (inclusive) are set bad for any
variable being used in the prediction. Since all variables are bad for each range selected, the data is collapsed such that each range to be excluded is represented by a single NaN for each variable.
Data marked as bad at the global level (in the Single Graph Data plots
View) can be displayed in this view (and any graphical view) by choosing
the Show Bad Data option. Global marks, however, cannot be altered in this view.
This view operates in a fashion almost identical to the Single Graph Data
Plots View. Selection, zooming, scaling etc. are as described above. You
can NOT use this view to actually cut or remove data. This can only be
done in the Single Graph Data Plots View. The title for this view is Show
Pred. Ranges and will always be displayed in the lower right portion of the
vertical margin.
Scatter Matrix (raw data) Use this to view raw data for each variable as
a function of all other variables (excluding Aux variables) as a scatter plot
in matrix form. As with all matrix views this view is fully scrollable. In spite
of optimizing the scrolling function, documents with large amounts of data
may exhibit update delay (resizing can be particular slow). This delay can
be minimized by scrolling using the page approach or by scrolling to the
desired matrix position in an alternate model and subsequently switching to
the scatter matrix view (scroll positions will be saved). Positions of
variables in this view are reflective of those in the Descriptive Info View.
FIR/PEM Step Responses This view is used to display the matrix of all
FIR or PEM step responses. Sensitivity of the step responses can be used as
a preliminary indicator of FIR model adequacy. Similarly, sensitivities can be used for PEM order selection.
All Step Responses Both FIR/PEM and parametric step responses are
displayed in this view. Overall parametric fit and smoothing qualities can be
obtained from this view.
5.4
Working with Different Windows
Windows in Profit Design Studio (APCDE) work the same way as in any other Windows application. Multiple windows let you display different information, or different views of specific information, at the same time. You can open any number of files at one time. Each new file has its own window.
You can also open additional windows on the same file by using Window>New Window or by using the following toolbar button.
The new window initially has the same view as the previously selected window,
but you can now change the view. This lets you see different views of the same
information at the same time. This is particularly helpful when marking data, building FIR/PEM models, or evaluating predictions.
When marking data, two views can be open to graphical views and a third can be open to Descriptive Info. In one graph you can display a block of variables which are of interest. From the Descriptive Info View, select a subset of variables to be marked. Drag and drop these variables into the alternate graphical view. Mark
values for these variables as desired. The marks and their temporal relationship
with other variables will be automatically displayed in the original graphical
view.
When regressing data, one window can be opened for the fit. Another can be used
to view the step responses and a third can be opened to view the statistics.
Information is automatically reflected in all views simultaneously as the
calculations are updated.
When evaluating predictions, one window can be opened for the prediction
(remember to select the Store predictions option) and another can be opened to
Single Graph Data Plot. Only predicted and actual CVs along with the
corresponding residuals will be shown in the prediction view. In this view, data in
any excluded ranges will not be displayed. In the other view select any variable
you wish to display including predicted variables (they will be stored as Aux
variables). Excluded ranges for the predicted variables will be represented by
NaNs.
Data can be plotted in any of four graphical views. While the Single Graph Data
Plots is the primary graphical view, Exclude FIR/PEM Ranges, Exclude
Prediction Ranges and Multigraph Data/Scatter Plots can also be used to view
data in a graphical framework. Use of each of these views will be presented in the
following paragraphs.
SingleGraph Plots
With no variables selected (none selected is the same as all selected), all variables are plotted in a typical SingleGraph Data Plot as shown below. Here the data is displayed using the normalized scale; that is, each variable is displayed using its own axis.
Viewing Single
Graph Plots
Use View>Plot Options to change the plot size, or click and drag in the plot area
in any of the SingleGraph Type of views to select a zoom rectangle that expands
to fill the window. Press <ENTER> to unzoom.
The Single Graph Plots View (as well as Show Regression Ranges and Show
Prediction Ranges) shows plots of the selected variables on one graph. Use the
Descriptive Info view to select the variables that are plotted on the Single Graph
Plots view (none selected is the same as all selected). Open a second window to Descriptive Info and drag and drop variables from that window to the graph to display the dropped variables.
1. Select View>Normalized Scale to scale the ranges so that each plot occupies
its own band on the graph. Select View>Multiple Scale to scale the ranges
so that each plot occupies the full height of the graph. Select View>Single
Scale to view all selected variables on a common scale. The common scale
will be displayed on the vertical axis.
2. To magnify the plot (zoom in), click and hold down the left mouse button
anywhere in the plot area and drag the cursor to open up a dashed
rectangle. When you release the button, the dashed rectangle expands to
fill the window. Repeat this to get finer resolution.
The scroll bars become active when the view is zoomed to allow scrolling
in either the time or value axes.
3. The date/times of the left and rightmost data points display at the left and
right sides of the time axis box below the horizontal scroll bar.
You can see the date/time and corresponding vector index of any data point by moving the cursor into the time axis box. A vertical dash-dot line appears in the graph above the cursor, and the date/time (index) of this line displays in the center of the time axis box. See, for example, the plot shown above.
4. To display values for each variable corresponding to the vertical dash-dot line, simply hold down the right mouse button while the cursor is in the time axis box. The values will be displayed in parentheses between the high and low ranges as shown below.
5. To use the spyglass feature, position the mouse anywhere in the plot
box. Click the right mouse button and hold it down. An indicator will
encircle the data point closest to the cursor. A vertical dash dot line will
connect the indicator to the horizontal axis. The corresponding variable,
its value and index will be displayed at the center of the time axis box as
shown below.
Reconfigure
SingleGraph Plots
To reconfigure the plot, double click anywhere in the text box on the left side of
the plot (left margin). The following dialog box will be displayed.
This dialog box will allow you to set the plot ranges independently for each of the
variables. Initial values correspond to the variable that was double clicked. At the
top of the dialog box the Item information from the Descriptive Info view is
displayed along with the tagname.parameter and the variable class. The maximum
and minimum values of the data are displayed as text strings. The values are used
to initialize the User Hi and Lo Values if not already set and are used to reset
these values when the Defaults button is selected. The user can enter the desired
ranges and see interactive response from the plot view by simply selecting the
Update button. Note that all range information is saved on the variable, NOT the view. Hence, if multiple windows are opened, the range change will be automatically reflected in all data views simultaneously.
Use the next and previous buttons to sequentially access alternate variables
displayed in this view. A value entered takes effect if the update button is
selected or if a new variable is selected. Selecting Cancel after the fact will not
undo the operation. Use the check box to prevent data that is outside user ranges
from being displayed in the normalized mode.
Three modes are possible with any of the trending plots: Multiple Scale, Normalized Scale, and Single Scale. All graphs displayed to this point have illustrated the use of the Normalized Scale, in which all curves are plotted such that each variable occupies its own band. An example of the Multiple Scale mode follows.
In this mode all curves are plotted in the same graph using each individual curve's high and low ranges. Thus the vertical axis has a different scale for each variable.
For the Single Scale mode, all curves are plotted in the same graph using the same range, taken from the minimum and maximum values of the entire set of data. The data shown above in this mode is as follows.
Since there is only one scale in this mode, the range is displayed on the vertical axis. This range is automatically adjusted under zoom conditions as shown below.
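The three scaling modes amount to three different mappings of each variable's values onto the vertical axis. A rough numerical sketch follows; the band layout is a simplification and the function names are illustrative, not part of the product.

```python
def normalized_scale(values, lo, hi, band_index, n_bands):
    """Map one variable's values into its own vertical band (band 0 at
    the bottom of a unit-height axis), as in the Normalized Scale mode.
    Illustrative sketch only."""
    band_height = 1.0 / n_bands
    base = band_index * band_height
    span = (hi - lo) or 1.0  # guard against a zero-width range
    return [base + (v - lo) / span * band_height for v in values]

def single_scale(all_values):
    """Common range taken from the minimum and maximum of the entire
    data set, as in the Single Scale mode."""
    return min(all_values), max(all_values)
```

In the Multiple Scale mode each variable would instead be mapped with its own (lo, hi) pair onto the full axis height.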
Selecting Time
Ranges
You can select time ranges of data. If you call a view from:
View>Single Graph Plots You can mark/delete the test data over the
selected time ranges.
View>Show Regression Ranges or Fit FIR/PEM Models>Exclude Data
Ranges You can mark selected time ranges for exclusion from subsequent
FIR/PEM model fitting (see subsequent section).
View>Show Prediction Ranges or Select Final Models>Exclude Data Ranges You can mark selected time ranges for exclusion from subsequent model validation/prediction calculations (see subsequent section).
Selecting Ranges
To select a range:
1. Move the cursor within the time axis box to one end of the desired time
range. The vertical dash dot line and the date/time in the center of the box show
you where you are. When you have positioned the cursor at one end of the range,
press and hold the left mouse button.
2. Move the cursor to the other end of the desired time range. The second
vertical dash dot line that appears and the date/time in the center of the box
correspond to the other end of the range. Release the mouse button. The selected
time range is shown with a gray background.
3. Repeat these steps to select additional ranges. The resulting operations may
look like the plot shown below.
Reading the Plots
Hold down [CTRL] and use the above procedure to deselect all or part of a
previously selected range.
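Conceptually, selecting ranges is an interval union and Ctrl-deselection is an interval subtraction over sample indices. A sketch over inclusive (start, end) ranges follows; the names are illustrative, not part of the product.

```python
def select(ranges, new):
    """Add an inclusive (start, end) index range, merging any
    overlapping or adjacent selections into one. Illustrative sketch."""
    merged = []
    for a, b in sorted(ranges + [new]):
        if merged and a <= merged[-1][1] + 1:
            # Overlaps or touches the previous range: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))
    return merged

def deselect(ranges, cut):
    """Remove all or part of previously selected ranges (Ctrl-drag)."""
    cs, ce = cut
    out = []
    for a, b in ranges:
        if a < cs:
            out.append((a, min(b, cs - 1)))  # piece left of the cut
        if b > ce:
            out.append((max(a, ce + 1), b))  # piece right of the cut
    return out
```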
The boundary of a selected range may be notched, or the entire range may appear as a dashed rather than solid gray line (see the plot given above). This indicates that more than one data sample is plotted at a single horizontal pixel position on the screen.
Some of these samples are in the selected range and some are not. Zoom to a
finer resolution to see which data samples are in the selected range.
Selecting ranges can be used to delete data, mark data as bad (which can be
subsequently unmarked), or to exclude data from subsequent FIR/PEM fitting or
model validation calculations. The time ranges are remembered separately for
these three cases. You can display and change the ranges at any time. Remember
however that excluding data MUST be done through the appropriate view or by
using the Exclude Ranges button on the appropriate dialog box. It is a common
mistake to use View>SingleGraph Plots to select ranges and to think that these
ranges will be excluded from future calculations.
Data can be marked bad at the global level in a number of ways. As an example,
consider the case shown below.
Here, three windows are opened on the same data. Window 3 shows the block of variables that are of concern. Window 2 will be used as the selection window. As shown, three variables have been selected and dropped into window 1, which will be used for marking. Note that the Show/Hide NaN toolbar button is in a selected state (this is similarly reflected in the Insert pull-down on the main menu). With the ranges selected, the data may be marked bad by choosing the Mark NaN toolbar button. While data marking/unmarking can be accomplished using the Insert menu options, it is more convenient to use the following mark, unmark, and show/hide toolbar buttons.
This action will result in the data shown below. Note that the selected ranges
have been cleared and only the variables displayed in the active or highlighted
window (window 1 in this case) contain the desired marks. Data to be treated as
bad at the global level has the distinct dark gray mark.
Data marks will only be displayed when the normalized scale option is in use and
only when the Show/Hide option is in the selected state. Since the Show/Hide
option is a view option, it can be set independently for each view on a given
document. If this button is selected and the current view is a nontrending view,
then the view will be automatically switched to SingleGraph Data Plots View
using a normalized scale. Data can be unmarked by simply selecting ranges and
choosing the Unmark button or associated menu item. Data within the selected
ranges that is marked will be restored and the selection ranges will be cleared.
Data can be marked bad for regression purposes in a similar fashion as shown
below.
As shown above, there are four distinct methods for displaying data that is to be treated as bad. In one, the data is simply removed (cut/deleted) from the environment. This is the case for CV13: data from index 80 through 105 inclusive has been deleted, and the circles illustrate good data bracketing this range. Data marked as bad at the global level is illustrated by the dark gray bands; global marks are the darkest of any marks. Next, data marked as bad for regression purposes is always displayed as lighter gray crosshatched bands. These marks can only be seen in the Show Regression Ranges View (Show Regr. Ranges as illustrated above). Finally, the selected ranges themselves are displayed as intermediate gray bands. These ranges will always be displayed over the entire vertical height of the plot. There should never be any confusion between the marks even if there is just one variable. In this case the plot may look like that shown below.
Notice that the global and regression marks do not cover the entire vertical height
of the band while the selection ranges do. Selection ranges may be used for
defining global or regression marks, as illustrated above or they may be used
directly to exclude data for either regression or prediction. For the case shown above, the selection ranges would be ignored in a FIR/PEM fit since the v1 option is in effect; that is, the regression is to be performed using the variable selection option and applied to dependent variables only. Alternatively, the block option could be used. In this case only the selection ranges would be used. Since the ranges are the same for all variables being regressed, the data would be collapsed and a single NaN would be used to represent each range. These options can be changed through the Set Overall Options dialog box described in a later section.
Bad data marks at the global level are always applied when data is used for any
operation.
While there is a great deal of flexibility in marking data, the end result may be
counterintuitive especially with respect to regression. When an unexpected result
occurs it is best to consider what happens when data is marked as bad. This can
be done conveniently with respect to the prediction equations defined in Section
3. Any prediction equation containing a bad value must be removed from the
regression set.
For FIR calculations, if a dependent variable is bad, the corresponding prediction equation(s) containing that variable must be removed. This implies that the y value and the corresponding row in the regression matrix are removed. When an independent variable is bad, all rows in the regression matrix containing the bad value must be removed. This implies that a minimum of n rows is removed for a single bad
value where n is the number of response coefficients. If multiple dependent
variables are regressed simultaneously, then rows in the regression matrix
corresponding to all bad values in each dependent variable must be removed.
This implies that different but fixed NaN ranges can result in different answers
depending on which dependent variables are regressed. A dependent variable
regressed by itself can yield a different answer than if the variable were
regressed with other dependent variables.
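The row-removal rule for FIR can be sketched as follows. This is a simplification for a single dependent variable: the construction of the lagged regression matrix itself is not shown, and the function names are illustrative, not the product's implementation.

```python
NAN = float("nan")

def is_bad(v):
    # NaN is the only float value unequal to itself.
    return v != v

def prune_regression(y, X):
    """Drop every prediction equation (a row of X and its y value)
    that contains a bad value, as required before a FIR regression.
    Illustrative sketch only."""
    kept_y, kept_rows = [], []
    for yi, row in zip(y, X):
        if is_bad(yi) or any(is_bad(x) for x in row):
            continue  # equation contains a NaN: remove it
        kept_y.append(yi)
        kept_rows.append(row)
    return kept_y, kept_rows
```

Because each bad independent value appears in several lagged rows of the real regression matrix, pruning removes a block of equations there, which is why a single NaN costs at least n rows.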
For PEM calculations all offending terms must also be removed from the
Jacobian matrix. In addition special considerations are made for dealing with the
noise models through NaN filtering operations. In general, the number of rows removed for a NaN is related to the maximum polynomial order. With PEM, however, only MISO models are supported. As such, even if multiple dependent
variables are selected for a given regression only one dependent variable is
regressed at a time. Therefore NaNs for one dependent variable will never affect
another dependent variable.
Marking Data Bad at the Prediction Level
Data can be marked bad for prediction purposes in a similar fashion as shown below.
Only block ranges can be used to exclude data from prediction calculations. As such, all ranges are collapsed and represented by a single NaN as described previously. Most often, prediction ranges will be set by using the Exclude Data Ranges button on the Select Final Models dialog box. In the prediction results, excluded ranges will be collapsed. Values marked as bad at the global level will simply result in the generation of corresponding bad (NaN) values. These values will NOT be collapsed.
Select View>Scatter Matrix. The scatter matrix appears as shown below. In this view a scatter plot is displayed for each variable as a function of all other variables.
MultiGraph/Scatter
Plots
Select View>MultiGraph Data/Scatter Plots. This will switch the active view to
one of two modes as described next.
MultiGraph Mode
With no variables selected (none selected is the same as all selected) or more than one variable selected, multigraphs are displayed as shown below.
Select just one variable from the Descriptive Info view. Then select View>MultiGraph Data/Scatter Plots. A typical scatter plot is then displayed. Note that this mode must be used to view auxiliary variables in a scatter plot, since auxiliary variables do not appear in any matrix views. When using the MultiGraph/Scatter Plot View, variables can be selected as described for the trend plots. For example, variables can be selected from one window in the Descriptive Info View and dropped into the MultiGraph/Scatter Plot.
Overview
In This Section
Merging between files will result in the loss of all MV/DV auto- and cross-correlation data.
This in turn gives you the ability to combine all or some of the variables, data, and/or models from several different test data files into one model file. This data can come from different testing periods.
Variables, data, and/or models can be moved from one model file to another via the Model Summary view.
If only variables and data are to be moved from one model file to another, use the Descriptive Info view.
6.2
Edit Functions
Depending on the current state of the identification procedure and which variables are selected, different editing options are available as appropriate. The basic edit functions (Cut, Copy, Paste, and Delete) are the standard Windows functions as applied to manipulating Profit Design Studio (APCDE) models and data. To view the edit functions, select Edit from the main menu.
In addition to the standard cut, copy, paste, and delete functions, the following functions are Identifier specific:
Select All For views where selection states are possible, this function can
be used to automatically select all possible variables.
SpecialModCopy Used to copy all models and associated data for a single
dependent/independent pair (submodel). Copy can only be performed from
the Model Summary view when a single submodel is selected and is for the
sole purpose of modifying models within a single document.
Basic Edit
Characteristics
Uniform2User Used to copy the uniform trial solution to the user trials. Uniform2User can only be performed from the Model Summary view and applies to all selected submodels. The copy results in an automatic residual update, and the trials are stored as user selected (see the section on selecting final models). If the final model source for any selected model is User, then the final model is updated with the new set of user models.
Mixed2User Used to copy the mixed trial solution to the user trials. Mixed2User can only be performed from the Model Summary view and applies to all selected submodels. The copy results in an automatic residual update, and the trials are stored as user selected (see the section on selecting final models). If the final model source for any selected model is User, then the final model is updated with the new set of user models.
As shown above, the basic edit functions are disabled because no operations have been performed and nothing is selected. The basic edit functions have the following characteristics.
Cut This function is only enabled in the Descriptive Info view and only
when one or more variables are selected. Cutting the selected variables will
remove the variables and any corresponding models. This information will
then be copied to the internal paste buffer for subsequent retrieval. If an MV
or DV is cut, then an entire column of models is removed from the model
matrix. If a CV is cut, then an entire row of models is removed from the
model matrix. If an Aux variable is cut there will be no impact on the model
matrix.
Copy In the Model Summary view, all information pertaining to the selected models and all associated variables is copied to the internal paste buffer. This implies, for example, that if one submodel is copied, then the model and its corresponding CV and MV/DV will be stored in the internal buffer.
Paste This function is enabled only after a cut or copy operation has been
performed. What actually gets pasted depends on the source and destination
document (file). If the paste destination document is the same as the source
cut/copy document, then contents of the internal buffer will simply be copied
back into the original document. If a copy and paste operation is performed
on the same document, then no apparent changes will be displayed if no
modifications have been made since the copy operation. A warning message
similar to the one shown below will however be displayed in spite of the fact
that the models are being are being overwritten by identical models
If a cut and paste operation is performed on the same document then the data
and models will be unaffected but the relative positions of variables and
models in the matrix will be potentially altered depending upon the insertion
point of the paste. This is one mechanism by which a model matrix can be
easily reorganized.
If the paste operation is performed on another document, then some or all of
the contents of the internal buffer will be copied to the destination document.
If the cut/copy operation was performed in the Descriptive Info view of the
source document, then only variables and data will be copied into the
destination document. If the cut/copy operation was performed in the Model
Summary view of the source document, then variables, data, and models will be copied into the destination document.
If the paste information already exists in the destination document then the
data and/or models will be merged into the destination document as
appropriate. If one or more of the variables to be copied already exist in the
destination document then the data will be spliced together using time stamp
information from both the source and destination documents. Time stamp
overlaps are handled automatically with precedence given to the source file
(destination data is overwritten). If models already exist, then the user will be presented with an overwrite option.
Position or location of the information being copied from the internal buffer
depends on the insertion point in the destination file. The insertion point for
the Descriptive Info view is immediately before the focus rectangle, as shown below.
This focus rectangle corresponds to the last selected variable. Its index is
stored even if the view loses focus and the rectangle is no longer displayed.
If the focus box is unavailable then the insertion point is at the top of the list.
After the paste operation is complete, the focus box will be redrawn
illustrating the insertion point and the view will be automatically scrolled
such that the focus box is clearly displayed.
At times it may be desirable to append variables/data to the end of the
Descriptive Info list. When the focus rectangle is on the last element in the
list and the variable is of class Var, then the insertion point is defined by
the user's response to the following dialog box.
Note that if the last element(s) are of class Aux, then the dialog box shown
above will not be displayed during the paste operation, since all variables of
class Aux are required to be at the end of the list.
If the current view is a model-based view other than Descriptive Info, then
the insertion point is established by the selection state of the Model
Summary view. It is therefore advisable to have the view of the destination
document set to Model Summary when models are to be merged. Consider
the selection state shown below.
The selected model with the lowest CV index establishes the row insertion
point. The selected model on this row with the lowest MV/DV index establishes
the column insertion point. Models are inserted prior to this point. In the
illustration given above, the insertion point is (3,3). If no submodels are
selected, then trailing rows are added for each CV, trailing columns are added
for each MV/DV, and models are inserted at the appropriate intersection.
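The insertion-point rule can be sketched as follows. This is an illustrative sketch of the described logic, not the actual code; representing selected sub models as (cv_index, mv_index) pairs is an assumption.

```python
def insertion_point(selected):
    """Insertion point for a paste into the model matrix.

    `selected` holds the selected sub models as (cv_index, mv_index)
    pairs. The lowest selected CV index fixes the row; the lowest
    MV/DV index selected on that row fixes the column. Returns None
    when nothing is selected (trailing rows/columns are added instead).
    """
    if not selected:
        return None
    row = min(cv for cv, _ in selected)
    col = min(mv for cv, mv in selected if cv == row)
    return (row, col)
```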
Delete This function is enabled in the Descriptive Info view when one or more
variables are selected. It is also enabled in the SingleGraph Data Plot view
when at least one range has been selected. Deleting the selected variables
will remove the variables and any corresponding models. If ranges have been
selected in the SingleGraph Data Plot view prior to the delete operation, only
data corresponding to the ranges will be deleted; variables and models will
remain intact. In this case, NaNs will replace deleted data. If data ranges
are deleted for all variables, then the data is collapsed and a single NaN
replaces the deleted data in a given range. Since deleted information is lost,
this operation will result in the display of the following dialog box.
Special Edit Functions
These functions can be used to effectively replicate and null models in a
given document. To replicate a model, simply select the desired model in the
Model Summary view and select Edit>CopySpecial. Next, select the paste
position and select Edit>SpecialPaste as shown below.
Here the (4,1) model has been replicated in the (3,2) position. If there is an
existing model in the (3,2) location, then the following message will be
displayed.
Models can be deleted in a similar fashion. From the Model Summary view,
select the desired submodels. Then select Edit>SpecialDelete, enter
<Alt Delete>, or use the toolbar button. The following message will be
displayed.
Copy Trial Information
Select the appropriate button to increase or decrease the displayed trials and
corresponding models. Adjust until the selected submodels display the models
and corresponding trials of interest. Select the toolbar button or select
Edit>User2Final. Results of this operation are shown below.
In addition to copying the displayed trials for the selected submodels into
the User Trials, the residuals for any touched CVs are updated. These
results are then loaded into the Final Models. As shown above, the trials and
corresponding models displayed in the Final Model view reflect the user
choices. Also note that the prediction error and Final Model Source have
been updated.
Edit Variable Attributes
Begin editing from the first variable. When Variable Info is selected, a dialog box
shows the basic information that is associated with each variable. This dialog
box is shown below.
Entering or Changing Information
You can enter or change information about the variable type. To make a change:
Use the Type field to indicate if the variable is a CV, MV, or DV.
You can also enter or change parameters that are used to describe/define each
variable. These parameters are as follows:
Name Use the Name field to give a descriptive name to the variable. In
each model view, this name will be displayed in the row or column that is
associated with this variable. If a period is part of the name, then only
characters to the left of the period will be displayed. If you do not enter a
name, the Point field is used.
Point This field is the point or tagname of the variable and is usually
taken directly from the DCS.
Param. This field is the parameter of the variable and is usually taken
directly from the DCS.
Units Use this field to specify the engineering units associated with the
variable.
Special Note:
Each variable in the Profit Design Studio (APCDE) must be represented by a
unique name. Unique names are maintained internally and are established as
follows:
1.
2.
As long as a Point name exists, you can freely modify the Name field without
affecting the uniqueness of a particular variable. You cannot enter variables
with non-unique names, nor can you modify any name such that it results in a
non-unique name.
Use the previous and next buttons to view and change data associated with the
previous and next variables. Note that the variable in question is automatically
selected in the background Descriptive Info view. This selection status
automatically changes as the previous and next buttons are selected. When this
dialog box is closed, the original selection state of the Descriptive Info view will
be recovered.
When no variables are selected prior to the invocation of the dialog box (such as
the case above), it is assumed that all variables are to be potentially edited. To
edit a subset of the available variables simply select the desired variables. When
the dialog box is opened, only this subset will be used for modification. Using the
next and previous buttons will sequentially access only the selected variables.
To modify information on a single variable just double click on that variable in
the Descriptive Info view. When this is done the next and previous buttons will
be disabled.
Document Without Raw Data
If no raw data is present, then the next button will eventually access the end of the
variable list which will be reflected in the Descriptive Info view as a highlighted
empty row. This will result in an empty dialog box such as that shown below. In
this state a new variable will be added once the pertinent information is entered
and the OK, Next or Previous button is selected. Note that if any variables were
selected prior to invoking the dialog box, all newly created variables will
automatically be selected when the dialog box is closed.
When the edit operation is complete, the Descriptive Info view will be
automatically scrolled to display the last variable accessed.
Empty Document
If the document contains no data, then the variable information must be
entered manually through the Variable Info dialog box. As a minimum, the name
and type fields must be entered. In addition, there must be at least one CV
and one MV to proceed with the creation of the final model matrix.
6.3 Copying Models/Data From One File to Another Using Copy/Paste

1. Select View>Model Summary for the source file.

2. Select the models to be copied from the source file. All information for
the sub model is copied, including FIR step responses, parametric step
responses, and final model selections if these exist.

3. Select Edit>Copy, or click its toolbar icon (looks like two sheets of paper).

4. Select View>Model Summary for the destination file. You can determine
where the copied variables are inserted by selecting a sub model.

Any CVs selected in the source file that are not already in the destination
file are inserted just ahead of the CV of the selected sub model.

Any MVs and DVs selected in the source file that are not already in the
destination file are inserted just ahead of the MV or DV of the selected sub
model.
5. Select Edit>Paste, or click its toolbar icon (looks like a sheet of paper
on a clipboard).

Copying Models/Data From One File to Another Using Drag-Drop

Drag-drop can be performed with any number of windows open. All windows do
not have to be associated with identification, but the source and destination
documents must be associated with identification.
1. Arrange the windows so both the source and destination windows are
visible, and select View>Model Summary on the source window. It is also
recommended, but not required, to select the Model Summary view on the
destination window. This will give direct control of the insertion point.

2. Select the models to be copied from the source file. You can select sub
models individually, you can select a row or column of sub models by
selecting a CV, MV, or DV, or you can select all models by clicking in the
upper left corner of the Model Summary view.

3. All information for the sub model is copied, including FIR step responses,
parametric step responses, and final model selections if these exist.
4. Position the cursor over any part of the selection, and press and hold
down the left mouse button. Any movement of the mouse at this point will
cause the cursor to change from the standard arrow to a cursor consisting of
a circle with a line through it. This is the no-drop cursor, which indicates
that the selected models/data cannot be dropped or inserted at this time. If
the cursor is moved over any non-model-based window (this includes any
performance or statistical window), the cursor will remain in the no-drop
state. As soon as the cursor is moved over a model-based window, the window
will automatically be brought to the foreground (top of the stack) and the
design studio will reflect that this window has the current focus. The cursor
will remain in the no-drop state until it is positioned over a legitimate
model matrix. When this is done, the cursor will change to a set of curves
with a plus sign. This is the drag-drop cursor for models and data. If the
mouse button is released, the models/data will be inserted at a position that
depends on the selection state of the Model Summary view as described above.
It is advised to have the destination window in the Model Summary view.
5. Drag the cursor to the destination file. As the cursor is moved over
submodels in the Model Summary view, the selection status will automatically
change in response to the cursor position. Moving the cursor to a boundary of
the model matrix will cause the matrix to automatically scroll in the desired
direction. Release the mouse button when the desired submodel is selected.
Models and variables are pasted as described above. To append to the end of a
row or column, drag the cursor to the area after the last entry (even if it
means dropping it on the scroll bar).
When merging Models, the user can choose to also merge data by selecting
the desired option in the following dialog box.
The test data in the source and destination files can be from different time
periods before the copy operation. If so, the time periods of the test data in
the destination file after the operation are a union of the original time
periods in the source and destination files.
When sample intervals are the same in both source and destination files, the
source data overwrites data in the destination file.
Rearranging Models and Variables Within a Given File Using Drag-Drop

1. Select View>Model Descriptive Info.

2. From this view, select the appropriate variables (select variables using
the standard mouse click/ctrl/shift options).

3. Press and hold down the left mouse button. Any movement will cause the
cursor to change from the standard arrow to a cursor with a curve with a plus
sign. This is the drag-drop cursor for data. The focus rectangle will follow
the cursor as it is moved within the Descriptive Info view. Move the cursor
until the focus is at the desired position. Move the cursor to the very top
of the list and the view will automatically scroll up until the first
variable has the focus rectangle. Move the cursor to the bottom of the list
and the view will automatically scroll down until the last variable has the
focus rectangle.

4. Release the mouse button and the variables will be cut from their old
position and inserted just before the focus rectangle. Models will be
automatically rearranged to agree with the new variable order. The Item
descriptor illustrates variable position in the matrix.
1. Arrange the windows so both the source and destination windows are
visible, and select View>Model Descriptive Info on both windows.

2. From the source window, select the appropriate variables (select variables
using the standard mouse click/ctrl/shift options).

3. Press and hold down the left mouse button. Any movement will cause the
cursor to change from the standard arrow to a cursor with a curve with a plus
sign. The focus rectangle will follow the cursor as it is moved within the
Descriptive Info view. As the cursor leaves the source window, it will change
to the no-drop status. When the cursor is positioned over a window displaying
the Descriptive Info view, the window will automatically be brought to the
foreground (top of the stack) and the design studio will reflect that this
window has the current focus. The cursor will remain in the no-drop state
until it is positioned over the text in the Descriptive Info view. It will
then change to the appropriate drag-drop status. Move the cursor until the
focus is at the desired position. Move the cursor to the very top of the list
and the view will automatically scroll up until the first variable has the
focus rectangle. Move the cursor to the bottom of the list and the view will
automatically scroll down until the last variable has the focus rectangle.

4. Release the mouse button and the variables will be inserted just before
the focus rectangle. Empty models will be inserted as appropriate to agree
with the new variable order. The Item descriptor illustrates variable
position in the matrix.
As a first step in any merge operation, the sample rates in the source and
destination files are compared. If the average difference in percent is
greater than DTTol, then the following dialog box will be displayed.

If the difference is greater than DTTol, which can be specified in the .ini
file, then the sample rates are considered different and the data cannot be
merged. If the difference is less than DTTol, then the sample rates are
considered to be the same and the merge operation can proceed. If the
difference is less than DTTol but the sample rates are not equal, then the
following dialog box appears.
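The DTTol comparison can be sketched as follows. The guide does not document the exact formula, so the symmetric percent difference used here, and the function and parameter names, are assumptions.

```python
def rates_mergeable(dt_source, dt_dest, dt_tol):
    """Decide whether two sample intervals are close enough to merge.

    dt_tol is the DTTol percent tolerance from the .ini file. The exact
    formula is undocumented; a symmetric percent difference is assumed.
    """
    avg = (dt_source + dt_dest) / 2.0
    diff_pct = abs(dt_source - dt_dest) / avg * 100.0
    return diff_pct <= dt_tol
```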
If data is to be copied from one file to another and data for the same
variable exists in both the source and destination files, then the data must
be spliced together. This is accomplished using internal time stamps and the
user-supplied answer to the following dialog box.
Merging Data Marks/Selection Ranges
When data is merged, any and all data marks and selection ranges are also
merged. The time periods of the marks/selection ranges in the destination file
after the operation are a union of the original marks/ranges in the source and
destination files. As with the data itself, the merge is based on the time stamps in
the source and destination files. Marks/ranges will be collapsed reflecting any
collapse in data due to consecutive NaN values in all data.
The following case illustrates this procedure. Here, file w1 is the destination file
and w3 is the source file. File w1 as shown is prior to the merge. File w4 is the
destination file after the merge operation.
File w1 has one CV and one MV. File w3 has one CV, which has the same
tagname as the CV in file w1. It also has one MV, which is unique. The data
in file w3 was collected a week after the data collected in file w1. There
are global NaN marks, regression NaN marks, and range selections in both w1
and w3.
In addition to the marks/ranges, file w3 also has some missing data (actual
bad data) in both the CV and MV. This missing data, however, is not
consistent between the two variables; there are overlaps of missing data in
two regions.
File w4 shows the results of the merge. Since MV1 did not appear in file w3 it
was padded to the last sample time with the last good value available from the
actual data in the destination file (padding option is user definable). MV2 on the
other hand did not exist in the original destination file. This variable was padded
at the front end with the first good value from the data in the source file.
While a week separated the collection of these two sets of data, this time period is
represented by a single discontinuity in file w4. This discontinuity is represented
by a single NaN (for each variable) and is illustrated by the vertical dash-dot line
shown above. Small circles are always displayed around the last good value
preceding a bad value and the first good value following a bad value. Index 333
corresponds to the first good value after the discontinuity and has a time stamp of
midnight on the 17th of September. Index 331 corresponds to the last good value
prior to the discontinuity and has a time stamp of 5:30 a.m. the 6th of September.
Index 332 corresponds to the actual discontinuity. Its time stamp, like all
discontinuities, is one sample interval after the time stamp immediately preceding
it.
In file w3, there are segments of bad data but no discontinuities. This
implies that the time stamps are all consecutive. For the CV, there are bad
values in the following ranges: from 1:34 a.m. to 2:08 a.m. and from 3:43
a.m. to 4:04 a.m. For the MV, there are bad values in the following ranges:
from 1:39 a.m. to 2:04 a.m. and from 3:31 a.m. to 3:55 a.m. Thus, for all
variables in the source file, there are no legitimate values during the times
1:39 a.m. to 2:04 a.m. and 3:43 a.m. to 3:55 a.m. These values can therefore
be removed during any merge or reordering operation as long as no new
legitimate values are added to the data. In the example shown above, there is
no new data during these intervals, so the data is collapsed (note that data
is padded after the collapse). All marks/ranges are subsequently collapsed to
be consistent with the new data as shown above. Thus, the final destination
file has the minimal set of informative data. Incidentally, if the variables
in file w3 were rearranged in any fashion, then the data would be collapsed
in a similar manner.
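The collapse of samples that are bad in all variables can be sketched as follows. This is only an illustration of the described behavior under an assumed list-of-rows data layout, not the product's code.

```python
import math

def collapse(rows):
    """Collapse runs of samples in which every variable is NaN.

    Each run is reduced to a single all-NaN row, which stands for the
    discontinuity; rows with at least one good value are kept as-is.
    """
    out, in_nan_run = [], False
    for row in rows:
        all_nan = all(math.isnan(v) for v in row)
        if all_nan and in_nan_run:
            continue  # drop the repeated all-NaN samples
        out.append(row)
        in_nan_run = all_nan
    return out
```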
Overview

In This Section
Read this section to find out how to modify or manipulate raw data. A suite
of tools is available to perform automated and manual operations on any or
all raw data associated with CVs, MVs, DVs, and Auxiliary (Aux) variables.

Basic Functions
Data operations are categorized in two distinct functions:
Block Manipulations
Vector Calculations
Transformations
- Ln
- Log
- Exponential
- Power
- Special (Polynomial, Piecewise Linear, Valve Characteristics)
Filter
- Exponential
- Butterworth
- Zero Phase
- User
Statistics
Outlier Detection and Removal
Edit
Combine Variables
While these options provide the user with a powerful means of manipulating
the data, the potential for misuse cannot be overstated. Blind use of some of
these techniques can lead to degraded performance. Note that some techniques
cannot even be implemented in the online environment (e.g., the zero-phase
filter).
7.2 Block Manipulations

Invoking Block Manipulations
Data operation functions are enabled only if there is raw data available and
only if the current view is model-based (they will not be enabled when the
current view is associated with performance or statistics measures). To
access Block Manipulations, select Data Operations>Block Manipulations from
the main menu. Using this option, the current view is automatically switched
to the SingleGraph Data Plot view as shown below.
Use the radio buttons to select the replacement option. Select <Replace>. Data
within the selected ranges will be overwritten as shown below.
Value When the user selects this option, the edit box will be enabled. The
single value entered in the box will be used to overwrite the selected
ranges. This value is initialized to NaN.

NaN(s) All data will be set bad for the ranges selected. Data will not be
collapsed if all variables are selected. Under this condition, a warning
message will be displayed. For this operation, use the delete function.

Next Value(s) Use the value immediately following each range to overwrite
the selected data. A warning message will be displayed if there is no
trailing value.

Original Value(s) Data within the ranges are set back to their original
values.
7.3 Vector Calculations

Source and Destination Variables
Source and destination variables have a significant role in the Vector
Calculations. As its name implies, a source variable is used as input to the
calculation function. The destination variable is the result of one or more
calculations that are performed on the source variable's data. In most cases
there is a single source variable. The function to combine variables and the
function to perform special transformations support more than one source
variable. There is never more than one destination variable.

Source variables must already exist in the document. Destination variables
are dynamically created to support the vector calculations. They are
initially temporary variables of class Aux. When computations are complete,
the temporary Aux variable can be saved to a permanent variable of class Var
or of class Aux. The choice is up to the user. These variables can then be
used as source variables in future vector calculations.
Remembering Past Events
Internally, all pertinent information defining the vector operations that
have been performed to obtain a particular variable is stored in a VecTool
object. These objects contain, in addition to the calculation information,
the source and destination information necessary to reconstruct the
calculation. Reconstruction is always done from the destination variable. In
this sense, the VecTool object can be considered loosely bound to the
destination variable.
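The VecTool object itself is internal and undocumented. The following dataclass is only a hypothetical illustration of the kind of information such an object must carry to reconstruct a calculation from its destination variable; every name here is invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class VecToolSketch:
    """Hypothetical stand-in for the internal VecTool object."""
    destination: str                                  # destination variable name
    sources: list = field(default_factory=list)       # source variable names
    operations: list = field(default_factory=list)    # ordered calculation steps

    def reconstruct(self, document):
        """Re-run the stored operations against the source data."""
        values = document[self.sources[0]]
        for op in self.operations:
            values = [op(v) for v in values]
        return values
```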
Invoking Vector Calculations
When this option is selected the current view will be automatically switched to the
Descriptive Info view and the result will be similar to that given below.
Note that the Descriptive Info view is now disabled but visible. The modeless
Vector Calculations dialog box controls all significant functions at this
stage. Virtually all menu functions are now disabled. The selected variable
shown above is the current source variable for the vector calculations. The
initial selection state is based on the selection state of the Descriptive
Info view prior to it being disabled. The initial source variable is the
first variable on the original selection list. If no variables are selected,
then the first CV is chosen. To change the source variable, use the dropdown
list box under Source Variable Selection as shown below.

This selection is reflected in the disabled Descriptive Info view. Note that
when the Vector Calculations dialog box is closed, the Descriptive Info view
will become enabled and the selection state prior to disabling will be
recovered.

The primary functions of the Vector Calculation dialog box are:
Temporary destination variables are created when the Vector Function button is
selected. This variable is initialized with a copy of the data in the source variable.
Save options are enabled only when a temporary destination variable exists. When
the save options are enabled, you can create a permanent variable of class Aux or
Var by selecting either <Save to Aux> or <Save to Var> respectively. The save
operation will use the name displayed in the Name of Destination Variable edit
box. This name is initialized to the source variable name. You can type in any
name you choose.
Vector Functions
When the vector functions are invoked, the view is automatically changed to
SingleGraph Data Plot and the environment takes the following form.
Both source and destination variables are always displayed in the fully interactive
graphic view. The destination variable at this stage is temporary and is always
named VectorCalc. User interaction is restricted to selecting and evaluating
functions and the data. General operation is as follows:
1. Select the general function from the tabbed dialog box.

2. Use the radio buttons and any auxiliary edit or option boxes to select
specific functions.

3. Use the plot to analyze the data. All plot options are fully functional.

4. Select <OK> to store destination data and all current dialog settings.

5. Select <Cancel> to lose this information.

Both options will close the vector function dialog box and return focus to
the Vector Calculation dialog box and descriptive view.
Data given above will be used to begin the discussion on the use of the
transformations. This data is in fact the result of a prior vector operation. It is the
difference between a predicted and actual CV. The variable was constructed by
using the function Combine Variables.
Transformations are as listed below:
Operation and selection of the Standard transformations (first four radio
buttons) should be self-evident. The only restriction imposed on a
transformation in the design studio is that it must be a monotonic function.
It is important to note that transformations that are to be used with Profit
Controller must be monotonically increasing. Select <View/Set Ranges> to
specify the ranges over which you wish to use the transformation in conjunction
with Profit Controller. Online transformations WILL be limited to these ranges.
If the Clamp Ranges check box is selected, values outside these ranges will be set
equal to the range limit (clamped). Otherwise, the value will be set bad (NaN).
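The range-limiting behavior just described can be sketched as follows. This is an illustrative sketch of the clamp-versus-NaN rule, not the online implementation; the function and parameter names are assumptions.

```python
import math

def online_transform(value, f, lo, hi, clamp_ranges=True):
    """Apply transformation f, limited to the stored range [lo, hi].

    With the Clamp Ranges option, out-of-range inputs are set equal to
    the nearest range limit; otherwise the value is set bad (NaN).
    """
    if value < lo or value > hi:
        if not clamp_ranges:
            return math.nan
        value = min(max(value, lo), hi)  # clamp to the range limit
    return f(value)
```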
For the plot shown above, the data ranges from a low of -.444 to a high of
.321. If it is desired to use the transformation over a broader range than
that contained in the data (you can't use a more restrictive range), then use
the View/Set Ranges option.
Here, the ranges have been extended to 1. The stored limits are the values
that will be used in the Online transformation. The current limits are used
when the Evaluate & Plot button is selected to ensure a monotonically
increasing function over these ranges. Once successfully evaluated, the
stored limits are set equal to the current limits.
Select <Evaluate and Plot> to transform the prediction error using the selection
state given above. This results in the following data.
Values that are undefined or out of range are simply treated as NaNs. To save the
transformation and the transformed data select <OK>. If at this stage there is a
problem, the following message box will be displayed.
The EuLo/EuHi values are the ranges established by the data or the ranges input
by the user. These values define the input limits on the variable to be transformed.
This message tells the user that the function is not monotonically increasing over
the stated ranges and that the transformation can NOT be used in the online
environment.
Special Transformations
Construction and use of the special transformation is more involved than the
predefined transformations. To use these transformations, select the Special radio
button and use the dropdown menu to select the desired transformation as shown
below.
Choose Polynomial, then select <Define>. The following dialog box will be
displayed.
While somewhat involved, the fundamental purpose of this dialog box is to
specify a polynomial for use as a transformation. The intent is to take
input/output data at different operating points and plot the output or
dependent variable against the input or independent variable. The polynomial
is of the form

y = c0 + c1*x + c2*x^2 + ... + cn*x^n
(At this stage the polynomial coefficients are not scaled. Once a fit is performed
the coefficients are automatically scaled. Select the Clear button to set all
coefficients to zero and eliminate scaling). Select <xy Plot>. This will force an
update of the plot. The following results will be displayed.
The blue curve is the polynomial transformation and the green curve is its inverse.
The inverse would result if the axes were switched.
At this point the Polynomial transformation has been manually defined. You could
select the accept button and continue from here. Usually it is desired to use data to
define the transformation. To do this another variable can be selected using the
dropdown list box. If another variable is selected and a fit done, the result of this
operation is shown in the following picture.
In this case the polynomial was fit to the observed data. For a good fit, a
higher order polynomial was required. While the fit shown above is relatively
good, the results can't be used with the online RMPCT controller since the
polynomial is not monotonic. When there is a problem with the fit or its
inverse, a message box such as that shown below will be displayed.
Higher order polynomials are notorious for their erratic behavior. To
circumvent this problem, which many times will result in an unacceptable
non-monotonic response, a Monotonic option has been added to the polyfit
routine. This default option results in a polynomial fit that satisfies a set
of gradient constraints. The gradients are calculated using a grid
distributed uniformly over the data range of the independent variable. With
the Monotonic option selected, the results shown above become the following.
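The gradient-constrained fit can be sketched in Python. This is only an illustration of the idea, not the product's polyfit routine: the use of NumPy/SciPy, the SLSQP solver, and the non-negative-derivative formulation on a uniform grid are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_monotonic_poly(x, y, order, n_grid=50):
    """Least-squares polynomial fit constrained to be monotonically
    increasing: the derivative is required to be non-negative on a grid
    distributed uniformly over the data range of the independent variable.
    Returns coefficients c0..cn of c0 + c1*x + ... + cn*x^n.
    """
    A = np.vander(x, order + 1, increasing=True)      # columns x^0 .. x^n
    grid = np.linspace(x.min(), x.max(), n_grid)
    # Rows of D evaluate p'(grid), the derivative of the polynomial
    D = np.column_stack([i * grid ** (i - 1) for i in range(1, order + 1)])

    def sse(c):
        r = A @ c - y
        return r @ r

    start = np.polyfit(x, y, order)[::-1]             # unconstrained start
    cons = {"type": "ineq", "fun": lambda c: D @ c[1:]}   # p'(grid) >= 0
    res = minimize(sse, start, constraints=[cons], method="SLSQP")
    return res.x
```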
To see the inverse fit switch the axes by selecting the second variable in the list
box. This results in the following plot.
This curve is now monotonic and can be used with the online RMPCT controller
if desired. To remove user data, hold the right mouse button down and move
the cursor over the plot. When it is over the plot, the cursor will change to
a cross hair. Release the right mouse button and the user data point closest
to the cursor will be removed. Continue this process until all desired user
points are removed. You cannot remove actual data points in this fashion.
Note: User data can be added only when there are just two variables in the
selection list. While user data is present, variables can neither be added to
nor removed from the selection list.
At this point the data can be saved or further modifications can be made. To see
how the inverse transformation fits the data over the extended range, select the
dependent variable from the selection list (this causes a swap in the dependent and
independent variables as shown below).
Similarly, if the ranges are adjusted in the swapped state and another variable is
selected, the following message will appear.
When the desired results are achieved and the Accept button is enabled, the
polynomial can be saved. When a polynomial is saved, the user data is stored
in the associated VecTool object for possible reconstruction purposes. This
data is never added to any source or destination variable.
In the creation of the polynomial transformation, no restrictions are placed on the
selection of the dependent and independent variables. However, if the
transformation is to be used with Profit Controller then the independent variable
should always be the same as the source variable.
Next, consider further extension of the ranges for the same data. Here the
minimum value of x is set at .75 while the maximum and minimum value of y are
4 and .3 respectively. In this case the transformation exhibits a slight rise at the
high end and a slight dip at the low end as shown in the following picture.
To remove the dip and rise in the transformation more data can be added but it is
usually far easier just to redefine the ranges. The following curves are displayed by
specifying the maximum and minimum value of x to be 2.3 and .1 respectively.
Thus, in the online operation of this transformation (when ranges are clamped),
values of the input variable less than .1 will be taken to be .1 and have a forward
transformation of approximately 0. Similarly, values of the input variable greater
than 2.3 will be taken to be 2.3 and have a forward transformation of
approximately 3.8. For the inverse transformation, values less than .3 will be
taken to be .3 and will yield an inverse of .1. Similarly, values greater than 4 will
be taken to be 4 and will yield an inverse of 2.3.
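The clamping behavior described above can be sketched in Python. This is a hypothetical illustration only, not the product's implementation; the range values match the example in the text, but the identity polynomial used below is an assumption chosen to make the arithmetic obvious.

```python
import numpy as np

def forward(coeffs, u, u_min=0.1, u_max=2.3):
    """Clamped forward transformation: inputs outside the user range are
    clamped to [u_min, u_max] before the polynomial (highest-order
    coefficient first) is evaluated."""
    u = np.clip(u, u_min, u_max)
    return np.polyval(coeffs, u)

def inverse(coeffs, y, y_min=0.3, y_max=4.0, u_min=0.1, u_max=2.3):
    """Clamped inverse transformation: y is clamped to [y_min, y_max] and
    inverted on a dense grid. This relies on the fitted polynomial being
    monotonically increasing over the input range."""
    y = np.clip(y, y_min, y_max)
    grid = np.linspace(u_min, u_max, 10001)
    return np.interp(y, np.polyval(coeffs, grid), grid)

# With the identity polynomial, an input of 5.0 clamps to 2.3,
# and an inverse request of 10.0 clamps to y_max and yields 2.3.
print(forward([1.0, 0.0], 5.0), inverse([1.0, 0.0], 10.0))
```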
As this procedure illustrates, the user is free to define ranges inside actual data.
Note: the FitPoly function uses ALL data, irrespective of the current data ranges.
To recover data ranges (both raw and user input) simply select <Data2Usr> as
shown below.
Actual data ranges (raw and user input) are displayed in the Data Ranges
box by the Max x, Min x, Max y and Min y descriptors. These parameters are
unaltered (when data is present) by modification of the User max and min
values. The User max and min values define the graphical display and the
limits used in the forward and reverse (inverse) transformations. What is
displayed graphically is precisely what is used
for the forward and reverse transformation. For the plot shown above, the forward
transformation will have a minimum and maximum value of .068051 and 3.9793
respectively.
To evaluate the polynomial select <Accept>. When this button is selected, the
PolyFit dialog box will be closed and focus will be returned to the Vector
Functions level with access to the SingleGraph Data Plots. Select <Evaluate and
Plot>. Observe the source and destination variables. Modify the transformation as
necessary. When satisfied with the results, select <OK>. This will allow you to
save the destination variable and all related information. If the cancel button is
selected, then the temporary destination variable and all associated information
will be lost. If the destination variable is saved, then the effectiveness of the
transformation can be observed by using the Scatter Plot view from the main
menu. For this example, the scatter plots are as shown below.
Here Aux1 is the transformed variable. The first plot is essentially the same as the
inverse transformation shown previously in the FitPoly dialog box. The
effectiveness of the transformation is illustrated by the almost linear characteristics
displayed in the second plot.
Piecewise Linear
Choose Piecewise Linear then select <Define>. The following dialog box will be
displayed.
This dialog box is used to define f(u) and the scaling
(u - u_min)/(u_max - u_min). The scaling is simply
defined by entering the expected ranges into the user defined edit boxes. These
values should be the maximum and minimum values that are expected to occur in
the online process. These values will have NO effect on the entered function. If
the physical process is self-similar then this is precisely the desired effect. For
example, two valves of the same type and characteristics but of different sizes
would likely have self-similar characteristics with respect to normalized flow vs.
normalized stem travel. The same normalized function will work for both cases.
The ranges simply need to be redefined.
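The normalized storage and rescaling can be sketched as follows. This is a hypothetical Python illustration; the knot values and ranges are made up for the example and are not from the product.

```python
import numpy as np

def eval_piecewise(u, knots_x, knots_y, u_range, y_range):
    """Evaluate a piecewise linear function stored in normalized (0-100)
    coordinates at an engineering-unit input u. Changing u_range or
    y_range rescales the same self-similar shape."""
    u_min, u_max = u_range
    y_min, y_max = y_range
    un = 100.0 * (u - u_min) / (u_max - u_min)   # normalize the input
    yn = np.interp(un, knots_x, knots_y)          # interpolate between knots
    return y_min + yn / 100.0 * (y_max - y_min)   # back to engineering units

# The same normalized curve serves two differently sized valves;
# only the ranges differ:
curve_x, curve_y = [0, 50, 100], [0, 80, 100]
small = eval_piecewise(5.0, curve_x, curve_y, (0, 10), (0, 1))     # -> 0.8
large = eval_piecewise(50.0, curve_x, curve_y, (0, 100), (0, 20))  # -> 16.0
```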
In the normalized mode all function values are displayed/entered between 0 and
100. To display/enter values in engineering units, deselect the Display Normalized
Data checkbox.
Note: Make sure the ranges are set PRIOR to entering values in engineering units.
Since the transformations are stored in normalized coordinates, a change in the
ranges will result in an appropriate modification of the functional points in
engineering units to maintain a self-similar shape.
The blue line represents the existing piecewise linear segments and the blue circles
represent the segment endpoints (it is actually endpoints that are added to the
function). The cyan curve represents the new shape of the function if the left
mouse button is released over a legitimate point. Release the left mouse button and
the target curve becomes real. Continue adding segments until the function
exhibits the desired shape. After adding a few points the function may look as
shown below.
Select the smooth function check box to smooth the transformation. As long as the
box is checked and the function can be properly smoothed, a magenta curve will
be displayed such as that shown in the following picture.
With the smooth box checked, the function will be continually smoothed as
segments are moved, added or deleted.
If a smoothed curve is present when the Accept button is selected, then the
smoothed function will be used to perform the transformation. To use the actual
piecewise linear profile, turn the smoothing function off before accepting.
With two or more variables available, all fields are enabled. At this point the
Hide/Show Data button is enabled. For now Hide Data is selected. To manually
adjust endpoints use the vertical scrollbar as illustrated below.
To remove a point and its associated segments, hold the right mouse button down
and move the cursor over the plot. When it is over the plot the cursor will change
to a cross hair. As the cursor is moved, a single cyan segment will connect the two
neighboring points of the point to be removed, as shown below.
The point to be removed is the point that is closest to the current cursor position,
which, although not shown in the picture, is located at the coordinates given by the
parameter Target position (here given in engineering units). Release the right
mouse button and the point closest to the cursor will be removed. Continue this
process until all desired user points are removed. For this example the plot will
take the following form.
This technique can be used to enter arbitrary functions of the user's choice. To
specify the function based on data select <Show Data> from the dialog box. The
function can then be shaped for the desired data. For the previous data the function
will look as follows.
In this case either the piecewise linear or smoothed function could be used. To
the piecewise linear function the Smooth Function checkbox must be deselected. A
judicious choice of segments can yield an effective smoothed transformation as
shown below.
In this case there are not enough segments to accurately represent the data.
However, the smoothed function gives a reasonable approximation.
To specify a function that is unrelated to the data select <Hide Data> from the
dialog box. It is possible to specify almost any function. An example follows.
In this particular case the smoothed fit is not very accurate. To improve the
accuracy, simply add more points as appropriate. Add, delete or move points until
the desired shape is achieved. After adding several points to the above function,
the following curve is obtained.
Use of the smoothing function involves a significant calculation (it uses the same
constrained minimization algorithm that the polynomial fit uses to ensure a
monotonically increasing function; here, however, the order is determined in an
iterative fashion). As such, for some complex curves the calculations may take up
to a second. When the hourglass is displayed (indicating calculations in
progress), do NOT try to add, move or delete points.
To see the function without data simply select <Hide Data>. The Accept,
Data2Usr and Evaluate functions work the same for this function as described
previously for the polynomial.
Make sure to select <xy Plot> when changing scale factors; otherwise the
changes will NOT be in effect. When there is data in the environment this should
be obvious. Without data, however, there will be no graphical cue (the text in
the Scale Ranges box will reflect the change).
It is important to realize that when dealing with normalized functions (such as the
piecewise linear and valve characteristic to be shown next) the ranges do NOT
have the same effect as they do with non-normalized functions such as described
in the previous polynomial discussion. With non-normalized functions the ranges
can be considered to act in a clamping fashion as described previously. This is not
the case with normalized functions. Here the ranges can be used to translate,
expand or contract the function relative to the data.
Consider the case where, using the data presented above, the minimum and
maximum values for the independent variable are constrained to be .75 and 2.0
respectively. The corresponding curves for the polynomial and piecewise linear
transformations are shown in the following graph.
Clearly, these curves illustrate the differences between the two approaches. With
the polynomial, all values of x less than .75 will result in a transformed value of
.539 while all values of x greater than 2 will result in a transformed value of 3.8.
The piecewise linear transformation maintains a self-similar function form; hence
all values of x less than .75 will result in a transformed value of 0.0681 while all
values of x greater than 2 will result in a transformed value of 3.87. Corresponding
scatter plots are shown below.
Next, consider the case where x is not constrained but the minimum and maximum
values of y are taken to be 1 and 2.5 respectively. The corresponding curves for
the polynomial and piecewise linear transformations are shown in the following
graph.
With the polynomial, all values of x less than .925 will result in a transformed
value of .1 while all values of x greater than 1.45 will result in a transformed value
of 2.5. The piecewise linear transformation maintains a self-similar function
form; hence all values of x less than 0 will result in a transformed value of 1 while
all values of x greater than 2.5 will result in a transformed value of 2.5.
Corresponding scatter plots are shown below.
Note that in this case the normalized transformation still linearizes the data but the
gain (slope) is directly modified by the change in the range.
Installed Valve
Characteristics
Choose Valve Curve then select <Define>. The following dialog box will be
displayed.
It is the intent of this dialog box to provide a simple mechanism to represent the
characteristics of installed valves. Operation is similar to that discussed in the
previous paragraphs. Here however the transformation is specified by defining the
particular valve characteristics. The characteristics are taken from Perry's
Chemical Engineers' Handbook, Sixth Edition. The characteristics are given by:
Q = L / [alpha + (1 - alpha)L^2]^(1/2)

for linear valves, and for parabolic or equal percentage valves by:

Q = L^2 / [alpha + (1 - alpha)L^4]^(1/2)
In the expression given above Q and L are the fractions of maximum flow and
stem travel, respectively, in percent. The parameter alpha is the ratio of valve head
differential at maximum flow to the valve head differential at zero flow. The
theoretical range of alpha is between 0 and 1. For the transformations alpha is
permitted to vary between .005 and 10.
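The two characteristics can be computed directly from the equations above. The following Python sketch is an illustration only (not the product's code); the function name and argument convention are assumptions.

```python
import math

def valve_flow(L, alpha, equal_percentage=False):
    """Fraction of maximum flow Q (percent) for fractional stem travel
    L (percent), from the installed valve characteristic equations."""
    l = L / 100.0
    if equal_percentage:   # parabolic / equal-percentage valve
        q = l ** 2 / math.sqrt(alpha + (1.0 - alpha) * l ** 4)
    else:                  # linear valve
        q = l / math.sqrt(alpha + (1.0 - alpha) * l ** 2)
    return 100.0 * q

# Both characteristics pass through (0, 0) and (100, 100) for any alpha:
print(valve_flow(100.0, 0.2), valve_flow(100.0, 0.2, equal_percentage=True))
```

Note that with alpha = 1 both expressions reduce to the ideal (uninstalled) curves, Q = L and Q = L^2/100.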
To define the transformation, simply select the valve type and the desired value of
alpha. Remember to select <Update Valve Curve> to refresh the plot and to store
the most current user entered information. When satisfied, evaluate and save
results as prescribed previously.
For the valve characteristics dialog box there is no Data2Usr button. In this
instance the ranges can never be set inside the actual data ranges. If a value is
entered inside a range a warning will be displayed and the entered value will be
reset to the appropriate minimum or maximum data value.
Transformations
without Data
Since there is no data the Block Manipulation option is disabled. Select <Vector
Calculations> and then <Vector Functions> in the normal fashion to obtain the following.
This dialog box defines the only functions available when there is no data present
in the environment. All transformations work as described previously except no
data will be displayed. Hence, for the polynomial no fit is performed. To use this
option the polynomial will have to be manually entered. When the transformation
is defined, simply select <OK>. In this mode the vector calculation dialog box will
appear as shown below.
Note that the only save button available is Save to Src (Source). Thus the
transformation can only be saved back to the source variable. Creating
transformations when there is no data is only meaningful for building online
transformations for Profit Controller.
Filter
Several types of filter calculations are available. Select the Filter tab in the Vector
Functions dialog box. The following options will be displayed.
T(s) = 1 / (τs + 1)^n

Here n is the filter order and τ is the filter time constant in minutes.
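An nth-order exponential filter of this form can be approximated in discrete time by cascading n first-order stages. The backward-difference discretization below is an assumption for illustration, not necessarily the discrete design APC Identifier prints to the message window.

```python
import numpy as np

def exp_filter(y, tau, dt, n=1):
    """Apply 1/(tau*s + 1)^n to samples y taken every dt minutes by
    cascading n identical first-order discrete stages."""
    a = dt / (tau + dt)                 # backward-difference coefficient
    out = np.asarray(y, dtype=float)
    for _ in range(n):
        z = np.empty_like(out)
        z[0] = out[0]                   # initialize each stage at its input
        for k in range(1, len(out)):
            z[k] = z[k - 1] + a * (out[k] - z[k - 1])
        out = z
    return out
```

A constant input passes through unchanged, and a step input rises monotonically toward its final value with a lag set by tau and n.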
T(s) = k / [(s - p1)(s - p2) ... (s - pn)]
When the Evaluate and Plot button is selected to filter the data, the filter transfer
functions for both the analog and discrete designs will be printed to the message
window (This window will be discussed in a subsequent chapter).
Use of the exponential filter is illustrated below. In this case the source variable
has been marked bad over an intermediate region (indices 91 to 114) and at the point
251. The destination variable (named VectorCalc) is the result of a copy operation
on the source variable. As such the painted values appear as actual bad values in
the destination variable.
Results of the filter operation obtained by selecting <Evaluate and Plot> are as
follows. Note that with Vector Operations bad value data marks are displayed even
in a non-normalized plot mode. This is an exception to the general rule.
To specify the filter manually, select <User>, then select <Enter Tf>. The User
Filter Transfer Function dialog box shown below will be displayed.
Discussion of the transfer function and its use is given in detail in a subsequent
chapter. Suffice it to say that the entered transfer function is equivalent to that used
in the Exponential filter. To shift the data back in time 6 minutes, enter a 6 in the
delay edit box and select <Calculate>. This gives the following.
Take the default option and the transfer function is displayed as:
Select <Exit> to close the User Filter Function dialog box and select <Evaluate
and Plot> to obtain the following results.
As a final case illustrating the filter operation, data will be filtered with a user-supplied transfer function with a non-unity gain. In this case the transfer function
is:
T(s) = 0.5 / [(3s + 1)(3s + 1)]
The input and output data for this case are presented in the next plot.
At first glance it looks like data is missing from the second part of the graph. If the
plot is changed to normalized view, the result becomes apparent.
Notice that the two horizontal lines are identical. What happened to the filter in
this case? The top curve is the input to the filter while the bottom curve is the
output. The first half of the plot looks as expected. The input however suffers a
discontinuity. After this time there is no movement in the input. Hence there will
be no movement in the output. The value of the output is arbitrary. Its bias value is
based on the input for the associated segment. Note that it is not related to the
value of the output during prior segments (time before the discontinuity). Hence,
the output value is free to change across discontinuities. This characteristic is
required to handle filtering over general discontinuities as illustrated previously.
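The segment-wise behavior can be illustrated with a sketch in which the filter state is simply dropped at each bad-value discontinuity. This is an illustration of the concept only; the function name and first-order formulation are assumptions, not the actual implementation.

```python
import numpy as np

def filter_segments(y, bad, tau, dt):
    """First-order exponential filter applied independently to each
    contiguous run of good data. The state is re-initialized at the
    start of every segment, so the output level after a discontinuity
    is unrelated to the output before it."""
    y = np.asarray(y, dtype=float)
    out = np.full(len(y), np.nan)
    a = dt / (tau + dt)
    state = None
    for k in range(len(y)):
        if bad[k]:
            state = None                 # discontinuity: drop the state
            continue
        state = y[k] if state is None else state + a * (y[k] - state)
        out[k] = state
    return out
```

For a flat input of 2 before a bad point and 7 after it, the output is 2 before and 7 after: each segment carries its own bias, exactly as in the plot described above.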
Statistics
Mean removal, normalization and information on range, mean and variance can be
obtained by selecting the Statistics tab on the Vector Functions dialog box. When
this tab is selected, the following information is displayed.
When the Evaluate and Plot button is selected, the source data is copied to the
destination variable and the functions corresponding to the selection state of the
radio buttons are applied to the destination variable. Current values reflect the
state of the destination variable.
Use of this function will overwrite the destination variable. This function is not
intended to provide statistics on the results of prior calculations.
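The operations behind the radio buttons amount to mean removal and scaling by the standard deviation. A hypothetical sketch (the function name and return shape are assumptions for illustration):

```python
import numpy as np

def apply_statistics(src, remove_mean=False, normalize=False):
    """Copy the source to a destination vector, optionally remove the
    mean and/or divide by the standard deviation, and report range,
    mean and variance of the result."""
    dest = np.array(src, dtype=float)    # destination overwrites prior results
    if remove_mean:
        dest -= dest.mean()
    if normalize and dest.std() > 0:
        dest /= dest.std()
    stats = {"min": dest.min(), "max": dest.max(),
             "mean": dest.mean(), "var": dest.var()}
    return dest, stats
```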
Edit Data
At this point two options are available: Outlier Detection and Removal, and
Manual Data Manipulation.
Auto Outlier Detection and Removal: This option can be used to remove
statistically unreliable data. Adjust the confidence level used in the detection by
selecting the up or down arrows. The default is usually adequate. Next select the
replacement option. An example using the default options is shown below.
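Conceptually, the detection flags points that fall outside a confidence band and replaces them with values derived from the surrounding good data. The z-score test and interpolation replacement in this Python sketch are assumptions for illustration; the product's statistical test is not specified here.

```python
import numpy as np

def auto_remove_outliers(y, n_sigma=3.0):
    """Flag points more than n_sigma standard deviations from the mean
    and replace them by linear interpolation from the neighboring good
    points (one of the available replacement options)."""
    y = np.asarray(y, dtype=float)
    good = np.abs(y - y.mean()) <= n_sigma * y.std()
    idx = np.arange(len(y))
    # interpolate the flagged positions from the good ones
    return np.where(good, y, np.interp(idx, idx[good], y[good]))
```

Raising n_sigma corresponds to raising the confidence level in the dialog: fewer points are flagged.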
Manual Data Manipulation: This option allows you to edit data in an interactive
fashion.
This function is the one exception to the vector function calculation sequence. In
all other functions the source variable is the input and the destination variable is
the output. Here the operations are performed directly on the destination variable.
This means that you can modify the results of a prior vector calculation by using
the Manual Data Manipulation function.
To modify the results of the previous outlier removal calculations select <Manual
Data Manipulation>. Then select <Alter Data>. The following dialog box will be
displayed.
In the picture presented above, the Manual Data Manipulation dialog box is shown
together with a zoomed in view of the pertinent data.
For this example it is desired to extend the prior interpolation from index 176 to
the encircled point which has a value of .3682 and an index of 189.
To do this it is useful to understand the design and operation of this dialog box.
Index is the position of a particular data point. This index corresponds to the
value displayed in the time axis box of the SingleGraph Data Plot. The first
column of data corresponds to the source variable. It cannot be changed. The
second column of data corresponds to a local copy of the destination variable. This
is the variable that will be modified. Since it is a copy, you can modify it and
select cancel without changing the destination variable.
Replacement options are identical to those described in the section on block
manipulations and will not be repeated here. The replacement strategy is:
1.
2.
3.
Search options have been provided to facilitate the data selection process. These
options are as follows.
Find Index: Enter the index of the data point you wish to find. Get
this index by using the right mouse button in the SingleGraph Data
plot. Select <Find Index> and the data window will be automatically
scrolled to display this index and a focus rectangle will be drawn
around this row.
Find Next: Enter the next value you wish to find. Select <Find Next>.
A string search is used such that if you enter .383 and the actual
value was .383246, the search will be successful and the data
window will be automatically scrolled to display this value and a
focus rectangle will be drawn around this row. Both current and
source variables are searched. Repeated values are skipped. If .383
appears 10 consecutive times, the first occurrence will be found on
the first search while the last occurrence will be found on the second
search. NaN is a legitimate search value. When the end of the list is
reached, the search will prompt to begin again at the top of the list.
Once the current values have been modified, select <Plot Data> to display the
results. At this stage you can still cancel without making any real modifications.
Continue making modifications until you are satisfied, then select <OK>. The
current values are now used to overwrite the destination variable.
For the example problem, enter 177 (we want to use index 176 as an endpoint in
the interpolation) in the index edit box. Select <Find Index>. Select the focus row.
Enter 188 into the index edit box. Select <Find Index>. Ctrl/Shift click in the
focus row. Select <Interpolate>. Select <Replace> and the dialog box has the
following appearance.
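The interpolate-and-replace step performed above amounts to replacing the selected rows with values on the straight line between the two anchor points. A sketch of the operation with made-up data (the function name is an assumption):

```python
import numpy as np

def interpolate_span(dest, i0, i1):
    """Replace dest[i0..i1] with a straight line between the values at
    the anchor indices i0 and i1 (the Interpolate + Replace action)."""
    dest = np.asarray(dest, dtype=float)
    span = np.arange(i0, i1 + 1)
    dest[span] = np.interp(span, [i0, i1], [dest[i0], dest[i1]])
    return dest

# Interpolating between indices 2 and 5 overwrites the interior
# points (99 and -4) with 20 and 30:
print(interpolate_span([0, 0, 10, 99, -4, 40, 0], 2, 5))
```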
Combine
Variables
Add variables to the list using the drop-down selection box. Delete variables from
the list by first selecting the variable in the list box. Then select <Delete
Variable>. As shown above, all Variables in the list box are displayed in the Plot
window. To change an operation, first select the variable in the list box. Then
select the desired Operation Type.
It is also possible to annotate your work. To do this select the User Notes tab on
the Vector Functions dialog box. This gives you access to the following interface.
Simply enter a description of any notes you may wish to keep. You can also look
at notes you made for other variables.
Saving and
Recovering Vector
Calculations
Whenever the temporary destination variable exists, the Save buttons will be
enabled.
Hint: You can make a copy of any variable in the environment by doing the
following:
1.
2.
3.
4.
When you save the results of a Vector Calculation both the destination variable
and an associated VecTool object will be saved. The VecTool contains all
information associated with the vector operation and allows you to easily modify
or reconstruct a prior calculation.
Use the Save Option choices to configure the save procedure. Name of
Destination Variable is the name of the permanent variable to which the
temporary destination variable will be copied. For new vector calculations, this
name is initialized using the name of the source variable. You are free to change
this name.
Save to Aux: With this choice the data will be copied into a variable with the
name specified in the edit box, of class type Aux. If the variable does not exist,
it will be created and a message will be displayed telling you that a variable
with the specified name has been added to the workspace. If it already exists,
and is not the source for another calculation, the following message will be
displayed.
Save to Var: With this choice the data will be copied into a variable with the
name specified in the edit box, of class type Var. If the variable does not exist,
it will be created and a message will be displayed telling you that a variable
with the specified name has been added to the workspace. If it already exists,
then the following dialog box will be displayed.
Overwrite Var and copy to Aux: This option first copies the data
from the existing Var variable to the Aux variable. If the Aux variable
doesn't exist, it creates it. If it does exist, then it asks if it is OK to
overwrite. It then copies the destination variable to the existing Var
variable. It also modifies the VecTool objects as appropriate to reflect
these changes.
Once a save is performed, the temporary destination variable is deleted and the
save buttons are disabled.
Since connections and operations are stored in the VecTool object, it is easy to
reconstruct a Vector calculation. Simply select the variable from the descriptive
Info view and select Data Operations>Vector Calculations. For the Combine
Variable example presented above this will give the following results.
Selecting Yes for the case shown above results in the following.
In this case the four variables that were previously combined are automatically
selected. The first variable in the combination list (CV7) is displayed as the source
variable since there is only one field for this parameter (Usually there will only be
one source variable). Select <Vector Function> to obtain the following results.
This is the last state for this set of calculations. Note that all data contained in
the Vector Functions dialog box is recovered. This implies for example that, if a
polynomial transformation was performed, the polynomial and its last plot state
will be recovered when the FitPoly button is selected.
Since links between variables can become involved, internal checks are made to
prevent the user from inadvertently destroying source variable information. If the
user tries to delete a source variable, then the following message will be displayed.
A similar situation exists if for example you edit the name of a variable that
happens to be a source variable. In this case the message is:
If a source variable is in fact deleted and you try to reconstruct the vector
calculation, then you will get this message.
In some instances you may want to remove the effect of a calculation. Take the
case where a Var variable is filtered and its source is saved as an Aux variable.
You can restore the original variable to its initial state by doing the following.
1.
2.
3.
4.
5.
1.
2.
3.
4.
5.
This Var variable is now returned to its initial state. If it is selected no source is
recognized. If Vector Functions are invoked, then all tabbed dialog boxes are in
their initial state.
A similar situation can exist for Aux variables. When an Aux variable is the result
of a prior calculation and the prior settings are ignored, the Aux becomes the new
source. If, after the calculations are complete, the results are selected to be
stored back into the same Aux variable used for the source, then the following
message box will be displayed.
Since the original VecTool object will be destroyed, information associated with
the calculations is destroyed. Links to any other variables are removed. Hence the
calculations can NOT be reconstructed. The end result of selecting yes is to simply
modify the data with no record of any operations. Note that if the source variable
was not the original Aux variable, then only the data would be overwritten and the
VecTool object would remain intact. In this case the message box would be that
shown previously in the discussion on Save to Aux.
If you try to save any variable to another variable which is a source variable in
another calculation, then the following message will appear.
Selecting Yes at this point will cause the existing data to be overwritten. The
following dialog box will then be displayed.
This option allows the user to keep or destroy the VecTool Object.
Merging Vector
Calculations
This message identifies the source file of the original calculation. It will be
displayed until a new set of calculations is performed and saved in the destination
file. If all the variables involved in the original Vector Calculation are not present
in the destination file, then the following message will be displayed.
If yes is selected, then the missing variables will be displayed as given below.
Missing variables can be merged into the destination file at any time. The display
shown above will reflect the current state of the destination file as variables are
added to or removed from the file. Vector Calculations can be performed at any
time regardless of the state of the destination file.
Be careful when merging data to a variable that is a result of a VecTool
operation. The data merged will NOT be automatically updated with the
calculation. In this instance, the vector calculation must be redone.
Saving
Transformation
For Use with
Profit Controller
Any of the transformations described above can be used with Profit Controller as
long as they are monotonically increasing (only CV transformations are currently
supported in the online environment). These functions and their ranges of
applicability (defined by the user limits entered as discussed above) can be
automatically instantiated into an online module via the Profit Controller Point
Builder.
The Profit Controller Point Builder allows an engineer to enter control design data
or import an .mdl file into a graphical user interface on the PC. Depending on the
options chosen, the Point Builder then generates files that can be transferred to a
TPS system to be used to automatically construct the profit controller points. It
generates the necessary configuration files, too. The application also determines
the scheduling of the various points, and ensures that AM loading is balanced by
distributing the points among various execution cycles. For a full description of
the Point Builder see section 9 of the Profit Controller Designer's Guide.
Any CV transformation saved in a .mdl file is automatically available to the point
builder. While there are many ways to save transformation information for use
with Profit Controller, the simplest is as follows.
1.
2.
3.
4.
6.
Select the default (Copy to Aux and Overwrite Var) option in the Overwrite
Option dialog box
7.
8.
You can also update transformations from previous releases by simply opening
the existing .mdl file in the latest release. Observe the transformation and bounds.
You can modify the bounds and/or transformations and evaluate the results in the
standard fashion. You MUST save the .mdl file before rebuilding the EB/Config
file.
Bound definitions for transformations have been changed between PDS releases
200 and 220. Old bounds are inconsistent and must be updated.
Simply load and save the old .mdl file as described above. Inconsistent bounds
will generate the following message.
While it is good practice to check the bounds and transformation, it is not strictly
required.
Overview
In This Section
Main Functions
Read this section to find out about the main identification functions and specifically
how to set overall options and how to run Load & Go.
To access the main identification functions, select Identify from the main menu. A
drop-down selection list as shown below displays the five main identification
functions: Set Overall Options, Fit FIR/PEM Models, Fit Parametric Models, Select
Final Trials and Load & Go.
Alternatively, you can use the associated toolbar buttons. The main identification
toolbar buttons, which appear in both the standard and detailed toolbars, are:
Only the Set Overall Options and the Load & Go functions are described in this
section. These functions correspond to the first two buttons in the above group.
The other options shown in the pull-down menu are described fully in later sections.
Overall Options
In the APC Identifier there is a large list of options that the user can set. Options
have been logically grouped, first according to function and then according to
complexity.
Each function has its own associated set of user configurable parameters. For a
given function, some parameters need to be configured more often than others.
Dialog boxes are set up to deal with this structure in an intuitive fashion. Each
function has a main dialog box. Sub-dialog boxes can be invoked to allow the user
more and more flexibility for a specific application, depending on the user's
experience and knowledge.
Some parameters or options apply to more than one function or in some cases apply
to all identification functions. These parameters are accessed from the main menu
by selecting Identify > Set Overall Options or by selecting the toolbar button.
8.2 Setting Overall Options
This invokes the dialog box shown below. The Overall Model setup dialog box
contains the highest level options. These options allow the user to specify the
number of trials, model structure (FIR/PEM), model form and initial condition
treatment. It also allows the user to access less-used, high-level options.
Data collection frequency or scan rate is displayed only in this dialog box. This
value may or may not correspond to the sample rate of the discrete time model
obtained by data regression. The sample rate of the discrete time model is always
an integer multiple of the scan rate and this integer value will be referred to as the
compression ratio. Models are eventually saved in the Laplace domain and as such
are not associated with the original data rate.
Detection of model sensitivity is of fundamental concern. Use of more than one
model for a given CV-MV/DV pair provides a reasonable mechanism for
addressing this concern. The number of trials corresponds to the number of
discrete (FIR or PEM) and continuous models that can exist for a given CV-MV/DV pair. Throughout this document, a CV-MV/DV pair will be referred to as a
submodel of the overall model matrix. Increasing the number of trials results in
more models for a given submodel. The number of trials is the same for all submodels in the entire matrix. Be aware that decreasing the number of trials will
result in the loss of those models corresponding to the deleted trials. A warning
message will be displayed if any of these models are in the solution matrix.
FIR/PEM step responses corresponding to different trials are color-coded. Up to ten (10) trials can be specified. The color coding for each of these trials is given below.
Here there are ten trials. As shown above, the trial number corresponds to the settling time and gain (i.e. Trial 1 has a settling time of one minute and a gain of 1, Trial 2 has a settling time of 2 minutes and a gain of 2, etc.). Each response will always have the assigned color designation, irrespective of the settling time and gain. Colors corresponding to the trials are as follows.
Trial 1 - Green
Trial 2 - Red
Trial 3 - Blue
Trial 4 - Neon Green
Trial 5 - Neon Red
Trial 6 - Neon Blue
Trial 7 - Purple
Trial 8 - Olive
Trial 9 - Cyan
Trial 10 - Magenta
Both FIR and PEM model structures can be used for data regression. While FIR is
the default, you can select either by choosing the appropriate radio button. For
information regarding these model structures see the concepts section of this
document.
The target use of the PEM model is for regression sets on stable processes when only one or two independent variables are moving simultaneously.
PEM models are provided with one goal in mind: ease of use. The goal here is to provide a mechanism that allows truly one-step identification. One click on the Load & Go button and that's it (two or three if you are particularly ambitious). If the results are not satisfactory after one try, simply revert to the standard FIR approach. To this extent, it is useful to view the PEM models as a complement to the standard FIR models. Setup for these models will be described shortly.
Initial Conditions and Model Forms
While these choices apply to both FIR and PEM models, they have much more impact when used in conjunction with the FIR models, and to some extent they are even required there. When using PEM models it is strongly recommended to stay with positional form. In fact, if velocity form is used in this case, a message box will recommend a switch back to positional form. Ignore this message if you want ARIMA models; to obtain that structure the form must be set to velocity.
8.3
FIR Setup
Configuring FIR Models
To configure the FIR models, select <FIR Setup>. This displays the following dialog box.
Use the drop down list box to configure and observe model characteristics for the
various trials. Significant parameters are described below.
Max Settle T (Settling Time)
This is the one variable that needs to be set on a process-by-process basis. It is not important that the value be accurate. In fact, it is recommended to enter several (three to five) settling times that span the expected range for one or more CVs. When the resulting step responses are plotted on the same graph, self-similar profiles indicate that the models are reasonable. If a few responses are similar but one or more diverge, then this is an indication that either some of the settling times were too long, or that the process was not sufficiently excited for the full range of settling times entered.
If no responses are similar, or if the step response changes sign at times greater than half the settling time, this is an indication that there is probably no model, or that this model has not been sufficiently excited.
In any case, in the final model selection step (described in a subsequent section), the identifier picks the model with the best long-term open loop prediction relative to the raw data. At this stage, models based on inappropriate settling times are automatically rejected. (Note: if the models corresponding to all trials are bad, then the final step picks the best of the bad models, and the result is still a bad model. These models should be rejected or nulled out before building the controller.)
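The multiple-settling-time idea can be sketched with an ordinary least-squares FIR fit. This is a simplified SISO illustration in NumPy under stated assumptions (the function name, the random binary test signal and the simulated first-order process are all assumptions), not the Identifier's internal algorithm.

```python
import numpy as np

def fit_fir_step(u, y, n_coef):
    """Least-squares FIR fit (SISO, positional form, proper model:
    y[t] is regressed on u[t-1] ... u[t-n_coef]); returns the step
    response, i.e. the running sum of the estimated impulse coefficients."""
    N = len(u)
    X = np.zeros((N, n_coef))
    for k in range(1, n_coef + 1):
        X[k:, k - 1] = u[:N - k]          # column k-1 holds u[t-k]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.cumsum(h)

# Simulate a first-order process (gain 1) excited by a random binary
# signal, then fit two "trials" with different assumed settling times.
rng = np.random.default_rng(0)
u = np.sign(rng.standard_normal(400))
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.8 * y[t - 1] + 0.2 * u[t - 1]

short_trial = fit_fir_step(u, y, 20)      # short assumed settling time
long_trial = fit_fir_step(u, y, 40)       # longer assumed settling time
```

Self-similar curves with final values near the true gain of 1 for both trials would suggest a reasonable model, per the guidance above; divergent curves would flag a settling-time or excitation problem.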
By default, three trial fits are made, each with a different maximum settling time. You can select more or fewer trials. If the settling times turn out to be incorrect, they can be changed and the models rebuilt at any time.
While this option is not as important as the settling time, it is still a parameter of which the user should be aware. Remember that the Positional Form gives good low frequency performance (accurate steady state gain), but is not well suited to non-stationary processes (i.e. processes with drift). In contrast, the Velocity Form can result in some low frequency information loss, but gives good performance for non-stationary processes. The Positional Form is the default. If there are discontinuities in the data that possess significantly different means, or if there is significant drift, the Velocity Form can be used to potentially improve performance.
When using Positional Form, pay particular attention to the last value in the step response curve. The FIR coefficients are unaltered and represent the unsmoothed solution. Since smoothing is not done, the last coefficient may serve as an indicator of the proper form (for non-integrating models). If the last coefficient changes dramatically, then either the settling time is too short or, as in the case shown below, it may indicate the need to switch from Positional to Velocity form.
Prior to release 150, Velocity form was internally disabled for integrators. This
option is now enabled. If Velocity form is selected for integrating CVs, then
performance may be improved by slightly extending the settling time.
Three initial condition options are provided. The default option is Unsteady. For
this option it is assumed that the data is not in equilibrium at the start of the test or
at any breaks (discontinuities) in the data. This option should be used if the initial
conditions are unknown.
If the process is at rest at the beginning of the test, then select Steady at start only.
If the process is at rest at the start of the test and at all breaks (discontinuities) in the data, then select Steady at start/NaN breaks. Use of the initial condition options can be helpful, especially in cases where the data set is severely limited. When the initial conditions are steady, the solution can be modified so that no data is wasted dealing with unknown initial conditions.
8.4
PEM Setup
General Guidelines
With PEM models the goal is simply ease of use. To this end it would be desirable to require no setup. Unfortunately, structure plays an extremely important role in the use of PEM models. If the structure or order of the PEM model is not sufficient to represent the process, then the model will be biased (see the concepts section for a discussion of the characteristics of the PEM models). Bias may result in a completely useless model. Therefore, with PEM models, order becomes the key parameter.
This parameter's effect on model quality can be loosely described in the following manner. When the order is too low, the model will be biased and yield poor performance. As the order is increased, the performance of the model will improve (given that there is reasonable information in the data). At a certain point the order will be sufficient to capture the response of the process. Increasing the order past this point can lead to overfitting the data and may eventually lead to convergence issues. For example, consider the data shown below.
In the plot given above there is one CV, one MV and one DV. If this data is fit with PEM models of first, second and third order respectively, then the following set of step responses will result.
These curves clearly indicate the effect that changing order has on the resultant model. The sensitivity of the response curves to model order is obvious. The green (short), red (medium) and blue (long) curves correspond to the first, second, and third order models respectively. Fitting the same data again using fourth, fifth and sixth order models gives the responses shown below.
Clearly, the sensitivity is virtually eliminated and the models give self-similar responses. The quality of models such as these is high, as indicated by the predictive performance shown in the following plot.
Thus, with respect to the above discussion, a third or fourth order model would be appropriate for this problem. Note that PEM models tend to be much more sensitive to model order than FIR models are to settling time.
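The order discussion can be illustrated with the simplest PEM special case, a least-squares ARX fit. This is a hypothetical NumPy sketch, not the Identifier's algorithm (a full PEM fit uses an iterative search); the simulated process and signal are assumptions.

```python
import numpy as np

def fit_arx_gain(u, y, na, nb):
    """Fit an ARX(na, nb) model  A(q) y = B(q) u  by least squares
    (SISO, unit delay) and return the steady-state gain
    sum(b) / (1 + sum(a))."""
    n0 = max(na, nb)
    rows, targets = [], []
    for t in range(n0, len(y)):
        past_y = y[t - na:t][::-1]        # y[t-1] ... y[t-na]
        past_u = u[t - nb:t][::-1]        # u[t-1] ... u[t-nb]
        rows.append(np.concatenate([-past_y, past_u]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets),
                                rcond=None)
    a, b = theta[:na], theta[na:]
    return b.sum() / (1.0 + a.sum())

# First-order process with unit gain: once the order is sufficient
# (here, first order already is), raising it further leaves the
# estimated gain unchanged.
rng = np.random.default_rng(1)
u = np.sign(rng.standard_normal(500))
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + 0.2 * u[t - 1]
```

With noisy or order-deficient fits, the estimated gains would instead spread apart, mirroring the order sensitivity shown in the plots above.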
As described above, there is a preferred order. Selection of this order can be automated by using, for example, an Akaike Information Criterion (AIC). While this is a sound theoretical approach, it is not the one used here. In many practical cases the data sets are short and not particularly informative. In these cases there is a likelihood that fit quality (loss function) is fairly insensitive to model order, while at the same time the model characteristics are very sensitive to model order. This implies that significantly different models give similar fit performance. Automated techniques can be insensitive to this phenomenon. It will, however, be immediately exposed when the models are viewed as shown above.
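For reference, one common textbook form of the AIC mentioned above is shown below. It is illustrative only; as noted, the Identifier does not use it.

```python
import math

def aic(loss, n_samples, n_params):
    """Akaike Information Criterion, one common form:
    AIC = N * ln(V) + 2 * k, where V is the loss-function value and
    k the number of model parameters.  Lower is better; the 2*k term
    penalizes extra parameters when the loss barely improves."""
    return n_samples * math.log(loss) + 2 * n_params
```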
In practice, model order itself is not of concern. What is important is to choose a
reliable model. The graphical approach illustrated above is an effective way to do
this. General guidance for PEM model selection is as follows.
1.
2.
3. Observe step responses as shown above. Submodel(s) with two or more self-similar step responses indicate that for these submodels the PEM fit is finished. If the predictions are satisfactory, the model(s) should appear in the Final model matrix.
4.
5.
6.
If submodels are not self-similar after two tries, use FIR (if you are convinced a model really exists).
These steps are meant only as general guidelines. In fact, in the true ease-of-use spirit, they are too complex. To make the procedure even easier, the following approach can be used.
1.
2.
3.
Auto Setup
There are two ways to specify PEM model orders. One is to first select the PEM
(Auto Setup Order override) radio button in the Overall Model Setup dialog box.
Then simply scroll the start order to the desired value. This will automatically
modify the orders of all polynomials in the PEM models for each trial. If there are
three trials and the start order is 2, then all polynomials for the PEM model
corresponding to trial 1 will be second order. All polynomials in the PEM model
corresponding to trial 2 will be third order. All polynomials in the PEM model
corresponding to trial 3 will be fourth order. Thus the order selection is extremely
simple using the auto setup method.
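The Auto Setup rule just described amounts to the following mapping (the helper name is hypothetical):

```python
def trial_orders(start_order, n_trials):
    """Auto Setup order assignment: trial k uses order
    start_order + (k - 1) for all polynomials of its PEM model."""
    return {k: start_order + (k - 1) for k in range(1, n_trials + 1)}
```

For the example in the text (three trials, start order 2), this yields orders 2, 3 and 4 for trials 1, 2 and 3.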
Detailed Setup
Orders can also be specified by selecting the PEM Detailed Model Selection radio
button in the Overall Model Setup dialog box. This radio button will enable the
PEM Setup button.
Use of the PEM Setup button should not be necessary in practical applications. If you need to come here and are looking for expedient results, you should switch to FIR models.
If, on the other hand, you have a curious nature, select <PEM Setup> to display the detailed dialog box shown below.
This dialog box allows access to all elements of the PEM model (see the concepts chapter of this document for a discussion of the PEM model). The options and their meanings are as follows:
Include Noise Terms - When this is checked (the default), the noise term C(z)/D(z) e(t) will be included in the model. If unchecked, this term will be ignored.
It can be advantageous to turn off the noise model in cases where the data sets are
short or when there are only a few moves in the independent variable(s).
Index i - When set to zero, values entered for nB, nF and nK will be applied to all MV/DVs. For values other than zero, this is the index of the MV/DV for which nB, nF and nK are set.
nK(i) - Number of delay intervals for the ith MV/DV. This value is equal to the delay divided by the compression ratio. Set this value to zero to obtain a semi-proper model.
Reset Order - When this value is scrolled, all orders are reset to this value.
In most applications there should not be a need to set these parameters. The one exception is the noise term option. A brief discussion follows.
PEM Initial Conditions and Model Form
These options are intended primarily for use with FIR models. However, since they are used in data preparation, they also apply to PEM models. While the initial condition option has little effect on PEM models, the Model Form option can have a serious detrimental effect when used with PEM models. In general, selecting velocity form will result in reduced model performance; let the noise term deal with disturbances. To prevent inadvertent use of velocity form, a message box will be displayed when this form is selected for use with PEM models. Note, however, that to obtain ARIMA models this option must be set to velocity.
8.5
Parameters in this dialog box allow the user to configure the overall identification
procedure. The nine general categories that can be modified are described below.
Calculation Options
This first category contains parameters that allow the user to specify information
relating to the calculation of the quantitative measures indicating model quality.
With the exception of Correlation, these parameters are initially all deselected.
The parameters are:
Correlation - This check box enables both the MV/MV and CV/MV correlation calculations.
Confidence - Select this check box to enable the confidence, noise bounds and null hypothesis calculations and model ranking. When this check box is selected, the following items will be enabled: <Confidence limit>, <Rank option>, <UseConfidenceOnTset> and <Auto null uncertain models>. Set ConfidenceCalcs=1 in the .ini file to initialize this parameter as selected in all new documents.
Rank option - Several internal rankings are performed. The result of each
Auto null uncertain models - When selected, this option uses the recommendation of the user specified rank option to automatically set the parametric model flag to null or auto, depending on whether the recommendation is to reject or keep the model.
Power spectrum - Select this check box to perform the input power spectrum calculations. This function is planned for a future release.
Residual Correlation - Select this check box to perform both auto and cross correlation on the prediction error. This function is planned for a future release.
Data Options
Data Scaling
Null Model Treatment
The user-configurable options pertaining to the data that serves as input to the regression routines are as follows:
Proper models only - Proper FIR models are those in which all terms on the right hand side of the prediction equation correspond to sample times prior to the output. Semi-proper implies that one or more terms on the right hand side of the prediction equation correspond to sample times equal to the time of the current output. Choose this option to force proper models to be used in the regression.
Semi-proper models only - Choose this option only if the original sampled data is semi-proper. While this condition should seldom occur, it is possible when the data is severely undersampled (i.e. a process that appears to have no dynamics, or a process with a small time constant that is sampled relatively slowly).
In many instances the lack of a causal effect between CVs and MV/DVs may be known a priori. In these instances, the models can be set to null from the very start of the identification process. The following two options for null model treatment are provided:
Regression Selection Options
Set to zero after regression - With this option, the identification problem is cast as an unconstrained two-norm minimization problem. After the problem is solved, the results for the null models are simply set to zero.
There are two basic options available for selecting segments of data to be treated as bad values in the regression calculations. These options, described in a previous section of this document, are:
Block for all Dep Vars - With this option, ranges are selected and these ranges are applied to all variables used in the regression. All values within the time range (inclusive) are set bad for any variable being regressed. Since all variables are bad for each range selected, the data is collapsed such that each range to be excluded is represented by a single NaN for each variable.
One for each DepVar - With this option, data can be excluded for each variable on an individual basis. Display of this type of selection is different from that used for Block selection to avoid any ambiguity. This category supports an additional option:
1.
2.
Several options pertain only to the PEM models. Of these options, some will be used infrequently, if ever. Others may need to be used more often. The general options are:
Search on Start Order - This is a flag to enable the search for the optimal order of the high order ARX model used as a first step in generating initial estimates. This flag only has meaning if the UsePfxIC parameter in the .ini file is set to 1. When this option is unchecked, the order of the ARX initialization is based on PfxExpRed. Initialization of this option in new documents is controlled by the AICSearch parameter in the .ini file.
Auto Check Noise Mod - Noise terms enable the PEM models to be a very effective identification tool. There is, however, no magic here. Use of the noise terms does NOT ensure that unmeasured disturbances will be automatically accommodated in all cases. In fact, the search must converge to a reasonable noise model for the deterministic portion of the model to be acceptable.
When there is reasonable information content in the data, the noise terms can be used to significant advantage. If, however, the information content is low, then the noise terms may in fact be ineffective. While these are typically cases where the models are prone to being suspect, it may be the only data that is available. In these cases, the most appropriate model can be easily selected by evaluating models both with and without noise terms.
While the user is free to turn the noise terms on and off, this is somewhat contrary to the ease-of-use spirit with which the PEM models are intended. To address this issue automatically, it is recommended that the user select the AutoCheckNoiseMod option and still use Load & Go. When this option is checked, the search automatically evaluates the effectiveness of the noise terms and chooses the most appropriate model. This option can be initialized as checked in new documents by setting NoiseModCheck=0 in the .ini file.
Factorization Options
Auto Delay Flag - Long dead times will likely cause problems for PEM models. Set this flag true to perform an initial dead time estimation prior to the PEM calculation. The current estimator is overly simplistic, and the intent is to extend it using a correlation approach. Do NOT set this flag true for the general case; only set it if you are sure there is a long dead time.
Max Iter - This defines the maximum number of iterations that are allowed in the search procedure. When set to zero, no search will be performed and the resultant model will be the initial estimates. Some model forms, such as ARX, don't require a search; for these forms this parameter has no meaning. If the procedure has not converged after Max Iter iterations, an appropriate message will be displayed in the message window. Note that in many cases the resultant model will be effectively converged.
Search tol - When the cost function drops below this value, the search terminates.
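The interaction of Max Iter and Search tol can be sketched as a generic search loop. All names here are hypothetical; the actual search algorithm is internal to the product.

```python
def run_search(step_fn, cost_fn, theta, max_iter, tol):
    """Generic iterative search skeleton: stop as soon as the cost drops
    below `tol`; otherwise give up after `max_iter` steps.  With
    max_iter == 0 the initial estimate is returned unchanged, mirroring
    the behavior described above."""
    for _ in range(max_iter):
        if cost_fn(theta) < tol:
            return theta, True            # converged
        theta = step_fn(theta)
    # Not formally converged, though the result may still be usable.
    return theta, cost_fn(theta) < tol
```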
8.6
Load & Go
Selecting this function invokes the following dialog box for performing Load and Go calculations. As shown below, the detailed toolbar has been enabled. Enabling of the toolbar and status bar is at the user's discretion. Both modes will be illustrated in this document.
Default Model Settings
The Identifier has preset defaults for the FIR/PEM, parametric, and final models. Load & Go uses these defaults. Before accepting the defaults, check to see whether you're using an FIR or a PEM model. Then check the FIR/PEM, parametric, and final model drop down menus to make sure the settings are appropriate. If they are not, the Load & Go procedure should not be used; please see sections 9, 10 and 11.
Overview
In This Section
About the FIR Model
Read this section to find out how to set FIR/PEM options, identify the sub models,
and view the FIR/PEM model summaries.
FIR step response models are obtained by integrating the finite impulse response
coefficients. These models represent the response of a dependent variable (CV) to
a step change made to an independent variable (MV or DV).
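"Integrating the finite impulse response coefficients" is just a running sum; a minimal sketch:

```python
import numpy as np

def step_response(impulse_coefs):
    """FIR step response: the discrete integral (running sum) of the
    finite impulse response coefficients."""
    return np.cumsum(impulse_coefs)
```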
If input signals have been designed properly, then FIR models can result in
unbiased estimates, even in the presence of colored noise in the test data, and do
not require structural information about the process dynamics.
FIR results typically have a high variance, evidenced as kinks or wiggles in the
FIR step response that would not be reproduced if a different set of test data were
used for another identification calculation.
This high frequency behavior is eliminated by a second set of calculations
(described in Section 10) in which parametric models are fit to the FIR step
responses.
While there are several FIR options available, there are only two parameters that typically ever need to be adjusted: the settling time and the model form. These parameters were discussed in the previous section describing the Overall Options. Set these parameters for the application at hand, and review and, if desired, set any other overall option before doing identification.
After the overall options have been reviewed/set, identification can begin. These
options (especially the settling time and model form) can be adjusted at any time
during the identification process. This offers essentially unlimited flexibility.
Settling times and model forms can be changed every time an FIR model is built.
By building individual CVs, each CV can have different model forms and/or
settling times. By using Lock Model and other options (described in this section)
individual sub models can have different forms and/or settling times.
About the PEM Models
As described previously, the goal here is ease of use. Under the above conditions, Load & Go is the preferred option for building PEM models. These models can, of course, be built using the Fit FIR/PEM Models approach described in this section. In fact, selection of CVs and MV/DVs must be performed prior to using either approach.
9.2
Procedure
Like all model views, the FIR/PEM model matrix view shows information for each sub model in a two-dimensional matrix of submodel boxes. The MVs and DVs are the columns of the matrix and the CVs are the rows.
Fit the FIR/PEM models to the data using the default options by clicking [Fit FIR] or [Fit PEM] on the Fit FIR/PEM Models dialog box. To modify the default options, select the appropriate buttons on the main dialog box, or double click in the appropriate areas of the FIR/PEM model matrix as described in the following paragraphs.
This button can be used to return to the FIR/PEM model matrix view, as shown above, at any time. If the Fit FIR/PEM Models dialog box is displayed and the user selects another view (e.g. Single Graph Data Plots), then the FIR/PEM model view can be restored by simply clicking the Show & Select button.
Set the parameters shown above for all selected submodels. The parameters should be selected for the following conditions:
Null SubProcess - Check this box if there is no physical way that the independent variable (MV or DV) of the selected sub model can affect the CV of the selected sub model.
When this button is selected, two actions occur: an Options per Sub model dialog box is displayed, and a focus box (colored outline) is drawn around the sub model corresponding to element (1,1) of the matrix (i.e. the sub model whose parameters are to be potentially changed).
Alternatively, the Options per Sub model dialog box can be invoked by double
clicking anywhere in the text field (except on the trial descriptor) for any desired
sub model. In this instance the focus box is drawn around the sub model from
which the dialog box was invoked. The Options per Sub model dialog box shown
below allows the user to change information for one sub model at a time.
As illustrated above, the selected sub model is indicated with a highlighted frame in the two-dimensional FIR/PEM model matrix view. Its CV and MV/DV indices are also shown on the dialog box.
Move to a different sub model by using the Next MV/DV or Next CV buttons. The focus box and the CV and MV/DV indices change accordingly.
Null SubProcess Same as described above
Integrating SubProcess Same as described above
parametric model for the selected trial are retained. Once the FIR/PEM submodel is locked, it is NOT altered upon rebuilding the CV models. When a CV that has one or more locked sub models is refit, the following occurs:
I.
II. Deconvoluted data and inputs corresponding to all non-locked models are used to regress the non-locked FIR/PEM models.
III.
Use this button to independently adjust settling times for a given CV (this option pertains only to FIR models). Select [Set Options per MV/DV] to invoke the Options per MV/DV dialog box shown below. For each trial for a given MV/DV, reduce the settling time if this sub process has a shorter settling time than the maximum specified for the trial set in the Overall Options dialog box.
As illustrated below by the focus boxes, changing this parameter potentially affects an entire column of the model matrix. Different reduced settling times for sub elements of each CV can be easily accommodated by building the CVs independently (i.e. fitting the CVs one at a time).
Excluding Data From the Regression
At this point, data can be excluded from the FIR/PEM regression calculations
using two different approaches:
Block Selection - With this option, ranges are selected and these ranges are applied to all variables used in the regression. All values within the time range (inclusive) are set bad for any variable being regressed. Since all variables are bad for each range selected, the data is collapsed such that each range to be excluded is represented by a single NaN for each variable.
Variable Selection - With this option, data can be excluded for each variable on an individual basis. Display of this type of selection is different from that used for Block selection to avoid any ambiguity. Here, each cross-hatched range will be painted as bad values at the time of regression. This category supports an additional option:
Data marked as bad at the global level (in the Single Graph Data Plots view) is also displayed in this view (and any graphical view) whenever the Show Bad Data option is selected. Global marks, however, cannot be altered in this view. These marks are nevertheless applied at the time of the regression.
This view operates in a fashion almost identical to the Single Graph Data Plots view. The title for this view is Show Regr. Ranges and will always be displayed in the lower right portion of the vertical margin. This title will have a red superscript b, v1 or v2. The superscripts b and v designate block and variable selection respectively, while the 1 and 2 imply that marks are applied only to dependent variables (1) or to both dependent and independent variables (2). The actual ranges used in the regression will correspond to the value of the superscript at the time of the regression. To change the method used for excluding data, modify the Regression Selection Option in the Overall Model Setup Options dialog box described in the previous section.
To Select Ranges, do the following:
Move the cursor within the time axis box to one end of the desired time range. The vertical dash-dot line and the date/time in the center of the box show you where you are. When you have positioned the cursor at one end of the range, press and hold the left mouse button.
Move the cursor to the other end of the desired time range. The second vertical dash-dot line that appears and the date/time in the center of the box correspond to the other end of the range. Release the mouse button. The selected time range is shown with a gray background.
Hold down CTRL and use the above procedure to deselect all or part of a
previously selected range.
Remember that the data that is grayed is excluded from the regression
calculations.
Once the ranges have been selected, the data can be marked/unmarked for individual dependent variables and displayed using the toolbar buttons. (See section 5 for a detailed discussion on selecting ranges and marking/viewing data.) Alternatively, the ranges can be used directly for exclusion applied to all variables used in the current regression. The choice is entirely up to the user and depends on the specific application. The method chosen can, however, have a significant impact on the results. More on this topic appears at the end of this section.
Fit FIR/PEM Models
At this point, either choose [Show and Select Vars] from the FIR/PEM dialog box (this automatically switches the view to the FIR/PEM model matrix view) or, if the dialog has been closed, select Identify > Fit FIR/PEM Models from the main menu.
Select the CVs and MV/DVs for which models are to be built. Do this by clicking
on the desired CVs in the normal fashion. Click in the far left column where the
CVs are described. Similarly, for MV/DVs click on the top row where the
MV/DVs are defined. The rows corresponding to the selected CVs and columns
corresponding to the selected MV/DVs are highlighted. Click in the upper left
corner of the model matrix to select the entire matrix.
Next, click [Fit FIR] or [Fit PEM], depending upon the current selection state. This initiates the FIR/PEM model identification calculations.
Model Example
Model Descriptors
The FIR/PEM model matrix view shows the model information for each sub
process, including a plot of the step response for the selected trial. Several
descriptors are displayed for each sub model.
Trial - Indicates the model index corresponding to the user specified settling time (or, in the case of PEM, the internally calculated settling time) for which model information is displayed. (i.e. If the user specified three settling times of 60, 90 and 120 minutes, then trial 1 would correspond to all model information related to the 60 minute settling time, trial 2 would correspond to the 90 minute settling time, etc.) This descriptor can be double-clicked to select different trial information.
default information.
FIR/PEM Order - For FIR, this field defines the number of FIR coefficients used in the model. For PEM, this field defines the PEM order. If Auto Setup is used, this is the order of all model terms; otherwise it is the order of the term with the largest polynomial.
Gain - Steady state gain of the parametric model. Set to the last value of the FIR/PEM step response curve at completion of the FIR/PEM calculations when no transfer function exists.
Settle T - User specified settling time for an FIR model, or internally calculated settling time for a PEM model, for the indicated submodel.
TfSettle - Settling time of the parametric model. TfSettle > 1.5 * Settle T implies significant extrapolation and indicates potential deficiencies. In these cases both Settle T and TfSettle will be backlit in blue to bring this extrapolation problem to your immediate attention. If TfSettle > 2 * Settle T in the Final model, then this matrix cannot be used in subsequent controller build operations.
FIR Form (PEM Form) - Gives the form of the source FIR or PEM model. Can be Positional (Pos), Velocity (Vel) or Unknown (UK).
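The TfSettle thresholds described above can be summarized as follows (a hypothetical helper mirroring the stated rules, not the product's API):

```python
def tfsettle_status(settle_t, tf_settle):
    """Apply the extrapolation rules described above:
    TfSettle > 2.0 * Settle T -> Final model cannot be used to build
                                 a controller;
    TfSettle > 1.5 * Settle T -> warning (fields backlit in blue);
    otherwise no extrapolation concern."""
    if tf_settle > 2.0 * settle_t:
        return "reject"
    if tf_settle > 1.5 * settle_t:
        return "warn"
    return "ok"
```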
Checking Trial Dependent Information

To check model information pertaining to different trials, from the main menu click View > Trials > Change All to bump the trial number for all sub models up or down.
5/01
227
Similarly, trial information for a specific sub model can be accessed by double
clicking on the Trial descriptor. This invokes the Displayed Trial dialog box
shown below. In addition, the model box that was double clicked has a focus box.
Use the pull down menu in the Displayed Trial dialog box to change the displayed
model information for the sub model of interest. The displayed trial is
remembered by all model views. Thus if the value is changed in one view, then all
other concurrent views will reflect this change.
One of the most useful views is the FIR/PEM Step Response View. Select
View>FIR/PEM Step Responses as shown below. This is the same view that will
automatically be displayed at the conclusion of the FIR/PEM calculations.
This selection displays the step response plots for the FIR/PEM models for all
trials as follows.
When using PEM models, multiple trials can be used to effectively select
models/orders as described in the previous chapter on Overall Identification
functions. When using FIR models, multiple trials correspond to multiple settling
times and the FIR Step Responses can be used to indicate the goodness of the submodels. Since the different settling times for a given submodel effectively result
in a perturbation to the solution matrix, these curves reflect sensitivity problems as
discussed in the concept section of this document.
All self-similar curves (exclusive of variance) indicate little sensitivity and are a
first indicator of a reasonable model. Some self-similar and some divergent curves
indicate a potentially reasonable model but some sensitivity. Many times this is
caused by a settling time specified outside the power range of the input signal (i.e.
continually increasing the settling time for a fixed input signal of finite power band
with noisy data eventually results in a divergent step response). Sometimes this is
caused by a settling time that is too short. If adjustment of the settling time results
in self-similarity, then this indicates a potentially reasonable model.
If the sensitivity is low and the model prediction good (see section on selecting
final trials), then the model usually can be used with a high degree of confidence.
If the sensitivity is high, then the model should NOT be used. Under some
conditions, nulling of the model is all that is required. Under others, more
information (data) may be necessary. Sensitivity is usually caused by poor signal
design or by adverse test conditions, and in both cases it indicates that the model is
not reliable.
Poor signal design is usually the result of correlated inputs and/or insufficient
power spectrum. Unfortunately, the sensitivity of the regression matrix is related
directly to both of these variables. As the correlation increases, the covariance
matrix becomes more poorly conditioned and the sensitivity of the regression
matrix increases. Similarly, if the power is too low over the frequency range of
interest, then the covariance matrix will again be ill-conditioned. If the signal is
not persistently exciting, then the covariance matrix becomes singular. Sensitive
response curves imply that large changes in the models have relatively small
effects on prediction errors and hence these models are unreliable.
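The link between input correlation and sensitivity can be demonstrated numerically. The sketch below is illustrative only (the variable names and the 0.99 correlation level are our assumptions); it shows the condition number of the scaled regression matrix growing by orders of magnitude as two inputs become nearly collinear:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
u1 = rng.standard_normal(n)
u_ind = rng.standard_normal(n)                     # uncorrelated second input
u_cor = 0.99 * u1 + 0.01 * rng.standard_normal(n)  # highly correlated second input

def regression_condition(a: np.ndarray, b: np.ndarray) -> float:
    """Condition number of the (scaled) regression/covariance matrix X'X / n."""
    X = np.column_stack([a, b])
    return float(np.linalg.cond(X.T @ X / len(a)))

good = regression_condition(u1, u_ind)  # near 1: well conditioned
bad = regression_condition(u1, u_cor)   # orders of magnitude larger
```

As the second input approaches a copy of the first, the covariance matrix tends toward singularity, which is exactly the loss of persistent excitation described above.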
Even with proper signal design, the FIR/PEM step responses may exhibit sensitive
behavior. If there is no causal relationship for the CV and MV/DV pair, then
essentially random response curves would be expected. In this case, no model
exists. Hence the model can and should be nulled and no further issues need to be
addressed.
Another more troubling possibility is the result of adverse test conditions. The
concern here is that there is a causal relationship for the CV and MV/DV pair, yet
there is still a sensitivity problem. This condition is possible even for properly
designed experiments and even when all modeling assumptions (i.e. linearity,
stationarity, etc.) are satisfied. Under these conditions, a theoretically unbiased or
accurate (insensitive) model would be expected only in the limit as the length of
the test goes to infinity. Since this is not a practical possibility, the issue here is the
limitation of a finite duration test. Errors in the model are proportional to the
power of the disturbances and inversely proportional to the input power.
Sensitivities due to adverse test conditions indicate that the model should not be
used and that more data is required. Under these conditions, attention should be
focused on minimizing or eliminating disturbances to the extent possible and
making sure the amplitudes of the input signals are large enough to move the CVs
outside the noise and/or disturbance bands.
9.3 Statistics
Background
At the end of the previous section, guidelines were presented for interpreting FIR
results. To complement and enhance this information, the APC Identifier also provides
statistical information relative to the MIMO models. Information is provided in
two general areas: signal content and confidence data on the individual FIR
estimates.
Signal design is by far the most important aspect of the identification process. The
Identifier automatically provides key information as to the quality of the signals
used to create process models. This information is presented in terms of easy to
interpret plots depicting both auto and cross correlation functions and power
spectrum. These plots should always be reviewed before generating models to be
used in any controller. Problems with information content (which will be evident
in the correlation and power plots) will invariably cause problems or at least
concerns with the resultant models.
While signal information is germane to any type of model identification, with the
FIR structure it is possible to generate additional information pertaining directly to
the model itself. This additional information is provided in terms of statistical
estimates of noise bands associated with individual model coefficients. Based on
user specified probability levels, standard null hypothesis tests can be evaluated to
determine if coefficients are in fact distinguishable from the noise present in the
data set. This information, summarized in the Null Hypothesis or Confidence
View, can be used to detect causal relationships between inputs and outputs in a
straightforward analytical fashion.
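As a rough sketch of such a null hypothesis test (the helper names and the 95% z-value are illustrative assumptions, not the Identifier's internals), a coefficient counts as significant when its magnitude exceeds its noise band:

```python
import numpy as np

def significant_coefficients(coeffs, stderr, z: float = 1.96) -> np.ndarray:
    """Boolean mask of FIR coefficients distinguishable from noise.

    A coefficient passes when |b_i| exceeds z * stderr_i, i.e. its value
    lies outside the noise band implied by the chosen probability level.
    """
    return np.abs(np.asarray(coeffs, float)) > z * np.asarray(stderr, float)

def nnht_pass(coeffs, stderr, z: float = 1.96) -> bool:
    """Non null hypothesis test: at least one coefficient is significant."""
    return bool(significant_coefficients(coeffs, stderr, z).any())
```

A sub model for which `nnht_pass` fails for every trial would correspond to an empty plot box in the Confidence View described later.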
At first, one would expect that the confidence information discussed above is all
that is needed for accepting or rejecting models. Unfortunately, this information
only specifies if the coefficients are in fact statistically significant. It must be
remembered that the identification objective is NOT to simply fit the data but to
obtain the causal effect between inputs and outputs in spite of both deterministic
and stochastic disturbances. To this end, heuristics capturing practical experience
have also been incorporated into the analysis.
As discussed previously in Section 9.2, the self similarity of unsmoothed FIR step
response curves is correlated in a qualitative sense to the goodness of the model.
This concept is extended by utilizing noise estimates on the individual coefficients
to generate noise bands on the step response curve. Hence in the Statistical
Summary View, the models are represented in terms of step response bands rather
than an individual step response curve. The bands visually display the degree of
separation. While it is true that, in this framework, a single trial will have a
separation band, it is still highly recommended to use more than one trial. When
more than one trial is used, the band will expand to encompass all models. The
upper bound is the maximum value of the step response plus the maximum noise
bound for all trials. The lower bound is the minimum value of the step response
minus the maximum noise bound for all trials. A separation factor indicating the
degree of separation is also calculated and displayed.
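The band construction described above can be written down directly. In this sketch the array layout and names are our assumptions; each trial contributes a step response and a pointwise noise bound:

```python
import numpy as np

def separation_band(steps, noise):
    """Step response band across trials, per the rule in the text.

    steps, noise: shape (n_trials, n_points).  Upper bound: pointwise maximum
    step response plus the maximum noise bound over all trials; lower bound:
    pointwise minimum step response minus the maximum noise bound.
    """
    steps = np.asarray(steps, float)
    noise = np.asarray(noise, float)
    max_noise = noise.max(axis=0)
    return steps.min(axis=0) - max_noise, steps.max(axis=0) + max_noise
```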
Experience has shown that in some situations grossly over or under estimating the
settling time can cause undue separation even with relatively small noise estimates.
In these instances it may be possible to in fact have two or more step responses
that are relatively self similar, which might imply the existence of a causal
relationship. To preclude the omission of this condition, a sensitivity factor is also
calculated. This factor is calculated based on the two trials which exhibit a
minimum separation. Step response bands (excluding noise) for the two trials
corresponding to the sensitivity factor can also be displayed in the Statistical
Summary View.
Guidelines
To begin, a few basic guidelines will be presented. After these are discussed, an
interpretation of the model rank will be summarized. Following this
summarization, an overview will be given to highlight the use and interpretation of
the statistical results.
When setting overall parameters it is recommended to consider the following:
1. For a given input signal, the auto correlation function will always be
worse for the positional form than it is for the velocity form. For
properly designed signals this difference can be minimized.
2. Due to the factors given above, it might be tempting to simply use the velocity
form in all cases. While this will in general improve the noise bounds, it will
also usually result in some information loss and hence, some model
degradation can be expected.
It is recommended to always start with positional form. When this gives
satisfactory performance, confidence in the results should be high. Always
check the correlation curves. For properly designed signals, these should be
within target ranges. If the correlation curves are not within or close to the
target ranges, then the velocity form must be used to obtain reliable bounds.
In some cases, when the signal design is tentative, it is possible to establish the
causal relationships between input/output pairs using velocity form first and
then rebuilding using positional form. In these instances it is necessary to
perform conventional checks (i.e. spikes in step response, predictions and
residuals) to ensure that the positional form is justified.
Settling Time  While the user specified settling time does not have to be
particularly accurate, it should range from a low of 2 Tau to a high of 6 to 8 Tau.
It is better to over, rather than under, estimate the settling time.
For mediocre input signals, the standard heuristic tends to be overly severe. Thus
for integrators, the overall ranking is modified based on the overall average
sensitivity of all trials. The ranking will be modified by at most one unit (see next
section for interpreting the model rank).
Switching from positional to velocity form to improve results with integrators is
NOT recommended. While there have been incidences when this approach has
been advantageous, the recommended approach is to design information rich input
signals and to use the positional form if at all possible.
Complex Poles  In some instances the FIR step response model may exhibit
oscillatory behavior. While it is possible the actual process does in fact have this
characteristic, it is much more likely, especially for industrial processes, that the
phenomenon is due to process noise or poor signal design.
Usually, oscillatory models require no special considerations since the oscillations
are either bounded by the noise estimates or they are not consistent between trials.
If, however, the oscillations exceed the noise bounds and/or are consistent between
trials, then the calculated sensitivity factor is adjusted. The factor is penalized
based on overshoot and number of cycles. When this condition occurs, a warning
message will be displayed in the message window.
This heuristic may not be acceptable to all users. For example the oscillations may
be real and the model accurate. To circumvent this heuristic simply choose either
the NNHT or Separation Rank Option.
As a final word of caution on oscillatory behavior, remember that even if the
oscillatory behavior is real, well modeled and accurate, it still may not be
necessary or helpful to include this characteristic in the controller model. This is
particularly true of predictive controllers since they are in many cases bandwidth
limited.
Interpretation of Model Rank
Rank = 4  While the model may exist at this level, the quality could be poor. In
many instances level 4 models should not be used without rework. Typically, this
level is the result of high noise and/or relatively weak input signals. In some
instances these models may exhibit a favorable sensitivity factor. If this is the case,
the models may still be adequate. The suggested recommendation is to keep (use)
the corresponding parametric model.
Rank = 5 No reliable model exists. The suggested recommendation is to null
(reject) the corresponding parametric model.
A good starting point is the Demo data presented in Section 9.2. Here Velocity
form is used and the correlation and confidence check boxes are selected. Defaults
are used for all other options.
After fitting the FIR models, switch to the MV correlation view. To do this, select
View>Correlation (MV/MV) as shown below.
This will display the correlation view for independent variables. Unlike model
views, the Correlation matrix view for independent variables shows information
for each independent variable in a two-dimensional matrix of correlation plot
boxes. This matrix, as shown below, will always be square. The MVs and DVs
form both the columns and rows of this antisymmetric matrix.
Correlation View MV/DV to MV/DV
Diagonal elements of this matrix correspond to the auto correlation function while
off-diagonal elements correspond to the cross correlation function. It is desirable
for these functions to be within the target ranges specified by the dashed red target
limits. Results shown above illustrate ideal behavior. If the correlation functions
significantly exceed their limits, then there may be a sensitivity problem. In this
case it may be possible to obtain better results by making appropriate
modifications. If the positional form is used, then the models can either be rebuilt
using the velocity form or additional data could be collected.
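The quantities plotted in these boxes are ordinary normalized correlation functions. A minimal sketch follows (no detrending or windowing, which the product may well apply internally):

```python
import numpy as np

def correlation(x, y, max_lag: int) -> np.ndarray:
    """Normalized cross correlation r_xy(k) for lags k = 0..max_lag.

    Passing the same signal twice gives the auto correlation function,
    which equals 1 at lag 0 by construction.
    """
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    denom = np.sqrt(np.dot(x, x) * np.dot(y, y))
    return np.array([np.dot(x[: len(x) - k], y[k:]) / denom
                     for k in range(max_lag + 1)])
```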
While less critical than the independent variable correlation view, the
dependent variable correlation view still presents useful information. To invoke
this view, select View>Correlation (CV/MV). The following matrix will be
displayed.
Correlation View
CV to MV/DV
In this correlation view, the correlation matrix shows information for each
correlation function in a two-dimensional matrix of correlation plot boxes. The
MVs and DVs are the columns of the matrix and the CVs are the rows.
It is the objective of this view to visually display potential feedback effects in the
data. The positive portion of the curve represents the correlation from independent
to dependent variable. The negative portion of the curve represents the correlation
from dependent to independent variable. Hence for ideal open-loop data the
correlation function should theoretically be zero in this region. Practically, the
correlation function should be within the target ranges indicated by the red dashed
lines. Results shown above illustrate completely acceptable behavior.
When the inputs themselves are auto correlated, then nonzero values of the cross
correlation function can be expected in the negative region. To compensate for this
effect, the endpoints of the target ranges are dynamically adjusted. Thus for inputs
that have little or no auto correlation the target ranges will encompass the entire
negative axis.
This matrix shows information relating to each impulse response model in a
two-dimensional matrix of confidence plot boxes. The MVs and DVs are the columns
of the matrix and the CVs are the rows.
While information presented in this and other statistically related views does not
strictly pertain to models per se, input/output pairs may still be referred to as sub
model elements.
In the Confidence View, only the elements of the impulse response model that
exceed the noise band are displayed. The value displayed is the normalized
difference between the coefficient value and its corresponding noise band or
confidence limit. If no coefficients corresponding to a given trial are displayed,
then the model is not statistically reliable. If no models are reliable for a given sub
model, then a completely empty plot box, such as that shown above, is displayed.
The empty or null plot box implies no causal relationship between input and output.
A null plot box also implies that the non null hypothesis test has failed and the sub
model is of rank 5 for the NNHT rank option. Always check this view for a
graphic summary of the confidence results.
In addition to simply displaying obvious causal relationships, the confidence view
can be used to ascertain information pertaining to the temporal quality of the data.
Element (1,1) and (1,2) clearly illustrate rich information content up to
approximately 60 minutes. Beyond this, the coefficients are indistinguishable from
the noise. By inspecting the FIR step response curves, it can be seen that this time
range in fact captures essentially the entire model. It can therefore be concluded
that the corresponding input signals were sufficiently powerful over the spectrum
appropriate for these models.
Element (1,3) illustrates rich information content up to about 25 minutes. As in the
previous cases, this time range encompasses the entire response curve for trial 1
and trial 2. Thus these models are clearly statistically significant.
Note however, that Trial 3 (120 minute settling time) is not displayed. While the
input signal has adequate power in the relatively high frequency range, the signal
does not have enough bandwidth to reliably excite the desired low frequency
modes. Hence, the model is free to drift or fit low frequency noise. Model (1,3,3)
has the characteristic that only a small portion of the response is statistically
significant. In addition, the insignificant portion contributes in a substantial way to
the overall model.
To directly address this characteristic, which can occur relatively frequently, a
statistically significant settling time is internally computed and used to detect and
reject unreliable responses. This option is controlled by the
<UseConfidenceOnTset> check box in the Overall Model Setup Options dialog
box described in Section 8.5.
The Confidence view can also be used to quickly establish proper settling times,
which if desired can be used to enhance performance. For example the default
settling times of 60, 90 and 120 minutes are too long for element (3,1) since
responses longer than 20 or 30 minutes are probably not statistically reliable.
Overall statistical results are provided in the Statistical Summary View. To invoke
this view, select View>Statistical Summary.
Statistical Summary View
This matrix shows the statistical summary for each sub model in a two-dimensional
matrix of summary plot boxes. The MVs and DVs are the columns of
the matrix and the CVs are the rows.
Several descriptors and a plot of the separation or sensitivity bands are displayed
for each sub model for the selected Rank Option.
Descriptors shown in the Statistical Summary view have the following definitions
for all rank options.
NNHT  Indicates the results of the non null hypothesis test. Status is either
PASS or FAIL. Its value is independent of the selected Rank Option. If the
status is FAIL, then the corresponding rank (for Rank Option = 1 (NNHT)) is
5.
Rank Option Indicates which option has been used to rank the given sub
model and corresponds to the information displayed.
Rank This is the actual rank corresponding to the displayed Rank Option.
Separation Factor Value upon which the rank is based. This value is
displayed for all Rank Options except Rank Option = 3 (Sensitivity). The
factor indicates the degree of separation relative to the mean response and will
correspond to the displayed step response bands.
Sensitivity Factor Value upon which the rank is based. This value is
displayed only for Rank Option = 3. It indicates the smallest sensitivity of the
step response curves when noise estimates are not included and will
correspond to the displayed step responses.
Pending Action This item reflects the status of the parametric source flag.
When null is displayed, parametric models will not exist for this sub model.
Only those models that are selected can be modified. If additional sub models are
to be selected, click on the CV name to select the entire row of sub models for
that CV, click on the MV or DV name to select the entire column for that MV or
DV, click the upper left box to select all sub models, or hold <CTRL> and click to
select any desired combination of sub models. <CTRL> also acts as a toggle.
Deselect an item by clicking it again.
Since the Statistical View Options dialog box is modeless, selections can be made
at any time. This dialog box will automatically be closed if the view is changed
(since it pertains only to this view), and it can obviously be manually closed by
selecting the close button.
Use the pull down list box to select the desired Rank Option. All information in
selected models will reflect this change. Select the <Load suggest action> button
to overwrite the parametric model source flag with the suggested actions for the
selected models. This overwrite will be reflected in the Pending Act. descriptor
displayed in the Summary matrix.
Actions can also be manually specified. Select the <Load user action> button to
overwrite the parametric model source flag with the User action source defined by
the selected radio button for the selected models. This overwrite will be reflected
in the Pending Act. descriptor displayed in the Summary matrix.
Positional Form / 1 Trial
By using positional form with this data, the inputs become highly auto correlated.
When the correlation functions significantly exceed the target ranges, sensitivity
and/or model confidence may be suspect.
Typically, there will be correlation concerns when positional form is used and the
signals are not well designed. These deleterious effects can certainly be eliminated
or at least reduced by switching to velocity form as previously illustrated with this
data. While this is a tempting approach, it is almost always better to try to
properly design the input signals. Doing so will eliminate the need for
unnecessarily differencing the data and thereby losing some low frequency
information.
At this point it is possible to use the correlation information to ascertain potential
problems with respect to the confidence estimates. In Section 3.7, the covariance
matrix (upon which the confidence estimates are based) was shown to be directly
related to the inverse of the regression matrix. This matrix is basically a scaled
version of the correlation matrix. Hence the condition number of the covariance
matrix would be expected to increase as the auto correlation (or cross correlation)
function degrades. Indeed, in the limit, as the inputs become perfectly auto or
cross correlated, the covariance will become singular.
To graphically illustrate the effects of partially correlated inputs, consider the
Statistical Summary shown below. This matrix corresponds to the Correlation
matrix presented above.
These results are very revealing considering that the step response bands shown
are for a single trial. Thus the bands are due solely to the large noise estimates and
reflect a complete lack of confidence. Indeed, all models shown have failed the
non null hypothesis test (To observe these bands it is necessary to switch to Rank
Option = 0 (No Rank), otherwise empty plot boxes will be displayed).
It is worth mentioning that, for this data, which is discontinuous and exhibits
non-stationary behavior, velocity form would most likely be necessary irrespective of
the input signal design.
Impact of Exclude Data Options
As discussed previously, the manner in which data is excluded can have an impact
on the regression results. In this subsection a brief example will be presented
showing some of the significant results. The first case is given below.
Both files have the same 3 CVs and one MV. In file b1, the block selection option
has been chosen. In file b2, the variable selection option has been chosen with the
exclusion applied to all variables. The exclusion ranges are the same in both cases.
Results are as follows.
As can be seen, the answers are identical. This is to be expected, since in the
variable selection file the range was the same for all dependent variables and the
selection was applied to the independent as well as dependent variables. Thus in
file b2 the values of all variables within the selected range are set bad upon entering the
regression. This data can therefore be collapsed and represented as one NaN for
each variable. This is also precisely what occurs by definition for the block range
shown in file b1. Hence the results should be identical. The advantage of the v2
option is that a different range can be defined individually for each dependent
variable. This may be useful when only a subset of the dependent variables is
included in the regression at any given time. Next, consider the following case.
Here, the data is identical to that given previously. The difference is that the v1
selection option is used in file b3 and no selection range is specified in file b4.
Notice that data corresponding to the selection range has been cut from CV1 in file
b4. Results for these two files are given next.
In both cases shown above the results are identical. Note however that they are
drastically different than the results presented previously. Indeed, the gains are in
fact of opposite sign for most models. The first set of selection ranges actually
resulted in rank deficient solutions and completely degenerate models. This was
caused by the removal of additional rows in the regression matrix corresponding to
the bad values of the independent variable. In essence this removed the effect of
the second step thereby resulting in insufficient information content.
In file b3, the v1 option was used. Thus only the marked data for the dependent
variables was set bad. This resulted in the removal of only the corresponding rows
of the regression matrix and unnecessary data was not lost. Even though the
desired data was removed, the regression matrix was of full rank and the resultant
models were of reasonable quality.
In file b4, data was physically removed from CV1. The other CVs however are
unaltered. Since the data removed is physically set bad in the .mdl file, any future
regression will obviously see only bad data for these values regardless of any
selection strategy. Since this data and all corresponding rows in the regression
matrix are removed, identical results such as those shown above should be
expected for CV1. Why do CV2 and CV3 exhibit identical results? The reason is
simply because they have been built simultaneously with CV1. As such, the bad
values in CV1 require removal of corresponding rows in the regression matrix,
which impact CV2 and CV3. This impact yields results that are identical to the
case where CV2 and CV3 are themselves marked with bad values. Hence the
solutions are the same as those obtained in file b3. Note that if CV2 and CV3 were
regressed independently of CV1, then no bad value rows would be removed and
the results would be correspondingly different.
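The row-removal behavior described above can be pictured with a small sketch (the function and variable names are ours): any regression row in which a regressor, or any CV regressed simultaneously, is bad (NaN) is dropped for all of them.

```python
import numpy as np

def usable_rows(X, Y):
    """Keep only regression rows with no bad (NaN) values.

    X: regressor matrix; Y: matrix of simultaneously regressed CVs.
    A NaN anywhere in a row removes that row for every CV in Y, which is
    why CVs built together with a bad CV see identical row removal.
    """
    X = np.asarray(X, float)
    Y = np.asarray(Y, float)
    keep = ~(np.isnan(X).any(axis=1) | np.isnan(Y).any(axis=1))
    return X[keep], Y[keep]
```

Regressing CV2 and CV3 independently of CV1 would correspond to calling this with Y containing only their columns, so CV1's bad values would remove no rows and the results would differ.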
Finally, the discontinuity shown in the prediction plot is NOT due to any
regression range selection or internally bad values. Rather it is the result of a
prediction range exclusion. Here the poor data shown previously was excluded
from the predictions. Had these values not been excluded, the following results
would have been obtained.
This section tells you how to build the parametric models. You can use
the automatic build capability to build parametric models, or you can
build them manually.
Parametric models are used primarily for model order reduction and to
remove the variance of the FIR/PEM models. FIR step response models
are generated by integrating the impulse response coefficients. The step
response models are then fit by the parametric models. While default models
are low order, no limit is imposed on the order of the parametric models.
Any high frequency behavior of the FIR/PEM model can be captured by
the parametric fit. The defaults almost always capture all control relevant
characteristics.
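The integration step mentioned above is just a cumulative sum of the impulse coefficients; the last value of the resulting curve is the steady state gain (cf. the Gain descriptor in Section 9). A minimal sketch, with names of our choosing:

```python
import numpy as np

def step_response(impulse_coeffs):
    """Integrate FIR impulse coefficients into a step response curve.

    Returns the curve and its final value, which serves as the steady
    state gain when no transfer function exists yet.
    """
    step = np.cumsum(np.asarray(impulse_coeffs, float))
    return step, float(step[-1])
```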
Each FIR/PEM model is fit by a parametric model. This includes each of
the models corresponding to the various trials.
Parametric models are used in an open-loop fashion with raw data
(described in the next section) to select only those models corresponding
to the trial that yields the best long term open-loop predictive
performance. This eliminates the need to be concerned about the choice
of a specific FIR/PEM step response model.
10.2 Procedure
Fitting the Parametric Models
Fit Parametric Models Dialog Box and Associated View
Selecting the Fit Parametric Models option automatically changes views. Any
parametric models that are not current are automatically selected (backlit). A
model is not current if its corresponding FIR model has been modified. As shown
below, no parametric models are current (since none exist at this time) and while
the view looks similar to that shown above, this view corresponds to the
parametric model matrix (Show Submodels for Par Fit in the upper left corner of
the model matrix).
It is important to remember that different views have different text sensitive areas
that when double clicked, invoke views of specific dialog boxes. This topic is
discussed in more detail in the paragraphs that follow.
Like all model views, the parametric model matrix view shows information for
each sub model in a two-dimensional matrix of sub model boxes. The MVs and
DVs are the columns of the matrix and the CVs are the rows.
Only those models that are selected are updated in the fitting procedure. If sub
models are not selected, click on the CV name to select the entire row of sub
models for that CV, click on the MV or DV name to select the entire column for
that MV or DV, click the upper left box to select all sub models or hold <CTRL>
and click to select any desired combination of sub models. <CTRL> also acts as a
toggle. Deselect an item by clicking it again.
Models that are automatically selected (not current) cannot be manually
deselected (these models are automatically deselected when the model is
updated).
Fit the parametric models to the FIR/PEM models using the default options by
clicking [Fit Models] on the Fit Parametric Models dialog box. Typical results
are shown below.
This view now illustrates the results of the parametric fit. No models are selected
as all models are current. The plot boxes show the parametric step responses
superimposed on the FIR step responses indicating the quality of the fit. The text
in the sub model boxes defines the pertinent parameters for the displayed models.
Since submodel (2,2) trial one has a TfSettle that is too long relative to Settle T,
both these descriptors are displayed in blue.
When TfSettle > 1.5 * Settle T, the text for these descriptors will be displayed
in blue. Use this text sensitive display in any model view to visually identify
models with potential deficiencies.
To modify the other default options select the appropriate buttons on the main
dialog box or double click in the appropriate areas in the parametric model matrix
as described in the following paragraphs.
Show & Select Submodels
This button can be used at any time to return to the parametric model matrix
view shown above. If the Fit Parametric Models dialog box is displayed and the
user selects another view (e.g., FIR/PEM Step Responses), the parametric
model view can be restored by simply clicking the Show & Select button.
Overall Options
Use this button to set options at the highest parametric level. One or more sub
models must be selected before choosing this button. If no models are selected,
a message box is displayed prompting the user to first select one or more sub
models.
Remember, parameters are first set, then a function (e.g., Fit Models) is performed.
The APC Identifier keeps track of which parameters are current (those shown in
any view are current) and which are pending (those shown in any dialog box are
pending, i.e., they are to be used in the next fit). The two may or may not be the
same.
To set overall options for sub models (1,2) and (2,1), select the models as shown
below and click on the [Set Overall Options] button.
At this point any options that are set apply to ALL models selected. That is, the
options apply to each of the sub models for all corresponding trials.
Overall options are set from the Overall Parametric Option dialog box shown
below. This dialog box can be invoked only by selecting the Set Overall Options
button on the Fit Parametric Models dialog box. Since this is a modal dialog box,
it must be closed before fitting the model.
Overall Parametric options and their use are described below (as usual, bold text
applies to parameters that can be set by the user).
Discrete Model
Information
These options allow the user to specify the desired characteristics of the z-domain
models.
FIR Extension Sometimes the FIR step response does not settle out (come
to equilibrium). In these instances it is possible that the parametric fit
exhibits significant extrapolation. (If the TfSettle parameter is much larger
than the Settle T parameter, there is probably too much extrapolation.)
To significantly reduce or eliminate extrapolation, select this check box.
When checked, it automatically pads the FIR step response that is used for
fitting the parametric model with constant future values. This parameter has
no effect on integrating sub models.
Auto PreFilter This check box allows the user to turn the prefilter
calculations on or off. When checked, the prefiltering calculations are done
automatically. When not checked, the user can control the prefilter
calculations by changing the prefilter order described next. Prefiltering
applies only to the ARX model.
PreFilter When the Auto PreFilter check box is deselected the PreFilter
option becomes enabled. This parameter enables the user to specify the order
of the prefilter. A zero implies no prefiltering calculations. Increasing the
order shifts the fit emphasis from higher to lower frequencies.
Order This parameter refers ONLY to the order of the discrete time
parametric model (both ARX and Output error). It does not affect the order
ARX Selecting this radio button results in the use of the prefiltered
ARX structure defined in Section 1. With the appropriate order, this
structure will effectively result in an unbiased estimate. Models will
be converted to the Laplace domain before being saved.
Level of Laplace Search When the model method is Laplace, the user can
further restrict the form of the model used in the search.
Full Search This option results in the use of all terms defined in the
Laplace model description given in Section 1.
Once the parameters are set as desired, click [OK] to save the settings. If [Cancel]
is clicked, then the settings are not saved. Select [Fit Models] to perform the
calculations using the newly set parameters.
If at this stage the Set Overall Options dialog box is invoked again, an apparent
discrepancy may be observed. Initialization of the overall dialog boxes described
above is based on internal defaults, NOT on current or pending
settings. Thus, for example, if the discrete model order was originally set to 3, it is
displayed as 2 in the Overall dialog box. It is done in this manner since multiple
models (CVs, MV/DVs and Trials) are almost always selected simultaneously
and each may have different current or pending parameters. Note, all other dialog
boxes display the actual current or pending parameters for the appropriate
models. Thus if the Individual Parametric Options dialog box (described in
paragraphs that follow) is invoked, the order is correctly displayed as 3.
Individual Options
Use this button to set options for individual parametric models. At this level,
options are specified for a specific sub model corresponding to a specific trial.
Unlike the case described above, no sub models need to be selected before
choosing the Individual Options button. This is a modeless dialog box and, as
such, other operations can be performed while it is still open.
When this button is selected two actions occur: an Individual Parametric Options
dialog box is displayed, and a focus box (colored outline) is drawn around the sub
model of interest (i.e. the sub model whose parameters are to be potentially
changed). If no sub models are selected when the Individual Options button is
clicked, the focus box defaults to sub model (1,1) as shown below.
If one or more sub models are selected then the focus box is drawn on the
selected sub model with the smallest row index and the smallest column index.
As illustrated above, double clicking on a sub model in the parametric model
view also selects (backlights) the sub model that was double clicked (element (2,2)
in this case). A description of the information contained in this and subsequent
dialog boxes is presented in the following paragraphs. Parameters that have been
defined once will not be redefined.
Dialog Box
Information
Navigation of the dialog box is controlled by the previous and next buttons. Use
these buttons to move the focus box to the sub model whose parameters are to be
changed or reviewed. The indices of the focus box are displayed in the dialog box
by the CV and MV/DV parameters. Note, the selected (backlit) sub model(s)
do NOT change as the focus box changes. Therefore, if a model fit is done only
the models that are selected (backlit) are updated with the modified parametric
information. Modified parametric information in other models is retained until
the model is updated. Information in this dialog box is summarized as follows:
When data is present in the environment, check boxes for the first three
parameters (Null SubProcess, Integrating SubProcess and FIR/PEM Model
Locked) are for information only. This information reflects parameters that
are current and that have been specified at the FIR/PEM level of the
identification procedure. These parameters apply to all trials. To change
these parameters you must go back to the FIR/PEM level. If, however, there
is no data, then these parameters are enabled and can be modified
directly through this dialog box.
Radio buttons in the Parametric Model Source area apply to all trials for the
sub model that has the current focus. The Auto Calculation and Null
Override buttons have the same meaning as described previously.
Values selected are only pending; models are not changed until a function
(Fit Models or Do It) is executed.
Parameters in the Info Per Trial area apply only to the trial number selected.
Click on the pull down menu to change the trial number. The settling time
corresponding to the trial number is displayed in non-editable text. The
model matrix view reflects the selected trial (i.e. model information
displayed in the model view corresponds to the selected trial). Trial specific
information is as follows:
Lock Dead Time Check this box to specify the desired dead time. Then use
the scroll bar to enter the desired dead time or type it directly in the edit box.
The dead time must be in minutes. When checked, the delay estimation
routine is not invoked and the parametric models are fit with the specified
transport delay. Values returned after the fit may be slightly different from
those originally entered, since the dead time must be an integer multiple of
the effective sample rate of the FIR model.
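The rounding behavior can be illustrated with a short sketch (hypothetical Python, not the product's algorithm; the function name and example values are for illustration only):

```python
def snap_dead_time(dead_time_min: float, sample_rate_min: float) -> float:
    """Round a locked dead time (minutes) to the nearest integer multiple
    of the FIR model's effective sample rate."""
    return round(dead_time_min / sample_rate_min) * sample_rate_min

# A 7-minute dead time with a 3-minute effective sample rate snaps to 6 min,
# which is why the value returned after a fit can differ from the value entered.
print(snap_dead_time(7.0, 3.0))  # 6.0
```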
Settle Time This is a non-editable field, shown only to display the
settling time corresponding to the selected trial number. Remember, the
parametric model is fit to the FIR model with the specified settling time.
Current Model Source This is a non-editable field, shown only to
display the model source corresponding to the selected trial number. The source
is Auto if the model is automatically fit; User if the model is manually
entered (described below); or Null if the transfer function model has been deleted.
Parametric Options
Per Trial
Parameters in this dialog box correspond to the model defined by the CV,
MV/DV and Trial indices. Information not previously defined includes the More
Options and Do It buttons.
More Options Select this button to change options at the lowest parametric
level. The corresponding dialog box shown below is similar to the More
Overall Options dialog box described previously, the difference being that
here the options apply only to the model with the displayed CV, MV/DV and
Trial indices.
All three dialog boxes described above are modeless and have been designed to
work in conjunction with each other. Parameters changed in one are
automatically reflected in the others. For example, a change in the sub model or
trial from the Individual Parametric Options dialog box results in an automatic
update of the indices displayed in subsequent dialog boxes. Closing a higher level
dialog box automatically and properly closes all subsequent boxes. Selecting OK
or moving the focus box to another sub model saves any modified parameters.
Viewing the
Transfer Function
The transfer function of the submodel for the listed trial is displayed. The
transfer function is presented in standard form. The red trace on the plot is the
step response based on the displayed transfer function. Any time the Calculate
button is clicked, the red trace extends to the steady state of the stable portion
of the transfer function (equivalent to about 4-5 times the time constant for a first
order system). The green trace is the step response of the FIR model and is shown
for reference only.
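The 4-5 time constant rule of thumb can be checked against a first order response (illustrative Python; the gain K and time constant tau are arbitrary example values, not taken from the manual):

```python
import math

def first_order_step(K: float, tau: float, t: float) -> float:
    """Step response of K / (tau*s + 1): K * (1 - exp(-t / tau))."""
    return K * (1.0 - math.exp(-t / tau))

K, tau = 2.0, 10.0
# At t = 4*tau the response is within about 2% of steady state, so plotting
# to 4-5 time constants effectively reaches the steady state of a stable model.
print(first_order_step(K, tau, 4 * tau) / K)  # about 0.982
```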
Switch to another trial or sub model by using the buttons and pull down list
shown in the dialog box. The selected model is indicated by the focus outline on
the model summary view.
The transfer function can be changed by editing the gain, numerator polynomial,
denominator polynomial or dead time. After making modifications, click
[Calculate] to update the plot of the transfer function and to redisplay the transfer
function in standard form. Click [Accept] to save the new transfer function. If you
click [Exit] or change to a different model without clicking Accept, the model is
not saved.
If there is more than one factor, each factor must be enclosed in parentheses,
even if the factor consists of only a single term.
The transfer function does not have to be entered in standard form; it is
automatically standardized when the Calculate button is clicked.
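For illustration, expanding the parenthesized factors amounts to polynomial multiplication (a hypothetical helper, not the product's parser):

```python
def polymul(a, b):
    """Multiply two polynomial factors given as coefficient lists
    (highest order term first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (s + 1)(2s + 3) expands to 2s^2 + 5s + 3
print(polymul([1, 1], [2, 3]))  # [2.0, 5.0, 3.0]
```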
The error box shown above displays the average absolute error between the
transfer function step response and the FIR step response. The error is only
evaluated over the settling time displayed.
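The displayed error figure can be expressed as follows (a sketch based on the definition above; the function and argument names are illustrative):

```python
def avg_abs_error(tf_resp, fir_resp):
    """Average absolute difference between the transfer function step
    response and the FIR step response, sampled over the settling time."""
    assert len(tf_resp) == len(fir_resp)
    return sum(abs(a - b) for a, b in zip(tf_resp, fir_resp)) / len(tf_resp)

# Two three-sample responses differing by 0.1 at two of the three points:
print(avg_abs_error([0.0, 0.5, 1.0], [0.0, 0.6, 0.9]))
```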
Step Response
Overview
Since all step response models for all trials are presented in this view, overall
performance can easily be evaluated by observing responses such as those shown
in the following figure.
This section describes how to find and select the best set of models. This best set
is referred to as the Final models.
Final Models
Defined
Final models are derived from all possible non-null parametric models. Since a
model may exist for each trial (which corresponds to a user specified settling time
for FIR models or to a particular structure for PEM models), it is necessary to
choose one of potentially several models.
Parametric models are built for each dependent/independent relationship and for
every trial. By specifying a range of settling times or a range of orders,
identification can proceed without regard to model structure.
When dealing with FIR models, accurate ranges for the settling time are not
required; it is, however, possible to specify values that are significantly too short
or too long. (It may at first appear that long settling times are always good.
Unfortunately, depending on the excitation signals, specifying settling times that
are too long can result in poor models.) It then becomes a question of which
settling time results in the best model.
Similarly, when dealing with PEM models, exact orders are not required; it is,
however, possible to specify values that are too low or too high, either of which
can result in a less than desirable model.
Comparing step responses of the FIR models is one way to determine the
reasonableness of the specified settling times. This information can be obtained
from the FIR/PEM step response summary. Unfortunately, this approach can be
ambiguous and may not result in the most effective models. The statistics can be
used for a less ambiguous estimate. However, even for statistically valid models
with refined settling times, it is possible to have models with somewhat different
characteristics.
Comparing PEM step responses is an effective way to evaluate reasonableness of
the PEM models. However, even for cases when the responses are satisfactory, it
may be possible to have models with somewhat different characteristics.
To avoid these difficulties, the last step in the identification procedure is a
technique that automatically searches all models to find the final set that gives the
best long term open loop prediction relative to raw process data. This search
procedure effectively rejects any models that are ill-suited for prediction
purposes. The technique strives to select the best of the available models. (Note
that it does NOT guarantee that the selected model is necessarily good; it may
simply select the best of a poor set of models.) To prevent the use of poor FIR
models, use the statistical results to null the appropriate parametric models before
performing this final step. To prevent the use of poor PEM models, use the
guidelines given previously on PEM step responses to null unreasonable models.
Open loop prediction forms the basis of the search procedure. Prediction is done
on a Multiple Input Single Output (MISO) basis. That is, only one CV at a time is
evaluated but all possible MVs/DVs are used in the evaluation.
The MISO model is evaluated based on its open loop predictive performance
relative to raw plant data. The default data is the same as that used by the
regression, but any segment can be used including data never regressed (cross
validation).
The actual MVs/DVs are used as inputs to the MISO model. The output of the
MISO model is the predicted value of the process CV. Since this predicted CV is
never updated by the actual CV data, the results illustrate the long term open loop
performance of the model.
The figure of merit for these models is the average absolute residual
error (the difference between actual and predicted values).
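A minimal sketch of the MISO evaluation follows (assuming a discrete impulse response form for the sub models; the product's internal representation is not documented here, and all names are illustrative):

```python
def predict_cv(inputs, impulse_models):
    """Open loop MISO prediction: convolve each MV/DV series with its
    sub model's impulse response and sum the contributions. The predicted
    CV is never corrected by measured CV data."""
    n = len(inputs[0])
    pred = [0.0] * n
    for u, h in zip(inputs, impulse_models):
        for t in range(n):
            pred[t] += sum(h[k] * u[t - k] for k in range(min(t + 1, len(h))))
    return pred

def merit(actual, predicted):
    """Figure of merit: average absolute residual (actual minus predicted)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# One MV held at 1.0 driving a two-term impulse response:
pred = predict_cv([[1.0, 1.0, 1.0]], [[0.5, 0.25]])
print(pred)                            # [0.5, 0.75, 0.75]
print(merit([0.5, 0.75, 0.75], pred))  # 0.0
```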
Two Procedures
In evaluating the MISO model, the APC Identifier uses two analytical strategies.
In one strategy, the Identifier uses models corresponding to uniform trial indices
(each index corresponds to a user specified settling time). The Identifier attempts
to find the models that result in the lowest average prediction error, given that all
models for a CV are based on the same trial index.
In the other strategy, the Identifier uses models corresponding to mixed trial
indices. Its starting point is the uniform trial solution. Starting with the first
MV/DV, each model not corresponding to the uniform solution is evaluated in
the overall MISO model holding all other models constant.
If the current model results in a reduction in the average prediction error, then the
model is added to the mixed trial solution. The procedure continues until all
models have been evaluated. Although this search is not exhaustive, it almost
always finds an optimal or near-optimal solution.
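The two strategies can be sketched as follows (an assumed greedy form inferred from the description above; `error_fn` stands in for the average prediction error evaluation and is not a product API):

```python
def best_uniform(n_inputs, trials, error_fn):
    """Uniform search: every sub model for the CV uses the same trial index."""
    return min(trials, key=lambda t: error_fn([t] * n_inputs))

def best_mixed(n_inputs, trials, error_fn):
    """Mixed search: starting from the uniform solution, evaluate each
    input's alternative trials one at a time, keeping any change that
    lowers the average prediction error."""
    solution = [best_uniform(n_inputs, trials, error_fn)] * n_inputs
    best = error_fn(solution)
    for i in range(n_inputs):
        for t in trials:
            if t == solution[i]:
                continue
            candidate = solution[:i] + [t] + solution[i + 1:]
            err = error_fn(candidate)
            if err < best:
                solution, best = candidate, err
    return solution, best

# Toy error surface whose minimum is at trials [2, 1]:
toy = lambda s: abs(s[0] - 2) + abs(s[1] - 1)
print(best_mixed(2, [1, 2, 3], toy))  # ([2, 1], 0)
```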
11.2 Procedure
Selecting Final
Trials/Finding
Final Models
Selecting the Select Final Trials option automatically changes views and invokes the
Select Final Models dialog box as shown below. The default view is the Final Trials
view as shown in the upper left corner of the model matrix.
This view presents the model matrix which corresponds to those models that are
deemed the final solution to the identification problem. When a controller (either
Profit Controller (RMPCT) or RPID) is built, only final models are used in the
calculation procedure. The Final Trials descriptor is used to indicate that each final
sub model is one of potentially several possible models. When the identification
procedure is completed, this view will show the final parametric models in Laplace
domain form along with other pertinent information.
It is highly recommended to inspect this view before continuing with any control
building operation. Note that this view can also be accessed by selecting View>Final
Model Xfer Function from the main menu.
Inspection of the final models shown above illustrates that all models are Invalid
final models. Invalid final models indicate either that final models have not yet been
selected (as is the case here) or that there is something wrong with the final model.
The Invalid final model state precludes the building of either a Profit
Controller (RMPCT) or RPID controller.
Many options are available for selecting/defining the final models. Tailor the
selection procedure by choosing the desired options from the Select Final Models
dialog box as discussed below.
Trial Source
By far the most important option is the choice of the trial source. The APC Identifier
maintains four separate and distinct sets of models for use as effective long term
open-loop predictors. These models are characterized by their trial indices. The four
trial sources are:
Auto Best Uniform  Trials whose submodels produce the smallest average
absolute error between the predicted results and the test data from the process,
given that the submodels for any row in the matrix are all from the same trial
Auto Best Mixed  Trials whose submodels produce the smallest average
absolute error between the predicted results and the test data from the process,
given that the submodels for any row in the matrix can be from different trials
User Selected  This choice allows the user to manually select the trial for
each submodel. Since this set of trials is initialized by either the uniform or
mixed trial set, it is highly recommended that one of these be updated (as
described below) before manually selecting the trial for any submodel
Final  Trial for each sub model that corresponds to the final model.
Both the uniform and mixed buttons are associated with a calculation procedure that
strives to minimize the prediction error for a selected CV subject to the restrictions
given above. As such, referral to these buttons is made with respect to either the
uniform or mixed solution. That is, the trials for these buttons are the result or
solution to the search for the minimum prediction error. The solution for the Auto
Best Mixed trials has as its starting point the Auto Best Uniform solution. Thus a
uniform solution is always performed prior to the calculation of the mixed solution.
Final model selection should always begin by choosing either the uniform or mixed
button. Selection of any of the Trial Source buttons automatically changes to the
appropriate view. The text displayed in the upper left corner of the model matrix
(defining the view) for the various buttons is as follows:
Auto Best Uniform  Auto Best Uniform Trials
Auto Best Mixed  Auto Best Mixed Trials
User Selected  User Selected Trials
Final  Final Trials
Information displayed in these views corresponds to the models with the trial indices
defined by the selected radio button. If the Auto Best Mixed button is selected in the
diagram presented above, then the result is as follows.
In this figure, the view has been automatically changed to Auto Best Mixed Trials. In
addition, all CVs have been automatically selected (backlit). Note: CVs with any
uniform or mixed trial solutions that are not current are automatically selected
(user defined trials that are not current are NOT automatically selected). The
user cannot select/deselect from any of the trial views (to select/deselect, see the
discussion on the Show & Select Sub models button). A solution is not current if any
corresponding parametric model has been modified since the last update. A CV with
a solution that is not current cannot be deselected.
As shown above, no solutions are current and the Final and Pending Errors are
undefined (since none exist at this time). To obtain a best mixed solution using
default options, click [Update Trials]. A message window shows the progress of the
search. To view the progress messages, click Window>Messages.
After the search is complete, the Auto Best Mixed Trials view shown below is
displayed.
This view shows the models that correspond to the Auto Best Mixed solution. It also
displays the final and pending prediction errors. Since final models have not yet been
created, the final error is still undefined. The pending errors are those
associated with the mixed solution. Final Source designates the trial source of the
final model. The source can be Uniform, Mixed or User. Since final models have
not yet been created, there is no final source.
Further modification of the final model selection process is accomplished through
the use of the appropriate buttons on the main dialog box shown above. These
buttons have been designed to work in conjunction with each other to offer the
maximum degree of flexibility in selecting a final model.
There are essentially three categories of operations. The first is the Trial Source,
which has just been described. The next two are the selection and function
operations respectively. The selection buttons apply to both variables and data. The
function buttons apply to whatever is selected (variables, data, trial sources). Use of
these buttons is described below.
Show & Select
Submodels
This button has two primary functions. If the Select Final Models dialog box is
displayed and the user selects another view from the main menu (e.g., Single Graph
Data Plot), the original view can be restored by simply clicking the Show &
Select button.
Its other primary function is to allow the user to independently select variables for
use in any of the functions supported in the Select Final Models dialog box. Note:
at this level, only CVs can be selected. Individual sub models cannot be selected,
since all functions involve operations on raw data and this implies the use of MISO
models.
This button must be used for the user to select/deselect any variable or to show a
variable that has been previously selected by the user. When the button is clicked,
the view is automatically changed to the Select Sub models for Final Model view.
Models displayed correspond to the Trial Source radio button selected.
If there are no CVs highlighted, click the CVs of interest, or click the upper left box
to select all. <CTRL> toggles the selection state.
If the Trial Source is changed to either Uniform or Mixed after variables are
selected, then only CVs that are not current will show as selected. Clicking the Show
& Select button again redisplays the user selected CVs.
Excluding Data
From the
Prediction
Calculations
Use this option to manually specify the desired trials. This button is disabled unless
the trial source is [User Selected]. When this button is selected, the User Trial
Selection dialog box is displayed with the focus box on matrix element (1,1).
The same dialog box can be invoked by double clicking in the non-Trial text area of
any desired sub model (double clicking on the Trial descriptor invokes the displayed
trial dialog box as described previously). Remember that the Trial Source must be
[User Selected] for this to work.
To set the user trial for matrix element (2,1) to trial 3, choose the [User Selected]
Trial Source and double click in the text area of CV 2, MV 1. Then select TRIAL 3
from the Select User Trial pull down menu as shown below.
Dialog Box
Information
Navigation of the User Trial Selection dialog box is controlled by the previous and
next buttons. Use these buttons to move the focus box to the sub model whose trials
are to be changed or reviewed. The indices of the focus box are displayed in the
dialog box by the CV and MV/DV parameters.
Information in this dialog box is summarized as follows:
Trial Value  This value is the current trial index for each of the four trial categories
for the sub model with the current focus. In the case shown above, the Uniform,
Mixed and User trials all have indices of 2 while the Final trial value is empty. The
three entries are the result of the one Auto Best Mixed solution. As described
previously, the uniform solution is determined prior to calculating the mixed
solution. In this case the solution was the same for both searches; that is, the model
corresponding to trial 2 resulted in the minimum prediction error. The User trial is
initialized with the mixed trial solution if it exists. Otherwise the uniform solution is
used for initialization.
Info Per Trial  FIR/PEM settling time and parametric model source are displayed
under this category. Use the pull down menu to change the displayed trial. The
settling time and model source change accordingly, as does the model information
displayed in the User Selected Trials view.
Selected User Trial  Use this pull down menu to actually specify the desired trial.
All displayed information automatically reflects this selection.
Since this is a modal dialog box, it must be closed before another operation can be
performed. Selecting OK or moving the focus box to another sub model saves the
specified trial information.
Update Trial
When any trial information is not current, this button can be used to invoke the
various searches and/or update the prediction errors. This button works in
conjunction with the first three trial categories and applies to whichever variables are
selected. The procedure depends on the trial source as follows:
Auto Best Uniform  It performs the uniform trial search for the minimum prediction
error as described above and updates the pending error with the resultant
minimum prediction error
Auto Best Mixed  It first performs the uniform trial search. With this as a starting
point, it subsequently performs the mixed trial search for the minimum
prediction error as described above and updates the pending error with the
resultant minimum prediction error
User Selected  No search is performed when this radio button is selected. The
models corresponding to the user selected trials are merely used to update the
prediction error. Since user modifications to the trials are NOT tagged in the
automated selection process, it is recommended that the trials be updated as
soon as they are modified. This ensures that the prediction errors remain current.
Stop
Use the Stop button to prematurely terminate the search procedure. Note, the search
for the best mixed solution can be time consuming for large data sets, especially
when there are many trials.
Plot Predictions
Performance for any set of models over any range of data can be obtained by
selecting this button. Long term open-loop predictions are displayed in terms of
predicted, measured and residual values as a function of time. These values are
generated for any CVs that are selected, using the transfer function models
corresponding to the indices of the current Trial Source.
Two types of prediction calculations are supported.
Positional  With this default option, raw data is used unaltered in the prediction
calculation. Bias shifts and drift effects cause discrepancies between predicted and
actual data. Use this option for standard evaluations.
Velocity  With this option, raw data is differenced prior to the prediction
calculations. While bias shifts and drift effects are reduced or eliminated, noise
effects are amplified. Use this option in support of the default. Since responses are
impulse-like, this option can be very useful for integrating models.
Note: These prediction options have nothing in common with the FIR/PEM
model forms that use the same names.
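The distinction between the two calculations can be sketched as follows (the arithmetic is an assumption; the names simply mirror the options above):

```python
def to_velocity(series):
    """First-difference a raw data series for velocity form prediction.
    Bias shifts and slow drift largely cancel, but noise is amplified."""
    return [b - a for a, b in zip(series, series[1:])]

raw = [10.0, 10.5, 11.5, 11.5]   # positional form uses this data as-is
print(to_velocity(raw))           # [0.5, 1.0, 0.0]
```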
Use the prediction information to evaluate model performance before selecting final
models. If, for example, the performance of the mixed trial solution for CV 1 is to be
observed, select [Auto Best Mixed]. Then choose [Show & Select Sub models] and
click on CV 1 (it will be highlighted). Next, select [Plot Predictions]. The following
results are displayed.
Store Predictions Use this option to store any predictions into an Aux variable.
This variable can then be observed at a later time in the Single Graph Data Plots
view and as such can be plotted against any other variable.
These results illustrate a dramatic improvement in the predicted values over all but
the initial segment. Note also the comparison of the predicted and actual values in
the trend plot shown in window 2. The importance of using the same scale for the
predicted values is self-evident.
Configuration of the prediction plots (and all other views in which plots occur) can
be modified by adjusting the plot options. To do this select View>Plot Options from
the main menu. Set the Magnification and Height/Width ratio as shown below.
With these settings, the prediction plots (from the original Demo example) take the
following form.
Adjust the settings as desired and continue evaluating the performance of all the
models. When the results are satisfactory, select the final trials as discussed below.
Load Source to
Final
Use the [Load Source to Final] button to create final models. To load the entire
mixed trial solution into the final models for the case shown above: select [Auto Best
Mixed] as the Trial Source, click on Show & Select Submodels, click on the upper
left corner of the model matrix, and select [Load Source to Final]. The results are
illustrated below.
As can be seen above, the view is automatically switched to the Final Trials view,
which displays the final models. Also displayed in this view are the final errors and
the final source. Final errors are now defined (in this view the pending error has no
meaning and is therefore not displayed). As illustrated, the mixed trial solution is the
source for all final models.
This view is unique in that the Laplace domain transfer function of the final
parametric model is displayed. In addition, the step response curves are displayed
with the time axis corresponding to the maximum of TfSettle and Settle T.
At this point, the first pass of the identification procedure is complete. The file can
be saved for later use or it can be used for control design.
Modification and/or adjustments to the final models can be achieved in a relatively
straightforward fashion. For example, consider the case where it is desired to
manually adjust the transfer function of element (2,1) for trial 3 (the user selected
trial shown above). In addition, it is desired for the final model to contain the best
mixed solution for CV1 and CV3, the user solution for CV 2 and the uniform
solution for CV4. To do this proceed as follows.
Change the transfer function as desired. The Transfer Function dialog box for
manual entry can be invoked by double clicking in the plot box for all views except
the final trial view. In this case, select [User Selected] and double click in the plot
box for element (2,1). Then enter the transfer function as shown below.
After the transfer function is entered, click [Calculate] to show the full step response
of the transfer function. When satisfied, click [Accept] and then [Exit]. The user-entered
transfer function is now saved in element (2,1,3).
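For readers who want to see what [Calculate] produces for a typical manually entered model, the step response of a first-order-plus-deadtime transfer function K*exp(-delay*s)/(tau*s + 1) can be sketched in a few lines of Python. The function name and parameter values here are illustrative only and are not part of the product.

```python
import math

def fopdt_step_response(gain, tau, delay, dt, t_end):
    """Step response of a hypothetical first-order-plus-deadtime
    transfer function K*exp(-delay*s)/(tau*s + 1), sampled every dt
    minutes out to t_end. Mirrors the idea behind [Calculate]."""
    response = []
    t = 0.0
    while t <= t_end:
        if t < delay:
            # Still inside the deadtime: no response yet
            response.append(0.0)
        else:
            response.append(gain * (1.0 - math.exp(-(t - delay) / tau)))
        t += dt
    return response

# Example: gain 2.0, 5-minute time constant, 1-minute delay
r = fopdt_step_response(2.0, 5.0, 1.0, dt=1.0, t_end=30.0)
```

The response starts at zero during the delay and settles toward the gain value, which is what the plotted step response in the dialog box conveys.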
When the dialog box is closed, the view is still User Selected Trials. If at this point
the Trial Source is changed to either Uniform or Mixed, CV2 is backlit,
indicating that solutions for this CV are not current. The solution for this CV is
therefore automatically calculated the next time the Update Trials button is chosen
with either the uniform or mixed button selected.
In the User Selected Trials view, CV2 is not selected. Similarly, if the Show &
Select button is chosen (Select Submodels for Final Model view), CV2 is not
selected. That is because these views only show user selected CVs. Typically at this
stage, it would be advisable to update the prediction error for the user model. Select
[Show & Select Submodels], click on CV2, then click [Update Trials]. If there is
concern that the manually entered transfer function is a better predictor than the one
calculated automatically, then the search can be reevaluated. Select either [Auto
Best Uniform] or [Auto Best Mixed] (CV2 is automatically selected), then click
[Update Trials]. All information is now current.
To construct the final models, select [Auto Best Mixed], click [Show & Select
Submodels], click on CV1 and CV3, then click [Load Source To Final]. Next, select
[User Selected], click [Show & Select Submodels], click on CV2, then click [Load
Source To Final]. Finally, select [Auto Best Uniform], click [Show & Select
Submodels], click on CV4, then click [Load Source To Final].
In some circumstances it may be desirable to see the impact that one or more final
models have on overall prediction performance. This can be easily accomplished by
temporarily nulling final submodels.
In the final model (Final Trials) view, double click on any model box. The Null Final
Model dialog box shown below will be displayed.
This is the only dialog box that can be invoked from this view. Only one submodel
can be nulled at a time. If more than one submodel for a given CV is to be nulled,
null all but the last without residual update (this saves the time associated with the
prediction calculations). For the last model, select [With residual update]. The
prediction error now reflects the effect of the null model(s). The null model is
displayed as shown below.
Compare the errors with and without the models. Observe the prediction as described
above.
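The effect of nulling a submodel on overall prediction error can be sketched as follows. This is a simplified illustration only, assuming each CV prediction is the sum of its submodel contributions; the names and data are hypothetical, not the tool's internals.

```python
def prediction_error(cv_actual, contributions, nulled=()):
    """RMS prediction error for one CV, where `contributions` maps each
    MV name to that submodel's predicted contribution series. Nulling a
    submodel (as with the Null Final Model dialog) simply drops its
    contribution before the residual is computed."""
    n = len(cv_actual)
    sse = 0.0
    for k in range(n):
        pred = sum(series[k] for mv, series in contributions.items()
                   if mv not in nulled)
        sse += (cv_actual[k] - pred) ** 2
    return (sse / n) ** 0.5

# Hypothetical CV data and two submodel contributions
cv = [1.0, 2.0, 3.0, 4.0]
contrib = {"MV1": [0.5, 1.0, 1.5, 2.0], "MV2": [0.5, 1.0, 1.5, 2.0]}
err_full = prediction_error(cv, contrib)
err_null = prediction_error(cv, contrib, nulled={"MV2"})
```

Comparing `err_full` against `err_null` is the numerical analogue of comparing the displayed errors with and without the nulled model.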
Restoration of the null models is simple. If the Select Final Models dialog box is not
present, select Identify>Select Final Trials from the main menu. Select the
appropriate Trial Source, choose [Show & Select Submodels], click on the desired
CV, then click [Load Source To Final]. If an attempt is made at this stage to build a
controller, the following message will be displayed.
Be sure to inspect the final matrix. As shown above, the backlit settling time
information indicates there may be a problem. If the response is not reasonable, it is
always best to correct or eliminate any submodels before building any controllers.
When finished with the identification, it is a good idea to save the file. After the
document is saved, the title reflects the new name as filename.mdl (or filename.pid).
The source descriptor (from .mpt or .pnt) will no longer be displayed. This file can
be opened at another time to merge and/or rebuild models.
11.3
Creation of the final models based on the final trial selection procedure was
discussed in detail in the previous section. The final model view can be invoked
either by selecting Identify>Select Final Trials or by selecting View>Final
Model Xfer Function. The former approach will invoke the Select Final Models
dialog box, and since the default Trial Source is Final, the view will be
automatically switched to the final model view. Note that the view referred to as
the final model view has the Final Trials descriptor in the upper left corner of the
model matrix, as shown below.
Information displayed in this view has been described in the previous section. In
addition to the Final Model view, another view that is very useful for reviewing
model information is the Model Summary view. This view is discussed below.
To switch to the model summary view, select View>Model Summary from the
main menu. Like all model views, the model summary view shows information
for each submodel in a two-dimensional matrix of submodel boxes. The MVs
and DVs are the columns of the matrix and the CVs are the rows.
This view is very similar to the parametric model view discussed in Section
10. In fact, all text-sensitive areas and displayed text are the same with the
exception of the view descriptor found in the upper left corner of the model
matrix. The only real difference between this and the Show Submodels for
Parametric Fit view described in Section 10 is the type of dialog box invoked by
double clicking on non-Trial, text-sensitive areas. This action results in the dialog
box shown below.
Copy Trials from One Source to Another
Select Edit>User2Final.
In addition to copying the displayed trials for the selected submodels into the
User Trials, the residuals for any touched CVs are updated. These results are
then loaded into the Final Models. The trials and corresponding models displayed
in the Final Model view reflect the user choices. Also note that the prediction
error and Final Model Source are automatically updated.
Section 12 - Annotation
12.1 Overview
Profit Design Studio (APCDE) supports annotation at many levels. Both user and
automatic annotation are available. The following items can be annotated:
Variables
Var
Aux
Submodels
Graphs
Vector Calculations
Variables and graphs can be automatically annotated. This feature can be turned
on or off at any time by setting the AutoAnnotate variable equal to one or zero,
respectively, in the .ini file.
Access to the annotation for any item can be easily accomplished from virtually
any appropriate view. To access an item, simply lift up on the right mouse button
with the cursor over the desired item. The next section describes annotation
access and update in some detail.
12.2 Annotation Access and Update
Annotation dialog boxes can be invoked in many different ways. Annotation for
each item is available as summarized below.
Applications - To access this item, lift up on the right mouse button when the
cursor is over the upper left-hand corner of any matrix view. The same
annotation will be accessed irrespective of the current view.
Variables - These annotations can be accessed by lifting up on the right mouse
button when the cursor is over virtually any tag name that is not in a dialog box.
Use the left or top margins in any matrix view, the Descriptive Info view,
or any Single Graph Data Plot view.
Submodels - With this item, the annotations apply to the row-column element
of any matrix. The same annotation will be accessed irrespective of the particular
matrix view. The one exception is the MV/DV-MV/DV correlation view, which
has its own annotation items. These annotations can be accessed by lifting up on
the right mouse button when the cursor is over any submodel.
Graphs - Graphical annotations can be accessed by:
1) Selecting View>Single Graph Data Plots and lifting up on the right
mouse button when the cursor is in the text margin. You will be given the option
to annotate the variable closest to the cursor, all displayed variables or the plot
corresponding to the general data.
2) Selecting View>Exclude FIR/PEM Ranges (or selecting Exclude
Data Ranges from the Fit FIR/PEM dialog box) and lifting up on the right mouse
button when the cursor is in the text margin. You will be given the option to
annotate the variable closest to the cursor, all displayed variables or the plot
corresponding to any data ranges that have been selected for exclusion with
respect to FIR/PEM regression.
3) Selecting View>Exclude Prediction Ranges (or selecting
Exclude Data Ranges from the Select Final Trials dialog box) and lifting up on the
right mouse button when the cursor is in the text margin. You will be given the
option to annotate the variable closest to the cursor, all displayed variables or
the plot corresponding to any data ranges that have been selected for exclusion
with respect to final trials/prediction calculations.
Vector Calculations - Annotations for these items can only be accessed by
selecting Vector Calculation>Vector Function>User Notes from either the Data
Operations or Tools main menu.
Once annotations are made, they can be viewed and/or modified at any time by
simply reselecting as described above. An annotation descriptor (superscript A)
will appear in all matrix views for any variable or submodel that is currently
annotated.
Detailed Access and Update
2) Data Deletion - When one or more but not all variables are displayed in the
Single Graph Data Plots view, one or more ranges are defined, and the user
selects delete, the data in the ranges (inclusive) for each displayed variable
is set bad (NaN). In this case the data vectors are NOT collapsed. When this
occurs and the AutoAnnotate flag is on, the following annotation will occur:
Annotation for the Single Graph Data Plots view will list the number of data
ranges selected for deletion and the start and end time of each deleted range. It
will then list all variables for which data has been set bad (NaN).
Annotation for each displayed variable will be updated to reflect the ranges
over which data has been set bad. The start and end times of each range will be
recorded.
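The documented behavior of range deletion (set bad without collapsing, plus an auto-annotation summary) can be sketched as follows. This is an illustration of the described behavior only, not the tool's implementation; the annotation text format is an assumption.

```python
import math

def delete_ranges(series, ranges):
    """Set data in the given index ranges (inclusive) to bad (NaN)
    without collapsing the vector, and return an auto-annotation line
    summarizing what was done."""
    out = list(series)
    for start, end in ranges:
        for i in range(start, end + 1):
            out[i] = float("nan")
    # Illustrative annotation text; the real layout differs
    note = "[auto] %d range(s) set bad (NaN); length unchanged (%d)" % (
        len(ranges), len(out))
    return out, note

data, note = delete_ranges([1.0, 2.0, 3.0, 4.0, 5.0], [(1, 2)])
```

Note that the vector keeps its original length; only the flagged samples become NaN, exactly as the deletion description above requires.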
each variable that is modified. The start and end index for each modification
range and the corresponding replacement option will be recorded.
4) Block Range Change - When FIR/PEM block range selection is modified,
the corresponding annotation will be updated the next time a FIR/PEM model is
regressed. The start and end indices of each range will be recorded. When data
used for selecting Final Trials or predictions is modified, the corresponding
annotation will be updated the next time an update or prediction is performed.
5) Data/Model Merge - When data is merged, annotations in the destination
file for each variable will be updated reflecting when the merge took place and
the source file from which the data was merged. When models are merged,
annotations in the destination file for each model touched will be updated
reflecting when the merge took place and the source file from which the models
were merged.
All autoannotations will be marked accordingly at the beginning of the
annotation. When any annotation is made, a time/date stamp is automatically
inserted at the end of the annotation.
For user annotations, it is recommended to start all new text on a new line in the
dialog box. When exiting the annotation dialog box, you do not need to insert a
new line, as this is done automatically prior to the insertion of the time/date
stamp. Since it is anticipated that annotation of submodels will occur from a
variety of different views, these annotations are automatically appended with
the particular view from which the annotation was made. The comment will
immediately precede the time/date stamp. The next section presents use of the
annotation features through the demo example.
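The append-with-stamp behavior described above (new line, optional view name, automatic time/date stamp at the end) can be sketched as follows. The function name, stamp format and "(view: ...)" suffix are illustrative assumptions, not the tool's exact layout.

```python
from datetime import datetime

def append_annotation(existing, new_text, view=None):
    """Append an annotation entry, starting on a new line and ending
    with an automatic time/date stamp, roughly as the manual describes.
    `view` mimics the view name appended for submodel annotations."""
    entry = new_text
    if view:
        entry += " (view: %s)" % view
    # Automatic time/date stamp inserted at the end of the annotation
    stamp = datetime.now().strftime("%m/%d/%Y %H:%M")
    return existing.rstrip("\n") + "\n" + entry + "\n" + stamp

log = append_annotation("Initial note", "Gain looks low", view="Final Trials")
```

Each call yields one new entry line followed by a stamp line, so the annotation history stays readable as it grows.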
Annotation Example
To start, the auto-annotate feature is turned on. Some data is excluded and an
FIR fit is performed. From the Fit FIR/PEM dialog box, select Exclude Data
Ranges. Then move the cursor over DV3 and lift up on the right mouse button to
obtain the following.
The text margin in the picture given above can be used to select annotation
items corresponding to either variables or plots. Only this area can be used to
invoke annotations (Use of the right mouse button in the plot or in the time axis
box is reserved for displaying data). When the right mouse button is lifted up, a
popup menu is displayed at the cursor position. If Annotate This Var is selected
then the annotation dialog box for the variable corresponding to the cursor
position will be displayed. If Annotate Displayed Vars is selected then the
annotation will be applied to all variables listed in the left margin. Selecting
Annotate Plot as shown above invokes the following dialog box.
Note that this annotation applies to the Single Graph Data Plots view, while the
previous annotation was for the FIR/PEM range selection. Which annotation
item appears depends on the current view and whether or not ranges are to be
selected for FIR/PEM or Final Trials/Predictions. Now, data will be deleted for
the first three MVs and the second DV. When this is done, the annotation
becomes the following.
As more text is added, the dialog box supports scrolling. By default, the most
current annotation is scrolled into view. As shown above, any annotation text can
be selected. This selected text can be cut, copied and pasted in the normal
fashion. Text can be pasted into other annotation dialog boxes or into your
favorite text editor. Note that when data is cut (removed from the workspace)
the data length is altered. As such, information in the above dialog box is given
in terms of time stamps (since they are invariant) rather than indices. At this
point the annotation on the FIR/PEM range selection has the following
appearance.
This information is given in terms of indices since indices are more convenient
for resetting ranges. As such, when data is cut, both the before and after indices of
all ranges are presented. Note that an annotation is made any time data is altered
indicating that the FIR/PEM model needs to be updated. The last two remarks,
of which only one is shown above, were made when the data was cut and then
when some data was deleted respectively. Since no new range information is
displayed after these comments, it is apparent that the FIR/PEM models have
not yet been updated. After all models are refit, the annotation will be as
follows.
Note - Auto annotation applies only to range selection.
These annotations can be invoked by using the right mouse button wherever the
tag name is displayed. As described in the dialog box, two ranges were selected
and overwritten using the interpolation option. Following this, an independent
range was selected and overwritten with the value immediately preceding the
range.
Note - Data modifications made using Vector Calculations will NOT be auto-annotated. It is up to the user to annotate these modifications.
To enter or modify an annotation, simply select the annotation item using the
right mouse button. Type any desired text. Use the Enter key to start a new line.
When satisfied, select <OK>. To ignore modifications, select <Cancel>. To
remove any segment of text, select the text to be removed in the standard
fashion and select <Clear>. If no text is selected, then <Clear> will remove all
contents of the dialog box.
As mentioned previously, annotations can be accessed at many different levels.
To access submodels a matrix view must be present. Submodels can be
accessed from every matrix view. Every view except MV Correlation will
access the same submodel annotation. Variable annotations can be accessed by
using the right mouse button in the appropriate margins (left for CVs and top
for MV/DVs).
Overall annotations can be accessed by lifting up the right mouse button when
the cursor is in the upper left corner of the model matrix. In the matrix views a
small superscript A as shown below will indicate annotation for any variable or
submodel.
Visual inspection of the picture given above indicates that MV2 and submodel
(2,1) are annotated. Scroll around to observe any other annotated items. All
views will display the same annotation information. Hold the right mouse button
down and move it over submodel (2,1). Nothing will happen until the mouse
button is released. When this is done the following annotation dialog box will be
invoked.
Finally, consider the case where a submodel is merged into the previous demo
example. In this case the source model data sample rate was different than that
contained in the destination file. Hence the data is automatically dropped but the
models are merged in the normal fashion. Results are shown below.
A new row and column (CV3 and MV2) have been added to the matrix. Submodel (3,2) is the new element and has been automatically annotated. The
annotation for this element is given in the following graphic.
Note that if data were included in the merge, then all variables would be
automatically annotated to reflect this operation. When data is dropped, no
annotation is made to the corresponding variables.
Section 13 - Tutorial
13.1 Overview
In previous sections of this document, the main emphasis was to present a
sequential approach to the use of the APC Identifier. While relevant background
and guidelines were furnished in many instances, the focus was nevertheless more
on the mechanical aspects of using the Identifier than on illustrating actual
identification.
This section has been added to briefly show some identification examples that
illustrate a few of the more practical aspects involved in model synthesis. This
information is presented as a high-level overview and is not intended as an
instructional device. For those interested in proficient use of this tool, the Profit
Controller (RMPCT) Implementation class is highly recommended. For those
interested in more detailed use of this tool and a better understanding of
advanced identification topics and procedures, the new Advanced ID class is
recommended.
This chapter is split into two major themes. The first deals with the
general use of the tool using FIR models as the main regression function. The
second illustrates basic use of the PEM approach.
While there are many ways in which the FIR information can be presented, it will
be arbitrarily categorized based on data quality. The categories are split
according to data that was generated using:
Rich input signals
Typical input signals
A few sets of data in each of the above categories will be presented, as will the
nuances of practical model synthesis.
PEM applications will also be presented using various data sets. Basic operation
will first be given using synthetic data for which there is a known answer. The
rest of the discussion will be based on plant data. Pressure data will be used to
show simple use and performance. Furnace data and data with large disturbances
will be presented next. The demo example used through this document will be
solved as will a high purity (very slow dynamics) column. Finally, applications
involving integrators and long delay will be presented.
13.2 Rich Input Signals
Note - The inputs have been designed specifically for this process. FIR models are
fit using positional form and settling times of 10, 12 and 15 minutes
with 25, 30 and 25 coefficients respectively. Corresponding correlation
plots follow.
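For readers curious about what an FIR regression involves, the following sketch fits FIR impulse coefficients by ordinary least squares, which is the bare-bones idea behind the positional-form fit mentioned above. It is an illustration only; the Identifier's regression engine is far more elaborate, and all names and data here are hypothetical.

```python
def fit_fir(u, y, n_coef):
    """Least-squares fit of FIR impulse coefficients h so that
    y[k] ~ sum_j h[j] * u[k-j], solved via the normal equations with
    Gaussian elimination. A bare-bones sketch of FIR identification."""
    rows, rhs = [], []
    for k in range(n_coef - 1, len(u)):
        rows.append([u[k - j] for j in range(n_coef)])
        rhs.append(y[k])
    m = n_coef
    # Build A^T A and A^T b
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(m)]
    # Forward elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    # Back substitution
    h = [0.0] * m
    for r in range(m - 1, -1, -1):
        h[r] = (atb[r] - sum(ata[r][c] * h[c] for c in range(r + 1, m))) / ata[r][r]
    return h

# Synthetic data from a known system h = [0.5, 0.3]
u = [1.0, 0.0, 2.0, -1.0, 3.0, 1.0, -2.0, 0.5]
y = [0.5 * u[k] + (0.3 * u[k - 1] if k > 0 else 0.0) for k in range(len(u))]
h = fit_fir(u, y, 2)
```

With noise-free data and a sufficiently exciting input, the fit recovers the true coefficients, which is why rich input signals matter so much in practice.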
Comparing the diagonal elements illustrates that the designed signals are close to
ideal (i.e., pseudo-white). Cross correlations are near perfect. With these
satisfactory results, the next step is to check the FIR and confidence data.
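The correlation check itself can be sketched in plain Python: a pseudo-white input should have a normalized autocorrelation near 1 at lag 0 and small values elsewhere, with cross-correlations between different MVs near 0. This sketch assumes zero-lag normalization by the signal standard deviations; the tool's exact plots may differ in detail.

```python
def xcorr(x, y, max_lag):
    """Normalized cross-correlation of two signals at lags 0..max_lag.
    With x is y, this is the autocorrelation; lag 0 is exactly 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    out = []
    for lag in range(max_lag + 1):
        s = sum((x[i] - mx) * (y[i + lag] - my) for i in range(n - lag))
        out.append(s / (sx * sy))
    return out

# A short PRBS-like test sequence (illustrative data only)
sig = [1.0, 1.0, -1.0, 1.0, -1.0, -1.0, 1.0, -1.0]
r = xcorr(sig, sig, 2)
```

Inspecting `r` for each input pair is the numerical counterpart of scanning the correlation plots by eye.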
Based on both the FIR and confidence views, it is obvious that elements (1,1) and
(1,3) exist while element (1,2) does not. Similarly, elements (2,2) and (2,3) exist
while (2,1) does not.
A less clear case is presented by CV3. The FIR data shown indicates that all
submodels may exist. FIR results worse than these have been interpreted by some to
indicate model existence. Confidence results, on the other hand, indicate that no
models exist. The answer can be obtained by a closer inspection of the FIR step
response curves. Observe the spike in the last coefficient. As described
previously, this indicates nonstationary behavior. Indeed, this variable
experiences a large drift during the test. Hence, CV3 should be built using
velocity form. When this is done, the following results with respect to FIR and
confidence data will be obtained.
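The essence of velocity form is first-differencing the data, which turns a slow drift into a roughly constant offset that no longer corrupts the fit. A minimal sketch of the idea (illustrative only, not the tool's implementation):

```python
def to_velocity(series):
    """Convert a data vector to velocity (first-difference) form.
    Differencing removes a slow drift (nonstationary behavior), which
    is why rebuilding a drifting CV in velocity form can recover the
    correct model."""
    return [series[i] - series[i - 1] for i in range(1, len(series))]

# A pure ramp drift: in velocity form it becomes a constant increment
raw = [0.0, 1.0, 2.0, 3.0, 4.0]
vel = to_velocity(raw)
```

The trade-off noted in the text, a small loss of accuracy for already-stationary CVs, comes from differencing amplifying high-frequency noise.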
Now, the correct answer is readily apparent. Models (3,1) and (3,3) do not exist,
while model (3,2) does. It would also be possible to rebuild all models using
velocity form. This will only result in a relatively small loss in accuracy for CV1
and CV2. These results are shown below.
With the appropriate forms selected, the Statistical Summary View will illustrate
both the correct density pattern and the fact that the models that exist are of high
quality.
Using these results, the predictive performance for CV1 is illustrated in the next
graph.
The performance speaks for itself. Indeed the model obtained for CV2 is within
2% of the analytical solution. In fact even the high frequency lead term was
correctly captured. The model for CV3 has been identified with the same level of
accuracy. This is to be expected based on the quality levels presented previously.
Predictive performance for CV3 is illustrated next. While the performance looks
poor, the model is in fact correct. This case demonstrates the effect of a large
unmeasured disturbance.
1. High quality models/Good predictions - This is the ideal situation and should
inspire high confidence in the models.
2.
3. Low quality models/Good predictions - In this case the user has conflicting
information. In all data observed to date, this is caused by limited
information content, usually caused by large noise/signal ratios and/or an
insufficient number of steps. Use caution here. Better data is the best
solution, but in some cases the models may be adequate.
4.
Anomalies associated with item 4 above are more annoying than they are serious.
Invariably, they can be detected by judicious use of the Identifier. A powerful but
seldom used function is the ability to select appropriate data ranges when
performing predictions. Results are shown below for the same data as presented
previously.
In the above plot, only 8 single data points have been deselected. The usefulness
of this capability is strikingly apparent.
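The leverage a few deselected points can have on the error statistic is easy to see numerically. The sketch below computes an RMS prediction error with selected sample indices excluded; the data and function name are hypothetical, purely to illustrate the effect.

```python
def rms_error(actual, predicted, excluded=()):
    """RMS prediction error with selected sample indices excluded,
    mimicking the range-deselection capability described above."""
    skip = set(excluded)
    idx = [i for i in range(len(actual)) if i not in skip]
    sse = sum((actual[i] - predicted[i]) ** 2 for i in idx)
    return (sse / len(idx)) ** 0.5

act = [1.0, 1.0, 9.0, 1.0]      # one disturbance-corrupted point at index 2
pred = [1.0, 1.0, 1.0, 1.0]
full = rms_error(act, pred)
trimmed = rms_error(act, pred, excluded=[2])
```

A single bad point caused by an unmeasured disturbance can dominate the error, so excluding a handful of such points can change the apparent model quality dramatically.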
In spite of the aforementioned capability, the need to make manual adjustments is
somewhat time consuming. In cases such as these, this need can be eliminated by
the use of a noise model. This capability is planned to be included as a PEM
option in a future release of the advanced ID module.
WafrDoc1
This plant data shows the response of silicon wafer temperatures to radiant heat
lamps. The thermal transport mechanism is predominately radiation. As such, the
temperature response exhibits integral behavior. There are three CVs and three
MVs. The signals are given below.
Integrator flags are set for all submodels and FIR models are fit using positional
form and settling times of 2, 3 and 4 minutes. The overall Rank Option is set to
NNHT and the <Auto null uncertain models> check box is selected. The
corresponding correlation plots follow.
The input correlations are relatively good. The negative autocorrelation at +1
minute is of concern. However, since it recovers rapidly, it will most likely be
acceptable.
As shown in the second set of plots, the output correlations indicate potential
problems. Diagonal elements are very acceptable. Off-diagonal elements,
however, indicate significant feedback in the data. This occurrence is actually by
design. While the signals were properly designed, this integrating process
required some closed loop control to keep the temperatures in an acceptable
range during the duration of the test. Hence the feedback must simply be
accommodated. The FIR and confidence views are presented next.
FIR responses indicate that models (2,1), (3,1) and (2,3) are questionable.
Existence is shown more clearly in the Confidence View. Only the diagonal
models are statistically relevant.
With the default Rank Option = 1 (NNHT), the corresponding Statistical
Summary View takes the form shown below.
For integrators, these high quality models are outstanding. Usually it is very
difficult to obtain a level 1 rank for integrators. In fact, in many instances level 3
integrators can be considered good. The final corresponding models and subsequent
predictions are shown next.
13.3 Typical Input Signals
This data is characterized by an input signal that has a fairly limited power band.
The band is adequate for some variables and lacking for others. There are four
CVs and one MV. The signals are given below.
As a quick first pass, models are first fit using all default options (positional form;
60, 90 and 120 minute settling times with 30 coefficients). In this case the
correlations have the following characteristics.
This should be deemed suspect (values that are significantly off scale, as
illustrated in this plot, may result in undesirable behavior). Rebuilding using
velocity form gives the improved results depicted below.
While still not ideal, they are acceptable. Proceeding with these settings allows
the generation of the FIR step responses and the calculation of the confidence
estimates. Results for these quantities are given next, first for CV1 and CV2.
In the above plots, the first column corresponds to the FIR step responses while
the second column corresponds to the confidence estimates. Rows correspond to
CV1 and CV2 respectively.
FIR results indicate potential separation concerns. Intuition would dictate the
presence of a model, at least for CV1. Inspection of the confidence estimates gives
a clear indication of model existence for both CVs. In addition, the confidence
estimates indicate that the FIR coefficients become unreliable at settling times
greater than 60-90 minutes. To check this, the models can be rebuilt using 40, 50
and 60 minute settling times. Results for CV1 are given below.
These curves show essentially ideal behavior. FIR responses are self-similar and
the confidence data shows that most of the model is in fact captured in the first
30-40 minutes of the response.
Similar results can be obtained for CV2, as presented below. With CV2, however, a
more judicious choice of the settling times is required to obtain such satisfactory
results. That is, as settling times exceed 60 minutes, the results deteriorate rapidly.
This anomaly, which is somewhat characteristic of this entire case study, is caused
by a lack of power band in the input signal.
Further inspection of the results given above indicates that there is probably some
amount of nonminimum phase behavior (time delay in this case) associated with
CV2. In addition, it is obvious that there is limited steady state information in this
model. As such, this is an ideal candidate for using the <FIR Extension> flag when
fitting the parametric model.
That CV1 and CV2 are quality models is self-evident, as illustrated by the
Statistical Summary results given next.
It was relatively easy to extract reasonable models from this data for CV1 and
CV2. For these variables, model existence is clear and unambiguous. The remaining
CVs illustrate the case where model existence becomes an issue.
Next consider CV3 and CV4. These CVs have a longer response time and
considerably more noise than the previous CVs. However, the same input signal
will be used to build these models. To start, default settling times are also used.
Results in terms of FIR and confidence plots follow.
For CV4, the FIR and confidence answers are consistent. This is clearly a case
where there is no reliable model. For CV3, however, the FIR and confidence
answers apparently conflict. FIR results indicate that the 120 minute settling time
is too long, while the 60 and 90 minute curves are relatively self-similar. The
confidence curves, on the other hand, indicate that the shorter settling times are in
fact not significant.
Confidence curves, such as those presented above for CV3, would in general
indicate that the shorter settling times are either statistically unreliable or are just
too short. Based on FIR intuition, however, it seems reasonable to rebuild using
shorter settling times. Results for 45, 60 and 90 minutes are illustrated in the
following plots.
For CV4 it is still clear that no reliable model exists. For CV3, it can be seen that
the noise level is too high to determine reliable models for the shorter settling
times. The estimates just start to exceed the confidence threshold for the 90 minute
trial. Unfortunately, as the settling times are further increased with insufficient
input power (as illustrated by the 120 minute settling time presented previously),
the model begins to fit the noise in a statistically meaningful fashion. Hence the
shorter settling times do in fact result in statistically unreliable models; however,
the longer settling times are also dubious.
Inspection of the FIR step responses for CV3 illustrates that the 90 minute trial is
not quite able to capture the steady state behavior of the process. The inability to
capture steady state is caused by lack of input range. Corresponding summary
results are given below.
In the final analysis, CV3 is seen to be of questionable quality while CV4 has no
model at all. While CV3 does have relatively reasonable step response curves, it
still should not be considered to be statistically reliable. To understand this more
fully, consider the predictive performance shown below for all CVs.
While the fit is very good in all cases (even for CV4), the amplitude of movement
for CV3 and CV4 relative to the magnitude of the noise should be of concern.
Indeed, this is one of the major limitations in this data set. In addition, the input
power frequency is rather limited. Since the duration of the input steps appears
more as a pulse for CV3, information in both the low (steady state gain) and high
frequency ranges is compromised. Indeed, the transfer function settling time for
this variable exceeds the specified settling time by more than 50%.
As discussed previously, it is possible for a model to exhibit good predictive
performance, yet not be statistically reliable. Here the lack of reliability is due
primarily to the noise and to a lesser extent to the limited duration of the steps.
While there is no doubt that models far worse than these have been used in
practice, the textbook recommendation would be to either gather more
information, or exclude them from the controller design.
To more clearly illustrate the problems associated with noisy data, it can be more
informative to generate the predictions in velocity form. To do this, select the
<Velocity> radio button and then select <Plot Predictions>. The results shown below
are the velocity equivalent of the predictions given previously.
This information is more reflective of the data actually used in the regression
calculations. Since velocity form is used, the predictions are the impulses that
result from the changing input. While causal relationships are clearly demonstrated
for the first two CVs, it would be difficult to state with any certainty that a
relationship exists for the remaining CVs if the above data were all that were
available.
It is precisely this information that is reported in the statistical summary. Thus, this
information reflects the confidence that the model is not attributable to, or unduly
influenced by, noise effects. Even though the models for CV3 and CV4 fit the data
well (in the least-squares sense), their reliability remains in question. In fact, there
is little difference between the reliability of CV3 and CV4. Both models should be
considered unreliable due to the noise level. The fact that CV4 has a slightly
higher noise level than CV3 results in a crossover from a Level 4 to a Level 5 rank. As
such, it is clear that this model should not be used. Note that the Level 4 results are
not that much different. Therefore, while model retention is the Level 4
recommendation, these models should still be considered with some degree of
trepidation.
At this point it should be realized that the statistical information addresses two
concerns that are often encountered. The first concern is the use of small-gain
models. In the current framework, a model is either reliable or it is not. Models
that have gains that are small relative to the noise will automatically be rejected. If
there is sufficient authority in the input signals to move the process outside its
noise band and the model is reliable, then the model is useful regardless of the
numerical gain value. Of course, other considerations such as MV movement
limitations may in the end be the determining factor.
With respect to the second concern, the need to capture high-frequency dynamics
is sometimes in question. Irrespective of controller bandwidth limitations, if the
step response bands encompass the high-frequency dynamics, then there is no need
to have this level of detail in the final model, since it can be considered to be within
the noise level. This issue is addressed directly in the uncertainty estimates.
As a practical point it is worth mentioning that step tests should be designed to
ensure signal-to-noise ratios of about 3. That is, the output (CV) movement should
exceed the noise present by a factor of three. This rule of thumb is consistent with
the statistics provided by the Identifier. Note that for integrators it is desirable to
move the inputs such that the impulse exceeds the noise level. Simply making a
small move and having the integrating nature of the process move the CV outside
the noise is in general not sufficient.
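The signal-to-noise rule of thumb above can be checked directly from test data. Below is a minimal sketch, independent of the Identifier itself; the function name and the idea of estimating noise from a quiet stretch of the CV are illustrative assumptions, not product features:

```python
import numpy as np

def signal_to_noise(cv, quiet_slice):
    """Rough signal-to-noise check for a step-test CV.

    cv          : 1-D array of CV samples from the step test
    quiet_slice : slice covering a period with no input moves,
                  used to estimate the noise level (an assumption here)
    """
    noise = np.std(cv[quiet_slice])   # noise estimate from quiet data
    movement = np.ptp(cv)             # peak-to-peak CV movement
    return movement / noise

# Synthetic example: a 2-unit step response buried in noise of std 0.25
rng = np.random.default_rng(0)
t = np.arange(200)
cv = 2.0 * (t >= 100) + rng.normal(0.0, 0.25, t.size)

ratio = signal_to_noise(cv, slice(0, 100))
print(ratio > 3)   # the rule of thumb asks for a ratio of about 3 or more
```

In practice the quiet period would come from data collected before the test moves begin.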
ColDoc1
This data is characterized by a process that exhibits a very long response time.
Hence the inputs need to be sufficiently exciting over a relatively wide spectrum.
The input signals here are of reasonable quality. There is one CV and three MVs. The
signals are given below.
With long settling times such as this, it might be tempting to adjust the number of
coefficients accordingly. This, however, is not necessary. The length of the settling
time imposes NO restriction on the number of coefficients. Only the curvature of
the response function influences the required number of coefficients. In this case
the data is sampled every minute and the controller is to run every 2 minutes. In
spite of this, the default number of coefficients gives excellent results. Using default
settings with settling times of 11.5, 15 and 20 hours gives the following correlation
plots.
From this figure, it is clear that there is a relatively large autocorrelation. In
addition, there is a strong cross-correlation between flow1 and the feed
disturbance. Results using velocity form are presented next.
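The coefficient-count point above can be made concrete: for a fixed number of coefficients, a longer specified settling time simply stretches the spacing of the coefficient grid rather than requiring more coefficients. A minimal sketch follows; the count of 60 is purely illustrative and is not the Identifier's actual default:

```python
# Spacing of an FIR coefficient grid for a fixed coefficient count.
# The count of 60 is an illustrative assumption, not the Identifier's default.
def grid_spacing_minutes(settling_time_hours, n_coeff=60):
    """Minutes between coefficients when n_coeff spans the settling time."""
    return settling_time_hours * 60.0 / n_coeff

# The three settling times used above all fit the same coefficient count;
# only the spacing between coefficients changes.
for hours in (11.5, 15.0, 20.0):
    print(hours, grid_spacing_minutes(hours))
```

Whether a given spacing is adequate then depends only on how sharply the response curve bends between grid points.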
At this stage the validity of the models has been established. However, since there
is a moderate amount of separation, the best model still needs to be selected. It is
precisely for these conditions that the final pass of the APC Identifier has
been specifically designed. Results of the automated selection process and
subsequent predictions follow.
It is interesting to note that the selected solution corresponds to those trials that are
completely within the statistically valid band. Even though the FIR responses
corresponding to these trials for MV1 and DV1 had not completely settled,
confidence should remain very high due to the duration of the test and the quality
of the predictions.
13.4  Limited Input Signals
This data represents the response of a level to valve opening. The data has a mild
amount of noise and only a single step (down then up). There is one CV and one
MV. The signals are given below.
Single steps, such as those shown above, should never be conducted in actual
practice. Nevertheless, the corresponding correlation results for positional and
velocity forms, respectively, are presented next.
As has been the case in previous results, poor correlation performance can be
drastically improved by using positional form. Unfortunately, for this problem
there is no magic wand. In fact, it will be shown shortly that the velocity form
actually degrades performance.
Performance, in terms of step responses for the two model forms, is highlighted in
the plots presented next.
Note that the velocity form is more prone to separation. This is true in general
and is not restricted to integrating data. Since differencing the data will result in
some low-frequency information loss, this can be expected. The information loss
typically results in a (small) reduction in steady-state gain accuracy. This loss in
accuracy can either overestimate or underestimate the gain (or integration rate, if
appropriate). It is precisely for this reason that it is recommended to at least start
with positional form. In either case, the non-null hypothesis test fails and
therefore the Confidence view is null. The corresponding Statistical Summary
view and prediction plots (for the positional model only) are presented in the
plots that follow.
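The low-frequency information loss from differencing can be illustrated numerically, independent of the Identifier itself: first-differencing white measurement noise raises its standard deviation by a factor of about sqrt(2), while the zero-frequency (steady-state) content of a signal is removed entirely. A minimal sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 100_000)

# Differencing roughly multiplies the std of white noise by sqrt(2) ...
ratio = np.std(np.diff(noise)) / np.std(noise)
print(round(ratio, 2))            # close to 1.41

# ... while removing the mean (zero-frequency content) of any signal:
signal = 5.0 + noise              # a constant offset stands in for steady state
print(round(np.mean(np.diff(signal)), 3))   # close to 0
```

Relative to the total signal, low-frequency (gain) information is therefore both attenuated and buried under amplified noise, which is consistent with starting from positional form.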
Results presented in the Statistical Summary view reflect the special heuristics
used for integrators. Without these heuristics, both plots would display a Level 5
ranking. With velocity form, the degraded step response sensitivity results in a
reduced rank relative to that given for the positional form. To check the
sensitivity, the Rank Option can be changed to 3 to view the following
information.
The corresponding ranking and sensitivity for the positional form are 1 and 0.117,
respectively. Combined Level 3 rankings for integrators are most likely indicative
of decent models. For integrating processes with questionable statistics, pay
particular attention to the prediction results. The goal here should be results such
as those shown above. For difficult cases, try the <Velocity> Plot Predictions option
for more insight.
BlecDoc2
This data represents the response of two key variables in a bleach plant. The data
has a mild amount of noise and only a few steps. There are two CVs and one MV.
The signals are given below.
As is obvious from the data, the response for CV1 has a long dead time and both
CVs exhibit quick dynamics. This data illustrates one of the few legitimate cases
in which the default number of coefficients needs to be increased. Here CV1 and
CV2 will be built using different settling times. To capture the long dead time in
CV1, settling times of 90, 110, and 140 will be used. For CV2, settling times of
10, 15, and 20 will be used. In this case positional form will be used; use of
velocity form here yields slightly poorer results.
Correlation results for the long and short sets of settling times are displayed
below.
The results indicate a low confidence in CV1. In spite of the proper settling times and
a large number of coefficients, the accuracy of the delay estimation should be
considered suspect. To accurately estimate delay, it is necessary to have sufficient
power in the high-frequency portion of the response curve. In addition, the
proper discrete time resolution (number of coefficients) is also required.
Even though the step response band for CV2 is fairly large, the quality of this
model is very good. Note that the confidence data for all three trials are so
similar that they appear as one curve. These results are typical for very fast
responding processes, even with relatively limited data.
Finally, as shown below, even though the models are not necessarily reliable,
they both give excellent predictive performance.
In this instance the dead time for CV1 was 50 minutes, while the time constant
was approximately 10 minutes. To capture this type of response, the increase in
the number of coefficients was not only justified, it was required.
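The need for the larger coefficient count here can be checked with a quick calculation: the model must resolve both the 50-minute dead time and the roughly 10-minute time constant on the same grid. The sketch below uses the common five-time-constant settling convention and an illustrative 2-minute grid spacing; neither is an Identifier specification:

```python
import math

def coefficients_needed(dead_time, time_constant, grid_spacing):
    """FIR coefficients needed to span dead time plus ~5 time constants."""
    settling = dead_time + 5.0 * time_constant
    return math.ceil(settling / grid_spacing)

# CV1 from BlecDoc2: 50-minute dead time, ~10-minute time constant.
# The 2-minute grid spacing is an illustrative assumption.
print(coefficients_needed(50.0, 10.0, 2.0))   # 50 coefficients
```

Half of those coefficients do nothing but represent the dead time, which is why a fast process with a long delay can still demand a large coefficient count.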
13.5  Creating PEM models
To start, a problem with a known solution will be used. The data for this problem
is shown below.
A rich input signal and significant drift characterize this one-input, one-output
problem. This is a subset of the data shown at the beginning of this section
(RichDoc1). For the first try, the Start Order will be set to 1 (in general you
should not use an order of less than 2 or 3). This choice will result in the
following dialog box.
This dialog box will be displayed any time the algorithm detects a potential
problem with the model. With the PEM approach, the settling time is determined
from the model itself. For this problem, the first-order model results in a biased
estimate that has an enormous effect on the model. Usually you would take the
default (for ease of use) and still use the rule of thumb that there should be at
least two self-similar trials. Here we are curious, so we won't null the model.
In this case the known order is 3, the settling time is approximately 12 minutes,
and the gain is 1. Set the Start Order to 3 and Load & Go to get the results below.
Note the self-similar responses for the three trials. Inspection of the transfer
function shows the gain to be 0.989 and TfSettle to be 12.4. In addition, one of the
roots of the D polynomial is 1.0002, which corresponds to the pure drift exhibited
by the data. Not only are B and F correct, so is the noise term.
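The idea of validating an identification run against a known solution can be reproduced outside the product. The sketch below uses a plain least-squares ARX fit, a deliberately simplified stand-in for the full prediction-error method, on a simulated first-order process whose steady-state gain is known to be 1.0:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate y[k] = a*y[k-1] + b*u[k-1] + noise, with known gain b/(1-a) = 1.0
a, b = 0.9, 0.1
u = rng.choice([-1.0, 1.0], size=2000)        # rich binary input
y = np.zeros(u.size)
for k in range(1, u.size):
    y[k] = a * y[k - 1] + b * u[k - 1] + rng.normal(0.0, 0.01)

# First-order ARX fit by least squares: y[k] ~ a_hat*y[k-1] + b_hat*u[k-1]
X = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]

gain = b_hat / (1.0 - a_hat)
print(round(gain, 2))    # close to the true steady-state gain of 1.0
```

Recovering the known gain is the same kind of sanity check the tutorial performs with the order-3 run above; the full PEM machinery adds the noise model and iterative search that this sketch omits.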
Next, the same data will be used, but in this case we will change some of the setup
parameters. Here we will turn the Pfx initial search off, the instrumental variable
approach will be used for initialization, the robust norm will be used, the PEM
bias term will be turned off, QR factorization will be used, and there will be no
scaling. Only one trial will be used, and that trial will correspond to a third-order
model, which can result in the correct answer as shown above. The results will be
compared to the MATLAB solution.
Results from the message window in the APC identifier are as follows.
The output above illustrates that, under the right conditions, both MATLAB and
the APC Identifier will yield the same results. Note, however, that these answers
are not the same as the third-order case run previously. As a word of caution, do
not modify .ini parameters lightly.
Pressure Data
In the next case, data from a pressure loop will be used. Here, there is one MV,
one DV and one CV. The data for this loop is shown below.
In this case, the Start Order is set to 5 and Load & Go is selected. The step responses are:
It is clear that the DV model exhibits behavior that is due to the use of an order
that is probably higher than necessary. The effects are, however, easily
attenuated by the parametric fit. So in general, a slight ringing of the PEM model
should not be of concern as long as it is not too significant, especially if it is
attenuated by the model reduction step.
Furnace Data
This data is taken from a furnace application. Here there is one MV and two
DVs, only one of which will be used.
Note the large deviation in CV1 at the beginning of the test set. This data has
been excluded. For this case DV2 is also nulled. The Start Order is 5; Load & Go.
As with the pressure data, the predictions are stored in an Aux variable and the Aux
variable is plotted in the SingleGraph Data plot view against the inputs and CV.
The fit looks quite reasonable, even after the disturbance hits towards the end of the
test. Consider the case where the front portion of data is not removed. The step
responses for this condition are:
The large initial drift in the CV is not due to the initial moves in the DV, and it is
not handled well by the noise model. Thus the models tend to be degraded. Note
that the noise model is NOT a cure-all. Even with the noise model, it is always
better to remove suspicious data.
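Excluding a suspicious stretch of data before regression amounts to masking out those samples. A minimal sketch, unrelated to the Identifier's own exclusion mechanism, using a static-gain fit where a large unmeasured disturbance corrupts the front of the record:

```python
import numpy as np

# Illustrative test data: 500 samples with a disturbance in samples 0-99
n = 500
u = np.sin(np.arange(n) / 10.0)      # input signal
y = 2.0 * u                          # true static gain of 2.0
y[:100] += 10.0                      # large unmeasured disturbance up front

keep = np.ones(n, dtype=bool)
keep[:100] = False                   # exclude the disturbed front portion

# Fit a static gain with and without the exclusion
gain_all = np.linalg.lstsq(u[:, None], y, rcond=None)[0][0]
gain_trim = np.linalg.lstsq(u[keep][:, None], y[keep], rcond=None)[0][0]
print(round(gain_trim, 2))           # recovers the true gain of 2.0
```

With the disturbed samples included, the fitted gain is noticeably biased; after exclusion the true gain is recovered, which mirrors the furnace example above.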
Large Disturbance
Here is a case where the disturbance starts small and continues to grow as the test
continues. The input signal has a good spectrum; however, its power cannot
overcome the disturbance. The data is shown below.
Setting the Start Order to 3 and selecting Load & Go gives the following step responses.
This is not a pretty picture. Suppose the back end of the data is excluded, as shown
below.
Performing the same calculations then results in the step responses shown next.
When the disturbance is removed, the model does a fairly good job. It is clear in
this case that the noise model could not accommodate this disturbance. It is worth
noting that when fitting the entire data set with an FIR model using velocity form,
a better, though relatively poor, model was obtained. In some instances a priori
knowledge can be used to advantage. This is in no way meant to imply that you
should ever include disturbances such as these in the regression, no matter what
technique you are using.
When comparing FIR and PEM step responses, remember that PEM models tend
to be more sensitive to model order than FIR models are to settling time. Also
remember that when messages warning of too-short data sets are displayed,
better results may sometimes be obtained by turning off the noise terms in the
PEM model.
Demo Data
While the two input guidelines should be adhered to, there are no physical
restrictions on the inputs when using the PEM models. In fact, the PEM approach has
been used with the demo data used throughout this document. The step responses are
given below.
While it is possible to use the PEM models on problems like this, it is simply
impractical. This is a relatively small problem, and the amount of resources
required was unacceptable.
ColDoc1
Next, PEM will be used with the ColDoc1 data, which was already discussed.
Using Load & Go in this case generates a warning message on undersampling.
Nevertheless, the following step responses are generated.
These curves show an extremely long settling time. Remember that for PEM the
default is no compression. This means the regression is performed at the data rate,
which in this case is one minute. These settling times with a one-minute sample rate
are the reason for the warning message. The predictions for these models are quite good.
WafrDoc1
To demonstrate the use of PEM with integrators, a subset of the WafrDoc1 data will
be used. In this instance, the AutoInteg flag will be used to detect integrators.
Note that when this flag is set, NO special consideration is given to the data (i.e.,
no special differencing); the algorithm will simply try to identify the presence of
integrating dynamics from the poles of the PEM model. Using the defaults and
selecting AutoInteg and Load & Go, the following message box is displayed.
This message box tells you that a potential integrator has been found. If
integrating characteristics are found for all trials and you select Yes, then the local
submodel integrator flag will automatically be set to ensure perfect integrators at
the parametric level. If some trials for a given submodel have integrator-like
characteristics but others don't, then the following message box will be displayed.
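The pole-based detection described above amounts to checking whether the identified discrete model has a pole at, or very near, z = 1. A hedged sketch of that test follows; the tolerance value is illustrative and is not the Identifier's actual threshold:

```python
import numpy as np

def looks_like_integrator(denominator, tol=0.01):
    """True if the discrete transfer-function denominator has a pole near z = 1.

    denominator : polynomial coefficients, highest order first,
                  e.g. [1, -1.5, 0.5] for (1 - z^-1)(1 - 0.5 z^-1)
    tol         : illustrative closeness tolerance (an assumption here)
    """
    poles = np.roots(denominator)
    return bool(np.any(np.abs(poles - 1.0) < tol))

print(looks_like_integrator([1.0, -1.5, 0.5]))    # pole at z = 1 -> True
print(looks_like_integrator([1.0, -1.3, 0.4]))    # stable poles -> False
```

Because an estimated pole only approaches 1 rather than landing on it exactly, a closeness test of this kind is needed, which is also why setting the submodel integrator flag to force a perfect integrator is a separate step.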
Note that this message will not be displayed if the submodel integrator flag is
already set. Continuing with the calculations will result in the following step
responses.
BlecDoc2
As a final case, a subset of the BlecDoc2 data described previously will be used.
For this case the approach will fail if the auto delay flag is not set. Even when it is
set, the approach requires several iterations and, as such, cannot be considered
very effective. Work must be done to develop a better delay estimator. With a
start order of 5, the step responses are:
Note that the model corresponding to seventh order failed in this case (the initial
estimate had problems). Again you can see the ringing phenomenon that often
accompanies high-order fits. This is almost never a problem. Note the smooth
(Columns of numeric values, apparently step-response coefficient listings from the example file, are omitted here.)
TI002.PV  None
TI003.PV  None
LI001.PV  None
FC001.SP  None
TC001.SP  None
PC001.PV  None
After reading this data, the corresponding model file will have the following form.
12.0
1.0
3.0
1.0
0.0395
0.0049
0.005
0.03
90.4
20.5
1.0
0.0
16.0
1.0
0.0
0.001
1.0
0.0
TI002.PV  None
TI003.PV  None
LI001.PV  None
FC001.SP  None
TC001.SP  None
PC001.PV  None
After reading this data, the corresponding model file will have the following form.