
Numerical Computing

with Simulink, Volume I


Creating Simulations
Numerical Computing
with Simulink, Volume I
Creating Simulations
Richard J. Gran
Mathematical Analysis Company
Norfolk, Massachusetts
Society for Industrial and Applied Mathematics Philadelphia
Copyright © 2007 by the Society for Industrial and Applied Mathematics.
10 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book may be
reproduced, stored, or transmitted in any manner without the written permission of the
publisher. For information, write to the Society for Industrial and Applied Mathematics,
3600 Market Street, 6th floor, Philadelphia, PA 19104-2688 USA.
Trademarked names may be used in this book without the inclusion of a trademark symbol.
These names are used in an editorial context only; no infringement of trademark is intended.
Maple is a trademark of Maplesoft, Waterloo, Ontario, Canada.
MATLAB, Simulink, Real Time Workshop, SimHydraulics, Stateflow, and Handle Graphics are
registered trademarks of The MathWorks, Inc. For MATLAB product information, please
contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA, 508-647-
7000, Fax: 508-647-7101, info@mathworks.com, www.mathworks.com.
Figure 1.12 is used with permission of Marcus Orvando, president of Erwin Sattler Clocks
of America.
Figure 1.16 was taken by Michael Reeve on January 30, 2004, and its source is the
Wikipedia article Foucault Pendulum. Per Wikipedia, permission is granted to copy, dis-
tribute and/or modify this document under the terms of the GNU Free Documentation
License, Version 1.2 or any later version published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. Subject to disclaimers.
Figure 2.7 is taken from the Wikipedia article Thermostat. Per Wikipedia, permission is
granted to copy, distribute and/or modify this document under the terms of the GNU Free
Documentation License, Version 1.2 or any later version published by the Free Software
Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
Subject to disclaimers.
Library of Congress Cataloging-in-Publication Data
Gran, Richard J., 1940-
Numerical computing with Simulink / Richard J. Gran.
v. cm.
Includes bibliographical references and index.
Contents: v. 1. Creating simulations
ISBN 978-0-898716-37-5 (alk. paper)
1. Numerical analysis--Data processing. 2. SIMULINK. I. Title.
QA297.G676 2007
518.0285--dc22
2007061803
[SIAM logo] is a registered trademark.
To Dr. Richard A. Scheuing: A Friend and Mentor



Contents
List of Figures xi
List of Tables xvii
Preface xix
1 Introduction to Simulink 1
1.1 Using a Picture to Write a Program . . . . . . . . . . . . . . . . . . . . . 1
1.2 Example 1: Galileo Drops Two Objects from the Leaning Tower of Pisa . 5
1.3 Example 2: Modeling a Pendulum and the Escapement of a Clock . . . . 17
1.3.1 History of Pendulum Clocks . . . . . . . . . . . . . . . . . . . . . 18
1.3.2 A Simulation Model for the Clock . . . . . . . . . . . . . . . . . . 20
1.4 Example 3: Complex Rotations – The Foucault Pendulum . . . . . . . . . 24
1.4.1 Forces from Rotations . . . . . . . . . . . . . . . . . . . . . . . . 25
1.4.2 Foucault Pendulum Dynamics . . . . . . . . . . . . . . . . . . . . 26
1.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2 Linear Differential Equations, Matrix Algebra, and Control Systems 33
2.1 Linear Differential Equations: Linear Algebra . . . . . . . . . . . . . . . 33
2.1.1 Solving a Differential Equation at Discrete Time Steps . . . . . . . 36
2.1.2 Linear Differential Equations in Simulink . . . . . . . . . . . . . . 38
2.2 Laplace Transforms for Linear Differential Equations . . . . . . . . . . . 40
2.3 Linear Feedback Control . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.3.1 What Is a Control System? . . . . . . . . . . . . . . . . . . . . . . 44
2.3.2 Control Systems and Linear Differential Equations . . . . . . . . . 48
2.4 Linearization and the Control of Linear Systems . . . . . . . . . . . . . . 49
2.4.1 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.4.2 Eigenvalues and the Response of a Linear System . . . . . . . . . . 51
2.5 Poles and the Roots of the Characteristic Polynomial . . . . . . . . . . . . 55
2.5.1 Feedback of the Position of the Mass in the Spring-Mass Model . . 56
2.5.2 Feedback of the Velocity of the Mass in the Spring-Mass Model . . 57
2.5.3 Comparing Position and Rate Feedback . . . . . . . . . . . . . . . 61
2.5.4 The Structure of a Control System: Transfer Functions . . . . . . . 62
2.6 Transfer Functions: Bode Plots . . . . . . . . . . . . . . . . . . . . . . . 64
2.6.1 The Bode Plot for Continuous Time Systems . . . . . . . . . . . . 65
2.6.2 Calculating the Bode Plot for Continuous Time Systems . . . . . . 65
2.7 PD Control, PID Control, and Full State Feedback . . . . . . . . . . . . . 69
2.7.1 PD Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.7.2 PID Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.7.3 Full State Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.7.4 Getting Derivatives for PID Control or Full State Feedback . . . . 74
2.8 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3 Nonlinear Differential Equations 81
3.1 The Lorenz Attractor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.1.1 Linear Operating Points: Why the Lorenz Attractor Is Chaotic . . . 83
3.2 Differential Equation Solvers in MATLAB and Simulink . . . . . . . . . 86
3.3 Tables, Interpolation, and Curve Fitting in Simulink . . . . . . . . . . . . 87
3.3.1 The Simple Lookup Table . . . . . . . . . . . . . . . . . . . . . . 88
3.3.2 Interpolation: Fitting a Polynomial to the Data and Using the Result
in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.3.3 Using Real Data in the Model: From Workspace and File . . . . . . 92
3.4 Rotations in Three Dimensions: Euler Rotations, Axis-Angle
Representations, Direction Cosines, and the Quaternion . . . . . . . . . . 94
3.4.1 Euler angles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.4.2 Direction Cosines . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.4.3 Axis-Angle Rotations . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.4.4 The Quaternion Representation . . . . . . . . . . . . . . . . . . . . 99
3.5 Modeling the Motion of a Satellite in Orbit . . . . . . . . . . . . . . . . . 105
3.5.1 Creating an Attitude Error When Using Direction Cosines . . . . . 107
3.5.2 Creating an Attitude Error Using Quaternion Representations . . . . 109
3.5.3 The Complete Spacecraft Model . . . . . . . . . . . . . . . . . . . 109
3.6 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4 Digital Signal Processing in Simulink 115
4.1 Difference Equations, Fibonacci Numbers, and z-Transforms . . . . . . . 116
4.1.1 The z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.1.2 Fibonacci (Again) Using z-Transforms . . . . . . . . . . . . . . . . 120
4.2 Digital Sequences, Digital Filters, and Signal Processing . . . . . . . . . . 121
4.2.1 Digital Filters, Using z-Transforms, and Discrete Transfer Functions 121
4.2.2 Simulink Experiments: Filtering a Sinusoidal Signal and Aliasing . 123
4.2.3 The Simulink Digital Library . . . . . . . . . . . . . . . . . . . . . 128
4.3 Matrix Algebra and Discrete Systems . . . . . . . . . . . . . . . . . . . . 130
4.4 The Bode Plot for Discrete Time Systems . . . . . . . . . . . . . . . . . . 135
4.5 Digital Filter Design: Sampling Analog Signals, the Sampling Theorem,
and Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.5.1 Sampling and Reconstructing Analog Signals . . . . . . . . . . . . 137
4.5.2 Analog Prototypes of Digital Filters: The Butterworth Filter . . . . 142
4.6 The Signal Processing Blockset . . . . . . . . . . . . . . . . . . . . . . . 145
4.6.1 Fundamentals of the Signal Processing Blockset: Analog Filters . . 146
4.6.2 Creating Digital Filters from Analog Filters . . . . . . . . . . . . . 148
4.6.3 Digital Signal Processing . . . . . . . . . . . . . . . . . . . . . . . 149
4.6.4 Implementing Digital Filters: Structures and Limited Precision . . . 153
4.6.5 Batch Filtering Operations, Buffers, and Frames . . . . . . . . . . . 160
4.7 The Phase-Locked Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
4.8 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5 Random Numbers, White Noise, and Stochastic Processes 173
5.1 Modeling with Random Variables in Simulink: Monte Carlo Simulations . 173
5.1.1 Monte Carlo Analysis and the Central Limit Theorem . . . . . . . . 174
5.1.2 Simulating a Rayleigh Distributed Random Variable . . . . . . . . 176
5.2 Stochastic Processes and White Noise . . . . . . . . . . . . . . . . . . . . 177
5.2.1 The Random Walk Process . . . . . . . . . . . . . . . . . . . . . . 178
5.2.2 Brownian Motion and White Noise . . . . . . . . . . . . . . . . . . 180
5.3 Simulating a System with White Noise Inputs Using the Wiener Process . 184
5.3.1 White Noise and a Spring-Mass-Damper System . . . . . . . . . . 184
5.3.2 Noisy Continuous and Discrete Time Systems:
The Covariance Matrix . . . . . . . . . . . . . . . . . . . . . . . . 186
5.3.3 Discrete Time Equivalent of a Continuous Stochastic Process . . . . 189
5.3.4 Modeling a Specified Power Spectral Density: 1/f Noise . . . . . . 194
5.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6 Modeling a Partial Differential Equation in Simulink 201
6.1 The Heat Equation: Partial Differential Equations in Simulink . . . . . . . 202
6.1.1 Finite Dimensional Models . . . . . . . . . . . . . . . . . . . . . . 202
6.1.2 An Electrical Analogy of the Heat Equation . . . . . . . . . . . . . 203
6.2 Converting the Finite Model into Equations for Simulation with Simulink 205
6.2.1 Using Kirchhoff's Law to Get the Equations . . . . . . . . . . . . . 206
6.2.2 The State-Space Model . . . . . . . . . . . . . . . . . . . . . . . . 208
6.3 Partial Differential Equations for Vibration . . . . . . . . . . . . . . . . . 212
6.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7 Stateflow: A Tool for Creating and Coding State Diagrams, Complex Logic,
Event Driven Actions, and Finite State Machines 215
7.1 Properties of Stateflow: Building a Simple Model . . . . . . . . . . . . . 216
7.1.1 Stateflow Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . 218
7.1.2 Making the Simple Stateflow Chart Do Something . . . . . . . . . 221
7.1.3 Following Stateflow's Semantics Using the Debugger . . . . . . . . 223
7.2 Using Stateflow: A Controller for Home Heating . . . . . . . . . . . . . . 225
7.2.1 Creating a Model of the System and an Executable Specification . . 225
7.2.2 Stateflow's Action Language Types . . . . . . . . . . . . . . . . . . 229
7.2.3 The Heating Controller Layout . . . . . . . . . . . . . . . . . . . . 230
7.2.4 Adding the User Actions, the Digital Clock, and the Stateflow Chart
to the Simulink Model of the Home Heating System . . . . . . . . . 231
7.2.5 Some Comments on Creating the GUI . . . . . . . . . . . . . . . . 239
7.3 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
8 Physical Modeling: SimPowerSystems and SimMechanics 241
8.1 SimPowerSystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
8.1.1 How the SimPowerSystems Blockset Works: Modeling a Nonlinear
Resistor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
8.1.2 Using the Nonlinear Resistor Block . . . . . . . . . . . . . . . . . 248
8.2 Modeling an Electric Train Moving on a Rail . . . . . . . . . . . . . . . . 251
8.3 SimMechanics: A Tool for Modeling Mechanical Linkages
and Mechanical Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
8.3.1 Modeling a Pendulum with SimMechanics . . . . . . . . . . . . . . 257
8.3.2 Modeling the Clock: Simulink and SimMechanics Together . . . . 260
8.4 More Complex Models in SimMechanics and SimPowerSystems . . . . . 262
8.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
9 Putting Everything Together: Using Simulink in a System Design Process 269
9.1 Specifications Development and Capture . . . . . . . . . . . . . . . . . . 270
9.1.1 Modeling and Analysis: Converting the Specifications into an Ex-
ecutable Specification . . . . . . . . . . . . . . . . . . . . . . . . 271
9.2 Modeling the System to Incorporate the Specifications: Lunar Module
Rotation Using Time Optimal Control . . . . . . . . . . . . . . . . . . . 272
9.2.1 From Specification to Control Algorithm . . . . . . . . . . . . . . . 273
9.3 Design of System Components to Meet Specifications: Modify the Design
to Accommodate Computer Limitations . . . . . . . . . . . . . . . . . . . 276
9.3.1 Final Lunar Module Control System Executable Specification . . . 279
9.3.2 The Control System Logic: Using Stateflow . . . . . . . . . . . . . 283
9.4 Verification and Validation of the Design . . . . . . . . . . . . . . . . . . 285
9.5 The Final Step: Creating Embedded Code . . . . . . . . . . . . . . . . . 286
9.6 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
10 Conclusion: Thoughts about Broad-Based Knowledge 289
Bibliography 291
Index 295
List of Figures
1.1 Simulink library browser. . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Leaning Tower of Pisa. . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Integrator block dialog. . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Simulink model for the Leaning Tower of Pisa experiment. . . . . . . . . 9
1.5 Scope block graph for the Leaning Tower simulation. . . . . . . . . . . . 10
1.6 Leaning Tower Simulink model with air drag added. . . . . . . . . . . . 13
1.7 Scope dialog showing how to change the number of axes in the Scope plot. 13
1.8 Leaning Tower simulation results when air drag is included. . . . . . . . 14
1.9 Adding the second object to the Leaning Tower simulation by vectorizing
the air drag. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.10 Simulating results. The Leaning Tower Simulink model with a heavy
and light object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.11 Right clicking anywhere in the diagram opens a pull-down menu that
allows changes to the colors of the blocks, bold vector line styles, and
other model annotations. . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.12 Components of a pendulum clock. . . . . . . . . . . . . . . . . . . . . . 19
1.13 Simulink model of the clock. . . . . . . . . . . . . . . . . . . . . . . . . 21
1.14 Blocks in the subsystem Escapement Model. . . . . . . . . . . . . . . 22
1.15 Results of the clock simulation for times around 416 sec. Upper figure
is the pendulum angle, and the lower figure is the acceleration applied to
the pendulum by the escapement. . . . . . . . . . . . . . . . . . . . . . 24
1.16 Foucault pendulum at the Panthéon in Paris. . . . . . . . . . . . . . . . . 25
1.17 Axes used to model the Foucault pendulum. . . . . . . . . . . . . . . . . 27
1.18 The Simulink model of the Foucault pendulum. . . . . . . . . . . . . . . 29
1.19 Simulation results from the Foucault pendulum Simulink model using
the default solver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.20 Simulation results for the Foucault pendulum using a tighter tolerance
for the solver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1 Using Simulink to compare discrete and continuous time state-space ver-
sions of the pendulum. . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2 Outputs as seen in the Scope blocks when simulating the model in Figure 2.1. 40
2.3 Linear pendulum model using Transfer Function and Zero-Pole-Gain
blocks from the Simulink Continuous Library. . . . . . . . . . . . . . . 43
2.4 Home heating system Simulink model (Thermo_NCS). This model de-
scribes the house temperature using a single first order differential equa-
tion and assumes that a bimetal thermostat controls the temperature. . . . 45
2.5 Subsystem House contains the single differential equation that models
the house. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.6 Results of simulating the home heating system model. The top figure
shows the outdoor and indoor temperatures, and the bottom figure shows
the cost of the heat over the 24-hour simulation. . . . . . . . . . . . . . . 47
2.7 A typical home-heating thermostat [50]. . . . . . . . . . . . . . . . . . . 47
2.8 Hysteresis curve for the thermostat in the Simulink model. . . . . . . . . 48
2.9 Model of a spring-mass-damper system using Simulink primitives. . . . 52
2.10 Model of a spring-mass-damper system using the state-space model and
Simulink's automatic vectorization. . . . . . . . . . . . . . . . . . . . . 52
2.11 Changing the value of the damping ratio from 0.1 to 0.5 in the model of
Figure 2.10. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.12 Generic components of a control system. . . . . . . . . . . . . . . . . . 55
2.13 A simple control that feeds back the position of the mass in a spring-
mass-damper system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.14 Feed back the velocity of the mass instead of the position to damp the
oscillations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.15 Time response of the mass (position at the top and velocity at the bottom)
for the velocity feedback controller. . . . . . . . . . . . . . . . . . . . . 58
2.16 Using the state-space model for the spring-mass-damper control system. . 59
2.17 Root locus for the spring-mass-damper system. Velocity gain varying
from 0 to 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.18 Root locus for the position feedback control. Gain changes only the
frequency of oscillation. . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.19 One method of determining the velocity of the mass is to differentiate the
position using a linear system that approximates the derivative. . . . . . 64
2.20 Simulation result from the Simulink model. . . . . . . . . . . . . . . . . 64
2.21 Control System Toolbox interface to Simulink. GUIs allow you to select
inputs and outputs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.22 LTI Viewer results from the LTI Viewer. . . . . . . . . . . . . . . . . . . 68
2.23 Using proportional-plus derivative (PD) control for the spring-mass-
damper. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.24 Response of the spring-mass-damper with the PD controller. There are
no oscillations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.25 Proportional, integral, and derivative (PID) control in the spring-mass-
damper system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.26 PID control of the spring-mass system. Response to a unit step has zero
error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.27 Getting speed from measurements of the mass position and force applied
to the mass. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.28 Simulink model to compare three methods for deriving the velocity of
the mass. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.29 Methods for deriving rate are selected using the multiport switch. . . . 77
3.1 Simulink model and results for the Lorenz attractor. . . . . . . . . . . . 83
3.2 Exploring the Lorenz attractor using the linearization tool from the control
toolbox. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.3 Eigenvalues of the linearized Lorenz attractor as a function of time. . . . 85
3.4 Pseudotabulated data for the external temperature in the home heating
model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.5 Home heating systemSimulink model with tabulated outside temperature
added. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.6 Dialog to add the tabulated input. . . . . . . . . . . . . . . . . . . . . . 90
3.7 Real data must be in a MATLAB array for use in the From Workspace
block. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.8 Simulink model of the home heating system using measured outdoor
temperatures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.9 From workspace dialog block that uses the measured outdoor temperature
array. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.10 Using real data from measurements in the Simulink model. . . . . . . . . 94
3.11 Single axis rotation in three dimensions. . . . . . . . . . . . . . . . . . . 95
3.12 Using quaternions to compute the attitude from three simultaneous body
axis rotations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.13 Simulink block to convert quaternions into a direction cosine. . . . . . . 103
3.14 Combining the quaternion block with the body rotational acceleration. . . 104
3.15 Electrical circuit of a motor with the mechanical equations. . . . . . . . . 106
3.16 Controlling the rotation of a spacecraft using reaction wheels. . . . . . . 110
3.17 Matrix concatenation blocks create the matrix Q used to achieve a desired
value for q. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.1 Simulating the Fibonacci sequence in Simulink. . . . . . . . . . . . . . . 117
4.2 Fibonacci sequence graph generated by the Simulink model. . . . . . . . 118
4.3 Simulink model for computing the golden ratio from the Fibonacci
sequence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.4 Generating the data needed to compute a digital filter transfer function. . 123
4.5 Adding callbacks to a Simulink model uses Simulink's model properties
dialog, an option under the Simulink window's file menu. . . . . . . . . 125
4.6 Sampling a sine wave at two different rates, illustrating the effect of
aliasing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.7 Changing the numerator and denominator in the Digital Filter block
changes the filter icon. . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.8 Using the Digital Filter block to simulate the Fibonacci sequence. . . . . 130
4.9 State-space models for discrete time simulations in Simulink. . . . . . . 131
4.10 A numerical experiment in Simulink: Does f_{k+1} f_{k−1} − f_k² = ±1? . . . . 134
4.11 13 iterations of the Fibonacci sequence show that f_{k+1} f_{k−1} − f_k² = ±1
(so far). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.12 The Bode plot for the discrete filter model developed using the Control
System Toolbox interface with Simulink. . . . . . . . . . . . . . . . . . 137
4.13 Illustrating the steps in the proof of the sampling theorem. . . . . . . . . 139
4.14 Simulation illustrating the sampling theorem. The input signal is 15
sinusoidal signals with frequencies less than 500 Hz. . . . . . . . . . . . 140
4.15 Specification of a unity gain low pass filter requires four pieces of infor-
mation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.16 Butterworth filter for D/A conversion using the results of the M-file but-
terworthncs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.17 Using the Signal Processing Blockset to design an analog filter. . . . . . 147
4.18 Moving average FIR filter simulation. . . . . . . . . . . . . . . . . . . . 150
4.19 Pole-zero plot and filter properties for the moving average filter. . . . . . 152
4.20 The Simulink model for the band-pass filter with no computational limi-
tations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.21 The band-pass filter design created by the Signal Processing Blockset
digital filter block. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.22 Band-pass filter simulation results (with and without signal in the pass
band). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.23 Fixed-point implementation of a band-pass filter using the Signal Pro-
cessing Blockset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4.24 Running the Simulink model to determine the minima, maxima, and
scaling for all of the fixed-point calculations in the Band-pass filter. . . . 161
4.25 Illustrating the use of buffers. Reconstruction of a sampled signal using
the FFT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.26 Using the FFT and the sampling theorem to interpolate a faster sampled
version of a sampled analog signal. . . . . . . . . . . . . . . . . . . . . 163
4.27 The analog signal (sampled at 1 kHz, top) and the reconstructed signal
(sampled at 8 kHz, bottom). . . . . . . . . . . . . . . . . . . . . . . . . 165
4.28 Results of interpolating a sampled signal using FFTs and the sampling
theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.29 Phase-locked loop (PLL). A nonlinear feedback control for tracking fre-
quency and phase of a sinusoidal signal. . . . . . . . . . . . . . . . . . . 167
4.30 Voltage controlled oscillator subsystem in the PLL model. . . . . . . . . 168
4.31 Details of the integrator modulo 1 subsystem in Figure 4.30. . . . . . . . 168
4.32 Phase-locked loop simulation results. . . . . . . . . . . . . . . . . . . . 169
5.1 Monte Carlo simulation demonstrating the central limit theorem. . . . . . 175
5.2 Result of the Monte Carlo simulation of the central limit theorem. . . . . 175
5.3 A Simulink model that generates Rayleigh random variables. . . . . . . . 177
5.4 Simulink model that generates nine samples of a random walk. . . . . . . 179
5.5 Nine samples of a random walk process. . . . . . . . . . . . . . . . . . . 179
5.6 Simulink model for the spring-mass-damper system. . . . . . . . . . . . 185
5.7 Motions of 10 masses with white noise force excitations. . . . . . . . . . 185
5.8 White noise block in the Simulink library (masked subsystem). . . . . . 186
5.9 Continuous linear system covariance matrix calculation in Simulink. The
result is the covariance matrix that can be used for simulating with a
covariance equivalent discrete system. . . . . . . . . . . . . . . . . . . . 191
5.10 Noise response continuous time simulation and equivalent discrete sys-
tems at three different sample times. . . . . . . . . . . . . . . . . . . . . 192
5.11 Simulation results from the four simulations with white noise inputs. . . 193
5.12 Simulation of the fractal noise process that has a spectrum proportional
to 1/f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.13 Power spectral density estimators in the signal processing blockset used
to compute the sampled PSD of the 1/f noise process. . . . . . . . . . . 198
6.1 An electrical model of the thermodynamics of a house and its heating
system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
6.2 Simulink model for the two-room house dynamics. . . . . . . . . . . . . 210
6.3 Simulation results when the heat is off. . . . . . . . . . . . . . . . . . . 211
6.4 Simulation results when the heat is on continuously. . . . . . . . . . . . 211
6.5 Using PID control to maintain constant room temperatures (70 deg F). . . 211
7.1 First Stateflow diagram (in the chart) uses a manual switch to create the
event. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
7.2 A Stateflow diagram that contains most of the Stateflow semantics. . . . . 219
7.3 The Simulink model for the simple timer example. . . . . . . . . . . . . 222
7.4 Modified Stateflow chart provides the timer outputs Start, On, and
Trip. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
7.5 Outputs resulting from running the simple timer example. . . . . . . . . 224
7.6 PID controller for the home-heating example developed in Chapter 6. . . 227
7.7 Response of the home heating system with the external temperature from
Chapter 3 as input. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
7.8 A tentative design for the new home heating controller we want to
develop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
7.9 The complete simulation of the home heating controller. . . . . . . . . . 231
7.10 Graphical user interface (GUI) that implements the prototype design for
the controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7.11 Interactions between the GUI and Stateflow use changes in the gain values
in three gain blocks in the subsystem called User_Selections. . . . . . 233
7.12 The first of the two parallel states in the state chart for the heating
controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7.13 The second of the two parallel states in the home heating controller state
chart. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
7.14 The final version of the PID control for the home heating controller. . . . 239
8.1 A simple electrical circuit model using Simulink and SimPowerSystems. 242
8.2 The electrical circuit time response. Current in the resistor and voltage
across the capacitor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
8.3 The SimPowerSystems library in the Simulink browser and the elements
sublibrary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
8.4 A Simulink and SimPowerSystems subsystem model for a nonlinear or
time varying resistor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
8.5 Masking the subsystem with an icon that portrays what the variable re-
sistor does makes it easy to identify in an electrical circuit. . . . . . . . . 248
8.6 Circuit diagram of a simulation with the nonlinear resistor block. . . . . 249
8.7 Using the signal builder block in Simulink to define the values for the
time varying resistance. . . . . . . . . . . . . . . . . . . . . . . . . . . 250
8.8 Using SimPowerSystems: Results from the simple RL network
simulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
8.9 Simulink and SimPowerSystems model of a train and its dynamics. . . . 252
8.10 Complete model of the train moving along a track. . . . . . . . . . . . . 254
8.11 The Simulink blocks that compute the rail resistance as a function of the
train location. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
8.12 Simulation results. Rail resistances, currents in the rails, and current to
the train. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
8.13 Modeling a pendulum using SimMechanics. . . . . . . . . . . . . . . . . 258
8.14 Coordinate system for the pendulum. . . . . . . . . . . . . . . . . . . . 258
8.15 Simulating the pendulum motion using SimMechanics and its viewer. . . 260
8.16 Adding Simulink blocks to a SimMechanics model to simulate the clock
as in Chapter 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
8.17 Time history of the pendulum motion from the clock simulation. . . . . . 261
8.18 SimMechanics model Sim_Mechanics_Vibration. . . . . . . . . 263
8.19 String model (the subsystem in the model above). It consists of 20 iden-
tical spring mass revolute and prismatic joint elements. . . . . . . . . . 263
8.20 String simulation. Plucked 15 cm from the left, at the center, and 15 cm
from the right. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
8.21 Using SimPowerSystems to build a model for a four-room house. . . . . 264
9.1 The start of the specification capture process. Gather existing designs
and simulations and create empty subsystems for pieces that need to be
developed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
9.2 The physics of the phase plane logic and the actions that need to be
developed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
9.3 Complete simulation of the lunar module digital autopilot. . . . . . . . . 279
9.4 Simulating a counter that ticks at 625 microsec for the lunar module
Simulink model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
9.5 The blocks in the jet-on-time counter subsystem in Figure 9.4. . . . . . . 281
9.6 Lunar module graph of the switch curves and the phase plane motion for
the yaw axis. (The graph uses a MATLAB function block in the simulation.) 282
9.7 Stateow logic that determines the location of the state in the phase
plane. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
9.8 Simulink blocks that check different model attributes for model
verification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
List of Tables
3.1 Computation time for the Lorenz model with different solvers and toler-
ances. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.1 Using low pass filters. Simulating various filters with various sample
times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.2 Comparison of the error in converting a digital signal to an analog signal
using different low pass filters. . . . . . . . . . . . . . . . . . . . . . . . 148
4.3 Specification for the band-pass filter and the actual values achieved in the
design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.1 Simulink computation times for the four white noise simulations. . . . . 193
Preface
Simulation has gradually become the preferred method for designing complex systems. The
aerospace industry provided the impetus for this design approach because most aerospace
systems are at best difficult or at worst impossible to test. I started my career working on the
lunar module flight control system. Testing this system in space was extremely expensive
and very difficult. In the entire development cycle, there was, in fact, only one unmanned test
in earth orbit. All of the designs, by necessity, required simulations. Over time, this approach
has become pervasive in many industries, particularly the automotive industry. Even in
product developments as mundane (seemingly) as disk drives for computers, simulation has
become the dominant method of design.
A digital simulation of the environment that a system lives in is crucial in this
modern simulation based design process. The accurate representation of this environment
allows the designer to see how well the system being designed performs. It allows the
designer to verify that the design meets all of the performance specications. Moreover,
as the simulation develops, the designer always can see how the performance requirements
pass through to the various subsystems in the design. This refinement of the specification
allows the model to, in essence, become the specification. The simulation environment can
also allow the designer of embedded digital signal processing and control algorithms to test
the code directly as it is developed to ensure that it satisfies the specification. The last,
and newest, step in this process is the ability to use the simulation model automatically to
generate the computer code for computers in the system. This can eliminate most of the
hand coding that is required by the current practice. (In most companies this process
consists of handing a stack of papers with the written specification to a team of computer
programmers for coding.) This approach eliminates two potential sources of error: the
errors introduced by the conversion of the design into a written specication and the errors
introduced by the manual coding.
Simulink® is a remarkable tool that fills the niche for a robust, accurate, and easily used
simulation tool. It provides a visual way of constructing a simulation of a complex system
that evolves in time (the designer can optimize the system by optimizing the mathematical
model's response). This model-based design approach provides accurate simulation as
various numerical constants in the model are changed. It allows the user to tune these
constants so that the simulation accurately portrays the real world. It then allows sharing
of the model among many different users so they can begin to design various components
or parts of the system. At the same time, it provides a means for understanding design
problems whose solutions will make the system operate in a desired way. Finally, the
resulting design's embedded software is rapidly convertible into embedded code that is
directly usable for testing in a stand-alone computer. All of this can be accomplished using
one easily understood graphical environment. A major thesis of this book is that Simulink
can form the basis for an integrated design environment. As your use of the tool evolves,
the way this can happen should become more apparent.
Because simulation is such an important adjunct to numerical analysis, the book can
be a supplement for a course on numerical computing. It fits very well with an earlier
SIAM book, Numerical Computing with MATLAB by Cleve Moler [29] (which we refer to
so frequently that its title is abbreviated NCM in the text), and it has been written with the
same eclectic mix of practice and mathematics that makes Cleve's book so much fun to read.
I hope that this work favorably compares with – and comes close to matching the quality
of – Cleve's work. As is the case with Numerical Computing with MATLAB, this book is
based on a collection of 88 model files that are in the library Numerical Computing with
Simulink (which we abbreviate as NCS), which can be downloaded from the Mathematical
Analysis Co. web page (www.math-analysis.com). This library of Simulink models and
MATLAB® M-files forms an essential part of the book and, through extensions that the
reader is encouraged to peruse, an essential part of the exercises.
The book will take you on a tour of the Simulink environment, showing you how to
develop a system model and then how to execute the design steps to make the model into
a functioning design laboratory. Along the way, you will be introduced to the mathematics
of systems, including difference equations and z-transforms, ordinary differential equations
(both linear and nonlinear), Laplace transforms, numerical methods for solving differential
equations, and methods for simulating complex systems from several different disciplines.
The mathematics of simulation is not complete without a discussion of random vari-
ables and random processes for doing Monte Carlo simulations. Toward this end, we
introduce and develop the techniques for modeling random processes with predetermined
statistical properties. The mathematics for this type of simulation begins with white noise,
which is a very difficult entity to pin down. We introduce the concept of Brownian motion
and show the connection between this process and the fictitious, but useful, white noise
process. The simulation of the Brownian process in Simulink is developed; it is then used
to form more (statistically) complex processes. We review and show how to simulate and
analyze random processes, and we formulate and use the power spectral density of a process.
We introduce other tools, in addition to Simulink, from The MathWorks. Each of
the tools is an expansion into a different domain of knowledge. The first tool is the
Signal Processing Blockset that extends Simulink into the domain of signal processing
(both analog and digital). The second tool, Stateflow®, expands Simulink to include state
charts and signal flow for modeling event driven systems (i.e., systems where the actions
start at times that are asynchronous with the computer's clock). This tool naturally extends
Simulink into the traditional realm of finite state machines.
The third tool, SimPowerSystems, extends Simulink into the realm of physical mod-
eling and in particular into the realm of electrical circuits including power systems, motor
drives, power generation equipment including power electronics, and three-phase power
transmission lines.
The last tool, SimMechanics, is also a physical modeling tool; it develops models of
mechanical systems such as robots, linkages, etc. These tools can all work together in the
common Simulink environment, but they all have their own method for displaying a picture
of the underlying system.
Both MATLAB and Simulink are available in student versions. All of the examples
in this book use the tools in the student version supplemented with the Control System
Toolbox, the Signal Processing blockset, SimMechanics, and SimPowerSystems, which
can be purchased separately. If you are not a student but wish to learn how to use Simulink
using this book, the MathWorks can provide a demonstration copy of MATLAB, Simulink,
and the add-ons; contact them at http://www.mathworks.com.
This book not only introduces Simulink and discusses many useful tricks to help
develop models but it also shows how it is possible for Simulink to be the basis of a
complete design process. I hope the material is not too theoretical and that you find the
models we will work with are fun, instructive, and useful in the future.
As is always the case, innumerable people have contributed to the genesis and devel-
opment of this book. I thank Cleve Moler for sharing an early manuscript of his wonderful
book and The MathWorks for many years of interesting and stimulating work. In particu-
lar, I wish to thank the MathWorks staff with whom I have had many long and interesting
discussions: Rob Aberg, Paul Barnard, Jason Ghidella, Ned Gully, Loren Shure, and Mark
Ullman, and The MathWorks' CEO, Jack Little. These interesting and productive discus-
sions helped me create the various examples and models described in this book.
Words are never enough to thank one's family. My wife, and soulmate, Jean Frova
Gran, provides support, joy, affection, and love. My daughter Kathleen keeps my thoughts
focused on the future. Her hard work and dedication to the achievement of her goals, despite
many small-business frustrations, is laudable.
My extended family includes two stepgrandchildren, Ian and Ila, who provide me
with insight into the joy brought by the process of learning about this unbelievable world
we live in. My stepdaughters Elizabeth and Juliet have a sense of humor and a joie de vivre
that are inspirational.
Finally, I need to say thank you to my sister Cora and her wonderful husband,
Arthur, who have always been loving and supportive in more ways than can be stated; Cora
is, as the self-proclaimed matriarch of the Grans, much more to me than a mere sister.
Dr. Richard J. Gran
Mathematical Analysis Co.
www.math-analysis.com
Norfolk, Massachusetts, August 2006
Chapter 1
Introduction to Simulink
The best way to follow the details of this book is to have a copy of Simulink and work
the examples as you read the text. The examples described in the book are all in the
Numerical Computing with Simulink (NCS) library that is available as a download from
The MathWorks (www.mathworks.com). The files are located in an ftp directory on the site.
As you work your way through this text, the Simulink and MATLAB files you will
run are in boldface Courier type. When you need to run a model or M-file, the text
shows the exact MATLAB command you need to type.
After you download the NCS library, place it on the MATLAB path using the Set
Path command under the MATLAB file menu. Alternatively, you can save the NCS files
and use the MATLAB directory browser (at the top of the MATLAB window) or the cd
command to change to the directory that contains the NCS library.
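(The following lines are an aside, not part of the original text, and the folder name is made up
for illustration.) From the MATLAB command line, the equivalent of the Set Path dialog is the
addpath command, and cd changes the working directory:

% Put the downloaded NCS library on the MATLAB path (hypothetical folder name)
addpath('C:\work\NCS')
% ...or simply change into the folder that contains the NCS files
cd('C:\work\NCS')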
1.1 Using a Picture to Write a Program
In order to understand the Simulink models that we use to illustrate the mathematical princi-
ples, this chapter provides the reader with a brief introduction to Simulink and its modeling
syntax. In my experience, the more familiarity a user has with the modern click-and-drag
technique for interfacing with a computer, the easier it is to build (and understand) a Simulink
model.
In essence, a Simulink model provides a visual depiction
of the signal flow in a system. Conceptually, signal flow di-
agrams indicate that a block in the diagram (as represented
by the figure at left) causes an operation on the input, and the
result of that operation is the output (which is then possibly the
input to the next block). The operation can be as simple as a gain block, where the output
is simply the product of the input and a constant (called the gain).
Signal flow in the diagram splits, so several different blocks have the same input.
Then the signal can be coalesced into a single signal through addition (subtraction) and
multiplication (division). The diagram illustrates this; the output y is y = K1u − K2u
(or y = (K1 − K2)u). The power of this type of diagram is that K1 or K2 may represent an
operator which might be a matrix, a difference equation, a differential equation, integration,
or any of a large number of nonlinear operations.

Figure 1.1. Simulink library browser.
The signals in a block diagram (y, u, and any intermediate terms not explicitly named)
are usually functions of time. However, any independent variable may be used as the
underlying simulation variable. (For example, it may be the spatial dimension x, or it might
simply be interpreted as an integer representing the subscript for a sequence of values.)
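(This short sketch is an aside, not from the original text; the numerical values are illustrative
only.) Written as ordinary MATLAB over a vector of time samples, the branch-gain-sum
pattern described above is simply

% A signal is a function of time; the branch feeds u to two gains, and the
% sum recombines them, so y = K1*u - K2*u, which equals (K1 - K2)*u.
t = 0:0.01:1;          % time samples
u = sin(2*pi*t);       % an input signal
K1 = 3; K2 = 1;        % the two gains
y = K1*u - K2*u;       % identical to (K1 - K2)*u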
The Simulink environment consists of a library of elementary building blocks that are
available to build a model and a window where the user builds the model. Figure 1.1 shows
the Simulink library browser. Open this browser by typing
simulink
at the MATLAB command line (or alternatively by clicking on the Simulink icon that
appears below the menu at the top of the MATLAB window). In either case, the result is
that you will open the Simulink browser and a window for building the model.
The browser consists of a catalog of 16 types of blocks. The catalog is a grouping of the
blocks into categories that make it easier for the user to find the appropriate mathematical
element, function, operation, or grouping of elements. A quick look at the list shows
that the first element in the catalog is the Commonly Used Blocks. This, as the name
implies, is redundant in that it contains blocks from other elements in the catalog. The
second set of elements in the library is denoted Continuous. This element contains the
operator blocks needed to build continuous time ordinary differential equations and perform
numerical integration. Similarly, the library element Discrete contains the operators
needed to develop a discrete time (sampled data) system. All math operations are in the
Math Operations catalog element.
To understand how to build a Simulink model, let us build a simple system.
In many applications, simple trigonometric transformations are required to resolve
forces into different coordinates. For example, if an object is moving in two dimensions
with a vector velocity v at an angle θ relative to the x-axis of a Cartesian coordinate system
(see figure), then the velocity along the x-axis is v cos(θ) and along the y-axis is v sin(θ).
A Simulink model that will compute this is in the figure
at the right. If you do not want to build this model from
scratch, then type
Simplemodel
at the MATLAB command line. However, if this is the first
time you will be using Simulink, you should go through the
exercise of building it. Remember that the model building
uses a click-and-drag approach. As we discussed above, the
trigonometric functions in the blocks are operators in the sense
that they operate on the input signal v to create the output.
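(This aside is not in the original text; the numbers are illustrative.) The velocity resolution
described above can, of course, be checked directly at the MATLAB command line:

v = 10;            % speed, the value the model reads from the workspace
theta = pi/6;      % angle relative to the x-axis, in radians (illustrative value)
vx = v*cos(theta)  % velocity component along the x-axis
vy = v*sin(theta)  % velocity component along the y-axis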
To build the model, open Simulink and use the empty model window that opens or, if
the empty window is not open, click on the blank paper icon at the upper left of the Simulink
library browser (the circled icon in the Simulink library browser, Figure 1.1). The empty
model window that is open will appear with the name Untitled. All of the mathematical
functions needed for building this model are in the Math Operations section of the browser.
This section of the browser is opened by clicking on the words Math Operations in the
browser or by clicking on the + symbol at the Math Operations icon in the right side of
the browser window. (The browser is set up to work the same as the browser in Windows.)
Math Operations contains an icon for all of the math operations available for use in Simulink
models (in alphabetic order). We will need the Trigonometric Function block. To insert
this block in the model, click on it and drag it into the untitled model window. We will
need two of these blocks, so you can drag it twice. Alternatively, after the rst block is in
the model window, you can right click on the block in the diagram (which will create a copy
of the block) and then (still holding down the right click button on the mouse) drag the copy
to another location in the diagram.
We will assume that the velocity v is a MATLAB variable that is in the MATLAB
workspace. To bring this variable into the Simulink model, a block from the Sources library
is required. To open this library, locate the Sources Library in the Simulink Browser.
This will open a large number of Simulink blocks that provide sources (or inputs) for the
model. The Constant block allows you to specify a value in MATLAB (created using the
MATLAB command window), so click on it and drag it into the untitled model window.
To make this input the velocity v, double click on the Constant block to open the Block
Parameters window. In general, almost all Simulink blocks have some form of dialog that
sits (virtually) behind the block; open these dialogs by double clicking the block. As we
go through the steps in building a model, we will investigate some of the dialogs. It is a
good idea to spend some time familiarizing yourself with the contents of these dialogs by
browsing through the library one block at a time and opening each of their dialogs to see
what is possible. For now, in the Block Parameters window change the Constant value to
v, and click OK to close the Block Parameters window.
We are now ready to create the signal flow in the diagram. Look carefully at the icons
in the model. You will see arrowheads at either the input or output (or both) for each of
the blocks. You can connect blocks manually or automatically. To connect them manually,
click the mouse on the arrowhead at the output of the constant block, drag the line to the
input of the Trigonometric Function block (either one), and release the mouse button when
the icon changes from a plus sign to a double plus sign. This should leave a line connecting
the input to the trig block. To connect the second trig block, start at the input arrow of
this block and drag the line to the point on the previous connection that is closest to the
block. This will take a little getting used to, so try it a couple of times. The last part of
the modeling task is to change the sin function (the default for the trigonometric function
from the library) to a cos. This block has a dialog behind it that allows changes, so double
click on the block that will become the cosine and use the pull down menu in the dialog
called Function to select cos. Two additional boxes in the dialog allow you to specify the
output type (auto, real, or complex) and the sample time for the signal. We can ignore these
advanced functions for now. Try to move the blocks around so they look like the blocks in
the Simplemodel gure above.
As you were connecting the first two blocks, a message from Simulink should have
appeared, telling you that there is an automatic way of connecting the blocks. Do this by
clicking (and thereby highlighting) the block that is the input and, while holding down the
control (Ctrl) key, clicking on the block that this input will go to. Simulink will then create
(and neatly route) a line to connect the two blocks.
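(The following aside is not part of the original text, and the library paths and parameter names
are given to the best of my knowledge; check them against the documentation for your release.)
The same small model can also be assembled programmatically from the MATLAB command
line, which is sometimes convenient once you know which blocks you need:

% Build a copy of the simple model without using the mouse (sketch only).
new_system('Simplemodel2')                       % hypothetical model name
open_system('Simplemodel2')
add_block('simulink/Sources/Constant', 'Simplemodel2/Speed', 'Value', 'v')
add_block('simulink/Math Operations/Trigonometric Function', ...
    'Simplemodel2/Cosine', 'Operator', 'cos')
add_block('simulink/Math Operations/Trigonometric Function', ...
    'Simplemodel2/Sine', 'Operator', 'sin')
add_line('Simplemodel2', 'Speed/1', 'Cosine/1')  % output port 1 to input port 1
add_line('Simplemodel2', 'Speed/1', 'Sine/1')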
This simple model introduces you to click and drag icon-based mathematical models.
It is simple, obviously too simple for the work needed to create it. It would be far easier to
do this in MATLAB.
Simulink provides a visual representation of the calculations involved in a process. It
is easier to follow complex calculations visually, particularly if they involve flow in time.
Toward this end, it is very possible to build a diagram that obscures the flow rather than
enhances it. We call such a diagram a spaghetti Simulink model (like spaghetti code).
To avoid this, it is important to make the connections clear and the model visually easy to
navigate. Simulink allows you to do this by dragging the icons around the diagram to make
it look better. It is a good idea to get used to this right from the start. Therefore, spend a
few minutes manipulating the diagram with the mouse. In addition, blocks move (nudged
up or down and back and forth) with the arrow keys. Simply highlight the block and press
the arrow key to move it one grid point.
To flip a block, highlight it and use Ctrl + i, and to rotate a block, highlight it and use
Ctrl + r. Once again, try these steps. If you want to save the example you created, click the
floppy disk icon at the top of the model window and save the model in any directory you
want. (Be careful not to overwrite the model Simplemodel in the NCS directory.)
1.2 Example 1: Galileo Drops Two Objects from the
Leaning Tower of Pisa
To illustrate a Simulink model that solves a differential equation, let us build a model that
simulates the experiment that is attributed to Galileo. In the experiment, he went to the
upper level of the Leaning Tower of Pisa (Figure 1.2¹) and dropped a heavy object and a light
object. The experiment demonstrated that the objects both hit the ground at the same time.
Is it true that this would happen? Let us see (this model is in the NCS library, where it is
called Leaningtower – you can open it from the library, but if this is your first experience
with Simulink, use the following instructions to create the model instead).

Figure 1.2. Leaning Tower of Pisa.

¹This photograph is from my personal collection. It was taken during a visit to Pisa in May of 2006. If Galileo
performed the experiment, he most likely would have used the upper parapet at the left of the picture. (Because
of the offset at the top, it would have been difficult to use the very top of the tower.)
Thanks to Newton, we know how to model the acceleration due to gravity. Denote
the acceleration at the surface of the earth by g (32.2 ft per sec per sec). Since the force on
the object is −mg and Newton's second law says that

F = ma,

we have

m d²x/dt² = −mg.

From this, it follows immediately that the mass of the object does not enter into the calcu-
lation of the position (and velocity), and the underlying equation of motion is

d²x/dt² = −g.

We assume that Galileo dropped the balls from the initial location of 150 ft, so the initial
velocity is dx(0)/dt = 0 and the initial position is x(0) = 150.
Even though this formulation of the equation of motion explicitly shows that Galileo's
experiment has to work, we continue because we are interested in seeing if the experimental
results would in fact have shown that the objects would touch the ground at the same
time. Initially we model just the motion as it would be in a vacuum. A little later in this
section, we will add air drag to the model. The model of any physical device follows
this evolutionary path. Understanding the physical environment allows the developer to
understand the ramications of different assumptions and then add renements (such as air
drag) to the model.
The equation above is probably the first differential equation anyone solves. It has a
solution (in the foot-pound-second [fps] system) given by

$$ \frac{dx(t)}{dt} = -32.2\,t, \qquad x(t) = -16.1\,t^2 + 150. $$

Since the experiment stops when the objects hit the ground, the final time for this experiment
is when x(t) = 0. From the analytic solution, we find that this time is

$$ t = \sqrt{\frac{150}{16.1}}. $$
Go into MATLAB and compute this value. MATLAB's full precision for the square root
gives
>> format long
>> sqrt(150/16.1)
ans =
3.05233847833680
We will return to this calculation after we use Simulink to model and solve this
equation because it will highlight a feature of Simulink that is numerically very important.
Now, to build the Simulink model of the motion we need to create the left-hand side
of the differential equation and then integrate the result twice (once to get the velocity and
then again to get the position).
Five types of Simulink blocks are required to complete this model:
Three Constant blocks are used to input the value of g, to provide the initial position,
and to provide the value against which the position is tested to determine whether
the object has reached the ground.
Two integrator blocks are needed to compute the velocity and position from the
acceleration.
One block will perform the logic test to see when the object hits the ground.
One block will receive the signal from the logic block to stop the simulation.
One block will create a graph of the position and velocity as a function of time.
These blocks are in sections of the Simulink library browser as follows:
The inputs are in the Sources library.
The integrator is in the Continuous library.
The block to test to see if the object has hit the ground is a Relational Operation, and
it is in the Logic and Bit Operations library.
The block to stop the simulation is in the Sinks library.
The graph is a Scope block and is in the Sinks library.
So let us start building the model. As we did above, open up the Simulink browser
and a new untitled model window. Accumulate all of the blocks in the window before we
start to make connections. Thus, open the Sources, Continuous, Logic and Bit Operations,
and Sinks libraries one by one, and grab the icons for a Constant, an Integrator, a Relational
Operator, a Scope, and the Stop, and drag them into the untitled model window.
The integrator block in the Continuous library uses the Laplace operator 1/s to denote
integration. This is a bit of an abuse of notation, since strictly speaking this operator
multiplies the input only if it is the Laplace transform of the time signal. However, Simulink
had its origins with control system designers, and consequently the integration symbol
remains in the form that would appear in a control system block diagram.
In fact, the integrator symbol invokes C-coded numerical integration algorithms that
are the same as those in the MATLAB ODE suite to perform the integration. The C-coded
solvers are compiled into linked libraries that are invoked automatically by Simulink when
the simulation is executed. (Simulink handles all of the inputs, outputs, initial conditions,
etc. that are needed.) We will talk a little more about the ordinary differential equation
solvers later, but if the reader is interested in more details about the numerical integration
solvers, consult Chapter 7 of NCM [29].
Figure 1.3. Integrator block dialog.
Make copies of the integrator and the constant block (since we need two integrators
and three constants); as before, either copy by right clicking or drag a second block into
the model from the browser.
Make the connections in the model. Next, change the Constant block values by double
clicking each block to open its dialog box. Change the first of these so its value is g, change the
second so its value is 0 (which will be used to test when the position of the object is at
the ground), and then change the third so its value is 150 (the initial position of the object
at the top of the Leaning Tower of Pisa).
Modify the integrator block to bring the initial conditions into the model. When
you double click the integrator block, the Block Parameters menu, shown in Figure 1.3,
appears.
In this menu, you can change the way the numerical integration handles the integral.
The options are as follows:
Force an integration reset (to the initial condition) when an external reset event occurs.
The reset can be triggered when the reset signal is increasing (i.e., whenever the reset
signal is increasing, the integrator is reset), when it is decreasing, or in either case (so
the integration proceeds only while the reset signal is unchanging); finally, the reset
can be forced when the reset signal crosses a preset threshold.
The initial conditions can be internal or external. If the initial condition is internal, there
is a box called Initial condition (as shown above) for the user to specify its value.
When it is set to external, an additional input to the integrator appears on
the integrator icon. We will use this external initial condition to input the height of
the leaning tower.
The output of the integrator can be limited (i.e., it can be forced to stay constant if
the integration exceeds some minimum or maximum value).
Other options are available that will be discussed as we encounter the need for them.
Set one of the integrators to have an external initial condition, and leave the other at
its default value. (The default is internal with an initial condition of 0.) Then connect the
blocks, as we did before, so the model looks like Figure 1.4.

Figure 1.4. Simulink model for the Leaning Tower of Pisa experiment.
The default time for the simulation is 10 sec, which is ample time for the object to hit
the ground. Before we start the simulation, we need to create the variable g (we used the
name g in the dialog for the Constant, so it must exist in the MATLAB workspace). To do
this, in MATLAB type the command
g = 32.2;
Now start the simulation by clicking the right-pointing play arrow (highlighted with
a circle in Figure 1.4). The simulation will run. Double click on the Scope icon and then
click the binoculars (at the top of the Scope plot window) to scale the plot properly. The
result should look like Figure 1.5.
Figure 1.5. Scope block graph for the Leaning Tower simulation.
Notice that the simulation stopped at a time just past 3 sec. We can see how accurately
Simulink determined that time by zooming in on the plot. If you put the mouse pointer over
the zero crossing point and click, the plot scale will change by about a factor of three. If
you do this a number of times, you will see that the crossing time is about 3.0523. The
plot will not resolve any finer than that, so we cannot see the zero crossing time any better
than this.
The default for Simulink is to use the variable step differential equation solver ode45.
This variable step solver forces the integration to take as long a step as it can consistent
with an accuracy of one part in 1000 (the default relative tolerance). Simulink also sends
(by default) the time points at which the integration occurred back to MATLAB. The time
values are in a vector in MATLAB called tout. To see what the zero crossing time was,
get the last entry in tout by typing
tout(end)
To get the full precision of the calculation, type
format long
The answer should be 3.05233847833684. Remember that the stop time computed
above with MATLAB (using the analytic solution of the equation) was 3.05233847833680, a dif-
ference of 4 in the last decimal place (i.e., a difference of 4 x 10^-14, only a couple of orders
of magnitude larger than MATLAB's floating-point precision eps). How can the Simulink
numerical integration compute this so accurately?
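As a quick check, the whole comparison fits in a few MATLAB lines (a sketch only; it
assumes the Leaning Tower model has just been run, so that tout exists in the workspace):

format long
t_sim  = tout(end)        % stop time detected by Simulink
t_true = sqrt(150/16.1)   % analytic stop time
abs(t_sim - t_true)       % difference, on the order of 1e-14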
This remarkable accuracy in the numeric calculation of the stopping time is because
of a unique feature of Simulink. The variable step integration algorithms that are used to
solve the differential equations in a Simulink model will compute the solution at time steps
that are as big as possible consistent with the relative tolerance specified for the algorithm.
If, in the process of computing the solution, a discontinuity occurs between any two times,
the solver forces an interpolation to determine the time at which the discontinuity occurred.
Only certain Simulink blocks cause the solver to interpolate. These are as follows:
Abs: the absolute value block.
Backlash: a nonlinear block that simulates a mechanical system's backlash.
Dead zone: a nonlinear system in which outputs do not change until the absolute
value of the input exceeds a fixed value (the variable called deadzone).
Hit crossing: a block that looks at any signal in the Simulink diagram to see if it
matches a particular value (the hit).
Integrator: one of the options in the integrator is to reset the integration. This reset
value triggers the zero crossing algorithm. The integrator also triggers the zero
crossing algorithm if the integration is limited.
MinMax: this block computes the minimum or maximum of a signal. When an
input to this block exceeds the maximum or is below the minimum, the interpolation
algorithm calculates the new maximum or minimum accurately.
Relational operator: this operation, in this problem, detects when the object crosses
the ground (a zero crossing in this case).
Saturation: a signal entering this block is limited to be between an upper and a lower
bound (the saturation values). The solver interpolates the time at which the signal
crosses the saturation values.
Sign: this block outputs +1 or -1, depending on whether the signal is positive or negative.
The solver interpolates the time at which the signal changes sign.
Step: this is an input function that jumps from one value (usually 0) to another value
(usually 1) at a specified time.
Subsystem: a subsystem is a block that contains other blocks. Their use makes
diagrams easier to read. They also allow simulations to have blocks that go on and
off. Blocks inside a subsystem do not appear at the top level of the system, but you can
view them by double clicking the subsystem block. Subsystems come on (are enabled)
using a signal in the Simulink model that connects to an icon on the subsystem. If you
place an Enable block in a subsystem, the icon automatically appears at the top level
of the subsystem. Whenever the system enable or disable occurs, the solver forces a
time step.
Switch: there is a manual switch that the user can throw at any time. The solver
computes the time the switch state changes.
Now that we have simulated the basic motion of Galileo's experiment, let us add the
effects of air resistance to the model and see if, in fact, the objects hit the ground at the same
time.
Air drag creates a force that is proportional to the square of the speed of the object. To
first order, an object with cross-sectional area A moving at a velocity v in air with density ρ
will experience a drag force given by

$$ f_{\text{Drag}} = \tfrac{1}{2}\,\rho\, v^2 A. $$

(In the model so far we have assumed that the force from gravity is downward and is
therefore negative, but the air drag force is up, so it is positive.) In the fps unit system we
have been using, the value of ρ is 0.0765 lbf/ft³. Let us assume that the objects are spherical
and that their diameters are 0.1 and 0.05 ft. (If they are made of the same material, the
large object will then be 8 times heavier than the small object.)
The acceleration of the objects is now due to the force from the air drag combined
with the force from gravity, so the differential equation we need to model is now

$$ \frac{d^2x}{dt^2} = \frac{1}{2}\,\rho A \left(\frac{dx}{dt}\right)^2 - g. $$
To get the air drag force for each object we need to know their cross-sectional areas. Since
the cross-sectional area is πr², the drag term for the large object is proportional to
(1/2)(0.1)² = 0.005, and for the small object it is proportional to (1/2)(0.05)² =
0.00125 (they differ by a factor of 4).
To create the new model, we need to add the air drag terms. Thus, we need a way to
add the force from the air resistance to the force from gravity. This uses a Sum block from
the Math library. This block is a small circle (or a rectangle; my personal preference is
the circle since it makes it easy for the eye to distinguish summations from the other icons,
which are mostly rectangular) with two inputs, each of which has a small plus sign on it. So
click and drag this block into the leaning tower model. To get the square of the velocity, we
need to take the output of the first integration (the integration that creates the velocity from
the acceleration) and multiply it by itself. The block that does this is the Product block in
the Math library. Drag a copy of this block into the model also.
We will start by creating the model for the heavy object alone. Connect the output of
the integrator to the Product block as we did before (using either of the two inputs; it does
not matter which one), and then grab the second input to the Product block and drag it until
it touches the line that you just connected (the line from the integrator). This will cause
the two inputs to the Product block to be the same thing (the velocity), so the output is the
square of the velocity. Notice how easy it is to create a nonlinear differential equation in
Simulink. The next step is to multiply the velocity squared by the drag coefficient for the
heavy object, 0.003825. Use the Gain block from the Math library to do this. Click and
drag a copy of this block into the model, then double click on this Gain block and change
the value in the dialog to 0.003825.
Figure 1.6 shows the resulting Simulink model, which is in the NCS library. Open it
with the MATLAB command:
Leaningtower2
Figure 1.6. Leaning Tower Simulink model with air drag added.
Figure 1.7. Scope dialog showing how to change the number of axes in the Scope plot.
Note that we changed the Scope block to view both the position and the velocity of
the objects. This was done by double clicking the Scope icon and then selecting the Scope
Parameters dialog by double clicking the second menu item at the top of the Scope window.
(This is the icon to the right of the printer icon; it is the icon for a tabbed dialog.) The dialog
that opens looks like Figure 1.7.
Note that the number of axes in the Scope is the result of changing the value in the
Number of Axes box. We have made it two, which causes the Scope block to have two
inputs. We then connected the first input to the Velocity and the second to the Position lines
in the model.
Figure 1.8. Leaning Tower simulation results when air drag is included.
If you run this model, the results in the Scope block will resemble the graphs in
Figure 1.8.
Notice that the air drag causes the object to hit the ground later than we determined
when there was no drag (3.35085405691268 seconds instead of 3.05233847833684, about
0.3 seconds later).
Now let us compare the light and heavy object. We could do this by going back to the
model and changing the cross-sectional area to that of the smaller object. However, there
is a feature of Simulink that makes this type of calculation extremely easy. The feature is
the automatic vectorization of the model.
To exploit this feature, open the Gain block in the model (by double clicking) and
enter the values for both the large and the small object. Do this using MATLAB's vector
notation. Thus the Gain block dialog will have the values [0.003825 0.000945]. (Include
the square brackets, which make the value in the dialog equal to a MATLAB vector, as
shown in Figure 1.9.)
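Conceptually, the vectorized Gain behaves like element-wise multiplication in MATLAB.
The following lines are only an illustrative sketch of that idea, reusing the two drag
coefficients from the dialog:

k = [0.003825 0.000945];   % drag coefficients, large and small object
v = -50;                   % an example scalar velocity value
a_drag = k .* v.^2         % element-wise: one drag acceleration per object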
After running this model with the 2-vector for the drag coefficients, the Scope shows
plots for the position and velocity of both of the objects (see Figure 1.10).
The heavier object hits the ground well after the light object. The velocity curve in the
top graph shows why. The speed of the heavier object comes to a maximum value of about
Figure 1.9. Adding the second object to the Leaning Tower simulation by vector-
izing the air drag.
70 ft/sec, whereas the lighter object ends up at about 90 ft/sec. This difference comes from the
increased air drag force on the heavier object, which has the larger cross-sectional
area.
This difference is too large for Galileo not to have noticed, so it does lead us to suspect
that he never really did the purported experiment. (It is not clear from Galileo's writings
that he understood the effect of drag, but if he did, he could have avoided this by making
the objects the same size.) In fact, it is very probable that Galileo relied on the thought
experiment he describes in his 1638 book Two New Sciences [4]. In this thought experiment
he argued that, in a vacuum, if the heavier object were to fall faster, then tying the two
objects together with a short rope would at some point make the rope taut, and the lighter
object would begin to retard the heavier one. That implies that the tied pair would hit the
ground a little later than would have been the case had no rope connected them; that is,
the combined object (which is heavier than either one alone) would hit the ground a little
later than the single heavy object. However, since the combined weight of the two objects
is greater than that of the single heavy object, this violates the premise that the heavier
object must hit the ground first. This deduction was the reason Galileo believed that the
Aristotelian statement that the heavy object falls faster was wrong, and it prompted him
to perform many experiments with inclined planes to try to understand the dynamics of
falling bodies [14].
Since this is the first Simulink model with any substance that we have built, let us use
it to illustrate some other features of Simulink.
First, let us clean up the model and annotate it. It is a good idea to do this as you
create the model so that when you return to the model later (or give the model to someone
else to use), it is clear what is simulated. The first thing to do is to assign a name to the
Figure 1.10. Simulation results: the Leaning Tower Simulink model with a heavy
and a light object.
variables in the diagram, and the second is to name all of the blocks so their functions are
clear. It is also helpful to add color to the diagram to indicate the generic function of the
blocks.
To add a variable name onto a line in the Simulink model, simply double click on the
line. This causes an insertion point to appear on the line where you can type any text. The
annotation is arbitrary.
To annotate the model you created, enter the names in the model, or if you have not
created a model, open the model Leaningtower3 by typing Leaningtower3 at the MATLAB
command line. This will open the model with the annotation shown in Figure 1.11.
Simulink automatically changed from a scalar to a vector in the model when we made
the gain a MATLAB vector. After making the change, to see that the variables in the model
are vectors corresponding to the two different drag accelerations you must right click the
mouse anywhere in the model window and select Format from the menu; this will open the
submenu shown as part of Figure 1.11.
In this menu, we selected Wide Nonscalar Lines and Signal Dimensions (note
that they are checked). After you do this, all of the vector signals in the diagram have a
wider line, and the signal dimensions (vectors of size 2 here) are on the signal lines in the
model (as shown in Figure 1.11). To change the color of a block, highlight the block and
right click. Select Background Color from the resulting pop-up menu and then move the
mouse pointer to the right to select the desired color.

Figure 1.11. Right clicking anywhere in the diagram opens a pull-down menu
that allows changes to the colors of the blocks, bold vector line styles, and other model
annotations.
There are other features that Simulink supports which you can discover on your own
by playing with the model. As we work our way through the exercises and examples in this
book, you will learn a lot more.
In the next section, we continue creating some simple models by simulating a pendu-
lum clock. For the interested reader, [30] has a very nice discussion of how Galileo came
to realize that objects with different masses have the same motion when falling. A Leaning
Tower experiment actually showing that the masses move the same had to wait for the
invention of the vacuum pump at the end of the 17th century.
1.3 Example 2: Modeling a Pendulum and the
Escapement of a Clock
The next step in our introduction to Simulink is to investigate a pendulum clock and the
mechanism that allows clocks to work: the escapement. This example introduces control
systems and how they are simulated. It also continues to discuss both linear and nonlinear
differential equations.
The escapement mechanism is one of the earliest examples of a feedback control
system. An escapement provides the sustaining force for the motion of the clock while
ensuring that the pendulum and the clock movement stay in synchronization. Many famous
mathematicians and physicists analyzed clocks and early forms of escapements during the
17th and 18th centuries. We start by describing the escapement mechanism for a mechanical
pendulum clock; we then show how to build a Simulink model of the clock.
1.3.1 History of Pendulum Clocks
Humans have been building timekeepers that use the sun or water since at least 1400 BC. The
Egyptian clepsydra (from the Greek kleptein, to steal, and hydor, water) was a water-filled
bucket with a hole. The water level was a measure of the elapsed time. Sundials provided
daylight time keeping. The first mechanical clocks appeared in the 13th century, with the
earliest known device dating to about 1290. Who invented the first clock is lost to history,
but the main development that made these devices possible is the escapement, a gear with
steep sloping teeth. A gear tooth escapes from the escapement, causing the gear to rotate
until the next tooth engages the opposite side of the escapement. The rotation of the gear
registers a count by the clock.
At the same time the tooth escapes, the driver imparts energy (through the escaping
tooth) into the pendulum, which overcomes the friction losses at the pivot. The escapement
thus serves two purposes: it provides the energy to keep the clock in motion, and it provides
the means for counting the number of oscillations.
Christiaan Huygens invented an escapement similar to that shown in Figure 1.12 in
1656. This type of mechanism was responsible for the emergence of accurate mechanical
clocks in the eighteenth century. The mechanism relies on the escapement (the parts labeled
1 and 2 in Figure 1.12).² With this mechanism, he was able to build a clock that was accurate
to within 1 sec per day. Previous clock mechanisms did not have the regulation capability
of the Huygens escapement, so they would slow down as the springs unwound or as the
weights that drove the clock dropped. Earlier clocks required daily resetting and were
accurate only to the nearest hour.
The pendulum mechanism provides the timing for the clock and the escapement
provides the forces and connections that
provide energy to the pendulum so the clock does not stop because of friction;
regulate the speed of the gear motions to the natural period of the pendulum so that
the timing of the clock is accurate, depending only on the pendulum motion;
decouple the pendulum and the drive;
provide the feedback to ensure that the force from the driver (springs or weights) and
wear in the various mechanical parts do not adversely affect the time displayed by
the clock.
²This copyrighted figure is from a document titled How a Clock Works, available from Erwin Sattler Clocks of
America. You can download the document from the web page http://www.sattlerclocks.com/ck.php. I would like
to thank Marcus Orvando, president of Sattler Clocks of America, for permission to use this diagram.
Figure 1.12. Components of a pendulum clock.
Before we go into the details of the design of the escapement, let us look at how it
works. The escapement is on a shaft attached to the clock case through a bushing called a
back cock. The pendulum mount does not connect to this shaft. The pendulum suspension
allows the pendulum to swing with minimal friction unencumbered by the escapement. As
the pendulum swings, it alternately engages the left and right side of the escapement anchor
(1). In the process a force from the escapement wheel (2) goes to the pendulum through the
crutch (4) and the crutch pin (5). The crutch assembly also forces the escapement to rotate
in synchronism with the pendulum. The pendulum itself (3) moves back and forth with a
period that is very closely determined by its length. (Once again we return to Galileo, who
demonstrated this in 1582.) The pendulum includes a nut at its bottom (not shown), whose
adjustment changes the effective length and thereby adjusts the period of the pendulum.
The escapement wheel (2) has a link to all of the gears in the gear train (they are not all
shown in Figure 1.12) that turns the hands (8) of the clock.
For the remainder of this description, we focus on the escapement anchor (1) and its
two sides. On the two sides of the anchor are triangular teeth; as the anchor rotates, one
of the two sides of each tooth alternately enters and exits the escapement gear, hence the
names entrance and exit pallets. If you look carefully at the anchor in Figure 1.12, you
will see them: they are triangular pins that pass through each end of the anchor. As the
pendulum moves to the left, it forces the anchor to rotate and ultimately disengage the right
exit pallet face from the escapement wheel (2). When it does, the exit pallet on the opposite
side of the anchor engages the next tooth of the escapement wheel (9), and the escapement
gear moves one tooth. If the pendulum is set to swing at a period of 1 sec (as is the case for
the mechanism in Figure 1.12), the result of the motion of the escapement gear is to move
the second hand on the clock by one second (which implies that there are 60 teeth on the
escapement gear).
When the right face of the anchor engages the gear, the anchor and the escapement
wheel interact. The escapement wheel, which is torqued by the drive weight (7), applies a
force back through the anchor (1) and then, through the crutch and crutch pin (4 and 5), to the
pendulum. The alternate release and reengagement of the anchor and escape wheel provides
the energy to the pendulum that compensates for the energy lost to friction. As a by-product
of this interplay of the pendulum and the escapement wheel, we hear the distinctive and
comforting tick-tock sound that a clock makes.
1.3.2 A Simulation Model for the Clock
This Simulink model of the clock is in the NCS library, and you can open it using the
command Clocksim. The clock model has two parts: the model of the pendulum and the
model of the escapement. The pendulum model comes from the application of Newton's
laws by balancing the force due to the acceleration with the forces created by gravity. Let
us assume the following:
The pendulum angle is θ.
The mass of the pendulum is m.
The length of the pendulum is l.
The inertia of the pendulum is J. (If we assume that the mass of the pendulum is
concentrated at the end of the shaft, the inertia is given by J = ml².)
The damping force coefficient is c (Newtons/radians/sec).
Then the equation of motion of the pendulum is

$$ J\,\frac{d^2\theta(t)}{dt^2} = -c\,\frac{d\theta(t)}{dt} - mgl\,\sin(\theta(t)). $$

Substituting ml² for J gives

$$ \frac{d^2\theta(t)}{dt^2} = -\frac{c}{ml^2}\,\frac{d\theta(t)}{dt} - \frac{g}{l}\,\sin(\theta(t)). $$
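Before turning to the block diagram, the pendulum equation by itself is easy to check
directly in MATLAB. The sketch below uses the parameter values that appear in the model
of Figure 1.13 (dampcoef, g, and plength); the mass m = 1 is an assumption made here
only for the illustration:

c = 0.01; g = 9.8; l = 0.2485; m = 1;   % dampcoef, gravity, plength; m = 1 assumed
f = @(t, x) [x(2); -c/(m*l^2)*x(2) - g/l*sin(x(1))];
[t, x] = ode45(f, [0 20], [0.2; 0]);    % start at 0.2 rad with zero velocity
plot(t, x(:,1)), xlabel('Time (s)'), ylabel('Pendulum angle (rad)')

Without an escapement this free pendulum slowly decays; the model below adds the
escapement force that sustains the oscillation.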
These dynamics are in the pendulum clock Simulink model shown in Figure 1.13.
This model uses Simulink primitive blocks for integrators, summing junctions, gains, and
Figure 1.13. Simulink model of the clock. (The parameters visible in the diagram are
dampcoef = 0.01, g = 9.8, plength = 0.2485, an initial pendulum angle of 0.2 radians, and
an escapement gain of 0.1148; accelerations are in m/s².)
the sine function (from the Continuous and the Math Operations libraries). We also used a
new concept, called a subsystem, to create a block called Escapement Model. Inside this
block are Simulink primitives that model the escapement. Open the subsystem by double
clicking, or view it using the model browser.
Let us follow the signal flow in the model. We start at the output of the summing junc-
tion, where the angular acceleration of the pendulum, Theta2dot, is the result of subtracting
the three terms:
the damping acceleration (c/ml²)(dθ/dt) from the equation above (dampcoef*Thetadot),
the gravity acceleration from the equation above (g/plength*sin(Theta)),
the acceleration created by the escapement, which we develop next.
The Simulink model creates the variables representing the angular position and ve-
locity, Theta and Thetadot, by integrating the angular acceleration Theta2dot. A pendulum
clock only sustains its oscillation if the pendulum starts at an angle that causes the escape-
ment to operate. Consequently, an initial condition is used in the last integrator (we use 0.2
radians, about 11.5 degrees) to start the clock.
A feature of Simulink is the ability to create a subsystem to simplify the model.
Typically, a subsystem is a Simulink model that groups the equations for a particular common
function into a single block. The details of the block are in the subsystem and are not visible
until the subsystem block is opened (by double clicking or browsing), but its functionality
is captured so that the user can see the interactions. The clock's single subsystem is the
model for the escapement. This Escapement Model block has the equations shown in
Figure 1.14. These nonlinear escapement mechanism equations consist of the logic needed
to model the force created by the escapement.
Figure 1.14. Blocks in the subsystem Escapement Model.
The escapement works in the same way as you would push a swing. A small, accurately
timed push occurs when the exit pallet leaves the escapement. The push is always in
the direction of the pendulum's motion, and it occurs just after the pendulum reaches its
maximum or minimum swing (depending on the sign of the angle).
The attributes of the escapement model are as follows:
Since the pendulum motion is symmetric, the calculations use positive angles (with
the absolute value function block). The force applied has the correct sign because
we multiply it by +1 or -1, depending on the sign of the angle. (The block called
Restore Sign does this.)
The pendulum angle at which the escapement engages (applies an acceleration) is
denoted by Penlow. (As can be seen, it is given a value of 0.1 radians.)
The angle of the pendulum at which the escapement disengages (jumps from one gear
tooth to the next) is denoted by Penhigh (with a value of 0.15 radians).
The escapement applies the force only when the pendulum is accelerating (i.e., when
the pendulum is moving toward the center of its swing, so that the magnitude of the
pendulum angle is decreasing). The Relational Operator 3 block checks the sign of
the derivative of the pendulum angle and multiplies the force by 1 when the pendulum
is moving toward the center and by 0 when it is not.
The plot at right shows the force applied by the escapement model (the Y axis) vs. the
pendulum angle (X axis). Notice that the angle at which the force engages is ±0.1 radians,
and the angle at which the force stops is ±0.15 radians, as required. We used the Math
Operations and Logic libraries to develop this model, so let us go through the logic
systematically.
The absolute value of the pendulum angle is the first operation. This block is the first
one in the Math Operations library. At the conclusion of the logic, the direction of the force
comes from the sign of the pendulum angle. In the model this is done by multiplying the
force created (which will always be positive) by the sign of the pendulum angle. When the
absolute value of the pendulum angle is less than or equal to 0.15, the Relational Operator1
block is one; otherwise it is zero. When the absolute value of the angle is greater than or
equal to 0.1, the Relational Operator2 block is one; otherwise it is zero. The last part of the
logic ensures that the force is applied only while the pendulum angle is getting smaller (in
the absolute sense). This is accomplished by checking that the derivative of the absolute
value of the pendulum angle is negative. (The Derivative block is in the Continuous library.)
The product of all of these terms causes the output to be one if and only if the pendulum
angle is between 0.1 and 0.15 and decreasing, or between -0.1 and -0.15 and increasing.
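For readers who find a formula easier to scan than a block diagram, the following MATLAB
lines are one way to express the escapement logic just described. This is only a clarifying
sketch (the names Penlow and Penhigh and the gain value follow the text and the diagram),
not the model itself:

Penlow = 0.1; Penhigh = 0.15; gain = 0.1148;         % values from the model
escapement = @(theta, thetadot) gain * sign(theta) .* ...
    (abs(theta) >= Penlow & abs(theta) <= Penhigh) .* ...  % engagement band
    (sign(theta) .* thetadot < 0);                          % |theta| decreasing
escapement(0.12, -0.3)   % swinging back toward center: push applied
escapement(0.12,  0.3)   % swinging outward: no push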
The simulation is set up to run for 1000 sec, and the output in the Scope block shows
the pendulum angle and the escapement acceleration. After you run the model, you can
reveal the details of the plot by using the mouse to encircle an area of the plot to zoom
in on. If you do this (it was done here between 412 and 421 sec), the plot resembles
Figure 1.15.
The early part of the simulation shows that the pendulum angle grows from its initial
value (of 0.2 radians) to 0.2415 radians. This stable value for the maximum pendulum angle
is a consequence of the escapement.
At this point, numerical experiments with the clock can investigate different ques-
tions. For example, the escapement force is supplied by some device that converts potential
energy into kinetic energy (weights, a spring, a pneumatic device that uses variations in
air pressure, etc.), so over its operation the force that the escapement applies to the
pendulum will decrease. To understand the effect of this decreasing force on the timekeeping
of the clock, simulate the escapement with different values of Gain in the Gain block labeled
Force from escapement over the pendulum mass. Evaluate the change, if any, in the period
and the amplitude of the pendulum swing when you change this value (see Exercise 1.2).
Figure 1.15. Results of the clock simulation for times around 416 sec. The upper figure
is the pendulum angle, and the lower figure is the acceleration applied to the pendulum by
the escapement.
1.4 Example 3: Complex Rotations: The Foucault
Pendulum
Léon Foucault, in 1848, noticed that a pendulum mounted in the chuck of a rotating lathe
always moved back and forth in the same plane even if the chuck rotated. This gave him the
idea that he could demonstrate that the earth rotates by setting in motion a long pendulum
and observing the path of its motion over the period of a day. In a series of experiments
with gradually longer pendulums, he became convinced that his concept was valid. In
February of 1851, he unveiled a public demonstration for the Paris Exposition, where, at the
Église Ste.-Geneviève in Paris, he set up a pendulum that was 67 meters long. The original
pendulum, shown in Figure 1.16,³ has been back at the Panthéon since 1995.
The Foucault pendulum both illustrates the mechanics of a body moving in a rotating
frame and is the simplest example of the complex motions that result from gyroscopic forces.
We will investigate these forces in detail in Chapter 3. For this introduction to Simulink,
we have looked at Galileo's experiment, the pendulum for use in a clock, and now the
Foucault pendulum. The dynamics are not too complex, but they do require that the full
three dimensions be included. (However, one of these motions, the vertical direction, may
be ignored.) The model we create can also be used as a simple example of the Coriolis
effect that causes weather patterns to move from west to east (in the northern hemisphere).
³The original church is now a public building called the Panthéon on the Place du Panthéon in Paris. This
photograph is from the Wikipedia web site [51].
Figure 1.16. Foucault pendulum at the Panthéon in Paris.
1.4.1 Forces from Rotations
A particle moving with constant speed along a circular path is always changing its direction of
travel; it is accelerating all of the time. The forces associated with this change in direction are
the centrifugal and centripetal forces. The figure at right illustrates the motion and the vectors
associated with the location of the particle as it moves over an interval Δt. The vectors in the
diagram are the radial locations at the two times t and t + Δt; the unit vector for the radial
motion, u_r(t); and the unit vector in the direction perpendicular to the radial motion, u_θ(t).
To restrict the particle to motions with a constant radius, we assume that r(t) = r u_r(t),
where r is the radius (a scalar constant).
Differentiating this to get the linear velocity of the particle gives

$$ v(t) = \frac{dr(t)}{dt} = r\,\frac{du_r(t)}{dt}. $$
From the figure above, the derivative of the unit vector du_r(t)/dt is found by calculating the
difference in this unit vector over the time Δt. Since Δt is small, the direction of the unit
vector u_θ over this interval can be assumed constant. In addition, the magnitude of the
perturbation |u_r(t + Δt) - u_r(t)| is the small arc length Δθ.
Putting these together gives the derivative du_r(t)/dt as

$$ \frac{du_r(t)}{dt} = \lim_{\Delta t \to 0} \frac{u_r(t+\Delta t) - u_r(t)}{\Delta t}
 = \lim_{\Delta t \to 0} \frac{u_r(t) + \Delta\theta\, u_\theta - u_r(t)}{\Delta t}
 = \lim_{\Delta t \to 0} \frac{\Delta\theta\, u_\theta}{\Delta t} = \omega\, u_\theta. $$

Here ω is the angular velocity of the particle's motion. Therefore we have v(t) = ω r u_θ or,
equivalently, that the magnitude of the velocity (the particle's speed) is ωr.
To get the relationship between the angular acceleration and the particle's speed we
differentiate the vector velocity using the fact that the radius is constant while the angular
velocity and the unit vector perpendicular to the radial motion are changing with time. Thus
the acceleration of the particle is

$$ a(t) = \frac{dv(t)}{dt} = u_\theta\,\frac{d\omega}{dt}\,r + \omega\,\frac{du_\theta}{dt}\,r. $$
Following steps similar to those we used above for differentiating u_r, the derivative of u_θ
is -ω u_r, where the minus sign comes from the fact that the difference in u_θ at the times t
and t + Δt points inward (along the radial direction this is negative). Using these values, we
get the acceleration of the particle as (in this expression, α = dω/dt)

$$ a(t) = \alpha\, r\, u_\theta - \omega^2 r\, u_r. $$

This acceleration has two components: tangential and radial. The tangential component
is the instantaneous linear acceleration and its magnitude is αr. The radial component is
the centrifugal acceleration and its magnitude is ω²r = v²/r. These two results should be
familiar from introductory physics, but we show them here to remind you that motions under
rotations need to account for both the vector magnitude and its direction.
1.4.2 Foucault Pendulum Dynamics
When there are no forces on an object, its trajectory in inertial space is a straight line. If we
view this motion in a rotating coordinate system, the motion will appear curved. Because the
curve is not the true motion of the object but is a consequence of the observer's motion, to get
the equations of motion for the observer we use a fictitious force, called the Coriolis force.
We have indirectly seen this kind of fictitious force in the centrifugal acceleration
above. Following the derivation in Section 1.4.1, if a three-dimensional vector b is rotating,
then its derivative in an inertial (stationary) frame is

$$ \left.\frac{db}{dt}\right|_{\text{Inertial}} = \left.\frac{db}{dt}\right|_{\text{Rotating}} + \omega \times b. $$
Using this to get the velocity of a rotating vector gives v_Inertial = v_Rotating + ω × r. Applying
this result again to differentiate the inertial velocity gives

$$ a_{\text{Inertial}} = \frac{d}{dt}\left( v_{\text{Rotating}} + \omega \times r \right)\Big|_{\text{Rotating}} + \omega \times v_{\text{Rotating}} + \omega \times (\omega \times r). $$

The derivative of the first term on the right must account for the fact that the angular velocity
and the radius vector are both time varying. Thus, we have

$$ a_{\text{Inertial}} = a_{\text{Rotating}} + \frac{d\omega}{dt} \times r + 2\,\omega \times v_{\text{Rotating}} + \omega \times (\omega \times r). $$

This equation gives the acceleration in the rotating coordinate system as

$$ a_{\text{Rotating}} = a_{\text{Inertial}} - 2\,\omega \times v_{\text{Rotating}} - \omega \times (\omega \times r) - \frac{d\omega}{dt} \times r. $$
Since the force on the body in the rotating frame is the mass times the acceleration in the
rotating frame and the force on the body in the inertial frame is the mass times the inertial
acceleration, the Coriolis force that accounts for the perceived motion in the rotating
frame is

$$ F_{\text{Coriolis}} = -2m\,\omega \times v_{\text{Rotating}} - m\,\omega \times (\omega \times r) - m\,\frac{d\omega}{dt} \times r. $$

Figure 1.17. Axes used to model the Foucault pendulum.
We can now model the Foucault pendulum. Figure 1.17 shows the coordinate system
we use. The y coordinate comes out of the paper. The origin of these coordinates is the
rest position of the pendulum, and the z axis goes through the point of suspension of the
pendulum. We assume that the pendulum length is L.
In the figure, the Earth is rotating at the angular velocity Ω, the pendulum is at the
latitude λ, and we denote the coordinates of the pendulum by the vector (x, y, z).
Since the pendulum is rotating at the Earth's rate, the Coriolis forces on the pendulum are

$$ C = 2m\,(v \times \Omega) = 2m\,\Omega \left( \frac{dx}{dt},\; \frac{dy}{dt},\; \frac{dz}{dt} \right) \times (\cos\lambda,\; 0,\; \sin\lambda). $$
In the vertical direction, the tension in the wire holding the pendulum is approximately
constant (equal to the weight of the pendulum bob, mg), so the velocity along the vertical is
essentially zero. Therefore, the Coriolis forces (from the cross product) are

$$ 2m\,\Omega \left( \frac{dy}{dt}\,\sin\lambda,\; -\frac{dx}{dt}\,\sin\lambda,\; -\frac{dy}{dt}\,\cos\lambda \right). $$
Now, when the pendulum moves, there is a restoring force from gravity. This force is
proportional to -mg sin(θ), where θ is the pendulum angle, as we saw for the pendulum
above. For the Foucault pendulum, the angle is x/L along the x-axis and y/L along the y-axis.
We assume that the pendulum motion is small, so sin(θ) ≈ θ and the restoring acceleration
is -gθ. Thus the accelerations on the
Foucault pendulum are

$$ \frac{d^2x}{dt^2} = 2\Omega\,\sin(\lambda)\,\frac{dy}{dt} - (2\pi f_P)^2\, x, $$
$$ \frac{d^2y}{dt^2} = -2\Omega\,\sin(\lambda)\,\frac{dx}{dt} - (2\pi f_P)^2\, y. $$
We ignore the accelerations in the vertical direction since to first order they are zero.
The term (2πf_P)² = g/L is the square of the natural (angular) frequency of the pendulum,
exactly as was used in the clock example above. An interesting aspect of this problem is
the magnitudes of the coefficients in the differential equations. The rate of rotation of the
earth is essentially once every 24 hours (it is actually a little less), so

$$ \Omega = \frac{2\pi}{24 \cdot 60 \cdot 60} = \frac{2\pi}{86{,}400} = 0.00007272205. $$

The coefficient in the model depends on the latitude. Let us assume we are at 45 degrees
north latitude, so the coefficient is

$$ 2\,\Omega\,\sin(\lambda) = 0.00010284. $$
If we assume that the pendulum length is such that the period of the pendulum swing is 1
minute, we obtain that the other coefficient in the differential equation is

$$ (2\pi f_P)^2 = \left(\frac{2\pi}{60}\right)^2 = 0.01096622. $$
These coefficients differ by roughly two orders of magnitude, which will make these equations
difficult to solve numerically. As we will see, Simulink allows the user to accommodate
this so-called stiffness without difficulty.
We can now build a Simulink model for the Foucault pendulum using these differential
equations. First, though, let us compute how long the simulation should be set up to run.
The period of the total motion of the Foucault pendulum depends upon our latitude. This
period is 2π/(Ω sin(λ)) = T_Earth/sin(λ), where T_Earth is the time for one rotation of the
earth. (We assume the earth rotates in 24 hours, so for a latitude of 45 degrees the period is
1.2219e+005 sec.) We will make this time the simulation stop time.
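For reference, the numbers just quoted can be reproduced with a few MATLAB lines (a
sketch; the names Lat, OmegaE, and OmegaP follow the ones used in the model's pre-load
code, discussed below):

Lat    = 45*pi/180;                 % latitude in radians
OmegaE = 2*pi/86400;                % earth rotation rate, rad/sec
OmegaP = 2*pi/60;                   % pendulum frequency for a 60 sec period
coriolis_coef = 2*OmegaE*sin(Lat)   % = 1.0284e-04
pend_coef     = OmegaP^2            % = 0.010966
t_stop        = 86400/sin(Lat)      % = 1.2219e+005 sec, the simulation stop time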
Figure 1.18 shows the model for the Foucault pendulum. Once again, to gain famil-
iarity with Simulink and to ensure you know how to build a model like this, you should
create this model yourself. We call this model Foucault_Pendulum in the NCS library.
By now, you should be finding it reasonably easy to picture what the Simulink diagram
portrays. Remember that the picture represents the mathematics and, because the signal flow
represents the various parts of the mathematics, it is important that the model reinforce
this flow. In this picture, you should be able to clearly distinguish the x and y components of
the pendulum motion. We could emphasize this by coloring the two second order differential
equation blocksets with different colors. (The models that are in the NCS library use this
annotation.)
When you run this model, the Scope block creates the oscillation over one period, as
shown in Figure 1.19. As can be seen from the plots, the motions in the east-west and north-
south directions are out of phase by 90 degrees and essentially create a circular motion. The
initial condition in the model sets the pendulum at 1 ft to the north and at exactly zero along
Figure 1.18. The Simulink model of the Foucault pendulum.
Figure 1.19. Simulation results from the Foucault pendulum Simulink model using
the default solver.
Figure 1.20. Simulation results for the Foucault pendulum using a tighter tolerance
for the solver.
the east-west line. If for some reason the pendulum starts with a motion that is out of this
plane, the pendulum will precess during its motion, sweeping out a more elliptical path.
The parameters in the model come from code used by the pre-load function (in the
Model Properties dialog under Edit in the pull-down menu at the top of the Simulink
model). You can experiment with different latitudes for the pendulum by changing the value
of Lat (by simply typing the new value of Lat at the MATLAB command line).
Let us use this result to explore the numerical integration tools in Simulink. Chap-
ter 7 of Cleve Moler's Numerical Computing with MATLAB contains a discussion of the
differential equation solvers in MATLAB. The solvers in Simulink are identical, except that
they are linked libraries that Simulink uses automatically when the model runs (because
you clicked the run button). The results in Figure 1.19 use the default solver in Simulink
(the default is ode45 with a relative tolerance of 1e-3). If you look carefully at this figure,
you will notice that the amplitude of the pendulum motion is decreasing over time. There
is, however, no physical reason for this. We have not included any pendulum damping in
the model, and the terms that couple the x and y motions should not introduce damping.
To see if this effect is a consequence of the numerical solver, let us go into the model and
change the solver to make it more accurate. Open the Configuration Parameters dialog
under the Simulation pull-down menu (or use Ctrl + E), and in the dialog change the relative
tolerance to 1e-5. If you run the simulation, the result appears as shown in Figure 1.20.
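If you prefer to work from the command line, the same change can be made with set_param
(a sketch; it assumes the Foucault pendulum model is the current system):

set_param(bdroot, 'Solver', 'ode45', 'RelTol', '1e-5');
sim(bdroot);   % re-run; the result matches Figure 1.20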
Notice that this result is what we expect; there is no discernible damping in the
motion, and the period is exactly what we predicted. The need for the tighter tolerance in
the calculation is because of the roughly two orders of magnitude difference in the coefficients
of the model, as we noted above. In general, experimentation with the differential
equation solver settings is required in order to ensure that the solution is correct. The best way
to do this is to try different solvers and different tolerances to see if they create discernible
differences in the solutions. We return to the solvers as we look at other simulations. In
particular, we will look at some very stiff systems and the use of stiff solvers in Chapter 3.
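If you want an independent cross-check outside Simulink, the same two equations can be
handed to the MATLAB solver directly. The sketch below uses the coefficient values computed
earlier and compares the default relative tolerance with the tighter one:

Om = 0.00010284; wp2 = 0.01096622; Tstop = 1.2219e5;
f = @(t, s) [s(3); s(4); Om*s(4) - wp2*s(1); -Om*s(3) - wp2*s(2)];   % s = [x; y; dx/dt; dy/dt]
[t1, s1] = ode45(f, [0 Tstop], [1; 0; 0; 0]);                        % default RelTol = 1e-3
[t2, s2] = ode45(f, [0 Tstop], [1; 0; 0; 0], odeset('RelTol', 1e-5)); % tighter tolerance
plot(t1, s1(:,1), t2, s2(:,1)), legend('RelTol = 1e-3', 'RelTol = 1e-5')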
In Chapter 3, we will also explore the more complex motions of objects undergoing
gyroscopic forces when the motion takes place in three dimensions. First, however, we will
look at how Simulink creates simulation models for linear systems, how Simulink handles
vectors and matrices, and, last, when to use Simulink as part of the process of designing
control systems.
1.5 Further Reading
Galileo showed that a falling body's acceleration is independent of its mass by making
detailed measurements with an inclined plane, as was described in his 1632 Dialogue Con-
cerning the Two Chief World Systems: Ptolemaic and Copernican [14]. Recent and old
versions of his devices are in the Institute and Museum of the History of Science in Flo-
rence, Italy (http://www.imss.firenze.it/), and you can view a digital version of the original
manuscript of Galileo's Notes on Motion (Folios 33 to 196) from this site. The location
for the manuscript is http://www.imss.fi.it/ms72/index.htm. An excellent translation of the
Galileo dialogue describing the inclined plane measurements that he made is in The World
of Mathematics, Volume 2, edited by James R. Newman [30].
There is a wonderful essay by John H. Lienhard from the University of Houston (at
http://www.uh.edu/engines/epi1307.htm) that describes the clock as the rst device ever
engineered. The essay ties the clock back to Galileo and traces its history from the early
writings of Francis Bacon to Huygens and Hooke.
The document that you can download from Sattler Clocks (referred to in footnote 2)
has much more detail on the mechanisms involved in the escapement (and in particular the
escapement in Figure 1.12 that was invented by George Graham in 1720) and the other
very interesting features of a pendulum clock. They also have extensive pictures of some
elegantly designed and beautiful clocks.
I used the clock example to develop an analysis of the clock escapement in the IEEE
Control Systems Society Magazine [38]. Shortly after the publication of this article, a set
of articles also appeared in the same magazine on several interesting early control systems,
including the clock [3], [19].
In a recent Science article [37] the authors demonstrated that moths maintain control
during their flight because their antennae vibrate. This vibration, like the motion of the
Foucault pendulum, precesses as the moth rotates, giving the moth a signal that tells it
what changes it needs to make to maintain its orientation. The authors demonstrated this
by cutting the antennae from moths and observing that their flight was unstable. They
subsequently reattached the antennae, and the moths were again able to fly stably.
You should carefully review The MathWorks's User's Manual for Simulink [42].
Simulink models can get very large and take a long time to run. The data generated by
the model can also overwhelm the computer's memory. Managing the use of memory can
improve the performance of your models. Reference [41] has a good discussion of the
methods available for managing the memory in a Simulink model.
Exercises
1.1 Use the Leaning Tower of Pisa model as the starting point to add a horizontal com-
ponent to the velocity at the instant the objects start. Include the effect of air drag.
Also, add an input to the model to simulate different wind speeds. Experiment with
this model to see how these do or do not affect the experiment.
1.2 In the clock simulation, increase the friction force to see how large it must be before
the escapement ceases working. Perform some numerical experiments with the model
to see if there is any effect on the accuracy because of the increased friction. When
you evaluated the change in the period and amplitude of swing of the pendulum as
you changed the Gain, you should not have seen a significant change. What change in
the model will allow you to see the effect of lower forces applied by the escapement?
Modify the model to look at this, and experiment with different escapement forces.
What is the effect? Why is the effect so small? How does the escapement ameliorate
these forces? Look at Figure 1.12 carefully. The Sattler Company has a spring
mounted on the pendulum that transmits the force from the crutch pin to the pendulum.
Does the Simulink model have this mechanism? How would you go about adding
this mechanism to the model? Is there a good reason for the Sattler clock to use this
spring?
1.3 Experiment with the different Simulink solvers (one by one) to see what effect they
have on the simulation of the Foucault pendulum. Which ones work well? Try
changing the tolerances used for each of the solvers to see if they stop working.
1.4 Create a Simulink model of a mass suspended on a spring under the influence of
gravity. Add a force that is proportional to the velocity of the mass to model the
damping. Pick some parameter values for the mass, the spring constant, the damping,
and, of course, gravity, and simulate the system. Explore what happens when you
increase the value of the damping from zero. Can you explain the behavior you are
seeing in terms of the solution of the underlying differential equation? (If you cannot,
we will see why in the next chapter.)
Chapter 2
Linear Differential
Equations, Matrix Algebra,
and Control Systems
We have seen in Chapter 1 how Simulink allows vectorization of the parameters in a model
(using MATLAB notation for the vectors) with the resulting model simultaneously simulat-
ing multiple situations. In a similar manner, Simulink allows matrix algebra using blocks.
In this chapter, we will exploit this capability to build easy to understand models using
vectors, matrices, and related computational tools. In fact, it is this capability that, when
used properly, contributes to the readability and ease of use of many models. To make it
easy to understand these capabilities, we will spend some time talking about the solution to
linear differential equations using matrix techniques.
2.1 Linear Differential Equations: Linear Algebra
The general form of an nth order linear differential equation is

$$ a_{n+1}\,\frac{d^n y(t)}{dt^n} + a_n\,\frac{d^{n-1} y(t)}{dt^{n-1}} + a_{n-1}\,\frac{d^{n-2} y(t)}{dt^{n-2}} + \cdots + a_2\,\frac{dy(t)}{dt} + a_1\,y = u(t). $$
In addition to the equation, we must specify the values (at t = 0) of the dependent variable
y(t) and its derivatives up to the order n - 1. Specifying the initial conditions creates a
class of differential equation called an initial value problem.
The coefficients in the equation can be functions of t, but for now, we will assume
that they are constants. One way to solve this equation is to use a test solution of the form
y(t) = e^{pt}, where p is an unknown. Substituting this for y(t) into the differential equation
gives a polynomial equation of order n for p. The solution of this polynomial gives n values
of p that, with the initial conditions and the input u(t), provide the solution y(t). This may
be familiar to you from a course on differential equations. However, there is a better way
of understanding the solution to this equation, and it relies on matrix algebra.
In the differential equation above, let us build an n-vector x (the vector will be n × 1)
that consists of y and its derivatives as follows:

$$ x(t) = \left[\; y(t) \quad \frac{dy(t)}{dt} \quad \frac{d^2 y(t)}{dt^2} \quad \cdots \quad \frac{d^{n-1} y(t)}{dt^{n-1}} \;\right]^T. $$
Using this vector, the differential equation becomes a first order vector-matrix differential equation. The equation comes about quite naturally because all of the derivatives of the elements in the vector x(t) are linear combinations of elements in x(t). Thus, any linear differential equation becomes
$$\frac{dx(t)}{dt} = Ax(t) + bu(t),$$
where A and b are the matrices
$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\dfrac{a_{1}}{a_{n+1}} & -\dfrac{a_{2}}{a_{n+1}} & -\dfrac{a_{3}}{a_{n+1}} & \cdots & -\dfrac{a_{n}}{a_{n+1}} \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ \dfrac{1}{a_{n+1}} \end{bmatrix}.$$
If this is the first time you have encountered this equation, you should verify that the matrix equation and the original differential equation are the same (see Problem 2.1).
We will solve this equation in two steps. First, we assume that u(t) ≡ 0 and get the solution; then we will let u(t) be nonzero and derive the solution. We will do this using matrix algebra. When u(t) ≡ 0 we have
$$\frac{dx(t)}{dt} = Ax(t) \quad\text{with } x(0) = x_{0}.$$
If A in this equation were a scalar (say a), then the solution would be $x(t) = e^{at}x(0)$. This comes immediately by substituting this value for x into the differential equation and using the fact that
$$\frac{d}{dt}\,e^{at}x_{0} = a e^{at}x_{0}.$$
Can we expand the idea of an exponential to a matrix? To answer this we need to think about what the exponential function is. When we say that a function $e^{at}$ exists, what we really mean is that the infinite series
$$e^{at} = 1 + at + \frac{a^{2}t^{2}}{2!} + \frac{a^{3}t^{3}}{3!} + \cdots + \frac{a^{n}t^{n}}{n!} + \cdots$$
is convergent and well behaved. Therefore, we can differentiate this by taking the derivative of each term in the series. Can we do the same thing with a matrix?
Let us define the matrix exponential (analogous to the scalar exponential) as
$$e^{At} = I + At + \frac{A^{2}t^{2}}{2!} + \frac{A^{3}t^{3}}{3!} + \cdots + \frac{A^{n}t^{n}}{n!} + \cdots.$$
This series converges because we can always find a matrix, T, which will convert the matrix A to a diagonal or Jordan form. For now, let us assume that the A matrix is diagonalizable. That is, there exists a matrix T such that $TAT^{-1} = \Lambda$, a diagonal matrix. (See Chapter 10 of Numerical Computing with MATLAB [29] for a discussion of eigenvalues and the diagonalization of a matrix in MATLAB.)
Multiplying both sides of the definition of the matrix exponential by T on the left and $T^{-1}$ on the right gives
$$Te^{At}T^{-1} = I + TAT^{-1}t + \frac{TA^{2}T^{-1}t^{2}}{2!} + \frac{TA^{3}T^{-1}t^{3}}{3!} + \cdots + \frac{TA^{n}T^{-1}t^{n}}{n!} + \cdots.$$
If we insert the identity matrix in the form of $T^{-1}T$ between each of the powers of A in this series, we get
$$Te^{At}T^{-1} = I + TAT^{-1}t + \frac{TA(T^{-1}T)AT^{-1}t^{2}}{2!} + \frac{TA(T^{-1}T)A(T^{-1}T)AT^{-1}t^{3}}{3!} + \cdots.$$
Now, by grouping slightly differently we get
$$Te^{At}T^{-1} = I + TAT^{-1}t + \frac{(TAT^{-1})(TAT^{-1})t^{2}}{2!} + \frac{(TAT^{-1})(TAT^{-1})(TAT^{-1})t^{3}}{3!} + \cdots.$$
Therefore, the infinite series is now the sum of an infinite number of diagonal matrices whose n diagonal elements are the infinite series that sums to the exponential $e^{\lambda_{i}t}$ for each eigenvalue $\lambda_{i}$ of the matrix A. The steps for this are as follows:
$$Te^{At}T^{-1} = I + \Lambda t + \frac{\Lambda^{2}t^{2}}{2!} + \frac{\Lambda^{3}t^{3}}{3!} + \cdots + \frac{\Lambda^{n}t^{n}}{n!} + \cdots = e^{\Lambda t} = \begin{bmatrix} e^{\lambda_{1}t} & 0 & 0 & \cdots & 0 \\ 0 & e^{\lambda_{2}t} & 0 & \cdots & 0 \\ 0 & 0 & e^{\lambda_{3}t} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & e^{\lambda_{n}t} \end{bmatrix}.$$
Therefore, the matrix exponential exists and is absolutely convergent. This constructive proof of the existence of the matrix exponential also shows how to compute it. To construct it, first compute the diagonal matrix above from each of the eigenvalues of the matrix and then multiply the result on the right by T and on the left by $T^{-1}$.
MATLAB allows construction of a simple M-file that will generate the solution matrix this way. The code is expm_ncs. There is a built-in matrix exponential function in MATLAB that performs the calculation, but since the code is not visible, the M-file expm_ncs is in the NCS library. Open this file and follow the steps it uses to form the matrix exponential.
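For readers who want to experiment at the command line, a minimal sketch of the diagonalization idea follows. It is only an illustration (the name expm_diag is made up here; it is not the expm_ncs code in the NCS library), and it assumes that A is diagonalizable.
function E = expm_diag(A, t)
% EXPM_DIAG  Matrix exponential e^(A*t) from the eigenvalues and
% eigenvectors of A (assumes A is diagonalizable).
[V, Lambda] = eig(A);                     % A*V = V*Lambda, so V plays the role of T^(-1)
E = V * diag(exp(diag(Lambda)*t)) / V;    % e^(A*t) = V * e^(Lambda*t) * inv(V)
if isreal(A)
    E = real(E);                          % discard round-off imaginary parts for real A
end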
We are only halfway to solving the linear differential equation we set out to solve.
So now, let us remove the restriction that u(t) ≡ 0. Before we show what happens when
we do this, there are a couple of facts about the matrix exponential that you need to know.
(Spend a few seconds verifying these facts, using the hint below each of the statements; see
Exercise 2.2.)
Fact 1: The derivative of $e^{At}$ is $Ae^{At}$.
Show this by differentiating the infinite series term by term.
Fact 2: The inverse of $e^{At}$ is $e^{-At}$.
Show this by taking the derivative of the product $e^{At}e^{-At}$ (using the first fact) and show that the result is zero. Then show that this means that the product is a constant equal to the identity matrix.
Now, return to the differential equation $\frac{dx(t)}{dt} = Ax(t) + bu(t)$ and let u(t) be nonzero. Conceptually, the input u(t) is like a change to the initial conditions of the differential equation at every time step. Thus, it seems reasonable that the solution to the equation should have the form $e^{At}c(t)$, where c(t) is a pseudoinitial condition that is a function of time that is to be determined.
Substituting this assumed solution into the differential equation gives
$$\frac{d}{dt}\left[e^{At}c(t)\right] = Ae^{At}c(t) + bu(t).$$
However,
$$\frac{d}{dt}\left[e^{At}c(t)\right] = Ae^{At}c(t) + e^{At}\frac{d}{dt}c(t).$$
When this result is used, we get
$$Ae^{At}c(t) + e^{At}\frac{d}{dt}c(t) = Ae^{At}c(t) + bu(t)$$
or
$$e^{At}\frac{d}{dt}c(t) = bu(t).$$
Multiplying by $e^{-At}$ gives the following for c(t):
$$c(t) = \int_{0}^{t} e^{-A\tau}bu(\tau)\,d\tau + x_{0}.$$
The fact that the constant of integration is $x_{0}$ comes from letting t = 0 in the solution. Therefore the solution of the differential equation is
$$x(t) = e^{At}c(t) = e^{At}x_{0} + \int_{0}^{t} e^{A(t-\tau)}bu(\tau)\,d\tau.$$
We call the integral in this expression the convolution integral.
2.1.1 Solving a Differential Equation at Discrete Time Steps
In our discussion of digital filters in Chapter 4, we will need to understand how to convert a continuous time linear system into an equivalent discrete time (digital) representation. One method uses the solution above.
Assume that the analog-to-digital (A/D) converter at the input samples u(t) to make it piecewise constant (i.e., the analog input is sampled at times $k\Delta t$ and held constant over the interval $\Delta t$). We call this a zero order hold sampler. Under this assumption, the solution above becomes a difference equation by following these steps:
Assume that the initial time, $t_{0}$, is the sample $k\Delta t$ and the current time, t, is $(k+1)\Delta t$.
Assume that u(t) is sampled so that its value is constant over the sample interval, i.e., $u(t) = u(k\Delta t)$ for $k\Delta t \le t < (k+1)\Delta t$, which we denote by $u_{k}$.
Denote the value of the vector $x(k\Delta t)$ by $x_{k}$.
With these assumptions, the solution becomes the difference equation
$$x_{k+1} = e^{A\Delta t}x_{k} + \left[\int_{k\Delta t}^{(k+1)\Delta t} e^{A((k+1)\Delta t-\tau)}\,d\tau\right] b\,u_{k} = \Phi(\Delta t)x_{k} + \Gamma(\Delta t)u_{k}.$$
The matrix $\Phi(\Delta t)$ is $e^{A\Delta t}$, and the matrix $\Gamma(\Delta t) = \int_{0}^{\Delta t} e^{A\tau}\,d\tau\, b$ comes from the integral by changing the integration variable $(k+1)\Delta t - \tau = \tau'$ (which makes $d\tau = -d\tau'$):
$$\Gamma(\Delta t) = \int_{k\Delta t}^{(k+1)\Delta t} e^{A((k+1)\Delta t-\tau)}\,d\tau\; b = \int_{0}^{\Delta t} e^{A\tau'}\,d\tau'\; b.$$
Notice that this difference equation is also a vector matrix equation. It corresponds to the solution of the differential equation at the times $k\Delta t$, and this discrete time solution is the same as the continuous time solution when the input u(t) is a constant.
We can now solve any linear differential equation we encounter using linear algebra. In fact, we have a whole bunch of ways of solving the equation. So let us explore them. Before we do, there are some names that you should be aware of that are associated with the solution method that we just went through. We call the vector x(t) in the differential equation the "state vector" and the differential equation the "state-space model." In the solution, the matrix $\Phi = e^{At}$ is the transition matrix. If you check the Simulink Continuous library, you will find the state-space icon that allows you to put a linear differential equation in state-space form in a model. Similarly, in the list of digital filters in the Discrete library, the eighth block is the Discrete State-Space. It allows you to enter the difference equation above. In order to match the continuous and discrete models, we need to be able to compute for any A and b the values of $\Phi(\Delta t)$ and $\Gamma(\Delta t)$. MATLAB can do this.
The NCS library contains a code called c2d_ncs that will create these matrices. The inputs to c2d_ncs are A, b, and the sample time $\Delta t$; the outputs are phi and gamma. The M-file, shown below, uses the matrix exponential expm in MATLAB. This code is similar to the code called c2d that is part of the Control System Toolbox in MATLAB.
function [Phi, Gamma] = c2d_ncs(a, b, deltat)
% C2D_NCS Converts the continuous time state space model to a discrete
% time state space model that is equivalent at the sample times under
% the assumption that the input is constant over the sample time.
%
% [Phi, Gamma] = C2D_NCS(A,B,deltat) converts:
%    .
%    x = Ax + Bu
% into the discrete-time state-space system:
%
%    x[k+1] = Phi * x[k] + Gamma * u[k]
[ma,na] = size(a);
[mb,nb] = size(b);
if ma ~= na
    error('The matrix a must be square')
end
if mb ~= ma
    error('The matrix b must have the same number of rows as the matrix a')
end
augmented_matrix = [[a b]; zeros(nb,na+nb)];
exp_aug = expm(augmented_matrix*deltat);
Phi = exp_aug( 1:na, 1:na );
Gamma = exp_aug( 1:na, na+1:na+nb );
The cute trick in this code is to build the augmented matrix $\begin{bmatrix} A & b \\ 0 & 0 \end{bmatrix}$, where the zero blocks at the bottom have as many rows as b has columns and as many columns as the concatenation of A and b. Computing the matrix exponential of this matrix automatically generates the integral $\int_{0}^{\Delta t} e^{A\tau}b\,d\tau$. (See Exercise 2.3, where you will verify this yourself. Hint: Use the fact that the integral of the bottom rows in the matrix is a constant.)
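A quick numerical sanity check of this trick is sketched below; it is not part of the NCS library, and the matrices and sample time are arbitrary illustrative values.
% Compare the augmented-matrix result with a brute-force midpoint-rule integral.
A = [0 1; -2 -3];  b = [0; 1];  dt = 0.05;   % arbitrary test data
n = size(A,1);  m = size(b,2);
M = expm([A b; zeros(m, n+m)]*dt);           % exponential of the augmented matrix
Gamma = M(1:n, n+1:n+m);                     % the upper right block
tau = linspace(0, dt, 2001);                 % numerical integral of expm(A*tau)*b
G = zeros(n, m);
for k = 1:numel(tau)-1
    tm = 0.5*(tau(k) + tau(k+1));
    G = G + expm(A*tm)*b*(tau(k+1) - tau(k));
end
disp(norm(Gamma - G))                        % should be very small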
2.1.2 Linear Differential Equations in Simulink
We can now go to Simulink and solve some linear differential equations. In the clock example, the equation for the pendulum was
$$\frac{d^{2}\theta}{dt^{2}} = -\frac{c}{ml^{2}}\frac{d\theta}{dt} - \frac{g}{l}\sin(\theta) + 0.1148\,u(t).$$
To make this linear, assume that the pendulum angle is small so that $\sin(\theta) \approx \theta$. (This comes from the fact that $\lim_{\theta\to 0}\frac{\sin(\theta)}{\theta} = 1$.) The parameter values in the model were such that the linear differential equation becomes
$$\ddot{\theta} + 0.01\,\dot{\theta} + 39.4366\,\theta = 0.1148\,u(t).$$
Using the state-space form we developed above gives
$$\frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 \\ -39.4366 & -0.01 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0.1148 \end{bmatrix} u(t).$$
If we convert this, using the c2d_ncs M-file, into a discrete system with a sample time of 0.01 seconds, the matrices $\Phi$ and $\Gamma$ are
phi =
    0.99802888363385   0.00999292887448
   -0.39408713885130   0.99792895434511
gamma =
    0.00000573792261
    0.00114718823479
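These numbers can be reproduced at the MATLAB prompt; the sketch below assumes that the NCS library (and hence c2d_ncs) is on the path, and it also iterates the resulting difference equation from the initial state used later in this section.
A = [0 1; -39.4366 -0.01];  b = [0; 0.1148];
[phi, gamma] = c2d_ncs(A, b, 0.01);   % discretize with a 0.01 sec sample time
x = [1; 0];  u = 0;                   % initial angle 1, initial velocity 0, no input
xlog = zeros(2, 1001);  xlog(:,1) = x;
for k = 1:1000
    x = phi*x + gamma*u;              % the discrete state-space recursion
    xlog(:,k+1) = x;
end
plot(0:0.01:10, xlog(1,:))            % pendulum angle at the sample times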
Figure 2.1. Using Simulink to compare discrete and continuous time state-space versions of the pendulum.
We can now compare the solutions in Simulink for the continuous version and the discrete version using the state-space blocks for each. The NCS library contains the Simulink model State_Space, which you can open (or build yourself using Figure 2.1 as the guide). The state-space block requires values for two additional matrices (called C and D). They define the output of the block. Specifically, the output of the block, denoted by y, is y = Cx + Du. Because we defined the state x with the output y as the first entry (and the derivatives of y as the subsequent entries), the output and its derivatives are always available as a simple matrix product. If we are interested only in the value of y, then the matrix $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$, and since the input u does not appear in the output, D = 0 (a scalar). These values, along with the values of A and b, are set when the model opens using the model preload function callback in the model properties. We also have set the initial displacement of the pendulum to be 1 and the initial velocity of the pendulum to be 0. This is done in the block dialogs by setting the initial condition vector to be [1 0].
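For reference, the kind of MATLAB commands such a preload callback might contain is sketched below; the variable names are illustrative and may differ from those used in the actual State_Space model.
A = [0 1; -39.4366 -0.01];   b = [0; 0.1148];   % data for the continuous State-Space block
C = [1 0];                   D = 0;             % output is the pendulum angle only
[phi, gamma] = c2d_ncs(A, b, 0.01);             % data for the Discrete State-Space block
x0 = [1; 0];                                    % initial displacement 1, initial velocity 0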
As an exercise, let us see how the two solutions compare. We set up the model so that a sum block creates the difference between the two solutions and sends the difference to a Scope. We also set the model up to run for 1000 sec, and to compute the solution to the continuous time model with the ode45 algorithm with the relative tolerance set to $10^{-10}$.
The Simulink model that compares the state-space models in discrete and continuous form is in Figure 2.1; Figure 2.2 contains plots that show the solution for the continuous model and the difference between the continuous solution and the discrete solution. Notice that the largest difference is about $3\times 10^{-8}$, which is almost entirely due to the conversion from continuous to discrete form. This is because in the Configuration Parameters dialog (under the Simulation menu) we set the relative tolerance for the ode45 solver to $10^{-10}$.
The ability to simulate a continuous time linear differential equation using the iterative discrete version is obviously very useful when it comes to designing a digital filter. We will return to this later when we show details about the design of digital filters.
For now, we need to spend some time on a detour that shows how to get the frequency response of a continuous time system. To do this we need to introduce and investigate the properties of the Laplace transform.
Figure 2.2. Outputs as seen in the Scope blocks when simulating the model in Figure 2.1: the output of the continuous state-space model (Scope2, top) and the difference between the continuous and discrete models (Scope, bottom).
2.2 Laplace Transforms for Linear Differential Equations
Early in his career, Pierre-Simon Laplace (1749–1827) was interested in the stability of the orbits of planets. He developed his eponymous transform to solve the linear differential equations that defined the perturbations away from the nominal orbit of a planet, and consequently there is no single paper devoted only to the transform. In fact, he treated the transform as simply a tool and most times used it without explanation.
The transform converts a differential equation into an equivalent algebraic equation whose solution is more manageable. Today, this use is not as important because we have computers, and tools like Simulink, to do this for us. However, one aspect of the Laplace transform is still extensively used: the ability to understand the effect of a differential equation on a sinusoidal input in terms of the frequency of the sinusoid. We will make extensive use of this in our discussions of electrical systems, mechanical systems, and both analog and digital filtering.
The Laplace transform of a function f(t) is
$$F(s) = \int_{0}^{\infty} e^{-st} f(t)\,dt.$$
With this definition, it is easy to build a table of Laplace transforms for different types of functions. The simplest is the function f(t) = 1 when t ≥ 0 and f(t) = 0 when t < 0. This is the step function because its value steps from 0 to 1 at the time t = 0. Putting this function in the definition, we get
$$F(s) = \int_{0}^{\infty} e^{-st}\,dt = -\frac{1}{s}e^{-st}\Big|_{0}^{\infty} = \frac{1}{s}.$$
When the value of the integral is evaluated at infinity, the assumption is made that the real part of s is > 0. In a similar way, the transform of $e^{-at}$ is $\frac{1}{s+a}$. The Laplace transforms for the sine and cosine can use this transform and the fact that $\sin(bt) = \frac{e^{ibt}-e^{-ibt}}{2i}$ and $\cos(bt) = \frac{e^{ibt}+e^{-ibt}}{2}$ (see Exercise 2.4).
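If the Symbolic Math Toolbox is available, these table entries are easy to check directly; the sketch below is only a verification aid and is not part of the NCS library.
syms t s a b
laplace(heaviside(t), t, s)   % returns 1/s, the transform of the unit step
laplace(exp(-a*t), t, s)      % returns 1/(a + s)
laplace(sin(b*t), t, s)       % returns b/(b^2 + s^2)
laplace(cos(b*t), t, s)       % returns s/(b^2 + s^2)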
We can also apply the Laplace transform to the state-space model we developed in the previous section. In order to do this we need to know the Laplace transform of the derivative of a function. Thus, if the Laplace transform of a function f(t) is F(s), the derivative $\frac{df(t)}{dt}$ has the Laplace transform sF(s) − f(0). This follows from the definition and integration by parts as follows:
$$\int_{0}^{\infty} \frac{df(t)}{dt}e^{-st}\,dt = s\int_{0}^{\infty} f(t)\,e^{-st}\,dt + f(t)e^{-st}\Big|_{0}^{\infty} = sF(s) - f(0).$$
Since the Laplace transform of the derivative is s times the Laplace transform of the function differentiated, it follows that the Laplace transform of the integral is 1/s. From this comes the Simulink notation for the integral block in the Continuous library.
Let us apply this definition to the state-space model $\frac{dx(t)}{dt} = Ax(t) + bu(t)$. The Laplace transform of the vector x(t), X(s), is the Laplace transform of each element, so
$$sX(s) - x(0) = AX(s) + bU(s).$$
This gives X(s) as
$$X(s) = [sI - A]^{-1}x(0) + [sI - A]^{-1}bU(s).$$
If we compare this to the solution $x(t) = e^{A(t-t_{0})}x_{0} + \int_{t_{0}}^{t} e^{A(t-\tau)}bu(\tau)\,d\tau$ we developed earlier, it is obvious that the Laplace transforms of the two terms in the solution are (using the symbol $\mathcal{L}$ to denote the Laplace transform)
$$\mathcal{L}(\Phi(t)) = \mathcal{L}(e^{At}) = [sI - A]^{-1}$$
and
$$\mathcal{L}\left(\int_{t_{0}}^{t} e^{A(t-\tau)}bu(\tau)\,d\tau\right) = [sI - A]^{-1}bU(s).$$
The last transform is the convolution theorem; it says that the Laplace transform of the
convolution integral is the product of the transforms of the two functions in the integral.
Also note that the Laplace transform of the matrix exponential has exactly the same form as
the transform of the scalar exponential (except the inverse is used and the Laplace variable
is multiplied by the identity matrix).
As we noted when we introduced the state-space model, the output (for a scalar input u) is
$$y = Cx + du.$$
One of the most important uses of the Laplace transform is the transfer function of a linear system. The transfer function is $\frac{Y(s)}{U(s)}$ when the initial conditions are zero. In the state-space model, if y(t) is of dimension m (i.e., there are m outputs from the model), then m transfer functions are created.
Using the Laplace transforms above gives the transfer function for the state-space model as
$$\frac{Y(s)}{U(s)} = C[sI - A]^{-1}b + d.$$
The transfer function of the linear system can specify the properties of the differential equation in a Simulink simulation. For example, the transfer functions that result from taking the Laplace transform of the state-space model for the clock dynamics are (once again using the symbol $\mathcal{L}$ to denote the operation of taking the Laplace transform)
$$\mathcal{L}\left(\frac{dx(t)}{dt}\right) = \mathcal{L}\left(\begin{bmatrix} 0 & 1 \\ -39.4366 & -0.01 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0.1148 \end{bmatrix} u(t)\right), \qquad y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} x(t).$$
Moreover, using the derivation above and the fact that the initial condition is zero for the transfer function, we get
$$\frac{Y(s)}{U(s)} = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} s & -1 \\ 39.4366 & s+0.01 \end{bmatrix}^{-1}\begin{bmatrix} 0 \\ 0.1148 \end{bmatrix} = \begin{bmatrix} 1 & 0 \end{bmatrix}\frac{\begin{bmatrix} s+0.01 & 1 \\ -39.4366 & s \end{bmatrix}}{s^{2}+0.01s+39.4366}\begin{bmatrix} 0 \\ 0.1148 \end{bmatrix} = \frac{0.1148}{s^{2}+0.01s+39.4366}.$$
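If the Control System Toolbox (or Signal Processing Toolbox) is available, the same transfer function can be obtained numerically as a quick check; this is only a sketch, not a model in the NCS library.
A = [0 1; -39.4366 -0.01];  b = [0; 0.1148];
C = [1 0];                  d = 0;
[num, den] = ss2tf(A, b, C, d)
% num = [0 0 0.1148] and den = [1 0.01 39.4366], i.e.,
% Y(s)/U(s) = 0.1148/(s^2 + 0.01 s + 39.4366).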
The Simulink model in Figure 2.3 uses the two transfer-function blocks in Simulink to define the clock example that we developed in Section 1.3. We opened this from the NCS library with the command Clock_transfer_functions. Notice that the blocks are Transfer Fcn and Zero-Pole. The first uses the transfer function in exactly the form that we just developed, whereas the second uses the form with the numerator and denominator polynomials factored. The numerator polynomial's roots are the zeros of the transfer function since the roots of the numerator, when substituted for s, result in a value of zero for the transfer function.
Figure 2.3. Linear pendulum model using Transfer Function and Zero-Pole-Gain blocks from the Simulink Continuous Library.
Similarly, the roots of the denominator are called the poles of the transfer function since setting s to these values causes the transfer function to become infinite. (As poles go, these are truly long poles.)
The simulation would result in a value of exactly zero if there were no input since the transfer function assumes that the initial condition is zero. No initial condition combined with an input of 0 would mean that everything in the simulation is exactly zero for all time. For this reason, the model uses the Initial Condition (IC) block from the Signal Attributes library in Simulink to force an initial value for the input to the transfer functions. This block causes only the input u(t) to have an initial condition, not the output. To get the output to have the right value at the start of the simulation is rather delicate. It involves issues about how long the input stays at the value specified by the IC block, the amplitude of the output caused by this initial input, and the desired output amplitude. To ensure that the simulation produces consistent results, the solver for this model was a fixed step solver with a step size of 0.01 sec. The initial condition for the input was set to 10000/9, the value needed to force the output to be 0.2 at t = 0. The simulation results then are the same as the state-space and the original clock simulation results shown above. The difficulties associated with getting the initial condition right make transfer functions difficult to use for initial value problems. For this reason, either the state-space model or the model created using integrators (as in the clock simulation above) is the better choice.
2.3 Linear Feedback Control
One of the more powerful uses for Simulink is the evaluation of the performance of potential control strategies for complex systems. If you are designing an aircraft flight-control system; a spacecraft's guidance, navigation, or attitude control system; or an automobile engine controller, you often have neither the luxury nor the ability to build the device with the intention of trying the control system and then tinkering with it until it works. The preferred method for these complex systems is to build a simulation of the device and use the simulation to tinker. This section will introduce the methods for control system design and its mathematics and then will show how to use Simulink to tinker with the design.
2.3.1 What Is a Control System?
Early in the industrial revolution, it became clear that some attributes of a mechanical system required regulation. One of the first examples was the control of water flowing through a sluice gate to maintain a constant speed for the water wheel. Although many control ideas were tried, they often operated in an unstable way or hunted excessively, causing the water wheel to alternately speed up and slow down. James Clerk Maxwell analyzed such a device and showed that its stability and performance required the roots of an equation that came from the differential equation (the poles that we discussed in Section 2.2). Unfortunately, the mathematics at the time allowed factoring only of third order and some fourth order polynomials, so it was very difficult to design a system that involved dynamics that had more than three differential equations.
Mathematicians found an approach that handled higher order systems, but today the digital computer and programs such as MATLAB and Simulink have made those methods interesting only as historical footnotes.
So what exactly is a control system? The easiest way to understand a control system is to show an example. In the process, we will build a Simulink model that we can experiment with to understand the various design issues and the consequences of incorrect choices.
One of the simplest and most commonly encountered control systems is that used to keep the temperature constant for a home heating system. The dynamics for this system can be grossly described by assuming that the entire house is a single thermal mass which has both heat loss (via conduction through the walls, windows, floors, and ceilings) and heat gain from the heating plant (via the radiators, convectors, or air vents, depending on whether the system uses steam, hot water, or hot air). Simple thermodynamics states that the temperature, T, of the house is
$$C\frac{dT}{dt} = -\frac{kA}{l}\left(T - T_{outside}\right) + Q_{in}\,u(t),$$
where C is the thermal capacity of the house,
T is the temperature of the house,
$T_{outside}$ is the air temperature outside the house,
k is the average coefficient of thermal conductivity of all of the house surfaces,
A is the area of all of the house surfaces,
l is the average thickness of the insulation in the house,
$Q_{in}$ is the heat from the heat exchanger,
u(t), the control signal applied to the furnace, is either zero or one.
Most heating appliances in the United States use the English system of units, where heat is in British thermal units (BTUs), temperature is in degrees Fahrenheit, dimensions are in feet, and thermal capacity is defined as the thermal mass of the house times the specific heat capacity of the materials in the house. (The units are BTUs per degree F.)
Clearly, a house does not consist of a single material, or a single compartment with a temperature T, or a single thermal mass. For this reason, a more detailed model would have many terms corresponding to different rooms in the house, with different losses corresponding to the walls, doors, windows, floors, ceilings, etc. We will develop a more complex model like this in Chapter 6.
Figure 2.4. Home heating system Simulink model (Thermo_NCS). This model describes the house temperature using a single first order differential equation and assumes that a bimetal thermostat controls the temperature.
We will use a model that is one of The MathWorks Simulink demonstrations. (This
model, sldemo_househeat, comes with Simulink, but we have a version of the model in
the NCS library.) Open the model by typing Thermo_NCS at the MATLAB prompt. This
opens the model shown in Figure 2.4.
The model uses meter-kilogram-second units, so there are conversions from degrees
Fahrenheit to Celsius and back in the model. (These are the blocks called F2C and C2F,
respectively). One new feature of Simulink that we have not seen before is in the model.
It is the multiplexer block (or the mux block) that is found in the Signal Routing library.
This block converts multiple signals into a single vector signal. In the model, it forces the
Scope block to plot two signals together in a single plot. We configured the Scope block
in this model to show two separate plot axes; we did this by opening the plot parameters
dialog by double clicking the icon at the top of the plot that is next to the printer icon and
then changing the Number of axes to two.
We have seen the use of a subsystem in the clock example. Here we use subsystems to group two of the complex models into a visually simplified model by grouping the equations for the house and the thermostat inside subsystem blocks. The house model contains the differential equation above, and the thermostat model uses the hysteresis block from the Discontinuous library in Simulink.
The house model is a subsystem in the Simulink model that implements the differential equation above. The subsystem is in Figure 2.5.
The thermal capacity of the house is computed from the total air mass, M, in the house and the specific heat, c, of air. The thermal conductivity comes from the equivalent thermal resistance to the flow of heat through the walls, ceilings, glass, etc.
Figure 2.5. Subsystem House contains the single differential equation that models the house.
All of these data and the data for the heating system (which is electric) are loaded from the MATLAB M-file Thermdat_NCS.m in the NCS Library. To see the various data and their functional relationship to the dimensions of the house and the insulation properties, etc., edit and review the data in this file.
The model uses a sinusoidal input for the outside temperature that gives a variation
of 15 deg F around an average temperature of 50 deg F. The plots of the indoor and outdoor
temperatures and the total heating cost for a 24-hour (86,400 sec) period are in Figure 2.6.
We can now see the features of this home heating controller (and any other control
system for that matter). It consists of a system that needs to be controlled (in this case, the
temperature of the house needs to be controlled, and the thermodynamics of the house is the
system). The control system needs to measure the variables we are trying to control (in this
case the thermostat measures the house temperature), and it needs a device to calculate the
difference between the desired and actual variable. It also needs a device (a controller) that
converts the desired state into an action that causes the system to move toward the desired
condition. In this case, the control is the heater; the controller is the thermostat. (The
thermostat plays the dual role of both the temperature sensor, working on the difference
between the actual and desired temperature values, and the control in this application.) In
order to model the thermostat, we need to look at how this device works.
The photograph in Figure 2.7⁴ shows a typical thermostat. The temperature difference
is detected using a metal coil (2) made from two dissimilar materials, each with different
heat expansion rates (so-called bimetals). The bimetal's expansion difference translates into
a torque that causes a contact (4 and 6) to open and close. A lever (3) sets the tension of
the coil and therefore the room temperature set point. Moving the lever makes the contact
coil rotate closer or further away from the contact (6). When the temperature drops below
the set point, the bimetal rotates and the contacts close, making the heat source come on.
When the temperature rises above the set point, the contacts open and the heat goes off.
The contact 4 has a magnet that holds it closed (just to the left of the (6) in Figure 2.7).
This forces the contacts to stay closed until the temperature of the room rises well past the
⁴ This figure is from the article on thermostats in Wikipedia, The Free Encyclopedia web site [50].
Figure 2.6. Results of simulating the home heating system model. The top figure shows the outdoor and indoor temperatures (with a 70 deg F set point), and the bottom figure shows the cost of the heat over the 24-hour simulation.
Figure 2.7. A typical home-heating thermostat [50].
Figure 2.8. Hysteresis curve for the thermostat in the Simulink model.
point at which the contacts closed. (The force from the magnet is greatest when the contacts are closed because of the close proximity of the magnet and the contact.) This effect, called hysteresis, results in the room temperature rising above the desired value and then falling below the desired value by some amount. Look closely at the room temperature in the plot above; the room temperature rises to about 5 deg above and falls about 5 deg below the desired value. In the next section, we will develop a simple controller that will eliminate this problem.
The thermostat's mechanical motion can be modeled using the built-in hysteresis block in the Simulink Discontinuous block library. This block captures the essential features of the thermostat, as can be seen from the input-output map of the block. Figure 2.8 shows the input-output graph.
An input-output map of a device shows how the output changes as the input varies. In the thermostat, the temperature is the input, and the output is either 0 (off) or 1 (on). When the temperature is below the set value ($T_{set}$), the thermostat is on and the output of the block is 1; as the temperature increases because of the heat being provided, the temperature inevitably becomes greater than the off temperature ($T_{off}$) and the thermostat turns off (the output becomes 0). As the house cools down, the temperature must fall below the value $T_{off}$ to the value $T_{on}$ before the output changes to 1 again. The difference between $T_{set}$ and $T_{off}$ and between $T_{set}$ and $T_{on}$ is set to 5 deg in the model by changing the values in the dialog that opens when you double click on the block.
Once you have built the model, try different values for the hysteresis, change the temperature profile for the outside, and see if you can find an outside temperature below which the house will not stay at the desired set point temperature.
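The thermostat's switching logic is also easy to capture in a few lines of MATLAB if you want to experiment with the switching temperatures outside of Simulink. The sketch below is only an illustration (the function and variable names are made up); the model itself should continue to use the hysteresis block.
function [on, state] = thermostat(T, Tset, hyst, state)
% THERMOSTAT  Relay with hysteresis: T is the room temperature, Tset the
% set point, hyst the switching offset (5 deg in the model), and state the
% previous on/off value (1 = heater on, 0 = heater off).
if T < Tset - hyst
    state = 1;          % too cold: turn the heater on
elseif T > Tset + hyst
    state = 0;          % too hot: turn the heater off
end                     % otherwise keep the previous state (the hysteresis)
on = state;
end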
2.3.2 Control Systems and Linear Differential Equations
The basic dynamics for the heating system is the first order linear differential equation for the heat flow into and out of the house. These dynamics are linear, but the system is nonlinear because of the thermostat. The controller nonlinearity causes the heat to cycle to
a temperature that is 5 deg too hot and then fall to a temperature that is 5 deg too cold. This 10-deg swing in the room can cause discomfort for the occupants since the room is either too hot or too cold. What could we do to fix this? The simple answer is to make the controller linear, because if you did so, the temperature would stay almost exactly at the set point.
Therefore, we would like to control the heater so that the heat output is proportional to the temperature difference; it should be possible to keep the temperature at the desired value without the 10-deg swing produced by the thermostat. A simple way to achieve this would be to control the heater using a device that makes the heat output proportional to the input. Electronic devices exist that allow this form of control. We will actually build such a controller in Chapter 6, refine it further in Chapter 7, and look at a more complete version in Chapter 8.
Another important aspect of a control system is the fact that a properly operating controller causes the system to deviate away from the set point by only a small amount. This means that even a very nonlinear system is linearizable for purposes of control. Let us formalize these ideas and use the results to justify the use of linear systems analysis for control systems.
2.4 Linearization and the Control of Linear Systems
In the most general situation, the underlying mathematical model of a system is nonlinear.
Nonlinearities occur naturally because of the three dimensional world we live in. For
example, a rotation about an axis is described through a coordinate transformation which
involves the sine and cosine of the rotational angle. (We will discuss this in Chapter 3.)
Another example is the air drag that is a function of the square of the speed; we modeled this
when we were exploring the Leaning Tower of Pisa in Chapter 1. Other nonlinear effects
are a consequence of the properties of the materials that we encounter. All control system
designs use linear differential equations to model the physical systems under control. This
is true because a control system is usually trying to keep the system at a particular value
or is trying to track an input. In either case, the perturbations away from the desired input
are small. We will show an example of this in Section 2.4.2, but first let us see how we go about finding a model for small perturbations. (It is the linear model.)
2.4.1 Linearization
As we stated above, the goal of a control system is to make some system attribute constant or to follow a slowly changing command. (Think of the autopilot on an airplane that is trying to keep the altitude of the plane constant.) In other words, the control is trying to keep perturbations small. To make this concept more precise, let us consider a general nonlinear differential equation that models a physical system. This differential equation might come from a Simulink model that you have built. The form of the equation then will be a set of nonlinear differential equations that couple through nonlinear functions as follows:
$$\frac{dx_{i}(t)}{dt} = f_{i}(x_{1}, x_{2}, \ldots, x_{n}, t) + g(u_{1}, u_{2}, \ldots, u_{m}, t), \qquad i = 1, 2, \ldots, n.$$
There are n first order differential equations in this description. The n values of $x_{i}(t)$ are lumped into an n-vector x(t), the m values of $u_{i}(t)$ are lumped into an m-vector u(t), and the n functions $f_{i}(x_{1}, x_{2}, \ldots, x_{n}, t)$ are lumped into an n-vector f. Doing this changes the way we write the differential equation to the more compact
$$\frac{dx(t)}{dt} = f(x, t) + g(u, t).$$
To investigate small perturbations $\delta x(t)$ away from some nominal solution $x_{nom}(t)$ of this equation, let $x(t) = x_{nom}(t) + \delta x(t)$ and $u(t) = u_{nom}(t) + \delta u(t)$, and substitute them into the differential equation. The result is
$$\frac{d(x_{nom}(t) + \delta x(t))}{dt} = f(x_{nom}(t) + \delta x(t), t) + g(u_{nom}(t) + \delta u(t), t).$$
If the perturbations are small enough, then a Taylor series gives a first order perturbation away from the nominal as
$$\frac{d(x_{nom}(t))}{dt} + \frac{d(\delta x(t))}{dt} = f(x_{nom}(t), t) + \frac{\partial f(x_{nom}(t), t)}{\partial x}\,\delta x(t) + \cdots + g(u_{nom}(t), t) + \frac{\partial g(u_{nom}(t), t)}{\partial u}\,\delta u(t) + \cdots.$$
In these equations, the partial derivatives of any vector with respect to another vector are matrices defined as follows:
$$\frac{\partial f}{\partial x} = \begin{bmatrix} \dfrac{\partial f_{1}}{\partial x_{1}} & \cdots & \dfrac{\partial f_{1}}{\partial x_{n}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_{n}}{\partial x_{1}} & \cdots & \dfrac{\partial f_{n}}{\partial x_{n}} \end{bmatrix}.$$
The Taylor series terms beyond the first derivatives are small as long as the perturbations are small, so we can neglect them.
Using the fact that the nominal values satisfy the differential equation, i.e., that
$$\frac{d(x_{nom}(t))}{dt} = f(x_{nom}(t), t) + g(u_{nom}(t), t),$$
the differential equation for the perturbations is
$$\frac{d(\delta x(t))}{dt} = \frac{\partial f(x_{nom}(t), t)}{\partial x}\,\delta x(t) + \frac{\partial g(u_{nom}(t), t)}{\partial u}\,\delta u(t).$$
Since the matrices in this equation are evaluated at the nominal value, they are time varying, and as a result the equation is linear and in the state-variable form
$$\frac{d\,\delta x(t)}{dt} = A\,\delta x(t) + B\,\delta u(t).$$
In this state equation, the matrix A is $\frac{\partial f(x_{nom}(t), t)}{\partial x}$, the matrix B is $\frac{\partial g(u_{nom}(t), t)}{\partial u}$, the states are the perturbations in x(t), and the controls are the inputs u(t) that create the perturbations.
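A numerical version of this linearization is easy to sketch: perturb each state and each input in turn and difference the right-hand side. The function below is only an illustration (its name and arguments are made up), using central differences with a small step h.
function [A, B] = linearize_fd(f, g, xnom, unom, t, h)
% LINEARIZE_FD  Finite-difference linearization about a nominal point.
% f and g are function handles, f(x,t) and g(u,t), returning column n-vectors.
n = numel(xnom);  m = numel(unom);
A = zeros(n, n);  B = zeros(n, m);
for j = 1:n
    dx = zeros(n,1);  dx(j) = h;
    A(:,j) = (f(xnom+dx, t) - f(xnom-dx, t)) / (2*h);   % j-th column of df/dx
end
for j = 1:m
    du = zeros(m,1);  du(j) = h;
    B(:,j) = (g(unom+du, t) - g(unom-du, t)) / (2*h);   % j-th column of dg/du
end
end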
To recap this result, the use of the linear equation is justified by the fact that when the control system is working properly, the perturbations are small and the Taylor series is an excellent approximation to the actual dynamics. For this reason, discussions on control systems design generally consider only linear models. In a later chapter, we will show how to use the above linearization to create the linear model directly from Simulink. Before we do this, let us create some Simulink models in various forms for a mass attached to a spring.
a spring.
2.4.2 Eigenvalues and the Response of a Linear System
Since linear differential equations form the basis for control system design, it is important
to understand the relationships between the parameters in the state-variable model and the
solution of the differential equations. Let us create some Simulink models for a spring-
mass systemthat is typical of many mechanical systems. Aspring stores mechanical energy
whenever it is compressed or expanded. The energy that the spring stores returns whenever
the spring is not compressed or expanded. The force that the spring applies is typically
nonlinear. However, consistent with the discussion above, we can linearize the force to give
f = Kx, where x is the displacement. (K is the proportionality constant for the spring, and
it has a negative sign so the force is positive when the spring is compressed and negative
when it expands.) The units of K are force per unit displacement (Newtons per meter or
pounds per inch, for example).
M
x(t)
Mg
Kx+D dx/dt
This linear equation by itself describes an
ideal spring; there is no friction. When there is
friction, it is usual to add another linear force,
a so-called damping force, that is proportional
to the speed of the object. The model for this
is f = D
dx
dt
. A spring with a mass attached
therefore has a model that is very much like the
pendulum in Chapter 1. Let us develop an equa-
tion of motion for the spring-mass system in the
gure at the right.
In the gure, the motion x(t ) is positive
down, so the forces from the spring and the
damping are negative (upward). The gure contains a diagram that shows the force balance
used to get the equation of motion.
Using the force balance in the gure along with Newtons second law gives
M
d
2
x
dt
2
= Kx D
dx
dt
+Mg.
We can now easily build a Simulink model for this equation. The model Spring_Mass1
is in the NCS library, but as usual, you should create the model yourself before you open it
from the library. The model is in Figure 2.9. Remember that all of the data for the model
load from a callback function.
We can also use the state-space methods we developed above to convert this equation into a state-space model. Letting
$$x = \begin{bmatrix} x(t) \\ \dfrac{dx(t)}{dt} \end{bmatrix},$$
Figure 2.9. Model of a spring-mass-damper system using Simulink primitives.
Figure 2.10. Model of a spring-mass-damper system using the state-space model and Simulink's automatic vectorization.
the model is
$$\frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 \\ -\dfrac{K}{M} & -\dfrac{D}{M} \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ \dfrac{1}{M} \end{bmatrix} u(t).$$
A Simulink model for this is easy to build. The first, and easiest, way to build the model is to use the State-Space block in the Continuous library (you should try this), but we will use an alternate method that exploits the automatic use of vectors in Simulink. The model (shown in Figure 2.10) is Spring_Mass2 in the NCS library.
This model uses a characterization for the parameters of the second order differential equation that makes the calculation of the solution easier to understand. The parameters are the undamped natural frequency (denoted by $\omega_{n}$) and the damping ratio (denoted by $\zeta$). The values of these parameters are
$$\omega_{n} = \sqrt{\frac{K}{M}} \quad\text{and}\quad \zeta = \frac{D/M}{2\omega_{n}}.$$
Figure 2.11. Changing the value of the damping ratio ζ from 0.1 to 0.5 in the model of Figure 2.10.
When we insert these values in the state-space differential equation, we get
$$\frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 \\ -\omega_{n}^{2} & -2\zeta\omega_{n} \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ \dfrac{1}{M} \end{bmatrix} u(t).$$
In the model we let u(t) = g, corresponding to the force of gravity applied at t = 0. The simulation shows the motion when $\omega_{n} = 1$ and $\zeta = 0.1$, which is identical to the Spring_Mass1 simulation. When this model opens, MATLAB sets the value for omegan, zeta, and M to 1, 0.1, and 1, respectively.
The motion of the mass is a sine wave whose amplitude gradually decreases. We can see what happens if we vary the parameters in this model. First, keep $\omega_{n} = 1$ and let $\zeta$ vary between 0 and 1. The derivative is Ax + bu and the input is a function called the step function, which is found in the Sources library.
Thus, as an exercise, vary zeta in the model from the value 0.1 to 1 by changing the value of zeta in MATLAB and rerunning the model. For each value of zeta entered, you will see a plot that looks like Figure 2.11. (This figure is an overplot for values of zeta from 0.1 to 0.5 in steps of 0.1.)
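The same sweep can also be scripted directly in MATLAB without repeatedly editing the model. The sketch below simply integrates the state-space equations with ode45 for several damping ratios and overplots the position; it assumes a unit step input applied at t = 0 and the parameter values given above.
omegan = 1;  M = 1;                       % assumed parameter values
figure; hold on
for zeta = 0.1:0.1:0.5
    A = [0 1; -omegan^2 -2*zeta*omegan];
    b = [0; 1/M];
    f = @(t, x) A*x + b*1;                % unit step input u(t) = 1 for t >= 0
    [t, x] = ode45(f, [0 30], [0; 0]);
    plot(t, x(:,1))                       % position of the mass
end
xlabel('Time')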
It should be evident that the bigger we make zeta, the more rapidly the oscillations
disappear. (We say the oscillations are damped.) It is important to understand this since it
is critical in the design of a control system.
The second exercise is to increase the value of $\omega_{n}$. As you do so, you will see that
the frequency of the oscillations increases. There is a subtle connection between the actual
frequency of the oscillation and the damping ratio.
Remember that we showed that the solution of the differential equation in state-space form depends on the transition matrix given by $e^{At} = T^{-1}e^{\Lambda t}T$, where $\Lambda$ is the diagonal matrix that contains the eigenvalues of the matrix A.
Using the undamped natural frequency and damping ratio parameters in the A matrix, let us see what its eigenvalues are. The eigenvalues come from setting the determinant $\det(\lambda I - A)$ to 0, so
$$\det(\lambda I - A) = \det\begin{bmatrix} \lambda & -1 \\ \omega_{n}^{2} & \lambda + 2\zeta\omega_{n} \end{bmatrix} = \lambda^{2} + 2\zeta\omega_{n}\lambda + \omega_{n}^{2} = 0.$$
There are two eigenvalues, the roots of this equation, given (assuming that $\zeta \le 1$) by
$$\lambda_{1} = -\zeta\omega_{n} + i\sqrt{1-\zeta^{2}}\,\omega_{n}, \qquad \lambda_{2} = -\zeta\omega_{n} - i\sqrt{1-\zeta^{2}}\,\omega_{n}.$$
Therefore, the diagonal matrix $e^{\Lambda t}$ is
$$e^{\Lambda t} = \begin{bmatrix} e^{\left(-\zeta\omega_{n} + i\sqrt{1-\zeta^{2}}\,\omega_{n}\right)t} & 0 \\ 0 & e^{\left(-\zeta\omega_{n} - i\sqrt{1-\zeta^{2}}\,\omega_{n}\right)t} \end{bmatrix}.$$
Since the solution is $T^{-1}e^{\Lambda t}T$, the frequency of the oscillations is $\sqrt{1-\zeta^{2}}\,\omega_{n}$ and the damping comes from the term $e^{-\zeta\omega_{n}t}$. Thus, the closer $\zeta$ gets to 1, the less oscillatory the response is and the faster the initial conditions disappear. The complete solution of the differential equation comes from $e^{At} = T^{-1}e^{\Lambda t}T$. We ask you to complete this in Exercise 2.5. (You need to compute T and $T^{-1}$ and finish the multiplication.)
Several additional facts about the solution to this spring mass equation are important. First, when $\zeta > 1$, the two eigenvalues are real and the solution becomes the sum of two exponentials (i.e., it is no longer oscillatory). Second, if the mass or the damping term were negative for any reason (such as a badly designed control system), the solution will grow without bound. (The term $\zeta\omega_{n}$ in the solution would be negative, so the solution terms $e^{(-\zeta\omega_{n} \pm i\sqrt{1-\zeta^{2}}\,\omega_{n})t}$ would grow without bound.)
The solution to the state-space differential equation using Laplace transforms was
$$X(s) = [sI - A]^{-1}x(0) + [sI - A]^{-1}bU(s).$$
The inverse Laplace transform involves exponentials of the form $e^{p_{i}t}$, where $p_{i}$ is one of the roots (poles) of the denominator polynomial $\det(sI - A)$. This determinant is the same one we get when we compute the eigenvalues of the matrix A, so poles and eigenvalues are the same creature in different clothes. The major difference is in the ease and accuracy of calculation. Poles are the roots of the polynomial; factoring it is inherently prone to numeric issues, whereas the eigenvalues of a matrix are computed using numerically robust algorithms (see NCM, Chapter 10, in particular Section 10.5).
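You can see this distinction at the MATLAB prompt: poly and roots go through the characteristic polynomial, while eig works on the matrix directly. The short sketch below uses the spring-mass matrix from this section (with omegan = 1 and zeta = 0.1) purely as an illustration.
omegan = 1;  zeta = 0.1;
A = [0 1; -omegan^2 -2*zeta*omegan];
eig(A)                 % eigenvalues computed directly from the matrix
roots(poly(A))         % the same numbers obtained by factoring det(sI - A)
% For a well-conditioned 2-by-2 matrix the two answers agree closely, but for
% larger systems the polynomial route can lose accuracy.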
Figure 2.12. Generic components of a control system.
These are all important aspects for the understanding of the control of a linear system,
which we explore next.
2.5 Poles and the Roots of the Characteristic Polynomial
Now that we know that poles and eigenvalues are the same, let us use this knowledge.
By now, you should be familiar with the notation used in Simulink for modeling
linear systems. The idea is to use the transfer function (which is the Laplace transform of
the output divided by the Laplace transform of the input) as a multiplicative element inside
the block. As a result, you can write the equations in the pictorial form of the block diagram
where the multiplications use the blocks (with the transfer function inside) and parallel paths are represented by lines that join a sum or difference (which use summing junctions). This idea
predates Simulink and has represented control systems for over 50 years.
Figure 2.12 shows the general form of a linear control system when the device to be
controlled is a linear differential equation. Let us bring each of the elements in this figure
into the control system framework we established above; from left to right in the diagram
these are as follows:
The Input Signal is the set point (in the case of the thermostat) which models the input
that the control system is trying to maintain (or track).
The Control Gain is one or more gains in the system. They ensure the control system
performs properly.
The Dynamics of the Controller models the control device. (This might not be dy-
namic, as was the case for the thermostat.)
The Dynamics of the Disturbance is a model for the way that the world is acting
to cause the system to deviate from the desired performance. Note that the model
assumes that the disturbance adds to the controller.
The Dynamics of the System Being Controlled, often called the plant dynamics, is
the differential equation (or transfer function) of the system that is to be controlled.
Notice that the Output is measured, and this measurement compares with the input
signal through a difference. The controller forces the system to react as desired, using
the difference signal.
Each of these functions exists in any control system. They sometimes use a common
device (as was the case for the thermostat, where the difference, the set point, and the
measurement are in this single device).
2.5.1 Feedback of the Position of the Mass in the Spring-Mass Model
Let us build a simple control system that will attempt to control the spring-mass damper
system that we developed in Section 2.4.2. We assume that you are holding the mass at
some location below its rest position, and you then let it go. We also assume that the control
system will attempt to stop the mass motion as it approaches the rest position, but with no
oscillation and as little residual motion as possible. The Simulink simulation in Figure 2.13
is the model. By now, this model should be easy for you to build. Try to build it without
looking too closely at the gure. (Use the model Spring_Mass1 from the NCS library
as the starting point, or open the complete model from NCS library using the command
SpringMass_Control.) As you review the diagram, identify each of the generic control
system features that we discussed.
This model has the values K = 20, D = 2, M = 10, and initially the gain = 0. The initial condition for y has been set up so the mass is initially at rest (at t = 0 this requires that $\frac{d^{2}y}{dt^{2}} = 0$ and $\frac{dy}{dt} = 0$ in the differential equation), so the initial value of y needs to be
Figure 2.13. A simple control that feeds back the position of the mass in a spring-mass-damper system.
computed. With the gain at 0, the initial position is $-g/(K/M)$. (As an exercise, show why this is so.) This initial condition for y is an external input to the integrator using the dialog for the integrator y. With this initial condition, the mass is at rest when the simulation starts.
The first thing you need to try in this model is to change the value of gain and see what
happens. If you have done what was asked, i.e., varied the value of gain, you should see that
you cannot achieve the desired result (i.e., having the motion stop without an oscillation).
All that seems to happen as the value of gain is changed is that the oscillation frequency
increases. Why is this and what can be done to achieve the desired performance?
The why part of this question is easy: it can be answered by inspection from the
block diagram. Look carefully at what the feedback is doing. The gain multiplies the
position (after the summing junction), which is exactly what the spring force is doing.
Thus, increasing the gain is like changing the restoring force of the spring. In the solution
for the spring mass system we developed in the previous section, the result of changing K
is to change the parameter $\omega_{n} = \sqrt{K/M}$. Therefore, changing gain will only change the
frequency of oscillation of the mass, and it will never cause the mass to stop without an
oscillation.
2.5.2 Feedback of the Velocity of the Mass in the Spring-Mass Model
From this discussion and the solution we developed in the previous section, it should be clear that the only way to cause the oscillation to stop quickly is to alter the value of $\zeta = \frac{D/M}{2\omega_{n}}$. The most direct way to do this is to change the effective D. Unfortunately, as it is constructed, the control system will not do this.
If, however, we were to measure the velocity of the mass and then multiply this by a gain, we will create a force that is proportional to the velocity. This is equivalent to the mass damping term $D\frac{dy}{dt}$ that appears in the differential equation. Thus, we will modify the Simulink diagram to allow a feedback from the velocity. The resulting block diagram, shown in Figure 2.14, is the model SpringMass_Vel_Control in the NCS library.
Figure 2.14. Feed back the velocity of the mass instead of the position to damp the oscillations.
Figure 2.15. Time response of the mass (position at the top and velocity at the bottom) for the velocity feedback controller.
This model uses a measurement of the velocity of the mass as the feedback variable.
If you open this model and play with the gain variable, you will see that the response of the
mass to the change in the position of the mass is less oscillatory as the gain is increased. In
fact when the gain is set to 1, there is only one cycle of oscillation before the mass settles,
and when the gain is 2, there are no longer any oscillations. (See the plot of the response
with these different gain values in Figure 2.15.)
For now, let us bypass how one would measure the speed and accept the fact that the
control objective (that of ensuring that the mass stops with little or no oscillatory motion) is
achieved with this control. Instead, let us try to put in place a way of determining why the
second of the two control systems achieved the desired result while the first did not. To do
this we will use the state-space model (with the built-in state-space block) for the various
pieces in the control system.
The Simulink model then looks like Figure 2.16.
Figure 2.16. Using the state-space model for the spring-mass-damper control system.
The state-space model of the spring is
$$\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -\dfrac{K}{M} & -\dfrac{D}{M} \end{bmatrix} x + \begin{bmatrix} 0 \\ \dfrac{1}{M} \end{bmatrix} u(t), \qquad \begin{bmatrix} y \\ \dfrac{dy}{dt} \end{bmatrix}_{measured} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x.$$
Moreover, from the model, we see that $u(t) = \mathrm{Gain}\left(\mathrm{Stepinput} - \frac{dy}{dt}\right)32.2$. If we substitute this into the state-space model, we get the state-space model of the closed loop system as
$$\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -\dfrac{K}{M} & -\dfrac{D}{M} \end{bmatrix} x + \begin{bmatrix} 0 \\ \dfrac{1}{M} \end{bmatrix}\mathrm{Gain}\left(\mathrm{Stepinput} - \frac{dy}{dt}\right)32.2, \qquad \frac{dy}{dt} = \begin{bmatrix} 0 & 1 \end{bmatrix} x,$$
which reduces to the model (using the fact that $\begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix}\begin{bmatrix} 0 & 1 \end{bmatrix} x = \begin{bmatrix} 0 & 0 \\ 0 & \frac{1}{M} \end{bmatrix} x$)
$$\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -\dfrac{K}{M} & -\dfrac{D + \mathrm{Gain}}{M} \end{bmatrix} x + \begin{bmatrix} 0 \\ \dfrac{1}{M} \end{bmatrix}(\mathrm{Gain}\cdot\mathrm{Stepinput})\,32.2.$$
This illustrates clearly that the gain term is equivalent to a change in the damping term D
since the gain adds to D.
The eigenvalues of the state matrix in this equation are the roots of the polynomial
$$\det(\lambda I - A) = \det\begin{bmatrix} \lambda & -1 \\ K/M & \lambda + \mathrm{Gain} + D/M \end{bmatrix} = \lambda^{2} + (\mathrm{Gain} + D/M)\lambda + K/M,$$
and they clearly change as the gain changes. Let us draw a plot that will allow us to see
how the roots change.
Figure 2.17. Root locus for the spring-mass-damper system. Velocity gain varying from 0 to 3. (The axes are the real and imaginary parts of the eigenvalues.)
If we let the gain vary from 0 to 3 in steps of 0.1, and we use the parameter values
in the model above (i.e., M = 10; D = 2; K = 20), the eigenvalues change as shown in
Figure 2.17. Remember that the response of the system is a sum of exponentials, where
each of the exponents is an eigenvalue. Thus, as long as the eigenvalues are complex, the
solution will be oscillatory. When the eigenvalues are all real and negative, on the other
hand, the solution will asymptotically decay with no oscillations. The plot we created allows
us to select a gain that will ensure that this happens (and meet the requirements that the
control system had to satisfy). This plot, called a root locus by control engineers, is the
most useful way of understanding the effect on a control system's response when the gain is
changed. Before computers, the rules for creating a root locus by hand were quite elaborate.
Fortunately, today we can do this easily using MATLAB (and Simulink). For example, the
code that created the plot is
ev = [];
for Gain = 0:0.1:3
    ev = [ev eig(A-[0 0;0 Gain])];   % closed loop eigenvalues as the velocity gain grows
end
plot(real(ev),imag(ev),'x')
axis('square')
grid
Figure 2.18. Root locus for the position feedback control. Gain changes only the frequency of oscillation.
2.5.3 Comparing Position and Rate Feedback
In contrast to this root locus, let us look at the locus for the first control system we developed.
In this system, the gain multiplied the position of the mass, and the control system model
in state-space form is
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -\frac{K}{M} & -\frac{D}{M} \end{bmatrix} x + \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix}\bigl(\text{Gain}\,(\text{Stepinput} - y) - 32.2\bigr), \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} x.
\]
As we did above, multiplying the matrices gives
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -\frac{K+\text{Gain}}{M} & -\frac{D}{M} \end{bmatrix} x + \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix}\bigl(\text{Gain}\cdot\text{Stepinput} - 32.2\bigr), \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} x.
\]
Therefore, the eigenvalues of the matrix
\[
\begin{bmatrix} 0 & 1 \\ -\frac{K+\text{Gain}}{M} & -\frac{D}{M} \end{bmatrix}
\]
will be the root locus. Figure 2.18 shows the plot.
As we observed before, this control does nothing but change the frequency because
the only change is the imaginary part of the eigenvalues.
2.5.4 The Structure of a Control System: Transfer Functions
A generic control system has several essential components. These are the plant, which we
will denote by G(s); the measurement, which we will denote by H(s); and the control
dynamics (which might be only a gain), which we denote by K(s). The structure of the
control system is in the figure below.
[Block diagram of the generic feedback loop: the error E(s) = U(s) − H(s)Y(s) drives the controller K(s) and the plant G(s) in series to produce the output Y(s); the measurement H(s) closes the loop back to the summing junction.]
It is a simple matter to write the equation for the transfer function of the closed loop control system. If we denote the difference between the input on the left and the measurement as E(s), then
\[
E(s) = U(s) - H(s)Y(s), \qquad Y(s) = K(s)G(s)E(s),
\]
which we solve for the transfer function Y(s)/U(s) to give
\[
T(s) = \frac{Y(s)}{U(s)} = \frac{K(s)G(s)}{1 + K(s)G(s)H(s)}.
\]
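This closed loop algebra is exactly what the Control System Toolbox function feedback carries out. A minimal sketch (the plant, gain, and measurement below are illustrative, reusing the spring-mass-damper values M = 10, D = 2, K = 20 from earlier in the chapter):

s = tf('s');
G = 1/(10*s^2 + 2*s + 20);    % plant (spring-mass-damper, illustrative values)
K = 2;                        % a pure gain controller
H = 1;                        % an ideal (unity) measurement
T = feedback(K*G, H);         % the closed loop transfer function K*G/(1 + K*G*H)
pole(T)                       % its poles are the roots of the denominator discussed below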
Let us take this equation one step further by replacing each of the transfer functions with
their respective numerators and denominators. Thus, if the transfer functions are
\[
G(s) = \frac{n_1(s)}{d_1(s)}, \qquad K(s) = K\,\frac{n_2(s)}{d_2(s)}, \qquad H(s) = \frac{n_3(s)}{d_3(s)},
\]
then
\[
T(s) = \frac{K\,n_1(s)\,n_2(s)\,d_3(s)}{d_1(s)\,d_2(s)\,d_3(s) + K\,n_1(s)\,n_2(s)\,n_3(s)}.
\]
The inverse Laplace transform of this transfer function comes from the partial fraction
expansion into a sum of terms involving the roots of the denominator polynomial. This
expansion is
\[
T(s) = a_0 + \sum_{j=1}^{n} \frac{a_j}{s + p_j}.
\]
The denominator term s + p_j is the jth factor of the denominator polynomial (i.e., −p_j is the jth root of the polynomial d_1(s)d_2(s)d_3(s) + K n_1(s)n_2(s)n_3(s)). These poles of
the closed loop transfer function obviously determine the response of the system. Notice
that the denominator polynomial is determined from the numerator and denominator of
K(s)G(s)H(s); i.e., the poles are determined by the numerator, denominator, and gain
terms of the individual blocks in the system. (These result from the open loop transfer
functions of the system, the transfer function that would result if there were no feedback.)
We could compute the root locus as the gain K varies using this polynomial equation,
but this is not numerically robust. As was pointed out above, factoring polynomials is not
a good idea. However, the use of Laplace transforms in block diagrams does allow some
control system features to be readily developed. For example, in the control system model
we developed above, how would we create the measurement of the velocity of the mass?
One way would be to measure the position and differentiate it. A block that would do that
would contain a single s (since the Laplace transform of the derivative df/dt was sF(s), where F(s) is the Laplace transform of f(t)). If one were to try to build an analog device that would do this (using an electronic or mechanical device, for example), it becomes obvious rather quickly that this cannot be easily done. A simple idea would be to use a coil of wire and a magnet attached to the mass to develop a signal that is proportional to the speed of the mass. This comes about naturally because the voltage across the coil is the rate of change of the magnetic flux, which comes from the coil inductance and the coil motion. The electrical
model for this is (where P is a constant that depends on the geometry of the coil and magnet)
\[
\frac{d\Phi}{dt} = \frac{d(Li - Py)}{dt} = L\frac{di}{dt} - P\frac{dy}{dt} = -iR.
\]
Let the measurement be the voltage iR; then the Laplace transform of this equation gives the measurement-block transfer function as
\[
H(s) = \frac{R\,I(s)}{X(s)} = P\,\frac{(R/L)\,s}{s + R/L},
\]
which, when added to the simulation diagram of the mass damper, gives the model shown below. (G(s) is the Laplace transform of the mass-damper dynamics.)
[Block diagram: the same feedback loop as above with a pure gain K as the controller and the measurement transfer function P(R/L)s/(s + R/L) in the feedback path.]
This exercise shows that
some measurements come with
additional dynamics. In this
case, the measurement came
with the dynamics associated with the inductor and resistor in the electrical circuit. The
analysis of the control system must account for these to ensure that the system is stable
despite the added dynamics.
Once again, Simulink can rapidly analyze the effect of adding the dynamics. With a
few mouse strokes, the model changes to include the dynamics of the measurement (they can
be either a transfer function or differential equation). The resulting Simulink model with the
response of the system is in Figure 2.19. (Instead of the model SpringMas_Vel_Contol,
use SpringMass_Control_Sensor_Dynamics.) In the modified model, the transfer
function block was used, and the values of the parameters in the model were set to make
P(R/L) = 1 and R/L = 100. Note that the response of the control system with the value
of the Gain = 100 (Figure 2.20) is almost identical in the two models, indicating that the
sensor design does not change the result. Once again, Simulink allowed us rapidly to answer
the question of the effect of the additional sensor dynamics on the stability and response of
the system.
Figure 2.19. One method of determining the velocity of the mass is to differentiate
the position using a linear system that approximates the derivative.
Figure 2.20. Simulation result from the Simulink model.
2.6 Transfer Functions: Bode Plots
It is possible to build a simulation in Simulink that uses a sinusoidal input. You could
use any of the simulations we have created so far. If the simulations are linear, all of the
outputs will be sinusoids at the same frequency as the input. In steady state, the outputs
will have different amplitudes and will cross through zero at times that are different from
the input. Therefore, the amplitude and phase of the sinusoid at the various frequencies
describe the sinusoidal response of the linear system. In 1938, Bell Labs scientist Hendrik
Wade Bode (pronounced Boh-dee) demonstrated that a plot of the frequency response of
a linear system contains all of the information needed to understand what feedback around
the system would do. His Bode plots are still a mainstay of control system design. Let us
see how to compute a Bode plot and how Simulink (and the Control System Toolbox in
MATLAB) can do the work.
2.6.1 The Bode Plot for Continuous Time Systems
Remember that the Laplace transform for an exponential gives the Laplace transform of a
sine wave (see Problem 2.4). Using the approach in this problem, the Laplace transform of
sin(ωt) is
\[
\mathcal{L}\!\left\{\frac{e^{i\omega t} - e^{-i\omega t}}{2i}\right\} = \frac{\tfrac{1}{2i}}{s - i\omega} - \frac{\tfrac{1}{2i}}{s + i\omega} = \frac{\omega}{s^2 + \omega^2}.
\]
Now, if we want to know the response of a system with transfer function H(s) to a sinusoidal
input, we use the fact that the transfer function is the ratio of the Laplace transform of the
output divided by the Laplace transform of the input. Thus, if the input U(s) is the Laplace
transform above, the output Y(s) is
\[
Y(s) = H(s)\,\frac{\omega}{s^2 + \omega^2} = H(s)\,\frac{1}{2i}\left(\frac{1}{s - i\omega} - \frac{1}{s + i\omega}\right).
\]
The partial fraction expansion of the right-hand side above gives
\[
Y(s) = \sum_{k=1}^{n_p} \frac{H(p_k)}{s + p_k} + \frac{1}{2i}\left(\frac{H(i\omega)}{s - i\omega} - \frac{H(-i\omega)}{s + i\omega}\right),
\]
where the expansion is around the n_p poles (p_k) of the transfer function H(s) and the two imaginary poles from the sinusoidal input. The inverse Laplace transform of this has two parts: the transient response from the first term above and the sinusoidal steady state response from the second term above. The inverse transform gives the response of the system as
\[
y(t) = \sum_{k=1}^{n_p} \alpha_k H(p_k)\,e^{-p_k t} + |H(i\omega)|\,\sin\bigl(\omega t + \phi(\omega)\bigr).
\]
The α_k are constants that depend on the initial conditions, and the proof of this assertion is Exercise 2.6.
The Bode plot is a plot of 20 log₁₀ |H(iω)| and φ(ω) versus ω on semilog axes. These show what the sinusoidal steady state response of the system is because the output is a sinusoid with amplitude |H(iω)| and phase φ(ω).
Rapid and accurate calculation of the magnitude and the phase of H(s) uses the state-
space representation of the system. The calculations are easy to do in MATLAB, and there is
a strong connection between Simulink and these calculations. The next section shows this.
2.6.2 Calculating the Bode Plot for Continuous Time Systems
In Section 2.2, we showed that the transfer function for a linear system in state-space form is Y(s)/U(s) = C[sI − A]⁻¹b. We now know that the Bode plot comes from this transfer function
by letting s = iω and then computing the magnitude and phase of the resulting complex variable. Thus, the Bode plot contains the two terms
\[
\text{Bode amplitude} = 20\log_{10}\left|C(i\omega I - A)^{-1}b\right|,
\]
\[
\text{Bode phase} = \tan^{-1}\!\left(\frac{\operatorname{Im}\bigl(C(i\omega I - A)^{-1}b\bigr)}{\operatorname{Re}\bigl(C(i\omega I - A)^{-1}b\bigr)}\right).
\]
It should be obvious that the calculation of these terms in MATLAB is trivial. The code
would look something like this:
function [mag, phase] = bode(a,b,c,omega)
% This function computes the magnitude and phase of a
% transfer function when the system dynamics are in
% state-space form. The inputs are:
%   a     - the A matrix of the system
%   b     - the B matrix of the system
%   c     - the C matrix of the system
%   omega - vector of values for the frequency (rad/sec)
mag = [];
phase = [];
I = eye(size(a));
for iw = sqrt(-1)*omega
    h = c*((iw*I - a)\b);     % C*(i*omega*I - A)^(-1)*B at this frequency
    hmag = abs(h);
    hphase = angle(h);
    mag = [mag hmag];
    phase = [phase hphase];
end
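A usage sketch of this function (assuming it is saved as bode.m in the current folder, where it shadows the toolbox function of the same name; the state matrices are those of the Spring_Mass model used later in this section):

A = [0 1; -1 -0.2]; b = [0; 1]; c = [1 0];
w = logspace(-1, 1, 200);            % frequencies from 0.1 to 10 rad/sec
[m, p] = bode(A, b, c, w);           % the hand-coded function above
semilogx(w, 20*log10(m)), grid       % Bode magnitude in dB versus frequency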
This code segment is not terribly efficient, and it is prone to errors because it manipulates the matrix a in an unscaled form. In the Control Systems Toolbox, there is a built-in function that does the calculation in a far more efficient, and therefore faster, form.
function is called bode, and it returns both the magnitude and the phase. The MATLAB
command logspace generates a vector of equally spaced values for omega so that a semilog
plot of the magnitude versus the log base 10 of the frequency is a smooth curve. (To run
the following examples you will need the Control Systems Toolbox.)
The Control Systems Toolbox uses a MATLAB object called an lti object that allows
the calculations of the various linear system attributes such as the poles, zeros, Bode plot,
and other plots in a simple and seamless way. The user may enter the lti object data using
state-space models or transfer functions (factored into the poles and zeros or as polynomials).
It also allows the system to be discrete (i.e., in the form of a z-transform) or continuous (i.e., in the form of a Laplace transform). The rules for manipulating the lti object are part of the object's definition and the overloading of the various operations.
Overloading an operation is a way of redefining the meaning of the math operators + or −, * or /, so they are meaningful operations for the objects. For the lti object, transfer functions, pole-zero representations, and state-space representations are interchangeable. The + or − adds or subtracts the transfer functions (with the automatic changes in the state-space model that the additions or subtractions imply), and the * or / operators are multiplication and division of the transfer functions. (For these operations the order of the transfer functions increases, so the resulting polynomials change their order and the state-space model changes dimensions.)
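As a minimal sketch of this arithmetic (the systems below are illustrative and not models from the NCS library):

G = tf(1, [10 2 20]);            % a transfer function lti object
H = zpk(0, -100, 100);           % a zero-pole-gain lti object (a crude differentiator)
L = G*H;                         % * multiplies the transfer functions; the order grows
T = feedback(2*G, H);            % a closed loop formed from the same objects
ss(T)                            % the identical system viewed as a state-space lti object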
Figure 2.21. Control System Toolbox interface to Simulink. GUIs allow you to
select inputs and outputs.
It is beyond the scope of this book to discuss how to build a MATLAB object, but if
the reader is interested, the documentation that comes with MATLAB shows how to create
and use objects. The example shows how to create a polynomial object and how to then
overload all of MATLAB's operations so that the appropriate rules for the polynomial arithmetic are invoked whenever the user types the symbols +, −, *, /, ^2, etc. at the MATLAB command line.
Let us use the model Spring_Mass2 that we created in Section 2.5 to illustrate
the use of Simulink to get the lti model and the Bode plots. Open the model by typing
Spring_Mass2 at the command line after changing to the NCS directory. We now want to
invoke the connections between Simulink and MATLAB that the Control Systems Toolbox
allows. Under the Tools menu in the model, select Control Design and the submenu
Linear Analysis. This will invoke the linearization tool and the GUI shown at the left
in Figure 2.21 will appear. This GUI allows you to select where you want the inputs and
outputs to be. Any signal in the model can be selected for either of these; just go to the
model, right click on the line that has the desired signal, and then select from the resulting
dialog Linearization Points, and then from the submenu whether the point is an input, an
output, or possibly a combination of the two. You need to select one input and one output,
and when you are done, the GUI will resemble the GUI at the right in Figure 2.21.
Now, click the Linearize button on the GUI and watch. A new LTI window will open and the comment "LTI viewer is being launched" will appear. The step response of the output
will appear in the LTI Viewer window (see Figure 2.22). To see other types of plots, right
click anywhere in the window and select the submenu Plot Types and then Bode. The
result will be the Bode plots for the two outputs from the model shown below. (There are
two outputs because the Simulink model has a 2-vector output that has both the position
and velocity of the mass.)
From the LTI Viewer window, you can export the lti object to MATLAB for further
analysis. To do this, select from the File menu in the viewer the Export command, and
then, in the LTI Viewer Export GUI that comes up, select the model to export. The GUI
Figure 2.22. LTI Viewer results from the LTI Viewer.
gives a tentative name of sys to the lti model, so if you select this it will create the object in
MATLAB. To export now, click the Export to workspace button. Now go to the MATLAB
command line and type sys. You will see that MATLAB now contains the lti object sys,
and it is a state-space model. If you want to convert this to a transfer function, type tf(sys).
All of the MATLAB commands and the resulting answers are in the MATLAB code
segment below.
>> sys
a =
Spring_Mass/ Spring_Mass/
Spring_Mass/ 0 1
Spring_Mass/ -1 -0.2
b =
Spring_Mass/
Spring_Mass/ 0
Spring_Mass/ 1
c =
Spring_Mass/ Spring_Mass/
Spring_Mass/ 1 0
Spring_Mass/ 0 1
d =
Spring_Mass/
Spring_Mass/ 0
Spring_Mass/ 0
Continuous-time model.
>> tf(sys)
Transfer function from input "Spring_Mass/Step (1)" to output...

                                              1
 Spring_Mass/Integrator (pout 1, ch 1):  -----------------
                                          s^2 + 0.2 s + 1

                                              s
 Spring_Mass/Integrator (pout 1, ch 2):  -----------------
                                          s^2 + 0.2 s + 1
Other plot types and results are available with the LTI Viewer. Spend some time
exploring them and work with some of the problems at the end of this chapter.
2.7 PD Control, PID Control, and Full State Feedback
2.7.1 PD Control
The simple control systems that we have investigated so far were mostly second order
systems (i.e., systems described by second order differential equations). There are good
reasons for thoroughly exploring this class of system, because any mechanical system that
has a moving mass creates a second order differential equation via Newton's law of motion (f = ma implies d²x/dt² = f/m). We saw in the spring-mass example that if the position
and derivative (speed) were the feedback variables, any desired response is possible by appropriate choices of gains. Control engineers call this PD control (P for position and D for derivative). The simplest way to see that this control will allow the system to be set arbitrarily is to use the state-space formulation of the system. In state-space form the second order differential equation is
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -a & -b \end{bmatrix} x + \begin{bmatrix} 0 \\ a \end{bmatrix} u, \qquad
y = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x.
\]
The first component of the vector measurement y is the position, and the second component
Figure 2.23. Using proportional-plus-derivative (PD) control for the spring-mass-damper.
is the derivative (because of the definition of the state x). Therefore, the PD control is
\[
u = -\begin{bmatrix} P & D \end{bmatrix} y + u_{in} = -\begin{bmatrix} P & D \end{bmatrix} x + u_{in},
\]
where the input to the closed loop system is u_in. Substituting this back into the state-space model above gives
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -a & -b \end{bmatrix} x + \begin{bmatrix} 0 \\ a \end{bmatrix}\left(-\begin{bmatrix} P & D \end{bmatrix} x + u_{in}\right)
= \begin{bmatrix} 0 & 1 \\ -a(P+1) & -(aD+b) \end{bmatrix} x + \begin{bmatrix} 0 \\ a \end{bmatrix} u_{in}.
\]
It is clear from the matrix that by selecting P and D the entire closed loop system can respond in any desired way. This makes PD control the simplest and easiest to use in a mechanical system, and as such, it should be the first choice for a simple design.
There is, however, one problem with a simple PD controller. Most systems require
that the response in steady state should have some predictable value. (When the controller
is trying to track the input, the desired value is the steady state value of u_in.) In the above
system, we denote the steady state value of x by S. S comes from setting the derivative of x
to zero. (The name steady state comes from the fact that nothing in the system is changing;
hence the derivative of all of the states must be zero.) Therefore,
\[
S = -\begin{bmatrix} 0 & 1 \\ -a(P+1) & -(aD+b) \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ a \end{bmatrix} u_{in}.
\]
Figure 2.23 is the Simulink model PD_Control from the NCS library. Simulating this
model with a, b, P, and D equal to 100, 2, 1, and 0.2, respectively, shows that the steady
state value of the output is 0.5 when the input steady state value is 1 (see Figure 2.24).
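The value 0.5 can be checked directly from the steady state expression above; a short sketch using the closed loop matrices derived in this section:

a = 100; b = 2; P = 1; D = 0.2; u_in = 1;     % the values used in PD_Control
Acl = [0 1; -a*(P+1) -(a*D + b)];             % closed loop state matrix
B   = [0; a];
S   = -(Acl\(B*u_in))                         % returns [0.5; 0]: the position settles at 0.5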
Figure 2.24. Response of the spring-mass-damper with the PD controller. There are no oscillations.
It is obvious that the steady state value of neither component of x is equal to u_in. We now address the question, How does one force the steady state value of one of the components to match u_in?
2.7.2 PID Control
We will use the block diagram of the PD control system we developed above to see how it
is possible to force one of the state components to be equal to the input in steady state. If
the control system were only trying to improve the response with no attempt to track the
input (the step), the response we got above would be fine.
Assume that we want the response to track the input precisely (for example, this control might be for the steering system in an automobile so that the angle of the steering wheel will always be proportional to the angle of the front tires). In this case, the steady
state response is not adequate. One possibility would be to add a gain in front of the step
input that doubles the input. Then the value of the output would be 1 when the step is in
steady state, but it is easy to see that any change in the values of a, b, P, or D would change
this relationship. So what can we do?
Figure 2.25. Proportional, integral, and derivative (PID) control in the spring-mass-damper system.
Think about the error between the input and the output as measured by the difference
between the value of u_in and the position measured in the system. (This is the output from
the difference block in the diagram above.) If we were to insert an integrator into the model
at this point, then, were the steady state error at this point S, the output of the integrator
would be St . The input to the second order spring-mass system is now St , so it begins to
move away from the steady state value of 0.5. As it moves, the feedback causes the output
to get closer to the steady state value of the input. In fact, when the output matches the input
in steady state, the difference will be exactly zero and the integrator stops. The result will
be that the input and the output match exactly and the states will not change values because
the integrator output will be an unchanging constant.
Let us do this in the model. Add an integrator from the continuous library in Simulink
so it integrates the output of the difference block (the difference between the input and the
measured position of the mass). Multiply the integral by a gain I and then add the result to
the difference (see Figure 2.25). Simulate the system and verify that the output and input
match. (This model is called PID_Control in the NCS library.)
While you are doing the simulation of this model, look carefully at the difference
between the input and the output (add another Scope block displaying the difference if you
want to make the difference clear), and verify that we have achieved zero steady state error,
as we set out to do. Also, experiment with different values for the integral gain I (the model
uses I = 7) to see what effect this has on the time it takes to reach steady state.
The result of simulating the PID controller with the nominal values is in Figure 2.26.
The figure clearly shows that the steady state value of the mass position is exactly one, and
the error between the desired and actual position is zero.
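The zero steady state error can also be seen with a few lines of MATLAB. The sketch below uses a textbook unity-feedback PID loop around the plant a/(s² + bs + a) rather than the exact block structure of Figure 2.25, but the conclusion (a dc gain of one) is the same:

a = 100; b = 2; P = 1; D = 0.2; I = 7;   % nominal values from the text
s = tf('s');
G = a/(s^2 + b*s + a);                   % plant from input to position
C = P + D*s + I/s;                       % proportional, derivative, and integral terms
T = feedback(C*G, 1);                    % closed loop from the step input to the position
dcgain(T)                                % = 1, so the position tracks the step with zero error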
2.7.3 Full State Feedback
Most models used in textbooks to illustrate control concepts are second order. The reasons
for this are that
• second order systems are easy to visualize and manipulate;
• second order systems are the basic building blocks of mechanical systems through Newton's equations of motion.
Figure 2.26. PID control of the spring-mass system. Response to a unit step has zero error.
Unfortunately, the real world is not restricted to second order systems. What do we
do if we want the response of a higher order system to match some desired response?
The PD controller we developed used feedback of both the position and the derivative of the output, which turned out to be the full state of the model when we used the output y and its derivative, dy/dt, as the state. In other words the full state of the system was assumed to
be available (i.e., measurements of the entire state were assumed to exist). This generalizes
very easily. We have seen that any linear system given by a differential equation of the form
\[
\frac{d^n y(t)}{dt^n} + a_{n-1}\frac{d^{n-1} y(t)}{dt^{n-1}} + a_{n-2}\frac{d^{n-2} y(t)}{dt^{n-2}} + \cdots + a_0\, y(t) = b\, u(t)
\]
has the vector-matrix (state-space) form
\[
\frac{dx(t)}{dt} =
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_0 & -a_1 & -a_2 & \cdots & -a_{n-1}
\end{bmatrix} x(t) +
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ b \end{bmatrix} u(t).
\]
If the entire state is available for control (meaning that y and all of its derivatives up to the (n − 1)st are available), then the controller can be
\[
u(t) = -\bigl(K_1 x_1 + K_2 x_2 + \cdots + K_n x_n\bigr) = -\begin{bmatrix} K_1 & K_2 & \cdots & K_n \end{bmatrix} x = -Kx.
\]
When used in the state-space equation, the result is
\[
\frac{dx(t)}{dt} =
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_0 & -a_1 & -a_2 & \cdots & -a_{n-1}
\end{bmatrix} x(t) -
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ b \end{bmatrix} Kx.
\]
Since K is a row vector, the product of the b vector and K is an n × n matrix that, when added to the state matrix, results in
\[
\frac{dx(t)}{dt} =
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-bK_1 - a_0 & -bK_2 - a_1 & -bK_3 - a_2 & \cdots & -bK_n - a_{n-1}
\end{bmatrix} x(t).
\]
Now every one of the original coefficients in the differential equation has an additive term
from the gain matrix, and as a result, the gains change the entire last row of the state matrix.
This means that any desired response is possible for the entire system. This is full state
feedback.
There are some subtleties associated with full state feedback having to do with state-
space models that are not in the form we used here (i.e., where the state is not the output
and all of its derivatives). The result we showed is still true.
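For a concrete sketch of choosing such gains, the Control System Toolbox function place computes a full state feedback gain that places the closed loop eigenvalues wherever you ask (the desired poles below are illustrative; the plant is the spring-mass-damper from earlier):

M = 10; D = 2; K = 20;
A = [0 1; -K/M -D/M];         % open loop state matrix
b = [0; 1/M];
p = [-2 -3];                  % desired closed loop eigenvalues (real, so no oscillation)
Kfb = place(A, b, p);         % gain for the control u = -Kfb*x
eig(A - b*Kfb)                % verify the closed loop eigenvalues are at -2 and -3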
2.7.4 Getting Derivatives for PID Control or Full State Feedback
The assumption in all of the discussions above was that some device measures all of the
derivatives of the output. What happens when this is not so?
Three methods are available to create measurements for a full state feedback imple-
mentation. The first approach to creating the derivative of a variable is to use a linear system that approximates the derivative. One possibility is to use the transfer function
\[
H_d(s) = \frac{\alpha s}{s + \alpha} = \alpha - \frac{\alpha^2}{s + \alpha}.
\]
This is not exactly the derivative, but it is close. (It is the derivative for all motions that are faster than approximately t_max = 1/α.)
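A quick sketch of this approximate differentiator (the value of α is a design choice; 10 rad/sec is illustrative):

alpha = 10;
s = tf('s');
Hd = alpha*s/(s + alpha);     % the filtered derivative
bode(Hd), grid                % follows the +20 dB/decade derivative slope below alpha rad/sec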
The second approach is to use a digital approximation to the derivative. Remember
that the derivative is approximately
\[
\frac{dx}{dt} \approx \frac{x(k\Delta t + \Delta t) - x(k\Delta t)}{\Delta t} = \frac{1}{\Delta t}\,(x_{k+1} - x_k).
\]
This digital approximation is reasonably good if the sample time is fast relative to the rate
of change in the position.
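A sketch of the difference approximation applied to a sampled signal (the sample time and the signal are illustrative):

dt = 0.01;                             % sample time, fast relative to the motion
t  = 0:dt:2;
x  = sin(2*pi*0.5*t);                  % a sampled position signal
v  = diff(x)/dt;                       % (x(k+1) - x(k))/dt, the approximate velocity
plot(t(1:end-1), v, t, pi*cos(2*pi*0.5*t)), grid   % compare with the true derivative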
In the third approach, we use the fact that the derivative of the output (or even linear combinations of the state of the system) is available through related measurements. One example of this would be a measurement of the force at the support point of a spring-mass system. The force measured would be the sum of the gravity force (mg), the spring force (Kx), and the damping force (D dx/dt). Thus, this measurement, combined with a measurement of the position, can be used to compute the velocity. If the force measurement is f_Meas, the acceleration of the mass is the result of subtracting the three accelerations applied to the mass. They are
• the acceleration due to the spring (obtained using the position measurement multiplied by the spring constant),
• gravity acceleration,
• the acceleration from the damping (obtained as feedback by integrating this acceleration estimate).
Thus a pseudomeasurement of velocity is
\[
\left.\frac{d^2 x}{dt^2}\right|_{Meas} = \frac{f_{Meas}}{m} - g - \frac{K}{m}\,x_{Meas} - \frac{D}{m}\left.\frac{dx}{dt}\right|_{Meas},
\]
where
\[
\left.\frac{dx}{dt}\right|_{Meas} = \int \left.\frac{d^2 x}{dt^2}\right|_{Meas}\, dt.
\]
In this implementation, we have an implicit feedback loop because we integrate the pseudo-
measurement of acceleration (the left-hand side of this equation) for use on the right
(as dx/dt|_Meas).
This approach uses a priori values for the spring constant and the force of gravity
(which means that in the actual control system they need to remain constant over time),
but despite this, control systems are often built using this type of computation to provide
a pseudomeasurement of the velocity (or other states). Using this approach, you can re-
compute the gains so that the feedback explicitly uses the measured values of force and
displacement, thereby eliminating the explicit computation of the derivative. Once again,
this requires that the parameters in the model be constant or, if they are not, that the gains
change as the parameters vary.
These equations can be assembled into the Simulink diagram in Figure 2.27; the linear system that results derives the rate using no derivatives (actually, it uses only integration). The initial condition for the derived rate in this model is zero. That means that when the system starts, there could be an error between the estimate of the rate and the actual rate. It is
easy to show that this error goes to zero as time evolves, so that the derived rate approaches
the velocity of the mass. (Try to demonstrate that this is true.) To verify these three strategies
for reconstructing the derivative, let us build a Simulink model that implements all of these
measurements. The model will contain the three different methods for deriving rate that we
described above in a masked subsystem. The model is PID_Control_Vel_Estimates in
the NCS library. It is in Figure 2.28.
Figure 2.27. Getting speed from measurements of the mass position and force applied to the mass.
Figure 2.28. Simulink model to compare three methods for deriving the velocity of the mass.
The derivation of the rate is in the subsystem at the bottom center of the model. Right
click on this masked subsystem and select Look Under Mask. The three rate estimation
approaches are in this subsystem, which is shown by Figure 2.29.
We are using two new blocks in this model. The first is the Multiport Switch. This
switch uses the variable Select to determine which of the three inputs come out of the
block.
Figure 2.29. Methods for deriving rate are selected using the multiport switch.
If you double click on the subsystem block, the mask shown at right opens. The pull-down menu at the top determines the value of Select. In the mask editor (obtained by right clicking the block and selecting Edit Mask), you will see that the pull-down menu selection makes the value of Select equal to 1, 2, or 3. The way the switch is set up, this numbering scheme needs to be reversed, so in the Initialization part of the editor we have the code Select = 4-Select. All of the data used in the estimation go under the mask using the mask dialog, which we will discuss in more detail in the next chapter.
The last blocks that we use here, but have not used before in our models, are the
GoTo and the From blocks. They are very convenient blocks to use to help keep your
model looking uncluttered. As their names imply, they allow a signal to propagate from one
location in the model to another without the use of a line. We use these to send the signals
to the Scope blocks for plotting. (In general, just as with GoTo statements in code, avoid
using these blocks.)
Open this model, and run it. Use the mask dialog to change the subsystem values
and to experiment with the different rate derivation approaches. Verify that the various
derivations of the rate of the mass are reasonably accurate. Try changing the values of the
various parameters (alpha, Samp_Time, and the mass, spring constant, and damping used
in the derivation of the rate). These change when you type the new values right into the
dialog box of the subsystem mask.
Control systems engineers call this method for deriving the rate an observer. If you
wish to explore this in more detail, and to explore observers for more complex systems,
see [1].
In this chapter, we have explored linear systems and the basics of feedback controllers
that will achieve a desired result. In the next chapter, we look at some real-world complex-
ities that make systems nonlinear. The first example is chaotic in the mathematical sense,
and subsequent examples extend the pendulums that we explored in Chapter 1 to rotations
that are more complex. Along the way, we explore how one can incorporate nonlinear
devices in our simulation using Simulink primitives (blocks in the libraries). When the
desired behavior is not available as a block, we look at how to build the requisite nonlinear
behavior out of the primitives.
2.8 Further Reading
Dynamic systems and state-space formulation is discussed in a nice book by Scheinerman
[33]. State-space models also are the backbone of modern control design, as in Bryson [6]
and Anderson and Moore [1].
A pseudomeasurement of a system state can come from a mathematical description
of the function that relates the state to the measurement, or from the differential equations
describing the state and the measurements. Applications of both approaches are available.
For example, Stateflow (described in Chapter 7) comes with a demo called fuelsys that illustrates the first approach. The Simulink blocks in this demo illustrate how the computer
provides control of the fuel injectors on a car. It also shows how pseudomeasurements
provide the missing data from a failed sensor, using redundant information contained in
related sensors. This demo is fun to run, since it allows you to throw switches that fail the
sensors and see the result. The Help file that accompanies the demo has more details.
Exercises
2.1 In the text, the differential equation in vector-matrix form or the state-space model
used the output and its derivatives up to the (n − 1)st order to create the state vector.
Verify that the state-space model in the text and the original differential equation are
the same. How would you modify the model to add more inputs? (Hint: Make the
input a vector.) How would you add more outputs?
2.2 In the text we asked that you verify that the derivative of e^{At} is Ae^{At} and that the inverse of e^{At} is e^{−At}. The latter result comes easily from showing that the derivative of the product e^{At}e^{−At} is zero (so the product is a constant). If you do this, you need to show that Ae^{At} = e^{At}A, i.e., that these matrices commute.
2.3 The calculation of the transition matrix phi and the input influence matrix gamma in the code c2d_ncs uses the matrix
\[
\begin{bmatrix} A & b \\ 0 & 0 \end{bmatrix}.
\]
Verify that this matrix creates the phi and gamma matrices defined in the text. (Use the hint in the text.)
2.4 Create the Laplace transform of the cosine and sine using their exponential definitions (as suggested in the text).
2.5 The motion of the spring, mass, and damper is the solution of the differential equation
in state-space form
\[
\frac{d}{dt}x(t) = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix} u(t).
\]
Solve this equation when the input is zero (with the damping ratio less than one, the initial position of x_0 and the initial velocity of v_0) to show that the motion of the mass is
\[
x(t) = e^{-\zeta\omega_n t}\left[ x_0 \cos\!\left(\sqrt{1-\zeta^2}\,\omega_n t\right) + \frac{\frac{v_0}{\omega_n} + x_0\zeta}{\sqrt{1-\zeta^2}}\,\sin\!\left(\sqrt{1-\zeta^2}\,\omega_n t\right)\right].
\]
Create a Simulink model that uses this solution and a simulation of the motion to
compare the numerical accuracy of the Simulink solution for different solvers.
2.6 In the text, we used the Laplace transform to develop the response of a system to a sinusoidal input as
\[
y(t) = \sum_{k=1}^{n_p} \alpha_k H(p_k)\,e^{-p_k t} + |H(i\omega)|\,\sin\bigl(\omega t + \phi(\omega)\bigr).
\]
Verify this result. Can you think of a way that you can use Simulink to calculate the transfer function H(iω) without the use of the Control System Toolbox?
2.7 Experiment with the three different control system models (PD, PI, and PID). Try
changing the gains and observing the results. Try adding nonlinearities to the models
(in particular, limit the control to a maximum and minimum value) and redo the
experiments. What can you say about the resulting responses?
Chapter 3
Nonlinear Differential Equations
We have already seen some nonlinear differential equations (for example, the clock and the
Foucault pendulum dynamics in Chapter 1). In this chapter, we will delve into some of the
more interesting aspects of nonlinear equations, including a simple example of chaos, the
dynamics of a rotating body, and the modeling of a satellite in orbit. We will also explore
an interesting way of modeling motions in three dimensions using a four-dimensional vec-
tor called a quaternion. This chapter illustrates some advanced Simulink modeling and a
major feature of Simulink, namely the ability to create reusable models. For the beginning
Simulink user these models might not be too useful, but users that need to model complex
mechanical systems will nd a great many uses for many of the examples presented here
(and therefore could cut and paste the models in the NCS library directly into the newmodels
that they might want to build).
In addition, we illustrate the following:
• how to make subsystem models that can be stored in a library for later use;
• how to annotate a model so the simulated equations are written out in the annotation;
• how to create a subsystem and then layer a mask on top of it, the mask providing the user with a dialog that allows parameters in the model to be transferred into the model (just as a subroutine's parameters are transferred into the subroutine during execution).
3.1 The Lorenz Attractor
One of the more interesting uses of simulation is the investigation of chaos. Chapter 7
of Cleve Moler's Numerical Computing with MATLAB [29] describes the Lorenz chaotic attractor problem developed in 1963 by Edward Lorenz at MIT. The differential equation is a simple nonlinear model that describes the behavior of the earth's atmosphere.
Cleve analyzes the differential equation's strange attractors: the values at which the differential equation tries to stop but cannot because the solution, although bounded, is neither convergent nor periodic. The model tracks three variables: a term that tracks the
convection of the atmospheric flow (we call it y_1), the second (y_2) related to the horizontal temperature of the atmosphere, and a last (y_3) related to the vertical temperature of the atmosphere.
The Lorenz differential equation has three parameters, σ, ρ, and β; the most popular values for these parameters are 10, 28, and 8/3, respectively. The differential equation is three coupled first order differential equations given by
\[
\begin{aligned}
\dot{y}_1 &= -\beta y_1 + y_2 y_3,\\
\dot{y}_2 &= -\sigma y_2 + \sigma y_3,\\
\dot{y}_3 &= -y_2 y_1 + \rho y_2 - y_3.
\end{aligned}
\]
Alternatively, in state-space form (even though the equations are nonlinear),
\[
\frac{dx(t)}{dt} = \begin{bmatrix} -\beta & 0 & y_2 \\ 0 & -\sigma & \sigma \\ -y_2 & \rho & -1 \end{bmatrix} x(t).
\]
We will not duplicate the analysis from Moler's book; suffice it to say that the equation has two fixed points where, if the equation has an initial condition at t = 0 equal to one of these solutions, the derivatives would all be zero and the solution would stay at these points (hence the name fixed points). However, both of these points are unstable in the sense that
any small perturbation away from them will cause the solution to rapidly move away and
never return. (If the solution gets close again to one of these points, it will rapidly move
away again.) The two points are determined from
\[
y = \begin{bmatrix} \rho - 1 \\ \sqrt{\beta(\rho - 1)} \\ \sqrt{\beta(\rho - 1)} \end{bmatrix}
\quad\text{and}\quad
y = \begin{bmatrix} \rho - 1 \\ -\sqrt{\beta(\rho - 1)} \\ -\sqrt{\beta(\rho - 1)} \end{bmatrix}.
\]
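It is easy to check these fixed points numerically; a short sketch using the state-space form above:

sigma = 10; rho = 28; beta = 8/3;             % the popular parameter values
eta = sqrt(beta*(rho-1));
y = [rho-1; eta; eta];                        % one of the two fixed points
A = [-beta 0 y(2); 0 -sigma sigma; -y(2) rho -1];
A*y                                           % all three derivatives are zero at this point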
We created the nonlinear equation in Simulink by building each of the differential equations around the integrator block in the Continuous library, and at the same time, we use a new block to plot the resulting solution on the two axes (y_1 versus y_2 and y_1 versus y_3). The model is in the NCS library, and it opens using the command Lorenz_1. The model is in Figure 3.1 along with a typical set of two-dimensional plots.
This model uses a few new tricks that you should explore carefully. First, the two
dimensional plots are from the Simulink Sinks library. The scales on these plots are set
by double clicking on the icon. (Review the settings when the model is opened.) In
order to have a new initial condition every time the model runs, we have added a Gaussian
random perturbation to each of the initial conditions in the integrator blocks. The MATLAB command randn is used in the Constant block, and each integrator has its initial condition set from the external source (so they actually appear in the diagram; this is a good practice
when the models are simple because the initial conditions are visible, but when the model
is very busy, this approach can make the diagram too busy). To make the initial conditions
external, you need to open the integrator dialog by double clicking the integrator block, and
then select external as the source of the initial conditions.
Figure 3.1. Simulink model and results for the Lorenz attractor.
The two x-y plots show the two fixed points for the Lorenz system. To change the
values of Rho, Sigma, and Beta, simply type in new values at the MATLAB command line
(see [29] for some suggestions).
3.1.1 Linear Operating Points: Why the Lorenz Attractor Is Chaotic
Since Release 7 of Simulink, the Control System Toolbox link has a very robust linearization
algorithm. Two blocks appearing in the Simulink Model-Wide Utilities library can be used
to get a linear model of a nonlinear system. The first of these blocks is the time-based linearization, and the second is trigger-based linearization. The first will create a complete state-space model at the times specified in the dialog that opens when you double click the
block. Unlike the Simulink Scope block, this block does not require a connection to the
model. The block causes the simulation to stop at either the times or the triggers, and then
a program from the Control System Toolbox, called linmod (for a continuous time model)
or dlinmod (for a discrete time model), creates the linear state-space matrices A, B, C, and
D. We changed the model Lorenz_1 to create the linearizations using the trigger block.
The triggers come from the Pulse Generator in the Sources library. In this new model,
called Lorenz_2 in the NCS library (Figure 3.2), the diagram uses the GoTo and From
blocks in the Simulink Signal Routing library to make it easier to follow. These blocks
allow connections without a line, thereby simplifying the diagram. The model was also
redrawn a little to emphasize the state-space form of the Lorenz differential equations. The
state-space form of the equations is in the text block in the model (created using the Enable
Tex Commands option in the Format menu). When this model runs (slowly because
the linearization is taking place every 0.01 sec of simulation time), the linear models are
created and stored in a structure in MATLAB. We use the structure created by linmod
called Lorenz_2_Trigger_Based_Linearization_ to investigate the stability of the
Figure 3.2. Exploring the Lorenz attractor using the linearization tool from the control toolbox.
operating points and at the same time understand why the Lorenz attractor response is
chaotic. A small MATLAB program, called Lorenz_eigs.m in the NCS library, plots the first 40 eigenvalues (corresponding to the first 0.4 sec of the time history). The code in this M-file is
axis([-25 15 -25 25]);
axis('square')
grid
hold on
nlins = tstop/0.01
for i = 1:nlins/50
    Lorenz(i).eig = eig(Lorenz_2_Trigger_Based_Linearization_(i).a);
    plot(real(Lorenz(i).eig),imag(Lorenz(i).eig),'.')
    drawnow
end
hold off
Figure 3.3 shows the eigenvalue plot. Let us compare this plot with the solutions
generated when the simulation ran. The plot below the eigenvalue figure is the simulation results for the state variable y_1 plotted against the state variable y_3. The initial position in this plane was about (10, 20).
Figure 3.3. Eigenvalues of the linearized Lorenz attractor as a function of time.
From the eigenvalues plot above we can see that the initial condition was unstable; when the simulation starts there is a complex pair of eigenvalues that have a real part of 2.91. The eigenvalues are almost in the left half plane, and during the next four times they move toward the left half complex plane. From the sixth eigenvalue on, they have moved into the left half complex plane, indicating that the solution now is
stable. The complex pair indicates that the solutions are oscillatory, but since they are in the
left half plane, it also indicates that the amplitude is decreasing, so the solution is spiraling
in toward the attractor at the bottom of the time-history plot. However, as the solution
approaches this attractor, the complex pair of eigenvalues again moves into the right half
plane (the eigenvalue plot ends before this happens), indicating that the solution is again
locally unstable, and it begins to diverge away from the attractor. This behavior is consistent
with the fact that the attractors are both unstable. (As an exercise, set the value of y_2 in the A matrix to the values ±√(β(ρ − 1)), and compute the eigenvalues of the matrix.)
It is quite instructive to play with this model and compare the eigenvalues with the
solutions. Some of the questions you might want to ask: Do the three eigenvalues always
appear as one complex pair and one real? What is the importance of the magnitude of the
real part of the complex pair? What is the importance of the imaginary part of the complex
pair?
This technique of analyzing a system by linearization about the current operating point
is very useful as systems get complex. This approach is particularly useful when systems
are so large that creating linear models analytically is virtually impossible.
3.2 Differential Equation Solvers in MATLAB and
Simulink
Simulink uses the same solvers that are in the MATLAB ode suite (although they are C-
coded in an independent way and automatically linked by the Simulink engine to solve the
differential equations modeled in the diagram). Chapter 7 of NCM describes these and how
they work, so we will not go into the details here.
When you create a new Simulink model, the default solver is the variable step ode45, and the default simulation time is 10 sec. In addition, the Relative tolerance is set to 1e-3 (or approximately 0.1% accuracy), and the Absolute tolerance is set to auto. The solver also allows the user to force a Max, Min, and Initial step size if desired, but these are all set initially to auto. The solver options allow the Zero crossing control to be set so that zero cross-
ing detection occurs for all blocks that allow it (or, optionally, none of the blocks). The
default option for zero crossing is Use local settings, where the solver checks to see if a
zero crossing has occurred for those blocks that this option is set to on. For example, the
absolute value block has the dialog shown above. If the Enable zero crossing detection
check box is not checked, then in the local settings mode, this block's zero crossing is
disabled. Note that if the zero crossing control is set to Enable all, this check box selection
does not appear.
Some simple numerical experiments were performed (in the spirit of Section 7.14 in
NCM) using the Lorenz model above. To do these experiments, we execute the Simulink
Table 3.1. Computation time for the Lorenz model with different solvers and tolerances.
Solver Type             Ode45     Ode23     Ode113
Tolerance 0.001         0.0370    0.1445    0.1802
Tolerance 0.000001      0.3670    0.4680    0.3749
model from the MATLAB command line. Type in the following code at the MATLAB
command line and then use the arrow keys to rerun the same code segment:
tic; sim('Lorenz_2'); te = toc;
te
In this set of MATLAB instructions, the sim command causes MATLAB to run the Simulink model (the argument of the command tells MATLAB what model to run; it is a string). The value of the elapsed time for the simulation is te. In between each execution of the simulation using the sim command, open the Configuration Parameters dialog in Simulink; use the Simulation pull-down menu or click on the model window and type Ctrl + e. Then change, in turn, the solver or the relative tolerances, and (this is very important) save the model and re-execute the commands above. Table 3.1 shows the result of my experiment; I used a Dell dual Pentium 3.192 GHz computer with 1 gigabyte of RAM.
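If you prefer to script the whole experiment, the solver and the tolerance can also be changed from the command line instead of the dialog; a sketch using simset:

opts = simset('Solver','ode23','RelTol',1e-6);   % one of the settings from Table 3.1
tic; sim('Lorenz_2', [], opts); te = toc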
With the understanding that you have obtained from what we have done so far in this
book, and the discussion in NCM, it should be clear that there are no hard and fast rules
that apply to selecting a solver, a tolerance, or any of the other parameters in the Simulink
Configuration Parameters pull-down dialog. The best way to understand the simulation
accuracy for any model you will produce is to try different solvers with different tolerances.
If you think that the solvers are jumping through time steps that are too big, you can set the
maximum step size, and if you think that the solver is taking too long, you can try increasing
the minimum step size. In addition, if you are interested in the solution of a system that has
some dynamics that change very fast, but you do not really care about the details of these
fast states, then the stiff solvers are best (see Section 7.9 of NCM). We will see some
examples in later chapters that require stiff solvers, and when we do, we will be able to
explain why they are used and how to use them.
3.3 Tables, Interpolation, and Curve Fitting in Simulink
There are times when a simulation requires tabulated data for a function. Examples of this
are systems that are operating in flowing air or water (aircraft, rockets, submarines, balloons,
parachutes, etc.). In many of these applications, the tabular data represents the gradient of
a nonlinear function of one or more variables. The gradient measurement comes from wind
tunnels, water tanks, or actual measurements of the system in its environment (a full-scale
model of an airplane or automobile). How do you use such data in Simulink?
Simulink implements most of these curve-fitting tools in a set of blocks that are in the Lookup Tables library. This library has a variety of tables. The simplest is the Lookup Table; the versions that are more complex are the 2-D and n-D (for functions that give two outputs or more). A block that does interpolation (n-D), using what is called a PreLookup of
the input data, simplifies the tables. The PreLookup Index Search block that goes along
with this block is also in the library, but since this block works in conjunction with the
PreLookup block, it is not a stand-alone block. The last block in the library is the Lookup
Table Dynamic block. We will investigate each of these block types and illustrate when to
use them in the following examples.
3.3.1 The Simple Lookup Table
To illustrate the simple lookup table, let us return to the home heating system model we
introduced in Section 2.4.1. The original model used an outdoor temperature variation of 50
deg F with a 24-hour sinusoidal variation of 15 deg. This might be a typical diurnal variation
in the outdoor temperature, but to compute the fuel usage for a specific winter's day we
want to use the actual outdoor temperature. Assume that you have made measurements of
the temperature during a particular cold snap when the temperature plummets in a 48-hour
period from an average of 40 deg to about 10 deg. Figure 3.4 shows the temperature over
a 24-hour day (sampled every 15 minutes).
The data were actually generated from MATLAB using the following code:
Time = (0:.25:48)*3600;
Temp = 50 - 0.75*Time/3600 - ...
    5*sin(2*pi*Time/(24*3600)+pi/4) + randn(size(Time));
How do you insert this temperature variation into the model? Open the model by
typing Thermo_NCS at the MATLAB command line. Find the blocks with the constant 50
and the sinusoid that models the external temperature variation along with the summation
block that adds the two, and remove them from the model. Open the Simulink library
browser and nd the Lookup Tables library. From this library, select the block called
Lookup Table and insert it into the model. The input to this block will be the simulation
time that is the Clock block in the Simulink Sources library. Get one of these blocks
and insert it into the model so that it is the input to the lookup table. Then, double click
on the Lookup Table block and enter the values of the data using the MATLAB commands
above. The model that results is in Figure 3.5, and the Lookup Table dialog should look
like Figure 3.6.
The result will now show the home heating system operating with the outdoor temperature given by the tabular values in the lookup table. The figure also shows the dialog that opens when you double click the Lookup Table block. It allows editing of the data (using the Edit button). You can also enter the data as a MATLAB vector using MATLAB's vector notation or pasting a vector of numbers into the dialog box. If, for example, the data
is in an Excel spread sheet, it can be pasted into the vector of output values window and
then surrounded by the square brackets [ and ] to make the data into a MATLAB vector.
Notice also that the dialog box allows you to select the interpolation method you want
to use. The default is interpolation and extrapolation. This means that the data interpolation
is linear between time values, and the last two data points in the table project the data beyond
the final time. The other options are interpolation with end values used past the last data
point in the table, the value nearest to the input value, or the values below or above the
input value. The block will also provide a value at fixed times by setting the sample time.
In the simulation the value of this is set to 1, which causes the value to be computed at
Figure 3.4. Pseudotabulated data for the external temperature in the home heating
model.
Figure 3.5. Home heating system Simulink model with tabulated outside temperature added.
Figure 3.6. Dialog to add the tabulated input.
every simulation step when the input changes. (Since the input to the block is the clock,
this is every integration step used by the solver.) The Help button on the dialog opens the
MATLAB help and provides a very good description of how this block works.
If you have not created the model yourself during the above discussion, it is available
in the NCS library using the command Thermo_table.
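The default behavior of the Lookup Table block (linear interpolation between breakpoints and linear extrapolation beyond them) is the same calculation performed by the MATLAB function interp1; a sketch using the Time and Temp vectors created earlier:

t_query = 36.5*3600;                                   % an arbitrary time between breakpoints
T_out   = interp1(Time, Temp, t_query, 'linear', 'extrap')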
The 2-D Lookup Table block computes an approximation to z = f (x, y) given some
x, y, and z data points. The dialog block for this lookup table has an edit box that contains
the Row index input values (a 1 × m vector corresponding to the x data points) and an edit box with the Column index input values (a 1 × n vector of y data points). The values for z are in the edit box labeled Matrix of output values (an m × n matrix).
Both the row and column vectors must be monotonically increasing. These vectors
must be strictly monotonically increasing in some specic cases. (See the Help dialog for
more details on what these cases are.)
By default, the output is determined from the input values using interpolation-extrapolation, that is, linear interpolation and extrapolation of the inputs past the ends. The alternatives available are Interpolation-Use End Values, Use Input Nearest, Input Below, or Input Above, with the same meaning as for the 1-D Lookup Table.
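The same command-line preview works in two dimensions with MATLAB's interp2. The sketch below builds a small hypothetical table of z = f(x, y) values and queries it between breakpoints, which is essentially what the 2-D Lookup Table block does with its row and column index vectors (the sample surface here is made up purely for illustration):
x = 0:10:40;                  % x breakpoints (row index input values)
y = 0:5:20;                   % y breakpoints (column index input values)
[X, Y] = meshgrid(x, y);      % grid on which to tabulate a sample surface
Z = sin(X/20) + 0.1*Y;        % made-up table of output values
zq = interp2(X, Y, Z, 12, 7)  % interpolated value at x = 12, y = 7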
It is important to use the proper end extrapolation method when the simulation could
cause the input to the lookup table to be outside the range of the tabulated values. For
example, the heating simulation is set up to run for 2 days, which is also the extent of the
time data loaded into the lookup table dialog. The selected extrapolation was Interpolation-
Extrapolation, so running the simulation linearly extrapolates past 48 hours, using the last
two data points in the table. This means that the temperature will continue to drop, going
down at the rate of about 10 deg per hour (240 deg per day), which is clearly not correct.
Even selecting the option of keeping the end-point values constant (Interpolation-Use End
Values) is not a good approximation to what actually happens, but it is the better choice. In this example the user has clear control (via the simulation stop time) over whether or not the lookup table data will be exceeded; in many applications, however, the independent variable in the table is a simulation variable whose range cannot be easily determined in advance. In that case, a plot of the simulation results and the independent variable will allow you to verify that the simulation handles the end points properly.
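You can see the size of this extrapolation error before running the simulation by comparing the two end-point options at the command line (again assuming the Time and Temp vectors from the earlier code):
tq = (48:0.5:54)*3600;                                   % query times past the 48-hour table
Textrap = interp1(Time, Temp, tq, 'linear', 'extrap');   % like Interpolation-Extrapolation
Tclamp  = interp1(Time, Temp, tq, 'linear', Temp(end));  % like Interpolation-Use End Values
plot(tq/3600, Textrap, tq/3600, Tclamp)
xlabel('Time (hours)'), ylabel('Outdoor temperature (deg F)')
legend('linear extrapolation', 'hold end value')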
The 2-D and n-D Lookup Tables are set up in a very similar way to the 1-D table we just reviewed. To make sure you understand how they are used, open the Simulink help (from the 2-D or n-D block dialog) and review their documentation.
3.3.2 Interpolation: Fitting a Polynomial to the Data and Using the
Result in Simulink
There is another option in Simulink that in some applications may be more accurate and easier to use. This option is the curve fitting block, Polynomial, in the Math Operations Simulink library. The use of this block requires you to do some preliminary work in MATLAB to compute the coefficients of the polynomial that will interpolate and extrapolate your data.
The process of creating a curve fit to any data uses the Basic Fitting tool in MATLAB. To open this tool, create a plot of the data, using the MATLAB commands that we used to create the 48-hour outdoor temperature profile above or by running the MATLAB M-file simulated_temp_data in the NCS library.
When the plot appears, select Basic Fitting under the Tools menu to open the Basic Fitting dialog shown above. In this dialog, we selected the sixth order polynomial option. The tool will overplot the curve fit on the graph of the data. Try different polynomial degree options and navigate around the dialog. To view the polynomial coefficients as shown in the figure, click the right arrow at the bottom of the dialog. To get these values into the
[Plot: Temperature Data for Norfolk MA, Jan. 1, 2006; dry and wet bulb temperature readings (deg F) versus time (hours from midnight).]
Figure 3.7. Real data must be in a MATLAB array for use in the From Workspace block.
Simulink polynomial block, click the button in the Basic Fitting tool that says Save to workspace. MATLAB will save the curve fit in a MATLAB structure called fit. The coefficients are in the field called coeff (i.e., the coefficients are the MATLAB variable fit.coeff). In the Thermo_table model in the NCS library, replace the Lookup Table block with the Polynomial block, and then open this block's dialog and type fit.coeff in the area designated Polynomial coefficients. Run the simulation and verify that the simulation is using the curve fit you created instead of the data. (A version of this model is in the NCS library, but you still need to create the polynomial fit as described above before you use it; the model is called Thermo_polynomial.)
The reason you might want to use this approach is to smooth the data and to improve the simulation run time. In general, it is faster to use a low order fit than it is to use the table lookup with interpolation. (It is not always the case that curve fit calculations are faster, so if improving the simulation time is your goal this assumption should be verified.)
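If you prefer to stay at the command line, polyfit and polyval produce an equivalent set of coefficients to paste into the Polynomial block; both polyfit and the Basic Fitting tool return coefficients in descending powers of the independent variable. One caution about this sketch: the fit below uses hours for numerical conditioning, so the coefficients would have to be recomputed against time in seconds before being used with the Clock-driven Polynomial block in the model.
thours = Time/3600;                         % independent variable in hours
p = polyfit(thours, Temp, 6);               % sixth order fit, descending-power coefficients
Tfit = polyval(p, thours);                  % evaluate the fit for comparison
plot(thours, Temp, '.', thours, Tfit, '-')
xlabel('Time (hours)'), legend('data', '6th order fit')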
3.3.3 Using Real Data in the Model: From Workspace and File
A web site gives access to weather data from volunteers in your local area. We used data for Norfolk, Massachusetts on January 1, 2006, to create an M-file that can then import the actual temperature data into the home heating model. This data is in the M-file weather_data.m, and Figure 3.7 is its plot. The web site that the data comes from is http://www.wunderground.com/weatherstation.
Figure 3.8. Simulink model of the home heating system using measured outdoor
temperatures.
Figure 3.9. From Workspace dialog block that uses the measured outdoor temperature array.
Running the M-file creates the plot in Figure 3.7 and the data in the MATLAB workspace. The data is stored in a 288 × 3 array called Tdata, where the first column is the time (in hours), the second is the measured dry bulb temperature, and the last column is the wet bulb temperature (both in degrees F).
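Before wiring the data into the model, it helps to know the shape the From Workspace block (described next) expects: a matrix whose first column is time in the units of the simulation clock and whose remaining columns are the signal values. A hypothetical stand-in for the array that weather_data.m builds, with made-up readings, looks like this:
t_sec = (0:300:86100)';                 % one sample every 5 minutes for 24 hours
dryF  = 25 + 3*sin(2*pi*t_sec/86400);   % made-up dry bulb readings (deg F)
wetF  = dryF - 2;                       % made-up wet bulb readings (deg F)
Tdata = [t_sec, dryF, wetF];            % 288-by-3, same shape as the real array
% The model only needs time and dry bulb temperature, so the From Workspace
% block's Data field can be set to Tdata(:,1:2).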
To use data in MATLAB as an input to a Simulink model, the From Workspace block is used. (An equivalent block called From File gets the data from matrices that were saved using a MAT-file; since the two blocks work in the same way, we will illustrate only the Workspace block.) We have modified the Thermo_table model to allow input to the model to come from a data file in MATLAB. This is Thermo_real_data in the NCS library (Figure 3.8).
The data we want is in the array called Tdata, where the first column is the time and the second is the outside temperature. The From Workspace block dialog in Figure 3.9 opens when you double click on the block. We have entered the values for the data into this dialog
[Plot panels: Indoor vs. Outdoor Temp. and Measured Temperature, versus time (sec).]
Figure 3.10. Using real data from measurements in the Simulink model.
(calling the data Tdata). Since the array has three columns (the last column is the wet bulb temperature, which we do not need), the Data field of the dialog box contains Tdata(:,1:2). This truncates the tabulated data down to two columns. The sample time is every 5 minutes, or every 300 sec. For completeness, the result of simulating with this data is shown in Figure 3.10.
3.4 Rotations in Three Dimensions: Euler Rotations,
Axis-Angle Representations, Direction Cosines, and
the Quaternion
We saw in Chapter 1 that the Foucault pendulum has forces that are a consequence of the
rotation of the coordinate system where the pendulum is mounted. When we extend these
ideas to a rigid object rotating in three-dimensional space, it is not easy to keep track of
the new orientation. This is because even when the rotations are done about a single axis at a time, the order in which they are done determines the result. The simplest way to illustrate this is to think about what happens if you were standing, facing north, and you then turned 90 degrees to the left followed by a 90-degree rotation to lie down on your back. You would be lying face up with your head to the east. If you did this sequence in the reverse order, i.e., you first lay down on your back and then you turned left, you would be lying on your left side with your head to the north, clearly not in the same orientation as the first sequence. Mathematically, rotations do not commute.
The way we keep track of the rotation of a body is with the angles of rotation about
each of the three orthogonal axes. Mathematically, rotations are transformation matrices
that describe what the rotation does to the x, y, and z coordinates after the rotation. These
matrices have unique representations and manipulations.
Figure 3.11. Single axis rotation in three dimensions.
3.4.1 Euler Angles
Euler proved that any orientation requires at most three rotations about arbitrary axes on a rigid body. (The rigidness of the body ensures that the complete body is a single mass as well as inertia; we discuss what happens when the body is not rigid in Chapter 6.) The angles that define the rotation are Euler angles, and they are not unique. In most applications that use Euler angles, it is conventional for the rotations first to be about the x-axis, followed by a rotation about the new y-axis and then another rotation about the new z-axis. (This is abbreviated as a 123 rotation.) This is not unique since there are 12 possible orderings of the rotation axes (namely 123, 132, 213, 231, 312, 321, 121, 131, 212, 232, 313, and 323). To compute the orientation of the new axes after such a sequence of rotations, we need to investigate what happens with each of them alone. Only coordinates in the plane perpendicular to the axis of rotation change when a single axis rotation is used. We can think of the three Euler angle rotations as a sequence of three independent planar rotations, each plane redefined by the previous rotation. Figure 3.11 shows a rotation about the z-axis that causes the entire x-y plane to rotate.
For the single axis rotation of angle θ about the z-axis shown above, the coordinates (x, y) of a point in the plane have a new set of coordinates (x_New, y_New) given by

$$x_{\mathrm{New}} = x\cos(\theta) + y\sin(\theta),$$
$$y_{\mathrm{New}} = -x\sin(\theta) + y\cos(\theta).$$

This transformation, using the vector-matrix form, is

$$\begin{bmatrix} x_{\mathrm{New}} \\ y_{\mathrm{New}} \\ z_{\mathrm{New}} \end{bmatrix} =
\begin{bmatrix} \cos(\theta) & \sin(\theta) & 0 \\ -\sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = T(\theta)\begin{bmatrix} x \\ y \\ z \end{bmatrix}.$$
For a given value of the angle θ, the matrix T is orthogonal. (As an exercise, prove this by showing that $TT^{T}$ is the identity matrix.)
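A quick numerical check of this property takes three lines of MATLAB; the angle here is arbitrary:
theta = 0.7;                              % any angle, in radians
T = [ cos(theta)  sin(theta)  0
     -sin(theta)  cos(theta)  0
      0           0           1 ];
orthogonality_error = norm(T*T' - eye(3)) % should be on the order of 1e-16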
As an example, if we use this result to compute the transformation of the coordinates after a standard Euler angle rotation using the axes 121 (i.e., the three rotation angles are about the original x-axis, then the new y-axis, and, last, about the new x-axis), the new coordinates are given by

$$\begin{bmatrix} x_{\mathrm{New}} \\ y_{\mathrm{New}} \\ z_{\mathrm{New}} \end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\psi) & \sin(\psi) \\ 0 & -\sin(\psi) & \cos(\psi) \end{bmatrix}
\begin{bmatrix} \cos(\theta) & 0 & -\sin(\theta) \\ 0 & 1 & 0 \\ \sin(\theta) & 0 & \cos(\theta) \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\phi) & \sin(\phi) \\ 0 & -\sin(\phi) & \cos(\phi) \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}.$$
Notice that the essential part of the transformation is always a 2 × 2 matrix, but when the axis of rotation changes, that 2 × 2 block appears in different rows and columns of the full 3 × 3 rotation matrix. The three transformation matrices can be multiplied together to give a single direction cosine matrix for the entire rotation, but in most applications this is not done. Note that since each of the transformations by itself is orthogonal, the combined direction cosine matrix product is also orthogonal. (The proof of this is an exercise.)
When an object is rotating, each axis of the body has its own rotational rate. It is usual to denote these rotations with the letters p, q, and r for the x-, y-, and z-axes attached to the body, respectively. When this is the case, the Euler angles are functions of time, so we need to know what happens to them as the object rotates.
We saw in Section 1.4.1 of Chapter 1 that a scalar rotation around a single axis gives rise to rotations in the two coordinates that are orthogonal to the rotation axis. Generalizing this to three dimensions gives the operator equation (where a is an arbitrary vector in the Body frame that is rotating at the vector rate ω)

$$\left.\frac{d\mathbf{a}}{dt}\right|_{\mathrm{Inertial}} = \left.\frac{d\mathbf{a}}{dt}\right|_{\mathrm{Body}} + \omega\times\mathbf{a}.$$
A rotating body has accelerations induced whenever the angular momentum vector changes. If we let the rotational rates around the body axes be

$$\omega = \begin{bmatrix} p \\ q \\ r \end{bmatrix},$$

the angular momentum of the body around its center of mass is the result of calculating the angular momentum of every mass particle in the rigid body and then summing over all of the particles. If the rigid body is homogeneous, the summation is an integral. In either case, the result is that the angular momentum is Jω, where the inertia matrix is

$$J = \begin{bmatrix} J_{xx} & J_{xy} & J_{xz} \\ J_{yx} & J_{yy} & J_{yz} \\ J_{zx} & J_{zy} & J_{zz} \end{bmatrix}.$$
(Since this result might be unfamiliar, see [17] for more details.)
From Newton's laws applied to rotations, the torques applied to the body and the rate of change of the angular momentum balance, so (using the derivative of a vector in inertial
coordinates above)

$$\frac{d(J\omega)}{dt} = J\frac{d\omega}{dt} + \frac{dJ}{dt}\omega + \omega\times J\omega = \mathbf{T}.$$

When the inertia matrix is constant over time, J factors to give

$$\frac{d\omega}{dt} = J^{-1}\bigl(\mathbf{T} - \omega\times(J\omega)\bigr).$$
This equation is the starting point for any simulation that involves the rotation of a rigid
body. (We will build a simulation model in Simulink shortly.) If this equation is integrated,
the result is the angular velocity at any time t . Integrating the angular velocity gives the
angular position. The angular position that results from integrating the angular rate is not a
record of the Euler angle history. This is because the angular rate is always with respect
to the body, and the Euler angles are always with respect to the original orientation of the
body.
Using the transformation from the body axes to the Euler axes (and the Euler rotations about x-, y-, and the new x-axes) developed above, the instantaneous angular rate in the body coordinates (with respect to the Euler angle rates) is

$$\begin{bmatrix} p \\ q \\ r \end{bmatrix} =
\begin{bmatrix} \dfrac{d\phi}{dt} \\ 0 \\ 0 \end{bmatrix} +
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}
\begin{bmatrix} 0 \\ \dfrac{d\theta}{dt} \\ 0 \end{bmatrix} +
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}
\begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ \dfrac{d\psi}{dt} \end{bmatrix}.$$

Multiplying the matrices in this expression gives

$$\begin{bmatrix} p \\ q \\ r \end{bmatrix} =
\begin{bmatrix} 1 & 0 & -\sin\theta \\ 0 & \cos\phi & \sin\phi\cos\theta \\ 0 & -\sin\phi & \cos\phi\cos\theta \end{bmatrix}
\frac{d}{dt}\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix}.$$
The Euler rates in terms of the body rates come from inverting the matrix in this expression as follows (noting that this matrix is not orthogonal; see Exercise 3.1):

$$\frac{d}{dt}\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix} =
\begin{bmatrix} 1 & 0 & -\sin\theta \\ 0 & \cos\phi & \sin\phi\cos\theta \\ 0 & -\sin\phi & \cos\phi\cos\theta \end{bmatrix}^{-1}
\begin{bmatrix} p \\ q \\ r \end{bmatrix} =
\begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \dfrac{\sin\phi}{\cos\theta} & \dfrac{\cos\phi}{\cos\theta} \end{bmatrix}
\begin{bmatrix} p \\ q \\ r \end{bmatrix}.$$
Now there are some interesting and potentially difficult numerical issues associated with this equation. Whenever the Euler angle θ becomes an odd multiple of π/2, this matrix becomes singular
(its inverse becomes infinite), so keeping track of the orientation using this representation can have numeric problems whenever the Euler angles are near 90 degrees or 270 degrees. Even if there were a nice way of overcoming the numerical issues, the fact that the transformation matrix is not orthogonal would create numerical stability issues. What can we do about this?
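You can see the singularity directly in MATLAB by evaluating the body-rate-to-Euler-rate matrix as θ approaches 90 degrees; its condition number grows without bound, which is the numerical symptom of the problem described above:
phi = 0.3;                                  % roll angle (rad), arbitrary
for theta = [60 80 89 89.9]*pi/180          % pitch angles approaching 90 degrees
    E = [1  sin(phi)*tan(theta)  cos(phi)*tan(theta)
         0  cos(phi)            -sin(phi)
         0  sin(phi)/cos(theta)  cos(phi)/cos(theta)];
    fprintf('theta = %5.1f deg, cond(E) = %g\n', theta*180/pi, cond(E));
end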
3.4.2 Direction Cosines
Assume that the three rotations that lead to a general rotation are sequential about the axes z, y, and x (axes 321 as defined above). The rotation matrices above give the final orientation as the product of three matrices as follows (starting with a rotation of angle ψ about z, followed by a rotation of θ about y, and, last, a rotation of φ around x):

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}
\begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}
\begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$= \begin{bmatrix}
\cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\
\sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi & \sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi & \sin\phi\cos\theta \\
\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi & \cos\phi\cos\theta
\end{bmatrix}.$$
As we pointed out when we were talking about the Euler angles, this matrix is the direction cosine matrix, and it defines the cosines of the angles between the new axes (after the rotation) and the old (before the rotation). Thus the 1,1 element of the matrix is the cosine of the angle between the x-axis after rotation and the x-axis before rotation, the 1,2 element is the cosine of the angle between the x-axis after and the y-axis before, etc. If we denote this matrix by C, then when the object is rotating about its body axes with rates p, q, and r, the direction cosine matrix satisfies the differential equation
$$\frac{dC}{dt} = -\begin{bmatrix} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{bmatrix} C
= \begin{bmatrix} 0 & r & -q \\ -r & 0 & p \\ q & -p & 0 \end{bmatrix} C = \Omega\, C.$$

This equation comes from the fact that the rate of angular rotation is the cross product of the vector rate with the body axis vector. (In Exercise 3.2 you will show that the matrix

$$\Omega = \begin{bmatrix} 0 & r & -q \\ -r & 0 & p \\ q & -p & 0 \end{bmatrix},$$

multiplying, on the left, any column $c_i$ of C, gives a result that is the same as $\dot{c}_i$.)
This equation can be the calculation used to find the orientation of an object as it rotates, but because there are nine elements in C and a rotation has only three independent variables, we would be calculating nine values when only three are required. For this reason, direction cosine formulations are not preferred for calculating rotations. The next section describes the preferred method. It is instructive to build a Simulink model that uses this equation to keep track of the orientation of an object and to compare the computation time required with that required using the quaternion formulation (see Exercise 3.3).
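If you want to experiment with this formulation before building the Simulink model in Exercise 3.3, a minimal MATLAB sketch of the propagation is shown below; the skew helper and the crude fixed-step Euler integration are illustrative choices only:
skew = @(w) [  0    -w(3)   w(2)
              w(3)    0    -w(1)
             -w(2)   w(1)    0  ];          % cross-product matrix [w x]
C  = eye(3);                                % start aligned with the inertial frame
w  = [0.1; -0.05; 0.02];                    % constant body rates p, q, r (rad/s)
dt = 0.01;
for k = 1:1000
    C = C + (-skew(w)*C)*dt;                % dC/dt = -[w x]*C, one Euler step
end
orthogonality_drift = norm(C*C' - eye(3))   % grows unless the matrix is re-orthogonalized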
3.4.3 Axis-Angle Rotations
Euler also proved that any three-dimensional rotation can be represented using a single rotation about some axis (Euler's theorem). The axis-angle form of a rotation uses this observation. For now let us not worry about how we determine the axis; assume that the vector n specifies it. The desired rotation is defined to be (using the right-hand rule) a rotation about this axis by some angle. The vector n is a unit vector (i.e., $n^T n = 1$).
Most of the blockset tools in Simulink (the Aerospace Block Set and SimMechanics) do not make direct use of the axis-angle form, but this representation is the starting point for deriving other forms. The axis-angle approach forms the basis of the quaternion representation that is finding more applications in many diverse fields (for example, mechanical applications, computer-aided design, and robotics).
We need four pieces of information to describe the axis-angle transformation, so we denote it by the four-dimensional vector $[\theta\ \ n]^T = [\theta\ \ n_x\ \ n_y\ \ n_z]^T$. However, because $n^T n = 1$ (n specifying the direction of the axis that we are rotating about, not its length), only three of these four numbers are independent. When there is a rotation about the body axis, the rotational rate around the axis n is

$$\frac{d\mathbf{n}}{dt} = \mathbf{n}\times\omega = -\omega\times\mathbf{n}.$$
3.4.4 The Quaternion Representation
There is a nice, computationally compact method for computing rotations using the axis-angle form. The method uses the quaternion representation, invented by William Hamilton in 1843. A quaternion is a modification of the angle-axis 4-vector that represents the rotation, defined as

$$q = \begin{bmatrix} q_1 & s \end{bmatrix}^T = \begin{bmatrix} n_x\sin(\theta/2) & n_y\sin(\theta/2) & n_z\sin(\theta/2) & \cos(\theta/2) \end{bmatrix}^T.$$
Because of the $\sin(\theta/2)$ multiplying the vector n in this definition, the rotational rate of $q_1$ (the first three components of the quaternion) is

$$\frac{dq_1}{dt} = \tfrac{1}{2}\, q_1 \times \omega.$$
By using sin and cos of the rotation angle instead of the angle itself (as was done in the axis-angle representation), the quaternion is numerically easier to manipulate. In fact, the norm of q is

$$q^T q = q_1^T q_1 + s^2 = 1.$$

(As an exercise, show that this is true.) Since q must be a unit vector, we can continuously normalize it as we compute it (which ensures an accurate representation of the axis of rotation).
The other attribute of the quaternion representation that makes it more robust nu-
merically is the way the body rates determine the quaternion rates. The derivative of the
[Model annotation from Figure 3.12: the subsystem implements the quaternion equations dq₁/dt = ½[q₁ × ω + s ω] and ds/dt = −½ ωᵀq₁; to keep the quaternion at unit norm, ||q|| is calculated at each step and the derivatives are divided by this norm before they are integrated.]
Figure 3.12. Using quaternions to compute the attitude from three simultaneous body axis rotations.
quaternion, using the partition above, is

$$\frac{dq}{dt} = \frac{d}{dt}\begin{bmatrix} q_1 \\ s \end{bmatrix}
= \frac{d}{dt}\begin{bmatrix} \mathbf{n}\sin(\theta/2) \\ \cos(\theta/2) \end{bmatrix}
= \begin{bmatrix} \dfrac{d\mathbf{n}}{dt}\sin(\theta/2) + \tfrac{1}{2}\cos(\theta/2)\,\mathbf{n}\,\dfrac{d\theta}{dt} \\[2mm] -\tfrac{1}{2}\sin(\theta/2)\,\dfrac{d\theta}{dt} \end{bmatrix}
= \begin{bmatrix} \mathbf{n}\times\omega\,\sin(\theta/2) + \tfrac{1}{2}\cos(\theta/2)\,\omega \\[2mm] -\tfrac{1}{2}\sin(\theta/2)\,\dfrac{d\theta}{dt} \end{bmatrix}
= \frac{1}{2}\begin{bmatrix} s\,\omega + q_1\times\omega \\ -\omega^T q_1 \end{bmatrix}.$$
Thus, calculating the quaternion rate is simply a matter of some algebra involving the cross product of the quaternion with the body rate (for $q_1$) and an inner product with the body rate (for s).
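A direct MATLAB transcription of this result is a convenient way to check the algebra before wiring up the blocks; the function name below is an illustrative choice, not something in the NCS library:
function qdot = quat_rate(q, omega)
% QUAT_RATE  Quaternion derivative dq/dt = 1/2*[s*omega + q1 x omega; -omega'*q1]
% q     = [q1; q2; q3; s], vector part first and scalar part last
% omega = [p; q; r], body rates in rad/s
q1 = q(1:3);
s  = q(4);
qdot = 0.5*[ s*omega + cross(q1, omega)
            -omega'*q1 ];
end
Propagating q with this derivative and renormalizing after each step (q = q/norm(q)) mimics what the subsystem in Figure 3.12 does.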
We have created a subsystem model for these equations (see Figure 3.12). It uses some Simulink blocks that we have not encountered before. The first is the block that extracts a component from a vector (the Selector block). We use this block twice: first to extract
$q_1$ and second to extract s. The other blocks that are used are the dot product and cross product blocks in the Math Operations library. We need these because of the cross product in $dq/dt$ and the calculation of $\omega^T q_1$.
The cross product block in Simulink implements the definition of the cross product using a masked subsystem. Since we have not thoroughly explored the masking of subsystems yet, let us take a little detour here to do so. The idea behind a mask is to create a new Simulink block (i.e., a user-defined block where the data needed to execute the block appear in a dialog that the user selects by double clicking on the block, as we have seen many times for the built-in blocks in the Simulink browser). The mask hides the internal mathematics from the user so it has the look and feel of a Simulink built-in block. To build such a block, you first create a subsystem in the usual way and then invoke the mask. For the cross product, the subsystem uses two multiplication blocks and four Selector blocks (to select the appropriate components of the vector for the cross product). The subsystem is in the figure above.
If you were creating this block as a masked subsystem, the next step in masking the subsystem would be to right click the subsystem block and select Edit Mask from the menu. This opens the Edit mask dialog that allows you to specify the mask properties. For the cross product block, no input parameters are required, so the only thing you might want to do is provide documentation for the block using the Documentation tab in the dialog. We will see other masked subsystems as we start to build models that are more complex. Therefore, we will defer further discussion on building a mask until we need to use them in these models. Using the equations above and the cross product block, the Simulink model that will compute the quaternions is shown below. (This model is called Quaternion_block in the NCS library.) The input to this block is the angular velocity in body coordinates of the rigid body, and the output is the quaternion rate. To complete the quaternion calculations we must integrate the quaternion rate. We will do this outside the quaternion subsystem after we show how to extract the Euler angles and direction cosine matrix from q.
We use the definition of q to find the Euler angles and direction cosine matrix. Refer back to the definition of the direction cosine matrix: the Euler angles come from the appropriate angle in the matrix and the inverse trigonometric functions. For example, in the 321 representation we developed above, the 3,2 element of the direction cosine matrix is $\sin\phi\cos\theta$, and the 3,3 element is $\cos\phi\cos\theta$, so if we divide the 3,2 element by the 3,3 element we get $\tan\phi$, so the Euler angle is given by $\phi = \tan^{-1}(c_{3,2}/c_{3,3})$. In a similar fashion, the other angles are $\theta = \sin^{-1}(-c_{1,3})$ and $\psi = \tan^{-1}(c_{1,2}/c_{1,1})$.
From the definition of the quaternion, the Euler angles are determined from the following (where the components of the vector $q_1$ are denoted $q_1$, $q_2$, and $q_3$):

$$\phi = \tan^{-1}\!\left(\frac{2(q_2 q_3 + s q_1)}{s^2 - q_1^2 - q_2^2 + q_3^2}\right),$$
$$\theta = \sin^{-1}\!\bigl(2(q_1 q_3 - s q_2)\bigr),$$
$$\psi = \tan^{-1}\!\left(\frac{2(q_1 q_2 + s q_3)}{s^2 + q_1^2 - q_2^2 - q_3^2}\right).$$
Again, using the same reasoning, the direction cosine matrix is

$$\begin{bmatrix}
s^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2-sq_3) & 2(q_1q_3+sq_2) \\
2(q_1q_2+sq_3) & s^2+q_2^2-q_1^2-q_3^2 & 2(q_2q_3-sq_1) \\
2(q_1q_3-sq_2) & 2(q_2q_3+sq_1) & s^2+q_3^2-q_2^2-q_1^2
\end{bmatrix}.$$
A Simulink block to do these transformations is in Figure 3.13 (this is the model Quaternion2DCM in the NCS library). Notice that this model implements the direction cosine matrix from the following definition:

$$\mathrm{DCM} = \left(s^2 - |q_1|^2\right) I_{3\times 3} + 2\,q_1 q_1^T - 2s\begin{bmatrix} 0 & -q_3 & q_2 \\ q_3 & 0 & -q_1 \\ -q_2 & q_1 & 0 \end{bmatrix}.$$
(In Exercise 3.5, you are to verify that this calculation gives the same direction cosine matrix
as that shown above.)
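A direct MATLAB transcription of this definition (the function name is illustrative) is a handy cross-check for the Simulink block in Figure 3.13:
function C = quat_to_dcm(q)
% QUAT_TO_DCM  Direction cosine matrix from q = [q1; q2; q3; s]
v = q(1:3);                                % vector part
s = q(4);                                  % scalar part
Q = [  0    -v(3)   v(2)
      v(3)    0    -v(1)
     -v(2)   v(1)    0  ];                 % skew-symmetric matrix built from q1
C = (s^2 - v'*v)*eye(3) + 2*(v*v') - 2*s*Q;
end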
When this discussion started, we noted that the rotational acceleration comes from the Euler equation

$$\frac{d\omega}{dt} = J^{-1}\bigl(\mathbf{T} - \omega\times(J\omega)\bigr).$$

We have assumed that the inertia matrix is constant; otherwise, its derivative needs to be included since the left-hand side of this equation is really the derivative of the total momentum given by Jω.
We combine the body-axis angular acceleration term above with the quaternion calculation shown in Figure 3.12 to give the Simulink model shown in Figure 3.14. (This model, called Quaternion_acceleration, is in the NCS library.) Also, note that the block implements a time varying inertia matrix using the input port number 2 for the derivative of the inertia; you can either remove the port if the inertia is constant or put the 3 × 3 zero matrix as the input.
Once again, some new Simulink blocks are in this model. The multiplication block in the Math Operations library computes the inverse of the matrix J. To do this, drag the multiplication block into the model and then double click to open its block dialog. In the dialog, change the multiplication type to Matrix (*) in the pull-down menu, and then put the symbols /* in the Number of inputs box. The icon will change to the matrix multiply, and Inv will denote the top input (because of the division symbol you entered in the dialog).
Figure 3.13. Simulink block to convert quaternions into a direction cosine.
We have already encountered the text block with TeX when we explored the Lorenz attractor in Section 3.1. This model uses this feature, and it is a good habit to do so. You can place a text annotation anywhere in a Simulink model by double clicking at the desired location. This approach should always be used to annotate the model by showing the equations that the model implements. If you open the model later, the combination of the Simulink model and the equations makes it very easy to understand and reuse the model. After you double click on a location in the model for the equations, simply type the text. You have control over the font selected, the font size, the font style, the alignment of the font, and finally the ability to turn on TeX commands in the block. These are selectable (after the text is typed) using the Format menu in the Simulink window. For the TeX commands, the options are the subset of TeX that MATLAB recognizes. Thus, for example, the annotation of the equation at the bottom of the quaternion model in Figure 3.12 above uses the following text (the text appears centered in the block because the alignment is set to center):
[Model annotations from Figure 3.14: the acceleration section implements dω/dt = J⁻¹[−ω × Jω − (dJ/dt)ω + m_total]; the quaternion section implements dq₁/dt = ½[q₁ × ω + s ω] and ds/dt = −½ ωᵀq₁, with the derivatives divided by ||q|| before integration so the quaternion keeps unit norm.]
Figure 3.14. Combining the quaternion block with the body rotational acceleration.
{\bf{\it This block implements the quaternion equations.}}
Let the quaternion {\bfq} (a 4 vector) be partitioned into a 3 vector and a scalar as
follows:
{\bfq} = {[\it {\bfq_1} } {s ]^T}.
The differential equations for the partitioned components of the quaternion vector are
{{{d{\bf\itq_1}}}}/_{\itdt} = 1/2 [ {\bf{\itq_1} X {\omega}} + s {\bf {\omega}} ]
and
{ds}/_{dt} = -1/2 {\bf{\omega}^T} {\it{\bfq_1}}.
To ensure that the quaternion vector has a unit norm, {\bf||q||} is calculated at each
step and the derivatives are divided by this norm before they are integrated (ensuring
that the vector has unit norm at the time t, just prior to the next integration step).
The command \bf{ } causes the text inside the braces to appear in boldface. The equation that defines the quaternion vector is {\bfq} = {[\it {\bfq_1} } {s ]^T}, where the \it invokes italics, and the ^ means superscript. Since the purpose of Simulink is to model the equations in a signal flow form, it is always good practice to include annotation for the actual equation. In fact, this practice will ensure that the next user of the block understands both the block flow and the equations that the block is implementing. To familiarize you with the TeX notation, open the quaternion model and review the two text blocks.
3.5 Modeling the Motion of a Satellite in Orbit
With the blocks we created in Section 3.4, we can now build a simulation for the rotational dynamics of a satellite in orbit. This really is the first Simulink model that we will build that is close to a real problem. As such, you can use it for models that you might build in the future.
We start by listing the specifications for the model and for what we hope to achieve. We assume we are trying to build a control system for a satellite that will orbit the earth. We also assume that the satellite's control system will consist of a set of reaction wheels: devices that are essentially large wheels attached to a motor. The motor accelerates the reaction wheel and in the process creates a torque on the vehicle through Newton's third law. (The spacecraft reacts to the accelerating inertia because the torque produced is equal and opposite to the torque on the wheel.) We assume that there are three wheels mounted on the spacecraft along the three orthogonal body axes. (For the exercise we will assume that the wheels are along the axes of the body, and it is symmetric so the inertia matrix is diagonal.)
The first part of the model uses the quaternion and acceleration model from Section 3.4. For the second part of the model, we will need to create models for the electric motors that drive each of the wheels and the torque that they create. Once we have the model, we can create a feedback controller for the reaction wheels that will cause them to move the satellite from one orientation to another. In the process of creating these models, we will end up with a simulation for the rotational motion of a satellite in orbit. We also will confront a problem that always exists with reaction wheels, namely that they can provide a torque only as long as the wheels can be accelerated. Because a motor cannot accelerate once it has reached its maximum speed, a method is required to allow the motor to decelerate back to zero speed. This uses reaction jets to unload the wheel (the jets, by firing in the opposite direction from the torque created by the deceleration, allow the reduction of the wheel's speed without affecting the spacecraft). We will be modeling only the reaction wheel in the following, not the unloading of the wheel, and the unloading is left as an exercise.
First, we build a model for the reaction wheels. A DC motor converts an electric current flowing through an iron core coil into a magnetic field that interacts with a stationary magnetic field to create a torque that causes the coil to spin. Current flowing through a coil of wire creates a north pole on one side of the coil and a south pole on the opposite side. The interaction of the magnet with the stationary magnetic field will cause the coil to spin as long as the stationary poles are opposite to the spinning coil (i.e., as long as a north pole of the spinning coil is near a north pole of the stationary coil). As soon as the spin of the coil causes the north pole of the coil to align with the south pole of the stationary magnet,
[Circuit diagram: the motor voltage drives the armature resistance and armature inductance in series with the back emf source. The back emf is K_b ω, where K_b is the back emf gain, and the mechanical side of the motor obeys J dω/dt = K_i i_R, where K_i is the torque constant.]
Figure 3.15. Electrical circuit of a motor with the mechanical equations.
the coil will see a torque that stops it from rotating, and it will stop. To overcome this, the current in the coil must change direction at least every half revolution. The device that switches the direction of the current at the half revolution point, if it is mechanical, is a commutator.
Most modern motors achieve this switching using electronics where, as the rotor turns, a device on the shaft lets the driving electronics know where the rotor is relative to the fixed magnet (the stator). The torque applied to the motor is proportional to the current flowing in the rotating coil, and as long as the gap between the rotating coil and the fixed magnet is small, the torque is nearly linear. Thus, for the first part of the model, the torque is directly proportional to the magnetic field induced in the rotor by the rotor current; since the magnetic field is also nearly linear, the torque is proportional to the current flowing in the rotor. Thus, the torque is $T_m = K_i i_R$. The rotor electrical circuit consists of the rotor resistance $R_R$, the rotor inductance $L_R$, the voltage applied to the rotor $V_{in}$, and the voltage induced across the rotor because of its motion in the magnetic field of the stator, $V_b$. (This voltage is the back electromotive force [emf].)
A simple circuit diagram (see Figure 3.15) can represent the electromechanical equations for the motor. In this diagram, the motor torque $T_m$ is the controlled source $K_i i$, and the back emf, $V_b$, is the controlled voltage source whose voltage is $K_b \omega_M$.
It is a simple matter to use the fact that the sum of the voltages around the loop must be zero to get the equation for the current in the rotor ($i_R$):

$$L_R \frac{di_R}{dt} = -R_R\, i_R + V_{in} - V_b.$$
The back emf is a nonlinear function of how fast the motor is turning. For well-designed motors, however, the voltage $V_b$ is proportional to the motor angular velocity ($\omega_M$). Thus, the motor electrical circuit is the differential equation

$$\frac{di_R}{dt} = -\frac{R_R}{L_R}\, i_R - \frac{K_b}{L_R}\, \omega_M + \frac{1}{L_R}\, V_{in}.$$
Because electrical energy, when converted into mechanical energy, is conserved, the constants $K_i$ and $K_b$ are dependent. Thus the electrical power (the product of the back emf $e_b$ and the rotor current) is $P_{Electrical} = e_b i_R / 746$ (where in the English unit system the 746 converts watts into horsepower [hp]). Using the linear back emf voltage, this term becomes $K_b \omega_M i_R / 746$. In a similar way, the mechanical power is the torque produced times the motor angular velocity: $T_m \omega_M = K_i i_R \omega_M / 550$ (where the 550 in the English unit system converts foot-pounds per second into hp). Equating these gives $K_b = (746/550) K_i = 1.3564\, K_i$ (in the English unit system).
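Combining this armature equation with the mechanical relation from Figure 3.15 ($J\,d\omega_M/dt = K_i i_R$) gives a simple two-state motor model. The MATLAB sketch below uses made-up parameter values (not the values in the NCS reaction wheel subsystem) just to show the structure:
Rr = 1.0;         % rotor resistance
Lr = 0.5;         % rotor inductance
Ki = 0.05;        % torque constant
Kb = 1.3564*Ki;   % back emf constant, using the English-unit relationship above
Jw = 0.2;         % wheel inertia (made-up value)

% State x = [i_R; omega_M], driven by a constant input voltage Vin
motor = @(t, x, Vin) [(-Rr*x(1) - Kb*x(2) + Vin)/Lr; Ki*x(1)/Jw];
[t, x] = ode45(@(t, x) motor(t, x, 12), [0 300], [0; 0]);   % 12 volt step input
plot(t, x(:,2)), xlabel('Time (s)'), ylabel('Wheel speed (rad/s)')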
3.5.1 Creating an Attitude Error When Using Direction Cosines
We need to describe how one creates an attitude error for the 3-axis rotation of a spacecraft. None of the descriptions (Euler, direction cosine, or quaternion) lends itself to forming the difference between a desired and an actual value. We need a way to create an error that goes to zero at the desired attitude. The direction cosine matrix difference $C - C_{desired}$, or some similar error for either the quaternion or the Euler angle representation, is not usable. (The property we need is an error that drives the rotation toward the desired command and is zero when the command is achieved.) This is because the necessary rotational commands about the three body axes are a nonlinear function of the angles (in any of the representations).
Let us perform some algebra with Maple to see what we can do to get a representation for the commands in terms of the direction cosine matrix and the quaternion approach. We will work with the direction cosine matrix first. The direction cosine matrix we worked with earlier is
$$C_{BI} = \begin{bmatrix}
\cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\
\sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi & \sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi & \sin\phi\cos\theta \\
\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi & \cos\phi\cos\theta
\end{bmatrix},$$

where the rotations are about the z-, y-, and x-axes, are in the sequence ψ, θ, φ, and we have explicitly indicated with the subscript BI that the rotation is from some inertial (i.e., fixed) axes to the principal axes of the body that is rotating.
When the body is rotating with angular velocity

$$\omega = \begin{bmatrix} p \\ q \\ r \end{bmatrix}$$

about these principal axes, the matrix C satisfies the differential equation

$$\frac{dC_{BI}}{dt} = \begin{bmatrix} 0 & r & -q \\ -r & 0 & p \\ q & -p & 0 \end{bmatrix} C_{BI} = \Omega\, C_{BI}, \qquad C_{BI} = \begin{bmatrix} c_1 & c_2 & c_3 \end{bmatrix}.$$

The value of Ω is explicitly defined by this equation.
We can also write this matrix differential equation in another form by creating a 9×1 vector that consists of the columns of C concatenated one on top of the other. That is, we let the vector c be

$$c = \begin{bmatrix} c_{11} & c_{21} & c_{31} & c_{12} & c_{22} & c_{32} & c_{13} & c_{23} & c_{33} \end{bmatrix}^T.$$

We also define the cross product matrix for C as (where the subscript i denotes the ith column of the matrix C)

$$C_{\times i} = \begin{bmatrix} 0 & -c_{3i} & c_{2i} \\ c_{3i} & 0 & -c_{1i} \\ -c_{2i} & c_{1i} & 0 \end{bmatrix}.$$

In terms of this cross product matrix, the differential equation for the columns of C is

$$\frac{dc_i}{dt} = C_{\times i}\, \omega.$$

Applying this equation to the vector c, we get a vector matrix differential equation for the 9-vector c in terms of the three cross product matrices as (where $C_{Cross}$ is the 9×3 matrix defined explicitly by this equation)

$$\frac{d}{dt}\, c = \begin{bmatrix} C_{\times 1} \\ C_{\times 2} \\ C_{\times 3} \end{bmatrix} \omega = C_{Cross}\, \omega.$$
You can deduce some interesting attributes of $C_{Cross}$ using Maple. Develop a Maple program (from MATLAB) to show, remarkably, the two products below:

$$C_{Cross}^T C_{Cross} = 2\, I_{3\times 3} \qquad \text{and} \qquad C_{Cross}^T\, c = 0_{3\times 1}.$$

Maple facilitates the calculations (the algebra and trigonometric identities involved in manipulating these matrices are quite extensive and are done automatically) and saves an immense amount of work.
If you have not built the Maple program yourself, you can open the MATLAB program Maple_CCt_identity in the NCS library to do the computations. This program sets up the direction cosine matrix in Maple and computes $C_{Cross}^T C_{Cross}$ and $C_{Cross}^T c$. (It also computes the vector q and the matrix Q used in the following discussion.)
From these identities, we can create a nice error signal for a rotation. Multiply the equation

$$\frac{d}{dt}\, c = C_{Cross}\, \omega$$

by $C_{Cross}^T$ on both sides to give

$$C_{Cross}^T \frac{d}{dt}\, c = C_{Cross}^T C_{Cross}\, \omega = 2\omega.$$

So $\omega = \tfrac{1}{2} C_{Cross}^T \frac{d}{dt} c$; that is, ω is the rotational rate needed to produce the direction cosine matrix C. Therefore, if we want a particular direction cosine matrix whose elements are the 9-vector c, all we need to do is calculate the value of ω from this equation and use it to drive
the three axes of the spacecraft. From the second identity we proved in the Maple program,
when the direction cosine matrix is at the desired value, the command becomes zero, as we
require. Notice how neatly this error signal works.
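If Maple is not available, a purely numeric spot check of the two identities takes only a few lines of MATLAB (the skew helper is the same illustrative one used earlier):
psi = 0.4; theta = -0.2; phi = 1.1;                     % arbitrary 321 Euler angles
Rz = [cos(psi) sin(psi) 0; -sin(psi) cos(psi) 0; 0 0 1];
Ry = [cos(theta) 0 -sin(theta); 0 1 0; sin(theta) 0 cos(theta)];
Rx = [1 0 0; 0 cos(phi) sin(phi); 0 -sin(phi) cos(phi)];
C  = Rx*Ry*Rz;                                          % direction cosine matrix

skew   = @(v) [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];
Ccross = [skew(C(:,1)); skew(C(:,2)); skew(C(:,3))];    % the 9-by-3 matrix C_Cross
c      = C(:);                                          % columns stacked into the 9-vector

identity_1 = norm(Ccross'*Ccross - 2*eye(3))            % should be ~0
identity_2 = norm(Ccross'*c)                            % should be ~0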
3.5.2 Creating an Attitude Error Using Quaternion Representations
There is a complete analogy to this result for the quaternion representation. We will summarize it here and ask you to reproduce the results in Exercise 3.7 at the end of this chapter.
The first part of the quaternion result requires the formation of the 4 × 3 matrix Q as follows:

$$Q = \begin{bmatrix} s & -q_3 & q_2 \\ q_3 & s & -q_1 \\ -q_2 & q_1 & s \\ -q_1 & -q_2 & -q_3 \end{bmatrix}.$$

Then, following similar steps used in the direction cosine matrix derivation, the following two identities are true:

$$Q^T Q = I_{3\times 3} \qquad \text{and} \qquad Q^T \begin{bmatrix} q_1 & q_2 & q_3 & s \end{bmatrix}^T = 0_{3\times 1}.$$

Finally, the differential equation for the quaternion vector that we developed above (in terms of the matrix Q) is

$$\omega = 2\, Q^T\, \frac{d}{dt} \begin{bmatrix} q_1 & q_2 & q_3 & s \end{bmatrix}^T.$$

This last equation becomes the defining equation for the quaternion command (again following the steps we used for the direction cosine); the commanded rate to achieve a desired quaternion value is

$$\omega_C = 2\, Q^T q_{Desired}.$$
We use this in the spacecraft attitude-control model. All of the pieces we need are now in
place, so we can create the Simulink model for the simulation. You might want to try to do
this yourself before you open the model Spacecraft_Attitude_Control in the NCS
library.
3.5.3 The Complete Spacecraft Model
The model of the complete rotational dynamics of the spacecraft is in Figure 3.16. It has four subsystems to compute the following:
• the spacecraft rotational motion, which is a variation of the quaternion block we created in Section 3.4.4;
• the Reaction Wheel dynamics that we created above;
Figure 3.16. Controlling the rotation of a spacecraft using reaction wheels.
• a block that has a PID controller in it;
• a block that forms the matrix Q from the quaternion vector.
In addition, there are blocks that compute the command ($\omega_C = 2 Q^T q_{Desired}$) and a block that checks the identity $Q^T Q = I_{3\times 3}$. (This is used to verify the accuracy of the computations and of the model.)
It is a good idea to familiarize yourself with each of the subsystems and the overall simulation before you run it. The data is all contained in the model; the inertia of the wheels is 50 slug-ft² and the vehicle inertia matrix is diagonal, with the diagonal values of 1000, 700, and 500. The reaction-wheel-dynamics model uses the same parameters as we used in the model above. The model uses as initial conditions for q the value [0.4771 0.4771 0.4771 0.5509], and the desired final value is set to q_desired/norm(q_desired) in the block that is labeled Desired Attitude. The value for q_desired is [1 2 3 4]. You can change this value at the MATLAB command line to any value you would like and see the result.
The PID controller is set up with only PD. (The Integral gain is set to zero; look inside the PID block to see the values.) The Proportional gain is set to 10, and the Derivative gains (on the three angular velocities p, q, and r) are 70. We have not attempted to make these values optimum in any way; they simply make the response reasonably fast with a minimum overshoot in the quaternion values at the end of the command.
When you run the model using these parameter values, you should see the quaternion values plotted in the figure at the right below. Note that these quaternion values change smoothly over the time interval, and they do indeed go to the desired final values from the initial value we used.
Figure 3.17. Matrix concatenation blocks create the matrix Q used to achieve a
desired value for q.
One aspect of this model is new, namely the way we create the matrix $Q^T$. We use two steps to form the elements of the matrix (which has 3 rows and 4 columns). First, we compute the 3 rows (each 1 × 4) one at a time, and then we concatenate them vertically to create the matrix.
The block that creates the rows and columns is the Matrix Concatenation block in the Math Operations library in Simulink. This block takes scalars or one-dimensional vectors and places them into the columns (called horizontal concatenation) or the rows (called vertical concatenation) of a matrix. The initial construction of the rows places the various scalar values from the quaternion vector q into the appropriate locations to build the three rows of the matrix (using the horizontal version of the block), and then the rows are stacked into a matrix using the vertical version of the block. Figure 3.17
shows how we used the block Horizontal Concatenation in the subsystem called Form
quaternion propagation matrix.
The quaternion block is applicable to more complex dynamics than a spacecraft.
Examples are the rotational dynamics of an automobile or any other vehicle in motion, a
robot, a linkage, or an inertial platform. Since the quaternion formulation can model these,
it is a good idea to keep this model as part of a user library with the other models in the
Simulink browser.
3.6 Further Reading
Reference [40] (Chapter 4 in particular) has a good description of the Euler and quaternion
representations of the rotational motion of a spacecraft.
In his book Unknown Quantity: A Real and Imaginary History of Algebra [7], John
Derbyshire describes the development of the quaternion representation as the logical leap
into the fourth dimension after the two-dimensional representation of complex numbers.
He notes that in 1827 Hamilton began investigating complex numbers in a purely algebraic
way. He found it extremely difficult to come up with a scheme that would make the algebra distributive and would maintain the property of complex numbers that the modulus of the product of two numbers is the product of their respective moduli. Hamilton's insight into the fact that you could not satisfy this modulus rule with triplets but could do so with quadruplets led to the invention of quaternion algebra. Hamilton was so pleased with this result that he inscribed it with a knife on a stone of Brougham Bridge near Dublin. Derbyshire's book
is about the algebraic properties of the quaternion, which are not a major consideration for
portraying rotations. Despite this, his book is fun to read for its mathematical discussions
and its look at the personalities behind the math.
There is an excellent discussion of the computational efficiency of the quaternion representation in [11], and you should consult this after you complete Exercise 3.3.
Exercises
3.1 Show that the matrix product

$$\begin{bmatrix} p \\ q \\ r \end{bmatrix} =
\begin{bmatrix} \dfrac{d\phi}{dt} \\ 0 \\ 0 \end{bmatrix} +
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}
\begin{bmatrix} 0 \\ \dfrac{d\theta}{dt} \\ 0 \end{bmatrix} +
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}
\begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ \dfrac{d\psi}{dt} \end{bmatrix}$$

gives

$$\begin{bmatrix} p \\ q \\ r \end{bmatrix} =
\begin{bmatrix} 1 & 0 & -\sin\theta \\ 0 & \cos\phi & \sin\phi\cos\theta \\ 0 & -\sin\phi & \cos\phi\cos\theta \end{bmatrix}
\frac{d}{dt}\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix}.$$
3.2 Show that the direction cosine matrix satisfies the differential equation

$$\frac{dC}{dt} = \begin{bmatrix} 0 & r & -q \\ -r & 0 & p \\ q & -p & 0 \end{bmatrix} C = \Omega\, C, \qquad C = \begin{bmatrix} c_1 & c_2 & c_3 \end{bmatrix}.$$
3.3 Start with the Simulink model for the quaternion representation of rotations. Next, create a model that does the same thing using direction cosine matrices and, last, a model that creates a rotation with Euler angles. Compare the computational efficiency of the three models. In particular, see what happens when the Euler angles approach the points where the transformations become infinite. Reference [24] has a good discussion of why the quaternion approach is better.
3.4 Show that the norm of the quaternion vector is one.
3.5 Verify that the direction cosine matrix from the quaternion vector is

$$\mathrm{DCM} = \left(s^2 - |q_1|^2\right) I_{3\times 3} + 2\, q_1 q_1^T - 2s \begin{bmatrix} 0 & -q_3 & q_2 \\ q_3 & 0 & -q_1 \\ -q_2 & q_1 & 0 \end{bmatrix}.$$
3.6 Try adding a simple controller to the reaction wheel model that will cause the reaction
wheel speed to decrease whenever it gets to 1000 rpm. Use an external torque applied
by a reaction jet. (The jet will create a force, so you need to decide on where it is
relative to the center of gravity to determine the torque.) If you apply a torque using
only a single jet with the force not through the center of gravity, then rotations will
occur about multiple axes. How do you prevent this? How many jets will you need
to unload the reaction wheels in all three axes?
3.7 Verify that the quaternion will move toward the desired quaternion $q_{Desired}$ if the body rate is

$$\omega_C = 2\, Q^T q_{Desired}.$$

In this equation, the matrix

$$Q = \begin{bmatrix} s & -q_3 & q_2 \\ q_3 & s & -q_1 \\ -q_2 & q_1 & s \\ -q_1 & -q_2 & -q_3 \end{bmatrix}.$$
Also, verify that the matrix concatenation blocks in Figure 3.17 create this matrix.
Chapter 4
Digital Signal Processing in
Simulink
We saw in the previous chapters how to build models of continuous time systems in Simulink. This chapter provides insight into how to use Simulink to create, analyze, simulate, and code digital systems and digital filters for various applications.
We begin with a simple example of a discrete system, one discussed in Cleve Moler's
Numerical Computing with MATLAB [29]. This example, the Fibonacci sequence, is not a
digital filter but is an example of a difference equation. In creating the Simulink model for
this sequence, we illustrate the fact that the independent variable in Simulink does not have
to be time. In this case, the independent variable is the index in the sequence. We set the
sample time in the digital block to one, and then we interpret the time steps as the number
index for the element in the sequence.
Digital filters use an analysis technique related to the Laplace transform, called the z-transform. We introduce the mathematics of this transform along with several methods for calculating digital filter transfer functions.
A digital signal typically consists of samples from an analog signal at fixed times. Since it is usually necessary to convert these digital signals back into an analog form so that they can be used (to hear the audio from a CD or a cell phone, for example), it is natural to start by asking how to go about doing this. The answer is the sampling theorem, which shows that an analog signal with a bounded Fourier transform (i.e., $F(\omega) = 0$ for $|\omega| > \omega_M$) can be sampled at intervals of $\pi/\omega_M$ or faster, and these samples can then be used to reconstruct the analog signal. The method for doing this reconstruction is via a filter that allows only the frequencies below $\omega_M$ to pass through (and for this reason, we call the filter a low pass filter). Therefore, the second section of this chapter shows how to develop low pass filters and ways to adapt their properties to make them do other useful signal processing functions such as high pass and band pass. In all cases, we will use Simulink to simulate the filters and explore their properties.
The last part of this chapter will deal with implementation issues. We will look at how Simulink allows us to evaluate different implementations of the digital filter. We will also begin to look at the effect of limited precision arithmetic on the digital filter's performance.
We will then look at a unique combined analog and digital device called a phase lock loop. Simulink allows us to build a simulation of this device and do numerical experiments
that demonstrate its properties. In fact, Simulink is unique in its ability to do some very
detailed analyses of a phase-locked loop (see [11]). This is particularly true when it comes
to analyzing the effect of noise on the loop, a topic that we will take up in Chapter 5.
4.1 Difference Equations, Fibonacci Numbers, and
z-Transforms
One of the more interesting difference equations is the Fibonacci sequence. Fibonacci, the
sequence he developed, and its rather remarkable properties and history are all described
in detail, along with several MATLAB programs developed to illustrate the sequence, in
Chapter 1 of Cleve Moler's Numerical Computing with MATLAB [29]; also see [26].
Let us revisit this sequence using Simulink. The Fibonacci sequence is

$$f_{n+2} = f_{n+1} + f_n \quad \text{with } f_1 = 1 \text{ and } f_2 = 2.$$
As a reminder, the sequence describes the growth in a population of animals that are
constrained to give birth once per generation, where the index is the current generation.
One possible Simulink model to generate this sequence uses the Discrete library
from the Simulink browser. To understand how Simulink works, we need to describe how
one would go about writing a program that generates the Fibonacci numbers. In the NCM
library (the programs that accompany the book Numerical Computing with MATLAB), there
is a MATLAB program that computes and saves the entire Fibonacci sequence from 1 to
n. If, instead, we want only to compute the values as the program runs without saving the
entire set of values, the MATLAB code (called Fibonacci and located in the NCS library)
would look like
function f = fibonacci(n)
% FIBONACCI Fibonacci sequence
% f = FIBONACCI(n) sequentially finds the first n
% Fibonacci numbers and displays them on the command line.
f1 = 1;
f2 = 2;
i = 3;            % f1 and f2 are the first two numbers, so start with the third
while i <= n
    f = f2 + f1   % no semicolon, so each new value is displayed
    f1 = f2;
    f2 = f;
    i = i + 1;
end
Even though we do not need to save the entire sequence in this example, we still need to save the current and the previous value in order to calculate the next value of the sequence. This fact means that this sequence requires two states. In the theory of systems, a state is the minimum information needed to calculate the values of the difference equation. In the snippet of MATLAB code above, we save the values in f1 and f2. When we build a Simulink model to simulate a difference equation, the state needs a place to be stored for use in the solution. Simulink uses the name 1/z to denote this block. To understand where
Figure 4.1. Simulating the Fibonacci sequence in Simulink.
this comes from, we need to show the method for solving difference equations using the discrete version of the Laplace transform. We will do that in a moment, but first let us build the Simulink model for the Fibonacci sequence.
To create the model, from MATLAB open the Simulink Library Browser as we have done previously and open a new untitled model window.
Select the Discrete library, and then select the 1/z icon (the Unit Delay block) and drag one into the open model window. Right click and drag on the Unit Delay block in the model window to make a copy of the Unit Delay. Connect the output of the first delay block to the input of the second. This will send the output of the first delay to the second delay block. Now we need to create the left-hand side of the Fibonacci equation. To do this we need to add the outputs of the two Unit Delay blocks. Therefore, open the Math Operations library and drag the summation block into the model window. Then connect the outputs of each of the Unit Delay blocks to the summation block, one input at a time. The model should look like Figure 4.1.
In order to start the process, we need the correct initial conditions. To set them, double click on each Unit Delay block and set the initial conditions to 2 and 1 (from left to right in the diagram). This will set the initial value of f1 to 1 and the initial value of f2 to 2, as required. Notice that the 1/z block has a default sample time of 1 sec, which is exactly what we want for simulating the sequence, as we discussed in the introduction above. To view the output, drag a Scope block (in the Sinks library) into the model and connect it to the last Unit Delay block. Click the start button on the model to start the simulation of the sequence.
Double click on the Scope block and click on the binoculars icon to see the result.
The simulation makes ten steps and plots the values that the Fibonacci sequence generated
as it runs. The graph should look like Figure 4.2. We created this figure using a built-in MATLAB routine called simplot. This M-file uses a MATLAB structure generated by the output of the Scope block. The Scope block generates this MATLAB data structure during the simulation; the plot comes from the MATLAB command simplot(ScopeData). The plot is Fibonacci Sequence.fig, and it is available in the NCS library from the Figures directory.
If you want to compute the golden ratio phi as was done in NCM [29], the calculation requires that you divide the value of f2 by f1. This uses the Math Operations library Product block. In this library, drag the Product block into the model window and double click on it. The dialog that opens allows you to change the operations. In the dialog box that asks Number of Inputs (which is set to 2 by default), type the symbols * and /. This will cause one of the inputs to be the numerator and the other the denominator in a division (denoted by the × and ÷ signs in the block). Connect the × input on the Product block icon to the line after the first delay block (the Unit Delay block in Figure 4.1) and connect the ÷ input to the Unit Delay1 block. Connect the output of the Product block to a Display block that you can get from the Sinks library. This block displays the numeric value of a signal in Simulink. Figure 4.3 shows the Display with the result of the division after 10 iterations.

Figure 4.2. Fibonacci sequence graph generated by the Simulink model. (The left panel shows the values for the first 20 numbers; the right panel shows the detail from index 0 to 10.)

Figure 4.3. Simulink model for computing the golden ratio from the Fibonacci sequence.
To make the simulation run longer, open the Configuration Parameters menu under the Simulation menu at the top of the Fibonacci model window. The dialog that opens when you do this allows you to change the Stop time. Change it to some large number (from the default of 10), and run the simulation. You should see the display go to 1.618. To see the full precision of this number, double click on the Display block and in the Format pull down, select long. This corresponds to the MATLAB long format. The result should be 1.6180339901756. The limit of this ratio is the golden ratio, phi, which has the value $\phi = \frac{1+\sqrt{5}}{2}$. (MATLAB returns 1.61803398874989 when calculating this, and as can be seen, after 20 iterations Simulink has come very close.) The discussions in [29] and [26] describe phi and its history in detail.
This discrete sequence is only one of many sequences that you might want to find the solution of in Simulink. In a more practical vein, we often want to process a digital signal (a process called digital signal processing). Toward this end, digital filters are part of the Simulink Discrete library. They appear in the Digital library as z-transforms. What is this all about?
4.1.1 The z-Transform
The z-transform, F(z), of an infinite sequence $\{f_k\}$, $k = 0, 1, \ldots, n, \ldots$, is
$$F(z) = \sum_{k=0}^{\infty} f_k z^k.$$
There are many technical details that need to be invoked to ensure that this sum always converges to a finite value, but suffice it to say that because the variable z is complex, the sum is finite for values of z inside some region of the complex plane (even for sequences that diverge). For example, let us compute the z-transform for the sequence that is 1 for all values of k. (This is called the discrete step function.) Thus, we need to compute the sum
$$F(z) = \sum_{k=0}^{\infty} z^k = 1 + z + z^2 + z^3 + \cdots.$$
If we multiply the value of F(z) by z, the sum on the right side becomes
$$zF(z) = z + z^2 + z^3 + \cdots.$$
Now, by subtracting the second series from the first, all of the powers of z subtract (all the way to infinity), and the only term that remains on the right is the 1, so
$$F(z) - zF(z) = 1$$
or
$$F(z) = \frac{1}{1 - z}.$$
As a second example, consider the sequence $\alpha^k$, $k = 0, 1, 2, \ldots$. This sequence is the discrete version of the exponential function $e^{-at}$ since the values of this function at the times $k\Delta t$ generate the sequence $(e^{-a\Delta t})^k = \alpha^k$ (where $\alpha = e^{-a\Delta t}$). Following the same steps as we used above, the z-transform of this sequence is
$$F(z) = \sum_{k=0}^{\infty} \alpha^k z^k = \frac{1}{1 - \alpha z}.$$
It is a simple matter to work with the definition to create a table of z-transforms. This table will allow you to solve any linear difference equation. For example, the discrete sine can be generated using $\sin(\omega k\Delta t) = \frac{e^{i\omega k\Delta t} - e^{-i\omega k\Delta t}}{2i}$ and the above transform of $\alpha^k$.
You can use the MATLAB connection to Maple to get some z-transforms. Try some of these:
syms k n w z
simplify(ztrans(2^n))
This gives z/(z-2) as the result.
ztrans(sym('f(n+1)'))
This gives z*ztrans(f(n), n, z) - f(0)*z as the result.
ztrans(sin(k*n))
This gives z*sin(k)/(z^2 - 2*z*cos(k) + 1) as the result.
Solutions of linear difference equations using z-transforms are very similar to the techniques for solving differential equations using Laplace transforms. Just as the derivative has a Laplace transform that converts the differential equation into an algebraic equation, the z-transform of $f_{k+1}$, $k = 0, 1, \ldots, n, \ldots$, converts the difference equation into an algebraic equation. To see that this is so, assume that the z-transform of $f_k$, $k = 0, 1, \ldots, n, \ldots$, is F(z). Then the z-transform of $f_{k+1}$, $k = 0, 1, \ldots, n, \ldots$, is
$$\sum_{k=0}^{\infty} f_{k+1} z^k = f_1 + f_2 z + f_3 z^2 + \cdots = z^{-1}F(z) - z^{-1}f_0.$$
Notice that this is the same answer as we got when using Maple above. From this, we can
see why Simulink uses 1/z as the notation for the Unit Delay.
4.1.2 Fibonacci (Again) Using z-Transforms
Let us use the z-transform to solve the Fibonacci difference equation. We use the unit delay z-transform above twice. The first application gives the z-transform of $f_{k+2}$, and the second gives the transform of $f_{k+1}$. The z-transform of the Fibonacci equation is therefore
$$z^{-2}F(z) - z^{-2}f_0 - z^{-1}f_1 = z^{-1}F(z) - z^{-1}f_0 + F(z).$$
Solving for F(z) in this expression gives
$$(z^{-2} - z^{-1} - 1)F(z) = z^{-1}f_1 + z^{-2}f_0 - z^{-1}f_0;$$
therefore, substituting the initial values, we have the final algebraic equation for F(z):
$$F(z) = \frac{2z^{-1} + z^{-2} - z^{-1}}{z^{-2} - z^{-1} - 1} = \frac{z^{-1}(z^{-1} + 1)}{z^{-2} - z^{-1} - 1}.$$
Now we can factor the denominator of the function F(z) and write the right-hand side of the above as a partial fraction expansion. The roots of the denominator polynomial are $\lambda_1 = \frac{1+\sqrt{5}}{2}$ and $\lambda_2 = \frac{1-\sqrt{5}}{2}$ (note that $\lambda_2 = 1 - \lambda_1$), so the partial fraction expansion is
$$F(z) = \frac{A}{z^{-1} - \lambda_1} + \frac{B}{z^{-1} - \lambda_2}.$$
A and B are determined by using Heaviside's method, wherein the value of A is obtained by multiplying the left and right side of the above expression by $z^{-1} - \lambda_1$ and then setting the value of $z^{-1} = \lambda_1$, and similarly for B. The inverse z-transform for each of these terms comes from the transforms above. There is a lot of algebra involved in this, so let us just look at the answer (see Example 4.1):
$$f_n = \frac{1}{2\lambda_1 - 1}\left(\lambda_1^{n+1} - (1 - \lambda_1)^{n+1}\right).$$
This is the same result demonstrated in NCM [29].
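The closed-form expression is easy to check numerically. The short script below is only a sketch (it is not one of the NCS library files); it iterates the recurrence with f(1) = 1 and f(2) = 2, as in NCM, and compares the result with the formula above.
% Check the closed-form Fibonacci solution against the recurrence.
lambda1 = (1 + sqrt(5))/2;
n = 1:20;
f = zeros(size(n));
f(1) = 1;
f(2) = 2;
for k = 3:numel(n)
    f(k) = f(k-1) + f(k-2);
end
fclosed = (lambda1.^(n+1) - (1 - lambda1).^(n+1))/(2*lambda1 - 1);
max(abs(f - fclosed))   % the difference is at the level of roundoff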
There is a connection between z-transforms and Laplace transforms that we will develop later, but first let us look at practical applications of difference equations. Modern technology such as cell phones, digital audio, digital TV, high-definition TV, and so on, depends on taking an analog signal, processing it to make it digital, and then doing something to the signal to make it easier to send and receive. Digital filtering permeates all of this technology.
4.2 Digital Sequences, Digital Filters, and Signal
Processing
The advent of digital technology for both telephone and audio applications has made digital
signal processing one of the most pervasive mathematical techniques in use today. The meth-
ods used are particularly easy to simulate with the digital blocks in Simulink's Discrete
library. To understand what these blocks do to a digital sequence, we need to understand
the mathematics that underlies discrete time signal processing and digital control.
4.2.1 Digital Filters, Using z-Transforms, and Discrete Transfer
Functions
To start, we will work with the exponential digital sequence we created above (see Section 4.1.1). Assume that we are going to process a digital sequence $f_k$ using the sequence $\alpha^k$. The digital filter is then
$$y_{k+1} = \alpha y_k + (1 - \alpha)f_k.$$
In this difference equation, the sequence $f_k$ is the signal to be processed (where $f_k$ is the value of the signal f(t) at times $k\Delta t$ as k, an integer, increases from 0), and the sequence $y_k$ is the processed result. The simplest way to solve this equation is to use induction. Starting at k = 0, with the initial condition $y_0$, we get the value of $y_1$ as
$$y_1 = \alpha y_0 + (1 - \alpha)f_0.$$
Now, with $y_1$ in hand, we can compute $y_2$ by setting k = 1 in the difference equation for the filter. The result is as follows:
$$y_2 = \alpha y_1 + (1 - \alpha)f_1 = \alpha\left(\alpha y_0 + (1 - \alpha)f_0\right) + (1 - \alpha)f_1.$$
Thus,
$$y_2 = \alpha^2 y_0 + \alpha(1 - \alpha)f_0 + (1 - \alpha)f_1.$$
If we continue iterating the equation like this, a pattern rapidly emerges and can be used to
write the solution to the equation for any k. (Verify this assertion by continuing to do the
iteration.) This solution is
$$y_k = \alpha^k y_0 + (1 - \alpha)\sum_{j=0}^{k-1} \alpha^{k-1-j} f_j.$$
Notice that $\alpha^k$, the sequence we wanted, multiplies both the summation and the initial condition.
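Before turning to the induction proof, a quick numerical sanity check is useful. The following command line sketch (with an arbitrary random input; none of these names come from the NCS library) iterates the filter and evaluates the closed-form solution, and the two agree to roundoff.
% Compare direct iteration of y(k+1) = alpha*y(k) + (1-alpha)*f(k)
% with the closed-form solution above.
alpha = 0.9;
N = 50;
f = randn(1, N);        % an arbitrary input sequence; f(j+1) holds f_j
y0 = 0.3;               % an arbitrary initial condition
y = zeros(1, N+1);      % y(k+1) holds y_k
y(1) = y0;
for k = 1:N
    y(k+1) = alpha*y(k) + (1 - alpha)*f(k);
end
yc = zeros(1, N+1);
yc(1) = y0;
for k = 1:N
    j = 0:k-1;
    yc(k+1) = alpha^k*y0 + (1 - alpha)*sum(alpha.^(k-1-j).*f(j+1));
end
max(abs(y - yc))        % roundoff-level difference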
We now use induction to show that this is indeed the solution. Remember that a proof
by induction follows these steps:
Verify that the assertion is true for k = 0.
Assume that the assertion is true for k, and show that it is then true for k +1.
Because of the way that we generated the solution, it is clear that it is true for k = 0.
So next, assume that the solution above (for the index k) is true, and let us show that it is
true for k +1.
From the difference equation $y_{k+1} = \alpha y_k + (1 - \alpha)f_k$, we substitute the postulated solution for $y_k$ to get
$$y_{k+1} = \alpha y_k + (1 - \alpha)f_k = \alpha\left(\alpha^k y_0 + (1 - \alpha)\sum_{j=0}^{k-1}\alpha^{k-1-j}f_j\right) + (1 - \alpha)f_k = \alpha^{k+1}y_0 + (1 - \alpha)\sum_{j=0}^{k}\alpha^{k-j}f_j.$$
This is exactly the solution that we postulated with the index at k +1. Thus, by induction,
this is the solution to the difference equation.
Notice that one way of thinking about discrete equations in Simulink is that it imple-
ments the induction algorithm. It uses the definitions of the discrete process and starts at
k = 0, iterating until it reaches the nth sample.
If we take the z-transform of the difference equation $y_{k+1} = \alpha y_k + (1 - \alpha)f_k$, we get
$$z^{-1}Y(z) - z^{-1}y_0 = \alpha Y(z) + (1 - \alpha)F(z).$$
Solving this for Y(z) gives
$$Y(z) = \frac{z^{-1}}{z^{-1} - \alpha}\,y_0 + \frac{(1 - \alpha)}{z^{-1} - \alpha}\,F(z) = \frac{1}{1 - \alpha z}\,y_0 + \frac{(1 - \alpha)z}{1 - \alpha z}\,F(z).$$
By comparing the z-transform above with the solution, we can conclude that the inverse z-transform of $\frac{z}{1-\alpha z}F(z)$ is the convolution sum $\sum_{j=0}^{k-1}\alpha^{k-1-j}f_j$. Note that this says that when a sequence whose z-transform is F(z) is used as an input to a digital filter whose z-transform is H(z) (called the discrete transfer function, and for this filter its value is $H(z) = \frac{z}{1-\alpha z}$), the product H(z)F(z) has an inverse transform that is the convolution sum. This result, called the convolution theorem, provides the rationale for the Simulink notation of using the z-transform of the filter inside a block. The notation implies that the output of the block is the transfer function times the input to the block, even though the output came from the difference equation and the result is a convolution sum (when the system is linear).
Figure 4.4. Generating the data needed to compute a digital filter transfer function.

This slight abuse of notation allows for clarity in following the flow of signals in the Simulink
model because it maintains the operator notation even when the block contains a transfer
function.
One of the major uses of digital filters is to alter the tonal content of a sound. Since music consists of many tones mixed together in a harmonic way, it is useful to see what a digital filter does to a single tone. Therefore, we use Simulink to build a model that will generate the output sequence $y_k$ when the input is a single sinusoid at frequency $\omega$. Thus, we assume that $f_k = A\sin(\omega k\Delta t)$, where the $k\Delta t$ are the sample times. The process of converting an analog signal to a digital number is called sampling. All digital signal processing uses some form of sampling device that does this analog to digital conversion.
The discrete elements in the Simulink library handle the sampling process automatically. Try building a Simulink model that filters an analog sine wave signal with the first order digital filter $y_{k+1} = \alpha y_k + (1 - \alpha)f_k$ before you open the model in the NCS library.
To run the model in the NCS library, type Digital_Filter at the MATLAB command line. Figure 4.4 shows the model.
In this model, the sampling of the sine wave occurs at the input to the Unit Delay block.
The sample time is set in a Block Parameters dialog box (opened by double clicking on
the Unit Delay block). In this dialog, the sample time was set to delta_t (an input from
MATLAB that is set when the model opens). This illustrates an important attribute that
Simulink uses. After sampling the signal, all further operations connected to the block that
does the sampling treat the signal as sampled (discrete). Thus, the Gain block operates on
the sampled output from the Unit Delay, and the addition occurs only at the sample times.
The dialog also allows setting the initial condition for the output. (We assume that the initial
condition is zero, the default value in the dialog box.)
4.2.2 Simulink Experiments: Filtering a Sinusoidal Signal and
Aliasing
The digital filter model in Section 4.2.1 is set up to run 50 sinusoidal signals (each at a
different frequency) simultaneously, and as it runs, it sends the results of all 50 simulations
into MATLAB in the MATLAB structure simout.
The values for the various parameters in the model are in the MATLAB workspace
and have the following values:
>> delta_t
delta_t =
1.0000e-003 %(Sample time of 1 ms or sample frequency of 1 kHz)
>> alpha
alpha =
9.0000e-001
>> omega
omega =
Columns 1 through 25
1.0000e-001 1.2355e-001 1.5264e-001 1.8859e-001 2.3300e-001
2.8786e-001 3.5565e-001 4.3940e-001 5.4287e-001 6.7070e-001
8.2864e-001 1.0238e+000 1.2649e+000 1.5627e+000 1.9307e+000
2.3853e+000 2.9471e+000 3.6410e+000 4.4984e+000 5.5577e+000
6.8665e+000 8.4834e+000 1.0481e+001 1.2949e+001 1.5999e+001
Columns 26 through 50
1.9766e+001 2.4421e+001 3.0171e+001 3.7276e+001 4.6054e+001
5.6899e+001 7.0297e+001 8.6851e+001 1.0730e+002 1.3257e+002
1.6379e+002 2.0236e+002 2.5001e+002 3.0888e+002 3.8162e+002
4.7149e+002 5.8251e+002 7.1969e+002 8.8916e+002 1.0985e+003
1.3572e+003 1.6768e+003 2.0717e+003 2.5595e+003 3.1623e+003
The values for the 50 frequencies, the sample time for the filter, and the value of alpha are
stored as part of the model through a callback set by the Model Properties dialog.
The ability to cause calculations in MATLAB to run when the model opens or when
other Simulink actions occur is a feature of Simulink that you should understand.
After you have opened the model, go to the File menu and select Model Properties.
A Model Properties window will open, allowing you to enter and/or view information about
the model. It also allows you to select actions that occur at various events during the model
execution.
There are four tabs across the top of this window, denoted Main, Callbacks,
History, and Description. Figure 4.5 shows two of the tabs in the dialog. The window
opens, showing the contents of the Main tab. The Main tab is the top level of the window. It
shows the model creation date and the date we last saved it. It also shows a version number
(every time the model is changed or updated in any way, this number changes) and whether
or not the model has been modified. The second tab in the window is the Callbacks tab. In
this section, the user can specify MATLAB commands to execute whenever the indicated
action occurs. The possible actions are as follows.
Figure 4.5. Adding callbacks to a Simulink model uses Simulink's model properties dialog, an option under the Simulink window's File menu.
Model preload function: These commands run immediately before the model opens for the first time. Note that it is here that the values of omega, the sample time delta_t, and alpha are set. The values for omega are provided by the MATLAB function logspace, which creates a set of equally spaced values of the log (base 10) of the output (in this case, omega), where the two arguments are the lowest value (here it is $10^{-1}$) and the highest value ($3 \times 10^3$ here). The value set for the sample time is 0.001 sec (corresponding to 1 kHz).
Model postload function: These commands run immediately after the model loads
the first time.
Model initialization function: These commands run when the model creates its initial
conditions before starting.
Simulation start function: These commands run prior to the actual start (i.e., imme-
diately after the start arrow is clicked).
Simulation stop function: These commands run when the simulation stops. In this
case, we have two MATLAB commands: the first calculates the maximum value of all 50 signals, and the second plots these maxima versus frequency. (Note that the maximum values are over the structure in MATLAB generated by the To Workspace block.)
Model presave function: These commands run prior to saving the model.
Model close function: These commands run prior to closing the model.
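Callbacks can also be set programmatically with set_param rather than through the dialog. The following sketch shows the idea; the model name my_filter_model is only an illustration, and the values are chosen to match the workspace listing shown earlier.
% Attach a preload callback to a model from the MATLAB command line.
load_system('my_filter_model')
set_param('my_filter_model', 'PreLoadFcn', ...
    'omega = logspace(-1, 3.5, 50); delta_t = 0.001; alpha = 0.9;')
save_system('my_filter_model')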
The History and Description tabs allow the user to save information about the number
of times the model opens and is changed (the History tab) and for the user to describe the
model.
The StopFcn callback, executed when the simulation stops, is
simmax = max(simout.signals.values);
semilogx(omega, simmax);
xlabel('Frequency "omega" in rad/sec')
ylabel('Amplitude of Output');
grid
[Two margin plots show the amplitude of the filtered output versus frequency "omega" in rad/sec: one for the original 50 frequencies and one for the higher range described below.]
The plot at the right comes from this code. It is a semilog plot that shows what the digital filter does to the amplitudes of a sinusoid for different frequencies. Notice that all of the low frequencies (tones up to about 6 radians/sec, or about 1 Hz) are unaffected by the filter, whereas the frequencies above that are attenuated to the point where a tone at about 3000 radians/sec (about 500 Hz) is reduced in amplitude by 95%. For this reason, this filter is a low pass filter. This means that low frequencies pass through the filter unchanged, and it attenuates higher frequencies. Let us see what happens if we input frequencies beyond 500 Hz. To see this, change the values in the omega array by typing at the MATLAB command line omega = logspace(2,3.8);. The second plot shown at the right is the result of rerunning the model with these 50 values of omega.
Figure 4.6. Sampling a sine wave at two different rates, illustrating the effect of aliasing. (Sampled every 0.002 sec, the signal still looks like a sinusoid; sampled only at the times 0.0025 and 0.0125 sec, the sine seems to be a constant for all times.)
Instead of continuing to reduce the amplitude of the input, the filter's output amplitude starts to climb back up until, at the frequency $1/\Delta t$ (1 kHz), there is no reduction in the amplitude of the sinusoid.
Why does the amplitude not continue to decrease? It is because this is a sampled-data
signal. A close examination of what happens when we sample an analog signal to convert
it into a sequence of numeric values (at equally spaced sample times) reveals why.
Look carefully at the effect of sampling a 100 Hz sinusoid as illustrated in Figure 4.6.
When the sample time is 0.002 seconds (the * in the figure), the values are tracking the sinusoid as it oscillates up and down in amplitude. However, when the sample times exactly match the period of the sinusoid (the black squares), the oscillatory behavior of the sinusoid is lost. For the precisely sampled values shown, the amplitude after sampling is always exactly 1. Thus, as far as the digital filter is concerned, the frequency of the input is 0 (it is not oscillating at all). This is what causes the plot of the amplitude of the filtered output to turn around starting at half of the sample frequency.
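The effect is easy to reproduce at the MATLAB command line. This sketch samples a 100 Hz sine at 0.002 sec and again once per period (at 0.0025 and 0.0125 sec); the second set of samples is constant, just as in Figure 4.6.
% Demonstrate aliasing by sampling a 100 Hz sine at two different rates.
f0 = 100;                  % signal frequency in Hz
t  = 0:1e-5:0.02;          % a dense time axis standing in for the analog signal
t1 = 0:0.002:0.02;         % sampled every 0.002 sec
t2 = 0.0025:0.01:0.02;     % sampled once per period
plot(t, sin(2*pi*f0*t), '-', t1, sin(2*pi*f0*t1), '*', ...
     t2, sin(2*pi*f0*t2), 'ks')
xlabel('Time (sec.)'), ylabel('Signal Amplitude'), grid
% The square markers are all exactly 1: sampled this way, the 100 Hz
% tone is indistinguishable from a constant (zero frequency) signal.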
We call this effect aliasing. It is exactly because of this that the digital standard
for audio CDs is to sample at 44.1 kHz (a sample time of 22.676 microseconds). With
this sample frequency, the audio frequencies up to 22.05 kHz are unaffected by the digital
conversion. Since the human ear does not really hear sounds that are above this frequency,
the aliasing effect is not perceived. There is a caveat, though: since aliasing folds the frequencies above 22.05 kHz back down into the audible band, these frequencies need to be removed (with an analog filter) before the sampling takes place. The filter that does this is an antialiasing filter. In Section 4.5.2, we will investigate how one must sample an analog signal to ensure that aliasing is not a problem, but first we explore some of Simulink's digital system tools.
4.2.3 The Simulink Digital Library
Before we leave the subject of digital filters, let us look at some other methods that Simulink has for digital filtering. You may have noticed as we built the digital models above that there were other blocks in the Discrete library for digital filters. They are
Discrete Filter,
Discrete Transfer Function,
Discrete Zero-Pole,
Weighted Moving Average,
Transfer Function First Order,
Transfer Function Lead or Lag,
Transfer Function Real Zero,
Discrete State Space.
Each of these uses a different, but related, method for simulating the digital filter. The preponderance of these models uses the z-transform of the filter to create the simulation of the filter. For example, the discrete filter model uses the z-transform in terms of powers of 1/z. We use this form of the digital filter because in some definitions of the z-transform the infinite series is in terms of 1/z, not z. To change the numerator and denominator of the filter transfer function, use the block parameters dialog box that opens when you double click on the Discrete Filter block. The filter default is $\frac{1}{1 + 0.5z^{-1}}$. The numerator is 1, and the denominator is entered using the MATLAB notation [1 0.5], which, as the help at the top of the dialog box shows, is for ascending powers of 1/z (see Figure 4.7). Notice in the figure that when you change the numerator and denominator, the icon in the Simulink model changes to show the new transfer function.
To try this block, let us simulate the Fibonacci sequence with it. Set up a new model and drag the Discrete Filter block into it. Open the dialog by double clicking and enter the vector [1 -1 -1] as the Denominator. The vector sets the powers of $z^{-1}$ from the lowest to the highest (the MATLAB convention for polynomials). Notice that the icon changes for the Discrete Filter to display the denominator polynomial. Leave the numerator at 1.
Transfer functions do not have initial conditions, so we need to find a way to specify
that the Fibonacci sequence start with initial values of 1 and 2. To do this we use a block
that causes a signal to have a value at the start of the simulation. The block we need is
the IC block, which is in the library called Signal Attributes. Grab an IC block and drag
it into the model. The IC block has an input, but we do not need to use it.

Figure 4.7. Changing the numerator and denominator in the Digital Filter block changes the filter icon.

You can leave the input unconnected, but every time you run the model, you will receive the annoying
message
Warning: Input port 1 of untitled/IC is not connected.
To eliminate this message, there is a connection in the Sources library called Ground.
All it does is provide a dummy connection for the block and thereby eliminate the message.
The last thing is to connect a Scope to the output. The IC block has a default value of 1,
which is acceptable because starting the Fibonacci sequence with initial values of 0 and 1
will still generate the sequence.
Figure 4.8 shows the model (called Fibonacci2 in the NCS library) and the simulation
results from the Scope block.
Comparing this with the result generated in the earlier version of the Fibonacci model
shows that the results are the same (except for the initial conditions).
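The same recursion can also be checked at the command line with the MATLAB filter function, which uses the same ascending-powers-of-1/z convention as the block dialog (a quick sketch, not one of the NCS models):
% The Fibonacci recursion as a discrete filter: y(n) = y(n-1) + y(n-2) + u(n).
num = 1;
den = [1 -1 -1];          % ascending powers of 1/z
u   = [1 zeros(1, 11)];   % a unit pulse plays the role of the IC block
f   = filter(num, den, u) % returns 1 1 2 3 5 8 13 21 34 55 89 144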
The first seven versions of the discrete filter in the list above are all variations on
this block. To understand the subtleties of the differences, spend some time modeling the
Fibonacci sequence using each of these.
Figure 4.8. Using the Digital Filter block to simulate the Fibonacci sequence.
4.3 Matrix Algebra and Discrete Systems
We looked at state-space models for continuous time linear systems in Chapter 2. There is
an equivalent model for discrete systems.
Let us return to the Fibonacci sequence and use Simulink in a different way to show
some attributes of the sequence. Before we do though, let us look at the state-space version
of the Fibonacci sequence. When we talked about the digital filters in Simulink, we had
a list of eight different ways, the last of which was a state-space model. We did not go
into the details at the time because we had not developed the state-space model. We saw
how to convert a continuous time state-space model into an equivalent discrete model in
Section 2.1.1. We can convert the Fibonacci sequence difference equation into a discrete
state-space model directly. The steps are as follows.
Use the values on the right-hand side of the Fibonacci sequence ($f_k$ and $f_{k+1}$) as the components of a vector as follows:
$$x_k = \begin{bmatrix} f_k \\ f_{k+1} \end{bmatrix}.$$
From the definition of the sequence $f_{n+2} = f_{n+1} + f_n$, with $f_1 = 1$ and $f_2 = 2$, we get the state-space vector-matrix form from the fact that the first component of $x_{k+1}$ is the second component of $x_k$ and the second component of $x_{k+1}$ is the left-hand side of the difference equation. Thus,
$$x_{k+1} = \begin{bmatrix} f_{k+1} \\ f_{k+2} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix} x_k, \qquad x_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad f_k = \begin{bmatrix} 1 & 0 \end{bmatrix} x_k.$$
Figure 4.9. State-space models for discrete time simulations in Simulink. (a) Simulating the Fibonacci sequence with the Discrete library State-Space block. (b) Simulating the Fibonacci sequence using Simulink's automatic vectorization.
Notice that the initial conditions are not the values we used previously, but since we start the iteration in the state-space model at k = 0, we have made the initial value of $f_0 = 1$, which is consistent with the initial values we used previously (since $f_2 = f_1 + f_0 = 2$).
We use two methods to create this model. The first model for this state-space de-
scription uses the Discrete State Space block in the Simulink Discrete library. The model
is shown in Figure 4.9(a). (It is called Fib_State_Space1 in the NCS library.)
When this model runs, the Display block shows the values for the sequence. The
model has been set up to do 11 iterations. To see other values, highlight the number 11 in
the window at the right of the start arrow in the model window, and change its value to any
number (but be careful: the sequence grows without bound).
To build a Simulink model for this equation that uses Simulink's vector-matrix capa-
bilities, we use the blocks for addition, multiplication, and gains from the Math Operations
library (as we did when we created the state-space model for continuous time systems).
Figure 4.9(b) shows this model (it is Fib_State_Space2 in the NCS library).
Note that the Constant block from the Sources library provides, as an input, the matrix
on the right side of the state-space equation above. The Multiply and the Gain blocks from
the Math Operations library are used to do the matrix multiply and the calculation of the
output. The dialogs from the two Math Operations blocks are shown in the following figures.
The dialog on the left is for the Matrix Constant block, and as can be seen, the value for it has been set to the MATLAB matrix [0 1; 1 1]. The dialog on the right is for the Multiply block. It specifies that there are two inputs through the ** (two products), which may be increased to as many values as desired (including the use of the / to denote matrix inverse). The multiplication type comes from the pull-down menu next to the Multiplication: annotation. The menu has only two options: Matrix(*) and Element-wise (.*), where the notation in parentheses indicates the operation as if it were MATLAB notation.
The initial conditions for the iteration are set using the dialog that opens when you double click the Unit Delay block. Double clicking this block opens the dialog that allows the desired initial values to be set. As above, the initial values are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$.
The iteration we are doing creates the Fibonacci sequence in a vector form that will allow us to show some interesting facts about the sequence. So let us do some exploring.
The matrix $\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$ can be thought of as $\begin{bmatrix} f_0 & f_1 \\ f_1 & f_2 \end{bmatrix}$, since we assume that the initial value of $f_0$ is 0 and the values of $f_1$ and $f_2$ are both 1. Therefore, after the first iteration of
the difference equation we have
$$x_2 = \begin{bmatrix} f_2 \\ f_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ f_1 & f_2 \end{bmatrix} x_1 = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} f_0 & f_1 \\ f_1 & f_2 \end{bmatrix} x_0 = \begin{bmatrix} f_1 & f_2 \\ f_0 + f_1 & f_1 + f_2 \end{bmatrix} x_0 = \begin{bmatrix} f_1 & f_2 \\ f_2 & f_3 \end{bmatrix} x_0.$$
Notice that after this iteration the matrix multiplying $x_0$ is in exactly the same form as when we started, except that the subscripts are all one greater than the initial matrix. If the iteration continues, the matrix form is the same (i.e., the value of $x_k$ after k iterations is $\begin{bmatrix} f_{k-1} & f_k \\ f_k & f_{k+1} \end{bmatrix} x_0$; in Exercise 4.2 we ask you to use induction to prove this).
If we take the determinant of this matrix, we see that it is
$$\det\begin{bmatrix} f_{k-1} & f_k \\ f_k & f_{k+1} \end{bmatrix} = f_{k+1}f_{k-1} - f_k^2.$$
Let us use our model to show that this determinant is
$$f_{k+1}f_{k-1} - f_k^2 = \pm 1.$$
The Simulink library does not contain an explicit block to compute the determinant of a matrix, so we will use some of the User Defined Functions blocks. There are five different ways that the user may define a function in Simulink. In this library are blocks that
create embedded code directly from MATLAB instructions,
call any MATLAB function (this block does not compile the MATLAB code),
create a C-code function.
There are also variations on these blocks where a function uses a rather arcane but
useful form and a version of the C-code function that uses MATLAB syntax for those who
refuse to learn C. We will use only two of these blocks: the MATLAB function and the
Embedded MATLAB blocks.
The model that was created is shown below. (It is called Fib_determinant in the
NCS library.) In this model, the MATLAB Function block uses the single function det
directly from MATLAB to compute the determinant (which is set using the dialog that opens when you double click on the block), and the Embedded MATLAB Function block has the following simple code to compute the determinant of the 2 × 2 matrix input u. Since
the result of using the MATLAB function and the Embedded MATLAB function are the
same, you might legitimately wonder why there are two blocks. The reason has to do with
calling MATLAB from Simulink. Stand-alone code does not have access to MATLAB,
so the MATLAB Function block will not work. The Embedded MATLAB block, on the
other hand, creates exportable C code, so when the code compiles it works as a stand-alone
application.
function d = det2(u)
% An embeddable subset of the MATLAB language is supported.
% This function computes the determinant of the 2x2 matrix u.
d=det(u);
The model is in Figure 4.10. The first time this model runs, the embedded code compiles into a dll file that executes each time the model runs.
When the model runs, 13 iterations result, and the determinant plot appears in the Scope block (see Figure 4.11). We will use this model to illustrate the computational aspects of finite precision arithmetic. If the number of iterations is set to 39, the determinant
from the MATLAB function shows a value of 2, and the determinant from the embedded
MATLAB function shows a value of 0. If we continue past 39 iterations, say 70, we still
get zero from the embedded function, but we get 2.18e+013 for the determinant from the
MATLAB function. What is happening here?
The problem is that we are at the limit of the precision of the computations. The
values for the Fibonacci sequence are at 8e+014 so that the products of the values in the
equation $f_{k+1}f_{k-1} - f_k^2$ are on the order of $10^{29}$ and the difference is therefore less than the least significant bit in the calculation. The built-in MATLAB function starts to fail as soon as the terms in this function get to about $10^{18}$, whereas the embedded MATLAB block protects the calculation from underflow by making the value 0. This still works only for so
long; eventually even this strategy fails. (To see this, try 80 iterations; at this number of
iterations none of the determinant values work.)
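The same precision experiment can be run directly in MATLAB without Simulink. This sketch builds the sequence in double precision and plots the computed value of the determinant identity.
% The identity f(k+1)*f(k-1) - f(k)^2 = +/-1 in double precision.
n = 80;
f = zeros(1, n);
f(1) = 0;                               % f_0
f(2) = 1;                               % f_1
for k = 3:n
    f(k) = f(k-1) + f(k-2);
end
d = f(3:n).*f(1:n-2) - f(2:n-1).^2;     % should alternate between -1 and +1
plot(d), grid
% Once the products grow past about 2^53 (roughly 40 iterations), the
% difference can no longer be resolved exactly and d wanders away from +/-1.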
Figure 4.10. A numerical experiment in Simulink: Does $f_{k+1}f_{k-1} - f_k^2 = \pm 1$?

Figure 4.11. 13 iterations of the Fibonacci sequence show that $f_{k+1}f_{k-1} - f_k^2 = \pm 1$ (so far).
This exercise is an example of how you could use Simulink to check that some
mathematical result is true. If you wanted to show that $f_{k+1}f_{k-1} - f_k^2 = \pm 1$, and you did not have any idea if it were true or not, you could build the model and try it. It would be immediately obvious that the values are $\pm 1$ for the number of iterations you used.
You then could try to prove the result. The ability to use the pictorial representation
of the premise quickly to create numerical results can almost immediately tell you if the
premise is true.
Now that we know $f_{k+1}f_{k-1} - f_k^2 = \pm 1$ is true, Exercise 4.3 asks that you use induction to prove it. (Use the fact that $\det\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix} = -1$.)
4.4 The Bode Plot for Discrete Time Systems
In Section 4.2.2, we created a frequency response plot, but it used a numerical experiment
on the Simulink model, which is ponderous at best. There is a simpler way, based on the
z-transforms that we explore now.
In order to compute the Bode plot for a discrete system we need to understand the
mapping from the continuous Laplace variable to the discrete z-transform variable. To see
this we need to investigate the Laplace transform of a sampled signal f (t ) (i.e., a signal that
exists only at the sample times $k\Delta t$) when we shift it in time by $\Delta t$. If we assume that the Laplace transform of f(t) is F(s), then
$$\int_0^{\infty} e^{-st} f(t + \Delta t)\,dt = \int_{\Delta t}^{\infty} e^{-s(\tau - \Delta t)} f(\tau)\,d\tau = e^{s\Delta t}F(s) - e^{s\Delta t}f(0).$$
The last step used the fact that the Laplace transform is from t = 0 to infinity, and in the first line of the equation the integral starts at $\Delta t$. Since the function f(t) is discrete, f(0) is the only value not in the integral when it starts at $\Delta t$.
Comparing this to the z-transform derived in Section 4.1.1, we see that $z^{-1} = e^{s\Delta t}$ or $z = e^{-s\Delta t}$.
With this information, we would like to find the discrete (z-transform) transfer function for the discrete time state-space model. We have seen two ways for developing the discrete state-space model. When we developed the solution for the continuous time state-space model in Section 2.1.1, we showed that the result of making the system discrete in time was the model
$$x_{k+1} = \Phi(\Delta t)x_k + \Gamma(\Delta t)u_k, \qquad y_k = Cx_k + Du_k.$$
In developing the discrete state-space model of the Fibonacci sequence above, we went
directly from the difference equation to the discrete state-space model. (In this case the
matrices were not determined from the continuous system, and they are not functions of
$\Delta t$.) In either case, the form of the equations is the same. Taking the z-transform of this gives
$$zX(z) - zx_0 = \Phi(\Delta t)X(z) + \Gamma(\Delta t)U(z).$$
Thus the discrete transfer function of the system (H(z)) is the z-transform with the initial conditions set to zero, so we have
$$H(z) = \frac{\mathcal{Z}\{y_k\}}{\mathcal{Z}\{u_k\}} = C(zI - \Phi)^{-1}\Gamma + D.$$
The Bode plot of the discrete system from the state-space form of the model comes from setting $z = e^{i\omega\Delta t}$ in the above derivation. That is, we need to compute
$$H(e^{i\omega\Delta t}) = C(e^{i\omega\Delta t}I - \Phi)^{-1}\Gamma + D,$$
which has exactly the same form as the continuous transfer function except $i\omega$ is replaced by $e^{i\omega\Delta t}$. (Remember that $\Delta t$ is the sample time of the discrete process and is therefore
a constant.) The manipulations of the state-space model for both continuous and discrete
systems in MATLAB are the same, so the connection from Simulink to MATLAB for the
Bode plot calculation is identical. We can go back now to the digital lter example in
Section 4.2.2 above and use the Control System Toolbox to calculate its Bode plot.
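If you only need the amplitude plot, the formula can also be evaluated directly at the command line. The sketch below uses the first order filter of Section 4.2.2, written in the standard z-transform convention as H(z) = (1 - alpha)/(z - alpha), and substitutes z = e^{i*omega*delta_t}; no toolbox is required.
% Frequency response of y(k+1) = alpha*y(k) + (1-alpha)*f(k).
alpha = 0.9;
delta_t = 1e-3;
omega = logspace(-1, log10(pi/delta_t), 200);  % up to the half sample frequency
z = exp(1i*omega*delta_t);
H = (1 - alpha)./(z - alpha);
semilogx(omega, abs(H)), grid
xlabel('Frequency "omega" in rad/sec'), ylabel('Amplitude')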
Open the digital filter model by typing Digital_Filter1 at the MATLAB command
line; the model that opens is the same as the model we created in Section 4.2.2, but the
sinusoid input has been deleted since this is not needed to create the Bode plot. As we did
above, under the Tools menu select Control Design and the submenu Linear Analysis.
The Control and Estimation Tools Manager will open. In the model, right click on the line
coming from the Input block and select the Input Point sub menu under the Linearization
Points menu item. Similarly, select Output Point (under the same menu) for the line
going to the Output block. Selecting these input and output points causes a small I/O icon
to appear on the input and output lines in the model. In the Control and Estimation Tools
Manager GUI, select the Bode response plot for the plot linearization results and then click
the Linearize Model button. The LTI Viewer starts, and right click to select the Bode plot
under Plot Types. The plot of the amplitude and phase of the discrete filter as created by the Viewer (Figure 4.12) stops at the frequency 3.1416 radians/sec because this is the frequency at which the Bode plot for the discrete system starts to turn around and repeat. (This is called the half sample frequency, and it is equal to $\pi/\Delta t$.)
Compare this plot with the amplitude plot created by the Simulink model Digital_Filter
in Section 4.2.2. You will see that it is the same. The important difference is that the
computation of the Bode plot using this method is far more accurate (and faster) than using
a large number of sinusoidal inputs as we did there.
4.5 Digital Filter Design: Sampling Analog Signals, the
Sampling Theorem, and Filters
In 1949, C. E. Shannon published a paper in the Proceedings of the Institute of Radio
Engineers (the IRE was an organization that became part of the Institute of Electrical and
Electronics Engineers, or IEEE) called "Communication in the Presence of Noise." This landmark paper introduced a wide range of technical ideas that form the backbone of modern communications. The most interesting of these concepts is the sampling theorem. It gives conditions under which an analog signal can be 100% accurately reconstructed from its discrete samples. Because it is such an important idea, and because it is so fundamental, it is very instructive to work through the theorem to understand why, and how, it works.

Figure 4.12. The Bode plot for the discrete filter model developed using the Control System Toolbox interface with Simulink.
4.5.1 Sampling and Reconstructing Analog Signals
When we introduced discrete signals above, they were simply a sequence of numbers. In
addition, their z-transform was the sum of the sequence values multiplied by powers of z.
In Section 4.4, we saw that the frequency response results when $e^{i\omega\Delta t}$ is substituted for z (thereby giving $H(e^{i\omega\Delta t})$). When the digital signal is the result of sampling an analog
signal, we need a way of representing this fact. The method must maintain the connection
with the analog process.
Equating a sampled analog signal to a sequence of its values at the sample times loses
the analog nature of the process (and besides is really only one of the mathematical ways
of representing the process). An alternative is to represent the sampled signal as an analog
signal that is a sequence of impulses (analog functions) multiplying the sample values.
Doing this gives an alternate representation of the sampled sequence $\{s_k\}$ as the analog signal $s^*(t)$:
$$s^*(t) = \sum_{k=0}^{\infty} s_k\,\delta(t - k\Delta t).$$
With this definition, we can take the Laplace transform of $s^*(t)$ as
$$S^*(s) = \int_0^{\infty} e^{-st} s^*(t)\,dt = \int_0^{\infty} e^{-st}\left(\sum_{k=0}^{\infty} s_k\,\delta(t - k\Delta t)\right)dt.$$
The integral and the sum commute in the last term above, so
$$S^*(s) = \sum_{k=0}^{\infty}\int_0^{\infty} e^{-st} s_k\,\delta(t - k\Delta t)\,dt = \sum_{k=0}^{\infty} s_k\, e^{-sk\Delta t}.$$
Now to understand the sampling theorem, assume that we have been sampling the signal for a very long time so that the signal is in steady state. In that case, the Fourier transform describes the frequency content of the signal. The difference between the Laplace and Fourier transform is in the assumption that for the Fourier transform the time signal began at $-\infty$. The steps that created the sampled Laplace transform above are the same, except that, since the time signal exists for all time, the sum and integral are double sided:
$$S^*(\omega) = \sum_{k=-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-i\omega t} s_k\,\delta(t - k\Delta t)\,dt = \sum_{k=-\infty}^{\infty} s_k\, e^{-i\omega k\Delta t}.$$
Now, the sampling theorem is as follows:
If a continuous time signal s(t) has the property that its Fourier transform is zero for all frequencies above $\omega_m$ or below $-\omega_m$ (the Fourier transform of s(t), $S(\omega) = 0$ for $|\omega| \ge \omega_m$), then we can perfectly reconstruct the signal from its sample values at the sample times $\Delta t = \frac{\pi}{\omega_m}$ (or if the signal is sampled faster) using the infinite sum
$$s(t) = \sum_{k=-\infty}^{\infty} s_k\,\frac{\sin(\omega_m t - k\pi)}{(\omega_m t - k\pi)}.$$
This theorem is critical to all applications that use digital processing since it assures
that after the signal processing is complete the signal may be perfectly reconstructed as long
as the sampling was originally done at a rate at least twice as fast as the highest frequency
in the signal. As an aside, Claude Shannon proved this result, and it was published in the
Proceedings of the I.R.E. in 1949 [39].
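The reconstruction formula is easy to try numerically. The sketch below samples a band limited test signal (two sinusoids below 50 Hz, an arbitrary choice) and rebuilds it from its samples with the sinc sum; because the sum must be truncated to a finite number of terms, the agreement is very good but not exact.
% Reconstruct a band limited signal from its samples using the sinc sum.
wm = 2*pi*50;                     % assume no content above 50 Hz
dt = pi/wm;                       % sample at the rate required by the theorem
k = -200:200;                     % a finite number of terms in the sum
sk = sin(2*pi*10*k*dt) + 0.5*cos(2*pi*35*k*dt);   % the sample values
t = linspace(0, 0.2, 1000);
s = zeros(size(t));
for m = 1:length(k)
    x = wm*t - k(m)*pi;
    term = sin(x)./x;
    term(x == 0) = 1;             % handle the removable singularity
    s = s + sk(m)*term;
end
strue = sin(2*pi*10*t) + 0.5*cos(2*pi*35*t);
plot(t, strue, t, s, '--'), grid
max(abs(s - strue))               % small, and it shrinks as more terms are kept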
The proof of this assertion is straightforward. Refer to Figure 4.13 as we proceed
through the steps in the proof.
The first step is to take the function S(ω) and expand it into an infinite series (using the Fourier series to do so). The result is
$$S_{\text{expanded}}(\omega) = \sum_{k=-\infty}^{\infty} S_k\, e^{-\frac{ik\pi\omega}{\omega_m}}.$$
The value of $S_k$ comes from the Fourier expansion
$$S_k = \frac{1}{2\omega_m}\int_{-\omega_m}^{\omega_m} S(\omega)\, e^{\frac{ik\pi\omega}{\omega_m}}\, d\omega.$$
Figure 4.13. Illustrating the steps in the proof of the sampling theorem. (a) Signal and its sampled values. (b) Fourier transform of the signal. (c) Result of replicating S(ω) an infinite number of times.
Since the inverse Fourier transform of S(ω) is the signal $s(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)e^{i\omega t}\,d\omega$, we get that the value of $S_k$ is $\frac{\pi}{\omega_m}s_k$. Therefore, $S^*(\omega)$ is the same as $S_{\text{expanded}}(\omega)$ as shown in Figure 4.13. That is,
$$S^*(\omega) = \sum_{k=-\infty}^{\infty} \frac{\pi}{\omega_m}\, s_k\, e^{-\frac{ik\pi\omega}{\omega_m}}.$$
The last step in the proof is now simple. In order to recover the signal we simply need to
multiply the Fourier transform of the sampled signal by the function $p(\omega)$ defined by
$$p(\omega) = \begin{cases} 1, & -\omega_m < \omega < \omega_m, \\ 0, & \text{elsewhere.} \end{cases}$$
Figure 4.13 shows the rectangular function drawn on top of the transform of the sampled
signal. Clearly, the product is the transform of the original analog signal.
The inverse Fourier transform of $p(\omega)$ is the sinc function given by
$$\frac{\sin(\omega_m t - k\pi)}{(\omega_m t - k\pi)}.$$
The proof of the result follows immediately because the inverse Fourier transform of the product of $S^*(\omega)$ and $p(\omega)$ is the convolution of this function and the inverse transform of $S^*(\omega)$, which is just the sum of impulses defining $s^*(t)$ that we started with.
Figure 4.14. Simulation illustrating the sampling theorem. The input signal is 15 sinusoidal signals with frequencies less than 500 Hz.
The function $p(\omega)$ eliminates all frequencies in the sampled signal above the frequency $\omega_m$, and we have already seen that such a filter is a low pass filter. This is the main reason we need to create good low pass analog filters.
For various technical reasons, it is impossible to build a filter that has a frequency response that is exactly $p(\omega)$. This means that we are always searching for a good approximation. The next section shows that there are numerous ways to come up with an approximation and introduces the design of analog filters. These filters are prototypes of digital filters, so this section is important for both the implementation of the sampling theorem and for digital filter design, but first let us do some simple simulations to illustrate these concepts.
Because of the sharp discontinuity in the filter function, the ideal low pass filter represented by the function $p(\omega)$ above is not the result of using a finite dimensional system (i.e., a system represented by a finite order differential equation, having a transfer function whose magnitude is $|H(i\omega)|$). Therefore, it is necessary to figure out how best to create a good approximation. As you should guess, the approximation must not have the sharp corner, so the transition from the region where the gain of the filter is 1 to the region where the gain is 0 must be smooth.
In Figure 4.14, we show a Simulink model that uses the sine block (from the Sources
library) to create 15 continuous time sinusoidal signals. The signals are then summed
together and sampled at 0.1 msec, using a sample and hold operation (the zero order hold
block in the Discrete library). Once again, you should try to create this model from scratch
using the Simulink library rather than loading it from the NCS library using the MATLAB
command Sampling_Theorem.
The approximation we use for the low pass filter is the simple first order digital filter from Section 4.2.2. The frequency for the filter has been set at 5 kHz.
The sinusoidal frequencies in the simulation come from the MATLAB vector freqs,
whose values are: 65, 72.8, 74.7, 82, 89.3, 91, 99.5, 103.2, 125, 180.2, 202.1, 223.3, 230.3,
310.3, and 405.2 Hz. Because the sample frequency of 10 kHz is 20 times the highest
frequency in the signal, the digital to analog (D/A) reconstruction of the signal using the
simple first order filter is not too bad. (The error is about 8%, as can be seen from the D/A
Error Scope.)
Table 4.1. Using low pass filters. Simulating various filters with various sample times.

Sample Time   Filter Type    Filter Freq.   Standard Dev. of Error Observed
0.001         First Order    1000 Hz        2.093
0.001         Second Order   1000 Hz        1.902
0.0001        First Order    1000 Hz        0.872
0.0001        Second Order   1000 Hz        0.632
This simulation can investigate changes in the sample frequency relative to the frequencies in the signal. As a first numerical experiment, try changing the sampling frequency to 500 Hz. (Make the sample time in the A/D Converter block 0.002.) Make note of the magnitude of the error.
Next, change the filter to 1 kHz by changing the parameter values in the Transfer Function block to 2*pi*1000 in both the numerator and denominator. Note the error for this simulation. Remember that this filter is not a good approximation to the ideal low pass filter. (If you go back to Section 4.2.2 and look at the frequency response plot for this filter, you can see that for this filter the amplitude is reduced 50% at 1000 Hz and is reduced only 90% at 10 kHz.)
A better filter is one that has a more rapid reduction in amplitude. Higher order filters will do this. For example, double click the filter block and change the filter parameters to the values in the figure above. This makes the transfer function for the filter
$$H(s) = \frac{(2\pi 1000)^2}{s^2 + 1.414(2\pi 1000)s + (2\pi 1000)^2}.$$
As we will see next, this is an example of a Butterworth filter.
Run the simulation using 1 kHz sampling and record this error. You should see very little difference between the two filters when the sample time and the filter functions are set to what the sampling theorem says are the appropriate values. Again, this is because we are not filtering the signal with the ideal filter required by the theorem. To see that the filters work well when the sampling is done at a higher rate than the minimum value of the sampling theorem, change the sample time back to 0.1 msec, as we started with, and rerun the simulation. Table 4.1 summarizes our results from these simulations.
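A couple of command line checks make the comparison concrete; this is just a direct evaluation of the two magnitude functions at 1 kHz and 10 kHz, not code from the NCS library.
% Compare first and second order low pass filters with a 1 kHz corner.
wc = 2*pi*1000;                 % corner frequency in rad/sec
w = 2*pi*[1000 10000];          % evaluate at 1 kHz and 10 kHz
H1 = wc./(1i*w + wc);           % first order filter
H2 = wc^2./((1i*w).^2 + 1.414*wc*(1i*w) + wc^2);  % second order filter
abs([H1; H2])                   % rows are about [0.71 0.10] and [0.71 0.01]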
Figure 4.15. Specification of a unity gain low pass filter requires four pieces of information.
4.5.2 Analog Prototypes of Digital Filters: The Butterworth Filter
We did some simple analog low pass filter design in the previous section to illustrate the conversion of a digital to an analog signal. We also need to have analog filters as follows.
Since the Fourier transform of a signal that we want to sample must have no frequencies above half the sample frequency, a low pass antialiasing filter used prior to sampling ensures that this is true.
We can design a digital filter using the analog filter as the starting point and then converting the result using some type of transformation.
So, with these reasons as motivation, let us explore some analog filters.
Remember that the transfer function of a filter is $H(s)|_{s=i\omega} = |H(i\omega)|\,e^{i\angle H(i\omega)}$. Three bands on the amplitude plot specify this filter, as illustrated in Figure 4.15. The first of these bands is the pass band, which represents the region where the signal is not attenuated (the region whose frequencies are below $\omega_m$). The second region is the transition band, where the amplitude gradually reduces to an acceptable minimum. (Because the frequency response of a finite dimensional system is the ratio of polynomials, it is impossible for the amplitude to be exactly zero except at an infinite frequency.) The last region is the stop band, where the filter is below the acceptable value. In the figure, there are three additional parts to the specification: the acceptable gain change over the pass band, the acceptable amplitude of the signal in the stop band, and the frequency at the transition band limit.
These parameters are $\delta_1$, $\delta_2$, and $\omega_{m\,\text{acceptable}}$, respectively, and they specify bounds on the transfer function as follows:
In the pass band, $1 - \delta_1 \le |H(i\omega)| \le 1$ for $|\omega| \le \omega_m$.
In the stop band, $|H(i\omega)| \le \delta_2$ for $|\omega| \ge \omega_{m\,\text{acceptable}}$.
The analog low pass filters that are most frequently used are the Butterworth, Chebyshev, and elliptic filters. The first of these, the Butterworth filter, comes from making the filter transfer function as smooth as possible in each of the regions. The filter has as many of the derivatives of its transfer function equal to 0 at frequencies 0 and infinity.
Since the magnitude of the transfer function is $|H(i\omega)| = \sqrt{H(i\omega)H(-i\omega)}$, it is usual to use the square of the magnitude in specifying the filter transfer function (eliminating the square root). Therefore, the Butterworth filter is
$$|H_{Butter}|^2 = \frac{1}{1 + \left(\frac{\omega}{\omega_m}\right)^{2n}}.$$
This function has the property that its derivatives (up to the (2n - 1)st) are zero at $\omega = 0$ and at $\omega = \infty$. Exercise 4.3 asks that you show that this assertion is true. The Laplace transform for the filter can be determined by using the fact that
$$|H(i\omega)|^2 = H(s)H(-s)|_{s=i\omega}.$$
Therefore, the filter transfer function is the result of factoring
$$H(s)H(-s) = \frac{1}{1 + \left(\frac{s}{i\omega_m}\right)^{2n}}.$$
The poles of the transfer function are the roots of the denominator, given by the equation $\left(\frac{s}{i\omega_m}\right)^{2n} = -1$, or equivalently, the 2n roots of this polynomial are the 2n complex roots of $-1$ given by
$$s_k = (-1)^{\frac{1}{2n}}(i\omega_m).$$
These poles are equally spaced around a circle with radius $\omega_m$. The values with negative real parts define H(s), and those with positive real parts become H(-s), so the transfer function for the desired Butterworth filter is stable. It is easy to create this filter in MATLAB. We have included in the NCS library an M-file called butterworthncs that uses the above to determine the Butterworth filter poles and gain for any filter order. The code performs the
calculations shown below:
function [denpoles, gain] = butterworthncs(n, freq)
% The Butterworth Filter transfer function.
% Use the zero-pole-gain block in Simulink with the kth pole given by the
% formula:
%    poles with real part < 0 among p_k = i*(-1)^(1/(2*n)) * 2*pi*freq
% Where:
%    freq = the Butterworth filter design frequency in Hz.
%    n    = order of the desired filter
%    i    = sqrt(-1)
denroots = (i*roots([1 zeros(1,2*n-1) 1]))*2*pi*freq; % BW poles = roots
denpoles = denroots(find(real(denroots)<=0));         % of -1 in left plane
% Zero out the imaginary part of the real pole (there is only one real
% pole, and then only when n is odd. Because of the precision
% in computing roots, the imaginary part will not be zero):
index = find(abs(imag(denpoles))<1e-6);               % imag. part is 0
denpoles(index) = real(denpoles(index));              % when < eps.
% To ensure the steady state gain is 1, the Butterworth filter gain
% must be the radius of the Butterworth circle raised to the nth power:
gain = (2*pi*freq)^n;
To exercise this code, let us design an eighth order Butterworth filter for the problem we investigated above. The poles and gain result from typing the following command in MATLAB:
[BWpoles, BWgain] = butterworthncs(8, 10000)
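If the Signal Processing Toolbox is available, the result can be cross checked against MATLAB's built-in analog Butterworth design (butterworthncs itself needs no toolbox); this is only a sketch of the comparison.
% Cross check butterworthncs against the toolbox function butter.
[BWpoles, BWgain] = butterworthncs(8, 10000);
[z, p, k] = butter(8, 2*pi*10000, 's');   % analog ('s') Butterworth design
[sort(BWpoles(:)) sort(p(:))]             % poles agree up to ordering and roundoff
[BWgain k]                                % and so do the gains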
Now let us filter the same signal we filtered above, but this time we build a Simulink model using these poles and gain. A new model similar to the Sampling_Theorem model above is in Figure 4.16. (This model is Butterworth in the NCS library.)
The only difference in this model is that instead of the transfer function block, the filter comes from the zero-pole-gain block in the Simulink continuous library. The dialog values for this filter are set using the output of the M-file butterworthncs. (The output of this M-file is the gain, BWgain, and the poles, BWpoles.) We designed the filter for a sample frequency of 10000 Hz. Since the maximum frequency contained in the signal is only about 500 Hz, this filter should do a good job of reconstructing the sampled signal. It does, because the root mean square (rms) error in the reconstruction is only 0.0761 (significantly smaller than the rms value for the simple second order filter we used above). Notice that the model above contains a delay block (called transport delay in the model) that delays the input signal by 1.1 msec to account for the Butterworth filter's phase shift that delays
Figure 4.16. Butterworth filter for D/A conversion using the results of the M-file butterworthncs. (The model uses data created by a callback to MATLAB when the model opens: the source is a sum of sine waves at the frequencies freqs = [65 72.8 74.7 82 89.3 91 99.5 103.2 125 180.2 202.1 223.3 230.3 310.3 405.2] Hz, each with unit amplitude and a small phase offset, generated with a sample time of 10^-5 sec.)
By delaying the input before the error is computed, this filter lag is accommodated. (Remember that in signal processing, time lags in the reconstruction of the data are acceptable because there is usually a large spatial difference between the source and the site of the reconstruction. Think of transmission via the internet or via a radio link.)

We do not have to go through these machinations every time we want to design a filter. Simulink easily allows the adding of new tools. These tools are blocksets, and the first of them that we will look into is the Signal Processing Blockset, which contains built-in blocks that allow any analog (or digital) filters to be created. In the next section, we experiment with analog and digital filter blocks using the Signal Processing Blockset to design Butterworth and other filters.
4.6 The Signal Processing Blockset

Analog filters, digital filters, and many other signal-processing components are available in the Simulink add-on tool called the Signal Processing Blockset. This tool has many unique features that allow analog and digital filters to be designed, modeled, and then coded. (Signals can be analog, digital, or mixed analog-digital; digital signals can have multiple sample times.) Among the features of the tool is the ability to capture a segment of a time series (in a buffer) for subsequent batch processing. The tool also permits processing temporal data using frames, where the computations wait until a frame of data is collected, and then the processor operates on the entire frame in a parallel operation. The tool allows digital signal processing models and components using computations that have limited precision and only fixed-point calculations. This last feature couples with a coding method that allows the generation of C-code and allows HDL code output for special purpose signal processing applications on a chip. (The code comes directly from the Simulink model.) We will not describe how we do this in this chapter, but we will touch on some of the features in Chapter 9.
4.6.1 Fundamentals of the Signal Processing Blockset: Analog Filters

To understand the capabilities and become familiar with the Signal Processing Blockset, open the library. The figure at the right, a snapshot of the Library browser, shows nine different categories of model elements. They are Estimation, Filtering, Math Functions, Quantizers, Signal Management, Signal Operations, Sinks, Sources, Statistics, and Transforms. Explaining the details of many of these blocks would take us beyond the scope of this book, so we will leave out the Estimation library and some of the blocks in the Filtering and Math Functions libraries.

We start our discussion by navigating the Filtering library. Let us use the model we created above, but this time we add filtering blocks from the Signal Processing library.

Open the model butterworth_sp (shown in Figure 4.17) that now includes a block from the Filtering library that automatically designs an analog filter of any type. The list of possible filters goes way beyond the Butterworth that we have been exploring so far. When you open the model, it will contain the Butterworth filter we designed using the butterworthncs M-file along with the Filter block from the Signal Processing Blockset. This block contains the design specifications for the same Butterworth filter.
This model illustrates another powerful feature of Simulink. In the process of making a change to the model (perhaps a change that does nothing but simplify it, as we are doing here), we need to be worried that the change might introduce an error. The simple expedient of comparing the two calculations using the summation block (to compute their differences) creates an immediate and unequivocal test for the accuracy of the calculations. The difference, displayed on a Scope, should be about the numeric precision. The result, displayed in a Scope block, contains both the Signal Processing Blockset results and the difference between the design using the NCS library and the Signal Processing Blockset design of the Butterworth filter. The Scope we call Compare SP block with NCS block gives the plot shown in Figure 4.17(b).
Figure 4.17. Using the Signal Processing Blockset to design an analog filter. (a) The Simulink model with the Butterworth filter from the Signal Processing Blockset (the butter Analog Filter Design block) and the filter designed using butterworthncs; a summation block forms the difference between the NCS and SP library Butterworth filters. (b) Reconstruction of an analog signal using a filter designed with the Signal Processing Blockset: the SP Blockset result and the difference between the two filters (on the order of 10⁻¹⁴).
As can be seen in this figure, the difference between the two implementations of the filter is mostly less than 2 × 10⁻¹⁴, which is well within the numerical tolerances of the calculations.

We can now experiment with the entire range of filters that the Signal Processing Blockset offers. Note that as the filter changes, the icon for the filter shows a plot of the frequency response of the transfer function, along with the name of the filter type. This provides a direct visual cue to the type of filter so that when we review it in the future, we know precisely the original intent (and if for any reason the picture and/or the type do not match the original intention or the specifications, it is readily apparent).

Five different filters are available from the analog filter design block. Try each of these filters in turn and record the error in the reconstruction of the original analog signal. We did this experiment, with the results tabulated in Table 4.2. (Each filter has a different time lag, so modify the delay time in the Transport Lag block, as shown, to account for this.) The elliptic filter stop band ripples range over 0.12 db, hence the error ranges.

Table 4.2. Comparison of the error in converting a digital signal to an analog signal using different low pass filters.

    10th Order        Time Delay        A/D
    Filter Type       (milliseconds)    Error
    Butterworth       1.10              0.0761
    Chebyshev I       1.47              0.4576
    Chebyshev II      0.58              0.1859
    Elliptic          0.49              0.3393 to 0.5108
    Bessel            0.132             0.274
4.6.2 Creating Digital Filters from Analog Filters

We have seen how one can use the state-space model to create a digital system that has the same response as the analog system when the input to the system is a step. In Section 2.1.1, the discrete time solution of a continuous state variable model was determined to be

    x_{k+1} = Φ(Δt) x_k + Γ(Δt) u_k,

where

    Φ(Δt) = ∫₀^{Δt} e^{Aτ} dτ,    Γ(Δt) = ∫₀^{Δt} e^{Aτ} B dτ,

and the MATLAB file c2d_ncs computes these matrices. We can use these methods of forming a discrete time system with a response that is equivalent to the analog system to make a digital filter from the analog filter. In this instance the responses are equivalent in the sense that both the analog and digital filters will have the same step response. (In Exercise 4.5 you will show that this is true.) Digital filters are usually equivalent (using this approach) to the analog Butterworth, Chebyshev, Bessel, or elliptic filters. The other equivalence that one can have is impulse equivalence, where the impulse responses of the analog and digital filter are the same (but not the step responses).
A third approach for the creation of an equivalent digital filter uses an approximation of the mapping of the Laplace and z-transform variables. The approximation is the bilinear transformation given by

    s = (2/Δt) (1 − z⁻¹)/(1 + z⁻¹).

This mapping in the complex plane is one-to-one, so the inverse mapping is unique and is (noting that this form is also bilinear)

    z = (1 + (Δt/2)s) / (1 − (Δt/2)s).

The denominator in this mapping is of the form 1/(1 − x) = 1 + x + x² + ⋯, so the value of z is

    z = (1 + (Δt/2)s)(1 + (Δt/2)s + ((Δt/2)s)² + ⋯) = 1 + Δt s + (Δt²/2) s² + ⋯.

This approximation is very close to the Taylor series for e^{sΔt}, the actual mapping for the Laplace variable to the z variable. However, because the bilinear mapping is one-to-one, there is no ambiguity when this transformation is used. The powerful feature of this transformation is that one frequency exists where the phases of the transfer functions for both the analog and digital filters are the same. You can select this frequency using a technique called prewarping. Any of these approaches are selectable in the filter design block from the Signal Processing Blockset. In the next section, we look at one of these.
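Before moving on, here is a brief sketch of the bilinear transformation in MATLAB (an illustration only; it assumes the Signal Processing Toolbox function bilinear, and the 1 kHz design frequency and 10 kHz sample rate are example values chosen here, not taken from the models above):

fs = 10000;                                % sample frequency in Hz
[poles, gain] = butterworthncs(8, 1000);   % analog Butterworth design at 1 kHz
num = gain;                                % analog numerator (a constant)
den = real(poly(poles));                   % analog denominator from the poles
[numd, dend] = bilinear(num, den, fs);     % equivalent digital filter coefficients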
4.6.3 Digital Signal Processing

One of the consequences of using a linear system described by a differential equation to filter data is the time lag that we encountered in the low pass filters we used in the previous section. In signal processing, these are infinite impulse response, or iir, filters. If it is necessary to eliminate this time lag, then the filter transfer function needs to be real at all frequencies. (If it is complex, then the imaginary part of the filter transfer function corresponds to a phase shift that results in the time lag.) For an iir filter, the only way that the transfer function can be real is if the poles are symmetric with respect to the imaginary axis (i.e., for any pole with negative real part, there is an equivalent pole with positive real part). Since poles in the right half plane (with positive real parts) are unstable, it is clearly impossible to build an iir filter that processes the signal sequentially and has no phase shift. The next best attribute we can ask for an iir filter is that its phase be linear. This is possible, and many filter designs impose this criterion in the development of the filter specification.

There is an alternative to an iir filter; it is called the finite impulse response, or fir, filter. The fir filter does not require the solution of a differential equation but instead relies on the convolution of the input with the filter response to create the output. The most important attribute of a fir filter is that it always has linear phase. Let us explore this attribute with some Simulink models. The first model we create uses one of the simplest and, for many applications, most useful of all fir filters. This filter is the moving average filter that computes the mean of some number of past samples of a signal. The filter is

    y_k = (1/n) Σ_{i=0}^{n−1} u_{k−i}.
Figure 4.18. Moving average FIR filter simulation. (a) The FIR filter Simulink model: a Band-Limited White Noise source feeding a state-space realization built from the A Matrix (diag(ones(1,n-2),1)*u), the b Vector ([zeros(n-2,1);1]), the c row (ones(1,n-1)/n*u), the d gain (1/n), and a Unit Delay, with the output displayed on a Scope. (b) Simulation results: the 200-point moving average computed by the Simulink simulation using the state-space model.
The z-transform of this filter is

    Y(z) = (1/n)(1 + z⁻¹ + ⋯ + z^{−(n−1)}) = (1/(n z^{n−1}))(z^{n−1} + z^{n−2} + ⋯ + z + 1).

Thus, this filter has n − 1 poles at the origin. This means only that you have to wait for n samples before there is an output.
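A plain-MATLAB equivalent of the n-point moving average is a one-line use of the built-in filter function (a sketch for comparison only, not part of the NCS library; the input here is the same kind of zero-mean, unit-variance noise the Simulink model uses):

n = 200;
u = randn(1, 5000);               % zero-mean, unit-variance input samples
y = filter(ones(1,n)/n, 1, u);    % fir moving average of the last n inputs
plot(y)                           % hovers near zero, as in Figure 4.18(b)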
The process for computing the output y_k is to accumulate (in the sense of adding to the previous values) 1/n times the current and past values of the input. The Simulink model that generates the fir n-point moving average filter (Figure 4.18) is in the NCS library. It is Moving_Avg_FIR. Exercise 4.6 asks that you verify that this model is indeed the state-space model of the filter.

The simulation computes the average of the input created by the Band Limited White Noise block, which we investigate in more detail in the next chapter. The important information about this block is that the output is a sequence of Gaussian random variables with zero mean and unit variance (so the moving average should be zero). When the model opens, the callback sets n to 200 so the average is over the previous 200 samples. The result is in Figure 4.18(b).
You can change the number of points in the moving average by changing the variable n in MATLAB. Beware, however, that this implementation of the filter is extremely inefficient, so as n gets larger, the time it takes to compute the moving average increases dramatically.

This last point illustrates a very important aspect of filter design. That is, the design approach makes a dramatic impact on the time it takes to perform the computations and the accuracy of the result. Let us explore this with the Digital Filter design block in the Signal Processing Blockset. The model Moving_Avg_FIR_sp in the NCS library uses the signal processing Digital Filter block to create a fir filter that is identical to the state-space model above. Double click on the Digital Filter block in the model, and the window on the right will appear. This dialog allows you to select the filter type (in the Main tab); in this case, we selected FIR (all zeros). Next, we select the filter structure, the feature of the block that allows a more robust implementation. This is not critical in the simulation of the filter (although using a bad implementation as we did above can cause a long wait for the simulation to be completed), but it is absolutely critical in implementing a filter in a real-time application. Using the smallest number of multiplies, adds, and storage elements can greatly simplify the filter and make its execution much more rapid. In this implementation, we have used the Direct Form of the filter with the numerator coefficients equal to 1/n*ones(1,n), where in the simulation, n = 200.

One last feature of the Filter block is the ability to view the filter transfer function as a frequency plot (and in other views). Clicking the View Filter Response button creates the plot shown at the right. The plot opens showing the magnitude of the filter transfer function plotted versus the normalized frequency.
The viewer allows you also to look at

the phase of the filter,
the amplitude and phase plotted together,
the phase delay,
the impulse or the step response,
the poles and zeros of the filter,
the coefficients of the filter (both numerator and denominator polynomials),
the data about the filter's form,
issues about the filter implementation by viewing the filter magnitude response with limited precision implementations.

Figure 4.19. Pole-zero plot and filter properties for the moving average filter. (a) Pole-zero plot: the zeros of the 200-point moving average on the unit circle and its 199 poles at the origin, plotted as real part versus imaginary part. (b) Filter properties.
We select these using the buttons along the top of the figure (circled in the figure). Two of the more interesting outputs from this tool are the pole-zero plot and the filter properties. Figure 4.19 shows these for the moving average filter. Notice that the filter properties gives a calculation of the number of multiplies, adds, states, and the number of multiplies and adds per input sample. (In this case the numbers are the same since the fir filter requires only multiplications and adds for each of the zeros.)
We will explore some of the filter structures and their computation counts in Exercise 4.6. For now, we can really appreciate the difference the implementation makes by using the sim command in MATLAB to simulate both of the moving average Simulink models. The MATLAB code for doing this is

tic; sim('Moving_Avg_FIR');   t1 = toc;
tic; sim('Moving_Avg_FIRsp'); t2 = toc;
tcomp = [t1 t2]

The results from this code (on my computer) are

tcomp =
   18.6036    0.5747
The difference is so dramatic because the first implementation (the state-space model) has a state matrix A that is 199 × 199. At every iteration, this matrix multiplies the previous state. Then there is a vector addition, followed by a vector multiply and an addition (for the feed-through of the input). Use this as a guide for determining the calculation count in Exercise 4.6.
In general, all fir filters have the form of a sum of present and past values of the input multiplied by coefficients that are the desired (finite) impulse response. Thus, the general fir filter and its z-transform are

    y_k = Σ_{i=0}^{n−1} h_i u_{k−i}   and   H(z) = Y(z)/U(z) = Σ_{i=0}^{n−1} h_i z⁻ⁱ = Π_{i=0}^{n−1} (z⁻¹ + zero_i).

Notice that this filter has n zeroes (zero_i) that are determined by factoring the polynomial H(z). Because the highest power in this transfer function is z⁻ⁿ, the filter has n poles at the origin (which corresponds to the fact that the filter output is only complete after n samples; there is an n-sample lag before the correct output appears). This appears in the response of the moving average filter from the Simulink model above.
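The zeros themselves are easy to find numerically: they are the roots of the polynomial whose coefficients are the impulse response. A small sketch in plain MATLAB (the 8-point moving average here is chosen purely as an example):

h = ones(1,8)/8;        % an 8-point moving average
fir_zeros = roots(h)    % seven zeros, equally spaced on the unit circle (z = 1 excluded)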
The discussion above was for a filter that computes the mean of n samples. Because of the study of this type of filter in statistics, all zero filters are moving average (or MA-filters). The statisticians also developed iir filters that were all poles, and they dubbed these autoregressive (or AR-filters). Last, if a filter has both poles and zeros, it is an ARMA-filter. The filter design block allows you to create all three types of filters. The dialog also allows the user to specify where the filter coefficients are set in the model. The option we have used is to set them through the filter mask dialog. (They are computed from the poles and zeros set in the dialog.) You also can select an option where the filter coefficients are an input (they can then be computed in some other part of the Simulink model for use in this block, thereby changing the filter coefficients on the fly), or the coefficients can be created in MATLAB using an object called DFILT. Finally, the dialog allows the user to force the filter to use fixed-point arithmetic. This leads us to the discussion in the next section.
4.6.4 Implementing Digital Filters: Structures and Limited Precision

Very few analytic methods allow one to visualize the effects of limited precision arithmetic on a filter. The use of simulation in this case is mandatory. Therefore, let us explore some of the consequences of designing a filter for use in a small, inexpensive computer that, say, has only 16 bits available.
[Figure: the input signal (the sum of sinusoids), plotted over 0 to 2 seconds with amplitudes between about −10 and 15.]
Table 4.3. Specification for the band-pass filter and the actual values achieved in the design.

    Specification        Spec. Values    Actual Values (3 dB point)
    Sample Freq.         10 kHz          10 kHz
    Pass Band #1 Edge    100 Hz          100 Hz
    Stop Band #1 Edge    110 Hz          109.7705 Hz
    Pass Band #2 Edge    120 Hz          120.2508 Hz
    Stop Band #2 Edge    130 Hz          130 Hz
    Stop Band 1 Gain     60 dB           60 dB
    Transition Width     10 Hz           10 Hz
    Pass Band Ripple     1 dB            1 dB
Figure 4.20. The Simulink model for the band-pass filter with no computational limitations. (The 15-sine-wave source, set by a callback when the model loads, feeds the Digital Bandpass Filter Designer and the floating point Bandpass Filter block; all signals are double precision and are displayed in the View Signals scope.)
The first question we need to ask is, what's the problem? We have created a very simple example of a digital signal-processing task that will allow us to explore some of the limited precision arithmetic features built into the Signal Processing Blockset. The model, called Precision_testsp1 or 2 in the NCS library, is a simulation of a device that one might design to try to find a single tone in a time series that consists of a multitude of tones. We might use it, for example, in a frequency analyzer or in a device to tune a wind instrument during its manufacture. It consists of a band-pass filter with a narrow frequency range (10 Hz in this case) and a sample rate of 10 kHz. The input to the filter is the same sum of sinusoids that we have used previously, namely sine waves at the frequencies 65, 72.8, 74.7, 82, 89.3, 91, 99.5, 103.2, 125, 180.2, 202.1, 223.3, 230.3, 310.3, and 405.2 Hz, in the figure above.

The band-pass filter specification is in Table 4.3, and the Simulink model Precision_testsp1 that we use to test the design is in Figure 4.20. If you open this model, you will see that it contains the Bandpass Filter block from the Filter Design Toolbox Simulink library. This block converted the specification in the table into filter coefficients used by the Digital Filter block. The digital filter block allows you to take the floating point design and convert it to an equivalent fixed point design.
Figure 4.21 shows the amplitude-versus-frequency plot for the filter designed by the Band Pass Filter Design block. (Open the model in the NCS library and double click on this block to see the specification and create this plot.) The solid lines in the figure are the filter amplitude, as designed, and the dotted lines are the specification from the table above.

Figure 4.21. The band-pass filter design created by the Signal Processing Blockset digital filter block. (Magnitude response in dB versus frequency in kHz.)

When this filter is used, the digital filter response is almost indistinguishable from the response of an analog filter. We have deliberately designed this filter for a frequency that is not contained in the 15 sine waves that make up the input. (The pass band is from 110 to 120 Hz; none of the sinusoids falls in this frequency range.)
Thus, the output of the filter should be very small. This is in fact the case as can be seen in the response plot (Figure 4.22(a)). The amplitude of the output (after the initial transient dies down) is about 0.01 (about 1/1000 the amplitude of the input signal), which is very good for detecting that the tone is not present.

The next part of the design is to place the filter pass band in the area where we know there is a tone. Thus, let us try to pick out the tone at 103.2 Hz by specifying that the pass band corners will be 100 and 110 Hz (so the values of Fstop1, Fpass1, Fpass2, and Fstop2 are 90, 100, 110, and 120, respectively). This is so easy to do in the design that it amounts to a trivial change (in contrast to designing this filter by hand). You should try this, if you have not already. If you have not, the Simulink model Precision_testsp2 in the NCS library has the changes in it. Open this model and double click the Bandpass Filter block to view the changes that were made. (The dialog that opens has the new design for the pass band.) When you run this model, the response should look like the plot shown in Figure 4.22(b). The maximum output is now about 1.5 (compared to the 0.15 above), showing that there is a tone in the 10 Hz frequency range of 100 to 110 Hz.

The filter design method that we use for both of the band-pass filters is the Second Order Section (or Direct Form Type II). When a digital filter is implemented using floating-point operations, the structure of the filter does not usually matter, but when the filter is for a fixed-point computer, as we intend to do next, the way that the filter is structured can make a huge difference. For this reason, the Signal Processing Blockset includes design methods for a wide variety of filter structures.
Figure 4.22. Band-pass filter simulation results (with and without signal in the pass band). (a) Output of the band-pass filter when no signal is in the pass band (amplitude within about ±0.2). (b) Band-pass filter output when one of the sinusoidal signals is in the pass band (amplitude about ±1.5).
Why is the implementation method an issue for fixed-point calculations? In Section 4.4, we showed that the transfer function for a discrete linear system comes from the state-space model as

    H(z) = Z{y_k}/Z{u_k} = C(zI − Φ)⁻¹ B + D.

The poles of the discrete system and the eigenvalues of the system are given by det(zI − Φ(Δt)) = 0, so using the denominator polynomial to create a digital filter will result in coefficients that range from the sum of the eigenvalues (or poles) to their product. All of the eigenvalues (poles) of the z-transform transfer function are less than one because the matrix Φ(Δt) is

    Φ(Δt) = e^{AΔt} = T⁻¹ diag(e^{λ₁Δt}, e^{λ₂Δt}, …, e^{λₙΔt}) T,

where T is the matrix that diagonalizes the matrix A (and Φ(Δt)). The denominator of the z-transform is the determinant of this matrix and is the polynomial Π_{j=1}^{n} (z − e^{λⱼΔt}). (Exercise 4.7 asks that you show that this is true.)
If this polynomial defines the filter denominator, the precision needed to store the coefficients would range over many orders of magnitude. Consider, for example, the fourth order filter with poles at 0.1, 0.2, 0.3, and 0.4; its denominator polynomial is z⁴ − z³ + 0.35z² − 0.05z + 0.0024, so the coefficients span almost three orders of magnitude. Each coefficient would therefore require at least 10 bits just to get one bit of accuracy. The simplest way to avoid this is to break the filter up into cascaded second order systems. Second order sections can eliminate complex arithmetic. (The denominator polynomial of a second order system is always z² − (e^{λᵢΔt} + e^{λ̄ᵢΔt})z + |e^{λᵢΔt}|², which is always real.)
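A one-line check of the coefficient spread quoted above (plain MATLAB, using the poly function to form the polynomial from its roots):

poly([0.1 0.2 0.3 0.4])    % returns  1.0000  -1.0000  0.3500  -0.0500  0.0024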
The cascaded second order sections that were designed by the filter designer for this model can be seen by looking under the mask (by right clicking on the Bandpass Filter block and selecting Look under mask from the menu that opens). The block that appears is the Generated Filter block, and we open it by double clicking. When this block opens, you will see the cascaded second order sections (unconnected) that make up the filter (as shown in the figure below).
[Figure: the contents of the Generated Filter block. The cascaded second order sections are built from gain blocks holding the numerator coefficients b(2)(1) through b(3)(4), the denominator coefficients a(2)(1) through a(3)(4), and the section scale factors s(1) through s(4) and s5(4), together with unit delay (z⁻¹) blocks, Goto/From tags Sect1 through Sect3, and a Check Signal Attributes block between the Input and Output ports.]
The coefficients are in gain blocks, and we can inspect them by double clicking on the block. It is very useful to become familiar with navigating around the Signal Processing Blockset generated models this way.

Now that a filter design exists, we impose the requirement that the calculations must use a precision of 16 bits. Designing a filter with limited precision arithmetic is simple if the filter design block creates the design. Once it is complete, MATLAB uses the filter design block to design the implementable digital filter. Before we do this, we need to think a little about what we mean by limited precision arithmetic and digital computing without floating-point calculations.
Whenever a calculation uses floating-point arithmetic, the computer automatically normalizes the result (using the scaling of the floating-point number) to give the maximum precision. (Among the many references on the use of floating point, [31] describes how to use a block floating-point realization of digital filters.) The normalized version of any variable in the computer is

    x = (1 + f) × 2^e,

where f is the fraction (or mantissa) and e is the exponent. The fraction f is always positive and less than one and is a binary number with at most 52 bits. The exponent (in 64-bit IEEE format) is always in the interval −1022 ≤ e ≤ 1023. Because of the limited size of f, there is a limit to the precision of the number that can be represented. (In MATLAB, this is captured by the variable eps = 2⁻⁵², the value for any computer using the IEEE floating-point standard for 64-bit words.) The exponent has a similar effect, except that its maximum and minimum values determine the smallest and the largest numbers we can represent. It would pay to read Section 1.7 of NCM [29] to make sure that you understand the concept of limited precision.
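A two-line MATLAB check of these facts (an aside, not taken from the model files):

eps == 2^-52       % true: eps is exactly 2^-52 for IEEE double precision
(1 + eps/2) == 1   % true: an increment below the 52-bit fraction is lost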
Inexpensive computers do not have floating-point capability. (They have fixed-point architectures, where the user places the binary point at a fixed location in the digital word.) Programming these computers requires that the programmer ensure that the calculation uses enough bits. When we say enough bits, we are immediately in the realm of speculation because the number of bits needed for any calculation depends on the range of values that the data will have. For example, if we are interested in building a digital filter that will act as the tone control for an audio amplifier, the amplifier electronics immediately before the analog-to-digital conversion determines the maximum value that the signal will ever have. The amplifier might limit the signal to a maximum of 1 volt. The result of the conversion of the analog signal will be a number that ranges from −1 to 1. It is then easy to scale the fixed-point number so its magnitude is always less than one by simply putting the binary point after the first bit. (The first bit is then the sign bit, and the remainder of the bits are available to store the value.) This scaling can be kept for all of the intermediate calculations, but depending on the calculation (add, subtract, multiply, or divide), this may not be best, so scaling needs to be reconsidered at every step in the computation.
When the amplitude of the signal being filtered is not known (as, for example, when filtering a signal that is being received over a radio link), the designer needs to figure out what the extremes for the signal will be and ensure that the digital word accommodates them. When the extremes are exceeded, the conversion to a digital word reaches a limit (which is called saturation and results in the most significant bit in the conversion trying to alter the sign bit). When the digital word has a sign, the computer recognizes this attempt and creates an error. When using unsigned integers, the bit overflows the register. (The computer sees a carry bit that has no place to go, resulting in an overflow error.)

To handle the limited precision, the designer needs to figure out what the effect of the limited precision will be both in terms of the accuracy that is required and in terms of the artifacts introduced by the quantization. As we will see, quantization is a nonlinear effect that can introduce many different noises into a signal that may, at a minimum, be distracting and, in the worst case, cause the filter to do strange things (like oscillate).
Based on the discussion above, it should be clear that fixed-point numbers accommodate a much smaller dynamic range than floating-point numbers. The goal in the scaling is to ensure that this range is as large as possible. The scaling usually used is to have a scale factor that goes along with the digital representation and an additive constant (or bias) that determines what value the digital representation has when all of the bits are zero. The scaling acts like a slope and the additive constant acts like the intercept in the equation of a straight line, i.e.,

    V_representation = S · w + b.

As was the case for the floating-point numbers, the constant S = (1 + f) × 2^e, where the magnitude of f is less than one and b is the bias. The difference between the floating-point and fixed-point representation is that the value for e is always the same in the fixed-point calculations. The programmer must keep track of the slope and the bias. The computer uses only w during the calculations. Simulink fixed-point tools have many different rules for altering the scaling for each of the calculations that occur, ensuring the best possible precision (with constraints). Simulations ensure that the input signals truly represent the full range that is expected. The simulation also must ensure that the calculations at every step use values that have ranges that are consistent with the data.
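A minimal sketch of this slope-and-bias representation in plain MATLAB (an illustration of the equation above, not the Simulink fixed-point tools; the numbers are chosen arbitrarily):

S = 2^-15;  b = 0;            % slope for a 16-bit word with a 15-bit fraction
V = 0.3071;                   % value to be stored
w = round((V - b)/S);         % integer word the computer actually keeps
Vrep = S*w + b;               % value that word represents
err = Vrep - V                % quantization error, bounded by S/2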
Figure 4.23. Fixed-point implementation of a band-pass filter using the Signal Processing Blockset.
With this brief discussion, we can begin to investigate digital signal processing on a fixed-point machine. Open the model Precision_testsp2 (see Figure 4.23(a)) and double click the Digital Filter block. The dialog in Figure 4.23(b) will appear. The information in the Main dialog consists of the data created by the filter design in the Digital Band-pass Filter Designer block in the model. The Transfer function type is iir, and the filter structure is the biquadratic direct form that we specified.
The dialog has a tab that forces the filter design to use a fixed-point architecture. Click this tab and look at the options. Since the filter consists of four cascaded second order sections, the first issue we need to address is how to scale each of the outputs. As a starting guess we specify that each second order filter will use the full precision of the 16-bit word, so we put the binary point to the right of the sign bit (i.e., between bit 15 and 16, so that the fraction length, the length of w in the scaling equation, is 15 bits). The coefficients are also critical, so they need as much precision as possible. We have specified overall accuracy of 32 bits, with 15 bits for both the numerator and denominator coefficients in each of the second order sections. Each multiplication in our computer has 32 bits of precision, so we specify that the word length is 32 bits, and we again allow the most precision of 31 bits for the Fraction Length. The output will use a 32-bit D/A conversion, so the output is scaled the same as the accumulator.
These guesses for the scaling are now tested. The first step in the process is to run the model (with the guesses) to see what the maximum and minimum values of the various signals are. To do this, just click the start button. The simulation will run, and the maximum and minimum values appear in MATLAB in a 45-element cell array called FixPtSimRanges. The last five values in this cell array show the simulation results for the fixed-point calculations. For example, the 45th entry, obtained by typing FixPtSimRanges{45} at the MATLAB command line, contains

    Path: [1x54 char]
    SignalName: 'Section output'
    DataType: 'FixPt'
    MantBits: 16
    FixExp: -15
    MinValue: -1.0000
    MaxValue: 0.9998
The next step is to allow the Autoscale tool to adjust the scaling to give the maximum amount of precision to the implementation. To do this, select Fixed Point Settings under the Tools menu. This selection opens the dialog in Figure 4.24.

To automatically scale the calculations (and in the process compute all of the scale factors S associated with each calculation), simply click the Autoscale Blocks button at the bottom of the dialog. The results appear by opening the Digital Filter block and then looking under the Fixed-Point tab. The Autoscale will have scaled all of the calculations, but you should find that the values we selected are good.

Now that the filter is complete, you can experiment with different word sizes and precisions, and see what the effect is on the output of our filter. However, if you look at the View Signals Scope block, you will notice that even with the design optimized for precision, the fixed-point and floating-point computations are not the same.
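You can see the same kind of difference at the level of a single value with the fi object (a quick sketch, assuming the MATLAB Fixed-Point Designer fi function is available; the value 0.4376 is arbitrary):

x   = 0.4376;
x16 = fi(x, 1, 16, 15);              % signed, 16-bit word, 15-bit fraction
x32 = fi(x, 1, 32, 31);              % signed, 32-bit word, 31-bit fraction
[double(x16) - x, double(x32) - x]   % the representation error shrinks as the word grows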
Figure 4.24. Running the Simulink model to determine the minima, maxima, and scaling for all of the fixed-point calculations in the band-pass filter.

4.6.5 Batch Filtering Operations, Buffers, and Frames

The Signal Processing Blockset can also process signals in a batch mode where the signals are captured and then buffered in a register before the entire sequence is used to calculate a desired functional. The most frequent use of this technique is in the calculation of the Fourier transform of a signal. Remember that the Fourier transform is an integral over all time. Since we can capture only a finite time sample, any calculation for a signal processing application will at best approximate the transform. In addition, the data will be digital, so the transform will be done using an approximating summation. The most frequent approximation is the fast Fourier transform (FFT). It is the subject of Chapter 8 of NCM, so we will assume that the reader is familiar with its method of calculation.

In order to illustrate the concept of buffers and the FFT in the Signal Processing Blockset, consider the problem of recreating an analog signal from its samples (the sampling theorem problem we investigated above). In this case let's not try to find an analog filter that will do this, but let's try to use the Fourier transform (in the form of the FFT).
The model FFT_reconstruction in the NCS library (Figure 4.25) starts with the same sum of 15 sinusoids that we have been using in the previous models. The reconstruction, however, uses the transform of the signal. Let us look at the theory behind the calculations, and then we will explore the model's details.

Remember that the sampling theorem told us that to reconstruct the signal from its samples we need to multiply the Fourier transform of the sampled signal by the function that is 1 up to the sample frequency and zero elsewhere. When we take the Fourier transform using the FFT, we get frequencies that are up to half the sample frequency.
Figure 4.25. Illustrating the use of buffers. Reconstruction of a sampled signal using the FFT. (The sampled input is stored in a Buffer block of nbuffer values; a 512-point FFT is computed and its magnitude displayed with a Short-Time Spectrum block; the Pad Transform with Zeros subsystem extends the transform to 4096 points; an IFFT block, a block that eliminates the residual imaginary part, and a scale factor of nd/nbuf produce the reconstruction, which an Unbuffer block (ndesired values) returns to the time domain; a delay of samp_time*nbuffer accounts for the lag in the buffer before the reconstructed and input signals are compared in the View Signals scope.)
Thus, if the conditions for the sampling theorem (that the signal is band limited) are valid, the FFT is just S(ω), −ω_M ≤ ω ≤ ω_M, where ω_M is the maximum frequency contained in the signal and the values of ω are just multiples of 2ω_M/n, where n is the number of samples of the signal that were transformed. In creating the model in Figure 4.25, the first step was to save n samples of the input signal for subsequent processing by the FFT. The block in the Signal Processing Blockset that does this is the Buffer block. It simply stores some number of values of the input in a vector before passing it on to the next block. When the model opened, three data values (called nbuffer, npad, and ndesired) appear in MATLAB. Nbuffer is the size of the buffer, and it is set to 512. (It always must be a power of two for the FFT block.)

The FFT and the inverse FFT use the blocks from the Transforms library in the Signal Processing Blockset. Thus, outwardly, all we are doing in the model above is taking the transform of the input and then taking the inverse transform to create an output. The output, though, cannot be the result of the inverse transform, since this is going to be a vector of time samples that, presumably, matches the output of the Buffer block. The way to pass the vector of samples back into Simulink as a set of time samples at the appropriate simulation times is to use the Unbuffer block. (Both the Buffer and Unbuffer are in the Signal Management library under Buffers.)
To apply the Sampling theorem, you need to multiply the transform of the sampled signal by the pulse function and then take the inverse Fourier transform (which is continuous, so this results in a continuous time signal). We are using the IFFT block that takes the inverse Fast Fourier Transform and therefore outputs a signal only at discrete values of time. Therefore, in order for the inverse to have more time values (thereby filling in or interpolating the missing samples), we need to increase the size of the FFT before we take the inverse. We do this by padding the transform with zeros to the left of −ω_M and to the right of ω_M. The subsystem called Pad Transform with Zeros in Figure 4.26(a) does this. To see how, double click on the block to open it.

There is a block called Zero Pad in the Signal Operations library, and we use it to do the padding. The block adds zeros at either the front or the rear of a vector. However, we need to be careful about this. Remember that when the FFT is computed, the highest frequencies are in the center of the transform vector, and the lowest (zero frequencies) are at the left and right of the vector (i.e., the transform is stored from 0 to ω_M and then from −ω_M to 0 with a discontinuity at the center of the array).

Figure 4.26. Using the FFT and the sampling theorem to interpolate a faster sampled version of a sampled analog signal. (a) The Simulink subsystem that pads an FFT with zeros to increase the number of sample points in the transform: a MATLAB Function block shifts the transform so zero frequency is at the center, Zero Pad blocks pad it at the lowest negative and highest positive frequencies, a second MATLAB Function block un-shifts it so zero is at the ends, and the magnitude is also stored in rows of a matrix for the Matrix Viewer. (b) Padding an FFT with zeros must account for the fact that the 0-frequency is not at the center of the FFT.

If you run the following MATLAB
code, it will generate the plot in Figure 4.26(b) to illustrate this:
t = 0: .01:511*.01;
y = sin(t)+sin(10*t)+sin(100*t);
z=fft(y);
plot (abs(z))
The plot shows the fact that the transform has the zero frequency at the 0th and the 512th points computed. Because of this FFT quirk, padding the FFT with zeros during the computation in the Simulink model will not add zeros below −ω_M and above ω_M. (Exercise: What does it do?)

There is a built-in command in MATLAB that we will use to rotate the FFT. The function is fftshift, and it converts the FFT so its zero frequency is at the center of the plot (as it appears using the Fourier transform). To see the effect of using this command, replace the last plot command with

plot(fftshift(abs(z)))

There is no function in the Signal Processing library that does the equivalent of fftshift. Therefore, we use the MATLAB Function block in the Simulink library to use the MATLAB fftshift. The function needs to be invoked twice: once before we do the zero padding and once after to put the FFT back into the correct form for the inverse (IFFT block).
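The whole interpolation idea fits in a few lines of plain MATLAB (a sketch of the mathematics only, separate from the Simulink model; the buffer and output sizes match the values used in the model, but the test signal here is a simpler band-limited example):

nbuffer = 512;  ndesired = 4096;
t = (0:nbuffer-1)*1e-3;                            % 1 kHz samples
y = sin(2*pi*65*t) + sin(2*pi*125*t);              % band-limited test signal
Z = fftshift(fft(y));                              % zero frequency at the center
npad = (ndesired - nbuffer)/2;
Zp = [zeros(1,npad) Z zeros(1,npad)];              % pad below -wM and above wM
yi = real(ifft(ifftshift(Zp)))*ndesired/nbuffer;   % interpolated samples, scaled by nd/nbuf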
Two additional blocks used in the model come from the Signal Processing Blockset Sinks library. They are the Short-Time Spectrum block and the Matrix Viewer block. The first block is used to view the FFT as it is computed, and the second allows us to view the padded FFT as it is computed (in a three-dimensional plot). In this plot, time and frequency are the two axes, and the color is the amplitude of the FFT. (In the book it is in a gray scale, but in the model that you are running from the NCS library, it is in color.) Another attribute used in the model is to display the length of the vectors and the sample times on the various lines using different colors for the lines. These attributes come from the Port/Signal Displays under the Format menu in the model. Now, with this understanding of the mathematics involved, it should be clear that if we pad the transforms by some number of zero elements to make the final padded value have 2^n points, the inverse FFT (IFFT) will result in a time series with 2^n values. Thus, making 2^n > nbuffer, the output will have more time samples than the input. From the sampling theorem, this output is the result of multiplying the FFT of the input by the pulse function, and the result is a signal that perfectly (to within numeric precision, of course) reconstructs the original signal at the new sample times.

The result of running this model is shown in Figure 4.27 for the reconstruction of the sampled signal at 8 times the input frequency (ndesired = 4096, nbuffer = 512, and npad = 1792).

The results show very good reconstruction of the sampled signal, as we would expect. Figure 4.28(a) shows the output from the Short-Time Spectrum block in the model, and Figure 4.28(b) shows the plot created by the Matrix Viewer block. Almost every one of the 15 frequencies in the input sine wave can be seen in the peaks of the spectrum (which is really the FFT), and the padding of the FFT to create the new output can be seen in the Matrix Viewer output.
We have explored about 20% of the capabilities of the Signal Processing Blockset in this chapter. For example, dramatic improvements in many signal-processing applications result from the processing of a large sample of data transferred to the computer as a contiguous block. To some extent, we have seen this in the example above, where we buffer the signal data before sending it to the FFT. The same approach applies to filters and many other signal-processing operations. When we do this in Simulink, the operation uses a vector called a frame. Frames make most of the computationally intensive blocks in the Signal Processing Blockset run faster. There are many examples in the demos that are part of the signal-processing toolbox that illustrate this, and now that you understand how the buffer block works, you should be able to work through these examples without trouble. Furthermore, it is a good idea to look at all of the demos, and also to open all of the blocks in the blockset to see how each of them works, and, along with the help, determine what each of the blocks needs in terms of data inputs and special considerations for the use of the blocks.
Figure 4.27. The analog signal (sampled at 1 kHz, top) and the reconstructed signal (sampled at 8 kHz, bottom), plotted over the interval from 4 to 4.1 seconds.
4.7 The Phase-Locked Loop

An interesting signal-processing device incorporates the basics of signal processing and feedback control. The device, a phase-locked loop (PLL), is an inherently nonlinear control system. It is extremely simple to understand, and Simulink provides a perfect way for simulating the device. In the process of creating the simulation, we will encounter some new Simulink blocks, we will use some familiar blocks in a new way, and we will encounter some numerical issues. We begin with how the PLL operates.

Imagine that we want to track a sinusoidal signal. The classic example of this is the tuner in a radio, television, cell phone, or any device that must lock onto a particular frequency to operate properly. In early radios, a demodulator followed an oscillator tuned to the desired frequency. The oscillator operated in an open loop fashion, so if its frequency drifted (or the signal's frequency shifted slightly), the radio needed to be manually retuned. The operation of the demodulator used the fact that the product of the incoming frequency and the local oscillator created sinusoids at frequencies that were the sum and difference of the input and oscillator frequencies. Mathematically this comes from
    sin(ω₁t + φ₁) cos(ω₂t + φ₂) = ½ [sin((ω₁ + ω₂)t + φ₁ + φ₂) + sin((ω₁ − ω₂)t + φ₁ − φ₂)].
The output of the demodulator was the result of extracting only the difference frequency using a circuit tuned to this intermediate frequency. In most applications, tuned radio-frequency amplifiers increased the intermediate signal's amplitude. The PLL uses the same concept, except the oscillator frequency uses the difference sinusoid to change the frequency of the oscillator. The feedback uses a device called a voltage controlled oscillator (abbreviated VCO). The operation of the loop needs three parts:

the VCO,
the device that creates the product of the input and the VCO output,
a filter to remove sin((ω₁ + ω₂)t + φ₁ + φ₂) before the result of the product passes to the VCO.

Figure 4.28. Results of interpolating a sampled signal using FFTs and the sampling theorem.

Figure 4.29. Phase-locked loop (PLL). A nonlinear feedback control for tracking frequency and phase of a sinusoidal signal. (The model tracks a sinusoidal input in the frequency range of 95 to 105 Hz; it contains the voltage controlled oscillator, the modulator, a third order Butterworth low pass filter at 5 Hz, the 101.56 Hz input signal, and scopes for the tracking and error signals.)
The term locked describes when the input frequency and the VCO frequency are the same. Thus, at lock, since the product has both the sum and difference frequencies, the difference frequency is zero. Therefore, the filter that we need to remove the sum frequency is our old friend the low pass filter. We will build the loop simulation in Simulink using a third order Butterworth filter.

The Simulink model of the phase-locked loop is Phase_Lock_Loop in the NCS library (Figure 4.29).
A MATLAB callback (as usual) creates the parameters for the simulation. The Butterworth filter is third order, and its coefficients are in the Transfer function block. The modulator uses the product block. The VCO is a subsystem that looks like Figure 4.30.

This implementation of the VCO ensures that the generated sinusoid always has the correct frequency despite the limited numeric precision of the simulation. The big worry is the effect of roundoff. NCM has a discussion of the effect of calculating a sinusoid using increments in t that look like t_next = t + Δt. If the value of Δt cannot be represented precisely in binary, then eventually the iteration implied by the equation will give an incorrect result. The integrator modulo 1 in the diagram above fixes this problem.
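A tiny plain-MATLAB illustration of the roundoff issue (not part of the model; the step size 0.1 is chosen because it is not exactly representable in binary):

dt = 0.1;  t = 0;
for k = 1:1e6
    t = t + dt;          % the iteration t_next = t + dt
end
t - 1e6*dt               % nonzero: the accumulated roundoff drift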
Let us look at this subsystem (Figure 4.31). The first thing to note is that the integration is creating the ω₂t part of the argument of cos(ω₂t + φ₂) that is the VCO output. The second thing to note is that the integrator resets whenever the value of its output is one.
Figure 4.30. Voltage controlled oscillator subsystem in the PLL model. (The input VCOin is scaled by the gain Kpll and added to the oscillator base frequency f0; the result is integrated modulo 1, the oscillator base phase is added, and the cosine of the result, scaled by the output amplitude Amp, becomes VCOout. A Scope displays the signals.)
Figure 4.31. Details of the integrator modulo 1 subsystem in Figure 4.30. (The subsystem integrates its input with a resettable integrator whose state port feeds a relational operator; when the state reaches 1 + eps the integrator is reset, and the remainder computed with the rem Math Function block becomes the new initial condition, so the integral stays between 0 and 1. The result is multiplied by 2π to produce the output labeled Integral modulo 2pi. The model annotation notes that using the state port avoids an algebraic loop and that any error in the integration at the reset is recovered from the remainder.)
This reset uses two options in the integrator block that we have not used before. The first is the state port that comes from the top of the integrator. A check box in the integrator dialog causes the display of this port. You use it when you need the result of the integration to modify the input to the integrator. If you fed the output back to the input, an algebraic loop would result that is difficult for Simulink to resolve. The port eliminates this loop.

The second new input to the integrator is the reset port. This port is created when you select rising from the External reset pull-down menu in the integrator dialog. Thus, in the model, the integrator is reset to the initial condition whenever the relational operator shows that the output is greater than 1 + eps. (eps is the smallest floating-point number in the IEEE floating-point standard; see NCM for a discussion of this MATLAB variable.)
Figure 4.32. Phase-locked loop simulation results. (a) Phase lock loop simulation input and output: the input signal and the output of the VCO over the interval from 1.9 to 2 seconds. (b) Output of the modulo integrator (a sawtooth between 0 and about 2π). (c) Loop tracking response: the input to the VCO (the phase lock loop tracking error) over the full 2 second run.
The integrator steps are not necessarily going to occur at exactly the point where the output is exactly one. The external initial condition in the integrator is used at the reset to set the value of the integrator to the remainder (the amount the output exceeds 1) for the next cycle. The output of the integral is therefore a sawtooth wave that goes from zero to one. We multiply the output by 2π before using it to calculate the cosine.
Run the PLL Simulink model and observe the outputs in the Scopes. There are three Scope blocks, two at the top level of the diagram. The first shows the two sinusoids (the input and the VCO output), and the second shows the feedback signal that drives the VCO. Look at these and at the Scope block in the VCO subsystem. (Figure 4.32(a) shows the plots of the input and output of the simulation.) Figure 4.32(b) shows the modulo arithmetic integrator output that we described above, and Figure 4.32(c) is the tracking error in the PLL. Because the frequency of the VCO is not exactly 101.56 Hz, the VCO frequency (VCOin) must have a nonzero steady state value. This is clearly the case. Furthermore, the PLL got to this steady state value with a rapid response and minimal overshoot. The design of the PLL feedback uses linear control techniques that start with a linear model of the PLL dynamics.
Many more interesting and complex mathematical processing applications are possible using the blockset, including noise removal, more complex filter designs, and the ability to create a processing algorithm for an audio application and actually hear how it sounds on the PC. There are tools that allow the export of any signal processing application to Hardware Description Language form so that the processing ends up on a chip. Tools also will export designs to Xilinx and Texas Instruments architectures.

We are now ready to talk about stochastic processes and the mathematics of simulation when the simulated variables are processes generated by a random quantity called noise. This is the subject of the next chapter.
4.8 Further Reading
The Fibonacci sequence and the golden ratio is the subject of a book by Livio [26]. It is
a very readable and interesting review of the many real (and apocryphal) attributes of the
sequence.
In graduate school (at MIT), Claude Shannon worked on an early from of analog
computer called a differential analyzer. His work on the sampling theorem links to the
subject of simulation in a very fundamental way. In fact, while he was working on the
differential analyzer he created a way of analyzing relay switching circuits that were part
of these early devices. He published a paper on these results in the Transactions of the
American Institute of Electrical Engineers (AIEE) that won the Alfred Noble AIEE award.
During World War II, Shannon worked at Bell Labs on re control systems. As the
war ended, Bell Labs published a compilation of the work done on these systems. Shannon,
along with Richard Blackman and Hendrik Bode, wrote an article on data smoothing that
cast the control problem as one of signal processing. This work used the idea of uncertainty
as a way to model information, and it was a precursor to the discovery of the sampling
theorem.
The Shannon sampling theorem is so fundamental to all of discrete systems that his papers frequently reappear in print. Two recent examples of this are in the Proceedings of the IEEE [23], [39]. A paper in the IEEE Communications Society Magazine [27] followed shortly afterward. The best source for the proof of the theorem is in Papoulis [32]. Papoulis had a knack for finding elegant proofs, and his proof of the sampling theorem is no exception.
There are hundreds of texts on digital filter design. One that is reasonable is by Leland Jackson [22].
Reference [11] describes the operation of phase-locked loops. It also shows how they
are analyzed using linear feedback control techniques, and how easy it is to create a loop in
a digital form.
To learn more about the tools, blocks, and techniques available with the Signal Processing Blockset and Toolbox, see The MathWorks Users Manuals and Introduction [43], [44].
Exercises
4.1 Show that the Fibonacci sequence has the solution (completing the step outlined in Section 4.1.2)

f_n = \frac{1}{2\varphi - 1}\left(\varphi^{n+1} - (1 - \varphi)^{n+1}\right),

where \varphi is the golden ratio.
4.2 Verify using induction the result from Section 4.3 that

x_k = \begin{bmatrix} f_{k-1} & f_k \\ f_k & f_{k+1} \end{bmatrix} x_0.

Is it a fact that because you use the iterative equation in the model and the result is what you expect that this must imply that the result is true by induction? If you use a simulation of a difference equation, iteratively, to produce a consistent result, will this imply that the result is always true by induction?
4.3 Show, using induction, that the Fibonacci sequence has the property that f_{k+1} f_{k-1} - f_k^2 = \pm 1. (Follow the hint in Section 4.3.)
4.4 Verify that the Butterworth filter function

|H_{Butter}(\omega)|^2 = \frac{1}{1 + \omega^{2n}}

is maximally flat. (All of its derivatives from the first to the (n - 1)st are zero at zero and infinity.)
4.5 Show that the analog system \frac{d}{dt}x(t) = Ax(t) + Bu(t) and the discrete system x_{k+1} = \Phi(\Delta t)x_k + \Gamma(\Delta t)u_k have the same step responses when

\Phi(\Delta t) = e^{A\Delta t}, \qquad \Gamma(\Delta t) = \int_0^{\Delta t} e^{A\tau} B\,d\tau.

Create a Simulink model with the state-space model of a second order system in continuous time and in discrete time using the above.
Create a discrete time system using the Laplace transform of the continuous system with the mapping s = \frac{2}{\Delta t}\left(\frac{1 - z^{-1}}{1 + z^{-1}}\right) to generate the discrete system.
Compare the simulations using inputs that are zero (use the same initial conditions for each of the versions), a step, and a sinusoid at different frequencies.
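(A possible MATLAB starting point for this comparison; this is a sketch that assumes the Control System Toolbox function c2d is available, and the numerical values are illustrative.)

% Sketch: step-response-equivalent (zoh) and Tustin discretizations of a
% second-order continuous system.
wn = 10;  zeta = 0.707;  dt = 0.05;
sysc = ss([0 1; -wn^2 -2*zeta*wn], [0; wn^2], [1 0], 0);
sysd_zoh    = c2d(sysc, dt);            % matches the continuous step response
sysd_tustin = c2d(sysc, dt, 'tustin');  % uses s = (2/dt)*(1 - z^-1)/(1 + z^-1)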
4.6 Show that the FIR filter for an n-sample moving average has the state-space model

x_{k+1} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} x_k + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} u_k,
\qquad
y_k = \frac{1}{n}\begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix} x_k + \frac{1}{n} u_k.

In this state-space model the A matrix is (n - 1) x (n - 1) and the vectors are of appropriate dimensions. Verify that the Simulink model in the text does indeed use
this equation. Calculate the number of adds and multiplies this version of the moving average filter requires.
Investigate other nonstate-space versions of this digital filter. Use the Simulink digital filter library. Work out the computation count needed to implement the filters you select, and compare them with the number of computations needed for the state-space model. Is this model an efficient way to do this filter? What would be the most efficient implementation?
4.7 Show that \det(zI - \Phi(\Delta t)) is \prod_{j=1}^{n}\left(z - e^{\lambda_j \Delta t}\right), where \Phi(\Delta t) = e^{A\Delta t} and the \lambda_j are the eigenvalues of the matrix A.
Chapter 5
Random Numbers, White Noise, and Stochastic Processes
Simulation of dynamic systems where we assume we know the exact value for all of the
model parameters does not adequately represent the real world. Most of the time the designer
of a system wants to know what happens when one or more components are subject to
uncertainty. The modeling of uncertainty covers two related subjects: random variables
and stochastic processes.
Random variables can model the uncertainty in experiments where a single event
occurs, resulting in a numeric value for some observable. An example of this might be
the value of a parameter (or multiple parameters) specied in the design. In the process of
building the system, these can change randomly because of manufacturing or other errors.
Simulink provides blocks that can be used to model random variables using the two standard
probability distributions: uniform and Gaussian or normal.
Stochastic processes, in contrast to random variables, are functions of time or some
other independent variable (or variables). The mathematics of these processes is complicated
because of the interrelation between the randomness and the independent variable(s).
In this chapter, we explore the use of Simulink to model both types of uncertainty. We
introduce the two types of random variables available in Simulink and show how to create
other probability distributions for use in modeling phenomena that are more complex. We
extend ideas of random variables to the simplest of stochastic processes where at each of
the sample times in a discrete system we select a new random variable. Finally, we show
discrete time processes that converge to continuous time processes in the limit where the
time steps are smaller and smaller.
5.1 Modeling with Random Variables in Simulink: Monte Carlo Simulations
The simplest uncertainties one might need to model are the values of parameters in a simu-
lation. These parameter uncertainties can be errors due to manufacturing tolerances, uncer-
tainties in physical values because of measurement errors, and errors in model parameters
because of system wear. Monte Carlo simulations use parameters selected from known
probability distributions. The parameters are then random variables, so each simulation has
Figure 5.1. Monte Carlo simulation demonstrating the central limit theorem. (Block diagram: one path sums n + 1 uniformly distributed random variables with zero mean (values from -1 to +1), the other path sums n/2 binomial random variables with Pr(x = 1) = Pr(x = -1) = 1/2; each sum is divided by its standard deviation before its running variance is displayed and sent to the workspace.)
Figure 5.2. Result of the Monte Carlo simulation of the central limit theorem. (Histogram of the values of the random variable versus the number of samples at each value.)
structure.) The first column is the time points (in this case the integers from 0 to n, which are the counters for the number of random samples we are creating and not time), and the second column is the random numbers generated in MATLAB using the rand function.
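(As a minimal sketch of how such a From Workspace variable might be created ahead of time in MATLAB; the variable name and size here are illustrative and are not necessarily the exact setup of the NCS model:)

n = 10000;                               % number of random samples
SumofUniforms = [(0:n)' , rand(n+1,1)];  % column 1: sample index, column 2: uniform random numbers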
The result of running this model with n = 10,000 is shown in Figure 5.2. Note that
the distribution is almost exactly Gaussian, with zero mean and unit variance as the central
limit theorem demands. Thus, we have done a numerical experiment where we have made
multiple runs with random number generators, and we have created the probability density function for the result, and we have seen that the result is Gaussian as theory predicts. This is exactly what Monte Carlo analyses are supposed to do. In the situations where it is used, the mathematics required to develop the probability distribution is so complex that this method is the only way to achieve the required understanding.
In the simulation above, we generate 100 million random variables to create the
histogram (50 million in MATLAB before we start the model, and 50 million over all of the
time steps in the model). Using tic and toc, the total time that this took on a 3.192 GHz
dual processor Pentium computer was 24.2 seconds.
When the simulation ends, there is a callback (using the StopFcn tab in the Model Properties submenu of the File menu) that executes the following code to create the plot in Figure 5.2:

y = simout.signals.values;
hist(y,100)
It is important to see how the parameters were set up in the dialog box for the binomial distribution. We used the Band-Limited White Noise block, which generates a Gaussian random variable at each time step, and then we extracted from this a random variable that is +1 when the sign is positive and -1 when it is negative. This uses the sgn (sign) block from the Math Library. We add the elements in the resulting vector to create a random variable that lies between -n/2 and n/2; thus it has a binomial distribution. The dialog box for the Band-Limited White Noise block has the value of the random seed set to

12345:100:12345+100*n/2-1

The seed needs to be a vector of different values so that each of the n/2 generated random variables is independent.
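The same construction can be checked quickly in plain MATLAB (a sketch, not the Simulink blocks themselves):

x = sign(randn(5000,1));   % +/-1 with Pr(+1) = Pr(-1) = 1/2
[mean(x) var(x)]           % mean near 0, variance near 1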
From this discussion, you should be able to see how to create a simulation that contains random variables either using the random number generators in Simulink or using MATLAB commands to generate random numbers. However, you might think from the example that the only options for the random variables are uniform distributions or Gaussian.
Well, while you were not looking, we sneaked a binomial distribution into the model above. We actually generated this distribution using Simulink blocks. Let us explore another example, the Rayleigh distribution, which will give some insight into how to generate more complex (and consequently more likely to occur in practice) random variables.
5.1.2 Simulating a Rayleigh Distributed Random Variable
Radio waves that reflect off two different surfaces have amplitudes that are the square root of the sum of the squares of the amplitudes from each direction. Thus, if the amplitude of each wave is Gaussian, the resulting signal is Rayleigh. To make this precise, let x(t) and y(t) be two Gaussian random variables; then the random variable z(t) = \sqrt{x(t)^2 + y(t)^2} is Rayleigh distributed. The probability density function for z(t) is

f_Z(z) = \begin{cases} \dfrac{z}{\sigma^2}\,e^{-z^2/2\sigma^2} & \text{for } z \ge 0, \\ 0 & \text{for } z < 0. \end{cases}
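A quick way to see this definition at work in plain MATLAB (a sketch, with unit-variance Gaussians as in the Simulink model):

x = randn(1e5,1);  y = randn(1e5,1);
z = sqrt(x.^2 + y.^2);     % Rayleigh distributed samples
var(z)                     % approximately 2 - pi/2 = 0.4292
hist(z,100)                % histogram comparable to the one shown in the text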
Figure 5.3. A Simulink model that generates Rayleigh random variables. (Two Random Number blocks feed Product blocks and a square-root Math Function block; a running variance block and a Display show the variance of the Rayleigh distributed random variable, 0.4292.)
How do we generate a Rayleigh distributed random variable for use in Simulink? The answer is to go to the definition. The model in Figure 5.3 creates a Monte Carlo simulation that generates a random variable with a Rayleigh distribution. This model is Rayleigh_Sim in the NCS library.
[Inline figure: Histogram for the Monte Carlo experiment of the Rayleigh distribution (values of the random variable versus the number of samples at each value).]
When this model runs, the generated noise sample appears in the Scope, and when the simulation stops, a MATLAB callback creates the histogram for the generated process (as we did in Section 5.1.1 above). The figure above is the histogram that results.
As an aside, the variance of the Rayleigh random variable is 2 - \pi/2 = 0.4292. The simulation uses the variance block from the Signal Processing library to compute the variance as the simulation progresses, and the value is exactly the 0.4292 for the 10 sec simulated in the example above.
It should now be clear how one goes about creating random variables with different distributions. The first step is to find a mathematical relationship between the desired random variable and the Gaussian or other random variable, and then build a set of blocks that form that function. In Exercise 5.1, we ask that you modify the Simulink model above to add the calculation of the random variable that is the phase angle of the reflected sine waves.
5.2 Stochastic Processes and White Noise
A random variable is a number that results from a random experiment. The experimenter may assign the number, or it might arise naturally because of the experiment. For example, with a coin flip, the experiment is the operation of flipping the coin and observing the result. The outcome is a member of the set that contains the two possible outcomes {Head, Tail}. A random variable for this experiment is the assignment of a number to each of the
outcomes (the number 1 for heads and the number 0 for tails, for example). A stochastic or random process is similar to a random variable except that the experimental result is the assignment of a function (usually, but not always, a function of time) to the event. Thus, the following are stochastic processes (in order of complexity from trivial to overwhelming).
A random process resulting from an event with only two outcomes:
Flip a coin, and assign the function \sin(\omega t) if the result of the flip is a head and assign the function e^{-t} if it is a tail.
The random walk process:
Start with x(0) = 0. At every discrete time step k\Delta t, flip a coin and move left if the result is a head and move right if it is a tail.
Thus, x(k\Delta t) = x(\{k-1\}\Delta t) - 1 if a head appears and x(k\Delta t) = x(\{k-1\}\Delta t) + 1 if a tail appears.
The Brownian motion (Wiener) process:
Let the coin flip above happen infinitely often in the sense that over any interval \Delta t, no matter how small, the coin is still flipped an infinite number of times.
Each of these processes is progressively more complex than the previous. The first is simple because there are only two outcomes, so the probability for each is simply 1/2. The second process has a growing number of possible outcomes, so at the nth time step there are n possible values for the process. The last process is inconceivably complex; it is the prototype for Brownian motion, the motion of a particle in a fluid. This process plays a fundamental role in the analysis of stochastic processes. (The process is also called the Wiener process.)
Most of us have heard of white noise, perhaps only as a noise source that masks background noise to allow us to sleep well. However, this ubiquitous process is an essential part of all analyses of systems with noise, and it is, therefore, a very convenient mathematical fiction. As a process, white noise does not exist (it is easy to demonstrate that this process has infinite energy), but it is possible to give some insight into its properties and show a way that it is related to the Brownian motion process, and as a consequence it can be subjected to a rigorous mathematical treatment. As the first step in understanding systems excited by noise, let us explore the Brownian motion process in more detail. We will see how to use it as a prototype of white noise.
5.2.1 The Random Walk Process
The starting point for our discussion of Brownian motion is the random walk process described above. Let us create a Simulink model that generates this process (Figure 5.4). We need to use the 1/z block from the Discrete library to store the previous value of the Random Walk, and then we need to add this to the result of Flipping the Coin, which is simulated using the sign of the Uniform Random Number Generator (a sign of +1 or -1).
The simulation has a seed for the random number generator that is a vector of length nine, so each run creates nine separate instantiations of the random walk.
Figure 5.4. Simulink model that generates nine samples of a random walk. (The model iterates x_{k+1} = x_k + sign(y_k) with x_0 = 0, using a Uniform Random Number block, a Sign block, a Unit Delay (1/z), and a Scope.)
Figure 5.5. Nine samples of a random walk process. (Nine simulations of the random walk for 10000 coin flips; random walk values versus the index k.)
Figure 5.5 shows the nine samples of the random walk that result with 10,000 flips. (After n iterations, the value of the random walk must be between -n and n, but it can be shown that a random walk grows like \sqrt{n}, and this is indeed the case for these nine samples.)
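For readers who want a quick check outside Simulink, here is a minimal MATLAB sketch of the same construction (lengths illustrative):

N = 10000;
steps = sign(rand(N,9) - 0.5);   % +/-1 with equal probability, nine paths
walks = cumsum(steps);           % running sums are nine random walks
plot(walks)                      % excursions grow roughly like sqrt(N)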
Let us compute some of the statistics for this process. The mean of the process is as follows:

E\{x_k\} = \sum_{i=0}^{k} E\{\operatorname{sign}(y_i)\} = 0.

The fact that the mean of the sign of y is zero follows from the fact that the probability that the sign is +1 (from the uniform distribution) is 1/2 and the probability that the sign is -1 is also 1/2. The mean is therefore \tfrac{1}{2}(+1) + \tfrac{1}{2}(-1) = 0.
Similarly, the variance of the process is

\sigma_{RW}^2(k) = E\left\{\left(\sum_{i=0}^{k} \operatorname{sign}(y_i)\right)^2\right\} = \sum_{i=0}^{k} E\left\{\operatorname{sign}(y_i)^2\right\} = k.
Our goal is to convert this discrete time random walk into a continuous process by letting the number of coin flips in any time interval go to infinity. That means that

\lim_{\substack{n \to \infty \\ \Delta t \to 0}} x_k(k, \Delta t) = B(t).

We need to verify that the result B(t) is a mathematical function that exists and has the desired property that it represents the random motion of a particle immersed in a fluid.
We have scaled the time axis in such a way that it can become continuous, but we do not know how to scale the outcome of the coin flip (in the random walk it is +1 or -1 for each flip), so we need to answer the following very simple question:
If we scale the number of coin flips in the random walk using a scale factor of \Delta t (we are scaling the horizontal axis that will converge to the time axis), then how must we scale the position (vertical) axis so the process converges to a valid continuous time process as \Delta t approaches zero?
This question arose at the end of the 19th century because physicists were trying to model Brownian motion as a verification of the atomic theory of matter in which the motion was due to collisions with atoms (see [5]).
5.2.2 Brownian Motion and White Noise
Brownian motion is a process named for the English botanist Robert Brown, who stated in a paper in 1828 that the motion of a particle immersed in a fluid was not due to something alive in the particle but was probably due to the fluid. This led to attempts to find what it was in the fluid that caused the motion.
The Polish physicist Marian von Smoluchowski was trying to model the Brownian motion process using a limiting argument on the random walk. He realized that this approach would provide a way of verifying the atomic theory of matter. Physicists who believed in the atomic theory thought that the observed Brownian motion was due to the very large number of collisions between the particle and the atoms in the fluid. While Smoluchowski spent a lot of effort on solving this problem using the approach described here, Albert Einstein worked with the underlying probability density instead, deriving this density from first principles.
Einstein's first 1905 paper (the year in which he also published the photoelectric effect and the special theory of relativity papers) reported his work on Brownian motion. Smoluchowski published his results about a year later. A few years later, the French physicist Perrin won the Nobel Prize by using Einstein's result with experimental measurements of Brownian motion to come up with an estimate for Avogadro's number that was in excellent agreement with (and actually more accurate than) other measurements.
The velocity of a particle undergoing Brownian motion is not computable, and Perrin showed that this was true when one attempted to measure the velocity of a particle undergoing Brownian motion. In fact, Brownian motion is an example of a mathematical function that is nowhere differentiable. We will show this result and use it to show why white noise does not exist, by itself, in any mathematical sense. (However, a form of the process exists whenever it appears, as it always does in practice, in an integral.)
The discussion that follows is about how one goes about modeling a random process in Simulink. To do this properly we need to understand the nature of the continuous time process called white noise and its relation to Brownian motion. The way we will do this is to let the random walk above become a continuous process by letting the time between coin flips go to zero (in a very precise way). Brownian motion is the result of the convergence in the limit, as the time between flips goes to zero, to a continuous time process. Before we explore this convergence, we need to describe what we mean by convergence.
If we look at the Simulink results for the random walk simulation above, it is quite clear that the random walk does not converge to a deterministic function. In fact every time we redo the experiment, the results are different, so what is converging? Remember that the stochastic process is the result of a random experiment, so it is a function of two variables: one is time, and the other is the experimental outcome (the observed coin flips and the process generated by the particular sequence of flips).
The process can converge in the following ways:
in terms of the statistics (i.e., in the mean, mean square, or variance sense, for example);
in terms of the probability distribution (i.e., in the sense that the probability distribution for the random walk converges to a probability distribution for the continuous time process);
in the sense that for every experimental outcome, the function of time that results converges to a well-defined function with certainty (in a sense that is made precise but is beyond the scope of this discussion).
We will use the convergence in the probability distribution sense in the following, but it is true that the random walk converges in every one of these senses to Brownian motion.
Let us assume that we scale the graphs above so that along the flips axis, the variable k is replaced by k\Delta t. The question is, how do we scale the vertical axis? Since we do not know exactly what to use, let us just scale it with an unknown value, say s. In that case, the mean of the process (before we take the limit) is still 0 and the variance is ks^2. (Exercise 5.2 asks that you verify this using the calculations for the variance of the random walk above.)
It is now a simple matter to use the central limit theorem to develop the probability density function for the Brownian motion process. Let the number of flips k go to infinity
and let \Delta t go to 0 in such a way that the product becomes the time variable t in the Brownian motion process. That means that

\lim_{\substack{\Delta t \to 0 \\ n \to \infty}} (k\Delta t) = t,

or equivalently, the number of flips k is constrained to be t/\Delta t for any time t.
We use the central limit theorem. The variance of the scaled random walk (scaled by s) is ks^2. Thus, in the limit, when we force k to be t/\Delta t, the variance of the random walk becomes ts^2/\Delta t. It is now obvious that the scale factor, s, must be proportional to \sqrt{\Delta t}. If it is not, the limiting variance will be either zero or infinity, neither of which makes the limiting process have the desired properties. The infinite variance is a process that could have infinite jumps in any short period, and the zero variance would mean the process is not random. Brownian motion has neither of these properties.
Therefore, scaling the random walk by \sigma\sqrt{\Delta t} will insure convergence to the Brownian motion process. (The parameter \sigma is the proportionality constant and is the standard deviation of the resulting Gaussian process.) Thus, we have the following result.
Create a random process B(t) from the random walk as follows:

B(t) = \lim_{\substack{\Delta t \to 0 \\ k = t/\Delta t}} \frac{1}{\sigma\sqrt{t}} \sum_{i=1}^{k} \sqrt{\sigma^2 \Delta t}\,\operatorname{sign}(y_i).

Then B(t) is a process that has a probability density function that is normal with mean 0 and variance 1.
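A small numerical illustration of this scaling (a sketch with illustrative values; it checks only the end-point variance, not the full limit):

sigma = 1;  dt = 1e-4;  T = 1;  k = round(T/dt);  M = 2000;
steps = sigma*sqrt(dt)*sign(rand(k,M) - 0.5);  % each step scaled by sigma*sqrt(dt)
B_T = sum(steps);                              % value of each of the M walks at time T
var(B_T)                                       % approximately sigma^2*T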
The usual way of stating the limit is to remove the scale factor 1/(\sigma\sqrt{t}) on the process so the variance of the result is \sigma^2 t. Both Einstein and Smoluchowski showed this result, but using the approach we are using, Smoluchowski was able to prove that the process does not have a derivative anywhere. We can see this if we try to differentiate the random walk and then take the limit as we did above. When we do, we get the following:
The Brownian motion process B(t) has an infinite derivative everywhere.
The derivative is

\frac{dB(t)}{dt} = \lim_{\Delta t \to 0} \frac{B(t + \Delta t) - B(t)}{\Delta t}.

Based on the scaling we did above, the difference in the numerator is proportional to \sqrt{\Delta t}. Let the proportionality constant be \bar\sigma; then the derivative becomes

\lim_{\Delta t \to 0} \frac{B(t + \Delta t) - B(t)}{\Delta t} = \lim_{\Delta t \to 0} \frac{\bar\sigma\sqrt{\Delta t}}{\Delta t},

which is infinite for all times t.
The Brownian motion process has the following properties:
The process is Gaussian at every time t, with mean zero and variance \sigma^2 t.
The process is 0 at t = 0.
The process has increments that are independent random variables in the sense that B(t_n) - B(t_{n-1}) and B(t_{n-2}) - B(t_{n-3}) are independent random variables for any nonoverlapping intervals (t_{n-3}, t_{n-2}) and (t_{n-1}, t_n).
The process does not have a derivative anywhere.
The integral of any function of time multiplying the derivative of B(t) is evaluated using the integral \int f(t)\,dB(t), which is well defined even though the derivative of B(t) is not.
The fifth property is the mathematical rationale for the existence of white noise. Even though w(t) = dB(t)/dt does not exist, it does make sense to talk about integrals that contain products of w(t) (white noise) with other functions because these integrals are

\int f(t)w(t)\,dt = \int f(t)\,dB(t).

The first four of these properties follow from the discussion so far. Norbert Wiener demonstrated the last result in the 1950s, but it will take us too far afield to show the result here, so we will accept it without proof. However, because of his seminal work in showing these properties, we call the process B(t) the Wiener process, and we will use this name from now on.
The name white noise comes from an analogy with white light that contains every color. White noise as a stochastic process is supposed to contain every frequency. Let us see what that means.
A stochastic process is called stationary if for any two times t_1 and t_2 the correlation function, defined by E\{y(t_1)y(t_2)\}, depends on only the difference between the two times t_1 and t_2. That is,

R(\tau) = E\{y(t_1)y(t_2)\} = R(|t_2 - t_1|),

where \tau = |t_2 - t_1|.
If we go back to the attempt to define the derivative of the Wiener process, the correlation function for white noise is

E\{w(t)w(t + \tau)\} = \begin{cases} 0, & \tau > 0, \\ \infty, & \tau = 0, \end{cases} \quad = \; \sigma^2\delta(\tau).

The first part of this assertion comes from the fact that the Wiener process has independent increments, so the expected value of nonoverlapping intervals is the product of the means (and since the mean is 0, the result is 0). The last part of this assertion comes from the fact that the derivative of B(t) is infinite. The \sigma^2 in this definition is often called the noise power. It is not the variance of the process (which is infinite); however, the integral of this process has the variance \sigma^2 t, since the integral is the Wiener process.
We can compute the correlation function of a stationary process directly from the process, or from the way we generated the process. If, for example, the process were the result of using white noise as an input to a linear system, the output would be the convolution of the white noise with the system's time response to an impulse. With this approach, the white noise process only appears in the solution through the convolution integral, and from property 5 of the Wiener process, this integral always is computable. Remember, however, that the convolution integral can be computed from the product of the transforms of the two functions. (In this case, we must use the Fourier transform because a process can be stationary only if it has been in existence for an infinite time, as we will see when we attempt to develop simulations of these processes.)
The Fourier transform of the correlation function is the spectral density function. For the white noise process the spectral density is

S(\omega) = \int \sigma^2\delta(\tau)e^{-i\omega\tau}\,d\tau = \sigma^2.

Since the spectral density is constant for all frequencies, we can now see why we use the name white noise to describe it.
If white noise does not exist, how can we simulate a system where the white noise stochastic process excites a linear system? The answer is to use the Wiener process.
5.3 Simulating a System with White Noise Inputs Using the Wiener Process
From the discussion in the previous section, it is clear that to simulate a system with white noise we must use care. When simulating a continuous time system with a white noise input, the method used is always to select a Gaussian random variable at each of the numerical solver's time steps and then to scale the variable so it has the same effect on the solution as if the random variable were continuous. From the results of Section 5.2, this means that the scaling has to be proportional to the square root of the time step used to generate the white noise sample. We will now look into how Simulink achieves this with the built in Band-Limited White Noise block.
5.3.1 White Noise and a Spring-Mass-Damper System
Open the model White_Noise in the NCS library. We will use this model to explore Monte Carlo simulations with noise sources and to explore methods that speed up the simulations. The first part of the model uses the built in white noise block with 10 different random number seeds to create 10 separate simulations of white noise exciting a linear system. The model is the state-space version of the spring-mass-damper with the damping set to make the damping ratio 0.707. This Simulink model uses the integrator and gain blocks that we did in Chapter 2. The model is in Figure 5.6, and Figure 5.7 shows the 10 responses. The model uses the fixed step solver with a nominal solver step of 0.005 sec. The undamped natural frequency of the spring is 10 rad/sec (1.6 Hz).
The variance and mean calculations are from the Signal Processing Blockset library. The data for the model come from a Callback when the model opens. The integrator step size and the various sample times in the model are all the same (called samptime in MATLAB), so when this is changed, the simulation's solver and all of the calculations for the white noise and the statistics are updated too. Open this model from the NCS library and run the simulation. You should see roughly the same variances as in Figure 5.6. (The variances of the position and the velocity are difficult to read in this figure, so make sure you run the model; the values are all around 3.53 and 353.6, respectively.)
The first thing we need to look at in this model is the Band Limited White Noise block. This block is an example of a masked subsystem. To see what the model is like under the mask, right click on the block and select Look Under Mask from the menu that opens. A new window will open that contains the blocks in Figure 5.8.
Figure 5.6. Simulink model for the spring-mass-damper system. (A Band-Limited White Noise block drives the state-space spring-mass-damper built from Integrator and Gain blocks (omega^2 and 2*zeta*omega); Display blocks show the running mean and variance of the 10 position and velocity responses.)

Figure 5.7. Motions of 10 masses with white noise force excitations.

Figure 5.8. White noise block in the Simulink library (masked subsystem). (The subsystem is a Gain block that multiplies the noise by sqrt(Cov)/sqrt(Ts).)
(The block will open with the Gain block small; we enlarged it here to make its contents visible.) Along with the mask is a mask dialog that allows the user to select the values for the parameters used under the mask. To see this, again right click on the block, but this time select Edit Mask. A dialog window with four tabs will appear. The first tab is the instructions for drawing the little wiggly line (representing noise) on the block. The second tab is the dialog that allows the user to change the parameters he wants to use in the subsystem. Options for these are an edit box, a pop-up menu, or a check box. The names of the parameters and a user prompt can also be set up. The third tab, called Initialization, allows the user to set up MATLAB code to set up the actual values used in the blocks under the mask. Note that the mask is a form of subroutine, so these parameter values only appear in the masked subsystem. For the Band Limited White Noise block, the important calculations are the test done in the initialization to ensure that the terms in the variable Cov are all positive, and the calculation done in the block below. This calculation should look familiar. It is the scaling that we determined above (with the noise power \sigma^2 denoted by Cov and the value of \Delta t denoted by Ts).
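In plain MATLAB the same scaling would look like the following sketch (the numerical values are the model's nominal ones):

Ts = 0.005;  Cov = 1;  N = 2000;
w = sqrt(Cov)/sqrt(Ts) * randn(N,1);   % one scaled Gaussian sample per step of size Ts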
Let us perform some experiments where we change the solver step size. The goal is to verify that this scaling keeps the results statistically consistent as we make the changes. The step size is the MATLAB variable samptime. (It is changed at the MATLAB command line.) The model opens with this value set to 0.005. Select some smaller and larger time steps (say from 0.001 to 0.05). You should observe that the mean and variance of the simulated processes are about the same. (Remember in a Monte Carlo simulation the results will not be the same from simulation to simulation.) One caveat: since the seed used for the random number generator is the same in every simulation, if you keep everything in the simulation the same, the results will be identical from run to run.
One final caution: when you simulate the response of the system you need to be careful to select a solver step size and a value for the sample time in the white noise block that is smaller by about a factor of 10 than the response time of the system (in this case faster than about 0.05 sec). See what happens if you make the samptime equal to 0.1 sec. (You should even see some effect when it was 0.05 sec.) This effect comes from the sampling theorem.
The value of the noise power in the Band Limited White Noise block is set to 1. It would seem that this should set the value of the variance of the position of the mass at 1, but it does not. Let us try to find out why.
5.3.2 Noisy Continuous and Discrete Time Systems: The Covariance Matrix
To describe a stochastic process we need to specify its joint probability density function for the values at any arbitrary set of times. White noise is characterized completely by the Wiener process because the independence of all of the increments makes the joint density just the product of the densities for each of the increments. When the stochastic process results from white noise excitation of a linear system, the joint density is determined from a matrix whose dimension is the highest derivative in the underlying differential equation. This matrix is the covariance matrix. We need to compute this matrix for a linear system.
Consider the linear state-space system excited by a white noise process w(t) given by

\frac{dx(t)}{dt} = Ax(t) + Bw(t).

The covariance matrix, P(t), of this vector stochastic process is defined as P(t) = E\{x(t)x(t)^T\}. It is relatively easy to determine a differential equation for this matrix. We simply differentiate P(t), using the fact that the derivative and the expectation commute. Thus,

\frac{dP(t)}{dt} = E\left\{\frac{d}{dt}\left[x(t)x(t)^T\right]\right\}
= E\left\{\left(\frac{d}{dt}x(t)\right)x(t)^T + x(t)\left(\frac{d}{dt}x(t)\right)^T\right\}
= E\left\{(Ax(t) + Bw(t))\,x(t)^T\right\} + E\left\{x(t)\,(Ax(t) + Bw(t))^T\right\}
= AP(t) + P(t)A^T + E\left\{(Bw(t))\,x(t)^T + x(t)\,(Bw(t))^T\right\}
= AP(t) + P(t)A^T + BSB^T.
The last step comes from the fact that the process w(t) is the vector white noise process. The covariance matrix E\{(Bw(t))x(t)^T\} in the next to last step is therefore determined using the solution of the linear state variable equation that we developed in Chapter 2. We can now use this solution, along with the fact that E\{w(t)w^T(\tau)\} is the impulse function scaled by the noise power parameter (for each of the white noise processes in the vector), to develop the solution. Remember that when an impulse appears inside an integral, it selects the values of the integrand at the point the impulse occurs (in this case when (t - \tau) = 0). Therefore, we have
E\left\{(Bw(t))\,x(t)^T\right\}
= E\left\{(Bw(t))\left(\Phi(t)x_0 + \int_0^t \Phi(t - \tau)Bw(\tau)\,d\tau\right)^T\right\}
= B\,E\left\{\int_0^t w(t)w^T(\tau)B^T\Phi(t - \tau)^T\,d\tau\right\}
= B\int_0^t E\left\{w(t)w^T(\tau)\right\}B^T\Phi(t - \tau)^T\,d\tau
= B\int_0^t \tfrac{1}{2}S\,\delta(t - \tau)B^T\Phi(t - \tau)^T\,d\tau
= \tfrac{1}{2}BSB^T.
The factor of 1/2 in this sequence comes from the fact that the impulse is a two-sided function and the integration limit t is right in the center of these two sides, so only 1/2 of the value comes from the impulse. The other \tfrac{1}{2}BSB^T comes from the second term in the expectation (i.e., from the E\{x(t)(Bw(t))^T\} term).
The linear matrix equation

\frac{dP(t)}{dt} = AP(t) + P(t)A^T + BSB^T

has a solution given by

P(t) = \Phi(t - t_0)P(t_0)\Phi(t - t_0)^T + \int_{t_0}^{t} \Phi(t - \tau)BSB^T\Phi(t - \tau)^T\,d\tau.
This is easy to verify by substituting back into the differential equation. Exercise 5.3 asks that you make this substitution, but remember when you do that the derivative of an integral whose upper limit depends on the independent variable comes from the identity

\frac{d}{dt}\int_a^{b(t)} c(t, \tau)\,d\tau = c(t, b(t))\frac{db(t)}{dt} + \int_a^{b(t)} \frac{\partial}{\partial t}c(t, \tau)\,d\tau.
Before we investigate the use of this procedure to model a continuous time stochastic process as a discrete time equivalent, we can use the development above to compute the steady state noise variances for the spring-mass-damper Simulink model in Section 5.3.1. This system has a state variable model given by

\frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 \\ -100 & -14.14 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 100 \end{bmatrix} w(t).
The state x(t) in this model has the usual components. (The first is the position of the mass, and the second is the velocity.) Therefore, the covariance matrix for the state x(t) is

\begin{bmatrix} \sigma_{pos}^2 & \rho\,\sigma_{pos}\sigma_{vel} \\ \rho\,\sigma_{pos}\sigma_{vel} & \sigma_{vel}^2 \end{bmatrix}.

The diagonal terms are the variances of the position and velocity and the off diagonal terms show the correlations between the two. The variable \rho has a value whose magnitude is less than 1, and it measures the correlation between the position and the velocity. We can calculate the covariance matrix to verify that the Simulink model of the mass's position gave the correct variance. We do this by solving for the steady state covariance in the differential equation above. Steady state means that the covariance matrix does not change, so its derivative is 0. Thus,
AP(t) + P(t)A^T + BSB^T = 0.

Substituting the values for A, B, and S from the state model above gives

\begin{bmatrix} 0 & 1 \\ -100 & -14.14 \end{bmatrix} P(t) + P(t)\begin{bmatrix} 0 & 1 \\ -100 & -14.14 \end{bmatrix}^T + \begin{bmatrix} 0 & 0 \\ 0 & 10000 \end{bmatrix} = 0.
This equation, called a Lyapunov equation, is solved with an M-file that is built into the control system toolbox in MATLAB. The command is lyap(A,BSBT), where A is the matrix A and BSBT is the matrix BSB^T. (Remember that B = \begin{bmatrix} 0 \\ 100 \end{bmatrix} and S = 1 for this system.) The MATLAB commands and the result for the steady state value of P (Pss) are as follows (a version of lyap, called lyap_ncs, has been included in the NCS library so you can try doing this calculation if you do not have lyap available):
>> A = [0 1;-100 -14.14];
>> BSBT = [0 0;0 10000];
>> Pss = lyap(A,BSBT)
Pss =
    3.5355   -0.0000
   -0.0000  353.5534
As can be seen, the variances from this calculation match the simulation results in the Simulink model we generated and evaluated in the previous section (at least statistically, since the simulation generates an estimate from the 10 runs). This then answers the question we asked at the end of Section 5.3.1. This analysis also shows that the noise power in the Band Limited White Noise block does not directly determine the variance of the output from the simulation; only the calculations above provide the correct variances for the various state variables in the model.
5.3.3 Discrete Time Equivalent of a Continuous Stochastic Process
We are now in a position to develop a discrete time model for a continuous time stochastic linear system. In general, if we want to use a discrete process to create a solution that is in some sense equivalent to the original continuous time process, we need to come up with a definition for what we mean by equivalence. The simplest and most direct is covariance equivalence, defined as follows:
A continuous time system and a discrete time system (for the sample times \Delta t) are covariance equivalent if
they have the same mean at the sample times;
they have the same covariance matrix at the sample times.
Remember from Section 2.1.1 that the discrete time solution of a continuous state variable model is

x_{k+1} = \Phi(\Delta t)x_k + \Gamma(\Delta t)u_k.

We assume that the input in this model is a vector of Gaussian random variables (each one like the random walk) whose values are independent from step to step and Gaussian. We also assume that the random variables have the same covariance matrix at each time step (i.e., E\{u_k u_k^T\} = S_{discrete} for all values k). The covariance matrix for the discrete process is the solution of the difference equation, which is determined as follows:
P_{k+1} = E\{x_{k+1}x_{k+1}^T\} = E\left\{(\Phi x_k + \Gamma u_k)(\Phi x_k + \Gamma u_k)^T\right\} = \Phi P_k \Phi^T + \Gamma S_{discrete}\Gamma^T.
The state x_k is a vector random variable, and it is independent of u_k (because x_k only depends on the past values of u, and all of the values of u_k are independent, zero mean random variables). This means that E\{x_k(\Gamma u_k)^T\} = E\{x_k\}E\{(\Gamma u_k)^T\} = 0, and E\{\Gamma u_k(x_k)^T\} = E\{\Gamma u_k\}E\{(x_k)^T\} = 0.
The covariance matrix for the continuous process, P(t), will be exactly the same as the covariance matrix of the discrete process P_k when t = k\Delta t if the solution to the continuous time covariance matrix equation above at the times t = k\Delta t is the same as the covariance matrix for the discrete time system. That is, P_k = P(k\Delta t), where P_k comes from iterating the equation above, and P(k\Delta t) is the solution of the continuous time covariance equation at the times k\Delta t. We need to find a value for S_{discrete} that insures this.
Comparing the solution for the continuous covariance matrix at time t = \Delta t when the initial covariance (at t = 0) is 0, with iterations from the difference equation for the discrete covariance matrix with the initial value 0, gives an equation for S_{discrete} as follows:

\Gamma S_{discrete}\Gamma^T = \int_0^{\Delta t} \Phi(\tau)BSB^T\Phi(\tau)^T\,d\tau.

There are many ways of factoring the left-hand side of this solution to give an explicit value for S_{discrete}, and we will explore this further as we return to the spring-mass-damper example. Exercise 5.4 asks you to verify that the S_{discrete} that results from this single iteration insures that the covariance matrices are the same for all k.
Let us use a sample time for the discrete model that is 0.01 sec. The value of \Phi(\Delta t) is determined, as in Section 2.1.1, using the c2d_ncs program in the NCS library. Now we need to determine \Gamma(\Delta t) using the solution of the covariance matrix at the time \Delta t.
When the initial covariance matrix is 0, the solution for the covariance matrix at time \Delta t is

P(\Delta t) = \int_0^{\Delta t} \Phi(\Delta t - \tau)BSB^T\Phi(\Delta t - \tau)^T\,d\tau.
Many numerical approaches will compute this integral, but let us try doing it in Simulink. We will use the ability of Simulink to solve matrix differential equations by setting up the differential equation dP(t)/dt = AP(t) + P(t)A^T + BSB^T. Figure 5.9 shows the model (called Covariance_Matrix_c2d in the NCS library). Running this model produces the solution for the matrix P(t) in the MATLAB workspace.
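(As a cross-check, the same integral can be approximated directly in MATLAB; this is a sketch using a simple midpoint Riemann sum, and the number of quadrature points is illustrative.)

A = [0 1; -100 -14.14];  BSBT = [0 0; 0 10000];  dt = 0.01;
m = 1000;  h = dt/m;  P = zeros(2);
for i = 1:m
    Phi = expm(A*(i - 0.5)*h);     % Phi(tau) at the midpoint of each sub-interval
    P = P + Phi*BSBT*Phi'*h;
end
P                                  % should be close to the matrix reported below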
To send P to MATLAB at the end of the simulation, the model uses the Triggered Subsystem block from the Ports and Subsystems library. In a triggered subsystem, the trigger causes any blocks inside the subsystem to run when the trigger signal increases in value (the default). The icon at the top of the subsystem, where the trigger signal is connected, indicates the trigger type; if the icon looks like a step discontinuity that is increasing, the trigger is rising, and the options are increasing, decreasing, or both. The contents of the subsystem appear when the block is opened.
From the annotation for this model, which shows the differential equation simulated, you can work through the diagram and verify the Simulink model. You should do this by carefully going through the model or, better still, by creating the model yourself. Before running the simulation, set up the values of A and BSB^T in MATLAB. (From the Simulink model you can see that the variables are called A and BSBT respectively, and if you used the lyap_ncs code above, these should already exist in MATLAB.) After the simulation is complete, you can see what P is in MATLAB by typing P at the command line. The result should be the matrix

P =
    0.0030    0.4333
    0.4333   86.8229
Figure 5.9. Continuous linear system covariance matrix calculation in Simulink. (The model computes P(t) from the differential equation dP/dt = AP + PA' + Q for the values of A and Q in the MATLAB workspace; a Clock, a Compare To Constant block (>= deltat), and a Triggered Subsystem send P(deltat) to the MATLAB workspace. All signals are [2x2] matrices.)
The result is the covariance matrix that can be used for simulating with a covariance
equivalent discrete system.
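As a sketch of how this matrix can be used outside Simulink (one of the many possible factorings mentioned above; the variable names and step count are illustrative, and A and P are assumed to be in the workspace from the calculations above):

dt    = 0.01;
Phi   = expm(A*dt);      % state transition matrix, Phi(dt) = e^(A*dt)
Gamma = chol(P)';        % lower-triangular factor, so Gamma*Gamma' = P
x = zeros(2,1);  N = 1000;  X = zeros(2,N);
for k = 1:N              % covariance-equivalent discrete simulation
    x = Phi*x + Gamma*randn(2,1);
    X(:,k) = x;
end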
Armed with this value for the covariance matrix we can create the equivalent digital
model for the original spring-mass-damper continuous system. Figure 5.10 shows the
Simulink model. We use the discrete state-space model from the discrete library in the
model, and we have set it up for three different sample times (0.01 sec as derived above,
0.05 sec, and 1 sec). The model also contains the original continuous time model as a
subsystem.
The resulting time histories for the four simulations are in Figure 5.11. The figures show both the position and the velocity of the mass; the lighter line is the velocity, and the smaller and darker line is the position. Notice that the simulations for the discrete time models at 0.01 and 0.05 sec match the continuous time system very nicely. (Again, remember that the match is only in the statistical sense.) For the last sample time, namely 1 sec, the match is not good. Why is this?
The reason is our old friend the sampling theorem. When we create a discrete model at 1 sec, the conversion to the discrete model has errors. These arise because the system has a natural frequency of 10 radians/sec (1.592 Hz, corresponding to a period of 0.628 sec). The 1-sec sample time is almost two times slower than the natural oscillation of the mass and as such violates the requirements of the sampling theorem for accurate reconstruction. (The discrete time system aliases the oscillation.)
As an exercise, you should isolate each of these discrete models and see how long Simulink needs to create the results. Remember that to do this you need to run the simulation
Figure 5.10. Noise response continuous time simulation and equivalent discrete
systems at three different sample times.
from MATLAB using tic and toc before and after the simulation command (in MATLAB type tic;sim('modelname');toc;).
If you don't want to do this exercise yourself, we have simulated each of the models 10 times (with each of the simulations using a stop time of 1000 sec) using the M-code shown below to record the simulation times and compute their averages (see the MATLAB file Test_Noise_Models.m in the NCS Library).
for j = 1:10
    for i = 0:3
        str = ['tic; sim(''test' num2str(i) '''); t' num2str(i) '(' num2str(j) ')=toc;'];
        eval(str)
    end
end
% Averages:
Avgsim0 = mean(t0)
Avgsim1 = mean(t1)
Avgsim2 = mean(t2)
Avgsim3 = mean(t3)
Figure 5.11. Simulation results from the four simulations with white noise inputs. (Four panels: continuous time at 0.01 sec, and discrete models with sample times of 0.01 sec, 0.05 sec, and 1.0 sec; each panel shows position and velocity versus time.)

Table 5.1. Simulink computation times for the four white noise simulations.

    Model                     Avg. Simulation Time
    Continuous                1.0490 sec.
    Discrete at 0.01 sec.     0.2571 sec.
    Discrete at 0.05 sec.     0.0572 sec.
    Discrete at 1.00 sec.     0.0105 sec.
The results we obtained from this code are in Table 5.1.
There is an improvement of about a factor of 5 in the simulation time comparing the
fast sample time with the continuous solution using the ode45 solver (with a max step size
of 0.01 sec), so when it comes to large and complex Monte Carlo simulations, this approach
can offer a considerable computational advantage.
Based on the work we have done in this chapter, you should be able to simulate any system with arbitrary noise sources. If you are given the properties of a noise in the form of a power spectral density function (which implies that the noise is stationary), it is always possible to find a linear system that can simulate this noise source.
In the next section, we look at one particular and important process that has a power
spectral density function that is proportional to 1/f (one over the frequency). The approach
we discuss is a simple way to create a noise that has a desired power spectral density function.
5.3.4 Modeling a Specified Power Spectral Density: 1/f Noise
In many electronic devices (such as sensors that are used to create an image from radiation in visible, infrared, or ultraviolet light) quantum mechanical effects manifest themselves as a noise that has a power spectral density that is proportional to the reciprocal of the frequency. These noise sources are not from systems that have rational transfer functions, so a method to create a typical noise sample for a Monte Carlo simulation needs to use some approximation. Let us see why this is true.
The power spectral density (PSD) for a linear system excited with white noise comes from the correlation function of the process because, by definition, the power spectral density function is

S(\omega) = \int R(\tau)e^{-i\omega\tau}\,d\tau,

where R(\tau) is the correlation function

R(\tau) = E\{x(t)x(t + \tau)\}.
If the process x(t) comes from a linear differential equation, then it is the result of convolving the input with the impulse response of the system. Since the correlation function involves the product of x(t) at two different times, the correlation function R(\tau) requires two convolutions to compute. In fact, if a linear system with impulse response h(t) has the (real) stochastic process u(t) as its input, then the correlation function of the output is R(\tau) = h(\tau) * R_u(\tau) * h(-\tau), where R_u(\tau) is the correlation function of the input, and the * operator denotes the convolution integral.
Thus, we have the following very important result. It is the reason that stochastic process models use linear systems excited by white noise:
If a real, stationary, stochastic process excites a linear system with impulse response h(t), the output has the correlation function

R(\tau) = h(\tau) * R_u(\tau) * h(-\tau).
Therefore, the power spectral density function of the output is

S_{out}(\omega) = |H(i\omega)|^2 S_{in}(\omega).
The proof of this is very straightforward and is an exercise (Exercise 5.5 at the end of
this chapter).
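A short numerical illustration of this result (a sketch; the first-order discrete filter and the data lengths are illustrative, and pwelch and freqz come from the Signal Processing Toolbox):

b = 1;  a = [1 -0.9];                    % H(z) = 1/(1 - 0.9 z^-1)
w = randn(2^16,1);                       % unit-variance white noise input
y = filter(b, a, w);
[Pyy, f] = pwelch(y, [], [], [], 1);     % estimated PSD of the output
H = freqz(b, a, 2*pi*f);                 % frequency response at the same frequencies
plot(f, Pyy./abs(H).^2)                  % roughly flat: S_out is proportional to |H|^2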
Since the input is white noise, S_{in}(\omega) = 1, and it is easy to see that the PSD of the output is simply the magnitude squared of the transfer function of the system. Since the transfer function of an nth order linear system is the ratio of polynomials of order (at most) n (in s), the PSD is always the ratio of real polynomials with only even powers of \omega, and it is of order 2n. (These properties are because of the square of the magnitude.)
From this, we can see the problem with generating a spectrum that has a PSD proportional to 1/f. The linear system that generates such a spectrum has a transfer function given by H(\omega) = 1/\sqrt{i\omega} because the magnitude squared of this transfer function is 1/\omega = 1/(2\pi f). The impulse response of the linear system with this transfer function comes from the inverse Laplace transform of 1/\sqrt{s}, which is 1/\sqrt{\pi t}. So how do we come up with an approximation for this spectrum? The method is to approximate the transfer function on a semilog plot (i.e., on the Bode plot). Since the spectrum is real, we do not have to worry about the phase.
We can approximate a system using products of first order rational transfer functions:

H(s) = \prod_{i=1}^{n} \frac{s + z_i}{s + p_i}.
On a Bode plot, this transfer function can be used to approximate any straight line up to
the frequency of the last pole. The trick is to place the zeros and poles symmetrically with
respect to the desired straight line.
For our approximation, we use the fact that the transfer function at the pole is \frac{p_i + z_i}{2p_i} and at the zero it is \frac{2z_i}{z_i + p_i}, a difference of

\frac{p_i + z_i}{2p_i} - \frac{2z_i}{z_i + p_i} = \frac{(p_i - z_i)^2}{2p_i(z_i + p_i)}.
We allocate this error around the desired frequency response as follows:
First, the number of poles used to make the approximation is arbitrary, but the more that we use, the better the approximation will be.
The simplest way to set the number of poles is to assume that each pole is a factor of 2 in frequency away from the previous pole (i.e., they are an octave apart). In general this separation will ensure that the approximation is within 1 db of the desired spectrum (1 db equates to a maximum error of about 12% in the magnitude at the frequencies of the pole and zero; the error is zero at the midpoint between them).
The value of the zeros is set to \sqrt{2} times the poles (this places the amplitude symmetrically around the poles and zeros).
There is no way that any finite representation of the 1/f noise can be valid over all frequencies (doing so requires an infinite number of terms). Thus, we assume that the approximation starts at some frequency \omega_{Low}, and it stops some number n of octaves above this. [5]

Figure 5.12. Simulation of the fractal noise process that has a spectrum proportional to 1/f. (Model annotation: the approximation is valid from \omega_{low} to 2^n \omega_{low}; in MATLAB the variable names are omegalow = \omega_{low} and numoctaves = n. The model uses a Band-Limited White Noise block, Gain blocks A*u, B*u, C*u, and d, an Integrator, a Scope, and a Plot Spectral Estimates subsystem.)
The transfer function is then

H(s) = \prod_{i=1}^{n} \frac{s + \sqrt{2}\,2^{i-1}\omega_{Low}}{s + 2^{i-1}\omega_{Low}}.
A Simulink model that generates a noise sample using this approximation is in Figure 5.12 (called Oneonf in the NCS library).
This model generates a plot of the sample and calculations of the resulting PSD. Note that this simulation, once again, uses the vector capabilities of Simulink to simulate the state space model of the differential equation represented by the transfer function above.
The differential equation that generates this time sample has an order that is determined by the number of octaves used. In addition, the state-space matrices A, B, C, and D are set up in a callback from the model. The number of first order systems is set to the default value of 10 when the model opens. Also, the starting pole (\omega_{Low}) is set to 1 radian/sec. These values are MATLAB variables using the names that are in the figure. Changing their value in MATLAB causes subsequent simulations to use the new values (again, because of an Init callback from the model that executes at every simulation run).
⁵In Chapter 6, we examine methods for solving partial differential equations using a finite number of ordinary differential equations. This discussion parallels the approach that we use there. It is a "lumped parameter" or "finite element" method.
The values for the A and B matrices that come from the callbacks are

A = \begin{bmatrix}
-p_1 & 0 & 0 & \cdots & 0 \\
z_2 - p_2 & -p_2 & 0 & \cdots & 0 \\
z_3 - p_3 & z_3 - p_3 & -p_3 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
z_n - p_n & z_n - p_n & \cdots & z_n - p_n & -p_n
\end{bmatrix}, \qquad
B = \begin{bmatrix} z_1 - p_1 \\ z_2 - p_2 \\ z_3 - p_3 \\ \vdots \\ z_n - p_n \end{bmatrix}.
The code that sets up these matrices (shown below) is in the Callbacks tab of the Model
Properties under the File menu of the model:
n = 0:numoctaves-1;
n = 2.^n;                                      % octave spacing: 1, 2, 4, ...
p = omegalow*n;                                % poles, one octave apart, starting at omegalow
z = sqrt(2)*p;                                 % zeros placed sqrt(2) above each pole
A = diag(-p);                                  % -p_i on the diagonal
for i = 1:numoctaves-1
    A = A + diag(z(i+1:end)-p(i+1:end),-i);    % z_i - p_i below the diagonal in row i
end
B = (z-p);
C = ones(size(B));
d = 1;
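As a quick check on this realization, the following script (our own sketch, not part of the NCS library; the frequency grid and the mid-band normalization point are arbitrary choices) builds the matrices for the default ten-octave design and compares |H(j\omega)|^2 with a 1/\omega reference on a log-log plot:

% Sketch: frequency response of the lumped 1/f approximation.
omegalow = 1;  numoctaves = 10;          % the defaults used by the Oneonf model
p = omegalow*2.^(0:numoctaves-1);        % poles, one octave apart
z = sqrt(2)*p;                           % zeros sqrt(2) above each pole
N = numoctaves;
A = diag(-p);
for r = 2:N
    A(r,1:r-1) = z(r) - p(r);            % fill row r below the diagonal with z_r - p_r
end
B = (z - p).';  C = ones(1,N);  d = 1;
w = logspace(-1, 3.5, 400);              % rad/sec
H = zeros(size(w));
for k = 1:numel(w)
    H(k) = C*((1i*w(k)*eye(N) - A)\B) + d;
end
S  = abs(H).^2;                          % spectrum shape of the filtered white noise
k0 = 200;                                % scale the reference to match mid-band
loglog(w, S, w, S(k0)*w(k0)./w, '--')
xlabel('\omega (rad/sec)'); ylabel('|H(j\omega)|^2')
legend('approximation', '1/\omega reference')

Over the design band the computed magnitude squared should follow the 1/\omega reference to within the octave ripple discussed above.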
[Plot: a simulated 1/f noise sample; horizontal axis Time (1 to 2), vertical axis Simulated Values of 1/f Noise (-4 to 4).]
The Simulink simulation results in the 1/f noise sample shown in the figure at the right. In addition, the model contains a subsystem that has four different methods for computing an estimate of the spectrum. When the model is open, if you double click on the Plot Spectral Estimates subsystem you will see that these estimators are all from the Signal Processing Blockset. (Figure 5.13(a) shows the Simulink model.) The details of how each of these estimates operates are beyond the scope of this text, but in the broad sense, all of them, except the magnitude of the FFT block, use a method that estimates a linear model that matches the statistical properties of the data and then computes the estimate of the spectrum using the model. This approach always produces an estimate of the PSD that gets better the longer the time over which the estimate is performed. (Mathematically these estimates are consistent, meaning that the variance of the estimate goes to zero as the estimation time goes to infinity.)
The magnitude of the FFT is a very poor estimate of the PSD because it does not have this property. (In fact, the variance of this estimate never gets smaller.) This can be seen in
Figure 5.13. Power spectral density estimators in the signal processing blockset
used to compute the sampled PSD of the 1/f noise process.
the Vector Scope plot of all of the PSD estimates in Figure 5.13(b). (The magnitude of the FFT is the top estimate with the + signs as line markers.) The three consistent estimators all have the same overall estimate, and they would indeed be identical except that each of them is given an arbitrary order for the estimate (starting at 2 for the Yule-Walker estimator and going to 6 for the Burg estimator; as an exercise, change them all to order 2 and verify this result).
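The consistency issue is easy to see numerically even without the Blockset. The short script below (a sketch of ours, using only base MATLAB and white noise rather than the 1/f sample) compares a single raw periodogram with an average of many short periodograms; only the averaged estimate settles down as more data are used:

% Sketch: the raw |FFT|^2 periodogram is not a consistent PSD estimator.
% For unit-variance white noise the true PSD is flat (equal to 1).
x   = randn(1, 2^16);                        % one long white-noise record
raw = abs(fft(x)).^2 / numel(x);             % single raw periodogram
seg = reshape(x, 1024, []);                  % 64 segments of 1024 samples each
avg = mean(abs(fft(seg)).^2 / 1024, 2);      % average of the 64 short periodograms
fprintf('variance of the raw periodogram:      %.3f\n', var(raw))
fprintf('variance of the averaged periodogram: %.3f\n', var(avg))

The averaged estimate keeps improving as more segments are included, which is exactly the property the model-based estimators in the subsystem also have.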
In this chapter, we have explored the mathematics of stochastic processes and the methods for simulating them in Simulink, using some of the tools in the Signal Processing Blockset. It should be evident from what we have done so far how Simulink's functionality can go beyond simulating differential and difference equations. We have shown how to interchange discrete time and continuous time systems and work in the frequency domain. We consider next how Simulink can model systems described by partial differential equations.
5.4 Further Reading
There are many books on probability, statistics, and random processes. Among the best is by Papoulis [33]. A recent book by Childers [7] has many MATLAB examples.
Norbert Wiener's book Cybernetics [49] has a wonderful description of the importance of being versed in many disciplines. (For more on this point, see Chapter 10.)
He wrote: "Since Leibniz there has perhaps been no man who has had a full command of all the intellectual activity of his day. Since that time, science has been increasingly the task of specialists, in fields that show a tendency to grow progressively narrower. A century ago, there may have been no Leibniz, but there was a Gauss, a Faraday, and a Darwin. Today few scholars can call themselves mathematicians, physicists, or biologists without restriction.
"A man may be a topologist, an acoustician, or a coleopterist. He will be filled with the jargon of his field, and will know all its literature and all its ramifications, but, more frequently than not, he will regard the next subject as something belonging to his colleague three doors down the corridor, and will consider any interest in it on his own part as an unwarrantable breach of privacy."
For more on Wiener, visit Wikipedia at http://en.wikipedia.org/wiki/Cybernetics.
Both the Wiener process and the 1/f noise process are examples of a fractal. The fern in NCM is another example. There is an iterative method for generating fractals and a discussion of fractal dimension in Chapter 5 of Scheinerman [37]. A MATLAB code that will create Sierpinski's triangles is in the appendix of [33]; this example is fun to run, and you might want to try creating a Simulink model to do the same thing.
Exercises
5.1 Modify the Rayleigh distribution Simulink model in Section 5.1.2 to generate the random variable that is the phase angle of the reflected sine waves. The book by Papoulis [33] has a derivation of the probability density function for this random variable. Check to see if the histogram that you generate has the required form.
5.2 Verify the property that the random walk has zero mean and a variance of ks^2 (the properties shown in Section 5.2.2).
5.3 Show that the covariance matrix differential equation

\frac{dP(t)}{dt} = A P(t) + P(t) A^T + B S B^T

has the solution

P(t) = \Phi(t - t_0) P(t_0) \Phi(t - t_0)^T + \int_{t_0}^{t} \Phi(t - \tau) B S B^T \Phi(t - \tau)^T \, d\tau.

Use the hint given in Section 5.3.2 when you differentiate this solution.
5.4 We iterated the discrete covariance matrix equation one time and showed that the covariance matrix of a discrete system x_{k+1} = \Phi(\Delta t) x_k + \Gamma(\Delta t) u_k will be the same as the covariance matrix of the continuous system if the discrete system noise input u_k has the covariance matrix E\{u_k u_k^T \, \Delta t\} = S_{discrete} for all values k, where S_{discrete} is given implicitly by

\Gamma S_{discrete} \Gamma^T = \int_{0}^{\Delta t} \Phi(\tau) B S B^T \Phi(\tau)^T \, d\tau.

Show that this is true for all possible iterations.
5.5 If a real, stationary, stochastic process excites a linear system with impulse response h(t), it has the correlation function

R(\tau) = h(\tau) * R_u(\tau) * h(-\tau).

Use this fact to show that the power spectral density function of the output (the Fourier transform of R(\tau)) is

S_{out}(\omega) = |H(i\omega)|^2 S_{in}(\omega).
5.6 Show that the state-space model in Section 5.3.4 is the representation of the transfer function

H(s) = \prod_{i=1}^{n} \frac{s + \sqrt{2}\, 2^{i-1} \Omega_{Low}}{s + 2^{i-1} \Omega_{Low}}.
Chapter 6
Modeling a Partial
Differential Equation in
Simulink
As engineers design systems with more stringent requirements, it has become far more common to find that the underlying dynamics of the system are partial differential equations. Examples of this permeate the engineering design literature. For example, designers of computer disk drives are always striving to store more bits. They do this in two major ways: by making the number of bits per unit area on the surface as large as they can and by increasing the rotational speed of the disk. The smaller the area on the disk that contains the ones and zeros, the more accurate the positioning of the read/write head needs to be. At the same time, the speed of rotation of the disk makes it possible to access the stored ones and zeros faster, meaning that the read/write head needs to move faster, making the assembly more prone to vibrations induced by the rapid accelerations. Thus, disk drive designers have to design the read/write head positioning system with the following:
• flexible motions of the read/write heads, their mounts, and the disks themselves;
• aerodynamic induced vibrations as the heads move across the spinning disk;
• vibrations induced by thermal effects caused by uneven heating of the drive.
The models for these dynamics are specific partial differential equations, and when they all must be included, they interact with each other.
In a similar vein, automatic flight control systems for advanced aircraft (particularly aircraft that deliberately have unstable rotational dynamics) impose requirements that require the designer to include the effects of flexible motions and their interactions with the aerodynamics. Designers of spacecraft control systems, particularly those with precision pointing requirements, must model the vibrations of appendages that are induced by moving parts on the vehicle and, in some cases, the combined effects of these vibrations interacting with (and creating) fluid sloshing in propellant and oxidizer tanks, once again dynamics that are described by specific partial differential equations.
In Numerical Computing with MATLAB, Cleve Moler showed methods for solving pdes using finite difference methods. Since Simulink has the ability to model and solve ordinary differential equations, it is natural to ask whether or not a partial differential
equation may be solved using a method that converts the partial differential equation into a
set of coupled ordinary differential equations. The answer is a resounding yes.
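Before building the heating model, here is a small self-contained sketch of the idea (our own illustration, not a model from the NCS library): the one-dimensional heat equation is replaced by one ordinary differential equation per spatial grid point, and the resulting coupled set is handed to an ordinary differential equation solver.

% Sketch: converting a PDE into coupled ODEs (the "method of lines").
% 1-D heat equation u_t = alpha*u_xx with the ends held at zero temperature.
alpha = 1e-3;  L = 1;  N = 20;  dx = L/(N+1);
x  = dx*(1:N)';
u0 = sin(pi*x);                              % initial temperature profile
D2 = (diag(-2*ones(N,1)) + diag(ones(N-1,1),1) + diag(ones(N-1,1),-1))/dx^2;
odefun = @(t,u) alpha*(D2*u);                % one ODE per grid point
[t,u] = ode45(odefun, [0 200], u0);
plot(x, u(1,:), x, u(end,:), '--')           % initial vs. final profile
xlabel('x'), legend('t = 0', 't = 200')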
6.1 The Heat Equation: Partial Differential Equations in
Simulink
In the next chapter, we will begin the design of a home heating controller that replaces a thermostat, with the following benefits:
• better control of the temperature of the home,
• less energy use for the same or better comfort level,
• a significantly better feel for the residents.
The way one goes about achieving these objectives is with a thorough understanding of the dynamics of heat flow from a home, so we need to model the home in detail. With this in mind, let us build a two-room home heating model in Simulink and run it to understand the issues.
Let the temperature of the house be T(x, y, z, t), where the three orthogonal spatial directions x, y, z are relative to some coordinate system and the temperature is explicitly a function of time because of the heat supplied by the heating system. The underlying equation for computing the temperature is the heat equation

\rho c_p \frac{\partial T(x, y, z, t)}{\partial t} = \nabla \cdot \big( k(x, y, z, t) \, \nabla T(x, y, z, t) \big) + Q(x, y, z, t).

The source of heat in the rooms is the convectors (radiators) that are located either along the walls or in the floor, depending on the geometry. These heat sources are the heat input Q(x, y, z, t).
6.1.1 Finite Dimensional Models
There are many simplifying assumptions that we can use to model the home heating system using this equation. First, we assume that the walls, ceilings, floors, and glass areas have constant heat conductivities (i.e., k will be a constant for each of the different types of surface). Second, we can assume that each room in our house is a separate volumetric lumped entity so that the heat stored in each room (the \rho c_p \, \partial T(x(t), t)/\partial t in the heat equation) is independent for each of the rooms. The last assumption we can make is that the heat flow in and out of the rooms (through the walls, ceilings, etc.) will result in a linear thermal gradient in the material.
We can also assume that the heat exchangers have some thermal capacity (i.e., they can store the heat for some time). Each heat exchanger will transfer its heat to the room with some thermal loss. This heat exchange occurs through boundaries of air that, while not conducting (the heat exchange is actually through convection), are modeled assuming a linear gradient in the air around the exchanger. We make the reasonable assumption that the heat flowing out of the house does not change the outside air temperature (at
least over the time scale at which we are heating the house). We also assume that the combustion process is a constant source of heat, with appropriate efficiencies for the combustion; this means that each gallon of oil or 100,000 BTUs (therms) of gas burned produces a constant amount of heat. Under these assumptions, the heat equation becomes a set of interconnected "lumps" with the lumps consisting of the individual rooms and the individual heat exchangers.
These assumptions (constant outside air temperatures, linear gradients in the walls and around the heat exchangers, and constant heat flow from the combustion process into the medium used for heat exchange) mean that the heat flow in steady state from the warmest parts of the home to the cold outside is constant. This follows from the fact that with these assumptions the term k(x, y, z, t) \nabla T(x, y, z, t) is constant for each of the rooms.
We modeled a house in Chapter 2 (the model called Thermo_NCS), where we simply provided the model as a first order differential equation for the entire home. Because the house was a single entity, the temperature of each room in the house was the same. This is clearly not the case in a typical home. Each room's thermodynamics are interconnected but also independent in the sense that the heat loss from infiltration, conduction, and radiation depend only on the materials in the walls, the number of windows (architects use the term fenestration) and doors, and the spaces that surround the room (basements, attics, other rooms, sunspaces, etc.).
In the lumped parameter approximation, the individual room temperatures are the same throughout the room's volume. These assumptions also mean that the net heat flowing into or out of a compartment (a volume of space that has constant thermal capacity and uniform temperature) sums to zero. From the heat equation, when the divergence of the gradient is zero, there is no net increase in the temperature of the compartment from this flow, and a linear gradient ensures that this is true. We will see that this assumption is equivalent to the fact that, in an electrical circuit, currents flowing into and out of any node must sum to zero.
Thus, for our two-room house, the heat flow equation breaks up into four separate ordinary differential equations: two for the room temperatures and two for the heat exchanger temperatures.
6.1.2 An Electrical Analogy of the Heat Equation
Once we have lumped the rooms and heaters into single entities and made the assumptions on how the heat is flowing, the heat equation becomes analogous to an electrical circuit with resistors, capacitors, current sources, and voltage sources. The analogs of the different heat-equation element values are as follows:
• Heat flow is analogous to current (and a current source is a heat source).
• The temperature of an element is analogous to the voltage in the circuit (and therefore a voltage source is a constant temperature regardless of the heat flowing in or out).
• Thermal conductivity of the material is analogous to a resistor (the analogy is really to the conductance, i.e., 1/R is the thermal conductance, which will be important when we need to combine heat flow paths).
[Figure 6.1 (circuit diagram) labels: room capacitors C_R1 and C_R2 with parallel loss resistors R_wall, R_fl-ceil, R_convect, and R_window to the Ambient Outdoor Temp. source T_a; heat exchanger capacitors C_HE1 and C_HE2 coupled to the rooms through R_HE1 and R_HE2; Flame Heat source Q_f feeding node T_p through the loss resistors R_L1, R_L2, and R_L3; node temperatures T_R1, T_R2, T_H1, T_H2.]
Figure 6.1. An electrical model of the thermodynamics of a house and its heating system.
• Thermal capacitance times the volume of the element, \rho c_p V, is analogous to the electrical capacitance.
Using this analogy, we have the following:
• The individual rooms in our home can each be modeled as a single capacitor.
• The heat flow between rooms and the outside are modeled as current flowing through resistors.
• The rooms' heat exchangers are also modeled as capacitors.
• The heat losses from the heat source (the flame) to the heat exchangers can also be modeled as resistors.
• The heat source (oil or gas being burned) is a current source.
• The outside temperature (constant) is a voltage source.
Putting these together, we can model our two-room house as the electrical circuit in
Figure 6.1.
This model is easy to understand because it presents a (lumped) picture of the under-
lying dynamics. It should be obvious that the picture in this case is not the equations (in the
sense that a Simulink model explicitly represents the differential equations), but we can go
through each piece in this circuit (picture) to understand exactly what it implies in terms of
the thermodynamics.
Therefore, following Figure 6.1, we have the following:
• Each room and each heat exchanger is represented by a capacitor (this is the heat capacity of the room or heat exchanger given by \rho c_p V). For each room this is the product of the air density, the specific heat of the air, and the volume of the room, and for each heat exchanger, it is the product of the density of the heat exchange medium (water or air), the specific heat of this material, and the volume of the heat exchanger.
• The heat flowing into or out of the room depends on the thermal conductivity, k, which we convert into a thermal resistance using 1/k. Thus, the four resistors in parallel, at the top of each of the room compartments, are the paths for the heat flow due to the thermal losses from the walls, the ceilings, the convection of air from the outside to the inside, and the windows. These conduct the heat (current is heat) from the room to the fixed outside temperature, which is the voltage source titled Ambient Outdoor Temp.
• The heat flow from the heat exchangers to the rooms involves some reduction in the temperature of the medium so the room does not get as hot as the heat exchangers (the resistors R_HE1,2).
• The heat from the heat source is not completely captured. (The process of transferring the heat from the boiler to the heat exchanger involves losses that are modeled as resistors R_L1,2,3.)
• The power of this electrical analog for the heat equation is that it is easy to modify the model to make it more complex and include more effects. (We will do this in Chapter 8.)
6.2 Converting the Finite Model into Equations for
Simulation with Simulink
The circuit picture models the heat flow and the room temperatures, but it is not in the form of differential equations. To convert the picture into a set of coupled differential equations we need to use Kirchhoff's current law, which states that the sum of all of the currents entering any node of the circuit must be zero (where the signs of the currents are determined by assigning arbitrary inequalities to the various capacitor voltages). For this model, the temperatures for each of the rooms and heat exchangers (the capacitor voltages) are the nodes. The external sources and sinks of heat are the voltage and current sources that represent the constant ambient temperatures and heat flow. (These are not constant over a long time interval, but over the time interval that the heating system is operating they are very reasonably constant.) The result of these manipulations on the heat equation is the equations used in the Simulink model.
We have captured the salient features of the home for design of the heating sys-
tem without explicitly solving the heat partial differential equation. The method we used
here is only one of the ways of converting the partial differential equation into a coupled set
of differential equations. They are all called "finite element" models. In this version, the elements are large masses that have more or less constant temperatures (each room and each heat exchanger).
For many reasons it would be nice to have a tool that would convert a picture like
this into the appropriate equations for use by Simulink. It also would be nice to maintain
the picture above in the model because it is a lot easier to modify the picture to change
the attributes of the house. Adding more rooms, heat exchangers, and intermediate losses
like that from the heat exchangers, etc., using the picture is a lot easier than working with
gradually more complex equations. This is true because the picture clearly shows what
interactions are taking place among the various components, and it is easy to cut and paste
to create additional components with a computer. If such a tool existed, it would allow the
pictorial representation of the circuit to be visually available in Simulink, just as the signal
flow is available. Such a tool exists. The SimPowerSystems Blockset from The MathWorks
would convert this picture into the differential equations that Simulink needs. We describe
this tool later, but for now let us work through the model above and get the differential
equations. (In Chapter 8, as we investigate how to use the SimPowerSystems tool, we will
create a multiroom model.)
6.2.1 Using Kirchhoff's Law to Get the Equations
Kirchhoff's current law at each of the capacitance nodes in the model gives us the differential equations for the simulation. Thus we have the following.
For Heat Exchanger 1:

C_{HE1} \frac{dT_{H1}}{dt} = -\left( \frac{1}{R_{HE1}} + \frac{1}{R_{L2}} \right) T_{H1} + \frac{1}{R_{HE1}} T_{R1} + \frac{1}{R_{L2}} T_p.
For Room 1:

C_{R1} \frac{dT_{R1}}{dt} = -\frac{1}{R_{eq1}} T_{R1} + \frac{1}{R_{HE1}} T_{H1} + \frac{1}{R_{eq2}} T_a,

where

\frac{1}{R_{eq1}} = \left( \frac{1}{R_{wall1}} + \frac{1}{R_{flceil1}} + \frac{1}{R_{convect1}} + \frac{1}{R_{window1}} + \frac{1}{R_{HE1}} \right),

\frac{1}{R_{eq2}} = \left( \frac{1}{R_{wall1}} + \frac{1}{R_{flceil1}} + \frac{1}{R_{convect1}} + \frac{1}{R_{window1}} \right).
For Heat Exchanger 2:

C_{HE2} \frac{dT_{H2}}{dt} = -\left( \frac{1}{R_{HE2}} + \frac{1}{R_{L3}} \right) T_{H2} + \frac{1}{R_{HE2}} T_{R2} + \frac{1}{R_{L3}} T_p.
For Room 2:

C_{R2} \frac{dT_{R2}}{dt} = -\frac{1}{R_{eq3}} T_{R2} + \frac{1}{R_{HE2}} T_{H2} + \frac{1}{R_{eq4}} T_a,

where

\frac{1}{R_{eq3}} = \left( \frac{1}{R_{wall2}} + \frac{1}{R_{flceil2}} + \frac{1}{R_{convect2}} + \frac{1}{R_{window2}} + \frac{1}{R_{HE2}} \right),

\frac{1}{R_{eq4}} = \left( \frac{1}{R_{wall2}} + \frac{1}{R_{flceil2}} + \frac{1}{R_{convect2}} + \frac{1}{R_{window2}} \right).
There is also an algebraic (nondifferential) equation for the temperature of the heat exchange medium, T_p, given by summing the heat flow into this node at the heat source.
For the heat flow to the heat exchangers (algebraic equation):

T_p = \frac{ Q_f + \dfrac{T_{H1}}{R_{L2}} + \dfrac{T_{H2}}{R_{L3}} }{ \dfrac{1}{R_{L1}} + \dfrac{1}{R_{L2}} + \dfrac{1}{R_{L3}} } = R_{eq5} \left( Q_f + \frac{T_{H1}}{R_{L2}} + \frac{T_{H2}}{R_{L3}} \right),

where

\frac{1}{R_{eq5}} = \left( \frac{1}{R_{L1}} + \frac{1}{R_{L2}} + \frac{1}{R_{L3}} \right).
Substituting this algebraic result into the first and third equations above gives the final form of these two equations as

C_{HE1} \frac{dT_{H1}}{dt} = -\left( \frac{1}{R_{HE1}} + \left( 1 - \frac{R_{eq5}}{R_{L2}} \right) \frac{1}{R_{L2}} \right) T_{H1} + \frac{1}{R_{HE1}} T_{R1} + \frac{R_{eq5}}{R_{L2} R_{L3}} T_{H2} + \frac{R_{eq5}}{R_{L2}} Q_f

and

C_{HE2} \frac{dT_{H2}}{dt} = -\left( \frac{1}{R_{HE2}} + \left( 1 - \frac{R_{eq5}}{R_{L3}} \right) \frac{1}{R_{L3}} \right) T_{H2} + \frac{1}{R_{HE2}} T_{R2} + \frac{R_{eq5}}{R_{L2} R_{L3}} T_{H1} + \frac{R_{eq5}}{R_{L3}} Q_f.
These differential equations were relatively easy to write out because there are only four differential equations (and one algebraic equation); however, if we were to try to model a house with 15 rooms, the task would be quite difficult.
In the development of these equations, we have defined some equivalent resistors. These equivalents are meaningful in terms of the heat losses for the rooms and the heat exchangers. The combined loss for each room is the equivalent resistor that comes from placing the resistors for the losses in parallel, so the net loss is the combined effect of the heat flowing through the walls, floors, ceilings, doors, and windows and the conductive path of the heat exchangers. In practice, these resistances account for the convective losses that come from air infiltration into the rooms. Since the air infiltration depends on the external winds, these added terms include the wind speed. We have ignored this effect in our model, but it is important enough that we will revisit the model and add this effect in Chapter 8.
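As a small numerical illustration of this parallel combination (our own two-line sketch, using the Room 1 resistance values given later in Section 6.2.2), the conductances of the individual loss paths simply add:

% Sketch: equivalent resistance of parallel heat-loss paths (Room 1 values).
Rwall1 = 0.0264;  Rflceil1 = 0.1;  Rwindow1 = 0.0417;   % deg F hr/BTU
Req2 = 1/(1/Rwall1 + 1/Rflceil1 + 1/Rwindow1)           % combined room loss path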
6.2.2 The State-Space Model
Now that we have the equations to model the house heating system, we will create a state-
space model for use in Simulink. Assume that the voltage (temperature) of each capacitor
is a state in a four state model (one differential equation, or state, for each temperature).
The equations above then become
C \frac{d}{dt} \begin{bmatrix} T_{R1} \\ T_{R2} \\ T_{HE1} \\ T_{HE2} \end{bmatrix} = G \begin{bmatrix} T_{R1} \\ T_{R2} \\ T_{HE1} \\ T_{HE2} \end{bmatrix} + B_1 T_a + B_2 Q_f.
Let the state vector be x(t), and we can invert the matrix C to give the state-space model,

\frac{d}{dt} x(t) = C^{-1} G x(t) + C^{-1} B_1 T_a + C^{-1} B_2 Q_f.

The matrix C is diagonal so its inverse is simply the reciprocal of its elements, and therefore the final form of the state equations has the following matrices:
C^{-1} G = \begin{bmatrix}
-\dfrac{1}{C_{R1} R_{eq1}} & 0 & \dfrac{1}{C_{R1} R_{HE1}} & 0 \\
0 & -\dfrac{1}{C_{R2} R_{eq3}} & 0 & \dfrac{1}{C_{R2} R_{HE2}} \\
\dfrac{1}{C_{HE1} R_{HE1}} & 0 & -\left( \dfrac{1}{C_{HE1} R_{HE1}} + \left( 1 - \dfrac{R_{eq5}}{R_{L2}} \right) \dfrac{1}{C_{HE1} R_{L2}} \right) & \dfrac{R_{eq5}}{C_{HE1} R_{L2} R_{L3}} \\
0 & \dfrac{1}{C_{HE2} R_{HE2}} & \dfrac{R_{eq5}}{C_{HE2} R_{L2} R_{L3}} & -\left( \dfrac{1}{C_{HE2} R_{HE2}} + \left( 1 - \dfrac{R_{eq5}}{R_{L3}} \right) \dfrac{1}{C_{HE2} R_{L3}} \right)
\end{bmatrix},

C^{-1} B_1 = \begin{bmatrix} \dfrac{1}{C_{R1} R_{eq2}} \\ \dfrac{1}{C_{R2} R_{eq4}} \\ 0 \\ 0 \end{bmatrix}, \quad \text{and} \quad
C^{-1} B_2 = \begin{bmatrix} 0 \\ 0 \\ \dfrac{R_{eq5}}{C_{HE1} R_{L2}} \\ \dfrac{R_{eq5}}{C_{HE2} R_{L3}} \end{bmatrix}.
This set of equations is the basis for the Simulink model of the heating system. We can run
the model to learn what the temperatures do when the heating plant starts, and when the
heating plant is off. To complete the model, we need numeric values for the parameters.
To determine the parameters we need to define the house. We assume that room 1 is a rectangular solid with dimensions of 20 × 30 × 8 ft; thus the areas of the floor and ceiling are 600 square feet. Room 1 has four windows and one door, with a total area of 72 square ft. Room 2 is also a rectangular solid with a size of 20 × 40 × 8 ft. We assume that the two rooms are adjacent along the 20-foot dimension and that the heat flow through this wall is negligible (and we will ignore it). Room 2 has eight windows and two doors, with a combined area of 180 square ft. The area of the exposed walls in Room 1 is therefore the total surface area minus the glass and door areas, which is 568 square feet. Similarly, Room 2 has an exposed surface (minus the glass and doors) of 620 square ft.
The floors and the ceilings of both rooms have 12 inches of fiberglass insulation. (This insulation has an equivalent R (i.e., including the infiltration loss) value of 30.) The
thermal conductance is the reciprocal of the resistance, so an R of 30 translates into a thermal conductivity of 1/30 BTU/hour per square foot of floor and ceiling surface area per degree F temperature difference between the room and the outside. The walls of each room also have been insulated with fiberglass with the equivalent R-value of 15. (Again this represents a heat conductance of 1/15 BTU/hour per square foot of insulated wall surface area and per degree F.) The last equivalent thermal conductance value we need is that of the windows and doors. With double panes of glass, most modern windows (and doors) have equivalent R-values of 3, which gives a thermal conductance of 1/3 (same units as above). In all of these numbers we assume that the convection has been included in the equivalent R so the explicit term in the room equivalent resistances for the convection is not used (however, we will investigate the effect of added convection losses using this term). For the calculation of all of the thermal resistance values above, the values are per hour, but the simulation will be in seconds, so the data needs to be converted.
The heating system uses water as the heat exchange medium. The thermal conductance (one over the resistance) that couples the heat exchangers into the rooms is .01 BTUs per hour per linear foot of convector per degree F temperature difference between the rooms and the heat exchangers. We assume that Room 1 has 25 ft of heat exchangers, and Room 2 has 30 ft. We also assume that the losses in the heat exchange process are 0.05.
The specific heat of air is 0.17 BTU per pound per degree F, and the density of air is 0.0763 pounds per cubic foot.
Combining these quantities together, we get the following values for the components in the state-space model:
The thermal capacitance of Room 1 is the mass of the air in the room times the specific heat of air. The room volume is 4800 cubic ft, so the mass of air is 4800 × 0.0763 = 366.31 pounds, and the capacitance is 0.17 × 366.31 = 62.27 BTU per deg F. Similarly, the capacitance of Room 2 is 83.03 BTU per deg F. For the heat exchangers, we assume that the volume of water in them is 1 cubic ft. Since the specific heat of water is 1, the thermal capacity of each of the heat exchangers is therefore 62.5 BTU per deg F. Thus the capacitances used in the model (in BTU/deg F) are

C_{R1} = 62.27, \quad C_{R2} = 83.03, \quad C_{HE1} = C_{HE2} = 62.5.
The thermal resistances for the insulation in the various parts of the house are not the thermal resistance numbers in the differential equations (the dimensions of the various quantities show this). To get these numbers we need to multiply the conductance numbers for the various surfaces by their areas. Thus, we have (before the conversion from hours to seconds)

R_{HE1} = R_{HE2} = R_{HE3} = .01 \cdot 20 = 0.2,
R_{L1} = R_{L2} = R_{L3} = 0.05,
R_{wall1} = \frac{15}{568} = 0.0264, \quad R_{flceil1} = 2 \cdot \frac{30}{600} = 0.1, \quad R_{window1} = \frac{3}{72} = 0.0417,
R_{wall2} = \frac{15}{620} = 0.0242, \quad R_{flceil2} = 2 \cdot \frac{30}{800} = 0.075, \quad R_{window2} = \frac{3}{180} = 0.0167.
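The book collects these numbers in the M-file TwoRoomHouse.m, which is not reproduced here. The sketch below is our own reconstruction, not that file: the variable names, the 3600 hour-to-second conversion, and the omission of the explicit convection terms are our assumptions based on the text. It shows one way the state-space matrices could be assembled from the values above.

% Sketch (not the book's TwoRoomHouse.m): assemble the two-room state-space
% matrices from the values in the text.  State order: [TR1 TR2 TH1 TH2].
CR1 = 62.27;  CR2 = 83.03;  CHE1 = 62.5;  CHE2 = 62.5;     % BTU/deg F
RHE1 = 0.2;   RHE2 = 0.2;                                   % deg F hr/BTU
RL1  = 0.05;  RL2  = 0.05;  RL3  = 0.05;
Rwall1 = 0.0264; Rflceil1 = 0.1;   Rwindow1 = 0.0417;
Rwall2 = 0.0242; Rflceil2 = 0.075; Rwindow2 = 0.0167;
% Equivalent (parallel) resistances; the explicit convection terms are assumed
% to be folded into the R-values, as the text states.
Req2 = 1/(1/Rwall1 + 1/Rflceil1 + 1/Rwindow1);
Req1 = 1/(1/Req2 + 1/RHE1);
Req4 = 1/(1/Rwall2 + 1/Rflceil2 + 1/Rwindow2);
Req3 = 1/(1/Req4 + 1/RHE2);
Req5 = 1/(1/RL1 + 1/RL2 + 1/RL3);
A = zeros(4);                                  % this is C^-1 * G from the text
A(1,:) = [-1/(CR1*Req1), 0, 1/(CR1*RHE1), 0];
A(2,:) = [0, -1/(CR2*Req3), 0, 1/(CR2*RHE2)];
A(3,:) = [1/(CHE1*RHE1), 0, -(1/RHE1 + (1-Req5/RL2)/RL2)/CHE1, Req5/(CHE1*RL2*RL3)];
A(4,:) = [0, 1/(CHE2*RHE2), Req5/(CHE2*RL2*RL3), -(1/RHE2 + (1-Req5/RL3)/RL3)/CHE2];
B1 = [1/(CR1*Req2); 1/(CR2*Req4); 0; 0];       % multiplies the outside temperature
B2 = [0; 0; Req5/(CHE1*RL2); Req5/(CHE2*RL3)]; % multiplies the furnace heat Q_f
% Convert the per-hour rates to per-second rates for the simulation.
A = A/3600;  B1 = B1/3600;  B2 = B2/3600;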
The Simulink model we created once again uses the state-space model explicitly, and it uses the vectorizing capability of Simulink. The model resulting from this exercise is extremely simple; it is in the figure below. This model will work no matter how many rooms and heat
[Figure 6.2 annotations: "Simulation of the Heating System in a Two Room House." Open loop (no control) investigation (double click on the switch): no heat from furnace (top), max heat from furnace (bottom). Closed loop vs. open loop investigation (double click on the switch): open loop (top), closed loop (bottom). The blocks include the A matrix and heat-input vectors loaded from the M-file, a vector Integrator (1/s) with initial temperatures [50 50 120 120]', the outside temperature constant (10 deg F), the furnace heat input Qf, a 70 deg F closed loop set point, a saturation on the heat command (between 0 and 1), a Linear Heating Controller, two manual switches, and a scope that plots the four temperatures (R1, R2, HE1, HE2).]
Figure 6.2. Simulink model for the two-room house dynamics.
exchangers we model; all you need to do is change the matrix values. The data are all in an M-file called TwoRoomHouse.m that is in the NCS library. This M-file runs when the model opens because of the callback from the Model Properties dialog.
At this point, you might try creating this model yourself. If you look back at the
state-space model that was developed in Section 2.1.1, this Simulink model will be similar.
(The state here is of dimension 4 and in the earlier model it was 2, but this does not matter
since Simulink will use the matrices provided to get the dimensions.) Remember to put the
initial conditions into the integrator as a 4-vector.
Our version of the model is shown in Figure 6.2. (It is in the NCS library and is called
TwoRoomHeatingSystem.)
This model has been set up to run the M-file TwoRoomHouse when it starts. This populates all of the matrices in the model with the correct data. Two Simulink manual switches are in the model. Remember that these switches change state whenever you double click on the icon. We use them to investigate the open loop (no control) attributes of the house temperatures. The first switch (at the top left of the diagram) is set up when the model opens to run the simulation with the heater off. The temperatures that result from this analysis are in Figure 6.3. (The values of the four states appear on the same graph.) The second, Figure 6.4, shows what happens when the heater is turned on (double clicking the switch at the top left) and left on (the graphs are in the same order as the first figure). The simulations are for 3 hours (10,800 sec). The outside temperature is 10 deg F, and the heat from the furnace, Q_f, is 22.22 BTU/sec (or 80,000 BTU per hour, which is the heat from a gallon of fuel oil when burned in one hour with about 80% efficiency). These numbers are very typical for a reasonably well insulated house. (Remember that this simulation uses an outside temperature of 10 deg F.)
Double click on the second switch in the diagram, and rerun the simulation. The resulting temperatures are shown in Figure 6.5. We will reserve comment on these results
[Plot: the four temperatures versus time (0 to 10,000 sec); legend: Room 1 Temperature, Room 2 Temperature, Heat Exchanger 1 Temp., Heat Exchanger 2 Temp.]
Figure 6.3. Simulation results when the heat is off.
[Plot: the four temperatures versus time (0 to 10,000 sec); legend: Room 1 Temperature, Room 2 Temperature, Heat Exchanger 1 Temp., Heat Exchanger 2 Temp.]
Figure 6.4. Simulation results when the heat is on continuously.
[Plot: the four temperatures versus time (0 to 10,000 sec) under PID control; legend: Temp. Room 1, Temp. Room 2, HE 1 Temp., HE 2 Temp.]
Figure 6.5. Using PID control to maintain constant room temperatures (70 deg F).
until the next chapter. For now let us assume that a thermodynamics design team has just performed the analysis of the house heat losses and developed the heating controller. The next step in the process of designing a controller will be to put this heating system model together with a model that has the controller logic. This logic will use a tool that allows complex logic with a signal flow graph that is similar to Simulink (i.e., with a visual programming environment that is tuned to the needs of logic). The tool is Stateflow, which is the subject of the next chapter. We will revisit this model and create a new type of home heating controller, one that will provide heating that is more comfortable. The system will use the PID control system that we developed here.
6.3 Partial Differential Equations for Vibration
The techniques used in developing the finite dimensional model for the heat equation apply to other types of equations (such as the partial differential equation that describes the vibration of the read/write head in a disk drive). The idea is the same; the difference lies in the order of the time derivatives. In the case of the heat equation (which, because of the electric analog we showed, also defines electric fields), the differential equations that result are always first order. This class of equations (called parabolic equations) also includes any type of diffusion. When the time derivative is zero (i.e., the solution in steady state is desired), they are called elliptic equations. The equations that describe vibration are more complex in that the underlying ordinary differential equations that result are second order (these equations are hyperbolic). The method for creating the finite element model follows the above method; we assume a particular shape for the spatial part of the vibration, and these shapes break the total motion into a large number of finite elements. In the process of developing these elements, we create masses and inertias that interact through springs (and damping elements) so the resulting equations are second order in time. The state-space models that result are then a set of coupled second order differential equations that describe the oscillations. This topic can be the subject of an entire book by itself, and if you are interested in learning more, see [6], [21], and [28]. We will also look at an example of a vibrating string in Chapter 8.
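The lumping idea carries over directly; the sketch below (our own illustration, not the Chapter 8 string model) shows the one-dimensional wave equation reduced to a chain of coupled second order equations, solved by stacking positions and velocities into a first order state vector:

% Sketch: the vibrating-string analogue of the lumping used for the heat
% equation.  The wave equation u_tt = c^2*u_xx becomes N coupled second
% order ODEs; stacking position and velocity lets ode45 solve them.
c = 1;  L = 1;  N = 40;  dx = L/(N+1);  x = dx*(1:N)';
D2 = (diag(-2*ones(N,1)) + diag(ones(N-1,1),1) + diag(ones(N-1,1),-1))/dx^2;
u0 = sin(pi*x);  v0 = zeros(N,1);            % plucked initial shape, at rest
rhs = @(t,y) [y(N+1:end); c^2*(D2*y(1:N))];  % y = [u; u_t]
[t,y] = ode45(rhs, [0 2], [u0; v0]);
plot(x, y(1,1:N), x, y(end,1:N), '--')       % string shape at t = 0 and t = 2
xlabel('x'), legend('t = 0', 't = 2')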
6.4 Further Reading
The modeling of thermal systems using electrical analogues is pervasive. A recent special
issue of the Proceedings of the IEEE describes this modeling approach for the calculation of
the temperature of a chip. ([34] is one particular paper that discusses the thermal modeling
of a VLSI chip.) An online PDF version of a book on heat transfer also is available [25].
The use of equivalent thermal resistance to model both the heat conduction and con-
vection is pervasive but prone to error. The equivalent thermal resistance is good for a
single convection rate. The convective losses depend on the pressure difference between
the interior spaces and the exterior. These are a function of wind speed, so the greater the
wind speed, the larger the heat loss due to convection.
It is typical for oil companies to calculate oil consumption using a number called
degree days. This number is the accumulated sum of the difference between 65 deg F
and the average outdoor temperature over some period. (Generally, oil companies use the
number to determine when to deliver oil, so the period is approximately a month). You
can compute this number yourself, and check your oil or gas consumption per degree day.
You will soon see that your consumption per degree day is not constant. However, if you
use the average wind-chill-corrected temperature (which captures the convective loss), you
should see a very good match. In the model we have created, you can change the thermal
resistances to account for the effect of the wind by lowering the resistance by some amount.
We ask you to work this out in Exercise 6.2.
Exercises
6.1 The two-room house model with the 24-hour external temperature variation can calculate the homeowner's heating bill. Add a set of blocks to the model that accumulates
the total heat used over the 24-hour period. Then modify the model to use the thermo-
stat that we introduced in Chapter 2. Compare the heating costs for the two methods
of heating. Is there any difference? If there is, why is this so?
6.2 Develop a model for the heating system that has a thermal loss for each room that depends on the external wind speed. Modify the resistances R_{eq1}, R_{eq2}, R_{eq3}, and R_{eq4} (remember that they each contain an equivalent resistance for convective losses) so they are of the form

R_{eq1}(t) = R_{eq1}(0) + \Delta R_{eq1} \, w(t),

where w(t) is the wind speed.
Make \Delta R_{eq1} 10% of the nominal resistance when the wind speed is 10 mph. That means that the modified thermal resistances are

R_{eq1}(t) = R_{eq1}(0) \left( 1 + 0.1 \, \frac{w(t)}{10} \right).
Now investigate the effect of the wind on the heat loss. What can you conclude about
the importance of reducing the convective losses?
Chapter 7
Stateflow: A Tool for Creating and Coding State Diagrams, Complex Logic, Event Driven Actions, and Finite State Machines
Simulation of systems that have logical operations or that contain actions triggered by external asynchronous actions is difficult. For example, if we are simulating the action of a person throwing a switch, or if we are simulating the action that causes a computer to process an interrupt, the simulation needs to process the event at the time it occurs. When we create such a model using Simulink, the blocks and their connections do not always provide clear indications of the data flow, making the diagrams difficult to read and understand. The first attempt at creating a visual representation of complex logic flows was David Harel's introduction, in the 1980s, of a visual tool to represent all of the possible states that a complex reactive system might have. He called the idea a statechart [18]. The major innovation that a statechart provided was "And" states, where parts of the diagram could be acting in parallel with other parts. Until the introduction of this idea, the modeling of finite state machines relied on the ideas of Mealy and Moore from the 1950s. (Thus, in the Harel approach, states could be active at the same time (one and two are both active), whereas in the Mealy and Moore approach, all states were "Or" states.)
In 1996, The MathWorks introduced a new tool called Stateflow, which encompasses many modeling and programming ideas into a single and easily understood tool. The name hints that the tool allows modeling automata using the statechart approach, while also allowing the picture to represent signal flow. Designing a system with complex logic and state transitions that depend upon events (either external or internal) using the Stateflow paradigm is remarkably easy. It also has the major advantage that it integrates with Simulink. This last feature makes it possible not only to design the logic for an automata but also to model its interaction with the environment. The result is an insight into how well the automata design works; that is, it allows you to verify that the design meets the specification as it operates in a simulation of the real world. In the early stages of any design, the ability to make decisions about the structure of the automata can help clarify and tighten the requirements as the specification for the design also evolves. The result becomes an executable specification, which we will have more to say about in Chapter 9.
7.1 Properties of Stateflow: Building a Simple Model
We have used the concept of state extensively in this book. Stateflow expands the concept of a dynamic state to systems that do not require differential or difference equations.
The basic building blocks for creating a Stateflow machine are the eight icons at the left. Starting at the top, we have the following:
• The first icon is used to create a state in the Stateflow chart.
• The second is the icon that denotes a state that should remember its history.
• The third icon is used to denote the initialization of the Stateflow chart.
• The fourth icon is used to make signal flow connections. (Typically the connections are like resting points prior to single or multiple decisions.)
We will explore each of these in turn.
The last four icons are truth tables, functions, embedded MATLAB code, and the box. We will consider only the truth table object and the embedded MATLAB object in this chapter. The box icon allows the user to change the shape of the State symbol to a square (box) and treat the result as a subsystem.
The easiest way to get familiar with Stateflow is to build a simple model. Therefore, we will create a model that uses Stateflow to track the position of a simple on-off switch. Clearly such a switch has only two (exclusive or) states. Therefore, the diagram will be quite simple. To create the Stateflow model we need to open Simulink and create a new Simulink model using the blank paper icon in the menu. Then, in the Simulink library browser, open Stateflow by clicking on its icon, and after it opens, drag a new chart into the Simulink model. These steps should create a Simulink model that looks like the figure below.
[Figure: a new Simulink model containing a single empty Stateflow Chart block.]
Double click the Chart icon, and the Stateflow window will open. This window contains the icons above on the left hand side. This window is where we construct the actual Stateflow chart. We have already established that we need two states in this chart, so let us start by dragging two copies of the state icon (at the top of the icon chain) into the diagram. After you place each state icon, you will see a flashing insertion point at the top left side of the icon. If you immediately start typing, the name you assign to the state appears in the state icon. Let us call the first state "Off" and the second "On". If you do not type these names immediately, the icon displays with a question mark where the name should go. To add the name, simply click the question mark and type the name.
With the states in place, we need to specify the transitions. If you place the mouse
cursor over any of the lines that specify the boundary of the state (any of the straight lines
that bound the state will do), you will see the cursor change from an arrow to a plus sign.
When this happens, left click and drag the mouse to create a connection line. This line is
actually a Bezier curve that will leave the state perpendicular to the boundary, and when
you make the connection to the terminating state, it will be perpendicular to the bounding
lines of that state. You need to drag two lines, one starting at the On state and terminating
at the Off state and the second going the opposite direction. When these lines are in your
diagram, click in turn on each of them. You will see a question mark on the line that will
allow you to specify what causes the transition. We will set up an event that we will call
Switch that will cause the transition next, so for now all you need to do is click on each
of the transitions and type the word Switch in place of the question marks.
As a final step, we need to specify where in the diagram we want the state chart to start. Select the third icon in the menu to do this. When you drag the icon into the chart, a transition arrow will appear. You want to attach this to the state we have called "Off". This will cause the Stateflow chart to start in the Off state. When you have completed all of these steps, the diagram will look like the picture at the right.
We cannot execute the diagram yet because we have not specified what the event "Switch" means, nor have we created anything in Simulink that will simulate the throwing of a switch. So let us specify what the event is. At the top of the chart is a menu with the traditional File, Edit, View, Help, and Simulation options that are in Simulink. In addition, there are two options called Tools and Add. Click the word Add in the menu and a list will appear; from this list select Event, and in the submenu of this list we need to select Input from Simulink. When you do this, the dialog box shown at the left will open. Enter the name of the event (Switch) in the box titled Name (replacing the default name event1), and where it says Trigger, select Either from the pull-down menu. Finally, click Apply. After you complete this, look at the Simulink model that contains the Chart; you will see that the Chart icon has a Trigger input at
Figure 7.1. First Stateflow diagram (in the chart) uses a manual switch to create the event.
the top center of the chart and the icon inside the chart indicates that both rising and falling events will cause the event to trigger. The trigger icon is the same as that used for triggering a Simulink subsystem. This process has created not only the variable Switch in Stateflow but also a data type (in the sense of a strongly typed programming language). All variables in Stateflow have to be typed using this process (i.e., by using the Data or related dialog).
We now need to build a switch in Simulink that causes the trigger event Switch. In the Signal Routing library, you will find a Manual Switch icon that we can use. Drag it into the model and connect the switch arm to the trigger input of the chart. Last, from the Source library drag a Constant block into the model and duplicate it. Set one of the constants to 1 (leave the other at the 0 it came in with), and connect them as shown in Figure 7.1. The model is now complete.
The model has only discrete states, so you can go into the Configuration Parameters menu (under Simulation or you can use Ctrl+e) and change the Solver to discrete (no continuous states), and while you are at it, change the stop time to inf (the MATLAB name for infinity). If you now run the model and double click the switch icon while looking at the Chart, you will see the transitions from state to state take place each time the switch is thrown. (An active state will be highlighted in blue, and each time a transition takes place, it also will be highlighted in blue.)
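The same two settings can also be made from the MATLAB command line with set_param; a small sketch follows (the model name here is just an example, not one of the NCS library names, and the model must already be open or loaded):

% Sketch: make the same configuration changes from the MATLAB command line.
model = 'MySwitchChartModel';                       % substitute your model's name
set_param(model, 'Solver',   'FixedStepDiscrete');  % discrete (no continuous states)
set_param(model, 'StopTime', 'inf');                % run until stopped manually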
7.1.1 Stateflow Semantics
The Stateflow chart that we have created so far is extremely simple. It should have given you an idea of how the chart works. (In the language of automata theory, this knowledge is called semantics.) The chart we created in the introduction does not accomplish anything. In order to do something, we need to export an action from the chart. However, before we do that, let us describe some of the semantics of Stateflow.
Look at the diagram in Figure 7.2. It depicts most of the semantics of Stateflow. Semantics are associated with each of the Stateflow objects as follows.
States:
• States can be hierarchical. A state that contains other states is called a super state (examples: SuperState_1 and SuperState_2 in the figure).
Figure 7.2. A Stateflow diagram that contains most of the Stateflow semantics.
• States can cause an action when they are entered or exited, during their execution, or when an event occurs while the state is active. (Examples: State SS1.1 has entry action SS1.1_A1, during action SS1.1_A2, and exit action SS1.1_A3.) Note that the names of the actions can be anything you choose, and the semantics allow you to abbreviate the names to en, du, and ex, respectively.
• States are, by default, exclusive "Or" states. However, you can change them to parallel ("And") states by highlighting the states that you want to be parallel and then right clicking on the super state that contains them and selecting the Parallel (AND) option under Decomposition in the menu that opens. Parallel states appear in the diagram with dotted boundary lines (SS2.1 and SS2.2, for example).
• States may be grouped together (as a graphical object in the diagram) using the Group state icon in the menu at the top of the chart. (It is the tenth icon.) You can also double click on the super state that contains them. (Example: SuperState_2 in the diagram is grouped, as can be seen from the wide border and the shading.)
• Super states or combinations of states may be made into a subsystem (or a subchart in the semantics) by selecting this option from the Make Contents menu
after you right click on the combination. (Example: The SuperState_3 in the
diagram is a subchart. It has internal structure that appears when you double
click on it, just as a subsystem opens in a new window in Simulink.)
Transitions, Conditions, and History:
• A transition is a directed arrow in the diagram that implies the action of leaving one state (the source) and moving to another (the destination).
• A default transition is the method Stateflow uses to determine which of the exclusive (Or) states are to be active when there is ambiguity (for example, when the chart is started). These transitions may also occur with conditions attached or because of events. The initialization of SuperState_1 and the state SS1.1 in this super state are both defaults that occur on the first entry into the chart.
• The History (the encircled H in the diagram) overrides the default. In this example, the history forces Stateflow to remember the substate that was active when the exit action from SuperState_1 occurs. (In the example, the transition from SuperState_1 is to SuperState_2, and it occurs regardless of which substate in SuperState_1 is active.) The subsequent return to SuperState_1 will go back to the last active substate because of the history. Otherwise, Stateflow uses the default transition.
• A transition may occur because of an external event (as we saw in the previous section) or because of an event that is internal to the chart. Events are nongraphical objects; they do not have a pictorial representation, but they are shown in a dialog that is opened using the Explore option under the Tools menu in the chart.
• A transition can also use a condition. Conditions can be one or more Simulink data variables. (Data variables are also nongraphical, and you can view them using the Explorer option.) The semantics for conditional operations on Simulink data is to enclose the operation in square brackets. (Example: in the chart above, the conditions are Condition_1, Condition_2, and Condition_3.) Thus, the condition [t > 10] causes a transition when t (data from Simulink) is greater than 10; note that this means that when the Stateflow model is C-coded, t will be input data.
• We can also have actions associated with a transition. Actions can occur in two different ways. The first is during a transition, when the condition evaluated is true (before the transition takes place). The second is when the transition actually takes place. The first action is denoted by putting the action inside of curly brackets { and }. The second is denoted using a diagonal slash / (Example: [Condition_3] {CA_3} / TA3 in the diagram above denotes that Condition_3 is being tested for the transition, and if it is true, the conditional action CA_3 will be created; the transition will then occur, and the transition action TA3 will occur.)
Connective Junctions:
• A connective junction is a place that allows a transition to terminate while a decision is made. (It is denoted by a circle; transitions can be made to or from
this circle.) This device is the flow part of Stateflow. It allows pictorial representations of all kinds of programming decisions and as such greatly contributes to the visual programming approach that Stateflow embodies. As an example, in the chart above, there is an if, elseif, and else implied by the connective junction with the three transitions labeled 1, 2, and 3. The code that would be created from this is as follows: If Condition_1 then, if Condition_2 cause action CA_2, elseif Condition_3 cause action CA_3 and make the transition to state SS1.2 and cause transition action TA_3, else transition back to state SS1.1. Notice that in this example, the diagram has three lines with six clearly visible actions, whereas the verbal description contains 30 words, each of which must have precisely the stated order for the proper actions. (The code that it implies also is verbose.) As we develop some more complex Stateflow examples, the power of this visual programming environment will become even more evident.
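Written out as ordinary code, that description might look like the following sketch (our own rendering, not code generated by Stateflow; the condition and action names are the placeholders from Figure 7.2, and the actions are represented simply as strings that record what would happen):

% Sketch: the branching implied by the connective junction in Figure 7.2.
% Save as junctionLogic.m; the conditions are logical inputs.
function [nextState, actions] = junctionLogic(Condition_1, Condition_2, Condition_3)
nextState = 'SS1.1';                 % default: remain in (return to) SS1.1
actions   = {};
if Condition_1
    if Condition_2
        actions{end+1} = 'CA_2';     % conditional action, no state change
    elseif Condition_3
        actions{end+1} = 'CA_3';     % conditional action on the true branch
        nextState = 'SS1.2';         % take the transition to SS1.2 ...
        actions{end+1} = 'TA_3';     % ... then perform the transition action
    end
end
end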
This is not a complete description of Stateflow's semantics. We will see how Stateflow programs a chart when we build a model that does something.
7.1.2 Making the Simple Stateflow Chart Do Something
Before we make our chart do something (have outputs), let us look at how Stateflow differs from the classical Mealy and Moore state machines. In the Moore paradigm, outputs from the finite state machine depend only on the machine's states. In the Mealy paradigm, an output can occur when there is a change in the input, so the output depends on both the states and the input. The newest version of Stateflow allows you to restrict the Stateflow chart to be one or the other of these two paradigms. If you wish to do this, you need to open the Model Explorer window in the chart by selecting the Explore option under the Tools menu and then selecting the Mealy or Moore option in the State Machine Type pull-down menu.
In addition to the method for defining outputs, Stateflow also allows parallel (And) states, while both the Mealy and Moore schemes are purely exclusive "Or" machines. Finally, the ability to do signal flow using transitions and connection icons makes it possible to implement (and visualize) complex branching, including do loops and all types of if statements.
Now we can return to the example we started and create some outputs. We will assume that we have a simple specification for the automata that we want to create, and we will then build the Stateflow model to satisfy the specification. We will then look at the way Stateflow processes the chart by looking at the transitions and the actions both during the transitions and in the states themselves.
We assume we are creating a device that will cause a neon sign to flash with a period of 0.5 sec with a duty cycle of 50%. The first requirement that we have is that the drive electronics for the neon light must be warmed up for at least 0.25 sec before the flashing commences. (This ensures that the voltages are at the correct values.) After the warmup is complete, we commence flashing using a timer to trigger the state chart. The final requirement is to turn on a fault detection circuit that will monitor the voltage and current going to the lamp, but this can be done only 1 sec after the start command is issued, to allow
[Figure 7.3 blocks: a Pulse Generator (the trigger) and a Digital Clock (the time t) drive the Chart, whose outputs start, on, and trip go to a Scope.]
Figure 7.3. The Simulink model for the simple timer example.
the start transient to die down. (This time is greater than 3 time constants, so we select a
time of 1 sec).
We assume that the earlier model was the first step in the development of this timer, so we will modify this model to incorporate the complete specification. Here, then, is the specification:
• A timer that sends out a pulse every 0.125 sec will replace the switch we created in the early version (and because this triggers the chart, it will cause the states and transitions to occur every 0.125 sec).
• We will send the clock's time to the chart so we can create outputs that are timed.
• We will have three outputs. The first will be a variable called start that starts the light's electronics. The second will be a variable called on that causes the neon light to switch on and off at the 0.25-sec rate. The final variable, trip, occurs after a delay of 1 sec.
Figure 7.3 shows the Simulink model with the state chart that achieves this specification
(called SecondStateflowModel in the NCS library).
Run this model several times, carefully watching the transitions and the actions that
occur on entry and exit from the states. (This is easy to do because the chart in Figure 7.4
is animated.) Along with the animation, also look at the three outputs in the Scope block.
The model has been set up so all of the transitions in the Chart occur every 0.6 sec. (This is
done using the Stateflow Debugger under the Tools menu; we will look at this tool in more
detail later.)
The chart has the same two states that we used in the initial example. We have added
entry and exit actions for the On state and an exit action for the Off state. In addition,
we have put the connective icon at the transition from the Off state to the On state. This
connection is an if-then-else statement. It executes the following logic (a sketch of the
corresponding transition labels follows this list):
If the time t is greater than 1 sec, then trip is set to 1; otherwise (the else), go
unconditionally to the On state. (Remember that t is data from Simulink; when the
device is implemented, this is the clock time beginning when the timer is turned on.)
When we enter the On state, Stateflow sets the variable on to 1, and when the
chart exits the On state, the variable on is set to 0 (causing the flashing).
Figure 7.4. Modified Stateflow chart provides the timer outputs Start, On,
and Trip.
All of the transitions take place because the event Switch is triggered by the clock
pulses. (Remember that either rising or falling edges of the pulse trigger the event.)
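The chart in Figure 7.4 is the definitive statement of this logic, but, as a rough sketch, the labels on the transition leaving the Off state and on the two branches leaving the connective junction would take approximately this form (the exact text in SecondStateflowModel may differ slightly):

    Switch                   % leave Off whenever the triggering event occurs
    [t > 1] {trip = 1;}      % first junction branch: after 1 sec, set trip
                             % (the unlabeled branch is the else: go on to On)

In Stateflow's transition-label syntax the event name comes first, a condition appears in square brackets, and a condition action appears in curly braces, so a single label can combine all three.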
To create the input t for the Statechart we used the Add menu in the Stateflow
window. When you select Add, the options that you can use are Event, Data, or Target.
We want t to be Data that is an input from Simulink. Therefore, select Data, and then
from the submenu select Input from Simulink. In a similar way, we need to add outputs
called on, start, and trip (as in the chart diagram above).
The Simulink model contains the chart and models of the timer. (We use the Pulse
Generator from the Simulink Sources library with the dialog values set to Pulse type =
Sample based, Period = 4, Pulse Width = 2, and Sample Time = 0.125. The digital
clock is also from the Sources library, and its sample time is 0.125 sec.)
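If you prefer to set these source blocks from the command line rather than through their dialogs, the same dialog values can be applied with set_param. This is only a sketch: the block names below are the default library names assumed from the description above, and the programmatic parameter names are the standard ones for these blocks.

    % Assumed block paths inside SecondStateflowModel; adjust the names
    % if the blocks were renamed when the model was built.
    set_param('SecondStateflowModel/Pulse Generator', ...
        'PulseType','Sample based', 'Period','4', ...
        'PulseWidth','2', 'SampleTime','0.125');
    set_param('SecondStateflowModel/Digital Clock', 'SampleTime','0.125');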
We have now looked at the semantics for Stateflow, and a simple example with inputs
and outputs. We can now create a model that actually simulates a device that has both
complex dynamics and a Stateflow chart that provides the code for some complex logic.
7.1.3 Following Stateflow's Semantics Using the Debugger
When you ran the model and viewed the animation, you should have noted that there is a
logical pattern to the execution of the chart. This pattern is part of the Stateflow semantics
that you should understand. Let us look at how Stateflow translates the chart into actions.
Figure 7.5. Outputs resulting from running the simple timer example. (Three panels plot start, on, and trip against time from 0 to 4 sec, with the individual events marked.)
First, the chart does nothing until something that the chart recognizes occurs. This
recognition might be an event, as we have used in the model above, or it might be a change
in some data from Simulink. Stateflow can recognize a Simulink variable change as a
condition for a transition. In the (anthropomorphic and cute) semantics of Stateflow, the
chart is asleep when it starts. When something outside the chart causes an event or the
fulfillment of a condition, the chart wakes up. When the chart wakes up for the first time, the
default transition takes place and the chart initializes. Note that this takes 0.25 sec because
the first event occurs then. Thus at the end of 0.25 sec the first state is active, but the chart
is asleep again. The next event is at 0.5 sec. (The clock ticks occur every 0.25 sec because
the Pulse Generator has a period of 4 samples, with a pulse width of 2 samples.) At the
second Switch event, the chart transitions from the Off state to the On state, and the
entry action start=1 occurs. (Figure 7.5 shows the Scope that displays all of the outputs
from the chart.)
If you watched the transitions carefully in the Stateflow window, you will have seen that the
first action that occurs is the transition. Then the connective decision occurs, with the flow
stepping up to the border of the On state. Before the chart enters this state, the Off state
exit action occurs. Then and only then does the chart enter the On state.
You can understand the details of the actions and their timing if you start the Stateflow
debugger. This is accessible in the Tools menu at the top of the chart window or by typing
Ctrl+g when the chart window is active. The debugger opens with the check box Disable
all checked. To have the debugger stop at the various actions in the chart, deselect this
check box. Run the model from the debugger by selecting Start. The debugger will stop
the chart at every action, and you can use the Step button to move to the next change in
the chart. As you step through the chart, note carefully the actions that Stateflow takes at
each step. This is the best way to understand the Stateflow semantics.
In the next section, we will use Stateflow to create a new concept in home heating
control. This control will be an alternative to a thermostat: we will create the controller
using Stateflow to handle all of the logic, we will create the control system in Simulink,
and all the while we will use Simulink to model the furnace that heats the home. This example builds
on the simple home heating example (using a thermostat) that we investigated in Chapter 2,
and the Two Room House model that we developed in Chapter 6.
7.2 Using Stateflow: A Controller for Home Heating
The simple home heating example we investigated in Chapter 2 assumed that the heat source
was an electric heater. The thermostat that did the control was a bimetal device that worked
because of the unequal coefficients of expansion of the two metal pieces. This type of
heating system is still in common use in most homes, but it is not very efficient. First,
the hysteresis of the thermostat causes the temperature of the house to overshoot the value
that the thermostat is set to. (This overshoot can be as much as 4 or 5 deg F for a typical
thermostat.) Second, the typical house heat source is oil or gas (not electricity), and the
heater itself typically heats either air or water that circulates around the house. (In the case of
water, or hydronic, systems, the water may be circulated as superheated water or as steam.)
In either case, the heat exchanger that transfers the heat generated by the flames into the
water or the air has significant heat loss. (A good boiler has an efficiency of about 75%, and
a good air heat exchange plenum has an efficiency of about the same.) One way to avoid
these losses is to circulate water or air at a temperature that varies (and is typically only
slightly above the ambient temperature of the house). The ideal method for doing this is to
create a controller that varies the flow rate of the heat exchange medium. (A variable flow
rate requires a variable-speed fan in the case of an air heat exchange or a variable-speed
pump in the case of a hydronic system.) Let us develop an electronic controller in Stateflow
that will provide all of the functionality that a variable flow rate system needs.
The first step in the design is to create the Simulink model for the physical system
in which the controller will operate. This part of the design process is critical because it
provides the basis for capturing the specifications by allowing the developer to see the effect
of a change in the requirements or the specifications on the way the system operates. It is
the first step in the creation of an executable specification for the design. This step is so
crucial that we will do it in some detail.
7.2.1 Creating a Model of the System and an Executable Specification
We will assume that we are creating a device that will control the hydronic heating system
that we investigated in Chapter 6. We will divide the system into four parts:
the combustion process,
the heat exchanger that transfers the heat from the burning gas or oil into hot water,
the pumps that deliver the heated water to the heat exchangers in the rooms,
finally, the dynamics of the heat exchange in the rooms (the flow of heat into the
room from the heat exchangers and the loss of heat through conduction, infiltration
of external air, and radiation).
We modeled a home that has two rooms in Chapter 6. This model had all of the
attributes of a multiroom home but was easier to build. In the next chapter, we will revisit
the model and, using another tool, build a model that has multiple rooms. This approach is
typical of the way a design progresses. Initial modeling tends to be simple but expandable.
Its main thrust is to capture the salient details that drive the design of the product and
allow the specification to be refined. When systems engineers talk about this phase, they
often talk about flowing the requirements down. This phrase captures one of the most
important attributes of the development of a specification. We start with a requirement for
the system as a complete unit. We specify the very top part of the system; in this case we
want an energy efficient heating system that provides control over each individual room in
a multiroom home with heat flowing into the room on a continuous basis, not turned on
and off with a thermostat. The next step is modeling the dynamics to investigate how the
various parts of the system interact. This is where the simple model plays a significant role.
So-called trade-offs can now take place, where a more complex subsystem in one area can
make the design of other subsystems less difficult. Trade-offs of tighter specifications in one
area for less onerous specifications in another require a very robust model. In fact, this is the
only method that the designer can use to develop a complex specification. However, in most
engineering the early work to create these models is lost because a written specification is all
that the engineers see in the subsequent design phases. The transmission of the specification
through a written document that omits the simulations used to develop it results in a loss of
intellectual content, and it significantly slows down the design process.
In order to understand the process, we will create the new home heating controller
using the model of the house that we developed in Chapter 6. Presumably, the model was
the starting point for the design of the new heat controller, and the model showed the limits
on what is possible using the new controller. When we described the model in Chapter 6,
we noted that it contained a simple control system that started each time the manual switch
changed. So let us open the model and look at what the specification developers want. Open
the model TwoRoomHeatingSystem in the NCS library. Look at the responses when you
run the system without the controller. (The model should open with this option, with the switch
on the right in the up position; then throw the switch on the left by double
clicking and see what happens when the furnace is turned on and off.) The responses you see
were shown in Figure 6.5 of Chapter 6. Now, turn on the controller by throwing (double clicking)
the switch on the right. You will see that the design team has devised a good controller. It
seems to keep the house temperature right at the 70 deg F set point. (This value comes from
the constant input at the lower left side of the Simulink diagram.)
So let us look at how they did this. The Simulink subsystem that contains the controller
is in Figure 7.6. The salient features of this subsystem are as follows:
The design uses a PID controller, which makes sense because the heating system
should provide the homeowner with a constant temperature under all circumstances.
(The integral control forces the temperature to match the set point.)
Figure 7.6. PID controller for the home-heating example developed in Chapter 6. (The subsystem takes the measured and set temperatures as inputs and combines proportional control, a discrete integrator with integral gain Ki, and a discrete derivative, all sampled at 1 sec, to form the heater command.)
The design uses derivative control, which allows the controller to anticipate changes
in the outside temperatures and accommodate them.
The proportional control makes the heating system follow variations in the indoor
temperature caused by disturbances such as the opening and closing of doors.
The control system is digital; a sketch of the corresponding discrete update appears
after this list. (This is implicit in the specification since we are using
a digital device for the control; the sample time of the controller is 1 sec, which is
more than adequate but is long enough that it should not cause any computational
issues.)
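To make the structure in Figure 7.6 concrete, here is a minimal sketch of the discrete PID update it implements for one zone. The gain names (Kp, Ki, Kd) and variable names are placeholders, not the identifiers used in the model; only the 1-sec sample time comes from the specification above.

    % One controller step for a single zone (runs once per sample).
    Ts = 1;                          % controller sample time, sec
    e  = setTemp - measuredTemp;     % temperature error
    integ  = integ + Ki*Ts*e;        % discrete integrator, K*Ts/(z-1)
    deriv  = Kd*(e - e_prev)/Ts;     % discrete derivative, K*(z-1)/(Ts*z)
    u      = Kp*e + integ + deriv;   % heater command for this zone
    e_prev = e;                      % save the error for the next sample

In the actual subsystem the same three terms are built from Simulink blocks (a discrete-time integrator and a discrete derivative), so no hand-written code is needed.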
The simulation results from the executable specification are good, but we probably
should see what happens when the outside temperature moves around. Back in Chapter 2, we
developed a temperature profile (from real data) for a typical day in Norfolk, Massachusetts.
We can now excite the model with this temperature variation.
Open the two-room house model, delete the fixed outside temperature block,
and replace it with a From Workspace block. For the data, type Tdata(:,1:2) in the From
Workspace block. This causes the outdoor temperature in the model to follow the outside
temperature profile in the figure that opened when you started the two-room heating system
model (the temperature profile is the curve shown in red). Also, change the stop time for the
simulation from Tend to Tend1. (This makes the simulation 24 hours long.) If you do
not want to do this, you can open the model TwoRoomHeatingSystem24 in the NCS
library. When you run the simulation, the four temperatures in the model are as shown
in Figure 7.7. Notice that the individual room temperatures go from 50 deg F to the set
point of 70 deg F with no overshoot in about 20 minutes. Furthermore, the heating system
controller maintains a constant temperature of 70 deg F for both rooms despite the diurnal
variations in the temperature. In addition, the circulating water temperature goes up and
down as the outside temperature changes; this helps maintain a constant wall temperature
and significantly improves the comfort level in the rooms.
This simulation allows you to verify that this controller works well. At this point, you
are ready to build the Stateflow block that will implement this controller. However, there
are features that the controller should have that you know about but the control designer did not.
Figure 7.7. Response of the home heating system with the external temperature
from Chapter 3 as input. (The plot shows the two room temperatures and the two heat exchanger temperatures over the 24-hour simulation.)
Figure 7.8. A tentative design for the new home heating controller we want to develop. (The sketch shows a Heating Controller faceplate with set and measured temperature displays for zones 1 and 2, a time display, a Run/Time/Temp selector, a numeric keypad, and an Enter button.)
You know that you want to raise and lower the set temperature from the device. You know
that you need to display the set temperatures for each zone. You know that you need to
display the status of the system for the user, including the current time and the current indoor
temperature for each room (zone).
Assume that a human factors engineer and a commercial artist working together have
created the tentative interface in Figure 7.8 for the heating controller.
From this, we can begin to build the requirements for the Stateflow logic. First,
however, we need to talk about some of Stateflow's Action Language constructs that we
have not used so far.
7.2.2 Stateflow's Action Language Types
Stateflow has various types of Action Language constructs, some of which we have already
seen, that will make the programming of the controller easy. For example, actions can be
associated with both states and transitions. These actions have certain keywords associated
with them (we have already encountered most of them), but there are some implicit actions
that are very useful. These are as follows.
Binary and Fixed-Point Operations: The rules for fixed-point and binary operations
are quite extensive and beyond the scope of this introduction. You should carefully
read the Stateflow manual before using any of these operations.
In their order of precedence, the binary operations are a*b, a/b, a+b, a-b, a > b,
a < b, a >= b, a <= b, a == b, a ~= b, a != b, a <> b, a&b, a|b, a && b, a||b.
The unary operations are -a (unary minus), !a (logical not), a++ (increment a
by 1), a-- (decrement a by 1).
Assignments: There are certain assumptions that Stateflow makes about the
calculation results when you are using fixed point. Some of the operators can
override these assumptions. The Stateflow manual describes these assumptions,
so if you are using the assignments below, you need to use care.
With this in mind, the assignments are a = expression, a := expression, a +=
expression, a -= expression, a *= expression, a /= expression,
a |= expression, a &= expression. Most of these are the same as they would be
for C code; however, in some cases they are bit operations and not operations
on an entire fixed-point word.
Typecast operations: These operations are data conversions and are of the form
int8(v), int16(v), int32(v), single(v), and double(v).
C function calls: You can use a subset of the C Math Library functions. These are
abs, acos, asin, atan, atan2, ceil, cos, cosh, exp, fabs, floor, fmod, labs, ldexp, log,
log10, pow, rand, sin, sinh, sqrt, tan, and tanh. Stateflow also uses macros to enable
max and min.
The user may call his or her own C code from Stateflow. The procedure for doing so
is in the manuals.
MATLAB calls from Stateflow use the operator ml. For example, a = ml.sin(ml.x)
will use MATLAB to compute sin(x), where x is in the MATLAB workspace.
Event broadcasting: Any action in Stateflow can create and broadcast (to other states)
a new event. For example, the entry into a state can have on e1: e2, which is read
as "On the occurrence of the event e1, broadcast the event e2." An event broadcast
may also occur during a transition using the code e1/e2, which is read "Make the
transition when e1 occurs and broadcast the event e2." The events can also be directed
to specific states in the chart using send(event, state), where event is the name of
the event and state is the name of the state it will be used by. (No other state will be
able to use the event.)
Temporal logic: This logic uses constructs like Before, After, Every, and At.
It also allows
conditionals and supports some special symbols like t for the absolute time
in the simulation target;
$ for target code entries that will appear as written between the $ signs in the
code but will be ignored by the Stateflow parser;
the ellipsis (...) to denote continuation of a code segment to the next line;
the MATLAB, C, and C++ comment delimiters (%, /* */, and //);
the MATLAB display symbol ;.
It supports the single-precision floating-point number symbol F and hexadecimal
notation.
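As a small illustration of the temporal logic (and of the comment delimiters), a guard such as after(25, next) on a transition means "take this transition only after the event next has occurred 25 times since the state was entered." We use exactly this construct later in the heating controller chart. Hypothetical transition labels using it might read:

    after(25, next)              % give up waiting after 25 "next" events
    next [PushButton >= 0]       /* otherwise accept the digit that was pressed */

These lines are sketches of the label syntax, not text taken from a particular chart in the library.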
7.2.3 The Heating Controller Layout
We are ready to begin building the home heating controller. We will start with the simulation
model for the heating controller, TwoRoomHeatingSystem24, and modify it by adding the
Stateflow chart that will handle all of the operations that the user will perform. The first
step is to write the specification for the Stateflow chart that outlines all of the actions that
the user can perform.
The actions are as follows:
The user may select any of the three switch settings: Run, Set Time, and Temp.
In the Run position, the controller operates to maintain the temperature set for each
of the zones. The controller also displays the time of day.
In the Time position, the user can set the time. The user sets the time with the
pushbutton keyboard (with the digit keys 1 through 9 and 0) and the Enter pushbutton. Pressing
the Enter key saves the time in the display. The display cycles through the hour and
minute display values every time a new number is pressed. If the Enter key is pressed
before the complete time is entered, the Time entry command recycles back to the
beginning, as if no entry has been made.
In the Temp position, the user can set the temperature for each zone. The display
begins with zone 1. The desired temperature is set by using the number keys. After
entering each digit, the option is to change it (pressing the new number causes the
display to cycle through the two digits). Pressing the Enter key accepts the number in
the display. Temperatures are set to two digits, and the selection of Enter
saves the temperatures.
Zones are selected using the Enter key immediately after the Set Temp option is
selected.
In all cases, the system continues to run properly even if the switch is not in the Run
position. If the user fails to put the switch to Run, the display will continue to accept new
inputs. With this specification for the operation, we can begin building the Stateflow chart.
Figure 7.9. The complete simulation of the home heating controller. (The model combines the two-zone house and PID heating controller from Chapter 6 with the Stateflow chart ProcessCommands, the User_Selections subsystem, a digital clock, the outdoor temperature From Workspace block, an A/D conversion block, and the MATLAB Function blocks that update the GUI.)
7.2.4 Adding the User Actions, the Digital Clock, and the Stateflow
Chart to the Simulink Model of the Home Heating System
The Simulink model needs a simulation of different user actions so we can test the various
modes of the Stateflow chart. (When we test the chart, we want to make sure that we cover
every possible action and that no states in the chart are unused.) The GUI will contain all
of the buttons and displays that the design team specified (as in the sketch provided above).
We also need a simulation of the digital clock that will trigger the controller. (The digital
clock also drives the clock display.)
The complete simulation is the model Stateflow_Heating_Controller in the
NCS library (Figure 7.9). When you open this model, a Heating_Control_GUI display
opens also (Figure 7.10). We made this GUI with GUIDE (the GUI Development Environment)
in MATLAB. The GUI simulates all of the user interactions, including
the display of the time,
the display of the actual room temperatures,
the display of the desired (set) room temperatures,
the operating modes (through a pull-down menu): Run, Set Temp, and Set Time,
a functional keyboard that allows the user to set the time and the two-room set
temperatures.
When the user changes the settings, they are immediately available in the simulation
and the GUI displays the result.
Figure 7.10. Graphical user interface (GUI) that implements the prototype design
for the controller.
We will go through all of the details in the creation of this GUI in the next section,
but first let us see how it works. The Simulink model with the controller and all of the
code to drive the GUI are in the figures and code descriptions that follow. A Stateflow chart
handles the logic for the feedback control (the command to the heating controller comes
from the bottom right port in the chart) and all of the pushbutton and selection options.
These include the options for running the system and changing the time and the desired
temperature settings.
The simulation contains a digital clock that drives the display and triggers the Stateflow
chart. This is above the chart in Figure 7.9 and provides the trigger inputs to the chart. Five
MATLAB blocks handle the changes to the GUI. These are as follows.
Update Time Display: As its name implies, this code updates the GUI-displayed time
every minute.
Reset Time in GUI: When the user changes the time by selecting Set Time in the
pull-down menu, this code changes the time display in the GUI to the time entered
by the user.
Reset PushButtons in User_Selections: The communication between the GUI and
the Simulink model uses a Gain block whose value is changed by code in the GUI
M-file. (We will spend some time describing this feature since we have not used it
before.)
Figure 7.11. Interactions between the GUI and Stateflow use changes in the gain
values in three gain blocks (Setting_SW_Gain, 11_PB_Gain, and Enter_Gain) in the subsystem called User_Selections.
Update Set Temps in GUI: The user inputs from the keyboard change the set
temperatures, and this code displays them in the GUI.
Update Room Temps in GUI: The measured room temperatures are the output of a
Simulink block that simulates the A/D conversion of the measured temperatures into
3 digits (in the form XX.X), which are displayed in the GUI with code in this block.
The digital controller and the simulation of the two rooms are directly from the
TwoRoomHouse model from Chapter 6. We added an empty Stateflow chart to the model,
and we used the Add menu selection (in Stateflow) to add the data inputs. The inputs are
the settings (an integer from 1 to 3 from the GUI pull-down menu);
the 11 PushButton values that come from the interaction of the GUI and the
User_Selection block (these values are -1, 0, 1, 2, . . . , 9);
the result of pressing the Enter pushbutton (when the buttons are reset the value is -1,
and it changes to +1 when the Enter button is pressed; these result from interactions
of the GUI with the User_Selections block).
The inputs, from the block called User_Selections at the left of the model, result from
changes in the gains in Gain blocks. These Gain blocks are in the User_Selections block
shown in Figure 7.11.
To see how the interactions with the GUI occur, open the model and the User_Selections
subsystem. You do not have to start the model to see the interactions. Start by selecting
one of the three options in the pull-down menu in the GUI. When you do, you should see
the value in the Setting_SW_Gain (the top block) change. This change uses a feature of
the interaction of Simulink with MATLAB that we have not used before. The command, in
MATLAB, is set_param, and it allows you to change the values of the parameter settings
in a block. For the Gain block, there are only two parameters, the Gain itself and the sample
time. In order to use set_param, you need to get the handle to the graphics object (the block)
that you want to change from the GUI. We do this in two steps. First, when the model
opens, a callback (under Model Properties in the Edit menu) is executed and opens the GUI
using the command hGUI = Heating_Control_GUI. This saves the graphics handle for
the GUI in the variable named hGUI in MATLAB. We can then use this handle to access
the GUI and to make the appropriate changes. The second step is to set the gain values in
each of the blocks above. The handle for these blocks uses the names of the model, the
subsystem that contains the gain blocks, and the names of the gain blocks. Thus, to set the
gain in the first block, the following code is used:
set_gain = '1';
block_handle = ...
    'Stateflow_Heating_Controller/User_Selections/Setting_SW_Gain';
set_param(block_handle,'Gain',set_gain);
The fully qualified name for the block is the string variable block_handle. This is
used in the set_param command to set the value of the Gain to the string variable set_gain
(which is the string for the character 1).
This set of commands will change the gain of the top block to 1. (In Figure 7.11 the
value is 2.) Each time you change the value of the gain, the Stateflow Chart sees the result
of multiplying the gain (from the set_param value) by the input 1 (i.e., it sees the values 1,
2, or 3).
The PostLoadFcn callback also sets the value of the 11_PB_Gain to an initial value
of -1 (so we can tell when no pushbuttons are selected); this changes to 0, 1, 2, . . . , or
9, depending on which button is pushed. The last gain, Enter_Gain, is changed to +1
whenever the Enter pushbutton is pressed; otherwise it is -1.
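For reference, the model's PostLoadFcn callback therefore has two jobs: open the GUI (saving its handle in hGUI) and put the gains into their idle values. A plausible version of that callback, written here from the description above rather than copied from the model, is

    % Hypothetical PostLoadFcn callback (Model Properties > Callbacks).
    hGUI = Heating_Control_GUI;   % open the GUI and keep its handle
    set_param('Stateflow_Heating_Controller/User_Selections/11_PB_Gain', ...
        'Gain','-1');             % -1 means no pushbutton is pressed
    set_param('Stateflow_Heating_Controller/User_Selections/Enter_Gain', ...
        'Gain','-1');             % -1 means Enter is not pressed

The actual callback in the library may differ in detail, but it must establish these initial values for the chart logic to work.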
In the Stateflow chart, the action Reset occurs after the pushbutton data is used.
This action is an output from the chart. The other outputs are the SetClock action and
the time to which the clock is to be set, and SetTemps, which is a 2-vector of the desired
temperatures created by the user's pushbutton actions.
The Stateflow chart has two parallel states called Interactions (Figure 7.12) and Process_Buttons
(Figure 7.13). The Interactions state computes the two commands for the
controller (each command is the difference between the set and measured temperatures for
a zone) and processes the changes in the state of the switch. The second parallel state
processes the pushbuttons.
This chart is rather easy to follow. The annotation at the top of the state has enough
detail so you subsequently can see how this part of the chart processes the commands. Run
the model and watch the chart. Then, as you select different options with the pull-down
menu, watch the chart as the paths change. In addition, notice how events are broadcast
from this state to the parallel Process_Buttons state and note what happens in this part of
the chart (shown in Figure 7.13).
If you are sure that you understand the working of the Interactions state, focus your
attention on the Process_Buttons chart. The entry into this chart activates the Start state. The
broadcast of one of the three events run, time, or temperature causes the chart to stay in
this state (run), go to the state SetTime (time), or go to the state Temperature (temperature). In each
of the SetTime states, we wait for a pushbutton selection. When the selection occurs, the
value of the variable PushButton is 0, 1, . . . , 9, depending on which button was pushed,
and we exit the state and go to the state Hours2, where the first digit of the desired hour
setting is set. We then reset the pushbuttons, and when the event next occurs, we exit the
state and go to a Wait state. If the user does not continue after 25 of the next clock pulses (25
Figure 7.12. The first of the two parallel states in the state chart for the heating
controller. (The annotation in the chart reads: at every Clock tic after the Start, test the Switch setting; Setting = 1 sends the event run, Setting = 2 sends time, and Setting = 3 sends temperature to Process_Buttons; then the heater command Command = SetTemps - MeasuredTemps is calculated and all of the pushbuttons are reset.)
sec), this state is abandoned and we start all over again. If the user continues, we process the
next digit in the same manner. When all of the digits are set, we are in the Min1 state where
we wait for the user to press the Enter button. This sends the chart to the Output_Time state
where we calculate the Time from the four digits and the variable SetClock is set to 1. This
updates the display with the new time. First, we need to decide whether the user is setting
the clock to am or pm. This happens in the state AMPM. The MATLAB code that resets
Figure 7.13. The second of the two parallel states in the home heating controller
state chart. (The Process_Buttons state contains the Start, SetTime, Temperature, and SetTemp paths, the digit-entry and Wait states for the time and temperature settings, the Output_Time and AMPM states, and the after(25,next) transitions that abandon an incomplete entry.)
the clock looks at the time, and if it is greater than 100 it knows that the time is pm, and if
not, it is am. Thus, in this state, every time the user hits the Enter button we alternately add
or subtract 100. This sequentially sets am or pm in the time display. Try this!
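The encoding is easy to verify at the MATLAB command line. As the entry action of the Output_Time state in Figure 7.13 shows, the chart forms Time = 10*h2 + h1 + (10*m2 + m1)/100, so 10:20 becomes 10.20, and the AMPM state adds 100 for pm. A small decoding sketch (the variable names here are placeholders, not names from the model) is

    % Decode the Time value produced by the chart (hh.mm, +100 for pm).
    ispm  = Time > 100;
    t     = Time - 100*ispm;
    hours = floor(t);
    mins  = round(100*(t - hours));

Running this with Time = 110.20 gives hours = 10, mins = 20, and ispm = 1, i.e., 10:20 pm.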
The path that sets the temperatures begins with the state Temperature. At the start
of this, we wait to see if the user presses the Enter button. Every time the user presses
Enter, the zone changes; it cycles through 1 and 2. The rest of this chart should be easy
to follow, particularly if you run the chart while you select the temperatures.
The only remaining parts of the model are the MATLAB code blocks that change the
display. We will describe one of these in detail and allow you to look at the remaining
code blocks to see what each of them does. There are five different MATLAB functions in the
model. One of these is the code that resets the pushbuttons, and we have seen this already.
The other four all cause changes to be made to the GUI. Two of these make the changes
that the user commands using the pushbuttons, while the other two change the time and the
temperature as the simulation progresses. We will look at the block that updates the time in the
GUI. The code is
function updatetime(Clock)
% Updates the GUI clock display; called once per simulated second.
persistent AMPM Clocksec Clockmin Clockhr changehr changemin changeampm
if Clock == 0; return; end
hGUI = evalin('base','hGUI');       % handle of the GUI, saved by the callback
h = guidata(hGUI);                  % handles of the objects in the GUI
Hrfield = get(h.text15,'String');
if strcmp(Hrfield,'00')             % first call: initialize the clock fields
    AMPM = 'am';
    Clocksec = 0;
    Clockmin = 0;
    Clockhr = 12;
    changemin = 1;
    changehr = 1;
    changeampm = 1;
end
Clocksec = Clocksec + 1;
if Clocksec == 60
    Cm = get(h.text13,'String');
    Clockmin = str2double(Cm);
    Clocksec = 0;
    Clockmin = Clockmin + 1;
    changemin = 1;
    if Clockmin == 60
        Ch = get(h.text15,'String');
        Clockhr = str2double(Ch);
        Clockhr = Clockhr + 1;
        Clockmin = 0;
        changehr = 1;
        if Clockhr == 13; Clockhr = 1; end
        if Clockhr == 12 && Clockmin == 0
            changeampm = 1;
            if strcmp(AMPM,'am')
                AMPM = 'pm';
            else
                AMPM = 'am';
            end
        end
    end
end
if changemin == 1
    Clockmins = num2str(Clockmin);
    if Clockmin < 10; Clockmins = ['0' Clockmins]; end
    set(h.text13,'String',Clockmins)
    changemin = 0;
end
if changehr == 1
    Clockhrs = num2str(Clockhr);
    set(h.text15,'String',Clockhrs)
    changehr = 0;
end
if changeampm == 1
    set(h.text11,'String',AMPM)
    changeampm = 0;
end
This code should be easy to follow. The only part that might be new to you (particularly
if you have never used MATLAB to handle graphics before) is the code that finds the graphic
object in the GUI. The GUI was created using GUIDE. GUIDE automatically creates a name
(a handle) for each of the objects in the GUI. In this case, we want to change the attributes
of the text in the GUI. At any time we want to use them, we can retrieve these names using
the MATLAB command guidata. To do this we need to have the handle for the GUI.
Remember that when we opened the GUI from the callback, MATLAB created a number
called hGUI. This number exists in the MATLAB workspace, so to retrieve it for use in this
function you need to bring it into the function's workspace. The evalin command does this.
If this is the first time you have seen this command, go to MATLAB and type help evalin
at the command line. The text in the GUI is always a MATLAB string variable, and its name
is always String. Try typing some of the set and get commands that appear in this
code at the MATLAB command line. (The model does not have to be running to execute
these commands in the GUI.)
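For example, a short command-line experiment along the following lines (a sketch that assumes the GUI is open and the text15 hours field exists, as in the code above) shows how the display fields can be read and written:

    hGUI = Heating_Control_GUI;       % open the GUI and keep its handle
    h = guidata(hGUI);                % structure of handles created by GUIDE
    get(h.text15,'String')            % read the hours field of the clock display
    set(h.text15,'String','07')       % write a new value into the same field

Anything the simulation does to the display can be reproduced (or undone) this way, which makes the GUI easy to debug.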
The controller (with the PID control) and the Stateflow chart (along with the A/D
converters) are the only parts of this model that will be used in the actual controller. (The
Simulink pieces can be coded in C using Real Time Workshop, and the Stateflow chart can
be coded with the Stateflow coder.) Figure 7.14 shows the final form of the controller.
The measured temperature goes to the display through a MATLAB block that computes the
temperatures to three-digit accuracy (tens, units, and tenths) in the blocks to the left of the
A/D converter block in Figure 7.9.
When the design is complete, Real Time Workshop creates C code for the Stateflow
chart and the digital controller. Thus, the design will be complete when the Simulink
diagram works as required. One of the neat aspects of this design approach is the fact that
the Simulink model can be used to verify that the controller works as a customer would
expect: you can show it to the customer.
If you run this model, you will see it controlling the room temperatures both through
the GUI and in the plots. The plots are very similar to those that we created in the simple
models, which is further verification of the accuracy of the design (the design and the
specifications match).
Figure 7.14. The final version of the PID control for the home heating controller. (The proportional, integral, and derivative paths are built from gain blocks and unit delays, using the sample time delt, and summed to form the Heater Command.)
7.2.5 Some Comments on Creating the GUI
The GUI was created using GUIDE (the GUI Development Environment in MATLAB).
The creation of the GUI is quite easy. The tool opens when you type guide at the command
line in MATLAB. You insert each of the objects for the GUI by clicking and dragging the
appropriate icon into the blank window that opens. When all of the objects (pushbuttons,
pull-down menus, text, etc.) are in the window, simply click the green arrow at the top to
create the GUI. The result will be an active window (which will do nothing) and an M-file
that you can edit to create the actions for each of the objects.
The M-file contains a subfunction that is executed for every action in the GUI. The
code below, for example, is the code that runs when you double click pushbutton 1.
% --- Executes on button press of pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% This is the Keyboard Entry for the number 1.
block_handle = ...
    'Stateflow_Heating_Controller/User_Selections/11_PB_Gain';
set_gain = 1;
settimeortemp(hObject, eventdata, handles, set_gain, block_handle)
The first five lines come from GUIDE. The comment and the last three lines are the
added code that processes the button. Note that this code changes the value of the gain in
the User_Selections subsystem in the Simulink model (we have seen this already). The gain
is changed to 1 here. (Other buttons change the gain to the values 0, 1, 2 . . . , 9, as we have
seen.) When the gain change occurs, the appropriate action in the Stateflow Chart occurs
because the chart input sees the value of the output of the gain block. With this knowledge,
you should be able to debug the GUI. Edit the GUI (called Heating_Control_GUI.m in
the NCS library). Select some of the lines in the M-file that correspond to actions, and click
the dash to the left of the lines. This will put a large red dot where the dash was, and the
next time the M-file runs, MATLAB will pause at this point. As is always the case, once
you understand how the GUI works, you can learn more by reading the help files.
7.3 Further Reading
The paper by Harel [18] forms only a part of Stateflow. The most significant difference is the
ability to do both the Mealy and Moore approaches (mixed syntactically into the same chart
if you desire). In addition, Stateflow's flow chart feature allows coding of if statements and
do loops. This is a powerful and significant addition to the Harel formulation. However,
the true genius of Stateflow is its ability to connect to Simulink. This allows integration of
the software design and the hardware right from the start. We will return to this subject in
Chapter 9.
Stateflow semantics are subtle. The Stateflow User's Guide should be consulted as
you work with the tool to ensure that you use the most effective techniques when you build
a model.
Exercise
7.1 Use the modifications you made to the two-room house model in Exercise 6.2 to
investigate the heating controller design. In particular, see what happens when the
wind speed increases and decreases. Try creating a wind gust model and use it to
drive the two-room house model. What do you think you could do to the controller
to better accommodate the heating system response to wind gusts?
Chapter 8
Physical Modeling:
SimPowerSystems and
SimMechanics
Almost every engineering discipline has its own unique method for developing a mathematical
representation for a particular system. In electrical networks, the pictorial representation
is the schematic diagram that shows how the components (resistors, capacitors, inductors,
solid-state devices, switches, etc.) are connected. In mechanical systems, depending on
their complexity, diagrams can be used to develop the equations of motion (for example,
free body diagrams), and in more complex systems, the Lagrange and Hamiltonian variational
methods are used. In hydraulic systems, models consist of devices such as hydraulic pumps
and motors, storage reservoirs, accumulators, straight and angled connecting lines, valves,
and hydraulic cylinders.
The models of the devices and components used in various disciplines have unique
nonlinear differential equation representations. Their inputs and outputs come from both
the internal equations and the equations of the devices with which they interact. Before an
engineer can use Simulink's signal flow paradigm, he must develop the differential equations
he needs to model. However, because almost every discipline has a unique pictorial
representation that helps in the derivation of these equations, an intermediate process leads
to the ultimate Simulink model. Unique pictorial tools can capture the domain knowledge
for different disciplines, and this domain knowledge has evolved so practitioners can easily
develop the underlying differential equations for complex models.
The domain models pictorially represent interconnections of components. Once the
picture is complete, analysts use various methods for reducing them to the required equations.
These methods become quite complex when the system has many parts, and even for
simple problems, developing the equations from the picture can be a bookkeeping nightmare.
Producing Simulink models for these disciplines seems to demand a method that will
allow the user to draw the relevant pictures and then invoke a set of algorithms that will
convert the picture into the Simulink model.
The MathWorks has developed the concept of physical modeling tools that provide
a way of drawing the domain knowledge pictures in Simulink. We will describe two tools
that have been created for doing this. The first, used for electrical circuits and machinery,
is SimPowerSystems. It allows the user to draw a picture of the circuit, and when the
simulation starts, the tool converts the picture into the differential (and algebraic) equations
Figure 8.1. A simple electrical circuit model using Simulink and SimPowerSystems. (The diagram contains a DC Voltage Source, a Switch driven by a Step input, a Resistor, a Capacitor, a Multimeter, the powergui block, and Scope blocks for the capacitor voltage and current.)
that the picture describes. The second tool is SimMechanics, which allows the user to draw
pictures of complex mechanical linkages and simulate them in Simulink.
The approach used to create the Simulink equations for each of these modeling tools
is very clever because the final simulation takes place in Simulink. Consequently, inputs
and outputs to and from the electrical or mechanical components can be Simulink blocks (or
blocks from other domains besides the electrical and mechanical systems). At the current
time, tools exist for hydraulic systems (Physical Networks), aerospace vehicles (Aerospace
Blockset), and automotive vehicle power plant and drive systems (SimDriveline). The final
simulations are hybrids of the specific domains, and the combined tool for each of the
engineering disciplines becomes the actual simulation in Simulink. We start our tour with
SimPowerSystems.
8.1 SimPowerSystems
An electrical network diagram shows how electrical components interact. Therefore, a
circuit diagram is not the signal flow diagram used in Simulink. One of the easiest examples
is the circuit diagram in Figure 8.1. The SimPowerSystems tool was used to draw this circuit.
In order to run this and the other examples in this chapter, you will need this tool.
(If you do not already own it, a demo copy is available from The
MathWorks.) The model Simple_Circuit is in the NCS library. The circuit consists of a
series connection of a 100 ohm resistor and a 0.001 farad capacitor with a 100 volt DC source
that is switched on at t = 1 sec. Because the circuit uses the standard symbols for the DC
source, the resistor, and the capacitor, the pictorial representation should be easy to follow.
The results of running this simulation are shown in Figure 8.2. As can be seen, the switch
closes at one second, and because this circuit has only a single state (i.e., it is a first-order
differential equation), the current and voltage have exponential decay and growth exactly
as we would expect.
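It is easy to check the simulation against the analytic step response of a series RC circuit. The sketch below uses the component values given above; the variable names are placeholders, and the switch-on time ton = 1 sec matches the description:

    % Analytic RC step response for comparison with Figure 8.2.
    R = 100; C = 1e-3; V = 100; ton = 1;     % ohms, farads, volts, sec
    tau = R*C;                               % time constant = 0.1 sec
    t  = linspace(ton, ton + 1, 200);
    vC = V*(1 - exp(-(t - ton)/tau));        % capacitor voltage after switching
    iC = (V/R)*exp(-(t - ton)/tau);          % circuit current after switching
    plot(t, iC), figure, plot(t, vC)

The simulated current should decay, and the capacitor voltage rise, with this 0.1-sec time constant.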
This is the first example of a physical model that we have encountered, and since
a physical model is different from Simulink, let us spend some time learning how these
Figure 8.2. The electrical circuit time response. Current in the resistor and voltage
across the capacitor. (Two plots show the current in amps and the capacitor voltage in volts against time.)
differences manifest themselves. First, a physical component, such as a resistor, capacitor,
or inductor (or a linkage or joint in the SimMechanics tool), has two types of connections.
The first type of connection creates the network (i.e., it connects components together). The
icon for this is a small circle that does not have an arrow (since the current flowing in an
element can flow in or out of the node, or the forces at a joint act on all of the links connected
to the joint). The second connection type is the small arrow that is the standard Simulink
connection. In early releases of the tools, it was possible to connect the two types, but in
the current release, the connection is not possible. If you use one of the earlier versions of
the tools, be careful not to make incorrect connections.
Two blocks in the diagram are unfamiliar and the way in which the switch changes
is different, but even these blocks should be clear. To understand the way this tool is used,
let us go through the steps that are required to build this model in Simulink. The blocks we
need are in SimPowerSystems under the Simulink browser. Open this from the Simulink
Library browser. When you double click on the SimPowerSystems icon in the browser, the
Library browser should resemble Figure 8.3(a). The library for the circuit models consists
of the following:
+ Application Libraries,
Electrical Sources,
Elements,
+ Extras Library,
Machines,
Measurements,
Power Electronics.
Figure 8.3. The SimPowerSystems library in the Simulink browser and the
elements sublibrary.
The categories with the + sign in front have further subdirectories that we will discuss
later. The blocks that are required to build this model are in the Electrical Sources, Elements,
and Measurements libraries. To start, open a new Simulink window (as usual); then open
the Electrical Sources browser and find the DC Voltage Source. Now drag a copy into
the Simulink window.
Next, open the Elements library (Figure 8.3(b)). This library contains all of the basic
elements needed to create a circuit.
One of the clever attributes of the Elements library is the way that the three basic
elements (resistor, capacitor, and inductor) are a single icon. The options are to select a
series combination or a parallel combination of these elements. This minimizes the number
of basic elements in the library. The parallel combinations are the ninth and tenth icons in the
library, and the series combinations are the fourteenth and fifteenth icons. The model we are
building uses single elements for the resistor and capacitor, obtained from either the series
or parallel combinations. The single elements in the diagram are the result of placing the
series or parallel combinations into the diagram and then double clicking them to open the
dialog box. The dialog that opens has a pull-down menu that allows the selection of a single
R, L, or C; a combination of RL, RC, and LC; or the complete R, L, and C circuit element.
The dialog also allows the initial voltage (for a capacitor) and the initial current (for an
inductor) to be specified. Last, the dialog allows the user to select what data in the element
are measured. (The options are the current in the branch, the voltage across the branch, or
both.) If you need to plot one of the signals in the circuit, use the Multimeter block in the
Measurements library. As can be seen in Figure 8.1, this block stands alone in the diagram.
The outputs from this block are a subset (or all) of the measurements that are specified in
the dialog. To select what is an output, double click on the Multimeter block, highlight the
desired measurement in the left pane, and then click the double right arrow icon to place
the selected items in the right pane. Buttons allow you to reorder the measurements or to
remove a highlighted measurement from the list. If you have not created the circuit elements
yet, drag them into your diagram and set the parameter values to 100 ohms and 0.001 farads.
In the capacitor icon, specify that you want to measure the current and voltage of the branch.
The last part of the diagram is the switch. Two switch elements are available for use
in the model. The first is the circuit breaker that is in the Elements library, and the second
is the ideal switch that is in the Power Electronics library. For this model, either switch can
work in the circuit. We have selected to use the circuit breaker. Drag a copy of the breaker
into your diagram and double click the icon. The dialog that opens allows the selection of
the on resistance for the breaker (note that this cannot be zero);
the initial state of the breaker (0 is off and 1 is on);
the specification of what is called a Snubber (a resistor and capacitor in series that
are placed across the switch; this represents either actual components that are in the
circuit, or it models the stray capacitance and the open circuit resistance of the
breaker when it is in the off state).
We will see why this snubber is important later.
We now have all of the elements in the model, and we are ready to connect them.
If you look carefully at the various icons, you should see that the electrical elements have
an icon that is different from the Simulink icons that we have seen so far. First, they do
not have an arrow to indicate a preferred flow direction (since there is none in a circuit),
and second, the connection points are small circles rather than the familiar little arrow in
Simulink. The exception is the circuit breaker, where the input on the top left is the arrow
(the arrow goes to the letter c in the icon). This small distinction specifies when icons in
the diagram expect a Simulink (signal flow) input/output or are part of the topology of the
electrical circuit. Usually these inputs and outputs are ports that give access to the internal
workings of the element. (For example, the state of the switch is determined by whether or
not the input c is greater than 0, and for the electrical machine elements, outputs provide
Simulink values for the internal states of the machine, such as the speed of the motor.)
Connect the circuit together to match Figure 8.1, and finally drag two Scope blocks
and a demux block from Simulink into the diagram to show the current and voltage across
the capacitor. When you run the simulation, the results should look like Figure 8.2. Once
again, draw the diagram neatly so that it is easier to follow. Toward this end, if you highlight
an icon, you can use the command Ctrl+R to rotate it.
8.1.1 How the SimPowerSystems Blockset Works: Modeling a
Nonlinear Resistor
Modeling a circuit as a schematic diagram represents its topology, not the underlying
differential equations. Kirchhoff's laws applied to the circuit (we encountered them in
Chapter 6) give the differential equations. These laws state that in a connection of elements,
the voltages around any closed loop sum to zero, and the currents entering and leaving any
node sum to zero. In creating these sums, you must be careful to maintain the signs of the
voltages and the direction of flow of the currents. (For example, assume current flow out of
the node is negative and that a capacitor connected to a node has a current that flows out of
the node.) With this convention, the currents flowing into the node from other elements are
positive, and similarly for an inductor and resistor. One of the easiest numerical methods
for reducing any circuit to a set of differential equations is to create a state-space model
where the voltages across the capacitors and the currents in the inductors are the states. The
topological representation of resistors connected in series or parallel leads to algebraic
equations. SimPowerSystems creates the state-space model automatically using this approach,
and it eliminates algebraic equations so the model is of the lowest possible order.
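For the circuit of Figure 8.1 this procedure yields a one-state model. Taking the capacitor voltage vC as the state and applying Kirchhoff's voltage law around the loop gives, with source voltage Vs,
dvC/dt = (Vs - vC)/(R*C),    i = (Vs - vC)/R.
The sketch below writes this in standard state-space form; it is a hand derivation for checking purposes, not the matrices the tool builds internally:

    % State-space model of the series RC circuit (state x = vC, input Vs,
    % outputs vC and the loop current i).
    R = 100; C = 1e-3;
    A = -1/(R*C);          B = 1/(R*C);
    Cmat = [1; -1/R];      D = [0; 1/R];
    sys = ss(A, B, Cmat, D);   % requires the Control System Toolbox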
When there are nonlinear elements in the circuit, SimPowerSystems isolates the nonlinear
part of the model from the linear part and creates a small feedback loop in Simulink
that models the linear pieces (using state-space models) with the nonlinear parts as an equation
(or equations) that manipulate the currents and/or voltages. The currents and voltages
are the values that come from the linear circuit. We will build a simple model to give some
insight into how SimPowerSystems does this. Therefore, let us build a model for a nonlinear
resistor. We will use this model in the next section, where we will create a simulation of an
electric train on a track.
The Electrical Sources library of the SimPowerSystems blockset contains two extremely
useful blocks: the Controlled Voltage and Current Sources. Each of these blocks
generates an output (a voltage or a current) that is equal to the value of the Simulink signal
at its input. They are the way in which Simulink couples into SimPowerSystems and are the
dual of the Voltage and Current Measurement blocks in the Measurements library. To create
a device that is nonlinear, we need to use a measurement and a controlled source block.
The equations that relate the voltage and current for resistors (Ohm's law), inductors,
and capacitors are
\[ e = Ri, \qquad e = L\frac{di}{dt}, \qquad i = C\frac{dv}{dt}. \]
If the elements are functions of the current or voltage in the network (or if they are functions
of some other variable that is indirectly a function of the voltages and currents), then the
Figure 8.4. A Simulink and SimPowerSystems subsystem model for a nonlinear
or time varying resistor.
circuit is nonlinear. To build a circuit element where the component values (R, L, or C) are
functions of some other variable, we need to use the definitions above to implement them.
So let us build a model of a nonlinear (or time-varying) resistor.
The nonlinear resistor model is shown in Figure 8.4. (It is in the NCS library and is
called Nonlinear_Resistor.) Try to create the model yourself before you look at the
model as we created it. To create it you need the Current Measurement block (in the
Measurements library of SimPowerSystems) and the Controlled Voltage Source in the Electrical
Sources library. You also need two electrical connection ports (in the Elements library).
The Current Measurement block has three ports: a pair for the electrical circuit labeled +
and -, and a port that is a Simulink signal output that is the measurement of the current
passing through the block. The desired resistance multiplies the measured current, which
is this Simulink signal. To create the product you need a Product block from the Simulink
Math Operations library.
The signals create a voltage output that satisfies Ohm's law, as indicated in the diagram.
It should be clear how one could use this approach to create an electrical circuit element in
Simulink that has any desired nonlinear characteristics.
The final step in the creation of this block is to add a mask (icon) to the block that
makes the element look like a variable resistor. We do this by selecting first the entire model
(use Ctrl+A to do this) and then the Create Subsystem option from the Edit menu. After
creating the subsystem, right click on the Subsystem block and select the Mask Subsystem
option from the menu. This will bring up the Mask dialog that allows the selection of the
attributes of the mask. We do not need to specify any parameters for this block, so only the
Icon, Initialization, and Documentation tabs are used.
The masked icon we created looks like Figure 8.5.
To draw this picture, we needed to define the plot points. This is under the Initialization
tab, where the following MATLAB code is entered:
yvals = [0.5 0.5 0.45 0.55 0.45 0.55 0.45 0.55 0.45 0.55 0.5 0.5];
xvals = [0 0.3 0.325 0.375 0.425 0.475 0.525 0.575 0.625 0.675 0.7 1];
Wyvals = [0.75 0.75 0.5 0.6 0.6 0.5];
Wxvals = [0.2 0.5 0.5 0.47 0.53 0.5];
Figure 8.5. Masking the subsystem with an icon that portrays what the variable
resistor does makes it easy to identify in an electrical circuit.
The first set of x and y plot coordinates draws the resistor, and the second set draws
the wiper (the arrow). The actual drawing is under the Icon tab in the dialog, and it uses the
code
plot(xvals,yvals)
plot(Wxvals,Wyvals).
The points that are plotted use the convention that the lower left corner of the icon is
the point (0,0) and the upper right corner is the point (1,1). This will plot correctly if the
Units (the last entry in the Icon options of the Icon dialog) are Normalized. We also have
specified in this dialog that the Rotation of the Icon is fixed (i.e., it will not rotate if the user
clicks on the icon and types Ctrl+R).
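If you want to sanity check the drawing before putting it in the mask, the same data can be previewed in an ordinary MATLAB figure window; this preview is just a convenience and is not part of the mask itself.

% Preview of the mask icon drawing commands in a plain MATLAB figure
yvals = [0.5 0.5 0.45 0.55 0.45 0.55 0.45 0.55 0.45 0.55 0.5 0.5];
xvals = [0 0.3 0.325 0.375 0.425 0.475 0.525 0.575 0.625 0.675 0.7 1];
Wyvals = [0.75 0.75 0.5 0.6 0.6 0.5];
Wxvals = [0.2 0.5 0.5 0.47 0.53 0.5];
plot(xvals, yvals, Wxvals, Wyvals)   % resistor zigzag and the wiper arrow
axis([0 1 0 1])                      % matches the normalized icon coordinates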
The last part of the mask dialog is the documentation. This tab in the dialog allows
the user to name the block and create a description of the block that appears after double
clicking on the block; it also provides the user with help on the block. To navigate through
all of these, open the NLResistor model and select the block. Double click on it to see the
mask, look under the Mask to see the model, and finally edit the Mask to see the various
dialogs. If you choose, you can add this block to the SimPowerSystems Elements library so
it will be available in the future. This is essentially how the Elements library was developed.
SimPowerSystems is extendable to include any type of electrical device, and the
library contains many such models. Take a few minutes to browse through the complete
library. Note that the Application libraries contain a set of wind turbine induction generator
models, various induction and synchronous motor models, a set of DC drives that use one-
or three-phase AC in a bridge rectifier, a set of mechanical elements that model a mechanical
shaft (with compliance and damping), and a speed reducer. Many of these models contain
blocks from the Power Electronics devices library, where models for nonlinear electronic
elements such as SCRs, IGBTs, diodes, and MOSFETS are located. These models use
Simulink S-function code in the background to do the calculations that are required to
implement the voltage and current characteristics of the devices. The outputs from, and input
to, these S-functions use the same approach that we used to create the nonlinear resistor.
8.1.2 Using the Nonlinear Resistor Block
Before we create a circuit with the nonlinear resistor, you should be aware of a few subtleties.
Figure 8.6. Circuit diagram of a simulation with the nonlinear resistor block.
The nonlinear resistor block introduces an algebraic loop in Simulink. This loop
automatically generates an error message that will appear in the MATLAB window every
time the simulation starts. To get rid of this message, select the Configuration Parameters
option in the Simulation pull-down menu after you build the model. The Diagnostics tab in
this dialog allows you to change the algebraic loop diagnostic from warning to none. Every
time you build a new model with this block, you will need to do this. If you are going to build
this model (and not use the version in the NCS library), then make this change. In addition,
the SimPowerSystems tool prompts you to use the Ode23t solver for the simulation. If you
do not do this, again an error message appears every time you run the model. The reason
this solver is suggested is that, for technical reasons having to do with the order of the
resulting differential equations, the electronic devices that have discontinuities (switches,
diodes, thyristors, and related power electronics, etc.) all have a snubber circuit that is an
RC circuit in series across the switched part of the device. The dynamics associated with
the snubber makes the system stiff. Ode23t is a stiff solver that minimizes the number of
calculations at every time step, making it run fast and accurately. Thus, if you are creating
this model yourself, make sure that you select this solver.
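Both settings can also be made from the command line if you prefer; the sketch below assumes the model is named NL_Res_Circuit and uses the underlying configuration parameter names AlgebraicLoopMsg and Solver.

% Hedged sketch: turn off the algebraic loop warning and pick the stiff solver
set_param('NL_Res_Circuit', 'AlgebraicLoopMsg', 'none');
set_param('NL_Res_Circuit', 'Solver', 'ode23t');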
We can now try to create a circuit that uses the nonlinear resistor and demonstrate
how it works. The circuit, called NL_Res_Circuit in the NCS library, is in Figure 8.6.
Before we explore the model, we need to describe a new block in this diagram,
the Signal Builder block. This block allows the user to create an input that consists of a
piecewise continuous sequence of steps and ramps. The tool allows the user to specify
Figure 8.7. Using the signal builder block in Simulink to define the values for the
time varying resistance.
when the discontinuities occur, the amplitude of the steps, and the slopes of the ramps.
Figure 8.7 shows how the user interface looks. In this figure the rightmost discontinuity of
the waveform has been selected (highlighted with a little circle), and consequently the time
(0.5) and amplitude (3) of this edge are displayed at the bottom of the graph. By clicking
and grabbing, any line can be moved (up and down if the line is horizontal, or left and right
if it is vertical). If a point is selected, it may be moved up and down. The duration of the
waveform comes from the Axes menu (where the first choice is Change Time Range).
This waveform drives the nonlinear resistor block, so the resistance will vary from 1 ohm
to 4 ohms over the first 0.1 sec. The resistor then changes from 4 to 5 ohms over the next
0.2 sec and then drops from 5 to 3 ohms over the next 0.2 sec, after which the value is constant
until the end (at 1 sec).
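For reference, the same resistance profile can be written as a piecewise linear function of time; the lines below simply reproduce the breakpoints described above so the waveform can be checked numerically.

% Breakpoints of the Signal Builder waveform: time (sec) and resistance (ohms)
tb = [0 0.1 0.3 0.5 1];
Rb = [1 4   5   3   3];
Rof_t = @(t) interp1(tb, Rb, t);   % piecewise linear resistance profile
Rof_t([0.05 0.1 0.4 0.8])          % returns 2.5, 4, 4, 3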
The circuit we created in Figure 8.6 consists of the nonlinear resistor in series with
the RL branch (the branch resistance is 1 ohm and the inductance is 0.1 Henry) with a
100 volt DC source that is applied at time t = 0.05 seconds. Ignoring the transient due
to the inductance, the current in the RL branch would be 28.5 amps at the time the switch
changes state; it would be 20 amps at 0.1 sec and it would be 33 amps at the end. The
simulation results are in Figure 8.8(a) and 8.8(b).
The transient due to the inductance shows up quite clearly in the first graph of the
voltage across the RL branch, and the nonlinear resistance is quite clear from the current
[Figure 8.8 plots: (a) the voltage across the RL branch (volts) and (b) the current in the nonlinear resistor (amps), both versus time from 0 to 1 sec.]
Figure 8.8. Using SimPowerSystems: Results from the simple RL network simulation.
flowing in the Nonlinear Resistor block. Note that this model is time varying and linear
because the resistance does not depend on any of the internal states in the model. (It follows
the time variation specified by the Signal Builder block.) However, it should be obvious to
you that the input to the nonlinear resistor could be any variable in the Simulink model.
Before we leave this discussion, note that the SimPowerSystems blockset manual has a
description of a nonlinear resistor wherein Ohm's law is implemented using a controlled current
source and a voltage measurement (instead of the voltage source and current measurement
we used). It pays to look at that description in the user's manual (see [47]).
8.2 Modeling an Electric Train Moving on a Rail
As a practical application of the SimPowerSystems tools, we model an electric train moving
along a rail. We create the electrical circuits using SimPowerSystems, and we use Simulink
to model the dynamics of the train's motion. The example illustrates how the combination
of these two different modeling tools into one diagram makes the simulation far easier to
understand.
A train moves along a set of rails called traction rails since they provide the friction
force that the wheels need to propel the vehicle. The train gets its power either from a third
rail (the power rail, usually found at the side of the traction rails) or from an overhead wire.
The power transfers from the rail or the overhead wire to the train using a connector that
slides along the rail or wire. The power circuit completes through the traction rails (i.e.,
the return path for the current uses the traction rails as the conductor). As the train moves
along the traction rails, the resistance between the power supplies and the train for both the
traction and power rails changes. For example, if a train is traveling at 60 mph (88 ft per
sec) on a set of rails with a resistance of $10^{-6}$ ohms per ft, then the resistance of the rail
from the supply to the train is 0.001 ohms when the train is 1000 ft away from the source. If
[Figure 8.9 annotations: blocks in green model the electrical components of the motor; blocks in cyan model the dynamics of the train; blocks in light blue model the PI motor controller, the motor converter, and the converter time constant.]
Figure 8.9. Simulink and SimPowerSystems model of a train and its dynamics.
the train is moving away from the source, the rail resistance will increase by about 0.000088 ohms
every second. The voltage and current flowing in the motor give the train's speed, and the
rail resistance determines them. This means that the interaction of the train propulsion with
the track geometry is nonlinear. Thus, modeling the interaction between the train and the rails
makes extensive use of the nonlinear resistor block we created.
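A quick check of these numbers, using the per-foot resistance assumed above:

% Back-of-the-envelope check of the rail resistance values
r_per_ft = 1e-6;              % rail resistance, ohms per foot (assumed above)
v = 88;                       % train speed, ft per sec (60 mph)
R_at_1000ft = 1000*r_per_ft   % 0.001 ohms from the supply to the train
dR_dt = v*r_per_ft            % about 8.8e-5 ohms of added rail resistance per second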
To start the modeling, we first need to create a DC or AC traction motor. We assume
that the system uses DC, and we further assume that a control system keeps the supply
voltage constant. We also assume that the traction motors are controlled and that we are
modeling an urban transit system where each of the cars in the train has identical traction
motors. We will lump all of these traction motors together into one equivalent motor. The
model of the traction motor uses a masked subsystem where all of the parameters needed
to describe the entire train drive motors are in the mask dialog.
Open the model in Figure 8.10 from the NCS library, using the command
Train Simulation. The train subsystem model is the green block in this diagram (la-
beled Train), and to view it you need to select Look Under Mask from the menu that
opens when you right click on the block. (The Train subsystem is in Figure 8.9.) This train
model consists of a combination of Simulink and SimPowerSystems blocks. These blocks
are in color, but here they are in black and white. The color makes it easier to understand the
diagram. The color keys each part of the diagram to its function. The SimPowerSystems
blocks are in green in the model, the control system that controls the acceleration of the
train is in light blue, and the dynamics of the motion are in cyan. We describe the blocks,
and their interconnections, beginning with the power electronics blocks (at the lower left of
the figure). We will work our way clockwise around the diagram.
We have already seen in Section 3.5 how to model a motor. The electrical and
mechanical sides of a motor are
\[ L_R \frac{di_R}{dt} = -R_R i_R + V_{in} - V_b \qquad\text{and}\qquad \frac{d\omega_M}{dt} = \frac{1}{J_w}\left(K_m i_R - B_w \omega_M\right). \]
These equations are not required in this model since SimPowerSystems will create them
for us. The circuit for the motor consists of the input ports (from the Elements library) and
the inductance and resistance of the train lighting, heating, ventilation, and air conditioning
systems connected across the ports. This load is in parallel with the series circuit consisting
of the armature inductance, armature resistance, and the back emf from the motor. In this
model, the back emf is a voltage source whose voltage is the product, using the gain block,
of Vconst and the train speed. In building this motor model, we deduced the rotational rate
of the motor from the train's speed and the gearing that connects the motor to the traction
wheels. The diagram was cleaned up using GoTo and From blocks, so the train speed
is sent into the gain using the GoTo at the top right of the diagram.
Another controlled voltage source is also in series with the armature circuit. This
source models the DC voltage converter that controls the acceleration of the train. This
controller is a simple PI control of the type that we have discussed before, so we will not go
into details. The only important aspect of the control is that it essentially implements the
nonlinear resistor block (although the block is not explicitly used). This approach emulates
the way the actual converter on the train operates. We use the measurement of the current
in the armature for both the train controller and the force that propels the train. The right
side of the diagram implements this. The force is the result of multiplying (through the gain
block) the motor parameter ForceConst and the current in the armature.
From the armature current measurement on, the diagram consists of only Simulink
blocks. These implement the acceleration from the sum of the forces on the train. These
forces are
• the external force F, which comes from the force of gravity acting when the train is moving
on a grade;
• the friction force (proportional to the speed);
• the aerodynamic drag (proportional to the square of the speed);
• the force from the motor (traction motor force).
The summing junctions before the gain block (with the gain 1/TotalMass) create the
total force on the train, and the gain creates the acceleration. As usual, we integrate the
acceleration and speed to get the entire state for the train. The state vector uses the Mux
block to send the state out of the subsystem. The other outputs from the subsystem are the
motor current and, of course, the connections to the motor.
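A compact way to summarize what the cyan blocks compute is the acceleration function below; the parameter names are illustrative and are not necessarily the names used in the model's mask.

% Hedged sketch of the train's longitudinal dynamics (illustrative names)
function a = train_accel(v, i_arm, F_ext, ForceConst, Kfriction, Kdrag, TotalMass)
% v is the train speed, i_arm the armature current, F_ext the grade force
F_motor = ForceConst*i_arm;                        % traction force from the motor
F_total = F_motor + F_ext - Kfriction*v - Kdrag*v^2;
a = F_total/TotalMass;                             % integrate twice for speed and position
end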
This entire motor model is a masked subsystem. The train model connects to the
simulation of the track in the diagram using the SimPowerSystems connection icons. The
[Figure 8.10 block labels: four substations (SubStation1 through SubStation4 at locations Pos1 = 42543 ft, Pos2 = 47543 ft, Pos3 = 55543 ft, and Pos4 = 60543 ft) with their supply resistances; east and west power and traction rail nonlinear resistances; the Train subsystem; Signal Builder blocks for the track slope and acceleration command; scopes for the rail resistances, rail and train currents, and train state; and a Stop Simulation block triggered when the train reaches SubStation3.]
Figure 8.10. Complete model of the train moving along a track.
final model is in Figure 8.10. The model contains four DC voltage sources and their series
source resistances. These four sources represent power substations along the track. We
model the train moving between substations 2 and 3 (i.e., in the middle of the various
substations). Four copies of the nonlinear resistor model the variable resistance between
the train and the adjacent substations. We assume the train is moving to the right, so the
nonlinear resistors to the left of the train (denoted West in the annotation) start at 0 ohms
and rise to their maximum value when the train is at substation 3 (on the right). This
model will not be correct when the train position exceeds the location of this substation (the
topology of the model will change). Therefore, we calculate where the train is relative to
this substation and stop the simulation when the train position reaches it. The block that
does this also calculates the resistances for the nonlinear resistor blocks.
Each of the blocks in this model is a SimPowerSystems block except for the GoTo and
From blocks, the Scope blocks, and the Signal Generator blocks that create the slope over
which the train operates and the acceleration commands that are sent to the train. Running
the simulation creates plots of the four track resistances, the state (position, speed, and
acceleration), the currents in the four nonlinear resistors, and the current that the train is
drawing from the track.
The Simulink subsystem that computes the resistances and generates the stop com-
mand is in the top right of the diagram. It has the Simulink blocks shown in Figure 8.11.
[Figure 8.11 block labels: the train position (from the States signal) is referenced to the substation locations Pos2 and Pos3; gain blocks Rtraction and Rpower convert the two distances into the four output rail resistances (West Traction, West Power, East Traction, East Power), and a >= relational operator compared with Pos3 produces the Stop Cmd. output.]
Figure 8.11. The Simulink blocks that compute the rail resistance as a function of
the train location.
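In MATLAB terms, the Figure 8.11 subsystem amounts to something like the sketch below, where x is the train position taken from the state vector and Pos2, Pos3, Rtraction, and Rpower are the substation locations and per-foot resistance gains shown in the figure; the variable names are assumptions for illustration.

% Hedged MATLAB equivalent of the Figure 8.11 calculation
Rwest_traction = (x - Pos2)*Rtraction;   % rail back to substation 2 (west)
Rwest_power    = (x - Pos2)*Rpower;
Reast_traction = (Pos3 - x)*Rtraction;   % rail ahead to substation 3 (east)
Reast_power    = (Pos3 - x)*Rpower;
stop_cmd = (x >= Pos3);                  % stop the run when the train reaches substation 3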
[Figure 8.12 plots: (a) the resistance of the rail segments between the train and the power substations (traction and power rails to SS2 and SS3) versus time, and (b) the currents in the west traction rail, west power rail, the train, the east traction rail, and the east power rail versus time, 0 to 200 sec.]
Figure 8.12. Simulation results. Rail resistances, currents in the rails, and current
to the train.
When the simulation opens, a MATLAB M-file called Train_Data runs (using the
standard Model Properties callback). Edit this file and look at how it is constructed. You
can increase the drag on the train by changing the variable Tunnels in MATLAB from one
to two. You can also change any of the parameters.
The simulation creates the graphs in Figure 8.12(a) and (b). The leftmost graph shows the
nonlinear resistor values as a function of time (and since the train is moving, as a function of
the location of the train). The rightmost picture shows the currents in the nonlinear resistors and
in the train. The five graphs, from the top to the bottom, are the current in the traction and
power rails to the left (west) of the train, the train current, and the currents in the traction
and power rails to the right (east) of the train.
To get familiar with this model, open it and try changing some of the parameters. In
addition, you can use the command generators to change the slope that the train is on and
the acceleration command.
We have only touched on a few of the many possible models that SimPowerSystems
can create. The user's guide and the demos provided with the tool should be the next step
if you want to become an expert with it.
8.3 SimMechanics: A Tool for Modeling Mechanical
Linkages and Mechanical Systems
Just as the SimPowerSystems tool allows a picture of an electrical network that interacts with
Simulink, the SimMechanics tool allows you to draw a picture of a complex linkage with
multiple connected coordinate systems. The model you create will then run in Simulink. It
also can use Simulink inputs, and the motions of any of its components are available for use
in Simulink. It is also possible to combine SimPowerSystems and SimMechanics models.
SimMechanics has a look and feel that are similar to SimPowerSystems. However,
there are significant differences. When mechanical elements are connected to form a machine,
there are physical attributes that need to be specified. First is the coordinate system
that the entire model will use. The second is the coordinate system that applies to each of
the bodies in the device. SimMechanics assigns a coordinate system to each body (usually
at the center of gravity) and to every point on the body that can be connected to another
part of the machine. (There is a large variety of possible connections: joints that allow
rotations in one, two, or three directions and joints that are rigid, called welds.) Entering
these coordinates means that there are a lot more data associated with the elements in
SimMechanics than was the case for SimPowerSystems. One rule is still inviolate: no connections
are permitted between a SimMechanics port (a connection with a small circle on it, just like
SimPowerSystems) and a Simulink signal. These connections must be made with measurement and
controlled sources (as was the case for SimPowerSystems) that are in the Sensors and Actuators
library in SimMechanics.
The entire SimMechanics library looks like
the figure at the left. The Bodies sublibrary contains
four blocks. The first specifies a body that is
the basic element of the mechanical system. The
second block in the library is the Ground block. It
specifies the coordinates of the base of the device.
It is possible for the Ground to represent the ground
(for example, when you want to model the trajectory
of a projectile). Two entries are needed in this block: the coordinates of the origin and
a check box that will create an output from the Ground block that you can use to connect
the machine. Every SimMechanics model must have a Ground block.
The third library block (Machine Environment) specifies the environment for the
device (i.e., the value of the gravity vector, the values used for the differential equation
solver, etc.). Solvers for SimMechanics exist for solving the differential equations forward,
as in Simulink, or backward. The backward solver uses the motions of each element to find
what forces would create them. Kinematical solvers give the steady state solution without
going through the differential equations. Last, a trimming solver does the calculations
needed for the simulation to start with all of the elements at rest (so no motion will occur
in the device until an external force or torque is applied). Every SimMechanics model must
have an Environment block.
When you double click on the Environment block, you will see that the dialog opens
with the tab called Parameters selected. All of the above settings (and a few more) are
accessible with this tab. You also will see that this is one of four tabs across the top. The
Constraints tab allows you to specify the solver tolerances when a hard constraint on the motion of
a body or joint occurs. Constraints are usually limits that the motions must not exceed. The
third tab, Linearization, allows you to select the size of the perturbation used to determine
the linear model. (Chapter 2 contains a discussion of linearization and a description of
perturbation.) The last tab, Visualization, allows you to invoke the SimMechanics tool
that draws and animates a picture of the device as the simulation progresses. Make sure
that you check this box before you close the block; this will show the motion of the device
during the simulation.
The Constraints and Drivers library has a large variety of constraints that the bodies
and joints in the simulation model can have. The idea behind a driver is to force a body or
joint to have predetermined displacements or angles as a function of time; they provide the
inputs when we compute the forces that cause a particular motion.
The Force elements in the library are blocks that add a spring or a damper to a body to
cause the linear or rotational motion to have stored energy or to lose energy due to friction.
Without the blocks, the model would require a separate sensor and actuator connected
through Simulink to provide the spring and damping forces.
The meat of SimMechanics is the Joints library (which is comparable to the Elements
library in SimPowerSystems). There are 22 different joints in this library. We will discuss
a few of them, but if you want to use this tool in models that are more complex, you should
be aware of and understand each of these blocks.
The Sensors and Actuators blocks are the means used to allow Simulink to talk and
listen to SimMechanics models. These blocks allow connections to and from Simulink.
To understand how SimMechanics works, we replicate the development of the clock
model in Chapter 1 using SimMechanics. We start by modeling the pendulum.
8.3.1 Modeling a Pendulum with SimMechanics
One of the easiest models to create is a single joint and body configured to be a swinging
pendulum. Remember that in Chapter 1 we created this model using Simulink alone, so this
exercise is an interesting contrast of the differential equation development and the pictorial
Figure 8.13. Modeling a pendulum using SimMechanics.
Figure 8.14. Coordinate system for the pendulum.
representation provided by SimMechanics. The pendulum model we will develop is in
Figure 8.13. You can open the SimMechanics model (called SimMechanics_Pendulum in
the NCS library), or you can launch SimMechanics and try building the model from scratch
yourself. Since we will take you through all of the steps to create this model, it is probably
better to build the model yourself.
Because every SimMechanics model needs an Environment and Ground block, open
an empty Simulink window and drag one of each of these blocks from the SimMechanics
library into it. Then connect the Environment and Ground blocks together. Double click on
the Environment and Ground blocks to view their dialogs. The Environment block opens
with the default gravity vector set to 9.81 m/sec^2 in the negative y direction. The default value
for the Ground block coordinates (defining the coordinate system for the entire mechanical
device) is [0 0 0]. These defaults mean that the coordinates for the environment are as
shown in Figure 8.14. We add the pivot for the pendulum and the pendulum body using this
World coordinate system, as shown in the figure.
The pivot for the pendulum is a revolute block from the Joints library. The revolute
block allows rotations around a single axis (a single degree of freedom in the Base). The
axis of revolution can be referenced to the World, to the body that the joint is connected to
(called the Base), or to the body that rotates relative to the base (called the Follower). All
joints connect to two bodies. (In this case, the Base or B port connects to the Ground, and
the pendulum, the second body, connects to the Follower or F port.) The coordinate system
for the Pivot is the World coordinate system. (Open the Revolute dialog for the Pivot block
and look at the Axes tab, where the rotation axis is set to the z-axis and the coordinates,
called Reference CS in the dialog, are the World coordinates.)
A joint can have sensors that measure the rotational angle, velocity, etc. and can
have forces or torques applied. To allow this, the Joint blocks have a dialog that allows
selection of a different number of sensor/actuator ports. In our model, we need a port for
sensing the motion and a port to apply the damping force, so the number of ports is set to
two. To make the measurements we need a Body Sensor from the Sensors and Actuators
library, and to create the damping force, we need a Joint and Spring block from the Force
Elements library. The joint sensor dialog for the model has the check boxes selected that
provide measurements of the angle and angular velocity of the joint, using degrees as the
angle units. The Joint Spring and Damper block is configured to only provide damping.
(The spring constant is set to 0, and the damping coefficient is set to 0.00005.) This block
creates equal and opposite torques for each element connected through the joint.
The last block needed to describe the physical model is the Body block that models the
pendulum. This block connects to the joint follower port (the F port). So grab a Body block
and drag it into the model. Any number of bodies may be in the model, and SimMechanics
automatically generates the equations of motion. The tool even takes care of computing
the rotations of the bodies relative to each other and the base, using one of the methods we
developed in Chapter 3 (Euler, angle-axis, or quaternion). For now we need only to enter
the body mass properties (mass and inertia) and the location of the center of gravity and
the attachment point to the pivot. Open the dialog on the body and set the mass to 1 kg
and the inertia matrix to the diagonal matrix with 0.001 kg*m^2 along the diagonal. The
center-of-gravity (CG) coordinates in the Ground Coordinates block are set to the origin,
and the CG coordinates in the Pendulum block are set to [0.05 -0.2485 0]. This corresponds
to a slight offset in x (5 cm) for the pendulum. (This will cause the simulation to start with
the pendulum moving because of the gravity force.) The value of the CG displacement
of 0.2485 m along the negative y axis will make the pendulum move with a period of 1
sec. (Remember that the frequency of the pendulum is $\sqrt{g/l}$, which can be solved for the
pendulum's length to give 0.2485 m.)
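You can verify that length quickly at the MATLAB prompt:

% Pendulum length for a 1 second period (small-angle approximation)
g = 9.81;  T = 1;              % gravity in m/sec^2 and the desired period in sec
l = g/(2*pi/T)^2               % about 0.2485 m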
Connect all of the blocks as shown in the diagram above, making sure that you
understand why these connections are all required. Also, make sure that all of the dialogs
that we specified along the way are in place and that you have checked the animation box
in the Visualization Tab of the Environment block. Finally, connect a scope block (with
two inputs) to the joint sensor. We have set the simulation time to 100 sec, and using the
Configuration Parameters dialog, we have set the maximum step size for the solver to 0.1
sec. You can start the model now.
When the simulation is complete, the pendulum simulation will start and the animation
will appear. You should be able to watch the simulation evolve both through the animation
and the graphs displayed in the Scope. This ability will be there no matter how complex the
Figure 8.15. Simulating the pendulum motion using SimMechanics and its viewer.
mechanical system you have modeled. (A snapshot of the animation and the Scope plots is
in Figure 8.15.) Remember that this simulation, unlike the one we developed in Chapter 1,
did not require us to develop the equations of motion. Furthermore, it was created quite
rapidly. (Of course, if this model was the first time you used SimMechanics, there was a
learning curve you had to go through, but you can easily appreciate how the tool could save
a large amount of time when you are modeling complex linkages, machines, or dynamics.)
8.3.2 Modeling the Clock: Simulink and SimMechanics Together
The simple pendulum model lets us redo the simulation of the clock from Chapter 1. The
approach used in the SimPowerSystems tool, where a Simulink value converts into a physical
value (a voltage or current), is also the physical modeling approach here. For SimMechanics, the
physical measurements can be quite numerous, as we have already seen. The SimMechanics
actuators can apply a torque, a force, or a motion based on any combination of measurements
and Simulink calculation that we want. The Damping force block for the pendulum is an
example of this; it is really a masked block with both an actuator and a sensor built in.
The calculations that provide the desired damping and spring force are in a Simulink block.
To create the clock model, we will actually build the damping, so we will not need the
Damping block. We also will use the Escapement force block that was used in Chapter 1,
and finally we will build a Signal using the signal builder to give the pendulum an initial
displacement. The initial displacement will allow us to place the center of gravity of the
pendulum at the pivot location in both x and z. (Since we will be kicking the clock
pendulum to start it, we will not need the offset of 0.05 specified for the pendulum CG to
get the simulation started.)
It should be easy for you to add the blocks needed to create the clock simulation, so try
to do so. The model in Figure 8.16 is SimMechanicsClock in the NCS library. The joint
sensor measures both the angle and the angular velocity of the pendulum; these are inputs to
the escapement block. The angle and angular velocity determine the forces applied by the
escapement and the angular velocity determines the damping force. The signal generator is
a pulse of amplitude 1 with duration 0.1 sec applied at the start of the simulation.
Figure 8.16. Adding Simulink blocks to a SimMechanics model to simulate the
clock as in Chapter 1.
[Figure 8.17 plot: pendulum angle (degrees) versus time, 0 to 30 sec.]
Figure 8.17. Time history of the pendulum motion from the clock simulation.
Most of the parameters in the model remain the same, except that in Chapter 1 we
used radians as the units, and here we use degrees. Thus, the escapement values Penlow
and Penhigh are 5.6 and 8.6, respectively, and the gain for the escapement force increases
to 1.6 (in the Gain block). The signs of the damping and the escapement forces applied to
the joint are negative since they reference the follower. The result of the simulation from
this model is in Figure 8.17.
8.4 More Complex Models in SimMechanics and
SimPowerSystems
There are many examples of complex motions in the demos that ship with SimMechanics,
and it is very useful to look at them. Before we leave this tool, though, there is one example
that is worth looking at. In Chapter 7, we talked about modeling partial differential equations,
and we said that one could model vibration using the same approach as we used to model
the heat flow in the house. We also said that we could build a more complex home heating
system house model. Let us illustrate both of these now.
The first model is a modification of the vibrating string model demo that is included
with SimMechanics ([2] was the source for the model and the data). A modified version of
this model is the model SimMechanics_Vibration in the NCS library. Open this model
and run the simulation. The model uses 20 finite linkages to model a steel string (like a guitar
string) that is 1 m long. Each of the elements is a two degree of freedom representation
of the string's tension and rotation. Each of the elements is a small mass that can rotate
and translate (while transferring the forces that result from element to element). Browse
through the model and locate each of the elements. Look at the dialogs that each element
has to see what the data are, and then look at their values using MATLAB. The data is stored
in a MATLAB structure called fBeamElement that has the components
material: Mild Steel
density: 7800
youngsM: 210000000
diameter: 5.0000e-004
length: 0.0500
cgLeng: 0.0250
beamDir: [1 0 0]
Inertia: [3x3 double]:
matDamping: 5.0000e-004
The inertia matrix for each of the elements is
1.0e-007 *
0.0005 0 0
0 0.2500 0
0 0 0.2500
The model (Figures 8.18 and 8.19) looks complicated, but it is simply the same block
replicated 20 times. The 20 inputs allow you to pluck the string at any of the 20 lumped
locations (every 5 cm) along the string. It is interesting to do this while viewing the vibration
to see how many of the various harmonics are excited as you pluck the string near the edges
versus near the center. Several examples are in the output of SimMechanics shown in
Figure 8.20.
It is easy to modify this model to make it have more elements. We explore more
attributes of this model in Exercise 8.3.
The last model we will build in this chapter shows a complex electrical circuit. We
have modeled a four-room house heat flow (FourRoomHouse in the NCS library, shown in
[Annotation in the Figure 8.18 model: This is a model of a 1 meter long string made of "mild steel" that is plucked and then vibrates in a uniform gravitational field. There are 20 elements altogether, and each element is 5 cm long. The elements are identical (they consist of a spring constant and damping in each of the 2 degrees of freedom: 1 rotation that transfers torques from element to element and 1 translation to transmit tension from element to element). Each of the 5 cm segments can be plucked by changing the connection point for the "External Pluck"; the Pluck ports on the model are numbered from left to right. The mass properties for the elements are separately defined in the "Mask" for each of the 20 elements. See Klaus-Jurgen Bathe, Finite Element Procedures, Prentice Hall, 1996, for details. This model is a modification of the Vibrating String model that ships with SimMechanics.]
Figure 8.18. SimMechanics model Sim_Mechanics_Vibration.
Figure 8.19. String model (the subsystem in the model above). It consists of 20
identical spring mass revolute and prismatic joint elements.
Figure 8.20. String simulation. Plucked 15 cm from the left, at the center, and 15
cm from the right.
Figure 8.21. Using SimPowerSystems to build a model for a four-room house.
Figure 8.21). In addition to the four rooms, instead of the two that were created in Chapter 6,
we have also modeled the thermal capacity of the wall. By modeling the wall capacity,
we can use a measurement of the wall temperature as a way to account for the outside
temperatures. The model is not complete, and in the exercises at the end of the chapter, we
ask that you finish the model (Exercise 8.4). The model uses the SimPowerSystems tool.
The rooms are all modeled using an identical circuit. This allows you to draw one room
and then copy the circuit elements to make the next room, and so on. Look carefully at the
way Simulink creates the circuit element names. Because Simulink does not allow identical
names, copying an element causes Simulink to change the number at the end of the name
to the next sequential number. For this reason, copying the circuit elements automatically
changes the name to the next room. (Try making a fifth room to see how this works.)
We pointed out when we built the two-room model in Chapter 6 that the equivalent
R-values for the various losses (in particular for the ceilings, floors, walls, and windows)
include an approximation to the heat loss from convection (i.e., from the flow of air in and
out of the various spaces). Unfortunately, this approximation does not account for the effect
of increased convection when the outside winds increase in speed. We can include this
effect using the nonlinear resistor model we created in Chapter 6. It is simply a matter of
changing the resistor at the right of each room to one of the nonlinear resistor blocks. Note
that in this model, the resistor will be time varying (a function of the wind speed) and
therefore the block is linear. We leave this as part of Exercise 8.4.
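One way to drive such a resistor is with an empirical film coefficient that grows with wind speed. The correlation below is a common textbook form offered only as an illustration; the constants are assumptions, not values from this book.

% Hedged illustration: outside-surface film coefficient as a function of wind speed v (m/sec)
h = @(v) 5.7 + 3.8*v;          % W/(m^2*K); one common empirical correlation
Rfilm = @(v) 1./h(v);          % the corresponding surface resistance
Rfilm([0 5 10])                % roughly 0.175, 0.040, 0.023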
The models created in this chapter show how Simulink includes many physical mod-
eling concepts. We have not shown all of the uses of these tools, nor have we shown how
the SimMechanics tool can use a CAD tool to build the SimMechanics model automatically
from the CAD drawings. If you need to pursue this further, see [46].
Newer physical modeling tools are useful in the automotive industry and for modeling
hydraulic systems. In addition, The MathWorks is developing new tools for other physical
systems. Collectively, these tools will allow engineers to create simulation models rapidly.
We are almost at the end of our tour of mathematical modeling with Simulink. The
next chapter tries to show, with a relatively simple example, how to use Simulink in the
overall design and development process. The key to doing so is the development of a process
that makes it possible for all of the people working on the design of a system to interact
using the same environment. By wrapping a process around Simulink and its various tools,
a company can easily create new designs. One of the features is that the staff can maintain a
clear path to the corporate legacy (intellectual property) so previous designs can easily and
seamlessly be incorporated into their newer designs. The models also provide a simple way
to document designs and, last but not least, have a way of generating code that will satisfy
the design requirements.
The next chapter takes us into the realm of product development and all of the issues
associated with ensuring that the process creates a reliable and accurate device at the lowest
possible cost.
8.5 Further Reading
The nonlinear resistor model and the model of the train are from [15]. I developed these
to investigate a method for adding high temperature superconducting wire for power at the
center of the Bay Area Rapid Transit (BART) Trans-Bay Tube between Oakland and San
Francisco. This report used a SimPowerSystems model for multiple trains operating over
various segments of BART. It allowed a preliminary design of the power distribution system
and allowed experiments that investigated alternative approaches.
Once again, we are becoming aware of the importance of energy conservation in the
world. The model of the four-room home is useful to answer personal questions about
the cost effectiveness of adding insulation to your home (particularly when there are tax
advantages for adding insulation). It is fun to create a model of your own home and play the
"What if?" game with insulation and other energy efficiency improvements.
In [8], The MathWorks's technical staff describes a method for creating a SimMechanics
model that computes the motion of a flexible beam. The paper describing the method
and the models is available from The MathWorks's web site. In the paper that accompanies
the model, the authors show an alternative method that uses Simulink and the LTI modeling
approach from the Control Systems Toolbox (see Chapter 2). The model they use is the
state-space model.
The steps that one follows to use this model provide insight into the power of
Simulink's automatic vectorization, so it is a useful exercise.
The last part of the reference develops the equations for a multi-degree-of-freedom
model in some generalized coordinate system q. When a finite element modeling approach
is used, the coordinates that result are usually a mix of translations and rotations for each
element, anywhere from all three translations and three rotations at an element to perhaps
one or two at each element. The masses (or inertias) of the elements, the damping forces,
and the spring force between elements are represented by generalized mass matrices M,
damping matrices C, and stiffness matrices K, so the dynamics of all of the elements are
encapsulated by the differential equations
\[ M\frac{d^2 q}{dt^2} + C\frac{dq}{dt} + Kq = F. \]
Any finite element modeling tool will create this general form for a vibrating system. The
finite element codes also will reduce this model to the form
\[ \frac{d^2\eta}{dt^2} + 2\zeta\Omega\frac{d\eta}{dt} + \Omega^2\eta = T^{-1}F. \]
This is done using the orthogonal transformation T to simultaneously diagonalize the symmetric
matrices M, C, and K. The steps for this are
\[ q = T\eta. \]
Then
\[ M\frac{d^2 (T\eta)}{dt^2} + C\frac{d(T\eta)}{dt} + KT\eta = F. \]
Multiplying this equation by the inverse of T (T^{-1}) gives
\[ T^{-1}MT\frac{d^2\eta}{dt^2} + T^{-1}CT\frac{d\eta}{dt} + T^{-1}KT\eta = T^{-1}F, \]
or
\[ \frac{d^2\eta}{dt^2} + 2\zeta\Omega\frac{d\eta}{dt} + \Omega^2\eta = T^{-1}F. \]
This last step comes from the fact that the matrix T diagonalizes all of the matrices in the
first equation. The algorithm that does this is the qz algorithm; it is a built-in routine in
MATLAB (see the MATLAB help).
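As a small illustration, the generalized eigenvalue problem for a two-degree-of-freedom system can be solved in MATLAB as shown below; the M and K matrices are made up for the example, and eig is called with the 'qz' option to use the QZ algorithm mentioned above.

% Hedged sketch: modal frequencies of a 2-DOF system with assumed M and K
M = diag([1 2]);                 % mass matrix
K = [ 3 -1;
     -1  2];                     % stiffness matrix
[T, W2] = eig(K, M, 'qz');       % generalized eigenproblem via the QZ algorithm
omega = sqrt(diag(W2))           % undamped modal frequencies (rad/sec)
% With distinct modes, T.'*M*T and T.'*K*T are diagonal, which is the
% simultaneous diagonalization used in the equations above.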
Both the SimPowerSystems and the SimMechanics user's guides [46], [47] are excel-
lent references for creating models and for understanding the demonstrations that are part
of the products. Consider them an extension of the material in this chapter.
Exercises
8.1 Modify the model in Section 8.1.2 to make it nonlinear. For example, measure the
voltage across the RL branch and add a nonlinear resistor that has the following value:
\[ R(t) = R_0 + \Delta R\, V_{RL}, \]
where R_0 = 1 ohm, \Delta R = 0.1 ohm, and V_{RL} is the voltage across the RL branch.
8.2 Build a simple Simulink model of an electric car using the DC motor model in the
train simulation. Set the mass of the car to 2000 pounds. Investigate the amount of
power and energy used to accelerate the car from 0 to 60 mph in 10 sec. (Remember
that power is the product of the current and voltage used by the motor and that the
energy used is the integral of the power.) Use the air drag numbers in the train model.
Try different rates of acceleration in the model. What can you conclude about the
energy used as a function of the rate of acceleration?
8.3 Add more elements to the string model. You need to allocate the mass element data
in the fBeamElement structure.
8.4 Add a fifth room to the model FourRoomHouse. Create data for all of the components
using the values from Chapter 6. Add the time varying resistor (from Section 8.1.2)
to model the convective losses from the wind. Make runs to answer some of the
hypothetical questions in the Further Reading section.
Chapter 9
Putting Everything
Together: Using Simulink in
a System Design Process
We have reached the point in our travels through the world of Simulink where we can
talk about how Simulink can help in the design of complex systems. There are many
ways to partition a design process. Furthermore, every person that works in research and
development has a favorite approach to design. The one common aspect of all of these
approaches is the use of a methodical, documented, and traceable procedure.
A design process is methodical when it has a clear specification, the design has connections
to the specification, the design has clear documentation, and the design validation
is performed via a thorough test program. The obvious aspects of this statement are that the
design meets the objectives and performs as required. It is efficient, reliable, and buildable
within some budgeted cost. However, there are some nonobvious benefits: the design has
legacy (it is reusable), the design exploits legacy from previous intellectual property, and
the ultimate audience for the system can exercise the simulation model so that their inputs
have a beneficial influence on the design. (In fact, many times this feature can sell a design.)
The entire design process needs careful documentation. Doing otherwise makes it
impossible to revisit decisions and convince engineers, designers, and technicians working
with the designs that the decisions made during the creation are valid (at least not without
a large and often futile replication of the earlier work).
A process that provides traceability means that the parameters in the design trace
back to one or more of the specifications for the design. Traceability also provides an
understanding of how a top-level requirement flows into specifications for various subsystems.
Subsystem specifications often come from a specification imposed by top-level
system design requirements.
Once the design is finished, it requires a method for its validation. This is particularly
true when the design requires embedding computer code in the system. The design process
also should not require the creation of a new specification just for this embedded code. In
fact, the process should be seamless so the code generated as the specification evolves is
always tested quality code.
Simulink allows you to create a design process that will perform all of these functions
in a seamless way. In fact, the Simulink model can play the role of the specification. This
is an Executable Specification. To make Simulink the tool for system design, the R&D
parts of a company, the manufacturing arm of the company, and the computer engineers
need to use the same Simulink model throughout the process. Obviously, every company
approaches system design with a bias that comes from their previous experience. For this
reason, a process using Simulink must come from the Systems Engineering part of the
company.
Whatever approach the process uses, the broad outlines of the steps are
• specification development and capture,
• modeling the system to incorporate the specifications,
• design of the system components that meet the specifications,
• verification and validation of the design,
• creation of embedded code.
In the 1960s I was part of the team that developed the original digital autopilot for
the lunar module. The design of this system was difficult because there was no way that
the control system was testable on earth. This meant that simulation was the only way
to create, optimize, and verify the design. In the process of creating the control system,
the first step was to understand the requirements and translate these into a mathematical
model for the design. The modeling started with a digital simulation. We used it to develop
candidate designs that in turn were optimized using simulation. In the 1960s this work used
a mainframe computer that allowed, at best, two computer runs per day. Simulink and the
PC change every aspect of the design approach we used then. Therefore, the exposition that
follows should be entitled "How we would do it today." I wrote an article that appeared in
the summer 1999 issue of The MathWorks's magazine News and Notes that describes the
design process and the lunar module reaction-jet attitude-control system design [16]. We
will use the lunar module digital autopilot design as a case study to illustrate how Simulink
provides a natural tool for the design process steps above.
9.1 Specifications Development and Capture
In the actual lunar module design, there were modes for the following.
• Coasting flight: the initial mode for preparation of the lunar descent, where the lunar
module was powered up and checked, the inertial measurement unit was calibrated
(using an optical telescope to track stars), the computer was verified, etc.
• Descent: using the descent engine gimbals for control during the landing.
• Ascent: using reaction jets but with the control actively combating the rotation caused
by the engine misalignment.
• Combined lunar module and command-service module control: where the lunar module
controlled both vehicles, the mode used for returning the astronauts after the Apollo
13 explosion.
Figure 9.1. The start of the specification capture process. Gather existing designs and simulations and create empty subsystems for pieces that need to be developed.
The requirements for each of these modes were traceable to the overall mission re-
quirements. (For example, the amount of fuel needed to be carried on the lunar module
was a function of how long the descent would take, which is relatively easy to calculate.)
Another example is how much time a search for an alternate landing site could use. This site selection was required if there was a problem with the preselected site. A gimbal on the descent engine might allow the thrust to go through the center of gravity, or we could have fixed the engine at a particular orientation (with a fuel penalty for the thrust needed to counteract the torque from the engine). The requirements optimization needed to exercise alternative designs. Calculations provided some of the answers but were time consuming. This severely limited the number of alternatives that we investigated. An easily modified
fast simulation of multiple systems would clearly have made it possible to do many more
trade-offs of options and theoretically would have resulted in a better design.
To keep the model of the lunar module understandable, we will limit the discussion of the digital autopilot to the rotational motion using the reaction jets only (i.e., we will work only with the first phase of the mission, coasting flight).
9.1.1 Modeling and Analysis: Converting the Specifications into an Executable Specification
We already have most of the modeling pieces that we need for the lunar module simulation
from the discussion on the rotation dynamics of a spacecraft in Chapter 3. The ability to
use a previous Simulink block is one of the major features of Simulink, allowing reuse of
previous work (and the possible exploitation of corporate intellectual property).
Working with the results of Chapter 3, the building of the simulation part of the model
is straightforward. The model is in Figure 9.1. We placed an empty subsystem block in
this model to denote the control system. This allows the development of the simulation
model to proceed independently of the development of the control system design. When
you are working with several teams each tasked to create a different aspect of the system,
this approach (sometimes called top-down design) enables teams to work independently
but in concert. Also, note that all of the blocks in this model come from the discussion in
Chapter 3, so no new dynamics are required. This is a good example of the independent
design approach that we have been discussing. In this design, a team separate from the
modeling team might be creating the control system. The modeling team can still proceed
with the creation of the simulation using the empty subsystem. (Open the model called
LM_Control_System in the NCS library and look at how this subsystem was created.)
The lunar module had an inertial platform that provided Euler angle measurements for
the control system. The Euler angles were a direct measurement of the angular orientation
of the vehicle without the transformations needed to give the exact orientation in inertial
space. Astronaut inputs were Euler angle commands that incrementally caused the vehicle
to be oriented properly. The model shown here uses the quaternion representation from
Chapter 3, and the measured angles for the autopilot are the direction cosines (as given
by the quaternion to direction-cosine-matrix block we developed in Section 3.4.4). The
computations needed to convert the Euler angles into a direction cosine matrix were well
beyond the capability of the lunar module guidance computer at the time, so we approximated
the angles using Euler angles as if they were the actual rotational angles. This is not a bad
approximation since the control system was operating to maintain the orientation at the
desired value every 0.1 sec.
Before we leave the discussion on modeling, it might be of interest to discuss how
we built the rst simulation model in the 1960s. An engineer, working with FORTRAN,
developed this design model (for Grumman, the lunar module designer). He spent over six
months developing a model that had only a single axis of rotation. The reasons the model was
so simple had to do with the computer resources available (we used an IBM 7090 computer
that allowed each user less than 100 Kbytes) and the available time (the simulation was created using punch cards, one card for each FORTRAN instruction, and each attempt at
compiling the model took one day). Simulink and the PC have dramatically changed the
way we approach such designs today.
So let us now change hats and make believe we are the design team that is developing
the control system for the lunar module and embark on the second of the system design
tasks, modeling the system to incorporate the specifications.
9.2 Modeling the System to Incorporate the
Specifications: Lunar Module Rotation Using Time
Optimal Control
Control of a satellite using reaction jets is possible with two different strategies for the firing of the jets. The first is to use a modulator to emulate a linear control system where the average thrust is proportional to some error signal. A modulator pulsing the jets on and off over time creates this average torque so it can be made proportional to the error signal. For several reasons this approach is not good: first, it causes the mechanical device that is pulsing the jets to wear faster than need be, and second, the typical reaction jet has an efficiency curve
that causes less fuel to be used if the jets are turned on for a longer time. In the early design
of the lunar module, jets were turned on and off with a modulator. This analog device was
a pulse ratio modulator (a combination of pulse-width and pulse-frequency modulation).
For several reasons this system did not have a backup, which worried the system designers
(see [16]). NASA proposed that the digital computer used for guidance and navigation
might provide a backup control system. In a competition described in more detail in [16],
MIT Instrumentation Labs proposed a control system that used a time optimal controller
instead of a modulator. This approach greatly simplified the needed computations to reorient the vehicle, a simplification that made it practical to implement the autopilot in the existing
computer. This is an excellent example of how independent design teams working on the
same system model can optimize the design.
9.2.1 From Specification to Control Algorithm
The lunar module digital autopilot design was based on a very simple conceptual approach:
look at the error between the actual (measured) angular orientation and the desired orienta-
tion, and then fire the jets with either a positive or a negative acceleration (as appropriate) in such a way that the error is reduced to zero as fast as possible. Working out the details of how to do so is not difficult. Once the control strategy was worked out, the control law (i.e., the exact strategy for what to do in every situation) was modified to limit the amount
of fuel that was used.
To get an idea of what a minimum time control is like, consider the following simplified
version of the lunar module control:
You are in a drag road race where you must start when a light goes green and then
travel exactly 1500 ft at which point you must be stopped. You will be penalized if you either
fall short or exceed the 1500-ft distance.
Before you read the following, you might ask, "How do you drive the car to win this race?"
After a little thought you should be convinced that the way to win is to accelerate
for as long as you can (to the exact point where if you did not brake you would skid past
the stop line) and then stomp on the brakes and decelerate until the car stops exactly at the
finish line. The car that accelerates the fastest and the driver that best finds the exact point
to make the switch between accelerating and decelerating the car will determine the winner
of the race.
The form of this problem makes the answer easy to calculate. To perform the calcu-
lations, we will assume the following:
the car instantaneously achieves its maximum acceleration;
the driver applies the brakes at exactly the correct time;
there is no change in the deceleration because of tire slippage, etc.
Let us put some numbers on the table. Assume that the car can accelerate at 10 feet per second per second (10 ft/sec$^2$) and the braking force allows a constant deceleration of 5 ft/sec$^2$. If the driver accelerates for $t_1$ sec, the speed will then be $10t_1$ ft/sec. During this acceleration, the car will travel $10\,t_1^2/2$ ft. The deceleration will now commence, and the car must slow down from $10t_1$ ft/sec to zero. This will require a time of $2t_1$ sec (since the car decelerates half as fast as it accelerates). The distance traveled during the deceleration is therefore $5\,(2t_1)^2/2 = 10t_1^2$. The total distance traveled is $15t_1^2$, and this must equal 1500 ft. Thus, we have the equation that provides the critical time:
$$15t_1^2 = 1500; \qquad t_1^2 = 100; \qquad t_1 = 10.$$
Clearly, by working through this process for any acceleration/deceleration rates, the strategy
for any car can be determined.
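The same arithmetic is easy to script for any rates. The MATLAB fragment below is only a sketch; the variable names a, d, and D are ours.

% Minimum-time "drag race": accelerate at a, brake at d, stop after D feet.
% (Illustrative sketch; a, d, and D are our own names and values.)
a = 10;                         % acceleration, ft/sec^2
d = 5;                          % deceleration, ft/sec^2
D = 1500;                       % required stopping distance, ft
t1 = sqrt(2*D*d/(a*(a + d)))    % time to accelerate before braking (10 sec here)
ttotal = t1*(1 + a/d)           % total time for the run

Running the fragment reproduces the switch time of 10 sec computed above and gives a total run time of 30 sec.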
To control the lunar module, the same rules apply, except that both the acceleration
and deceleration require firing the reaction jets (which produce the same force) so the
acceleration and deceleration are the same. In addition, the location of the stop is different
every time we apply the control, so we need a generic control that will work for any starting
and stopping conditions (including the possibility that the lunar module is rotating at the
start).
We will develop this control system (which was only one of several on board the lunar
module) and insert it into the TO BE DESIGNED block in the rotational dynamics model
developed previously by the system design team.
If the jet forces are a couple around the center of gravity, the linear model for a single axis rotation is simply
$$\frac{d^2\theta_e}{dt^2} = \frac{Fl}{I}\,u = \alpha u,$$
where
$\theta_e$ is the control system error (i.e., $\theta_{\mathrm{measured}} - \theta_{\mathrm{desired}}$),
$F$ is the force from the jets,
$l$ is the distance from the jets to the center of gravity,
$I$ is the inertia of the vehicle around the rotational axis.
We will let the acceleration term $Fl/I$ be $\alpha$, and $u$ is either $\pm 1$ (firing jets for positive or negative rotations) or 0 (no rotation).
The minimum time control sets the control $u$ to either +1 or -1 until we reach the critical point where (as was the case for the car above) the sign of the control changes. This critical point is along the unique trajectory that causes the angular position error $\theta_e$ and the angular rate error $\dot\theta_e$ to be exactly zero at the same time. The theory of optimal control states that the goal (getting to zero $\theta_e$ and $\dot\theta_e$) can be achieved with at most one switch in the sign of the acceleration. The plausibility of this should be clear from the discussion of the auto race above. (As an exercise, create a logical argument why this must be so.)
To start the development of the control system, let us assume we are at the critical point described above (the point where we can force the position and rate to be zero at exactly the same time using a single jet firing). We use a graphical approach to visualize the trajectory that will be followed. The graph is a plot of $\theta_e$ vs. $\dot\theta_e$ when $u$ is $\pm 1$. To develop this graph, let us solve the simple second order differential equation for the rotation above. The solution is simply
$$\dot\theta_e(t) = \alpha u t + \dot\theta_e(t_0), \qquad \theta_e = \alpha u \frac{t^2}{2} + \dot\theta_e(t_0)\,t + \theta_e(t_0),$$
where $t_0$ is the initial time and $u$ is constant (either +1 or -1) over the duration of the solution.
To create a plot of $\theta_e$ vs. $\dot\theta_e$ we need to eliminate time ($t$) from these equations. Therefore, solving the first of the equations gives $t = \bigl(\dot\theta_e(t) - \dot\theta_e(t_0)\bigr)/(\alpha u)$. Substituting this time into the second equation gives the equation for a parabola:
$$\theta_e(t) - \theta_e(t_0) = \frac{\dot\theta_e(t)^2 - \dot\theta_e(t_0)^2}{2\alpha u}.$$
The unique parabolas that go through the origin from an arbitrary initial condition are obtained by setting $\theta_e(t) = 0$ and $\dot\theta_e(t) = 0$ in this equation (i.e., $t$ is the time at which the trajectory goes through 0). A plot of these parabolas in the $\theta_e$, $\dot\theta_e$ plane (called the phase plane or, in higher dimensions, phase space) is in the figure above. For each value of $u$, one and only one parabola goes through the origin (i.e., $[\theta_e \;\; \dot\theta_e] = [0 \;\; 0]$). These parabolas are the locus of all of the critical points at which the sign of the jet firings must be reversed. (These curves are therefore called the switch curves.)
If the initial conditions are located anywhere in the area above and to the right of the curves shown in the figure, the jets are fired using -1 as the control (i.e., negative acceleration) to force the motion to approach the switch curve in the second quadrant. When the values of $[\theta_e \;\; \dot\theta_e]$ reach the switch curve, the sign is reversed (to +1) and the motion will track the switch curve to the origin, at which point the control is set to 0. In the absence of any disturbance, the system would stay at the origin forever.
Similarly, if the initial conditions are to the left of the switch curves, the jets are fired with a positive acceleration (u = +1) until the switch curve in the fourth quadrant is intersected and the acceleration is reversed to allow the motion to approach the origin from above.
The measurement of the values of $[\theta_e \;\; \dot\theta_e]$ is always off slightly, and the exact vehicle inertia and the jet forces are known only to within some error, causing the acceleration to be in error when the switch curve is calculated. These errors will cause the control to miss the origin slightly, so instead of having the control objective be to hit the origin exactly, a box around the origin is the control objective. This box is denoted using the name dead zone, indicating that any time the vehicle has errors inside this zone, the control is unused (i.e., dead). The resulting switch curves have the form shown in Figure 9.2.
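It is easy to sketch these curves yourself. The fragment below is only an illustration: alpha is an example acceleration, not the lunar module value, and the dead zone is omitted.

% Plot the two switch curves theta_e = +/- thetadot_e^2/(2*alpha)
% (alpha is an illustrative value; the dead zone is left out).
alpha = 0.1;                           % example control acceleration, rad/sec^2
thd = linspace(0, 0.25, 200);          % angular rate error samples, rad/sec
plot( thd.^2/(2*alpha), -thd, 'b', ... % u = +1 parabola (negative rate branch)
     -thd.^2/(2*alpha),  thd, 'r')     % u = -1 parabola (positive rate branch)
xlabel('Angular Position Error (rad)')
ylabel('Angular Rate Error (rad/sec)'), grid on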
Now that we have calculated the switch curves, let us try to justify the assertion that
the minimum time control is the control that gets the trajectory in the phase plane to the
origin with one switch. Assume we have made one switch and are on the unique trajectory in
phase space that takes the solution to the origin. Were we now to switch again, the solution
Figure 9.2. The physics of the phase plane logic and the actions that need to be developed. (The plot shows angular position in radians versus angular rate in rad/sec, with the curve where the jets are turned on, the curve where they are turned off, the coast region where the jets are off, and the target region around the origin.)
will immediately move off the only solution path that goes to the origin, and we would then
have to undo that move, all of which would increase the amount of time it would take to
achieve the goal of reaching the origin. Thus, the minimum time path must be that which
results from using the logic described in the figure above.
To implement this control using feedback in continuous time is simply a matter of
tracking the location in the phase plane, performing the switch when the trajectory crosses
the appropriate switch curve. However, because we are implementing the control system in
a computer, there are computer usage issues we must consider. This leads to the third com-
ponent of the design process: designing the system components to meet the specications.
9.3 Design of System Components to Meet
Specifications: Modify the Design to Accommodate
Computer Limitations
When using a real-time computer implementation of a control algorithm, the time it takes
to perform the control calculations can interfere with other computations that need to be
done. (Typically a computer is too expensive to devote solely to the control system, so other
functions are performed on a time sharing basis.) This is the first time we must deal with
creating embedded software (and simulating its effects), so let us spend some time on this.
The computer is totally occupied every time it looks at the location in the phase plane. The more frequently we look to see where we are, the more computer time used and the more overloaded the computer will be. The discussion above on the control system design shows we cannot wait to make a switch. Every time the control system looks at the position and rate to make a decision, other critical tasks are left waiting. (An interesting historical footnote is that when we first attempted to design a digital control system for the lunar module, we concluded that it was impossible for exactly this reason.) Thanks to George Cherry, an MIT Instrumentation Laboratory engineer at the time, the lunar module control system had a unique implementation. His idea was to make the control into a timed task. Any computer has the ability to do a timed task. The timed task interrupts the computer (i.e., temporarily terminating an existing task for this one) and then a program runs that starts an external timer. When the computer is finished with the task of starting the timer, it can return to performing the original interrupted task (while the timer ticks away in the background). When the timer counts down, indicating that the task is complete, the computer is once again used (usually again via an interrupt) to complete the timed task. This was George Cherry's idea for the lunar module digital control system. When the control logic required jets to be on, the control algorithm computed the time needed to reach the desired location in the phase plane (either the switch curve or the small rectangle at the origin). The computer turned on the jets and sent the desired on time to an external timer. The control task ended, and the computer was free to do other tasks. When the timer counted down, the computer was interrupted again and the jets were turned off. This operation was very quick since it consisted of simply resetting bits in one of two output channels (each bit connected directly to a jet firing solenoid).
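The flavor of the timed-task idea can be mimicked at the MATLAB prompt with a timer object. This is only an analogy (the flight computer used a hardware counter and interrupts), and the names and numbers below are ours.

% Timed-task analogy using a MATLAB timer object (illustrative only).
ton = 0.0375;                                   % computed jet on time, sec
jetsOff = @(~,~) disp('timer expired: turn the jets off');
disp('turn the jets on')                        % the control task starts the firing
t = timer('StartDelay', ton, 'TimerFcn', jetsOff);
start(t)            % the timer counts down in the background
wait(t); delete(t)  % a script just waits here; the flight code did other work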
We could run a large simulation of the design only twice a day, so verifying that this logic worked was a time consuming process. With Simulink, we can have the same simulation running on a desktop computer. Furthermore, the simulation can use the same logic used in the actual lunar module code (including a simulation of the timers).
To implement the design, we need to compute the amount of time needed to get from any initial location in the phase plane to some desired position. Let us assume that we are in the region where negative accelerations are required (i.e., to the right and above the switch curves). At the switch curve we want to intersect, the control will change from -1 to +1 (positive acceleration). From the analysis above, the switch curve equation is
$$\theta_e(t) = \frac{\dot\theta_e(t)^2}{2\alpha}.$$
Note that when we are on the switch curve, the jet on time is simply
$$T_{on} = \frac{-\dot\theta_e(t)}{\alpha}.$$
Until we reach the switch curve, the jets are on, providing negative acceleration, and the trajectory in the phase plane is
$$\theta_e(t) - \theta_e(t_0) = -\frac{\dot\theta_e(t)^2 - \dot\theta_e(t_0)^2}{2\alpha}.$$
We need to compute the intersection of this curve with the switch curve. Once we know the
intersection values, we can compute the time that the jets need to be on using the equation
for the time t we developed above (in this case with the value of u = -1):
$$T_{on} = \frac{\dot\theta_e(t_0) - \dot\theta_e(t)}{\alpha}.$$
One of the important ways to do an analysis like this now is to use symbolic manipulation.
MATLAB has an interface to Maple that easily allows this. The MATLAB M-file with the
implicit Maple calls to solve for the on time is in the code segment below. Note that the
Maple variables in the code are MATLAB objects created with the syms command.
syms Ton t alph th thdot th0 thd0
thdot = solve('th = thdot^2/(2*alph)',thdot)
thdot = thdot(2) % Select negative square root
intsect = subs('th = th0 +(-thdot^2+thd0^2)/(2*alph)',thdot)
thdot = solve(intsect,'thdot')
Ton = (thdot(1)-thd0)/alph
pretty(Ton)
The solution proceeds by first setting up the variables that are symbolic in MATLAB (Ton, t, alph, th, thdot, th0, and thd0). Next, the switch curve is defined, and the solution along the curve as a function of $\theta_e$ is developed. (Realizing that the solution involves a square root, the negative value is selected since the intersection when the acceleration is positive will occur in the second quadrant where $\dot\theta_e$ is negative.) Next, the curve followed during negative acceleration is developed. Then, we find the intersection of the two parabolas. The value of $\dot\theta_e(t)$ at the intersection determines the time $T_{on}$. The MATLAB script that invokes Maple to do the algebra is in the code segment above. The result from running this script (after the pretty instruction in MATLAB) is
$$T_{on} = \frac{\sqrt{\alpha\,\theta_e(t_0) + \tfrac{1}{2}\dot\theta_e(t_0)^2} \;-\; \dot\theta_e(t_0)}{\alpha}.$$
This equation defines the time that the jet needs to be on in order to reach the switch curve. At every sample time (0.1 sec), the lunar module digital control system computes the total time that the jets need to fire. This equation applies for both negative and positive accelerations by using absolute values in some of the terms. The coasting region of the control system (when the jets are off and the lunar module drifts at its last angular rate) uses the approach of looking every 0.1 sec to see if the jets should be on (based on the location in the phase plane). If there were no need to fire the jets, the coast would continue for another 0.1 sec. When the motion is just about to leave the dead zone region, this strategy adds an extra 0.1 sec of drift. We deemed this error acceptable.
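A quick numerical check of the on-time calculation is easy to do at the MATLAB prompt. The values below are illustrative, not the lunar module parameters; the split into the time to drive the rate to zero plus the time to reach the switch curve mirrors the way the control code assembles the total on time.

% Check the jet on time for an example right-half-plane error state.
alpha = 0.1; th0 = 0.05; thd0 = 0.02;          % rad/sec^2, rad, rad/sec (example values)
tcalc1 = thd0/alpha;                           % time at full deceleration to zero rate
tcalc  = sqrt(th0/alpha + thd0^2/(2*alpha^2)); % time from the theta axis to the switch curve
Ton    = tcalc + tcalc1                        % total on time for this firing
% Propagate the u = -1 trajectory for Ton sec; the state should land on the
% u = +1 switch curve theta = thetadot^2/(2*alpha):
thd_end = thd0 - alpha*Ton;
th_end  = th0 + thd0*Ton - alpha*Ton^2/2;
check   = th_end - thd_end^2/(2*alpha)         % essentially zero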
Figure 9.3. Complete simulation of the lunar module digital autopilot. (The top level contains the astronaut attitude command input, the A/D conversions at the sample time delt, the Reaction Jet Control subsystems, the quaternion and spacecraft rotational dynamics with the inertia matrix and initial rates and quaternions, the quaternion-to-Euler-angle and derived-rate blocks, the phase plane plot, and the Data Store blocks and switches that select two-jet or four-jet yaw couples and single-jet ascent or two-jet couples for pitch and roll.)
9.3.1 Final Lunar Module Control System Executable Specification
We are now ready to look at the Simulink model for the control system. Open the model
LMdap3dof from the NCS library in MATLAB, and carefully look at the model (Figure 9.3).
This model is the complete simulation of the control system in the three degree of
freedom simulation we developed in Section 9.1.1. The control system has a number of
features that are important for the overall development process that we are trying to describe.
These are as follows.
Data used in the model are saved and retrieved using Data Store blocks.
The control system design began with a single axis, and the result was stored in a
library. Each of the three axes then used the library block.
All of the logic for determining where we are in the phase plane uses Stateflow.
To ensure that the blocks we developed are available for reuse, the library also has the
block for the quaternion calculations, the block used to derive the digital rates for the
control system from the attitudes, and the block that converts the quaternion values
into Euler angles.
When you open the model, it will appear with the model browser turned on. This feature allows you to navigate rapidly around the model using the same navigation technique that you use in Windows.
At the top level, zero order hold blocks before the Reaction Jet Control subsystem
convert the continuous time attitudes into measured values (at a time interval of every
0.1 sec). The zero order hold measures the value of the analog signal and then does the
analog to digital conversion. Because the astronauts communicated with the control system
both by using a data entry keyboard on the computer and by turning switches on and
off, we need to keep track of the switches that they used. All of the calculations above
(since the acceleration from the jets determines the parabolas) use information about the
available acceleration. Therefore, the control system must know the state of the switches
that the astronaut could use to disable reaction jets. Use of one of these switches caused a
computer update that saved the state of the switch and changed the values of the affected
jet accelerations. In the Simulink model, we use blocks called Data Store Write, Data Store
Read, and Data Store Memory to record and interrogate the switch states. These blocks
are at the bottom of the Simulink model. They update the number of jets in the model and
modify the acceleration used by the control system. Another neat feature is that the user can
double click on the switch icon to change the number of jets used in the simulation. The
switches change both the yaw axis jets (where the number of jets used can be set to either 2
or 4) or the pitch and roll axes jets (where control using only upward firing jets is an option). As long as the jets fired downward during ascent from the surface of the moon, all of the
control forces contributed to the lifting thrust from the ascent engine. This idea reduced the
amount of fuel used during the ascent. The Data Store Read, Write, and Memory blocks
work by creating internal memory storage locations for the values of the variable. (The
stored values are created by the Data Store Write block.) The initial values for the data are
created using the Data Store Memory block (which are in the diagram but are not connected
to any other blocks; they are at the bottom right side of the model). Data Store Read blocks
are the source in the model of all of the volatile data required by the control system. In this
model, the Read blocks are all inside the Reaction Jet Control subsystem.
There is also a block that provides information about the revision number, date of
revision, and so on; it appears at the bottom right of the model. Every time you save the
model, Simulink updates this block. The block for the Quaternion calculations is the same
block that we used in Chapter 3, but it is now stored in the library. The last new block at this
level is the quaternion to Euler angle block. The blocks contain annotation of the equations
used inside the block, so we will not go through them.
With this introduction, you can now begin to explore each of the blocks in the final model (in Figure 9.3). A good place to start is the simulation of the counter that times the jets. This model is in Figure 9.4. (There are three of these blocks, one in each of the Reaction Jet Control subsystems. Browse to one of them to see the model.)
The timer clock was 625 microsec, so the simulation uses two digital clocks: one
at this rate and one at the sample time (0.1 sec). The jet on time counts off the 625-
microsec clock in the subsystem labeled Jet On Time Counter. This subsystem looks like
Figure 9.5.
Since the Sample Time clock changes at the sample time, this value is constant over
the duration of the countdown of the jets. (Remember that we compute the on time anew at
every sample time.) Thus at some time before the next sample the difference between the
time of the last sample and the time from the counter clock will be equal to the on time. The
relational operator block looks for the difference to be greater than zero, so the output goes
to zero when the countdown is complete. Returning to the blocks that use the counter, we
see that the logic uses this output, along with the jet on command and the counter enable that comes from a Stateflow chart. The logic implemented is the logical NOR of the enable and the counter output. Next, we compare the output, using a logical OR, with the counter output. This finally is multiplied by the jet on command. (This command is the number of jets with a sign indicating the direction, so a logical AND cannot be used to provide the final jet command output.)

Figure 9.4. Simulating a counter that ticks at 625 microsec for the lunar module Simulink model. (The subsystem inputs are the jet on command, the counter enable, and the jet on time; it uses one clock at the 625-microsec counter tic and one at the sample time delt, and its logic implements ~(enable | stopjets) | stopjets before multiplying by the jet command.)

Figure 9.5. The blocks in the jet-on-time counter subsystem in Figure 9.4. (The inputs are ton, the clock at tics, and the clock at the sample time; the times are converted to clockt and delt multiples, and a relational operator detects when the countdown of ton is complete to produce the stop command.)
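The countdown itself can be mimicked with a few lines of plain MATLAB. This is a sketch of the idea only, not the Simulink blocks, and delt and ton are illustrative numbers.

% One sample period of the jet on-time countdown (illustrative sketch).
delt   = 0.1;            % sample time, sec
tic625 = 625e-6;         % counter clock, sec
ton    = 0.0375;         % commanded on time for this sample, sec
tsamp  = 0;              % time of the last sample
on = 0;                  % number of counter tics with the jets on
for t = 0:tic625:delt-tic625
    if (t - tsamp) < ton % countdown not yet complete
        on = on + 1;     % jet command stays on for this tic
    end
end
fprintf('Jets on for %.4f of the %.1f sec sample period\n', on*tic625, delt)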
If you start this model in the usual way, a phase plane plot for the system response
appears (Figure 9.6) as the simulation runs. The plot is the control response for motion
around the x (yaw) axis, which is the axis with the smallest inertia. The model also contains
plots (Scopes) that show the attitude and attitude rate errors for all three axes.
If you navigate through the reaction jet control subsystem, you can see the logic and
the computations for the switch curves. In Section 9.1.1, we created the model and used
an empty subsystem for the controller. (We called the subsystem Control System TO BE
DESIGNED.) We have now completed the design. It is easy to see that a different group
could have created the control system design. The group that created the simulation model
with the quaternion block might even have been working at the same time.
Figure 9.6. Lunar module graph of the switch curves and the phase plane motion for the yaw axis. (The graph uses a MATLAB function block in the simulation. The axes are yaw attitude error versus yaw attitude rate error, and the plot marks the switch curves for turning the jets on and for turning the jets off.)
Continuing our travels through the model, we have a block that computes the three
axis rotational rates from the Euler angle measurements. This block is in the library called
LMdapLibrary_NCS.mdl in the NCS library. We have not encountered library blocks
before, so we need to describe how to create a library and how they work.
When you select New in the File menu, there are two options: a Simulink model
or a library. To create a library you simply select the latter. A library is different from a
model in that none of the Simulink menu items (like the start button) appears in the window.
Furthermore, if you use any of the blocks in the library to develop a model, future changes
to any of the library blocks will propagate to the Simulink models that use them. This
feature makes it easy to keep track of design modifications and far easier to reuse a block in
the future. When you convert a library block into embedded computer code, and the code
is used multiple times (as is the case here where the same control system block applies to
the yaw, pitch, and roll controllers), the code is created as a reentrant subroutine. The
conversion of the diagram into embedded code occurs once. The Real Time Workshop
conversion makes only one copy of the control system code (not three versions). Each
instantiation uses the same code, minimizing the amount of storage needed for the code.
While we are on the subject, libraries work best if a tool for maintaining configuration control is also in place. All of the models in Simulink (and in MATLAB) allow the user to use a configuration management tool with the model. Source control systems must comply with
the Microsoft Common Source Control standard. If you have a compliant source control
system on your computer, the Source Control options in the MATLAB Preferences dialog
will show it. Revision Control Systems (RCS) and Concurrent Versions System (CVS) are
two tools that use the Microsoft Source Code Control. When a tool such as this is used,
a designated configuration manager must make all changes to models and libraries. This ensures that all models that are in the current design are verified and validated. Individuals can work with their own version of the models, but then the configuration manager must test them before allowing the changes to migrate to the actual design.
9.3.2 The Control System Logic: Using Stateflow
Now that all of the basic analyses for the control system design are complete, we can focus
on implementing the control logic in the simulation.
function [Firefct1, Coastfct1, Firefct2, Coastfct2, tcalc1, tcalc] = ...
fcn(e,Njets,DB,alph,alphs)
% Time at full accel for e_dot = 0 (cross the e axis) is tcalc1.
% (This time is positive in RHP and negative in the LHP, and
% this is accounted for in the calculation of the total on time
% in the Stateflow chart).
% The intersection of the switch curve and the "on" trajectory
% determines the value of xdot at the switch curve. This value
% divided by the acceleration is the time needed to go from
% the e axis crossing to the switch curve (tcalc). The two times
% tcalc and tcalc1 are added together in the State Chart.
% Note that the plus or minus of the square root is accounted for
% in the State Chart (when in the RHP we add tcalc and tcalc1,
% and in the LHP we subtract tcalc1 (which is <0) from
% tcalc (>0 always)).
% Constants used in the calculations:
ac = 2*alphs/(alph+alphs);
% Changeable constants used in the calculation:
accel = Njets*alph;
accels = Njets*alphs;
% Evaluate the location in the phase plane w.r.t the 4 parabolas:
Firefct1 = e(1) -DB +e(2)^2/(2*accel);
Coastfct1 = e(1) -DB -e(2)^2/(2*accels);
Firefct2 = e(1) +DB -e(2)^2/(2*accel);
Coastfct2 = e(1) +DB +e(2)^2/(2*accels);
% Compute the on time for a jet firing based on phase plane locations:
x1 = e(1)/accel; % Scale position for jet on time calc.
x2 = e(2)/accel; % Time at full accel for e_dot = 0.
x3 = DB/accel; % Scale Dead Band for on time calc.
tcalc1 = x2; % Time to reach the e axis from edot.
u = (abs(x1) + x2^2/2 - x3)*ac;
tcalc = sqrt(u*(u>=0));
Figure 9.7. Stateflow logic that determines the location of the state in the phase plane. (The chart contains a Wait_for_stable_rate start state; Fire_region_1 and Fire_region_2 states that command negative or positive jet firings and set the on time from tjcalc and tjcalc1; Coast_region_1 and Coast_region_2 states that set the jets to zero; and Skip_a_Sample_1 and Skip_a_Sample_2 states that count down near the switch curves. The transitions test the sign of e[1] and the Firefct and Coastfct values computed in the Embedded MATLAB block.)
The Control Law blocks from the library use an Embedded MATLAB Function block
to compute the data needed to determine where the error is in the phase plane. We do
this by substituting the appropriate yaw, pitch, or roll attitude and attitude rates into the
equations for the four different switch curve parabolas. The functions that we generate
are Firefct and Coastfct (with 1 or 2 appended to denote the quadrant). In addition, this
code block calculates the jet on time using the equations from the Maple derivation above.
This Embedded MATLAB code is in the listing above. Note that the code is very read-
able because it uses standard MATLAB. As an exercise, follow the code and verify that it
uses the equations above for the jet on times. Also, note that this code segment is in the
LMdapLibrary_NCS.mdl as part of the Control System block, so it also is reentrant.
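If you copy the function above into its own file (fcn.m), you can exercise it from the command line. The numbers below are purely illustrative and are not the flight values.

% Hypothetical call to the phase plane function with example values.
e = [0.05; 0.02];        % attitude error (rad) and attitude rate error (rad/sec)
Njets = 2; DB = 0.005;   % jets in the couple and dead band (example values)
alph = 0.1; alphs = 0.1; % the two acceleration parameters (example values)
[Fire1, Coast1, Fire2, Coast2, t1, t2] = fcn(e, Njets, DB, alph, alphs)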
The Embedded MATLAB block does not do the logic for determining which jets to fire and counting down the on time. This decision making uses Stateflow for the logic. (The Stateflow block decides whether to fire a jet with positive or negative acceleration and whether or not to coast at any decision point.) The logic in the Stateflow block is easy
to follow (see Figure 9.7 for the chart). The original control design for the lunar module
did not use a process like this at all; all of the logic had to be programmed in assembly
language.
The Stateflow chart is identical for all three axes. One of the techniques used to
develop this chart was to lay it out visually so it follows the four quadrants in the phase
space. Skipping the top of the chart for a moment, there are four states that are at the vertices
of a rectangle:
quadrant 4 & 1 coast region is at the top right;
quadrant 1 & 2 jet firing (negative acceleration) is at the bottom right;
quadrant 2 & 3 coast region is at the bottom left;
quadrant 3 & 4 jet firing (positive acceleration) is at the top left.
When the state-space locations are near the switch curves, we start the counter to
count down the jet on times. This is not necessary unless we are near the switch curves.
(When we are not close, the on times are greater than the sample time so we turn the jets on,
leave them, and reevaluate the on time at the next sample.) The Stateflow states that handle this logic we called Skip_a_Sample. There are two such states: one for the left half of the phase plane and one for the right. The Stateflow logic should be easy to follow, since every decision uses the sign of the attitude rate and the coast and fire functions evaluated in the Embedded MATLAB block. When you run the model, open the Stateflow block for the yaw control and watch the logic and the phase plane plot. The correlation between the Stateflow switching and the phase plane should make it easy to follow the logic.
We are now ready for the fourth step in the design process: verification and validation
of the design.
9.4 Verification and Validation of the Design
If you browse through the Simulink library, you will see a group of blocks called Model
Verification. Figure 9.8 shows this library. You can use these blocks to evaluate whether or
not the signals in the Simulink model are doing what you expect.
The block description and their icons help you to understand what they do. For
example, the Check Discrete Gradient block (the second block in the library) evaluates the
difference between the signals at two different times and flags you if this exceeds some
number that you specify in the block. This ensures that the (digital) derivative of the signal
is within bounds. You can check to see if signals are within certain bounds, that they do not
exceed bounds, that they lie within a dynamic range that is possibly changing with time,
and that the signal is nonzero. With these blocks and a built-in tool that gives an analysis of
the coverage of the signal flow (for both Simulink and Stateflow), you can assure yourself that the simulation is valid (and therefore, when the time comes to create the code, that it is valid). Since this is the kind of tool that provides verification and validation, which are
important only when the design process has the goal of creating a system design that may
have embedded code, we will not spend more time on it. However, if the overall system
design is your goal, this tool is extremely valuable.
We are now at the last step in the design process, the creation of embedded code.
Figure 9.8. Simulink blocks that check different model attributes for model verification.
9.5 The Final Step: Creating Embedded Code
We now come to the last step in the design process: the creation of the embedded code.
Simulink offers the user a multitude of paths for achieving this goal. The simplest code
generation process is to generate code that runs under Windows. The code generation tool
Real Time Workshop (RTW) will typically default to this option. When you invoke RTW,
it will generate a stand-alone application that runs from the Command window. It is well
beyond the scope of this book to take you through this process, but it is easy enough to
do and you can generate some very interesting applications. Reference [41] discusses the
need for structured development of real-time system applications; it should be clear that
you could not find a more structured (since it is automatic) way than using RTW.
The Signal Processing Blockset comes with blocks that allow the capture of signals from the audio devices built into your computer. You can design a complex signal-processing
task and try it out on your computer. The blockset comes with several interesting examples
that you should try. Once you understand what the applications do, try modifying them to
do something different. It is a lot of fun.
9.6 Further Reading
The process of creating new systems is difficult to manage. Several books have been devoted
to this subject (see [10], for example), and their main conclusion is that a tight integration
of the research, development, and manufacturing groups is the only way to ensure that
new technology rapidly migrates into products. The example in this chapter illustrates a
powerful feature of Simulink. You can open a model and actually play with it. There is no
better way for a research team to communicate an innovative idea to the development and
manufacturing sides of a company (and, for that matter, to the company's managers; in fact, why not the CEO and the board of directors). Imagine the R&D team briefing the
top management of a company using a simulation of the proposed new device. Imagine
that the simulation has all of the bells and whistles, shows all of the manufacturability
components, clearly delineates computer software required, and has a clear path to the
embedded computer code. When the design team then quotes a cost and schedule, it will
have far more meaning than a simple recitation of the same information.
Controlling the costs of the software in a system has been extremely difficult. There are innumerable instances of software development cost overruns that are multiples of the initial estimates. Reference [35] describes many such horror stories. The stories generally result in the definition of a process that will make the software development work better.
In my experience, these process improvements always lack one major component: the software designers never have a way of verifying that the software will do what is required. Reference [19], for example, decries the fact that software requirements appear well before the allocation of the system requirements to the software is complete. This is justified because the software development takes so long that it must start early. Another example that I remember well was the first time I tried to put a process together. It was before Simulink but after the introduction of MATLAB. The CEO of the company I worked for asked me to fix a severe schedule slippage in the development of a complex system. The first thing I did was to force the development team to use MATLAB (instead of FORTRAN or C) to build the system components. The existing process required that we send a monthly specification document to the software group. Each of these transmittals was hundreds of pages. About two months after we adopted MATLAB for the design team, I noticed that the software group would ask for the MATLAB code. In a conversation with the software team, I found out that they were using the MATLAB code as the specification and checking the written specifications only when they found conflicts or processes that were not in the MATLAB code. This was the first example of an executable specification that I had ever encountered. I have had the same experience many times; these occasions are what have convinced me that spending the time and effort to build a process around Simulink (and the other tools we have discussed) is cost effective and possible and will lead to faster, cheaper, and more efficient designs.
Chapter 10
Conclusion: Thoughts
about Broad-Based
Knowledge
I hope that you have reached this point in the book with a much better understanding of the power of a visual programming environment. I also hope that you can see how an integrated system-engineering environment can improve your designs.
In the process of working through this text, you should have become familiar with a large number of different disciplines, perhaps some that you had not previously learned.
This was partly my goal. One of the major attributes of a good engineer (or mathematician
or any other discipline you would like to insert here) is both breadth and depth of knowledge.
As I was completing this book, I read a marvelous editorial in the New York Times by Thomas Friedman, entitled "Learning to Keep Learning" [12]. In the editorial, Friedman makes the case for better and broader education. He quotes Marc Tucker, who heads the National Center on Education and the Economy: "It is hard to see how, over time, we are going to be able to maintain our standard of living."
Friedman then adds: "In a globally integrated economy, our workers will get paid a premium only if they or their firms offer a uniquely innovative product or service, which demands a skilled and creative labor force to conceive, design, market, and manufacture and a labor force that is constantly able to keep learning. We can't go on lagging other major economies in every math/science/reading test and every ranking of Internet penetration and think we're going to field a work force able to command premium wages."
A little later in the editorial, Friedman again quotes Tucker as saying, "One thing we know about creativity is that it typically occurs when people who have mastered two or more quite different fields use the framework in one to think afresh about the other." The goal of Tucker is to make that kind of thinking integral to every level of education.
Tucker thinks that this requires a revamping of our educational system, designed in the 1900s for people to do routine work, into something different. The new system must teach how to imagine things that have never been available before, and to create ingenious marketing and sales campaigns, write books, build furniture, make movies, and design software that will capture people's imaginations and become indispensable for millions.
In the last part of his editorial, Friedman tempers the nationalistic sound of the statements by noting that innovation is not a zero-sum game. It can be win-win. He says, "We, China, India and Europe can all flourish. But the ones who flourish the most will be those
who develop the best broad-based education system, to have the most people doing and designing the most things we can't even imagine today."
It is my hope that this book and the products and processes we described here go some small way to making Friedman's thoughts real for you. It is clearly possible to learn multiple
disciplines, to bring them to bear on complex design problems, and to create devices using
more automation. It is also possible, working with the visual programming paradigm that
Simulink exemplies, to work faster, smarter, and more accurately. I truly hope that this
book helps to further that process.
As a final footnote, one of the reviewers of this manuscript made the prescient ob-
servation that using Simulink or any modeling software without care can lead to disasters.
This is very true.
However, I am old enough to remember engineers who, after they graduated, used
only handbooks for the models in their designs. The result for these engineers was often a
disaster, too. Because what they selected from the handbook was not valid in the context of
the design they were developing, their conclusions were wrong. Simulink does not solve
this problem. In fact, no tool can.
How many times have you seen someone using a wrench as a hammer? Workers
always have the opportunity to abuse their tools. The main reason for this book was to
show you how to use Simulink the tool, not by simply learning how to build models but by
showing you how the numerical methods in the background work. Along the way, we have
emphasized the design process and the fact that modern design is a team effort. If you are
new to this world, or if you are a student looking forward to working as a design engineer,
remember that your colleagues want to help. Use their expertise. Share the Simulink
models with them early and often. Get their feedback and use it. If you are an engineer
with experience in the design of systems, work to incorporate a visual programming tool
into your design process. If you do, do not forget to train the engineers that will be new to
this process. Also, ensure that your design teams have an ample number of old-timers who
know when something looks wrong.
Finally, remember the various tricks and comments that I have made in the text:
Annotate your models.
Use subsystems to keep the diagram simple.
Vectorize the model where you can (and let the user know that you have done so).
Make neat models; avoid spaghetti (very messy) Simulink models.
Run the models with the verification and validation tools to see if the results of the
simulation make sense.
Check the data you are using against the real world by seeing if the simulation produces
results that match experiments.
If you replace a block in a model with a new block, run the model with both the old
and new block, subtracting the outputs of each to verify that the differences between
them are essentially zero.
In other words, be a good engineer.
Bibliography
[1] Anderson, Brian D. O. and Moore, John B., Linear Optimal Control, Prentice-Hall,
Englewood Cliffs, 1971.
[2] Bathe, Klaus-Jurgen, Finite Element Procedures, Prentice Hall, Englewood Cliffs,
1996.
[3] Bernstein, Dennis S., Feedback Control: An Invisible Thread in the History of Tech-
nology, IEEE Control System Magazine, Vol. 22, No. 2, pp 53-68, April 2002.
[4] Bolles, Edmund Blair, Galileo's Commandment, pp 415-419, W. H. Freeman, New
York, 1999. This book contains the fragment from Galileo's 1638 book Two New
Sciences, as translated by Henry Crew and Alfonso de Salvio.
[5] Brush, Stephen G., A History of Random Processes, I, Brownian Movement from Brown
to Perrin, Archive for History of Exact Sciences, Vol. 5, pp 1-36, 1968. Reprinted in
Studies in the History of Statistics and Probability, II, edited by M. Kendall & R. L.
Plackett, Macmillan, New York, pp 347-382, 1977.
[6] Bryson Jr., Arthur E., Control of Spacecraft and Aircraft, Princeton University Press,
Princeton, N.J., 1994.
[7] Childers, Donald G., Probability and Random Processes: Using MATLAB with Ap-
plications to Continuous and Discrete Time Systems, Irwin, a McGraw-Hill Company,
Chicago, 1997.
[8] Chudnovsky, Victor, Kennedy, Dallas, Mukherjee, Arnav, and Wendlandt, Jeff,
Modeling Flexible Bodies in SimMechanics and Simulink, MATLAB Digest, Avail-
able on The MathWorks web site http://www.mathworks.com/company/newsletters/
digest/2006/may/simmechanics.html, May 2006.
[9] Derbyshire, John, Unknown Quantity: A Real and Imaginary History of Algebra,
Joseph Henry Press, Washington, D.C., 2006.
[10] Edosomwan, Johnson A., Integrating Innovation and Technology Management, John
Wiley and Sons, New York, 1989.
[11] Egan, William F., Phase-Lock Basics, John Wiley and Sons, New York, 1998.
[12] Friedman, Thomas L., Learning to Keep Learning, New York Times, p A33, December
13, 2006.
[13] Funda, J., Taylor, R.H., and Paul, R.P., On Homogeneous Transforms, Quaternions,
and Computational Efficiency, IEEE Transactions on Robotics and Automation, Vol.
6, No. 3, pp 382-388, June 1990.
[14] Galilei, Galileo, Dialogue Concerning the Two Chief World Systems: Ptolemaic and
Copernican, translated by Stillman Drake, Dover Press, New York, 1995.
[15] Gran, Richard, Fly Me to the Moon, The MathWorks News and Notes, Summer 1999,
available at the MathWorks website: http://www.mathworks.com/company/newletters/
news_notes/sum99/gran.html
[16] Gran, Richard, et al., High Temperature Superconductor Evaluation Study, Final report
for Department of Transportation, available from the DOT Library NASSIF Branch,
2003.
[17] Halliday, David, and Resnick, Robert, Physics, John Wiley and Sons, New York, 1978.
[18] Harel, David, Executable Object Modeling with Statecharts, IEEE Computer Society
Magazine, Vol. 30, No. 7, pp 31-42, July 1997.
[19] Headrick, Mark V., Origin and Evolution of the Anchor Cock Escapement, IEEE
Control System Magazine, Vol. 22, No. 2, pp 41-52, April 2002.
[20] Humphrey, Watts S., Managing the Software Process, Software Engineering Institute,
Addison-Wesley, Reading, MA, 1990.
[21] Inman, Daniel J., Engineering Vibration, Prentice Hall, Upper Saddle River, NJ, 2001.
[22] Jackson, Leland B., Digital Filters and Signal Processing, Kluwer Academic Publish-
ers, Norwell, MA, 1986.
[23] Jerri, A.J., The Shannon sampling theorem: Its various extensions and applications:
A tutorial review, Proceedings of the IEEE, Vol. 65, No. 11, pp 1565-1596, Nov. 1977.
[24] Kaplan, Marshall H., Modern Spacecraft Dynamics and Control, John Wiley and Sons,
New York, 1976.
[25] Lienhard IV, John H., and Lienhard V, John H., A Heat Transfer Textbook, Phlogiston
Press, Cambridge, MA, 2002.
[26] Livio, Mario, The Golden Ratio, The Story of Phi, the World's Most Astonishing
Number, Broadway Books, New York, 2002.
[27] Luke, H.D., The Origins of the Sampling Theorem, IEEE Communications Society
Magazine, Vol. 37, No. 4, pp 106-108, April 1999.
[28] Meirovitch, Leonard, Dynamics and Control of Structures, John Wiley and Sons, New
York, 1990.
[29] Moler, Cleve B., Numerical Computing with MATLAB, SIAM, Philadelphia, 2004.
[30] Newman, James R., editor, Volume 2 of The World of Mathematics, Mathematics of
Motion, by Galileo Galilei, pp. 734-774, Simon and Schuster, Inc., 1956.
[31] Oppenheim, Alan, Realization of Digital Filters Using Block-floating-point Arithmetic,
IEEE Transactions on Audio and Electroacoustics, Vol. 18, No. 2, pp. 130-139, June
1970.
[32] Papoulis, Athanasios, The Fourier Integral and its Applications, McGraw-Hill, NY,
1962.
[33] Papoulis, Athanasios, Probability, Random Variables, and Stochastic Processes, Mc-
Graw-Hill, NY, 2002.
[34] Pedram, Massoud and Nazarian, Shahin, Thermal Models, Analysis and Management
in VLSI Circuits: Principles and Methods, Proceedings of the IEEE, Special Issue
On-Chip Thermal Engineering, Vol. 94, No. 8, pp 1473-1486, August 2006.
[35] Royce, Walker, Software Project Management: A Unified Framework, Addison-
Wesley, Boston, 1998.
[36] Sane, Sanjay P., Dieudonné, Alexandre, Willis, Mark A., and Daniel, Thomas L.,
Antennal Mechanosensors Mediate Flight Control in Moths, Science, Vol. 315, No.
5813, pp 863-866, February 2007.
[37] Scheinerman, Edward R., Invitation to Dynamical Systems, McGraw Hill, Prentice
Hall, NJ, 1996.
[38] Schwartz, Carla and Gran, Richard, Describing function analysis using MATLAB and
Simulink, IEEE Control System Magazine, Vol. 21, No. 4, pp 19-26, Aug. 2001.
[39] Shannon, Claude, Communication in the Presence of Noise, originally published in the
Proceedings of the I.R.E., Vol. 37, No. 1, Jan. 1949. Republished as a Classic Paper
in the Proceedings of the IEEE, Vol. 86, No. 2, pp 447-457, February 1998.
[40] Sidi, Marcel J., Spacecraft Dynamics and Control: A Practical Engineering Ap-
proach, Cambridge University Press, Cambridge, MA, 1997.
[41] The MathWorks Inc., Simulink Performance and Memory Management Guide,
http://www.mathworks.com/support/tech-notes/1800/1806.html.
[42] The MathWorks Inc., User's Manual: Simulink, Simulation and Model Based Design; Using Simulink, Version 6, Ninth Printing, Revision for Simulink 6.4 (Release 2006a), March 2006. (References [42] through [47] are the current printed versions (2007) of the appropriate user's manuals and guides. They are available from The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098. They also are in the Help files shipped with the various products, and as downloads in pdf format from the MathWorks web site, http://www.mathworks.com.)
[43] The MathWorks Inc., Getting Started: Signal Processing Blockset, For Use with Simulink®, Version 5.1, Reprint (Release 2006a), March 2006.
[44] The MathWorks Inc., User's Guide: Signal Processing Toolbox, For Use with MATLAB®, Version 6, Sixth Printing, Revision for Version 6.5 (Release 2006a), March 2006.
[45] The MathWorks Inc., User's Guide: Stateflow® and Stateflow® Coder, For Complex Logic and State Diagram Modeling, Version 5, Fifth Printing, Revision for Version 5 (Release 13), July 2002.
[46] The MathWorks Inc., User's Guide: SimMechanics, For Use with Simulink®, Reprint for Version 2.2, December 2005.
[47] The MathWorks Inc., User's Guide: SimPowerSystems, For Use with Simulink®, Reprint for Version 4, March 2006.
[48] Ward, P., and Mellor, S., Structured Development for Real Time Systems, Prentice-Hall,
Englewood Cliffs, NJ, 1985.
[49] Wiener, Norbert, Cybernetics: or Control and Communication in the Animal and the Machine, Hermann et Cie., Paris, and MIT Press, Cambridge, MA, 1948.
[50] Wikipedia, The Free Encyclopedia, Thermostats, http://en.wikipedia.org/wiki/Image:
WPThermostat.jpg, retrieved December 10, 2006.
[51] Wikipedia, The Free Encyclopedia, Foucault Pendulum, http://en.wikipedia.org/wiki/
Foucault%27s_pendulum, retrieved December 10, 2006.
Index
1/f noise
Need for, 194
Simulating, 194
Using cascaded linear systems, 195
Absolute value block
In clock model, 22
In Math Operations library, 23
Zero-crossing, turning on and off,
86
Zero-crossing detection, 11
Air drag, 12
Force from, 12
In Leaning Tower model, 12
In train model, 253
Leaning Tower Simulation results,
13
Analog filters
As prototypes for digital filters, 142
Signal Processing Blockset, 145
Animation
SimMechanics blocks, 259
Annotation
Creating wide lines for vectors, 15
Of models, 16
Using TeX, 103
Attitude errors
3-axis direction cosine matrix, 107
3-axis quaternion, 109
Axis angle
Rotation, 94
Axis-angle representation
Euler's theorem, 99
Band Limited White Noise, 184
Setting noise power of, 189
Band Limited White Noise block
In moving average simulation, 150
Numerical experiments with, 184
To generate binomial distribution,
176
Using the Wiener process, 184
Bessel filter
In Signal Processing Blockset, 146
Bode plots
Calculating, 65
Creating, 64
Using Control Systems Toolbox, 67
Brown, Robert, 180
Brownian motion, see also Random
processes; Wiener process
And white noise, 178
Random walk and, 178
Brownian motion, connection
White noise, 180
Buffers, 160
Using with the FFT, 161
Butterworth filter
Analog, 142
Compared to other filters, 147
Definition, 143
In phase-locked loop, 167
MATLAB code for, 144
Using Signal Processing Blockset,
148
Callback
Executing MATLAB code from,
196
Finding graphics handles after, 233
Loading Data from, 254
Options in Model Properties, 124
PostLoadFcn, 234
PreLoad function, 39, 51, 123, 150,
167
StopFcn, 126, 176, 177
Central limit theorem, 174
Monte Carlo Simulink model, 174
Chebyshev filter
In Signal Processing Blockset, 146
Clock
Pendulum, 17
Simulation, 23
Using SimMechanics, 260
Clock model, see NCS library
Computing Mean and Variance
With Signal Processing Blockset
blocks, 184
Constant block, see also Simulink blocks
MATLAB input for, 4
Control System Toolbox
Lyapunov equation solver, 188
Control Systems
Bode plots, 64
Comparing position and velocity
feedback, 60
Early development, 44
Example, thermostat, 46
Full state feedback, 72
Getting derivatives, 74
Linear differential equations and,
48
Observers, 74
PD control, 69
PID control, 71
Position feedback, 56
Simulink model, constructing
derivatives, 77
Velocity feedback, 57
Control Systems Toolbox, 64
Convergence
Limits for random processes, 181
Correlation function
Definition, 194
Of white noise, 183
Spectral density function from, 183
Covariance equivalence
Definition, 189
White noise, 189
Covariance matrix, 186
Covariance equivalence, 189
Definition, 187
Differential equation for, 187
Creating an Executable Specification
In the system design process, 271
Cross product, 98, 100
Cross product block
In Simulink, masked subsystem,
101
Curve fitting
with MATLAB for the polynomial
block, 91
Damping ratio, 51
Data store, data read, data write
Storing and retrieving data in
embedded code, 279
DC motor, 105
Demux block, see also Simulink blocks
Using, 246
Deriving rates
For derivative feedback, 75
Difference block, see also Simulink
blocks
Validating model changes with, 146
Digital filters, 121
Bode plot for, 135
Discrete library, 129
In Discrete library, 121
Limited precision arithmetic and,
153
State-space models in Simulink,
129
Digital signal processing, 115
Bandpass filter design, 154
Definition, 118
Digital filter from analog using
bilinear transformation, 148
fir filter from state-space model,
151
Impulse sample representation, 137
Sampling and A/D conversion, 123
Uses of, 121
Using Signal Processing blockset,
145
Digital Transfer Function block, see
Simulink blocks
Direct Form Type II, see second order
sections
Direction cosines, 94
From quaternion, 101
Discrete time
Comparing continuous with
discrete, 39
Converting continuous system to,
38
Discrete transfer functions
Calculation, 135
Dynamics
Connection with stiff solvers, 87
Control system, 55
Control system sensors, 63
DC motor, 105
Forces from rotation, 24
Foucault pendulum, for, 26
House, heat flow in, 44, 48
Pendulum, 42
Reaction wheel, 105
Rotation, see also Axis angle:
Direction Cosines; Quaternion
Rotations, 94
Satellite rotation, 105
Einstein, Albert
Brownian motion, 180
Electric train
Rail resistance with nonlinear
resistor block, 251
Traction motor and train dynamics,
252
Electric Train Model
Model, 251
Elliptic Filter
In Signal Processing Blockset, 146
Embedded code
In the system design process, 286
Using data read, data write, data
store blocks, 279
Equations of Motion
For Euler angle rotations, 97
For quaternion representation of
rotations, 99
Escapement, 17, 31
Description of, 19
History of, 17
Simulink subsystem for, 21
Euler angles, 95
From quaternion, 101
In lunar module model, 272
Euler's theorem, 99
Executable Specification, 215, 225, 269
Analysis to create, 276
Definition and example, 225
For lunar module digital control
system, 279
Stateflow and Embedded MATLAB
in, 283
Fast Fourier Transform (FFT)
block, 161
Fibonacci sequence, 116
Using the z-transform on, 120
Filter design, 141
fir filter, 149
White noise in, 150
Fixed-point filtering in, 159
Foucault pendulum, see also NCS
library, 24
Dynamics, derivation of, 26
Model parameters, 30
Model, creating, 28
Solvers, experimenting with, 30
Vibration of moths' antennae, 31
Friedman, Thomas, 289
From Workspace block, 92
Gain block, 1, 12, 14, 23, 123, 184, 186
Creating a product with, 253
GUI interface to Simulink with, 232
In Signal Processing Blockset, 157
In SimMechanics, 261
Galileo
Comparing objects of different
mass, 14
Experiments by, 17
Inclined plane experiments, 31
Internet, references to, 31
Leaning Tower of Pisa, 5
Pendulum period and, 19
GoTo block, 77, 83, 253, 254
Graphical User Interface (GUI), 231
MATLAB Graphical User Interface
Development Environment,
232
GUI, see Graphical User Interface
GUIDE, see Graphical User Interface
Hamilton
Quaternion, 99
History, 112
Heat equation
Electrical analog of, 203
Four-room house model, 262
Partial differential equations, 202,
213
Two-room house model, 262
Heating control
With thermostat, 44
Home heating control
Two-room house model, 233
Hooke, 18
Huygens, Christiaan, 18, 31
Hysteresis, 48, 225
Hysteresis block, 45
iir filter, 149
Integration
Observer, 75
Integrator, see also Simulink blocks
Block dialog for, 8
Block dialog options, 8
Denoted by 1/s, 7
Dialog, 8
Discrete Reset in phase-locked
loop, 167
External initial conditions, 57
In integral compensation, 72
In state-space model, 210
Initial condition, external, 9
Simulink Block, 7
Simulink C-coded dll, 7
Integrator modulo 1 block
In the PLL model, 167
Interpolation
In n dimensions, 87
In zero crossing, 11
Polynomial block with MATLAB
curve fit, 91
Laplace Transforms, 39
Of state-space model, 41
Transfer function of state model, 42
Leaning Tower
Simulation, two bodies, 14
Library block
Quaternion, making, 279
Limited precision arithmetic, 115, 145,
152, 153
Fixed-point, 158
Floating-point, 157
Linear control systems, see control
systems
Linear Differential Equations
Complete solution of, 35
Computing in discrete time , 37
Eigenvalues and response, 51
General form of, 33
In Simulink, state-variable form, 38
Poles and zeros, 55
State-space form of, 34
Undamped natural frequency and
damping ratio, 51
Linear feedback control, see Control
systems
Linear models
Analysis of Lorenz attractor with,
83
Creating from differential
equations, 49
From Control System Toolbox, 67
Linearization
Of a Nonlinear System, 49
Lookup Table, 90
2-D, 90
Entering tabular data in, 88
n-D, 91
Simple, 88
Lookup Table block, see also Simulink
blocks
Lookup Table library, 87
Lorenz attractor
Chaotic motion model, 82
Linear models as function of time,
83
Parameters and fixed points, 82
Time varying root locus for, 83
Lorenz attractor simulation, 81
Lorenz, Edward, 81
LTI object
Creating and using, 67
Definition, 66
LTI Viewer
For digital filters, 136
For string vibration model, 266
Lunar Module
Euler angles in, 272
Quaternion block in, 272
Lunar module digital flight control
System design process example,
272
Lyapunov equation solver
Control System Toolbox M-file,
188
In Control System Toolbox, 188
Maple-MATLAB Interface
Determining direction cosine
attitude error with, 107
Using for lunar module control law,
277
Using Maple for z-transform, 119
Masked subsystem
Deriving rates model, 75
Example, cross product, 101
Masked block parameters, 76
Masked subsystem, creating
SimPowerSystems nonlinear
resistor, 247
Math Operations, see Simulink blocks
MATLAB
Calculation of stop time, 6, 10
Commands, in Courier type, 1
Creating a discrete time model, 35
Creating a discrete time model,
c2d_ncs, 37
Default Simulink data in, 10
Input for Constant block, 4
ODE Suite, compared to Simulink,
7
Opening Foucault_Pendulum, 28
Opening Leaningtower, 6
Opening Leaningtower2, 12
Opening Leaningtower3, 16
Opening Simulink, 2
Vector notation in blocks, 14
MATLAB code
Updating GUI with, 236
MATLAB connection to Simulink
Graphical User Interface (GUI),
232
MATLAB GUI
Code for updating, 236
Creating, 238
Interface for Stateflow and
Simulink, 230
Model Properties Tab, see Callback
Modeling and Analysis
In the system design process, 271
Moler, Cleve, xx, 30, 81, 115, 201
Monte Carlo simulation, 173
Mux block, see also Simulink blocks
Using, 45
NCS library
1/f noise, model, 194
Batch processing, model, 160
Butterworth filter model, 141
Butterworth filter, Signal
Processing Blockset, 145
Clock_transfer_functions, 42
Control systems, model, 44
Digital filter aliasing, model, 123
Digital filter transfer function,
model, 121
Digital filter, Bode plot, 135
Fibonacci sequence
Digital filter block model, 127
Model, 116
State-space model, 129
Final lunar module specification,
model, 278
fir filter model, 149
Foucault pendulum, model, 24
Heat equation, model, 207
Heating controller
Executable specification model,
225
MATLAB connection, 230
Model, 230
How to download, 1
Including computer limitations,
lunar module model, 276
Leaning Tower, model, 9
Limited precision filter design,
model, 157
Linear differential equations,
model, 38
Lorenz attractor, model, 81
lunar module Stateflow logic,
model, 282
Monte Carlo, central limit theorem
model, 174
Naming conventions, 1
NCS definition, xx
Nonlinear resistor, model, 246
Observer, model, 74
Partial differential equations, 207
PD control, model, 69
Phase-locked loop, model, 164
PID control, model, 71
Random walk, model, 178
Rayleigh noise, model, 176
Reaction wheels model, 105
Rotation
Axis-angles, 98
Direction cosines, 98
Euler angles, 94
Quaternions, 99
Sampling theorem using FFT,
model, 160
Sampling theorem, model, 140
Saving new models, 4, 5
Set MATLAB path, 1
SimMechanics clock, model, 260
SimMechanics pendulum, model,
257
SimMechanics vibrating string,
model, 262
SimPowerSystems
Four-room house model, 263
Simple model, 242
SimPowerSystems train, model,
251
Spacecraft with reaction wheel,
model, 109
Specification capture, lunar module
model, 271
Specification to design, lunar
module control law model,
273
Spring-mass
Model, 55
State-space model, 51
State-space, continuous and
discrete, 39
Stateflow
Chart, 215
Debugger, 223
Model input-output, 221
Simple model, 215
Systems excited by white noise,
model, 189
Thermo_NCS, 46
Using data in tables, models, 87
Using filter design block, model,
153
Using the nonlinear resistor, model,
248
White noise, model, 184
New York Times
Thomas Friedman editorial, 289
Nonlinear controller
Home heating thermostat, 44
Nonlinear differential equations, 79
Numerical Computing with MATLAB, 30
Fibonacci sequence in, 115
Lorenz attractor in, 81
Partial differential equations in, 201
Numerical integration, see Solvers
Observers, 77
Ode113, 86
Ode23t, 249
Ode45, 193
Opening Simulink
Clicking on icon, 2
Command for, 2
Partial differential equations
Creating a Simulink model for, 205
Finite-dimensional models for, 202
Heat equation, 202
Electrical analog, 207
Model using SimPowerSystems,
262
State-space model, 207
Modeling in Simulink, 200
Vibrating string, 261, 264
Vibration, models for, 211, 261,
265
Pendulum, see also Clock; Foucault
pendulum
Clock, 17
Using SimMechanics, 257
Perrin, Jean
And Brownian motion, 180
Phase-locked loop (PLL), 164
How it works, 165
Model of, 167
Simulation of, 169
Voltage controlled oscillator
(VCO), 167
Physical modeling, 241
SimMechanics, 242
Poles and zeros
Bode Plot calculation with, 65
Definition, 42
Digital filter implementation and,
155
For 1/f noise approximation, 195
From state-space model, 54
In fir filters, 150
In Signal Processing Blockset, 152
LTI object and Control Systems
Toolbox, 66
Maxwell and, 44
Of Butterworth filter, 143
Phase shift in filters, 149
Using Control System Transfer
function, 55, 62
Polynomial block
Curve fitting in MATLAB for, 91
PostLoadFcn, see Callback
Power spectral density function
1/f noise, 194
Creating with white noise, 194
Definition, 194
Preload function, see Callback
Simulink model properties, 30
Quaternion, 94
Converting to direction cosines,
101
Converting to Euler angles, 101
Definition, 99
Derivative of, 99
Hamilton, discovery of, 112
In lunar module model, 272
Library block, 280
making, 279
Norm of, 99
Subsystem block for, 101
Random processes, 173
Convergence of, 181
Random walk, see also Random
processes
Prototype for white noise, 178
Reaction wheels
Model of, 107
Operation of, 105
Relational Operator block, see Simulink
blocks
Reset integrator
For phase-locked loop, 167
Root locus plot
Comparing position and velocity
feedback, 61
Definition, 60
For mass velocity feedback, 60
Transfer functions, numerical
issues with, 63
Rotating bodies
Forces on, 25
Rotations
Axis-angle representation, 94
Direction cosine matrix
representation, 94
Quaternion representation, 94
Sampling theorem, 115, 136
Implementing, low pass filters, 140
Numerical experiments, 140
Proof of, 138
Simulink model for, 140
Using FFT to implement, 161
Satellite in orbit
Dynamics, rotational, 105
Second order sections, 157
In Filter Design, 156
Second order systems
Parameters in, see also Damping
ratio; Undamped natural
frequency, 51
Simulink model with transfer
functions, 42
Shannon, Claude, 136, 138, 170
Signal builder
For input to nonlinear resistor, 249
Signal processing blocks
FFT, 161
Signal Processing Blockset
Butterworth filter in, 148
Signal Processing blockset, 145
Analog filters in, 146
Bessel filter in, 148
Blocks in the library, 146
Chebyshev filter in, 148
Comparing blockset and Simulink
models, 146
Filter design with second order
sections, 157
Implementing a digital filter in
fixed point, 159
Limited Precision Bandpass filter
design with, 154
Using buffers for batch processing,
160
SimMechanics, 240
Environment, 258
Ground, 258
Library, 256
Physical modeling, 242
Vibrating string, 262, 264
SimMechanics Blocks
Animation, 259
Body, 259
Body sensor, 259
Environment, 257
Ground, 256
Revolute joint, 259
SimPowerSystems, 240
Algebraic loops in, 248
Circuit with nonlinear resistor, 250
Connection icon, 242
Connections icon, 245
Connections to and from Simulink,
252
Library of blocks, 243, 244
Mask for nonlinear resistor block,
247
Mask for train model, 253
Modeling an electric train, 251
Nonlinear devices in, 248
Nonlinear elements in, 246
Nonlinear resistor model, 246
Simple exampleRC circuit, 242
Simulink inputs, 245
SimPowerSystems, blocks
DC voltage Source, 243
Resistors, inductors, capacitors,
244
Switch and circuit breaker, 245
Simulink
Adding a block, 3
Automatic connections, 4
Automatic vectorization, 33
Block diagram basics, 1
Block dialogs, 3
Click and drag, 3, 7
Continuous, library, 7
Creating a new model, 3
Drawing neat diagrams, 4
Library browser, 2
Logic and bit operations, library, 7
Masked subsystem, creating, 247
MATLAB, setting constants in, 9
Modeling partial differential
equations, 200
Preload function in model
properties, 30
Signal ow connections, 4
Simple model, 3
Sinks, library, 7
Solver
Default, 10
For SimMechanics, 257
For SimPowerSystems, see also
Ode23t
Making changes, 30
Selecting, 86, 87
With noise, 184, 186
Sources, library, 7
Starting a simulation, 9
Viewing results, Scope block, 9
Zero crossing
Blocks that detect, 11
Detection, 10
Interpolation, 10
Simulink blocks
Absolute value, 22, 23
Band Limited White Noise, 184
Clock, from Sources library, 88
Comparing, using differences, 39
Constant, 88
Constant block, 4, 7
Control systems, time-based
linearization, 83
Control systems, trigger-based
linearization, 83
Cross product, 101
Data store, read, write and memory,
280
From File, 92
From Workspace, 92
Gain, 12, 23, 72
Vectorizing, 14
GoTo, 77
Hysteresis, 45
Integrator, 7, 82
Integrator notation, from Laplace
operator, 41
Leaning Tower simulation, blocks
for, 7
Lookup Interpolation, 87
Lookup Table, 87, 88
Dynamics, 87
Library, 87
Math Operations, 3
Matrix Concatenation, horizontal
and vertical, 111
Multiplication, for matrix
operations, 102
Multiport switch, 76
Mux, 45, 253
Polynomial, curve fitting, 91
Prelookup, 87
Product, 12
Relational Operator block, 7
Scope, 13
Scope block, 7
Selector, 100
Sign, 23
Signal Builder, 249
Sine Input, 21, 46
State-space
Continuous, 38, 52, 58
Discrete, 39
Stop block, 7
Subsystem, 21, 190
Sum, 12, 20, 88, 253
Sum, used as difference, 72
Transfer function, 42, 63
Trigonometric functions, 4
Unit Delay, 116
Zero-Pole-Gain, 42
Simulink data
In MATLAB by default, 10
Simulink Models, see also NCS library
Annotating, 15
Wide nonscalar lines, 16
Smoluchowski, Marian von
And Brownian motion, 180
Solvers, see also Ode113; Ode23t;
Simulink, see also Ode45
For Numerical Integration, see also
Foucault pendulum
In SimMechanics, 257
ODE suite in Simulink, 86
Setting step size, 259
Simulink implements MATLAB
Solvers as standalone dlls, 30
Simulink's use of, 7
Stiff, 105
Using Different in a simulation, 31
Using Ode23t in
SimPowerSystems, 249
Spacecraft rotation
Model for, 109
Spaghetti Simulink models, 290
Specification development and capture
In the system design process, 270
Spectral density function
Definition, 194
Using correlation function, 183
State-Space
And switch curves for lunar
module, 285
State-Space model, see also Simulink
blocks
Calculating transfer function from,
42
For 1/f noise approximation, 196
For discrete time systems, 131
For pendulum, 38
For spring-mass-damper system, 58
Full state feedback and, 74
Getting Bode plot from, 65
Getting linear model for Lorenz
attractor, 83
In SimPowerSystems, 246
Of two-room house, 209
Transfer function for, 42
Discrete systems using, 135
Using lti object in MATLAB, 67
Stateflow, 282
Action language, 228
Adding events, 221
Adding inputs and outputs, 221
Heating control specification, 230
Home heating controller using, 225
Semantics, 218
Simple chart, 215
Using a GUI for inputs, 230, 233
Using the debugger, 223
Stateflow Executable Specification
In the system design process, 283
Stochastic processes, see Random
processes
Subsystems
Annotating, 81
Completing top down design, 281
Converting Fahrenheit to Celsius
and back, 45
Counter, for RCS jet timing (lunar
module), 280
Creating, 21
Creating a library for, 81
Creating a matrix in, 111
Deriving rate, for, 75
Empty, for top-down design, 271
Heating controller, executable
specifications, 226
House dynamics, heating system
model, 45
Interacting with a GUI, 233
Lunar module reaction jet control,
279
Mask dialog for parameter inputs,
78
Masked, 76, 81
documentation, 247
drawing an icon on, 247
for white noise, 184
Noise simulation, discrete and
continuous time, 191
Nonlinear resistor, in
SimPowerSystems, 247
Padding a buffer, signal processing,
162
Quaternion, library block, for, 100
Reaction wheels, for, 107
Reset Integrator, in phase-locked
loop, 167
Spacecraft rotation, for, 109
Specification capture and, 226
Stateflow Box command, 216
Stateflow, subcharts, 219
Tracing model to specication, for,
269
Train model
For track resistances, 254
SimPowerSystems and
Simulink, 252
With track simulation, 253
Triggered, 218
VCO, in phase-locked loop, 167
Vibrating string, in SimMechanics,
264
Zero crossing detection for, 11
System design process, 269
Component level design that meets
specication, 276
Creating an executable
specification, 271
Creating embedded code, 286
Example, lunar module digital
flight control, 272
Modeling and analysis, 271
Specification development and
capture, 270
Steps in, 270
The final lunar module executable
specification, 279
Using Stateflow in the executable
specification, 283
Verification and validation, 285
TeX
Annotating with, 103
Thermostat
Modeling, 48
Operation of, 46
Time based linearization, 83
Train Simulation
Calculating rail resistances from
train positions, 254
Using SimPowerSystems, 254
Transfer Function
Irrational for 1/f Noise, 195
Of a discrete system, 156
Of Butterworth filter, 143
Of ideal low pass filter, 140
Specifying for an analog filter, 142
Viewing in the Signal Processing
Blockset, 151
Transfer Function Block, see also
Simulink Blocks
Change of Filter changing icon for,
147
Transfer Functions
For digital lters, 121
From a simulation, 123
In the Simulink digital library, 128
Trigger based linearization, 83
Trigonometric (Trig) functions, 3
In direction-cosine matrix
calculation, 101
Tucker, Marc
In New York Times editorial, 289
Unbuffer, see Buffers
Undamped natural frequency, 51
Unit Delay, see Simulink blocks
Vectorizing a model, 33
Verification and validation
In the system design process, 285
Simulink blocks for, 285
Vibrating string, 262
Voltage Controlled Oscillator (VCO)
In phase-locked loop, 167
Wiener process, 178
Band Limited White Noise block,
184
In integrals, 183
Simulations with, 184
White noise and, 183
White noise, in Simulink, 184