Vous êtes sur la page 1sur 165

Additional Lecture Notes for the course SC4026/EE4C04

Introduction modeling and control


Ton J.J. van den Boom

September 7, 2015

Delft
Delft University of Technology

Delft Center for Systems and Control


Delft University of Technology
Mekelweg 2, NL-2628 CD Delft, The Netherlands
Tel. (+31) 15 2784052 Fax. (+31) 15 2786679
Email:
a.j.j.vandenBoom@tudelft.nl

Contents
Preface

1 Signals and Systems


1.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.2 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

7
7
12
15

2 Modeling of dynamical systems


2.1 Domains of dynamical systems
2.2 Examples of modeling . . . .
2.3 Input-output models . . . . .
2.4 State systems . . . . . . . . .
2.5 Exercises . . . . . . . . . . . .

.
.
.
.
.

17
17
22
32
36
40

second-order systems
. . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . .

43
43
47
59

3 Analysis of first-order and


3.1 First-order systems . . .
3.2 Second-order systems . .
3.3 Exercises . . . . . . . . .

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

4 General system analysis


4.1 Transfer functions . . . . . . . . . . . . . . .
4.2 Time responses . . . . . . . . . . . . . . . .
4.3 Time response using Laplace transform . . .
4.4 Impulse response model: convolution . . . .
4.5 Analysis of state systems . . . . . . . . . . .
4.6 Relation between various system descriptions
4.7 Stability . . . . . . . . . . . . . . . . . . . .
4.8 Exercises . . . . . . . . . . . . . . . . . . . .

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

61
61
63
72
76
79
88
96
99

5 Nonlinear dynamical systems


103
5.1 Modeling of nonlinear dynamical systems . . . . . . . . . . . . . . . . . . . 103
5.2 Steady state behavior and linearization . . . . . . . . . . . . . . . . . . . . 107
5.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3

4
6 An
6.1
6.2
6.3
6.4
6.5

CONTENTS
introduction to feedback control
Block diagrams . . . . . . . . . . . . .
Control configurations . . . . . . . . .
Steady state tracking and system type
PID control . . . . . . . . . . . . . . .
Exercises . . . . . . . . . . . . . . . . .

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

113
113
122
125
133
141

Appendix A: The inverse of a matrix

143

Appendix B: Laplace transforms

145

Appendix C: Answer to exercises

147

Index

163

References

165

Preface
Engineers have always been interested in the dynamic phenomena they observe when studying physical systems. Mechanical engineers are interested in the interaction between forces
and motion, while electrical engineers want to know more about the relation between
current and voltage. To facilitate their study they utilize the concept of system, a mathematical abstraction that is devised to serve as a model for a dynamic phenomenon. It
represents the dynamic phenomenon in terms of mathematical relations among the input
and the output of the system, usually called the signals of the system. A system is, of
course, not limited to modeling only physical dynamic phenomena; the concept is equally
applicable to abstract dynamic phenomena such as those encountered in economics or other
social sciences.
The motions of the planets, the weather system, the stock market, a simple chemical
reaction, and the oscillating air in a trumpet are all examples of dynamic systems, in
which some phenomena evolve in time.
The main maxim of science is its ability to relate cause and effect. On the basis of the
laws of gravity, for example, astronomical events such as eclipses and the appearances
of comets can be predicted thousands of years in advance. Other natural phenomena,
however, are so complex that they appear to be much more difficult to predict. Although
the movements of the atmosphere (wind), for example, obey the laws of physics just as
much as the movements of the planets do, long term weather prediction is still rather
problematic.
The objective of this course is to present an introduction in the modeling, analysis, and
control of dynamical systems. In Chapter 1 we discuss some basic concepts of signals and
systems. In Chapter 2 we study the modeling of linear dynamic systems in the domains of
mechanical, electrical, electromechanical, and fluid/heat flow systems. Chapter 3 analyzes
first-order and second-order systems. Chapter 4 proceeds the analysis of Chapter 3, but
now for general (and higher-order) systems. In Chapter 5 we discuss the extension to
nonlinear dynamical systems and introduce the concept of linearization. In Chapter 6 an
introduction to feedback control is given.

Acknowledgements
The author likes to thank Peter Heuberger, Bart De Schutter, Nicolas Weiss, Martijn
Leskens and Rufus Fraanje for the fruitful discussions on the topic of modeling and control
and for their comments on the concept version of these lecture notes.
5

Preface

Chapter 1
Signals and Systems
A signal is an indicator of a phenomenon, arising in some environment, that may be
described quantitatively. Examples of signals are mechanical signals such as forces, displacement, and velocity, or electrical signals such as voltages and currents in an electrical
circuit.
Engineers and physicists have utilized the concept of a system to facilitate the study of
the interaction between forces and matter for many years. A system is a mathematical
abstraction that is devised to describe a part of the environment the properties of which we
want to study. It describes the relation between certain phenomena that can occur in this
environment. A dynamical system describes such relations with respect to time. A mechanical setup is a typical example of a dynamical system, because the forces, accelerations,
velocities and positions that exist within the system are related in time.
We will now more formally define the terms signal and system.

1.1

Signals

In the context of this course, a signal is an information carrier of a (physical) phenomenon.


Since any signal is always one out of a collection of several of many possible signals, signals
may mathematically be represented as elements of a set, called the signals set. In this
section we introduce various kinds of signals.
We use the symbol R to denote the set of all real numbers, C to denote the complex
numbers, and Z to denote all integers. The symbol N is used to denote all positive integers
including zero. We will use R+ to denote all positive real numbers including zero.
The signals we are interested in are functions of a variable that is usually time. The domain
of a signal is a subset T of the real line R and is called the time axis. The signal takes
values in a set W, called the signal space. The formal definition of a signal is as follows.
Definition 1.1 (Signal) Let W be a set and suppose that T is a subset of the reals R.
Then, any function x : T W is called a signal with signal axis T and signal space W.
Example 1.1 (Signals)
7

CHAPTER 1. SIGNALS AND SYSTEMS

(a) Mechanical signal. The velocity of a mass is a time signal with signal axis T =
(, ) and signal space W = R.
(b) Electrical signal. The voltage across a capacitor is a time signal with signal axis
T = [t0 , ) (measurement of voltage starts at time t0 ) and signal space W = R.
(c) Hydraulic signal. The oil pressure difference across a fluid resistance in a hydraulic
system is a signal.
(d) Thermal signal. The heat flow through a wall in a thermal system is a signal.
What is not a signal? For example, the value of a resistor in an electric circuit is usually
constant, and therefore considered not to be a signal. The same holds for other constant
system parameters, such as the mass in a mechanical system or the thermal capacity of a
room in a heat-flow system.

Elementary signals
We now give some elementary signals
The rectangular function: The rectangular function T : R R for a given scalar
T > 0 is defined as:
(
1/T for 0 t T
T (t) =
(1.1)
0
elsewhere
T (t)

1/T

Figure 1.1: Rectangular function

The unit impulse function: The unit impulse function : R R is defined as:
(t) = lim T (t)
T 0

where T (t) as defined above is a block function with constant amplitude 1/T and
duration T . We find that for T 0, the amplitude will go to infinity where the
duration approaches zero. The area under the pulse remains equal to 1. This leads

1.1. SIGNALS
(t)

0
Figure 1.2: Unit impulse function

to the following definition of the unit impulse function:


(t) = 0 for t 6= 0
Z
(t)d = 1

(1.2)

Note that the amplitude for t = 0 is not defined, but the integral (area under the
pulse) is also equal to 1.
The unit impulse function is often also referred to as the Dirac delta function. An
important property of the unit impulse function is that it can filter out values of a
function f through integration:
Z
f (t)(t) dt = f (0)
(1.3)

and
Z

f (t)(t ) dt = f ( )

(1.4)

us (t)

Figure 1.3: Unit step function

The unit step function: The unit step function us : R R is defined as:
(
0 for t < 0
us (t) =
1 for t 0

(1.5)

10

CHAPTER 1. SIGNALS AND SYSTEMS


Note that the derivative of the unit step function is the unit impulse function, so
(t) = d udst(t) .

The unit ramp function: The unit ramp function ur : R R is defined to be a linearly
increasing function of time with a unit increment:

ur (t)

0
Figure 1.4: Unit ramp function

(
0
ur (t) =
t

for t 0
for t > 0

(1.6)

Note that the derivative of the unit ramp function is the unit step function, so
us (t) = d udrt(t) .

up (t)
0

Figure 1.5: Unit parabolic function

The unit parabolic function: The unit parabolic function up : R R is defined to be


a parabolically increasing function of time:
(
0
for t 0
up (t) = t2
(1.7)
for t > 0
2
Note that the derivative of the unit parabolic function is the unit ramp function, so
ur (t) = d udpt(t) .

11

1.1. SIGNALS

Harmonic functions: The well-known sine function sin : R R and cosine function
cos : R R, can be written as exponentials
ejt + ejt
2
ejt ejt
sin(t) =
2j

cos(t) =

(1.8)

with j 2 = 1.
sin(t)
t

cos(t)

/2

Figure 1.6: Harmonic functions

Damped harmonic functions A damped harmonic function is a harmonic function,


multiplied by an exponential function:
e(+j)t + e(j)t
2
(+j)t
e

e(j)t
et sin(t) =
2j
et cos(t) =

(1.9)

et cos(t)
t

Figure 1.7: Damped harmonic functions for < 0

12

CHAPTER 1. SIGNALS AND SYSTEMS

The impulse, step, ramp, and parabolic are often called singularity functions.
Finally we introduce a single dot to denote the time derivative of a signal, so
y(t)
=

d y(t)
dt

and we use a double dot to denote the second time derivative of a signal, so
y(t) =

1.2

d2 y(t)
d t2

Systems

In this section the concept of a system is defined.


Definition 1.2 (System) A system is a part separated from the environment by a real or
imaginary boundary, that causes certain signals that exist within the boundary to be related.
Definition 1.3 (Dynamical system [1]) A dynamical system is a system whose behavior changes over time, often in response to external stimulation or forcing.
Definition 1.4 (Inputs [7]) An input is a system variable that is independently prescribed, or defined, by the systems environment. The value of the input is independent
of the system behavior or response. Inputs define the external excitation of the system and
can be quantities such as the external wind force acting on a tall building or the rainfall
forming the input flow into a reservoir system. A system may have more than one input.
Definition 1.5 (Outputs [7]) An output is defined as any system variable of interest.
It may be a variable measured at the interface with the environment or a variable that is
internal to the system and does not directly interact with the environment.
Definition 1.6 (Input-Output system) A system with inputs and outputs is called an
input-output system. The outputs of this type of system depend on the initial state of the
system and the external inputs.
Definition 1.7 (Autonomous system) A system without external inputs is called an
autonomous system. The behavior of this type of system depends entirely on the initial
state of the system.
Example 1.2 (Systems)
(a) Moving car. When the position of the throttle pedal (input) of a car is changed, the
power developed by the motor will change and the forward speed (output) increases
or decreases.

1.2. SYSTEMS

13

(b) Steered ship. When the steersman of a ship changes the position of the wheel (input)
to a new position, the heading of the ship (output) changes because of hydrodynamic
side forces acting on the newly positioned rudder.
(c) Mass-damper-spring system. A mechanical system with a number of masses that
are connected by springs and dampers to each other and without any outside forces
acting on it is an autonomous system.
(d) Wafer stage for lithography. A example of a more complex system is a wafer
stepper, with a positioning mechanism that is used in chip manufacturing processes
for accurate positioning (outputs of the system) of the silicon wafer on which the chips
are to be produced. The wafer can be accurately moved (stepped) in three degrees of
freedom (3DOF) by manipulating the currents of linear motors (inputs of the system).
(e) A municipal solid waste combustion plant. An example of a large-scale system
is a municipal solid waste combustion plant in which household waste is incinerated
for the reduction of the amount of waste and for the production of energy. The input
of the system is the amount a waste that is put into the oven, the output of the system
is the amount of energy that is produced.

System properties
In the remainder of this section we will introduce some basic system properties.
Definition 1.8 (Dynamical vs memoryless) A system is said to be memoryless if its
output at a given time is dependent only on the input at that same time.
For example the system

2
y(t) = 2u(t) u2 (t)

is memoryless, as the value y(t) at any particular time t only depends on the input u(t) at
that time. A physical example of a memoryless system is a resistor in an electric circuit.
Let i(t) be the input of the system and v(t) the output, then the input-output relationship
is given by
v(t) = R i(t)
where R is the resistance.
An example of a system with memory is a capacitor, where the voltage v(t) is the integral
of the current i(t), so
Z t
1
v(t) =
i( ) d
C

14

CHAPTER 1. SIGNALS AND SYSTEMS

Definition 1.9 (Causality) A system is causal if the output at any time depends only
on values of the input at present time and in the past.
All physical systems in the real world are causal because they cannot anticipate on the
future. Nonetheless, there are important applications where causality is not required. For
example if we have recorded an audio signal in a computers memory, we can process it
later off-line. We can then use non-causal filtering by allowing a delay in the input signal,
and as such implement a system that is theoretically non-causal.
Note that all memoryless systems are causal, since the output responds only to the current
value of the input.
Definition 1.10 (Linearity) Let y1 be the output of the system to the input u1, and let
y2 be the output to the input u2 . A system is said to be linear if and only if it satisfies the
following properties:
1. the input u1 (t) + u2 (t) will give an output y1 (t) + y2 (t).
2. the input u1 (t) will give an output y1 (t) for any (complex) constant .
An example of a linear system is an inductor, where the current i(t) is the integral of the
voltage v(t), so
Z t
1
i(t) =
v( ) d
L
An example of a non-linear system is the following system with input u and output y:
y(t) = u2 (t)
Definition 1.11 (Time-invariance) A system is said to be time-invariant if the behavior
and characteristics are fixed over time.
An input-output system is time-invariant if and only if a time-shift in the input leads to
the same time-shift in the output.
u(t) = y(t) , t R

u(t ) = y(t ) , t R

In other words, if the input signal u(t) produces an output y(t) then any time shifted
input, u(t ), results in a time-shifted output y(t ). An example of a time-invariant
system is a mass-spring system with a mass and a spring-constant that do not vary in time.

Definition 1.12 (Stability) A system is said to be stable if a bounded input (i.e. if its
magnitude does not grow without bound) gives a bounded output and therefore will not
diverge.

1.3. EXERCISES

15

An example of a stable biological system is a predator-prey model in which the population


of prey and predators are in balance and a small change of one of the populations will only
lead to a temporary deviation of the equilibrium point and does not destabilize the system.
A common example of an unstable system is illustrated by someone pointing the microphone close at a speaker; a loud high-pitched tone results.
An interesting example in the light of stability is a bicycle. If we consider the tilt position
with respect to the vertical axis, the systems stability depends on the forward velocity of
the bicycle. If the velocity is zero or very low, a small disturbance force on the handle bar
will immediately cause instability and the bicycle will fall. However, if the forward velocity
of the bicycle is high enough, the bicycle is stable, and a small disturbance force on the
handle bar will not destabilize the bicycle.
Finally we mention the derivative property for linear time-invariant systems:
Property 1 (Derivative property for linear time-invariant systems)
Let y1 be the output of the system to the input u1 for a linear time-invariant system,
then the derivative of the input u2 (t) = d udt1 (t) will result in the derivative of the output
R
y2 (t) = d ydt1 (t) . Furthermore, the
integral
of
the
input
u
(t)
=
u1 ( ) d will result in the
3
R
integral of the output y3 (t) = y1 ( ) d .

1.3

Exercises

Exercise 1. Signals
a) Show that the unit rectangular function can be written as a sum of scaled unit step
functions.
Exercise 2. Plots of signals
Plot the signals ac.
a) T1 (t) 2 T2 (t T1 ), for T1 = 1 and T2 = 2, t R.
b) ur (t) ur (t 1) us (t 4), for t R.
c) up (t) us (1 t) + us (t 1), for t R.
Exercise 3. Derivative of signals
Compute the derivative of the signals ac of Exercise 2.
Exercise 4. System properties
Are the systems ad memoryless, linear, time-invariant and/or causal.
a) 6 y(t) = 4 u(t) + 3 et .

16
b) y(t)
= 0.1 u(t) .
c) y(t) = 6 y(t 2) + 3 u(t) + 2 .
d) y(t) = sin(u(t)).

CHAPTER 1. SIGNALS AND SYSTEMS

Chapter 2
Modeling of dynamical systems
Real physical systems, which engineers must design, analyze, and understand, are usually
very complex. We therefore formulate a conceptual model made up of basic buildingblocks. These blocks are idealizations of the essential physical phenomena occurring in
real systems. An adequate model of a particular physical device or system will behave
approximately like the real system, and the best system model is the simplest one which
yields the information necessary for the engineering job.
In this chapter we study the modeling of linear dynamic systems in various domains:
1. Mechanical domain
2. Electrical domain
3. Electromechanical domain
4. Fluid flow domain
5. Heat flow domain
For these domains we aim at deriving linear differential equations for physical systems.

2.1

Domains of dynamical systems

In this section we study some basic tools for the modeling of dynamical systems in various
domains. For each domain we define the so called basic signals, that described the phenomena in that particular domain. Furthermore, for each domain the dynamical relations
between the basic signals can be described by a set of basic elements, which can be seen
as the building blocks of the system.
1. Mechanical systems
We can distinguish between translational and rotational mechanical systems. In
translational mechanical systems we consider motions that are restricted to translation along a line. In rotational mechanical systems we consider motions that are
restricted to rotation around an axis.
17

18

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


(a) Translational mechanical systems
Basic signals are force (f ) and position (x) (together with its derivatives velocity
v = x and acceleration a = x).
There are three basic system elements: mass (m), damper (b), and spring (k).
The relations between the basic signals and the basic elements for linear systems
are summarized in Figure 2.1
-

mass

f = m x

x
f-

f

x1

damper

f = b (x 1 x 2 )

spring

f = k (x1 x2 )

x2
f-

f

x1

x2
Figure 2.1: Translational mechanical systems

For an interconnected translational mechanical system we derive one differential


equation for each mass in the system.
(b) Rotational mechanical systems
Basic signals are torque ( ) and angle () (together with its derivatives angular
and angular acceleration = ).

velocity = ,
There are three basic system elements: inertia (J), rotational damper (b), and
rotational spring (k). The relations between the basic signals and the basic
elements for linear systems are summarized in Figure 2.2.

inertia

= J

19

2.1. DOMAINS OF DYNAMICAL SYSTEMS

rotational damper

= b (1 2 )

rotational spring

= k (1 2 )

Figure 2.2: Rotational mechanical systems


For an interconnected rotational mechanical system we derive one differential
equation for each inertia in the system.
2. Electrical systems
Basic signals are voltage (v) and current (i).
There are three basic system elements: resistor (R), capacitor (C), and inductor (L).
The relations between the basic signals and the basic elements for linear systems are
summarized in Figure 2.3
v2 e
i
v1 e

resistor

v2 v1 = R i

capacitor

i = C (v 2 v 1 )

inductor

v2 v1 = L

v2 e
i
v1 e

v2 e
i
v1 e

di
dt

Figure 2.3: Electrical systems


For an electrical network we derive one differential equation for each inductor in the
system, and one differential equation for each capacitor in the system.

20

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


Remark 2.1 Note that in electrical systems we use di
for derivative of the current
dt
The two dots on top of each other may lead to confusion.
rather than i.
3. Electromechanical systems
Also in electromechanical systems we can distinguish between translational en rotational electromechanical systems. An example of a translational electromechanical
system is a loudspeaker. An example of a rotational electromechanical system is
an DC-motor. Both systems contain an electrical part as well as a mechanical part.
Both can be modeled separately using the basic elements discussed before. The parts
are connected by means of a transducer. A transducer transforms energy from one
physical domain into another, in this case electrical energy into mechanical energy
and vice versa.
(a) Translational electromechanical systems
The basic signals in a translational electromechanical system are force (f ), velocity (v = x),
voltage (e) and current (i). The basic system element is the
transducer with transduction ratio or electromechanical coupling constant Kp .
(see Figure 2.4). For a translational electromechanical system we derive one
ie2 e
uuuuu

fx

uuuuu

transducer

f = Kp i
e2 e1 = Kp x

e1 e
Figure 2.4: Translational electromechanical system
differential equation for each inductor/capacitor in the electrical part of the
system, and one differential equation for each mass in the mechanical part of
the system.
(b) Rotational electromechanical systems
The basic signals in a rotational electromechanical system are torque ( ), angular
voltage (e) and current (i). The basic system element is the
velocity ( = ),
transducer with transduction ratio or electromechanical coupling constant Kr .
(see Figure 2.5).
For a rotational electromechanical system we derive one differential equation for
each inductor/capacitor in the electrical part of the system, and one differential
equation for each inertia in the mechanical part of the system.
Remark 2.2 Note that in electromechanical systems we use e for voltage instead of
v. We do this to avoid confusion with the velocity v.

21

2.1. DOMAINS OF DYNAMICAL SYSTEMS


ie
e2
e

,
= Kr i

transducer

e2 e1 = Kr

e1

Figure 2.5: Rotational electromechanical system

4. Heat flow systems


Basic signals are temperature (T ) and heat energy flow (q).
There are two basic system elements: thermal capacitor (C), and thermal resistor
(R). The relations between the basic signals and the basic elements for linear systems
are summarized in Figure 2.6.

T2

T
thermal capacitor

1
T = q
C

thermal resistor

q=

T1

1
(T2 T1 )
R

Figure 2.6: Heat flow systems


For an interconnected heat flow system we derive one differential equation for each
capacitor in the system.
5. Fluid flow systems
Basic signals are fluid pressure (p) and fluid mass flow rate (w).
There are two basic system elements: fluid capacitor (C), and fluid resistor (R).
The relations between the basic signals and the basic elements for linear systems are
summarized in Figure 2.7.
For an interconnected fluid flow system we derive one differential equation for each
fluid capacitor in the system.
Often a fluid capacitor appear in the form of a vessel that contains the fluid. The
difference between the pressure p at the bottom of the vessel and the outside pressure

22

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


wfluid capacitor

p =

1
w
C

fluid resistor

w=

1
(p1 p2 )
R

p
w p1

p2
Figure 2.7: Fluid flow systems

p0 is related to the level h of the fluid in the vessel by the linear equation
p p0 = g h

(2.1)

where is the mass density of the fluid and g is the gravitational constant.
Remark 2.3 Note that in these lecture notes we only consider transducers that transform
mechanical energy into electrical energy and vice versa. We could have discussed conversion
between any combination of physical domains, e.g. thermo-mechanical systems, thermoelectrical systems, but for the sake of simplicity we have limited the discussion to the
mechanical and electrical domains.

2.2

Examples of modeling

1. Translational mechanical systems


x2
x1

b3

m1
k1

b1

m2
k2

fe -

b2

Figure 2.8: Example of a translational mechanical system

Given 2 carts in the configuration of Figure 2.8. Cart 1 with mass m1 is connected
to a wall by linear spring with spring constant k1 , and to cart 2 by a spring with
spring constant k2 and a damper with damping constant b3 . Cart 2 with mass m2 is
driven by an external force fe . Friction with the ground causes a damping force with
damping constant b1 for cart 1 and damping constant b2 for cart 2. Our task is to

23

2.2. EXAMPLES OF MODELING


derive the differential equations for this system.

First we note that we will have two differential equations, one for each mass. Let us
first concentrate on the first mass m1 :
Newtons law tells us that
X
m1 x1 =
f1,i
i

where f1,i are all forces acting on mass m1 , see Figure 2.9.


b1 x 1

b3 (x 2 x 1 )

m1


k1 x1

x2
-

x1 k2 (x2 x1 )

Figure 2.9: Forces acting on cart 1


We can distinguish four forces:
(a) The force due to the spring between cart 1 and the wall is equal to
f1,k1 = k1 (0 x1 ) = k1 x1
(b) The force due to the spring between cart 1 and cart 2 is equal to
f1,k2 = k2 (x2 x1 )
(c) The force due to the damper between cart 1 and cart 2 is equal to
f1,b3 = b3 (x 2 x 1 )
(d) The force due to the friction is equal to
f1,b1 = b1 (0 x 1 ) = b1 x 1
and so the first differential equation becomes
m1 x1 = b1 x 1 + b3 (x 2 x 1 ) + k2 (x2 x1 ) k1 x1
For the second mass m2 we derive with Newtons law:
X
m2 x2 =
f2,i
i

where f2,i are all forces acting on mass m2 , see Figure 2.10.
We can distinguish four forces:

24

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


b3 (x 2 x 1 )

x1

fe


k2 (x2 x1 )

m2

b2 x 2

x2

Figure 2.10: Forces acting on cart 2

(a) The force due to the spring between cart 2 and cart 1 is equal to
f2,k1 = k2 (x1 x2 )
(b) The force due to the damper between cart 2 and cart 1 is equal to
f2,b3 = b3 (x 1 x 2 )
(c) The force due to the friction is equal to
f2,b2 = b2 (0 x 2 ) = b2 x 2
(d) The external force fe
and so the second differential equation becomes
m2 x2 = b2 x 2 + k2 (x1 x2 ) + fe + b3 (x 1 x 2 )
So summarizing, the two differential equations describing the motion of the two-cart
system are as follows:

m1 x1 = b1 x 1 + b3 (x 2 x 1 ) + k2 (x2 x1 ) k1 x1
(2.2)
m2 x2 = b2 x 2 + b3 (x 1 x 2 ) + k2 (x1 x2 ) + fe
2. Rotational mechanical systems
Given an inertia J in the configuration of Figure 2.11. The inertia is lined up with
a rotational damper with damping constant b1 and a friction with damping constant
b2 . An external angular velocity 1 is exciting the system. Our task is to derive the
differential equations for this system.

25

2.2. EXAMPLES OF MODELING


1

b1

J
2

b2

Figure 2.11: Rotational mechanical systems

Using Newtons law for rotation we obtain:


J 2 = 1 + 2
where 1 and 2 are all torques acting on inertia J, see Figure 2.11. For the first
damper we find:
1 = b1 (1 2 )
For the second rotational damper we find:
2 = b2 (0 2 ) = b2 2
Combining these results gives us
J 2 = 1 + 2
= b1 (1 2 ) b2 2
= b1 1 (b1 + b2 ) 2

and so we obtain the differential equation describing the dynamics between 1 and
2 as follows:

J 2 = b1 1 (b1 + b2 ) 2
3. Electrical system The electrical circuit of Figure 2.12 consists of an inductor, a
resistor and a capacitor in series, where L is the induction, R the resistance and C
the capacity. Often we assume one of the voltages to be zero, in this example we
choose v4 = 0.
First note that the current i is the same for three elements (induction, resistance and
capacitor). For the inductor we find:
L

di
= v1 v2
dt

For the capacitor we find


d (v3 v4 )
d v3
1
=
= i
dt
dt
C

26

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


L

v1
e

v2

v3

C
v4

r v4

Figure 2.12: Example of an electrical systems

For the resistor we find


v2 v3 = Ri

v2 = v3 + R i

Substitution into the first equation gives us:


L

di
= v1 v3 R i
dt

and so we obtain the differential equations describing the dynamics of the electrical
network as follows:

di

= v1 v3 R i
L
dt
(2.3)

1
d
v
3

= i
dt
C

4. Translational electromechanical system

Consider the system of Figure 2.13. A current i through a coil causes a force ft on
a permanent magnet, resulting in a displacement x. The voltages in the circuit are
e1 , e2 , e3 , and e4 .
L
R
e1
e2
e3 ie

uuuuu
ft

e4

Kp

fs

m
uuuuu

k


Figure 2.13: Translational electromechanical system

The electromechanical coupling constant is Kp . This system consists of two parts,


the mechanical and the electrical part.

27

2.2. EXAMPLES OF MODELING

We start with the electrical part. Note that the current i is the same through all
electrical elements. For the inductor we find:
e1 e2 = L

di
dt

For the resistor we find


e2 e3 = Ri
For the voltage difference at the input of the system we find:
e1 e4 = (e1 e2 ) + (e2 e3 ) + (e3 e4 )
di
= L + R i + (e3 e4 )
dt
Next we consider the mechanical part:
m

d2 x
= ft + fs
d t2
= ft k x

The last step is to connect the two systems by the transducer using the relations
ft = Kp i
dx
e3 e4 = Kp
dt
Substitution gives us:
di
+ R i + (e3 e4 )
dt
dx
di
= L + R i + Kp
dt
dt

e1 e4 = L

d2 x
= ft k x
d t2
= Kp i k x

So summarizing, the two differential equations describing the dynamics of the translational electromechanical system are as follows:

di
dx
= e1 e4 R i Kp
dt
dt
2

d
x
m
= Kp i k x
d t2
L

(2.4)

28

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS

ext ,

re2

Kr e

R
e
re1

Figure 2.14: Rotational electromechanical system

5. Rotational electromechanical system


Figure 2.14 shows a dynamo, which translates mechanical energy into electrical en which
ergy. On the mechanical side we have the torque and angular velocity ,
results in a current i and voltage e2 e1 on the electrical side. We assume the dynamo
has inertia J, and electromechanical coupling constant Kr . On the electrical side the
circuit is closed by a resistor R.
This system consists of two parts, the mechanical and the electrical part. We start
with the mechanical part. (First note that the angular velocity 1 of the inertia and
transducer are both the same.) There are two torques acting on inertia J, namely
the external torque ext and the transducer torque t . Newtons law for rotational
systems gives us:
J

d2
= ext t
d t2

Next we concentrate on the electrical part: We find the relation


e1 e2 = R i
The last step is to connect the two systems by the transducer using the relations
t = Kr i
d
e2 e1 = Kr
dt
Substitution gives us:
J

d2
= ext t
d t2
= ext + Kr i
Kr
= ext +
(e1 e2 )
R
K2 d
= ext r
R dt

29

2.2. EXAMPLES OF MODELING

So summarizing, the differential equation describing the dynamics of the rotational


electromechanical system is as follows:

d2
K2 d
(2.5)
J 2 = ext r
dt
R dt
6. Water flow system
Given is a system with two water vessels as in Figure 2.15.
win

p0

A1

h1
p1

A2
R1
-

p2

h2
R2

wout

wmed
Figure 2.15: Example of a water flow system

Water runs into the left vessel from a source with mass flow win , from the left vessel
through a restriction with restriction constant R1 into the right vessel with mass flow
wmed , and through a restriction with restriction constant R2 out of the right vessel
with mass flow wout . The pressures at the bottom of the water vessels are denoted
as p1 and p2 . The outside pressure is p0 . The areas of the left and rights vessels are
A1 and A2 , and the water levels are denoted by h1 and h2 . Our task is to derive the
differential equations for this system.
First we consider the left vessel:
The net flow into the left vessel is w1 = win wmed . The fluid capacitance of the left
vessel is given by
C1 =

A1
g

The change in pressure p1 is now given by:


p 1 =

1
g
w1 =
(win wmed)
C1
A1

For the flow wmed we find:


wmed =

1
(p1 p2 )
R1

30

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


Substitution of wmed into the previous equation we obtain:
g
g
win
(p1 p2 )
A1
A1 R1
g
g
g
=
win
(p1 p0 ) +
(p2 p0 )
A1
A1 R1
A1 R1

p 1 =

The derivation for the right vessel is similar:


The net flow into the right level is w2 = wmed wout . The fluid capacitance of the
right vessel is given by
C2 =

A2
g

The change in pressure p2 is now given by:


p 2 =

1
g
w2 =
(wmed wout )
C2
A2

For the flow wout we find:


wout =

1
(p2 p0 )
R2

Substitution of wmed and wout into the previous equation yields


g
(p1 p2 )
A2 R1
g
=
(p1 p0 )
A2 R1
g
=
(p1 p0 )
A2 R1

p 2 =

g
(p2 p0 )
A2 R2
g
g
(p2 p0 )
(p2 p0 )
A2 R1
A2 R2
g(R1 + R2 )
(p2 p0 )
A2 R1 R2

So summarizing, the two differential equations describing the dynamics of the 2 vessel
system are as follows:

g
g
g

p 1 = A win A R (p1 p0 ) + A R (p2 p0 )


1

g
g(R1 + R2 )

p 2 =
(p1 p0 )
(p2 p0 )
A2 R1
A2 R1 R2

Now we can use the relation between the fluid levels h1 , h2 and p1 , p2 :
p1 p0 = gh1 ,

p2 p0 = gh2 .

This gives us
p 1 = g h 1 ,

p 2 = g h 2 ,

31

2.2. EXAMPLES OF MODELING


and we can rewrite the equations as

1
g
g

h 1 = A win A R h1 + A R h2
1

h 2 =

(2.6)

g(R1 + R2 )
g
h1
h2 .
A2 R1
A2 R1 R2

7. Heat flow system


In the system of Figure 2.16, a heat flow qin is entering a wall with temperature
Tw and heat capacity cw . From the wall a heat flow qout is entering the room with
temperature T0 and heat capacity c0 . The thermal resistance from the wall to the
room is equal to Rw . Our task is to derive the differential equations for this system.
Rw
Tw
qin -

To
qout-

cw

co

Figure 2.16: Example of a heat flow system


First we consider the wall temperature:
The net heat flow into the wall is q1 = qin qout . The change in wall temperature Tw
is now given by:
1
1
Tw =
q1 =
(qin qout )
cw
cw
For the heat flow qout we find:
qout =

1
(Tw T0 )
Rw

Substitution of qout results in:


1
1
Tw =
qin +
(T0 Tw )
cw
cw Rw
The derivation for the room temperature is similar:
The net heat flow into the room is qout . The change in room temperature T0 is now
given by:
1
T0 = qout
c0

32

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


Substitution of qout into this equation we obtain:
T0 =

2.3

1
(Tw T0 )
c0 Rw

So summarizing, the two differential equations describing the dynamics of the heat
flow system are as follows:

1
1

qin +
(T0 Tw )
Tw =
cw
cw Rw
(2.7)
1

T0 =
(Tw T0 )
c0 Rw

Input-output models

In the previous sections we showed that for a linear dynamical system we can derive a
set of differential equations, that describe the dynamics of the system. In many cases we
obtain more than one differential equation, and the relation between the input variable
and the output variable are not directly given, but we need one or more auxiliary variables. For example, the two differential equations of (2.2), describing the dynamics of a
translational mechanical system, use three variables x1 , x2 , and fe . If we define the signal
fe as the input of the system, and we consider the signal x2 as the output of the system,
then x1 can be seen as an auxiliary variable. It would be nice if we could eliminate the
variable x1 from the equations and have a direct relation between fe and x2 , in a so-called
input-output differential equation. For linear systems, this elimination can be done easily
using the Laplace transformation.
Let x(t), t 0 be a signal, then the Laplace transform is defined as follows:
Z
X(s) = L{x(t)} =
x( ) es d

(2.8)

where X(s) is called the Laplace transform of x(t), or X(s) = L{x(t)}. The complex
variable s C is called the Laplace variable. Table 2.1 gives the Laplace transforms of
some common signals. A more extensive list of Laplace transforms is given in Appendix B.
The Laplace transformation is a linear operation and so it has the following property
L{f1 (t) + f2 (t)} = L{f1 (t)} + L{f2 (t)}

for , C

(2.9)

Furthermore if the system is initially-at-rest at time t = 0 (which means that output


y(0) = 0 and all its derivatives are zero as well), then
 n

d y(t)
L
= sn Y (s)
(2.10)
d tn

33

2.3. INPUT-OUTPUT MODELS


time function

Laplace transform

Dirac pulse

(t)

Unit step

us (t)

Ramp

ur (t)

Parabolic

up (t)

Exponential
Sinusoid

eat us (t)
sin (t)us (t)

1
s
1
s2
1
s3
1
s+a

2
s + 2

Table 2.1: The Laplace transforms of some common signals


Now let us consider the linear differential equation
dn1 y(t)
d y(t)
dn y(t)
+
a
+ . . . + an1
+ an y(t)
1
n
n1
dt
dt
dt
= b0

dm u(t)
dm1 u(t)
d u(t)
+
b
+ . . . + bm1
+ bm u(t)
1
m
m1
dt
dt
dt

Let U(s) = L{u(t)} and Y (s) = L{y(t)}. The Laplace transformation gives us
sn Y (s) + a1 sn1 Y (s) + . . . + an1 sY (s) + an Y (s)
= b0 sm U(s) + b1 sm1 U(s) + . . . + bm1 sU(s) + bm U(s)
and using the linearity property we obtain




n
n1
m
m1
s + a1 s
+ . . . + an1 s + an Y (s) = b0 s + b1 s
+ . . . + bm1 s + bm U(s)

(2.11)

In the following examples we will show how we can rewrite a system of differential equations
into a single input-output differential equation.
1. Translational mechanical system
In (2.2) two differential equations were given, describing the dynamics of a translational mechanical system:

m1 x1 (t) = b1 x 1 (t) + b3 (x 2 (t) x 1 (t)) k1 x1 (t) + k2 (x2 (t) x1 (t))
m2 x2 (t) = b2 x 2 (t) + b3 (x 1 (t) x 2 (t)) + k2 (x1 (t) x2 (t)) + fe (t)

34

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


Our aim is to eliminate the variable x1 and to obtain an input-output differential
equation with input variable fe and output variable x2 . Let X1 (s) = L{x1 (t)}, Let
X2 (s) = L{x2 (t)} and Fe (s) = L{fe (t)}. Equation (2.2) can now be written as

s2 m1 X1 (s) = s b1 X1 (s) + s b3 (X2 (s) X1 (s)) k1 X1 (t) + k2 (X2 (s) X1 (s))


(2.12)
s2 m2 X2 (s) = s b2 X2 (s) + s b3 (X1 (s) X2 (s)) + k2 (X1 (t) X2 (t)) + Fe (s)
(2.13)

which gives




2
s m1 + s (b1 + b3 ) + k1 + k2 X1 (s) = s b3 + k2 X2 (s)




s2 m2 + s (b2 + b3 ) + k2 X2 (s) = s b3 + k2 X1 (s) + Fe (s)

(2.14)
(2.15)

From equation (2.14) we obtain:


X1 (s) =

s2

s b3 + k2
X2 (s)
m1 + s (b1 + b3 ) + k1 + k2

Substitution of (2.16) into (2.15) gives us:




2
s m2 +s (b2 + b3 ) + k2 X2 (s)


s b3 + k2
= s b3 + k2 2
X2 (s) + Fe (s)
s m1 + s (b1 + b3 ) + k1 + k2
or



s2 m1 +s (b1 + b3 ) + k1 + k2 s2 m2 + s (b2 + b3 ) + k2 X2 (s)



= s b3 + k2 s b3 + k2 X2 (s)


2
+ s m1 + s (b1 + b3 ) + k1 + k2 Fe (s)

This leads to

s4 m1 m2 + s3 (m1 b3 + m2 b3 + m1 b2 + m2 b1 ) + s2 (m1 k2 + m2 k1 + m2 k2

+ b1 b2 + b1 b3 + b2 b3 ) + s(k1 b3 + k1 b2 + k2 b2 + k2 b1 ) + (k1 k2 ) X2 (s)


= s2 m1 + s (b3 b1 ) + k1 + k2 Fe (s)

(2.16)

(2.17)

(2.18)

(2.19)

We can rewrite this as the input-output differential equation:

d4 x2 (t)
d3 x2 (t)
+
(m
b
+
m
b
+
m
b
+
m
b
)
+ (m1 k2 + m2 k1 + m2 k2
1 3
2 3
1 2
2 1
d t4
d t3
d2 x2 (t)
d x2 (t)
+ b1 b2 + b1 b3 + b2 b3 )
+ (k1 b3 + k1 b2 + k2 b2 + k2 b1 )
+ k1 k2 x2 (t)
2
dt
dt
d2 fe (t)
d fe (t)
= m1
+ (b1 + b3 )
+ (k1 + k2 )fe (t)
(2.20)
2
dt
dt

m1 m2

35

2.3. INPUT-OUTPUT MODELS

2. Electrical system
In (2.3) two differential equations were given, describing the dynamics of an electrical
system:

d i(t)

= v1 (t) v3 (t) R i(t)


L
dt

d v3 (t) = 1 i(t)
dt
C

Our aim is to eliminate the variable v3 and to obtain an input-output differential


equation with input variable i and and output variable v1 . Let V1 (s) = L{v1 (t)},
V3 (s) = L{v3 (t)}, and I(s) = L{i(t)}. Equation (2.3) changes into

s L I(s) = V1 (s) V3 (s) R I(s)


s V (s) =
3

1
C

I(s)

From the first equation we obtain


V3 (s) = s L I(s) + V1 (s) R I(s)
Substitution into the second equation gives us:
s2 L I(s) + s V1 (s) s R I(s) =

1
I(s)
C

or
s V1 (s) = s2 L I(s) + s R I(s) +

1
I(s)
C

We can rewrite this as the input-output differential equation:


d2 i(t)
d i(t)
1
d v1 (t)
=L
+R
+ i(t)
2
dt
dt
dt
C
3. Translational electromechanical system In (2.4) two differential equations were
given, describing the dynamics of a rotational electromechanical system:

d i(t)

L
= e1 (t) e4 (t) R i(t) Kp d dx(t)
t
dt
2

m d x(t) = K i(t) k x(t)

p
d t2

Our aim is to eliminate the variable i and to obtain an input-output differential


equation with input variable fe and output variable x. Let E1 (s) = L{e1 (t)}, Let
E4 (s) = L{e4 (t)}, X(s) = L{x(t)} and I(s) = L{i(t)}. Equation (2.4) changes into
sLI(s) = E1 (s) E4 (s) R I(s) s Kp X(s)
s mX(s) = Kp I(s) k X(s)
2

36

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS


From the second equation we obtain:
I(s) = s2

m
k
X(s)
X(s)
Kp
Kp

Substitution into the first equation gives us:


sL(s2

k
m
k
m
X(s)
X(s)) = E1 (s)E4 (s)+R (s2
X(s)+
X(s))s Kp X(s)
Kp
Kp
Kp
Kp

or
s3 L m X(s) s2 R m X(s) + s (Kp2 L k) X(s) Rk X(s) = Kp (E1 (t) E4 (t))
We can rewrite this as the input-output differential equation:
L m

2.4

d3 x(t)
d2 x(t)
d x(t)
2

R
m
+
(K

L
k)
R k x(t) = Kp (e1 (t) e4 (t))
p
d t3
d t2
dt

State systems

In this section we introduce the notion of state systems. We will define the concept of a
state, and describe the behavior of a system using a state differential equation.
One of the triumphs of Newtons mechanics was the observation that the motion of the
planets could be predicted based on the current positions and velocities of all planets. It
was not necessary to know the past motion. In general, the state of a dynamical system
is a collection of variables that completely characterizes the evolution of a system for the
purpose of predicting the future evolution. For a system of planets the state simply consists
of the positions and the velocities of the planets.
Definition 2.1 The state of a system is a collection of variables that summarize the past
of a system for the purpose of predicting the future.
Example 2.1 Consider the translational mechanical system of Section 2.2. The differential equations were given by (2.2):
m1 p1 (t) = (b1 + b3 ) p 1 (t) + b3 p 2 (t) (k1 + k2 ) p1 (t) + k2 p2 (t)
m2 p2 (t) = b3 p1 (t) (b2 + b3 ) p 2 (t) + k2 p1 (t) k2 p2 (t) + fe (t)
where we use the variable p1 and p2 (instead of x1 and x2 ) for the positions of mass 1 and
2. In the sequel we will use the variable x for the state vector. In mechanical systems the
state consists of the velocities and the positions of each mass in the system. In this case
we have two masses, and therefore two velocities and two positions. This gives us a state
vector

x1 (t)
p 1 (t)
x2 (t) p1 (t)

x(t) =
x3 (t) = p 2 (t)
x4 (t)
p2 (t)

37

2.4. STATE SYSTEMS


For the derivative of x1 we can write:
x 1 (t) = p1 (t)
(b1 + b3 )
b3
(k1 + k2 )
k2
=
p 1 (t) +
p 2 (t)
p1 (t) +
p2 (t)
m1
m1
m1
m1
(b1 + b3 )
b3
(k1 + k2 )
k2
=
x1 (t) +
x3 (t)
x2 (t) +
x4 (t)
m1
m1
m1
m1
The derivative of x2 is straightforward:
x 2 (t) = p 1 (t) = x1 (t)
For the derivative of x3 we can write:
x 3 (t) = p2 (t)
(b2 + b3 )
k2
k2
1
b3
=
p 1 (t)
p 2 (t) +
p1 (t)
p2 (t) +
fe (t)
m2
m2
m2
m2
m2
b3
(b2 + b3 )
k2
k2
1
=
x1 (t)
x3 (t) +
x2 (t)
x4 (t) +
fe (t)
m2
m2
m2
m2
m2
and the derivative of x4 becomes
x 4 (t) = p 2 (t) = x3 (t)
If we define u(t) = fe (t) as the input of the system, we have the equations
(b1 + b3 )
b3
(k1 + k2 )
k2
x1 (t) +
x3 (t)
x2 (t) +
x4 (t)
m1
m1
m1
m1
x 2 (t) = p 1 (t) = x1 (t)
b3
(b2 + b3 )
k2
k2
1
x1 (t)
x3 (t) +
x2 (t)
x4 (t) +
u(t)
x 3 (t) =
m2
m2
m2
m2
m2
x 4 (t) = p 2 (t) = x3 (t)
x 1 (t) =

In matrix notation this becomes:

b3
k2
2)
(b1m+b1 3 ) (k1m+k
m1
m1
1

1
0
0
0

x(t)

=
(b2 +b3 )
b3
k2

m2
mk22
m2
m2
0
0
1
0

x(t) +
1 u(t)

m2
0

where the derivatives x(t)

are taken elementwise, so

d x1 (t)/d t
x 1 (t)
d x2 (t)/d t x 2 (t)

x(t)

=
d x3 (t)/d t = x 3 (t)
d x4 (t)/d t
x 4 (t)

38

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS

By defining two matrices A and B:

b3
k2
2)
(b1m+b1 3 ) (k1m+k
m1
m1
1

1
0
0
0

A=
(b2 +b3 )
b3
k2

m2
mk22
m2
m2
0
0
1
0
we can write the system as

0
0

B=
1
m2
0

x(t)

= A x(t) + B u(t)
If we choose the first position to be the output of the system, so y(t) = p1 (t) = x2 (t), we
obtain the output equation


y(t) = 0 1 0 0 x(t) + 0 u(t)
= C x(t) + D u(t)
where the matrices C and D are defined by


C= 0 1 0 0

D=0

We have seen in this example that state variables were gathered in a vector x Rn , which is
called the state vector. In general every linear time-invariant (LTI) state system (with one
input and one output) can be represented by the system of first-order differential equations
x 1 (t) = a11 x1 (t) + a12 x2 (t) + . . . a1n xn (t) + b1 u(t)
x 2 (t) = a21 x1 (t) + a22 x2 (t) + . . . a2n xn (t) + b2 u(t)
..
..
.
.
x n (t) = an1 x1 (t) + an2 x2 (t) + . . . ann xn (t) + bn u(t)
y(t) = c1 x1 (t) + c2 x2 (t) + . . . cn xn (t) + d u(t)
These equations can

x 1 (t)
x 2 (t)

.. =
.
x 1 (t)

be written in matrix form as


x1 (t)
a11 a12 a1n

a21 a22 a2n


x2 (t)
..
.. .. +
..
. .
.
.
xn (t)
an1 an2 ann

x1 (t)



x2 (t)
y(t) = c1 c2 cn .. + d u(t)
.
xn (t)

b1
b2
..
.
bn

u(t)

2.4. STATE SYSTEMS


which may be summarized as

x(t)
= A x(t) + B u(t) ,
y(t) = C x(t) + D u(t),

39

(2.21)

where A Rnn , B Rn1 , C R1n , and D R are constant and the derivative x(t)

is
taken elementwise, so

d x1 (t)/d t
x 1 (t)
d x2 (t)/d t x 2 (t)

x(t)

=
= ..
..

.
.
d xn (t)/d t
x n (t)
The input signal of the system is represented by u and the output signal by y.

Example 2.2 Consider the electrical system of Section 2.2 where we set v4 = 0. The
differential equations were given by (2.3):
d i(t)
1
R
= (v1 (t) v3 (t)) i(t)
dt
L
L
d v3 (t)
1
= i(t)
dt
C
In general, in electrical systems the state vector will consist of the currents through the
inductors and the voltages over the capacitors. In this particular case we have one inductor
and one capacitor. This gives us a state-vector

 

i(t)
x1 (t)
x(t) =
=
v3 (t)
x2 (t)
If we define u(t) = v1 (t) as the input of the system, we can write the equations
R
1
1
d i(t)
= x1 (t) x2 (t) + u(t)
dt
L
L
L
d v3 (t)
1
x 2 (t) =
= x1 (t)
dt
C
x 1 (t) =

or in the matrix notation:

1
R
1

L
L
x(t) + L u(t)
x(t)

=
1

0
0
C

If we choose the current as the output of the system, so y(t) = i(t), we obtain the output
equation


y(t) = 1 0 x(t) + 0 u(t)

40

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS

This means that the matrices A, B, C and D for this example become

1
1
R

L
L

A=
B= L
1

0
0
C 

C= 1 0
D=0

Table 2.2 indicates how to choose the states during the modeling phase for various physical
domains.
Physical domain
Translational mechanical system
Rotational mechanical system
Electrical system
Translational electromechanical system

Rotational electromechanical system

Fluid flow system


Heat flow system

States
velocity of each mass
position of each mass
angular velocity of each inertia
angular position of each inertia
voltage over each capacitor
current through each inductor
velocity of each mass
position of each mass
voltage over each capacitor
current through each inductor
angular velocity of each inertia
angular position of each inertia
voltage over each capacitor
current through each inductor
fluid pressure at the bottom of each vessel
(or fluid level of each vessel)
temperature in each room

Table 2.2: How to choose the states during modeling

2.5

Exercises

Exercise 1. Modeling of a linear mechanical system


The system in Figure 2.17 a tractor with mass m1 is pulling, via a linear spring with spring
constant k, a trailer with mass m2 . The position of the tractor is x1 , the position of the
trailer is x2 . The surface friction of the tractor is fc,1 = c1 x 1 and the surface friction of the
trailer is fc,2 = c2 x 2 . The motor of the tractor produces a force ft .
Now perform the following tasks:
1. Give the differential equations of this system.

41

2.5. EXERCISES

m1

m2

x2

c2

ft-

c1
x1

Figure 2.17: Tractor with trailer


2. Determine the input-output differential equation with input ft (t) and output x2 (t)
using the result of answer 1.
3. Describe the system as a state system.
Exercise 2. Modeling of a linear electrical system
v1

i1 -

i2

i3

v2
e

?
r

Figure 2.18: LC-network


An electrical system (see Figure 2.18) consists of a linear capacitor and a linear inductor.
Assume v2 = 0.
Now perform the following tasks:
1. Give the differential equations of this system.
2. Determine the input-output differential equation with input i1 (t) and output v1 (t)
using the result of answer 1.
3. Describe the system as a state system.

42

CHAPTER 2. MODELING OF DYNAMICAL SYSTEMS

Chapter 3
Analysis of first-order and
second-order systems
3.1

First-order systems

First-order systems can be described by a state system with only one state or by a single
first-order differential equation. For example, braking of a car, the discharge of an electronic camera flash, the flow in a fluid vessel, and the cooling of a cup of tea may all be
approximated by a first-order differential equation which may be written in a standard
form as
y(t)
+ y(t) = f (t)

(3.1)

where the system is defined by the single parameter = 1/ , where is the system time
constant, and f (t) = b0 u(t)
+ b1 u(t) is a forcing function, which depends on the input u
and the parameters b0 , b1 R.
Example 3.1 (First-order systems)
Consider the RC-circuit of Figure 3.1.a with a resistor with resistance R and a capacitor
with capacitance C. The relation between the voltage v(t) = v1 (t) v2 (t) and the current
i(t) is given by the first-order differential equation
v(t)
+

1
1
v(t) = i(t) .
RC
C

So with i(t) the input and v(t) the output of the system, we find that the time constant for
this system is = RC and the forcing function is f (t) = C1 i(t).
For the water vessel in Figure 3.1.b with area A, and fluid resistance R we find that
the relation between the inflow win (t) and level h(t) is given by the first-order differential
equation
1
g

h(t)
+
h(t) =
win (t) .
AR
A
43

44 CHAPTER 3. ANALYSIS OF FIRST-ORDER AND SECOND-ORDER SYSTEMS


v1

i-

win
r

J
A

(a) RC network

wout

v2
e

(b) Water vessel

(c) Rotational mechanical system

Figure 3.1: Examples of first-order systems

Let win (t) be the input and h(t) be the output of the system, then we find that the time
1
win (t).
constant for this system is = A R/g and the forcing function is f (t) = A
Finally, the rotational mechanical system in Figure 3.1.c with inertia J, and damping
constant b we find the relation between the external torque T (t) and rotational velocity (t)
is given by the first-order differential equation
b
1
(t)

+ (t) = T (t) .
J
J
So with T (t) the input and (t) the output of the system, we find that the time constant
for this system is = J/b and the forcing function is f (t) = J1 T (t).
First-order state space systems can be described by the equations

x(t)
= a x(t) + b u(t) ,
y(t) = c x(t) + d u(t),

(3.2)

where the state x(t) R is a scalar signal and the system matrices a, b, c and d are scalar
constants. If we rewrite the second equation as
1
d
x(t) = y(t) u(t)
c
c
we can derive
d
1
x(t)

= y(t)
u(t)

c
c
and we can substitute these expressions for x(t) and x(t)

into the first equation. This


yields
1
d
a
ad
y(t)
u(t)

= y(t)
u(t) + b u(t) ,
c
c
c
c
and so we obtain the first-order equation:
y(t)
a y(t) = d u(t)
+ (bc ad) u(t) ,

and we see that the time constant for the first-order state system (3.2) is = 1/a and
the forcing function is f (t) = d u(t)
+ (bc ad) u(t).

3.1. FIRST-ORDER SYSTEMS

45

Unit step response


In practical situations we often encounter the unit step function as a forcing function, so

0 for t < 0
f (t) = us (t) =
1 for t 0
We will compute the step response y(s) = ys (t) for f (t) = us (t), when the system (3.1) is
initially at rest, so y(0) = 0.
We first solve the homogeneous differential equation
y h (t) + yh (t) = 0
The homogeneous solution of this first-order system is given by:
yh (t) = et

(3.3)

Note that if yh (t) is a solution, then also yh (t) = et for any R is a solution.
We now compute a particular solution, which means we find a solution for (3.1) without
regarding the initial value y(0). Observing equation (3.1) for f (t) = 1 for t 0, we find
that yp (t) = 1/ for t 0 satisfies the equation.
For the final solution we combine the homogeneous solution and particular solution y(t) =
yp (t) + yh (t) = 1/ + et and compute by solving y(0) = 0, so = 1/. The unit
step response for a first-order system is given by ys (t) = 1/(1 et ) for t 0. This can
be rewritten as:
ys (t) =

1
(1 et )us (t)

(3.4)

The unit step response is given in Figure 3.2. We see that the response asymptotically
1/
6

ys
t
Figure 3.2: Unit step response for a first-order system

approach the steady-state value.

Unit impulse response


The derivative property (Property 1, Section 1.2) for linear time-invariant systems tells us
that if we take the derivative of the input as a new input, then the derivative of the output

46 CHAPTER 3. ANALYSIS OF FIRST-ORDER AND SECOND-ORDER SYSTEMS


will be the new output. We know there holds
(t) =

d us (t)
dt

and thus the impulse response y (t), resulting from an input f (t) = (t) is given by
y (t) =

d ys(t)
= et , t 0
dt

(3.5)

The impulse response is given in Figure 3.3. We see that the response starts in y (0) = 1,
and then decays asymptotically towards zero.
1
6

y
t
Figure 3.3: Impulse response for a first-order system

Stability of a first-order system


Let us consider the homogeneous solution for a first-order system and see how the system
responds to some initial energy stored within it in the absence of an input signal (so the
particular solution yp = 0). Then from (3.3) we see that the response will always have the
form
yh (t) = et
Typical responses for yh (t) are given in Figure 3.4. If yh (t) decays to zero as t approaches
infinity, the system is said to be stable (see Figure 3.4.a). If on the other hand, yh (t)
increases without limit as t becomes large, the system is unstable (see Figure 3.4.b). A
first-order system is stable if > 0 and unstable if < 0. If the magnitude of is equal
to zero, yh (t) becomes constant as shown in Figure 3.4.c. Such a system is said to be
marginally stable.
Example 3.2 Consider the RC-circuit of Figure 3.1.a with R = 2 and C = 1/4 F, and
let the input current follow a step function, so i(t) = us (t) A. The differential equation is
given by
v(t)
+ 2 v(t) = 4 i(t)

47

3.2. SECOND-ORDER SYSTEMS

>0

(a)

<0

(b)

=0

(c)

Figure 3.4: Stability of first-order systems

and so the forcing function is $f(t) = 4\,i(t) = 4\,u_s(t)$. From Equation (3.4) we know that for a first-order system the unit step response is given by
$$y_s(t) = \frac{1}{a}(1-e^{-a t})\,u_s(t) = 0.5\,(1-e^{-2t})\,u_s(t)$$
In our case we have $f(t) = 4\,u_s(t)$ and so the step response of this RC-circuit is given by
$$v(t) = 4\,y_s(t) = 2\,(1-e^{-2t})\,u_s(t)\ \mathrm{V}$$
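As an aside (not part of the original notes), the closed-form step response of this RC-circuit is easy to check numerically. The sketch below integrates $\dot v + 2v = 4\,u_s(t)$ with a simple forward-Euler scheme and compares the result with $2(1-e^{-2t})$; the step size and horizon are arbitrary choices.

```python
import numpy as np

# RC circuit of Example 3.2: dv/dt + 2 v = 4 i(t), with i(t) = u_s(t)
a, f0 = 2.0, 4.0            # coefficient a and height of the forcing function
dt, T = 1e-4, 5.0           # integration step and horizon (arbitrary choices)
t = np.arange(0.0, T, dt)

v = np.zeros_like(t)        # v(0) = 0, the system is initially at rest
for k in range(len(t) - 1):
    v[k + 1] = v[k] + dt * (f0 - a * v[k])      # forward Euler on dv/dt = f - a v

v_exact = (f0 / a) * (1.0 - np.exp(-a * t))     # 2 (1 - e^{-2t})
print("max error:", np.max(np.abs(v - v_exact)))   # small, of order dt
```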

3.2  Second-order systems

Second-order systems can be described by a state system with two states or by a single second-order differential equation. Physical second-order systems contain two independent elements in which energy can be stored. For example, in mechanical systems the kinetic energy in a mass can be exchanged with the potential energy of a spring, and in electrical systems energy can be exchanged between capacitors and inductors. Second-order systems can be written in a standard form as
$$\ddot y(t) + a_1\,\dot y(t) + a_2\,y(t) = f(t) \qquad (3.6)$$
where the system is defined by the parameters $a_1$ and $a_2$ and $f(t)$ is the forcing function. Often the forcing function has the form $f(t) = b_0\ddot u(t) + b_1\dot u(t) + b_2 u(t)$ and so it depends on the input $u(t)$ and the parameters $b_0, b_1, b_2 \in \mathbb{R}$.
Example 3.3 (Second-order systems)
Consider the RLC-circuit of Figure 3.5.a with inductance L, capacitance C and resistance value R. The relation between the voltage $v(t) = v_1(t) - v_2(t)$ and the current $i(t)$ is given by the second-order differential equation
$$\frac{d^2 v(t)}{dt^2} + \frac{1}{RC}\frac{d v(t)}{dt} + \frac{1}{LC}v(t) = \frac{1}{C}\frac{d i(t)}{dt}$$

[Figure 3.5: Examples of second-order systems — (a) RLC network with input current i and node voltages v1 and v2, (b) simple pendulum with mass m, gravity force mg and external force F, (c) translational mechanical system with mass m, damper b, spring and external force fe.]

so with $i(t)$ as the input and $v(t)$ as the output of the system, we find that the parameters for this system are $a_1 = 1/RC$ and $a_2 = 1/LC$, and the forcing function is $f(t) = \frac{1}{C}\frac{d i(t)}{dt}$.
For the simple pendulum in Figure 3.5.b with mass m and length $\ell$ we find that the relation between the angle $\theta$ and an external force F is given by the second-order differential equation
$$\ddot\theta(t) + \frac{g}{\ell}\,\theta(t) = \frac{1}{m\ell}\,F(t)$$
where we used the approximations $\sin(\theta) \approx \theta$ and $\cos(\theta) \approx 1$ for small $\theta$. With $F(t)$ as the input and $\theta(t)$ as the output of the system, we find that the parameters for this system are $a_1 = 0$ and $a_2 = g/\ell$, and the forcing function is $f(t) = \frac{1}{m\ell}F(t)$.
Finally, for the translational mechanical system in Figure 3.5.c with mass m, spring constant k and damping b, we find that the relation between the position y and an external force $f_e$ is given by the second-order differential equation
$$m\,\ddot y(t) + b\,\dot y(t) + k\,y(t) = f_e(t)$$
With $f_e(t)$ as the input and $y(t)$ as the output of the system, we find that the parameters for this system are $a_1 = b/m$ and $a_2 = k/m$, and the forcing function is $f(t) = f_e(t)/m$.

Second-order single-input single-output state space systems can be described by the equations
$$\begin{bmatrix}\dot x_1(t)\\ \dot x_2(t)\end{bmatrix} = \begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} + \begin{bmatrix}b_1\\ b_2\end{bmatrix}u(t),\qquad y(t) = \begin{bmatrix}c_1 & c_2\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} + d\,u(t), \qquad (3.7)$$
or
$$\dot x_1(t) = a_{11}x_1(t) + a_{12}x_2(t) + b_1u(t),\qquad \dot x_2(t) = a_{21}x_1(t) + a_{22}x_2(t) + b_2u(t),\qquad y(t) = c_1x_1(t) + c_2x_2(t) + d\,u(t), \qquad (3.8)$$
where the state $x(t) \in \mathbb{R}^2$ is a vector with two entries and the system matrices are $A \in \mathbb{R}^{2\times2}$, $B \in \mathbb{R}^{2\times1}$, $C \in \mathbb{R}^{1\times2}$ and $D \in \mathbb{R}$.
We will use the Laplace transformation to find the input-output description of this state system. Define $X_1(s) = \mathcal L\{x_1(t)\}$, $X_2(s) = \mathcal L\{x_2(t)\}$, $Y(s) = \mathcal L\{y(t)\}$ and $U(s) = \mathcal L\{u(t)\}$. Now (3.8) can be rewritten as
$$s X_1(s) = a_{11}X_1(s) + a_{12}X_2(s) + b_1U(s),\qquad s X_2(s) = a_{21}X_1(s) + a_{22}X_2(s) + b_2U(s),\qquad Y(s) = c_1X_1(s) + c_2X_2(s) + d\,U(s). \qquad (3.9)$$
In matrix form this becomes
$$sI\begin{bmatrix}X_1(s)\\ X_2(s)\end{bmatrix} = \begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}\begin{bmatrix}X_1(s)\\ X_2(s)\end{bmatrix} + \begin{bmatrix}b_1\\ b_2\end{bmatrix}U(s),\qquad Y(s) = \begin{bmatrix}c_1 & c_2\end{bmatrix}\begin{bmatrix}X_1(s)\\ X_2(s)\end{bmatrix} + d\,U(s).$$
The first equation now becomes
$$\Bigl(sI - \begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}\Bigr)\begin{bmatrix}X_1(s)\\ X_2(s)\end{bmatrix} = \begin{bmatrix}b_1\\ b_2\end{bmatrix}U(s)$$
and so
$$\begin{bmatrix}X_1(s)\\ X_2(s)\end{bmatrix} = \Bigl(sI - \begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}\Bigr)^{-1}\begin{bmatrix}b_1\\ b_2\end{bmatrix}U(s).$$
With
$$\Bigl(sI - \begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}\Bigr)^{-1} = \begin{bmatrix}s-a_{11} & -a_{12}\\ -a_{21} & s-a_{22}\end{bmatrix}^{-1} = \frac{1}{(s-a_{11})(s-a_{22}) - a_{12}a_{21}}\begin{bmatrix}s-a_{22} & a_{12}\\ a_{21} & s-a_{11}\end{bmatrix}$$
we derive
$$\begin{bmatrix}X_1(s)\\ X_2(s)\end{bmatrix} = \frac{1}{(s-a_{11})(s-a_{22}) - a_{12}a_{21}}\begin{bmatrix}s-a_{22} & a_{12}\\ a_{21} & s-a_{11}\end{bmatrix}\begin{bmatrix}b_1\\ b_2\end{bmatrix}U(s) = \frac{1}{s^2 + s\,a_1 + a_2}\begin{bmatrix}s\,b_1 - a_{22}b_1 + a_{12}b_2\\ s\,b_2 + a_{21}b_1 - a_{11}b_2\end{bmatrix}U(s)$$
with $a_1 = -(a_{11}+a_{22})$ and $a_2 = a_{11}a_{22} - a_{12}a_{21}$. Now it follows
$$Y(s) = \begin{bmatrix}c_1 & c_2\end{bmatrix}\begin{bmatrix}X_1(s)\\ X_2(s)\end{bmatrix} + d\,U(s) = \frac{s(b_1c_1+b_2c_2) + (a_{12}b_2c_1 + a_{21}b_1c_2 - a_{22}b_1c_1 - a_{11}b_2c_2)}{s^2 + s\,a_1 + a_2}U(s) + d\,U(s) = \frac{s^2\bar b_0 + s\,\bar b_1 + \bar b_2}{s^2 + s\,a_1 + a_2}U(s)$$
with $\bar b_0 = d$, $\bar b_1 = b_1c_1 + b_2c_2 - a_{11}d - a_{22}d$, and $\bar b_2 = a_{12}b_2c_1 + a_{21}b_1c_2 - a_{22}b_1c_1 - a_{11}b_2c_2 + a_{11}a_{22}d - a_{12}a_{21}d$. We obtain
$$(s^2 + s\,a_1 + a_2)\,Y(s) = (s^2\bar b_0 + s\,\bar b_1 + \bar b_2)\,U(s)$$
This leads to the differential equation
$$\ddot y(t) + a_1\dot y(t) + a_2 y(t) = \bar b_0\ddot u(t) + \bar b_1\dot u(t) + \bar b_2 u(t)$$
and so the forcing function is $f(t) = \bar b_0\ddot u(t) + \bar b_1\dot u(t) + \bar b_2 u(t)$.

Unit step response

Consider the second-order system
$$\ddot y(t) + a_1\dot y(t) + a_2 y(t) = f(t) \qquad (3.10)$$
If $a_1 \geq 0$ and $a_2 \geq 0$ we can define
$$\omega_n = \sqrt{a_2},\qquad \zeta = \frac{a_1}{2\omega_n}$$
where $\zeta$ is called the damping ratio and $\omega_n$ is the undamped natural frequency. We obtain a new description of the second-order system:
$$\ddot y(t) + 2\zeta\omega_n\dot y(t) + \omega_n^2 y(t) = f(t)$$
We will now study the unit step response for system (3.10) when the forcing function is a unit step function ($f(t) = u_s(t)$) and the system is initially at rest, so $y(0) = 0$ and $\dot y(0) = 0$.
We first solve the homogeneous equation
$$\ddot y_h(t) + 2\zeta\omega_n\dot y_h(t) + \omega_n^2 y_h(t) = 0$$
Laplace transformation gives us
$$(s^2 + 2\zeta\omega_n s + \omega_n^2)\,Y_h(s) = 0$$
or
$$(s-\lambda_1)(s-\lambda_2)\,Y_h(s) = 0 \qquad (3.11)$$
where $\lambda_1 = \omega_n(-\zeta + \sqrt{\zeta^2-1}) \in \mathbb{C}$ and $\lambda_2 = \omega_n(-\zeta - \sqrt{\zeta^2-1}) \in \mathbb{C}$ are the roots of this equation. Note that $\lambda_1$ and $\lambda_2$ are distinct if $\zeta \neq 1$, and $\lambda_1 = \lambda_2$ for $\zeta = 1$. We will look at both cases.
The case ($\zeta \neq 1$): Let us first consider the case of $\zeta \neq 1$. For $\zeta \neq 1$ we find $\lambda_1 \neq \lambda_2$. The homogeneous equation now becomes
$$(s-\lambda_1)(s-\lambda_2)\,Y_h(s) = 0$$
This means that either $(s-\lambda_1)Y_h(s) = 0$ or $(s-\lambda_2)Y_h(s) = 0$. Note that this is equivalent to computing the homogeneous solution of the first-order systems
$$\dot y_{h1}(t) - \lambda_1 y_{h1}(t) = 0 \qquad\text{and}\qquad \dot y_{h2}(t) - \lambda_2 y_{h2}(t) = 0$$
The solution for the first equation is $y_{h1}(t) = \alpha_1 e^{\lambda_1 t}$, and for the second equation $y_{h2}(t) = \alpha_2 e^{\lambda_2 t}$. This means that the overall homogeneous solution for the second-order differential equation is given by
$$y_h(t) = y_{h1}(t) + y_{h2}(t) = \alpha_1 e^{\lambda_1 t} + \alpha_2 e^{\lambda_2 t} \qquad (3.12)$$
We now compute a particular solution, which means we find a solution for (3.10) without regarding the initial value $y(0)$. Observing equation (3.10) for $f(t) = 1$ for $t \geq 0$, we find that $y_p(t) = 1/\omega_n^2$ for $t \geq 0$ satisfies the equation.
Finally we combine the homogeneous solution and the particular solution, $y(t) = 1/\omega_n^2 + \alpha_1 e^{\lambda_1 t} + \alpha_2 e^{\lambda_2 t}$, and we compute $\alpha_1$ and $\alpha_2$ by solving $y(0) = 0$ and $\dot y(0) = 0$. With
$$y(0) = 1/\omega_n^2 + \alpha_1 + \alpha_2 = 0$$
and with $\dot y(t) = \alpha_1\lambda_1 e^{\lambda_1 t} + \alpha_2\lambda_2 e^{\lambda_2 t}$, resulting in
$$\dot y(0) = \alpha_1\lambda_1 + \alpha_2\lambda_2 = 0,$$
we find
$$\alpha_1 = \frac{\lambda_2}{\omega_n^2(\lambda_1-\lambda_2)},\qquad \alpha_2 = \frac{\lambda_1}{\omega_n^2(\lambda_2-\lambda_1)}$$
This gives us the unit step response for a second-order system:
$$y_s(t) = \frac{1}{\omega_n^2}\Bigl(1 + \frac{\lambda_2}{\lambda_1-\lambda_2}e^{\lambda_1 t} + \frac{\lambda_1}{\lambda_2-\lambda_1}e^{\lambda_2 t}\Bigr)u_s(t),\quad\text{for } t \geq 0 \qquad (3.13)$$

[Figure 3.6: Step response of a second-order system for ω_n = 1 and different values of ζ = 0, 0.1, 0.3, 0.707, 1.0, 2.0 and 5.0.]

Depending on $\zeta$, the system is either overdamped, underdamped, or critically damped:

Overdamped system $\zeta > 1$:
If $\zeta > 1$ the system is called overdamped, and the parameters $\lambda_1$ and $\lambda_2$ are both real numbers (because $\zeta^2 - 1 > 0$) and so the step response is given by (3.13).

Underdamped system $0 \leq \zeta < 1$:
If $0 \leq \zeta < 1$ the system is called underdamped, and the parameters $\lambda_1$ and $\lambda_2$ are both complex numbers (because $\zeta^2 - 1 < 0$):
$$\lambda_1 = \omega_n(-\zeta + j\sqrt{1-\zeta^2}) \qquad\text{and}\qquad \lambda_2 = \omega_n(-\zeta - j\sqrt{1-\zeta^2})$$
If we define $\sigma = \zeta\omega_n$ and $\omega_d = \omega_n\sqrt{1-\zeta^2}$, then we obtain
$$\lambda_1 = -\sigma + j\omega_d,\qquad \lambda_2 = -\sigma - j\omega_d$$
We find
$$\alpha_1 = \frac{\lambda_2}{\omega_n^2(\lambda_1-\lambda_2)} = \frac{1}{\omega_n^2}\Bigl(-\frac12 + j\,\frac{\zeta}{2\sqrt{1-\zeta^2}}\Bigr),\qquad \alpha_2 = \frac{\lambda_1}{\omega_n^2(\lambda_2-\lambda_1)} = \frac{1}{\omega_n^2}\Bigl(-\frac12 - j\,\frac{\zeta}{2\sqrt{1-\zeta^2}}\Bigr)$$
With $e^{\lambda_1 t} = e^{-\sigma t + j\omega_d t} = e^{-\sigma t}(\cos\omega_d t + j\sin\omega_d t)$ and $e^{\lambda_2 t} = e^{-\sigma t - j\omega_d t} = e^{-\sigma t}(\cos\omega_d t - j\sin\omega_d t)$, this leads to the unit step response for the case $\zeta < 1$:
$$y_s(t) = \frac{1}{\omega_n^2}\Bigl(1 - e^{-\sigma t}\cos\omega_d t - \frac{\zeta}{\sqrt{1-\zeta^2}}\,e^{-\sigma t}\sin\omega_d t\Bigr)u_s(t),\quad\text{for } t \geq 0 \qquad (3.14)$$

Critically damped system $\zeta = 1$:
If $\zeta = 1$ the system is called critically damped. We find $\sigma = -\lambda_1 = -\lambda_2 = \omega_n$. The homogeneous solutions $y_{h1} = y_{h2} = e^{-\sigma t}$ become equal and we need an additional homogeneous solution $y_h = t\,e^{-\sigma t}$ (see Chapter 4). The complete solution is now given by $y(t) = 1/\omega_n^2 + \alpha_1 e^{-\sigma t} + \alpha_2\,t\,e^{-\sigma t}$. We compute $\alpha_1$ and $\alpha_2$ by solving $y(0) = 0$ and $\dot y(0) = 0$, and we obtain
$$\alpha_1 = -\frac{1}{\omega_n^2},\qquad \alpha_2 = -\frac{\sigma}{\omega_n^2} = -\frac{1}{\omega_n}$$
This leads to the following unit step response for the case $\zeta = 1$:
$$y_s(t) = \frac{1}{\omega_n^2}\Bigl(1 - e^{-\omega_n t} - \omega_n t\,e^{-\omega_n t}\Bigr)u_s(t),\quad\text{for } t \geq 0 \qquad (3.15)$$
In Figure 3.6 the response for different values of $\zeta$ is given. After a dynamical phase the output will asymptotically approach the final value (except for $\zeta = 0$). In the underdamped case the output will oscillate, whereas the output for the overdamped case will not.
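As an illustration (not part of the original notes), the short script below evaluates the step-response formulas (3.13)–(3.15) for ω_n = 1 and a few damping ratios, reproducing the qualitative picture of Figure 3.6; all values settle towards 1/ω_n² = 1.

```python
import numpy as np

def step_response(zeta, wn, t):
    """Unit step response of y'' + 2*zeta*wn*y' + wn^2*y = u_s(t), from (3.13)-(3.15)."""
    if zeta == 1.0:                                    # critically damped, (3.15)
        return (1 - np.exp(-wn*t) - wn*t*np.exp(-wn*t)) / wn**2
    if zeta < 1.0:                                     # underdamped, (3.14)
        s, wd = zeta*wn, wn*np.sqrt(1 - zeta**2)
        return (1 - np.exp(-s*t)*(np.cos(wd*t) + zeta/np.sqrt(1-zeta**2)*np.sin(wd*t))) / wn**2
    l1 = wn*(-zeta + np.sqrt(zeta**2 - 1))             # overdamped, (3.13)
    l2 = wn*(-zeta - np.sqrt(zeta**2 - 1))
    return (1 + l2/(l1-l2)*np.exp(l1*t) + l1/(l2-l1)*np.exp(l2*t)) / wn**2

t = np.linspace(0.0, 12.0, 600)
for zeta in [0.1, 0.3, 0.707, 1.0, 2.0]:
    y = step_response(zeta, 1.0, t)
    print(f"zeta = {zeta:5.3f}   y(12) = {y[-1]:.4f}")
```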

Unit impulse response

The derivative property for linear time-invariant systems of Section 1.2 tells us that if we take the derivative of the input as a new input, then the derivative of the output will be the new output. We know there holds
$$\delta(t) = \frac{d\,u_s(t)}{dt}$$
and thus the impulse response $y_\delta(t)$, resulting from an input $f(t) = \delta(t)$, is given by
$$y_\delta(t) = \frac{d\,y_s(t)}{dt},\quad t \geq 0$$
For the overdamped case ($\zeta > 1$) this results in
$$y_\delta(t) = \frac{1}{\omega_n^2}\Bigl(\frac{\lambda_1\lambda_2}{\lambda_1-\lambda_2}e^{\lambda_1 t} + \frac{\lambda_1\lambda_2}{\lambda_2-\lambda_1}e^{\lambda_2 t}\Bigr) = \frac{1}{\lambda_1-\lambda_2}\bigl(e^{\lambda_1 t} - e^{\lambda_2 t}\bigr),\quad\text{for } t \geq 0 \qquad (3.16)$$
For the critically damped case ($\zeta = 1$) this results in
$$y_\delta(t) = t\,e^{-\omega_n t},\quad\text{for } t \geq 0 \qquad (3.17)$$
For the underdamped case ($0 \leq \zeta < 1$) this results in
$$y_\delta(t) = \frac{1}{\omega_d}\,e^{-\sigma t}\sin\omega_d t,\quad\text{for } t \geq 0 \qquad (3.18)$$
The impulse response is given in Figure 3.7.

[Figure 3.7: Impulse response of a second-order system for ω_n = 1 and different values of ζ = 0, 0.1, 0.3, 0.707, 1.0, 2.0 and 5.0.]

Damping ratio, (un)damped natural frequency

In the previous paragraph we studied the underdamped second-order system
$$\ddot y(t) + 2\zeta\omega_n\dot y(t) + \omega_n^2 y(t) = f(t)$$
with $0 \leq \zeta < 1$. In this paragraph we will take a closer look at the influence of the damping ratio $\zeta$, the undamped natural frequency $\omega_n$, the negative real part of the pole, denoted by $\sigma = \zeta\omega_n$, and the damped natural frequency $\omega_d = \omega_n\sqrt{1-\zeta^2}$ on the behavior of a second-order system. The relation between $\zeta$, $\omega_n$, $\sigma$, and $\omega_d$ is illustrated in Figure 3.8.
The desired behavior of a system is often expressed in specifications on the output response to a step signal $f(t) = \omega_n^2\,u_s(t)$. The response to this input is given by:
$$y_s(t) = 1 - e^{-\sigma t}\Bigl(\cos\omega_d t + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d t\Bigr) \qquad (3.19)$$
Interesting criteria in this response are the overshoot, the peak time, the rise time, and the settling time of the response.
The settling time $t_s$ is the time it takes for the output to enter and remain within a ±1% band centered around its steady-state value.
The rise time $t_r$ is the time required for the output to rise from 10% to 90% of the steady-state value.
The peak time $t_p$ is the time it takes for the response to reach its maximum value.
The overshoot $M_p$ is the maximum value of the response minus the steady-state value of the response, divided by the steady-state value of the response.

[Figure 3.8: The relation between ζ, ω_n, ω_d, and σ: the pole lies at distance ω_n from the origin, at angle θ = arccos(ζ) from the negative real axis, with real part −σ and imaginary part ω_d.]

In Figure 3.9 the criteria are visualized.

[Figure 3.9: The step response criteria of a second-order system: rise time t_r (from 10% to 90%), peak time t_p, overshoot M_p, and settling time t_s (±1% band around the final value).]

1. Settling time of the output signal for a unit step reference signal:
The settling time is defined as the time the output signal $y(t)$ needs to settle within 1% of its final value. When we consider the response of (3.19) we find that the output signal $y(t)$ will be in the 1% band if
$$e^{-\sigma t} \leq 0.01$$
The settling time is therefore
$$t_s = \frac{-\ln(0.01)}{\sigma} \approx \frac{4.6}{\sigma}$$
This means that $t_s$ is inversely proportional to $\sigma$. Increasing $\sigma$ will decrease the settling time $t_s$, and vice versa.

2. Rise time:
The rise time is the time required for the output to rise from 10% to 90% of the steady-state value. If we substitute $\tau = \omega_n t$ into (3.19) we find:
$$y_s(\tau) = 1 - e^{-\zeta\tau}\Bigl(\cos(\tau\sqrt{1-\zeta^2}) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\tau\sqrt{1-\zeta^2})\Bigr)$$
We can now compute functions $\tau_1(\zeta)$ and $\tau_2(\zeta)$, such that for any choice of $\zeta$ we find
$$y_s(\tau_1(\zeta)) = 0.1,\qquad y_s(\tau_2(\zeta)) = 0.9$$
Define the function $\tau_r(\zeta) = \tau_2(\zeta) - \tau_1(\zeta)$; then the rise time $t_r$ is given by:
$$t_r = \frac{\tau_r(\zeta)}{\omega_n}$$
In Figure 3.10 the function $\tau_r(\zeta)$ is given for $0 \leq \zeta < 1$.

[Figure 3.10: The function τ_r(ζ) for 0 ≤ ζ < 1; it increases from about 1 to about 3, and is often approximated by the average value 1.8.]

Often the function $\tau_r(\zeta)$ is approximated by the average 1.8, and then the rise time is given by
$$t_r \approx \frac{1.8}{\omega_n}$$
Note that $t_r$ is inversely proportional to $\omega_n$. Increasing $\omega_n$ will decrease the rise time $t_r$, and vice versa.

3. Peak time:
To compute the peak time we set the derivative of the response from (3.19) to zero:
$$\dot y_s(t) = \frac{\omega_n}{\sqrt{1-\zeta^2}}\,e^{-\sigma t}\sin\omega_d t$$
We find that $\dot y_s(t) = 0$ when $\sin\omega_d t = 0$ and so $\omega_d t = k\pi$, $k \in \mathbb{Z}$. The first peak in the response will occur for $k = 1$, and so
$$t_p = \frac{\pi}{\omega_d}$$
This means that $t_p$ is inversely proportional to $\omega_d$. Increasing $\omega_d$ will decrease the peak time $t_p$, and vice versa.

4. Overshoot:
The peak value can be found by substitution of $t = t_p = \pi/\omega_d$ into (3.19):
$$y_s(t_p) = 1 + e^{-\sigma\pi/\omega_d}$$
and so with $\lim_{t\to\infty} y_s(t) = 1$ the overshoot is given by
$$M_p = \frac{y_s(t_p) - y_s(\infty)}{y_s(\infty)} = e^{-\pi\sigma/\omega_d} = e^{-\pi\zeta/\sqrt{1-\zeta^2}}$$
We find that the overshoot only depends on the damping ratio $\zeta$. In Figure 3.11 the relation between $M_p$ and $\zeta$ is visualized.

[Figure 3.11: The overshoot M_p as a function of the damping ratio ζ, decreasing from 1 at ζ = 0 towards 0 as ζ approaches 1.]
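The four rules of thumb above are easy to collect in a small helper. The sketch below (an illustration, not part of the original notes) evaluates them for an arbitrary underdamped example with ζ = 0.5 and ω_n = 2 rad/s.

```python
import numpy as np

def step_specs(zeta, wn):
    """Approximate settling time, rise time, peak time and overshoot of an
    underdamped (0 < zeta < 1) second-order system, using the rules above."""
    sigma = zeta * wn
    wd = wn * np.sqrt(1.0 - zeta**2)
    ts = -np.log(0.01) / sigma                             # 1% settling time, ~ 4.6/sigma
    tr = 1.8 / wn                                          # rise time with tau_r(zeta) ~ 1.8
    tp = np.pi / wd                                        # peak time
    Mp = np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta**2))    # overshoot
    return ts, tr, tp, Mp

ts, tr, tp, Mp = step_specs(0.5, 2.0)
print(f"ts = {ts:.2f} s, tr = {tr:.2f} s, tp = {tp:.2f} s, Mp = {100*Mp:.1f} %")
```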

[Figure 3.12: Response for various pole locations in the complex plane.]

Stability of a second-order system

For second-order systems the roots $\lambda_1, \lambda_2$ of the homogeneous equation can be either real-valued or complex. These roots are often called the poles of the system (see also Section 4.1).
For an overdamped system ($\zeta > 1$) both poles $\lambda_1, \lambda_2$ are real and the homogeneous solution is the sum of two exponentials:
$$y_h(t) = \alpha_1 e^{\lambda_1 t} + \alpha_2 e^{\lambda_2 t}$$
Similar to the first-order case, $y_h(t)$ will approach zero for $t \to \infty$ if both poles are negative, and so an overdamped second-order system is stable if $\lambda_1 < 0$ and $\lambda_2 < 0$ and unstable if either $\lambda_1 > 0$ or $\lambda_2 > 0$ (or both).
For an underdamped system ($\zeta < 1$) the poles $\lambda_1, \lambda_2$ are complex conjugates and the homogeneous solution is a sinusoidal function
$$y_h(t) = \alpha_1 e^{-\sigma t}\cos\omega_d t + \alpha_2 e^{-\sigma t}\sin\omega_d t$$
Note that $\cos\omega_d t$ and $\sin\omega_d t$ are both bounded in absolute value by one. In order to have $y_h(t)$ decaying to zero as t approaches infinity, $e^{-\sigma t}$ has to decay to zero, which is the case for $\sigma > 0$. This means that an underdamped second-order system is stable if $\sigma > 0$ and unstable if $\sigma < 0$. Note that $-\sigma = \mathrm{Re}(\lambda_1) = \mathrm{Re}(\lambda_2)$ and so the underdamped second-order system is stable if $\mathrm{Re}(\lambda_i) < 0$.
For a critically damped system ($\zeta = 1$) both poles are equal ($\lambda_1 = \lambda_2 = \lambda$) and real-valued. The homogeneous solution is given by
$$y_h(t) = \alpha_1 e^{\lambda t} + \alpha_2\,t\,e^{\lambda t}$$
Note that $y_h(t)$ decays to zero as t approaches infinity if $e^{\lambda t}$ decays to zero, which is the case for $\lambda < 0$. This means that a critically damped second-order system is stable if $\lambda < 0$ and unstable if $\lambda > 0$.
Let $\lambda_1, \lambda_2$ be the roots of the homogeneous equation of a second-order system. In general we can say that the second-order system will be stable if there holds
$$\mathrm{Re}(\lambda_i) < 0,\quad i = 1, 2$$
or, in other words, if the poles are in the left half of the complex plane.
The relation between various pole locations and the system response is sketched in Figure 3.12. For a pole with a negative real part we see a decaying response, for a pole with a positive real part we have an increasing response. If the pole has an imaginary part, there is an oscillation. The frequency of the oscillation grows with increasing imaginary part. If there is a single pole in the origin, the response will contain a step function. If the pole is on the imaginary axis (but not in the origin), the response is an oscillation with constant amplitude. For multiple poles on the imaginary axis, the response will be unstable.
Example 3.4 Consider the translational mechanical system in Figure 3.5.c with mass m = 2 kg, spring constant k = 8 N/m, damping b = 4√3 Ns/m, and let the external force be given by $f_e(t) = 10\,\delta(t)$. The differential equation is given by
$$\ddot y(t) + 2\sqrt3\,\dot y(t) + 4\,y(t) = 0.5\,f_e(t)$$
and so the forcing function is $f(t) = 0.5\,f_e(t) = 5\,\delta(t)$. We compute the undamped natural frequency $\omega_n = 2$, the damping ratio $\zeta = 0.5\sqrt3$, $\sigma = \zeta\omega_n = \sqrt3$, and $\omega_d = \omega_n\sqrt{1-\zeta^2} = 1$. With $\zeta < 1$ we have an underdamped system and so for $f(t) = \delta(t)$ we find
$$y_\delta(t) = \frac{1}{\omega_d}\,e^{-\sigma t}\sin\omega_d t = e^{-\sqrt3\,t}\sin t,\quad\text{for } t \geq 0$$
In our case we have $f(t) = 5\,\delta(t)$ and so the impulse response of this mechanical system is given by
$$y(t) = 5\,y_\delta(t) = 5\,e^{-\sqrt3\,t}\sin t,\quad\text{for } t \geq 0$$

3.3  Exercises

Exercise 1. Driving car
A car with mass m = 2 kg is driving with a velocity v(t) over a road. The surface friction of the car is $f_c(t) = c\,v(t) = 6\,v(t)$ and the motor of the car produces a force $f_m(t)$. Let this force be equal to a unit step, so $f_m(t) = u_s(t)$. Compute the velocity v as a function of time.

Exercise 2. RLC-circuit
Consider the electrical circuit of Figure 3.5.a with inductance L = 1 H. How do we choose R and C such that $\omega_n = 2$ rad/s and $\zeta = 0.5$ for this system?

Exercise 3. Damped natural frequency
Consider the second-order system:
$$\ddot y(t) + K\,\dot y(t) + 3\,y(t) = u(t)$$
For which values of K do we have $\omega_d \geq 1.5$?

Exercise 4. Stability
Are the following systems stable?
a) $\dot y(t) + 3\,y(t) = u(t)$
b) $4\dot y(t) - 2\,y(t) = 4u(t)$
c) $\ddot y(t) + 3\dot y(t) - y(t) = u(t)$
d) $4\ddot y(t) + 2\dot y(t) + y(t) = u(t)$

Exercise 5. Damping ratio, (un)damped natural frequency, decay factor
Compute for the following systems the damping ratio $\zeta$, undamped natural frequency $\omega_n$, damped natural frequency $\omega_d$, and the decay factor $\sigma$:
a) $\ddot y(t) + 3\dot y(t) + 9\,y(t) = u(t)$
b) $4\ddot y(t) + 8\dot y(t) + y(t) = 4u(t)$
c) $\ddot y(t) + 0.4\dot y(t) + y(t) = u(t)$
d) $\ddot y(t) + 4\dot y(t) + 4\,y(t) = 3\,u(t)$

Exercise 6. Response criteria
Compute estimates of the settling time $t_s$, rise time $t_r$, peak time $t_p$, and overshoot $M_p$ for each of the systems of Exercise 5.

Chapter 4

General system analysis

In this chapter we will analyze single-input single-output linear time-invariant systems, which are described by the linear differential equation:
$$\frac{d^n y(t)}{dt^n} + a_1\frac{d^{n-1}y(t)}{dt^{n-1}} + \ldots + a_{n-1}\frac{dy(t)}{dt} + a_n y(t) = b_0\frac{d^m u(t)}{dt^m} + b_1\frac{d^{m-1}u(t)}{dt^{m-1}} + \ldots + b_{m-1}\frac{du(t)}{dt} + b_m u(t), \qquad (4.1)$$
where we assume in this chapter that $m \leq n$. In this chapter we will generalize the results for the first-order and second-order systems of Chapter 3 to higher-order systems. To facilitate the discussion we will introduce the new concepts of transfer function and convolution. Further we will consider the computation of the time response and show the relation between various equivalent system descriptions.

4.1  Transfer functions

In Section 2.3 we have introduced the Laplace transformation and claimed that if the system is initially at rest at time t = 0 (which means that the output y(0) = 0 and all its derivatives $\frac{d^n y(t)}{dt^n}\big|_{t=0} = 0$ as well), then
$$\mathcal L\Bigl\{\frac{d^n y(t)}{dt^n}\Bigr\} = s^n Y(s) \qquad (4.2)$$
With $U(s) = \mathcal L\{u(t)\}$ and $Y(s) = \mathcal L\{y(t)\}$, Equation (4.1) can be written as
$$\bigl(s^n + a_1 s^{n-1} + \ldots + a_{n-1}s + a_n\bigr)Y(s) = \bigl(b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1}s + b_m\bigr)U(s) \qquad (4.3)$$
and if we define $a(s) = s^n + a_1 s^{n-1} + \ldots + a_{n-1}s + a_n$ and $b(s) = b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1}s + b_m$ we obtain
$$a(s)\,Y(s) = b(s)\,U(s) \qquad (4.4)$$

If we rewrite Equation (4.4) as
$$\frac{Y(s)}{U(s)} = \frac{b(s)}{a(s)} = H(s) \qquad (4.5)$$
then H(s) is called the transfer function, which describes the relation between the Laplace transform of the input signal and the Laplace transform of the output signal. Note that
$$H(s) = \frac{b(s)}{a(s)} = \frac{b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-2}s^2 + b_{m-1}s + b_m}{s^n + a_1 s^{n-1} + \ldots + a_{n-2}s^2 + a_{n-1}s + a_n} \qquad (4.6)$$
The transfer function provides a complete representation of a linear system in the Laplace domain. The order of the transfer function is defined as the order of the denominator polynomial and is therefore equal to n, the order of the differential equation of the system.

  Type                 Differential equation                                    Transfer function
  Integrator           $\dot y = u$                                             $1/s$
  Differentiator       $y = \dot u$                                             $s$
  First-order system   $\dot y + a y = u$                                       $1/(s+a)$
  Double integrator    $\ddot y = u$                                            $1/s^2$
  Damped oscillator    $\ddot y + 2\zeta\omega_n\dot y + \omega_n^2 y = u$      $1/(s^2 + 2\zeta\omega_n s + \omega_n^2)$

Figure 4.1: Transfer functions for some common differential equations.

Since the a and b coefficients are real numbers and s is a complex number, the transfer function H(s) will also be a complex number that can be represented in polar form or in magnitude-and-phase form as
$$H(\sigma + j\omega) = M(\sigma + j\omega)\,e^{j\phi(\sigma + j\omega)} \qquad (4.7)$$
where
$$M(\sigma + j\omega) = |H(\sigma + j\omega)| \qquad (4.8)$$
is the magnitude of the transfer function and
$$\phi(\sigma + j\omega) = \angle H(\sigma + j\omega) \qquad (4.9)$$
is the phase of the transfer function. Both M and $\phi$ are real-valued functions depending on the variable $s = \sigma + j\omega$. With real-valued a and b coefficients we have the property that
$$M(\sigma - j\omega) = M(\sigma + j\omega) \qquad (4.10)$$
$$\phi(\sigma - j\omega) = -\phi(\sigma + j\omega) \qquad (4.11)$$
Poles and Zeros

Consider a linear system with the rational transfer function
$$H(s) = \frac{b(s)}{a(s)},$$
where $a(s) = s^n + a_1 s^{n-1} + \ldots + a_{n-2}s^2 + a_{n-1}s + a_n$ and $b(s) = b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-2}s^2 + b_{m-1}s + b_m$. The roots of the polynomial a(s) are called the poles of the system, and the roots of b(s) are called the zeros of the system. As was already shown in Chapter 3, the poles $p_i$ of first-order and second-order systems correspond to the roots of a(s) in the homogeneous equation, leading to the homogeneous solutions $y(t) = e^{p_i t}$.
Zeros have a different interpretation. Let z be a zero of the system, so $b(z) = 0$ and therefore $H(z) = 0$. Then for an input signal $u(t) = e^{zt}$, it follows that the pure exponential output is $y(t) = H(z)e^{zt} = 0$. Zeros of the transfer function thus block transmission of the corresponding exponential signals.
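In practice, poles and zeros are simply the roots of the denominator and numerator polynomials. A minimal numerical sketch (not part of the original notes, using a hypothetical transfer function H(s) = (s+1)/(s² + 4s + 13) chosen only for illustration):

```python
import numpy as np

# Hypothetical example: H(s) = (s + 1) / (s^2 + 4 s + 13) = b(s)/a(s)
b = [1.0, 1.0]            # coefficients of b(s), highest power first
a = [1.0, 4.0, 13.0]      # coefficients of a(s)

zeros = np.roots(b)       # roots of b(s)
poles = np.roots(a)       # roots of a(s)
print("zeros:", zeros)    # [-1.]
print("poles:", poles)    # [-2.+3.j, -2.-3.j]
```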

4.2  Time responses

In this section we will derive the time response of linear time-invariant systems of the form (4.1), which can be written as
$$\frac{d^n y(t)}{dt^n} + a_1\frac{d^{n-1}y(t)}{dt^{n-1}} + \ldots + a_{n-1}\frac{dy(t)}{dt} + a_n y(t) = f(t) \qquad (4.12)$$
where the forcing function $f(t) = b_0\frac{d^m u(t)}{dt^m} + b_1\frac{d^{m-1}u(t)}{dt^{m-1}} + \ldots + b_{m-1}\frac{du(t)}{dt} + b_m u(t)$ is assumed to be known, and where we have initial conditions
$$\frac{d^{n-1}y(t)}{dt^{n-1}}\Big|_{t=0} = c_{n-1},\ \ldots,\ \frac{dy(t)}{dt}\Big|_{t=0} = c_1,\quad y(0) = c_0 \qquad (4.13)$$
The procedure to find the signal y(t) that is a solution of differential equation (4.12) and at the same time satisfies the initial conditions (4.13) consists of three parts:
1. Compute the homogeneous solution $y_h(t)$, $t \geq 0$ that satisfies the so-called homogeneous equation
$$\frac{d^n y(t)}{dt^n} + a_1\frac{d^{n-1}y(t)}{dt^{n-1}} + \ldots + a_{n-1}\frac{dy(t)}{dt} + a_n y(t) = 0 \qquad (4.14)$$
so equation (4.12) for f(t) = 0.
2. Compute a particular solution $y_p(t)$, $t \geq 0$ that satisfies the differential equation (4.12) (but without considering the initial conditions).
3. By combining the homogeneous solution and the particular solution, find the final solution y(t) that is a solution of differential equation (4.12) and at the same time satisfies the initial conditions (4.13).

Homogeneous solution
In the previous section we already observed that solutions of the form $e^{\lambda t}$, for possibly-complex values of $\lambda$, play an important role in solving first-order and second-order differential equations. The exponential function is one of the few functions that keep their shape even after differentiation. In order for the sum of multiple derivatives of a function to sum up to zero, the derivatives must cancel each other out, and the only way for them to do so is for the derivatives to have the same form as the initial function. Thus, to solve
$$\frac{d^n y(t)}{dt^n} + a_1\frac{d^{n-1}y(t)}{dt^{n-1}} + \ldots + a_{n-1}\frac{dy(t)}{dt} + a_n y(t) = 0 \qquad (4.15)$$
we set $y = e^{\lambda t}$, leading to
$$\lambda^n e^{\lambda t} + a_1\lambda^{n-1}e^{\lambda t} + \cdots + a_n e^{\lambda t} = 0.$$
Division by $e^{\lambda t}$ gives the nth-order polynomial equation
$$a(\lambda) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_n = 0.$$
This equation $a(\lambda) = 0$ is the characteristic equation and a is defined to be the characteristic polynomial.
The polynomial $a(\lambda)$ is of nth order and so there are n roots $(\lambda_1, \ldots, \lambda_n)$ of the characteristic polynomial. If all roots $\lambda_i$ are distinct, we obtain n possible solutions $e^{\lambda_i t}$ ($i = 1, \ldots, n$). Note that because of linearity, any linear combination of possible solutions will satisfy the homogeneous equation (4.15). This gives us the most general form of the homogeneous solution:
$$y_h(t) = C_1 e^{\lambda_1 t} + C_2 e^{\lambda_2 t} + \ldots + C_n e^{\lambda_n t} = \sum_{i=1}^n C_i e^{\lambda_i t}$$
with $C_i \in \mathbb{C}$.
Example 4.1 Consider the homogeneous equation
$$\frac{d^3y(t)}{dt^3} + 10\frac{d^2y(t)}{dt^2} + 31\frac{dy(t)}{dt} + 30\,y(t) = 0$$
The characteristic equation is:
$$\lambda^3 + 10\lambda^2 + 31\lambda + 30 = (\lambda+2)(\lambda+3)(\lambda+5) = 0$$
We find three roots: $\lambda_1 = -2$, $\lambda_2 = -3$, $\lambda_3 = -5$, and so the homogeneous solution becomes
$$y_h(t) = C_1 e^{-2t} + C_2 e^{-3t} + C_3 e^{-5t}$$
with $C_1, C_2, C_3 \in \mathbb{R}$.

Example 4.2 Consider the homogeneous equation
$$\frac{d^3y(t)}{dt^3} + 4\frac{d^2y(t)}{dt^2} + 6\frac{dy(t)}{dt} + 4\,y(t) = 0$$
with characteristic equation:
$$\lambda^3 + 4\lambda^2 + 6\lambda + 4 = (\lambda+2)(\lambda+1+j)(\lambda+1-j) = 0$$
We find three roots: $\lambda_1 = -2$, $\lambda_2 = -1-j$, $\lambda_3 = -1+j$, and so the homogeneous solution becomes
$$y_h(t) = C_1 e^{-2t} + C_2 e^{(-1-j)t} + C_3 e^{(-1+j)t}$$
In this case we find $C_1 \in \mathbb{R}$ and $C_2, C_3 \in \mathbb{C}$. Moreover, to make sure that $y_h$ is real-valued the coefficients $C_2$ and $C_3$ will be complex conjugates of each other, so if $C_2 = \alpha + j\beta$, then $C_3 = \alpha - j\beta$ with $\alpha, \beta \in \mathbb{R}$. We derive
$$y_h(t) = C_1 e^{-2t} + (\alpha+j\beta)\,e^{(-1-j)t} + (\alpha-j\beta)\,e^{(-1+j)t} = C_1 e^{-2t} + (\alpha+j\beta)\,e^{-t}(\cos t - j\sin t) + (\alpha-j\beta)\,e^{-t}(\cos t + j\sin t) = C_1 e^{-2t} + 2\alpha\,e^{-t}\cos t + 2\beta\,e^{-t}\sin t$$
with $C_1, \alpha, \beta \in \mathbb{R}$.
Multiple roots:
If one of the roots $\lambda_i$ has a multiplicity equal to m (with m > 1), then
$$y = t^k e^{\lambda_i t}\quad\text{for } k = 0, 1, \ldots, m-1$$
is a solution of the homogeneous equation. Applying this to all roots gives a collection of n distinct and linearly independent solutions, where n is the order of the system.
Example 4.3 Consider the homogeneous equation
$$\frac{d^4y(t)}{dt^4} + 2\frac{d^3y(t)}{dt^3} + 2\frac{d^2y(t)}{dt^2} + 2\frac{dy(t)}{dt} + y(t) = 0$$
with characteristic equation:
$$\lambda^4 + 2\lambda^3 + 2\lambda^2 + 2\lambda + 1 = (\lambda+1)^2(\lambda+j)(\lambda-j) = 0$$
We find three roots: $\lambda_1 = j$, $\lambda_2 = -j$, $\lambda_3 = -1$ (multiplicity 2), and so the homogeneous solution becomes
$$y_h(t) = C_1 e^{jt} + C_2 e^{-jt} + C_3 e^{-t} + C_4\,t\,e^{-t}$$
As in Example 4.2, the coefficients $C_1 = \alpha + j\beta$ and $C_2 = \alpha - j\beta$ will be complex conjugates of each other with $\alpha, \beta \in \mathbb{R}$. We derive
$$y_h(t) = C_1 e^{jt} + C_2 e^{-jt} + C_3 e^{-t} + C_4\,t\,e^{-t} = 2\alpha\cos t - 2\beta\sin t + C_3 e^{-t} + C_4\,t\,e^{-t}$$
with $\alpha, \beta, C_3, C_4 \in \mathbb{R}$.

Particular solution
To obtain the solution to the non-homogeneous equation (sometimes called the inhomogeneous equation), we need to find a particular solution $y_p(t)$, $t \geq 0$ that satisfies the differential equation (4.12) (but without considering the initial conditions).

Particular solution for an exponential function
We start by considering the particular solution of differential equation (4.12) for an exponential input $u(t) = e^{st}$ where $s \in \mathbb{C}$. First note that the derivatives of the input function are given by
$$\frac{du(t)}{dt} = s\,e^{st},\quad \frac{d^2u(t)}{dt^2} = s^2 e^{st},\ \ldots,\ \frac{d^m u(t)}{dt^m} = s^m e^{st}.$$
We observe that all the derivatives are scaled versions of the exponential input $e^{st}$. Therefore assume that y(t) has the same shape as u(t), so let
$$y(t) = C\,e^{st}$$
for the same fixed complex value s. We can compute the derivatives of y:
$$\frac{dy(t)}{dt} = s\,C e^{st},\quad \frac{d^2y(t)}{dt^2} = s^2 C e^{st},\ \ldots,\ \frac{d^n y(t)}{dt^n} = s^n C e^{st}$$
Substitution gives us:
$$s^n C e^{st} + a_1 s^{n-1} C e^{st} + \ldots + a_{n-1}s\,C e^{st} + a_n C e^{st} = b_0 s^m e^{st} + b_1 s^{m-1} e^{st} + \ldots + b_{m-1}s\,e^{st} + b_m e^{st} \qquad (4.16)$$
or
$$(s^n + a_1 s^{n-1} + \ldots + a_{n-1}s + a_n)\,C\,e^{st} = (b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1}s + b_m)\,e^{st}. \qquad (4.17)$$
From this equation we can derive
$$C = \frac{b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1}s + b_m}{s^n + a_1 s^{n-1} + \ldots + a_{n-1}s + a_n} = H(s)$$
where H is the transfer function of the system.
We see that for an exponential input $u(t) = e^{st}$ where $s \in \mathbb{C}$ we obtain a particular solution
$$y(t) = H(s)\,e^{st},\quad t \geq 0 \qquad (4.18)$$
Using property (4.18) we can compute the particular solution for forcing functions that are singularity, harmonic and damped harmonic functions.

Particular solution for a step function
If we consider $u(t) = e^{st}$, $t \geq 0$ for s = 0, we find $u(t) = e^{st}|_{s=0} = 1$, $t \geq 0$ and so the particular solution for a step function is given by
$$y(t) = H(s)\,e^{st}\big|_{s=0} = \frac{b_m}{a_n},\quad t \geq 0 \qquad (4.19)$$

Particular solution for a ramp and parabolic function
Two input signals that are important to study, namely the ramp function $u_r(t) = t$, $t \geq 0$ and the parabolic function $u_p(t) = t^2/2$, $t \geq 0$, cannot be written in the form $e^{st}$. To compute the particular solution for a ramp or parabolic input function, we need the integral property of linear time-invariant systems, i.e. let $y_s(t)$ be the output of the system to the step input $u_s(t) = 1$ for $t \geq 0$; then the integral of the input $u_r(t) = \int u_s(\tau)\,d\tau$ will result in the integral of the output $y_r(t) = \int y_s(\tau)\,d\tau$, and one step further, a parabolic input $u_p(t) = \int u_r(\tau)\,d\tau$ will result in the integral of the output $y_p(t) = \int y_r(\tau)\,d\tau$.
Consider the system described by the differential equation
$$\frac{d^n y(t)}{dt^n} + a_1\frac{d^{n-1}y(t)}{dt^{n-1}} + \ldots + a_{n-1}\frac{dy(t)}{dt} + a_n y(t) = b_0\frac{d^m u(t)}{dt^m} + b_1\frac{d^{m-1}u(t)}{dt^{m-1}} + \ldots + b_{m-1}\frac{du(t)}{dt} + b_m u(t),$$
with the corresponding transfer function
$$H(s) = \frac{b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-2}s^2 + b_{m-1}s + b_m}{s^n + a_1 s^{n-1} + \ldots + a_{n-2}s^2 + a_{n-1}s + a_n}$$
From (4.19) we know that a particular solution for a step input is given by
$$y_s(t) = H(0) = \frac{b_m}{a_n}\quad\text{for } t \geq 0$$
The particular solution $y_r(t)$ for a ramp input $u_r(t) = t$, $t \geq 0$ is given by
$$y_r(t) = \int y_s(t)\,dt = c_0 t + c_1\quad\text{for } t \geq 0$$
where $c_0 = b_m/a_n$ and $c_1$ is a constant to be determined. Note that $\dot y_r(t) = c_0$ and $\dot u_r(t) = 1$. Substitution into the differential equation gives us:
$$0 + a_{n-1}c_0 + a_n(c_0 t + c_1) = b_{m-1} + b_m t\quad\text{for } t \geq 0$$
We find:
$$c_1 = \frac{b_{m-1} - a_{n-1}c_0}{a_n} = \frac{a_n b_{m-1} - a_{n-1}b_m}{a_n^2}.$$

The parabolic response of this system for a parabolic input $u_p(t) = t^2/2$, $t \geq 0$ is given by
$$y_p(t) = \int y_r(t)\,dt = c_0\frac{t^2}{2} + c_1 t + c_2,\quad\text{for } t \geq 0$$
where $c_2$ is a constant to be determined. Substitution into the differential equation gives us:
$$0 + a_{n-2}c_0 + a_{n-1}(c_0 t + c_1) + a_n\Bigl(c_0\frac{t^2}{2} + c_1 t + c_2\Bigr) = b_{m-2} + b_{m-1}t + b_m\frac{t^2}{2}\quad\text{for } t \geq 0$$
where $c_0 = b_m/a_n$ and $c_1 = (b_{m-1} - a_{n-1}c_0)/a_n$. We find:
$$c_2 = \frac{b_{m-2} - a_{n-2}c_0 - a_{n-1}c_1}{a_n} = \frac{a_n^2 b_{m-2} - a_{n-1}a_n b_{m-1} + (a_{n-1}^2 - a_n a_{n-2})b_m}{a_n^3}.$$
Example 4.4 Consider the system
$$\ddot y(t) + 5\dot y(t) + 6\,y(t) = \dot u(t) + u(t)$$
We compute:
$$c_0 = b_m/a_n = 1/6$$
$$c_1 = (b_{m-1} - a_{n-1}c_0)/a_n = (1 - 5/6)/6 = 1/36$$
$$c_2 = (b_{m-2} - a_{n-2}c_0 - a_{n-1}c_1)/a_n = (0 - 1/6 - 5/36)/6 = -11/216$$
The particular solution of this system for a unit step input is given by
$$y_s(t) = H(0) = 1/6\quad\text{for } t \geq 0$$
The particular solution of this system for a unit ramp input is given by
$$y_r(t) = \int y_s(t)\,dt = \tfrac16 t + \tfrac1{36}\quad\text{for } t \geq 0$$
The particular solution of this system for a unit parabolic input is given by
$$y_p(t) = \int y_r(t)\,dt = \tfrac1{12}t^2 + \tfrac1{36}t - \tfrac{11}{216}\quad\text{for } t \geq 0$$
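The three particular solutions of Example 4.4 can be checked symbolically. The sketch below (an illustration, not part of the original notes) substitutes each candidate into ÿ + 5ẏ + 6y − (u̇ + u) and verifies that the residual is zero; sympy is used only as a convenient symbolic calculator.

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)

def residual(y, u):
    """Left-hand side minus forcing for  y'' + 5 y' + 6 y = u' + u  (Example 4.4)."""
    return sp.simplify(sp.diff(y, t, 2) + 5*sp.diff(y, t) + 6*y - (sp.diff(u, t) + u))

print(residual(sp.Rational(1, 6), sp.Integer(1)))                       # step input: 0
print(residual(t/6 + sp.Rational(1, 36), t))                            # ramp input: 0
print(residual(t**2/12 + t/36 - sp.Rational(11, 216), t**2/2))          # parabolic input: 0
```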
Higher order singularity function
For a function
$$u(t) = \frac{1}{k!}t^k$$
with k > 2 we can also compute the particular solution
$$y(t) = c_0\frac{1}{k!}t^k + c_1\frac{1}{(k-1)!}t^{k-1} + \ldots + c_{k-1}t + c_k$$
where the coefficients $c_i$, $i = 0, \ldots, k$ are computed in a similar way.

Particular solution for a (damped) harmonic function
Consider the input signal $u(t) = e^{\sigma t}\cos\omega t$ with $\sigma, \omega \in \mathbb{R}$. We can rewrite this input in terms of exponentials with a complex argument as follows:
$$u(t) = e^{\sigma t}\cos\omega t = \tfrac12 e^{(\sigma+j\omega)t} + \tfrac12 e^{(\sigma-j\omega)t}$$
If we let $s = \sigma + j\omega$ then the response to $u(t) = e^{(\sigma+j\omega)t}$ is equal to $y(t) = H(\sigma+j\omega)e^{(\sigma+j\omega)t}$ and the response to $u(t) = e^{(\sigma-j\omega)t}$ is $y(t) = H(\sigma-j\omega)e^{(\sigma-j\omega)t}$. By superposition, the response to the cosine is equal to the sum of these two responses:
$$y(t) = \tfrac12 H(\sigma+j\omega)e^{(\sigma+j\omega)t} + \tfrac12 H(\sigma-j\omega)e^{(\sigma-j\omega)t}$$
Note that following (4.7) we can write
$$M(\sigma+j\omega) = |H(\sigma+j\omega)|\quad\text{and}\quad \phi(\sigma+j\omega) = \angle H(\sigma+j\omega) \qquad (4.20)$$
and then
$$H(\sigma+j\omega) = M(\sigma+j\omega)\,e^{j\phi(\sigma+j\omega)}$$
Using the properties (4.10) and (4.11) this leads to
$$y(t) = \tfrac12 M(\sigma+j\omega)e^{j\phi(\sigma+j\omega)}e^{(\sigma+j\omega)t} + \tfrac12 M(\sigma+j\omega)e^{-j\phi(\sigma+j\omega)}e^{(\sigma-j\omega)t} = M(\sigma+j\omega)\,e^{\sigma t}\bigl(\tfrac12 e^{j(\omega t+\phi(\sigma+j\omega))} + \tfrac12 e^{-j(\omega t+\phi(\sigma+j\omega))}\bigr) = M(\sigma+j\omega)\,e^{\sigma t}\cos\bigl(\omega t + \phi(\sigma+j\omega)\bigr)$$

Summary of particular solutions
We summarize the particular solutions for some singularity, exponential, and sinusoidal input functions in Table 4.2.

Computation of the final response
Finally we solve the original problem (4.12) where we have initial conditions
$$\frac{d^{n-1}y(t)}{dt^{n-1}}\Big|_{t=0} = c_{n-1},\ \ldots,\ \frac{dy(t)}{dt}\Big|_{t=0} = c_1,\quad y(0) = c_0$$
The general solution of the linear differential equation is the sum of the general solution of the related homogeneous equation and the particular solution. First note that the signal $y(t) = y_p(t) + y_h(t)$ satisfies (4.12) for any $C_1, \ldots, C_n \in \mathbb{C}$. To compute the correct $C_1, \ldots, C_n$ we have to make sure that the initial conditions are satisfied.

  Input u(t)                      Output y_p(t), t ≥ 0
  $u_s(t) = 1$                    $H(0)$
  $\frac{1}{n!}t^n$               $c_0\frac{1}{n!}t^n + c_1\frac{1}{(n-1)!}t^{n-1} + \ldots + c_{n-1}t + c_n$
  $e^{\lambda t}$                 $H(\lambda)\,e^{\lambda t}$
  $e^{j\omega t}$                 $H(j\omega)\,e^{j\omega t}$
  $\cos(\omega t)$                $M(j\omega)\cos(\omega t + \phi(j\omega))$
  $\sin(\omega t)$                $M(j\omega)\sin(\omega t + \phi(j\omega))$
  $e^{\sigma t}\cos(\omega t)$    $M(\sigma+j\omega)\,e^{\sigma t}\cos(\omega t + \phi(\sigma+j\omega))$
  $e^{\sigma t}\sin(\omega t)$    $M(\sigma+j\omega)\,e^{\sigma t}\sin(\omega t + \phi(\sigma+j\omega))$

Figure 4.2: Particular solutions

Example 4.5 Consider the system
$$\ddot y(t) + 4\dot y(t) + 13\,y(t) = u(t)$$
with $u(t) = e^{-3t}$ for $t \geq 0$ and initial conditions $y(0) = 1$ and $\dot y(0) = 3$. From the characteristic equation
$$\lambda^2 + 4\lambda + 13 = 0$$
we find the roots $\lambda_1 = -2 + j3$ and $\lambda_2 = -2 - j3$. The homogeneous solution becomes
$$y_h(t) = e^{-2t}\bigl(C_1\cos 3t + C_2\sin 3t\bigr)$$
The transfer function H(s) for this system is given by
$$H(s) = \frac{1}{s^2 + 4s + 13}$$
For an input $u(t) = e^{-3t}$ the particular solution has the form
$$y_p(t) = H(-3)\,e^{-3t} = \frac{1}{10}e^{-3t} = 0.1\,e^{-3t}$$
The final solution becomes
$$y(t) = y_h(t) + y_p(t) = e^{-2t}\bigl(C_1\cos 3t + C_2\sin 3t\bigr) + 0.1\,e^{-3t}$$
with derivative
$$\dot y(t) = -2e^{-2t}\bigl(C_1\cos 3t + C_2\sin 3t\bigr) + 3e^{-2t}\bigl(-C_1\sin 3t + C_2\cos 3t\bigr) - 0.3\,e^{-3t}$$
and so
$$y(0) = C_1 + 0.1,\qquad \dot y(0) = -2C_1 + 3C_2 - 0.3$$
With $y(0) = 1$ and $\dot y(0) = 3$ we find $C_1 = 0.9$ and $C_2 = 1.7$.
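The closed-form solution of Example 4.5 can be cross-checked numerically. The sketch below (an illustration, not part of the original notes) integrates the differential equation with scipy's solve_ivp and compares it with the expression found above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example 4.5:  y'' + 4 y' + 13 y = e^{-3t},  y(0) = 1,  y'(0) = 3
def rhs(t, z):                      # z = [y, y']
    return [z[1], np.exp(-3*t) - 4*z[1] - 13*z[0]]

sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 3.0], dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, 3.0, 7)
y_closed = np.exp(-2*t)*(0.9*np.cos(3*t) + 1.7*np.sin(3*t)) + 0.1*np.exp(-3*t)
print(np.max(np.abs(sol.sol(t)[0] - y_closed)))   # tiny: both solutions agree
```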

Frequency response
We will now give special attention to the particular solution for a sinusoidal input. We already discussed that the sinusoid can be expressed as a sum of two harmonic functions
$$u(t) = \cos\omega t = \tfrac12 e^{j\omega t} + \tfrac12 e^{-j\omega t}$$
If we let $s = j\omega$ then the response to $u(t) = \cos\omega t$ is equal to
$$y(t) = \tfrac12 H(j\omega)e^{j\omega t} + \tfrac12 H(-j\omega)e^{-j\omega t}$$
Note that similar to (4.20) we can define
$$H(j\omega) = M(j\omega)\,e^{j\phi(j\omega)} \qquad (4.21)$$
with
$$M(j\omega) = |H(j\omega)|\quad\text{and}\quad \phi(j\omega) = \angle H(j\omega)$$
With this substitution we find
$$y(t) = M(j\omega)\bigl(\tfrac12 e^{j(\omega t+\phi(j\omega))} + \tfrac12 e^{-j(\omega t+\phi(j\omega))}\bigr) = M(j\omega)\cos\bigl(\omega t + \phi(j\omega)\bigr) \qquad (4.22)$$
This means that if a system represented by the transfer function H(s) has a sinusoidal input, the output will be sinusoidal at the same frequency, with magnitude $M(j\omega)$, and will be shifted in phase by an angle $\phi(j\omega)$.
Example 4.6 Frequency response of a second-order system. Consider a second-order system with input u(t) and output y(t), satisfying the differential equation
$$\ddot y(t) + 0.1\,\dot y(t) + y(t) = u(t)$$
The transfer function of this system is given by
$$H(s) = \frac{1}{s^2 + 0.1\,s + 1}$$
and so
$$H(j\omega) = M(j\omega)\,e^{j\phi(j\omega)} = \frac{1}{-\omega^2 + 0.1\,j\omega + 1}$$
with
$$M(j\omega) = \Bigl|\frac{1}{-\omega^2 + 0.1\,j\omega + 1}\Bigr| = \frac{1}{\sqrt{(1-\omega^2)^2 + (0.1\,\omega)^2}} = \frac{1}{\sqrt{\omega^4 - 1.99\,\omega^2 + 1}}$$
$$\phi(j\omega) = \angle H(j\omega) = -\arctan\Bigl(\frac{0.1\,\omega}{1-\omega^2}\Bigr)$$
Figure 4.3 shows the plots of $M(j\omega)$ and $\phi(j\omega)$ as a function of the frequency. These plots are called the Bode plots of the system (named after H.W. Bode). The frequencies on the horizontal axis are customarily given on a logarithmic scale. The magnitude $M(j\omega)$ is given in decibels (dB), which means that we plot $20\log[M(j\omega)]$ on the vertical axis. For the phase $\phi(j\omega)$ we use a linear scale on the vertical axis.
[Figure 4.3: Bode plots of Example 4.6: the magnitude 20 log M(jω) in dB (with a resonance peak near ω = 1 rad/s) and the phase φ(jω) running from 0° to −180°, both plotted against ω on a logarithmic scale from 10⁻¹ to 10¹ rad/s.]
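A plot like Figure 4.3 can be generated directly from the transfer function. The sketch below (an illustration, not part of the original notes) uses scipy.signal to compute the Bode data of H(s) = 1/(s² + 0.1s + 1) and reports the resonance peak; plotting is left out for brevity.

```python
import numpy as np
from scipy import signal

# H(s) = 1 / (s^2 + 0.1 s + 1), the system of Example 4.6
sys = signal.TransferFunction([1.0], [1.0, 0.1, 1.0])
w = np.logspace(-1, 1, 500)                    # frequency grid in rad/s
w, mag_db, phase_deg = signal.bode(sys, w)     # 20*log10(M) in dB, phase in degrees

k = np.argmax(mag_db)
print(f"resonance near w = {w[k]:.3f} rad/s, peak = {mag_db[k]:.1f} dB")  # ~1 rad/s, ~20 dB
```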

4.3  Time response using Laplace transform

In this section we will derive the time response of the linear time-invariant system (4.12), with initial conditions
$$\frac{d^{n-1}y(t)}{dt^{n-1}}\Big|_{t=0} = c_{n-1},\ \ldots,\ \frac{dy(t)}{dt}\Big|_{t=0} = c_1,\quad y(0) = c_0 \qquad (4.23)$$
using the Laplace transform. In Section 4.1 we have introduced the notion of transfer function and have given the property for initially-at-rest systems:
$$\mathcal L\Bigl\{\frac{d^k y(t)}{dt^k}\Bigr\} = s^k Y(s) \qquad (4.24)$$
where Y(s) is the Laplace transform of y(t). For the computation of the time response of a system that is not initially at rest (so $c_0, \ldots, c_{n-1}$ in (4.23) are not all equal to zero), property (4.24) can be extended into the following property
$$\mathcal L\Bigl\{\frac{d^k y(t)}{dt^k}\Bigr\} = s^k Y(s) - \sum_{i=0}^{k-1}s^{k-1-i}\,\frac{d^i y(t)}{dt^i}\Big|_{t=0} = s^k Y(s) - \sum_{i=0}^{k-1}s^{k-1-i}\,c_i \qquad (4.25)\text{--}(4.26)$$
so in particular for k = 1, 2 we find
$$\mathcal L\Bigl\{\frac{dy(t)}{dt}\Bigr\} = s\,Y(s) - y(0) = s\,Y(s) - c_0$$
$$\mathcal L\Bigl\{\frac{d^2y(t)}{dt^2}\Bigr\} = s^2 Y(s) - s\,y(0) - \dot y(0) = s^2 Y(s) - s\,c_0 - c_1$$
Consider the differential equation
$$\frac{d^n y(t)}{dt^n} + a_1\frac{d^{n-1}y(t)}{dt^{n-1}} + \ldots + a_{n-1}\frac{dy(t)}{dt} + a_n y(t) = f(t) \qquad (4.27)$$
for a known forcing function f(t). Applying the Laplace transformation we obtain
$$\mathcal L\Bigl\{\frac{d^n y(t)}{dt^n}\Bigr\} + \mathcal L\Bigl\{a_1\frac{d^{n-1}y(t)}{dt^{n-1}}\Bigr\} + \ldots + \mathcal L\Bigl\{a_{n-1}\frac{dy(t)}{dt}\Bigr\} + a_n\,\mathcal L\{y(t)\} = \mathcal L\{f(t)\}. \qquad (4.28)$$
Substitution of (4.26) into (4.28) gives us
$$\Bigl(s^n Y(s) - \sum_{i=0}^{n-1}s^{n-1-i}c_i\Bigr) + a_1\Bigl(s^{n-1}Y(s) - \sum_{i=0}^{n-2}s^{n-2-i}c_i\Bigr) + \ldots + a_{n-1}\bigl(s\,Y(s) - y(0)\bigr) + a_n Y(s) = F(s), \qquad (4.29)$$
where Y(s) and F(s) are the Laplace transforms of y(t) and f(t), respectively. From this equation we can find an expression for Y(s). We can now compute y(t) for $t \geq 0$ from Y(s), using the inverse Laplace transformation. Let Y(s), $s \in \mathbb{C}$, be the Laplace transform of y(t); then the inverse Laplace transformation is defined as follows:
$$y(t) = \mathcal L^{-1}\{Y(s)\} = \frac{1}{2\pi j}\int Y(s)\,e^{st}\,ds,\quad t \geq 0 \qquad (4.31)$$
A problem is that the inverse Laplace integral is not always easy to compute. An alternative way to find $\mathcal L^{-1}\{Y(s)\}$ is carrying out a partial fraction expansion, in which Y(s) is broken into components
$$Y(s) = Y_1(s) + Y_2(s) + \ldots + Y_n(s)$$
for which the inverse Laplace transforms of $Y_1(s), \ldots, Y_n(s)$ are available from Table 2.1 or the table in Appendix B. The method using partial fraction expansion is possible because the inverse Laplace transformation is a linear operation, so:
$$\mathcal L^{-1}\{F_1(s) + F_2(s)\} = \mathcal L^{-1}\{F_1(s)\} + \mathcal L^{-1}\{F_2(s)\} \qquad (4.32)$$
The procedure to compute the output signal using the Laplace transformation now consists of three steps:
Step 1: Compute the Laplace transform F(s) of the forcing signal f(t). We can use Equation (2.8) or we can use Table 2.1, where for a number of known signals the Laplace transforms are given.
Step 2: In the second step we substitute (4.26) and obtain an expression for the Laplace transform Y(s).
Step 3: Finally, split Y(s) up in pieces by the so-called partial fraction expansion. For every term we apply the inverse Laplace transform using Table 2.1 or the table in Appendix B to retrieve the output signal y(t) of the original problem.

Partial fraction expansion

In the previous paragraph we discussed a procedure to compute the output signal of a system using the Laplace transformation. In step 3 the procedure tells us to split Y(s) up into components that occur in Table 2.1. This splitting up can be done using partial fraction expansion.
Let the Laplace transform Y(s) of a signal y(t) be given by the ratio of two polynomials:
$$Y(s) = \frac{b_1 s^m + b_2 s^{m-1} + \ldots + b_m s + b_{m+1}}{s^n + a_1 s^{n-1} + \ldots + a_{n-1}s + a_n}$$
Let $p_1, \ldots, p_n$ be the roots of the polynomial $a(s) = s^n + a_1 s^{n-1} + \ldots + a_{n-1}s + a_n$. Then a(s) can be written as
$$a(s) = (s-p_1)(s-p_2)\cdots(s-p_{n-1})(s-p_n) = \prod_{i=1}^n (s-p_i)$$
We can then rewrite Y(s) as a sum of partial fractions:
$$Y(s) = \frac{C_1}{s-p_1} + \frac{C_2}{s-p_2} + \ldots + \frac{C_n}{s-p_n}$$
where the coefficients $C_1, \ldots, C_n$ can be determined by
$$C_i = (s-p_i)\,Y(s)\big|_{s=p_i},\quad i = 1, \ldots, n \qquad (4.33)$$
If we have a multiple pole in s = q with multiplicity r we obtain
$$a(s) = (s-q)^r(s-p_{r+1})\cdots(s-p_n)$$
Now we can rewrite Y(s) as a sum of partial fractions:
$$Y(s) = \frac{\gamma_0}{(s-q)^r} + \frac{\gamma_1}{(s-q)^{r-1}} + \ldots + \frac{\gamma_{r-1}}{s-q} + \frac{C_{r+1}}{s-p_{r+1}} + \ldots + \frac{C_n}{s-p_n}$$
where the $C_i$'s are determined using Equation (4.33), and the $\gamma_k$'s can be computed using:
$$\gamma_k = \frac{1}{k!}\,\frac{d^k}{ds^k}\Bigl[(s-q)^r\,Y(s)\Bigr]_{s=q},\quad\text{for } k = 0, \ldots, r-1$$
If we have a complex pole in $p_1 = -\sigma + j\omega$ then we will also find the complex conjugate $p_2 = -\sigma - j\omega$ as a pole. The corresponding coefficients $C_1$ and $C_2$ will be complex conjugates of each other, so if $C_1 = \alpha + j\beta$, then $C_2 = \alpha - j\beta$ and so
$$\frac{C_1}{s-p_1} + \frac{C_2}{s-p_2} = 2\alpha\,\frac{s+\sigma}{(s+\sigma)^2+\omega^2} - 2\beta\,\frac{\omega}{(s+\sigma)^2+\omega^2}$$
This leads to the solution
$$y(t) = 2\alpha\,\mathcal L^{-1}\Bigl\{\frac{s+\sigma}{(s+\sigma)^2+\omega^2}\Bigr\} - 2\beta\,\mathcal L^{-1}\Bigl\{\frac{\omega}{(s+\sigma)^2+\omega^2}\Bigr\} = 2\alpha\,e^{-\sigma t}\cos(\omega t) - 2\beta\,e^{-\sigma t}\sin(\omega t)$$
Example 4.7 Consider the function
$$Y(s) = 6\,\frac{(s+2)(s+4)}{s(s+1)(s+3)}$$
We can rewrite it as a sum of partial fractions:
$$Y(s) = \frac{C_1}{s} + \frac{C_2}{s+1} + \frac{C_3}{s+3}$$
and using Equation (4.33) we can compute the coefficients
$$C_1 = 16,\qquad C_2 = -9,\qquad C_3 = -1$$
and the output signal y(t) is now given by
$$y(t) = 16\,\mathcal L^{-1}\Bigl\{\frac1s\Bigr\} - 9\,\mathcal L^{-1}\Bigl\{\frac{1}{s+1}\Bigr\} - \mathcal L^{-1}\Bigl\{\frac{1}{s+3}\Bigr\} = 16 - 9\,e^{-t} - e^{-3t}$$
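Partial fraction coefficients can also be computed numerically. The sketch below (an illustration, not part of the original notes) applies scipy.signal.residue to the Y(s) of Example 4.7, after expanding numerator and denominator into polynomial coefficients.

```python
from scipy import signal

# Y(s) = 6 (s+2)(s+4) / ( s (s+1)(s+3) )  =  (6s^2 + 36s + 48) / (s^3 + 4s^2 + 3s)
num = [6.0, 36.0, 48.0]
den = [1.0, 4.0, 3.0, 0.0]

r, p, k = signal.residue(num, den)        # residues, poles, direct polynomial term
for ri, pi in zip(r, p):
    print(f"{ri.real:6.2f} / (s - ({pi.real:.0f}))")
# expected terms 16/s, -9/(s+1), -1/(s+3), consistent with y(t) = 16 - 9 e^{-t} - e^{-3t}
```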
Example 4.8 Consider the function
$$Y(s) = \frac{s+3}{(s+1)(s+2)^2}$$
This function has a multiple pole at s = −2 with multiplicity 2. We can rewrite Y(s) as a sum of partial fractions:
$$Y(s) = \frac{C_1}{s+1} + \frac{C_2}{s+2} + \frac{C_3}{(s+2)^2}$$
and using Equation (4.33) we can compute the coefficients:
$$Y(s) = \frac{2}{s+1} - \frac{2}{s+2} - \frac{1}{(s+2)^2}$$
and so the output signal is given by
$$y(t) = 2\,\mathcal L^{-1}\Bigl\{\frac{1}{s+1}\Bigr\} - 2\,\mathcal L^{-1}\Bigl\{\frac{1}{s+2}\Bigr\} - \mathcal L^{-1}\Bigl\{\frac{1}{(s+2)^2}\Bigr\} = 2\,e^{-t} - 2\,e^{-2t} - t\,e^{-2t}$$
Example 4.9 Consider the function
$$Y(s) = \frac{10}{s(s^2 + 2s + 5)}$$
This function has a complex pole pair at $s = -1 \pm j2$. We can rewrite it as a sum of partial fractions:
$$Y(s) = \frac{C_1}{s} + \frac{C_2(s+1)}{(s+1)^2+2^2} + \frac{C_3\cdot 2}{(s+1)^2+2^2}$$
and using Equation (4.33) we can compute the coefficients:
$$Y(s) = \frac{2}{s} - \frac{2s+4}{s^2+2s+5} = \frac{2}{s} - \frac{2(s+1)}{(s+1)^2+2^2} - \frac{2}{(s+1)^2+2^2}$$
and so the output signal becomes
$$y(t) = 2\,\mathcal L^{-1}\Bigl\{\frac1s\Bigr\} - 2\,\mathcal L^{-1}\Bigl\{\frac{s+1}{(s+1)^2+2^2}\Bigr\} - \mathcal L^{-1}\Bigl\{\frac{2}{(s+1)^2+2^2}\Bigr\} = 2 - 2\,e^{-t}\cos 2t - e^{-t}\sin 2t$$

4.4  Impulse response model: convolution

In this section we will show that the time response of a linear time-invariant system can be completely characterized in terms of the unit impulse response of the system. Recall that the impulse response of a system can be determined by exciting the system with a unit impulse $\delta(t)$ and observing the corresponding output y(t). We will denote this impulse response as h(t).

[Figure 4.4: The impulse response h(t): an impulse input u(t) = δ(t) applied to the system gives the output y(t) = h(t).]

To develop the theory of convolution we will start by considering the rectangular function $\delta_T(t)$, $t \in \mathbb{R}$, for a small scalar value T > 0 as defined in Section 1.1. First note that
$$\delta_T(t) = \frac{1}{T}\bigl(u_s(t) - u_s(t-T)\bigr)$$
This means that using the results from Section 4.2 we can compute the response y(t) for the input $u(t) = \delta_T(t)$. We denote this response as $h_T(t) = y(t)$ (see Figure 4.5).
Using linearity and time-invariance we find that multiplying the input by a constant $\alpha \in \mathbb{R}$ and shifting the input in time by $\tau > 0$ gives an output with the same multiplication and time shift:
$$u(t) = \alpha\,\delta_T(t-\tau)\quad\Longrightarrow\quad y(t) = \alpha\,h_T(t-\tau)$$

[Figure 4.5: Response for a rectangular function: the pulse u(t) = δ_T(t) of height 1/T and width T gives the response y(t) = h_T(t); a scaled and shifted pulse u(t) = α δ_T(t−τ) gives the response y(t) = α h_T(t−τ).]

Now assume that we aim to compute the response y(t) of a linear time-invariant system for an arbitrary input u(t). We begin our derivation by considering a staircase approximation of the input u(t):
$$u(t) \approx u_T(t) = \sum_k u(kT)\,T\,\delta_T(t-kT) \qquad (4.34)$$
as illustrated in Figure 4.6.

[Figure 4.6: Staircase approximation of an input signal.]

Now we will consider the response $y_T(t)$ to the staircase signal $u_T(t)$. The staircase signal $u_T$ is built up from a number of rectangular functions $T\,\delta_T$, shifted in time by kT and multiplied by u(kT) for $k \in \mathbb{Z}$. As was derived before, if $h_T(t)$ is the output signal for an input signal $\delta_T(t)$, then shifting in time by kT and multiplying by u(kT) gives that for an input $u(kT)\,T\,\delta_T(t-kT)$ we obtain an output $u(kT)\,T\,h_T(t-kT)$ for any $k \in \mathbb{Z}$. Now the approximation $u_T$ from equation (4.34) is a linear combination of delayed rectangular functions. The output corresponding to this input can be expressed as a linear combination of delayed responses $h_T$ as follows:
$$y_T(t) = \sum_k u(kT)\,T\,h_T(t-kT) \qquad (4.35)$$

[Figure 4.7: Computation of the system response using a staircase approximation.]

So,
$$u(t) \approx u_T(t) = \sum_k u(kT)\,T\,\delta_T(t-kT)\quad\Longrightarrow\quad y(t) \approx y_T(t) = \sum_k u(kT)\,T\,h_T(t-kT)$$
As we let T approach 0, the approximation will improve for smaller T, and in the limit $y_T$ will be equal to the true y(t):
$$y(t) = \lim_{T\to0} y_T(t) = \lim_{T\to0}\sum_k u(kT)\,T\,h_T(t-kT)$$
In the limit we can replace $T\,\delta_T(t-kT)$ by $\delta(t-\tau)\,d\tau$, and $T\,h_T(t-kT)$ by $h(t-\tau)\,d\tau$, where $\delta(t)$ is the unit impulse function and h(t) is the impulse response. Moreover, the sum will become an integral and we therefore obtain
$$y(t) = \int u(\tau)\,h(t-\tau)\,d\tau \qquad (4.36)$$
This is called the convolution integral. The convolution of two signals u and h will be represented symbolically as
$$y(t) = u(t) * h(t) \qquad (4.37)$$
A basic property of the convolution is that it is commutative, so
$$y(t) = u(t)*h(t) = h(t)*u(t) = \int h(\tau)\,u(t-\tau)\,d\tau \qquad (4.38)$$
This means that the input-output behavior of a linear time-invariant system can be described by either one of the following convolution integrals:
$$y(t) = \int u(\tau)\,h(t-\tau)\,d\tau \qquad\text{or}\qquad y(t) = \int u(t-\tau)\,h(\tau)\,d\tau$$
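The convolution integral can be approximated on a grid exactly as in the staircase argument above. A minimal sketch (not part of the original notes), using the first-order impulse response h(t) = e^{-t} and a unit-step input so that the result can be compared with the known step response 1 − e^{-t}:

```python
import numpy as np

# Discrete approximation of y(t) = (u * h)(t) for a system with impulse
# response h(t) = e^{-t} (t >= 0) driven by a unit-step input u(t) = u_s(t).
dt = 1e-3
t = np.arange(0.0, 8.0, dt)
h = np.exp(-t)                        # samples of the impulse response
u = np.ones_like(t)                   # unit step

y = np.convolve(u, h)[:len(t)] * dt   # Riemann-sum approximation of the integral
y_exact = 1.0 - np.exp(-t)            # step response of the same system
print("max error:", np.max(np.abs(y - y_exact)))   # small, of order dt
```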

4.5  Analysis of state systems

State system time response

In this section we examine the time response of a linear time-invariant state model in the standard state equation form
$$\dot x(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t) \qquad (4.39)$$
Before we compute this time response we first introduce the state transformation, which gives us the A, B, C and D matrices for a different set of state variables. A special state transformation is the modal transformation, which gives us a state system description in which the A-matrix is diagonal. This will facilitate the computations dramatically.

State transformation
There is no unique set of state variables that describes a given system; many different sets of variables may be selected to yield a complete system description.
Example 4.10 Consider the fluid flow system of Section 2.2, in which the dynamics between the two levels in the vessels and the inflow are described as follows:
$$\dot h_1(t) = \frac{1}{A_1}w_{in}(t) - \frac{g}{A_1R_1}h_1(t) + \frac{g}{A_1R_1}h_2(t)$$
$$\dot h_2(t) = \frac{g}{A_2R_1}h_1(t) - \frac{g(R_1+R_2)}{A_2R_1R_2}h_2(t).$$
Note that by choosing the state as
$$x(t) = \begin{bmatrix}h_1(t)\\ h_2(t)\end{bmatrix}$$
and as an input $u(t) = w_{in}(t)$, we obtain the state system with state equation
$$\dot x(t) = \begin{bmatrix}-\frac{g}{A_1R_1} & \frac{g}{A_1R_1}\\ \frac{g}{A_2R_1} & -\frac{g(R_1+R_2)}{A_2R_1R_2}\end{bmatrix}x(t) + \begin{bmatrix}\frac{1}{A_1}\\ 0\end{bmatrix}u(t)$$
Now suppose we are interested in the mean level $y(t) = (h_1(t)+h_2(t))/2$; then the output equation is given by
$$y(t) = \begin{bmatrix}0.5 & 0.5\end{bmatrix}x(t) + 0\cdot u(t)$$
Note that we could have chosen the states in a different way, for example
$$x^*(t) = \begin{bmatrix}(h_1(t)+h_2(t))/2\\ (h_1(t)-h_2(t))/2\end{bmatrix}.$$
In other words, the first state gives the average water level and the second state gives the difference divided by two. Working out the derivatives of the new state variables (using $h_1 = x_1^* + x_2^*$ and $h_2 = x_1^* - x_2^*$), we obtain a new set of equations:
$$\frac{\dot h_1(t)+\dot h_2(t)}{2} = -\frac{g}{2A_2R_2}\,\frac{h_1(t)+h_2(t)}{2} + \Bigl(\frac{g}{A_2R_1} - \frac{g}{A_1R_1} + \frac{g}{2A_2R_2}\Bigr)\frac{h_1(t)-h_2(t)}{2} + \frac{1}{2A_1}w_{in}(t)$$
$$\frac{\dot h_1(t)-\dot h_2(t)}{2} = \frac{g}{2A_2R_2}\,\frac{h_1(t)+h_2(t)}{2} - \Bigl(\frac{g}{A_1R_1} + \frac{g}{A_2R_1} + \frac{g}{2A_2R_2}\Bigr)\frac{h_1(t)-h_2(t)}{2} + \frac{1}{2A_1}w_{in}(t)$$
so we obtain the state system with state equation
$$\dot x^*(t) = \begin{bmatrix}-\frac{g}{2A_2R_2} & \frac{g}{A_2R_1} - \frac{g}{A_1R_1} + \frac{g}{2A_2R_2}\\ \frac{g}{2A_2R_2} & -\frac{g}{A_1R_1} - \frac{g}{A_2R_1} - \frac{g}{2A_2R_2}\end{bmatrix}x^*(t) + \begin{bmatrix}\frac{1}{2A_1}\\ \frac{1}{2A_1}\end{bmatrix}u(t)$$
and the output equation:
$$y(t) = \begin{bmatrix}1 & 0\end{bmatrix}x^*(t) + 0\cdot u(t)$$
The first system with state x(t) and the second system with state $x^*(t)$ both describe the same physical system with the same input $u(t) = w_{in}(t)$ and the same output $y(t) = (h_1(t)+h_2(t))/2$, but the state description is different because of a different choice of states.

In the previous example we have introduced a state transformation
$$x(t) = \begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}x^*(t)$$
or
$$x(t) = T\,x^*(t) \qquad (4.40)$$
We will now consider how the system matrices will change if we introduce a state transformation (4.40) where T is non-singular (invertible). First note that the time-derivative of the state is given by
$$\dot x(t) = T\,\dot x^*(t)$$
Consider the original system
$$\dot x(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t) \qquad (4.41)$$
Substitution of $x(t) = T\,x^*(t)$ and $\dot x(t) = T\,\dot x^*(t)$ into (4.41) leads to
$$T\,\dot x^*(t) = A\,T\,x^*(t) + B\,u(t), \qquad y(t) = C\,T\,x^*(t) + D\,u(t)$$
Multiplying the first equation by the inverse $T^{-1}$ gives us the equations
$$\dot x^*(t) = T^{-1}A\,T\,x^*(t) + T^{-1}B\,u(t), \qquad y(t) = C\,T\,x^*(t) + D\,u(t) \qquad (4.42)$$
By defining
$$x^*(t) = T^{-1}x(t),\quad A^* = T^{-1}AT,\quad B^* = T^{-1}B,\quad C^* = CT,\quad D^* = D$$
we arrive at the state system
$$\dot x^*(t) = A^*x^*(t) + B^*u(t), \qquad y(t) = C^*x^*(t) + D^*u(t) \qquad (4.43)$$
which has the same input-output behavior as (4.41).

Modal transformation
Consider system (4.39). We will now take a closer look at the matrix A, which is called the system matrix of the system. The values $\lambda_i$ satisfying the equation
$$\lambda_i\,m_i = A\,m_i \quad\text{for } m_i \neq 0 \qquad (4.44)$$
are known as the eigenvalues of A and the corresponding column vectors $m_i$ are defined as the eigenvectors. Equation (4.44) can be rewritten as:
$$(\lambda_i I - A)\,m_i = 0 \qquad (4.45)$$
The condition for a non-trivial solution of such a set of linear equations is that
$$\det(\lambda_i I - A) = 0 \qquad (4.46)$$
which is defined as the characteristic polynomial of the A matrix. Eq. (4.46) may be written as
$$\lambda^n + a_{n-1}\lambda^{n-1} + \ldots + a_1\lambda + a_0 = 0 \qquad (4.47)$$
or, in factored form in terms of its roots $\lambda_1, \ldots, \lambda_n$,
$$(\lambda-\lambda_1)(\lambda-\lambda_2)\cdots(\lambda-\lambda_n) = 0 \qquad (4.48)$$
Let us consider the case that all eigenvalues are distinct. Define
$$M = \begin{bmatrix}m_1 & m_2 & \cdots & m_n\end{bmatrix} \qquad (4.49)$$
and
$$\Lambda = \begin{bmatrix}\lambda_1 & & 0\\ & \ddots & \\ 0 & & \lambda_n\end{bmatrix}, \qquad (4.50)$$
then the eigenvalue decomposition of the A matrix is given by
$$A = M\,\Lambda\,M^{-1}$$
and system (4.39) can be transformed to the diagonal form
$$\dot x^*(t) = A^*x^*(t) + B^*u(t), \qquad y(t) = C^*x^*(t) + D^*u(t) \qquad (4.51)$$
by choosing M as the state transformation matrix (see Section 2.4):
$$x^*(t) = M^{-1}x(t),\quad A^* = M^{-1}AM = \Lambda,\quad B^* = M^{-1}B,\quad C^* = CM,\quad D^* = D$$
This transformation is called the modal transformation.
Example 4.11 Consider the second-order system
$$\dot x(t) = \begin{bmatrix}-6 & 6\\ -2 & 1\end{bmatrix}x(t) + \begin{bmatrix}1\\ 1\end{bmatrix}u(t), \qquad y(t) = \begin{bmatrix}1 & 0\end{bmatrix}x(t) + 4\,u(t)$$
An eigenvalue decomposition of the matrix A gives us eigenvalues $\lambda_1 = -2$ and $\lambda_2 = -3$ with eigenvectors
$$m_1 = \begin{bmatrix}3\\ 2\end{bmatrix},\qquad m_2 = \begin{bmatrix}2\\ 1\end{bmatrix}$$
and so
$$\Lambda = M^{-1}AM = \begin{bmatrix}-1 & 2\\ 2 & -3\end{bmatrix}\begin{bmatrix}-6 & 6\\ -2 & 1\end{bmatrix}\begin{bmatrix}3 & 2\\ 2 & 1\end{bmatrix} = \begin{bmatrix}-2 & 0\\ 0 & -3\end{bmatrix}$$
The system matrices after the modal transformation become
$$A^* = M^{-1}AM = \Lambda = \begin{bmatrix}-2 & 0\\ 0 & -3\end{bmatrix},\quad B^* = M^{-1}B = \begin{bmatrix}1\\ -1\end{bmatrix},\quad C^* = CM = \begin{bmatrix}3 & 2\end{bmatrix},\quad D^* = D = 4$$
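The modal transformation of Example 4.11 can be reproduced numerically with an eigenvalue decomposition. In the sketch below (not part of the original notes) numpy normalizes the eigenvectors, so B* and C* differ from the hand computation by a scaling of each eigenvector, while Λ is identical.

```python
import numpy as np

A = np.array([[-6.0, 6.0],
              [-2.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

lam, M = np.linalg.eig(A)             # eigenvalues and (normalized) eigenvectors
Minv = np.linalg.inv(M)

print(lam)                            # -2 and -3
print(np.round(Minv @ A @ M, 10))     # the diagonal matrix Lambda
print(Minv @ B)                       # B*, up to the scaling of the eigenvectors
print(C @ M)                          # C*, up to the same scaling
```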

In the case of multiple eigenvalues, the modal transformation is not always possible. One possible solution is to transform the A matrix into a modified-diagonal form or Jordan form (see the book by Kailath [4]). The A matrix will then be almost diagonal in the sense that its only non-zero entries lie on the diagonal and the superdiagonal (a superdiagonal entry is one that is directly above and to the right of the main diagonal).

Homogeneous state response
The computation of the time response of a state system consists of two steps: First the state-variable response x(t) is determined by solving the first-order state equation (4.39.1), and then the state response is substituted into the algebraic output equation (4.39.2) in order to compute y(t). The state-variable response of a system described by (4.39) with zero input and an arbitrary set of initial conditions x(0) is the solution of the system of n homogeneous first-order differential equations
$$\dot x(t) = A\,x(t), \qquad x(0) = x_0 \qquad (4.52)$$
In the scalar case for $a \in \mathbb{R}$ (see Section 3.1) we obtain
$$x(t) = x(0)\,e^{a t}$$
For the matrix case we can write the same solution:
$$x(t) = e^{A t}x(0)$$
where
$$e^{A t} = I + A\,t + \frac{(At)^2}{2!} + \ldots$$
The solution is often written as:
$$x(t) = \Phi(t)\,x(0) \qquad (4.53)$$
where $\Phi(t) = e^{At}$ is defined as the state transition matrix. To compute this state transition matrix we need to compute the exponential of a matrix. This can be done by bringing the model into the modal form. We compute M and $\Lambda$ according to (4.49) and (4.50):
$$\Phi(t) = e^{At} = I + At + \frac{(At)^2}{2!} + \ldots = MM^{-1} + M\Lambda t\,M^{-1} + \frac{M\Lambda^2t^2M^{-1}}{2!} + \ldots = M\Bigl(I + \Lambda t + \frac{(\Lambda t)^2}{2!} + \ldots\Bigr)M^{-1} = M\,e^{\Lambda t}\,M^{-1}$$
and so
$$\Phi(t) = M\,e^{\Lambda t}\,M^{-1} \qquad (4.54)$$
For $e^{\Lambda t}$ we derive:
$$e^{\Lambda t} = I + \Lambda t + \frac{(\Lambda t)^2}{2!} + \ldots = \begin{bmatrix}1+\lambda_1 t+\frac{(\lambda_1 t)^2}{2!}+\ldots & & 0\\ & \ddots & \\ 0 & & 1+\lambda_n t+\frac{(\lambda_n t)^2}{2!}+\ldots\end{bmatrix} = \begin{bmatrix}e^{\lambda_1 t} & & 0\\ & \ddots & \\ 0 & & e^{\lambda_n t}\end{bmatrix}$$

Example 4.12 Consider the first-order state system
$$\dot x(t) = -3\,x(t) + 2\,u(t), \qquad y(t) = 5\,x(t) + 2\,u(t).$$
The homogeneous state differential equation is given by
$$\dot x(t) = -3\,x(t)$$
It follows that the 1 × 1 state transition matrix is given by
$$\Phi(t) = e^{-3t},\quad t > 0,\ t \in \mathbb{R}.$$
Example 4.13 Consider the second-order system of Example 4.11. The homogeneous state differential equation is
$$\dot x(t) = A\,x(t) = \begin{bmatrix}-6 & 6\\ -2 & 1\end{bmatrix}x(t)$$
Then the 2 × 2 state transition matrix is given by
$$\Phi(t) = e^{At} = \exp\Bigl(\begin{bmatrix}-6 & 6\\ -2 & 1\end{bmatrix}t\Bigr)$$
Using the modal transformation we can compute
$$\Phi(t) = M\,e^{\Lambda t}M^{-1} = \begin{bmatrix}3 & 2\\ 2 & 1\end{bmatrix}\begin{bmatrix}e^{-2t} & 0\\ 0 & e^{-3t}\end{bmatrix}\begin{bmatrix}-1 & 2\\ 2 & -3\end{bmatrix} = \begin{bmatrix}-3e^{-2t}+4e^{-3t} & 6e^{-2t}-6e^{-3t}\\ -2e^{-2t}+2e^{-3t} & 4e^{-2t}-3e^{-3t}\end{bmatrix}$$

The state transition matrix $\Phi(t)$ has the following properties:

1. $\Phi(0) = I$.
2. $x(t_1) = \Phi(t_1)\,x(0)$.
3. $\Phi(-t) = \Phi^{-1}(t)$, and so $x(0) = \Phi(-t_1)x(t_1) = \Phi^{-1}(t_1)x(t_1)$.
4. $\Phi(t_1)\Phi(t_2) = \Phi(t_1+t_2)$, and so
$$x(t_2) = \Phi(t_2)x(0) = \Phi(t_2)\Phi(-t_1)x(t_1) = \Phi(t_2-t_1)x(t_1)$$
or
$$x(t_2) = \Phi(t_2-t_1)\,x(t_1)$$
5. If A is a diagonal matrix, then $e^{At}$ is also a diagonal matrix and each diagonal element is equal to the exponential of the corresponding diagonal element of the A matrix times t, that is, $e^{a_{ii}t}$:
$$\exp\Bigl(\begin{bmatrix}a_{11} & & 0\\ & \ddots & \\ 0 & & a_{nn}\end{bmatrix}t\Bigr) = \begin{bmatrix}e^{a_{11}t} & & 0\\ & \ddots & \\ 0 & & e^{a_{nn}t}\end{bmatrix}$$
The forced response of a state system
Let us now determine the solution of the inhomogeneous state equation
$$\dot x(t) = A\,x(t) + B\,u(t) \qquad (4.55)$$
We can rewrite this as
$$\dot x(t) - A\,x(t) = B\,u(t)$$
Multiplying by $e^{-At}$ gives
$$e^{-At}\bigl(\dot x(t) - A\,x(t)\bigr) = e^{-At}B\,u(t)$$
For the left-hand side we can write:
$$e^{-At}\bigl(\dot x(t) - A\,x(t)\bigr) = \frac{d}{dt}\bigl(e^{-At}x(t)\bigr)$$
and so
$$\frac{d}{dt}\bigl(e^{-At}x(t)\bigr) = e^{-At}B\,u(t)$$
Integrating both sides gives us
$$\int_0^t\frac{d}{d\tau}\bigl(e^{-A\tau}x(\tau)\bigr)\,d\tau = \int_0^t e^{-A\tau}B\,u(\tau)\,d\tau$$
or
$$e^{-At}x(t) - e^{-A\cdot0}x(0) = \int_0^t e^{-A\tau}B\,u(\tau)\,d\tau$$
and so
$$x(t) = e^{At}x(0) + e^{At}\int_0^t e^{-A\tau}B\,u(\tau)\,d\tau$$
This results in
$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}B\,u(\tau)\,d\tau \qquad (4.56)$$

The system output response of a state system
The output response of a linear time-invariant system is easily derived by substitution of (4.56) into (4.39.2):
$$y(t) = C\,x(t) + D\,u(t) = C\Bigl(e^{At}x(0) + \int_0^t e^{A(t-\tau)}B\,u(\tau)\,d\tau\Bigr) + D\,u(t)$$
So the forced output response is given by
$$y(t) = C\,e^{At}x(0) + \int_0^t C\,e^{A(t-\tau)}B\,u(\tau)\,d\tau + D\,u(t) \qquad (4.57)$$
Example 4.14 Consider the first-order system of Example 4.12. For this system the forced output response is
$$y(t) = 5\,e^{-3t}x(0) + \int_0^t 10\,e^{-3(t-\tau)}u(\tau)\,d\tau + 2\,u(t)$$
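Formula (4.57) is exactly what a linear simulator evaluates. The sketch below (an illustration, not part of the original notes) simulates the first-order system of Examples 4.12 and 4.14 with scipy.signal.lsim for a unit-step input and zero initial state, and compares the result with the closed-form response obtained from (4.57).

```python
import numpy as np
from scipy import signal

# Examples 4.12 / 4.14:  x' = -3x + 2u,  y = 5x + 2u,  x(0) = 0,  u(t) = u_s(t)
sys = signal.StateSpace([[-3.0]], [[2.0]], [[5.0]], [[2.0]])
t = np.linspace(0.0, 3.0, 301)
u = np.ones_like(t)

tout, y, x = signal.lsim(sys, U=u, T=t)
y_exact = (10.0/3.0) * (1.0 - np.exp(-3.0*t)) + 2.0    # from (4.57) with x(0) = 0
print(np.max(np.abs(y - y_exact)))                     # small numerical error
```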

4.6  Relation between various system descriptions

In this section we summarize the different types of system description for linear time-invariant systems.

The transfer function of linear time-invariant state systems
Consider the state system
$$\dot x(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t).$$
Note that x(t) is a vector. The Laplace transform of a vector x(t) is given by the Laplace transform of its entries:
$$\mathcal L\{x(t)\} = \mathcal L\left\{\begin{bmatrix}x_1(t)\\ x_2(t)\\ \vdots\\ x_n(t)\end{bmatrix}\right\} = \begin{bmatrix}X_1(s)\\ X_2(s)\\ \vdots\\ X_n(s)\end{bmatrix} = X(s)$$
where $\mathcal L\{x_i(t)\} = X_i(s)$ for $i = 1, \ldots, n$. For the derivative $\dot x(t)$ it follows (for a system that is initially at rest):
$$\mathcal L\{\dot x(t)\} = \begin{bmatrix}\mathcal L\{\dot x_1(t)\}\\ \vdots\\ \mathcal L\{\dot x_n(t)\}\end{bmatrix} = \begin{bmatrix}s\,X_1(s)\\ \vdots\\ s\,X_n(s)\end{bmatrix} = s\,I\,X(s)$$
Let $\mathcal L\{u(t)\} = U(s)$ and $\mathcal L\{y(t)\} = Y(s)$; then we can write the state equation in the Laplace domain as $sI\,X(s) = A\,X(s) + B\,U(s)$, or $(sI - A)X(s) = B\,U(s)$. If we assume that the matrix $(sI - A)$ is invertible, we obtain
$$X(s) = (sI - A)^{-1}B\,U(s).$$
Substitution into the second state equation gives us
$$Y(s) = C\,X(s) + D\,U(s) = C\,(sI - A)^{-1}B\,U(s) + D\,U(s) \qquad (4.58)$$
and so the transfer function of state system (4.39) is given by
$$H(s) = C\,(sI - A)^{-1}B + D \qquad (4.59)$$
Remark:
Note that the transfer function is an input-output description of the system. A state transformation does not influence the input-output behavior of the system. We therefore may use the modal form of the state space description of the system, and the transfer function becomes:
$$H(s) = C^*(sI - \Lambda)^{-1}B^* + D^*$$
which can easily be computed, because of the diagonal form of $\Lambda$:
$$(sI - \Lambda)^{-1} = \begin{bmatrix}s-\lambda_1 & & 0\\ & \ddots & \\ 0 & & s-\lambda_n\end{bmatrix}^{-1} = \begin{bmatrix}(s-\lambda_1)^{-1} & & 0\\ & \ddots & \\ 0 & & (s-\lambda_n)^{-1}\end{bmatrix}$$
Example 4.15 Consider the system of Example 4.11 and Example 4.13. The transfer function is given by
$$H(s) = C^*(sI - \Lambda)^{-1}B^* + D^* = \begin{bmatrix}3 & 2\end{bmatrix}\Bigl(sI - \begin{bmatrix}-2 & 0\\ 0 & -3\end{bmatrix}\Bigr)^{-1}\begin{bmatrix}1\\ -1\end{bmatrix} + 4 = \begin{bmatrix}3 & 2\end{bmatrix}\begin{bmatrix}\frac{1}{s+2} & 0\\ 0 & \frac{1}{s+3}\end{bmatrix}\begin{bmatrix}1\\ -1\end{bmatrix} + 4 = \frac{3}{s+2} - \frac{2}{s+3} + 4 = \frac{4s^2 + 21s + 29}{s^2 + 5s + 6} \qquad (4.60)$$
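The same transfer function is obtained from the original (untransformed) matrices of Example 4.11, since a state transformation does not change the input-output behavior. A minimal check (not part of the original notes) with scipy.signal.ss2tf:

```python
from scipy import signal

A = [[-6.0, 6.0],
     [-2.0, 1.0]]
B = [[1.0], [1.0]]
C = [[1.0, 0.0]]
D = [[4.0]]

num, den = signal.ss2tf(A, B, C, D)
print(num)   # approx [[4., 21., 29.]]  ->  4 s^2 + 21 s + 29
print(den)   # approx [1., 5., 6.]      ->  s^2 + 5 s + 6
```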


The relation between state systems and input-output systems
Compute the Laplace transforms $X(s) = \mathcal L\{x(t)\}$, $U(s) = \mathcal L\{u(t)\}$, and $Y(s) = \mathcal L\{y(t)\}$ and let us consider the relation (4.58):
$$Y(s) = \bigl(C\,(sI - A)^{-1}B + D\bigr)U(s)$$
The inverse of an n × n matrix can be computed as follows (see Appendix A):
$$M^{-1} = \frac{1}{\det M}\,\mathrm{adj}\,M$$
and so
$$Y(s) = \Bigl(\frac{C\,\mathrm{adj}(sI-A)\,B}{\det(sI-A)} + D\Bigr)U(s) = \frac{C\,\mathrm{adj}(sI-A)\,B + D\,\det(sI-A)}{\det(sI-A)}\,U(s) \qquad (4.61)\text{--}(4.62)$$
From here we can derive the input-output equation
$$\det(sI-A)\,Y(s) = \bigl(C\,\mathrm{adj}(sI-A)\,B + D\,\det(sI-A)\bigr)U(s) \qquad (4.63)$$
Note that $\det(sI-A)$ and $C\,\mathrm{adj}(sI-A)\,B + D\,\det(sI-A)$ are polynomials in the variable s and so by inverse Laplace transformation we find the input-output differential equation.
Example 4.16 Consider the state system
$$\dot x(t) = \begin{bmatrix}2 & -3\\ 3 & -4\end{bmatrix}x(t) + \begin{bmatrix}1\\ 2\end{bmatrix}u(t), \qquad y(t) = \begin{bmatrix}1 & 0\end{bmatrix}x(t) + 2\,u(t). \qquad (4.64)$$
From
$$sI - A = \begin{bmatrix}s & 0\\ 0 & s\end{bmatrix} - \begin{bmatrix}2 & -3\\ 3 & -4\end{bmatrix} = \begin{bmatrix}s-2 & 3\\ -3 & s+4\end{bmatrix}$$
we compute the determinant and the adjoint of $sI - A$ as follows:
$$\det\begin{bmatrix}s-2 & 3\\ -3 & s+4\end{bmatrix} = (s-2)(s+4) - 3\cdot(-3) = s^2 + 2s + 1$$
$$\mathrm{adj}\begin{bmatrix}s-2 & 3\\ -3 & s+4\end{bmatrix} = \begin{bmatrix}s+4 & -3\\ 3 & s-2\end{bmatrix}$$
Furthermore
$$C\,\mathrm{adj}(sI-A)\,B + D\,\det(sI-A) = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}s+4 & -3\\ 3 & s-2\end{bmatrix}\begin{bmatrix}1\\ 2\end{bmatrix} + 2(s^2+2s+1) = \begin{bmatrix}s+4 & -3\end{bmatrix}\begin{bmatrix}1\\ 2\end{bmatrix} + (2s^2+4s+2) = (s-2) + (2s^2+4s+2) = 2s^2 + 5s$$
The input-output relation in the Laplace domain becomes
$$\det(sI-A)\,Y(s) = \bigl(C\,\mathrm{adj}(sI-A)\,B + D\,\det(sI-A)\bigr)U(s)$$
$$(s^2 + 2s + 1)\,Y(s) = (2s^2 + 5s)\,U(s)$$
which gives us the final input-output differential equation
$$\frac{d^2y(t)}{dt^2} + 2\frac{dy(t)}{dt} + y(t) = 2\frac{d^2u(t)}{dt^2} + 5\frac{du(t)}{dt}$$
The transformation of an input-output description into a state description can be done in many ways. Note that the state representation is not unique and therefore the transformation is not unique. We will now discuss one specific realization, which is called the controllability canonical form.
Consider the differential equation
$$\frac{d^ny(t)}{dt^n} + a_1\frac{d^{n-1}y(t)}{dt^{n-1}} + \ldots + a_{n-1}\frac{dy(t)}{dt} + a_n y(t) = b_0\frac{d^nu(t)}{dt^n} + b_1\frac{d^{n-1}u(t)}{dt^{n-1}} + \ldots + b_{n-1}\frac{du(t)}{dt} + b_n u(t)$$
This equation can be written in a state representation
$$\dot x(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t)$$
by choosing:
$$A = \begin{bmatrix}-a_1 & \cdots & -a_{n-1} & -a_n\\ 1 & & 0 & 0\\ & \ddots & & \vdots\\ 0 & & 1 & 0\end{bmatrix},\quad B = \begin{bmatrix}1\\ 0\\ \vdots\\ 0\end{bmatrix},\quad C = \begin{bmatrix}b_1-b_0a_1 & \cdots & b_{n-1}-b_0a_{n-1} & b_n-b_0a_n\end{bmatrix},\quad D = b_0 \qquad (4.65)$$
To prove that this is really a state system representing the input-output differential equation, we simply transform this state system into an input-output system using equation (4.63). We have
$$\det(sI-A) = \det\begin{bmatrix}s+a_1 & a_2 & \cdots & a_{n-1} & a_n\\ -1 & s & & & 0\\ & \ddots & \ddots & & \vdots\\ & & -1 & s & 0\\ 0 & & & -1 & s\end{bmatrix} = s^n + a_1s^{n-1} + \cdots + a_{n-1}s + a_n$$
Furthermore
$$\mathrm{adj}(sI-A) = \begin{bmatrix}s^{n-1} & * & \cdots & *\\ s^{n-2} & * & \cdots & *\\ \vdots & & & \vdots\\ s & * & \cdots & *\\ 1 & * & \cdots & *\end{bmatrix}$$
where the stars indicate that the values can be computed but are not relevant. Then
$$C\,\mathrm{adj}(sI-A)\,B = \begin{bmatrix}b_1-b_0a_1 & \cdots & b_{n-1}-b_0a_{n-1} & b_n-b_0a_n\end{bmatrix}\begin{bmatrix}s^{n-1}\\ s^{n-2}\\ \vdots\\ s\\ 1\end{bmatrix} = (b_1-b_0a_1)s^{n-1} + \ldots + (b_{n-1}-b_0a_{n-1})s + b_n - b_0a_n$$
and so
$$C\,\mathrm{adj}(sI-A)\,B + D\,\det(sI-A) = (b_1-b_0a_1)s^{n-1} + (b_2-b_0a_2)s^{n-2} + \ldots + (b_{n-1}-b_0a_{n-1})s + (b_n-b_0a_n) + b_0\bigl(s^n + a_1s^{n-1} + \cdots + a_{n-1}s + a_n\bigr) = b_0s^n + b_1s^{n-1} + b_2s^{n-2} + \ldots + b_{n-1}s + b_n$$
We obtain
$$s^nY(s) + a_1s^{n-1}Y(s) + \ldots + a_{n-1}sY(s) + a_nY(s) = b_0s^nU(s) + b_1s^{n-1}U(s) + \ldots + b_{n-1}sU(s) + b_nU(s)$$
This means that after an inverse Laplace transformation we end up with the original differential equation.

4.6. RELATION BETWEEN VARIOUS SYSTEM DESCRIPTIONS

93

Example 4.17 Consider the final input-output description of Example 4.16:


d y(t)
d2 u(t)
d u(t)
d2 y(t)
+
2
+
y(t)
=
2
+5
2
2
dt
dt
dt
dt
Using (4.65) we derive with n = 2:

B
C
D

 

a1 a2
2 1
=
=
,
1
0
1
0
 
1
=
0

 

= b1 b0 a1 b2 b0 a2 = 1 2 ,

  
= b0 = 2

Note that the system matrices of this realization are different from the system matrices in
(4.64). The two realizations are related by the state transformation matrix


1 2
T =
2 1
with inverse
T

1
=
3

1 2
2 1

We find that
x (t) = A x (t) + B u(t)
y(t) = C x (t) + D u(t)
with
x (t) = T 1 x(t)
A = T 1 AT
B = T 1 B
C = C T
D = D

The impulse response of a linear time-invariant state system


Let
u(t) = (t)

(4.66)
(4.67)

94

CHAPTER 4. GENERAL SYSTEM ANALYSIS

where (t) is the unit impulse function, defined in (1.2).


Following (4.57) we find
Z t
At
y(t) = Ce x(0) +
CeA(t ) B ( ) d + D(t)

(4.68)

Using property (1.4) we obtain:


Z t
CeA(t ) B ( ) d = CeA t B us (t)
0

Finally note that in the impulse response we usually assume the system initially at rest,
so x(0) = 0. Equation (4.68) now becomes
h(t) = CeA t B us (t) + D(t)

(4.69)

which is the impulse response of the linear time-invariant state system (4.39).
Remark:
Note that a state transformation does not influence the input-output behavior of the system. We therefore may use the diagonal form of the state space description of the system.
The impulse response becomes:
h(t) = C e t B us (t) + D (t)
= CMe t M 1 B us (t) + D(t)
which can easily be computed.
Example 4.18 Consider the second-order system of Example 4.11 and Example 4.13. Using the model transformation we can derive
h(t) = C e t B us (t) + D (t)




 e2t 0
1
= 3 2
us (t) + 4(t)
0 e3t
1
= (3e2t 2e3t )us (t) + 4(t)

(4.70)

The relation between the transfer function and the impulse response
One of the properties of the Laplace transformation is that the Laplace transform of a
convolution of two functions f1 and f2 results in the product of the Laplace transforms of
f1 and f2 :
L{f1 (t) f2 (t)} = L{f1 (t)}L{f2 (t)}

(4.71)

95

4.6. RELATION BETWEEN VARIOUS SYSTEM DESCRIPTIONS


Now recall the convolution integral to compute the output signal in the time domain
Z
y(t) = h(t) u(t) =
u( ) h(t ) d

Using property (4.71) we find:


Y (s) = L{y(t)} = L{h(t) u(t)} = L{h(t)}L{u(t)} = L{h(t)} U(s)
Comparing this result to (4.5) we find that
H(s) = L{h(t)} and h(t) = L1 {H(s)}

(4.72)

where H(s) is the transfer function of the system. This means that the Laplace transform
of the impulse response is equal to the transfer function.

Overview of all the relations

Eq. (4.69)

A, B, C, D

Eq. (4.63)
Eq. (4.65)

Eq. (4.59)

h(t)

Eq. (4.72)

H(s)

Eq. (4.6)

Differential
Equation

s=j

H(j)

Figure 4.8: Relations between various LTI system descriptions


The first description we discussed in Section 2.3 is the input-output differential equation:
dn y(t)
dn1 y(t)
d y(t)
+
a
+ . . . + an1
+ an y(t)
1
n
n1
dt
dt
dt
= b0

dm u(t)
dm1 u(t)
d u(t)
+
b
+ . . . + bm1
+ bm u(t)
1
m
m1
dt
dt
dt

96

CHAPTER 4. GENERAL SYSTEM ANALYSIS

The second description we discussed in Section 2.4 is the state description:



x(t)
= A x(t) + B u(t) ,
y(t) = C x(t) + D u(t),
The third description we discussed in Section 4.1 is the transfer function description:
H(s) =

b0 sm + b1 sm1 + . . . + bm2 s2 + bm1 s + bm


sn + a1 sn1 + . . . + an2 s2 + an1 s + an

The last description we discussed in Section 4.4 is the convolution description:


Z
y(t) = u(t) h(t) = h(t) u(t) =
h( ) u(t ) d

In Figure 4.8 the equations that gives the relations between the various system descriptions
are given.

4.7

Stability

Stability of linear time-invariant input-output systems


In this section we discuss the stability of a linear time-invariant system as defined in (4.1).
Let pi , i = 1, . . . , n be the system poles, i.e. the solutions of the homogeneous equation
n + a1 n1 + + an = 0
The system is asymptotically stable if and only if all components in the homogeneous
response from a finite set of initial conditions decay to zero as time increases, or
lim

n
X

Ci epi t = 0

(4.73)

i=1

where pi are the system poles1 .


In order for a linear time-invariant system to be stable, all its poles must have a real part
smaller than zero, i.e. they must all lie in the left half plane. An unstable pole, lying in the
right half plane, generates a component in the system homogeneous response that increases
without bound from any finite initial conditions. A system having one or more poles lying
on the imaginary axis with multiplicity equal to one has nondecaying (usually oscillatory)
components in the homogeneous response and is defined to be marginally stable. For a
pole on the imaginary axis with multiplicity higher than one, the homogeneous response
will grow unboundedly.
1

In (4.73) we assume that all poles have multiplicity equal to one. If a pole pi has multiplicity mi > 1,
the terms for this pole has the form Ci tj epi t , j = 0, . . . , mi 1

97

4.7. STABILITY

Stability of linear time-invariant state systems


Consider the linear time-invariant state system of (4.39). For asymptotic stability, the
homogeneous response of the state vector x(t) should return to the origin for any arbitrary
initial condition x(0) for time t , or
lim x(t) = lim (t) x(0) = lim M e t M 1 x(0) = 0

for any x(0) and all eigenvalues have multiplicity equal to one. All the elements of x(t) are
a linear combination of the modal components ei t , and therefore, the stability of a system
response depends on all components decaying to zero with time. If Re(i ) > 0 for some i ,
the component will grow exponentially with time and the sum is by definition unbounded.
The requirements for system stability may therefore be summarized:
A linear time-invariant state system described by the state equation x(t)

= A x(t) + b u(t)
is asymptotically stable if and only if all eigenvalues have real part smaller than zero.
Three other separate conditions should be considered:
1. If one or more eigenvalues, or pair of conjugate eigenvalues, has a real part larger than
zero, there is at least one corresponding modal component that increases exponentially without bound from any initial condition, violating the definition of stability.
2. Any pair of conjugate eigenvalues on the imaginary axis (real part equal to zero),
i,i+1 = j with multiplicity equal to one, generates an undamped oscillatory
component in the state response. The magnitude of the homogeneous state response
neither decays nor grows but continues to oscillate for all time at a frequency . Such
a system is defined to be marginally stable. For poles on the imaginary axis with
multiplicity higher than 1, the homogeneous state response will grow unboundedly.
3. An eigenvalue = 0 with multiplicity one generates a model exponent e t = e0 t = 1
that is a constant. The system response neither decays or grows, and again the system
is defined to be marginally stable. A eigenvalue = 0 with multiplicity mi > 1
gives additional components tj , j = 1, . . . , mi 1 and will lead to an unbounded
homogeneous state response.

Stability of convolution systems


In this section we consider BIBO stability of a convolution system, which is described by
the convolution integral (4.38). We say a linear time-invariant system is Bounded-InputBounded-Output (BIBO) stable if a bounded input sup |u(t)| = M1 , produces a bounded
t

output sup |y(t)| = M2 . A necessary and sufficient condition for such BIBO stability is
t

that the impulse response h(t) is such that


Z
|h(t)| dt = M3 <

(4.74)

98

CHAPTER 4. GENERAL SYSTEM ANALYSIS

First, we show that the system is stable if (4.74) holds:


Z




sup |y(t)| = sup
h( )u(t ) d
t
t
Z

sup
|h( )u(t )|d
t

Z
sup
|h( )| |u(t )|d
t

|h( )|M1 d

M3 M1

This means that M2 is finite if (4.74) holds. That (4.74) is necessary can be seen as follows.
Assume that for we want to compute y(0) for an input given u given by

if h(t) > 0
1
u(t) := sgn[h(t)] = 0
if h(t) = 0 , t.

1 if h(t) < 0
Then, M1 = sup |u(t)| = 1 and,
t

y(0) =
=

h( )u( ) d
|h( )| d

This shows that if M3 is not bounded, that y(t0) is not bounded and so the system is not
BIBO stable. This shows that for BIBO stability (4.74) is necessary.
Example 4.19 Consider the system of Example 4.11 and Example 4.13. The eigenvalue
of the matrix


6 6
A=
2 1
are 1 = 2 and 2 = 3. Both eigenvalues are negative real, which means that this
system is stable.
As we will show in the next section the transfer function the input-output differential equation of this system is given by
d2 y(t)
d y(t)
d2 u(t)
d u(t)
+
5
+
6
y(t)
=
4
+ 21
+ 29 u(t)
2
2
dt
dt
dt
dt
The characteristic equation is equal to
2 + 5 + 6 = 0

99

4.8. EXERCISES

and we find (not surprisingly) the poles 1 = 2 and 2 = 3, which are equal to the
eigenvalues of the A-matrix of the corresponding state system. Both poles are negative
real, which means that this system is stable.
Finally we can compute the impulse response h(t) given in Equation (4.70).
Z
M3 =
|h(t)| dt

Z
=
|(3e2t 2e3t )us (t) + 4(t)| dt
Z

=
(3e2t 2e3t )us (t) + 4(t) dt

Z
Z
Z
2t
3t
=3
e dt 2
e dt + 4
(t) dt
0

2

3
= e2t 0 + e3t 0 + 4
2
3
3 2
= +4
2 3

<

which means that this system is also bounded-input-bounded-output stable.


Example 4.20 Consider the third-order state system


2
9
12
1
36 x(t) + 2 u(t)
x(t)

= 9 26
6 18 25
4

The eigenvalues of A are 1 = 1 and 2,3 = 2 j 3. Note that two eigenvalues have a
positive real part, which means that this system is unstable.

4.8

Exercises

Exercise 1. Transfer functions


Compute the transfer function of the system described by the following differential equations:
d2 y(t)
d y(t)
+6
+ 5y(t) = 4u(t)
2
dt
dt
d3 y(t)
d y(t)
d3 u(t)
d2 u(t)
d u(t)
b)

13
+
12y(t)
=
2
+
12
+ 24
+ 16u(t)
3
3
2
dt
dt
dt
dt
dt
d4 y(t)
d3 y(t)
d2 y(t)
d y(t)
c)
+6
+ 22
+ 30
+ 13y(t)
4
3
2
dt
dt
dt
dt
d3 u(t)
d2 u(t)
d u(t)
=3
+
6
21
+ 12u(t)
3
2
dt
dt
dt

a)

100

CHAPTER 4. GENERAL SYSTEM ANALYSIS

Exercise 2. Poles, zeroes, stability


Consider the systems ac of Exercise 1, compute the poles and zeros, and determine whether
the systems are stable or unstable.
Exercise 3: Frequency response
Consider the following system:
d2 y(t)
d y(t)
+6
+ 5y(t) = u(t)
2
dt
dt
Compute the frequency response of this system, determine the magnitude M(j) and phase
(j), and compute the output y(t) for a given input u(t) = 4 cos 3 t.
Exercise 4: Time response
Consider the system
y(t) + 6y(t)
+ 13y(t) = 13 u(t)
with step input u(t) = 1 for t 0. Compute the output y(t) for the initial conditions
y(0)

= 2 and y(0) = 5.
Exercise 5: Time response
Consider the system
y(t) + 10y(t)
+ 25y(t) = 40 u(t)
with step input u(t) = e3t for t 0. Compute the output y(t) for the initial conditions
y(0)

= 65 and y(0) = 55.


Exercise 6: Impulse response
Consider a system with impulse response
h(t) = us (t)
Compute the output y(t) for t 0 when the input is given by

0 for t < 0
1 for 0 t < 1
u(t) =

0 for t 1

4.8. EXERCISES
Exercise 7: State systems
Consider the state system



 
3 2
1

x(t)

=
x(t) +
u(t)
21 10
2




y(t) = 4 1 x(t)
1. Is this state system stable?

2. Compute the system matrices after a modal transformation.


3. Compute the homogeneous response of this system for x(0) = [ 1 1 ]T .
4. Compute the forced response for t 0 with x(0) = 0 and u(t) = us (t).
5. Derive the impulse response of the system.
6. Derive the transfer function of the system.

101

102

CHAPTER 4. GENERAL SYSTEM ANALYSIS

Chapter 5
Nonlinear dynamical systems
In this chapter we will consider some examples of nonlinear (differential) systems. We will
also discuss the concept of linearization, which gives us the possibility to approximate the
behavior of a nonlinear system locally by a linear system description.

5.1

Modeling of nonlinear dynamical systems

In Chapter 2 we have discussed the modeling of dynamical systems with only linear basic
elements. In practice however we will often encounter phenomena that are nonlinear. We
will present two examples with nonlinear elements and show that the differential equations
can be derived in a similar way as in the linear case.
Let u(t) be the input signal and let y(t) be the output signal of a nonlinear dynamical
system. The relation between inputs and outputs of dynamical systems can be described
by a differential equation:
 dn1 y(t)

dn y(t)
d y(t)
dm u(t) dm1 u(t)
d u(t)
=F
,...,
, y(t),
,
,...,
, u(t)
d tn
d tn1
dt
d tm
d tm1
dt
Example 5.1 (Mechanical system)
A mass M is connected to the ceiling by a nonlinear spring and a linear damper in the
configuration of Figure 5.1. The spring force is given by fs = k y 2 and the damping force
is equal to fd = c y,
where k and c are constants. The gravity force is equal to fg = m g
and finally there is an external force fext acting on the mass. Our task is to derive the
differential equations for this system.
We use Newtons law for this system and we obtain
X
m y =
fi
i

103

104

CHAPTER 5. NONLINEAR DYNAMICAL SYSTEMS

k y2

c y

m
mg

fext
?

Figure 5.1: Example of a nonlinear mechanical system

where fi are all forces acting on mass m. We have


m y = fs + fd + fg + fext = k y 2 c y + m g + fext
Example 5.2 (Water flow system)
Given a system with two water vessels in the configuration of Figure 5.2. Water runs into
the upper vessel from a source with flow win , from the upper vessel through restriction with
fluid resistance R1 into the lower vessel with flow wmed , and out of the lower vessel through
restriction with fluid resistance R2 with flow wout . The pressures at the bottoms of the
water vessels are denoted by p1 and p2 . The outside pressure is p0 . The area of both vessels
is A, and the water levels are denoted by h1 and h2 . There are two differences between
this system and that of Figure 2.15. The first difference is that in the system of Figure 5.2
the water of the first vessel flows freely into the second vessel (and so the flow wmed does
not depend on p2 ), and the second difference is that we assume a nonlinear behavior of the
restrictions:
1
p1 p0
R1
1
=
p2 p0
R2

wmed =
wout

By introducing the square root in the relation between the flow and the pressure, the equations can describe the physical behavior more realistically. Our task is to derive the differential equations for this system.
First we consider the upper vessel:
The net flow into the upper level is w1 = win wmed . The fluid capacitance of the upper
vessel is given by
C1 =

A1
g

5.1. MODELING OF NONLINEAR DYNAMICAL SYSTEMS

105

win p0

A
wmed

h1
p1

R1
A
h2
p2

wout R2

Figure 5.2: Example of a nonlinear fluid flow system

The change in pressure p1 is now given by:


p 1 =

g
1
w1 =
(win wmed)
C1
A1

Substitution of wmed =
p 1 =

1
p1 p0 gives us
R1

g
g
win
p1 p0
A1
A1 R1

The derivation for the lower vessel is similar:


The net flow into the lower vessel is w2 = wmed wout . The fluid capacitance of the lower
vessel is given by
C2 =

A2
g

The change in pressure p2 is now given by:


p 2 =

1
g
w2 =
(wmed wout )
C2
A2

For the flow wout we find:


wout =

1
p2 p0
R2

Substitution of wmed and wout into the previous equation yields:


p 2 =

g
g
p1 p0
p2 p0
A2 R1
A2 R2

106

CHAPTER 5. NONLINEAR DYNAMICAL SYSTEMS

So summarizing, the two differential equations describing the dynamics of the 2 vessel
system are as follows:

g
g

win
p1 p0
p 1 =
A1
A1 R1
g
g

p 2 =
p1 p0
p2 p0
A2 R1
A2 R2
Now we can use the relation between the fluid levels hi and pi , i = 1, 2:
pi p0 = ghi , and pi = g h i
and we can rewrite the equations as

g p
g

win
gh1
g h 1 =
A1
A1 R1
g p
g p

g h 2 =
gh1
gh2
A2 R1
A2 R2
2 = R2 g
1 = R1 g and R
or by introducing the parameters R

1
g p

win
h 1 =
1 h1
A1
A1 R
p
g p

h 2 = g
h

1
1
2 h2
A2 R
A2 R

Nonlinear state systems

We now introduce the notion of nonlinear state systems. A nonlinear system can then be
represented by the state equations
x(t)

= f (x, u),
y(t) = h(x, u),

(5.1)

where f and h are nonlinear mappings. We call a model of this form a nonlinear state
space model. The dimension of the state vector is called the order of the system. The system (5.1) is called time-invariant because the functions f and h do not depend explicitly
on time t; there are more general time-varying systems where the functions do depend on
time. The model consists of two functions: the function f gives the rate of change of the
state vector as a function of state x and input u, and the function h gives the output signal
as functions of state x and control u. A system is called a linear state space system if the
functions f and h are linear in x and u.
Example 5.3 (Mechanical system)
Consider the system of Example 5.1. The nonlinear differential equation is given by
m y(t) = k y 2 (t) c y(t)
+ m g + fext (t)

5.2. STEADY STATE BEHAVIOR AND LINEARIZATION

107




y(t) and input u(t) = fext (t) we obtain the nonlinear


If we choose the state x(t) = y(t)
state equations

x 1 (t) = k/m x22 (t) c/m x1 (t) + g + 1/m u(t)


x 2 (t) = x1 (t)

y(t) = x2 (t)

Example 5.4 (Water flow system)


Consider the system of Example 5.2. The nonlinear differential equations are given by

g p
1

win
h 1 =
1 h1
A1
A1 R
p
g p

h 2 = g
h

1
1
2 h2
A2 R
A2 R


If we choose the state x(t) = h1 (t) h2 (t) , the input u(t) = win (t) and the output
y(t) = h2 (t) we obtain the nonlinear state equations

1
g p

x 1 (t) =
u(t)

1 x1 (t)

A1
A1 R

g p
g p
x

(t)
=
x
(t)

2
1

1
2 x2 (t)

A2 R
A2 R

y(t) = x2 (t)

5.2

Steady state behavior and linearization

An equilibrium point (or steady state) is a point where the system comes to a rest. For
a system at rest all signals will be constant and so in an equilibrium point the derivative
of the state will be zero. We define an equilibrium point, or steady state, of a nonlinear
system as follows:
Definition 5.1 Consider a nonlinear state system, described by (5.1). For a steady state
or equilibrium point (x0 , u0, y0 ) there holds
f (x0 , u0 ) = 0
with a corresponding output y0 :
y0 = g(x0 , u0 )
Example 5.5 (Mechanical system)
Consider the nonlinear state system of Example 5.3. We aim at computing the equilibrium
point (x0 , u0 , y0 ) for the system

x 1 (t) = k/m x22 (t) c/m x1 (t) + g + 1/m u(t)


x 2 (t) = x1 (t)

y(t) = x2 (t)

108

CHAPTER 5. NONLINEAR DYNAMICAL SYSTEMS

If we set f (x0 , u0 ) = 0 we obtain


0 = k/m x20,2 c/m x0,1 + g + 1/m u0
0 = x0,1

0
. This means
This means that x0,1 = 0 and
= g + 1/m u(t), so x0,2 = m g+u
k
that if the external
q force fext (t) = u0 is constant, and the system is in rest then y 0 = x0,1 = 0

k/m x20,2 (t)

and y0 = x0,2 =

m g+u0
.
k

Example 5.6 (Water flow system)


The equilibrium point (x0 , u0 , y0) of the nonlinear water flow system of Example 5.4 can
be computed by setting x = 0:
q
1
g
0=
u0 (t)
1 x0,1 (t)
A1
A1 R
g
0=
1
A2 R

g
x0,1 (t)
2
A2 R

x0,2 (t)

y0 (t) = x0,2 (t)


We find:

2
R

x
=
u20

0,1
2
2

22
22
R
R
x
=
x
=
u2

0,2

2 0,1
2 2 0

g
R

y0 (t) = x0,2

In many physical systems the relations used to define the model elements are inherently
nonlinear. The analysis of systems containing such elements is a much more difficult
task than that for a system containing only linear elements, and for many such systems of
interconnected nonlinear elements there may be no exact analysis technique. In engineering
practice it is often convenient to approximate the behavior of a nonlinear systems by a
linear one over a limited range of operation, usually in the neighborhood of an equilibrium
point. The achieve this linear behavior we have to do a linearization step. We study small
variations about the equilibrium (x0 , u0 , y0 ), where x0 , u0 and y0 satisfy f (x0 , u0 ) = 0 and
g(x0 , u0 ) = y0 . To derive the linear behavior we look at small variations x, u, and y about
the equilibrium (x0 , u0 , y0 ):

x(t) = x0 + x(t)
u(t) = u0 + u(t)

y(t) = y0 + y(t)

109

5.2. STEADY STATE BEHAVIOR AND LINEARIZATION


First of all note that x (t) = x x0 = x,
and so

x (t) = f (x0 + x(t), u0 + u(t))
y0 + y(t) = g(x0 + x(t), u0 + u(t))

Now we can, using Taylor expansion, describe the nonlinear equations in terms of these
small variations x and u, which yields

x (t) = f (x0 , u0) + A x(t) + B u(t))
y(t) = g(x0 , u0 ) + C x(t) + D u(t)) y0
where A, B, C, and D are computed as


f
f
A=
, B=
,
x x = x0
u x = x0
u = u0


h
C=
,
x x = x0

u = u0

u = u0

With f (x0 , u0 ) = 0 and y0 = g(x0 , u0), this reduces to:



x (t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)


h
D=
u x = x0

(5.2)

u = u0

Example 5.7 (Mechanical system)


Consider the mechanical system of Example 5.5. We derived the system equations

x 1 (t) = k/m x22 (t) c/m x1 (t) + g + 1/m u(t)


x 2 (t) = x1 (t)

y(t) = x2 (t)
q
q
m g+u0
0
,
and
y
=
x
=
. We
and found the equilibrium point x0,1 = 0, x0,2 = m g+u
0
0,2
k
k
compute
"
#

f1
f1
f
x2
1
A=
= x
f2
f2

x x=x0 ,u=u0
x = x0
x1
x2
u = u0

c/m 2 k/m x2
1
0

"

f
B=
=
u x=x0 ,u=u0

h
g
C=
=
x x=x0 ,u=u0

f1
u
f2
u

g
x1

x = x0
u = u0

"

=
x = x0
u = u0

g
x2

x = x0
u = u0

c/m 2
1

1/m
0

0 1

mkg+ku0
m2

110

CHAPTER 5. NONLINEAR DYNAMICAL SYSTEMS




g
D=
=
u x=x0 ,u=u0

g
u

=0

x = x0
u = u0

So for small variations x, u and y about the working point equilibrium (x0 , u0 , y0 ) the linear
behavior is described by the linear time-invariant state system

x (t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
These equations can describe the dynamic behavior of the mechanical system quite accurately as long as the signals u(t) and x(t) remain small.
Example 5.8 (Water flow system)
Consider the nonlinear flow system of Example 5.6 with state equations

1
g p

(t)
=
u(t)

1 x1 (t)

A1
A1 R

g p
g p
x 2 (t) =
x
(t)

1
2 x2 (t)

A
R
A
R
2
2

y0 (t) = x0,2 (t)

2
2
R
R
1
2
2
u
,
x
=
u20 and y0 (t) = x0,2 . We compute
0,2
0
2
2
2
2
g
g
#

and the equilibrium point x0,1 =


"


f
=
A=
x x=x0 ,u=u0

1 x1 (t)
2 A1 R
g
1 x1 (t)
2 A2 R

f1
x2
f2
x2

f1
x1
f2
x1

2
2 A2 R

"


f
B=
=
u x=x0 ,u=u0

h
g
C=
=
x x=x0 ,u=u0



g
D=
=

u x=x0 ,u=u0

x = x0
u = u0

f1
u
f2
u

g
x1

g
u

x2 (t)

=
x = x0
u = u0

=
x = x0
u = u0

g
x2

x = x0
u = u0

x = x0
u = u0

=0

1
A1

1
2A1 u0
1
2A2 u0

0 1

0
1
2A2 u0

111

5.3. EXERCISES

So for small variations x, u, and y about the working point equilibrium (x0 , u0, y0 ) the
linear behavior is described by the linear time-invariant state system

x (t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
These equations can describe the dynamic behavior of the mechanical system quite accurately as long as the signals u(t) and x(t) remain small.

5.3

Exercises

Exercise 1. Pendulum system


A point mass m is attached to the end of a massless rod with length that is rotating
about a fixed pivot as shown in Figure 5.3. The angle between the rod and the vertical
axis is (t), and the external force working on the ball is fe (t).
e

fge
j

fgx
fz

fgy

Figure 5.3: A simple point-mass pendulum


Now perform the following tasks:
1. Compute the equilibrium point for fe (t) = 0.5 m g.
2. Linearize the system around this equilibrium point.
Exercise 2. Electrical circuit
An electrical circuit consists of a linear capacitor and a nonlinear resistor as shown in
Figure 5.4. For the resistor there holds:
(v1 v2 )3 = R i3

112

CHAPTER 5. NONLINEAR DYNAMICAL SYSTEMS


v1

i1 -

i2

i3
?

v2
e

Figure 5.4: A nonlinear electrical circuit


Further we have v2 = 0, R = 2 , and C = 1/4 F.
Now perform the following tasks:
1. Compute the equilibrium point for v1,0 = 2.
2. Linearize the system around this equilibrium point with v1,0 = 2.

Chapter 6
An introduction to feedback control
Engineers often use control methods engineering to enhance the performance of systems in
many fields of application, such as mechanical, electrical, electromechanical, and fluid/heat
flow systems (see Chapter 2). This chapter gives an introduction to the field of control
engineering. Some basic definitions and terminology will be introduced and the concept of
feedback will be presented.
Definition 6.1 Given a system with some inputs for which we can set the values, control
is a set of actions undertaken in order to obtain a desired behavior of the system, and it
can be applied in an open-loop or a closed-loop configuration by supplying the proper control
signals.
Controllers can be found in all kinds of technical systems, from cruise control to a central
heating system, from hard disks to washing machines, from GPS to oil refineries, from
watches to communication satellites. In many cases the impact of control is not recognized
from the outside. Control is therefore often called the hidden technology.

6.1

Block diagrams

In Section 4.1 we have seen that a linear time-invariant system can be represented by a
transfer function. Often real-life physical systems consist of many subsystems where each
subsystem can be described by a differential equation and therefore can be represented by
a transfer function. If we want to look at the overall system on a higher, less detailed level,
we can draw a block diagram of the system.
Definition 6.2 A block diagram is a diagram of a system, in which the principal subsystems are represented by blocks interconnected by arrows that show the interaction between
the blocks.
Example 6.1 Consider the three-vessel water flow system of Figure 6.1.
113

114

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL


win A

h1
R1
A

h2
R2
A

h3
R3

Figure 6.1: Example of a three-vessel water flow system

Using the modeling techniques of Chapter 2 we can describe the system with three differential equations

1 = 1 win g h1 ,

A
AR1

g
g
h 2 =
h1
h2 ,

AR1
AR2

h 3 = g h2 g h3 .
AR2
AR3

We consider each equation as a subsystem. In the first subsystem, the input is the inflow
win (t), and the output of the system is the water level h1 (t). In the second subsystem we
have h1 (t) as an input and h2 (t) as an output. In the third subsystem we have h2 (t) as an
input and h3 (t) as an output. In Figure 6.2 the three-vessel water flow system is represented
by a block diagram, in which each block represents one of the three vessels.

win

Water vessel 1

h1 Water vessel 2

h2 Water vessel 3

h3 -

Figure 6.2: Block diagram of the three-vessel water flow system

For the three-vessel water flow system of Example 6.1 we see that the block diagram
consists of three subsystems that are in a series connection. Each of the subsystems can

115

6.1. BLOCK DIAGRAMS


be described by a transfer function:
Subsystem 1:

H1 (s) =

Subsystem 2:

H2 (s) =

Subsystem 3:

H3 (s) =

1
A
g
AR1
g
AR1
g
+ AR
2
g
AR2
g
+ AR
3

s+
s
s

Interconnection of systems
Series interconnection:
In a series interconnection of two systems the output of the first system becomes the
input of the second system (See Figure 6.3). Let U1 (s) and Y1 (s) denote the Laplace

U(s)

H1 (s)

H2 (s)

Y (s)

Figure 6.3: Series connection: Y (s) = H1 (s) H2 (s) U(s)


transforms of the input u1 (t) and the output y1 (t) of the first system, respectively, and
denote by U2 (s) and Y2 (s) the Laplace transforms of the input u2 (t) and the output y2 (t)
of the second system. From the previous section we know that Y1 (s) = H1 (s) U1 (s) and
Y2 (s) = H2 (s)U2 (s). With u2 (t) = y1 (t) and consequently U2 (s) = Y1 (s) we find
Y2 (s) = H2 (s)H1 (s)U1 (s)
and the transfer function of the series connection is the product of the two transfer functions:
Hseries (s) = H2 (s) H1(s)
Example 6.2 Consider the water flow system of Example 6.1. Let U(s) and Y (s) be the
Laplace transform of u(t) = win (t) and y(s) = h3 (t), respectively. The three blocks H1 , H2

116

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

and H3 are in a series connection so,


Htot (s) = H3 (s) H2 (s) H1 (s)
=
=

1
A

g
g
AR1
AR2
g
g
g
s + AR
s + AR
s + AR
1
2
3
g2
A3 R1 R2
g
g
g
)(s + AR
)
(s + AR1 )(s + AR
2
3

and thus
Y (s) = Htot (s) U(s)
Parallel interconnection:
In a parallel interconnection two systems have the same input and the outputs of the
systems are added (See Figure 6.4). Let the Laplace transform of input and output of the
-

H1 (s)

U(s)

Y (s)
?
e
6
-

H2 (s)

Figure 6.4: Parallel connection: Y (s) = (H1 (s) + H2 (s)) U(s)


first system be given by U1 (s) and Y1 (s), and of the second system by U2 (s) and Y2 (s).
The overall output is given by y(t) = y1 (t) + y2 (t). With u(t) = u1(t) = u2 (t) we find for
the Laplace transforms of the output:
Y (s) = Y1 (s) + Y2 (s) = H1 (s) U(s) + H2 (s) U(s) = (H1 (s) + H2 (s))U(s)
and the transfer function of the parallel connection is the sum of the two transfer functions:
Hparallel (s) = H1 (s) + H2 (s)
Feedback interconnection:
Another important type of interconnection is the feedback interconnection, in which the
systems are placed in a loop, e.g. the output of the first system is the input of the second
system, and the output of the second system is the input of the first system, possibly added
to an external signal (See Figure 6.5). Let U1 (s) and Y1 (s) denote the Laplace transforms of

117

6.1. BLOCK DIAGRAMS

R(s)

-e
+
6

U1 (s)

H1 (s)

Y1 (s)

Y2 (s)

H2 (s) 

Figure 6.5: Feedback interconnection: Y (s) =

H1 (s)
R(s)
1 + H2 (s)H1 (s)

the input u1 (t) and the output y1 (t) of the first system, respectively, and denote by U2 (s)
and Y2 (s) the Laplace transforms of the input u2 (t) and the output y2 (t) of the second
system. Furthermore, let R(s) denote the Laplace transform of the reference signal r(t).
The loop is created by setting u2 (t) = y1 (t) and u1 (t) = r(t) y2 (t). Consequently we
obtain U2 (s) = Y1 (s), U1 (s) = R(s) Y2 (s). By substitution we find:
U1 (s) = R(s)Y2 (s) = R(s)H2 (s)U2 (s) = R(s)H2 (s)Y1 (s) = R(s)H2 (s)H1 (s)U1 (s)
Hence, we obtain


1 + H2 (s)H1 (s) U1 (s) = R(s)
and so

1
R(s)
1 + H2 (s)H1 (s)
H1 (s)
Y1 (s) =
R(s)
1 + H2 (s)H1 (s)

U1 (s) =

and the transfer function from r to y1 is given by


Hfeedback (s) =

H1 (s)
1 + H2 (s)H1 (s)

When the feedback argument y2 (t) is subtracted (see Figure 6.5) we call it a negative
feedback. Negative feedback often appear in controller design an is required for system
stability. For a negative feedback configuration as in Figure 6.5 we can express the solution
by a simple rule:
The transfer function of a single-loop negative feedback system is given by the forward
transfer function divided by the sum of one plus the loop gain function.
where the loop gain function is the product of the transfer functions making the loop, that

118

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

is, the products of the gains in the loop.


For the configuration of Figure 6.5 the forward transfer function is equal to H1 (s), the loop
gain function is equal to H2 (s)H1 (s), and so Hfeedback (s) = H1 (s)/(1 + H2 (s)H1 (s)).

R(s)

-e
+
6

U1 (s)

H1 (s)

Y1 (s)

Y2 (s)

H2 (s) 

Figure 6.6: Positive feedback interconnection: Y (s) =

H1 (s)
R(s)
1 H2 (s)H1 (s)

When the feedback argument is added (instead of subtracted) we call it a positive feedback
(See Figure 6.6). For a positive feedback configuration the solution is given by the rule:
The transfer function of a single loop positive feedback system is given by the forward
transfer function divided by one minus the loop gain function.
Block diagram manipulations
Figure 6.7 shows some basic block diagram manipulations for nodes where signals split into
two branches, or where signals are added. The basic manipultions can be used to convert
block diagrams without effecting the mathematical relationships.
Example 6.3 (Simplifying block scheme)
In this example we will consider the block diagram of Figure 6.8 and find the transfer
function from input r(t) to output y(t).
Using the manipulations defined in Figure 6.7 we first replace the closed-loop of H1 (s) and
H3 (s) by H1 (s)/(1H1(s)H3 (s)) (Note that we have a positive feedback here). The next step
is to shift the input of system H6 (s) over the system H2 (s) using the basic manipulation step
given in Figure 6.7.b. Now we have a system consisting of two subsystems: The first subsystem is the feedback of H1 (s)/(1 H1 (s)H3 (s)), H2 (s) and H4 (s). For this subsystem use
the rule for a positive feedback configuration: The transfer function of a single-loop positive
feedback system is given by the forward transfer function (H1 (s)H2 (s)/(1 H1 (s)H3 (s)))
divided by one minus the loop gain function (1 H1 (s)H2 (s)H4 (s)/(1 H1 (s)H3 (s))). This
results in the following transfer function for the first subsystem:
Hsub,1 (s) =

H1 (s)H2 (s)/(1 H1 (s)H3 (s))


H1 (s)H2 (s)
=
1 (H1 (s)H2 (s)H4 (s))/(1 H1 (s)H3 (s))
(1 H1 (s)H3 (s) H1 (s)H2 (s)H4 (s))

119

6.1. BLOCK DIAGRAMS


U(s)

Y1 (s)

H(s)

U(s)

H(s)

H(s)

Y2 (s)

U(s)

H(s)

Y1 (s)

Y1 (s)

Y2 (s)
(a)

U(s)

H(s)

Y1 (s)

Y2 (s)

-e 6

H(s)

Y (s)

?
1
H(s)

U1 (s)

Y2 (s)

(b)

U1 (s)

H(s)

H(s)

-e
6

Y (s)

U2 (s)

U2 (s)
(c)

U1 (s)

H(s)

-e
6

Y (s)

U1 (s)

-e 6

H(s)

Y (s)

1
H(s)

U2 (s)

(d)

U2 (s)

Figure 6.7: Basic block diagram manipulations


The second subsystem consists of the parallel interconnection of H6 (s)/H2 (s) and H5 (s),
leading to
Hsub,2(s) = H6 (s)/H2 (s) + H5 (s) =

H6 (s) + H5 (s)H2 (s)


H2 (s)

120

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL


-

R(s)

X (s) X (s)

- e 1- e 2 6
6

X3 (s)

H1

H6
H2

X4 (s)

H5

-?
e

Y (s)

-?
e

Y (s)

-?
e

Y (s)

H3 
H4 

Figure 6.8: Simplification of block diagram

R(s)

-e
6

H1
1H1 H3

H6

H2

H5

H4 

R(s)

-e
6

H1
1H1 H3

H2

H6
H2

H5

H4 
|

{z

Hsub,1

}|

{z

Hsub,2

Figure 6.9: Final blockscheme after some block manipulations


The final transfer function is the series connection of Hsub,1(s) and Hsub,2 (s):



H1 (s)H2 (s)
H6 (s) + H5 (s)H2 (s)
H(s) =
(1 H1 (s)H3 (s) H1 (s)H2 (s)H4 (s))
H2 (s)
=

H1 (s)H2 (s)H5 (s) + H1 (s)H6 (s)


1 H1 (s)H3 (s) H1 (s)H2 (s)H4 (s)

121

6.2. CONTROL CONFIGURATIONS


Algebraic approach in block diagram computations

A more algebraic approach starts with writing down the input-output equations of all
subsystems, and proceeds with eliminating the internal signals, which are not relevant. We
will show this approach by computing the overall transfer for the system of Example 6.3.
Example 6.4 (Simplifying block scheme II)
In this example we will consider the block diagram of Figure 6.8 and find the transfer
function from input r(t) to output y(t). We write down the equations:
X1 (s) = R(s) + H4 X4 (s)
X2 (s) = X1 (s) + H3 (s) X3 (s)
X3 (s) = H1 (s) X2(s)
X4 (s) = H2 X3 (s)
Y (s) = H6 (s) X3(s) + H5 (s) X4(s)

(6.1)
(6.2)
(6.3)
(6.4)
(6.5)

Elimination of X1 and X2 from (6.1)(6.3) gives us


X3 (s) = H1 (s) R(s) + H1 (s) H4 (s) X4 (s) + H1 (s) H3 (s) X3 (s)

(6.6)

Substitution of (6.4) into (6.6) and (6.5) gives us:


X3 (s) = H1 (s) R(s) + H1 (s) H2 (s) H4 (s) X3(s) + H1 (s) H3(s) X3 (s)
Y (s) = H6 (s) X3(s) + H2 (s) H5(s) X3 (s)
Equation (6.7) can be rewritten as


1 H1 (s) H2 (s) H4 (s) H1 (s) H3 (s) X3 (s) = H1 (s) R(s)

(6.7)
(6.8)

(6.9)

so

X3 (s) = 1 H1 (s) H2(s) H4 (s) H1 (s) H3 (s)

1

H1 (s) R(s)

(6.10)

Finally, substitution of (6.10) into (6.8) leads to the final result





1
H1 (s) R(s)
Y (s) = H6 (s) + H2 (s) H5 (s) 1 H1 (s) H2 (s) H4(s) H1 (s) H3 (s)
and so the overall transfer function is
H(s) =

H1 (s)H6 (s) + H1 (s)H2 (s) H5(s)


1 H1 (s) H2 (s) H4 (s) H1 (s) H3 (s)

(6.11)

122

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

reference

controller

control
signal

process

output

Figure 6.10: Open loop configuration

6.2

Control configurations

Open-loop control system


In an open-loop control configuration the system is controlled in a certain pre-described
manner, regardless of the actual state of the system. The open-loop configuration is given
in Figure 6.10.
In the field of control systems, the reference signal input can often be seen as a trajectory
which the output signal should track. To achieve that, the controller initiates a precomputed action which aims to making the process output equal to the desired reference signal.

Closed-loop control system


In a closed-loop control configuration the controller produces a control signal based on
the difference between the desired and the measured system output. The closed-loop
configuration is given in Figure 6.11.
reference- e + 6

controller

control
signal

process

output

measured output
Figure 6.11: Closed-loop configuration
In a closed-loop configuration, the measured output is compared with the desired reference
signal to provide an error signal that then initiates corrective action until the feedback
signal duplicates the reference signal. In this chapter we assume the sensor is optimal and
errors in the measurement can be neglected. In that case the measured output is equal to
the real output.
Open-loop systems are simpler than closed-loop systems and perform satisfactory in applications involving highly repeatable processes, having well established characteristics, and
that are not exposed to disturbances. In the case of model uncertainty or disturbances
acting on the system, closed-loop methods are preferred.

123

6.2. CONTROL CONFIGURATIONS

The controller in open-loop configuration is often referred to as a feedforward controller .


The controller in closed-loop configuration is often referred to as a feedback controller .

Analysis of open-loop and closed-loop control systems


In the linear time-invariant case the controller and the process will be linear time-invariant
systems and can be represented by transfer functions, for the plant H(s) and for the
controller D(s). For the open loop configuration we obtain Figure 6.12. In this setup we
want the output y(t) to track the reference signal r(t) with Laplace transform R(s). In any
physical system, there is always some amount of external disturbance that influences the
process behavior. This disturbance signal is denoted by w(t) with its Laplace transform
W (s).
W (s)
R(s)

controller
Dol (s)

Uol (s)

-?
e

process

Yol (s)

H(s)

Figure 6.12: Open-loop control configuration with feedforward controller Dol


Let the output of the process be given by yol (t) with Laplace transform Yol (s), then we
find
Yol (s) = H(s) Dol (s) R(s) + H(s) W (s) = Tol (s) R(s) + H(s) W (s)
where Tol (s) = H(s) Dol (s). The reference tracking error e(t) with Laplace transform E(s)
is defined as the difference between the reference signal and the process output and can be
computed as :
Eol (s) = R(s) Yol (s)
= [1 H(s) Dol (s)] R(s) H(s) W (s)
= [1 Tol (s)] R(s) H(s) W (s)
For an optimal reference tracking we like to make the error as small as possible, so
Tol (s) = H(s) Dol (s) 1. This can be achieved by setting Dol (s) = H 1 (s). Unfortunately, this choice is not always feasible because H 1 (s) may be unstable or physically not
realizable. In this case an approximation has to be used, resulting in a non-zero reference
tracking error. Another problem in the open-loop configuration is that disturbance cannot
be rejected and will be visible in the output signal.
For the closed-loop configuration we obtain Figure 6.13. Again we assume the Laplace
transforms of the reference signal and disturbance signal to be given by R(s) and W (s),

124

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL


W (s)
R(s)

-e + 6

controller

U(s)

D(s)

?
-e

process

Y (s)

H(s)
V (s)
?
e

Figure 6.13: Closed-loop control configuration with or feedback controller D


respectively. In this configuration we also add a measurement error v(t) with Laplace
transform V (s) to the measured output of the system.
Let the output of the process be given by Y (s), then we find
Y (s) =

H(s) D(s)
H(s)
H(s) D(s)
R(s) +
W (s)
V (s)
1 + H(s) D(s)
1 + H(s) D(s)
1 + H(s) D(s)

The controller output signal U(s) is then given by


U(s) =

H(s) D(s)
D(s)
D(s)
R(s)
W (s)
V (s)
1 + H(s) D(s)
1 + H(s) D(s)
1 + H(s) D(s)

The reference tracking error E(s) is given by


E(s) = R(s) Y (s)


H(s) D(s)
H(s)
H(s) D(s)
= R(s)
R(s) +
W (s)
V (s)
1 + H(s) D(s)
1 + H(s) D(s)
1 + H(s) D(s)
1
H(s)
H(s) D(s)
=
R(s)
W (s) +
V (s)
1 + H(s) D(s)
1 + H(s) D(s)
1 + H(s) D(s)
Define the loop gain function
L(s) = H(s) D(s),
the sensitivity function:
S(s) =

1
1
=
1 + L(s)
1 + H(s) D(s)

and the complimentary sensitivity function:


T (s) = 1 S(s) =

L(s)
H(s) D(s)
=
1 + L(s)
1 + H(s) D(s)

then the reference tracking error becomes:


E(s) = S(s) R(s) S(s) H(s) W (s) + T (s) V (s)

(6.12)

6.3. STEADY STATE TRACKING AND SYSTEM TYPE

125

The loop gain is an engineering term used to quantify the gain of a system controlled by
feedback loops. The loop gain function plays an important role in control engineering. A
high loop gain may improve the performance of the closed-loop system, but it may also
destabilize it.
The sensitivity function has an important role to play in judging the performance of the
controller, because it describes how much of the reference signal cannot be tracked and
will still be in the tracking error. The smaller S, the smaller the reference tracking error.
The complementary sensitivity function is the counterpart of the sensitivity function (Note
that S(s) + T (s) = 1). The closer T is to 1, the better the reference tracking.

6.3

Steady state tracking and system type

For most closed-loop control systems the primary goal is to produce an output signal that
follows the reference signal as closely as possible. It is therefore important to know how
the output signal behaves for t . We define the steady state value of a signal x(t) as
xss = lim x(t).
t

Final value theorem


A very important property in the analysis of linear time-invariant systems is the final value
theorem.
The final value theorem gives us the relation between the value of a signal x(t) when
t , and the Laplace transform X(s) of this signal. If limt x(t) exists, then
lim x(t) = lim s X(s)

s0

(6.13)

Example 6.5 (final value theorem)


Given the Laplace transform X(s) = L{x(t)}:
X(s) =

3(s + 2)
+ 2s + 10)

s(s2

Now we can derive for x() as follows



3(s + 2)
3(s + 2)
6
x() = lim s X(s) = lim s
= 2
=
= 0.6

2
s0
s0 s(s + 2s + 10)
s + 2 s + 10 s=0 10

Consider a system with input u(t), output y(t) and transfer function H(s). Often one is
interested in the value y() for a step input u(t) = us (t). From Table 2.1 we find that
1
U(s) = L{us (t)} = 1/s. The output Y (s) is now given by Y (s) = H(s) U(s) = H(s) .
s
With the final value theorem we derive:
1
y() = lim s Y (s) = lim s H(s) = H(0)
s0
s0
s

126

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

This means that H(0) is the value that remains if we put a constant signal on the input.
The value H(0 is therefore often referred to as the DC Gain of the system.
Example 6.6 (DC Gain of the system)
Consider the system
H(s) =

(s2

3(s + 2)
+ 2s + 10)

The DC gain of this system is given by



6
3(s + 2)
DC gain = H(0) = 2
=
= 0.6

s + 2 s + 10 s=0 10
Steady state performance

The steady state performance of a control system is judged by the steady state difference
between the reference and output signals. We consider the stable closed-loop configuration
of Figure 6.14 with a loop gain function L(s) = H(s) D(s).
r -e e D(s)
+ 6

u H(s)

y-

Figure 6.14: Closed-loop control configuration

Steady state error for a reference signal r(t)


Let us assume that r(t) is given for t 0. Equation (6.12) tells us that for w(t) = 0 and
v(t) = 0 we obtain:
E(s) = S(s) R(s)
where E(s) is the Laplace transform of e(t), R(s) is the Laplace transform of r(t), and
S(s) = 1/(1 + H(s) D(s)) is the sensitivity function. The final value theorem tells us that
we can use (6.13) to compute the value of e(t) for t :
ess = lim s S(s) R(s)
s0

We will study the steady state error ess for different reference
signals r(t).

6.3. STEADY STATE TRACKING AND SYSTEM TYPE

127

Steady state error for a step reference signal


For r(t) = us (t) we find the Laplace transform R(s) = 1/s from Table 2.1 and so
ess = lim e(t) = lim s S(s) R(s) = lim s S(s) 1/s = S(0)
t

s0

s0

This means that the steady state error for a step response signal is equal to the sensitivity
function for s = 0.
Steady state error for a ramp and a parabolic reference signal
For r(t) = ur (t) = t us (t) we find the Laplace transform R(s) = 1/s2 from Table 2.1. The
steady state error now becomes
ess = lim s S(s) R(s) = lim s S(s) 1/s2 = lim
s0

s0

s0

S(s)
s

Similarly, for a parabolic signal r(t) = up (t) = t2 /2 us (t) we find the Laplace transform
R(s) = 1/s3 from table 2.1. The steady state error for a parabolic reference signal is
S(s)
s0 s2

ess = lim s S(s) R(s) = lim s S(s) 1/s3 = lim


s0

s0

Steady state error for a higher-order polynomial signal


Let the reference be given by the higher order polynomial signal
r(t) =

tk
for t 0, k Z+
k!

The Laplace transform is given by


R(s) = 1/sk+1
The steady state error can be computed as:
ess = lim s S(s)
s0

1
sk+1

Note that the behavior of the function s1k S(s) for s 0 is important. To describe this
behavior we introduce the notion of system type.
Definition 6.3 Consider a system in the closed-loop configuration as in Figure 6.14 with
S(s) =

1
1
=
.
1 + L(s)
1 + H(s) D(s)

Assume that for some positive integer value n the sensitivity function S can be written as
S(s) = sn S0 (s)
such that S0 (0) is neither zero nor infinite, i.e. 0 < |S0 (0)| < . Then the system type is
equal to n.

128

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

For a closed-loop configuration of type n and a reference signal r(t) =


state error is computed as




1
sn

ess = k S(s)
= S(s) = k S0 (s)
s
s
s=0

tk
k!

us (t), the steady

s=0

and so

if n > k
0
S0 (0) if n = k
ess =

if n < k

If the system type is 0 a step input signal results in a constant tracking error. If the system
type is 1, then for a ramp input signal the steady state tracking error is constant. If the
system type is 2 a parabolic input signal result in a constant tracking error. Summarizing,
if the system type is n then an input signal r(t) = tn /n!us (t) results in a constant steady
state tracking error.
The relation between system type and loop gain function L(s)
Consider a system in the closed-loop configuration as in Figure 6.14 with
1
1
=
.
S(s) =
1 + L(s)
1 + H(s) D(s)
Assume that for some positive integer value n the loop gain function can be written as
L0 (s)
sn
with L0 (0) = Kn finite but not zero. Then
L(s) =

S(s) =

1
sn
1
=
=
1 + L(s)
sn + L0 (s)
1 + Ls0n(s)

Then for an input signal


tk
for t 0
k!
we find a steady state tracking error
sn
1
sn
1
ess = lim n

e
=
lim
ss
s0 s + L0 (s) sk
s0 sn + Kn sk
r(t) =

Define the following variables:


Kp = lim L(s)
s0

Kv = lim s L(s)
s0

Ka = lim s2 L(s)
s0

6.3. STEADY STATE TRACKING AND SYSTEM TYPE

129

If a system is of type 0, then we find for a step a steady state tracking error ess = 1/(1+Kp ),
and for a ramp and parabola the error will diverge to infinity. For a type 1 system the
steady state tracking error for a step is zero, for a ramp we find ess = 1/Kv , and for a
parabola the error is infinite. Finally, for a type 2 system, the steady state tracking error
for a step and for a ramp is zero, and for a parabola we find ess = 1/Ka . This summarized
in Table 6.1.
step

ramp

parabola

type 0

1
1 + Kp

type 1

1
Kv

type 2

1
Ka

Table 6.1: System type and steady state errors for various reference signals

Example 6.7 Consider the feedback configuration of Figure 6.15 for which we study the
steady state error for different types of loop gain function L(s).
r -e e L(s)
+ 6

y-

Figure 6.15: Closed-loop control configuration


Define the three systems:
10
(s + 1)(s + 2)
4
System 2 L(s) =
s(s + 2)
4s + 1
System 3 L(s) = 2
s (s + 4)
System 1 L(s) =

10
System 1 is of type 0 because we compute L0 (s) = L(s) = (s+1)(s+2)
and Kp = L0 (0) = 5.
4
System 2 is of type 1 because we compute L0 (s) = s L(s) = s+2 and Kv = L0 (0) = 2.
System 3 is of type 2 because we compute L0 (s) = s2 L(s) = 4s+1
and Ka = L0 (0) = 0.25.
s+4

130

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL


System 3

System 2
System 1

t
Figure 6.16: Step responses for different system types

First we plot the output y(t) for a step reference signal. As can be seen in Figure 6.16
systems 2 and 3 can follow the signal with zero steady state error. System 1 of type 0 gives
a finite steady state state tracking error ess = 1/(1 + Kp ) = 1/6.
Figure 6.17 gives the output y(t) for a ramp reference signal. We see that the response of
system 1 diverges from the reference, the response of system 2 follows the reference with a
finite error, and the response of system 3 converges to the reference.
Finally Figure 6.18 shows the output y(t) for a parabolic reference signal. None of the
responses converge to the parabolic signal, but system 3 with system type 2 can follow the
parabola with a finite error. The response of system 1 and 2 diverge from the parabola.

System type w.r.t. disturbance inputs


In addition to the steady state reference tracking error, another criterion of steady state
performance is the sensitivity to disturbances acting on the closed-loop system as in Figure
6.19.
With W (s) the Laplace transform of w(s) and r(t) = 0, v(t) = 0, then we find
E(s) = S(s) H(s) W (s) =

H(s)
W (s) = Tw (s) W (s)
1 + H(s) D(s)

where Tw (s) = H(s)/(1 + H(s) D(s)) is the transfer function of the system with input w
and output e. For a step function w(s) = us (t) we find
ess = lim e(t) = lim s E(s) = lim s Tw (s) W (s) = lim s Tw (s) 1/s = Tw (0)
t

s0

s0

s0

131

6.3. STEADY STATE TRACKING AND SYSTEM TYPE


System 3
System 2
System 1

t
Figure 6.17: Ramp responses for different system types

We can extend the analysis of the steady state error to the class of disturbance signals
tk
us (t)
k!
with Laplace transform
w(t) =

W (s) =

1
sk+1

The steady state error can be computed as:


1
sk
Assume that for some positive integer value n the disturbance function Tw can be written
as
ess = lim s Tw (s) W (s) = lim s Tw (s)
s0

s0

sk+1

= lim Tw (s)
s0

Tw (s) = sn Tw,0 (s)


such that Tw,0 (0) is neither zero nor infinite, i.e. 0 < |Tw,0 (0)| < , then the disturbance
system type is equal to n. Now the tracking error can be written as




1
sn

ess = k Tw (s)
= k Tw,0(s)
s
s
s=0

s=0

and so

if n > k
0
Tw,0 (0) if n = k
ess =

if n < k

If fact we have the same property as for the reference tracking error. For the disturbance
system type n we find that a disturbance signal w(t) = tn /n!us (t) results in a constant
steady state tracking error.

132

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL


System 1

System 3
System 2

t
Figure 6.18: Parabola responses for different system types
w(t)
r(t) = 0 - e e(t) + 6

u(t) - ?
e

y(t) -

Figure 6.19: Closed-loop configuration with reference r(t) and disturbance w(t)
Example 6.8 Consider the system
H(s) =

1
s(s + 1)

in the closed-loop configuration of Figure 6.19 with controller


D(s) = 2 .
We compute
Tw (s) =

H(s)
1
= 2
1 + H(s) D(s) s + s + 2

For n = 0 we find Tw,0(s) = Tw (s) with 0 < Tw (0) = |1/2| < , and so the disturbance
system type is equal to 0. If we choose a different controller
D(s) =

0.1
+3
s

6.4. PID CONTROL

133

we have
Tw (s) =

10 s
H(s)
=
3
1 + H(s) D(s) 10 s + 10 s2 + 30 s + 1

For n = 1 we find Tw (s) = s1 Tw,0(s) with


10
+ 10 s2 + 30 s + 1
and 0 < Tw,0 (0) = |10| < , and so the disturbance system type is equal to 1.
Tw,0 (s) =

6.4

10 s3

PID control

In most industrial applications, PID controllers are used to enhance the system performance
and to meet the desired specifications. The terms P, I, and D stand for P - Proportional, I
- Integral, and D - Derivative. These terms describe three basic mathematical operations
applied to the error signal e(t) = r(t) y(t). The proportional value determines the
reaction to the current error, the integral value determines the reaction based on the
integral of recent errors, and the derivative value determines the reaction based on the
rate at which the error has been changing. We will discuss the PID controllers as they
operate in the closed-loop configuration of Figure 6.19. We start with the P controller
with only a proportional action. Then we discuss the PI controller (proportional + integral
action) and PD controller (proportional + derivative action), and finally the PID controller
(proportional + integral + derivative action).

P control
The controller in the configuration of Figure 6.20 is called the P controller (proportional
controller). Typically the proportional action is the main drive in a control loop, as it
reduces a large part of the overall error. For a P controller we have D(s) = kp and so the
control signal u(t) is proportional to the error signal e(t):
u(t) = kp e(t)
Increasing the value kp may improve the steady state tracking error and the response speed.
Unfortunately, it may also lead to excessive values of the control signal u(t), which cannot
be realized in practice. Furthermore high values of kp may lead to instability.

PI control
Another kind of controller is the PI controller (proportional-integral control), in which
a part of the control signal u(t) is proportional to the error signal e(t) and another part is
proportional to the integral of the error signal e(t):
Z


1 t
u(t) = kp e(t) +
e( ) d
Ti 0

134

CHAPTER 6. AN INTRODUCTION TO FEEDBACK CONTROL

e(t)

u(t)
-

kp

Figure 6.20: Proportional control configuration


where Ti is called the integral or reset time and is the time needed to reach kp with a unit
input. The integral action in the PI controller reduces the steady state error in a system.
Integration of even a small error over time produces a drive signal large enough to move the
system toward a zero error and this therefore will improve the steady state performance.
Unfortunately integral action may also give undesired oscillatory behavior.
-

kp

e(t)

?
e
6
-

u(t) -

ki
s

Figure 6.21: Proportional Integral control configuration


If we define ki = kp/Ti we obtain

    u(t) = kp ( e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ )
         = kp e(t) + ki ∫₀ᵗ e(τ) dτ

The transfer function of the PI controller is

    D(s) = kp ( 1 + 1/(Ti s) )
         = kp + ki/s

PD control
Instead of an integral action we can also introduce a derivative action, in which the proportional part of the control action is added to a multiple of the time derivative of the
error signal e(t):

    u(t) = kp ( e(t) + Td de(t)/dt )                                  (6.14)


where Td is called the derivative time. The derivative action is used to increase damping
and improve the system's stability. It counteracts the proportional action kp (and the
integral action ki, if present) when the output changes quickly, which helps to reduce
overshoot and to avoid unwanted oscillations. It has no effect on the final (steady state)
error. Note that a derivative action alone is never used, because if e(t) is constant and
different from zero, the controller would not react.
Figure 6.22: Proportional Derivative control configuration


Consider (6.14). If we define kd = kp Td we obtain

    u(t) = kp ( e(t) + Td de(t)/dt )
         = kp e(t) + kd de(t)/dt

The transfer function of the PD controller is given by

    D(s) = kp (1 + Td s)
         = kp + kd s
Unfortunately, it is impossible to realize a pure derivative action in practice. The
implementation is usually done as

    τ du(t)/dt + u(t) = kp ( e(t) + Td de(t)/dt )

where τ is very small.
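In transfer-function form this implementation reads D(s) = kp (1 + Td s)/(1 + τ s): the derivative gain no longer grows without bound but levels off around kp Td/τ at high frequencies. A minimal Python sketch (our own, assuming NumPy is available; the numerical values are only illustrative) that shows this effect:

    import numpy as np

    kp, Td, tau = 2.0, 0.5, 0.01                 # illustrative values
    w = np.array([1.0, 10.0, 100.0, 1000.0])     # frequencies in rad/s
    s = 1j * w

    D_ideal = kp * (1 + Td*s)                    # kp (1 + Td s)
    D_filt  = kp * (1 + Td*s) / (1 + tau*s)      # kp (1 + Td s)/(1 + tau s)

    print(np.abs(D_ideal))   # grows without bound with frequency
    print(np.abs(D_filt))    # levels off near kp*Td/tau = 100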

PID control
The most general case is the PID controller in which we combine the proportional action
with an integral and derivative action:
    u(t) = kp ( e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td de(t)/dt )             (6.15)


Figure 6.23: Proportional Integral Derivative control configuration


Finding good values of kp, Ti, and Td is called tuning. In certain cases, by a proper choice
of these parameters we can modify the closed-loop dynamics as required (see Example 6.10;
for further reading, see [3]). With ki = kp/Ti and kd = kp Td we obtain

    u(t) = kp e(t) + ki ∫₀ᵗ e(τ) dτ + kd de(t)/dt

The transfer function of the PID controller is

    D(s) = kp ( 1 + 1/(Ti s) + Td s )
         = kp + ki/s + kd s
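In a digital implementation the integral and the derivative are replaced by discrete approximations. The sketch below shows one common discretization (forward-Euler integral, backward-difference derivative, fixed sample time dt); it is a minimal Python illustration of our own, not the implementation used in the notes, and the class and variable names are ours.

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0       # running integral of the error
            self.prev_error = 0.0     # error at the previous sample

        def update(self, error):
            # proportional + integral (forward Euler) + derivative (backward difference)
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # usage: once per sample period, u = controller.update(r - y)
    controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)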
Example 6.9 Consider the closed-loop system of Figure 6.24 with plant

    H(s) = 1 / ((s + 1)(10 s + 1))

and a proportional controller

    D(s) = kp

The transfer function from input r(t) to y(t) is given by

    T(s) = kp / ((s + 1)(10 s + 1) + kp)

and the transfer function from disturbance w(t) to y(t) is given by

    Tw(s) = 1 / ((s + 1)(10 s + 1) + kp)



Figure 6.24: Control configuration


We can analyze the response of y(t) for different values of kp . The results are given in
Figure 6.25.a for kp = 5, 10, 25, 50. The left plot is for a step reference (r(t) = us (t),
w(t) = 0), and the right plot is for a step disturbance (w(t) = us (t), r(t) = 0). In the plots
we can see that increasing kp leads to a reduction of the steady state error with respect to
a step reference signal and a smaller disturbance error. However, larger values of kp also
introduce oscillatory behavior, which is undesired.
Next we choose a Proportional-Integral controller
    D(s) = kp ( 1 + 1/(Ti s) )

The transfer function from input r(t) to y(t) is now given by

    T(s) = kp (s + 1/Ti) / ( s(s + 1)(10 s + 1) + kp (s + 1/Ti) )

and the transfer function from disturbance w(t) to y(t) is given by

    Tw(s) = s / ( s(s + 1)(10 s + 1) + kp (s + 1/Ti) )

We can analyze the response of y(t) for different values of Ti . The results are given in Figure
6.25.b for kp = 25 and Ti = 5, 10, 50. The left plot is for a step reference (r(t) = us (t),
w(t) = 0), and the right plot is for a step disturbance (w(t) = us (t), r(t) = 0). In the plots
we can see that decreasing Ti leads to a faster decay of the steady state error with respect
to a step reference signal and a step disturbance signal. However, smaller values of Ti also
give an increase of the overshoot.
Next we choose a Proportional-Integral-Derivative controller
    D(s) = kp ( 1 + 1/(Ti s) + Td s )

The transfer function from input r(t) to y(t) is now given by

    T(s) = kp (Td s² + s + 1/Ti) / ( s(s + 1)(10 s + 1) + kp (Td s² + s + 1/Ti) )

Figure 6.25: Responses for (a) P control, (b) PI control, and (c) PID control. (Each part
shows a reference tracking plot on the left and a disturbance rejection plot on the right.)


and the transfer function from disturbance w(t) to y(t) is given by

    Tw(s) = s / ( s(s + 1)(10 s + 1) + kp (Td s² + s + 1/Ti) )

We can analyze the response of y(t) for different values of Td . The results are given in
Figure 6.25.c for kp = 25, Ti = 10 and Td = 0.2, 1, 5. The left plot is for a step reference
(r(t) = us (t), w(t) = 0), and the right plot is for a step disturbance (w(t) = us (t), r(t) = 0).
In the plots we can see that increasing Td introduces damping of the oscillatory behavior.
However, too much damping will result in a slower convergence towards the steady state.
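Responses like those in Figure 6.25 can be reproduced numerically. The following is a minimal Python sketch (our own, assuming NumPy and SciPy are available) for the proportional case: it builds T(s) and Tw(s) for one value of kp and computes the two step responses.

    import numpy as np
    from scipy import signal

    kp = 25.0
    den = [10.0, 11.0, 1.0 + kp]               # (s + 1)(10 s + 1) + kp = 10 s^2 + 11 s + 1 + kp
    T  = signal.TransferFunction([kp], den)     # reference tracking
    Tw = signal.TransferFunction([1.0], den)    # disturbance rejection

    t = np.linspace(0.0, 20.0, 1000)
    t, y_ref  = signal.step(T, T=t)             # response to r(t) = us(t), w(t) = 0
    t, y_dist = signal.step(Tw, T=t)            # response to w(t) = us(t), r(t) = 0
    print(y_ref[-1], y_dist[-1])                # steady state values kp/(1+kp) and 1/(1+kp)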
Example 6.10 Consider the system

    H(s) = 1 / ((s + p1)(s + p2))

and the PID controller of (6.15). The closed-loop transfer function becomes

    1 / (1 + H(s) D(s)) = ( Ti s³ + Ti (p1 + p2) s² + Ti p1 p2 s )
                          / ( Ti s³ + (Ti p1 + Ti p2 + kp Ti Td) s² + (Ti p1 p2 + kp Ti) s + kp )

Note that the closed-loop system has 3 poles, determined by the 3 parameters of the controller.
We can place the poles anywhere we like.
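The pole-placement idea can be carried out symbolically. Below is a minimal sympy sketch (our own; the plant poles p1 = 1, p2 = 2 and the desired closed-loop poles at −2 and −1 ± j are only illustrative choices) that solves for kp, Ti, and Td by matching the coefficients of the characteristic polynomial:

    import sympy as sp

    s = sp.symbols('s')
    kp, Ti, Td = sp.symbols('k_p T_i T_d', positive=True)

    p1, p2 = 1, 2                                            # illustrative plant poles
    # closed-loop characteristic polynomial, multiplied by Ti*s to clear fractions
    char = sp.expand(Ti*s*(s + p1)*(s + p2) + kp*(Ti*Td*s**2 + Ti*s + 1))
    desired = sp.expand(Ti*(s + 2)*(s**2 + 2*s + 2))         # desired poles: -2 and -1 +/- j

    eqs = [sp.Eq(char.coeff(s, k), desired.coeff(s, k)) for k in range(3)]
    print(sp.solve(eqs, [kp, Ti, Td], dict=True))            # [{k_p: 4, T_i: 1, T_d: 1/4}]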
Note that we can use the PID controller to place the poles at desired locations. Using the
properties of second-order systems we can tune the P, I, or D action in such a way that
system properties such as settling time, rise time, overshoot, and peak time satisfy certain
design criteria. We will illustrate this with some examples.
Example 6.11 Given a system

    H(s) = 1 / ((5s + 1)(2s + 1))

and a PD controller

    D(s) = kp (1 + Td s)

in the closed-loop configuration of Figure 6.14. The proportional gain kp has to be tuned
such that the closed-loop system has an undamped natural frequency ωn = 1 rad/s. The
derivative time constant Td has to be chosen such that the closed-loop system has a relative
damping of ζ = 1/2. So the tasks are:
1. Compute kp .
2. Compute Td .


To answer the question we first compute the loop gain

    L(s) = D(s) H(s) = kp (1 + Td s) / ((5s + 1)(2s + 1))

The characteristic equation is now given by

    10s² + 7s + 1 + kp (Td s + 1) = 0

or

    s² + (0.7 + 0.1 kp Td) s + (0.1 + 0.1 kp) = s² + 2ζωn s + ωn² = 0

1. We have

    ωn² = (1 + kp)/10 = 1    ⟹    kp = 9

2. For the derivative time constant we have

    0.7 + 0.9 Td = 2ζωn = 2 · 0.5 · 1 = 1    ⟹    Td = 1/3
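A quick numerical check of this result (a sketch assuming NumPy is available): with kp = 9 and Td = 1/3 the closed-loop poles should indeed give ωn = 1 and ζ = 0.5.

    import numpy as np

    kp, Td = 9.0, 1.0/3.0
    # characteristic polynomial 10 s^2 + (7 + kp*Td) s + (1 + kp)
    poles = np.roots([10.0, 7.0 + kp*Td, 1.0 + kp])
    wn = np.abs(poles[0])               # undamped natural frequency
    zeta = -poles[0].real / wn          # relative damping
    print(poles, wn, zeta)              # expect wn = 1 and zeta = 0.5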

Finally we can study the steady state error for a system in closed-loop with a PID controller.
Example 6.12 Given a system

    H(s) = 5 / ((10s + 1)(s + 1))

and a PI controller

    D(s) = kp ( 1 + 1/(10 s) )

in the closed-loop configuration of Figure 6.14. The reference input is a unit ramp signal
r(t) = ur(t). For which values of controller gain kp is the steady state error ess < 10% ?
To answer the question we use the fact that for a type 1 control system the steady state
error for a ramp reference signal is given by

    ess = 1/Kv    with    Kv = lim (s→0) s D(s) H(s).

There holds:

    Kv = lim (s→0) s D(s) H(s) = lim (s→0) s · kp (10 s + 1)/(10 s) · 5/((10 s + 1)(s + 1)) = kp/2.

So Kv > 10 and it follows kp > 20.
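The velocity constant Kv can also be checked symbolically; a minimal sympy sketch (our own, not part of the notes):

    import sympy as sp

    s, kp = sp.symbols('s k_p', positive=True)
    H = 5 / ((10*s + 1)*(s + 1))
    D = kp * (1 + 1/(10*s))

    Kv = sp.limit(s*D*H, s, 0)
    print(Kv)        # k_p/2, so e_ss = 1/Kv = 2/k_p, and e_ss < 0.1 requires k_p > 20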


Figure 6.26: Block scheme

6.5 Exercises

Exercise 1. Block diagram


Given the block diagram of Figure 6.26:
1. Determine the transfer function H(s) of the system with input R(s) and output Y (s).
Exercise 2. Rise time and settling time
Given a second-order system

    H(s) = 1 / ((s + 1)(s + 5))

and a proportional controller D(s) = kp in the closed-loop configuration of Figure 6.14.


The reference input is a step function (r(t) = us (t)).
1. For which kp do we obtain a stable loop?
2. For which kp do we obtain a rise time smaller than 0.2 seconds?
3. For which kp do we obtain a settling time smaller than 2.3 seconds?
Exercise 3. Steady state error
Given a system

    H(s) = 3 / ((s + 1)(s + 5))

and a proportional controller D(s) = kp in a closed-loop configuration of Figure 6.14. The


reference input is a unit ramp signal r(t) = ur (t). For which values of controller gain kp is
the steady state error ess < 10% ?


Appendix A: The inverse of a matrix


The inverse of an n × n matrix M can be computed as follows:

    M⁻¹ = (1/det M) adj M

where det M is the determinant of M and adj M is the adjoint matrix of M, which is defined as

    adj M = [  det(M̃11)              −det(M̃21)              ...   (−1)^(n+1) det(M̃n1) ]
            [ −det(M̃12)               det(M̃22)              ...   (−1)^(n+2) det(M̃n2) ]
            [    ...                     ...                 ...          ...           ]
            [ (−1)^(1+n) det(M̃1n)    (−1)^(2+n) det(M̃2n)    ...        det(M̃nn)       ]

where M̃ij is equal to the matrix M after removing the i-th row and j-th column.

Example:

    M = [ 1 2 3 ]
        [ 4 4 4 ]
        [ 1 2 1 ]

The entries of the adjoint matrix can be computed as:

    det(M̃11) = det[ 4 4 ; 2 1 ] = −4    det(M̃21) = det[ 2 3 ; 2 1 ] = −4    det(M̃31) = det[ 2 3 ; 4 4 ] = −4
    det(M̃12) = det[ 4 4 ; 1 1 ] =  0    det(M̃22) = det[ 1 3 ; 1 1 ] = −2    det(M̃32) = det[ 1 3 ; 4 4 ] = −8
    det(M̃13) = det[ 4 4 ; 1 2 ] =  4    det(M̃23) = det[ 1 2 ; 1 2 ] =  0    det(M̃33) = det[ 1 2 ; 4 4 ] = −4

and so the adjoint matrix is given by

    adj M = [ −4   4  −4 ]
            [  0  −2   8 ]
            [  4   0  −4 ]

With det(M) = 8 the inverse of M is now computed as

    M⁻¹ = (1/det M) adj M = [ −0.5   0.5   −0.5 ]
                            [  0    −0.25   1   ]
                            [  0.5   0     −0.5 ]

For a 2 × 2 matrix this works out as

    [ m1 m2 ; m3 m4 ]⁻¹ = 1/(m1 m4 − m2 m3) · [ m4 −m2 ; −m3 m1 ]


Appendix B: Laplace transforms


    f(t)                                        F(s)
    -----------------------------------------------------------------------
    δ(t)                                        1
    1                                           1/s
    t                                           1/s²
    t²                                          2!/s³
    t³                                          3!/s⁴
    t^m                                         m!/s^(m+1)
    e^(−at)                                     1/(s + a)
    t e^(−at)                                   1/(s + a)²
    (1/2!) t² e^(−at)                           1/(s + a)³
    (1/(m−1)!) t^(m−1) e^(−at)                  1/(s + a)^m
    1 − e^(−at)                                 a/( s(s + a) )
    (1/a)(at − 1 + e^(−at))                     a/( s²(s + a) )
    e^(−at) − e^(−bt)                           (b − a)/( (s + a)(s + b) )
    (1 − at) e^(−at)                            s/(s + a)²
    1 − e^(−at)(1 + at)                         a²/( s(s + a)² )
    b e^(−bt) − a e^(−at)                       (b − a) s/( (s + a)(s + b) )
    sin at                                      a/(s² + a²)
    cos at                                      s/(s² + a²)
    e^(−at) cos bt                              (s + a)/( (s + a)² + b² )
    e^(−at) sin bt                              b/( (s + a)² + b² )
    1 − e^(−at)( cos bt + (a/b) sin bt )        (a² + b²)/( s[(s + a)² + b²] )

(See Franklin et al. [3]).
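Individual entries of this table can be verified symbolically; a minimal sympy sketch (our own, not part of the notes) that checks two of the pairs:

    import sympy as sp

    t, s, a, b = sp.symbols('t s a b', positive=True)

    # L{ t e^(-a t) } should be 1/(s + a)^2
    print(sp.laplace_transform(t*sp.exp(-a*t), t, s, noconds=True))

    # L{ e^(-a t) cos(b t) } should be (s + a)/((s + a)^2 + b^2)
    print(sp.simplify(sp.laplace_transform(sp.exp(-a*t)*sp.cos(b*t), t, s, noconds=True)))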


Appendix C: Answer to exercises


Exercises chapter 1
Answer exercise 1. Signals
a). Define the function:

    x(t) = (1/T) us(t) − (1/T) us(t − T)

The function (1/T) us(t) is 0 for t < 0 and 1/T for t ≥ 0. The function (1/T) us(t − T)
is 0 for t < T and 1/T for t ≥ T. Taking the difference gives us:

    x(t) = 0       for t < 0
    x(t) = 1/T     for 0 ≤ t < T
    x(t) = 0       for t ≥ T

This is similar to the definition of the unit rectangular function ΠT(t).

Answer exercise 2. Plots of signals


a). – c). [sketches of the requested signals]

Answer exercise 3. Derivative of signals




a). d/dt[ Π1(t) − Π2(t − 1) ] = δ(t) − 2δ(t − 1) + δ(t − 3), for t ∈ R.



b). d/dt[ ur(t) − ur(t − 1) − us(t − 4) ] = us(t) − us(t − 1) − δ(t − 4), for t ∈ R.



c). d/dt[ up(t) us(1 − t) + us(t − 1) ] = ur(t) us(1 − t) + 0.5 δ(t − 1), for t ∈ R.
Answer exercise 4. System properties
                 memoryless   linear   time-invariant   causal
    System a)    Yes          No       No               Yes
    System b)    No           Yes      Yes              Yes
    System c)    No           No       Yes              Yes
    System d)    Yes          No       Yes              Yes

Exercises chapter 2
Answer exercise 1. Modeling and transfer function of a linear mechanical system

a). For the tractor:

    m1 ẍ1(t) = −c1 ẋ1(t) + k( x2(t) − x1(t) ) + ft(t)

For the trailer:

    m2 ẍ2(t) = −c2 ẋ2(t) + k( x1(t) − x2(t) )

b).

    (S² m1 + c1 S + k) x1(t) = k x2(t) + ft(t)
    (S² m2 + c2 S + k) x2(t) = k x1(t)

From the second equation we derive for x1(t):

    x1(t) = (S² m2 + c2 S + k)/k · x2(t)

Substitution into the first equation gives us:

    (S² m2 + c2 S + k)(S² m1 + c1 S + k)/k · x2(t) = k x2(t) + ft(t)

Multiplication with k results in:

    (S² m2 + c2 S + k)(S² m1 + c1 S + k) x2(t) = k² x2(t) + k ft(t)

and so

    [ (S² m2 + c2 S + k)(S² m1 + c1 S + k) − k² ] x2(t) = k ft(t)

or

    [ S⁴ m1 m2 + S³ (c1 m2 + c2 m1) + S² (k m2 + k m1 + c1 c2) + S k(c2 + c1) ] x2(t) = k ft(t)

We can rewrite this as the input-output differential equation:

    m1 m2 d⁴x2(t)/dt⁴ + (c1 m2 + c2 m1) d³x2(t)/dt³ + (k m2 + k m1 + c1 c2) d²x2(t)/dt²
        + k(c2 + c1) dx2(t)/dt = k ft(t)

c). Choose state vector x(t):

    x(t) = [ x1(t) ]   [ ẋ1(t) ]
           [ x2(t) ] = [ x1(t) ]
           [ x3(t) ]   [ ẋ2(t) ]
           [ x4(t) ]   [ x2(t) ]

then

    ẋ1(t) = −(c1/m1) x1(t) + (k/m1)( x4(t) − x2(t) ) + (1/m1) u(t)
    ẋ2(t) = x1(t)
    ẋ3(t) = −(c2/m2) x3(t) + (k/m2)( x2(t) − x4(t) )
    ẋ4(t) = x3(t)

and so

    ẋ(t) = [ −c1/m1  −k/m1     0      k/m1 ]        [ 1/m1 ]
           [    1       0      0       0   ] x(t) + [   0  ] u(t)
           [    0     k/m2  −c2/m2  −k/m2  ]        [   0  ]
           [    0       0      1       0   ]        [   0  ]

    y(t) = [ 0 0 0 1 ] x(t) + 0 · u(t)

Answer exercise 2. Modeling and transfer function of a linear electrical system


a). There is one capacitor and one inductor in the network. The system can therefore be
described by two differential equations. The equations are as follows:

    L di3(t)/dt = v1(t)
    C dv1(t)/dt = i1(t) − i3(t)

b). Using the differential operator we obtain

    S L i3(t) = v1(t)
    S C v1(t) = i1(t) − i3(t)

Elimination of i3(t):

    i3(t) = i1(t) − S C v1(t)

Substitution into the first equation gives the following differential equation:

    S L i3(t) = v1(t)
    S L i1(t) − S² L C v1(t) = v1(t)

and so

    S L i1(t) = S² L C v1(t) + v1(t)

or

    S L i1(t) = (S² L C + 1) v1(t)

We can rewrite this as the input-output differential equation:

    L di1(t)/dt = L C d²v1(t)/dt² + v1(t)

c). Choose

    x(t) = [ i3(t) ; v1(t) ],    u(t) = i1(t),    y(t) = v1(t)

then

    di3(t)/dt = (1/L) v1(t)
    dv1(t)/dt = (1/C)( i1(t) − i3(t) )

or

    ẋ(t) = [  0    1/L ] x(t) + [  0  ] u(t)
           [ −1/C   0  ]        [ 1/C ]

    y(t) = [ 0 1 ] x(t) + 0 · u(t)


Exercises chapter 3
Answer exercise 1. Driving car

    m v̇(t) = fc(t) + fm(t) = −c v(t) + fm(t)

so

    v̇(t) + 3 v(t) = 0.5 us(t)

so f(t) = 0.5 us(t). From Equation 3.4 in the lecture notes we know that for a forcing
function f(t) = us(t) we find:

    ys(t) = (1/3)(1 − e^(−3t))

The actual forcing is 0.5 us(t), and so

    v(t) = (1/6)(1 − e^(−3t))    for t ≥ 0
Answer exercise 2. RLC-circuit
The differential equation of the circuit is given by

    d²v(t)/dt² + (1/(RC)) dv(t)/dt + (1/(LC)) v(t) = (1/C) di(t)/dt

In other words

    v̈ + (1/(RC)) v̇ + (1/(LC)) v(t) = v̈ + 2ζωn v̇ + ωn² v(t) = v̈ + 2 v̇ + 4 v(t)

With L = 1 we find 1/C = ωn² = 4, so C = 0.25, and 1/(RC) = 2 so R = 2.


Answer exercise 3. Damped natural frequency


We compute

    ÿ(t) + K ẏ(t) + 3 y(t) = ÿ(t) + 2ζωn ẏ(t) + ωn² y(t)

This gives ωn = √3, and

    √(1 − ζ²) = ωd/ωn ≥ 0.5 √3

This means that ζ ≤ 0.5, so K = 2ζωn ≤ 2 · 0.5 · √3, so:

    K ≤ √3

Answer exercise 4. Stability

                 System a)    System b)     System c)                 System d)
    Stable       Yes          Yes           No                        Yes
                 λ = −3       λ = −0.5      λ1 = 1.5 + √(5/4)         λ1 = (−2 + j√3)/4
                                            λ2 = 1.5 − √(5/4)         λ2 = (−2 − j√3)/4

Answer exercise 5. Damping ratio, (un)damped natural frequency, decay factor


    System a):   ζ = 0.5    ωn = 3      ωd = 1.5 √3                   σ = 1.5
    System b):   ζ = 2      ωn = 0.5    (ωd = j 0.5 √3)               σ = 1
    System c):   ζ = 0.2    ωn = 1      ωd = 0.4 √6 = √0.96 ≈ 0.98    σ = 0.2
    System d):   ζ = 1      ωn = 2      ωd = 0                        σ = 2

Note that for question b) the system is overdamped (ζ > 1) and there is no overshoot.
This means that ωd becomes complex and does not have a physical meaning.
Answer exercise 6. Response criteria
We use
    ts = 4.6/σ ,    tp = π/ωd ,    tr = 1.8/ωn ,    Mp = exp(−π σ/ωd) = exp(−π ζ/√(1 − ζ²))

    System a):   ts = 4.6/1.5 = 46/15 ≈ 3.0667         tr = 1.8/3 = 0.6
                 tp = π/ωd = 2π√3/9 ≈ 1.209             Mp = exp(−0.5 π/√0.75) ≈ 0.163

    System b):   ts = 4.6/1 = 4.6                       tr = 1.8/0.5 = 3.6
                 (tp = −2πj/√3 ≈ −j 3.6276)             (Mp = exp(2πj/√3) ≈ −0.8842 − j 0.4671)

    System c):   ts = 4.6/0.2 = 23                      tr = 1.8/1 = 1.8
                 tp = π/(0.4 √6) = 5π√6/12 ≈ 3.2064     Mp = exp(−0.2 π/√0.96) ≈ 0.5266

    System d):   ts = 4.6/2 = 2.3                       tr = 1.8/2 = 0.9
                 tp = π/0 → ∞                           Mp = exp(−∞) = 0

Note that for question b) the system is overdamped (ζ > 1) and there is no overshoot.
This means that tp and Mp become complex and do not have a physical meaning.

Exercises chapter 4
Answer exercise 1. Transfer functions

    a)  4 / (s² + 6 s + 5)

    b)  (2 s³ + 12 s² + 24 s + 16) / (s³ − 13 s + 12)

    c)  (3 s³ + 6 s² − 21 s + 12) / (s⁴ + 6 s³ + 22 s² + 30 s + 13)

Answer exercise 2. Poles, zeroes, stability


a):
    poles: p1 = −1 and p2 = −5
    zeros: none
    stable: YES

b):
    poles: p1 = 1, p2 = 3, and p3 = −4
    zeros: z1,2,3 = −2
    stable: NO (two poles have positive real part!)

c):
    poles: p1,2 = −2 ± j 3 and p3,4 = −1
    zeros: z1 = −4, z2,3 = 1
    stable: YES
Answer exercise 3: Frequency response
    H(s) = 1 / (s² + 6 s + 5)

Magnitude:

    M(jω) = | H(jω) | = 1 / | −ω² + j 6ω + 5 |
          = 1 / √( (5 − ω²)² + (6ω)² )
          = 1 / √( 25 − 10ω² + ω⁴ + 36ω² )
          = 1 / √( ω⁴ + 26ω² + 25 )

Phase:

    φ(jω) = ∠H(jω) = ∠( 1 / (−ω² + j 6ω + 5) )
          = −∠( −ω² + j 6ω + 5 )
          = −arctan( 6ω / (5 − ω²) )

Response:

    M(j3) = 1 / √( 3⁴ + 26 · 3² + 25 ) = 1/√340 = 1/(2√85) ≈ 0.0542

    φ(j3) = −arctan( 6 · 3 / (5 − 3²) ) = arctan(4.5) ≈ 1.3521 rad

Output:

    y(t) = M(j3) · 4 cos(3t + φ(j3)) = 0.2168 cos(3t + 1.3521)

Answer exercise 4: Time response
    H(s) = 13 / (s² + 6 s + 13)

Homogeneous solution:
Characteristic equation:

    s² + 6 s + 13 = (s + 3)² + 2² = 0

so λ1 = −3 + 2 j, λ2 = −3 − 2 j. Homogeneous solution:

    yhom = C1 e^(−3t) cos 2t + C2 e^(−3t) sin 2t

Particular solution:

    ypart = H(0) = 1

Final solution with derivatives:

    y(t) = yhom + ypart = 1 + C1 e^(−3t) cos 2t + C2 e^(−3t) sin 2t
    ẏ(t) = (−3 C1 + 2 C2) e^(−3t) cos 2t + (−2 C1 − 3 C2) e^(−3t) sin 2t

Initial conditions:

    y(0) = 1 + C1 = −5          ⟹   C1 = −6
    ẏ(0) = −3 C1 + 2 C2 = 2     ⟹   C2 = −8

and so

    y(t) = 1 − 6 e^(−3t) cos 2t − 8 e^(−3t) sin 2t
Answer exercise 5: Time response
    H(s) = 40 / (s² + 10 s + 25)

Homogeneous solution:
Characteristic equation:

    s² + 10 s + 25 = (s + 5)² = 0

so λ1,2 = −5. Homogeneous solution:

    yhom = C1 e^(−5t) + C2 t e^(−5t)

Particular solution:

    ypart = H(−3) e^(−3t) = 40 / ( (−3)² + 10 · (−3) + 25 ) e^(−3t) = (40/4) e^(−3t) = 10 e^(−3t)

Final solution with derivatives:

    y(t) = yhom + ypart = 10 e^(−3t) + C1 e^(−5t) + C2 t e^(−5t)
    ẏ(t) = −30 e^(−3t) − 5 C1 e^(−5t) + C2 e^(−5t) − 5 C2 t e^(−5t)

Initial conditions:

    y(0) = 10 + C1 = 55               ⟹   C1 = 45
    ẏ(0) = −30 − 5 C1 + C2 = −65      ⟹   C2 = 190

and so

    y(t) = 10 e^(−3t) + 45 e^(−5t) + 190 t e^(−5t)


Answer exercise 6: Impulse response
    y(t) = ∫ h(t − τ) u(τ) dτ

With u(τ) = 0 outside the interval 0 ≤ τ < 1, we find:

    y(t) = ∫₀¹ h(t − τ) · 1 dτ

For t < 0 we find that t − τ < 0 for the whole interval 0 ≤ τ < 1, and so

    for t < 0 :      y(t) = ∫₀¹ h(t − τ) · 1 dτ = ∫₀¹ 0 dτ = 0

For 0 ≤ t ≤ 1 we find h(t − τ) = 1 for τ < t and h(t − τ) = 0 for τ ≥ t, so:

    for 0 ≤ t ≤ 1 :  y(t) = ∫₀¹ h(t − τ) · 1 dτ = ∫₀ᵗ 1 · 1 dτ + ∫ₜ¹ 0 · 1 dτ = t

For t ≥ 1 we find h(t − τ) = 1 for all 0 ≤ τ < 1, and so

    for t ≥ 1 :      y(t) = ∫₀¹ h(t − τ) · 1 dτ = ∫₀¹ 1 · 1 dτ = 1

This means that

    y(t) = 0   for t < 0
    y(t) = t   for 0 ≤ t < 1
    y(t) = 1   for t ≥ 1

or

    y(t) = ur(t) − ur(t − 1)

Answer exercise 7: State systems
a. We find the eigenvalues λ1 = −3 and λ2 = −4 with the corresponding eigenvectors

    v1 = [ 1 ; 3 ]    and    v2 = [ 2 ; 7 ].

Both eigenvalues have a negative real part, and so the system is stable.

b. We find

    V = [ 1 2 ; 3 7 ],    V⁻¹ = [ 7 −2 ; −3 1 ]

    A* = V⁻¹ A V = [ −3 0 ; 0 −4 ]
    B* = V⁻¹ B   = [ 3 ; −1 ]
    C* = C V     = [ 1 1 ]
    D* = D       = 0

c. The homogeneous response

    x(t) = V e^(A* t) V⁻¹ x(0)
         = [ 1 2 ; 3 7 ] [ e^(−3t) 0 ; 0 e^(−4t) ] [ 5 ; −2 ]
         = [ 1 2 ; 3 7 ] [ 5 e^(−3t) ; −2 e^(−4t) ]
         = [ 5 e^(−3t) − 4 e^(−4t) ; 15 e^(−3t) − 14 e^(−4t) ]

d. Forced response:

    y(t) = C e^(At) x(0) + ∫₀ᵗ C e^(A(t−τ)) B u(τ) dτ
         = C e^(At) · 0 + ∫₀ᵗ C V e^(A*(t−τ)) V⁻¹ B dτ
         = ∫₀ᵗ C* e^(A*(t−τ)) B* dτ
         = ∫₀ᵗ [ 1 1 ] [ e^(−3(t−τ)) 0 ; 0 e^(−4(t−τ)) ] [ 3 ; −1 ] dτ
         = ∫₀ᵗ ( 3 e^(−3(t−τ)) − e^(−4(t−τ)) ) dτ
         = 3 e^(−3t) ∫₀ᵗ e^(3τ) dτ − e^(−4t) ∫₀ᵗ e^(4τ) dτ
         = 3 e^(−3t) · (1/3)( e^(3t) − 1 ) − e^(−4t) · (1/4)( e^(4t) − 1 )
         = ( 1 − e^(−3t) ) − 0.25 ( 1 − e^(−4t) )
         = 0.75 − e^(−3t) + 0.25 e^(−4t)

e. Impulse response:

    y(t) = C e^(At) B us(t) + D δ(t)
         = [ 1 1 ] [ e^(−3t) 0 ; 0 e^(−4t) ] [ 3 ; −1 ] us(t) + 0 · δ(t)
         = ( 3 e^(−3t) − e^(−4t) ) us(t)

f. Transfer function:

    H(s) = C* (sI − A*)⁻¹ B* + D*
         = [ 1 1 ] [ s+3 0 ; 0 s+4 ]⁻¹ [ 3 ; −1 ]
         = [ 1 1 ] [ 1/(s+3) 0 ; 0 1/(s+4) ] [ 3 ; −1 ]
         = 3/(s + 3) − 1/(s + 4)
         = ( 3(s + 4) − (s + 3) ) / ( (s + 3)(s + 4) )
         = (2s + 9) / (s² + 7s + 12)


Exercises chapter 5
Answer exercise 1. Pendulum system
a)

    m θ̈(t) = fez(t) + fe(t)
    fez(t) = −fz sin θ(t) = −m g sin θ(t)

We find:

    m θ̈(t) = −m g sin θ(t) + fe(t)

Choose x(t) = [ θ̇(t) ; θ(t) ], y(t) = θ(t), u(t) = fe(t), then

    ẋ1(t) = −g sin x2(t) + (1/m) u(t)
    ẋ2(t) = x1(t)
    y(t)  = x2(t)

b)
With u0 = fe,0 = 0.5 m g we find

    0 = −g sin x2,0 + 0.5 g
    0 = x1,0

and so x1,0 = 0 and sin x2,0 = 0.5, which gives x2,0 = y0 = π/6.

c)

    A = [ ∂f1/∂x1  ∂f1/∂x2 ; ∂f2/∂x1  ∂f2/∂x2 ] |(x0,u0) = [ 0  −g cos x2,0 ; 1  0 ] = [ 0  −(√3/2) g ; 1  0 ]

    B = [ ∂f1/∂u ; ∂f2/∂u ] |(x0,u0) = [ 1/m ; 0 ]

    C = [ ∂g/∂x1  ∂g/∂x2 ] |(x0,u0) = [ 0  1 ]

    D = ∂g/∂u |(x0,u0) = 0


Answer exercise 2. Electrical circuit


a)
Three relevant equations:

    v13³(t) = R i3(t)
    i1(t) = i2(t) + i3(t)
    C v̇1(t) = i2(t)

We derive:

    C v̇1(t) = −(1/R) v13³(t) + i1(t)

Choose x(t) = v1(t), y(t) = i2(t), u(t) = i1(t), then

    ẋ(t) = −(1/(RC)) x³(t) + (1/C) u(t)
    y(t) = C v̇1(t) = C ẋ(t) = −(1/R) x³(t) + u(t)

b)
With u0 = i1,0 = 4 we find

    0 = −(1/(RC)) x0³ + (1/C) u0

and so x0 = (R u0)^(1/3) = (8)^(1/3) = 2 and y0 = −(1/R) x0³ + u0 = 0.

c)

    A = ∂f/∂x |(x0,u0) = −(3/(RC)) x²(t) |(x0,u0) = −(3/(RC)) x0² = −24

    B = ∂f/∂u |(x0,u0) = 1/C = 4

    C = ∂h/∂x |(x0,u0) = −(3/R) x²(t) |(x0,u0) = −(3/R) x0² = −6

    D = ∂h/∂u |(x0,u0) = 1

Exercises chapter 6
Answer exercise 1. Block scheme
The transfer function from X2 to Y is given by:

    Y(s)/X2(s) = H2 / (1 + H2 H3).

[Block scheme of Figure 6.26 redrawn with the internal signals X1(s) and X2(s) indicated.]

In series with H1 this gives:

    Y(s)/X1(s) = H1 · H2 / (1 + H2 H3) = H1 H2 / (1 + H2 H3).

Finally the transfer function becomes:

    Y(s)/R(s) = ( H1 H2 / (1 + H2 H3) ) / ( 1 + H4 · H1 H2 / (1 + H2 H3) )
              = H1 H2 / (1 + H2 H3 + H1 H2 H4)
Answer exercise 2. Rise time and settling time
The closed-loop transfer function is given by

    kp / (s² + 6 s + 5 + kp) = kp / (s² + 2ζωn s + ωn²) = kp / ( (s + σ)² + ωd² ) = kp / ( (s + 3)² + kp − 4 )

a). We find

    λ1,2 = −3 ± √(4 − kp)

To obtain stability we need 4 − kp < 3², so kp > −5.

b). To find

    tr = 1.8/ωn < 0.2

we have to make ωn > 9. We find ωn = √(5 + kp) > 9, and so kp > 76.

c). To find

    ts = 4.6/σ < 2.3

we have to make σ > 2. However, we already found that σ = 3, which means that
ts < 2.3 for all kp > 0.
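A quick numerical check of parts a) and b) (a sketch assuming NumPy is available):

    import numpy as np

    def closed_loop_poles(kp):
        # poles of kp / (s^2 + 6 s + 5 + kp)
        return np.roots([1.0, 6.0, 5.0 + kp])

    print(closed_loop_poles(-5.0))     # one pole at 0: the stability boundary kp = -5
    poles = closed_loop_poles(76.0)
    print(np.abs(poles[0]))            # natural frequency sqrt(5 + 76) = 9, so tr = 1.8/9 = 0.2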


Answer exercise 3. Steady-state error

    E(s) = ( (s + 1)(s + 5) / ( (s + 1)(s + 5) + 3 kp ) ) · 1/s²

    ess = lim (s→0) s E(s)
        = lim (s→0) ( (s + 1)(s + 5) / ( (s + 1)(s + 5) + 3 kp ) ) · (1/s)
        = lim (s→0) ( 5 / (5 + 3 kp) ) · (1/s)
        = ∞

There is no kp that makes the steady-state error ess < 10%.

Index
angle, 18
angular acceleration, 18
angular velocity, 18, 20
autonomous system, 12

fluid mass flow rate, 21


fluid pressure, 21
fluid resistor, 21
force, 18, 20
forcing function, 43
frequency response, 71

basic elements, 17
basic signals, 17
harmonic function, 10
block diagrams, 113
Bounded-Input-Bounded-Output (BIBO) stability, 97
heat energy flow, 21
heat flow system, 21
homogeneous solution, 45, 51, 64, 84
capacitor, 19
first-order system, 45
causality, 14
LTI system, 64
closed-loop control, 122
second-order system, 51
convolution, 76
state system, 84
critically damped system, 52
impulse response, 45, 53
current, 19, 20
first-order system, 45
damped harmonic function, 11
second-order system, 53
damped natural frequency, 54
impulse response model, 76
damper, 18
impulse response of a state system, 93
damping ratio, 50, 54
inductor, 19
dynamical system, 12
inertia, 18
dynamical systems, 17
inhomogeneous solution of a state system, 86
initial conditions, 63
electrical system, 19
input, 12
electromechanical system, 20
input-output differential equation, 32
elementary signals, 8
input-output system, 12
equilibrium point, 107
Laplace transform, 32
feedback control, 113
linear time-invariant systems, 61
feedback controller, 123
linearity, 14
feedforward controller, 123
linearization, 108
final value theorem, 125
loop gain function, 117, 124
first-order system, 43
mass, 18
fluid capacitor, 21
mechanical system, 17
fluid flow system, 21

memoryless, 13
modal transformation, 83
model, 17
Newton's law, 23
Newton's law for rotation, 25
nonlinear dynamical systems, 103
nonlinear state system, 106

open-loop control, 122


output, 12
overdamped system, 52
overshoot, 54
P control, 133
partial fraction expansion, 74
particular solution, 45, 51, 63, 66
first-order system, 45
LTI system, 66
second-order system, 51
PD control, 134
peak time, 54
PI control, 133
PID control, 135
poles of a system, 63
position, 18
rectangular function, 8
relation between system descriptions, 88
resistor, 19
rise time, 54
rotational damper, 18
rotational electromechanical systems, 20
rotational mechanical system, 18
rotational spring, 18
second-order system, 47
settling time, 54
signal, 7
singularity functions, 12
spring, 18
stability, 14, 46, 58
first-order system, 46
second-order system, 58

stability: Bounded-Input-Bounded-Output (BIBO), 97
stability: convolution system, 97
stability: LTI input-output system, 96
stability: LTI state system, 97
state of a system, 36
state system, 36, 79
state transformation, 81
steady state, 107
steady state tracking, 125
step response, 45, 50
first-order system, 45
second-order system, 50
system, 12
system type, 127
temperature, 21
the homogeneous solution, 63
thermal capacitor, 21
thermal resistor, 21
time response of an LTI system, 63
time-invariance, 14
torque, 18, 20
transducer, 20
transfer function of a state system, 88
translational electromechanical system, 20
translational mechanical system, 18
undamped natural frequency, 50, 54
underdamped system, 52
unit impulse function, 8
unit parabolic function, 10
unit ramp function, 10
unit step function, 9
velocity, 20
voltage, 19, 20
zeros of a system, 63

Bibliography
[1] K.J. Åström and R.M. Murray. Feedback Systems: An Introduction for Scientists and
Engineers. Princeton University Press, Princeton, New Jersey, USA, 2009.
[2] C.M. Close, D.K. Frederick, and J.C. Newell. Modeling and Analysis of Dynamic
Systems. John Wiley & Sons, New York, USA, 2002.
[3] G.F. Franklin, J.D. Powell, and A. Emami-Naeini. Feedback Control of Dynamic Systems. Pearson/Prentice Hall, New Jersey, USA, 2006.
[4] T. Kailath. Linear Systems. Prentice Hall, New Jersey, USA, 1980.
[5] H. Kwakernaak and R. Sivan. Modern Signals and Systems. Prentice Hall, New Jersey,
USA, 1991.
[6] A.V. Oppenheim and Alan S. Willsky. Signals and Systems. Prentice Hall, New Jersey,
USA, 1983.
[7] D. Rowell and D.N. Wormley. System Dynamics, an Introduction. Prentice Hall, New
Jersey, USA, 1997.
[8] V. Verdult and T.J.J. van den Boom. Analysis of Continuous-time and Discrete-time
Dynamical System. Lecture Notes for the course ET2-039, faculty EWI, TU Delft,
2002.

