
MODERN CONTROL THEORY

Subject Code: 10EE55                      IA Marks: 25
No. of Lecture Hrs./Week: 04              Exam Hours: 03
Total No. of Lecture Hrs.: 52             Exam Marks: 100

PART - A
UNIT - 1 & UNIT - 2
STATE VARIABLE ANALYSIS AND DESIGN: Introduction, concept of state, state
variables and state model, state modeling of linear systems, linearization of state equations.
State space representation using physical variables, phase variables & canonical variables
10 Hours
UNIT - 3
Derivation of transfer function from state model, diagonalization, eigenvalues, eigenvectors,
generalized eigenvectors.
6 Hours
UNIT - 4
Solution of state equation, state transition matrix and its properties, computation using
Laplace transformation, power series method, Cayley-Hamilton method, concept of
controllability & observability, methods of determining the same
10 Hours
PART - B
UNIT - 5
POLE PLACEMENT TECHNIQUES: stability improvements by state feedback, necessary
& sufficient conditions for arbitrary pole placement, state regulator design, and design of
state observer, Controllers- P, PI, PID.
10 Hours


UNIT - 6
Non-linear systems: Introduction, behaviour of non-linear systems, common physical non-linearities: saturation, friction, backlash, dead zone, relay, multi-variable non-linearity.
3 Hours
UNIT - 7
Phase plane method, singular points, stability of nonlinear system, limit cycles, construction
of phase trajectories.
7 Hours
UNIT - 8
Liapunov stability criteria, Liapunov functions, direct method of Liapunov & the linear
system, Hurwitz criterion & Liapunov's direct method, construction of Liapunov functions
for nonlinear systems by Krasovskii's method.
6 Hours
TEXT BOOKS:
1. Digital Control & State Variable Methods - M. Gopal, 2nd edition, TMH, 2003.
2. Control Systems Engineering - I. J. Nagrath & M. Gopal, 3rd edition, New Age International (P) Ltd.

REFERENCE BOOKS:
1. State Space Analysis of Control Systems - Katsuhiko Ogata, Prentice Hall Inc.
2. Automatic Control Systems - Benjamin C. Kuo & Farid Golnaraghi, 8th edition, John Wiley & Sons, 2003.
3. Modern Control Engineering - Katsuhiko Ogata, PHI, 2003.
4. Control Engineering: Theory and Practice - M. N. Bandyopadhyay, PHI, 2007.
5. Modern Control Systems - Dorf & Bishop, Pearson Education, 1998.

CONTENTS

1. UNIT - 1: STATE VARIABLE ANALYSIS AND DESIGN
2. UNIT - 2: STATE SPACE REPRESENTATION
3. UNIT - 3: DERIVATION OF TRANSFER FUNCTION FROM STATE MODEL
4. UNIT - 4: SOLUTION OF STATE EQUATIONS
5. UNIT - 5: POLE PLACEMENT TECHNIQUES
6. UNIT - 6: NON-LINEAR SYSTEMS
7. UNIT - 7: PHASE PLANE ANALYSIS
8. UNIT - 8: STABILITY ANALYSIS

PART -A

UNIT - 1 & UNIT - 2


STATE VARIABLE ANALYSIS AND DESIGN: Introduction, concept of state, state
variables and state model, state modeling of linear systems, linearization of state equations.
State space representation using physical variables, phase variables & canonical variables
10 Hours

State Variable Analysis & Design


State Variable Analysis or State Space Analysis: The state variable approach is a powerful technique for the analysis and design of control systems.
State space analysis is a modern approach and is well suited to analysis on digital computers. It gives the complete internal state of the system, taking all initial conditions into account.
Why do we need state space analysis?
The conventional approach used to study the behaviour of linear time invariant control systems uses time domain or frequency domain methods. When performance specifications are given for single input, single output linear time invariant systems, the system can be designed using the root locus. When time domain specifications are given, the root locus technique is employed in designing the system. If frequency domain specifications are given, frequency response plots like Bode plots are used in designing the system.
In conventional methods, the systems are modelled using the transfer function approach, which is the ratio of the Laplace transform of the output to that of the input, neglecting all initial conditions.

The drawbacks in the transfer function model and analysis are,



1. The transfer function is defined only under zero initial conditions.
2. The transfer function is applicable only to linear time invariant systems.
3. It is restricted to single input and single output systems.
4. It does not provide information regarding the internal state of the system.
5. The classical methods like root locus, Bode plot etc. are basically trial and error procedures, which fail to give the optimal solution required.


State variable analysis can be applied to any type of system, such as:
- Linear systems
- Non-linear systems
- Time invariant systems
- Time varying systems
- Multiple input and multiple output systems
The analysis can also be carried out with initial conditions included.

Advantages of state variable analysis


1. It is a convenient tool for MIMO systems.
2. It provides a uniform platform for representing time-invariant systems, time-varying systems, linear systems as well as nonlinear systems.
3. It can describe the dynamics of almost all types of systems (mechanical, electrical, biological, economic, social systems, etc.).
4. The analysis can be performed with initial conditions.
5. The variables used to represent the system can be any variables in the system.
6. Using this analysis, the internal states of the system at any time instant can be predicted.
7. As the method involves matrix algebra, it can be conveniently adopted for digital computers.


Comparison: Classical vs. Modern Control

Classical Control (Linear)
- Developed in 1920-1950
- Frequency domain analysis & design (transfer function based)
- Based on SISO models
- Deals with input and output variables
- Well-developed robustness concepts (gain/phase margins)
- No controllability/observability inference
- No optimality concerns
- Well-developed concepts and very much in use in industry

Modern Control (Linear)
- Developed in 1950-1980
- Time domain analysis and design (differential equation based)
- Based on MIMO models
- Deals with input, output and state variables
- Robustness concepts not well developed
- Controllability/observability can be inferred
- Optimality issues can be incorporated
- Fairly well developed and slowly gaining popularity in industry

State: The state is the condition of a system at any time instant 't'.

State variable: A set of variables which describe the state of the system at any time instant are called state variables.
OR
The state of a dynamic system is the smallest set of variables (called state variables) such that the knowledge of these variables at t = t0, together with the knowledge of the input for t ≥ t0, completely determines the behaviour of the system for any time t ≥ t0.

State space: The set of all possible values which the state vector X(t) can assume at time t forms the state space of the system.

State vector: It is an (n x 1) column matrix whose elements are the state variables of the system (where n is the order of the system); it is denoted by X(t).

State Variable Selection

Typically, the number of state variables (i.e. the order of the system)

is equal to the number of independent energy storage elements. However,


there are exceptions!

Is there a restriction on the selection of the state variables ?

YES! All state variables should be linearly independent and they must
collectively describe the system completely.

State Space Formulation


In the state variable formulation of a system, in general, a system consists of m inputs, p outputs and n state variables. The state space representation of the system may be visualized as shown in Figure 1.1.
Let,
State variables  = x1(t), x2(t), x3(t), ..., xn(t)
Input variables  = u1(t), u2(t), u3(t), ..., um(t)
Output variables = y1(t), y2(t), y3(t), ..., yp(t)

[Figure 1.1: Block diagram of a control system with inputs u1(t), ..., um(t), outputs y1(t), ..., yp(t) and internal state variables x1(t), ..., xn(t).]
The different variables may be represented by vectors (column matrices) as shown below:

Input vector           U(t) = [u1(t)  u2(t) ... um(t)]^T
Output vector          Y(t) = [y1(t)  y2(t) ... yp(t)]^T
State variable vector  X(t) = [x1(t)  x2(t) ... xn(t)]^T

The state variable representation can be arranged in the form of n first order differential equations.

ẋ1 = dx1/dt = f1(x1, x2, x3, ..., xn; u1, u2, ..., um)
ẋ2 = dx2/dt = f2(x1, x2, x3, ..., xn; u1, u2, ..., um)
.
.
ẋn = dxn/dt = fn(x1, x2, x3, ..., xn; u1, u2, ..., um)
Any 'n' dimensional time invariant system has state equations in the functional form

Ẋ(t) = f(X, U)      ...... State equation ......(1)

while the outputs of such a system depend on the state of the system and the instantaneous inputs. The functional output equation can be written as

Y(t) = g(X, U)      ...... Output equation ......(2)


State Model Of Linear System

State model of a system consist of the state equation & output equation.

The state equation of the system is a function of state variables and inputs as defined

by equation 1.
For linear time invariant systems the first derivatives of the state variables can be expressed as linear combinations of the state variables and inputs:

ẋ1 = a11 x1 + a12 x2 + ............. + a1n xn + b11 u1 + ............. + b1m um
ẋ2 = a21 x1 + a22 x2 + ............. + a2n xn + b21 u1 + ............. + b2m um

.
.
ẋn = an1 x1 + an2 x2 + ............. + ann xn + bn1 u1 + ............. + bnm um

In matrix form the above equations can be expressed compactly as

Ẋ(t) = A X(t) + B U(t)


State space analysis
Classical control theory vs. modern control theory
The development of control system analysis and design can be divided into three eras. In the first era we have classical control theory, which deals with techniques developed before 1950. Classical control embodies such methods as root locus, Bode, Nyquist and Routh-Hurwitz. These methods have in common the use of transfer functions in the complex frequency (s) domain, emphasis on the use of graphical techniques, the use of feedback and the use of simplifying assumptions to approximate the time response. Since computers were not available at that time, a great deal of emphasis was placed on developing methods that were amenable to manual computation and graphics. A major limitation of classical control methods was the use of single input, single output (SISO) methods. Multivariable (i.e. multiple input multiple output, or MIMO) systems were analyzed and designed one loop at a time. Also, the use of transfer functions and the frequency domain limited one to linear time invariant systems.
In the second era, we have modern control (which is not so modern any longer),

which refers to state space methods developed in the late 1950s and early 1960s. In modern control, system models are written directly in the time domain, and analysis and design are done in the time domain. It should be noted that before Laplace transforms and transfer functions became popular in the 1920s, engineers were studying systems in the time domain. Therefore the resurgence of time domain analysis was not unusual, but it was triggered by the development of computers and advances in numerical analysis. Because computers were available, it was no longer necessary to develop analysis and design methods that were strictly manual. An engineer could use computers to numerically solve or simulate large systems that were nonlinear and time varying. State space methods removed the previously mentioned limitations of classical control. The period of the 1960s was the heyday of modern control.

System representation in state variable form


This chapter introduces the concept of state variables and the various means of representing control systems in state variable form. Each method of state variable representation results in a system description in terms of n first order differential equations, as opposed to the usual single nth order equation. A convenient tool for this new system representation is matrix notation.

System state and state variable


It is important to stress at the outset that the concept of system state is, first of all, a physical concept. However, it is often convenient to define it in terms of a mathematical model. Here this mathematical model is assumed to consist of ordinary differential equations which have a unique solution for all inputs and initial conditions. It is in terms of this mathematical model that the system state, or simply state, is defined.
Definition: The state of a system at any time t0 is the minimum set of numbers X1(t0), X2(t0), ..., Xn(t0) which, along with the input to the system for t ≥ t0, is sufficient to determine the behaviour of the system for all t ≥ t0.

In other words, the state of a system represents the minimum amount of information that we need to know about a system at t0 such that its future behaviour can be determined without reference to the input before t0.

The idea of state is familiar from a knowledge of the physical world and the means of solving the differential equations used to model the physical world. Consider a ball flying through the air. Intuitively we feel that if we know the ball's position and velocity, we also know its future behaviour. It is on this basis that an outfielder positions himself to catch a ball. Exactly the same information is needed to solve a differential equation model of the problem. Consider for example the second order differential equation

ẍ + a ẋ + b x = f(t)

The solution to this equation can be found as the sum of the forced response, due to f(t), and the natural or unforced response, i.e. the solution of the homogeneous equation

ẍ + a ẋ + b x = 0

If X1(t), X2(t), ..., Xn(t) are the state variables chosen for the system, then the initial conditions of the state variables plus the inputs u(t) for t > 0 should be sufficient to decide the future behaviour, i.e. the outputs y(t) for t > 0. Note that the state variables need not be physically measurable or observable quantities. Practically, however, it is convenient to choose easily measurable quantities. The number of state variables is then equal to the order of the differential equation, which is normally equal to the number of energy storage elements in the system.

State equations for linear systems in matrix form


The state of a linear time invariant nth order system is represented by the following set of n first order differential equations with constant coefficients in terms of the n state variables X1, X2, ..., Xn.

Ẋ1 = a11 X1 + a12 X2 + --------- + a1n Xn + b11 U1 + ------ + b1m Um
Ẋ2 = a21 X1 + a22 X2 + --------- + a2n Xn + b21 U1 + ------ + b2m Um
.
Ẋn = an1 X1 + an2 X2 + --------- + ann Xn + bn1 U1 + ------ + bnm Um


In matrix form the above equations may be written as

[Ẋ1]   [a11 a12 ----------- a1n] [x1]   [b11 b12 ----------- b1m] [u1]
[Ẋ2] = [a21 a22 ----------- a2n] [x2] + [b21 b22 ----------- b2m] [u2]
[ .]   [ .                   . ] [ .]   [ .                   . ] [ .]
[Ẋn]   [an1 an2 ----------- ann] [xn]   [bn1 bn2 ----------- bnm] [um]

Ẋ is called the derivative of the state vector, whose size is (n x 1)
X is called the state vector, whose size is (n x 1)
A is called the system matrix, whose size is (n x n)
B is called the input matrix, whose size is (n x m)
U is called the input vector, whose size is (m x 1)


Output equation
The state variables X1(t), ..., Xn(t) represent the dynamic state of a system. The system output (or outputs) may be some of the state variables themselves or, ordinarily, linear combinations of them:

Y1 = C11 X1 + C12 X2 + ---------- + C1n Xn
Y2 = C21 X1 + C22 X2 + ---------- + C2n Xn
.
Yp = Cp1 X1 + Cp2 X2 + ---------- + Cpn Xn


In matrix form,

[y1]   [c11 c12 ----------- c1n] [x1]
[y2] = [c21 c22 ----------- c2n] [x2]
[ .]   [ .                   . ] [ .]
[yp]   [cp1 cp2 ----------- cpn] [xn]

or Y = CX, where
Y = output vector of size (p x 1)
C = transmission matrix of size (p x n)
X = state vector of size (n x 1)
Sometimes the output is a function of both the state variables and the inputs. For this general case,



Y = CX + DU
or

[Y1]   [c11 c12 ----------- c1n] [x1]   [D11 D12 -------- D1m] [u1]
[Y2] = [c21 c22 ----------- c2n] [x2] + [D21 D22 -------- D2m] [u2]
[ .]   [ .                   . ] [ .]   [ .                 . ] [ .]
[Yp]   [cp1 cp2 ----------- cpn] [xn]   [Dp1 Dp2 -------- Dpm] [um]

The D matrix is of size (p x m).
State Model
The state equation of a system determines its dynamic state and the output equation gives its output at any time t > 0, provided the state at t = 0 and the control forces for t ≥ 0 are known. These two equations together form the state model of the system. The state model of a linear system is therefore given by

Ẋ = AX + BU
Y = CX + DU          (1)
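As an illustration of the state model above, the following Python sketch (assuming NumPy and SciPy are available) assembles A, B, C, D for a made-up second-order SISO system and simulates it with a non-zero initial state, something a transfer function model cannot accommodate directly; the numbers are purely illustrative.

import numpy as np
from scipy.signal import StateSpace, lsim

# Hypothetical second-order, single-input, single-output state model
# x_dot = A x + B u,  y = C x + D u
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # system matrix (n x n)
B = np.array([[0.0],
              [1.0]])          # input matrix (n x m)
C = np.array([[1.0, 0.0]])     # output matrix (p x n)
D = np.array([[0.0]])          # direct transmission matrix (p x m)

sys = StateSpace(A, B, C, D)

# Simulate with a non-zero initial state and a unit-step input
t = np.linspace(0.0, 10.0, 500)
u = np.ones_like(t)
x0 = np.array([1.0, 0.0])      # initial conditions enter directly, unlike a transfer function
t_out, y, x = lsim(sys, U=u, T=t, X0=x0)
print(y[-1])                   # steady-state output of this example is approximately 0.5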

State Model of SISO linear and time invariant system.


If we let m = 1 and p = 1 in the state model of a multiple input multiple output linear time invariant system, we obtain the following state model for a SISO linear system:

Ẋ = AX + b u
Y = c X + d u          (2)

where b is an (n x 1) column vector and c is a (1 x n) row vector.


State Model using phase variables ( BUSH FORM)


Let us now consider how the state model defined by equation (2) may be obtained for an
nth order SISO system whose describing differential equation relating output y with input
u is given by
d^n y/dt^n + a(n-1) d^(n-1)y/dt^(n-1) + a(n-2) d^(n-2)y/dt^(n-2) + ------------ + a1 dy/dt + a0 y = b0 u          (3)

where a(n-1), a(n-2), ..., a1, a0 are constant coefficients and y(0), dy/dt(0), ..., d^(n-1)y/dt^(n-1)(0) are the initial conditions.

To arrive at the state model of equation (3) it is rewritten in shorthand form as

y^(n) + a(n-1) y^(n-1) + a(n-2) y^(n-2) + -------------- + a1 ẏ + a0 y = b0 u          (4)

We first define the state variables X1, X2, ..., Xn, which can be done in many possible ways. A convenient way is to define

X1 = y
X2 = ẏ
.
Xn = y^(n-1)

With the above definition of state variables, equation (4) is reduced to a set of n first order differential equations given below:

Ẋ1 = ẏ = X2
Ẋ2 = ÿ = X3
.
Ẋ(n-1) = y^(n-1) = Xn
Ẋn = y^(n) = -a0 X1 - a1 X2 - a2 X3 - ----- - a(n-1) Xn + b0 u



The above equations result in the following state equations, Ẋ = AX + b u, with

    [ 0    1    0   ...   0      ]        [ 0  ]
    [ 0    0    1   ...   0      ]        [ 0  ]
A = [ .    .    .         .      ] ,  b = [ .  ]
    [ 0    0    0   ...   1      ]        [ 0  ]
    [-a0  -a1  -a2  ...  -a(n-1) ]        [ b0 ]

It is to be noted that the matrix A has a special form. It has all 1s in the upper off-diagonal, its last row is comprised of the negatives of the coefficients of the original differential equation, and all other elements are zero. This form of matrix A is known as the BUSH FORM. The set of state variables which yield the BUSH FORM for the matrix A are called phase variables.

When A is in BUSH FORM, the vector b has the speciality that all its elements except the last are zero. In fact A and b, and therefore the state equation, can be written directly by inspection of the linear differential equation.
The output being y = X1, the output equation is given by Y = CX, where C = [1  0 ------ 0].
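A minimal sketch of how the Bush (phase-variable) form can be assembled directly from the coefficients of the differential equation, assuming Python with NumPy; the helper name bush_form and the example coefficients are hypothetical, chosen only for illustration.

import numpy as np

def bush_form(a, b0):
    """Build the phase-variable (Bush form) state model for
       y^(n) + a[n-1] y^(n-1) + ... + a[1] y' + a[0] y = b0 u,
       where a = [a0, a1, ..., a_(n-1)]."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)      # 1s on the upper off-diagonal
    A[-1, :] = -np.asarray(a)       # last row: negated coefficients
    b = np.zeros((n, 1))
    b[-1, 0] = b0                   # only the last element of b is non-zero
    C = np.zeros((1, n))
    C[0, 0] = 1.0                   # output y = x1
    return A, b, C

# Made-up example: y''' + 6y'' + 11y' + 6y = 2u
A, b, C = bush_form([6.0, 11.0, 6.0], 2.0)
print(A)     # last row is [-6, -11, -6]
print(b.T)   # [[0, 0, 2]]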
Note: There is one more state model called canonical state model . we shall consider this
model after going through transfer function.

Derivation of transfer function from a given state model


Having obtained the state model we next consider the problem of determining transfer
function from a given state model of SISO / MIMO systems.
1) SISO SYSTEM

u(s) -----> [ G(s) ] -----> y(s)

G(s) is called the transfer function, defined as

G(s) = y(s)/u(s),   i.e.   y(s) = G(s) u(s)          -------- (1)

The state model is given by

Ẋ(t) = A X(t) + B u          ----------- (2)
Y(t) = C X(t) + D u          ----------- (3)

Taking the Laplace transformation on both sides of equations (2) and (3) and neglecting initial conditions, we get

s X(s) = A X(s) + B u(s)          ----- (4)
Y(s) = C X(s) + D u(s)            ----- (5)

From (4),
(sI - A) X(s) = B u(s)
or X(s) = (sI - A)^(-1) B u(s)          -------- (6)

Substituting (6) in (5),
Y(s) = C (sI - A)^(-1) B u(s) + D u(s)
Y(s) = [C (sI - A)^(-1) B + D] u(s)          --------------- (7)

Comparing (7) with (1),
G(s) = C (sI - A)^(-1) B + D          --------- (8)

An important observation that needs to be made here is that while the state model is not
unique, the transfer function is unique. i.e. the transfer function of equation (8) must work
out to be the same irrespective of which particular state model is used to describe the
system.
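A short symbolic check of Eq. (8), assuming SymPy is available; the second-order matrices used here are made up purely for illustration.

import sympy as sp

s = sp.symbols('s')

# A made-up second-order SISO state model, used only to illustrate Eq. (8)
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])
D = sp.Matrix([[0]])

# G(s) = C (sI - A)^(-1) B + D
G = C * (s * sp.eye(2) - A).inv() * B + D
print(sp.cancel(G[0]))   # -> 1/(s**2 + 3*s + 2)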
(ii) MIMO SYSTEM
[Figure: An m-input, p-output MIMO system G(s) with inputs u1(s), u2(s), ..., um(s) and outputs y1(s), y2(s), ..., yp(s).]

G(s) = C (sI - A)^(-1) B + D,   where Y(s) = G(s) U(s)

The G(s) matrix is called the transfer matrix, of size (p x m);
the Y(s) matrix is of size (p x 1);
the U(s) matrix is of size (m x 1).

y1(s) = G11(s) u1(s) + G12(s) u2(s) + ------------------- + G1m(s) um(s)

The transfer function G11(s) = y1(s)/u1(s) with u2(s) = u3(s) = ----- = um(s) = 0.
Similarly G12(s), -------- are defined.



Derivation of state models from transfer functions

More often the system model is known in the transfer function form. It therefore becomes necessary to have methods available for converting the transfer function model to a state model. The process of going from the transfer function to the state equations is called decomposition of the transfer function. In general there are three basic ways of decomposing a transfer function: direct decomposition, parallel decomposition and cascaded decomposition. Each has its own advantages and is best suited to a particular situation.
Decomposition of TF
1. Converting a TF with a constant term in the numerator. Phase variables are variables that are successive derivatives of each other. For a transfer function of the form C(s)/R(s) = 24/(s^3 + 9s^2 + 26s + 24), cross-multiplying gives

s^3 C(s) + 9 s^2 C(s) + 26 s C(s) + 24 C(s) = 24 R(s)

Taking the inverse Laplace transform,


2. Converting a TF with a polynomial in the numerator

Taking the inverse Laplace transform,


C(s) = s^2 X1(s) + 7 s X1(s) + 2 X1(s)

c(t) = ẍ1 + 7 ẋ1 + 2 x1
     = x3 + 7 x2 + 2 x1
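If SciPy is available, the decomposition of a transfer function into a state model can be checked numerically with signal.tf2ss. The transfer function below has the same shape as the example above (polynomial numerator over a cubic denominator), but its exact coefficients are assumed for illustration, since the original figures are not reproduced here.

from scipy.signal import tf2ss, ss2tf

# An assumed strictly proper transfer function with a polynomial numerator:
#   G(s) = (s^2 + 7s + 2) / (s^3 + 9s^2 + 26s + 24)
num = [1.0, 7.0, 2.0]
den = [1.0, 9.0, 26.0, 24.0]

A, B, C, D = tf2ss(num, den)   # one valid state model (controller canonical form)
print(A)   # SciPy places the negated denominator coefficients in the first row
print(C)   # numerator coefficients appear in the output row

# The state model is not unique, but it must reproduce the same transfer function:
num2, den2 = ss2tf(A, B, C, D)
print(num2, den2)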

3. Cascading form
The denominator of TF is to be in factor form


UNIT-2
STATE-SPACE REPRESENTATION

Introduction

The classical control theory and methods (such as root locus) that we have been using in class to date are based on a simple input-output description of the plant, usually expressed as a transfer function. These methods do not use any knowledge of the interior structure of the plant, limit us to single-input single-output (SISO) systems, and as we have seen allow only limited control of the closed-loop behavior when feedback control is used. Modern control theory solves many of these limitations by using a much richer description of the plant dynamics. The so-called state-space description provides the dynamics as a set of coupled first-order differential equations in a set of internal variables known as state variables, together with a set of algebraic equations that combine the state variables into physical output variables.
1.1 Definition of System State

The concept of the state of a dynamic system refers to a minimum set of variables, known as state variables, that fully describe the system and its response to any given set of inputs [1-3]. In particular, a state-determined system model has the characteristic that:
A mathematical description of the system in terms of a minimum set of variables xi(t), i = 1, . . . , n, together with knowledge of those variables at an initial time t0 and the system inputs for time t ≥ t0, is sufficient to predict the future system state and outputs for all time t > t0.
This definition asserts that the dynamic behavior of a state-determined system is completely characterized by the response of the set of n variables xi(t), where the number n is defined to be the order of the system.
The system shown in Fig. 1 has two inputs u1(t) and u2(t), and four output variables y1(t), . . . , y4(t). If the system is state-determined, knowledge of its state variables (x1(t0), x2(t0), . . . , xn(t0)) at some initial time t0, and the inputs u1(t) and u2(t) for t ≥ t0, is sufficient to determine all future behavior of the system. The state variables are an internal description of the system which completely characterize the system state at any time t, and from which any output variables yi(t) may be computed. Large classes of engineering, biological, social and economic systems may be represented by state-determined system models. System models constructed with the pure and ideal (linear) one-port elements (such as mass, spring and damper elements) are state-determined


Figure 1: System inputs and outputs.


system models. For such systems the number of state variables, n, is equal to the number of independent energy storage elements in the system. The values of the state variables at any time t specify the energy of each energy storage element within the system and therefore the total system energy, and the time derivatives of the state variables determine the rate of change of the system energy. Furthermore, the values of the system state variables at any time t provide sufficient information to determine the values of all other variables in the system at that time.
There is no unique set of state variables that describe any given system; many different sets of variables may be selected to yield a complete system description. However, for a given system the order n is unique, and is independent of the particular set of state variables chosen. State variable descriptions of systems may be formulated in terms of physical and measurable variables, or in terms of variables that are not directly measurable. It is possible to mathematically transform one set of state variables to another; the important point is that any set of state variables must provide a complete description of the system. In this note we concentrate on a particular set of state variables that are based on energy storage variables in physical systems.

1.2 The State Equations

A standard form for the state equations is used throughout system dynamics. In the standard form the mathematical description of the system is expressed as a set of n coupled first-order ordinary differential equations, known as the state equations, in which the time derivative of each state variable is expressed in terms of the state variables x1(t), . . . , xn(t) and the system inputs u1(t), . . . , ur(t). In the general case the form of the n state equations is:

ẋ1 = f1(x, u, t)
ẋ2 = f2(x, u, t)
 .      .
ẋn = fn(x, u, t)          (1)

where ẋi = dxi/dt and each of the functions fi(x, u, t), (i = 1, . . . , n) may be a general nonlinear, time varying function of the state variables, the system inputs, and time.
It is common to express the state equations in a vector form, in which the set of n state variables is written as a state vector x(t) = [x1(t), x2(t), . . . , xn(t)]^T, and the set of r inputs is written as an input vector u(t) = [u1(t), u2(t), . . . , ur(t)]^T. Each state variable is a time varying component of the column vector x(t).
This form of the state equations explicitly represents the basic elements contained in the definition of a state determined system. Given a set of initial conditions (the values of the xi at some time t0) and the inputs for t ≥ t0, the state equations explicitly specify the derivatives of all state variables. The value of each state variable at some time t later may then be found by direct integration. The system state at any instant may be interpreted as a point in an n-dimensional state space, and the dynamic state response x(t) can be interpreted as a path or trajectory traced out in the state space.
In vector notation the set of n equations in Eqs. (1) may be written:
x = f (x, u, t) .

(2)

where f (x, u, t) is a vector function with n components fi (x, u, t).


In this note we restrict attention primarily to a description of systems that are linear and time-invariant (LTI), that is, systems described by linear differential equations with constant coefficients. For an LTI system of order n, and with r inputs, Eqs. (1) become a set of n coupled first-order linear differential equations with constant coefficients:

ẋ1 = a11 x1 + a12 x2 + . . . + a1n xn + b11 u1 + . . . + b1r ur
ẋ2 = a21 x1 + a22 x2 + . . . + a2n xn + b21 u1 + . . . + b2r ur
 .
ẋn = an1 x1 + an2 x2 + . . . + ann xn + bn1 u1 + . . . + bnr ur          (3)

where the coefficients aij and bij are constants that describe the system. This set of n equations defines the derivatives of the state variables to be a weighted sum of the state variables and the system inputs.
Equations (3) may be written compactly in matrix form:

ẋ = Ax + Bu          (5)
In this note we use bold-faced type to denote vector quantities. Upper case letters are
used to denote general matrices while lower case letters denote column vectors. See
Appendix A for an introduction to matrix notation and operations.


where the state vector x is a column vector of length n, the input vector u is a column vector of length r, A is an n x n square matrix of the constant coefficients aij, and B is an n x r matrix of the coefficients bij that weight the inputs.

1.3 Output Equations
A system output is defined to be any system variable of interest. A description of a physical system in terms of a set of state variables does not necessarily include all of the variables of direct engineering interest. An important property of the linear state equation description is that all system variables may be represented by a linear combination of the state variables xi and the system inputs ui. An arbitrary output variable in a system of order n with r inputs may be written:

y(t) = c1 x1 + c2 x2 + . . . + cn xn + d1 u1 + . . . + dr ur          (6)

where the ci and di are constants. If a total of m system variables are defined as outputs, the m such equations may be written as:
y1 = c11 x1 + c12 x2 + . . . + c1n xn + d11 u1 + . . . + d1r ur
y2 = c21 x1 + c22 x2 + . . . + c2n xn + d21 u1 + . . . + d2r ur
 .
ym = cm1 x1 + cm2 x2 + . . . + cmn xn + dm1 u1 + . . . + dmr ur          (7)

or in matrix form:

[y1]   [c11 c12 . . . c1n] [x1]   [d11 . . . d1r] [u1]
[y2] = [c21 c22 . . . c2n] [x2] + [d21 . . . d2r] [u2]          (8)
[ .]   [ .             . ] [ .]   [ .          . ] [ .]
[ym]   [cm1 cm2 . . . cmn] [xn]   [dm1 . . . dmr] [ur]

The output equations, Eqs. (8), are commonly written in the compact form:

y = Cx + Du          (9)

where y is a column vector of the output variables yi(t), C is an m x n matrix of the constant coefficients cij that weight the state variables, and D is an m x r matrix of the constant coefficients dij that weight the system inputs. For many physical systems the matrix D is the null matrix, and the output equation reduces to a simple weighted combination of the state variables:

y = Cx          (10)

1.4 State Equation Based Modeling Procedure

The complete system model for a linear time-invariant system consists of (i) a set of n state equations, defined in terms of the matrices A and B, and (ii) a set of output equations that relate any output variables of interest to the state variables and inputs, expressed in terms of the C and D matrices. The task of modeling the system is to derive the elements of the matrices, and to write the system model in the form:

ẋ = Ax + Bu          (11)
y = Cx + Du          (12)

The matrices A and B are properties of the system and are determined by the system structure and elements. The output equation matrices C and D are determined by the particular choice of output variables.
The overall modeling procedure developed in this chapter is based on the following steps:
1. Determination of the system order n and selection of a set of state variables from the linear graph system representation.
2. Generation of a set of state equations and the system A and B matrices using a well defined methodology. This step is also based on the linear graph system description.
3. Determination of a suitable set of output equations and derivation of the appropriate C and D matrices.

Block Diagram Representation of Linear Systems Described by State Equations

The matrix-based state equations express the derivatives of the state variables explicitly in terms of the states themselves and the inputs. In this form, the state vector is expressed as the direct result of a vector integration. The block diagram representation is shown in Fig. 2. This general block diagram shows the matrix operations from input to output in terms of the A, B, C, D matrices, but does not show the path of individual variables.
In state-determined systems, the state variables may always be taken as the outputs of integrator blocks. A system of order n has n integrators in its block diagram. The derivatives of the state variables are the inputs to the integrator blocks, and each state equation expresses a derivative as a sum of weighted state variables and inputs. A detailed block diagram representing a system of order n may be constructed directly from the state and output equations as follows:
Step 1: Draw n integrator (1/s) blocks, and assign a state variable to the output of each block.


Figure 2: Vector block diagram for a linear system described by state-space system dynamics.
Step 2: At the input to each block (which represents the derivative of its state variable) draw a summing element.
Step 3: Use the state equations to connect the state variables and inputs to the summing elements through scaling operator blocks.
Step 4: Expand the output equations and sum the state variables and inputs through a set of scaling operators to form the components of the output.

Example 1
Draw a block diagram for the general second-order, single-input single-output system:

[ẋ1]   [a11 a12] [x1]   [b1]
[ẋ2] = [a21 a22] [x2] + [b2] u(t)

y(t) = [c1  c2] [x1; x2] + d u(t).          (i)

Solution: The block diagram shown in Fig. 3 was drawn using the four steps
described above.

3. Transformation From State-Space Equations to Classical Form

The transfer function and the classical input-output differential equation for any system variable may be found directly from a state space representation through the Laplace transform. The following example illustrates the general method for a first order system.


Figure 3: Block diagram for a state-equation based second-order system.

Example 2
Find the transfer function and a single first-order differential equation relating the output y(t) to the input u(t) for a system described by the first-order linear state and output equations:

dx/dt = a x(t) + b u(t)          (i)
y(t) = c x(t) + d u(t)           (ii)

Solution: The Laplace transform of the state equation is

s X(s) = a X(s) + b U(s),          (iii)

which may be rewritten with the state variable X(s) on the left-hand side:

(s - a) X(s) = b U(s).          (iv)

Then dividing by (s - a), solve for the state variable:

X(s) = [ b/(s - a) ] U(s),          (v)

and substitute into the Laplace transform of the output equation Y(s) = c X(s) + d U(s):

Y(s) = [ bc/(s - a) + d ] U(s)
     = [ (ds + (bc - ad)) / (s - a) ] U(s)          (vi)

The transfer function is:

H(s) = Y(s)/U(s) = (ds + (bc - ad)) / (s - a).          (vii)

The differential equation is found directly:

(s - a) Y(s) = (ds + (bc - ad)) U(s),          (viii)

and rewriting as a differential equation:

dy/dt - a y = d du/dt + (bc - ad) u(t).          (ix)
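A quick symbolic confirmation of the result (vii), assuming SymPy; a, b, c and d are kept symbolic.

import sympy as sp

s, a, b, c, d = sp.symbols('s a b c d')

# First-order state model of Example 2: dx/dt = a x + b u, y = c x + d u,
# whose transfer function is H(s) = c (s - a)^(-1) b + d.
H = c * b / (s - a) + d
print(sp.together(H))   # -> (b*c + d*(s - a))/(s - a), i.e. (ds + bc - ad)/(s - a)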

Classical representations of higher-order systems may be derived in an analogous set of steps by using the Laplace transform and matrix algebra. A set of linear state and output equations written in standard form

ẋ = Ax + Bu          (13)
y = Cx + Du          (14)

may be rewritten in the Laplace domain. The system equations are then

sX(s) = AX(s) + BU(s)
Y(s) = CX(s) + DU(s)          (15)

and the state equations may be rewritten:

sX(s) - AX(s) = [sI - A] X(s) = BU(s)          (16)

where the term sI creates an n x n matrix with s on the leading diagonal and zeros elsewhere. (This step is necessary because matrix addition and subtraction is only defined for matrices of the same dimension.) The matrix [sI - A] appears frequently throughout linear system theory; it is a square n x n matrix with elements directly related to the A matrix:

(s a11 )
a12

a
(s

a22 )

21
[sI A] =
.

. ..
.
an1

Dept. of EEE, SJBIT

an2

a1n
a2n
.
.
(s ann )

(17)

Page 31

Modern Control Theory

10EE55

The state equations, written in the form of Eq. (16), are a set of n simultaneous operational expressions. The common methods of solving linear algebraic equations, for example Gaussian elimination, Cramer's rule, the matrix inverse, elimination and substitution, may be directly applied to linear operational equations such as Eq. (16). For low-order single-input single-output systems the transformation to a classical formulation may be performed in the following steps:
1. Take the Laplace transform of the state equations.
2. Reorganize each state equation so that all terms in the state variables are on the left-hand side.
3. Treat the state equations as a set of simultaneous algebraic equations and solve for those state variables required to generate the output variable.
4. Substitute for the state variables in the output equation.
5. Write the output equation in operational form and identify the transfer function.
6. Use the transfer function to write a single differential equation between the output variable and the system input.
This method is illustrated in the following two examples.

Example 3
Use the Laplace transform method to derive a single differential equation for the capacitor voltage vC in the series R-L-C electric circuit shown in Fig. 4.
Solution: The linear graph method of state equation generation selects the

Figure 4: A series RLC circuit.


capacitor voltage vC(t) and the inductor current iL(t) as state variables, and generates the following pair of state equations:

[v̇C]   [  0     1/C ] [vC]   [  0  ]
[i̇L] = [ -1/L  -R/L ] [iL] + [ 1/L ] Vin          (i)

The required output equation is:

y(t) = [1  0] [vC; iL]          (ii)

Step 1: In Laplace transform form the state equations are:

sVC(s) = 0·VC(s) + (1/C) IL(s) + 0·Vs(s)
sIL(s) = -(1/L) VC(s) - (R/L) IL(s) + (1/L) Vs(s)          (iii)

Step 2: Reorganize the state equations:

sVC(s) - (1/C) IL(s) = 0·Vs(s)          (iv)
-(1/L) VC(s) + [s + R/L] IL(s) = (1/L) Vs(s)          (v)

Step 3: In this case we have two simultaneous operational equations in the state variables vC and iL. The output equation requires only vC. If Eq. (iv) is multiplied by [s + R/L], and Eq. (v) is multiplied by 1/C, and the equations added, IL(s) is eliminated:

[s (s + R/L) + 1/LC] VC(s) = (1/LC) Vs(s)          (vi)

Step 4: The output equation is y = vC. Operate on both sides of Eq. (vi) by 1/[s^2 + (R/L)s + 1/LC] and write in quotient form:

VC(s) = [ (1/LC) / (s^2 + (R/L)s + 1/LC) ] Vs(s)          (vii)

Step 5: The transfer function H(s) = VC(s)/Vs(s) is:

H(s) = (1/LC) / (s^2 + (R/L)s + 1/LC)          (viii)

Step 6: The differential equation relating vC to Vs is:

d^2 vC/dt^2 + (R/L) dvC/dt + (1/LC) vC = (1/LC) Vs(t)          (ix)
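A numerical cross-check of Example 3, assuming SciPy; the R, L, C values below are made up, since the example itself is symbolic.

import numpy as np
from scipy.signal import ss2tf

# Series R-L-C circuit of Example 3 (component values chosen only for the check)
R, L, C = 1.0, 0.5, 0.25

A = np.array([[0.0, 1.0 / C],
              [-1.0 / L, -R / L]])
B = np.array([[0.0],
              [1.0 / L]])
Cout = np.array([[1.0, 0.0]])   # output y = vC
D = np.array([[0.0]])

num, den = ss2tf(A, B, Cout, D)
print(num)   # approximately [0, 0, 8]  ->  numerator 1/(L*C)
print(den)   # approximately [1, 2, 8]  ->  H(s) = (1/LC) / (s^2 + (R/L)s + 1/LC)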


Cramer's Rule, for the solution of a set of linear algebraic equations, is a useful method to apply to the solution of these equations. In solving for the variable xi in a set of n linear algebraic equations, such as Ax = b, the rule states:

xi = det[A^(i)] / det[A]          (18)

where A^(i) is another n x n matrix formed by replacing the ith column of A with the vector b.
If

[sI - A] X(s) = BU(s)          (19)

then the relationship between the ith state variable and the input is

Xi(s) = ( det[sI - A]^(i) / det[sI - A] ) U(s)          (20)

where (sI - A)^(i) is defined to be the matrix formed by replacing the ith column of (sI - A) with the column vector B. The differential equation is

det[sI - A] xi = det[(sI - A)^(i)] uk(t).          (21)

Example 4
Use Cramer's Rule to solve for vL(t) in the electrical system of Example 3.
Solution: From Example 3 the state equations are:

[v̇C]   [  0     1/C ] [vC]   [  0  ]
[i̇L] = [ -1/L  -R/L ] [iL] + [ 1/L ] Vin(t)          (i)

and the output equation is:

vL = -vC - R iL + Vs(t).          (ii)

In the Laplace domain the state equations are:

[  s     -1/C   ] [VC(s)]   [  0  ]
[ 1/L   s + R/L ] [IL(s)] = [ 1/L ] Vin(s).          (iii)

The voltage VC(s) is given by:

VC(s) = ( det[(sI - A)^(1)] / det[sI - A] ) Vin(s)
      = ( det([0, -1/C; 1/L, s + R/L]) / det([s, -1/C; 1/L, s + R/L]) ) Vin(s)
      = ( (1/LC) / (s^2 + (R/L)s + (1/LC)) ) Vin(s).          (iv)

The current IL(s) is:

IL(s) = ( det[(sI - A)^(2)] / det[sI - A] ) Vin(s)
      = ( det([s, 0; 1/L, 1/L]) / det([s, -1/C; 1/L, s + R/L]) ) Vin(s)
      = ( (s/L) / (s^2 + (R/L)s + (1/LC)) ) Vin(s).          (v)

The output equation may be written directly from the Laplace transform of Eq. (ii):

VL(s) = -VC(s) - R IL(s) + Vs(s)
      = [ -(1/LC)/(s^2 + (R/L)s + (1/LC)) - (R/L)s/(s^2 + (R/L)s + (1/LC)) + 1 ] Vs(s)
      = [ ( -(1/LC) - (R/L)s + s^2 + (R/L)s + (1/LC) ) / (s^2 + (R/L)s + (1/LC)) ] Vs(s)
      = [ s^2 / (s^2 + (R/L)s + (1/LC)) ] Vs(s),          (vi)

giving the differential equation

d^2 vL/dt^2 + (R/L) dvL/dt + (1/LC) vL(t) = d^2 Vs/dt^2.          (vii)

For a single-input single-output (SISO) system the transfer function may be found directly by evaluating the inverse matrix

X(s) = (sI - A)^(-1) B U(s).          (22)

Using the definition of the matrix inverse:

[sI - A]^(-1) = adj[sI - A] / det[sI - A],          (23)

X(s) = ( adj[sI - A] B / det[sI - A] ) U(s),          (24)

and substituting into the output equations gives:

Y(s) = C [sI - A]^(-1) B U(s) + D U(s)
     = ( C [sI - A]^(-1) B + D ) U(s).          (25)

Expanding the inverse in terms of the determinant and the adjoint matrix yields:

Y(s) = ( ( C adj(sI - A) B + det[sI - A] D ) / det[sI - A] ) U(s)
     = H(s) U(s)          (26)

so that the required differential equation may be found by expanding:

det[sI - A] Y(s) = [ C adj(sI - A) B + det[sI - A] D ] U(s)          (27)

and taking the inverse Laplace transform of both sides.

Example 5
Use the matrix inverse method to find a differential equation relating vL(t) to Vs(t) in the system described in Example 3.
Solution: The state vector, written in the Laplace domain,

X(s) = [sI - A]^(-1) B U(s)          (i)

from the previous example is:

[VC(s)]   [  s     -1/C   ]^(-1) [  0  ]
[IL(s)] = [ 1/L   s + R/L ]      [ 1/L ] Vin(s).          (ii)

The determinant of [sI - A] is

det[sI - A] = s^2 + (R/L)s + (1/LC),          (iii)

and the adjoint of [sI - A] is

adj[  s     -1/C   ]   [ s + R/L   1/C ]
   [ 1/L   s + R/L ] = [  -1/L      s  ].          (iv)

From the previous example, the output equation vL(t) = -vC - R iL + Vs(t) specifies that C = [-1  -R] and D = [1]. The transfer function, Eq. (26), is:


Since

C adj(sI - A) B = [-1  -R] [ s + R/L   1/C ] [  0  ]
                           [  -1/L      s  ] [ 1/L ]  =  -(R/L)s - 1/(LC),          (vi)

the transfer function is

H(s) = ( -(R/L)s - 1/(LC) + (s^2 + (R/L)s + (1/LC)) ) / (s^2 + (R/L)s + (1/LC))
     = s^2 / (s^2 + (R/L)s + (1/LC)),          (vii)

which is the same result found by using Cramer's rule in Example 4.
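The adjugate/determinant formula of Eq. (26) can also be verified symbolically for this example, assuming SymPy is available.

import sympy as sp

s, R, L, C = sp.symbols('s R L C', positive=True)

A = sp.Matrix([[0, 1/C], [-1/L, -R/L]])
B = sp.Matrix([[0], [1/L]])
Cv = sp.Matrix([[-1, -R]])   # output v_L = -v_C - R i_L + V_s  =>  C = [-1, -R], D = [1]
D = sp.Matrix([[1]])

sI_A = s * sp.eye(2) - A
# H(s) = (C adj(sI - A) B + det(sI - A) D) / det(sI - A), Eq. (26)
H = (Cv * sI_A.adjugate() * B + sI_A.det() * D)[0] / sI_A.det()
print(sp.cancel(H))   # -> s**2 / (s**2 + R*s/L + 1/(C*L))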

Transformation from Classical Form to State-Space Representation

The block diagram provides a convenient method for deriving a set of state equations for a system that is specified in terms of a single input/output differential equation. A set of n state variables can be identified as the outputs of integrators in the diagram, and state equations can be written from the conditions at the inputs to the integrator blocks (the derivatives of the state variables). There are many methods for doing this; we present here one convenient state equation formulation that is widely used in control system theory.
Let the differential equation representing the system be of order n, and without loss of generality assume that the order of the polynomial operators on both sides is the same:

(an s^n + a(n-1) s^(n-1) + ... + a0) Y(s) = (bn s^n + b(n-1) s^(n-1) + ... + b0) U(s).          (28)

We may multiply both sides of the equation by s^(-n) to ensure that all differential operators have been eliminated:

(an + a(n-1) s^(-1) + ... + a1 s^(-(n-1)) + a0 s^(-n)) Y(s) =
(bn + b(n-1) s^(-1) + ... + b1 s^(-(n-1)) + b0 s^(-n)) U(s),          (29)

from which the output may be specified in terms of a transfer function. If we define a dummy variable Z(s), and split Eq. (29) into two parts
variable Z (s), and split Eq. (29) into two parts


Figure 5: Block diagram of a system represented by a classical differential equation.

Z(s) = U(s) / (an + a(n-1) s^(-1) + ... + a1 s^(-(n-1)) + a0 s^(-n))          (30)
Y(s) = (bn + b(n-1) s^(-1) + ... + b1 s^(-(n-1)) + b0 s^(-n)) Z(s)          (31)

Eq. (30) may be solved for U(s),

U(s) = (an + a(n-1) s^(-1) + ... + a1 s^(-(n-1)) + a0 s^(-n)) Z(s)          (32)

and rearranged to generate a feedback structure that can be used as the basis for a block diagram:

Z(s) = (1/an) U(s) - [ (a(n-1)/an)(1/s) + ... + (a1/an)(1/s^(n-1)) + (a0/an)(1/s^n) ] Z(s).          (33)

The dummy variable Z(s) is specified in terms of the system input u(t) and a weighted sum of successive integrations of itself. Figure 5 shows the overall structure of this direct-form block diagram. A string of n cascaded integrator (1/s) blocks, with Z(s) defined at the input to the first block, is used to generate the feedback terms Z(s)/s^i, i = 1, . . . , n, in Eq. (33). Equation (31) serves to combine the outputs from the integrators into the output y(t).
A set of state equations may be found from the block diagram by assigning the state variables xi(t) to the outputs of the n integrators. Because of the direct cascade connection of the integrators, the state equations take a very simple form. By inspection:

ẋ1 = x2
ẋ2 = x3
 .
ẋ(n-1) = xn
ẋn = -(a0/an) x1 - (a1/an) x2 - ... - (a(n-1)/an) xn + (1/an) u(t).          (34)

In matrix form these equations are

[ẋ1     ]   [   0       1       0     . . .      0      ] [x1     ]   [  0   ]
[ẋ2     ]   [   0       0       1     . . .      0      ] [x2     ]   [  0   ]
[  .     ] = [   .       .       .                .      ] [  .     ] + [  .   ] u(t)          (35)
[ẋ(n-1) ]   [   0       0       0     . . .      1      ] [x(n-1) ]   [  0   ]
[ẋn     ]   [-a0/an  -a1/an  -a2/an  . . .  -a(n-1)/an  ] [xn     ]   [1/an  ]
The A matrix has a very distinctive form. Each row, except the bottom one, is filled with zeros except for a one in the position just above the leading diagonal. Equation (35) is a common form of the state equations, used in control system theory and known as the phase variable or companion form. This form leads to a set of state variables which may not correspond to any physical variables within the system.
The corresponding output relationship is specified by Eq. (31) by noting that Xi(s) = Z(s)/s^(n+1-i):

y(t) = b0 x1 + b1 x2 + b2 x3 + ... + b(n-1) xn + bn z(t).          (36)

But z(t) = dxn/dt, which is found from the nth state equation in Eq. (34). When substituted into Eq. (36) the output equation is:

y(t) = [ (b0 - bn a0/an)   (b1 - bn a1/an)   . . .   (b(n-1) - bn a(n-1)/an) ] x + (bn/an) u(t).          (37)

Example 6
Draw a direct form realization of a block diagram, and write the state equations in phase variable form, for a system with the differential equation

d^3y/dt^3 + 7 d^2y/dt^2 + 19 dy/dt + 13 y = 13 du/dt + 26 u          (i)

Solution: The system order is 3, and using the structure shown in Fig. 5 the block diagram is as shown in Fig. 6.
The state and output equations are found directly from Eqs. (35) and (37):

[ẋ1]   [  0    1    0 ] [x1]   [ 0 ]
[ẋ2] = [  0    0    1 ] [x2] + [ 0 ] u(t),          (ii)
[ẋ3]   [-13  -19   -7 ] [x3]   [ 1 ]

Figure 6: Block diagram of the transfer operator of a third-order system found by a direct realization.

y(t) = [26  13  0] [x1; x2; x3] + [0] u(t).          (iii)
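A numerical check of Example 6, assuming SciPy: converting the phase-variable model back to a transfer function should recover the numerator 13s + 26 and the denominator s^3 + 7s^2 + 19s + 13.

import numpy as np
from scipy.signal import ss2tf

# Phase-variable model obtained in Example 6
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-13.0, -19.0, -7.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[26.0, 13.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(num)   # approximately [0, 0, 13, 26]  ->  numerator 13s + 26
print(den)   # approximately [1, 7, 19, 13]  ->  denominator s^3 + 7s^2 + 19s + 13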

The Matrix Transfer Function

For a multiple-input multiple-output system Eq. (22) is written in terms of the r-component input vector U(s):

X(s) = [sI - A]^(-1) B U(s)          (38)

generating a set of n simultaneous linear equations, where the matrix B is n x r. The m-component system output vector Y(s) may be found by substituting this solution for X(s) into the output equation as in Eq. (25):

Y(s) = C [sI - A]^(-1) B U(s) + D U(s)
     = ( C [sI - A]^(-1) B + D ) U(s)          (39)

and expanding the inverse in terms of the determinant and the adjoint matrix,

Y(s) = ( ( C adj(sI - A) B + det[sI - A] D ) / det[sI - A] ) U(s)
     = H(s) U(s),          (40)

where H(s) is defined to be the matrix transfer function relating the output vector Y(s) to the input vector U(s):

H(s) = ( C adj(sI - A) B + det[sI - A] D ) / det[sI - A]          (41)
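For a MIMO system, H(s) can be obtained column by column, one input at a time. A sketch assuming SciPy, with a made-up 2-input, 2-output system:

import numpy as np
from scipy.signal import ss2tf

# A made-up 2-input, 2-output system used only to illustrate the transfer matrix H(s)
A = np.array([[0.0, 1.0],
              [-6.0, -5.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0],
              [0.0, 1.0]])
D = np.zeros((2, 2))

# ss2tf works one input at a time; column j of H(s) comes from input=j
for j in range(B.shape[1]):
    num, den = ss2tf(A, B, C, D, input=j)
    print(f"column {j}: numerators = {num}, common denominator = {den}")
# Every column shares the same denominator det(sI - A) = s^2 + 5s + 6.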


UNIT-3 DERIVATION OF TRANSFER FUNCTION FROM STATE MODEL


UNIT - 3
Derivation of transfer function from state model, diagonalization, eigenvalues, eigenvectors,
generalized eigenvectors.
6 Hours

Converting transfer-functions to state models using canonical forms


The state variables that produce a state model are not, in general, unique. However, there exist several
common methods of producing state models from transfer functions. Most control theory texts contain
developments of a standard form called the control canonical form, see, e.g., [1]. Another is the phase
variable canonical form.
Control Canonical Form
When the order of a transfer function's denominator is higher than the order of its numerator, the transfer function is called strictly proper. Consider the general, strictly proper third-order transfer function

Y(s)/U(s) = (b2 s^2 + b1 s + b0) / (s^3 + a2 s^2 + a1 s + a0).          (1)

Dividing each term by the highest order of s yields

Y(s)/U(s) = (b2/s + b1/s^2 + b0/s^3) / (1 + a2/s + a1/s^2 + a0/s^3)          (2)

which is a function containing numerous 1/s terms, or integrators. There are various signal-flow graph
configurations that will produce this function. One possibility is the control canonical form shown in
Figure 1. The state model for the signal flow configuration in Figure 1 is
[ẋ1]   [  0    1    0 ] [x1]   [ 0 ]
[ẋ2] = [  0    0    1 ] [x2] + [ 0 ] u          (3)
[ẋ3]   [-a0  -a1  -a2 ] [x3]   [ 1 ]

y = [b0  b1  b2] [x1; x2; x3] + [0] u

[Figure 1. Control canonical form block diagram: a chain of three 1/s integrators with outputs X3(s), X2(s), X1(s); forward gains b2, b1, b0 summed to form Y(s); and feedback gains -a2, -a1, -a0 from the integrator outputs back to the summing junction at U(s).]

The validity of (3) is tested by rearranging (2) to yield

Y(s) = -a2 (1/s) Y(s) - a1 (1/s^2) Y(s) - a0 (1/s^3) Y(s) + b2 (1/s) U(s) + b1 (1/s^2) U(s) + b0 (1/s^3) U(s)          (4)

and then substituting the expression for Y(s) from the state model output equation in (3), which yields

b0 X1(s) + b1 X2(s) + b2 X3(s) = b2 (1/s) U(s) + b1 (1/s^2) U(s) + b0 (1/s^3) U(s)
    - a2 (1/s) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ]
    - a1 (1/s^2) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ]
    - a0 (1/s^3) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ].          (5)
Equation (5) can be rewritten as

( b0 X1(s) + b1 X2(s) + b2 X3(s) ) / U(s) = Y(s)/U(s) = (b2/s + b1/s^2 + b0/s^3) / (1 + a2/s + a1/s^2 + a0/s^3),

which is identical to equation (2) proving that the control canonical form is valid.
As an example consider the transfer function

C(s)/R(s) = (2 s^2 + 8 s + 6) / (s^3 + 8 s^2 + 26 s + 6) = (2/s + 8/s^2 + 6/s^3) / (1 + 8/s + 26/s^2 + 6/s^3).

The coefficients corresponding with the control canonical form are:

b0 = 6,  b1 = 8,  b2 = 2,  a0 = 6,  a1 = 26,  and  a2 = 8.

The state model based on control canonical form is

[ẋ1]   [  0    1    0 ] [x1]   [ 0 ]
[ẋ2] = [  0    0    1 ] [x2] + [ 0 ] r
[ẋ3]   [ -6  -26   -8 ] [x3]   [ 1 ]

y = [6  8  2] [x1; x2; x3] + [0] r.
Examination of Figure 1 shows one potential benefit of manipulating systems to conform to control
canonical form. If X1(s) happened to carry units of inches (position), then X2(s) might be inches/second
(velocity), and X3(s) might be inches/second/second (acceleration).
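A quick numerical check of the control canonical form model above, assuming SciPy: converting it back to a transfer function should recover the original numerator and denominator.

import numpy as np
from scipy.signal import ss2tf

# Control-canonical-form model derived above for
#   C(s)/R(s) = (2s^2 + 8s + 6) / (s^3 + 8s^2 + 26s + 6)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -26.0, -8.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[6.0, 8.0, 2.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(num)   # approximately [0, 2, 8, 6]
print(den)   # approximately [1, 8, 26, 6]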
Phase Variable Canonical Form
Development of the phase variable canonical form as presented in [2] begins with the general fourth-order, strictly proper transfer function

Y(s)/U(s) = (b3 s^3 + b2 s^2 + b1 s + b0) / (s^4 + a3 s^3 + a2 s^2 + a1 s + a0)
          = (b3 s^-1 + b2 s^-2 + b1 s^-3 + b0 s^-4) / (1 + a3 s^-1 + a2 s^-2 + a1 s^-3 + a0 s^-4)          (6)

which can be rearranged to read

Y(s) = b3 s^-1 U(s) + b2 s^-2 U(s) + b1 s^-3 U(s) + b0 s^-4 U(s)
       - a3 s^-1 Y(s) - a2 s^-2 Y(s) - a1 s^-3 Y(s) - a0 s^-4 Y(s)          (7)

The denominator in (6) is fourth order and leads one to conclude that a block diagram consisting of four 1/s terms may be useful. Construction of the phase variable canonical form is initiated by setting the output to

Y(s) = b0 s^-4 U(s)          (8)

which may be represented using four integrators as shown in Figure 2.


[Figure 2. Block diagram of the output equation: U(s) passes through four cascaded 1/s integrators and a gain b0 to produce Y(s).]

After substituting Y(s) from (8) into (7), the expression becomes

Y(s) = b3 s^-1 U(s) + b2 s^-2 U(s) + b1 s^-3 U(s) + b0 s^-4 U(s)
       - a3 b0 s^-5 U(s) - a2 b0 s^-6 U(s) - a1 b0 s^-7 U(s) - a0 b0 s^-8 U(s)          (9)

It is left for the reader to show that the additional terms when applied to the block diagram results in
Figure 3.8 (b) in [2]. The reader may also wish to examine the similarities between control canonical
form and phase variable form.
It is important to note that a particular transfer function may be represented by block diagrams of
many different canonical forms, yielding many different valid state models.
The advances made in microprocessors, microcomputers, and digital signal processors have accelerated the growth of digital control systems theory. Discrete-time systems are dynamic systems in which the system variables are defined only at discrete instants of time.
The terms sampled-data control systems, discrete-time control systems and digital control systems have all been used interchangeably in the control system literature. Strictly speaking, sampled data are pulse-amplitude modulated signals obtained by some means of sampling an analog signal. Digital signals are generated by means of digital transducers or digital computers, often in digitally coded form. The term discrete-time systems, in a broad sense, describes all systems having some form of digital or sampled signals.
Discrete-time systems differ from continuous-time systems in that the signals for a discrete-time system are in sampled-data form. In contrast to continuous-time systems, the operation of discrete-time systems is described by a set of difference equations. The analysis and design of discrete-time systems may be effectively carried out by use of the z-transform, which evolved from the Laplace transform as a special form.


UNIT-4 SOLUTION OF STATE EQUATIONS


UNIT - 4
Solution of state equation, state transition matrix and its properties, computation using Laplace transformation,
power series method, Cayley-Hamilton method, concept of controllability & observability, methods of determining
the same
10 Hours
After obtaining the various mathematical models such as the physical, phase and canonical forms of state models, the next step in the analysis is to obtain the solution of the state equation. From the solution of the state equation, the transient response can then be obtained for a specific input. This completes the analysis of the control system in state space.
The solution of the state equation consists of two parts: the homogeneous solution and the forced solution. The model with zero input is referred to as the homogeneous system and with non-zero input as the forced system. The solution of the state equation with zero input is called the zero input response (ZIR). The solution of the state model with zero state (zero initial conditions) is referred to as the zero state response (ZSR). Hence the total response is the sum of the ZIR and the ZSR.
a) Zero Input Response (ZIR):

Consider the n-th order LTI system with zero input. The state equation of such a system, with the usual notation, is given by

Ẋ(t) = A X(t)          (1)
with x(0) = x0          (2)

Let the solution of Eq. (1) be of the form

x(t) = e^(At) · k          (3)

where e^(At) is the matrix exponential function defined as

e^(At) = I + At + A^2 t^2 / 2! + A^3 t^3 / 3! + ......          (4)

and k is a constant vector.

Differentiating Eq. (3) gives Ẋ(t) = A e^(At) k, which satisfies Eq. (1). Applying the initial condition at t = t0,

k = (e^(A t0))^(-1) x(t0) = e^(-A t0) x(t0)

so that

x(t) = e^(At) e^(-A t0) x(t0) = e^(A(t - t0)) x(t0)

With t0 = 0 (zero initial time),

x(t) = e^(At) x(0)          (8)

From Eq. (8) it is observed that the initial state x(0) = x0 at t = 0 is driven to the state x(t) at time t by the matrix exponential function e^(At). This matrix exponential function, which transfers the initial state of the system in t seconds, is called the STATE TRANSITION MATRIX (STM) and is denoted by Φ(t).

Properties of the State Transition Matrix Φ(t):

1). Φ(0) = I: the proof follows from the definition,

Φ(t) = e^(At) = I + At + A^2 t^2 / 2! + A^3 t^3 / 3! + ......

Substituting t = 0 results in Φ(0) = I.

2). Φ^(-1)(t) = Φ(-t):
Proof: Consider Φ(t) = I + At + A^2 t^2 / 2! + ....
Φ(-t) = I - At + A^2 t^2 / 2! - A^3 t^3 / 3! + .....
Multiplying Φ(t) and Φ(-t),
Φ(t) Φ(-t) = I
Pre-multiplying both sides by Φ^(-1)(t),
Φ(-t) = Φ^(-1)(t)
3). (t1 ) (t 2 ) = (t1 + t 2 ) :
2

A 2 t1
A3t1
+
Proof: (t1 ) = I + At1 +
+ ....
2!
3!
2
3
A2t2
A3t2
+
and (t 2 ) = I + At 2 +
+ ....
2!
3!
A2(t +t )2 A3(t +t )3
(t1)(t2 ) = I +A(t1 +t2)+ 1 2 + 1 3 +.......
2!
3!
Dept. of EEE, SJBIT

Page 58

Modern Control Theory

10EE55

= (t1 + t 2 )
4). (t 2 t1 ) (t1 t 0 ) = (t 2 t 0 ) :
(This property implies that state transition processes can be divided into a number of
sequential transitions), i.e. the transition from t0 to t2,
x(t 2 ) = (t 2 t 0 ) x(t 0 ) ; is equal to transition from t0 to t1 and from t1 to t2 i.e.,
x(t1 ) = (t1 t 0 ) x(t 0 )
x(t 2 ) = (t 2 t1 ) x(t1 )
A (t 2 t1 ) + ....
2!
2
A (t1 t 0 )
+ ....
(t1 t 0 ) = I + A(t1 t 0 ) +
2!
A2(t +t )2 A3(t +t )3
(t2 t1)(t1 t0) = I +A(t2 +t0)+ 2 0 + 2 0 +.......
2!
3!
= (t 2 t 0 )

Proof: (t 2 t1 ) = I + A(t 2 t1 ) +

Evaluation of the State Transition Matrix φ(t):

A few methods to evaluate the STM are:
1) Power series method
2) Inverse Laplace transform method
3) Cayley-Hamilton theorem

1) Power series method:
The series is infinite, but it can be truncated after a few terms for an approximate result.
By definition,
φ(t) = e^(At) = I + At + A²t²/2! + A³t³/3! + ......
Example: Compute the STM by the power series method, given
a) A = [0 1; -1 -2]        b) A = [1 1; 0 1]

a) φ(t) = I + At + A²t²/2! + A³t³/3! + ......
       = [1 0; 0 1] + [0 1; -1 -2] t + [-1 -2; 2 3] t²/2! + ....
       = [1 - t²/2! + ...        t - t² + ... ;
          -t + t² - ...          1 - 2t + 3t²/2! - ...]
The entries are the series expansions of (1 + t)e^(-t), te^(-t), -te^(-t) and (1 - t)e^(-t), so
φ(t) = [(1 + t)e^(-t)    te^(-t) ;
        -te^(-t)         (1 - t)e^(-t)]

b) φ(t) = I + At + A²t²/2! + A³t³/3! + ......
       = [1 0; 0 1] + [1 1; 0 1] t + [1 2; 0 1] t²/2! + ....
       = [1 + t + t²/2! + ...     t + t² + t³/2! + ... ;
          0                       1 + t + t²/2! + ...]
       = [e^t    te^t ;
          0      e^t]
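The truncation involved in the power series method is easy to check numerically. The following
Python/NumPy sketch (an added illustration, not part of the original notes) sums the first few terms
of the series for matrix (a) above and compares the result with scipy.linalg.expm; the number of
terms retained is an arbitrary choice.

# Sketch: truncated power-series evaluation of the STM, checked against expm.
import numpy as np
from scipy.linalg import expm

def stm_power_series(A, t, terms=20):
    # phi(t) ~ I + At + (At)^2/2! + ... truncated after 'terms' terms
    phi = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k      # builds (At)^k / k!
        phi = phi + term
    return phi

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
t = 0.5
print(stm_power_series(A, t))          # truncated series
print(expm(A * t))                     # closed-form e^(At) for comparison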
2) Inverse Laplace Transform Method:
Consider the state equation with zero input,
x'(t) = A x(t),   x(0) given.
Taking the Laplace transform on both sides,
s X(s) - x(0) = A X(s)
[sI - A] X(s) = x(0)
Pre-multiplying both sides by [sI - A]⁻¹,
X(s) = [sI - A]⁻¹ x(0)
Taking the inverse Laplace transform on both sides,
x(t) = L⁻¹{[sI - A]⁻¹} x(0)
Comparing this with x(t) = e^(At) x(0), it follows that
e^(At) = STM = φ(t) = L⁻¹{[sI - A]⁻¹}

Example: Obtain the STM by the inverse Laplace transform method for the given system matrices
a) A = [1 1; 0 1]        b) A = [0 1; -1 -2]        c) A = [0 1; -2 -3]

a) [sI - A] = [s-1  -1; 0  s-1]
φ(s) = [sI - A]⁻¹ = [1/(s-1)    1/(s-1)² ;
                     0          1/(s-1)]
φ(t) = L⁻¹{φ(s)} = [e^t    te^t ;
                    0      e^t]

b) [sI - A] = [s  -1; 1  s+2]
φ(s) = [sI - A]⁻¹ = (1/(s+1)²) [s+2   1 ;
                                -1    s]
φ(t) = L⁻¹{φ(s)} = [(1 + t)e^(-t)    te^(-t) ;
                    -te^(-t)         (1 - t)e^(-t)]

c) [sI - A] = [s  -1; 2  s+3]
[sI - A]⁻¹ = (1/((s+1)(s+2))) [s+3   1 ;
                               -2    s]
Hence
φ(t) = L⁻¹{[sI - A]⁻¹} = [2e^(-t) - e^(-2t)      e^(-t) - e^(-2t) ;
                          -2e^(-t) + 2e^(-2t)    -e^(-t) + 2e^(-2t)]
3) STM by the Cayley-Hamilton Theorem:

This method is useful for large systems. The theorem states that every square matrix satisfies its
own characteristic equation. The theorem helps in evaluating a function of a matrix.
For an n x n matrix A, the matrix polynomial function f(A) is given by
f(A) = α0 I + α1 A + α2 A² + .... + α(n-1) A^(n-1) = φ(t)
where α0, α1, ..., α(n-1) are coefficients (functions of t) which can be evaluated from the
eigenvalues of matrix A as described below.

Step 1: For the given matrix, form the characteristic equation |λI - A| = 0 and find the
eigenvalues λ1, λ2, ..., λn.

Step 2 (Case 1): If all the eigenvalues are distinct, form n simultaneous equations
e^(λ1 t) = f(λ1) = α0 + α1 λ1 + α2 λ1² + .....
e^(λ2 t) = f(λ2) = α0 + α1 λ2 + α2 λ2² + .....
......
e^(λn t) = f(λn) = α0 + α1 λn + α2 λn² + .....
and solve for α0, α1, α2, ..., α(n-1).

(Case 2): If some eigenvalues are repeated, obtain an additional independent equation for each
repetition by differentiating the expansion of f(λ) with respect to λ and evaluating it at the
repeated eigenvalue, and then find the coefficients α0, α1, ..., α(n-1).

Step 3: Substitute the coefficients α0, α1, ..., α(n-1) in the function
f(A) = α0 I + α1 A + .... + α(n-1) A^(n-1) = φ(t)
Examples:
1) Find f(A) = A^10 for A = [0 1; -1 -2].

The characteristic equation is |λI - A| = 0:
|λ  -1; 1  λ+2| = λ(λ + 2) + 1 = (λ + 1)² = 0
Hence the eigenvalues are λ1 = -1, λ2 = -1 (repeated).
Since A is 2 x 2, the corresponding polynomial function is
f(λ) = λ^10 = α0 + α1 λ        (1)
At λ = -1: f(-1) = (-1)^10 = 1 = α0 - α1
Since this is a case of repeated eigenvalues, the second equation is obtained by differentiating
the expansion (1) with respect to λ:
d/dλ [λ^10] = 10 λ⁹ = α1;   at λ = -1,  α1 = -10.
Hence α0 = 1 + α1 = 1 - 10 = -9, and
f(A) = A^10 = α0 I + α1 A = -9 [1 0; 0 1] - 10 [0 1; -1 -2] = [-9  -10;  10  11]

2) Find the STM by the Cayley-Hamilton method, given
a) A = [1 1; 0 1]        b) A = [0 1; -1 -2]        c) A = [0 1; -2 -3]

a) Consider A = [1 1; 0 1]. The characteristic equation |λI - A| = 0 gives (λ - 1)² = 0, so
λ1 = λ2 = 1. Since A is 2 x 2 (a second-order system),
e^(λt) = α0 + α1 λ        (1)
At λ = 1:  e^t = α0 + α1        (2)
Differentiating Eq. (1) with respect to λ and substituting λ = 1,
t e^t = α1        (3)
From Eqs. (2) and (3): α0 = e^t - α1 = e^t - t e^t = e^t (1 - t).
Hence the STM is
φ(t) = e^(At) = α0 I + α1 A = e^t(1 - t)[1 0; 0 1] + t e^t [1 1; 0 1] = [e^t   te^t;   0   e^t]

b) A = [0 1; -1 -2]; the characteristic equation |λI - A| = 0 gives (λ + 1)² = 0, so λ1 = λ2 = -1.
For the second-order system,
e^(λt) = α0 + α1 λ        (1)
At λ = -1:  e^(-t) = α0 - α1        (2)
Differentiating Eq. (1) with respect to λ and substituting λ = -1,
t e^(-t) = α1
hence α0 = e^(-t)(1 + t) from Eq. (2).
The STM is φ(t) = e^(At) = α0 I + α1 A. On simplification,
φ(t) = [(1 + t)e^(-t)    te^(-t);   -te^(-t)    (1 - t)e^(-t)]

c) A = [0 1; -2 -3]; the characteristic equation |λI - A| = 0 gives λ² + 3λ + 2 = 0, so
λ1 = -1, λ2 = -2. The eigenvalues are distinct. The corresponding equations are
e^(λ1 t) = α0 + α1 λ1   =>   e^(-t) = α0 - α1        (1)
e^(λ2 t) = α0 + α1 λ2   =>   e^(-2t) = α0 - 2α1        (2)
From Eqs. (1) and (2), solving for α0 and α1,
α0 = 2e^(-t) - e^(-2t)   and   α1 = e^(-t) - e^(-2t).
Hence the STM will be φ(t) = α0 I + α1 A. On simplification,
φ(t) = [2e^(-t) - e^(-2t)      e^(-t) - e^(-2t) ;
        -2e^(-t) + 2e^(-2t)    -e^(-t) + 2e^(-2t)]
as before.

Determine the STM of the system matrix
A = [2 1 4; 0 2 0; 0 3 1]
by the Cayley-Hamilton method.

The characteristic equation |λI - A| = 0 gives (λ - 2)²(λ - 1) = 0, so the eigenvalues are
λ1 = 2, λ2 = 2, λ3 = 1.
The corresponding equations are
e^(2t) = α0 + 2α1 + 4α2        (1)
e^t    = α0 + α1 + α2        (2)
Differentiating the expansion e^(λt) = α0 + α1 λ + α2 λ² with respect to λ and substituting λ = 2,
t e^(2t) = α1 + 4α2        (3)
From Eqs. (1), (2) and (3), solving for α0, α1, α2:
α0 = 2t e^(2t) - 3e^(2t) + 4e^t
α1 = -3t e^(2t) + 4e^(2t) - 4e^t
α2 = t e^(2t) - e^(2t) + e^t
Hence the STM is φ(t) = α0 I + α1 A + α2 A²
= [e^(2t)    12e^t - 12e^(2t) + 13t e^(2t)    -4e^t + 4e^(2t) ;
   0         e^(2t)                            0 ;
   0         -3e^t + 3e^(2t)                   e^t]
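A quick numerical check of this worked example (an added illustration, not part of the original
notes) is to evaluate φ(t) = α0 I + α1 A + α2 A² at some instant and compare it with scipy's matrix
exponential:

# Sketch: verify the Cayley-Hamilton STM coefficients computed above.
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0, 4.0], [0.0, 2.0, 0.0], [0.0, 3.0, 1.0]])
t = 0.3
a0 = 2*t*np.exp(2*t) - 3*np.exp(2*t) + 4*np.exp(t)     # alpha_0(t)
a1 = -3*t*np.exp(2*t) + 4*np.exp(2*t) - 4*np.exp(t)    # alpha_1(t)
a2 = t*np.exp(2*t) - np.exp(2*t) + np.exp(t)           # alpha_2(t)
phi = a0*np.eye(3) + a1*A + a2*(A @ A)                 # phi(t) from the C-H coefficients
print(np.allclose(phi, expm(A*t)))                     # should print True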


Solution of the Non-homogeneous State Equation

Consider the state equation with the forced input u(t),
x'(t) = A x(t) + B u(t),   x(t0) = x0        (1)
Rearranging, x'(t) - A x(t) = B u(t). Pre-multiplying both sides by e^(-At),
e^(-At)[x'(t) - A x(t)] = e^(-At) B u(t)        (2)
Now consider
d/dt [e^(-At) x(t)] = -A e^(-At) x(t) + e^(-At) x'(t) = e^(-At)[x'(t) - A x(t)]
hence Eq. (2) can be written as
d/dt [e^(-At) x(t)] = e^(-At) B u(t)        (3)
Integrating both sides between the limits 0 and t,
e^(-At) x(t) - x(0) = ∫(0 to t) e^(-Aτ) B u(τ) dτ
e^(-At) x(t) = x(0) + ∫(0 to t) e^(-Aτ) B u(τ) dτ
Pre-multiplying both sides by e^(At),
x(t) = e^(At) x(0) + ∫(0 to t) e^(A(t-τ)) B u(τ) dτ
         (ZIR)              (ZSR)
i.e.  x(t) = φ(t) x(0) + ∫(0 to t) φ(t - τ) B u(τ) dτ.
If the initial time is t0 instead of 0, then
x(t) = φ(t - t0) x(t0) + ∫(t0 to t) φ(t - τ) B u(τ) dτ        (4)
Equation (4) is the solution of the state equation with input u(t); it represents the change
of state from x(t0) to x(t) under the input u(t).
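Equation (4) is exactly what a linear simulation routine evaluates numerically. As an added
illustration (not part of the original notes), the sketch below uses scipy.signal.lsim on the
system of Example 2 further below, with a unit-step input and a non-zero initial state, so both
the ZIR and ZSR terms are exercised.

# Sketch: numerical evaluation of x(t) = phi(t)x(0) + integral of phi(t-tau) B u(tau) dtau.
import numpy as np
from scipy.signal import lsim, StateSpace

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
t = np.linspace(0.0, 5.0, 501)
u = np.ones_like(t)                         # unit-step input
x0 = np.array([1.0, -1.0])                  # non-zero initial state
tout, y, x = lsim(StateSpace(A, B, C, D), U=u, T=t, X0=x0)
print(y[-1])                                # approaches 1/2, matching y(t) = (1 - e^(-2t))/2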
Example 1: Obtain the response of the system to a step input of 10 units, given the state model
x'(t) = [-1 1; -1 -10] x(t) + [0; 10] u(t)
y(t) = [1 0] x(t),   x(0) = 0

Solution: The solution of the state equation is
x(t) = e^(At) x(0) + ∫(0 to t) e^(A(t-τ)) B u(τ) dτ        (1)
Since x(0) = 0,
x(t) = ∫(0 to t) e^(A(t-τ)) B u(τ) dτ = ∫(0 to t) φ(t - τ) B u(τ) dτ        (2)
Now the state transition matrix φ(t) can be evaluated as φ(t) = e^(At) = L⁻¹{[sI - A]⁻¹}.
Here |sI - A| = s² + 11s + 11 = (s + a1)(s + a2) with a1 = 1.1125 and a2 = 9.8875, and
φ(t) = [1.0128e^(-a1 t) - 0.0128e^(-a2 t)      0.114e^(-a1 t) - 0.114e^(-a2 t) ;
        -0.114e^(-a1 t) + 0.114e^(-a2 t)       -0.0128e^(-a1 t) + 1.0128e^(-a2 t)]
Hence, with u(τ) = 10,
x(t) = ∫(0 to t) φ(t - τ) [0; 10] (10) dτ
     = [9.094 - 10.247e^(-a1 t) + 1.153e^(-a2 t) ;
        9.093 + 1.151e^(-a1 t) - 10.243e^(-a2 t)]
y(t) = C x(t) = [1 0] x(t) = 9.094 - 10.247e^(-a1 t) + 1.153e^(-a2 t)
where a1 = 1.1125 and a2 = 9.8875.
Example 2: Consider the system
x'(t) = [0 1; -2 -3] x(t) + [0; 1] u(t)
y(t) = [1 1] x(t),   x(0) = [1; -1]
Obtain the response for a unit-step input.

Solution: The state transition matrix (from the earlier examples) is
φ(t) = [2e^(-t) - e^(-2t)      e^(-t) - e^(-2t) ;
        -2e^(-t) + 2e^(-2t)    -e^(-t) + 2e^(-2t)]

ZIR = φ(t) x(0) = [2e^(-t) - e^(-2t) - e^(-t) + e^(-2t) ;  -2e^(-t) + 2e^(-2t) + e^(-t) - 2e^(-2t)]
    = [e^(-t) ;  -e^(-t)]

ZSR = ∫(0 to t) φ(t - τ) B u(τ) dτ
    = ∫(0 to t) [e^(-(t-τ)) - e^(-2(t-τ)) ;  -e^(-(t-τ)) + 2e^(-2(t-τ))] dτ
    = [1/2 - e^(-t) + (1/2)e^(-2t) ;   e^(-t) - e^(-2t)]

x(t) = ZIR + ZSR = [e^(-t) + 1/2 - e^(-t) + (1/2)e^(-2t) ;  -e^(-t) + e^(-t) - e^(-2t)]
     = [(1/2)(1 + e^(-2t)) ;  -e^(-2t)]

y(t) = C x(t) = [1 1] x(t) = (1/2)(1 + e^(-2t)) - e^(-2t) = (1/2)(1 - e^(-2t))
Example 3: x'(t) = [0 2; -2 -5] x(t) + [0; 1] u(t)
y(t) = [2 1] x(t),   x(0) = [1; 2],   with input u(t) = e^(-2t).

Solution: The state transition matrix is
φ(t) = [(4/3)e^(-t) - (1/3)e^(-4t)       (2/3)e^(-t) - (2/3)e^(-4t) ;
        -(2/3)e^(-t) + (2/3)e^(-4t)      -(1/3)e^(-t) + (4/3)e^(-4t)]
Then
x(t) = φ(t) x(0) + ∫(0 to t) φ(t - τ) B u(τ) dτ
     = [(10/3)e^(-t) - e^(-2t) - (4/3)e^(-4t) ;
        -(5/3)e^(-t) + e^(-2t) + (8/3)e^(-4t)]
Controllability and Observability


Consider the typical state diagram of a system.

The system has two state variables, X1(t) and X2(t). The control input u(t) affects the state
variable X1(t), but it cannot affect the state variable X2(t). Hence the state variable X2(t)
cannot be controlled by the input u(t) and the system is uncontrollable. In general, for an
n-th order system with n state variables, if any one state variable is uncontrollable by the
input u(t), the system is said to be UNCONTROLLABLE by the input u(t).
Definition:
The linear system given by
X'(t) = A X(t) + B u(t)
Y(t) = C X(t) + D u(t)
is said to be completely state controllable if there exists an unconstrained input vector u(t)
which transfers any initial state x(t0) of the system to any final state x(tf) in finite time
(tf - t0). If all initial states are controllable the system is completely controllable;
otherwise the system is uncontrollable.
Methods to determine controllability:
1) Gilbert's approach
2) Kalman's approach
Gilbert's approach:
This approach determines not only the controllability but also the uncontrollable state
variables, if the system is uncontrollable.

Consider the solution of the state equation
X'(t) = A X(t) + B u(t),   X(t0) = X(0),
namely
X(t) = e^(At) X(0) + ∫(0 to t) e^(A(t-τ)) B u(τ) dτ.
Assuming, without loss of generality, that the final state is the origin, X(t) = 0, the solution gives
X(0) = -∫(0 to t) e^(-Aτ) B u(τ) dτ.
For Gilbert's test the model is first transformed into the diagonal (canonical) form. The system is
state controllable if and only if none of the rows of the transformed input matrix (P⁻¹B) is zero.
If any row is zero, the corresponding canonical state variable cannot be influenced by the input
u(t) and the system is uncontrollable. The solution involves only the A and B matrices and is
independent of the C matrix; hence the controllability of the system is frequently referred to as
the controllability of the pair [A, B].
Example: Check the controllability of the system. If it is uncontrollable, identify the state
which is uncontrollable.
X'(t) = [-1 1; 0 -2] X(t) + [1; 0] u(t)
Solution: First, let us convert the model into diagonal form.
|λI - A| = |λ+1  -1; 0  λ+2| = (λ + 1)(λ + 2) = λ² + 3λ + 2 = 0
λ = -2,  λ = -1
Eigenvector associated with λ = -2:
(A + 2I) v1 = [1 1; 0 0] v1 = 0   =>   v1 = [1; -1]
Eigenvector associated with λ = -1:
(A + I) v2 = [0 1; 0 -1] v2 = 0   =>   v2 = [1; 0]
P = [v1 v2] = [1 1; -1 0],   P⁻¹ = [0 -1; 1 1]
P⁻¹AP = [0 -1; 1 1][-1 1; 0 -2][1 1; -1 0] = [-2 0; 0 -1]
P⁻¹B = [0 -1; 1 1][1; 0] = [0; 1]
As the vector P⁻¹B contains a zero element, the system is uncontrollable, and the canonical state
Z1(t) (the mode associated with λ = -2) is the uncontrollable one.

2) Kalman's approach:
This method determines the controllability of the system.
Statement: The system described by
x'(t) = A x(t) + B u(t)
y(t) = C x(t)
is said to be completely state controllable if and only if the rank of the controllability matrix
Qc is equal to the order of the system matrix A, where
Qc = [B | AB | A²B | ... | A^(n-1)B].
That is, if the system matrix A is of order n x n, then for state controllability
ρ(Qc) = n
where ρ(Qc) is the rank of Qc.
Example: Using Kalman's approach, determine the state controllability of the system described by
y'' + 3y' + 2y = u' + u
with x1 = y, x2 = y' - u. Then
x1' = x2 + u
x2' = -2x1 - 3x2 - 2u
[x1'; x2'] = [0 1; -2 -3][x1; x2] + [1; -2] u(t)
A = [0 1; -2 -3],   B = [1; -2]
Now AB = [0 1; -2 -3][1; -2] = [-2; 4]
Qc = [B | AB] = [1 -2; -2 4]
|Qc| = 4 - 4 = 0
Hence ρ(Qc) < 2 while the system matrix A is 2 x 2. Therefore the system is not state controllable.

Example: Determine the state controllability of the system by Kalman's approach.
x'(t) = [0 1 0; 0 0 1; 0 -2 -3] x(t) + [0; 0; 1] u(t)
Verify the result with Gilbert's approach.

Solution: Kalman's approach: Here,
B = [0; 0; 1];   AB = [0; 1; -3];   A²B = [1; -3; 7]
Qc = [B | AB | A²B] = [0 0 1; 0 1 -3; 1 -3 7]
|Qc| ≠ 0, so ρ(Qc) = 3 = order of the system. Therefore the system is state controllable.

Verification by Gilbert's approach:
Transforming the system model into the canonical (diagonal) form with the usual procedure
(the eigenvalues are 0, -1, -2),
Z'(t) = [0 0 0; 0 -1 0; 0 0 -2] Z(t) + [1/2; -1; 1/2] u(t)
Here the transformed input matrix B = P⁻¹B = [1/2; -1; 1/2]
has no zero element in any row, hence system is controllable.
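The rank test is straightforward to automate. The NumPy sketch below (an added illustration, not
part of the original notes) builds Qc = [B AB ... A^(n-1)B] for the two examples above and reports
its rank:

# Sketch: Kalman controllability test.
import numpy as np

def controllability_matrix(A, B):
    n = A.shape[0]
    cols = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.hstack(cols)

A1 = np.array([[0.0, 1.0], [-2.0, -3.0]]);  B1 = np.array([[1.0], [-2.0]])
A2 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, -2.0, -3.0]]);  B2 = np.array([[0.0], [0.0], [1.0]])
print(np.linalg.matrix_rank(controllability_matrix(A1, B1)))   # 1 < 2 -> not controllable
print(np.linalg.matrix_rank(controllability_matrix(A2, B2)))   # 3     -> controllable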

Observability:
Concept:
A system is completely observable if every state variable of the system affects some of the
outputs. In other words, it is often desirable to obtain information on the state variables from
measurements of the outputs and inputs. If any one of the states cannot be observed from the
measurements of the outputs and inputs, that state is unobservable and the system is not
completely observable, or simply unobservable.
Consider the state diagram of a typical system with state variables x1 and x2, and with y(t) and
u(t) as output and input respectively. On measurement of y(t) we can observe x1(t), but we cannot
observe x2(t) by measuring y(t); hence the system is not completely observable.
Definition:
A system described by
x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
is said to be completely observable if and only if, from measurement of the output y(t) over a
finite time interval t0 <= t <= tf along with the input u(t), t >= t0, the initial state x(t0) can
be estimated.
Explanation:
Consider the output equation y(t) = C x(t) + D u(t). Assuming D = 0, y(t) = C x(t). When the model
is in the diagonal (canonical) form, all the state variables can be estimated from the measurement
of y(t) provided the transformed C matrix does not contain a zero element. If any element of this
C matrix is zero, the corresponding state cannot be estimated and the system is unobservable
(this is Gilbert's test for observability).
Kalman's approach:
The system described by the model
x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
is said to be completely observable if and only if the rank of the observability matrix Qo is
equal to the order of the system matrix A, where
Qo = [C^T | A^T C^T | (A^T)² C^T | ... | (A^T)^(n-1) C^T]
For an n-th order system, the rank of Qo must be n, i.e. ρ(Qo) = n.
The test involves only the A and C matrices; observability is a property of the pair (A, C).
Example: Consider the system
x'(t) = [-2 0; 0 -1] x(t) + [3; 1] u(t)
y(t) = [1 0] x(t)
Test the observability using Kalman's approach.
Solution: Here
A = [-2 0; 0 -1],   C = [1 0],   C^T = [1; 0]
A^T C^T = [-2 0; 0 -1][1; 0] = [-2; 0] = -2 C^T
Qo = [C^T | A^T C^T] = [1 -2; 0 0]
|Qo| = 0, so ρ(Qo) < 2 while the order of the system is 2. The system is unobservable.

Example: Show that the system
x'(t) = [0 1 0; 0 0 1; -6 -11 -6] x(t) + [0; 0; 1] u(t)
y(t) = [4 5 1] x(t)
is not completely observable by
1) Kalman's approach
2) Gilbert's approach
Solution: Kalman's approach:
A = [0 1 0; 0 0 1; -6 -11 -6],   A^T = [0 0 -6; 1 0 -11; 0 1 -6],   C = [4 5 1]
C^T = [4; 5; 1];   A^T C^T = [-6; -7; -1];   (A^T)² C^T = [6; 5; -1]
Qo = [C^T | A^T C^T | (A^T)² C^T] = [4 -6 6; 5 -7 5; 1 -1 -1]
|Qo| = 0
so ρ(Qo) < 3 while the order of the system is 3. Therefore the system is unobservable.

Gilbert's approach:
Transforming the system model into the canonical (diagonal) form with the usual procedure
(the eigenvalues are -1, -2, -3), the output equation becomes
y(t) = [0 -2 -2] z(t)
Since the transformed C matrix contains a zero element, the corresponding canonical state (the
mode associated with λ = -1) is unobservable, and the system is not completely observable.
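As with controllability, the observability rank test is easy to automate. The NumPy sketch below
(an added illustration, not part of the original notes) stacks C, CA, CA² for the 3x3 example above
and reports the rank:

# Sketch: Kalman observability test.
import numpy as np

def observability_matrix(A, C):
    n = A.shape[0]
    rows = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.vstack(rows)

A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-6.0, -11.0, -6.0]])
C = np.array([[4.0, 5.0, 1.0]])
Qo = observability_matrix(A, C)
print(np.linalg.matrix_rank(Qo))    # 2 < 3 -> not completely observable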


UNIT-5 POLE PLACEMENT TECHNIQUES


UNIT - 5
POLE PLACEMENT TECHNIQUES: stability improvements by state feedback, necessary
& sufficient conditions for arbitrary pole placement, state regulator design, and design of
state observer, Controllers- P, PI, PID.
10 Hours
Stability improvement by state feedback: The main goal of a feedback design is to stabilize a
system if it is initially unstable, or to improve its stability if transient phenomena do not die
out sufficiently fast.
Consider an n-th order single-input single-output system described by
X'(t) = A X(t) + B u(t)        (1)
Let us assume that all the state variables are measured accurately; then it is possible to
implement a linear control law of the form
u(t) = -K X(t)        (2)
where K = [k1 k2 .......... kn] is a (1 x n) feedback gain matrix.
Substituting Eq. (2) in Eq. (1), the closed-loop system model is
X'(t) = A X(t) - B K X(t) = [A - BK] X(t)        (3)
The characteristic equation of the closed-loop system will be
|sI - (A - BK)| = 0        (4)
When Eq. (4) is evaluated, it yields an n-th order polynomial in s containing the n gains K1,
K2, ..., Kn. The control-law design then consists of picking the gains so that the roots of Eq. (4)
are in the desired locations.
In the next section it is shown that if the state model is in controllable canonical form, the
roots of the characteristic equation (4) can be chosen arbitrarily. If all the eigenvalues of
(A - BK) are placed in the left half of the s-plane, the closed-loop system is asymptotically
stable, and the state X(t) decays to zero irrespective of the initial state X(0) (the initial
perturbation in the state). The system state can be maintained at zero in spite of disturbances
that act upon the system. Systems with this property are called REGULATOR SYSTEMS. The control
configuration of such a system is as shown.


Algorithm for Design:

a) Let the desired locations of the closed-loop poles be μ1, μ2, ........, μn.
b) The corresponding characteristic equation will be
(s - μ1)(s - μ2)......(s - μn) = 0
s^n + a1 s^(n-1) + a2 s^(n-2) + ....... + an = 0        (1)
c) Let the controller gain matrix be K = [K1 K2 ---------- Kn].
d) Evaluate |sI - (A - BK)|; this gives an n-th order polynomial in s whose coefficients contain
   the gains K1, K2, ..., Kn.
e) Setting this polynomial to zero gives the closed-loop characteristic equation
s^n + (.)s^(n-1) + (.)s^(n-2) + ...... = 0        (2)
f) Comparing the coefficients of like powers of s in Eqs. (1) and (2), K1, K2, ..., Kn can be
   evaluated; this completes the design of the state feedback controller K.
Example: Design a controller K for the state model
X'(t) = [0 1; 0 0] X(t) + [0; 1] u(t)
such that the system shall have a damping ratio of 0.707 and a settling time of 1 sec.
Solution:
a) Converting the desired specifications into closed-loop pole locations gives poles at -4 ± j4.
b) The corresponding characteristic equation will be
(s + 4 + j4)(s + 4 - j4) = 0
s² + 8s + 32 = 0        (1)
c) Let the controller be of the form K = [K1 K2].
d) From the given state model,
[A - BK] = [0 1; 0 0] - [0; 1][K1 K2] = [0 1; -K1 -K2]
|sI - (A - BK)| = s² + K2 s + K1
e) The corresponding characteristic equation is
s² + K2 s + K1 = 0        (2)
Comparing the coefficients of like powers of s in Eqs. (1) and (2),
K2 = 8, K1 = 32.

Hence the controller K = [32 8].

The controller gain matrix K = [32 8] in feedback gives the desired specifications.
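For comparison, the same gains can be obtained numerically. The sketch below (an added illustration,
not part of the original notes) uses scipy.signal.place_poles and then verifies the closed-loop
eigenvalues:

# Sketch: cross-check of K = [32, 8] for A = [[0,1],[0,0]], B = [0,1]^T, poles -4 +/- j4.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
poles = np.array([-4 + 4j, -4 - 4j])
K = place_poles(A, B, poles).gain_matrix
print(K)                                        # approximately [[32., 8.]]
print(np.linalg.eigvals(A - B @ K))             # closed-loop poles -4 +/- j4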

Necessary and sufficient conditions:

The sufficient condition for arbitrary pole placement is that the system must be completely
controllable.
Consider the state model
X'(t) = A X(t) + B u(t)        (1)
Y(t) = C X(t)        (2)
The above model can be converted into the controller (controllable) canonical form through the
transformation
X̄(t) = P X(t)        (3)
where P is an n x n non-singular transformation matrix whose rows are P1, P2, ..., Pn, with
Pi = [Pi1 Pi2 ... Pin].
Taking the derivative of Eq. (3) and substituting for X'(t) from Eq. (1),
X̄'(t) = Ā X̄(t) + B̄ u(t)        (4)
where Ā = P A P⁻¹ and B̄ = P B.        (5)
In the controllable canonical form these matrices have the structure
Ā = [0 1 0 ... 0; 0 0 1 ... 0; ...; -αn -α(n-1) ... -α1],   B̄ = [0; 0; ...; 0; 1]        (6), (7)
where α1, ..., αn are the coefficients of the characteristic polynomial
|sI - A| = s^n + α1 s^(n-1) + ....... + αn.
The rows of P are found as follows. From Eq. (3), x̄1 = P1 X(t). Taking the derivative,
x̄1' = P1 X'(t) = P1[A X(t) + B u(t)] = P1 A X(t) + P1 B u(t)        (8), (9)
In the canonical form x̄1' = x̄2, so comparing terms, P1 B = 0 and x̄2 = P1 A X(t)        (10)
Repeating this argument, x̄(i+1) = P1 A^i X(t) with P1 A^(i-1) B = 0 for i = 1, ..., n-1 and
P1 A^(n-1) B = 1; hence the rows of P are P1, P1A, ..., P1A^(n-1).
With the state feedback u(t) = -K̄ X̄(t), K̄ = [K̄1 K̄2 ... K̄n], the closed-loop characteristic
polynomial is
|sI - (Ā - B̄K̄)| = s^n + (α1 + K̄n)s^(n-1) + (α2 + K̄(n-1))s^(n-2) + ....... + (αn + K̄1)        (14)
and the characteristic equation is
s^n + (α1 + K̄n)s^(n-1) + (α2 + K̄(n-1))s^(n-2) + ....... + (αn + K̄1) = 0        (15)
where K̄1, K̄2, ..., K̄n can be chosen as arbitrary real numbers.
Let the desired characteristic equation be
s^n + a1 s^(n-1) + ....... + an = 0        (16)
Equating the coefficients of like powers of s in Eqs. (15) and (16),
a1 = α1 + K̄n   =>   K̄n = a1 - α1
a2 = α2 + K̄(n-1)   =>   K̄(n-1) = a2 - α2
.
an = αn + K̄1   =>   K̄1 = an - αn        (17)
Hence the controller in the canonical coordinates is
K̄ = [an - αn   a(n-1) - α(n-1)   ...   a1 - α1]        (18)
and, transferring back to the original system,
K = K̄ P = [an - αn   a(n-1) - α(n-1)   ...   a1 - α1] P
The necessary condition is that the system should be completely controllable: if the system is not
completely controllable, there are certain states that cannot be influenced by state feedback.
Thus the necessary and sufficient condition for arbitrary pole placement is that the system be
completely controllable.

Steps to design the controller K:
1) For the given state model, test the controllability of the system by determining the rank of
   the controllability matrix Qc.
2) If it is controllable, transfer the model into the controllable canonical form through the
   similarity transformation matrix P, where P can be obtained as
   a) P1 = [0 0 --------- 0 1] Qc⁻¹
      P2 = P1 A
      P3 = P1 A²
      ...
      Pn = P1 A^(n-1)
   b) P = [P1; P1A; ...; P1A^(n-1)]
3) Compute Ā and B̄ as Ā = P A P⁻¹, B̄ = P B.
4) Assume the controller K̄ = [K̄1 K̄2 .... K̄n] and form
   Ā - B̄K̄ = [0 1 0 ... 0; 0 0 1 ... 0; ...; -(αn + K̄1) -(α(n-1) + K̄2) ... -(α1 + K̄n)]
5) Determine the characteristic equation
   |sI - (Ā - B̄K̄)| = s^n + (α1 + K̄n)s^(n-1) + ....... + (αn + K̄1) = 0
6) Determine the desired characteristic equation
   s^n + a1 s^(n-1) + a2 s^(n-2) + ....... + an = 0
7) Determine the elements of the gain matrix K̄ as
   K̄1 = an - αn
   K̄2 = a(n-1) - α(n-1)
   ...
   K̄n = a1 - α1,   and hence K̄ = [K̄1 K̄2 .... K̄n]
8) The feedback controller K for the original system is K = K̄ P.
This completes the design procedure.

Design of controller by Ackermann's formula:

We have P1 = [0 0 ... 0 1] Qc⁻¹ and P = [P1; P1A; ...; P1A^(n-1)], so
K = [an - αn   a(n-1) - α(n-1)   ...   a1 - α1] P
  = (an - αn)P1 + (a(n-1) - α(n-1))P1A + ...... + (a1 - α1)P1A^(n-1)
  = P1[(an - αn)I + (a(n-1) - α(n-1))A + ...... + (a1 - α1)A^(n-1)]        (1)
The characteristic polynomial of the matrix A is
|sI - A| = s^n + α1 s^(n-1) + α2 s^(n-2) + ....... + αn        (2)
According to the Cayley-Hamilton theorem,
A^n + α1 A^(n-1) + ....... + αn I = 0        (3)
so that A^n = -(α1 A^(n-1) + ...... + αn I). Using this,
K = [0 0 ... 0 1] Qc⁻¹ φ(A)        (4)
where
φ(A) = A^n + a1 A^(n-1) + a2 A^(n-2) + a3 A^(n-3) + ....... + an I        (5)
and
Qc = [B | AB | A²B | ....... | A^(n-1)B]        (6)
(To see this, note from Eq. (3) that φ(A) = (a1 - α1)A^(n-1) + ...... + (an - αn)I, which is
exactly the bracketed term in Eq. (1).)

Design procedure:
1. For the given model, determine Qc and test the controllability. If controllable,
2. Compute the desired characteristic equation
   s^n + a1 s^(n-1) + ......... + an = 0
   and determine the coefficients a1, a2, ..., an.
3. Compute the matrix polynomial of the system matrix A,
   φ(A) = A^n + a1 A^(n-1) + a2 A^(n-2) + ....... + an I
4. Determine Qc⁻¹; the controller is then K = [0 0 .... 0 1] Qc⁻¹ φ(A).
This completes the design procedure.
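The procedure maps directly to a few lines of NumPy. The sketch below (an added illustration, not
part of the original notes) implements the steps above and reproduces K = [32 8] for the earlier
double-integrator example:

# Sketch: Ackermann's formula K = [0 ... 0 1] Qc^{-1} phi(A).
import numpy as np

def ackermann(A, B, desired_poles):
    n = A.shape[0]
    Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    coeffs = np.poly(desired_poles).real           # [1, a1, a2, ..., an]
    phiA = np.zeros_like(A)
    for i, a in enumerate(coeffs):                 # phi(A) = A^n + a1 A^(n-1) + ... + an I
        phiA = phiA + a * np.linalg.matrix_power(A, n - i)
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(Qc) @ phiA

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = ackermann(A, B, [-4 + 4j, -4 - 4j])
print(K)                                           # expect [[32., 8.]]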
Ex. Design a controller K which places the closed-loop poles at -4 ± j4 for the system
X'(t) = [0 1; 1 0] X(t) + [0; 1] u(t)
using Ackermann's formula.
Answer:
I. Check for controllability:
Qc = [B | AB] = [0 1; 1 0];  |Qc| ≠ 0, so ρ(Qc) = 2 and the system is controllable.
Qc⁻¹ = [0 1; 1 0]
II. The desired characteristic equation is
(s + 4 + j4)(s + 4 - j4) = s² + 8s + 32 = 0,   so a1 = 8, a2 = 32.
III. φ(A) = A² + a1 A + a2 I = [1 0; 0 1] + 8[0 1; 1 0] + 32[1 0; 0 1] = [33 8; 8 33]
IV. K = [0 1] Qc⁻¹ φ(A) = [1 0] φ(A) = [33 8]
State regulator design: The state regulator system is a system whose non-zero initial state can be
transferred to the zero state by the application of the input u(t). The design of such a system
requires a control law
u(t) = -K X(t)
which transfers the non-zero initial state to the zero state (rejection of disturbances) by
properly designing the controller K.
The following steps give the design of such a controller. For a system given by
X'(t) = A X(t) + B u(t)
Y(t) = C X(t),   with u(t) = -K X(t),   K = [K1 K2 ...... Kn],
the closed-loop system is X'(t) = [A - BK] X(t).
1. From the characteristic polynomial of the matrix A,
   |sI - A| = s^n + α1 s^(n-1) + α2 s^(n-2) + ....... + αn        (1)
   determine α1, α2, ....... αn.
2. Determine the transformation matrix P which transfers the given state model into the
   controllable canonical form:
   P1 = [0 ..... 0 0 1] Qc⁻¹,   P = [P1; P1A; ...; P1A^(n-1)],
   Qc = [B | AB | A²B | ....... | A^(n-1)B]
3. Using the desired locations of the closed-loop poles (desired eigenvalues) μ1, ..., μn, obtain
   the desired characteristic polynomial
   (s - μ1)(s - μ2).....(s - μn) = s^n + a1 s^(n-1) + a2 s^(n-2) + ..... + an
   and determine a1, a2, ....... an.
4. The required state-feedback controller is
   K = [an - αn   a(n-1) - α(n-1)   .....   a1 - α1] P
Example: A regulator system has the plant
X'(t) = [0 1 0; 0 0 1; -6 -11 -6] X(t) + [0; 0; 1] u(t)
Y(t) = [1 0 0] X(t)
Design a state-feedback controller which places the closed-loop poles at -2 ± j3.464 and -5.
Give the block diagram of the control configuration.
Solution: By observing the structure of the A and B matrices, the model is already in controllable
canonical form (so P = I), and the controller K can be designed directly.
1. |sI - A| = |s -1 0; 0 s -1; 6 11 s+6| = s³ + 6s² + 11s + 6
Comparing with s³ + α1 s² + α2 s + α3, we obtain α1 = 6, α2 = 11, α3 = 6.
2. The desired characteristic polynomial is
(s + 2 + j3.464)(s + 2 - j3.464)(s + 5) = (s² + 4s + 16)(s + 5) = s³ + 9s² + 36s + 80
Comparing with s³ + a1 s² + a2 s + a3, we obtain a1 = 9, a2 = 36, a3 = 80.
3. Hence the required controller K is
K = [a3 - α3   a2 - α2   a1 - α1] = [K1 K2 K3] = [74 25 3]
Block diagram: the states x1, x2, x3 are fed back through the gains K1 = 74, K2 = 25 and K3 = 3
respectively, summed and subtracted at the plant input.

Design of state observer:

The device that estimates the state variables from measurement of the output along with the input
u(t) is called a state observer.
If all the state variables are estimated, it is called a full-order state observer. If only a few
state variables are estimated (those which cannot be measured accurately), the observer is called
a reduced-order state observer.

Design of full-order observer: In this case all the state variables are to be estimated.
a) Open-loop observer: Here it is assumed that the system model and the initial state are known
exactly; for the state model
X'(t) = A X(t) + B u(t)
Y(t) = C X(t),
a copy of the model driven by the same input, with estimated state X̂(t) and estimated initial
state X̂(0), constitutes an open-loop observer.
Drawbacks of the open-loop observer:
1. Precise information about X(0) is usually lacking.
2. If X̂(0) ≠ X(0), the estimated state X̂(t) obtained from the open-loop estimator will have a
   continuously growing error, and hence so will the estimated output.
3. Small errors in A and B, and disturbances that enter the system but not the estimator, make the
   estimation process slow and inaccurate.
Hence it is better to go for a closed-loop estimator.
Design of closed-loop full-order state observer: The figure shows the closed-loop full-order state
observer (Luenberger state observer).

The state model is
X'(t) = A X(t) + B u(t)        (1)
Y(t) = C X(t)        (2)
The estimator output is Ŷ(t) = C X̂(t)        (2a)
The error in the output is Y(t) - Ŷ(t) = C[X(t) - X̂(t)]        (2b)
The error in the state vector is X̃(t) = X(t) - X̂(t)        (3)
From the figure, the observer equation is
X̂'(t) = A X̂(t) + B u(t) + m[Y(t) - Ŷ(t)]        (4)
where m is an n x 1 constant gain matrix.
Differentiating Eq. (3) and using Eqs. (1)-(4),
X̃'(t) = X'(t) - X̂'(t) = [A - mC] X̃(t)        (5)
The characteristic equation of the error is given by
|sI - (A - mC)| = 0        (6)
If m is chosen properly so that (A - mC) has reasonably fast roots, then X̃(t) will decay to zero
irrespective of X̃(0), i.e. X̂(t) → X(t) regardless of X̂(0). Further, it can be seen that Eq. (5)
is independent of the control input u(t).
The observer gain matrix m can be obtained in exactly the same fashion as the controller gain
matrix K.

Design procedure:
1. Let the desired pole locations of the observer error dynamics be μ1, μ2, ....... μn; the
   corresponding polynomial equation will be
   (s - μ1)(s - μ2).....(s - μn) = 0
   s^n + a1 s^(n-1) + a2 s^(n-2) + ..... + an = 0        (1), (2)
   Determine the coefficients a1, a2, ....... an.
2. Let the observer gain matrix be m = [m1, m2, ....... mn]^T and obtain the characteristic
   polynomial equation of the closed-loop observer,
   |sI - (A - mC)| = 0
   s^n + (α1 + mn)s^(n-1) + (α2 + m(n-1))s^(n-2) + ....... + (αn + m1) = 0        (3)
   (this form holds when the model is in the observable canonical form).
3. Comparing the coefficients of like powers of s in Eqs. (2) and (3),
   a1 = α1 + mn   =>   mn = a1 - α1
   ...
   an = αn + m1   =>   m1 = an - αn
   where the coefficients α1, α2, ....... αn are obtained from the polynomial equation
   |sI - A| = s^n + α1 s^(n-1) + ....... + αn = 0.
4. The observer gain matrix will then be
   m = [m1, m2, ....... mn]^T
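Numerically, the observer gain can also be found by duality: placing the eigenvalues of (A - mC)
is the same as pole placement for the pair (A^T, C^T). The sketch below (an added illustration,
not part of the original notes) does this for the example that follows, using the
observable-canonical-form matrices and observer poles -2 ± j3.464 and -5:

# Sketch: observer gain m via duality (place_poles on the transposed pair).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 0.0, -6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0, -6.0]])
C = np.array([[0.0, 0.0, 1.0]])
poles = np.array([-2 + 3.464j, -2 - 3.464j, -5.0])
m = place_poles(A.T, C.T, poles).gain_matrix.T       # m is n x 1
print(m.ravel())                                     # roughly [74, 25, 3], as in the worked example
print(np.linalg.eigvals(A - m @ C))                  # observer-error poles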
Example: Consider the state model
X'(t) = [0 1 0; 0 0 1; -6 -11 -6] X(t) + [0; 0; 1] u(t)
Y(t) = [0 0 1] X(t)
Design an observer whose observer-error poles are required to be located at -2 ± j3.464 and -5.
Solution: The model in observer canonical form will be
X'(t) = [0 0 -6; 1 0 -11; 0 1 -6] X(t) + B̄ u(t)
Y(t) = [0 0 1] X(t)
(B̄ is not needed for the observer-gain design.)
1) The desired characteristic equation will be
(s + 2 + j3.464)(s + 2 - j3.464)(s + 5) = 0
=> s³ + 9s² + 36s + 80 = 0        (I)
Hence, comparing with s³ + a1 s² + a2 s + a3 = 0:  a1 = 9, a2 = 36, a3 = 80.
2) Let the observer gain be m = [m1, m2, m3]^T. Then
[A - mC] = [0 0 -6; 1 0 -11; 0 1 -6] - [m1; m2; m3][0 0 1]
         = [0 0 -(6 + m1); 1 0 -(11 + m2); 0 1 -(6 + m3)]
|sI - (A - mC)| = s³ + (6 + m3)s² + (11 + m2)s + (6 + m1) = 0        (II)
3) Comparing (I) and (II),
9 = 6 + m3   =>   m3 = 3
36 = 11 + m2   =>   m2 = 25
80 = 6 + m1   =>   m1 = 74
4) Hence m = [74 25 3]^T.

Reduced-order state observer: Required when a few of the state variables cannot be measured
accurately.
Design of reduced-order state observer:
In this design procedure, assume one measurement Y(t), which measures the state X1(t). The output
equation is given by
Y(t) = C X(t)        (1)
where C = [1 0 ----- 0].
Partition the state X(t) into the directly measured state X1(t) and the directly non-measurable
states Xe(t), which are to be estimated:
X(t) = [X1(t); Xe(t)]        (2)
Partitioning the matrices accordingly, the state equation can be written as
[X1'(t); Xe'(t)] = [a11 a1e; ae1 aee][X1(t); Xe(t)] + [b1; be] u(t)        (3)
and Y(t) = [1 0][X1(t); Xe(t)]        (4)
From Eq. (3),
Xe'(t) = aee Xe(t) + ae1 X1(t) + be u(t)        (5)
where the last two terms are known inputs.
From Eq. (4), Y(t) = X1(t), i.e. X1(t) = Y(t). Also from Eq. (3),
X1'(t) = a11 X1(t) + a1e Xe(t) + b1 u(t).
Collecting the known terms,
Y'(t) - a11 X1(t) - b1 u(t) = a1e Xe(t)        (6)
where the left-hand side consists of known (measured) quantities.
Comparing Eqs. (5) and (6) with the standard model X'(t) = A X(t) + B u(t), Y(t) = C X(t), we have
the correspondences
X(t) ↔ Xe(t),   A ↔ aee,   Bu ↔ ae1 X1(t) + be u(t),   C ↔ a1e,
Y ↔ Y'(t) - a11 X1(t) - b1 u(t)        (7)
Using Eq. (7), the state model for the reduced-order state observer can be obtained as
X̂e'(t) = aee X̂e(t) + ae1 Y(t) + be u(t) + m[Y'(t) - a11 Y(t) - b1 u(t) - a1e X̂e(t)]        (8)
(the same form as X̂'(t) = A X̂(t) + B u(t) + m[y(t) - ŷ(t)], where ŷ(t) = a1e X̂e(t)).
Defining X̃e(t) = Xe(t) - X̂e(t) and substituting for Xe'(t) and X̂e'(t) in the above equations, on
simplification
X̃e'(t) = (aee - m a1e) X̃e(t)        (9)
The corresponding characteristic equation is
|sI - (aee - m a1e)| = 0        (10)
Comparing this with the desired characteristic equation, m can be evaluated in the same way as in
the full-order observer.
To implement the reduced-order observer, Eq. (8) can be written as
X̂e'(t) = (aee - m a1e) X̂e(t) + (ae1 - m a11) Y(t) + (be - m b1) u(t) + m Y'(t)        (11)
As Y'(t) is difficult to realize, define
X̂f(t) = X̂e(t) - m Y(t)
so that
X̂f'(t) = X̂e'(t) - m Y'(t) = (aee - m a1e) X̂e(t) + (ae1 - m a11) Y(t) + (be - m b1) u(t)        (12)
with X̂e(t) = X̂f(t) + m Y(t). Equation (12) can be represented by the block diagram shown below.

Example: A system is described by
X'(t) = [0 0 0; 1 0 -24; 0 1 -10] X(t) + [50; 2; 1] u(t)
Y(t) = [0 0 1] X(t)
Design the reduced-order observer for the unmeasurable states such that its eigenvalues are at
-10, -10.
Solution:
From the output equation it follows that the states X1(t) and X2(t) cannot be measured, as the
corresponding elements of the C matrix are zero; only X3(t) is measured. Writing the state
equation in partitioned form with Xe = [X1; X2] and the measured state X3,
[X1'; X2'; X3'] = [0 0 0; 1 0 -24; 0 1 -10][X1; X2; X3] + [50; 2; 1] u(t)
aee = [0 0; 1 0],   ae1 = [0; -24],   a1e = [0 1],   a11 = [-10],   be = [50; 2],   b1 = [1]
Let m = [m1; m2]. The characteristic equation of the observer is
|sI - (aee - m a1e)| = 0   =>   s² + m2 s + m1 = 0
The desired characteristic equation is
(s + 10)(s + 10) = 0   =>   s² + 20s + 100 = 0
Equating the coefficients of like powers of s in the two characteristic equations,
m2 = 20 and m1 = 100,   i.e. m = [100; 20].
Hence the state model of the reduced-order observer (Eq. (12)) will be
X̂f'(t) = {[0 0; 1 0] - [100; 20][0 1]} X̂e(t) + {[0; -24] - [100; 20](-10)} Y(t)
          + {[50; 2] - [100; 20](1)} u(t)
        = [0 -100; 1 -20] X̂e(t) + [1000; 176] Y(t) + [-50; -18] u(t),
with X̂e(t) = X̂f(t) + [100; 20] Y(t).

Design of compensator by the principle of separation:

Consider the completely controllable and observable system described by
X'(t) = A X(t) + B u(t)
Y(t) = C X(t)        (1)
Let the control law be
u(t) = -K X(t)        (2)
Assume the full-order observer model
X̂'(t) = A X̂(t) + B u(t) + m[Y(t) - C X̂(t)]        (3)
Hence the state feedback control law using the estimated states will be
u(t) = -K X̂(t)        (4)
Using Eq. (4) in Eq. (1),
X'(t) = A X(t) - B K X̂(t) = (A - BK) X(t) + BK[X(t) - X̂(t)]        (5)
Defining the observer state error vector as
X̃(t) = X(t) - X̂(t)        (6)
and using (6) in (5),
X'(t) = (A - BK) X(t) + BK X̃(t)        (7)
We know that the observer error-vector model is
X̃'(t) = (A - mC) X̃(t)        (8)
Equations (7) and (8) can be combined in the vector-matrix form
[X'(t); X̃'(t)] = [A - BK   BK;   0   A - mC][X(t); X̃(t)]        (9)
Equation (9) represents the model of a 2n-dimensional system. The corresponding characteristic
equation is
|sI - (A - BK)| |sI - (A - mC)| = 0        (10)
Equation (10) shows that the poles (eigenvalues) of the overall system are the union of the
controller poles and the observer poles. The controller and the observer can therefore be designed
separately, and when used together the pole locations are unchanged. By this separation principle,
the controller is designed from |sI - (A - BK)| = 0 and the observer from |sI - (A - mC)| = 0.
The corresponding block diagram is as shown.

Transfer function of compensator:

Now X̂'(t) = A X̂(t) + B u(t) + m[Y(t) - C X̂(t)]        (1)
U(t) = -K X̂(t)        (2)
From (1) and (2),
X̂'(t) = (A - BK - mC) X̂(t) + m Y(t)        (3)
Taking the Laplace transform,
s X̂(s) = (A - BK - mC) X̂(s) + m Y(s)
m Y(s) = [sI - (A - BK - mC)] X̂(s)        (4)
Taking the Laplace transform of Eq. (2),
U(s) = -K X̂(s)        (5)
From Eqs. (4) and (5),
U(s)/Y(s) = -K[sI - (A - BK - mC)]⁻¹ m = D(s)
Example: A system is described by
X'(t) = [0 1 0; 0 0 1; -6 -11 -6] X(t) + [0; 0; 1] u(t)
Y(t) = [1 0 0] X(t)
Design a controller to place the closed-loop poles at -1 ± j1, -5. Also design an observer such
that the observer poles are at -6, -6, -6. Derive the uncompensated and compensated transfer
functions.
Solution: The system is observable and controllable. Using the controller-design and
observer-design procedures, the controller will be
K = [4 1 1]
and the observer gain will be
m = [12 25 -72]^T
The transfer function of the uncompensated system is
Mu(s) = C[sI - A]⁻¹B = 1/(s³ + 6s² + 11s + 6)
The transfer function of the compensator is
D(s) = U(s)/Y(s) = -K[sI - (A - BK - mC)]⁻¹ m
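The compensator transfer function can be evaluated numerically. The following sketch (an added
illustration, not part of the original notes) forms the compensator state-space model
(A - BK - mC, m, -K) and converts it to polynomial form with scipy.signal.ss2tf; the minus sign on
the gain follows the control law u = -KX̂.

# Sketch: D(s) = -K [sI - (A - BK - mC)]^{-1} m for K = [4 1 1], m = [12 25 -72]^T.
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
K = np.array([[4.0, 1.0, 1.0]])
m = np.array([[12.0], [25.0], [-72.0]])
Acomp = A - B @ K - m @ C                    # compensator "A" matrix
num, den = ss2tf(Acomp, m, -K, np.zeros((1, 1)))
print(num, den)                              # numerator and denominator polynomials of D(s)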

CONTROLLERS
The controller is an element which accepts the error in some form and decides the proper
corrective action. The output of the controller is then applied to the system or process. The
accuracy of the entire system depends on how sensitive the controller is to the detected error and
how it manipulates that error. The controller has its own logic to handle the error.
Controllers are classified based on the response and the mode of operation. On the basis of the
mode of operation, they are classified into continuous and discontinuous controllers. The
discontinuous-mode controllers are further classified as ON-OFF controllers and multiposition
controllers.
Continuous-mode controllers, depending on the input-output relationship, are classified into three
basic types: the proportional controller, the integral controller and the derivative controller.
In many practical cases, these controllers are used in combination. Examples of such composite
controllers are Proportional-Integral (PI) controllers, Proportional-Derivative (PD) controllers
and Proportional-Integral-Derivative (PID) controllers.
The block diagram of a basic control system with controller is shown in the figure. The error
detector compares the feedback signal b(t) with the reference input r(t) to generate the error
e(t) = r(t) - b(t).

Proportional Controller:
In the proportional control mode, the output of the controller is proportional to the error e(t).
The relation between the error and the controller output is determined by a constant called the
proportional gain constant, denoted KP, i.e. p(t) = KP e(t).
Though there exists a linear relation between the controller output and the error, for zero error
the controller output should not be zero, as this would lead to zero input to the system or
process. Hence, there exists some controller output Po for zero error. Therefore, mathematically
the proportional control mode is expressed as
P(t) = KP e(t) + Po
The performance of the proportional controller depends on the proper design of the gain KP. As
the proportional gain KP increases, the system gain increases and hence the steady-state error
decreases. But due to the high gain, peak overshoot and settling time increase, and this may lead
to instability of the system. So a compromise is made to keep the steady-state error and overshoot
within acceptable limits. Hence, when the proportional controller is used, the error reduces but
cannot be made zero. The proportional controller is suitable where manual reset of the operating
point is possible and the load changes are small.

Integral Controller:
We have seen that the proportional controller cannot adapt to changing load conditions. To
overcome this, the integral mode or reset action controller is used. In this controller, the
controller output P(t) is changed at a rate which is proportional to the actuating error signal
e(t).
Mathematically the integral control mode is expressed as
dP(t)/dt = Ki e(t),   i.e.   P(t) = Ki ∫ e(t) dt + P(0)
The constant Ki is called the integral constant. The output of the controller at any instant is
proportional to the area under the actuating error curve up to that instant. If the error is zero,
the controller output does not change. The integral controller is a relatively slow controller: it
changes its output at a rate which depends on the integrating time constant, until the error
signal is cancelled. Compared to the proportional controller, the integral controller requires
time to build up an appreciable output; however, it continues to act till the error signal
disappears. Hence, with the integral controller the steady-state error can be made zero. The
reciprocal of the integral constant is known as the integral time constant Ti, i.e. Ti = 1/Ki.

Derivative Controller:
In this mode, the output of the controller depends on the rate of change of the error with respect
to time. Hence it is also known as rate action mode or anticipatory action mode.
The mathematical equation for the derivative controller is
P(t) = Kd de(t)/dt
where Kd is the derivative gain constant. The derivative gain constant indicates by how much
percentage the controller output must change for every percentage-per-second rate of change of
the error. The advantage of the derivative control action is that it responds to the rate of
change of the error and can produce a significant correction before the magnitude of the actuating
error becomes too large. Derivative control thus anticipates the actuating error, initiates an
early corrective action and tends to increase the stability of the system by improving the
transient response. When the error is zero or constant, the derivative controller output is zero;
hence it is never used alone.

PI Controller:

This is a composite control mode obtained by combining the proportional mode and the integral
mode. The mathematical expression for the PI controller is
P(t) = KP e(t) + Ki ∫ e(t) dt + Po
The transfer function is given by
Gc(s) = KP + Ki/s
The advantage of this controller is the one-to-one correspondence of the proportional controller
together with the elimination of the steady-state error due to the integral controller. Basically,
the integral controller is a low-pass circuit.
The PI controller has the following effects on the system:
1. It increases the order of the system.
2. It increases the type of the system.
3. It improves the steady-state accuracy.
4. It increases the rise time, so the response becomes slow.
5. It filters out high-frequency noise.
6. It makes the response more oscillatory.
PD Controller:
This is a composite control mode obtained by combining the proportional mode and the derivative
mode. The mathematical expression for the PD controller is
P(t) = KP e(t) + Kd de(t)/dt + Po
The transfer function is given by
Gc(s) = KP + Kd s
The PD controller has the following effects on the system:
1. It increases the damping ratio and reduces overshoot.
2. It reduces the rise time and makes the response fast.
3. It reduces the settling time.
4. The type of the system remains unchanged.
5. The steady-state error remains unchanged.
In general it improves the transient part without affecting the steady state.

PID Controller:
This is a composite control mode obtained by combining the proportional mode, the integral mode
and the derivative mode. The mathematical expression for the PID controller is
P(t) = KP e(t) + Ki ∫ e(t) dt + Kd de(t)/dt + Po
The transfer function is given by
Gc(s) = KP + Ki/s + Kd s
It is to be noted that derivative control is effective in the transient part of the response,
where the error is varying, whereas in the steady state any error present is usually constant and
does not vary with time; in that part derivative control is not effective. In the steady-state
part, if any error is there, integral control will be effective in giving the proper correction to
minimize the steady-state error. An integral controller is basically a low-pass circuit and hence
will not be effective in the transient part of the response, where the error is fast changing.
Hence, for the whole range of the time response, both derivative and integral control actions
should be provided in addition to the inbuilt proportional control action for negative feedback
control systems.

UNIT-6 NON LINEAR SYSTEMS


UNIT - 6
Non-linear systems: Introduction, behavior of non-linear system, common physical non linearitysaturation, friction, backlash, dead zone, relay, multi variable non-linearity.
3 Hours
Many practical systems are sufficiently nonlinear that the important features of their performance
may be completely overlooked if they are analyzed and designed through linear techniques. The
mathematical models of nonlinear systems are represented by nonlinear differential equations.
Hence, there are no general methods for the analysis and synthesis of nonlinear control systems.
The fact that the superposition principle does not apply to nonlinear systems makes generalisation
difficult, and the study of many nonlinear systems has to be specific to typical situations.
6.2 Behaviour of Nonlinear Systems


The most important feature of nonlinear systems is that nonlinear systems do not
obey the principle of superposition. Due to this reason, in contrast to the linear case,
the response of nonlinear systems to a particular test signal is no guide to their
behaviour to other inputs. The nonlinear system response may be highly sensitive to
input amplitude. For example, a nonlinear system giving best response for a certain
step input may exhibit highly unsatisfactory behaviour when the input amplitude
is changed.

Hence, in a nonlinear system, the stability is very much dependent

on the input and also the initial state.


Further, the nonlinear systems may exhibit limit cycles which are selfsustained oscillations of fixed frequency and amplitude. Once the system trajectories
converge to a limit cycle, it will continue to remain in the closed trajectory in the state
space identified as limit cycles.

In many systems the limit cycles are undesirable

particularly when the amplitude is not small and result in some unwanted phenomena.


Figure 6.1

A nonlinear system, when excited by a sinusoidal input, may generate several harmonics in addition
to the fundamental corresponding to the input frequency. The amplitude of the fundamental is
usually the largest, but the harmonics may be of significant amplitude in many situations.
Another peculiar characteristic exhibited by nonlinear systems is called the jump phenomenon. For
example, let us consider the frequency response curve of a spring-mass-damper system. The
frequency responses of the system with a linear spring, a hard spring and a soft spring are as
shown in Fig. 6.2(a), Fig. 6.2(b) and Fig. 6.2(c) respectively. For a hard spring, as the input
frequency is gradually increased from zero, the measured response follows the curve through A, B
and C, but at C an increment in frequency results in a discontinuous jump down to the point D,
after which, with further increase in frequency, the response curve follows through DE. If the
frequency is now decreased, the response follows the curve EDF with a jump up to B from the point
F, and then the response curve moves towards A. This phenomenon, which is peculiar to nonlinear
systems, is known as jump resonance. For a soft spring, the jump phenomenon happens as shown in
Fig. 6.2(c).

Fig. 6.2(a)          Fig. 6.2(b)          Fig. 6.2(c)

When excited by a sinusoidal input of constant frequency and the amplitude is increased from
low values, the output frequency at some point exactly matches with the input frequency
and continue to remain as such thereafter.

This phenomenon which results in a

synchronisation or matching of the output frequency with the input frequency is called
frequency entrainment or synchronisation.
6.3 Methods of Analysis
Nonlinear systems are difficult to analyse and arriving at general conclusions are tedious.
However, starting with the classical techniques for the solution of standard nonlinear
differential equations, several techniques have been evolved which suit different types of
analysis. It should be emphasised that very often the conclusions arrived at will be useful for
the system under specified conditions and do not always lead to generalisations.

The

commonly used methods are listed below.

Linearization Techniques: In reality all systems are nonlinear and linear systems are only
approximations of the nonlinear systems.

In some cases, the linearization yields useful

information whereas in some other cases, linearised model has to be modified when the
operating point moves from one to another. Many techniques like perturbation method, series
approximation techniques, quasi-linearization techniques etc. are used to linearise a nonlinear
system.

Phase Plane Analysis: This method is applicable to second order linear or nonlinear systems
for the study of the nature of phase trajectories near the equilibrium points. The system
behaviour is qualitatively analysed along with design of system parameters so as to get the
desired response from the system.

The periodic oscillations in nonlinear systems

called limit cycle can be identified with this method which helps in investigating the stability
of the system.

Describing Function Analysis: This method is based on the principle of harmonic linearization, in
which, for a certain class of nonlinear systems with low-pass characteristics, the nonlinearity is
replaced by an equivalent gain (the describing function). This method is useful for the study of
the existence of limit cycles and the determination of the amplitude, frequency and stability of
these limit cycles. Accuracy is better for higher-order systems, as they have better low-pass
characteristics.

Liapunovs Method for Stability: The analytical solution of a nonlinear system is rarely possible.
If a numerical solution is attempted, the question of stability behaviour can not be fully answered
as solutions to an infinite set of initial conditions are needed. The Russian mathematician A.M.
Liapunov introduced and formalised a method which allows one to conclude about the stability
without solving the system equations.
6.4 Classification of Nonlinearities
The nonlinearities are classified into i) Inherent nonlinearities and ii) Intentional
nonlinearities. The nonlinearities which are present in the components used in system due
to the inherent imperfections or properties of the system are known as inherent
nonlinearities. Examples are saturation in magnetic circuits, dead zone, back lash in
gears etc.

However in some cases introduction of nonlinearity may improve the

performance of the system, make the system more economical consuming less space and
more reliable than the linear system designed to achieve the same objective.

Such

nonlinearities introduced intentionally to improve the system performance are known as


intentional nonlinearities.

Examples are different types of relays which are very

frequently used to perform various tasks. But it should be noted that the improvement
in system performance due to nonlinearity is possible only under specific operating
conditions.r other conditions, generally nonlinearity degrades the performance of the
system.

6.5 Common

Physical Nonlinearities:

The common examples of physical

nonlinearities are saturation, dead zone, coulomb friction, stiction, backlash, different
types of springs, different types of relays etc.

Saturation: This is the most common of all nonlinearities. All practical systems, when
driven by sufficiently large signals, exhibit the phenomenon of saturation due to
limitations of physical capabilities of their components. Saturation is a common
phenomenon in magnetic circuits and amplifiers.

Dead zone: Some systems do not respond to very small input signals. For a particular
range of input, the output is zero. This is called dead zone existing in a system. The
input-output curve is shown in figure.

Figure 6.3

Backlash: Another important nonlinearity commonly occurring in physical systems is


hysteresis in mechanical transmission such as gear trains and linkages. This nonlinearity is
somewhat different from magnetic hysteresis and is commonly referred to as backlash. In
servo systems, the gear backlash may cause sustained oscillations or chattering
phenomenon and the system may even turn unstable for large backlash.

Figure 6.4

Relay: A relay is a nonlinear power amplifier which can provide large power amplification
inexpensively and is therefore deliberately introduced in control systems. A relay-controlled
system can be switched abruptly between several discrete states, which are usually off, full
forward and full reverse. Relay-controlled systems find wide applications in the control field.
The characteristic of an ideal relay is as shown in the figure. In practice a relay has a definite
amount of dead zone, as shown. This dead zone is caused by the fact that the relay coil requires a
finite amount of current to actuate the relay. Further, since a larger coil current is needed to
close the relay than the current at which the relay drops out, the characteristic always exhibits
hysteresis.

Multivariable Nonlinearity: Some nonlinearities such as the torque-speed characteristics of a


servomotor, transistor characteristics etc., are functions of more than one variable. Such
nonlinearities are called multivariable nonlinearities.


UNIT-7 PHASE PLANE ANALYSIS


UNIT - 7
Phase plane method, singular points, stability of nonlinear system, limit cycles, construction of phase
trajectories.
7 Hours

Phase plane analysis is one of the earliest techniques developed for the study of second-order
nonlinear systems. It may be noted that in the state-space formulation, the state variables chosen
are usually the output and its derivatives. The phase plane is thus a state plane in which the two
state variables x1 and x2 are analysed; these may be the output variable y and its derivative. The
method was first introduced by Poincare, a French mathematician. The method is used for obtaining
graphically a solution of the following two simultaneous equations of an autonomous system:
x1' = f1(x1, x2)
x2' = f2(x1, x2)
where f1(x1, x2) and f2(x1, x2) are either linear or nonlinear functions of the state variables x1
and x2 respectively. The state plane with coordinate axes x1 and x2 is called the phase plane. In
many cases, particularly in the phase-variable representation of systems, the equations take the
form
x1' = x2
x2' = f(x1, x2)
The plot of the state trajectories or phase trajectories of the above equations thus gives an idea
of the solution of the state as time t evolves, without explicitly solving for the state. Phase
plane analysis is particularly suited to second-order nonlinear systems with no input or constant
inputs. It can be extended to cover other inputs as well, such as ramp inputs, pulse inputs and
impulse inputs.

7.2 Phase Portraits


From the fundamental theorem of uniqueness of solutions of the state equations or differential
equations, it can be seen that the solution of the state equation starting from an initial state
in the state space is unique. This will be true if f1(x1, x2) and f2(x1, x2) are analytic. For
such a system, consider the points in the state space at which the derivatives of all the state
variables are zero. These points are called singular points. They are in fact the equilibrium
points of the system: if the system is placed at such a point, it will continue to lie there if
left undisturbed. A family of phase trajectories starting from different initial states is called
a phase portrait. As time t increases, the phase portrait graphically shows how the system moves
in the entire state plane from the initial states in the different regions. Since the solutions
from each of the initial conditions are unique, the phase trajectories do not cross one another.
If the system has nonlinear elements which are piece-wise linear, the complete state space can be
divided into different regions and phase-plane trajectories constructed for each of the regions
separately.

7.3 Phase Plane Method


Consider the homogeneous second-order system described by the differential equation written in the
standard form
x'' + 2ζωn x' + ωn² x = 0
where ζ and ωn are the damping factor and undamped natural frequency of the system. Defining the
state variables as x = x1 and x' = x2, we get the state equations in state-variable form as
x1' = x2
x2' = -ωn² x1 - 2ζωn x2
These equations may then be solved for the phase variables x1 and x2. The time response plots of
x1, x2 for various values of damping, with given initial conditions, can be plotted. When the
differential equations describing the dynamics of the system are nonlinear, it is in general not
possible to obtain a closed-form solution for x1, x2. For example, if the spring force is
nonlinear, say (k1 x + k2 x³), the state equations take the form
x1' = x2
x2' = -2ζωn x2 - k1 x1 - k2 x1³
Solving these equations by integration is no longer an easy task. In such situations, a graphical
method known as the phase-plane method is found to be very helpful. The coordinate plane

with axes that correspond to the dependent variable x1 and x2 is called phase-plane. The curve
described by the state point (x1,x2) in the phase-plane with respect to time is called a phase
trajectory. A phase trajectory can be easily constructed by graphical techniques.
7.3.1 Isoclines Method:
Let the state equations for a nonlinear system be in the form

dx1/dt = f1(x1, x2),   dx2/dt = f2(x1, x2)

where both f1(x1, x2) and f2(x1, x2) are analytic. From the above equations, the slope of the trajectory is given by

dx2/dx1 = f2(x1, x2)/f1(x1, x2) = M

Therefore, the locus of constant slope M of the trajectory is given by f2(x1, x2) = M f1(x1, x2). This equation gives the family of isoclines. For different values of M, the slope of the trajectory, different isoclines can be drawn in the phase plane. Knowing the value of M on a given isocline, it is easy to draw short line segments of slope M on each of these isoclines.
Consider a simple linear system with state equations

Dividing the above equations we get the slope of the state trajectory in the x1-x2 plane as

For a constant value of this slope say M, we get a set of equations

which is a straight line in the x1-x2 plane. We can draw different lines in the x1-x2 plane for different values of M; these lines are called isoclines. If we draw a sufficiently large number of isoclines to cover the complete state space as shown, we can see how the state trajectories are moving in the state
plane. Different trajectories can be drawn from different initial conditions. A large number of
such trajectories together form a phase portrait. A few typical trajectories are shown in figure
given below.


Figure 7.1
The procedure for construction of the phase trajectories can be summarised as below (a simple computational sketch of the construction follows the list):
1. For the given nonlinear differential equation, define the state variables as x1 and x2 and obtain the state equations as dx1/dt = f1(x1, x2), dx2/dt = f2(x1, x2).
2. Determine the equation to the isoclines as f2(x1, x2) = M f1(x1, x2).
3. For typical values of M, draw a large number of isoclines in the x1-x2 plane.
4. On each of the isoclines, draw small line segments with slope M.
5. From an initial condition point, draw a trajectory following the line segments with slopes M on each of the isoclines.
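The sketch below automates steps 3 and 4 for an assumed simple linear system dx1/dt = x2, dx2/dt = −x1 − x2 (an illustrative choice, not the system of the example that follows). For this system the isocline of slope M is the straight line x2 = −x1/(1 + M), and short slope-M segments are drawn along each isocline.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed linear system: x1' = x2, x2' = -x1 - x2
# Trajectory slope: M = dx2/dx1 = (-x1 - x2)/x2, so the isocline of slope M
# is the straight line x2 = -x1/(1 + M).  (M = -1 must be avoided here,
# since the denominator vanishes.)
x1 = np.linspace(-4, 4, 9)

for M in [-4, -2, -0.5, 0, 0.5, 1, 2]:
    x2 = -x1 / (1.0 + M)
    plt.plot(x1, x2, 'k:', linewidth=0.8)          # the isocline
    for a, b in zip(x1, x2):                       # short slope-M segments on it
        dx = 0.25
        plt.plot([a - dx, a + dx], [b - M*dx, b + M*dx], 'b-')

plt.xlim(-4, 4); plt.ylim(-4, 4)
plt.xlabel('x1'); plt.ylabel('x2'); plt.title('Isoclines and slope segments')
plt.show()
```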
Example 7.1: For the system having the forward-path transfer function G(s) = 1/(s(s + 1)) with unity feedback and a relay with dead zone as the nonlinear element, draw the phase trajectory originating from the initial condition (3, 0).


The differential equation for the system is given by

d²c/dt² + dc/dt = u

where u is the output of the relay with dead zone. Since the input is zero, e = r − c = −c, and the differential equation in terms of the error will be

d²e/dt² + de/dt = −u

Defining the state variables as x1 = e and x2 = de/dt, we get the state equations as

dx1/dt = x2,   dx2/dt = −x2 − u

The slope of the trajectory is given by

M = dx2/dx1 = (−x2 − u)/x2

and the equation to the isoclines is given by

x2 = −u/(1 + M)

We can identify three regions in the state plane depending on the value of e = x1.

Region 1: Here u = 1, so that the isoclines are given by x2 = −1/(1 + M).
For different values of M, these are a number of straight lines parallel to the
x-axis.
M  :  0   | −1/3 | −1/2 | −3  |  4   |  3    |  2    |  1
x2 : −1   | −1.5 | −2   | 0.5 | −0.2 | −0.25 | −0.33 | −0.5

Region 2: Here u = 0, so that the slope of the trajectory is dx2/dx1 = −x2/x2 = −1 at every point, i.e., M = −1; the trajectories are parallel straight lines of constant slope −1.

Region 3:
Here u = −1, so that on substitution we get dx2/dx1 = (−x2 + 1)/x2 = M, or x2 = 1/(1 + M). These are also lines parallel to the x1-axis, at a constant distance decided by the value of the slope of the trajectory M.
M  : 1/3  |  1  |  2  |  3   | −5    | −4    | −3   | −2
x2 : 0.75 | 0.5 | 1/3 | 0.25 | −0.25 | −0.33 | −0.5 | −1

Figure 7.2: Phase portrait of Example 7.1
Isoclines drawn for all three regions are as shown in figure. It is seen that
trajectories from either region 1 or 2 approach the boundary between the regions
and approach the origin along a line at -1 slope. The state can settle at any value of
x1 between -1 and +1 as this being a dead zone and no further movement to the
origin along the x1-axis will be possible. This will result in a steady state error,
the maximum value of which is equal to half the dead zone. However, the
presence of dead zone can reduce the oscillations and result in a faster response. In
order that the steady state error in the output is reduced, it is better to have as small a
dead zone as possible.
Example 7.2: For the system having a closed loop transfer function

plot the phase trajectory originating from the initial condition (-1,0).
The differential equation for the system is
given by


The slope of the trajectory, M = dx2/dx1, gives the equation of the isoclines. When x2 = 0, the isocline equation gives x1 = 2, i.e., all the isoclines pass through the point x1 = 2, x2 = 0. Isoclines are then drawn for M = 0, 2, 4, 8, −2, −4, −6 and −10.

Figure 7.3
The isoclines are drawn as shown in the figure. The starting point of the trajectory is marked at (−1, 0). At (−1, 0) the slope is infinite, i.e., the trajectory starts at an angle of 90°. Between the isoclines for M = 8 and M = 4, the average slope is (8 + 4)/2 = 6, i.e., a line segment with a slope of 6 is drawn (at an angle of 80.5°). The same procedure is repeated and the complete phase trajectory is obtained as shown in the figure.
7.3.2 Delta Method:
The delta method of constructing phase trajectories is applied to systems of the form

d²x/dt² + f(x, dx/dt, t) = 0

where f(x, dx/dt, t) may be linear or nonlinear and may even be time varying, but must be continuous and single valued. With the help of this method, the phase trajectory for any system with a step, ramp or any time varying input can be conveniently drawn. The method results in considerable time saving when a single or a few phase trajectories are required rather than a complete phase portrait.

While applying the delta method, the above equation is first converted to the form

d²x/dt² + ωn²(x + δ) = 0

In general, δ depends upon the variables x, dx/dt and t, but over a short interval the changes in these variables are negligible. Thus over a short interval, we have δ = constant.
Let us choose the state variables as x1 = x and x2 = (1/ωn)(dx/dt); then

dx1/dt = ωn x2,   dx2/dt = −ωn (x1 + δ)

Therefore, the slope equation over a short interval is given by

dx2/dx1 = −(x1 + δ)/x2

With δ known at any point P on the trajectory, and assumed constant over a short interval, we can draw a short segment of the trajectory by using the trajectory slope dx2/dx1 given in the above equation. A simple geometrical construction, given below, can be used for this purpose (a computational sketch of the same construction follows the steps):
1. From the initial point, calculate the value of δ.
2. Draw a short arc segment through the initial point with (−δ, 0) as centre, thereby determining a new point on the trajectory.
3. Repeat the process at the new point and continue.
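The arc construction above can be mimicked numerically as shown below. The function g(x1, x2) used here is only an assumed illustrative nonlinearity (it is not the equation of Example 7.3), and ωn is taken as 1 so that δ(x1, x2) = g(x1, x2) − x1 for an equation written as x'' + g(x, x') = 0.

```python
import math
import matplotlib.pyplot as plt

# Delta-method form: x'' + wn^2*(x + delta) = 0 with wn = 1, so that
# delta(x1, x2) = g(x1, x2) - x1 for an assumed system x'' + g(x, x') = 0.
def g(x1, x2):
    return 0.5 * x2 + x1 + 0.25 * x1**3       # assumed damping + nonlinear spring

def delta(x1, x2):
    return g(x1, x2) - x1

x1, x2 = 1.0, 0.0                             # initial state
dtheta = 0.05                                 # small clockwise arc increment (rad)
path = [(x1, x2)]

for _ in range(4000):
    d = delta(x1, x2)
    cx = -d                                   # arc centre at (-delta, 0)
    r = math.hypot(x1 - cx, x2)
    if r < 1e-6:
        break                                 # reached the equilibrium point
    theta = math.atan2(x2, x1 - cx)
    theta -= dtheta                           # trajectories traverse the arcs clockwise
    x1, x2 = cx + r * math.cos(theta), r * math.sin(theta)
    path.append((x1, x2))

plt.plot(*zip(*path))
plt.xlabel('x1'); plt.ylabel('x2'); plt.title('Delta-method phase trajectory')
plt.show()
```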
Example 7.3: For the system described by the equation given below, construct the
trajectory starting at the initial point (1, 0) using delta method.

Let

then

The above equation can be rearranged as

So that
At the initial point, δ is calculated as δ = 0 + 1 − 1 = 0. Therefore, the initial arc is centred at the point (0, 0). The mean value of the coordinates of the two ends of the arc is used to calculate the next value of δ, and the procedure is continued. By constructing the small arcs in this way, the complete trajectory is obtained as shown in the figure.

Figure 7.4

7.4 Limit Cycles:

Limit cycles have a distinct geometric configuration in the phase plane portrait, namely,
that of an isolated closed path in the phase plane. A given system may have more than
one limit cycle. A limit cycle represents a steady state oscillation, to which or from
which all trajectories nearby will converge or diverge. In a nonlinear system, limit cycles
describe the amplitude and period of a self-sustained oscillation. It should be pointed out
that not all closed curves in the phase plane are limit cycles. A phase-plane portrait of a
conservative system, in which there is no damping to dissipate energy, is a continuous
family of closed curves. Closed curves of this kind are not limit cycles because none of
these curves are isolated from one another.
Such trajectories always occur as a
continuous family, so that there are closed curves in any neighbourhood of any particular
closed curve. On the other hand, limit cycles are periodic motions exhibited only by
nonlinear non conservative systems.
As an example, let us consider the well known Van der Pol differential equation

d²x/dt² − μ(1 − x²) dx/dt + x = 0

which describes physical situations in many nonlinear systems. In terms of the state variables x1 = x and x2 = dx/dt, we obtain

dx1/dt = x2,   dx2/dt = μ(1 − x1²) x2 − x1

The figure shows the phase trajectories of the system for μ > 0 and μ < 0. In the case μ > 0, we observe that for large values of x1(0) the system response is damped and the amplitude of x1(t) decreases till the system state enters the limit cycle, as shown by the outer trajectory. On the other hand, if x1(0) is initially small, the damping is negative, and hence the amplitude of x1(t) increases till the system state enters the limit cycle, as shown by the inner trajectory. When μ < 0, the trajectories move in the opposite direction, as shown in the figure.

Figure 7.5
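A quick way to visualise this behaviour is to integrate the Van der Pol state equations numerically from one initial state outside and one inside the limit cycle; both trajectories settle onto the same closed curve. The sketch below uses a plain fixed-step Runge-Kutta integrator and μ = 1 as an illustrative value.

```python
import numpy as np
import matplotlib.pyplot as plt

# Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0 in state-variable form.
mu = 1.0

def f(x):
    x1, x2 = x
    return np.array([x2, mu * (1.0 - x1**2) * x2 - x1])

def rk4(x0, dt=0.01, steps=4000):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = xs[-1]
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        xs.append(x + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0)
    return np.array(xs)

# One trajectory starting outside and one starting inside the limit cycle:
for x0 in [(4.0, 0.0), (0.1, 0.0)]:
    xs = rk4(x0)
    plt.plot(xs[:, 0], xs[:, 1])
plt.xlabel('x1'); plt.ylabel('x2'); plt.title('Van der Pol limit cycle (mu = 1)')
plt.show()
```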
A limit cycle is called stable if trajectories near the limit cycle, originating from outside
or inside, converge to that limit cycle. In this case, the system exhibits a sustained
oscillation with constant amplitude. This is shown in figure (i). The inside of the limit cycle is
an unstable region in the sense that trajectories diverge to the limit cycle, and the outside is a
stable region in the sense that trajectories converge to the limit cycle.
A limit cycle is called an unstable one if trajectories near it diverge from this limit cycle. In this
case, an unstable region surrounds a stable region. If a trajectory starts within the stable region, it
converges to a singular point within the limit cycle. If a trajectory starts in the unstable region, it
diverges with time to infinity as shown in figure (ii). The inside of an unstable limit cycle is the
stable region, and the outside the unstable region.
7.5 Analysis and Classification of Singular Points:
Singular points are points in the state plane where dx1/dt = dx2/dt = 0. At these points the slope of the trajectory dx2/dx1 is indeterminate. These points can also be the equilibrium points of the nonlinear system, depending on whether the state trajectories can reach them or not. Consider a linearised second order system represented by

dx/dt = Ax

Using the linear transformation x = Mz, the equation can be transformed to the canonical form

dz1/dt = λ1 z1,   dz2/dt = λ2 z2

where λ1 and λ2 are the roots of the characteristic equation of the system.

The transformation given simply transforms the coordinate axes from x1-x2 plane to z1-z2 plane
having the same origin, but does not affect the nature of the roots of the characteristic
equation. The phase trajectories obtained by using this transformed state equation still carry the
same information except that the trajectories may be skewed or stretched along the coordinate
axes. In general, the new coordinate axes will not be rectangular.
The solution to the state equation is given by z1(t) = z1(0) e^(λ1 t) and z2(t) = z2(0) e^(λ2 t). The slope of the trajectory in the z1–z2 plane is therefore given by

dz2/dz1 = (λ2 z2)/(λ1 z1)
Based on the nature of these eigen values and the trajectory in z1 z2 plane, the singular
points are classified as follows.
Nodal Point:
Consider the case where the eigen values are real, distinct and negative, as shown in figure (a). For this case the equation of the phase trajectory follows as

z2 = c (z1)^k1
where k1 = (λ2/λ1) > 0, so that the trajectories become a set of parabola-like curves as shown in figure (b) and
the equilibrium point is called a node. In the
original system of coordinates, these trajectories
appear to be skewed as shown in figure (c).
If the eigen values are both positive, the nature of
the trajectories does not change, except that the
trajectories diverge out from the equilibrium point
as both z1(t) and z2(t) are increasing exponentially.
The phase trajectories in the x1-x2 plane are as
shown in figure (d). This type of singularity is
identified as a node, but it is an unstable node as the
trajectories diverge from the equilibrium point.
Saddle Point:

Consider now a system with eigen values that are real and distinct, one positive and one negative. Here, the state corresponding to the negative eigen value converges and the one corresponding to the positive eigen value diverges, so that the trajectories are given by z2 = c(z1)^(−k), or (z1)^k z2 = c, which is the equation of a rectangular
hyperbola for positive values of k. The location of the eigen values, the phase portrait in
z1 z2 plane and in the x1 x2 plane are as shown in figure. The equilibrium point
around which the trajectories are of this type is called a saddle point.


Figure 7.7

Focus Point:


Consider a system with complex conjugate eigen values λ1,2 = σ ± jω. The canonical state equations can be written in terms of these eigen values and, using a further linear transformation, converted into a pair of real equations in the variables y1 and y2. Writing the slope dy2/dy1 in polar coordinates (r, θ) and integrating, we get

r = r0 e^((σ/ω)θ)

This is the equation of a spiral in polar coordinates. A plot of this equation for negative
values of real part is a family of equiangular spirals. The origin which is a singular point in this
case is called a stable focus. When the eigen values are complex conjugate with positive real
parts, the phase portrait consists of expanding spirals as shown in figure and the singular point is
an unstable focus. When transformed into the x1-x2 plane, the phase portrait in the above two
cases is essentially spiralling in nature, except that the spirals are now somewhat twisted in shape.


Figure 7.8

Centre or Vortex Point:

Consider now the case of complex conjugate eigen values with zero real parts, i.e., λ1,2 = ±jω. In this case σ = 0, and integrating the above equation we get

y1² + y2² = R²

which is the equation of a
circle of radius R. The radius R can be evaluated from the initial conditions. The
trajectories are thus concentric circles in y1-y2 plane and ellipses in the x1-x2
plane, as shown in the figure. Such a singular point, around which the state trajectories are concentric circles or ellipses, is called a centre or vortex.

Figure 7.9
Example 7.4:
Determine the kind of singularity for the following differential equation:

d²x/dt² + 3 dx/dt + 2x = 0

Let the state variables be x1 = x and x2 = dx/dt. The corresponding state model is

dx1/dt = x2,   dx2/dt = −2x1 − 3x2

At singular points, dx1/dt = 0 and dx2/dt = 0. Therefore, the singular point is at (0, 0).

The characteristic equation is |λI − A| = 0, i.e., λ² + 3λ + 2 = 0, giving λ1, λ2 = −1, −2. Since the roots are real and negative, the type of singular point is a stable node.
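The classification of a singular point from the eigen values can be carried out directly in code. The sketch below uses the companion matrix consistent with the characteristic equation of Example 7.4 (an inference, since the original equation block is not reproduced here) and reports the type of singular point.

```python
import numpy as np

def classify_singular_point(A, tol=1e-9):
    """Classify the singular point at the origin of x' = A x from its eigenvalues."""
    lam = np.linalg.eigvals(A)
    re, im = lam.real, lam.imag
    if np.all(np.abs(im) < tol):                 # real eigenvalues
        if np.all(re < 0):
            return 'stable node'
        if np.all(re > 0):
            return 'unstable node'
        return 'saddle point'                    # opposite signs
    if np.all(np.abs(re) < tol):                 # purely imaginary
        return 'centre (vortex)'
    return 'stable focus' if np.all(re < 0) else 'unstable focus'

# Companion matrix giving the characteristic equation s^2 + 3s + 2 = 0:
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
print(classify_singular_point(A))                # -> stable node
```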


Example 7.5: For the nonlinear system having the differential equation

d²x/dt² − 0.1 dx/dt + x + x² = 0

find all singularities.

Defining the state variables as x1 = x and x2 = dx/dt, the state equations are

dx1/dt = x2,   dx2/dt = 0.1 x2 − x1 − x1²

At singular points, dx1/dt = 0 and dx2/dt = 0, so that x2 = 0 and x1 + x1² = 0. The singularities are thus at (0, 0) and (−1, 0).

Linearization about the singularities: the Jacobian matrix is

J = | 0            1   |
    | −1 − 2x1    0.1  |

Linearization around (0, 0), i.e., substituting x1 = 0 and x2 = 0, gives the characteristic equation |λI − A| = 0, i.e., λ(λ − 0.1) + 1 = 0, or λ² − 0.1λ + 1 = 0. The eigen values are complex with positive real part. The singular point is an unstable focus.

Linearization around (−1, 0): the characteristic equation will be λ² − 0.1λ − 1 = 0. Therefore λ1, λ2 ≈ 1.05 and −0.95. Since the roots are real, one negative and the other positive, the singular point is a saddle point.
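The linearization steps of this example can be reproduced symbolically. The state equations used below are the ones inferred above from the numbers quoted in Example 7.5 and should be treated as an assumption; the procedure itself (solve f(x) = 0, form the Jacobian, evaluate its eigen values at each singular point) is general.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# State equations assumed for Example 7.5:  x1' = x2,  x2' = 0.1*x2 - x1 - x1**2
f = sp.Matrix([x2, sp.Rational(1, 10) * x2 - x1 - x1**2])

# Singular points: solve f(x) = 0
sing = sp.solve(list(f), [x1, x2], dict=True)
print(sing)                                  # [{x1: -1, x2: 0}, {x1: 0, x2: 0}]

J = f.jacobian([x1, x2])                     # Jacobian matrix of the system
for pt in sing:
    eigs = list(J.subs(pt).eigenvals().keys())
    print(pt, [sp.N(e, 3) for e in eigs])    # complex pair at (0,0), real pair at (-1,0)
```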


Example 7.6:
Determine the kind of singularity for the following differential equation:

d²x/dt² + 0.5 dx/dt + 2x − x² = 0

Let the state variables be x1 = x and x2 = dx/dt. The corresponding state model is

dx1/dt = x2,   dx2/dt = −0.5 x2 − 2x1 + x1²

At singular points, dx1/dt = 0 and dx2/dt = 0, so that x2 = 0 and 2x1 − x1² = 0. The singularities are thus at (0, 0) and (2, 0).

Linearization about the singularities: the Jacobian matrix is

J = | 0             1    |
    | −2 + 2x1    −0.5   |

Linearization around (0, 0), i.e., substituting x1 = 0 and x2 = 0, gives the characteristic equation |λI − A| = 0, i.e., λ(λ + 0.5) + 2 = 0, or λ² + 0.5λ + 2 = 0. The eigen values are complex with negative real parts. The singular point is a stable focus.

Linearization around (2, 0): the characteristic equation will be λ² + 0.5λ − 2 = 0. Therefore λ1, λ2 = 1.19 and −1.69. Since the roots are real, one negative and the other positive, the singular point is a saddle point.


UNIT - 8: STABILITY ANALYSIS

Liapunov stability criteria, Liapunov functions, direct method of Liapunov & the linear system, Hurwitz criterion & Liapunov's direct method, construction of Liapunov functions for nonlinear systems by Krasovskii's method.

6 Hours
For linear time invariant (LTI) systems, the concept of stability is simple and can be
formalised as per the following two notions:

a) A system is stable with zero input and arbitrary initial conditions if the resulting trajectory
tends towards the equilibrium state.
b) A system is stable if with bounded input, the system output is bounded.

In nonlinear systems, unfortunately, there is no definite correspondence between the


two notions. The linear autonomous systems have only one equilibrium state and their
behaviour about the equilibrium state completely determines the qualitative behaviour in the
entire state plane.

In nonlinear systems, on the other hand, system behaviour for small

deviations about the equilibrium point may be different from that for large deviations.
Therefore, local stability does not imply stability in the overall state plane and the two
concepts should be considered separately.

Secondly, in a nonlinear system with multiple

equilibrium states, the system trajectories may move away from one equilibrium point and tend to another as time progresses. Thus it appears that in the case of nonlinear systems there is little point in talking about system stability as such; it is more meaningful to talk about the stability of
an equilibrium point.

Stability in the region close to the equilibrium point or in the

neighbourhood of equilibrium point is called stability in the small. For a larger region around
the equilibrium point, the stability may be referred to as stability in the large.

In the

extreme case, we can talk about the stability of a trajectory starting from anywhere in the
complete state space, this being called global stability. A simple physical illustration of different
types of stability is shown in Fig. 8.1


Fig. 8.1 Global and local stability


Equilibrium State: In the system dx/dt = f(x, t), a state xe where f(xe, t) = 0 for all t is called an equilibrium state of the system. If the system is linear time invariant, namely f(x, t) = Ax, then there exists only one equilibrium state if A is non-singular, and there exist infinitely many equilibrium states if A is singular. For nonlinear systems, there may be one or more equilibrium states. Any isolated equilibrium state can be shifted to the origin of the coordinates, so that f(0, t) = 0, by a translation of coordinates.

8.2 Stability Definitions:


The Russian mathematician A.M. Liapunov has clearly defined the different types of stability.
These are discussed below.
Stability: An equilibrium state xe of the system dx/dt = f(x, t) is said to be stable if for each real number ε > 0 there is a real number δ(ε, t0) > 0 such that ||x0 − xe|| ≤ δ implies ||x(t; x0, t0) − xe|| ≤ ε for all t ≥ t0. The real number δ depends on ε and, in general, also depends on t0. If δ does not depend on t0, the equilibrium state is said to be uniformly stable.

An equilibrium state xe of the system dx/dt = f(x, t) is said to be stable in the sense of Liapunov if, corresponding to each S(ε), there is an S(δ) such that trajectories starting in S(δ) do not leave S(ε) as t increases indefinitely.

Asymptotic Stability: An equilibrium state xe of the system dx/dt = f(x, t) is said to be asymptotically stable if it is stable in the sense of Liapunov and every solution starting within S(δ) converges, without leaving S(ε), to xe as t increases indefinitely.

Asymptotic Stability in the Large: If asymptotic stability holds for all states from which trajectories originate, the equilibrium state is said to be asymptotically stable in the large. An equilibrium state xe of the system dx/dt = f(x, t) is said to be asymptotically stable in the large if it is stable and if every solution converges to xe as t increases indefinitely. Obviously, a necessary condition for asymptotic stability in the large is that there is only one equilibrium state in the whole state space.

The above said definitions are represented graphically in Fig. 8.2

Fig. 8.2 Liapunovs Stability


Instability: An equilibrium state xe is said to be unstable if, for some real number ε > 0 and any real number δ > 0, no matter how small, there is always a state x0 in S(δ) such that the trajectory starting at this state leaves S(ε).

8.3 Stability by the Method of Liapunov


The Russian mathematician A.M. Liapunov proposed a few theorems for the study of the stability of systems. The most popular among these is called the Second Method of Liapunov or the Direct Method of Liapunov. This method is very general in its formulation and can be used to study the stability of linear or nonlinear systems. The method is called the direct method as it does not involve the solution of the system differential equations; stability information is available without solving the equations, which is definitely an advantage for nonlinear systems. The stability information obtained by this method is precise and involves no approximation.
First Method of Liapunov: The first method of Liapunov, though rarely talked about, is
essentially a theorem stating the conditions under which system stability information can be
inferred by examining the simplified equations obtained through local linearization. This
theorem is applicable only to autonomous systems.

8.4 Sign Definiteness


Let V(x1, x2, x3, ..., xn) be a scalar function of the state variables x1, x2, x3, ..., xn. Then the following definitions are useful for the discussion of Liapunov's second method.

8.4.1 Scalar Functions:

A scalar function V(x) is said to be positive definite in a region Ω if V(x) > 0 for all nonzero states x in the region and V(0) = 0.

A scalar function V(x) is said to be negative definite in a region Ω if V(x) < 0 for all nonzero states x in the region and V(0) = 0.

A scalar function V(x) is said to be positive semi-definite in a region Ω if it is positive for all states in the region except at the origin and at certain other states, where it is zero.

A scalar function V(x) is said to be negative semi-definite in a region Ω if it is negative for all states in the region except at the origin and at certain other states, where it is zero.

A scalar function V(x) is said to be indefinite if in the region Ω it assumes both positive and negative values, no matter how small the region is.

Examples: V(x) = x1² + x2² is positive definite, V(x) = (x1 + x2)² is positive semi-definite, V(x) = −(x1² + x2²) is negative definite, and V(x) = x1x2 + x2² is indefinite.
8.4.2 Sylvester's Criteria for Definiteness:

A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real symmetric matrix, to be positive definite is that all the successive principal minors of A, including the determinant of A itself, be positive.

A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real symmetric matrix, to be negative definite is that the determinant of A be positive if n is even and negative if n is odd, with the successive principal minors of even order positive and the successive principal minors of odd order negative.

A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real symmetric matrix, to be positive semi-definite is that A be singular and the successive principal minors of A be nonnegative.

A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real symmetric matrix, to be negative semi-definite is that A be singular and all the principal minors of even order be nonnegative and those of odd order be nonpositive.
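Sylvester's test reduces to checking the signs of the leading principal minors, which is easy to do numerically. The following sketch shows one way to do this; the matrix used at the end is only an illustrative example.

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the upper-left 1x1, 2x2, ..., nxn submatrices of A."""
    A = np.asarray(A, dtype=float)
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def is_positive_definite(A, tol=1e-12):
    """Sylvester's criterion: all leading principal minors strictly positive."""
    return all(m > tol for m in leading_principal_minors(A))

def is_negative_definite(A, tol=1e-12):
    """A is negative definite exactly when -A is positive definite."""
    return is_positive_definite(-np.asarray(A, dtype=float), tol)

# Quadratic form V(x) = x^T A x with an illustrative symmetric matrix:
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(leading_principal_minors(A))   # [2.0, 5.0] -> both positive
print(is_positive_definite(A))       # True
```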

Example 8.1: Using Sylvester's criteria, determine the sign definiteness of the following quadratic form.

The successive principal minors are all positive. Hence the given quadratic form is positive definite.

Example 8.3:

Successive principal minors are

Hence the quadratic form is negative definite.

8.5 Second Method of Liapunov


The second method of Liapunov is based on a generalization of the idea that if the system has an
asymptotically stable equilibrium state, then the stored energy of the system displaced within the
domain of attraction decays with increasing time until it finally assumes its minimum value at the
equilibrium state. The second method of Liapunov consists of determination of a fictitious energy
function called a Liapunov function. The idea of the Liapunov function is more general than that of
energy and is more widely applicable. Liapunov functions are functions of x1, x2, x3, ..., xn and t. We denote Liapunov functions by V(x1, x2, x3, ..., xn, t), or V(x, t), or V(x) if the function does not include t explicitly. In the second method of Liapunov, the sign behaviours of V(x, t) and of its time derivative V̇(x, t)
give information on stability, asymptotic stability or instability of the equilibrium state under
consideration at the origin of the state space.


Theorem 1: Suppose that a system is described by dx/dt = f(x, t), where f(0, t) = 0 for all t. If there exists a scalar function V(x, t) having continuous first partial derivatives and satisfying the following conditions:
1. V(x, t) is positive definite
2. V̇(x, t) is negative definite
then the equilibrium state at the origin is uniformly asymptotically stable. If, in addition, V(x, t) → ∞ as ||x|| → ∞, then the equilibrium state at the origin is uniformly asymptotically stable in the large.
A visual analogy may be obtained by considering the surface V(x1, x2). This is a cup shaped surface as shown. The constant-V loci are ellipses on the surface of the cup. Let (x1(0), x2(0)) be the initial condition. If one plots the trajectory on the surface shown, the representative point x(t) crosses the constant-V curves and moves towards the lowest point of the cup, which is the equilibrium point.

Fig. 8.3 Energy function and movement of states


Theorem 2: Suppose that a system is described by dx/dt = f(x, t), where f(0, t) = 0 for all t ≥ t0. If there exists a scalar function V(x, t) having continuous first partial derivatives and satisfying the following conditions:
1. V(x, t) is positive definite
2. V̇(x, t) is negative semi-definite
3. V̇(x(t; x0, t0), t) does not vanish identically in t ≥ t0 for any t0 and any x0 ≠ 0, where x(t; x0, t0) denotes the trajectory or solution starting from x0 at t0,
then the equilibrium state at the origin of the system is uniformly asymptotically stable in the large.

If, however, there exists a positive definite scalar function V(x, t) such that V̇(x, t) is identically zero, then the system can remain in a limit cycle. The equilibrium state at the origin, in this case, is said to be stable in the sense of Liapunov.
Theorem 3: Suppose that a system is described by dx/dt = f(x, t), where f(0, t) = 0 for all t ≥ t0. If there exists a scalar function W(x, t) having continuous first partial derivatives and satisfying the following conditions:
1. W(x, t) is positive definite in some region about the origin,
2. Ẇ(x, t) is positive definite in the same region,
then the equilibrium state at the origin is unstable.
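The quantity that all three theorems examine is V̇(x, t), the derivative of V along the system trajectories. For a time-invariant system, V̇(x) = (∂V/∂x1)ẋ1 + (∂V/∂x2)ẋ2, and it can be formed symbolically as in the sketch below; the system and the candidate V used here are assumed illustrations, not the ones in the worked examples that follow.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Assumed illustrative system:  x1' = x2,   x2' = -x1 - x2
f = sp.Matrix([x2, -x1 - x2])

# Candidate Liapunov function and its derivative along the trajectories:
#   Vdot = (dV/dx1)*x1' + (dV/dx2)*x2'
V = x1**2 + x2**2
Vdot = sp.simplify(sp.diff(V, x1) * f[0] + sp.diff(V, x2) * f[1])
print(Vdot)        # -> -2*x2**2, negative semi-definite
```

Here V̇ = −2x2² is only negative semi-definite, but it vanishes identically only if x2 ≡ 0, which in turn forces x1 ≡ 0 for this system; by Theorem 2 the origin of this assumed system is asymptotically stable in the large.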

Example 8.4: Determine the stability of the following system using Liapunov's method.

Let us choose V(x) as the Liapunov function; then V̇(x), evaluated along the system trajectories, is negative definite. Hence the system is asymptotically stable.


Example 8.5: Determine the stability of the following system using Liapunov's method.

Let us choose V(x) as the Liapunov function; then V̇(x) is a negative semi-definite function. If V̇ is to vanish identically for t ≥ t0, then x2 must be zero for all t ≥ t0. This means that V̇ vanishes identically only at the origin. Hence the equilibrium state at the origin is asymptotically stable in the large.

Example 8.6: Determine the stability of the following system using Liapunov's method.

Let us first choose a Liapunov function V(x); its derivative V̇(x) turns out to be an indefinite function, so no conclusion can be drawn from this choice.

Let us choose another function V(x); then V̇(x) is a negative semi-definite function. If V̇ is to vanish identically for t ≥ t0, then x2 must be zero for all t ≥ t0. This means that V̇ vanishes identically only at the origin. Hence the equilibrium state at the origin is asymptotically stable in the large.
Example 8.7: Consider a nonlinear system governed by the state equations

Let us choose
Then

Therefore, for asymptotic stability we require that the above condition be satisfied. The region of state space where this condition is not satisfied is possibly a region of instability. Let us concentrate on the region of state space where this condition is satisfied. The limiting condition for such a region defines the dividing lines, which lie in the first and third quadrants and are rectangular hyperbolas, as shown in Figure 8.4. In the second and fourth quadrants the inequality is satisfied for all values of x1 and x2. Figure 8.4 shows the region of stability and of possible instability.

Since the choice of Liapunov function is not

unique, it may be possible to choose another Liapunov function for the system under
consideration which yields a larger region of stability.


Conclusions:
1. Failure to find a V function to show stability, asymptotic stability or instability of the equilibrium state under consideration gives no information on stability.
2. Although a particular V function may prove that the equilibrium state under consideration is stable or asymptotically stable in the region Ω, which includes this equilibrium state, it does not necessarily mean that the motions are unstable outside the region Ω.
3. For a stable or asymptotically stable equilibrium state, a V function with the
required properties always exists.

8.6 Stability Analysis of Linear Systems


Theorem: The equilibrium state x = 0 of the system given by dx/dt = Ax is asymptotically stable if and only if, given any positive definite Hermitian matrix Q (or positive definite real symmetric matrix), there exists a positive definite Hermitian matrix P (or positive definite real symmetric matrix) such that

AᵀP + PA = −Q

(with Aᵀ replaced by the conjugate transpose A* when A and P are complex). The scalar function V(x) = xᵀPx is a Liapunov function for the system.

Hence, for the asymptotic stability of the system dx/dt = Ax, it is sufficient that Q be positive definite. Instead of first specifying a positive definite matrix P and examining whether or not Q is positive definite, it is convenient to specify a positive definite matrix Q first and then examine whether or not P determined from AᵀP + PA = −Q is also positive definite. Note that P being positive definite is a necessary condition.

Note:
1. If V̇(x) = −xᵀQx does not vanish identically along any trajectory, then Q may be chosen to be positive semi-definite.
2. In determining whether or not there exists a positive definite Hermitian or real symmetric matrix P, it is convenient to choose Q = I, where I is the identity matrix. Then the elements of P are determined from AᵀP + PA = −I and the matrix P is tested for positive definiteness.


Example 8.8: Determine the stability of the system described by

dx1/dt = x2,   dx2/dt = −x1 − x2

Assume a Liapunov function V(x) = xᵀPx and solve AᵀP + PA = −I for the elements of the real symmetric matrix P. The element-wise equations are

−2P12 = −1
P11 − P12 − P22 = 0
2P12 − 2P22 = −1

Solving the above equations, P11 = 1.5, P12 = 0.5 and P22 = 1. The leading principal minors are 1.5 > 0 and det(P) > 0; therefore P is positive definite. Hence, the equilibrium state at the origin is asymptotically stable in the large. The Liapunov function is

V(x) = xᵀPx = 1.5x1² + x1x2 + x2²
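The algebra of this example can be checked by solving the Liapunov equation AᵀP + PA = −Q numerically. The sketch below uses the Kronecker-product (vectorised) form of the equation; the A matrix is the one implied by the element-wise equations above and should be read as an inference, since the original system block is not reproduced here. The same routine can be reused for Example 8.10 by changing A.

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T P + P A = -Q for P via the Kronecker-product (vectorised) form."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    p = np.linalg.solve(M, -Q.flatten(order='F'))      # column-stacked vec(Q)
    return p.reshape((n, n), order='F')

# System matrix inferred from Example 8.8:  x1' = x2,  x2' = -x1 - x2
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
Q = np.eye(2)
P = solve_lyapunov(A, Q)
print(P)                                    # [[1.5, 0.5], [0.5, 1.0]]
print(np.all(np.linalg.eigvals(P) > 0))     # True -> asymptotically stable
```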

Example 8.9: Determine the stability of the equilibrium state of the following system.


4P11 + (1 − j)P12 + (1 + j)P21 = 1
(1 − j)P11 + 5P21 + (1 − j)P22 = 0
(1 + j)P11 + 5P12 + (1 + j)P22 = 0
(1 − j)P12 + (1 + j)P21 + 6P22 = 1

Solving the above equations,

P11 = 3/8;   P12 = −(1 + j)/8;   P21 = −(1 − j)/8;   P22 = 1/4

P is positive definite. Hence the origin of the system is asymptotically stable.

Example 8.10: Determine the stability of the system described by

dx1/dt = −x1 − 2x2,   dx2/dt = x1 − 4x2

Assume a Liapunov function V(x) = xᵀPx and solve AᵀP + PA = −I for P. The element-wise equations are

−2P11 + 2P12 = −1
−2P11 − 5P12 + P22 = 0
−4P12 − 8P22 = −1

Solving the above equations, P11 = 23/60, P12 = −7/60 and P22 = 11/60. The leading principal minors are 23/60 > 0 and det(P) > 0; therefore P is positive definite. Hence, the equilibrium state at the origin is asymptotically stable in the large.


Example 8.11: Determine the stability range for the gain K of the system given below.

In determining the stability range of K, we assume u = 0. The state equations of the system are

dx1/dt = x2,   dx2/dt = −2x2 + x3,   dx3/dt = −Kx1 − x3

Let us choose Q = diag(0, 0, 1), a positive semi-definite real symmetric matrix. This choice is permissible since V̇(x) = −xᵀQx = −x3² cannot be identically equal to zero except at the origin. To verify this, note that V̇ being identically zero implies that x3 is identically zero. If x3 is identically zero, then x1 must be zero, since we have 0 = −Kx1 − 0. If x1 is identically zero, then x2 must also be identically zero, since 0 = x2. Thus V̇ is identically zero only at the origin, and we may use this positive semi-definite Q matrix.

Let us solve AᵀP + PA = −Q, i.e.,

−K P13 − K P13 = 0
−K P23 + P11 − 2P12 = 0
−K P33 + P12 − P13 = 0
P12 − 2P22 + P12 − 2P22 = 0
P13 − 2P23 + P22 − P23 = 0
P23 − P33 + P23 − P33 = −1

Solving the above equations, we get

P = 1/(12 − 2K) ×  | K² + 12K   6K   0 |
                   | 6K         3K   K |
                   | 0          K    6 |

For P to be positive definite, it is necessary and sufficient that 12 − 2K > 0 and K > 0, i.e., 0 < K < 6. Thus for 0 < K < 6 the system is stable; that is, the origin is asymptotically stable in the large.
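The symbolic version of this calculation can be carried out with sympy, as sketched below. The A matrix is the one consistent with the six scalar equations listed above (an inference, since the original block diagram is not reproduced here), and Q = diag(0, 0, 1) is the positive semi-definite choice used in the example.

```python
import sympy as sp

k = sp.symbols('k', positive=True)
p11, p12, p13, p22, p23, p33 = sp.symbols('p11 p12 p13 p22 p23 p33')

# A matrix inferred from the six scalar equations of Example 8.11:
A = sp.Matrix([[0, 1, 0],
               [0, -2, 1],
               [-k, 0, -1]])
Q = sp.diag(0, 0, 1)                      # positive semi-definite choice
P = sp.Matrix([[p11, p12, p13],
               [p12, p22, p23],
               [p13, p23, p33]])

eqs = A.T * P + P * A + Q                 # must equal the zero matrix
sol = sp.solve([eqs[i, j] for i in range(3) for j in range(i, 3)],
               [p11, p12, p13, p22, p23, p33], dict=True)[0]
P_sol = P.subs(sol).applyfunc(sp.simplify)
print(P_sol)

# Leading principal minors of P; requiring them to be positive (with k > 0)
# reproduces the condition 0 < k < 6.
minors = [sp.simplify(P_sol[:i, :i].det()) for i in (1, 2, 3)]
print(minors)
```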


8.7 Stability Analysis of Nonlinear Systems


In a linear free dynamic system if the equilibrium state is locally asymptotically stable, then it is
asymptotically stable in the large. In a nonlinear free dynamic system, however, an equilibrium
state can be locally asymptotically stable without being asymptotically stable in the large.
Hence implications of asymptotic stability of equilibrium states of linear systems and those of
nonlinear systems are quite different.
If we are to examine the asymptotic stability of equilibrium states of nonlinear systems, stability analysis of linearised models of nonlinear systems is completely inadequate. We must investigate nonlinear systems without linearization. Several methods based on the second method of Liapunov are available for this purpose. They include Krasovskii's method for testing sufficient conditions for asymptotic stability of nonlinear systems, the Schultz-Gibson variable gradient method for generating Liapunov functions, etc.

Krasovskii's Method: Consider the system defined by dx/dt = f(x), where x is an n-dimensional vector. Assume that f(0) = 0 and that f(x) is differentiable with respect to xi, where i = 1, 2, 3, ..., n. The Jacobian matrix F(x) for the system is the n×n matrix whose (i, j) element is ∂fi/∂xj.

Define F̂(x) = F*(x) + F(x), where F*(x) is the conjugate transpose of F(x). If the Hermitian matrix F̂(x) is negative definite, then the equilibrium state x = 0 is asymptotically stable. A Liapunov function for this system is V(x) = f*(x) f(x). If, in addition, f*(x) f(x) → ∞ as ||x|| → ∞, then the equilibrium state is asymptotically stable in the large.

Proof: If F̂(x) is negative definite for all x ≠ 0, the determinant of F̂(x) is nonzero everywhere except at x = 0. There is no other equilibrium state than x = 0 in the entire state space. Since f(0) = 0 and f(x) ≠ 0 for x ≠ 0, V(x) = f*(x) f(x) is positive definite. Note that

V̇(x) = ḟ*(x) f(x) + f*(x) ḟ(x)

We can obtain ḟ(x) as ḟ(x) = F(x) ẋ = F(x) f(x), so that

V̇(x) = f*(x) F*(x) f(x) + f*(x) F(x) f(x) = f*(x) F̂(x) f(x)

If F̂(x) is negative definite, we see that V̇(x) is negative definite. Hence V(x) is a Liapunov function. Therefore, the origin is asymptotically stable. If V(x) = f*(x) f(x) tends to infinity as ||x|| → ∞, then the equilibrium state is asymptotically stable in the large.
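Krasovskii's test is easy to mechanise: form the Jacobian F(x), build F̂(x) = Fᵀ(x) + F(x) (for a real system) and check its sign definiteness. The system used in the sketch below is an assumed illustration, not necessarily the one in the examples that follow.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Assumed illustrative system:  x1' = -x1,   x2' = x1 - x2 - x2**3
f = sp.Matrix([-x1, x1 - x2 - x2**3])

F = f.jacobian([x1, x2])          # Jacobian matrix F(x)
F_hat = F.T + F                   # for a real system, F_hat = F^T + F
print(F_hat)
# F_hat = [[-2, 1], [1, -2 - 6*x2**2]]: its leading principal minors are
# -2 < 0 and det(F_hat) = (4 + 12*x2**2) - 1 > 0, so F_hat is negative
# definite for all x, and the origin of this assumed system is
# asymptotically stable in the large (since f^T f -> infinity as ||x|| -> infinity).
```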
Example 8.12: Using Krasovskii's theorem, examine the stability of the equilibrium state x = 0 of the system given by

The Jacobian matrix F(x) is computed, and F̂(x) = Fᵀ(x) + F(x) is found to be a negative definite matrix; hence the equilibrium state is asymptotically stable. Further, V(x) = fᵀ(x) f(x) tends to infinity as ||x|| → ∞; therefore the equilibrium state is asymptotically stable in the large.
Example 8.13: Using Krasovskii's theorem, examine the stability of the equilibrium state x = 0 of the system given by

The matrix F̂(x) = Fᵀ(x) + F(x) is again negative definite, and hence the equilibrium state is asymptotically stable. Since V(x) = fᵀ(x) f(x) tends to infinity as ||x|| → ∞, the equilibrium state is asymptotically stable in the large.
