10EE55
Subject Code: 10EE55          IA Marks: 25
No. of Lecture Hrs./Week: 04  Exam Hours: 03
Total No. of Lecture Hrs.: 52 Exam Marks: 100
PART - A
UNIT - 1 & UNIT - 2
STATE VARIABLE ANALYSIS AND DESIGN: Introduction, concept of state, state
variables and state model, state modeling of linear systems, linearization of state equations.
State space representation using physical variables, phase variables & canonical variables
10 Hours
UNIT - 3
Derivation of transfer function from state model, diagonalization, eigenvalues, eigenvectors,
generalized eigenvectors.
6 Hours
UNIT - 4
Solution of state equation, state transition matrix and its properties, computation using
Laplace transformation, power series method, Cayley-Hamilton method, concept of
controllability & observability, methods of determining the same
10 Hours
PART - B
UNIT - 5
POLE PLACEMENT TECHNIQUES: stability improvement by state feedback, necessary
& sufficient conditions for arbitrary pole placement, state regulator design, design of
state observer, Controllers: P, PI, PID.
10 Hours
UNIT - 6
Non-linear systems: Introduction, behavior of non-linear systems, common physical
non-linearities: saturation, friction, backlash, dead zone, relay; multi-variable non-linearity.
3 Hours
UNIT - 7
Phase plane method, singular points, stability of nonlinear system, limit cycles, construction
of phase trajectories.
7 Hours
UNIT - 8
Liapunov stability criteria, Liapunov functions, direct method of Liapunov & the linear
system, Hurwitz criterion & Liapunov's direct method, construction of Liapunov functions
for nonlinear systems by Krasovskii's method.
6 Hours
TEXT BOOKS:
1. Digital Control & State Variable Methods - M. Gopal, 2nd edition, TMH, 2003
2. Control Systems Engineering - I. J. Nagrath & M. Gopal, 3rd edition, New Age International (P) Ltd.
REFERENCE BOOKS:
1. State Space Analysis of Control Systems - Katsuhiko Ogata, Prentice Hall Inc.
2. Automatic Control Systems - Benjamin C. Kuo & Farid Golnaraghi, 8th edition, John Wiley & Sons, 2003
3. Modern Control Engineering - Katsuhiko Ogata, PHI, 2003
4. Control Engineering Theory and Practice - M. N. Bandyopadhyay, PHI, 2007
5. Modern Control Systems - Dorf & Bishop, Pearson Education, 1998
PART - A
UNIT - 1
STATE VARIABLE ANALYSIS AND DESIGN
Limitations of the classical (transfer function) approach:
- It does not provide information regarding the internal state of the system.
- The classical methods like root locus, Bode plot, etc. are basically trial-and-error design procedures.
- It is restricted to linear, time-invariant, single-input single-output systems.

Advantages of state variable analysis:
- Using this analysis the internal states of the system at any time instant can be predicted.
- It applies to multiple-input multiple-output systems.
- The analysis is well suited to solution on digital computers.
Classical control (developed 1920-1950):
- Frequency-domain analysis & design (transfer function based)
- Based on SISO models
- Well-developed robustness concepts (gain/phase margins)
- No controllability/observability inference
- No optimality concerns
- Well-developed concepts, very much in use in industry

Modern control (developed 1950-1980):
- Based on MIMO models
- Robustness concepts not well developed
- Controllability/observability can be inferred
- Fairly well developed and slowly gaining popularity in industry
State variable: A set of variables which describe the state of the system at any time instant
are called state variables.
OR
The state of a dynamic system is the smallest number of variables (called state variables)
such that knowledge of these variables at t = t0, together with knowledge of the input for
t >= t0, completely determines the behaviour of the system for any time t >= t0.
Dept. of EEE, SJBIT
State space: The set of all possible values which the state vector X(t) can have (or
assume) at time t forms the state space of the system.
State vector: It is an (n x 1) column matrix whose elements are the state variables of the
system (where n is the order of the system); it is denoted by X(t).
Typically, the number of state variables (i.e. the order of the system) equals the number of
independent energy-storage elements in the system. All state variables should be linearly
independent and they must collectively describe the system completely.
State variables: x1(t), x2(t), ..., xn(t)
Input variables: u1(t), u2(t), ..., um(t)
Output variables: y1(t), y2(t), ..., yp(t)

The different variables may be represented by vectors (column matrices) as shown below:

Input vector  U(t) = [u1(t) u2(t) ... um(t)]^T
Output vector Y(t) = [y1(t) y2(t) ... yp(t)]^T
State vector  X(t) = [x1(t) x2(t) ... xn(t)]^T
In general the state equation is a function of the state variables and the inputs,

X'(t) = f(X, U, t)   ......(1)

while the outputs of such a system depend on the state of the system and the
instantaneous inputs, so the functional output equation can be written as

Y(t) = g(X, U, t)   ......(2)

The state model of a system consists of the state equation & the output equation. The
state equation of the system is a function of state variables and inputs as defined by
equation (1). For linear time-invariant systems the first derivatives of the state variables
can be expressed as a linear combination of the state variables and the inputs.
In the first era of classical control, multiple-input multiple-output (MIMO) systems
were analyzed and designed one loop at a time. Also, the use of transfer functions and the
frequency domain limited one to linear time-invariant systems. In the second era we have
modern control (which is not so modern any longer),
which refers to the state space methods developed in the late 1950s and early 1960s. In
modern control, system models are directly written in the time domain, and analysis and
design are done in the time domain. It should be noted that before Laplace transforms and
transfer functions became popular in the 1920s, engineers were studying systems in
the time domain. Therefore the resurgence of time-domain analysis was not unusual; it
was triggered by the development of computers and advances in numerical analysis.
Because computers were available, it was no longer necessary to develop analysis and
design methods that avoided computation: one could numerically solve or simulate large
systems that were nonlinear and time varying. State space methods removed the previously
mentioned limitations of classical control. The period of the 1960s was the heyday of
modern control.
The idea of state is familiar from a knowledge of the physical world and the means
of solving the differential equations used to model the physical world. Consider a ball
flying through the air. Intuitively we feel that if we know the ball's position and
velocity, we also know its future behaviour. It is on this basis that an outfielder positions
himself to catch a ball. Exactly the same information is needed to solve a differential
equation model of the problem. Consider for example the second-order differential
equation

x'' + a x' + b x = f(t)

The solution to this equation can be found as the sum of the forced response, due to
f(t), and the natural or unforced response, i.e. the solution of the homogeneous equation

x'' + a x' + b x = 0

If x1(t), x2(t), ..., xn(t) are the state variables of the system chosen, then the
initial conditions of the state variables plus the inputs u(t) for t > 0 should be sufficient to
decide the future behaviour, i.e. the outputs y(t) for t > 0. Note that the state variables need
not be physically measurable or observable quantities. Practically, however, it is convenient
to choose easily measurable quantities. The number of state variables is then equal to the
order of the differential equation, which is normally equal to the number of energy
storage elements in the system.
Output equation
The state variables x1(t), ..., xn(t) represent the dynamic state of a system. The
system outputs may be some of the state variables themselves; ordinarily, the output
variables are linear combinations of the state variables:

y1 = c11 x1 + c12 x2 + ... + c1n xn
y2 = c21 x1 + c22 x2 + ... + c2n xn
 :
yp = cp1 x1 + cp2 x2 + ... + cpn xn

or, in matrix form, Y = CX, where C is of size (p x n). When the outputs also depend
directly on the inputs, Y = CX + DU, where the D matrix is of size (p x m).
State Model
The state equation of a system determines its dynamic state and the output equation gives
its output at any time t > 0, provided the state at t = 0 and the control forces for
t >= 0 are known. These two equations together form the state model of the system.
The state model of a linear system is therefore given by

X' = AX + BU   ......(1)
Y = CX + DU    ......(2)
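The linear state model (1)-(2) maps directly onto scipy's `StateSpace` object. A minimal sketch follows; the matrices are chosen arbitrarily for illustration (they are not from the text), giving a second-order system with poles at -1 and -2:

```python
import numpy as np
from scipy.signal import StateSpace, step

# Illustrative matrices (assumed for this sketch): X' = AX + BU, Y = CX + DU
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

sys = StateSpace(A, B, C, D)
t, y = step(sys, T=np.linspace(0, 10, 500))  # unit-step response
print(round(float(y[-1]), 3))                # settles at the DC gain -C A^-1 B + D = 0.5
```

The steady-state value of the step response equals the DC gain of the model, which is one quick consistency check between a state model and its transfer function.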
Consider the nth-order linear differential equation

d^n y/dt^n + a(n-1) d^(n-1)y/dt^(n-1) + a(n-2) d^(n-2)y/dt^(n-2) + ... + a1 dy/dt + a0 y = b0 u   ......(3)

with initial conditions y(0), dy/dt(0), ..., d^(n-1)y/dt^(n-1)(0). Define the state variables as

X1 = y, X2 = dy/dt, X3 = d^2y/dt^2, ..., Xn = d^(n-1)y/dt^(n-1)   ......(4)

With the above definition of state variables, equation (3) is reduced to a set of n first-order
differential equations given below:

X1' = y'  = X2
X2' = y'' = X3
 :
X(n-1)' = Xn
Xn' = -a0 X1 - a1 X2 - ... - a(n-1) Xn + b0 u
Page 16
10EE55
It is to be noted that the matrix A has a special form. It has all 1's in the upper
off-diagonal, its last row is comprised of the negatives of the coefficients of the original
differential equation, and all other elements are zero. This form of matrix A is known as
the Bush form. The set of state variables which yield the Bush form for the
matrix A are called phase variables.
When A is in Bush form, the vector B has the specialty that all its elements
except the last are zero. In fact A and B, and therefore the state equation, can be written
directly by inspection of the linear differential equation.
The output being y = X1, the output equation is given by Y = CX,
where C = [1 0 ... 0].
Note: There is one more state model called the canonical state model. We shall consider this
model after going through the transfer function.
DERIVATION OF TRANSFER FUNCTION FROM THE STATE MODEL

(i) SISO system: with transfer function G(s),

Y(s) = G(s) u(s)   ......(1)

Consider the state model

X' = AX + Bu   ......(2)
Y = CX + Du    ......(3)

Taking the Laplace transform on both sides of equations (2) and (3) and neglecting
initial conditions, we get

sX(s) = AX(s) + Bu(s)   ......(4)
Y(s) = CX(s) + Du(s)    ......(5)

From (4),

(sI - A) X(s) = Bu(s)
or X(s) = (sI - A)^-1 B u(s)   ......(6)

Substituting (6) in (5),

Y(s) = C (sI - A)^-1 B u(s) + D u(s)
Y(s) = [C (sI - A)^-1 B + D] u(s)   ......(7)

Comparing (7) with (1),

G(s) = C (sI - A)^-1 B + D   ......(8)

An important observation that needs to be made here is that while the state model is not
unique, the transfer function is unique, i.e. the transfer function of equation (8) must work
out to be the same irrespective of which particular state model is used to describe the
system.

(ii) MIMO system: for a system with m inputs u1(s), ..., um(s) and p outputs
y1(s), ..., yp(s), G(s) is a (p x m) transfer function matrix with Y(s) = G(s) U(s), where
G(s) = C(sI - A)^-1 B + D. The element Gij(s) relates yi(s) to uj(s) with all other inputs
set to zero; for example, the first column is obtained with u2(s) = u3(s) = ... = um(s) = 0.
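Equation (8) can be checked numerically with scipy's `ss2tf`, which evaluates C(sI - A)^-1 B + D and returns polynomial coefficients. The matrices below are illustrative, not from the text:

```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative state model (assumed): poles at s = -1 and s = -2
A = [[0, 1], [-2, -3]]
B = [[0], [1]]
C = [[1, 0]]
D = [[0]]

num, den = ss2tf(A, B, C, D)  # G(s) = C(sI - A)^{-1}B + D
print(num[0], den)            # numerator 1; denominator s^2 + 3s + 2
```

Because the transfer function is unique, any other state model of the same system (e.g. after a similarity transformation of A, B, C) must produce the same `num` and `den`.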
3. Cascading form: the denominator of the transfer function is to be written in factored form.
UNIT - 2
STATE-SPACE REPRESENTATION
Introduction
The classical control theory and methods (such as root locus) that we have been using in
class to date are based on a simple input-output description of the plant, usually expressed as a
transfer function. These methods do not use any knowledge of the interior structure of the plant,
limit us to single-input single-output (SISO) systems, and as we have seen allow only limited
control of the closed-loop behavior when feedback control is used. Modern control theory solves
many of these limitations by using a much richer description of the plant dynamics. The so-called
state-space description provides the dynamics as a set of coupled first-order differential equations
in a set of internal variables known as state variables, together with a set of algebraic equations
that combine the state variables into physical output variables.
1.1 Definition of System State
The concept of the state of a dynamic system refers to a minimum set of variables, known as
state variables, that fully describe the system and its response to any given set of inputs [1-3]. In
particular a state-determined system model has the characteristic that:

A mathematical description of the system in terms of a minimum set of variables xi(t),
i = 1, ..., n, together with knowledge of those variables at an initial time t0 and the
system inputs for time t >= t0, is sufficient to predict the future system state and
outputs for all time t > t0.

This definition asserts that the dynamic behavior of a state-determined system is completely
characterized by the response of the set of n variables xi(t), where the number n is defined to be
the order of the system.
The system shown in Fig. 1 has two inputs u1(t) and u2(t), and four output variables
y1(t), ..., y4(t). If the system is state-determined, knowledge of its state variables
(x1(t0), x2(t0), ..., xn(t0)) at some initial time t0, and the inputs u1(t) and u2(t) for t >= t0, is
sufficient to determine all future behavior of the system. The state variables are an internal
description of the system which completely characterizes the system state at any time t, and from
which any output variables yi(t) may be computed. Large classes of engineering, biological, social
and economic systems may be represented by state-determined system models. System models
constructed with the pure and ideal (linear) one-port elements (such as mass, spring and damper
elements) are state-determined.
1.2 The State Equations
A standard form for the state equations is used throughout system dynamics. In the standard
form the mathematical description of the system is expressed as a set of n coupled first-order
ordinary differential equations, known as the state equations, in which the time derivative
of each state variable is expressed in terms of the state variables x1(t), ..., xn(t) and the
system inputs u1(t), ..., ur(t). In the general case the form of the n state equations is:

x1' = f1(x, u, t)
x2' = f2(x, u, t)
 :
xn' = fn(x, u, t)   (1)
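State equations of this general (possibly nonlinear) form can be integrated numerically. A minimal sketch with scipy's `solve_ivp`, using a hypothetical damped-pendulum state model (the functions f1, f2 below are assumptions for illustration, not from the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical state equations: x1' = f1(x, u, t) = x2
#                               x2' = f2(x, u, t) = -sin(x1) - 0.5*x2 + u(t)
def f(t, x, u):
    x1, x2 = x
    return [x2, -np.sin(x1) - 0.5 * x2 + u(t)]

u = lambda t: 0.0                          # zero input: unforced response
sol = solve_ivp(f, (0, 20), [1.0, 0.0], args=(u,), rtol=1e-8)
print(np.abs(sol.y[:, -1]).max() < 0.05)   # the damped state decays toward the origin
```

Each column of `sol.y` is a point on the state trajectory, which matches the state-space interpretation given below: the solution traces a path in the n-dimensional state space.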
where xi' = dxi/dt and each of the functions fi(x, u, t), (i = 1, ..., n) may be a general
nonlinear, time-varying function of the state variables, the system inputs, and time.
It is common to express the state equations in a vector form, in which the set of n state
variables is written as a state vector x(t) = [x1(t), x2(t), ..., xn(t)]^T, and the set of r inputs
is written as an input vector u(t) = [u1(t), u2(t), ..., ur(t)]^T. Each state variable is a
time-varying component of the column vector x(t).
This form of the state equations explicitly represents the basic elements contained
in the definition of a state-determined system. Given a set of initial conditions (the values
of the xi at some time t0) and the inputs for t >= t0, the state equations explicitly specify
the derivatives of all state variables. The value of each state variable at some time t later
may then be found by direct integration. The system state at any instant may be
interpreted as a point in an n-dimensional state space, and the dynamic state response x(t)
can be interpreted as a path or trajectory traced out in the state space.
In vector notation the set of n equations in Eqs. (1) may be written:

x' = f(x, u, t).   (2)

For a linear time-invariant system each state equation is a weighted sum of the state
variables and the inputs:

x1' = a11 x1 + a12 x2 + ... + a1n xn + b11 u1 + ... + b1r ur
x2' = a21 x1 + a22 x2 + ... + a2n xn + b21 u1 + ... + b2r ur
 :
xn' = an1 x1 + an2 x2 + ... + ann xn + bn1 u1 + ... + bnr ur   (3)

where the coefficients aij and bij are constants that describe the system. This set of n
equations defines the derivatives of the state variables to be a weighted sum of the state
variables and the system inputs.
Equations (3) may be written compactly in a matrix form:

x' = Ax + Bu   (5)

In this note we use bold-faced type to denote vector quantities. Upper case letters are
used to denote general matrices while lower case letters denote column vectors. See
Appendix A for an introduction to matrix notation and operations.
where the state vector x is a column vector of length n, the input vector u is a column vector
of length r, A is an n x n square matrix of the constant coefficients aij, and B is an n x r
matrix of the coefficients bij that weight the inputs.
1.3 Output Equations
A system output is defined to be any system variable of interest. A description of a
physical system in terms of a set of state variables does not necessarily include all of the
variables of direct engineering interest. An important property of the linear state equation
description is that all system variables may be represented by a linear combination of the
state variables xi and the system inputs ui. An arbitrary output variable in a system of
order n with r inputs may be written:

y(t) = c1 x1 + c2 x2 + ... + cn xn + d1 u1 + ... + dr ur   (6)

where the ci and di are constants. If a total of m system variables are defined as outputs,
the m such equations may be written as:
y1 = c11 x1 + c12 x2 + ... + c1n xn + d11 u1 + ... + d1r ur
y2 = c21 x1 + c22 x2 + ... + c2n xn + d21 u1 + ... + d2r ur
 :
ym = cm1 x1 + cm2 x2 + ... + cmn xn + dm1 u1 + ... + dmr ur   (7)
or in matrix form:

y = Cx + Du   (8)

The output equations, Eqs. (8), are commonly written in the compact form:

y = Cx + Du   (9)

where y is a column vector of the output variables yi(t), C is an m x n matrix of the constant
coefficients cij that weight the state variables, and D is an m x r matrix of the constant
coefficients dij that weight the system inputs. For many physical systems the matrix D
is the null matrix, and the output equation reduces to a simple weighted combination of
the state variables:

y = Cx.   (10)
The complete system model for a linear time-invariant system consists of (i) a set of n
state equations, defined in terms of the matrices A and B, and (ii) a set of output
equations that relate any output variables of interest to the state variables and inputs,
expressed in terms of the C and D matrices. The task of modeling the system is to
derive the elements of the matrices, and to write the system model in the form:

x' = Ax + Bu   (11)
y = Cx + Du.   (12)

The matrices A and B are properties of the system and are determined by the system
structure and elements. The output equation matrices C and D are determined by the
particular choice of output variables.
The overall modeling procedure developed in this chapter is based on the following steps:
1. Determination of the system order n and selection of a set of state variables from
the linear graph system representation.
2. Generation of a set of state equations and the system A and B matrices using a
well-defined methodology. This step is also based on the linear graph system
description.
3. Determination of a suitable set of output equations and derivation of the appropriate
C and D matrices.
The matrix-based state equations express the derivatives of the state variables explicitly in
terms of the states themselves and the inputs. In this form, the state vector is expressed as
the direct result of a vector integration. The block diagram representation is shown in
Fig. 2. This general block diagram shows the matrix operations from input to output in
terms of the A, B, C, D matrices, but does not show the path of individual variables.
In state-determined systems, the state variables may always be taken as the outputs of
integrator blocks. A system of order n has n integrators in its block diagram. The
derivatives of the state variables are the inputs to the integrator blocks, and each state
equation expresses a derivative as a sum of weighted state variables and inputs. A detailed
block diagram representing a system of order n may be constructed directly from the state
and output equations as follows:
Step 1: Draw n integrator (1/s) blocks, and assign a state variable to the output
of each block.
Figure 2: Vector block diagram for a linear system described by state-space system dynamics.
Step 2: At the input to each block (which represents the derivative of its state variable)
draw a summing element.
Step 3: Use the state equations to connect the state variables and inputs to the
summing elements through scaling operator blocks.
Step 4: Expand the output equations and sum the state variables and inputs through
a set of scaling operators to form the components of the output.
Example 1
Draw a block diagram for the general second-order, single-input single-output
system:

[ x1' ]   [ a11  a12 ] [ x1 ]   [ b1 ]
[ x2' ] = [ a21  a22 ] [ x2 ] + [ b2 ] u(t)

y(t) = [ c1  c2 ] [ x1  x2 ]^T + d u(t).   (i)

Solution: The block diagram shown in Fig. 3 was drawn using the four steps
described above.
Example 2
Find the transfer function and a single first-order differential equation relating
the output y(t) to the input u(t) for a system described by the first-order linear
state and output equations:

dx/dt = a x(t) + b u(t)   (i)
y(t) = c x(t) + d u(t)    (ii)

Solution: The Laplace transform of the state equation is

sX(s) = aX(s) + bU(s),   (iii)

which may be rewritten with the state variable X(s) on the left-hand side:

(s - a) X(s) = b U(s).   (iv)

Then the state variable may be written

X(s) = b/(s - a) U(s),   (v)

and substituted into the Laplace transform of the output equation Y(s) = cX(s) +
dU(s):

Y(s) = (bc/(s - a) + d) U(s) = (ds + (bc - ad))/(s - a) U(s).   (vi)

The transfer function is therefore

Y(s)/U(s) = (ds + (bc - ad))/(s - a),   (vii)

and the corresponding differential equation is

dy/dt - a y = d du/dt + (bc - ad) u.   (viii)
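The closed form in (vii) can be spot-checked with scipy's `ss2tf` for sample numbers (the values of a, b, c, d below are assumptions made for the check, not from the text):

```python
import numpy as np
from scipy.signal import ss2tf

a, b, c, d = -2.0, 3.0, 1.5, 0.5             # sample first-order model
num, den = ss2tf([[a]], [[b]], [[c]], [[d]])
# Expected: Y(s)/U(s) = (d s + (bc - ad)) / (s - a) = (0.5 s + 5.5)/(s + 2)
print(num[0], den)
```

Here bc - ad = 4.5 + 1.0 = 5.5 and -a = 2, matching the printed coefficients.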
The state and output equations

x' = Ax + Bu   (13)
y = Cx + Du    (14)

may be rewritten in the Laplace domain. The system equations are then

sX(s) = AX(s) + BU(s)   (15)
Y(s) = CX(s) + DU(s),   (16)

and the state equations may be gathered as [sI - A] X(s) = BU(s), where the term sI
creates an n x n matrix with s on the leading diagonal and zeros elsewhere. (This step is
necessary because matrix addition and subtraction is only defined for matrices of the same
dimension.) The matrix [sI - A] appears frequently throughout linear system theory; it is
a square n x n matrix with elements directly related to the A matrix:

           [ (s - a11)    -a12     ...     -a1n    ]
[sI - A] = [   -a21     (s - a22)  ...     -a2n    ]   (17)
           [     :           :      .        :     ]
           [   -an1       -an2     ...  (s - ann)  ]
The state equations, written in the form of Eq. (16), are a set of n simultaneous
operational expressions. The common methods of solving linear algebraic equations, for
example Gaussian elimination, Cramer's rule, the matrix inverse, elimination and
substitution, may be directly applied to linear operational equations such as Eq. (16). For
low-order single-input single-output systems the transformation to a classical formulation
may be performed in the following steps:
1. Take the Laplace transform of the state equations.
2. Reorganize each state equation so that all terms in the state variables are on
the left-hand side.
3. Treat the state equations as a set of simultaneous algebraic equations and solve
for those state variables required to generate the output variable.
4. Substitute for the state variables in the output equation.
5. Write the output equation in operational form and identify the transfer function.
6. Use the transfer function to write a single differential equation between the
output variable and the system input.
This method is illustrated in the following two examples.
Example 3
Use the Laplace transform method to derive a single differential equation for the
capacitor voltage vC in the series R-L-C electric circuit shown in Fig. 4.
Solution: The linear graph method of state equation generation selects the
capacitor voltage vC(t) and the inductor current iL(t) as state variables, and
generates the following pair of state equations:

[ vC' ]   [  0     1/C  ] [ vC ]   [  0  ]
[ iL' ] = [ -1/L  -R/L  ] [ iL ] + [ 1/L ] Vin.   (i)

The output is the capacitor voltage:

vC = [ 1  0 ] [ vC  iL ]^T.   (ii)

Taking the Laplace transform of the state equations, eliminating IL(s), and solving
for VC(s) gives the transfer function

VC(s)/Vin(s) = (1/LC) / (s^2 + (R/L)s + 1/LC),   (vii)

so the required differential equation is

d^2 vC/dt^2 + (R/L) dvC/dt + (1/LC) vC = (1/LC) Vin(t).
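The R-L-C state model above can be checked numerically with `ss2tf`; the component values below are assumptions made for the check (the text leaves R, L, C symbolic):

```python
import numpy as np
from scipy.signal import ss2tf

R, L, Cap = 2.0, 1.0, 0.5                    # assumed values: R = 2 ohm, L = 1 H, C = 0.5 F
A = [[0, 1 / Cap], [-1 / L, -R / L]]
B = [[0], [1 / L]]
C = [[1, 0]]                                 # output: capacitor voltage vC
D = [[0]]
num, den = ss2tf(A, B, C, D)
print(num[0], den)                           # numerator 1/LC = 2; denominator s^2 + 2s + 2
```

The printed denominator matches s^2 + (R/L)s + 1/LC for these values, and the numerator matches 1/LC.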
Cramer's rule, for the solution of a set of linear algebraic equations, is a useful method to
apply to the solution of these equations. In solving for the variable xi in a set of n linear
algebraic equations, such as Ax = b, the rule states:

xi = det[A^(i)] / det[A]   (18)

where A^(i) is another n x n matrix formed by replacing the ith column of A with the vector b.
If

[sI - A] X(s) = BU(s)   (19)

then the relationship between the ith state variable and the input is

Xi(s) = (det[(sI - A)^(i)] / det[sI - A]) U(s)   (20)

where (sI - A)^(i) is defined to be the matrix formed by replacing the ith column of (sI - A)
with the column vector B. The differential equation is

det[sI - A] xi = det[(sI - A)^(i)] uk(t).   (21)
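Equation (18) is easy to apply directly; a small numeric sketch with numpy (the matrix and right-hand side are arbitrary illustration values):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                             # A^(i): i-th column replaced by b
    x[i] = np.linalg.det(Ai) / np.linalg.det(A)

print(x)                                     # same result as np.linalg.solve(A, b)
```

For this data the answer is x = [0.8, 1.4]; Cramer's rule is mainly of analytical interest (as in Eq. (20)), since direct solvers are cheaper for large n.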
Example 4
Use Cramer's rule to solve for vL(t) in the electrical system of Example 3.
Solution: From Example 3 the state equations are:

[ vC' ]   [  0     1/C  ] [ vC ]   [  0  ]
[ iL' ] = [ -1/L  -R/L  ] [ iL ] + [ 1/L ] Vin(t)   (i)

and the output equation for the inductor voltage is

vL(t) = -vC - R iL + Vin(t).   (ii)

In the Laplace domain the state equations are

[ s     -1/C    ] [ VC(s) ]   [  0  ]
[ 1/L  s + R/L  ] [ IL(s) ] = [ 1/L ] Vin(s).   (iii)
Applying Cramer's rule to solve for VC(s), the first column of [sI - A] is replaced by B:

VC(s) = (det[(sI - A)^(1)] / det[sI - A]) Vin(s)
      = det[ 0      -1/C    ]
           [ 1/L   s + R/L  ] / (s^2 + (R/L)s + (1/LC)) Vin(s)
      = (1/LC) / (s^2 + (R/L)s + (1/LC)) Vin(s).   (iv)

Similarly, replacing the second column by B gives

IL(s) = (det[(sI - A)^(2)] / det[sI - A]) Vin(s)
      = det[ s     0   ]
           [ 1/L  1/L  ] / (s^2 + (R/L)s + (1/LC)) Vin(s)
      = (s/L) / (s^2 + (R/L)s + (1/LC)) Vin(s).   (v)

The output equation may be written directly from the Laplace transform of Eq. (ii):

VL(s) = -VC(s) - R IL(s) + Vin(s)   (vi)
      = s^2 / (s^2 + (R/L)s + (1/LC)) Vin(s),   (vii)

so the required differential equation is

d^2 vL/dt^2 + (R/L) dvL/dt + (1/LC) vL = d^2 Vin/dt^2.
For a single-input single-output (SISO) system the transfer function may be found directly
by evaluating the inverse matrix

X(s) = (sI - A)^-1 B U(s).   (22)

Using the definition of the matrix inverse,

(sI - A)^-1 = adj[sI - A] / det[sI - A],   (23)

so that

X(s) = (adj[sI - A] B / det[sI - A]) U(s),   (24)

and substituting into the output equations gives:

Y(s) = C X(s) + D U(s).   (25)

Expanding the inverse in terms of the determinant and the adjoint matrix yields:

Y(s) = ((C adj[sI - A] B + det[sI - A] D) / det[sI - A]) U(s),   (26)

so that the transfer function is

H(s) = (C adj[sI - A] B + det[sI - A] D) / det[sI - A].   (27)
Example 5
Use the matrix inverse method to find a differential equation relating vL(t) to
Vs(t) in the system described in Example 3.
Solution: The state vector, written in the Laplace domain, is

X(s) = [sI - A]^-1 B U(s).   (i)

From the state equations of Example 3,

[sI - A] = [ s     -1/C    ],   B = [  0  ].   (ii)
           [ 1/L  s + R/L  ]        [ 1/L ]

The determinant is

det[sI - A] = s^2 + (R/L)s + 1/(LC),   (iii)

and the adjoint matrix is

adj[ s     -1/C    ] = [ s + R/L   1/C ].   (iv)
   [ 1/L  s + R/L  ]   [ -1/L      s   ]

From Example 3 and the previous example, the output equation vL(t) = -vC -
R iL + Vs(t) specifies that C = [-1  -R] and D = [1]. The transfer function, Eq.
(26), is:
Since

C adj[sI - A] B = [ -1  -R ] [ s + R/L   1/C ] [  0  ]
                             [ -1/L      s   ] [ 1/L ]
                = -(R/L)s - 1/(LC),   (v)

and det[sI - A] D = s^2 + (R/L)s + 1/(LC), the transfer function is

VL(s)/Vin(s) = (-(R/L)s - 1/(LC) + s^2 + (R/L)s + 1/(LC)) / (s^2 + (R/L)s + (1/LC))   (vi)
             = s^2 / (s^2 + (R/L)s + (1/LC)).   (vii)
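The cancellation that leaves s^2 in the numerator of Example 5 can be confirmed symbolically with sympy, evaluating C adj(sI - A) B + det(sI - A) D for the C = [-1 -R], D = [1] output model used in this example:

```python
import sympy as sp

s, R, L, Cc = sp.symbols('s R L C', positive=True)
A = sp.Matrix([[0, 1 / Cc], [-1 / L, -R / L]])
B = sp.Matrix([0, 1 / L])
C = sp.Matrix([[-1, -R]])                    # vL = -vC - R*iL + Vs
D = sp.Matrix([[1]])

sI_A = s * sp.eye(2) - A
num = (C * sI_A.adjugate() * B + sI_A.det() * D)[0]
print(sp.simplify(num))                      # s**2
```

The simplified numerator is exactly s^2, independent of R, L and C, matching (vii).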
The block diagram provides a convenient method for deriving a set of state equations for
a system that is specified in terms of a single input/output differential equation. A set of
n state variables can be identified as the outputs of integrators in the diagram, and state
equations can be written from the conditions at the inputs to the integrator blocks (the
derivatives of the state variables). There are many methods for doing this; we present
here one convenient state equation formulation that is widely used in control system
theory.
Let the differential equation representing the system be of order n, and without loss of
generality assume that the order of the polynomial operators on both sides is the same:

(an s^n + a(n-1) s^(n-1) + ... + a0) Y(s) = (bn s^n + b(n-1) s^(n-1) + ... + b0) U(s).   (28)

We may multiply both sides of the equation by s^-n to ensure that all differential operators
have been eliminated:

(an + a(n-1) s^-1 + ... + a1 s^-(n-1) + a0 s^-n) Y(s) =
(bn + b(n-1) s^-1 + ... + b1 s^-(n-1) + b0 s^-n) U(s),   (29)

from which the output may be specified in terms of a transfer function. If we define a dummy
variable Z(s), and split Eq. (29) into two parts
the part relating Z(s) to the input, Eq. (32), may be solved for Z(s)
and rearranged to generate a feedback structure that can be used as the basis for a block
diagram:

Z(s) = (1/an) U(s) - [ a(n-1)/(an s) + ... + a1/(an s^(n-1)) + a0/(an s^n) ] Z(s).   (33)

The dummy variable Z(s) is specified in terms of the system input u(t) and a weighted sum
of successive integrations of itself. Figure 5 shows the overall structure of this direct-form
block diagram. A string of n cascaded integrator (1/s) blocks, with Z(s) defined at the input
to the first block, is used to generate the feedback terms, Z(s)/s^i, i = 1, ..., n, in Eq. (33).
Equation (31) serves to combine the outputs from the integrators into the output y(t).
A set of state equations may be found from the block diagram by assigning the state
variables xi(t) to the outputs of the n integrators. Because of the direct cascade connection
of the integrators, the state equations take a very simple form. By inspection:

x1' = x2
x2' = x3
 :
x(n-1)' = xn
xn' = -(a0/an) x1 - (a1/an) x2 - ... - (a(n-1)/an) xn + (1/an) u(t).   (34)
In matrix form, Eqs. (34) are

[ x1'     ]   [    0       1       0     ...      0       ] [ x1     ]   [  0   ]
[ x2'     ]   [    0       0       1     ...      0       ] [ x2     ]   [  0   ]
[  :      ] = [    :                              :       ] [  :     ] + [  :   ] u(t).   (35)
[ x(n-1)' ]   [    0       0       0     ...      1       ] [ x(n-1) ]   [  0   ]
[ xn'     ]   [ -a0/an  -a1/an  ... -a(n-2)/an -a(n-1)/an ] [ xn     ]   [ 1/an ]
The A matrix has a very distinctive form. Each row, except the bottom one, is filled with
zeroes except for a one in the position just above the leading diagonal. Equation (35) is
a common form of the state equations, used in control system theory and known as the
phase variable or companion form. This form leads to a set of state variables which may not
correspond to any physical variables within the system.
The corresponding output relationship is specified by Eq. (31) by noting that Xi(s) =
Z(s)/s^(n+1-i):

y(t) = b0 x1 + b1 x2 + b2 x3 + ... + b(n-1) xn + bn z(t).   (36)

But z(t) = dxn/dt, which is found from the nth state equation in Eq. (34). When substituted
into Eq. (36) the output equation is:
y(t) = [ (b0 - bn a0/an)  (b1 - bn a1/an)  ...  (b(n-1) - bn a(n-1)/an) ] [ x1 x2 ... xn ]^T
       + (bn/an) u(t).   (37)
Example 6
Draw a direct form realization of a block diagram, and write the state equations
in phase variable form, for a system with the differential equation

d^3y/dt^3 + 7 d^2y/dt^2 + 19 dy/dt + 13 y = 13 du/dt + 26 u.   (i)

Solution: The system order is 3, and using the structure shown in Fig. 5 the
block diagram is as shown in Fig. 6.
The state and output equations are found directly from Eqs. (35) and (37):

[ x1' ]   [   0    1    0 ] [ x1 ]   [ 0 ]
[ x2' ] = [   0    0    1 ] [ x2 ] + [ 0 ] u(t),   (ii)
[ x3' ]   [ -13  -19   -7 ] [ x3 ]   [ 1 ]
Figure 6: Block diagram of the transfer operator of a third-order system found by a direct
realization.

y(t) = [ 26  13  0 ] [ x1  x2  x3 ]^T + [0] u(t).   (iii)
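The phase-variable realization of Example 6 can be verified with `ss2tf`, which should return the transfer function (13s + 26)/(s^3 + 7s^2 + 19s + 13) implied by the differential equation (i):

```python
import numpy as np
from scipy.signal import ss2tf

# Phase-variable (companion) form from Example 6
A = [[0, 1, 0], [0, 0, 1], [-13, -19, -7]]
B = [[0], [0], [1]]
C = [[26, 13, 0]]
D = [[0]]
num, den = ss2tf(A, B, C, D)
print(num[0], den)   # numerator 13s + 26; denominator s^3 + 7s^2 + 19s + 13
```

The recovered denominator coefficients are exactly those of the left-hand side of (i), and the numerator those of the right-hand side.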
In the Laplace domain the output vector is

Y(s) = [C (sI - A)^-1 B + D] U(s),   (39)

and expanding the inverse in terms of the determinant and the adjoint matrix,

Y(s) = ((C adj[sI - A] B + det[sI - A] D) / det[sI - A]) U(s) = H(s) U(s),   (40)

where H(s) is defined to be the matrix transfer function relating the output vector Y(s) to
the input vector U(s):

H(s) = (C adj[sI - A] B + det[sI - A] D) / det[sI - A].   (41)
A transfer function may also be realized directly from its 1/s expansion. For the third-order
case

Y(s)/U(s) = (b2 s^2 + b1 s + b0) / (s^3 + a2 s^2 + a1 s + a0)   (1)
          = (b2/s + b1/s^2 + b0/s^3) / (1 + a2/s + a1/s^2 + a0/s^3),   (2)

which is a function containing numerous 1/s terms, or integrators. There are various
signal-flow graph configurations that will produce this function. One possibility is the
control canonical form shown in Figure 1, in which a chain of three 1/s integrators produces
X3(s), X2(s) and X1(s), with feedback gains -a2, -a1, -a0 and output gains b0, b1, b2. The
state model for the signal flow configuration in Figure 1 is

[ x1' ]   [   0    1    0  ] [ x1 ]   [ 0 ]
[ x2' ] = [   0    0    1  ] [ x2 ] + [ 0 ] u   (3)
[ x3' ]   [ -a0  -a1  -a2  ] [ x3 ]   [ 1 ]

y = [ b0  b1  b2 ] [ x1  x2  x3 ]^T + [0] u.

Tracing the signal-flow graph from input to output gives   (4)

and then substituting the expression for Y(s) from the state model output equation in (3)
yields

b0 X1(s) + b1 X2(s) + b2 X3(s) = b2 (1/s) U(s) + b1 (1/s^2) U(s) + b0 (1/s^3) U(s)
    - a2 (1/s)   [ b0 X1(s) + b1 X2(s) + b2 X3(s) ]
    - a1 (1/s^2) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ]
    - a0 (1/s^3) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ].   (5)

Equation (5) can be rewritten as

Y(s)/U(s) = [ b0 X1(s) + b1 X2(s) + b2 X3(s) ] / U(s)
          = (b2/s + b1/s^2 + b0/s^3) / (1 + a2/s + a1/s^2 + a0/s^3),

which is identical to equation (2), proving that the control canonical form is valid.
As an example consider the transfer function

C(s)/R(s) = (2 s^2 + 8 s + 6) / (s^3 + 8 s^2 + 26 s + 6)
          = (2/s + 8/s^2 + 6/s^3) / (1 + 8/s + 26/s^2 + 6/s^3),

so that, matching the general form, b2 = 2, b1 = 8, b0 = 6, a2 = 8, a1 = 26, and a0 = 6.
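Reading the coefficients off in this way fixes the control canonical matrices; a quick `ss2tf` round trip confirms that they reproduce C(s)/R(s):

```python
import numpy as np
from scipy.signal import ss2tf

# Control canonical form for C(s)/R(s) = (2s^2 + 8s + 6)/(s^3 + 8s^2 + 26s + 6)
a0, a1, a2 = 6.0, 26.0, 8.0
b0, b1, b2 = 6.0, 8.0, 2.0
A = [[0, 1, 0], [0, 0, 1], [-a0, -a1, -a2]]
B = [[0], [0], [1]]
C = [[b0, b1, b2]]
D = [[0]]
num, den = ss2tf(A, B, C, D)
print(num[0], den)   # numerator 2s^2 + 8s + 6; denominator s^3 + 8s^2 + 26s + 6
```

This is the same check one can apply to any canonical realization, since all valid state models of a system share one transfer function.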
Next consider the phase variable canonical form for the fourth-order transfer function

Y(s)/U(s) = (b3 s^3 + b2 s^2 + b1 s + b0) / (s^4 + a3 s^3 + a2 s^2 + a1 s + a0)   (6)
          = (b3/s + b2/s^2 + b1/s^3 + b0/s^4) / (1 + a3/s + a2/s^2 + a1/s^3 + a0/s^4).   (7)

The denominator in (6) is fourth order and leads one to conclude that a block diagram
consisting of four 1/s terms may be useful. Construction of the phase variable canonical
form is initiated by setting the output to

Y(s) = b0 s^-4 U(s)   (8)

(a cascade of four 1/s blocks from U(s), through the gain b0, to Y(s)). After substituting
Y(s) from (8) into (7), the expression (9) is obtained. It is left for the reader to show that
the additional terms, when applied to the block diagram, result in Figure 3.8(b) in [2]. The
reader may also wish to examine the similarities between control canonical form and phase
variable form.
It is important to note that a particular transfer function may be represented by block diagrams of
many different canonical forms, yielding many different valid state models.
The advances made in microprocessors, microcomputers, and digital signal processors have
accelerated the growth of digital control systems theory. Discrete-time systems are dynamic
systems in which the system variables are defined only at discrete instants of time.
The terms sampled-data control systems, discrete-time control systems and digital control
systems have all been used interchangeably in the control system literature. Strictly speaking,
sampled data are pulse-amplitude-modulated signals obtained by some means of sampling an
analog signal, while digital signals are generated by means of digital transducers or digital
computers, often in digitally coded form. Discrete-time systems, in a broad sense, describe
all systems having some form of digital or sampled signals.
Discrete-time systems differ from continuous-time systems in that the signals for a
discrete-time system are in sampled-data form. In contrast to a continuous-time system, the
operation of a discrete-time system is described by a set of difference equations. The analysis
and design of discrete-time systems may be effectively carried out by use of the z-transform,
which evolved from the Laplace transform as a special form.
Consider the n-th order LTI system with zero input. The state equation of such a system, with the usual notation, is

X'(t) = A X(t)    (1)
with x(0) = x0.    (2)

Assuming a solution of the form x(t) = e^{At} x(0),    (3)
the matrix exponential
e^{At} = I + At + A^2 t^2/2! + A^3 t^3/3! + ...
is called the state transition matrix (STM), written Φ(t) = e^{At}.    (4)

Starting from any initial time t0,
x(t) = e^{At} e^{-A t0} x(t0)
     = e^{A(t - t0)} x(t0)    (8)

Properties of Φ(t):
1) Φ(0) = I.
2) Φ^{-1}(t) = Φ(-t):
Proof: Consider Φ(t) = I + At + A^2 t^2/2! + ...
and Φ(-t) = I - At + A^2 t^2/2! - A^3 t^3/3! + ...
Multiplying Φ(t) and Φ(-t),
Φ(t) Φ(-t) = I.
Pre-multiplying both sides by Φ^{-1}(t),
Φ(-t) = Φ^{-1}(t).
3) Φ(t1) Φ(t2) = Φ(t1 + t2):
Proof: Φ(t1) = I + A t1 + A^2 t1^2/2! + A^3 t1^3/3! + ...
and Φ(t2) = I + A t2 + A^2 t2^2/2! + A^3 t2^3/3! + ...
Multiplying,
Φ(t1) Φ(t2) = I + A(t1 + t2) + A^2 (t1 + t2)^2/2! + A^3 (t1 + t2)^3/3! + ...
= Φ(t1 + t2)

Dept. of EEE, SJBIT
4) Φ(t2 - t1) Φ(t1 - t0) = Φ(t2 - t0):
(This property implies that a state transition process can be divided into a number of sequential transitions.) The transition from t0 to t2,
x(t2) = Φ(t2 - t0) x(t0),
is equal to the transition from t0 to t1 followed by the transition from t1 to t2:
x(t1) = Φ(t1 - t0) x(t0)
x(t2) = Φ(t2 - t1) x(t1)
Proof: Φ(t2 - t1) = I + A(t2 - t1) + A^2 (t2 - t1)^2/2! + ...
Φ(t1 - t0) = I + A(t1 - t0) + A^2 (t1 - t0)^2/2! + ...
Multiplying,
Φ(t2 - t1) Φ(t1 - t0) = I + A(t2 - t0) + A^2 (t2 - t0)^2/2! + A^3 (t2 - t0)^3/3! + ...
= Φ(t2 - t0)
Example: Obtain the STM by the power series method for

a) A = [ 0   1]        b) A = [1  1]
       [-1  -2]               [0  1]

a) Φ(t) = I + At + A^2 t^2/2! + A^3 t^3/3! + ...

= [1  0] + [ 0   1] t + [-1  -2] t^2/2! + [ 2   3] t^3/3! + ...
  [0  1]   [-1  -2]     [ 2   3]          [-3  -4]

= [1 - t^2/2! + 2 t^3/3! - ...      t - t^2 + t^3/2! - ...   ]
  [-t + t^2 - t^3/2! + ...          1 - 2t + 3t^2/2! - ...   ]

= [(1 + t) e^{-t}    t e^{-t}      ]
  [-t e^{-t}         (1 - t) e^{-t}]

b) Φ(t) = I + At + A^2 t^2/2! + ...

= [1  0] + [1  1] t + [1  2] t^2/2! + [1  3] t^3/3! + ...
  [0  1]   [0  1]     [0  1]          [0  1]

= [1 + t + t^2/2! + ...    t + t^2 + t^3/2! + ...]  =  [e^t   t e^t]
  [0                       1 + t + t^2/2! + ...  ]     [0     e^t  ]
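The power-series construction can be checked numerically. The sketch below (numpy; the truncation at 30 terms is an assumed cutoff, ample for small t) approximates e^{At} for A = [[0, 1], [-1, -2]] and compares it with the closed form obtained above:

```python
import numpy as np

def stm_series(A, t, terms=30):
    """Approximate Phi(t) = e^{At} by the truncated power series
    I + At + A^2 t^2/2! + ..."""
    phi = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A * (t / k)   # accumulates A^k t^k / k!
        phi = phi + term
    return phi

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
t = 0.5
phi = stm_series(A, t)
e = np.exp(-t)
expected = np.array([[(1 + t) * e, t * e],
                     [-t * e, (1 - t) * e]])
print(np.allclose(phi, expected))   # True
```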
2) Inverse Laplace Transform Method:
Consider the state equation with zero input,
x'(t) = A x(t),  with x(0) given.
Taking the Laplace transform of both sides,
s X(s) - x(0) = A X(s)
[sI - A] X(s) = x(0)
Pre-multiplying both sides by [sI - A]^{-1},
X(s) = [sI - A]^{-1} x(0)
Taking the inverse Laplace transform of both sides,
x(t) = L^{-1}{[sI - A]^{-1}} x(0)
Comparing this with x(t) = e^{At} x(0), it follows that
e^{At} = STM = Φ(t) = L^{-1}{[sI - A]^{-1}}.
Example: Obtain the STM by the inverse Laplace transform method for the system matrices

a) A = [-1   1]    b) A = [ 0   1]    c) A = [ 0   1]
       [ 0  -1]           [-1  -2]           [-2  -3]

a) [sI - A] = [s+1   -1 ]
              [ 0    s+1]

Φ(s) = [sI - A]^{-1} = [1/(s+1)    1/(s+1)^2]
                       [0          1/(s+1)  ]

Φ(t) = L^{-1}{Φ(s)} = [e^{-t}   t e^{-t}]
                      [0        e^{-t}  ]
b) A = [ 0   1]
       [-1  -2]

[sI - A] = [s   -1 ]
           [1   s+2]

Φ(s) = [sI - A]^{-1} = 1/(s+1)^2 [s+2   1]  =  [(s+2)/(s+1)^2    1/(s+1)^2]
                                 [-1    s]     [-1/(s+1)^2       s/(s+1)^2]

Φ(t) = L^{-1}{Φ(s)} = [(1 + t) e^{-t}    t e^{-t}      ]
                      [-t e^{-t}         (1 - t) e^{-t}]
c) A = [ 0   1]
       [-2  -3]

[sI - A] = [s   -1 ]
           [2   s+3]

Φ(s) = [sI - A]^{-1} = 1/((s+1)(s+2)) [s+3   1]
                                      [-2    s]

By partial fractions,

Φ(t) = L^{-1}{Φ(s)} = [2e^{-t} - e^{-2t}        e^{-t} - e^{-2t}   ]
                      [-2e^{-t} + 2e^{-2t}     -e^{-t} + 2e^{-2t}  ]
= Φ(t)
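The inverse-Laplace result for example (c) can be cross-checked numerically. This sketch (numpy only) computes e^{At} by eigen-decomposition, which is valid here since the eigenvalues -1 and -2 are distinct:

```python
import numpy as np

# Numerical cross-check of example (c): e^{At} for A = [[0,1],[-2,-3]].
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.7
w, V = np.linalg.eig(A)                                 # eigenvalues -1, -2
phi = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real
phi_closed = np.array([
    [ 2*np.exp(-t) -   np.exp(-2*t),   np.exp(-t) -   np.exp(-2*t)],
    [-2*np.exp(-t) + 2*np.exp(-2*t),  -np.exp(-t) + 2*np.exp(-2*t)],
])
print(np.allclose(phi, phi_closed))   # True
```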
3) Cayley-Hamilton Method:
By the Cayley-Hamilton theorem, any analytic function of A can be expressed as a polynomial of degree n-1 in A; in particular
Φ(t) = e^{At} = α0 I + α1 A + α2 A^2 + ... + α(n-1) A^{n-1},
where α0, α1, ..., α(n-1) are constant coefficients which can be evaluated from the eigenvalues of matrix A as described below.
Step 1: For the given matrix, form its characteristic equation |λI - A| = 0 and find the eigenvalues λ1, λ2, ..., λn.
Step 2 (Case 1): If all eigenvalues are distinct, form n simultaneous equations
e^{λ1 t} = f(λ1) = α0 + α1 λ1 + α2 λ1^2 + ...
e^{λ2 t} = f(λ2) = α0 + α1 λ2 + α2 λ2^2 + ...
...
e^{λn t} = f(λn) = α0 + α1 λn + α2 λn^2 + ...
and solve them for α0, α1, ..., α(n-1).
(Case 2: repeated eigenvalues.) Example: Evaluate A^10 for
A = [ 0   1]
    [-1  -2]
The characteristic equation is |λI - A| = λ^2 + 2λ + 1 = (λ + 1)^2 = 0, so λ1 = λ2 = -1.
f(λ) = λ^10 = α0 + α1 λ    (1)
At λ = -1: (-1)^10 = 1 = α0 - α1, i.e.
α0 - α1 = 1.
Since this is a case of repeated eigenvalues, the second equation is obtained by differentiating the expansion for f(λ) on both sides (i.e. Eq. 1):
d/dλ [λ^10] = 10 λ^9 = α1
At λ = -1: α1 = 10(-1)^9 = -10.
Hence α0 = 1 + α1 = 1 - 10 = -9.
f(A) = A^10 = α0 I + α1 A

= -9 [1  0] - 10 [ 0   1]
     [0  1]      [-1  -2]

= [-9  -10]
  [10   11]
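The Cayley-Hamilton shortcut above is easy to verify by brute force (numpy):

```python
import numpy as np

# Check of the Cayley-Hamilton result A^10 = -9 I - 10 A for
# A = [[0, 1], [-1, -2]] (repeated eigenvalue -1).
A = np.array([[0, 1], [-1, -2]])
direct = np.linalg.matrix_power(A, 10)
via_ch = -9 * np.eye(2) - 10 * A
print(np.allclose(direct, via_ch))   # True
```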
Example: Obtain the STM by the Cayley-Hamilton method for

a) A = [-1   1]    b) A = [ 0   1]    c) A = [ 0   1]
       [ 0  -1]           [-1  -2]           [-2  -3]

a) The characteristic equation is |λI - A| = (λ + 1)^2 = 0, so λ1 = λ2 = -1.
e^{λt} = f(λ) = α0 + α1 λ    (1)
At λ = -1: e^{-t} = α0 - α1.    (2)
Differentiating (1) with respect to λ and substituting λ = -1,
t e^{λt} = α1, so α1 = t e^{-t},    (3)
and hence α0 = e^{-t} + α1 = (1 + t) e^{-t}.
Φ(t) = e^{At} = α0 I + α1 A

= (1 + t) e^{-t} [1  0] + t e^{-t} [-1   1]
                 [0  1]            [ 0  -1]

= [e^{-t}   t e^{-t}]
  [0        e^{-t}  ]
b) A = [ 0   1]
       [-1  -2]
The characteristic equation is |λI - A| = λ^2 + 2λ + 1 = (λ + 1)^2 = 0, so λ1 = λ2 = -1.
e^{λt} = α0 + α1 λ    (1)
At λ = -1: e^{-t} = α0 - α1.    (2)
Differentiating Eq. (1) with respect to λ and substituting λ = λ2 = -1,
α1 = t e^{-t},
hence α0 = e^{-t}(1 + t) from Eq. (2).
The STM is given by Φ(t) = e^{At} = α0 I + α1 A. On simplification,
Φ(t) = [(1 + t) e^{-t}    t e^{-t}      ]
       [-t e^{-t}         (1 - t) e^{-t}]
c) A = [ 0   1]
       [-2  -3]
The characteristic equation is |λI - A| = λ^2 + 3λ + 2 = 0, so λ1 = -1, λ2 = -2.
e^{-t} = α0 - α1    (1)
e^{-2t} = α0 - 2α1    (2)
Solving (1) and (2),
α0 = 2e^{-t} - e^{-2t} and α1 = e^{-t} - e^{-2t}.
Hence the STM will be Φ(t) = α0 I + α1 A. On simplification,
Φ(t) = [2e^{-t} - e^{-2t}        e^{-t} - e^{-2t}   ]
       [-2e^{-t} + 2e^{-2t}     -e^{-t} + 2e^{-2t}  ]
as before.
Example: Obtain the STM for a third-order matrix A whose characteristic equation |λI - A| = 0 yields the eigenvalues λ1 = -2, λ2 = -2, λ3 = -1 (a repeated eigenvalue) by the Cayley-Hamilton method.
The corresponding equations are
e^{λt} = α0 + α1 λ + α2 λ^2    (1)
At λ = -2: e^{-2t} = α0 - 2α1 + 4α2.    (2)
Differentiating (1) with respect to λ and substituting λ = -2:
t e^{-2t} = α1 - 4α2.    (3)
At λ = -1: e^{-t} = α0 - α1 + α2.    (4)
From Eqs. (2), (3) and (4), solving for α0, α1, α2,
α0 = -2t e^{-2t} - 3e^{-2t} + 4e^{-t}
α1 = -3t e^{-2t} - 4e^{-2t} + 4e^{-t}
α2 = -t e^{-2t} - e^{-2t} + e^{-t}
Hence the STM is Φ(t) = α0 I + α1 A + α2 A^2; substituting the given A yields the 3x3 STM entry by entry.
Solution of the state equation with input: consider
x'(t) = A x(t) + B u(t),  x(t0) = x0.    (1)
Rearranging, x'(t) - A x(t) = B u(t). Pre-multiplying both sides by e^{-At},
e^{-At}[x'(t) - A x(t)] = e^{-At} B u(t).    (2)
Consider
d/dt [e^{-At} x(t)] = -A e^{-At} x(t) + e^{-At} x'(t) = e^{-At}[x'(t) - A x(t)],    (3)
hence Eq. (2) can be written as
d/dt [e^{-At} x(t)] = e^{-At} B u(t).
Integrating both sides between the limits 0 and t,
e^{-At} x(t) - x(0) = INT_0^t e^{-Aτ} B u(τ) dτ
x(t) = e^{At} x(0) + INT_0^t e^{A(t-τ)} B u(τ) dτ    (4)
         (ZIR)            (ZSR)
Eq. (4) is the solution of the state equation with input u(t); it represents the change of state from x(t0) to x(t) under the input u(t). The first term is the zero-input response (ZIR) and the second the zero-state response (ZSR).
Example 1: Obtain the response of the system to a step input of 10 units, given the state model
x'(t) = A x(t) + [0; 10] u(t).    (1)
Since x(0) = 0, the response is the zero-state term alone:
x(t) = INT_0^t Φ(t - τ) B u(τ) dτ    (2)
where, for this system, the STM has the form
Φ(t) = [1.0128 e^{a1 t} - 0.0128 e^{a2 t}      0.114 e^{a1 t} - 0.114 e^{a2 t}   ]
       [-0.114 e^{a1 t} + 0.114 e^{a2 t}      -0.0128 e^{a1 t} + 1.012 e^{a2 t}  ]
with a1 and a2 the eigenvalues of A. Hence
x(t) = INT_0^t Φ(t - τ) [0; 10] dτ.
Example 2: Obtain the total response for the system
x'(t) = [ 0   1] x(t) + [0] u(t),   y(t) = [1  1] x(t),
        [-2  -3]        [1]
with x(0) = [1; -1] and u(t) a unit step. From the earlier examples,
Φ(t) = [2e^{-t} - e^{-2t}        e^{-t} - e^{-2t}   ]
       [-2e^{-t} + 2e^{-2t}     -e^{-t} + 2e^{-2t}  ]
ZIR = Φ(t) x(0)
= [2e^{-t} - e^{-2t} - e^{-t} + e^{-2t}    ]  =  [ e^{-t}]
  [-2e^{-t} + 2e^{-2t} + e^{-t} - 2e^{-2t} ]     [-e^{-t}]
ZSR = INT_0^t Φ(t - τ) B u(τ) dτ = INT_0^t [ e^{-(t-τ)} - e^{-2(t-τ)}  ] dτ
                                           [-e^{-(t-τ)} + 2e^{-2(t-τ)}]
= [1/2 - e^{-t} + (1/2) e^{-2t}]
  [e^{-t} - e^{-2t}            ]
x(t) = ZIR + ZSR = [(1/2)(1 + e^{-2t})]
                   [-e^{-2t}          ]
y(t) = C x(t) = [1  1] x(t) = (1/2)(1 + e^{-2t}) - e^{-2t} = (1/2)(1 - e^{-2t}).
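The closed-form output y(t) = (1 - e^{-2t})/2 can be checked against a direct numerical integration. A simple forward-Euler sketch (numpy; the step size dt is an assumed discretisation, coarse but adequate here):

```python
import numpy as np

# Numerical check of Example 2: unit-step response of the system
# x' = [[0,1],[-2,-3]] x + [0,1] u, y = [1,1] x, x(0) = [1,-1].
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 1.0])
x = np.array([1.0, -1.0])             # x(0)
dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):          # forward-Euler steps with u(t) = 1
    x = x + dt * (A @ x + B * 1.0)
y_num = C @ x
y_closed = 0.5 * (1.0 - np.exp(-2.0 * T))
print(abs(y_num - y_closed) < 1e-3)   # True
```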
Example 3: For the state model
x'(t) = A x(t) + [0] u(t),
                 [1]
proceeding in the same way, the response combines e^{-t}, e^{-2t} and e^{-4t} terms:
x(t) = [(10/3) e^{-t} - e^{-2t} - (4/3) e^{-4t}]
       [-(5/3) e^{-t} + e^{-2t} + (8/3) e^{-4t}]
The system has two state variables, x1(t) and x2(t). The control input u(t) affects the state variable x1(t) but cannot affect the state variable x2(t). Hence the state variable x2(t) cannot be controlled by the input u(t), and the system is uncontrollable. In general, for an nth-order system with n state variables, if any one state variable is uncontrollable by the input u(t), the system is said to be UNCONTROLLABLE by the input u(t).
Definition:
The linear system given by
X'(t) = A X(t) + B u(t)
Y(t) = C X(t) + D u(t)
is said to be completely state controllable if there exists an unconstrained input vector u(t) which transfers the initial state of the system x(t0) to any final state x(tf) in finite time (tf - t0). If all the initial states are controllable, the system is completely controllable; otherwise, the system is uncontrollable.
Methods to determine controllability:
1) Gilbert's approach
2) Kalman's approach.
Gilbert's approach:
This determines not only the controllability but also the uncontrollable state variables, if the system is uncontrollable.
The solution of the state equation is X(t) = e^{At} X(0) + INT_0^t e^{A(t-τ)} B u(τ) dτ. Assuming, without loss of generality, that the final state X(t) = 0, the solution gives
X(0) = -INT_0^t e^{-Aτ} B u(τ) dτ.
After transforming the model to the diagonal (canonical) form, the system is state controllable iff none of the elements of the transformed input matrix P^{-1}B is zero. If any element is zero, the corresponding canonical state variable cannot be controlled by the input u(t), and the system is uncontrollable. Since the solution involves only the A and B matrices and is independent of the C matrix, the controllability of the system is frequently referred to as the controllability of the pair [A, B].
Example: Check the controllability of the system. If it is uncontrollable, identify the state which is uncontrollable.
X'(t) = [-1   1] X(t) + [1] u(t)
        [ 0  -2]        [0]
Solution: First, convert the model into diagonal form.
|λI - A| = 0
(λ + 1)(λ + 2) = λ^2 + 3λ + 2 = 0, so λ = -2, λ = -1.
The eigenvector associated with λ = -2 is v1 = [1; -1], and with λ = -1 is v2 = [1; 0]. Hence
P = [ 1   1],   P^{-1} = [0  -1]
    [-1   0]             [1   1]
so that P^{-1} A P = diag(-2, -1), and
P^{-1} B = [0  -1] [1]  =  [0]
           [1   1] [0]     [1]
As the vector P^{-1}B contains a zero element, the system is uncontrollable, and the canonical state z1(t) (the mode associated with λ = -2) is uncontrollable.
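Gilbert's test above can be checked numerically. The sketch below (numpy; eigenvector scaling and ordering are left to `np.linalg.eig`, so the zero entry is located by matching the eigenvalue -2) diagonalises the example system and inspects P^{-1}B:

```python
import numpy as np

# Gilbert's test for the example above: diagonalise and inspect P^{-1}B.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([1.0, 0.0])
w, P = np.linalg.eig(A)               # columns of P are eigenvectors
B_tilde = np.linalg.inv(P) @ B
idx = int(np.argmin(np.abs(w - (-2.0))))
print(np.isclose(B_tilde[idx], 0.0))  # True: the lambda = -2 mode is untouched by u
```

The zero entry in the transformed input vector confirms that the mode associated with the eigenvalue -2 cannot be influenced by the input.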
2) Kalman's approach:
This method determines the controllability of the system.
Statement: The system described by
x'(t) = A x(t) + B u(t)
y(t) = C x(t)
is said to be completely state controllable iff the rank of the controllability matrix Qc is equal to the order of the system matrix A, where
Qc = [B | AB | A^2 B | ... | A^{n-1} B].
That is, if the system matrix A is of order n x n, then for state controllability
ρ(Qc) = n,
where ρ(Qc) is the rank of Qc.
Example: Using Kalman's approach, determine the state controllability of the system described by
y'' + 3y' + 2y = u' + u    (1)
With x1 = y, x2 = y' - u:
x1' = x2 + u
x2' = -3x2 - 2x1 - 2u
[x1']  =  [ 0   1] [x1] + [ 1] u(t)
[x2']     [-2  -3] [x2]   [-2]
A = [ 0   1],   B = [ 1]
    [-2  -3]        [-2]
Qc = [B | AB] = [ 1  -2]
                [-2   4]
|Qc| = 4 - 4 = 0,
hence ρ(Qc) < 2 while the system matrix A is 2 x 2. Therefore the system is not state controllable.
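Kalman's rank test is easy to automate. A minimal sketch (numpy; the helper name `ctrb` is ours, chosen to match the usual convention) builds Qc column by column and checks its rank for the example system:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix Qc = [B | AB | ... | A^{n-1}B]."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([1.0, -2.0])
Qc = ctrb(A, B)
print(np.linalg.matrix_rank(Qc))   # 1, i.e. < 2: not state controllable
```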
Example: Determine the state controllability of the system by Kalman's approach.
x'(t) = [0   1   0] x(t) + [0] u(t)
        [0   0   1]        [0]
        [0  -2  -3]        [1]
B = [0],   AB = [ 0],   A^2 B = [ 1]
    [0]         [ 1]            [-3]
    [1]         [-3]            [ 7]
Qc = [B | AB | A^2 B] = [0   0   1]
                        [0   1  -3]
                        [1  -3   7]
|Qc| ≠ 0, so ρ(Qc) = 3 = order of the system. Therefore the system is state controllable.
Verification by Gilbert's approach:
Transforming the system model into the canonical form by the usual procedure (the eigenvalues are 0, -1, -2),
Z'(t) = [0   0   0] Z(t) + [ 1/2] u(t)
        [0  -1   0]        [-1  ]
        [0   0  -2]        [ 1/2]
Here B~ = [1/2  -1  1/2]^T has no zero element, so every canonical state is affected by u(t) and the system is controllable.
Observability:
Concept:
A system is completely observable if every state variable of the system affects some of the outputs. In other words, it is often desirable to obtain information on the state variables from measurements of the outputs and inputs. If any one of the states cannot be observed from the measurements of the outputs and inputs, that state is unobservable and the system is not completely observable, or simply unobservable.
Consider the state diagram of a typical system with state variables x1 and x2, and with y(t) and u(t) as the output and input respectively.
For this system, with C = [1  0], the observability matrix is
Qo = [C^T | A^T C^T].
Evaluating A^T C^T for the system above gives |Qo| = 0, so ρ(Qo) < 2.
The order of the system is 2, so the system is unobservable.
Example: Show that the system
x'(t) = [ 0    1    0] x(t) + [0] u(t)
        [ 0    0    1]        [0]
        [-6  -11   -6]        [1]
y(t) = [4  5  1] x(t)
is not completely observable by
1) Kalman's approach
2) Gilbert's approach.
Solution: Kalman's approach
A^T = [0   0   -6 ]
      [1   0   -11]
      [0   1   -6 ]
C^T = [4],   A^T C^T = [-6],   (A^T)^2 C^T = [ 6]
      [5]              [-7]                  [ 5]
      [1]              [-1]                  [-1]
Qo = [C^T | A^T C^T | (A^T)^2 C^T] = [4  -6   6]
                                     [5  -7   5]
                                     [1  -1  -1]
|Qo| = 0,
so ρ(Qo) < 3 and the system is not completely observable.
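The dual rank test can be sketched the same way as for controllability (numpy; the helper name `obsv` is ours). Stacking C, CA, CA^2 for the example system reproduces the singular observability matrix:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix Qo = [C; CA; ...; CA^{n-1}]."""
    rows = [C]
    for _ in range(A.shape[0] - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

A = np.array([[ 0.0,   1.0,  0.0],
              [ 0.0,   0.0,  1.0],
              [-6.0, -11.0, -6.0]])
C = np.array([4.0, 5.0, 1.0])
Qo = obsv(A, C)
print(np.linalg.matrix_rank(Qo))   # 2, i.e. < 3: not completely observable
```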
e) The corresponding characteristic equation is
S^n + α1 S^{n-1} + α2 S^{n-2} + ...... = 0    (2)
f) Comparing the coefficients of like powers of S in Eqns. (1) and (2), K1, K2, ..., Kn can be evaluated, and this completes the design of the state feedback controller K.
Example: Design a controller K for the state model
X'(t) = [0  1] X(t) + [0] u(t)
        [0  0]        [1]
such that the system shall have a damping ratio of 0.707 and a settling time of 1 sec.
Solution:
a) Converting the desired specifications into closed-loop pole locations gives poles at -4 ± j4.
b) The corresponding characteristic equation is
(S + 4 + j4)(S + 4 - j4) = 0
S^2 + 8S + 32 = 0    (1)
c) Let the controller be of the form K = [K1  K2].
d) From the given state model,
[A - BK] = [0  1] - [0] [K1  K2]  =  [  0     1 ]
           [0  0]   [1]              [-K1  -K2 ]
|SI - (A - BK)| = S^2 + K2 S + K1
e) The corresponding characteristic equation is
S^2 + K2 S + K1 = 0    (2)
Comparing the coefficients of like powers of S in Eqns. (1) and (2),
K2 = 8, K1 = 32.
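The design can be verified by checking the closed-loop eigenvalues directly, a sketch of which follows (numpy):

```python
import numpy as np

# Verify the controller K = [32, 8]: the closed-loop matrix A - BK
# should have its eigenvalues at the desired poles -4 ± j4.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[32.0, 8.0]])
poles = np.linalg.eigvals(A - B @ K)
# real parts should be -4, imaginary parts ±4
print(np.allclose(sorted(poles.real), [-4.0, -4.0]))   # True
```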
For the state model
X'(t) = A X(t) + B u(t)    (1)
Y(t) = C X(t)    (2)
the model can be converted into controller canonical form through the transformation
X~(t) = P X(t)    (3)
where P is an n x n matrix
P = [P1; P2; ...; Pn],   Pi = [Pi1  Pi2 ... Pin].
Taking the derivative of Eqn. (3) and substituting for X'(t) from Eqn. (1),
X~'(t) = A~ X~(t) + B~ u(t)    (4)
where A~ = P A P^{-1} is in companion form and    (5)
B~ = P B = [0  0 ... 0  1]^T.    (6)
From Eqn. (3),
x~1 = P11 x1 + P12 x2 + ... + P1n xn = P1 X    (7)
Taking the derivative,
x~1' = P1 X'(t) = P1 [A X(t) + B u(t)]    (8)
     = P1 A X(t) + P1 B u(t)    (9)
Since in the canonical form x~1' = x~2 contains no direct u(t) term, comparing (8) and (9) with the canonical structure gives P1 B = 0.
Proceeding in the same way, Pi B = 0 for i = 1, ..., n-1 with Pn B = 1, and Pi+1 = Pi A. These conditions determine P row by row from
P1 = [0  0 ... 0  1] Qc^{-1}.
Let the characteristic polynomial of A be S^n + α1 S^{n-1} + ... + αn, and the desired characteristic polynomial be S^n + a1 S^{n-1} + ... + an. In the canonical coordinates, the feedback gains are obtained by matching coefficients:
K~n = a1 - α1
K~(n-1) = a2 - α2
...
K~1 = an - αn    (17)
and hence K~ = [K~1  K~2 ... K~n].
8) The feedback controller K for the original system is
K = K~ P.
This completes the design procedure.
Ackermann's formula: with
P1 = [0  0 ... 0  1] Qc^{-1}   and   P = [P1; P1 A; ...; P1 A^{n-1}],
the controller is
K = [an - αn   a(n-1) - α(n-1)  ...  a1 - α1] P.
Expanding this product and using the Cayley-Hamilton theorem (A^n = -α1 A^{n-1} - ... - αn I), the terms collapse to
K = [0  0 ... 0  1] Qc^{-1} φ(A)    (4)
where
φ(A) = A^n + a1 A^{n-1} + a2 A^{n-2} + a3 A^{n-3} + ... + an I    (5)
and Qc = [B | AB | A^2 B | ... | A^{n-1} B].    (6)
Design procedure:
1. For the given model, determine Qc and test the controllability. If controllable:
2. Compute the desired characteristic equation as
S^n + a1 S^{n-1} + ......... + an = 0
and determine the coefficients a1, a2, ..., an.
3. Compute the characteristic polynomial of the system matrix A as
φ(A) = A^n + a1 A^{n-1} + a2 A^{n-2} + ....... + an I.
4. Determine Qc^{-1}; the controller is then K = [0  0 ... 0  1] Qc^{-1} φ(A).
This completes the design procedure.
Example: Design a controller K which places the closed-loop poles at -4 ± j4 for the system
X'(t) = [ 0  1] X(t) + [0] u(t), using Ackermann's formula.
        [-1  0]        [1]
Answer:
I. Check for controllability:
Qc = [B | AB] = [0  1],   Qc^{-1} = [0  1]
                [1  0]              [1  0]
II. The desired characteristic equation is (S + 4 + j4)(S + 4 - j4) = 0, i.e.
S^2 + 8S + 32 = 0, so a1 = 8, a2 = 32.
III.
φ(A) = A^2 + a1 A + a2 I = [-1  0] + [ 0  8] + [32   0]  =  [31   8]
                           [ 0 -1]   [-8  0]   [ 0  32]     [-8  31]
IV. K = [0  1] Qc^{-1} φ(A) = [1  0] φ(A) = [31  8].
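Ackermann's formula (4) translates directly into a few lines of numpy. In this sketch the helper name `ackermann` is ours, and the plant A = [[0, 1], [-1, 0]], B = [0, 1] is the one assumed in the example above:

```python
import numpy as np

def ackermann(A, B, coeffs):
    """K = [0 ... 0 1] Qc^{-1} phi(A); coeffs = [a1, ..., an] of the
    desired monic characteristic polynomial."""
    n = A.shape[0]
    Qc = np.column_stack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    phi = np.linalg.matrix_power(A, n)
    for i, a in enumerate(coeffs, start=1):
        phi = phi + a * np.linalg.matrix_power(A, n - i)
    e_last = np.zeros(n)
    e_last[-1] = 1.0
    return e_last @ np.linalg.inv(Qc) @ phi

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([0.0, 1.0])
K = ackermann(A, B, [8.0, 32.0])    # desired: s^2 + 8s + 32
poles = np.linalg.eigvals(A - np.outer(B, K))
print(K)   # [31.  8.]
```

The closed-loop eigenvalues of A - BK land at -4 ± j4, as required.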
State regulator design: A state regulator system is a system whose nonzero initial state can be transferred to the zero state by the application of the input u(t). Design of such a system requires a control law
u(t) = -K X(t),
which transfers the nonzero initial state to the zero state (rejection of disturbances) by properly designing the controller K.
The following steps give the design of such a controller. For a system given by
X'(t) = A X(t) + B u(t)    (1)
1. Determine the eigenvalues λ1, λ2, ..., λn of A.
2. Determine the transformation matrix P which transfers the given state model into the controllable canonical form:
P = [P1; P1 A; ...; P1 A^{n-1}],   P1 = [0 ... 0  0  1] Qc^{-1},
Qc = [B | AB | A^2 B | ... | A^{n-1} B].
3. Using the desired locations of the closed-loop poles (desired eigenvalues), obtain the characteristic polynomial
(S - μ1)(S - μ2).....(S - μn) = S^n + a1 S^{n-1} + a2 S^{n-2} + ... + an
and determine a1, a2, ..., an.
4. The required state feedback controller is
K = [an - αn   a(n-1) - α(n-1)  ...  a1 - α1] P,
where the αi are the coefficients of the characteristic polynomial of A.
Example: A regulator system has the plant
X'(t) = [ 0    1    0] X(t) + [0] u(t)
        [ 0    0    1]        [0]
        [-6  -11   -6]        [1]
Y = [1  0  0] X(t)
Problem: Design a state feedback controller which places the closed-loop poles at -2 ± j3.464 and -5. Give the block diagram of the control configuration.
Solution: By observing the structure of the A and B matrices, the model is already in controllable canonical form, so the controller K can be designed directly.
S1. |SI - A| = S^3 + 6S^2 + 11S + 6, so α1 = 6, α2 = 11, α3 = 6.
S2. The desired characteristic equation is taken as S^3 + 9S^2 + 20S + 60 = 0, so a1 = 9, a2 = 20, a3 = 60.
S3. Comparing coefficients,
K1 = a3 - α3 = 54
K2 = a2 - α2 = 9
K3 = a1 - α1 = 3.
Given the plant
X'(t) = A X(t) + B u(t),   Y(t) = C X(t),
if the matrices A, B, C and the initial state X(0) are known, and X^(t) denotes the estimated state and X^(0) the estimated initial state, the resulting observer is an open-loop observer.
Drawbacks of the open-loop observer:
1. Lack of precise information about x(0).
2. If X^(0) ≠ X(0), the estimated state X^(t) obtained from the open-loop estimator will have a continuously growing error, and hence so will the output.
3. Small errors in A, B, and disturbances that enter the system but not the estimator will make the estimation process converge slowly.
Hence it is better to go for a closed-loop estimator.
Design of the closed-loop full-order state observer: The figure shows the closed-loop full-order state observer (Luenberger state observer).
X^'(t) = A X^(t) + B u(t) + m (y(t) - y^(t))    (1)
y^(t) = C X^(t)    (2)
Defining the estimation error X~(t) = X(t) - X^(t),    (3)
X~'(t) = X'(t) - X^'(t)    (4)
       = (A - mC) X~(t)    (5)
with characteristic equation |SI - (A - mC)| = 0.    (6)
If m is chosen properly so that (A - mC) has reasonably fast roots, then X~(t) will decay to zero irrespective of X~(0), i.e. X^(t) → X(t) regardless of X(0). Further, it can be seen that equation (5) is independent of the control input u(t).
The observer gain matrix m can be obtained in exactly the same fashion as the controller gain matrix K.
Design procedure:
1. Let the desired pole locations of the observer error roots be μ1, ..., μn; the corresponding polynomial equation is
(S - μ1)(S - μ2).....(S - μn) = 0
S^n + a1 S^{n-1} + a2 S^{n-2} + ... + an = 0;
determine the coefficients a1, a2, ..., an.
Example: For the plant
X'(t) = [ 0    1    0] X(t) + [0] u
        [ 0    0    1]        [0]
        [-6  -11   -6]        [1]
Y(t) = [0  0  1] X(t)
1) The desired characteristic equation will be
(S + 2 + j3.464)(S + 2 - j3.464)(S + 5) = 0
=> S^3 + 9S^2 + 20S + 60 = 0
hence, comparing with S^3 + a1 S^2 + a2 S + a3 = 0,
a1 = 9, a2 = 20, a3 = 60.    (I)
2) Writing A in the observable canonical form used here and m = [m1  m2  m3]^T,
[A - mC] = [0  0   -6 ] - [m1] [0  0  1] = [0  0  -(6 + m1) ]
           [1  0  -11 ]   [m2]             [1  0  -(11 + m2)]
           [0  1   -6 ]   [m3]             [0  1  -(6 + m3) ]
|SI - (A - mC)| = S^3 + (6 + m3) S^2 + (11 + m2) S + (6 + m1) = 0
Comparing with the desired S^3 + a1 S^2 + a2 S + a3 = 0,    (II)
a1 = 6 + m3,  a2 = 11 + m2,  a3 = 6 + m1.
3) From (I) and (II),
9 = 6 + m3  =>  m3 = 3
20 = 11 + m2  =>  m2 = 9
60 = 6 + m1  =>  m1 = 54
4) Hence m = [54  9  3]^T.
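The observer gain can be checked the same way as the controller gain, a sketch of which follows (numpy; A is written in the observable canonical form used in the worked example above):

```python
import numpy as np

# Check of the observer gain m = [54, 9, 3]^T: the eigenvalues of
# A - mC should match the roots of s^3 + 9s^2 + 20s + 60.
A = np.array([[0.0, 0.0,  -6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0,  -6.0]])
C = np.array([[0.0, 0.0, 1.0]])
m = np.array([[54.0], [9.0], [3.0]])
obs_poles = np.linalg.eigvals(A - m @ C)
desired = np.roots([1.0, 9.0, 20.0, 60.0])
print(np.allclose(np.sort_complex(obs_poles), np.sort_complex(desired)))   # True
```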
Reduced-order observer: When some of the states (say X1) are directly measurable, an observer is needed only for the remaining states Xe. Partition the model as
[X1'(t)]  =  [a11   a1e] [X1(t)] + [b1] u(t)    (3)
[Xe'(t)]     [ae1   aee] [Xe(t)]   [be]
Y(t) = [1  0] [X1(t); Xe(t)]    (4)
so that
X1'(t) = a11 X1(t) + a1e Xe(t) + b1 u(t)    (5)
Xe'(t) = ae1 X1(t) + aee Xe(t) + be u(t)    (6)
In (5) the quantities X1 and u are known inputs, so
a1e Xe(t) = X1'(t) - a11 X1(t) - b1 u(t)    (7)
consists of known terms and plays the role of a measurement of Xe, while (6) gives its dynamics.
By analogy with the full-order observer, the estimate X^e is generated by
X^e'(t) = aee X^e(t) + ae1 Y(t) + be u(t) + m (y~(t) - y^(t)),  where y^(t) = a1e X^e(t)    (8)
Defining the estimation error X~e(t) = Xe(t) - X^e(t),
X~e'(t) = Xe'(t) - X^e'(t).
Substituting for Xe'(t) and X^e'(t) in the above equation and simplifying,
X~e'(t) = (aee - m a1e) X~e(t)    (9)
The corresponding characteristic equation is
|SI - (aee - m a1e)| = 0    (10)
Comparing with the desired characteristic equation, m can be evaluated in the same way as for the full-order observer.
To implement the reduced-order observer, equation (8) can be written as
X^e'(t) = (aee - m a1e) X^e(t) + (ae1 - m a11) Y(t) + (be - m b1) u + m y'(t)    (11)
To eliminate the derivative y'(t), define X^e*(t) = X^e(t) - m y(t); then
X^e*'(t) = (aee - m a1e) X^e*(t) + (ae1 - m a11) y(t) + (be - m b1) u(t)    (12)
Equation (12) can be represented by the block diagram shown below.
Example: For the plant
X'(t) = [0  0    0] X(t) + [50] u(t)
        [1  0  -24]        [ 2]
        [0  1  -10]        [ 1]
Y(t) = [0  0  1] X(t),
design a reduced-order observer for the unmeasurable states such that the observer eigenvalues are at -10, -10.
Solution: From the output equation it follows that the states X1(t) and X2(t) cannot be measured, as the corresponding elements of the C matrix are zero. Writing the state equation in partitioned form with Xe = [X1; X2] and the measured state X3:
aee = [0  0],  a1e = [0  1],  ae1 = [  0],  a11 = [-10],  be = [50],  b1 = [1].
      [1  0]                        [-24]                      [ 2]
Let m = [m1; m2]. The characteristic equation of the observer is
|SI - (aee - m a1e)| = S^2 + m2 S + m1 = 0.
The desired characteristic equation is
(S + 10)(S + 10) = S^2 + 20S + 100 = 0.
Equating the coefficients of like powers of S from the two characteristic equations, m2 = 20 and m1 = 100, so
m = [100]
    [ 20]
Hence the state model of the observer will be
X^e*'(t) = {[0  0] - [100] [0  1]} X^e*(t) + {[  0] - [100] (-10)} Y(t) + {[50] - [100] (1)} u(t)
            [1  0]   [ 20]                    [-24]   [ 20]                [ 2]   [ 20]
Hence the state feedback controller and the observer can now be combined.
Combining the state feedback control law u(t) = -K X^(t) with the full-order observer, and using the error X~(t) = X(t) - X^(t), the closed-loop equations become
[X'(t)]  =  [A - BK      BK   ] [X(t)]    (9)
[X~'(t)]    [  0      A - mC  ] [X~(t)]
Equation (9) represents the model of a 2n-dimensional system. The corresponding characteristic equation is
|SI - (A - BK)| |SI - (A - mC)| = 0    (10)
Equation (10) shows that the poles (eigenvalues) of the overall system are the union of the poles of the controller and of the observer. The design of the controller and the observer can therefore be carried out separately, and when they are used together the pole locations are unchanged. By this separation principle, the controller can be designed using |SI - (A - BK)| = 0 and the observer using |SI - (A - mC)| = 0.
The corresponding block diagram is as shown.
The control law uses the estimated state:
U(t) = -K X^(t).
CONTROLLERS
The controller is an element which accepts the error in some form and decides the proper corrective action. The output of the controller is then applied to the system or process. The accuracy of the entire system depends on how sensitive the controller is to the detected error and how it manipulates that error. The controller has its own logic to handle the error.
Controllers are classified according to the response and mode of operation into continuous and discontinuous controllers. The discontinuous mode controllers are further classified as ON-OFF controllers and multiposition controllers.
Continuous mode controllers, depending on the input-output relationship, are classified into three basic types: the Proportional controller, the Integral controller and the Derivative controller. In many practical cases these basic modes are combined, giving Proportional-Derivative (PD) controllers, Proportional-Integral (PI) controllers and Proportional-Integral-Derivative (PID) controllers.
The block diagram of a basic control system with controller is shown in the figure. The error detector compares the feedback signal b(t) with the reference input r(t) to generate the error e(t) = r(t) - b(t).
Proportional Controller:
In the proportional control mode, the output of the controller is proportional to the error e(t). The relation between the error and the controller output is determined by a constant called the proportional gain constant, denoted KP: p(t) = KP e(t).
Though there exists a linear relation between the controller output and the error, for zero error the controller output should not be zero, as this would lead to zero input to the system or process. Hence there exists some controller output P0 for zero error.
But due to high gain, the peak overshoot and settling time increase, and this may lead to instability of the system. So a compromise is made to keep the steady-state error and overshoot within acceptable limits. Hence, when the proportional controller is used, the error reduces but cannot be made zero. The proportional controller is suitable where manual reset of the operating point is possible and the load changes are small.
Integral Controller:
We have seen that the proportional controller cannot adapt to changing load conditions. To overcome this, the integral mode or reset action controller is used. In this controller, the controller output p(t) is changed at a rate proportional to the actuating error signal e(t):
dp(t)/dt = Ki e(t),  i.e.  p(t) = Ki INT e(t) dt.
Compared to the proportional controller, the integral control requires time to build up an appreciable output; however, it continues to act till the error signal disappears. Hence the steady-state error can be made zero. The reciprocal of the integral constant is known as the integral time constant Ti, i.e. Ti = 1/Ki.
Derivative Controller:
In this mode, the output of the controller depends on the rate of change of the error with respect to time. Hence it is also known as rate action mode or anticipatory action mode. The mathematical equation for the derivative controller is
p(t) = Kd de(t)/dt,
where Kd is the derivative gain constant. The derivative gain constant indicates by how much percentage the controller output must change for every percentage-per-second rate of change of the error. The advantage of the derivative control action is that it responds to the rate of change of error and can produce a significant correction before the magnitude of the actuating error becomes too large. Derivative control thus anticipates the actuating error, initiates an early corrective action and tends to increase the stability of the system by improving the transient response. When the error is zero or constant, the derivative controller output is zero. Hence it is never used alone.
PI Controller:
This is a composite control mode obtained by combining the proportional mode and the integral mode. The mathematical expression for the PI controller is
p(t) = Kp e(t) + Ki INT e(t) dt.
PD Controller:
This is a composite control mode obtained by combining the proportional mode and the derivative mode. The mathematical expression for the PD controller is
p(t) = Kp e(t) + Kd de(t)/dt.
PID Controller:
This is a composite control mode obtained by combining the proportional mode, the integral mode and the derivative mode. The mathematical expression for the PID controller is
p(t) = Kp e(t) + Ki INT e(t) dt + Kd de(t)/dt.
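The three composite modes above map directly onto a few lines of code. The following is a minimal discrete-time PID sketch (the class name, gains and sample time are illustrative assumptions, not from the notes):

```python
class PID:
    """Discrete-time PID: p = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                    # integral mode
        derivative = (error - self.prev_error) / self.dt    # derivative mode
        self.prev_error = error
        return (self.kp * error                             # proportional mode
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
u = pid.update(error=1.0)   # 2*1 + 1*(0.01) + 0.1*(100) = 12.01
```

Setting ki = 0 or kd = 0 recovers the PD and PI modes respectively.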
This is to note that derivative control is effective in the transient part of the response, where the error is varying, whereas in the steady state any remaining error is usually constant and does not vary with time. In this respect, derivative control is not effective in the steady-state part of the response. In the steady state, integral control will be effective in giving the proper correction to minimise the steady-state error. An integral controller is basically a low-pass circuit and hence will not be effective in the transient part of the response, where the error is fast-changing. Hence, for the whole range of the time response, both derivative and integral control actions should be provided in addition to the inbuilt proportional control action for negative feedback control systems.
10EE55
particularly when the amplitude is not small and result in some unwanted phenomena.
Page 99
A typical phenomenon exhibited by nonlinear systems (Figure 6.1) is the jump resonance of a spring-mass-damper system. The frequency responses of the system with a linear spring, a hard spring and a soft spring are as shown in Fig. 6.2(a), Fig. 6.2(b) and Fig. 6.2(c) respectively.
As the forcing frequency is increased from zero, the measured response follows the curve through the points A, B and C, but at C an increment in frequency results in a discontinuous jump down to the point D, after which, with further increase in frequency, the response curve follows through D and E. If the frequency is now decreased, the response follows the curve E-D-F, with a jump up to B from the point F, and then the response curve moves towards A. This phenomenon is known as jump resonance. (Fig. 6.2(a), Fig. 6.2(b))
When the system is excited by a sinusoidal input of constant frequency and the amplitude is increased from low values, the output frequency at some point exactly matches the input frequency and continues to remain so thereafter. This synchronisation or matching of the output frequency with the input frequency is called frequency entrainment or synchronisation.
6.3 Methods of Analysis
Nonlinear systems are difficult to analyse, and arriving at general conclusions is tedious. However, starting with the classical techniques for the solution of standard nonlinear differential equations, several techniques have evolved which suit different types of analysis. It should be emphasised that very often the conclusions arrived at will be useful for the system only under the specified conditions and do not always lead to generalisations. The commonly used methods are given below.
Linearization Techniques: In reality all systems are nonlinear, and linear systems are only approximations of nonlinear systems. In some cases the linearised model gives adequate information, whereas in other cases the linearised model has to be modified when the operating point moves from one point to another. Many techniques, like the perturbation method, series approximation techniques and quasi-linearization techniques, are used to linearise a nonlinear system.
Phase Plane Analysis: This method is applicable to second order linear or nonlinear systems for the study of the nature of phase trajectories near the equilibrium points. The system behaviour is qualitatively analysed, along with the design of system parameters, so as to get the desired response from the system. A closed trajectory called a limit cycle can be identified with this method, which helps in investigating the stability of the system.
Liapunov's Method for Stability: The analytical solution of a nonlinear system is rarely possible. If a numerical solution is attempted, the question of stability behaviour cannot be fully answered, as solutions to an infinite set of initial conditions would be needed. The Russian mathematician A. M. Liapunov introduced and formalised a method which allows one to conclude stability without solving the system equations.
6.4 Classification of Nonlinearities
The nonlinearities are classified into i) inherent nonlinearities and ii) intentional nonlinearities. The nonlinearities which are present in the components used in the system, due to inherent imperfections or properties of the system, are known as inherent nonlinearities. Examples are saturation in magnetic circuits, dead zone, backlash in gears, etc.
Intentional nonlinearities are deliberately introduced to improve the performance of the system, or to make the system more economical, consuming less space, and more reliable than a linear system designed to achieve the same objective. Such nonlinear elements are frequently used to perform various tasks. It should be noted, however, that the improvement in system performance due to a nonlinearity is possible only under specific operating conditions; under other conditions, a nonlinearity generally degrades the performance of the system.
6.5 Common Physical Nonlinearities: The common examples of physical nonlinearities are saturation, dead zone, Coulomb friction, stiction, backlash, different types of springs, different types of relays, etc.
Saturation: This is the most common of all nonlinearities. All practical systems, when driven by sufficiently large signals, exhibit the phenomenon of saturation due to the limitations of the physical capabilities of their components. Saturation is a common phenomenon in magnetic circuits and amplifiers.
Dead zone: Some systems do not respond to very small input signals. For a particular range of input, the output is zero. This is called the dead zone existing in a system. The input-output curve is shown in the figure.
(Figure 6.3, Figure 6.4)
Relay-controlled systems find wide application in the control field. The characteristic of an ideal relay is as shown in the figure. In practice, a relay has a definite amount of dead zone, as shown. This dead zone is caused by the fact that the relay coil requires a finite amount of current to actuate the relay. Further, since a larger coil current is needed to close the relay than the current at which the relay drops out, the characteristic always exhibits hysteresis.
Phase plane analysis is one of the earliest techniques developed for the study of second order nonlinear systems. It may be noted that in the state space formulation the state variables chosen are usually the output and its derivatives. The phase plane is thus a state plane where the two state variables x1 and x2 are the output and its derivative. The method is used for obtaining graphically the solution of the following two simultaneous equations of an autonomous system:
x1' = f1(x1, x2)
x2' = f2(x1, x2)
where f1 and f2 are either linear or nonlinear functions of the state variables x1 and x2 respectively. The state plane with coordinate axes x1 and x2 is called the phase plane. In many cases, particularly in the phase variable representation of systems, the equations take the form
x1' = x2
x2' = f(x1, x2).
The plot of the state trajectories or phase trajectories of the above equations thus gives an idea of the solution of the state as time t evolves, without explicitly solving for the state. The phase plane analysis is particularly suited to second order nonlinear systems with no input or with constant inputs. It can be extended to cover other inputs as well, such as ramp inputs and pulse inputs.
such a system, consider the points in the state space at which the derivatives of all the state
Dept. of EEE, SJBIT
Page 105
10EE55
variables are zero. These points are called singular points. These are in fact equilibrium points
of the system.
undisturbed.
If the system is placed at such a point, it will continue to lie there if left
A family of phase trajectories starting from different initial states is called a
phase portrait. As time t increases, the phase portrait graphically shows how the system moves
in the entire state plane from the initial states in the different regions. Since the solutions from
each of the initial conditions are unique, the phase trajectories do not cross one another. If the
system has nonlinear elements which are piece-wise linear, the complete state space can be
divided into different regions and phase plane trajectories constructed for each of the regions
separately.
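The idea of a phase portrait can be sketched numerically. The following is a minimal illustration (the damped second-order system and all parameter values here are assumed for illustration, not taken from the text) of trajectories from several initial states, all tending to the equilibrium at the origin:

```python
# A minimal sketch (assumed system and parameters): phase trajectories of the
# damped second-order system x'' + 2*zeta*wn*x' + wn^2*x = 0, with state
# variables x1 = x and x2 = dx/dt.
def trajectory(x1, x2, zeta=0.5, wn=1.0, dt=0.01, steps=2000):
    """Forward-Euler integration of dx1/dt = x2, dx2/dt = -2*zeta*wn*x2 - wn^2*x1."""
    path = [(x1, x2)]
    for _ in range(steps):
        dx1 = x2
        dx2 = -2.0 * zeta * wn * x2 - wn**2 * x1
        x1, x2 = x1 + dx1 * dt, x2 + dx2 * dt
        path.append((x1, x2))
    return path

# Trajectories from different initial states together form a phase portrait;
# with positive damping every trajectory tends to the equilibrium at the origin.
portrait = [trajectory(a, 0.0) for a in (1.0, 2.0, 3.0)]
print([len(p) for p in portrait])
```

Plotting each path in the (x1, x2) plane would show the non-crossing trajectories described above.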
Consider the second order system

d²x/dt² + 2ζωn(dx/dt) + ωn²x = 0

where ζ and ωn are the damping factor and undamped natural frequency of the system. Defining the state
variables as x1 = x and x2 = dx/dt, we get the state equations in the phase
variable form as

dx1/dt = x2
dx2/dt = −2ζωn·x2 − ωn²·x1
These equations may then be solved for the phase variables x1 and x2. The time response plots of
x1 and x2 for various values of damping and initial conditions can be plotted. When the differential
equations describing the dynamics of the system are nonlinear, it is in general not possible to
obtain a closed form solution for x1 and x2. For example, if the spring force is nonlinear, say
(k1x + k2x³), the state equations take the form

dx1/dt = x2
dx2/dt = −2ζωn·x2 − (k1x1 + k2x1³)

Solving these equations by integration is no longer an easy task. In such situations, a graphical
method known as the phase-plane method is found to be very helpful.
The coordinate plane with axes that correspond to the dependent variable x1 and its derivative x2
is called the phase-plane. The curve described by the state point (x1, x2) in the phase-plane with
respect to time is called a phase trajectory. A phase trajectory can be easily constructed by
graphical techniques.
7.3.1 Isoclines Method:
Let the state equations for a nonlinear system be in the form

dx1/dt = f1(x1, x2)
dx2/dt = f2(x1, x2)

where both f1(x1, x2) and f2(x1, x2) are analytic. The slope of the trajectory in the phase plane is

dx2/dx1 = f2(x1, x2)/f1(x1, x2) = M

Therefore, the locus of constant slope of the trajectory is given by f2(x1, x2) = M·f1(x1, x2).
This equation gives the family of isoclines: for different values of
M, the slope of the trajectory, different isoclines can be drawn in the phase plane.
Knowing the value of M on a given isocline, it is easy to draw short line segments of slope M on each of these
isoclines.
Consider a simple linear system with given state equations.
Dividing the second state equation by the first, we get the slope of the state trajectory in the x1-x2 plane as
dx2/dx1 = f2/f1 = M. Setting this slope equal to a constant M gives the equation of an isocline,
which for a linear system is a straight line in the x1-x2 plane. We can draw different lines in the x1-x2 plane for
different values of M, called isoclines. If we draw a sufficiently large number of isoclines to cover the
complete state space as shown, we can see how the state trajectories move in the state
plane. Different trajectories can be drawn from different initial conditions. A large number of
such trajectories together form a phase portrait. A few typical trajectories are shown in the figure
given below.
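The isocline construction can be checked in code. The sketch below assumes the example system dx1/dt = x2, dx2/dt = −x1 − x2 (an assumed system, not necessarily the one in the text's figure), for which the M-isocline is the straight line x2 = −x1/(1 + M); every point on that line should give back trajectory slope M:

```python
# Sketch (assumed example system): for dx1/dt = x2 and dx2/dt = -x1 - x2,
# the trajectory slope is M = dx2/dx1 = (-x1 - x2) / x2, so the isocline for
# a given M is the straight line x2 = -x1 / (1 + M).
def isocline_x2(x1, M):
    """Return x2 on the isocline of slope M at the given x1 (M != -1)."""
    return -x1 / (1.0 + M)

def slope(x1, x2):
    """Trajectory slope dx2/dx1 at a state-plane point (x2 != 0)."""
    return (-x1 - x2) / x2

# Every point on the M-isocline should give back slope M.
for M in (0.5, 1.0, 2.0, -3.0):
    for x1 in (0.5, 1.0, 2.0):
        print(M, round(slope(x1, isocline_x2(x1, M)), 6))
```

Short line segments of slope M drawn along each isocline are then joined to trace a trajectory.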
Figure 7.1
The procedure for construction of the phase trajectories can be summarised as below:
1. For the given nonlinear differential equation, define the state variables as x1 and x2
and obtain the state equations.
2. Determine the equation to the isoclines, f2(x1, x2) = M·f1(x1, x2).
3. For different values of the slope M, draw the isoclines and mark short line segments
of slope M on each of them.
4. Starting from the given initial condition, join the successive line segments to obtain
the phase trajectory.
Example 7.1: For the given system with dead zone as the nonlinear element, draw the phase
trajectory originating from the initial condition (3, 0).
where u is the output of the dead-zone element. Since the input is zero, e = r − c = −c, and the
differential equation in terms of the error takes the same form. Defining the state variables as
x1 = e and x2 = de/dt, we get the state equations. We can identify three regions in the state
plane depending on the value of e = x1.
Region 1:
Here u = 1, so that the isoclines are obtained from the corresponding isocline equation. For
different values of M, these are a number of straight lines parallel to the x-axis.
(Table: the constant isocline positions x2 for different values of the slope M, from which the
parallel isoclines of this region are drawn.)
Region 2: Here the isoclines are parallel lines having a constant slope of −1, and the
trajectories are lines of constant slope −1.
Region 3:
Here u = −1, so that on substitution the isoclines are again straight lines. These are also lines
parallel to the x-axis, at a constant distance decided by the value of the slope M of the
trajectory.
(Table: the constant isocline positions x2 for different values of the slope M in this region.)
Example 7.2: For the given system, plot the phase trajectory originating from the initial
condition (−1, 0). The differential equation for the system is as given.
When x2 = 0, the slope dx2/dx1 is infinite, i.e., the phase trajectory crosses the x1-axis at
right angles.
Figure 7.3
The isoclines are drawn as shown in the figure. The starting point of the trajectory is marked at
(−1, 0), where the slope is infinite, i.e., the trajectory starts at an angle of 90°. Up to the
next isocline, the average slope is (8 + 4)/2 = 6, i.e., a line segment with a slope of 6 is drawn
(at an angle of 80.5°). The same procedure is repeated, and the complete phase trajectory is
obtained as shown in the figure.
7.3.2 Delta Method:
The delta method of constructing phase trajectories is applied to systems of the form

d²x/dt² + f(x, dx/dt, t) = 0

where f(x, dx/dt, t) may be linear or nonlinear and may even be time varying, but must be
continuous and single valued.
With the help of this method, phase trajectory for any system with step or ramp or any time
varying input can be conveniently drawn. The method results in considerable time saving when
a single or a few phase trajectories are required rather than a complete phase portrait.
While applying the delta method, the above equation is first converted to the form

d²x/dt² + ω²(x + δ) = 0

In general, δ depends upon the variables x, dx/dt and t, but for short intervals the
changes in these variables are negligible. Thus over a short interval we have
d²x/dt² + ω²(x + δ) = 0, where δ is a constant.
Let us choose the state variables as x1 = x and x2 = (1/ω)(dx/dt); then

dx1/dt = ω·x2
dx2/dt = −ω(x1 + δ)

so that the trajectory slope is

dx2/dx1 = −(x1 + δ)/x2
With δ known at any point P on the trajectory and assumed constant for a short interval,
we can draw a short segment of the trajectory by using the trajectory slope dx2/dx1 given
in the above equation. A simple geometrical construction given below can be used for
this purpose.
1. From the initial point, calculate the value of δ.
2. Draw a short arc segment through the initial point with (−δ, 0) as centre,
thereby determining a new point on the trajectory.
3. Repeat the process at the new point and continue.
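The three construction steps above can be sketched in code. The system below (x'' + x' + x = 0, rewritten as x'' + (x + δ) = 0 with δ = x') is an assumed example, and as a simplification of step 3 the value of δ is recomputed at every short arc rather than from the arc's mean point:

```python
# Sketch of the delta-method stepping rule (assumed system x'' + x' + x = 0,
# i.e. omega = 1 and delta = x'): over a short interval delta is held constant,
# so the trajectory segment is a circular arc about the point (-delta, 0).
import math

def delta_step(x1, x2, dtheta=0.05):
    """Advance one short arc: rotate (x1, x2) about (-delta, 0) by -dtheta."""
    delta = x2                      # delta = x' for this example system
    cx = -delta                     # arc centre lies on the x1 axis
    r = math.hypot(x1 - cx, x2)
    theta = math.atan2(x2, x1 - cx)
    theta -= dtheta                 # trajectories move clockwise
    return cx + r * math.cos(theta), r * math.sin(theta)

x1, x2 = 1.0, 0.0                   # initial point (1, 0)
for _ in range(400):
    x1, x2 = delta_step(x1, x2)
print(round(x1, 3), round(x2, 3))   # the state spirals in toward the origin
```

Each call draws one small arc; joining them traces the spiral trajectory of this damped system.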
Example 7.3: For the system described by the equation given below, construct the
trajectory starting at the initial point (1, 0) using delta method.
Defining the state variables x1 = x and x2 = dx/dt, δ is obtained from the system equation.
At the initial point, δ is calculated as δ = 0 + 1 − 1 = 0. Therefore, the initial arc is centred
at the point (0, 0). The mean value of the coordinates of the two ends of the arc is used to
calculate the next value of δ, and the procedure is continued. By constructing small
arcs in this way, the complete trajectory is obtained as shown in figure.
Figure 7.4
Limit cycles have a distinct geometric configuration in the phase plane portrait, namely,
that of an isolated closed path in the phase plane. A given system may have more than
one limit cycle. A limit cycle represents a steady state oscillation, to which or from
which all trajectories nearby will converge or diverge. In a nonlinear system, limit cycles
describe the amplitude and period of a self-sustained oscillation. It should be pointed out
that not all closed curves in the phase plane are limit cycles. A phase-plane portrait of a
conservative system, in which there is no damping to dissipate energy, is a continuous
family of closed curves. Closed curves of this kind are not limit cycles because none of
these curves are isolated from one another.
Such trajectories always occur as a
continuous family, so that there are closed curves in any neighbourhood of any particular
closed curve. On the other hand, limit cycles are periodic motions exhibited only by
nonlinear, nonconservative systems.
As an example, let us consider the well-known Van der Pol differential equation

d²x/dt² − μ(1 − x²)(dx/dt) + x = 0

which describes physical situations in many nonlinear systems. In terms of the state variables
x1 = x and x2 = dx/dt, we obtain

dx1/dt = x2
dx2/dt = μ(1 − x1²)x2 − x1

The figure shows the phase trajectories of the system for μ > 0 and μ < 0. For μ > 0, we
observe that for large values of x1(0) the system response is damped and the amplitude of
x1(t) decreases until the system state enters the limit cycle, as shown by the outer trajectory. On the
other hand, if x1(0) is initially small, the damping is negative, and hence the amplitude of x1(t)
increases until the system state enters the limit cycle, as shown by the inner trajectory. When μ < 0,
the trajectories move in the opposite directions, as shown in the figure.
Figure 7.5
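The convergence onto the Van der Pol limit cycle can be checked numerically. In the sketch below, μ = 1 and the integration parameters are assumed values; the state equations are integrated from a small and a large initial amplitude, and both settle onto a cycle of amplitude close to 2:

```python
# Sketch: RK4 integration of the Van der Pol system
#   dx1/dt = x2,  dx2/dt = mu*(1 - x1^2)*x2 - x1   (mu = 1 assumed).
def vdp_amplitude(x1, x2, mu=1.0, dt=0.002, steps=30000):
    """Integrate and return the peak |x1| over the final portion of the run."""
    def f(a, b):
        return b, mu * (1.0 - a * a) * b - a
    tail = []
    for i in range(steps):
        k1 = f(x1, x2)
        k2 = f(x1 + 0.5 * dt * k1[0], x2 + 0.5 * dt * k1[1])
        k3 = f(x1 + 0.5 * dt * k2[0], x2 + 0.5 * dt * k2[1])
        k4 = f(x1 + dt * k3[0], x2 + dt * k3[1])
        x1 += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        x2 += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        if i > steps - 5000:         # keep only the steady-state tail
            tail.append(abs(x1))
    return max(tail)

amp_small = vdp_amplitude(0.1, 0.0)  # small start: amplitude grows onto the cycle
amp_big = vdp_amplitude(4.0, 0.0)    # large start: amplitude decays onto the cycle
print(round(amp_small, 1), round(amp_big, 1))   # both approach roughly 2
```

The two runs illustrate the inner and outer trajectories of Figure 7.5 converging to the same limit cycle.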
A limit cycle is called stable if trajectories near the limit cycle, originating from outside
or inside, converge to that limit cycle. In this case, the system exhibits a sustained
oscillation with constant amplitude. This is shown in figure (i). The inside of the limit cycle is
an unstable region in the sense that trajectories starting there diverge outward to the limit
cycle, and the outside is a stable region in the sense that trajectories starting there converge
inward to the limit cycle.
A limit cycle is called an unstable one if trajectories near it diverge from this limit cycle. In this
case, an unstable region surrounds a stable region. If a trajectory starts within the stable region, it
converges to a singular point within the limit cycle. If a trajectory starts in the unstable region, it
diverges with time to infinity as shown in figure (ii). The inside of an unstable limit cycle is the
stable region, and the outside the unstable region.
7.5 Analysis and Classification of Singular Points:
Singular points are points in the state plane where dx1/dt = dx2/dt = 0. At these points the slope
of the trajectory, dx2/dx1 = 0/0, is indeterminate. These points can also be the equilibrium
points of the nonlinear system, depending on whether the state trajectories can reach them or not.
Consider a linearised second order system represented by

dx/dt = Ax
Using the linear transformation x = Mz, the equation can be transformed to the canonical form

dz/dt = Λz, where Λ = diag(λ1, λ2)

and λ1, λ2 are the eigenvalues of A.
The transformation given simply transforms the coordinate axes from x1-x2 plane to z1-z2 plane
having the same origin, but does not affect the nature of the roots of the characteristic
equation. The phase trajectories obtained by using this transformed state equation still carry the
same information except that the trajectories may be skewed or stretched along the coordinate
axes. In general, the new coordinate axes will not be rectangular.
The solution to the state equation is given by

z1(t) = z1(0)·e^(λ1·t),  z2(t) = z2(0)·e^(λ2·t)

Based on the nature of these eigenvalues and the trajectory in the z1-z2 plane, the singular
points are classified as follows.
Nodal Point:
Consider the case where the eigenvalues are real, distinct and negative, as shown in figure (a).
For this case, eliminating t from the solution, the equation of the phase trajectory follows as

z2 = c·z1^k, where k = (λ2/λ1) > 0

so that the trajectories become a set of parabolas, as shown in figure (b), and
the equilibrium point is called a node. In the
original system of coordinates, these trajectories
appear to be skewed, as shown in figure (c).
If the eigenvalues are both positive, the nature of
the trajectories does not change, except that the
trajectories diverge out from the equilibrium point
as both z1(t) and z2(t) are increasing exponentially.
The phase trajectories in the x1-x2 plane are as
shown in figure (d). This type of singularity is
identified as a node, but it is an unstable node as the
trajectories diverge from the equilibrium point.
Saddle Point:
Consider the case where the eigenvalues are real but of opposite signs. Trajectories approach
the origin along one of the transformed axes and diverge along the other, so the resulting
hyperbolic trajectories move away from the equilibrium point, which is called a saddle point;
a saddle point is always unstable.
Focus Point:
Consider the case of complex conjugate eigenvalues λ1,2 = σ ± jω with σ < 0. Transforming to
polar coordinates and evaluating the slope of the trajectory, we get

r = r0·e^((σ/ω)θ)
This is an equation for a spiral in the polar coordinates. A plot of this equation for negative
values of the real part is a family of equiangular spirals. The origin, which is a singular point
in this case, is called a stable focus. When the eigenvalues are complex conjugate with positive real
parts, the phase portrait consists of expanding spirals as shown in figure and the singular point is
an unstable focus. When transformed into the x1-x2 plane, the phase portrait in the above two
cases is essentially spiralling in nature, except that the spirals are now somewhat twisted in shape.
Consider now the case of complex conjugate eigenvalues with zero real parts, i.e., λ1,2 = ±jω.
In this case the phase trajectories are closed curves around the origin, and the singular point
is called a centre or vortex.
Figure 7.9
Example 7.4:
Determine the kind of singularity for the following differential equation.
At the singular points, dx1/dt = dx2/dt = 0. The characteristic equation is

|λI − A| = 0, i.e., λ² + 3λ + 2 = 0

giving λ1, λ2 = −2, −1. Since the roots are real and negative, the singular point is a stable
node.
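The classification used in this example can be automated from the trace and determinant of the linearised system matrix. The helper below is an assumed sketch; the matrix A = [[0, 1], [−2, −3]] reproduces Example 7.4's characteristic equation λ² + 3λ + 2 = 0:

```python
# Sketch (assumed helper): classify the singular point of a 2x2 linearised
# system dx/dt = A x from the eigenvalues of A.
import cmath

def classify(a11, a12, a21, a22):
    """Classify the singularity via the eigenvalues (from trace/determinant)."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    l1, l2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    if abs(l1.imag) > 1e-12:                     # complex conjugate pair
        if abs(l1.real) < 1e-12:
            return "centre"
        return "stable focus" if l1.real < 0 else "unstable focus"
    r1, r2 = l1.real, l2.real                    # real eigenvalues
    if r1 * r2 < 0:
        return "saddle point"
    return "stable node" if r1 < 0 else "unstable node"

# Example 7.4: characteristic equation s^2 + 3s + 2 = 0, roots -1 and -2.
print(classify(0.0, 1.0, -2.0, -3.0))   # -> stable node
```

The same helper reproduces the focus and saddle classifications of the later examples.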
Example 7.5: Determine the kind of singularity for the given differential equation.
At the singular points, dx1/dt = dx2/dt = 0, which gives the singular points.
Linearization around (0, 0): the characteristic equation is

|λI − A| = 0, i.e., λ(λ − 0.1) + 1 = 0, or λ² − 0.1λ + 1 = 0

The eigenvalues are complex with positive real part. The singular point is an
unstable focus.
Linearization around (−1, 0): here the characteristic equation becomes

λ² − 0.1λ − 1 = 0

giving λ1, λ2 ≈ 1.05 and −0.95. Since the roots are real, one negative and one
positive, the singular point is a saddle point.
Example 7.6:
Determine the kind of singularity for the following differential equation.
At the singular points, dx1/dt = dx2/dt = 0, which gives the singular points. The singularities
are thus at (0, 0) and (−2, 0). Linearizing about the singularities:
Linearization around (0, 0): the characteristic equation is

|λI − A| = 0, i.e., λ(λ + 0.5) + 2 = 0, or λ² + 0.5λ + 2 = 0

The eigenvalues are complex with negative real parts. The singular point is a
stable focus.
Linearization around (−2, 0): here the characteristic equation becomes

λ² + 0.5λ − 2 = 0

giving λ1, λ2 = 1.19 and −1.69. Since the roots are real, one positive and one negative,
the singular point is a saddle point.
a) A system is stable with zero input and arbitrary initial conditions if the resulting
trajectory tends towards the equilibrium state.
b) A system is stable if, with a bounded input, the system output is bounded.
In nonlinear systems, the stability behaviour for small deviations about the equilibrium point
may be different from that for large deviations. Therefore, local stability does not imply
stability in the overall state plane, and the two concepts should be considered separately.
Since a nonlinear system may have multiple equilibrium states, the system trajectories may move
away from one equilibrium point and tend to another as time progresses. Thus it appears that in
the case of nonlinear systems there is little point in talking about overall system stability;
it is more meaningful to talk about the stability of an equilibrium point.
Stability in a small neighbourhood of an equilibrium point is called stability in the small. For
a larger region around the equilibrium point, the stability may be referred to as stability in
the large. In the extreme case, we can talk about the stability of a trajectory starting from
anywhere in the complete state space, this being called global stability. A simple physical
illustration of the different types of stability is shown in Fig. 8.1.
Examples:
A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real
symmetric matrix, to be negative definite is that the determinant of A be positive if n is even
and negative if n is odd, and that the successive principal minors of even order be positive and
the successive principal minors of odd order be negative.
i.e.,
A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real
symmetric matrix, to be positive semi-definite is that A be singular (det A = 0) and all the
principal minors of A be nonnegative.
i.e.,
Page 124
10EE55
A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real
symmetric matrix, to be negative semi-definite is that A be singular and all the principal
minors of even order be nonnegative and those of odd order be nonpositive.
Example 8.1: Using Sylvester's criterion, determine the sign definiteness of the following
quadratic forms.
The successive principal minors are evaluated and their signs examined.
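Sylvester's criterion for positive definiteness can be checked mechanically from the leading principal minors. The following sketch is an assumed illustration (the example matrix [[2, 1], [1, 2]] is not one of the text's quadratic forms):

```python
# Sketch (assumed helper): Sylvester's criterion for positive definiteness of
# a real symmetric matrix, via its leading principal minors.
def minor_det(a, k):
    """Determinant of the leading k x k submatrix, by Gaussian elimination."""
    m = [row[:k] for row in a[:k]]
    det = 1.0
    for i in range(k):
        piv = max(range(i, k), key=lambda r: abs(m[r][i]))
        if abs(m[piv][i]) < 1e-12:
            return 0.0
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            det = -det
        det *= m[i][i]
        for r in range(i + 1, k):
            f = m[r][i] / m[i][i]
            for c in range(i, k):
                m[r][c] -= f * m[i][c]
    return det

def is_positive_definite(a):
    """Sylvester: all leading principal minors strictly positive."""
    return all(minor_det(a, k) > 0 for k in range(1, len(a) + 1))

# Q(x) = 2*x1^2 + 2*x1*x2 + 2*x2^2 has matrix [[2, 1], [1, 2]]:
print(is_positive_definite([[2.0, 1.0], [1.0, 2.0]]))   # minors 2 and 3 -> True
```

For negative definiteness one would instead check the alternating sign pattern stated above.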
Example 8.3:
Consider a positive definite scalar function V(x), for example a quadratic form in x1 and x2.
This is a cup-shaped surface as shown. The constant-V loci are ellipses on the surface of the
cup. Let x(0) be the initial condition. If one plots the trajectory on the surface shown, the
representative point x(t) crosses the constant-V curves and moves towards the lowest point of
the cup, which is the equilibrium point.
2. The derivative dV/dt is negative definite.
3. V(x, t) → ∞ as ‖x‖ → ∞.
Then the equilibrium state at the origin of the system is uniformly asymptotically stable in
the large.
If, however, there exists a positive definite scalar function V(x, t) such that dV/dt is
identically zero along a trajectory, then the system can remain in a limit cycle. The
equilibrium state at the origin, in this case, is said to be stable in the sense of Liapunov.
Theorem 3: Suppose that a system is described by dx/dt = f(x, t), where f(0, t) = 0 for all
t ≥ t0. If there exists a scalar function W(x, t) having continuous first partial derivatives
and satisfying the following conditions:
1. W(x, t) is positive definite in some region about the origin.
2. The derivative dW/dt is positive definite in the same region.
Then the equilibrium state at the origin is unstable.
Example 8.4: Determine the stability of the following system using Liapunov's method.
Choosing a positive definite quadratic V(x) and computing dV/dt along the trajectories, the
resulting dV/dt is a negative semi-definite function. If dV/dt is to vanish identically for
t ≥ t0, then x2 must be zero for all t ≥ t0.
Example 8.6: Determine the stability of following system using Liapunovs method.
Let us first choose a candidate V(x); the resulting dV/dt is an indefinite function, so no
conclusion can be drawn from this choice.
Let us choose another V(x); then dV/dt is negative definite provided a certain condition on x1
and x2 is satisfied. Therefore, for asymptotic stability we require that this condition be
satisfied.
The region of state space where this condition is not satisfied is possibly a region of
instability. Let us concentrate on the region of state space where this condition is satisfied.
The limiting condition for such a region defines the dividing lines, which lie in the first and
third quadrants and are rectangular hyperbolas, as shown in Figure 8.4. In the second and
fourth quadrants, the inequality is satisfied for all values of x1 and x2. Figure 8.4 shows the
region of stability and possible instability. Since a Liapunov function is not unique, it may
be possible to choose another Liapunov function for the system under consideration which yields
a larger region of stability.
Conclusions:
1. Failure in finding a V function to show stability or asymptotic stability or
instability of the equilibrium state under consideration can give no information on
stability.
2. Although a particular V function may prove that the equilibrium state under consideration is
stable or asymptotically stable in a certain region which includes this equilibrium state, it
does not necessarily mean that the motions are unstable outside that region.
3. For a stable or asymptotically stable equilibrium state, a V function with the
required properties always exists.
Note:
1. If dV/dt does not vanish identically along any trajectory, then Q may be chosen to be
positive semi-definite.
2. In determining whether or not there exists a positive definite Hermitian or real symmetric
matrix P, it is convenient to choose Q = I, where I is the identity matrix. The elements of P
are then determined from

AᵀP + PA = −Q

and the matrix P is tested for positive definiteness.
With Q = I, the equation AᵀP + PA = −Q gives

−2P12 = −1
P11 − P12 − P22 = 0
2P12 − 2P22 = −1

Solving the above equations,
P11 = 1.5;  P12 = 0.5;  P22 = 1
Since 1.5 > 0 and det(P) > 0, P is positive definite. Hence, the equilibrium state at the
origin is asymptotically stable in the large. The Liapunov function is V(x) = xᵀPx.
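The three scalar equations above come from writing out AᵀP + PA = −Q entry by entry. The sketch below assumes A = [[0, 1], [−1, −1]], which is consistent with those equations, and recovers P11 = 1.5, P12 = 0.5, P22 = 1:

```python
# Sketch (assumed A = [[0, 1], [-1, -1]]): solve A^T P + P A = -Q for a
# symmetric 2x2 P = [[p11, p12], [p12, p22]], then check positive definiteness.
def lyapunov_2x2(a, q):
    """Solve the three scalar equations from the (1,1), (1,2), (2,2) entries."""
    (a11, a12), (a21, a22) = a
    #   2*a11*p11 + 2*a21*p12                 = -q11
    #   a12*p11 + (a11 + a22)*p12 + a21*p22   = -q12
    #   2*a12*p12 + 2*a22*p22                 = -q22
    rows = [[2 * a11, 2 * a21, 0.0, -q[0][0]],
            [a12, a11 + a22, a21, -q[0][1]],
            [0.0, 2 * a12, 2 * a22, -q[1][1]]]
    for i in range(3):               # Gauss-Jordan elimination on the 3x3 system
        piv = max(range(i, 3), key=lambda r: abs(rows[r][i]))
        rows[i], rows[piv] = rows[piv], rows[i]
        for r in range(3):
            if r != i:
                f = rows[r][i] / rows[i][i]
                rows[r] = [x - f * y for x, y in zip(rows[r], rows[i])]
    return tuple(rows[i][3] / rows[i][i] for i in range(3))

p11, p12, p22 = lyapunov_2x2([[0.0, 1.0], [-1.0, -1.0]],
                             [[1.0, 0.0], [0.0, 1.0]])
print(p11, p12, p22)                       # expect 1.5, 0.5, 1.0
print(p11 > 0 and p11 * p22 - p12**2 > 0)  # Sylvester: P positive definite
```

Positive definiteness of P then confirms asymptotic stability in the large, as in the example.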
Example 8.9: Determine the stability of the equilibrium state of the following system.
P12 = −(1 + j)/8;  P21 = −(1 − j)/8;  P22 = 1/4

Example 8.10: Determine the stability of the equilibrium state of the following system.
With Q = I, the equation AᵀP + PA = −Q gives

−2P11 + 2P12 = −1
−2P11 − 5P12 + P22 = 0
−4P12 − 8P22 = −1

Solving the above equations,
P11 = 23/60;  P12 = −7/60;  P22 = 11/60
Since 23/60 > 0 and det(P) > 0, P is positive definite. Hence, the equilibrium state at the
origin is asymptotically stable in the large.
Example 8.11: Determine the stability range for the gain K of the system given below.
Let us choose Q positive semi-definite as Q = diag(0, 0, 1); the equation AᵀP + PA = −Q then
gives

−k·p13 − k·p13 = 0
−k·p23 + p11 − 2p12 = 0
−k·p33 + p12 − p13 = 0
p12 − 2p22 + p12 − 2p22 = 0
p13 − 2p23 + p22 − p23 = 0
p23 − p33 + p23 − p33 = −1

For P to be positive definite, it is necessary and sufficient that 12 − 2k > 0 and k > 0, i.e.,
0 < k < 6. Thus for 0 < k < 6 the system is stable; that is, the origin is asymptotically
stable in the large.
Krasovskii's theorem: Consider the system dx/dt = f(x), with f(0) = 0. Define F(x) = ∂f/∂x,
the Jacobian matrix of f, and

F̂(x) = F*(x) + F(x)

where F*(x) is the conjugate transpose of F(x). If the Hermitian matrix F̂(x) is negative
definite, then the equilibrium state x = 0 is asymptotically stable. A Liapunov function for
this system is V(x) = f*(x)f(x). If, in addition, V(x) → ∞ as ‖x‖ → ∞, then the equilibrium
state is asymptotically stable in the large.
Proof:
If F̂(x) is negative definite for all x ≠ 0, the determinant of F̂(x) is nonzero everywhere
except at x = 0, so there is no equilibrium state other than x = 0 in the entire state space.
Since f(0) = 0 and f(x) ≠ 0 for x ≠ 0, V(x) = f*(x)f(x) is positive definite. Note that
df/dt = F(x)·dx/dt = F(x)f(x). We can obtain dV/dt as

dV/dt = ḟ*(x)f(x) + f*(x)ḟ(x) = f*(x)[F*(x) + F(x)]f(x) = f*(x)F̂(x)f(x)

If F̂(x) is negative definite, we see that dV/dt is negative definite. Hence V(x) is a
Liapunov function, and therefore the origin is asymptotically stable. If V(x) tends to
infinity as ‖x‖ → ∞, then the equilibrium state is asymptotically stable in the large.
Example 8.12: Using Krasovskii's theorem, examine the stability of the equilibrium state x = 0
of the system given by the state equations.
Forming F̂(x) = F*(x) + F(x) from the Jacobian, this is a negative definite matrix, and hence
the equilibrium state is asymptotically stable. Further, V(x) = f*(x)f(x) → ∞ as ‖x‖ → ∞;
therefore the equilibrium state is asymptotically stable in the large.
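Krasovskii's condition can be spot-checked numerically. The system below is an assumed example in the spirit of Example 8.12 (not the text's exact equations); F̂(x) = F(x) + Fᵀ(x) is tested for negative definiteness over a grid of states:

```python
# Sketch (assumed system): f1 = -x1, f2 = x1 - x2 - x2^3. Krasovskii: if
# Fhat(x) = F(x) + F(x)^T is negative definite everywhere (F = Jacobian of f),
# the origin is asymptotically stable.
def f(x1, x2):
    return -x1, x1 - x2 - x2**3

def jacobian(x1, x2, h=1e-6):
    """Central-difference Jacobian of f at (x1, x2)."""
    ap, bp = f(x1 + h, x2)
    am, bm = f(x1 - h, x2)
    cp, dp = f(x1, x2 + h)
    cm, dm = f(x1, x2 - h)
    return [[(ap - am) / (2 * h), (cp - cm) / (2 * h)],
            [(bp - bm) / (2 * h), (dp - dm) / (2 * h)]]

def fhat_negative_definite(x1, x2):
    """Check F + F^T negative definite via leading principal minors."""
    j = jacobian(x1, x2)
    a11 = 2 * j[0][0]
    a12 = j[0][1] + j[1][0]
    a22 = 2 * j[1][1]
    return a11 < 0 and a11 * a22 - a12 * a12 > 0

# Spot-check negative definiteness over a grid of states.
ok = all(fhat_negative_definite(0.5 * i, 0.5 * j)
         for i in range(-4, 5) for j in range(-4, 5))
print(ok)
```

For this assumed system F̂(x) = [[−2, 1], [1, −2 − 6x2²]], which is negative definite for all x, so the grid check succeeds.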
Example 8.13: Using Krasovskii's theorem, examine the stability of the equilibrium state of
the given system.
Forming F̂(x), this is a negative definite matrix, and hence the equilibrium state is
asymptotically stable.