BE EEE-III/ SEMESTER VI
Prepared By:
A.R.SALINIDEVI Lect/EEE
State variable representation - Conversion of state variable form to transfer function and vice versa - Eigenvalues and Eigenvectors - Solution of state equation - Controllability and observability - Pole placement design - Design of state observer
State Variable Representation
The state variables may be totally independent of each other, leading to the diagonal or normal form, or they could be derived as the derivatives of the output. If there is no direct relationship between the various states, we could use a suitable transformation to obtain the representation in diagonal form.
Phase Variable Representation
It is often convenient to consider the output of the system as one of the state variables and the remaining state variables as derivatives of this state variable. The state variables thus obtained, from one of the system variables and its (n-1) derivatives, are known as n-dimensional phase variables.
In a third-order mechanical system, the output may be the displacement x1, the velocity x2 = x1' = v, and the acceleration x3 = x2' = a, where v and a denote velocity and acceleration respectively.
Consider a SISO system described by the nth-order differential equation

a_n d^n y/dt^n + a_{n-1} d^{n-1}y/dt^{n-1} + ... + a_0 y(t) = b_m d^m u/dt^m + b_{m-1} d^{m-1}u/dt^{m-1} + ... + b_0 u(t)    (1)

Using this equation, the state equations in phase variable form can be obtained. Taking H(s) = 1, the block diagram can be redrawn as in the figure; physical variables, such as the armature current ia, can also be chosen as state variables.

The system is linear, so superposition holds: if u1(t) produces y1(t) and u2(t) produces y2(t), then α1 u1(t) + α2 u2(t) produces α1 y1(t) + α2 y2(t).    (2)

Taking Laplace transforms with zero initial conditions gives the transfer function

y(s)/u(s) = (b_m s^m + b_{m-1} s^{m-1} + ... + b_0)/(a_n s^n + a_{n-1} s^{n-1} + ... + a_0)    (3)
For a time-invariant system, applying the same input shifted by an amount of time q produces the same output shifted by the same amount q of time. The representation of this fact is given by the following transfer function:

y(s)/u(s) = e^{-qs} (b_m s^m + b_{m-1} s^{m-1} + ... + b_0)/(a_n s^n + a_{n-1} s^{n-1} + ... + a_0)    (4)
Any nth-order differential equation of the form (1) can be reduced to a system of first-order differential equations. This technique is quite general. First, Eq.(1) is written as

y^(n) = f(t, u(t), y, y', ..., y^(n-1))    (5)

Defining the state vector X in R^n with x1 = y, x2 = y', x3 = y'', ..., xn = y^(n-1), and initial conditions y(0), y'(0), ..., y^(n-1)(0), Eq.(5) becomes

dX/dt = [x2, x3, ..., xn, f(t, u(t), y, y', ..., y^(n-1))]^T    (6)
For the linear system of Eq.(1), this takes the companion (phase variable) form

dX/dt = [ 0    1    0   ...  0
          0    0    1   ...  0
          ...
          0    0    0   ...  1
         -a0  -a1  ...  -a_{n-1} ] X + [0; 0; ...; 0; 1] u(t);    y(t) = [1  0  0 ... 0] X    (7)

and, when the numerator coefficients b0, ..., bm are present, the output equation becomes

y(t) = [b0  b1 ... bm  0 ... 0] X    (8)

with the same A and B matrices.
In compact matrix form,

X' = AX + Bu;    y = CX + Du,   D = 0    (9)

Taking Laplace transforms,

X(s) = (sI - A)^{-1} B u(s)    (10)

so that

y(s) = CX(s) + Du(s) = [C (sI - A)^{-1} B + D] u(s) = [C n(s)/d(s) B + D] u(s) = G(s) u(s)    (11)
An algorithm to compute the transfer function from the state space matrices is given by the Leverrier-Fadeeva-Frame formula:

(sI - A)^{-1} = N(s)/d(s),
N(s) = s^{n-1} N0 + s^{n-2} N1 + ... + N_{n-1},
d(s) = s^n + d1 s^{n-1} + ... + d_{n-1} s + dn    (12)

where

N0 = I
d1 = -trace(A N0);              N1 = A N0 + d1 I
d2 = -(1/2) trace(A N1);        N2 = A N1 + d2 I
...
d_{n-1} = -(1/(n-1)) trace(A N_{n-2});   N_{n-1} = A N_{n-2} + d_{n-1} I
dn = -(1/n) trace(A N_{n-1})

and A N_{n-1} + dn I = 0 (the Cayley-Hamilton theorem). Then

G(s) = C N(s) B / d(s) + D    (13)

or G(s) = [C N(s) B + D d(s)] / d(s)    (14)
EigenValues
Consider an equation AX = Y, which indicates the transformation of an 'n x 1' vector X into an 'n x 1' vector Y by the 'n x n' matrix operator A.
The set of homogeneous equations (1) has a nontrivial solution only under the condition (2).
The 'n' roots of equation (3), i.e. the values of λ satisfying it, are called the eigenvalues of the matrix A.
Equation (2) is similar to |sI - A| = 0, which is the characteristic equation of the system. Hence the values of λ satisfying the characteristic equation are the closed loop poles of the system. Thus the eigenvalues are the closed loop poles of the system.
Eigen Vectors
Any nonzero vector X i such that AX i
with eigenvalue
.Thus let
Then solution of this equation is called eigen vector of A associated with eigen
value
I - A) of kth row.
Key Point: If the cofactor along a particular row gives null solution i.e. all elements of
corresponding eigen vectors are zero then cofactors along any other row must he
obtained. Otherwise inverse of modal matrix M cannot exist.
Example 1
Obtain the eigenvalues and eigenvectors for the given matrix.
Solution
The eigenvalues are the roots of |λI - A| = 0, and the eigenvectors are obtained from the cofactors C of (λI - A) along a row.
For λ1 = -2: ...
For λ2 = -3: ...
Example 2
For a system with the given state model matrices, obtain the transfer function.
Solution
The T.F. is given by T(s) = C (sI - A)^{-1} B + D.
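Results like those of Example 1 can be cross-checked numerically. Since the example's matrix was not reproduced here, the matrix below is a hypothetical one whose eigenvalues are also -2 and -3; this is a sketch, assuming numpy.

```python
import numpy as np

# Hypothetical matrix with eigenvalues -2 and -3 (char. poly s^2 + 5s + 6)
A = np.array([[0., 1.],
              [-6., -5.]])
vals, vecs = np.linalg.eig(A)      # columns of vecs are the eigenvectors
for lam, v in zip(vals, vecs.T):
    # each pair must satisfy A v = lambda v
    assert np.allclose(A @ v, lam * v)
```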
X'(t) = A X(t) + B U(t)

The matrices A and B are constant matrices. This state equation can be of two types:
1. Homogeneous and
2. Nonhomogeneous

Homogeneous Equation
If A is a constant matrix and the input control forces are zero, then the equation takes the form

X'(t) = A X(t)

Nonhomogeneous Equation
If the input u is nonzero, the equation takes the form

X' = AX + Bu;   y = CX + Du;   X(0) = X0    (1)

With the diagonalizing (modal) transformation X̄ = Q^{-1} X, Eq.(1) becomes

X̄' = Q^{-1}AQ X̄ + Q^{-1}B u = Λ X̄ + B̄ u    (2)

y = CQ X̄ + Du = C̄ X̄ + Du    (3)

where Λ = Q^{-1}AQ is diagonal, B̄ = Q^{-1}B and C̄ = CQ.    (4)
Notice that the diagonalizing transformation does not alter the transfer function between u(s) and y(s).

Looking at Eq.(3), if b̄k = 0 the input cannot influence the kth mode e^{λk t}, and that state evolves only from its initial condition:

x̄k(t) = e^{λk t} x̄k(0)

The lack of controllability of the state x̄k(t) is reflected by a zero kth row of B̄, i.e. b̄k = 0, which would cause a complete zero row in the following matrix (known as the controllability matrix):

C(A,b) = [B  AB  A^2 B  A^3 B ... A^{n-1} B]

In the modal coordinates this matrix takes the form

C(Λ, b̄) = [ b̄1  λ1 b̄1  λ1^2 b̄1 ... λ1^{n-1} b̄1
            b̄2  λ2 b̄2  λ2^2 b̄2 ... λ2^{n-1} b̄2
            ...
            b̄n  λn b̄n  λn^2 b̄n ... λn^{n-1} b̄n ]    (5)

Since B̄ = Q^{-1} B, the rank of

C(A,b) = [B  AB  A^2 B ... A^{n-1} B]    (6)

equals the rank of C(Λ, b̄).
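The rank test on the controllability matrix can be sketched as follows, assuming numpy; the (A, b) pair is illustrative.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, A^2 B, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0., 1.], [-2., -3.]])   # illustrative pair (A, b)
b = np.array([[0.], [1.]])
rank = np.linalg.matrix_rank(ctrb(A, b))
controllable = (rank == A.shape[0])    # full rank n -> state controllable
```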
It is important to note that this result holds in the case of non-distinct eigenvalues as well.

[Remark 1]
If matrix A has distinct eigenvalues and is represented in controller canonical form, it is easy to show that the following identity holds:

V = [ 1          1          ...  1
      λ1         λ2         ...  λn
      λ1^2       λ2^2       ...  λn^2
      ...
      λ1^{n-1}   λ2^{n-1}   ...  λn^{n-1} ]    (6)

i.e. the columns vi = [1, λi, λi^2, ..., λi^{n-1}]^T of the Vandermonde matrix V are the eigenvectors of A, one for each i, and with W^T = V^{-1},

W^T A = Λ W^T,  or  A = V Λ W^T
[Remark 2]
There is an alternative way to explain why C(A,b) should have rank n for state controllability. Start from the solution of the state space system:

X(t) = e^{At} X(0) + ∫_{t0}^{t} e^{A(t-τ)} B u(τ) dτ    (7)

State controllability requires that for each desired X(tf), there is a finite input sequence u(t), t ∈ [t0, tf], such that

X(tf) = e^{A tf} X0 + ∫_{t0}^{tf} e^{A(tf-τ)} B u(τ) dτ

or

∫_{t0}^{tf} e^{A(tf-τ)} B u(τ) dτ = X(tf) - e^{A tf} X0

By the Cayley-Hamilton theorem, the matrix exponential can be expanded as e^{Aτ} = Σ_{i=0}^{n-1} αi(τ) A^i, so the left-hand side becomes

Σ_{i=0}^{n-1} A^i B ∫_{t0}^{tf} αi(tf - τ) u(τ) dτ = [B  AB  A^2 B ... A^{n-1} B] [w1; w2; ...; wn]

where wi = ∫ αi(tf - τ) u(τ) dτ. Thus, in order that W has a nontrivial solution for every right-hand side, we need that the C(A,b) matrix has exact rank n.
There are several alternative ways of establishing state controllability:
The n rows of e^{At} B are linearly independent over the real field for all t.
The controllability grammian

Gram(t0, tf) = ∫_{t0}^{tf} e^{Aτ} B B^T e^{A^T τ} dτ

is nonsingular for all tf > t0.
The system is non-controllable if there exists a row vector q ≠ 0 such that

qA = λq  and  qb = 0    (8)

for then

qb = 0,  qAb = λ qb = 0,  qA^2 b = λ^2 qb = 0,  ...,  qA^{n-1} b = λ^{n-1} qb = 0

i.e. q [b, Ab, A^2 b, ..., A^{n-1} b] = 0. Since q ≠ 0, the controllability matrix cannot have full rank, and the system is non-controllable.

A non-controllable system can be brought to the staircase form

A = [ AC   ACC̄
      0    AC̄ ],   b = [ bC
                          0 ]    (9)

where r is the dimension of the controllable subsystem and (n - r) that of the uncontrollable subsystem. Thus one can find a row vector of the form q = [0  z], where z is a left eigenvector of AC̄ (i.e. z AC̄ = λz), for then qA = λq and qb = 0.    (10)
Using the modal decomposition,

e^{At} = V e^{Λt} V^{-1} = V e^{Λt} W^T = Σ_{i=1}^{n} vi wi^T e^{λi t}

and X(t) = e^{At} X0 + ∫_0^t e^{A(t-τ)} b u(τ) dτ, we have

X(t) = Σ_{i=1}^{n} vi wi^T e^{λi t} X0 + ∫_0^t Σ_{i=1}^{n} vi (wi^T b) e^{λi (t-τ)} u(τ) dτ

If wi^T b = 0 for some i, the input never excites the ith mode, and hence the system is not completely controllable. Another form of test for the controllability of the [A,b] pair, known as the Popov-Belevitch-Hautus (abbrv. PBH) test, is to check that rank [sI - A   b] = n for all s. This follows from the fact that if rank [λi I - A   b] < n at some eigenvalue λi, there is a row vector q ≠ 0 with q(λi I - A) = 0 and qb = 0, which is exactly the non-controllability condition (8).
Referring to the systems described by Eqs.(26) and (27), the state x̄i(t) corresponding to the mode e^{λi t} is unobservable at the output y1 if C̄1i = 0 for any i = 1, 2, ..., n. The lack of observability shows up as a zero column in the modal observability matrix

O(Λ, C̄1) = [ C̄11            C̄12            ...  C̄1n
             λ1 C̄11         λ2 C̄12         ...  λn C̄1n
             ...
             λ1^{n-1} C̄11   λ2^{n-1} C̄12   ...  λn^{n-1} C̄1n ]    (11)

With distinct eigenvalues, each nonzero column increases the rank by one. Therefore, the rank of O(A, C), corresponding to the total number of modes that are observable at the output y(t), is termed the observability rank of the system. As in the case of controllability, it is not necessary to transform a given state-space system to modal canonical form in order to determine its rank. In general, the observability matrix of the system is defined as

O(A, C) = [ C
            CA
            ...
            CA^{n-1} ],    O(Λ, C̄) = O(A, C) V

with V (equivalently Q = V^{-1}) nonsingular. Therefore, the rank of O(Λ, C̄) equals the rank of O(A, C). It is important to note that this result holds in the case of non-distinct eigenvalues. Thus, a state-space system is said to be completely (state) observable if its observability matrix has full rank n. Otherwise the system is said to be unobservable.
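The observability rank test mirrors the controllability one; a minimal sketch assuming numpy, with an illustrative (A, C) pair:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

A = np.array([[0., 1.], [-2., -3.]])   # illustrative pair (A, C)
C = np.array([[1., 0.]])
rank = np.linalg.matrix_rank(obsv(A, C))
observable = (rank == A.shape[0])      # full rank n -> completely observable
```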
In particular, it is well known that a state-space system is observable if and only if the following conditions are satisfied:
The n columns of C e^{At} are linearly independent over R for all t.
The observability grammian

Gram_o(t0, tf) = ∫_{t0}^{tf} e^{A^T τ} C^T C e^{Aτ} dτ

is nonsingular for all tf > t0.
rank [ sI - A
       C ] = n for all s, which can fail only at the eigenvalues of A (the PBH observability test).
The system defined by the above equation represents an open loop system; the state x is not fed back to the control signal u. Let us select the control signal to be u = -Kx. This indicates that the control signal is obtained from the instantaneous state; this is called state feedback. Here K is a matrix of order 1 x n called the state feedback gain matrix. Let us consider the control signal to be unconstrained. Substituting the value of u in equation 1 gives the system shown in Fig. 5.2. It is a closed loop control system, as the system state x is fed back to the control signal u; thus this is a system with state feedback.

The stability and the transient response characteristics are determined by the eigenvalues of the matrix A - BK. Depending on the selection of the state feedback gain matrix K, the matrix A - BK can be made asymptotically stable, and it is possible to make x(t) approach zero as time t approaches infinity, provided x(0) ≠ 0. The eigenvalues of matrix A - BK are called regulator poles. When these regulator poles are placed in the left half of the s plane, x(t) approaches zero as t approaches infinity. The problem of placing the closed loop poles at the desired locations is called the pole placement problem.
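The eigenvalue assignment described above can be sketched with scipy's pole placement routine; the plant matrices and pole locations below are illustrative, not from the notes.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0., 1.], [-2., -3.]])   # illustrative open-loop plant
B = np.array([[0.], [1.]])
desired = [-4., -5.]                   # chosen regulator pole locations

K = place_poles(A, B, desired).gain_matrix
# The closed-loop matrix A - BK now has the requested eigenvalues
closed_loop_poles = np.linalg.eigvals(A - B @ K)
```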
Design of State Observer
In the case of a state observer, the state variables are estimated based on the measurements of the output and control variables. The concept of observability plays an important part here.
Consider a system defined by the following state equations.
Here x̂ is the estimated state and C x̂ is the estimated output. The observer has as inputs the output y and the control input u. The matrix Ke is called the observer gain matrix. It is nothing but a weighting matrix for the correction term, which contains the difference between the measured output y and the estimated output C x̂. This additional term continuously corrects the model output, and the performance of the observer is improved.
Full order state observer
The system equations are already defined as above. The block diagram of the system and full order state observer is shown in the Fig.
The dynamic behavior of the error vector is obtained from the eigenvalues of the matrix A - KeC. If A - KeC is a stable matrix, then the error vector will converge to zero for any initial error vector e(0). Hence x̂(t) will converge to x(t) irrespective of the values of x(0) and x̂(0).
If the eigenvalues of the matrix A - KeC are selected in such a manner that the dynamic behavior of the error vector is asymptotically stable and is sufficiently fast, then any error vector will tend to zero with sufficient speed.
If the system is completely observable, then it can be shown that it is possible to select the matrix Ke such that A - KeC has arbitrarily desired eigenvalues, i.e. the observer gain matrix Ke can be obtained to get the desired matrix A - KeC.
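By duality, choosing Ke so that A - KeC has desired eigenvalues is pole placement for the pair (A^T, C^T). A minimal sketch assuming scipy, with illustrative matrices and observer poles:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0., 1.], [-2., -3.]])   # illustrative observable plant
C = np.array([[1., 0.]])
observer_poles = [-8., -9.]            # chosen faster than the plant poles

# Duality: place the poles of (A^T, C^T), then transpose the gain
Ke = place_poles(A.T, C.T, observer_poles).gain_matrix.T
error_poles = np.linalg.eigvals(A - Ke @ C)
```

The observer poles are usually chosen a few times faster than the regulator poles so the estimation error decays before it matters.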
UNIT I
STATE SPACE ANALYSIS OF CONTINUOUS TIME SYSTEMS
PART A
1. What are the advantages of state space analysis?
2. What are the drawbacks in transfer function model analysis?
3. What is state and state variable?
4. What is a state vector?
5. Write the state model of an nth-order system.
6. What is state space?
7. What are phase variables?
8. Write the solution of the homogeneous state equation.
9. Write the solution of the nonhomogeneous state equation.
10. What is the resolvent matrix?
PART B
1. Explain Kalman's test for determining state controllability.
2. Explain Gilbert's test for determining state controllability.
3. Find the output of the system having the given state model, where the input U(t) is a unit step and X(0) = [10  0]^T.
5. Obtain the homogeneous solution of the equation X'(t) = A X(t).
UNIT II
Z-TRANSFORM AND SAMPLED DATA SYSTEMS
Sampled data theory - Sampling process - Sampling theorem - Signal reconstruction - Sample and hold circuits - z-Transform - Theorems on z-Transforms - Inverse z-Transforms - Discrete systems and solution of difference equations using z-transform - Pulse transfer function - Response of sampled data systems to step and ramp inputs - Stability studies - Jury's test and bilinear transformation
Sampled Data System
When the signal or information at any or some points in a system is in the form of discrete pulses, the system is called a discrete data system. In control engineering, the discrete data system is popularly known as a sampled data system.
Sampling process
Sampling is the conversion of a continuous time signal into a discrete time signal obtained by taking samples of the continuous time signal at discrete time instants.
Let 1/T = Fs be the sampling rate. This type of sampling is called periodic sampling, since samples are obtained uniformly at intervals of T seconds.
Multiple order sampling - a particular sampling pattern is repeated periodically.
Multiple rate sampling - two simultaneous sampling operations with different time periods are carried out on the signal to produce the sampled output.
Random sampling - the sampling instants are random.
Sampling Theorem
A band limited continuous time signal with highest frequency fm hertz can be uniquely recovered from its samples provided that the sampling rate Fs is greater than or equal to 2fm samples per second.
Signal Reconstruction
The signal given to the digital controller is a sampled data signal, and in turn the controller gives the controller output in digital form. But the system to be controlled needs an analog control signal as input. Therefore the digital output of the controller must be converted into analog form.
This can be achieved by means of various types of hold circuits. The simplest hold circuit is the zero order hold (ZOH). In the ZOH, the reconstructed analog signal holds the value of the last received sample for the entire sampling period.
The high frequency noises present in the reconstructed signal are automatically filtered out by the control system components, which behave like low pass filters. In a first order hold, the last two samples are used to reconstruct the signal for the current sampling period. Similarly, higher order hold circuits can be devised. First or higher order hold circuits offer no particular advantage over the zero order hold.
Z-Transform
Definition of Z-Transform
Let f(k) = discrete time signal and F(z) = Z{f(k)} = z-transform of f(k).
The z-transform of a discrete time signal or sequence is defined as the power series

F(z) = Σ_{k=-∞}^{∞} f(k) z^{-k}    --------------1

and, for a one-sided (causal) sequence,

F(z) = Σ_{k=0}^{∞} f(k) z^{-k}    ---------------2
Problem
1. Determine the z-transform and ROC of the discrete sequence f(k) = {3, 2, 5, 7}.
Given f(k) = {3, 2, 5, 7}, where f(0) = 3, f(1) = 2, f(2) = 5, f(3) = 7, and f(k) = 0 for k < 0 and k > 3.
By definition, F(z) = Z{f(k)} = Σ f(k) z^{-k}.
The given sequence is a finite duration sequence, hence the limits of summation can be changed to k = 0 to k = 3:

F(z) = Σ_{k=0}^{3} f(k) z^{-k} = 3 + 2z^{-1} + 5z^{-2} + 7z^{-3}

The ROC is the entire z-plane except z = 0.
2. Determine the z-transform of the unit step u(k), where u(k) = 1 for k ≥ 0 and u(k) = 0 for k < 0.
By definition,

F(z) = Z{f(k)} = Σ_{k=0}^{∞} u(k) z^{-k} = Σ_{k=0}^{∞} (z^{-1})^k

F(z) = 1/(1 - z^{-1}) = z/(z - 1),  ROC |z| > 1
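The two transforms above can be checked numerically by evaluating truncated versions of the defining series at a sample point; this is an illustrative sketch, assuming numpy.

```python
import numpy as np

def z_transform(f, z):
    """One-sided z-transform of a finite causal sequence f(0), f(1), ..."""
    return sum(fk * z ** (-k) for k, fk in enumerate(f))

z0 = 2.0
# Finite sequence {3, 2, 5, 7} from Problem 1
F1 = z_transform([3, 2, 5, 7], z0)       # 3 + 2/z + 5/z^2 + 7/z^3

# Truncated unit step: the partial sums approach z/(z - 1) for |z| > 1
F2 = z_transform([1.0] * 200, z0)
```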
3. Find the one-sided z-transform of the discrete sequence generated by mathematically sampling the continuous time function f(t) = e^{-at} cos ωt.
Given f(t) = e^{-at} cos ωt, the sampled sequence is f(k) = e^{-akT} cos ωkT.
By definition,

F(z) = Z{f(k)} = Σ_{k=0}^{∞} e^{-akT} cos(ωkT) z^{-k}

We know that cos ωkT = (e^{jωkT} + e^{-jωkT})/2, so

F(z) = (1/2) Σ_{k=0}^{∞} (e^{-aT} e^{jωT} z^{-1})^k + (1/2) Σ_{k=0}^{∞} (e^{-aT} e^{-jωT} z^{-1})^k
     = (1/2) [1/(1 - e^{-aT} e^{jωT} z^{-1}) + 1/(1 - e^{-aT} e^{-jωT} z^{-1})]
     = (1/2) [z e^{aT}/(z e^{aT} - e^{jωT}) + z e^{aT}/(z e^{aT} - e^{-jωT})]
     = (1/2) [z e^{aT}(z e^{aT} - e^{-jωT}) + z e^{aT}(z e^{aT} - e^{jωT})] / [(z e^{aT} - e^{jωT})(z e^{aT} - e^{-jωT})]

Using e^{jωT} + e^{-jωT} = 2 cos ωT,

F(z) = z e^{aT} (z e^{aT} - cos ωT) / (z^2 e^{2aT} - 2 z e^{aT} cos ωT + 1)
Inverse z-transform
The inverse z-transform can be obtained by two methods:
Partial fraction expansion (PFE)
Power series expansion

Partial fraction expansion
Let f(k) = discrete sequence and F(z) = Z{f(k)} = z-transform of f(k), with

F(z) = (b0 z^m + b1 z^{m-1} + b2 z^{m-2} + .......... + bm) / (z^n + a1 z^{n-1} + a2 z^{n-2} + .......... + an),  where m ≤ n

Then F(z) can be expanded as

F(z) = A0 + Σ_{i=1}^{n} Ai z/(z - pi)    --------------3

where A0 is a constant, A1, A2, ........, An are residues and p1, p2, ........, pn are the poles.

Power series expansion
Since F(z) = Σ f(k) z^{-k}, on expanding,

F(z) = (....... + f(-3)z^3 + f(-2)z^2 + f(-1)z + f(0) + f(1)z^{-1} + f(2)z^{-2} + .......)
Problem
1. Determine the inverse z-transform of the following functions.

(i) F(z) = 1/(1 - 1.5z^{-1} + 0.5z^{-2})

Given F(z) = 1/(1 - 1.5z^{-1} + 0.5z^{-2}). Multiplying numerator and denominator by z^2,

F(z) = z^2/(z^2 - 1.5z + 0.5) = z^2/((z - 1)(z - 0.5))

F(z)/z = z/((z - 1)(z - 0.5)) = A1/(z - 1) + A2/(z - 0.5)

Put z = 1:  A1 = [z/(z - 0.5)] at z = 1 = 1/(1 - 0.5) = 2
Put z = 0.5:  A2 = [z/(z - 1)] at z = 0.5 = 0.5/(0.5 - 1) = -1

F(z) = 2z/(z - 1) - z/(z - 0.5)

We know that Z{a^k} = z/(z - a) and Z{u(k)} = z/(z - 1). Hence

f(k) = 2 u(k) - (0.5)^k,  k ≥ 0

(ii) F(z) = z^2/(z^2 - z + 0.5) = z^2/((z - 0.5 - j0.5)(z - 0.5 + j0.5))
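The PFE result for (i) can be checked against the long-division (power series) coefficients, which obey the recurrence c(k) = 1.5 c(k-1) - 0.5 c(k-2) implied by the denominator; a minimal sketch assuming numpy:

```python
import numpy as np

# Series coefficients of F(z) = 1/(1 - 1.5 z^-1 + 0.5 z^-2)
c = [1.0, 1.5]                          # c(0), c(1)
for k in range(2, 10):
    c.append(1.5 * c[-1] - 0.5 * c[-2])

# Closed form from the partial fraction expansion: f(k) = 2 u(k) - (0.5)^k
closed = [2.0 - 0.5 ** k for k in range(10)]
match = np.allclose(c, closed)
```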
Given F(z) = z^2/((z - 0.5 - j0.5)(z - 0.5 + j0.5)),

F(z)/z = z/((z - 0.5 - j0.5)(z - 0.5 + j0.5)) = A/(z - 0.5 - j0.5) + A*/(z - 0.5 + j0.5)

Put z = 0.5 + j0.5:

A = [z/(z - 0.5 + j0.5)] at z = 0.5 + j0.5 = (0.5 + j0.5)/(0.5 + j0.5 - 0.5 + j0.5) = (0.5 + j0.5)/(j1) = 0.5 - j0.5

Put z = 0.5 - j0.5:

A* = [z/(z - 0.5 - j0.5)] at z = 0.5 - j0.5 = (0.5 - j0.5)/(-j1) = 0.5 + j0.5

F(z) = (0.5 - j0.5) z/(z - 0.5 - j0.5) + (0.5 + j0.5) z/(z - 0.5 + j0.5)

We know that Z{a^k} = z/(z - a), so

f(k) = (0.5 - j0.5)(0.5 + j0.5)^k + (0.5 + j0.5)(0.5 - j0.5)^k,  k ≥ 0

Writing 0.5 ± j0.5 = (1/√2) e^{±jπ/4}, this reduces to the real form

f(k) = 2 (1/√2)^{k+1} cos((k - 1)π/4),  k ≥ 0
(iii) F(z) = (3z^2 + 2z + 1)/(z^2 - 3z + 2)

Given F(z) = (3z^2 + 2z + 1)/(z^2 - 3z + 2). The numerator degree equals the denominator degree, so first divide:

3(z^2 - 3z + 2) = 3z^2 - 9z + 6, leaving the remainder 11z - 5, hence

F(z) = 3 + (11z - 5)/(z^2 - 3z + 2) = 3 + (11z - 5)/((z - 1)(z - 2))

By PFE, (11z - 5)/((z - 1)(z - 2)) = A1/(z - 1) + A2/(z - 2), with

A1 = [(11z - 5)/(z - 2)] at z = 1 = (11 - 5)/(1 - 2) = -6
A2 = [(11z - 5)/(z - 1)] at z = 2 = (11(2) - 5)/(2 - 1) = 17

F(z) = 3 - 6/(z - 1) + 17/(z - 2) = 3 - 6 z^{-1} · z/(z - 1) + 17 z^{-1} · z/(z - 2)

Hence

f(k) = 3 δ(k) - 6 u(k - 1) + 17 (2)^{k-1} u(k - 1)
2. Determine the inverse z-transform of

F(z) = 1/(1 - (3/2)z^{-1} + (1/2)z^{-2})

for the regions of convergence (i) |z| > 1.0 and (ii) |z| < 0.5.

(i) ROC |z| > 1.0: expand F(z) in powers of z^{-1} by long division:

F(z) = 1 + (3/2)z^{-1} + (7/4)z^{-2} + (15/8)z^{-3} + ..........    -------------(i)

Comparing with F(z) = Σ_{k=0}^{∞} f(k) z^{-k} = f(0) + f(1)z^{-1} + f(2)z^{-2} + .......... .......,

f(0) = 1, f(1) = 3/2, f(2) = 7/4, f(3) = 15/8

f(k) = {1, 3/2, 7/4, 15/8, ................} for k ≥ 0
(ii) ROC |z| < 0.5: expand F(z) in ascending (positive) powers of z. Writing

F(z) = z^2/(z^2 - (3/2)z + 1/2) = 2z^2/(1 - 3z + 2z^2)

and dividing,

F(z) = 2z^2 + 6z^3 + 14z^4 + 30z^5 + ..........    --------------(i)

Comparing with F(z) = Σ f(k) z^{-k} = ........... + f(-5)z^5 + f(-4)z^4 + f(-3)z^3 + f(-2)z^2 + f(-1)z^1 + f(0),

f(-5) = 30, f(-4) = 14, f(-3) = 6, f(-2) = 2, f(-1) = 0, f(0) = 0

f(k) = {...........30, 14, 6, 2, 0, 0}
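The anticausal coefficients above can be reproduced from the denominator recurrence h(k) = 3 h(k-1) - 2 h(k-2) of 1/(1 - 3z + 2z^2); this is an illustrative sketch:

```python
# Series of 1/(1 - 3z + 2z^2) in ascending powers of z: 1, 3, 7, 15, 31, ...
h = [1.0, 3.0]
for k in range(2, 6):
    h.append(3.0 * h[-1] - 2.0 * h[-2])

# Multiplying by 2z^2 shifts and scales: coefficients of z^2, z^3, z^4, ...
series = [2.0 * hk for hk in h]
# f(-2)=2, f(-3)=6, f(-4)=14, f(-5)=30, matching the sequence above
```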
Difference equation
Discrete time systems are described by difference equations of the given form. If the system is causal, a linear difference equation provides an explicit relationship between the input and output: rewriting it, the nth value of the output can be computed from the nth input value and the N and M past values of the output and input, respectively.

Role of the z-transform in linear difference equations
Equation (1) gives us the form of the linear difference equation that describes the system. Taking the z-transform on either side and assuming zero initial conditions, we obtain the characteristic equation

an z^n + a_{n-1} z^{n-1} + .............. + a1 z + a0 = 0,  where an > 0

The necessary conditions of Jury's test are F(1) > 0 and (-1)^n F(-1) > 0. If these necessary conditions are not met, then the system is unstable, and we need not check the sufficient condition.
The Jury tabulation is formed from the coefficients:

Row 1:  a0   a1   a2   ...  an
Row 2:  an   an-1 ...  a0
Row 3:  b0   b1   ...  bn-1
Row 4:  bn-1 ...  b0
Row 5:  c0   c1   ...  cn-2
..................
Last row:  r0   r1   r2

If the characteristic polynomial satisfies the (n-1) conditions, then the system is stable.
Jury's test
Bilinear transformation
The bilinear transformation maps the interior of the unit circle in the z-plane into the left half of the r-plane:

r = (z - 1)/(z + 1)  or  z = (1 + r)/(1 - r)

Substituting z = (1 + r)/(1 - r) in the characteristic equation

an z^n + a_{n-1} z^{n-1} + a_{n-2} z^{n-2} + .......... + a1 z + a0 = 0,  an > 0 ..........(i)

gives

an ((1+r)/(1-r))^n + a_{n-1} ((1+r)/(1-r))^{n-1} + a_{n-2} ((1+r)/(1-r))^{n-2} + ............ + a1 ((1+r)/(1-r)) + a0 = 0 ............(ii)

which, on clearing fractions, becomes a polynomial in r:

bn r^n + b_{n-1} r^{n-1} + b_{n-2} r^{n-2} + ............ + b1 r + b0 = 0

The stability of this polynomial can then be examined by the Routh-Hurwitz criterion.
Problem
1. Check the stability of the sampled data control system represented by the characteristic equation:

(i) 5z^2 + 2z + 2 = 0

Given F(z) = 5z^2 + 2z + 2, i.e. F(z) = a2 z^2 + a1 z + a0 with a0 = 2, a1 = 2, a2 = 5.

F(1) = 5(1)^2 + 2(1) + 2 = 9 > 0

(-1)^n F(-1) = (-1)^2 [5(-1)^2 + 2(-1) + 2] = 1(5 - 2 + 2) = 5 > 0

Here n = 2. Since F(1) > 0 and (-1)^n F(-1) > 0, the necessary conditions are satisfied. The table consists of (2n - 3) = (2*2 - 3) = 1 row:

Row 1 (z^0  z^1  z^2):  a0  a1  a2  =  2  2  5

The remaining condition |a0| < a2, i.e. |2| < 5, is satisfied.

The necessary and sufficient conditions for stability are satisfied. Hence the system is stable.
(ii) F(z) = z^3 - 0.2z^2 - 0.25z + 0.05

i.e. F(z) = a3 z^3 + a2 z^2 + a1 z + a0 with a3 = 1, a2 = -0.2, a1 = -0.25, a0 = 0.05.

Method 1 (Jury's test)

F(1) = 1^3 - 0.2(1)^2 - 0.25(1) + 0.05 = 0.6

(-1)^n F(-1) = (-1)^3 [(-1)^3 - 0.2(-1)^2 - 0.25(-1) + 0.05] = (-1)(-0.9) = 0.9

Here n = 3. Since F(1) > 0 and (-1)^n F(-1) > 0, the necessary condition for stability is satisfied.

Check for the sufficient condition. The table consists of (2n - 3) rows; with n = 3, (2*3 - 3) = 3, so the table consists of three rows:

Row 1 (z^0 z^1 z^2 z^3):  a0  a1  a2  a3  =  0.05  -0.25  -0.2  1
Row 2:                    a3  a2  a1  a0  =  1  -0.2  -0.25  0.05
Row 3:                    b0  b1  b2      =  -0.9975  0.1875  0.24

where

b0 = | a0  a3 ; a3  a0 | = a0^2 - a3^2 = (0.05)^2 - 1 = -0.9975
b1 = | a0  a2 ; a3  a1 | = a0 a1 - a3 a2 = (0.05)(-0.25) - (1)(-0.2) = 0.1875
b2 = | a0  a1 ; a3  a2 | = a0 a2 - a3 a1 = (0.05)(-0.2) - (1)(-0.25) = 0.24

The remaining conditions |a0| < a3 (0.05 < 1) and |b0| > |b2| (0.9975 > 0.24) are satisfied.
The necessary and sufficient conditions for stability are satisfied. Hence the system is stable.
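The conclusion of Method 1 can be cross-checked by computing the roots of the characteristic polynomial directly; a minimal sketch assuming numpy:

```python
import numpy as np

# Roots of F(z) = z^3 - 0.2 z^2 - 0.25 z + 0.05 from the example above;
# stability of a sampled data system requires all roots inside |z| = 1.
roots = np.roots([1.0, -0.2, -0.25, 0.05])
stable = bool(np.all(np.abs(roots) < 1.0))
```

Here the roots come out real (0.5, 0.2, -0.5), all of magnitude less than one, agreeing with Jury's test.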
Method 2 (bilinear transformation and Routh array)

F(z) = z^3 - 0.2z^2 - 0.25z + 0.05

Put z = (1 + r)/(1 - r):

F(r) = ((1+r)/(1-r))^3 - 0.2((1+r)/(1-r))^2 - 0.25((1+r)/(1-r)) + 0.05

Multiplying through by (1 - r)^3:

(1 + r)^3 - 0.2(1 + r)^2(1 - r) - 0.25(1 + r)(1 - r)^2 + 0.05(1 - r)^3 = 0

which simplifies to

0.9r^3 + 3.6r^2 + 2.9r + 0.6 = 0

The coefficients of the new characteristic equation are all positive, hence the necessary condition for stability is satisfied.

The sufficient condition for stability can be determined by constructing the Routh array:

r^3 : 0.9    2.9    .........row 1
r^2 : 3.6    0.6    .........row 2
r^1 : 2.75          .........row 3   (= (3.6 x 2.9 - 0.9 x 0.6)/3.6)
r^0 : 0.6           .........row 4

There is no sign change in the elements of the first column of the Routh array, hence the sufficient condition for stability is satisfied.

The necessary condition and sufficient condition for stability are satisfied. Hence the system is stable.
Pulse transfer function
It is the ratio of the z-transform of the discrete output signal of the system to the z-transform of the discrete input signal to the system. That is,

H(z) = C(z)/R(z)    (i)
Proof
Consider the z-transform of the convolution sum:

Z[C(k)] = Σ_{k=0}^{∞} Σ_{m=0}^{∞} h(k - m) r(m) z^{-k}    ----------------(ii)

C(z) = Σ_{m=0}^{∞} r(m) Σ_{k=0}^{∞} h(k - m) z^{-k}    ------------------(iii)

Let l = k - m. Then z^{-k} = z^{-l} z^{-m}, and h(l) = 0 when l < 0 (causality), so

C(z) = Σ_{m=0}^{∞} r(m) z^{-m} Σ_{l=0}^{∞} h(l) z^{-l}    --------------------- (iv), (v)

C(z) = R(z) · H(z)

H(z) = C(z)/R(z)    --------------------------- (vi)
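The identity C(z) = H(z)R(z) proved above can be verified numerically for finite sequences; the impulse response and input below are illustrative.

```python
import numpy as np

def zt(seq, z):
    """z-transform of a finite causal sequence, evaluated at a point z."""
    return sum(x * z ** (-k) for k, x in enumerate(seq))

h = np.array([1.0, 0.5, 0.25])        # illustrative impulse response h(k)
r = np.array([1.0, 1.0, 1.0, 1.0])    # illustrative input sequence r(k)
c = np.convolve(h, r)                 # c(k) = sum_m h(k - m) r(m)

z0 = 1.5 + 0.5j                       # arbitrary evaluation point
lhs = zt(c, z0)
rhs = zt(h, z0) * zt(r, z0)           # C(z) = H(z) R(z)
```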
UNIT II
Z-TRANSFORM AND SAMPLED DATA SYSTEMS
PART A
1. What is sampled data control system?
2. Explain the terms sampling and sampler.
3. What is meant by quantization?
4. State Shannon's sampling theorem.
5. What is zero order hold?
6. What is region of convergence?
7. Define Z-transform of unit step signal?
8. Write any two properties of discrete convolution.
9. What is pulse transfer function?
10. What are the methods available for the stability analysis of sampled data control
systems?
11. What is bilinear transformation?
PART B
1. (i) Solve the following difference equation:
2y(k) - 2y(k-1) + y(k-2) = r(k),
y(k) = 0 for k < 0 and
r(k) = {1; k = 0, 1, 2
       {0; k < 0    (8)
(ii) Check if all the roots of the following characteristic equation lie within the unit circle:
z^4 - 1.368z^3 + 0.4z^2 + 0.08z + 0.002 = 0    (8)
2. (i) Explain the concept of the sampling process. (6)
(ii) Draw the frequency response of the Zero-order Hold. (4)
(iii) Explain any two theorems on the Z-transform. (6)
3. The block diagram of a sampled data system is shown in Fig. (a) Obtain the discrete-time state model for the system. (b) Obtain the equation for the inter-sample response of the system.
UNIT III
STATE SPACE ANALYSIS OF DISCRETE TIME SYSTEMS
State variables - Canonical forms - Digitalization - Solution of state equations - Controllability and Observability - Effect of sampling time on controllability - Pole placement by state feedback - Linear observer design - First order and second order problems
State variables
Concepts of State and State Variables
State
The state of a dynamic system is the smallest set of variables (called state variables) such that knowledge of these variables at t = t0, together with knowledge of the inputs for t ≥ t0, completely determines the behaviour of the system for any time t ≥ t0.
The concept of state is not limited to physical systems. It is applicable to biological systems, economic systems, social systems, and others.

State variables
The state variables of a dynamic system are the smallest set of variables that determine the state of the dynamic system, i.e. the state variables are the minimal set of variables such that knowledge of these variables at any initial time t = t0, together with knowledge of the inputs for t ≥ t0, completely determines the behaviour of the system for any time t ≥ t0. If at least n variables are needed to completely determine the behaviour of a dynamic system, then those n variables are a set of state variables.
The state variables need not be physically measurable or observable quantities. Variables that do not represent physical quantities and those that are neither measurable nor observable can also be chosen as state variables. Such freedom in choosing state variables is an added advantage of the state-space methods.
Canonical forms
There are four main canonical forms to be studied:
1. Controller canonical form
2. Observer canonical form
3. Controllability canonical form
4. Observability canonical form
Controller canonical form
Consider the third-order transfer function

y(s)/u(s) = (b1 s^2 + b2 s + b3)/(s^3 + a1 s^2 + a2 s + a3)

and split it as

y(s)/u(s) = [y(s)/z(s)] [z(s)/u(s)],  z(s)/u(s) = 1/(s^3 + a1 s^2 + a2 s + a3),  y(s)/z(s) = b1 s^2 + b2 s + b3

In other words, s^3 z = u - a1 s^2 z - a2 s z - a3 z and y = b1 s^2 z + b2 s z + b3 z. Choosing the states z1 = z, z2 = sz, z3 = s^2 z gives

Z' = [ 0    1    0
       0    0    1
      -a3  -a2  -a1 ] Z + [ 0
                            0
                            1 ] u;    y = [b3  b2  b1] Z

For the general transfer function

y(s)/u(s) = (bm s^m + b_{m-1} s^{m-1} + ... + b0)/(an s^n + a_{n-1} s^{n-1} + ... + a0),  with an = 1 and m ≤ n - 1,

A = [ 0       1       0      ...  0
      0       0       1      ...  0
      ...
      0       0       0      ...  1
     -a0/an  -a1/an   ...   -a_{n-1}/an ];   b = [0; 0; ...; 0; 1];   C = [b0  b1 ... bm  0 ... 0]

Observer canonical form
Writing the third-order equation in the time domain,

s^3 y + a1 s^2 y + a2 s y + a3 y = b1 s^2 u + b2 s u + b3 u

and defining x1 = y,

x1' = x2 - a1 y + b1 u = -a1 x1 + x2 + b1 u
x2' = x3 - a2 y + b2 u = -a2 x1 + x3 + b2 u
x3' = -a3 y + b3 u = -a3 x1 + b3 u

In other words,

A = [ -a1  1  0
      -a2  0  1
      -a3  0  0 ];   b = [ b1
                           b2
                           b3 ];   C = [1  0  0]

In general,

A = [ -a1       1  0 ... 0
      -a2       0  1 ... 0
      ...
      -a_{n-1}  0  0 ... 1
      -an       0  0 ... 0 ];   b = [b1; b2; ...; bn];   C = [1  0  0 ... 0]
Controllability canonical form (third order):

A = [ 0  0  -a3
      1  0  -a2
      0  1  -a1 ];   b = [ 1
                           0
                           0 ]

Observability canonical form (third order):

A = [ 0    1    0
      0    0    1
     -a3  -a2  -a1 ];   C = [1  0  0]

with the corresponding general n-dimensional companion matrices; in each case the remaining b or C vector is determined from the numerator coefficients b1, ..., bn.
Here x(k) is an n-dimensional state vector at time t = kT; the r-dimensional control (input) vector u(k) and the m-dimensional output vector y(k), respectively, are represented as given.
In the single-variable case, the input u, output y and d are scalars, and b and c are n-dimensional vectors.
The concepts of controllability and observability for discrete time systems are similar to those for continuous-time systems.
A discrete time system is said to be controllable if there exists a finite integer n and an input u(k), k ∈ [0, n-1], that will transfer any state x0 = x(0) to the state xn at k = n.
Controllability
Consider the state equation. The state x can be transferred to some arbitrary state x^n in at most n steps if

ρ(U) = rank [B  AB  A^2 B ......... A^{n-1} B] = n

Thus, a system is controllable if the rank of the composite (n x nr) matrix is n.

Observability
Consider the output equation. If the rank of the composite matrix is n, then the initial state x(0) can be determined from at most n measurements of the output and input. We can therefore state that a discrete time system is observable if the rank of the composite (nm x n) matrix is n.
We are usually satisfied with a trial and error method of selection of the sampling interval. We compare the response of the continuous-time plant with models discretized at various sampling rates. The model with the slowest sampling rate which gives a response within tolerable limits is then selected for future work. However, the method is not rigorous in approach. Also, a wide variety of inputs must be given to each prospective model to ensure that it is a true representative of the plant.
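The trial-and-error comparison described above can be sketched by discretizing an illustrative plant at several candidate sampling periods with scipy; the matrices are assumptions, not from the notes.

```python
import numpy as np
from scipy.signal import cont2discrete

# Zero-order-hold discretizations of an illustrative plant at two
# candidate sampling periods; the discrete poles are exp(lambda_i * T).
A = np.array([[0., 1.], [-2., -3.]])   # continuous poles -1, -2
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
D = np.array([[0.]])

discrete_poles = {}
for T in (0.1, 1.0):
    Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method='zoh')
    discrete_poles[T] = np.sort(np.linalg.eigvals(Ad).real)
```

Comparing the step responses of the discretized models against the continuous plant then guides the choice of T.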
Pole placement by state feedback
Consider a linear dynamic system in the state space form. In some cases one is able to achieve the goal by using full state feedback, which represents a linear combination of the state variables, that is, u = -Kx, such that the closed-loop system has the desired pole locations. For single input single output systems the state feedback gain K is a row vector.
After linearization about the operating point, the equations can be approximated by linear relations. In the same way one can continue with the remaining equations needed for describing the process dynamics. The linear model derived has the form x' = Ax + Bu, y = Cx, where x is the state vector. Furthermore, one has the 3 x 3 system matrix A, the 3 x 2 control matrix B and the 1 x 3 output matrix C. One has to keep in mind that the values of the state space vector x, the input vector u and the output y describe the deviation of the corresponding quantities from their operating values.
UNIT III
STATE SPACE ANALYSIS OF DISCRETE TIME SYSTEMS
PART A
1. What is state and state variable?

X1'(t) = 2 X1(t) + 4 X2(t)
X2'(t) = 2 X1(t) + X2(t) + u(t)

x1' = x1(t) + u
x2' = x1 + 2 x2(t) + u

Y = [1  0] X

(s + 3)/((s + 1)(s + 2)^2)
Unit IV
NONLINEAR SYSTEMS
Subharmonic oscillation means the presence of frequencies which are lower than the input frequency. The input and output relations are not linear.
In a linear system, the sinusoidal oscillations depend on the input amplitude and the initial conditions. But in a nonlinear system, periodic oscillations may exist which are not dependent on the applied input and other system parameter variations. In a nonlinear system, such periodic oscillations are nonsinusoidal, having fixed amplitude and frequency. Such oscillations are called limit cycles.
Another important phenomenon which exists only in nonlinear systems is jump resonance. This can be explained by considering a frequency response. Fig. (a) shows the frequency response of a linear system: the output varies continuously as the frequency changes, and whether the frequency is increased or decreased, the output travels along the same curve again and again. But in the case of a nonlinear system, if the frequency is increased, the output shows a discontinuity, i.e. it jumps at a certain frequency; and if the frequency is decreased, it jumps back, but at a different frequency. This is shown in the fig.
Saturation
In this type of nonlinearity the output is proportional to the input for a limited range of input signals. When the input exceeds this range, the output tends to become nearly constant, as shown in the fig.
Saturation
Deadzone
The deadzone is the region in which the output is zero for a given input. Many
physical devices do not respond to small signals, i.e., if the input amplitude is less than
some small value, there will be no output. The region in which the output is zero is called
deadzone. When the input is increased beyond this deadzone value, the output will be
linear.
Dead zone
Friction
Friction exists in any system when there is relative motion between contacting surfaces.
The different types of friction are viscous friction, Coulomb friction and stiction.
Stiction
The viscous friction is linear in nature and the frictional force is directly proportional to
relative velocity of the sliding surface.
Relay
Predict qualitatively the phase-plane behavior of the nonlinear system, when there
are multiple equilibrium points.
Phase-plane analysis
Phase plane analysis is a graphical method for studying second-order systems. This chapter's objective is to gain familiarity with nonlinear systems through this simple graphical method.
Concepts of Phase Plane Analysis
Phase portraits
The phase plane method is concerned with the graphical study of second-order autonomous
systems described by
x1' = f1(x1, x2)
x2' = f2(x1, x2)     (2.1)
where
x1, x2 : states of the system
f1, f2 : nonlinear functions of the states
Geometrically, the state space of this system is a plane having x1, x2 as
coordinates. This plane is called the phase plane. The solution of (2.1) as time varies from zero
to infinity can be represented as a curve in the phase plane. Such a curve is called a phase
plane trajectory. A family of phase plane trajectories is called a phase portrait of the system.
Example1
Phase portrait of a mass-spring system as shown in the fig.
Solution
The governing equation of the mass-spring system in Fig (a) is the familiar linear second-order differential equation
Assume that the mass is initially at rest at length x0. Then the solution of this equation (taking m = k = 1 for simplicity) is
x(t) = x0 cos t,  x'(t) = -x0 sin t
Eliminating time t from the above equations, we obtain the equation of the trajectories
x² + x'² = x0²
This represents a circle in the phase plane. Its plot is given in fig (b)
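As a quick numerical check of the circular trajectory derived above, the sketch below (an illustrative addition, not part of the original notes; parameter values are our own) integrates the mass-spring equations x1' = x2, x2' = -wn²x1 with a classical Runge-Kutta step and verifies that x1² + (x2/wn)² stays constant along the trajectory:

```python
import math

def simulate_mass_spring(x0=1.0, wn=2.0, dt=1e-3, steps=3142):
    """Integrate x1' = x2, x2' = -wn^2 * x1 with classical RK4."""
    def f(s):
        x1, x2 = s
        return (x2, -wn * wn * x1)
    s = (x0, 0.0)          # released from rest at x = x0
    traj = [s]
    for _ in range(steps):
        k1 = f(s)
        k2 = f((s[0] + 0.5*dt*k1[0], s[1] + 0.5*dt*k1[1]))
        k3 = f((s[0] + 0.5*dt*k2[0], s[1] + 0.5*dt*k2[1]))
        k4 = f((s[0] + dt*k3[0], s[1] + dt*k3[1]))
        s = (s[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
             s[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)
        traj.append(s)
    return traj

traj = simulate_mass_spring()
# Every point should satisfy x1^2 + (x2/wn)^2 = x0^2: a circle in scaled coordinates.
radii = [x1*x1 + (x2/2.0)**2 for x1, x2 in traj]
print(max(abs(r - 1.0) for r in radii))  # numerical drift stays tiny
```

The conserved quantity is the (scaled) energy of the oscillator, which is why the trajectory closes on itself rather than spiraling in or out.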
The nature of the system response corresponding to various initial conditions is directly
displayed on the phase plane. In the above example, we can easily see that the system
trajectories neither converge to the origin nor diverge to infinity. They simply circle around
the origin, indicating the marginally stable nature of the system. A major class of second-order systems can be described by differential equations of the form
In state-space form, these dynamics can be represented with x1 = x and x2 = x' as follows
Singular points
A singular point is an equilibrium point in the phase plane. Since an equilibrium point is defined
as a point where the system states can stay forever, a singular point is a point at which
x1' = x2' = 0.
Example 2
A nonlinear second-order system
The system has two singular points, one at (0, 0) and the other at (3, 0). The motion
patterns of the system trajectories in the vicinity of the two singular points have different
natures: the trajectories move towards the point x = 0 while moving away from the point x =
3.
Constructing Phase Portraits
There are a number of methods for constructing phase plane trajectories for linear or
nonlinear systems, such as the so-called analytical method, the method of isoclines, the delta
method, Liénard's method, and Pell's method.
Analytical method
There are two techniques for generating phase plane portraits analytically. Both
techniques lead to a functional relation between the two phase variables x1 and x2 in the form
g(x1, x2, c) = 0 (2.6), where the constant c represents the effect of the initial conditions (and,
possibly, of external input signals). Plotting this relation in the phase plane for different initial
conditions yields a phase portrait.
The first technique involves solving (2.1) for x1 and x2 as functions of time t, i.e., x1(t) =
g1(t) and x2(t) = g2(t), and then eliminating time t from these equations. The second
technique, on the other hand, involves directly eliminating the time variable, by noting that
dx2/dx1 = f2(x1, x2)/f1(x1, x2), and then solving this equation for a functional relation
between x1 and x2. Let us use this technique to solve the mass-spring equation again.
The first case corresponds to a node.
Stable or unstable node (Fig. a-b)
A node can be stable or unstable:
λ1, λ2 < 0 : the singularity point is called a stable node.
λ1, λ2 > 0 : the singularity point is called an unstable node.
Note that the stability characteristics of linear systems are uniquely determined by the
nature of their singularity points. This, however, is not true for nonlinear systems.
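The eigenvalue-based classification above is easy to mechanize. The helper below is an illustrative sketch of our own (names and tolerances chosen for this example): it classifies the singular point of a linearized system x' = Jx from the eigenvalues of J:

```python
import numpy as np

def classify_singular_point(J, tol=1e-9):
    """Classify the singular point of x' = J x from the eigenvalues of J."""
    lam = np.linalg.eigvals(np.asarray(J, dtype=float))
    re, im = lam.real, lam.imag
    if np.any(np.abs(im) > tol):                  # complex conjugate pair
        if np.all(np.abs(re) < tol):
            return "center"
        return "stable focus" if np.all(re < 0) else "unstable focus"
    if np.all(re < -tol):
        return "stable node"
    if np.all(re > tol):
        return "unstable node"
    return "saddle"                               # real eigenvalues of mixed sign

print(classify_singular_point([[-1, 0], [0, -2]]))  # stable node
print(classify_singular_point([[0, 1], [-1, 0]]))   # center
print(classify_singular_point([[2, 0], [0, -1]]))   # saddle
```

For a nonlinear system this classification applies to each singular point separately, using the Jacobian evaluated at that point, which is why different equilibria of the same system can behave differently.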
Limit cycle
In the phase plane, a limit cycle is defined as an isolated closed curve. The trajectory
has to be both closed, indicating the periodic nature of the motion, and isolated, indicating the
limiting nature of the cycle (with nearby trajectories converging to or diverging from it).
Depending on the motion patterns of the trajectories in the vicinity of the limit cycle, we can
distinguish three kinds of limit cycles.
Limit cycles can be a drawback in control systems:
Instability of the equilibrium point
Wear and failure in mechanical systems
Loss of accuracy in regulation
Stable Limit Cycles: all trajectories in the vicinity of the limit cycle converge to it as t → ∞
(Fig. a).
Unstable Limit Cycles: all trajectories in the vicinity of the limit cycle diverge from it as
t → ∞ (Fig. b).
Semi-Stable Limit Cycles: some of the trajectories in the vicinity of the limit cycle
converge to it while others diverge from it as t → ∞ (Fig. c).
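A classical illustration of a stable limit cycle (the standard textbook example, not one drawn from these notes) is the Van der Pol oscillator x'' - μ(1 - x²)x' + x = 0: trajectories started inside the closed curve grow onto it, and trajectories started outside decay onto it. The sketch below integrates it and checks that the steady amplitude settles near 2 for μ = 1:

```python
import numpy as np

def van_der_pol(x0, v0, mu=1.0, dt=0.01, t_end=60.0):
    """RK4 integration of x' = v, v' = mu*(1 - x^2)*v - x."""
    def f(s):
        x, v = s
        return np.array([v, mu*(1.0 - x*x)*v - x])
    s = np.array([x0, v0], dtype=float)
    xs = []
    for _ in range(int(t_end/dt)):
        k1 = f(s); k2 = f(s + 0.5*dt*k1); k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
        s = s + dt*(k1 + 2*k2 + 2*k3 + k4)/6
        xs.append(s[0])
    return np.array(xs)

# Start well inside the cycle; after the transient the amplitude settles near 2.
xs = van_der_pol(0.1, 0.0)
amplitude = np.max(np.abs(xs[4000:]))   # discard the first 40 s of transient
print(round(float(amplitude), 2))
```

Plotting x against x' for this trajectory would show it spiraling outward onto the isolated closed curve, i.e. a stable limit cycle in the sense defined above.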
Center
Limit Cycle
Of all the analytical methods developed over the years for nonlinear systems, the describing
function method is generally agreed upon as being the most practically useful. It is an
approximate method, but experience with real systems and computer simulation results shows
adequate accuracy in many cases. The method predicts whether limit cycle oscillations will exist
or not, and gives numerical estimates of the oscillation frequency and amplitude when limit cycles
are predicted. Basically, the method is an approximate extension of frequency-response methods
(including the Nyquist stability criterion) to nonlinear systems.
To discuss the basic concept underlying describing function analysis, let us consider the
block diagram of a nonlinear system shown in Fig. 9.5, where the blocks G1(s) and G2(s) represent
the linear elements, while the block N represents the nonlinear element.
The describing function method provides a "linear approximation" to the nonlinear element
based on the assumption that the input to the nonlinear element is a sinusoid of known,
constant amplitude. The fundamental harmonic of the element's output is compared with the
input sinusoid to determine the steady-state amplitude and phase relation. This relation is the
describing function for the nonlinear element. The method can thus be viewed as "harmonic
linearization" of a nonlinear element.
The describing function method is based on the Fourier series. A review of the Fourier series
will be in order here.
Fourier series
We begin with the definition of a periodic signal. A signal y(t) is said to be periodic with
period T if y(t + T) = y(t) for every value of t. The smallest positive value of T for which y(t + T)
= y(t) is called the fundamental period of y(t). We denote the fundamental period as T0. Obviously,
2T0 is also a period of y(t), and so is any integer multiple of T0. A periodic signal y(t) may be
represented by the series
y(t) = A0 + Σn=1..∞ (an cos nω0t + bn sin nω0t),  ω0 = 2π/T0
The term for n = 1 is called the fundamental or first harmonic, and always has the same
frequency as the repetition rate of the original periodic waveform; the terms for n = 2, 3, ...
give the second, third, and so forth harmonic frequencies as integer multiples of the
fundamental frequency.
Certain simplifications are possible when y(t) has a symmetry of one type or another.
The linear component has low-pass filter properties. This is the main assumption that allows
the higher-frequency harmonics, which appear when a nonlinear system is driven by a
harmonic signal, to be neglected.
The better the low-pass filter assumption is satisfied, the smaller the estimation error
affecting the limit cycle parameters.
The nonlinear characteristic is symmetric with respect to the origin. This guarantees that the
static term in the Fourier expansion of the output of the nonlinearity, subjected to a
harmonic signal, can be neglected.
Such an assumption is usually taken for the sake of simplicity, and it can be relaxed.
Ideal relay
The negative reciprocal of the DF, -1/N(A) = -πA/(4M), is the negative real axis, traversed in
the backward direction as A grows. A limit cycle can exist if the relative degree of G(jω) is
greater than two.
The oscillation frequency is the critical frequency ωc of the linear system, and the
oscillation magnitude is proportional to the relay gain M.
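As a numerical check of these statements, the sketch below (an illustrative addition) evaluates the describing function of an ideal relay of output level M directly from the Fourier fundamental of its response to A sin θ, and compares it with the closed-form value N(A) = 4M/(πA):

```python
import math

def relay_df(A, M=1.0, n=200000):
    """Describing function of an ideal relay: fundamental of M*sign(A sin t) over A."""
    b1 = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * (k + 0.5) / n      # midpoint rule over one period
        out = M if math.sin(theta) > 0 else -M     # relay output for input A*sin(theta)
        b1 += out * math.sin(theta)
    b1 *= (2.0 * math.pi / n) / math.pi            # b1 = (1/pi) * integral over [0, 2pi]
    return b1 / A                                  # N(A) = b1 / A (no phase shift)

for A in (0.5, 1.0, 4.0):
    assert abs(relay_df(A) - 4.0 / (math.pi * A)) < 1e-3
print("N(A) matches 4M/(pi*A)")
```

Since N(A) is real and decreases as 1/A, the locus of -1/N(A) is indeed the negative real axis, and the intersection with G(jω) at the critical frequency fixes the predicted limit cycle amplitude A = 4M|G(jωc)|/π, proportional to M as stated above.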
Liapunov's Stability Analysis
The state equation for a general time-invariant system has the form x' = f(x,
u). If the input u is constant then the equation takes the form x' = F(x).
For this system, the points at which the derivatives of all state variables are
zero are the singular points.
These singular points are nothing but equilibrium points, where the system
stays if it is undisturbed once placed at these points.
Consider a system described by x' = f(x, t). Let us assume that this system has a unique
solution starting at the given initial condition x0 at t = t0. If a state xe satisfies
f(xe, t) = 0 for all t, then xe is called an equilibrium state. For linear, time
invariant systems having A non-singular, there is only one equilibrium state, while
there are one or more equilibrium states if A is singular.
In the case of non-linear systems, as we have seen previously, there can be more than
one equilibrium state. Equilibrium states that are isolated from one another can be
shifted to the origin, i.e. f(0, t) = 0, by properly shifting the coordinates.
These equilibrium states can be obtained from the solution of the equation f(x, t) = 0.
Now we will consider the stability analysis of equilibrium states at the origin. We
will consider a spherical region of radius R about an equilibrium state xe such that
Negative Definiteness
A scalar function F(x) is said to be negative definite if - F(x) is positive
definite.
Positive Semidefiniteness
A scalar function F(x) is said to be positive semidefinite if it is positive at all states in
the particular region except at the origin and at certain other states, where it is zero.
Negative Semidefinite
A scalar function F(x) is said to be negative semidefinite if - F(x) is positive
semidefinite.
Indefiniteness
A scalar function F(x) is said to be indefinite in the particular region if it
assumes both positive and negative values, no matter how small the region is.
Quadratic Form
A class of scalar functions which plays an important role in the stability analysis based on
Liapunov's second method is the quadratic form
The system is therefore asymptotically stable in the large at the origin. The condition is also
necessary. To prove this, assume that the system is asymptotically stable while P is
not positive definite. This is a contradiction, since V(x) = xT Px would then satisfy the
instability theorem. Hence the conditions for the positive definiteness of P are necessary
and sufficient for asymptotic stability of the system.
Liapunov's direct method applied to linear time-invariant systems gives the same result as
the Hurwitz stability criterion.
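For a linear system x' = Ax this test amounts to solving the Liapunov equation AᵀP + PA = -Q for some chosen positive definite Q and checking that P is positive definite. A minimal numpy sketch (the matrix A below is an example of our own, not one from the notes):

```python
import numpy as np

def lyapunov_p(A, Q):
    """Solve A^T P + P A = -Q via the Kronecker-product formulation."""
    n = A.shape[0]
    I = np.eye(n)
    # Row-major vec identity: vec(A^T P) = kron(A^T, I) vec(P),
    #                         vec(P A)   = kron(I, A^T) vec(P)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1, -2
P = lyapunov_p(A, np.eye(2))
print(P)                                   # hand calculation gives [[1.25, 0.25], [0.25, 0.25]]
print(np.all(np.linalg.eigvalsh(P) > 0))   # True -> asymptotically stable
```

Because A here is Hurwitz, the resulting P is positive definite, in agreement with the equivalence to the Hurwitz criterion stated above.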
Example
Show that the following quadratic form is positive definite
Solution
The above given V(x) can be written as
As all the successive principal minors of the matrix P are positive, V(x) is positive
definite.
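Sylvester's criterion used in this example is easy to mechanize: a symmetric matrix is positive definite if and only if all its leading principal minors are positive. A sketch (the matrix P below is an example of our own, since the original quadratic form did not survive extraction):

```python
import numpy as np

def leading_minors(P):
    """Determinants of the leading principal submatrices of P."""
    P = np.asarray(P, dtype=float)
    return [np.linalg.det(P[:k, :k]) for k in range(1, P.shape[0] + 1)]

def is_positive_definite(P):
    """Sylvester's criterion: all leading principal minors positive."""
    return all(m > 0 for m in leading_minors(P))

P = [[10, 1, -2], [1, 4, -1], [-2, -1, 1]]
print(leading_minors(P))                       # all positive -> positive definite
print(is_positive_definite(P))                 # True
print(is_positive_definite([[1, 2], [2, 1]]))  # False: second minor is negative
```

For the 3 x 3 example the minors are 10, 39 and 17, all positive, so the corresponding quadratic form xᵀPx is positive definite.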
Example
Investigate the stability of the following non-linear system using the direct method of Liapunov.
Given that
It can be seen that V' < 0 for all non-zero values of x1 and x2. Hence V' is negative
definite, and therefore the origin of the system is asymptotically stable in the large.
UNIT IV
NONLINEAR SYSTEMS
PART A
1. What are linear and nonlinear systems? Give examples.
2. How are nonlinearities introduced into systems?
3. How are nonlinearities classified? Give examples.
4. What is the difference between the phase plane and describing function methods of analysis?
5. Write any two properties of nonlinear systems.
6. What is jump resonance?
7. What are limit cycles?
8. What is saturation?
9. What is a describing function?
10. Write the describing functions of the dead zone and saturation nonlinearities.
PART B
11. Write the describing function for the following
(i) Backlash nonlinearity
(ii) Relay with dead zone
12. Construct phase trajectory for the system described by the equation
dx2
dx1
4 x1 3 x 2
.
x1 x 2
13. Draw the phase trajectory of the system described by the equation
x'' + x' + x² = 0
UNIT V
MIMO SYSTEMS
Models of MIMO system Matrix representation Transfer function representation
Poles and Zeros Decoupling Introduction to multivariable Nyquist plot and singular
values analysis Model predictive control
where Es is the total power across the transmitter irrespective of the number of antennas and
IMT is an MT x MT identity matrix. The transmitted signal bandwidth is so narrow that its
frequency response can be considered flat (i.e., the channel is
memoryless). The channel matrix H is an MR x MT complex matrix. The component hij of
the matrix is the fading coefficient from the jth transmit antenna to the ith receive antenna.
We assume that the received power for each of the receive antennas is equal to the total
transmitted power Es. This implies we ignore signal attenuation, antenna gains, and so on.
Thus we obtain the normalization constraint on the elements of H, for a deterministic
channel, as
If the channel elements are not deterministic but random, the normalization will apply to the
expected value. We assume that the channel matrix is known at the receiver but unknown at
the transmitter. The channel matrix can be estimated at the receiver by transmitting a training
sequence. If we require the transmitter to know this channel, then we need to communicate
this information to the transmitter via a feedback channel. The elements of H can be
deterministic or random. The noise at the receiver is another column matrix of size MR X 1,
denoted by n. The components of n are zero mean circularly symmetrical complex Gaussian
(ZMCSCG) variables. The covariance matrix of the receiver noise is
Each of the MR receive branches has identical noise power N0. The receiver operates on
the maximum likelihood detection principle over the MR receive antennas. The received signals
constitute an MR x 1 column matrix denoted by y, where each complex component refers to a
receive antenna. Since we assumed that the total received power per antenna is equal to the
total transmitted power, the SNR can be written as
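The narrowband model and normalization constraint described above can be sketched numerically; the Rayleigh-fading H, antenna counts and noise power below are illustrative choices of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
MR, MT = 2, 3                     # receive x transmit antennas

# i.i.d. complex Gaussian (Rayleigh) fading coefficients h_ij
H = (rng.standard_normal((MR, MT)) + 1j * rng.standard_normal((MR, MT))) / np.sqrt(2)

# Enforce the deterministic normalization: sum_j |h_ij|^2 = MT for every receive antenna i,
# so each receive antenna collects the full transmitted power
H *= np.sqrt(MT) / np.linalg.norm(H, axis=1, keepdims=True)
print(np.sum(np.abs(H) ** 2, axis=1))   # each row sums to MT

# Narrowband memoryless MIMO model: y = H x + n, with ZMCSCG noise of power N0 per branch
x = rng.standard_normal((MT, 1)) + 1j * rng.standard_normal((MT, 1))
N0 = 0.01
n = np.sqrt(N0 / 2) * (rng.standard_normal((MR, 1)) + 1j * rng.standard_normal((MR, 1)))
y = H @ x + n
```

For a random (non-deterministic) channel the same constraint would be imposed on the expected value of the row powers rather than on each realization.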
Matrix representation
Figure shows a linear dynamic MIMO filter. Its array of K inputs, after z-transforming, can
be represented by the column vector [F(z)]. Its array of outputs, having the same number of
elements as the input array, is represented by the column vector [G(z)]. The transfer function
of the MIMO filter is represented by a square K x K matrix of transfer functions
Each output is a linear combination of filtered versions of all the inputs. The transfer function
from input j to output i is Hij(z).
A schematic diagram of [H(z)] is shown in Fig.(a). The signal path from input line j to output
line i is illustrated in Fig.(b). A block diagram of the MIMO filter is shown in Fig.(c). The
input vector is [F(z)]. The output vector [G(z)] is equal to [H(z)][F(z)]. The overall transfer
function of the system is [H(z)].
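The relation [G(z)] = [H(z)][F(z)] says that each output is a sum of filtered inputs. In the time domain this is a matrix of convolutions; the 2 x 2 bank of FIR impulse responses below is a hypothetical example of our own:

```python
import numpy as np

def mimo_filter(h, inputs):
    """Apply a K x K matrix of FIR impulse responses h[i][j] to K input signals."""
    K = len(inputs)
    outputs = []
    for i in range(K):
        # Output i is the sum over j of input j filtered by H_ij
        acc = sum(np.convolve(h[i][j], inputs[j]) for j in range(K))
        outputs.append(acc)
    return outputs

# 2x2 example: output 0 mixes input 0 with a delayed, halved copy of input 1;
# output 1 passes input 1 through unchanged
h = [[np.array([1.0, 0.0]), np.array([0.0, 0.5])],
     [np.array([0.0, 0.0]), np.array([1.0, 0.0])]]
f = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])]
g = mimo_filter(h, f)
print(g[0])   # f0 plus f1 delayed one sample and halved
print(g[1])   # just f1 (zero-padded by the convolution)
```

This is exactly the time-domain counterpart of the matrix product [G(z)] = [H(z)][F(z)], with each multiplication of transfer functions replaced by a convolution of impulse responses.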
Transfer function representation
A multivariable process admits nu inputs and ny outputs. In general, the number of inputs
should be larger than or equal to the number of outputs so that the process is controllable.
Thus, we will assume nu ≥ ny. The system is supposed to have been identified in continuous
time by transfer functions. In general, this identification is performed by sequentially
imposing signals such as steps on each input ui (i = 1,..., nu ) and recording the corresponding
vector of the responses yij (j = 1, ... ,ny). From each input-output couple (ui,yij), a transfer
function is deduced by a least-squares procedure.
In open loop, the ny outputs yi are linked to the nu inputs uj and to the nd disturbances dk by
the following set of ny linear equations
where y is the output vector, u is the input vector and d is the disturbance vector (the
modelled disturbances); Gu is the rectangular ny x nu matrix whose elements are the
input-output transfer functions, and Gd is the rectangular ny x nd matrix whose elements
are the disturbance-output transfer functions.
The goal of decoupling control is to eliminate complicated loop interactions so that a change
in one process variable will not cause corresponding changes in other process variables. To
do this, a non-interacting or decoupling control scheme is used. In this scheme, a
compensation network called a decoupler is placed right before the process. The decoupler is
the inverse of the gain array and allows all measurements to be passed through it, giving
full decoupling of all of the loops. This is shown pictorially below.
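For a 2 x 2 process at steady state the decoupler described above reduces to the inverse of the gain array; a minimal sketch, with a gain matrix chosen purely for illustration:

```python
import numpy as np

# Hypothetical steady-state gain array: off-diagonal terms are the loop interactions
G = np.array([[2.0, 0.5],
              [0.3, 1.5]])

# Decoupler: inverse of the gain array, placed right before the process
D = np.linalg.inv(G)

# Process * decoupler = identity, so each controller "sees" only its own loop
print(np.round(G @ D, 10))
```

With G @ D equal to the identity, a change demanded in one process variable no longer propagates into the other loop, which is precisely the goal stated above. In practice a dynamic decoupler (transfer function ratios rather than static gains) is used when the loop dynamics differ significantly.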
There are many applications of predictive control successfully in use at the current time, not only
in the process industry but also in the control of other processes, ranging up to robots.
Applications in the cement industry, in drying towers, and to robot arms are described in (54),
as are developments for distillation columns, PVC plants, steam generators, and servos. The
good performance of these applications shows the capacity of MPC to achieve highly
efficient control systems able to operate during long periods of time with hardly any
intervention.
In order to implement this strategy, the basic structure shown in Figure is used. A model is
used to predict the future plant outputs, based on past and current values and on the proposed
optimal future control actions. These actions are calculated by the optimizer taking into
account the cost function (where the future tracking error is considered) as well as the
constraints. The process model plays, in consequence, a decisive role in the controller. The
chosen model must be able to capture the process dynamics to precisely predict the future
outputs and be simple to implement and understand. As MPC is not a unique technique but
rather a set of different methodologies, there are many types of models used in various
formulations. One of the most popular in industry is the Truncated Impulse Response Model,
which is very simple to obtain as it only needs the measurement of the output when the
process is excited with an impulse input. It is widely accepted in industrial practice because it
is very intuitive and can also be used for multivariable processes, although its main
drawbacks are the large number of parameters needed and that only open-loop stable
processes can be described this way. Closely related to this kind of model is the Step
Response Model, obtained when the input is a step.
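The Truncated Impulse Response Model mentioned above predicts the output as a finite convolution y(k) = Σ h_i u(k-i), i = 1..N. The sketch below (a first-order plant of our own choosing, not one from the notes) builds the truncated model from the plant's impulse response and reproduces its step response exactly within the truncation horizon:

```python
import numpy as np

# Hypothetical stable first-order plant: y(k+1) = 0.8 y(k) + 0.2 u(k)
a, b, N = 0.8, 0.2, 50

# Impulse response h_i = b * a^(i-1), i = 1..N (truncated after N samples)
h = b * a ** np.arange(N)

def fir_predict(u, h):
    """Truncated impulse response prediction: y(k) = sum_i h_i u(k-i)."""
    N = len(h)
    y = np.zeros(len(u))
    for k in range(len(u)):
        for i in range(1, N + 1):
            if k - i >= 0:
                y[k] += h[i - 1] * u[k - i]
    return y

u = np.ones(30)                           # unit step input
y_model = fir_predict(u, h)

# Closed-form plant step response: y(k) = 1 - 0.8^k; matches the model for k <= N
y_true = 1.0 - a ** np.arange(30)
print(np.max(np.abs(y_model - y_true)))   # ~0 within the horizon
```

The example also makes the two drawbacks noted above concrete: the model needs N parameters where the plant itself has two, and the truncation is only valid because the plant is open-loop stable (its impulse response decays).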
UNIT V
MIMO SYSTEMS
PART A
1. What is a state and a state variable?
2. What is a state vector?
3. What is state space?
4. What are the input and output spaces?
5. What are the advantages of state space modeling using physical variables?
6. What are phase variables?
7. What are the advantages and disadvantages of the canonical form of the state model?
8. Write the solution of the discrete time state equation.
9. Write the expression to determine the solution of the discrete time state equation using the z-transform.
10. Write the state model of a discrete time system.
PART B
1. A linear second-order single-input continuous-time system is described by the
following set of differential equations.
X1'(t) = 2X1(t) + 4X2(t)
X2'(t) = 2X1(t) + X2(t) + u(t)
x1' = x1(t) + u
x2' = x1 + 2x2(t) + u
Y = [0 1; 1 0] X
G(s) = (s + 3) / [(s + 1)(s + 2)²]