[Circuit diagram: active filter with input vi, resistors R1 through R6, capacitors c1 and c2 around two op-amps, internal node voltages v1 and v2, op-amp input voltages e1 and e2, and output v0. With ideal op-amps, e1 = e2 = 0, so the capacitor voltages are vc1 = v1 - e1 and vc2 = v1 - v2, and the output stage gives v0 = -(R5/R4) v2.]
Thus,

ic1 = c1 dvc1/dt = c1 d(v1 - e1)/dt = c1 dv1/dt

ic2 = c2 dvc2/dt = c2 d(v1 - v2)/dt = c2 dv1/dt - c2 dv2/dt

v2 = -R3 c1 dv1/dt

Since the output stage is an inverting amplifier, v2 = -(R4/R5) v0, so that dv2/dt = -(R4/R5) dv0/dt and

v0 = -(R5/R4) v2 = (R3 R5/R4) c1 dv1/dt

or, equivalently,

dv1/dt = [R4/(R3 R5 c1)] v0
In terms of the input and output voltages, KCL at node v1 gives

(1/R1 + 1/R2 + 1/R6) v1 + c1 dv1/dt + c2 d(v1 - v2)/dt - vi/R1 = 0
Substituting dv1/dt = [R4/(R3 R5 c1)] v0 and dv2/dt = -(R4/R5) dv0/dt into this node equation, differentiating once to eliminate v1, and collecting terms yields
d^2 v0/dt^2 + [ 1/(R3 c1) + (1/c2)(1/R1 + 1/R2 + 1/R6) ] dv0/dt + [ R5/(c1 c2 R3 R4 R6) ] v0 = [ R5/(R1 R4 c2) ] dvi/dt
The last differential equation represents the time-domain model of the active filter. In the complex frequency domain, assuming zero initial conditions, the algebraic relationship between the input and the output is given below. Since both models assume linear behavior of the active amplifier circuits, we could also obtain an input-output model in terms of the convolution relationship in the time domain.
V0(s) = { [R5/(R1 R4 c2)] s / ( s^2 + [ 1/(R3 c1) + (1/c2)(1/R1 + 1/R2 + 1/R6) ] s + R5/(c1 c2 R3 R4 R6) ) } Vi(s)
Input-Output Linearity:
A system N is said to be linear if, whenever the input u1 yields the output N[u1] and the input u2 yields the output N[u2], then for arbitrary real or complex constants c1 and c2,

N[c1 u1 + c2 u2] = c1 N[u1] + c2 N[u2]

Example:
Let the spring force be described by fk(x) = Kx; then

M d^2x/dt^2 + B dx/dt + K x = fa(t)

where fa(t) is an external force.
[Diagram: mass M acted on by the external force fa(t), the spring force fk(x), and a damper B; displacement x.]
Let fa(t) = fa1(t) + fa2(t). Let x1(t) be the solution when fa(t) is replaced by fa1(t), and x2(t) be the solution when fa(t) is replaced by fa2(t). Then

x(t) = x1(t) + x2(t).

Let the spring force now be described by fk(x) = Kx^2. This time, however, x(t) ≠ x1(t) + x2(t), i.e., the linearity property does not hold because the system is now nonlinear.
Time Invariance and Causality
Let N represent a system and y(·) be the response of the system to the input stimulus u(·), i.e., y(·) = N[u(·)]. If for any real T, y(· - T) = N[u(· - T)], then the system is said to be time invariant. In other words, a system is time invariant if delaying the input by T seconds merely delays the response by T seconds.
M d^2x/dt^2 + B dx/dt + K x^2 = fa(t)
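The superposition argument above can be checked numerically. The sketch below is a minimal forward-Euler simulation with assumed parameter values M = B = K = 1 and constant test forces; it compares x1(t) + x2(t) against the response to fa1(t) + fa2(t) for both spring laws.

```python
# Forward-Euler simulation of M x'' + B x' + spring(x) = fa(t).
# Hypothetical values (M = B = K = 1, constant forces) chosen only for illustration.
def simulate(spring, fa, T=2.0, dt=1e-3):
    x, v = 0.0, 0.0                      # zero initial energy
    for k in range(int(T / dt)):
        a = fa(k * dt) - v - spring(x)   # acceleration, with M = B = 1
        x, v = x + dt * v, v + dt * a
    return x

fa1 = lambda t: 1.0
fa2 = lambda t: 0.5
fa12 = lambda t: fa1(t) + fa2(t)

linear = lambda x: x          # fk(x) = Kx with K = 1
nonlinear = lambda x: x * x   # fk(x) = Kx^2 with K = 1

# Linear spring: the response to fa1 + fa2 equals the sum of the responses.
err_lin = abs(simulate(linear, fa12) - (simulate(linear, fa1) + simulate(linear, fa2)))
# Nonlinear spring: superposition fails.
err_nl = abs(simulate(nonlinear, fa12) - (simulate(nonlinear, fa1) + simulate(nonlinear, fa2)))
print(err_lin, err_nl)
```

The Euler map itself is linear in the state and the forcing, so the linear-spring error is at float-rounding level, while the nonlinear-spring error is large.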
Let the system be linear and time invariant with impulse response h(t); then

y(t) = ∫ u(τ) h(t - τ) dτ = ∫ h(τ) u(t - τ) dτ   (integrals over the whole real line)

If the same system is also causal, then for t ≥ 0,

y(t) = ∫_0^t u(τ) h(t - τ) dτ = ∫_0^t h(τ) u(t - τ) dτ

Example: Let a system be described by the ordinary, constant-coefficient differential equation

y^(n)(t) + a1 y^(n-1)(t) + ... + a_{n-1} y'(t) + a_n y(t) = u(t);

then the system is said to be a lumped-parameter system.
Systems that are described by either partial differential equations or linear differential equations that contain time delays are said to be distributed-parameter systems.
Example: Consider the dynamic system described by

y'(t) + a y(t) = b0 u(t) + b1 u(t - 1)

According to the previous definition, this is a distributed-parameter system.

Definition: The state of a system at time t0 is the minimum information (set of internal variables) needed to uniquely specify the system response, given the input variables over the time interval [t0, ∞).

Example:
vi(t): input voltage
i(t): current flowing through the circuit
y(t): output variable (current flowing through the inductor)
For t ≥ t0, with i(t0) = i0 and y(t) = i(t), KVL gives

vi = R i + L di/dt,  i.e.,  di/dt = -(R/L) i + (1/L) vi

and the solution is given by

i(t) = e^{-(R/L)(t - t0)} i(t0) + (1/L) ∫_{t0}^{t} e^{-(R/L)(t - τ)} vi(τ) dτ,  t ≥ t0

Hence, regardless of what vi(t) is for t < t0, all we need to know to predict the future of the output y(t) is the initial state i(t0) and the input voltage vi(t), t ≥ t0.

State Models
They are elegant, yet practical, mathematical representations of the behavior of dynamical systems. Moreover,
A rich theory has already been developed
Real physical systems can be cast into such a representation
Example:
By KVL, for t > t0,

vR + vL + vC = 0

R i + L di/dt + (1/C) ∫_{t0}^{t} i(τ) dτ + vC(t0) = 0

After taking the time derivative of the last equation, we get

d^2 i/dt^2 + (R/L) di/dt + (1/(LC)) i = 0
To solve this homogeneous differential equation, we may proceed as follows: let λ1 and λ2 (λ1 ≠ λ2) be the roots of the auxiliary equation

λ^2 + (R/L) λ + 1/(LC) = 0;

then, for t ≥ t0,

i(t) = C1 e^{λ1 (t - t0)} + C2 e^{λ2 (t - t0)}

C1 and C2 can be uniquely obtained as follows: from the knowledge of i(t0) and vC(t0) we can compute di(t)/dt at t = t0, and therefore C1 and C2, since

i(t0) = C1 + C2

di(t)/dt |_{t = t0} = λ1 C1 + λ2 C2
Using a state variable approach, let x1(t) = vC(t) and x2(t) = i(t); then, for t ≥ t0,

dx1(t)/dt = dvC(t)/dt = (1/C) i(t) = (1/C) x2(t)

dx2(t)/dt = di(t)/dt = -(R/L) i(t) - (1/L) [ (1/C) ∫_{t0}^{t} i(τ) dτ + vC(t0) ] = -(1/L) x1(t) - (R/L) x2(t)

or

d/dt [ x1(t) ; x2(t) ] = [ 0  1/C ; -1/L  -R/L ] [ x1(t) ; x2(t) ],  [ x1(t0) ; x2(t0) ] = [ vC(t0) ; i(t0) ]

or

ẋ(t) = A x(t),  x(t0) = x0

This is a first-order, linear, constant-coefficient vector differential equation! In principle, its solution should be easy to find.
Specifically, for t ≥ t0,

x(t) = e^{A(t - t0)} x(t0)

The solution to the vector state equation is more elegant and easier to obtain (provided there is an algorithm to compute e^{At}), and it especially makes the role of the initial conditions (state) clear.

Linear State Models for Lumped-Parameter Systems
Consider the system described by the following block diagram:

[Block diagram: u(t) enters B(t) and D(t); B(t)u(t) feeds a summing junction whose output x'(t) drives an integrator producing x(t); A(t)x(t) is fed back to the summing junction; C(t)x(t) and D(t)u(t) are summed to give y(t).]

Mathematically, for t > 0,

ẋ(t) = A(t) x(t) + B(t) u(t),  x(t0) = x0
y(t) = C(t) x(t) + D(t) u(t)
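Computing e^{At} is the key numerical step in evaluating these solutions. A minimal sketch is the truncated Taylor series below; this is adequate for small ||At|| but is not a production algorithm (scaling-and-squaring would be used instead).

```python
import math

# e^{At} via the truncated Taylor series I + At + (At)^2/2! + ...
# Sketch only: fine for small ||At||, not a robust general-purpose routine.
def mat_mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(A, t, terms=30):
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in E]                                # running (At)^k / k!
    At = [[A[i][j] * t for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = mat_mul(term, At)
        term = [[term[i][j] / k for j in range(n)] for i in range(n)]
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return E

# Check against a case with a known answer: for A = [[0,1],[-1,0]],
# e^{At} = [[cos t, sin t], [-sin t, cos t]].
E = expm([[0.0, 1.0], [-1.0, 0.0]], 0.5)
print(E)
```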
where x(t) ∈ R^n is the state vector, u(t) ∈ R^m is the input vector, y(t) ∈ R^r is the output vector, A(t) ∈ R^{n×n} is the feedback (system) matrix, B(t) ∈ R^{n×m} is the input distribution matrix, C(t) ∈ R^{r×n} is the output matrix, and D(t) ∈ R^{r×m} is the feed-forward matrix. Also, A(·), B(·), C(·) and D(·) are piecewise continuous functions of time.

Definitions:
The zero-input state response is the response x(·) given x(t0) and u(·) ≡ 0.
The zero-input system response is the response y(·) given x(t0) and u(·) ≡ 0.
The zero-state state response is the response x(·) to an input u(·) whenever x(t0) = 0.
The zero-state system response is the response y(·) to an input u(·) whenever x(t0) = 0.

Let yzi(·) be the zero-input system response and yzs(·) be the zero-state system response; then the total system response is given by y(·) = yzi(·) + yzs(·).
Example:

[Diagram: mass m = 0.5 kg driven by an input force u(t) and opposed by a friction force 3 v(t); output position y(t), velocity v(t).]

0.5 dv(t)/dt + 3 v(t) - u(t) = 0,  i.e.,  dv(t)/dt = -6 v(t) + 2 u(t)

Now,

d^2 y(t)/dt^2 = -6 dy(t)/dt + 2 u(t),  with y(t0) and dy(t0)/dt given.

Let x1(t) = y(t) and x2(t) = dy(t)/dt = v(t); then, for t ≥ t0, the state model is

[ ẋ1(t) ; ẋ2(t) ] = [ 0 1 ; 0 -6 ] [ x1(t) ; x2(t) ] + [ 0 ; 2 ] u(t),  x(t0) = x0

y(t) = [ 1 0 ] [ x1(t) ; x2(t) ]
Now, the state solution is given by

x(t) = e^{A(t - t0)} x(t0) + ∫_{t0}^{t} e^{A(t - τ)} B u(τ) dτ

where, for t ≥ t0 = 0, the matrix exponential e^{At} is described by

e^{At} = [ 1  (1/6)(1 - e^{-6t}) ; 0  e^{-6t} ]

1. Zero-input state response: u(t) = 0, t ≥ 0, and x(0) = [ x10 ; x20 ] ≠ 0.

x(t) = e^{At} x(0) = [ x10 + (1/6)(1 - e^{-6t}) x20 ; e^{-6t} x20 ]
2. Zero-input system response: u(t) = 0, t ≥ 0, and x(0) ≠ 0.

y(t) = C x(t) = [ 1 0 ] e^{At} x(0) = x10 + (1/6)(1 - e^{-6t}) x20

3. Zero-state state response: x(0) = 0 (here with the unit-step input u(t) = 1, t ≥ 0).

x_zs(t) = ∫_0^t e^{A(t - τ)} B u(τ) dτ = ∫_0^t [ (1/3)(1 - e^{-6(t - τ)}) ; 2 e^{-6(t - τ)} ] dτ = [ t/3 - (1/18)(1 - e^{-6t}) ; (1/3)(1 - e^{-6t}) ]
4. Zero-state system response: x(0) = 0.

y_zs(t) = C x_zs(t) = [ 1 0 ] x_zs(t) = t/3 - (1/18)(1 - e^{-6t})

For t ≥ 0, the complete state response is then given by

x(t) = e^{At} x(0) + ∫_0^t e^{A(t - τ)} B u(τ) dτ = [ x10 + (1/6)(1 - e^{-6t}) x20 + t/3 - (1/18)(1 - e^{-6t}) ; e^{-6t} x20 + (1/3)(1 - e^{-6t}) ]

and the complete system response by

y(t) = C x(t) = [ 1 0 ] [ x1(t) ; x2(t) ] = x1(t) = x10 + (1/6)(1 - e^{-6t}) x20 + t/3 - (1/18)(1 - e^{-6t})
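The complete response of this example can be cross-checked numerically. The sketch below forward-Euler integrates the state equation with a unit-step input from an arbitrarily assumed initial state and compares the simulated output to the closed-form expression (step size and tolerance are arbitrary choices).

```python
import math

# Forward-Euler simulation of x' = Ax + Bu with A = [[0,1],[0,-6]], B = [0,2], u = 1.
x10, x20 = 0.7, -0.3      # arbitrary initial state for the check
x1, x2 = x10, x20
dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):
    x1, x2 = x1 + dt * x2, x2 + dt * (-6.0 * x2 + 2.0 * 1.0)

# Closed-form complete system response y(t) = x1(t):
t = T
y_exact = x10 + (1 - math.exp(-6 * t)) * x20 / 6 + t / 3 - (1 - math.exp(-6 * t)) / 18
print(abs(x1 - y_exact))  # small discretization error
```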
State Models from Ordinary Differential Equations
Let a dynamic system be described by the n-th order scalar differential equation with constant coefficients, i.e.,

y^(n) + a1 y^(n-1) + ... + a_{n-1} y' + a_n y = b1 u^(m) + b2 u^(m-1) + ... + b_m u' + b_{m+1} u

where m ≤ n. Let the initial energy of the system be zero; then, with n = 3 and m = 2,

d^3 y(t)/dt^3 + a1 d^2 y(t)/dt^2 + a2 dy(t)/dt + a3 y(t) = b1 d^2 u(t)/dt^2 + b2 du(t)/dt + b3 u(t)

Let us implicitly solve this equation, namely,

d^3 y(t)/dt^3 = d^2/dt^2 [ b1 u(t) - a1 y(t) ] + d/dt [ b2 u(t) - a2 y(t) ] + [ b3 u(t) - a3 y(t) ]
Integrating this last equation one step at a time, we get

d^2 y(t)/dt^2 = d/dt [ b1 u(t) - a1 y(t) ] + [ b2 u(t) - a2 y(t) ] + ∫^t [ b3 u(τ) - a3 y(τ) ] dτ

dy(t)/dt = [ b1 u(t) - a1 y(t) ] + ∫^t { [ b2 u(τ) - a2 y(τ) ] + ∫^τ [ b3 u(σ) - a3 y(σ) ] dσ } dτ

y(t) = ∫^t { [ b1 u(τ) - a1 y(τ) ] + ∫^τ { [ b2 u(σ) - a2 y(σ) ] + ∫^σ [ b3 u(ρ) - a3 y(ρ) ] dρ } dσ } dτ

This is the implicit solution of the original differential equation, obtained via nested integration. To obtain a state variable representation, we need to represent this implicit solution in block diagram form (a traditional analog simulation diagram).
Block diagram representation:

[Simulation diagram: three integrators in cascade. The input u enters through the gains b3, b2, b1, and the output y = x3 is fed back through the gains a3, a2, a1 with minus signs; x1, x2, x3 are the integrator outputs.]

If we select the outputs of the integrators as the state variables, then

ẋ1 = -a3 x3 + b3 u
ẋ2 = x1 - a2 x3 + b2 u
ẋ3 = x2 - a1 x3 + b1 u
y = x3
In matrix form,

ẋ = [ 0 0 -a3 ; 1 0 -a2 ; 0 1 -a1 ] x + [ b3 ; b2 ; b1 ] u = A0 x + B0 u

y = [ 0 0 1 ] x = C0 x

This is the so-called observable canonical form representation.

Alternative state variable representation:
Let us apply the Laplace transform to the original scalar ordinary differential equation, assuming zero initial conditions, i.e.,

L{ d^3 y/dt^3 + a1 d^2 y/dt^2 + a2 dy/dt + a3 y } = L{ b1 d^2 u/dt^2 + b2 du/dt + b3 u }

or

( s^3 + a1 s^2 + a2 s + a3 ) Y(s) = ( b1 s^2 + b2 s + b3 ) U(s)

since, with zero initial conditions,

L{ d^n f(t)/dt^n } = s^n F(s) - Σ_{k=0}^{n-1} s^{n-1-k} [ d^k f(t)/dt^k ]_{t=0} = s^n F(s)
In transfer function form,

Y(s)/U(s) = ( b1 s^2 + b2 s + b3 ) / ( s^3 + a1 s^2 + a2 s + a3 ) = ( b1 s^-1 + b2 s^-2 + b3 s^-3 ) / ( 1 + a1 s^-1 + a2 s^-2 + a3 s^-3 )

Let us rewrite the last equation as follows:

Y(s)/U(s) = ( Y1(s)/U(s) ) ( Y(s)/Y1(s) )

where

Y1(s)/U(s) = 1 / ( s^3 + a1 s^2 + a2 s + a3 )

and

Y(s)/Y1(s) = b1 s^2 + b2 s + b3

Observation:
The overall transfer function is a cascade of two transfer functions.
Each of the transfer functions can be expressed in block diagram form, i.e.,

[Diagram 1: U(s) enters a summing junction Σ whose output is s^3 Y1(s); three cascaded 1/s blocks produce s^2 Y1(s), s Y1(s), and Y1(s), which are fed back to Σ through the gains a1, a2, a3 with minus signs.]

[Diagram 2: Y1(s), s Y1(s), and s^2 Y1(s) are weighted by b3, b2, b1, respectively, and summed to give Y(s).]

Observation: the term s^-1 in the complex frequency domain corresponds to an integrator in the time domain.
Putting the two diagrams together yields

[Combined simulation diagram: u(t) enters the summing junction of the three-integrator chain; the integrator outputs are x3, x2, x1 (x1 at the end of the chain), fed back through a1, a2, a3 with minus signs; x1, x2, x3 are weighted by b3, b2, b1 and summed to give y(t).]

Again, choosing the outputs of the integrators as the state variables, we get

ẋ1 = x2
ẋ2 = x3
ẋ3 = -a3 x1 - a2 x2 - a1 x3 + u
y = b3 x1 + b2 x2 + b1 x3
In matrix form,

ẋ = [ 0 1 0 ; 0 0 1 ; -a3 -a2 -a1 ] x + [ 0 ; 0 ; 1 ] u = Ac x + Bc u

y = [ b3 b2 b1 ] x = Cc x

This form of the state equation is the so-called controllable canonical form.

Observation:
The two canonical forms are duals of each other. Consider the controllable canonical form of some linear time-invariant dynamic system, i.e.,

ẋc = Ac xc + Bc u
yc = Cc xc
Then the observable canonical form is given by

ẋ0 = A0 x0 + B0 u,  y0 = C0 x0,  where A0 = Ac^T, B0 = Cc^T, and C0 = Bc^T.

Controllability and Observability (a conceptual introduction):
Suppose now that the initial conditions of an n-th order scalar ordinary differential equation are not equal to zero. How do we build state models such that their responses will be the same as that of the original scalar model?

Method 1: Given the n-th order scalar differential equation

y^(n) + a1 y^(n-1) + ... + a_{n-1} y' + a_n y = b1 u^(n-1) + ... + b_{n-1} u' + b_n u

with state model

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
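The duality claim can be checked numerically: the two realizations built from the same coefficients have the same input-output behavior. The sketch below uses an assumed example coefficient set (a1, a2, a3 = 6, 11, 6 and b1, b2, b3 = 1, 2, 3) and a simple forward-Euler simulation with a test pulse input.

```python
# Controllable and observable (dual) realizations of
# (b1 s^2 + b2 s + b3) / (s^3 + a1 s^2 + a2 s + a3), example coefficients assumed.
a1, a2, a3 = 6.0, 11.0, 6.0
b1, b2, b3 = 1.0, 2.0, 3.0

Ac = [[0, 1, 0], [0, 0, 1], [-a3, -a2, -a1]]; Bc = [0.0, 0.0, 1.0]; Cc = [b3, b2, b1]
# Observable form is the dual: Ao = Ac^T, Bo = Cc^T, Co = Bc^T.
Ao = [[Ac[j][i] for j in range(3)] for i in range(3)]; Bo = Cc[:]; Co = Bc[:]

def step(A, B, x, u, dt):
    # one forward-Euler step of x' = Ax + Bu
    return [x[i] + dt * (sum(A[i][j] * x[j] for j in range(3)) + B[i] * u)
            for i in range(3)]

dt, T = 1e-4, 2.0
xc = [0.0, 0.0, 0.0]; xo = [0.0, 0.0, 0.0]
for k in range(int(T / dt)):
    u = 1.0 if k * dt < 1.0 else 0.0   # a test pulse input
    xc = step(Ac, Bc, xc, u, dt)
    xo = step(Ao, Bo, xo, u, dt)

yc = sum(Cc[i] * xc[i] for i in range(3))
yo = sum(Co[i] * xo[i] for i in range(3))
print(yc, yo)  # the two outputs agree (zero initial state)
```

The agreement is exact up to rounding because the discretized Markov parameters Cc (I + dt Ac)^k Bc and Co (I + dt Ao)^k Bo are transposes of the same scalar.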
where x(t) ∈ R^n, u(t), y(t) ∈ R, and A, B, C, D are constant matrices of appropriate dimensions.

Objective: Determine the initial state vector x(0) = [ x1(0) ... xn(0) ]^T from the initial conditions y(0), y'(0), ..., y^(n-1)(0) and the input initial values u(0), u'(0), ..., u^(n-1)(0).

In the derivation of both the observable and controllable canonical forms from an ordinary linear differential equation with scalar constant coefficients with m < n, we found that D = 0; hence,

y(0) = C x(0)
y'(0) = C ẋ(0) = C A x(0) + C B u(0)
y''(0) = C A^2 x(0) + C A B u(0) + C B u'(0)
...
y^(n-1)(0) = C A^{n-1} x(0) + C A^{n-2} B u(0) + C A^{n-3} B u'(0) + ... + C B u^{(n-2)}(0)
In matrix form,

Y(0) = O x(0) + T U(0)

where Y(0) = [ y(0) y'(0) ... y^(n-1)(0) ]^T, U(0) = [ u(0) u'(0) ... u^(n-1)(0) ]^T, O = [ C ; CA ; CA^2 ; ... ; CA^{n-1} ] ∈ R^{n×n}, and T ∈ R^{n×n} is the strictly lower-triangular Toeplitz matrix built from the Markov parameters CB, CAB, ..., CA^{n-2}B.

To get a unique solution x(0) for the last algebraic equation, it is necessary that the matrix O be non-singular, i.e.,

x(0) = O^{-1} [ Y(0) - T U(0) ]

The existence of O^{-1} is directly related to the property of observability of a system. Hence, to uniquely reconstruct the initial state vector x(0) from input and output measurements, the system must be observable, i.e., O^{-1} must exist. In fact, O is called the observability matrix.
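The reconstruction can be sketched for n = 2 with small hand-rolled linear algebra; the system matrices below are assumed example values, and the "unknown" initial state is recovered from y(0), y'(0) and u(0).

```python
# Recover x(0) from x(0) = O^{-1} (Y(0) - T U(0)), n = 2.
# Example system (assumed): A = [[0, 1], [-2, -3]], B = [0, 1], C = [1, 0], D = 0.
A = [[0.0, 1.0], [-2.0, -3.0]]; B = [0.0, 1.0]; C = [1.0, 0.0]

CA = [sum(C[k] * A[k][j] for k in range(2)) for j in range(2)]
O = [C, CA]                                  # observability matrix [C; CA]
CB = sum(C[k] * B[k] for k in range(2))      # Markov parameter

x0 = [0.4, -1.2]                             # the "unknown" initial state
u0 = 2.0                                     # input initial value
# Simulated measurements: y(0) = C x(0); y'(0) = CA x(0) + CB u(0)
Y = [sum(O[0][j] * x0[j] for j in range(2)),
     sum(O[1][j] * x0[j] for j in range(2)) + CB * u0]

# Solve O x = Y - T U(0); here T U(0) = [0, CB u(0)]. Cramer's rule for 2x2.
rhs = [Y[0], Y[1] - CB * u0]
det = O[0][0] * O[1][1] - O[0][1] * O[1][0]
x_rec = [(rhs[0] * O[1][1] - rhs[1] * O[0][1]) / det,
         (rhs[1] * O[0][0] - rhs[0] * O[1][0]) / det]
print(x_rec)  # recovers [0.4, -1.2]
```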
Method 2: Suppose now that, instead of using input-output measurements to reconstruct the state at time t = 0, we use impulsive inputs to change the value of the state instantaneously. Let

ẋ(t) = A x(t) + B u(t),  t ≥ 0

with x(0^-) = x0, A ∈ R^{n×n}, B ∈ R^n, describe an n-th order scalar differential equation, and let

u(t) = ξ0 δ(t) + ξ1 δ'(t) + ... + ξ_{n-1} δ^{(n-1)}(t)

Clearly, u(t) is described by a linear combination of impulsive inputs. We know that for t > 0

x(t) = e^{At} x(0^-) + ∫_0^t e^{A(t-τ)} B u(τ) dτ = e^{At} x(0^-) + ∫_0^t e^{A(t-τ)} B [ ξ0 δ(τ) + ξ1 δ'(τ) + ... + ξ_{n-1} δ^{(n-1)}(τ) ] dτ
But the i-th term in the integral can be rewritten, integrating by parts, as

∫_0^t e^{A(t-τ)} B ξi δ^{(i)}(τ) dτ = ξi e^{At} A^i B

Therefore, x(t) is given by

x(t) = e^{At} [ x(0^-) + ξ0 B + ξ1 A B + ... + ξ_{n-1} A^{n-1} B ] = e^{At} { x(0^-) + [ B  AB  ...  A^{n-1}B ] ξ }

where

ξ = [ ξ0 ξ1 ... ξ_{n-1} ]^T
At time t = 0^+, we get

x(0^+) = x(0^-) + [ B  AB  ...  A^{n-1}B ] ξ = x(0^-) + Q ξ

where Q is the so-called controllability matrix. Clearly, an impulsive input that will take the state from x(0^-) to x(0^+) will exist if and only if the inverse of Q exists, namely,

ξ = Q^{-1} [ x(0^+) - x(0^-) ]

Digital Simulation of State Models
Dynamic systems are nonlinear in general; therefore, let us begin with the following nonlinear time-varying dynamic system, which is described by

ẋ(t) = f(t, x(t), u(t)),  x(t0) = x0,  t ≥ t0
y(t) = g(t, x(t), u(t))
Objective: We would like to know the behavior of the system over the time interval t ∈ [t0, tn] for a given initial state x(t0) and input u(t), t ∈ [t0, tn]. In principle, for t ∈ [t0, tn],

x(t) = x(t0) + ∫_{t0}^{t} f(τ, x(τ), u(τ)) dτ

However, computing the integral analytically is very difficult in most cases. Let's examine the following numerical approximations to the integral. Let n = 10.

Case 1 (forward Euler formula, scalar case):
In this case,

∫_{t0}^{t10} f(τ, x(τ), u(τ)) dτ ≈ (t1 - t0) f0 + (t2 - t1) f1 + ... + (t10 - t9) f9

where fk = f(tk, x(tk), u(tk)). Over one time interval,

∫_{t_{k-1}}^{t_k} f(τ, x(τ), u(τ)) dτ ≈ (t_k - t_{k-1}) f_{k-1}

At time tk, starting at t_{k-1}, we get

x(tk) ≈ x(t_{k-1}) + (tk - t_{k-1}) f(t_{k-1}, x(t_{k-1}), u(t_{k-1}))

Case 2 (backward Euler formula, scalar case):
For the k-th time interval,

∫_{t0}^{t10} f(τ, x(τ), u(τ)) dτ ≈ (t1 - t0) f1 + (t2 - t1) f2 + ... + (t10 - t9) f10

Therefore,

∫_{t_{k-1}}^{t_k} f(τ, x(τ), u(τ)) dτ ≈ (tk - t_{k-1}) fk

and the approximate solution at tk, starting at t_{k-1}, is given by

x(tk) ≈ x(t_{k-1}) + (tk - t_{k-1}) f(tk, x(tk), u(tk))

Note that x(tk) appears on both sides, so the backward Euler update is implicit.

Case 3 (trapezoidal rule, scalar case):
In this case the integral is approximately equal to

∫_{t0}^{t10} f(τ, x(τ), u(τ)) dτ ≈ (t1 - t0) (f0 + f1)/2 + (t2 - t1) (f1 + f2)/2 + ... + (t10 - t9) (f9 + f10)/2

Therefore, the solution at time tk, starting at t_{k-1}, is approximately equal to

x(tk) ≈ x(t_{k-1}) + (1/2)(tk - t_{k-1}) [ f(t_{k-1}, x(t_{k-1}), u(t_{k-1})) + f(tk, x(tk), u(tk)) ]

Example: Obtain an approximate solution, over one time interval at equally spaced time instants tk - t_{k-1} = 0.5, of the following linearized pendulum state model

[ ẋ1(t) ; ẋ2(t) ] = [ x2(t) ; -4 x1(t) ]

with initial conditions

x1(0) = π/40,  x2(0) = 0
Forward Euler Method:

[ x1(tk) ; x2(tk) ] ≈ [ x1(t_{k-1}) ; x2(t_{k-1}) ] + 0.5 [ x2(t_{k-1}) ; -4 x1(t_{k-1}) ] = [ x1(t_{k-1}) + 0.5 x2(t_{k-1}) ; x2(t_{k-1}) - 2 x1(t_{k-1}) ]

Backward Euler Method:

[ x1(tk) ; x2(tk) ] ≈ [ x1(t_{k-1}) ; x2(t_{k-1}) ] + 0.5 [ x2(tk) ; -4 x1(tk) ] = [ x1(t_{k-1}) + 0.5 x2(tk) ; x2(t_{k-1}) - 2 x1(tk) ]

Trapezoidal Rule Method:

[ x1(tk) ; x2(tk) ] ≈ [ x1(t_{k-1}) ; x2(t_{k-1}) ] + 0.25 [ x2(t_{k-1}) + x2(tk) ; -4 x1(t_{k-1}) - 4 x1(tk) ] = [ x1(t_{k-1}) + 0.25 ( x2(t_{k-1}) + x2(tk) ) ; x2(t_{k-1}) - ( x1(t_{k-1}) + x1(tk) ) ]
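The three updates can be sketched in code for one step from the given initial state. The implicit methods are linear here, so each amounts to a 2×2 solve; Cramer's rule is used for brevity.

```python
import math

h = 0.5
x1_0, x2_0 = math.pi / 40, 0.0          # initial conditions

# Forward Euler (explicit): x_k = x_{k-1} + h f(x_{k-1}), f = [x2, -4 x1]
f1 = (x1_0 + h * x2_0, x2_0 - 4 * h * x1_0)

# Backward Euler (implicit): solve (I - hA) x_k = x_{k-1}, A = [[0,1],[-4,0]]
# (I - hA) = [[1, -h], [4h, 1]], det = 1 + 4 h^2; Cramer's rule:
det = 1 + 4 * h * h
b1 = (x1_0 + h * x2_0) / det
b2 = (x2_0 - 4 * h * x1_0) / det

# Trapezoidal rule: solve (I - (h/2)A) x_k = (I + (h/2)A) x_{k-1}
g = h / 2
detT = 1 + 4 * g * g
r1, r2 = x1_0 + g * x2_0, x2_0 - 4 * g * x1_0
t1 = (r1 + g * r2) / detT
t2 = (r2 - 4 * g * r1) / detT

print(f1, (b1, b2), (t1, t2))
```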
Linear Discrete-Time Systems
Implementation of dynamic systems is usually done using digital devices such as computers and/or DSPs. Moreover, some naturally occurring processes are discrete-time. Hence, it is convenient to model such systems as discrete-time systems.

In most cases, a continuous-time linear time-invariant dynamic system is discretized at times t = tk, which may be fixed or variable. When such a system is described by its impulse response, the discretization process is illustrated below (a sampled-data system):

[Diagram: u(t) → sampler (t = tk) → u(tk) → h(t) → y(t) → sampler (t = tk) → y(tk).]
The equivalent single-input, single-output, discrete-time system can be depicted by

[Diagram: u(tk) → h(tk) → y(tk).]

Since the system is linear, its behavior is described by the convolution relation. Let tk = kT; then

y(kT) = Σ_n h(kT - nT) u(nT)

Let u(kT) = δ(kT), the unit sample, i.e.,

δ(kT) = 1 if k = 0, and 0 otherwise;

then

y(kT) = Σ_n h(kT - nT) δ(nT) = h(kT)

is called the unit sample response.
Let the system be causal, i.e., h(kT) = 0 for k < 0; then

y(kT) = Σ_{n=-∞}^{k} h(kT - nT) u(nT)

In addition, if u(kT) = 0 for k < 0, then

y(kT) = Σ_{n=0}^{k} h(kT - nT) u(nT)

Consider now a continuous-time linear time-invariant multiple-input, multiple-output dynamic system in state-space form, i.e.,

ẋ(t) = A x(t) + B u(t),  x(t0) = x0,  t ≥ t0
y(t) = C x(t) + D u(t)

We know that

x(t) = e^{A(t - t0)} x(t0) + ∫_{t0}^{t} e^{A(t - τ)} B u(τ) dτ
Let t0 = kT and t = kT + T; then the last equation becomes

x(kT + T) = e^{AT} x(kT) + ∫_{kT}^{kT+T} e^{A(kT + T - τ)} B u(τ) dτ

Suppose u(τ) = u(kT), kT ≤ τ < kT + T, namely, the input does not change in the interval [kT, kT + T) (equivalent to the presence of a sample-and-hold device); then

x(kT + T) = e^{AT} x(kT) + ( ∫_{kT}^{kT+T} e^{A(kT + T - τ)} dτ ) B u(kT)

Let σ = kT + T - τ; then dσ = -dτ and

∫_{kT}^{kT+T} e^{A(kT + T - τ)} dτ = ∫_{0}^{T} e^{Aσ} dσ

The discretized system now becomes

x(kT + T) = e^{AT} x(kT) + ( ∫_{0}^{T} e^{Aσ} dσ ) B u(kT),  x(0) = x0,  k = 0, 1, 2, ...
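The integral ∫_0^T e^{Aσ} dσ can be computed with the same power series as e^{AT}, since it equals Σ A^k T^{k+1}/(k+1)!. The sketch below applies this to the earlier mass example A = [[0,1],[0,-6]], B = [0,2], with an assumed sampling interval T = 0.1, and compares against the closed-form e^{At} of that example.

```python
import math

# Ad = e^{AT} = sum_k A^k T^k / k!;  Bd = ( sum_k A^k T^(k+1) / (k+1)! ) B
A = [[0.0, 1.0], [0.0, -6.0]]; B = [0.0, 2.0]; T = 0.1   # T assumed for illustration

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

Ad = [[1.0, 0.0], [0.0, 1.0]]            # running sum for e^{AT}, starts at I
S = [[T, 0.0], [0.0, T]]                 # running sum for the integral, starts at T*I
term = [[1.0, 0.0], [0.0, 1.0]]          # A^k T^k / k!
for k in range(1, 30):
    term = mat_mul(term, A)
    term = [[term[i][j] * T / k for j in range(2)] for i in range(2)]
    Ad = [[Ad[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    S = [[S[i][j] + term[i][j] * T / (k + 1) for j in range(2)] for i in range(2)]

Bd = [sum(S[i][j] * B[j] for j in range(2)) for i in range(2)]
print(Ad, Bd)
```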
State representation of discrete-time dynamic systems:
Consider a linear discrete-time dynamic system described by the difference equation

y(k + n) + a1 y(k + n - 1) + a2 y(k + n - 2) + ... + a_{n-1} y(k + 1) + a_n y(k) = b1 u(k + m) + ... + b_m u(k + 1) + b_{m+1} u(k)

where the sampling interval has been normalized, i.e., T = 1 sec.

Define the discrete-time equivalent system and input distribution matrices as

Ad = e^{AT}  and  Bd = ( ∫_{0}^{T} e^{Aσ} dσ ) B;

then the discrete-time equivalent system is described by

x(kT + T) = Ad x(kT) + Bd u(kT),  x(0) = x0,  k = 0, 1, 2, ...
y(kT) = C x(kT) + D u(kT)

Note that this discretization of the original continuous-time system is exact when a sample-and-hold device processes the input u(t) and the input and output samplers (D/A and A/D) clocks are in synchronism.
If we now replace differentiations with forward shift operators and integrators with backward shift operators, then we can construct the same type of canonical realizations that we built for continuous-time systems.

Example: Let n = 3, m = 2, and y(0) = y(1) = y(2) = u(0) = u(1) = u(2) = 0; then

y(k + 3) = b1 u(k + 2) - a1 y(k + 2) + b2 u(k + 1) - a2 y(k + 1) + b3 u(k) - a3 y(k)

Let us apply the backward shift operator q^-1 to this equation one step at a time:

y(k + 2) = q^-1 y(k + 3) = b1 u(k + 1) - a1 y(k + 1) + b2 u(k) - a2 y(k) + q^-1 [ b3 u(k) - a3 y(k) ]

y(k + 1) = q^-1 y(k + 2) = b1 u(k) - a1 y(k) + q^-1 { b2 u(k) - a2 y(k) + q^-1 [ b3 u(k) - a3 y(k) ] }

y(k) = q^-1 y(k + 1) = q^-1 { b1 u(k) - a1 y(k) + q^-1 { b2 u(k) - a2 y(k) + q^-1 [ b3 u(k) - a3 y(k) ] } }

The solution y(k) can now be computed implicitly using a simulation block diagram.
Simulation diagram implementation:

[Simulation diagram: three cascaded backward shift (q^-1) blocks; the input u enters through the gains b3, b2, b1, and the output y = x3 is fed back through the gains a3, a2, a1 with minus signs; x1, x2, x3 are the outputs of the shift operators.]

Using the outputs of the shift operators as the state variables, we get

x1(k + 1) = -a3 x3(k) + b3 u(k)
x2(k + 1) = x1(k) - a2 x3(k) + b2 u(k)
x3(k + 1) = x2(k) - a1 x3(k) + b1 u(k)
y(k) = x3(k)
In matrix form,

x(k + 1) = [ 0 0 -a3 ; 1 0 -a2 ; 0 1 -a1 ] x(k) + [ b3 ; b2 ; b1 ] u(k)

y(k) = [ 0 0 1 ] x(k)

In general, a discrete-time system can be represented by (assuming T = 1)

x(k + 1) = A x(k) + B u(k)
y(k) = C x(k) + D u(k)

where x(k) ∈ R^n, u(k) ∈ R^m, y(k) ∈ R^r, and A, B, C, D are constant matrices of appropriate dimensions. As in the continuous-time case, we can reconstruct the state from input-output measurements.
Iteratively,

y(k) = C x(k) + D u(k)
y(k + 1) = C x(k + 1) + D u(k + 1) = C A x(k) + C B u(k) + D u(k + 1)
y(k + 2) = C x(k + 2) + D u(k + 2) = C A^2 x(k) + C A B u(k) + C B u(k + 1) + D u(k + 2)
...
y(k + n - 1) = C A^{n-1} x(k) + C A^{n-2} B u(k) + C A^{n-3} B u(k + 1) + ... + C B u(k + n - 2) + D u(k + n - 1)

In matrix form,

Y(k) = [ C ; CA ; CA^2 ; ... ; CA^{n-1} ] x(k) + [ D 0 ... 0 ; CB D ... 0 ; CAB CB D ... 0 ; ... ; CA^{n-2}B ... CAB CB D ] U(k) = φ x(k) + T U(k)

where Y(k) = [ y(k) ; y(k + 1) ; ... ; y(k + n - 1) ] and U(k) = [ u(k) ; u(k + 1) ; ... ; u(k + n - 1) ].
If φ has full rank, then x(k) = φ^# [ Y(k) - T U(k) ], i.e., if the system is observable then we can reconstruct the state at time k using input and output measurements up to time k + n - 1, where φ^# is the pseudoinverse of φ.

Solution of the discrete-time state equation
Iteratively,

x(1) = A x(0) + B u(0)
x(2) = A x(1) + B u(1) = A^2 x(0) + A B u(0) + B u(1)
x(3) = A x(2) + B u(2) = A^3 x(0) + A^2 B u(0) + A B u(1) + B u(2)
x(4) = A x(3) + B u(3) = A^4 x(0) + A^3 B u(0) + A^2 B u(1) + A B u(2) + B u(3)
...
x(k) = A^k x(0) + Σ_{l=0}^{k-1} A^{k-1-l} B u(l)

and

y(k) = C A^k x(0) + C Σ_{l=0}^{k-1} A^{k-1-l} B u(l) + D u(k)
From the last equation, starting at time k, the value of the state at time k + n is

x(k + n) = A^n x(k) + Q Ur(k);  hence,  Q Ur(k) = x(k + n) - A^n x(k)

where Q is the controllability matrix

Q = [ B  AB  ...  A^{n-1}B ]

and

Ur(k) = [ u(k + n - 1) ; u(k + n - 2) ; ... ; u(k) ]

To assure the existence of an input such that the state of the system can reach a desired state at time k + n, given the value of the state at time k, the following relationship must be satisfied:

Ur(k) = Q^# [ x(k + n) - A^n x(k) ]

where Q^# is the pseudoinverse of Q. In other words, the system must be controllable.
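The steering computation above can be sketched for n = 2 with an assumed example system: solve for the two input samples that drive x(0) to a desired x(2), then verify by running the recursion.

```python
# Reach a desired state in n = 2 steps: x(2) = A^2 x(0) + B u(1) + A B u(0).
# Example system (assumed): A = [[0, 1], [-0.5, 1]], B = [0, 1].
A = [[0.0, 1.0], [-0.5, 1.0]]; B = [0.0, 1.0]

def mv(M, v):  # matrix-vector product, 2x2
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

AB = mv(A, B)
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

x0 = [1.0, -1.0]; x_target = [2.0, 0.5]
d = [x_target[i] - mv(A2, x0)[i] for i in range(2)]     # x(2) - A^2 x(0)

# Solve Q [u(1); u(0)] = d with Q = [B AB], by Cramer's rule.
detQ = B[0] * AB[1] - AB[0] * B[1]
u1 = (d[0] * AB[1] - AB[0] * d[1]) / detQ
u0 = (B[0] * d[1] - d[0] * B[1]) / detQ

# Verify by running the system two steps.
x1 = [mv(A, x0)[i] + B[i] * u0 for i in range(2)]
x2 = [mv(A, x1)[i] + B[i] * u1 for i in range(2)]
print(x2)  # reaches x_target
```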
Linearization of Nonlinear Systems
Consider the following scalar nonlinear system

y(t) = g(x(t))

where g(·): R → R. Let the nominal operating point be x0, and let N be a neighborhood of it, i.e., N = {x ∈ R: a < x < b} with a < x0 < b. If the function g(·) is analytic on N, i.e., it is infinitely differentiable on N, then for Δx ∈ R such that x0 + Δx ∈ N, we get the following Taylor series expansion:

g(x0 + Δx) = g(x0) + (dg/dx)|_{x=x0} Δx + (1/2!) (d^2 g/dx^2)|_{x=x0} Δx^2 + ...

For small Δx,

y = g(x0 + Δx) ≈ g(x0) + (dg/dx)|_{x=x0} Δx = y0 + Δy
Therefore, the linear approximation of a nonlinear system y = g(x) near the operating point x0 has the form

y - y0 = Δy ≈ m Δx

where

m = (dg/dx)|_{x=x0}

Example: Consider a semiconductor diode described by

v = v0 ln(1 + i/is) = g(i)

where v0 = q vt. In this case,

m = (dg/di)|_{i=i0} = v0 / [ is (1 + i0/is) ] = v0 / (is + i0) = rd
The linearized model is then given by

Δv = m Δi = [ v0 / (is + i0) ] Δi = rd Δi

Using the parameters q = 2, vt = 0.026 V, is = 1 nA, i0 = 0.05 A, v0 = 0.92 V, rd = 18.44 Ω, we get the linear approximation shown below.

[Plot: diode characteristic v versus i (i from 0 to 0.1 A, v from 0 to 1 V) together with its linear approximation about the operating point.]
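The quality of such a linear approximation follows directly from the Taylor argument: the error is second order in Δi. A quick numerical sketch with the diode law v = v0 ln(1 + i/is), taking v0 = q vt and the parameter values listed above:

```python
import math

q, vt, i_s, i0 = 2.0, 0.026, 1e-9, 0.05
v0 = q * vt                        # v0 = q vt
g = lambda i: v0 * math.log(1 + i / i_s)
m = v0 / (i_s + i0)                # tangent slope at the operating point i0

di = 1e-4                          # a small current perturbation
err = abs(g(i0 + di) - (g(i0) + m * di))   # deviation from the linear model
print(g(i0), err)   # operating-point voltage near 0.92 V; tiny linearization error
```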
If, on the other hand, y = g(x1, x2, ..., xn), then if g(·) is analytic on the set N = {x ∈ R^n: a < ||x|| < b}, with x0 ∈ N and x0 + Δx ∈ N, x0 = [ x10, ..., xn0 ]^T,

y = g(x10 + Δx1, ..., xn0 + Δxn) = g(x0) + (∂g/∂x1)|_{x=x0} Δx1 + ... + (∂g/∂xn)|_{x=x0} Δxn + higher order terms
  = g(x0) + ∇g(x)|_{x=x0} Δx + higher order terms

where

∇g(x) = [ ∂g(x)/∂x1  ∂g(x)/∂x2  ...  ∂g(x)/∂xn ]  and  Δx = [ Δx1 Δx2 ... Δxn ]^T

Systems with memory:
I. Scalar Case. Consider a system described by

ẋ(t) = f(x(t), u(t), t),  x(t0) = x0 ∈ R,  x(t), u(t) ∈ R, for t ≥ t0

Suppose xn(t) is the system response resulting from a nominal operating input un(t), assuming a nominal initial state xn(t0) = x0 ∈ R.
In other words,

ẋn(t) = f(xn(t), un(t), t),  xn(t0) = x0, for t ≥ t0

Assume that we know xn(t). Perturb the state and the input by taking x(t0) = x0 + Δx0 and u(t) = un(t) + Δu(t). We want to find the solution to

ẋ(t) = f(xn(t) + Δx(t), un(t) + Δu(t), t),  x(t0) = x0 + Δx0, for t ≥ t0

For fixed values of t, and f(·) an analytic function on some neighborhood of xn(t) and un(t), using a Taylor series expansion we get

f(xn(t) + Δx(t), un(t) + Δu(t), t) = f(xn(t), un(t), t) + ∇f(x, u, t)|_{x=xn, u=un} [ Δx(t) ; Δu(t) ] + higher order terms

where

∇f(x, u, t) = [ ∂f/∂x  ∂f/∂u ]
Let

a(t) = ∂f(x(t), u(t), t)/∂x |_{x=xn, u=un}  and  b(t) = ∂f(x(t), u(t), t)/∂u |_{x=xn, u=un}

Then

dx(t)/dt = dxn(t)/dt + dΔx(t)/dt = f(x(t), u(t), t) ≈ f(xn(t), un(t), t) + a(t) Δx(t) + b(t) Δu(t)

Therefore,

dΔx(t)/dt ≈ a(t) Δx(t) + b(t) Δu(t),  Δx(t0) = Δx0

Example: Let

ẋ(t) = -x^2(t) + u(t),  x(t0) = x0

Clearly,

f(x(t), u(t), t) = -x^2(t) + u(t)

Now, if the nominal operating input is un(t) = 0, t ≥ 1,

a(t) = ∂f(x(t), u(t), t)/∂x |_{x=xn, u=un} = -2 xn(t)
and

b(t) = ∂f(x(t), u(t), t)/∂u |_{x=xn, u=un} = 1

So, with un(t) = 0, t ≥ 1, and xn(1) = 1, the nominal trajectory satisfies

ẋn(t) = -xn^2(t),  xn(1) = 1

which gives d(1/xn)/dt = 1, i.e., 1/xn(t) = t + c with c = 0, and hence xn(t) = 1/t, t ≥ 1.

Finally,

dΔx(t)/dt ≈ -(2/t) Δx(t) + Δu(t),  Δx(1) = Δx0, t ≥ 1
II. Vector Case. Consider now the case of a system described by the following nonlinear vector differential equation:

ẋ(t) = f(x(t), u(t), t),  x(t0) = x0,  x(t) ∈ R^n, u(t) ∈ R^m, for t ≥ t0

The i-th element of the vector differential equation is described by

ẋi(t) = fi(x(t), u(t), t)

Moreover,

fi(xn(t) + Δx(t), un(t) + Δu(t), t) = fi(xn(t), un(t), t) + ∇fi(x, u, t)|_{x=xn, u=un} [ Δx(t) ; Δu(t) ] + higher order terms

where

∇fi(x, u, t) = [ ∇x fi(x, u, t)  ∇u fi(x, u, t) ]
and

∇x fi(x, u, t) = [ ∂fi/∂x1  ∂fi/∂x2  ...  ∂fi/∂xn ],  ∇u fi(x, u, t) = [ ∂fi/∂u1  ∂fi/∂u2  ...  ∂fi/∂um ]

Collecting the n components, the perturbation satisfies

d/dt [ Δx1 ; Δx2 ; ... ; Δxn ] = [ ∇x f1(x, u, t) ; ∇x f2(x, u, t) ; ... ; ∇x fn(x, u, t) ]|_{x=xn, u=un} [ Δx1 ; ... ; Δxn ] + [ ∇u f1(x, u, t) ; ∇u f2(x, u, t) ; ... ; ∇u fn(x, u, t) ]|_{x=xn, u=un} [ Δu1 ; Δu2 ; ... ; Δum ]
Finally, if the outputs of the nonlinear system are of the form

y(t) = g(x(t), u(t), t),  y(t) ∈ R^p

then

Δy(t) = [ Δy1 ; Δy2 ; ... ; Δyp ] = [ ∇x g1(x, u, t) ; ∇x g2(x, u, t) ; ... ; ∇x gp(x, u, t) ]|_{x=xn, u=un} [ Δx1 ; ... ; Δxn ] + [ ∇u g1(x, u, t) ; ... ; ∇u gp(x, u, t) ]|_{x=xn, u=un} [ Δu1 ; ... ; Δum ]
Example: Suppose we have a point mass in an inverse-square-law force field, e.g., a gravity field, as shown below

[Diagram: orbiting point mass m at radius r(t) and angle θ(t); u1(t) is the radial thrust, u2(t) the tangential thrust.]

where r(t) is the radius of the orbit at time t, θ(t) is the angle relative to the horizontal axis, u1(t) is the thrust in the radial direction, u2(t) is the thrust in the tangential direction, and m is the mass of the orbiting body.
From the laws of mechanics and assuming m = 1 kg, the total force in the radial direction is described by

\frac{d^2 r(t)}{dt^2} = r(t)\left[\frac{d\theta(t)}{dt}\right]^2 - \frac{K}{r^2(t)} + u_1(t)

and the total force in the tangential direction is

\frac{d^2 \theta(t)}{dt^2} = -\frac{2}{r(t)}\,\frac{d\theta(t)}{dt}\,\frac{dr(t)}{dt} + \frac{1}{r(t)}\,u_2(t)

Select the states as follows:

x_1(t) = r(t), \quad x_2(t) = \frac{dr(t)}{dt}, \quad x_3(t) = \theta(t), \quad \text{and} \quad x_4(t) = \frac{d\theta(t)}{dt}

Then,

\dot{x}_1(t) = x_2(t)
\dot{x}_2(t) = x_1(t)\,x_4^2(t) - \frac{K}{x_1^2(t)} + u_1(t)
\dot{x}_3(t) = x_4(t)
\dot{x}_4(t) = -\frac{2\,x_2(t)\,x_4(t)}{x_1(t)} + \frac{1}{x_1(t)}\,u_2(t)

which implies that

f(x(t), u(t), t) = \begin{bmatrix} x_2 \\[2pt] x_1 x_4^2 - \dfrac{K}{x_1^2} + u_1 \\[2pt] x_4 \\[2pt] -\dfrac{2 x_2 x_4}{x_1} + \dfrac{u_2}{x_1} \end{bmatrix}

For a circular orbit with u_{1n}(t) = u_{2n}(t) = 0 and t_0 = 0, we have

x_n(t) = \begin{bmatrix} R & 0 & \omega_0 t & \omega_0 \end{bmatrix}^T, \quad t \ge 0

where r_n(t) = R and

\frac{d\theta_n(t)}{dt} = \omega_0 = \sqrt{\frac{K}{R^3}}

Linearizing about x_n(t) and u_n(t) yields
\nabla_x f_1(x, u, t)\big|_n = \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix}

\nabla_x f_2(x, u, t)\big|_n = \begin{bmatrix} x_4^2 + \dfrac{2K}{x_1^3} & 0 & 0 & 2 x_1 x_4 \end{bmatrix}_n = \begin{bmatrix} 3\omega_0^2 & 0 & 0 & 2R\omega_0 \end{bmatrix}

\nabla_x f_3(x, u, t)\big|_n = \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix}

\nabla_x f_4(x, u, t)\big|_n = \begin{bmatrix} \dfrac{2 x_2 x_4}{x_1^2} - \dfrac{u_2}{x_1^2} & -\dfrac{2 x_4}{x_1} & 0 & -\dfrac{2 x_2}{x_1} \end{bmatrix}_n = \begin{bmatrix} 0 & -\dfrac{2\omega_0}{R} & 0 & 0 \end{bmatrix}
Likewise,

\nabla_u f_1\big|_n = \begin{bmatrix} 0 & 0 \end{bmatrix}, \quad \nabla_u f_2\big|_n = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad \nabla_u f_3\big|_n = \begin{bmatrix} 0 & 0 \end{bmatrix}, \quad \nabla_u f_4\big|_n = \begin{bmatrix} 0 & \dfrac{1}{x_1} \end{bmatrix}_n = \begin{bmatrix} 0 & \dfrac{1}{R} \end{bmatrix}

In state form,

\frac{d}{dt}\Delta x = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 3\omega_0^2 & 0 & 0 & 2R\omega_0 \\ 0 & 0 & 0 & 1 \\ 0 & -\dfrac{2\omega_0}{R} & 0 & 0 \end{bmatrix} \Delta x + \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & \dfrac{1}{R} \end{bmatrix} \Delta u
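As a sanity check on the algebra, the closed-form linearization of the orbit example can be compared against a finite-difference Jacobian of the nonlinear dynamics evaluated on the circular-orbit nominal (the values K = 1, R = 2 are arbitrary test numbers):

```python
import numpy as np

K, R = 1.0, 2.0
w0 = np.sqrt(K / R**3)              # omega_0 = sqrt(K / R^3)

def f(x, u):
    # orbit dynamics with m = 1: states (r, r', theta, theta')
    return np.array([x[1],
                     x[0]*x[3]**2 - K/x[0]**2 + u[0],
                     x[3],
                     -2*x[1]*x[3]/x[0] + u[1]/x[0]])

xn = np.array([R, 0.0, 0.0, w0])    # circular-orbit nominal (theta value is irrelevant)
un = np.zeros(2)

eps = 1e-6
A_num = np.array([(f(xn + eps*e, un) - f(xn - eps*e, un))/(2*eps) for e in np.eye(4)]).T
B_num = np.array([(f(xn, un + eps*e) - f(xn, un - eps*e))/(2*eps) for e in np.eye(2)]).T

A_closed = np.array([[0, 1, 0, 0],
                     [3*w0**2, 0, 0, 2*R*w0],
                     [0, 0, 0, 1],
                     [0, -2*w0/R, 0, 0]])
B_closed = np.array([[0, 0], [1, 0], [0, 0], [0, 1/R]])
```

The numerical Jacobians agree with the closed-form A and B above to within the differencing error.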
Existence of solutions of differential equations
Consider the following unforced, possibly nonlinear dynamic system described by

\dot{x}(t) = f(t, x(t))

where x(t_0) = x_0, x(t) \in \mathbb{R}^n, and f(\cdot,\cdot): \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n.

Then the state trajectory φ(·; t₀, x₀) is a solution over the time interval [a, b] if and only if φ(t₀; t₀, x₀) = x₀ and

\dot{\varphi}(t; t_0, x_0) = f(t, \varphi(t; t_0, x_0))

for all t ∈ [a, b].

Def. Let D ⊂ ℝ×ℝⁿ be a connected, closed, bounded set. Then the function f(t, x) satisfies a local Lipschitz condition at t₀ on D if there exists a finite constant k such that

\| f(t_0, x_1) - f(t_0, x_2) \|_2 \le k\, \| x_1 - x_2 \|_2

for all (t₀, x₁), (t₀, x₂) ∈ D, where k is the Lipschitz constant.
Global Existence and Uniqueness:
Assumptions:
1. S ⊂ ℝ₊ = [0, ∞) contains at most a finite number of points per unit interval.
2. For each x ∈ ℝⁿ, f(t, x) is continuous at every t ∉ S.
3. For each tᵢ ∈ S, f(t, x) has finite left- and right-hand limits at t = tᵢ.
4. f(·,·): ℝ₊×ℝⁿ → ℝⁿ satisfies the global Lipschitz condition, i.e., there exists a piecewise continuous function k(·): ℝ₊ → ℝ₊ such that

\| f(t, x_1) - f(t, x_2) \|_2 \le k(t)\, \| x_1 - x_2 \|_2

for all t ∈ ℝ₊ and all x₁, x₂ ∈ ℝⁿ.

Theorem: Suppose that assumptions (1)–(4) hold. Then for each t₀ ∈ ℝ₊ and x₀ ∈ ℝⁿ there exists a unique continuous function φ(·; t₀, x₀): ℝ₊ → ℝⁿ such that

(a) \dot{\varphi}(t; t_0, x_0) = f(t, \varphi(t; t_0, x_0)) \quad \text{and} \quad (b) \varphi(t_0; t_0, x_0) = x_0

for all t ∈ ℝ₊ with t ∉ S.

By uniqueness we mean that if φ₁ and φ₂ satisfy conditions (a) and (b), then φ₁(t; t₀, x₀) = φ₂(t; t₀, x₀) for all t ∈ ℝ₊.
Consider now the unforced, linear, time-varying system
\dot{x}(t) = A(t)\,x(t), \quad x(0) = x_0

where A(t) ∈ ℝⁿˣⁿ and its components aᵢⱼ(t) are piecewise continuous.

Theorem: If A(·) is piecewise continuous, then for each initial condition x(0) a solution φ(·; 0, x₀) to the equation exists and is unique.

Proof: Define the sets Dⱼ = [j − 1, j) for j = 1, 2, …. Then

\mathbb{R}_+ = \bigcup_{j=1}^{\infty} D_j

Since A(·) is piecewise continuous on ℝ₊, it must be piecewise continuous for each t ∈ Dⱼ, j = 1, 2, …. Therefore, for arbitrary x₁, x₂ ∈ ℝⁿ,

\| A(t)x_1 - A(t)x_2 \|_2 = \| A(t)(x_1 - x_2) \|_2 \le \| A(t) \|_{\infty, D_j}\, \| x_1 - x_2 \|_2
for all t ∈ Dⱼ, j = 1, 2, ….

Let k(t) = \| A(t) \|_{\infty, D_j} for t ∈ Dⱼ, t ∈ ℝ₊; then

\| A(t)x_1 - A(t)x_2 \|_2 \le k(t)\, \| x_1 - x_2 \|_2

where k(t) is a piecewise continuous function for t ∈ ℝ₊. Therefore, for each x(0) ∈ ℝⁿ, a unique solution φ(·; 0, x₀) to ẋ(t) = A(t)x(t) exists.

Example: Verify that the differential equation

\dot{x}(t) = \begin{bmatrix} t & 1 \\ 1 & 2t \end{bmatrix} x(t)

with initial condition x(0) = [1 0]ᵀ has a unique solution.

First of all, all the entries of A(t) are continuous functions of time. But,

\| A(t) \|_{\infty} = 1 + 2t \;\Rightarrow\; \| A(t)x_1 - A(t)x_2 \|_2 \le (1 + 2t)\, \| x_1 - x_2 \|_2

which implies that there exists a unique solution, since k(t) = 1 + 2t is continuous for all t ∈ ℝ₊.

Consider now the linear time-varying unforced dynamic system described by ẋ(t) = A(t)x(t).
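A quick numerical spot-check of a global Lipschitz bound of this kind, sampling random times and state pairs for a symmetric matrix whose ∞-norm is 1 + 2t (for a symmetric matrix, ‖A‖₂ ≤ ‖A‖∞, so the bound is valid in the 2-norm as well):

```python
import numpy as np

rng = np.random.default_rng(0)

def A(t):
    # symmetric example matrix with ||A(t)||_inf = max(t + 1, 1 + 2t) = 1 + 2t for t >= 0
    return np.array([[t, 1.0], [1.0, 2*t]])

ok = True
for _ in range(1000):
    t = rng.uniform(0.0, 10.0)
    x1, x2 = rng.normal(size=2), rng.normal(size=2)
    lhs = np.linalg.norm(A(t) @ x1 - A(t) @ x2)
    rhs = (1 + 2*t) * np.linalg.norm(x1 - x2)
    ok = ok and lhs <= rhs + 1e-12
```

Every sampled pair satisfies the bound, as the Lipschitz condition requires.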
Theorem: Let A(t) ∈ ℝⁿˣⁿ be piecewise continuous. Then the set of all solutions of ẋ(t) = A(t)x(t) forms an n-dimensional vector space over the field of the real numbers.

Proof: Let {q₁, q₂, …, qₙ} be a set of linearly independent vectors in ℝⁿ, i.e.,

\alpha_1 q_1 + \alpha_2 q_2 + \cdots + \alpha_n q_n = 0

if and only if αᵢ = 0, i = 1, 2, …, n; and let φᵢ(·) be the solutions of ẋ(t) = A(t)x(t) with initial conditions φᵢ(t₀) = qᵢ, i = 1, 2, …, n.

Suppose that the φᵢ's, i = 1, 2, …, n, are linearly dependent. Then there exist αᵢ ∈ ℝ, i = 1, 2, …, n, not all zero, such that

\alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t) + \cdots + \alpha_n \varphi_n(t) = 0 \quad \text{for all } t \in \mathbb{R}_+

At t = t₀ ∈ ℝ₊,

\alpha_1 \varphi_1(t_0) + \alpha_2 \varphi_2(t_0) + \cdots + \alpha_n \varphi_n(t_0) = \alpha_1 q_1 + \alpha_2 q_2 + \cdots + \alpha_n q_n = 0

so the qᵢ's are linearly dependent, which is an outright contradiction of the hypothesis that the qᵢ's are linearly independent. Therefore, the φᵢ's are linearly independent for all t ∈ ℝ₊.
Let φ(t) be any solution of ẋ(t) = A(t)x(t) with φ(t₀) = q. Since the qᵢ's are linearly independent vectors in ℝⁿ, q can be uniquely represented by

q = \sum_{i=1}^{n} \alpha_i q_i

But \sum_{i=1}^{n} \alpha_i \varphi_i(\cdot) is a solution of ẋ(t) = A(t)x(t) with initial condition

\sum_{i=1}^{n} \alpha_i \varphi_i(t_0) = \sum_{i=1}^{n} \alpha_i q_i = q

This is because

\frac{d}{dt}\left( \sum_{i=1}^{n} \alpha_i \varphi_i(t) \right) = \sum_{i=1}^{n} \alpha_i \dot{\varphi}_i(t) = \sum_{i=1}^{n} \alpha_i A(t)\varphi_i(t) = A(t) \sum_{i=1}^{n} \alpha_i \varphi_i(t)

In other words, the linear combination \sum_{i=1}^{n} \alpha_i \varphi_i(t) satisfies the differential equation ẋ(t) = A(t)x(t). Therefore, φ(·) = \sum_{i=1}^{n} \alpha_i \varphi_i(\cdot) by uniqueness, which implies that every solution of ẋ(t) = A(t)x(t) is a linear combination of the basis of solutions φᵢ(·), i = 1, 2, …, n, i.e., the set of all solutions forms an n-dimensional vector space.
Example: Consider the dynamical system described by

\dot{x}(t) = \begin{bmatrix} 0 & 0 \\ t & 0 \end{bmatrix} x(t)

Let the vectors q₁ and q₂ be described by

q_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad q_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

Then

\varphi_1(t) = \begin{bmatrix} 1 \\ \tfrac{1}{2} t^2 \end{bmatrix} \quad \text{and} \quad \varphi_2(t) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

are two independent solutions to the system with initial conditions φ₁(0) = q₁ and φ₂(0) = q₂. Therefore, any solution φ(t) will be given by

\varphi(t) = \alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t) = \begin{bmatrix} \alpha_1 \\ \tfrac{1}{2}\alpha_1 t^2 + \alpha_2 \end{bmatrix}, \qquad \alpha_i \in \mathbb{R}, \; i = 1, 2.
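The claimed basis solutions can be checked by integrating ẋ = A(t)x numerically from each qᵢ (a fixed-step RK4 integrator is used here so the sketch needs nothing beyond numpy):

```python
import numpy as np

def rk4(f, x0, t0, t1, n=2000):
    # classical fixed-step fourth-order Runge-Kutta integrator
    t, x, h = t0, np.asarray(x0, float), (t1 - t0)/n
    for _ in range(n):
        k1 = f(t, x); k2 = f(t + h/2, x + h/2*k1)
        k3 = f(t + h/2, x + h/2*k2); k4 = f(t + h, x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return x

# the example system x' = [[0, 0], [t, 0]] x
f = lambda t, x: np.array([[0.0, 0.0], [t, 0.0]]) @ x

T = 3.0
phi1 = rk4(f, [1.0, 0.0], 0.0, T)   # should match [1, T^2/2]
phi2 = rk4(f, [0.0, 1.0], 0.0, T)   # should match [0, 1]
```

Both integrations land on the closed-form expressions φ₁(T) = [1, T²/2]ᵀ and φ₂(T) = [0, 1]ᵀ.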
Def. The state transition matrix of the differential equation ẋ(t) = A(t)x(t) is given by

\Phi(t, t_0) = \big[\, \varphi_1(t; t_0, q_1) \;\; \varphi_2(t; t_0, q_2) \;\; \cdots \;\; \varphi_n(t; t_0, q_n) \,\big]

where the φᵢ's, i = 1, 2, …, n, are the basis solutions, with

q_i = [\,0 \; \cdots \; 0 \; 1 \; 0 \; \cdots \; 0\,]^T \quad \text{(the 1 in the } i\text{-th position)}

Properties of the state transition matrix:

1. Φ(t₀, t₀) = I

Proof: Recall that φᵢ(t₀; t₀, qᵢ) = qᵢ = [0 ⋯ 0 1 0 ⋯ 0]ᵀ. Thus Φ(t₀, t₀) = I.

2. Φ(t, t₀) satisfies the differential equation Ṁ(t) = A(t)M(t), M(t₀) = I, M(t) ∈ ℝⁿˣⁿ.

Proof: The time derivative of the state transition matrix is given by

\frac{d}{dt}\Phi(t, t_0) = \big[\, \dot{\varphi}_1(t; t_0, q_1) \;\; \dot{\varphi}_2(t; t_0, q_2) \;\; \cdots \;\; \dot{\varphi}_n(t; t_0, q_n) \,\big]

However,

\dot{\varphi}_i(t; t_0, q_i) = A(t)\, \varphi_i(t; t_0, q_i)
Therefore,

\frac{d}{dt}\Phi(t, t_0) = \big[\, A(t)\varphi_1(t; t_0, q_1) \;\; A(t)\varphi_2(t; t_0, q_2) \;\; \cdots \;\; A(t)\varphi_n(t; t_0, q_n) \,\big] = A(t)\,\Phi(t, t_0)

i.e., \dot{\Phi}(t, t_0) = A(t)\Phi(t, t_0). Also, from part (1), Φ(t₀, t₀) = I.

3. Φ(t, t₀) is uniquely defined.

Proof: Since each φᵢ is uniquely determined by A(t) for each initial condition qᵢ, then Φ(t, t₀) is also uniquely determined by A(t).

Proposition: The solution to ẋ(t) = A(t)x(t), x(t₀) = x₀ is x(t) = Φ(t, t₀)x₀ for all t.

Proof: At t = t₀, Φ(t₀, t₀)x₀ = Ix₀ = x₀. We already know that \dot{\Phi}(t, t_0) = A(t)\Phi(t, t_0). Therefore,

\frac{d}{dt}\big[\Phi(t, t_0)x_0\big] = A(t)\,\Phi(t, t_0)x_0

In other words, x(t) = Φ(t, t₀)x₀ satisfies the differential equation.
If, for all t and t₀, A(t) has the following commutative property:

A(t)\left(\int_{t_0}^{t} A(\tau)\,d\tau\right) = \left(\int_{t_0}^{t} A(\tau)\,d\tau\right) A(t)

then

\Phi(t, t_0) = e^{\int_{t_0}^{t} A(\tau)\,d\tau}

Example: Compute the state transition matrix Φ(t, t₀) for the differential equation

\dot{x}(t) = \begin{bmatrix} 1 & e^{2t} \\ 0 & 1 \end{bmatrix} x(t)

We can show that

A(t)\left(\int_{t_0}^{t} A(\tau)\,d\tau\right) = \begin{bmatrix} t - t_0 & \tfrac{1}{2}\big(e^{2t} - e^{2t_0}\big) + (t - t_0)e^{2t} \\ 0 & t - t_0 \end{bmatrix} = \left(\int_{t_0}^{t} A(\tau)\,d\tau\right) A(t)

since

\int_{t_0}^{t} A(\tau)\,d\tau = \begin{bmatrix} t - t_0 & \tfrac{1}{2}\big(e^{2t} - e^{2t_0}\big) \\ 0 & t - t_0 \end{bmatrix}
Hence,

\Phi(t, t_0) = e^{\int_{t_0}^{t} A(\tau)\,d\tau} = I + \int_{t_0}^{t} A(\tau)\,d\tau + \frac{1}{2!}\left(\int_{t_0}^{t} A(\tau)\,d\tau\right)^2 + \frac{1}{3!}\left(\int_{t_0}^{t} A(\tau)\,d\tau\right)^3 + \cdots

and, evaluating the series entry by entry,

\Phi_{11}(t, t_0) = 1 + (t - t_0) + \frac{1}{2!}(t - t_0)^2 + \frac{1}{3!}(t - t_0)^3 + \cdots = e^{t - t_0}

\Phi_{21}(t, t_0) = 0

\Phi_{22}(t, t_0) = \Phi_{11}(t, t_0) = e^{t - t_0}

Finally,

\Phi_{12}(t, t_0) = \frac{1}{2}\big(e^{2t} - e^{2t_0}\big)\left[1 + (t - t_0) + \frac{1}{2!}(t - t_0)^2 + \cdots\right] = \frac{1}{2}\big(e^{2t} - e^{2t_0}\big)\, e^{t - t_0} = \frac{1}{2}\big(e^{3t - t_0} - e^{t + t_0}\big)

Hence, the state transition matrix is given by

\Phi(t, t_0) = \begin{bmatrix} e^{t - t_0} & \tfrac{1}{2}\big(e^{3t - t_0} - e^{t + t_0}\big) \\ 0 & e^{t - t_0} \end{bmatrix}

We can see that the norm of the state transition matrix blows up as time t goes to infinity; therefore, the system is unstable.
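This closed-form Φ(t, t₀) can be verified by integrating the matrix differential equation Ṁ = A(t)M, M(t₀) = I, with a fixed-step RK4 scheme and comparing at the endpoint:

```python
import numpy as np

def A(t):
    # the example system matrix
    return np.array([[1.0, np.exp(2*t)], [0.0, 1.0]])

def Phi_closed(t, t0):
    # closed-form state transition matrix derived above
    return np.array([[np.exp(t - t0), 0.5*(np.exp(3*t - t0) - np.exp(t + t0))],
                     [0.0, np.exp(t - t0)]])

# integrate M' = A(t) M, M(t0) = I with fixed-step RK4
t0, t1, n = 0.2, 1.0, 4000
M, t, h = np.eye(2), t0, (t1 - t0)/n
for _ in range(n):
    k1 = A(t) @ M
    k2 = A(t + h/2) @ (M + h/2*k1)
    k3 = A(t + h/2) @ (M + h/2*k2)
    k4 = A(t + h) @ (M + h*k3)
    M = M + h/6*(k1 + 2*k2 + 2*k3 + k4)
    t += h
```

The integrated M matches Φ(t₁, t₀) from the closed form to the integrator's accuracy.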
Theorem: A(t) and \int_{t_0}^{t} A(\tau)\,d\tau commute if

1. A(·) is constant;
2. A(t) = α(t)M, where α(·): ℝ → ℝ and M is a constant matrix;
3. A(t) = \sum_{i=1}^{k} \alpha_i(t) M_i, where αᵢ(·): ℝ → ℝ and the Mᵢ's are constant matrices such that MᵢMⱼ = MⱼMᵢ for all i, j.

Proof:

(1) If A(·) is a constant matrix, i.e., A(·) = A, then

A(t)\int_{t_0}^{t} A\,d\tau = A \cdot A(t - t_0) = A^2(t - t_0) \quad \text{and} \quad \left(\int_{t_0}^{t} A\,d\tau\right) A(t) = A(t - t_0) \cdot A = A^2(t - t_0)

(2) If A(t) = α(t)M, then

A(t)\int_{t_0}^{t} A(\tau)\,d\tau = \alpha(t)M \left(\int_{t_0}^{t}\alpha(\tau)\,d\tau\right) M = \alpha(t)\left(\int_{t_0}^{t}\alpha(\tau)\,d\tau\right) M^2

But,

\left(\int_{t_0}^{t} A(\tau)\,d\tau\right) A(t) = \left(\int_{t_0}^{t}\alpha(\tau)\,d\tau\right) M\, \alpha(t) M = \alpha(t)\left(\int_{t_0}^{t}\alpha(\tau)\,d\tau\right) M^2

(3) If A(t) = \sum_{i=1}^{k} \alpha_i(t) M_i, then

A(t)\int_{t_0}^{t} A(\tau)\,d\tau = \left(\sum_{i=1}^{k}\alpha_i(t)M_i\right)\left(\sum_{j=1}^{k}\int_{t_0}^{t}\alpha_j(\tau)\,d\tau\, M_j\right) = \sum_{i=1}^{k}\sum_{j=1}^{k} \alpha_i(t)\left(\int_{t_0}^{t}\alpha_j(\tau)\,d\tau\right) M_i M_j

But,

\left(\int_{t_0}^{t} A(\tau)\,d\tau\right) A(t) = \left(\sum_{j=1}^{k}\int_{t_0}^{t}\alpha_j(\tau)\,d\tau\, M_j\right)\left(\sum_{i=1}^{k}\alpha_i(t)M_i\right) = \sum_{j=1}^{k}\sum_{i=1}^{k} \left(\int_{t_0}^{t}\alpha_j(\tau)\,d\tau\right)\alpha_i(t)\, M_j M_i
and, if MᵢMⱼ = MⱼMᵢ for all i, j (i ≠ j), then

A(t)\int_{t_0}^{t} A(\tau)\,d\tau = \left(\int_{t_0}^{t} A(\tau)\,d\tau\right) A(t)

Corollary: If A(t) satisfies condition (3) of the previous theorem, then

\Phi(t, t_0) = \prod_{i=1}^{k} e^{\int_{t_0}^{t} \alpha_i(\tau)\,d\tau\, M_i}

Proof: Since A(t) and \int_{t_0}^{t} A(\tau)\,d\tau commute when A(t) satisfies condition (3), then

\Phi(t, t_0) = e^{\int_{t_0}^{t} A(\tau)\,d\tau}

But,

A(t) = \sum_{i=1}^{k} \alpha_i(t) M_i \;\Rightarrow\; \Phi(t, t_0) = e^{\int_{t_0}^{t} \sum_{i=1}^{k} \alpha_i(\tau) M_i\, d\tau} = e^{\sum_{i=1}^{k} \left(\int_{t_0}^{t} \alpha_i(\tau)\,d\tau\right) M_i}
or, since the Mᵢ's commute,

\Phi(t, t_0) = e^{\int_{t_0}^{t}\alpha_1(\tau)\,d\tau\, M_1}\, e^{\int_{t_0}^{t}\alpha_2(\tau)\,d\tau\, M_2} \cdots e^{\int_{t_0}^{t}\alpha_k(\tau)\,d\tau\, M_k} = \prod_{i=1}^{k} e^{\int_{t_0}^{t}\alpha_i(\tau)\,d\tau\, M_i}

Def. Any n×n matrix M(t) satisfying the matrix differential equation

\dot{M}(t) = A(t)M(t), \quad M(t_0) = M_0, \quad \det(M_0) \ne 0

is a fundamental matrix of solutions.

Theorem: If det(M₀) ≠ 0, then det(M(t)) ≠ 0 for all t ∈ ℝ₊.

Proof: (By contradiction.) Suppose there exists t₁ ∈ ℝ₊ such that det(M(t₁)) = 0. Let v = [v₁ v₂ ⋯ vₙ]ᵀ ≠ 0 be such that M(t₁)v = 0, and let x(t) = M(t)v be the solution to the vector differential equation ẋ(t) = A(t)x(t) with x(t₁) = 0. Notice also that z(·) ≡ 0 is a solution to ż(t) = A(t)z(t), z(t₁) = 0. By the uniqueness theorem we conclude that x(t) = z(t) everywhere A(t) is piecewise continuous. But then z(t₀) = x(t₀) = M(t₀)v = 0 ⇒ det(M₀) = 0, which is a contradiction. Hence, det(M(t)) ≠ 0 for all t, i.e., M(t) is nonsingular for all t.
Def. Let M(t) be any fundamental matrix of ẋ(t) = A(t)x(t). Then, for all t ∈ ℝ₊, the state transition matrix of ẋ(t) = A(t)x(t) is given by Φ(t, t₀) = M(t)M⁻¹(t₀).

Theorem (Semigroup Property): For all t₁, t₀ and t, we have Φ(t, t₀) = Φ(t, t₁)Φ(t₁, t₀).

Proof: We know from the existence and uniqueness theorem that

x(t) = Φ(t, t₀)x(t₀) for any t, t₀   (a)
x(t₁) = Φ(t₁, t₀)x(t₀) for any t₁, t₀   (b)
x(t) = Φ(t, t₁)x(t₁) for any t, t₁   (c)

are solutions to the differential equation ẋ(t) = A(t)x(t) with initial conditions x(t₀) and x(t₁). But, from (c) and (b),

x(t) = Φ(t, t₁)x(t₁) = Φ(t, t₁)Φ(t₁, t₀)x(t₀)   (d)

Comparing (a) and (d) leads us to conclude that

Φ(t, t₀) = Φ(t, t₁)Φ(t₁, t₀) for any t, t₁ and t₀.

Theorem (The Inverse Property): Φ(t, t₀) is nonsingular for all t, t₀ ∈ ℝ₊ and Φ⁻¹(t, t₀) = Φ(t₀, t).
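Both properties are easy to exercise numerically with the closed-form Φ of the earlier example system ẋ = [[1, e^{2t}], [0, 1]]x:

```python
import numpy as np

def Phi(t, t0):
    # closed-form state transition matrix of x' = [[1, exp(2t)], [0, 1]] x
    return np.array([[np.exp(t - t0), 0.5*(np.exp(3*t - t0) - np.exp(t + t0))],
                     [0.0, np.exp(t - t0)]])

t0, t1, t = 0.1, 0.7, 1.3
# semigroup: Phi(t, t0) = Phi(t, t1) Phi(t1, t0)
semigroup_err = np.linalg.norm(Phi(t, t0) - Phi(t, t1) @ Phi(t1, t0))
# inverse: Phi(t, t0)^{-1} = Phi(t0, t)
inverse_err = np.linalg.norm(np.linalg.inv(Phi(t, t0)) - Phi(t0, t))
```

Both residuals are zero up to floating-point round-off.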
Proof: Since Φ(t, t₀) is a fundamental matrix of ẋ(t) = A(t)x(t), it is nonsingular for all t, t₀ ∈ ℝ₊.

Now, from the semigroup property we know that for arbitrary t₀, t₁, t ∈ ℝ₊, Φ(t, t₀) = Φ(t, t₁)Φ(t₁, t₀). For t₀ = t, we get

\Phi(t, t) = I = \Phi(t, t_1)\Phi(t_1, t) \;\Rightarrow\; \Phi^{-1}(t, t_1) = \Phi(t_1, t)

and since t₁ is arbitrary we have that Φ⁻¹(t, t₀) = Φ(t₀, t).

Theorem (Liouville formula):

\det[\Phi(t, t_0)] = e^{\int_{t_0}^{t} \mathrm{tr}(A(\tau))\,d\tau}

Consider now the linear, time-varying dynamic system modeled by

\dot{x}(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)

with x(t₀) = x₀.

Theorem: The solution to the state equation is given by

x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\,d\tau
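The variation-of-constants formula can be checked numerically on a small LTV example (A(t) = [[0, 0], [t, 0]], B = I, u ≡ [1, 0]ᵀ; this test system and its transition matrix Φ(t, τ) = [[1, 0], [(t² − τ²)/2, 1]] are illustrative choices, not from the notes):

```python
import numpy as np

def Phi(t, tau):
    # transition matrix of x' = [[0, 0], [t, 0]] x (illustrative test system)
    return np.array([[1.0, 0.0], [(t**2 - tau**2)/2, 1.0]])

t0, tf = 0.0, 2.0
x0 = np.array([1.0, -1.0])
u = lambda t: np.array([1.0, 0.0])    # constant input, B = I

# x(tf) from the variation-of-constants formula (trapezoid rule for the integral)
taus = np.linspace(t0, tf, 2001)
h_tau = taus[1] - taus[0]
vals = np.array([Phi(tf, tau) @ u(tau) for tau in taus])
integral = (vals[:-1] + vals[1:]).sum(axis=0) * h_tau / 2
x_formula = Phi(tf, t0) @ x0 + integral

# x(tf) by direct RK4 integration of x' = A(t)x + u(t)
f = lambda t, x: np.array([[0.0, 0.0], [t, 0.0]]) @ x + u(t)
x, t, n = x0.copy(), t0, 2000
h = (tf - t0)/n
for _ in range(n):
    k1 = f(t, x); k2 = f(t + h/2, x + h/2*k1)
    k3 = f(t + h/2, x + h/2*k2); k4 = f(t + h, x + h*k3)
    x = x + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
x_direct = x
```

Both routes give the exact value x(2) = [3, 11/3]ᵀ, which can also be obtained by hand since ẋ₁ = 1 and ẋ₂ = t·x₁.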
where

1. Φ(t, t₀)x₀ is the zero-input state response, and
2. \int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\,d\tau is the zero-state state response.

Proof: At t = t₀, the solution to the differential equation is given by

x(t_0) = \Phi(t_0, t_0)x_0 + \int_{t_0}^{t_0} \Phi(t_0, \tau)B(\tau)u(\tau)\,d\tau = x_0

Now,

\frac{d}{dt}\left[\Phi(t, t_0)x_0 + \int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\,d\tau\right] = \dot{\Phi}(t, t_0)x_0 + \Phi(t, t)B(t)u(t) + \int_{t_0}^{t} \frac{\partial \Phi(t, \tau)}{\partial t} B(\tau)u(\tau)\,d\tau

since

\frac{d}{dt}\int_{t_0}^{t} f(t, \tau)\,d\tau = f(t, t) + \int_{t_0}^{t} \frac{\partial f(t, \tau)}{\partial t}\,d\tau
Hence,

\frac{d}{dt}\left[\Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau\right] = A(t)\Phi(t, t_0)x_0 + B(t)u(t) + A(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau
= A(t)\left[\Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau\right] + B(t)u(t) = A(t)x(t) + B(t)u(t)

The complete response is then given by

y(t) = C(t)\Phi(t, t_0)x_0 + C(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t)

Let us now consider the time-invariant case, i.e., A(t) = A, B(t) = B, C(t) = C and D(t) = D, where A, B, C and D are constant matrices.

Theorem: The state transition matrix of the time-invariant state model is

\Phi(t, t_0) = \Phi(t - t_0, 0) = e^{A(t - t_0)}
Proof: Since A and \int_{t_0}^{t} A\,d\tau = A(t - t_0) commute, we have that

\Phi(t, t_0) = e^{\int_{t_0}^{t} A\,d\tau} = e^{A(t - t_0)} = \Phi(t - t_0, 0)

The complete state and system responses are now given by

x(t) = e^{A(t - t_0)}x_0 + \int_{t_0}^{t} e^{A(t - \tau)}Bu(\tau)\,d\tau
y(t) = Ce^{A(t - t_0)}x_0 + C\int_{t_0}^{t} e^{A(t - \tau)}Bu(\tau)\,d\tau + Du(t)

This follows from the fact that Φ(t, t₀) = M(t)M⁻¹(t₀), since we can always let M(t) = e^{At}, i.e.,

\Phi(t, t_0) = M(t)M^{-1}(t_0) = e^{At}\big(e^{At_0}\big)^{-1} = e^{At}e^{-At_0} = e^{A(t - t_0)}
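For the LTI case the complete response is easy to compute. The sketch below builds the matrix exponential from an eigendecomposition (valid when A is diagonalizable; the system matrices are illustrative) and, for a constant input, uses the closed form ∫₀ᵗ e^{A(t−τ)}Bu dτ = A⁻¹(e^{At} − I)Bu:

```python
import numpy as np

def expm(M):
    # matrix exponential via eigendecomposition (fine when M is diagonalizable)
    d, T = np.linalg.eig(M)
    return (T @ np.diag(np.exp(d)) @ np.linalg.inv(T)).real

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = np.array([1.0])                         # constant input
t = 1.5

# complete state response: zero-input part + zero-state part
x_formula = expm(A*t) @ x0 + np.linalg.inv(A) @ (expm(A*t) - np.eye(2)) @ (B @ u)

# cross-check by RK4 integration of x' = Ax + Bu
f = lambda s, x: A @ x + B @ u
x, s, n = x0.copy(), 0.0, 2000
h = t/n
for _ in range(n):
    k1 = f(s, x); k2 = f(s + h/2, x + h/2*k1)
    k3 = f(s + h/2, x + h/2*k2); k4 = f(s + h, x + h*k3)
    x = x + h/6*(k1 + 2*k2 + 2*k3 + k4); s += h
x_direct = x
```

The matrix-exponential formula and the direct integration agree to the integrator's accuracy.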
Def. If A is an n×n matrix, λ ∈ ℂ, e ∈ ℂⁿ, and the equation

Ae = \lambda e, \quad e \ne 0

is satisfied, then λ is called an eigenvalue of A and e is called an eigenvector of A associated with λ. Also, the eigenvalues of A are the roots of its characteristic polynomial, i.e.,

\Delta_A(\lambda) = \det(\lambda I - A) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n)

The set σ(A) = {λ₁, λ₂, …, λₙ} is called the spectrum of A. The spectral radius of A is the non-negative real number

\rho(A) = \max\{\, |\lambda_i| : \lambda_i \in \sigma(A) \,\}

The right eigenvector eᵢ of A associated with the eigenvalue λᵢ satisfies the equation Aeᵢ = λᵢeᵢ, whereas the left eigenvector wᵢ ∈ ℂⁿ of A associated with λᵢ satisfies the equation wᵢ*A = λᵢwᵢ*, where (·)* designates the complex conjugate transpose of a vector. If λ ∈ σ(A) and λ is complex, then λ* ∈ σ(A). The eigenvectors associated with λ and λ* will be e and e*, respectively.
Example: Find the right eigenvectors of the matrix

A = \begin{bmatrix} -2 & -5 & -5 \\ 1 & -1 & 0 \\ 0 & 1 & 0 \end{bmatrix}

The characteristic polynomial of A is given by

\Delta(\lambda) = \det(\lambda I - A) = \lambda^3 + 3\lambda^2 + 7\lambda + 5 = (\lambda + 1)(\lambda^2 + 2\lambda + 5)

Therefore, its spectrum is described by σ(A) = {−1, −1 − j2, −1 + j2}.

Now,

A\tilde{e}_i = \lambda_i \tilde{e}_i \;\Rightarrow\; (\lambda_i I - A)\tilde{e}_i = 0

or

\begin{bmatrix} \lambda_i + 2 & 5 & 5 \\ -1 & \lambda_i + 1 & 0 \\ 0 & -1 & \lambda_i \end{bmatrix} \tilde{e}_i = 0

For λ₁ = −1, we get
\begin{bmatrix} 1 & 5 & 5 \\ -1 & 0 & 0 \\ 0 & -1 & -1 \end{bmatrix} \tilde{e}_1 = 0

or

e_{11} + 5e_{12} + 5e_{13} = 0
-e_{11} = 0 \;\Rightarrow\; e_{11} = 0
-e_{12} - e_{13} = 0 \;\Rightarrow\; e_{12} = -e_{13}

Let e₁₃ = 1; then e₁₂ = −1, and

\tilde{e}_1 = [\,0 \;\; -1 \;\; 1\,]^T

For λ₂ = −1 − j2,

\begin{bmatrix} 1 - j2 & 5 & 5 \\ -1 & -j2 & 0 \\ 0 & -1 & -1 - j2 \end{bmatrix} \tilde{e}_2 = 0
or

(1 - j2)e_{21} + 5e_{22} + 5e_{23} = 0
-e_{21} - j2\,e_{22} = 0 \;\Rightarrow\; e_{21} = -j2\,e_{22}
-e_{22} - (1 + j2)e_{23} = 0 \;\Rightarrow\; e_{22} = -(1 + j2)e_{23}

Let e₂₃ = 1; then e₂₂ = −1 − j2, e₂₁ = −j2(−1 − j2) = −4 + j2, and

\tilde{e}_2 = [\,-4 + j2 \;\; -1 - j2 \;\; 1\,]^T

Finally,

\tilde{e}_3 = \tilde{e}_2^{\,*} = [\,-4 - j2 \;\; -1 + j2 \;\; 1\,]^T

Theorem: Let A be an n×n constant matrix. Then A is diagonalizable if and only if there is a set of n linearly independent vectors, each of which is an eigenvector of A.

Proof: If A has n linearly independent eigenvectors e₁, e₂, …, eₙ, form the nonsingular matrix T = [e₁ e₂ ⋯ eₙ].
91
Now, T
-1
AT = T
-1
[Ae
1
Ae
2
Ae
n
] = T
-1
[
1
e
1
2
e
2
n
e
n
]
= T
-1
[e
1
e
2
e
n
]D = T
-1
TD = D
where D = diag [
1
,
2
, ,
n
], and
i
, i = 1, 2, , n are the eigenvalues of A.
Conversely, suppose there exists a matrix T such that T
-1
AT = D is diagonal. Then
AT = TD.
Let T = [t
1
t
2
t
n
], then
AT = [At
1
At
2
At
n
] = [t
1
d
11
t
2
d
22
t
n
d
nn
] = TD At
i
= d
ii
t
i
, which implies
that the i
th
column of T is an eigenvector of A associated with the eigenvalue d
ii
.
Since T is nonsingular, there are n linearly independent eigenvectors.
Now, if A is diagonalizable, then e
At
= Te
Dt
T
-1
because
( ) ( ) + + + + = =
3
3
1 2
2
1 1
! 3
1
! 2
1
1
t TDT t TDT t TDT I e e
t TDT At
( )( ) ( )( )( )
1 1 1 2 1 1 1 3
1 1
2! 3!
I TDT t TDT TDT t TDT TDT TDT t
= + + + +
+ + + + =
3 1 3 2 1 2 1
! 3
1
! 2
1
t T TD t T TD t TDT I
2 2 3 3 1 1
1 1
2! 3!
Dt
T I Dt D t D t T Te T
(
= + + + + =
(
92
Example: For the given matrix, compute e
At
.
We already know that o(A) = {-1, -1 - j2, -1 + j2}. Now,
The inverse of T is
(
(
(
=
0 1 0
0 1 1
5 5 2
A
(
(
(
+
+
=
1 1 1
2 1 2 1 1
2 4 2 4 0
j j
j j
T
(
(
(
+
+ =
2 1 2 1 1
2 1 2 1 1
10 2 2
8
1
1
j j
j j T
93
Therefore,

D = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 - j2 & 0 \\ 0 & 0 & -1 + j2 \end{bmatrix}

which implies that

e^{Dt} = \begin{bmatrix} e^{-t} & 0 & 0 \\ 0 & e^{(-1 - j2)t} & 0 \\ 0 & 0 & e^{(-1 + j2)t} \end{bmatrix}

Finally, e^{At} = T e^{Dt} T^{-1}.
Proposition: Suppose D is a block-diagonal matrix with square blocks Dᵢ, i = 1, 2, …, n, i.e.,

D = \begin{bmatrix} D_1 & 0 & \cdots & 0 \\ 0 & D_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & D_n \end{bmatrix}

Then,

e^{Dt} = \begin{bmatrix} e^{D_1 t} & 0 & \cdots & 0 \\ 0 & e^{D_2 t} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{D_n t} \end{bmatrix}

Example: For the same A matrix of the previous example, compute e^{At}. Let

T = [\,\tilde{e}_1 \;\; \mathrm{Re}\{\tilde{e}_2\} \;\; \mathrm{Im}\{\tilde{e}_2\}\,] = \begin{bmatrix} 0 & -4 & 2 \\ -1 & -1 & -2 \\ 1 & 1 & 0 \end{bmatrix}
then,

T^{-1} = \frac{1}{4}\begin{bmatrix} 1 & 1 & 5 \\ -1 & -1 & -1 \\ 0 & -2 & -2 \end{bmatrix}

and

D = T^{-1}AT = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 2 \\ 0 & -2 & -1 \end{bmatrix} = \begin{bmatrix} D_1 & 0 \\ 0 & D_2 \end{bmatrix}

which implies that

e^{Dt} = \begin{bmatrix} e^{D_1 t} & 0 \\ 0 & e^{D_2 t} \end{bmatrix}

But,

e^{D_1 t} = e^{-t} \quad \text{and} \quad e^{D_2 t} = e^{-t}\begin{bmatrix} \cos 2t & \sin 2t \\ -\sin 2t & \cos 2t \end{bmatrix}

Finally,

e^{At} = T e^{Dt} T^{-1} = e^{-t}\, T \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos 2t & \sin 2t \\ 0 & -\sin 2t & \cos 2t \end{bmatrix} T^{-1}