
EE547: Fall 2018

Lecture 6: Introduction to Linear Dynamical Systems and ODE Review


Lecturer: L.J. Ratliff

Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
They may be distributed outside this class only with the permission of the Instructor.

6.1 Linear Dynamical System Representation

We study dynamical systems of the form

ẋ(t) = A(t)x(t) + B(t)u(t) (state DE)


y(t) = C(t)x(t) + D(t)u(t) (read-out eqn.)

where as usual x(t) ∈ Rn , u(t) ∈ Rm , and y(t) ∈ Rp . A(·), B(·), C(·), D(·) are matrix-valued functions on
R+ , assumed to be piecewise continuous.

Definition 1.1 (Piecewise Continuity.) A function or curve is piecewise continuous if it is continuous on all but a finite number of points in any compact (closed and bounded in R) interval.

The input function u(·) ∈ U where

U = {u(·)| u : R+ → Rm , u(·) is piecewise continuous} = PC(R+ , Rm )

Let us define the state transition map and the response map:

x(t) = s(t, t0 , x0 , u)
y(t) = ρ(t, t0 , x0 , u)

We now study the state transition map s(t, t0 , x0 , u) as the unique solution to the state DE given by

ẋ(t) = A(t)x(t) + B(t)u(t)

for some initial condition (t0 , x0 ) ∈ R+ × Rn , x(t0 ) = x0 and u(·) ∈ U . Under these conditions (u.t.c.), the
above reduces to
ẋ(t) = f (x(t), t), t ∈ R, x(t0 ) = x0
where the right-hand side is a given function

f : Rn × R+ → Rn : (x, t) 7→ f (x, t) = A(t)x(t) + B(t)u(t).
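
To make these objects concrete, here is a minimal Python sketch (not from the notes; this particular A, B, and u are made-up illustrative choices) of the right-hand side f(x, t) = A(t)x + B(t)u(t) with n = 2, m = 1:

```python
import numpy as np

# Illustrative time-varying system matrices (assumed, not from the notes).
def A(t):
    return np.array([[0.0, 1.0], [-1.0, -0.1 * (1.0 + np.sin(t))]])

def B(t):
    return np.array([[0.0], [1.0]])

def u(t):
    return np.array([np.cos(t)])      # a (piecewise) continuous input

def f(x, t):
    # the right-hand side of the state DE
    return A(t) @ x + B(t) @ u(t)

print(f(np.array([1.0, 0.0]), 0.0))   # evaluate the vector field at (x0, t0)
```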

6.2 Solutions to ODEs: The Practice

Reference: A good reference for basic ODEs is Kreyszig, ’Advanced Engineering Mathematics’. Chapter
1 is basic ODE stuff. It has loads of examples.


Let’s just review how to solve a simple ordinary differential equation. The kinds of ODEs we will deal with in this class largely fall in the class of so-called separable equations of the form

dx = f(x)g(t) dt

which are solved by separating variables and integrating:

∫ 1/f(x) dx = ∫ g(t) dt + C

Example 2.1 Consider the following ODE:

(t² − 1) dx/dt + 2x = 0

We can solve this using the above technique. Indeed,

(t² − 1) dx/dt = −2x =⇒ (t² − 1) dx = −2x dt

=⇒ −∫ 1/(2x) dx = ∫ 1/(t² − 1) dt

=⇒ −(1/2) ln|x| = (1/2) ln|(t − 1)/(t + 1)| + C

=⇒ |x| = exp(ln|(t + 1)/(t − 1)|) exp(C̃)

=⇒ x = k (t + 1)/(t − 1)
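
As a quick sanity check (not part of the original notes), sympy can verify that this candidate solution satisfies the ODE:

```python
import sympy as sp

# Verify that x(t) = k (t + 1)/(t - 1) satisfies (t^2 - 1) x' + 2x = 0.
t, k = sp.symbols('t k')
x = k * (t + 1) / (t - 1)
residual = sp.simplify((t**2 - 1) * sp.diff(x, t) + 2 * x)
print(residual)  # 0
```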

Definition 2.1 (Linear ODE) A first order ODE is linear if it can be written as

x′ + p(t)x = r(t) (6.1)

and it is homogeneous if r(t) = 0.

Let us find a formula for the general solution of a linear ODE on some interval I, assuming p and r are continuous (we will get to the formal requirements for existence and uniqueness next). For the homogeneous equation

x′ + p(t)x = 0

this is very simple. Indeed, separating variables we have

dx/x = −p(t) dt =⇒ ln|x| = −∫ p(t) dt + c∗

and by taking the exponential of both sides we get

x(t) = c exp(−∫ p(t) dt)

where c = ± exp(c∗) according as x ≷ 0.


Now, we solve the non-homogeneous equation given this form of solution. It turns out that the general linear ODE (6.1) has an integrating factor depending only on t. First, let us recall integrating factors and the notion of exactness.

Definition 2.2 (Exactness.) A first-order differential equation of the form

P (t, x)dt + Q(t, x)dx = 0



is called an exact differential equation if the differential form

P (t, x)dt + Q(t, x)dx

is exact—that is, there exists some function h such that

dh = (∂h/∂t) dt + (∂h/∂x) dx

with ∂h/∂t = P(t, x) and ∂h/∂x = Q(t, x).

Fact 2.1 A necessary and sufficient condition for exactness is that

∂P/∂x = ∂Q/∂t

since ∂P/∂x = ∂²h/∂x∂t and ∂Q/∂t = ∂²h/∂t∂x, and for sufficiently smooth functions—here we just need h ∈ C²(Rⁿ × R, R)—the second-order mixed partial derivatives are equal. Equality of mixed partials is Schwarz's theorem (or Clairaut's theorem on equality of mixed partials).

The niceness of exactness comes from the fact that if the ODE is exact then
dh = 0
can be integrated to directly get the general solution
h(t, x) = c
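
A small sympy check of the exactness criterion (illustrative, not from the notes; this particular h is made up): starting from a known potential h, the differential form P dt + Q dx built from its partials is exact, and the general solution is the implicit curve h(t, x) = c.

```python
import sympy as sp

# Take h(t, x) = t*x**2 + t**2, so P = h_t and Q = h_x; then P dt + Q dx = dh.
t, x = sp.symbols('t x')
h = t * x**2 + t**2
P = sp.diff(h, t)   # x**2 + 2*t
Q = sp.diff(h, x)   # 2*t*x
# Exactness criterion: dP/dx - dQ/dt should vanish identically.
print(sp.simplify(sp.diff(P, x) - sp.diff(Q, t)))  # 0  => exact
# The general solution is the level set h(t, x) = c.
```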

We can reduce an ODE to an exact form if there exists a so-called integrating factor.

Definition 2.3 (Integrating Factor.) If we have an ODE

P (t, x)dt + Q(t, x)dx = 0 (6.2)

and there exists a function F —which in general will be a function of both x and t—such that

F P dt + F Qdx = 0

is exact, then F is an integrating factor.

Then due to exactness, we have that

∂/∂x (F P) = ∂/∂t (F Q)

and by the product rule

Fx P + F Px = Ft Q + F Qt

If we get lucky and the integrating factor only depends on one variable, say t, then Fx ≡ 0 and Ft = F′ = dF/dt, so that

F Px = F′Q + F Qt

Dividing by F Q, we get that

(1/F) dF/dt = (1/Q) (∂P/∂x − ∂Q/∂t) (6.3)

It turns out that if (6.2) is such that the right-hand side of the above depends only on t, then (6.2) has an integrating factor F = F(t), which is obtained by integrating the above and taking exponentials of both sides to get

F(t) = exp(∫ R(t) dt)

where

R(t) = (1/Q) (∂P/∂x − ∂Q/∂t)

Now, coming back to our problem, we first write (6.1) as

(px − r)dt + dx = 0 ⇐⇒ P dt + Qdx = 0, P = px − r, Q = 1

Hence (6.3) becomes

(1/F) dF/dt = p(t)

Since this depends only on t, the ODE has an integrating factor F(t), which we obtain as

F(t) = exp(∫ p(t) dt)

Multiplying our ODE by this F, we get

exp(∫ p(t) dt)(x′ + px) = (exp(∫ p(t) dt) x)′ = exp(∫ p(t) dt) r

Integrating, we get

exp(∫ p(t) dt) x = ∫ exp(∫ p(t) dt) r dt + c

Let h = ∫ p(t) dt and divide both sides by exp(h) to get

x(t) = exp(−h)(∫ exp(h) r dt + c)

Example 2.2 Consider the following ODE:

dx/dt + 3x/t = e^t/t³

It is of the form

dx/dt + P x = Q

so the integrating factor is

exp(∫ P dt) = exp(∫ 3/t dt) = exp(3 ln t) = exp(ln t³) = t³

Multiplying by the integrating factor we get

t³ dx/dt + 3t²x = e^t

that is, (t³x)′ = e^t. Now, we integrate to get

t³x = e^t + c

where c is some constant (eventually we will see this is related to the initial condition). Thus,

x = (e^t + c)/t³
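
Again we can sanity-check this with sympy (not part of the original notes):

```python
import sympy as sp

# Verify that x(t) = (exp(t) + c)/t**3 satisfies x' + 3x/t = exp(t)/t**3.
t, c = sp.symbols('t c')
x = (sp.exp(t) + c) / t**3
residual = sp.simplify(sp.diff(x, t) + 3 * x / t - sp.exp(t) / t**3)
print(residual)  # 0
```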

6.3 Solutions to ODEs: The Theory

Consider the general ODE


ẋ = f (x, t)

The function f must satisfy two assumptions:


(A1) Let D be a set in R+ which contains at most a finite number of points per unit interval. D is the set of possible discontinuity points; it may be empty. For each fixed x ∈ Rn, the function t ∈ R+\D → f(x, t) ∈ Rn is continuous, and for any τ ∈ D, the left-hand and right-hand limits f(x, τ−) and f(x, τ+), respectively, are finite vectors in Rn.
(A2) There is a piecewise continuous function k(·) : R+ → R+ such that

‖f(ξ, t) − f(ξ′, t)‖ ≤ k(t)‖ξ − ξ′‖ ∀t ∈ R+, ∀ξ, ξ′ ∈ Rn

This is called a global Lipschitz condition because it must hold for all ξ and ξ′.

DIY Exercise. Show that, given R > 0, if there is a PC function k(·) s.t.

‖D₁f(x, t)‖ ≤ k(t), ∀x ∈ B_R(0), ∀t ∈ R+

then the Lipschitz condition in (A2) holds for all ξ, ξ′ ∈ B_R(0), t ∈ R+.

Proposition 3.1 Let f : R → R be continuous and differentiable. Then

f Lipschitz ⇐⇒ ∃K s.t. ∀x ∈ R, |f′(x)| ≤ K

To prove this, we need the mean value theorem.

Theorem 3.1 (Mean Value Theorem.) If a function f is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), then there exists a point c in (a, b) such that

f′(c) = (f(b) − f(a))/(b − a)

Proof. (⇐=). Suppose the derivative is bounded by some K. By the mean value theorem we have that for x, y ∈ R, there exists c ∈ R such that

f(x) − f(y) = (x − y)f′(c)

so that

|f(x) − f(y)| = |(x − y)f′(c)| ≤ K|x − y|

(=⇒). Suppose f is K-Lipschitz, so that |f(x) − f(y)| ≤ K|x − y| for all x, y ∈ R. Then

|f(x) − f(y)|/|x − y| ≤ K

and taking the limit y → x gives |f′(x)| ≤ K.

Note: Lipschitz functions do not have to be differentiable. They are, however, differentiable almost everywhere, i.e., except on a set of measure zero (Rademacher's theorem).

Proposition 3.2 Given R > 0, if there is a PC function k(·) s.t.

‖D₁f(x, t)‖ ≤ k(t), ∀x ∈ B_R(0), ∀t ∈ R+

then the Lipschitz condition in (A2) holds for all ξ, ξ′ ∈ B_R(0), t ∈ R+.

Examples.

1. The function

f(x) = √(x² + 5)

defined for all real numbers is Lipschitz continuous with the Lipschitz constant K = 1, because it is everywhere differentiable and the absolute value of the derivative is bounded above by 1. Indeed,

f′(x) = x(x² + 5)^(−1/2)

so that

|f′(x)| = |x(x² + 5)^(−1/2)| = |x|/(x² + 5)^(1/2)

Claim:

|x|/(x² + 5)^(1/2) ≤ 1

This is true because

|x| = (x²)^(1/2) ≤ (x² + 5)^(1/2)

2. The functions sin(x) and cos(x) are Lipschitz with constant K = 1 since their derivatives are bounded by 1.

3. DIY: Is the function sin(x²) Lipschitz? What about √x?
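
A numeric exploration of examples 1 and 3 (a sketch, not from the notes) by bounding the derivatives on a grid:

```python
import numpy as np

xs = np.linspace(-100.0, 100.0, 200001)

# Example 1: |f'(x)| = |x|/sqrt(x^2 + 5) stays strictly below 1.
print(np.max(np.abs(xs) / np.sqrt(xs**2 + 5)))    # < 1, so K = 1 works

# Example 3: |d/dx sin(x^2)| = |2x cos(x^2)| grows without bound, so sin(x^2)
# is not globally Lipschitz (though it is Lipschitz on any bounded interval).
print(np.max(np.abs(2.0 * xs * np.cos(xs**2))))   # grows with interval width
```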

Theorem 3.2 (Fundamental Theorem of Existence and Uniqueness.) Consider

ẋ(t) = f (x, t)

where initial condition (t0 , x0 ) is such that x(t0 ) = x0 . Suppose f satisfies (A1) and (A2). Then,
1. For each (t0 , x0 ) ∈ R+ × Rn there exists a continuous function φ : R+ → Rn such that

φ(t0 ) = x0

and
φ̇(t) = f (φ(t), t), ∀t ∈ R+ \D

2. This function is unique. The function φ is called the solution through (t0 , x0 ) of the differential
equation.

Note that if the Lipschitz condition does not hold, it may be that the solution cannot be continued beyond a certain time. E.g., consider

ξ̇(t) = ξ²(t), ξ(0) = 1/c, c ≠ 0

where ξ : R+ → R. This differential equation has the solution

ξ(t) = 1/(c − t)

on t ∈ (−∞, c). As t → c, ‖ξ(t)‖ → ∞. This is called finite escape time at c.
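
A short scipy experiment (illustrative, not from the notes) shows the integrator grinding to a halt just before the escape time:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate xi' = xi^2, xi(0) = 1/c with c = 1. The exact solution 1/(c - t)
# blows up at t = c = 1, and the adaptive solver cannot continue past it.
c = 1.0
sol = solve_ivp(lambda t, xi: xi**2, (0.0, 2.0), [1.0 / c], rtol=1e-8)
print(sol.status, sol.t[-1])   # solver fails near t = 1 (the escape time)
print(sol.y[0, -1])            # |xi| has grown very large by then
```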


We need the following notion of a Cauchy sequence—a convergence-like condition stated purely in terms of the sequence itself, without reference to a limit point.

Definition 3.1 (Cauchy sequence) A sequence (v_i)_{i=1}^∞ in (V, F, ‖·‖) is said to be a Cauchy sequence in V iff for any ε > 0, ∃ N_ε ∈ N such that for any pair m, n > N_ε,

‖v_m − v_n‖ < ε

And, we need the notion of a Banach space.

Definition 3.2 (Banach Space.) A Banach space X is a normed linear space that is complete with
respect to that norm—that is, every Cauchy sequence {xn } in X converges in X.

Proof of Existence. (sketch) Construct a sequence of continuous functions

x_{m+1}(t) = x0 + ∫_{t0}^t f(x_m(τ), τ) dτ

where x_0(t0) = x0 and m = 0, 1, 2, . . .. The idea is to show that the sequence of continuous functions {x_m(·)}_{m=0}^∞ converges to (i) a continuous function φ : R+ → Rn which is (ii) a solution of ẋ = f(x, t), x(t0) = x0.

To show (i), we show that {x_m(·)}_{m=0}^∞ is a Cauchy sequence in the Banach space (C([t1, t2], Rn), R, ‖·‖∞), where t0 ∈ [t1, t2].

To show (ii)—i.e. that φ(·) is a solution of the DE—recall that

x_{m+1}(t) = x0 + ∫_{t0}^t f(x_m(τ), τ) dτ

By the above argument, as m → ∞, x_m(·) → φ(·) uniformly on [t1, t2]. Hence, it suffices to show that

∫_{t0}^t f(x_m(τ), τ) dτ → ∫_{t0}^t f(φ(τ), τ) dτ, as m → ∞
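
The construction above is exactly the Picard iteration, and we can run it numerically. A minimal sketch (not from the notes; the scalar ODE ẋ = ax is an illustrative instance with known solution exp(at)):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Picard iteration x_{m+1}(t) = x0 + int_{t0}^t f(x_m(tau), tau) dtau
# for x' = a*x, x(0) = 1, whose exact solution is exp(a*t).
a, x0, t0 = -2.0, 1.0, 0.0
ts = np.linspace(t0, 1.0, 1001)

x_m = np.full_like(ts, x0)                    # x_0(.) = x0, the constant guess
for m in range(30):
    # f(x, t) = a*x; cumulative integral from t0 to each grid point
    x_m = x0 + cumulative_trapezoid(a * x_m, ts, initial=0.0)

print(np.max(np.abs(x_m - np.exp(a * ts))))   # uniformly small on [0, 1]
```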

To prove uniqueness, we need the so-called Bellman-Gronwall Lemma.

Lemma 3.1 (Bellman-Gronwall). Let u(·), k(·) be real-valued, PC functions on R+ and assume u(·), k(·) > 0 on R+. Suppose c1 > 0, t0 ∈ R+. If

u(t) ≤ c1 + ∫_{t0}^t k(τ)u(τ) dτ

then

u(t) ≤ c1 exp(∫_{t0}^t k(τ) dτ)

Proof. WLOG, assume t > t0. Let U(t) = c1 + ∫_{t0}^t k(τ)u(τ) dτ. Thus,

u(t) ≤ U(t)

Multiply both sides of

u(t) ≤ c1 + ∫_{t0}^t k(τ)u(τ) dτ

by the non-negative function

k(t) exp(−∫_{t0}^t k(τ) dτ)

resulting in

d/dt [U(t) exp(−∫_{t0}^t k(τ) dτ)] ≤ 0

Integrating from t0 to t gives U(t) exp(−∫_{t0}^t k(τ) dτ) ≤ U(t0) = c1, and thus

u(t) ≤ U(t) ≤ c1 exp(∫_{t0}^t k(τ) dτ)
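
A numeric illustration of the lemma (a sketch, not from the notes; a(t) and k(t) are made-up choices with |a(t)| ≤ k(t)): if ẋ = a(t)x, then u(t) = |x(t)| satisfies the integral inequality with c1 = |x(t0)|, so the Gronwall bound should hold along the trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

# If x' = a(t)x with |a(t)| <= k(t), then u(t) = |x(t)| satisfies
# u(t) <= u(0) + int_0^t k(tau)u(tau) dtau, and Bellman-Gronwall predicts
# u(t) <= u(0) exp(int_0^t k(tau) dtau).
k = lambda t: 1.0 + 0.5 * np.sin(t)
a = lambda t: k(t) * np.cos(3.0 * t)              # |a(t)| <= k(t)
sol = solve_ivp(lambda t, x: a(t) * x, (0.0, 5.0), [1.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
ts = np.linspace(0.0, 5.0, 501)
u = np.abs(sol.sol(ts)[0])
K = cumulative_trapezoid(k(ts), ts, initial=0.0)  # int_0^t k(tau) dtau
print(bool(np.all(u <= np.exp(K) + 1e-6)))        # True: the bound holds
```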

Proof of Uniqueness Idea: Invoke Bellman-Gronwall.

Let’s consider an example.

Example. Consider

ẋ(t) = A(t)x(t) + B(t)u(t)


x(t0 ) = x0

Show the solution is unique.

Proof. Assume φ(t), ψ(t) are two solutions so that φ(t0 ) = ψ(t0 ) = x0 and

φ̇(t) = A(t)φ(t) + B(t)u(t)


ψ̇(t) = A(t)ψ(t) + B(t)u(t)

Then

φ(t) − ψ(t) = ∫_{t0}^t (A(τ)φ(τ) − A(τ)ψ(τ)) dτ

so that

‖φ(t) − ψ(t)‖ ≤ ‖A‖_{∞,[t0,t]} ∫_{t0}^t ‖φ(τ) − ψ(τ)‖ dτ

By Bellman-Gronwall, for any c1 > 0,

‖φ(t) − ψ(t)‖ ≤ c1 + ∫_{t0}^t ‖A‖_{∞,[t0,t]} ‖φ(τ) − ψ(τ)‖ dτ

implies

‖φ(t) − ψ(t)‖ ≤ c1 exp(‖A‖_{∞,[t0,t]} (t − t0))

This holds for every c1 > 0, so letting c1 → 0 gives ‖φ(t) − ψ(t)‖ = 0 for all t, i.e., φ ≡ ψ.

6.4 Solutions to Linear Systems

Recall

ẋ(t) = A(t)x(t) + B(t)u(t) (state DE)


y(t) = C(t)x(t) + D(t)u(t) (read-out eqn.)

with initial data (t0 , x0 ) and the assumptions on A(·), B(·), C(·), D(·), u(·) all being PC:

• A(t) ∈ Rn×n
• B(t) ∈ Rn×m
• C(t) ∈ Rp×n
• D(t) ∈ Rp×m
The input function u(·) ∈ U, where U is the set of piecewise continuous functions from R+ → Rm .
This system satisfies the assumptions of our existence and uniqueness theorem. Indeed,
1. For all fixed x ∈ Rn , the function t ∈ R+ \D → f (x, t) ∈ Rn is continuous where D contains all the points
of discontinuity of A(·), B(·), C(·), D(·), u(·)
2. There is a PC function k(·) = ‖A(·)‖ such that

‖f(ξ, t) − f(ξ′, t)‖ = ‖A(t)(ξ − ξ′)‖ ≤ k(t)‖ξ − ξ′‖ ∀t ∈ R+, ∀ξ, ξ′ ∈ Rn

Hence, by the above theorem, the differential equation has a unique continuous solution x : R+ → Rn which
is clearly defined by the parameters (t0 , x0 , u) ∈ R+ × Rn × U . Therefore, recalling the state transition map
s we have the following theorem.

Theorem 4.1 (Existence of the state transition map.) Under the assumptions and notation above,
for every triple (t0 , x0 , u) ∈ R+ × Rn × U , the state transition map

x(·) = s(·, t0 , x0 , u) : R+ → Rn

is a continuous map well-defined as the unique solution of the state differential equation

ẋ(t) = A(t)x(t) + B(t)u(t)

with (t0 , x0 ) such that x(t0 ) = x0 and u(·) ∈ U .

Remark: Since the state transition map is well-defined, it follows that it is differentiable at every t ∈ R+\D. Moreover, with the state transition map being well-defined, so is the response map

y(t) = ρ(t, t0, x0, u)

as the composition of the state transition map and the output (read-out) map.

6.4.1 Zero-State and Zero-Input Maps

The state transition map (resp. response map) of a linear system is the sum of its zero-input and zero-state parts:

s(t, t0, x0, u) = s(t, t0, x0, 0) + s(t, t0, 0, u)

ρ(t, t0, x0, u) = ρ(t, t0, x0, 0) + ρ(t, t0, 0, u)

where s(t, t0, x0, 0) is the zero-input state transition map, s(t, t0, 0, u) is the zero-state state transition map, and ρ(t, t0, x0, 0), ρ(t, t0, 0, u) are the zero-input and zero-state responses, respectively.

Due to the fact that the state transition map and the response map are linear, they have the property that
for fixed (t, t0 ) ∈ R+ × R+ the maps

s(t, t0 , ·, 0) : Rn → Rn : x0 7→ s(t, t0 , x0 , 0)

and
ρ(t, t0 , ·, 0) : Rn → Rp : x0 7→ ρ(t, t0 , x0 , 0)
are linear.

Hence by the Matrix Representation Theorem, they are representable by matrices. Therefore there exists a
matrix Φ(t, t0 ) ∈ Rn×n such that

s(t, t0 , x0 , 0) = Φ(t, t0 )x0 , ∀x0 ∈ Rn

and
ρ(t, t0 , x0 , 0) = C(t)Φ(t, t0 )x0 , ∀x0 ∈ Rn

Definition 4.1 (State transition matrix.) Φ(t, t0 ) is called the state transition matrix.

6.5 Properties of State Transition Function

Consider the matrix differential equation

Ẋ = A(t)X, X(·) ∈ Rn×n

Let X(t0 ) = X0 .

Definition 5.1 (State Transition Matrix.) The state transition matrix Φ(t, t0) is defined to be the solution of the above matrix differential equation starting from Φ(t0, t0) = I. That is,

∂/∂t Φ(t, t0) = A(t)Φ(t, t0)

and Φ(t0, t0) = I.
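
Since the matrix DE rarely has a closed-form solution when A depends on t, Φ(t, t0) is often computed numerically. A sketch (not from the notes; A(t) is an illustrative choice) that integrates dX/dt = A(t)X from X(t0) = I:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
A = lambda s: np.array([[0.0, 1.0], [-1.0, -0.5 * np.sin(s)]])  # assumed A(t)

def Phi(t, t0):
    if np.isclose(t, t0):
        return np.eye(n)                 # Phi(t0, t0) = I by definition
    # flatten the matrix DE dX/dt = A(t) X into a vector-valued ODE
    rhs = lambda s, X: (A(s) @ X.reshape(n, n)).reshape(-1)
    return solve_ivp(rhs, (t0, t), np.eye(n).ravel(),
                     rtol=1e-10, atol=1e-12).y[:, -1].reshape(n, n)

print(Phi(2.0, 0.0))   # the state transition matrix over [0, 2]
```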

Proposition 5.1 The following properties hold:

1. The solution s(t, t0, x0, 0) of ẋ = A(t)x is given by

s(t, t0, x0, 0) = Φ(t, t0)x0 (6.4)

2. For all t, t0, t1 ∈ R+,

Φ(t, t0) = Φ(t, t1)Φ(t1, t0)

3. The inverse of the state transition matrix is

(Φ(t, t0))⁻¹ = Φ(t0, t)

4. The determinant is given by

det Φ(t, t0) = exp(∫_{t0}^t trace(A(τ)) dτ)

Proof. Call the left-hand side of (6.4) LHS, and the right-hand side RHS.

1. Check first that the LHS of (6.4) and the RHS are equal at t0:

LHS(t0) = s(t0, t0, x0, 0) = x0 and RHS(t0) = Φ(t0, t0)x0 = Ix0 = x0

Now, we check that they satisfy the same differential equation:

d/dt LHS(t) = A(t)LHS(t) and d/dt RHS(t) = A(t)RHS(t)

so that, by uniqueness, s(t, t0, x0, 0) = Φ(t, t0)x0.

2. Again we use the same trick of checking the initial condition and the differential equation (and invoke the existence and uniqueness theorem):

RHS(t1) = LHS(t1)

d/dt RHS(t) = A(t)RHS(t)

d/dt LHS(t) = A(t)LHS(t)

Hence, LHS ≡ RHS.
3. First, Φ(t, s) = Φ(t, τ)Φ(τ, s) for any t, s, τ, since state transitions compose: transitioning from time s to time t is the same as transitioning from s to τ and then from τ to t (the corresponding diagram of maps X → X commutes).

Indeed, consider the unique solution to

ẋ(σ) = A(σ)x(σ), x(s) = a

Then x(t) = Φ(t, s)a, x(t) = Φ(t, τ)x(τ), and x(τ) = Φ(τ, s)a, and hence

Φ(t, τ)Φ(τ, s)a = Φ(t, s)a

that is,

(Φ(t, τ)Φ(τ, s) − Φ(t, s))a = 0

Since this must hold for all a ∈ Rn, the claim holds.

We claim that Φ(t, s) is invertible and that its inverse is given by Φ(s, t). Indeed, applying the composition property Φ(a, c) = Φ(a, b)Φ(b, c) with a = c = t and b = s, we have that

I = Φ(t, t) = Φ(t, s)Φ(s, t)

Thus, Φ(t, t0) is invertible for all t. Hence,

Φ(t0, t0) = I = Φ(t0, t)Φ(t, t0) =⇒ Φ(t, t0)⁻¹ = Φ(t0, t)

4. This is called the Jacobi-Liouville equation. We will take this one as given.
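
Although we take property 4 as given, all four properties are easy to spot-check numerically. A sketch (not from the notes; it reuses the same illustrative A(t) and the Phi routine from the sketch after Definition 5.1):

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

n = 2
A = lambda s: np.array([[0.0, 1.0], [-1.0, -0.5 * np.sin(s)]])  # assumed A(t)

def Phi(t, t0):
    # Phi(t, t0) via dX/dt = A(t) X, X(t0) = I, as in Section 6.5
    if np.isclose(t, t0):
        return np.eye(n)
    rhs = lambda s, X: (A(s) @ X.reshape(n, n)).reshape(-1)
    return solve_ivp(rhs, (t0, t), np.eye(n).ravel(),
                     rtol=1e-10, atol=1e-12).y[:, -1].reshape(n, n)

t0, t1, t = 0.0, 0.7, 1.5
P = Phi(t, t0)
print(np.allclose(P, Phi(t, t1) @ Phi(t1, t0)))   # property 2 (composition)
print(np.allclose(np.linalg.inv(P), Phi(t0, t)))  # property 3 (inverse)
val, _ = quad(lambda s: np.trace(A(s)), t0, t)
print(np.isclose(np.linalg.det(P), np.exp(val)))  # property 4 (Jacobi-Liouville)
```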

[Figure 6.1: An input u(·) that is zero except on a small interval from t0′ to t0′ + dt0′ inside [t0, t].]

6.6 Solving Linear ODE via Φ(t, t0 )

First, let us consider a heuristic derivation of the zero-state transition (page 35 of C&D). Consider the input in Fig. 6.1: it is zero except on a small interval [t0′, t0′ + dt0′] ⊂ [t0, t]. Then,

x(t0′) = Φ(t0′, t0)x0

and

x(t0′ + dt0′) = x(t0′) + [A(t0′)x(t0′) + B(t0′)u(t0′)]dt0′

so that

x(t) = Φ(t, t0′ + dt0′)x(t0′ + dt0′)
= Φ(t, t0′ + dt0′)[x(t0′) + A(t0′)x(t0′)dt0′ + B(t0′)u(t0′)dt0′]
= Φ(t, t0′ + dt0′)[I + A(t0′)dt0′]x(t0′) + Φ(t, t0′ + dt0′)B(t0′)u(t0′)dt0′
≃ Φ(t, t0′ + dt0′)Φ(t0′ + dt0′, t0′)Φ(t0′, t0)x0 + Φ(t, t0′)B(t0′)u(t0′)dt0′

Hence,

x(t) ≃ Φ(t, t0)x0 + Φ(t, t0′)B(t0′)u(t0′)dt0′

Superposing (integrating) such contributions over all t0′ ∈ [t0, t] suggests the following theorem.

Theorem 6.1 (Solution of Linear System.)

x(t) = Φ(t, t0)x0 + ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ (6.5)

Proof idea: We will use the trick of checking equality by showing that the left- and right-hand sides of (6.5) satisfy the same ODE: at t0 they have the same value (initial condition), and their derivatives agree. The key is that, by the existence and uniqueness theorem, the solution of the ODE is unique, so any two expressions that satisfy it must be equal.

Proof. We will use the same trick as before where we check the initial condition and the differential equation
and invoke the existence/uniqueness theorem for solutions to ODEs.

d/dt LHS(t) = A(t)LHS(t) + B(t)u(t), LHS(t0) = x0, RHS(t0) = x0

d/dt RHS(t) = d/dt (Φ(t, t0)x0) + d/dt ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ

By the Leibniz rule, and since the lower limit t0 does not depend on t,

d/dt ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ = Φ(t, t)B(t)u(t) + ∫_{t0}^t ∂/∂t Φ(t, τ)B(τ)u(τ) dτ

so that, using Φ(t, t) = I and ∂/∂t Φ(t, τ) = A(t)Φ(t, τ),

d/dt RHS(t) = A(t)Φ(t, t0)x0 + B(t)u(t) + A(t) ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ
= A(t)RHS(t) + B(t)u(t)

Thus LHS and RHS have the same initial condition and satisfy the same ODE.

Thus we have that the state transition map is given by

s(t, t0, x0, u[t0,t]) = Φ(t, t0)x0 + ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ

By definition, we have that it satisfies the state transition axiom. Check that it satisfies the semi-group property:

s(t, t1, s(t1, t0, x0, u[t0,t1]), u[t1,t])
= Φ(t, t1)[Φ(t1, t0)x0 + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ] + ∫_{t1}^t Φ(t, τ)B(τ)u(τ) dτ
= Φ(t, t0)x0 + ∫_{t0}^{t1} Φ(t, τ)B(τ)u(τ) dτ + ∫_{t1}^t Φ(t, τ)B(τ)u(τ) dτ
= Φ(t, t0)x0 + ∫_{t0}^t Φ(t, τ)B(τ)u(τ) dτ
= s(t, t0, x0, u[t0,t])
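
Finally, a numeric check of (6.5) (a sketch, not from the notes; A, B, and u are illustrative choices, with Φ computed as in the Section 6.5 sketch): direct simulation of the state DE should agree with the variation-of-constants formula up to quadrature error.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

n = 2
A = lambda s: np.array([[0.0, 1.0], [-1.0, -0.5 * np.sin(s)]])  # assumed A(t)
B = lambda s: np.array([[0.0], [1.0]])                          # assumed B(t)
u = lambda s: np.array([np.cos(2.0 * s)])                       # assumed input

def Phi(t, t0):
    if np.isclose(t, t0):
        return np.eye(n)
    rhs = lambda s, X: (A(s) @ X.reshape(n, n)).reshape(-1)
    return solve_ivp(rhs, (t0, t), np.eye(n).ravel(),
                     rtol=1e-10, atol=1e-12).y[:, -1].reshape(n, n)

t0, tf, x0 = 0.0, 2.0, np.array([1.0, 0.0])

# direct simulation of the state DE
direct = solve_ivp(lambda s, x: A(s) @ x + B(s) @ u(s), (t0, tf), x0,
                   rtol=1e-10, atol=1e-12).y[:, -1]

# formula (6.5): quadrature of Phi(tf, tau) B(tau) u(tau) over [t0, tf]
taus = np.linspace(t0, tf, 201)
vals = np.stack([Phi(tf, tau) @ B(tau) @ u(tau) for tau in taus])
voc = Phi(tf, t0) @ x0 + trapezoid(vals, taus, axis=0)
print(np.max(np.abs(direct - voc)))  # small (limited by the quadrature grid)
```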
