Chapter 2

Basics on Dynamical Systems

In the sequel we will be interested in systems of the form

    ẋ = f(t, x, p),  x(t0) = x0,   (2.1)

where x(·): R+ → Rⁿ is a real vector-valued function, called the state of the system, f ∈ C(Rⁿ × Rⁿ) is a continuous (nonlinear) function of its arguments, and p is an r-dimensional vector of real-valued constants. The variable t represents time, and t0 the initial time. Note that in the general time-varying case, the solution will explicitly depend on the initial time. Most applications correspond to time-varying systems, simply because almost all systems depend on exogenous parameters which typically vary over time. Nevertheless, to understand time-varying systems one first has to understand time-invariant systems. In most practical applications, the behavior of the time-varying case represents some kind of perturbation of the time-invariant one, and thus can be delimited by the properties of the latter. For this reason, in most of the present study, we will restrict our attention to the case of time-invariant systems, i.e. where f does not depend explicitly on the time t. In this case (2.1) is simply represented by the dynamics

    ẋ = f(x, p),  x(0) = x0.   (2.2)

2.1 The flow of a system

A solution of (2.2) will be called a flow φ_t(x0) of the nonlinear system. The flow represents the time evolution of the state vector x(t) starting at x0 with parameter p, such that

    x(t; x0, p) = φ_t(x0, p).   (2.3)

The meaning of the flow is quite intuitive, thinking about a leaf which falls into a river at time t = 0 at position x0 and is driven along flow lines (mathematically parameterized by the function φ_t(x0)), such that the actual position of the leaf at time t ≥ 0 is given exactly by the relation (2.3). It is quite natural that the flow should satisfy the following properties, called flow axioms (see Figure 2.1, and compare with the ones presented in Section 3.1 for linear time-invariant systems):

    (I) φ_0(x0, p) = x0,  (II) φ_{t1}[φ_{t2}(x0, p), p] = φ_{t1+t2}(x0, p).   (2.4)

Figure 2.1: Illustration of the flow axioms.

The reason to consider the flow is that it allows one to talk about the properties of the solution of the differential equation (2.2), rather than about the equation itself, i.e. it is more oriented toward the description of the phenomenon than toward its causes (described by f in the dynamics equations (2.2)).

In virtue of (2.3), the flow is a solution of the differential equation (2.2), i.e.

    dφ_t(x0)/dt = f[φ_t(x0), p].   (2.5)

Note that in the time-variant case, i.e. (2.1), the concept of flow is the same, but the function φ depends explicitly on the initial time t0.

Regarding the differential equation (2.2), it is clear by integration that the solution x(t) has to satisfy the integral equation

    x(t) = x0 + ∫_0^t f[τ, x(τ), p] dτ.   (2.6)

This representation is sometimes useful to understand some fundamental properties of flows. For example, it already establishes that the function f has to be piecewise
continuous over time. In general, it is not simpler to solve the integral equation (2.6) than the differential equation (2.1), but the integral form (2.6) is useful to understand some basic facts. For example, in the scalar case with n = 1, the scalar function x(t) will increment over time as long as the associated right-hand side f is positive. Formulated in a variational fashion, this is

    n = 1:  x(t + dt) − x(t) ≈ f(x(t), p) dt > 0  whenever f(x(t), p) > 0.   (2.7)

This means that qualitative properties of the flow of the system can already be established by only analyzing the right-hand side f of the dynamics (2.2).

In the general, n-dimensional case, the right-hand side defines a vector field over the state space Rⁿ (or a compact subset of practical interest), i.e. for any point x in state space, it associates a vector f(x, p) which only depends on the parameter p (or, in the time-varying case, also on t). In terms of the vector field f, the dynamics equations (2.2) (or (2.1) for the time-varying case) mean that the solution curve in state space, i.e. the graph of φ_t, is tangential to the vector f(x) at the point x. See Figure 2.2 for a graphical illustration in the case n = 2. This implies that if one knows the vector field, one already knows the qualitative solution curves (without requiring any analytic solution). This fact will be of paramount importance for the qualitative studies on dynamical systems which will be discussed throughout this text.

Figure 2.2: Graphical illustration of the vector field and the solution trajectories x(t) for n = 2.

2.2 Existence and uniqueness

So far we assumed that the solution of (2.2) existed and was unique. To understand why existence and uniqueness are non-trivial properties which have to be studied carefully, consider the following examples.

2.2.1 Existence - A counterexample

Consider the nonlinear system with dynamics

    ẋ = x²,  x(0) = x0.   (2.8)

From separation of variables and integration one obtains the solution (the reader is encouraged to verify this)

    x(t) = x0 / (1 − x0 t).   (2.9)

It is clear from the preceding equation that x(t) tends to infinity as t approaches x0⁻¹, i.e.

    lim_{t→x0⁻¹} x(t) = ∞,

as illustrated in Figure 2.3. This property is known as finite escape, and means that the solution does not exist over arbitrary time intervals, but only locally over the open interval t ∈ [0, x0⁻¹). This phenomenon is typical of nonlinear systems, and has to be accounted for in nonlinear systems applications.

2.2.2 Uniqueness - A counterexample

Consider the dynamical system

    ẋ = x^{1/3},  x(0) = 0,   (2.10)

starting from the origin x = 0 at t = 0. Obviously one solution is given by x(t) = 0 ∀ t ≥ 0. Nevertheless, integration using separation of variables yields

    ∫_0^{x(t)} dx/x^{1/3} = (3/2) x(t)^{2/3} = t  ⇒  x(t) = (2t/3)^{3/2} ≠ 0,
meaning that there are two different solutions. Thus, the question arises which of the two trajectories the system will follow, i.e. to which of the two the flow φ_t will correspond. The two solutions of (2.10) are shown in Figure 2.4. The problem of non-uniqueness is a mathematical one, and its implications say probably more about the correct way to model a dynamical system than about the system itself. Nevertheless, the problem of non-uniqueness of solutions is a serious one which has to be accounted for carefully.

Figure 2.3: Graph of the solution x(t) of equation (2.8), with finite-escape time x0⁻¹.

Figure 2.4: Graph of the two solutions of equation (2.10): x(t) = 0, and x(t) = (2t/3)^{3/2}.

2.2.3 The existence and uniqueness theorem

The following theorem is due to Picard and Lindelöf, and states sufficient conditions for the existence and uniqueness of local (in time) solutions for general n-dimensional systems of the form (2.2). Proofs of the theorem can be found in almost any standard text on differential equations.

Theorem 2.2.1. If f is Lipschitz in x and piecewise continuous in t, then the initial value problem (2.2) has a unique solution x(t) = φ(t; t0)x0 over a finite time interval t ∈ [t0 − τ, t0 + τ].

This theorem allows for finite-escape times (actually, in example (2.8) the right-hand-side function f is Lipschitz), but it does not allow for multiple solutions as in example (2.10). Analyzing (2.10), one notes that the right-hand side f is not Lipschitz at x = 0, and thus the Picard-Lindelöf theorem does not apply. This example also explains the reason for the Lipschitz condition in Theorem 2.2.1.

2.3 Stability

A fundamental property of linear and nonlinear systems (or flows) is the stability of equilibrium points. Throughout this text we will be concerned with stability, and thus need a formal definition. Here, we employ the common definitions associated to A. Lyapunov.

Definition 2.3.1. An equilibrium point x* of (2.2) is said to be stable, if for any ε > 0 there exists a constant δ > 0 such that for any initial deviation from equilibrium within a δ-neighborhood, the trajectory remains within an ε-neighborhood, i.e.

    ∀ x0: ||x0 − x*|| ≤ δ  ⇒  ||x(t) − x*|| ≤ ε  ∀ t ≥ 0.   (2.11)

This concept is illustrated in Figure 2.5 (left), and is sometimes referred to as Lyapunov stability. Note that stability implies boundedness of solutions, but not that these converge to an equilibrium point. Convergence in turn is ensured by the concept of attractivity.
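The gap between stability in the sense of (2.11) and convergence can be seen directly in simulation. The sketch below is an added illustration (not from the text): it integrates two planar linear systems, a center (stable but not attractive) and a damped focus (stable and attractive), with a classical Runge-Kutta scheme and tracks the norm of the state. Both example matrices are arbitrary choices.

```python
import numpy as np

def simulate_norms(A, x0, dt=1e-3, T=20.0):
    """Integrate x' = A x with the classical RK4 scheme; return ||x(t)|| over time."""
    x = np.array(x0, dtype=float)
    norms = [np.linalg.norm(x)]
    for _ in range(int(T / dt)):
        k1 = A @ x
        k2 = A @ (x + 0.5 * dt * k1)
        k3 = A @ (x + 0.5 * dt * k2)
        k4 = A @ (x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        norms.append(np.linalg.norm(x))
    return np.array(norms)

# A center (eigenvalues +-i): stable in the sense of (2.11), but not attractive.
center = np.array([[0.0, 1.0], [-1.0, 0.0]])
# A damped focus (eigenvalues -0.5 +- i): stable and attractive.
focus = np.array([[-0.5, 1.0], [-1.0, -0.5]])

n_center = simulate_norms(center, [1.0, 0.0])
n_focus = simulate_norms(focus, [1.0, 0.0])

# The center trajectory stays on the unit circle (bounded, no convergence),
# while the focus norm decays like exp(-t/2).
print(n_center[-1], n_focus[-1])
```

The center illustrates that (2.11) only bounds the trajectory; it is the attractivity concept, defined next, that adds the convergence requirement.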
Figure 2.5: Qualitative illustration of the stability concept (left) and the asymptotic stability one (right).

Definition 2.3.2. The equilibrium point x* is called an attractor for the set S, if

    ∀ x0 ∈ S: lim_{t→∞} ||x(t) − x*|| = 0.   (2.12)

The set S is called the domain of attraction. It should be mentioned that if one can demonstrate convergence within a domain S1, this does not necessarily imply that S1 is the maximal domain of attraction. The determination of the maximal domain of attraction for an attractive equilibrium of a nonlinear system is, in general, a non-trivial task, due to the fact that, in contrast to linear systems, nonlinear systems can have multiple attractors, each one with its own domain of attraction. This issue will be analyzed in more detail later.

A common, similar concept, including a statement about the transient behavior, is called asymptotic stability and is defined next.

Definition 2.3.3. An equilibrium point x* of (2.2) is said to be asymptotically stable if it is stable and attractive.

This concept is illustrated in Figure 2.5 (right).

Note that asymptotic stability does not state anything about the convergence speed. It only establishes that after an infinite time period the solution x(t), if starting in the domain of attraction, will approach x* without ever actually reaching it. A concept which allows one to overcome this issue is the stronger one of exponential stability.

Definition 2.3.4. An equilibrium point x* of (2.2) is said to be exponentially stable in a set S, if it is stable and there are constants a, λ > 0 so that

    ∀ x0 ∈ S: ||x(t) − x*|| ≤ a ||x0 − x*|| e^{−λt}.   (2.13)

The constant a is known as the amplitude, and λ as the convergence rate. Clearly, exponential stability implies asymptotic stability. It is quite noteworthy that, although the convergence is still asymptotic, for exponentially stable equilibria it is possible to determine exactly the time needed to approach the equilibrium up to a given distance. In practice one considers convergence achieved if a 98.5% approximation is reached¹. Accordingly, the required convergence time can be determined as follows. Suppose that the bound (2.13) holds with a = 1 and is exact, i.e. it holds with equality. In this case the notion of the characteristic time tc is useful:

    tc = λ⁻¹.   (2.14)

Note that the equality in (2.13) implies that the norm behaves like a linear system, as actually

    d||x(t) − x*||/dt = −λ ||x(t) − x*||.

Note that the characteristic time tc corresponds to the inverse slope of the time response of this linear system at t = 0 (see Figure 2.6). Denote by ts the settling time, i.e. the time required for 98.5% convergence. It follows that

    ||x(ts) − x*|| / ||x0 − x*|| = e^{−λ ts} = 0.015  ⇒  ts ≈ 4 λ⁻¹ = 4 tc.

This means that practical convergence is obtained after approximately four characteristic times tc. This notion is particularly useful in prediction and control applications, when guaranteed convergence-time measures are required. Figure 2.6 illustrates the concepts of the characteristic (tc) and settling (ts) times.

Some words are in order on the amplitude constant a in (2.13). Clearly, a ≥ 1. Note that for any a ≥ 1 oscillations are possible (see Figure 2.7). While for a = 1 the norm of x is monotonically decreasing, for a > 1 an initial overshoot is possible. This kind of behavior is typical in applications, and corresponds to the one qualitatively depicted for the asymptotic stability case (right side) in Figure 2.5.

To conclude this section, note that all the above stability and attractivity concepts can also be applied to a set M. To exemplify this, and for later reference, consider the definition of an attractive set.

¹ To understand this fact, note that with the naked eye, two curves which are identical up to 98.5% are not distinguishable. Furthermore, in practice there is always some noisy variation in the system, causing fluctuations of the trajectories.
Figure 2.6: Illustration of the concepts of the characteristic (tc) and settling (ts) times for a linear system.

Definition 2.3.5. A compact set M is called attractive for the domain S, if

    ∀ x0 ∈ S: lim_{t→∞} dist(x(t), M) = 0,

where dist(x, M) = inf_{y∈M} ||x − y||.

Figure 2.7: Illustration of different exponentially stable time responses for a = 1 (blue) and a = 2 (red), with characteristic time tc = 1.

2.4 Exercises
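As a numerical footnote to Section 2.3 (a sketch added here, not part of the original exercise set), the rule of thumb ts ≈ 4 tc can be checked in a few lines; the rate λ = 2 below is an arbitrary example value:

```python
import math

# Characteristic time t_c = 1/lambda for ||x(t) - x*|| = ||x0 - x*|| exp(-lambda t);
# the settling time t_s is defined by 98.5% convergence: exp(-lambda t_s) = 0.015.
lam = 2.0                       # arbitrary example convergence rate
t_c = 1.0 / lam
t_s = -math.log(0.015) / lam    # exact settling time for amplitude a = 1

print(t_s / t_c)                # approximately 4.2, i.e. t_s is about 4 t_c
```

The ratio ts/tc = −ln(0.015) ≈ 4.2 is independent of λ, which is exactly why "four characteristic times" works as a general rule.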
Chapter 3

Linear systems revisited

In this chapter, basic facts from linear systems theory are provided, starting with the general solution of autonomous and non-autonomous linear odes, and finishing with linear systems with time-varying coefficients.

3.1 Linear Time-Invariant (LTI) systems

A dynamical system

    ẋ = f(x, u),  x(0) = x0,  x ∈ Rⁿ,  f: Rⁿ → Rⁿ   (3.1)

is called linear if f is a linear function, i.e.

    ẋ = Ax,  x(0) = x0,   (3.2)

where A ∈ R^{n×n} is a linear map (transformation).

Recall the definition of the matrix exponential (the so-called fundamental solution)

    S(t) = exp(At) = Σ_{i=0}^∞ Aⁱtⁱ/i!.   (3.3)

Let the characteristic equation of A be given by

    |λI − A| = Σ_{i=0}^n aᵢλⁱ = 0;   (3.4)

then, by the Cayley-Hamilton theorem, the matrix A satisfies (3.4), i.e.

    Σ_{i=0}^n aᵢAⁱ = 0.   (3.5)

As a consequence, there are constants c_{mi} such that

    ∀ m ≥ n:  A^m = Σ_{i=0}^{n−1} c_{mi} Aⁱ.

Thus, the series in (3.3) can be expressed as a sum of the first n powers of A, with adequately chosen coefficients:

    S(t) = exp(At) = Σ_{i=0}^{n−1} αᵢ(t) Aⁱ.   (3.6)

Furthermore, one has that

    dS(t)/dt = A S(t),   (3.7)

implying that with

    x(t) = S(t) x0   (3.8)

it holds that

    dx(t)/dt = d[S(t)x0]/dt = A S(t) x0 = A x(t),

meaning that (3.8) is the unique solution of (3.2).

The matrix function S(t), i.e. the fundamental solution of (3.2), satisfies the following properties:

(i) S(0) = I

(ii) S(t1)S(t0) = S(t1 + t0)

(iii) S(t − t0)⁻¹ = S(t0 − t)
which are also known as flux axioms (compare the flow axioms (2.4)). Accordingly, the fundamental solution S(t) is often associated to the flux of the system,

    Φ(t) = S(t),   (3.9)

an idea which will be discussed later in the context of nonlinear dynamics in more detail.

For the case of a non-autonomous (i.e. time-varying) equation with constant matrix A,

    ẋ(t) = A x(t) + u(t),   (3.10)

with u(t) ∈ Rⁿ, multiply both sides with exp(−At) to obtain

    e^{−At} ẋ = e^{−At} A x + e^{−At} u(t),

or equivalently, recalling the product formula for differentiation,

    d/dt [e^{−At} x(t)] = e^{−At} u(t).

Integrate from 0 to t,

    e^{−At} x(t) − x0 = ∫_0^t e^{−Aτ} u(τ) dτ,

rearrange, and multiply with S(t) = exp(At) to obtain

    x(t) = S(t) x0 + ∫_0^t S(t − τ) u(τ) dτ,  S(t) = e^{At},   (3.11)

which is known as the variation of constants formula.

Next, consider a state transformation ξ = Tx with T being an invertible matrix (i.e., an element of the general linear group GL(n, R)). It follows that

    ξ̇ = Tẋ = T(Ax) = TAT⁻¹ξ,  ξ(0) = ξ0 = Tx0,   (3.12)

and

    ξ(t) = S_ξ(t) ξ0 = exp(TAT⁻¹t) ξ0,   (3.13)

implying that for any invertible T we have that

    x(t) = T⁻¹ξ(t) = T⁻¹ exp(TAT⁻¹t) T x0.   (3.14)

This fact is particularly useful when thinking about diagonalization and transformations into Jordan normal form. To clarify this, consider first the case of a diagonalizable matrix A. Recall that the exact solution of the first order linear ode

    ẋ = λx,  x(0) = x0,   (3.15)

is given by¹

    x(t) = x0 e^{λt}.   (3.16)

If A is diagonalizable, there is an invertible matrix T such that TAT⁻¹ is diagonal, and it follows that (remember that ξ = Tx)

    ξ(t) = diag(e^{λᵢt}) ξ0,   (3.17)

or equivalently

    ξᵢ(t) = ξᵢ0 e^{λᵢt}.   (3.18)

Thus, the exact solution for x is given by

    x(t) = T⁻¹ diag(e^{λᵢt}) T x0.   (3.19)

Note that each eigenvalue λk is associated to an eigenvector vk introducing an eigendirection, i.e. an invariant subset {x = a vk | a ∈ R} ⊂ Rⁿ (this concept will be formalized later), so that the solution can be written as

    x(t) = Σ_{k=1}^n e^{λk t} ⟨x0, vk⟩ vk,   (3.20)

where ⟨·,·⟩ denotes the scalar product between two vectors. This situation is illustrated in Figure 3.1 for the cases of two negative eigenvalues (left), and one negative and one positive eigenvalue (right). The first case is known as a (stable) node, and the second one as a saddle². If the matrix A has eigenvalues with multiplicity greater

¹ By separation of variables: dx/x = λ dt, and thus ln(x(t)) − ln(x0) = λt, or equivalently (3.16).
² This association stems from a comparison with the geometric shape in a three-dimensional set-up, which is similar to a horse saddle.
Figure 3.1: Illustration of the behavior with two different eigenvalues: a stable node (left, λ1 = −1, λ2 = −3), and a saddle (right, λ1 = +√2, λ2 = −√2).

than one, A can be transformed into Jordan blocks of the form

         ⎡ λk  1   0  ···  0  ⎤
         ⎢  0  λk  1  ···  0  ⎥
    Jk = ⎢  ⋮      ⋱   ⋱   ⋮  ⎥   (3.21)
         ⎢  ⋮          ⋱   1  ⎥
         ⎣  0  ···  ···  0  λk ⎦

For a Jordan block of dimension m, the associated solution for ξ then reads (the reader is encouraged to demonstrate this fact using the variation of constants formula (3.11) in combination with induction, taking care with the summation indexes)

    ξᵢ(t) = e^{λk t} [ ξ_{i,0} + ξ_{i+1,0} t + ··· + ξ_{i+j,0} tʲ/j! + ··· + ξ_{m,0} t^{m−i}/(m−i)! ]
          = e^{λk t} Σ_{j=0}^{m−i} ξ_{i+j,0} tʲ/j!,   (3.22)

so that the matrix exp(TAT⁻¹t) can be easily calculated based on the Jordan blocks (3.21), and the corresponding solution for x can again be found by the back-transformation (3.14). Note that these dynamics are associated to the concept of generalized eigenvectors, given the multiplicity of the eigenvalue, and lead to interesting dynamical behavior as illustrated in Figure 3.2. If neither of the preceding cases applies, then there are pairs of conjugate complex eigenvalues (λk, λk+1)³ with λk = a + ib and λk+1 = λ̄k = a − ib. In this case the matrix A can be transformed into blocks associated to the conjugate complex eigenvalues, of the form

    Ck = [ a  −b ; b  a ],   (3.23)

and the corresponding components k and k + 1 of the associated ξ-dynamics have the form

    ξ̇k = a ξk − b ξk+1,  ξ̇k+1 = b ξk + a ξk+1.   (3.24)

By the change into polar coordinates

    rk = (ξk² + ξk+1²)^{1/2},  θk = arctan(ξk+1/ξk),   (3.25)

these dynamics are equivalent to⁴

    ṙk = a rk,  θ̇k = b,   (3.26)

with initial conditions

    rk0 = (ξ_{k,0}² + ξ_{k+1,0}²)^{1/2},  θk0 = arctan(ξ_{k+1,0}/ξ_{k,0}).

Figure 3.2: Illustration of the behavior associated to a generalized eigenvector with multiplicity two, and eigenvalue λ = −1 (the eigendirection is the x-axis).

³ Note that here it is assumed that the associated eigenvalues have successive indexes.
⁴ These equations are easily derived using the following relations [Strogatz(1994)] (for general (x, y)-coordinates): rṙ = xẋ + yẏ,  θ̇ = (xẏ − yẋ)/r².
and solutions

    rk(t) = e^{at} rk0,  θk(t) = θk0 + bt,   (3.27)

corresponding to trajectories spinning counterclockwise (b > 0) or clockwise (b < 0) around the origin (ξk, ξk+1) = (0, 0), with period 2π/b, and with a radius which is decreasing (a < 0, stable spiral), constant (a = 0, center), or increasing (a > 0, unstable spiral) (see Figure 3.3). In ξ-coordinates these solutions are given by

    ξk(t) = e^{at} rk0 cos(θk0 + bt),  ξk+1(t) = e^{at} rk0 sin(θk0 + bt),   (3.28)

and the solution in the original x-coordinates can be easily obtained by the back-transformation (3.14).

Note that the three cases discussed so far are the only ones which occur in linear systems, except for the case of an eigenvalue with zero real part. In this case, the associated eigenspace is the nullspace (also called the kernel), and the associated components remain at a constant distance to the origin.

Later, we will relate nonlinear flows in neighborhoods of equilibrium points in two-dimensional flows to these cases, in order to gain a comprehension of the effect of parameter variation on the qualitative solution behavior.

To end this section, we will formally connect the preceding study with the concept of stability, and present a short systematic classification of planar flows of linear systems.

Figure 3.3: Different cases of oscillations around the origin according to (3.24) with b = 1: stable spiral (left, a = −1), center (middle, a = 0), and unstable spiral (right, a = 1).

Recall the stability definitions 1 to 3 presented in the preceding chapter. Taking into account the solutions (3.20), (3.22), and (3.27) for the cases of real eigenvalues with multiplicity one, real eigenvalues with multiplicity greater than one, and complex conjugate eigenvalues, respectively, the following theorem is quite intuitive.

Theorem 1. The origin of the linear system (3.2) is stable if all eigenvalues λᵢ, i = 1, ..., n, have non-positive real part, i.e. Re(λᵢ) ≤ 0 (with those on the imaginary axis not associated to generalized eigenvectors). If all eigenvalues have strictly negative real part, i.e. Re(λᵢ) < 0, then the origin is exponentially stable, and (2.13) holds with λ = min |Re(λᵢ)|, i = 1, ..., n.

In the case of second order linear systems, the classification of the dynamical behavior is particularly systematic. To see this, consider the general matrix

    A = [ a11  a12 ; a21  a22 ]   (3.29)

with characteristic equation

    λ² − tr(A) λ + det(A) = 0,  tr(A) = a11 + a22,  det(A) = a11 a22 − a12 a21,

where tr(A) is the trace and det(A) the determinant of the matrix A. The eigenvalues of A are thus given by

    λ_{1,2} = (1/2) [ tr(A) ± (tr(A)² − 4 det(A))^{1/2} ].   (3.30)

Thus, the dynamic behavior can be classified as depicted qualitatively in Figure 3.4, namely: if tr(A) < 0 (or tr(A) > 0), the origin is a stable (or unstable) node (tr(A)² ≥ 4 det(A)) or spiral (tr(A)² < 4 det(A)). In the cases where tr(A)² = 4 det(A) there are generalized eigenvectors, and the origin is stable (or unstable) if tr(A) < 0 (or > 0). Note that this case represents the transition between a node and a spiral. If det(A) < 0, there is always one positive and one negative eigenvalue, so that the origin is a saddle.

3.2 Linear Time Varying (LTV) Systems and Floquet theory

There is much to say about linear time-varying systems. Actually, the subject is non-trivial, as the complete system behavior heavily depends on the time-varying properties. This implies that the simple solution based on the matrix exponential does not apply,
and the decomposition into linear subspaces associated to eigenvectors changes with time. Thus, the main approach for solving time-varying differential equations is by searching for similarity transformations of the kind

    x(t) = P(t) ξ(t),   (3.31)

where P(t) is non-singular at any time t. Clearly,

    ẋ = Ax = APξ = Ṗξ + Pξ̇,

so that

    ξ̇(t) = P⁻¹(t) [A P(t) − Ṗ(t)] ξ(t),  ξ(t0) = P⁻¹(t0) x0.   (3.32)

Now, P(t) is called a fundamental matrix if it is a non-singular solution of the matrix differential equation

    Ṗ(t) = A(t) P(t),  P(t0) = P0.   (3.33)

If P in (3.32) is a fundamental matrix, then the solution of (3.32) is

    ξ(t) = ξ0 = P⁻¹(t0) x0,

and the solution x(t) is just

    x(t) = P(t) ξ(t) = P(t) P⁻¹(t0) x0.   (3.34)

Figure 3.4: Classification of planar linear dynamics according to (3.30), with tr being the trace, and det the determinant of the dynamics matrix A.

This gives rise to the notion of the state transition matrix

    Φ_t(t0) = P(t) P⁻¹(t0).   (3.35)

Note that in the particular case of time-invariant systems, the state transition matrix Φ_t(t0) is just the fundamental solution, Φ_t(t0) = exp(A(t − t0)). Nevertheless, if A varies over time, then the function exp(∫_0^t A(τ)dτ) is not a fundamental matrix, as is shown next. Note that

    d/dt exp(∫_0^t A(τ)dτ) = Σ_{i=0}^∞ (1/i!) d/dt (∫_0^t A(τ)dτ)ⁱ

and, in general,

    d/dt (∫_0^t A(τ)dτ)² = A(t) ∫_0^t A(τ)dτ + ∫_0^t A(τ)dτ A(t) ≠ 2 A(t) ∫_0^t A(τ)dτ,

and the same is true for higher order terms, given that, in general, A does not commute with its integral. This means that

    d/dt exp(∫_0^t A(τ)dτ) ≠ A(t) exp(∫_0^t A(τ)dτ).

Clearly, the state transition matrix satisfies the flow axioms, so that the flow of a linear time-varying system is given by (3.35).

Once a fundamental matrix P(t) (i.e., a matrix satisfying (3.33)) has been found, the linear system can be transformed into an arbitrary linear time-invariant one. Take any matrix Ā, and consider the state transformation

    x(t) = P̄(t) ξ(t) = P(t) e^{−Āt} ξ(t).   (3.36)

Correspondingly, the equivalent time-varying system equations (3.32) read (this should be shown by the reader)

    ξ̇(t) = Ā ξ(t),  ξ(t0) = e^{Āt0} P⁻¹(t0) x0.   (3.37)

In the case that Ā = 0, this corresponds just to the case discussed above. The reason why, given a fundamental matrix P(t), the system can be transformed into any time-invariant one consists in the fact that the solution is finally determined by the fundamental matrix P(t) itself. The great deal about LTV systems resides thus in solving the matrix differential equation (3.33).
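In practice, (3.33) is solved numerically. The following sketch is an added illustration (the periodic matrix A(t) is an arbitrary example, not taken from the text): it integrates the matrix equation Ṗ = A(t)P with P(0) = I alongside the vector equation ẋ = A(t)x, and verifies the representation x(t) = P(t)P⁻¹(t0)x0 from (3.34).

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical Runge-Kutta step for y' = f(t, y); works for vectors and matrices."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def A(t):
    # arbitrary time-varying example matrix (an assumption for illustration)
    return np.array([[-1.0 + 0.5 * np.cos(t), 1.0],
                     [0.0, -2.0]])

dt, T = 1e-3, 5.0
P = np.eye(2)                   # fundamental matrix with P(0) = I
x0 = np.array([1.0, -1.0])
x = x0.copy()                   # direct solution of x' = A(t) x
t = 0.0
for _ in range(int(T / dt)):
    P = rk4_step(lambda s, Y: A(s) @ Y, t, P, dt)
    x = rk4_step(lambda s, y: A(s) @ y, t, x, dt)
    t += dt

# (3.34): the fundamental matrix reproduces the direct solution, x(T) = P(T) P(0)^{-1} x0.
x_from_P = P @ x0               # P(0) = I, so no inverse is needed here
print(np.allclose(x, x_from_P, atol=1e-8))
```

Since both quantities are advanced by the same linear one-step map, the agreement is exact up to rounding; with P(0) = I this same computation is also the starting point for the Floquet machinery of the next section.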
Of particular interest in time-varying systems theory is the case of periodic matrices A(t), i.e. when there exists a time T such that

    A(t + T) = A(t)  ∀ t ≥ 0.   (3.38)

These kinds of systems have been intensively studied, initiating with the great mathematician G. Floquet, and the associated theory is named after him. According to Floquet theory, it holds that the fundamental matrix P(t) satisfies

    P(t + T) = P(t) M,  M = P⁻¹(t0) P(T),   (3.39)

where the constant matrix M is called the monodromy matrix, which can be expressed as

    M = e^{TB}   (3.40)

for some matrix B. The eigenvalues of M are called the Floquet (or characteristic) multipliers, and the associated eigenvalues of B are called the Floquet (or characteristic) exponents. The real parts of the Floquet exponents are called the Lyapunov exponents. The following theorem is a fundamental tool in the analysis of periodic LTV systems.

Theorem 3.2.1. There exists a T-periodic matrix Q(t), such that

    P(t) = Q(t) e^{tB}.   (3.41)

The proof of the theorem is quite simple, having the above facts on the monodromy matrix M as points of departure. Actually, notice that, given P(t), the matrix Q(t) = P(t)e^{−tB} can always be found, and satisfies

    Q(t + T) = P(t + T) e^{−(t+T)B} = P(t + T) e^{−TB} e^{−tB} = P(t) M e^{−TB} e^{−tB} = P(t) e^{−tB} = Q(t).

The importance of this theorem consists in that it allows one to decompose the solution of a periodic time-variant system into a periodic part (Q) and an exponential part (e^{tB}). This is particularly useful when analyzing the stability of periodic orbits of linear and nonlinear systems.

It is important to notice that the Floquet multipliers do not depend on the particular fundamental matrix P, while the monodromy matrix M will depend on P. This becomes clear when considering two fundamental matrices P1(t) and P2(t). As both matrix functions satisfy (3.33), there must be a constant (non-singular) matrix S such that P2 = P1 S, and consequently the associated monodromy matrices M1 and M2 satisfy

    M1 = P1⁻¹(t0) P1(T) = [P2(t0) S⁻¹]⁻¹ [P2(T) S⁻¹] = S [P2⁻¹(t0) P2(T)] S⁻¹ = S M2 S⁻¹.

This, in turn, implies that both monodromy matrices are similar, and thus have the same eigenvalues.

In the light of the representation (3.41), the Floquet decomposition ensures that there exists a coordinate ζ = Q⁻¹(t)x with dynamics

    ζ̇ = B ζ,   (3.42)

so that the stability question can be completely addressed using the Floquet exponents, i.e. the eigenvalues of B.

For a numerical determination of the Floquet multipliers and exponents, it is sufficient to solve the matrix differential equation (3.33) with initial condition P(0) = I. In virtue of (3.39), the Floquet multipliers are just the eigenvalues of M = P(T), and the Floquet exponents are the eigenvalues of the matrix

    B = (1/T) ln(M) = (1/T) ln[P(T)].   (3.43)

The analytic calculation of the Floquet exponents γᵢ, i = 1, ..., n (or of the multipliers) is not an easy question. Nevertheless, the relation (see e.g. [Guckenheimer and Holmes(1977)])

    Σ_{i=1}^n γᵢ = (1/T) ∫_0^T tr[A(τ)] dτ  (mod 2πi/T)   (3.44)

may be helpful, particularly in 2-dimensional problems. Actually, if the trace of A is always positive, it is clear that there must be at least one Floquet exponent with positive real part, and thus the origin x = 0 will be unstable.

3.3 Exercises

E1.1 Prove the relation (3.22) using the variation of constants formula and mathematical induction.

E1.2 Consider a linear system in (x, y)-coordinates. Show that in polar coordinates, the corresponding dynamics can be determined using the relations

    rṙ = xẋ + yẏ,  θ̇ = (xẏ − yẋ)/r²

    (cp. e.g. [Strogatz(1994)]).
26 27
E1.3 Determine the analytical solution of the equation ẋ = λx, and plot it for different initial conditions, and (positive, negative) values of λ.

E1.4 Solve the ode ẋ = λx numerically using some adequate software (like maxima [http://maxima.sourceforge.net]), and compare it graphically with the analytical solution obtained in exercise E1.3.

E1.5 Study the flow of the following example systems by analyzing the eigenvalue-eigenvector structures, and verify your results using a numerical simulation tool (like maxima):

    ẋ = [ 1  0 ; 0  1 ] x   (E1.5-a)

    ẋ = [ 1  1 ; 1  1 ] x   (E1.5-b)

    ẋ = [ 1  1 ; 1  1 ] x   (E1.5-c)

    ẋ = [ 1  1 ; 0  1 ] x   (E1.5-d)

E1.6 For the examples in exercise E1.5, determine the analytical solution, and plot it against the numerical solution obtained previously. What can you observe?

E2.1 Determine the Floquet multipliers and exponents for the following systems, and plot the coefficients of the fundamental matrix P(t):

    ẋ = [ cos(t)  −sin(t) ; sin(t)  cos(t) ] x   (E2.1-a)

    ẋ = [ 1 + cos(t)  −sin(t) ; sin(t)  1 + cos(t) ] x   (E2.1-b)

What can you observe in the solution of P(t)? Plot P(t) over two periods, i.e. t ∈ [0, 4π], and compare again the behavior of the two above cases.

Chapter 4

Nonlinear flows - An introduction

In this chapter, the basic theory of nonlinear flows is introduced. In contrast to linear systems, with only one equilibrium, a continuum set of equilibria, or a continuum family of periodic orbits, in nonlinear flows the local behavior close to an equilibrium can not be extended to the global behavior, i.e. although an equilibrium point may be stable, many more things may happen far from this equilibrium point. Examples are the presence of multiple attractors, each of which has its own domain of attraction. Additionally, these attractor sets may be isolated periodic orbits, enclosing one or more equilibria. Hence it makes sense to distinguish in nonlinear systems between local and non-local behavior, i.e. close to and far from equilibrium, respectively. This distinction motivates the structure of the present chapter, where in the first place the main results of local theory are presented, and afterwards non-local theory is addressed.

Some words on the terminology and notion of locality are in order here. From a stability analysis point of view, in the sense of Lyapunov stability, local normally refers to a given set in state space, which may be small or large. Global, in contrast, refers to properties which are valid in the whole Rⁿ. From a practical point of view, such globality is almost never of interest, as any physical system exists within some bounded region of possible states (even if it is not always modeled explicitly with these restrictions). A simple example is a pendulum, which obviously has a limited velocity from a practical point of view. Actually, a possible limit velocity is given by the maximum centrifugal force the articulation supports; there does not exist any articulation supporting infinite velocity. There is much more to say about this subject, but this is not the place to do so. Nevertheless, it should be clear that from a practical point of view, the state space is always restricted. You may think about this issue yourself. Now, from a nonlinear dynamics point of view,
local refers much more to the behavior of trajectories close to an equilibrium, i.e. in a non-specified, sufficiently small neighborhood. This concept is directly related to the Taylor approximation of the vector field around the equilibrium by small- (first- to third-) order terms. Clearly, the third-order Taylor approximation does not reveal anything certain when fourth-order terms dominate the first- to third-order ones. Close to the equilibrium, nevertheless, it is clear that the behavior will be qualitatively governed by the first- to third-order terms. This is why we talk about local behavior. The contrary of local behavior could now be called global behavior, or non-local behavior. Here, the term non-local is preferred, in order to avoid confusion with the above-mentioned, well-accepted globality concept stemming from Lyapunov stability.

4.1 Local Theory

In this section the basic concepts and results from the local theory of nonlinear systems are presented, i.e. those concerning the behavior of nonlinear flows close to an equilibrium. The main results on this subject are the Hartman-Grobman Theorem, which relates the behavior of the nonlinear system to its linear first-order Taylor approximation, and the Center Manifold Theorem, which applies to nonlinear flows which cannot be handled with first-order terms only.

4.1.1 The Hartman-Grobman Theorem

In many cases of interest, the qualitative behavior of the nonlinear system (2.2) can be determined via the consideration of the associated linear approximation around an equilibrium point x* (i.e., such that f(x*) = 0)

    ẋ = J(x*, p) x,    J(x*, p) = ∂f(x*, p)/∂x,        (4.1)

where J(x*, p) is the Jacobian matrix associated to the function f(x, p), evaluated at the pair (x*, p). Clearly, studying the flow associated to the linear system (4.1) is much easier than evaluating the flow of the nonlinear system. The following theorem, due to Hartman and Grobman, establishes sufficient conditions for the qualitative equivalence between the trajectories of (2.2) and (4.1) within a sufficiently small neighborhood of x*. In order to state the theorem, recall the definition of a homeomorphism h (or topological isomorphism) between two vector (more exactly, topological) spaces as a continuous map with continuous inverse. Homeomorphisms conserve properties like the closeness of two points (due to the continuity) in both spaces, and are frequently employed in the analysis of dynamical systems. Simple examples are coordinate transformations (e.g. from cartesian to polar coordinates, and vice versa).

Theorem 4.1.1. If J(x*, p) (4.1) has no eigenvalues on the imaginary axis, then there exist an open neighborhood N(x*) of x*, and a homeomorphism h : N(x*) → R^n, such that

    h[φ_t(x0, p)] = e^{J(x*, p) t} h(x0),    h(x*) = 0,        (4.2)

where φ_t(x0, p) is the flow (2.3) associated to (2.2) at time t, starting at x0.

The proof of this theorem goes beyond the scope of the present manuscript, but can be found in specialized literature on this subject (e.g. [Hartman(1964), Hartman(1960)a, Hartman(1960)b, Perko(2001)]).

An equilibrium point x* which is associated to a Jacobian matrix J(x*, p) whose eigenvalues all have non-zero real parts is called hyperbolic. Thus, the Hartman-Grobman theorem applies only to hyperbolic equilibria.

The meaning of this theorem is very important for the qualitative study of nonlinear dynamical systems. A similar idea is also employed in the study of linear systems, where the homeomorphism h is normally given by a coordinate transformation h(x) = Tx, for instance with the matrix T belonging to the orthogonal group O(R^n) (i.e., the group of n × n distance-preserving invertible matrices with T^{-1} = T^T). Using such a transformation it is possible to study the behavior of a highly coupled linear system via its topological equivalent: an uncoupled diagonal system (or one with a low degree of coupling), employing diagonalization (or a transformation into Jordan blocks). To illustrate this idea, consider the linear system ẋ = Ax, x(0) = x0, and suppose that A is diagonalizable, i.e. there exists an invertible T such that T A T^{-1} = diag(λ_i), where λ_i denotes the i-th eigenvalue of the matrix A. Then, introducing the new state z = Tx, one easily finds that

    ż = Tẋ = T A T^{-1} z = diag(λ_i) z,    z(0) = z0 = T x0,

and the solution is given by

    x(t) = T^{-1} z(t),  where  z(t) = [e^{λ_1 t} z_{10}, …, e^{λ_n t} z_{n0}]^T = diag(e^{λ_i t}) T x0,

or simply

    x(t) = T^{-1} diag(e^{λ_i t}) T x0.
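As a sanity check of the diagonalization argument, the closed-form solution x(t) = T^{-1} diag(e^{λ_i t}) T x0 can be compared against a direct numerical integration of ẋ = Ax. The following Python sketch is illustrative only (the matrices D, T and the initial condition are arbitrary example data, not taken from the text):

```python
import numpy as np

# Check x(t) = T^{-1} diag(exp(lambda_i t)) T x0 for a diagonalizable A.
# D, T, x0 below are arbitrary example data (assumptions), chosen so that
# A = T^{-1} D T has eigenvalues -1 and -2.
D = np.diag([-1.0, -2.0])
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])               # invertible, not orthogonal
A = np.linalg.inv(T) @ D @ T             # then T A T^{-1} = D
x0 = np.array([1.0, -0.5])

def x_exact(t):
    """Solution via the change of coordinates z = T x."""
    return np.linalg.inv(T) @ np.diag(np.exp(np.diag(D) * t)) @ T @ x0

def x_rk4(t, steps=2000):
    """Direct RK4 integration of xdot = A x, for comparison."""
    h, x = t / steps, x0.copy()
    f = lambda y: A @ y
    for _ in range(steps):
        k1 = f(x); k2 = f(x + h / 2 * k1)
        k3 = f(x + h / 2 * k2); k4 = f(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

print(np.max(np.abs(x_exact(1.5) - x_rk4(1.5))))   # tiny discrepancy
```

Note that only invertibility of T is used here; orthogonality of T is not required for the change of coordinates itself.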
For nonlinear systems, the linearization J(x*, p) is often the first means to establish the qualitative behavior of the flow φ_t(x0), and is often sufficient for a local (i.e., close to the equilibrium point) qualitative study. The next subsection contains some examples to illustrate this approach.

4.1.2 The center-manifold theorem

The preceding section on the Hartman-Grobman theorem gave a great hint on the behavior of nonlinear systems in a vicinity of hyperbolic equilibrium points (i.e., those whose eigenvalues all have real part different from zero). Nevertheless, a great deal of nonlinear dynamics is about non-hyperbolic problems, i.e. systems with equilibria that have eigenvalues on the imaginary axis. In these cases the Hartman-Grobman theorem does not apply, and nothing can be said about the flow near the equilibrium point by looking only at the linearization. A typical example is the system

    ẋ = −x^3

which has the (globally) asymptotically stable equilibrium point x* = 0. The linearization about x* = 0, reading ẋ = 0, nevertheless suggests that x(t) = x0 ∀ t ≥ 0, i.e. that the equilibrium point is stable but not asymptotically stable.

Another (worse) example is given by the dynamics

    ẋ = x^2

with a saddle equilibrium at x* = 0. The linearization is the same as in the preceding case (ẋ = 0), and stable. Nevertheless, the equilibrium is unstable (a saddle)¹.

¹ Furthermore, the system presents finite escape time for all positive initial conditions x0 > 0, as discussed in Section ??.

These examples already show that interesting things happen which cannot be explained using linear arguments. This explains why some experts say that the non-hyperbolic dynamics are the real nonlinear ones.

Nevertheless, since all hyperbolic eigenvalues lead locally to simple dynamics, with the corresponding eigenvectors tangent to stable and unstable manifolds, it is particularly interesting from a practical point of view to analyze the stability of such systems, in order to design them accurately, or eventually stabilize them using feedback control. Thus, assume for the moment that all eigenvalues with non-zero real part are negative, so that the system can be brought by a linear transformation into the form

    ẋ_s = A_s x_s + f_s(x_s, x_z)
    ẋ_z = A_z x_z + f_z(x_s, x_z)        (4.3)

with indices s, z representing the stable and center (Zentrum) components of the state. The matrices A_s and A_z are given by the linearization of the system at the equilibrium point x*; the eigenvalues of A_z have zero real part, while the eigenvalues of A_s all have negative real part. The functions f_s and f_z satisfy

    f_s(0, 0) = 0,  f_z(0, 0) = 0,  ∂f_i(0, 0)/∂x_j = 0,  i, j ∈ {s, z}.        (4.4)

The question is then how the state vector evolves over time close to the origin (x_s, x_z) = (0, 0). Given the preceding discussion on the Hartman-Grobman theorem, it is intuitive that the stable part locally converges to some manifold

    x_s = h(x_z)        (4.5)

(the center manifold) which reaches the origin at x_z = 0 and is tangential to the x_z-axis there, i.e.

    h(0) = 0,  dh(0)/dx_z = 0.

Locally, the dynamics will completely depend on the flow on this manifold h(x_z), given by the restriction

    ξ̇ = A_z ξ + f_z(ξ, h(ξ)),    ξ(0) = x_z0.        (4.6)

On the other hand, (4.5) implies that

    ẋ_s |_{x_s = h(x_z)} = (dh/dx_z) ẋ_z = (dh/dx_z)[A_z x_z + f_z(x_z, h(x_z))] = A_s h(x_z) + f_s(h(x_z), x_z).

Summarizing, the center manifold h(x_z) has to satisfy the partial differential equation

    (dh/dx_z)[A_z x_z + f_z(x_z, h(x_z))] = A_s h(x_z) + f_s(h(x_z), x_z)
    h(0) = 0,  dh(0)/dx_z = 0.        (4.7)

This result is formally stated in the following theorem (see e.g. [Sastry(1999), Guckenheimer and Holmes(1977)]).

Theorem 4.1.2. Let f ∈ C^r(X), with r ≥ 1 and X ⊆ R^n an open set, 0 ∈ X. If f(0) = 0, and J(0) has z eigenvalues with zero real part, and s = n − z eigenvalues
with negative real part, then the system can be brought into the diagonal-like form (4.3), and there exist a constant δ > 0 and a function h ∈ C^r(N_δ), where N_δ is a ball of radius δ around the origin, with h(0) = 0, ∂h(0)/∂x = 0, that defines the local center manifold

    W^c = {x = [x_s, x_z] ∈ N_δ | x_s = h(x_z)}        (4.8)

which satisfies (4.7) in N_δ; the flow on W^c is determined by (4.6).

If the origin x_z = 0 of (4.6) is (locally) asymptotically stable in the set W^c, then so is the origin of the complete system (4.3), given that, locally, W^c is an attracting set for the flow. This is formally reflected in the following theorem (cp. [?]).

Theorem 4.1.3. If the dynamics (4.6) are stable, then there exists a constant γ > 0 so that, locally, for t large enough, the solutions of (4.3) satisfy

    x_z(t) = ξ(t) + O(e^{−γ t})
    x_s(t) = h[ξ(t)] + O(e^{−γ t})        (4.9)

where ξ(t) denotes the solution of (4.6).

This result allows one to analyze the stability properties of nonlinear systems on a lower-dimensional submanifold of R^n (the center manifold). Although in most cases it is hard or even impossible to determine the exact mathematical form of the function h (4.5), it is possible to employ a polynomial approximation of W^c, i.e. to consider a Taylor approximation of h(x_z) satisfying (4.7) up to terms of order k + 1, i.e. a function φ(x_z) satisfying

    (dφ/dx_z)[A_z x_z + f_z(x_z, φ(x_z))] = A_s φ(x_z) + f_s(φ(x_z), x_z) + O(|x_z|^{k+1})
    φ(0) = 0,  dφ(0)/dx_z = 0.

This approach is illustrated with the following example (e.g. [Perko(2001), Sastry(1999)]):

    ẋ_s = −x_s + a x_z^2
    ẋ_z = x_s x_z        (4.10)

Consider the second-order approximation

    φ(x_z) = c0 + c1 x_z + c2 x_z^2

of the center manifold. The associated equation set (4.7) reads

    (c1 + 2 c2 x_z)(c0 + c1 x_z + c2 x_z^2) x_z = −(c0 + c1 x_z + c2 x_z^2) + a x_z^2,

implying that an approximation up to fourth order is achieved if c0 = c1 = 0 and c2 = a. This yields the approximation of the center dynamics on φ(x_z) = a x_z^2 as

    ẋ_z = a x_z^3.

Accordingly, the origin is stable for a < 0, and unstable for a > 0. This is illustrated in Figure 4.1, together with the approximation φ(x_z) of the center manifold h(x_z). It can be appreciated in this figure that locally (i.e. close to the origin) the center manifold is well approximated by the parabolic curve φ(x_z) = a x_z^2, to which the stable trajectories converge.

Figure 4.1: Phase plane of the dynamical system (4.10) for a = −1 (left) and a = 1 (right).

Another powerful application of the center manifold theorem consists in the study of bifurcations, which will be addressed in the following chapter. Before entering the fascinating subject of bifurcation theory, nevertheless, the non-local behavior of nonlinear flows should be studied in more detail. This is the subject of the next section.
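The example (4.10) lends itself to a quick numerical experiment. The sketch below (illustrative Python; the initial condition, step size, and horizon are arbitrary choices) integrates (4.10) for a = −1 and checks that the trajectory first collapses onto the approximate center manifold x_s ≈ a x_z^2 and then slowly drifts toward the origin:

```python
# Numerical illustration of example (4.10) for a = -1 (the stable case):
#   xs' = -xs + a*xz**2,   xz' = xs*xz.
a = -1.0

def f(xs, xz):
    return -xs + a * xz**2, xs * xz

def rk4(xs, xz, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(xs, xz)
    k2 = f(xs + h/2 * k1[0], xz + h/2 * k1[1])
    k3 = f(xs + h/2 * k2[0], xz + h/2 * k2[1])
    k4 = f(xs + h * k3[0], xz + h * k3[1])
    return (xs + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            xz + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

xs, xz = 0.3, 0.4               # arbitrary start near the origin
for _ in range(20000):          # integrate to t = 200 with h = 0.01
    xs, xz = rk4(xs, xz, 0.01)

dist_manifold = abs(xs - a * xz**2)   # distance to phi(xz) = a*xz**2
norm2 = xs**2 + xz**2                 # squared distance to the origin
print(dist_manifold, norm2)
```

Both printed quantities come out small, in line with (4.9): the off-manifold part decays exponentially, while on the manifold ẋ_z = −x_z^3 produces only a slow, algebraic approach to the origin.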
4.2 Examples

4.2.1 The unforced Duffing equation

    ẍ + d ẋ − x + x^3 = 0        (4.11)

Figure 4.2: Phase plane of the Duffing equation (4.11) for: d = −3 (upper left: a saddle with two repulsor nodes having generalized eigenvectors), d = −0.1 (upper middle: a saddle with two unstable spirals), d = 0 (upper right, homoclinic bifurcation: a saddle with homoclinic trajectories and two neighboring centers), d = 0.1 (lower left: a saddle with two attractor spirals), d = 0.6 (lower middle: a saddle with two neighboring attractor spirals), and d = 3 (lower right: a saddle with two attractor nodes having generalized eigenvectors).

4.2.2 A stable limit cycle with radius 1

    ẋ = x − y − x(x^2 + y^2)
    ẏ = x + y − y(x^2 + y^2)        (4.12)

in polar coordinates

    ṙ = r(1 − r^2)
    θ̇ = 1.        (4.13)

Figure 4.3: Limit cycle about an equilibrium point, with local behavior close to the equilibrium equivalent to its linearization.

4.2.3 Counterexamples

    ẋ = −x^3,    ẋ = x^2        (4.14)

    ẋ_1 = x_2,    ẋ_2 = −x_1 − x_1^2 x_2        (4.15)
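The qualitative picture of Figure 4.2 can be reproduced numerically. The following Python sketch (an illustration, not part of the original examples; damping value, initial condition, and horizon are arbitrary choices) integrates (4.11) for d = 0.6 with a classical RK4 scheme; a trajectory started inside the right potential well settles at the stable equilibrium (x, ẋ) = (1, 0):

```python
# Numerical companion to the unforced Duffing equation (4.11),
#   xdd + d*xd - x + x**3 = 0,
# written in first-order form with v = xd. Equilibria sit at x = 0
# (saddle) and x = +/-1 (attractors for d > 0).
d = 0.6

def f(x, v):
    return v, -d * v + x - x**3

def rk4(x, v, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x, v)
    k2 = f(x + h/2 * k1[0], v + h/2 * k1[1])
    k3 = f(x + h/2 * k2[0], v + h/2 * k2[1])
    k4 = f(x + h * k3[0], v + h * k3[1])
    return (x + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, v = 0.5, 0.0                  # start inside the right potential well
for _ in range(20000):           # integrate to t = 200 with h = 0.01
    x, v = rk4(x, v, 0.01)

print(x, v)    # settles near (1, 0); the left well hosts the twin (-1, 0)
```

Sweeping d over the values listed in the caption of Figure 4.2 (and over several initial conditions) reproduces the transition from repulsors through the homoclinic case d = 0 to attractors.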
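The limit-cycle claim in (4.12)/(4.13) is also easy to probe numerically. The sketch below (illustrative Python; the two start points are arbitrary, one inside and one outside the unit circle) checks that both trajectories approach the cycle r = 1:

```python
import math

# Check of (4.12): trajectories from inside and outside the unit circle
# both approach the limit cycle r = 1, as rdot = r(1 - r**2) in (4.13)
# predicts.
def f(x, y):
    r2 = x*x + y*y
    return x - y - x*r2, x + y - y*r2

def rk4(x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x, y)
    k2 = f(x + h/2 * k1[0], y + h/2 * k1[1])
    k3 = f(x + h/2 * k2[0], y + h/2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

radius_errors = []
for x, y in [(0.1, 0.0), (2.5, -1.0)]:    # inside / outside the cycle
    for _ in range(5000):                 # integrate to t = 50
        x, y = rk4(x, y, 0.01)
    radius_errors.append(abs(math.hypot(x, y) - 1.0))

print(radius_errors)    # both errors are tiny
```

Near r = 1 the polar form linearizes to ṙ ≈ −2(r − 1), so the radius error shrinks like e^{−2t}, which is why the horizon t = 50 is ample.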
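The counterexamples (4.14) and (4.15) can be made concrete as well. The following sketch (illustrative Python; x0 and the time horizons are arbitrary choices) uses the closed-form solutions of the scalar systems in (4.14) and integrates (4.15) numerically:

```python
import math

# xdot = -x**3 solves to x(t) = x0 / sqrt(1 + 2*x0**2*t): algebraic
# (not exponential) decay to 0, invisible to the linearization xdot = 0.
x0 = 0.5
x_cubic = x0 / math.sqrt(1.0 + 2.0 * x0**2 * 100.0)   # x(100)
print(x_cubic)          # small, slowly approaching 0

# xdot = x**2 solves to x(t) = x0 / (1 - x0*t): finite escape at
# t* = 1/x0, although the linearization is the same harmless xdot = 0.
t_escape = 1.0 / x0
print(t_escape)

# (4.15): the linearization is a pure center, but -x1**2*x2 acts as a
# weak damping, so the "energy" x1**2 + x2**2 slowly decays.
def f(x1, x2):
    return x2, -x1 - x1**2 * x2

def rk4(x1, x2, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x1, x2)
    k2 = f(x1 + h/2 * k1[0], x2 + h/2 * k1[1])
    k3 = f(x1 + h/2 * k2[0], x2 + h/2 * k2[1])
    k4 = f(x1 + h * k3[0], x2 + h * k3[1])
    return (x1 + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x1, x2 = 1.0, 0.0
for _ in range(50000):            # integrate to t = 500 with h = 0.01
    x1, x2 = rk4(x1, x2, 0.01)

print(x1**2 + x2**2)   # well below the initial value 1.0
```

In all three cases the linearization at the origin (ẋ = 0, respectively a center) misses the decisive qualitative feature, which is exactly the point of Section 4.2.3.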