
An Invariance Principle in the Theory

of Stability
JOSEPH P. LASALLE

THE stability theorems of Lyapunov have been among


the oldest and strongest pillars of control theory. The centenary of Lyapunov's celebrated 1892 memoir was recently
marked with its English translation [17], while the 1907 French
translation was reprinted in 1949 by Princeton University
Press [16].
Applications of Lyapunov stability concepts to control problems began to appear in Russia in the 1930s-1940s. Significant
theoretical results from the 1950s were summarized in the books
by Chetayev [3], Lurie [15], Malkin [18], Letov [14], Zubov
[22], and Krasovskii [7]. In the post-1957 Sputnik era, English
translations of these books, and of other Russian works, further
increased the interest already stimulated by "Contributions to the
Theory of Nonlinear Oscillations," a five volume series (1950,
1952, 1956, 1958, and 1960) edited by Lefschetz, and published
by Princeton University Press. The fourth volume contained a
detailed and rigorous survey of Lyapunov stability theory by
Antosiewicz [1]. The 1960 survey by Kalman and Bertram [6]
was more accessible and had a stronger impact on engineering
audiences, as did the books by Lefschetz [12], Lefschetz and
LaSalle [13], and a paper by LaSalle [10]. This activity continued in the early 1960s when several volumes of "Contributions
to Differential Equations" were published, including the important results by Yoshizawa [20], whose 1966 book [21] presented
a collection of advanced stability results. The status of stability
theory in the pre-1965 period is summarized in the scholarly
work by Hahn [4], which is the most comprehensive source
covering that period.
Among the innovations from that period are the results which
settle the issue of existence of Lyapunov functions. Particularly
important among the results of this type are the "converse theorems" of Massera [19], Krasovskii [7] and Kurzweil [8], which
found applications in diverse areas of control theory. Radially
unbounded Lyapunov functions were introduced in 1952 by
Barbashin and Krasovskii [2] to guarantee stability in the large,
that is, global stability. The same Barbashin-Krasovskii paper
and the book by Krasovskii [7] initiated another line of research
which culminated in this paper by LaSalle, and its extended 1968
version [11].

This line of research was aimed at extracting from Lyapunov's stability theorem more information about the asymptotic behavior of the solutions x(t) ∈ ℝⁿ of

$$\dot{x} = f(x, t), \qquad f(0, t) = 0 \ \text{for all}\ t. \tag{1}$$

With a positive definite function V(x, t), a theorem of Lyapunov establishes asymptotic stability of the equilibrium x = 0 if V̇(x, t) is negative definite along the solutions x(t) of (1). If V̇ is only nonpositive, the same theorem guarantees stability, but does not reveal whether x(t) converges to zero or to a set in ℝⁿ. For autonomous systems, ẋ = f(x), a theorem of Barbashin and Krasovskii [2] (see also Theorem 14.1 on p. 67 of [7]) examines the set E in which V̇ = 0. If this set does not contain any positive semi-trajectory (solution x(t) for all t ≥ 0) other than x(t) ≡ 0, then x(t) → 0 as t → ∞. This result gave a practical test for asymptotic stability when V̇ is only nonpositive, instead of negative definite. In his 1960 paper [10], LaSalle extracted further information on the asymptotic behavior of x(t). Using the limit sets and the largest invariant set M of ẋ = f(x) contained in the set E where V̇ is zero, he showed that x(t) converges to the set M. This set need not be an equilibrium, but can be a compact limit set like a limit cycle.
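
As a quick illustration of this point (not part of the original commentary), the sketch below simulates a planar system, chosen purely for the demonstration, whose natural V has only a negative semidefinite derivative; V̇ vanishes on the origin and on the unit circle, the largest invariant set there consists of exactly those two pieces, and bounded solutions starting away from the origin converge to the limit cycle r = 1.

```python
# Illustrative only: a planar system whose Lyapunov-like function has a merely
# negative semidefinite derivative, yet the invariance argument localizes the
# limit set as {origin} together with the unit circle.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    s = 1.0 - x[0]**2 - x[1]**2            # 1 - r^2
    return [x[1] + x[0]*s, -x[0] + x[1]*s]

# With V = (r^2 - 1)^2 / 4 one gets Vdot = -r^2 (r^2 - 1)^2 <= 0, which vanishes
# only on r = 0 and r = 1; the unit circle carries a periodic solution, so
# bounded solutions away from the origin approach this limit cycle.
sol = solve_ivp(f, (0.0, 40.0), [0.1, 0.0], t_eval=np.linspace(0.0, 40.0, 401))
r = np.hypot(sol.y[0], sol.y[1])
print("r(0) = %.2f  ->  r(T) = %.4f" % (r[0], r[-1]))   # approaches 1
```
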
This result of LaSalle, which he later termed the Invariance Principle, had a significant connection with the then new concept of observability. For the system ẋ = Ax, a positive definite Lyapunov function V = x'Px has the derivative V̇ = x'(A'P + PA)x = −x'Qx. Suppose that Q is positive semi-definite, so that it can be expressed as Q = C'C for some matrix C. It then follows that V̇ = −y'y, where y = Cx can be treated as an output of ẋ = Ax. Clearly, the set E where V̇ = 0 is the set where y(t) ≡ 0. If the pair (A, C) is completely observable, then y(t) ≡ 0 implies x(t) ≡ 0. Hence, no nonzero positive semi-trajectory x(t) is contained in E, which proves, via Barbashin-Krasovskii [2], that ẋ = Ax is asymptotically stable. However, if the system is only stable, with a pair of purely imaginary eigenvalues unobservable from y = Cx, then LaSalle's Principle shows that x(t) converges to the periodic solution in the unobservable subspace.
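
The observability argument can be checked numerically. The sketch below is illustrative only (the matrices are arbitrary choices, not from the text): it solves the Lyapunov equation A'P + PA = −C'C for an observable pair (A, C) with A Hurwitz and confirms that P is positive definite even though Q = C'C is only positive semidefinite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # Hurwitz: eigenvalues -1, -2
C = np.array([[1.0, 0.0]])            # single output y = x1, so C'C is only semidefinite

# observability matrix [C; CA] must have full rank for the argument to apply
obs = np.vstack([C, C @ A])
print("observable:", np.linalg.matrix_rank(obs) == A.shape[0])

# solve A'P + PA = -C'C  (scipy solves a X + X a^H = q, hence the transpose)
P = solve_continuous_lyapunov(A.T, -C.T @ C)
print("P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
```
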
The Invariance Principle was subsequently extended to periodic and almost periodic systems, but it does not hold for more general nonautonomous systems (1). To obtain similar information on the asymptotic behavior of x(t), Yoshizawa [20] derived a set of conditions under which V̇(x, t) ≤ −W(x), where W(x) is "positive definite with respect to a set M," implies that x(t) converges to M. The main theorem in this paper by LaSalle modifies and improves this result of Yoshizawa, and provides the strongest convergence result for nonautonomous systems. To go beyond this result, further restrictions on f(x, t) are needed. One of them is the so-called "persistency of excitation" condition in adaptive identification and control. Typically, an adaptive algorithm guarantees that V̇ ≤ −e², where e is the scalar tracking error. The Yoshizawa-LaSalle theorem provides the conditions under which e(t) converges to zero. It has thus become an indispensable tool in adaptive control design. It has also become instrumental in deducing stability from passivity properties, as in feedback passivation and backstepping designs of nonlinear systems.
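
A minimal sketch of the adaptive-estimation situation just described, with a hypothetical scalar gradient estimator (plant, gain, and signals are invented for illustration): the parameter-error Lyapunov function gives V̇ = −e², so the error e(t) tends to zero in both runs below, but the parameter estimate converges to the true value only when the regressor is persistently exciting.

```python
# Hypothetical example: estimate theta_true in y = theta_true * u(t) with the
# gradient law theta_hat' = -gamma * e * u, where e = (theta_hat - theta_true) * u.
# Then V = (theta_hat - theta_true)^2 / (2*gamma) has Vdot = -e^2.
import numpy as np
from scipy.integrate import solve_ivp

theta_true, gamma = 2.0, 5.0

def estimator(u):
    return lambda t, th: [-gamma * (th[0] - theta_true) * u(t) ** 2]

t_eval = np.linspace(0.0, 10.0, 1001)
for label, u in [("persistently exciting u(t)=1", lambda t: 1.0),
                 ("vanishing u(t)=exp(-t)", lambda t: np.exp(-t))]:
    sol = solve_ivp(estimator(u), (0.0, 10.0), [0.0], t_eval=t_eval)
    theta_hat = sol.y[0]
    e_final = (theta_hat[-1] - theta_true) * u(10.0)
    # e(10) is tiny in both cases; theta_hat(10) reaches 2 only in the first case
    print(f"{label}: e(10) = {e_final:.2e}, theta_hat(10) = {theta_hat[-1]:.3f}")
```
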
Other settings where LaSalle's Invariance Principle has been
studied are infinite dimensional systems, as in the work of
Hale [5], and stochastic systems governed by continuous-time
Markov processes, as discussed by Kushner in [9].
REFERENCES
[1] H. ANTOSIEWICZ, "A survey of Lyapunov's second method," in Contr. to Nonlinear Oscillations, S. Lefschetz, Ed., 4:141-166 (Princeton Univ. Press, Princeton, NJ), 1958.
[2] E.A. BARBASHIN AND N.N. KRASOVSKII, "On the stability of motion in the large," Dokl. Akad. Nauk USSR, 86:453-456, 1952.
[3] N.G. CHETAYEV, The Stability of Motion, Pergamon Press (Oxford), 1961. (Russian original, 1946.)
[4] W. HAHN, Stability of Motion (Springer-Verlag, New York), 1967.
[5] J.K. HALE, "Dynamical systems and stability," J. Math. Anal. Appl., 26:39-59, 1969.
[6] R.E. KALMAN AND J.E. BERTRAM, "Control system analysis and design via the 'second method' of Lyapunov, I: Continuous-time systems," J. Basic Engineering (Trans. ASME), 82D:371-393, 1960.
[7] N.N. KRASOVSKII, Stability of Motion, Stanford Univ. Press (Stanford, CA), 1963. (Russian original, 1959.)
[8] J. KURZWEIL, "The converse second Liapunov's theorem concerning the stability of motion," Czechoslovak Math. J., 6(81):217-259 & 455-473, 1956.
[9] H.J. KUSHNER, "Stochastic stability," in Stability of Stochastic Dynamical Systems, R. Curtain, Ed., Lect. Notes in Math., Springer-Verlag (New York), 294:97-124, 1972.
[10] J.P. LASALLE, "Some extensions of Liapunov's second method," IRE Trans. Circuit Theory, CT-7:520-527, 1960.
[11] J.P. LASALLE, "Stability theory for ordinary differential equations," J. Differential Equations, 4:57-65, 1968.
[12] S. LEFSCHETZ, Differential Equations: Geometric Theory, Interscience, Wiley (New York), 1957.
[13] S. LEFSCHETZ AND J.P. LASALLE, Stability by Liapunov's Direct Method, with Applications, Academic Press (New York), 1961.
[14] A.M. LETOV, Stability in Nonlinear Control Systems (English translation), Princeton Univ. Press (Princeton, NJ), 1961. (Russian original, 1955.)
[15] A.E. LURIE, Some Non-linear Problems in the Theory of Automatic Control (English translation), H.M.S.O., London, 1957. (Russian original, 1951.)
[16] A.M. LYAPUNOV, "Problème général de la stabilité du mouvement" (in French), Ann. Fac. Sci. Toulouse, 9:203-474, 1907. Reprinted in Ann. Math. Study, No. 17, 1949, Princeton Univ. Press (Princeton, NJ).
[17] A.M. LYAPUNOV, "The general problem of the stability of motion" (translated into English by A.T. Fuller), Int. J. Control, 55:531-773, 1992.
[18] I.G. MALKIN, Theory of Stability of Motion, AEC (Atomic Energy Commission) Translation 3352, Dept. of Commerce, United States, 1958. (Russian original, 1952.)
[19] J.L. MASSERA, "Contributions to stability theory," Ann. Math., 64:182-206, 1956.
[20] T. YOSHIZAWA, "Asymptotic behavior of solutions of a system of differential equations," in Contributions to Differential Equations, 1:371-387, 1963.
[21] T. YOSHIZAWA, Stability Theory by Liapunov's Second Method, The Mathematical Society of Japan, Publication No. 9 (Tokyo), 1966.
[22] V.I. ZUBOV, Mathematical Methods for the Study of Automatic Control Systems, Pergamon Press (Oxford), 1962. (Russian original, 1957.)

P.V.K. & J.B.


An Invariance Principle in the Theory of Stability


JOSEPH P. LASALLE¹
Center for Dynamical Systems
Brown University, Providence, Rhode Island

1. Introduction
The purpose of this paper is to give a unified presentation of Liapunov's
theory of stability that includes the classical Liapunov theorems on stability
and instability as well as their more recent extensions. The idea being exploited here had its beginnings some time ago. It was, however, the use made
of this idea by Yoshizawa in [7] in his study of nonautonomous differential
equations and by Hale in [1] in his study of autonomous functional-differential equations that caused the author to return to this subject and to adopt
the general approach and point of view of this paper. This produces some
new results for dynamical systems defined by ordinary differential equations
which demonstrate the essential nature of a Liapunov function and which
may be useful in applications. Of greater importance, however, is the possibility, as already indicated by Hale's results for functional-differential equations, that these ideas can be extended to more general classes of dynamical
systems. It is hoped, for instance, that it may be possible to do this for some
special types of dynamical systems defined by partial differential equations.

In Section 2 we present some basic results for ordinary differential equations. Theorem I is a fundamental stability theorem for nonautonomous
systems and is a modified version of Yoshizawa's Theorem 6 in [7]. A simple
example shows that the conclusion of this theorem is the best possible.
However, whenever the limit sets of solutions are known to have an invariance property, then sharper results can be obtained. This "invariance principle" explains the title of this paper. It had its origin for autonomous and periodic systems in [2] and [4], although we present here improved versions of those results. Miller in [5] has established an invariance property

¹ This research was supported in part by the National Aeronautics and Space Administration under Grant No. NGR-40-002-015 and under Contract No. NAS8-11264, in part
by the United States Air Force through the Air Force Office of Scientific Research under
Grant No. AF-AFOSR-693-65, and in part by the United States Army Research Office,
Durham, under Contract No. DA-31-124-ARO-D-270.

Reprinted with permission from Differential Equations and Dynamical Systems (New York: Academic Press, 1967), J. Hale and J. P. LaSalle, eds., Joseph P. LaSalle, "An Invariance Principle in the Theory of Stability," pp. 277-286.


for almost periodic systems and obtains thereby a similar stability theorem
for almost periodic systems. Since little attention has been paid to theorems
which make possible estimates of regions of attraction (regions of asymptotic stability) for nonautonomous systems, results of this type are included.
Section 3 is devoted to a brief discussion of some of Hale's recent results [1]
for autonomous functional-differential equations.

2. Ordinary Differential Equations


Consider the system

$$\dot{x} = f(t, x) \tag{1}$$

where x is an n-vector, f is a continuous function on Rⁿ⁺¹ to Rⁿ and satisfies any one of the conditions guaranteeing uniqueness of solutions. For each x in Rⁿ, we define $|x| = (x_1^2 + \cdots + x_n^2)^{1/2}$, and for E a closed set in Rⁿ we define d(x, E) = min{|x − y|; y in E}. Since we do not wish to confine ourselves to bounded solutions, we introduce the point at ∞ and define d(x, ∞) = |x|⁻¹. Thus, when we write E* = E ∪ {∞}, we shall mean d(x, E*) = min{d(x, E), d(x, ∞)}. If x(t) is a solution of (1), we say that x(t) approaches E as t → ∞ if d(x(t), E) → 0 as t → ∞. If we can find such a set E, we have obtained information about the asymptotic behavior of x(t) as t → ∞. The best that we could hope to do is to find the smallest closed set Ω that x(t) approaches as t → ∞. This set Ω is called the positive limit set of x(t) and the points p in Ω are called the positive limit points of x(t). In exactly the same way, one defines x(t) → E as t → −∞, negative limit sets, and negative limit points. This is exactly Birkhoff's concept of limit sets. A point p is a positive limit point of x(t) if and only if there is a sequence of times tₙ approaching ∞ as n → ∞ and such that x(tₙ) → p as n → ∞. In the above, it may be that the maximal interval of definition of x(t) is [0, τ). This causes no difficulty since, in the results to be presented here, we need only, with respect to time t, replace ∞ by τ. We usually ignore this possibility and speak as though our solutions are defined on [0, ∞) or (−∞, ∞).

Let V(t, x) be a C¹ function on [0, ∞) × Rⁿ to R, and let G be any set in Rⁿ. We shall say that V is a Liapunov function on G for Eq. (1) if V(t, x) ≥ 0 and V̇(t, x) ≤ −W(x) ≤ 0 for all t > 0 and all x in G, where W is continuous on Rⁿ to R, and

$$\dot{V} = \frac{\partial V}{\partial t} + \sum_{i=1}^{n} \frac{\partial V}{\partial x_i}\, f_i. \tag{2}$$

We define (Ḡ is the closure of G)

$$E = \{x;\ W(x) = 0,\ x\ \text{in}\ \bar{G}\}.$$

The following result is then a modified but closely related version of


Yoshizawa's Theorem 6 in [7].

Theorem 1. If V is a Liapunov function on G for Eq. (1), then each solution x(t) of (1) that remains in G for all t > t₀ ≥ 0 approaches E* = E ∪ {∞} as t → ∞, provided one of the following conditions is satisfied:

(i) For each p in Ḡ there is a neighborhood N of p such that |f(t, x)| is bounded for all t > 0 and all x in N.

(ii) W is C¹ and Ẇ is bounded from above or below along each solution which remains in G for all t > t₀ ≥ 0.

If E is bounded, then each solution of (1) that remains in G for t > t₀ ≥ 0 either approaches E or ∞ as t → ∞.
Thus this theorem explains precisely the nature of the information given
by a Liapunov function. A Liapunov function relative to a set G defines a
set E, which under the conditions of the theorem contains (locates) all the
positive limit sets of solutions which for positive time remain in G. The problem in applying the result is to find "good" Liapunov functions. For instance, the zero function V = 0 is a Liapunov function for the whole space Rⁿ and condition (ii) is satisfied but gives no information since E = Rⁿ. It is trivial but useful for applications to note that if V₁ and V₂ are Liapunov functions on G, then V = V₁ + V₂ is also a Liapunov function and E = E₁ ∩ E₂. If E is smaller than either E₁ or E₂, then V is a "better" Liapunov function than either V₁ or V₂ and is always at least as "good" as either of the two.
Condition (i) of Theorem 1 is essentially the one used by Yoshizawa. We now look at a simple example where condition (ii) is satisfied and condition (i) is not. The example also shows that the conclusion of the theorem is the best possible. Consider ẍ + p(t)ẋ + x = 0, where p(t) ≥ δ > 0. Define 2V = x² + y², where y = ẋ. Then V̇ = −p(t)y² ≤ −δy² and V is a Liapunov function on R². Now W = δy² and Ẇ = 2δyẏ = −2δ(xy + p(t)y²) ≤ −2δxy. Since all solutions are evidently bounded for all t > 0, condition (ii) is satisfied. Here E is the x-axis (y = 0) and for each solution x(t), y(t) = ẋ(t) → 0 as t → ∞. Noting that the equation ẍ + (2 + eᵗ)ẋ + x = 0 has a solution x(t) = 1 + e⁻ᵗ, we see that this is the best possible result without further restrictions on p.
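
A brief numerical check of this example may be helpful (it is not part of LaSalle's text; the initial conditions and the bounded choice of p are arbitrary): with a bounded damping coefficient the velocity y = ẋ tends to zero, while with p(t) = 2 + eᵗ the displacement settles near 1 rather than 0, as the explicit solution above predicts.

```python
# Illustrative simulation of x'' + p(t) x' + x = 0 for two choices of p(t) >= delta > 0.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(p):
    # state z = (x, y) with y = x'
    return lambda t, z: [z[1], -z[0] - p(t) * z[1]]

# bounded damping p(t) = 2 + sin(t) >= 1: Theorem 1 guarantees only y -> 0
sol1 = solve_ivp(rhs(lambda t: 2.0 + np.sin(t)), (0.0, 30.0), [1.0, 1.0],
                 t_eval=np.linspace(0.0, 30.0, 3001))

# unbounded damping p(t) = 2 + exp(t), started on the known solution 1 + exp(-t)
sol2 = solve_ivp(rhs(lambda t: 2.0 + np.exp(t)), (0.0, 10.0), [2.0, -1.0],
                 t_eval=np.linspace(0.0, 10.0, 1001), method="Radau")

print("bounded p:   |y(T)| =", abs(sol1.y[1, -1]))   # small: y -> 0
print("unbounded p:  x(T)  =", sol2.y[0, -1])        # stays near 1, not 0
```
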


In order to use Theorem 1, there must be some means of determining


which solutions remain in G. The following corollary, which is an obvious
consequence of Theorem 1, gives one way of doing this and also provides,
for nonautonomous systems, a method for estimating regions of attraction.

Corollary 1. Assume that there exist continuous functions u(x) and v(x) on Rⁿ to R such that u(x) ≥ V(t, x) ≥ v(x) for all t ≥ 0. Define $Q_\eta^+ = \{x;\ u(x) < \eta\}$ and let G⁺ be a component of $Q_\eta^+$. Let G denote the component of $Q_\eta = \{x;\ v(x) < \eta\}$ containing G⁺.

If V is a Liapunov function on G for (1) and the conditions of Theorem 1 are satisfied, then each solution of (1) starting in G⁺ at any time t₀ ≥ 0 remains in G for all t > t₀ and approaches E* as t → ∞. If G is bounded and E⁰ = E ∩ Ḡ ⊂ G⁺, then E⁰ is an attractor and G⁺ is in its region of attraction.
In general we know that if x(t) is a solution of (1) (in fact, if x(t) is any continuous function on R to Rⁿ), then its positive limit set is closed and connected. If x(t) is bounded, then its positive limit set is compact. There are, however, special classes of differential equations where the limit sets of solutions have an additional invariance property which makes possible a refinement of Theorem 1. The first of these are the autonomous systems

$$\dot{x} = f(x). \tag{3}$$

The limit sets of solutions of (3) are invariant sets. If x(t) is defined on [0, ∞) and if p is a positive limit point of x(t), then points on the solution through p on its maximal interval of definition are positive limit points of x(t). If x(t) is bounded for t > 0, then it is defined on [0, ∞), its positive limit set Ω is compact, nonempty, and solutions through points p of Ω are defined on (−∞, ∞) (i.e., Ω is invariant). If the maximal domain of definition of x(t) for t > 0 is finite, then x(t) has no finite positive limit points: that is, if the maximal interval of definition of x(t) for t > 0 is [0, β), then x(t) → ∞ as t → β. As we have said before, we will always speak as though our solutions are defined on (−∞, ∞) and it should be remembered that finite escape time is always a possibility unless there is, as for example in Corollary 2 below, some condition that rules it out. In Corollary 3 below, the solutions might well go to infinity in finite time.
The invariance property of the limit sets of solutions of autonomous systems (3) now enables us to refine Theorem 1. Let V be a C¹ function on Rⁿ to R. If G is any arbitrary set in Rⁿ, we say that V is a Liapunov function on G for Eq. (3) if V̇ = (grad V) · f does not change sign on G. Define E = {x; V̇(x) = 0, x in Ḡ}, where Ḡ is the closure of G. Let M be the largest invariant set in E. M will be a closed set. The fundamental stability theorem for autonomous systems is then the following:

Theorem 2. If V is a Liapunov function on G for (3), then each solution x(t) of (3) that remains in G for all t > 0 (t < 0) approaches M* = M ∪ {∞} as t → ∞ (t → −∞). If M is bounded, then either x(t) → M or x(t) → ∞ as t → ∞ (t → −∞).
This one theorem contains all of the usual Liapunov-like theorems on stability and instability of autonomous systems. Here, however, there are no conditions of definiteness for V or V̇, and it is often possible to obtain stability information about a system with these more general types of Liapunov functions. The first corollary below is a stability result which for applications has been quite useful, and the second illustrates how one obtains information on instability. Cetaev's instability theorem is similarly an immediate consequence of Theorem 2 (see Section 3).
Corollary 2. Let G be a component of $Q_\eta = \{x;\ V(x) < \eta\}$. Assume that G is bounded, V̇ ≤ 0 on G, and M⁰ = M ∩ Ḡ ⊂ G. Then M⁰ is an attractor and G is in its region of attraction. If, in addition, V is constant on the boundary of M⁰, then M⁰ is a stable attractor.

Note that if M⁰ consists of a single point p, then p is asymptotically stable and G provides an estimate of its region of asymptotic stability.
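
As an illustration of how Theorem 2 and Corollary 2 are typically used (the system, damping constant, and level η = 1.5 below are choices made for this sketch, not taken from the paper), consider the damped pendulum with V equal to its total energy; V̇ = −cω² is only negative semidefinite, yet the only invariant set contained in {ω = 0} within the closure of the sublevel component G is the origin, which is therefore an attractor with G in its region of attraction.

```python
# Illustrative sketch: theta'' + c*theta' + sin(theta) = 0 with
# V = w**2/2 + 1 - cos(theta), so Vdot = -c*w**2 <= 0.  G is the component of
# {V < 1.5} containing the origin; its closure contains no equilibrium other
# than (0, 0), so M0 = {(0, 0)} and Corollary 2 predicts convergence from G.
import numpy as np
from scipy.integrate import solve_ivp

c = 0.3
V = lambda th, w: 0.5 * w**2 + 1.0 - np.cos(th)

def pendulum(t, z):
    th, w = z
    return [w, -np.sin(th) - c * w]

z0 = [1.0, 1.0]                          # V(z0) ~ 0.96 < 1.5, so z0 lies in G
sol = solve_ivp(pendulum, (0.0, 60.0), z0, rtol=1e-9, atol=1e-12,
                t_eval=np.linspace(0.0, 60.0, 601))

values = V(sol.y[0], sol.y[1])
print("largest numerical increase of V:", float(np.diff(values).max()))  # ~ round-off
print("final state (theta, omega):", sol.y[:, -1])                       # near (0, 0)
```
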
Corollary 3. Assume, relative to (3), that V V̇ > 0 on G and that V = 0 on the boundary of G. Then each solution of (3) starting in G approaches ∞ as t → ∞ (or possibly in finite time).
There are also some special classes of nonautonomous systems where the
limit sets of solutions have an invariance property. The simplest of these
are periodic systems (see [2])

$$\dot{x} = f(t, x), \qquad f(t + T, x) = f(t, x)\ \text{for all}\ t\ \text{and}\ x. \tag{4}$$

Here, in order to avoid introducing the concept of a periodic approach of a solution of (4) to a set and the concept of a periodic limit point, let us confine ourselves to solutions x(t) of (4) which are bounded for t > 0. Let Ω be the positive limit set of such a solution x(t), and let p be a point in Ω. Then there is a solution of (4) starting at p which remains in Ω for all t in (−∞, ∞); that is, if one starts at p at the proper time, the solution remains in Ω for all time. This is the sense now in which Ω is an invariant set. Let V(t, x) be C¹ on R × Rⁿ and periodic in t of period T. For an arbitrary set G of Rⁿ we say that V is a Liapunov function on G for the periodic system (4) if V̇ does not change sign for all t and all x in G. Define E = {(t, x); V̇(t, x) = 0, x in G} and let M be the union of all solutions x(t) of (4) with the property that (t, x(t)) is in E for all t. M could be called "the largest invariant set relative to E." One then obtains the following version of Theorem 2 for periodic systems:
Theorem 3. If V is a Liapunov function on G for the periodic system (4),
then each solution of (4) that is bounded and remains in G for all t > 0
(t < 0) approaches M as t → ∞ (t → −∞).
In [5] Miller showed that the limit sets of solutions of almost periodic
systems have a similar invariance property and from this he obtains a result
quite like Theorem 3 for almost periodic systems. This then yields, for periodic and almost periodic systems, a whole chain of theorems on stability
and instability quite similar to that for autonomous systems. For example,
one has

Corollary 4. Let $Q_\eta^+ = \{x;\ V(t, x) < \eta \ \text{for all}\ t\ \text{in}\ [0, T]\}$, and let G⁺ be a component of $Q_\eta^+$. Let G be the component of $Q_\eta = \{x;\ V(t, x) < \eta \ \text{for some}\ t\ \text{in}\ [0, T]\}$ containing G⁺. If G is bounded, V̇ ≤ 0 for all t and all x in G, and if M⁰ = M ∩ Ḡ ⊂ G⁺, then M⁰ is an attractor and G⁺ is in its region of attraction. If V(t, x) = c(t) for all t and all x on the boundary of M⁰, then M⁰ is a stable attractor.
Our last example of an invariance principle for ordinary differential equations is that due to Yoshizawa in [7] for "asymptotically autonomous" systems. It is a consequence of Theorem 1 and results by Markus and Opial (see [7] for references) on the limit sets of such systems. A system of the form

$$\dot{x} = F(x) + g(t, x) + h(t, x) \tag{5}$$

is said to be asymptotically autonomous if (i) g(t, x) → 0 as t → ∞ uniformly for x in an arbitrary compact set of Rⁿ, and (ii) $\int_0^\infty |h(t, \varphi(t))|\, dt < \infty$ for all φ bounded and continuous on [0, ∞) to Rⁿ. The combined results of Markus and Opial then state that the positive limit sets of solutions of (5) are invariant sets of ẋ = F(x). Using this, Yoshizawa then improved Theorem 1 for asymptotically autonomous systems.
It turns out to be useful, as we shall illustrate in a moment on the simplest possible example, in studying systems (1) which are not necessarily asymptotically autonomous, to state the theorem in the following manner:


Theorem 4. If, in addition to the conditions of Theorem 1, it is known


that a solution x(t) of (1) remains in G for t > 0 and is also a solution of an asymptotically autonomous system (5), then x(t) approaches M* = M ∪ {∞} as t → ∞, where M is the largest invariant set of ẋ = F(x) in E.
It can happen that the system (1) is itself asymptotically autonomous, in
which case the above theorem can be applied. However, as the following
example illustrates, the original system may not itself be asymptotically autonomous, but it still may be possible to construct for each solution of (1) an asymptotically autonomous system (5) which it also satisfies.
Consider again the example

$$\dot{x} = y, \qquad \dot{y} = -x - p(t)y, \qquad 0 < \delta \le p(t) \le m \ \text{for all}\ t > 0. \tag{6}$$

Now we have the additional assumption that p(t) is bounded from above. Let (x(t), y(t)) be any solution of (6). As was argued previously below Theorem 1, all solutions are bounded and y(t) → 0 as t → ∞. Now (x(t), y(t)) satisfies ẋ = y, ẏ = −x − p(t)y(t), and this system is asymptotically autonomous to (*): ẋ = y, ẏ = −x. With the same Liapunov function as before, E is the x-axis and the largest invariant set of (*) in E is the origin. Thus for (6) the origin is asymptotically stable in the large.

3. Autonomous Functional-Differential Equations


In this section we adopt completely the notations and assumptions introduced by Hale in his paper in these proceedings and present a few of the
stability results that he has obtained for autonomous differential equations

$$\dot{x}(t) = f(x_t). \tag{7}$$

A more complete account with numerous examples is given in [1]. For the
extension to periodic and almost periodic functional-differential equations
by Miller see [6].
We continue where Hale left off in Section 2 of his paper, except that we shall assume that the open set Ω is the whole state space C of continuous functions. We also confine ourselves to solutions x of (7) that are bounded and hence defined on [−r, ∞). Except that we are in the state space C, the definition of the positive limit set of a trajectory xₜ of (7) is essentially the same as for ordinary differential equations, and the notion of an invariant set is modified to take into account the fact that there is no longer uniqueness to the left. A set M ⊂ C is invariant in the sense that if φ ∈ M, then x(φ) is defined on [−r, ∞), there is an extension to (−∞, −r], and xₜ(φ) remains in M for all t in (−∞, ∞). With these extensions of these geometric notions to the state space C, Hale then showed that the positive limit set of a trajectory of (7) bounded in the future is a nonempty, compact, connected, and invariant set in C. He was then able to obtain a theory of stability quite similar to that for autonomous ordinary differential equations.
Let V be a continuous function on C to R and define relative to (7)

$$\dot{V}(\varphi) = \lim_{\tau \to 0^+} \frac{1}{\tau}\,\bigl[V(x_\tau(\varphi)) - V(\varphi)\bigr]. \tag{8}$$

With G an arbitrary set in C, we say that V is a Liapunov function on G for (7) if V̇(φ) ≤ 0 for all φ in G. Define E = {φ; V̇(φ) = 0, φ in Ḡ} and let M denote the largest invariant set of (7) in E.
Theorem 5. If V is a Liapunov function on G for (7) and xₜ is a trajectory of (7) which remains in G and is bounded for t > 0, then xₜ → M as t → ∞.
Hale has also given the following more useful version of this result.

Corollary 5. Define $Q_\eta = \{\varphi;\ V(\varphi) < \eta\}$ and let G be $Q_\eta$ or a component of $Q_\eta$. Assume that V is a Liapunov function on G for (7) and that either (i) G is bounded or (ii) |φ(0)| is bounded for φ in G. Then each trajectory starting in G approaches M as t → ∞.
The following is an extension of Cetaev's instability theorem. This is a somewhat simplified version of Hale's Theorem 4 in [1], which should have stated "V(φ) > 0 on U when φ ≠ 0 and V(0) = 0" and at the end "... intersect the boundary of C ...". This is clear from his proof and is necessary, since he wanted to generalize the usual statement of Cetaev's theorem to include the possibility that the equilibrium point be inside U as well as on its boundary.
Corollary 6. Let p ∈ C be an equilibrium point of (7) contained in the closure of an open set U and let N be a neighborhood of p. Assume that: (i) V is a Liapunov function on G = U ∩ N, (ii) M ∩ G is either the empty set or p, (iii) V(φ) < η on G when φ ≠ p, and (iv) V(p) = η and V(φ) = η on that part of the boundary of G inside N. Then p is unstable. In fact, if N₀ is a bounded neighborhood of p properly contained in N, then each trajectory starting at a point of G₀ = G ∩ N₀ other than p leaves N₀ in finite time.
Proof. By the conditions of the corollary and Theorem 5, each trajectory starting inside G₀ at a point other than p must either leave G₀, approach its boundary, or approach p. Conditions (i) and (iv) imply that it cannot reach or approach that part of the boundary of G₀ inside N₀ nor can it approach p as t → ∞. Now (ii) states that there are no points of M on that part of the boundary of N₀ inside G. Hence each such trajectory must leave N₀ in finite time. Since p is either in the interior or on the boundary of G, each neighborhood of p contains such trajectories, and p is therefore unstable.
In [1] it was shown that the equilibrium point φ = 0 of

$$\dot{x}(t) = ax^3(t) + bx^3(t - r)$$

was unstable if a > 0 and |b| < |a|. Using the same Liapunov function and Theorem 5 we can show a bit more. With

$$V(\varphi) = -\frac{\varphi^4(0)}{4a} + \frac{1}{2}\int_{-r}^{0} \varphi^6(\theta)\, d\theta,$$

$$V(x_t) = -\frac{x^4(t)}{4a} + \frac{1}{2}\int_{t-r}^{t} x^6(\theta)\, d\theta,$$

and

$$\dot{V}(x_t) = -\tfrac{1}{2}\,x^6(t) - \frac{b}{a}\,x^3(t)\,x^3(t-r) - \tfrac{1}{2}\,x^6(t-r),$$

which is nonpositive when |b| < |a| (negative definite with respect to φ(0) and φ(−r)); that is, V is a Liapunov function on C and E = {φ; φ(0) = φ(−r) = 0}. Therefore M is simply the null function φ = 0. If a > 0, the region G = {φ; V(φ) < 0} is nonempty, and no trajectory starting in G can have φ = 0 as a positive limit point nor can it leave G. Hence, by Theorem 5, each trajectory starting in G must be unbounded. Since φ = 0 is a boundary point of G, it is unstable. It is also easily seen [1] that if a < 0 and |b| < |a|, then φ = 0 is asymptotically stable in the large.
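
A crude numerical experiment consistent with these conclusions (not from the paper; the Euler scheme, step size, constant initial history, and the "+" sign adopted in the delay equation above are assumptions of this sketch): with a > 0 and |b| < |a| the solution grows without bound, while with a < 0 and |b| < |a| it decays to zero.

```python
# Fixed-step Euler integration of the delay equation x'(t) = a*x(t)**3 + b*x(t-r)**3.
import numpy as np

def simulate(a, b, r=1.0, x0=0.5, dt=1e-3, t_end=8.0):
    """Integrate with constant initial history x(t) = x0 on [-r, 0]."""
    lag = int(round(r / dt))
    n = int(round(t_end / dt))
    x = np.empty(n + lag + 1)
    x[: lag + 1] = x0                       # history on [-r, 0]
    for k in range(lag, lag + n):
        x[k + 1] = x[k] + dt * (a * x[k] ** 3 + b * x[k - lag] ** 3)
        if abs(x[k + 1]) > 1e6:             # stop once the blow-up is evident
            return x[: k + 2]
    return x

unstable = simulate(a=1.0, b=0.5)           # a > 0, |b| < |a|: grows without bound
stable   = simulate(a=-1.0, b=0.5)          # a < 0, |b| < |a|: tends to 0
print("a > 0: max |x| =", np.abs(unstable).max())
print("a < 0: final x =", stable[-1])
```
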
In [1] Hale has also extended this theory for systems with infinite lag
(r = 00), and in that same paper gives a number of significant examples
of the application of this theory.

REFERENCES
[1] Hale, J., Sufficient conditions for stability and instability of autonomous functional differential equations, J. Diff. Eqs. 1, 452-482 (1965).
[2] LaSalle, J., Some extensions of Liapunov's second method, IRE Trans. Circuit Theory CT-7, 520-527 (1960).
[3] LaSalle, J., Asymptotic stability criteria, Proc. Symposia Applied Mathematics, vol. 13, "Hydrodynamic Instability," 299-307, Amer. Math. Soc., Providence, Rhode Island, 1962.
[4] LaSalle, J., and Lefschetz, S., "Stability by Liapunov's Direct Method with Applications," Academic Press, New York, 1961.
[5] Miller, R., On almost periodic differential equations, Bull. Amer. Math. Soc. 70, 792-795 (1964).
[6] Miller, R., Asymptotic behavior of nonlinear delay-differential equations, J. Diff. Eqs. 3, 293-305 (1965).
[7] Yoshizawa, T., Asymptotic behavior of solutions of a system of differential equations, Contrib. Diff. Eq. 1, 371-387 (1963).
