Vous êtes sur la page 1sur 197

Lecture Notes in Economics

and Mathematical Systems 529

Founding Editors:
M. Beckmann
H. P. Kiinzi

Managing Editors:
Pro f. Dr. G. Fandel
Fachbereich Wirtschaftswissenschaften
Fernuniversitat Hagen
Feithstr. 140/AVZ 11,58084 Hagen, Germany
Prof. Dr. W. Trockel
In stitut fur Mathematische Wirtschaftsforschung (IMW)
Universitat Bielefeld
Universitatsstr, 25, 33615 Bielefeld, Germany

Co-Editors:
C. D. Aliprantis

Editorial Board:
A. Ba sile, A. Drexl, G. Feichtinger, W Guth, K. Inderfurth, P. Korhonen,
W. Kursten, U. Sch ittko, R. Selten, R. Steuer, F. Vega-Redondo
Springer-Verlag Berlin Heidelberg GmbH
Wemer Krabs
Stefan Wolfgang Pick1

Analysis, Controllability
and Optimization
of Time-Discrete Systems
and Dynamical Games

Springer
Authors
Prof. Dr. Wemer Krabs Dr. Stefan Wolfgang Pickl
Department of Mathematics Department of Mathematics
Technical University Darmstadt Center of Applied Computer Science
Schlossgartenstrasse 7 ZAIK
64289 Darmstadt University of Cologne
Germany Weyertal80
50931 Cologne
Germany

Cataloging-in-Publication Data applied for


A catalog record for this book is available from the Library of Congress.
Bibliographic information published by Die Deutsche Bibliothek . ..
Die Deutsche Bibliothek lists this publication in the Deutsche Nahonalblbhografie;
detailed bibliographic data is available in the Internet at http://dnb.ddb.de

ISSN 0075-8450
ISBN 978-3-540-40327-2 ISBN 978-3-642-18973-9 (eBook)

DOI 10.1007/978-3-642-18973-9
This work is subject to copyright. AII rights are reserved, whether the whole Of part
of the material is concemed, specificalIy the rights of translation, reprinting, re-use
of ilIustrations, recitation, broadcasting, reproduction on microfilms or in any other
way, and storage in data banks. Duplication of this publication or parts thereof is
permitted only under the provisions of the German Copyright Law of September 9,
1965, in its current version, and permis sion for use must always be obtained from
Springer-Verlag. Violations are liable for prosecution under the German Copyright
Law.

http://www.springer.de
Springer-Verlag Berlin Heidelberg 2003
Originally published by Springer-Verlag Berlin Heidelberg New York in 2003
The use of general descriptive names, registered names, trademarks, etc. in this
publication does not imply, even in the absence of a specific statement, that such
names are exempt from the relevant protective laws and regulations and therefore
free for general use.
Typesetting: Camera ready by author
Cover design: Erich Kirchner, Heidelberg
Printed on acid-free paper 55/3143/du 543 2 1 O
Dedicated to the people of Navrongo
Preface

J.P. La Salle has developed in [20] a stability theory for systems of difference
equat ions (see also [8]) which we introduce in the first chapter within the
framework of metric spaces. The st ability theory for such systems can also be
found in [13] in a slightly modified form . We st art with autonomous systems
in the first section of chapte r 1. After theoretical preparations we examine
the localization of limit sets with the aid of Lyapunov Functions. Applying
these Lyapunov Functions we can develop a stability theory for autonomous
systems.
If we linearize a non-linear system at a fixed point we ar e able to develop
a stability theory for fixed points which makes use of the Frechet derivative
at the fixed point.
The next subsection deals with general linear systems for which we intro-
duc e a new concept of stability and asymptotic stability that we adopt from
[18]. Applications to various fields illustrat e these results. We st art with the
classical predator-prey-model as being developed and investigated by Volterra
whi ch is based on a 2 x 2-system of first ord er differential equ ations for the
densities of the prey and predator population, resp ectively. This model has
also been investigated in [13] with resp ect to stability of its equilibrium via
a Lyapunov function. Here we consider the discrete version of the model. If
we discretize the original model of interacting growth of populations in terms
of two first order differential equations we obtain a general discrete model for
interacting logistic growth of two populations. As a last example we present
an emission reduction model for the reduction of carbon dioxide emissions.
The next section of chapter 1 deals with non-autonomous systems. Def-
initions and elementary properties of such are pr esented. Again, we present
the stability theory based on Lyapunov's method for such systems. We regard
general systems and as a spe cial case the linear case. As an application we
describe the temporal development of the concentration of some poison like
urea in the body of a person suffering from a renal disease and having to be
attached to an artificial kidney.
VIII Preface

The second cha pter deals with time-discret e cont rolled systems . Here we
begin with the a ut onomous case and introduce the problem of fixed point
controllability. If we regar d linear syste ms, we obtain the pr oblem of null-
controlla bility. For t his we present an algorit hmic method for its solution.
Furthermore, we describ e t he problem of stabilizat ion of cont rolled sys-
t ems. Then several applicat ions are presented . We pick up the emission re-
du ction mod el and concent rate ourself on t he cont rolled syste m which we
lineariz e at a fixed point. As a second exa mple we treat the cont rolled pr ey-
pr ed ation model. Also a plan ar pendulum with moving suspension point can
be described by that mod eling. We consider a non-lin ear pendulum of length
l( > 0) whose moment is cont rolled by movin g its suspension point with ac-
celerat ion u = u( t) along a hori zontal st raight line.
In t he next sect ion of chapte r 2 we regard the non- autonomous case and the
specific problem of fixed point controllability. Furthermore, t he general prob-
lem of controllability, the st abili zation of cont rolled systems and the problem
of reachability is t reate d .
The third chapte r deals with t he controllability of dyn ami cal ga mes . These
are formul ated as cont rolled auto nomous dynamical syst ems which as uncon-
trolled systems admit fixed point s. The problem of cont rollab ility consist s of
findin g cont rol functions such that a fixed point of t he uncontrolled syste m
is reached in finit ely many time ste ps. For t his probl em a ga me theoretical
solution is given in t erms of P ar eto optima in the cooperat ive case and Nash
equilibria in the non- coop er ative case.
For the emission redu ction mod el the non- coop er ative treatment of t he cont rol
problem lead s t o the applicat ion of linear pro gramming for the calculat ion of
Nash equilibria and the cooperat ive treatment gives rise to the applicati on of
coope rat ive ga me theory.
In particular we have t o investigat e t he question und er whi ch condit ions the
core of such a ga me is non empty.
In this connect ion we also consider n-person goal-cost-games and pr esent a
dynamical method for findin g a Nas h equilibrium in such a ga me.
Aft er the treatment of evolut ion matrix games we come back t o n-person
goal-cost -games which we tran sfer into cooperat ive n-person games . Here we
investigate the questi on under which condit ions t he grand coa lit ion is stable.

T he appendix supplies the reader wit h addit iona l information.


Sect ions A.I and A.2 are concerne d with t he core of a general coop er ative
n-p erson ga me and a linear production ga me, res pectively. In Secti on A.3
necessar y and sufficient condit ions for weak Par eto optima of non- coop erative
n- pe rso n ga mes are given and Section A. 4 deals with du ality in such ga mes.
The book ends with bibliographical remarks .
Preface IX

The aut hors want to thank Silja Meyer-Nieberg for carefully reading the
manuscript and Gor an Mihelcic for excellent typesetting . He solved every t ex-
problem which occur ed in minim al time.

Cologne, Wern er K rabs


May 2003 St efan Pi ckl
Contents

1 Uncontrolled Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The Autonomous Cas e. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Definitions and Elementar y Properties . . . . . . . . . . . . . . . 1
1.1.2 Localization of Limit Set s with the Aid of
Lyapunov Fun ctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.3 St ability Based on Lyapunov's Method . . . . . . . . . . . . . . . 8
1.1.4 St abili ty of Fixed Points via Lineari sation . . . . . . . . . . .. 13
1.1.5 Linear Syst ems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.1.6 Applications .... ... .. ... . ... ... . .... .. ... .... ..... 21
1.2 The Non-Autonomous Cas e 32
1.2.1 Definitions and Element ary Properties . . . . . . . . . . . . . .. 32
1.2.2 Stability Based on Lyapunov 's Method . . . . . . . . . . . . . . . 35
1.2.3 Linear Syst ems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.2.4 Application to a Mod el for the Process of Hemo-Di alysis 43

2 Controlled Systems 47
2.1 The Autonomous Cas e 47
2.1. 1 The Problem of Fixed Po int Controllability 47
2.1.2 Null-Cont rollability of Linear Systems 57
2.1.3 A Method for Solvin g t he Problem of Null-Controllability 65
2.1.4 St abiliz ation of Cont rolled Systems 70
2.1.5 Applications 73
2.2 The Non-Autonomous Cas e . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.2.1 The Problem of Fixed Point Cont rollability . . . . . . . . . .. 80
2.2.2 The Gener al Problem of Controllability .. . . . . . . . . . . . . 83
2.2.3 St abili zation of Con trolled Syst ems. . . . . . . . . . . . . . . . . . 86
2.2.4 T he Problem of Reachability . . . . . . . . . . . . . . . . . . . . . . . . 89

3 Controllability and Optimization 93


3.1 The Control Problem 93
3.2 A Gam e Theoreti cal Solution . . . . . . . . . . . . . . . . . . . . . . . . . 95
XII Contents

3.2.1 The Cooperative Case 95


3.2.2 The Non-Cooperativ e Case 99
3.2.3 The Linear Case 103
3.3 Local Cont rollability 106
3.4 An Emission Redu cti on Mod el 107
3.4.1 A Non-Co op er at ive Treatment 107
3.4.2 A Cooper ative Treatment 116
3.4.3 Condit ions for the Core to be Non-Empty 118
3.4.4 Further Conditions for the Core to be Non-Empty 122
3.4.5 A Second Cooperative Tr eatment 128
3.5 A Dyn amic al Method for Finding a Nash Equilibrium 136
3.5.1 The Goal-Cost-G ame 136
3.5.2 Necessa ry Conditions for a Nash Equilibrium 137
3.5.3 The Method 139
3.6 Evolution Matrix Games 141
3.6.1 Definition of the Game and Evolutionary St ability 141
3.6.2 A Dyn ami cal Method for Finding an Evolutionar y
St able St ate 147
3.7 A Gener al Cooperative n-Person Goal-Cost-Gam e 151
3.7.1 The Gam e 151
3.7.2 A Cooper ative Treatment 152
3.7.3 Necessary and Sufficient Conditions for a St abl e
Grand Coalition 153
3.8 A Cooper ative Treatment of an n-Person Cost-Ga me 155
3.8.1 The Gam e and a First Cooperat ive Tr eatment 155
3.8.2 Tr ansformation of the Game int o a
Cooperat ive Game 157
3.8.3 Sufficient Condi t ions for a St abl e Gr and Coalition 158
3.8.4 Further Cooper ativ e Tr eatment s 160
3.8.5 P areto Opt ima as Coo perat ive Soluti ons of the Gam e . . 162

A Appendix 167
A.l The Core of a Cooperat ive n-Person Gam e 167
A.2 The Core of a Linear P roducti on Gam e 173
A.3 Weak P ar eto Optima : Necessary and Sufficient Condit ions 177
A.4 Du ality 179

B Bibliographical R emarks 181

References 183

Index 185

About the Authors 187


1

Uncontrolled Systems

1.1 The Autonomous Case

1.1.1 Definitions and Elementary Properties

In [20] J.P. La Salle has developed a stability theory for difference equat ions.
He considers difference equat ions which can be transformed into equa t ions of
t he form
x (n + 1) = f (x (n )), n E No = N U {O} , (1.1)
wher e
x( O) = x (1.2)
is a given initial state in a non-empty subset X ~ IRk and f : X -+ X is
a given cont inuous mapping. By (1.1) and (1.2) a time-discrete dyn amical
system (X,!) is defined , if we equ ip IRk with a norm (e.g. t he Euclidean
norm) and define a flow 7r : X x No -+ X by

7r (x , n ) = r( x) = f 0 f 0 ... 0 f( x) (1.3)
'------v----"'
n -tim es

for all x E X and n EN, and

7r(x ,O) = x, for all x E X . (1.4)

This system is called an autonomous system, since it has the semi-group


property
7r(7r(x , n) , m) = 7r(x, n + m)
for all n , m E No and x E X .
The st ability theor y for such systems develop ed by J .P. La Salle ca n also be
found in [1 3] in a slightl y modified form. In this book we gener alize the above
sit uat ion as follows:

W. Krabs et al., Analysis, Controllability and


Optimization of Time-Discrete Systems and Dynamical Games
Springer-Verlag Berlin Heidelberg 2003
2 1 Uncont rolled Systems

Let X be a metric space with metric d : X x X ~ IR+ and let f : X ~ X be


a cont inuous mapping. Then the auto nomous t ime-discrete dyn amic al system
(X, J) is given by the flow 11" : X x No ~ X defined by (1.3) and (1.4) . For
every x E X we define an orbit starting with x by

'Yf(x) = U{r (x)} . (1.5)


n EN o

Further we define as limit set of 'Yf (x) t he set

(1.6)

This limit set can be given an equivalent definition which is the conte nt of

Proposition 1.1. For eveTy x E X the limit set L f( x) being defined by (1.6)
consists of all accumulatio n points of the sequence (fn(x) ) nE No .
Proof.

1) Let y E X be an accumulat ion poin t of (r (x) ) n E No ' T hen there exist s a


subsequence (f n;(X))iENo with fn ;(x ) ~ y . T his implies t hat, for every
n E No,
YE U {fm(x)}
m '2:n

which in t urn implies that y E L f (.'r) .

2) Let y E Lf(x) . Then

YE U {f m(x )} for every n E No


m '2: n

Ther efore, for every n E No, t here is a sequence (fk;+n(X))iEN o with


(f ki+n(x)) ~ y as i ~ 00 . Hence, for every n E No, t here exist s an
in E No such that

and we ca n assume that i n + 1 > in. This implies that (fk;+n(X))nEN is a


subsequence of (fn( X))nENo with fk;+n (x) ~ y as n ~ 00. This means
that y is an accumulat ion point of (f n(x )) n E No which complete s the proof.

o
1.1 T he Autonomous Case 3

Definition 1.1. A non- empty subset H <;;;; X is called positively (negatively)


invariant (with respect to a mapping f : X -> X ), if

f(H) <;;;; H( H < f( H) ) ,

and invariant, if
f( H ) = H .

Exercise 1.1. Show that for a cont inuous mapping f :X -> X t he following
holds true:
(a) The closure of a positively invari ant subset of X is also posit ively invari ant
(with resp ect to 1).
(b) The closure of a relativ ely compact invari ant subset of X is also invariant
(with resp ect to 1).

According to Proposition 1.1 the limit set L f (x) for some x E X given by
(1.6) ca n be empty, if the sequence (f n(X))nENo does not have accumulat ion
points . If this is not the case, t hen we have the

Proposition 1.2. If, for some x E X , the limit set L f (x) given by (1.6) is
non- empt y, then it is closed and positively invaria nt.

Proof The closedness of L f (x) is an immediate consequence of the Defini-


tion (1.6) . Let y E Lf(x) be given. T hen, by Proposition 1.1, there exists a
subsequence (f ni( X))iENo of the sequence (r(X) )nENo with fn i(x) -> y as
i -> 00 . This implies, du e t o t he cont inuity of t , t ha t r i+1 (x) -> f( y) , hence
f (y) E L f( x) , T his shows f (Lf(x)) <;;;; Lf(x) , i.e. that L f (x ) is positiv ely
invariant .
o
If X is compac t, then , for every x E X , t he limit set L f (.T) given by (1.6) is
non- empty and we can even prove

Proposition 1.3. If X is compact, then, for every x E X , the limit set L f (x)
given by (1.6) is compact, invariant and the smallest closed subset S <;;;; X
with

lim (}(r (x) ,S) = 0 where (}( y,S) = rnin {d(y,z ) I z E S }. (1.7)
n--->oo
4 1 Uncontrolled Sys t em:

Proof As a non- empty closed subset of the compact metric space X the limit
set Lf(x) is also compact .
In order to show the invariance of L f (x) it suffices to show that L f (x) ~
f(Lf( x)). Let y E L f( x) be given. Then there exists a subsequence (fn; (X))iENo
of the sequence (r( X))nENo with r ;( x) -> y and with no loss of generality
we can assume that there is some z E L f (x) with r ; -1 (x) -> z. By the con-
tinuity of f this implies r ;(x) -> f (z), hence y = f (z) ELf (x) and therefore
L f( x) ~ f(Lf(x)) . In order to show (1.7) we at first show that

For that purpose we ass ume t hat

Then there is a subsequence (fn;( X))iENo of (r( X))nENo with

and some y E Lf(x) with limd(fn;( x) ,y)


t --->oo
= 0 which impli es

and leads to a cont rad ict ion.


Now let S ~ X be any closed subset with

lim dr (x) , S) =
n ---> oo
o.
Then we choose any y ELf (x) and conclude by Proposition 1.1 the existe nce
of a subsequ ence (fn ;( X))iENo of (r( X))nE No with (fin(x)) -> y. Further it
follows that
lim g(fn ; (x ), S) = 0,
n--->oo

whi ch impli es Y ES , since S is closed . This compl et es the proof of Proposit ion
1.3.
D
1.1 The Autonomous Case 5

Definition 1.2. A closed in variant subset of X is called inv ariantly con-


nected, if it is not representable as a disjoint un ion of two non- empty, in-
varian t and closed subsets of X .

Definition 1.3. A sequence (fn( X))nENo, X E X , is called periodi c or cyclic,


if there exists a number k EN with fk( x) = x. Th e sm allest number with this
property is called the period of the sequence. If k = 1, then x E X is called a
fixed point of f : X ~ X .

Exercise 1.2. Show that a finit e subset H <;;; X is invari antly connect ed, if
and only if for every x E H the sequence (r (x) )n E No is periodic and its period
is equa l to the cardinality of H .

Propos ition 1.3 can be supplemented by

Proposition 1.4. If X is com pact, then, for ever y x E X, the limit set Lf(x)
given by (1.6) is inv ariantly conn ected.

Proof. Let us assume that , for some x E X , there exist two non- empty, in-
variant and closed subsets Al and A z of L f( x) which are disjoint and satisfy
Al U A z = Lf(x).
Since Lf(x) is compact , Al and A z are also compact and there exist disjoint
ope n subsets UI and U: of X with Al <;;; UI and A z <;;; U . Since f is uni-
formly cont inuous on AI , there is an op en subset VI of X with Al <;;; VI and
!(VI) <;;; UI .
Sin ce L f( x) is the smallest closed set S <;;; X with (1.7) , t he sequence
(fn( X))nE No must int ersect VI as well as Uz infinitely many times. This im-
plies the existence of a subsequence (fn ; (:r) ) i ENo whi ch is neither contained
in VI nor Uz and whi ch can be ass umed t o be convergent . This, however, is
impossible and leads to a contradict ion to the assumpt ion that Lf(x) is not
invariantly connecte d.
o
Exercise 1.3. Given a compact and positively invariant subset K <;;; X . Show
that n r(K) wit h r(K) = {r( x ) I x E K} for all n E No is non-empty,
nE No
compact and t he lar gest invari an t subset of K .
6 1 Unco nt rolled Syst ems

1.1.2 Localization of Limit Sets with the Aid of


Ly apunov Functions

Let X be a metric space and let f : X ---.., X be a cont inuous mapping. Further
let G ~ X be a non- emp ty subset.
D efinition 1.4. A func tion V : X ---.., lR. is called a Lyapunov fun cti on with
respect t o f on G, if
(1) V is con tinuous on X ;
(2) V(f (x)) - V( x) ~ 0 fo r all x E G with f(x) E G.
If V : X ---.., lR. is a Lyapunov functi on with respect t o f on G , th en we defin e
the set
E = {x E X I V (f(x)) = V (x) , x E G}
where G is the closure of G. Further we put , for every c E lR. ,

V- I(c ) = {x E X I V(x ) = c }.
Then we ca n prove
Proposition 1.5. Let G ~ X be no n-empty and relative ly com pact. Further
let V be a Lyapunov functi on with respect to f on G an d finall y let Xo E G
be suc h that
r (xo) E G for all n E N.
Th en th ere exi st s some c E lR. suc h that

where M is th e largest invariant su bset of E .

Proof. If we define Xn = r(xo), n E No, t hen t he sequence (V( Xn) nENo ) is


contained in V (G) and therefore bo unded.
Further it follows t hat

Therefore there exists some c E lR. with c = li m V (x n ) .


n ---> oo
Now let p E L f( x o). T hen there exists a subse quence (XnJ i ENo of (Xn) n ENo
with (x nJ ---.., p which impli es c = lim V (x nJ = Vp due t o the cont inuity of
t --->oo

V. Hence p E V -1 (c) and t hus L f( x o) ~ V - I( c). Since, by P roposition 1.3,


L f( xo) is invar iant , it follows t hat V (f (p) ) = c for all p E L f( x o) and hence
V (f(p)) = V(p) for all p E L f (x o) which impli es L f( x o) ~ E and in t urn
L f( xo) ~ M. This completes the proof.

D
1.1 The Autonomous Case 7

Let us demonstrate this result by an exa mple: We take X = lR 2 equipped


with the Euclidean norm and define j : X -> X by
. T -
j( x, y) = (ft(.T , y) , h(x, y)) , (x, y) E X ,

with
y x
ft(x,y) = 1 + x 2 ' h(x ,y) = 1 + y2 . (1.8)

Further we choos e

V( x,y)= x 2+2
y , (x , y )E -X . (1.9)

T hen it follows that

v (j( x, y)) - V( x, y) = Cl +~2)2) 1) +Cl +lx 1)


- x
2
2) 2 - y2 :::; ,
for all (x , y) E X
This shows t hat V is a Lyapunov jun ction with resp ect to j on X = lR 2
Further we see t hat

E = {(x,D) I x E lR } U {(O ,y) l y E lR)}.

From j(x,O) = (O,x) for all x E lR and j(O,y) = (y,O) for all y E lR, it follows
t hat E is invariant and hence !VI = E .
Further we conclude

j 2(X,O) = j(O , x) = (O , x) for all z E lR

and
j 2(O ,y) = j(y ,O) = (O ,y)
for all x E lR.
2
Now let G = {(x,y) E lR I x + y2 < r} for any r > 0.
2

Then it follows that j (G) ~ G and for every (xo,Yo) E G there exist s some
c E lR such

Lj(xo,Yo)~En{(x ,y) I x2 +y 2=2


c }={(c,O),(O,c)}.
Since L j (x o,Yo) is invariantl y connected by Proposition 1.4 it follows from
Exercise 1.2 t hat
Lj(xo,yo) = {(c,O),(O, c)} .
8 1 Uncontrolled Systems

1.1.3 Stability Based on Lyapunov's Method

Let f :X -+ X be a continuou s mapping where X is a metric space.

Definition 1.5. A relative ly com pact set H <;:; X is called stable with respect
to I , if f or ever y relative ly compact open set U <;:; X with U ;;2 H = closure
of H there exis ts an open set W <;:; X with H <;:; W <;:; U such that

r(W) <;:; U for all n E No

where
r(W) = {r( x) Ix E W}.
Theorem 1.1. Let H <;:; X be relatively compact and such that for every
relative ly compa ct open set U <;:; X with U ;;2 H there exis ts an open set subset
B u of U with B u ;;2 Hand f(B u ) <;:; B u .
Further let G <;:; X be an open set with G ;;2 H such that th ere exis ts a
Lyapunov fun ction V with respect to f on G which is posit ive definite with
respect to H , i. e.,

V( x) 2: 0 for all x E G and (V( x ) = 0 ::} x E H) .

Th en H is stable with respect to f .


Proof. Let U <;:; X be an arbitrary relatively compac t op en set with U ;;2 H .
Then U* = U n G is also a relatively compac t open set with U* ;;2 H and
there exist s an op en set B u < U* with B u ;;2 H and f (B u. ) < U*.
Let us put
m = m in{V(x) I x E U*\B u}.
Sinc e H n (U*\Bu .) is em pty, it follows that m > O. If we define

W = {x E U* I V( x) < m },
then W is op en and H <;:; W <;:; Bu .
Now let x E W be chosen arb itrarily. T hen x E B u . and therefor e f (x ) E U*.
Further we have
V (f( x)) ::; V (x) < m ,
hen ce f( x) E W <;:; B u . This implies f2(X ) = f(f( x)) E U* and

V (f 2(X)) ::; V(f( x)) < m , hen ce f 2(.7;) E W.

By ind uct ion it t he refore follows that

r(x) E W <;:; U* <;:; U for all n E No.

This shows t hat H is stable with resp ect t o f.


o
1.1 The Auto nomous Case 9

Exercise 1.4. (a) Give an explicit definition of t he inst ability of a relatively


com pact set H t:;; X with resp ect to f as logical negati on of the notion of
stability. (b) Show : If a relatively compact set H t:;; X is st abl e with resp ect
to f , then its closure H is po sitively invariant , i.e. f(H) t:;; H.

Definition 1.6. A set H t:;; X is called an attrac tor with respect t o i , if there
exis ts an open set U t:;; X with U :;;;> H such tha t

lim (}(r (x), H ) = 0 (in short: r (x) ----+ H ) for all x E U


n ---> oo

where
oiu,H) = inf {d( y,z) I z E H}.
If H t:;; X is stable an d an atiracior with respect t o f , th en H is called
asymptotically stable with respect to f .

Theorem 1.2. Let H t:;; X be such th at th ere exis ts a relatively compact open
set U :;;;> H with f(U) < U .
Further let V : X ----+ lR be a Lyapunov func ti on with respect to f on U whic h
is positive definit e with respect to H , i. e.

V( x) 2 0 for all x E U and (V(x) = 0 {o} x E H ).

Fin ally let


lim V( r(x )) = 0 for all x E U.
n---> oo
Th en H is an ati racior with respect to f.
Proof Let x E U be chose n arbit ra rily. Since the sequence U n( X)) n ENo is
contained in U , for every subse quence un,
(x )) i ENo of un
(x )) n ENo there exists
a subsequence un,;
( X))jE No and some q E U wit h

lim
) --->00
r 'j (x) = q.
This implies
lim V U n' j (x)) = V (q) = 0,
) --->00

hen ce q E H , and therefore fn'j (x ) ----+ H as j ----+ 00 . From this it follows that
fn (x ) ----+ H which shows that H is an attractor with res pect t o f .

Corollary: Und er the assump tions of Theor em 1.1 and 1.2 it follo ws th at
H t:;; X is asymptotically st able.
10 1 Uncontrolled Systems

As an important special case we prove the following

Theorem 1.3. Let x* E X be a fix ed point of t . i.e. f( x *) = x *. Further let


th ere exist an open set G ~ X with x* E G and a Lyapunov function V with
respect to f on G which is positive definite with respect to x*, i.e.

V( x) 2: 0 for all x E G and (V( x) = 0 :} x = x* ).


Th en {x*} is stable with respect to f .
If in addition

V(f(x)) < V( x) for all x E G with x =I- x* , (1.10)

th en {x *} is asymptotically stabl e with respect to f .

Proof. Let U ~ X be a relatively compact op en set with x* E U . Then there


exists som e r > 0 such that
Br(x*) = {x E X I d( x , x *) < r} ~ U.

Since f is continuous in x *, there exist s some s E (0, r) such that f(Bs(x*)) ~


Br(x *) . Hence, if we put Bi) = B s( x *) , then f(Bu) ~ U is op en and {x *} ~
B u. By Th eorem 1.1 therefore {x *} is stable with respect to f .
Now let U be any relatively compact subset of G with x* E U .
Then we define
V(f( x))
q = sup V()
x EU X

and conclude from (1.10) that 0 < q < 1.


Further it follows that

V(r( x)) ::; qnV(x) for all x E u and n E N.

This implies
lim V(r( x)) = 0 for all x E U.
n-> oo

Hence by Theorem 1.2 we conclude that {x *} is an at t rac t or and therefore


asympt ot ically stable with respect to f.

o
1.1 The Autonomous Case 11

Let us demonstrate this result by the same exa mple as in Section 1.1.2,
i.e. we choose f : 1R 2 -+ 1R 2 as

f(x,y) = (fI(x ,y) ,f2(x ,y)), (x,y) E X = IR 2 ,

with fI = fI(x, y) and f2 = f2(x , y) defined by (1.8) .


If we choose V : X -+ 1R in the form of (1.9) , then V is a Lyapunov function
with resp ect to f on G = X = 1R 2 .
If we choose x* = (O ,O)T, then V is positive definit e with respect to x * and
x* is a fixed point of f . Further (1.10) is sat isfied .
By Th eorem 1.3 it follows that x* = (O ,O)T is asympto t ically st abl e with
resp ect to f .
With the aid of Propo sition 1.5 we can prove the

s:
Theorem 104. Let G X be open , relatively com pact and positively invariant
with respect to f , i. e. f (G) s:
G . Further let V be a Lyapunov function with
respect to f on G . Th en th e largest invariant subs et M of

E = {x E G I V(f( x)) = V( x)}

is an attractor with respect to f .


If in addition V is con st an t on M, th en M is asymptotically stable with respect
to f .

Proof Let us put U = G. If we choose Xo E U arbit rarily, then the sequence


(fn( XO))nE No is contained in G, since G is positively invariant. P roposit ion 1.5
implies Lf(xo) s: M . We have to show that lim (2(r(xo) , M = 0) . Now let
n ->oo
(fni(XO))iENo be an arbitrary sub sequence of (f n( XO)) nENo ' Then there is a
subsequence (fn i; (XO))jENo and an element y E Lf(xo) s: M with fn i; (x o) -+
y which implies lim (2(r i( xo) ,M) = 0 and in t urn (2(r( xo),M) -+ y.
)-> 00

In order to show that M is asympt ot ically stable with resp ect to f we have to
show that M is stable with resp ect to f . For t hat purpose we apply Th eorem
s:
1.1 and verify its assumptions. At first let U X be relatively compact, op en
and such that U ~ H . If we define

B u = {x E U n G I f (.1:) E U } ,

then B u is op en, B u ~ M ,Bu s: U, and f(B u) s: U.


12 1 Uncontro lled System s

Let
V (x) = c for all x E M . (1.11)
Let us assume that there is some x E G wit h V (x) < c.
Then it follows t ha t

V( r ( x )) < V (x ) < c for all n EN.


Since Lf(x) S;; M , there exists some y E Lf(x) S;; M with V(y) < c which
contradicts (1.11) . Hence

V (x ) 2: c for all x E G

and further
V (x ) = c {:? x EM .
Therefor e, if we define

V( x) = V (:r) - c, x E X,

then we obtain a Lyapunov fun ction with resp ect to f on G which is positive
definite with resp ect to M. The asse rt ion now follows by Th eorem 1.1.
o

Let us again demonstrate this res ult by t he above example (1.8 ) with Lya-
punov fu nction (1.9). Let again

G = {( x ,y) E 1Ft 2 I x'2 + y'2 < r'2 } for some r > 0.


Then G is op en , relat ively compact and pos it ively invari ant .
From Th eorem 1.4 it therefore follows t hat

M = ({( x ,O) I .r E 1Ft} U { (O , x ) I x E 1Ft}) n G

is an at t racto r with resp ect t o f . It even follows that for every Xo E G it is


true t hat
1.1 The Autonomous Case 13

1.1.4 Stab ili ty of Fixe d P oint s via Li n ea r is a tio n

Let us assume that X is a non- emp ty op en subset of a normed linear space


(E, II . II) and f : X ---+ X is a continuous mapping which is cont inuously
Frechet differentiable at every x E X. We denote the Frechet derivative of f
at x E X by f~ which is a continuous linear mapping f~ : E ---+ E whose norm
is given by
Il f~ 11 = sup{ Ilf~(h)1 1 I h E E with II hll = 1 }
for x E X . Then we can prove

T heorem 1. 5 . Let x* E X be a fired point of f , i.e. f(x *) = x*. Then the


following two stat ements are tru e:
aj Let Il f~* II < 1. Then there is some c: > a and some c E [0, 1) such that
Xo E X and Ilxo- x" II < c: =} Ilr (xo) - x" II ::; cn Ilxo- x* II for all n E N
which impli es that x* is asymptotically stable with respect to f (Exercisej .
bj Let f~* be continuously invertible and I l f~.- lll - l > 1. Then there is some
J > a and some d > 1 such that
fk( xo) E X and Ilfk(xo) - x* 1I < J for 0 ::; k::; n - 1
impli es that
Ilr (xo) - x* 112
dn llxo - .r*11
for all n EN . This impli es that x* is unstable with respect to f (Ex ercise,
see Exercise 1.4 aj) .
Proof. a) From the continuity of x ---+ IIf~ I , x E X , it follows that there exists
some e > a and some c E [0, 1) such that
I lf~ 1 1 ::; c for all x E B( x*, c:) = {y E E I ll y - x*11< c:}.
The mean value t heorem impli es
Il f (x ) - f(y) 11::; I l f~ 11 Ilx - yll
for all x , y E B(x*, e) and some z = a x + (1 - a)y with a E (0,1).
This implies
IIf(x) - f(y)ll ::; cllx - y ll for all x , y E B(x*, c:)
and in particular
Il f (x) - x*ll ::; cllx - x*11< e for all x E B(x *,c:).
T herefo re it follows t hat f (B (x *, c:) ) ~ B (x*, e) and hence
Ilr (x o) - x* 1I = Il f 0 r -1(.TO) - f( x*) 11::; cllr - 1(xo) - x* 11
::; ... ::; cn llxo - x*11for all Xo E B(x*,c:) and n E N
14 1 Uncont rolled Systems

b) For every x , y E X we have

Ilx - yll = IIf~ *-l(j~*(X)) - f~ *-l(j~ *(y)) 11 :::; Il f~ *- ll l I lf~ *(x - y) 1


whi ch impli es
Il f~ * (x - y) 11;::: d' llx - yll with
d' = Il f~ *- 111- 1 > 1. From the Frechei differentiability it follows t hat

f(x) - f(x *) = i; (x - x*) + c(ll x - x* II) for x E X

wher e
Ilc(llx - .r *II )11 = 0
lim
Ilx - x* II
x ----x* .
If one chooses 1] > 0 with d = d' - 1] > 1 and <5 > 0 such t hat

Il cUlx - x*II)11
Ilx- x*1I < 1] for all B( x* ,<5)\{x*} ,

then it follows that

Ilf(x) - f( x*)11;::: 1 1f~* (r - y) II - llc( llx - x* II )11


;::: d' llx - x*II- 1]llx - x* 11
= dllx - x*11
for all x E B(x*, <5). Now let Xo E B( x*, <5) be such that

II f k(xo) - x* 11< <5 for 0 :::; k :::; n - 1 , n E N.


Then it follows t hat

Ilr(xo) - x*11 = Ilf 0 r - 1( xo) - f( x*) 11;::: dllr- 1( xo) - x*11


;::: . . . ;::: dnllxo - x*ll
T his complet es t he proof of Th eorem 1.5.
o
1.1 The Autonomous Case 15

Let us consider the following important special case :

We assume E = ]Rk equipped with any norm I . II. Let again X ~ E be


non- empty and op en and let again f : X -+ X be a continuously Frechet
differentiable (and hence continuous) mapping on X . This is equivalent to the
statem ent that at every x E X there exists the Jacobian matrix

and dep ends cont inuously on x .


The Frechet derivativ e is then given by

and we obtain

I l f~ 1 1 = IlJ f (x )1I = sup] IlJ f (x )(h )II I Il hll = 1 } .

Theorem 1.5 now gives rise to the following

Corollary: Let x* E X be a fixed point of f. Then the following two stat e-


ment s are tru e:
a) Let the spectral radius (2(Jf( x*)) < 1. Then x* is asymptotically stable with
respect to f .
b) Let Jf( x*) be invertible and let all the eigenvalues of Jf( x*) be larger than
1 in absolute value. Then x* is unstable with respect to f .

Proof By Theorem 3 in Chapter 1 of the book "Analysis of Num erical Meth-


ods" by E. Isaacson and H.B . K eller (John Wiley and Son s, New York, Lon-
don, Sydn ey 1 966) there exist s, for every 8 > 0, a vector norm in ]Rk such
that

a) Let us choose

Then
tl f~. 11 = IlJf (x*)11 ::; ~(1 + 15(Jf(x *))) < 1
and the assertion follows from Theorem 1.5 a).
16 1 Uncontrolled Systems

b) From t he assumpt ion it follows that f2 (J j (X*)- l ) < 1. Again by the above
quoted theor em it follows that , for a suitable matrix norm ,

so that the assert ion follows from Th eorem 1.5 b).

o
Assumption a) in the Coroll ary is not necessar y for the asymptotic st ability
of x* with respect to f as ca n be seen by the Ex ample (1.8) . Here we have

2x y . 1
h x(x , y) = - (1 + X2 )2 ' h y(x , y ) = 1 + x 2 '

1 2xy
hx (x, y) = 1 + y2 ' hy (x, y) = - (1 + y 2)2 '

hen ce
Jj (O, O) = (~~) =} f2( Jj(O, O)) = 1.

However , we have seen in Sec tion 1.1.3 t ha t (0, 0) is asy mpt ot ica lly stable
with resp ect to f.

1.1.5 Linear Systems

As at the beginning of S ection 1.1.4 we consider a normed linear space


(E , I . I ) and a mapping f : E ~ E which is given by

f (x) = A(x ) + b , x E E,

wh ere A : E ~ E is a cont inuous linear mapping and b EE a fixed element .


Then f is Frechet differentiable at every x E E and it s Frechei deri vative is
given by
f~ = A for all x E E .

Th eorem 1.5 lead s imm edi ately to

Theorem 1.6. Let x* E E be a .fixed point of i , i.e.


x* = A(x*) + b. (1.1 2)

Th en the follo wing two stateme nts are true :


1.1 The Autonomous Case 17

a) If IIAII < 1, th en x * is asymptotically stable with respect to f.


b) Let A be continuously in vertibl e and "A -1 " -1 > 1. Th en x * is un stable
with respect to f .

Exercise 1.5. Show that under the assumpt ion I AII < 1 there is at most one
x* E E with (1.12).
Further show that , if IIAII < 1 and there exists x* E E with (1.12) (which is
then unique), it follows that , for every Xo E E , the sequence (X n)nENo given
by
.'T n + 1 = A(x n ) + b, , n E No,
converges to x* .

The Corollary of Th eorem 1.5 leads to the following


Corollary: Let E = IRk equipped with any norm and let

f (x) = A x +b , x E IRk,

where A is a real k x k - matri x and b E IRk a fixed eleme nt.


Let x * E IRk be a fixed point of I , i.e.

x * = A x * + b.

Th en the follow ing two statements are true:


a) If g(A) < 1, th en x* is asymptotically stabl e with respect to f .
b) If A is in vertibl e and all eigenv alues of A are larger than on e in absolute
value , th en x* is unstable with respect to .f.

In the following we introduce a concept of stability and asympt ot ic stability


for linear systems that differs from the one given in Section 1.1.3 and that
we adopt from [18]. For that purpose we consider the sequences (Xn)nE No in
E with
X n + 1 = A X n + b , n E No,.'To E E . (1.13)
Definition 1.7. A sequence (X n)n ENo in E with (1.13) is called
1. st able, if for every e > 0 and every N E N, there exists some 8 = 8(c: , N)
su ch that for every sequen ce (X n) n ENo with (1.13) fo r Xo E E and IlxN -
x Nil < 8 it follows that

IIXn - xn ll < e for all n ~ N ;

2. attractive, if for every N E N, th ere exists some 8 = 8(N) such that for
every sequen ce (Xn)n ENo with (1.13) for (xo) E E and IlxN - xN11 < 8 it
follows that lim Ilxn - X n II = 0;
n--->oo
3. asympt ot ically st able, if (Xn)n ENo is stable and attractive.
18 1 Uncontrolled Systems

In order to guarantee the stability of all sequences ( X n)n ENo with (1.13) it
suffices to gua rantee the st ability of t he zero sequence (x n = B E)nEN o which
satisfies (1.13) for b = B E = zero element of E. T his is a consequence of

Lemma 1.1. Th e follo wing statements are equiv alen t:


(1) All sequen ces( X n) nE N o with (1.13) are stable.
(2) On e sequence ( Xn) nE No with (1.13) is stable.
(3) Th e zero sequen ce (x n = BE)nENo which satisfi es (1.13) for' b = B E is
stable.

Proof. (1) =} (2) is trivially true. Now let (2) be true. Then there is a sequ ence
(Yn)nENo with (1.13) which is stable. Now let ( Xn )nENo be any sequence with
(1.13) for b = B E. Then (x n + Yn)nENo satisfies (1.13) and for every E > 0
there exist s some 0 = O(E, N ) such that

IlxN- BEll = IlxN + YN- YNII < 0 impli es


Ilxn - BEll = Ilxn + Yn- Ynll < E for all n 2: N ,
sin ce (Y n) n EN is stable. From this it follows t ha t (xn = BE)nENo is st abl e
whi ch shows (2) =} (3) .
Now let (Yn)nENo and (Yn)nENo be arbitrary sequences with (1.13) . Then we
choose any seque nce (Zn) nENo in E with (1.13) and define

Xn = Yn - Zn and xn = Y n - Zn for n E No

Since according to (3) the zero sequence (x n = BE)nENo (which satisfi es (1.13)
for b = B E) is st abl e, it follows that for every E > 0 there exist s som e 0 =
o(E, N) such t ha t
l YN - YNII = IlxN - XN - BEll < 0 impli es that
ll Yn - Yn ll = Il xn - xn - BEll < E for all n 2: N
which shows that (1) is true.
Hen ce (1) =} (2) =} (3) =} (1) which completes the proof.
D

Remark: Lemma 1.1 also hold s true, if we replace stable by at t rac t ive. Hence
it is also true with asympt ot ically stable inst ead of stable.

Now let us again consider the special case E = IRk equipped with any norm
11 11. According t o Lemma 1.1 we consider sequences (X n)n ENo in IRk with

(1.14)
1.1 The Autonomous Case 19

In order to show stability of the zero sequence (.Tn = 8 n) nE No we assume


that A has eigen values A I , . . . , A k E C suc h that there exist eigenvectors
k
V I , . .. , V k E C whi ch are linearly indep endent . Therefore every Xo E ]Rk
has a unique re prese ntation
k
Xo = L CiVi , Cl , . . . . ci; E C.
i= 1

T his implies t hat every sequence ( X n) n E N o can be represented in the form


k
Xn = L c;Afvi , n E No.
i= 1

Now let us ass ume t hat

IAi I :S 1 for i = 1, . .. , k . (1.15)

If we define, for every


k
k
Z = L Ci( Z)Vi E C ,
i =1

a norm by
k
Il zll = L ICi (z )1 ,
i= 1

it follows that
k

Ilxnll = L IAi in ICi(Xo)1


;= 1
k

:S L ICi(.To)1 = IIToll
i =1

This leads t o
Theorem 1. 7 . Let the eigenva lues A I , . . . , Ak E C of A satisfy (1.15) an d be
such that the corresponding eigenvectors are linearly independen t. Th en the
zero sequence (x n = 8k) n ENo which satisfie s (1.14) is stable and hence ever y
sequence ( Xn) n ENo that sati sfies (1.14) with E = ]Rk is stable.
Proof Let E > 0 be chosen. T hen we put 5 = E . Now let (X n) n ENo be any
sequen ce with (1.14) for X n = xn and Ilxoll < 5. Then it follows that Ilxn -
8k II < E for all n E No which completes t he proof.
o
Exercise 1.6. Prove that the zero sequence (x n = 8 k) nE No is asymptot ica lly
stable, if all t he eigenvalues of A are less than 1 in absolute valu e.
20 1 Unc ontrolled System s

Remark: Under the assumpt ions of Theorem 1. 7 it is also t rue that the
mapping f : JRk ----> JRk defined by

f (x) = A x , x E JRk ,

has {8d as a stable fixed point in t he sense of Definition 1.5.

Proof Let G = JRk and define V : JR ----> JR by

V( x) = Ilxll ,x E JRk ,

where
k
Ilxll = L ICi(X)1 , x E JRk .
i =1

Then it follows that


k k
V(Ax) = IIAxl1= L IAil Jei(X)I ~ L ICi(X)1= Ilxll = V (x) , x E JRk .
i= 1 ;.= 1

Hence V(Ax) - V( x) ~ 0 for all x E G and V is a Lyapunov function on


G = JRk .
Further we have

V( x) 2: 0 for all x E G and (V (x ) = 0 {=? x =8k) .

Th eorem 1.3 therefore implies t hat {8 d is a stable fixed poin t of f.

o
vVe can also use anot her Ly apuno v fu nction in order to show that x = 8k
is a stable fixed point of the syst em (1.14 ). For that purpose we choose a
symmet ric and positive definite real k x k - matrix B and define a function

If
xT (AT B A - B) x ~ 0 for all x E JRk,
then V is a Lyapunov fun ction with respect t o

f (x ) = A x , x E JR k ,

on G = JRk which is positive definit e with respect to {8d.


1.1 The Autonomous Case 21

Exercise 1. 7.

a) Show with t he aid of Th eorem 1.3 t hat {8d is stable with respect to f.
b) Show with the aid of Th eorem 1. S that {8d is asy mptotically st able with
resp ect to i. if

One can even show that {8 d is globally asympt otically stable with respect
to I , i.e., {8d is stable wit h res pect to f and

whi ch is equivalent t o

A n ----; a= k x k - zero matrix.

Converse ly, let {8d be globally asy mptotically stable with resp ect t o f . Fur-
ther let C be a symmet ric and positi ve definite real k x k - m atrix such t hat
00
t he ser ies L: (A T) kCA k converges . If we define
k=O

00

l:)AT )kCA k ,
k=O

then B is a symmetric and positiv e definite real k x k - m atrix and it follows


t hat

which implies

1.1.6 Applications

a) Predator - Prey - Models

The classical predator - pr'ey - mo del as being develop ed and invest igated
by Volterra is based on a 2 x 2 - system of first order different ial equat ions
for t he densities of the prey and pr edat or populati on , res pect ively, This
model has also been invest igat ed in [13] with respect t o stability of its
equilibrium via a Lyapunov functio n.
22 1 Uncontrolled Systems

Here we consider the discret e version of t he model whi ch is given by two


difference equat ions of the form
Xn+l = (1 + a )x n - bXnYn ,
(1.16)
Yn+1 = (1 - c)Yn + dXnYn , n E No ,
with constant param et ers a > 0, 0 < C < 1, b > 0 and d > o. The valu es
Xn and Yn denote the den sity of t he pr ey and pr edator population at time
t = n , resp ectively. In this mod el it is ass umed that the pr ey population
grows exponent ially in the absence of predators and t ha t the pr edator
population decays exponentially in t he absence of prey.
If we define
( )_(h(X
f X, Y -
,Y)) , (.X, Y)
f2 (x ,y) E]R
2
,

with
I: (x , y) = (1 + a)x - bxy and f2( x , y) = (1 - c)y + dxy ,
t hen (1.16) read s

x n+1 ) = f(.xn, Yn) , n


( Yn+1 E No, (1.17)

and f : ]R2 ~ ]R2 is a cont inuous mapping. T he on ly fixed point (x *, y*)T E


]R2 with x* > 0 and y* > 0 is given by

* c * a
x =d and Y =b .
For every (x , y) T E ]R2 the J acobi matrix of f is given by
J ( )
j x, Y =
(1+ a - by
dy
bx
1 - c + dx
)

which implies
1
Jj (X.* , y * ) -- ( abd - 1([
be) .
The eigenvalues of J j(X*, y*) are given by
A1 ,2 = 1 ivac,
hence

o
By the Corollary of Th eorem 1.5 the only fixed point (x *, y*)T of f in ]R~
is un st abl e. Next we assume that the prey population grows logistically in
the abs ence of pred ators. Therefore we repl ace the first equa t ion in (1.16)
by
Xn+1 = (1 + a )x n - eXn2 - b.x nYn
with a constant param et er e > O.
1.1 The Autonomous Case 23

With this modification (1.16) can be written in the form (1.12) with

!I(x n , Yn) = (1 + a).Tn - e.T; - bXnYn and


!2( x n , Yn) = (1 - c)Yn + dXnYn , n E No .

o
The only fixed point (x * , y*) T E ]R~ oi ] is then given by

x* = ~ and v' = ~ (a - ~)
if
ce
a> d'
For every (x , y) T E ]R2 the Jacobi matrix of f is given by

J (x
j ,
1)
Y
= (1 + a - 2ex - by
dy
- bx
1 - c + dx
)

whi ch implies
1 - e .
Jj(x * ,y*) = ( (a - t.)
_!!f)
1
d
.
b d
The eigenval ues of Jf(x * , y*) are given by

Al ,2 = 1 - 2d
ec
V(2dec ) 2
- c (a - dec ).
I

We have to distinguish three cases:

Then
ec
- 1 < Al ,-') =1- -2d < 1 '

if and only if
ec
d < 4.

2) (~d) 2 - c(a - 7) < o.

Then
IAl,212 = 1-
ec
d ec ) < 1 ,
+ c (a - d
if and only if
ec e
a< -
d
+-d .
24 1 Uncontrolled Systems

Then
ec ec
1 - 2d < Al < 1 a nd 1 - d < A2 < 1.
Hence -1 < Al < 1 , if - 1<1- ~~ 7 <4
{::}

and - 1 < A2 < 1 ,if - 1< 1- 7 {::} 7 < 2.

From this we conclude t ha t in all t hree cases


2ec
IAl ,2 1 < 1 , if a < d < 4.
o
Result. There exists exactly one fixed point (x* , y*) E lR~ of f and this
fixed point is asympt ot ically stable, if
ce 2ec
d < a< d < 4.
Exercise 1.8. Show t ha t the only fixed point of t he syste m

Xn+l = (1 + a).T n - bXnYn ,

Yn+l = (1 - c)Yn + d [(l + a)x n - bXnYn]Yn , n E No ,


is asy mpt ot ically stable, if a < 2.

b) A Discretization of the Volt erra-Lotka Model

This model is describ ed by two differential equat ions of the form

x( t ) = (ci + c12 y(t))X( t ) ,


y(t ) = (C2 + C21x(t ))y(t )
where x (t ) and y( t) denot e the density of t he prey and pr edator population ,
resp ecti vely, at t he time t . T he coefficients Cl , C2, C12, and C21 sat isfy the
condit ions
c i > 0 , C2 < 0 , C12 < 0 , C2 1 > 0 .

This leads to the fact that the above syste m of differential equa t ions admit s
an equilibrium solut ion

x (t ) = X* = - -C2 , y(t ) = y* = - -Cl , t E lR .


C2 1 C12
1.1 The Autonomous Case 25

We now discretize this system by introducing a time ste psize and repl acing
the derivatives x(t) and y(t) by the difference quotients

x (t + h) - x( t) y(t +h)-y(t)
h an d h '

resp ectively, Then we obt ain the syste m of difference equations

x (t + h) = x (t ) + h(Cl + c12 y(t )):r(t ) ,


(1.16')
y(t + h) = y(t) + h(C2 + C2 1X(t ))y(t )
(see (1.16)) . The above equilibrium solut ion of the syste m of differential
equa t ions turns out to be a fixed point solut ion of (1.16') , namely

x (t ) = x* , y(t ) = y* for all t = to + kh , k ENo ,

wher e to E ]R is chosen arbit ra rily,


If we define
j'(x, y ) = (h
j'2 ((X,y )) , (x ,y) E ]R2 ,
x , y)
with

h(x , y) = x + hi e, + C12Y)X and f2(x, y) = y + h(C2 + C2 1X )Y ,

then (1.16') reads

X(t + h))
( y(t + h) = j ,(x(t) ,y(t )) , (1.17')

The Jacobi matrix of f at the fixed point (x*, y*) is given by


1 - hC12.L )
Jf( x * , y * ) -_ ( h C1 1 C2 1
- C21C12
and has the eigenvalues

which implies IA1,21> 1.


By the Corollary of Th eorem 1,5 the fixed point (x*, y*) of f is unstabl e,
Ther efore we repl ace the system (1.16') by t he system

:r(t + h) = x (t ) + h(Cl + C12 y(t ))X(t) ,


(1.16")
y(t + h) = y(t ) + h(C2 + C2 1X(t ))y(t ),
26 1 Uncontrolled Syst ems

whi ch has the same fixed poin t solut ion as (1.16'). The above vector fun c-
tion f has t o be repl aced by t he function

X + h(Cl + C12Y)X ) (X,y) E]R2 ,


f( 7: , y) = ( y+h(C2 + C21(X +h(C1 + C1 2Y)X))Y ,

whose J acobi matrix at the fixed point (x *, y*) is given by

J x* 7* -
1 -hc12 ~ )
1 + h2c C
C21
f( , Y ) - ( -hc21 .L
C12 12
T he eigenvalues of Jf(.7:*, y*) rea d

We distinguish three cases:


1) (1 + T
h2
C1C2) 2 = 1 .

Then it follows that

which is equivalent t o
4
IC1 . c21 = h 2 .

2) (1 + \2C1C2 )2 < 1 ::} IC1 . c2 1< tr .


Then it follows t hat

which implies
Al i' A2 and IA11= IA21= 1.
3) (1 + h2 C1C2 )2 > 1 ::} IC1 ' c2 1> ~2
2

Then it follows that


1.1 T he Au t onomous Case 27

which implies

and
h2 h2
> 1+-
2
A2 CI C2 - - leI ' C21 = 1 - h ICI .C21 .
2 2
Further we have A2 ~ - 1, if an d only if

4
-< -h 2

which is impossible.

Result :
4
If ICI .c21:s; h2 ' then IAl,2 1= 1 .
4
If leI ' C2 1> h2 ' then A2 < Al < 1 , but A2 < -1 .

For t he following we assume t hat

Then it follows (as we have seen above) that

wh ich implies that the corres po nd ing eigenvectors are linearly ind ep en-
dent. From the rem ark following Theorem 1. 7 we therefor e infer that the
mapping 9 : lR 2 ~ lR 2 defined by

g( x ,y) = Jj(x * , y*) G) x, Y E lR ,

has (~) as stable fixed point.


28 1 Uncontrolled Systems

c) Interacting Logistic Growth of Two Populations

If we discretize the original model which is pr esent ed and investigat ed in


[13] in t erms of two first order differential equa t ions, then we obtain two
difference equations of t he form

.1:n+ l = (1 + h a - h b X n - h e Yn) .1: n ,


(1.18)
Yn+! = (1 + h d - h e X n - h I Yn)Yn , n E No

with constant par amet ers a, b, e, d, e, I > 0 and ste p size h > O. Again X n
and Yn den ot e the densit ies of t he two pop ulations at time t = n . Both
pop ulations grow logisti cally in the abse nce of the other popul ation and
t he te rms (h e Ynxn) and (h e xn Yn) describ e the mutual int er act ion. If
we define
I (x , Y ) -_ ( ffI2 ((..11:': , Y)
Y
) ) , ( x,. Y) E lR 2 ,
with
fI( x ,y) = (1 + h a - h b x - h e y) x ,
f2 (x , y ) = (1 + h d - h e x - h ] y )y,

t hen (1.18) read s

(1.19)

and I : lR 2 --+ lR 2 is a cont inuous mapping.


The point (x *, y*)T E lR 2 is a fixed poin t of I with x* -I- 0 and y* -I- 0, if
and only if
b x* + e y* = a ,
e x * + I y* =d.
Let us assu me t hat
b] - ee > O. (1.20)
Then
ae- db
x
*
= a I -de and y*
bI - e e ee- bI
and x* > 0, y* > 0, if and only if

e a b
f < d < ~, (1.21)

which implies (1.20).


1.1 The Autonomous Case 29

For every (x , y)T E IR 2 the Jacobi matrix of f is given by

Jj( x , y) =

- h bx +1+h a - h b x - h ey - h ex )
( - he y - h f y + 1+ h d- h e x - h f y
which implies

J j (x * , y *) = (1 - h b x* - h c x* )
_ h e y* 1 - h f y* .

The eigenvalues of J j(x *,y*) ar e given by

bx* + fy * bX* + fy *) 2 )
A1,2 = +1 + h (
- 2 ( 2 - (bf - ec )x *y*

= +1 + hf.l1 ,2.

From (1.21) whi ch implies (1.20 ) and .'1:* > 0, y* > 0 it follows that
R e(f.l1 ,2) < O. This implies IA1,21< 1 (Exercise), if h > 0 is sufficiently
small.

Result. If (1.21) is satisfied and h > 0 is sufficiently small, then there is


o
exac t ly one fixed point (x *, y*) E IRt of f and this fixed point is asymp-
totically stable.

Exercise 1.9 . Show that the syste m

= (1 + h a - h e X n - h b Yn) x n ,
X n +1

Yn+1 = (1 - h c + h d xn )Yn , n E No ,
with a , b, c, d, e > 0, h > 0 has exa ct ly one fixed point x* > 0, y* > 0, if
a > cde, which is asympt ot ically stable, if h > 0 is sufficient ly sm all.
30 1 Uncontrolled Syst ems

d) An Emission Reduction Model

In [25] a mathematical model for the reduction of carbon dioxide emission


is investigated in form of a time discrete dynamical system which as un -
controlled system is given by the following sys te m of differenc e equat ions

r
Ei(t + 1) = Ei(t) + L emij Mj (t ) ,
j= l

Mi(t + 1) = Mi(t) - AiMi(t)(Mt - Mi(t))Ei(t) (1.22)

for i = 1, . .. , r and t E No
wh ere Ei(t) denotes the amount of emission reduction and Mi(t) the fi-
nancial me ans sp ent by the i - th actor at the time t , Ai > 0 is a growth
paramet er and Mt > 0 an upp er bound for M; (t) for i = 1, . .. , rand
t E No. For t = 0 we ass ume t he system to be in the st ate E Oi , M Oi ,
i = 1, .. . , r whi ch leads to the initi al condit ions

Ei(O) = E Oi and Mi(O) = M Oi for 1, ... , r.

If we define x = (E T , M T )'r , E , M E jRr, and fun ctions


fi : jRn ~ jRn , i = 1, . .. , n = 2r by

r
f i(X) = Ei + L emij Mj , i = 1, . . . , r , (1.23)
j=l

f i(X) = M i - AiMi(Mt - Mi)E; , i = r + 1, . . . , n , (1.24)

then we ca n write (1.22) in the form

x (t + 1) = f( x(t)) , t E No , (1.25)

wh ere f( x) = (h(x), . .. , f n(x )f

For every i; = (E T ' eT)T


r' EE jRr
, we have

i; = f( i;) .
Let i; be any such fixed point of f.
1.1 The Autonomous Case 31

Then we replace the system (1.25) by the linear syst em

x(t + 1) = Jj( x).r(t) , t E No , (1.26)

wh ere the Jacobi matrix J j( x) is given by

wher e 11' and Or is the r x r - unit and <z er o matri x , resp ectively, and

and

D _ (d ll
. 0)
o d rr
with

Let us assume that

di i :f= 0 , di i :f= 1 and Idii l::; 1 for all i = 1, . . . , r.


Then the matrix Jj( x) is non-sing ular. Its eigenvalues are given by

fl i = 1 for i = 1, . . . , T , fl1 +r = di ;, for i = 1, ... , r,


hence
Ifl i I ::; 1 for i = 1, ... , 2r
and t he corresponding eigenv ectors

e; ) for J, = 1 , ... , r
( G,.

and

(
_ _ 1
1-d j J
ce,)
J forj =l , .. . , r
eJ
(ej = j-t h unit vector) are linearly ind ependent. Th eorem 1.7 therefore
implies that t he zero sequence (x(t) = G n)t ENo which satisfies (1.26) is
stable.
32 1 Uncont rolled Systems

1.2 The Non-Autonomous Case

1.2.1 Definitions and Elementary Properties

Let X be a metric space with metric d : X x X - t JR+ and let (fn)nEN


be a sequence of cont inuous mappings f n : X - t X , n E N. Then the pair
(X, (fn)n EN) is called a non-autonomous time-di scret e dynamical system . The
dynamics in this system is defined by the sequence F = (Fn)n ENo of mappings
Fn : X - t X given by

Fn(x) = fn 0 fn-l 0 ... 0 ft( x) for all x E X and n E N

and
Fa = x for all x E X .
If f n = f for all n E N we obtain an autonomous time-discrete dynamical
system (X , f) as being defined in Section 1.1.1.
For every x E X we define an orbit start ing with x by
I'F(X) = U {Fn( x)} (1.5 ')
n ENo

and the limit set of I'F(X) by

LF(x) = n
nE No
U {Fm( x)}
m'2:n
(1.6')

where A denotes the closure of A <;;; X.


Then we can prove

Proposition 1.1': For every x E X the limit set LF(x) consists of all accu-
mulation points of the sequence (Fn(X)) nENo '

The proof is the same as that of Proposition 1.1 and is left as an exercise.

Now let X be compact. Then , for every x E X , the sequence (Fn(X))nEN o


has at least one accumulat ion point which impli es, by Proposition 1.1', that
LF(x) is non-empty. By definition L F(X) is a closed subset of X and hence
also compact for every x E X.
Fur ther on e can show that LF( X) is the sm allest closed subset S <;;; X with

lim p(Fn(x) , S) = 0 with p(y , S) = min{ d(y , z ) I z E }. (1.7')


n ->oo

The proof is left as an exercise (see Proposition 1.3) . In addition we can prove
the following
1.2 The Non-Autonomous Case 33

Proposition 1.6. Let X be compact and let (fn) nEN be uniformly convergent
to some mapping fo : X ----+ X. Then it follows that

fo(L p( x)) = L F(X) for all :r E X.

Proof

1) At first we show that fo(LF( x)) ~ L F(x) , x E X .


Choose x E X and Y E L F(X) ar bit rarily. Since fo is cont inuous, for every
e > 0, there exist s some J = J(c:, y) > 0 such that

fo(x) E Uc:(fo(Y)) for all x E Ui5 (y ).

Uniform convergence of (fn)n EN to fo impli es t he existe nce of some n(c:) E


N such that

fn( x) E Uc:(fo(x)) ~ Uzc:(fo(Y)) for all x E Ui5(Y) and all n 2: n(c:) .

By Proposition 1.1 ' there is a subsequence (Fn;(X))iENo of (IFn(X))nEN


with
Fn;(x) E Ui5 (Y) for all i E No.
This implies

Since e > 0 is chosen arbit rarily, it follows t ha t

Fn;+1 (x) ----+ fo(Y) and hence, by Proposition (1.1 '}, fo(Y ) E L F(x).

2) Next we prove L F(X) ~ fo(L p( x)) , x E X .


Choose x E X and Y E L F(X) arbit ra rily. Then we have to show the
existe nce of som e x E LF( X) such that Y = fo( x) .
By Proposition 1.1 ' t here is a subsequence (Fn;(X))iENo of (Fn( X))nENo
with Fn ; (x) ----+ Y as i ----+ 00 . If we put

Xi = Fn;-l(X) for all i E No,

then it follows that I. (Xi) ----+ Y as i ----+ 00 . We can also assume that
Xi ----+ x for some x E L F(x).
34 1 Un con trolled System s

Then we have
d(fo( x) , y) < d(fO(X),fO( Xi)) + d(fO(Xi), f n;(Xi)) + d(fn;(Xi) , y)
< d(fO(X),fO(Xi)) + sup d(fo(i) , fn ;(i)) + d(fn;(Xi) , y)
, v ' x EX '------v--"
--. 0 - --.0
v
--.0
as i ----+ 00,

hence y = fo(x) .

o
Next we give a more pr ecise localization of t he limit sets L F (x) , X E X , with
the aid of a Lyapuno v fun ction which is defined as follows:

Let G ~ X , be non empty. Then we say t hat V X ----+ lEt is a Lyapuno v


fun ction with respect to (fn)nEN on G , if
(1) V is cont inuous on X ,
(2) V(fn(x)) - V( x) ::; a for all x E G and all n EN such t hat f n(x) E G.
For every c E lEt we define

V -1(c) = {x E X I V (x ) = c}.
Then we ca n prove the following

Proposition 1.5': Let V be a Lyapuno v fu nction with respect to (fn) nEN on


G ~ X where G is relatively compact. Let further Xo E G be chosen such that
f n(x o) E G f or all n E N . Then there exists some c E lEt such that

and L F(xo) is nonempty.

Proof Since G is relatively compact , it follows that LF(xo ) is non empty. For
every n E N we put X n = Fn(xo) which impli es

Xn E G and V( xn+d ::; V( x n) for all n E N.

Since V : X ----+ lEt is cont inuous, V is bounded from below on G which implies
the existence of c = lim V (x n ) .
n--. oo
Now let p E L F(xo) . T hen, by Proposition 1.1 ', t here is a subsequence
(x n; ) i ENo of ( Xn)n ENo with xn; ----+ p as i ----+ 00 .
T his impli es V(p) = lim V( x nJ = c, hence p E V -l(c).
' --'00

o
1.2 T he Non-Autonomous Case 35

1.2.2 Stability Based on Lyapunov's Method

Definition 1.8. A relatively com pact set H ~ X is called stable with respect
to (fn)nE N, if for every relatively compa ct open set U ~ X with U ;2 H =
closure of H th ere exis ts an open set W ~ X with H ~ W ~ U such that

Fn(W) ~ U for all n E No

where

Theorem 1 .1': Let H ~ X be relat ively com pact and such that for eve ry
relatively com pact open set U ~ X with U ;2 H th ere exists an open subs et
B u of U with e., ;2 Hand

fn(B u) ~ U for all n EN.

Further let G ~ X be an open set with G ;2 H su ch that th ere exi sts a


Lyapunov func tion V with respect to (fn)nEN on G whic h is positiv e definit e
with respect to H , i.e. ,

V( x) ::::: 0 fo r all x E G an d (V( x) = 0 {:} x E H) .

Th en H is stable with respect to (fn)nEN.

Proof Let U ~ X be an arbit rary relatively compact op en set with U ;2 H.


Then U* = U n G is also a relatively compact op en set with U* ;2 H and there
exist s an ope n set B u. ~ U * with B u ;2 H and fn(Bu -) ~ U * for all n E
N . Let us put
m = m in{V(x) I x E U *\Bu -} .
Sin ce H n (u*\B u - ) is em pty, it follows that m > O. If we defin e

W = { x E U* I V (x) < m} ,
then W is op en and H ~ W ~ B u - . Now let x E W be chose n arbit rarily.
Then x E B u- and therefor e F1(x) = h(x ) E U* . Further we have

V(F1( x)) = V(h( x)) ~ V( x) < m,


hence F1( x) E W ~ B i, . This implies F 2( x) = h(F1(x)) E U * and
V(F2( x)) ~ V(F1( x)) < m , hen ce F2( x) E W. By induction it therefor e
follows that
Fn(x) E W ~ U* < U for all n E No.
This shows that H is stable with resp ect t o (fn)nEN.
o
36 1 Unc ontrolled System s

Definition 1.9. A set H S;; X is called an atiracior with respect to (fn)n EN,
if there exists an open set U S;; X with U 2 H such that

lim p(Fn(x) , H) = 0 (in short: Fn(x) ---> H ) fo r all x E U


n ->oo

where
p(y , H) = inf{ d(y , z ) I z E H }.
If H S;; X is stable and an aiira cior with respect to (fn)n EN, then H is called
asymptotically stable with respect to (fn)nEN.
Theorem 1.2': Let H S;; X be such that there exists a relatively compact
open set U 2 H with

fn (U ) S;; U for all n E No

Further let V : X ---> IR be a Lyapunov fun ction with respect to (fn)n EN on u


which is positive definit e with respect to H . Finally let

lim V(Fn(x) )
n ->oo
=0 fo r all x E U.

Then H is an atiractor with respect to (fn) nEN.

Proof Let x E U be chosen ar bitrarily. Since the sequence (Fn( X))nENo is


contained in U, for every subsequence (F n i (x )) i ENo of (Fn (x) ) nE No there exist s
a subsequence (Fn ;.1 (X)) jE No and some q E U with

lim F n ; (x)
) ---"'00 .1
= q.

This implies
lim V(Fn; (.r )) = V(q) = 0,
) -+00 .1

hence q E H , and t herefore Fn i I. (x ) ---> H as j ---> 00 . From this it follows that


Fn(x) ---> H which shows that H is an attractor with resp ect to (fn) nEN.
o
Corollary: Under the assumptions of Theorem 1.1 ' and 1.2 ' it follows that
H S;; X is asym ptotically stable.

Let us demonstrat e Theorem 1.1 ' and Theorem 1.2 ' by an example which is
a modifi cation of Exampl e 1.8. We choose X = 1R 2 and consider a sequ ence
(fn)nEN of mappings fn : 1R 2 x 1R 2 given by

f n(x , y ) = ( -an
-2
y ' --2
bn X ) , (x ,Y ) E IR 2 ,n EN,
l +x l +y
1.2 The Non-Autonomous Case 37

where (an)nEN and (bn)nEN are seque nces of real numbers with

a~ :::; 1 and b~ :::; 1 for all n E N. (1.27)

Put H = {(O,O)}. If we choose

V( x , y) = x 2
+Y 2
, (x , y)2
E ]R ,

then V is a Lyapuno v fun ction on G = ]R2 with resp ect to (fn)n EN which is
positive definit e with respect to H = {(a, a)} , since

:::; ( a~ - 1)y 2 + (b~ - 1)x 2 :::; a for all (x , y) E]R2 and n EN.

Now let U ~ ]R2 be relatively compact and op en with (0, 0) E U be given.


Then there exist s some r > a such t ha t

B u = {(x ,y) E ]R2 1 x


2 + y2 < r } ~ U.

Further it follows that

h n(x , y) 2 + fzn( x , y) 2 :::; x 2 + Y2 < r for all (x , y) E Bu ,

hence
fn( B u ) < B u ~ U for all n E N.
Therefore all t he ass umpt ions of Theorem 1.1 ' hold true which implies that
{(a, O)} is st abl e with respect to (fn)n EN.
Next we sharpen (1.27) to

a~ :::; "f < 1 and b~ :::; "f < 1 for all n EN .

Then it follows t hat

V(fn(x ,y)) :::; "fV(x ,y) for all n EN and (x ,y) E]R2

which impli es

V(Fn( x, V)) :::; "fnv (x, y) for all n E N and (x , y ) E ]R2

and in t urn
lim V(Fn( x ,y)) = a for all (x , y ) E ]R2 .
n ---> oo

By Theorem 1.2 ', H = {(O,O)} is an attracto r with respect t o (fn)nEN and


by the Corollary of Theorems 1.1 ' and 1.2 ' {(O, a)} is asy mpt ot ically st able
with respect to (fn)nEN.
38 1 Uncontrolled Systems

1.2.3 Linear Systems

We consider a normed linear space (E , II II ) and a sequence (fn)nEN of map-


pings i : E --+ E which ar e given by

f n(x) = An (x) + bn , z E E , n E N,

where (A n)nEN is a sequ ence of cont inuous linear mappings An : E --+ E and
(bn)nEN is a fixed sequence in E .
Then the pair (E , (fn)nEN) is a non-autonomous time-discrete dynamical sys-
t em. The dynamics in this system is defined by the sequence (Fn) nENo of
mappings Fn : E --+ E given by

Fn(x) = fn 0 fn-I 0 0 hex)


n
= An 0 A n- I 0 0 Al (x) + LAn 0 A n- I 0 . 0 Ak+l (bk) (1.28)
k =1

for x E E and n E N

where An 0 An +l = identity mapping for all n E Nand

Fo(x) = x for all x E E . (1.29)

In general there will be no common fixed point of all f n , Le., no point x* E E


with x* = fn( x*) for all n E N which then would also be a common fixed
point of all F n . Therefore fixed point st ability is not a reasonable concept
in this case . We repl ace it by anot her concept of stability which has been
introduced in Section 1.1.5 already and whos e definition will be repeated
her e.
Definition 1.10. A sequence (x n = Fn(.'I:O))nENo, Xo E E , is called

1. st abl e, if for every c > 0 and every N E No there exists some 0 = o(c, N) >
Osuch that for every sequence (x n = Fn(XO))n ENo , Xo E E, with II x N - x N II < 0
it follows that
II Xn - x n ll < e for all n 2: N + 1.
2. attractive, if for every N E No there exists som e 0 = o(N) > 0 such that
for every sequence (x n = E; (xo) )nEN'l! Xo E E, with Il x N - x Nil < 0 it follows
that
lim Ilxn - xnll = 0 ;
n --->oo

3. asymptotically st abl e, if (x n = Fn(XO))n EN'l! Xo E E , is stable and attrac-


tive.
As a first cons equence of this definition we have
1.2 The Non- Aut onomous Cas e 39

Lemma 1.1': The following st atements are equivalent:


(1) All sequences (x n = Fn(XO)) nEN()! Xo E E , are stable.
(2) One sequence (x n = Fn(XO))n ENo , Xo E E , is stable.
(3) The sequence (x n = An A n - 1 0 0 A1(GE)) nEN = (:r n = G E)nENo is
stable.

The proof is t he same as that of Lemma 1.1 . We ca n also repl ace stable by
attractive and hen ce by asymptotically stable. This Lemma leads to t he fol-
lowing suffici ent cond it ions for stability a nd asymptotic stability:

Theorem 1.8. If

IIAn l1= sup{I IAn( x) 11 I Il xll = I} ::; 1 for all n E N, (1.30)

then all sequences (x n = Fn( XO)) nENo , Xo E E , are stable.


If
sup IIAnl1 < 1 , (1.31 )
nEN
then all sequences (x n = Fn(XO) )nENo , Xo E E , are asymptotically stable.

Proof. Let us first assume t hat (1.30) holds true. Let e > 0, b e cho-
sen arbitrarily. Then we put 8 = e a nd conclude that for eve ry sequence
(x n = Fn(XO))nE No , Xo E E , with IlxN11 < 8 for some N E No it follows that

Ilx n - GE II = II An An- 1 0 . .. A N+l( XN ) 11


::; II AnllII An- 111 IIA N+1111Ix N II < 8 = E: for all n 2: N +1 .
Hen ce the sequenc e (x n = G E)nENo is st a ble and by Lemma 1.1 ' all sequences
(x n = Fn(XO)) nENo , Xo E E are stable.
Next we assume that (1.31) (:::} (1.30) ) hold s t r ue . Thus for eve ry N E No ther e
ex ists some 8 = 8 (N) > 0 su ch t hat for eve ry sequence (x n = Fn (xo) )nENo ,
Xo E E, with Il x N 11 < 8 for some N E No it follows that

lim Il xn-GEII= lim II A no A n- 1o .. A N+l( x N) 11


n ----+ oo n-+ oo
::; lim ( sup II Amll)n- N- 11Ix NII = o.
n ->oo mEN

Thus the seque nce (Xn = GE) nENo is attractive, hen ce asymptotically stable
which implies that all sequ en ces (.':c n = Fn(XO) )nEN o , Xo E E , are asymptoti-
cally stable.
According to the a bove conside rations it suffices t o consider homogeneous
syst ems with
f n(x ) = An (x ) , X E E , n E N,
40 1 Uncontrolled Systems

such that
Fn(x) = An 0 A n- I 0 .. . Al (x) , x E E , n E N.
Then all sequ ences (x n = Fn (xo) )nENo, Xo E E, ar e stable or asymptotically
stable, if and only if the sequence (x n = Fn (G E))nENo is stable or asymptoti-
cally stable.
o
This leads to

Theorem 1.9. Assumption: All An, n E N, are invertibl e. A ssertion: Th e


sequence (An 0 A n- I 0 0 AI(GE))nENo is stable, if and only if for every
N E No there exists a constant CN > 0 such that
IIAn 0 A n- I 0 . . . A N +1 11:::: CN for all n 2 N + 1. (1.32)
Th e sequence (An 0 A n- I 0 0 AI(GE))nENo is asymptotically stable, if and
only if for every N E No
lim II A n 0 A n -
n -+ oo
I 0 . .. AN+I II = O. (1.33)

Proof If the sequ ence (An 0 A n- I 0 .. . 0 A1(GE)) nENo is st abl e, we choose


an arbitrary e > 0 and conclude that, for every N E No, there exist s 0 =
o(e , N) > 0 such that for every seq uence (x n = Fn (xo) )nENo, with Xo E E , it
follows that
IIAn 0 A n- I 0 AN+I(XN) II < e for all n 2 N + 1.
This implies

II A no A n- lo . .. o A N+1 (xN )II<e for all n 2N+l and IlxNII ::::


s
2
hence

2e
sup[ IIAn 0 An- I 0 AN+I(XN) II I llxN l1:::: 1 } :::: TeN
for all n 2 N + 1 -here the Assumption that all An , n EN, are invertible
is needed- . Conversely let (1.32) be true for every N E No. Then we choose
e > 0, N E N, define 0 = .. eN
and conclude that for every sequence (XN
Fn(XO))nEN o ' Xo E E , with IlxN11 < 0 it follows that
Ilxn - GEII = IIAn 0 An- I 0 AN+I(XN ) 11
:::: IIAno An- I O. . . o AN+l llll x NII < e for all n 2N+1.
Henc e the sequ ence (x n = G E )nENo is stable.
1.2 The Non-Autonomous Case 41

If the sequen ce An 0 A n- 1o - . . 0 A 1 (GE ) n ENo , is asymptotically stable, hen ce


att ractive, then, for every N E No, t here exists some J = J(N ) > 0 such that
for every sequence (XN = Fn(XO)) nENo , Xo E E , with Ilx N11 < J it follows that

lim Ilx n - GE II = 0, i.e., for every e > 0 t here exists some n (c):::: N +1
n ---> oo

with IIxn - GE II < e for all n:::: n(c), hence

This implies
II A n 0 A n- 1 0 ' " 0 A N+l II =
2e
sup] II Ano A n- 1 0 .. . o AN+ 1(XN)11 I IIxNl1:::; I} :::; T
(here again the A ssumption that all A n , n E N , are invertible is needed)
for all n :::: n( c), hence

lim II A n 0 A n - 1 0 0 A N+lI I = O.
l1 ---t OO

Conversely let (1.33) be t rue for every N E No. Thus (1.32) is satisfied for
every N E No and t herefore t he sequence (A n 0 A n- 1 0 .. . 0 A1(GE)nENo)
is stable. Further there exists J = J (N) > 0 such that for every sequ ence
(x n = Fn(XO)) nENo , Xo E E , with II xNl1< J it follows that

lim IIxn - GE II :::; lim IIA n 0 A n- 1 0


n~oo n-+ oo
0 AN+lll llxNl1 = O.
Thus the sequence (An 0 A n- 1 0 . . . 0 A1(GE ))nENo is attra ctiv e.

o
Now we specialize to t he case E = IRk equipped with any norm.
T hen we have
f n(x ) = Anx + bn , x E IRk ,
where (An)nEN is a sequence of real k x k - matrices and (bn)nEN a sequ ence
of vectors bn E IRk . This lead s to

Fn(x ) = fn 0 fn-l 0 0 h(x )


n
= AnA n- 1 . . . A1x + ~ AnA n- 1 Aj +lbj
j =l

for x E IRk and n E N where AnA n - 1 . . . Aj+ 1bj = b., for j = n.


42 1 Uncontrolled Systems

Let us assume that , for some p E N,

for all n E N.
The question then is whether there is a sequence (Xn)n ENo with

Xn = AnXn-l + bn for all n E N (1.34)

such that
Xn +p = Xn for all n E No (1.35)
which implies x p = x o.
Convers ely let this be the case . Then
x !+p = A l +p x p + b!+p = AlXo + bl = Xl
X2+p = A 2+p x !+p + b!+p = A2 Xl + b2 = X2

hence
X x +p = Xn for all n E No.
Result. A sequ ence (Xn)n ENo with (1.34) sat isfies (1.35) , if and only if x p = Xo
whi ch is equivalent to Fp(xo) = Xu, i.e.,

L A pA p_ l . . . Aj+lb
P
.TO = ApA p_ l .. . Al Xo + j .
j= l

This implies that there exist s exactly one sequence (Xn)nENo with (1.34) and
(1.35) , if and only if the matrix 1- ApA p- l . .. AI , 1= k x k-unit matrix, is
non-singul ar. The first vector Xo is then given by
p

Xo = (I - ApA p_ l .. . Ad- l L A pA p_ l . . . A j+lA j .


j =l

In t he homogeneous case where

b., = fh for all n E N

a non-zero sequence (Xn)nENo with (1.34) and (1.35) can only exist, if the
matrix I - ApA p- l . . . Al is singular ,
i.e.
det(I - A pA p- l . . . A d = 0 .
This again is equivalent to the fact that A = 1 is an eigenv alue of ApA p_ l .. . AI .
1.2 The Non-Autonomous Case 43

1.2.4 Application to a Model for the Process of Hemo-Dialysis

In order to describe the tempor al development of the concent rat ion of som e
po ison like ur ea in the bod y of a person suffering from a renal disease and
having t o be attached to an artificial kidney a mathem atical model has been
proposed in [1 2] which ca n be described as follows. The human body is divided
into two compa rt ment s being t erm ed as cellul ar par t Z and ext racellular part
E of volume Vz and VE, resp ect ively.
The two compa rt ments are sepa rate d by cell membran es having the perme-
ability Cz [m l/min ]. Let t [min] denot e the time and let K z(t) [mg/ m i n ] and
K E(t) [mg/min ] be the concent ration of t he poison at t he time tin Z , and E ,
respectively. We consider some time interval [0, T ], T > 0, and assume that
the patient is attached to the art ificial kidn ey dur ing t he subinte rva l [0, t d] for
som e t d E (0, T] . We further ass ume t ha t the genera t ion rat e of the poison in
Z and E is L 1 [m g/ m i n ] and L 2 [mg/ m i n ], respectively. T hen the t emporal
development of K z = K z(t) an d K E = K E(t ) in [0, 00) can be described by
t he following syste m of linear differential equations

VzKz(t) = - Cz (K z (t ) - K E(t)) + L1 ,
(1.36)
VEKE(t) = Cz (Kz (t) - K E(t )) - C (t) KE + L2
where
c for 0 :::; t < t.
C (t) ={0 for td :::; t < T.
(1.37)

C (t + T) = C(t) for all t > 0 (1.38)


and C [m l/ min] is the permeabilit y of t he membran e of the art ificial kidney.

By (1.38) the periodicity of the pro cess of dialysis is expressed . The main
result in [12] is the proof of the existe nce of exac tly one pai r (Kz (t) , K E(t)) of
positive and T-perio dic solutions of (1.36) which intuitiv ely is to be expec te d .
For the numerical computation of t hese solut ions Eul er 's polygon method ca n
be used. For this purpose we choose a time ste psize L1t > 0 such that

td = K . L1t , T = N . L1t
for K , N EN , 2 <K <N
44 1 Uncontrolled Syst ems

and replace (1.36) by t he difference equat ions

Vz (Kz (t + Llt) - K z(t )) = - Cz Llt (K z (t ) - K E(t )) + LILlt ,

(1.39)
VE(KE(t + Llt) - K E(t)) = Cz Llt( Kz(t) - K E(t )) - C (t )Llt KE (t ) + LzLlt .
If we define, for every n E No,

_ (Kz (n . Llt))
Xn - K E(n Llt ) ,

1 - Cz Llt / Vz Cz Llt /Vz )


A n+! = ( CZ Llt / VE 1 - (C( n Llt) + CZ )Llt/ VE
and
L I Llt / VZ )
bn + 1 = ( L zLlt /VE =b,
then (1.39) ca n be rewrit te n in t he form

Xn+1 = A n + 1 X n + bn + 1 , n E No , (1.40)

and we have
A n+N = An an d b = b for all n E N.

In order t o show t he existe nce and uniqueness of a periodic solution of (1.39)


we assume that
Llt < min (Vz / Cz) , VE/ (C + Cz )). (1.41)

We have to show t ha t the matrix 1- A NA N- I .. . Al with I = (~ ~) is non-


sing ular .
Let us put
CI = Cz Llt/ Vz Cz = CZLlt/ VE .
Then , for all n = K , ... , N - 1, we obtain

and hen ce
1.2 Th e Non-Autonomous Case 45

If we put
C3 = CLJ.t jVE ,
then we obtain

This implies

and hence

Therefore

AK-IAK-2 .. . Al G) : ; C _C3 _1C~(~2~3C2_ C3)) < (~)


and finally

Sinc e the spectral radius dB) of every quadratic matrix B with positive
elements satisfies the inequality

(2(B) ::; m ax(By) ;fYi

for all Y = (Y)i with Yi > 0 for all i (see [5]), it follows that

wh ich implies that I - AN AN - 1 ... Al is non-singular.


Hence there exists exactl y one N-periodic solution of (1.40).
This is also positive. In order to see that we hav e to show that Xo which is
given by
N
Xo = (I - ANAN-l . . . A 1)- 1 L A NA N-l . . . AJ+l b
j
j=l

is positive.
46 1 Uncontrolled Systems

First of all we observe that


N

L A NA N- 1. A j+ l bj
j=1

is posit ive. If we define B = A NA N -1 . .. A I , then B has positive element s


and therefore
00

(I - B) -1 = LB j
j=O
has positive element s which impli es that Xo is positive and in turn that all
Xn ,n E No, ar e positive.
2

Controlled Systems

2.1 The Autonomous Case

2.1.1 The Problem of Fixed Point Controllability

We begin with a syst em of difference equations of t he form

x (t+1 )=g(x(t) , u(t )) , t EN o (2.1)

where g : IRn x IR m --+ IRn is a cont inuous mapping.


The functions x : No --+ IRn and u : No --+ IRm are considered as st ate and
control functions, respectively. For every cont rol functi on u : No --+ IRm and
every vect or Xo E IRn there exists exactly one state function x : No --+ IRn
which sat isfies (2.1) and
x( O) = xo. (2.2)
If we fix the cont rol function u : No --+ IRm and define

it+l (X) = g(x ,u(t )) , x E IR n , t E No , (2.3)

then , for every t EN, it : IRn --+ IR n is a cont inuous mapping and (IRn, (ft)tEN)
is a non-autonomous time-discret e dyn ami cal syste m which is cont rolled by
the fun ction u : No --+ IRm. If

u(t) =8m = zero vector of IR m for all t E No ,

then the syste m (2.1) is called uncontrolled. Let us assume that the uncon-
trolled syste m (2.1) admits fixed points i: E IR n which then solve the equa t ion

(2.4)

Now let fl <;;; IRm be a subset with 8 m E fl . T hen we define the set of
admissible control functions by

u= {u: No --+ IR m I u(t) E fl for all t E No}. (2.5)

W. Krabs et al., Analysis, Controllability and


Optimization of Time-Discrete Systems and Dynamical Games
Springer-Verlag Berlin Heidelberg 2003
48 2 Controlled Systems

Aft er these pr ep ar ations we are in the position to formulate the

Problem of Fixed Point Controllability

Given a fixed point xE jRn of the syste m

x (t + 1) = g(x (t) , 8 m ) ) , t E No (2.6)

i.e., a solution x of equ ation (2.4) and an initi al st ate Xo E jRn find som e
N E No and a control function u E U with

u(t) = 8 m for all t 2: N (2.7)

such that the solution x : No ----> jRn of (2.1), (2.2) sat isfies the end condition

x (N ) = x (2.8)

(whi ch implies x (t ) = x for all t 2: N .) From (2.1) and (2.2) it follows that

x (N ) = g(g( . . . (g(xo, u(O)), u(l)) , .. .), u(N - 1))


'--v--'
N-times
(2.9)
= C N (xo, u(O), .. . , u( N - 1)).
Let N E 1"1, be given. If u(O), . . . , u( N - 1) E [2 are solutions of the system

c N (xo, u(O) , . . . , u( N - 1)) = x (2.10)

and one defines


u(t) = 8 m for all t 2: N ,
then one obtains a cont rol fun ction u : No ----> jRm which solves the problem of
fixed point cont rollability.
From the definition (2.9) it follows that

c N (xo, u(O) , . .. , u(N - 1)) = C1(C N-1( xo, u(O), . . . , u(N - 2)) , u(N - 1)).

Convers ely now let us assume that we are given a sequence (C N) NE N of vector
fun ctions C N : jRn X jRm .N ----> jRn with t his prop erty.
Then we define, for every tE N,

x (t ) = Ct( xo, u(O) , .. . , u(t - 1))


for Xo E jRn and u( s) E jRm for s = 0, . .. , t - 1
2.1 The Autonomous Case 49

and conclude
x (t ) = CI(Ct-I( XO ' U(O) , . . . ,U(t - 2)), u(t - 1))
= CI(x(t - 1), u(t - 1))
for all t EN ,

if we define CO(xo) = Xo. Now let S(X) ~ jRn be the set of all vectors Xo E jRn
such that there exists a time N E N and a solution (u(O) T, . . . , u( N - 1 )T) T E
[I N of the system (2.10) . Obviously, x E S(X) (choose N = 1 and u(O) = en).
A first simple sufficient condit ion for the solvability of the problem of fixed
point cont rollability is then given by

Proposition 2.1. Let x E S(x) be an int erior point of S(X) and let x be an
attractor of the uncontroll ed syst em (2.6), i.e., there exists an open set U ~ jRn
with X E U such that lim x (t ) = x uihere x: No -+ jRn is a solution of (2.6)
t --oo
with (2.2) for any Xo E U. Then it follows that S(X) ;2 U which impli es that
fOT every choice of Xo E U the problem of fixed point controllability has a
solution.

Proof. Since x E S(x) is an int erior point of S(X), there is an op en neighbor-


hood W( X) ~ jRn of X with W( X) ~ S(X).
Now let Xo E U be chosen arbit ra rily. Then there is some N I E N with
x (N I ) E W( X) where x : No -+ jRn is a solution of (2.6) , (2.2). This implies
the existe nce of som e N 2 EN and a solut ion (u(O) T , . . . , u(N2 _ 1)T)T E [lN2
with

If we define

u*(t) = {em
u(t - NI)
for t = 0,
for t = N I,
, N I - 1,
, N] + N 2 - 1,

then it follows that

c N (xo , u*(O ), .. . , u*(N - 1)) = x


where N = N] + N 2 , i.e., Xo E S(X) whi ch complete s the proof.
D

The essential assumpt ion in Proposition 2.1 is that t he fixed point x of the
uncontrolled syst em (2.6) is an int erior point of the cont rollable set S(x) .
50 2 Controlled Systems

In order t o find sufficient condit ions for that we ass ume that [J is op en
and 9 E C1(JR n x JRm ) which impli es G N E c 1(JRn x JRm.N) for every N E N
and

G{/ (x , u(O) , ... , u(N - 1)) =


9x(G N- 1( x , u(O) , .. . , u(N - 2)), u( N - 1)) x

and

G;:(k)(x ,u(O), . .. , u (N - 1)) =


9x(G N- 1( x , u(O), , u (N - 2)), u(N - 1)) x
9x(G N- 2( x, u (0), ,u(N - 3)) ,u(N - 2)) x

9x(G k+1( x , u(O) , , u(k)) , u (k + 1)) x


9u(k)(G k (X, u(O) , , u (k - 1)), u(k))

for k = 0, . . . , N - 1.
Let us assume that 9x(X, 8 m) is non- singular . Then it follows, for all N E N,
that G{/ (x, 8 m " , . , 8 m ) is also non- singular, since

G{/ (x, 8 m , . .. , 8 m) = 9x(X, 8 m) N.


Since
x= G N (x,8 m ,. .. ,8 m ) ,

there exist s, by the implicit function theorem , an open set V <;;; [I N with
8 ;;' E V and a function h : V ----+ JRn with h E C 1(V) such that

h(8;;') = x and G N (h(u(O), . . . , u( N - 1)) , u(O) , . .. , u(N - 1)) = x


for all (u(O) , . .. , u(N - 1)) E V.
Mor eover,

hU(k) (8:/.) = - G{/ (x, 8 m , .. . , 8 m )-lGu(k)(X, 8 m , .. . , 8 m)


= - 9x(.'r , 8 m )- N9x(X, 8 m) N- k9u(k) (x, 8 m)
= -9x(X, 8 m) - k9u(k)(X, 8 m )

for k = 0, . . . , N - 1.
2.1 The Autonomous Case 51

Result. If 9x(X, eM) is non-singul ar , then , for every N E N, there is an


op en set VN ~ [IN with e;/. E VN and a function h N : VN ---7 lR n with
h N E C1(VN ) such that

hN(u(O), . . . , u( N - 1)) E S( x)

for all (u(O) , . .. , u( N - 1)) E VN.


Next we assume that , for some N E N,

rank(hu (o ) (e;/.) I ... I hu(N-l)(e;/.)) = n.

Then there are n columns in the n x m N-matrix

which are linearly ind ependent .


Now let E be the n-dimension al subspace of lR m . N consisti ng of all vectors
whos e components vanish that do not cor respond to t he above linearly ind e-
pendent columns.
If we put U = E n VN, then U is an op en subset of E and the restriction
of h to U is a C 1-funct ion on U whose J acobi matrix at e;/. consist s of the
above linearly ind ep endent columns of (hu(o )(e;/.) I ... I hu(N-1)(e;/.)) and
is therefore invertible. By the inverse function theor em there exist op en sets
U ~ En VN and if ~ lR n with e;/. E U and x E if such that h is homeo-
morphic on U and h(U) = V.
This implies that x is an interior point of S(X) .
Let us demonstrate this result by the predator prey model t hat was investi-
gated with respec t to asympt otic stability in Section 1.1.6. We conside r this
model as a cont rolled system of t he form

Xl (t + 1) = Xl (t ) + aXl(t) - eXl( t )Z - bXl(t) XZ(t) - Xl (t )Ul(t ),


xz (t + 1) = x z(t ) - cxz (t ) + dXl (t )x z(t) - xz (t )uz( t ) , t E No.

The mapping 9 : lRz x lRz ---7 lRz in (2.1) is therefore given by

We have seen in Section 1.1.6 that

is the only fixed point of t he un controlled system (with Ul 0) in


o 0
TrD Z
1&+ X
TrDz , 1'f a
1&+ >d
c e
'
52 2 Controlled Systems

One calculate s
1 - e . _!!....)
d
gx(x ,82 ) = ( ~(a_ c:e) 1
which implies that gx(x , ( 2) is non-singul ar , if and only if

Fu rther we obtain

whi ch implies

Hence x is an int erior point of 5(x) , if (*) is sat isfied.


This example is a spec ial case of the following sit ua t ion:
Let
m
g(x ,u) = f( x) + F (x)u = f(T) + L li(X)Ui ,
i =l
n
X E lR. , U1, " " Um E lR. ,
wh ere f , Ii E C (lR. ) , i = 1, .. . , m.
1 n

Let x E lR. n be a fixed po int of f, i.e.,

x = f( x) = g(x ,8 m ) .
Then

and hence
((h u (o)(8 ;;;) I ... I hu (N- 1)(8 ;;; ))
= -(F(x) I f x(X)- l F(x ) I . . . I f x(x )- N+lF(x)) .
In the example we have m = n = 2 and t he 2 x 2-ma t rix F( x) is non-singular .
Next we come back to the solu tion of (2.10) whi ch we replace by an optimiza-
tion problem . For this purpose we define a cost functional ep : lR. m . N ---> lR. by
putting

ep(u(O), . .. ,u(N - 1)) = I CN(xo, u(O) , . . . ,u(N - 1)) - xll~

for (u(O) , . . . ,u(N - 1)) E lR. m . N (II . 11 2 = Euclidean norm in lR. n )


and try to find (u(O) , .. . , u(N - 1)) E [IN such that ep(u(O), . .. ,u(N -1)):::;
ep(u(0), .. . , u(N- 1)) for all ep(u(0), . . . ,u(N - 1)) E [I N.
If ep(u(O), . . . , u(N - 1)) = 0, then ep(u(O), . . . ,u(N - 1)) E [I N solves the
equa t ion (2.10) . Otherwis e no such solut ion exist s. We again assume that
9 E C 1 (lR. n x lR. m ) . Let [l ~ lR. n be op en .
2.1 The Autonomous Case 53

Then a necessary condition for (u(O) , . . . , u(N - 1)) E nN to minimize <p


on n N is given by
N
<Pu(k)(U(O) , . . . , u (N -1)) = 2 C"(k)(XO ,U(O) , . .. , u(N - 1))T x

(C N (xo , u(O) , . . . , u( N - 1)) - x ) = 8 m (OC)


for all k = 0, . .. , N - l.
For the det ermination of (u(O) , ... , u(N -1)) E n N with (OC) one can apply
Marquardt's algorithm: Let (u(O) , . . . , u (N - 1)) E n N be chosen. If (OC) is
satisfied, then (u(O) , . .. , u(N - 1)) is t aken as a solut ion of the optimization
problem . Otherwise, for every k E {O , .. . , N - I } , a vector h;..(k) E jRm is
det ermined as solution of the linear system
N
(2 CU(k) (xo , u(O), . . . , u(N - 1)) T CUN(k)(xo , u(O), . . . , u(N - 1)) + ,\ 1m h;.. (k )

= 2 C~ k) (xo , u(O), . .. , u(N - l)f (C N (xo, u(O), . . . , u(N - 1)) - x)


where ,\ > 0 and 1m is the m x m-unit matrix.
Then one can show (see, for inst anc e [11]) t ha t for sufficiently lar ge ,\ > 0 it
follows that

(u(O) + h>.(O) , . . . , u (N - 1) + h>.(N - 1)) E n


and

<p(u(O) + h>.(O) , . .. ,u(N - 1) + h>.(N - 1)) < <p(u(O), . .. , u(N - 1)) .

The algorithm is then cont inued with (u(O) + h>.(O) , . . . , u(N -1) + h;..(N -1))
instead of (u(O) , . . . , u(N - 1)) .
Now let us consider a special case which is motivated by a situation which
occurs in the mod elling of conflicts . We begin with an uncontrolled system of
the form

x 1(t + 1) = gl( :r 1(t ), x 2(t )) ,


x 2(t + 1) = g2(X1(t) , x 2(t )) , t E No ,

where gi : jRnl X jRn 2 --+ jRn ; , i = 1,2, are given continuous mappings and
X i : No --+ jRn i , i = 1,2, ar e considered as state functions.
For t = 0 we assume initial condit ions

(2.11)

where x6 E jRnl and x6E jRn 2 are given vectors with


54 2 Controlled Systems

for some x; : :
8 n z which is also given. We further assume that the above
system admits fixed points (xl T , X2 T ) T E jRn, x jRn z with

which are then solutions of the system

x l = gl(x l ,x 2) , x 2 = g2(x l , x 2) .

Now we cons ider the following

Problem: Find vector functions x l : No ----> jRn , and .1:2 : No ----> jRn z with

which satisfy the above system equa t ions and initial condit ions and

wher e N E No is a suitably chose n int eger. In general this problem will not
have a solution. Therefore we repl ace the un controlled syste m by the following
controlled syste m:

x l (t + 1) = gl( XI(t) ,x 2(t) + u(t)) ,


(2.12)
x 2(t + 1) = g2(XI(t) , x 2(t) + u(t)) , t E No ,
where u : No ----> jRn z is a cont rol function. Then we consider the problem of
finding a control function u : No ----> jRn z such that the solutions Xl : No ----> jRn,
and x 2 : No ----> jRn z of (2.12) and (2.11) sat isfy the conditions

8 nz ::;: x
2(t ) + u(t ) ::;: x; for all t E No

and

where N E No is a suit ably chosen int eger.


Let us assume t hat we ca n find a vector function v : No ----> jRn with

8 nz ::;: v(t ) ::;: x; for t = 0, . . . , N - 1 and v(t ) = x 2 for t > N

such that the solution Xl : No ----> jRn, of

x l (t + 1) = gl(XI(t) ,v(t)) , t E No ,
x l (O) = x5

sa t isfies
XI (t ) =X I for all t ::::N
where N E N is a suitably chosen int eger.
2.1 The Autonomous Case 55

Then we put

(0) = x6 ,
X
2

X + 1) = 92(X1(t) , v(t)) for all t


2(t E No
and define
U(t) = v(t ) - x 2 (t ) for t E No.
With these definitions we obtain a solut ion of the above cont rol problem. Thus
in order to find such a solution we have to find a vector function v : No ~ IR n2
with
8 n 2 ::; v(t) ::; X *2 for t = 0, . . . , N - 1 ,
v(t) = x2 for t ~ N
such that the solution Xl : No ~ IRn1 of
x 1 (t + 1) = 91(X
1
(t ), v(t )) , t E No ,
x (0) = x6
1

satisfies
x 1 (t ) = Xl for all t ~ N
where N E N is a suitably chosen int eger. Let us demonstrate all this by an
emission redu ction mod el (1.22) to which we add the condit ions

0 ::; Mi(t) ::; M ] for all t E No and i = 1, . . . , r

and the initial condit ions

where E Oi E IR and M Oi E IR with 0 ::; M Oi ::; Mt for i = 1, ... , r are given.


The corresponding controlled syst em (2.12) reads in this case
r
Ei(t + 1) = Ei(t) + L emij (Mj (t ) + Uj(t)) ,
j=l

for i = 1, . . . ,r and t E No.


The cont rol functions Ui : No ~ IR , i = 1, . . . , r , must satisfy t he condit ions

0 ::; Mi(t) + Ui(t) ::; M ] for i = 1, . . . , rand t E No.


Fixed points of the system (1.22) are of the form (ET , 8 '[)T with E E IRr
arbitrary. We have to find a vector function v : No ~ IR r with

8r ::; v(t ) ::; M * for t = 0, .. . , N - 1,

v(t ) = 8 r for t ~ N
56 2 Controlled Systems

such that the solution E : No - 4 JRr of

E(t + 1) = E(t) + Cv(t) , t E No , (C = (emijkj=l ,...,r )


E(O) = Eo ,

satisfies
E(t) = for all t 2: N
wher e N E N is a suitably chosen integer .
First of all we observe that for every N E N
N- 1
E(N) = Eo + C( ~ v(t))
t=O

Let us assume that C is invertible and C- 1 is positive. Further we assume


that 2: Eo.
Then E(N) = , if and only if
N-1
~ v (t ) = C- 1 ( - Eo) 2: Gr .
t=O

If we define
v(t ) = G r for all t 2: N ,
then
E(t ) = for all t 2: N .
Let us put
N- l
VN = ~ v(t ) = C- 1 ( - Eo).
t=O

If we define
1
v(t) = N VN for t = 0, . . . , N - 1

then
N- l
~ v(t ) = C - 1 ( - Eo)
t= O

and
Gr :::; v(t ) :::; NI* for t = 0, ... , N - 1
for sufficiently large N , if

Mt >
for i = 1, . .. , r .
2.1 The Autonomous Case 57

We finish with a numerical example: r = 3 , Eo = (0,0, O)T, E


(10,10, 10)T, M * = (1,1, If , and

1 -0 .8 0)
C = 0 1 - 0.8 .
( -0.1 -0.5 1

Then we have to solve the linear syst em

VN l -0.8 VN2 = 10 ,
VN2 - 0.8 VN 3 = 10,
-0.1 VN 1 -0 .5 VN2 + VN 3 = 10 ,
The solution reads
VN 1 = 38.059701 ,
VN2 = 35.074627 ,
VN 3 = 31.343284 .
We choose N = 39. Then we have to put

1 (0.9758898)
v(t ) = N VN = 0.8993494 for t = 0, ... , 38.
0.803674

2.1.2 Null-Controllability of Linear Systems

Instead of (2.1) we consider a linear syst em of the form

x (t + 1) = Ax(t) + Bu(t) , t E No , (2.13)

where A is a real n x n - matrix and B a real n x m - matrix and where


u : No -+ IRm is a given cont rol function. The corres ponding uncontrolled
system reads
x (t + 1) = Ax(t) , t E No , (2.14)
and admits x = en as a fixed point.
The problem of fixed point cont rollability is then equivalent to the

Problem of Null-Controllability

Given Xo E IRn find some N E No and a cont rol function u E U (2.5) with
(2.7) such that the solution x : No -+ IRn of (2.13), (2.2) sat isfies the end
condit ion
x(N ) = en (2.15)
(which implies x (t ) = e n for all t ;::: N ).
58 2 Controlled Systems

From (2.13) and (2.2) it follows that


N
x(N) = A NXo + LAN-tBu(t -1) (2.16)
t=l
so that (2.15) turns out to be equivalent to
N
L A N- t Bu(t - 1) = _A NXo . (2.17)
t=l
Now let A be non-singular. Then the set S(en) of all vectors Xo E IR n such
that there exists a time N E 1"1 and a solut ion (u(O)T , ... ,u(N _1)T)T E [IN
of the system (2.13) is given by

s(e n ) = U E(N)
NEN

where, for every N E 1"1,


N
E(N) = {x = L A N - t Bu(t - 1) 1 u E U (2.5)} .
t=l
Next we assume that [l <:: IR m is convex , has em as int erior point and satisfies
u E [l =? - u E [l . (2.18)

Then, for every N E 1"1, the set E(N) is convex and E(N) = -E(N) .
This implies because of

E(N) <:: E(N + 1) for all N E 1"1

that S(e n) is also convex and S(e n) = -S(e n) .


Further we assume Kalman 's condition, i.e. there exist s some No E 1"1 such
that
rank(B I AB 1 . . . 1 A No- 1 B) = n. (2.19)
Then we can prove

Theorem 2.1. If A is non-singular, [l <:: IR m is convex, has em as in te ri or


point and satisfi es (2.18) and if Kalman con dition (2.19) is satisfied for some
No E 1"1, th en en is an interior point of s(en).

Proof Let us assume that en is not an interior point of S(e n) . Then S(e n)
must be contained in a hyp erplane t hrough en, i.e. there must exist some
y E IR n , y =I e n, with

yT X = a for all x E S(e n ) .


2.1 The Autonomous Case 59

This implies
N
yT(2:.A N- tBu(t - 1)) = 0
t=l
for all (u(Of , . .. , u (N -Iff ElR. m .N and all N E1"1,
hence
yT A N - t B = e;, and all N E N.
In particular for N = No this impli es y = e n du e to Kalman condition (2.19)
whi ch contradi cts y =I- en ' Therefore t he assumpt ion that e n is not an inte-
rior point of S(en) is false.

In addit ion t o the assumpt ion of Theorem 2.1 we ass ume that all the eigen-
valu es of A are less t ha n 1 in absolute value. Then according to the Corollary
following Th eorem 1.6 e n is a globa l attractor of the un controlled system
(2.14) , i.e. lim x(t ) = e n where z : No ~ lR.n is a solut ion (2.14) with (2.2) for
t-i-co
any Xo E lR. n . By Proposition 2. 1 t herefore the problem of null-controllability
has a solut ion for every choice of Xo E lR. n . If t he set n of cont rol vector valu es
has the form
a = {u E lR. m I Il ull < .,} (2.20)
for some y > 0 where 11 11 is any norm in lR.m , then this result can be strength-
ened to

Theorem 2.2. Let the Kalman condition (2.19) be satisfied for some No E N.
Furth er let n be of the form (2.20) .
Finally let all the eigenvalues of AT be less than or equal to one in absolut e
value and the corresponding eigenvectors be linearly independ ent . Th en th e
problem of null- controllability has a solutio n for every choice of Xo E lR. n ! if A
is non- singular.

Proof. We have to show t ha t for every choice of Xo E lR.n t here is some N E No


and a cont rol fun ction u E U (2.5) such that (2.17) is sat isfied . Since A is non-
singular , (2.17) is equivalent t o
N
LA- tBu(t -1) = - Xo.
t=l
For every N E 1"1 we define the convex set
N
R(N) = {x = LA- tBu(t - 1) Iu E U}
t=l
60 2 Controlled Systems

and put
R oo = U R(N) .
NEl\!

Becaus e of
R(N) ~ R(N + 1) for all N E No
the set R oo is also convex. We have to show that R oo = lR n . Let us assume
that R oo =1= lR n .
Then there exists som e i E lR n with .1: tf. R oo which can be sep ar ated from
R oo by a hyp erplan e, i.e., there exists a number a E lR and a vector y E lR n ,
y =1= 8 n such that
yT X :::; a :::; yT i for all x E R oo.

Since 8 n E R oo , it follows that a ::::: O. Fur ther it follows from the implication
u E [l =;. -u E [l that
N
I LyT A-tBu(t -1) 1:::; a for all N E N and all u E U .
t= l

This implies
N
L II(yT A- t Bf lld < a for all N E N
t= l

where II . IId is the norm in lR m which is dual t o II . II . This in turn implies


lim yT A -t B = 8 ;;'. (2.21 )
t ~ oo

From Kalman's condition (2.19) it follows that there exist n linearly ind ep en-
dent vectors in lR n of the form

where ti E {O, . .. , No - I} and ji E {I , . .. , m} and bj ; denotes t he i. - th


column vector of B . From (2.21) it follows that

lim yT A -tci = 0 for i = 1, . .. , n o


t~ oo

This implies
lim yT A- t =
t ---? CX)
8;
or , equivalentl y,
lim (AT )-t y = 8 n . (2.22)
t~ oo
2.1 The Autonomous Case 61

Now let AI, . . . , An E e be the eigenvalues of AT and Yl , . .. , Yn E e n


corresponding linearly ind ep endent eigenvect ors. Then there is a un iqu e rep-
resentation n
Y= L O:j Yj wh ere not all O:j E c are zero
j=1

and

hen ce
(AT)-tY=tO: j(:)t yj for all t e- N,
j= 1 J

From (2.22) we therefore infer that

IAj l > 1 for all j E {l , . .. , n } with O:j :f a.


This is a cont ra dict ion to

IAj l:S: 1 for all l , .. . ,n.

Hence t he assumpt ion R oo :f R" is false.

o
Remark: If we define

y = (Yl I Y2 I..1Yn) and /\ =


AI
...
0) ,
(
o An

then it follows that

whi ch implies

and in turn

from which

follows.
62 2 Cont rolled Systems

T his implies that A has t he same eigenvalues as AT (which holds for arbi-
T
t ra ry matrices) and the eigenvecto rs of A are t he column vect ors of (y )- 1.
Therefor e AT in Th eorem 2.2 could be repl aced by A. For t he followin g let
us ass ume t hat [l = lR m .
For every N E N let us define

Y (N ) = (B I A B I . . . 1 A N- 1B ).

Since U (2.5) cons ists of all funct ions u : No --+ lR m , it follows, for every
N E N, t hat
N
E (N ) = {x = L A N- t B u (t - 1) I u : No --+ lR m } .
t= 1

Further we can prove

Proposition 2.2. Th e foll owing stateme nts are equiv alen t:

(i) rank Y (N ) = rank Y (N + 1) ;

(ii) E (N ) = E (N + 1) ;
(iii) (A N B )lRm ~ E (N ) ;

(iv) r an k Y (N ) = ra nk Y (N + j) for all j:::: 1 .

Proof

(i) =} (ii) : This is a conse quence of t he fact t hat E (N ) ~ E (N + 1).


(ii) =} (iii) : This follows from Y (N + 1) = (Y(N) I A NB ).
(iii) =} (i) : Y (N + 1) = (Y(N) I A NB ) shows t hat (iii) =} (ii) and
obviously we have (ii) =} (i) .
(i) =} (iv) : Since (i) im plies (iii), it follows t hat

(A N + 1 B)lR m ~ A E (N ) ~ E(N + 1)
whi ch implies E (N + 1) = E(N + 2) and hence

rank (Y (N + 1) = ran k( Y(N + 2).


(iv) =} (i) : is obvious. This com plete s t he pro of.

D
2.1 The Autonomous Cas e 63

Now let r be the smallest int eger such that I , A , . . . , Ar-l are linearly
ind ependent in jRnn and hence there are numbers CXr-l ,CXr-2 , ... , CXo E jR
such that

Defining
PO(A) = >t + CXr _ l A r - 1 + .. . + CXo ,

we have po(A) = O. This monic polynomial (leading coefficient 1) is the monic


polynomial of least degree for which Po(A) = 0 and is called the minimal
polynomial of A . The polynomial

P(A) = det(AI - A) with degre e n

is called characte ristic polynomial of A , and t he Ham ilton-Cayley Theorem


st ates that p(A) = 0 which impli es r :::; n .
This leads to

Proposition 2.3. Let s be the degree of the minimal polynomial of A(s :::; n) .
Th en there is an integer k :::; s such that

rankY(l) < rankY(2) < ... < rankY(k) = rankY(k + j) for all j E N

Proof. Proposit ion 2.2 impli es the existe nce of such an integer k, since
rankY(N) :::; n for all N E N. We have to show that k :::; s . Let
'I/; (A) = AS + CXs_IA s- 1 + . .. + CXo be the minimal polynomial of A . Then
'I/; (A )B = 0 and A SBjRm <;;; E(s) which impli es (by Proposition 2.2) that
rankY(s) = rankY(s + j) for all j E N, hence k :::; s .

As a consequence of Propos ition 2.3 we obtain

Proposition 2.4. If Kalman 's condition (2.19) is satisfi ed for some No EN,
then n ecessarily No :::; nand

r an k (B I AB I.. . IA n - 1 B) = n ,

hence E(n) = jRn. Conv ersely, if E(n ) = jRn , then Kalman 's con dition (2.19)
2: nand
is satisfied for all N

E(N) = jRn for all N 2: n.


64 2 Controlled Systems

Proposition 2.3 also implies t hat, if r an kY (n ) < n , t hen r an kY (N ) <


n for all N 2: n and

E(N) = E(n ) -=I- jRn for all N 2: n .

If we define, for ever N E N, th e n x n-matrix


N- l

W(N) = Y(N)Y(N f = L A j BBT(Aj f ,


j= O

then it follows that

W(N) jRn <;;:; E( N) for every N E N.

Now let us assume that rank Y( N ) = n which is equivalent to E(N) = jRn .


Then W (N) is non-singul ar and

W( N) jRn = jRn = E(N) .

Let y* E jRn be the unique solution of

W(N)y * = _ANxo.
Further let u E jRm.N be any solution of
-
Y(N)u = - A N Xo (see (2.17)).

If we put

then
-
Y(N )u* = - A N .TO
and
I l u* l l ~ = U*Tu* = y*TY(N)1l* = y*TW(N)y*
= _y*TA N Xo = y*Y( N)1l = U*TU ::; Ilu*11211u112
which impli es Ilu*112::; Ilu112.
2.1 The Autonomous Case 65

2.1.3 A Method for Solving the Problem of Null-Controllability

Let us equip IR m with the Euclidean norm II . 112 and consider the following

Problem (P)
For a given N E N find u : {O, . . . , N - I} ----> IR m such that
N
LAN -tBu(t - 1) = - AN xo (2.23)
t=l

and
tp N(U) = max
t = l, ... ,N
Ilu(t - 1)112
is as sm all as possible.

If Kalman's condit ion (2.19) is satisfied for some No E N and if N 2:: No ,


then Problem (P) has a solution U N E IR m N . If tp N ( U N ) :::; "Y (see (2.20)),
then we obtain solution UN : No ----> IR m of the problem of null- controllability
if we define
UN(t) = em for all t 2:: N .
If tpN(UN) > "Y, then N must be increased .
If the matrix A is non-singular , then (2.17) is equivalent to

N
LA-tBu(t - 1) = -Xo
t=l

which impli es

Under the assumpt ions of Theorem 2.2 there exist s, for every E > 0, some
N(E) E N such that

which implies

So we can be sure, for every choice of "Y > 0, to find a solut ion UN (-y) E jRm.N('Y)
of Problem (P) with tp N (-y) (U N (-y)) :::; "Y which leads to a solution U N(-y) : No ---->
jRm of the problem of null- controllability, if we define

In order to solve Problem (P) we replace it by


66 2 Cont rolled Syst ems

Problem (D)
Minimize
N
X(y) = L IIBT (AN-k f yl12, Y E JRn ,
k=l
subject to
(2.24)

Let U: {O , . .. , N -I} ----+ JRm be a solut ion of (2.17) and let y E JRn satisfy
(2.24) . Then it follows that
N
L yTAN-kBu(k -1) = yT c = 1
k=l
which implies
1
k=~,~x.N l l u ( k - 1)112 2: X(y) '
Now let fj E JRn be a solut ion of Problem (D) . Then there is a multiplier
A E JR su ch that
\7X(fj) = Ac (2.25)
N k
\7x(fj) = L II B T(A J - k)Tfll A - BBT(A N-k f fj
kE I(iJ) Y 2

with

This implies

If we define
if k E I(fj) ,
else , k = 1, ... , N ,
(2.26)
then it follows that
N
L A N- kBUN(k - 1) = C
k=l
and
1
Il uN (k - 1)112 = - ( , ) for all k EI(fj)
XY
wh ich implies
1
max Il uN(k - 1)112= - (, ).
k =l ,... ,N X Y
Hen ce UN : {O , . . . , N - I} ----+ JRm solves Problem (P) . This result is summa-
rized as
2.1 T he Au tonomous Case 67

Theorem 2.3. Jfy E JRn solves Problem (D), then UN : {O, . .. , N -I} -+ JRm
defin ed by (2.26) solves Problem (P) .

In order to solve Problem (D) we apply the well known gradient projection
m ethod which is based on the following ite ra t ion ste p: Let y* E JRn with
cT y* = 1 be givenn. (At the beginn ing we t ake y* = Il c~ l ~ c). Then we calculate

h= ( 11~I cT\7X(Y*))c - \7X(Y *)

and see that


ch = 0
and
\7X(y*fh = II~I ~ (cT\7X(y *)) 2 - II\7x(y*)II ~ : : : 0 .
If '\lX(y*)Th = 0, then there exist s som e A 2: 0 such that (2.25) holds true
which is equivalent to y* being optimal. If \7x(y*)T h < 0, then h is a feasible
direction of descent. If we det ermine ~ > 0 such that

x(y* + ~h) = min X(Y* + Ah) , (2.27)


A>O

then
x(y* + ~h) < X(y *)
and cT (y* + ~h) = 1. The next step is then performed with y* + ~h instead
of y*. A necessary and sufficient condit ion for ~ > 0 to satisfy (2.27) is

d T
dA x(y* + Ah) = \7X(Y* + Ah) h = 0
A A

which is equivalent to

and in turn to the fixed point equa t ion

with
'\' 1 hTAN-kBBT(AN- k)T *
L.. IIB T (A N k) T( Y* +Ah) 1I2 y
.I'(A) = kEI (Y*+Ahl
'f/ '\' 1 I T AN-k B BT (AN - k )T h
L.. IIBT(AN '')7' (Y *+AhlI12 1
kEI( Y *+Ah)
68 2 Cont rolled Syst ems

In order t o solve this equation we apply t he ite ra t ion pro cedure

st arting with AO= O.


Let us return to the problem of fixed point controllability in Seetion 2.1.1. and
let us ass ume that 9 : IR n x IR m ---> IR n is cont inuously Frechet differentiabl e.
Then it follows that

c " (xo , u(O), ... , u( N - 1)) - x=

e N(x o, u (O) , . . . , u(N - 1)) - eN(x, 8 m , .. . , 8 m )


N
~ J~ N (x ,8 m , .. . , 8 m )(x o - x ) + L J~~- l ) (x , 8 m " " ,8m )u(k - 1)
k=l
N
= J;(x, 8 m) N (xo - x ) +L J;(x ,8 m )N-k Jg(x , 8 m )u(k - 1)
k=l
where

and
J;(x ,8m ) = is, (x , 8 m )) i ==
k
i , -n.
1 , . .. , H'.
'

Therefore we repl ace equati on (2.10) by


N
L J;(x, 8 m) N- kJg(x ,8 m )u (k - 1) = - J; (x, 8 m) N (xo - .f;) (2.28)
k=l
and solve the problem of findin g u : {O , .. . , N - I} ---> IR m which solves (2.40)
and minimizes
'PN (U ) = max lI u (k - 1)112.
k=l ,...,N
Such a u: {O, . .. , N - I} ---> IR m is then taken as an approx imate solut ion of
(2.10) .

The above pr oblem has the form of Problem (P) at the beginning of t his Sec-
tion and ca n be solved by the method describ ed above.
Finally we consider a special case in which t he probl em of fixed point cont rol-
lability is reduced to a sequence of such problems which can be solved mor e
easily.
2.1 The Autonomous Case 69

For this purpose we consider t he syste m


m
x (t + 1) = 90(X(t)) + L9j(X(t))Uj(t) , t E No , (2.29)
j=1
where 9j : jRn -'> jRn , j = 1, . . . ,m, are cont inuous vector functions.
For every control function u : No -'> jRm there is exactl y one function x : No -'>
jRm which satisfies (2.29) and the initial condit ion

x (O) = Xo , Xo E jRn given. (2.30)

We denote it by x = x (u ). We assume that the uncontrolled system

x (t + 1) = 90(X(t)) , tE N ,
has a fixed point xE jRn which t hen solves the syste m

x= 9o(X) .
We again assume t hat the set U of admissible cont rol functions is given by
(2.5) where n <;: jRm is a subset with 8 m E n.
Let us define
90(X) =9o(X) -x , x E jRn .

Then (2.29) ca n be rewritten in the form


m

x (t + 1) = x(t) + 9o(X(t)) + L9j(X(t))U j(t) , t E No . (2.31)


j=1
In order to find some N E No and a cont rol function u E U with (2.7) such
that the solution x : No -'> jRn of (2.30) , sat isfies the end condit ion (2.8) we
apply an iter ative method. Starting with some No E No an d som e u O E U (for
instance uO(t) = 8 m for all t E No) we construct a sequence (Nk)kEN in No
and a sequ ence (Uk)kE N in U as follows:
If Nk-l E No and u k- 1 E U are det ermined , we calculat e X(u k- 1) : No -'> jRn
as the solution of (2.2) and (2.31) for u = uk-I . T hen we det ermine N k E No
and uk E U such that

uk(t) = 8 m for all t ::::: N k


and the solution x (u k ) : No -'> jRn of (2.2) and

x (uk )(t + 1) = x (uk)(t ) + 90(:r(u k- 1)(t ))


(2 .31 )k
m
+ L9j( X(uk- 1)(t))uj+l(t) , t E No ,
j=1
70 2 Controlled Systems

satisfies t he end condition

(2.8h
If we put
N"
Xk = Xo + 2:: .CJO(X(Uk - 1
)(t - 1))
t =l

and
Bk(t - 1) = (gl( X(uk - 1 )(t - 1) I ... I gm(x(Uk - 1 )(t - 1)) ,
t hen t he end condit ion (2.8h is equivalent to
N
2:: Bk(t - 1)uk- 1(t - 1) = i: - x k .
t=l

2.1.4 Stabilizatio n o f C ont r olled Systems

Let 9 : jRn X jRm -> jRn be a cont inuous mapping and let H be a family of
continuous mappings h : jRn -> jRm . If we define, for every h E H , t he mapping
j" : jRn -> jRn by
j,,(x) = g(x , h(x)) , .1: E jRn ,
t hen j" is cont inuous and (R", j,,) is a time - discrete autonomous dynamical
system . Let i: E jRn be a fixed point of

f( x) = g(x ,em) , x E jRn.

Further we assume t hat

h(i:) = e m for all ne H


which implies that i: is a fixed point of all fh, n H. After these preparations
we ca n formu late t he

Problem of Stabili zatio n

F ind h E H such that {i:} is asympt ot ically st ab le wit h respect to j".


We assume that 9 : jRn X jRm -> jRn and every mapping h E H are cont inuously
Frechet different iab le. T hen every mapping j" : jRn -> jRn , h E H , is also
continuously Freche t differentiable and , for every x E jRn , its Jacobi matrix
is given by

Jf,,(x) = J;(x ,h(x)) + J;(x ,h(x))JT;(x) where


J;(x , h(x)) = (gix; (x, h(x))) ; = 1, , " and
"J ::::: 1 , ... , n

J;(x ,h(x)) = (giUk(X,h(x))) ; = 1" " , JT; (X) = (hixj( X)) i =1" " ,m
k ::::: 1 , . . , In i = 1, . . . , n
2.1 The Autonomous Case 71

From the Corollary of Th eorem 1.5 we then obtain the

Theorem 2.4.
(a) Let the spectral radius Q( J j,. (x)) < 1. Th en x is asymptotically stable with
respect to ih.
(b) Let (h, (x)) be invertible and let all the eigenv alues of Q(Jh. (x)) be larger
than 1 in absolut e value. Th en x is un stable with respect to ih.

Special cases:

(a) Let
g(x ,u) = A x + Bu , .T E jRn , U E jRm ,

where A is a real n x n-matrix and B a real n x m-matrix, respectively.


Further let H be the famil y of all linea r mapping h : jRn ----> jRm which ar e
given by
h( x) = Cx , x E jRn ,

where C is an arbitrary real m x n-matrix, resp ectively.


If we choosex = 8 n , then

and
h(8 n ) =8m for all h H.
Finally we have Jh(X) = C,
J;(x, h( x)) = A , and J~(x , h( x)) = B

for all x E jRn and h E H which impli es

Jj,. (x) = A +B C for all x E jRn and b H.


Thus x= 8n is asympt ot ically st able with respect to ih, if
Q(A + B C) < 1,
and unstabl e with resp ect to fh , if all the eigenvalues of A + Be are larger
than one in absolute valu e.
72 2 Controlled Systems

(b ) Let
g(x , u) = F(x) + B(x)u , x E X , u E IR m ,

where F : X -+ X , X <;;; IRn op en , is cont inuously Frechet differentiable


and B(x) = (b1(x) , . . . , bm(x)) , x E X , where bj : X -+ IRn,j = 1, .. . , n ,
are also cont inuously Frechet differentiabl e. Let again H be the family of
all linear mappings h : IRn -+ IR m which are given by

hex) = Cx , x E IR n .

Finally, we assume that 8 n E X and F(8 n ) = 8 n.


If we choose x = 8 n , t hen

and
h(8 n ) = 8 m for all h E H.
Further we obtain
Jh(x) = C ,
m

J;(x ,h(x)) = JF( X) + LJbj( x )hj(x ) and J;(x ,h(x)) = B(x) ,


j=l

for all x E X and h E H which impli es


m

Jih (x) = JF( X) + L Jbj (x)hj( x ) + B(x) C for all x E X and ti H ,


j=l

hence
Jih(8 n ) = JF(8 n ) + B(8 n) C for all u e H.
Thus x= 8n is asympt ot ically stable wit h resp ect to fh, if

and unstabl e with resp ect to fh, if all t he eigenvalues of J F (8 n ) + B( 8 n)C


are lar ger than one in absolute value.
2.1 Th e Autonomous Case 73

2.1.5 Applications

a) An Emission Reduction Model:


We pick up the emission reduction model that was treated as un controlled sys-
t em in S ection 1.1.6 and as cont rolled system in Section 2.1.1. Here we concen-
trate on the cont rolled syste m whi ch we linearize at a fixed point (E T , fJ'n T ,
E E jRr , of the un controlled system wh ich leads to a linear cont rol system of
the form
x (t + 1) = Ax (t ) + Bu(t) , t E No ,
with
A = (~~~) , B = (~) ,
where 11' and Or is the r x r -unit and zero-mat rix, resp ectively, and

em ll . . . em Ir)
C= .
.
.
"

(
em ri . . . em rr

This implies
k
A k B -- (CUr + DDk+1
+ . .. + D ) ) I:
lOr a
11 k E 1M
1'10 .

We conside r the problem of null- controllability as bein g dis cuss ed in Section


2.1.2. Let us assume that C and D are non- sin gular. Then it follows t hat the
matrices A and
c CUr + D))
(D D 2
are non-singular which implies that t he Kalm an condit ion (2.19) is satisfied for
No = 2. Let d l , . .. . d; be the diagon al elements of D . Thus the non-singularity
of D is equivalent to

d; -=I- 0 for all i = 1, . .. , r.


If all di -=I- 1 for i = 1, .. . , r , then it follows (see Section 1.1.6) t hat the
eigenvect ors corresponding to the eigenvalues

f..l i = 1 for i = 1, .. . , r and f..l i+r = d, for i = 1, . . . , r


of A are linearly ind ep endent whi ch also hold s true for A T (which has the
same eigenvalues )(see Section 2.1.2) .
74 2 Controlled Systems

If
Idil : : : 1 for all i = 1, . . . , r
and
n= {u E jRr I lI ull : : : 'Y }
for some 'Y > a where 1111 is any norm in jRr , t hen by Theorem 2.2 the problem
of null- controllability has a solut ion for every choice of Xo = (X6T, x f ) E jR2r .
This problem ca n be solved with the aid of Problem (P) in Section 2.1.3 which
reads as follows in this case: For a given N E N find u ; {a, ... , N - I} ----+ jRr
such that

and
'PN(U ) = max
k=l ,... ,N
Ilu(k - 1)112
is minimized (where II . 112 deno tes t he Euclidean norm in jRr ).

Finally we illust rate the method by two num erical examples.

Let m = 3. In both cases with choose

(2.32)

At first we choose

0.8 0.5 -0.5)


C= 0.2 0.2 0.3
( 0.4 0.3 0.2

and

0.1 a 0)
D = a - 0.2 a
( a a 0.1
2.1 T he Autonomous Cas e 75

and obtain

0 .9

0.8

0.7

0.6

0.5

0 .4

0.3

0 .2

0.1

oo~-:--:,::-----:-::---=---=--=-----:c;--:=---:,::,--------:
10 15 20 25 30 35 40 45 50
Ordinate, I{! N (UN) = -X (Y~ N )
Abscissa , N

Next we choose


0.8 0.5 0.5) 0.2 0)
C = 0.2 0.2 0.3

( 0.4 0.1 0.2 D = ( 0.8
0.6
and get

2 .5 ,...-~----,--~-~-~-~-,...---,--,.--,

' .5

0.5

ooL~_~_--,:--.L:-~,..-====::;:::===':=~ Ord inat e, I{! N(UN) = - ( ~ )


'0 15 20 25 30 35 40 45 50 X YN
Abscissa , N
76 2 Controlled Systems

b ) A C ontrolled Pred ator - P r ey - M o d el :


We pick up the pr edator - pr ey - model that has been discussed in Section
1.1.6 a) whos e controlled version we assume to be of the form

Xl (t + 1) = Xl (t ) + a Xl(t) - bXl(t) X2(t) - Xj(t )Ul (t ) ,


X2(t + 1) = X2(t ) - CX2(t ) + d Xl(t) X2(t) - X2 (t )U2 (t ) , t E No ,


where a > 0, < C < 1, b > 0, d > 0, Xj(t ) and X2(t ) denote the density of
the prey and predator population at time t , resp ectively, and Ul, U2 : No -> lR
ar e control functions .
If we define

- (x (t) x (t)) = ( aXl(t) - bXj(t) X2(t)) , 91(Xl(t) , X2(t)) = ( - X1(t ))


90 1 , 2 - CX1(t ) + dXl(t) X3(t) 0

and
92(Xl(t) , X2(t)) = (-X~(t)) ,
then the system can be rewritten in the form
2

x (t + 1) = x (t ) + 90(X(t)) + L 9j (X(t ))Uj (t ) , t E No , (2.33)


j=1
wit h x(t) = (Xl(t) ,X2(t)f . This is exac t ly the system (2.31) for m = 2.
In addition we assume an initial condition

.T(O) = (~01)
X02
= Xo. (2.34)

T he unco ntrolled system

.T(t + 1) = x (t ) + 90(X(t)) , t E No ,

has x = (;I, %) T as fixed point.


We assume the set U of admissible control functions to be given by

U = {u: No -> lR 2 I llu(t)1


12::; I for all t E No}

where 'Y > is a given const ant and II . 1 12 is the Euclidean norm . For every
u E U we denote the unique solut ion x : No -> lR 2 of (2.33) and (2.34) by
x (u) . Our aim now consists of findin g some N E No and a cont rol fun ctio n
u E U with
u(t) = 8 2 for all t 2: N (2.35)
such that the solution x : No -> lR 2 of (2.33) , (2.34) sa t isfies the end condition

x (N ) = x. (2.36)
2.1 Th e Autonomous Case 77

In order to find a solution of this problem we ap ply the it er ation method


des cribed in Section 2.1.3. In the k - th st ep of this procedure we have, for a
given U k- 1 E U, to find some Nk E No and a uk E U with

su ch that

wh er e
Nk
x k = Xo + L 9o(x(u k- 1 Ht - 1)) ,
t= l

and
Iluk(t - 1)112 ~ I for t = 1, . . . , N i :
For this we ca n apply the method develop ed in Secti on 2.1.3.

b) Control of a Planar Pendulum with Moving Suspension Point

We consider a non-linear plan ar pendulum of length Z (> 0) who se mov em ent


is cont ro lled by moving its susp ension point with accelerat ion u = u(t) along a
horizontal straight line. If we denote the devi ation angle from the orthogonal
po sition of the pendulum by ip = ip (t) , then the movem ent of the pendulum
is governed by the differential equat ion

.. 9 u(t)
ip (t ) = -T sin ip(t ) - -Z- COSip(t) , t E lR, (2.37)

wh er e 9 denotes the gr avity constant.


For t = 0 init ial conditions are given by

ip(O) = ipo and ~ ( O ) = ~o .

Now we discretize t he differ ential equa t ion by introducing a time step length
h > 0 and replacing the second de rivat ive i/J(t) by io(
ip(t + 2h) - 2ip( t + h) +
ip(t )) thus obtaining the differ ence equa t ion

gh 2 u(t )h 2
ip(t + 2h) = 2ip(t + h) - ip(t) - -Z- sinip( t) - -Z- COSip (t ) , t E R

If we define
Yl(t) = ip(t) and Y2(t) = ip(t + h) ,
78 2 Controlled Systems

then we obtain
Yl(t + h) = YZ(t) ,
gh Z . u(t)h Z
yz(t + h) = 2yz(t) - Yl(t) - -1- sin Yl(t) - - 1 - cos Yl(t) , t E R

Finally we define functions Xl : No -+ ]R and Xz : No -+ ]R by putting

Xl (n ) = Yl( n h) and x z(n) = yz(n h) , n E No ,

and obt ain the syst em

Xl (t + 1) = xz (t ) , (2.1')
gh Z . u(t )h Z
xz (t + 1) = 2xz(t) - Xl (t ) - -1- smx l (t ) - - 1 - COS X l (t ) ,

t E No which is of the form (2.31) for m = 1 with

In addition we assume initial conditions

(2.2')

The un controlled system

Xl(t + 1) = xz( t ) ,
ghZ
xz( t + 1) = 2x z(t ) - Xl (t ) - -1- sinxl (t )

has x = (0,0) as fixed point.


We assume the set U of ad missible cont rol functions to be given by

U = {u : No -+ ]R Il u (t )1 ::; ')' for all t E No}

where ')' > 0 is a given const ant.


For every u E U we denot e the unique solution X : No -+ ]Rz of (2.1') and
(2.2') by x (u ).
Our aim consists of findin g some N E No and a control function u E U with

u(t) = 0 for all t 2: N (2.7')

such that the solut ion x: No -+ ]Rz of (2.1') , (2.2') sat isfies the end condition

.'r(N) = 8 z .
2.1 T he Autonomous Case 79

This condition is equivalent to

The k - th step of the iteration method described in Section 2.1.3 for the
solution of this problem then reads as follows: Let U k - 1 E U be given . Then
we determine N k E No and uk E U such that

uk(t) = 0 for all t 2: Nk

and the solution x (u k ) : No ----> jRn of (2.2') and

Xl(U1(t + 1)) = ,Xl (Uk)(t ) + X2(u k- 1)(t ) - Xl(Uk-1)(t) ,


X2(U1(t + 1)) = X2(Uk )(t ) + X2 (u k- 1)(t ) - Xl (uk-1 )(t )
gh 2 h2
- -l-sin xl(uk-l)(t) - TCOSXl(uk-l)(t)uk(t) ,

t E No, satisfies the end condit ions

These ar e equivalent to
Nk- 1
-t L COSX l (uk- l )(t -1)u k(t - 1) =
t=l
Nk - 1
-t./JO-X2 (u k- 1)(Nk - 2 ) +<PO- r L sinx l (uk- l )(t - 1)
t =l

-t COS Xl(u k-l)(Nk -


1)u k(Nk - 1) =
- X2(U k- 1)(Nk r
-1) + X2(u k- 1)(Nk - 2) - sinxl (uk- l )(Nk -1)
Let us consider the special case Nk = 2. Then it follows that

-T1 cos <Pou k(O) = <Po - 9 . <po ,


T sm
1 . k . gh 2 g . h2 . k 1
- T cos <Pou ' (1) = - <Po + <Po + -1- sin <Po - T sin <Po - T cos <POU - (0) .

Let us assume that


cos <Po -I- 0 and cos tPo -I- 0 .
Then it follows for all k 2: 1 that
k -l<po
u (0) = - - + gtan <po
cos <Po
and

uk(1) = l(<Po - tPo) _ gh2 sin <Po + 9 t an tPo + h 2uk- 1(0)


cos <Po cos <Po

2.2 The Non-Autonomous Case

2.2.1 The Problem of Fixed Point Controllability

We consider a syste m of difference equations of t he form

x (t + 1) = gt(x(t), u(t )) , t E No , (2.38)

where (gt) t ENo is a sequence of cont inuous vector functi ons gt : JRn X JRm ---.
JRn , u : No ---. JRm is a given vector function which is called a cont rol fun ction.
The vecto r functi on x : No ---. JRn which is called a state functi on is uniquely
defined by (2.38), if we requ ire an initi al condit ion

x( O) = Xo (2.39)

for some given vector Xo E JRtl .


If we fix the cont rol fun cti on u : No ---. JRm and define

It( x) = gt(x , u(t )) , x E JRn , t E No , (2.40)

t hen, for every t E No , It : JRn ---. JRn is a cont inuous mapping and
(JRn , (It )t ENo ) is a non-autonom ous ti me-d iscrete dy nami cal system which is
cont rolled by t he fun ction u : No ---. JRm . If

u(t ) = 8 m for all t E No ,

then the system (2.40) is called uncontrolled. Let us ass ume that t he un-
cont rolled syste m (2.40) admits a fixed poin t x E JRn which then solves the
equa t ions
(2.41)
Now let Jl ~ JRm be a subset with 8 m E Jl . Then we define the set of
admissible cont rol fun ct ions by

(2.42)

Aft er t hese pr epar ations we can formulat e t he

Problem of Local Fixed Point Controllability

Given a fixed point x E JRn of the syste m


x (t + 1) = gt(x( t ), 8 m ) , t E No , (2.43)

i.e. a soluti on x of t he equat ions (2.41) and an initi al st at e Xo E JRn,



find some N E No and a control function u E U with

u(t) = 8 m for all t ~ N (2.44)

such that the solution x : No ---> jRn of (2. 38), (2.39) sat isfies the end condition

x(N) = x̄ .  (2.45)

(which implies x(t) = x̄ for all t ≥ N). Let us assume that, for every t ∈ ℕ₀,
the Jacobi matrices

A_t = (∂g_t/∂x)(x̄, Θ_m) and B_t = (∂g_t/∂u)(x̄, Θ_m)

exist. Then it follows that, for every t ∈ ℕ₀,

x(t+1) − x̄ = g_t(x(t), u(t)) − g_t(x̄, Θ_m)

≈ A_t (x(t) − x̄) + B_t u(t) .

Therefore we replace the system (2 .38) by

h(t + 1) = Ath(t) + Btu(t) , t E No , (2.46)

and the initial condition (2 .39) by

h(O) = Xo - X . (2 .47)

The end condit ion (2.45) is replaced by

h(N) = 8 n = zero vector of jRn . (2.48)

Now we consider the

Problem of Local Controllability

Find N E No and u E U with

u(t) = 8 m for all t ~ N


such that the corresponding solut ion h : No ---> jRn of (2.46) and (2 .47) satisfies
t he end condition (2.48) which impli es

h(t) = 8 n for all t ~ N .


From (2.46) and (2.47) we conclude that, for every N ∈ ℕ,

h(N) = A_{N−1} ⋯ A_0 (x_0 − x̄) + ∑_{k=1}^{N} A_{N−1} ⋯ A_k B_{k−1} u(k−1) ,

if, for k = N, we put A_{N−1} ⋯ A_k = I = n × n-unit matrix.



Therefore the end condition (2.48) is equivalent to

∑_{k=1}^{N} A_{N−1} A_{N−2} ⋯ A_k B_{k−1} u(k−1) = −A_{N−1} A_{N−2} ⋯ A_0 (x_0 − x̄) .  (2.49)

Let us assume that, for some N_0 ∈ ℕ,

rank( B_{N_0−1} | A_{N_0−1} B_{N_0−2} | ⋯ | A_{N_0−1} ⋯ A_1 B_0 ) = n .  (2.50)

Then, for N = N_0, the system (2.49) has a solution (u(0)^T, u(1)^T, ...,
u(N_0−1)^T)^T ∈ ℝ^{m·N_0}, where x_0 ∈ ℝ^n can be chosen arbitrarily. If we define

u(t) = Θ_m for all t ≥ N_0 ,  (2.51)

then we obtain a control function u : ℕ₀ → ℝ^m such that the corresponding
solution h : ℕ₀ → ℝ^n of (2.46) and (2.47) satisfies the end condition (2.48)
for N = N_0.
If A_k is non-singular for all k ∈ ℕ₀, then the assumption (2.50) implies that
(2.50) holds true for all N ≥ N_0 instead of N_0.
For instance, if we replace N_0 by N_0 + 1, then the matrix

( B_{N_0} | A_{N_0} B_{N_0−1} | A_{N_0} A_{N_0−1} B_{N_0−2} | ⋯ | A_{N_0} A_{N_0−1} ⋯ A_1 B_0 )

contains the submatrix

A_{N_0} ( B_{N_0−1} | A_{N_0−1} B_{N_0−2} | ⋯ | A_{N_0−1} ⋯ A_1 B_0 ) ,

which has rank n, since A_{N_0} is non-singular and the matrix in brackets has
rank n by (2.50).
So the system (2.49) has a solution (u(0)^T, u(1)^T, ..., u(N_0−1)^T)^T for all
N ≥ N_0.
Next we assume that Ω ⊆ ℝ^m is convex, has Θ_m as interior point and satisfies
u ∈ Ω ⇒ −u ∈ Ω. Let us define, for every N ∈ ℕ, the set

R(N) = { x = ∑_{k=1}^{N} A_{N−1} A_{N−2} ⋯ A_k B_{k−1} u(k−1) | u ∈ U } .

Then we can prove (see Theorem 2.1)


Theorem 2.5. If, for some N_0 ∈ ℕ, condition (2.50) is satisfied and if A_k
is non-singular for all k ∈ ℕ₀, then Θ_n is an interior point of R(N) for all
N ≥ N_0.

Proof. Let us assume that Θ_n (∈ R(N)) is not an interior point of R(N) for
some N ≥ N_0. Thus R(N) must be contained in a hyperplane through Θ_n,
i.e., there must exist some y ∈ ℝ^n, y ≠ Θ_n, with

y^T x = 0 for all x ∈ R(N) .

This implies

y^T A_{N−1} A_{N−2} ⋯ A_k B_{k−1} u(k−1) = 0 for all u ∈ U and k = 1, ..., N ,

hence

y^T A_{N−1} A_{N−2} ⋯ A_k B_{k−1} = Θ_m^T for all k = 1, ..., N .

Since (2.50) also holds true for all N ≥ N_0 instead of N_0, it follows that
y = Θ_n which contradicts y ≠ Θ_n. Hence the assumption is false and the
proof is complete.
□
As a cons equence of Theorem 2.5 we obtain
Theorem 2.6. In addition to the assumptions of Theorem 2.5 let

sup_{k∈ℕ₀} ‖A_k‖ < 1 , where ‖·‖ denotes the spectral norm.  (2.52)

Then there is some N ∈ ℕ and some u ∈ U such that (2.49) holds true.

Proof. Assumption (2.52) implies

lim_{N→∞} A_{N−1} A_{N−2} ⋯ A_0 (x_0 − x̄) = Θ_n .

Hence Theorem 2.5 implies that there is some N ∈ ℕ with N ≥ N_0 such that

−A_{N−1} A_{N−2} ⋯ A_0 (x_0 − x̄) ∈ R(N) ,

which completes the proof.

□

2.2.2 The General Problem of Controllability

We consider the same situation as at the beginning of S ection 2.2. 1. However ,


we do not assume the existe nce of a fixed point x E JRn of the uncontrolled
system (2.43), Le., a solut ion of (2.41) . Inst ead we assume a vector X l E JRn
to be given and consider the gener al

Problem of Controllability

Find som e N E No and a cont rol function u E U (2.42) such that the solution
X :No ----+ JRn of (2.38) , (2.39) sat isfies the end conditi on

x (N ) = Xl . (2.53)

From (2.38) and (2.39) we infer

x(N) = gN-l(gN- 2( (go(xo ,u(O)),u(l)) , . . .), u(N - 1))


(2.54)
= C N (xo, u(O), ,u(N - 1)).

Henc e the end condit ion (2.53) is equivalent t o

c " (xo, u(O) , .. . , u(N - 1)) = Xl (2.55)

So we have to find vectors u(O), . . . ,u( N - 1) E [2 such that (2.55) is sa t isfied.


For every N E N we define the cont rollable set

SN(xI) = { X E jRn I there exists some u E U


such that C N (x , u(O) , . .. ,u( N - 1)) = Xl }

and put
S(xI) = U SN(x d
NE N

Now let Xo E S(Xl)' Then we ask t he questi on und er whi ch conditions is Xo


an int erior point of S(Xl) ?
In order to find an answer to this question we assume that

Ω is open and g_N ∈ C¹(ℝ^n × ℝ^m) for all N ∈ ℕ₀ .

Then it follows that C^N ∈ C¹(ℝ^n × ℝ^{m·N}) for every N ∈ ℕ and

cr; (x ,u(O) , . . . ,u(N - 1)) =


(gN_d x(C N-1(x , u(O) , ,u(N - 2)) , u(N - l)) x
(gN_2)x(C N- 2(x ,u(O) , ,u(N - 3)), u(N - 2)) x

and
C~k) (x , u(O), .. . , u(N - 1)) =
(gN_l) x(C N-l(x ,u(O) , ,u( N - 2)), u( N - l)) x
(gN- 2)x(C N - 2(X, u(O) , ,u(N - 3)) , u( N - 2)) x

(gk+d x(Ck+l( x ,u(O) , , u(k)) ,u(k + l)) x


(9k)u(k)(Ck(X,u(O) , ,u(k - 1)), u(k))

for k = 0, .. . , N - 1, x E jRn and u E U.



Let us assume that Xo E S No(X1) for some No E N, i.e.

e No(xo, uo(O) , . .. , uo(N o - 1)) = Xl

for some Uo E U. Further let (gN)x(x, uo) be non- singular for all N E No, for
all x E IRn and all u E D .
Then e;:'O(xo,uo(O), . . . , uo(N o - 1)) is also non-singular and , by the implicit
fun ction theorem, there exists an open set V <;;; DNo with (uo(O) , . .. ,uo(No -
1)) E V and a function h : V ~ IR n with h E C 1 (V) such that

h(uo(O)) , .. . , uo(No - 1)) = Xo

and
eN(h(u(O), , u(No - 1)) , u(O) , . . . , u(No - 1)) = Xl
for all (u(O) , , u(N o -1)) E V
whi ch means

h(u(O), .. . , u(No - 1)) E S No(xd


for all (u(O) , . . . , u(No - 1)) E V .

Mor eover ,

hU(k )(uo(O) , . .. , uo(N o - 1)) = - e;:ro(.TO, uo(O) , ... , uo(N o - 1))-1 X


e~(I,) (xo, uo(O) , . .. , uo(N o - 1)) .

Next we assume that

rank(hu(o) (uo(O) , . . . , uo(No- 1)) I ... I hu (No-1)(UO(0) , . .. , uo(No- 1))) = n .

Then it follows with the aid of the invers e function theorem that there exist s
an n-dimensional relatively op en set V <;;; V with (uo(O) , . .. , uo(No -1)) E V
such that the restriction of h t o V is a hom eomorphism which impli es that
h(V) <;;; SN,Jxd is open .
Therefore Xo E h(V) is an int erior point of S( X1)'

Now we consider the special case where there exists some .1: E IR n with

gN(X, em) = x for all N E N

which implies
c'' (x ,e;:') = x for all N EN .
Then, for every N E N, it follows that x E S N(X) , hence x E S( x) .
Let us assume that

Then

is also non-singular for all N EN.


By the implicit function theorem we therefore conclude , for every N E N, that
there exists an op en set VN ~ [] N wit h e ;:' E VN and a function hN : VN ---->
JRn with h N E C 1(VN ) such that

hN(e;;,) = x and e N(hN(u(O) , , u(N - 1)), u(O) , . . . , u(N - 1)) = x


for all (u(O) , ,u(N - 1)) E VN
which means
hN(U(O) , .. . ,u(N - 1)) E SN(X)
for all (u(O) , . .. , u(N - 1)) E VN .
Mor eover ,

Next we assume that , for some No E N,

Then it follows with the aid of the invers e function t heorem that there ex-
ist s an n-dimensional relatively op en set VNo ~ VNo with e;;,o
E VNo such
that the restricti on of hNo to VNo is a hom eomorphism which implies that
hNo(VNo) ~ SNo(X) is open. Therefore x E hNo(VN,.) is an interior point of
S(X). This result is a generalizati on of Th eorem 2.5, if [] in addit ion is op en.

2.2.3 Stabilization of Controlled Systems

Let (gt) tEfi be a sequence of cont inuous mappings gt : JRn X JRm ----> JRn and let
1i be a family of cont inuous mappings h : JRn ----> JRm . If we define, for every
h H and t EN, the mapping It : JRn ----> JRn by

then we obtain a non- autonomous time-di scret e dyn amic al system (JRn , (fth )tEfi).
The dynami cs in this syst em is defined by t he sequence p h = (Pi' )tEfi of map-
pings p th : JRn ----> JRn given by

and
F/:(x) = x for all x E JRn .
We also obtain the dyn ami cal syste m (JRn, Uth) tEN) , if we repl ace the con-
t rol fun ction u : No --> JRm in t he system (2.38) by the feedb ack cont rols
h(x) : No --> JRm , X E JRn .
The problem of stabilizat ion of t he cont rolled system (2.38) by the feedb ack
controls h(x) ,x E JRn, then reads as follows: Given Xo E JRn such t hat the limit
set L ph( XO) defined by (1.6' ) (see Section 1.2.1) is non-empty and compact
for all h E 'H.
F ind a mapping h E 7t such t hat L ph (xo) is stable, an att ractor or asymp-
t oti cally stable with respect to Uth)tEN .
Let us consider the special case

g_t(x, u) = A_t(x) x + B_t(x) u for x ∈ ℝ^n , u ∈ ℝ^m ,  (2.56)

where (A_t(x))_{t∈ℕ} and (B_t(x))_{t∈ℕ} are sequences of real, continuous n × n- and
n × m-matrix functions on ℝ^n, respectively.
Let H be the family of all linear mappings h : ℝ^n → ℝ^m (which are automatically
continuous). Every h ∈ H is then representable in the form

h(x) = C^h x , x ∈ ℝ^n ,

where C^h is a real m × n-matrix. For every t ∈ ℕ and h ∈ H we therefore
obtain

f_t^h(x) = (A_t(x) + B_t(x) C^h) x , x ∈ ℝ^n .  (2.57)

Let us put

D_t^h(x) = A_t(x) + B_t(x) C^h , x ∈ ℝ^n .

If we choose x_0 = Θ_n = zero vector of ℝ^n, then we conclude

F_t^h(x_0) = x_0 for all t ∈ ℕ₀ , h ∈ H ,

and therefore L_{F^h}(x_0) = {x_0}.
The problem of stabilization of the controlled system (2.38) with g_t, t ∈ ℕ,
given by (2.56) in this situation consists of finding an m × n-matrix C^h such
that {x_0 = Θ_n} is stable, an attractor or asymptotically stable with respect
to (f_t^h)_{t∈ℕ} with f_t^h given by (2.57).
Now let us assume that

‖D_t^h(x)‖ ≤ 1 for all x ∈ ℝ^n and t ∈ ℕ  (2.58)

where ‖·‖ denotes the spectral norm.

Let U ⊆ ℝ^n be a relatively compact open set with x_0 = Θ_n ∈ U.
Then there is some r > 0 such that

B_r = {x ∈ ℝ^n | ‖x‖_2 < r} ⊆ U .

Hence B_r is open, x_0 ∈ B_r, and assumption (2.58) implies f_t^h(B_r) ⊆ B_r for all t ∈ ℕ.

If we define

V(x) = ‖x‖_2² = x^T x for x ∈ ℝ^n ,

then

V(x) ≥ 0 for all x ∈ ℝ^n and (V(x) = 0 ⇒ x = x_0 = Θ_n)

and

V(f_t^h(x)) − V(x) = x^T D_t^h(x)^T D_t^h(x) x − x^T x

= ‖D_t^h(x) x‖_2² − ‖x‖_2² ≤ (‖D_t^h(x)‖ − 1) ‖x‖_2² ≤ 0

for all x ∈ ℝ^n and t ∈ ℕ.
This shows that V is a Lyapunov function with respect to (f_t^h)_{t∈ℕ} on G = ℝ^n
which is positive definite with respect to {x_0 = Θ_n}. By Theorem 1.1' we
therefore conclude that {x_0 = Θ_n} is stable with respect to (f_t^h)_{t∈ℕ} with f_t^h
given by (2.57).
Next we assume that

sup I ID~'(x) 1 1 < 1 for all x E IR n . (2.59)


t EN

Then it follows from

V(F/'( x)) = x T D~(xf . . . D~(Fth_ l (x) f D~(F:'-l (x)) . .. D~(x) x


= II D~ ( Fth_ l (x)) . . . D~(x)x l l ~ :::; II D~ (F:'- l (x)) 11 2.. II D~ (x)112 1 Ix I 12

for all x E IRn and t EN that

lim V (FI'( x)) = 0 for all x E IR n .


t -e cc

This implies
lim FI'(x) = en for all x E IR n
t-> oo

and shows that {xo = en} is an attrac tor with respect to (fth) tE N with f th
given by (2.57).

Result. Und er t he assumption (2.59) t he set {xo = e n} is asympt otically


st abl e with resp ect to (fth)tEN wit h f th given by (2.57) .

2.2.4 The Problem of Reachability

We again consider the situation at the beginning of Section 2.2. 1 without


necessarily assuming the existence of a fixed point :i; E IR n of the un controlled
system (2.43) . Let n S;;; IR m be a non- empty subset . For a given Xo E IR n we
then define the set of st ates that are reachabl e from Xo in N E N steps by

R N(XO) = {x = C N(xo,u(O) , . . . , u(N -1)) I u(k) E n, k = 0, . . . , N -1}


(2.60)
where the map C N : IR m . N --+ IR n is defined by (2.54).
Further we define the set of st ates reachable from Xo in a suitable number of
steps by
R( xo ) = U R N(XO) . (2.61 )
NEf\!

T he question we ar e interested in now is: Under whi ch condit ions does R( xo)
have a non-empty interior?
A simple answer to t his qu estion gives
Theorem 2.7. Let Ω be open. If there is some N ∈ ℕ and there exist
u(0), ..., u(N−1) ∈ Ω such that

rank( C^N_{u(0)}(x_0, u(0), ..., u(N−1)) | ⋯ | C^N_{u(N−1)}(x_0, u(0), ..., u(N−1)) ) = n ,  (2.62)

then R_N(x_0) has a non-empty interior and therefore also R(x_0).

Proof. Condition (2.62) implies that the n × N·m-matrix

( C^N_{u(0)}(x_0, u(0), ..., u(N−1)) | ⋯ | C^N_{u(N−1)}(x_0, u(0), ..., u(N−1)) )

has n linearly independent column vectors. Let E be the n-dimensional subset
of Ω^N consisting of all vectors whose components which do not correspond
to these linearly independent column vectors are equal to the ones of
(u(0)^T, ..., u(N−1)^T)^T. If we restrict the mapping C^N to E, then the Jacobi
matrix of this restriction consists of these linearly independent column vectors
and is therefore non-singular.
By the inverse function theorem there therefore exists an open set (with respect
to E) U ⊆ Ω^N with (u(0)^T, ..., u(N−1)^T)^T ∈ U which is mapped
homeomorphically by C^N on an open set V ⊆ R_N(x_0) with C^N(x_0, u(0), ...,
u(N−1)) ∈ V. This completes the proof.
□

Next let us consider the linear case where

g_t(x, u) = A_t x + B_t u , x ∈ ℝ^n , u ∈ ℝ^m ,

with n × n- and n × m-matrices A_t and B_t, respectively, for every t ∈ ℕ₀.
Then, for every N ∈ ℕ and every x_0 ∈ ℝ^n, we obtain

C^N(x_0, u(0), ..., u(N−1)) = A_{N−1} ⋯ A_0 x_0 + ∑_{k=1}^{N} A_{N−1} ⋯ A_k B_{k−1} u(k−1) ,

where for k = N we put A_{N−1} ⋯ A_k = I = n × n-unit matrix. Further we
have, for every N ∈ ℕ and every x_0 ∈ ℝ^n,

R_N(x_0) = { x = A_{N−1} ⋯ A_0 x_0 + ∑_{k=1}^{N} A_{N−1} ⋯ A_k B_{k−1} u(k−1) |
             u(k) ∈ Ω , k = 0, ..., N−1 } .

Because of

C^N_{u(k−1)}(x_0, u(0), ..., u(N−1)) = A_{N−1} ⋯ A_k B_{k−1} for k = 1, ..., N

it follows that the condition (2.62) for N = N_0 coincides with the condition
(2.50).
If this is satisfied, then by Theorem 2.7 the set R(x_0) (2.61) of states reachable
from x_0 has a non-empty interior.
If Ω = ℝ^m, it follows in addition that R(x_0) = ℝ^n for all x_0 ∈ ℝ^n.

Proof. Let x, x_0 ∈ ℝ^n be given arbitrarily. Then condition (2.50) implies the
existence of u(k) ∈ ℝ^m for k = 0, ..., N_0 − 1 such that

x − A_{N_0−1} ⋯ A_0 x_0 = ∑_{k=1}^{N_0} A_{N_0−1} ⋯ A_k B_{k−1} u(k−1)

holds true which shows that x ∈ R_{N_0}(x_0) ⊆ R(x_0).

For every k = 1, . .. , N let us define an n x m -matrix c- by

and

The condition (2.50) implies the exist ence of n column vectors

for l = 1, .. . , n which are linearly ind ependent .

If we define the n x n-matrix C and a vect or u E lR n by


k
l~ k l
Ckl . . . C l~k"
" )
C= , resp ectively,
( k1 k
cn) kl
cm" ;

and put
uj(k - 1) = 0 for k i= kl , j i= jk , , l = 1, . . . , n ,
then we obtain

e N (xo , u(O), . .. , u(N - 1)) = A N-I .. . Aoxo + Cu


whi ch implies

u ~ C-, (pN (xo, u(O),:~ . , urN - 1)( -A N_,. .A'X') .


Now let E = {u = (u(O) , .. . , u(N - 1)) E lR ln . N I uj(k - 1) = 0 for k i=
k l and j i=jkl , l = 1, .. . ,n }.
Then eN (xo, .) is a line ar isomorphism from E on lR n .
Therefore
e N( xo ,u) = x for some u E E
implies

If all Ak' kENo, are invertible, it follows t ha t


N(
Xo = A 0- I ... A-I
N -1 X - A-I
0 ... NI-1 C ' e x o, .)-1()
A- x .

In the nonlinear case we hav e the following situation:


If the condition (2.62) is satisfied , there exists an n -dimensional subset E
of [IN and a set U ~ E which is op en with resp ect to E and contains
(u(O)T , ... , u(N - l)Tf and which is mapped homeomorphically on an open
V ~ RN(xo) by the restriction of e N (xo , ') to E . If
x = e N (x o, u(O), .. . , u(N - 1)) ,

then

If in addition G~ (xo, u(O) , . . . , u(N - 1)) is non-singular , then by the im-


plicit function theorem there exist s an op en set W <:;; [IN whi ch contains
(u(Of , .. . , u(N - I)T)T and a function h : W ---+ lR n with h E C 1 (W ) such
that
h(u(O), . . . , u(N - 1)) = Xo
and

G N (h(ii(O) , . .. , u(N - 1)), U(O) , . .. , u( N - 1)) = x


for all (u(O f , .. . , u( N - If f E W .

This implies

Since
hU(k)(U(O) , . . . , u (N - 1)) =
-Gx(xo,u(O) , . . . , u(N _ 1))- 1 x G:r"k)(xO 'u(O) , .. . , u(N - 1))
for k = 0, . . . , N - 1,
it follows that

rank(hu(o) (u(O) , ... , u(N - 1)) I .. . I hu(N-l )(U(O) , .. . , u(N - 1))) = n

which impli es that h maps U n W hom eomorphically onto an op en set


V <:;; R N(xo) which contains x.,: Therefore h 0 G N (xo, .)- 1 maps V n V hom e-
omorphically on an op en set V which cont ains Xo and is contained in

S N(X) = {x E lR n I t here exists some u E U (2.5)


with G (x, U(O) , ... , u(N - 1)) = x } .
N

Finally let us assume that (as in Theorem 2.7) [l is op en and (for the given
Xo E lRn ) there exists some N E N such that t he condition (2.62) is satisfied
for all (u(O) T , ... , u(N _ 1)T)T E [IN. Then it follows from the proof of The-
orem 2. 7 that every x E lRN (xo) is an int erior point of R N(XO), i.e., R N(xo)
is op en .
This impli es in the linear case with [l being an open subset oflRm that R N(xo)
is op en for every Xo E lR n , if the cond it ion (2.50) is satisfi ed .
3

Controllability and Optimization

3.1 The Control Problem


We consider n players Pi , i = 1, .. . , n , t hat are involved in a so called dy-
namical ga me . We assume t hat every player is assigned a st ate vecto r functi on
Xi : No -. jRn ; and has at his disposal a cont rol vect or funct ion Ui : No -. jRm ;
which are dynamically coupled by a syst em of difference equat ions

Xi (t + 1) = gi(X(t ), u(t )) , t E No ,
(3.1)
i= l , .. . ,n,

where x(t) = (Xl( t)T, . . . , xn(t )T f , u(t) = (Ul(t )T, . . . , un(t)T)T , and gi E
n n
C(jRN x jRM,jRn;), i = 1, . . . ,n, with N = 2:ni and M = 2:mi.
i= l i= l
If one put s 9 = (g'[, . .. , g~ ) T , t hen (3.1) can be writ ten in t he form

X(t + 1) = g(x( t), u(t )), t E No , (3.2)

where 9 E C( jRN X jRM , jRN).


Let , for every i = 1, . .. ,n, U, ~ jRm ; be a cont rol set wit h 8 m ; E U, (8 m ;
zero vecto r in jRm; ) . T hen t he syste m

X(t + 1) = g(x(t) , 8 M) , t E No , (3.3)

is called un controlled.

Assumption: The uncontrolled system (3.3) possesses a fixed point x̄ ∈ ℝ^N,
i.e., a solution of the equation

g(x̄, Θ_M) = x̄ .  (3.4)

We assume t ha t t his fixed point is a desired state of the whole system for all
players.


Therefore they try to find control functions u.; : No ---+ lR m i


, i = 1, . . . ,n,
with
Ui(t) E Vi for all t E No
such that the syste m (3.2) is driven from a given initial stat e Xo E lR N into :1:
in finit ely man y time st eps .
This leads to the following

Problem of Controllability

Given an initial state Xo E lRN , find a time T E No and a cont rol fun ction
U : No ---+ lR
M with
n
u(t) E V = II Vi for all t E No (3.5)
i= 1

and
u(t) = eM for all t 2: T (3.6)
such that the solutio n x : No ---+ lR N of (3.2) which satisfies t he initial condit ion

x (O) = Xo (3.7)

satisfi es the end condit ion


x (T ) = :1: (3.8)
which impli es
x (t ) = :1: for all t 2: T .
From (3.2) and (3.7) it follows that

x(T) = g(g(⋯(g(x_0, u(0)), u(1)), ⋯), u(T−1))    (g applied T times)
     = G^T(x_0, u(0), ..., u(T−1)) .

Now let T ∈ ℕ be given. If the u(0), ..., u(T−1) ∈ U are solutions of the
system

G^T(x_0, u(0), ..., u(T−1)) = x̄  (3.9)

and if one defines

u(t) = Θ_M for all t ≥ T ,

then one obtains a control function u : ℕ₀ → ℝ^M with (3.5) and (3.6) which
solves the problem of controllability.

3 .2 A Game Theoretical Solution

3.2.1 The Cooperative Case

For every player P_i, i = 1, ..., n, we define a cost function

φ_i^T(u(0), ..., u(T−1)) = ‖G_i^T(x_0, u(0), ..., u(T−1)) − x̄_i‖_2²

which he wants to become zero for a suitable choice of u(k) ∈ U for k =
0, ..., T−1, where ‖·‖_2 denotes the Euclidean norm in ℝ^{n_i}, i = 1, ..., n.
With this we define a total cost function by virtue of

φ^T(u(0), ..., u(T−1)) = ∑_{i=1}^{n} φ_i^T(u(0), ..., u(T−1)) .

To solve system (3.9) cooperat ively now consists of finding


uT(O) , . . . , uT (T -1) E U such that

cpT (u T (O), , uT (T - l ))::; cpT (u (O), . .. , u (T - l ))


(3.10)
for all u(O) , , u(T -1) E U .

Every solution uT(O), . . . ,uT(T - 1) E U of the opt imizat ion problem (3.10)
is a so called Pareto optimum, i.e., t he following holds true:
For every u(O) , . . . , u(T - 1) E U with

cpf(u (O), .. . , u(T - 1)) ::; cpT (uT (O), . .. , uT(T - 1))
(3.11 )
for all i = 1, .. . , n

it follows necessarily that

cpT (u(O), . .. , u(T -1)) = cpT (uT (O), .. . , uT (T - 1))


(3.12)
for all i= 1, . .. , n
By cont raposition this means: For every u(O) , .. . , u(T - 1) E U with

cpr, (u(O) , ... , u(T - 1)) < cpr, (u T (0), .. . , u T (T - 1))


for some io E {I , . . . , n } ther e exists i j E {I , . . . , n } such that

cp~ (u(O), ... , u (T -1) ) > cp~ (uT (O) , . . . , uT(T - 1)) .

In words : There is no T -tupel (u(O) , . . . , u(T - 1)) E UT for which one player
can improve his cost function value in compa rison with that of the T -tupel
(u T (0) , ... , u T (T - 1)) E UT without anot her player having to det eriorate his
cost function value.

The impli cation (3.11) ::::} (3.12) can be easily seen as follows: (3.11) implies
<pT (u(O), . . . , u(T -1)) ::; <pT (uT(O) , .. . , uT(T - 1)) hence <pT (u(O), ... ,u(T-
1)) = <pT (uT (O), ... , uT(T - 1)) which is only possible, if (3.12) holds true.

If
<pT (UT(O), . .. ,uT(T - 1)) =0,
then uT(O) , . . . , uT(T - 1) E U is a solut ion of the syste m (3.9) .
Further it follows that

so that in a pro cedure for solving the minimization problem (3.10) for T + 1
instead of T the (T + l)-tupel (uT (0) , . .. ,uT (T - 1), eM) could be used as
st arting solution. The existence of a solut ion of t he problem (3.10) is ensure d,
if ever y Ui for i = 1, . .. , n is compact in jRm ; . The evalua ti on of the total
cost function <pT can be very complicated . We therefore present an it er ation
method for the solution of problem (3.10) for which t he function evalua t ions
are less complicated. Now, we replace the system (3.2) by

x(t+1) = x(t) + f(x(t), u(t)) , t ∈ ℕ₀

(for instance, by defining f(x, u) = g(x, u) − x). Then we choose, for some
given T ∈ ℕ,

x^0(t+1) = x^0(t) + f(x^0(t), u^0(t)) for t = 0, ..., T−1 ,

where

x^0(0) = x_0 .

Then we construct a sequence (u^k(0), ..., u^k(T−1))_{k∈ℕ₀} in U^T and a sequence
(x^k(1), ..., x^k(T))_{k∈ℕ₀} in ℝ^{N·T} as follows: If (u^k(0), ..., u^k(T−1)) ∈ U^T,
(x^k(1), ..., x^k(T)) ∈ ℝ^{N·T} and x^k(0) = x_0 are given, then we determine
(u^{k+1}(0), ..., u^{k+1}(T−1)) ∈ U^T such that for

x̃^{k+1}(t+1) = x^k(t) + f(x^k(t), u^{k+1}(t)) for t = 0, ..., T−1

with

x̃^{k+1}(0) = x_0

the function value

φ_k^T(u^{k+1}(0), ..., u^{k+1}(T−1)) = ∑_{i=1}^{n} ‖x̃_i^{k+1}(T) − x̄_i‖_2²

becomes minimal.

Here

x̃^{k+1}(T) = x_0 + f(x_0, u^{k+1}(0)) + ∑_{t=1}^{T−1} f(x^k(t), u^{k+1}(t))

and hence

φ_k^T(u^{k+1}(0), ..., u^{k+1}(T−1)) = ∑_{i=1}^{n} ‖ ∑_{t=0}^{T−1} f_i(x^k(t), u^{k+1}(t)) + x_{0i} − x̄_i ‖_2² .

If (u^{k+1}(0), ..., u^{k+1}(T−1)) ∈ U^T has been determined, then we define

x^{k+1}(0) = x_0 ,
x^{k+1}(t+1) = x^{k+1}(t) + f(x^{k+1}(t), u^{k+1}(t))
for t = 0, ..., T−1

and we proceed to the next step of the pro cedure. Concerning convergence of
this method we can prove
Theorem 3.1. If for every t E {O, .. . ,T - I } there is some

u(t) E U with u(t) = lim uk(t) ,


k-> oo

then (u(O) , . . . , u(T - 1)) E U T is a solution of the problem (3.10).

Proof. For t = 0 it follows from

that the limit


lim x k+l (l ) = x (l ) = Xo + f( xo,u(O))
k-> oo

exist s. We assume that , for some t E {O , . . . , T - I}, the limit

lim x k+l (t ) = x (t ) = x (t - 1) + f( x(t - l ), u(t -1))


k-> oo

exist s. Then it follows from

that the limit

lim x k+l (t + 1) = .T(t + 1) = :r;(t ) + f( x(t) ,u(t))


k-> oo

exist s.

By the principle of induction it t herefore follows that the limit

lim x k+1(t + 1) = x (t + 1)
k ...... oo

exist s and is given by

x(t + 1) = x (t ) + f( x(t) , u(t))


for every t E {O, . .. , T - I} .
This impli es because of
T-l
xk+1 (T ) = Xo + L f( xk(t), Uk+1(t))
t= O

that
T- l
lim Xk+ 1(T ) = Xo +
k ...... oo
L f( x(t) , u(t)) = x (T )
(= 0

and henc e

lim cpI (u k+1(O), . . . , u k+ 1 (T - 1)) = cpT (u(O), . . . , u(T- 1)).


k ...... oo

Now let (u(O) , . . . , ii(T - l))T be chosen arbitrarily. T hen it also follows that

lim cpI (u(O) , . . . , lL(T - 1)) = cpT (u(O) , . . . , u(T - 1)) .


k ...... oo

Further we have , for every kENo ,

cpI (Uk+ 1(O), . . . ,u k+1( T - 1)) :s; cpI(u(O), . . . , u(T - 1))

and hence
cpT (u(O), . .. ,u(T -1)) = klim cpI (u k+1 (O), . . . ,Uk+1(T- 1)) :S;
...... oo

lim cpI (u(O), . . . , u(T - 1)) = cpT (u(O), . . . , u(T - 1))


k ...... oo

This shows that (u(O) , . .. , u(T - 1)) E U T is a solut ion of the problem (3.10) .

We shall see later (see Section 3.2.3) that this procedure is not needed in the
linear case where the functions g_i : ℝ^N × ℝ^M → ℝ^{n_i} for i = 1, ..., n in (3.1)
are given by

g_i(x, u) = A_i x + B_i u , x ∈ ℝ^N , u ∈ ℝ^M ,

with n_i × N-matrices A_i and n_i × M-matrices B_i.

3.2.2 The Non-Cooperative C ase

For a given vector function u : ℕ₀ → ℝ^M we define, for every i = 1, ..., n and
T ∈ ℕ,

u_{T_i} = (u_i(0)^T, ..., u_i(T−1)^T)^T

and

Ḡ_i^T(u_{T_1}, ..., u_{T_n}) = G_i^T(x_0, u(0), ..., u(T−1))

as well as

φ̄_i^T(u_{T_1}, ..., u_{T_n}) = ‖Ḡ_i^T(u_{T_1}, ..., u_{T_n}) − x̄_i‖_2² .

The system (3.9) can then be rewritten in the form

Ḡ_i^T(u_{T_1}, ..., u_{T_n}) = x̄_i for i = 1, ..., n .  (3.9')

To solve system (3.9') non-cooperatively then means to find u*_{T_i} ∈ U_i^T for
i = 1, ..., n such that, for every i = 1, ..., n,

φ̄_i^T(u*_{T_1}, ..., u*_{T_n}) ≤ φ̄_i^T(u*_{T_1}, ..., u*_{T_{i−1}}, u_{T_i}, u*_{T_{i+1}}, ..., u*_{T_n})
for all u_{T_i} ∈ U_i^T = U_i × ⋯ × U_i (T times).  (3.13)

Every such n-tupel (u*_{T_1}, ..., u*_{T_n}) is called a Nash equilibrium. In words (3.13)
means that, if one player declines from his equilibrium control whereas all the
others stick to it, his cost function cannot decrease.
Concerning the existence of a Nash equilibrium we can prove the
Theorem 3.2.
Assumptions:

a) For every i = 1, ..., n the control set U_i is convex and compact.

b) For every i = 1, ..., n the vector function g_i : ℝ^N × ℝ^M → ℝ^{n_i} in (3.1) is
continuous.

c) For every n-tupel (u_{T_1}, ..., u_{T_n}) ∈ U^T = ∏_{i=1}^{n} U_i^T and every i = 1, ..., n
there is exactly one

S_i(u_{T_1}, ..., u_{T_n}) = (u_{T_1}, ..., u_{T_{i−1}}, (S_i u_T)_i, u_{T_{i+1}}, ..., u_{T_n}) ∈ U^T

with

φ̄_i^T(S_i(u_T)) ≤ φ̄_i^T(u_{T_1}, ..., u_{T_{i−1}}, u_{T_i}, u_{T_{i+1}}, ..., u_{T_n})

for all u_{T_i} ∈ U_i^T.

Assertion: There exists a Nash equilibrium.

Proof Fro m a), b) , c), it follows t ha t the mapping S = Sn 0 S n- 1 0 ... 0 Sl :


V T ----+ V T is cont inuous, since every mapping S, : V T ----+ V T is cont inuous
which can be seen as follows:
Let (( U~l ' . . . , u t ) ) kE N be a sequence in V T wit h u~ k
(uT k )
1 " " , u T 11. ----+
* , .. . , u T* ) E
(uT
1 'II.
or .
Then, for every k EN and every i E {I , .. . , n } we have
- ( k
tpi UT 1" ", u T i _ 1 ,
k (5iUT
k) k
i , UT i+l" " , UT "
k )

-T ( k k k k ) (3.14)
:S tpi UT 1"' " l1Ti _ 1 ' UT i , l1T i + 1 ' . .. , UT"
for all l1T; E Vi .
Further we have, for every i = 1, .. . , n ,
CPi (UT1" ' " l1T i _ 1 , ( S i uT )i, UTi +1, , UT J

:S cpr(U T 1 , .. . , 11T;_ 1' l1T; , UT i+1 ' . .. , UT ,,) for all UT i E V'[ .
Now let i E {I , . . . , n} be chosen arbit ra rily. Then there is a subsequ ence
(( U~l ' . .. , u~J )l EN and aUT; E V'[ with

(5 i l1Tk 1 ) i
1Hfl = l1T ;
-
l -HXJ

From (3.14) it follows t herefore that

CPi (UT1" ' " l1T;_l ' (5 i l1T )i , l1T i+l " ' " l1T J

:S cpr (U T 1 , . . . , UT;_ l ' l1T ; , l1T +, ' . . . , UT J


i for all UTi E V'[ .
whi ch implies (becau se of c)) t hat

UTi = (SiUT )i .

In t he sa me way one shows t hat, for every subsequence ((U~l " '" U~J)lEN
k; kl
there exist s a subse quence ( l1T ,''' , . . . , 11 T,;" ) m EN such t ha t

(5 i l1Tk zm.) i
1IIfl -- (Si UT*) i
m ---> oo

which impli es

This shows that S, : V T ----+ V T is cont inuous for every i E {I , . . . , n} . From


n
a) it follows that VT = IT v'[ is convex and compact .
i= l
By Brouwer 's fixed point theorem S = 5 n 0 5 n - 1 0 . . . 0 S l : VT ----+ V T has a
fixed point in V T and every such is a Nash equilibrium.
This completes the proof of Th eorem 3.2.
D

In order to calculate a Nash equilibrium one can apply the following iteration
method. Starting with k = 0 and i = 1 as well as some u_T^0 ∈ U^T, a
u*_{T_i} ∈ U_i^T is determined with

φ̄_i^T(u_{T_1}^k, ..., u_{T_{i−1}}^k, u*_{T_i}, u_{T_{i+1}}^k, ..., u_{T_n}^k)
≤ φ̄_i^T(u_{T_1}^k, ..., u_{T_{i−1}}^k, u_{T_i}, u_{T_{i+1}}^k, ..., u_{T_n}^k) for all u_{T_i} ∈ U_i^T .  (3.15)

Then one puts u_{T_i}^{k+1} = u*_{T_i} and u_{T_j}^{k+1} = u_{T_j}^k for j ≠ i and replaces i by i + 1 modulo n.


Concerning convergence we can prove the
Theorem 3.3.

Assumptions:

a) For every i = 1, , n the control set uT is compact.


b) For every i = 1, , n the vector function gi : jRN x jRM --> jRni in (3.1) is
continuous.
c) The sequence (U~)kE N() converges to som e UT E jRM. T.

Ass ertion : U T is a Nash equilibrium.

n
Proof. From a) it follows that U T = TI uT is compact and hence closed which
i= 1
implies that UT E U T .
Let us assume that , for some i E {I , . .. , n} , there exist s some UTi E uT with
-T ( ,
'P UTI ,
A ,
,UTi _ I ,UTi ,UT i + I , ,UT "
, )
< 'Pi T ( UT
- ' )

Since the fun ction U --> rpf( u ) , U E jRM.T, is cont inuous, it follows that
rpf( u~) --> rpf( UT ) . This implies for
s
U = :2l( 'Pi
_T( UT
A) - -T (U' T I "
'Pi A

' " UTi _ I' UTi ' UTi + l


A

, . . ,
A

UT"
))

that

If on e puts, for every kEN ,



then it follows that


(k
1llll UT
)i = (UT"A

,UT; _"UT; ,UT;+ , , ,UT ,,


A A A )

k ~ oo

and hence
-T(( UT
'Pi k )i) -T (UT,
< 'Pi A

, , U T i _ , , UT i ' UTi + i
A A

, . . . , UT "
A )

+ s:
U

which contradicts (3.15) .

Therefore the assumption is false and UT E U T is a Nash equilibrium. In order


to solve the problem (3.15) for a given i E {I, . . . , n } , k EN, and u} E U T
we again apply the it er ative pro cedure being describ ed in Section 3.2.1 under
the assumpt ion that the vector function 9 : jRN x jR M -+ jRN in (3.2) has the
form
g(x, u) = x + f( x , u) , x E jRN , UE jR M .

For this purpose we consider the problem t o find, for a given i E {I , ... , n}
r U
and a given u E T , some UT i E such that Ur
-T ( * * * * )
'Pi uT,, " " UT i _" UT i ' U T i + " " " U T ",
A

< -T (U T* , , " " U T* i_ " UTi' UT* i + l " ' " UT* m )
_ 'Pi
for all UTi E U;T
At the beginning of the pro cedure we choose

U?(t) = 8 m i for all t = 0, . .. , T - 1

and calcul ate

XO(t + 1) = xO(t ) + f( xo(t) , ui(t) , . .. , ui-l (t ), u?(t) , uiH (t) , . . . , u~(t))


for t = 0, . . . , T - 1

where
XO(O) = Xo .
Then we construct a sequ ence (u~ (O) , . . . , u~ (T - l ) )IENo in Ur and a sequ ence
(x 1(1), ... ,x1(T ))IENo in jRN.T as follows:

If (U~(o), .. . , u~(T - 1)) E uT,


(x l (l ), . . . , xl(T )) E RN .T and xl(O) =xo
ar e given, then we det ermine u~+l(O) , , u~+l(T - 1) E such that for uT
xl+l(t + 1) = xl(t) + f( x l (t) , ui (t) , ,U7-1(t) , U~+l (t) ,u7+l (t) , . . . ,u~ (t))
for t = 0, . . . ,T - 1 with xl+ 1
(0) = Xo the function valu e
-T ( *
'Pl,i *
uTI" .. , UTi_I' l+1 ' UT;+I"
UTi * * )
' " UT"
= Il x~+l - xi l l ~
T- l
=I L f i(XI(t), ui(t) , .. . ,U7_1(t) , u~+l (t ), U7+1 (t), . . . , u~(t ))ll~
t= O

becomes minimal.
Then we define xl+l (O) = Xo and
Xl+1(t + 1) =
x l+1(t) + f( xl+l (t) , ui (t) , . .. ,U7- 1(t) , u~+l (t), U7+1 (t) , ... , u~ (t))
for t = 0, . . . , T - 1 and proceed to the next st ep of the procedure.

3 .2.3 The Linear Case


In this case the vector functions g_i : ℝ^N × ℝ^M → ℝ^{n_i} for i = 1, ..., n in (3.1)
are given in the form

g_i(x, u) = A_i x + B_i u , x ∈ ℝ^N , u ∈ ℝ^M ,

with real n_i × N-matrices A_i and n_i × M-matrices B_i.
The system (3.2) then reads

x(t+1) = A x(t) + B u(t) , t ∈ ℕ₀ ,  (3.16)

where

A = (A_1^T, ..., A_n^T)^T and B = (B_1^T, ..., B_n^T)^T .

Obviously we have

g(Θ_N, Θ_M) = Θ_N ,

i.e., x̄ = Θ_N is a fixed point of the uncontrolled system

x(t+1) = A x(t) , t ∈ ℕ₀ .  (3.17)

Further we obtain from (3.16) and

x(0) = x_0  (3.18)

for some x_0 ∈ ℝ^N that, for every T ∈ ℕ,

x(T) = A^T x_0 + ∑_{t=1}^{T} A^{T−t} B u(t−1) ,

hence

G^T(x_0, u(0), ..., u(T−1)) = A^T x_0 + ∑_{t=1}^{T} A^{T−t} B u(t−1) ,

so that the equation (3.9) reads

∑_{t=1}^{T} A^{T−t} B u(t−1) = −A^T x_0 .  (3.19)

Further we obtain as total cost function

φ^T(u(0), ..., u(T−1)) = ‖ ∑_{t=1}^{T} A^{T−t} B u(t−1) + A^T x_0 ‖_2²

where ‖·‖_2 denotes the Euclidean norm in ℝ^N. Hence φ^T : ℝ^{M·T} → ℝ is a
convex functional and therefore automatically continuous.
If

rank( B | AB | ⋯ | A^{T−1} B ) = N ,

then φ^T is even strictly convex. If further the control sets U_i, i = 1, ..., n,
are convex and compact, then the problem (3.10) has exactly one solution
(u^T(0)^T, ..., u^T(T−1)^T)^T ∈ U^T which, for instance, can be computed itera-
tively with the aid of the conditioned gradient method.
If the matrix A is non-sin gul ar , t hen t he equa t ion (3.19) can be repl aced by
T
LA- tB u (t - 1) = - x o
t =l

and inst ead of ipT t he fun ct ion al


T
fj5T(u (O), . . . ,u(T - 1)) = II LA- tB u(t - 1) + xo ll~
t= l

has to be minimized on U T . In t his case it follows t ha t

fj5T+I(uT+I(O) , . . . , u T+ 1 (T )) < fj5T+I(u T (0), .. . , uT (T - 1) , 8 M )

= fj5T (u T (O), . .. , uT (T -1 ))
wh ere again (uT(O) , . .. , uT(T - 1)) denotes t he solut ion of problem (3.10)
with iT instead of ipT.
If one puts C; = AT- tB for t = 1, . . . , T and defines

c, = ( Ctl I Ct 2 I I c.; ) wit h N x ffi i mat rices c;


for i = 1, ,n ,

t hen on e obtains
T T T n
L A T- t B u( t - 1) = L
Gtu( t - 1) = L L GtiUi(t - 1)
t=l t=l t=l i=l
n T n
L L GtiUi(t - 1) = L (G li I G2i I .. . I GTi )UTi
i=l t=l i= l

where UT; = (ui(Of, ,ui(T - I f )T for i = 1, . .. ,n.

If we put G (i) = ( G1i I G2i I I GTi ) for i = 1, . . . , n , t hen it follows that


n
GT(xo, u(O) , . .. , u(T - 1)) = AT Xo + L G(j)UTj ,
j=l
hence n
G-T (UTI , . . . ,UTn) = A T Xo + '~
" G (J)UTj
. .
j=l
Let Gi(j) be t he i-t h submatrix of G(j) consisting of ni row vect ors of G(j ),
i.e., let

G(j) = ( G1:(j )) for j = 1, . .. , n


Gn(j)
with ni x (mj . T) -matrices Gi(j) and let

wit h Yi E ]Rn; for i = 1, ... ,n .

T hen it follows
n
6nuTI, . .. ,UTn) = L Gi(j)UTj + Yi for i = 1, . . . , n
j=l
and t he functi onal
n

rpnUTI , . . . ,UTn) = II L Gi(j )UTj + Yill~


j=l
is convex (and hence cont inuous) on ]R M T. It is st rictly convex , if

rank Gi(j) = ti, for all j = 1, .. . , n .


= 1, . . . , n and if t he control sets Us, i = 1, . . . , n, are
If this is the case for all i
convex and compact, t hen all t he assumptions of Theorem 3.2 are satisfied
and t he existe nce of a Nash equilibr ium is guaranteed.

3.3 Local Controllability

Let us come back to Section 3.1 and assume that g ∈ C¹(ℝ^N × ℝ^M, ℝ^N).
Then we define

A = g_x(x̄, Θ_M) and B = g_u(x̄, Θ_M)

where x̄ is a fixed point of the uncontrolled system (3.3).


Lin eari zation of the system (3.2) at t he po int (x, 8 M) lead s t o t he system
h(t + 1) = Ah(t ) + B u(t) , t E No , (3.20)
and the initi al condit ion (3.18) has to be repl aced by
h(O ) = Xo - X . (3.21 )
Instead of the problem of controllability we now consider the

Problem of Local Controllability

Given an initial st ate Xo E JRN, find a time T E No and a cont rol fun ction
u : No ---4 JRM with
n
u(t) E U = Il Ui for all t E No (3.22)
i= l

and
u(t) = 8 M for all t ~ T (3.23)
such that the solut ion x : No ---4 JR N of (3.20) which sat isfies the initial condi-
tion (3.21) satisfies the end condit ion
h (t) = 8 N
which implies
h(t) =8N for all t ~ T .
This problem is nothing else but the problem of null- controllability of the
linear system (3.20) (see S ection 2.1.2 ).
Let us assume, for every i = 1, . . . ,n, t ha t

where v > 0 is a given constant and I Iii is a norm in JRn; . Then we can prove
(see Th eorem 2.2 ) the following
Theorem 3.4. Let there exist some T ∈ ℕ such that

rank( B | AB | ⋯ | A^{T−1} B ) = N .

Further let all the eigenvalues of A' = transpose of A be less than or equal to
one in absolute value and the corresponding eigenvectors be linearly independent.
Then the problem of local controllability has a solution for every choice
of x_0 ∈ ℝ^N, if A is non-singular.

3.4 An Emission Reduction Model

3.4.1 A Non-Cooperative Treatment

We come back t o the emission redu cti on mod el that was introduced in Section
1.1.6 as an uncontrolled syste m (see (1.22)) and in Section 2.1.1 as a con-
trolled syst em under an addit iona l condit ion on t he costs. Now we consider
the act ors who control the syst em as players who have t o find a cost vector
fun ction v : No ----+ jRr wit h

8r v et) :::; M * for t = 0, . . . , N - 1 ,


:::;
(3.24)
vet ) = 8 r for t 2': N for some N E N

such that
N-l

C( L vet )) = E - Eo , C = ( e m ij k j =l ,... ,r .
t= O

Let us replace this condition by

C( ∑_{t=0}^{N−1} v(t) ) ≥ E − E_0  (3.25)

and neglect the requirement

v(t) ≤ M* for t = 0, ..., N−1

which in the case

M_i* > 0 for i = 1, ..., r

can always be satisfied, if we can find v : ℕ₀ → ℝ^r with (3.24), (3.25) (see
Section 2.1.1). If we put

c_{ij} = em_{ij} , i, j = 1, ..., r , x = ∑_{t=0}^{N−1} v(t) , b = E − E_0 ,

then (3.25) can be written in the form

∑_{j=1}^{r} c_{ij} x_j ≥ b_i , i = 1, ..., r ,  (3.26)

and we have to find a vector x ∈ ℝ^r with

x_i ≥ 0 for i = 1, ..., r  (3.27)

such that the inequalities (3.26) are satisfied.

Now each player is interested in minimizing his costs x_i = ∑_{t=0}^{N−1} v_i(t),
i = 1, ..., r. Let us assume that the players try to minimize the total costs

s(x) = ∑_{j=1}^{r} x_j  (3.28)

under the constraints (3.26), (3.27).

This is a typical problem of linear programming.
Let us assume that x̂ ∈ ℝ^r is a solution of this problem. If we then choose,
for any i ∈ {1, ..., r}, some x_i ≥ 0 such that

∑_{j≠i} c_{kj} x̂_j + c_{ki} x_i ≥ b_k for k = 1, ..., r ,

then it follows that

∑_{j=1}^{r} x̂_j ≤ ∑_{j≠i} x̂_j + x_i ,

and therefore x̂_i ≤ x_i.

Thus every solution of (3.26), (3.27) which minimizes (3.28) is a Nash equilibrium,
i.e., if the i-th player declines from his choice of costs whereas all the
others stick to it, he can at most do worse.

The Dual Problem


T he du al problem t o the problem of minimizin g (3.28) subjec t to (3.26), (3.27)
cons ists of maximizing
r

t( y) = LbiYi , YE jRr , (3.29)


i= l

subjec t t o
r

L Cij Yi :S 1 , j = 1, . .. ,T , (3.30)
i= l

and
Yi 2: 0 , i = 1, ... ,T . (3.31)
For Y1 = Y2 = ... = Yr = 0 t he side condit ions (3.30), (3.31) are satisfied. If
we ass ume that ther e ex ist s some X E jRr whi ch satisfies (3.26) , (3.27) , then
by a well kn own duality theor em t here exists a solut ion X = x E jRr of (3.26),
(3.27) whi ch minimizes (3.28) and a solut ion Y = i) E jRr of (3.30) , (3.31)
wh ich maximizes (3 .29) and it is s ex) = t (i) ) which is equivalent t o the two

implications
r
Xj > 0 =} L Cij Yi = 1
;'= 1
(CSL) and
r

Yi > 0 =} L CijXj = b i .
i= l

On introducing slack variabl es

Zj 2:: 0 for j = 1, . .. ,r , (3.32)

condit ion (3.30) can be rewri tten in t he form


r

Zj +L Cij Yi =1, j = 1, .. . ,r , (3.33)


i =l

and t he du al problem is equivalent to maxim izing


r ,.
L 0 . Zj +L biYi (3.34)
j=l i= l

su bjec t to (3.31) , (3.32) , (3.33) . T his problem ca n be immedi ately solved with
the aid of the simplex method starting with t he feasible basis solut ion

Zj =1, j = 1, . .. , r , an d Yi = 0 , i = 1, . . . , r .

Befor e proceeding we conside r a


Special case
Let
bj 2:: 0 and Cjj > 0 for all j = 1, . . . , r .
If we assume that , for some j E {I , . .. , r },

Cji :::; 0 for all i = 1, . . . , r with i oJ j ,

Le., the player j can be conside red as an opponent of all the others, then it
follows for the solut ion x E JRr of (3.26), (3.27) which minimizes (3.28) that
r
L Cj k X k = bj .
k=l

For otherwise (Xl, ... ,Xj -1, x;, Xj+ 1, .. . , X r ) with



xi + 2:=
T T
also solves (3.26) , (3.27) and it follows Xk < 2:= .'h cont radicti ng
k = 1 k=l
k i' j
T
t he minimality of 2:= Xk Now we ass ume that
k=l

Cj i :::: 0 for all j -=I- i ,

i.e., every player can be considered as an opponent of every ot her . Then it


follows that
L
T

Cj kXk = bj for all j = 1, . . . .r .


k= l

If in addit ion we assume that

L
T

Cij > 0 for all i = 1, . .. , r ,


j =l

then
Cjj > 0 for all j = 1, . .. , r
and t he matrix C = ( c i j k j =l ,... ,r is inverse monotone, i.e. C- 1 exist s and
is positive (see L. Collatz : Funktionalanalysis un d Numerische Mathematik.
Springer-Verl ag: Berlin, Co ttingen , Heidelberg 1964) .
This implies
x = C- 1 b 2: 8 T
If x E JRT is any solution of (3.26) , (3.27), then it follows t ha t

x 2: C - 1b = x , i.e, , Xi 2: x; for i = 1, . . . , r .

In words this means the following: If every player is an opponent of every other
and if his own cont ribut ion to achieve his goa l is greate r than the negative sum
of the cont ribut ions of his opponent s, t hen everybody can reach the absolute
minimum of his costs .
Now we return to t he
General case
We assume that t here exists a solut ion x E JRT of (3.26) , (3.27). Then the dual
problem has a solut ion as seen above. If t his has been obtained by s :::: r st eps
of the simplex method , we can ass ume t he result in t he following form :

Y1 d1 - Zl

Ys ds - Zs
+D
Zs+ l ds + 1 - Y s+ 1

ZT dT - YT

where
dl l dIs d1s + 1 ;

: s.. d ss +1 a.,
D=
d S +ll . . . d s + 1s d s + 1s + 1 . . . d s + 1r

dr1 ... dr s d r s+1


. .. d rr

t, ~ t, bA
bj Yj + 1;, (t. djkb j) (- zM kt (t. djkbj) (-Yk)
wit h
dj 2: 0 for j = 1, .. . , T .

and
L djk bj 2: 0
8

for k = 1, ... , T .
j =1

The corresponding solution of t he dual problem is given by

Yj =dj forj=l , ... , s and Yj =O forj=s+l , . .. , r .

Further we have
r
dj + L Cij Yi = 1 for j = s + 1, . . . , r .
i =1

Let us assume t hat


dj > 0 for all j = 1, . . . , s .
If x E jRr is any solut ion of (3.26) , (3.27) which minimizes (3.28) , t hen it
follows from t he imp lications (CSL) t hat

Xj =0 for j = s + 1, .. . , r
and
L CijXj = b,
S

for i = 1, . . . , s .
j=1

If t he matrix

is invertible, it follows that

dll . . . d S1 )
C- 1 = : .. :
S
( i ; ..: d~s
whi ch implies for XS = (Xl , ... ,.Tsf t ha t

hen ce
s
.Tk = L d j kbj for k = 1, . . . , s .
j=l

Let us continue with a direct method for the determination of a Nash equilibrium,
i.e., of an x̂ ∈ ℝ^r with x̂ ≥ Θ_r and

∑_{j=1}^{r} c_{ij} x̂_j ≥ b_i , i = 1, ..., r ,  (3.35)

such that the following is true:

If for an arbitrary i ∈ {1, ..., r} there exists some x_i ≥ 0 with

∑_{j≠i} c_{kj} x̂_j + c_{ki} x_i ≥ b_k , k = 1, ..., r ,

then it follows that x̂_i ≤ x_i. In order to determine such a Nash equilibrium
we apply an iterative method as follows:
Starting with a vector x^0 ≥ Θ_r which satisfies (3.35) with x^0 instead of x̂ we
construct a sequence (x^L)_{L∈ℕ₀} with L = l·r + i, l ∈ ℕ₀, i = 0, 1, ..., r−1,
in the following manner: If x^L ≥ Θ_r with (3.35) for x^L instead of x̂ is given,
then we minimize x_i ∈ ℝ subject to x_i ≥ 0 and

∑_{j≠i} c_{kj} x_j^L + c_{ki} x_i ≥ b_k , k = 1, ..., r .  (3.36)

This problem has a solution x_i^* ≥ 0 which can be explicitly calculated if c_{ii} > 0
for all i = 1, ..., r, as we shall see later, and for which x_i^* ≤ x_i^L holds true. If
we define

x_j^{L+1} = x_j^L for j ≠ i and x_j^{L+1} = x_i^* for j = i ,
where

L + 1 = (l+1)·r , if i = r−1 , and L + 1 = l·r + i + 1 , if i < r−1 ,

then x^{L+1} ≥ Θ_r satisfies (3.35) with x^{L+1} instead of x̂ and x^{L+1} ≤ x^L.
The latter implies the existence of

x̂ = lim_{L→∞} x^L ≤ x^L for all L ∈ ℕ₀

which satisfies (3.35).

Assertion: x̂ is a Nash equilibrium, if

c_{ii} > 0 for all i ∈ {1, ..., r} .  (3.37)
Proof Assume that x is not a Nas h equilibrium. Then there is some i E
{I , . . . , r } and some Xi 2: 0 such t ha t
r
L Ck jXj + CkiXi 2: b k k = 1, .. . , r
:1 = 1
:i"# i

and Xi < Xi
This implies
r

L CijXj + CiiXi > bi (3.38)


i = 1
:i *
i

If we define a subsequence (Ll)I ENo by Li = l . r + i, then we obtain

:i = 1
:I:;t i

for all k = 1, . . . , r and all l E No .


In particular it follows that
r
~
Z:: CijX j
L, + Cii X.L,
i
+1 -
-
bi for all l E No
:i = 1
i # i

(for otherwise Xf,+ 1 could be chosen sma ller) whi ch impli es


r

L CijXj + CiiXi = bi
i = 1
i ,:pi

contradict ing (3.38). Hence t he assumpt ion is false and x is a Nash equilib-
rium.
o

In order to minimize x_i ≥ 0 subject to (3.36) we proceed as follows:

1. If x_i^L = 0, then we put x_i^* = 0 and are done.

2. If x_i^L > 0 and

∑_{j≠i} c_{ij} x_j^L + c_{ii} x_i^L = b_i ,

we put x_i^* = x_i^L and are done.

3. If

∑_{j≠i} c_{ij} x_j^L + c_{ii} x_i^L > b_i

and there is some k ≠ i such that

c_{ki} > 0 and ∑_{j≠i} c_{kj} x_j^L + c_{ki} x_i^L = b_k ,

then we also put x_i^* = x_i^L and are done.


4. Otherwise we have

c_{ki} ≤ 0 for all k ∈ I(x^L) ,

where

I(x^L) = { k | ∑_{j≠i} c_{kj} x_j^L + c_{ki} x_i^L = b_k } .

Let J(L) be the complement of I(x^L), i.e.,

J(L) = { k | ∑_{j≠i} c_{kj} x_j^L + c_{ki} x_i^L > b_k } .

Now let h_i ≤ x_i^L be such that

∑_{j≠i} c_{kj} x_j^L + c_{ki} (x_i^L − h_i) ≥ b_k for all k = 1, ..., r .

Then

h_i ≤ (1/c_{ki}) ( ∑_{j≠i} c_{kj} x_j^L + c_{ki} x_i^L − b_k ) =: α_k^L

for all k ∈ J(L) with c_{ki} > 0.

If we therefore put x_i^* = x_i^L − h_i with

h_i = min( x_i^L , min{ α_k^L | k ∈ J(L) and c_{ki} > 0 } ) ,

then 0 ≤ x_i^* ≤ x_i^L and x_i^* is the smallest non-negative number that satisfies
(3.36).
In particular, if c_{kj} ≤ 0 for all j ≠ k, then we get in the case

∑_{j≠i} c_{ij} x_j^L + c_{ii} x_i^L > b_i

that

h_i = min( x_i^L , α_i^L )

where

α_i^L = (1/c_{ii}) ( ∑_{j≠i} c_{ij} x_j^L + c_{ii} x_i^L − b_i ) ,

which implies

∑_{j≠i} c_{ij} x_j^L + c_{ii} x_i^* = b_i .
Let us demonstrate this procedure by a numerical example:

We choose

        (  1.667  −0.875  −0.792 )
    C = ( −0.792   1.667  −0.875 ) × 10^{−2}
        ( −0.167  −0.167   0.333 )

and

    b = ( −0.3459 , −0.1083 , 0.0498 )^T .

Starting with x^0 = (0, 2, 16)^T, which satisfies C x^0 ≥ b, we obtain the sequence

x_1^0 = 0 , x_2^0 = 2 , x_3^0 = 16 ;
x_1^1 = 0 , x_2^1 = 1.9016197 , x_3^1 = 16 ;
x_1^2 = 0 , x_2^2 = 1.9016197 , x_3^2 = 15.90862 ;
x_1^3 = 0 , x_2^3 = 1.9016197 , x_3^3 = 15.90862 ;
x_1^4 = 0 , x_2^4 = 1.8536548 , x_3^4 = 15.90862 ;
x_1^5 = 0 , x_2^5 = 1.8536548 , x_3^5 = 15.884566 ;
x_1^6 = 0 , x_2^6 = 1.8536548 , x_3^6 = 15.884566 ;
x_1^7 = 0 , x_2^7 = 1.8410289 , x_3^7 = 15.884566 ;
x_1^8 = 0 , x_2^8 = 1.8410289 , x_3^8 = 15.878234 .

The sequences (x_2^L)_{L∈ℕ₀} and (x_3^L)_{L∈ℕ₀} converge to the solutions x̂_2 and x̂_3 of
the linear system

0.01667 x_2 − 0.00875 x_3 = −0.1083 ,
−0.00167 x_2 + 0.00333 x_3 = 0.0498 ,

which are approximately given by

x̂_2 = 1.8365109 and x̂_3 = 15.875959 .

3.4.2 A Cooperative Treatment

Let us assume that we have found a cost vector function v : ℕ₀ → ℝ^r which
satisfies (3.24), (3.25). Then the controlled costs are given by

M_i(t+1) = v_i(t) − λ_i v_i(t)(M_i* − v_i(t)) ( E_i(t−1) + ∑_{j=1}^{r} em_{ij} v_j(t−1) )

for i = 1, ..., r and t ∈ ℕ. Now let K be any subset of N = {1, ..., r} and, for
any t ∈ {1, ..., N−1}, let C^K(t−1) = (c_{ij}^K(t−1))_{i,j=1,...,r} be a non-negative
r × r-matrix with

c_{ii}^K(t−1) = 0 for i = 1, ..., r and
c_{ij}^K(t−1) > 0 for i, j ∈ K (i ≠ j).

If we define, for every t ∈ {1, ..., N−1},

ĉ_{ij}^K(t−1) = em_{ij} + c_{ij}^K(t−1)

and

ĉ_{ij}^K(N−1) = em_{ij} for i, j = 1, ..., r ,

then

∑_{t=1}^{N} Ĉ^K(t−1) v(t−1) ≥ C( ∑_{t=0}^{N−1} v(t) ) ≥ E − E_0 .

Hence the condition (3.25) is also satisfied, if we replace, for every t ∈
{0, ..., N−1}, the matrix C by Ĉ^K(t) = (ĉ_{ij}^K(t))_{i,j=1,...,r}. The controlled
costs are then given by

M_i^K(t+1) = v_i(t) − λ_i v_i(t)(M_i* − v_i(t)) ( E_i(t−1) + ∑_{j=1}^{r} ĉ_{ij}^K(t−1) v_j(t−1) )

for i = 1, ..., r and t = 1, ..., N−1, and it follows that

M_i^K(t+1) ≤ M_i(t+1) for all i = 1, ..., r and t = 1, ..., N−1.

If we define, for every K ⊆ N and every t ∈ {1, ..., N−1},

v_t(K) = ∑_{i=1}^{r} ( M_i(t+1) − M_i^K(t+1) ) = ∑_{i=1}^{r} λ_i v_i(t)(M_i* − v_i(t)) ∑_{j=1}^{r} c_{ij}^K(t−1) v_j(t−1) ,

then

v_t(∅) = 0 .
The fun ction Vt : 2N ---> IR+ can t herefore be int erpret ed as t he payoff function
of a cooperative r-pe rson game . The subset s K of N can be interpreted as
coalit ions which are built by the players by changing the matrix C of mutual
influence to the matrix CK (t - 1) for t = 1, . .. , N -1 whereby they guarant ee
that the cont rolled cost s are diminished. If i 'f. K , then c~ (t - 1) = 0 for all
j EN and therefore M;(t + 1) = M{ (t + 1) so that

vt (K ) = 2: (Mi(t + 1) - M{ (t + 1)) .
;E K

In p articular we have
r
vt(N ) = 2:(M;(t + 1) - M f(t + 1)) .
;= 1

If we denote the gain of the i-th player, if he joins the coalition K ⊆ N, by

v_t^i(K) = M_i(t+1) − M_i^K(t+1) = λ_i v_i(t)(M_i* − v_i(t)) ∑_{j=1}^{r} c_{ij}^K(t−1) v_j(t−1) ,

then

v_t(K) = ∑_{i∈K} v_t^i(K) .

Let us assume that

Then the "grand coalition" N lead s to the larg est joint gain Vt (N) . The
qu estion now is whether there exist s a division (X1 , . .. , Xr ) of vt (N ), i.e.,
r
X ; ::::: 0 for all i = 1, . . . , r and vt(N ) = l: X; such that
;= 1

2: x; ::::: vt (K ) for all K ~ N .


; EK

This means that there is no incentive to build coa lit ions which differ from the
gr and coalit ion. The set of all such divisions of Vt (N) is called the core of the
gam e.

3.4.3 Conditions for the Core to be Non-Empty

The existence of a non-empty core is guaranteed, if

∑_{j=1}^{r} c_{ij}^K(t−1) v_j(t−1) ≤ ∑_{j=1}^{r} c_{ij}^N(t−1) v_j(t−1)

for all i = 1, ..., r and K ⊆ N.

From this condition it namely follows that

v_t^i(K) ≤ v_t^i(N) for i = 1, ..., r and all K ⊆ N .

If we therefore put

x_i = v_t^i(N) for i = 1, ..., r ,

then we can conclude that

x_i ≥ 0 for i = 1, ..., r ,

∑_{i=1}^{r} x_i = v_t(N) and ∑_{i∈K} x_i ≥ v_t(K) for all K ⊆ N ,

i.e. (x_1, ..., x_r)^T is in the core of v_t.


In general it is not easy t o show t hat the core c(Vt ) of Vt is not empty. In
order to get some mor e insight of it s st ruct ure we give anot her definition of
t he core, however , for so called superadditi ve ga mes which have t he pr op erty
that

Vt (K U L) 2: vt(K ) + vt(L ) for all K , L ~ N with K n L = .

For this purpose we define the set of all divisions of vt (N ) by I( vt) , i.e.,
r

I( vt) = {x E jRr I Xi 2: 0 for i = 1, .. . ,1' and L Xi = vt(N)} .


i =l

We say that X E I (vd dominat es y E I (Vt ), if t here exists a coa lit ion K ~ N
such that

Xi > u. for all i E K

and 'L Xi < Vt( K) .


iEK

Then we ca n prove that

C(Vt ) = {y E I( vt) I T here is no X E I (vt ) t hat domina tes y} .



Proof. Fi rst of all we observe that c(Vt) <;;; I (Vt ). Now let Y E c(Vt ). Assume
t hat there exists some x E I (Vt ) t hat dominat es y . Then there is a coa lit ion
K <;;; N such that (*) hold s t rue which implies

I: u. < I: Xi ::; Vt(K)


iE K iEK

and cont radict s t he ass umpt ion t ha t Y E c(Vt).


Now let y E I( vt ) \ c(Vt ). T hen there exists a coalition K <;;; N with K =I- 0,
N and L u. < vt (K). Then we define
iE K

P = vt (N ) - vt( K) - I: v~(N \ K) ,
i EN\K

(J = vt (K ) - I: Yi
iEK

and put
. _ {Yi + I~I for i E K ,
x, - vt(N \ K ) + IN \ KI for i EN \ K .
Becau se of (J > a it follows t hat Xi > Yi for all i E K . Further it follows
that
I: Xi = I: u. + vt( K) - I: u. = vt (K)
iE K iEK iEK

and
r
I: Xi = Vt(K )+ I: v:(N\ K) +Vt(N)-Vt(K) - I: v:(N\ K ) = Vt(N ) .
i= l iEN\K iEN\ K

Since

Vt(N) '2 Vt (K) + Vt(N \ K ) = Vt(K ) + I: v~(N \ K ) ,


iEN\ K

it follows t hat p '2 a and

Xi '2 v; (N \ K ) '2 a for i E N \ K .

Therefore X = (Xl, . . . , X r ) TE l (Vt) dominat es Y which completes the pr oof.

o
Next we give a necessary condition for the core to be non-empty. For that
purpose we define

b_i(v_t) = v_t(N) − v_t(N \ {i}) for i = 1, ..., r ,

and see that

b_i(v_t) ≥ 0 for all i = 1, ..., r .

Now let x ∈ c(v_t). Then it follows that

b_i(v_t) = ∑_{j=1}^{r} x_j − v_t(N \ {i}) ≥ ∑_{j=1}^{r} x_j − ∑_{j≠i} x_j = x_i

for all i = 1, ..., r.


Let us define a gap function by

g_t(S) = ∑_{j∈S} b_j(v_t) − v_t(S) for S ⊆ N .

Then it follows, for every S ⊆ N, that

g_t(S) ≥ ∑_{j∈S} x_j − v_t(S) ≥ 0 .

This is therefore a necessary condition for the core to be non-empty. In the
following we assume this condition to be satisfied. If we define, for every
i = 1, ..., r,

λ_i(v_t) = min{ g_t(S) | S ⊆ N with i ∈ S } ,

it follows that

b_i(v_t) ≥ λ_i(v_t) ≥ 0 for all i = 1, ..., r .

Further we obtain, for every x ∈ c(v_t),

λ_i(v_t) = g_t(S*) for some S* ⊆ N with i ∈ S*

= ∑_{j∈S*} b_j(v_t) − v_t(S*)

≥ ∑_{j∈S*} ( b_j(v_t) − x_j ) ≥ b_i(v_t) − x_i ,

hence

b_i(v_t) − λ_i(v_t) ≤ x_i ≤ b_i(v_t)

for all i ∈ {1, ..., r}.

If in addition to

gt(S) = L bj(vt} - Vt(S ) 2: 0 for all S s:;; N


jE S

we assume that
r
gt(N) = L bj( vt) - vt(N) = 0 ,
j=1
then it follows that (b1 (vt} , . . . ,br( Vt )f E c(vt}.
So these two condit ions are sufficient for t he core to be non-empty.
r
If gt(N) > 0 and L Ai(Vt) > 0, then we define
j=1

and obtain

if
r

L Aj (vt} 2: gt(N) .
j=1
Further we obtain
r r r
LTi(Vt) = L bi(vt) - L bj(vt) + vt (N ) = vt(N )
i= 1 i =1 j=1
and

if
gt(S) - !!t(N) L Ai(Vt) 2: 0 for all S < N
L Aj(Vt} iES
j=1
which implies that (T1(Vt} , . . . ,Tr(Vt))T E c(Vt ).
Assume that

Then the last condit ion read s

and we obtain

Further r
LAj(Vt) =r ' 9t (N ) ? 9t (N ) > 0
j =l

is satisfi ed .
Result. If
Ai (vd = 9t (N) > 0 for all i = 1, . . . , r
and
9t(8) - ~9t(N)
r
? 0 for all 8 r;, N ,
then

Remark: If 9t (N) = 0, then this result coincides with the one obtained above.

3.4.4 Further Conditions for the Core to be Non-Empty

In the following we replace r by n. Next we will present a constructive method
by which we can decide whether the core c(v_t) of v_t is empty or not. For this
purpose we order the subsets of N which have at least two elements in a
sequence K_1, ..., K_{2^n−n−1} such that K_i ⊆ K_j implies i ≤ j,

which implies that K_{2^n−n−1} = N.

Then we define a (2^n − n − 1) × n-matrix A = (a_{ik})_{i=1,...,2^n−n−1; k=1,...,n} by

a_{ik} = 1, if k ∈ K_i, and a_{ik} = 0, if k ∉ K_i,

and a (2^n − n − 1)-vector b = (b_i)_{i=1,...,2^n−n−1} by

b_i = v_t(K_i) , i = 1, ..., 2^n − n − 1 .


3.4 An Emi ssion Redu ction Model 123

With these definitions we conclude that a vector x E jRn is in the core c(Vt ),
if and only if
n
L a ik x k ;::: b, for i = 1, . .. , 2n - n - 2, (PI)
k= l
n
L Xk = b2" - n - 1 = vt (N ) (P2)
k =l
and
Xk ;::: 0 for k = 1, . . . , n . (P3)

Now we repl ace the const ra int (PI) by


n
L a ik Xk + xn+l ;::: b, for i = 1, . . . , 2n - n - 2, (PI')
k =l

and consider the problem of minimizing X n +1 subject to (PI') , (P2) and

X k ;::: 0 for k = 1, .. . , n +1 . (P3')

This is a problem of linear programming whose du al problem consist s of max-


imizing
2l/. -n -l

L biYi
;.= 1

subject to the constraint s


21/.- n-l
L a ikYi :::: 0 for k = 1, .. . , n , (Dl)
i= l
2" - n -2
L Yi :::: 1 , (D2)
i= l
Yi ;::: 0 for i = 1, . . . , 2n - n - 2. (D3)
If we choose ( Xl , " " Xk ) such that (P 2) and (P 3) are satisfied (which is
possible), then we can choose X n + 1 ;::: 0 lar ge enough so that (PI') is satisfi ed.
Thus we can find X l , .. . , X n + 1 such that the constraints (PI') , (P2) and (P3')
are satisfied . If we choose Y i = 0 for i = 1, . . . , 2n - ti - 1, then the const raint s
(Dl), (D2) , (D3) are also satisfied . By a well known du ality theorem there
are numbers Xl, . . . ,xn+l which sa tis fy (PI ') , (P2) , (P 3) and are such that
xn + 1 ;::: 0 is minimal and t here are numbers ij; for i = 1, . . . , 2n - n - 1
2 n - n -l
su ch t ha t (Dl) , (D2) , (D3) hold t rue and L b;f); is maxim al. Mor eover ,
i=l
2 H-n
- l
L b;fj; = Xn + 1 ;::: 0.
i= l
124 3 Controllability and Optimization

T his implies t hat t he core c(Vt) of Vt is non-empty, if and only if Xn +1 =


or if and only if for every set of numbers Y i , i = 1, . .. , 2n - n - 1, wit h (DI )
and (D3 ) it follows t hat
Zn -n-l
:L biYi :::; 0.
i =l

Let us consider t he case n = 3. Then the constraints (P I') and (P2) read
Xl + Xz + 'T4 > bl
Xl + X3 + X4 > bz (P Ili)
Xz + X3 + X4 > b
Xl + Xz + X3 b4 (P2')

and t he constrai nts (D I), (D2) are given by


Yl +
Yl
yz
+ Y3
+
+
Y4
Y4
<
<
(D I')

Yl +
yz
yz
+
+
Y3
Y3
+
+
Y4
Y4
<
<
1 (D2')

From (D I') ,we infer t hat

hence

T his implies t hat

for all Yl ::: 0, yz ::: 0, Y 3 ::: 0, if b 4 ::: ~bi for i = 1,2,3 which is t hen a
sufficient condition for the core c(Vt) of Vt to be non-empty.
If we choose
1
Xl = Xz = X3 = 3b4 ,

t hen
2
Xl + Xz = 3b4 ::: bl ,

2
Xl + X3 = 3b4 ::: bz ,
2
Xz + X 3 = 3b4 ::: b3 ,

Xl + X z + X3 = b 4 .
3.4 An Emission Reduction Model 125

Henc e (!b 4 , !b4, !b 4) is in the core c (V t ) of Vt. On e can easily see that
under the condition
3
b4 > -b for i=1 ,2, 3
- 2 '

3
Xi ~ a , i = 1,2 ,3 , L Xi = b4 and
i= l

1
Xl ~ "2(b l + b2 - b3 ) ,

1
X2 ~ "2(b l - b2 + b3 ) ,

1
X3 ~ "2( -b l + b2 + b3 ) ,

are in the core c(vd of Vt .


In the case n = 4 the const raints (D1) read
YI + Y2 + Y3 + Y7 + Y8 + Yo + Yll $ 0 ,
YI + Y4 + Yo + Y7 + Y8 + YIO + Yll $ 0 ,
Y2 + Y4 + Yo + Y7 + yo + YIO + Yll $ 0 ,
Y3 + Yo + Yo + Y8 + W + YIO + Y ll $ 0 ,

which implies

2(Yl + Y2 + Y3 + Y4 + Y5 + Y6) + 3(Y7 + Ys + yg + YlO) + 4Yll :::; a


and in turn

Henc e

for all Yi ~ 0, i = 1, .. . , 10, if

for i = 1, ,6 ,
for i = 7, , 10 .

These conditions are therefore sufficient for the core c(Vt) of Vt to be non-
empty.
126 3 Controllability and Optimization

If we choose Xl = X 2 = X3 = X4 = :tb l1 , t hen it follows that

Xl + X2 = tbl1 2: bl ,
Xl + X3 = ~bl1 2: b2 ,
Xl + X4 = ~bl1 2: b3 ,
X2 + X3 = ~bn 2: b ,
X2 + X4 = ~bl1 2: b5 ,
X3 + X4 = ~ b l1 2: be ,
Xl + X 2 + X3 = ~ b ll 2: b7 ,
Xl + X2 + X4 = ~ b l1 2: b8 ,
Xl + X3 + X4 = ~ b l1 2: b9 ,
X2 + X3 + 1:4 = 4 bl1 2: ho.
Hence (:tbn, :tbl1 ' :tbn , :tbl1) is in t he core c(vd of Vt . For a general n 2: 3
one der ives as sufficient condit ions for the core c(Vt ) of Vt to be non- empty:

b, - ~h'-n-l ::; a for i = 1, ..., (~) ,

bi - ~b2"-n-l ::; a for i = (~) + 1, ... , (~) + (~) ,

Under these condit ions it then follows that


1 1
(-b 2"- n-l ," " -b 2"- n-l) is in the core c(Vt) of Vt.
,n n ./

n-times

Let K~ <;;; N for i = 1, .. . , (~) be the (~) subsets of N with r elements .


Then t he const raint s (PI) can be written in t he form

L Xk 2: Vt (K~) , i = 1, .. . , (~) , r = 2, ... , n - 1 .


kEK;

n) (:~)
(r
n .
This implies for every r = 2, . .. , n - 1 fi: k;;:l X k 2: i~ vt ( K~ ) and further

~v, (N)" (~) I:Ll v, (K;) for r ~ 2, . . . , n - 1 as necessary conditions for


3.4 An Emi ssion Red uct ion Model 127

the core c(Vt ) of Vt to be non-empty. If for every r = 2, ... , n -1 all Vt(K ;)


for i = 1, . .. , (~) are equa l, then t he last condit ions imply

r
;, Vt(N) 2: Vt ( K~) . (n)
for all i = 1, ... , r and r = 2, . .. , (n - 1)

whi ch are sufficient condit ions for t he core c(Vt) of Vt t o be non-empty.


Let us again consider t he case n = 3. In this case t he condit ion

is necessary for the core c(Vt) of Vt to be non- empty. Now let us ass ume in
addit ion that
Vt (Ki ) + Vt(Ki) - Vt(Kl) 2: 0 ,
I
Vt (K 2) - Vt(K 22 ) + Vt(K 2)
3
2: 0 ,
- Vt(Ki ) + Vt(Ki) + Vt(Ki ) 2: 0 .

Then we put

and define

From these definit ions we infer


Xl 2: 0 , x2 2: 0 , x3 2: 0 ,
Xl + X2 = Vt (K 2I ) + 32 10 2: Vt(K 2I ) ,
2 2 2
Xl + X3 = Vt(K 2) + 310 2: Vt(K 2) ,

X2 + X3 = Vt( K 2)
3
+ 32 10 2: Vt(K 2)
3
,
and

Hen ce (XI , X2, X3) is in t he core c(Vt) ofVt .


128 3 Controllability and Optimization

3.4.5 A Second Cooperative Treatment

Now let us come back to the problem of minimizing (3.28) subject to (3.26) ,
(3.27) .
Let us define , for every non- empty subset S of N = {1, . . . , r},

Cj(S) = I >ij for j = 1, . . . , r


iES

and
b(S) = 2:)i .
iES

We assume that , for all non- emp ty S <:;;; N,


Ci j 2 0 for all i , j = 1, . . . , r ,
(A) Ci i > 0 and
{
bi > 0 for i = 1, . . . . r .

For every non- empty S <:;;; N we now consider the problem of minimizing

subject to
for j '= 1, . .. , r (3.39)
and
r
2:Cj(S )Xj 2 b(S) . (3.26s)
j =1

According to the assumpt ion (A) t he set V(S) of all vectors x E IR r which
satisfy (3.27) and (3.26s) is non- empty and , since 2: Xj 2 0 for all x E V(S),
j ES
there exist s som e x (S) E V(S) with

2::'>j(S) = min(l::: xj I x E V(S )} = : v(S) .


jE S jES

This function v : 2N ----+ IR+ can be int erpret ed as the payoff fun ction of a
coope rative r-pers on game, if we define v() = O. The subset s S <:;;; N ca n
be interpreted as coa lit ion which are built by the players by adding their
const raint s in (3.26) and minimizing 2: Xj .
JEN
The valu e v(S), for every non-emp ty S <:;;; N , ca n be det ermined explicit ly.
For that purpose we consider the du al problem which consist s of maximizing
b(S)y subject to

for all j E N and y 2 0 .


3.4 An Emission Reduction Model 129

This problem has a solution, namely

y(S) = max{
1
Cj (S )
.
I J E N, Cj (S ) > }
and it is
v (S ) = b(S) . y(S) .
The question now arises what could be an incentive for the players to join in
a grand coalition. For that purpose we divide the minimum cost v (N ) of the
grand coalition N into r shares X i 2: 0, i = 1, . . . , r , i.e.,
r

:L .T i = v (N ) .
i= l

If this can be done in such a way that for every coa lit ion S S;; N we have

:L Xi < v(S) , (CS)


iES

then there is no incentive for them to form coa lit ions other than the grand
one . In such a case we call the grand coalition st able.
Now we can prove the following
Theorem 3.5. If th e condition (A) is satisfied, th en the grand coalition is
stabl e.
For the proof of this theorem we need the following lemma whose proof is
modelled along the lines of the proof of Th eorem 1 in [24] .


Lemma 3.1. If the condition (A) is satisfi ed, then for every collection B S;; 2N
such that th ere exist weights I S 2: for S E B with

:L I S = 1 for all i EN
S EB
i. E S

it follows that

v (N ) < L ISv (S ) .
SE B
130 3 Cont rolla bility and Optimizati on

Proof. Let 13 <::; 2# a collect ion as required . Then it follows that

L 'YS b(S ) = L L 'YS bi = L ( L 'YS ) bi = b(N) .


SE13 SE 13 iES iE# S E 13
i E S

Now let , for every S E 13,

v(S) = .1:1 (S ) + ... + xr( S )

wh ere (XI(S) , .. . ,xr (S )) sat isfies (3.27) , (3.26)s . Then


r
L 'Ysv(S ) = L "ts L Xj( S)
SE13 SE 13 j = 1

wh ere
Xj = L'YSXj (S ) for j = l, . .. , T.
SE 13
Now we have

' t Cj(S) ( L 'YSXj (S) ) = L vs (L Cj(S) Xj(S))


j=1 SE 13 SE 13 j=1

~ L 'Ys b(S )
SE 13
which implies
r r

L Cj (S )Xj ~ b(N) , hence L cj(N )xj ~ b(N) .


j=1 j =1

Sin ce Xj ~ 0 for j = 1, .. . ,1', it follows that


r r

v(N ) :::; L Xj = L L 'YSXj (S )


j =1 j =1 SE13
r
= L 'Ys L Xj( S) = L 'Ysv(S ) .
SE 13 j = 1 SE 13
This complete s the proof.
o
3.4 An Emission Reduction Model 131

The pro of of Theorem 3.5 can be given in t he sa me way as the pro of of


Th eorem 8.4 in Cha pte r II of [3] and goes as follows:
Proof. Let us consider t he probl em of maxim izing L Xj sub ject t o (3.27) ,
JEN
(Os ) for all S ~ N . T he problem which is dual t o t his consists of minimizing
L v (Shs , B = {S ~ N I S -=I- 0} . under t he condit ions
SE 13

's 2: 0 for all SEB

and

L ' s 2: 1 for i = 1, . . . , r .
SE tS
i E S

The problem is solvable and

max{L Xi IX E jRn sat isfies (3.27), (Os) for all S ~ N } < v (N) .
iEN

T herefore t he dua l pro blem is also solvable and

min{L v(Shs I (,S)SE 13 satisfies (D d , (D z) }


SE 13

=max{L Xi IX E jRn satisfies (3.27), (Os ) for all S < N } < v (N) .
i EN

For a solut ion (is ) SE13 of t he dual pro blem we can ass ume that

L is = 1 for i = 1, . . . , r .
SE t3
i E S

Ther efore Lemm a 3.1 impli es

v(N) s: min{L v(Shs I (,S)sE 13 satisfies (D 1 ) , (D z))


SE 13

hen ce

v (N) = max{L Xi IX E jRn satisfies (3.45) , (Os) for all S ~ N}


iEN

which shows that t he grand coalition is stable and complet es t he proo f of


Theorem 3.5.
o
132 3 Controllability and Optimization

In order t o derive further sufficient and necessar y cond it ions for the grand
coa litio n to be st able we denot e the (~) subsets of N which have k element s

by S1, l = 1, . .. , G) .
Then we have t he condit ions

L Xi < v (Sk)
iES~

t o be satisfied for l = 1, ... , (~) and k = 1, . . . , r - 1 t ogether with

r
L Xi = v (N) .
i= l

Now let us assume t hat

'.5.. v (N ) -::; min v (S k) for k = 1, . .. , r - 1 .


r /=1 ,... , ('k")

If we then define
1
Xi = - v(N ) for i =l , . . . , r
r
we obtain

for all l = 1, .. . , (~) and k = 1, . . . , r - 1

and r
L Xi = v (N ) ,
i=l

i.e. , {Xl , . .. , x r } is a division of v(N) of t he kind we are looking for.


On usin g the fact that

for all k = 1, . .. , r

on e can see that

for all k = 1, . . . , r - 1
3.4 An Emission Reduction Model 133

are necessary conditions for the existe nce of a division {Xl , . . . , x r } of v(N)

2::>i
with
< v(S) for all S ~ N .
i ES

In order to obtain further necessary conditions we enumer ate the sets sL for
and k = 1, . .. , r such that

1 ( ~) -1 +1
Sk n Sr- k = 0.
Then it follows from (0;) that

(~ ) - I+l
v(N)
1
:s: V(Sk) + v(Sr_k ) , l=l , ... , (~) .

For r = 3 we get as necessary conditions for t he existence of a division


{Xl ,X2,X3} of v(N) with

o :s: X i :s: v(Sl) for i = 1,2,3 , (oj)

Xl + X2 :s: v(SJ) ,
Xl + X 3 :s: v(S~) , (oj)
X2 + X 3 :s: v ( S~ ) ,
the conditions

3
v(N ) < L v(si) ,
1=1
3
2v(N) :s: L v ( S~ ) ,
1= 1

v(N) :s: v(s i) + v(Si - l ) , l = 1,2 ,3,


(v(N) :s: v(S~) + v(S{- I) , l = 1,2 ,3) .
Now let

2V(N ) < IE v(Sb)


and
{
v(N) 2: v(Sb) for l = 1,2,3
be satisfied.
Then we choose bl , bz ,b3 E IR such that

for i = 1,2,3
134 3 Controllabilit y and Optimizat ion

and

If we then define
Xl = t(b l + b2) - ~b3 ,
X2 = 1 (bl + b3) - t b2 ' (**)
X3 = 'i(b 2 + b3) - 'ibl '
we obtain
Xl = v(N ) - b3 2:: v(N ) - v(S:] ) 2:: 0 ,
X2 = v(N ) - bz 2:: v(N ) - v(Si ) 2:: 0 ,
X3 = v(N ) - b l 2:: v(N ) - v(Si ) 2:: 0 ,

Xl + X2 = bl ::; v(Si ) ,
Xl +X3 = oz ::; V(Si) , (CD
X2 +X3 = b3 ::; V(Si) ,
and

Therefore Xl, X2, X3 defined by (**) satisfy (Cn and (Cl)

Ins ertion of Xl, X2, X3 into (c j) lead s to the condit ions

whi ch tog ether with (*) are sufficient for t he existe nce of a division {Xl , X2, X3 }
of v (N ) with (Cn , (C~) . The impli cation (B I ) =} (B 2 ) is also necessar y for
the existe nce of some X E jRn with (3.27), (CN ) and (C s) for all S ~ N . For ,
if such an X E jRn is given , then , for every collect ion l3 ~ 2N such that there
exist weights IS 2:: 0 for S E l3 with (Bd , it follows that

n n
v (N ) = I >j = 2) L IS )Xj = L I S L Xj ::; L ISV (S ) .
j =l j= l S E B SE B j ES SEB
.i E S
3.4 An Emission Reduction Model 135

In order to calculate an x E jRn with (3.27) , (CN) , (C s ) for all S ~ N


one ca n solve the pro blem of maximizing I: .1: i subject to (3.27) , (Cs) for all
iEN
S ~ N . This ca n be don e wit h the aid of the simplex method which can be
performed without t he init ializing phase by introducing slack variab les

z(S) 2: 0 for all S < N S i= 0 ,


an d rewriting the inequ alities (C s ) in t he form

z(S) + L Xj = v(S)
jES

for S ~ N , S i= 0.
Then one has to maximize

L o z(S) + L Xj
s c r jEN
S #0

subject to (* * *), (3.27) and (6 s ) for all S ~ N , S i= 0. Let us dem onst rat e
this by an example: As system (3.26) we consider

Xl + 0.8X2 + 0.l x 3 2: 0.2 ,


0.2XI + X2 + 0.8X3 2: 0.2 ,
O.I XI + 0.5X2 + X3 2: 0.2 .
In this case we get
v(SII ) = v( SI2 ) = V(SI3 ) = 0.2 ,
I 3 - 2
V(S2 ) = V(S2 ) = 0.2 .. . , V(S2 ) = 0.3076923 , v(N) = 0.2608696
and t he initial simplex t ab leau reads as follows:

v(S) - Xl - X2 -X3
z(Sf) 0.2 1 0 0
z(Sr ) 0.2 0 1 0
z(sf) 0.2 0 0 1
z ( S~ ) 0.2 .. . 1 1 0
z ( S~ ) 0.3076923 1 0 1
z(SD 0.2 .. . 0 1 1
z(N ) 0.2608696 1 1 1
3
I: Xi 0 -1 - 1 - 1
i =l
136 3 Controllability and Optimizati on

Aft er three simplex steps we arr ive at the t abl eau .

v (S ) - z (S i) - z(SJ) -z (N )
Xl 0.2 1 0 0
z(Sr ) 0.7. .. 1 - 1 0
z (S r ) 0.1613526 o 1 -1
X2 0.02 . . . -1 1 0
z (Si ) 0.0690449 - 1 0 - 1
z( SD 0.161 3526 1 0 -1
X3 0.0386474 o -1 1
3
I: X i 0.2608696 o o 1
i= l

A solution of the problem of maximizing (**) su bjec t t o (***), (3.27) and (6 s )


for all S ~ N , S =I- (I) is given by the first column of the tablea u to gether with
3
z( S i) = z (S J ) = z (N ) = 0 . Further we ob t ain I: Xi = 0.2608696 = v (N ) .
i= l

3.5 A Dynamical Method for Finding a Nash


Equilibrium
3 .5.1 The Goal-Cost-Game

We consider n players PI , . .. , P n that persue n goa ls Xi E jRn i for i = 1, .. . , n.


Every player Pi has m i actions A il , .. . , A imi availa ble whi ch he ca n take in
order to achieve his goa l Xi . Let us ass ume t hat act ion A i k requires cost s in
the amount of Uik 2: 0 whi ch can be chose n by P i . Let us further assume that
every player P i ca n spe nd at most costs in the amount of C; > 0 which leads
to the const raints

L
m i-

Uik ::; C; for i = 1, . . . , n . (3 .40)


k=l

Finally, we assume that t he goa l vector of player P i , if every player P j has


chosen his cost vector U j = (u j" . . . , Uj mj ) for j = 1, . . . , n , is given by
X i ( UI, ... , u n ) where Xi : jRm, X jRm" -+ jRn i is a given vector function . That
means that the goal vector of eac h player dep ends on the cost vectors of all
players.
T he n the problem the players have to solve cons ists of findin g n cost vectors
Ui E jR~i = {u E jRm i I U 2: e m J for i = 1, ... , n which satisfy (3.40) and
solve the system of equat ions given by

(3.41 )
3.5 A Dynamic al Method for Finding a Nash Equilibrium 137
n
L: m i
For every player Pi we define a cost function i : lRi = l -7 lR by

i (Ul , ... , un) = Il xi(Ul, ... , un) - xill ~


for i = 1, . . . , n where II . II denotes the Euclidean norm in lRn i
I/.

L: rn ,
Instead of finding a solut ion U E lR+ 1 of (3.41) which sat isfies (3.40) the
players now try to find a Nash equilibrium, i.e., to find vectors uT E lR m i with
1n i

u T 2: 8 m; and 2::: uTk S C; (3.42)


k= l
for all i = 1, .. . , n
such that
*
'Pi(Ul"",U _ 'Pi (ui*" " ,ui-l,Ui,ui+l
n*) < * * " ",u n*)
for all ui E lR with
m i

m ,

Ui 2: 8 m; and 2::: Uik S C; (3.43)


k=l
for all i = 1, . . . , n .
In order to solve this problem we shall make use of

3.5.2 Necessary Conditions for a Nash Equilibrium


n
We assume that Xi E C 1 (R'" , lRn ; ) for every i = 1, . . . , n wher e m = L m j'
i=l
Then it follows that 'Pi E C 1 (lR m , lR) for every i = 1, . . . , n. A necessary
condit ion for a Nash equilibrium is then given by the well known multiplier
rule:

For every i E {1, . .. , n} t here exist numbers '\ > 0 and Aik 2: 0 for k
1, . .. , mi such that

and
AikuTk = 0 for k = 1, .. . , mi .
If every 'Pi = 'Pi (Ul, . . . , Un), i = 1, . .. , n is convex, then the multiplier rule is
also sufficient for a Nash equilibrium.
138 3 Controllability and Optimizati on

For , if (Ul, ... , u n) E JRm is given with


mi

Ui 2:: 8 m ; and L Uik ~ ct for all i = 1, . . . , n ,


k= l

then it follows that , for every i E {I , . . . , n},

'Pi(Ul* " " ,Ui_l,Ui


* *
,Ui+l"" ,U n* ) - 'Pi (Ul"
* " ,U n* )
m i

2:: L 'PiUik (u i , .. . , U~)( Uik - U:k)


k=l
7ni

= L (- Ai + Aik)(uik - U:k)
k=l
m i. rn , m i.

= -x, L Uik + Ai L U:k + L AikUik


k=l k =l k= l
mi m i.

= Ai(ct - L Uik) +L AikUik


k=l k =l
2:: 0 .

This implies that (ui , .. . , u~) with (3.42) for i = 1, , n is a Nash equilib-
rium, if it satisfies the mu ltiplier rul e. Now let (ui , , u~) satisfy (3.42) for
all i = 1, . . . , n and be a Nash equilibrium, hence sat isfy the mu ltiplier ru le.
Let , for som e i E {I , . . . , n} ,
m;

L U:k < c; (3.44)


k=l

Then it follows t hat Ai = 0 which imp lies

for all k = 1, . .. , mi .
(3.45)

Let , for some i E {I , . . . , n},


m i

L U:k = c; (3.46)
k=l

T hen it follows that

'PiUik (ui , . . . , u~ ) ~ 0 for all u:k > 0 . (3.47)


3.5 A Dyn ami cal Met hod for Finding a Nas h Equilibrium 139

3.5.3 The Method

The two necessary condit ions (3.45) and (3.47 ) for a Nash equilibrium give
rise to an iter ative procedure for finding it . By this procedure a sequence
(U1 (t) , . . . , un(t)) tENo with

Ui (t ) 2 8 m i an d L uid t ) ::::; ct for all i = 1, . .. , n (3.48)


k=l

is const ructed as follows :


Let u(t) = (U1(t) , . .. , un (t )) E jRm with (3.48) be given for some t E No and
let i E {I , . . . , n} be such t hat

L Uik(t) < c;
m i

(3.49)
k=l

Then we define the set


K, = {k E {I , .. . , md I <PiUik (U1(t) , . . . , un(t )) < 0 or
(3 .50)
Uik(t) <Piuik(U 1(t), . . . ,un(t)) #O} .
If K, is empt y, then (3.45) is satisfied for (U 1(t) , . . . ,Un(t)) instead of (ui , . .. ,
u~). In this cas e we put Ui(t + 1) = u;(t ). Otherwise we put

with '\ > 0 and


, if k E K, ,
(3.51)
, if k rf- K, .

If Uik(t) = 0 for some k E {l , . . . , m, }, then it follows that hidt) 2 0 and


hen ce Uik(t + 1) 2 0 for all Ai > O.
Ther efor e we define the set

K i1 = {k E {I , . .. , md I Uik(t) > 0 and hik(t) < O} (3.52)

and put
Ail = . { -uid
min
t) I k
hi k( t ) E
K}
il , 1
if K i 1 -L
/
0
(3.53)
{ +00 , else .
Then it follows that Ail > 0 and
Uik(t + 1) = Uik(t) + Aihik(t ) 2 0
(3.54)
for all k = 1, . . . , m ; and all Ai E (0, Aid .
140 3 Cont rolla bility and Optimization

If we put
m j

, if L hik(t) > 0
k= l

, else ,

mi rrl, i

L Uik(t + 1) = L(Uik(t) + Aihik(t)) :S C; for all Ai E (0 , Ai2] ' (3.55)


k=l k=l

Now let i E {I , . . . , n } be such t hat


m i

L Uik(t) = C; . (3.56)
k=l

Then we define the set

K, = {k E {I , . . . , md I Uik(t) > O} and 'PiU;k (Ul(t) , . . . , un(t)) > O}. (3.57)

If K, is empty, we pu t Ui(t + 1) = Ui(t ), since (3.47) is satisfied for


(Ul(t) , .. . , un (t )) inst ead of (ui , . . . , u~ ) .
Otherwise we define

with Ai > 0 and

(3.58)

Then it follows that

L Uik = L(Uik(t) + Aihik(t))


k=l k=l

< L Uik(t) = C;
k=l
for all A > 0 .

If we define Ail by (3.53) with K il given by (3.52), t hen it follows t hat Ail >0
and (3.54) is sat isfied.
In both cases (3.49) and (3.56) it is t rue t hat
mi
Lhik(t) 'PiU;k (Ul (t ), .. . , un(t )) < 0,
k=l
3.6 Evolution Matrix Games 141

if the set K; given by (3.50) and (3.57) , respect ively, are non- empty.
This implies that the vector hi(t) defined by (3.51) and (3.58), respectively,
determines a feasible descent dir ection. Therefore we put A: = min(Ail ' Ai2)
in the case (3.49) and A:
= Ail in the case (3.56) , det ermine .xi E [0, A:J such
that

<{Ji (UI(t ), .. . ,Ui(t) + .xih i(t) , ,Un (t )) :s


<{Ji (Ul(t ), ,Ui (t ) + Aihi(t) , .. . , un(t))
for all Ai E [0, A:l ,

and put

3.6 Evolution Matrix Games

3.6.1 Definition of the Game a n d Evolutionary Stability

We consider a population whose individuals have a finite number of strategies


h,I2 , .. . ,In in order to survive in t he struggle of life. Let u; E [0,1]' for
every i = 1, . .. , n, be the probability for the st rate gy I, to be chosen in the
population. Then t he corr esponding state of the population is defined by the
n
vecto r U = (Ul' . . . , un ) where LUi = 1. T he spa ce of all population st ates is
i= l
given by the simplex
n
Ll = {u = (U I, . . . , un) I o:s UI :s 1 , LUi = I} .
i =l

Every vector e, = (0, . .. , 0, Ii , 0, . . . , 0) , i = 1, ... , n , denotes a so called pure


population st ate where all individuals choose t he st rategy h All the other
st ates ar e called mixed states. If an individual that chooses st rate gy Ii meets
an individual that chooses strat egy I j , we assum e that the Ii-individual is
given a payoff aij E lR by the Irindividu al. All t he payoffs then form a matrix

A -- (a1.J.) t,. ] .= I , . . . ,n

the so called payoff matrix which defines a mat rix game .


The expecte d payoff of an I i-individu al in the population state U E Ll is
defined by
n
L aijUj = ei AuT .
j=l
142 3 Controllability and Optimization

If two population st ates u, v E Ll are given, then the average payoff of v to u


is defined by
n
2:::: aijViUj = v A u T .
i ,j = l

Definition 3.1. A population state u* E Llis called a Nash equilibrium, if

u A u * T :::; u * Au*T for all u E Ll .

In words this mean s that a declin ation from u* does not lead to a higher
payoff.
Definition 3.2. A Na sh equilibrium u* E Llis called evoluti onary stable, if
UA U*T = u*AU*T f or some u E Ll with u =I- u* im plies that uAuT < u *AuT .
In words this mean s that, if a cha nge from u* to u lead s to t he sam e payoff,
u cannot be a Nash equilibr ium.
Let us demonstrate these definitions by an exa mple. We consider a population
with two st rategies Ir and 12 . In dividuals t hat choose Ir are called pigeons
a nd those who choose 12 are called hawks. If a pigeon meet s a pigeon they
men ace each other without seriously fighting until one of t hem gives in . If a
pigeon meets a hawk , it runs away and is not hur t. If two hawks meet each
other, they fight until one of them is seriously hurt and has to give up or is
dead . Let us ass ume that in each case the winner is given V > points and
the loser in a fight of hawks is given -D points where D > 0. This leads to
the payoff matrix

A= (y V )
V; D .

On e ca n show that t he pure popul ation state e2 = (0,1) where all individuals
behave like hawks is evolutiona ry stable, if V 2 D (Ex ercise). If V < D , then
t he pure population state e2 = (0,1) is not even a Nash equilibrium. On the
contrary we have

But also the pure population state el = (1,0) is not a Nas h equilibrium.
In t his case we have
3.6 Evolution Matrix Games 143

The case V ~ D is a special case of t he following situation:


Let, for som e k E {l , . .. ,n} ,

akk ~ ajk for all j = 1, . . . , n


and (3.59)
akk = ajk =:} aki > aji for all i =1= k .

Then , for every U E ,1, it follows that

n
uAe[ = L Ujajk <
j=1

Let uAe[ = ekAe [. Then

ajk = akk for all j with Uj > 0 , hence


(3.60)
aki > aji for all j with Uj > 0 and i =1= k .

This implies
n n n
ekAuT - uAuT = Lakiu i - L L ajiUjU;.
i= 1 j=1 i = 1
n n (3.61)
= L L(aki - aji)UjUi = L L(aki - aji)UjUi > 0
j=1 i= 1 Uj >0 ii'k

and shows that u* = ek is evolutionary st able. Evolutionary st ability can be


characterized by a condition which is useful for theoretical purposes. To derive
this condition we st art as follows:
Let u , u* E ,1 be given with U =1= u* and let E E (0,1 ]. Then we define
WE: = (1 - E)U* + EU and conclude that

E( WE: 'WE:) = (l- E)E(u *, wE:) +E E (u, wE: )

where
E(u , v) = uAv T for any u, v E ,1 .
From this we obtain the equivalence

E( WE: 'WE:) < E(u*, wE:) {::} E(u ,wE:) < E(u*,wE:) . (3.62)

Now let u* E ,1 be evolut iona ry stable and let U E ,1 be chosen arbitrarily.


Then we have
E(u ,u*) ::; E(u*,u*) .
144 3 Cont rollability and Optimization

1) Assume that E(u , u*) < E( u* , u* ), u =J u*. T hen there is a relatively op en


set Vu <;:; L1 with u* E Vu such t hat

E(u ,v ) < E(u* , v ) for all v E Vu with v =J u" .

Now there exists some IOu > 0, IOu ::; 1 such that

WE; = (1 - c)u* + IOU E Vu for all 10 E [O,cu ] .

This impli es

E(U,WE; ) < E (U * ,WE;) for all 10 E (0, IOu ] .

Using the above equivalence we obtain

2) Assume that E( u, u*) = E(u* , u*) , u =J u*. Then it follows that E(u , u*) >
E( u, u) which impli es

E(U,WE;) < E(U* , WE; ) } E (WE; ,WE;) < E( U* ,WE;)


(3.63)
for all 10 E (0,1 ].

Result : If u* E L1 is evolut ionary stable, then , for every u E L1 with u =J u* ,


there exists some IOu E (0,1 ] such t ha t

(3.64)

where
WE; = (1 - c)u* + IOU .
Conversely let u* E L1 be such t hat for every u E L1 with u =J u* there
exist s som e IOu E (0,1 ] such that (3.64) is satisfied . Then it follows from the
equivalence (3.62) that
(3.65)
and in t urn for 10 ----t t ha t
E( u* ,u*) 2:: E(u ,u* ) .

Let E (u* ,u* ) = E(u ,u*) . Then it follows from (3.64) t ha t

(1- c)E(u* ,u*) + cE(u* ,u) > (1- c)E (u*, u* ) +cE (u,u )
which implies E(u ,u) < E(u* ,u) .
3.6 Evolution Matrix Games 145

Result: A population state u* E ,1 is evolut ionary st abl e, if and only if for


every u E ,1 with u -:f- u * there exist s some Cu E (0,1] such that the condit ion
(3.64) is satisfied.
Now let u* E ,1 be evoluti ona ry stable and let

u7 > a for all i = 1, . . . , n . (3.66)

Then it follows from

and

that

whi ch impli es
UAU*T = u* AU*T for all u E ,1

and in turn that

uAu T < u *AuT for all u E ,1 with u -:f- u* .

This shows that u* E ,1 is the only evoluti ona ry st able st ate. Let us define,
for every u E ,1 , a support set by

5 (u) = {i I ii ; > a} .

Then it follows by the arguments given above that

for all i E 5(u *) (3.67)

which impli es

UAU*T = U*AU*T for all u E ,1 with S(u) <:: 5(u*)

and in turn

uAuT < u" AuT for all u E ,1 with u -:f- u* and 5(u) <:: 5(u*) ,

if u* E ,1 is an evolut iona ry st able state .


Now let u E ,1 be such that 5(u) Cl 5(u*). Then there exist s some
i E {l , . . . ,n} such that U i > a andu: = o.
146 3 Controllability and Optimization

If
Ui :::: u; for all i = 1, . . . ,n ,
t hen it follows from
n n
L Ui = L U; = 1
i= l i=l

t hat u = u * which is impossible.


Hence there exist s some i E {1, .. . , n} with u.; < u; . If we define

and put
v = u * + '\(u - u*) ,

then it follows that

v E C = {u E L1 I :3i 1 with Ui, >


and u;, =
and :3i 2 with Uiz = o} .
Converse ly, if v E C is given and we define, for any ,\ E (0, 1], u = u* + ,\(v -
u *), then u E L1 and S(u) ~ S(u*) .
Now for every v E C there is some Cv E (0,1 ] such that

wE:Aw~ < u * Aw~ for all e E (0, e; .]


where
WE: = (1 - c)u* + cV = u * + c(v - u *) .


Since C is compact and Cv, v E C, can be chosen continuously, t here exist s
som e E > with E = min e; and therefore
v EC

wE:Aw~ < u* Aw~ for all c; E (0, E] .

If we define
c;* =- E- -
min
v EC
Ilv- u* 112 '

then it follows that

uAu T < u * Au~ for all u E L1


with S(u) ~ S(u*) and Ilu - u*lh < c;* .
Summarizing we obtain t he
3.6 Evolution Matrix Games 147

Result: If u * E ,1 is evolutionary stable, then there exist s som e s " >0


such that

uAuT < u * AuT for all u E Ll with u =I- u "


(3.68)
and Ilu - u * 1I 2 < c:* .

Conversely let u * E ,1 and s" > 0 be given such that (3.68) is satisfi ed .
If we then t ake any u E ,1 with u =I- u" and define, for c: E (0,1],

We: = (1 - c: )u* + c:u ,

then W e: E ,1 , W e: =I- u * and

for e < min (1 , Ilu~~' II) = C: u E (0,1) which impli es

and shows that (3.64) is satisfied and in turn that u* is evolutionary st able.

Result: u * E ,1 is evolut iona ry st able, if and only if there exist s some s" > 0
such that (3.68) is satisfied.
The condit ion (3.68) says that an evolut iona ry stable st ate is locally the only
evolut ionary st abl e state.

3.6.2 A Dynamical Method for Finding an Evolutionary Stable


State

Let us assume that


aij ;::: 0 for all i , j = 1, .. . , n
and (3.69)
uAuT > 0 for all u E ,1 .

Then, st arting with some uO E ,1 , we define a sequence (U k)kE f\lo of population


state s by
148 3 Controllability and Optimization

Obviously uk E ,1 implies t hat u k+ 1 E ,1 . If we define a map fA : ,1 ----+ ,1


by
e A uT
fA (u ), = 'A T Ui for i = 1, . . . , nand u E ,1 , (3.70)
u u
then

if and only if
T T
ei Au* = u* Au* for all i E S(u*) . (3.71 )
Sinc e in S ection 3.6.1 we have shown that t his condit ion is necessary for
u * ELl to be evolut ionary stable, it follows that u* E ,1 is a fixed point of fA ,
if u* is evolut ionary stable. This even hold s true, if u* is a Nash equilibrium ,
sin ce the condition (3.71) is also necessary for u* bein g a Nash equilibrium .
This gives rise to the question under which condit ion a fixed point of fA is a
Nash equilibrium. A first answer to t his question is
Lemma 3.2. If u* E ,1 is a fixed point of fA and

u; > 0 [or all i = 1, ... , n , (3.72)

then u* is a Nash equilibrium .

Proof. u* E ,1 is a fixed point of f A, if and only if (3.71) holds true. Sinc e


S(u*) = {1, ... , n } this implies
uAu*T = u* Au *T for all u E ,1

which shows t hat u * is a Nash equilibrium.

o
A second answer to the above question is
Lemma 3 .3. If u* E ,1 is an attmctive fixed point (i. e. a fixed point which is
an aitra ctor) , then u* E ,1 is a N ash equilibrium .

Proof. If u* satisfies (3.72), the asse rt ion follows from Lemma 3.1.
If S( u *) =1= {1, . .. , n}, then it follows that (3.71 ) is satisfied . If we show that

eiA u*T :::; U*AU*T for all i E {l , .. . , n } \ S(u* ) ,

then it follows that u* is a Nas h equilibrium .


3.6 Evolut ion Matrix Games 149

Let us assume that, for some k E {I , . .. , n} \ S(u*) , it is true that

(3.73)

Since g(u) = ek A uT - uAu T is cont inuous, there is some Cl > 0 such that
(3.74)

Since u* is an at tract or, there is some C2 > 0 such that

lim f~(u) = u * for all u E Ll with Il u - u *11


t --->oo
< C2 . (3.75)

This impli es for every v E Ll with Ilv - u* 112 < C t he existe nce of some T o E N
such that

Il v(t) - u* II < e for all t 2': To where c = min( cl ' c2)


a nd
v (t) = f~ ( v) .
From (3.73) it follows that

Vk(t + 1) > Vk(t ) > 0 for all t E N. (3.76)

On the other hand (3.75) implies that

lim Vk(t)
t-s- cx:
= u k = 0 , since k E S (u *) ,

which cont radict s (3.76) .


Hence the ass umpt ion (3.73) is false which completes the proof.
o
The inversion of Lemma 3.2 is in general false which can be shown by a
counterexa mple (see [21]).
We ca n, however , prove
Theorem 3.6. If a pu re population state is evolutio n ary stabl e, th en it is an
asymptotically stabl e fixed point of f A (3.70).

Proof Let ek ,for some k E {I , . . . , n } , be evolut iona ry stable.


Then by the second last result of S ection 3.6.2 there exists some s" > 0 such
that

u A uT< ek A u T for all u E Ll with u i= ek


and Ilu - ekl12 < c* .
150 3 Controllability and Optimization

Further ek is a fixed point of f A as shown above. In ord er to show that


{ed is asympt ot ically stable we verify the assumptions of Th eorem 1.3.
Let
U = {u E L\ I Ilu - ekllz < E* } .

Further we define
E*
G = {u E L\ I IUk - 11< -}
n
.
Then G ~ L\ is op en in L\ , ek E L\ and for every U = (Ul l " " un) E G it
follows that
n *
LUi = I -Uk < ~
n
i = 1
i :;i: k

which implies
E*
o:: : Ui < -n for all i E {I, . . . , n} , i =1= k ,

hence
E* *
Ilu - ekll z< yTi <E .
T herefore G ~ U. Further it follows for every U E G that

whi ch implies fA(G) ~ G.

If we define a cont inuous fun ction V : L\ --4 JR by

V(U)=I- Uk foru EL\,

then it follows that

V (JA (U)) - V(u) = Uk - f A(U )k ::::: 0 for all U E G .

Hence V is a Lyapunov fun ction with resp ect to f A on G.


3.7 A General Cooperative n-P erson Goal-Cost-G ame 151

Further it follows that

v (u) 2: a for all u E G and (V (u) = a {:? u = e) ,

i.e. , V is positive definite with respect to ek. Finally we have

V(JA(U)) - V(u) < a for all u E G with u =1= ek .

Thus all the assumptions of Th eorem 1.3 are sat isfied and henc e {ed is
asymptotically stable.

3.7 A General Cooperative n-Person Goal-Cost-Game

3.7.1 The Game

In S ection 3.4.1 we have conside red an n-person goa l-cost-game that can be
gen eralized as follows: Given n players who persue n goals whi ch are given by
an n-vector

In order to achieve these goals every player has to spend a certain amount of
money, say Xi 2: a for the i - th player. Ev ery player P i , i = 1, . .. , n , ca n be
assigned a goal valu e Ii whi ch dep ends on the cost values of all players and
can be described as a fun ction Ii : lR n ----+ lR for i = 1, .. . , n.
The requirement that all players reach their goal is assumed to be given by a
system of inequalities of the form

(3.77)

wh ere
Xi 2: a for i = 1, .. . , n . (3.78)
Ev ery player is, of cour se , interested in minimizing his own cost subject to
(3.77), (3.81) . This, however , is in general simultaneously impossible. There-
fore we assume in a first st ep that the players coope rate and minimize
n
s(X) = L:>i (3.79)
i= l

subject to (3.77) , (3.78).


152 3 Controllability and Optimization

Let us assume that x E jRn is a solut ion of this problem . If we then choose,

for any i E {I , . . . , n} , some Xi :::: such that

it follows that
n n
L Xj ::; L Xj + Xi
j=l i = 1
i::p i

and therefore Xi ::; Xi .


Thus every solution of (3.77) , (3.78) which minimizes (3.79) is a Nash equi-
librium, i.e. , if the i - th player declines from his choice of cost s whereas all
the others stick t o it , he can at most do worse.

3.7.2 A Cooperative Treatment

Now we go one st ep further and define a coope ra t ive n -person gam e in the
following way: Let N = {I , ... , n}. Then , for every non- empty subset S of N ,
we define
f s( x) = L fi(X) , X E jRn , and os = Lbi
iES iES

and consider the problem of minimizing


s(X) = 2: Xi (3.79)
i EN

subject to (3.81) and


(3.77)s
Every non- empty subset S of N can be considered as a coalition in which the
players join by adding their inequalities and minimizing (3.79) . If we define,
for every S ~ N ,

inf { 2: Xi I X E jRn satisfies (3.77)s , (3.78)) , if Sis non-empty,


v(S) = i EN
{ 0, if S is empty,
(3.80)
then v : 2N ---+ jR+ is the payoff function of a coope ra t ive n-person game.
For the following we assume that , for every non- empty S ~ N , there exists
som e X E jRn with (3.77)s , (3.81) and 2: Xi = v(S ).
iEN
3.7 A General Cooper ative n -Person Goal-Cast-Gam e 153

The question now aris es under which condit ions the grand coa lit ion N is
st abl e which means that , if the players decide for a gra nd coalition there is
no incentive for them to declin e from t his decision. This is cert ainly the case,
if there is a divis ion {X1 , . . . ,X n } of v(N) , i.e.,

Xi :::: a for i = 1, . . . , n (3.81)

and n

2:: Xi = v(N) (3.82)


i= l

su ch that
2:: Xi :::: v (S ) for all non-empty S <;;:; N . (3.83)
iES

If this condit ion is sat isfied, the players have to spend at least as much as
they have to spend in the gra nd coa lit ion , if they choos e anot her one.

3.7.3 Necessary and Sufficient Conditions for a Stable Grand


Coalition

Theorem 3.7. Th ere exists a vector X E jRn with (3.81), (3.82) and (3.83),
if and only if fo r every collection B <;;:; 2N such that fo r every S E B th ere
exists a weight "is :::: a with

2:: "is = 1 for all i = 1, . . . , n (3.84)


s E 13
; E S

it is tru e that
v (N ) :::: 2:: "isv (S ) . (3.85)
SE !3

Proof. 1) Let x E jRn with (3.81) , (3.82), (3.83) be given .


Further let B C 2N be a collect ion such that for every S E B there exist s
a weight "is :::: a with (3.84). Then it follows that

2:: "isv (S ) :::: 2:: "is 2::


SE !3 SE !3 i E!3
Xi = 2:: ( 2::
i EN
. zs "is )
8 E
Xi

i E S

= 2:: Xi = v(N ) .
iEN

Hence (3.85) is sat isfied.


154 3 Controllability and Op timization

2) The proof of the sufficien cy part is t he sam e as that of Th eorem 3.5.

o
As a gen er alization of Lemma 3.1 we ca n prove
Lemma 3.4. Let us assume that , for every i E {I , . .. , n }, for every finite
sequen ce of vect ors Xl , . . . , x m E lR. n and numb ers Al ~ 0, ... , Am ~ 0 it is
tru e that

Further let us assume that


f".w (x ) ~ f s (x)
for all non- empty S c:;; N and all x E lR. with

Xi ~ 0 f or i = 1, ... , n .

Th en fo r every collection E c:;; 2JV such that for every S EE th ere exis ts a
weight 'Ys ~ 0 with (3.84) it is true that (3.85) is satisfied.

P roof. Let E c:;; 2JV a collect ion as required .


Then it follows that

Now let, for every S EE,

v (S ) = X1 (S ) + ... + xn (S )

wh ere
Xi (S) ~ 0 , i = 1, . .. , n ,
f s(x(S)) ~ os .

Then
n
L 'Ysv (S ) = L 'Ys L Xj(S)
SE 6 SE 6 j=l

wh ere
Xj = L 'YSXj (S ) for j = 1, . .. , n .
SE 6
3.8 A Co ope ra t ive Treatmen t of an n-Per son Cost-Game 155

Then it follows that


Xj 2 a for j = 1, . . . , n
and

fN (X) = f N (2: I'SX(S )) 2: I's f N (X(S ))


2
SEB SEB
2 2: I's fs( :r(S)) 2 2: rtsbs = bN
SEB SEB
which implies
n
v (N ) ::; 2: Xj = 2: I'sv (S ) ,
j=1 SEB
i.e., (3.85) is sat isfied.

3.8 A Cooperative Treatment of an n-Person Cost-Game

3 .8.1 The Game and a First Cooperative Treatment

We consider n players Pi , i = 1, . . . , n, n 2 2, who play a ga me in which


every player Pi has at his disp osal a (non-emp ty) set Vi <:;; IRmi of st rategies.
They ca n, however , not necessaril y choose their st rategies ind ep end ently of
each ot her. If player Pi choo ses 11i E Vi for i = 1, . . . , n, then the n-t upel
n
(U1 ' . . . , un ) is requi red to lie in a non-emp ty subset V of IT Vi which is
i= 1
ass umed t o be of the form

nVi
n n
V = where v; c ITVJ
t -
for i = 1, .. . , n .
i= 1 j=1
n
Further every player Pi is ass igned a cost fun ction 'Pi U, --+ IR+ which IT
j=1
he wants t o minimi ze on V . T his, however, is in general impossibl e simulta-
neously. Therefore the player s could minimize in
156 3 Controllability and Optim ization

a first st ep the function


n
'P (U) = L 'Pi (U ) for U E U .
i= l

If U E U is such that

'P (U) :S 'P (u) for all U E U ,

t hen it is eas y to see that ii is a so called P ar eto optimum, i.e., if there is any
U E U su ch that
'Pi (u) :S 'Pi (u) for all i = 1, .. . , n ,
t hen it necessarily follows that

'Pi (u) = 'Pi (u ) for all i = 1, . . . , n .

In Section 3.4.1 we have considered the following special case :


Let m i = 1 and U, = lR+ for every i = 1, . . . , n . Further let 'Pi : lR~ -.., lR+ be
given by

and let Vi be given by


n
Vi={ u E lR~ 1 L Cij Uj ~bd fori =l , . . . , n .
j= l

Then

n Vi .
n n
U = {u E lR~ IL CijUj 2: b, for all i = 1, . .. , n } =
j = l i= l

In this case the minimization of


n
'P (u ) = LUi on U
i= l

also lead s t o a Na sh equilibrium u E U (see Section 3.4.1) .


3.8 A Cooperative Treat ment of an n-P erson Cost-Game 157

3.8.2 Transformation of the Game into a


Cooperative Game

For every non-empty subset S of N = {I , .. . , n} we choose a non- empty set


Us ~ Ui E S Vi (with a property to be specified lat er ) and define

v(S) = {~nf{cp(U) I U E Us } , if S is non-empty ,


, if S empty.

Then v : 2 N --. 1R+ is t he payoff funct ion of a cooperative n-person game.


In the above sp ecial case we define, for every non-empty S ~ N ,

Cj (S) = L:>ij for j = 1, . .. , n and b(S) = I: b i


i ES iES
and put
n
Us = {u E 1R~ I I : Cj(S)Uj 2: b(S)} .
j= l

Then it follows t ha t
Us ~ U Vi .
iES
Let us assume that U is non- emp ty. Then every Us is non-emp ty and , since
n
cp (u ) = I: Ui 2: 0 for all U E Us ,
i= l

ther e exists some Us E Us such t hat

cp (us ) = v(S) = inf{ cp(u) I U E Us } .

The question now arises under which condit ions t he grand coalit ion N is
stable which means that , if the players decide for the gra nd coa lit ion, there is
no incentive for them to decline from this decision .
This is certainly the case , if there is a division {Xl , . . . ,xn } of v (N ), i.e.,

Xi 2: 0 for i = 1, . .. , n (3.86)
and
n
I:.Ti= v(N ) (3.87)
i=l

su ch that
I: Xi :::; v(S) for all non-emp ty S ~ N . (3.88)
iES
Every such divi sion guarantees every player P i a cost Xi :::; v({i}) such that
for every coa lit ion S ~ N , S -=I- 0 t he joint cost L Xi is at most as high as v(S).
i ES
158 3 Cont rolla bility and Optimization

3.8.3 Sufficient Conditions for a Stable Grand Coalition


We start with
Theorem 3.8. Th ere exists a vector x E jRn with (3.86}, (3.87}, (3.88}, if
and only if the following conditi on is satisfied: If for even) non- empty set
S C;;;; N there is a weight vs ~ 0 such that
'Ys = 1 for all i = 1, . .. , n , (3.89)
N
"' E 2 \ {0)
i_ E S

then it follows that


v(N )::; L 'Ysv (S) . (3.90)
S E2 N \ {0}

The proof of the impli cation ((3.89) =? (3.90)) =? (3.86) , (3.87) , (3.88) is the
same as that of Th eorem 3.5. The pro of of t he impli cation (3.86) , (3.87) , (3.88)
=? ((3.89) =? (3.90)) has been given in Section 3.4.5. In order to apply this
theor em to the cooperat ive n-person ga me defined in Section 3.8.2 we make
the following assumptions:
1) For every non- empty set S C;;;; N let ve ~ 0 be a weight such that (3.89)
is satisfied . Then for every Us E Us it follows that

L 'YsUs E UN .
SE 2 N \ {0}

2) For every non- empty set S C;;;; N t here is some Us E Us with cp (us ) = v(S).
n
L: rn ,
3) For every i E N and every finite sequence u 1 , . . , u m E jRi=l and num-
bers Al ~ 0, . .. , Am ~ 0 it is true that

Then we can prove


Theorem 3.9. If the assumptions 1}, 2} and 3} are satisfi ed, then there is a
vector x E jRn with (3.86) , (3.87}, (3.88) which means that the grand coalition
N is stable.

Proof. Let, for every non-empty set S C;;;; N , a weight "ts ~ 0 be given such
that (3.89) hold s true. Then it follows with B = 2N \ {0} t hat

L L ~ (L ~
S E13
'Ysv (S) =
SE 13
'Yscp (us ) cP
SE 13
'---v---'
'YSUS ) v(N ) , q.e.d. .

EUN
o
3.8 A Coop erative Treatment of an n-Person Cost-Game 159

In the above special case assumpt ion 2) is satisfi ed , if U is non- empty.


Assumption 3) is obviousl y satisfied. Con cerning assumpt ion 1) we can prove

Lemma 3.5. If
Cii > 0 for i = 1, . .. , n
and
Cij ~ 0 for i , j = 1, . . . , n , i -I- j ,
(which im plies that U is non- empty), then assumption 1) is also satisfied.

Proof Let H = 2 N \ {0} and for every S E H let t here be given a weight
'YS~ 0 such that
n
L 'YS = 1 for i = 1, . . . , n .
SE B
i E S

Then it follows that

L 'Ys b(S ) = L 'Ys Lbi = t ( L 'YS ) b, = t bi = b(N) .


SE13 SE13 iES i =l s E 13 i= l
i. E S

Now let , for every S E H, be given some Us E Us . Then it follows that

t
j= l
Cj (S ) ( L 'YSUS )
SE13 i
= L 'ts ( t Cj (S )(US )j)
SE 13 j=l
~ SE13
L 'Ys b(S ) = b(N) ,

henc e

which impli es

t , Cj( N) (E OSUS ) ~ j beN)

and hence L 'YSUs E UN , since L 'Ysus E lR.+. . This complet es the


SE13 SE 13
proof.
D
160 3 Controllability and Optimization

3.8.4 Further Cooperative Tre a t m e nt s

a) For every non- empty set 5 ~ N we define

Us = nVi
iES
and put
v(5) = inf{ cp(u) I u E Us}
where again
n
cp(u) = L cpi (U) for U E IT U, .
iEN j= 1

Further we aga in put v(0) = O.


From Us ~ Vi for all i E 5 it follows that

v({i}) :S v(5 ) for all i E 5 .

Let , for every non- empty set 5 ~ N,

es = v(5) - -151
1 'L..-
" v({ t. } ) C~ 0) .
iES
If we define
1 .
Xi = - (v( {t}) + cN) for i = 1, . . . ,n ,
n
then it follows that
Xi 2: 0 for i = 1, . . . , n
and
n 1 n 1 n
L Xi = - L(v({i}) +cN) = - L v({i }) + CN = v(N) ,
i=1
n i= 1 n i= 1

henc e (3.86) and (3.87) ar e satisfied .

Assumption:

CN - 1 {cs +
-:S (-1 L1
- -)v({ i})} for all non-empty set s 5 ~ N .
n 151 iES 151 n
(3.91)
3.8 A Cooperative Treatment of an n-Person Cost-Game 161

For every non-empty set S ~ N it follows that

L Xi = L.!.(V({i}) + EN)
iES iES n
1" . 1" . 1" .
::; ~ ~ v ( { ~ }) + lOS + 1ST ~ v ({ ~ } ) -~ ~ v({~})
, v
~

v(S)

= v(S ) ,

Result: If the condit ion (3.91) is satisfied , t hen the grand coalition is sta-
ble .

b) For every non- empty set S ~ N we define

Us = U Vi .
iES
Then it follows that , for every non- empty set S ~ N,

V(S ) = inf{ ep(u) I u E Us} = ~iginf{ ep(u) I u E Vi = U{i}}


= min v({i }) .
iES
This implies

v(N ) ::; v(S ) for all non- empty set s S ~ N .

If we define
Xi = .!.v(N) for i = 1, ... ,n ,
n
then (3.86) and (3.87) ar e satisfied and for every non-empty set S ~ N we
obtain

L Xi = ~ v(N) ::; v(S) , i.e. (3.88) is also satisfi ed.


iES n
c) We again define, for every non-empty set S ~ N,

Then we put

v(S ) = inf{ eps(u) I u E Us }, if S ~ N is non- empty,


162 3 Controllability and Optimization

where
'Ps(U) = L 'Pi(U) , U E Us .
iES

Let us assume t hat, for every non- empty set S <:;; N, t here exists a u(S) E
Us such t hat 'Ps(u(S)) = v(S).
Since
Us <:;; Vi = U{i } for every i E S ,
it follows t hat
v( {i}) :s: 'Pi (U(S)) for all i E S .
T his implies
L V({ i }) :S: 'Ps(u (S)) = v(S) .
iES

If
L V({i }) = v(N ), (3.92)
i EN

t hen (v( {I}) , ... , v( {n}))T satisfies (3.86), (3.87), (3.88). Converse ly, if this
is the case, t hen (3.92) must hold true.

R esult: Under the above ass umption t he condition (3.92) is necessary and
sufficient for (v( {I}), ... , v( {n})f to satisfies (3.86), (3.87), (3.88) .
If (3.92) is satis fied, t hen it is easy to see that v({ l}) , . . . ,v({ n } ) is t he
only division of v(N ) which satisfies (3.88) .

3.8.5 P areto Optima a s C ooperat iv e Solutions o f the Game

We consider t he non-cooperat ive game in Section 3.8.1 wit hou t ass uming t hat
n Vi wit h
n n
t he set U <:;; TI U, which restricts t he strategies is of t he form U =
i=l i =l
n
Vi <:;; TI Uj , for i = l , . . . ,n.
j=l
We have seen that t he minimization of
n
'P(u) = L 'Pi(U) for U E U
i= l

leads to a Pareto optimum u E U which has t he property t hat for every U E U


such that
'Pi (u) :s: 'Pi (u) for all i = 1, . .. , n (3.93)
3.8 A Cooperative Tr eatment of an n-Person Cost-Game 163

it follows that
rpi(U) =rpi (U) for alli =l , ... , n . (3.94)
By cont raposit ion this is equivalent to the following statement: If for U E U
there exist s io E {I , .. . , n} with rpi o (u) < rpio ( u), then there is some i 1 E
{I, . . . , n} with rpi t (u) > rpio (u). T his mean s that there is no U E U for which
a player improves his cost valu e without t he cost valu e of at least on e other
player det eriorating.
In this sens e a P ar eto opt imum can be considered as a coop erative solution
of the gam e. A sufficient condition for some U E U to be a P areto optimum is
given in
Theorem 3.10. Let 11 E U be such that there exist numbers Yi > a for i =
1, .. . , n with
n n
LYirpi(U) :::; LYirpi(U) for all U E U . (3.95)
i= l i= l
Then U is a Pareto optimum.
Proof Let U E U be given such that (3.93) holds true. Then it follows that
n
LYi(rpi(U) - rp;(u )) 2: a .
;.= 1

From (3.95) it follows that


n
L Yi( rpi ( u) - rpi (u)) :::; a,
i= l

hence
n
L Yi ( rpi (u) - rpi (u)) = a,
i= l
which implies, to gether with (3.93), that (3.94) must hold true.

Convers ely we have the


Theorem 3.11. Let the set cJ>(U) + JR.+. with
rp1(U))
cJ>(u) = : ,UE U,
(
rpn (u)
be convex and have a non- empty interior int (cJ>(U) + JR.+. ).
164 3 Controllability and Optimization

Assertion: If U E U is a Pareto optimum, then there is a vector y E JR.+.


with y =I- en such that (3.95) holds true.

Proof. U E U being a Par eto optimum implies that

{cP(u)} = (cP(u) - JR.+.) n (<p(U) + JR.+. )


Since cP(u) '!- int (<p(U) + JR.+.) , it follows that

(cP(u) -JR.+.) n int (CP(U) + JR.+. ) = 0 .


Since both set s are convex, there exists a vect or y E JR.n with y =I- en and
som e a E JR. such that

This implies y E JR.+. and

(3.96)

This complet es the proof.


D

The notion of P ar eto optimum can be weakened in the following way :


An element uE U is called weak Pareto optimum, if there is no u E U such
that
'Pi (U) < 'Pi (U ) for all i = 1, .. . , n . (3.97)
Obviously a Pareto optimum is also a weak Par eto opt imum. The converse is
false , in general.

Theorem 3.12. Let u E U be such that there exists a vector y E JR.+. with
y =I- en such (3.96) holds true. Then {j, is a weak Pareto optimum.

Proof. Let u E U be such that (3.97) is satisfied . Then it follows that

which cont radicts (3.96). This compl et es the proof.


D
3.8 A Coop erative Treatment of an n-Person Cost-Ga me 165

Conversely we have the

Theorem 3.13. Let the set tJ>(U) + lit+. be convex.


Assertion: If U E U is a weak Pareto optimum, than there is a vector y E lit+.
with y i= en such that (3.96) holds tru e.

Proof. From U E U being a weak P ar eto optimum it follows that

The rest of the proof is the sa me as t ha t of Th eorem 3.11.

o
A
Appendix

A.I The Core of a Cooperative n-Person Game

In Section 3.4.3 and 3.4.4 we have investigat ed the core of a special coop era-
tive n-person game and have given necessary and sufficient condit ions for its
non- emptiness. Here we consider a general cooperat ive n-person ga me repre-
sented by a fun ction v ; 2N --+ IR where N = {I , . . . , n }, n:::: 2, with v(0) = o.
The core C(v) of such a game is given by all vectors x E IRn with

L .Xi = v (N ) (A.l)
iEN

and
L Xi :::: v (S ) for all non-empty S ~ N . (A.2)
iES

In [3] there is given a necessar y and sufficient condit ion for t he core C(v) to be
non-empty (see Chapter II, Theorems 8.3 and 8.4) which can be formul ated
as follows;

Theorem A .I: There exists a vector x E IRn with (A.l) and (A.2), if and
only if for every collect ion H ~ 2N \ { 0} and every set of weights "Is :::: 0,
S E H, with
n
L "Is = 1 for i = 1, . . . , n (A.3)
SE 6
j, E S

it follows that
L "Isv (S ) ::; v (N ) . (A .4)
SE 6
168 Appendix

P roof

1) Let x E IRn be given such that (A.I ) and (A.2) are satisfied. Let further
B ~ 2 N \ {0} and {Is :::: 0 I S E B} be given such that (A.3) holds true.
Then it follows t ha t

v (N ) = L Xi = L (L "Is) Xi = L "Is L X i :::: L "Isv (S ) ,


iEN iEN " E 13 S El3 iES SE l3
i E S

hence (A.4) holds true.


2) In order to show the sufficiency of the impli cation (A.3) :::} (A.4) for t he
existe nce of some X E IRn wit h (A.l) , (A.2) we consider the problem of
minimizing I: Xi subject t o (A.2).
iEN
The du al t o this problem consists of maxim izing

L "Isv (S)
SE 2 N \ { 0 }

sub ject to
"Is :::: 0 for all S E 2N \ {0} (A.5)
and
"Is =1 for i =I , .. . , n . (A.6)
" E 2N \ {0 j
i E S

Obv iously t here exist vectors X E IRn such t ha t (A.2) is satisfied and for
every such vector we have t hat

L X i :::: v(N ) .
iEN

Therefore there exists a solut ion of the above problem and it is

min{L xi IX E IR
n
satisfies (A.2)}:::: v (N ) .
iEN

By a du ality theorem of linear progr amming t he du al problem also has a


solut ion and it is

max{ L "Isv (S ) I bS)s E2 N \{0} satisfies (A.5) , (A.6)}


SE2 N \ {0}

n
= min{L Xi IX E IR satisfies (A.2) } :::: v( N ) .
iEN
Appendix 169

From the implication (A.3) =;. (A.4) it follows that

max{ L "Ysv(S ) I ("(S )SE2 N \{0} sat isfies (A. 5) and (A.6)} < v (N )
SE2N \ {0}

which then implies

min{L Xi I X E ]Rn sat isfies (A.2)} = v (N )


iEN
and completes the proof.

The dual problem can be used in ord er to find a vector X E ]Rn with (A.I) ,
(A .2), if such one exists. Let us demonstrat e this for t he case n = 3, v ( {1}) =
v ({ 2} ) = v ({3 } ) = 0, v ({ I, 2} ) > 0, v( { I,3 } ) > 0, v( {2,3 } ) > 0 and v (N ) > 0,
N = {I , 2, 3}. In this case t he core C (v ) consists of all vectors X E ]R3 such
that
Xl + X2 + X3 = v (N ) ,
Xl + X2 2 v ({1 , 2}) ,
(A.7)
Xl + X3 2 v ({1,3 }),
X2 + X3 2 v ({2 ,3 }) ,
Xl 2 0 , X2 2 0 , X3 2 0 .
The du al problem consists of maximi zing

L = "Y{ 1,2} v ( {I , 2}) + "Y{1 ,3} v( {I , 3}) + "Y{2,3}v ( {2, 3}) + "YNv (N )
subjec t to
"Y{1,2} + "Y{1,3} + "YN ::; 1 ,
"Y{ 1,2} + "Y{2,3} + "YN ::; 1 ,
"Y{ 1,3} + "Y{2,3} + "YN ::; 1 ,
"Y{ 1,2} 2 0 , "Y{1,3} 2 0 , "Y{2 ,3} 2 0 , "YN 2 0 .
In ord er to solve the dual problem with the aid of the simplex method we
int roduce slack vari ables

and rewrite the const ra ints in the form

"Y{1,2} + "Y{1,3} + "YN + Zl = 1


"Y{ 1,2} + "Y{2 ,3} + "YN + Z2 = 1
"Y{1,3} + "Y{2,3} + "YN + Z3 = 1
170 Appendix

T he start ing t ab leau for the simplex method t hen reads

- /'{l ,2} - /'{l ,3} - /'{2, 3} -/'N


Zl 1 1 1 a W
Z2 1 1 a1 1
Z3 1 a 1 1 1
2: a - v({1, 2}) -v ({1, 3}) -v ({2,3 }) - v(N )

If we excha nge /'N with Zl , we obtain t he tableau

- /,{l ,2} -/,{l ,3 } - /'{ 2,3} -Zl

/'N 1 1 1 a 1
Z2 a a -1 IT] -1
Z3 a -1 a 1 -1
2: v (N ) v(N ) - v ( {I , 2}) v (N ) - v( {I, 3}) -v( {2, 3}) v (N )

If we exchange /'{2 ,3 } wit h Z2, we obtain t he t ab leau

-/'{l,2} - /'{l ,3} -Z2 - Zl

/'N 1 1 1 a 1
/'{2 ,3 } a a -1 1 -1
Z3 a -1 III -1 a
v(N ) - v(v(N ) v(N ) - v( { I,3 }) v({ 2 3}) v(N )
2: {l , 2}) - v({2, 3}) , - v({2, 3})

If we exchange /'{1 ,3 } with Z3, we obt ain t he t ab leau

- /'{ 1,2} -Z3 - Z2 - Zl

/'N 1 2 -1 1 1
/'{2,3 } a -1 1 a -1
/,{l,3 } a -1 1 -1 a
2v( N) - v({I, 2})- v({I , 3})+ v(N) - v(N )-
2: v(N ) v({ I,3 }) - v({2,3 }) v( {2,3 }) - v(N ) v( { I,3 }) v( {2, 3})
Appendix 171

If we ass ume t hat

2v(N) - v({1, 2}) - v({1, 3}) - v({2, 3}) ~ 0 ,

v( {1, 3}) +v( {2, 3}) - v( N) ~ 0 ,


v( N) - v( {1, 3}) ~ 0 ,
v( N) - v( {2,3 }) ~ 0 ,
then TN = 1, T{ I ,2} = T{1 ,3} = T {2,3 } = 0 is a solution of t he dual problem
and
Xl = v(N) - v ({2, 3}) ,
X2 = v(N ) - v( {1, 3}),
X3 = v({ 1,3 })+v( {2,3 }) - v(N)

satisfy (A.7) .
Appendix 173

A.2 The Core of a Linear Production Game

We consider a linear production ga me with n players. Each player has at


his disposal a vector bi = (bi, b2,... ,b~), i = 1, . . . , n , of resources b~ > 0,
k = 1, . .. , m , wh ich he ca n use to produce goods that ca n be sold at a given
market price.This section is taken from [24].

We assume that a unit of the j-th good (j = 1, .. . , p) requires akj 2: 0 units


of the k-th resource (k = 1, .. . , m) and can be sold at a pric e Cj > O.
Let S ~ N = {I, . . . , n } , S =/= 0, be a coalit ion. T his coalit ion then has a total
of
bd S) = Lbk
iES

units of the k-th resource. Using all of their resour ces, the members of S can
produce vectors (Xl , XZ , .. . , x p ) of goods whi ch sat isfy
p
L akj Xj :::: bk(S) for k = 1, . .. ,m , (A.8)
j=l

Xj2: 0 forj=l , ... , p.


Under these conditi ons they want to maximiz e their profit
p

L Cj Xj
j=l

If we define
p

v( S ) = max{L CjXj I X E lR. P sat isfies (A .8)} ,


j=l

if S is non- empty (in whi ch case the probl em of maximizing

subject t o (A.8) has a solution, if for every k E {I, . .. , m} there is at least


one j E {I , . . . ,p} su ch that akj > 0) , and v (0) = 0, then v : 2N ----> lR.+ is the
charact eri stic fun ction of a cooperat ive n-person ga me .
174 Appendix

For this game we can prove

Theorem A .2 : The core of t his game is non- empty.

Proof We make use of Theorem A .l an d consider an ar bit rary collect ion


B <;;; 2N \ {0} and weights "fs :::: 0, S E B with (A.3).

For every k = 1, . . . , m we then have

L "fsbk( S ) = L L "fsbk
SE 13 SE 13 iES

= L{ L "fs }bk
iEN S E B
(A.9)
i E S

= Lbk
iEN

Now let Xl (S) , . . . , xp(S) be such that (A .8) is sat isfied and
p

v (S ) = L CjXj(S ) .
j=l
Then
p

L "fsv (S) = L bs L CjXj(S )}


S EB SE13 j=l
P
=L Cj{L "fSXj(S )}
j=l SE 13
p
= L Cj Xj
j=l
wher e
Xj = L "fs.Tj(S) for j = 1, . .. ,p .
SE 13
Appendix 175

Now it follows that , for every k = 1, . . . , m ,

t
j=1
akj (:L 'YSXj(S ))
(t
SE I3

= :L 'Ys j=1 akj :rj(S ))


SEI3
(A.lO)

: ; :L 'YS bk(S)
SE I3

which implies
p

:L ak/j ::; bk(N) for all k = 1, . . . , m ,


j=1
Since
ij 2: 0 for j =l , . .. ,p,
it follows that
p

:L cjij < v (N )
j=1
which impli es
p

:L 'Ysv(S ) = :L cjij ::; v (N ) .


SE I3 j=1
Hence (A.4) is satisfied . By Theorem A .l t herefore the core is non- empty
which completes the proof.

In order to find points in the core we consider for every S C;;; N , S =J 0, the
p
du al of the problem of maximizing L: CjXj subject t o (A.8) which consist s of
j=1
min imizing
m
:L bk(S) Yk
k=l
subject to
m

:L akjYk 2: Cj for j = 1, . . . , p ,
k=1
Yl 2: 0, . . . , Ym 2: 0 .
176 Appendix

Let Yl(S) , . . . , Ym(S) be a solution of this problem (which exists)


T he n it follows for S = N
m

v(N) = L bk(N)Yk(N)
k=l

and, for every S ~ N , S i- 0, N .


m

v (S ) :::; L bk(S)Yk(N) .
k=l
Now let us define, for every i = 1, . . . , n,
m
u.; = L b~Yk(N) .
k=l
T hen it follows, for every S ~ N, S i- 0,
m

L Ui = LL b~ Yk ( N)
iES iES k=l
m

= L (L bic)Yk(N)
k=l iES
m

= L bk(S)Yk(N)
k=l
and therefore
L Ui = v (N )
iEN

and
L Ui :::: v(S) for every S non-empty ~ N .
iES
Thus (Ul, . . . , un) is a point in the core.

A.3 Weak Pareto Optima: Necessary and Sufficient Conditions

In Section 3.8.5 we have given necessary and sufficient conditions for weak Pareto optima of non-cooperative n-person games. Here we will derive further conditions. We consider again a non-cooperative n-person game with n payoff functions

φ_i : ℝ^M → ℝ ,  i = 1, …, n ,

which are to be minimized on a non-empty set U ⊆ ℝ^M.
Then a weak Pareto optimum is a vector ū ∈ U such that there is no vector u ∈ U with

φ_i(u) < φ_i(ū)  for all i = 1, …, n .

Now we assume that φ_i ∈ C¹(ℝ^M) for i = 1, …, n and that U is convex. Then we can prove the following

Theorem A.3: If ū ∈ U is a weak Pareto optimum, then there is no u ∈ U such that

φ_i′(ū)ᵀ(u − ū) < 0  for all i = 1, …, n .   (A.11)
Proof: We assume that there exists some u ∈ U such that (A.11) is satisfied. If we define, for every λ ∈ (0, 1], u_λ = ū + λ(u − ū), then it follows from the convexity of U that

u_λ ∈ U  for all λ ∈ (0, 1] .

Further we have

lim_{λ→0+} ( φ_i(u_λ) − φ_i(ū) ) / λ = φ_i′(ū)ᵀ(u − ū) < 0  for all i = 1, …, n .

Therefore for every i = 1, …, n there exists some λ_i ∈ (0, 1] such that

φ_i(u_λ) < φ_i(ū)  for all λ ∈ (0, λ_i] .

If we put λ_{i_0} = min_{i=1,…,n} λ_i, then it follows that

φ_i(u_λ) < φ_i(ū)  for all λ ∈ (0, λ_{i_0}]  and all i = 1, …, n ,

which contradicts the assumption that ū is a weak Pareto optimum.
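Theorem A.3 can be tested numerically at a candidate point ū: whether some u ∈ U satisfies φ_i′(ū)ᵀ(u − ū) < 0 for all i amounts to minimizing max_i φ_i′(ū)ᵀ(u − ū) over U, which for a box-shaped U is itself a small linear program. The sketch below (the payoffs, the box U and the test points are invented for illustration only) carries this out for a two-player game with quadratic payoffs.

```python
# Sketch (hypothetical game): test the necessary condition of Theorem A.3
# at a candidate point u_bar by minimizing  max_i  phi_i'(u_bar)^T (u - u_bar)
# over the box U via a small LP with variables (u, t).
import numpy as np
from scipy.optimize import linprog

lo, hi = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # U = [0,2] x [0,2]

def grad_phi(u):
    """Gradients of phi_1(u) = (u_1-1)^2 + u_2^2 and phi_2(u) = u_1^2 + (u_2-1)^2."""
    return np.array([[2.0 * (u[0] - 1.0), 2.0 * u[1]],
                     [2.0 * u[0], 2.0 * (u[1] - 1.0)]])

def min_worst_descent(u_bar):
    """Minimize t subject to phi_i'(u_bar)^T (u - u_bar) <= t for all i, u in U."""
    G = grad_phi(u_bar)
    n, M = G.shape
    A_ub = np.hstack([G, -np.ones((n, 1))])    # rows encode G_i^T u - t <= G_i^T u_bar
    b_ub = G @ u_bar
    cost = np.zeros(M + 1)
    cost[-1] = 1.0                             # objective: minimize t
    bounds = [(lo[k], hi[k]) for k in range(M)] + [(None, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Optimum 0: no u in U satisfies (A.11), the necessary condition holds.
print(min_worst_descent(np.array([0.5, 0.5])))
# Optimum -12: (A.11) is satisfiable, so (2,2) is not a weak Pareto optimum.
print(min_worst_descent(np.array([2.0, 2.0])))
```

If the optimal value is negative, the minimizing u itself is a point at which all payoffs decrease to first order.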

If all payoff functions φ_i are convex and for some given ū ∈ U there is no u ∈ U with (A.11), then ū is a weak Pareto optimum. This follows immediately from

φ_i(u) − φ_i(ū) ≥ φ_i′(ū)ᵀ(u − ū)  for all u ∈ U and all i = 1, …, n .

The condition (A.11) not being satisfied by any u ∈ U is equivalent to

(Φ′(ū)(ū) − int ℝⁿ₊) ∩ Φ′(ū)(U) = ∅   (A.12)

where int ℝⁿ₊ = {y ∈ ℝⁿ | y_i > 0 for i = 1, …, n} denotes the interior of ℝⁿ₊ and

Φ′(ū)(u) = ( φ_1′(ū)ᵀu, …, φ_n′(ū)ᵀu )ᵀ  for all u ∈ U .

Since Φ′(ū)(U) is convex, (A.12) implies the existence of some y ∈ ℝⁿ₊ with y ≠ θ_n such that

yᵀ Φ′(ū)(ū) ≤ yᵀ Φ′(ū)(u)  for all u ∈ U   (A.13)

where

yᵀ Φ′(ū)(u) = Σ_{i=1}^n y_i φ_i′(ū)ᵀu ,  u ∈ U .

Conversely, the existence of some y ∈ ℝⁿ₊ with y ≠ θ_n satisfying (A.13) implies that there is no u ∈ U with (A.11).


A.4 Duality

We consider the same non-cooperative n-person game as in Section A.3 and define

Φ(u) = ( φ_1(u), …, φ_n(u) )ᵀ  for u ∈ U .

Then ū ∈ U is a Pareto optimum, if and only if

(Φ(ū) − ℝⁿ₊) ∩ Φ(U) = {Φ(ū)} .   (A.14)

Let us define

D = ℝⁿ \ ( Φ(U) + (ℝⁿ₊ \ {θ_n}) ) .

Then it follows that

( Φ(U) + (ℝⁿ₊ \ {θ_n}) ) ∩ D = ∅ .

Now let ū ∈ U be such that Φ(ū) ∈ D. Then it follows that

( Φ(ū) + (ℝⁿ₊ \ {θ_n}) ) ∩ D = ∅

which is equivalent to

( Φ(ū) + ℝⁿ₊ ) ∩ D = {Φ(ū)}   (A.15)

and in turn to the implication

( Φ(ū) ≤ d for some d ∈ D )  ⟹  Φ(ū) = d .   (A.16)

Conversely, the condition (A.15) implies Φ(ū) ∈ D, ū ∈ U, which is equivalent to

Φ(ū) ∉ Φ(U) + (ℝⁿ₊ \ {θ_n})

and in turn equivalent to

( Φ(ū) − (ℝⁿ₊ \ {θ_n}) ) ∩ Φ(U) = ∅ ,

which is equivalent to (A.14).

Result: A vector ū ∈ U is a Pareto optimum, if and only if the implication (A.16) holds true.

An element ū ∈ U is a weak Pareto optimum, if and only if

( Φ(ū) − int ℝⁿ₊ ) ∩ Φ(U) = ∅ .

Let us define the set

D = ℝⁿ \ ( Φ(U) + int ℝⁿ₊ ) .

Then it follows that

( Φ(U) + int ℝⁿ₊ ) ∩ D = ∅ .

Now let ū ∈ U be such that Φ(ū) ∈ D. Then it follows that

( Φ(ū) + int ℝⁿ₊ ) ∩ D = ∅   (A.17)

which is equivalent to the non-existence of a d ∈ D such that φ_i(ū) < d_i for all i = 1, …, n. Further, Φ(ū) ∈ D for some ū ∈ U is equivalent to

Φ(ū) ∉ Φ(U) + int ℝⁿ₊

and in turn equivalent to

( Φ(ū) − int ℝⁿ₊ ) ∩ Φ(U) = ∅ .

Result: If ū ∈ U is a weak Pareto optimum, then (A.17) holds true, i.e., there exists no d ∈ D such that φ_i(ū) < d_i for all i = 1, …, n.
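For a finite strategy set U the set-theoretic characterizations of this appendix reduce to componentwise dominance tests on the image Φ(U): Φ(ū) lies in the first set D exactly when no other image point is componentwise ≤ Φ(ū) with at least one strict inequality (the Pareto test), and in the second set D exactly when no image point is strictly smaller in every component (the weak Pareto test). A minimal sketch with hypothetical payoff vectors:

```python
# Sketch (hypothetical payoff vectors): for a finite image Phi(U) the membership
# of Phi(u_bar) in D reduces to componentwise dominance tests.
import numpy as np

Phi_U = np.array([[1.0, 4.0],   # image Phi(U) of a finite strategy set
                  [2.0, 2.0],
                  [4.0, 1.0],
                  [3.0, 3.0],
                  [2.0, 2.0]])

def is_pareto(k):
    """No other image point is componentwise <= Phi_U[k] with one strict inequality."""
    z, others = Phi_U[k], np.delete(Phi_U, k, axis=0)
    dominated = np.all(others <= z, axis=1) & np.any(others < z, axis=1)
    return not dominated.any()

def is_weak_pareto(k):
    """No image point is strictly smaller than Phi_U[k] in every component."""
    z, others = Phi_U[k], np.delete(Phi_U, k, axis=0)
    return not np.all(others < z, axis=1).any()

for k, z in enumerate(Phi_U):
    print(z, "Pareto:", is_pareto(k), " weak Pareto:", is_weak_pareto(k))
```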
B

Bibliographical Remarks

The definition of time-discrete autonomous dynamical systems given in Section 1.1.1 is a special case of the abstract definition of dynamical systems which goes back to G. D. Birkhoff [2] and can also be found in the essay "What is a Dynamical System?" by G. R. Sell in [8]. In this essay also the time-discrete case is discussed.
The localization of limit sets with the aid of Lyapunov functions in Section 1.1.2 has been first represented by J. P. La Salle in [8] and can also be found in [13] and [20].
The stability results given in Section 1.1.3 generalize a standard result on the stability of fixed points to be found in [8], [13], [18], [20] (see Theorem 1.3).
Theorem 1.5 in Section 1.1.4 coincides with Satz 5.4 in [18], if we choose as normed linear space the n-space ℝⁿ equipped with any norm.
The results on linear systems following the Corollary of Theorem 1.6 in Section 1.1.5 and also those for non-autonomous systems in Section 1.2.3 are based on Section 4.2 in [18].
The stability results in Section 1.2.2 generalize those of Section 1.1.3 and are taken from [15]. Lyapunov's method has also been applied to non-autonomous systems in [1] and [19] in order to investigate stability of fixed points. Instead of one Lyapunov function a sequence of such functions is used.
Theorem 2.2 about null-controllability of linear systems has been taken from [14]. Proposition 2.3 concerning Kalman's condition can be found in [20] as well as Propositions 2.1 and 2.2. Theorem 2.5, which generalizes Theorem 2.1 to non-autonomous systems, is taken from [15] and further generalized in Section 2.2.2. The results on stabilization of controlled systems in Section 2.2.3 are taken from [15].
The dynamical games which are introduced in Section 3.1 have also been investigated in [13], however, in a different way.

The cooperative treatment of the emission reduction model in Sections 3.4.2 and 3.4.3 can also be found in [25]. The non-cooperative game in Section 3.4.1 and its cooperative treatment in Section 3.4.5 are also contained in [16].
The dynamical method for finding an evolutionary stable state in an evolution matrix game which is described in Section 3.6.2 has been adopted from [21].
References

1. R. P. Agarwal: Difference Equations and Inequalities. Marcel Dekker, Inc.: New York, Basel, Hongkong 1992.
2. G. D. Birkhoff: Dynamical Systems. Amer. Math. Soc. Colloq. Publ., Providence 1927.
3. Th. Driessen: Cooperative Games, Solutions and Applications. Kluwer Academic Publishers: Dordrecht - Boston - London 1988.
4. U. Faigle, W. Kern, and G. Still: Algorithmic Principles of Mathematical Programming. Kluwer Academic Publishers: Dordrecht - Boston - London 2002.
5. F. R. Gantmacher: Applications of the Theory of Matrices. Interscience Publishers: New York - London - Sydney 1959.
6. G. H. Golub, C. F. Van Loan: Matrix Computations. The Johns Hopkins University Press, 3rd ed. 1996.
7. Ch. Großmann, J. Terno: Numerik der Optimierung. B. G. Teubner: Stuttgart 1997.
8. J. Hale (editor): Studies in Ordinary Differential Equations. Vol. 14, published by the Mathematical Association of America, 1977.
9. J. Hülsmann, W. Gamerith, U. Leopold-Wildburger, W. Steindl: Einführung in die Wirtschaftsmathematik. 3. Auflage, Springer-Verlag: Berlin - Heidelberg - New York 2002.
10. J. Jahn: Introduction to the Theory of Nonlinear Optimization. 2nd edition, Academic Press, New York 1994.
11. W. Krabs: Einführung in die lineare und nichtlineare Optimierung für Ingenieure. Verlag B. G. Teubner, Stuttgart 1983.
12. W. Krabs: Mathematische Modellierung. Verlag B. G. Teubner: Stuttgart 1997.
13. W. Krabs: Dynamische Systeme: Steuerbarkeit und chaotisches Verhalten. Verlag B. G. Teubner: Stuttgart und Leipzig 1998.
14. W. Krabs: On Local Controllability of Time-Discrete Dynamical Systems into Steady States. Journal of Difference Equations and Applications 8, no. 1, 1-11 (2002).
15. W. Krabs: Stability and Controllability in Non-Autonomous Time-Discrete Dynamical Systems. Journal of Difference Equations and Applications 8, no. 12, 1107-1118 (2002).
16. W. Krabs: A Cooperative Treatment of an n-Person Cost-Goal-Game. Mathematical Methods of Operations Research 57, 309-319 (2003).
17. W. Krabs, St. W. Pickl, and J. Scheffran: Optimization of an n-Person Game Under Linear Side Conditions. In: Optimization, Dynamics and Economic Analysis, edited by E. J. Dockner, R. F. Hartl, M. Luptacik, and G. Sorger. Physica-Verlag: Heidelberg, New York 2000, pp. 76-85.
18. U. Krause und T. Nesemann: Differenzengleichungen und diskrete dynamische Systeme. B. G. Teubner: Stuttgart und Leipzig 1999.
19. V. Lakshmikantham and D. Trigiante: Theory of Difference Equations: Numerical Methods and Applications. Academic Press, Inc.: Boston et cet. 1988.
20. J. P. La Salle: The Stability and Control of Discrete Processes. Springer-Verlag: New York - Berlin - Heidelberg - London - Paris - Tokio 1986.
21. J. Li: Das dynamische Verhalten diskreter Evolutionsspiele. Shaker Verlag: Aachen 1999.
22. G. Luenberger: Introduction to Dynamic Systems. John Wiley and Sons: Chichester, New York, Brisbane, Toronto 1979.
23. G. Luenberger: Introduction to Linear and Nonlinear Programming. Addison Wesley 1980.
24. G. Owen: On the Core of Linear Production Games. Mathematical Programming 9 (1975), 358-370.
25. St. W. Pickl: Der τ-value als Kontrollparameter. Modellierung und Analyse eines Joint-Implementation Programmes mithilfe der dynamischen kooperativen Spieltheorie und der diskreten Optimierung. Shaker Verlag: Aachen 1999.
26. C. Roos, T. Terlaky, J.-Ph. Vial: Theory and Algorithms for Linear Optimization. John Wiley & Sons, Chichester 1997.
27. A. Schrijver: Theory of Linear and Integer Programming. John Wiley, New York 1986.
28. P. Spellucci: Numerische Verfahren der nichtlinearen Optimierung. Birkhäuser Verlag, Boston 1993.
29. S. H. Tijs, T. S. H. Driessen: Game Theory and Cost Allocation Problems. Management Science 32, no. 8, 1015-1028, 1986.
Index

admissible control functions, 69, 76, 78, 80
artificial kidney, 43
asymptotically stable, 9-11, 13, 15-19, 21, 24, 29, 36-41, 70-72, 87, 88, 149-151
attractive, 17, 18, 38, 39, 41, 148
attractor, 9, 10, 12, 36, 37, 49, 87, 88, 148, 149
autonomous, 2, 32, 70, 181
autonomous system, 1
Brouwer's fixed point theorem, 100
cardinality, 5
coalition, 117-119, 128, 129, 152, 153, 157, 161, 173
control function, 47, 48, 54, 57, 59, 69, 76, 78, 80-83, 87, 94, 106
control functions, 47, 55, 76, 94
control set, 93
controllability, 83
controllable set, 49, 84
controlled costs, 116, 117
cooperative, 117, 128, 157, 158, 167
cooperative game, 157
core, 117, 119, 121, 124, 126, 127, 167, 169, 174, 175
cost, 116
cost function, 95, 96, 99, 104, 137, 155
cost-game, 155
difference equations, 1, 22, 25, 28, 30, 44, 47, 80, 93
differential equations, 21, 24, 25, 28, 43
duality, 168, 179
dynamical game, 93
emission reduction, 30, 73, 107, 182
emission reduction model, 55
equilibrium solution, 24, 25
evolution matrix, 141
evolutionary stable, 142-145, 147-149, 182
fixed point, 10, 13, 16, 22-26, 28-30, 38, 48, 49, 51, 52, 57, 67, 69, 70, 73, 76, 78, 80, 83, 89, 93, 100, 103, 106, 148-150
fixed point controllability, 48, 49, 57, 68
fixed point solution, 25, 26
flow, 1, 2
Fréchet, 70, 72
Fréchet derivative, 13, 15, 16
Fréchet differentiability, 14
Fréchet differentiable, 16
function, 93, 116
game, 117, 128, 157, 158, 167, 179
gap function, 120
global attractor, 59
globally asymptotically stable, 21
goal-cost-game, 136, 151
grand coalition, 117, 129, 131, 132, 153, 158
Hamilton-Cayley Theorem, 63
hemo-dialysis, 43
implicit function theorem, 50, 85, 86, 92
instability, 9
invariant, 3, 5-7, 11
invariant set, 5
invariantly connected, 5, 7
inverse function theorem, 51, 85, 86, 89
Jacobi matrix, 25, 29, 51, 89
Kalman condition, 58-60, 63, 65, 73, 181
kidney, 43
limit set, 2-5, 32, 87
linear, 73
linear production game, 173
linear system, 31, 53, 57, 106, 116
local controllability, 81, 106
logistic growth, 28
Lyapunov function, 6-12, 20, 34-37, 88, 150, 181
Marquardt's algorithm, 53
membrane, 43
minimal polynomial, 63
moving suspension point, 77
Nash equilibrium, 99-102, 105, 108, 112, 113, 137-139, 142, 148, 152, 156
non-autonomous, 32, 38, 47, 80, 86, 181
non-cooperative, 179
non-cooperative game, 162, 182
null-controllability, 59, 65, 73, 74, 106, 181
orbit, 2, 32
Pareto optimum, 95, 156, 162-165, 177-179
payoff matrix, 141, 142
periodic, 5
periodic solution, 44
permeability, 43
planar pendulum, 77
population state, 141, 142, 145, 149
population states, 141, 142, 147
positive definite, 8-12, 20, 21, 35-37, 88, 151
positively, 3
positively invariant, 5, 9, 11, 12
predator-prey-model, 21, 51, 76
problem of controllability, 94, 106
process of hemo-dialysis, 43
pure population state, 141
reachability, 89
semi-group, 1
sequence, 5, 17-19, 31, 39-41
set, 8, 9, 35
simplex method, 169
simplex tableau, 169
slack variable, 169
stability, 13
stabilization, 70, 86
stable, 8-10, 17-19, 21, 31, 35, 36, 38-41, 87, 88, 129, 131, 132, 153, 158
stable fixed point, 20, 27
state, 93
state function, 47, 53, 80
superadditive games, 118
system, 2, 32, 38, 47, 70, 73, 80, 86, 93
time-discrete system, 70
uncontrolled, 93
uncontrolled system, 30, 47, 49, 51, 53, 54, 57, 59, 69, 73, 76, 78, 80, 83, 89, 93, 103, 106, 107
unstable, 13, 17, 22, 25, 71, 72
Volterra-Lotka model, 24
weak Pareto optimum, 164, 177, 180
About the Authors

Prof. Dr. rer. nat. Werner Krabs

born 1934 in Hamburg-Altona, 1954-1959 study of mathematics, physics and astronomy at the University of Hamburg, diploma in mathematics, 1963 PhD thesis. 1967/68 visiting assistant professor at the University of Washington in Seattle. 1968 habilitation in applied mathematics at the University of Hamburg. 1970-72 professor at the RWTH Aachen. 1971 visiting associate professor at the Michigan State University in East Lansing. 1972 professor at the TH Darmstadt. 1977 visiting full professor at the Oregon State University in Corvallis. 1979-81 vice-president of the TH Darmstadt. 1986-87 chairman of the Society for Mathematics, Economy and Operations Research.

Dr. rer. nat. Stefan Wolfgang Pickl

born 1967 in Darmstadt. 1987-1993 study of mathematics, electrical engineering and philosophy at the TH Darmstadt. 1993 ERASMUS grant and diploma thesis at the EPFL Lausanne, diploma in theoretical electrical engineering. 1998 PhD thesis in mathematics at the TU Darmstadt. Dissertation award 2000 of the German Society of Operations Research. Since 2000 assistant professor at the University of Cologne.
