
Continuous Dynamical Systems

Lee¹

Author address:
111 Cummington St., Department of Mathematics, Boston University, Boston MA 02155
E-mail address: deville@math.bu.edu

¹ Based exclusively on notes given by G. R. Hall in MA 775, Fall 1996, at Boston University.
Contents

Introduction

Chapter 1. Introduction
1. Some preliminary definitions (Sept. 4)
2. More examples, Changing variables (Sept. 6)
3. Differentiation, change of time variable (Sept. 9)
4. Change of time variables (Sept. 11)
5. Example, using the 2-body problem (Sept. 13)
6. McGehee Collision Manifold (Sept. 16)
7. Finishing analysis of collision manifold (Sept. 18)

Chapter 2. The Two Big Theorems
1. Existence-Uniqueness Theorem (Sept. 20)
2. Some proof of the Existence-Uniqueness Theorem (Sept. 25)
3. Some more proof of the Existence-Uniqueness Theorem (Sept. 27)
4. Picard iteration (Sept. 30)
5. Invariant Sets (Oct. 2)
6. Sinks and conjugacy (Oct. 4 and Oct. 7)
7. Preliminary to Stable/Unstable Manifold Theorem (Oct. 9)
8. Stable/Unstable Manifold Theorem (Oct. 15)
9. More S/U Theorem (analytic version) (Oct. 16)
10. C^k version of S/U Theorem (Oct. 18)
11. Continuation of the C^k proof (Oct. 21)
12. Smoothness (Oct. 28)
13. The Stable/Unstable Manifold MetaTheorem (Oct. 30)

Chapter 3. Using maps to understand flows
1. Periodic orbits and Poincaré sections (Nov. 1)
2. Computing Floquet Multipliers (Nov. 4)
3. More computation of Floquet multipliers (Nov. 6)
4. Bifurcation Theory (Nov. 8)
5. Hyperbolicity in bifurcations (Nov. 11)
6. Bifurcation diagrams (Nov. 13)
7. Spiral sinks and spiral sources, normal form calculation (Nov. 15)
8. More normal form calculations, complexification (Nov. 18)

Chapter 4. Topics
1. Setup for Hopf Bifurcation Theorem (Nov. 20)
2. More Hopf Bifurcation Theorem (Nov. 22, 25)
3. Setup for Melnikov method (Nov. 25, Dec. 2)
4. Using Melnikov for existence of a transverse homoclinic (Dec. 4)
5. Calculation of the order-ε term (Dec. 6)
6. An example of a Melnikov calculation (Dec. 9)
7. Averaging (Dec. 11)

Index

Introduction
In Fall 1996, Professor G. R. Hall gave a class at Boston University titled "MA 775 - Ordinary Differential Equations". The following pages are a (hopefully accurate) copy of the notes I took in this class. He deserves all the credit for assembling the material and presenting it. The only mathematics I ended up doing was checking and making sure I had all of the inequalities pointing the right way, etc.

This document was produced using AMS-LaTeX macros on top of Version 3.1415 of the LaTeX2e compiler. I used the amsbook documentclass. The only packages I had to import were epsfig and graphicx for the pictures, and theorem to define the various theorem environments. All of the pictures were made either with xfig or Mathematica, and then dumped into PostScript.¹

For further information (including how to get a copy of this PostScript file), please go to http://math.bu.edu/people/deville/Notes/. The latest versions of this file will be maintained there when corrections are made. If you find any errors, or have any general comments on the notes, please send email to me at deville@math.bu.edu. I will maintain an errata page for these notes (and all future releases), and any and all comments are greatly appreciated.

¹ In this particular version, a few of the pictures are still not in. They tend to be more at the end, and most of them concern the Melnikov estimates. I plan to do the rest eventually, but they're hard and complicated, so relax.
CHAPTER 1

Introduction
1. Some preliminary definitions (Sept. 4)
Definition 1.1. Given a vector field f : R^n → R^n, the differential equation associated with f is
(1) ẋ = f(x), i.e. dx/dt = f(x), x = (x₁, x₂, …, xₙ).
Definition 1.2. A solution is a curve γ : R → R^n such that
(2) γ̇(t) = dγ/dt = f(γ(t)) for all t.
Definition 1.3. An initial condition is a point x₀ ∈ R^n. An initial value problem is
(3) ẋ = f(x), x(0) = x₀,
and its solution is a curve γ(t) with γ(0) = x₀.
Definition 1.4. A flow is a map φ : R × R^n → R^n such that
(4) φ(0, x₀) = x₀ for all x₀ ∈ R^n,
(5) φ(s + t, x₀) = φ(s, φ(t, x₀)).
Equation (5) is known as the group property.
A flow φ is a solution of the differential equation ẋ = f(x) if
(6) (∂φ/∂t)(t, x₀) = f(φ(t, x₀)) for all t, x₀.
Example: If ẋ = x on R, then we have
x(t) = x₀e^t, x(0) = x₀,
so
φ(t, x₀) = x₀e^t.
Claim: This is a flow.
Proof: (4) is simple to check. For (5), we see that
φ(s + t, x₀) = x₀e^{s+t},
and
φ(s, φ(t, x₀)) = φ(s, x₀e^t) = (x₀e^t)e^s = x₀e^{s+t}. □
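The flow axioms are easy to sanity-check numerically. Here is a minimal sketch (the function name is mine, not from the notes) verifying properties (4) and (5) for φ(t, x₀) = x₀e^t at sample values:

    import numpy as np

    def phi(t, x0):
        # the flow of x' = x
        return x0 * np.exp(t)

    x0, s, t = 1.7, 0.3, 1.1
    assert np.isclose(phi(0.0, x0), x0)                    # property (4)
    assert np.isclose(phi(s + t, x0), phi(s, phi(t, x0)))  # property (5), the group property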

The group property says that the "rules of evolution" (i.e. the vector field) do not change with time, or f : R^n → R^n does not depend on t. Such equations are called autonomous.
A "good theorem" would be that every "reasonable" differential equation has (almost) a flow for a solution, and the flow is "nice".
Theorem 1.1. Given a smooth (C¹) flow φ : R × R^n → R^n, it is the solution of a differential equation, i.e. there exists a vector field f : R^n → R^n with φ as solution.
Proof: Define f(x₀) = (∂φ/∂t)(0, x₀). Check that φ is a solution for ẋ = f(x), i.e. (∂φ/∂t)(t, x₀) = f(φ(t, x₀)) for all t, x₀. Now, since
φ(t + Δt, x₀) = φ(Δt, φ(t, x₀)),
we get
(∂φ/∂t)(t, x₀) = lim_{Δt→0} [φ(Δt, φ(t, x₀)) − φ(0, φ(t, x₀))]/Δt
= (∂φ/∂t)(0, φ(t, x₀))
= f(φ(t, x₀)), by the definition of f. □
Example: The pendulum
θ̈ = −sin(θ)
converts to
θ̇ = ω
ω̇ = −sin θ
Example:
ÿ + 3ẏ + 2y = cos(2t)
The 3ẏ term is damping, the 2y term is Hooke's law, and the cos(2t) term is an external forcing term. Convert this to
ẏ = v
v̇ = −3v − 2y + cos(2t)

This is not autonomous. Using a new time variable s, we get the autonomous system
dy/ds = v
dv/ds = −3v − 2y + cos(2t)
dt/ds = 1
Be careful with the initial conditions in this case, since t(s) = s + c.
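As a concrete illustration, here is a minimal sketch (assuming NumPy and SciPy are available) that integrates the autonomized system, carrying t along as a third state variable:

    import numpy as np
    from scipy.integrate import solve_ivp

    def field(s, state):
        # state = (y, v, t); the last equation is dt/ds = 1
        y, v, t = state
        return [v, -3*v - 2*y + np.cos(2*t), 1.0]

    # initial condition (y, v) = (1, 0) at t = 0, so t(s) = s here (c = 0)
    sol = solve_ivp(field, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-8)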
Example: A worse example. Consider dx/dt = x². If we solve, we get
x(t) = −1/(t + C),
and the initial condition gives C = −1/x₀. We get the "flow"
φ(t, x₀) = x₀/(1 − tx₀).
We can check that this is a flow, but the problem is that the solution goes to ∞ in finite time: for x₀ > 0, it is only defined for t ∈ (−∞, 1/x₀).
2. More examples, Changing variables (Sept. 6)
Remark: Think of φ(·, x₀) : t ↦ φ(t, x₀) as starting at x₀ and going as far as it can. The advantage of thinking of this as a flow is that we can fix a time T and ask what happens to all (or a large set of) initial conditions after time T.
Continuity with respect to initial conditions says that for any fixed t, if x₀ and x are sufficiently close, then φ(t, x₀) and φ(t, x) are close.
Our earlier example ẋ = x² has a solution "flow"
φ(t, x₀) = x₀/(1 − tx₀).
0
Definition 2.1. A local flow is a map φ : U → R^n, where U is an open subset of R × R^n such that, for each x₀ ∈ R^n,
((−∞, ∞) × {x₀}) ∩ U = (−a, b) × {x₀},
and
(7) φ(0, x₀) = x₀
(8) φ(s + t, x₀) = φ(s, φ(t, x₀))
whenever both sides are defined.
Example: The solution of ẋ = x² has local flow
φ(t, x₀) = x₀/(1 − tx₀),

defined on
φ(t, x₀), where  −∞ < t < 1/x₀  if x₀ > 0,
                 −∞ < t < ∞     if x₀ = 0,
                 1/x₀ < t < ∞   if x₀ < 0.
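A short numerical sketch (my own example values) shows the blow-up time 1/x₀ directly: the integrator is stopped by an event once the solution becomes huge.

    import numpy as np
    from scipy.integrate import solve_ivp

    x0 = 2.0                                # blow-up predicted at t = 1/x0 = 0.5
    escape = lambda t, x: x[0] - 1e6        # stop once the solution is enormous
    escape.terminal = True
    sol = solve_ivp(lambda t, x: x**2, (0.0, 1.0), [x0], events=escape, rtol=1e-10)
    print(sol.t[-1])                        # ~0.4999..., just below 1/x0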
Definition 2.2. A semiflow is a map φ : R⁺ × R^n → R^n satisfying
(9) φ(0, x₀) = x₀
(10) φ(s + t, x₀) = φ(s, φ(t, x₀))
where they are defined.
Example: The Kepler Problem. Let q be the difference in position of 2 masses:
q̈ = −Gμ q/‖q‖³,
where G is the gravity constant and μ is a constant related to the masses of the objects.
Make this a first order system (q = (q₁, q₂)):
(11) q̇₁ = p₁
     q̇₂ = p₂
     ṗ₁ = −Gμq₁/(√(q₁² + q₂²))³
     ṗ₂ = −Gμq₂/(√(q₁² + q₂²))³
This is not a vector field on all of R⁴, because
Gμ/(√(q₁² + q₂²))³ → ∞ as q₁, q₂ → 0.
The phase space for this problem is R⁴ \ {(0, 0, p₁, p₂)}; the removed set is "the collision set". There are solutions which approach the collision set in finite time.
We can define a differential equation on any object where you have a good notion of tangent vector, such as R^n, open subsets of R^n, smooth surfaces in R^n, or manifolds (which look locally like R^n).
Definition 2.3. Given a flow φ or a differential equation ẋ = f(x), the orbit of x₀ is
O(x₀) = {φ(t, x₀) : φ defined at t},
i.e., the image of the solution curve through x₀.
We also define the forward orbit of x₀:
O⁺(x₀) = {φ(t, x₀) : t ≥ 0}

Definition 2.4. A fixed point, rest point, critical point, or stationary point is a point x₀ such that φ(t, x₀) = x₀ for all t.
x₀ is called a periodic point if φ(T, x₀) = x₀ for some T > 0 but φ(t, x₀) ≠ x₀ for 0 < t < T. T is called the least period.
Philosophy: For ODE we think of studying flows which come from nice vector fields. The tools we use are calculus tools (local analysis) to build a global picture of the flow.
When are two flows (or differential equations) "the same"?
Given a differential equation, i.e. a vector field f : U → R^n: suppose U has coordinates X = (x₁, x₂, …, xₙ) and V has coordinates Y = (y₁, y₂, …, yₙ). The differential equation on U is Ẋ = f(X).
Given a smooth homeomorphism h : V → U which is smoothly invertible (i.e. h has a sufficient number of partials, and so does h⁻¹), we can define a vector field and a flow on V.
Move the vector field: define g : V → R^n by
(12) g(Y) = (Dh⁻¹)|_{h(Y)} (f(h(Y))),
where Dh⁻¹ is the derivative of h⁻¹ : U → V.
Figure 1. The map h.
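Formula (12) can be carried out symbolically. Here is a sketch using SymPy; the particular h and f below are my own illustrative choices, not examples from the lecture. Note that (Dh⁻¹)|_{h(Y)} = (Dh|_Y)⁻¹, which is what the code uses.

    import sympy as sp

    y1, y2 = sp.symbols('y1 y2')
    Y = sp.Matrix([y1, y2])
    h = sp.Matrix([y1 + y2**2, y2])       # a smoothly invertible h : V -> U
    fU = sp.Matrix([h[0], -h[1]])         # f(h(Y)) for the field f(x1, x2) = (x1, -x2)

    Dh = h.jacobian(Y)
    g = sp.simplify(Dh.inv() * fU)        # g(Y) = (Dh|_Y)^(-1) f(h(Y))
    print(g)                              # the moved vector field on V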

3. Differentiation, change of time variable (Sept. 9)


The derivative is the \best linear approximation". To get a notion of the best
linear approximation, we need a linear space. Start at x 2 U , and h(x) 2 V . We
denote the tangent space to U at x as Tx U . The derivative of h is a linear map
from TxU to Th(x)V ., i.e.
Dhjx (v) = w; v 2 TxU; w 2 Th(x)V
To compute Dhjv , pick v 2 TxU . Think of it as a velocity vector of a curve
on U through x. Call this curve (t), then h( (t)) is a curve on V . The velocity
vector of h( (t)) at h(x) is velocity of the image of on V .
Figure 2. The derivative maps tangent planes to tangent planes.

So, define Dh|_x(v) = w, where w is the velocity vector of h(γ(t)) at h(x).
If
h(x₁, x₂, …, xₙ) = (h₁(x₁, …, xₙ), h₂(x₁, …, xₙ), …, hₙ(x₁, …, xₙ)),
then Dh|_x is the matrix of partial derivatives
Dh|_x = [ ∂h₁/∂x₁ ⋯ ∂h₁/∂xₙ ]
        [    ⋮    ⋱    ⋮    ]
        [ ∂hₙ/∂x₁ ⋯ ∂hₙ/∂xₙ ]
For example, start at x and move with velocity (1, 0, 0, …, 0):
h(x₁ + t, x₂, …, xₙ) − h(x₁, x₂, …, xₙ) = (h₁(x₁ + t, x₂, …, xₙ) − h₁(x₁, x₂, …, xₙ), h₂(x₁ + t, …) − h₂(…), …)
We get
(∂h₁/∂x₁, ∂h₂/∂x₁, …, ∂hₙ/∂x₁) = Dh|_x (1, 0, …, 0)ᵀ.

Example: h : R² → R²,
h(x, y) = (x² + 2xy, y + cos(x)) = (z, w)
h(1, 1) = (3, 1 + cos(1))
Dh = [ 2x + 2y   2x ]
     [ −sin(x)    1 ]
Dh|_(1,1) = [ 4        2 ]
            [ −sin(1)  1 ]
Dh|_(1,1) (1, 0)ᵀ = (4, −sin(1))ᵀ
Let L(X → Y) denote the linear maps from X to Y. If h : R² → R², then
Dh : R² → L(R² → R²) ≅ R⁴
D²h : R² → L(R² → L(R² → R²)) ≅ L(R² → R⁴) ≅ R⁸
Back to changing variables...
Let U ⊆ R^n be open, and f : U → R^n be a vector field. Then ẋ = f(x) is a differential equation. Let x₀ be an initial condition. Then we have a solution curve x(t), and also a solution (local) flow φ : R × U → U.
Figure 3. The map h.


Given h : V ! U (i.e. new variables to old variables):
1. Move vector eld: De ne g : V ! Rn by

g(z ) = (Dh 1 ) h(z) f (h(z ))
  1
= (Dh)jh(z) f (h(z ))
Note:
D(h 1 ) h(z) = Dhjz 1

2. Move the solution curves: given z₀ ∈ V, we have x₀ = h(z₀). Use that as the initial condition to obtain a solution x(t). Then define z(t) = h⁻¹(x(t)).
Claim: These are the same operation.
Proof:
dz/dt = Dh⁻¹(dx/dt) = Dh⁻¹(f(x(t))) = Dh⁻¹(f(h(z(t)))),
so dz/dt = g(z). □
Definition 3.1. Given (local) flows
φ : R × U → U
ψ : R × V → V,
we say φ, ψ are conjugate if there exists a homeomorphism h : V → U such that
(13) h(ψ(t, z₀)) = φ(t, h(z₀))

h
1
0
(t; z0 )
0
1
(t; h(z0 )) 1010

I

1
0
1
0
1
0
0Y
1 z0
h(z0) h

Figure 4. A conjugacy.
If h is not a diffeomorphism, we can still use it as a conjugacy for flows, but we can't move the vector field.
Remark: Conjugacy is a very strong notion of "the same".
Example: X ∈ R^n, Ẋ = AX. Do a linear change of variables, X = PY, where P is an invertible n × n matrix. Then h(Y) = PY, so Dh|_Y = P. So, in the new variables,
Ẏ = P⁻¹Ẋ = P⁻¹AX = P⁻¹APY
Example:
(ẋ, ẏ)ᵀ = [ 1 2 ; 1 1 ] (x, y)ᵀ
The eigenvalues are 1 ± √2, so there is a P s.t.
P⁻¹AP = [ 1+√2  0 ; 0  1−√2 ].
So, let X = PY; then
Ẏ = [ 1+√2  0 ; 0  1−√2 ] Y,
where Y = (z, w)ᵀ, so
ż = (1 + √2)z and ẇ = (1 − √2)w.
These are conjugate linear systems.
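Numerically, P is just a matrix of eigenvectors. A sketch:

    import numpy as np

    A = np.array([[1.0, 2.0], [1.0, 1.0]])
    evals, P = np.linalg.eig(A)          # columns of P are eigenvectors of A
    B = np.linalg.inv(P) @ A @ P         # B = P^(-1) A P
    print(evals)                         # [1 + sqrt(2), 1 - sqrt(2)]
    print(np.round(B, 10))               # diagonal, same entries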
Another change of variables is a change in the time variable. Start with f : U → R^n, with a solution (local) flow φ. Pick x₀ ∈ U, and let x(t) be the solution through x₀. Change to a new time variable s = S(t), so new time is a function of old time. Let T(s) = t, so that S⁻¹ = T, and old time is a function of new time.
So what differential equation is x(T(s)) a solution of?
4. Change of time variables (Sept. 11)
Start with a vector field f : U → R^n, x₀ ∈ R^n. Change time variables: get a new time variable s, with s = S(t) and t = T(s). For what differential equation is x(T(s)) the solution? Differentiate:
(d/ds) x(T(s)) = ẋ(T(s)) · (dt/ds)|_s = f(x(T(s))) · (dt/ds)|_s
The effect of changing to a new time is only on the speed, not the direction.
Start with f : U → R^n, all as above. Suppose we have a smooth λ : U → R⁺. Then we can make a new vector field
g(x) = λ(x)f(x).
What are the solution curves of ẋ = g(x)? Start with an IC x₀. Let x(t) be the solution of dx/dt = f(x) with x(0) = x₀. The goal is to change x(t) into a solution of dx/ds = g(x) by defining a new time variable.
Suppose T(s) is a change of time variable such that
(d/ds) x(T(s)) = g(x(T(s))).
But then
g(x(T(s))) = (dx/dt)|_{T(s)} · (dt/ds)|_s = f(x(T(s))) · (dt/ds)|_s,
and we want this to equal λ(x(T(s))) · f(x(T(s))).

So, we need
(dt/ds)|_s = λ(x(T(s))).
If there exists such a T(s), then we can change from one system to the other just by changing the time variable. But the equation
dt/ds = λ(x(T(s)))
is just a 1-dimensional ODE where we know λ and x. By the Existence-Uniqueness Theorem, there is a unique solution for each initial condition.
A nice feature is that if you change the speeds (lengths) of the vector field, you don't change the phase portrait. For example, the system
ẋ = x
ẏ = 2y
and the system
ẋ = x(x² + y² + 1)
ẏ = 2y(x² + y² + 1)
have exactly the same phase portraits.
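A numerical sketch of this fact (assuming SciPy; the short time span for the second system avoids its finite-time blow-up). Both orbits lie on the same curve, along which y/x² is constant; only the parametrization differs.

    import numpy as np
    from scipy.integrate import solve_ivp

    f = lambda t, p: [p[0], 2*p[1]]
    g = lambda t, p: [p[0]*(p[0]**2 + p[1]**2 + 1), 2*p[1]*(p[0]**2 + p[1]**2 + 1)]

    a = solve_ivp(f, (0.0, 2.0),  [1.0, 1.0], rtol=1e-10)
    b = solve_ivp(g, (0.0, 0.08), [1.0, 1.0], rtol=1e-10)
    print(a.y[1] / a.y[0]**2)   # constant ~1 along the orbit
    print(b.y[1] / b.y[0]**2)   # constant ~1 along the same curve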
Another notion of when flows are "the same":
Definition 4.1. Two flows φ : U × R → U, ψ : V × R → V are said to be topologically equivalent if there exists a homeomorphism h : V → U such that, for all y₀ ∈ V,
(14) h({ψ(t, y₀) : t ∈ R}) = {φ(t, h(y₀)) : t ∈ R},
where the orientation of increasing time is also preserved.
Example: Zhukovskii's model of glider flight. (Theory of Oscillators, Andronov)
Velocity (speed v, direction θ):
v = v(cos θ, sin θ)
(d/dt)v = v̇(cos θ, sin θ) + vθ̇(−sin θ, cos θ)
The model is
(15) m dv/dt = −mg sin θ − (1/2)ρFC_x v²
(16) mv dθ/dt = −mg cos θ + (1/2)ρFC_y v²
where m is mass, g the acceleration due to gravity, ρ the density of air, F the area of the wing, C_x the resistance to motion per unit area of wing, and C_y the lift per unit area of the wing.
Figure 5. Here the glider is moving with speed v and angle θ.

So we get
dv/dt = −g sin θ − (ρFC_x/2m)v²
dθ/dt = (1/v)(−g cos θ + (ρFC_y/2m)v²)
What are the essential parameters for the qualitative behavior? We have control of the units of distance and time. Use a new variable y for v, such that v = ky, with k constant:
dy/dt = −(g/k) sin θ − (kρFC_x/2m)y²
dθ/dt = (1/y)(−(g/k) cos θ + (kρFC_y/2m)y²)
Use a new time variable τ with t = βτ:
dy/dτ = (dy/dt)(dt/dτ) = β(dy/dt)
dθ/dτ = β(dθ/dt)
i.e. multiply the old vector field by β to get
dy/dτ = −(βg/k) sin θ − (βkρFC_x/2m)y²
dθ/dτ = (1/y)(−(βg/k) cos θ + (βkρFC_y/2m)y²)
Pick β = k/g and k²ρFC_y/(2m) = g. Thus
β = √(2m/(gρFC_y)),
k = √(2mg/(ρFC_y)),
and
dy/dτ = −sin θ − (C_x/C_y)y²
dθ/dτ = (1/y)(−cos θ + y²)
Let a = C_x/C_y = (drag/area)/(lift/area). This is the only parameter left. What does this say? It says that the qualitative behavior, up to topological equivalence, depends only on C_x/C_y. This model works for a glider moving through air, but also works for a submarine in water.
This process is called nondimensionalization. When ky = v, choose a k with units of meters per second, and then y has no units. Choose k to be some characteristic velocity; we can use k to focus attention on different ranges of the v variable.
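A sketch integrating the dimensionless glider system for one value of the single remaining parameter a (the value 0.1 is an arbitrary illustrative choice, not from the lecture):

    import numpy as np
    from scipy.integrate import solve_ivp

    a = 0.1   # a = Cx/Cy, the drag/lift ratio

    def glider(tau, state):
        y, th = state   # y = dimensionless speed (> 0), th = flight angle
        return [-np.sin(th) - a*y**2, (-np.cos(th) + y**2)/y]

    sol = solve_ivp(glider, (0.0, 20.0), [1.5, 0.0], rtol=1e-9)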
5. Example, using the 2-body problem (Sept. 13)
Collision in the 2-body problem:
m₁q̈₁ = Gm₁m₂ (q₂ − q₁)/‖q₂ − q₁‖³
m₂q̈₂ = −Gm₁m₂ (q₂ − q₁)/‖q₂ − q₁‖³
(each mass is attracted toward the other with an inverse-square force). Let
u = (m₁q₁ + m₂q₂)/(m₁ + m₂)
be the center of mass.

We see that ü = 0, so u(t) = at + b and u̇ = a. Conservation of momentum tells us that
u̇ = (m₁q̇₁ + m₂q̇₂)/(m₁ + m₂).
Choosing u as above, and letting r = q₁ − q₂, think of (u, r) as new coordinates, and only consider the r equation. Write
m₁q̇₁ = p₁, m₂q̇₂ = p₂.
So in (u, r) variables, with v = u̇ and s = ṙ,
u̇ = v
v̇ = 0
ṙ = s
ṡ = q̈₁ − q̈₂ = −Gμ r/‖r‖³,
where μ = m₁ + m₂.
where  = m1 + m2 .
Remark: This is how the masses of planets are determined, by looking at the orbits of their moons.
Definition 5.1. A constant of motion or integral for a differential equation ẋ = f(x) is a function H from the phase space to R such that H(x(t)) is constant for every solution.
For
ṙ = s
ṡ = −Gμ r/‖r‖³,
let r = (x, y) and s = (z, w); then
(17) H(x, y, z, w) = (z² + w²)/2 − Gμ/√(x² + y²)
is a constant of motion (energy).
Note: In fact,
ẋ = ∂H/∂z
ẏ = ∂H/∂w
ż = −∂H/∂x
ẇ = −∂H/∂y,

so, if (x(t), y(t), z(t), w(t)) is a solution, then
(d/dt)H(x, y, z, w) = (∂H/∂x)ẋ + (∂H/∂y)ẏ + (∂H/∂z)ż + (∂H/∂w)ẇ = 0.
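A quick numerical spot-check of this conservation law (a sketch; units chosen so Gμ = 1, initial condition mine):

    import numpy as np
    from scipy.integrate import solve_ivp

    Gmu = 1.0
    def kepler(t, s):
        x, y, z, w = s
        r3 = (x*x + y*y)**1.5
        return [z, w, -Gmu*x/r3, -Gmu*y/r3]

    H = lambda s: (s[2]**2 + s[3]**2)/2 - Gmu/np.hypot(s[0], s[1])
    sol = solve_ivp(kepler, (0.0, 20.0), [1.0, 0.0, 0.0, 0.8], rtol=1e-10, atol=1e-12)
    print(H(sol.y[:, 0]), H(sol.y[:, -1]))   # equal up to integration error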
Definition 5.2. Given a function H(x₁, …, xₙ, y₁, …, yₙ) : R^{2n} → R, the Hamiltonian system for H is
(18) ẋ₁ = ∂H/∂y₁,  ẏ₁ = −∂H/∂x₁
     ⋮
     ẋₙ = ∂H/∂yₙ,  ẏₙ = −∂H/∂xₙ
Kepler's Laws:
1. Planets move on a conic with the sun at one focus.
2. The radius vector sweeps out equal areas in equal times.
3. The semimajor axis cubed is proportional to the period squared.
Note: There are some collisions, i.e. some solutions cannot be continued for all time.
What happens close to collision, and is the singularity removable? We want to preserve continuity as much as possible. Physically, we want to have a "bounce".
What about other force laws?
r̈ = −Gμ r/‖r‖^{a+1}
How do these orbits pass near 0?
ẋ = z
ẏ = w
ż = −Gμx/(√(x² + y²))^{a+1}
ẇ = −Gμy/(√(x² + y²))^{a+1}
When a = 2 above, it is the Newtonian problem. The energy is now
H(x, y, z, w) = (z² + w²)/2 − Gμ/((a − 1)(√(x² + y²))^{a−1}).
Our goal is to see what happens near r = 0. First, change variables, using polar coordinates, "blowing up" the origin. Start with
(x, y) = (ρ^α cos θ, ρ^α sin θ),
with α a constant. We will also use complex notation, i.e. x + iy = ρ^α e^{iθ}. Also choose new velocity coordinates
z + iw = ρ^β e^{iθ}(u + iv),
Figure 6. This change of coordinates is singular at ρ = 0.

where u is the "radial velocity", v the "angular velocity", and β a constant.
(19) h(ρ, θ, u, v) = (ρ^α cos θ, ρ^α sin θ, ρ^β(u cos θ − v sin θ), ρ^β(u sin θ + v cos θ)) = (x, y, z, w)
So we know that
z + iw = ẋ + iẏ = (d/dt)(ρ^α e^{iθ}) = αρ^{α−1}ρ̇ e^{iθ} + ρ^α θ̇ ie^{iθ}
and
ρ^β e^{iθ}(u + iv) = αρ^{α−1}ρ̇ e^{iθ} + ρ^α θ̇ ie^{iθ},
so we get, finally,
ρ̇ = (1/α)ρ^{β−α+1}u
θ̇ = ρ^{β−α}v.

6. McGehee Collision Manifold (Sept. 16)
Recall that we have
ẋ = z
ẏ = w
ż = −Gμx/(√(x² + y²))^{a+1}
ẇ = −Gμy/(√(x² + y²))^{a+1},
where a = 2 is the "inverse-square" Newtonian problem.
To look at the collision, we set
(x, y) = (ρ^α cos θ, ρ^α sin θ), i.e. x + iy = ρ^α e^{iθ},
and
z + iw = ρ^β e^{iθ}(u + iv),
where we can choose α, β below. We ended up with
ρ̇ = (1/α)ρ^{β−α+1}u
θ̇ = ρ^{β−α}v
To get u̇, v̇, we use:
ż + iẇ = βρ^{β−1}ρ̇ e^{iθ}(u + iv) + ρ^β θ̇ ie^{iθ}(u + iv) + ρ^β e^{iθ}(u̇ + iv̇)
and
ż + iẇ = −Gμ(x + iy)/(√(x² + y²))^{a+1} = −Gμ ρ^α e^{iθ}/(ρ^α)^{a+1} = −Gμ ρ^{−αa} e^{iθ}.
We can solve for u̇ and v̇, and after a lot of calculation, get:
u̇ = −Gμρ^{−αa−β} − (β/α)ρ^{β−α}u² + ρ^{β−α}v²
v̇ = −(1 + β/α)ρ^{β−α}uv.
Now choose α, β with the condition that the two exponents agree: −αa − β = β − α. If we also pick α, β so that β − α = 0, then α = 0, which doesn't work (since that would give x + iy = ρ⁰e^{iθ}).

Let β − α = −1, and then solve for α and β. In the case that a = 2, we get α = 2/3, β = −1/3, and
ρ̇ = (3/2)u
θ̇ = ρ⁻¹v
u̇ = ρ⁻¹(−Gμ + (1/2)u² + v²)
v̇ = ρ⁻¹(−(1/2)uv)
The vector field doesn't exist at ρ = 0. But we can do a change of time coordinate, multiplying the vector field by ρ. This corresponds to a new time τ where dt/dτ = ρ. So our new clock "runs slow" near ρ = 0. Thus (writing ′ for d/dτ):
ρ′ = (3/2)ρu
θ′ = v
u′ = −Gμ + (1/2)u² + v²
v′ = −(1/2)uv
What do we have?
1. We have removed the singularity at ρ = 0, and extended the vector field to ρ = 0.
2. Also, the system partially decouples: there are no θ's in ρ′, u′, v′, and no ρ's or θ's in u′ or v′.
Recall the energy in the new variables:
H(ρ, θ, u, v) = ρ^{2β}(u² + v²)/2 − Gμ/((a − 1)(ρ^α)^{a−1}),
with
x + iy = ρ^α e^{iθ} = ρ^{2/3} e^{iθ}
z + iw = ρ^β e^{iθ}(u + iv) = ρ^{−1/3} e^{iθ}(u + iv),
so
H = ρ^{−2/3}(u² + v²)/2 − Gμρ^{−2/3},
H(ρ, θ, u, v) = ρ^{−2/3}((u² + v²)/2 − Gμ).
How do we use this? What happens when ρ = 0?

ρ′ = 0
θ′ = v
u′ = −Gμ + (1/2)u² + v²
v′ = −(1/2)uv
So if ρ(0) = 0, then ρ(τ) = 0 for all τ. What this means is that we have added a boundary to the phase space.


Figure 7. "Blowing up" the origin: the point x = y = 0 becomes the set ρ = 0.

If we fix H(ρ, θ, u, v) = h (i.e. look at solutions with energy h), we get
h = ρ^{−2/3}((u² + v²)/2 − Gμ),
ρ^{2/3}h = (u² + v²)/2 − Gμ.
When ρ = 0 we have
(u² + v²)/2 − Gμ = 0.
We can say 3 things:
1. The behavior of the new vector field on the new boundary ρ = 0 is independent of h.
2. We can restrict attention to ρ = 0 and (θ, u, v) with u² + v² = 2Gμ, i.e. a circle in (u, v)-space.
3. Note that on this set, u′ = v²/2.
So look at the phase space u² + v² = 2Gμ, for θ ∈ [0, 2π]. With identifications, this gives us a torus.
Figure 8. The space we restrict to, when ρ = 0: we know that u² + v² = 2Gμ, which gives us a circle in the (u, v)-plane, or a cylinder in (u, v, θ)-space. But since θ is periodic of period 2π, we can identify the "front" and the "back" of the cylinder to get a torus.

So the torus will live in (θ, u, v)-space with the restriction u² + v² = 2Gμ, and the vector field
θ′ = v
u′ = v²/2
v′ = −(1/2)uv
Suppose I understand what happens for this flow on the torus. The original problem was to study orbits which come close to collision, i.e. come close to ρ = 0 (x = y = 0). An orbit coming close to ρ = 0 must behave almost the same as an orbit on ρ = 0, because of continuity of the solution flow.

Figure 9. Continuity with respect to initial conditions. The orbits that start close to the orbit on the torus stay close for some time.
This torus is called the McGehee Collision Manifold. The next step will be to analyze the flow for ρ = 0, i.e. "solve" these equations:
u′ = v²/2
v′ = −(1/2)uv.
7. Finishing analysis of collision manifold (Sept. 18)
We have added the boundary ρ = 0 to our phase space. v = 0 gives the rest points, and if v ≠ 0, then u′ > 0, i.e. u is increasing.
θ increases if v > 0, decreases if v < 0. Since u is always increasing, we can make u the time variable. Since u = u(t), we can think of t = t(u), the inverse. So
dθ/du = 2/v
dv/du = −u/v
Solving, we get
v dv = −u du
v²/2 = −u²/2 + C
v = ±√(2C − u²).
We already know that u² + v² = 2Gμ, so 2C = 2Gμ. Thus
v = ±√(2Gμ − u²).
Plug in and solve for θ (on the branch v > 0):
dθ/du = 2/√(2Gμ − u²)
θ(u) = ∫ 2/√(2Gμ − u²) du
Think about this flow. Why does it have rest points? In the original problem, we know that there are solutions which go to collision as t → t₀. In the new variables, there must be solutions which go to ρ = 0 as τ → ∞. The set of points which go to collision are the points which tend to rest points on the bottom. Recall
ρ′ = (3/2)ρu
u′ = v²/2 > 0 (for v ≠ 0).
A solution that goes to collision must have u ≤ 0 for all τ. The only way is to have the solution approach rest points on the bottom.
To pass close to collision means being on an orbit which is close to an orbit going to a rest point on ρ = 0. Follow the flow on ρ = 0 until near the top of ρ = 0, then leave near an ejection orbit. There are two ways to be close to collision in some direction θ: "outside" or "inside" the sheet of orbits which go to collision.

Figure 10. There is a circle of rest points on the top (which are unstable) and a circle on the bottom (stable). Orbits which come near the top circle are ejected, and orbits which come near the bottom circle represent near-collision orbits.

Figure 11. The angle at which the point-mass leaves is sensitive with respect to initial conditions. The left picture is the trajectory in phase space near the collision manifold, and the right shows the corresponding paths of the masses in position space, i.e. "in reality". At this point, there is no a priori reason that this picture couldn't happen, even in the case of the inverse square law. The integral calculation below shows that in the case of the inverse square law this picture cannot happen, but it could happen with some other force law.

Question: When the orbit comes close to collision, in what direction does it leave collision? This is the same question as: "Given an orbit on ρ = 0 which, as τ → −∞, goes to θ(τ) = θ₀, what happens to that orbit as τ → ∞? Which rest point on the top does the orbit approach?" We compute
(20) θ(∞) − θ(−∞) = ∫_{−√(2Gμ)}^{√(2Gμ)} 2/√(2Gμ − u²) du = 2π.
On ρ = 0, solution curves connect a rest point on the bottom to a rest point on the top which turns out to be exactly above the point on the bottom. For other force laws, this is not the case.
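The integral is elementary (substitute u = √(2Gμ) sin s), but a numerical check is immediate. A sketch with Gμ set to 1:

    import numpy as np
    from scipy.integrate import quad

    Gmu = 1.0
    c = np.sqrt(2*Gmu)
    dtheta, err = quad(lambda u: 2.0/np.sqrt(2*Gmu - u**2), -c, c)
    print(dtheta, 2*np.pi)   # both ~6.28318...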
CHAPTER 2

The Two Big Theorems


1. Existence-Uniqueness Theorem (Sept. 20)
Theorem 1.1 (The "Big Theorem"). Suppose U ⊆ R^n, f : U → R^n is a vector field and f is Lipschitz, i.e. there exists K such that for all x₁, x₂ ∈ U,
(21) ‖f(x₁) − f(x₂)‖/‖x₁ − x₂‖ < K.
Then there exists a local flow φ : V → U, where V ⊆ R × U and, for each x₀ ∈ U, {t | (t, x₀) ∈ V} is a single open interval, such that:
1. For all x ∈ U and all t ∈ R such that (t, x) ∈ V,
(22) (∂φ/∂t)(t, x) = f(φ(t, x)),
i.e. t ↦ φ(t, x) is a solution with initial condition x = φ(0, x).
2. φ is continuous on V.
3. φ is maximal (i.e. if x(t) is a solution of ẋ = f(x) and x is defined for t ∈ (−t₁, t₂), then (−t₁, t₂) × {x(0)} ⊆ V).
4. φ is unique (i.e. if x(t) is a solution for t ∈ (−t₁, t₂), then x(t) = φ(t, x(0))).
Moreover, if f is C^r, C^∞, or C^ω, then φ is C^r, C^∞, or C^ω, respectively.
Remarks:
1. Condition 2 is called "continuity with respect to initial conditions". If we have x₁, x₂ close, and t₁, t₂ close, then φ(t₁, x₁) is close to φ(t₂, x₂). This does not imply that if x₁, x₂ are close, then φ(t, x₁) and φ(t, x₂) are close for all t.
2. We can extend this to cover two more cases. Given ẋ = f(x, t), we can rewrite this as the autonomous system
dx/ds = f(x, t)
dt/ds = 1.
This new vector field is just as smooth as f. Now, suppose f depends on parameters α ∈ R^m, i.e. ẋ = f_α(x). For example,
ẋ = ax + by²
ẏ = cx² + d cos(αy)
has parameters a, b, c, d, α. We rewrite this as
ẋ = f_α(x)
α̇ = 0,

which is just as smooth as f. Given ẋ = f_α(x) which is C^r in x and α, then φ_α(t, x) is C^r in x, α, and t. This is known as continuity in parameters.
3. Much better existence theorems exist (we can even talk about f being "measurable"). However, this is the best possible result that also has uniqueness. For example, take
f(x) = √x for x ≥ 0, f(x) = 0 for x < 0.
We can make two solutions with x(0) = 0:
x₁(t) ≡ 0,
x₂(t) = 0 for t ≤ 0, x₂(t) = t²/4 for t > 0.
4. Suppose f is C²; then φ is also C² and
(∂φ/∂t)(t, x) = f(φ(t, x)).
If we differentiate with respect to x:
D_x (∂φ/∂t)(t, x) = D_x [f(φ(t, x))],
so we get
(∂/∂t)(D_x φ(t, x)) = D_x f|_{φ(t,x)} D_x φ(t, x).
For fixed t, x ↦ φ(t, x) is called the time t map. D_x φ(t, x) is the n × n matrix of partials of φ with respect to x.
Consider the differential equation
Ẋ = D_x f|_{φ(t,x₀)} X
for fixed x₀ ∈ U, where X(t) is an n × n matrix.
If t = 0, what is D_x φ(0, x)? Since x ↦ φ(0, x) = x, then D_x φ(0, x) = Iₙ.
So for fixed x₀, define the variational equation:
(23) Ẋ = D_x f|_{φ(t,x₀)} X
(24) X(0) = Iₙ
This is a nonautonomous linear equation on R^{n²}. The solution turns out to be D_x φ(t, x₀) = X(t). If f is C^r, the solution is at least C^{r−1}.
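In practice one integrates the variational equation alongside the flow. A sketch (the planar vector field here is my own example, not from the notes):

    import numpy as np
    from scipy.integrate import solve_ivp

    f  = lambda x: np.array([x[0] - x[0]**3, -x[1]])
    Df = lambda x: np.array([[1 - 3*x[0]**2, 0.0], [0.0, -1.0]])

    def extended(t, s):
        x, X = s[:2], s[2:].reshape(2, 2)
        return np.concatenate([f(x), (Df(x) @ X).ravel()])   # (23)

    x0 = np.array([0.1, 0.2])
    s0 = np.concatenate([x0, np.eye(2).ravel()])             # X(0) = I, as in (24)
    sol = solve_ivp(extended, (0.0, 1.0), s0, rtol=1e-10)
    DxPhi = sol.y[2:, -1].reshape(2, 2)                      # approximates D_x phi(1, x0)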
2. Some proof of the Existence-Uniqueness Theorem (Sept. 25)
We will prove a bunch of little pieces of the theorem.
Theorem 2.1 (Cauchy's Existence Theorem). Suppose f : U → R^n is analytic, so
f(x₁, x₂, …, xₙ) = (f₁(x₁, x₂, …, xₙ), …)

and, for k = 1, …, n,
f_k(x₁, x₂, …, xₙ) = Σ_{l ∈ (Z⁺)ⁿ} a^k_{l₁,…,lₙ} x₁^{l₁} x₂^{l₂} ⋯ xₙ^{lₙ}.
So if, for example, n = 2, expanding about 0,
f(x₁, x₂) = a₀₀ + a₁₀x₁ + a₀₁x₂ + a₀₂x₂² + a₁₁x₁x₂ + a₂₀x₁² + ⋯,
and the f_k's have nonzero radius of convergence at each point. If we expand about x⁰ = (x⁰₁, …, x⁰ₙ),
f(x₁, x₂) = a₀₀ + a₁₀(x₁ − x⁰₁) + a₀₁(x₂ − x⁰₂) + ⋯
Fix x₀ ∈ U. Then there exists x(t), where x : (−δ, δ) → U for some δ > 0, such that
1. x(0) = x₀,
2. x(t) is a solution,
3. if x(t) = (x₁(t), …, xₙ(t)), then
x_k(t) = Σ_{l=0}^∞ α^k_l t^l
with radius of convergence at least δ.
Idea of Proof. This comes from Siegel and Moser's Celestial Mechanics. Assume f is as above. Fix x₀ ∈ U; then there are r > 0 and M such that for all k,
|f_k(x)| < M for |x − x₀| < r.
This gives a "maximum speed" for the solution. Fix δ = r/((n + 1)M). This is how long (at least) we expect the solution to exist. Look for a solution x(t), x(0) = x₀, with ‖x(t) − x₀‖ < r for t ∈ (−δ, δ).
The steps in the proof:
1. Change variables so that x₀ = 0, r = M = 1. (Rescale x to get r = 1, and rescale time to get M = 1.)
2. Solve formally: write, for k = 1, …, n,
x_k(t) = Σ_{m=0}^∞ α^k_m t^m.
Plug this into ẋ = f(x) and solve for the α's. For example:
ẋ₁(t) = Σ_{m=1}^∞ m α¹_m t^{m−1}
= f₁(x₁, x₂, …, xₙ)
= Σ_l a¹_{l₁,…,lₙ} x₁^{l₁} ⋯ xₙ^{lₙ}
= Σ_l a¹_{l₁,…,lₙ} (Σ_m α¹_m t^m)^{l₁} ⋯ (Σ_m αⁿ_m t^m)^{lₙ}
Equate coefficients and solve for the α's:

α¹₀ = α²₀ = ⋯ = αⁿ₀ = 0
α¹₁ (constant on left) = a¹₀₀⋯₀ (constant on right)
2α¹₂ = α¹₁a¹₁₀⋯₀ + α²₁a¹₀₁⋯₀ + ⋯
In general, we find that each α^k_m is a polynomial in the a^k_{l₁,…,lₙ} with l₁, …, lₙ < m.
3. We have a formal solution by solving for the α^k_m's in terms of the a's. Does this power series for x_k(t) converge? (If yes, then it must be the solution.)
To show that the x_k(t)'s converge, use the "method of majorants". This is essentially the Comparison Test: we find a power series that converges and that has coefficients bigger than those of the x_k(t)'s.
To build the new power series (the majorant), look at
ẏ = g(y), g_k(y) = Σ b^k_{l₁,…,lₙ} y₁^{l₁} ⋯ yₙ^{lₙ}
with |a^k_{l₁,…,lₙ}| ≤ b^k_{l₁,…,lₙ}. The claim is: if y_k(t) = Σ β^k_m t^m is a solution with y(0) = 0, then for all k, m: |α^k_m| ≤ β^k_m.
Note: The equations for the β's are the same as the equations for the α's with the a's replaced by b's. The equation for each α^k_m is a polynomial in the a's with all positive coefficients. So replacing the a's with bigger b's will make the β's bigger than the α's.
4. Find a nice g whose solution we know. A fact from complex analysis: because |f_k| < 1 (= M) on the ball of radius 1 (= r), we know
|a^k_{l₁,…,lₙ}| ≤ 1.
So let all the b^k's be 1. Solve
ẏ_k = g_k(y) = Σ_{l₁,…,lₙ} y₁^{l₁} y₂^{l₂} ⋯ yₙ^{lₙ}.
Note: each component of g(y) is the same, and
Σ_{l₁,…,lₙ} y₁^{l₁} ⋯ yₙ^{lₙ} = 1 + y₁ + y₂ + ⋯ + yₙ + y₁² + y₁y₂ + y₁y₃ + ⋯
(25) = Π_{r=1}^n (1 − y_r)⁻¹,
since 1 + y + y² + ⋯ = 1/(1 − y). So
ẏ₁ = Π_{r=1}^n (1 − y_r)⁻¹
with I.C. y(0) = 0. The solution must have y₁(t) = y₂(t) = ⋯, so
ẏ(t) = (1 − y)^{−n}, y(0) = 0,

y_k(t) = y(t) = 1 − (1 − (n + 1)t)^{1/(n+1)},
and each y_k(t) is a convergent power series for |t| < 1/(n + 1), with |y| < 1. So the x_k(t)'s converge in at least the same size ball. □
3. Some more proof of the Existence-Uniqueness Theorem (Sept. 27)
Example:
dx/dt = a₁ + a₂x + a₃y + a₄x² + a₅xy + a₆y²
dy/dt = b₁x + b₂y
Assume x(t) = Σ_{m=0}^∞ αₘtᵐ and y(t) = Σ_{m=0}^∞ βₘtᵐ, with initial condition α₀ = β₀ = 0. Plug in:
Σ mαₘtᵐ⁻¹ = a₁ + a₂ Σ αₘtᵐ + a₃ Σ βₘtᵐ + a₄ (Σ αₘtᵐ)² + a₅ (Σ αₘtᵐ)(Σ βₘtᵐ) + a₆ (Σ βₘtᵐ)²
Σ mβₘtᵐ⁻¹ = b₁ Σ αₘtᵐ + b₂ Σ βₘtᵐ
So we need to equate coefficients, and we find that the αₘ's and the βₘ's are polynomials in the a's and b's with positive coefficients. So increasing the a's and b's in absolute value increases the α's and the β's.
For now, we assume that we are given f : U → R^n Lipschitz with constant L, i.e.
‖f(x₁) − f(x₂)‖/‖x₁ − x₂‖ ≤ L.
To attack the uniqueness problem, look at the "separation problem". Suppose we are given two initial conditions x₁, x₂ ∈ U, and suppose we have two solutions x₁(t), x₂(t) with xᵢ(0) = xᵢ. Can we get an estimate on the distance ‖x₁(t) − x₂(t)‖?
The only thing we know is that ẋᵢ = f(xᵢ(t)), so
(26) ‖(d/dt)(x₁(t) − x₂(t))‖ = ‖f(x₁(t)) − f(x₂(t))‖
(27)                         ≤ L‖x₁(t) − x₂(t)‖.
Let z = ‖x₁(t) − x₂(t)‖. We "sort of" have that ż ≤ Lz. (Not exactly, because (d/dt)(x₁(t) − x₂(t)) ≠ (d/dt)‖x₁(t) − x₂(t)‖.) The worst case would be ż = Lz, i.e. z(t) = z(0)e^{Lt}. This means that how fast solutions separate depends on L, i.e. on how fast f changes from point to point.
Change this to integral equations:
∫₀ᵗ ẋᵢ(τ) dτ = ∫₀ᵗ f(xᵢ(τ)) dτ

xᵢ(t) − xᵢ(0) = ∫₀ᵗ f(xᵢ(τ)) dτ
xᵢ(t) = xᵢ + ∫₀ᵗ f(xᵢ(τ)) dτ
Then we have
(28) ‖x₁(t) − x₂(t)‖ = ‖x₁ − x₂ + ∫₀ᵗ (f(x₁(τ)) − f(x₂(τ))) dτ‖
(29) ≤ ‖x₁ − x₂‖ + ∫₀ᵗ ‖f(x₁(τ)) − f(x₂(τ))‖ dτ
(30) ≤ ‖x₁ − x₂‖ + ∫₀ᵗ L‖x₁(τ) − x₂(τ)‖ dτ
Theorem 3.1 (Gronwall's Inequality). If δ, L > 0, φ : R → [0, ∞) is continuous, and
φ(t) ≤ δ + ∫₀ᵗ Lφ(τ) dτ,
then
φ(t) ≤ δe^{Lt}.
The proof is left as an exercise. This lemma gives us that
‖x₁(t) − x₂(t)‖ ≤ ‖x₁ − x₂‖e^{Lt}.
Corollary 3.1 (Uniqueness). Let x₁(t), x₂(t) be solutions with x₁(0) = x₂(0). Then ‖x₁(t) − x₂(t)‖ ≤ 0, i.e. x₁(t) = x₂(t) for all t.
Corollary 3.2. If φ : R × U → U is the local flow solution for ẋ = f(x), then φ is continuous in x.
Proof: Fix t. Then
‖φ_t(x₁) − φ_t(x₂)‖ ≤ e^{Lt}‖x₁ − x₂‖. □
4. Picard iteration (Sept. 30)
Idea: Start with a candidate for a solution and improve it iteratively, i.e. give a process for improving approximate solutions.
We know that x(t) is a solution iff
x(t) = x₀ + ∫₀ᵗ f(x(τ)) dτ.
So given γ₀ : R → U, we can check whether it is a solution by seeing if it satisfies
γ₀(t) = x₀ + ∫₀ᵗ f(γ₀(τ)) dτ.
If γ₀ is not a solution, we might hope to get a closer approximation γ₁(t) by

γ₁(t) = x₀ + ∫₀ᵗ f(γ₀(τ)) dτ.
So we define an operator
Γ : γ₀ ↦ γ₁,
where
[Γ(γ₀)](t) = x₀ + ∫₀ᵗ f(γ₀(τ)) dτ.
A solution would be a γ* such that
Γ(γ*) = γ*,
so γ* would be a fixed point of Γ. We hope that there is only one fixed point, and that Γⁿ(γ₀) → γ*.
So we assume that f : U → R^n is Lipschitz with constant L. We must set up a domain for Γ, and show that Γ has fixed points.
Note: γ : [−δ, δ] → U for some δ > 0.
Figure 1. We choose δ so that γ(t) cannot travel outside of U₁.

We want that Γ(γ) : [−δ, δ] → U; we must ensure that Γ(γ) stays in U.
Fix U₁ ⊆ U, compact, containing x₀. Now, let K = sup_{x∈U₁} ‖f(x)‖. Choose V contained in the interior of U₁, V compact, with x₀ in the interior of V, and fix ε > 0 s.t. ‖x − y‖ > ε whenever x ∈ V, y ∈ U \ U₁.
Let δ = ε/2K, so if γ(t) has speed less than K it can't travel from x₀ to outside of U₁ in time less than δ.
Let C([−δ, δ], U) be the set of continuous functions from [−δ, δ] to U. If γ₁, γ₂ ∈ C([−δ, δ], U), then
‖γ₁ − γ₂‖ = sup_{t∈[−δ,δ]} ‖γ₁(t) − γ₂(t)‖.
We know that C([−δ, δ], U) is complete with this metric, i.e. if {γₙ} is Cauchy with respect to this norm, then lim γₙ exists and is in C([−δ, δ], U).

So, we really need to show that if γ(0) = x₀ and γ ∈ C([−δ, δ], U), then Γ(γ) ∈ C([−δ, δ], U).
1. Γ(γ) is continuous, and stays in U₁:
‖Γ(γ)(t) − x₀‖ = ‖x₀ + ∫₀ᵗ f(γ(τ)) dτ − x₀‖
≤ ∫₀ᵗ ‖f(γ(τ))‖ dτ ≤ ∫₀ᵗ K dτ = K|t|.
If t ∈ [−δ, δ], then
‖Γ(γ)(t) − x₀‖ ≤ Kδ = K(ε/2K) = ε/2,
which means that
Γ(γ)(t) ∈ U₁ for all t ∈ [−δ, δ].
Let C([−δ, δ], U, x₀) = {γ ∈ C([−δ, δ], U) | γ(0) = x₀}. Then we see that Γ : C([−δ, δ], U, x₀) → C([−δ, δ], U, x₀).
2. Γ is a contraction. Pick γ₁, γ₂ ∈ C([−δ, δ], U, x₀). Look at
‖Γ(γ₁) − Γ(γ₂)‖ = sup_{t∈[−δ,δ]} ‖Γ(γ₁)(t) − Γ(γ₂)(t)‖
= sup_t ‖∫₀ᵗ f(γ₁(τ)) dτ − ∫₀ᵗ f(γ₂(τ)) dτ‖
≤ sup_t ∫₀ᵗ ‖f(γ₁(τ)) − f(γ₂(τ))‖ dτ
≤ sup_t ∫₀ᵗ L‖γ₁(τ) − γ₂(τ)‖ dτ
≤ sup_t ∫₀ᵗ L‖γ₁ − γ₂‖ dτ
= sup_t |t| L ‖γ₁ − γ₂‖
≤ δL‖γ₁ − γ₂‖.
We have to have κ = δL < 1. Make δ smaller if necessary, i.e.
δ < min(1/2L, ε/2K).
We get that
‖Γ(γ₁) − Γ(γ₂)‖ ≤ κ‖γ₁ − γ₂‖.
Thus if γ₀ ∈ C([−δ, δ], U, x₀), we can construct γₙ₊₁ = Γ(γₙ), so that

‖γₙ − γₙ₊₁‖ = ‖Γⁿ(γ₀) − Γⁿ(γ₁)‖ ≤ κⁿ‖γ₀ − γ₁‖.
So if m > n,
‖γₘ − γₙ‖ ≤ ‖γₙ − γₙ₊₁‖ + ⋯ + ‖γₘ₋₁ − γₘ‖
≤ (1 + κ + κ² + ⋯ + κ^{m−n−1})‖γₙ − γₙ₊₁‖
≤ ((1 − κ^{m−n})/(1 − κ)) κⁿ‖γ₀ − γ₁‖.
So {γₙ} is Cauchy, and thus converges to γ*, which is fixed because
Γ(γ*) = Γ(lim_{n→∞} γₙ) = lim_{n→∞} Γ(γₙ) = lim_{n→∞} γₙ₊₁ = γ*.
Also, we know this fixed point is unique, so we're done.
Note: γ* is defined on a short interval of time. Also,
γ*(t) = x₀ + ∫₀ᵗ f(γ*(τ)) dτ,
so
γ̇*(t) = f(γ*(t)),
and γ* is automatically differentiable.
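For ẋ = x, x₀ = 1, the Picard iterates starting from the constant candidate γ₀ ≡ 1 are exactly the Taylor partial sums of e^t. A numerical sketch of the iteration (the discretization details are mine):

    import numpy as np

    delta = 0.5
    t = np.linspace(-delta, delta, 1001)          # t[500] = 0
    dt = t[1] - t[0]

    gamma = np.ones_like(t)                       # gamma_0(t) = 1
    for n in range(8):                            # gamma_{n+1} = Gamma(gamma_n)
        F = np.concatenate(([0.0], np.cumsum((gamma[1:] + gamma[:-1]) / 2))) * dt
        gamma = 1.0 + (F - F[500])                # 1 + integral of gamma from 0 to t
    print(np.max(np.abs(gamma - np.exp(t))))      # tiny: the iterates converge to e^t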
5. Invariant Sets (Oct. 2)
From now on we will study a flow φ : R × U → U associated to a vector field f : U → R^n, where U is a surface in R^n.
Ideally, we would like to find some sort of algorithm for going from x₀ to complete information about O(x₀) = {φ(t, x₀) | t ∈ R}. Since we can't do all of that, we have two choices:
1. Get a lot of info about a few orbits, or
2. Get a little info about a lot of orbits.
Definition 5.1. Given a flow φ : R × U → U, an invariant set is a subset I ⊆ U s.t. for all x₀ ∈ I, O(x₀) ⊆ I. In other words, if x₀ ∈ I, then φ(t, x₀) ∈ I for all t. A positively invariant set is a set J ⊆ U s.t. if x₀ ∈ J, then O⁺(x₀) ⊆ J.
Definition 5.2. An isolated invariant set is a subset I ⊆ U satisfying
1. I is invariant,
2. I is compact,
3. there exists a compact N ⊆ U s.t. I ⊆ N° and I is the maximal invariant set in N (if J ⊆ N, J invariant, then J ⊆ I).
N is called an isolating neighborhood.

Figure 2. A maximal invariant set.

Lemma 5.1. If I is the maximal invariant set in N, and x₀ ∈ N \ I, then there is a t such that φ(t, x₀) ∉ N.
Proof: If O(x₀) ⊆ N, then O(x₀) is invariant, so O(x₀) ⊆ I. □
Theorem 5.1. Suppose I, J are invariant for φ. Then I ∩ J, I ∪ J, the closure of I, and I° are all invariant sets. (If x₀ ∈ I ∩ J, then O(x₀) ⊆ I and O(x₀) ⊆ J.)
Philosophy: We study flows by studying invariant sets, and there are two ways to do this:
1. What's the flow like restricted to an invariant set?
2. How are invariant sets connected?
Figure 3. Orbits moving from one maximal invariant set to another.
Are there orbits φ(t, x) → J (as t → ∞), or φ(t, x) → I (as t → −∞)?

Definition 5.3. For φ a flow and x an initial point, the ω-limit set of x is
ω(x) = ∩_{t>0} cl(φ([t, ∞), x)),
where cl denotes closure. The α-limit set is the same thing in backwards time:
α(x) = ∩_{t>0} cl(φ((−∞, −t], x)).
The first type of invariant sets are rest points. Question 1 is easy. For question 2, we ask: what do orbits near the rest points do? We use calculus (mostly) to study the local behavior.
Example: ẋ = Ax, x ∈ R^n, A an n × n matrix. We know that x = 0 is a rest point. It is isolated as a rest point provided that det(A) ≠ 0 (no other rest points nearby). We have to be careful, however. For example, let
A = [ 0 −1 ; 1 0 ].
It has eigenvalues ±i, and x = 0 is not an isolated invariant set.
We will see that the condition guaranteeing that x = 0 is isolated is that all eigenvalues have nonzero real part. We will get the flow
φ(t, x) = e^{At}x.
We will start to classify the linear systems:
1. Find the eigenvalues λ₁, λ₂, …, λₙ (with repeats).
2. Put A into normal form, i.e. find P s.t. P⁻¹AP = B, where B is block diagonal, built from real Jordan blocks
[ λ 1   ]
[   λ 1 ]
[     λ ]
for real eigenvalues λ, and blocks
[ a −b  1  0 ]
[ b  a  0  1 ]
[ 0  0  a −b ]
[ 0  0  b  a ]
for complex pairs a ± bi. Essentially, put the matrix in real Jordan canonical form.
3. Change variables x = Py, so that ẏ = P⁻¹APy = By, and if B = diag(B₁, B₂, …), then
y(t) = e^{Bt}y₀ = diag(e^{B₁t}, e^{B₂t}, …) y₀.
4. Classify each block flow
y(t) = e^{Bⱼt}y₀.

We will try to classify the nonlinear behavior in the same way: linearize, and try to describe the behavior near x = 0. Now we can start classifying.
Definition 5.4. An isolated invariant set I is called an attractor if there exists an isolating neighborhood N (so I is the maximal invariant set in N) such that for all x ∈ N, φ(t, x) ∈ N° for all t > 0. Such an N is called an attractor block. An attracting fixed point is called a sink.
Lemma 5.2. If N is an attractor block for I, then for all x ∈ N, ω(x) ⊆ I.
Proof: Note that ω(x) is an invariant set (exercise), so ω(x) ⊆ I, since I is maximal in N and ω(x) ⊆ N. □
In a weak sense, sinks are all the same, as we see by the following
Theorem 5.2. If φ, ψ are flows on R^n with N ⊆ R^n an attractor block for both φ and ψ, with maximal invariant sets a fixed point x₀ for φ and a fixed point y₀ for ψ, then φ|_N and ψ|_N are conjugate, i.e. there is a homeomorphism h : N → N such that
ψ(t, h(x)) = h(φ(t, x)).
Figure 4. These flows are conjugate.

Lemma 5.3. If x ∈ N and x ≠ x₀, then there exists t ≤ 0 such that φ(t, x) ∈ ∂N and φ(s, x) ∈ N° for 0 ≥ s > t.
Proof: If not, then O(x) ⊆ N, which implies that O(x) ⊆ {x₀} because {x₀} is maximal — a contradiction. □
Idea of proof of theorem:
1. Define h|_∂N = identity.
2. For x ∈ N°, x ≠ x₀, pick the first t < 0 such that φ(t, x) ∈ ∂N, and define h(x) = ψ(−t, h(φ(t, x))).
3. Let h(x₀) = y₀.
4. Show that h is continuous.

6. Sinks and conjugacy (Oct. 4 and Oct. 7)
Given two flows with sinks, we know that these flows are locally conjugate near the sinks:
1. Define h on boundaries.
2. Extend h inside using the flows.
3. Show continuity of h (and h⁻¹).
Note: h is only a homeomorphism, but this is the best one can hope for.
Example: Suppose that
ẋ = f(x)
ẏ = g(y)
with f(0) = g(0) = 0. Suppose there is h : U → V, where U is a neighborhood of 0 in y-space and V is a neighborhood of 0 in x-space, such that h is a homeomorphism taking g to f, i.e. h is a conjugacy between the solution flow of ẏ = g(y) and that of ẋ = f(x), with x = h(y).
If h is smooth, then differentiating x(t) = h(y(t)) gives
f(h(y)) = ẋ = Dh|_y ẏ = Dh|_y g(y).
In particular f(h(0)) = Dh|₀ g(0) = 0, so h(0) is a fixed point of f; assume h(0) = 0. Take f(h(y)) = Dh|_y g(y) and differentiate:
Df|_{h(y)} · Dh|_y = D²h|_y g(y) + Dh|_y · Dg|_y.
At y = 0, we get
Df|₀ · Dh|₀ = D²h|₀ g(0) + Dh|₀ · Dg|₀ = Dh|₀ · Dg|₀,
so
Df|₀ = Dh|₀ · Dg|₀ · (Dh|₀)⁻¹.
So, if n = 1, we get f′(0) = g′(0). If n > 1, then Df|₀ is a matrix conjugate to Dg|₀. So Df|₀ and Dg|₀ have the same eigenvalues, and the same dimension eigenspaces (same Jordan form).
In short: given 2 vector fields f, g with fixed points at 0, if the solution flows of f and g are smoothly conjugate, then Df|₀ and Dg|₀ are similar matrices.
Example: ẋ = −x, ẏ = −2y. These are certainly topologically the same, but h cannot be differentiable at 0.
So if we have ẋ = f(x) with f(0) = 0, then Df|₀ should help in determining the local structure. Recall, we can write
f(x) = f(0) + Df|₀x + higher order terms,
so our system ẋ = f(x) is
ẋ = Df|₀x + h.o.t.
Given ẋ = f(x), f(0) = 0, can you tell if 0 is a sink?
Theorem 6.1. If ẋ = f(x) has 0 as a rest point and the eigenvalues λ₁, λ₂, …, λₙ of Df|₀ all have negative real part, then 0 is a sink.
Remark: Assume that f is C².
Proof: Show that there is a one-parameter family of attractor blocks around 0 which limit onto 0. (This will show that {0} is an isolated invariant set and that 0 is a sink.)

Figure 5. A sink.
There are two big steps:
1. Compare ẋ = f(x) to ẋ = Ax, where A = Df|₀.
2. Show attractor block results for ẋ = Ax.
1. Write ẋ = f(x) = Ax + g(x), where g(x) = f(x) − Ax. Then there is a K > 0 such that for all ‖x‖ < 1,
‖g(x)‖ ≤ K‖x‖².
We know this is true in one dimension for C² functions by Taylor's Remainder Theorem. For each x₀ with ‖x₀‖ = 1, look at the map
s ↦ f(sx₀) − A(sx₀);
each component satisfies the 1-dimensional Taylor theorem, so
‖f(sx₀) − A(sx₀)‖ < K_{x₀}s².
K_{x₀} is determined by (d²/ds²)f(sx₀), which is determined by D²f. But all second partials of f are uniformly bounded on the unit ball, so we can replace K_{x₀} by a uniform K.

2. Find P such that P⁻¹AP = B, where B is the Jordan form of A, and let x = Py. Then
Pẏ = ẋ = f(x) = Ax + g(x) = APy + g(Py),
and
ẏ = By + P⁻¹(g(Py)).
Let g₁(y) = P⁻¹(g(Py)).
Claim: ‖g₁(y)‖ < K₁‖y‖², provided that ‖y‖ < δ₁, for some fixed δ₁ > 0.
Proof:
‖P⁻¹(g(Py))‖ ≤ ‖P⁻¹‖ ‖g(Py)‖ ≤ ‖P⁻¹‖ K ‖Py‖²,
provided that ‖Py‖ < 1. But there is a δ₁ > 0 such that if ‖y‖ < δ₁, then ‖Py‖ < 1, and
‖g₁(y)‖ ≤ ‖P⁻¹‖ K ‖P‖² ‖y‖² = K₁‖y‖². □
If B were diagonal, then life would be simple. But B might be of the form
B = diag(B₁, B₂, …), with, for example, B₁ = [ λ 1 0 ; 0 λ 1 ; 0 0 λ ].
3. Make the off-diagonal terms of B small. Suppose
B₁ = [ λ 1 0 ; 0 λ 1 ; 0 0 λ ].
Look for a diagonal matrix P such that
P⁻¹B₁P = [ λ ε 0 ; 0 λ ε ; 0 0 λ ].
It is easy to check that
P = diag(1, ε, ε²)
works. As an exercise, choose P for
B = [ a −b 1 0 ; b a 0 1 ; 0 0 a −b ; 0 0 b a ]
and end up with a P s.t. all off-diagonal terms of P⁻¹BP are ε.
Let y = Pz. The new equation is
ż = (P⁻¹BP)z + P⁻¹g₁(Pz) = B₁z + g₂(z),
writing B₁ = P⁻¹BP and g₂(z) = P⁻¹g₁(Pz).

There is a δ₂ > 0 s.t. if ‖z‖ < δ₂, then ‖g₂(z)‖ < K₂‖z‖².
4. Check that in these coordinates, for small r > 0, if 0 < ‖z‖ ≤ r, then
(B₁z + g₂(z)) · z < 0,
i.e. the vector field on the boundary points into the ball, so the ball of radius r is an attractor block.

Figure 6. Any point on the boundary of the circle has a vector which points inside, so this ball is an attractor block.

We want to show that B₁z · z is sufficiently negative so that g₂(z) · z can't make the sum positive. We induct on the size of the Jordan blocks of B:
B = diag(B¹, B², …),
so
Bz · z = B¹z₁ · z₁ + B²z₂ · z₂ + ⋯
So we have several cases:
1. Bᵢ = [λᵢ], a simple real eigenvalue; then
Bᵢzᵢ · zᵢ = λᵢzᵢ² < 0 for λᵢ < 0.
2. Bᵢ = λI + εN (λ on the diagonal, ε on the superdiagonal); then
(Bᵢz) · z = Σⱼ (λzⱼ² + εzⱼ₊₁zⱼ).

Now, for the nilpotent part N (1's on the superdiagonal, 0's elsewhere),
|εNz · z| ≤ ε‖z‖².
So provided that ε < |λ|/2, we get
Bᵢz · z ≤ λ‖z‖² + ε‖z‖² < 0,
because λ < 0.
3. Bᵢ = [ a −b ; b a ]: (Bᵢz) · z = a(z₁² + z₂²) < 0 for a < 0.
4. Bᵢ = [ a −b ε 0 ; b a 0 ε ; ⋱ ]: (Bᵢz) · z ≤ a‖z‖² + ε‖z‖²,
which is < 0 provided that ε < |a|/2.
So let −η < 0 be the largest real part of an eigenvalue. Provided that ε is sufficiently small (less than η/2, half the distance from the imaginary axis to the closest eigenvalue), we get Bz · z ≤ −(η/2)‖z‖²; i.e. if the off-diagonal terms of B are sufficiently small, then Bz · z behaves like (Diag B)z · z < 0.
But we wanted
(B₁z + g₂(z)) · z < 0.
Note that
|g₂(z) · z| ≤ K₂‖z‖² · ‖z‖ = K₂‖z‖³,
provided that ‖z‖ < δ₂. Thus we have
(B₁z + g₂(z)) · z = B₁z · z + g₂(z) · z < −(η/2)‖z‖² + K₂‖z‖³ = ‖z‖²(−η/2 + K₂‖z‖).
So provided that r < η/2K₂, we have
(B₁z + g₂(z)) · z < 0 for 0 < ‖z‖ ≤ r.
Pulling back to the original coordinates, we get a family of attractor blocks about x = 0 in the original variables. Thus 0 is a sink. □
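The theorem reduces checking for a sink to linear algebra. A sketch for a concrete field of my own choosing, f(x, y) = (−x + y², −2y + xy), which has Df|₀ = diag(−1, −2); the loop also spot-checks the attractor-block inequality from the proof on a small circle:

    import numpy as np

    Df0 = np.array([[-1.0, 0.0], [0.0, -2.0]])
    print(np.linalg.eigvals(Df0))          # all real parts negative, so 0 is a sink

    f = lambda p: np.array([-p[0] + p[1]**2, -2*p[1] + p[0]*p[1]])
    for ang in np.linspace(0, 2*np.pi, 12, endpoint=False):
        z = 0.1 * np.array([np.cos(ang), np.sin(ang)])
        assert np.dot(f(z), z) < 0         # the field points into the ball of radius 0.1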
Remark: We showed that two sinks are conjugate by a homeomorphism. We didn't expect the conjugacy to be smooth, because then the eigenvalues would have to be the same.

For example, suppose we have systems ẋ = f(x) and ẏ = g(y), φ is a solution flow for f, ψ is a solution flow for g, and there exists h : N → N such that
h(ψ(t, y)) = φ(t, h(y)).
If h is C², then Df|_{x₀} and Dg|_{y₀} are similar matrices.
Example:
ẋ = −x
ẏ = −2y
Then
φ(t, x) = e⁻ᵗx
ψ(t, y) = e⁻²ᵗy.
Assume that there is an h : (−1, 1) → (−1, 1) which conjugates, so that
h(e⁻²ᵗy) = e⁻ᵗh(y).
But h can't be C¹ at 0: suppose h(y₀) = x₀, so that
h(e⁻²ᵗy₀) = e⁻ᵗx₀.
Setting y = e⁻²ᵗy₀ gives e⁻ᵗ = √(y/y₀), so (for y > 0)
h(y) = x₀√(y/y₀) = k√y, with k = x₀/√(y₀),
and h′(0) cannot exist.
So far, what we have is this: if ẋ = f(x) on R^n and f(x₀) = 0, then Df|_{x₀} having all eigenvalues in the left half-plane implies x₀ is a sink, and all sinks in R^n are the same with respect to a continuous change of variables.
A symmetric theorem gives us that if Df|_{x₀} has all eigenvalues in the right half-plane, then x₀ is a source (i.e. if we replace f by −f, x₀ is a sink).
Two questions:
1. What about "mixed cases" (some eigenvalues with positive real part, some with negative)?
2. Can we do a better classification (smoother) if we take eigenvalues into account?
Definition 6.1. Given ẋ = f(x), f a smooth vector field on R^n, f(x₀) = 0, we say that x₀ is hyperbolic if none of the eigenvalues of Df|_{x₀} have real part = 0.
We will study hyperbolic rest points, with two attacks:
1. Try to show that ẋ = f(x) is conjugate to ẏ = Ay near x₀, where A = Df|_{x₀}.
2. Try to single out the directions in which x₀ looks like a sink or a source.

7. Preliminary to Stable/Unstable Manifold Theorem (Oct. 9)
Our goal is to understand orbits in a neighborhood of fixed points. Given f : R^n → R^n, f(0) = 0, we have two attacks:

Figure 7. This is a continuous conjugacy, but it is obviously not a differentiable conjugacy.
1. Let A = Df|₀; try to find a conjugacy between the solution flow of ẋ = f(x) and the solution flow of ẏ = Ay (a local conjugacy). This is good in that it gives us info about all the orbits near 0. It is bad in that, in general, we can't hope for a differentiable conjugacy (it will only be a homeomorphism). (Hartman-Grobman-Sternberg Theorem)
2. Try to separate out the most important/interesting orbits and show that they have nice structures. Suppose A = Df|₀ has eigenvalues λ₁, …, λ_s, μ₁, …, μ_u, where the real parts of the λ's are < 0 and the real parts of the μ's are > 0.
If 0 is hyperbolic, suppose A is in real Jordan form:
A = [ λ blocks  0 ; 0  μ blocks ]
Figure 8. This is a saddle point; the λ's are stable and the μ's are unstable.
We know from part 1 that near 0, the phase space of the solution to ẋ = f(x) "looks like" the linear phase space, up to homeomorphism. We want to show that the sets of points whose orbits go to 0 as t → ∞, or as t → −∞, are nice smooth manifolds, i.e. we recover some of the structure of the linear flow by restricting to solutions that go to 0 as t → ±∞.

Figure 9. The flow of ẋ = f(x) will look like its linear part ẏ = Ay.

Theorem 7.1 (Hartman-Grobman-(part of) Sternberg). Let x₀ be a hyperbolic fixed point for the C² vector field f : R^n → R^n, with φ a solution flow for ẋ = f(x). Let A = Df|_{x₀}. Then there is a neighborhood V of x₀ and a homeomorphism h : V → V which locally conjugates φ to the solution flow of ẋ = A(x − x₀), i.e. if x ∈ V, then
φ(t, h(x)) = h(x₀ + e^{At}(x − x₀))
for all t with both sides in V for times between 0 and t.
Remarks:
1. Note we are comparing solutions of ẋ = f(x) to those of ẋ = A(x − x₀), where f(x) = A(x − x₀) + h.o.t.
2. h is only guaranteed to be a homeomorphism. However, if n = 2 and f is C², then h can be chosen to be a C¹ diffeomorphism (Hartman). Also, if λ₁, λ₂, …, λₙ are the eigenvalues of the linear part Df|_{x₀}, f ∈ C^∞, and for all k, λ_k ≠ Σ_{j=1}^n mⱼλⱼ for all m₁, …, mₙ ∈ Z⁺ ∪ {0} with Σⱼ mⱼ ≥ 2, then we can choose h to be C^∞. The converse is not strictly true. An example of what is forbidden: A of dimension 3 with λ₁ = λ₂ + λ₃, called a "resonance" in the eigenvalues.
Remark: C¹ is good, since spiral sinks are distinguished from straight-line sinks, etc.
Definition 7.1. Suppose X ⊆ R^n and X = g(R^m) (or g(U), U ⊆ R^m open), where g : R^m → R^n, and
1. g is one-to-one, and
2. Dg|_x (an n × m matrix) has rank m at all x ∈ R^m.
Then X is called an immersed m-submanifold.
Example: m = 1, n = 2, g : R → R². If g(R) is immersed, then:
1. g(R) doesn't cross itself,
2. g′ ≠ 0 for every x ∈ R: writing g(x) = (g₁(x), g₂(x)), we have g′(x) = (g₁′(x), g₂′(x)), so g₁′ and g₂′ are never simultaneously 0.

Figure 10. These are not allowed. The first has a cusp and the second intersects itself.
Figure 11. These are allowed. The manifold can limit onto itself.

We say that X is C^k if g is C^k.
An alternate definition:
Definition 7.2. X ⊆ R^n is an immersed m-submanifold if for every x₀ ∈ X there is a neighborhood U of x₀ in R^n such that if V is the component of X ∩ U containing x₀, then V is the graph of a smooth function from a neighborhood of 0 in R^m to R^{n−m}, i.e. we can choose coordinates in U s.t.
V = {(x₁, x₂, …, x_m, σ(x₁, x₂, …, x_m)) : (x₁, x₂, …, x_m, 0, …, 0) ∈ U}
and σ is differentiable.
V is a graph of a smooth function, so the only possible weirdness is "global" rather than local, i.e. from inside X, it looks like R^m locally.

Figure 12. This will be a graph of a continuous function if the coordinates are chosen correctly.
8. Stable/Unstable Manifold Theorem (Oct. 15)
Definition 8.1. Let ẋ = f(x), f : R^n → R^n, be a C^k vector field with f(x₀) = 0, and let φ be its flow. For V a neighborhood of x₀, we define the local stable set
W^s(x₀, V) = {z ∈ V | φ(t, z) ∈ V for all t ≥ 0, and φ(t, z) → x₀ as t → ∞};
the local unstable set
W^u(x₀, V) = {z ∈ V | φ(t, z) ∈ V for all t ≤ 0, and φ(t, z) → x₀ as t → −∞};
the stable manifold
W^s(x₀) = {z | φ(t, z) → x₀ as t → ∞};
and the unstable manifold
W^u(x₀) = {z | φ(t, z) → x₀ as t → −∞}.
Suppose x₀ = 0, i.e. f(0) = 0, and Df|₀ has eigenvalues λ₁, …, λ_s, μ₁, …, μ_u, where Re(λᵢ) < 0 and Re(μᵢ) > 0. Assume Df|₀ is in Jordan form, with
Df|₀ = [ μ's  0 ; 0  λ's ].
So if f = Df|₀x (i.e. if f were linear), then the flow would look like the saddle in Figure 13. Let
E^u = {(x₁, …, x_u, 0, …, 0)}
E^s = {(0, …, 0, x_{u+1}, …, xₙ)}.
The solution to the linear flow would be
x(t) = e^{(Df|₀)t}x₀.

Figure 13. Stable and unstable manifolds (E^s and E^u) of the linear system.

For example, in R²,
Df|₀ = [ μ 0 ; 0 λ ],
so
(x₁(t), x₂(t)) = exp([ μ 0 ; 0 λ ]t)(x₁(0), x₂(0)) = (e^{μt}x₁(0), e^{λt}x₂(0)).
So W^s(0) = E^s and W^u(0) = E^u.
Given a (nonlinear) f as above, W^s(0) and W^u(0) are immersed C^k-submanifolds, with W^s(0) tangent to E^s at 0, and W^u(0) tangent to E^u at 0.

Figure 14. The stable and unstable manifolds are tangent to the linear subspaces.
The local version of the Stable/Unstable Manifold Theorem says that there exists a neighborhood V of x₀ such that W^u(0, V) is the graph of a C^k function τ : E^u ∩ V → E^s, i.e.
W^u(0, V) = {(x₁, …, x_u, 0, …, 0) + τ(x₁, …, x_u, 0, …, 0)},
where the first summand is in E^u and the second in E^s, and τ(0) = 0, Dτ|₀ = 0.
Similarly, for W^s(0, V), there exists a C^k function σ : E^s ∩ V → E^u, where W^s(0, V) is the graph of σ, with σ(0) = 0, Dσ|₀ = 0.
Moreover, if f_α depends smoothly on α, then so do W^s and W^u (i.e. for α near α₀, f_α has a rest point near the rest point of f_{α₀}, and the stable/unstable manifolds depend smoothly on α), as long as the fixed point of f_{α₀} is hyperbolic.
Moreover, if z ∈ V and z ∉ W^s(0, V), then for some t > 0, φ(t, z) ∉ V; and if z ∉ W^u(0, V), then for some t < 0, φ(t, z) ∉ V.

Figure 15. "The box" V, with W^s(0, V) and W^u(0, V) as graphs over E^s and E^u.


Example: Look at n = 2, u = s = 1, f analytic. Then W^s(0, V), W^u(0, V) should be graphs of analytic functions. Suppose
Df|₀ = [ λ 0 ; 0 μ ],
with λ > 0 > μ. Linearization gives us ẋ = λx and ẏ = μy, and we see that E^u = {(x, 0)} and E^s = {(0, y)}. Let
f(x, y) = (f₁(x, y), f₂(x, y)),
where
f₁(x, y) = λx + Σ_{i+j≥2} aᵢⱼxⁱyʲ,
f₂(x, y) = μy + Σ_{i+j≥2} bᵢⱼxⁱyʲ.

Look for W^u(0, V) to be the graph of τ : E^u ∩ V → E^s, so
W^u(0, V) = {(x, τ(x)) | (x, 0) ∈ V},
where τ(x) is analytic, τ(0) = 0, (dτ/dx)|₀ = 0. So
τ(x) = Σ_{n=2}^∞ τₙxⁿ = τ₂x² + τ₃x³ + ⋯
If we have an initial condition in W^u(0, V), the only way the solution leaves W^u(0, V) is to leave V.

Figure 16. τ(x) is a graph near the fixed point.

So the vector field on W^u(0, V) must be tangent to W^u(0, V). At a point (x, τ(x)), the vector field is
f(x, τ(x)) = (f₁(x, τ(x)), f₂(x, τ(x))).
At (x, τ(x)), we want f(x, τ(x)) to lie along the tangent line to the graph of τ. A vector in the tangent line at (x, τ(x)) is (1, τ′(x)), so (−τ′(x), 1) is orthogonal to the tangent line, and we want
f(x, τ(x)) · (−τ′(x), 1) = 0 for all x,
or
f₁(x, τ(x)) · τ′(x) − f₂(x, τ(x)) = 0.
Plug in:
[λx + a₂₀x² + a₁₁x(τ₂x² + τ₃x³ + ⋯) + a₀₂(τ₂x² + τ₃x³ + ⋯)² + ⋯] · (2τ₂x + 3τ₃x² + ⋯)
− [μ(τ₂x² + τ₃x³ + ⋯) + b₂₀x² + b₁₁x(τ₂x² + τ₃x³ + ⋯) + b₃₀x³ + ⋯] = 0.
Some of the results will be:

τ₂ = b₂₀/(2λ − μ)
τ₃ = (−2a₂₀τ₂ + b₁₁τ₂ + b₃₀)/(3λ − μ)
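These coefficients can be produced mechanically. A SymPy sketch of the same order-by-order computation:

    import sympy as sp

    x, lam, mu = sp.symbols('x lambda mu')
    a20, a11, a02, b20, b11, b30 = sp.symbols('a20 a11 a02 b20 b11 b30')
    t2, t3 = sp.symbols('tau2 tau3')

    tau = t2*x**2 + t3*x**3
    f1 = lam*x + a20*x**2 + a11*x*tau + a02*tau**2
    f2 = mu*tau + b20*x**2 + b11*x*tau + b30*x**3

    expr = sp.expand(f1*sp.diff(tau, x) - f2)
    sol2 = sp.solve(expr.coeff(x, 2), t2)[0]              # b20/(2*lambda - mu)
    sol3 = sp.solve(expr.coeff(x, 3).subs(t2, sol2), t3)[0]
    print(sol2, sol3)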
9. More S/U Theorem (analytic version) (Oct. 16)
We can solve for those τ's formally.
1. Do they converge?
2. Are solutions on the graph of τ?
3. Is it unique?
Try the C^k version: ẋ = f(x), x ∈ R², f(0) = 0. Look at the time T map associated to the solution flow φ for ẋ = f(x). Let F_T(x) = φ(T, x). F_T⁻¹ exists, since F_T⁻¹(x) = φ(−T, x). F_T is as differentiable as φ is in x. F_T(0) = 0, a rest point. Also,
F_T^n(x) = (F_T ∘ F_T ∘ ⋯ ∘ F_T)(x) = φ(nT, x)
by the group property. Let
O_{F_T}(x) = {F_T^n(x) | n ∈ Z}.
Figure 17. The flow φ and its associated map F_T.


Let
WFsT (0) = fxj lim F n (x) = 0g;
n!1 T
WFuT (0) = fxj lim F n (x) = 0g;
n! 1 T
and
WFsT (0; V ) = WFsT (0) \ fxj FTn (x) 2 V; n  0g;
where V is a neighborhood of 0.
9. MORE S/U THEOREM (ANALYTIC VERSION) (OCT. 16) 55

Lemma 9.1. Let Ws (0) be a stable set of 0 for  and WFsT (0) be a stable set
of 0 for FT ; T > 0. Then
Ws (0) = WFsT (0)
Proof: Clearly, Ws(0)  WFsT (0), because if x 2 Ws (0) then (x; t) ! 0 as
t ! 1 which means that (x; nT ) ! 0 as n ! 1.
If x 2 WFsT (0) then FTn ! 0 as n ! 1 or (x; nT ) ! 0 as n ! 1.
Note that for all  > 0, there is a  > 0 such that if z 2 R2 , kz k < , then
k(z; t)k <  for 0 < t  T .
Fix  > 0, and choose N so large such that for all n  N , if k(x; nT )k < ,
then k(x; nT + s)k < , so that k(x; t)k <  for all t > NT .
So we can deal with Ws (0) and WFsT (0) similarly.
What is a hyperbolic xed point for a map?
Suppose we have the same situation as above for x_ = f (x), with
 0
Df j0 = 0 
How do we compute DFT ?

x_ = f (x)
X_ = Df jx X
where X is an n  n matrix, with initial conditions x(0) = x0 and X (0) = I . If
x(t); X (t) is a solution, then
X (t) = Dx(x0 ; t):
Recall, this comes from di erentiating twice:
d d
dt Dx  = Dx( dt )
= Dxf j(x;t) Dx (x; t)
Suppose we take x(0) = 0 and start at the rest point.
Solve x_ = f (x) part, have x(t) = 0. So
X_ = Df jx(t) X = Df j0 X
and we get
 
_X =  0 X; X (0) = I:
0 
So  0 
X (t) = exp 0  t ;
 0  eT 0

DxFT j0 = exp 0  T = 0 eT :
The eigenvalues of DFT j0 are e(eigenvalues of Df j0 )T .
So if  is an eigenvalue of Df j0 with real part > 0, then the corresponding
eigenvalue for DFT j0 is eT with eT > 1. If  is an eigenvalue of Df j0 with
real
part < 0, then the corresponding eigenvalue for DFT jo 0 is eT with eT < 1.
56 2. THE TWO BIG THEOREMS

Definition 9.1. Given F : Rn ! Rn , F (x0 ) = x0 , F a di eomorphism, then


x0 is hyperbolic i none of the eigenvalues of DF jx0 has norm 1.
Definition 9.2. We call a xed point x0 of F : Rn ! Rn a sink if there is a
neighborhood U of x0 such that for all z 2 U; F n (z ) 2 U , all n  0, and
lim F n (z ) = x0 :
n!1
Theorem 9.1. If all the eigenvalues of DF jx0 are inside the unit circle, then
x0 is a sink.
Theorem 9.2 (2-dim Stable/Unstable Manifold Theorem for Maps). Suppose
we have F : R2 ! R2 a di eomorphism, F (0) = 0, and
 0
DF j0 = 0  ; with jj > 1; jj < 1:
Then there is a neighborhood V of 0 and  : [ ; ] ! R with W u (0; V ) the graph
of  , i.e. W u (0; V ) = f(x;  (x))g, and  (0) = 0,  0 (0) = 0.
Steps:
1. Prove  is Lipschitz.
2. Bootstrap to get more smoothness.
Look at neighborhood of 0:

 
 0
0 
!

Figure 18. What the linear part of F does to the box

For small boxes, the map will look a lot like its linear part.
Note that F (W u (0)) = W u (0).
Set up a map on graphs in the box and look for xed points. (Graph Transform
Method).
10. C k version of S/U Theorem (Oct. 18)
Lemma 10.1. Suppose F is a C k -di eomorphism and F (0) = 0 is a hyperbolic
xed point. If W u (0; V ) is the graph of a C k function then W u (0) is a C k immersed
submanifold, i.e. local implies global.
Proof: Recall our de nition of a C k immersed submanifold is that a neigh-
borhood of each point in the manifold is (in nice coordinates) a C k graph. So pick
z 2 W u (0), i.e. F n (z ) ! 0 as n ! 1. So for any neighborhood V of 0, there is
an N such that F n (z ) 2 V for all n  N . So F N (z ) 2 W u (0; V ), then F N (z )
10. C k VERSION OF S/U THEOREM (OCT. 18) 57

F
!

Figure 19. What F does to the box

is on graph of C k function. Use F N to change coordinates, in coordinates in V ,


W u (0) near z is a graph. So W u (0) near z is a graph of a C k function.
Let us set up the 2-dimensional version: F : R2 ! R2 , F (0) = 0, with F 2 C 2.
 0
DF j0 = 0  ;  > 1 >  > 0
Our goal is to show that W u (0; V ) is a graph of a function  : E u \ V ! E s .
The idea is to set up a nice neighborhood V .
Iterate this process, and hope the graphs go to W u (0; V ). The local unstable
manifold has to map to itself. W u (0; V ) should be the xed point of this process.
Details.
1. Set up appropriate space of graphs on function ( ). We want that F (graph
in set) is a graph in set, and to preserve as much smoothness as possible.
2. Need our space to be complete.
3. Map on this space has a sink (is a contraction).
4. Need to show that W u (0; V ) is a xed point.
Start by choosing as neighborhood V of 0 that looks like [ v ; v ]  [ v ; v ],
with v chosen below.
Now look at F (x; y) = (x; y) + (g1 (x; y) + g2 (x; y)). By the 2-dimensional
Taylor's Theorem, we have Dg1 j0 = 0 = Dg2 j0 , so Dg1 , Dg2 are small near 0, so
there is ; K1 such that if k(x; y)k < , then
kgi (x; y)k < K1 k(x; y)k2
Take v < 1, and use that to get control of F (V ).
We want the right edge to map to the right edge, so we want:
j + g1 (; y)j > ; for  < y < 
p 2
i.e., jg1 (; y)j < K1 2 + y2 for  < 1
p
< K1 22
< ( 1):
So we need

 <  p1 ; for  > 1:


2K1 2
58 2. THE TWO BIG THEOREMS

V
 C
 CW
 ?
F (V )

OC 6
C 
C 

Figure 20. What the map F does to the box V

Take v < .
For the top, we need
j + g2(x; )j <  for  < x < ;
i.e. jg2(x; )j < (1 );
p 2
k1 2 < (1 );
 < 1 p ( < 1):
2K1 2
So choose v to be less than all of these.
Next, chose our set of graphs. Let
GL = f : E u \ V ! E s j x1 ; x2 2 [ v ; v ];
j (x1 )  (x2 )j < L jx1 x2 jg
Assume that L < 1. Let
GL = fA  V j A is the graph of  2 GL g
De ne the graph transform: For A 2 GL let
F (A) = fF (x; y)j (x; y) 2 Ag \ V
We need that F (A) 2 GL . We will rst show that F (A) is the graph of some
function, and then that this function is in GL .
Claim: for xed A 2 GL , for each x 2 [ v ; v ], there is a y 2 [ v ; v ] such
that (x; y) 2 F (A).
Graph of A is curve connecting left to right. So F (A) does the same thing for
F (V ). So by conditions on F (V ), we know that for each x 2 [ v ; v ], there is a
y 2 [ v ; v ] such that (x; y) 2 F (A).
If the line containing x from top to bottom didn't intersect F (A), this would
be a contradiction. But what if both (x; y1 ) and (x; y2 ) 2 F (A)?
Look at F 1 (x; y1 ); F 1 (x; y2 ) in A. But we can write
F 1 (x; y) = (x=; y=) + ( 1 (x; y); 2 (x; y))
11. CONTINUATION OF THE C k PROOF (OCT. 21) 59

A


F (A)




Figure 21. What F does to V and F does to A

(x; y0) -

(x; y1) -

Figure 22. The points are \almost" on top of each other

where
 
DF 1 0 = ( DF j0 ) 1 = 1= 0 1=
0

Then there is 2 ; K2 such that if k(x; y)k < 2, then


k 1 (x; y)k < K2 k(x; y)k2 ;
replacing old 1 by 2 , and K1 by K2 if necessary.
We expect that F 1 (x; y1 ) and F 1 (x; y2 ) are \almost" on top of each other,
but this is bad because A is the graph of a Lipschitz function.

11. Continuation of the C k proof (Oct. 21)


Now we know that each F (A) is a graph of a function, but we want that this
function is in GL .
60 2. THE TWO BIG THEOREMS

At each (x; y) 2 V , let


C(x;y) = f(x1 ; y1)j jy y1 j < L jx x1 jg
Claim: If (x; y) 2 V and F (x; y) 2 V then
(F (C(x;y) \ V )) \ V  CF (x;y) :
We say then that F satis es a cone condition. (This will be true provided that
v is suciently small.)
Proof: Suppose not, i.e.
jy y1 j < L jx x1 j for some (x; y); (x1 ; y1 ) 2 V;
but
j(y + g2(x; y)) (y1 g2(x1 ; y1 ))j  L j(x + g1(x; y)) x1 + g1 (x; y))j :
Now,  jy y1 j jg2 (x; y) g2 (x1 ; y1 )j > L jx x1 j L jg1 (x; y) g1 (x1 ; y1 )j
and
jg2 (x; y) g2(x1 ; y1 )j <  jx x1 j +  jy y1j
jg1 (x; y) g1(x1 ; y1 )j <  jx x1 j +  jy y1j
where  
i ; @fi ; i = 1; 2; 8x; y 2 V :
 > max @g
@x @x
So
( + ) jy y1 j +  jx x1 j > L jx x1 j L jx x1 j L jy y1 j
( +  + L) jy y1 j > (L L ) jx x1 j
which gives us that
jy y1 j > L L 
 +  + L jx x1 j
Claim: By taking v suciently small,
L L  > L
 +  + L
We know that @g i @gi
@x ; @y are continuous (and 0) at 0. By taking v suciently
small, we can make  as small as we like.
L L  ! L  > L; ( ! 0)
 +  + L 
So if A 2 GL , for each (x; y) 2 A,
A  C(x;y) \ V ) F (A)  CF (x;y) \ V:
So F (A) 2 GL .
Claim: F is a contraction.
Given 1 ; 2 2 GL , compute kF (1 ) F (2 )k. We want it to be less than
r  k1 2 k = sup j1 (x) 2 (x)j (r < 1):
x2[ v ;v ]
Fix x 2 [ v ; v ]; (x; yi ) 2 graph of i .
11. CONTINUATION OF THE C k PROOF (OCT. 21) 61

1
f
1(x; y1 )
(x; y1 )

F F (1)
!
(x; y2 )
F (2)
f
1(x; y2 )
2

Figure 23. F is a contraction mapping.

F 1 (x; yi ) = ( x + 1 (x; yi ); yi + 2 (x; yi ))


Estimate jy1 y2 j compared to k1 2 k. First note that
 x  x   @ @ 
 + 1(x; y1 )  + 1 (x; y2) <  jy y1j where   @xi ; @yi :
So we have
 y1 + (x; y )  y2 + (x; y )  k  k + L( jy y j)
 2 1  2 2 1 2 1
which gives us
1 jy
 1 y2 j j 2 (x; y1 ) 2 (x; y2 )j  k1 2 k + L jy1 y2 j
1
 jy1 y2 j  jy1 y2j  k1 2 k + L jy1 y2 j
( 1  L) jy1 y2j  k1 2 k
jy1 y2j  1= 1 L k1 2 k
Since the partials of i are 0 at 0, take v small so that
1
1=  L = r < 1 and  < 1:
So F is a contraction. Now use the contraction mapping principle, and the fact
that GL is a metric space. So there must exist a xed point for F , u .
Claim: u = W u(0; V ):
Proof:
1. If (x; y) 2 graph u then F n (x; y) 2 V for all n  0 and F n (x; y) ! 0 as
n ! 1.
2. If (x; y) 62 graph u then there is an n  0 such that F n (x; y) 62 V .
62 2. THE TWO BIG THEOREMS

Claim: For v suciently small, there is an r < 1 such that if (x; y) 2 C0,
then if (x; y1 ) = F 1 (x; y) then jx1 j < r jxj.
Proof: We know that x1 = x= + 1(x; y). Then
j 1 (x; y)j = j 1 (x; y) 0j
  jxj +  jyj
  jxj + L jxj :
So
jx1 j < x + ( + L) jxj
or
jx1 j < ( 1 +  + L) jxj
1= < 1, so take v small so that
1 +  + L = r < 1:

We know that u (0) = 0, because for and  2 GL , F n ( ) ! u . If  (0) = 0,
then the value of F ( ) at 0 is also 0. And u 2 GL , so the graph of u is in C0 and
if (x; y) 2 graph u , then F 1 (x; y) 2 graph u . So F n (x; y) 2 graph u  C0
for all n  0.
So x-coordinate of F n (x; y) ! 0, but this is Lipschitz, so the y-coord ! 0
also. Thus the graph of u  W u (0; V ).
If you're not in the graph, the vertical distance increases, so eventually you
leave V .
12. Smoothness (Oct. 28)
Start with F : R2 ! R2 , a di eomorphism, F (0) = 0, and
 0
DF j0 = 0  ;  > 1 >  > 0:
Lift to the unit tangent bundle
T 1R2 = f((x; y); v)j v 2 R2 ; kvk = 1g
and !
DF (v)
F ((x; y); v) = F (x; y); (x;y) :
DF(x;y)(v)
At x = y = 0,
Eigendirections at (x; y) = (0; 0) give the xed points of F . Here  = 0 is
stable in the -direction.
Compute DF j((0;0);0) :
0 0 01
@0  0A
  ?
At x = y = 0, what does F do to ((0; 0); )?
Note:  corresponds to (cos ; sin ).
12. SMOOTHNESS (OCT. 28) 63

Figure 24. The xed points for (x; y) = (0; 0)

-coordinate of image of F ((0; 0); ) is


p (2 cos2 ;  sin2 ) 2
 cos  +  sin 
and angle of image is
  cos  
arctan  sin  ;
i.e.    @ (
F ((0; 0); 0) = (0; 0); arctan  cos 
sin  ; coord) =  :
@ 
Use the Lipschitz Unstable Manifold Theorem to get a 1-dimensional Lipschitz
curve in T 1R2 through ((0; 0); 0) corresponding to unstable eigendirection.
We need to show that W u ((0; 0); 0) is the derivative of W u (0; 0), i.e. for z 2
W (0; 0), the tangent (unit) vector of W u (0; 0) at z is the corresponding point
u
(z; ) 2 W u ((0; 0); 0).
y

W u((0; 0); 0)

x


Figure 25. The unstable manifold at ((0; 0); 0). The picture may
be a bit unclear. The -axis moves back into the 3rd dimension.
Recall that there are xed points on the -axis, alternating between
sink and source.
64 2. THE TWO BIG THEOREMS
y

W u((0; 0))

Figure 26. The unstable manifold at (0; 0) (what we get when


we restrict to the plane
For (x;  (x)) 2 W u (0; V ), de ne
8  
 (x) 9
>
< 1; limn!1  (xxn ) >
C (x) = >vj v =  n x  = :
: 
1; limn!1 n )
(x  (x) >
x n x ;
where xn ! x. W u (0; V ) is Lipschitz.
Claim: if F (x;  (x)) = (x1 ;  (x1 )), then
F (C (x)) = C (x1 )
and     (xn )  (x) 
(31) DF j(x; (x)) 1; nlim  (xn )  (x) = lim DF j
!1 x x n n!1 (x; (x)) 1; x x n
:
Proof: If we let  
DF jx; (x)) = ac db ;
we get in equation (31):
 
lim a + b ( (xxn ) x(x)) ; c + d ( (xxn ) x(x))
n!1
 n c(xn x) + d( (xn )n  (x)) 
= 1;a(xn x) + b( (xn )  (x)) :
(up to length)
We know that xn ! x, so F (xn ;  (xn )) ! (x1 ;  (x1 )).
Look at fF (xn ;  (xn ))g:
Use X; Y as projection onto x; y coordinates. Check:
Y (F (xn ;  (xn ))) Y (F (x;  (x)))
X (F (xn ;  (xn ))) X (F (x;  (x)))

Y (F (x;  (x))) + Y ( DF j (xn x;  (xn )  (x))) + Y R Y (F (x;  (x)))


= X (F (x;  (x))) + X ( DF j(x; (x)) (x x;  (x )  (x))) + XR X (F (x;  (x))) :
(x; (x)) n n
13. THE STABLE/UNSTABLE MANIFOLD METATHEOREM (OCT. 30) 65

R is the higher order terms, so we can forget about it when we take limit, so
we get
 c(xn x) + d( (xn )  (x)) 
nlim
!1 a(xn x) + b( (xn )  (x)) ;
thus
F : C (x) ! C (x1 ):

What does this say?


So if F is C 3 , then we have that W u (0) is C 1;Lip. If F is C 1 , then W u (0) is C 1 .
In fact, we could show that W u (0) is as smooth as F , i.e. if F is C k , then W u (0)
is C k .

W s((0; 0); 0)
y


Figure 27. The stable manifold of ((0; 0); 0).

What is the geometric interpretation?


Remarks:
1. Stable manifold works the same. Replace F by F 1 .
2. Higher dimensional, we showed F (graph) is a graph. In higher dimensions,
if A is Lipschitz graph Eu ! Es , we need
F (A) \ Es + z 6= 0 , A \ F 1 (Es + z ) 6= 0:
Near the xed point, F 1 (Es + z ) is a perturbation of Es + w for some w.
So use transversality of intersection of Es + w with A.

13. The Stable/Unstable Manifold MetaTheorem (Oct. 30)


Theorem 13.1. Given smooth (C k ; C 1 ) vector eld F : Rn ! Rn , F (0) = 0.
1 ; 2 ; : : : ; n are the eigenvalues of DF j0 (with redundancy), and E1 ; E2 ; : : : ; En
66 2. THE TWO BIG THEOREMS

Es + z
A
F (A)

Figure 28. Perturbation of the stable manifold

are the corresponding normalized e-spaces, e.g. if


0 1
BB 1 2 1 CC
B 0 2 CC
DF j0 = BBB a b CC ;
B@ b a CA
...

then
E1 = f(x1 ; 0; : : : ; 0)g;
E2 = f(0; x2; x3 ; : : : ; 0)g; : : :
The Ei are invariant subspaces for DF j0 .
For r  0, there is an invariant manifold Wrs (0) tangent to E1      Em ,
where Re 1 ,Re 2 ; : : : ; Re m  r.
First, dim Wrs (0) = dim(E1    Em ). Also, if r < 0; r < Re i < 0 for some
i, then Wrs (0) is called a strong stable manifold and denoted Wrss (0). If r = 0
and there is some eigenvalue, i , with Re i = 0, then Wrs (0) is called the center
stable manifold and denoted Wrcs (0).
Same for r  0, we get invariant manifolds associated with eigenvalues j with
Re j  r. If r > 0, and there is j with 0 < Re j < r, then Wru (0) is called the
strong unstable manifold, and denoted Wrsu (0). If r = 0 and there is j with
Re j = 0, Wru (0) is called the center stable manifold, and denoted Wrcu (0).
A center manifold is de ned as
Wrsc (0) \ Wruc (0);
and is denoted W c (0). (It is tangent to e-spaces for eigenvalues with real part 0.)
Remark: Proofs use same sort of ideas (set contraction).
13. THE STABLE/UNSTABLE MANIFOLD METATHEOREM (OCT. 30) 67

Example: (
x_ = x2 ;
y_ = y
x  x2 
F y = y ;
 
DF j0 = 00 01 :

Figure 29. A center manifold


W ss (0) is the y-axis, W cs (0) is all of R2 , so we have a lot of choices for W c (0).
Choose any orbit on the left, and the x-axis on the right. So center manifolds are
not unique, as opposed to the hyperbolic case.
Remarks:
1. Warning: These center manifolds are not necessarily as smooth as F .
2. The hyperbolic case is generic: Given a vector eld with xed point at 0 you
can \expect" it to be hyperbolic, i.e. in the set of vector elds with xed
point at 0, the vector elds with hyperbolic xed points at 0 form an open
dense set in the C 1 topology.
Proof: If F has hyperbolic xed point at 0 and G is suciently
close to F and G has a xed point at 0 (close in C 1, so DF j0  DGj0 ,
so eigenvalues are close), so if none of the eigenvalues of DF j0 are on the
imaginary axis, and G is close enough, the same holds true for G. If 0 is
not hyperbolic for F then DF j0 has a Jordan block with 0 on the diagonal.
Simply add In to F , then
D(F + In )j0 = DF j0 + In ;
so the diagonal elements will be nonzero if  is suciently small.
So why do we need center manifolds at all? If we're dealing with a one-
parameter family of vector elds F (all of whom have a xed point at 0, if when
= 0; DF j0 is hyperbolic with s-dimensional stable and (n s)-dimensional
unstable, but DF1 j0 has (s + 1)-dimensional stable and (n s 1)-dimensional
unstable, then for some 2 [0; 1], DF j0 is not hyperbolic. We'll talk a lot more
about this when we do bifurcation theory.
68 2. THE TWO BIG THEOREMS
CHAPTER 3

Using maps to understand ows


1. Periodic orbits and Poincare sections (Nov. 1)
The next most complicated invariant set (after the xed point) is the periodic
orbit. For example, if  is a ow on Rn , then x0 2 Rn is called a periodic point
with least period T if
(x0 ; T ) = x0
(x0 ; t) 6= x0 for 0 < t < T:
Of course, we see that (x0 ; nT ) = x0 for all n 2 N . We call O(x0 ) a periodic
orbit. O(x0 ) is the smooth image of a circle in Rn, so the dynamics of jO(x0) are
very easy.
What about near periodic orbits?
Idea. (due to Poincare)
Let  be a (n 1)-dimensional surface (copy of (n 1)-dimensional disk) that
contains x0 such that for all x 2 , F (x) is not tangent to .

Figure 1. A Poincare section

 is called a Poincare section for . De ne a map P :  ! . For each


x 2 , let
 (x) = ftj t > 0; (x; t) 2 g:
If (x; t) 62 , let  (x) = 1. By continuity with respect to initial conditions, if x
is near enough to x0 , then  (x) < 1.
We need to show that  (x) is a smooth function in a neighborhood of x0 ,
( :  ! R ).
69
70 3. USING MAPS TO UNDERSTAND FLOWS


Figure 2. If the initial condition is far enough away, it may not
actually return to .

We know that  (x0 ) = T , so make  smaller if necessary so that  (x) 2


[T 1; T + 1]. (Local theory near periodic orbits)
Next choose coordinates so that x0 = 0;   f(0; x2 ; : : : ; xn )g in a neighborhood
of x0 .
Let f : Rn 1  R ! R ,
f (x2 ; : : : ; xn ; t) = 1  (0; x2 ; : : : ; xn ; t)
where 1 is restriction to the 1st coordinate. This is de ned for x2 ; : : : ; xn near 0
and t near T .
Look at the level set f 1 (0). The Implicit Function Theorem implies that there
is a  (x2 ; : : : ; xn ) such that
f (x2 ; : : : ; xn ;  (x2 ; : : : ; xn )) = 0
provided that

@f
@t (0;T ) 6= 0:
This will work because
@f @  ((0; T )) =   F ((0; T ))
=
@t (0;T ) @t 1 1
and F (x0 ) is not tangent to .
So  (x) is smooth on  near x0 .
De ne P :  ! ,
P (x2 ; : : : ; xn ) =   (0; x2 ; : : : ; xn ;  (x2 ; : : : ; xn ))
which is de ned and smooth on a neighborhood of x0 in .
P is the rst return map or Poincare Map.
Remarks:
1. P (x0 ) = x0 , so P has a xed point at O(x0 ) \ .
2. P 1 exists (follow ow backwards). Of course P 1 is smooth, so P is a
di eomorphism on some neighborhood of x0 in .
1. PERIODIC ORBITS AND POINCARE SECTIONS (NOV. 1) 71
Let us denote
W s (O(x0 )) = fz 2 Rn j y2O
max(x0 )
k(z; t) yk ! 0; t ! 1g;
W u (O(x0 )) = fz 2 Rn j y2O
max(x )
k(z; t) yk ! 0; t ! 1g:
0
Remark: This does not require that (z; t)  (y; t) for some y 2 O(x0 )
when t is large, just that (z; t) is close to the set O(x0 ) (i.e. not necessarily \in
phase".)
Definition 1.1. We say that O(x0 ) is a sink if W s (O(x0 )) contains an open
neighborhood of O(x0 ), a source if W u (O(x0 )) contains an open neighborhood of
O(x0 ).
Theorem 1.1. Let x0 ; ; ; P be as above and 1 ; 2 ; : : : ; n 1 be the eigen-
values of DP jx0 (perhaps with redundancy).
1. If ki k < 1 for i = 1; : : : ; n, then O(x0 ) is a sink.
2. If ki k > 1 for i = 1; : : : ; n, then O(x0 ) is a source.
3. If k1 k ; : : : ; ks k < 1, and ks+1 k ; : : : ; kn 1 k > 1, i.e. x0 is a hyper-
bolic xed point for P and if E s is the direct sum of the generalized e-spaces
for 1 ; 2 ; : : : ; s and E u is the direct sum of the generalized e-spaces for
s+1 ; : : : ; n 1 , then W s (O(x0 )) and W u (O(x0 )) are immersed submani-
folds as smooth as  of dimension E s + 1 and E u + 1 respectively, and
W s (O(x0 )) \  is tangent to E s at x0 , and W u (O(x0 )) \  is tangent to E u
at x0 .
Proof:
1. Look at picture.

F (U )

Figure 3. Poincare map of a neighborhood

2. Same picture in backwards time.


72 3. USING MAPS TO UNDERSTAND FLOWS

Figure 4. Just as in the ow, the stable manifold shrinks in the


Poincare map and the unstable manifold grows

3. We have by transversality,
dim W s (O(x0 )) + dim W u (O(x0 )) = n + 1:

Note: The periodic orbit is a subset of the intersection


O(x0 )  W u (O(x0 )) \ W s (O(x0 ))
They intersect transversally at orbit, but the following picture is possible:

-1 1

-1

Figure 5. The xed point at the origin has two homoclinic orbits
in this picture.

Two questions:
1. What of this theory is independent of the choice of ?
2. Can you ever compute the eigenvalues for DP jx0 , which is the eigenvalues
of Dx(x0 ; T )?
2. COMPUTING FLOQUET MULTIPLIERS (NOV. 4) 73

2. Computing Floquet Multipliers (Nov. 4)


As before, we have vector eld F , ow , periodic orb IR O(x0 ), Poincare
section , and Poincare map P :  ! .

Figure 6. Poincare section


The behavior of O(x0 ) is determined by the eigenvalues of DP jx0 .
The rst problem is the independence of the choice of . We could choose two
's, P1 : 1 ! 1 , P2 : 2 ! 2 .

2

1

Figure 7. Is the Poincare map independent of the choice of section?


Are Floquet multipliers independent of choice of ?
74 3. USING MAPS TO UNDERSTAND FLOWS

Idea. De ne
p1 : 1 ! 2
x 7! (x; 1 (x))
where
1 (x) = minft  0j (x;  (x)) 2 2 g:
As before, p1 is de ned near x0 and as smooth as  and . Also, is invertible:
p1 1 (x) = (x; 1 1 (x)) where x 2 2 :
and
1 1 (x) = maxft < 0j (x; t) 2 1 g:
Then our claim is that P1 = p1 1  P2  p1 . This is the group property of . So

P1 jx0 = Dp1 1 x1  DP2 jx1  Dp1 jx0 :
But

D(p1 1 ) x1 = (Dp1 )jx0 1 :
So DP1 jx0 is similar to DP2 jx1 , and in particular these two matrices have the
same eigenvalues.
The same idea works if x0 = x1 :

2

1
Figure 8. We assume that the Poincare sections intersect at x0 .

De ne
p1 : 1 ! 2
x 7! (x; 1 (x))
where 1 (x) = t near 0 such that (x; t) 2 2 .
So the eigenvalues of the return map are invariants of the orbit. But we still
need a way to computer the eigenvalues of the Poincare maps.
Claim: The eigenvalues of Dxj(x0;T ) are 1; 1; : : : ; n 1 where 1; : : : ; n 1
are the Floquet multipliers.
Proof: The time T map takes a neighborhood of x0 to a neighborhood of
x0 . But (; T ) preserves the periodic orbit O(x0 ).
Let (t) = (x0 ; t) so : R ! Rn .
2. COMPUTING FLOQUET MULTIPLIERS (NOV. 4) 75

Then
( (t); T ) = ((xo ; t); T )
= (x0 ; t + T )
= (x0 ; t)
= (t);
i.e. all points on are xed under (; T ).
So
d ( (t); T ) = D j  d
dt x ( (t );T ) dt 
Also,
d ( (t); T ) = d :

dt dt t
If we let t = 0, ( (0) = x0 )

= d ;
Dx j(x0 ;T )  d
dt 0 dt 0

is an e-vector with eigenvalue 1.
i.e. d
dt 0
Also,

d = F ( (0)) = F (x );
dt t=0 0
so
Dxj(x0 ;T ) F (x0 ) = F (x0 ):
We need that the rest of the eigenvalues of Dxj(x0 ;T ) are eigenvalues of DP jx0
for P :  ! . (i.e. Poincare section)
Choose nice variables so that F (x0 ) = (1; 0; : : : ; 0), and then
01 ? 
1 ?
B0 " CC
Dxj(x0 ;T ) = B
B@ .. C:
. B ! A
0 #
From linear algebra, we know that the eigenvalues of B are the rest of the
eigenvalues of Dx, so
0x 1 0x1+?x2 +?0x3 +1   +?xn 1 0x 1
1 BB x1 CC B .1 C
Dxj(x0 ;T ) B .
@.A @
. C = B B C
B @ ... A CA =  @ .. A :
x n xn x n
Next, choose  = f(0; x2 ; : : : ; xn )g where in these nice coordinates, x0 = 0. Let
 :  ! R be the rst return time map, so Poincare map P :  !  is
P (x1 ; x2 ; : : : ; xn ) = 1  ((0; x2 ; : : : ; xn );  (0; x2 ; : : : ; xn ));
where
1 (x1 ; x2 ; : : : ; xn ) = (x2 ; : : : ; xn ):
76 3. USING MAPS TO UNDERSTAND FLOWS

Extend  to neighborhood of x0 in Rn continuously so that  (x1 ; x2 ; : : : ; xn ) is


the time near T such that
(x1 ; x2 ; : : : ; xn ;  (x1 ; x2 ; : : : ; xn )) 2 :
Compute the (n 1)-dimensional vector
@P  @
@x2 = 1 ( @x2 ((0; x2 ; : : : ; xn );  (0; x2 ; : : : ; xn ))):
@
@x2 (0; x 2 ; : : : ; xn ;  (0; x2 ; : : : ; xn ))
@
= @x

@
+ @t

d
 dx :
2 (0;x2 ;:::;xn ; (0;x2 ;:::;xn )) (0;x2 ;:::;xn ; (0;x2 ;:::;xn )) 2 (0;x2 ;:::;xn )
Evaluate this at x0 = 0 so that  (0) = T .
@
d
d


@x2 (x0 ;T ) + dt (x0 ;T )  dx2 x0
n vector
n vector scalar
@ (1; 0; : : :; 0)  dx d :
@x2 (x0 ;T ) + vector eld at x0 2 x0

@P are the same as
So the last (n 1) components of @x@ (;  (0)) = @x
2 x0 2 x0
the last n 1 components of @x @
2 (x0 ;T )
.
The proof works exactly the same for x3 ; : : : ; xn , so DP jx0 = B in these nice
coordinates. Thus the eigenvalues of P at x0 are the same as B .
This shifts the problem. Now how do we compute the eigenvalues of Dxj(x0 ;T ) ?
There's only one way to get at Dx  (unless we have an explicit form for ).
Recall the variational equation:
x_ = F (x); x(0) = x0
X_ = DF jx(t) X; X (0) = I
where
x(t) = (x0 ; t) is the periodic orbit, and
X (t) = Dx j(x0;t)

Verify that @
@t (x0 ;t) = F ((x0 ; t)) by di erentiating with respect to x. So,
can we solve this in general? There are some situations where we can:
1. Find an explicit expression for the periodic orbit (x0 ; t). (Easier, in general,
than nding formula for (x; t)).
2. X_ = DF j(x0 ;t) X has the form X_ = A(t)X where A(t) is an n  n matrix,
\nice" in a sense to be made precise later.
3. MORE COMPUTATION OF FLOQUET MULTIPLIERS (NOV. 6) 77

3. More computation of Floquet multipliers (Nov. 6)


So when can we compute these Floquet multipliers? Let A(t) = DF j(x0 ;t) , so
that we have the equations
x_ = F (x); x(0) = x0
_X = A(t)X; X (0) = I:
There are some cases where we can solve this system:
1. A(t) = A0 , a constant matrix. In this case the solution is
X (t) = eA0 t :
2. If dim n = 1, A is a 1  1 matrix, and X_ = a(t)X . Then
R
X (t) = e 0t a(s) ds :
3. If
0a (t) 1
BB 1 a2(t) CC
A(t) = B@ a3 (t) CA ;
...
i.e. A is diagonal, then
0 nR t o 1
BB exp 0 a1 (s ) ds nR o CC
B exp 0t a2 (s) ds CC
X (t) = BBB nR o CC
exp 0t a3 (s) ds
@ ...
A
We would want that this solution would always work (even in the non-diagonal
case). Since
nR o nR o
exp 0t+t A(s) ds exp 0t A(s) ds
_X = lim ;
t!0 t
we would like to factor, i.e.
Z t  0 exp nRtt+t A(s) dso I 1
exp A(s) ds @ t
A:
0
But this presumes that eB eC = eB+C , but this is not true in general, since B
and C may not commute.
Example: [Set-Up for Bifurcation Theory] y = sin y. We can change this
to a system:
y_ = v
v_ = sin y
78 3. USING MAPS TO UNDERSTAND FLOWS

sin(y) -

1111
0000
000010
1111
000010
1111
10
1010 y=0
10
Figure 9. The acceleration due to gravity is of the magnitude sin(y).

 

Figure 10. The phase plane for y = sin(y)

We can see that (; 0) are hyperbolic xed points. The point (0; 0) have
eigenvalues i. We have an invariant of motion H (y; v) = v2 =2 cos y.
d @H @H
dt H (y(t); v(t)) = @y y_ + @v v_
= sin(y)v + v( sin(y)) = 0
So this is a Hamiltonian system. Add a small periodic forcing and get
y = sin y + g|{z}(t)
external
where g(t + T ) = g(t).
We can make this system autonomous:
y_ = v
v_ = sin(y) + g(s)
s_ = 1:
The phase space is
R2  S 1 = f(y; v; s)j (y; v) 2 R2; s 2 S 1g:
and S 1 = [0; T ] where 0 and T are identi ed.
When  = 0, we get
y_ = v
v_ = sin y
s_ = 1
3. MORE COMPUTATION OF FLOQUET MULTIPLIERS (NOV. 6) 79

and the points (0; 0; 0) and (; 0; 0) correspond to periodic orbits.


v

Figure 11. Some of the periodic orbits of the newly autonomous system

What happens to the periodic orbits when  > 0 but small, i.e. for small
periodic forcing? Are there period T periodic orbits near (y = 0; v = 0) and/or
(y = ; v = 0)? This is a problem in bifurcation theory: what changes or stays
the same when we change a parameter?
The rst step is to understand the orbits in the  = 0 case. Compute the
Floquet multipliers of (0; 0; 0); (; 0; 0). Let's start with (; 0; 0). We expect a
saddle with 1-dimensional W s , 1-dimensional W u .
Let ((y; v; s; ); t) be the solution ow for
y_ = v
v_ = sin y = F (y; v; s)
s_ = 1:
Then ((; 0; 0); t) = (; 0; t) is the periodic orbit.
00 1 01
X_ = DF j(;0;t) X = @1 0 0A X:
0 0 0
If we diagonalize, with
01 1 01
P = @1 1 0A ;
0 0 1
we see that
00 1 01 01 0 01
P 1 @1 0 0A P = @0 1 0A :
0 0 0 0 0 0
Making a change of coordinates PZ = X , we get that
01 0 01
Z_ = @0 1 0A Z; Z (0) = I:
0 0 0
80 3. USING MAPS TO UNDERSTAND FLOWS

So
0et 0 01
Z (t) = @ 0 e t 0A ;
0 0 1
and changing back to the original coordinates we get
0et e t 01
1
X (t) = 2 @et e t 0A :
0 0 1
At time T the eigenvalues are eT ; e T ; 1. Note that one is greater than 1, one
is less than 1. This is a hyperbolic periodic orbit.
Now, at (0; 0; 0), ((0; 0; 0); t) = (0; 0; t), and
0 0 1 01
X_ = @ 1 0 0A X; X (0) = I:
0 0 0
Thus
0 cos t sin t 01
X (t) = @ sin t cos t 0A ;
0 0 1
and the eigenvalues of this matrix are cos(T )  sin(T ); 1. This is not hyperbolic. If
T = 2n then the eigenvalues are 1; 1; 1, and if T = (2n + 1) then the eigenvalues
are 1; 1; 1.
4. Bifurcation Theory (Nov. 8)
Consider the family of systems x_ = F (x), where F : Rn ! Rn for all . If
 2 R , we call this a 1-parameter family, and if  2 Rm , we call this an m-parameter
family.

Figure 12
4. BIFURCATION THEORY (NOV. 8) 81

How does the solution change as  changes? We say that a bifurcation occurs
at  = 0 if the ow for  < 0 is \di erent" from the ow for  > 0 . It does depend
on the context, however, what you mean by \di erent".
Example: Is the ow for
x_   1 0x
y = 0 2 y (See Figure 13)
di erent from
y

Figure 13. A straight sink


x_   1 1
x
y = 1 1 y ? (See Figure 14)
y

Figure 14. A spiral sink

It is certainly di erent from


x_  1 0 x
y = 0 1 y : (See Figure 15)
82 3. USING MAPS TO UNDERSTAND FLOWS
y

Figure 15. A saddle

The most important thing to note is that bifurcations almost never happen.
Theorem 4.1 (Straightening-Out Lemma). Given a vector eld F : Rn ! Rn ,
suppose that F (x0 ) 6= 0, then there is a neighborhood V of x0 and coordinates on
V such that
F (x) = (1; 0; 0; : : :; 0) for all x 2 V
In these coordinates, the solution ow is (x; t) = x + (t; 0; : : : ; 0).
Idea of Proof.
1. Choose surface of section  at x0 such that vector eld is never tangent to
.
2. Choose coordinates so that   f(0; x2; : : : ; xn )g.
3. For x close to , nd a small time t so that (x; t) 2 . Then coordinates
of x are ( t; x2 ; : : : ; xn ) where (x; t) = (0; x2 ; : : : ; xn ).
4. Check, in these coordinates, that F = (1; 0; : : : ; 0).
So, give a 1-parameter family of ows, if F0 (x0 ) 6= 0 for some choice of 0 ; x0 ,
then (assuming that F is at least continuous in ), for  near 0 , F (x0 ) 6= 0. So
there is a neighborhood about x0 so that the ow for F is just the straight-line
ow. Up to a change of coordinates, locally, there is no change in the ow. This
says that bifurcations can only happen
1. at F (x0 ) = 0 (at xed points), or
2. globally.
These global bifurcations are hard to see, need to nd another way to make
them \local".
We will have three ways of attacking this problem:
1. Bifurcations of xed points (in nitely many cases),
2. Bifurcations of periodic orbits ( xed points on Poincare maps), and the
3. Melnikov method
So, for the rst:
Let F be a 1-parameter family, and assume that F is as smooth as necessary
in x and . Assume we have a xed point F0 (0) = 0. Again, we will see that
bifurcations of xed points almost never happen.
4. BIFURCATION THEORY (NOV. 8) 83

Theorem 4.2. If 0 is a hyperbolic xed point of F0 , then there is an 1 > 0


and
: (  1 ; 1 ) ! R n
satisfying
1. (0) = 0,
2. F ( ()) = 0,
3. () is smooth (as smooth as F ,
4. there exists a neighborhood V of 0 such that if  2 ( 1; 1 ) and x 2 V , and
F (x) = 0, then () = x.
5. The dimensions of WFs ( ()) and WFu ( ()) are the same as WFs0 (0) and
WFu0 (0) (and () is hyperbolic for F ).
Also, DxF ( ()) is continuous in .
Proof: Let F : RnR ! Rn, with F (x; ) = F(x). We know that F (0; 0) =
0. Location of xed points is given by 0 level set of F . What can we say about
F 1 (0)?
The Implicit Function Theorem tells us that there is an  and a smooth function
: ( 1 ; 1 ) ! R 2
so that
F ( (); ) = (0; 0);
i.e., the level set is a graph of .
The hypothesis that has to be satis ed is some condition on the partials of F ,
This might not hold if the conditions of the theorem do not hold.
Compute d where () = ( 1 (); 2 ()). We know that
d
F ( 1 (); 2 ()) ;  = (0; 0):
Let F (x) = (f (x); g(x)). Then we need
d (F ( (); (); ) = d F (0; 0) = (0; 0):
d 1 2 d 
Computing, we need:
0 @f @f 1 0 d 1 1 0 df 1
B@ @x1 @x2 CA B@ d CA + B@ d CA = 0 :
@g @g d 2 dg 0
@x1 @x2 d d
So we need
0 @f @f 1
B@ @x1 @x2 CA = DxF0 jx=0
@g @g
@x1 @x2 x=0;=0
to be hyperbolic.
No zero eigenvalues, so DF0 jx=0 exists.
84 3. USING MAPS TO UNDERSTAND FLOWS

5. Hyperbolicity in bifurcations (Nov. 11)


Recall the proof of the last theorem:
F : Rn  R ! Rn
(x; ) 7! F (x)
with the xed points the level set F 1 (0). The Inverse Function Theorem gives
us that this level set is the graph of a function. If we di erentiate F ( (); ) with
respect to , we get:
DxF  d
d + DF = 0;
so
d = (D F ) 1  D F :
d x 
We know that Dx F is invertible because 0 is hyperbolic. To get information
about (), we compute the Taylor series. We need to compare the ow near ()
for the solution ow of x_ = F (x) to ow near 0 for F0 (x).
Compute the linear part:
(DxF )j ()
and this gives a curve of matrices
M () = (Dx F )jx= () :
As we change , components change smoothly with , which means the eigen-
values change continuously with .
Example: For the matrix  
1  ;
 1
we can compute the eigenvalues, and see that they are 1  i jj. So the roots of
the polynomial are only continuous in the coecients, and not necessarily smooth.
So there is an interval where the number of eigenvalues with real part positive,
and the number with real part negative, is constant for M (). However, we know
that WFs ( ()) and WFu ( ()) change smoothly with .
This leads to a
Scholium. In the theorem, we only used that DxF0 j0 was invertible to get
a curve (), i.e. no eigenvalues = 0. Thus we need only invertibility, not hyper-
bolicity, to apply the Inverse Function Theorem. However, if we do not assume
hyperbolicity, the dimensions of the unstable and stable manifolds can change. Bi-
furcations can happen at equilibrium points where eigenvalues are 0 or are pure
imaginary.
Example: F : R2 ! R2, F (0) = 0,
 
Df j0 = 00 0
Assume that  < 0, and let F (x) = (f (x); g (x)). What happens when we change
? Avoid the question of the behavior of the solution of x_ = F0 (x) for as long as
possible. Try the same trick:
F (x; ) = F (x)
5. HYPERBOLICITY IN BIFURCATIONS (NOV. 11) 85

We know that Dx F 1 does not exist at x = 0;  = 0. But we're still looking


for F 1 (f(0; 0)g). Try to look at level set as a graph in another direction:
Let x = (x1 ; x2 ), Try to nd ((x1 );  (x1 )) such that
F (x1 ; (x1 );  (x1 )) = (0; 0);
i.e.
f (x ; (x )) 0
 (x1 ) 1 1
g (x1) (x1 ; (x1 )) = 0 :

x2

x1 ( )



Figure 16. () is a graph

d , d :
Let's try to compute dx
1 dx1
0 @f @f @ @f @ 1
B@ @x1 + @x2 @x1 + @ @x1 CA = 0 ;
@g @g @ @g @ 0
@x1 + @x2 @x1 + @ @x1
so we have to have
0 @ 1 0 @f @f 1 1 0 @f 1
B@ @x1 CA = B@ @x2 @ CA  B@ @x1 CA :
@ @g @g @g
@x1 @x2 @ @x1
Thus we need
0 @f @f 1
B@ @x2 @ CA
@g @g
@x2 @ x=0;=0
to be invertible.
@f
We know that @x = 0 and @g = .
2 (0;0) @x2 (0;0)
86 3. USING MAPS TO UNDERSTAND FLOWS

This gives us
0 @f 1
B@0 @ CA
@g @
x=0;=0

which is clearly invertible i @f
@ (0;0) 6= 0.
This means that the x1 -component of F0 depends on , i.e. changes at nonzero
speed as  changes.
Theorem 5.1. Given F : R2 ! R2 , F0 (0) = 0,
 
Dx F0 jx=0 = 00 0 ;  < 0;

provided that
@ f1st component of F g 6= 0;
@ (x=0;=0)

then there is a 1 > 0 and a curve  : ( 1 ; 1 ) ! R2 ,  (x1 ) = ((x1 );  (x1 )), and
a neighborhood V of ((0; 0); 0) such that
1. F (x1 ) (x1 ; (x1 )) = 0 for all x 2 ( 1 ; 1 ),
2. If ((x1 ; x2 ); ) 2 V and F (x1 ; x2 ) = 0 then x2 = (x1 ),  =  (x1 ), and
3.  = (;  ) is smooth.

x2

x1 a curve of xed points





Figure 17. We can see that there are no xed points for  < 0,
and two for  > 0.
5. HYPERBOLICITY IN BIFURCATIONS (NOV. 11) 87
x2

x1

Figure 18. Here, we have two xed points for  < 0, and none
for  > 0. The di erence between this picture and the last is the
sign of the second partial.

x2

x1

Figure 19. No bifurcation here.

Need to compute more about  (x1 ), and we need its 1st and 2nd derivatives.
We know from before that
0 @ 1 0 @f @f 1 1 0 @f 1
B@ @x1 CA = B@ @x2 @ CA  B@ @x1 CA
@ @g @g @g
@x1 @x2 @ @x
0 @g1 @f @f @g 1
= @f @g 1 @f @g B 1 C
@ @@g @x@f1 @@f@x@g A:
@x1 @ @ @x2 @x2 @x1 + @x2 @x1
88 3. USING MAPS TO UNDERSTAND FLOWS

At x = 0;  = 0, we get (0; 0), i.e. the curve (x1 ; (x1 );  (x1 )) is tangent to the
x1 -axis at (0; 0). We also have
d2  = @ 2f =@x21 ; d2  = 0:
dx21 @f=@ dx21
6. Bifurcation diagrams (Nov. 13)
We want to know if the nondegeneracy conditions from the last chapter are
satis ed, so let's try to reduce dimension, since nothing much happens in the x2 -
direction.
Consider
0x_ 1 0F (x )1 0x 1
@x12 A = @F(x12 )A = F^ @x12 A :
 0 
At the xed point (0; 0; 0),
00 0 @f =@1

D(x1 ;x2 ;)F (0;0;0) = @0  @g =@A :
0 0 0
The eigenvalues are 0; 0; and . So there is a 2-dimensional center manifold
and a 1-dimensional (since  < 0) strong stable manifold, W ss .
Note: All xed points must be on W c(0; 0; 0), since if they are not, in
backwards time, the distances stretch exponentially.
Consider the ow restricted to W c (0; 0; 0), and return to the 1-parameter fam-
ily. If  is considered as a coordinate on W c (0; 0; 0), then _ = 0, so we have
x_ = h(x);
a 1-dimensional, 1-parameter family.
We know that H0 (0) = 0, and dh dx (0; 0) = 0, and we require that
@h 6= 0 and @ 2 h 6= 0:
@ @x2
6. BIFURCATION DIAGRAMS (NOV. 13) 89

Bifurcation diagrams:

@ 2 h2 <0
@x

 A
 A
 AU
@h > 0 @h < 0
@ @

h > 0 h < 0
h = 0 h = 0
h < 0 h > 0
Figure 20. These are the possible bifurcations if the second par-
tial is negative. The di erence between the two bottom pictures is
the sign of the rst partial @h
@ . When it is negative, for example,
we go from no xed points to two.
90 3. USING MAPS TO UNDERSTAND FLOWS

@ 2 h2 >0
@x

h > 0 @h > 0  A @h < 0 h < 0


@  A @
 AU
h = 0 h = 0
h < 0 h > 0

Figure 21. These are the possible bifurcations if the second par-
tial is positive. See the last gure also. Again, the sign of the rst
partial determines whether we go from two xed points to none or
vice-versa.
A saddle-node bifurcation occurs when there is 1 simple zero eigenvalue
with nondegeneracy conditions. What about 2 zero eigenvalues, such as
0 1 0 0
DF0 j0 = 0 0 or even 0 0 ?
And why would we consider such a degenerate situation?
So, to study these situations, we need a 2-parameter family. (See Figure 25
7. Spiral sinks and spiral sources, normal form calculation (Nov. 15)
Consider F : Rn ! Rn , F0 (0) = 0, and DF0 j0 has eigenvalues i!; 3; : : : ; n ,
with the real part of the j all not 0.
First, restrict attention to 2 dimensions, and consider F : R2 ! R2 . We can
justify this by saying that if the dimension is > 2 then restrict to the center manifold
for
0x 1 0F (x )1
BB ...1 CC BB  ... 1 CC
F^ B@xn CA = B@F(xn )CA
 0
7. SPIRAL SINKS AND SPIRAL SOURCES, NORMAL FORM CALCULATION (NOV. 15) 91

sink
 -

source
 -

Figure 22. In the left picture, we undergo a bifurcation from no


xed points, through a node, and then to a sink-source pair. The
parabola represents the xed points. Any vertical slice through the
picture corresponds to the system at some value of . The picture
on the right is the same, except in reverse order.

source
 XX
 XXX
  XX

9 XXz
X

y
XXX :

XXX 
X XX  
X 
sink
Figure 23. In the left picture, we undergo a bifurcation from no
xed points, through a node, and then to a sink-source pair. The
parabola represents the xed points. Any vertical slice through the
picture corresponds to the system at some value of . The picture
on the right is the same, except in reverse order.

which is 3-dimensional and one dimension is a parameter. We know that there is a


curve () such that F ( ()) = 0. Do an -dependent change of variables to get
G (x) = F (x + ()):
So G (0) = 0 for all , so we can assume that  = 0, and that the xed point is
at 0.
92 3. USING MAPS TO UNDERSTAND FLOWS

<0 =0 >0

Figure 24. This is a saddle-node bifurcation. In this gure, we


see that there are no xed points for  < 0. At  = 0 is the
bifurcation, and we have a node. For positive , we have the birth
of a sink and a saddle.

Figure 25. You can't get around the two-parameter family.

So we can assume that the system looks like


 
DF0 j0 = 0 0 :
We then expect
1. Higher order terms of F0 make 0 into a spiral sink or a spiral source.
2. If we change , we expect the eigenvalues of DF j0 to have nonzero real part
The above conditions will be our \nondegeneracy conditions" which insure that
there is a bifurcation. Suppose, for example, the higher order terms of F0 make 0
a spiral sink, so there must exist an attractor block for this sink.
Suppose we change  a little and eigenvalues of DF0 have positive real part.
Then 0 is a spiral source (hyperbolic). But old attractor block for  = 0 is still an
attractor block.
Orbits entering the block must limit somewhere, but it can't be 0, and there
are no other equilibria.
7. SPIRAL SINKS AND SPIRAL SOURCES, NORMAL FORM CALCULATION (NOV. 15) 93

Theorem 7.1 (Poincare-Bendixon). A bounded orbit of a ow in R2 limits


onto a periodic orbit or a xed point or a \limit cycle".
So there must be at least one periodic orbit near zero. The point of the Hopf
Bifurcation Theorem is to show
1. that there is only 1 periodic orbit, and
2. to give some machinery for manipulating higher order terms so we can see
for which 's this orbit exists.
So, as above, we assume that
0  
DF0 j0 =  0 :
We want to try to get a handle on the higher order terms of F .
x 0 x  ax2 + bxy + cy2 
F0 y =  0 y + x2 + xy + y2 + h.o.t.
x 
= A y + 2nd order + h.o.t.
If 0 is a spiral sink, then a change of variables won't change that. So what is it
about power series expansion that is topological information?
Try to simplify the power series of F0 by changing variables. Is there any
topological information in the 2nd order terms, i.e. can I do a change of variables
so that in the new variables,
 z_   z   3rd order 
w = A w + and higher ?
So, let's try a change of variables of the form
x  z   z 
y= w+h ; w
where  z   a z2 + b zw + c w2 
h w = 12 1 1
2 ;
1 z + 1 zw + 1 w
i.e. identity plus 2nd order terms, and we can pick a1 ; b1 ; : : : ; 1 . Now,
x x x
F0 y = A y + F2 y + h.o.t.
Then
x_   z_   z_ 
y = w + Dh jz;w
x x w x
= F y = A y + F2 y + h.o.t.
 z   z   z 
=A w +h w + F2 w + h.o.t.
Thus we have that
   z_  z  z  z 
I + Dhjz;w w = A w + Ah w + F2 w + h.o.t.;
94 3. USING MAPS TO UNDERSTAND FLOWS

or
 z_    1       
w = I + Dhjz;w A wz + Ah wz + F2 wz + h.o.t. :
Can we write (I + Dh) 1 as I + B ? Well,
I = (I + Dh)  (I + B ) = I + Dh + B + Dh  B;
so, up to rst order, (I + Dh) 1 = (I Dh). Thus
 z_     z  z  z  
w = I Dh jz;w A w + Ah w + F2 w + h.o.t. :
Since  
Dhjz;w = 22 a1zz +
+ b1 w b1 z + 2c1 w ;
1 w 1 z + 2 1 w
1
 z_  1 2a z + b w b z 2c w 
1 1 1 1
w = 2 1 z 1 w 1 1 z 2 1 w +   
w + ( + a b )z2 + ( 2a  +  + b c )zw + ( b  +  + c)w2 
1 1 1 1 1 1 1 1
= 2 2 :
t + ( 1  a1 + )z + (2 1  + 2 1  b1 + )zw + ( 1  c1 + )w
Make all coecients 0,
0 0 0 0  0 1 0 a 1 0 a 1
BB 2 0 2 0  0 CC BB b11 CC BB b CC
BB 0  0 0 0  CC BB c1 CC = BB c CC
BB  0 0 0  0 CC BB 1 CC BB CC
@ 0  0 2 0 2 A @ 1 A @ A
0 0  0  0 1
The determinant of this matrix is 3 6= 0, so we can change variables and
6
eliminate all 2nd order terms.
Remark: This is an example of a normal form calculation. We will try the
cubic terms next.
8. More normal form calculations, complexi cation (Nov. 18)
In the case we have been talking about, the linear part is a center. But we
could have a change in dim W u ; W s at  = 0. What is happening at  = 0?
Let  ;  by the eigenvalues of DF j0 . So  =  + i where 0 = 0; 0 = .
Write
x_  x   x
 
y = F y =   y + h.o.t.
 
In the last section, we saw that we can do an algebraic change of variables
which eliminates all 2nd order terms of F0 . How many terms can we eliminate?
Which terms have geometric information? Let us take advantage of the fact that
we're in R2 , and use complex-valued algebra.
Think of (x; y) 2 C2 , and can linearize linear part. Let
1 1   
P= i i for   :
 
8. MORE NORMAL FORM CALCULATIONS, COMPLEXIFICATION (NOV. 18) 95

Do change of variables
  
P zz1 = xy
2
and get the system
z_     
1 1   z1
z2 = P   P z2 + h.o.t.
 0 z 
= 0  z1 + h.o.t.
 2
Where did the original R go, i.e. where is f(x; y)j x; y 2 R g? Since
2
z  x
1 1
z2 = P y
and
 i ;

P 1=1 1
2 1 i
so
z1 = (x iy)=2
z2 = (x + iy)=2:
Let  = f(z1; z2 )j z2 = z1 g, a 2-dimensional subspace of the 4-dimensional C2 .
Moreover, since the space f(x; y)j x; t 2 R g is invariant in the original equation, 
is invariant under the transformed equation. So if we only care about (x; y) 2 R ,
then we need only consider solutions on  where z2 = z1 .
So we take the expansion
z_1 =  z1 + F2 (z1 ; z2 )
and replace z2 with z 1 . So
X ij
z1 =  z1 + aij z1 z1 :
i+j 2
(If F is C N +1 , then
X
z1 =  z1 + aij z1i zj1 + terms of order N + 1:)
i+j 2
Remarks:
1. This is not complex analytic!
2. aij also depend on .
Now, try to do a change of variables which eliminate terms of the power series.
For example, try to get rid of z k zl (k + l  2). Try a new variable w 2 C where
z = w + bz k zl :
We see that
_ k 1 wl + blwl 1 wk ;
z_ = w_ + bkww
96 3. USING MAPS TO UNDERSTAND FLOWS

and
X
 z + aij z i z j
i+j 2
X j
=  (w + bwk wl ) + aij (w + bwk wl )i (w + bwk wl ) :
i+j 2
So we get
X
w_ =  w + aij wi z j
k+li+j 2
i;j 6=k;l
+ ( b bk bl + akl )wk wl + h.o.t.
All we need is
 b bk bl + akl = 0:
So let
b= akl
 k l
and we need worry only if
 k l = 0
Remark: This change of variable only a ects k; l terms and terms of higher
order. This does change the coecients of the higher order terms, so we must do
it in order, from lowest order to highest order.
 =  + i , and there are two cases:
1.  6= 0;  6= 0. We only need that
 k l 6= 0 or
 k + l 6= 0
Since we have that  6= 0;  6= 0, we need that
1 k l 6= 0 or
1 k + l 6= 0:
Since k + l  2, both are never 0. Thus the denominator is never 0!. So
we can eliminate any non-linear terms. Does this mean that we can linearize,
i.e. change variables to w_ =  w?
No. There are in nitely many changes of variable and the domain may
shrink to 0. But there does exist a neighborhood of 0 and a polynomial
change of variables such that in the new variables
w_ =  w + order  N
where F is C N .
We can linearize formally, though (if F is C 1 ; C ! ). This was Poincare's
thesis.
2. When  = 0,  = i = i:
The denominator i ki + li = 0 if 1 k + l = 0 or l = k 1. So we
can eliminate all terms, except z 2 z; z 3 z2 ; z 4z 3 ; : : :
8. MORE NORMAL FORM CALCULATIONS, COMPLEXIFICATION (NOV. 18) 97

Formally, we can change variables to


X
z_ = iz + aj z j zj 1
j 2
provided F is C , there is a neighborhood of 0 and a change of variables
N
such that
j +1X
<2N
z_ = iz + aj z j z j 1 + order N:
j 2
This is known as \normal form". What's the geometry in this case?
Let
z_ = iz + a2 z 2 z + order  4:
and assume a2 = + i 6= 0. Change to polar coordinates:
_ i
_ i + ire
z_ = re
= irei + ( + i )r3 ei + h.o.t.
So
r_ = r3 + h.o.t.  4
_ =  + r2 + h.o.t.  3:
If > 0, we have a spiral source, as in Figure 26.

Figure 26. The nonlinear part makes this a spiral source.


If < 0, we have a spiral sink,as in Figure 27.
If = 0, we still have no info, so have to look at z 3 z2 terms.
Remark: What does tell you? If 6= 0, the amount of rotation changes
with r.
98 3. USING MAPS TO UNDERSTAND FLOWS

Figure 27. The nonlinear part makes this a spiral source.


CHAPTER 4

Topics
1. Setup for Hopf Bifurcation Theorem (Nov. 20)
We know from before the the eigenvalues of DF0 j0 are i. Let 0 = i, and
put z_ = F (z ) into normal form near  = 0, i.e.
z_ =  (z ) + a1 ()z 2 z + h.o.t.
Take  =  + i , and a1 () =  + i  . In polar coordinates, we get
_ i = ( + i )rei + (  + i )r3 ei + h.o.t.
_ i + ire
re
so
r_ =  r +  r3 + h.o.t.
_ =  +  r3 + h.o.t.:
What happens as  is changed through 0,  = 0? Assume that

d 6= 0:
d =0
There are two cases, where d =d > 0 and d =d < 0. In the rst case, for
 < 0, 0 is a spiral sink, and for  > 0, 0 is a spiral source. Also, the behavior at
 = 0 depends on  : If 0 < 0, then for  near 0,  < 0, so that means that at
 = 0, 0 is a spiral sink.
Now let us look at the global picture at  = 0. There exists an attractor block
around 0. Look at
r_ =  r +  r3 :
p
The xed points of  r +  r3 = 0 are r = 0;   = . In two dimensions,
Do we get the same picture with higher order terms? The answer is yes. Set
up regions in the (r; ) plane:
Show that near 0, even with higher order terms, r is increasing. Far from 0, r
is decreasing. Inside the ring show that the derivative of ow in r direction is < 1.
Look at Poincare return map to  = 0, show derivative < 1.
Three other pictures:
Global picture doesn't change for small changes in  (r3 term in r_ is huge when
r is big). Behavior near r = 0 is changed by changing . So the r^ole of the attractor
is transferred from the xed point to the periodic orbit.
One other case: 0 = 0 =  for all 
99
100 4. TOPICS

<0 <0 <0

Figure 1. For  < 0, we have a spiral sink. When  is 0, we do


not have a center, for the nonlinear term makes this still into a
spiral sink. And for some value of  > 0, a periodic orbit is born,
which is an attractor.

Figure 2. The same deal pictured in higher dimensions. The


spiral sink at the origin becomes a spiral source, and the attracting
cycle is born.

Then
r_ = r

d > 0:
_ =  +  r2 ; d =0
Think of 0 as taking the plane of periodics at  = 0 and bending it.
Now, how can periodic orbits bifurcate? Bifurcation of periodic orbits of ows
corresponds to bifurcations of xed points of maps. Given a periodic orbit, we can
associate a Poincare map.
Look at the 1-parameter family of maps P : Rn ! Rn with P0 (0) = 0.
1. SETUP FOR HOPF BIFURCATION THEOREM (NOV. 20) 101

Figure 3. the ring

Figure 4. The other three possibilities.

Theorem 1.1. 1. Suppose the eigenvalues of DP0 j0 are o of the unit


circle. Then there is a neighborhood V of 0, 1 > 0, and : ( 1; 1 ) ! V
as smooth as P such that
(a) (0) = 0.
102 4. TOPICS

Figure 5. The spiral sink becomes a center which becomes a spiral source.

(b) 8 2 ( 1 ; 1 ); P ( ()) = ()


(c) if x 2 V , and  2 ( 1 ; 1) and P (x) = x, then x = ().
(d) The dimensions of W u and W s for P at () are constant.
2. If DP0 j0 has no eigenvalues = 1, then there is 1 ; V; such that the rst
three conclusions hold.
Proof: The same as for ows. De ne P (x; ) = P(x) x and look at
P (x; ) = 0. To use the Inverse Function Theorem to get , we need that Dx Pj(0;0)
is non-singular, that is that Dx P0 I has no eigenvalues equal to 1. Also note that
the eigenvalues of DP j () are continuous in .
So, there are 3 di erent codimension 1 bifurcations for xed points of maps:
1. 1 eigenvalue = 1 (saddle-node)
2. 1 eigenvalue = 1 (period-doubling)
3. Pair of eigenvalues on unit circle (Hopf, Neimark)

2. More Hopf Bifurcation Theorem (Nov. 22,25)


Remember that bifurcations of xed points of maps correspond to bifurcation
of periodic orbits for ows. As we saw last time, there are three distinct types of
bifurcations:
1. Saddle node bifurcation (center manifold)
It suces to consider the system P : R ! R . Example: P (x) =
 + x. For  = 0, all points are xed, but for  6= 0, there are no xed points
whatsoever.
We expect that the generic picture should bend this picture one way or
the other.
Look at P (x) =  + x + x2 .
We need the two nondegeneracy conditions

dP 6= 0;

d2 P0 6= 0:
d 0 dx2 0
2. MORE HOPF BIFURCATION THEOREM (NOV. 22,25) 103

Figure 6. For ows, the bifurcations happen as the eigenvalues


cross the imaginary axis. For maps, they happen when the eigen-
values cross the unit circle.

Figure 7. A radical bifurcation. No xed points becomes all xed


points which again becomes no xed points.
In one dimension, source and sink are born (or die) at bifurcation (at
 = 0). In 2 or more dimensions, we expect to have either saddle-sink (if all
other eigenvalues of DxP0 j0 are inside the unit circle), saddle-source (if all
other eigenvalues are outside the unit circle), or saddle-saddle.
2. Period-doubling (one eigenvalue = 1)
Simple model, we know P0 (0) = 0 and if all other eigenvalues are not on
the unit circle, then there is a neighborhood U of 0, 1 > 0, and a smooth
curve : ( 1 ; 1 ) ! U such that if P (x) = x, and x 2 U , then x = ().
So we may as well assume that P (0) = 0; 8. By center manifold it
suces to consider P : R ! R .
Example: This example is a bit too simple. Consider P(x) = (1 +
)x For  < 0, 0 is a sink, for  > 0, a source, and for  = 0, every point is
periodic of period 2.
104 4. TOPICS

Figure 8. These are the possible Hopf bifurcations. These are


the same pictures as in Figures 22 and 23.

y=x

Figure 9. This is the map P (x) = +x x2 and its corresponding


bifurcation diagram.

Example: P(x) = (1 + )x + x2. 0 is a sink for  < 0, and a source


for  > 0. We expect the birth of a period 2 orbit at  = 0. Look for xed
points of the 2nd iterate.
P2 (x) = (1 + )P (x) + (P (x))2 = (1 + )2 x + ( + 2 )x2 2(1 + )x3 + x4 :
2. MORE HOPF BIFURCATION THEOREM (NOV. 22,25) 105

Note that at  = 0, P02 has derivative 1 at x = 0. Also,


P02 (x) = x + 0x2 2x3 + x4 ;
so it is not a saddle-node bifurcation.
Look for xed points of P2 (x). x = 0 is always xed. For  > 0, we
have the xed points
p p
 4 + 2 ; and  + 4 + 2
2 2
Note that x = 2 +  far from x = 0. The bifurcation for P2 is known as
a pitchfork bifurcation.
If  is an eigenvalue with 0 = 1 then we need
d 6= 0
d
which corresponds to
d3 P02 6= 0:
dx3
Note that d2 P02 =dx2 is always 0 because P02 is not undergoing a saddle-
node bifurcation.
The proof for the saddle-node is the same as for ows. Let P (x; ) =
P (x) x, and try to use the Implicit Function Theorem to write level set
of P = 0 as (x; (x)), that is,  = (x).
There is one nondegeneracy condition used for the Implicit Function
Theorem, i.e.

@ P 6= 0:
@ (0;0)
Other condition forces
d2 6= 0:
dx2 x=0
For period doubling, we try to do the same thing. Let
2
Q(x; ) = P (x) x :
x
Apply Implicit Function Theorem to Q (extend Q smoothly to x = 0). The
level set of Q = 0 is Q(x; (x)) = 0.
Now for the Hopf bifurcation.
Let F : R3 ! R3 be a 1-parameter family of vector elds. For all , x_ = F (x)
has a periodic orbit and for  = 0 the eigenvalues of the rst return map (the
Floquet multipliers) are on the unit circle.
For maps, let P : R2 ! R2 be a 1-parameter family, such that P (0) = 0, and
DP0 j0 has eigenvalues on the unit circle, ei 6= 1. How does the stability change
as  changes?
Do exactly the same calculation. Complexify and linearize in C2 . On the image
of R2 we only need one of the complex variables (the other is the conjugate).
Write
X ij
P (z ) =  z + aij z z
i+j 2
106 4. TOPICS

where  is the eigenvalue of DP at x = 0.


We want to put this in normal form, i.e. get rid of as many higher order terms
as possible. To get rid of z k zl do the change of coordinates
w + bwk wl = z
and end up with
b = alkl :
k 

i
At  = 0, we get  = e , so the denominators look like
 
eikl e il ei = ei ei(k l 1) 1
which must not be 0. So we cannot get rid of terms with k l 1 = 0. Another
case is that if  = 2p=q, (i.e. ei is a root of unity), we can't have
k l 1  0 (mod q):
The lowest order term where this happens is k = 0; l = q 1. So for 0 = e2ip=q ,
the normal form is
P0 (z ) = 0 z + a1 z k z + czq :
Let us add the assumption that if 0 = e2ip=q then q  5. Then
P (z ) =  z + a1 z 2z + h.o.t.
Switching to polar coordinates, if  =  ei , then
P (rei ) =  ei rei  + a1 r3 ei + h.o.t.
Let a1 = ei  , then
P (rei ) = ( r)ei(+ ) +  r3 ei(+  ) +   
So   
3
P r = ( +r +) + Or (r+2 )h.o.t.
+ h.o.t. :

Suppose k k < 1 for  < 0, and k k > 1 for  > 0. The xed points of
r 7!  r +  r3 are
r1 
0;   :


r
sink
sink
source 
Figure 10. The sink becomes a source-sink pair.
Combine r and :
Adjusting the signs of d =d for  = 0 and  give the other three pictures:
2. MORE HOPF BIFURCATION THEOREM (NOV. 22,25) 107

Figure 11. The sink becomes an attracting cycle.

Figure 12. Depending on the signs of d =d and  , we may


have one of these three pictures.

One can show (by building the attractor and repeller blocks) that if the higher
order terms are added in, for  near 0, the picture is the same.
108 4. TOPICS

The exceptional cases are 0 = e2ip=q for q = 1; 2; 3; 4. If q = 1, then 0 = 1,


and this is a saddle-node bifurcation, provided only one eigenvalue is 1. If q = 2,
then 0 = 1, and this is a period-doubling bifurcation, again provided only one
eigenvalue is 1. The cases q = 3 and q = 4 are special.

3. Setup for Melnikov method (Nov. 25, Dec 2)
Up to this point, every theorem that we've had has had the words "for $\epsilon$ small". Everything has been local, with one exception: in the Stable-Unstable Manifold Theorem, we get the result that the manifolds are globally immersed submanifolds.
The reason is that all of our techniques so far have used calculus, so they must be local. We start with something we know, and perturb away. Also, all we have worked with are fixed points and periodic orbits (which are themselves fixed points of maps). But we can also do local analysis near bigger things, e.g. whole nontrivial orbits.
Let $\dot x = F_\epsilon(x, t)$ be a 1-parameter family. Let us assume that we understand an orbit for $\epsilon = 0$ (it could be a $W^u$ or a $W^s$). Then we compute what happens for $\epsilon \neq 0$ (but small).
Definition 3.1. If there is a point $z \neq x_0$ such that $z \in W^u(x_0) \cap W^s(x_0)$, then $z$ is called a homoclinic point. $O(z)$ is called a homoclinic orbit.
Note: In 2 dimensions, we have either that $W^u(x_0) \cap W^s(x_0) = \{x_0\}$ or that a branch of $W^s(x_0)$ is the same as a branch of $W^u(x_0)$.
Example: $\ddot y = y - y^3$. Convert this to the system
$$\dot y = v$$
$$\dot v = y - y^3.$$
This system is Hamiltonian, with
$$H(y, v) = \frac{v^2}{2} - \frac{y^2}{2} + \frac{y^4}{4}.$$
Note that $W^u(0) = W^s(0)$.
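As a quick symbolic check (ours, using sympy) that this vector field really is the Hamiltonian field of $H$, i.e. $\dot y = \partial H/\partial v$ and $\dot v = -\partial H/\partial y$:

    import sympy as sp

    # Check that (ydot, vdot) = (v, y - y^3) is the Hamiltonian vector field of
    # H(y, v) = v^2/2 - y^2/2 + y^4/4.
    y, v = sp.symbols('y v')
    H = v**2/2 - y**2/2 + y**4/4
    print(sp.diff(H, v))    # dH/dv  -> v          (this is ydot)
    print(-sp.diff(H, y))   # -dH/dy -> y - y**3   (this is vdot)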
Figure 13. The phase plane for $\ddot y = y - y^3$.
If we perturb and get
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = \begin{pmatrix} v \\ y - y^3 \end{pmatrix} + \epsilon\, G\begin{pmatrix} y \\ v \end{pmatrix}$$
for $\epsilon$ small, we can break the connection (adding damping or antidamping) by letting
$$G\begin{pmatrix} y \\ v \end{pmatrix} = \begin{pmatrix} 0 \\ \pm v \end{pmatrix}.$$
Figure 14. This should be a picture of the phase planes of the damped and anti-damped pendula.
Example: Consider the system $\ddot y = -\sin y$, or
$$\dot y = v$$
$$\dot v = -\sin y;$$
this is also Hamiltonian, with
$$H(y, v) = \frac{v^2}{2} - \cos(y).$$
This is a "heteroclinic connection", i.e. a branch of $W^s((\pi, 0))$ is the same as a branch of $W^u((-\pi, 0))$. We can again break the connection with damping or antidamping.
In the 2-dimensional autonomous case, the $W^s$ and $W^u$ manifolds are made up of 3 orbits: the fixed point (the saddle) and the 2 branches. If $z \in W^s(x_0) \cap W^u(x_0)$, then $O(z) \subseteq W^s(x_0) \cap W^u(x_0)$.
What about the nonautonomous case, e.g.
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = \begin{pmatrix} v \\ y - y^3 \end{pmatrix} + \epsilon\, G(y, v, t),$$
where $G(y, v, t + T) = G(y, v, t)$? Make this system 3-dimensional:
$$\begin{pmatrix} \dot y \\ \dot v \\ \dot s \end{pmatrix} = \begin{pmatrix} v \\ y - y^3 \\ 1 \end{pmatrix} + \epsilon \begin{pmatrix} G(y, v, s) \\ 0 \end{pmatrix}.$$
Figure 15. This will be a picture of the phase plane for $v^2/2 - \cos(y)$.
We only need $s \in [0, T]$ because $G$ is periodic. So our new phase space is $\mathbb{R}^2 \times [0, T]$ with $0$ and $T$ identified.
For $\epsilon = 0$, there are 3 period-$T$ periodic orbits.
Figure 16. The Floquet multipliers of $O(0, 0, 0)$ make it a saddle (the other two fixed points are centers).
First, draw the Poincare map for $s = 0$. This is the period-$T$ map of the 2-dimensional flow.
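Numerically, this map is obtained by integrating the flow over one period. Here is a minimal sketch (our own code, using scipy; the values of $T$ and $\epsilon$ and the damping perturbation $G = (0, -v)$ are sample choices, not dictated by the notes):

    import numpy as np
    from scipy.integrate import solve_ivp

    T = 2 * np.pi    # forcing period (sample value)
    eps = 0.05       # perturbation size (sample value)

    def rhs(t, x):
        y, v = x
        # unperturbed field (v, y - y^3) plus the damping perturbation eps*(0, -v)
        return [v, y - y**3 - eps * v]

    def poincare_map(x0):
        """Period-T map: flow the point from s = 0 to s = T."""
        sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12)
        return sol.y[:, -1]

    x = np.array([0.9, 0.0])
    for _ in range(5):
        x = poincare_map(x)
        print(x)    # iterates of the Poincare map in the (y, v) plane

Iterating this map is the discrete-time picture of the flow on $\mathbb{R}^2 \times [0, T]$.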
The 3-dimensional picture:
Now make $\epsilon > 0$. Damping or antidamping is still possible, with $G(y, v, t) = (0, \pm v)$.
Remark: By bifurcation theory, we know that if $\epsilon$ is small then the Poincare map has a fixed point near $(0, 0)$.
Another (new) possibility:
Figure 17. This will be a picture of the Poincare map for the unperturbed flow.
Figure 18. This will be the phase plane for the flow corresponding to the map in the last figure.
We could have that $z \in W^u(\gamma_\epsilon) \cap W^s(\gamma_\epsilon)$, so $O(z) \subseteq W^u(\gamma_\epsilon) \cap W^s(\gamma_\epsilon)$. All we know is that the manifolds go through these points, but the picture could be:
This is a picture of a "transverse homoclinic orbit" with $z$ being the transverse homoclinic point, because $W^u$ and $W^s$ cross transversally, i.e. not tangentially.
The 3-dimensional picture:
In this setup, a transverse homoclinic point gives us 3 things:
1. infinitely many homoclinic orbits,
2. very nontrivial recurrence,
3. "escape" orbits,
i.e. "chaos". In this case, note that $W^u$ and $W^s$ for the Poincare map are only immersed (not embedded).
Figure 19. A 3-dimensional picture of the flow, with variables $y, v, t$.
Figure 20. The Poincare map, and the stable and unstable manifolds for the damped pendulum.
These are interesting flows. Are there specific examples? Start with
$$\dot y = v$$
$$\dot v = y - y^3$$
and devise a perturbation that creates transverse homoclinics,
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = \begin{pmatrix} v \\ y - y^3 \end{pmatrix} + \epsilon\, G(y, v, t).$$
Look at the Poincare map, $P: \mathbb{R}^2 \to \mathbb{R}^2$, and pick a neighborhood $V$ as shown. Choose $Q: \mathbb{R}^2 \to \mathbb{R}^2$ such that $Q = I$ outside $V$, and is a small rotation well inside $V$. Then $Q \circ P$ creates a transverse homoclinic orbit.
Figure 21. The Poincare map in the case of a homoclinic orbit.
Figure 22. A transverse homoclinic crossing.
4. Using Melnikov for existence of a transverse homoclinic (Dec. 4)
Our goal is to show a given system has (transverse) homoclinic points. The following is the outline of a method, rather than a theorem. Again, assume
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = F(y, v) + \epsilon\, G(y, v, t),$$
where $G(y, v, t) = G(y, v, t + T)$. Further assume that when $\epsilon = 0$, $F$ (the unperturbed system) has a saddle, $x_0$, and has a homoclinic connection, where a branch of $W^u(x_0)$ is a branch of $W^s(x_0)$.
Notation.
1. Let $q_0: \mathbb{R} \to \mathbb{R}^2$ be an orbit which equals this homoclinic connection. Let $z = q_0(0)$. $q_0(t)$ is a solution for $F$.
Figure 23. A 3-dimensional picture of a transverse homoclinic orbit.
Figure 24. How to make a transverse homoclinic orbit out of any homoclinic orbit.
2. When $\epsilon \neq 0$, think of the system in $\mathbb{R}^2 \times [0, T]$ with $0$ and $T$ identified. Call the third variable $t$.
3. For $\epsilon \neq 0$, but close to $0$, the system
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = F(y, v) + \epsilon\, G(y, v, t)$$
has a period-$T$ orbit near $y = v = 0$. Call it $\gamma_\epsilon$.
What happens when we change $\epsilon$ slightly? To see if $W^s(\gamma_\epsilon)$ and $W^u(\gamma_\epsilon)$ cross, try to measure the distance between them.
Steps:
1. Pick $z = q_0(0)$ on the unperturbed connecting orbit,
2. build $\Sigma$ transverse to $W^u(\gamma_0) = W^s(\gamma_0)$,
3. Look at $W^u(\gamma_\epsilon) \cap \Sigma$ and $W^s(\gamma_\epsilon) \cap \Sigma$. There are 3 possibilities:
Figure 25. The three-dimensional homoclinic.
Figure 26. The three possible configurations of $W^s(\gamma_\epsilon)$ and $W^u(\gamma_\epsilon)$.
4. Try to measure the distance between $W^u(\gamma_\epsilon) \cap \Sigma$ and $W^s(\gamma_\epsilon) \cap \Sigma$ as a function of $t$. Let $d(t_0, \epsilon)$ be the distance between $W^s(\gamma_\epsilon) \cap \Sigma \cap \{t = t_0\}$ and $W^u(\gamma_\epsilon) \cap \Sigma \cap \{t = t_0\}$.
5. Write $d(t, \epsilon)$ as $d_0(t) + \epsilon d_1(t) + \epsilon^2 d_2(t) + \cdots$ (assume that we can). When $\epsilon = 0$, $W^u = W^s$, so $d_0(t) = 0$, so
$$d(t, \epsilon) = \epsilon d_1(t) + \epsilon^2 d_2(t) + \cdots$$
What we'll actually compute is $d_1(t)$.
Lemma 4.1. If $d(t, \epsilon) = \epsilon d_1(t) + O(\epsilon^2)$ with the $O(\epsilon^2)$ term uniformly bounded, and $d(t + T, \epsilon) = d(t, \epsilon)$ for all $t$, then
(a) If $d_1(t) > 0$ for all $t$, then for $\epsilon$ sufficiently small, $d(t, \epsilon) > 0$ for all $t$.
(b) If $d_1(t) < 0$ for all $t$, then for $\epsilon$ small, $d(t, \epsilon) < 0$ for all $t$.
(c) If $d_1(t_1) > 0$ and $d_1(t_2) < 0$, then (by the Intermediate Value Theorem) there exists $t_\epsilon$ such that for $\epsilon$ sufficiently small, $d(t_\epsilon, \epsilon) = 0$.
Figure 27. The geometric interpretation of $d(t, \epsilon)$.
(d) If $d_1(t_0) = 0$ and $d_1'(t_0) \neq 0$, then for $\epsilon$ sufficiently small there exists $t_\epsilon$ such that $d(t_\epsilon, \epsilon) = 0$ and
$$\frac{\partial d}{\partial t}(t_\epsilon, \epsilon) \neq 0.$$
6. Compute $d_1(t)$, which is the order $\epsilon$ term of the signed distance between $W^s(\gamma_\epsilon)$ and $W^u(\gamma_\epsilon)$ on $\Sigma$. Then check if $d_1(t)$ is ever $0$. If it is, then for $\epsilon$ sufficiently small, the system has homoclinic points.
Now why do we hope that we can compute $d_1(t)$? Look at the system
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = F(y, v) + \epsilon\, G(y, v, t).$$
We know that $q_0(t)$ is a solution for $\epsilon = 0$, and $q_0(t) \to (0, 0)$ as $t \to \pm\infty$. When $\epsilon \neq 0$, but small, try to write solutions near $q_0(t)$, using Euler's method:
Figure 28. The errors in Euler's Method.
If we follow the vector field for $\epsilon \neq 0$, the first step error is of order $\epsilon \cdot \Delta t$, where $\Delta t$ is the step size. There are two sources of error on the next step:
1. Because the red orbit is off of $q_0(t)$, the orbits can move exponentially apart. This error is of order $\epsilon$ and fast-growing.
Figure 29. How fast can order-$\epsilon$ error grow?
Remark: In the example systems, we don't have exponential divergence.
2. The same type of error as the first step (we changed the vector field by $\epsilon$, so there is a $\Delta t \cdot \epsilon$ change in the next step).
3. Compounding of the change in vector field, but this error is order $\epsilon^2$.
We expect $d_1(t)$ to have 2 terms:
1. For divergence of nearby orbits if $\epsilon = 0$,
2. $\epsilon$-order changes along $q_0(t)$.
We need two tools:
1. A natural way to pick $\Sigma$. We will use $F^\perp(z)$ to choose $\Sigma$. At each point of $q_0$, measure the perturbation in the $F^\perp$ direction.
2. A nice representation of $W^s(\gamma_\epsilon)$ and $W^u(\gamma_\epsilon)$.
5. Calculation of order $\epsilon$ term (Dec. 6)
So, we chose $\Sigma$ to be a surface in $\mathbb{R}^3$ through $\{q_0(0)\} \times [0, T]$ containing $F^\perp(z)$. We need a way to name orbits on $W^s(\gamma_\epsilon)$, $W^u(\gamma_\epsilon)$.
Let $q^s_\epsilon(\cdot, t_0): \mathbb{R} \to \mathbb{R}^2$ be a solution of
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = F(y, v) + \epsilon\, G(y, v, t),$$
with $q^s_\epsilon(t, t_0) \to \gamma_\epsilon$ as $t \to +\infty$ (so $q^s_\epsilon$ is on $W^s(\gamma_\epsilon)$), $(q^s_\epsilon(t_0, t_0), t_0) \in \Sigma$, and $(q^s_\epsilon(t, t_0), t) \notin \Sigma$ for all $t > t_0$. Do the same for $q^u_\epsilon(t, t_0)$, with $(q^u_\epsilon(t_0, t_0), t_0) \in \Sigma$ and $(q^u_\epsilon(s, t_0), s) \notin \Sigma$ for all $s < t_0$.
We are interested in measuring $\|q^s_\epsilon(t_0, t_0) - q^u_\epsilon(t_0, t_0)\| = d(t_0, \epsilon)$.
Figure 30. $F(z)$ and $F^\perp(z)$.
Lemma 5.1. As $\epsilon \to 0$, $q^s_\epsilon, q^u_\epsilon \to q_0$. We can write
$$q^s_\epsilon(t, t_0) = q_0(t - t_0) + \epsilon\, q^s_1(t, t_0) + O(\epsilon^2)$$
$$q^u_\epsilon(t, t_0) = q_0(t - t_0) + \epsilon\, q^u_1(t, t_0) + O(\epsilon^2).$$
This is a corollary of the Stable/Unstable Manifold theorem.
Note: $q_0(t - t_0) = q_0(0)$ when $t = t_0$.
We want to compute
$$F^\perp(z) \cdot [q^s_\epsilon(t_0, t_0) - q^u_\epsilon(t_0, t_0)]$$
$$= F^\perp(z) \cdot \big( [q_0(t_0 - t_0) + \epsilon\, q^s_1(t_0, t_0)] - [q_0(t_0 - t_0) + \epsilon\, q^u_1(t_0, t_0)] \big) + O(\epsilon^2)$$
$$= \epsilon\; \underbrace{F^\perp(z) \cdot [q^s_1(t_0, t_0) - q^u_1(t_0, t_0)]}_{\text{compute this!}} \; + O(\epsilon^2).$$
Now,
$$F^\perp(z) \cdot [q^s_1(t_0, t_0) - q^u_1(t_0, t_0)] = F^\perp(q_0(t_0 - t_0)) \cdot [q^s_1(t_0, t_0) - q^u_1(t_0, t_0)].$$
This is the value at $t = t_0$ of the function
$$F^\perp(q_0(t - t_0)) \cdot [q^s_1(t, t_0) - q^u_1(t, t_0)].$$
Compute $d/dt$ of this and hope it turns out nice. First note that
$$q^s_\epsilon(t, t_0) = q_0(t - t_0) + \epsilon\, q^s_1(t, t_0) + O(\epsilon^2),$$
so
$$\dot q^s_\epsilon(t, t_0) = \dot q_0(t - t_0) + \epsilon\, \dot q^s_1(t, t_0) + O(\epsilon^2).$$
We know that $q^s_\epsilon$ is a solution of the differential equation, so
$$\frac{d}{dt} q^s_\epsilon(t, t_0) = F(q^s_\epsilon(t, t_0)) + \epsilon\, G(q^s_\epsilon(t, t_0), t)$$
$$= F\big( q_0(t - t_0) + \epsilon\, q^s_1(t, t_0) + O(\epsilon^2) \big) + \epsilon\, G\big( q_0(t - t_0) + \epsilon\, q^s_1(t, t_0) + O(\epsilon^2), t \big)$$
$$= F(q_0(t - t_0)) + \epsilon\, DF|_{q_0(t - t_0)}\, q^s_1(t, t_0) + \epsilon\, G(q_0(t - t_0), t) + O(\epsilon^2).$$
So we get
$$\dot q_0(t - t_0) + \epsilon\, \dot q^s_1(t, t_0) + O(\epsilon^2) = F(q_0(t - t_0)) + \epsilon\, DF|_{q_0(t - t_0)}\, q^s_1(t, t_0) + \epsilon\, G(q_0(t - t_0), t) + O(\epsilon^2),$$
so
$$\epsilon\, \dot q^s_1(t, t_0) = \epsilon\, DF|_{q_0(t - t_0)}\, q^s_1(t, t_0) + \epsilon\, G(q_0(t - t_0), t) + O(\epsilon^2),$$
since $F(q_0(t - t_0)) = \dot q_0(t - t_0)$.
Thus
$$\dot q^s_1(t, t_0) = \underbrace{DF|_{q_0(t - t_0)}\, q^s_1(t, t_0)}_{\text{term 1}} + \underbrace{G(q_0(t - t_0), t)}_{\text{term 2}}.$$
Term 1 involves the rate of change of $F$, where the small error grows exponentially, and term 2 is an order $\epsilon$ perturbation along the unperturbed orbit.
Now look at
$$\Delta^s(t, t_0) = F^\perp(q_0(t - t_0)) \cdot q^s_1(t, t_0).$$
Then
$$\dot\Delta^s(t, t_0) = \Big( DF^\perp\big|_{q_0(t - t_0)}\, \dot q_0(t - t_0) \Big) \cdot q^s_1(t, t_0) + F^\perp(q_0(t - t_0)) \cdot \dot q^s_1(t, t_0)$$
$$= \Big( DF^\perp\big|_{q_0(t - t_0)}\, F(q_0(t - t_0)) \Big) \cdot q^s_1(t, t_0) + F^\perp(q_0(t - t_0)) \cdot \Big( DF|_{q_0(t - t_0)}\, q^s_1(t, t_0) + G(q_0(t - t_0), t) \Big),$$
or
$$\dot\Delta^s(t, t_0) = \Big[ \Big( DF^\perp\big|_{q_0(t - t_0)}\, F(q_0(t - t_0)) \Big) \cdot q^s_1(t, t_0) + F^\perp(q_0(t - t_0)) \cdot DF|_{q_0(t - t_0)}\, q^s_1(t, t_0) \Big] + F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t).$$
So if we know $q_0(t - t_0)$, then we get a hard ODE. In the example
$$\dot y = v, \qquad \dot v = y - y^3,$$
we see that we don't always have exponential splitting.
Note: for any vector $w$,
$$\Big( DF^\perp\, F \Big) \cdot w + F^\perp \cdot \big( DF\, w \big) = \operatorname{trace}(DF)\, \big( F^\perp \cdot w \big),$$
everything evaluated at $q_0(t - t_0)$. Let
$$F = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \qquad F^\perp = \begin{pmatrix} -f_2 \\ f_1 \end{pmatrix},$$
so
$$DF = \begin{pmatrix} \partial f_1/\partial y & \partial f_1/\partial v \\ \partial f_2/\partial y & \partial f_2/\partial v \end{pmatrix}, \qquad DF^\perp = \begin{pmatrix} -\partial f_2/\partial y & -\partial f_2/\partial v \\ \partial f_1/\partial y & \partial f_1/\partial v \end{pmatrix}.$$
$\operatorname{trace} DF$ measures the rate of growth of pieces of area under the flow of $F$. Our example was Hamiltonian, with
$$F = \left( \frac{\partial H}{\partial v},\; -\frac{\partial H}{\partial y} \right),$$
so
$$DF = \begin{pmatrix} \dfrac{\partial^2 H}{\partial y \partial v} & \dfrac{\partial^2 H}{\partial v^2} \\[4pt] -\dfrac{\partial^2 H}{\partial y^2} & -\dfrac{\partial^2 H}{\partial v \partial y} \end{pmatrix},$$
so $\operatorname{trace}(DF) = 0$!
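This can be checked symbolically for a generic $H$; the sympy sketch below (our verification) builds $DF$ for $F = (\partial H/\partial v, -\partial H/\partial y)$ and confirms the trace vanishes by equality of mixed partials:

    import sympy as sp

    y, v = sp.symbols('y v')
    H = sp.Function('H')(y, v)

    # Hamiltonian vector field F = (dH/dv, -dH/dy) and its Jacobian DF
    F = sp.Matrix([sp.diff(H, v), -sp.diff(H, y)])
    DF = F.jacobian(sp.Matrix([y, v]))
    print(sp.simplify(DF.trace()))   # -> 0, since H_yv = H_vy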
So let us assume that $F$ is Hamiltonian (or at least that $\operatorname{trace}(DF) = 0$), so
$$\dot\Delta^s(t, t_0) = F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t),$$
giving
$$\Delta^s(t, t_0) = \int F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t)\, dt.$$
Note: $F^\perp(q_0(t - t_0)) \to 0$ as $t \to \pm\infty$.
So, since $\Delta^s(t, t_0) \to 0$ as $t \to +\infty$,
$$\Delta^s(t_0, t_0) = -\int_{t_0}^{\infty} F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t)\, dt.$$
The above, and
$$\Delta^u(t, t_0) = F^\perp(q_0(t - t_0)) \cdot q^u_1(t, t_0),$$
gives
$$\Delta^u(t_0, t_0) = \int_{-\infty}^{t_0} F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t)\, dt$$
by the same calculation (now using $\Delta^u(t, t_0) \to 0$ as $t \to -\infty$).
Recall that $z = q_0(0) = q_0(t_0 - t_0)$. Putting the two calculations together, we have
$$F^\perp(z) \cdot [q^s_1(t_0, t_0) - q^u_1(t_0, t_0)] = \Delta^s(t_0, t_0) - \Delta^u(t_0, t_0)$$
$$= -\int_{t_0}^{\infty} F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t)\, dt - \int_{-\infty}^{t_0} F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t)\, dt$$
$$= -\int_{-\infty}^{\infty} F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t)\, dt.$$
So we have calculated the order $\epsilon$ term of the distance between $W^s(\gamma_\epsilon)$ and $W^u(\gamma_\epsilon)$ on $\Sigma$.
6. An example of a Melnikov calculation (Dec. 9)
Theorem 6.1. Assume a Hamiltonian system
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = \begin{pmatrix} \partial H/\partial v \\ -\partial H/\partial y \end{pmatrix} = F(y, v),$$
with saddle $x_0$ and homoclinic orbit $q_0(t)$.
For the perturbed system
$$\begin{pmatrix} \dot y \\ \dot v \end{pmatrix} = F(y, v) + \epsilon\, G(y, v, t),$$
where for all $t$, $G(y, v, t + T) = G(y, v, t)$, let $\gamma_\epsilon$ denote the hyperbolic periodic orbit near $x_0$.
Let
$$M(t_0) = \int_{-\infty}^{\infty} F^\perp(q_0(t - t_0)) \cdot G(q_0(t - t_0), t)\, dt = \int_{-\infty}^{\infty} F^\perp(q_0(\tau)) \cdot G(q_0(\tau), \tau + t_0)\, d\tau.$$
This is known as the Melnikov integral.
Remark: $M(t)$ is periodic with period $T$.
Then
1. If $M(t_0) \neq 0$ for all $t_0$, then for $\epsilon$ small, $\gamma_\epsilon$ has no homoclinic orbits.
2. If $M(t_0) > 0$ for some $t_0$ and $M(t_1) < 0$ for some $t_1$, then for small $\epsilon$, $W^s(\gamma_\epsilon) \cap W^u(\gamma_\epsilon)$ contains more than $\gamma_\epsilon$.
3. If $M(t_0) = 0$ and $M'(t_0) \neq 0$ for some $t_0$, then for small $\epsilon$, $W^s(\gamma_\epsilon)$ intersects $W^u(\gamma_\epsilon)$ transversally, i.e. there is a transverse homoclinic orbit.
Example: [Guckenheimer and Holmes, p. 191] Consider
$$\dot y = v$$
$$\dot v = y - y^3 + \epsilon\big( \gamma \cos(\omega t) - \delta v \big).$$
For $\epsilon = 0$, the Hamiltonian is
$$H(y, v) = \frac{v^2}{2} - \frac{y^2}{2} + \frac{y^4}{4}.$$
There is a saddle at $(0, 0)$ with a homoclinic connection. Solving $H = 0$ we get
$$v = \pm y \sqrt{1 - \frac{y^2}{2}},$$
and we see that
$$q_0(t) = \big( \sqrt 2\, \operatorname{sech} t,\; -\sqrt 2\, \operatorname{sech} t \tanh t \big).$$
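This formula is easy to verify symbolically; the sympy sketch below (our check) confirms that $q_0$ solves the unperturbed system and lies on the level set $H = 0$:

    import sympy as sp

    t = sp.symbols('t')
    y0 = sp.sqrt(2) / sp.cosh(t)                    # sqrt(2) sech t
    v0 = -sp.sqrt(2) * sp.sinh(t) / sp.cosh(t)**2   # -sqrt(2) sech t tanh t

    e1 = sp.diff(y0, t) - v0                 # ydot = v
    e2 = sp.diff(v0, t) - (y0 - y0**3)       # vdot = y - y^3
    e3 = v0**2/2 - y0**2/2 + y0**4/4         # H along q0
    for e in (e1, e2, e3):
        # rewrite the hyperbolics in exponentials so simplify cancels everything
        print(sp.simplify(e.rewrite(sp.exp)))    # each -> 0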
The perturbation has 4 parameters: $\delta$ is the damping term, $\gamma$ is the periodic forcing, $\omega$ is the frequency, and $\epsilon$ is the total size of the perturbation. So let us fix $\gamma, \delta, \omega$.
The Melnikov integral is
$$M(t_0) = \int_{-\infty}^{\infty} F^\perp(q_0(t)) \cdot G(q_0(t), t + t_0)\, dt$$
$$= \int_{-\infty}^{\infty} \Big( -\big( \sqrt 2\, \operatorname{sech} t - (\sqrt 2\, \operatorname{sech} t)^3 \big),\; -\sqrt 2\, \operatorname{sech} t \tanh t \Big) \cdot \Big( 0,\; \gamma \cos(\omega(t + t_0)) + \delta \sqrt 2\, \operatorname{sech} t \tanh t \Big)\, dt$$
$$= -\sqrt 2\, \gamma \int_{-\infty}^{\infty} \operatorname{sech} t \tanh t \cos(\omega(t + t_0))\, dt - 2\delta \int_{-\infty}^{\infty} \operatorname{sech}^2 t \tanh^2 t\, dt$$
$$= \sqrt 2\, \pi \gamma \omega\, \operatorname{sech}\!\left( \frac{\pi\omega}{2} \right) \sin(\omega t_0) - \frac{4}{3}\delta$$
$$= -k_1 + k_2 \sin(\omega t_0),$$
where $k_1 = 4\delta/3$ and $k_2 = \sqrt 2\, \pi \gamma \omega\, \operatorname{sech}(\pi\omega/2)$.
If $\delta = 0$, then $M(t_0)$ does change sign transversally, so there are homoclinics. If $|k_2| < |k_1|$ then there are no homoclinics for small $\epsilon$, but if $|k_2| > |k_1|$, then there are homoclinics for $\epsilon$ sufficiently small.
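The closed form can be checked by direct numerical quadrature; in the sketch below (our code; the values of $\gamma$, $\delta$, $\omega$ are sample choices) the integral matches $-k_1 + k_2 \sin(\omega t_0)$:

    import numpy as np
    from scipy.integrate import quad

    gamma, delta, omega = 1.0, 0.2, 1.5    # sample parameter values

    def melnikov_integrand(t, t0):
        y = np.sqrt(2) / np.cosh(t)                  # sqrt(2) sech t
        v = -np.sqrt(2) * np.tanh(t) / np.cosh(t)    # -sqrt(2) sech t tanh t
        # F_perp . G with F_perp = (-(y - y^3), v) and
        # G = (0, gamma cos(omega(t + t0)) - delta v): only the second slot survives.
        return v * (gamma * np.cos(omega * (t + t0)) - delta * v)

    k1 = 4 * delta / 3
    k2 = np.sqrt(2) * np.pi * gamma * omega / np.cosh(np.pi * omega / 2)

    for t0 in (0.0, 1.0, 2.0):
        M, _ = quad(melnikov_integrand, -50, 50, args=(t0,), limit=200)
        print(f"t0 = {t0}: quad = {M:+.6f}, formula = {-k1 + k2 * np.sin(omega * t0):+.6f}")

With these sample values $k_2 \approx 1.25 > k_1 \approx 0.27$, so $M$ changes sign transversally and the transverse homoclinics survive.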
Note: As $\omega \to \infty$,
$$\operatorname{sech}\!\left( \frac{\pi\omega}{2} \right) = \frac{2}{e^{\pi\omega/2} + e^{-\pi\omega/2}} \to 0,$$
so even small damping will kill the homoclinics, where $\omega$ big implies short-period forcing.
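To quantify this, one can solve $|k_2| = |k_1|$ for the critical forcing-to-damping ratio; a short sketch (ours, derived from the formulas for $k_1, k_2$ above):

    import numpy as np

    # |k2| = |k1| gives sqrt(2) pi gamma omega sech(pi omega / 2) = 4 delta / 3,
    # i.e. gamma/delta = (2 sqrt(2) / (3 pi)) cosh(pi omega / 2) / omega.
    def critical_ratio(omega):
        return (2 * np.sqrt(2) / (3 * np.pi)) * np.cosh(np.pi * omega / 2) / omega

    for omega in (0.5, 1.0, 2.0, 4.0, 8.0):
        print(f"omega = {omega}: homoclinics need gamma/delta > {critical_ratio(omega):.3f}")

The threshold grows like $e^{\pi\omega/2}/\omega$: at high frequency, enormous forcing (relative to damping) is needed to keep the homoclinics.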
7. Averaging (Dec. 11)
For this section, look at
$$\dot x = \epsilon f(x, t, \epsilon)$$
where $x \in \mathbb{R}^n$, and for all $t$, $f(x, t + T, \epsilon) = f(x, t, \epsilon)$, for $\epsilon$ small.
What can we say about this? Almost nothing, because when $\epsilon = 0$, $\dot x = 0$. How can $\dot x = 0$ bifurcate? Every way!
We might ask: How much does it matter that $f$ depends on $t$? Let
$$\bar f(x) = \frac{1}{T} \int_0^T f(x, t, 0)\, dt$$
be the averaged vector field. Let
$$\tilde f(x, t, \epsilon) = f(x, t, \epsilon) - \bar f(x).$$
Theorem 7.1. If $f$ is $C^r$, then there is a $C^r$ change of coordinates
$$x = y + \epsilon\, w(y, t, \epsilon)$$
such that the system $\dot x = \epsilon f(x, t, \epsilon)$ becomes
$$\dot y = \epsilon \bar f(y) + \epsilon^2 f_1(y, t, \epsilon),$$
where for all $t$, $f_1(y, t + T, \epsilon) = f_1(y, t, \epsilon)$.
Remark: This is progress because the nonautonomous part is higher order.
Note: This is not the same as
$$\dot y = \epsilon \bar f(y) + \delta f_1(y, t, \epsilon)$$
where $\delta$ is arbitrarily small, since above $\delta$ is not arbitrarily small: $\delta = \epsilon^2$.
Moreover,
1. Let $\dot z = \epsilon \bar f(z)$. If $z(t), y(t)$ are solutions such that $|z(0) - y(0)| < O(\epsilon)$, then $|z(t) - y(t)| < O(\epsilon)$ on a time scale $t \sim 1/\epsilon$ (see the numerical sketch after this list).
2. If $z_0$ is a hyperbolic fixed point of $\dot z = \epsilon \bar f(z)$, then there is an $\epsilon_0 > 0$ such that for $0 < \epsilon < \epsilon_0$, there is a periodic orbit $y_\epsilon(t)$ for $\dot y = \epsilon \bar f(y) + \epsilon^2 f_1(y, t, \epsilon)$ with $y_\epsilon(t) = z_0 + O(\epsilon)$.
3. If $z^s(t)$ is in $W^s(z_0)$, $y^s(t)$ is in $W^s(y_\epsilon)$, and $|z^s(0) - y^s(0)| = O(\epsilon)$, then $|z^s(t) - y^s(t)| = O(\epsilon)$ for all $t \geq 0$.
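Here is the numerical sketch promised in item 1 (our own toy example: $f(x, t) = -x + \cos t$ with $T = 2\pi$, so $\bar f(x) = -x$), comparing the full and averaged systems on the time scale $1/\epsilon$:

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.05

    def full(t, x):        # xdot = eps f(x, t),   f(x, t) = -x + cos t
        return eps * (-x + np.cos(t))

    def averaged(t, z):    # zdot = eps fbar(z),   fbar(x) = -x
        return eps * (-z)

    t_end = 1.0 / eps      # the O(1/eps) time scale from item 1
    ts = np.linspace(0.0, t_end, 201)
    x = solve_ivp(full, (0.0, t_end), [1.0], t_eval=ts, rtol=1e-9).y[0]
    z = solve_ivp(averaged, (0.0, t_end), [1.0], t_eval=ts, rtol=1e-9).y[0]
    print(f"max |x - z| on [0, 1/eps] = {np.max(np.abs(x - z)):.4f}")  # O(eps)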
So we can push time dependence to higher order in $\epsilon$. We need to find $w$ in the change of coordinates. Since
$$x = y + \epsilon\, w(y, t, \epsilon),$$
$$\dot x = \dot y + \epsilon\, D_y w(y, t, \epsilon) \cdot \dot y + \epsilon \frac{\partial w}{\partial t}.$$
This gives us
$$[I + \epsilon D_y w]\, \dot y = \dot x - \epsilon \frac{\partial w}{\partial t},$$
$$[I + \epsilon D_y w]\, \dot y = \epsilon \bar f(x) + \epsilon \tilde f(x, t, \epsilon) - \epsilon \frac{\partial w}{\partial t},$$
where $\tilde f(x, t, \epsilon) = f(x, t, \epsilon) - \bar f(x)$. So
$$\dot y = [I + \epsilon D_y w]^{-1} \left( \epsilon \bar f(y + \epsilon w) + \epsilon \tilde f(y + \epsilon w, t, \epsilon) - \epsilon \frac{\partial w}{\partial t} \right)$$
$$\dot y = [I + \epsilon D_y w]^{-1} \left( \epsilon \bar f(y) + \epsilon \tilde f(y, t, \epsilon) - \epsilon \frac{\partial w}{\partial t} + O(\epsilon^2) \right),$$
assuming everything is at least $C^2$.
So take
$$\frac{\partial w}{\partial t} = \tilde f(y, t, \epsilon),$$
so
$$w(y, t, \epsilon) = \int \tilde f(y, t, \epsilon)\, dt.$$
Note:
$$[I + \epsilon D_y w]^{-1} = I + O(\epsilon).$$
Then
$$\dot y = \epsilon \bar f(y) + O(\epsilon^2) = \epsilon \bar f(y) + \epsilon^2 f_1(y, t, \epsilon).$$
Compare
$$\dot y = \epsilon \bar f(y) + \epsilon^2 f_1(y, t, \epsilon) \qquad \text{to} \qquad \dot z = \epsilon \bar f(z).$$
Let us prove the first part of the theorem:
Proof:
$$|z(t) - y(t)| = \Big| z(0) - y(0) + \epsilon \int_0^t \big( \bar f(z(s)) - \bar f(y(s)) \big)\, ds - \epsilon^2 \int_0^t f_1(y(s), s, \epsilon)\, ds \Big|,$$
where $|\bar f(z(s)) - \bar f(y(s))| \leq L\, |z(s) - y(s)|$.
So
$$|z(t) - y(t)| \leq |z(0) - y(0)| + \epsilon \int_0^t \big| \bar f(z(s)) - \bar f(y(s)) \big|\, ds + \epsilon^2 \int_0^t |f_1(y(s), s, \epsilon)|\, ds.$$
Now let $\zeta(t) = z(t) - y(t)$. Then
$$|\zeta(t)| \leq |\zeta(0)| + \epsilon L \int_0^t |\zeta(s)|\, ds + \epsilon^2 \int_0^t C\, ds,$$
where $L$ is the Lipschitz constant for $\bar f$ and $C$ is a bound for $f_1$ (on neighborhoods of $z(s), y(s)$, $0 \leq s \leq t$).
Remember Gronwall's inequality: if
$$v(t) \leq c(t) + \int_0^t u(s)\, v(s)\, ds,$$
then
$$v(t) \leq c(0)\, \exp\Big( \int_0^t u(s)\, ds \Big) + \int_0^t c'(s)\, \exp\Big( \int_s^t u(\tau)\, d\tau \Big)\, ds.$$
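As a sanity check (our sketch), take the extremal case $u \equiv L$ and $c(t) = c_0 + at$: the bound is attained by the solution of $v' = Lv + a$, $v(0) = c_0$:

    import numpy as np
    from scipy.integrate import solve_ivp

    L, a, c0 = 0.8, 0.3, 0.5

    # v(t) = c(t) + int_0^t L v(s) ds  is the integral form of v' = L v + a;
    # Gronwall's bound should then hold with equality.
    sol = solve_ivp(lambda t, v: L * v + a, (0.0, 3.0), [c0],
                    t_eval=np.linspace(0.0, 3.0, 7), rtol=1e-10)
    for t, v in zip(sol.t, sol.y[0]):
        bound = c0 * np.exp(L * t) + (a / L) * (np.exp(L * t) - 1.0)
        print(f"t = {t:.1f}: v = {v:.4f}, Gronwall bound = {bound:.4f}")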
Let $c(t) = |\zeta(0)| + \epsilon^2 C t$; then
$$|\zeta(t)| \leq |\zeta(0)|\, e^{\epsilon L t} + \epsilon^2 C \int_0^t e^{\epsilon L (t - s)}\, ds.$$
So
$$|\zeta(t)| \leq |\zeta(0)|\, e^{\epsilon L t} + \frac{\epsilon C}{L} e^{\epsilon L t} - \frac{\epsilon C}{L} \leq |\zeta(0)|\, e^{\epsilon L t} + \frac{\epsilon C}{L} e^{\epsilon L t},$$
so
$$|\zeta(t)| \leq \left( |\zeta(0)| + \frac{\epsilon C}{L} \right) e^{\epsilon L t}.$$
Suppose that $|\zeta(0)| < \epsilon k$ for some $k$. Then
$$|\zeta(t)| \leq \epsilon \left( k + \frac{C}{L} \right) e^{\epsilon L t},$$
so for $t < 1/\epsilon$,
$$|\zeta(t)| \leq \epsilon \left( k + \frac{C}{L} \right) e^{L}. \qquad \square$$
Why look at a system like this? One case may be where you have a center. If you take the right $T$, the time-$T$ map looks like the identity.
Index
α-limit set, 34
ω-limit set, 34
attractor, 35
attractor block, 35, 86
autonomous, 4
averaging, 115
best linear approximation, 8
bifurcation, 75
Cauchy, 31
Cauchy's Existence Theorem, 26
center manifold, 62, 82, 96
center of mass, 15
center stable manifold, 62
change of coordinates, 53, 74, 76, 98, 115, 116
change of variable, 89
change of variables, 8, 9, 11, 17, 19, 70, 86, 88-90
chaos, 105
codimension 1, 85
codimension 1 bifurcation, 85, 93
collision set, 7
Comparison Test, 27
complete metric space, 31
complex notation, 18
cone condition, 55
conjugate, 10
Conservation of momentum, 15
constant of motion, 16
continuity in parameters, 25
continuity with respect to initial conditions, 25, 64
contraction, 32, 53, 62
contraction mapping principle, 57
critical point, 7
decouple, 20
derivative, 8
differential equation, 7
differential equation associated with f, 4
Differentiation, 8
first return map, 65
fixed point, 7
Floquet multipliers, 66, 71, 98, 103
flow, 4
forward orbit, 7
glider flight, 13
global, 7, 44
global bifurcations, 77
Graph Transform Method, 52
gravity, 6
Gronwall's Inequality, 29, 117
group property, 4
Hamiltonian, 72, 101, 112-114
Hamiltonian system for H, 16
Hartman-Grobman-Sturmberg Theorem, 42
heteroclinic connection, 102
homoclinic orbit, 101
homoclinic point, 101
Hopf bifurcation, 98
Hopf Bifurcation Theorem, 86, 92, 94
hyperbolic, 42, 51, 62, 74
hyperbolic fixed point, 50, 52
immersed m-submanifold, 44
Implicit Function Theorem, 65, 77, 98
initial condition, 4, 9
initial value problem, 4
integral, 16
invariant set, 33
Inverse Function Theorem, 78, 79, 93
isolated invariant set, 33
isolating neighborhood, 33
Kepler Problem, 6
Kepler's Laws, 17
least period, 7
level set, 77
limit cycle, 86
linear space, 8
linearization, 37, 48
Lipschitz, 24, 29, 30, 55, 58, 59, 117
local, 7
local flow, 6
local stable set, 45
local unstable set, 45
manifold, 7
map, 50, 94
maximal, 25
maximal invariant set, 33
McGehee Collision Manifold, 22
measurable, 25
Melnikov integral, 113, 115
Melnikov method, 100
method of majorants, 27
nice vector fields, 7
nondegeneracy conditions, 82, 86
normal form, 88, 90, 98
orbit, 7
periodic forcing, 72
periodic orbit, 63
periodic point, 7
periodic point with least period T, 63
phase space, 7
pitchfork bifurcation, 98
Poincare Map, 65
Poincare section, 64, 70, 76
Poincare-Bendixon, 86
positively invariant set, 33
power series, 27
recurrence, 104
resonance, 44
rest point, 7
saddle node bifurcation, 82
semiflow, 6
Siegel and Moser, 27
singularity, 17, 20
sink, 35, 37, 51, 65
solution, 4
source, 42, 65
spiral sink, 86, 90
spiral source, 86, 90
stable manifold, 45
stable set, 50
Stable-Unstable Manifold Theorem, 100
Stable/Unstable Manifold Theorem for Maps, 51
stationary point, 7
strong stable manifold, 62, 82
strong unstable manifold, 62
tangent vector, 7
Taylor series, 78
Taylor's Remainder Theorem, 38
Taylor's Theorem, 53
time T map, 50, 69
time t map, 26
topologically equivalent, 13
torus, 21
transversality, 61, 66
transverse homoclinic orbit, 103, 114
transverse homoclinic point, 103
unit tangent bundle, 58
unstable manifold, 46
variational equation, 26, 71
Zharkovskii, 13