Vous êtes sur la page 1sur 158

Ch 7.

1: Introduction to Systems of First Order


Linear Equations
A system of simultaneous first order ordinary differential
equations has the general form
where each x
k
is a function of t. If each F
k
is a linear
function of x
1
, x
2
, , x
n
, then the system of equations is said
to be linear, otherwise it is nonlinear.
Systems of higher order differential equations can similarly
be defined.
) , , , (
) , , , (
) , , , (
2 1
2 1 2 2
2 1 1 1
n n n
n
n
x x x t F x
x x x t F x
x x x t F x

Example 1
The motion of a certain spring-mass system from Section 3.7
was described by the differential equation
This second order equation can be converted into a system of
first order equations by letting x
1
= u and x
2
= u'. Thus
or
0 ) ( ) ( 125 . 0 ) ( = +

+

t u t u t u
0 125 . 0
1 2 2
2 1
= + +

x x x
x x
2 1 2
2 1
125 . 0 x x x
x x
=

Nth Order ODEs and Linear 1


st
Order Systems
The method illustrated in previous example can be used to
transform an arbitrary nth order equation
into a system of n first order equations, first by defining
Then
( )
) 1 ( ) (
, , , , ,


=
n n
y y y y t F y
) 1 (
3 2 1
, , , ,

=

=

= =
n
n
y x y x y x y x
) , , , (
2 1
1
3 2
2 1
n n
n n
x x x t F x
x x
x x
x x

Solutions of First Order Systems


A system of simultaneous first order ordinary differential
equations has the general form
It has a solution on I: < t < if there exists n functions
that are differentiable on I and satisfy the system of
equations at all points t in I.
Initial conditions may also be prescribed to give an IVP:
). , , , (
) , , , (
2 1
2 1 1 1
n n n
n
x x x t F x
x x x t F x

) ( , ), ( ), (
2 2 1 1
t x t x t x
n n
= = =
0
0
0
2 0 2
0
1 0 1
) ( , , ) ( , ) (
n n
x t x x t x x t x = = =
Theorem 7.1.1
Suppose F
1
,, F
n
and F
1
/x
1
,, F
1
/x
n
,, F
n
/ x
1
,,
F
n
/x
n
, are continuous in the region R of t x
1
x
2
x
n
-space
defined by < t < ,
1
< x
1
<
1
, ,
n
< x
n
<
n
, and let the
point
be contained in R. Then in some interval (t
0
- h, t
0
+ h) there
exists a unique solution
that satisfies the IVP.
( )
0 0
2
0
1 0
, , , ,
n
x x x t
) ( , ), ( ), (
2 2 1 1
t x t x t x
n n
= = =
) , , , (
) , , , (
) , , , (
2 1
2 1 2 2
2 1 1 1
n n n
n
n
x x x t F x
x x x t F x
x x x t F x

=
=
=
Linear Systems
If each F
k
is a linear function of x
1
, x
2
, , x
n
, then the
system of equations has the general form
If each of the g
k
(t) is zero on I, then the system is
homogeneous, otherwise it is nonhomogeneous.
) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
2 2 1 1
2 2 2 22 1 21 2
1 1 2 12 1 11 1
t g x t p x t p x t p x
t g x t p x t p x t p x
t g x t p x t p x t p x
n n nn n n n
n n
n n
+ + + + =

+ + + + =

+ + + + =

Theorem 7.1.2
Suppose p
11
, p
12
,, p
nn
, g
1
,, g
n
are continuous on an
interval I: < t < with t
0
in I, and let
prescribe the initial conditions. Then there exists a unique
solution
that satisfies the IVP, and exists throughout I.
0 0
2
0
1
, , ,
n
x x x
) ( , ), ( ), (
2 2 1 1
t x t x t x
n n
= = =
) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
2 2 1 1
2 2 2 22 1 21 2
1 1 2 12 1 11 1
t g x t p x t p x t p x
t g x t p x t p x t p x
t g x t p x t p x t p x
n n nn n n n
n n
n n
+ + + + =
+ + + + =
+ + + + =

Ch 7.2: Review of Matrices


For theoretical and computation reasons, we review results
of matrix theory in this section and the next.
A matrix A is an m x n rectangular array of elements,
arranged in m rows and n columns, denoted
Some examples of 2 x 2 matrices are given below:
( )
|
|
|
|
|
.
|

\
|
= =
mn m m
n
n
j i
a a a
a a a
a a a
a

2 1
2 22 21
1 12 11
A
|
|
.
|

\
|
+

=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
i i
i
C B
7 6 5 4
2 3 1
,
4 2
3 1
,
4 3
2 1
A
Transpose
The transpose of A = (a
ij
) is A
T
= (a
ji
).
For example,
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
mn n n
m
m
T
mn m m
n
n
a a a
a a a
a a a
a a a
a a a
a a a

2 1
2 22 12
1 21 11
2 1
2 22 21
1 12 11
A A
|
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
6 3
5 2
4 1
6 5 4
3 2 1
,
4 2
3 1
4 3
2 1
T T
B B A A
Conjugate
The conjugate of A = (a
ij
) is A = (a
ij
).
For example,
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
mn m m
n
n
mn m m
n
n
a a a
a a a
a a a
a a a
a a a
a a a

2 1
2 22 21
1 12 11
2 1
2 22 21
1 12 11
A A
|
|
.
|

\
|
+

=
|
|
.
|

\
|

+
=
4 4 3
3 2 1
4 4 3
3 2 1
i
i
i
i
A A
Adjoint
The adjoint of A is A
T
, and is denoted by A
*
For example,
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
mn n n
m
m
mn m m
n
n
a a a
a a a
a a a
a a a
a a a
a a a

2 1
2 22 12
1 21 11
*
2 1
2 22 21
1 12 11
A A
|
|
.
|

\
|

+
=
|
|
.
|

\
|

+
=
4 3 2
4 3 1
4 4 3
3 2 1
*
i
i
i
i
A A
Square Matrices
A square matrix A has the same number of rows and
columns. That is, A is n x n. In this case, A is said to have
order n.
For example,
|
|
|
|
|
.
|

\
|
=
nn n n
n
n
a a a
a a a
a a a

2 1
2 22 21
1 12 11
A
|
|
|
.
|

\
|
=
|
|
.
|

\
|
=
9 8 7
6 5 4
3 2 1
,
4 3
2 1
B A
Vectors
A column vector x is an n x 1 matrix. For example,
A row vector x is a 1 x n matrix. For example,
Note here that y = x
T
, and that in general, if x is a column
vector x, then x
T
is a row vector.
|
|
|
.
|

\
|
=
3
2
1
x
( ) 3 2 1 = y
The Zero Matrix
The zero matrix is defined to be 0 = (0), whose dimensions
depend on the context. For example,
,
0 0
0 0
0 0
,
0 0 0
0 0 0
,
0 0
0 0
|
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|
= 0 0 0
Matrix Equality
Two matrices A = (a
ij
) and B = (b
ij
) are equal if a
ij
= b
ij
for
all i and j. For example,
B A B A =
|
|
.
|

\
|
=
|
|
.
|

\
|
=
4 3
2 1
,
4 3
2 1
Matrix Scalar Multiplication
The product of a matrix A = (a
ij
) and a constant k is defined
to be kA = (ka
ij
). For example,
|
|
.
|

\
|


=
|
|
.
|

\
|
=
30 25 20
15 10 5
5
6 5 4
3 2 1
A A
Matrix Addition and Subtraction
The sum of two m x n matrices A = (a
ij
) and B = (b
ij
) is
defined to be A + B = (a
ij
+ b
ij
). For example,
The difference of two m x n matrices A = (a
ij
) and B = (b
ij
)
is defined to be A - B = (a
ij
- b
ij
). For example,
|
|
.
|

\
|
= +
|
|
.
|

\
|
=
|
|
.
|

\
|
=
12 10
8 6
8 7
6 5
,
4 3
2 1
B A B A
|
|
.
|

\
|


=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
4 4
4 4
8 7
6 5
,
4 3
2 1
B A B A
Matrix Multiplication
The product of an m x n matrix A = (a
ij
) and an n x r
matrix B = (b
ij
) is defined to be the matrix C = (c
ij
), where
Examples (note AB does not necessarily equal BA):

=
=
n
k
kj ik ij
b a c
1
|
|
.
|

\
|
=
|
|
.
|

\
|
+ + +
+ + +
=
|
|
|
.
|

\
|

=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|
+ +
+ +
=
|
|
.
|

\
|
=
|
|
.
|

\
|
+ +
+ +
=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
4 17
1 5
6 10 0 0 5 12
3 4 0 0 2 3
1 0
2 1
0 3
,
6 5 4
3 2 1
20 14
14 10
16 4 12 2
12 2 9 1
25 11
11 5
16 9 8 3
8 3 4 1
4 2
3 1
,
4 3
2 1
CD D C
BA
AB B A
Example 1: Matrix Multiplication
To illustrate matrix multiplication and show that it is not
commutative, consider the following matrices:
From the definition of matrix multiplication we have:
|
|
|
.
|

\
|


=
|
|
|
.
|

\
|

=
1 1 2
0 1 1
1 1 2
,
1 1 2
1 2 0
1 2 1
B A
B
B
A BA
A

|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
+ + + +
+
+
=
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
+ + +
+
+ + +
=
4 5 4
2 4 1
0 3 0
1 1 2 1 2 4 2 2
1 1 2 2 1
1 1 2 1 2 4 2 2
1 0 7
1 1 0
0 2 2
1 2 1 1 2 2 1 4
1 1 2 2 2
1 1 1 2 1 2 2 2
Vector Multiplication
The dot product of two n x 1 vectors x & y is defined as
The inner product of two n x 1 vectors x & y is defined as
Example:

=
=
n
k
j i
T
y x
1
y x
( )

=
= =
n
k
j i
T
y x ,
1
y x y x
( ) i i i i ,
i i i i
i
i
i
T
T
21 18 ) 5 5 )( 3 ( ) 3 2 )( 2 ( ) 1 )( 1 (
9 12 ) 5 5 )( 3 ( ) 3 2 )( 2 ( ) 1 )( 1 (
5 5
3 2
1
,
3
2
1
+ = + + + = =
+ = + + + =
|
|
|
.
|

\
|
+

=
|
|
|
.
|

\
|
=
y x y x
y x y x
Vector Length
The length of an n x 1 vector x is defined as
Note here that we have used the fact that if x = a + bi, then
Example:
( )
2 / 1
1
2
2 / 1
1
2 / 1
| |
(

=
(

= =

= =
n
k
k
n
k
k k
x x x ,x x x
( )
( ) 30 16 9 4 1
) 4 3 )( 4 3 ( ) 2 )( 2 ( ) 1 )( 1 (
4 3
2
1
2 / 1
= + + + =
+ + + = =
|
|
|
.
|

\
|
+
= i i ,
i
x x x x
( )( )
2
2 2
x b a bi a bi a x x = + = + =
Orthogonality
Two n x 1 vectors x & y are orthogonal if (x,y) = 0.
Example:
( ) 0 ) 1 )( 3 ( ) 4 )( 2 ( ) 11 )( 1 (
1
4
11
3
2
1
= + + =
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
= y x y x ,
Identity Matrix
The multiplicative identity matrix I is an n x n matrix
given by
For any square matrix A, it follows that AI = IA = A.
The dimensions of I depend on the context. For example,
|
|
|
|
|
.
|

\
|
=
1 0 0
0 1 0
0 0 1

I
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|
|
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|
=
9 8 7
6 5 4
3 2 1
9 8 7
6 5 4
3 2 1
1 0 0
0 1 0
0 0 1
,
4 3
2 1
1 0
0 1
4 3
2 1
IB AI
Inverse Matrix
A square matrix A is nonsingular, or invertible, if there
exists a matrix B such that that AB = BA = I. Otherwise A
is singular.
The matrix B, if it exists, is unique and is denoted by A
-1
and is called the inverse of A.
It turns out that A
-1
exists iff detA 0, and A
-1
can be found
using row reduction (also called Gaussian elimination) on
the augmented matrix (A|I), see example on next slide.
The three elementary row operations:
Interchange two rows.
Multiply a row by a nonzero scalar.
Add a multiple of one row to another row.
Example 2: Finding the Inverse of a Matrix (1 of 2)
Use row reduction to find the inverse of the matrix A below,
if it exists.
Solution: If possible, use elementary row operations to
reduce (A|I),
such that the left side is the identity matrix, for then the
right side will be A
-1
. (See next slide.)
|
|
|
.
|

\
|


=
3 2 2
2 1 3
1 1 1
A
( ) ,
1 0 0 3 2 2
0 1 0 2 1 3
0 0 1 1 1 1
|
|
|
.
|

\
|


= I A
Example 2: Finding the Inverse of a Matrix (2 of 2)
Thus
( )
|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|


=
5 / 1 5 / 2 5 / 4 1 0 0
2 / 1 2 / 1 2 / 1 0 1 0
10 / 3 10 / 1 10 / 7 0 0 1
1 2 4 5 0 0
0 2 / 1 2 / 3 2 / 5 1 0
0 2 / 1 2 / 1 2 / 3 0 1
1 2 4 5 0 0
0 2 / 1 2 / 3 2 / 5 1 0
0 2 / 1 2 / 1 2 / 3 0 1
1 0 2 5 4 0
0 2 / 1 2 / 3 2 / 5 1 0
0 0 1 1 1 1
1 0 2 5 4 0
0 1 3 5 2 0
0 0 1 1 1 1
1 0 0 3 2 2
0 1 0 2 1 3
0 0 1 1 1 1
I A
|
|
|
.
|

\
|

5 / 1 5 / 2 5 / 4
2 / 1 2 / 1 2 / 1
10 / 3 10 / 1 10 / 7
1
A
Matrix Functions
The elements of a matrix can be functions of a real variable.
In this case, we write
Such a matrix is continuous at a point, or on an interval
(a, b), if each element is continuous there. Similarly with
differentiation and integration:
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
) ( ) ( ) (
) ( ) ( ) (
) ( ) ( ) (
) ( ,
) (
) (
) (
) (
2 1
2 22 21
1 12 11
2
1
t a t a t a
t a t a t a
t a t a t a
t
t x
t x
t x
t
mn m m
n
n
m

A x
|
.
|

\
|
=
|
|
.
|

\
|
=

b
a
ij
b
a
ij
dt t a dt t
dt
da
dt
d
) ( ) ( , A
A
Example & Differentiation Rules
Example:
Many of the rules from calculus apply in this setting. For
example:
( )
( )
( )
|
.
|

\
|
+
|
.
|

\
|
=
+ =
+
=
dt
d
dt
d
dt
d
dt
d
dt
d
dt
d
dt
d
dt
d
B
A B
A AB
B A B A
C
A
C
CA
matrix constant a is where ,
|
|
.
|

\
|

=
|
|
.
|

\
|

=
|
|
.
|

\
|
=

4 1
0
) (
,
0 sin
cos 6
4 cos
sin 3
) (
3
0
2
dt t
t
t t
dt
d
t
t t
t
A
A
A
Ch 7.3: Systems of Linear Equations, Linear
Independence, Eigenvalues
A system of n linear equations in n variables,
can be expressed as a matrix equation Ax = b:
If b = 0, then system is homogeneous; otherwise it is
nonhomogeneous.
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
|
|
|
|
|
.
|

\
|
n n n n n n
n
n
b
b
b
x
x
x
a a a
a a a
a a a

2
1
2
1
, 2 , 1 ,
, 2 2 , 2 1 , 2
, 1 2 , 1 1 , 1
,
, 2 2 , 1 1 ,
2 , 2 2 2 , 2 1 1 , 2
1 , 1 2 2 , 1 1 1 , 1
n n n n n n
n n
n n
b x a x a x a
b x a x a x a
b x a x a x a
= + + +
= + + +
= + + +

Nonsingular Case
If the coefficient matrix A is nonsingular, then it is
invertible and we can solve Ax = b as follows:
This solution is therefore unique. Also, if b = 0, it follows
that the unique solution to Ax = 0 is x = A
-1
0 = 0.
Thus if A is nonsingular, then the only solution to Ax = 0 is
the trivial solution x = 0.
b A x b A Ix b A Ax A b Ax
1 1 1 1
= = = =
Example 1: Nonsingular Case (1 of 3)
From a previous example, we know that the matrix A below
is nonsingular with inverse as given.
Using the definition of matrix multiplication, it follows that
the only solution of Ax = 0 is x = 0:
|
|
|
.
|

\
|



=
|
|
|
.
|

\
|



=

4 / 1 4 / 3 4 / 1
4 / 1 4 / 7 4 / 5
4 / 1 4 / 5 4 / 3
,
1 1 2
2 1 1
3 2 1
1
A A
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|
|
|
|
.
|

\
|



= =

0
0
0
0
0
0
4 / 1 4 / 3 4 / 1
4 / 1 4 / 7 4 / 5
4 / 1 4 / 5 4 / 3
1
0 A x
Example 1: Nonsingular Case (2 of 3)
Now lets solve the nonhomogeneous linear system Ax = b
below using A
-1
:
This system of equations can be written as Ax = b, where
Then
0 8 3 4
2 3 0 1
2 2 0
3 2 1
3 2 1
3 2 1
= +
= + +
= + +
x x x
x x x
x x x
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|

|
|
|
.
|

\
|



= =

1
1
2
4
5
7
4 / 1 4 / 3 4 / 1
4 / 1 4 / 7 4 / 5
4 / 1 4 / 5 4 / 3
1
b A x
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|



=
4
5
7
, ,
1 1 2
2 1 1
3 2 1
3
2
1
b x A
x
x
x
Example 1: Nonsingular Case (3 of 3)
Alternatively, we could solve the nonhomogeneous linear
system Ax = b below using row reduction.
To do so, form the augmented matrix (A|b) and reduce,
using elementary row operations.
( )
|
|
|
.
|

\
|
=
=
=
= +

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|



=
1
1
2
1
2
7 3 2
1 1 0 0
2 1 1 0
7 3 2 1
4 4 0 0
2 1 1 0
7 3 2 1
10 7 3 0
2 1 1 0
7 3 2 1
10 7 3 0
2 1 1 0
7 3 2 1
4 1 1 2
5 2 1 1
7 3 2 1
3
3 2
3 2 1
x
b A
x
x x
x x x
4 2
5 2
7 3 2
3 2 1
3 2 1
3 2 1
=
= +
= +
x x x
x x x
x x x
Singular Case
If the coefficient matrix A is singular, then A
-1
does not
exist, and either a solution to Ax = b does not exist, or there
is more than one solution (not unique).
Further, the homogeneous system Ax = 0 has more than one
solution. That is, in addition to the trivial solution x = 0,
there are infinitely many nontrivial solutions.
The nonhomogeneous case Ax = b has no solution unless
(b, y) = 0, for all vectors y satisfying A
*
y = 0, where A
*
is
the adjoint of A.
In this case, Ax = b has solutions (infinitely many), each of
the form x = x
(0)
+ , where x
(0)
is a particular solution of
Ax = b, and is any solution of Ax = 0.
Example 2: Singular Case (1 of 2)
Solve the nonhomogeneous linear system Ax = b below using row
reduction. Observe that the coefficients are nearly the same as in the
previous example
We will form the augmented matrix (A|b) and use some of the steps in
Example 1 to transform the matrix more quickly
( )
0 3
3 0
3 2
3 0 0 0
1 1 0
3 2 1
3 1 2
2 1 1
3 2 1
3 2 1
3 2 1
2 1 3 2
1 3 2 1
3 2 1
2 1
1
3
2
1
= + +
+ + =
=
= +

|
|
|
.
|

\
|
+ +

|
|
|
.
|

\
|



=
b b b
b b b
b b x x
b x x x
b b b
b b
b
b
b
b
b A
3 3 2 1
2 3 2 1
1 3 2 1
3 2
2
3 2
b x x x
b x x x
b x x x
= +
= +
= +
Example 2: Singular Case (2 of 2)
From the previous slide, if , there is no solution
to the system of equations
Requiring that , assume, for example, that
Then the reduced augmented matrix (A|b) becomes:
It can be shown that the second term in x is a solution of the
nonhomogeneous equation and that the first term is the most
general solution of the homogeneous equation, letting ,
where is arbitrary
|
|
|
.
|

\
|

+
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|


=
=
=
= +

|
|
|
.
|

\
|
+ +

0
3
4
1
1
1
3
4
0 0
3
2 3 2
3 0 0 0
1 1 0
3 2 1
3
3
3
3
3 2
3 2 1
3 2 1
2 1
1
x
x
x
x
x x
x x x
b b b
b b
b
x x
0 3
3 2 1
+ + b b b
5 , 1 , 2
3 2 1
= = = b b b
0 3
3 2 1
= + + b b b
3 3 2 1
2 3 2 1
1 3 2 1
3 2
2
3 2
b x x x
b x x x
b x x x
= +
= +
= +
=
3
x
Linear Dependence and Independence
A set of vectors x
(1)
, x
(2)
,, x
(n)
is linearly dependent if
there exists scalars c
1
, c
2
,, c
n
, not all zero, such that
If the only solution of
is c
1
= c
2
= = c
n
= 0, then x
(1)
, x
(2)
,, x
(n)
is linearly
independent.
0 x x x = + + +
) ( ) 2 (
2
) 1 (
1
n
n
c c c
0 x x x = + + +
) ( ) 2 (
2
) 1 (
1
n
n
c c c
Example 3: Linear Dependence (1 of 2)
Determine whether the following vectors are linear
dependent or linearly independent.
We need to solve
or
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|
|
|
|
.
|

\
|

|
|
|
.
|

\
|
=
|
|
|
.
|

\
|

+
|
|
|
.
|

\
|
+
|
|
|
.
|

\
|
0
0
0
11 3 1
1 1 2
4 2 1
0
0
0
11
1
4
3
1
2
1
2
1
3
2
1
2 1
c
c
c
c c c
0 x x x = + +
) 3 (
3
) 2 (
2
) 1 (
1
c c c
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|

=
11
1
4
,
3
1
2
,
1
2
1
) 3 ( ) 2 ( ) 1 (
x x x
Example 3: Linear Dependence (2 of 2)
We can reduce the augmented matrix (A|b), as before.
So, the vectors are linearly dependent:
Alternatively, we could show that the following determinant is zero:
( )
number any be can where
1
3
2
0 0
0 3
0 4 2
0 0 0 0
0 3 1 0
0 4 2 1
0 15 5 0
0 9 3 0
0 4 2 1
0 11 3 1
0 1 1 2
0 4 2 1
3 3 3 2
3 2 1
c c c c
c c c
|
|
|
.
|

\
|

=
=
=
= +

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

=
c
b A
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|

=
11
1
4
,
3
1
2
,
1
2
1
) 3 ( ) 2 ( ) 1 (
x x x
0
11 3 1
1 1 2
4 2 1
) det( =

=
ij
x
0 x x x = =
) 3 ( ) 2 ( ) 1 (
3
3 2 , 1 if c
Linear Independence and Invertibility
Consider the previous two examples:
The first matrix was known to be nonsingular, and its column vectors
were linearly independent.
The second matrix was known to be singular, and its column vectors
were linearly dependent.
This is true in general: the columns (or rows) of A are linearly
independent iff A is nonsingular iff A
-1
exists.
Also, A is nonsingular iff detA 0, hence columns (or rows)
of A are linearly independent iff detA 0.
Further, if A = BC, then det(C) = det(A)det(B). Thus if the
columns (or rows) of A and B are linearly independent, then
the columns (or rows) of C are also.
Linear Dependence & Vector Functions
Now consider vector functions x
(1)
(t), x
(2)
(t),, x
(n)
(t), where
As before, x
(1)
(t), x
(2)
(t),, x
(n)
(t) is linearly dependent on I if
there exists scalars c
1
, c
2
,, c
n
, not all zero, such that
Otherwise x
(1)
(t), x
(2)
(t),, x
(n)
(t) is linearly independent on I
See text for more discussion on this.
( )
( ) , , , , 2 , 1 ,
) (
) (
) (
) (
) (
) (
2
) (
1
= =
|
|
|
|
|
.
|

\
|
= I t n k
t x
t x
t x
t
k
m
k
k
k

x
I t t c t c t c
n
n
= + + + all for , ) ( ) ( ) (
) ( ) 2 (
2
) 1 (
1
0 x x x
Eigenvalues and Eigenvectors
The eqn. Ax = y can be viewed as a linear transformation
that maps (or transforms) x into a new vector y.
Nonzero vectors x that transform into multiples of
themselves are important in many applications.
Thus we solve Ax = x or equivalently, (A-I)x = 0.
This equation has a nonzero solution if we choose such
that det(A-I) = 0.
Such values of are called eigenvalues of A, and the
nonzero solutions x are called eigenvectors.
Example 4: Eigenvalues (1 of 3)
Find the eigenvalues and eigenvectors of the matrix A.
Solution: Choose such that det(A-I) = 0, as follows.
|
|
.
|

\
|

=
2 4
1 3
A
( )
( )( ) ( )( )
( )( )
1 , 2
1 2 2
4 1 2 3
2 4
1 3
det
1 0
0 1
2 4
1 3
det det
2
= =
+ = =
=
|
|
.
|

\
|


=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|

I A
Example 4: First Eigenvector (2 of 3)
To find the eigenvectors of the matrix A, we need to solve
(A-I)x = 0 for = 2 and = -1.
Eigenvector for = 2: Solve
and this implies that . So
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|


=
0
0
4 4
1 1
0
0
2 2 4
1 2 3
2
1
2
1
x
x
x
x
0 x I A
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
1
1
choose arbitrary ,
1
1
) 1 (
2
2 ) 1 (
x x c c
x
x
2 1
x x =
Example 4: Second Eigenvector (3 of 3)
Eigenvector for = -1: Solve
and this implies that . So
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|
+
+
=
0
0
1 4
1 4
0
0
1 2 4
1 1 3
2
1
2
1
x
x
x
x
0 x I A
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
4
1
choose arbitrary ,
4
1
4
) 2 (
1
1 ) 2 (
x x c c
x
x
1 2
4x x =
Normalized Eigenvectors
From the previous example, we see that eigenvectors are
determined up to a nonzero multiplicative constant.
If this constant is specified in some particular way, then the
eigenvector is said to be normalized.
For example, eigenvectors are sometimes normalized by
choosing the constant so that ||x|| = (x, x)

= 1.
Algebraic and Geometric Multiplicity
In finding the eigenvalues of an n x n matrix A, we solve
det(A-I) = 0.
Since this involves finding the determinant of an n x n
matrix, the problem reduces to finding roots of an nth
degree polynomial.
Denote these roots, or eigenvalues, by
1
,
2
, ,
n
.
If an eigenvalue is repeated m times, then its algebraic
multiplicity is m.
Each eigenvalue has at least one eigenvector, and a
eigenvalue of algebraic multiplicity m may have q linearly
independent eigevectors, 1 q m, and q is called the
geometric multiplicity of the eigenvalue.
Eigenvectors and Linear Independence
If an eigenvalue has algebraic multiplicity 1, then it is said
to be simple, and the geometric multiplicity is 1 also.
If each eigenvalue of an n x n matrix A is simple, then A
has n distinct eigenvalues. It can be shown that the n
eigenvectors corresponding to these eigenvalues are linearly
independent.
If an eigenvalue has one or more repeated eigenvalues, then
there may be fewer than n linearly independent eigenvectors
since for each repeated eigenvalue, we may have q < m.
This may lead to complications in solving systems of
differential equations.
Example 5: Eigenvalues (1 of 5)
Find the eigenvalues and eigenvectors of the matrix A.
Solution: Choose such that det(A-I) = 0, as follows.
|
|
|
.
|

\
|
=
0 1 1
1 0 1
1 1 0
A
( )
1 , 1 , 2
) 1 )( 2 (
2 3
1 1
1 1
1 1
det det
2 2 1
2
3
= = =
+ =
+ + =
|
|
|
.
|

\
|

I A
Example 5: First Eigenvector (2 of 5)
Eigenvector for = 2: Solve (A-I)x = 0, as follows.
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|
=
=
=
=

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

|
|
|
.
|

\
|

1
1
1
choose arbitrary ,
1
1
1
0 0
0 1 1
0 1 1
0 0 0 0
0 1 1 0
0 1 0 1
0 0 0 0
0 1 1 0
0 2 1 1
0 3 3 0
0 3 3 0
0 2 1 1
0 1 1 2
0 1 2 1
0 2 1 1
0 2 1 1
0 1 2 1
0 1 1 2
) 1 (
3
3
3
) 1 (
3
3 2
3 1
x x c c
x
x
x
x
x x
x x
Example 5: 2
nd
and 3
rd
Eigenvectors (3 of 5)
Eigenvector for = -1: Solve (A-I)x = 0, as follows.
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|

+
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|

=
=
=
= + +

|
|
|
.
|

\
|

|
|
|
.
|

\
|
1
1
0
,
1
0
1
choose
arbitrary , where ,
1
0
1
0
1
1
0 0
0 0
0 1 1 1
0 0 0 0
0 0 0 0
0 1 1 1
0 1 1 1
0 1 1 1
0 1 1 1
) 3 ( ) 2 (
3 2 3 2
3
2
3 2
) 2 (
3
2
3 2 1
x x
x x x x x
x
x
x x
x
x
x x x
Example 5: Eigenvectors of A (4 of 5)
Thus three eigenvectors of A are
where x
(2)
, x
(3)
correspond to the double eigenvalue = - 1.
It can be shown that x
(1)
, x
(2)
, x
(3)
are linearly independent.
Hence A is a 3 x 3 symmetric matrix (A = A
T
) with 3 real
eigenvalues and 3 linearly independent eigenvectors.
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
=
1
1
0
,
1
0
1
,
1
1
1
) 3 ( ) 2 ( ) 1 (
x x x
|
|
|
.
|

\
|
=
0 1 1
1 0 1
1 1 0
A
Example 5: Eigenvectors of A (5 of 5)
Note that we could have we had chosen
Then the eigenvectors are orthogonal, since
Thus A is a 3 x 3 symmetric matrix with 3 real eigenvalues
and 3 linearly independent orthogonal eigenvectors.
|
|
|
.
|

\
|
=
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
=
1
2
1
,
1
0
1
,
1
1
1
) 3 ( ) 2 ( ) 1 (
x x x
( ) ( ) ( ) 0 , , 0 , , 0 ,
) 3 ( ) 2 ( ) 3 ( ) 1 ( ) 2 ( ) 1 (
= = = x x x x x x
Hermitian Matrices
A self-adjoint, or Hermitian matrix, satisfies A = A
*
,
where we recall that A
*
= A
T
.
Thus for a Hermitian matrix, a
ij
= a
ji
.
Note that if A has real entries and is symmetric (see last
example), then A is Hermitian.
An n x n Hermitian matrix A has the following properties:
All eigenvalues of A are real.
There exists a full set of n linearly independent eigenvectors of A.
If x
(1)
and x
(2)
are eigenvectors that correspond to different
eigenvalues of A, then x
(1)
and x
(2)
are orthogonal.
Corresponding to an eigenvalue of algebraic multiplicity m, it is
possible to choose m mutually orthogonal eigenvectors, and hence A
has a full set of n linearly independent orthogonal eigenvectors.
Ch 7.4: Basic Theory of Systems of First Order
Linear Equations
.
The general theory of a system of n first order linear equations
parallels that of a single nth order linear equation.
This system can be written as x' = P(t)x + g(t), where
) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
2 2 1 1
2 2 2 22 1 21 2
1 1 2 12 1 11 1
t g x t p x t p x t p x
t g x t p x t p x t p x
t g x t p x t p x t p x
n n nn n n n
n n
n n
+ + + + =

+ + + + =

+ + + + =

|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
) ( ) ( ) (
) ( ) ( ) (
) ( ) ( ) (
) ( ,
) (
) (
) (
) ( ,
) (
) (
) (
) (
2 1
2 22 21
1 12 11
2
1
2
1
t p t p t p
t p t p t p
t p t p t p
t
t g
t g
t g
t
t x
t x
t x
t
nn n n
n
n
n n


P g x
Vector Solutions of an ODE System
A vector x = (t) is a solution of x' = P(t)x + g(t) if the
components of x,
satisfy the system of equations on I: < t < .
For comparison, recall that x' = P(t)x + g(t) represents our
system of equations
Assuming P and g continuous on I, such a solution exists by
Theorem 7.1.2.
), ( , ), ( ), (
2 2 1 1
t x t x t x
n n
= = =
) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
2 2 1 1
2 2 2 22 1 21 2
1 1 2 12 1 11 1
t g x t p x t p x t p x
t g x t p x t p x t p x
t g x t p x t p x t p x
n n nn n n n
n n
n n
+ + + + =

+ + + + =

+ + + + =

Homogeneous Case; Vector Function Notation


As in Chapters 3 and 4, we first examine the general
homogeneous equation x' = P(t)x.
Also, the following notation for the vector functions
x
(1)
, x
(2)
,, x
(k)
, will be used:


,
) (
) (
) (
) ( , ,
) (
) (
) (
) ( ,
) (
) (
) (
) (
2
1
) (
2
22
12
) 2 (
1
21
11
) 1 (
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
t x
t x
t x
t
t x
t x
t x
t
t x
t x
t x
t
nn
n
n
k
n n
x x x
Theorem 7.4.1
If the vector functions x
(1)
and x
(2)
are solutions of the system
x' = P(t)x, then the linear combination c
1
x
(1)
+ c
2
x
(2)
is also a
solution for any constants c
1
and c
2
.
Note: By repeatedly applying the result of this theorem, it
can be seen that every finite linear combination
of solutions x
(1)
, x
(2)
,, x
(k)
is itself a solution to x' = P(t)x.
) ( ) ( ) (
) ( ) 2 (
2
) 1 (
1
t c t c t c
k
k
x x x x + + + =
Theorem 7.4.2
If x
(1)
, x
(2)
,, x
(n)
are linearly independent solutions of the
system x' = P(t)x for each point in I: < t < , then each
solution x = (t) can be expressed uniquely in the form
If solutions x
(1)
,, x
(n)
are linearly independent for each
point in I: < t < , then they are fundamental solutions
on I, and the general solution is given by
) ( ) ( ) (
) ( ) 2 (
2
) 1 (
1
t c t c t c
n
n
x x x x + + + =
) ( ) ( ) (
) ( ) 2 (
2
) 1 (
1
t c t c t c
n
n
x x x x + + + =
The Wronskian and Linear Independence
The proof of Thm 7.4.2 uses the fact that if x
(1)
, x
(2)
,, x
(n)
are linearly independent on I, then detX(t) 0 on I, where
The Wronskian of x
(1)
,, x
(n)
is defined as
W[x
(1)
,, x
(n)
](t) = detX(t).
It follows that W[x
(1)
,, x
(n)
](t) 0 on I iff x
(1)
,, x
(n)
are
linearly independent for each point in I.
,
) ( ) (
) ( ) (
) (
1
1 11
|
|
|
.
|

\
|
=
t x t x
t x t x
t
nn n
n

X
Theorem 7.4.3
If x
(1)
, x
(2)
,, x
(n)
are solutions of the system x' = P(t)x on
I: < t < , then the Wronskian W[x
(1)
,, x
(n)
](t) is either
identically zero on I or else is never zero on I.
This result relies on Abels formula for the Wronskian
where c is an arbitrary constant (Refer to Section 3.2)
This result enables us to determine whether a given set of
solutions x
(1)
, x
(2)
,, x
(n)
are fundamental solutions by
evaluating W[x
(1)
,, x
(n)
](t) at any point t in < t < .

= + + + =
+ + + dt t p t p t p
nn
nn
ce t W p p p
dt
dW )] ( ) ( ) ( [
22 11
22 11
) ( ) (

Theorem 7.4.4
Let
Let x
(1)
, x
(2)
,, x
(n)
be solutions of the system x' = P(t)x,
< t < , that satisfy the initial conditions
respectively, where t
0
is any point in < t < . Then
x
(1)
, x
(2)
,, x
(n)
are form a fundamental set of solutions of
x' = P(t)x.
|
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
|
.
|

\
|
=
1
0
0
0
, ,
0
0
1
0
,
0
0
0
1
) ( ) 2 ( ) 1 (


n
e e e
, ) ( , , ) (
) (
0
) ( ) 1 (
0
) 1 ( n n
t t e x e x = =
Ch 7.5: Homogeneous Linear Systems with
Constant Coefficients
We consider here a homogeneous system of n first order linear
equations with constant, real coefficients:
This system can be written as x' = Ax, where
n nn n n n
n n
n n
x a x a x a x
x a x a x a x
x a x a x a x
+ + + =

+ + + =

+ + + =

2 2 1 1
2 2 22 1 21 2
1 2 12 1 11 1
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
nn n n
n
n
m
a a a
a a a
a a a
t x
t x
t x
t

2 1
2 22 21
1 12 11
2
1
,
) (
) (
) (
) ( A x
Equilibrium Solutions
Note that if n = 1, then the system reduces to
Recall that x = 0 is the only equilibrium solution if a 0.
Further, x = 0 is an asymptotically stable solution if a < 0,
since other solutions approach x = 0 in this case.
Also, x = 0 is an unstable solution if a > 0, since other
solutions depart from x = 0 in this case.
For n > 1, equilibrium solutions are similarly found by
solving Ax = 0. We assume detA 0, so that x = 0 is the
only solution. Determining whether x = 0 is asymptotically
stable or unstable is an important question here as well.
at
e t x ax x = =

) (
Phase Plane
When n = 2, then the system reduces to
This case can be visualized in the x
1
x
2
-plane, which is called
the phase plane.
In the phase plane, a direction field can be obtained by
evaluating Ax at many points and plotting the resulting
vectors, which will be tangent to solution vectors.
A plot that shows representative solution trajectories is
called a phase portrait.
Examples of phase planes, directions fields and phase
portraits will be given later in this section.
2 22 1 21 2
2 12 1 11 1
x a x a x
x a x a x
+ =

+ =

Solving Homogeneous System


To construct a general solution to x' = Ax, assume a solution
of the form x = e
rt
, where the exponent r and the constant
vector are to be determined.
Substituting x = e
rt
into x' = Ax, we obtain
Thus to solve the homogeneous system of differential
equations x' = Ax, we must find the eigenvalues and
eigenvectors of A.
Therefore x = e
rt
is a solution of x' = Ax provided that r is
an eigenvalue and is an eigenvector of the coefficient
matrix A.
( ) 0 I A A A = = = r r e e r
rt rt
Example 1: Direction Field (1 of 9)
Consider the homogeneous equation x' = Ax below.
A direction field for this system is given below.
Substituting x = e
rt
in for x, and rewriting system as
(A-rI) = 0, we obtain
x x
|
|
.
|

\
|
=

1 4
1 1
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

0
0
1 4
1 1
1
1

r
r
Example 1: Eigenvalues (2 of 9)
Our solution has the form x = e
rt
, where r and are found
by solving
Recalling that this is an eigenvalue problem, we determine r
by solving det(A-rI) = 0:
Thus r
1
= 3 and r
2
= -1.
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

0
0
1 4
1 1
1
1

r
r
) 1 )( 3 ( 3 2 4 ) 1 (
1 4
1 1
2 2
+ = = =

r r r r r
r
r
Example 1: First Eigenvector (3 of 9)
Eigenvector for r
1
= 3: Solve
by row reducing the augmented matrix:
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

=
0
0
2 4
1 2
0
0
3 1 4
1 3 1
2
1
2
1

0 I A r
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|
=
=
=

|
|
.
|

\
|

|
|
.
|

\
|

|
|
.
|

\
|

2
1
choose arbitrary ,
1
2 / 1 2 / 1
0 0
0 2 / 1 1
0 0 0
0 2 / 1 1
0 2 4
0 2 / 1 1
0 2 4
0 1 2
) 1 (
2
2 ) 1 (
2
2 1
c c



Example 1: Second Eigenvector (4 of 9)
Eigenvector for r
2
= -1: Solve
by row reducing the augmented matrix:
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|
+
+
=
0
0
2 4
1 2
0
0
1 1 4
1 1 1
2
1
2
1

0 I A r
|
|
.
|

\
|

=
|
|
.
|

\
|

=
|
|
.
|

\
|

=
=
= +

|
|
.
|

\
|

|
|
.
|

\
|

|
|
.
|

\
|
2
1
choose arbitrary ,
1
2 / 1 2 / 1
0 0
0 2 / 1 1
0 0 0
0 2 / 1 1
0 2 4
0 2 / 1 1
0 2 4
0 1 2
) 2 (
2
2 ) 2 (
2
2 1
c c



Example 1: General Solution (5 of 9)
The corresponding solutions x = e
rt
of x' = Ax are
The Wronskian of these two solutions is
Thus x
(1)
and x
(2)
are fundamental solutions, and the general
solution of x' = Ax is
t t
e t e t

|
|
.
|

\
|

=
|
|
.
|

\
|
=
2
1
) ( ,
2
1
) (
) 2 ( 3 ) 1 (
x x
| | 0 4
2 2
) ( ,
2
3
3
) 2 ( ) 1 (
=

t
t t
t t
e
e e
e e
t W x x
t t
e c e c
t c t c t

|
|
.
|

\
|

+
|
|
.
|

\
|
=
+ =
2
1
2
1
) ( ) ( ) (
2
3
1
) 2 (
2
) 1 (
1
x x x
Example 1: Phase Plane for x
(1)
(6 of 9)
To visualize solution, consider first x = c
1
x
(1)
:
Now
Thus x
(1)
lies along the straight line x
2
= 2x
1
, which is the line
through origin in direction of first eigenvector
(1)
If solution is trajectory of particle, with position given by
(x
1
, x
2
), then it is in Q1 when c
1
> 0, and in Q3 when c
1
< 0.
In either case, particle moves away from origin as t increases.
t t t
e c x e c x e c
x
x
t
3
1 2
3
1 1
3
1
2
1 ) 1 (
2 ,
2
1
) ( = =
|
|
.
|

\
|
=
|
|
.
|

\
|
= x
1 2
1
2
1
1
3 3
1 2
3
1 1
2
2
2 , x x
c
x
c
x
e e c x e c x
t t t
= = = = =
Example 1: Phase Plane for x
(2)
(7 of 9)
Next, consider x = c
2
x
(2)
:
Then x
(2)
lies along the straight line x
2
= -2x
1
, which is the
line through origin in direction of 2nd eigenvector
(2)
If solution is trajectory of particle, with position given by
(x
1
, x
2
), then it is in Q4 when c
2
> 0, and in Q2 when c
2
< 0.
In either case, particle moves towards origin as t increases.
t t t
e c x e c x e c
x
x
t

= =
|
|
.
|

\
|

=
|
|
.
|

\
|
=
2 2 2 1 2
2
1 ) 2 (
2 ,
2
1
) ( x
Example 1:
Phase Plane for General Solution (8 of 9)
The general solution is x = c
1
x
(1)
+ c
2
x
(2)
:
As t , c
1
x
(1)
is dominant and c
2
x
(2)
becomes negligible.
Thus, for c
1
0, all solutions asymptotically approach the
line x
2
= 2x
1
as t .
Similarly, for c
2
0, all solutions asymptotically approach
the line x
2
= -2x
1
as t - .
The origin is a saddle point,
and is unstable. See graph.
t t
e c e c t

|
|
.
|

\
|

+
|
|
.
|

\
|
=
2
1
2
1
) (
2
3
1
x
Example 1:
Time Plots for General Solution (9 of 9)
The general solution is x = c
1
x
(1)
+ c
2
x
(2)
:
As an alternative to phase plane plots, we can graph x
1
or x
2
as a function of t. A few plots of x
1
are given below.
Note that when c
1
= 0, x
1
(t) = c
2
e
-t
0 as t .
Otherwise, x
1
(t) = c
1
e
3t
+ c
2
e
-t
grows unbounded as t .
Graphs of x
2
are similarly obtained.
|
|
.
|

\
|

+
=
|
|
.
|

\
|

|
|
.
|

\
|

+
|
|
.
|

\
|
=

t t
t t
t t
e c e c
e c e c
t x
t x
e c e c t
2
3
1
2
3
1
2
1
2
3
1
2 2
) (
) (
2
1
2
1
) ( x
Example 2: Direction Field (1 of 9)
Consider the homogeneous equation x' = Ax below.
A direction field for this system is given below.
Substituting x = e
rt
in for x, and rewriting system as
(A-rI) = 0, we obtain
x x
|
|
.
|

\
|

2 2
2 3
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|


0
0
2 2
2 3
1
1

r
r
Example 2: Eigenvalues (2 of 9)
Our solution has the form x = e
rt
, where r and are found
by solving
Recalling that this is an eigenvalue problem, we determine r
by solving det(A-rI) = 0:
Thus r
1
= -1 and r
2
= -4.
) 4 )( 1 ( 4 5 2 ) 2 )( 3 (
2 2
2 3
2
+ + = + + = =


r r r r r r
r
r
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|


0
0
2 2
2 3
1
1

r
r
Example 2: First Eigenvector (3 of 9)
Eigenvector for r
1
= -1: Solve
by row reducing the augmented matrix:
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|
+
+
=
0
0
1 2
2 2
0
0
1 2 2
2 1 3
2
1
2
1

0 I A r
|
|
.
|

\
|
=
|
|
.
|

\
|
=
|
|
.
|

\
|

|
|
.
|

\
|

|
|
.
|

\
|

2
1
choose
2 / 2
0 0 0
0 2 / 2 1
0 1 2
0 2 / 2 1
0 1 2
0 2 2
) 1 (
2
2
) 1 (

Example 2: Second Eigenvector (4 of 9)


Eigenvector for r
2
= -4: Solve
by row reducing the augmented matrix:
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|
+
+
=
0
0
2 2
2 1
0
0
4 2 2
2 4 3
2
1
2
1

0 I A r
|
|
.
|

\
|

=
|
|
.
|

\
|

=
|
|
.
|

\
|

|
|
.
|

\
|
1
2
choose
2
0 0 0
0 2 1
0 2 2
0 2 1
) 2 (
2
2
) 2 (

Example 2: General Solution (5 of 9)


The corresponding solutions x = e
rt
of x' = Ax are
The Wronskian of these two solutions is
Thus x
(1)
and x
(2)
are fundamental solutions, and the general
solution of x' = Ax is
t t
e t e t
4 ) 2 ( ) 1 (
1
2
) ( ,
2
1
) (

|
|
.
|

\
|

=
|
|
.
|

\
|
= x x
| | 0 3
2
2
) ( ,
5
4
4
) 2 ( ) 1 (
=

=



t
t t
t t
e
e e
e e
t W x x
t t
e c e c
t c t c t
4
2 1
) 2 (
2
) 1 (
1
1
2
2
1
) ( ) ( ) (

|
|
.
|

\
|

+
|
|
.
|

\
|
=
+ = x x x
Example 2: Phase Plane for x
(1)
(6 of 9)
To visualize solution, consider first x = c
1
x
(1)
:
Now
Thus x
(1)
lies along the straight line x
2
= 2

x
1
, which is the
line through origin in direction of first eigenvector
(1)
If solution is trajectory of particle, with position given by
(x
1
, x
2
), then it is in Q1 when c
1
> 0, and in Q3 when c
1
< 0.
In either case, particle moves towards origin as t increases.
t t t
e c x e c x e c
x
x
t

= =
|
|
.
|

\
|
=
|
|
.
|

\
|
=
1 2 1 1 1
2
1 ) 1 (
2 ,
2
1
) ( x
1 2
1
2
1
1
1 2 1 1
2
2
2 , x x
c
x
c
x
e e c x e c x
t t t
= = = = =

Example 2: Phase Plane for x
(2)
(7 of 9)
Next, consider x = c
2
x
(2)
:
Then x
(2)
lies along the straight line x
2
= -2

x
1
, which is the
line through origin in direction of 2nd eigenvector
(2)
If solution is trajectory of particle, with position given by
(x
1
, x
2
), then it is in Q4 when c
2
> 0, and in Q2 when c
2
< 0.
In either case, particle moves towards origin as t increases.
t t t
e c x e c x e c
x
x
t
4
2 2
4
2 1
4
2
2
1 ) 2 (
, 2
1
2
) (

= =
|
|
.
|

\
|

=
|
|
.
|

\
|
= x
Example 2:
Phase Plane for General Solution (8 of 9)
The general solution is x = c
1
x
(1)
+ c
2
x
(2)
:
As t , c
1
x
(1)
is dominant and c
2
x
(2)
becomes negligible.
Thus, for c
1
0, all solutions asymptotically approach
origin along the line x
2
= 2

x
1
as t .
Similarly, all solutions are unbounded as t - .
The origin is a node, and is
asymptotically stable.
t t
e t e t
4 ) 2 ( ) 1 (
1
2
) ( ,
2
1
) (

|
|
.
|

\
|

=
|
|
.
|

\
|
= x x
Example 2:
Time Plots for General Solution (9 of 9)
The general solution is x = c
1
x
(1)
+ c
2
x
(2)
:
As an alternative to phase plane plots, we can graph x
1
or x
2
as a function of t. A few plots of x
1
are given below.
Graphs of x
2
are similarly obtained.
|
|
.
|

\
|
+

=
|
|
.
|

\
|

|
|
.
|

\
|

+
|
|
.
|

\
|
=



t t
t t
t t
e c e c
e c e c
t x
t x
e c e c t
4
2 1
4
2 1
2
1 4
2 1
2
2
) (
) (
1
2
2
1
) ( x
2 x 2 Case:
Real Eigenvalues, Saddle Points and Nodes
The previous two examples demonstrate the two main cases
for a 2 x 2 real system with real and different eigenvalues:
Both eigenvalues have opposite signs, in which case origin is a
saddle point and is unstable.
Both eigenvalues have the same sign, in which case origin is a node,
and is asymptotically stable if the eigenvalues are negative and
unstable if the eigenvalues are positive.
Eigenvalues, Eigenvectors
and Fundamental Solutions
In general, for an n x n real linear system x' = Ax:
All eigenvalues are real and different from each other.
Some eigenvalues occur in complex conjugate pairs.
Some eigenvalues are repeated.
If eigenvalues r
1
,, r
n
are real & different, then there are n
corresponding linearly independent eigenvectors
(1)
,,
(n)
.
The associated solutions of x' = Ax are
Using Wronskian, it can be shown that these solutions are
linearly independent, and hence form a fundamental set of
solutions. Thus general solution is
t r n n t r
n
e t e t
) ( ) ( ) 1 ( ) 1 (
) ( , , ) (
1
x x = =
t r n
n
t r
n
e c e c
) ( ) 1 (
1
1
x + + =
Hermitian Case: Eigenvalues, Eigenvectors &
Fundamental Solutions
If A is an n x n Hermitian matrix (real and symmetric), then
all eigenvalues r
1
,, r
n
are real, although some may repeat.
In any case, there are n corresponding linearly independent
and orthogonal eigenvectors
(1)
,,
(n)
. The associated
solutions of x' = Ax are
and form a fundamental set of solutions.
t r n n t r
n
e t e t
) ( ) ( ) 1 ( ) 1 (
) ( , , ) (
1
x x = =
Example 3: Hermitian Matrix (1 of 3)
Consider the homogeneous equation x' = Ax below.
The eigenvalues were found previously in Ch 7.3, and were:
r
1
= 2, r
2
= -1 and r
3
= -1.
Corresponding eigenvectors:
x x
|
|
|
.
|

\
|
=

0 1 1
1 0 1
1 1 0
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
=
1
1
0
,
1
0
1
,
1
1
1
) 3 ( ) 2 ( ) 1 (

Example 3: General Solution (2 of 3)
The fundamental solutions are
with general solution
t t t
e e e

|
|
|
.
|

\
|

=
|
|
|
.
|

\
|

=
|
|
|
.
|

\
|
=
1
1
0
,
1
0
1
,
1
1
1
) 3 ( ) 2 ( 2 ) 1 (
x x x
t t t
e c e c e c

|
|
|
.
|

\
|

+
|
|
|
.
|

\
|

+
|
|
|
.
|

\
|
=
1
1
0

1
0
1
1
1
1
3 2
2
1
x
Example 3: General Solution Behavior (3 of 3)
The general solution is x = c
1
x
(1)
+ c
2
x
(2)
+ c
3
x
(3)
:
As t , c
1
x
(1)
is dominant and c
2
x
(2)
, c
3
x
(3)
become
negligible.
Thus, for c
1
0, all solns x become unbounded as t ,
while for c
1
= 0, all solns x 0 as t .
The initial points that cause c
1
= 0 are those that lie in plane
determined by
(2)
and
(3)
. Thus solutions that start in this
plane approach origin as t .
t t t
e c e c e c

|
|
|
.
|

\
|

+
|
|
|
.
|

\
|

+
|
|
|
.
|

\
|
=
1
1
0

1
0
1
1
1
1
3 2
2
1
x
Complex Eigenvalues and Fundamental Solns
If some of the eigenvalues r
1
,, r
n
occur in complex
conjugate pairs, but otherwise are different, then there are
still n corresponding linearly independent solutions
which form a fundamental set of solutions. Some may be
complex-valued, but real-valued solutions may be derived
from them. This situation will be examined in Ch 7.6.
If the coefficient matrix A is complex, then complex
eigenvalues need not occur in conjugate pairs, but solutions
will still have the above form (if the eigenvalues are
distinct) and these solutions may be complex-valued.
, ) ( , , ) (
) ( ) ( ) 1 ( ) 1 (
1
t r n n t r
n
e t e t x x = =
Repeated Eigenvalues and Fundamental Solns
If some of the eigenvalues r
1
,, r
n
are repeated, then there
may not be n corresponding linearly independent solutions of
the form
In order to obtain a fundamental set of solutions, it may be
necessary to seek additional solutions of another form.
This situation is analogous to that for an nth order linear
equation with constant coefficients, in which case a repeated
root gave rise solutions of the form
This case of repeated eigenvalues is examined in Section 7.8.
t r n n t r
n
e t e t
) ( ) ( ) 1 ( ) 1 (
) ( , , ) (
1
x x = =
, , ,
2 rt rt rt
e t te e
Ch 7.6: Complex Eigenvalues
We consider again a homogeneous system of n first order
linear equations with constant, real coefficients,
and thus the system can be written as x' = Ax, where
,
2 2 1 1
2 2 22 1 21 2
1 2 12 1 11 1
n nn n n n
n n
n n
x a x a x a x
x a x a x a x
x a x a x a x
+ + + =

+ + + =

+ + + =

|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
nn n n
n
n
n
a a a
a a a
a a a
t x
t x
t x
t

2 1
2 22 21
1 12 11
2
1
,
) (
) (
) (
) ( A x
Conjugate Eigenvalues and Eigenvectors
We know that x = e
rt
is a solution of x' = Ax, provided r is
an eigenvalue and is an eigenvector of A.
The eigenvalues r
1
,, r
n
are the roots of det(A-rI) = 0, and
the corresponding eigenvectors satisfy (A-rI) = 0.
If A is real, then the coefficients in the polynomial equation
det(A-rI) = 0 are real, and hence any complex eigenvalues
must occur in conjugate pairs. Thus if r
1
= + i is an
eigenvalue, then so is r
2
= - i.
The corresponding eigenvectors
(1)
,
(2)
are conjugates also.
To see this, recall A and I have real entries, and hence
( ) ( ) ( ) 0 I A 0 I A 0 I A = = =
) 2 (
2
) 1 (
1
) 1 (
1
r r r
Conjugate Solutions
It follows from the previous slide that the solutions
corresponding to these eigenvalues and eigenvectors are
conjugates conjugates as well, since
) 1 ( ) 1 ( ) 2 ( ) 2 (
2 2
x x = = =
t r t r
e e
t r t r
e e
2 1
) 2 ( ) 2 ( ) 1 ( ) 1 (
, x x = =
Example 1: Direction Field (1 of 7)
Consider the homogeneous equation x' = Ax below.
A direction field for this system is given below.
Substituting x = e
rt
in for x, and rewriting system as
(A-rI) = 0, we obtain
x x
|
|
.
|

\
|

2 / 1 1
1 2 / 1
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|


0
0
2 / 1 1
1 2 / 1
1
1

r
r
Example 1: Complex Eigenvalues (2 of 7)
We determine r by solving det(A-rI) = 0. Now
Thus
Therefore the eigenvalues are r
1
= -1/2 + i and r
2
= -1/2 - i.
( )
4
5
1 2 / 1
2 / 1 1
1 2 / 1
2
2
+ + = + + =


r r r
r
r
i
i
r =

=

=
2
1
2
2 1
2
) 4 / 5 ( 4 1 1
2
Example 1: First Eigenvector (3 of 7)
Eigenvector for r
1
= -1/2 + i: Solve
by row reducing the augmented matrix:
Thus
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|


=
0
0
1
1
0
0
1
1
0
0
2 / 1 1
1 2 / 1
2
1
2
1
1
1

i
i
i
i
r
r
r 0 I A
|
|
.
|

\
|
=
|
|
.
|

\
|

=
|
|
.
|

\
|

|
|
.
|

\
|
i
i i
i
i 1
choose
0 0 0
0 1
0 1
0 1
) 1 (
2
2 ) 1 (

|
|
.
|

\
|
+
|
|
.
|

\
|
=
1
0
0
1
) 1 (
i
Example 1: Second Eigenvector (4 of 7)
Eigenvector for r
1
= -1/2 - i: Solve
by row reducing the augmented matrix:
Thus
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|


=
0
0
1
1
0
0
1
1
0
0
2 / 1 1
1 2 / 1
2
1
2
1
1
1

i
i
i
i
r
r
r 0 I A
|
|
.
|

\
|

=
|
|
.
|

\
|
=
|
|
.
|

\
|

|
|
.
|

\
|


i
i i
i
i 1
choose
0 0 0
0 1
0 1
0 1
) 2 (
2
2 ) 2 (

|
|
.
|

\
|

+
|
|
.
|

\
|
=
1
0
0
1
) 2 (
i
Example 1: General Solution (5 of 7)
The corresponding solutions x = e
rt
of x' = Ax are
The Wronskian of these two solutions is
Thus u(t) and v(t) are real-valued fundamental solutions of
x' = Ax, with general solution x = c
1
u + c
2
v.
|
|
.
|

\
|
=
(

|
|
.
|

\
|
+
|
|
.
|

\
|
=
|
|
.
|

\
|

=
(

|
|
.
|

\
|

|
|
.
|

\
|
=


t
t
e t t e t
t
t
e t t e t
t t
t t
cos
sin
cos
1
0
sin
0
1
) (
sin
cos
sin
1
0
cos
0
1
) (
2 / 2 /
2 / 2 /
v
u
| | 0
cos sin
sin cos
) ( ,
2 / 2 /
2 / 2 /
) 2 ( ) 1 (
=

=



t
t t
t t
e
t e t e
t e t e
t W x x
Example 1: Phase Plane (6 of 7)
Given below is the phase plane plot for solutions x, with
Each solution trajectory approaches origin along a spiral path
as t , since coordinates are products of decaying
exponential and sine or cosine factors.
The graph of u passes through (1,0),
since u(0) = (1,0). Similarly, the
graph of v passes through (0,1).
The origin is a spiral point, and
is asymptotically stable.
|
|
.
|

\
|
+
|
|
.
|

\
|

=
|
|
.
|

\
|
=

t e
t e
c
t e
t e
c
x
x
t
t
t
t
cos
sin
sin
cos
2 /
2 /
2
2 /
2 /
1
2
1
x
Example 1: Time Plots (7 of 7)
The general solution is x = c
1
u + c
2
v:
As an alternative to phase plane plots, we can graph x
1
or x
2
as a function of t. A few plots of x
1
are given below, each
one a decaying oscillation as t .
|
|
.
|

\
|
+
+
=
|
|
.
|

\
|
=


t e c t e c
t e c t e c
t x
t x
t t
t t
cos sin
sin cos
) (
) (
2 /
2
2 /
1
2 /
2
2 /
1
2
1
x
General Solution
To summarize, suppose r
1
= + i, r
2
= - i, and that
r
3
,, r
n
are all real and distinct eigenvalues of A. Let the
corresponding eigenvectors be
Then the general solution of x' = Ax is
where
t r n
n
t r
n
e c e c t c t c
) ( ) 3 (
3 2 1
3
) ( ) ( v u x + + + + =
) ( ) 4 ( ) 3 ( ) 2 ( ) 1 (
, , , , ,
n
i i b a b a = + =
( ) ( ) t t e t t t e t
t t


cos sin ) ( , sin cos ) ( b a v b a u + = =
Real-Valued Solutions
Thus for complex conjugate eigenvalues r
1
and r
2
, the
corresponding solutions x
(1)
and x
(2)
are conjugates also.
To obtain real-valued solutions, use real and imaginary parts
of either x
(1)
or x
(2)
. To see this, let
(1)
= a + ib. Then
where
are real valued solutions of x' = Ax, and can be shown to be
linearly independent.
( )
( ) ( )
( ) ( )
) ( ) (
cos sin sin cos
sin cos
) 1 ( ) 1 (
t i t
t t ie t t e
t i t e i e
t t
t t i
v u
b a b a
b a x
+ =
+ + =
+ + = =
+




( ) ( ), cos sin ) ( , sin cos ) ( t t e t t t e t
t t


b a v b a u + = =
Spiral Points, Centers,
Eigenvalues, and Trajectories
In previous example, general solution was
The origin was a spiral point, and was asymptotically stable.
If real part of complex eigenvalues is positive, then
trajectories spiral away, unbounded, from origin, and hence
origin would be an unstable spiral point.
If real part of complex eigenvalues is zero, then trajectories
circle origin, neither approaching nor departing. Then origin
is called a center and is stable, but not asymptotically stable.
Trajectories periodic in time.
The direction of trajectory motion depends on entries in A.
|
|
.
|

\
|
+
|
|
.
|

\
|

=
|
|
.
|

\
|
=

t e
t e
c
t e
t e
c
x
x
t
t
t
t
cos
sin
sin
cos
2 /
2 /
2
2 /
2 /
1
2
1
x
Example 2:
Second Order System with Parameter (1 of 2)
The system x' = Ax below contains a parameter .
Substituting x = e
rt
in for x and rewriting system as
(A-rI) = 0, we obtain
Next, solve for r in terms of :
x x
|
|
.
|

\
|

0 2
2
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

0
0
2
2
1
1


r
r
2
16
4 4 ) (
2
2
2
2

= + = + =

r r r r r
r
r
Example 2:
Eigenvalue Analysis (2 of 2)
The eigenvalues are given by the quadratic formula above.
For < -4, both eigenvalues are real and negative, and hence
origin is asymptotically stable node.
For > 4, both eigenvalues are real and positive, and hence the
origin is an unstable node.
For -4 < < 0, eigenvalues are complex with a negative real
part, and hence origin is asymptotically stable spiral point.
For 0 < < 4, eigenvalues are complex with a positive real
part, and the origin is an unstable spiral point.
For = 0, eigenvalues are purely imaginary, origin is a center.
Trajectories closed curves about origin & periodic.
For = 4, eigenvalues real & equal, origin is a node (Ch 7.8)
2
16
2

=

r
Second Order Solution Behavior and
Eigenvalues: Three Main Cases
For second order systems, the three main cases are:
Eigenvalues are real and have opposite signs; x = 0 is a saddle point.
Eigenvalues are real, distinct and have same sign; x = 0 is a node.
Eigenvalues are complex with nonzero real part; x = 0 a spiral point.
Other possibilities exist and occur as transitions between two
of the cases listed above:
A zero eigenvalue occurs during transition between saddle point and
node. Real and equal eigenvalues occur during transition between
nodes and spiral points. Purely imaginary eigenvalues occur during a
transition between asymptotically stable and unstable spiral points.
a
ac b b
r
2
4
2

=
Example 3: Multiple Spring-Mass System(1 of 6)
The equations for the system of two masses and three
springs discussed in Section 7.1, assuming no external
forces, can be expressed as:
Given , , the
equations become
' and , ' , , where
) ( ' and ) ( ' or
) ( and ) (
2 4 1 3 2 2 1 1
2 3 2 1 2 4 2 2 2 1 2 1 3 1
2 3 2 1 2
2
2
2
2 2 2 1 2 1
2
1
2
1
x y x y x y x y
y k k y k y m y k y k k y m
x k k x k
dt
x d
m x k x k k
dt
x d
m
= = = =
+ = + + =
+ = + + =
4 / 15 and , 3 , 1 , 4 / 9 , 2
3 2 1 2 1
= = = = = k k k m m
2 1 4 2 1 3 4 2 3 1
3 3 / 4 ' and , 2 / 3 2 ' , ' , ' y y y y y y y y y y = + = = =
Example 3: Multiple Spring-Mass System(2 of 6)
Writing the system of equations in matrix form:
Assuming a solution of the form y = e
rt
, where r must be
an eigenvalue of the matrix A and is the corresponding
eigenvector, the characteristic polynomial of A is
yielding the eigenvalues:
Ay y y' =
|
|
|
|
|
.
|

\
|

=
0 0 3 3 / 4
0 0 2 / 3 2
1 0 0 0
0 1 0 0
) 4 )( 1 ( 4 5
2 2 2 4
+ + = + + r r r r
i r i r i r i r 2 and , 2 , ,
4 3 2 1
= = = =
2 1 4 2 1 3 4 2 3 1
3 3 / 4 ' and , 2 / 3 2 ' , ' , ' y y y y y y y y y y = + = = =
Example 3: Multiple Spring-Mass System (3 of 6)
For the eigenvalues the correspond-
ing eigenvectors are
The products yield the complex-valued solutions:
Ay y y' =
|
|
|
|
|
.
|

\
|

=
0 0 3 3 / 4
0 0 2 / 3 2
1 0 0 0
0 1 0 0
i r i r i r i r 2 and , 2 , ,
4 3 2 1
= = = =
|
|
|
|
|
.
|

\
|

=
|
|
|
|
|
.
|

\
|

=
|
|
|
|
|
.
|

\
|

=
|
|
|
|
|
.
|

\
|
=
i
i
i
i
i
i
i
i
8
6
4
3
and ,
8
6
4
3
,
2
3
2
3
,
2
3
2
3
) 4 ( ) 3 ( ) 2 ( ) 1 (

it it
e e
2 ) 3 ( ) 1 (
and
) ( ) (
2 cos 8
2 cos 6
2 sin 4
2 sin 3
2 sin 8
2 sin 6
2 cos 4
2 cos 3
) 2 sin 2 (cos
8
6
4
3
) ( ) (
cos 2
cos 3
sin 2
sin 3
sin 2
sin 3
cos 2
cos 3
) sin (cos
2
3
2
3
) 2 ( ) 2 ( 2 ) 3 (
) 1 ( ) 1 ( ) 1 (
t i t
t
t
t
t
i
t
t
t
t
t i t
i
i
e
t i t
t
t
t
t
i
t
t
t
t
t i t
i
i
e
it
it
v u
v u
+ =
|
|
|
|
|
.
|

\
|

+
|
|
|
|
|
.
|

\
|

= +
|
|
|
|
|
.
|

\
|

=
+ =
|
|
|
|
|
.
|

\
|
+
|
|
|
|
|
.
|

\
|

= +
|
|
|
|
|
.
|

\
|
=

Example 3: Multiple Spring-Mass System (4 of 6)


After validating that are linearly
independent, the general solution of the system of equations can be
written as
where are arbitrary constants.
Each solution will be periodic with period 2, so each trajectory is a
closed curve. The first two terms of the solution describe motions with
frequency 1 and period 2 while the second two terms describe
motions with frequency 2 and period . The motions of the two masses
will be different relative to one another for solutions involving only the
first two terms or the second two terms.
) ( ), ( , ) ( ), (
) 2 ( ) 2 ( ) 1 ( ) 1 (
t t t t v u v u
|
|
|
|
|
.
|

\
|

+
|
|
|
|
|
.
|

\
|

+
|
|
|
|
|
.
|

\
|
+
|
|
|
|
|
.
|

\
|

=
t
t
t
t
c
t
t
t
t
c
t
t
t
t
c
t
t
t
t
c
2 cos 8
2 cos 6
2 sin 4
2 sin 3
2 sin 8
2 sin 6
2 cos 4
2 cos 3
cos 2
cos 3
sin 2
sin 3
sin 2
sin 3
cos 2
cos 3
4 3 2 1
y
4 3 2 1
, , , c c c c
2 1 4 2 1 3 4 2 3 1
3 3 / 4 ' and , 2 / 3 2 ' , ' , ' y y y y y y y y y y = + = = =
Example 3: Multiple Spring-Mass System (5 of 6)
To obtain the fundamental mode of vibration with frequency 1
To obtain the fundamental mode of vibration with frequency 2
Plots of and parametric plots (y, y) are shown for a
selected solution with frequency 1
) 0 ( 2 ) 0 ( 3 and ) 0 ( 2 ) 0 ( 3 when occurs 0
3 4 1 2 4 3
y y y y c c = = = =
) 0 ( 4 ) 0 ( 3 and ) 0 ( 4 ) 0 ( 3 when occurs 0
3 4 1 2 2 1
y y y y c c = = = =
5 10 15 20
t
3
2
1
1
2
3
u
1
t
3 2 1 1 2 3
y1, y2
3
2
1
1
2
3
y3, y4
) , (
3 1
y y
) , (
4 2
y y
2
y
1
y
2
y
|
|
|
|
|
.
|

\
|
=
|
|
|
|
|
.
|

\
|
=
0
0
2
3
) 0 (
) 0 (
) 0 (
) 0 (
) 0 (
4
3
2
1
y
y
y
y
y
Plots of the solutions as functions of time Phase plane plots
' , '

and masses the of motion the represent and
1 4 1 3
2 1
y y y y
y y
= =
2 1
and y y
Example 3: Multiple Spring-Mass System (6 of 6)
Plots of and parametric plots (y, y) are shown for a selected
solution with frequency 2
Plots of and parametric plots (y, y) are shown for a selected
solution with mixed frequencies satisfying the initial condition stated
2 4 6 8 10 12
t
4
2
2
4
u
2
t
2 4 6 8 10 12
t
4
2
2
4
yt
4 2 2 4
y1, y2
5
5
y3, y4

1
y
) , (
4 2
y y
) , (
3 1
y y
1
y

2
y
Plots of the solutions as functions of time
Plots of the solutions as functions of time
Phase plane plots
4 2 2 4
y1, y2
5
5
10
y3, y4
Phase plane plots
) , (
4 2
y y
) , (
3 1
y y

2
y
|
|
|
|
|
.
|

\
|

=
|
|
|
|
|
.
|

\
|
=
0
0
4
3
) 0 (
) 0 (
) 0 (
) 0 (
) 0 (
4
3
2
1
y
y
y
y
y
|
|
|
|
|
.
|

\
|

=
1
1
4
1
) 0 ( y
' , '

and masses the of motion the represent and
1 4 1 3
2 1
y y y y
y y
= =
2 1
and y y
2 1
and y y
Ch 7.8: Repeated Eigenvalues
We consider again a homogeneous system of n first order
linear equations with constant real coefficients x' = Ax.
If the eigenvalues r
1
,, r
n
of A are real and different, then
there are n linearly independent eigenvectors
(1)
,,
(n)
, and n
linearly independent solutions of the form
If some of the eigenvalues r
1
,, r
n
are repeated, then there
may not be n corresponding linearly independent solutions of
the above form.
In this case, we will seek additional solutions that are products
of polynomials and exponential functions.
t r n n t r
n
e t e t
) ( ) ( ) 1 ( ) 1 (
) ( , , ) (
1
x x = =
Example 1: Eigenvalues (1 of 2)
We need to find the eigenvectors for the matrix:
The eigenvalues r and eigenvectors satisfy the equatio
(A rI ) =0 or
To determine r, solve det(A-rI) = 0:
Thus r
1
= 2 and r
2
= 2.
2 2
) 2 ( 4 4 1 ) 3 )( 1 (
3 1
1 1
= + = + =


r r r r r
r
r
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|


0
0
3 1
1 1
1
1

r
r
x
|
|
.
|

\
|

=
3 1
1 1
A
Example 1: Eigenvectors (2 of 2)
To find the eigenvectors, we solve
by row reducing the augmented matrix:
Thus there is only one linearly independent eigenvector for
the repeated eigenvalue r = 2.
( )
|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|

|
|
.
|

\
|
=
|
|
.
|

\
|
|
|
.
|

\
|


=
0
0
1 1
1 1
0
0
2 3 1
1 2 1
2
1
2
1

0 I A r
|
|
.
|

\
|

=
|
|
.
|

\
|

=
=
= +

|
|
.
|

\
|

|
|
.
|

\
|

|
|
.
|

\
|

1
1
choose
0 0
0 1 1
0 0 0
0 1 1
0 1 1
0 1 1
0 1 1
0 1 1
) 1 (
2
2 ) 1 (
2
2 1



Example 2: Direction Field (1 of 10)
Consider the homogeneous equation x' = Ax below.
A direction field for this system is given below.
Substituting x = e
rt
in for x, where r is As eigenvalue and
is its corresponding eigenvector,
the previous example showed the
existence of only one eigenvalue,
r = 2, with one eigenvector:
x x
|
|
.
|

\
|

=

3 1
1 1
|
|
.
|

\
|

=
1
1

Example 2: First Solution; and


Second Solution, First Attempt (2 of 10)
The corresponding solution x = e
rt
of x' = Ax is
Since there is no second solution of the form x = e
rt
, we
need to try a different form. Based on methods for second
order linear equations in Ch 3.5, we first try x = te
2t
.
Substituting x = te
2t
into x' = Ax, we obtain
or
t
e t
2 ) 1 (
1
1
) (
|
|
.
|

\
|

= x
t t t
te te e
2 2 2
2 A = +
0 2
2 2 2
= +
t t t
te e te A
Example 2:
Second Solution, Second Attempt (3 of 10)
From the previous slide, we have
\[ 2\xi t e^{2t} + \xi e^{2t} - A\xi t e^{2t} = 0. \]
In order for this equation to be satisfied for all t, the coefficients of te^{2t} and e^{2t} must both be zero.
From the e^{2t} term, we see that ξ = 0, and hence there is no nonzero solution of the form x = ξte^{2t}.
Since te^{2t} and e^{2t} appear in the above equation, we next consider a solution of the form
\[ x = \xi t e^{2t} + \eta e^{2t}. \]
Example 2: Second Solution and its
Defining Matrix Equations (4 of 10)
Substituting x = ξte^{2t} + ηe^{2t} into x' = Ax, we obtain
\[ \xi e^{2t} + 2\xi t e^{2t} + 2\eta e^{2t} = A\left(\xi t e^{2t} + \eta e^{2t}\right), \]
or
\[ 2\xi t e^{2t} + (\xi + 2\eta) e^{2t} = A\xi t e^{2t} + A\eta e^{2t}. \]
Equating coefficients yields Aξ = 2ξ and Aη = ξ + 2η, or
\[ (A - 2I)\xi = 0 \quad\text{and}\quad (A - 2I)\eta = \xi. \]
The first equation is satisfied if ξ is an eigenvector of A corresponding to the eigenvalue r = 2. Thus
\[ \xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Example 2: Solving for Second Solution (5 of 10)
Recall that
\[ A = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}, \qquad \xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Thus to solve (A − 2I)η = ξ for η, we row reduce the corresponding augmented matrix:
\[ \begin{pmatrix} -1 & -1 & 1 \\ 1 & 1 & -1 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix} \;\Rightarrow\; \eta_1 + \eta_2 = -1, \]
so that
\[ \eta = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad k \text{ arbitrary}. \]
Example 2: Second Solution (6 of 10)
Our second solution x = ξte^{2t} + ηe^{2t} is now
\[ x = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} + k \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}. \]
Recalling that the first solution was
\[ x^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}, \]
we see that, since the third term of x is simply a multiple of x^{(1)}, our second solution may be taken as
\[ x^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t}. \]
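That this pair really solves the system can be confirmed symbolically; a minimal sympy sketch (the library choice is an assumption of mine, not part of the slides):

# Minimal sketch: verify that x2(t) = xi*t*e^{2t} + eta*e^{2t} solves x' = Ax.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -1], [1, 3]])
xi = sp.Matrix([1, -1])     # eigenvector for r = 2
eta = sp.Matrix([0, -1])    # generalized eigenvector (k = 0)

x2 = xi * t * sp.exp(2*t) + eta * sp.exp(2*t)

print(sp.simplify(sp.diff(x2, t) - A * x2))    # Matrix([[0], [0]])
print((A - 2*sp.eye(2)) * eta - xi)            # Matrix([[0], [0]])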
Example 2: General Solution (7 of 10)
The two solutions of x' = Ax are
\[ x^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}, \qquad x^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t}. \]
The Wronskian of these two solutions is
\[ W[x^{(1)}, x^{(2)}](t) = \begin{vmatrix} e^{2t} & te^{2t} \\ -e^{2t} & -te^{2t} - e^{2t} \end{vmatrix} = -e^{4t} \neq 0. \]
Thus x^{(1)} and x^{(2)} are fundamental solutions, and the general solution of x' = Ax is
\[ x(t) = c_1 x^{(1)}(t) + c_2 x^{(2)}(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t} + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right]. \]
Example 2: Phase Plane (8 of 10)
The general solution is
\[ x(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t} + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right]. \]
Thus x is unbounded as t → ∞, and x → 0 as t → −∞.
Further, it can be shown that as t → −∞, x → 0 asymptotic to the line x_2 = −x_1 determined by the first eigenvector.
Similarly, as t → ∞, each trajectory is asymptotic to a line parallel to x_2 = −x_1.
Example 2: Phase Plane (9 of 10)
The origin is an improper node, and is unstable. See graph.
The pattern of trajectories is typical for two repeated
eigenvalues with only one eigenvector.
If the eigenvalues are negative, then the trajectories are
similar but are traversed in the inward direction. In this case
the origin is an asymptotically stable improper node.
Example 2:
Time Plots for General Solution (10 of 10)
Time plots for x_1(t) are given below, where we note that the general solution x can be written as follows:
\[ x(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t} + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right], \]
so that
\[ \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} c_1 e^{2t} + c_2 t e^{2t} \\ -(c_1 + c_2) e^{2t} - c_2 t e^{2t} \end{pmatrix}. \]
General Case for Double Eigenvalues
Suppose the system x' = Ax has a double eigenvalue r = λ and a single corresponding eigenvector ξ.
The first solution is x^{(1)} = ξe^{λt}, where ξ satisfies (A − λI)ξ = 0.
As in Example 2, the second solution has the form
\[ x^{(2)} = \xi t e^{\lambda t} + \eta e^{\lambda t}, \]
where ξ is as above and η satisfies (A − λI)η = ξ.
Since λ is an eigenvalue, det(A − λI) = 0, so (A − λI)u = b does not have a solution for every b. However, it can be shown that (A − λI)η = ξ always has a solution.
The vector η is called a generalized eigenvector.
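Finding η is just solving a singular linear system; a minimal sympy sketch for the running example (my own illustration):

# Minimal sketch: compute an eigenvector xi and a generalized eigenvector eta
# for the double eigenvalue lam = 2, i.e. (A - lam*I) eta = xi.
import sympy as sp

A = sp.Matrix([[1, -1], [1, 3]])
lam = 2
B = A - lam * sp.eye(2)

xi = B.nullspace()[0]                    # eigenvector, determined up to scale
eta, params = B.gauss_jordan_solve(xi)   # particular solution + free parameters
eta = eta.subs({p: 0 for p in params})   # choose the free parameter k = 0

print(xi.T, eta.T)
print(B * eta - xi)                      # Matrix([[0], [0]])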
Example 2 Extension:
Fundamental Matrix (1 of 2)
Recall that a fundamental matrix Ψ(t) for x' = Ax has linearly independent solutions for its columns.
In Example 2, our system x' = Ax was
\[ x' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} x, \]
and the two solutions we found were
\[ x^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}, \qquad x^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t}. \]
Thus the corresponding fundamental matrix is
\[ \Psi(t) = \begin{pmatrix} e^{2t} & te^{2t} \\ -e^{2t} & -te^{2t} - e^{2t} \end{pmatrix} = e^{2t} \begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix}. \]
Example 2 Extension:
Fundamental Matrix (2 of 2)
The fundamental matrix Φ(t) that satisfies Φ(0) = I can be found using Φ(t) = Ψ(t)Ψ^{-1}(0), where
\[ \Psi(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad \Psi^{-1}(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \]
with Ψ^{-1}(0) found by row reducing the augmented matrix [Ψ(0) | I]:
\[ \begin{pmatrix} 1 & 0 & 1 & 0 \\ -1 & -1 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & -1 \end{pmatrix}. \]
Thus
\[ \Phi(t) = \Psi(t)\Psi^{-1}(0) = e^{2t} \begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix} = e^{2t} \begin{pmatrix} 1-t & -t \\ t & 1+t \end{pmatrix}. \]
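Since Φ(t) = e^{At}, the result can be checked against the matrix exponential; a minimal scipy sketch (the library choice is mine):

# Minimal sketch: Phi(t) = Psi(t) Psi(0)^{-1} should equal expm(A t).
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, -1.0], [1.0, 3.0]])

def Phi(t):
    # closed form derived above
    return np.exp(2*t) * np.array([[1 - t, -t], [t, 1 + t]])

for t in (0.0, 0.5, 1.3):
    print(np.allclose(Phi(t), expm(A * t)))   # True for each t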
Jordan Forms
If A is n x n with n linearly independent eigenvectors, then A can be diagonalized using a similarity transformation T^{-1}AT = D. The transform matrix T consists of the eigenvectors of A, and the diagonal entries of D are the eigenvalues of A.
In the case of repeated eigenvalues and fewer than n linearly independent eigenvectors, A can be transformed into a nearly diagonal matrix J, called the Jordan form of A, with T^{-1}AT = J.
Example 2 Extension:
Transform Matrix (1 of 2)
In Example 2, our system x' = Ax was
\[ x' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} x, \]
with eigenvalues r_1 = 2 and r_2 = 2 and vectors
\[ \xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad \eta = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
Choosing k = 0, the transform matrix T formed from the eigenvector ξ and the generalized eigenvector η is
\[ T = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}. \]
Example 2 Extension: Jordan Form (2 of 2)
The Jordan form J of A is defined by T^{-1}AT = J. Now
\[ T = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad T^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \]
and hence
\[ J = T^{-1}AT = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}. \]
Note that the eigenvalues of A, r_1 = 2 and r_2 = 2, are on the main diagonal of J, and that there is a 1 directly above the second eigenvalue. This pattern is typical of Jordan forms.
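sympy can reproduce this decomposition in one call; a minimal sketch (note that sympy chooses its own transform matrix, which need not equal the T above):

# Minimal sketch: Jordan form of A = [[1, -1], [1, 3]].
import sympy as sp

A = sp.Matrix([[1, -1], [1, 3]])
T, J = A.jordan_form()                     # T^{-1} A T = J

print(J)                                   # Matrix([[2, 1], [0, 2]])
print(sp.simplify(T.inv() * A * T - J))    # zero matrix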
Ch 9.1: The Phase Plane: Linear Systems
There are many differential equations, especially nonlinear
ones, that are not susceptible to analytical solution in any
reasonably convenient manner.
Numerical methods provide one means of dealing with these
equations.
Another approach, presented in this chapter, is geometrical in
character and leads to a qualitative understanding of the
solutions rather than to detailed quantitative information.
Solutions of Second Order Linear Systems
Consider a second order linear homogeneous system with
constant coefficients of the form x' = Ax, where A is a 2 x 2
constant matrix and x is a 2 x 1 vector.
Recall from Chapter 7 that if we assume x = ξe^{rt}, then
\[ x' = Ax \;\Rightarrow\; r\xi e^{rt} = A\xi e^{rt} \;\Rightarrow\; (A - rI)\xi = 0. \]
Therefore x = ξe^{rt} is a solution of x' = Ax provided that r is an eigenvalue and ξ is an eigenvector of the coefficient matrix A.
The eigenvalues are the roots of the polynomial equation det(A − rI) = 0, and the eigenvectors are determined up to an arbitrary constant from the equation (A − rI)ξ = 0.
Equilibrium Solution, Phase Portrait
Solutions x for which Ax = 0 correspond to equilibrium solutions, and are called critical points.
We assume A is nonsingular, det A ≠ 0, and hence x = 0 is the only critical point for the system x' = Ax.
A solution of x' = Ax is a vector function x = φ(t) that satisfies the differential equation, and can be viewed as a parametric representation for a curve in the x_1 x_2-plane.
This curve can be regarded as a trajectory traversed by a moving particle whose velocity dx/dt is specified by the differential equation.
The x_1 x_2-plane is called the phase plane, and a representative set of trajectories is a phase portrait.
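A phase portrait is straightforward to draw numerically; a minimal matplotlib sketch (the matrix is the repeated-eigenvalue example from Chapter 7, chosen here purely for illustration):

# Minimal sketch: phase portrait (trajectories of x' = Ax in the x1 x2-plane).
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[1.0, -1.0], [1.0, 3.0]])

x1, x2 = np.meshgrid(np.linspace(-2, 2, 25), np.linspace(-2, 2, 25))
u = A[0, 0]*x1 + A[0, 1]*x2     # dx1/dt at each grid point
v = A[1, 0]*x1 + A[1, 1]*x2     # dx2/dt at each grid point

plt.streamplot(x1, x2, u, v, density=1.2)
plt.xlabel('x1'); plt.ylabel('x2')
plt.title("Phase portrait of x' = Ax")
plt.show()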
Characterizing Equation by Trajectory Pattern
In analyzing the system x' = Ax, we must consider several cases, depending on the nature of the eigenvalues of A.
These cases also occurred in Sections 7.5 through 7.8, where we were primarily interested in finding a convenient formula for the general solution.
Now our main goal is to characterize the differential equation according to the geometric pattern formed by its trajectories.
In each case we discuss the behavior of the trajectories in general and illustrate it with an example.
It is important to become familiar with the types of behavior that the trajectories have for each case, as they are the basic ingredients of the qualitative theory of differential equations.
Case 1: Real Unequal Eigenvalues
of the Same Sign (1 of 3)
When the eigenvalues r_1 and r_2 are both positive or both negative, the general solution of x' = Ax is
\[ x = c_1 \xi^{(1)} e^{r_1 t} + c_2 \xi^{(2)} e^{r_2 t}. \]
Suppose first that r_1 < r_2 < 0, and that the eigenvectors ξ^(1) and ξ^(2) are as shown below.
It follows that x → 0 as t → ∞ for all solutions x, regardless of the values of c_1 and c_2.
Case 1: Nodal Sink (2 of 3)
If the solution starts at an initial point on the line through ξ^(1), then c_2 = 0 and the solution remains on this line for all t. Similarly if the initial point is on the line through ξ^(2).
The solution can be rewritten as
\[ x = c_1 \xi^{(1)} e^{r_1 t} + c_2 \xi^{(2)} e^{r_2 t} = e^{r_2 t} \left[ c_1 \xi^{(1)} e^{(r_1 - r_2)t} + c_2 \xi^{(2)} \right]. \]
Since r_1 − r_2 < 0, for c_2 ≠ 0 the term c_1 ξ^(1) e^{(r_1 − r_2)t} is negligible compared to c_2 ξ^(2) for large t.
Thus all solutions are tangent to ξ^(2) at the critical point x = 0, except for solutions that start exactly on the line through ξ^(1).
This type of critical point is called a node or nodal sink.
Case 1: Nodal Source (3 of 3)
The phase portrait along with several graphs of x_1 versus t are given below. The behavior of x_2 versus t is similar.
If 0 < r_2 < r_1, then the trajectories will have the same pattern as in figure (a) below, but the direction of motion will be away from the critical point at the origin. In this case the critical point is again called a node or a nodal source.
Case 2: Real Eigenvalues
of Opposite Sign (1 of 3)
Suppose now that r_1 > 0 and r_2 < 0, with general solution
\[ x = c_1 \xi^{(1)} e^{r_1 t} + c_2 \xi^{(2)} e^{r_2 t}, \]
and corresponding eigenvectors ξ^(1) and ξ^(2) as shown below.
If the solution starts at an initial point on the line through ξ^(1), then c_2 = 0 and the solution remains on this line for all t. Also, since r_1 > 0, it follows that ||x|| → ∞ as t → ∞.
Similarly, if the initial point is on the line through ξ^(2), then ||x|| → 0 as t → ∞ since r_2 < 0.
Solutions starting at other initial points have trajectories as shown.
Case 2: Saddle Point (2 of 3)
For our general solution
\[ x = c_1 \xi^{(1)} e^{r_1 t} + c_2 \xi^{(2)} e^{r_2 t}, \qquad r_1 > 0, \; r_2 < 0, \]
the positive exponential term is dominant for large t, so all solutions approach infinity asymptotic to the line determined by the eigenvector ξ^(1) corresponding to r_1 > 0.
The only solutions that approach the critical point at the origin are those that start on the line determined by ξ^(2).
This type of critical point is called a saddle point.
Case 2: Graphs of x_1 versus t (3 of 3)
The phase portrait along with several graphs of x_1 versus t are given below.
For certain initial conditions, the positive exponential term is absent from the solution, so x_1 → 0 as t → ∞.
For all other initial conditions the positive exponential term eventually dominates and causes x_1 to become unbounded.
The behavior of x_2 versus t is similar.
Case 3: Equal Eigenvalues (1 of 5)
Suppose now that r_1 = r_2 = r. We consider the case in which the repeated eigenvalue r is negative. If r is positive, then the trajectories are similar but the direction of motion is reversed.
There are two subcases, depending on whether r has two linearly independent eigenvectors or only one.
If the two eigenvectors ξ^(1) and ξ^(2) are linearly independent, then the general solution is
\[ x = c_1 \xi^{(1)} e^{rt} + c_2 \xi^{(2)} e^{rt}. \]
The ratio x_2/x_1 is independent of t, but depends on the components of ξ^(1) and ξ^(2) and on c_1 and c_2.
A phase portrait is given on the next slide.
Case 3: Star Node (2 of 5)
The general solution is
\[ x = c_1 \xi^{(1)} e^{rt} + c_2 \xi^{(2)} e^{rt}, \qquad r < 0. \]
Thus every trajectory lies on a line through the origin, as seen in the phase portrait below. Several graphs of x_1 versus t are given below as well, with the case of x_2 versus t similar.
The critical point at the origin is called a proper node, or a star point.
Case 3: Equal Eigenvalues (3 of 5)
If the repeated eigenvalue r has only one linearly independent eigenvector ξ, then from Section 7.8 the general solution is
\[ x = c_1 \xi e^{rt} + c_2 \left( \xi t e^{rt} + \eta e^{rt} \right). \]
For large t, the dominant term is c_2 ξte^{rt}. Thus every trajectory approaches the origin tangent to the line through the eigenvector ξ.
Similarly, for large negative t the dominant term is again c_2 ξte^{rt}, and hence every trajectory is asymptotic to a line parallel to the eigenvector ξ.
The orientation of the trajectories depends on the relative positions of ξ and η, as we will see.
Case 3: Improper Node (4 of 5)
We can rewrite the general solution as
\[ x = e^{rt} \left[ (c_1 \xi + c_2 \eta) + c_2 \xi t \right] = e^{rt} y, \qquad y = (c_1 \xi + c_2 \eta) + c_2 \xi t. \]
Note that y determines the direction of x, whereas the scalar quantity e^{rt} affects only the magnitude of x.
For fixed values of c_1 and c_2, the expression for y is a vector equation of the line through the point c_1 ξ + c_2 η and parallel to ξ.
Using this fact, solution trajectories can be sketched for given coefficients c_1 and c_2. See the phase portrait below.
When a double eigenvalue has only one linearly independent eigenvector, the critical point is called an improper or degenerate node.
Case 3: Phase Portraits (5 of 5)
The phase portrait is given in figure (a), along with several graphs of x_1 versus t in figure (b).
When the relative orientation of ξ and η is reversed, the phase portrait given in figure (c) is obtained.
Case 4: Complex Eigenvalues (1 of 5)
Suppose the eigenvalues are λ ± iμ, where λ and μ are real, with λ ≠ 0 and μ > 0.
It is possible to write down the general solution in terms of eigenvalues and eigenvectors, as shown in Section 7.6. However, we proceed in a different way here.
Systems having eigenvalues λ ± iμ are typified by
\[ x' = \begin{pmatrix} \lambda & \mu \\ -\mu & \lambda \end{pmatrix} x, \qquad\text{i.e.}\qquad x_1' = \lambda x_1 + \mu x_2, \quad x_2' = -\mu x_1 + \lambda x_2. \]
We introduce the polar coordinates r, θ given by
\[ r^2 = x_1^2 + x_2^2, \qquad \tan\theta = x_2/x_1. \]
Case 4: Polar Equations (2 of 5)
Differentiating the polar equations
\[ r^2 = x_1^2 + x_2^2, \qquad \tan\theta = x_2/x_1 \]
with respect to t, we have
\[ 2r\frac{dr}{dt} = 2x_1\frac{dx_1}{dt} + 2x_2\frac{dx_2}{dt}, \qquad \sec^2\theta\,\frac{d\theta}{dt} = \frac{x_1 (dx_2/dt) - x_2 (dx_1/dt)}{x_1^2}, \]
or, using sec²θ = r²/x_1²,
\[ r r' = x_1 x_1' + x_2 x_2', \qquad r^2 \theta' = x_1 x_2' - x_2 x_1'. \]
Substituting
\[ x_1' = \lambda x_1 + \mu x_2, \qquad x_2' = -\mu x_1 + \lambda x_2 \]
into these derivative equations, we obtain
\[ r' = \lambda r, \qquad \theta' = -\mu. \]
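The polar computation can be verified symbolically; a minimal sympy sketch (my own check, not from the slides):

# Minimal sketch: verify r' = lam*r and theta' = -mu for
# x1' = lam*x1 + mu*x2,  x2' = -mu*x1 + lam*x2.
import sympy as sp

t = sp.symbols('t')
lam, mu = sp.symbols('lam mu', real=True)
x1 = sp.Function('x1')(t)
x2 = sp.Function('x2')(t)

dx1 = lam*x1 + mu*x2
dx2 = -mu*x1 + lam*x2

r2 = x1**2 + x2**2
print(sp.simplify(2*x1*dx1 + 2*x2*dx2 - 2*lam*r2))   # 0  =>  r' = lam*r
print(sp.simplify((x1*dx2 - x2*dx1)/r2 + mu))        # 0  =>  theta' = -mu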
Case 4: Spiral Point (3 of 5)
Solving the differential equations
\[ r' = \lambda r, \qquad \theta' = -\mu, \]
we have
\[ r = c e^{\lambda t}, \qquad \theta = -\mu t + \theta_0 \qquad (c \ge 0). \]
These equations are parametric equations in polar coordinates of the solution trajectories to our system x' = Ax.
Since μ > 0, θ decreases as t increases, so the direction of motion on a trajectory is clockwise.
If λ < 0, then r → 0 as t → ∞, while r → ∞ if λ > 0.
Thus the trajectories are spirals, which approach or recede from the origin depending on the sign of λ, and the critical point is called a spiral point in this case.
Case 4: Phase Portraits (4 of 5)
The phase portrait along with several graphs of x_1 versus t are given below, for the system
\[ x' = \begin{pmatrix} \lambda & \mu \\ -\mu & \lambda \end{pmatrix} x, \qquad r = c e^{\lambda t}, \quad \theta = -\mu t + \theta_0, \quad \lambda < 0. \]
Frequently the terms spiral sink and spiral source are used to refer to spiral points whose trajectories approach, or depart from, the critical point.
Case 4: General System (5 of 5)
It can be shown that for any system with complex eigenvalues λ ± iμ, where λ ≠ 0, the trajectories are always spirals.
They are directed inward or outward, respectively, depending on whether λ is negative or positive.
The spirals may be elongated and skewed with respect to the coordinate axes, and the direction of motion may be either clockwise or counterclockwise. See the text for more details.
Case 5: Pure Imaginary Eigenvalues (1 of 2)
Suppose the eigenvalues are λ ± iμ, where λ = 0 and μ is real.
Systems having eigenvalues ±iμ are typified by
\[ x' = \begin{pmatrix} 0 & \mu \\ -\mu & 0 \end{pmatrix} x. \]
As in Case 4, using polar coordinates r, θ leads to
\[ r = c, \qquad \theta = -\mu t + \theta_0. \]
The trajectories are circles with center at the origin, which are traversed clockwise if μ > 0 and counterclockwise if μ < 0.
A complete circuit about the origin occurs in a time interval of length 2π/μ, so all solutions are periodic with period 2π/μ.
The critical point is called a center.
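Constancy of r along trajectories can also be seen numerically; a minimal sketch with μ = 2 (an illustrative choice of mine):

# Minimal sketch: for x' = [[0, mu], [-mu, 0]] x the radius is conserved,
# so trajectories are circles about the origin.
import numpy as np
from scipy.integrate import solve_ivp

mu = 2.0
A = np.array([[0.0, mu], [-mu, 0.0]])

sol = solve_ivp(lambda t, x: A @ x, (0.0, 10.0), [1.0, 0.5],
                dense_output=True, rtol=1e-10)

ts = np.linspace(0.0, 10.0, 200)
r = np.hypot(*sol.sol(ts))          # sqrt(x1^2 + x2^2) along the orbit
print(r.max() - r.min() < 1e-6)     # True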
Case 5: Phase Portraits (2 of 2)
In general, when the eigenvalues are pure imaginary, it is possible to show that the trajectories are ellipses centered at the origin.
The phase portrait along with several graphs of x_1 versus t are given below.
Behavior of Individual Trajectories
As t → ∞, each trajectory does one of the following:
- approaches infinity;
- approaches the critical point x = 0;
- repeatedly traverses a closed curve, corresponding to a periodic solution, that surrounds the critical point.
The trajectories never intersect, and exactly one trajectory passes through each point (x_0, y_0) in the phase plane.
The only solution passing through the origin is x = 0. Other solutions may approach (0, 0), but never reach it.
Behavior of Trajectory Sets
As t → ∞, one of the following cases holds:
- All trajectories approach the critical point x = 0. This is the case when the eigenvalues are real and negative or complex with negative real part. The origin is either a nodal or a spiral sink.
- All trajectories remain bounded but do not approach the origin. This occurs when the eigenvalues are pure imaginary. The origin is a center.
- Some trajectories, and possibly all trajectories except x = 0, tend to infinity. This occurs when at least one of the eigenvalues is positive or has a positive real part. The origin is a nodal source, a spiral source, or a saddle point.
Summary Table
The following table summarizes the information we have
derived about our 2 x 2 system x' = Ax, as well as the stability
of the equilibrium solution x = 0.
Eigenvalues                    Type of Critical Point       Stability
r_1 > r_2 > 0                  Node                         Unstable
r_1 < r_2 < 0                  Node                         Asymptotically stable
r_2 < 0 < r_1                  Saddle point                 Unstable
r_1 = r_2 > 0                  Proper or improper node      Unstable
r_1 = r_2 < 0                  Proper or improper node      Asymptotically stable
r_1, r_2 = λ ± iμ (λ > 0)      Spiral point                 Unstable
r_1, r_2 = λ ± iμ (λ < 0)      Spiral point                 Asymptotically stable
r_1 = iμ, r_2 = −iμ            Center                       Stable
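The table translates directly into code; a minimal sketch that classifies the critical point of a 2 x 2 system x' = Ax from the eigenvalues of A (generic, nonsingular cases only; the function name and tolerances are my own):

# Minimal sketch: classify the critical point at the origin for x' = Ax.
import numpy as np

def classify(A, tol=1e-9):
    r1, r2 = np.linalg.eigvals(A)
    if abs(r1.imag) > tol:                       # complex pair lam +- i*mu
        if abs(r1.real) < tol:
            return "center (stable)"
        return "spiral point (unstable)" if r1.real > 0 \
               else "spiral point (asymptotically stable)"
    a, b = sorted((r1.real, r2.real))
    if a < 0 < b:
        return "saddle point (unstable)"
    kind = "proper or improper node" if abs(a - b) < tol else "node"
    return "%s (%s)" % (kind, "unstable" if a > 0 else "asymptotically stable")

print(classify(np.array([[-1.0, 0.0], [0.0, -2.0]])))  # node (asymptotically stable)
print(classify(np.array([[0.0, 2.0], [-2.0, 0.0]])))   # center (stable)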