Vous êtes sur la page 1sur 190

Ch 7.

1: Introduction to Systems of First


Order Linear Equations
A system of simultaneous first order ordinary differential
equations has the general form
x1 = F1 (t , x1 , x2 , K xn )
x2 = F2 (t , x1 , x2 , K xn )
M
xn = Fn (t , x1 , x2 , K xn )
where each xk is a function of t. If each Fk is a linear
function of x1, x2, , xn, then the system of equations is said
to be linear, otherwise it is nonlinear.
Systems of higher order differential equations can similarly
be defined.
Example 1
The motion of a spring-mass system from Section 3.8 was
described by the equation
u (t ) + 16 u (t ) + 192u (t ) = 0
This second order equation can be converted into a system of
first order equations by letting x1 = u and x2 = u'. Thus
x1 = x2
x2 + 16 x2 + 192 x1 = 0
or
x1 = x2
x2 = 16 x2 192 x1
Nth Order ODEs and Linear 1st Order Systems
The method illustrated in previous example can be used to
transform an arbitrary nth order equation
(
y ( n ) = F t , y, y, y, K , y ( n 1) )
into a system of n first order equations, first by defining
x1 = y, x2 = y, x3 = y, K , xn = y ( n 1)
Then
x1 = x2
x2 = x3
M
xn 1 = xn
xn = F (t , x1 , x2 , K xn )
Solutions of First Order Systems
A system of simultaneous first order ordinary differential
equations has the general form
x1 = F1 (t , x1 , x2 , K xn )
M
xn = Fn (t , x1 , x2 , K xn ).
It has a solution on I: < t < if there exists n functions
x1 = 1 (t ), x2 = 2 (t ),K , xn = n (t )
that are differentiable on I and satisfy the system of
equations at all points t in I.
Initial conditions may also be prescribed to give an IVP:
x1 (t0 ) = x10 , x2 (t0 ) = x20 , K , xn (t0 ) = xn0
Example 2
The equation
y + y = 0, 0 < t < 2
can be written as system of first order equations by letting
x1 = y and x2 = y'. Thus
x1 = x2
x2 = x1
A solution to this system is
x1 = sin(t ), x2 = cos(t ), 0 < t < 2
which is a parametric description
for the unit circle.
Theorem 7.1.1
Suppose F1,, Fn and F1/x1,, F1/xn,, Fn/ x1,,
Fn/xn, are continuous in the region R of t x1 x2xn-space
defined by < t < , 1 < x1 < 1, , n < xn < n, and let the
point
(t , x , x ,K, x )
0
0
1
0
2
0
n

be contained in R. Then in some interval (t0 - h, t0 + h) there


exists a unique solution
x1 = 1 (t ), x2 = 2 (t ),K , xn = n (t )
that satisfies the IVP. x1 = F1 (t , x1 , x2 , K xn )
x2 = F2 (t , x1 , x2 , K xn )
M
xn = Fn (t , x1 , x2 , K xn )
Linear Systems
If each Fk is a linear function of x1, x2, , xn, then the
system of equations has the general form
x1 = p11 (t ) x1 + p12 (t ) x2 + K + p1n (t ) xn + g1 (t )
x2 = p21 (t ) x1 + p22 (t ) x2 + K + p2 n (t ) xn + g 2 (t )
M
xn = pn1 (t ) x1 + pn 2 (t ) x2 + K + pnn (t ) xn + g n (t )

If each of the gk(t) is zero on I, then the system is


homogeneous, otherwise it is nonhomogeneous.
Theorem 7.1.2
Suppose p11, p12,, pnn, g1,, gn are continuous on an
interval I: < t < with t0 in I, and let
x10 , x20 , K , xn0
prescribe the initial conditions. Then there exists a unique
solution
x1 = 1 (t ), x2 = 2 (t ),K , xn = n (t )
that satisfies the IVP, and exists throughout I.

x1 = p11 (t ) x1 + p12 (t ) x2 + K + p1n (t ) xn + g1 (t )


x2 = p21 (t ) x1 + p22 (t ) x2 + K + p2 n (t ) xn + g 2 (t )
M
xn = pn1 (t ) x1 + pn 2 (t ) x2 + K + pnn (t ) xn + g n (t )
Ch 7.2: Review of Matrices
For theoretical and computation reasons, we review results
of matrix theory in this section and the next.
A matrix A is an m x n rectangular array of elements,
arranged in m rows and n columns, denoted
a11 a12 L a1n

A = ( ai j )=
a21 a22 L a2 n
M M O M

a L amn
m1 am 2

Some examples of 2 x 2 matrices are given below:


1 2 1 3 1 3 2i
A = , B = , C =
3 4 2 4 4 + 5i 6 7i
Transpose
The transpose of A = (aij) is AT = (aji).
a11 a12 L a1n a11 a21 L am1

a21 a22 L a2 n a12 a22 L am 2
A= A =
T

M M O M M M O M

a L amn a a2 n L amn
m1 am 2 1n

For example,
1 4
1 2 1 3 1 2 3
A = A =
T
, B = B = 2 5
T

3 4 2 4 4 5 6 3 6

Conjugate
The conjugate of A = (aij) is A = (aij).
a11 a12 L a1n a11 a12 L a1n

a21 a22 L a2 n a21 a22 L a2 n
A= A=
M M O M M M O M

a L amn a L amn
m1 am 2 m1 am 2

For example,

1 2 + 3i 1 2 3i
A = A =
3 4i 4 3 + 4i 4
Adjoint
The adjoint of A is AT , and is denoted by A*
a11 a12 L a1n a11 a21 L am1

a21 a22 L a2 n a12 a22 L am 2
A= A =
*

M M O M M M O M

a L amn a a2 n L amn
m1 am 2 1n

For example,

1 2 + 3i 1 3 + 4i
A = A =
*

3 4i 4 2 3i 4
Square Matrices
A square matrix A has the same number of rows and
columns. That is, A is n x n. In this case, A is said to have
order n.
a11 a12 L a1n

a21 a22 L a2 n
A=
M M O M

a an 2 L ann
n1

For example,
1 2 3
1 2
A = , B = 4 5 6
3 4 7 8 9

Vectors
A column vector x is an n x 1 matrix. For example,
1

x = 2
3

A row vector x is a 1 x n matrix. For example,

y = (1 2 3)

Note here that y = xT, and that in general, if x is a column


vector x, then xT is a row vector.
The Zero Matrix
The zero matrix is defined to be 0 = (0), whose dimensions
depend on the context. For example,
0 0
0 0 0 0 0
0 = , 0 = , 0 = 0 0 , K
0 0 0 0 0 0 0

Matrix Equality
Two matrices A = (aij) and B = (bij) are equal if aij = bij for
all i and j. For example,

1 2 1 2
A = , B = A = B
3 4 3 4
Matrix Scalar Multiplication
The product of a matrix A = (aij) and a constant k is defined
to be kA = (kaij). For example,

1 2 3 5 10 15
A = 5A =
4 5 6 20 25 30
Matrix Addition and Subtraction
The sum of two m x n matrices A = (aij) and B = (bij) is
defined to be A + B = (aij + bij). For example,
1 2 5 6 6 8
A = , B = A + B =
3 4 7 8 10 12

The difference of two m x n matrices A = (aij) and B = (bij)


is defined to be A - B = (aij - bij). For example,
1 2 5 6 4 4
A = , B = A B =
3 4 7 8 4 4
Matrix Multiplication
The product of an m x n matrix A = (aij) and an n x r
matrix B = (bij) is defined to be the matrix C = (cij), where
n
cij = aik bkj
k =1

Examples (note AB does not necessarily equal BA):


1 2 1 3 1 + 4 3 + 8 5 11
A=
, B = 2 4 AB = 3 + 8 9 + 16 = 11 25
3 4
1 + 9 2 + 12 10 14
BA = =
2 + 12 4 + 16 14 20
3 0
1 2 3 3+ 2 + 0 0 + 4 3 5 1
C = , D = 1 2 CD = =
4 5 6 0 1 12 + 5 + 0 0 + 10 6 17 4

Vector Multiplication
The dot product of two n x 1 vectors x & y is defined as
n
xT y = xi y j
k =1

The inner product of two n x 1 vectors x & y is defined as


n
(x,y ) = x T
y = xi y j
k =1

Example:
1 1

x = 2 , y = 2 3i xT y = (1)(1) + (2)(2 3i ) + (3i )(5 + 5i ) = 12 + 9i
3i 5 + 5i

(x, y ) = xT y = (1)(1) + (2)(2 + 3i ) + (3i )(5 5i ) = 18 + 21i
Vector Length
The length of an n x 1 vector x is defined as
1/ 2 1/ 2
n
n

x = (x,x ) = xk xk = | xk
1/ 2
|2
k =1 k =1
Note here that we have used the fact that if x = a + bi, then
x x = (a + bi )(a bi ) = a 2 + b 2 = x
2

Example:
1

x = 2 x = (x, x ) = (1)(1) + (2)(2) + (3 + 4i )(3 4i )
1/ 2

3 + 4i

= 1 + 4 + (9 + 16) = 30
Orthogonality
Two n x 1 vectors x & y are orthogonal if (x,y) = 0.
Example:
1 11

x = 2 y = 4 (x, y ) = (1)(11) + (2)(4) + (3)(1) = 0
3 1

Identity Matrix
The multiplicative identity matrix I is an n x n matrix
given by
1 0 L 0

0 1 L 0
I=
M M O M

0 L 1
0

For any square matrix A, it follows that AI = IA = A.


The dimensions of I depend on the context. For example,
1 0 0 1 2 3 1 2 3
1 2 1 0 1 2
AI = = , IB = 0 1 0 4 5 6 = 4 5 6
3 4 0 1 3 4 0 0 1 7 8 9 7 8 9

Inverse Matrix
A square matrix A is nonsingular, or invertible, if there
exists a matrix B such that that AB = BA = I. Otherwise A
is singular.
The matrix B, if it exists, is unique and is denoted by A-1
and is called the inverse of A.
It turns out that A-1 exists iff detA 0, and A-1 can be found
using row reduction (also called Gaussian elimination) on
the augmented matrix (A|I), see example on next slide.
The three elementary row operations:
Interchange two rows.
Multiply a row by a nonzero scalar.
Add a multiple of one row to another row.
Example: Finding the Identity Matrix (1 of 2)

Use row reduction to find the inverse of the matrix A below,


if it exists. 0 1 2

A=1 0 3
4 3 8

Solution: If possible, use elementary row operations to
reduce (A|I),
0 1 2 1 0 0

(A I ) = 1 0 3 0 1 0 ,
4 3 8 0 0 1

such that the left side is the identity matrix, for then the
right side will be A-1. (See next slide.)
Example: Finding the Identity Matrix (2 of 2)
0 1 2 1 0 0 1 0 3 0 1 0

(A I ) = 1 0 3 0 1 0 0 1 2 1 0 0
4 3 8 0 0 1 4 3 8 0 0 1

1 0 3 0 1 0 1 0 3 0 1 0

0 1 2 1 0 0 0 1 2 1 0 0
0 3 4 0 4 1 0 0 2 3 4 1

1 0 3 0 1 0 1 0 0 9 / 2 7 3 / 2

0 1 0 2 4 1 0 1 0 2 4 1
0 0 2 3 4 1 0 0 1 3/ 2 2 1 / 2

Thus 9/ 2 7 3 / 2
1

A = 2 4 1
3/ 2 2 1 / 2

Matrix Functions
The elements of a matrix can be functions of a real variable.
In this case, we write
x1 (t ) a11 (t ) a12 (t ) L a1n (t )

x2 (t ) a21 (t ) a22 (t ) L a2 n (t )
x(t ) = , A(t ) =
M M M O M

x (t ) a (t ) a (t ) L amn (t )
m m1 m2

Such a matrix is continuous at a point, or on an interval


(a, b), if each element is continuous there. Similarly with
differentiation and integration:
dA daij b
b

=
dt dt
, a A (t ) dt =
a
aij (t ) dt

Example & Differentiation Rules
Example:
3t 2 sin t dA 6t cos t

A(t ) = = ,
dt sin t 0
cos t 4
3 0
A(t )dt =

0
1 4

Many of the rules from calculus apply in this setting. For


example: d (CA ) dA
=C , where C is a constant matrix
dt dt
d (A + B ) dA d B
= +
dt dt dt
d (AB ) dA dB
= B + A
dt dt dt
Ch 7.3: Systems of Linear Equations,
Linear Independence, Eigenvalues
A system of n linear equations in n variables,
a1,1 x1 + a1, 2 x2 + L + a1,n xn = b1
a2,1 x1 + a2, 2 x2 + L + a2,n xn = b2
M
an ,1 x1 + an , 2 x2 + L + an ,n xn = bn ,
can be expressed as a matrix equation Ax = b:
a1,1 a1, 2 L a1,n x1 b1

a2,1 a2, 2 L a2,n x2 b2
M =
M O M M M

a
n ,1 an , 2 L an ,n xn bn

If b = 0, then system is homogeneous; otherwise it is


nonhomogeneous.
Nonsingular Case
If the coefficient matrix A is nonsingular, then it is
invertible and we can solve Ax = b as follows:
Ax = b A 1Ax = A 1b Ix = A 1b x = A 1b
This solution is therefore unique. Also, if b = 0, it follows
that the unique solution to Ax = 0 is x = A-10 = 0.
Thus if A is nonsingular, then the only solution to Ax = 0 is
the trivial solution x = 0.
Example 1: Nonsingular Case (1 of 3)
From a previous example, we know that the matrix A below
is nonsingular with inverse as given.
0 1 2 9/ 2 7 3 / 2
1

A=1 0 3 , A = 2 4 1
4 3 8 3/ 2 2
1 / 2
Using the definition of matrix multiplication, it follows that
the only solution of Ax = 0 is x = 0:
9/ 2 7 3 / 2 0 0
1

x = A 0= 2 4 1 0 = 0
3/ 2 2 0 0
1 / 2
Example 1: Nonsingular Case (2 of 3)
Now lets solve the nonhomogeneous linear system Ax = b
below using A-1:
0 x1 + x2 + 2 x3 = 2
1x1 + 0 x2 + 3 x3 = 2
4 x1 3 x2 + 8 x3 = 0

This system of equations can be written as Ax = b, where


0 1 2 x1 2

A=1 0 3 , x = x2 , b = 2
4 3 8 x 0
3
Then
9/ 2 7 3 / 2 2 23

x = A 1b = 2 4 1 2 = 12
3/ 2 2 1 / 2 0 7

Example 1: Nonsingular Case (3 of 3)
Alternatively, we could solve the nonhomogeneous linear
system Ax = b below using row reduction.
0 x1 + x2 + 2 x3 = 2
1x1 + 0 x2 + 3 x3 = 2
4 x1 3 x2 + 8 x3 = 0
To do so, form the augmented matrix (A|b) and reduce,
using elementary row operations.
0 1 2 2 1 0 3 2 1 0 3 2

(A b ) = 1 0 3 2 0 1 2 2 0 1 2 2
4 3 8 0 4 3 8 0 0 3 4 8

1 0 3 2 1 0 3 2 x1 + 3 x3 = 2 23

0 1 2 2 0 1 2 2 x2 + 2 x3 = 2 x = 12
0 0 2 14 0 0 1 7 = 7 7
x3
Singular Case
If the coefficient matrix A is singular, then A-1 does not
exist, and either a solution to Ax = b does not exist, or there
is more than one solution (not unique).
Further, the homogeneous system Ax = 0 has more than one
solution. That is, in addition to the trivial solution x = 0,
there are infinitely many nontrivial solutions.
The nonhomogeneous case Ax = b has no solution unless
(b, y) = 0, for all vectors y satisfying A*y = 0, where A* is
the adjoint of A.
In this case, Ax = b has solutions (infinitely many), each of
the form x = x(0) + , where x(0) is a particular solution of
Ax = b, and is any solution of Ax = 0.
Example 2: Singular Case (1 of 3)
Solve the nonhomogeneous linear system Ax = b below
using row reduction.
1x1 2 x2 1x3 = 1
1x1 + 5 x2 + 6 x3 = 0
5 x1 4 x2 + 5 x3 = 1
To do so, form the augmented matrix (A|b) and reduce,
using elementary row operations.
1 2 1 1 1 2 1 1 1 2 1 1

(A b ) = 1 5 6 0 0 3 5 1 0 3 5 1
5 4 5 1 0 6 10 6 0 3 5 3

1 2 1 1 1 2 1 1 x1 2 x2 x3 = 1

0 3 5 1 0 3 5 1 3 x2 + 5 x3 = 1 no soln
0
0 0 4 0 0 0 1 0 x3 = 1
Example 2: Singular Case (2 of 3)
Solve the nonhomogeneous linear system Ax = b below
using row reduction.
1x1 2 x2 1x3 = b1
1x1 + 5 x2 + 6 x3 = b2
5 x1 4 x2 + 5 x3 = b3
Reduce the augmented matrix (A|b) as follows:
1 2 1 b1 1 2 1 b1

(A b ) = 1 5 6 b2 0 3 5 b2 + b1
5 4 5 b 0 6 10 b3 5b1
3

1 2 1 b1 1 2 1 b1

0 3 5 b2 + b1 0 3 5 b2 + b1 b3 2b2 7b1 = 0
1 5 1 7
0 3 5 b b1 0 0 0 b b b1
2 2
3 3 2
2 2
Example 2: Singular Case (3 of 3)
From the previous slide, we require
b3 2b2 7b1 = 0
Suppose
b1 = 1, b2 = 1, b3 = 5
Then the reduced augmented matrix (A|b) becomes:

1 2 1 b1 x1 2 x2 1x3 = 1

0 3 5 b2 + b1 3 x2 + 5 x3 = 0
1 7 0 x3 = 0
0 0 0 b3 b2 b1
2 2
1 7 x3 / 3 1 7 / 3 1 7

x = 5 x3 / 3 x = 0 + x3 5 / 3 = x = 0 + c 5 = x ( 0 ) +
x3 0 1 0 3

Linear Dependence and Independence
A set of vectors x(1), x(2),, x(n) is linearly dependent if
there exists scalars c1, c2,, cn, not all zero, such that
c1x (1) + c2 x ( 2 ) + L + cn x ( n ) = 0

If the only solution of


c1x (1) + c2 x ( 2 ) + L + cn x ( n ) = 0
is c1= c2 = = cn = 0, then x(1), x(2),, x(n) is linearly
independent.
Example 3: Linear Independence (1 of 2)
Determine whether the following vectors are linear
dependent or linearly independent.
0 1 2
( 2 ) ( 3)
x (1) = 1, x = 0 , x = 3
4 3 8

We need to solve
c1x (1) + c2 x ( 2 ) + c3 x (3) = 0

or
0 1 2 0 0 1 2 c1 0

c1 1 + c2 0 + c 3 = 0 1 0 3 c2 = 0
4 3 8 0 4 3 8 c 0
3
Example 3: Linear Independence (2 of 2)
We thus reduce the augmented matrix (A|b), as before.
0 1 2 0 1 0 3 0

(A b ) = 1 0 3 0 0 1 2 0
4 3 8 0 0 0 1 0

c1 + 3c3 =0 0

c2 + 2c3 = 0 c = 0
c3 = 0 0

Thus the only solution is c1= c2 = = cn = 0, and therefore


the original vectors are linearly independent.
Example 4: Linear Dependence (1 of 2)
Determine whether the following vectors are linear
dependent or linearly independent.
1 2 1
( 2 ) ( 3)
x (1) = 1, x = 5 , x = 6
5 4 5

We need to solve
c1x (1) + c2 x ( 2 ) + c3 x (3) = 0

or
1 2 1 0 1 2 1 c1 0

c1 1 + c2 5 + c3 6 = 0 1 5 6 c2 = 0
5 4 5 0 5 4 5 c 0
3
Example 4: Linear Dependence (2 of 2)
We thus reduce the augmented matrix (A|b), as before.
1 2 1 0 1 2 1 0

(A b ) = 1 5 6 0 0 3 5 0
5 4 5 0 0 0 0 0

c1 2c2 1c3 = 0 7c3 / 3 7

3c2 + 5c3 = 0 c = 5c3 / 3 c = k 5
0c3 = 0 c3 3

Thus the original vectors are linearly dependent, with


1 2 1 0

7 1 + 5 5 3 6 = 0
5 4 5 0

Linear Independence and Invertibility
Consider the previous two examples:
The first matrix was known to be nonsingular, and its column vectors
were linearly independent.
The second matrix was known to be singular, and its column vectors
were linearly dependent.
This is true in general: the columns (or rows) of A are linearly
independent iff A is nonsingular iff A-1 exists.
Also, A is nonsingular iff detA 0, hence columns (or rows)
of A are linearly independent iff detA 0.
Further, if A = BC, then det(C) = det(A)det(B). Thus if the
columns (or rows) of A and B are linearly independent, then
the columns (or rows) of C are also.
Linear Dependence & Vector Functions
Now consider vector functions x(1)(t), x(2)(t),, x(n)(t), where
x1( k ) (t )
(k )
x2 (t )
(k )
x (t ) = , k = 1, 2, K , n, t I = ( , )
M
x ( k ) (t )
m

As before, x(1)(t), x(2)(t),, x(n)(t) is linearly dependent on I if


there exists scalars c1, c2,, cn, not all zero, such that
c1x (1) (t ) + c2 x ( 2 ) (t ) + L + cn x ( n ) (t ) = 0, for all t I

Otherwise x(1)(t), x(2)(t),, x(n)(t) is linearly independent on I


See text for more discussion on this.
Eigenvalues and Eigenvectors
The eqn. Ax = y can be viewed as a linear transformation
that maps (or transforms) x into a new vector y.
Nonzero vectors x that transform into multiples of
themselves are important in many applications.
Thus we solve Ax = x or equivalently, (A-I)x = 0.
This equation has a nonzero solution if we choose such
that det(A-I) = 0.
Such values of are called eigenvalues of A, and the
nonzero solutions x are called eigenvectors.
Example 5: Eigenvalues (1 of 3)
Find the eigenvalues and eigenvectors of the matrix A.
2 3
A =
3 6

Solution: Choose such that det(A-I) = 0, as follows.


2 3 1 0
det (A I ) = det

3 6 0 0
2 3
= det
3 6
= (2 )( 6 ) (3)(3)
= 2 + 4 21 = ( 3)( + 7 )
= 3, = 7
Example 5: First Eigenvector (2 of 3)
To find the eigenvectors of the matrix A, we need to solve
(A-I)x = 0 for = 3 and = -7.
Eigenvector for = 3: Solve
2 3 3 x1 0 1 3 x1 0
(A I )x = 0 = =
3 6 3 x2 0 3 9 x2 0

by row reducing the augmented matrix:


1 3 0 1 3 0 1 3 0 1x 3 x2 =0
1
3 9 0 3 9 0 0 0 0 0 x2 =0
3 x2 3 3
x =
(1)
= c , c arbitrary choose x =
(1)

x2 1 1
Example 5: Second Eigenvector (3 of 3)
Eigenvector for = -7: Solve
2 + 7 3 x1 0 9 3 x1 0
(A I )x = 0 = =
3 6 + 7 x2 0 3 1 x2 0
by row reducing the augmented matrix:

9 3 0 1 1/ 3 0 1 1/ 3 0 1x + 1 / 3x2 = 0
1
3 1 0 3 1 0 0 0 0 0 x2 = 0
1 / 3 x2 1 / 3 1
x =
( 2)
= c , c arbitrary choose x =
( 2)

x2 1 3
Normalized Eigenvectors
From the previous example, we see that eigenvectors are
determined up to a nonzero multiplicative constant.
If this constant is specified in some particular way, then the
eigenvector is said to be normalized.
For example, eigenvectors are sometimes normalized by
choosing the constant so that ||x|| = (x, x) = 1.
Algebraic and Geometric Multiplicity
In finding the eigenvalues of an n x n matrix A, we solve
det(A-I) = 0.
Since this involves finding the determinant of an n x n
matrix, the problem reduces to finding roots of an nth
degree polynomial.
Denote these roots, or eigenvalues, by 1, 2, , n.
If an eigenvalue is repeated m times, then its algebraic
multiplicity is m.
Each eigenvalue has at least one eigenvector, and a
eigenvalue of algebraic multiplicity m may have q linearly
independent eigevectors, 1 q m, and q is called the
geometric multiplicity of the eigenvalue.
Eigenvectors and Linear Independence
If an eigenvalue has algebraic multiplicity 1, then it is said
to be simple, and the geometric multiplicity is 1 also.
If each eigenvalue of an n x n matrix A is simple, then A
has n distinct eigenvalues. It can be shown that the n
eigenvectors corresponding to these eigenvalues are linearly
independent.
If an eigenvalue has one or more repeated eigenvalues, then
there may be fewer than n linearly independent eigenvectors
since for each repeated eigenvalue, we may have q < m.
This may lead to complications in solving systems of
differential equations.
Example 6: Eigenvalues (1 of 5)
Find the eigenvalues and eigenvectors of the matrix A.
0 1 1

A = 1 0 1
1 1 0

Solution: Choose such that det(A-I) = 0, as follows.
1 1

det (A I ) = det 1 1
1
1
= 3 + 3 + 2
= ( 2)( + 1) 2
1 = 2, 2 = 1, 2 = 1
Example 6: First Eigenvector (2 of 5)
Eigenvector for = 2: Solve (A-I)x = 0, as follows.

2 1 1 0 1 1 2 0 1 1 2 0

1 2 1 0 1 2 1 0 0 3 3 0
1 2 0 2 1 0 0 3 3 0
1 1
1 1 2 0 1 0 1 0 1x1 1x3 = 0

0 1 1 0 0 1 1 0 1x2 1x3 = 0
0 0 0 0 0 0 0 0 0 x3 = 0

x3 1 1

x (1) = x3 = c 1, c arbitrary choose x = 1
(1)

x 1 1
3
Example 6: 2nd and 3rd Eigenvectors (3 of 5)
Eigenvector for = -1: Solve (A-I)x = 0, as follows.

1 1 1 0 1 1 1 0 1x1 + 1x2 + 1x3 =0



1 1 1 0 0 0 0 0 0 x2 =0
1 1 1 0 0 0 0 0 =0
0 x3
x2 x3 1 1

x = x2 = x2 1 + x3 0 , where x2 , x3
( 2)
arbitrary
x 0 1
3
1 0
( 3)
choose x ( 2 ) = 0 , x = 1
1 1

Example 6: Eigenvectors of A (4 of 5)

Thus three eigenvectors of A are


1 1 0
( 2 ) ( 3)
x (1) = 1, x = 0 , x = 1
1 1 1

where x(2), x(3) correspond to the double eigenvalue = - 1.
It can be shown that x(1), x(2), x(3) are linearly independent.
Hence A is a 3 x 3 symmetric matrix (A = AT ) with 3 real
eigenvalues and 3 linearly independent eigenvectors.
0 1 1

A = 1 0 1
1 1 0

Example 6: Eigenvectors of A (5 of 5)

Note that we could have we had chosen


1 1 1
( 2 ) ( 3)
x (1) = 1, x = 0 , x = 2
1 1 1

Then the eigenvectors are orthogonal, since
(x (1)
) ( ) ( )
, x ( 2 ) = 0, x (1) , x ( 3) = 0, x ( 2 ) , x ( 3) = 0

Thus A is a 3 x 3 symmetric matrix with 3 real eigenvalues


and 3 linearly independent orthogonal eigenvectors.
Hermitian Matrices
A self-adjoint, or Hermitian matrix, satisfies A = A*,
where we recall that A* = AT .
Thus for a Hermitian matrix, aij = aji.
Note that if A has real entries and is symmetric (see last
example), then A is Hermitian.
An n x n Hermitian matrix A has the following properties:
All eigenvalues of A are real.
There exists a full set of n linearly independent eigenvectors of A.
If x(1) and x(2) are eigenvectors that correspond to different
eigenvalues of A, then x(1) and x(2) are orthogonal.
Corresponding to an eigenvalue of algebraic multiplicity m, it is
possible to choose m mutually orthogonal eigenvectors, and hence A
has a full set of n linearly independent orthogonal eigenvectors.
Ch 7.4: Basic Theory of Systems of First
Order Linear Equations
The general theory of a system of n first order linear equations
x1 = p11 (t ) x1 + p12 (t ) x2 + K + p1n (t ) xn + g1 (t )
x2 = p21 (t ) x1 + p22 (t ) x2 + K + p2 n (t ) xn + g 2 (t )
M
xn = pn1 (t ) x1 + pn 2 (t ) x2 + K + pnn (t ) xn + g n (t )
parallels that of a single nth order linear equation.
This system can be written as x' = P(t)x + g(t), where
x1 (t ) g1 (t ) p11 (t ) p12 (t ) L p1n (t )

x2 (t ) g 2 (t ) p21 (t ) p22 (t ) L p2 n (t )
x(t ) = , g(t ) = , P(t ) =
M M M M O M

x (t ) g (t ) p (t ) pn 2 (t ) L pnn (t )
n n n1
Vector Solutions of an ODE System
A vector x = (t) is a solution of x' = P(t)x + g(t) if the
components of x,
x1 = 1 (t ), x2 = 2 (t ),K , xn = n (t ),
satisfy the system of equations on I: < t < .
For comparison, recall that x' = P(t)x + g(t) represents our
system of equations
x1 = p11 (t ) x1 + p12 (t ) x2 + K + p1n (t ) xn + g1 (t )
x2 = p21 (t ) x1 + p22 (t ) x2 + K + p2 n (t ) xn + g 2 (t )
M
xn = pn1 (t ) x1 + pn 2 (t ) x2 + K + pnn (t ) xn + g n (t )
Assuming P and g continuous on I, such a solution exists by
Theorem 7.1.2.
Example 1
Consider the homogeneous equation x' = P(t)x below, with
the solutions x as indicated.
1 1 e 3t 1 3t
x = x ; x(t ) = 3t = e
4 1 2e 2
To see that x is a solution, substitute x into the equation and
perform the indicated operations:

1 1 1 1 e3t 3e 3t e3t
x = 3t = 3t = 3 3t = x
4 1 4 1 2e 6e 2e
Homogeneous Case; Vector Function Notation
As in Chapters 3 and 4, we first examine the general
homogeneous equation x' = P(t)x.
Also, the following notation for the vector functions
x(1), x(2),, x(k), will be used:
x11 (t ) x12 (t ) x1n (t )

x21 (t ) ( 2 ) x22 (t ) x2 n (t )
x (t ) =
(1)
, x (t ) = , K, x (t ) =
(k )
,K
M M M

x (t ) x (t ) x (t )
n1 n2 nn
Theorem 7.4.1
If the vector functions x(1) and x(2) are solutions of the system
x' = P(t)x, then the linear combination c1x(1) + c2x(2) is also a
solution for any constants c1 and c2.

Note: By repeatedly applying the result of this theorem, it


can be seen that every finite linear combination
x = c1x (1) (t ) + c2 x ( 2 ) (t ) + K + ck x ( k ) (t )
of solutions x(1), x(2),, x(k) is itself a solution to x' = P(t)x.
Example 2
Consider the homogeneous equation x' = P(t)x below, with
the two solutions x(1) and x(2) as indicated.
1 1 e 3t ( 2 ) e t
x = x ; x (t ) = 3t , x (t ) =
(1)

t
4 1 2e 2e
Then x = c1x(1) + c2x(2) is also a solution:
1 1 1 1 c1e3t 1 1 c2 e t
x = +
3t

t
4 1 4 1 2c1e 4 1 2c2 e
3c1e 3t c2 e t
= +
3t

t
6c1e 2c2 e
3e3t e t
= c1 3t + c2 t = x
6e 2e
Theorem 7.4.2
If x(1), x(2),, x(n) are linearly independent solutions of the
system x' = P(t)x for each point in I: < t < , then each
solution x = (t) can be expressed uniquely in the form
x = c1x (1) (t ) + c2 x ( 2 ) (t ) + K + cn x ( n ) (t )

If solutions x(1),, x(n) are linearly independent for each


point in I: < t < , then they are fundamental solutions
on I, and the general solution is given by
x = c1x (1) (t ) + c2 x ( 2 ) (t ) + K + cn x ( n ) (t )
The Wronskian and Linear Independence
The proof of Thm 7.4.2 uses the fact that if x(1), x(2),, x(n)
are linearly independent on I, then detX(t) 0 on I, where
x11 (t ) L x1n (t )

X(t ) = M O M ,
x (t ) L x (t )
n1 nn

The Wronskian of x(1),, x(n) is defined as


W[x(1),, x(n)](t) = detX(t).
It follows that W[x(1),, x(n)](t) 0 on I iff x(1),, x(n) are
linearly independent for each point in I.
Theorem 7.4.3
If x(1), x(2),, x(n) are solutions of the system x' = P(t)x on
I: < t < , then the Wronskian W[x(1),, x(n)](t) is either
identically zero on I or else is never zero on I.

This result enables us to determine whether a given set of


solutions x(1), x(2),, x(n) are fundamental solutions by
evaluating W[x(1),, x(n)](t) at any point t in < t < .
Theorem 7.4.4
Let 1 0 0

0 1 0
e (1) = 0 , e ( 2) = 0 , K, e ( n ) = M

M M 0
0 0 1

Let x(1), x(2),, x(n) be solutions of the system x' = P(t)x,
< t < , that satisfy the initial conditions
x (1) (t0 ) = e (1) , K , x ( n ) (t0 ) = e ( n ) ,
respectively, where t0 is any point in < t < . Then
x(1), x(2),, x(n) are fundamental solutions of x' = P(t)x.
Ch 7.5: Homogeneous Linear Systems with
Constant Coefficients
We consider here a homogeneous system of n first order linear
equations with constant, real coefficients:
x1 = a11 x1 + a12 x2 + K + a1n xn
x2 = a21 x1 + a22 x2 + K + a2 n xn
M
xn = an1 x1 + an 2 x2 + K + ann xn
This system can be written as x' = Ax, where

x1 (t ) a11 a12 L a1n



x2 (t ) a21 a22 L a2 n
x(t ) = , A=
M M M O M

x (t ) a an 2 L ann
m n1
Equilibrium Solutions
Note that if n = 1, then the system reduces to
x = ax x(t ) = e at
Recall that x = 0 is the only equilibrium solution if a 0.
Further, x = 0 is an asymptotically stable solution if a < 0,
since other solutions approach x = 0 in this case.
Also, x = 0 is an unstable solution if a > 0, since other
solutions depart from x = 0 in this case.
For n > 1, equilibrium solutions are similarly found by
solving Ax = 0. We assume detA 0, so that x = 0 is the
only solution. Determining whether x = 0 is asymptotically
stable or unstable is an important question here as well.
Phase Plane
When n = 2, then the system reduces to
x1 = a11 x1 + a12 x2
x2 = a21 x1 + a22 x2
This case can be visualized in the x1x2-plane, which is called
the phase plane.
In the phase plane, a direction field can be obtained by
evaluating Ax at many points and plotting the resulting
vectors, which will be tangent to solution vectors.
A plot that shows representative solution trajectories is
called a phase portrait.
Examples of phase planes, directions fields and phase
portraits will be given later in this section.
Solving Homogeneous System
To construct a general solution to x' = Ax, assume a solution
of the form x = ert, where the exponent r and the constant
vector are to be determined.
Substituting x = ert into x' = Ax, we obtain
re rt = Ae rt r = A (A rI ) = 0
Thus to solve the homogeneous system of differential
equations x' = Ax, we must find the eigenvalues and
eigenvectors of A.
Therefore x = ert is a solution of x' = Ax provided that r is
an eigenvalue and is an eigenvector of the coefficient
matrix A.
Example 1: Direction Field (1 of 9)

Consider the homogeneous equation x' = Ax below.


1 1
x = x
4 1
A direction field for this system is given below.
Substituting x = ert in for x, and rewriting system as
(A-rI) = 0, we obtain

1 r 1 1 0
=
4 1 r 1 0
Example 1: Eigenvalues (2 of 9)
Our solution has the form x = ert, where r and are found
by solving
1 r 1 1 0
=
4 1 r 1 0
Recalling that this is an eigenvalue problem, we determine r
by solving det(A-rI) = 0:
1 r 1
= (1 r ) 2 4 = r 2 2r 3 = (r 3)(r + 1)
4 1 r

Thus r1 = 3 and r2 = -1.


Example 1: First Eigenvector (3 of 9)
Eigenvector for r1 = 3: Solve
1 3 1 1 0 2 1 1 0
(A rI ) = 0 = =
4 1 3 2 0 4 2 2 0
by row reducing the augmented matrix:

2 1 0 1 1/ 2 0 1 1/ 2 0 1 1 / 2 2 =0
1
4 2 0 4 2 0 0 0 0 0 2 =0
1 / 2 2 1 / 2 1
=
(1)
= c , c arbitrary choose =
(1)

2 1 2
Example 1: Second Eigenvector (4 of 9)
Eigenvector for r2 = -1: Solve
1 + 1 1 1 0 2 1 1 0
(A rI ) = 0 = =
4 1 + 1 2 0 4 2 2 0
by row reducing the augmented matrix:

2 1 0 1 1/ 2 0 1 1/ 2 0 1 + 1 / 2 2 = 0
1
4 2 0 4 2 0 0 0 0 0 2 = 0
1 / 2 2 1/ 2 1
=
( 2)
= c , c arbitrary choose =
( 2)

2 1 2
Example 1: General Solution (5 of 9)
The corresponding solutions x = ert of x' = Ax are
1 3t ( 2 ) 1 t
x (t ) = e , x (t ) = e
(1)

2 2
The Wronskian of these two solutions is
e t
[ ]
W x (1) , x ( 2) (t ) =
e 3t
= 4e 2t 0
2e3t 2e t
Thus x(1) and x(2) are fundamental solutions, and the general
solution of x' = Ax is
x(t ) = c1x (1) (t ) + c2 x ( 2) (t )
1 3t 1 t
= c1 e + c2 e
2 2
Example 1: Phase Plane for x(1) (6 of 9)

To visualize solution, consider first x = c1x(1):


x 1
x (1) (t ) = 1 = c1 e3t x1 = c1e3t , x2 = 2c1e3t
x2 2
Now
x1 x2
x1 = c1e , x2 = 2c1e
3t 3t
e = =
3t
x2 = 2 x1
c1 2c1

Thus x(1) lies along the straight line x2 = 2x1, which is the line
through origin in direction of first eigenvector (1)
If solution is trajectory of particle, with position given by
(x1, x2), then it is in Q1 when c1 > 0, and in Q3 when c1 < 0.
In either case, particle moves away from origin as t increases.
Example 1: Phase Plane for x(2) (7 of 9)

Next, consider x = c2x(2):


x1 1 t
x (t ) = = c2 e
( 2)
x1 = c2 e t , x2 = 2c2 e t
x2 2
Then x(2) lies along the straight line x2 = -2x1, which is the
line through origin in direction of 2nd eigenvector (2)
If solution is trajectory of particle, with position given by
(x1, x2), then it is in Q4 when c2 > 0, and in Q2 when c2 < 0.
In either case, particle moves towards origin as t increases.
Example 1:
Phase Plane for General Solution (8 of 9)

The general solution is x = c1x(1) + c2x(2):


1 3t 1 t
x(t ) = c1 e + c2 e
2 2
As t , c1x(1) is dominant and c2x(2) becomes negligible.
Thus, for c1 0, all solutions asymptotically approach the
line x2 = 2x1 as t .
Similarly, for c2 0, all solutions asymptotically approach
the line x2 = -2x1 as t - .
The origin is a saddle point,
and is unstable. See graph.
Example 1:
Time Plots for General Solution (9 of 9)

The general solution is x = c1x(1) + c2x(2):


1 3t 1 t x1 (t ) c1e 3t + c2 e t
x(t ) = c1 e + c2 e = t

2 x2 (t ) 2c1e 2c2 e
3t
2
As an alternative to phase plane plots, we can graph x1 or x2
as a function of t. A few plots of x1 are given below.
Note that when c1 = 0, x1(t) = c2e-t 0 as t .
Otherwise, x1(t) = c1e3t + c2e-t grows unbounded as t .
Graphs of x2 are similarly obtained.
Example 2: Direction Field (1 of 9)

Consider the homogeneous equation x' = Ax below.


3 2
x = x

2 2
A direction field for this system is given below.
Substituting x = ert in for x, and rewriting system as
(A-rI) = 0, we obtain

3 r 2 1 0
=
2 2 r 1 0

Example 2: Eigenvalues (2 of 9)
Our solution has the form x = ert, where r and are found
by solving
3 r 2 1 0
=
2 2 r 1 0

Recalling that this is an eigenvalue problem, we determine r
by solving det(A-rI) = 0:
3 r 2
= (3 r )(2 r ) 2 = r 2 + 5r + 4 = (r + 1)(r + 4)
2 2r

Thus r1 = -1 and r2 = -4.


Example 2: First Eigenvector (3 of 9)
Eigenvector for r1 = -1: Solve
3 +1 2 1 0 2 2 1 0
(A rI ) = 0
=


2
=
2 2 + 1 2 0 1 2 0

by row reducing the augmented matrix:

2 2 0 1 2 / 2 0 1 2 / 2 0

2
1 0 2 1 0 0 0 0

2 / 2 2 1
(1) = choose (1) =
2 2
Example 2: Second Eigenvector (4 of 9)
Eigenvector for r2 = -4: Solve
3+ 4 2 1 0 1 2 1 0
(A rI ) = 0 = =
2 2 + 4 2 0 2
2 2 0
by row reducing the augmented matrix:

1 2 0 1 2 0 2 2
( 2) =
2 2 0 0
0 0 2

2
choose ( 2)
=
1
Example 2: General Solution (5 of 9)
The corresponding solutions x = ert of x' = Ax are
1 t ( 2 ) 2 4t
x (t ) = e , x (t ) =
(1)
e

2 1
The Wronskian of these two solutions is

[ ]
W x (1) , x ( 2) (t ) =
e t 2e 4t
= 3e 5t 0
2e t e 4t
Thus x(1) and x(2) are fundamental solutions, and the general
solution of x' = Ax is
x(t ) = c1x (1) (t ) + c2 x ( 2) (t )
1 t 2 4t
= c1 e + c2 e
2 1
Example 2: Phase Plane for x(1) (6 of 9)

To visualize solution, consider first x = c1x(1):


x1 1 t
x (t ) = = c1 e
(1)
x1 = c1e t , x2 = 2c1e t
x2 2
Now
t t x1
t x2
x1 = c1e , x2 = 2c1e e = = x2 = 2 x1
c1 2c1
Thus x(1) lies along the straight line x2 = 2 x1, which is the
line through origin in direction of first eigenvector (1)
If solution is trajectory of particle, with position given by
(x1, x2), then it is in Q1 when c1 > 0, and in Q3 when c1 < 0.
In either case, particle moves towards origin as t increases.
Example 2: Phase Plane for x(2) (7 of 9)

Next, consider x = c2x(2):


x1 2 4t
x (t ) = = c2
( 2)
e x1 = 2c2 e 4t , x2 = c2 e 4t
x2 1
Then x(2) lies along the straight line x2 = -2 x1, which is the
line through origin in direction of 2nd eigenvector (2)
If solution is trajectory of particle, with position given by
(x1, x2), then it is in Q4 when c2 > 0, and in Q2 when c2 < 0.
In either case, particle moves towards origin as t increases.
Example 2:
Phase Plane for General Solution (8 of 9)

The general solution is x = c1x(1) + c2x(2):


1 t ( 2 ) 2 4t
x (t ) = e , x (t ) =
(1)
e

2 1
As t , c1x(1) is dominant and c2x(2) becomes negligible.
Thus, for c1 0, all solutions asymptotically approach
origin along the line x2 = 2 x1 as t .
Similarly, all solutions are unbounded as t - .
The origin is a node, and is
asymptotically stable.
Example 2:
Time Plots for General Solution (9 of 9)

The general solution is x = c1x(1) + c2x(2):


1 t 2 4t x1 (t ) c1e t 2c2 e 4t
x(t ) = c1 e + c2 e
=
t 4t
2 1 x2 (t ) 2c1e + c2 e

As an alternative to phase plane plots, we can graph x1 or x2


as a function of t. A few plots of x1 are given below.
Graphs of x2 are similarly obtained.
2 x 2 Case:
Real Eigenvalues, Saddle Points and Nodes
The previous two examples demonstrate the two main cases
for a 2 x 2 real system with real and different eigenvalues:
Both eigenvalues have opposite signs, in which case origin is a
saddle point and is unstable.
Both eigenvalues have the same sign, in which case origin is a node,
and is asymptotically stable if the eigenvalues are negative and
unstable if the eigenvalues are positive.
Eigenvalues, Eigenvectors
and Fundamental Solutions
In general, for an n x n real linear system x' = Ax:
All eigenvalues are real and different from each other.
Some eigenvalues occur in complex conjugate pairs.
Some eigenvalues are repeated.
If eigenvalues r1,, rn are real & different, then there are n
corresponding linearly independent eigenvectors (1),, (n).
The associated solutions of x' = Ax are
x (1) (t ) = (1) e r1t , K , x ( n ) (t ) = ( n ) e rnt
Using Wronskian, it can be shown that these solutions are
linearly independent, and hence form a fundamental set of
solutions. Thus general solution is
x = c1 (1) e r1t + K + cn ( n ) e rnt
Hermitian Case: Eigenvalues, Eigenvectors &
Fundamental Solutions
If A is an n x n Hermitian matrix (real and symmetric), then
all eigenvalues r1,, rn are real, although some may repeat.
In any case, there are n corresponding linearly independent
and orthogonal eigenvectors (1),, (n). The associated
solutions of x' = Ax are
x (1) (t ) = (1) e r1t , K , x ( n ) (t ) = ( n ) e rnt
and form a fundamental set of solutions.
Example 3: Hermitian Matrix (1 of 3)

Consider the homogeneous equation x' = Ax below.


0 1 1

x = 1 0 1 x
1 1 0

The eigenvalues were found previously in Ch 7.3, and were:


r1 = 2, r2 = -1 and r3 = -1.
Corresponding eigenvectors:

1 1 0
( 2 ) ( 3)
(1) = 1, = 0 , = 1
1 1 1

Example 3: General Solution (2 of 3)
The fundamental solutions are
1 1 0
2 t ( 2 ) t ( 3) t
x (1) = 1 e , x = 0 e , x = 1 e
1 1 1

with general solution


1 1 0
2t t t
x = c1 1e + c2 0 e + c3 1e
1 1 1

Example 3: General Solution Behavior (3 of 3)

The general solution is x = c1x(1) + c2x(2) + c3x(3):


1 1 0
2t t t
x = c1 1e + c2 0 e + c3 1e
1 1 1

As t , c1x(1) is dominant and c2x(2) , c3x(3) become
negligible.
Thus, for c1 0, all solns x become unbounded as t ,
while for c1 = 0, all solns x 0 as t .
The initial points that cause c1 = 0 are those that lie in plane
determined by (2) and (3). Thus solutions that start in this
plane approach origin as t .
Complex Eigenvalues and Fundamental Solns
If some of the eigenvalues r1,, rn occur in complex
conjugate pairs, but otherwise are different, then there are
still n corresponding linearly independent solutions
x (1) (t ) = (1) e r1t , K, x ( n ) (t ) = ( n ) e rnt ,
which form a fundamental set of solutions. Some may be
complex-valued, but real-valued solutions may be derived
from them. This situation will be examined in Ch 7.6.

If the coefficient matrix A is complex, then complex


eigenvalues need not occur in conjugate pairs, but solutions
will still have the above form (if the eigenvalues are
distinct) and these solutions may be complex-valued.
Repeated Eigenvalues and Fundamental Solns
If some of the eigenvalues r1,, rn are repeated, then there
may not be n corresponding linearly independent solutions of
the form
x (t ) = e 1 , K , x (t ) = e n
(1) (1) r t (n) (n) r t

In order to obtain a fundamental set of solutions, it may be


necessary to seek additional solutions of another form.
This situation is analogous to that for an nth order linear
equation with constant coefficients, in which case a repeated
root gave rise solutions of the form
e rt , te rt , t 2 e rt , K
This case of repeated eigenvalues is examined in Section 7.8.
Ch 7.6: Complex Eigenvalues
We consider again a homogeneous system of n first order
linear equations with constant, real coefficients,
x1 = a11 x1 + a12 x2 + K + a1n xn
x2 = a21 x1 + a22 x2 + K + a2 n xn
M
xn = an1 x1 + an 2 x2 + K + ann xn ,
and thus the system can be written as x' = Ax, where

x1 (t ) a11 a12 L a1n



x2 (t ) a21 a22 L a2 n
x(t ) = , A=
M M M O M

x (t ) a an 2 L ann
n n1
Conjugate Eigenvalues and Eigenvectors
We know that x = ert is a solution of x' = Ax, provided r is
an eigenvalue and is an eigenvector of A.
The eigenvalues r1,, rn are the roots of det(A-rI) = 0, and
the corresponding eigenvectors satisfy (A-rI) = 0.
If A is real, then the coefficients in the polynomial equation
det(A-rI) = 0 are real, and hence any complex eigenvalues
must occur in conjugate pairs. Thus if r1 = + i is an
eigenvalue, then so is r2 = - i.
The corresponding eigenvectors (1), (2) are conjugates also.
To see this, recall A and I have real entries, and hence
(A r1I ) (1) = 0 (A r1I ) (1) = 0 (A r2 I ) ( 2 ) = 0
Conjugate Solutions
It follows from the previous slide that the solutions
x (1) = (1) e r1t , x ( 2 ) = ( 2 ) e r2t

corresponding to these eigenvalues and eigenvectors are


conjugates conjugates as well, since
x ( 2 ) = ( 2) e r2t = (1) e r2t = x (1)
Real-Valued Solutions
Thus for complex conjugate eigenvalues r1 and r2 , the
corresponding solutions x(1) and x(2) are conjugates also.
To obtain real-valued solutions, use real and imaginary parts
of either x(1) or x(2). To see this, let (1) = a + ib. Then
x (1) = (1) e ( +i )t = (a + ib )e t (cos t + i sin t )
= e t (a cos t b sin t ) + ie t (a sin t + b cos t )
= u(t ) + i v (t )
where
u(t ) = e t (a cos t b sin t ), v (t ) = e t (a sin t + b cos t ),

are real valued solutions of x' = Ax, and can be shown to be


linearly independent.
General Solution
To summarize, suppose r1 = + i, r2 = - i, and that
r3,, rn are all real and distinct eigenvalues of A. Let the
corresponding eigenvectors be
(1) = a + ib, ( 2 ) = a ib, ( 3) , ( 4 ) , K, ( n )
Then the general solution of x' = Ax is
x = c1u(t ) + c2 v (t ) + c3 ( 3) e r3 t + K + cn ( n ) e rn t
where
u(t ) = e t (a cos t b sin t ), v (t ) = e t (a sin t + b cos t )
Example 1: Direction Field (1 of 7)

Consider the homogeneous equation x' = Ax below.


1/ 2 1
x = x
1 1/ 2
A direction field for this system is given below.
Substituting x = ert in for x, and rewriting system as
(A-rI) = 0, we obtain

1/ 2 r 1 1 0
=
1 1 / 2 r 1 0
Example 1: Complex Eigenvalues (2 of 7)

We determine r by solving det(A-rI) = 0. Now


1/ 2 r 1
= (r + 1 / 2) + 1 = r + r +
2 2 5
1 1/ 2 r 4

Thus

1 12 4(5 / 4) 1 2i 1
r= = = i
2 2 2

Therefore the eigenvalues are r1 = -1/2 + i and r2 = -1/2 - i.


Example 1: First Eigenvector (3 of 7)

Eigenvector for r1 = -1/2 + i: Solve


1/ 2 r 1 1 0
(A rI ) = 0 =
1 1 / 2 r 1 0
i 1 1 0 1 i 1 0
= =
1 i 2 0 1 i 2 0

by row reducing the augmented matrix:


1 i 0 1 i 0 i 2 1
(1) = choose (1) =
1 i 0 0 0 0 2 i

Thus 1 0
(1)
= + i
0 1
Example 1: Second Eigenvector (4 of 7)

Eigenvector for r1 = -1/2 - i: Solve


1/ 2 r 1 1 0
(A rI ) = 0 =
1 1 / 2 r 1 0
i 1 1 0 1 i 1 0
= =
1 i 2 0 1 i 2 0

by row reducing the augmented matrix:


1 i 0 1 i 0 i 1
( 2 ) = 2 choose ( 2 ) =
1 i 0 0 0 0 2 i

Thus
1 0
( 2)
= + i
0 1
Example 1: General Solution (5 of 7)
The corresponding solutions x = ert of x' = Ax are

t / 2
1 0 t / 2 cos t
u(t ) = e cos t sin t = e
0 1 sin t
t / 2
1 0 t / 2 sin t
v(t ) = e sin t + cos t = e
0 1 cos t

The Wronskian of these two solutions is


e t / 2 cos t e t / 2 sin t
[
W x ,x(1) ( 2)
](t ) = = e t 0
e t / 2 sin t e t / 2 cos t

Thus u(t) and v(t) are real-valued fundamental solutions of


x' = Ax, with general solution x = c1u + c2v.
Example 1: Phase Plane (6 of 7)
Given below is the phase plane plot for solutions x, with
x1 e t / 2 cos t e t / 2 sin t
x = = c1 t / 2 + c2 t / 2



x2 e sin t e cos t
Each solution trajectory approaches origin along a spiral path
as t , since coordinates are products of decaying
exponential and sine or cosine factors.
The graph of u passes through (1,0),
since u(0) = (1,0). Similarly, the
graph of v passes through (0,1).
The origin is a spiral point, and
is asymptotically stable.
Example 1: Time Plots (7 of 7)
The general solution is x = c1u + c2v:
x1 (t ) c1e t / 2 cos t + c2 e t / 2 sin t
x = = t / 2 t / 2


x2 (t ) c1e sin t + c2 e cos t
As an alternative to phase plane plots, we can graph x1 or x2
as a function of t. A few plots of x1 are given below, each
one a decaying oscillation as t .
Spiral Points, Centers,
Eigenvalues, and Trajectories
In previous example, general solution was
x1 e t / 2 cos t e t / 2 sin t
x = = c1 t / 2 + c2 t / 2



x2 e sin t e cos t
The origin was a spiral point, and was asymptotically stable.
If real part of complex eigenvalues is positive, then
trajectories spiral away, unbounded, from origin, and hence
origin would be an unstable spiral point.
If real part of complex eigenvalues is zero, then trajectories
circle origin, neither approaching nor departing. Then origin
is called a center and is stable, but not asymptotically stable.
Trajectories periodic in time.
The direction of trajectory motion depends on entries in A.
Example 2:
Second Order System with Parameter (1 of 2)

The system x' = Ax below contains a parameter .


2
x = x
2 0
Substituting x = ert in for x and rewriting system as
(A-rI) = 0, we obtain
r 2 1 0
=
2 r 1 0
Next, solve for r in terms of :
r 2 2 16
= r (r ) + 4 = r r + 4 r =
2

2 r 2
Example 2: 2 16
r=
Eigenvalue Analysis (2 of 2) 2

The eigenvalues are given by the quadratic formula above.


For < -4, both eigenvalues are real and negative, and hence
origin is asymptotically stable node.
For > 4, both eigenvalues are real and positive, and hence the
origin is an unstable node.
For -4 < < 0, eigenvalues are complex with a negative real
part, and hence origin is asymptotically stable spiral point.
For 0 < < 4, eigenvalues are complex with a positive real
part, and the origin is an unstable spiral point.
For = 0, eigenvalues are purely imaginary, origin is a center.
Trajectories closed curves about origin & periodic.
For = 4, eigenvalues real & equal, origin is a node (Ch 7.8)
Second Order Solution Behavior and
Eigenvalues: Three Main Cases
For second order systems, the three main cases are:
Eigenvalues are real and have opposite signs; x = 0 is a saddle point.
Eigenvalues are real, distinct and have same sign; x = 0 is a node.
Eigenvalues are complex with nonzero real part; x = 0 a spiral point.
Other possibilities exist and occur as transitions between two
of the cases listed above:
A zero eigenvalue occurs during transition between saddle point and
node. Real and equal eigenvalues occur during transition between
nodes and spiral points. Purely imaginary eigenvalues occur during a
transition between asymptotically stable and unstable spiral points.

b b 2 4ac
r=
2a
Ch 7.7: Fundamental Matrices
Suppose that x(1)(t),, x(n)(t) form a fundamental set of
solutions for x' = P(t)x on < t < .
The matrix
x1(1) (t ) L x1( n ) (t )

(t ) = M O M ,
x (1) (t ) L x ( n ) (t )
n n
whose columns are x(1)(t),, x(n)(t), is a fundamental matrix
for the system x' = P(t)x. This matrix is nonsingular since its
columns are linearly independent, and hence det 0.
Note also that since x(1)(t),, x(n)(t) are solutions of x' = P(t)x,
satisfies the matrix differential equation ' = P(t).
Example 1:
Consider the homogeneous equation x' = Ax below.
1 1
x = x
4 1

In Chapter 7.5, we found the following fundamental


solutions for this system:
1 3t ( 2 ) 1 t
(1)

x (t ) = e , x (t ) = e
2 2
Thus a fundamental matrix for this system is
e 3t e t
(t ) = 3t
t
2e 2e
Fundamental Matrices and General Solution
The general solution of x' = P(t)x
x = c1x (1) (t ) + L + cn x ( n )
can be expressed x = (t)c, where c is a constant vector with
components c1,, cn:
x1(1) (t ) L x1( n ) (t ) c1

x = (t )c = M O M M
x (1) (t ) L x ( n ) (t ) c
n n n
Fundamental Matrix & Initial Value Problem
Consider an initial value problem
x' = P(t)x, x(t0) = x0
where < t0 < and x0 is a given initial vector.
Now the solution has the form x = (t)c, hence we choose c
so as to satisfy x(t0) = x0.
Recalling (t0) is nonsingular, it follows that
(t0 )c = x 0 c = 1 (t0 )x 0
Thus our solution x = (t)c can be expressed as
x = (t ) 1 (t0 )x 0
Recall: Theorem 7.4.4
Let 1 0 0

0 1 0
e (1) = 0 , e ( 2) = 0 , K, e ( n ) = M

M M 0
0 0 1

Let x(1),, x(n) be solutions of x' = P(t)x on I: < t < that
satisfy the initial conditions
x (1) (t0 ) = e (1) , K , x ( n ) (t0 ) = e ( n ) , < t0 <

Then x(1),, x(n) are fundamental solutions of x' = P(t)x.


Fundamental Matrix & Theorem 7.4.4
Suppose x(1)(t),, x(n)(t) form the fundamental solutions given
by Thm 7.4.4. Denote the corresponding fundamental matrix
by (t). Then columns of (t) are x(1)(t),, x(n)(t), and hence
1 0 L 0

0 1 L 0
(t0 ) = =I
M M O M

0 0 L 1

Thus -1(t0) = I, and the hence general solution to the
corresponding initial value problem is
x = (t ) 1 (t0 )x 0 = (t )x 0
It follows that for any fundamental matrix (t),
x = (t ) 1 (t0 )x 0 = (t )x 0 (t ) = (t ) 1 (t0 )
The Fundamental Matrix
and Varying Initial Conditions
Thus when using the fundamental matrix (t), the general
solution to an IVP is
x = (t ) 1 (t0 )x 0 = (t )x 0
This representation is useful if same system is to be solved for
many different initial conditions, such as a physical system
that can be started from many different initial states.
Also, once (t) has been determined, the solution to each set
of initial conditions can be found by matrix multiplication, as
indicated by the equation above.
Thus (t) represents a linear transformation of the initial
conditions x0 into the solution x(t) at time t.
Example 2: Find (t) for 2 x 2 System (1 of 5)

Find (t) such that (0) = I for the system below.


1 1
x = x
4 1
Solution: First, we must obtain x(1)(t) and x(2)(t) such that
1 ( 2) 0
x (0) = , x (0) =
(1)

0 1
We know from previous results that the general solution is
1 3t 1 t
x = c1 e + c2 e
2 2
Every solution can be expressed in terms of the general
solution, and we use this fact to find x(1)(t) and x(2)(t).
Example 2: Use General Solution (2 of 5)
Thus, to find x(1)(t), express it terms of the general solution
1 3t 1 t
x (t ) = c1 e + c2 e
(1)

2 2
and then find the coefficients c1 and c2.
To do so, use the initial conditions to obtain
1 1 1
x (0) = c1 + c2 =
(1)

2 2 0
or equivalently,

1 1 c1 1
=
2 2 c2 0
Example 2: Solve for x(1)(t) (3 of 5)

To find x(1)(t), we therefore solve


1 1 c1 1
=
2 2 c2 0
by row reducing the augmented matrix:
1 1 1 1 1 1 1 1 1 1 0 1 / 2

2 2 0 0 4 2 0 1 1/ 2 0 1 1/ 2
c1 = 1/ 2

c2 = 1 / 2
Thus
1 3t 1 t
1 1 1 1 e + e
x (1) (t ) = e3t + e t = 2 2
2 2 2 2 e e t
3 t

Example 2: Solve for x(2)(t) (4 of 5)

To find x(2)(t), we similarly solve


1 1 c1 0
=
2 2 c2 1
by row reducing the augmented matrix:
1 1 0 1 1 0 1 1 0 1 0 1/ 4

2 2 1 0 4 1 0 1 1 / 4 0 1 1 / 4
c1 = 1/ 4

c2 = 1 / 4
Thus 1 3t 1 t
1 1 1 1 e e
x ( 2 ) (t ) = e3t e t = 4 4
4 2 4 2 1 e3t + 1 e t
2 2
Example 2: Obtain (t) (5 of 5)

The columns of (t) are given by x(1)(t) and x(2)(t), and thus
from the previous slide we have
1 3t 1 t 1 3t 1 t
e + e e e
(t ) = 2 2 4 4
e 3t e t 1 3t 1 t
e + e
2 2
Note (t) is more complicated than (t) found in Ex 1.
However, now that we have (t), it is much easier to
determine the solution to any set of initial conditions.

e 3t e t
(t ) = 3t
t
2e 2e
Matrix Exponential Functions
Consider the following two cases:
The solution to x' = ax, x(0) = x0, is x = x0eat, where e0 = 1.
The solution to x' = Ax, x(0) = x0, is x = (t)x0, where (0) = I.
Comparing the form and solution for both of these cases, we
might expect (t) to have an exponential character.
Indeed, it can be shown that (t) = eAt, where

A nt n A nt n
e At
= =I+
n =0 n ! n =1 n !

is a well defined matrix function that has all the usual


properties of an exponential function. See text for details.
Thus the solution to x' = Ax, x(0) = x0, is x = eAtx0.
Example 3: Matrix Exponential Function
Consider the diagonal matrix A below.
1 0
A =
0 2
Then
1 0 1 0 1 0 1 0 1 0 1 0
A =
2
= , A =
2
3

2
= ,K
3
0 2 0 2 0 2 0 2 0 2 0 2
In general, 1 0
A =
n

n
0 2
Thus

A nt n
1 / n! 0 n et 0
e At
= = n
t =
2t
n =0 n ! n =0 0 2 / n ! 0 e
Coupled Systems of Equations
Recall that our constant coefficient homogeneous system
x1 = a11 x1 + a12 x2 + K + a1n xn
M
xn = an1 x1 + an 2 x2 + K + ann xn ,

written as x' = Ax with


x1 (t ) a11 L a1n

x(t ) = M , A = M O M ,
x (t ) a L a
n n1 nn

is a system of coupled equations that must be solved


simultaneously to find all the unknown variables.
Uncoupled Systems & Diagonal Matrices
In contrast, if each equation had only one variable, solved for
independently of other equations, then task would be easier.
In this case our system would have the form
x1 = d11 x1 + 0 x2 + K + 0 xn
x2 = 0 x1 + d11 x2 + K + 0 xn
M
xn = 0 x1 + 0 x2 + K + d nn xn ,

or x' = Dx, where D is a diagonal matrix:


d11 0 L 0
x1 (t )
0 d 22 L 0
x(t ) = M , D=
x (t ) M M O M
n
0 L d nn
0
Uncoupling: Transform Matrix T
In order to explore transforming our given system x' = Ax of
coupled equations into an uncoupled system x' = Dx, where D
is a diagonal matrix, we will use the eigenvectors of A.
Suppose A is n x n with n linearly independent eigenvectors
(1),, (n), and corresponding eigenvalues 1,, n.
Define n x n matrices T and D using the eigenvalues &
eigenvectors of A:
1 0 L 0
(1)
L (n)

1 1
0 2 L 0
T = M O M , D=
(1) L ( n ) M M O M

n n 0 0 L n

Note that T is nonsingular, and hence T-1 exists.


Uncoupling: T-1AT = D
Recall here the definitions of A, T and D:
1 0 L 0
a11 L a1n 1(1) L 1( n )
0 2 L 0
A = M O M , T = M O M , D =
a L a (1) L ( n ) M M O M
n1 nn

n n 0 0 L n

Then the columns of AT are A(1),, A(n), and hence


11(1) L n1( n )

AT = M O M = TD
(1) L ( n )
1 n n n

It follows that T-1AT = D.


Similarity Transformations
Thus, if the eigenvalues and eigenvectors of A are known,
then A can be transformed into a diagonal matrix D, with
T-1AT = D
This process is known as a similarity transformation, and A
is said to be similar to D. Alternatively, we could say that A
is diagonalizable.

1 0 L 0
a11 L a1n L
(1) (n)

1 1
0 2 L 0
A = M O M , T = M O M , D=
a L a (1) L ( n ) M M O M
n1 nn

n n 0 0 L n

Similarity Transformations: Hermitian Case
Recall: Our similarity transformation of A has the form
T-1AT = D
where D is diagonal and columns of T are eigenvectors of A.
If A is Hermitian, then A has n linearly independent
orthogonal eigenvectors (1),, (n), normalized so that
((i), (i)) =1 for i = 1,, n, and ((i), (k)) = 0 for i k.
With this selection of eigenvectors, it can be shown that
T-1 = T*. In this case we can write our similarity transform as
T*AT = D
Nondiagonalizable A
Finally, if A is n x n with fewer than n linearly independent
eigenvectors, then there is no matrix T such that T-1AT = D.
In this case, A is not similar to a diagonal matrix and A is not
diagonlizable.

1 0 L 0
a11 L a1n L
(1) (n)

1 1
0 2 L 0
A = M O M , T = M O M , D=
a L a (1) L ( n ) M M O M
n1 nn

n n 0 0 L n

Example 4:
Find Transformation Matrix T (1 of 2)

For the matrix A below, find the similarity transformation


matrix T and show that A can be diagonalized.
1 1
A =
4 1
We already know that the eigenvalues are 1 = 3, 2 = -1
with corresponding eigenvectors
1 ( 2) 1
(t ) = , (t ) =
(1)

2 2
Thus
1 1 3 0

T=
, D =
2 2 0 1
Example 4: Similarity Transformation (2 of 2)

To find T-1, augment the identity to T and row reduce:


1 1 1 0 1 1 1 0 1 1 1 0

2 2 0 1 0 4 2 1 0 1 1 / 2 1 / 4
1 0 1/ 2 1/ 4 1 / 2 1/ 4
T 1 =
0 1 1/ 2 1/ 4 1 / 2 1 / 4
Then
1 1 / 2 1 / 4 1 1 1 1
T AT =
1 / 2 1 / 4 4 1 2 2
1 / 2 1 / 4 3 1 3 0
= = = D
1 / 2 1 / 4 6 2 0 1
Thus A is similar to D, and hence A is diagonalizable.
Fundamental Matrices for Similar Systems (1 of 3)

Recall our original system of differential equations x' = Ax.


If A is n x n with n linearly independent eigenvectors, then A
is diagonalizable. The eigenvectors form the columns of the
nonsingular transform matrix T, and the eigenvalues are the
corresponding nonzero entries in the diagonal matrix D.
Suppose x satisfies x' = Ax, let y be the n x 1 vector such that
x = Ty. That is, let y be defined by y = T-1x.
Since x' = Ax and T is a constant matrix, we have Ty' = ATy,
and hence y' = T-1ATy = Dy.
Therefore y satisfies y' = Dy, the system similar to x' = Ax.
Both of these systems have fundamental matrices, which we
examine next.
Fundamental Matrix for Diagonal System (2 of 3)

A fundamental matrix for y' = Dy is given by Q(t) = eDt.


Recalling the definition of eDt, we have

(1t )n
0 0 n n!
0 0

1
n


D nt n t n =0
Q(t ) = = 0 O 0 = 0 O 0
n =0 n ! n =0 n n!
0 n

0

(nt )n

0 0
n !
n =0

e 1t 0 0

= 0 O 0
t
0 0 e n

Fundamental Matrix for Original System (3 of 3)

To obtain a fundamental matrix (t) for x' = Ax, recall that the
columns of (t) consist of fundamental solutions x satisfying
x' = Ax. We also know x = Ty, and hence it follows that
1(1) L 1( n ) e 1t 0 0 1(1) e 1t L 1( n ) e nt

= TQ = M O M 0 O 0 = M O M
(1) L ( n ) 0 0 e
nt
(1) 1t
e ( n ) nt
L n e
n n n

The columns of (t) given the expected fundamental solutions


of x' = Ax.
Example 5:
Fundamental Matrices for Similar Systems
We now use the analysis and results of the last few slides.
Applying the transformation x = Ty to x' = Ax below, this
system becomes y' = T-1ATy = Dy:
1 1 3 0
x = x y = y
4 1 0 1
A fundamental matrix for y' = Dy is given by Q(t) = eDt:
e 3t 0
Q(t ) =
t
0 e
Thus a fundamental matrix (t) for x' = Ax is
1 1 e3t 0 e 3t e t
(t ) = TQ = = 3t
2 2 0 e 2e
t t
2e
Ch 7.8: Repeated Eigenvalues
We consider again a homogeneous system of n first order
linear equations with constant real coefficients x' = Ax.
If the eigenvalues r1,, rn of A are real and different, then
there are n linearly independent eigenvectors (1),, (n), and n
linearly independent solutions of the form
x (1) (t ) = (1) e r1t , K , x ( n ) (t ) = ( n ) e rnt
If some of the eigenvalues r1,, rn are repeated, then there
may not be n corresponding linearly independent solutions of
the above form.
In this case, we will seek additional solutions that are products
of polynomials and exponential functions.
Example 1: Direction Field (1 of 12)

Consider the homogeneous equation x' = Ax below.


1 1
x = x
1 3
A direction field for this system is given below.
Substituting x = ert in for x, and rewriting system as
(A-rI) = 0, we obtain

1 r 1 1 0
=
1 3 r 1 0
Example 1: Eigenvalues (2 of 12)
Solutions have the form x = ert, where r and satisfy
1 r 1 1 0
=
1 3 r 1 0
To determine r, solve det(A-rI) = 0:
1 r 1
= (r 1)(r 3) + 1 = r 2 4r + 4 = (r 2) 2
1 3 r

Thus r1 = 2 and r2 = 2.
Example 1: Eigenvectors (3 of 12)
To find the eigenvectors, we solve
1 2 1 1 0 1 1 1 0
(A rI ) = 0 = =
1 3 2 2 0 1 1 2 0
by row reducing the augmented matrix:
1 1 0 1 1 0 1 1 0 11 + 1 2 =0

1 1 0 1 1 0 0 0 0 0 2 =0
1
(1) = 2 choose (1) =
2 1

Thus there is only one eigenvector for the repeated


eigenvalue r = 2.
Example 1: First Solution; and
Second Solution, First Attempt (4 of 12)

The corresponding solution x = ert of x' = Ax is


1 2t
x (t ) = e
(1)

1
Since there is no second solution of the form x = ert, we
need to try a different form. Based on methods for second
order linear equations in Ch 3.5, we first try x = te2t.
Substituting x = te2t into x' = Ax, we obtain
e 2t + 2te 2t = Ate 2t
or
2te 2t + e 2t Ate 2t = 0
Example 1:
Second Solution, Second Attempt (5 of 12)
From the previous slide, we have
\[
2\xi t e^{2t} + \xi e^{2t} - A\xi t e^{2t} = 0
\]
In order for this equation to be satisfied for all t, it is
necessary for the coefficients of te^{2t} and e^{2t} to both be zero.
From the e^{2t} term, we see that ξ = 0, and hence there is no
nonzero solution of the form x = ξte^{2t}.
Since te^{2t} and e^{2t} appear in the above equation, we next
consider a solution of the form
\[
x = \xi t e^{2t} + \eta e^{2t}
\]
Example 1: Second Solution and its
Defining Matrix Equations (6 of 12)
Substituting x = ξte^{2t} + ηe^{2t} into x' = Ax, we obtain
\[
\xi e^{2t} + 2\xi t e^{2t} + 2\eta e^{2t}
= A\left(\xi t e^{2t} + \eta e^{2t}\right)
\]
or
\[
2\xi t e^{2t} + (\xi + 2\eta) e^{2t} = A\xi t e^{2t} + A\eta e^{2t}
\]
Equating coefficients yields Aξ = 2ξ and Aη = ξ + 2η, or
\[
(A - 2I)\xi = 0 \quad\text{and}\quad (A - 2I)\eta = \xi
\]
The first equation is satisfied if ξ is an eigenvector of A
corresponding to the eigenvalue r = 2. Thus
\[
\xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}
\]
Example 1: Solving for Second Solution (7 of 12)
Recall that
\[
A = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}, \qquad
\xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}
\]
Thus to solve (A - 2I)η = ξ for η, we row reduce the
corresponding augmented matrix:
\[
\begin{pmatrix} -1 & -1 & 1 \\ 1 & 1 & -1 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}
\;\Longrightarrow\;
\eta_2 = -1 - \eta_1
\;\Longrightarrow\;
\eta = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k \begin{pmatrix} 1 \\ -1 \end{pmatrix}
\]
Example 1: Second Solution (8 of 12)
Our second solution x = ξte^{2t} + ηe^{2t} is now
\[
x = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t}
  + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t}
  + k \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}
\]
Recalling that the first solution was
\[
x^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t},
\]
we see that our second solution is simply
\[
x^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t}
           + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t},
\]
since the last term (the third term of x above) is a multiple of x^{(1)}.
Example 1: General Solution (9 of 12)
The two solutions of x' = Ax are
\[
x^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}, \qquad
x^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t}
           + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t}
\]
The Wronskian of these two solutions is
\[
W[x^{(1)}, x^{(2)}](t) =
\begin{vmatrix} e^{2t} & t e^{2t} \\ -e^{2t} & -t e^{2t} - e^{2t} \end{vmatrix}
= -e^{4t} \neq 0
\]
Thus x^{(1)} and x^{(2)} are fundamental solutions, and the general
solution of x' = Ax is
\[
x(t) = c_1 x^{(1)}(t) + c_2 x^{(2)}(t)
     = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}
     + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t}
     + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right]
\]
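A symbolic check of this general solution (a sketch using sympy, an assumption not made in the text) verifies that x^(2)(t) satisfies x' = Ax and that the Wronskian is -e^{4t}.

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[1, -1], [1, 3]])

    x1 = sp.Matrix([1, -1]) * sp.exp(2*t)                                   # first solution
    x2 = sp.Matrix([1, -1]) * t * sp.exp(2*t) + sp.Matrix([0, -1]) * sp.exp(2*t)

    print(sp.simplify(sp.diff(x2, t) - A*x2))     # Matrix([[0], [0]]): x2 solves x' = Ax
    W = sp.Matrix.hstack(x1, x2).det()            # Wronskian of the two solutions
    print(sp.simplify(W))                         # -exp(4*t), never zero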
Example 1: Phase Plane (10 of 12)
The general solution is
\[
x(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}
     + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t}
     + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right]
\]
Thus x is unbounded as t → ∞, and x → 0 as t → -∞.
Further, it can be shown that as t → -∞, x → 0 asymptotic
to the line x2 = -x1 determined by the first eigenvector.
Similarly, as t → ∞, x is asymptotic to a line parallel to x2 = -x1.
Example 1: Phase Plane (11 of 12)
The origin is an improper node, and is unstable. See graph.
The pattern of trajectories is typical for two repeated
eigenvalues with only one eigenvector.
If the eigenvalues are negative, then the trajectories are
similar but are traversed in the inward direction. In this case
the origin is an asymptotically stable improper node.
Example 1:
Time Plots for General Solution (12 of 12)
Time plots for x1(t) are given below, where we note that the
general solution x can be written as follows:
\[
x(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}
     + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t}
     + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right]
\;\Longrightarrow\;
\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= \begin{pmatrix} c_1 e^{2t} + c_2 t e^{2t} \\ -(c_1 + c_2) e^{2t} - c_2 t e^{2t} \end{pmatrix}
\]
General Case for Double Eigenvalues
Suppose the system x' = Ax has a double eigenvalue r = λ
and a single corresponding eigenvector ξ.
The first solution is
\[
x^{(1)} = \xi e^{\lambda t},
\]
where ξ satisfies (A - λI)ξ = 0.
As in Example 1, the second solution has the form
\[
x^{(2)} = \xi t e^{\lambda t} + \eta e^{\lambda t}
\]
where ξ is as above and η satisfies (A - λI)η = ξ.
Since λ is an eigenvalue, det(A - λI) = 0, and (A - λI)η = b
does not have a solution for all b. However, it can be
shown that (A - λI)η = ξ always has a solution.
The vector η is called a generalized eigenvector.
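For a concrete sketch of this computation (sympy assumed; not part of the text), the generalized eigenvector in Example 1 can be obtained by solving the singular but consistent system (A - 2I)η = ξ:

    import sympy as sp

    A   = sp.Matrix([[1, -1], [1, 3]])
    lam = 2
    xi  = sp.Matrix([1, -1])                      # ordinary eigenvector

    eta1, eta2 = sp.symbols('eta1 eta2')
    # (A - lam*I) eta = xi is singular but consistent: a one-parameter family
    sol = sp.linsolve((A - lam*sp.eye(2), xi), eta1, eta2)
    print(sol)   # {(-eta2 - 1, eta2)}; taking eta2 = -1 gives eta = (0, -1)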
Example 2: Fundamental Matrix (1 of 2)
Recall that a fundamental matrix Ψ(t) for x' = Ax has
linearly independent solutions for its columns.
In Example 1, our system x' = Ax was
\[
x' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} x
\]
and the two solutions we found were
\[
x^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}, \qquad
x^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t}
           + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t}
\]
Thus the corresponding fundamental matrix is
\[
\Psi(t) = \begin{pmatrix} e^{2t} & t e^{2t} \\ -e^{2t} & -t e^{2t} - e^{2t} \end{pmatrix}
        = e^{2t} \begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix}
\]
Example 2: Fundamental Matrix (2 of 2)
The fundamental matrix Φ(t) that satisfies Φ(0) = I can be
found using Φ(t) = Ψ(t)Ψ^{-1}(0), where
\[
\Psi(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad
\Psi^{-1}(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix},
\]
where Ψ^{-1}(0) is found as follows:
\[
\begin{pmatrix} 1 & 0 & 1 & 0 \\ -1 & -1 & 0 & 1 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & -1 & 1 & 1 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & -1 \end{pmatrix}
\]
Thus
\[
\Phi(t) = e^{2t} \begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix}
                 \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}
        = e^{2t} \begin{pmatrix} 1-t & -t \\ t & t+1 \end{pmatrix}
\]
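Since Φ(0) = I, this Φ(t) coincides with the matrix exponential e^{At}, which suggests a simple numerical check (a sketch assuming scipy, not used in the text):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, -1.0],
                  [1.0,  3.0]])

    t = 0.5
    # Closed form derived above: Phi(t) = e^{2t} [[1-t, -t], [t, t+1]]
    Phi_closed = np.exp(2*t) * np.array([[1 - t,   -t  ],
                                         [  t,   t + 1]])
    print(np.allclose(expm(A * t), Phi_closed))   # True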
Jordan Forms
If A is n x n with n linearly independent eigenvectors, then A
can be diagonalized using a similarity transformation T^{-1}AT = D.
The transform matrix T consists of the eigenvectors of A, and
the diagonal entries of D are the corresponding eigenvalues of A.
In the case of repeated eigenvalues and fewer than n linearly
independent eigenvectors, A can instead be transformed into a nearly
diagonal matrix J, called the Jordan form of A, with
T^{-1}AT = J.
Example 3: Transform Matrix (1 of 2)
In Example 1, our system x' = Ax was
\[
x' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} x
\]
with eigenvalues r1 = 2 and r2 = 2, eigenvector ξ, and
generalized eigenvector η:
\[
\xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
\eta = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k \begin{pmatrix} 1 \\ -1 \end{pmatrix}
\]
Choosing k = 0, the transform matrix T formed from the
two vectors ξ and η is
\[
T = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}
\]
Example 3: Jordan Form (2 of 2)
The Jordan form J of A is defined by T^{-1}AT = J.
Now
\[
T = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad
T^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}
\]
and hence
\[
J = T^{-1} A T
  = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}
    \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}
    \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}
  = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}
\]
Note that the eigenvalues of A, r1 = 2 and r2 = 2, are on the
main diagonal of J, and that there is a 1 directly above the
second eigenvalue. This pattern is typical of Jordan forms.
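The same Jordan form can be produced symbolically (a sketch using sympy's jordan_form, which the text does not rely on); sympy may return a transform matrix that differs from T above by the choice of generalized eigenvector, but J itself is the same.

    import sympy as sp

    A = sp.Matrix([[1, -1], [1, 3]])
    P, J = A.jordan_form()                 # A = P*J*P**-1, i.e. P**-1 * A * P = J
    print(J)                               # Matrix([[2, 1], [0, 2]])
    print(sp.simplify(P.inv()*A*P - J))    # zero matrix, confirming the similarity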
Ch 7.9: Nonhomogeneous Linear Systems
The general theory of a nonhomogeneous system of equations
\[
\begin{aligned}
x_1' &= p_{11}(t) x_1 + p_{12}(t) x_2 + \cdots + p_{1n}(t) x_n + g_1(t) \\
x_2' &= p_{21}(t) x_1 + p_{22}(t) x_2 + \cdots + p_{2n}(t) x_n + g_2(t) \\
     &\;\;\vdots \\
x_n' &= p_{n1}(t) x_1 + p_{n2}(t) x_2 + \cdots + p_{nn}(t) x_n + g_n(t)
\end{aligned}
\]
parallels that of a single nth order linear equation.
This system can be written as x' = P(t)x + g(t), where
\[
x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \quad
g(t) = \begin{pmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{pmatrix}, \quad
P(t) = \begin{pmatrix}
p_{11}(t) & p_{12}(t) & \cdots & p_{1n}(t) \\
p_{21}(t) & p_{22}(t) & \cdots & p_{2n}(t) \\
\vdots & \vdots & \ddots & \vdots \\
p_{n1}(t) & p_{n2}(t) & \cdots & p_{nn}(t)
\end{pmatrix}
\]
General Solution
The general solution of x' = P(t)x + g(t) on I: α < t < β has
the form
\[
x = c_1 x^{(1)}(t) + c_2 x^{(2)}(t) + \cdots + c_n x^{(n)}(t) + v(t)
\]
where
\[
c_1 x^{(1)}(t) + c_2 x^{(2)}(t) + \cdots + c_n x^{(n)}(t)
\]
is the general solution of the homogeneous system x' = P(t)x
and v(t) is a particular solution of the nonhomogeneous
system x' = P(t)x + g(t).
Diagonalization
Suppose x' = Ax + g(t), where A is an n x n diagonalizable
constant matrix.
Let T be the nonsingular transform matrix whose columns are
the eigenvectors of A, and D the diagonal matrix whose
diagonal entries are the corresponding eigenvalues of A.
Suppose x satisfies x' = Ax + g(t), and let y be defined by x = Ty.
Substituting x = Ty into x' = Ax + g(t), we obtain
Ty' = ATy + g(t),
or y' = T^{-1}ATy + T^{-1}g(t),
or y' = Dy + h(t), where h(t) = T^{-1}g(t).
Note that if we can solve the diagonal system y' = Dy + h(t) for y,
then x = Ty is a solution to the original system.
Solving Diagonal System
Now y' = Dy + h(t) is a diagonal system of the form
\[
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}'
= \begin{pmatrix}
r_1 & 0 & \cdots & 0 \\
0 & r_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & r_n
\end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
+ \begin{pmatrix} h_1 \\ h_2 \\ \vdots \\ h_n \end{pmatrix}
\]
where r1, ..., rn are the eigenvalues of A.
Thus y' = Dy + h(t) is an uncoupled system of n linear first
order equations in the unknowns yk(t), which can be isolated
\[
y_k' = r_k y_k + h_k(t), \qquad k = 1, \ldots, n
\]
and solved separately, using methods of Section 2.1:
\[
y_k = e^{r_k t} \int_{t_0}^{t} e^{-r_k s}\, h_k(s)\, ds + c_k e^{r_k t},
\qquad k = 1, \ldots, n
\]
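As a small symbolic sketch (sympy assumed; the data below, r_k = -2 and h_k(t) = e^{-t} with t0 = 0, are hypothetical), the formula above can be checked against the scalar equation it is supposed to solve:

    import sympy as sp

    t, s, ck = sp.symbols('t s c_k')
    rk = -2                      # hypothetical eigenvalue
    h  = sp.exp(-s)              # hypothetical forcing h_k(s)
    t0 = 0

    yk = sp.exp(rk*t) * sp.integrate(sp.exp(-rk*s) * h, (s, t0, t)) + ck*sp.exp(rk*t)
    # Check that yk satisfies yk' = rk*yk + h_k(t)
    residual = sp.simplify(sp.diff(yk, t) - (rk*yk + sp.exp(-t)))
    print(sp.simplify(yk), residual)     # residual is 0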
Solving Original System
The solution y to y' = Dy + h(t) has components
\[
y_k = e^{r_k t} \int_{t_0}^{t} e^{-r_k s}\, h_k(s)\, ds + c_k e^{r_k t},
\qquad k = 1, \ldots, n
\]
For this solution vector y, the solution to the original system
x' = Ax + g(t) is then x = Ty.
Recall that T is the nonsingular transform matrix whose
columns are the eigenvectors of A.
Thus, when multiplied by T, the second term on the right side of
yk produces the general solution of the homogeneous equation, while
the integral term of yk produces a particular solution of the
nonhomogeneous system.
Example 1: General Solution of
Homogeneous Case (1 of 5)
Consider the nonhomogeneous system x' = Ax + g below:
\[
x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x
   + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}
   = Ax + g(t)
\]
Note: A is a Hermitian matrix, since it is real and symmetric.
The eigenvalues of A are r1 = -3 and r2 = -1, with
corresponding eigenvectors
\[
\xi^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
\xi^{(2)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]
The general solution of the homogeneous system is then
\[
x(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t}
     + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t}
\]
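These eigenvalues and eigenvectors can be confirmed numerically (a sketch assuming numpy; the columns returned are normalized, so they are proportional to (1, -1) and (1, 1)):

    import numpy as np

    A = np.array([[-2.0, 1.0],
                  [ 1.0, -2.0]])

    evals, evecs = np.linalg.eig(A)
    print(evals)        # -3 and -1 (order may vary)
    print(evecs)        # columns proportional to (1, -1) and (1, 1)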
Example 1: Transformation Matrix (2 of 5)
Consider next the transformation matrix T of eigenvectors.
Using a Section 7.7 comment, and A Hermitian, we have
T^{-1} = T* = T^T, provided we normalize ξ^(1) and ξ^(2) so that
(ξ^(1), ξ^(1)) = 1 and (ξ^(2), ξ^(2)) = 1. Thus normalize as follows:
\[
\xi^{(1)} = \frac{1}{\sqrt{(1)(1) + (-1)(-1)}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}
          = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
\xi^{(2)} = \frac{1}{\sqrt{(1)(1) + (1)(1)}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
          = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]
Then for this choice of eigenvectors,
\[
T = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad
T^{-1} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}
\]
Example 1:
Diagonal System and its Solution (3 of 5)
Under the transformation x = Ty, we obtain the diagonal
system y' = Dy + T^{-1}g(t):
\[
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}'
= \begin{pmatrix} -3 & 0 \\ 0 & -1 \end{pmatrix}
  \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
+ \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}
  \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}
= \begin{pmatrix} -3 y_1 \\ - y_2 \end{pmatrix}
+ \frac{1}{\sqrt{2}} \begin{pmatrix} 2e^{-t} - 3t \\ 2e^{-t} + 3t \end{pmatrix}
\]
Then, using methods of Section 2.1,
\[
\begin{aligned}
y_1' + 3y_1 &= \sqrt{2}\, e^{-t} - \tfrac{3}{\sqrt{2}}\, t
&&\Longrightarrow&
y_1 &= \tfrac{\sqrt{2}}{2} e^{-t}
     - \tfrac{3}{\sqrt{2}}\left(\tfrac{t}{3} - \tfrac{1}{9}\right)
     + c_1 e^{-3t} \\
y_2' + y_2 &= \sqrt{2}\, e^{-t} + \tfrac{3}{\sqrt{2}}\, t
&&\Longrightarrow&
y_2 &= \sqrt{2}\, t e^{-t} + \tfrac{3}{\sqrt{2}}\,(t - 1) + c_2 e^{-t}
\end{aligned}
\]
Example 1:
Transform Back to Original System (4 of 5)
We next use the transformation x = Ty to obtain the solution
to the original system x' = Ax + g(t):
\[
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}
  \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
= \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}
  \begin{pmatrix}
    \tfrac{\sqrt{2}}{2} e^{-t} - \tfrac{t}{\sqrt{2}} + \tfrac{\sqrt{2}}{6} + c_1 e^{-3t} \\[4pt]
    \sqrt{2}\, t e^{-t} + \tfrac{3}{\sqrt{2}}\,(t-1) + c_2 e^{-t}
  \end{pmatrix}
= \begin{pmatrix}
    k_1 e^{-3t} + \left(k_2 + \tfrac{1}{2}\right) e^{-t} + t e^{-t} + t - \tfrac{4}{3} \\[4pt]
    -k_1 e^{-3t} + \left(k_2 - \tfrac{1}{2}\right) e^{-t} + t e^{-t} + 2t - \tfrac{5}{3}
  \end{pmatrix},
\qquad k_1 = \frac{c_1}{\sqrt{2}}, \quad k_2 = \frac{c_2}{\sqrt{2}}
\]
Example 1:
Solution of Original System (5 of 5)
Simplifying further, the solution x can be written as
\[
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix}
    k_1 e^{-3t} + \left(k_2 + \tfrac{1}{2}\right) e^{-t} + t e^{-t} + t - \tfrac{4}{3} \\[4pt]
    -k_1 e^{-3t} + \left(k_2 - \tfrac{1}{2}\right) e^{-t} + t e^{-t} + 2t - \tfrac{5}{3}
  \end{pmatrix}
= k_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t}
+ k_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t}
+ \frac{1}{2} \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-t}
+ \begin{pmatrix} 1 \\ 1 \end{pmatrix} t e^{-t}
+ \begin{pmatrix} 1 \\ 2 \end{pmatrix} t
- \frac{1}{3} \begin{pmatrix} 4 \\ 5 \end{pmatrix}
\]
Note that the first two terms on the right side form the general
solution to the homogeneous system, while the remaining terms
are a particular solution to the nonhomogeneous system.
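As a symbolic check (a sympy sketch, not part of the text), the particular part of this solution — the terms with k1 = k2 = 0 — can be substituted back into x' = Ax + g(t):

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[-2, 1], [1, -2]])
    g = sp.Matrix([2*sp.exp(-t), 3*t])

    # Particular part of the solution found above (constants k1 = k2 = 0)
    x = (sp.Rational(1, 2)*sp.Matrix([1, -1])*sp.exp(-t)
         + sp.Matrix([1, 1])*t*sp.exp(-t)
         + sp.Matrix([1, 2])*t
         - sp.Rational(1, 3)*sp.Matrix([4, 5]))

    residual = sp.simplify(sp.diff(x, t) - (A*x + g))
    print(residual)      # Matrix([[0], [0]])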
Nondiagonal Case
If A cannot be diagonalized (repeated eigenvalues and a
shortage of eigenvectors), then it can be transformed to its
Jordan form J, which is nearly diagonal.
In this case the differential equations are not totally
uncoupled, because some rows of J have two nonzero
entries: an eigenvalue in the diagonal position, and a 1 in
the adjacent position to the right of the diagonal position.
However, the equations for y1, ..., yn can still be solved
consecutively, starting with yn. Then the solution x to the
original system can be found using x = Ty.
Undetermined Coefficients
A second way of solving x' = P(t)x + g(t) is the method of
undetermined coefficients. Assume P is a constant matrix,
and that the components of g are polynomial, exponential or
sinusoidal functions, or sums or products of these.
The procedure for choosing the form of the solution is usually
directly analogous to that given in Section 3.6.
The main difference arises when g(t) has the form ue^{λt},
where λ is a simple eigenvalue of P. In this case, g(t)
matches the solution form of the homogeneous system x' = P(t)x,
and as a result, it is necessary to take the nonhomogeneous
solution to be of the form ate^{λt} + be^{λt}. This form differs
from the Section 3.6 analog, ate^{λt}.
Example 2: Undetermined Coefficients (1 of 5)
Consider again the nonhomogeneous system x' = Ax + g:
\[
x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x
   + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}
   = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x
   + \begin{pmatrix} 2 \\ 0 \end{pmatrix} e^{-t}
   + \begin{pmatrix} 0 \\ 3 \end{pmatrix} t
\]
Assume a particular solution of the form
\[
v(t) = a t e^{-t} + b e^{-t} + c t + d
\]
where the vector coefficients a, b, c, d are to be determined.
Since r = -1 is an eigenvalue of A, it is necessary to include
both ate^{-t} and be^{-t}, as mentioned on the previous slide.
Example 2:
Matrix Equations for Coefficients (2 of 5)
Substituting
\[
v(t) = a t e^{-t} + b e^{-t} + c t + d
\]
in for x in our nonhomogeneous system x' = Ax + g,
\[
x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x
   + \begin{pmatrix} 2 \\ 0 \end{pmatrix} e^{-t}
   + \begin{pmatrix} 0 \\ 3 \end{pmatrix} t,
\]
we obtain
\[
-a t e^{-t} + (a - b) e^{-t} + c
= A a t e^{-t} + A b e^{-t} + A c t + A d
+ \begin{pmatrix} 2 \\ 0 \end{pmatrix} e^{-t}
+ \begin{pmatrix} 0 \\ 3 \end{pmatrix} t
\]
Equating coefficients, we conclude that
\[
Aa = -a, \quad
Ab = a - b - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
Ac = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad
Ad = c
\]
Example 2:
Solving Matrix Equation for a (3 of 5)
Our matrix equations for the coefficients are:
\[
Aa = -a, \quad
Ab = a - b - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
Ac = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad
Ad = c
\]
From the first equation, we see that a is an eigenvector of A
corresponding to the eigenvalue r = -1, and hence has the form
\[
a = \begin{pmatrix} \alpha \\ \alpha \end{pmatrix}
\]
We will see on the next slide that α = 1, and hence
\[
a = \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]
Example 2:
Solving Matrix Equation for b (4 of 5)
Our matrix equations for the coefficients are:
\[
Aa = -a, \quad
Ab = a - b - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
Ac = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad
Ad = c
\]
Substituting a^T = (α, α) into the second equation,
\[
Ab = \begin{pmatrix} \alpha - 2 \\ \alpha \end{pmatrix} - b
\;\Longrightarrow\;
\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix} b
= \begin{pmatrix} \alpha - 2 \\ \alpha \end{pmatrix}
\;\Longrightarrow\;
\begin{pmatrix} -1 & 1 & \alpha - 2 \\ 1 & -1 & \alpha \end{pmatrix}
\longrightarrow
\begin{pmatrix} -1 & 1 & \alpha - 2 \\ 0 & 0 & 2\alpha - 2 \end{pmatrix}
\]
Thus α = 1, and solving for b, we obtain
\[
b = k \begin{pmatrix} 1 \\ 1 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix};
\quad \text{choosing } k = 0, \quad
b = \begin{pmatrix} 0 \\ -1 \end{pmatrix}
\]
Example 2: Particular Solution (5 of 5)
Our matrix equations for the coefficients are:
\[
Aa = -a, \quad
Ab = a - b - \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
Ac = -\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \quad
Ad = c
\]
Solving the third equation for c, and then the fourth equation for d,
it is straightforward to obtain c^T = (1, 2), d^T = (-4/3, -5/3).
Thus our particular solution of x' = Ax + g is
\[
v(t) = \begin{pmatrix} 1 \\ 1 \end{pmatrix} t e^{-t}
     - \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{-t}
     + \begin{pmatrix} 1 \\ 2 \end{pmatrix} t
     - \frac{1}{3} \begin{pmatrix} 4 \\ 5 \end{pmatrix}
\]
Comparing this to the result obtained in Example 1, we see
that both particular solutions would be the same if we had
chosen k = 1/2 for b on the previous slide, instead of k = 0.
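The coefficient equations can also be solved numerically (a sketch assuming numpy). Since A + I is singular, the equation for b is handled with a least-squares solve; its minimum-norm solution happens to be the k = 1/2 member of the b-family, while c and d come from ordinary solves:

    import numpy as np

    A = np.array([[-2.0, 1.0],
                  [ 1.0, -2.0]])
    I = np.eye(2)

    a = np.array([1.0, 1.0])                         # eigenvector for r = -1 (alpha = 1)
    # (A + I) b = a - (2, 0): singular but consistent -> minimum-norm solution
    b = np.linalg.lstsq(A + I, a - np.array([2.0, 0.0]), rcond=None)[0]
    c = np.linalg.solve(A, np.array([0.0, -3.0]))    # A c = -(0, 3)
    d = np.linalg.solve(A, c)                        # A d = c
    print(b)   # [ 0.5 -0.5], the k = 1/2 member of the b-family
    print(c)   # [1. 2.]
    print(d)   # approximately [-4/3, -5/3]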
Variation of Parameters: Preliminaries
A more general way of solving x' = P(t)x + g(t) is the
method of variation of parameters.
Assume P(t) and g(t) are continuous on α < t < β, and let
Ψ(t) be a fundamental matrix for the homogeneous system.
Recall that the columns of Ψ are linearly independent
solutions of x' = P(t)x, and hence Ψ(t) is invertible on the
interval α < t < β, and also Ψ'(t) = P(t)Ψ(t).
Next, recall that the solution of the homogeneous system
can be expressed as x = Ψ(t)c.
Analogous to Section 3.7, assume the particular solution of
the nonhomogeneous system has the form x = Ψ(t)u(t),
where u(t) is a vector to be found.
Variation of Parameters: Solution
We assume a particular solution of the form x = Ψ(t)u(t).
Substituting this into x' = P(t)x + g(t), we obtain
\[
\Psi'(t)\, u(t) + \Psi(t)\, u'(t) = P(t)\, \Psi(t)\, u(t) + g(t)
\]
Since Ψ'(t) = P(t)Ψ(t), the above equation simplifies to
\[
u'(t) = \Psi^{-1}(t)\, g(t)
\]
Thus
\[
u(t) = \int \Psi^{-1}(t)\, g(t)\, dt + c,
\]
where the vector c is an arbitrary constant of integration.
The general solution to x' = P(t)x + g(t) is therefore
\[
x = \Psi(t)\, c + \Psi(t) \int_{t_1}^{t} \Psi^{-1}(s)\, g(s)\, ds,
\qquad t_1 \in (\alpha, \beta) \text{ arbitrary}
\]
Variation of Parameters: Initial Value Problem
For an initial value problem
x' = P(t)x + g(t), x(t0) = x^(0),
the general solution to x' = P(t)x + g(t) is
\[
x = \Psi(t)\, \Psi^{-1}(t_0)\, x^{(0)}
  + \Psi(t) \int_{t_0}^{t} \Psi^{-1}(s)\, g(s)\, ds
\]
Alternatively, recall that the fundamental matrix Φ(t)
satisfies Φ(t0) = I, and hence the general solution is
\[
x = \Phi(t)\, x^{(0)}
  + \Phi(t) \int_{t_0}^{t} \Phi^{-1}(s)\, g(s)\, ds
\]
In practice, it may be easier to row reduce matrices and
solve necessary equations than to compute Ψ^{-1}(t) and
substitute into equations. See next example.
Example 3: Variation of Parameters (1 of 3)
Consider again the nonhomogeneous system x' = Ax + g:
\[
x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x
   + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}
   = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x
   + \begin{pmatrix} 2 \\ 0 \end{pmatrix} e^{-t}
   + \begin{pmatrix} 0 \\ 3 \end{pmatrix} t
\]
We have previously found the general solution to the homogeneous
case, with corresponding fundamental matrix:
\[
\Psi(t) = \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}
\]
Using the variation of parameters method, our solution is given
by x = Ψ(t)u(t), where u(t) satisfies Ψ(t)u'(t) = g(t), or
\[
\begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}
\begin{pmatrix} u_1' \\ u_2' \end{pmatrix}
= \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}
\]
Example 3: Solving for u(t) (2 of 3)
Solving Ψ(t)u'(t) = g(t) by row reduction,
\[
\begin{pmatrix} e^{-3t} & e^{-t} & 2e^{-t} \\ -e^{-3t} & e^{-t} & 3t \end{pmatrix}
\longrightarrow
\begin{pmatrix} e^{-3t} & e^{-t} & 2e^{-t} \\ 0 & 2e^{-t} & 2e^{-t} + 3t \end{pmatrix}
\longrightarrow
\begin{pmatrix} e^{-3t} & e^{-t} & 2e^{-t} \\ 0 & e^{-t} & e^{-t} + 3t/2 \end{pmatrix}
\longrightarrow
\begin{pmatrix} e^{-3t} & 0 & e^{-t} - 3t/2 \\ 0 & e^{-t} & e^{-t} + 3t/2 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1 & 0 & e^{2t} - 3te^{3t}/2 \\ 0 & 1 & 1 + 3te^{t}/2 \end{pmatrix}
\]
so that
\[
u_1' = e^{2t} - \tfrac{3}{2}\, t e^{3t}, \qquad
u_2' = 1 + \tfrac{3}{2}\, t e^{t}
\]
It follows that
\[
u(t) = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}
= \begin{pmatrix}
    \tfrac{1}{2} e^{2t} - \tfrac{1}{2}\, t e^{3t} + \tfrac{1}{6} e^{3t} + c_1 \\[4pt]
    t + \tfrac{3}{2}\, t e^{t} - \tfrac{3}{2} e^{t} + c_2
  \end{pmatrix}
\]
Example 3: Solving for x(t) (3 of 3)
Now x(t) = Ψ(t)u(t), and hence we multiply
\[
x = \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}
    \begin{pmatrix}
      \tfrac{1}{2} e^{2t} - \tfrac{1}{2}\, t e^{3t} + \tfrac{1}{6} e^{3t} + c_1 \\[4pt]
      t + \tfrac{3}{2}\, t e^{t} - \tfrac{3}{2} e^{t} + c_2
    \end{pmatrix}
\]
to obtain, after collecting terms and simplifying,
\[
x = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t}
  + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t}
  + \begin{pmatrix} 1 \\ 1 \end{pmatrix} t e^{-t}
  + \frac{1}{2} \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-t}
  + \begin{pmatrix} 1 \\ 2 \end{pmatrix} t
  - \frac{1}{3} \begin{pmatrix} 4 \\ 5 \end{pmatrix}
\]
Note that this is the same solution as in Example 1.
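The same computation can be reproduced symbolically (a sympy sketch; not part of the text): invert Ψ(t), integrate u'(t) = Ψ^{-1}(t)g(t) term by term, and multiply back by Ψ(t) to obtain a particular solution.

    import sympy as sp

    t = sp.symbols('t')
    Psi = sp.Matrix([[ sp.exp(-3*t), sp.exp(-t)],
                     [-sp.exp(-3*t), sp.exp(-t)]])
    g = sp.Matrix([2*sp.exp(-t), 3*t])

    uprime = sp.simplify(Psi.inv() * g)                    # u'(t) = Psi^{-1}(t) g(t)
    u = uprime.applyfunc(lambda f: sp.integrate(f, t))     # one antiderivative (c = 0)
    x_part = sp.simplify(Psi * u)                          # a particular solution

    print(uprime)     # (e^{2t} - 3t e^{3t}/2,  1 + 3t e^{t}/2), as above
    print(x_part)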
Laplace Transforms
The Laplace transform can be used to solve systems of
equations. Here, the transform of a vector is the vector of
component transforms, denoted by X(s):
\[
L\{x(t)\} = L\left\{ \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} \right\}
= \begin{pmatrix} L\{x_1(t)\} \\ L\{x_2(t)\} \end{pmatrix}
= X(s)
\]
Then by extending Theorem 6.2.1, we obtain
\[
L\{x'(t)\} = s X(s) - x(0)
\]
Example 4: Laplace Transform (1 of 5)
Consider again the nonhomogeneous system x' = Ax + g:
\[
x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x
   + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}
\]
Taking the Laplace transform of each term, we obtain
\[
s X(s) - x(0) = A X(s) + G(s)
\]
where G(s) is the transform of g(t), and is given by
\[
G(s) = \begin{pmatrix} 2/(s+1) \\ 3/s^2 \end{pmatrix}
\]
Example 4: Transfer Matrix (2 of 5)
Our transformed equation is
\[
s X(s) - x(0) = A X(s) + G(s)
\]
If we take x(0) = 0, then the above equation becomes
\[
s X(s) = A X(s) + G(s)
\]
or
\[
(sI - A)\, X(s) = G(s)
\]
Solving for X(s), we obtain
\[
X(s) = (sI - A)^{-1}\, G(s)
\]
The matrix (sI - A)^{-1} is called the transfer matrix.
Example 4: Finding Transfer Matrix (3 of 5)
Then
\[
A = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}
\;\Longrightarrow\;
sI - A = \begin{pmatrix} s+2 & -1 \\ -1 & s+2 \end{pmatrix}
\]
Solving for (sI - A)^{-1}, we obtain
\[
(sI - A)^{-1} = \frac{1}{(s+1)(s+3)}
\begin{pmatrix} s+2 & 1 \\ 1 & s+2 \end{pmatrix}
\]
Example 4: Transfer Matrix (4 of 5)
Next, X(s) = (sI - A)^{-1} G(s), and hence
\[
X(s) = \frac{1}{(s+1)(s+3)}
\begin{pmatrix} s+2 & 1 \\ 1 & s+2 \end{pmatrix}
\begin{pmatrix} 2/(s+1) \\ 3/s^2 \end{pmatrix}
\]
or
\[
X(s) = \begin{pmatrix}
\dfrac{2(s+2)}{(s+1)^2 (s+3)} + \dfrac{3}{s^2 (s+1)(s+3)} \\[8pt]
\dfrac{2}{(s+1)^2 (s+3)} + \dfrac{3(s+2)}{s^2 (s+1)(s+3)}
\end{pmatrix}
\]
Example 4: Transfer Matrix (5 of 5)
Thus
\[
X(s) = \begin{pmatrix}
\dfrac{2(s+2)}{(s+1)^2 (s+3)} + \dfrac{3}{s^2 (s+1)(s+3)} \\[8pt]
\dfrac{2}{(s+1)^2 (s+3)} + \dfrac{3(s+2)}{s^2 (s+1)(s+3)}
\end{pmatrix}
\]
To solve for x(t) = L^{-1}{X(s)}, use partial fraction expansions
of both components of X(s), and then Table 6.2.1 to obtain:
\[
x = \frac{2}{3} \begin{pmatrix} -1 \\ 1 \end{pmatrix} e^{-3t}
  + \begin{pmatrix} 2 \\ 1 \end{pmatrix} e^{-t}
  + \begin{pmatrix} 1 \\ 1 \end{pmatrix} t e^{-t}
  + \begin{pmatrix} 1 \\ 2 \end{pmatrix} t
  - \frac{1}{3} \begin{pmatrix} 4 \\ 5 \end{pmatrix}
\]
Since we assumed x(0) = 0, this solution differs slightly
from the previous particular solutions.
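The transform-domain algebra can also be carried out symbolically (a sketch assuming sympy, which the text does not use; any Heaviside(t) factors appearing in the inverse transforms equal 1 for t > 0):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    A = sp.Matrix([[-2, 1], [1, -2]])
    G = sp.Matrix([2/(s + 1), 3/s**2])          # transform of g(t)

    X = (s*sp.eye(2) - A).inv() * G             # transfer matrix times G(s)
    x = X.applyfunc(lambda F: sp.inverse_laplace_transform(F, s, t))
    # For t > 0 this should reproduce the closed-form solution given above
    print(sp.simplify(x))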
Summary (1 of 2)
The method of undetermined coefficients requires no
integration but is limited in scope and may involve several
sets of algebraic equations.
Diagonalization requires finding the inverse of the transformation
matrix and solving uncoupled first order linear equations.
When the coefficient matrix is Hermitian, the inverse of the
transformation matrix can be found without calculation,
which is very helpful for large systems.
The Laplace transform method involves matrix inversion,
matrix multiplication, and inverse transforms. This method
is particularly useful for problems with discontinuous or
impulsive forcing functions.
Summary (2 of 2)
Variation of parameters is the most general method, but it
involves solving linear algebraic equations with variable
coefficients, integration, and matrix multiplication, and
hence may be the most computationally complicated
method.
For many small systems with constant coefficients, all of
these methods work well, and there may be little reason to
select one over another.