
Differentiation Theory

Hans Reijnierse
March 27, 2012
This syllabus is partially based on Chapters 24 and 25 of the book
Mathematics for Economists,
Carl P. Simon - Lawrence Blume,
W.W. Norton & Company,
ISBN 0-393-95733-0.
Contents
1 Introduction
2 Linear first order differential equations
3 The Local Existence Theorem and Euler's method
4 Separable differential equations
5 Homogeneous differential equations
6 Complex numbers
7 Linear second order differential equations
8 Systems of differential equations
8.1 Linear autonomous systems
8.2 The substitution method
9 Stability of stationary solutions
9.1 Stability in linear systems
9.2 Stability in non-linear systems
A Index
1 Introduction
Differential equations are equations involving functions and their derivatives. They exist in various types. Let us start with an easy one:

y' = t · y.

Its interpretation is: y is a function of which, at any moment in time t, the derivative equals t times the value of the function y at that moment. Apparently, y is a function of time. In practice this will be the case most of the time, and that is why the character t denotes the primary variable, i.e., the variable that does not depend on any other variable. A more complete way to denote y' = t y is

for all t in IR: y'(t) = t · y(t).

The domain of y has been chosen to be IR, although any interval in IR will do as well. Differential equations frequently occur in physics. Like difference equations, they are used to model and study dynamic processes. While difference equations use a discrete time index (t ∈ {0, 1, 2, 3, ...}), differential equations use a continuous time index (t ∈ IR). In economics several examples of dynamic processes can be found as well, like the savings balance on a bank account.
Example 1 Let y : IR_+ → IR denote the credit in euro on a savings account as a function of time. As long as no money is deposited or withdrawn, the credit has the feature that the speed with which it increases is proportional to its own size. The constant factor with which its size is multiplied is nothing but the interest rate. This feature is formulated by means of a differential equation. Choosing an interest rate of 4% results in

y' = 0.04 y. (1)

It is easily verified that y = e^{0.04t} obeys the equation. This makes y a solution of (1). One specific solution is also called a particular solution. Multiplying y with a constant k gives another particular solution. Section 2 shows that all solutions have this form. The general solution is thereby y = k e^{0.04t}, in which k is called the parameter of the general solution. The general solution set is the set containing all solutions of the equation. This way to describe all solutions is more formal, but also more complete, since it specifies which values the parameter(s) can attain. The general solution set of differential equation (1) equals

{ y(t) = k e^{0.04t} | k ∈ IR }.
Often a differential equation is accompanied by an initial condition, or, more generally, a boundary condition. Together they form an initial value problem. In the example of a savings account, the starting amount of money, let us say 1000 euro, determines an initial condition. We get

y' = 0.04 y,   y(0) = 1000.

In Section 3 it will turn out that initial value problems often have unique solutions. The solution y of the initial value problem above is given by

y(t) = 1000 e^{0.04t}   (t ∈ IR_+).

In general, it is very difficult or even impossible to find a solution of a differential equation. The first aim of this course is to provide some classes of differential equations in which we can. Some of these classes require the use of complex numbers, which are the topic of Section 6. Furthermore, Euler's method will be discussed in order to be able to approximate solutions of initial value problems of which the differential equation cannot be solved (Section 3). The following example introduces the notions order and stationary solution.
Example 2 It takes twice as much force to stretch a spring twice as far (Hooke's Law). Likewise, the force induced by gravity that moves the pendulum of a classical clock is approximately proportional to the displacement of the pendulum from its rest position. Because force is in its turn proportional to acceleration (F = m · a), the movement¹ of the pendulum can be described by (we do not take friction into account)

y'' = -c y,

in which c is a positive constant. The minus sign displays that the direction of acceleration is opposite to the direction of displacement. Section 7 provides a method to find general solution sets of equations of this type. We give it already:

{ y(t) = k_1 cos(√c t) + k_2 sin(√c t) | k_1, k_2 ∈ IR }.

The general solution has two parameters: k_1 and k_2. In general, the number of parameters equals the order of the differential equation. By definition, the order of a differential equation is the order of the highest derivative occurring. Because sine and cosine are periodic functions, the solution displays that (by absence of friction) the pendulum will oscillate forever. If we had taken friction into account, the general solution set would have shown that the pendulum slows down and reaches its rest position in the end. This is a common phenomenon, and therefore constant (particular) solutions play such an important role that they are given specific names. They are called stationary solutions, rest points, equilibrium solutions or just equilibria. This notion will be discussed in Section 9.

¹ In the case of a slight displacement, the movement can be considered to be one-dimensional.
In an ordinary differential equation the unknown function y has (a subset of) IR as its domain. Contrarily, the unknown function y : IR^n → IR in a partial differential equation has multiple arguments and thereby partial derivatives. These will not be discussed in this course. Therefore the phrase "ordinary" will usually be omitted. Hence, an (ordinary) first order differential equation is of the form

y' = F(t, y).

The function F is a function of two variables; t is a variable. y plays two roles: it is a variable of F and a function of t.
If in the expression F(t, y) the variable t does not occur, then the differential equation is called autonomous or time-independent. An example is y' = 0.04 y. The equation y' = t y is an example of a non-autonomous differential equation.
Exercise 51
a. What is the form of an autonomous first order differential equation?
b. Let y : IR → IR be a particular solution of an autonomous first order differential equation. Define the function z : IR → IR by z(t) = y(t - 1) for all t ∈ IR. Show that z is another particular solution of this differential equation.
A function y : IR → IR is called a C^n-function if it is n times differentiable and the n-th derivative is continuous (n ∈ {1, 2, 3, ...}). We denote the n-th derivative of y by y^{(n)}. For example, each polynomial² is a C^n-function for each n. The function y(t) = t^{1/3} is not a C^1-function because the derivative y'(t) = (1/3) t^{-2/3} is not defined at t = 0. An antiderivative (or primitive) of y, for example Y(t) = (3/4) t^{4/3}, is a C^1-function (with y as its derivative), but it is no C^2-function. Finally, a function y : IR^m → IR is called a C^1-function if all of its partial derivatives are continuous (and thereby exist).
Exercise 52 Is y(t) = |t| a C^1-function? Determine the natural number n such that the function z(t) = |t³| is a C^n-function, but not a C^{n+1}-function.
The following section provides the general solution sets of a rather large class of differential equations.
2 Linear first order differential equations
An n-th order differential equation is called linear if it equates a linear combination of y, y', ..., y^{(n)} to a function of t, i.e., there are functions a_0(t), ..., a_n(t) and b(t) such that the differential equation is of the form

a_n y^{(n)} + a_{n-1} y^{(n-1)} + ... + a_0 y = b.

Linear first order equations are usually denoted as follows:

y' = a y + b.

Here a and b may depend on t. If so, we assume that they depend continuously on t, which implies that all solutions of the differential equation will be C^1-functions. We distinguish between four subclasses before we deal with the general case.
² A polynomial (function) is of the form y(t) = a_0 + a_1 t + a_2 t² + ... + a_m t^m.
(i) y' = b(t), so a = 0 and b is a function of t.
The differential equation tells that the derivative of y equals b. Hence, the general solution set of the equation is the set of all antiderivatives of b. Let B be an antiderivative of b. We obtain the general solution set by adding a constant of integration k to B:

{ y = B(t) + k | k ∈ IR }.
(ii) y' = a y, i.e., a is a constant and b equals zero.
It is easy to verify that y(t) = k e^{at} is a solution for each real number k; just plug y and its derivative into the equation and infer that the left and right hand sides coincide. It takes more effort to show that this yields the general solution. We will do this by means of the so-called method of variation of parameters. When looking for more solutions, it might be an idea to make k dependent on t. So consider the function y = k(t) e^{at} and suppose it is a solution. Differentiation gives

y' = k'(t) e^{at} + k(t) e^{at} a = k' e^{at} + a y = k' e^{at} + y'.

Hence, k' e^{at} must be the zero function. Since e^{at} ≠ 0 for all t, k' itself is the zero function. This is only the case if k is a constant. Now take an arbitrary function y : IR → IR. Then y can be denoted as y = k(t) e^{at}; just define k by k(t) = y(t) e^{-at}. This proves that { y = k e^{at} | k ∈ IR } is the general solution set of case (ii). The method of variation of parameters is quite generally applicable and will return several times.
(iii) y' = a y + b, in which a and b are non-zero constants.
One particular solution is readily found: the stationary solution ȳ = -b/a. In general, the stationary solutions of an autonomous differential equation can be found by setting all (higher) derivatives to zero, resulting in an equation in y. We can proceed as follows. Let y = k e^{at} be a solution of the corresponding case (ii) type of differential equation y' = a y. Add ȳ to y and verify that the sum ȳ + y obeys case (iii). Indeed,

(ȳ + y)' = 0 + a y = a y   and   a(ȳ + y) + b = a(-b/a + y) + b = a y.

This is another idea arising from a general principle; it will be discussed in Section 5 with Theorem 5.2 as a result. Section 5 contains an exercise in which it is asked to show that the general solution set of case (iii) is given by

{ y(t) = -b/a + k e^{at} | k ∈ IR }. (2)
(iv) y' = a(t) y.
This case is barely more difficult than case (ii). Let A be a primitive of a; take e.g., A(t) = ∫_0^t a(s) ds.³ For every constant k, the function y = k e^{A(t)} is a solution, as the chain rule displays:

y'(t) = k e^{A(t)} a(t) = a(t) y(t).

Exercise 53 Show by means of the method of variation of parameters that the general solution set of case (iv) is given by { y = k e^{A(t)} | k ∈ IR }.
It is time for the general case, i.e.,

y' = a(t) y + b(t). (3)

The previous elaborations indicate that it might be a good idea to consider the function y = k(t) e^{A(t)}, in which A is a primitive of a. Differentiation gives

y'(t) = k'(t) e^{A(t)} + k(t) e^{A(t)} a(t) = k'(t) e^{A(t)} + a(t) y.

Hence, y obeys (3) if and only if k'(t) e^{A(t)} = b(t). If it is possible to integrate e^{-A(t)} b(t), a solution is found, because k(t) = ∫_0^t e^{-A(s)} b(s) ds + c gives

y(t) = ( ∫_0^t b(s) e^{-A(s)} ds + c ) e^{A(t)}. (4)

The lower bound of the integral is chosen to be zero. Any other real number would have worked as well. Since any function y : IR → IR can be written as y(t) = k(t) e^{A(t)}, (4) is the general solution of (3). Of course the constant of integration c can be replaced by the customary character k.
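Formula (4) can also be evaluated numerically when the integrals have no closed form. The Python sketch below (the helper name solve_linear and the use of the trapezoidal rule are my assumptions, not the syllabus's recipe) applies (4) to y' = a(t)y + b(t) with a(t) = -1 and b(t) = 1, whose exact solution with c = 0 is y(t) = 1 - e^{-t}:

```python
import math

def solve_linear(a, b, t, c=0.0, steps=20000):
    """Evaluate formula (4): y(t) = (int_0^t b(s) e^{-A(s)} ds + c) e^{A(t)},
    with A(t) = int_0^t a(s) ds; both integrals via the trapezoidal rule."""
    h = t / steps
    A = 0.0          # running value of A(s)
    integral = 0.0   # running value of int_0^s b(u) e^{-A(u)} du
    f_prev = b(0.0) * math.exp(-A)
    a_prev = a(0.0)
    for i in range(1, steps + 1):
        s = i * h
        A += h * (a_prev + a(s)) / 2      # extend A(s) by one trapezoid
        a_prev = a(s)
        f = b(s) * math.exp(-A)
        integral += h * (f_prev + f) / 2  # extend the outer integral
        f_prev = f
    return (integral + c) * math.exp(A)

# y' = -y + 1 with y(0) = 0 has exact solution y(t) = 1 - e^{-t}
approx = solve_linear(lambda t: -1.0, lambda t: 1.0, 2.0)
assert abs(approx - (1 - math.exp(-2.0))) < 1e-6
```

The inner integral A(s) is built up alongside the outer one, mirroring how A appears inside the integrand of (4).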
Exercise 54 Solve the initial value problem

y' = 2t y + e^{t²},   y(0) = 1.
³ The Fundamental Theorem of Calculus gives that this choice of A(t) is a primitive of a(t) indeed (see also Section 1.3 of Integration Theory). Furthermore, ∫_0^t should be interpreted to be -∫_t^0 in the case that t < 0.
3 The Local Existence Theorem and Euler's method
Before we proceed to look for solutions of (classes of) differential equations, we discuss a theorem that states that under mild conditions there is something to search for:

Theorem 3.1 [Local Existence Theorem] Consider the initial value problem

y' = F(t, y),   y(t_0) = y_0. (5)

If F is a continuous function in t and y, then there exists an open interval I ⊆ IR containing t_0 and a C^1-function y : I → IR such that y(t_0) = y_0 and y'(t) = F(t, y(t)) for all t in I. In other words, y is a solution of the initial value problem (5). If F is a C^1-function, then this solution is, given a fixed domain I, unique.
The fact that there always exists a solution does not mean that it can always be found. Even worse, solutions do not always have a closed formula. There are relatively simple looking differential equations of which it is not possible to write down a solution, like

y' = √(t³ + 1),   y' = e^{t²}   and   y' = y² + t².
We will not provide a proof of Theorem 3.1, but give an example to show how the graph of the solution of an initial value problem can be approximated by means of a so-called direction field. These depictions lead to Euler's method, which is developed to approximate solutions numerically. The method at least suggests the validity of Theorem 3.1. Firstly, we have to think about how a differential equation can be displayed geometrically. In each point (t, y) of the plane the differential equation y' = F(t, y) provides the slope of the (a) function y satisfying y(t) = y that obeys the equation. By drawing a little piece of such a function in a number of points (the more, the better) on the (t, y)-plane, one gets a figure of the paths of such functions.

Example 3 Consider the first equation from the introduction, y' = t y. We know the general solution set already (see Section 2, case (iv)), which enables us to compare geometrically found solutions with exact solutions y(t) = k e^{t²/2}. Figure 1 depicts a grid on which at each grid point a tiny piece of a solution that visits that point has been
[Figure 1: The direction field of y' = t y.]
drawn. For example, the slope of a solution equals 2 in the point (1, 2) and -1 in the point (1, -1). By connecting some of the points in a logical way, i.e., from the left to the right and following the directions of the drawn little segments, one gets an idea of solutions of the differential equation.
A direction field is not only suitable to obtain a global idea of all solutions; it is useful as well if one is interested in a specific solution. Let us add an initial value to the problem:

y' = t y,   y(0) = 1. (6)

In the formulation of Theorem 3.1, t_0 has been chosen to be 0 and y_0 to be 1. The solution is y = e^{t²/2}. The point (t_0, y_0) is the starting point of an iterative process. The next point is chosen in the neighborhood of the current one; a little bit further into the direction given by the field. Verify that this direction is (1, F(t_0, y_0)). Choose a step size h and define the point (t_1, y_1) to be

(t_1, y_1) = (t_0, y_0) + h · (1, F(t_0, y_0)) = (0 + h · 1, 1 + h · 0) = (h, 1).

y_1 is an estimation of y(h), the value of the solution of (6) at time moment t_0 + h.
Repeating the procedure results in a series of estimated function values:

(t_2, y_2) = (t_1, y_1) + h · (1, F(t_1, y_1)) = (2h, 1 + h²),
(t_3, y_3) = (2h, 1 + h²) + h · (1, 2h + 2h³) = (3h, 1 + 3h² + 2h⁴),
...
Plotting the sequence gives an idea of the solution of initial value problem (6). Figure 2 displays (t_0, y_0), (t_1, y_1) and (t_2, y_2) with h = 1. The plots corresponding to h = 1/10, h = 1/100 and h = 1/1000 are shown as well. Clearly, h = 1 does not give a good approximation. The figure suggests that the plots converge to y when h tends to 0. Proving this (in general) would result in a proof of Theorem 3.1.
Euler's method is nothing but the formalization of the process described above. The method is used to approximate the value of a solution of an initial value problem at a given moment of time (the end value, so to speak).

Formulation of the problem
Let y' = F(t, y), y(t_0) = y_0 be an initial value problem and let T be a moment in time unequal to t_0. Let y be the solution of the problem. Approximate y(T).

Euler's solution
Choose the appropriate number of iterations n. The step size h is then determined, being h = (T - t_0)/n.⁴ Define t_i = t_{i-1} + h, i ∈ {1, 2, ..., n}, or, equivalently,

t_i = t_0 + i · h   (i ∈ {1, 2, ..., n}).

[t_0, T] is thereby divided into n equally sized subsegments. Define subsequently

y_i = y_{i-1} + h · F(t_{i-1}, y_{i-1})   (i ∈ {1, 2, ..., n}).

Then y_n is the required approximation of y(T).
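The scheme above translates directly into code. The following Python sketch (the function name euler is mine, not the syllabus's) implements the iteration y_i = y_{i-1} + h · F(t_{i-1}, y_{i-1}) and reproduces the approximation of e² from Example 3:

```python
import math

def euler(F, t0, y0, T, n):
    """Approximate y(T) for y' = F(t, y), y(t0) = y0, using n Euler steps."""
    h = (T - t0) / n          # may be negative if t0 > T; the scheme still works
    t, y = t0, y0
    for _ in range(n):
        y = y + h * F(t, y)   # follow the direction field for one step
        t = t + h
    return y

# Example 3: y' = t*y, y(0) = 1; the exact value is y(2) = e^2 ≈ 7.389
approx = euler(lambda t, y: t * y, 0.0, 1.0, 2.0, 2000)
assert abs(approx - math.exp(2)) < 0.02   # Table 1 reports an error of about 0.017
```

With n = 2 the same call returns 2, matching the hand computation below Table 1's introduction.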
Let us return to Example 3, y' = t y, y(0) = 1, and take T = 2. The exact solution is y(T) = e^{T²/2} = e² ≈ 7.389. We have y_i = y_{i-1} + h(t_0 + (i-1)h) y_{i-1} = y_{i-1} + (i-1)h² y_{i-1}.

⁴ If t_0 > T, the step size will be negative. This might look odd, but does not spoil the process.
[Figure 2: Plots of approximations of y. The marked points are (t_0, y_0), (t_1, y_1) and (t_2, y_2); the plotted end values are 5.97, 7.22 and 7.37.]
Figure 2: Plots of approximations of y
If n = 2, then h = 1 and y
0
= 1,
y
1
= y
0
+ (1 1) h
2
y
0
= 1 y(1),
y
2
= y
1
+ (2 1) h
2
y
1
= 2 y(2).
Table 1 provides the results for five different step sizes. It shows that the ratio between the error and the step size tends to converge when h tends to 0. To put it differently, 10 times more work gives a 10 times better accuracy. This phenomenon is a common feature of Euler's method. Therefore the method is called convergent of order O(h), or of linear order. Other methods, like the one of Runge-Kutta, have the property that 10 times more work yields a 100 times better accuracy. This type of convergence is called of order O(h²) or quadratic. A bit more precise is to say that the occurring error of a method is of some order O(h^n). The formal definition of the notion is

Definition 3.2 Let f : IR → IR be a function with lim_{h→0} f(h) = 0. f is called convergent
number of iterations n       |     2 |    20 |   200 |  2000 | 20000
step size h                  |     1 |   0.1 |  0.01 | 0.001 | 0.0001
approximation of y(T)        |     2 | 5.973 | 7.220 | 7.372 | 7.387
error = y(T) - approximation | 5.389 | 1.416 | 0.169 | 0.017 | 0.002
error/h                      | 5.389 | 14.16 | 16.88 | 17.21 | 17.32

Table 1: Estimations of e² by Euler's method
of order O(h^n) if there exist δ > 0 and K > 0 such that for all h in (0, δ)

|f(h)| < K h^n.
Exercise 55 For small values of h, sin(h) is often approximated by h itself. The occurring error is of course f(h) = |sin(h) - h|. Show that for all h ∈ (0, 1/2) we have f(h) < 4h³. This implies that the approximation is of order O(h³).
Hint. The Taylor series of the sine function can be found in Section 6.
We conclude the chapter by showing that if F is continuous, but not C^1, then the uniqueness of solutions is not guaranteed.

Example 4 Consider the initial value problem

y' = 3t y^{1/3},   y(0) = 0.

The function F is continuous. The Jacobian of F is DF(t, y) = (3 y^{1/3}, t y^{-2/3}) and is not defined for y = 0. The problem has at least two solutions: the zero function and y(t) = t³.
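Both claimed solutions can be checked directly in code (a small sketch of mine; it simply compares y' with 3t · y^{1/3} pointwise):

```python
import math

def F(t, y):
    # right hand side 3t * y^(1/3); copysign extends the cube root to negative y
    root = math.copysign(abs(y) ** (1.0 / 3.0), y)
    return 3.0 * t * root

# solution 1: y = 0, so y' = 0, and indeed F(t, 0) = 0 for all t
for t in [-2.0, 0.0, 1.5]:
    assert F(t, 0.0) == 0.0

# solution 2: y = t^3, so y' = 3t^2, which must equal F(t, t^3)
for t in [-2.0, 0.5, 3.0]:
    assert math.isclose(F(t, t ** 3), 3.0 * t ** 2, rel_tol=1e-9)
```

Both functions pass through (0, 0), so the initial value problem really has (at least) two distinct solutions.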
Exercise 56 Let y be the solution of the initial value problem

y' = 2ty/(t² + 1),   y(-3) = 1.

Compose the scheme of Euler's method and find an approximation for y(0). Use a step size of h = 1. (The answer is: y(0) is approximated by 0.)
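The Euler scheme for this exercise can be tabulated by a few lines of code (a sketch assuming, as stated above, the initial condition y(-3) = 1 and step size h = 1; the scheme indeed ends at exactly 0):

```python
F = lambda t, y: 2 * t * y / (t ** 2 + 1)

t, y, h = -3.0, 1.0, 1.0
values = [(t, y)]
while t < 0:
    y = y + h * F(t, y)   # one Euler step
    t = t + h
    values.append((t, y))

# the scheme visits (-3, 1) -> (-2, 0.4) -> (-1, 0.08) -> (0, 0.0)
assert values[-1] == (0.0, 0.0)
```

The last step subtracts y from itself exactly, which is why the approximation is exactly 0 rather than merely close to it.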
4 Separable differential equations
The collection of linear differential equations forms a large class of equations that can be solved explicitly. Another class is the one of separable differential equations. A first order differential equation y' = F(t, y) is called separable if F(t, y) is the product of a function of y and a function of t:

F(t, y) = g(y) · h(t).

The equations

y' = y²(t² + 1),   y' = (y + 1)/t,   y' = y² + 1   and   y' = e^{t+y}

are all separable; the equations

y' = y² + t²,   y' = a(t)y + b(t),   y' = ty + t²y²   and   y' = e^{ty}

are not. Solving separable equations can be done as follows. Move the function g(y) to the left hand side by dividing by it:

y'/g(y) = h(t). (7)

Don't bother too much that this might cause a division by zero. Just verify any found solution.
We denote z = 1/g, so z(y) y' = h(t). Let Z and H be antiderivatives of z and h respectively. The derivative of Z(y(t)) with respect to t equals (recall the chain rule)

(Z(y(t)))' = z(y(t)) · y'(t).

Integrating both sides of Equation (7) therefore results in

Z(y) = H(t) + k. (8)

In the case of an initial value problem this is the moment to determine the value of k. What rests is the task of manipulating (8) into a form with y on the left hand side, i.e., into an explicit solution. We provide two examples.
Example 5 Consider the initial value problem

y' = 4t √y,   y(0) = 1.

To solve this problem, we can rewrite the differential equation to y'/(2√y) = 2t and then integrate both sides. This gives √y = t² + k. The substitution y(0) = 1 gives k = 1. Taking the square of both sides gives the explicit solution, y = (t² + 1)². Finally, the answer must be verified! Since y is strictly positive, the division by √y has not caused any troubles.
Example 6 Let u : IR_+ → IR be a utility function for amounts of monetary possessions. The Arrow-Pratt measure that indicates relative risk aversion is given by

ρ(x) = -u''(x) x / u'(x)   (x > 0). (9)

Which utility functions are according to this measure relatively risk neutral? Take ρ(x) to be a constant, say ρ(x) = c, and rewrite Equation (9) to

u''(x)/u'(x) = -c/x.

We can transform this second order equation into a first order equation by firstly focussing on u' instead of u. Therefore we denote u' by y and get

y'/y = -c/x.

Integrating with respect to x gives

ln(y) = -c ln(x) + k,

which is equivalent to y = e^{-c ln(x) + k} = x^{-c} e^k. Substitute k_1 = e^k and integrate once more to obtain u, i.e.,

u(x) = ∫ k_1 x^{-c} dx = k_1 ln(x) + k_2              if c = 1,
                       = (k_1/(1-c)) x^{1-c} + k_2    if c ≠ 1.    (10)

The relatively risk neutral utility functions apparently form a family described by means of three parameters: k_1, k_2 and c. Mathematically spoken, the parameters can take any value. In order to obtain strictly increasing utility functions, the restriction k_1 > 0 is necessary.
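One can verify numerically that the functions in (10) indeed have a constant Arrow-Pratt measure (a Python sketch with arbitrarily chosen parameter values k_1 = 2, k_2 = 5, c = 3; derivatives by central differences):

```python
import math

def u(x, k1=2.0, k2=5.0, c=3.0):
    # the c != 1 branch of (10), with arbitrarily chosen k1 > 0, k2 and c
    return k1 / (1 - c) * x ** (1 - c) + k2

def rho(x, h=1e-5):
    # Arrow-Pratt measure rho(x) = -u''(x) x / u'(x), via central differences
    u1 = (u(x + h) - u(x - h)) / (2 * h)
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / (h * h)
    return -u2 * x / u1

# rho is (approximately) the constant c = 3 at every sample point
for x in [0.5, 1.0, 4.0]:
    assert abs(rho(x) - 3.0) < 1e-3
```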
Exercise 57 Give for each of the following equations the general solution set. The maximal domain on which a solution can be defined depends on the value of the solution parameter, say k. Give for each element of the solution set the maximal domain. Finally, give the particular solution obeying y(1) = 1.

a) y' = 5y,   b) y' = y t²,   c) t³ y' = y³,   d) y³ y' = t³.

Exercise 58 Solve the initial value problem of Exercise 56, i.e.,

y' = 2ty/(t² + 1),   y(-3) = 1.
5 Homogeneous differential equations
A linear n-th order differential equation is called homogeneous if it equates a linear combination of y, y', ..., y^{(n)} to zero. To put it differently, there exist functions a_0(t), ..., a_n(t) such that the differential equation has the form

a_n y^{(n)} + a_{n-1} y^{(n-1)} + ... + a_0 y = 0.

For example,

y'' + e^t y' - (t² - 3) y = 0. (11)

Exercise 59 Verify which of the subclasses in Chapter 2 treat autonomous and which treat homogeneous differential equations.
The property of being homogeneous is only relevant for linear differential equations. If an equation is homogeneous, it is customary to denote all terms on the left hand side of the equality sign. The advantage of homogeneity is the following: adding solutions to each other or multiplying them with scalars leads to new solutions.

Theorem 5.1 If y and ỹ are solutions of the same homogeneous differential equation, then for all α and β in IR the linear combination αy + βỹ is a solution of it as well.

Exercise 60 Prove this theorem.
Hint. Read the proof of Theorem 5.2.

If we consider differential equation (11), the theorem above does not help at all, since it is already impracticable to find just one particular solution. The theorem does help if we look at the equation

y'''' - y = 0. (12)

It is not difficult to come up with four particular solutions: e^t, e^{-t}, sin(t) and cos(t). For every k_1, k_2, k_3, k_4 in IR,

y = k_1 e^t + k_2 e^{-t} + k_3 sin(t) + k_4 cos(t) (13)

is another solution (verify this). Actually, these are all of them (we will not prove this).
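That every combination of type (13) solves (12) can also be checked numerically. The sketch below (my own check, not from the syllabus) approximates the fourth derivative with a five-point central stencil and compares it with y itself:

```python
import math

def y(t, k1=2.0, k2=3.0, k3=-1.0, k4=0.5):
    # a combination of type (13), with arbitrarily chosen parameters
    return k1 * math.exp(t) + k2 * math.exp(-t) + k3 * math.sin(t) + k4 * math.cos(t)

def d4(f, t, h=0.01):
    # five-point central stencil for the fourth derivative, error O(h^2)
    return (f(t - 2*h) - 4*f(t - h) + 6*f(t) - 4*f(t + h) + f(t + 2*h)) / h**4

# y'''' - y = 0, up to discretization error, at several sample points
for t in [-1.0, 0.0, 2.0]:
    assert abs(d4(y, t) - y(t)) < 1e-3
```

Changing the four parameters does not break the check, which mirrors Theorem 5.1: the solution set is closed under linear combinations.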
A (linear) non-homogeneous differential equation has got one or more terms that do not depend on y, e.g.,

y'''' - y = t³ - 7. (14)

t³ - 7 is called the rest term. The corresponding homogeneous differential equation is the equation in which the rest term is set to zero; e.g., (12) is the homogeneous equation corresponding to (14). Equation (14) is less difficult to solve than it might look. A particular solution is readily given: ȳ(t) = 7 - t³. Another one can be obtained by adding to ȳ a function of type (13), e.g., y = e^t, because

(7 - t³ + e^t)'''' - (7 - t³ + e^t) = e^t - (7 - t³ + e^t) = t³ - 7.

In fact, according to the following theorem, the general solution of (14) equals the sum of any particular solution ȳ of (14) and the general solution of (12).
Theorem 5.2 Let S be the general solution set of a linear differential equation, let ȳ be a particular solution of the equation and let S_h be the general solution set of the corresponding homogeneous differential equation. Then

S = { ȳ + y | y ∈ S_h }.

Proof For any function y in S_h, the i-th derivative of ȳ + y equals ȳ^{(i)} + y^{(i)}. If we make the convention that y^{(0)} = y, this applies for i = 0 as well. Let n be the order of the equation. There are functions a_0, ..., a_n and b such that the equation can be denoted by

Σ_{i=0}^{n} a_i(t) y^{(i)}(t) = b(t).

We have

Σ_{i=0}^{n} a_i(t) [ȳ + y]^{(i)}(t) = Σ_{i=0}^{n} a_i(t) [ȳ^{(i)}(t) + y^{(i)}(t)]
                                    = Σ_{i=0}^{n} a_i(t) ȳ^{(i)}(t) + Σ_{i=0}^{n} a_i(t) y^{(i)}(t)
                                    = b(t) + 0
                                    = b(t).

ȳ + y is thereby an element of S. Conversely, in the same way it can be shown that the difference of ȳ and an arbitrary other solution y in S is an element of S_h. Hence, y is the sum of ȳ and an element of S_h (namely y - ȳ). □
This theorem will be applied to solve linear second order differential equations in Chapter 7. However, before we can go to that subject, more equipment is required, i.e., some basic knowledge concerning complex numbers. That will be the topic of Chapter 6.

Exercise 61 Prove by means of Theorem 5.2 that (2) gives the general solution set of case (iii) in Chapter 2.
6 Complex numbers
Not every polynomial has real valued solutions (roots); there is no real number to solve x² + 1 = 0. It would be convenient though, in e.g., linear algebra, where the eigenvalues of square matrices can be obtained by means of the roots of their characteristic polynomials (see Section 8). The next chapter introduces characteristic polynomials for differential equations. Their roots will be used to describe general solution sets.
By introducing complex numbers, all polynomials will have roots, albeit not in IR. We start with the number i. It is designed to give the equation x² + 1 = 0 a root. According to this equation, i² apparently equals -1. This supports the notation

i = √(-1).

The other root of x² + 1 = 0 is thereby -i, since

(-i)² = (-1)² i² = 1 · (-1) = -1.

All other complex numbers are found by scaling (multiplying by reals) and addition, i.e., for each choice of α and β in IR, α + βi is a complex number. The set of complex numbers is denoted by C:

C = { α + βi | α, β ∈ IR }.

All elements are necessary, which is illustrated by the following exercise.

Exercise 62 Show that α + βi is a root of the equation x² - 2αx + α² + β² = 0.

More complex numbers are not necessary in the sense that by now every quadratic equation has roots.⁵ Like IR can be represented by a line, so can C be represented by
[Figure 3: The complex numbers form a plane. Shown are the real axis, the imaginary axis, the unit circle, and a number z with its projections re(z) and im(z)·i, together with its conjugate.]
a plane. The real numbers α + 0·i (α ∈ IR) are situated on the horizontal axis. The numbers on the vertical axis, i.e., 0 + βi (β ∈ IR), are called (purely) imaginary numbers. If z = α + βi, then α is called the real part of z and denoted by re(z), and β is called the imaginary part of z and denoted by im(z). Geometrically, re(z) can be obtained from z by projecting z orthogonally on the real axis, and im(z)·i can be found by projecting z orthogonally on the imaginary axis. For a real number, the distance to zero is equal to its absolute value. The same holds for complex numbers. The circle in Figure 3 is the unit circle of C and consists of all complex numbers with unit length. The figure shows that

|α + βi| = √(α² + β²).

Quadratic equations can still be solved by the abc-formula; e.g., equation x² - 2x + 2 = 0 has roots

λ_1, λ_2 = (2 ± √(-4))/2 = 1 ± i.

If β ≠ 0 and α + βi is one of the roots of a quadratic equation with real valued coefficients, then α - βi is the other one (verify this yourself). These two numbers are called each other's conjugates. The conjugate of a number is denoted by a bar, i.e., the conjugate of α + βi equals α - βi.

⁵ Even every polynomial has complex roots.
Figure 3 shows that conjugates are each other's reflections in the real axis.

Exercise 63 Show that for all λ ∈ C we have |λ|² = λλ̄.

Adding, subtracting, multiplying and dividing complex numbers can be performed (defined) in straightforward manners. One has to be careful though when multiplying square roots.
Exercise 64
a) Show that

(α + βi)/(γ + δi) = (αγ + βδ)/(γ² + δ²) + i (βγ - αδ)/(γ² + δ²)

whenever γ + δi ≠ 0.

b) What is wrong with the following line of thought?

1 = √1 = √((-1) · (-1)) = √(-1) · √(-1) = i² = -1.

Elegant, and important for this course, is Euler's formula. For all θ in IR we have

e^{iθ} = cos θ + i sin θ.
There are two ways to get convinced of the validity of this formula. Firstly, one can substitute the exponent, the sine and the cosine by their Taylor series:

e^t = 1 + (1/1!) t + (1/2!) t² + (1/3!) t³ + (1/4!) t⁴ + (1/5!) t⁵ + (1/6!) t⁶ + ... ,
sin t = (1/1!) t - (1/3!) t³ + (1/5!) t⁵ - ... ,   and
cos t = 1 - (1/2!) t² + (1/4!) t⁴ - (1/6!) t⁶ + ... .
Exercise 65 Prove the validity of Euler's formula by means of these series.

An alternative way, which is more in the line of this course, is the following. Let y and ỹ be defined by y(t) = e^{it} and ỹ(t) = cos t + i sin t respectively. Then y'(t) = i e^{it} = i y(t) and ỹ'(t) = -sin t + i cos t = i ỹ(t). Hence, both y and ỹ solve the initial value problem

y' = iy,   y(0) = 1.

The Local Existence Theorem tells that y and ỹ must coincide.
A corollary of Euler's formula is

Lemma 6.1 For all z in C we have |e^z| = e^{re(z)} > 0.

Proof Let z = α + βi. Then

|e^z| = |e^α| · |e^{βi}| = e^α |cos β + i sin β| = e^α √(cos²β + sin²β) = e^{re(z)}. □
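Euler's formula and Lemma 6.1 are easy to check with Python's built-in complex arithmetic (a sketch; cmath is the standard library's complex math module):

```python
import cmath
import math

# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
for theta in [0.0, 1.0, math.pi / 3, 2.5]:
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    assert cmath.isclose(lhs, rhs, rel_tol=1e-12)

# Lemma 6.1: |e^z| = e^{re(z)} > 0
for z in [2 + 3j, -1 + 0.5j, -4 - 2j]:
    assert math.isclose(abs(cmath.exp(z)), math.exp(z.real), rel_tol=1e-12)
    assert abs(cmath.exp(z)) > 0
```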
7 Linear second order differential equations
Probably the most important class of differential equations is the one of the linear second order differential equations with constant coefficients. By the Law of Newton, i.e., F = m · a, differential equations modeling dynamic processes are often of the second order, because acceleration a is the second derivative of movement (cf. Example 2). Furthermore, this type of differential equation is commonly used to model and approximate non-linear (and thereby much more difficult) systems. This chapter therefore considers differential equations of the type

a y''(t) + b y'(t) + c y(t) = d(t), (15)

in which a, b and c are real numbers and d may depend on t. Section 5 indicates that it is worthwhile to consider first the corresponding homogeneous differential equation

a y''(t) + b y'(t) + c y(t) = 0. (16)

If a equals zero, then we are dealing with a first order equation and we can refer to Section 2. There we have seen solutions of the form y = e^{rt}. What happens if we try such a function again? Plug y = e^{rt}, y' = r e^{rt} and y'' = r² e^{rt} into Equation (16), resulting in

a r² e^{rt} + b r e^{rt} + c e^{rt} = e^{rt} (a r² + b r + c) = 0.

Because e^{rt} never equals zero,⁶ y = e^{rt} solves (16) if and only if r obeys

a r² + b r + c = 0. (17)
This quadratic equation is called the characteristic equation or characteristic function
of (16) (and of (15) too). The abc-formula provides its roots, i.e.,
. .
1
,
2
=
b

b
2
4ac
2a
. . .
The three types of root pairs that can occur determine the three cases between which we distinguish.
(i) The characteristic function has two different real roots λ₁ and λ₂.
In this case e^{λ₁t} and e^{λ₂t} solve (16). Theorem 5.1 gives that for all k₁, k₂ in IR,

  y(t) = k₁e^{λ₁t} + k₂e^{λ₂t}    (18)

⁶Even if r is allowed to be complex, see Lemma 6.1.
is a solution. Have we found them all? In any case, we can now solve every initial value problem of the type

  ay'' + by' + cy = 0,  y(t₀) = y₀,  y'(t₀) = z₀    (19)

with b² − 4ac > 0 and t₀, y₀, z₀ ∈ IR. To show this, take for ease t₀ = 0 and try to find values for k₁ and k₂ such that (18) obeys (19). Substituting t = 0 in (18) gives y₀ = k₁ + k₂. Substituting t = 0 in the derivative of (18) results in z₀ = λ₁k₁ + λ₂k₂. These equations form a system with k₁ and k₂ as the unknowns:

  [ 1   1  ] [ k₁ ]   [ y₀ ]
  [ λ₁  λ₂ ] [ k₂ ] = [ z₀ ].

Because λ₁ ≠ λ₂, the determinant of the 2×2-matrix is unequal to zero, so the matrix has an inverse. Therefore, the system has a unique solution, i.e., there exist k₁, k₂ such that y(t) = k₁e^{λ₁t} + k₂e^{λ₂t} is a solution of (19). In order to prove that (18) is the general solution, a variant of the Local Existence Theorem is required involving second order initial value problems. This lies beyond the scope of this course.
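The two steps above (solve the 2×2 system for k₁, k₂, then check the candidate solution) can be carried out numerically. The sketch below (Python, illustration only) uses a hypothetical equation y'' − 3y' + 2y = 0 with roots 1 and 2, initial values y(0) = 1, y'(0) = 0, and verifies the result by finite differences.

```python
import math

# Hypothetical case (i) IVP: y'' - 3y' + 2y = 0, y(0) = 1, y'(0) = 0;
# the roots of r^2 - 3r + 2 = 0 are 1 and 2.
l1, l2 = 1.0, 2.0
y0, z0 = 1.0, 0.0

# Solve [[1, 1], [l1, l2]] (k1, k2)^T = (y0, z0)^T by Cramer's rule;
# the determinant l2 - l1 is nonzero because the roots differ.
det = l2 - l1
k1 = (y0 * l2 - z0) / det
k2 = (z0 - y0 * l1) / det

def y(t):
    return k1 * math.exp(l1 * t) + k2 * math.exp(l2 * t)

# Check the initial condition and the equation via central differences.
h = 1e-4
assert math.isclose(y(0.0), 1.0, abs_tol=1e-9)
for t in [0.0, 0.5, 1.0]:
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    assert abs(d2 - 3 * d1 + 2 * y(t)) < 1e-4
```

Here k₁ = 2 and k₂ = −1, so the solution is y(t) = 2e^t − e^{2t}.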
(ii) The characteristic function has one root λ.
In this case we have found only one independent solution yet, i.e., y = e^{λt}. The method of variation of parameters leads to a second one. Suppose y = k(t)e^{λt} is a solution. Plugging y, y' = k'(t)e^{λt} + λk(t)e^{λt} and y'' = k''(t)e^{λt} + 2λk'(t)e^{λt} + λ²k(t)e^{λt} into (16) and dividing by e^{λt} gives

  ak'' + 2aλk' + aλ²k + bk' + bλk + ck = 0.    (20)

This does not seem to help too much, until one realizes that case (ii) implies that b² − 4ac = 0 and λ = −b/(2a). Substituting this information into (20), the k-terms cancel because aλ² + bλ + c = 0 and the k'-terms cancel because 2aλ + b = 0, which leaves

  ak'' = 0.

Since a ≠ 0, k must be a linear function of t! If we choose k(t) = t, we have found a solution independent of e^{λt}. Because any function y : IR → IR equals k(t)e^{λt} for some function k(t), we have found the general solution of case (ii):

  y = k₁e^{λt} + k₂te^{λt}.
Example 7 Consider the initial value problem

  y'' − 4y' + 4y = 0,  y(0) = 2,  y'(0) = 5.

The characteristic equation is r² − 4r + 4 = (r − 2)² = 0. Therefore the general solution equals y(t) = k₁e^{2t} + k₂te^{2t}. Substituting t = 0 gives 2 = y(0) = k₁e⁰ + 0, so k₁ = 2. Differentiating and then substituting t = 0 gives 5 = y'(0) = 4e⁰ + k₂(e⁰ + 2·0·e⁰), so k₂ = 1. The answer is thereby y(t) = (2 + t)e^{2t}.
Exercise 66 Show that, just like in the previous case, for each initial value problem of the form

  ay'' + by' + cy = 0,  y(0) = y₀,  y'(0) = z₀

with a ≠ 0 and b² − 4ac = 0, there is a unique choice of the parameters k₁ and k₂ such that y = k₁e^{λt} + k₂te^{λt} solves the problem.
(iii) The characteristic function has complex roots λ₁ = α + βi, λ₂ = α − βi; each other's conjugates.
This case more or less starts where case (i) ends. The general solution can be given similarly, i.e.,

  y(t) = c₁e^{λ₁t} + c₂e^{λ₂t}.    (21)

However, the parameters c₁ and c₂ must have complex values in order to obtain real valued solutions. The question is how. By manipulating (21) and renaming the parameters a more appropriate answer is found:

  y(t) = c₁e^{αt+βit} + c₂e^{αt−βit}
       = e^{αt}[c₁e^{βit} + c₂e^{−βit}]
       = e^{αt}[c₁(cos(βt) + i sin(βt)) + c₂(cos(βt) − i sin(βt))]
       = e^{αt}[(c₁ + c₂) cos(βt) + (c₁ − c₂)i sin(βt)].

The latter formulation reveals that if c₁ and c₂ are chosen in such a way that c₁ + c₂ is real valued and c₁ − c₂ is purely imaginary, then y is real valued. This is the case when c₁ and c₂ are each other's conjugates. Rename the parameters by defining k₁ = c₁ + c₂ and k₂ = (c₁ − c₂)i. The general solution is thereby

  y(t) = e^{αt}[k₁ cos(βt) + k₂ sin(βt)].
Resuming the three cases results in

Theorem 7.1 The general solution set of the second order linear differential equation ay'' + by' + cy = 0 is

  { k₁e^{λ₁t} + k₂e^{λ₂t} | k₁, k₂ ∈ IR }           if b² − 4ac > 0,
  { k₁e^{λt} + k₂te^{λt} | k₁, k₂ ∈ IR }            if b² − 4ac = 0,
  { e^{αt}[k₁ cos(βt) + k₂ sin(βt)] | k₁, k₂ ∈ IR }  if b² − 4ac < 0,

in which λ₁ and λ₂ are the two different real valued roots of the corresponding characteristic equation ar² + br + c = 0, λ is the unique root, or α + βi and α − βi are the two complex roots, respectively.
What remains in this section is to try and find one particular solution of the inhomogeneous differential equation (15). Often there is a particular solution of the same type as d(t). When d(t) is a constant, check whether the equation has a stationary solution. When d(t) is a polynomial of order n, y(t) might be an nᵗʰ order polynomial.
Example 8 Find a particular solution of the differential equation

  y'' + y = t² + 5t + 7.

According to the hint it is worthwhile to try y(t) = kt² + ℓt + m. Substitution into the equation gives

  2k + (kt² + ℓt + m) = t² + 5t + 7.

This is (only) valid for all t in IR if k = 1, ℓ = 5 and 2k + m = 7. Hence, y(t) = t² + 5t + 5 is a particular solution.
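The particular solution of Example 8 is easy to check by hand (y'' = 2), and the check can be written out as a tiny Python sketch (illustration only; the sample values of t are arbitrary):

```python
# Example 8 check: for y(t) = t^2 + 5t + 5 we have y'' = 2,
# so y'' + y = t^2 + 5t + 7 for every t.
def y(t):
    return t**2 + 5 * t + 5

for t in [-2.0, 0.0, 1.0, 3.5]:
    assert y(t) + 2 == t**2 + 5 * t + 7
```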
Exercise 67 Determine the general solution set of y'' + y = t² + 5t + 7.
When d is a trigonometric function, e.g., d(t) = 3 cos(t) − 2 sin(t), then try y(t) = k cos(t) + ℓ sin(t). When d(t) is some exponential function, try a multiple of d(t). This type of search is called the method of undetermined coefficients. In the special case that d(t) is a particular solution of the corresponding homogeneous equation, we once again apply the method of variation of parameters.
Example 9 Find a particular solution of the differential equation

  y'' + y = 2 sin(t).

sin(t) and cos(t) are solutions of the corresponding homogeneous differential equation, so y = k sin(t) + ℓ cos(t) will not work. Therefore, we let the parameters k and ℓ depend on t and try y = k(t) sin(t) + ℓ(t) cos(t). Then

  y' = k' sin(t) + k cos(t) + ℓ' cos(t) − ℓ sin(t),
  y'' = k'' sin(t) + 2k' cos(t) − k sin(t) + ℓ'' cos(t) − 2ℓ' sin(t) − ℓ cos(t),
  y'' + y = k'' sin(t) + 2k' cos(t) + ℓ'' cos(t) − 2ℓ' sin(t).

The latter should equal 2 sin(t). To get rid of the terms involving the cosine, we take k' and ℓ'' equal to zero. Then k'' cancels as well, leaving

  ℓ'' = 0,  −2ℓ' sin(t) = 2 sin(t).

A function ℓ with ℓ'' = 0 and ℓ' = −1 is readily found, e.g., ℓ(t) = −t will do. This makes y = −t cos(t). Verifying the answer is always recommended!
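Following the example's own advice, here is a numerical verification of the answer of Example 9 (a Python sketch, illustration only):

```python
import math

# Verify Example 9: y(t) = -t cos(t) should satisfy y'' + y = 2 sin(t).
def y(t):
    return -t * math.cos(t)

h = 1e-4
for t in [0.0, 0.8, 2.0]:
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # finite-difference y''
    assert abs(d2 + y(t) - 2 * math.sin(t)) < 1e-4
```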
Exercise 68 Give for each of the following differential equations the solution y that obeys y(0) = 1, y'(0) = 0:

  a) 6y'' − y = 0,          b) y'' + 5y' + 6y = 0,
  c) y'' − 6y' + 9y = 0,    d) y'' + y' + y = 0.
Exercise 69 Show that if y is a particular solution of ay'' + by' + cy = d₁(t) and ỹ is a particular solution of ay'' + by' + cy = d₂(t), then y + ỹ is a particular solution of ay'' + by' + cy = d₁(t) + d₂(t). Find a particular solution of the differential equation

  y'' − 2y = 6t + 4e^t.
8 Systems of differential equations

So far, we considered one differential equation at a time with one variable y (besides time). Many (economic) issues require models in which several variables affect each other. Think e.g., of a micro economic dynamic model in which the variables represent prices of complementary goods (or substitutes). Therefore we dedicate two chapters to systems of differential equations. In general a system of first order differential equations looks like

  y₁' = F₁(t, y₁, ..., yₙ),
    ⋮
  yₙ' = Fₙ(t, y₁, ..., yₙ),
or y' = F(t, y) for short. Hence, y : IR → IRⁿ is a vector valued function of t; y(t) = [y₁(t), ..., yₙ(t)]ᵀ. Its derivative y' is the Jacobian of y; y'(t) = [y₁'(t), ..., yₙ'(t)]ᵀ. The transpose sign ᵀ indicates that we are dealing with column vectors.
The system turns into an initial value problem if we add a boundary condition y(t₀) = y₀ for some t₀ in IR, y₀ in IRⁿ. It is impossible to solve this type of system in general. Euler's method can be adapted to approximate solutions in a natural way though. Given the starting point y₀, starting time t₀ and step size h > 0 we can define
  y_1 = y_0 + h·F(t₀, y_0),
  y_2 = y_1 + h·F(t₁, y_1),

and so on.⁷ The Local Existence Theorem can be generalized as well. If F is continuous, then every initial value problem has a solution, and if F is a C¹-function, the solution is unique.
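The vector version of Euler's method is a direct transcription of the two formulas above. The sketch below (Python, illustration only) applies it to the hypothetical system y₁' = y₂, y₂' = −y₁ with y(0) = (1, 0), whose exact solution is y₁ = cos t, y₂ = −sin t.

```python
import math

# Euler's method for a system y' = F(t, y), sketched for the hypothetical
# system y1' = y2, y2' = -y1 through (1, 0), solved by (cos t, -sin t).
def F(t, y):
    return [y[1], -y[0]]

def euler(F, t0, y0, h, steps):
    t, y = t0, list(y0)
    for _ in range(steps):
        f = F(t, y)
        y = [yi + h * fi for yi, fi in zip(y, f)]  # y_{k+1} = y_k + h F(t_k, y_k)
        t += h
    return y

approx = euler(F, 0.0, [1.0, 0.0], 1e-4, 10000)  # integrate up to t = 1
assert abs(approx[0] - math.cos(1.0)) < 1e-3
assert abs(approx[1] + math.sin(1.0)) < 1e-3
```

As in the scalar case, halving h roughly halves the error of the approximation.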
Let us quickly switch over to a simplification of the model that we can handle.
8.1 Linear autonomous systems

A system of differential equations is called autonomous (again) if F does not depend on t directly, i.e., y' = F(y). An autonomous system is called linear if there exist a square matrix A = [aᵢⱼ], i, j ∈ {1, ..., n}, and a vector b in IRⁿ such that the system looks like

  y₁' = a₁₁y₁ + ... + a₁ₙyₙ − b₁,
    ⋮
  yₙ' = aₙ₁y₁ + ... + aₙₙyₙ − bₙ,

or, shortly, y' = Ay − b.
Exercise 70 A linear system is called homogeneous if b = 0. Show that the sum of two solutions of a homogeneous system is another solution.

As in Section 5, determining the general solution set takes two steps; find one particular solution of the system and add it to the general solution of the corresponding homogeneous system y' = Ay. If A is non-singular, a stationary solution is readily found:

Exercise 71 Show that if det(A) ≠ 0, then y = A⁻¹b is a particular solution of the system y' = Ay − b.

If det(A) = 0, it takes a bit more effort to find a particular solution.

⁷The underscore is used to discriminate between the function y₁, which is the first coordinate of y, and the vector y_1, being the successor of y_0.
Exercise 72 Consider the system

  y' = [ 2  4 ] y − [ 2 ]
       [ 1  2 ]     [ 2 ].

a) Show that the system does not have stationary solutions.
b) Show that all of its solutions obey y₁ − 2y₂ = 2t + k for some constant k.
c) Find a particular solution of the system satisfying y₁ − 2y₂ = 2t.
Let us now focus on the homogeneous system y' = Ay. If A is a diagonal matrix, then we are in fact dealing with n independent differential equations, each of which can be solved easily by means of Section 2, case (ii). The general solution is in this case

  y₁(t) = k₁e^{a₁₁t}, ..., yₙ(t) = kₙe^{aₙₙt},  i.e.,  y = Σᵢ₌₁ⁿ kᵢe^{aᵢᵢt}eᵢ.    (22)

Here, eᵢ denotes the iᵗʰ unit vector, so eᵢ = (0, ..., 0, 1, 0, ..., 0)ᵀ with the one on the iᵗʰ position.
This special case leads to the general solutions of a large class of matrices A; the class of diagonalizable matrices. A is called diagonalizable if there exist an invertible matrix P and a diagonal matrix D such that

  A = PDP⁻¹,

which is equivalent to AP = PD and to P⁻¹A = DP⁻¹. This is exactly the case if A has n independent eigenvectors. A vector v in IRⁿ, unequal to 0, is an eigenvector of A if there exists a λ in IR with Av = λv. Such a scalar λ is called an eigenvalue of A. To be complete we will prove this result from linear algebra.
Theorem 8.1 An n×n-matrix A is diagonalizable if and only if A has n independent eigenvectors.

Proof "⇒" Take P and D such that P is invertible, D is a diagonal matrix, and AP = PD. Denote the iᵗʰ column of P by vᵢ and denote the iᵗʰ diagonal element of D by λᵢ. The iᵗʰ column of AP is thereby equal to Avᵢ. The iᵗʰ column of PD equals λᵢeᵢ pre-multiplied by P, i.e., P(λᵢeᵢ) = λᵢP(eᵢ) = λᵢvᵢ. We derive Avᵢ = λᵢvᵢ, making λᵢ an eigenvalue of A with eigenvector vᵢ. Because P is invertible, the columns of P are independent.
"⇐" Let P be a square matrix of which the columns are n independent eigenvectors of A, i.e., P = [v₁ ⋯ vₙ], and define D by

  D = [ λ₁  0   ⋯   0  ]
      [ 0   λ₂  ⋱   ⋮  ]
      [ ⋮   ⋱   ⋱   0  ]
      [ 0   ⋯   0   λₙ ],

in which λᵢ is the eigenvalue corresponding to vᵢ. Because the columns of P are independent, P is non-singular. Furthermore,

  AP = A[v₁ ⋯ vₙ] = [Av₁ ⋯ Avₙ] = [λ₁v₁ ⋯ λₙvₙ]

and

  PD = P[λ₁e₁ ⋯ λₙeₙ] = [λ₁P(e₁) ⋯ λₙP(eₙ)] = [λ₁v₁ ⋯ λₙvₙ].
Resuming, we are looking for solutions of the system y' = Ay with A = PDP⁻¹, in which the columns of P are independent eigenvectors vᵢ of A and the diagonal elements of D are the corresponding eigenvalues λᵢ.
Let ȳ be a solution of y' = Ay. Define z̄ : IR → IRⁿ by z̄(t) = P⁻¹ȳ(t). Then z̄ solves the system of differential equations z' = Dz, because

  z̄' = (P⁻¹ȳ)' = P⁻¹(ȳ') = P⁻¹Aȳ = DP⁻¹ȳ = Dz̄.

z̄ is thereby of the form (22), i.e.,

  z̄(t) = Σᵢ₌₁ⁿ kᵢe^{λᵢt}eᵢ.
It is easy to derive ȳ from z̄:

  ȳ = Pz̄ = P(Σᵢ₌₁ⁿ kᵢe^{λᵢt}eᵢ) = Σᵢ₌₁ⁿ kᵢe^{λᵢt}P(eᵢ) = Σᵢ₌₁ⁿ kᵢe^{λᵢt}vᵢ.

Apparently, any solution ȳ of y' = Ay can be denoted in this way. On the other hand, any function of this form solves y' = Ay:
  (Σᵢ₌₁ⁿ kᵢe^{λᵢt}vᵢ)' = Σᵢ₌₁ⁿ kᵢ(e^{λᵢt}vᵢ)' = Σᵢ₌₁ⁿ kᵢλᵢe^{λᵢt}vᵢ = Σᵢ₌₁ⁿ kᵢe^{λᵢt}A(vᵢ) = A(Σᵢ₌₁ⁿ kᵢe^{λᵢt}vᵢ).
We have found the general solution. This is worth a theorem.

Theorem 8.2 Let A be an n×n-matrix with n independent eigenvectors v₁, ..., vₙ with corresponding eigenvalues λ₁, ..., λₙ. Then the general solution set of the system of differential equations y' = Ay is given by

  { y(t) = Σᵢ₌₁ⁿ kᵢe^{λᵢt}vᵢ | (k₁, ..., kₙ) ∈ IRⁿ }.
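Theorem 8.2 can be illustrated numerically. The sketch below (Python, illustration only) uses the hypothetical matrix A = [0 1; −2 −3], which has eigenvalues −1 and −2 with eigenvectors (1, −1) and (1, −2), and checks by finite differences that the resulting combination satisfies y' = Ay.

```python
import math

# Theorem 8.2 on the hypothetical matrix A = [[0, 1], [-2, -3]],
# with eigenpairs (-1, (1, -1)) and (-2, (1, -2)).
A = [[0.0, 1.0], [-2.0, -3.0]]
eigs = [(-1.0, (1.0, -1.0)), (-2.0, (1.0, -2.0))]
k = [0.8, -0.3]

def y(t):
    out = [0.0, 0.0]
    for ki, (lam, v) in zip(k, eigs):
        for j in range(2):
            out[j] += ki * math.exp(lam * t) * v[j]
    return out

# y'(t) should equal A y(t); check by central finite differences.
h = 1e-5
for t in [0.0, 0.7, 1.5]:
    yp, ym, yt = y(t + h), y(t - h), y(t)
    for j in range(2):
        d1 = (yp[j] - ym[j]) / (2 * h)
        rhs = A[j][0] * yt[0] + A[j][1] * yt[1]
        assert abs(d1 - rhs) < 1e-6
```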
Let v be an eigenvector of A with corresponding eigenvalue λ. Then

  (A − λIₙ)v = Av − λv = 0,

in which Iₙ denotes the nᵗʰ identity matrix. Because eigenvectors are not equal to the zero vector, A − λIₙ is apparently singular, i.e., has determinant 0. This gives a method to determine eigenvalues of A; they are exactly the roots of the characteristic equation of A:

  det(A − λIₙ) = 0.

After having found an eigenvalue λ, the corresponding eigenvectors are attained by finding the non-trivial solutions of the (dependent) system

  (A − λIₙ)v = 0.

We refer to Example 10 for an elaboration. Another way to obtain eigenvectors and -values is giving Matlab the command [P, D] = eig(A). If A is diagonalizable, it provides the matrices P and D.
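Readers working in Python instead of Matlab can use NumPy's `numpy.linalg.eig` for the same purpose; it returns the eigenvalues as a vector w and the eigenvectors as the columns of P, so D = diag(w). A sketch (the sample matrix is an illustration, not taken from the text):

```python
import numpy as np

# NumPy analogue of the Matlab call [P, D] = eig(A).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # sample matrix
w, P = np.linalg.eig(A)
D = np.diag(w)

# If A is diagonalizable, A = P D P^{-1} up to rounding.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
assert np.allclose(sorted(w), [-2.0, -1.0])
```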
Exercise 73 Find the general solution set of the system

  x' = 17x + 4y,
  y' = 12x + 7y.
Exercise 74 Determine the general solution set of

  x' = x + y − z,
  y' = 4x − 2y + z,
  z' = 4x − 4y + 3z.
Theorem 8.2 remains valid if the eigenvalues of A are complex, as long as there are n independent eigenvectors. However, it is not very elegant to give a complex valued general solution set in the case of a real valued matrix A. It is possible to put this right by following the lines of Section 7, case (iii), but the following subsection provides an alternative instead, at least for the case n = 2. It is also applicable if the eigenvectors are dependent.
8.2 The substitution method

Example 10 Determine the general solution of the two by two system

  x'(t) = x(t) + y(t),
  y'(t) = −4x(t) + 5y(t).    (23)

The characteristic function of the corresponding matrix A = [ 1 1 ; −4 5 ] is

  det [ 1−λ   1  ] = (1 − λ)(5 − λ) + 4 = λ² − 6λ + 9 = (λ − 3)².
      [ −4   5−λ ]
The eigenvectors corresponding to λ = 3 are found by solving the system (A − 3I)v = 0, which results in one eigenvector v = (1, 2)ᵀ (other ones are multiples of v). Therefore Theorem 8.2 cannot be applied. The substitution method works as follows: express y(t) in terms of x(t) and x'(t) by means of the first equation, i.e.,

  y(t) = x'(t) − x(t).    (24)
Hence, y'(t) = x''(t) − x'(t). We can eliminate y and y' from the second equation of system (23) by substitution:

  x''(t) − x'(t) = −4x(t) + 5[x'(t) − x(t)],

so

  x''(t) − 6x'(t) + 9x(t) = 0.

This is an ordinary second order linear differential equation (not coincidentally having the same characteristic function as A has). Therefore, we can apply Section 7! This results in the general solution for x(t) to be
  x(t) = k₁e^{3t} + k₂te^{3t}.

The corresponding y(t) can be found by means of (24):

  y(t) = x'(t) − x(t) = (3k₁e^{3t} + 3k₂te^{3t} + k₂e^{3t}) − (k₁e^{3t} + k₂te^{3t}) = 2k₁e^{3t} + k₂(e^{3t} + 2te^{3t}).

The general solution set of system (23) is thereby given by

  { (x(t), y(t))ᵀ = k₁e^{3t}(1, 2)ᵀ + k₂e^{3t}(0, 1)ᵀ + k₂te^{3t}(1, 2)ᵀ | k₁, k₂ ∈ IR }.
Exercise 75 Determine the general solutions of the systems

  a) x' = 2x + y,         b) x' = 6x − 3y − 3,
     y' = −12x − 5y,         y' = 2x + y + 1,

  c) x' = x + 4y,         d) x' = −x − 4y,
     y' = 3x + 2y,           y' = 10x + 3y − 26.
9 Stability of stationary solutions

This section considers autonomous, not necessarily linear, systems of differential equations, i.e., of the form y' = F(y). As has already been said, looking for general solutions is futile, but it is worthwhile to consider stationary solutions. The main reason for this is that other solutions often converge to such stationary points. If this is the case, the stationary point in question is called stable, because a (slight) perturbation from the solution point results in a solution trajectory (path) that returns to the stationary point.
To get acquainted with the subject, we start with an example of two variables (apart from time) and two differential equations. This has the advantage that the corresponding vector field (or direction field) can be depicted graphically.

Example 11 Consider the system of equations

  x' = y − x,
  y' = 4 − xy.    (25)

Firstly, we construct the direction field. Because the system is autonomous, we are not interested in the t-axis. This makes the field two-dimensional. It can be compared with
[Figure 4: the direction field of the system x' = y − x, y' = 4 − xy (axes −5 ≤ x, y ≤ 5; sector labels NW, NE, SW, SE).]
a weather chart that indicates intensity and direction of wind. A particular solution then represents the trajectory that a particle follows when it is carried by the wind. The construction consists of three steps.

(i) Determine the so-called isoclines. x-isoclines are curves on which points (x, y) are situated where x' = 0. At such a curve, the wind blows straight to the North or to the South. In the example, the line y = x is the only x-isocline. Similarly, at y-isoclines the wind blows straight to the East or to the West and y' = 0. The equation 4 − xy = 0 determines two curves.

(ii) Determine the stationary points (the windless spots). They are found at intersections of x- and y-isoclines. We find the stationary points (2, 2) and (−2, −2).

(iii) The isoclines partition the plane into sectors. In each of the sectors, the direction of the wind can only be either NE, SE, SW, or NW. Determine for each of the sectors which of the four possibilities applies.
After these three steps it can be illuminating to draw some representative solution paths. Figure 4 displays the solutions starting at (0.9, 5) and at (1.1, 5). Although they start close together, one of them diverges to the South-West and the other converges to the stationary point (2, 2).
The figure displays that solutions in the neighborhood of (2, 2) are attracted to this point, while (−2, −2) seems to ward off solutions. Or, using the wind metaphor, a particle situated at (2, 2) will return there when it is moved a little, while a particle at (−2, −2) will be carried far away after the tiniest perturbation. Therefore we call (2, 2) a stable stationary point and (−2, −2) an unstable one.
Let us formalize the property of stability. There are several types of stability; we restrict ourselves to the most prominent one.
Definition 9.1 Let ȳ : IR → IRⁿ, given by ȳ(t) = y₀ for all t ∈ IR, be a stationary solution of the n-dimensional autonomous system y' = F(y) of differential equations. Then ȳ is called (asymptotically) stable if there exists an open sphere B around y₀ such that lim_{t→∞} y(t) = y₀ for all solutions y : IR → IRⁿ that satisfy y(0) ∈ B.
The stationary solution is called unstable if there exists a sphere B around y₀ such that for almost all solutions y with y(0) ∈ B, there exists a moment in time T with y(t) ∉ B for all t > T.
If the function ȳ is stable, the point y₀ is called stable as well. Notice that in the definition the moment in time 0 can be replaced by any other moment. The word almost is essential, because a lower-dimensional set of solutions can stay in the neighborhood of an unstable stationary point y₀ (in any case the constant solution y(t) = y₀). Furthermore, there can exist stationary solutions that are neither asymptotically stable nor unstable. Think e.g., about the endless swing around the equilibrium of a pendulum without friction (see Example 2). These are exceptional situations, however, which we will not discuss.
Exercise 76 Draw (not too small) the vector field of the system

  x' = x² + y² − π²,
  y' = sin(x) − y.

Which of the two stationary points do you expect to be stable?
Let x : IR₊ → IR, y : IR₊ → IR be the particular solution of the system with x(0) = y(0) = 0. What do you expect lim_{t→∞} x(t) to be (a proof is not asked for)? Draw in the direction field the curve corresponding to this particular solution.
9.1 Stability in linear systems

Let us return to the linear system y' = Ay. Clearly, ȳ = 0 is a stationary solution, and if A is regular, it is the only one. When will it be stable? We have seen that if λ is an eigenvalue of A corresponding to eigenvector v, then y = ke^{λt}v is a solution for each choice of k in IR. Because y(0) = kv, y(0) is situated in any given sphere B around 0, provided that k is chosen sufficiently small. Hence, 0 can only be stable if lim_{t→∞} y(t) = 0. Lemma 6.1 gives

  ‖ke^{λt}v‖ = |ke^{λt}| ‖v‖ = |k| e^{re(λ)t} ‖v‖  →  (as t → ∞)  0 if re(λ) < 0,  |k|·‖v‖ if re(λ) = 0,  ∞ if re(λ) > 0.

Therefore, as soon as one eigenvalue has a positive real part, the stationary point 0 is unstable.
Exercise 77 Show that if A has n independent eigenvectors of which all eigenvalues have strictly negative real parts, then 0 is stable.

Without a proof we postulate that the requirement of having independent eigenvectors is redundant and that the rest term b does not have to be assumed to be 0, resulting in
Theorem 9.2 Let A be a non-singular n×n-matrix and let b ∈ IRⁿ. Then the unique stationary point A⁻¹b of the system of differential equations y' = Ay − b is stable if all eigenvalues of A have a negative real part. If one or more eigenvalues have a positive real part, then the solution is unstable.
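For 2×2 matrices the eigenvalue condition of Theorem 9.2 can be checked without computing the eigenvalues themselves: both roots of r² − tr(A)·r + det(A) = 0 have negative real parts exactly when tr(A) < 0 and det(A) > 0. A Python sketch (illustration only; the sample matrices are not from the text):

```python
# 2x2 stability test: eigenvalues lie in the left half plane
# iff trace(A) < 0 and det(A) > 0.
def is_stable_2x2(A):
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    trace, det = a + d, a * d - b * c
    return trace < 0 and det > 0

assert is_stable_2x2([[-1, 1], [-2, -2]])   # complex eigenvalues, Re = -3/2
assert not is_stable_2x2([[0, 1], [1, 0]])  # eigenvalues +1 and -1
```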
Exercise 78 Verify whether 0 is stable or not with respect to system (23) and the one
in Exercise 74.
9.2 Stability in non-linear systems

The stability of stationary points of non-linear systems can be determined quite often as well. A differentiable function from IR to IR can be approximated linearly at any point by a tangent (raaklijn). Likewise, any vector valued function can be approximated linearly by means of its derivative; the Jacobian. Suppose that y₀ is a stationary point of the autonomous system y' = F(y). Let A be the Jacobian of F at y₀, i.e., A = DF(y₀). Then the function y ↦ A(y − y₀) is a linear approximation of F at y₀, because both have value zero at y₀ and their Jacobians coincide at y₀ as well. Let us apply this to system (25) and stationary point (2, 2), so
  F(x, y) = [ y − x  ],  DF(x, y) = [ −1   1 ]  and  A = [ −1   1 ].
            [ 4 − xy ]              [ −y  −x ]           [ −2  −2 ]
The linear approximation at (2, 2) is thereby

  [ x ] ↦ A [ x − 2 ] = [ −x + y       ].
  [ y ]     [ y − 2 ]   [ −2x − 2y + 8 ]
Exercise 79 Show that (2, 2) is a stable stationary solution of the system

  x' = −x + y,
  y' = −2x − 2y + 8.
Figure 5 shows the direction field of the system above. Indeed, in the neighborhood of (2, 2) it resembles the field of system (25). Without a proof we postulate that in general the question whether y₀ is stable or not in the original system is equivalent to the question whether it is stable in the approximating linear system. Hence, the answer depends on the eigenvalues of A.
Theorem 9.3 Let F : IRⁿ → IRⁿ be a C¹-function and let y₀ be a stationary point of the autonomous system y' = F(y). Define A to be DF(y₀), i.e., the Jacobian of F at y₀. Then y₀ is stable if all eigenvalues of A have negative real parts. If at least one eigenvalue has a positive real part, then y₀ is unstable.
[Figure 5: the direction field of the system x' = −x + y, y' = −2x − 2y + 8 (axes −5 ≤ x, y ≤ 5; sector labels SW, SE, NW, NE).]
If the eigenvalue λ with the highest real part is purely imaginary, so re(λ) = 0, then the theorem is not applicable to y₀ and further examination is required. This is beyond the scope of this course.
Exercise 80 Rewrite Theorem 9.3 for the case n = 1 (so use F' instead of DF). Apply the theorem to the differential equation of Exercise 57a and to the equation

  y' = (4 − y²)/y    (y ∈ IR∖{0}).

Draw a sign scheme like Figure 4 for the latter, but now for the real line instead of the plane, and verify whether the sign scheme and the theorem comport.
Exercise 81 Examine the stability of the stationary points of the system of equations in Exercise 76. Does it comport with your answer to that exercise?
A Index
Index

Arrow-Pratt measure, 15
autonomous, 5, 28
ℂ, 18
characteristic function/equation, 22, 31
Cⁿ-function, 6
conjugate, 19
continuous, 3
convergence of order O(h), 12
diagonalizable, 29
difference equation, 3
differential equation, 3
  ordinary, 5
direction field, 9, 33
discrete, 3
eigenvalue, eigenvector, 29
equilibrium, 5
Euler's formula, 21
Euler's method, 11
general solution set, 4
homogeneous, 28
i, 18
imaginary number, 19
imaginary part, 19
initial value problem, 4
isocline, 34
Jacobian, 27
λ, 19
method of undetermined coefficients, 26
method of variation of parameters, 7, 8, 23, 26
order, 5, 13
polynomial, 6
primary variable, 3
real part, 19
rest term, 17
solution
  equilibrium, 5
  particular, 3
  stationary, 5
stable, 33, 35
step size, 10
substitution method, 32
Taylor-series, 21
unit vector, 29
unstable, 35
y⁽ⁿ⁾, 6