
Math20401 - PDEs

Michael Bushell
michael.bushell@student.manchester.ac.uk
December 11, 2011

About
This course is an introduction to solving partial differential equations, both by finding exact solutions and by finite approximations. This document is a presentation of the material covered in Tony Shardlow's lecture notes and examples. Beware there may be errors.
Contents

1 Introduction
  1.1 What is a Partial Differential Equation?
  1.2 Three Classical PDEs
    1.2.1 The Wave Equation
    1.2.2 The Heat Equation
    1.2.3 The Laplace Equation
  1.3 Initial and Boundary Conditions
  1.4 Classifying PDEs
  1.5 Principle of Linear Superposition
2 Well-Posed Problems
  2.1 Well-posedness
  2.2 The Energy Method
  2.3 The Maximum Principle
    2.3.1 Stability to changes in the boundary conditions
  2.4 Ill-posed Problems
    2.4.1 The backwards heat equation
    2.4.2 The Laplace equation
3 Fourier Series
  3.1 The Fourier Sine Series
  3.2 Examples
    3.2.1 The Fourier series of a constant function
    3.2.2 Summation of $\pi$
4 Separation of Variables
  4.1 Introduction
  4.2 The Heat Equation
    4.2.1 Constant initial condition
    4.2.2 Periodic boundary conditions*
  4.3 The Wave Equation
    4.3.1 D'Alembert's solution
  4.4 The Laplace Equation
  4.5 Sturm-Liouville Theory
5 Taylor's Theorem
6 Finite Difference Methods: Centered Approximation
  6.1 Centered difference approximation
  6.2 Reaction-diffusion problem
  6.3 Errors, Convergence and Consistency
    6.3.1 Consistency of reaction-diffusion problem
    6.3.2 Relationship of errors of reaction-diffusion problem
  6.4 Stability
    6.4.1 Stability and convergence of reaction-diffusion problem
  6.5 Convection-diffusion problem
7 Finite Difference Methods: Upwind Approximation
  7.1 Convection-diffusion problem (revisited)
8 Finite Difference Methods: Euler Methods for ODEs
  8.1 The Explicit Euler method
  8.2 The Implicit Euler method
  8.3 Stability and time-step restriction
9 Finite Difference Methods: Method of Lines
  9.1 The Heat Equation
  9.2 The Wave Equation
1 Introduction to PDEs

1.1 What is a Partial Differential Equation?

Given an unknown function u of independent variables x, y a PDE is any equation involving the partial derivatives of u:

    $\frac{\partial u}{\partial x}$, $\frac{\partial^2 u}{\partial x \partial y}$, etc.

Definition 1.1.1. A PDE is a relation of the form

    $F(u, t, x, y, \ldots, u_t, u_x, u_y, \ldots, u_{tt}, u_{tx}, u_{xx}, \ldots) = 0$

where u is a function of independent variables $t, x, y, \ldots$.

Example 1.1.1. For instance, let $u(x, t) = \frac{A(x)}{B(t)}$ for some functions A and B. Then

    $u_x = \frac{A'(x)}{B(t)}, \quad u_t = -\frac{A(x)B'(t)}{B(t)^2}, \quad u_{xt} = -\frac{A'(x)B'(t)}{B(t)^2}$

Therefore u satisfies the PDE

    $u_x u_t = u\, u_{xt}$

Usually we are given a PDE and are required to find the function which satisfies it. The remainder of this course is aimed at learning techniques for doing just this.

Definition 1.1.2. The order of a PDE is the highest order of differentiation that appears in the equation.
1.2 Three Classical PDEs

Three classical PDEs form the basis of our study; we will need to recognise each and understand their differences.

1.2.1 The Wave Equation

The wave equation models the motion of particles in time and space:

    $u_{tt} = c^2 u_{xx}$    (1)

where u represents the displacement of a particle from its equilibrium position.

1.2.2 The Heat Equation

The heat equation models the diffusion of heat in time and space:

    $u_t = \kappa u_{xx}$    (2)

where u represents the temperature at a point in space and $\kappa > 0$ is a constant.

1.2.3 The Laplace Equation

The Laplace equation models steady-state heat distribution in space:

    $u_{xx} + u_{yy} = 0$    (3)

These three equations will recur in the examples that follow. For now let's look at a solution to the Laplace equation:
Example 1.2.1. Consider the Laplace equation (3) (in 3 dimensions):

    $u_{xx} + u_{yy} + u_{zz} = 0$

We show $u = (x^2 + y^2 + z^2)^{-1/2}$ is a solution.

Proof. Let $u = 1/r$ where $r = (x^2 + y^2 + z^2)^{1/2}$; then

    $u_x = \frac{du}{dr}\frac{\partial r}{\partial x} = -\frac{1}{r^2} \cdot \frac{x}{(x^2 + y^2 + z^2)^{1/2}} = -\frac{x}{r^3} = -x r^{-3}$

and

    $u_{xx} = \frac{\partial}{\partial x}(-x r^{-3}) = -\Big(\frac{d}{dx}x\Big)r^{-3} - x\frac{\partial}{\partial x}(r^{-3}) = -r^{-3} + 3x r^{-4}\frac{\partial r}{\partial x} = -r^{-3} + 3x r^{-4}(x r^{-1}) = -r^{-3} + 3x^2 r^{-5}$

Similarly:

    $u_{yy} = -r^{-3} + 3y^2 r^{-5}, \quad u_{zz} = -r^{-3} + 3z^2 r^{-5}$

Hence:

    $u_{xx} + u_{yy} + u_{zz} = -3r^{-3} + 3(x^2 + y^2 + z^2)r^{-5} = -3r^{-3} + 3r^2 r^{-5} = 0$

Therefore u is a solution to the Laplace equation (away from the origin, where u is undefined).
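This kind of computation is easy to mis-sign by hand, so a quick symbolic check is worthwhile. The following is a minimal sketch (not part of the notes; it assumes Python's sympy library) verifying that the Laplacian of $(x^2+y^2+z^2)^{-1/2}$ vanishes away from the origin:

```python
import sympy as sp

x, y, z = sp.symbols("x y z", real=True)
u = (x**2 + y**2 + z**2) ** sp.Rational(-1, 2)

# Laplacian of u in three dimensions
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
print(sp.simplify(laplacian))  # prints 0
```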
1.3 Initial and Boundary Conditions

When searching for a solution to a PDE we often have extra information which narrows down the candidate solutions, usually to a unique solution. This comes in the form of initial and boundary conditions.

Definition 1.3.1. If a function u(x, t) depends on time t and a spatial variable x, an initial condition specifies u at time t = 0, i.e. $u(x, 0) = u_0(x)$ for all x. Initial conditions may also be placed on derivatives of u.

Definition 1.3.2. When u depends on x in a bounded set $\Omega$, a boundary condition specifies u(x) for $x \in \partial\Omega$, where $\partial\Omega$ denotes the boundary of $\Omega$.

These usually come in one of two forms:

1. Dirichlet conditions specify the value of u on the boundary, i.e.:

    $u(x, t) = g(x)$ for all $x \in \partial\Omega$, $t > 0$

This type of condition fixes the solution u on the boundary for all time.

2. Neumann conditions specify the (outward) normal derivative of u on the boundary, i.e.:

    $\frac{\partial u}{\partial n}(x) = \nabla u(x) \cdot n(x) = g_1(x)$ for all $x \in \partial\Omega$

These are more complicated, but note that for a function u(x, t) of a single spatial variable x the normal derivative is simply $u_x$ (up to sign).

Supplying boundary conditions is usually necessary for a well-posed problem.
1.4 Classifying PDEs

In solving PDEs we must understand when a method is applicable. The following definitions help organise the different types of PDE which arise in practice.

Definition 1.4.1 (Linear Operator). The operator L is linear if, for any two functions u, v and any $\alpha \in \mathbb{R}$,

1. $L(u + v) = L(u) + L(v)$, and
2. $L(\alpha u) = \alpha L(u)$.

Definition 1.4.2 (PDE Linearity). A PDE is linear if it can be written $L[u] = f$, where L is a linear operator and f a function of the independent variables only. When the PDE is non-linear, for a function u of two independent variables x and t, we may write the PDE in the general form

    $a u_{xx} + b u_{xt} + c u_{tt} + d u_x + e u_t + f u = g$    (4)

Then we say the PDE is quasi-linear if a, b, c are independent of the 2nd order derivatives of u; or semi-linear if a, b, c are entirely independent of u and its derivatives; otherwise we say it is fully non-linear.
Example 1.4.1. We classify the following PDEs according to linearity: first consider whether the PDE is linear, and if not, what degree of non-linearity it has.

    $u_t - (x^2 - u)u_{xx} = x - t$ is quasi-linear,
    $u^2 u_{tt} - \tfrac{1}{2}u_x^2 + (u u_x)_x = u e^u$ is quasi-linear,
    $u_t - u_{xx} = u^3$ is semi-linear,
    $(u_{xy})^2 - u_{xx} + u_t = 0$ is fully non-linear,
    $u_t + u_x - u_y = 10$ is linear.

To see this, rearrange the PDEs into the general form (4) and compare the coefficients a, b, c with the relevant definitions.
Definition 1.4.3. There are three generic types of PDE, determined by the coefficients of equation (4); these are:

    hyperbolic: $b^2 - 4ac > 0$
    parabolic: $b^2 - 4ac = 0$
    elliptic: $b^2 - 4ac < 0$

If a, b, or c are not constant then the type of the PDE may change from point to point.

Example 1.4.2. The wave equation (1) is hyperbolic: writing

    $u_{tt} - c^2 u_{xx} = 0$

we have constant coefficients $a = -c^2$, $b = 0$, and (the coefficient of $u_{tt}$) $c = 1$, so

    $b^2 - 4ac = 0 - 4(-c^2)(1) = 4c^2 > 0.$

Similarly, it is easily shown that the heat equation (2) is parabolic and the Laplace equation (3) is elliptic.
1.5 Principle of Linear Superposition

We will require the following in our quest for PDE solutions:

Definition 1.5.1. If a PDE and associated boundary conditions are of the form $L(u) = 0$ for a linear operator L, the boundary value problem is said to be homogeneous.

Theorem 1.5.1 (Principle of Linear Superposition). If $u_1$, $u_2$ are solutions of a homogeneous linear boundary value problem, then any linear combination $v = \alpha u_1 + \beta u_2$ with $\alpha, \beta \in \mathbb{R}$ is also a solution.

Proof.

    $L(\alpha u_1 + \beta u_2) = \alpha L(u_1) + \beta L(u_2) = \alpha \cdot 0 + \beta \cdot 0 = 0$

For non-homogeneous equations $L(u) = f$ with particular solution $u_p$ (i.e. a function $u_p$ that satisfies $L(u_p) = f$) we have:

Theorem 1.5.2. If $u_p$ is a particular solution of the linear boundary value problem $L(u) = f$ and v is a solution of the homogeneous problem $L(v) = 0$, then $w = u_p + v$ is a solution of $L(u) = f$.

Proof.

    $L(w) = L(u_p + v) = L(u_p) + L(v) = f + 0 = f$

In particular, the principle of linear superposition will be crucial in the use of Fourier series as solutions to PDEs.
2 Well-Posed Problems

2.1 Well-posedness

When we are given a PDE to solve we must consider whether the problem is well-posed. The key properties of the solution of a well-posed problem are:

Existence is essential: when the study of a physical process leads to a model in the form of a partial differential equation with initial and boundary conditions, the absence of a solution would point to incorrect model assumptions.

Uniqueness is desired, as most physical processes have a unique and well-defined behaviour which should be reflected in our model.

Stability is often vital: we measure physical quantities to use as parameters in our model and we cannot do so with absolute precision. We would like small errors in our measurements to result in correspondingly small changes to the outcome.

Example 2.1.1. Consider again the PDE from example (1.1.1),

    $u_x u_t = u\, u_{xt}$

We showed that

    $u(x, t) = \frac{A(x)}{B(t)}$

is a solution. If we supply the initial condition $u(x, 0) = u_0(x)$, do we have a well-posed problem? Setting t = 0,

    $u_0(x) = u(x, 0) = \frac{A(x)}{B(0)}$

Hence $A(x) = u_0(x)B(0)$. The problem is ill-posed: although a single value B(0) would completely determine A, the function B itself remains undetermined away from t = 0, so the solution is not unique, as B can vary.

In what follows we derive methods for proving existence and uniqueness, and we devise conditions under which stability is assured.
2.2 The Energy Method

The energy method derives from considering the physical interpretation of a PDE model. In the following example we define a quantity referred to as the "energy"¹ and show that it cannot increase with time. From this we can prove that a solution is unique.

Example 2.2.1. There is a unique solution to the heat equation $u_t = u_{xx}$ (equation (2) with $\kappa = 1$) subject to boundary conditions $u_x(0, t) = u_x(1, t) = 0$ and initial condition $u(x, 0) = u_0(x)$ for $x \in (0, 1)$.

Proof. Let $E(t) = \int_0^1 u(x, t)^2 \, dx$; then

    $E'(t) = \frac{d}{dt}\int_0^1 u^2 \, dx = \int_0^1 \frac{\partial}{\partial t}u^2 \, dx = \int_0^1 2u u_t \, dx = 2\int_0^1 u u_{xx} \, dx$ (by (2))

Integrating by parts,

    $E'(t) = 2\Big(\big[u u_x\big]_{x=0}^{x=1} - \int_0^1 u_x^2 \, dx\Big) = 2\Big(\underbrace{u(1, t)u_x(1, t)}_{=0} - \underbrace{u(0, t)u_x(0, t)}_{=0} - \int_0^1 u_x^2 \, dx\Big)$

Hence

    $E'(t) = -2\int_0^1 u_x^2 \, dx \le 0$, for all $t \ge 0$.

Therefore E is a non-increasing function and so $E(t) \le E(0)$ for all $t \ge 0$.

Now, for contradiction, suppose u, v are two solutions. Define $w = u - v$; then, as we have a homogeneous linear boundary value problem, we know that w is also a solution with the same boundary conditions (but zero initial condition). From the above,

    $0 \le \int_0^1 w(x, t)^2 \, dx \le \int_0^1 w(x, 0)^2 \, dx = 0$

Since $w(x, 0) = u(x, 0) - v(x, 0) = u_0(x) - u_0(x) = 0$, we must conclude that $w(x, t) = 0$, i.e. $u(x, t) - v(x, t) = 0$ as required. Therefore $u = v$ and we have a unique solution.

Exactly what the energy measures differs from PDE to PDE and is beyond the scope of this course. For the heat equation (2) it is proportional to the total heat energy in the system. For the wave equation (1) it is the sum of kinetic and potential energy (in, for example, a vibrating string).

¹For different PDEs this energy may have a different expression; see the question sheets for more examples. Usually the form of the energy is given if it differs sufficiently from the above.
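The energy argument can be mirrored numerically. Below is a minimal sketch (the explicit finite-difference discretisation, grid size, time step, and initial condition are all arbitrary choices, not from the notes) showing that $E(t) = \int_0^1 u^2\,dx$ is non-increasing for the Neumann problem above:

```python
import numpy as np

n, k = 100, 1.0e-5                   # grid intervals, time step (k/h^2 = 0.1: stable)
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
u = np.cos(np.pi * x)                # an arbitrary smooth initial condition

for step in range(2001):
    if step % 500 == 0:
        # trapezoidal rule for E(t) = integral of u^2
        E = h * (0.5 * u[0]**2 + np.sum(u[1:-1]**2) + 0.5 * u[-1]**2)
        print(f"t = {step * k:.3f}   E(t) = {E:.6f}")   # non-increasing
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    uxx[0] = 2.0 * (u[1] - u[0]) / h**2     # ghost-point form of u_x(0,t) = 0
    uxx[-1] = 2.0 * (u[-2] - u[-1]) / h**2  # ghost-point form of u_x(1,t) = 0
    u = u + k * uxx                         # explicit Euler step in time
```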
2.3 The Maximum Principle

We introduce a technique known as the "maximum principle" to prove stability and uniqueness. Under certain conditions we can show that the maximum value of the solution can only occur on the boundary.

Lemma 2.3.1. Suppose that $f(x) < 0$ for all $x \in (0, 1)$. If $-u'' = f$, then $u(x) \le \max\{u(0), u(1)\}$, i.e. the maximum of u occurs on the boundary.

Proof. Suppose u has a local maximum at $\xi \in (0, 1)$. It is a result from calculus that $u''(\xi) \le 0$. But $u''(\xi) = -f(\xi) > 0$, a contradiction, so there is no maximum for $\xi \in (0, 1)$. Therefore the maximum value must be at $\xi = 0$ or $\xi = 1$, as claimed.

We extend to the case where $f(x) = 0$ is also allowed.

Lemma 2.3.2. Suppose $-u''(x) \le 0$ for all $x \in (0, 1)$; then $u(x) \le \max\{u(0), u(1)\}$.

Proof. For $\epsilon > 0$, let $v_\epsilon(x) = u(x) + \epsilon x^2$. Then $-v_\epsilon'' = -u'' - 2\epsilon \le -2\epsilon < 0$ for $x \in (0, 1)$. By the previous lemma, $v_\epsilon(x) \le \max\{v_\epsilon(0), v_\epsilon(1)\}$, hence

    $u(x) \le v_\epsilon(x) \le \max\{v_\epsilon(0), v_\epsilon(1)\} \le \max\{u(0), u(1) + \epsilon\}$

As this holds for any $\epsilon > 0$ no matter how small, the result follows.

Lemma 2.3.3. Suppose $-u''(x) \ge 0$ for all $x \in (0, 1)$; then $u(x) \ge \min\{u(0), u(1)\}$.

Proof. Simply let $w = -u$; then by the previous lemma

    $-u = w \le \max\{w(0), w(1)\} = -\min\{u(0), u(1)\}$

and so the result follows.

We summarize these results in the following theorem.

Theorem 2.3.1. Let u be a solution of $-u'' = 0$ for $0 < x < 1$ with $u(0) = \alpha$ and $u(1) = \beta$; then for all $x \in (0, 1)$ we have

    $\min\{\alpha, \beta\} \le u(x) \le \max\{\alpha, \beta\}$

Proof. This is an immediate consequence of the previous two lemmas.

We now use this result to prove the stability of a solution.
2.3.1 Stability to changes in the boundary conditions

Example 2.3.1. Suppose that

    $-u'' = f$ for $0 < x < 1$, with $u(0) = \alpha$, $u(1) = \beta$

We would like to show this problem is well-posed, and one criterion for a well-posed problem is the stability of the solution to changes in the boundary conditions. So let $\epsilon_1, \epsilon_2 > 0$; we introduce small changes to the boundary conditions to give another problem

    $-v'' = f$ for $0 < x < 1$, with $v(0) = \alpha + \epsilon_1$, $v(1) = \beta + \epsilon_2$

Given that $\epsilon_1, \epsilon_2$ are as small as we like, we would expect that the two solutions u and v should not differ much when the problem is well-posed.

Let $e = u - v$ be the difference of the two solutions; then subtracting the two equations shows e satisfies

    $-e'' = 0$ for $0 < x < 1$, with $e(0) = -\epsilon_1$, $e(1) = -\epsilon_2$

By the previous theorem we can conclude that

    $\min\{-\epsilon_1, -\epsilon_2\} \le e(x) \le \max\{-\epsilon_1, -\epsilon_2\}$

i.e.

    $|u(x) - v(x)| = |e(x)| \le \max\{|\epsilon_1|, |\epsilon_2|\}$

Hence small changes to the boundary conditions ($\epsilon_1$ and $\epsilon_2$) cause only a small change in the solution to the problem. In this sense the solution is stable and the problem is well-posed.
2.4 Ill-posed Problems

2.4.1 The backwards heat equation

Example 2.4.1. To illustrate an ill-posed problem consider the following PDE, the "backwards heat equation", obtained by setting the constant $\kappa = -1$ in the heat equation (2):

    $u_t + u_{xx} = 0$    (5)

We have already shown that, subject to the initial condition $u(x, 0) = 0$, we have the unique solution $u(x, t) = 0$.

Recall that a well-posed problem must be stable to small changes. We will show that the following solution satisfies the equation with only a small change to the initial condition, but exhibits behaviour far from the zero solution. For constants $A \in \mathbb{R}$ and $T > 0$ we have a solution

    $u(x, t) = \frac{A T^{1/2}}{(T - t)^{1/2}} e^{-\frac{x^2}{4(T - t)}}$, provided $t < T$.

Given T and $\epsilon > 0$ we see that initially (at t = 0)

    $|u(x, 0)| = \Big|A e^{-\frac{x^2}{4T}}\Big| \le |A| < \epsilon$, for $A \in (-\epsilon, \epsilon)$

Thus we can choose a solution such that it is initially bounded by as small an $\epsilon$ as we like, thus as close to zero as we want. So the solution is initially well-behaved in this sense.

Recall we have specified $t < T$; so what happens as t increases and approaches T from the left? Considering just the point x = 0,

    $\lim_{t \to T^-} u(0, t) = \lim_{t \to T^-} \frac{A T^{1/2}}{(T - t)^{1/2}} = \lim_{t \to T^-} \frac{A}{(1 - t/T)^{1/2}} = \lim_{h \to 0^+} \frac{A}{h^{1/2}} = \pm\infty$

Depending on the sign of A, either way we see that the solution approaches $\pm\infty$ as $t \to T$. This diverges far from the behaviour of the zero solution.

A small change to the initial condition results in a large change to the solution; therefore we cannot say the problem is well-posed: rather, it is ill-posed.
2.4.2 The Laplace equation

Example 2.4.2. Consider the Laplace equation (3) in 2 spatial dimensions,

    $u_{xx} + u_{yy} = 0$

We see that a solution is given by

    $u(x, y) = \epsilon\cosh(y/\epsilon)\cos(x/\epsilon)$

since

    $u_x = -\cosh(y/\epsilon)\sin(x/\epsilon)$, so $u_{xx} = -(1/\epsilon)\cosh(y/\epsilon)\cos(x/\epsilon) = -u/\epsilon^2$

and

    $u_y = \sinh(y/\epsilon)\cos(x/\epsilon)$, so $u_{yy} = (1/\epsilon)\cosh(y/\epsilon)\cos(x/\epsilon) = u/\epsilon^2$

We also have the following boundary conditions satisfied:

    $u(x, 0) = \epsilon\cos(x/\epsilon)$, $u_y(x, 0) = 0$

Then

    $|u(x, 0)| = |\epsilon\cos(x/\epsilon)| \le |\epsilon|$

Therefore we can choose $\epsilon$ to make u as small as we like on the boundary $y = 0$. But for given x, y with $y \ne 0$,

    $\lim_{\epsilon \to 0} u(x, y) = \lim_{\epsilon \to 0} \epsilon\cosh(y/\epsilon)\cos(x/\epsilon)$

Now $\cos(x/\epsilon)$ oscillates between $-1$ and $1$ as $\epsilon \to 0$, while $\epsilon\cosh(y/\epsilon) \to \infty$ as $\epsilon \to 0$. So u(x, y) oscillates between $-\infty$ and $\infty$, i.e. u oscillates unboundedly as $\epsilon \to 0$.

Clearly, this PDE problem is ill-posed: a small change in the boundary conditions (i.e. a small change in $\epsilon$) results in large changes in the solution².

²In general, the Laplace equation is ill-posed if it is subjected to conditions on a boundary that does not entirely surround the domain in which the equation is to be satisfied.
3 Fourier Series

Before solving PDEs we need to be able to write functions as Fourier series.

3.1 The Fourier Sine Series

It is a remarkable fact that many functions can be written as infinite sums of sine functions.

Theorem 3.1.1. Let $u_0(x)$ be such that $u_0(x)$ and $u_0'(x)$ are piecewise continuous on $[0, l]$; then we may write

    $u_0(x) = \sum_{n=1}^{\infty} a_n \sin\Big(\frac{n\pi x}{l}\Big)$    (6)

for some coefficients $a_n$.

In many instances it is useful to use the property of orthogonal functions to determine the Fourier coefficients of a function:

Definition 3.1.1 (Orthogonality). We say two functions $\phi, \psi : [0, l] \to \mathbb{R}$ are orthogonal iff

    $\int_0^l \phi(x)\psi(x) \, dx = 0.$

Theorem 3.1.2. The functions $\sin(m\pi x/l)$ and $\sin(n\pi x/l)$ are orthogonal (if $n \ne m$, $l > 0$).

Proof. Using the trigonometric identities

    $\cos(\alpha + \beta) = \cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta)$
    $\cos(\alpha - \beta) = \cos(\alpha)\cos(\beta) + \sin(\alpha)\sin(\beta)$

and subtracting, we find:

    $\cos(\alpha - \beta) - \cos(\alpha + \beta) = 2\sin(\alpha)\sin(\beta)$

It follows that in the case $n \ne m$:

    $\int_0^l \sin\Big(\frac{n\pi x}{l}\Big)\sin\Big(\frac{m\pi x}{l}\Big) \, dx = \frac{1}{2}\int_0^l \{\cos[(n - m)\pi x/l] - \cos[(n + m)\pi x/l]\} \, dx$
    $= \frac{1}{2}\Big[\frac{l\sin[(n - m)\pi x/l]}{(n - m)\pi} - \frac{l\sin[(n + m)\pi x/l]}{(n + m)\pi}\Big]_0^l = 0$

since $\sin k\pi = 0$ for $k \in \mathbb{Z}$. So by definition $\sin(n\pi x/l)$ and $\sin(m\pi x/l)$ are orthogonal.
3 FOURIER SERIERS 19
We can now determine the coecients in a Fourier series:
Theorem 3.1.3 (Fourier Coecients). At all points x of continuity, the
coecients a
n
of the Fourier series (6) are given by
a
n
=
2
l
_
l
0
u
0
(x) sin(
nx
l
) dx
Proof. Multiplying the Fourier series equation (6) by sin(
mx
l
) on both
sides, we obtain
u
0
(x) sin(
mx
l
) = sin(
mx
l
)

n=1
a
n
sin(
nx
l
)
=

n=1
a
n
sin(
mx
l
) sin(
nx
l
)
Thus integrating over [0, l] and using the orthogonality theorem (3.1.2)
_
l
0
u
0
(x) sin
_
mx
l
_
dx =
_
l
0

n=1
a
n
sin
_
mx
l
_
sin
_
nx
l
_
dx
=

n=1
a
n
_
l
0
sin
_
mx
l
_
sin
_
nx
l
_
dx
= a
m
_
l
0
sin
2
_
mx
l
_
dx
= a
m
_
l
0
1
2
[1 cos
_
2mx
l
_
] dx
= a
m
1
2
_
x
l
2m
sin
_
2mx
l
__
x=l
x=0
= a
n
l
2
Rearranging gives the result.
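In practice these coefficient integrals are often evaluated numerically. The following sketch (the test function $u_0(x) = x(l - x)$ and the midpoint quadrature rule are arbitrary illustrative choices, not from the notes) computes approximate coefficients and checks that a partial sum of the series reproduces $u_0$:

```python
import numpy as np

l = 2.0
u0 = lambda x: x * (l - x)

def a(n, m=100_000):
    """Midpoint rule for (2/l) * integral_0^l u0(x) sin(n pi x / l) dx."""
    x = (np.arange(m) + 0.5) * (l / m)
    return (2.0 / l) * np.sum(u0(x) * np.sin(n * np.pi * x / l)) * (l / m)

x = np.linspace(0.0, l, 9)
partial = sum(a(n) * np.sin(n * np.pi * x / l) for n in range(1, 50))
print(np.max(np.abs(partial - u0(x))))  # small: the truncated series converges
```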
3.2 Examples

3.2.1 The Fourier series of a constant function

Example 3.2.1. Consider the constant function

    $u_0(x) = c$, for some $c \in \mathbb{R}$

By theorem (3.1.1) we may write $u_0$ in the form

    $u_0(x) = \sum_{n=1}^{\infty} a_n \sin\Big(\frac{n\pi x}{l}\Big)$

The Fourier coefficients are (by theorem (3.1.3))

    $a_n = \frac{2}{l}\int_0^l u_0(x)\sin\Big(\frac{n\pi x}{l}\Big) \, dx = \frac{2}{l}\int_0^l c\sin\Big(\frac{n\pi x}{l}\Big) \, dx$
    $= \frac{2c}{l}\Big[-\frac{l}{n\pi}\cos\Big(\frac{n\pi x}{l}\Big)\Big]_{x=0}^{x=l} = \frac{2c}{n\pi}[-\cos(n\pi) + 1] = \frac{2c}{\pi}\Big(\frac{1 - (-1)^n}{n}\Big)$

This example will come in useful when we have PDEs with constant initial conditions.
3.2.2 Summation of π

Example 3.2.2.

    $\pi = \sum_{n=0}^{\infty} \frac{4}{2n + 1}(-1)^n$

Proof. Consider the Fourier sine series

    $x = \sum_{n=1}^{\infty} a_n \sin(nx)$, $x \in [0, \pi]$    (7)

Recall that $\sin(nx)$ and $\sin(mx)$ are orthogonal on $[0, \pi]$ (theorem (3.1.2)) for $n \ne m$, but when $n = m$ we have

    $\int_0^\pi \sin(nx)\sin(mx) \, dx = \frac{1}{2}\int_0^\pi \{\cos[(n - m)x] - \cos[(n + m)x]\} \, dx = \frac{1}{2}\int_0^\pi [\cos(0) - \cos(2nx)] \, dx = \frac{1}{2}\Big[x - \frac{\sin(2nx)}{2n}\Big]_0^\pi = \frac{\pi}{2}$

Now, multiplying our Fourier series equation (7) by $\sin(mx)$ on both sides, we obtain:

    $x\sin(mx) = \sin(mx)\sum_{n=1}^{\infty} a_n \sin(nx) = \sum_{n=1}^{\infty} a_n \sin(mx)\sin(nx)$

Thus, integrating over $[0, \pi]$:

    $\int_0^\pi x\sin(mx) \, dx = \sum_{n=1}^{\infty} a_n \int_0^\pi \sin(mx)\sin(nx) \, dx = a_m \frac{\pi}{2}$

since all but the $n = m$ term are zero. Hence for $m \ge 1$:

    $a_m = \frac{2}{\pi}\int_0^\pi x\sin(mx) \, dx = \frac{2}{\pi}\Big\{\Big[-x\frac{\cos(mx)}{m}\Big]_0^\pi + \int_0^\pi \frac{\cos(mx)}{m} \, dx\Big\} = \frac{2}{\pi}\Big\{\frac{\pi(-1)^{m+1}}{m} + \underbrace{\Big[\frac{\sin(mx)}{m^2}\Big]_0^\pi}_{=0}\Big\} = \frac{2(-1)^{m+1}}{m}$

Substituting the $a_n$ back into the Fourier series equation (7) gives:

    $x = \sum_{n=1}^{\infty} \frac{2(-1)^{n+1}}{n}\sin(nx)$

Now let $x = \pi/2$; it follows that:

    $\frac{\pi}{2} = \sum_{n=1}^{\infty} \frac{2(-1)^{n+1}}{n}\sin(n\pi/2)$

The terms with even n vanish, and writing $n = 2k + 1$ (so that $\sin(n\pi/2) = (-1)^k$ and $(-1)^{n+1} = 1$) gives

    $\frac{\pi}{2} = \sum_{k=0}^{\infty} \frac{2(-1)^k}{2k + 1}$

And hence, multiplying through by 2 gives the result:

    $\pi = \sum_{n=0}^{\infty} \frac{4}{2n + 1}(-1)^n$
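This series (the Leibniz series) converges very slowly, which a few partial sums make plain. A one-line sketch, not part of the notes:

```python
for N in (10, 100, 1000):
    # partial sum of 4*(-1)^n / (2n+1)
    print(N, sum(4 * (-1) ** n / (2 * n + 1) for n in range(N)))
# 10 -> 3.0418...,  100 -> 3.1315...,  1000 -> 3.1405...,  approaching pi
```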
4 Separation of Variables

4.1 Introduction

Separation of variables is a general method for solving linear partial differential equations. We introduce the method by example and apply it to each of the three classical PDEs.

4.2 The Heat Equation

Example 4.2.1. Consider the heat equation

    $u_t - \kappa u_{xx} = 0$

with homogeneous boundary conditions

    $u(0, t) = u(l, t) = 0$, for $t > 0$

We write the solution in the form

    $u(x, t) = X(x)T(t)$

We are making the assumption that such a solution exists and is non-zero; this is where the name "separation of variables" is derived. Differentiating,

    $u_t = X(x)\frac{d}{dt}T(t) = X(x)T'(t)$

and similarly

    $u_{xx} = \Big(\frac{d^2}{dx^2}X(x)\Big)T(t) = X''(x)T(t)$

Substituting these results into the PDE and rearranging gives

    $\frac{X''}{X} = \Big(\frac{1}{\kappa}\Big)\frac{T'}{T} = \lambda$ (constant)

We know $\lambda$ is constant due to the fact that the left-hand side depends only on x whereas the right-hand side depends only on t.

The boundary conditions tell us

    $0 = u(0, t) = X(0)T(t) \implies X(0) = 0$
    $0 = u(l, t) = X(l)T(t) \implies X(l) = 0$

assuming $T(t) \ne 0$ (which we are, since we don't want the trivial solution $u(x, t) = 0$).

We have therefore arrived at an eigenvalue problem for X:

    $X'' - \lambda X = 0$, $X(0) = X(l) = 0$

where $\lambda$ is an eigenvalue and X(x) the associated eigenfunction. We solve by considering three cases for $\lambda$:

1. If $\lambda = 0$, then $X'' = 0$ so $X = ax + b$ for some constants $a, b \in \mathbb{R}$. $X(0) = 0$ gives $b = 0$; $X(l) = 0$ gives $al = 0$, hence $a = 0$. Hence $\lambda = 0$ corresponds to $X(x) = 0$, which we don't want, so $\lambda = 0$ is not an eigenvalue.

2. If $\lambda = \mu^2$ for $\mu > 0$, then $X'' - \mu^2 X = 0$, so

    $X(x) = a\cosh(\mu x) + b\sinh(\mu x)$

for some constants $a, b \in \mathbb{R}$. $X(0) = 0$ gives $a = 0$; $X(l) = 0$ gives $b\sinh(\mu l) = 0$, but $\mu > 0$ and $l > 0$ so $\sinh(\mu l) \ne 0$, thus $b = 0$. Again we have no eigenvalues with $\lambda > 0$.

3. If $\lambda = -\mu^2$ for $\mu > 0$, then $X'' + \mu^2 X = 0$, so

    $X(x) = a\cos(\mu x) + b\sin(\mu x)$

for some constants $a, b \in \mathbb{R}$. $X(0) = 0$ gives $a = 0$; $X(l) = 0$ gives $b\sin(\mu l) = 0$, hence $\sin(\mu l) = 0$ to avoid the zero solution. Thus $\mu l = n\pi$ for some $n \in \mathbb{N}$. Since we only want linearly independent eigenfunctions we can choose b arbitrarily, so let $b = 1$.

Therefore we have the eigenvalue solutions

    $\lambda_n = -\Big(\frac{n\pi}{l}\Big)^2$, $X_n(x) = \sin\Big(\frac{n\pi x}{l}\Big)$, for $n = 1, 2, \ldots$

We then have an equation for T(t) given by

    $T_n' = \kappa\lambda_n T_n$

which has general solution (up to a constant multiple)

    $T_n(t) = e^{\kappa\lambda_n t} = e^{-(\frac{n\pi}{l})^2 \kappa t}$

We have found the following solutions of the heat equation:

    $u_n(x, t) = X_n(x)T_n(t) = e^{-(\frac{n\pi}{l})^2 \kappa t}\sin\Big(\frac{n\pi x}{l}\Big)$, for $n = 1, 2, \ldots$

As the heat equation is linear and homogeneous we may use the principle of linear superposition and write the general solution

    $u(x, t) = \sum_{n=1}^{\infty} a_n e^{-(\frac{n\pi}{l})^2 \kappa t}\sin\Big(\frac{n\pi x}{l}\Big)$

for coefficients $a_n$.
4.2.1 Constant initial condition

Example 4.2.2. Consider again the heat equation (2) with homogeneous boundary conditions,

    $u_t - \kappa u_{xx} = 0$, $u(0, t) = u(l, t) = 0$, for $t > 0$

with constant initial condition

    $u(x, 0) = u_0(x) = 1$, for $0 < x < l$

We have seen in example (4.2.1) that the solution may be written

    $u(x, t) = \sum_{n=1}^{\infty} a_n e^{-(\frac{n\pi}{l})^2 \kappa t}\sin\Big(\frac{n\pi x}{l}\Big)$

Letting $t = 0$ we have

    $u_0(x) = u(x, 0) = \sum_{n=1}^{\infty} a_n \sin\Big(\frac{n\pi x}{l}\Big)$

Recall from example (3.2.1) (with $c = 1$) that this gives coefficients

    $a_n = \frac{2}{\pi}\Big(\frac{1 - (-1)^n}{n}\Big)$

Thus we have the solution

    $u(x, t) = \frac{2}{\pi}\sum_{n=1}^{\infty} \frac{1 - (-1)^n}{n} e^{-(\frac{n\pi}{l})^2 \kappa t}\sin\Big(\frac{n\pi x}{l}\Big)$
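A sketch of evaluating this series numerically (taking $\kappa = l = 1$ and truncating at 200 terms, both arbitrary choices not fixed by the notes); the high modes decay fastest, so the solution smooths out and decays rapidly:

```python
import numpy as np

kappa, l, terms = 1.0, 1.0, 200
n = np.arange(1, terms + 1)[:, None]
coeff = (2.0 / np.pi) * (1.0 - (-1.0) ** n) / n   # Fourier coefficients of u0 = 1

def u(x, t):
    # truncated series solution of the heat equation
    return np.sum(coeff * np.exp(-((n * np.pi / l) ** 2) * kappa * t)
                  * np.sin(n * np.pi * x / l), axis=0)

x = np.linspace(0.0, l, 11)
for t in (0.001, 0.01, 0.1):
    print(t, np.round(u(x, t), 4))   # profile decays toward zero
```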
4.2.2 Periodic boundary conditions*

Example 4.2.3. Consider the PDE

    $u_t = u_{xx}$

for $x \in [0, 2\pi]$ with periodic boundary conditions

    $u(0, t) = u(2\pi, t)$, $u_x(0, t) = u_x(2\pi, t)$

and initial condition

    $u(x, 0) = u_0(x)$

This is the heat equation (2) with constant $\kappa = 1$.

Guessing that there is a non-zero solution of the form

    $u(x, t) = X(x)T(t)$

(this is the main idea of this method) we have

    $u_t = X(x)T'(t)$ and $u_{xx} = X''(x)T(t)$

Substituting these into the PDE, we can conclude:

    $\frac{X''}{X} = \frac{T'}{T} = \lambda$

where $\lambda$ is a constant, due to the fact that the left-hand side depends only on x whereas the right-hand side depends only on t.

The boundary conditions give

    $u(0, t) = X(0)T(t) = u(2\pi, t) = X(2\pi)T(t)$

Therefore, assuming $T(t) \ne 0$ (otherwise $u(x, t) = X(x)T(t) = 0$, resulting in the trivial solution), we conclude

    $X(0) = X(2\pi)$

Similar consideration of the other boundary condition gives

    $X'(0) = X'(2\pi)$

We therefore have the eigenvalue problem

    $X'' = \lambda X$ with $X(0) = X(2\pi)$, $X'(0) = X'(2\pi)$

This can be solved by considering three cases:

1. If $\lambda = 0$ then $X'' = 0$ so $X = ax + b$. Considering the first boundary condition,

    $X(0) = X(2\pi) \implies 0 \cdot a + b = 2\pi a + b \implies a = 0$

and considering the second boundary condition,

    $X'(0) = X'(2\pi) \implies a = a$

gives nothing new. So $a = 0$ and b is arbitrary, giving $X(x) = b$. But we are solving for X as an eigenfunction, so any constant multiples of a solution are considered the same, and we can take $b = 1$, giving solution $X(x) = 1$.

The corresponding T(t) is given by $T' - \lambda T = 0$; here $\lambda = 0$, so $T = 1$ is the eigenfunction solution. Therefore $u(x, t) = X(x)T(t) = 1$.

2. If $\lambda = \mu^2$ ($\mu > 0$) then $X'' - \mu^2 X = 0$; this is an ODE with solution $X(x) = a\cosh(\mu x) + b\sinh(\mu x)$. Again considering the boundary conditions,

    $X(0) = X(2\pi) \implies a\cosh(0) + b\sinh(0) = a\cosh(2\pi\mu) + b\sinh(2\pi\mu)$
    $\implies a = a\cosh(2\pi\mu) + b\sinh(2\pi\mu)$

and

    $X'(0) = X'(2\pi) \implies a\mu\sinh(0) + b\mu\cosh(0) = a\mu\sinh(2\pi\mu) + b\mu\cosh(2\pi\mu)$
    $\implies b = a\sinh(2\pi\mu) + b\cosh(2\pi\mu)$

To see that this gives no solutions, we have the simultaneous equations

    $0 = a[\cosh(2\pi\mu) - 1] + b\sinh(2\pi\mu)$
    $0 = a\sinh(2\pi\mu) + b[\cosh(2\pi\mu) - 1]$

which can be written in matrix form

    $M\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, where $M = \begin{pmatrix} \cosh(2\pi\mu) - 1 & \sinh(2\pi\mu) \\ \sinh(2\pi\mu) & \cosh(2\pi\mu) - 1 \end{pmatrix}$

To have a non-zero solution (again, to avoid the trivial solution) the matrix M must have zero determinant (from linear algebra), but

    $\det(M) = [\cosh(2\pi\mu) - 1]^2 - \sinh(2\pi\mu)^2$
    $= \cosh(2\pi\mu)^2 - 2\cosh(2\pi\mu) + 1 - \sinh(2\pi\mu)^2$
    $= [\cosh(2\pi\mu)^2 - \sinh(2\pi\mu)^2] - 2\cosh(2\pi\mu) + 1$
    $= 2 - 2\cosh(2\pi\mu) = 2[1 - \cosh(2\pi\mu)] < 0$, as $\mu > 0$.

Therefore there can be no eigenvalue $\lambda = \mu^2$.

3. If $\lambda = -\mu^2$ ($\mu > 0$) then $X'' + \mu^2 X = 0$; this is an ODE with solution $X(x) = a\cos(\mu x) + b\sin(\mu x)$. Again considering the boundary conditions,

    $X(0) = X(2\pi) \implies a = a\cos(2\pi\mu) + b\sin(2\pi\mu)$

and

    $X'(0) = X'(2\pi) \implies b = -a\sin(2\pi\mu) + b\cos(2\pi\mu)$

giving another linear system

    $0 = a[\cos(2\pi\mu) - 1] + b\sin(2\pi\mu)$
    $0 = -a\sin(2\pi\mu) + b[\cos(2\pi\mu) - 1]$

which can be written in matrix form

    $M\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, where $M = \begin{pmatrix} \cos(2\pi\mu) - 1 & \sin(2\pi\mu) \\ -\sin(2\pi\mu) & \cos(2\pi\mu) - 1 \end{pmatrix}$

We have determinant

    $\det(M) = [\cos(2\pi\mu) - 1]^2 + \sin(2\pi\mu)^2 = \cos(2\pi\mu)^2 - 2\cos(2\pi\mu) + 1 + \sin(2\pi\mu)^2 = 2 - 2\cos(2\pi\mu) = 2(1 - \cos(2\pi\mu))$

and by similar consideration this must be zero:

    $2(1 - \cos(2\pi\mu)) = 0 \iff \cos(2\pi\mu) = 1$

So $2\pi\mu = 2n\pi$, i.e. $\mu = n$ for $n \in \mathbb{N}$, and we have the solution

    $X(x) = a\cos(nx) + b\sin(nx)$

Thus for the eigenvalue $\lambda_n = -n^2$ we have two linearly independent eigenfunctions, $\cos(nx)$ and $\sin(nx)$.

Now $T' + n^2 T = 0$, which has solution $T(t) = e^{-n^2 t}$.

We complete our solution by using the principle of linear superposition (since our PDE is linear and homogeneous):

    $u(x, t) = \underbrace{a_0}_{\text{case }\lambda = 0} + \underbrace{\sum_{n=1}^{\infty} [a_n\cos(nx) + b_n\sin(nx)]e^{-n^2 t}}_{\text{case }\lambda = -\mu^2}$

We can determine the values of the constant coefficients $a_i$, $b_i$ by considering the initial conditions. Suppose u(x, t) satisfies

    $u(x, 0) = u_0(x)$ for $x \in (0, 2\pi)$

We have

    $u_0(x) = u(x, 0) = a_0 + \sum_{n=1}^{\infty} [a_n\cos(nx) + b_n\sin(nx)]$

It is easy to show, based on the Fourier coefficient theorem (3.1.3) and the orthogonality theorem (3.1.2), that

    $b_m = \frac{1}{\pi}\int_0^{2\pi} u_0(x)\sin(mx) \, dx$

and in much the same way, for $m \ge 1$,

    $a_m = \frac{1}{\pi}\int_0^{2\pi} u_0(x)\cos(mx) \, dx$

while $a_0 = \frac{1}{2\pi}\int_0^{2\pi} u_0(x) \, dx$ (the constant mode needs the factor $\frac{1}{2\pi}$, since $\int_0^{2\pi} 1^2 \, dx = 2\pi$ rather than $\pi$).

We've therefore determined the coefficients and we have an exact solution to the PDE.
4.3 The Wave Equation

Example 4.3.1. Consider the homogeneous wave equation (1) (taking $c = 1$),

    $u_{tt} - u_{xx} = 0$, for $t > 0$, $x \in (0, l)$

with homogeneous boundary conditions

    $u(0, t) = u(l, t) = 0$, $t > 0$

and initial conditions

    $u(x, 0) = u_0(x)$, $u_t(x, 0) = u_1(x)$, for $x \in (0, l)$

for given functions $u_0(x)$ and $u_1(x)$³.

Substitute $u(x, t) = X(x)T(t)$ into the PDE to obtain

    $\frac{T''(t)}{T(t)} = \frac{X''(x)}{X(x)} = \lambda$ (constant)

The boundary conditions $u(0, t) = u(l, t) = 0$ imply that

    $X(0) = X(l) = 0$

assuming $T(t) \ne 0$ (which avoids giving only the trivial solution $u(x, t) = 0$).

Thus we have an eigenvalue problem for X(x), i.e.

    $X'' = \lambda X$, $X(0) = X(l) = 0$

The solution, the same as in the heat equation example (4.2.1), is

    $\lambda_n = -\Big(\frac{n\pi}{l}\Big)^2$, $X_n = \sin\Big(\frac{n\pi x}{l}\Big)$, $n = 1, 2, \ldots$

We have corresponding solutions for $T_n$, given by $T_n'' = \lambda_n T_n$:

    $T_n(t) = a_n\cos\Big(\frac{n\pi t}{l}\Big) + b_n\sin\Big(\frac{n\pi t}{l}\Big)$, $n = 1, 2, \ldots$

for some constants $a_n$, $b_n$.

And so, by the principle of linear superposition, we have the general solution

    $u(x, t) = \sum_{n=1}^{\infty} \Big[a_n\cos\Big(\frac{n\pi t}{l}\Big) + b_n\sin\Big(\frac{n\pi t}{l}\Big)\Big]\sin\Big(\frac{n\pi x}{l}\Big)$

To determine the coefficients $a_n$, $b_n$, let $t = 0$ so that

    $u_0(x) = u(x, 0) = \sum_{n=1}^{\infty} a_n\sin\Big(\frac{n\pi x}{l}\Big)$

Then, using orthogonality of the sine functions,

    $a_n = \frac{2}{l}\int_0^l u_0(x)\sin\Big(\frac{n\pi x}{l}\Big) \, dx$

Now, differentiating the solution term by term gives

    $u_t(x, t) = \sum_{n=1}^{\infty} \Big[-\frac{n\pi a_n}{l}\sin\Big(\frac{n\pi t}{l}\Big) + \frac{n\pi b_n}{l}\cos\Big(\frac{n\pi t}{l}\Big)\Big]\sin\Big(\frac{n\pi x}{l}\Big)$

Again letting $t = 0$ and applying the other initial condition,

    $u_1(x) = u_t(x, 0) = \sum_{n=1}^{\infty} \Big(\frac{n\pi b_n}{l}\Big)\sin\Big(\frac{n\pi x}{l}\Big)$

Thus, using the orthogonality principles once more,

    $b_n = \frac{2}{n\pi}\int_0^l u_1(x)\sin\Big(\frac{n\pi x}{l}\Big) \, dx$

The coefficients $a_n$, $b_n$ are thereby determined by the initial conditions.

³We require two initial conditions for a well-posed problem since the PDE is 2nd order in time.
4.3.1 D'Alembert's solution

Example 4.3.2. Suppose the initial conditions are

    $u(x, 0) = u_0(x)$, $u_t(x, 0) = u_1(x) = 0$

Then $b_n = 0$ and

    $u(x, t) = \sum_{n=1}^{\infty} a_n\cos\Big(\frac{n\pi t}{l}\Big)\sin\Big(\frac{n\pi x}{l}\Big)$

From the identity

    $\cos(\alpha)\sin(\beta) = \frac{1}{2}[\sin(\beta - \alpha) + \sin(\beta + \alpha)]$

we can see that

    $u(x, t) = \frac{1}{2}\sum_{n=1}^{\infty} a_n\sin\Big(\frac{n\pi(x - t)}{l}\Big) + \frac{1}{2}\sum_{n=1}^{\infty} a_n\sin\Big(\frac{n\pi(x + t)}{l}\Big)$

Observe the solution is the sum of right- and left-travelling waves.
4.4 The Laplace Equation

Example 4.4.1. Consider the Laplace equation

    $u_{xx} + u_{yy} = 0$

over the square domain $(x, y) \in (0, 1) \times (0, 1) = \Omega$, with boundary conditions

    $u(x) = g(x)$, $x \in \partial\Omega$

where $\partial\Omega$ denotes the boundary of the square domain $\Omega$. We specify the boundary conditions

    $u(0, y) = u(1, y) = u(x, 0) = 0$, $u(x, 1) = 1$

Thus this problem has three homogeneous boundary conditions and one inhomogeneous boundary condition.

Substituting

    $u(x, y) = X(x)Y(y)$

into the PDE gives

    $-\frac{Y''}{Y} = \frac{X''}{X} = \lambda$ (constant)

With boundary conditions

    $u(0, y) = u(1, y) = 0$

we see that

    $X(0) = X(1) = 0$

giving the eigenvalue problem

    $X'' - \lambda X = 0$, $X(0) = X(1) = 0$

The solution, as we have already seen, is

    $\lambda_n = -(n\pi)^2$, $X_n(x) = \sin(n\pi x)$, $n = 1, 2, \ldots$

The corresponding ODE for Y is

    $Y'' + \lambda Y = 0$, i.e. $Y'' - (n\pi)^2 Y = 0$

having solution

    $Y_n(y) = a_n\sinh(n\pi y) + b_n\cosh(n\pi y)$, $n = 1, 2, \ldots$

for some constants $a_n$, $b_n$.

The condition $u(x, 0) = X(x)Y(0) = 0$ gives $Y(0) = 0$ and hence $b_n = 0$; thus

    $Y_n(y) = a_n\sinh(n\pi y)$

By linear superposition, we have the solution

    $u(x, y) = \sum_{n=1}^{\infty} a_n\sinh(n\pi y)\sin(n\pi x)$

To determine the $a_n$ we consider the remaining boundary condition $u(x, 1) = 1$. Thus

    $u(x, 1) = \sum_{n=1}^{\infty} a_n\sinh(n\pi)\sin(n\pi x) = 1$

Multiplying by $\sin(m\pi x)$ and integrating from 0 to 1 gives

    $a_m\sinh(m\pi) \cdot \frac{1}{2} = \int_0^1 \sin(m\pi x) \, dx$

Hence

    $a_m = \frac{2}{\sinh(m\pi)}\int_0^1 \sin(m\pi x) \, dx = \frac{2(1 - (-1)^m)}{m\pi\sinh(m\pi)}$

completing our solution.
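As a numerical sanity check, the series can be evaluated with the hyperbolic ratio rewritten as $\sinh(n\pi y)/\sinh(n\pi) = e^{n\pi(y-1)}(1 - e^{-2n\pi y})/(1 - e^{-2n\pi})$ to avoid overflow for large n. A sketch (truncation at 200 terms is an arbitrary choice) recovering $u(x, 1) \approx 1$ away from the corners:

```python
import numpy as np

def u(x, y, terms=200):
    n = np.arange(1, terms + 1)[:, None]
    a_sinh = 2.0 * (1.0 - (-1.0) ** n) / (n * np.pi)   # a_n * sinh(n*pi)
    ratio = (np.exp(n * np.pi * (y - 1.0))             # sinh(n*pi*y)/sinh(n*pi)
             * (1.0 - np.exp(-2.0 * n * np.pi * y))
             / (1.0 - np.exp(-2.0 * n * np.pi)))
    return np.sum(a_sinh * ratio * np.sin(n * np.pi * x), axis=0)

x = np.linspace(0.0, 1.0, 9)
print(np.round(u(x, 1.0), 3))   # close to 1 in the interior of the top edge
print(np.round(u(x, 0.5), 3))   # smooth profile inside the square
```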
4.5 Sturm-Liouville Theory

5 Taylor's Theorem

To prove that the approximations in the following sections become better as $n \to \infty$ (n being the number of grid points), we require the following theorem:

Theorem 5.0.1 (Taylor's Theorem). Assuming u is a smooth function (i.e. all its derivatives exist, of any order), then for some $\xi \in (x, x + h)$ we have:

    $u(x + h) = u(x) + hu'(x) + \frac{h^2}{2!}u''(x) + \cdots + \frac{h^{n-1}}{(n-1)!}u^{(n-1)}(x) + \frac{h^n}{n!}u^{(n)}(\xi)$    (8)

(Note: this is Taylor's theorem with Lagrange's form of the error, from Real Analysis MT20101.)
6 Finite Difference Methods: Centered Approximation

6.1 Centered difference approximation

When an exact solution can't be found it is often useful to approximate the solution numerically at a finite number of points.

Definition 6.1.1 (Uniform Grid). A uniform grid on [0, 1] is defined by the points $x_j = jh$, for $j = 0, 1, \ldots, n$, where $h = 1/n$. We say h is the grid spacing.

A well-posed PDE for a function u can be used to find approximations for $u(x_j)$. To begin with we need to approximate the derivatives of the function u.

Definition 6.1.2 (Centered Difference).

    $\delta u(x) = u(x + h/2) - u(x - h/2)$

is the centered difference of u at x.

Definition 6.1.3 (2nd Centered Difference).

    $\delta^2 u(x) = \delta(\delta u(x)) = \delta u(x + h/2) - \delta u(x - h/2) = [u(x + h) - u(x)] - [u(x) - u(x - h)] = u(x + h) - 2u(x) + u(x - h)$

This is the 2nd centered difference of u at x.

For points $x_j$ on our grid, we can now use the approximation

    $u''(x_j) \approx \frac{\delta^2 u(x_j)}{h^2} = \frac{u(x_j + h) - 2u(x_j) + u(x_j - h)}{h^2}$    (9)

but for the first derivative we cannot use

    $u'(x_j) \approx \frac{\delta u(x_j)}{h} = \frac{u(x_j + h/2) - u(x_j - h/2)}{h}$

since this requires values of u at $x_j + h/2$ and $x_j - h/2$, which are not points on our grid. We require:

Definition 6.1.4 (Averaged Centered Difference).

    $\bar{\delta}u(x) = \delta u(x + h/2) + \delta u(x - h/2) = [u(x + h) - u(x)] + [u(x) - u(x - h)] = u(x + h) - u(x - h)$

giving the approximation for points $x_j$ on our grid:

    $u'(x_j) \approx \frac{\bar{\delta}u(x_j)}{2h} = \frac{u(x_j + h) - u(x_j - h)}{2h}$    (10)

We would like to be able to show these approximations are valid and measure their error. The following lemmas give the order of the error introduced when making these estimates.
Lemma 6.1.1. The error in u'(x) given by the 1st centered difference approximation is $O(h^2)$.

Proof. By Taylor's theorem (5.0.1), expanding with step h/2,

    $u(x + h/2) = u(x) + \frac{h}{2}u'(x) + \frac{(h/2)^2}{2!}u''(x) + O(h^3)$
    $u(x - h/2) = u(x) - \frac{h}{2}u'(x) + \frac{(h/2)^2}{2!}u''(x) + O(h^3)$

Thus

    $\frac{\delta u(x)}{h} = \frac{hu'(x) + O(h^3)}{h} = u'(x) + O(h^2)$

as claimed.

Lemma 6.1.2. The error in u''(x) given by the 2nd centered difference approximation is $O(h^2)$.

Proof. By Taylor's theorem:

    $u(x + h) = u(x) + hu'(x) + \frac{h^2}{2}u''(x) + \frac{h^3}{6}u'''(x) + O(h^4)$
    $u(x - h) = u(x) - hu'(x) + \frac{h^2}{2}u''(x) - \frac{h^3}{6}u'''(x) + O(h^4)$

Adding the expressions for $u(x + h)$ and $u(x - h)$ gives

    $u(x - h) + u(x + h) = 2u(x) + h^2 u''(x) + O(h^4)$

Hence,

    $\frac{\delta^2 u(x)}{h^2} = \frac{-2u(x) + u(x + h) + u(x - h)}{h^2} = \frac{h^2 u''(x) + O(h^4)}{h^2} = u''(x) + O(h^2)$

as claimed.

Lemma 6.1.3. The error in u'(x) given by the averaged centered difference approximation is $O(h^2)$.

Proof. We simply apply Taylor's theorem (5.0.1):

    $\frac{\bar{\delta}u(x)}{2h} = \frac{u(x + h) - u(x - h)}{2h} = \frac{[u(x) + hu'(x) + \frac{h^2}{2}u''(x) + O(h^3)] - [u(x) - hu'(x) + \frac{h^2}{2}u''(x) + O(h^3)]}{2h} = u'(x) + O(h^2)$

as claimed.
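These error orders are easy to confirm numerically: halving h should divide an $O(h^2)$ error by about four. A short sketch (the test function $u = \sin$ and the point $x = 1$ are arbitrary choices):

```python
import numpy as np

u, du, d2u = np.sin, np.cos, lambda t: -np.sin(t)
x = 1.0
for h in (0.1, 0.05, 0.025):
    avg = (u(x + h) - u(x - h)) / (2.0 * h)             # averaged centered, O(h^2)
    second = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2  # 2nd centered, O(h^2)
    print(h, abs(avg - du(x)), abs(second - d2u(x)))    # errors drop ~4x per halving
```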
6.2 Reaction-diffusion problem

So now we can approximate a function and its 1st and 2nd derivatives on a set of grid points. But how does this help approximate the solution to a given PDE problem? The following example illustrates the method of finite differences.

Example 6.2.1. Consider the boundary value problem:

    $-u'' + ru = f$, for $0 < x < 1$, with $u(0) = \alpha$ and $u(1) = \beta$

where $r \ge 0$ and $\alpha, \beta \in \mathbb{R}$ are given constants. On grid points $x_j$ the PDE tells us:

    $-u''(x_j) + ru(x_j) = f(x_j)$, $j = 1, \ldots, n - 1$

We know that $u(x_0) = u(0) = \alpha$ and $u(x_n) = u(1) = \beta$.

Writing $f(x_j) = f_j$ and using the approximations $u(x_j) \approx u_j$, $u''(x_j) \approx \frac{\delta^2 u_j}{h^2}$, we have (for $j = 1, \ldots, n - 1$) the following equations:

    $-\frac{\delta^2 u_j}{h^2} + ru_j = f_j$

Now, by substituting equation (9) into this we get:

    $-\frac{u_{j+1} - 2u_j + u_{j-1}}{h^2} + ru_j = f_j$

Rearranging, we have:

    $-\frac{1}{h^2}u_{j-1} + \Big(\frac{2}{h^2} + r\Big)u_j - \frac{1}{h^2}u_{j+1} = f_j$

Recalling $u_0 = \alpha$ and $u_n = \beta$, we have a system of linear equations which can be written in matrix form. We can use methods from linear algebra to solve for the $u_j$; the following example does this.
Example 6.2.2. Consider the boundary value problem:

    $-u'' = 1$, for $0 < x < 1$, where $u(0) = u(1) = 0$

This is the reaction-diffusion problem from example (6.2.1) with $r = 0$, $f(x) = 1$, and $\alpha = \beta = 0$.

Solving this directly gives the exact solution $u(x) = \frac{1}{2}(x - x^2)$, but let's use the method of finite differences to approximate it.

Using $n = 6$, so that $h = 1/6$ is the grid spacing, we have interior grid points $x_1, x_2, \ldots, x_5$. We get the system of linear equations

    $-\frac{1}{h^2}u_{j-1} + \frac{2}{h^2}u_j - \frac{1}{h^2}u_{j+1} = 1$

for $j = 1, \ldots, 5$, with $u_0 = u_6 = 0$. Putting these into matrix form we have the linear system:

    $\begin{pmatrix} 72 & -36 & 0 & 0 & 0 \\ -36 & 72 & -36 & 0 & 0 \\ 0 & -36 & 72 & -36 & 0 \\ 0 & 0 & -36 & 72 & -36 \\ 0 & 0 & 0 & -36 & 72 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}$

giving approximate solutions: $u_1 = u_5 = 5/72$, $u_2 = u_4 = 1/9$, $u_3 = 1/8$.
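The same computation in a few lines of Python (a sketch; numpy's dense solver stands in for whatever linear algebra one prefers). The agreement with the exact solution is exact at the grid points because the truncation error of the scheme vanishes for a quadratic u:

```python
import numpy as np

n = 6
h = 1.0 / n
x = np.arange(1, n) * h                       # interior points x_1, ..., x_5
A = (np.diag([2.0 / h**2] * (n - 1))          # tridiagonal [-36, 72, -36] matrix
     + np.diag([-1.0 / h**2] * (n - 2), 1)
     + np.diag([-1.0 / h**2] * (n - 2), -1))
f = np.ones(n - 1)

u = np.linalg.solve(A, f)
print(u)                  # [5/72, 1/9, 1/8, 1/9, 5/72]
print(0.5 * (x - x**2))   # exact solution at the grid points: identical
```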
6.3 Errors, Convergence and Consistency

We have derived a method for approximating numerical solutions to PDEs. We now wish to ensure that the errors introduced are small. We make the following definitions and then give examples of their use.

Definition 6.3.1 (Global Error). The global error is defined by

    $e_j = u(x_j) - u_j$, $j = 0, 1, \ldots, n$

This measures how close the approximated solution $u_j$ is to the true solution $u(x_j)$.

Definition 6.3.2 (Local Error). The local (truncation) error $T_j$ at a grid point $x_j$ is the remainder when $u(x_j)$ (the exact value) is substituted for $u_j$ (the estimate) in the finite difference equation (i.e. the equation that the approximated value $u_j$ satisfies). An example is given below.

Definition 6.3.3 (Convergence). A finite difference method is said to be convergent if and only if

    $\max_{0 \le j \le n} |e_j| \to 0$ as $n \to \infty$

We wish these errors to be as small as possible and to approach zero as we estimate with increasingly smaller grid spacing.

Definition 6.3.4 (kth Order Consistent). A finite difference method is said to be kth order consistent, for $k > 0$, if the local truncation error satisfies

    $T_j = O(h^k)$, $j = 1, 2, \ldots, n - 1$

We will show the relationship between consistency and convergence when we introduce stability in a subsequent section.
6.3.1 Consistency of reaction-diffusion problem

Example 6.3.1. The centered difference approximation to the reaction-diffusion model (in example (6.2.1)) is 2nd order consistent.

Proof. The local truncation error $T_j$ at grid point $x_j$ is the remainder when $u(x_j)$ is substituted in place of $u_j$ in

    $-\frac{1}{h^2}\delta^2 u_j + ru_j = f_j$

i.e.:

    $T_j = -\frac{1}{h^2}\delta^2 u(x_j) + ru(x_j) - f_j$

We know (from the PDE) that

    $0 = -u''(x_j) + ru(x_j) - f_j$

so subtracting the equations gives

    $T_j = u''(x_j) - \frac{1}{h^2}\delta^2 u(x_j)$

The result immediately follows from lemma (6.1.2).
6.3.2 Relationship of errors of reaction-diffusion problem

Example 6.3.2. We can relate the global error $e_j$ at the grid point $x_j$ to the local error $T_j$ as follows:

    $T_j = -\frac{1}{h^2}\delta^2 u(x_j) + ru(x_j) - f_j$
    $0 = -\frac{1}{h^2}\delta^2 u_j + ru_j - f_j$

Subtracting these equations and recalling $e_j = u(x_j) - u_j$ gives

    $T_j = -\frac{1}{h^2}\delta^2 e_j + re_j$

This is a linear system of equations for $j = 1, \ldots, n - 1$, giving $e_j$ in terms of $T_j$.
6.4 Stability

Theorem 6.4.1 (Discrete Maximum Principle). Consider a set of grid-point estimates $\{u_j\}_{j=0}^n$ that satisfy

    $-au_{j-1} + bu_j - cu_{j+1} \le 0$, $j = 1, 2, \ldots, n - 1$

where $a, b, c \in \mathbb{R}$ denote coefficients with

    $a \ge 0$, $c \ge 0$, $b \ge a + c > 0$

Then

    $u_j \le \max\{0, u_0, u_n\}$, $j = 0, 1, \ldots, n$.

Proof. For contradiction, assume there is an interior maximum $u_k > 0$ such that

    $u_k = \max\{u_0, u_1, \ldots, u_n\}$ and $\min\{u_{k-1}, u_{k+1}\} < u_k$

As $-au_{k-1} + bu_k - cu_{k+1} \le 0$, we have

    $bu_k \le au_{k-1} + cu_{k+1} < au_k + cu_k = (a + c)u_k \le bu_k$

a contradiction. Therefore we may conclude that any maximum $u_k$ is either

    non-positive ($u_k \le 0$), or
    on the boundary ($u_k \le \max\{u_0, u_n\}$).

This is often called the discrete maximum principle, and a finite difference method whose coefficients satisfy these sign conditions is called stable.

Theorem 6.4.2. Suppose that a finite difference method is kth order consistent and stable, with the form

    $-au_{j-1} + bu_j - cu_{j+1} = f_j$

with $a, c \ge 0$ and $b \ge a + c > 0$. Then the numerical approximation converges to the exact solution, i.e.:

    $e_j = |u_j - u(x_j)| = O(h^k)$ as $h \to 0$

and therefore the method is convergent.

Proof. Not given.

We have the result:

    consistency + stability = convergence

Recall, consistency means the local error $T_j$ is $O(h^k)$ for some $k \ge 1$, and convergence means the global error $e_j$ is small.
6.4.1 Stability and convergence of reaction-diusion problem
Example 6.4.1. The centered dierence method for the reaction-diusion
problem (see example (6.2.1)) is stable and therefore convergent.
Proof. Recall the nite dierence approximation

1
h
2
u
j1
+ (
2
h
2
+ r)u
j

1
h
2
u
j+1
= f
j
To show that this approximation is stable we apply the discrete maximum
principle (theorem (6.4.1))
a =
1
h
2
, b =
2
h
2
+ r, c =
1
h
2
Hence, as a, c 0 and b a + c as r 0. Therefore, this centred dierence
method is stable.
We have also shown this method to be 2
nd
order consistent (example
(6.3.1)), therefore
global error e
j
= |u
j
u(x
j
)| = O(h
2
) as h 0
The centered dierence method for the reaction-diusion model is (2
nd
order) convergent.
6.5 Convection-diffusion problem

Example 6.5.1. Consider the boundary value problem:

    $-u'' + wu' = f$, for $0 < x < 1$, with $u(0) = \alpha$ and $u(1) = \beta$

where the scalar $w \in \mathbb{R}$ (known as the "wind") and boundary values $\alpha, \beta \in \mathbb{R}$ are given. For grid points $x_j$ the exact solution satisfies

    $-u''(x_j) + wu'(x_j) = f(x_j)$, $j = 1, 2, \ldots, n - 1$

and $u(x_0) = \alpha$, $u(x_n) = \beta$.

Substituting in centered difference approximations for $u''$ and $u'$ gives

    $-\frac{\delta^2 u_j}{h^2} + w\frac{\bar{\delta}u_j}{2h} = f_j$

which is

    $-\frac{u_{j-1} - 2u_j + u_{j+1}}{h^2} + w\frac{u_{j+1} - u_{j-1}}{2h} = f_j$

and this rearranges to

    $\underbrace{\Big(-\frac{1}{h^2} - \frac{w}{2h}\Big)}_{-a}u_{j-1} + \underbrace{\Big(\frac{2}{h^2}\Big)}_{b}u_j + \underbrace{\Big(-\frac{1}{h^2} + \frac{w}{2h}\Big)}_{-c}u_{j+1} = f_j$

for $j = 1, 2, \ldots, n - 1$.

Using the boundary conditions $u_0 = \alpha$, $u_n = \beta$ to simplify the $j = 1$ and $j = n - 1$ cases, we have the linear system of equations

    $bu_1 - cu_2 = f_1 + a\alpha$
    $-au_{j-1} + bu_j - cu_{j+1} = f_j$, for $j = 2, \ldots, n - 2$
    $-au_{n-2} + bu_{n-1} = f_{n-1} + c\beta$

which can be written in matrix form

    $\begin{pmatrix} b & -c & & & \\ -a & b & -c & & \\ & \ddots & \ddots & \ddots & \\ & & -a & b & -c \\ & & & -a & b \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_{n-2} \\ u_{n-1} \end{pmatrix} = \begin{pmatrix} f_1 + a\alpha \\ f_2 \\ \vdots \\ f_{n-2} \\ f_{n-1} + c\beta \end{pmatrix}$    (11)

Note this matrix is non-symmetric, as $a \ne c$ (unless $w = 0$).

The local truncation error is the remainder after replacing $u_j$ in the finite difference method by $u(x_j)$, i.e.:

    $T_j = -\frac{1}{h^2}\delta^2 u(x_j) + \frac{w}{2h}\bar{\delta}u(x_j) - f_j$

To evaluate this, subtract

    $-u''(x_j) + wu'(x_j) - f(x_j) = 0$

resulting in

    $T_j = \Big[u''(x_j) - \frac{1}{h^2}\delta^2 u(x_j)\Big] + w\Big[-u'(x_j) + \frac{\bar{\delta}u(x_j)}{2h}\Big]$

By lemmas (6.1.2) and (6.1.3) we know

    $\frac{1}{h^2}\delta^2 u(x_j) = u''(x_j) + O(h^2)$ and $\frac{\bar{\delta}u(x_j)}{2h} = u'(x_j) + O(h^2)$

Therefore

    $T_j = O(h^2) + O(h^2) = O(h^2)$

so this method is 2nd order consistent. To check for stability we apply the stability theorem (6.4.1):

    $a \ge 0$: $\frac{1}{h^2} + \frac{w}{2h} \ge 0$
    $c \ge 0$: $\frac{1}{h^2} - \frac{w}{2h} \ge 0$
    $b \ge a + c > 0$: $\frac{2}{h^2} \ge \frac{2}{h^2} > 0$, which always holds.

The first two conditions are satisfied when:

1. if $w > 0$, we need $\frac{1}{h^2} - \frac{w}{2h} \ge 0 \iff \frac{wh}{2} \le 1 \iff h \le \frac{2}{w}$;

2. if $w < 0$, we need $\frac{1}{h^2} + \frac{w}{2h} \ge 0 \iff \frac{-wh}{2} \le 1 \iff h \le \frac{2}{-w}$.

Therefore this centered difference approximation is stable (and hence convergent, by theorem (6.4.2)) when

    $\frac{|w|h}{2} \le 1$

This ratio is called the mesh Peclet number; if it exceeds 1 the centered difference method solution shows oscillations of period h, which indicates instability.
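The oscillation is easy to provoke. Below is a sketch (the homogeneous problem $-u'' + wu' = 0$ with $u(0) = 0$, $u(1) = 1$, and the particular values of w and n, are illustrative choices, not from the notes) solving the centered scheme on a coarse and a fine grid:

```python
import numpy as np

def solve(w, n):
    """Centered scheme for -u'' + w u' = 0, u(0) = 0, u(1) = 1."""
    h = 1.0 / n
    sub, diag, sup = -1.0 / h**2 - w / (2 * h), 2.0 / h**2, -1.0 / h**2 + w / (2 * h)
    A = (np.diag([diag] * (n - 1)) + np.diag([sub] * (n - 2), -1)
         + np.diag([sup] * (n - 2), 1))
    f = np.zeros(n - 1)
    f[-1] = -sup            # boundary value u(1) = 1 moved to the right-hand side
    return np.linalg.solve(A, f)

print(np.round(solve(50.0, 10), 2))        # mesh Peclet 2.5 > 1: sawtooth oscillation
print(np.round(solve(50.0, 200)[-6:], 2))  # mesh Peclet 0.125: smooth boundary layer
```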
7 Finite Difference Methods: Upwind Approximation

An alternative to centered difference approximations are the one-sided "upwind" difference approximations.

Definition 7.0.1 (One-sided Difference). At grid points $x_j$ the one-sided finite difference approximations to u' are given by

    $u'(x_j) \approx \frac{\delta^+ u(x_j)}{h} = \frac{u(x_j + h) - u(x_j)}{h}$

and

    $u'(x_j) \approx \frac{\delta^- u(x_j)}{h} = \frac{u(x_j) - u(x_j - h)}{h}$

these being the right-sided and left-sided approximations, respectively. Which side is considered the upwind side and which the downwind side depends on the given problem.

By a simple application of Taylor's theorem (5.0.1) we see that:

Lemma 7.0.1. The error in u'(x) given by a one-sided approximation is $O(h)$.

Proof.

    $\frac{\delta^+ u(x_j)}{h} = \frac{u(x_j + h) - u(x_j)}{h} = \frac{hu'(x_j) + O(h^2)}{h} = u'(x_j) + O(h)$

    $\frac{\delta^- u(x_j)}{h} = \frac{u(x_j) - u(x_j - h)}{h} = \frac{hu'(x_j) + O(h^2)}{h} = u'(x_j) + O(h)$
7.1 Convection-diffusion problem (revisited)

Example 7.1.1. Let's return to the convection-diffusion problem (example (6.5.1)). We will continue to use the centered difference approximation for $u''(x_j)$, but we will now consider the upwind approximation of $u'(x_j)$. Recall that the parameter w in the convection-diffusion model was called the "wind"⁴.

If $w > 0$, then the left-hand side of grid point $x_j$ is considered upwind and we approximate

    $u'(x_j) \approx \frac{\delta^- u(x_j)}{h}$

If $w < 0$, then the right-hand side of grid point $x_j$ is considered upwind and we approximate

    $u'(x_j) \approx \frac{\delta^+ u(x_j)}{h}$

This method is called the upwind difference approximation.

Using these new approximations, for $w > 0$, we obtain

    $-\frac{\delta^2 u_j}{h^2} + w\frac{\delta^- u_j}{h} = f_j$

which is

    $-\frac{u_{j-1} - 2u_j + u_{j+1}}{h^2} + w\frac{u_j - u_{j-1}}{h} = f_j$

and this rearranges to

    $\underbrace{\Big(-\frac{1}{h^2} - \frac{w}{h}\Big)}_{-a}u_{j-1} + \underbrace{\Big(\frac{2}{h^2} + \frac{w}{h}\Big)}_{b}u_j + \underbrace{\Big(-\frac{1}{h^2}\Big)}_{-c}u_{j+1} = f_j$

for $j = 1, 2, \ldots, n - 1$.

Applying the boundary conditions, we can derive the same linear system (11) from example (6.5.1), where a, b, c are now redefined as above.

We can show this upwind method is convergent for any choice of grid spacing $h > 0$. We have local truncation error

    $T_j = -\frac{\delta^2 u(x_j)}{h^2} + w\frac{\delta^- u(x_j)}{h} - f_j$

Subtracting

    $-u''(x_j) + wu'(x_j) - f_j = 0$

we obtain

    $T_j = \Big[u''(x_j) - \frac{\delta^2 u(x_j)}{h^2}\Big] + w\Big[-u'(x_j) + \frac{\delta^- u(x_j)}{h}\Big]$

We have shown (by lemmas (6.1.2) and (7.0.1)) that

    $\frac{\delta^2 u(x_j)}{h^2} = u''(x_j) + O(h^2)$, $\frac{\delta^- u(x_j)}{h} = u'(x_j) + O(h)$

Hence

    $T_j = O(h^2) + O(h) = O(h)$

Therefore this method is 1st order consistent. To check for stability we apply the stability theorem (6.4.1):

    $a \ge 0$: $\frac{1}{h^2} + \frac{w}{h} \ge 0$ (true, since $w > 0$)
    $c \ge 0$: $\frac{1}{h^2} \ge 0$
    $b \ge a + c > 0$: $\frac{2}{h^2} + \frac{w}{h} \ge \Big(\frac{1}{h^2} + \frac{w}{h}\Big) + \frac{1}{h^2} > 0$

Recall we are already considering the case $w > 0$ (the case $w < 0$ is similar), so all conditions are satisfied for every $h > 0$. Since the method is consistent and stable we may conclude it is convergent, by theorem (6.4.2).

⁴Upwind (adverb): toward or against the wind, or the direction from which it is blowing. If w is positive, then the wind is blowing left-to-right; therefore if we are travelling upwind we are travelling to the left. Hence, when $w > 0$ we take the left-sided approximation.
8 Finite Difference Methods: Euler Methods for ODEs

We briefly study finite difference methods for ODEs; these will be needed for the method of lines introduced in the following sections.

8.1 The Explicit Euler method

Definition 8.1.1 (Explicit Euler Method). Consider the following initial value problem for some function U(t):

    $\frac{dU}{dt} = f(U)$, $U(0) = U_0$    (12)

For $k > 0$ (the time-step) we approximate⁵

    $\frac{dU}{dt} \approx \frac{U(t + k) - U(t)}{k}$

Letting $t = nk$ and applying the equality given by the ODE (12), we have

    $\frac{U((n + 1)k) - U(nk)}{k} \approx \frac{dU(t)}{dt}\Big|_{t=nk} = f(U(nk))$

Using the approximation $U(nk) \approx U_n$, we have

    $U_{n+1} = U_n + kf(U_n)$

This is the explicit Euler method approximation. It is possible to show the global error $|U(nk) - U_n| = O(k)$ under certain assumptions on f (proof not given).

Example 8.1.1. Consider the ODE

    $\frac{dU}{dt} = -3U$, $U(0) = 2$

In this case the exact solution is given by $U(t) = 2e^{-3t}$, but let's use the explicit Euler method to approximate a solution with time-step $k = 0.1$:

    $U_{n+1} = U_n + kf(U_n)$
    $U_1 = U_0 + k(-3U_0) = 2 + 0.1 \times (-3 \times 2) = 1.4$
    $U_2 = U_1 + k(-3U_1) = 1.4 + 0.1 \times (-3 \times 1.4) = 0.98$

In comparison to the exact solution:

    $U_1 \approx U(0.1) = 2e^{-3 \times 0.1} = 1.4816\ldots$
    $U_2 \approx U(0.2) = 2e^{-3 \times 0.2} = 1.0976\ldots$

And we can get more accurate approximations by taking k smaller.

Example 8.1.2. For the ODE $\frac{dU}{dt} = U(t)^2$ with initial condition $U(0) = 1$, we have

    $\frac{dU}{dt} = U^2 \implies \int \frac{1}{U^2} \, dU = \int dt \implies -\frac{1}{U} = t + c$

for some constant $c \in \mathbb{R}$. Applying the initial condition,

    $U(0) = 1 \implies -\frac{1}{1} = 0 + c \implies c = -1$

Therefore, we have the exact solution

    $U(t) = \frac{1}{1 - t}$

Approximating by the explicit Euler method, we have

    $U_{n+1} = U_n + kU_n^2$

At time $t = 0.2$ we have the exact answer

    $U(0.2) = \frac{1}{1 - 0.2} = 1.25$

For time-step $k = 0.1$ we need $n = 2$:

    $U_1 = U_0 + 0.1 U_0^2 = 1 + 0.1 = 1.1$
    $U_2 = U_1 + 0.1 U_1^2 = 1.1 + 0.1(1.1)^2 = 1.221$

Hence the error is given by

    error $= |1.221 - 1.25| = 0.029$

For time-step $k = 0.01$ we require $n = 20$:

    $U_1 = U_0 + 0.01 U_0^2 = 1 + 0.01 = 1.01$
    $U_2 = U_1 + 0.01 U_1^2 = 1.01 + 0.01(1.01)^2 = 1.020201$
    $\vdots$
    $U_{20} = 1.24658447\ldots$

Hence the error is given by

    error $= |U_{20} - U(0.2)| = |1.24658447\ldots - 1.25| = 0.0034\ldots$

And the error gets smaller as k decreases.

⁵This approximation should come as no surprise, since if we take the limit as $k \to 0$ we have exact equality: this is then simply the definition of the derivative.
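The computation in example (8.1.2) is easily scripted. A sketch reproducing the $k = 0.1$ and $k = 0.01$ runs, with one smaller step added to show the roughly linear decrease of the error in k:

```python
def euler(k, steps, U0=1.0):
    """Explicit Euler for dU/dt = U^2."""
    U = U0
    for _ in range(steps):
        U = U + k * U * U
    return U

exact = 1.0 / (1.0 - 0.2)          # U(0.2) = 1.25
for k, steps in ((0.1, 2), (0.01, 20), (0.001, 200)):
    U = euler(k, steps)
    print(k, U, abs(U - exact))    # errors: 0.029, 0.0034..., ~0.0003
```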
8.2 The Implicit Euler method

Definition 8.2.1 (Implicit Euler Method). Consider again the initial value problem (12), this time using the approximation

    $\frac{dU}{dt} \approx \frac{U(t) - U(t - k)}{k}$

An analogous derivation gives

    $U_n = U_{n-1} + kf(U_n)$

also written

    $U_{n+1} = U_n + kf(U_{n+1})$

This is the implicit Euler method approximation. In this case, however, a (generally non-linear) equation must be solved at each step to determine $U_n$.

Again it is possible to show that the global error $|U(nk) - U_n| = O(k)$ under certain assumptions on f (proof not given).
8.3 Stability and time-step restriction

Definition 8.3.1 (Region of Absolute Stability). Given the ordinary differential equation $\frac{dU}{dt} = \lambda U(t)$, we define the region of absolute stability to be the set of $\lambda k \in \mathbb{C}$ such that the Euler method approximation $U_n \to 0$ as $n \to \infty$, i.e. the numerical approximation decays (see definitions (8.1.1) and (8.2.1)).

Example 8.3.1. If $f(U(t)) = \lambda U(t)$ for some $\lambda \in \mathbb{C}$, we have the exact solution

    $U(t) = U_0 e^{\lambda t}$

Observe that if $\mathrm{Re}[\lambda] < 0$, then

    $\lim_{t \to \infty} |e^{\lambda t}| = \lim_{t \to \infty} |e^{(\mathrm{Re}[\lambda] + i\,\mathrm{Im}[\lambda])t}| = \lim_{t \to \infty} e^{\mathrm{Re}[\lambda]t}\underbrace{|e^{i\,\mathrm{Im}[\lambda]t}|}_{=1} = \lim_{t \to \infty} e^{\mathrm{Re}[\lambda]t} = 0$

Therefore

    $U(t) \to 0$ as $t \to \infty$

That is, the solution decays with time provided the real part of $\lambda$ is negative.

Definition 8.3.2 (Time-Step Restriction). The condition on the time-step k such that the Euler method approximation $U_n \to 0$ as $n \to \infty$ is called the time-step restriction on k.

Example 8.3.2. For $f(U) = \lambda U$, the explicit Euler method is

    $U_{n+1} = U_n + k\lambda U_n$

Rearranging gives

    $U_{n+1} = (1 + \lambda k)U_n$

and we can prove by induction that this has solution

    $U_n = (1 + \lambda k)^n U_0$

Hence, as $n \to \infty$,

    $U_n \to 0 \iff |1 + \lambda k| < 1$

Therefore the region of absolute stability is the set of $\lambda k \in \mathbb{C}$ that lie inside the circle of radius 1 in the complex plane with centre $-1$.

So, by example (8.3.1), the numerical approximation decays for all initial conditions if and only if $|1 + \lambda k| < 1$; this is the time-step restriction on k.

Example 8.3.3. For $f(U) = \lambda U$, the implicit Euler method is

    $U_n = U_{n-1} + k\lambda U_n$

Rearranging gives

    $U_n = \frac{1}{1 - \lambda k}U_{n-1}$

which has solution

    $U_n = (1 - \lambda k)^{-n} U_0$

Hence

    $U_n \to 0 \iff |1 - \lambda k| > 1$

That is, the region of absolute stability is the set of $\lambda k \in \mathbb{C}$ that lie outside the circle of radius 1 in the complex plane with centre $+1$.

In particular, this holds for all $\lambda$ with $\mathrm{Re}[\lambda] < 0$. Thus, by example (8.3.1), whenever the ODE is stable so is the implicit Euler method. Therefore there is no time-step restriction.
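A sketch contrasting the two methods on $\frac{dU}{dt} = \lambda U$ with $\lambda = -10$ (an arbitrary illustrative value): the explicit method requires $|1 + \lambda k| < 1$, here $k < 0.2$, while the implicit method decays for every k:

```python
lam = -10.0
for k in (0.15, 0.25):
    Ue = Ui = 1.0
    for _ in range(50):
        Ue = (1.0 + lam * k) * Ue        # explicit Euler step
        Ui = Ui / (1.0 - lam * k)        # implicit Euler step
    print(k, Ue, Ui)
# k = 0.15: both decay.  k = 0.25: explicit blows up (|1 + lam*k| = 1.5),
# implicit still decays.
```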
9 Finite Difference Methods: Method of Lines

We develop the method of lines approximation in the following example.

9.1 The Heat Equation

Example 9.1.1. Consider the heat equation for $t > 0$ and $x \in (0, 1)$:

    $u_t = u_{xx}$
    $u(0, t) = u(1, t) = 0$
    $u(x, 0) = u_0(x)$

Suppose the following Fourier series is a potential solution:

    $u(x, t) = \sum_{j=1}^{\infty} u_j(t)\sin(j\pi x)$

where the $u_j(t)$ are some functions of t. Notice that the boundary conditions are satisfied. Considering that

    $u_t(x, t) = \sum_{j=1}^{\infty} u_j'(t)\sin(j\pi x)$
    $u_{xx}(x, t) = \sum_{j=1}^{\infty} u_j(t)(-j^2\pi^2)\sin(j\pi x)$

substituting into the equation gives

    $\sum_{j=1}^{\infty} u_j'(t)\sin(j\pi x) = \sum_{j=1}^{\infty} u_j(t)(-j^2\pi^2)\sin(j\pi x)$

Equating coefficients of $\sin(j\pi x)$ gives the ODEs

    $u_j' = -j^2\pi^2 u_j$, $j = 1, 2, \ldots$    (13)

From the initial condition,

    $u_0(x) = u(x, 0) = \sum_{j=1}^{\infty} u_j(0)\sin(j\pi x)$

Hence, by the Fourier coefficient theorem (3.1.3),

    $u_j(0) = 2\int_0^1 u_0(x)\sin(j\pi x) \, dx$

Now, replace the infinite series of ODEs (13) with the finite system

    $u_j' = -j^2\pi^2 u_j$, $j = 1, 2, \ldots, J$    (14)

where $J \in \mathbb{N}$ is the size of the system.

Definition 9.1.1 (Method of Lines Approximation).

    $v(x, t) = \sum_{j=1}^{J} u_j(t)\sin(j\pi x)$

is the method of lines approximation to the heat equation problem (9.1.1).

Notes. We have truncated the number of terms in the infinite series of ODEs (13). The coefficient functions $u_j(t)$ for $j = 1, \ldots, J$ can now be computed numerically by Euler methods.

Notice these equations are decoupled, and we may solve each separately. To do so by the explicit Euler method (8.1.1), we must assume that the time-step k is chosen so that $\lambda k$ lies in the region of absolute stability; since $\lambda = -j^2\pi^2$ we require $|1 - j^2\pi^2 k| < 1$, that is, $-1 < 1 - j^2\pi^2 k < 1$, for $j = 1, 2, \ldots, J$, and therefore we must have the time-step restriction

    $k < \frac{2}{J^2\pi^2}$

This is very restrictive, as we must take J to be large to approximate u(x, t) accurately as a function of x.

The implicit Euler method has no such problem, so by using such a method we avoid this time-step restriction. It is a better method for the time discretisation of the heat equation given by the method of lines.
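A sketch of the method of lines with implicit Euler in time ($J = 20$ modes, $k = 0.01$, and the initial condition $u_0(x) = 1$ are illustrative choices; note $k = 0.01$ far exceeds the explicit restriction $2/(J^2\pi^2) \approx 5 \times 10^{-4}$). Because the modal ODEs are decoupled and linear, each implicit step is just a scalar division:

```python
import numpy as np

J, k, steps = 20, 0.01, 10
j = np.arange(1, J + 1)
uj = 2.0 * (1.0 - (-1.0) ** j) / (j * np.pi)   # u_j(0) for u0(x) = 1

for _ in range(steps):
    # implicit Euler for u_j' = -j^2 pi^2 u_j:  u_j^{n+1} = u_j^n / (1 + k j^2 pi^2)
    uj = uj / (1.0 + k * (j * np.pi) ** 2)

x = np.linspace(0.0, 1.0, 6)
v = np.sum(uj[:, None] * np.sin(np.outer(j, np.pi * x)), axis=0)
print(np.round(v, 4))    # smooth, decayed profile at t = 0.1
```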
9.2 The Wave Equation

Example 9.2.1. Consider the wave equation for $t > 0$ and $x \in (0, 1)$,

    $u_{tt} = u_{xx}$

subject to the boundary conditions

    $u(0, t) = u(1, t) = 0$

and initial conditions

    $u(x, 0) = h_0(x)$, $u_t(x, 0) = h_1(x)$

where $h_0$, $h_1$ are given functions. Suppose that

    $u(x, t) = \sum_{j=1}^{\infty} u_j(t)\sin(j\pi x)$

As in example (9.1.1) we can derive 2nd order ODEs with initial conditions for the coefficients $u_j(t)$. Observe

    $u_{tt}(x, t) = \sum_{j=1}^{\infty} u_j''(t)\sin(j\pi x)$
    $u_{xx}(x, t) = \sum_{j=1}^{\infty} (-j^2\pi^2)u_j(t)\sin(j\pi x)$

Substituting into the PDE and equating coefficients of the sines gives

    $u_j''(t) = -j^2\pi^2 u_j$

The general solution for $u_j$ is given by

    $u_j(t) = \alpha_j\sin(j\pi t) + \beta_j\cos(j\pi t)$

for some coefficients $\alpha_j$, $\beta_j$. Notice that $u_j(t)$ oscillates in time and does not decay or grow.

From the initial condition $u(x, 0) = h_0(x)$, we have

    $u_j(0) = 2\int_0^1 h_0(x)\sin(j\pi x) \, dx$

Similarly, $u_t(x, 0) = h_1(x)$ implies that

    $u_j'(0) = 2\int_0^1 h_1(x)\sin(j\pi x) \, dx$

giving initial conditions from which we can determine $\alpha_j$, $\beta_j$.

The explicit Euler method can be applied by writing the ODEs for $u_j$ as a first order system:

    $u_j' = v_j$
    $v_j' = -j^2\pi^2 u_j$

Then the explicit Euler method is given by

    $\begin{pmatrix} u_{j,n+1} \\ v_{j,n+1} \end{pmatrix} = \begin{pmatrix} u_{j,n} \\ v_{j,n} \end{pmatrix} + k\begin{pmatrix} v_{j,n} \\ -j^2\pi^2 u_{j,n} \end{pmatrix}$

or as the linear system

    $X_{n+1} = X_n + kAX_n = (I + kA)X_n$

where

    $X_n = \begin{pmatrix} u_{j,n} \\ v_{j,n} \end{pmatrix}$, $A = \begin{pmatrix} 0 & 1 \\ -j^2\pi^2 & 0 \end{pmatrix}$

Notes. By studying the eigenvalues of $I + kA$, it can be shown that $\|X_n\|$ always grows for $k > 0$ (assuming non-trivial initial conditions).

The solutions of the explicit Euler method grow (for all time-steps) whilst the exact solutions oscillate (with the same magnitude). The Euler method does not get this fundamental aspect of the behaviour of the wave equation correct.
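A sketch confirming the growth claim for a single mode j: the eigenvalues of A are $\pm ij\pi$, so every eigenvalue of $I + kA$ has modulus $\sqrt{1 + k^2 j^2\pi^2} > 1$, and the iterates grow no matter how small k is. The values of j, k and the run length below are arbitrary:

```python
import numpy as np

j, k = 1, 0.001
A = np.array([[0.0, 1.0], [-(j * np.pi) ** 2, 0.0]])
print(np.abs(np.linalg.eigvals(np.eye(2) + k * A)))   # both moduli > 1

X = np.array([1.0, 0.0])                 # u_j(0) = 1, v_j(0) = 0
for n in range(20_000):                  # integrate up to t = 20
    X = X + k * A @ X                    # explicit Euler step
print(np.linalg.norm(X))                 # > 1: slow but steady spurious growth
```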
