∂u/∂t = ∂²u/∂x², x ∈ (0, 1);
u(t, 0) = 0, u(t, 1) = 1, t ≥ 0;
u(0, x) = u0(x), x ∈ [0, 1].
If the initial temperature is described by the linear function u0 (x) = x, x ∈ [0, 1],
then the solution of the problem is u(t, x) = x; that is, the temperature is independent
of time. This solution is called the stationary solution of the problem. The stationary
solution can be obtained by setting ∂u/∂t = 0 and solving the ordinary differential
equation
u″(x) = 0, x ∈ (0, 1);
u(0) = 0, u(1) = 1.
For other initial functions, say u0(x) = x², the temperature will "tend" to the
stationary solution. □
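This behaviour can be observed with a short computation. The sketch below (grid size, time step, and step count are illustrative choices, not taken from the text) marches the standard explicit finite difference scheme for the heat equation starting from u0(x) = x² and checks that the profile approaches the stationary solution:

```python
import numpy as np

# Explicit scheme for u_t = u_xx, u(t,0) = 0, u(t,1) = 1, u(0,x) = x**2;
# n, tau and the step count are illustrative choices.
n = 50
h = 1.0 / (n + 1)
tau = 0.4 * h**2                 # below the stability bound h**2/2
x = np.linspace(0.0, 1.0, n + 2)
u = x**2                         # initial temperature u0(x) = x**2
u[0], u[-1] = 0.0, 1.0           # boundary values
for _ in range(20000):
    u[1:-1] += tau / h**2 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
print(np.max(np.abs(u - x)))     # ~ 0: the profile has reached u(t, x) = x
```

The time step is chosen below τ = h²/2, the classical stability bound of the explicit scheme.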
The general form of linear second order elliptic partial differential equations is
where functions D = D(x) (coefficient of heat conduction) and f = f (t, x) (the
heat source term) are known, and we have to determine the function u = u(t, x).
(Function u is supposed to be continuous on Ω̄ and differentiable in Ω.) In order to
obtain a well-posed problem, we generally prescribe so-called boundary conditions
on the boundary (denoted by ∂Ω) of the domain Ω. There are three different types
of boundary conditions.
• Dirichlet boundary condition. The value of u is prescribed on the boundary, that is u(t, x) = g1(x), x ∈ ∂Ω (g1 is a given function).
• Neumann boundary condition. The normal derivative of u is prescribed on the boundary, that is ∂u(t, x)/∂n = g2(x), x ∈ ∂Ω (g2 is a given function).
• Robin boundary condition. The linear combination of u and its normal derivative is prescribed on the boundary, that is ∂u(t, x)/∂n + αu(t, x) = g3(x), x ∈ ∂Ω (g3 is a given function).
If D is constant (say one) in (1.1.1), then the equation can be written in the
form
∇2 u + f = 0 in Ω, (1.1.2)
where ∇2 is the so-called Laplace operator,
∇²u = ∂²u/∂x1² + ... + ∂²u/∂xd²
(d is the dimension of Ω). Equation (1.1.2) is called the Poisson equation and, in the
case f = 0, the Laplace equation.
In this lecture, we solve the Poisson equation with Dirichlet boundary condition
on one- and two-dimensional domains using the finite difference method.
boundary conditions. Nevertheless, we solve this problem numerically with the finite
difference method, in order to understand the relations between the global error and
the local truncation error, and to observe how stability connects these two types of
errors.
Ah uh = fh , (1.2.3)
The subscript h indicates the dependence on h. As we will see later, system (1.2.3)
can be solved uniquely, thus the finite difference solution does exist.
goes to zero as h tends to zero (‖zh‖ → 0 as h → 0). The magnitude of the global
error can be measured, for instance, in the maximum norm,
‖zh‖∞ = max_{i=1,...,n} |zi|.
1.2.5 Stability in maximum norm
In this section the stability of the numerical method will be proven by introducing
the notion of M-matrices.
Proof. It can be seen that the off-diagonal elements of the matrix defined in
(1.2.4) are non-positive. Moreover, let us define the vector r ∈ ℝⁿ as ri = p(xi)
(i = 1, ..., n), where p(x) = 1/4 − (x − 1/2)². It is trivial that ri > 0, that is, r is a
positive vector, and considering the relation
−(p(xi−1) − 2p(xi) + p(xi+1))/h² = −p″(xi) = 2
(the centered difference is exact because p(x) is a quadratic polynomial), we have
Ah r = 2e > 0, where e = [1, ..., 1]ᵀ ∈ ℝⁿ.
This completes the proof.
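The computation in the proof can be checked numerically. The sketch below builds the tridiagonal matrix with entries (−1, 2, −1)/h² (this form of the matrix from (1.2.4), which is not shown in this excerpt, is inferred from the stencil used in the proof) and verifies Ah r = 2e:

```python
import numpy as np

# Tridiagonal matrix with entries (-1, 2, -1)/h^2 (assumed form of A_h from
# (1.2.4)); check the identity A_h r = 2e from the proof. n is illustrative.
n = 25
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
r = 0.25 - (x - 0.5) ** 2        # r_i = p(x_i), p(x) = 1/4 - (x - 1/2)^2
print(np.max(np.abs(A @ r - 2.0)))   # ~ 0: A_h r equals 2e
```

The boundary rows also give exactly 2 because p(0) = p(1) = 0.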
Because Ah is an M-matrix, Ah is regular, which implies that the numerical
solution always exists. Moreover, the estimate
‖Ah⁻¹‖∞ ≤ 1/8
holds. Thus the numerical method is stable in the maximum norm. The global error
has the same order of magnitude as the local truncation error, that is, ‖zh‖∞ =
O(h²); thus the finite difference method is convergent.
1.2.4. Exercise. Prove the stability of the one-dimensional boundary value problem
in the Euclidean norm. (Hint: Because Ah⁻¹ is symmetric, its Euclidean norm is
equal to its spectral radius. The eigenvalues of the matrix Ah can be written in the
form λi = (2/h²)(1 − cos(iπh)), (i = 1, ..., n).) □
1.2.5. Exercise. Solve the problem with boundary conditions
u′(0) = µ3, u(1) = µ2
with the finite difference method. Approximate the derivative condition on the left-hand
side with the one-sided expression
(u1 − u0)/h = µ3,
or, alternatively, apply the equations
(u−1 − 2u0 + u1)/h² + f(x0) = 0, (u1 − u−1)/(2h) = µ3.
Prove the convergence of the above two methods. Compare the orders of the two
global errors. □
solution ûi = u(Pi ) at a grid point Pi by ui , and replacing the second derivatives
with centered finite differences we obtain the system of linear equations
(ui−x − 2ui + ui+x)/h1² + (ui−y − 2ui + ui+y)/h2² + f(Pi) = 0, i = 1, ..., N. (1.3.6)
Here we used the notation Pi+x for the next grid point in the positive x-direction and
the notation Pi−x for the one in the negative x-direction; Pi−y and Pi+y are defined
similarly (see the five-point stencil in Figure 1.3.1). Multiplying both sides by (−1),
we obtain the linear system Ãh ũh = f̃h,
Figure 1.3.1: The five-point stencil: the point Pi and its four neighbours Pi−x, Pi+x, Pi−y, Pi+y, with mesh sizes h1 and h2.
where
f̃h = [f(P1), ..., f(PN)]ᵀ, ũh = [u1, ..., uN̄]ᵀ,
and the matrix Ãh is an N × N̄ sparse matrix whose i-th row contains the elements
−1/h1², −1/h2² and 2/h1² + 2/h2² in the columns corresponding to the points Pi−x
and Pi+x, Pi−y and Pi+y, and Pi, respectively. Considering the relations ui = g(Pi)
(i = N + 1, ..., N + N∂), we have only N unknown values: u1, ..., uN. Splitting the
matrix Ãh in the form Ãh = [Ah | A∂] (Ah ∈ ℝ^(N×N), A∂ ∈ ℝ^(N×N∂)) and the
vector ũh as ũh = [uhᵀ | ghᵀ]ᵀ (uh = [u1, ..., uN]ᵀ ∈ ℝ^N, gh = [g(PN+1), ..., g(PN̄)]ᵀ ∈ ℝ^(N∂)), we
have the equation
Ah uh = fh := f̃h − A∂ gh.
This equation has a form similar to the one in the one-dimensional case. The
elements of the vector fh and the matrix Ah are known. In order to obtain the
numerical solution we have to compute the vector uh, which consists of the
approximating values of the true solution at the mesh points. Convergence means in this
case that ‖zh‖ → 0 as h := max{h1, h2} → 0. Our goal is to prove the convergence,
which is implied by consistency and stability.
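The splitting fh := f̃h − A∂ gh can be sketched in code. The example below (sizes are illustrative choices) solves the Laplace equation (f = 0) on the unit square with the boundary data g(x, y) = x + y; since this function is linear, the five-point scheme reproduces it exactly, which makes the check easy:

```python
import numpy as np

# Splitting sketch for the Laplace equation (f = 0) on the unit square with
# boundary data g(x, y) = x + y; the five-point scheme is exact for linear g.
n = 15
h = 1.0 / (n + 1)
g = lambda X, Y: X + Y
xi = np.linspace(0.0, 1.0, n + 2)
X, Y = np.meshgrid(xi, xi, indexing="ij")
U = g(X, Y)                    # frame of the array holds the boundary data g_h
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))   # A_h for h1 = h2 = h
rhs = np.zeros((n, n))         # f~_h = 0; now add the boundary contributions
rhs[0, :] += U[0, 1:-1] / h**2     # -A_boundary * g_h, edge by edge
rhs[-1, :] += U[-1, 1:-1] / h**2
rhs[:, 0] += U[1:-1, 0] / h**2
rhs[:, -1] += U[1:-1, -1] / h**2
u = np.linalg.solve(A, rhs.ravel())
print(np.max(np.abs(u - g(X, Y)[1:-1, 1:-1].ravel())))  # ~ 0: exact for linear g
```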
1.3.3 Consistency
We have to calculate the local truncation error wh = fh − Ah ûh . Assuming the
solution to be sufficiently smooth, we obtain with Taylor series expansion that
wi = (h1²/12) ∂⁴u/∂x⁴(Pi) + (h2²/12) ∂⁴u/∂y⁴(Pi) + O(h1⁴) + O(h2⁴) = O(h²), (i = 1, ..., N).
the estimate
‖Ah⁻¹‖∞ ≤ (a² + b²)/16
is true. For the global error we have ‖zh‖∞ ≤ ((a² + b²)/16)‖wh‖∞ = O(h²), that is, the
method is convergent with second order.
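The second order convergence can be illustrated numerically. The exact solution sin(πx) sin(πy), the grid sizes, and the choice h1 = h2 = h below are illustrative:

```python
import numpy as np

# Five-point scheme for -Laplace(u) = f on the unit square, u = 0 on the
# boundary; exact solution sin(pi x) sin(pi y) and grid sizes are illustrative.
def error_2d(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))   # A_h for h1 = h2 = h
    X, Y = np.meshgrid(x, x, indexing="ij")
    exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = 2.0 * np.pi**2 * exact
    u = np.linalg.solve(A, f.ravel())
    return np.max(np.abs(u - exact.ravel()))            # ||z_h||_inf

print(error_2d(10) / error_2d(20))   # roughly 4: halving h quarters the error
```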
1.3.1. Exercise. Apply the two types of ordering (rowwise and chessboard)
depicted in Figure 1.3.2 to number the inner mesh points of the mesh. Sketch
the structure of the matrix Ah in both cases. □
1.3.2. Exercise. Prove the stability of the finite difference solution of the two-dimensional
Poisson equation using the Euclidean norm. For simplicity suppose
that h1 = h2 = h and n1 = n2 = n. □
Figure 1.3.2: The rowwise and chessboard ordering of the grid points.
Chapter 2
∂²u/∂t² = a² ∂²u/∂x² + f, x ∈ (0, 1), a > 0, (2.1.2)
u(0, x) = u0(x),
u(t, 0) = µ1(t), u(t, 1) = µ2(t).
The solutions of the above equations are waves: u = u(t, x) gives the amplitude
of the wave at time instant t and spatial coordinate x, and f = f(t, x)
describes the density of the outer forces. Because the stability of a difference scheme is
usually independent of the source term f, we consider the above equations supposing
that f is equal to zero.
If f = 0, then (2.1.1) has a solution u(t, x) = u0 (x − at) (we suppose that u0 is
sufficiently smooth). As time evolves, the initial data simply propagates unchanged
to the right (a > 0) or to the left (a < 0). This is why we need a boundary condition
only at one of the ends of the interval. Equation (2.1.2) can be written as
(∂/∂t + a ∂/∂x)(∂u/∂t − a ∂u/∂x) = 0,
which shows that the solution is a superposition of two waves, one of the form
φ(x − at) and one of the form ψ(x + at). The first one propagates to the right, and
the second one to the left.
For the sake of further simplification we apply the so-called periodic boundary
condition, defined as u(t, 0) = u(t, 1). With this choice we need to discretize only the
differential equation, and not the boundary conditions.
u_j^{k+1} = u_j^k − (q/2)(u_{j+1}^k − u_{j−1}^k). (2.2.4)
In vector form this reads
u_h^{k+1} = A_h u_h^k,
where u_h^k = [u_1^k, ..., u_n^k]ᵀ and A_h is the sum of the identity matrix and a
skew-symmetric matrix,

A_h = [   1    −q/2    0    ...    0    q/2
         q/2     1   −q/2    0    ...    0
          0    q/2     1   −q/2   ...    0        (2.2.5)
                       ...
          0    ...    q/2     1   −q/2
        −q/2    0     ...     0    q/2    1  ].
Definition of convergence
We say that the numerical method is convergent if |u_j^k − û_j^k| → 0 as τ, h → 0, where
the point (t, x) = (kτ, jh) is fixed (k, j → ∞). That is, the method is convergent
if the numerical solution at a fixed point tends to the true solution as the mesh is
refined. For simplicity we suppose that τ and h tend to zero keeping q = aτ/h
constant.
Introducing the global error at the k-th time level as z_h^k = u_h^k − û_h^k, and the local
approximation error as
w_h^k = (1/τ)(û_h^{k+1} − A_h û_h^k),
we have
z_h^{k+1} = A_h z_h^k − τ w_h^k.
Thus the global error at the k-th time level can be written in the form
z_h^k = A_h^k z_h^0 − τ Σ_{l=1}^{k} A_h^{k−l} w_h^{l−1},
Convergence in Euclidean norm
We would like to show the uniform boundedness of A_h in the Euclidean norm. It is
easy to see that the eigenvalues of the matrix are
λ_l = 1 − qi sin(2πlh) = 1 − (τa/h) i sin(2πlh), i = √−1, (l = 1, ..., n).
This can be shown based on the observation that the matrices appearing in the
standard finite difference methods have eigenvectors of the form u_h = [u_1, ..., u_n]ᵀ,
where u_j = exp(i2πljh) with l = 1, ..., n. Inserting u_j = exp(i2πljh)
into (2.2.4) we have that
A_h u_h = (1 − qi sin(2πlh)) u_h,
that is, the l-th eigenvalue of A_h is indeed λ_l. The matrix A_h is the sum of the
unit matrix I (all eigenvalues are equal to one) and a skew-symmetric matrix, whose
eigenvalues are purely imaginary and have the form −(τa/h) i sin(2πlh). Thus the
square of the Euclidean norm of A_h^k is
‖A_h^k‖₂² = (ρ(A_hᵀ A_h))^k = (max_l {1 + (τ²a²/h²) sin²(2πlh)})^k ≈ (1 + q²)^k → ∞
(ρ(·) denotes the spectral radius of the matrix), which shows that this method is
not stable, thus it is not convergent either.
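The eigenvalue computation can be verified directly by building the matrix (2.2.5) and applying it to the claimed eigenvector; n, q, and l below are illustrative choices:

```python
import numpy as np

# Build the matrix (2.2.5) and check that u_j = exp(i*2*pi*l*j*h) is an
# eigenvector with eigenvalue 1 - q*i*sin(2*pi*l*h); n, q, l are illustrative.
n, q, l = 16, 0.7, 3
h = 1.0 / n
A = np.eye(n) + (q / 2.0) * (np.eye(n, k=-1) - np.eye(n, k=1))
A[0, -1], A[-1, 0] = q / 2.0, -q / 2.0      # periodic wrap-around entries
j = np.arange(n)
v = np.exp(1j * 2.0 * np.pi * l * j * h)
lam = 1.0 - 1j * q * np.sin(2.0 * np.pi * l * h)
print(np.max(np.abs(A @ v - lam * v)))      # ~ 0
```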
The eigenvalue λ_l is also called the amplification factor or growth factor (so-called
von Neumann analysis). The growth factor simply expresses the growth of the
"amplitude" of the eigenvector between two time levels. It is usually denoted by g(θ)
with θ = 2πlh, and because max_θ |g(θ)| gives the Euclidean norm of A_h, we have the
same stability conditions as for Lax-Richtmyer stability. Thus stability can be
guaranteed by satisfying the relation |g(θ)| ≤ 1 or the weaker condition |g(θ)| ≤ 1 + c1τ.
If |g(θ)| = c2 = constant > 1, then the method is unstable.
For the finite difference method investigated above the growth factor is
g(θ) = 1 − qi sin θ, thus |g(θ)|² = 1 + q² sin²θ > 1 for sin θ ≠ 0, that is, the method
is unstable.
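The instability shows up in a short experiment with scheme (2.2.4); the parameters are illustrative choices, and np.roll realizes the periodic boundary condition:

```python
import numpy as np

# Scheme (2.2.4) with periodic boundary condition; n, q and the step count
# are illustrative choices.
n, q = 100, 0.5
x = np.arange(n) / n
u = np.sin(2.0 * np.pi * x)          # one smooth Fourier mode
norm0 = np.max(np.abs(u))
for _ in range(200):
    u = u - (q / 2.0) * (np.roll(u, -1) - np.roll(u, 1))
print(np.max(np.abs(u)) / norm0)     # > 1: the amplitude grows at every step
```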
Our first attempt to solve the one-way wave equation with the finite difference method
resulted in a non-convergent method, thus this method is not applicable in practice.
Therefore,
|g(θ)|² = cos²θ + q² sin²θ = 1 + (q² − 1) sin²θ,
and we can conclude that |g(θ)| ≤ 1 if q² − 1 ≤ 0, which implies that
|a|τ/h ≤ 1,
which is called the Courant-Friedrichs-Lewy (shortly CFL) condition. Although the
method is convergent for values |q| ≤ 1, it is not popular because of its slow
convergence.
2.2.2. Exercise. Prove that the local truncation error of the Lax-Friedrichs
method is O(τ + h) and verify the expression for the growth factor! □
Therefore,
|g(θ)|² = 1 − 4(1 − q)q sin²(θ/2),
and we can conclude the CFL condition 0 ≤ q = aτ/h ≤ 1, under which the upwind
scheme is convergent. This scheme is again of order O(τ + h). In the next sections
we discuss some higher order schemes.
2.2.3. Exercise. Prove that the local truncation error of the upwind scheme is
O(τ + h) and verify the expression for the growth factor! □
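The upwind scheme is likewise not written out in this excerpt; the sketch below assumes its standard form for a > 0, u_j^{k+1} = u_j^k − q(u_j^k − u_{j−1}^k), consistent with the growth factor above, with illustrative parameters and periodic boundary condition:

```python
import numpy as np

# Standard upwind form for a > 0 (an assumption, see lead-in); periodic
# boundary condition via np.roll; n, q and the step count are illustrative.
n, q, steps = 200, 0.5, 200
x = np.arange(n) / n
u0 = np.exp(-100.0 * (x - 0.5) ** 2)   # smooth pulse as initial profile
u = u0.copy()
for _ in range(steps):
    u = u - q * (u - np.roll(u, 1))
shift = int(steps * q) % n             # the pulse travels steps*q grid cells
print(np.max(np.abs(u - np.roll(u0, shift))))
```

The pulse arrives at the right place, but its peak is slightly flattened: the O(τ + h) accuracy manifests as numerical diffusion.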
(u_j^{k+1} − u_j^{k−1})/(2τ) + a (u_{j+1}^k − u_{j−1}^k)/(2h) = 0.
We can notice that three time levels are involved, and we need the u_j^1 values to
get started with the method.