
Lecture Note Sketches

Spectral Methods for Partial Differential Equations


Hermann Riecke
Engineering Sciences and Applied Mathematics
h-riecke@northwestern.edu
June 3, 2009
Contents
1 Motivation and Introduction 8
1.1 Review of Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2 Approximation of Functions by Fourier Series 12
2.1 Convergence of Spectral Projection . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 The Gibbs Phenomenon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Discrete Fourier Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.1 Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.3.2 Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3 Fourier Methods for PDE: Continuous Time 34
3.1 Pseudo-spectral Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2 Galerkin Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4 Temporal Discretization 38
4.1 Review of Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2 Adams-Bashforth Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3 Adams-Moulton-Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.4 Semi-Implicit Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.5 Runge-Kutta Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.6 Operator Splitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.7 Exponential Time Differencing and Integrating Factor Scheme . . . . . . . . 50
4.8 Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5 Chebyshev Polynomials 58
5.1 Cosine Series and Chebyshev Expansion . . . . . . . . . . . . . . . . . . . . . . 58
5.2 Chebyshev Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
6 Chebyshev Approximation 64
6.1 Galerkin Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
6.2 Pseudo-Spectral Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.2.1 Implementation of Fast Transform . . . . . . . . . . . . . . . . . . . . . 69
6.3 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.3.1 Implementation of Pseudospectral Algorithm for Derivatives . . . . . . 74
7 Initial-Boundary-Value Problems: Pseudo-spectral Method 77
7.1 Brief Review of Boundary-Value Problems . . . . . . . . . . . . . . . . . . . . . 78
7.1.1 Hyperbolic Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
7.1.2 Parabolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.2 Pseudospectral Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.3 Spectra of Modified Differentiation Matrices . . . . . . . . . . . . . . . . . . 80
7.3.1 Wave Equation: First Derivative . . . . . . . . . . . . . . . . . . . . . . 81
7.3.2 Diffusion Equation: Second Derivative . . . . . . . . . . . . . . . . . . . 82
7.4 Discussion of Time-Stepping Methods for Chebyshev . . . . . . . . . . . . . . 84
7.4.1 Adams-Bashforth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
7.4.2 Adams-Moulton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.4.3 Backward-Difference Schemes . . . . . . . . . . . . . . . . . . . . . . . 86
7.4.4 Runge-Kutta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
7.4.5 Semi-Implicit Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
7.4.6 Exponential Time-Differencing . . . . . . . . . . . . . . . . . . . . . . . 88
8 Initial-Boundary-Value Problems: Galerkin Method 90
8.1 Review Fourier Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
8.2 Chebyshev Galerkin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
8.2.1 Modification of Set of Basis Functions . . . . . . . . . . . . . . . . . . . 92
8.2.2 Chebyshev Tau-Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
9 Iterative Methods for Implicit Schemes 96
9.1 Simple Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
9.2 Richardson Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
9.3 Preconditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
9.3.1 Periodic Boundary Conditions: Fourier . . . . . . . . . . . . . . . . . . . 100
9.3.2 Non-Periodic Boundary Conditions: Chebyshev . . . . . . . . . . . . . . 102
9.3.3 First Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
10 Spectral Methods and Sturm-Liouville Problems 105
11 Spectral Methods for Incompressible Fluid Dynamics 109
11.1 Coupled Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
11.2 Operator-Splitting Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
A Insertion: Testing of Codes 117
B Details on Integrating Factor Scheme IFRK4 117
C Chebyshev Example: Directional Sensing in Chemotaxis 120
D Background for Homework: Transitions in Reaction-Diffusion Systems 121
E Background for Homework: Pulsating Combustion Fronts 124
Index
2/3-rule, 56
absolute stability, 39
absolutely stable, 39
Adams-Bashforth, 40, 45
Adams-Moulton, 43, 45
adaptive grid, 57
aliasing, 29, 38, 54, 69
Arrhenius law, 26
basis, 11
bounded variation, 15
Brillouin zone, 29
Burgers equation, 36, 37
Cauchy-Schwarz, 19
Chebyshev Expansion, 60
Chebyshev Polynomials, 61
Chebyshev round-off error, 77
chemotaxis, 120
cluster, 62
CNAB, 53, 118
Complete, 13
Completeness, 13
completeness, 12
continuous, 17
Convergence Rate, 19
cosine series, 59
Crank-Nicholson, 44, 121
Decay Rate, 18
diagonalization, 40
differentiation matrix, 77
Diffusion equation, 35
diffusive scaling, 42
discontinuities, 17
effective exponent, 21
exponential time differencing scheme, 51
FFT, 69, 75
Filtering, 53
Fourier interpolant, 29
Gauss-Lobatto integration, 67
Gibbs, 36
Gibbs Oscillations, 57
Gibbs Phenomenon, 22
Gibbs phenomenon, 59
global, 8
Heun's method, 47
hyperbolic, 39
Improved Euler method, 47
infinite-order accuracy, 20
integrating-factor scheme, 50
integration by parts, 18
interpolates, 40
interpolation, 43
Interpolation error, 30
Interpolation property, 28
Lagrange polynomial, 76
linear transformation, 11
Matrix multiplication method, 32
matrix-multiply, 75
membrane, 120
Method, 34
modified Euler, 47
natural ordering, 71
Neumann stability, 39
Newton iteration, 121
Newton method, 38
non-uniform convergence, 16
normal, 40
numerical artifacts, 57
Operator Splitting, 49
operator splitting error, 49
Orszag, 56
Orthogonal, 13
overshoot, 25
pad, 56
parabolic, 39
Parseval identity, 14
piecewise continuous, 22
pinning, 38
pointwise, 25
predictor-corrector, 45
projection, 11, 13
projection error, 30
projection integral, 65
recursion, 71
recursion relation, 61
Runge-Kutta, 46
scalar product, 10
Schwartz inequality, 10, 14
Semi-Implicit, 46
semi-implicit, 121
shock, 53
shocks, 36
Simpson's rule, 48
singularity, 18
smoothing, 57
Spectral Accuracy, 20
spectral accuracy, 26, 65
Spectral Approximation, 20
spectral blocking, 54
spectral projection, 13, 15
stable, 39
stages, 47
strip of analyticity, 20
Sturm-Liouville problem, 63
total variation, 15
Transform method, 31
trapezoidal rule, 26, 66
turbulence, 53
two-dimensional, 45
unconditionally stable, 44
unique, 47
Variable coefficients, 35
variable wave speed, 45
Variable-coefficient, 37
vector space, 10
weight, 62
weighted scalar product, 62
References
[1] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang. Spectral methods in fluid dynamics. Springer, 1987.
[2] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang. Spectral methods: fundamentals in single domains. Springer, 2006.
[3] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang. Spectral methods: evolution to complex geometries and applications to fluid dynamics. Springer, 2007.
[4] M. Charalambides and F. Waleffe. Gegenbauer tau methods with and without spurious eigenvalues. SIAM J. Num. Anal., 47(1):48–68, 2008.
[5] D. Gottlieb and S. A. Orszag. Numerical Analysis of Spectral Methods: Theory and
Applications. 1977.
[6] R. A. Horn and C. R. Johnson. Topics in matrix analysis. Cambridge University Press,
1994.
[7] A.-K. Kassam and L. N. Trefethen. Fourth-order time-stepping for stiff PDEs. SIAM J. Sci. Comput., 26:1214, 2005.
[8] B. J. Matkowsky and D. O. Olagunju. Propagation of a pulsating flame front in a gaseous combustible mixture. SIAM J. Appl. Math., 39(2):290–300, 1980.
[9] A. Palacios, G. H. Gunaratne, M. Gorman, and K. A. Robbins. Cellular pattern formation in circular domains. Chaos, 7(3):463–475, September 1997.
1 Motivation and Introduction
Central step when solving partial differential equations: approximate derivatives in space
and time. Focus here on spatial derivatives.
Finite difference approximation of (spatial) derivatives:
Accuracy depends on the order of the approximation, i.e. on the number of grid points involved in the computation (width of the stencil)
For higher accuracy use a higher-order approximation
⇒ use more points to calculate derivatives
⇒ the function is approximated locally by polynomials of increasing order
To get maximal order use all points in the system
⇒ approximate the function globally by polynomials
More generally:
approximate the function by suitable global functions f_k(x):

u(x) = Σ_{k=1}^{∞} u_k f_k(x)

f_k(x) need not be polynomials
calculate the derivative of f_k(x) analytically: exact
⇒ error completely in the expansion
Notes:
For smooth functions the order of the approximation of the derivative is higher than
any power.
high derivatives not problematic
Figure 1: a) finite differences: local approximation. Unknowns: values u_1, u_2, ..., u_N at the grid points. b) spectral: global approximation. Unknowns: Fourier amplitudes.
Note: in pseudo-spectral methods again values at grid points used although expanded in a
set of global functions
Thus:
Study approximation of functions by sets of other functions
Impact of spectral approach on treatment of temporal evolution
We will use Fourier modes and Chebyshev polynomials.
Recommended books (for reference)
Spectral Methods by C. Canuto, M.Y. Hussaini, A. Quarteroni, and T.A. Zang, Springer.
They have written three books. [1, 2, 3]. The two new ones are not expensive.
Spectral Methods in MATLAB by L.N. Trefethen, SIAM, ISBN 0898714656. Not expensive.
Chebyshev and Fourier Spectral Methods by J.P. Boyd, Dover (2001). Available online
at http://www-personal.umich.edu/~jpboyd/BOOK_Spectral2000.html and
is also not expensive to buy.
1.1 Review of Linear Algebra
Motivation: Functions can be considered as vectors
⇒ consider approximation of vectors by other vectors
Definition: V is a real (complex) vector space if for all u, v ∈ V and all α, β ∈ R (C)

αu + βv ∈ V

Examples:
a) R³ = {(x, y, z) | x, y, z ∈ R} is a real vector space
b) Cⁿ is a complex vector space
c) all continuous functions form a vector space:
αf(x) + βg(x) is a continuous function if f(x) and g(x) are
d) The space V = {f(x) | continuous, 0 ≤ x ≤ L, f(0) = a, f(L) = b} is only a vector space for a = 0 = b. Why?
Definition: For a vector space V, ⟨·, ·⟩ : V × V → C is called a scalar product or inner product iff

⟨u, v⟩ = ⟨v, u⟩*
⟨αu + βv, w⟩ = α* ⟨u, w⟩ + β* ⟨v, w⟩,   α, β ∈ C
⟨u, u⟩ ≥ 0
⟨u, u⟩ = 0 ⇔ u = 0.

Notes:
⟨u, v⟩ is often written as u⁺v, with u⁺ denoting the transpose and complex conjugate of u:
v is a column vector, u⁺ is a row vector
Examples:
a) in R³: ⟨u, v⟩ = Σ_{i=1}^{3} u_i v_i is a scalar product
b) in L₂ ≡ {f(x) | ∫ |f(x)|² dx < ∞}:

⟨u, v⟩ = ∫ u*(x) v(x) dx

is a scalar product.
Notes:
u(x) can be considered the x-th component of the abstract vector u.
⟨u, u⟩ ≡ ||u||² defines a norm.
the scalar product satisfies the Cauchy-Schwarz inequality

|⟨u, v⟩| ≤ ||u|| ||v||

(since the cosine of the angle between the vectors is smaller than 1)
Definition: The set {v_1, ..., v_N} is called an orthonormal complete set (or basis) of V if any vector u ∈ V can be written as

u = Σ_{k=1}^{N} u_k v_k,   with   v_i⁺ v_j ≡ ⟨v_i, v_j⟩ = δ_ij.

Calculate the coefficients u_j:

⟨v_j, u⟩ = Σ_k u_k ⟨v_j, v_k⟩ = Σ_k u_k δ_kj = u_j

Example: projections in R²

u_1 v_1 = ⟨v_1, u⟩ v_1 is the projection of u onto v_1.
Projections take one vector and transform it into another vector:
Definition: L : V → V is called a linear transformation iff

L(αv + βw) = αLv + βLw

Definition: A linear transformation P : V → V is called a projection iff

P² = P

Examples:
1. P_v = N⁻¹ v v⁺ with N = v⁺v is a projection onto v:

P_v u = v (v⁺u)/(v⁺v)

P_v² u = v (v⁺v)⁻¹ v⁺ ( v (v⁺u)/(v⁺v) ) = v (v⁺u)/(v⁺v) = P_v u
Notes:
v can be thought of as a column vector and v⁺ a row vector
v⁺v is a scalar while v v⁺ is a projection operator
v⁺u/v⁺v is the length of the projection of u onto v
2. Let {v_i, i = 1..N} be a complete orthonormal set

u = Σ_{k=1}^{N} (v_k⁺ u) v_k = ( Σ_{k=1}^{N} v_k v_k⁺ ) u

thus we have

Σ_{k=1}^{N} v_k v_k⁺ = I

i.e. the sum over all projections onto a complete set yields the identity transformation:
completeness of the set {v_k}
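The completeness relation above is easy to check numerically. The following is a minimal sketch (not part of the original notes; names are illustrative): build an orthonormal basis of Cᴺ via a QR factorization and verify that the projections onto its columns sum to the identity.

```python
import numpy as np

# Verify sum_k v_k v_k^+ = I for an orthonormal complete set {v_k} of C^N.
N = 4
rng = np.random.default_rng(0)
# Orthonormalize a random complex matrix via QR; its columns are the basis v_k.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
# Sum of the rank-one projection operators v_k v_k^+.
P = sum(np.outer(Q[:, k], Q[:, k].conj()) for k in range(N))
assert np.allclose(P, np.eye(N))  # completeness: the projections sum to I
```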
3. A linear transformation L can be represented by a matrix:

(Lu)_i = v_i⁺ L Σ_{j=1}^{N} u_j v_j = Σ_j (v_i⁺ L v_j) u_j = Σ_j L_ij u_j   with   L_ij = v_i⁺ L v_j

The identity transformation is given by the matrix

I_ij = v_i⁺ ( Σ_k v_k v_k⁺ ) v_j = Σ_k δ_ik δ_kj = δ_ij

can write this also as

I_ij = Σ_k (v_i⁺ v_k) (v_j⁺ v_k)⁺   (1)

where v_i⁺ v_k is the i-th component of v_k and (v_j⁺ v_k)⁺ is the complex conjugate of the j-th component of v_k.

Note: The matrix elements L_ij depend on the choice of the basis

Getting back to functions: Vector spaces formed by functions often cannot be spanned by a finite number of vectors, i.e. no finite set {v_1, ..., v_N} suffices ⇒ need to consider sequences and series of vectors. We will not dwell on this sophistication.
2 Approximation of Functions by Fourier Series
Periodic boundary conditions are well suited to study phenomena that are not dominated
by boundaries. For periodic functions it is natural to attempt approximations by Fourier
series.
Consider the set of functions {φ_k(x) = e^{ikx} | k ∈ Z}. It forms a complete orthogonal set of L₂[0, 2π].
1. Orthogonal:

φ_k⁺ φ_l ≡ ⟨φ_k, φ_l⟩ = ∫_0^{2π} (e^{ikx})* e^{ilx} dx = 2π δ_lk

as before, e^{ikx} is the x-th component of φ_k
2. Complete:
for any u(x) ∈ L₂[0, 2π] there exist {u_k | k ∈ Z} with

lim_{N→∞} || u(x) − Σ_{k=−N}^{N} u_k φ_k(x) ||² = 0

i.e.

lim_{N→∞} ∫_0^{2π} | u(x) − Σ_{k=−N}^{N} u_k e^{ikx} |² dx = 0

with the Fourier components given by

u_k = (1/2π) ⟨φ_k, u⟩ = (1/2π) ∫_0^{2π} e^{−ikx} u(x) dx
Note:
Completeness Σ_{k=1}^{N} v_k v_k⁺ = I (cf. (1)) implies

lim_{N→∞} Σ_{|k|=0}^{N} φ_k(x) φ_k⁺(x′) = lim_{N→∞} Σ_{|k|=0}^{N} e^{ik(x−x′)} = 2π Σ_{l=−∞}^{∞} δ(x − x′ + 2πl).   (2)
Definition: The spectral projection P_N u(x) of u(x) is defined as

P_N u(x) = Σ_{|k|=0}^{N} u_k φ_k(x).

Thus,

lim_{N→∞} || u(x) − P_N u(x) ||² = 0.

Notes:
P_N is a projection, i.e. P_N² = P_N (see homework)
P_N projects u(x) onto the subspace of the lowest 2N + 1 Fourier modes
|| P_N u(x) ||² = 2π Σ_{|k|=0}^{N} |u_k|² :

|| P_N u(x) ||² = ⟨P_N u, P_N u⟩
 = ⟨ Σ_{|k|=0}^{N} u_k φ_k(x), Σ_{|l|=0}^{N} u_l φ_l(x) ⟩
 = Σ_{kl} u_k* u_l ⟨φ_k(x), φ_l(x)⟩
 = Σ_{kl} u_k* u_l 2π δ_kl
 = 2π Σ_{|k|=0}^{N} |u_k|².
Parseval identity extends this to the limit N → ∞:

||u||² = lim_{N→∞} || P_N u ||² = lim_{N→∞} 2π Σ_{|k|=0}^{N} |u_k|²

i.e. the L₂-norm of a vector is given by the sum of the squares of its components for any orthonormal complete set. Thus, as more components are included the retained energy approaches the full energy.
Proof: we have

lim_{N→∞} || u(x) − P_N u(x) ||² = 0

and want to conclude ||u(x)||² = lim_{N→∞} || P_N u(x) ||².
Consider

( ||u|| − ||v|| )² = ||u||² + ||v||² − 2 ||u|| ||v|| ≤ ||u||² + ||v||² − 2 |⟨u, v⟩|

using the Schwarz inequality |⟨u, v⟩| ≤ ||u|| ||v|| (a projection is smaller than the whole vector).
Now use 2 |⟨u, v⟩| ≥ 2 Re(⟨u, v⟩) = ⟨u, v⟩ + ⟨v, u⟩ (note ⟨u, v⟩ is in general complex).
Then

( ||u|| − ||v|| )² ≤ ||u||² + ||v||² − ⟨u, v⟩ − ⟨v, u⟩ = ⟨u − v, u − v⟩ = ||u − v||².

Get the Parseval identity with v = P_N u.
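The Parseval identity can be illustrated numerically. A small sketch (not part of the original notes; for the discrete transform on a grid the identity even holds exactly, which is what the assertion below exploits):

```python
import numpy as np

# Check ||u||^2 = 2*pi * sum_k |u_k|^2 for a smooth 2*pi-periodic function,
# approximating the integral by the (here spectrally accurate) trapezoid sum.
M = 2048
x = 2 * np.pi * np.arange(M) / M
u = np.exp(np.sin(x))                             # smooth periodic test function
uk = np.fft.fft(u) / M                            # Fourier coefficients u_k
norm2 = 2 * np.pi * np.sum(np.abs(u) ** 2) / M    # ||u||^2 from grid values
parseval = 2 * np.pi * np.sum(np.abs(uk) ** 2)    # 2*pi * sum_k |u_k|^2
assert abs(norm2 - parseval) < 1e-10
```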
2.1 Convergence of Spectral Projection
Convergence of Fourier series depends strongly on the function to be approximated
The highest wavenumber needed to approximate a function well surely depends on the
number of wiggles of that function.
Definition: The total variation V(u) of a function u(x) on [0, 2π] is defined as

V(u) = sup_n sup_{0 = x_0 < x_1 < ... < x_n = 2π} Σ_{i=1}^{n} |u(x_i) − u(x_{i−1})|

Notes:
the supremum is defined as the lowest upper bound
for the supremum one needs to consider only x_i at extrema
Examples:
1. u(x) = sin x on [0, 2π] has V(u) = 4
2. the variation of u(x) = sin(1/x) is unbounded on (0, 2π].
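Example 1 can be checked numerically. A minimal sketch (not part of the original notes): sum the increments |u(x_i) − u(x_{i−1})| over a fine partition; for a piecewise monotone function this approaches the total variation, here 4.

```python
import numpy as np

# Estimate the total variation of u(x) = sin(x) on [0, 2*pi] from a fine partition.
x = np.linspace(0.0, 2 * np.pi, 100001)
u = np.sin(x)
tv = np.sum(np.abs(np.diff(u)))   # sum of |u(x_i) - u(x_{i-1})|
assert abs(tv - 4.0) < 1e-6       # V(sin) = 4 on [0, 2*pi]
```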
Results: One has for the spectral projection:
1. u(x) continuous, periodic and of bounded variation
⇒ P_N u converges uniformly and pointwise to u:

lim_{N→∞} max_{x∈[0,2π]} | u(x) − Σ_{|k|=0}^{N} e^{ikx} u_k | = 0

Notes:
example for uniform and non-uniform convergence: consider u(x) = a/x:
on [1, 2]: lim_{a→0} u(x) = 0 converges uniformly,

max_{x∈[1,2]} |a/x| = a → 0

on (0, 1): lim_{a→0} u(x) = 0 converges, but not uniformly:

max_{x∈(0,1)} |a/x| does not exist,   sup_{x∈(0,1)} |a/x| = ∞
Thus:
uniform convergence of the Fourier approximation ⇔ there is an upper bound for the error along the whole function (upper bound on the global error).
2. u(x) of bounded variation
⇒ P_N u converges pointwise to ½ (u₊(x) + u₋(x)) for any x ∈ [0, 2π], where at discontinuities u±(x) = u(x ± ε), ε → 0
Note: even if u(x) is discontinuous, P_N u(x) is always continuous for finite N

Figure 2: The spectral approximation is continuous even if the function to be approximated is discontinuous.
3. For u(x) ∈ L₂ the projection P_N u converges in the mean,

lim_{N→∞} ∫ | u(x) − Σ_k u_k φ_k(x) |² dx = 0

but possibly u(x_0) ≠ lim_{N→∞} P_N u(x_0) at isolated values of x_0, i.e. pointwise convergence except for possibly a set of measure 0 (consisting of discontinuities and square-integrable singularities)
4. u(x) continuous and periodic: P_N u need not necessarily converge for all x ∈ [0, 2π]
Note: What could go wrong? Are there functions that are periodic and continuous but have unbounded variation?
consider u(x) = x sin(1/x) on [−1/π, 1/π] (note sin(1/x) is not defined at x = 0)
u(x) is continuous: lim_{x→0} x sin(1/x) = 0
u(x) is periodic on [−1/π, 1/π]
u(x) is not differentiable at x = 0: u′(x) = sin(1/x) − (1/x) cos(1/x)
Decay Rate of Coefficients:
The error ||u − P_N u||² = 2π Σ_{|k|>N} |u_k|² is determined by the u_k for |k| > N (cf. Parseval identity).
Question: how fast does the error decrease as N is increased?
consider u_k for k → ∞:

2π u_k = ⟨φ_k, u⟩ = ∫_0^{2π} e^{−ikx} u(x) dx
 = (i/k) e^{−ikx} u(x) |_0^{2π} − (i/k) ∫_0^{2π} e^{−ikx} (du/dx) dx
 = (i/k) ( u(2π⁻) − u(0⁺) ) − (i/k) ⟨φ_k, du/dx⟩
 = ...
 = (i/k) ( u(2π⁻) − u(0⁺) ) + ... + (−1)^{r−1} (i/k)^r ( d^{r−1}u/dx^{r−1} |_{2π⁻} − d^{r−1}u/dx^{r−1} |_{0⁺} ) + (−1)^r (i/k)^r ⟨φ_k, d^r u/dx^r⟩.

Use Cauchy-Schwarz, |⟨φ_k, d^r u/dx^r⟩| ≤ ||φ_k|| ||d^r u/dx^r||, as long as ||d^r u/dx^r|| < ∞ (using ||φ_k|| = √(2π)):

|u_k| ≤ (1/2πk) | u(2π⁻) − u(0⁺) | + ... + (1/2π) (1/k)^r | d^{r−1}u/dx^{r−1} |_{2π⁻} − d^{r−1}u/dx^{r−1} |_{0⁺} | + (1/(√(2π) k^r)) || d^r u/dx^r ||.
Thus:
for non-periodic functions

|u_k| = O( (1/k) ( u(2π⁻) − u(0⁺) ) )

for C^∞ functions whose derivatives are all periodic iterate the integration by parts indefinitely:

|u_k| ≤ (1/(√(2π) k^r)) || d^r u/dx^r ||   for any r ∈ N.

Decay in k faster than any power law. One can show that

u_k ∝ e^{−η|k|}

with 2η being given by the strip of analyticity of u(x) when extended to the complex plane (cf. Boyd, theorem 5, p.45).
Example:
With z ≡ x + iy consider u(z) = tanh(κ sin z) along the imaginary axis:

tanh(κ sin iy) = sinh(iκ sinh y) / cosh(iκ sinh y) = i sin(κ sinh y) / cos(κ sinh y)

has a first singularity at y_± with sinh y_± = ±π/(2κ). The strip of analyticity has width 2η = y₊ − y₋. The steeper u(x) at x = 0 the narrower the strip of analyticity and the slower the decay of the Fourier coefficients.
Cauchy-Schwarz estimate too soft: the iteration is possible as long as

| ⟨φ_k, d^r u/dx^r⟩ | < ∞

(i.e. d^r u/dx^r ∈ L₁, see e.g. Benedetto: Real Analysis). Thus

d^l u/dx^l periodic for 0 ≤ l ≤ r − 2  and  d^r u/dx^r ∈ L₁   ⇒   u_k = O(1/k^r)

Note:
only d^{r−2}u/dx^{r−2} has to be periodic because the boundary contribution of d^{r−1}u/dx^{r−1} is of the same order as that of the integral over d^r u/dx^r
Examples:
1. u(x) = (x − π)² is C^∞ in (0, 2π), but the derivative is not periodic:

u_k = (1/2π) ∫_0^{2π} e^{−ikx} (x − π)² dx = 2/k²

the origin of the only quadratic decay are the boundary terms:

u_k = −(i/2πk) ∫_0^{2π} e^{−ikx} (du/dx) dx = (1/2π) (1/k²) ( u′(2π⁻) − u′(0⁺) ) − (1/2π) (1/k²) ∫_0^{2π} e^{−ikx} u″(x) dx = 2/k²

since u′(2π⁻) = 2π = −u′(0⁺) and ∫_0^{2π} e^{−ikx} u″(x) dx = 0.

2. u(x) = x² θ(π − x) + (x − 2π)² θ(x − π), i.e. u = x² on [0, π] and u = (x − 2π)² on [π, 2π], should be similar:
periodic, but with a discontinuity of the derivative:
the 1st derivative has a jump, the 2nd derivative has a δ-function, the 3rd derivative involves the derivative of the δ-function: ⟨φ_k, δ′(x)⟩ = O(k).
Estimate Convergence Rate of Spectral Approximation
Consider the approximation for u(x):

E_N² ≡ || u − P_N u ||² = 2π Σ_{|k|>N} |u_k|² = 2π Σ_{|k|>N} |u_k|² |k|^{2r} / |k|^{2r} < (1/N^{2r}) 2π Σ_{|k|>N} |u_k|² |k|^{2r}

If d^r u/dx^r exists and is square-integrable then the sum converges and is bounded by the norm || d^r u/dx^r ||²:

2π Σ_{|k|>N} |u_k|² |k|^{2r} < 2π Σ_{|k|=0}^{∞} |k|^{2r} |u_k|² = || d^r u/dx^r ||²
Thus:

|| u − P_N u ||² ≤ (1/N^{2r}) || d^r u/dx^r ||².

For u(x) ∈ C^∞ with all derivatives periodic the inequality holds for any r

⇒ || u − P_N u ||² ≤ inf_r (1/N^{2r}) || d^r u/dx^r ||²   (3)

Notes:
The order of convergence depends on the smoothness of the function (highest square-integrable derivative)
For u(x) ∈ C^∞: u_k ∝ e^{−η|k|}
⇒ one gets convergence faster than any power: spectral or infinite-order accuracy:

|| u − P_N u ||² = 2π Σ_{|k|>N} |u_k|² ∝ e^{−2η(N+1)} Σ_{k=0}^{∞} ( e^{−2η} )^k = e^{−2η(N+1)} / (1 − e^{−2η}) = ( e^{−2η} / (1 − e^{−2η}) ) e^{−2ηN}

with 2η being the width of the strip of analyticity of u(x) when u(x) is continued analytically into the complex plane (cf. Trefethen Theorem 1c, p.30, Boyd theorem 5, p.45)
Spectral Approximation:
convergence becomes faster with increasing N
high-order convergence only for sufficiently large N
Finite-Difference Approximation:
order of convergence fixed
Effective exponent of convergence depends on N:
Note: in general

|| d^r u/dx^r ||² → ∞ faster than exponentially for r → ∞

Example:

|| d^r e^{iqx}/dx^r || = q^r || e^{iqx} ||

Thus, for a simple complex exponential || d^r e^{iqx}/dx^r || grows exponentially in r.
For functions that are not given by a finite number of Fourier modes the norm has to grow with r faster than exponentially:
show by contradiction: if || d^r u/dx^r ||² ≤ γ^{2r} then E_N² ≤ (γ/N)^{2r}. Can then pick a fixed N > γ to get

inf_r E_N = 0

⇒ the approximation is exact for finite N, in contradiction to the assumption.
Now consider

ln E_N² ≤ ln ( inf_r (1/N^{2r}) || d^r u/dx^r ||² ) = inf_r ( ln || d^r u/dx^r ||² − 2r ln N )

|| d^r u/dx^r ||² grows faster than exponentially ⇒ ln || d^r u/dx^r ||² grows faster than linearly for large r

[Figure: sketch of ln || d^r u/dx^r ||² − 2r ln N versus r, for small N and for large N]

for small r one can pick N sufficiently large that the denominator N^{2r} grows faster in r ⇒ the error estimate decreases with r
for larger r the factor N^{2r} does not grow fast enough ⇒ the error estimate grows with r
the value of r at the minimum gives the effective exponent for the decrease in error in this regime of N.
With increasing N the minimum in the error estimate (solid circle in the figure) is shifted to larger r
⇒ the effective order of accuracy increases with N
Note:
Spectral approximation guaranteed to be superior to finite difference methods only in the highly accurate regime
Approximation of Derivatives
Given u(x) = Σ_k u_k e^{ikx} the derivatives are given by

d^n u/dx^n = Σ_{|k|=0}^{∞} (ik)^n u_k e^{ikx}

if the series for the derivative converges (again, convergence in the mean)
Note:
not all square-integrable functions have square-integrable derivatives, e.g. dθ(x)/dx = δ(x)
if the series for u(x) converges uniformly then the series for its 1st derivative still converges (possibly not uniformly)
convergence for d^q u/dx^q is a power N^q slower than that for u, since one can take only q fewer derivatives of it than of u:

d^q u/dx^q = Σ_k (ik)^q u_k e^{ikx}

the coefficients (ik)^q u_k decay more slowly than the u_k themselves. The estimate (3) gets weakened to

|| d^q u/dx^q − P_N (d^q u/dx^q) ||² ≤ inf_{r>q} (1/N^{2r−2q}) || d^r u/dx^r ||²

Periodic boundary conditions: a non-periodic derivative d^r u/dx^r is equivalent to a discontinuous d^r u/dx^r, i.e. d^{r+1}u/dx^{r+1} is not square-integrable
2.2 The Gibbs Phenomenon
Consider convergence in more detail for u(x) piecewise continuous:

P_N u(x) = Σ_{|k|=0}^{N} u_k e^{ikx} = (1/2π) ∫_0^{2π} Σ_{|k|=0}^{N} e^{ik(x−x′)} u(x′) dx′

P_N can be written more compactly using

D_N(s) ≡ Σ_{|k|=0}^{N} e^{iks} = sin((N + ½)s) / sin(½ s).
This identity can be shown by multiplying by the denominator:

( e^{i½s} − e^{−i½s} ) ( e^{iNs} + e^{i(N−1)s} + ... + e^{−iNs} ) = e^{i(N+½)s} − e^{−i(N+½)s}

Insert:

P_N u(x) = (1/2π) ∫_0^{2π} [ sin((N + ½)(x − x′)) / sin(½ (x − x′)) ] u(x′) dx′ = (using t = x − x′) = (1/2π) ∫_{x−2π}^{x} [ sin((N + ½)t) / sin(½ t) ] u(x − t) dt
Use the completeness of the Fourier modes,

lim_{N→∞} D_N(s) = Σ_{|k|=0}^{∞} e^{iks} = 2π Σ_{l=−∞}^{∞} δ(s + 2πl)

⇒ for large N the sum D_N(s) is negligible except near s = 2πl, l = 0, ±1, ±2, ... .
Assume u(x) is discontinuous at x_0:

u(x_0⁻) = u₋,   u(x_0⁺) = u₊

Consider in particular points close to the discontinuity,

x = x_0 + x̃/(N + ½),   with x̃/(N + ½) ≪ 1,

and use that D_N(t) decays rapidly away from t = 0:

P_N u(x_0 + x̃/(N + ½)) ≈ (1/2π) ∫_{−∞}^{∞} [ sin((N + ½)t) / sin(½ t) ] u(x_0 + x̃/(N + ½) − t) dt
Approximate u(x) in the integrand by u₊ and u₋, respectively,

P_N u(x_0 + x̃/(N + ½)) ≈ (1/2π) u₊ ∫_{−∞}^{x̃/(N+½)} [ sin((N + ½)t) / (½ t) ] dt + (1/2π) u₋ ∫_{x̃/(N+½)}^{∞} [ sin((N + ½)t) / (½ t) ] dt

Now write s = (N + ½)t and consider N → ∞ for fixed x̃ (each integral becomes 2 ∫ (sin s / s) ds):

∫_{−∞}^{x̃} (sin s / s) ds = ∫_{−∞}^{0} (sin s / s) ds + ∫_0^{x̃} (sin s / s) ds = π/2 + Si(x̃)

with Si(x̃) the sine integral and lim_{x̃→∞} Si(x̃) = π/2.
Similarly:

∫_{x̃}^{∞} (sin s / s) ds = π/2 − Si(x̃)

Thus

P_N u(x_0 + x̃/(N + ½)) ≈ ½ (u₊ + u₋) + (1/π) Si(x̃) (u₊ − u₋)
Note:
Maximal overshoot is ≈ 9% of the jump (independent of N):

P_N u(x_0 + π/(N + ½)) − u₊ = (u₊ − u₋) ( (1/π) Si(π) − ½ ) ≈ (u₊ − u₋) · 0.09

The location of the overshoot, x_0 + π/(N + ½), converges to the jump position x_0. Everywhere else the series converges pointwise to u(x).
The maximal error does not decrease: the convergence is not uniform in x; but there is convergence in the L₂-norm, since the area between P_N u and u goes to 0.
A smooth oscillation can indicate a severe problem: an unresolved discontinuity.
To capture a true discontinuity finite differences may be better.
Smooth step (e.g. tanh(x/ε)):
as long as the step is not resolved expect behavior like for a discontinuous function ⇒ slow convergence and Gibbs overshoot (HW); only when enough modes are retained to resolve the step does the exponential convergence set in.
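The 9% overshoot can be observed directly. A numerical sketch (not part of the original notes; the square wave with jump 2 and its standard Fourier series are chosen here for illustration):

```python
import numpy as np

# Gibbs overshoot of P_N u for a square wave: u = +1 on (0, pi), u = -1 on
# (pi, 2*pi), so the jump is u_+ - u_- = 2. The overshoot of the partial
# Fourier sum is about 9% of the jump, independent of N.
def overshoot(N, M=20001):
    x = np.linspace(0.0, np.pi, M)       # region just behind the jump at x = 0
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):         # series: sum over odd k of (4/pi) sin(kx)/k
        s += 4.0 / np.pi * np.sin(k * x) / k
    return s.max() - 1.0                 # overshoot above u_+ = 1

for N in (64, 128, 256):
    assert abs(overshoot(N) / 2.0 - 0.0895) < 0.005   # ~9% of the jump, any N
```

The overshoot does not shrink with N; only its location moves toward the jump, which is the non-uniform convergence described above.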
2.3 Discrete Fourier Transformation
We had the continuous Fourier transformation

u(x) = Σ_{|k|=0}^{∞} e^{ikx} u_k   with   u_k = (1/2π) ∫_0^{2π} e^{−ikx} u(x) dx

Consider an evolution equation

∂u/∂t = F(u, ∂u/∂x)

Our goal was to do the time-integration completely in Fourier space, since our variables are the Fourier modes ⇒ need the Fourier components F_k.
Consider a linear PDE:

F(u, ∂_x) = ∂²_x u,   i.e.   ∂u/∂t = ∂²u/∂x²

Insert the Fourier expansion and project onto φ_k = e^{ikx}:

du_k/dt = −k² u_k

Consider nonlinear PDEs:
Polynomial: F(u) = u³

F_k = (1/2π) ∫ u(x)³ e^{−ikx} dx = (1/2π) ∫ dx e^{−ikx} ( Σ_{k₁} e^{ik₁x} u_{k₁} ) ( Σ_{k₂} e^{ik₂x} u_{k₂} ) ( Σ_{k₃} e^{ik₃x} u_{k₃} ) = Σ_{k₁} Σ_{k₂} u_{k₁} u_{k₂} u_{k−k₁−k₂}

the convolution requires N² multiplications of three numbers, compared to a single such multiplication in real space
for an r-th order polynomial one needs N^{r−1} operations: slow!
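The convolution formula can be verified against simply cubing u in real space. A sketch (not part of the original notes; only a few low modes are used so that u³ is still resolved on the grid and no aliasing enters):

```python
import numpy as np

# For F(u) = u^3: F_k = sum_{k1,k2} u_{k1} u_{k2} u_{k-k1-k2} (double convolution).
M = 32
rng = np.random.default_rng(1)
c = np.zeros(M, dtype=complex)            # Fourier coefficients u_k
for k in (1, 2, 3):
    a = rng.normal() + 1j * rng.normal()
    c[k] = a
    c[-k] = np.conj(a)                    # conjugate symmetry: u(x) real
u = np.real(np.fft.ifft(c) * M)           # u(x_j) = sum_k u_k e^{ikx_j}
Fk_direct = np.fft.fft(u ** 3) / M        # coefficients of u^3 via real space
Fk_conv = np.zeros(M, dtype=complex)      # double convolution over retained modes
modes = range(-3, 4)
for k1 in modes:
    for k2 in modes:
        for k3 in modes:
            Fk_conv[(k1 + k2 + k3) % M] += c[k1 % M] * c[k2 % M] * c[k3 % M]
assert np.allclose(Fk_conv, Fk_direct)
```

The triple loop makes the O(N²)-per-coefficient cost of the convolution explicit, compared to one cubing per grid point in real space.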
General nonlinearities, e.g.
coupled pendula:

F(u) = sin(u) = u − (1/3!) u³ + (1/5!) u⁵ − ...

Arrhenius law in chemical reactions:

F(u) = e^{u} = Σ_{l=0}^{∞} (1/l!) u^l

⇒ arbitrarily high powers of u, cannot use the convolution
Evaluate nonlinearities in real space:
⇒ need to transform efficiently between real space and Fourier space
Discrete Fourier transformation:
Question: will we lose spectral accuracy with only 2N grid points in the integral?
Use the trapezoidal rule (weights ½, 1, 1, ..., 1, ½) with 2N collocation points

x_j = (2π/2N) j,   Δx = 2π/2N,   x_{2N} = x_0

ũ_k = (1/c_k) (1/2N) [ ½ e^{−ikx_0} u(x_0) + Σ_{j=1}^{2N−1} e^{−ikx_j} u(x_j) + ½ e^{−ikx_{2N}} u(x_{2N}) ] = (for periodic u(x)) = (1/c_k) (1/2N) Σ_{j=0}^{2N−1} e^{−ikx_j} u(x_j)
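The trapezoid sum above is exactly what a standard FFT computes (up to the 1/(2N) normalization and the c_k convention, which only affects k = ±N). A quick sketch (not part of the original notes):

```python
import numpy as np

# DFT coefficients from the trapezoid sum on 2N points vs. numpy's FFT.
N = 8
M = 2 * N
x = 2 * np.pi * np.arange(M) / M
u = np.exp(np.sin(x))                    # smooth periodic test function
# explicit trapezoid/DFT sum for k = 0..M-1 (c_k = 1 here)
uk_trap = np.array([np.sum(np.exp(-1j * k * x) * u) / M for k in range(M)])
uk_fft = np.fft.fft(u) / M
assert np.allclose(uk_trap, uk_fft)
```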
High wavenumbers:
Calculate the high-wavenumber components:

ũ_{N+m} = (1/2N) Σ_{j=0}^{2N−1} e^{−iN(2π/2N)j} e^{−imx_j} u(x_j),   where e^{−iN(2π/2N)j} = e^{−iπj} = e^{+iπj}
 = (1/2N) Σ_{j=0}^{2N−1} e^{+iπj} e^{−imx_j} u(x_j)
 = ũ_{−N+m}

thus: ũ_{−N} = ũ_N ⇒ there are only 2N independent amplitudes
⇒ limited range of relevant wave numbers: −N ≤ k ≤ N

Figure 3: For a discrete spatial grid the Fourier space is periodic.
a) 1st Brillouin zone, b) periodic representation of Fourier space.

Fourier space is periodic ⇔ spatial grid is discrete rather than continuous.
This is the converse of the Fourier spectrum becoming discrete when the real space is made periodic (rather than infinite).
Two possible treatments:
1. restrict −N ≤ k ≤ N − 1 (somewhat asymmetric);
in Matlab the ordering is (ũ_0, ũ_1, ..., ũ_{N−1}, ũ_{−N}, ũ_{−N+1}, ..., ũ_{−1})
2. in these notes we set

ũ_N = ũ_{−N} = (1/2) (1/2N) Σ_{j=0}^{2N−1} e^{−iNx_j} u(x_j)

i.e. c_N = c_{−N} = 2 and c_j = 1 for j ≠ ±N
Inverse Transformation

I_N(u(x_j)) = Σ_{k=−N}^{N} ũ_k e^{ikx_j}

Orthogonality:

⟨φ_k, φ_l⟩_N ≡ (1/2N) Σ_{j=0}^{2N−1} e^{i(l−k)(2π/2N)j} = Σ_{m=−∞}^{∞} δ_{l−k, 2Nm}   (4)

Notation:
⟨·, ·⟩_N denotes the scalar product of functions defined only at the 2N discrete points x_j

Figure 4: Cancellation of the Fourier modes in the sum. Here N = 4 and l − k = 1

Note:
⟨φ_k, φ_l⟩_N ≠ 0 if k − l is any multiple of 2N and not only for k = l (cf. completeness relation (2))
⇒ high wavenumbers are not necessarily perpendicular to low wavenumbers
Interpolation property
Consider I_N(u) on the grid:

I_N(u(x_l)) = Σ_{k=−N}^{N} ũ_k e^{ikx_l} = Σ_{k=−N}^{N} (1/2N) (1/c_k) Σ_{j=0}^{2N−1} e^{−ikx_j} u(x_j) e^{ikx_l}

interchange the sums to get a δ-function:

= (1/2N) Σ_{j=0}^{2N−1} u(x_j) Σ_{r=0}^{2N} (1/c_{r−N}) e^{i(r−N)(2π/2N)(l−j)}

in the r-sum: for r = 2N we have ½ e^{iπ(l−j)} and for r = 0 we have ½ e^{−iπ(l−j)}; these two half-weighted terms are equal and combine to one full term, so that using (4) the sum adds up to 2N δ_{lj} (note that |l − j| < 2N).
Thus

I_N(u(x_l)) = (1/2N) Σ_{j=0}^{2N−1} u(x_j) 2N δ_{jl} = u(x_l).

Notes:
On the grid x_j the function u(x) is represented exactly by I_N(u(x)):
no information is lost on the grid
I_N(u(x)) is often called the Fourier interpolant.
2.3.1 Aliasing
For the discrete Fourier transform the function is defined only on the grid:
what happens to the high wavenumbers that cannot be represented on that grid?
Consider u(x) = e^{i(r+2N)x} with 0 < |r| < N.
Continuous Fourier transform: P_N u = 0 since the wavenumber is higher than N.
Discrete Fourier transform:

u(x_j) = e^{i(2N+r)(2π/2N)j} = e^{ir(2π/2N)j} = e^{irx_j}

On the grid u(x) looks like e^{irx}:

I_N(u(x_j)) = e^{irx_j} ≠ 0

u(x) is folded back into the 1st Brillouin zone.
Notes:
highest wavenumber that is resolvable on the grid: |k| = N, with e^{iN(2π/2N)j} = (−1)^j
in the CFT unresolved modes are set to 0
in the DFT unresolved modes modify the resolved modes: Aliasing
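The folding back is easy to see numerically. A sketch (not part of the original notes): on a grid with 2N points, the mode e^{i(r+2N)x} and the mode e^{irx} have identical grid values, so their DFTs coincide.

```python
import numpy as np

# Aliasing: wavenumber r + 2N is indistinguishable from wavenumber r on the grid.
N = 8
j = np.arange(2 * N)
x = 2 * np.pi * j / (2 * N)
r = 3
high = np.exp(1j * (r + 2 * N) * x)   # unresolvable mode, wavenumber r + 2N
low = np.exp(1j * r * x)              # resolvable mode, wavenumber r
assert np.allclose(high, low)                          # identical grid values
assert np.allclose(np.fft.fft(high), np.fft.fft(low))  # identical DFT
```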
Relation between the CFT (u_k) and DFT (ũ_k) coefficients:

ũ_k = (1/2N) (1/c_k) Σ_{j=0}^{2N−1} e^{−ikx_j} u(x_j)
 = (1/2N) (1/c_k) Σ_{l=−∞}^{∞} Σ_{j=0}^{2N−1} e^{i(l−k)(2π/2N)j} u_l
 = (1/c_k) Σ_{l=−∞}^{∞} Σ_{m=−∞}^{∞} δ_{l−k, 2Nm} u_l

ũ_k = (1/c_k) u_k + (1/c_k) Σ_{|m|=1}^{∞} u_{k+2Nm}

The sum contains the aliasing terms from the higher harmonics that are not represented on the grid.
High wavenumbers look like low wavenumbers and contribute to the low-k amplitudes.
Error ||u − I_N u||²:

I_N u = Σ_{k=−N}^{N} ũ_k e^{ikx} = Σ_{k=−N}^{N} [ (1/c_k) u_k + (1/c_k) Σ_{|m|=1}^{∞} u_{k+2Nm} ] e^{ikx} = P_N u + R_N u

|| u − I_N u ||² = || (u − P_N u) − R_N u ||² = || u − P_N u ||² + || R_N u ||²

using orthogonality: all modes of u − P_N u have |k| > N, while all modes of R_N u have |k| ≤ N.
⇒ The interpolation error is larger than the projection error.
Decay of coefficients:
if the CFT coefficients decay exponentially, u_k ∝ e^{−η|k|}, so will the DFT coefficients:

ũ_k ≈ (1/c_k) e^{−η|k|} + (1/c_k) Σ_{|m|=1}^{∞} e^{−η|k+2Nm|} ≈ (geometric series) ≈ (1/c_k) e^{−η|k|} + (1/c_k) (2 e^{−2ηN}) / (1 − e^{−2ηN})   for |k| ≤ N

Thus:
The asymptotic convergence properties of the DFT are essentially the same as those of the CFT ⇒ homework assignment
2.3.2 Differentiation
Main reason for the spectral approach: derivatives
For the CFT one has: projection and differentiation commute:

d/dx (P_N u) = Σ_{k=−N}^{N} ik u_k e^{ikx}

P_N (du/dx) = Σ_{k=−N}^{N} (du/dx)_k e^{ikx} = Σ_{k=−N}^{N} (1/2π) [ ∫ e^{−ikx′} (du/dx′) dx′ ] e^{ikx}

using integration by parts:

= Σ_{k=−N}^{N} (1/2π) ik [ ∫ e^{−ikx′} u(x′) dx′ ] e^{ikx} = d/dx (P_N u)

For the DFT interpolation and differentiation do not commute:

d/dx (I_N u) ≠ I_N (du/dx).

i.e. d/dx (I_N u) does not give the exact values of du/dx on the grid points.
I_N u does not agree with u between the grid points ⇒ its derivative does not agree with the derivative of u on the grid points; but I_N(du/dx) does interpolate du/dx.
Asymptotically, the errors of I_N(du/dx) and of d/dx I_N(u) are of the same order.
Implementation of the Discrete Fourier Transformation

Steps for calculating derivatives at a given point:

i) Transform method

1. calculate \tilde{u}_k from the values at the collocation points x_j:

   \tilde{u}_k = \frac{1}{2N}\frac{1}{c_k}\sum_{j=0}^{2N-1} e^{-ikx_j}\, u(x_j)

2. for the r-th derivative:

   \frac{d^r u}{dx^r} \;\to\; (ik)^r\, \tilde{u}_k

3. back-transformation at the collocation points:

   \frac{d^r}{dx^r} I_N(u(x_j)) = \sum_{k=-N}^{N} (ik)^r\, \tilde{u}_k\, e^{ikx_j}

Notes:
- seems to require O(N^2) operations, compared to O(N) operations for finite differences
- for N = 2^l 3^m 5^n ... the DFT can be done in O(N \ln N) operations using the fast Fourier transform (in Matlab: the functions fft and ifft)
- for u real: \tilde{u}_{-k} = \tilde{u}_k^* ⇒ need to calculate only half of the \tilde{u}_k:
  - special FFTs exist that store the real data in a complex array of half the size
  - independent variables: \tilde{u}_0 and \tilde{u}_N real, \tilde{u}_1, ..., \tilde{u}_{N-1} complex
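The three transform-method steps fit in a few lines of Python/numpy (a sketch; np.fft.fftfreq supplies the wavenumbers in FFT ordering, and the k = N mode is zeroed for odd derivatives since, as discussed below, it carries no direction information):

```python
import numpy as np

def spectral_derivative(u, order=1):
    """r-th derivative of data on 2N uniform points in [0, 2*pi):
    FFT -> multiply by (ik)^r -> inverse FFT (the transform method)."""
    n = u.size                            # n = 2N collocation points
    u_hat = np.fft.fft(u)
    k = np.fft.fftfreq(n, d=1.0 / n)      # 0, 1, ..., N-1, -N, ..., -1
    if order % 2 == 1:
        k[n // 2] = 0.0                   # k = N carries no direction: lambda_N = 0
    return np.real(np.fft.ifft((1j * k) ** order * u_hat))

x = 2 * np.pi * np.arange(32) / 32
err = np.max(np.abs(spectral_derivative(np.sin(3 * x)) - 3 * np.cos(3 * x)))
print(err)  # spectrally accurate: rounding-error level
```

For a fully resolved mode the derivative is exact up to rounding, which is the point of the method.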
ii) Matrix multiplication method

\frac{d^r}{dx^r} I_N(u) is linear in the u(x_j) ⇒ it can be written as a matrix multiplication:

\frac{d^r}{dx^r} I_N(u(x_j)) = \sum_{k=-N}^{N} (ik)^r\, \tilde{u}_k\, e^{ikx_j}

interchange the sums:

 = \sum_{l=0}^{2N-1}\left(\sum_{k=-N}^{N} (ik)^r\, \frac{1}{2N}\frac{1}{c_k}\, e^{ik(x_j - x_l)}\right) u(x_l)

Write this in terms of vectors and a matrix,

\mathbf{u} = \big(u(x_0), \ldots, u(x_{2N-1})\big)^T, \qquad \frac{d^r}{dx^r} I_N(u) = \big(\ldots, u^{(r)}(x_j), \ldots\big)^T

Then the first derivative is

\mathbf{u}^{(1)} = D\,\mathbf{u}
with

D_{jl} = \frac{1}{2N}\sum_{k=-N}^{N} ik\, \frac{1}{c_k}\, e^{ik\frac{2\pi}{2N}(j-l)}
       = \begin{cases} \frac{1}{2}(-1)^{j+l}\cot\!\left(\frac{(j-l)\pi}{2N}\right) & \text{for } j \ne l \\[2pt] 0 & \text{for } j = l \end{cases}

Higher derivatives:

\mathbf{u}^{(r)} = D^r\,\mathbf{u}

Notes:
- D is a 2N \times 2N matrix (j, l = 0, ..., 2N-1)
- D is anti-symmetric: D_{lj} = -D_{jl}
- matrix multiplication is expensive: O(N^2) operations
- but the multiplication can be vectorized, i.e. different steps of the multiplications/additions are done simultaneously for different entries of the matrix
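A direct implementation of D via the cotangent formula can be checked against an exact derivative (numpy sketch; n = 2N):

```python
import numpy as np

def fourier_diff_matrix(n):
    """Spectral differentiation matrix on n = 2N uniform points:
    D_jl = (1/2)(-1)^(j+l) cot((j - l) pi / n) for j != l, zero diagonal."""
    j, l = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    D = np.zeros((n, n))
    off = j != l
    D[off] = 0.5 * (-1.0) ** (j[off] + l[off]) / np.tan((j[off] - l[off]) * np.pi / n)
    return D

n = 16
D = fourier_diff_matrix(n)
x = 2 * np.pi * np.arange(n) / n
err = np.max(np.abs(D @ np.sin(2 * x) - 2 * np.cos(2 * x)))
print(err, np.max(np.abs(D + D.T)))  # exact for resolved modes; D antisymmetric
```

The eigenvalues of D can also be inspected numerically, anticipating the discussion below: they are ik for |k| <= N-1, plus an extra zero.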
Eigenvalues of the Pseudo-Spectral Derivative:

Fourier modes with |k| \le N-1 are represented exactly:

D\, e^{ikx} = ik\, e^{ikx} \qquad \text{for } |k| \le N-1

⇒ the plane waves e^{ikx} must be eigenvectors with eigenvalues

\lambda_k = ik = 0, \pm 1i, \pm 2i, \ldots, \pm(N-1)i

D has 2N eigenvalues: one is missing.

\operatorname{tr} D = 0 = \sum_k \lambda_k \quad ⇒ \quad \text{last eigenvalue } \lambda_N = 0

One can see that also via e^{iN\frac{2\pi}{2N}j} = (-1)^j = e^{-iN\frac{2\pi}{2N}j}: the eigenvalue must be independent of the sign of N ⇒ \lambda_N = 0.

Interpretation: consider the PDE

\partial_t u = \partial_x u \qquad \text{with } u = e^{i\omega t + ikx}

The frequency is numerically determined by Du: i\omega = \lambda_k.
For |k| \le N-1 the solution is a traveling wave with the direction of propagation given by the sign of k.
For k = N one has u(x_j) = (-1)^j: this does not define a direction of propagation ⇒ \lambda_N = 0.

Note:
One gets a vanishing eigenvalue also using the transform method:

(-1)^j = \tilde{u}_N\, e^{iN\frac{2\pi}{2N}j} + \tilde{u}_{-N}\, e^{-iN\frac{2\pi}{2N}j} \qquad \text{with } \tilde{u}_N = \tilde{u}_{-N}

thus

\frac{d}{dx} P_N\!\left((-1)^j\right) = iN\,\tilde{u}_N\, e^{iNx_j} + (-iN)\,\tilde{u}_{-N}\, e^{-iNx_j} = 0.
3 Fourier Methods for PDE: Continuous Time

Consider the PDE

\partial_t u = S(u) \equiv F\!\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}, \ldots\right)

The operator S(u) can be nonlinear.

Two methods:

1. Pseudo-spectral: u \to I_N u
   - spatial derivatives in Fourier space
   - nonlinearities in real space
   - temporal evolution performed in real space or in Fourier space, i.e. the unknowns to be updated are the u(x_j) in real space or the \tilde{u}_k in Fourier space

2. Galerkin method: u \to P_N u
   - completely in Fourier space: spatial derivatives, nonlinearities, and temporal updating are all done in Fourier space
3.1 Pseudo-spectral Method

The method involves the steps:

1. introduce collocation points x_j and u(x_j)
2. transform the numerical solution u(x_j) \to \tilde{u}_k to Fourier space
3. evaluate derivatives using \tilde{u}_k
4. transform back into real space and evaluate the nonlinearities
5. evolve in time, either in real space or in Fourier space:

   \frac{d}{dt} I_N(u) = S(I_N(u))

Note:
I_N(u) is not the spectral interpolant of the exact solution u, since solving the PDE induces errors:
1. taking the spectral interpolant of the exact solution \bar{u} yields

   I_N\!\left(\frac{d}{dt}\bar{u}\right) = I_N(S(\bar{u})).

   Using \frac{d}{dt} I_N(u) = I_N\!\left(\frac{d}{dt}u\right), the pseudospectral solution satisfies

   I_N\!\left(\frac{d}{dt}u\right) = S(I_N(u)) \ne I_N(S(u)),

   since the spatial derivative does not commute with I_N.

2. time-stepping introduces errors beyond the spectral approximation.
Examples:

1. Wave equation

   \partial_t u = \partial_x u

   a) Using the FFT:

   \partial_t u(x_j) = \partial_x I_N(u(x_j)) = \sum_{k=-N}^{N} ik\, \tilde{u}_k\, e^{ikx_j}

   Note: \tilde{u}_k and the sum over k (= back-transformation) are evaluated via two FFTs.

   b) Using multiplication with the spectral differentiation matrix D:

   \partial_t u(x_j) = \sum_l D_{jl}\, u(x_l)
2. Variable coefficients

   \partial_t u = c(x)\,\partial_x u

   a) \partial_t u(x_j) = c(x_j)\,\partial_x I_N(u(x_j)): multiply by the wave speed in real space

   b) \partial_t u(x_j) = c(x_j)\sum_m D_{jm}\, u(x_m).
3. Reaction-diffusion equation

   \partial_t u = \partial_x^2 u + f(u)

   a) using the FFT:

   \partial_t u(x_j) = \partial_x^2 I_N(u(x_j)) + f(u(x_j)) = \sum_{k=-N}^{N} (-k^2)\, \tilde{u}_k\, e^{ikx_j} + f(u(x_j))

   b) matrix multiplication:

   \partial_t u(x_j) = \sum_m D^{(2)}_{jm}\, u(x_m) + f(u(x_j)) \qquad \text{with } D^{(2)}_{jm} = \sum_l D_{jl} D_{lm}.
4. Burgers equation

   \partial_t u = u\,\partial_x u = \frac{1}{2}\partial_x(u^2) \quad \text{in conservation form}

   consider both types of nonlinearities, u\,\partial_x u and \partial_x(u^2):

   a)

   u(x_j)\,\partial_x I_N(u(x_j)) = u(x_j) \sum_{k=-N}^{N} ik\, \tilde{u}_k\, e^{ikx_j}

   \partial_x I_N(u^2(x_j)) = \sum_{k=-N}^{N} ik\, \tilde{w}_k\, e^{ikx_j} \qquad \tilde{w}_k = \frac{1}{2N}\sum_{j=0}^{2N-1} e^{-ikx_j}\, u^2(x_j)

   b)

   \partial_t u(x_j) = u(x_j)\,(D\mathbf{u})_j + \Big(D\,\big(u(x_0)^2, \ldots, u(x_{2N-1})^2\big)^T\Big)_j
Notes:
- spectral methods will lead to Gibbs oscillations near the shock
- pseudo-spectral methods: on the grid the oscillations may not be visible; one may need to plot the function between grid points as well, but derivatives do show the oscillations
- all sums over Fourier modes k or grid points j should be done via FFT.
3.2 Galerkin Method

The equation is solved completely in Fourier space:

1. plug

   u(x) = \sum_{k=-N}^{N} \hat{u}_k e^{ikx}

   into \partial_t u = S(u)

2. project the equation onto the retained Fourier modes (-N \le l \le N):

   \partial_t \hat{u}_l \equiv \frac{1}{2\pi}\int_0^{2\pi} e^{-ilx}\,\partial_t u(x)\,dx = \frac{1}{2\pi}\int_0^{2\pi} e^{-ilx}\, S(u(x))\,dx

(Note: For smooth functions the two formulations of the Burgers nonlinearity are equivalent. Burgers' equation develops shocks at which the solution becomes discontinuous: there the formulations are not equivalent, and one needs to satisfy the entropy condition, which corresponds to adding a viscous term \nu\,\partial_x^2 u and letting \nu \to 0.)
More generally, retaining N modes from a complete set of functions \phi_k(x),

u(x) = \sum_{k=1}^{N} \hat{u}_k\, \phi_k(x)

\langle \phi_l, \partial_t u \rangle = \langle \phi_l, S(u) \rangle \quad \text{for } 1 \le l \le N

\langle \phi_l, \partial_t u - S(u) \rangle = 0

The residual (= error) \partial_t u - S(u) has to be orthogonal to all basis functions that were kept:

P_N\big(\partial_t P_N u - S(P_N u)\big) = 0

⇒ optimal choice within the space of N modes that is used in the expansion.

Note: for Galerkin the integrals are calculated exactly, either analytically or numerically with sufficient resolution (number of grid points \to \infty).
Examples:

1. Variable-coefficient wave equation

   \partial_t u = c(x)\,\partial_x u

   \partial_t \hat{u}_m = \frac{1}{2\pi}\int_0^{2\pi} e^{-imx}\, c(x) \sum_{k=-N}^{N} ik\,\hat{u}_k\, e^{ikx}\,dx
   = \sum_{k=-N}^{N} C_{mk}\, ik\,\hat{u}_k \qquad C_{mk} = \frac{1}{2\pi}\int_0^{2\pi} e^{i(k-m)x}\, c(x)\,dx

   Note: although the equation is linear, there are O(N^2) operations due to the variable coefficient (C_{mk} is in general not diagonal).
2. Burgers equation

   \partial_t u = u\,\partial_x u + \partial_x(u^2)

   u\,\partial_x u = \sum_{k=-N}^{N}\sum_{l=-N}^{N} \hat{u}_k\, il\,\hat{u}_l\, e^{i(k+l)x}
   \qquad
   \partial_x u^2 = \sum_{k=-N}^{N}\sum_{l=-N}^{N} i(k+l)\,\hat{u}_k \hat{u}_l\, e^{i(k+l)x}

   Project onto e^{imx}: the integral gives \delta_{k+l,m}, and the sum over l yields l \to m-k:

   \partial_t \hat{u}_m = \sum_{k=-N}^{N} i\big((m-k) + m\big)\,\hat{u}_k\, \hat{u}_{m-k} \qquad (5)

   Note: again O(N^2) operations in each time step.
Comparison:

Nonlinear problems:
- Galerkin: the effort increases with the degree of the nonlinearity because of the convolutions
- pseudo-spectral: the effort is mostly in the transformations to and from Fourier space: FFT essential

Variable coefficients:
- Galerkin requires a matrix multiplication, pseudo-spectral only a scalar multiplication
- the error is larger in pseudo-spectral, but it has the same scaling with N

Unresolved modes:
- pseudo-spectral has aliasing errors: unresolved modes spill into the equations for the resolved modes
- nonlinearities generate high-wavenumber modes: their aliasing can be removed by taking more grid points (3/2-rule) or by phase shifts

Grid effects:
- the pseudo-spectral method breaks the translation symmetry, which can lead to pinning of fronts
- the Galerkin method does not break the translation symmetry.

Newton method for unstable fixed points or implicit time stepping:
- quite clear for a Galerkin code: (5) is simply a set of coupled ODEs; not so obvious to implement for a pseudo-spectral code, since back- and forth-transformations are needed.
4 Temporal Discretization

Consider

\partial_t u = S(u)

Two possible goals:

1. interested in the steady state: the transient towards the steady state is not relevant ⇒ only the spatial resolution is relevant
2. initial-value problem: interested in the complete evolution ⇒ the temporal error has to be kept as small as the spatial error

If the transient evolution is relevant, then the spectral accuracy in space is best exploited if high temporal accuracy is obtained as well: seek high-order temporal schemes.
4.1 Review of Stability

Consider the ODE

\partial_t u = \lambda u \qquad (6)

Definitions:

1. A scheme is stable if there are constants C, \alpha, T, and \tau such that

   \|u(t)\| \le C e^{\alpha t}\,\|u(0)\|

   for all 0 \le t \le T, 0 < \Delta t < \tau. The constants C and \alpha have to be independent of \Delta t.

2. A scheme is absolutely stable if \|u(t)\| < \infty for all t.

   Note:
   - the concept of absolute stability is only useful for differential equations for which the exact solution is bounded for all times
   - absolute stability is closely related to von Neumann stability

3. The region A of absolute stability is the region of the complex plane defined by

   A = \{\lambda \Delta t \in \mathbb{C} \;:\; \|u(t)\| \text{ bounded for all } t\}

Notes:
- for \lambda \in \mathbb{R} the ODE (6) corresponds to a parabolic equation like \partial_t u = \partial_x^2 u in Fourier space
- for \lambda \in i\mathbb{R} the ODE (6) corresponds to a hyperbolic equation like \partial_t u = \partial_x u in Fourier space

For a PDE one can think in terms of a system of ODEs coupled through the differentiation matrices,

\partial_t \mathbf{u} = L\,\mathbf{u},

e.g. for \partial_t u = \partial_x u one has L = D.

Assume L can be diagonalized: S L S^{-1} = \Lambda with \Lambda diagonal. Then

\partial_t (S\mathbf{u}) = \Lambda\, S\mathbf{u}

Thus: stability requires that all eigenvalues of L lie in the region of absolute stability of the scheme.

Note: highest Fourier eigenvalues:
- for the simple wave equation: \lambda_{max} = \pm i(N-1)
- for the diffusion equation: \lambda_{max} = -N^2
Side Remark: the stability condition after diagonalization is in terms of S\mathbf{u},

\|S\mathbf{u}(t)\| \le C e^{\alpha t}\,\|S\mathbf{u}(0)\|

We need

\|\mathbf{u}(t)\| \le \tilde{C} e^{\alpha t}\,\|\mathbf{u}(0)\|

If S is unitary, i.e. if S^{-1} = S^{\dagger}, we have \|S\mathbf{u}\| = \|\mathbf{u}\|.
For Fourier modes the spectral differentiation matrix is normal,

D^{\dagger} D = D\, D^{\dagger}

⇒ D can be diagonalized by a unitary matrix.
(This is not the case for the Chebyshev basis functions used later.)

Thus: for the Fourier method it is sufficient to consider the scalar equation (6).
4.2 Adams-Bashforth Methods

Based on rewriting the ODE as an integral equation:

u^{n+1} = u^n + \int_{t_n}^{t_{n+1}} F(t', u(t'))\,dt'

Explicit method: approximate F(u) by the polynomial that interpolates F(u) over the last l time steps and extrapolate it to the interval [t_n, t_{n+1}].

Figure 5: Adams-Bashforth methods interpolate F(u) over the interval [t_{n-l}, t_n] and then extrapolate to the interval [t_n, t_{n+1}]. (The figure has a wrong label for the first grid point.)
Consider

\partial_t u = F(u)

AB1: \quad u^{n+1} = u^n + \Delta t\, F(u^n)

AB2: \quad u^{n+1} = u^n + \Delta t \left(\frac{3}{2} F(u^n) - \frac{1}{2} F(u^{n-1})\right)

Note: AB1 is identical to forward Euler.

Stability:

Consider F(u) = \lambda u with \lambda \in \mathbb{C}.

AB1:

z = 1 + \lambda \Delta t, \qquad |z|^2 = (1 + \lambda_r \Delta t)^2 + \lambda_i^2 \Delta t^2

The stability limit is given by |z|^2 = 1:

AB1 = FE: \quad (1 + \lambda_r \Delta t)^2 + \lambda_i^2 \Delta t^2 = 1

To plot the stability limit, parametrize z = e^{i\theta} and plot \lambda \Delta t \equiv (\lambda_r(\theta) + i\lambda_i(\theta))\Delta t:

AB1: \quad \lambda \Delta t = z - 1

AB2: \quad \lambda \Delta t = \frac{z - 1}{\frac{3}{2} - \frac{1}{2z}}
[Figure: regions of absolute stability of AB1, AB2, AB3 in the complex \lambda\Delta t plane.]
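The stability boundaries in the figure can be traced by the parametrization z = e^{i\theta} described above (numpy sketch):

```python
import numpy as np

# Trace |z| = 1 with z = exp(i*theta) and solve for lambda*dt.
theta = np.linspace(0.0, 2.0 * np.pi, 401)
z = np.exp(1j * theta)

ldt_ab1 = z - 1.0                      # AB1 = forward Euler
ldt_ab2 = (z - 1.0) / (1.5 - 0.5 / z)  # AB2

# AB1's boundary is the unit circle centered at -1; the AB2 boundary
# crosses the negative real axis at lambda*dt = -1 (theta = pi).
print(np.max(np.abs(np.abs(ldt_ab1 + 1.0) - 1.0)))
print(ldt_ab2[200])
```

Plotting the two curves (e.g. with matplotlib) reproduces the figure above.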
Notes:
- AB1 (= FE) and AB2 are not absolutely stable for purely dispersive equations (\lambda_r = 0)
- AB3 and AB4 are absolutely stable even for dispersive equations with \lambda_r = 0
- AB1 and AB2: the stability limit is tangential to \lambda_r = 0; for \lambda_r = 0 the exponential growth rate goes to 0 for \Delta t \to 0 at a fixed number of modes (i.e. fixed \lambda). For fixed t_{max} we can choose \Delta t small enough to limit the growth of the solution.

  AB1: for \lambda_r = 0

  |z|^2 = 1 + \lambda_i^2 \Delta t^2 \quad ⇒ \quad |z|^{t_{max}/\Delta t} = \left(1 + \lambda_i^2 \Delta t^2\right)^{\frac{1}{2}\frac{t_{max}}{\Delta t}} \approx e^{\frac{1}{2}\lambda_i^2 \Delta t\, t_{max}}

  ⇒ need \Delta t \le O(\lambda_i^{-2}); for the simple wave equation one has then

  \Delta t \le O(N^{-2}),

  i.e. AB1 is only stable with diffusive scaling of the time step.

  AB2: for \lambda_r = 0

  z = 1 + i\lambda_i \Delta t - \frac{1}{2}\lambda_i^2 \Delta t^2 + \frac{1}{4} i \lambda_i^3 \Delta t^3 - \frac{1}{8}\lambda_i^4 \Delta t^4 + \ldots

  |z|^2 = 1 + \frac{1}{2}\lambda_i^4 \Delta t^4 \quad ⇒ \quad |z|^{t_{max}/\Delta t} \approx e^{\frac{1}{4}\lambda_i^4 \Delta t^3\, t_{max}}

  ⇒ need \Delta t \le O(\lambda_i^{-4/3}); for the simple wave equation one gets

  \Delta t \le O(N^{-4/3}),

  which is less stringent than AB1 = FE.
- The growth may be less of a problem for spectral methods, since one would like to balance the temporal error with the spatial error,

  \Delta t^p \sim e^{-N};

  one may therefore have to choose quite a small \Delta t just to achieve the desired accuracy, independent of the stability condition.
  But: the growth rate is largest for the largest wavenumbers k: high Fourier modes tend to creep in.
- Diffusion equation: FE stability limit for \lambda_i = 0 and \lambda_r = -k^2 < 0:

  \Delta t < \frac{2}{|\lambda_r|} = \frac{2}{k_{max}^2} = \frac{2}{N^2}

  For the central difference scheme:

  \Delta t < \frac{1}{2}\Delta x^2 = \frac{1}{2}\left(\frac{2\pi}{2N}\right)^2 \approx \frac{5}{N^2}

  The scaling of the stability limit is the same, but the finite-difference scheme has a slightly larger prefactor, i.e. it has a slightly larger stability range. But it needs a smaller \Delta x to achieve the same spatial accuracy.
Comment on Implementation

Consider

\partial_t u = \partial_x^2 u + f(u)

Forward Euler:

u^{n+1} = u^n + \Delta t\,\partial_x^2 u^n + \Delta t\, f(u^n)

We want to evaluate the derivative in Fourier space ⇒ FFT.

1. If we do the temporal update in Fourier space:

   \tilde{u}^{n+1}_k = \tilde{u}^n_k + \Delta t\,(-k^2)\,\tilde{u}^n_k + \Delta t\, \mathcal{F}_k(f(u^n)),

   where \mathcal{F}_k(f(u^n)) is the k-th mode of the Fourier transform of f(u^n).
   After updating \tilde{u}^{n+1}_k, transform back to u^{n+1}(x_j) and calculate f(u^{n+1}_j) for the next Euler step.

2. If we do the temporal update in real space:
   first transform back into real space and do the time step there,

   u^{n+1}_j = u^n_j + \Delta t\,\partial_x^2 I_N(u) + \Delta t\, f(u_j)

Note: the choice between these two types of updates is quite common, not only in forward Euler.
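A minimal numpy sketch of option 1 (temporal update in Fourier space); the reaction term f(u) = u - u^3 is just an assumed example:

```python
import numpy as np

# Forward-Euler pseudo-spectral step for u_t = u_xx + f(u), updated in
# Fourier space; f(u) = u - u**3 is an example choice, not prescribed here.
def fe_step(u, dt):
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)
    u_hat = np.fft.fft(u)
    f_hat = np.fft.fft(u - u**3)               # transform of the nonlinearity
    u_hat_new = u_hat + dt * (-k**2) * u_hat + dt * f_hat
    return np.real(np.fft.ifft(u_hat_new))     # back to collocation values

n = 64
x = 2 * np.pi * np.arange(n) / n
u = 0.1 * np.cos(x)
dt = 1e-4                                      # FE needs dt = O(N^-2) here
for _ in range(1000):
    u = fe_step(u, dt)
print(u.max())
```

Note the small dt: it is dictated by the FE stability limit 2/N^2 of the diffusive part, which is exactly the motivation for the implicit and exponential schemes below.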
4.3 Adams-Moulton Methods

We seek highly stable schemes ⇒ implicit schemes:
in the polynomial interpolation of F(u) for the integral in

u^{n+1} = u^n + \int_{t_n}^{t_{n+1}} F(t', u(t'))\,dt' \qquad (7)

include t_{n+1}. This makes the scheme implicit.

Figure 6: Adams-Moulton methods interpolate F(u) over the interval [t_{n+1-l}, t_{n+1}], which includes the new time step.
Backward Euler: \quad u^{n+1} = u^n + \Delta t\, F(u^{n+1})

Crank-Nicholson: \quad u^{n+1} = u^n + \frac{1}{2}\Delta t\left(F(u^{n+1}) + F(u^n)\right)

3rd-order Adams-Moulton: \quad u^{n+1} = u^n + \frac{1}{12}\Delta t\left(5F(u^{n+1}) + 8F(u^n) - F(u^{n-1})\right)
[Figure: regions of absolute stability of AM3-AM6 in the complex \lambda\Delta t plane.]
Note:
- the region of stability shrinks with increasing order
- only backward Euler and Crank-Nicholson are unconditionally stable
- AM3 and higher have a finite stability limit: we do not get a high-order unconditionally stable scheme with AM.
Implementation of Crank-Nicholson

Consider the wave equation

\partial_t u = \partial_x u

\left(1 - \frac{1}{2}\Delta t\,\partial_x\right) u^{n+1} = \left(1 + \frac{1}{2}\Delta t\,\partial_x\right) u^n

With the matrix-multiply method:

\sum_l \left(\delta_{jl} - \frac{1}{2}\Delta t\, D_{jl}\right) u^{n+1}(x_l) = \sum_l \left(\delta_{jl} + \frac{1}{2}\Delta t\, D_{jl}\right) u^n(x_l)

⇒ one would have to invert a full matrix: slow.
With the FFT, or for Galerkin, insert u(x) = \sum_k e^{ikx}\,\hat{u}_k and project the equation onto mode k, i.e. apply \int_0^{2\pi} dx\, e^{-ikx} \ldots:

\left(1 - \frac{1}{2}\Delta t\, ik\right) \hat{u}^{n+1}_k = \left(1 + \frac{1}{2}\Delta t\, ik\right) \hat{u}^n_k

\hat{u}^{n+1}_k = \frac{1 + \frac{1}{2}\Delta t\, ik}{1 - \frac{1}{2}\Delta t\, ik}\; \hat{u}^n_k
Note:
- Since the derivative operator is diagonal in Fourier space, the inversion of the operator on the l.h.s. is simple: time-stepping in Fourier space yields an explicit code, although the scheme is implicit. This is not possible for finite differences.
- With a variable wave speed one would have

  \left(1 - \frac{1}{2}\Delta t\, c(x)\,\partial_x\right) u^{n+1} = \left(1 + \frac{1}{2}\Delta t\, c(x)\,\partial_x\right) u^n

  The FFT does not lead to a diagonal form: the wavenumbers of u(x) and of c(x) couple; the projection leads to a convolution of c(x) and \partial_x u^{n+1}: expensive.
- The scheme does not get more involved in higher dimensions; e.g. for the diffusion equation in two dimensions,

  \partial_t u = \nabla^2 u,

  one gets

  \hat{u}^{n+1}_{kl} = \frac{1 - \frac{1}{2}\Delta t\,(k^2 + l^2)}{1 + \frac{1}{2}\Delta t\,(k^2 + l^2)}\; \hat{u}^n_{kl}

  That is to be compared with the case of finite differences, where implicit schemes in higher dimensions become much slower, since the bandwidth of the matrix becomes large (O(N) in two dimensions, worse yet in higher dimensions).
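The 2D Crank-Nicholson update is literally a pointwise division in Fourier space; a numpy sketch (the initial condition cos x cos 2y decays exactly like e^{-5t}, so the pure time-stepping error can be measured):

```python
import numpy as np

# Crank-Nicholson for u_t = laplace(u) in 2D: diagonal in Fourier space,
# so the "implicit" solve is a pointwise division, not a banded-matrix solve.
n = 32
k = np.fft.fftfreq(n, d=1.0 / n)
k2 = k[:, None]**2 + k[None, :]**2             # k^2 + l^2 on the 2D mode grid
dt, steps = 0.1, 10

xv = 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(xv, xv, indexing="ij")
u = np.cos(X) * np.cos(2 * Y)                  # exact solution: exp(-5 t) * u

u_hat = np.fft.fft2(u)
for _ in range(steps):
    u_hat = (1 - 0.5 * dt * k2) / (1 + 0.5 * dt * k2) * u_hat
u = np.real(np.fft.ifft2(u_hat))

exact = np.exp(-5 * dt * steps) * np.cos(X) * np.cos(2 * Y)
err = np.max(np.abs(u - exact))
print(err)   # O(dt^2) time-discretization error; space is exact
```

The spatial part is exact; the remaining error is the O(dt^2) Crank-Nicholson error.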
Note:
- one can make the scheme explicit by combining Adams-Moulton with Adams-Bashforth into a predictor-corrector: replace the unknown u^{n+1} in the integrand of (7) of the AM scheme by an estimate based on AB, which can be of lower order than the AM scheme:

  AB: predictor O(\Delta t^{n-1}); AM: corrector O(\Delta t^{n}) \;⇒\; overall O(\Delta t^{n})

- each time step requires two evaluations of the r.h.s. ⇒ not worthwhile if that is expensive
- Advantage: the scheme has the same accuracy as AB of O(\Delta t^n), but a greater range of stability with the same storage requirements.
4.4 Semi-Implicit Schemes

Often the time step is limited by instabilities due to the linear derivative terms, but not due to the nonlinear terms. Treat
- the linear derivative terms implicitly
- the nonlinear terms explicitly

Note: implicit treatment of the nonlinear terms would require a matrix inversion at each time step.

Example: Crank-Nicholson-Adams-Bashforth (CNAB)

Consider

\partial_t u = \partial_x^2 u + f(u)

\frac{u^{n+1} - u^n}{\Delta t} = \frac{1}{2}\partial_x^2 u^{n+1} + \frac{1}{2}\partial_x^2 u^n + \frac{3}{2} f(u^n) - \frac{1}{2} f(u^{n-1}) + O(\Delta t^2)

\left(1 - \frac{1}{2}\Delta t\, D^{(2)}\right) u^{n+1} = \left(1 + \frac{1}{2}\Delta t\, D^{(2)}\right) u^n + \Delta t \left(\frac{3}{2} f(u^n) - \frac{1}{2} f(u^{n-1})\right)
Three steps:
- FFT \mathcal{F} of the r.h.s.
- divide by (1 + \frac{1}{2}\Delta t\, k^2)
- inverse FFT ⇒ u^{n+1}_j

u^{n+1}_j = \mathcal{F}^{-1}\!\left[\frac{1}{1 + \frac{1}{2}\Delta t\, k^2}\left\{\left(1 - \frac{1}{2}\Delta t\, k^2\right)\mathcal{F}(u^n_i) + \Delta t\, \mathcal{F}\!\left(\frac{3}{2} f(u^n_i) - \frac{1}{2} f(u^{n-1}_i)\right)\right\}\right]

or written as

\hat{u}^{n+1}_k = \frac{1}{1 + \frac{1}{2}\Delta t\, k^2}\left\{\left(1 - \frac{1}{2}\Delta t\, k^2\right)\hat{u}^n_k + \Delta t\left(\frac{3}{2}\hat{f}_k(u^n_i) - \frac{1}{2}\hat{f}_k(u^{n-1}_i)\right)\right\}
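The CNAB update above, sketched in Python/numpy; the reaction term f(u) = u - u^3 is an assumed example, and the very first step effectively reduces to AB1 since no f(u^{n-1}) is available yet:

```python
import numpy as np

# One CNAB step in Fourier space for u_t = u_xx + f(u):
# diffusion via Crank-Nicholson, nonlinearity via 2nd-order Adams-Bashforth.
def cnab_step(u_hat, f_hat_n, f_hat_nm1, k, dt):
    rhs = (1 - 0.5 * dt * k**2) * u_hat + dt * (1.5 * f_hat_n - 0.5 * f_hat_nm1)
    return rhs / (1 + 0.5 * dt * k**2)

n = 64
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)
f = lambda u: u - u**3                   # example reaction term

u_hat = np.fft.fft(0.5 * np.sin(x))
f_prev = np.fft.fft(f(np.real(np.fft.ifft(u_hat))))   # first step: AB1-like
for _ in range(200):
    f_curr = np.fft.fft(f(np.real(np.fft.ifft(u_hat))))
    u_hat = cnab_step(u_hat, f_curr, f_prev, k, dt=0.01)
    f_prev = f_curr
u = np.real(np.fft.ifft(u_hat))
print(np.max(np.abs(u)))
```

With dt = 0.01 this is far above the explicit diffusive limit 2/N^2, illustrating the gain from treating the stiff linear term implicitly.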
4.5 Runge-Kutta Methods

Runge-Kutta methods can be considered as approximations of the integral equation

u^{n+1} = u^n + \int_{t_n}^{t_{n+1}} F(t', u(t'))\,dt'

with the approximation of F based purely on times t' \in [t_n, t_{n+1}].
Runge-Kutta 2:

Trapezoidal rule for the integral:

\int_{t_n}^{t_{n+1}} F(t', u(t'))\,dt' = \frac{1}{2}\Delta t\left(F(t_n, u^n) + F(t_{n+1}, u^{n+1})\right) + O(\Delta t^3)

Approximate u^{n+1} with forward Euler (its error contributes to the error of the overall scheme at O(\Delta t^3)):

Improved Euler method (Heun's method):

k_1 = F(t_n, u^n)
k_2 = F(t_n + \Delta t,\; u^n + \Delta t\, k_1)
u^{n+1} = u^n + \frac{1}{2}\Delta t\,(k_1 + k_2) + O(\Delta t^3)
Other version: mid-point rule ⇒ modified Euler:

u^{n+1} = u^n + \Delta t\, F\!\left(t_n + \frac{1}{2}\Delta t,\; u^n + \frac{1}{2}\Delta t\, F(t_n, u^n)\right)

Note: Runge-Kutta methods of a given order are not unique (usually there are free parameters).
General Runge-Kutta scheme:

u^{n+1} = u^n + \Delta t \sum_{l=0}^{s} \omega_l F_l

F_0 = F(t_n, u^n), \qquad F_l = F\!\left(t_n + \gamma_l \Delta t,\; u^n + \Delta t \sum_{m=0}^{l} \beta_{lm} F_m\right) \quad 1 \le l \le s

Notes:
- the scheme has s + 1 stages
- F(u) is evaluated at intermediate times t_n + \gamma_l \Delta t and at suitably chosen intermediate values of the function u
- for \beta_{ll} \ne 0 the scheme is implicit
- the coefficients \omega_l, \beta_{lm}, \gamma_l are determined by requiring the highest order of accuracy; in general this does not determine the coefficients uniquely
Runge-Kutta 4

Corresponds to Simpson's rule (\frac{1}{6}(1\; 4\; 1)):

k_1 = F(t_n, u^n)
k_2 = F(t_n + \frac{1}{2}\Delta t,\; u^n + \frac{1}{2}\Delta t\, k_1)
k_3 = F(t_n + \frac{1}{2}\Delta t,\; u^n + \frac{1}{2}\Delta t\, k_2)
k_4 = F(t_n + \Delta t,\; u^n + \Delta t\, k_3)
u^{n+1} = u^n + \frac{1}{6}\Delta t\,(k_1 + 2k_2 + 2k_3 + k_4) + O(\Delta t^5)

Note: to push the error to O(\Delta t^5), the middle term in Simpson's rule has to be split up into two different terms.
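A generic RK4 stepper with a quick order check on u' = -u (halving \Delta t should reduce the global error by roughly 2^4 = 16):

```python
import numpy as np

# Classical RK4; reduces to Simpson's rule when F depends on t only.
def rk4_step(F, t, u, dt):
    k1 = F(t, u)
    k2 = F(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = F(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = F(t + dt, u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def global_error(dt):
    u, t = 1.0, 0.0
    while t < 1.0 - 1e-12:
        u = rk4_step(lambda t, v: -v, t, u, dt)
        t += dt
    return abs(u - np.exp(-1.0))

ratio = global_error(0.1) / global_error(0.05)
print(ratio)   # close to 16, confirming 4th order
```

The error ratio directly exhibits the O(\Delta t^4) global accuracy.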
[Figure: regions of absolute stability of RK1-RK4 in the complex \lambda\Delta t plane.]
Notes:
- the stability regions expand with increasing order
- RK4 covers parts of the imaginary and of the real axis: suited for parabolic and hyperbolic problems
4.6 Operator Splitting

For the linear wave equation or the diffusion equation we have the exact solution in Fourier space,

\partial_t u = \partial_x^2 u \quad ⇒ \quad \hat{u}^n_k = \hat{u}_k(0)\, e^{-k^2 t_n}

Can we make use of that for more general problems?

For finite differences we discussed

\partial_t u = (L_1 + L_2)\, u

with the solution approximated as

u^{n+1} = e^{(L_1 + L_2)\Delta t} u^n = e^{L_1 \Delta t} e^{L_2 \Delta t} u^n + O(\Delta t^2)

This corresponds to solving

\partial_t u = L_2 u \quad \text{and then} \quad \partial_t u = L_1 u,

i.e. alternating integration of each equation over a full time step \Delta t.
Apply this to the reaction-diffusion equation

\partial_t u = \partial_x^2 u + f(u), \qquad L_1 u \equiv \partial_x^2 u, \quad L_2 u \equiv f(u)

Treat L_2 u in real space, e.g. with forward Euler:

u^*(x_j) = u^n(x_j) + \Delta t\, f(u^n(x_j))

Treat L_1 u in Fourier space:

\hat{u}^{n+1}_k = e^{-k^2 \Delta t}\, \hat{u}^*_k \qquad \text{exact!}

Written together:

\hat{u}^{n+1}_k = e^{-k^2 \Delta t}\left(\hat{u}^n_k + \Delta t\, \hat{f}_k(u^n_l)\right)
Notes:
- one could use any other suitable time-stepping scheme for the nonlinear term: higher order would be better. But: an operator-splitting error arises.
- One could improve via the symmetric (Strang) splitting:

  e^{(L_1 + L_2)\Delta t} u^n = e^{\frac{1}{2}L_1 \Delta t}\, e^{L_2 \Delta t}\, e^{\frac{1}{2}L_1 \Delta t}\, u^n + O(\Delta t^3)

  If intermediate values need not be available, the \frac{1}{2}\Delta t-steps can be combined:

  u^{n+2} = e^{\frac{1}{2}L_1\Delta t} e^{L_2\Delta t} e^{\frac{1}{2}L_1\Delta t}\; e^{\frac{1}{2}L_1\Delta t} e^{L_2\Delta t} e^{\frac{1}{2}L_1\Delta t}\, u^n + O(\Delta t^3)
         = e^{\frac{1}{2}L_1\Delta t} e^{L_2\Delta t} e^{L_1\Delta t} e^{L_2\Delta t} e^{\frac{1}{2}L_1\Delta t}\, u^n + O(\Delta t^3)

- approximate e^{L_2 \Delta t} by a second-order scheme (rather than forward Euler) to get an overall error of O(\Delta t^3)
- the time-stepping is done partly in real space and partly in Fourier space
- to get higher order one would have to push the operator-splitting error to higher order.
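The symmetric splitting for the reaction-diffusion example, sketched in numpy: half a diffusion step done exactly in Fourier space, a full second-order (Heun) reaction step, then another half diffusion step. The reaction term f(u) = u - u^3 is an assumed example:

```python
import numpy as np

# Symmetric (Strang) splitting for u_t = u_xx + f(u).
def strang_step(u, dt, k):
    # half a diffusion step, exact in Fourier space
    half = lambda v: np.real(np.fft.ifft(np.exp(-k**2 * dt / 2) * np.fft.fft(v)))
    f = lambda v: v - v**3                         # example reaction term
    u = half(u)
    u = u + 0.5 * dt * (f(u) + f(u + dt * f(u)))   # 2nd-order (Heun) reaction step
    return half(u)

n = 64
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)
u = 0.1 * np.cos(x)
for _ in range(100):
    u = strang_step(u, 0.01, k)
print(u.max())
```

Because the diffusion factor e^{-k^2 dt/2} is applied exactly, no stability limit arises from the linear term; the remaining error is the O(dt^3) per-step splitting error plus the Heun error.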
4.7 Exponential Time Differencing and Integrating Factor Scheme

Can we avoid the operator-splitting error altogether?

Consider again the reaction-diffusion equation

\partial_t u = \partial_x^2 u + f(u);

without the reaction term the equation can be integrated exactly in Fourier space:

\hat{u}^{n+1}_k = e^{-k^2 \Delta t}\, \hat{u}^n_k

Go to Fourier space (Galerkin style):

\partial_t \hat{u}_k = -k^2 \hat{u}_k + \hat{f}_k(u) \qquad (8)

Here \hat{f}_k(u) is the k-component of the Fourier transform of the nonlinear term f(u).

To assess a good approach for solving (8) it is useful to consider a yet simpler problem:

\partial_t u = \lambda u + F(t) \qquad (9)

where u is the Fourier mode in question and F plays the role of the coupling to the other Fourier modes.

We are in particular interested in efficient ways to deal with the fast modes, i.e. those with large |\lambda|, because they set the stability limit:

1. If the overall solution evolves on the fast time scale set by \lambda, accuracy requires a time step with |\lambda \Delta t| \ll 1, and an explicit scheme should be adequate.

2. If the overall solution evolves on a time scale slow compared to 1/|\lambda|, which is set by Fourier modes with smaller wavenumbers (i.e. F(t) evolves slowly in time), then one would like to take time steps with |\lambda|\Delta t = O(1) or even larger without sacrificing accuracy, i.e. one would like to be limited only by the slow evolution of F(t).
   In particular, for F = const one would like to obtain the exact solution u^*_{exact} = -F/\lambda with large time steps.
Use an integrating factor to rewrite (9) as

\partial_t\left(u\, e^{-\lambda t}\right) = e^{-\lambda t}\, F(t),

which is equivalent to

u^{n+1} = e^{\lambda \Delta t}\, u^n + e^{\lambda \Delta t} \int_0^{\Delta t} e^{-\lambda t'}\, F(t_n + t')\,dt'.

We need to approximate the integral. To leading order it is tempting to write

u^{n+1} = e^{\lambda \Delta t}\, u^n + e^{\lambda \Delta t}\, \Delta t\, F(t_n).

This yields the forward Euler implementation of the integrating-factor scheme.
For F = const this yields the fixed point

u^*_{IF}\left(1 - e^{\lambda \Delta t}\right) = \Delta t\, e^{\lambda \Delta t}\, F.

But:
for -\lambda\Delta t \gg 1 one has u^*_{IF} \to 0, independent of F, and definitely not u^*_{IF} \approx u^*_{exact} = -F/\lambda. To get a good approximation of the correct fixed point u^*_{exact} one therefore still needs |\lambda|\Delta t \ll 1!

Note: even for simple forward Euler the fixed point (u^{n+1} = u^n) would be obtained exactly for large \Delta t (disregarding stability):

u^{n+1} = u^n + \Delta t\,(\lambda u^n + F) \quad ⇒ \quad u^* = -F/\lambda

Problem: even if F evolves slowly, for large |\lambda| the integrand still evolves quickly over the integration interval: to assume the integrand is constant is a poor approximation.
Instead: assume only that F evolves slowly and integrate the exponential explicitly:

u^{n+1} = e^{\lambda \Delta t}\, u^n + e^{\lambda \Delta t}\, F(t_n)\, \frac{1}{\lambda}\left(1 - e^{-\lambda \Delta t}\right)

This yields the forward Euler implementation of the exponential time differencing (ETD) scheme,

u^{n+1} = e^{\lambda \Delta t}\, u^n + \Delta t\, F(t_n)\, \frac{e^{\lambda \Delta t} - 1}{\lambda \Delta t}

Notes:
- now, for F = const and \Delta t \to \infty, one gets the exact solution u^*_{ETD} = -F/\lambda.
- for |\lambda|\Delta t \ll 1 one gets back the usual forward Euler scheme, since (e^{\lambda\Delta t} - 1)/(\lambda\Delta t) \to 1.

For the nonlinear diffusion equation one gets for ETDFE:

\hat{u}^{n+1}_k = e^{-k^2 \Delta t}\, \hat{u}^n_k + \Delta t\, F_k(u_l(t))\, \frac{1 - e^{-k^2 \Delta t}}{k^2 \Delta t}

where in general F_k(u_l(t)) depends on all Fourier modes \hat{u}_k.
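The difference between the IF and ETD fixed points is easy to exhibit on the scalar model (9) with constant F (numpy sketch; \lambda\Delta t = -100 is deliberately far outside the |\lambda|\Delta t \ll 1 regime):

```python
import numpy as np

# ETD forward Euler vs. integrating-factor forward Euler on u' = lam*u + F,
# F constant: ETD reproduces the fixed point u* = -F/lam even for |lam|*dt >> 1.
lam, F, dt = -100.0, 1.0, 1.0          # stiff mode, large time step
E = np.exp(lam * dt)

u_etd, u_if = 0.0, 0.0
for _ in range(50):
    u_etd = E * u_etd + F * (E - 1.0) / lam   # ETD-FE
    u_if = E * u_if + E * dt * F              # IF-FE
print(u_etd, -F / lam, u_if)
```

u_etd lands on -F/lam = 0.01 essentially immediately, while u_if is driven to (the wrong value) near 0, exactly as argued above.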
For higher-order accuracy in time, use better approximations for the integral (see Cox & Matthews, J. Comp. Physics 176 (2002) 430, and Kassam & Trefethen, SIAM J. Sci. Comput. 26 (2005) 1214, for a detailed discussion of various schemes and quantitative comparisons for ODEs and PDEs; the latter paper includes two Matlab programs for Fourier and Chebyshev spectral implementations).
The 4th-order Runge-Kutta version reads (using c \equiv \lambda\Delta t; here \lambda = -k^2):

u_{1k} = \hat{u}^n_k\, E_1 + \Delta t\, F_k(u^n, t_n)\, E_2

u_{2k} = \hat{u}^n_k\, E_1 + \Delta t\, F_k(u_1, t_n + \tfrac{1}{2}\Delta t)\, E_2

u_{3k} = u_{1k}\, E_1 + \Delta t \left(2 F_k\!\left(u_2, t_n + \tfrac{1}{2}\Delta t\right) - F_k(u^n, t_n)\right) E_2

\hat{u}^{n+1}_k = \hat{u}^n_k\, E_1^2 + \Delta t\, G

G = F_k(u^n, t_n)\, E_3 + 2\left(F_k\!\left(u_1, t_n + \tfrac{1}{2}\Delta t\right) + F_k\!\left(u_2, t_n + \tfrac{1}{2}\Delta t\right)\right) E_4 + F_k(u_3, t_n + \Delta t)\, E_5 \qquad (10)
with

E_1(c) = e^{c/2}

E_2(c) = \frac{e^{c/2} - 1}{c}

E_3(c) = \frac{-4 - c + e^{c}(4 - 3c + c^2)}{c^3}

E_4(c) = \frac{2 + c + e^{c}(-2 + c)}{c^3}

E_5(c) = \frac{-4 - 3c - c^2 + e^{c}(4 - c)}{c^3}
For |c| < 0.2 the factors E_{3,4,5}(c) can become quite inaccurate due to cancellations:

E_5(c) = \frac{1}{c^3}\left(-4 - 3c - c^2 + \left(1 + c + \frac{1}{2}c^2 + \frac{1}{6}c^3 + \ldots\right)(4 - c)\right) = \frac{1}{6} + O(c)
For small values of c it is therefore better to replace E_{2,...,5} by their Taylor expansions:

E_2(c) = \frac{1}{2} + \frac{1}{8}c + \frac{1}{48}c^2 + \frac{1}{384}c^3 + \frac{1}{3840}c^4 + \frac{1}{46080}c^5 + \frac{1}{645120}c^6 + \frac{1}{10321920}c^7 + \ldots

E_3(c) = \frac{1}{6} + \frac{1}{6}c + \frac{3}{40}c^2 + \frac{1}{45}c^3 + \frac{5}{1008}c^4 + \frac{1}{1120}c^5 + \frac{7}{51840}c^6 + \frac{1}{56700}c^7 + \ldots

E_4(c) = \frac{1}{6} + \frac{1}{12}c + \frac{1}{40}c^2 + \frac{1}{180}c^3 + \frac{1}{1008}c^4 + \frac{1}{6720}c^5 + \frac{1}{51840}c^6 + \frac{1}{453600}c^7 + \ldots

E_5(c) = \frac{1}{6} + 0\cdot c - \frac{1}{120}c^2 - \frac{1}{360}c^3 - \frac{1}{1680}c^4 - \frac{1}{10080}c^5 - \frac{1}{72576}c^6 - \frac{1}{604800}c^7 - \ldots
Alternatively, one can evaluate the coefficients via complex integration using the Cauchy integral formula [7],

f(z) = \frac{1}{2\pi i}\oint_{\mathcal{C}} \frac{f(t)}{t - z}\,dt \qquad (11)

if f(z) is analytic inside the contour \mathcal{C}, which encloses z. Since the singularities of E_i(c) at c = 0 are removable, and since \mathcal{C} can be chosen to remain a finite distance away from c = 0, the Cauchy integral formula (11) can be used to evaluate E_i(c) even in the vicinity of c = 0.
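A sketch of the contour evaluation in the spirit of Kassam & Trefethen: averaging E_i over a circle around c implements (11) with the trapezoidal rule and sidesteps the cancellation at c ≈ 0 (shown here for E_5; the radius and the number of quadrature points M are arbitrary choices):

```python
import numpy as np

def E5_direct(c):
    # direct formula: suffers cancellation for small |c|
    return (-4.0 - 3.0 * c - c**2 + np.exp(c) * (4.0 - c)) / c**3

def E5_contour(c, M=32, radius=1.0):
    # mean over a circle around c = trapezoidal rule for the Cauchy integral;
    # the quadrature points never come near the removable singularity at 0
    cs = c + radius * np.exp(2j * np.pi * (np.arange(M) + 0.5) / M)
    return np.real(np.mean((-4.0 - 3.0 * cs - cs**2 + np.exp(cs) * (4.0 - cs)) / cs**3))

# At c = 0 the direct formula is 0/0, but the contour value recovers the
# removable-singularity limit 1/6.
val = E5_contour(0.0)
print(val)   # -> 0.1666..., i.e. 1/6
```

Because the integrand is analytic, the M-point trapezoidal mean converges spectrally in M, so even modest M gives the coefficient to near machine precision.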
Note:
- diffusion and any other linear terms retained in the eigenvalue of the linear operator are treated exactly
- no instability arises from the linear terms for any \Delta t: unconditionally stable
- to evaluate F_k(u_1, t_n + \frac{1}{2}\Delta t):

  u_{1k} \;\xrightarrow{\text{inverse FFT}}\; u_1(x_j) \;\xrightarrow{\text{insert into } F}\; F(u_1, t_n + \tfrac{1}{2}\Delta t) \;\xrightarrow{\text{FFT}}\; F_k(u_1, t_n + \tfrac{1}{2}\Delta t)

- if the PDE involves multiple components (e.g. u and v in a two-component reaction-diffusion system), at each stage of the RK4 scheme one needs to determine the analogous quantities u_{ik} and v_{ik} with i = 1, 2, 3 in parallel, i.e. one needs to determine both u_{1k} and v_{1k} before one can proceed to u_{2k} and v_{2k}, etc.
- large wavenumbers are strongly damped, as they should be (this is also true for operator splitting)
- compare with Crank-Nicholson (in CNAB, say):

  \hat{u}^{n+1}_k = \frac{1 - \frac{1}{2}\Delta t\, k^2}{1 + \frac{1}{2}\Delta t\, k^2}\; \hat{u}^n_k

  for large k^2 \Delta t:

  \hat{u}^{n+1}_k = -\left(1 - \frac{4}{\Delta t\, k^2} + \ldots\right)\hat{u}^n_k,

  which exhibits oscillatory behavior and slow decay.
- Note that backward Euler also damps high-wavenumber oscillations, but it is only first order:

  \hat{u}^{n+1}_k = \frac{1}{1 + \Delta t\, k^2}\, \hat{u}^n_k \approx \frac{1}{\Delta t\, k^2}\, \hat{u}^n_k \qquad \text{for } |k| \to \infty.

Note: some comments on the 4th-order integrating-factor scheme are in Appendix B.
4.8 Filtering

In some problems it is not (yet) possible to resolve all scales:
- shock formation (cf. Burgers equation last quarter)
- fluid flow at high Reynolds numbers (turbulence): energy is pumped in at low wavenumbers (e.g. by the motion of the large-scale walls), but only very high wavenumbers experience significant damping, since for low viscosity high shear is needed to get significant damping.

In these cases aliasing and Gibbs oscillations can lead to problems.

Aliasing and Nonlinearities

Nonlinearities generate high wavenumbers:

u(x)^2 = \sum_{l=-N}^{N}\sum_{k=-N}^{N} \hat{u}_l\, \hat{u}_k\, e^{i(k+l)x}

A p-th order polynomial generates wavenumbers up to pN. On the grid of 2N points not all of these wavenumbers can be represented ⇒ the Fourier interpolant I_N(u(x)) keeps only N of them: higher wavenumbers are aliased into that range.

Example:
On the grid x_j = \frac{2\pi}{2N} j, with only 2 grid points per wavelength \frac{2\pi}{q}, i.e. q = N:

u(x_j) = \cos qx_j = \cos\!\left(N \frac{2\pi}{2N} j\right) = \cos(\pi j) = (-1)^j

u(x_j)^2 = \cos^2 qx_j = (+1)^j = 1

⇒ \cos^2 qx_j is aliased to a constant on that grid.

Note: in a linear equation no aliasing arises during the simulation, since no high wavenumbers are generated (aliasing occurs only initially, when the initial condition is reduced to the discrete spatial grid).
Aliasing can lead to spectral blocking:
If dissipation occurs essentially only at the very high, unresolved wavenumbers:
- dissipation is missing
- aliased high wavenumbers feed energy into the lower, weakly damped wavenumbers
- energy piles up most noticeably at the high end of the resolved spectrum (|k| = N), because there the correct energy is smallest (relative error largest)
- the pile-up can lead to instability

[Figure: spectrum exhibiting spectral blocking; from J.P. Boyd, Chebyshev and Fourier Spectral Methods.]

If the resolution cannot be increased to the extent that the high wavenumbers are resolved, improvement can be obtained by filtering out those wavenumbers that would be aliased into the lower spectrum.
Quadratic nonlinearities lead to a doubling of wavenumbers: the interval [-q_{max}, q_{max}] is mapped into [-2q_{max}, 2q_{max}]. A wavenumber 2q with 2q > N is aliased to 2q - 2N.

Require that the mapped wavenumber interval does not alias into the original wavenumber interval:

2q_{max} - 2N \le -q_{max}, \quad \text{i.e. require} \quad q_{max} \le \frac{2}{3} N

More generally: for a p-th order nonlinearity choose

q_{max} = \frac{2}{p+1}\, N
Algorithm:

1. FFT: u_i \to \tilde{u}_k
2. take the derivatives
3. filter out the high wavenumbers: \tilde{u}_k = 0 for |k| > \frac{2}{p+1} N
4. inverse FFT: \tilde{u}_k \to u_i; this function does not contain any dangerous high wavenumbers any more
5. evaluate the nonlinearities u_i \to u_i^p
6. back to 1.

[Figure: sketch of the dealiasing cycle; from J.P. Boyd, Chebyshev and Fourier Spectral Methods, p. 212.]
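The algorithm for a quadratic nonlinearity in numpy (a sketch; the grid has n = 2N points, so modes with |k| > n/3 are zeroed, matching q_max = 2N/3):

```python
import numpy as np

# 2/3-rule: zero the highest third of the modes before evaluating a
# quadratic nonlinearity on the grid, so aliased wavenumbers never
# contaminate the retained modes.
def dealias(u_hat):
    n = u_hat.size
    k = np.fft.fftfreq(n, d=1.0 / n)
    u_hat = u_hat.copy()
    u_hat[np.abs(k) > n // 3] = 0.0
    return u_hat

def quadratic_term(u_hat):
    """Fourier coefficients of u^2 with 2/3-rule dealiasing."""
    u = np.real(np.fft.ifft(dealias(u_hat)))
    return dealias(np.fft.fft(u * u))

n = 32
x = 2 * np.pi * np.arange(n) / n
w_hat = quadratic_term(np.fft.fft(np.cos(5 * x))) / n
# cos^2(5x) = 1/2 + (1/2) cos(10x): amplitude only at k = 0 and |k| = 10
print(np.round(np.abs(w_hat[[0, 10]]), 6))   # -> [0.5  0.25]
```

The same dealias call would wrap the nonlinear evaluations in the Burgers examples above.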
Orszag's 2/3-rule:

For a quadratic nonlinearity, set the highest N/3 Fourier modes to 0 in each time step, just before the back-transformation to the spatial grid. When evaluating the quadratic nonlinearity (which is done in real space):
- the good wavenumbers [0, \frac{2}{3}N] contained in u(x) generate the wavenumbers [0, \frac{4}{3}N], of which the interval [N, \frac{4}{3}N] will be aliased into [-N, -\frac{2}{3}N] and therefore will contaminate only the highest N/3 modes (analogously for [-\frac{2}{3}N, 0])
- the bad, highest N/3 modes [\frac{2}{3}N, N] would generate wavenumbers [\frac{4}{3}N, 2N], which are aliased into [-\frac{2}{3}N, 0] and would contaminate the good wavenumbers
- setting the highest N/3 modes to 0 avoids the contamination of the good wavenumbers; there is no need to worry about contaminating the high wavenumbers that are later set to 0 anyway.
Alternative view:
For a quadratic nonlinearity, to represent the wavenumbers [-N, N] without aliasing one needs \frac{3}{2}\cdot 2N grid points:
one wants 3N grid points for the integrals ⇒ before transforming the Fourier modes [-N, N] back to real space, one needs to pad them with zeroes to the range [-\frac{3}{2}N, \frac{3}{2}N].

Thus: to avoid aliasing for a quadratic nonlinearity one needs 3 grid points per wavelength:

\cos qx_j = \cos\!\left(N \frac{2\pi}{3N} j\right) = \cos\!\left(2\pi \frac{j}{3}\right)
Notes:
- for higher nonlinearities larger portions of the spectrum have to be set to 0.
- instead of a step-function filter one can use a smooth filter, e.g.

  F(k) = \begin{cases} 1 & |k| \le k_0 \;\left(= \frac{2}{3}N\right) \\ e^{-\alpha(|k|^n - |k_0|^n)} & |k| > k_0 \end{cases} \qquad (12)

  with n = 2, 4.
- the 2/3-rule (and its smooth version) makes the pseudo-spectral method more similar to the projection of the Galerkin approach
- it does not remedy the missing damping of the high wavenumbers, but it reduces the (incorrect) energy pumped into the weakly damped wavenumbers.
Gibbs Oscillations

Oscillations due to insufficient resolution can contaminate the solution even away from the sharp step/discontinuity: this can be improved by smoothing.

Approximate the derivatives, since they are more sensitive to the oscillations (the function itself does not show any oscillations on the grid):

\partial_x u \approx \sum_{k=-N}^{N} ik\, \tilde{u}_k\, e^{ikx} \quad \text{filtered to} \quad \sum_{k=-N}^{N} ik\, F(k)\, \tilde{u}_k\, e^{ikx}

with F(k) as in (12).

Note:
- the result is different from simply reducing the number of modes, since the number of grid points for the transformation is still high
- the filter could also smooth away relevant oscillations ⇒ loss of important features of the solution; e.g. in the interaction of localized wave pulses, the oscillatory tails of the pulses determine the interaction between the pulses: smoothing would kill the interaction.
Notes:
- it is always better to resolve the solution
- filtering and smoothing make no distinction between numerical artifacts and physical features
- shocks would better be treated with an adaptive grid
5 Chebyshev Polynomials

Goal: approximate functions that are not periodic.

5.1 Cosine Series and Chebyshev Expansion

Consider h(\theta) on 0 \le \theta \le \pi.
Extend it to [0, 2\pi] to generate a periodic function by reflection about \theta = \pi:

g(\theta) = \begin{cases} h(\theta) & 0 \le \theta \le \pi \\ h(2\pi - \theta) & \pi \le \theta \le 2\pi \end{cases}

Then

g(\theta) = \sum_{k=-\infty}^{\infty} \hat{g}_k\, e^{ik\theta} = \sum_{k=-\infty}^{\infty} \hat{g}_k\, (\cos k\theta + i \sin k\theta)

Reflection symmetry: the sine terms drop out:

g(\theta) = \sum_{k=-\infty}^{\infty} \hat{g}_k \cos k\theta = \sum_{k=0}^{\infty} \tilde{g}_k \cos k\theta

with

\tilde{g}_k = \hat{g}_k \;\text{ for } k = 0, \qquad \tilde{g}_k = 2\hat{g}_k \;\text{ for } k > 0

\hat{g}_k = \frac{1}{2\pi}\int_0^{2\pi} e^{-ik\theta}\, g(\theta)\,d\theta = \frac{1}{\pi}\int_0^{\pi} \cos k\theta\; g(\theta)\,d\theta \quad \text{(reflection symmetry)}

Write this as

\tilde{g}_k = \frac{1}{\pi}\frac{2}{c_k}\int_0^{\pi} \cos k\theta\; g(\theta)\,d\theta \qquad \text{with } c_k = \begin{cases} 2 & \text{for } k = 0 \\ 1 & \text{for } k > 0 \end{cases}
This is the cosine transform.

Notes:
- The convergence of the cosine series depends on the odd derivatives at \theta = 0 and \theta = \pi.
- If \frac{dg}{d\theta} \ne 0 at \theta = 0 or \theta = \pi, then \tilde{g}_k = O(k^{-2}) even if the function is perfectly smooth in (0, \pi):

  \tilde{g}_k = \frac{2}{\pi c_k}\int_0^{\pi} \cos k\theta\; g(\theta)\,d\theta
  \;\overset{\text{i.b.p.}}{=}\; \frac{2}{\pi c_k}\frac{1}{k}\Big[\sin k\theta\; g(\theta)\Big]_0^{\pi} - \frac{2}{\pi c_k}\frac{1}{k}\int_0^{\pi} \sin k\theta\; \frac{dg}{d\theta}\,d\theta
  \;\overset{\text{i.b.p.}}{=}\; \frac{2}{\pi c_k}\frac{1}{k^2}\Big[\cos k\theta\; \frac{dg}{d\theta}\Big]_0^{\pi} - \frac{2}{\pi c_k}\frac{1}{k^2}\int_0^{\pi} \cos k\theta\; \frac{d^2 g}{d\theta^2}\,d\theta

  The boundary terms vanish for all k only if

  g'(0) = 0 = g'(\pi).

  Since \cos k\pi = (-1)^k, non-zero slopes at the endpoints cannot cancel for all k.
- In general, only the odd derivatives of g(\theta) contribute to the boundary terms:

  \frac{1}{k^{l+1}}\Big[\cos k\theta\; \frac{d^l g}{d\theta^l}\Big]_0^{\pi} \qquad \text{for } l \text{ odd}

Thus: for general boundary conditions the Fourier (= cosine) series converges badly: Gibbs phenomenon.
5.2 Chebyshev Expansion
To make the derivative of the function effectively vanish at the boundaries, stretch the coordinates at the boundaries infinitely strongly. This can be achieved by parametrizing $x$ using the angle on a circle:

Consider $f(x)$ on $-1\le x\le +1$ and transform to $0\le\theta\le\pi$ using
$$x = \cos\theta, \qquad g(\theta) = f(\cos\theta).$$
The function is now parametrized by $\theta$ instead of $x$. Consider the Fourier series for $g(\theta)$:
$$g'(\theta) = -f'(\cos\theta)\,\sin\theta \quad\Rightarrow\quad \frac{dg}{d\theta} = 0 \text{ at } \theta = 0,\pi$$
Generally: all odd derivatives of $g(\theta)$ vanish at $\theta = 0$ and $\theta = \pi$.
Proof: $\cos\theta$ is even about $\theta = 0$ and about $\theta = \pi$ $\Rightarrow$ $f(\cos\theta)$ is also even about those points $\Rightarrow$ all odd derivatives vanish at $\theta = 0,\pi$.
Thus: the convergence of the approximation to $g(\theta)$ by a cosine series does not depend on the boundary conditions on $f(x)$:
$$f(x) = g(\theta) = \sum_{k=0}^{\infty}\tilde g_k\cos k\theta \qquad\text{(the extension of } g \text{ to } 2\pi \text{ is even)}$$
$$= \sum_{k=0}^{\infty}\tilde g_k\cos(k\arccos x)$$
Introduce the Chebyshev polynomials
$$T_k(x) = \cos(k\arccos x) = \cos k\theta$$
$$f(x) = \sum_{k=0}^{\infty}\hat f_k\,T_k(x)$$
Properties of Chebyshev Polynomials
- $T_k(x)$ is a $k$-th order polynomial; show recursively:
$$T_0(x) = 1 \qquad T_1(x) = x$$
$$T_{n+1}(x) = \cos\big((n+1)\arccos x\big) = \cos\big((n+1)\theta\big)$$
Trig identities:
$$\cos\big((n+1)\theta\big) = \cos n\theta\cos\theta - \sin n\theta\sin\theta$$
$$\cos\big((n-1)\theta\big) = \cos n\theta\cos\theta + \sin n\theta\sin\theta$$
Cancel $\sin n\theta\sin\theta$ by adding and use $\cos\theta = T_1(x) = x$:
$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$$
Note: the recursion relation is useful for the computation of $T_n(x)$.
- $T_n(x)$ is even for $n$ even, odd otherwise.
- In $T_n(x) = \sum_j a_j x^j$ the coefficients $a_j$ have alternating signs.
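The recursion and the trigonometric definition can be checked against each other numerically. A minimal sketch (NumPy is an assumption here; the notes themselves use Matlab):

```python
import numpy as np

def chebyshev_T(n, x):
    """T_n(x) via the three-term recursion T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t_cur = np.ones_like(x), np.asarray(x, dtype=float)  # T_0, T_1
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

# agreement with the trigonometric definition cos(n arccos x)
x = np.linspace(-1.0, 1.0, 201)
for n in range(8):
    assert np.allclose(chebyshev_T(n, x), np.cos(n * np.arccos(x)))
```

The recursion avoids the `arccos` evaluation entirely, which is why it is the standard way to evaluate $T_n(x)$.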
The expansion coefficients are given by
$$\hat f_k = \tilde g_k = \frac{1}{\pi}\frac{2}{c_k}\int_0^\pi g(\theta)\cos k\theta\,d\theta$$
Rewrite in terms of $x$:
$$\theta = \arccos x \qquad d\theta = -\frac{1}{\sqrt{1-x^2}}\,dx$$
$$\hat f_k = \frac{2}{\pi c_k}\int_{-1}^{1}f(x)\,T_k(x)\,\frac{1}{\sqrt{1-x^2}}\,dx \qquad c_k = \begin{cases}2 & k = 0\\ 1 & k > 0\end{cases}$$
The convergence of $f(x)$ in terms of the $T_k(x)$ is the same as that of $g(\theta)$ in terms of the cosine series. In particular, the boundary values are irrelevant (replace $x$ by $\cos\theta$ in $f(x)$).
The Chebyshev polynomials are orthogonal in the weighted scalar product
$$\langle T_k, T_l\rangle \equiv \int_{-1}^{1}T_k(x)T_l(x)\,\frac{1}{\sqrt{1-x^2}}\,dx = c_k\,\frac{\pi}{2}\,\delta_{kl}$$
The weight $\big(\sqrt{1-x^2}\big)^{-1}$ is singular, but
$$\int_{-1}^{1}\frac{1}{\sqrt{1-x^2}}\,dx = \pi$$
is finite.
Derivatives of $T_k(x)$:
$\frac{d}{dx}$ is not diagonal in the basis of the $T_k(x)$,
$$\frac{d}{dx}T_k(x) \ne \lambda\,T_k(x);$$
in particular, the order of the polynomial changes upon differentiation.
Considering $\frac{d}{d\theta}\cos\big((k\mp1)\theta\big)$ one gets
$$\frac{d}{dx}T_{k\pm1}(x) = \frac{d}{d\theta}\cos\big((k\pm1)\theta\big)\,\frac{d\theta}{dx} = (k\pm1)\,\frac{\sin\big((k\pm1)\theta\big)}{\sin\theta}$$
$$\frac{1}{k+1}\frac{d}{dx}T_{k+1}(x) - \frac{1}{k-1}\frac{d}{dx}T_{k-1}(x) = \frac{1}{\sin\theta}\big(\sin k\theta\cos\theta + \cos k\theta\sin\theta - \sin k\theta\cos\theta + \cos k\theta\sin\theta\big)$$
thus
$$2\,T_k(x) = \frac{1}{k+1}\frac{d}{dx}T_{k+1}(x) - \frac{1}{k-1}\frac{d}{dx}T_{k-1}(x)$$
Thus: differentiation is more difficult than for Fourier modes.
Zeroes of $T_k(x)$:
$$T_k(x) = \cos(k\arccos x) = \cos k\theta$$
$T_k(x)$ has $k$ zeroes in $[-1,1]$:
$$k\theta_l = (2l-1)\frac{\pi}{2} \quad l = 1,\dots,k \qquad\Rightarrow\qquad x_l = \cos\frac{2l-1}{2k}\pi$$
The zeroes cluster near the boundaries.
Extrema of $T_k(x)$ (Chebyshev points):
$$k\theta_l = l\pi \qquad x_l = \cos\frac{l\pi}{k} \quad l = 0,\dots,k \qquad T_k(x_l) = (-1)^l$$
- The extrema are also clustered at the boundary.
- A Chebyshev polynomial looks like a cosine wave wrapped around a cylinder and viewed from the side.
- The transformation $\theta = \arccos x$ places more points close to the boundary: a small neighborhood $dx$ is blown up in $d\theta$,
$$x = \cos\theta \qquad d\theta = -\frac{1}{\sin\theta}\,dx \to \infty \quad\text{for }\theta\to 0,\pi$$
- $\frac{df}{d\theta}\to 0$: all odd derivatives vanish at the boundary: no Gibbs phenomenon for non-periodic functions.
The understanding of the properties of functions is often aided by knowing what eigenvalue problem they solve: what is the eigenvalue problem that has the $T_k(x)$ as solutions?
$$T_k(x) = \cos k\theta \qquad \frac{d^2}{d\theta^2}\cos k\theta = -k^2\cos k\theta$$
Rewrite in terms of $x = \cos\theta$:
$$\frac{d}{d\theta} = -\sin\theta\,\frac{d}{dx} = -\sqrt{1-x^2}\,\frac{d}{dx}$$
thus $T_k(x)$ satisfies the Sturm-Liouville problem
$$\sqrt{1-x^2}\,\frac{d}{dx}\left(\sqrt{1-x^2}\,\frac{d}{dx}T_k(x)\right) + k^2\,T_k(x) = 0$$
with boundary conditions: $T_k(x)$ bounded at $x = \pm1$.
Note:
- The Sturm-Liouville problem is singular: the coefficient of the highest derivative vanishes at the boundary $\Rightarrow$ no boundary values are specified, only boundedness.
- The singularity is the origin of the good boundary resolution (no Gibbs). The Fourier series is the solution of a regular Sturm-Liouville problem.
6 Chebyshev Approximation
Approximate $f(x)$ on $a\le x\le b$ using Chebyshev polynomials.
Again, depending on the evaluation of the integrals:
- Galerkin expansion
- Pseudospectral expansion
6.1 Galerkin Approximation
$$P_N u(x) = \sum_{k=0}^{N}\hat u_k\,T_k(x)$$
with
$$\hat u_k = \frac{2}{\pi}\frac{1}{c_k}\int_{-1}^{+1}\frac{1}{\sqrt{1-x^2}}\,u(x)\,T_k(x)\,dx$$
Note:
- Need to transform first from the interval $a\le t\le b$ to $-1\le x\le+1$ using
$$x = \frac{2t-(a+b)}{b-a}$$
- The transformation $\theta = \arccos x$ showed
$$\hat u_k = O(k^{-r}) \quad\text{if } u\in C^{r-1} \text{ and } \partial_x^r u\in L^1,$$
i.e. if the $r$-th derivative is still integrable (it may be a $\delta$-function).
Show this directly in $x$:
$$\frac{\pi c_k}{2}\hat u_k = \int\frac{1}{\sqrt{1-x^2}}\,u(x)\,T_k(x)\,dx$$
Using $-k^2 T_k(x) = \sqrt{1-x^2}\,\frac{d}{dx}\big(\sqrt{1-x^2}\,\frac{d}{dx}T_k\big)$:
$$\frac{\pi c_k}{2}\hat u_k = -\frac{1}{k^2}\int\frac{1}{\sqrt{1-x^2}}\,u(x)\,\sqrt{1-x^2}\,\frac{d}{dx}\left(\sqrt{1-x^2}\,\frac{d}{dx}T_k(x)\right)dx$$
$$= -\frac{1}{k^2}\Big[u(x)\sqrt{1-x^2}\,\frac{d}{dx}T_k\Big]_{-1}^{+1} + \frac{1}{k^2}\int_{-1}^{+1}\frac{du}{dx}\,\sqrt{1-x^2}\,\frac{d}{dx}T_k(x)\,dx$$
The boundary term vanishes since $u(x)$ is bounded. Integrating by parts once more:
$$= \frac{1}{k^2}\left\{\Big[\frac{du}{dx}\sqrt{1-x^2}\,T_k(x)\Big]_{-1}^{+1} - \int_{-1}^{+1}\frac{d}{dx}\left(\frac{du}{dx}\sqrt{1-x^2}\right)T_k(x)\,dx\right\}$$
Note:
- Even without the 2nd integration by parts it seems that $\hat u_k = O(k^{-2})$;
- it seems that even for $\frac{d^2u}{dx^2}\notin L^1$ one gets $\hat u_k = O(k^{-2})$.
But:
$$\frac{d}{dx}T_k(x) = \frac{d}{dx}\cos(k\arccos x) = O(k)$$
so for $\frac{du}{dx}\in L^1$ and $\frac{d^2u}{dx^2}\notin L^1$:
$$\hat u_k = O\!\left(\frac{1}{k^2}\,\frac{d}{dx}T_k(x)\right) = O\!\left(\frac{1}{k}\right)$$
Again, the convergence of the Chebyshev approximation can be shown to satisfy
$$\|P_N u(x) - u(x)\| \le \frac{C}{N^q}\,\|u\|_q$$
with $\|u\|$ being the usual $L^2$ norm (with weight $\big(\sqrt{1-x^2}\big)^{-1}$) and $\|u\|_q$ being the $q$-th Sobolev norm
$$\|u\|_q^2 = \|u\|^2 + \Big\|\frac{du}{dx}\Big\|^2 + \dots + \Big\|\frac{d^q u}{dx^q}\Big\|^2$$
For derivatives one gets
$$\Big\|\frac{d^r u}{dx^r} - \frac{d^r}{dx^r}P_N u\Big\| \le \|u - P_N u\|_r \le \frac{C}{N^{\frac{1}{2}+q-2r}}\,\|u\|_q$$
Note:
- For each derivative the convergence decreases by two powers of $N$; in the Fourier expansion each derivative lowered the convergence only by a single power of $N$.
- For $C^\infty$ functions one still has spectral accuracy, i.e. exponential convergence.
- The estimate for the $r$-th derivative is not precisely for the derivative but for the $r$-th Sobolev norm (cf. [1] for details).
- Rule of thumb: for each wavelength of a periodic function one needs at least 3 Chebyshev polynomials to get a reasonable approximation.
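The algebraic decay rates can be observed numerically. The sketch below (an illustration, not from the notes) computes the Galerkin coefficients of $u(x) = |x|$, for which $u\in C^0$ with $u''\notin L^1$, by evaluating the projection integral in $\theta$ with a fine trapezoidal rule; the even coefficients should then decay like $k^{-2}$, i.e. doubling $k$ divides them by roughly 4:

```python
import numpy as np

# u_k = 2/(pi c_k) * int_0^pi u(cos t) cos(k t) dt, approximated with a
# fine composite trapezoidal rule (the integrand is continuous in t)
M = 4000
t = np.linspace(0.0, np.pi, M + 1)
w = np.full(M + 1, np.pi / M)
w[0] = w[-1] = np.pi / (2 * M)
u = np.abs(np.cos(t))                       # u(x) = |x| with x = cos t

def coeff(k):
    ck = 2.0 if k == 0 else 1.0
    return (2.0 / (np.pi * ck)) * np.sum(w * u * np.cos(k * t))

# even coefficients decay like k^{-2}: u_8 / u_16 should be close to 4
r = coeff(8) / coeff(16)
assert 3.0 < abs(r) < 5.0
# odd coefficients vanish since |x| is even
assert abs(coeff(7)) < 1e-4
```

For the known expansion $|x| = \frac{2}{\pi} - \frac{4}{\pi}\sum_{k\ge1}(-1)^k T_{2k}/(4k^2-1)$ the ratio is $255/63\approx4.05$, consistent with the assertion.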
6.2 Pseudo-Spectral Approximation
For the Galerkin approximation the projection integral
$$\hat u_k = \frac{2}{\pi c_k}\int_0^\pi u(\cos\theta)\cos k\theta\,d\theta$$
has to be calculated exactly (e.g. analytically).
For the pseudospectral approximation the integral is calculated based on a finite number of collocation points.
Strategy: find the most accurate integration formula for the functions in question.
Here: $u(\cos\theta)$ is even in $\theta$ $\Rightarrow$ $u(\cos\theta)\cos k\theta$ has an expansion in $\cos n\theta$ $\Rightarrow$ we need to consider only $\cos n\theta$ when discussing the integration method.
Analytically we have
$$\int_0^\pi\cos n\theta\,d\theta = \pi\,\delta_{n0}$$
Similar to the Fourier case: use the trapezoidal rule
$$\int_0^\pi g(\theta)\,d\theta \approx \sum_{j=0}^{N}g\Big(\frac{\pi j}{N}\Big)\,\frac{\pi}{N\bar c_j} \qquad\text{with}\quad \bar c_j = \begin{cases}2 & j = 0,N\\ 1 & \text{otherwise}\end{cases}$$
Show: the trapezoidal rule is exact for $\cos l\theta$, $l = 0,\dots,2N-1$:
1. $l = 0$:
$$\int d\theta \to \sum_{j=0}^{N}\frac{\pi}{N\bar c_j} = \frac{\pi}{2N} + (N-1)\frac{\pi}{N} + \frac{\pi}{2N} = \pi$$
2. $l$ even, $l\ne 0$:
$$\cos l\theta_j = \frac{1}{2}\big(e^{il\theta_j}+e^{-il\theta_j}\big) \qquad\text{with } \theta_j = \frac{\pi}{N}j$$
$$\sum_{j=0}^{N}\frac{1}{\bar c_j}e^{il\frac{\pi}{N}j} \underset{e^{il\pi}=e^0\text{ for }l\text{ even}}{=} \sum_{j=1}^{N}\Big(e^{il\frac{\pi}{N}}\Big)^j = e^{il\frac{\pi}{N}}\,\frac{1-e^{il\pi}}{1-e^{il\frac{\pi}{N}}} = 0 \qquad\text{using }\sum_{j=1}^{N}q^j = q\,\frac{1-q^N}{1-q}$$
Note: for $l = 2N$ the denominator vanishes; indeed
$$\sum_{j}\frac{1}{\bar c_j}\cos\Big(2N\frac{\pi}{N}j\Big) = \sum_j\frac{1}{\bar c_j} = N \ne 0 \quad\Rightarrow\quad\text{the trapezoidal rule is not exact for } l = 2N.$$
3. $l$ odd:
$\cos l\theta$ is odd about $\theta = \frac{\pi}{2}$, i.e. $\cos l\theta_j = -\cos l\theta_{N-j}$:
$$\cos l\theta_{N-j} = \cos\Big(l\pi - l\frac{\pi}{N}j\Big) = -\cos\Big(l\frac{\pi}{N}j\Big)$$
$$\Rightarrow\qquad \sum_{j=0}^{N}\frac{1}{\bar c_j}\cos l\theta_j = 0$$
Transform to the $x$-coordinates:
$$\int_{-1}^{1}\frac{p(x)}{\sqrt{1-x^2}}\,dx = \int_0^\pi p(\cos\theta)\,d\theta \approx \sum_{j=0}^{N}p\Big(\cos\frac{\pi j}{N}\Big)\,\frac{\pi}{N\bar c_j}$$
Note:
- This can also be viewed as a Gauss-Lobatto integration
$$\int_{-1}^{1}p(x)\,w(x)\,dx = \sum_{j=0}^{N}p(x_j)\,w_j$$
with points $x_j = \cos\frac{\pi j}{N}$ and weights $w_j = \frac{\pi}{N\bar c_j}$.
- Gauss-Lobatto integration is exact for polynomials up to degree $2N-1$: polynomials of degree $2N-1$ have $2N$ coefficients, and there are $2N$ parameters to choose, $w_j$ for $j = 0,\dots,N$ and $x_j$ for $j = 1,\dots,N-1$, since the endpoints $x = \mp1$ are fixed.
- The interior $x_j$ are the roots of a polynomial $q(x) = p_{N+1}(x) + a\,p_N(x) + b\,p_{N-1}(x)$ with $a$ and $b$ chosen such that $q(\pm1) = 0$.
- Note: for the scalar product one needs the integral to be exact up to order $2N$ since each factor can be an $N$-th order polynomial $\Rightarrow$ see (13) below.
Summarizing:
- The pseudo-spectral coefficients are given by
$$\tilde u_k = \frac{2}{N\bar c_k}\sum_{j=0}^{N}u(x_j)\,T_k(x_j)\,\frac{1}{\bar c_j}$$
with
$$\bar c_i = \begin{cases}2 & i = 0,N\\ 1 & 1\le i\le N-1\end{cases}$$
- Again the highest mode resolvable on the grid is given by
$$T_N(x_j) = \cos\Big(N\arccos\big(\cos\frac{\pi}{N}j\big)\Big) = \cos\pi j = (-1)^j$$
- Remember the origin of the factors of 2:
  - $\bar c_N = 2$ as in the Fourier expansion in $\theta$;
  - $c_0 = 2$ since only for $k\ne 0$ do two exponentials $e^{\pm ikx}$ contribute to $\cos kx$.
Note:
- We need not distinguish between $c_k$ and $\bar c_j$ any more: from now on $c_j\equiv\bar c_j$.
Notes:
- The transformation can be written as a matrix multiplication
$$\tilde u_k = \sum_{j=0}^{N}C_{kj}\,u(x_j)$$
with
$$C_{kj} = \frac{2}{N c_k c_j}\,T_k(x_j) = \frac{2}{N c_k c_j}\cos\Big(k\arccos\big(\cos\frac{\pi}{N}j\big)\Big) = \frac{2}{Nc_kc_j}\cos\Big(\frac{\pi kj}{N}\Big)$$
- The inverse transformation is
$$u(x_j) = \sum_{k=0}^{N}T_k(x_j)\,\tilde u_k = \sum_{k=0}^{N}\big(C^{-1}\big)_{jk}\tilde u_k \qquad\text{with}\quad \big(C^{-1}\big)_{jk} = T_k(x_j) = \cos\frac{\pi jk}{N}$$
- The transformation is seemingly $O(N^2)$: but there are again fast transforms (see later).
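The transform pair can be checked as matrices. A small sketch (illustrative, using NumPy) builds $C$ and $C^{-1}$ and verifies that they are indeed inverses, which encodes the discrete orthogonality relation:

```python
import numpy as np

N = 10
j = np.arange(N + 1)
c = np.ones(N + 1)
c[0] = c[N] = 2.0

k = j[:, None]          # row index of C: mode number k
jj = j[None, :]         # column index of C: grid index j

# forward transform C_{kj} = 2/(N c_k c_j) cos(pi k j / N)
C = 2.0 / (N * c[:, None] * c[None, :]) * np.cos(np.pi * k * jj / N)
# inverse transform (C^{-1})_{jk} = T_k(x_j) = cos(pi j k / N)
Cinv = np.cos(np.pi * jj * k / N)

assert np.allclose(Cinv @ C, np.eye(N + 1))
```

That `Cinv @ C` is the identity is exactly the statement that the Gauss-Lobatto quadrature underlying $C$ is exact for the products $T_l T_k$ with $l + k \le 2N - 1$ (plus the special case $l = k = N$).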
- Discrete orthogonality:
$$\sum_{j=0}^{N}T_l(x_j)\,T_k(x_j)\,\frac{1}{c_j} = \frac{N}{2}\,c_l\,\delta_{lk}$$
since for $l+k\le 2N-1$ the integration is exact:
$$\sum_{j=0}^{N}T_l(x_j)T_k(x_j)\,w_j = \int T_l(x)T_k(x)\,\frac{1}{\sqrt{1-x^2}}\,dx = c_k\,\frac{\pi}{2}\,\delta_{lk}$$
(note: $w_j = \frac{\pi}{c_j N}$).
For $l+k = 2N$: since $l,k\le N$ one has $l = N = k$ and $T_N(x_j) = (-1)^j$, so
$$\sum_{j=0}^{N}T_N(x_j)T_N(x_j)\,\frac{1}{c_j} = N \tag{13}$$
although $T_N^2$ is not a constant (only on the grid).
The pseudospectral approximant interpolates the function on the grid:
$$I_N u(x_l) = \sum_{k=0}^{N}\tilde u_k\,T_k(x_l) = \sum_{k=0}^{N}\sum_{j=0}^{N}\frac{2}{Nc_kc_j}\,u(x_j)\,T_k(x_j)\,T_k(x_l)$$
Use $T_k(x_j) = \cos\big(k\arccos x_j\big) = \cos\frac{\pi kj}{N} = T_j(x_k)$ and the orthogonality:
$$I_N u(x_l) = \sum_{j=0}^{N}\frac{2}{Nc_j}\,u(x_j)\sum_{k=0}^{N}\frac{1}{c_k}\,T_j(x_k)\,T_l(x_k) = \sum_{j=0}^{N}u(x_j)\,\frac{c_l}{c_j}\,\delta_{jl} = u(x_l)$$
Aliasing:
As with Fourier modes, the pseudospectral approximation has aliasing errors. In Fourier we have aliasing from $2N+r$ to $r$ and from $-2N+r$ to $r$. The mode $-2N+r$ is also contained in the Chebyshev mode $\cos\big((2N-r)\theta\big)$. Therefore $2N-r$ also aliases into $r$.
Consider $T_{2mN\pm r}(x)$ on the grid $x_j = \cos\frac{\pi j}{N}$:
$$T_{2mN\pm r}(x_j) = \cos\Big((2mN\pm r)\arccos\big(\cos\frac{\pi j}{N}\big)\Big) = \cos\Big((2mN\pm r)\frac{\pi j}{N}\Big)$$
$$= \cos 2m\pi j\,\cos\frac{\pi rj}{N} \mp \underbrace{\sin 2m\pi j}_{0}\,\sin\frac{\pi rj}{N} = \cos\frac{\pi rj}{N}$$
Thus: $T_{\pm r+2mN}$ is aliased to $T_r(x)$ on the grid.
The coefficients of $T_k$ are determined by all contributions that look like $T_k$ on the grid:
$$\tilde u_k = \hat u_k + \sum_{m=1}^{\infty}\big(\hat u_{2mN-k} + \hat u_{2mN+k}\big)$$
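The aliasing relation is exact on the grid and easy to verify (illustrative sketch):

```python
import numpy as np

N = 8
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)

def T(n, x):
    # clip guards against round-off pushing cos values slightly outside [-1, 1]
    return np.cos(n * np.arccos(np.clip(x, -1.0, 1.0)))

# T_{2N - r} and T_{2N + r} are indistinguishable from T_r on the grid
for r in range(N + 1):
    assert np.allclose(T(2 * N - r, x), T(r, x))
    assert np.allclose(T(2 * N + r, x), T(r, x))
```

Away from the grid points the three polynomials are of course different; the coincidence holds only at the $x_j$.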
6.2.1 Implementation of Fast Transform
The $\tilde u_k$ can be obtained using the FFT (for $u(x)$ real).
Extend $u(\cos\theta)$ from $[0,\pi]$ to $[0,2\pi]$ in $\theta$-space: the extended $f(\cos\theta)$ is periodic in $\theta$ $\Rightarrow$ FFT.
Extension:
$$u_j = \begin{cases}u(x_j) & 0\le j\le N\\ u(x_{2N-j}) & N+1\le j\le 2N\end{cases}$$
Note:
- In Matlab the extension can be done easily using the command FLIPDIM.
The coefficients are given by
$$\tilde u_k = \frac{2}{Nc_k}\sum_{j=0}^{N}u(x_j)\,T_k(x_j)\,\frac{1}{c_j} = \frac{2}{Nc_k}\sum_{j=0}^{N}u(x_j)\cos\Big(k\frac{\pi j}{N}\Big)\frac{1}{c_j} \tag{14}$$
Rewrite the sum in terms of the extension (using that $\cos$ and $u$ are even with respect to $\theta = 0,\pi$):
$$\sum_{j=1}^{N-1}u_j\cos\frac{\pi jk}{N} \underset{j = 2N-r,\; r = 2N-j}{=} \sum_{r=N+1}^{2N-1}\underbrace{u_{2N-r}}_{u_r}\cos\Big(\frac{\pi k}{N}(2N-r)\Big) = \sum_{r=N+1}^{2N-1}u_r\cos\frac{\pi kr}{N}$$
thus, considering the factor $1/c_j$ in (14),
$$\tilde u_k = \frac{2}{Nc_k}\,\frac{1}{2}\Big(u_0\cos 0 + u_N\cos k\pi + 2\sum_{j=1}^{N-1}u_j\cos\frac{\pi jk}{N}\Big)$$
$$= \frac{2}{Nc_k}\,\frac{1}{2}\Big(u_0\cos 0 + u_N\cos k\pi + \sum_{j=1}^{N-1}u_j\cos\frac{\pi jk}{N} + \sum_{j=N+1}^{2N-1}u_j\cos\frac{\pi jk}{N}\Big)$$
$$= \frac{1}{Nc_k}\sum_{j=0}^{2N-1}u_j\cos\frac{\pi jk}{N} = \frac{1}{Nc_k}\,\mathrm{Re}\underbrace{\Big(\sum_{j=0}^{2N-1}u_j\,e^{-i\frac{\pi jk}{N}}\Big)}_{\text{FFT}}$$
Notes:
- Here the ordering of the grid points is $x = \cos\theta$, therefore $u_0 = u(+1)$ and $u_N = u(-1)$.
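A minimal sketch of the FFT-based transform (NumPy's `fft` is used as an assumed stand-in for the FFT discussed here), checking it against the direct sum (14) and the interpolation property:

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

N = 8
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)            # cos ordering: x_0 = +1, x_N = -1
u = np.exp(x) * np.sin(2 * x)        # arbitrary smooth test function

c = np.ones(N + 1)
c[0] = c[N] = 2.0

# direct sum (14): u~_k = 2/(N c_k) sum_j u(x_j) cos(pi j k / N) / c_j
uk_direct = np.array([
    (2.0 / (N * c[k])) * np.sum(u * np.cos(np.pi * j * k / N) / c)
    for k in range(N + 1)
])

# FFT of the even extension u_j = u_{2N-j} (length 2N)
ue = np.concatenate([u, u[-2:0:-1]])
uk_fft = np.real(np.fft.fft(ue))[: N + 1] / (N * c)

assert np.allclose(uk_direct, uk_fft)
# interpolation property: sum_k u~_k T_k(x_j) = u(x_j)
assert np.allclose(Ch.chebval(x, uk_fft), u)
```

`np.fft.fft` computes $\sum_j u_j e^{-2\pi i jk/(2N)} = \sum_j u_j e^{-i\pi jk/N}$, matching the sign convention used above.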
Reorder:
$$z_j = \cos\Big(\pi - \frac{\pi j}{N}\Big), \qquad\text{then } z_0 = -1,\quad z_N = +1$$
$$T_k(z_j) = \cos\Big(k\arccos\cos\big(\pi-\frac{\pi j}{N}\big)\Big) = \cos\Big(k(N-j)\frac{\pi}{N}\Big) = \cos k\pi\cos\frac{\pi kj}{N} + \sin k\pi\sin\frac{\pi kj}{N} = (-1)^k\cos\frac{\pi kj}{N}$$
Thus:
$$T_k(z_j) = (-1)^k\,T_k(x_j)$$
expressing the fact that reflecting about the $y$-axis ($x\to-x$) amounts to switching the sign of the odd Chebyshev polynomials while leaving the even $T_k$ unchanged.
The relation to the FFT is changed:
$$\tilde u_k = \frac{2}{Nc_k}\sum_{j=0}^{N}u(x_j)T_k(x_j)\frac{1}{c_j} \underset{\text{relabeling}}{=} \frac{2}{Nc_k}\sum_{j=0}^{N}u(z_j)T_k(z_j)\frac{1}{c_j} = (-1)^k\,\frac{2}{Nc_k}\sum_{j=0}^{N}u(z_j)\cos\frac{\pi kj}{N}\frac{1}{c_j} = (-1)^k\,\frac{1}{Nc_k}\,\mathrm{Re}\underbrace{\Big(\sum_{j=0}^{2N-1}u_j\,e^{-i\frac{\pi jk}{N}}\Big)}_{\text{FFT}}$$
where now
$$u_0 = u(-1) \qquad u_N = u(+1) \qquad u_{2N} \equiv u(-1)$$
$\Rightarrow$ with natural ordering the FFT yields $(-1)^k\,\tilde u_k$.
6.3 Derivatives
Goal: approximate the derivative of $u(x)$ by the derivative of the interpolant $I_N u(x)$.
We need $\frac{d}{dx}T_k(x)$ in terms of the $T_k(x)$. We had the
Recursion Relation
$$\frac{d}{dx}T_{m+1}(x) = (m+1)\Big(2\,T_m(x) + \frac{1}{m-1}\frac{d}{dx}T_{m-1}(x)\Big) \qquad m\ge 2$$
with
$$\frac{d}{dx}T_0(x) = 0 \qquad \frac{d}{dx}T_1(x) = T_0$$
Note:
- $\frac{d}{dx}T_{m-1}$ contains even lower $T_l$, etc.: $\frac{d}{dx}T_m$ contains contributions from many $T_k$.
First Derivative
Expand the derivative of the interpolant in the $T_k(x)$:
$$\frac{d}{dx}\big(I_N u(x)\big) = \sum_{k=0}^{N}\tilde u_k\,\frac{d}{dx}T_k(x) = \sum_{k=0}^{N}\tilde b_k\,T_k(x)$$
To determine $\tilde b_l$ project the derivative onto $T_l(x)$:
$$\sum_{k=0}^{N}\tilde u_k\int_{-1}^{+1}T_l(x)\,\frac{d}{dx}T_k(x)\,\frac{1}{\sqrt{1-x^2}}\,dx = \sum_{k=0}^{N}\tilde b_k\underbrace{\int_{-1}^{1}T_l(x)T_k(x)\frac{1}{\sqrt{1-x^2}}\,dx}_{\delta_{lk}\frac{\pi}{2}c_k} = \frac{\pi}{2}c_l\,\tilde b_l$$
Note:
- Here $c_0 = 2$ and $c_N = 1$, since this is the full projection with the integrand evaluated not only at the discrete grid points (we get an analytic result for the $\tilde b_k$).
Use
$$\int_{-1}^{1}T_l(x)\frac{d}{dx}T_k(x)\frac{1}{\sqrt{1-x^2}}\,dx = \begin{cases}0 & l\ge k\\ 0 & k>l,\; k+l\text{ even}\\ \pi k & k>l,\; k+l\text{ odd}\end{cases}$$
Proof:
1. $l\ge k$: the degree of $\frac{d}{dx}T_k$ is $k-1 < l$, so it can be expressed as a sum of $T_j$ with $j<l$; the scalar product vanishes since $T_l\perp T_j$ for $j\ne l$.
2. $k+l$ even: $l$ and $k$ are both even or both odd $\Rightarrow$ $T_l\,\frac{d}{dx}T_k$ is odd $\Rightarrow$ the integral vanishes.
3. $k+l$ odd, $k>l$: prove by induction, writing $k = l+2r-1$, $r = 1,2,3,\dots$
(a) $r = 1$, $k = l+1$; first $l\ne 0$:
$$\Big\langle T_l\,\frac{d}{dx}T_{l+1}\Big\rangle \underset{\text{recursion for }\frac{d}{dx}T_{l+1}}{=} (l+1)\Big(2\,\langle T_l T_l\rangle + \frac{1}{l-1}\underbrace{\Big\langle T_l\,\frac{d}{dx}T_{l-1}\Big\rangle}_{=0\text{ since }l-1<l}\Big) = 2(l+1)\,\frac{\pi}{2} = \pi(l+1)$$
now $l = 0$:
$$\Big\langle T_0\,\frac{d}{dx}T_1\Big\rangle = \langle T_0 T_0\rangle = \pi$$
(b) Induction step: assume
$$\Big\langle T_l\,\frac{d}{dx}T_{l+2r-1}\Big\rangle = \pi\underbrace{(l+2r-1)}_{k}, \qquad r\ge 1$$
then
$$\Big\langle T_l\,\frac{d}{dx}T_{l+2(r+1)-1}\Big\rangle = \Big\langle T_l\,(l+2r+1)\Big(2\,T_{l+2r} + \frac{1}{l+2r-1}\frac{d}{dx}T_{l+2r-1}\Big)\Big\rangle$$
$$= \frac{l+2r+1}{l+2r-1}\,\Big\langle T_l\,\frac{d}{dx}T_{l+2r-1}\Big\rangle = \frac{l+2r+1}{l+2r-1}\,\pi(l+2r-1) = \pi(l+2r+1) = \pi\big(l+2(r+1)-1\big)$$
Thus:
$$\tilde b_l = \frac{2}{c_l}\sum_{\substack{k=l+1\\ k+l\text{ odd}}}^{N}k\,\tilde u_k$$
Notes:
- The calculation of a single coefficient $\tilde b_l$ is $O(N)$ operations instead of $O(1)$ for Fourier.
- The calculation of the complete derivative seems to require $O(N^2)$ operations.
- $\tilde b_l$ depends only on the $\tilde u_k$ with $k>l$: only polynomials with higher degree contribute to a given power of $x$ of the derivative.
Determine the $\tilde b_l$ recursively:
$$\frac{c_l}{2}\,\tilde b_l = (l+1)\,\tilde u_{l+1} + \sum_{\substack{k=l+3\\ k+l\text{ odd}}}^{N}k\,\tilde u_k = (l+1)\,\tilde u_{l+1} + \frac{c_{l+2}}{2}\,\tilde b_{l+2}$$
Thus
$$\tilde b_N = 0 \qquad \tilde b_{N-1} = 2N\,\tilde u_N \qquad c_l\,\tilde b_l = 2(l+1)\,\tilde u_{l+1} + \tilde b_{l+2} \quad 0\le l\le N-2$$
Notes:
- Here $c_N = 1$ (full integral) $\Rightarrow$ there is no factor $c_{l+2}$ for $l\le N-2$.
- The recursion relation requires only $O(N)$ operations for all $N$ coefficients.
- The recursion relation cannot be parallelized or vectorized: the evaluation of $\tilde b_l$ requires knowledge of the $\tilde b_k$ with $k>l$:
  - one cannot evaluate all coefficients $\tilde b_l$ simultaneously on parallel computers;
  - one cannot start evaluating a product involving $\tilde b_l$ without first finishing the calculation of the $\tilde b_k$ with $k>l$.
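The backward recursion can be checked against an independent computation of the Chebyshev derivative coefficients; `numpy.polynomial.chebyshev.chebder` serves here as the assumed reference:

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

rng = np.random.default_rng(0)
N = 12
u = rng.standard_normal(N + 1)      # Chebyshev coefficients u_0 .. u_N

# backward recursion: b_N = 0, b_{N-1} = 2N u_N, c_l b_l = 2(l+1) u_{l+1} + b_{l+2}
b = np.zeros(N + 1)
b[N - 1] = 2 * N * u[N]
for l in range(N - 2, -1, -1):
    b[l] = 2 * (l + 1) * u[l + 1] + b[l + 2]
b[0] /= 2.0                          # c_0 = 2

ref = Ch.chebder(u)                  # derivative coefficients, degree N-1
assert np.allclose(b[:N], ref)
```

E.g. for $u = T_3$ the recursion yields $b = (3, 0, 6, 0)$, i.e. $\frac{d}{dx}T_3 = 3T_0 + 6T_2 = 12x^2-3$.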
Higher Derivatives
Calculate higher derivatives recursively:
$$\frac{d^n}{dx^n}u(x) = \frac{d}{dx}\Big(\frac{d^{n-1}}{dx^{n-1}}u(x)\Big)$$
i.e. given
$$\frac{d^{n-1}}{dx^{n-1}}I_N\big(u(x)\big) = \sum_{k=0}^{N}b_k^{(n-1)}\,T_k(x)$$
one gets
$$\frac{d^n}{dx^n}I_N\big(u(x)\big) = \sum_{k=0}^{N}b_k^{(n-1)}\,\frac{d}{dx}T_k(x) = \sum_{k=0}^{N}b_k^{(n)}\,T_k(x)$$
with
$$b_N^{(n)} = 0 \qquad b_{N-1}^{(n)} = 2N\,b_N^{(n-1)} \qquad c_l\,b_l^{(n)} = 2(l+1)\,b_{l+1}^{(n-1)} + b_{l+2}^{(n)}$$
Note:
- To get the $n$-th derivative one effectively has to calculate all derivatives up to $n$.
6.3.1 Implementation of Pseudospectral Algorithm for Derivatives
Combine the steps: given $u(x)$ at the collocation points $x_j$, calculate $\partial_x^n u$ at the $x_j$.

I. Transform Method
1. Transform to Chebyshev amplitudes:
$$\tilde u_k = \frac{2}{Nc_k}\sum_{j=0}^{N}u(x_j)\cos\frac{\pi jk}{N}\,\frac{1}{c_j}$$
2. Calculate the derivative coefficients recursively:
$$b_N^{(n)} = 0 \qquad b_{N-1}^{(n)} = 2N\,b_N^{(n-1)} \qquad c_l\,b_l^{(n)} = 2(l+1)\,b_{l+1}^{(n-1)} + b_{l+2}^{(n)}$$
3. Transform back to real space at the $x_j$:
$$\partial_x^n I_N\big(u(x_j)\big) = \sum_{k=0}^{N}b_k^{(n)}\cos\frac{\pi jk}{N}$$
Note:
- Steps 1 and 3 can be performed using the FFT.
FFT for the back transformation:
The forward transformation was
$$\tilde u_k = \frac{2}{Nc_k}\sum_{j=0}^{N}u(x_j)\cos\frac{\pi jk}{N}\,\frac{1}{c_j} = \frac{1}{Nc_k}\,\mathrm{Re}\Big(\sum_{j=0}^{2N-1}u_j\,e^{-i\frac{\pi jk}{N}}\Big) \tag{15}$$
where the last sum can be done as a forward FFT.
For the first derivative at $x_j$ we need
$$\sum_{k=0}^{N}\tilde b_k\cos\frac{\pi jk}{N}$$
1. Extend the $\tilde b_k$:
$$\tilde b_r = \tilde b_{2N-r} \quad\text{for } r = N+1,\dots,2N-1$$
2. We need factors $c_k$ (cf. (15)): redefine the $\tilde b_k$,
$$\bar b_0 = 2\tilde b_0 \qquad \bar b_N = 2\tilde b_N \qquad \bar b_j = \tilde b_j \text{ for } j\ne 0,N$$
$$\sum_{k=0}^{N}\tilde b_k\cos\frac{\pi jk}{N} = \sum_{k=0}^{N}\bar b_k\cos\frac{\pi jk}{N}\,\frac{1}{c_k} = \frac{1}{2}\,\mathrm{Re}\underbrace{\Big(\sum_{k=0}^{2N-1}\bar b_k\,e^{-i\frac{\pi jk}{N}}\Big)}_{\text{FFT}}$$
The last sum can again be done as a forward FFT.
Notes:
- The backward transformation uses the same FFT as the forward transformation;
- more precisely, because only the real part is taken, the sign of $i$ does not matter.
- Again, for natural ordering we want the derivative at $z_j = \cos\frac{\pi}{N}(N-j)$: we need
$$\sum_k\bar b_k\cos\Big(\frac{\pi k}{N}(N-j)\Big) = \sum_k(-1)^k\,\bar b_k\cos\frac{\pi kj}{N}$$
$\Rightarrow$ replace $\bar b_k\to(-1)^k\,\bar b_k$.
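Putting steps 1-3 together for a first derivative, here is a sketch under the cos-ordering $x_j = \cos(\pi j/N)$ (NumPy's FFT is an assumed stand-in):

```python
import numpy as np

N = 32
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)            # cos ordering, x_0 = +1
u = np.sin(2 * x)

c = np.ones(N + 1)
c[0] = c[N] = 2.0

# 1. forward transform via FFT of the even extension
ue = np.concatenate([u, u[-2:0:-1]])
uk = np.real(np.fft.fft(ue))[: N + 1] / (N * c)

# 2. derivative coefficients by backward recursion
b = np.zeros(N + 1)
b[N - 1] = 2 * N * uk[N]
for l in range(N - 2, -1, -1):
    b[l] = 2 * (l + 1) * uk[l + 1] + b[l + 2]
b[0] /= 2.0

# 3. inverse transform: du(x_j) = sum_k b_k cos(pi j k / N)
bb = b.copy()
bb[0] *= 2.0
bb[N] *= 2.0                         # redefine end coefficients (factors c_k)
be = np.concatenate([bb, bb[-2:0:-1]])
du = 0.5 * np.real(np.fft.fft(be))[: N + 1]

assert np.allclose(du, 2 * np.cos(2 * x), atol=1e-8)
```

Both transforms use the same (forward) FFT, as noted above; only the end coefficients need the extra factors of 2.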
II. Matrix Multiply Approach
As in the Fourier case the derivative is linear in the $u(x_j)$ $\Rightarrow$ it can be written as a matrix multiplication
$$\partial_x I_N\big(u(x_j)\big) = \sum_{k=0}^{N}D_{jk}\,u(x_k)$$
$D_{jk}$ gives the contribution of $u(x_k)$ to the derivative at $x_j$.
The polynomial $I_N(u(x))$ interpolates $u$ on the grid $x_j$. Since the order of $I_N$ is equal to the number of grid points, this polynomial is unique. Therefore start by seeking the polynomial that interpolates $u(x_j)$ and then take its derivative.
Construct the interpolating polynomial from polynomials $g_k(x)$ satisfying
$$g_k(x_j) = \delta_{jk}$$
$$u(x_j) = \sum_{k=0}^{N}g_k(x_j)\,u(x_k) \qquad \partial_x u(x)\big|_{x_j} = \sum_{k=0}^{N}\partial_x g_k(x)\big|_{x_j}\,u(x_k) \equiv \sum_{k=0}^{N}D_{jk}\,u(x_k)$$
Construct the polynomial noting that the Chebyshev polynomial $T_N(x)$ has extrema at all $x_j$ for $1\le j\le N-1$:
$$\frac{d}{dx}T_N(x_j) = 0 \quad\text{for } j = 1,\dots,N-1$$
Note: $\frac{d}{dx}T_N$ has exactly these $N-1$ zeroes since it has order $N-1$.
$$g_k(x) = \underbrace{\frac{(-1)^{k+1}}{N^2\,c_k}}_{\text{normalization}}\;\underbrace{(1-x^2)}_{\text{vanishes at }x_{0,N}}\;\underbrace{\frac{d}{dx}T_N(x)}_{\text{vanishes at the }x_j}\;\underbrace{\frac{1}{x-x_k}}_{\text{cancels }(x-x_k)\text{ in the numerator}}$$
Notes:
- $\sum_k u(x_k)\,g_k(x)$ interpolates $u$ on the grid.
- $g_k(x)$ is indeed a polynomial since the denominator is cancelled by $\frac{d}{dx}T_N$, which vanishes at the $x_k$.
- $g_k(x)$ is a Lagrange polynomial,
$$L_k^{(N)}(x) = \prod_{\substack{m=0\\ m\ne k}}^{N}\frac{x-x_m}{x_k-x_m} = \frac{x-x_0}{x_k-x_0}\cdots\frac{x-x_{k-1}}{x_k-x_{k-1}}\;\frac{x-x_{k+1}}{x_k-x_{k+1}}\cdots\frac{x-x_N}{x_k-x_N} \qquad 0\le k\le N$$
Take the derivative of $g_k(x)$:
$$\frac{d}{dx}I_N u(x_j) = \sum_{k=0}^{N}u(x_k)\,g_k'(x_j) = \sum_{k=0}^{N}D_{jk}\,u(x_k)$$
For the natural ordering $x_j = z_j = \cos\frac{\pi(N-j)}{N}$, i.e. $z_0 = -1$ and $z_N = +1$, one gets
$$D_{jk} = \frac{c_j}{c_k}\,\frac{(-1)^{j+k}}{x_j-x_k} \quad\text{for } j\ne k \qquad D_{jj} = -\frac{x_j}{2(1-x_j^2)} \quad\text{for } j\ne 0,N \tag{16}$$
$$D_{00} = -\frac{2N^2+1}{6} \qquad D_{NN} = +\frac{2N^2+1}{6}$$
Notes:
- The differentiation matrix is not skew-symmetric, $D_{jk}\ne -D_{kj}$, since $D_{jj}\ne 0$ and $c_j/c_k\ne 1$ in general.
- $\|D\| = O(N^2)$ because of the clustering of points at the boundary:
  - clear for $D_{00}$ and $D_{NN}$;
  - the smallest grid distance is $O(N^{-2})$, e.g. for $|j-N|\ll N$
$$1-z_j = 1-\cos\Big(\frac{\pi(N-j)}{N}\Big) = 1-\Big(1-\frac{(N-j)^2}{N^2}\,\frac{\pi^2}{2}+\dots\Big) = O(N^{-2})$$
  - The stability condition will involve $N^{-2}$ instead of $N^{-1}$ $\Rightarrow$ more restrictive than for Fourier modes.
- Higher derivatives are obtained via $D^n$.
Note:
- It turns out that the numerical accuracy of the matrix-multiply approach using $D$ as formulated in (16) is quite prone to numerical round-off errors. $D$ has to satisfy
$$\sum_{j=0}^{N}D_{ij} = 0 \quad\text{for all } i,$$
reflecting that the derivative of a constant vanishes. A better implementation is
$$D_{jk} = \frac{c_j}{c_k}\,\frac{(-1)^{j+k}}{x_j-x_k} \quad\text{for } j\ne k \qquad\qquad D_{jj} = -\sum_{\substack{k=0\\ k\ne j}}^{N}D_{jk} \tag{17}$$
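A sketch of (17) with the negative-sum diagonal, in the natural ordering $z_0 = -1,\dots,z_N = +1$; the test function $e^x$ is an arbitrary smooth choice:

```python
import numpy as np

def cheb_D(N):
    """Chebyshev differentiation matrix on x_j = cos(pi (N - j)/N), x_0 = -1,
    with the negative-sum trick (17) for the diagonal."""
    j = np.arange(N + 1)
    x = np.cos(np.pi * (N - j) / N)
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    # off-diagonal entries; np.eye avoids division by zero on the diagonal
    D = (c[:, None] / c[None, :]) * (-1.0) ** (j[:, None] + j[None, :]) \
        / (x[:, None] - x[None, :] + np.eye(N + 1))
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))   # rows sum to zero by construction
    return D, x

D, x = cheb_D(16)
assert np.allclose(D @ np.exp(x), np.exp(x), atol=1e-9)   # spectral accuracy
assert np.allclose(D.sum(axis=1), 0.0)                    # derivative of a constant
```

For $N = 2$ the matrix reduces to the familiar second-order one-sided and central difference weights on $x = (-1, 0, 1)$, which is a quick sanity check of the signs.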
7 Initial-Boundary-Value Problems: Pseudo-spectral Method
We introduced Chebyshev polynomials to deal with general boundary conditions. Implement them now.
7.1 Brief Review of Boundary-Value Problems
Depending on the character of the equation we need to pose/may pose a different number of boundary conditions at different locations.
7.1.1 Hyperbolic Problems
Hyperbolic problems are characterized by traveling waves: the boundary conditions depend on the characteristics.
A boundary condition is to be posed on an incoming characteristic variable but not on an outgoing characteristic variable. The solution blows up if a boundary condition is posed on the wrong variable.
1. Scalar wave equation
$$\partial_t u = \partial_x u \qquad u(x,0) = u_0(x) \qquad -1\le x\le +1$$
The wave travels to the left:
$$u(x,t) = u_0(x+t)$$
Distinguish the boundaries:
(a) $x = -1$: outflow boundary, $u$ is an outgoing variable $\Rightarrow$ requires and allows no boundary condition.
(b) $x = +1$: inflow boundary, $u$ is an incoming variable $\Rightarrow$ needs and allows a single boundary condition.
2. System of wave equations
$$\partial_t\mathbf{u} = A\,\partial_x\mathbf{u}$$
Diagonalize $A$ to determine the characteristic variables.
Example:
$$\partial_t u = \partial_x v \qquad \partial_t v = \partial_x u$$
Taking sum and difference,
$$U_l = u+v \qquad U_r = u-v \qquad \partial_t U_{l,r} = \pm\partial_x U_{l,r}$$
(a) $x = -1$: only $U_r$ is incoming, only $U_r$ accepts a boundary condition.
(b) $x = +1$: only $U_l$ is incoming, only $U_l$ accepts a boundary condition.
Physical boundary conditions are often not in terms of the characteristic variables.
Example:
$$u = u^* \text{ at } x = -1, \qquad v \text{ unspecified}$$
At $x = -1$:
$$U_r(-1) = u^* - v(-1) = u^* - \tfrac{1}{2}\big(U_l(-1)-U_r(-1)\big) \qquad\Rightarrow\qquad U_r(-1) = 2u^* - U_l(-1)$$
7.1.2 Parabolic Equations
No characteristics; boundary conditions at each boundary.
Example:
$$\partial_t u = -\nabla\cdot\mathbf{j} \qquad \mathbf{j} = -\nabla u \qquad\Rightarrow\qquad \partial_t u = \nabla^2 u$$
Typical boundary conditions:
1. Dirichlet: $u = 0$
2. Neumann (no-flux boundary condition): $\partial_x u = 0$
3. Robin boundary conditions: $\alpha u + \beta\,\partial_x u = g(t)$
7.2 Pseudospectral Implementation
Pseudospectral: we have grid points $\Rightarrow$ boundary values are available.
Discuss using the matrix-multiply approach.
Explore the simple wave equation
$$\partial_t u = \partial_x u \qquad u(x=+1,t) = g(t)$$
Discretize:
$$\partial_t u_i = \sum_{j=0}^{N}D_{ij}\,u_j \qquad\text{with } u_j = u(x_j)$$
Notes:
- The spatial derivative is calculated using all points.
- Derivatives are available at the boundaries without introducing the virtual points that appeared when using finite differences, as in $\partial_x u_0 = \frac{1}{2\Delta x}(u_1 - u_{-1})$.
- A boundary condition seems not necessary: it looks as if $u_N$ could be updated without making use of $g(t)$. But the PDE would be ill-posed without boundary conditions $\Rightarrow$ the scheme should blow up! (see later)
Correct implementation:
$$\partial_t u_i = \sum_{j=0}^{N}D_{ij}\,u_j \qquad i = 0,\dots,N-1$$
$$u_N = g(t)$$
Note:
- Although $u_N$ is not updated using the PDE, it can still be used to calculate the derivative at the other points.
Express the scheme in terms of the unknown variables only: $u_0, u_1,\dots,u_{N-1}$.
Define the reduced $N\times N$ differentiation matrix
$$D^{(N)}_{ij} = D_{ij} \qquad i,j = 0,\dots,N-1,$$
i.e. the $N$-th row and column of $D_{ij}$ are omitted. Then
$$\partial_t u_i = \sum_{j=0}^{N-1}D^{(N)}_{ij}\,u_j + D_{iN}\,u_N \qquad i = 0,\dots,N-1$$
$$u_N = g(t)$$
Notes:
- The boundary conditions modify the differentiation matrix.
- In general the equation becomes inhomogeneous.
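A compact illustration of this scheme for $\partial_t u = \partial_x u$ with $u(+1,t) = 0$; the RK4 time stepper and the Gaussian initial condition are choices made for this sketch, not prescribed by the notes:

```python
import numpy as np

def cheb_D(N):
    j = np.arange(N + 1)
    x = np.cos(np.pi * (N - j) / N)        # natural ordering: x_0 = -1, x_N = +1
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    D = (c[:, None] / c[None, :]) * (-1.0) ** (j[:, None] + j[None, :]) \
        / (x[:, None] - x[None, :] + np.eye(N + 1))
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return D, x

N, dt, T = 48, 2e-4, 0.5
D, x = cheb_D(N)
u0 = lambda s: np.exp(-10 * (s + 0.3) ** 2)   # pulse, essentially 0 at x = +1

u = u0(x)
def rhs(v):
    dv = D @ v
    dv[N] = 0.0        # u_N = g(t) (here ~0): held fixed, not updated by the PDE
    return dv

for _ in range(int(round(T / dt))):
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# left-moving exact solution u0(x + t)
assert np.max(np.abs(u - u0(x + T))) < 1e-2
```

Note the time step: $\Delta t$ must respect the $O(N^{-2})$ stability restriction discussed in the next section.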
7.3 Spectra of Modified Differentiation Matrices
With $\mathbf{u} = (u_0,\dots,u_{N-1})$ the PDE becomes an inhomogeneous system of ODEs,
$$\partial_t\mathbf{u} = D^{(N)}\mathbf{u} + \mathbf{d} \qquad\text{with } d_i = D_{iN}\,g(t)$$
For simplicity assume vanishing boundary values: $\mathbf{d} = 0$.
The stability properties are determined by the eigenvalues $\lambda_j$ of the modified differentiation matrix $D^{(N)}$:
$$\partial_t\tilde u_j = \lambda_j\,\tilde u_j$$
Reminder:
- Region of absolute stability of a scheme for eigenvalue $\lambda_j$: the set of $\lambda_j\Delta t\in\mathbb{C}$ for which $\tilde u_j$ remains bounded for all $t$.
- The scheme is asymptotically stable if it is absolutely stable for all eigenvalues of $D^{(N)}$.
7.3.1 Wave Equation: First Derivative
What are the properties of $D^{(N)}$?
Review of the Fourier Case:
- The eigenvalues of $D_F$ are $ik$, $|k| = 0,1,\dots,N-1$. All eigenvalues are purely imaginary and the eigenvalue 0 is double.
- $D_F$ is normal $\Rightarrow$ it can be diagonalized by a unitary matrix $U$,
$$U^{-1}DU = \begin{pmatrix}\lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_N\end{pmatrix} \equiv T \qquad\text{with } \|T\| = \|D\| \text{ and } \|U^{-1}u\| = \|u\|$$
$\Rightarrow$ $\|u\|$ is bounded by the same constant as $\|U^{-1}u\|$, independent of $N$ $\Rightarrow$ it is sufficient to look at the scalar equation.
Properties of $D^{(N)}$ for Chebyshev:
- The eigenvalues of $D^{(N)}$ are not known analytically.
- The eigenvalues of $D^{(N)}$ have negative real part:
$$\partial_t\mathbf{u} = D^{(N)}\mathbf{u} \quad\text{well-posed} \qquad \partial_t\mathbf{u} = -D^{(N)}\mathbf{u} \quad\text{ill-posed}$$
In the ill-posed case the boundary condition should be at $x = -1$ but it is posed at $x = +1$.
Example: $N = 1$:
$$D^{(N)} = D_{00} = -\frac{2+1}{6} = -\frac{1}{2} \qquad \partial_t u_0 = -\frac{1}{2}u_0 \quad\text{bounded; boundary condition on } u_1$$
For a boundary condition at $x = -1$ introduce $D^{(0)}$,
$$D^{(0)}_{ij} = D_{ij} \qquad i,j = 1,\dots,N$$
Thus for $\partial_t u = -\partial_x u$:
$$\partial_t u_i = -\sum_{j=1}^{N}D^{(0)}_{ij}u_j - D_{i0}\,g(t) \qquad i = 1,\dots,N$$
The eigenvalues of $D^{(0)}$ have positive real part.
Example: $N = 1$: $D^{(0)} = D_{NN} = +\frac{1}{2}$.
Notes:
- In Fourier the real part vanishes: no blow-up; periodic boundary conditions are well-posed for both directions of propagation.
- $D^{(N)}$ is not normal ($D^\dagger D\ne DD^\dagger$) $\Rightarrow$ the similarity transformation $S$ to diagonal form is not unitary:
$$\|u\| \ne \|Su\|$$
For any fixed $N$, $\|u\|$ is bounded if $\|Su\|$ is bounded. But the constant relating $\|u\|$ and $\|Su\|$ could diverge for $N\to\infty$ $\Rightarrow$ stability is not guaranteed for $N\to\infty$ even if the scalar equation is stable.
- The eigenvalues of $D^{(N)}$ and $D^{(0)}$ are $O(N^2)$ $\Rightarrow$ the stability limits for the wave equation will involve $\Delta t \le O(N^{-2})$.
- The larger eigenvalues reflect the close grid spacing near the boundary, $\Delta x = O(N^{-2})$.
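These spectral properties can be probed numerically (sketch; for large $N$ round-off in this non-normal matrix is known to contaminate the computed eigenvalues, so a moderate $N$ is used):

```python
import numpy as np

def cheb_D(N):
    j = np.arange(N + 1)
    x = np.cos(np.pi * (N - j) / N)        # x_0 = -1, x_N = +1
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    D = (c[:, None] / c[None, :]) * (-1.0) ** (j[:, None] + j[None, :]) \
        / (x[:, None] - x[None, :] + np.eye(N + 1))
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return D

def reduced_eigs(N):
    D = cheb_D(N)
    return np.linalg.eigvals(D[:N, :N])    # drop row/col N: b.c. at x = +1

lam = reduced_eigs(16)
# all eigenvalues of D^{(N)} lie in the left half plane
assert np.max(lam.real) < 0.0
# eigenvalue magnitudes grow like O(N^2): doubling N roughly quadruples them
assert np.max(np.abs(reduced_eigs(16))) > 3.0 * np.max(np.abs(reduced_eigs(8)))
```

With the sign of the equation flipped (boundary condition on the wrong side) one would instead examine `D[1:, 1:]`, whose eigenvalues have positive real part.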
7.3.2 Diffusion Equation: Second Derivative
Consider
$$\partial_t u = \partial_x^2 u \qquad \alpha_{0,N}\,u + \beta_{0,N}\,\partial_x u = \gamma_{0,N} \quad\text{at } x = \mp1$$
a) Fixed Boundary Values: $\alpha = 1$, $\beta = 0$
Unknowns: $u_1, u_2,\dots,u_{N-1}$; known: $u_0 = \gamma_0$, $u_N = \gamma_N$.
Reduced $(N-1)\times(N-1)$ differentiation matrix for the second derivative:
$$D^{(0,N)}_{2,ij} = \big(D^2\big)_{ij} \qquad i,j = 1,\dots,N-1$$
then
$$\partial_t u_i = \sum_{j=1}^{N-1}D^{(0,N)}_{2,ij}u_j + \big(D^2\big)_{i0}\gamma_0 + \big(D^2\big)_{iN}\gamma_N \qquad i = 1,\dots,N-1$$
Note:
- Again the 2nd derivative is calculated using all values of $u$, including the fixed prescribed boundary values.
- For the transformation to the $\tilde u_k$ via FFT use all grid points.
- The information for $\partial_x^2 u$ is, however, discarded at the boundaries.
Eigenvalues:
- Exact eigenvalues of $\partial_x^2$ with $u(\pm1) = 0$: $\sin q(x+1)$ is an eigenfunction of $\partial_x^2$ for $q = \frac{\pi}{L}n = \frac{\pi}{2}n$, with eigenvalues
$$\lambda_n = -\frac{\pi^2}{4}n^2$$
All functions that vanish at $x = \pm1$ can be expanded in terms of the $\sin q(x+1)$ with $q = \frac{\pi}{2}n$: they form a complete set $\Rightarrow$ there are no other eigenfunctions.
- Eigenvalues of $D^{(0,N)}_2$: all eigenvalues are real and negative; the eigenvalues are $O(N^4)$, reflecting the small grid spacing near the boundaries.
b) Fixed Flux: $\alpha = 0$, $\beta = 1$
We need another modification of $D_2$:
- $u_0$ and $u_N$ are now unknown $\Rightarrow$ $(N+1)\times(N+1)$ matrix;
- $\partial_x u_0$ and $\partial_x u_N$ are known;
- $\partial_x u_i$ is calculated with $D$ only for $i = 1,\dots,N-1$:
$$\tilde D^{(0,N)}_{ij} = \begin{cases}D_{ij} & 1\le i\le N-1\\ 0 & i = 0 \text{ or } i = N\end{cases}$$
$$\partial_x u_i = \sum_{j=0}^{N}\tilde D^{(0,N)}_{ij}u_j + \delta_{i,0}\,\gamma_0 + \delta_{i,N}\,\gamma_N \qquad i = 0,\dots,N$$
2nd derivative:
$$\partial_x^2 u_i = \sum_{j=0}^{N}D_{ij}\,\partial_x u_j = \sum_{j,k=0}^{N}D_{ij}\tilde D^{(0,N)}_{jk}u_k + D_{i0}\,\gamma_0 + D_{iN}\,\gamma_N$$
Diffusion equation:
$$\partial_t u_i = \underbrace{\sum_{j,k=0}^{N}D_{ij}\tilde D^{(0,N)}_{jk}u_k}_{\text{apply e.g. Crank-Nicholson}} + \underbrace{D_{i0}\gamma_0 + D_{iN}\gamma_N}_{\text{inhomogeneous terms}}$$
$$\frac{1}{\Delta t}\big(u^{n+1}-u^n\big) = \theta\,D\tilde D^{(0,N)}u^{n+1} + (1-\theta)\,D\tilde D^{(0,N)}u^n + D_{i0}\gamma_0 + D_{iN}\gamma_N$$
Notes:
- The derivative at the boundary is also calculated with spectral accuracy; in finite-difference schemes the boundary derivatives are one-sided: reduced accuracy.
- Crank-Nicholson for fixed boundary values is similar.
7.4 Discussion of Time-Stepping Methods for Chebyshev
Based on the analysis of
$$\frac{du}{dt} = \lambda u:$$
which scheme has a range of $\Delta t$ in which it is absolutely stable for the given $\lambda\in\mathbb{C}$?
Main aspect: not only $D^{(0,N)}_2$ but also $D^{(N)}$ has eigenvalues with negative real part.
7.4.1 Adams-Bashforth
AB1 = forward Euler
AB2:
$$u^{n+1} = u^n + \Delta t\Big(\frac{3}{2}f^n - \frac{1}{2}f^{n-1}\Big)$$
AB3:
$$u^{n+1} = u^n + \Delta t\Big(\frac{23}{12}f^n - \frac{16}{12}f^{n-1} + \frac{5}{12}f^{n-2}\Big)$$
[Figure: stability regions of AB1, AB2, AB3 in the complex $\lambda\Delta t$ plane]
Since the eigenvalues of the odd Chebyshev derivatives have non-zero (negative) real part, all three schemes have stable regions not only for the diffusion equation but also for the wave equation.
Stability limits:
- wave equation: $\Delta t_{max} = O(N^{-2})$
- diffusion equation: $\Delta t_{max} = O(N^{-4})$
$\Rightarrow$ strong motivation for an implicit scheme.
7.4.2 Adams-Moulton
AM1 = backward Euler
AM2 = Crank-Nicholson
AM3:
$$u^{n+1} = u^n + \Delta t\Big(\frac{5}{12}f^{n+1} + \frac{8}{12}f^n - \frac{1}{12}f^{n-1}\Big)$$
[Figure: stability regions of the Adams-Moulton schemes AM3-AM6 in the complex $\lambda\Delta t$ plane]
Backward Euler and Crank-Nicholson remain unconditionally stable for both equations.
AM3: now stable for small $\Delta t$; but still an implicit scheme.
Notes:
- Crank-Nicholson damps large wavenumbers only weakly; 2nd order in time.
- Backward Euler damps large wavenumbers strongly: very robust, but only 1st order in time.
- If high wavenumbers arise from non-smooth initial conditions: take a few backward Euler steps.
7.4.3 Backward-Difference Schemes
This class of schemes is obtained by constructing an interpolant for $u(t)$ and taking its derivative as the left-hand side of the differential equation:
$$p_m(t) = \sum_{k=0}^{m-1}u(t_{n+1-k})\,L_k^{(m)}(t)$$
with the Lagrange polynomials
$$L_k^{(m)}(t) = \prod_{\substack{l=0\\ l\ne k}}^{m-1}\frac{t-t_{n+1-l}}{t_{n+1-k}-t_{n+1-l}}$$
To get the derivative:
$$\frac{du}{dt}\bigg|_{t_{n+1}} = \frac{d}{dt}p_m(t)\bigg|_{t_{n+1}}$$
1. $m = 2$:
$$p_2(t) = \frac{u^{n+1}-u^n}{t_{n+1}-t_n}(t-t_n) + u^n \qquad \frac{d}{dt}p_2(t)\bigg|_{t_{n+1}} = \frac{u^{n+1}-u^n}{t_{n+1}-t_n} = f(u^{n+1})$$
thus: BD1 = backward Euler.
2. $m = 3$ yields BD2:
$$\frac{3}{2}u^{n+1} - 2u^n + \frac{1}{2}u^{n-1} = \Delta t\,f^{n+1}$$
[Figure: stability regions of the backward-differentiation schemes BD1-BD6 in the complex $\lambda\Delta t$ plane, full view and close-up near the origin]
von Neumann analysis for BD2:
$$\frac{3}{2}z - 2 + \frac{1}{2z} - \Delta t\,\lambda\,z = 0$$
$$z_{1,2} = \frac{2\pm\sqrt{1+2\Delta t\lambda}}{3-2\Delta t\lambda} \qquad |z_{1,2}| \to \frac{1}{\sqrt{2\Delta t|\lambda|}} \to 0 \quad\text{for }\Delta t|\lambda|\to\infty$$
Notes:
- BD1 and BD2 are unconditionally stable. BD3 and higher are not unconditionally stable.
- BD2 damps high wavenumbers strongly (although not as strongly as backward Euler) and is 2nd order in time.
- Compared to Crank-Nicholson it needs more storage since it uses $u^{n-1}$.
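The two roots of the BD2 characteristic polynomial can be checked directly on the negative real axis (illustrative sketch):

```python
import numpy as np

# BD2 applied to u' = lam*u: multiply the characteristic equation by z to get
# (3/2 - lam*dt) z^2 - 2 z + 1/2 = 0
def bd2_roots(ldt):
    return np.roots([1.5 - ldt, -2.0, 0.5])

# unconditional stability: |z| < 1 for lam*dt anywhere on the negative real axis
for ldt in [-1e-3, -0.5, -1.0, -1e2, -1e4]:
    assert np.max(np.abs(bd2_roots(ldt))) < 1.0
```

For $\lambda\Delta t\to 0^-$ the larger root approaches 1 from below (consistency), and for $\lambda\Delta t\to-\infty$ both roots go to zero, reflecting the strong damping of high wavenumbers.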
7.4.4 Runge-Kutta
[Figure: stability regions of RK1-RK4 in the complex $\lambda\Delta t$ plane]
For Chebyshev RK2 is also stable for the wave equation - this was not the case for Fourier.
7.4.5 Semi-Implicit Schemes
Consider the diffusion equation with a nonlinearity,
$$\partial_t u = \underbrace{\partial_x^2 u}_{\text{CN}} + \underbrace{f(u)}_{\text{AB2}} \qquad u(x=0) = \gamma_0 \quad u(x=L) = \gamma_N$$
$$u^{n+1} = u^n + \Delta t\Big(\theta\,\partial_x^2 u^{n+1} + (1-\theta)\,\partial_x^2 u^n\Big) + \Delta t\Big(\frac{3}{2}f(u^n) - \frac{1}{2}f(u^{n-1})\Big)$$
Calculate the derivatives with the differentiation matrix $\Rightarrow$ the boundary conditions enter:
$$\partial_x^2 u_i = \sum_j D^{(0,N)}_{2,ij}u_j + D^2_{i0}\,\gamma_0 + D^2_{iN}\,\gamma_N \qquad i = 1,\dots,N-1$$
remember
$$D^{(0,N)}_{2,ij} = \big(D^2\big)_{ij} \qquad i,j = 1,\dots,N-1$$
Insert into the scheme:
$$\sum_j\Big(\delta_{ij} - \theta\Delta t\,D^{(0,N)}_{2,ij}\Big)u_j^{n+1} = \sum_j\Big(\delta_{ij} + \Delta t(1-\theta)D^{(0,N)}_{2,ij}\Big)u_j^n + \Delta t\Big(D^2_{i0}\gamma_0 + D^2_{iN}\gamma_N\Big) + \Delta t\Big(\frac{3}{2}f_i(u^n) - \frac{1}{2}f_i(u^{n-1})\Big) \qquad i = 1,\dots,N-1$$
Notes:
- Need to invert $\delta_{ij} - \theta\Delta t\,D^{(0,N)}_{2,ij}$: a constant matrix $\Rightarrow$ only one matrix inversion.
- In the algorithm $D^{(0,N)}_{2,ij}$ is effectively an $(N-1)\times(N-1)$ matrix; but it is not the same matrix as $D^2$ computed for $N-1$ nodes!
- If the boundary condition depends on time, treat it either with CN,
$$\Delta t\Big(\theta\,D^2_{i0}\,\gamma_0(t_{n+1}) + (1-\theta)\,D^2_{i0}\,\gamma_0(t_n)\Big),$$
or with AB2,
$$\Delta t\Big(\frac{3}{2}D^2_{i0}\,\gamma_0(t_n) - \frac{1}{2}D^2_{i0}\,\gamma_0(t_{n-1})\Big).$$
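A minimal end-to-end sketch of the semi-implicit CNAB scheme; the concrete nonlinearity $f(u) = u - u^3$ (Allen-Cahn), the domain $[-1,1]$, and the boundary values $\mp1$ are assumptions made for this example only:

```python
import numpy as np

def cheb_D(N):
    j = np.arange(N + 1)
    x = np.cos(np.pi * (N - j) / N)        # x_0 = -1, x_N = +1
    c = np.ones(N + 1)
    c[0] = c[N] = 2.0
    D = (c[:, None] / c[None, :]) * (-1.0) ** (j[:, None] + j[None, :]) \
        / (x[:, None] - x[None, :] + np.eye(N + 1))
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))
    return D, x

N, dt, theta = 24, 1e-3, 0.5
D, x = cheb_D(N)
D2 = D @ D
A = D2[1:N, 1:N]                           # reduced matrix D2^{(0,N)}
g0, gN = -1.0, 1.0                         # fixed boundary values u(-1), u(+1)
bdry = D2[1:N, 0] * g0 + D2[1:N, N] * gN   # inhomogeneous boundary terms

f = lambda v: v - v**3
u = x.copy()                               # initial condition satisfying the b.c.
I = np.eye(N - 1)
Minv = np.linalg.inv(I - theta * dt * A)   # constant matrix: invert only once
fn_old = f(u[1:N])                         # first step: effectively Euler for f
for step in range(2000):
    fn = f(u[1:N])
    rhs = (I + (1 - theta) * dt * A) @ u[1:N] + dt * bdry \
          + dt * (1.5 * fn - 0.5 * fn_old)
    u[1:N] = Minv @ rhs
    fn_old = fn

assert np.all(np.isfinite(u))
assert np.max(np.abs(u)) <= 1.01
```

The stiff $O(N^4)$ diffusive eigenvalues are handled by the implicit CN part, so $\Delta t$ is limited only by the mild nonlinearity treated with AB2.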
7.4.6 Exponential Time-Differencing
Consider again
$$\partial_t u = \partial_x^2 u + f(u) \qquad 0 < x < L \qquad\text{b.c. at } x = 0, L$$
Using the Chebyshev differentiation matrix $D_2$ this can be integrated formally,
$$\mathbf{u}^{n+1} = e^{D_2\Delta t}\,\mathbf{u}^n + e^{D_2\Delta t}\int_0^{\Delta t}e^{-D_2 t'}\,\mathbf{f}(t_n+t')\,dt'$$
where $\mathbf{f}$ denotes the vector $(f_1,\dots,f_N)$.
For an ETD forward-Euler scheme (ETDFE) we approximate this as
$$\mathbf{u}^{n+1} = e^{\Delta t D_2}\,\mathbf{u}^n + \Delta t\,E_0(\Delta t D_2)\,\mathbf{f}(t_n) \tag{19}$$
with
$$E_0(\Delta t D_2) = (\Delta t D_2)^{-1}\Big(e^{D_2\Delta t} - I\Big)$$
[Footnote 4: to include CNAB for Chebyshev with the FFT, write $D = F^{-1}\tilde D F$, so that
$$(I+\Delta t D)^{-1} = \big(FF^{-1} + \Delta t\,F^{-1}\tilde D F\big)^{-1} = \big(F^{-1}(I+\Delta t\tilde D)F\big)^{-1} = F^{-1}(I+\Delta t\tilde D)^{-1}F.]$$
As in the Fourier case the evaluation of $E_0$ suffers from round-off through cancellations. Even worse cancellations arise for the $E_i$ in ETDRK4 (cf. (10)). Using Taylor's formula is not straightforward. Use instead the Cauchy integral formula for matrices $A$ [6, 7].
Consider
$$\Phi(A) = \frac{1}{2\pi i}\oint_{\mathcal C}f(t)\,(tI-A)^{-1}\,dt$$
Assume $A$ can be diagonalized,
$$A = S\Lambda S^{-1} \qquad\text{with }\Lambda = \mathrm{diag}(\lambda_1,\lambda_2,\dots,\lambda_n)$$
$$(tI-A)^{-1} = \big(tSIS^{-1} - S\Lambda S^{-1}\big)^{-1} = \big(S(tI-\Lambda)S^{-1}\big)^{-1} = S\,(tI-\Lambda)^{-1}\,S^{-1} = S\,\mathrm{diag}\Big(\frac{1}{t-\lambda_1},\dots,\frac{1}{t-\lambda_n}\Big)S^{-1}$$
Since $S$ does not depend on $t\in\mathbb{C}$,
$$\Phi(A) = S\,\frac{1}{2\pi i}\oint_{\mathcal C}f(t)\,\mathrm{diag}\Big(\frac{1}{t-\lambda_1},\dots,\frac{1}{t-\lambda_n}\Big)dt\;S^{-1} = S\,\mathrm{diag}\Big(\frac{1}{2\pi i}\oint_{\mathcal C}\frac{f(t)}{t-\lambda_1}\,dt,\dots,\frac{1}{2\pi i}\oint_{\mathcal C}\frac{f(t)}{t-\lambda_n}\,dt\Big)S^{-1}$$
If $\mathcal C$ encloses $\lambda_i$,
$$\frac{1}{2\pi i}\oint_{\mathcal C}\frac{f(t)}{t-\lambda_i}\,dt = f(\lambda_i)$$
If $\mathcal C$ encloses all eigenvalues of $A$ one gets
$$\Phi(A) = S\,\mathrm{diag}\big(f(\lambda_1),\dots,f(\lambda_n)\big)\,S^{-1} = f(A)$$
and
$$f(A) = \frac{1}{2\pi i}\oint_{\mathcal C}f(t)\,(tI-A)^{-1}\,dt \tag{20}$$
Notes:
- Sample code for the Allen-Cahn equation, $f(u) = u - u^3$, is in the appendix of [7].
- The contour integral can be evaluated using the trapezoidal rule.
- The simplest contour is a circle with radius $R$ centered at $t = 0$.
- The eigenvalues of $D_2$ grow like $N^4$ $\Rightarrow$ $R$ has to be chosen large enough.
- $e^t$ grows and oscillates rapidly for ranges of large complex $t$ (cf. (10)) $\Rightarrow$ more integration points are needed for the integral $\Rightarrow$ possibly other contour shapes are preferable (e.g. elliptic close to the real axis, or parabolic).
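The contour-integral evaluation of the ETD coefficient can be sketched for a small matrix; the $2\times2$ matrix, circle center, and radius are arbitrary choices for the illustration:

```python
import numpy as np

# Evaluate f(A) = (e^A - I) A^{-1} (the ETD coefficient E_0) via the Cauchy
# integral (20), using the trapezoidal rule on a circular contour that
# encloses all eigenvalues of A.
A = np.array([[-1.0, 0.3],
              [0.0, -2.5]])
f = lambda z: (np.exp(z) - 1.0) / z      # entire (removable singularity at 0)

M, cen, R = 64, -2.0, 4.0                # contour points, circle center, radius
th = 2 * np.pi * (np.arange(M) + 0.5) / M
z = cen + R * np.exp(1j * th)
I2 = np.eye(2)
fA = np.zeros((2, 2), dtype=complex)
for zm in z:
    # dt = i (z - cen) d(theta), so the 1/(2 pi i) prefactor becomes (z-cen)/M
    fA += f(zm) * np.linalg.inv(zm * I2 - A) * (zm - cen)
fA = np.real(fA / M)                     # A is real, so f(A) is real

# reference via the eigendecomposition A = S diag(lam) S^{-1}
lam, S = np.linalg.eig(A)
ref = np.real(S @ np.diag(f(lam)) @ np.linalg.inv(S))
assert np.allclose(fA, ref, atol=1e-10)
```

Because the integrand is analytic in the angle, the trapezoidal rule converges geometrically in $M$; evaluating $f$ only on the contour avoids the cancellation problem of evaluating $(e^z-1)/z$ near $z = 0$.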
Boundary conditions:
1. Fixed boundary values:
$$u_0 = \gamma_0 \qquad u_N = \gamma_N$$
Using the modified differentiation matrix $D^{(0,N)}_2$ we have $N-1$ unknowns $u_1,\dots,u_{N-1}$:
$$\partial_t u_i = \sum_{j=1}^{N-1}D^{(0,N)}_{2,ij}u_j + \big(D^2\big)_{i0}\gamma_0 + \big(D^2\big)_{iN}\gamma_N + f_i(u) \qquad i = 1,\dots,N-1$$
Two possibilities:
(a) Shift the solution to make the boundary conditions homogeneous:
$$u = U + u_b \qquad\text{with}\quad u_b(x) = \gamma_0 + (\gamma_N-\gamma_0)\frac{x}{L}$$
$U$ now satisfies homogeneous Dirichlet boundary conditions and can be determined using (19) or its RK4 version with $D_2$ replaced by $D^{(0,N)}_2$.
(b) One can include the inhomogeneous terms $(D^2)_{i0}\gamma_0 + (D^2)_{iN}\gamma_N$ in $\mathbf{f}$.
2. Fixed-flux boundary conditions:
$$\partial_x u = \gamma_{0,N} \quad\text{at } x = 0, L$$
$$\partial_t u_i = \sum_{j,k=0}^{N}D_{ij}\tilde D^{(0,N)}_{jk}u_k + D_{i0}\gamma_0 + D_{iN}\gamma_N + f_i(u) \qquad i = 0,\dots,N$$
For $\gamma_0\ne\gamma_N$ the transformation to homogeneous Neumann conditions would induce an additional term since $\partial_x^2 u_b\ne 0$. Probably it is preferable to include the inhomogeneous terms in $\mathbf{f}$.
8 Initial-Boundary-Value Problems: Galerkin Method
Galerkin method: the unknowns are the expansion coefficients; no spatial grid is introduced.
The implementation of boundary conditions is different for Galerkin and for pseudospectral methods:
- pseudospectral: we have grid points $\Rightarrow$ boundary values available;
- Galerkin: no grid points, equations obtained by projections $\Rightarrow$ modify the expansion functions or the projection.
8.1 Review of the Fourier Case

t
u = Su 0 x 2 periodic b.c.
Expand u
P
N
(u) =
N

k=N
u
k
(t)e
ikx
replace u by projection P
N
(u) in PDE

t
P
N
(u) S P
N
(u) = 0
the expansion coefcients are determined by the condition that equation be satised in
subspace spanned by the e
ikx
, N k N, i.e. error orthogonal to that subspace
Project onto e
ilx
, N l N
e
ilx
,
t
P
N
(u) S P
N
(U)) = 0
Orthogonality of e
ilx
-modes

t
u
l

_
2
0
e
ilx
S P
N
(u) = 0
e.g. for S =
x

t
u
l

_
e
ilx

k
(ik)u
k
e
ikx
= 0

t
u
l
ilu
l
= 0
Notes:
no aliasing error since transforms are calculated exactly
nonlinear terms and space-dependent terms require convolution: slow
no grid: preserves translation symmetry
boundary conditions:
each Fourier mode satises the boundary conditions individually
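A minimal numerical illustration of the Fourier-Galerkin result ∂_t û_l = il û_l for S = ∂_x: the mode ODEs integrate exactly to a translation u(x, t) = u(x + t, 0). The smooth periodic initial condition e^{sin x} is just an illustrative choice:

```python
import numpy as np

N = 32
x = 2 * np.pi * np.arange(N) / N
l = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
u0 = np.exp(np.sin(x))                    # smooth periodic initial condition

t = 1.2
uk = np.fft.fft(u0) * np.exp(1j * l * t)  # exact integration of u_l' = i l u_l
u = np.fft.ifft(uk).real
u_exact = np.exp(np.sin(x + t))
print(np.max(np.abs(u - u_exact)))        # spectral accuracy
```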
8.2 Chebyshev Galerkin

Consider
∂_t u = ∂_x u,    −1 ≤ x ≤ +1,    u(x = +1, t) = g(t)

Expand
P_N(u) = Σ_{k=0}^{N} û_k(t) T_k(x)

and project back onto T_l(x):
⟨T_l, ∂_t P_N(u) − ∂_x P_N(u)⟩ = 0
∂_t û_l(t) = Σ_{k=0}^{N} ⟨T_l(x), ∂_x T_k(x)⟩ û_k(t)

with
⟨u_1(x), u_2(x)⟩ = ∫_{−1}^{+1} u_1(x) u_2(x) (1/√(1 − x²)) dx

Where are the boundary conditions?
Note:
- the T_k(x) do not satisfy the boundary conditions individually
8.2.1 Modification of Set of Basis Functions

Construct a new complete set of functions, each of which satisfies the boundary conditions.
Example: Dirichlet condition g(t) = 0.
Since
T_k(x = +1) = 1
introduce
T̃_k(x) = T_k(x) − T_0(x),    k ≥ 1;
each T̃_k satisfies the boundary condition.
Note:
- the modified functions may not be orthogonal any more:
⟨T̃_l(x), T̃_k(x)⟩ = ⟨T_k, T_l⟩ − ⟨T_k, T_0⟩ − ⟨T_0, T_l⟩ + ⟨T_0, T_0⟩
with ⟨T_k, T_l⟩ ∝ δ_kl, ⟨T_k, T_0⟩ = 0 = ⟨T_0, T_l⟩ for k, l ≥ 1, and ⟨T_0, T_0⟩ = π ≠ 0.
- one could orthogonalize the set with the Gram-Schmidt procedure (with normalization at each step):
T̂_1 = T̃_1
T̂_2 = T̃_2 − ⟨T̂_1, T̃_2⟩ T̂_1
T̂_3 = T̃_3 − ⟨T̂_1, T̃_3⟩ T̂_1 − ⟨T̂_2, T̃_3⟩ T̂_2
...
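The loss of orthogonality of the modified basis can be checked numerically with Gauss-Chebyshev quadrature; the node count n = 50 is an arbitrary choice (the quadrature is exact for the low-degree polynomials involved):

```python
import numpy as np

n = 50  # Gauss-Chebyshev nodes: exact for polynomials up to degree 2n - 1
x = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))

def ip(f, g):
    """Chebyshev-weighted inner product; the weight 1/sqrt(1-x^2) is
    absorbed into the node placement."""
    return np.pi / n * np.sum(f(x) * g(x))

T = lambda k: (lambda x: np.cos(k * np.arccos(x)))    # T_k(x)
Tt = lambda k: (lambda x: T(k)(x) - T(0)(x))          # modified basis T~_k

# the modified functions vanish at x = +1 but are no longer orthogonal:
print(ip(Tt(1), Tt(2)))   # pi, not 0
print(ip(Tt(2), Tt(2)))   # 3*pi/2
```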
- the procedure is not very flexible: the expansion functions have to be changed whenever the boundary conditions are changed.
8.2.2 Chebyshev Tau-Method

To be satisfied:
∂_t u = ∂_x u,    u(+1, t) = g(t),
i.e. the boundary condition represents one more condition on the expansion coefficients ⇒ introduce 1 extra unknown.
Expand in N + 2 modes:
P_{N+1}(u) = Σ_{k=0}^{N} û_k T_k(x) + û_{N+1} T_{N+1}(x)

Project the PDE onto T_0, …, T_N ⇒ N + 1 equations:
⟨T_l, ∂_t P_{N+1}(u) − ∂_x P_{N+1}(u)⟩ = 0,    0 ≤ l ≤ N

and satisfy the boundary condition:
Σ_{k=0}^{N} û_k T_k(x = +1) + û_{N+1} T_{N+1}(x = +1) = g(t)

Use orthogonality,
c_l ∂_t û_l = Σ_{k=0}^{N+1} û_k ⟨T_l, ∂_x T_k⟩,
and T_k(x = 1) = 1:
Σ_{k=0}^{N+1} û_k = g(t)

Thus: N + 2 equations for the N + 2 unknowns. Should work.
Note:
- For p boundary conditions expand in N + 1 + p modes, project the PDE onto the first N + 1 modes, and use the remaining p modes to satisfy the boundary conditions.
Spurious Instabilities

The method can lead to spurious instabilities and eigenvalues.
Example: incompressible Stokes equation in two dimensions,
∂_t v = −(1/ρ) ∇p + ν ∇²v,    ∇·v = 0

Introduce streamfunction and vorticity:
v = (∂_y ψ, −∂_x ψ) = ∇×(ψ k̂),    ω = (∇×v)_z = −∇²ψ

Eliminate the pressure from the Stokes equation by taking the curl:
∂_t ω = ν ∇²ω

Consider parallel channel flow with v depending only on the transverse coordinate x, v = ĵ v(x):
∂_t ω = ν ∂_x² ω    (21)
ω = −∂_x² ψ    (22)
Boundary conditions at x = 0, L:
v_x = 0  ⇒  ∂_y ψ = 0
v_y = 0  ⇒  ∂_x ψ = 0

The boundary condition ∂_y ψ = 0 implies that ψ is constant along the wall. If there is no net flux through the channel, then ψ has to be equal on both sides of the channel:
ψ = 0    at x = 0, L

Can combine both equations (21, 22) into a single equation for ψ,
∂_t ∂_x² ψ = ν ∂_x⁴ ψ,
with 4 boundary conditions:
ψ = 0 = ∂_x ψ    at x = 0, L
Ansatz:
ψ = e^{σt} ψ̂(x)
σ ∂_x² ψ̂ = ν ∂_x⁴ ψ̂

Expand:
ψ̂(x) = Σ_{k=0}^{N} a_k T_k(x),    ∂_x² ψ̂ = Σ_{k=0}^{N} b_k^(2) T_k(x),    ∂_x⁴ ψ̂ = Σ_{k=0}^{N} b_k^(4) T_k(x)
Results for the eigenvalues:

N     σ_1         σ_2
10    −9.86966    4,272
15    −9.86960    29,439
20    −9.86960    111,226

Notes:
- spurious positive eigenvalues, σ_max = O(N⁴)
- the scheme is unconditionally unstable, useless for time integration
- o.k. to determine eigenvalues as long as the spurious eigenvalues are recognized
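The O(N⁴) growth behind these spurious values is easy to observe for the Chebyshev second-derivative matrix with Dirichlet rows stripped; the `cheb` routine below follows Trefethen's standard construction, and the constant in front of N⁴ is empirically about 0.05:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the Gauss-Lobatto points
    x_j = cos(pi j / N) (standard construction, cf. Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal from negative row sums
    return D, x

ratios = []
for N in (16, 32):
    D, x = cheb(N)
    D2 = (D @ D)[1:-1, 1:-1]             # strip boundary rows/cols: Dirichlet b.c.
    lam_max = np.max(np.abs(np.linalg.eigvals(D2)))
    ratios.append(lam_max / N**4)
print(ratios)                            # roughly constant: lam_max = O(N^4)
```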
Rephrase the problem [5, 4]: expand
ψ = e^{σt} Σ ψ̂_k T_k(x),    ω = e^{σt} Σ ω̂_k T_k(x)
in the PDE:
σ ω̂_k = ν ω̂_k^(2),    ω̂_k = −ψ̂_k^(2)
where ω̂_k^(2) and ψ̂_k^(2) are the coefficients of the expansion of the 2nd derivative.
Previously all boundary conditions were imposed on the first equation.
Physically:
- impose the no-slip condition v_y = 0 on the Stokes equation:
σ ω̂_k = ν ω̂_k^(2)    for 0 ≤ k ≤ N − 2
∂_x ψ(x = ±1) = 0    for k = N − 1, N
- impose incompressibility on the vorticity equation:
ω̂_k = −ψ̂_k^(2)    for 0 ≤ k ≤ N − 2
ψ(x = ±1) = 0    for k = N − 1, N
This scheme is stable.
9 Iterative Methods for Implicit Schemes

Consider as a simple example the nonlinear diffusion equation
∂_t u = ∂_x² u + f(u)
with the θ-scheme (Crank-Nicholson for θ = 1/2; θ = 1 for Newton):
(u^{n+1} − u^n)/Δt = θ ∂_x² u^{n+1} + (1 − θ) ∂_x² u^n + θ f(u^{n+1}) + (1 − θ) f(u^n)

Linearize f(u^{n+1}) (reduced Newton, i.e. only a single Newton step):
f(u^{n+1}) = f(u^n + u^{n+1} − u^n) = f(u^n) + (u^{n+1} − u^n) f′(u^n) + …
and discretize the derivatives (Chebyshev or Fourier or finite differences),
∂_x² u → D_2 u,
then
[(1/Δt − θ f′(u^n)) I − θ D_2] u^{n+1} = [(1/Δt − θ f′(u^n)) I + (1 − θ) D_2] u^n + f(u^n)
Notes:
- in the linear case the matrix on the l.h.s. is constant ⇒ only a single matrix inversion
- in general: a matrix inversion in each time step; for full Newton the matrix changes after each iteration
- finite differences: in one dimension only a tri-diagonal matrix
- pseudospectral: the matrix is full, inversion requires O(N³) operations
- implicit treatment of the nonlinearity is in particular important when the nonlinearity contains spatial derivatives; otherwise it is in many cases sufficient to treat the nonlinear term explicitly (e.g. CNAB)
9.1 Simple Iteration

Goal: replace solving a matrix equation by multiplying by a matrix, which is faster.
Consider the matrix equation
A x = b
Seek an iterative solution scheme
x^{n+1} = x^n + g(x^n);
need to choose g(x) to get convergence to the solution: at a fixed point x^{n+1} = x^n one needs A x^n = b.
Simplest attempt:
g(x) = b − A x
x^{n+1} = (I − A) x^n + b ≡ G x^n + b

Check whether the solution is a stable fixed point: consider the evolution of the error ε^n = x^n − x_e,
ε^{n+1} = x^{n+1} − x_e = (I − A) x^n + b − x_e = (I − A)(x^n − x_e) = (I − A) ε^n
(using b = A x_e), thus
ε^{n+1} = G ε^n

Estimate the convergence:
||ε^{n+1}|| ≤ ||G|| ||ε^n||    and    ||ε^n|| ≤ ||G||^n ||ε^0||;
convergence in the vicinity of the solution is guaranteed for
||G|| < 1
If ε^n is an eigenvector of G,
ε^{n+1} = G ε^n = λ_i ε^n,
we need |λ_i| < 1 for all eigenvalues λ_i.
Define the spectral radius of G,
ρ(G) = max_i |λ_i|;
then: the iteration converges iff ρ(G) < 1.
Define the convergence rate R as the inverse of the number of iterations needed to decrease the error by a factor e:
ρ(G)^{1/R} = 1/e    ⇒    R = −ln ρ(G) > 0
Note:
- for special initial conditions that lie in a direction that contracts faster one could have faster convergence; the rate R is guaranteed.
- for a poor initial guess: possibly no convergence at all.
For Crank-Nicholson (in the linear case)
A = (1/Δt) I − θ D_2,
thus
G = I − A = (1 − 1/Δt) I + θ D_2
Eigenvalues of G:
ρ(G) = O(N²)  Fourier,    ρ(G) = O(N⁴)  Chebyshev
⇒ ρ(G) ≫ 1: no convergence.
9.2 Richardson Iteration

Choose g(x) more carefully:
g(x) = ω (b − A x)
Iteration:
x^{n+1} = x^n + ω (b − A x^n) = G x^n + ω b
with iteration matrix
G = I − ω A
Choose the free parameter ω such that ρ(G) is minimal, i.e.
max_i |1 − ω λ_i|  minimal.
A = (1/Δt) I − θ D_2 has only positive eigenvalues,
O(1) = λ_min ≤ λ ≤ λ_max = O(N^{2,4})
Optimal choice:
ω λ_max − 1 = 1 − ω λ_min    ⇒    ω_opt = 2/(λ_min + λ_max)
Optimal spectral radius:
ρ(G)_min = max_i |1 − ω λ_i| = 1 − ω_opt λ_min = (λ_max − λ_min)/(λ_max + λ_min)
Spectral condition number κ = λ_max/λ_min:
ρ(G)_min = (κ − 1)/(κ + 1) < 1
Notes:
- the Richardson iteration can be made to converge by a suitable choice of ω, independent of the spectral radius of the original matrix
- Fourier and Chebyshev have large κ = O(N^{2,4}) ⇒ ρ very close to 1
- in Crank-Nicholson
A_ij = (1/Δt − θ f′(u^n)) δ_ij − θ D_{2,ij};
the D_2 part corresponds to calculating the second derivative ⇒ can be done using the FFT rather than matrix multiplication.
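A small sanity check of the Richardson iteration with the optimal ω; the diagonal test matrix with κ = 10 is an arbitrary example:

```python
import numpy as np

# Richardson iteration x <- x + w*(b - A x) for an SPD matrix with known spectrum
lam = np.array([1.0, 2.0, 5.0, 10.0])     # eigenvalues; kappa = 10
A, b = np.diag(lam), np.ones(4)
w = 2.0 / (lam.min() + lam.max())         # omega_opt = 2/(lam_min + lam_max)
rho = (10.0 - 1.0) / (10.0 + 1.0)         # predicted rho = (kappa-1)/(kappa+1)

x, x_exact = np.zeros(4), b / lam
for n in range(50):
    x = x + w * (b - A @ x)

err = np.max(np.abs(x - x_exact))
print(err, rho**50)   # worst-mode error is reduced by exactly rho^50 here
```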
9.3 Preconditioning

The range of the eigenvalues of G is very large ⇒ slow convergence.
Further improvement of g(x):
x^{n+1} = x^n + M⁻¹ (b − A x^n)
with M⁻¹ the preconditioner. Iteration matrix:
G = I − M⁻¹ A
Goal: minimize the range of the eigenvalues of G.
Note:
- optimal would be M⁻¹ = A⁻¹, then G = 0 ⇒ instant convergence; but that is the original problem
- find an M that is easy to invert and is close to A, i.e. has a similar spectrum
- use M from a finite-difference approximation
9.3.1 Periodic Boundary Conditions: Fourier

For simplicity discuss using the simpler problem
∂_t u = ∂_x² u    with periodic b.c.
and backward Euler:
- spectral A, using Fourier because of the boundary conditions
- finite differences for M

Finite differences:
(1/Δt)(u_j^{n+1} − u_j^n) = (1/Δx²)(u_{j+1}^{n+1} − 2 u_j^{n+1} + u_{j−1}^{n+1})
written as
M u^{n+1} = u^n
with the (cyclic) tridiagonal matrix

M = [ 1/Δt + 2/Δx²     −1/Δx²            0        …      −1/Δx²
        −1/Δx²      1/Δt + 2/Δx²      −1/Δx²      …         0
          0              …                …        …
        −1/Δx²           0               …      −1/Δx²   1/Δt + 2/Δx² ]
Spectral:
A = (1/Δt) I − D_2
Eigenvalues of M⁻¹A: M and A have the same eigenvectors e^{ilx}, so the eigenvalues satisfy
λ_{M⁻¹A} = λ_A / λ_M
Eigenvalues of M:
Σ_j M_ij e^{ilx_j} = [1/Δt − (e^{ilΔx} − 2 + e^{−ilΔx})/Δx²] e^{ilx_i}
λ_M = 1/Δt + (2/Δx²)(1 − cos lΔx)
Eigenvalues of A:
λ_A = 1/Δt + l²
λ_{M⁻¹A} = (1/Δt + l²)/(1/Δt + (2/Δx²)(1 − cos lΔx)) = (Δx²/Δt + Δx² l²)/(Δx²/Δt + 2(1 − cos lΔx))
Range of the eigenvalues:
- l → 0:  λ_{M⁻¹A} → 1 when Δx²/Δt dominates
- l → N/2:  Δx² l² → (2π/N · N/2)² = π²  and  1 − cos lΔx → 2
  ⇒ λ_{M⁻¹A} → π²/4
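The eigenvalue ratio derived above can be tabulated directly; N and Δt are arbitrary illustrative values:

```python
import numpy as np

# eigenvalue ratio lambda_{M^-1 A} for the FD-preconditioned Fourier Laplacian
N, dt = 256, 1.0
dx = 2 * np.pi / N
l = np.arange(1, N // 2 + 1)                  # nonzero Fourier modes
lam = (dx**2 / dt + (l * dx)**2) / (dx**2 / dt + 2 * (1 - np.cos(l * dx)))
print(lam.min(), lam.max())                   # stays between ~1 and pi^2/4
```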
Thus: the ratio of the eigenvalues is O(1) ⇒ fast convergence of the iteration.
In practice
x^{n+1} = x^n + M⁻¹(b − A x^n)
is solved as
M (x^{n+1} − x^n) = b − A x^n
Notes:
- for the Fourier case (periodic boundary conditions) M is almost tri-diagonal ⇒ the equation can be solved fast
- for the Chebyshev case: also tri-diagonal, but the grid points are not equidistant ⇒ need a finite-difference approximation on the same grid,
∂_x² u ≈ 2/(Δx_j (Δx_j + Δx_{j−1})) u_{j+1} − 2/(Δx_j Δx_{j−1}) u_j + 2/(Δx_{j−1} (Δx_j + Δx_{j−1})) u_{j−1}    (23)
with Δx_j = x_{j+1} − x_j
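A quick check of the non-equidistant stencil (23) on a Chebyshev grid; it is exact for quadratics, since it differentiates the local interpolating parabola:

```python
import numpy as np

# non-uniform 3-point stencil (23) for u'', tested on Chebyshev points
N = 16
x = np.cos(np.pi * np.arange(N + 1) / N)[::-1]   # ascending Chebyshev grid
u = x**2                                          # u'' = 2 exactly

d2 = np.empty(N - 1)
for j in range(1, N):
    hm, hp = x[j] - x[j - 1], x[j + 1] - x[j]     # dx_{j-1}, dx_j
    d2[j - 1] = (2 * u[j + 1] / (hp * (hp + hm))
                 - 2 * u[j] / (hp * hm)
                 + 2 * u[j - 1] / (hm * (hp + hm)))
print(d2)   # all entries 2: exact for polynomials up to degree 2
```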
- again the eigenvalues of M⁻¹A can be shown to be O(1)
- for κ ≤ 3 one has ρ = (κ − 1)/(κ + 1) ≤ 1/2 ⇒ ε_n = ε_1/2ⁿ, thus ε ≲ 10⁻⁴ for n ≈ 12
⇒ an implicit method with a computational effort not much more than an explicit one
- the matrix multiplication should be done with a fast transform, e.g. for Fourier
A x^n = [(1/Δt) I − D_2] x^n = (1/Δt) x^n − T⁻¹(−k² T(x^n))
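Applying A via the FFT instead of a dense matrix, as suggested above, can be sketched as follows; N, Δt, and the single-mode test vector are illustrative:

```python
import numpy as np

# apply A = (1/dt) I - D2 via the FFT: D2 x = T^{-1}(-k^2 T(x)) for Fourier
N, dt = 64, 0.1
x_grid = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers
v = np.cos(3 * x_grid)

Av = v / dt - np.fft.ifft(-k**2 * np.fft.fft(v)).real
# for the single mode cos(3x): A v = (1/dt + 9) v
print(np.max(np.abs(Av - (1 / dt + 9) * v)))
```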
9.3.2 Non-Periodic Boundary Conditions: Chebyshev

Need to consider the modified matrices, e.g. D_2^(0,N), and do so also in the finite differences.
1. Fixed values u_{0,N} = γ_{0,N} ⇒ only N − 1 unknowns.
Chebyshev: use D_2^(0,N),
Σ_j [δ_ij/Δt − θ D_{2,ij}^(0,N)] u_j^{n+1} = r.h.s. + θ (D_2)_{i0} γ_0 + θ (D_2)_{iN} γ_N
Finite differences:⁵

[ 1/Δt + 2θ/Δx²      −θ/Δx²            0             0
     −θ/Δx²       1/Δt + 2θ/Δx²     −θ/Δx²           0
       0               …               …             …
       0               0            −θ/Δx²     1/Δt + 2θ/Δx² ] u^{n+1} = [ r.h.s. ] + [ θγ_0/Δx², 0, …, 0, θγ_N/Δx² ]^t
2. Fixed flux ∂_x u|_{0,N} = γ_{0,N}
Chebyshev:
∂_x u_i = Σ_j D̃_ij^(0,N) u_j + δ_{i0} γ_0 + δ_{iN} γ_N
with D̃^(0,N) equal to D with its first and last rows replaced by zeros:

D̃^(0,N) = [ 0 0 … 0 0
                D
             0 0 … 0 0 ]

then
∂_x² u_i = Σ_{jk} D_ij D̃_jk^(0,N) u_k  (→ l.h.s.)  + D_{i0} γ_0 + D_{iN} γ_N  (→ known r.h.s.)
Finite differences: introduce virtual points u_{−1} and u_{N+1}:
∂_x u_0 = (u_1 − u_{−1})/(2Δx) = γ_0    ⇒    u_{−1} = u_1 − 2Δx γ_0
The equation for u_0 is modified:
∂_x² u_0 = (u_1 − 2u_0 + u_{−1})/Δx² = (u_1 − 2u_0 + u_1 − 2Δx γ_0)/Δx²
        = −(2/Δx²) u_0 + (2/Δx²) u_1  (→ l.h.s.)  − (2/Δx) γ_0  (→ r.h.s.)

⁵ The matrix is actually not correct: one has to take into account the non-equidistant grid (cf. (23)).
M is tridiagonal:

M = [ 1/Δt + 2θ/Δx²     −2θ/Δx²           0          0
        −θ/Δx²       1/Δt + 2θ/Δx²     −θ/Δx²        0
          0               …               …          …
          0               0               …             ]
Notes:
- this apparently leads to eigenvalues λ_{M⁻¹A} in the range O(1/N) to O(1) ⇒ κ becomes large with N, convergence not good.
- apparently better to use D̃_ij^(0,N) only to calculate the derivative for the boundary points and to calculate ∂_x² u using D_2 for the interior points (see Streett (1983) as referenced in [2] in Sec. 5.2)
Back to the reaction-diffusion equation
∂_t u = ∂_x² u + f(u)
Newton for Crank-Nicholson yields
[(1/Δt) I − θ D_2 − θ I df(u^n)/du] u^{n+1} = r.h.s.
with the matrix in brackets ≡ A.
Note:
- A depends on u^n ⇒ the eigenvalues depend on u^n and therefore also on time
- the eigenvalues are in general not known
- the choice of ω is not straightforward: trial-and-error technique
9.3.3 First Derivative

Consider the simpler problem
du/dx = f(x)    with periodic b.c.,
i.e.
Σ_j D_ij u_j = f_i
Try the usual central differences for finite-difference preconditioning of the Fourier differentiation matrix:
(u_{j+1} − u_{j−1})/(2Δx)    ⇒    λ_M = 2i sin(lΔx)/(2Δx)
then
λ_{M⁻¹A} = i lΔx / (i sin lΔx)    with −π ≤ lΔx ≤ +π;
since sin π = 0, λ_{M⁻¹A} is unbounded ⇒ no convergence.
Possibilities:
1. Could omit the higher modes (Orszag):
û_k^(c) = û_k  for |k| ≤ 2N/3,    û_k^(c) = 0  for 2N/3 < |k| ≤ N
and calculate the derivative with û^(c):
du_j/dx = Σ_{k=−N}^{N} ik û_k^(c) e^{ikx_j}
Now |lΔx| ≤ 2π/3 and the range of λ_{M⁻¹A} is
1 ≤ λ_{M⁻¹A} ≤ (2π/3)/sin(2π/3) ≈ 2.4.
Omitting these modes would be consistent with anti-aliasing for a quadratic nonlinearity.
2. Want sin(½ lΔx) instead of sin lΔx:
use a staggered grid, i.e. evaluate the derivatives and the differential equation at x_{j+1/2}, but based on the values at the grid points x_j.
Finite differences:
du/dx|_{x_{j+1/2}} = (u_{j+1} − u_j)/Δx = e^{ilx_{j+1/2}} (e^{ilΔx/2} − e^{−ilΔx/2})/Δx
⇒ λ_M = 2i sin(½ lΔx)/Δx
Spectral:
du/dx|_{x_{j+1/2}} = Σ_{l=−N}^{N} il û_l e^{il(x_j + Δx/2)}    ⇒    λ_A = il
thus
λ_{M⁻¹A} = (½ lΔx)/sin(½ lΔx),    1 ≤ λ_{M⁻¹A} ≤ π/2
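Both remedies can be compared numerically: the 2/3-truncated central-difference preconditioner stays bounded by about 2.4, the staggered-grid one by π/2 (the θ sampling below is illustrative):

```python
import numpy as np

theta = np.linspace(1e-6, np.pi, 1000)            # theta = l*dx in (0, pi]

# central differences: l*dx / sin(l*dx) blows up as l*dx -> pi
r_central = theta / np.sin(theta)

# keep only |l| <= 2N/3, i.e. theta <= 2*pi/3: ratio stays below ~2.42
mask = theta <= 2 * np.pi / 3
print(r_central[mask].max())

# staggered grid: (l*dx/2)/sin(l*dx/2) is bounded by pi/2 for all modes
r_stag = (theta / 2) / np.sin(theta / 2)
print(r_stag.max())
```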
For the wave equation one would get a similar problem with central-difference preconditioning:
λ_{M⁻¹A} = (Δx/Δt + i lΔx)/(Δx/Δt + i sin lΔx)    with −π ≤ lΔx ≤ +π
In an implicit scheme Δt may be much larger than Δx ⇒ again λ_{M⁻¹A} has a very large range ⇒ poor convergence. Use the same method (staggered grid).
Note:
- a one-sided difference would not have this problem either:
(u_{j+1} − u_j)/Δx    ⇒    λ_M = (e^{ilΔx} − 1)/Δx    ⇒    λ_{M⁻¹A} = i lΔx/(e^{ilΔx} − 1)
10 Spectral Methods and Sturm-Liouville Problems

Spectral methods: expansion in a complete set of functions. Which functions to choose?
To get a complete set consider the eigenfunctions of a Sturm-Liouville problem,
d/dx [p(x) dφ/dx] − q(x) φ + λ w(x) φ = 0,    −1 ≤ x ≤ 1,
where w(x) is the weight function, with
p(x) > 0 in −1 < x < 1,    w(x), q(x) ≥ 0
regular:
p(−1) ≠ 0 ≠ p(+1)
singular:
p(−1) = 0 and/or p(+1) = 0
The boundary conditions are homogeneous:
regular:
α_± φ(±1) + β_± dφ(±1)/dx = 0    (24)
singular:
p(x) dφ/dx → 0 for x → ±1    (25)
i.e. φ cannot become too singular near the boundary.
Sturm-Liouville problems have non-zero solutions only for certain values of λ: the eigenvalues λ_n.
Define the scalar product
⟨u, v⟩_w = ∫_{−1}^{+1} w(x) u*(x) v(x) dx;
the eigenfunctions φ_k form an orthonormal complete set,
⟨φ_k, φ_l⟩ = δ_lk
Examples:
1. p(x) = 1 = w(x) and q(x) = 0:
d²φ/dx² + λ φ = 0:    Fourier, regular Sturm-Liouville problem
2. p(x) = √(1 − x²), q(x) = 0, w(x) = 1/√(1 − x²):
d/dx [√(1 − x²) dφ/dx] + λ φ/√(1 − x²) = 0:    Chebyshev, singular
Expand solutions:
u(x) = Σ_{k=0}^{∞} û_k φ_k(x)
with
û_k = ∫ w(x) φ_k*(x) u(x) dx    (projection)
Consider convergence of the expansion in the L² norm:
||u(x) − Σ_{k=0}^{N} û_k φ_k(x)|| → 0 for N → ∞
Note: pointwise convergence only for almost all x.
Truncation error:
||Σ_{k=N+1}^{∞} û_k φ_k(x)||
depends on the decay of û_k with k.
Want spectral accuracy:
û_k ≤ O(1/k^r)  for all r
Under what condition is spectral accuracy obtained?
Consider
û_k = ∫ w(x) φ_k*(x) u(x) dx
Previously (Fourier and Chebyshev) we did integration by parts. Use the Sturm-Liouville problem,
w(x) φ_k(x) = (1/λ_k) [q φ_k − d/dx (p dφ_k/dx)]:
û_k = (1/λ_k) ∫ u [q φ_k − d/dx (p dφ_k/dx)] dx =
    = (1/λ_k) ∫ u q φ_k dx + (1/λ_k) { [−u p dφ_k/dx]_{−1}^{+1} + ∫ (du/dx) p (dφ_k/dx) dx } =
    = (1/λ_k) ∫ u q φ_k dx + (1/λ_k) { [−u p dφ_k/dx + (du/dx) p φ_k]_{−1}^{+1} − ∫ d/dx [(du/dx) p] φ_k dx }
The boundary terms vanish if
[p (u dφ_k/dx − (du/dx) φ_k)]_{−1}^{+1} = 0
- regular case: dφ_k/dx(±1) = −(α_±/β_±) φ_k(±1), so the boundary term becomes
[p (−(α_±/β_±) u − du/dx) φ_k]_{±1} = 0;
thus u has to satisfy the same strict boundary conditions as the φ_k.
- singular case: p dφ_k/dx → 0 at the boundary; require
φ_k p du/dx → 0 at the boundary,
i.e. we need only the same weak condition on u as on φ:
p du/dx → 0 at the boundary.
For large k (cf. the Fourier case, λ_k = k² and dφ_k/dx = ik φ_k):
λ_k = O(k²),    dφ_k/dx = O(k);
if the boundary conditions are not met, one gets
û_k = O(1/k)
For spectral accuracy it is necessary (but not sufficient) that u satisfies the same boundary conditions as φ.
To consider higher orders use L φ_k = λ_k w φ_k to rewrite compactly (cf. [2]):
û_k = ⟨φ_k, u⟩_w = (1/λ_k) ⟨(1/w) L φ_k, u⟩_w
If φ and u satisfy the same boundary conditions, then they are in the same function spaces and (1/w) L is self-adjoint (in the explicit calculation above the w cancel and one can perform the usual integration by parts):
û_k = (1/λ_k) ⟨φ_k, (1/w) L u⟩_w = (1/λ_k²) ⟨(1/w) L φ_k, (1/w) L u⟩_w = (1/λ_k²) ⟨φ_k, (1/w) L (1/w) L u⟩_w
The last step can be done if (1/w) L u satisfies the same boundary conditions as φ.
Introducing
u^(m) = (1/w) L u^(m−1)
one can write
û_k = (1/λ_k^r) ⟨φ_k, u^(r)⟩ = O(1/λ_k^r)
if
- the u^(m) satisfy the same boundary conditions as φ for all 0 ≤ m ≤ r − 1
- u^(r) is integrable
Conclusion:
- regular Sturm-Liouville problem: since ((1/w) L)^r u has to satisfy the boundary conditions, these boundary conditions (24) are a very restrictive condition. The Fourier case is a regular Sturm-Liouville problem: for spectral accuracy we needed that all derivatives satisfy periodic boundary conditions.
- singular Sturm-Liouville problem: the singular boundary conditions (25) only impose a condition on regularity; they do not prescribe any boundary values themselves.

Simple example:
∂_t u = ∂_x² u + f(x, t),    u(0) = 0 = u(π)
Could use a sine series,
u = Σ_k a_k(t) sin kx,
since the sin kx satisfy the related eigenvalue problem
∂_x² φ + λ φ = 0,    φ = 0 at x = 0, π.
But: this is a regular Sturm-Liouville problem with L = −∂_x² and w = 1.
Spectral convergence only if
u^(r)(0) = 0 = u^(r)(π) for all r,    (26)
i.e. all even derivatives have to vanish at the boundary.
Most functions that satisfy the original boundary conditions u(0) = 0 = u(π) do not satisfy the additional conditions (26),
e.g. the stationary solution for f(x, t) = c,
u = −(1/2) c x² + (1/2) c π x = (c/2) x (π − x);
of course ∂_x² u = −c ≠ 0 at the boundary.
In fact, expanding in a sine series one gets
a_k ∝ (1/k³) (1 − (−1)^k),
i.e. only algebraic decay ∝ k⁻³, not spectral accuracy.
Thus:
- Expansions in "natural" eigenfunctions of a problem are only good if they satisfy a singular Sturm-Liouville problem.
- If they do not satisfy a singular Sturm-Liouville problem, one will most likely not get spectral convergence, even if the functions look very natural for the problem.
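The algebraic (rather than spectral) decay in the example above is easy to observe numerically; the midpoint-rule quadrature with M = 4096 points is just a convenient way to approximate the sine coefficients:

```python
import numpy as np

# sine coefficients of u(x) = (1/2) x (pi - x) (c = 1): u(0) = u(pi) = 0,
# but u'' = -1 does not vanish at the boundary -> only a_k ~ k^{-3}
M = 4096
x = (np.arange(M) + 0.5) * np.pi / M          # midpoint quadrature nodes
u = 0.5 * x * (np.pi - x)

def a(k):
    return 2.0 / np.pi * np.sum(u * np.sin(k * x)) * (np.pi / M)

for k in (1, 3, 5, 7):
    print(k, a(k) * k**3)    # tends to 4/pi for odd k: algebraic decay only
```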
11 Spectral Methods for Incompressible Fluid Dynamics

The Navier-Stokes equations for fluids arise in a wide range of applications.
In many situations the fluid velocities are much smaller than the speed of sound. Density variations can then often be assumed to propagate infinitely fast: the fluid can be assumed to be incompressible,
∂_t u + u·∇u = −∇p + f + ν ∇²u,    ∇·u = 0
Boundary conditions (no-slip condition and impermeable wall):
u = 0 on the boundary.
External forces (or imposed pressure gradients) are included in f.
The effectively infinite wave speed leads to numerical challenges. Mathematically:
- the pressure appears in the momentum equation, but does not have an evolution equation of its own and has no boundary condition at the walls
- the divergence-free condition is an algebraic condition on the velocity and poses a constraint on the momentum equation
- one could write the momentum equation in terms of the vorticity ω = ∇×u; this would get rid of the pressure, but there is no convincing boundary condition for the vorticity
- divergence-free ⇒ can introduce a streamfunction; the boundary conditions can be tricky (can lead to spurious, destabilizing eigenvalues, cf. Sec. 8.2.2)
For concreteness consider flows with boundaries only in one direction, e.g. flow between two plates:
- 1 or 2 directions (x and z) can be approximated by periodic boundary conditions
- no-slip boundary conditions in one direction (y)
There are a number of different approaches that have been taken; we discuss only a few selected ones. Most are formulated in terms of the primitive variables (u, p):
- coupled method: solve the momentum equation and incompressibility simultaneously
- Galerkin method with divergence-free basis functions
- operator-splitting methods
Central aspects [3]:
- the effectively infinite sound speed requires an implicit treatment of the pressure
- the viscosity term has the highest derivative: often also treated implicitly
The discussion here follows [3].
11.1 Coupled Method

Treat u and p simultaneously in coupled equations; usually use a semi-implicit method.
For a first-order method one would get (with an imposed pressure gradient p_x e_x to drive the flow)
(1/Δt) u^{n+1} − ν ∇²u^{n+1} + ∇p^{n+1} = f^{n+1} + p_x e_x + (1/Δt) u^n − u^n·∇u^n    (27)
∇·u^{n+1} = 0
u^{n+1} = 0 on the boundary
Derivatives are implemented via ik in the x-direction and via the Chebyshev differentiation matrix in the y-direction:
u = Σ Û_k(y, t) e^{ikx} = Σ Σ Û_km(t) T_m(y) e^{ikx}
p = Σ P̂_k(y, t) e^{ikx} = Σ Σ P̂_km(t) T_m(y) e^{ikx}    (28)
With (28) and Û = (U, V) the Navier-Stokes equation (27) becomes
(1/Δt) Û_k^{n+1} + ν k² Û_k^{n+1} − ν ∂_y² Û_k^{n+1} + ik P̂_k^{n+1} e_x + ∂_y P̂_k^{n+1} e_y = r_k    (29)
ik U_k^{n+1} + ∂_y V_k^{n+1} = 0
with
r_k = (1/Δt) Û_k^n − (u^n·∇u^n)_k + (p_x e_x)_k    (30)
and boundary condition
Û_k^{n+1}(y = ±1) = 0
The system can be solved
- directly with an iterative method (preconditioning for the y-derivative)
- using the influence matrix method (Kleiser-Schumann)
Discuss here the influence matrix method.
For U one gets from (29)
−ν U″ + λ U + ik P = r_x    (31)
with
U(y = ±1) = 0
and λ = 1/Δt + ν k².
For V one gets
−ν V″ + λ V + P′ = r_y    (32)
with boundary condition
V(y = ±1) = 0.    (33)
Once P is known, U and V can be determined from (31, 32).
To get an equation for the pressure, eliminate u^{n+1} from (27) by taking its divergence and using incompressibility (drop the subscript k and the superscript n + 1):
P″ − k² P = r̄    (34)
with r̄ = ik r_x + ∂_y r_y.
We do not have a boundary condition for the pressure. Instead, using ∇·u = 0 and ∂_x U(y = ±1) = 0, one gets a boundary condition on V:
V′(y = ±1) = 0    (35)
Thus the P-equation is coupled to the V-equation through this additional boundary condition. Need to compute P and V simultaneously using (34, 35, 32, 33):
L (P, V)^t = b,    V(y = ±1) = 0 = V′(y = ±1)    (36)
with
L = [ ∂_y² − k²        0
        ∂_y      −ν ∂_y² + λ ]        b = [ r̄
                                            r_y ]
Slightly strange boundary conditions:
- 2nd-order ODE for P, but no boundary condition for P
- 2nd-order ODE for V, but 4 boundary conditions for V
Consider an auxiliary problem, assuming there is a boundary condition for P:
L (P, V)^t = b,    P(y = ±1) = P_±,    V(y = ±1) = 0    (37)
(36) can be solved by solving 3 versions of (37):
L (P_p, V_p)^t = b,    P_p(y = ±1) = 0,    V_p(y = ±1) = 0    (38)
L (P_+, V_+)^t = 0,    P_+(y = +1) = 1,    P_+(y = −1) = 0,    V_+(y = ±1) = 0    (39)
L (P_−, V_−)^t = 0,    P_−(y = +1) = 0,    P_−(y = −1) = 1,    V_−(y = ±1) = 0    (40)
Expand the solution to (36) as
(P, V)^t = (P_p, V_p)^t + δ_+ (P_+, V_+)^t + δ_− (P_−, V_−)^t    (41)
and impose the boundary conditions of (36):

[ V_+′(+1)   V_−′(+1) ] [ δ_+ ]     [ V_p′(+1) ]
[ V_+′(−1)   V_−′(−1) ] [ δ_− ] = − [ V_p′(−1) ]

with the influence matrix M on the l.h.s.
Since L does not depend on the flow (U, P), the solutions to (39) and to (40) do not depend on the flow:
- (P_+, V_+)^t and (P_−, V_−)^t need to be calculated only once, at the beginning of the code
- the influence matrix M can also be calculated initially
Procedure:
1. Compute (P_p, V_p)^t, which depends on the flow via the inhomogeneity b.
2. Compute δ_±, which provide the correct boundary conditions P_± for (37).
3. With δ_± the solution to (36) is given by (41) (no need to solve (37) explicitly).
Notes:
- in the spectral approach the differential equations in y will be solved using Chebyshev polynomials
- the discussion above was done for continuous differentiation operators, not for discrete differentiation (pseudo-spectral collocation points) ⇒ the solution to the equations obtained from taking the divergence of the NS-equation (i.e. (34, 32, 35)) does not guarantee a divergence-free solution. The error is estimated to be (with N_y grid points in the y-direction)
O( (N_y/Δt) Û_{k,N_y}, (N_y/Δt) Û_{k,N_y−1} )
- a correction (τ-correction step) improves also the stability limit
- with and without τ-correction the code achieves spectral accuracy in space.
11.2 Operator-Splitting Methods

A common way to split the Navier-Stokes equations is into a velocity step
(1/Δt) (u^{n+1/2} − u^n) − ν ∇²u^{n+1/2} = −u^n·∇u^n − p_x e_x    (42)
with a boundary condition
u^{n+1/2}(y = ±1) = g^{n+1/2},
with g^{n+1/2} to be discussed later. The intermediate velocity field u^{n+1/2} is not divergence-free. This is achieved with the pressure step
(1/Δt) (u^{n+1} − u^{n+1/2}) + ∇p^{n+1} = 0    (43)
∇·u^{n+1} = 0
with boundary condition (again u = (u, v))
v^{n+1}(y = ±1) = 0
Notes:
- counting boundary conditions: after Fourier transformation the momentum equation (43) is an algebraic equation in the x-component and a first-order ODE for p in the y-component; the incompressibility condition is a first-order ODE. For two first-order ODEs expect only two boundary conditions, v at both sides. It is not possible to impose also boundary conditions on u.
- in this formulation the time-stepping is only first-order (Euler)
- u^{n+1} is divergence-free but does not satisfy the no-slip condition exactly:
u_slip ≡ u(y = ±1) ≠ 0;
for g^{n+1/2} = 0 one has u_slip = O(Δt)
- modified boundary conditions can improve the accuracy:
g_x^{n+1/2} = Δt ∂_x p^n,    g_y^{n+1/2} = 0    ⇒    u_slip = O(Δt²);
higher-order conditions are possible
For the expansion in Chebyshev modes it is relevant that
- the pressure enters the equation only via its gradient
- T_N′(x_j) = 0 at all x_j = cos(πj/N)
⇒ the pressure mode p_N does not affect the flow field and results in a spurious mode.
To avoid the spurious pressure mode use only N − 1 Chebyshev modes,
p(x, y, t) = Σ_k Σ_{m=0}^{N−1} P̂_km(t) T_m(y) e^{ikx},
and solve the pressure step using the staggered grid points as collocation points,
y_{j+1/2} = cos(π (j + ½)/N),    j = 0 … N−1.
The velocity field is expanded as usual,
u(x, y, t) = Σ_k Σ_{m=0}^{N} Û_km(t) T_m(y) e^{ikx},
and for the velocity step the usual collocation points are used:
y_j = cos(πj/N),    j = 0 … N
Notes:
- The pressure mode P̂_00 also does not affect the flow. However, a spatially homogeneous pressure is also physically irrelevant ⇒ the indeterminacy of P̂_00 does not pose a problem.
- Since two different grids are used, one needs to interpolate (u, p) from one grid to the other by evaluating the T_m at the respective grid points. This introduces additional steps in the algorithm (some slowing down).
Velocity Step

Drop again the subscript k:
ν ∂_y² U^{n+1/2} − λ U^{n+1/2} = −r,    U^{n+1/2}(y = ±1) = g^{n+1/2}(y = ±1)
with λ = 1/Δt + ν k² and r as in (30).
Determine U using the Chebyshev τ-method on the usual (Gauss-Lobatto) collocation points y_j.
Pressure Step

For the transformation between the grids write
Ů = (U(y_0), U(y_1), …, U(y_N))^t,    V̊ = (V(y_0), V(y_1), …, V(y_N))^t
and
P̊ = (P(y_{1/2}), P(y_{3/2}), …, P(y_{N−1/2}))^t
Need to compute the Chebyshev coefficients Û, V̂, and P̂ for U, V, and P based on the values at the respective grid points:
Û = C_0 Ů,    V̂ = C_0 V̊,    P̂ = C_+ P̊
where C_0 and C_+ are the appropriate matrices.
The velocity divergence is needed on the staggered grid points:
∇·u → D̄ Ů ≡ (C_+⁻¹ C_0) (ik Ů + C_0⁻¹ D C_0 V̊)
where D computes the derivative from the Chebyshev coefficients.
The pressure gradient is needed on the regular grid points in the momentum equation:
∇p → Ḡ P̊ ≡ (C_0⁻¹ C_+) (ik P̊, C_+⁻¹ D C_+ P̊)
The pressure step (43) becomes
Ů^{n+1} = Ů^{n+1/2} − Δt Ḡ P̊    at interior points y_j, j = 1 … N−1    (44)
D̄ Ů^{n+1} = 0    at y_{j+1/2}, j = 0 … N−1    (45)
with
Ů_x^{n+1} = Ů_x^{n+1/2} − Δt (Ḡ P̊)_x    at y = ±1    (46)
Ů_y^{n+1} = 0    at y = ±1    (47)
Rewrite these equations to obtain an equation for the pressure. To make use of the divergence condition (45), combine (44) with (47):
Ů^{n+1} = Z (Ů^{n+1/2} − Δt Ḡ P̊)    at y_j, j = 0 … N
where the matrix Z sets the boundary values of the y-component to 0.
Then one can use the divergence condition (45) to eliminate Ů^{n+1} and obtains an equation for the pressure:
D̄ Z Ḡ P̊ = (1/Δt) D̄ Z Ů^{n+1/2}
Once the pressure is known, Ů^{n+1} can be determined directly from (44-47).
Note:
- for more details on operator-splitting and other schemes for incompressible Navier-Stokes see [2, 3]
A Insertion: Testing of Codes

A few suggestions for how to test codes and identify bugs:
- test each term individually if possible:
  - set all but one coefficient in the equation to 0: does the code behave qualitatively as expected from the equation?
  - compare quantitatively with simple analytical solutions (possibly with some coefficients set to 0)
- code blows up:
  - is it a true blow-up? the exact solution should not blow up
  - is the blow-up reasonable for this type of scheme for this problem? Stability? Does decreasing dt increase/decrease the growth?
  - is the blow-up a coding error?
- track variables: use only few modes so you can print out/plot what is going on in each time step
- if the code seems not to do what it should, it is often a good idea to vary the parameters and see whether the behavior of the code changes as expected (e.g. if a parameter was omitted in an expression, the results may not change at all even though the parameters are changed); the response of the code to parameter changes may give an idea of where the error lies.
B Details on Integrating Factor Scheme IFRK4

Some more details for the integrating-factor scheme (keeping in mind that it is usually not as good as the exponential time differencing scheme):
Rewrite (8) with the integrating factor e^{k²t}:
∂_t (e^{k²t} û_k) = k² e^{k²t} û_k + e^{k²t} ∂_t û_k = e^{k²t} f_k(u)    (48)
Introduce the auxiliary variable v_k(t) = e^{k²t} û_k(t):
∂_t v_k = e^{k²t} f_k(e^{−l²t} v_l)    (49)
Note:
- for nonlinear f the Fourier coefficient f_k depends on all Fourier modes of v
It is natural to consider now suitable time-integration methods to solve equation (49).
Example: Forward Euler
v_k^{n+1} = v_k^n + Δt e^{k²t_n} f_k(e^{−k²t_n} v_k^n)
e^{k²(t_n+Δt)} û_k^{n+1} = e^{k²t_n} û_k^n + Δt e^{k²t_n} f_k(û_k^n)
û_k^{n+1} = e^{−k²Δt} (û_k^n + Δt f_k(û_k^n))
Note:
- with forward Euler the integrating factor generates the same scheme as the operator-splitting scheme above
- diffusion and other linear terms are treated exactly:
  - no instability arises from the linear term for any Δt
  - large wave numbers are strongly damped, as they should be (this is also true for operator splitting)
- compare with Crank-Nicholson (in CNAB, say):
û_k^{n+1} = (1 − ½Δt k²)/(1 + ½Δt k²) û_k^n;
for large k²Δt
û_k^{n+1} = −(1 − 4/(Δt k²) + …) û_k^n
⇒ oscillatory behavior and slow decay.
- the FFT is done on the nonlinear term rather than on the linear derivative term (cf. operator splitting)
- But: the fixed points in u depend on the time step Δt and are not computed correctly for large Δt, whereas without the integrating factor the fixed points of the numerical scheme agree exactly with those of the differential equation.
Notes:
- It turns out that the prefactor of the error term is relatively large, in particular compared to the exponential time differencing scheme (cf. Boyd, Chebyshev and Fourier Spectral Methods⁶).
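A minimal sketch of the integrating-factor Euler step û_k ← e^{−k²Δt}(û_k + Δt f_k) for periodic diffusion; with f = 0 the linear part is treated exactly, so even a large Δt incurs no time-stepping error:

```python
import numpy as np

# integrating-factor Euler for u_t = u_xx + f(u), periodic b.c.
N, L, dt = 64, 2 * np.pi, 0.5            # large dt: stable, diffusion exact
x = L * np.arange(N) / N
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
f = lambda u: 0.0 * u                    # pure diffusion, so the scheme is exact

uk = np.fft.fft(np.sin(x))
for n in range(10):
    fk = np.fft.fft(f(np.fft.ifft(uk).real))
    uk = np.exp(-k**2 * dt) * (uk + dt * fk)
u_num = np.fft.ifft(uk).real
u_exact = np.exp(-10 * dt) * np.sin(x)   # exact solution e^{-t} sin x at t = 5
print(np.max(np.abs(u_num - u_exact)))
```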
Details for Runge-Kutta:
In Fourier space
∂_t û_k = −k² û_k + f_k(u)

⁶ See also Cox and Matthews, J. Comp. Phys. 176 (2002) 430, who give a detailed comparison and a further advanced method, exponential time differencing.

For v_k = e^{k²t} û_k then
∂_t v_k = e^{k²t} f_k(v_l e^{−l²t}) ≡ F_k(t, v_l)
Note: F_k(t, v_l) depends explicitly on time even if f(u) does not!
Then
k_{1k} = Δt F_k(t_n, v_l^n) = Δt e^{k²t_n} f_k(v_l^n e^{−l²t_n}) = Δt e^{k²t_n} f_k(u_l^n)
k_{2k} = Δt F_k(t_n + ½Δt, v_l^n + ½ k_{1l})
       = Δt e^{k²(t_n+Δt/2)} f_k((v_l^n + ½ k_{1l}) e^{−l²(t_n+Δt/2)})
       = Δt e^{k²(t_n+Δt/2)} f_k(u_l^n e^{−l²Δt/2} + ½ k_{1l} e^{−l²(t_n+Δt/2)})
The growing exponentials become very large for large k. Introduce
k̃_{1k} = k_{1k} e^{−k²t_n}
k̃_{2k} = k_{2k} e^{−k²(t_n+Δt/2)}
k̃_{3k} = k_{3k} e^{−k²(t_n+Δt/2)}
k̃_{4k} = k_{4k} e^{−k²(t_n+Δt)}
Then
k̃_{1k} = Δt f_k(u_l^n)
k̃_{2k} = Δt f_k(u_l^n e^{−l²Δt/2} + ½ k̃_{1l} e^{−l²Δt/2}) = Δt f_k((u_l^n + ½ k̃_{1l}) e^{−l²Δt/2})
k̃_{3k} = Δt f_k(u_l^n e^{−l²Δt/2} + ½ k̃_{2l})
k̃_{4k} = Δt f_k(u_l^n e^{−l²Δt} + k̃_{3l} e^{−l²Δt/2})
v_k^{n+1} = v_k^n + (1/6)(k_{1k} + 2 k_{2k} + 2 k_{3k} + k_{4k})
û_k^{n+1} e^{k²(t_n+Δt)} = û_k^n e^{k²t_n} + (1/6) e^{k²t_n} [k̃_{1k} + 2 k̃_{2k} e^{k²Δt/2} + 2 k̃_{3k} e^{k²Δt/2} + k̃_{4k} e^{k²Δt}]
Thus
û_k^{n+1} = û_k^n e^{−k²Δt} + (1/6) [k̃_{1k} e^{−k²Δt} + 2 k̃_{2k} e^{−k²Δt/2} + 2 k̃_{3k} e^{−k²Δt/2} + k̃_{4k}]
Note:
- In each of the four stages go to real space to evaluate the nonlinearity, and then transform back to Fourier space to get its Fourier components, in order to evaluate k̃_{ik}, i = 1..4.
C Chebyshev Example: Directional Sensing in Chemotaxis

Levine, Kessler, and Rappel have introduced a model to explain the ability of amoebae (e.g. Dictyostelium discoideum) to sense chemical gradients very sensitively despite the small size of the amoeba (see PNAS 103 (2006) 9761).
The model consists of an activator A, which is generated in response to the external chemical that is to be sensed. The activator is bound to the cell membrane and constitutes the output of the sensing activity (and triggers chemotactic motion), and a diffusing inhibitor B. The inhibitor can attach itself to the membrane (its concentration there is denoted B_m), where it can inactivate A.
The model is given by
∂B/∂t = D ∇²B    inside the cell, −1 < x < +1
with boundary condition
D ∂B/∂n = k_a S − k_b B.
Here ∂/∂n is the outward normal derivative. In a one-dimensional system its sign is opposite on the two sides of the system: ∂/∂n = −∂/∂x at x = −1, whereas ∂/∂n = +∂/∂x at x = +1.
The reactions of the membrane-bound species are given by
dA/dt = k_a S − k_{−a} A − k_i A B_m
dB_m/dt = k_b B − k_{−b} B_m − k_i A B_m
To implement the boundary conditions with Chebyshev polynomials (using the matrix-multiplication approach, with x_0 = +1 and x_N = −1):
∂B_i/∂x = Σ_{j=0}^{N} D_ij B_j    for i = 1, …, N−1
∂B_0/∂x = (1/D)(k_a S_0 − k_b B_0)
∂B_N/∂x = −(1/D)(k_a S_N − k_b B_N)
The second derivative is then given by
D ∂²B_i/∂x² = D Σ_{j=1}^{N−1} Σ_{k=0}^{N} D_ij D_jk B_k + D_i0 (k_a S_0 − k_b B_0) − D_iN (k_a S_N − k_b B_N),
which can be written as
D ∂²B_i/∂x² = Σ_{k=0}^{N} D̃_ik B_k + k_a (D_i0 S_0 − D_iN S_N)
with
D̃_ik = D Σ_{j=1}^{N−1} D_ij D_jk − k_b (δ_{k0} D_i0 − δ_{kN} D_iN),
i.e. the boundary correction matrix has D_i0 in its first column and −D_iN in its last column.
The equations on the membrane are nonlinear. Crank-Nicolson is then implemented most easily not fully implicitly, i.e. no full Newton iteration is performed to solve the nonlinear equations; instead, only a single iteration is performed (semi-implicit). This is equivalent to expanding the terms at the new time around those at the old time. Specifically, with $\Delta A = A^{n+1} - A^n$ and $\Delta B = B^{n+1} - B^n$,
$$\theta\, A^{n+1} B^{n+1} + (1 - \theta)\, A^n B^n = \theta\, (A^n + \Delta A)(B^n + \Delta B) + (1 - \theta)\, A^n B^n$$
$$= \theta \left( A^n B^n + A^n \Delta B + B^n \Delta A + O(\Delta A\, \Delta B) \right) + (1 - \theta)\, A^n B^n$$
$$= \theta \left( A^{n+1} B^n + A^n B^{n+1} \right) + (1 - 2\theta)\, A^n B^n + O(\Delta A\, \Delta B).$$
Ignoring the term $O(\Delta A\, \Delta B)$ is often good enough.
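The linearization above can be sketched on a toy system (this is not the membrane model itself; the system $dA/dt = dB/dt = -AB$, the initial values, and the function names are illustrative assumptions). Each step requires only a small linear solve, and for $\theta = 1/2$ the dropped $O(\Delta A\,\Delta B)$ term is $O(\Delta t^2)$, so second-order accuracy is retained:

```python
import numpy as np

# theta-scheme with  A^{n+1}B^{n+1} ~ A^{n+1}B^n + A^n B^{n+1} - A^n B^n,
# applied to the toy system dA/dt = -AB, dB/dt = -AB.
def step(A, B, dt, theta=0.5):
    # linear 2x2 system for (A^{n+1}, B^{n+1})
    M = np.array([[1 + dt * theta * B, dt * theta * A],
                  [dt * theta * B, 1 + dt * theta * A]])
    r = np.array([A, B]) - dt * (1 - 2 * theta) * A * B
    return np.linalg.solve(M, r)

def integrate(n):
    A, B, dt = 1.0, 2.0, 1.0 / n
    for _ in range(n):
        A, B = step(A, B, dt)
    return A

# halving the step size should reduce the error by roughly 4 (second order)
ref = integrate(4096)
e32 = abs(integrate(32) - ref)
e64 = abs(integrate(64) - ref)
assert 2.5 < e32 / e64 < 6.0
```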
D Background for Homework: Transitions in Reaction-Diffusion Systems
Many systems undergo transitions from steady states to oscillatory ones, or from spatially homogeneous states to states with spatial structure (periodic or more complex).
Examples:
- buckling of a bar or plate upon uniform compression (Euler instability)
- convection of a fluid heated from below: thermal instability through buoyancy or the temperature dependence of the surface tension
- fluid between two rotating concentric cylinders: centrifugal instability
- solid films adsorbed on substrates with a different crystalline structure (cf. Golovin's recent colloquium)
- surface waves on a vertically vibrated liquid
- various chemical reactions, e.g. Belousov-Zhabotinsky:
  - oscillations: in the 1950s Belousov could not get his observations published because the journal reviewers thought such temporal structures were not allowed by the second law of thermodynamics
  - spatial structure: Turing suggested (1952) that different diffusion rates of competing chemicals could lead to spatial structures, which could underlie the formation of spatial structures in biology (segmentation of yellow-jackets, patterning of animal coats, ...)
Common to these systems is that the temporal or spatial structures arise through instabilities of a simpler (e.g. homogeneous) state. Mathematically, these instabilities are bifurcations at which new solutions come into existence.
General analytical approach:
1. find the simple basic state
2. identify the instabilities of the basic state
3. derive simplified equations that describe the structured state in the weakly nonlinear regime
→ leads to equations for the amplitudes of the unstable modes characterizing the structure: Ginzburg-Landau equations
In the homework we consider a simple model in one spatial dimension for a chemical reaction involving two species,
$$\partial_t u = D_1\, \partial_x^2 u + f(u, v)$$
$$\partial_t v = D_2\, \partial_x^2 v + g(u, v)$$
The Brusselator (introduced by Glansdorff and Prigogine, 1971, from Brussels) does not model any specific reaction; it is just a very simple model with rich behavior,
$$f(u, v) = A - (B + 1)\, u + u^2 v$$
$$g(u, v) = B u - u^2 v$$
with A and B external control parameters. In the following, keep A fixed and vary B.
For all parameter values there is a simple homogeneous steady state
$$u = A, \qquad v = \frac{B}{A}.$$
This state may not be stable for all values of B: study its stability by considering small perturbations,
$$u = A + U, \qquad v = \frac{B}{A} + V.$$
Inserting into the original equations, using
$$u^2 v = AB + 2BU + A^2 V + \frac{B}{A}\, U^2 + 2AUV + U^2 V,$$
yields
$$\partial_t U = D_1\, \partial_x^2 U + (B - 1)\, U + A^2 V + F(U, V)$$
$$\partial_t V = D_2\, \partial_x^2 V - B U - A^2 V - F(U, V)$$
with
$$F(U, V) = \frac{B}{A}\, U^2 + 2AUV + U^2 V.$$
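Since no terms were dropped in this expansion, the identity can be checked numerically even for finite (not small) perturbations. The parameter and perturbation values below are arbitrary illustrative numbers:

```python
import numpy as np

# Check that  f(A+U, B/A+V) = (B-1)U + A^2 V + F(U,V)  and
#             g(A+U, B/A+V) = -B U - A^2 V - F(U,V)    hold exactly,
# with f(u,v) = A - (B+1)u + u^2 v,  g(u,v) = B u - u^2 v,
# and  F(U,V) = (B/A)U^2 + 2A U V + U^2 V.
A, B = 1.7, 3.2
U, V = 0.3, -0.4

f = lambda u, v: A - (B + 1) * u + u**2 * v
g = lambda u, v: B * u - u**2 * v
F = (B / A) * U**2 + 2 * A * U * V + U**2 * V

lhs_f, rhs_f = f(A + U, B / A + V), (B - 1) * U + A**2 * V + F
lhs_g, rhs_g = g(A + U, B / A + V), -B * U - A**2 * V - F
assert np.isclose(lhs_f, rhs_f)
assert np.isclose(lhs_g, rhs_g)
```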
Linear stability: omit $F(U, V)$, which is negligible for infinitesimal U and V,
$$\begin{pmatrix} \partial_t U \\ \partial_t V \end{pmatrix}
= \begin{pmatrix} D_1\, \partial_x^2 U \\ D_2\, \partial_x^2 V \end{pmatrix}
+ \underbrace{\begin{pmatrix} B - 1 & A^2 \\ -B & -A^2 \end{pmatrix}}_{M_0}
\begin{pmatrix} U \\ V \end{pmatrix}$$
Exponential ansatz
$$\begin{pmatrix} U \\ V \end{pmatrix} = e^{\lambda t}\, e^{iqx}\, \mathcal{A}
\begin{pmatrix} U_0 \\ V_0 \end{pmatrix} \qquad (50)$$
$$M(\lambda, q) \begin{pmatrix} U_0 \\ V_0 \end{pmatrix} \equiv
\begin{pmatrix} -D_1 q^2 - \lambda + B - 1 & A^2 \\ -B & -D_2 q^2 - \lambda - A^2 \end{pmatrix}
\begin{pmatrix} U_0 \\ V_0 \end{pmatrix} = 0$$
has a non-trivial solution only if
$$\det M(\lambda, q) = 0:$$
$$\lambda^2 + \underbrace{\left[(D_1 + D_2)\, q^2 + A^2 - B + 1\right]}_{\tau(q)} \lambda
+ \underbrace{A^2 + q^2 \left[A^2 D_1 + (1 - B)\, D_2\right] + D_1 D_2\, q^4}_{\delta(q)} = 0.$$
This gives a relation
$$\lambda = \lambda(q).$$
Instability occurs if
$$\operatorname{Re} \lambda(q) > 0 \quad \text{for some } q.$$
In this model there are two possibilities for the onset of instability:
- $\lambda = \pm i\omega$ with $q = 0$: oscillatory instability leading to a Hopf bifurcation; expect oscillations to arise with frequency $\omega$; occurs for $\tau(q = 0) = 0$:
$$B_c^{(H)} = 1 + A^2, \qquad \omega_c = A$$
- $\lambda = 0$ with $q \neq 0$: the instability sets in first at a specific $q = q_c$ (critical wavenumber); expect spatial structure to arise with wavenumber $q_c$; occurs for $\delta(q_c) = 0$:
$$B_c^{(T)} = \left(1 + A \sqrt{\frac{D_1}{D_2}}\right)^2, \qquad q_c^2 = \frac{A}{\sqrt{D_1 D_2}}$$
Here $\delta(q_c, B_c^{(T)}) = 0$ as well as $\left.\frac{d\delta}{dq}\right|_{q_c,\, B_c^{(T)}} = 0$ were used to get the value at which the first mode becomes unstable.
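These thresholds can be checked directly from the dispersion relation. The parameter values below are illustrative (chosen so that the Turing instability precedes the Hopf one, $B_c^{(T)} < B_c^{(H)} = 1 + A^2$):

```python
import numpy as np

# Growth rate Re(lambda(q)) of the Brusselator linearization, from
# lambda^2 + tau(q) lambda + delta(q) = 0, checked against the stated
# Turing threshold B_c^(T) and critical wavenumber q_c.
A, D1, D2 = 2.0, 0.1, 1.0
Bc = (1 + A * np.sqrt(D1 / D2)) ** 2       # B_c^(T)
qc = (A / np.sqrt(D1 * D2)) ** 0.5         # q_c

def max_growth(B, qs):
    tau = (D1 + D2) * qs**2 + A**2 - B + 1
    delta = A**2 + qs**2 * (A**2 * D1 + (1 - B) * D2) + D1 * D2 * qs**4
    lam = (-tau + np.sqrt(tau**2 - 4 * delta + 0j)) / 2   # larger root
    return np.real(lam)

qs = np.linspace(0, 6, 2001)
g = max_growth(Bc, qs)
# at B = B_c^(T) the maximal growth rate is zero, attained at q = q_c
assert abs(g.max()) < 1e-4
assert abs(qs[np.argmax(g)] - qc) < 0.01
# slightly below threshold all modes decay; slightly above, a band grows
assert max_growth(Bc - 0.1, qs).max() < 0
assert max_growth(Bc + 0.1, qs).max() > 0
```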
For small amplitude $\mathcal{A}$ one can do a weakly nonlinear analysis, expanding the equations in $\mathcal{A}$ and $B - B_c^{(H,T)}$, to obtain a Ginzburg-Landau equation for the complex amplitude $\mathcal{A}$,
$$\partial_T \mathcal{A} = d\, \partial_X^2 \mathcal{A} + \mu\, \mathcal{A} - \gamma\, |\mathcal{A}|^2 \mathcal{A}.$$
For a Hopf bifurcation $d$, $\mu$, and $\gamma$ are complex; for a Turing bifurcation they are real.
In the original exponential ansatz (50) the amplitude $\mathcal{A}$ is constant. It turns out that one can allow $\mathcal{A}$ to vary slowly in space and time. The Ginzburg-Landau equation has simple spatially/temporally periodic solutions
$$\mathcal{A} = \mathcal{A}_0\, e^{i\omega t}\, e^{iqx}$$
with
$$\mathcal{A}_0^2 = \frac{\mu_r - d_r\, q^2}{\gamma_r}, \qquad
\omega = \mu_i - d_i\, q^2 - \gamma_i\, \mathcal{A}_0^2.$$
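Plugging the periodic ansatz into the Ginzburg-Landau equation gives these amplitude and frequency relations directly; a quick numerical check (complex coefficient values below are arbitrary illustrative choices, as in the Hopf case):

```python
import numpy as np

# Verify the periodic solution A = A0 exp(i*omega*t + i*q*x) of
#   dA/dT = d A_XX + mu A - gamma |A|^2 A :
# substituting gives  i*omega = -d q^2 + mu - gamma A0^2.
d, mu, gam = 1.0 + 0.5j, 0.8 + 0.3j, 1.0 + 0.2j
q = 0.4

A0sq = (mu.real - d.real * q**2) / gam.real            # real part of relation
omega = mu.imag - d.imag * q**2 - gam.imag * A0sq      # imaginary part

# residual of  i*omega + d q^2 - mu + gamma A0^2 = 0
res = 1j * omega + d * q**2 - mu + gam * A0sq
assert A0sq > 0            # solution exists above onset
assert abs(res) < 1e-12
```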
This leads to solutions for U and V of the form
$$\begin{pmatrix} U \\ V \end{pmatrix} = e^{i(\omega_c + \omega) t}\, e^{i(q_c + q) x}\, \mathcal{A}_0
\begin{pmatrix} U_0 \\ V_0 \end{pmatrix} + \text{h.o.t.}$$
In the homework the system has non-trivial boundaries, which affect the onset of the instabilities. In this case one gets interesting behavior already for values of B slightly below $B_c$: instabilities can arise at the boundaries, which can then interact with the instabilities in the interior of the system.
E Background for Homework: Pulsating Combustion Fronts
Consider a one-dimensional combustible fluid in which the reactants are well mixed (premixed) and in which the concentration of a rate-limiting reactant is given by Y. The temperature of the fluid is given by T. A simple reaction with Arrhenius kinetics is then described by
$$\partial_t T = \kappa\, \partial_x^2 T + q\, Y k(T)$$
$$\partial_t Y = D\, \partial_x^2 Y - Y k(T)$$
with the reaction term
$$k(T) = k_0\, e^{-\frac{E}{k_B T}},$$
with E the activation energy and $k_B$ the Boltzmann constant.
Boundary conditions:
$$T(0, t) = T_l, \qquad T(L, t) = T_r, \qquad Y(0, t) = Y_l, \qquad Y(L, t) = Y_r,$$
and initial conditions:
$$T(x, 0) = T_0, \qquad Y(x, 0) = Y_0.$$
Make the equations dimensionless via
$$C = \frac{Y}{Y_0} \qquad \text{and} \qquad \Theta = \frac{T - T_{ad}}{T_{ad} - T_0}, \qquad T_{ad} = T_0 + q Y_0,$$
i.e.
$$T = T_{ad} + q Y_0\, \Theta.$$
Insert into the Arrhenius law (writing $T_a \equiv T_{ad}$):
$$k(T) = k_0\, e^{-E/k_B T} = k_0\, e^{-E/k_B T_a}\, e^{\frac{E}{k_B}\left(\frac{1}{T_a} - \frac{1}{T}\right)}
= k(T_a) \exp\left[\frac{E}{k_B}\, \frac{T - T_a}{T_a\, T}\right]$$
$$= k(T_a) \exp\left[\frac{E}{k_B T_a}\, \frac{q Y_0\, \Theta}{T_a + q Y_0\, \Theta}\right]
= k(T_a) \exp\left[\frac{Z \Theta}{1 + \beta \Theta}\right]$$
with the Zeldovich number Z given by
$$Z = \frac{E}{k_B T_a}\, \frac{q Y_0}{T_a} \qquad \text{and} \qquad \beta = \frac{q Y_0}{T_a}.$$
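Note that this rewriting of the Arrhenius factor is exact, not an approximation, so it can be verified numerically to machine precision. All numbers below are illustrative test values, with $k_B$ set to 1:

```python
import numpy as np

# Check the exact identity
#   exp(-E/(kB*T)) = exp(-E/(kB*Ta)) * exp(Z*Theta/(1 + beta*Theta))
# with T = Ta + q*Y0*Theta, Z = (E/(kB*Ta))*(q*Y0/Ta), beta = q*Y0/Ta.
kB, E, T0, q, Y0 = 1.0, 8.0, 0.2, 1.0, 0.8
Ta = T0 + q * Y0                      # adiabatic temperature T_ad
Z = (E / (kB * Ta)) * (q * Y0 / Ta)   # Zeldovich number
beta = q * Y0 / Ta

Theta = np.linspace(-0.99, 0.5, 7)    # Theta = -1 corresponds to T = T0
T = Ta + q * Y0 * Theta
lhs = np.exp(-E / (kB * T))
rhs = np.exp(-E / (kB * Ta)) * np.exp(Z * Theta / (1 + beta * Theta))
assert np.allclose(lhs, rhs)
```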
This results in the final equations
$$\partial_t \Theta = \partial_x^2 \Theta + C\, e^{\frac{Z \Theta}{1 + \beta \Theta}} \qquad (51)$$
$$\partial_t C = \frac{1}{\mathrm{Le}}\, \partial_x^2 C - C\, e^{\frac{Z \Theta}{1 + \beta \Theta}} \qquad (52)$$
with the Lewis number given by
$$\mathrm{Le} = \frac{\kappa}{D}.$$
Initial conditions:
$$C = 1, \qquad \Theta = -1,$$
and boundary conditions:
$$\Theta = \Theta_{l,r}, \qquad C = C_{l,r}.$$
For very large activation energy (Z large) the reaction front can be replaced by an internal layer and one can treat the outer solution analytically. A linear stability analysis shows that for Le > 1 and a Zeldovich number above a certain value $Z_c(\mathrm{Le})$ the steadily propagating front becomes unstable to oscillations and a transition to pulsating fronts occurs [8]. In two-dimensional versions of (51,52) instabilities to cellular flames arise for Le < 1 (cf. Fig. 7).
Figure 7: Cellular flame on a porous plug burner (from http://vip.cs.utsa.edu/flames/overview.html, see also [9]).