
Lecture 8

Chebyshev collocation method for differential equations

Katarina Gustavsson

MA5251 Spectral Methods and Applications, 2011

Two point boundary value problem


Time independent linear/non-linear

• Time independent boundary value problem in a general form:

  \nu u_{xx}(x) + p(x) u_x(x) + q(x) u(x) = f(x,u),  -1 \le x \le 1

  where \nu > 0 is a fixed parameter and p(x), q(x) and f are given functions.
• Boundary conditions in general (mixed) form:

  \alpha_- u(-1) + \beta_- u_x(-1) = g_-
  \alpha_+ u(1) + \beta_+ u_x(1) = g_+

• Dirichlet boundary conditions: u(-1) = g_- and u(1) = g_+
• Neumann boundary conditions: u_x(-1) = g_- and u_x(1) = g_+
• The coefficients \alpha_\pm, \beta_\pm and g_\pm are known (a sketch of how such mixed conditions can be imposed in a collocation code is given below).
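As an aside (not from the slides): one common way to impose the general mixed conditions in a collocation setting is to replace the first and last rows of the discrete system by the discrete boundary conditions. Below is a minimal sketch, assuming the differentiation matrix D and the CGL points x introduced later in this lecture (so x(1) = 1 and x(end) = -1); the coefficients nu, p, q, f and the boundary data are arbitrary illustrative choices. The Dirichlet example later in the lecture instead eliminates the boundary values, which is an alternative strategy.

% Sketch: mixed (Robin) boundary conditions by boundary-row replacement.
% Assumes cheb(N) as in the code example later in this lecture; the data
% nu, p, q, f and the boundary coefficients below are arbitrary choices.
N = 16;  [D, x] = cheb(N);  D2 = D*D;
nu = 1;  p = ones(N+1,1);  q = -ones(N+1,1);  f = exp(x);
ap = 1; bp = 0; gp = 1;        % alpha_+, beta_+, g_+ at x = 1
am = 1; bm = 2; gm = 0;        % alpha_-, beta_-, g_- at x = -1
A = nu*D2 + diag(p)*D + diag(q);  F = f;
rowp = zeros(1,N+1);  rowp(1)   = ap;     % alpha_+ u(1) term
rowm = zeros(1,N+1);  rowm(end) = am;     % alpha_- u(-1) term
A(1,:)   = rowp + bp*D(1,:);    F(1)   = gp;   % row enforcing the x = 1 condition
A(end,:) = rowm + bm*D(end,:);  F(end) = gm;   % row enforcing the x = -1 condition
u = A\F;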

Time dependent problems - linear/non-linear


• Heat equation
  u_t - \nu u_{xx} = 0, with boundary conditions given on u(-1,t) = g_- and u(1,t) = g_+ and initial condition u(x,0) = f(x) (a collocation sketch for this equation follows after the list)
• Linear wave equation
  u_t + u_x = 0, with boundary conditions given on u(-1,t) = g_- and u_x(1,t) = g_+ and initial condition u(x,0) = f(x)
• Burgers equation
  u_t + u u_x - \nu u_{xx} = 0, with boundary conditions given on u(-1,t) = g_- and u(1,t) = g_+ and initial condition u(x,0) = f(x)
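As a preview of how such time dependent problems can be handled, the sketch below advances the heat equation with Chebyshev collocation in space and explicit Euler in time. It is only an illustration, not part of the lecture: it assumes the cheb routine used in the code example at the end of this lecture, homogeneous Dirichlet data g_- = g_+ = 0, and an arbitrarily chosen diffusion coefficient and initial condition.

% Sketch: heat equation u_t = nu*u_xx by Chebyshev collocation (method of lines).
% Assumes cheb(N) returns the differentiation matrix D and the CGL points x,
% as in the code example at the end of this lecture.
nu = 0.1;  N = 32;
[D, x] = cheb(N);
D2 = D*D;
A  = nu*D2(2:N,2:N);                % interior points; g_- = g_+ = 0 assumed
u  = exp(-10*x(2:N).^2);            % illustrative initial condition f(x)
dt = 1e-4;                          % small step; explicit Euler needs dt of order N^-4 here
for n = 1:round(0.1/dt)
    u = u + dt*(A*u);               % one Euler step of u_t = nu*u_xx
end
u = [0; u; 0];                      % attach the homogeneous boundary values

In practice an implicit or stiff time integrator would usually be preferred, since the eigenvalues of the spectral second derivative matrix grow rapidly with N (compare the remarks at the end of this lecture).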

Review on the Chebyshev transform


• A function u(x) can be expanded in a Chebyshev series

  u(x) = \sum_{k=0}^{\infty} \hat{u}_k T_k(x),  \qquad  \hat{u}_k = \frac{2}{\pi c_k} \int_{-1}^{1} u(x) T_k(x) \omega(x) \, dx

  where \omega(x) = (1 - x^2)^{-1/2}, c_k = 2 for k = 0 and c_k = 1 for k \ge 1.
• The Chebyshev polynomials are given by T_k(x) = \cos(k \cos^{-1} x).
• In the discrete Chebyshev-Gauss-Lobatto case:

  x_j = \cos(\pi j / N), \quad j = 0,1,2,\dots,N,  \qquad  \omega_j = \frac{\pi}{\tilde{c}_j N}

  where \tilde{c}_j = 2 for j = 0, N and \tilde{c}_j = 1 for j = 1,2,\dots,N-1.
• The discrete Chebyshev coefficients are given by

  \tilde{u}_k = \frac{2}{\tilde{c}_k N} \sum_{j=0}^{N} \frac{u(x_j)}{\tilde{c}_j} \cos\left(\frac{\pi k j}{N}\right)
              = \frac{2}{\tilde{c}_k N} \sum_{j=0}^{N} \frac{u(x_j)}{\tilde{c}_j} \, \Re\left( e^{i \pi k j / N} \right),  \qquad  k = 0,1,\dots,N
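The discrete coefficient formula translates directly into MATLAB. The sketch below is only an illustration (the sample function exp(x) is an arbitrary choice); in practice an FFT-based implementation would be used, as noted on the following slides.

% Sketch: discrete Chebyshev coefficients on the CGL grid via the sum above.
N  = 8;
j  = (0:N)';  k = 0:N;
x  = cos(pi*j/N);                          % CGL points x_j
u  = exp(x);                               % sample function (arbitrary choice)
cb = [2; ones(N-1,1); 2];                  % c~_j: 2 for j = 0,N and 1 otherwise
uk = (2./(cb'*N)) .* ((u./cb)' * cos(pi*j*k/N));   % row of u~_k, k = 0..N

% Check: the interpolant reproduces u at the nodes, sum_k u~_k T_k(x_j) = u(x_j).
uu = cos(acos(x)*k) * uk.';                % T_k(x_j) = cos(k acos x_j)
max(abs(uu - u))                           % should be at rounding-error level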

Review on the Chebyshev transform, cont.


• A function u(x) can be expanded in a discrete Chebyshev series

  u(x_j) = \sum_{k=0}^{N} \tilde{u}_k T_k(x_j),  \qquad  j = 0,1,\dots,N

• Chebyshev interpolant:  I_N u = \sum_{k=0}^{N} \tilde{u}_k T_k(x)    (1)
• Or, in terms of the Chebyshev Lagrange polynomials:  I_N u = \sum_{j=0}^{N} u(x_j) \psi_j(x)    (2)

  \psi_j(x) = \frac{(-1)^{j+1} (1 - x^2) T_N'(x)}{\tilde{c}_j N^2 (x - x_j)}

• Note that I_N u(x_j) = u(x_j) and that (1) \equiv (2); a sketch that evaluates the Lagrange form (2) off the grid follows below.
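The Lagrange form (2) can be evaluated directly at points between the nodes. The sketch below is an illustration only; the sample function and the evaluation points are arbitrary choices, and the points are assumed to avoid the nodes x_j and the endpoints so that no division by zero occurs (T_N'(x) is evaluated as N sin(N acos x)/sin(acos x)).

% Sketch: evaluate I_N u at off-grid points using the Lagrange polynomials psi_j.
N   = 10;
xj  = cos(pi*(0:N)'/N);                 % CGL nodes
uj  = exp(xj);                          % nodal values (arbitrary sample function)
cb  = [2; ones(N-1,1); 2];              % c~_j
y   = linspace(-0.95, 0.95, 8)';        % evaluation points, assumed off the grid
th  = acos(y);
dTN = N*sin(N*th)./sin(th);             % T_N'(y)
INu = zeros(size(y));
for j = 0:N
    psi = (-1)^(j+1)*(1 - y.^2).*dTN ./ (cb(j+1)*N^2*(y - xj(j+1)));
    INu = INu + uj(j+1)*psi;
end
max(abs(INu - exp(y)))                  % small: the interpolant is spectrally accurate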

Review on the Chebyshev transform - derivatives


• Derivative in transform (polynomial) space (discrete):

  (I_N u)'(x_j) := (D_N u)_j = \sum_{k=0}^{N} \tilde{u}_k^{(1)} T_k(x_j),
  (I_N u)''(x_j) := (D_N^2 u)_j = \sum_{k=0}^{N} \tilde{u}_k^{(2)} T_k(x_j)

• The coefficients can be found by a recursive relation:

  c_k \tilde{u}_k^{(1)} = \tilde{u}_{k+2}^{(1)} + 2(k+1) \tilde{u}_{k+1},       k = N-1, N-2, \dots, 0
  c_k \tilde{u}_k^{(2)} = \tilde{u}_{k+2}^{(2)} + 2(k+1) \tilde{u}_{k+1}^{(1)},  k = N-1, N-2, \dots, 0

  with \tilde{u}_{N+1}^{(1)} = \tilde{u}_N^{(1)} = 0 and \tilde{u}_{N+1}^{(2)} = \tilde{u}_N^{(2)} = 0.
• Cost of O(N log N) if the FFT is used; a direct (non-FFT) transcription of the recursion is sketched below.
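The recursion is a short backward loop in MATLAB. The sketch below is an illustration only: the coefficients of the arbitrary sample function exp(x) are computed as on the previous slides, and c_k here is the c_k of the continuous series (c_0 = 2, c_k = 1 for k >= 1).

% Sketch: first-derivative Chebyshev coefficients by the backward recursion.
N   = 8;
x   = cos(pi*(0:N)'/N);
cb  = [2; ones(N-1,1); 2];                                     % c~_j
uk  = (2./(cb*N)) .* (cos(pi*(0:N)'*(0:N)/N) * (exp(x)./cb));  % u~_k of exp(x)
ck  = [2; ones(N,1)];                                          % c_0 = 2, c_k = 1 otherwise
uk1 = zeros(N+2,1);                 % indices 1..N+2 hold k = 0..N+1; top two stay zero
for k = N-1:-1:0
    uk1(k+1) = (uk1(k+3) + 2*(k+1)*uk(k+2)) / ck(k+1);
end
uk1 = uk1(1:N+1);                   % u~_k^(1), k = 0..N

% Check: sum_k u~_k^(1) T_k(x_j) should be close to the exact derivative exp(x_j).
max(abs(cos(acos(x)*(0:N))*uk1 - exp(x)))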

Review on the Chebyshev transform - derivatives

• Derivative in "physical" space:

  (D_N u)_j = \sum_{l=0}^{N} u(x_l) \psi_l'(x_j) = \sum_{l=0}^{N} D_{jl} u(x_l)

• D_{jl} are the entries of the Chebyshev derivative matrix D.
• The entries of the second order derivative matrix D^2 are given by (DD)_{jl}, where DD is the matrix product of D with itself.
• The Chebyshev derivative matrix at the quadrature points is given by

  D_{jl} = \frac{\tilde{c}_j}{\tilde{c}_l} \frac{(-1)^{j+l}}{x_j - x_l},  l \ne j,
  D_{jj} = -\frac{x_j}{2(1 - x_j^2)},  1 \le j \le N-1,
  D_{00} = \frac{2N^2 + 1}{6},  D_{NN} = -\frac{2N^2 + 1}{6}

• The matrix approach costs O(N^2); a direct entry-by-entry construction of D is sketched below.
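The entry formulas translate into a short double loop. This is only a sketch for clarity; the cheb routine called in the code example later in this lecture presumably builds an equivalent matrix in vectorized form.

% Sketch: Chebyshev differentiation matrix from the entry formulas above.
N  = 8;
x  = cos(pi*(0:N)'/N);                 % CGL points
cb = [2; ones(N-1,1); 2];              % c~_j
D  = zeros(N+1);
for j = 0:N
    for l = 0:N
        if j ~= l
            D(j+1,l+1) = (cb(j+1)/cb(l+1)) * (-1)^(j+l) / (x(j+1) - x(l+1));
        end
    end
end
for j = 1:N-1
    D(j+1,j+1) = -x(j+1) / (2*(1 - x(j+1)^2));    % interior diagonal entries
end
D(1,1)     =  (2*N^2 + 1)/6;
D(N+1,N+1) = -(2*N^2 + 1)/6;

max(abs(D*x.^3 - 3*x.^2))    % D differentiates polynomials of degree <= N exactly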

Example of a Dirichlet problem


• We wish to solve, by a Chebyshev collocation method,

  u_{xx} + x u_x - u = \underbrace{(24 + 5x) e^{5x} + (2 + 2x^2)\cos(x^2) - (4x^2 + 1)\sin(x^2)}_{f(x)},  \qquad  -1 \le x \le 1    (1)

• with boundary conditions

  u(-1) = e^{-5} + \sin(1) = g_-  \quad and \quad  u(1) = e^{5} + \sin(1) = g_+

• Let x = (x_0, x_1, \dots, x_N)^T, where x_j = \cos(j\pi/N), j = 0,1,2,\dots,N, are the Chebyshev-Gauss-Lobatto points.
• Let f = (f(x_0), f(x_1), \dots, f(x_N))^T.
• Let u = (g_+, u(x_1), u(x_2), \dots, u(x_{N-1}), g_-)^T and let
  u_M = (u(x_1), u(x_2), \dots, u(x_{N-1}))^T be the vector of unknowns to be determined.
• We will work with the Chebyshev derivative matrices D and D^2.
• The entries of D are given by D_{ij}, 0 \le i,j \le N, and D^2 = DD.

Example of a Dirichlet problem, cont.


• The approximation to (1) is given by

  D_M^2 u_M + x_M .* (D_M u_M) - u_M = F

  where x_M = x(1:N-1), D_M = D(1:N-1, 1:N-1), D_M^2 = D^2(1:N-1, 1:N-1) and

  F = f(1:N-1) - [ D^2(1:N-1, 0) + x_M .* D(1:N-1, 0) ] g_+ - [ D^2(1:N-1, N) + x_M .* D(1:N-1, N) ] g_-

• The approximation can also be written in the form A u_M = F,
  where A is a matrix of size (N-1) x (N-1).
• The matrix A is given by A = D_M^2 + \Lambda D_M - I,
  where \Lambda is a diagonal matrix with the values of x_M on the diagonal and I is the identity matrix.
• The approximate solution is given as the solution to the linear system of equations, u_M = A^{-1} F.

Solution

• The exact solution is u(x) = e^{5x} + \sin(x^2).
• The error is defined by err = \max_{1 \le j \le N-1} | u_j - u(x_j) |.

Convergence
[Figure: the exact solution u(x) together with the collocation solutions for N = 5, 10, 20, and the maximum error err versus N on a semilogarithmic scale; a close-up of the solutions on approximately 0.3 <= x <= 0.8 is also shown.]

• The error obtained by a second order finite difference approximation with N = 512 is approximately the same as with N = 10 in the spectral method.

Code example - MATLAB


N = 10;
[D, x] = cheb(N);      % Chebyshev differentiation matrix D and CGL points x
D2 = D*D;              % Chebyshev differentiation matrix D^2

DM  = D(2:N,2:N);      % for the inner points only
D2M = D2(2:N,2:N);     % note that MATLAB numbers the elements of a
L   = diag(x(2:N),0);  % vector/matrix from 1 to N+1 (not 0 to N)
I   = diag(ones(N-1,1),0);

A = D2M + L*DM - I;    % A as in A*u = F

f = (24+5*x).*exp(5*x) + (2+2*x.^2).*cos(x.^2);
f = f - (4*x.^2+1).*sin(x.^2);
gminus = exp(-5) + sin(1);
gplus  = exp(5) + sin(1);

F = f(2:N) - (D2(2:N,1) + x(2:N).*D(2:N,1))*gplus;
F = F - (D2(2:N,end) + x(2:N).*D(2:N,end))*gminus;

sol = A\F;             % u_M = A^{-1}*F

u = [gplus; sol; gminus];   % add the known boundary values
Remarks
• The collocation method leads to a full and ill-conditioned linear system.
• Gaussian elimination for solving Au = F is only feasible for problems with a small number of unknowns (one-dimensional problems).
• For multi-dimensional problems an iterative method together with an appropriate preconditioner should be used.
• We will get back to this when we discuss multidimensional problems.
• The eigenvalues of A grow as N^4; a numerical check is sketched below.

[Figure: the largest eigenvalue magnitude of A versus N on a logarithmic scale, growing as N^4.]
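The N^4 growth can be checked numerically by assembling A for a few values of N and computing the largest eigenvalue magnitude, which is presumably the quantity plotted in the figure. A minimal sketch, assuming the same cheb routine as in the code example:

% Sketch: largest eigenvalue magnitude of A versus N, expected to grow like N^4.
Nvec = 8:8:64;  lam = zeros(size(Nvec));
for m = 1:length(Nvec)
    N = Nvec(m);
    [D, x] = cheb(N);  D2 = D*D;
    A = D2(2:N,2:N) + diag(x(2:N))*D(2:N,2:N) - eye(N-1);
    lam(m) = max(abs(eig(A)));
end
loglog(Nvec, lam, 'o-', Nvec, Nvec.^4, '--')   % dashed reference line with slope N^4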
