
Special Functions of Mathematical (Geo-)Physics

Dr. M. Gutting
TU Kaiserslautern - FB Mathematik
May 3, 2011
Preface
These are the lecture notes of the course on special functions and their applications in mathematical (geo-)physics that took place in the summer term 2010 at the University of Kaiserslautern. The contents can be summarized as follows.
The lecture gives an elementary approach to the theory of special functions in mathematical physics with special emphasis on geophysically relevant aspects. The essential topics of the lecture are in chronological order: the Gamma function, orthogonal polynomials, spherical polynomials (scalar, vectorial, and tensorial case), and Bessel functions. All fields will be accompanied by geophysically relevant applications.
The lecture is a good preparation for further activities in the field of geomathematics such as Constructive Approximation, Potential Theory, Inverse Problems, etc.
Contents
1 Introduction 7
1.1 Example: Gravitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Example: Geomagnetics (Maxwell's Equations) . . . . . . . . . . . . . . . 8
1.3 Example: Euler Summation Formula Involving the Laplace Operator . . . 10
2 The Gamma Function 21
3 Orthogonal Polynomials 31
3.1 Properties of Orthogonal Polynomials . . . . . . . . . . . . . . . . . . . . . 38
3.2 Quadrature Rules and Orthogonal Polynomials . . . . . . . . . . . . . . . 46
3.3 The Jacobi Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.4 Ultraspherical Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5 Application of the Legendre Polynomials in Electrostatics . . . . . . . . . . 66
3.6 Hermite Polynomials and Applications . . . . . . . . . . . . . . . . . . . . 70
3.7 Laguerre Polynomials and Applications . . . . . . . . . . . . . . . . . . . . 73
4 Spherical Harmonics 77
4.1 Spherical Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.2 Polynomials on the Unit Sphere in R^3 . . . . . . . . . . . . . . . . . . . . 79
4.2.1 Homogeneous Polynomials . . . . . . . . . . . . . . . . . . . . . . . 79
4.2.2 Harmonic Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.2.3 Harmonic Polynomials on the Sphere . . . . . . . . . . . . . . . . . 90
4.3 Closure and Completeness of Spherical Harmonics . . . . . . . . . . . . . . 95
4.4 The Funk-Hecke Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.5 Green's Function with Respect to the Beltrami Operator . . . . . . . . . . 100
4.6 The Hydrogen Atom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5 Vectorial Spherical Harmonics 107
5.1 Notation for Spherical Vector Fields . . . . . . . . . . . . . . . . . . . . . . 107
5.2 Definition of Vector Spherical Harmonics . . . . . . . . . . . . . . . . . . . 112
5.3 The Helmholtz Decomposition Theorem . . . . . . . . . . . . . . . . . . . 113
5.4 Closure and Completeness of Vector Spherical Harmonics . . . . . . . . . . 116
6 Bessel Functions 121
6.1 Derivation and Definition of Bessel Functions . . . . . . . . . . . . . . . . . 121
6.2 Some Orthogonality Relations . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.3 Bessel Functions with Integer Index . . . . . . . . . . . . . . . . . . . . . . 128
7 Summary 135
Chapter 1
Introduction
The main topics of this lecture can be briefly summarized:
- Finding orthogonal/orthonormal basis systems in (weighted) Hilbert spaces. They are directly related to (partial) differential equations and many (geo-)physical applications.
- Green's functions and corresponding integral formulas which are also related to specific (P)DEs.
- Euler summation formulas which result from the conversion of elliptic (P)DEs to integral equations using Green's functions as a bridging tool.
We will consider 1D systems (and some generalizations to $\mathbb{R}^n$) as well as systems on the sphere $\Omega = \mathbb{S}^2 \subset \mathbb{R}^3$ (and some generalizations to $\mathbb{S}^n \subset \mathbb{R}^{n+1}$).
1.1 Example: Gravitation
In first approximation the surface of the Earth is a sphere. If we assume that all mass is contained within this sphere of radius $R$, i.e. $\Omega_R$, we can model the gravitational field as a function $U$ which is harmonic in the exterior of $\Omega_R$, i.e.
\[ \Delta_x U(x) = \left( \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial x_3^2} \right) U(x) = 0, \quad x \in \Omega_R^{\mathrm{ext}}, \text{ i.e. } |x| = \sqrt{x \cdot x} > R. \]
Moreover, it has to fulfill the following decay conditions:
\[ |U(x)| = O\!\left(\frac{1}{|x|}\right) \quad \text{for } |x| \to \infty, \]
\[ |\nabla U(x)| = O\!\left(\frac{1}{|x|^2}\right) \quad \text{for } |x| \to \infty. \]
Potential theory tells us that it suffices to know the function $U$ on the boundary, i.e. on the sphere $\Omega_R$, in order to completely determine $U$ in the exterior $\Omega_R^{\mathrm{ext}}$.
Therefore, we need a complete orthonormal basis on $\Omega_R$ (or on the unit sphere $\Omega = \Omega_1$) to expand $U|_{\Omega_R}$ in a Fourier series. This is also the prerequisite for spherical wavelets and it will lead us to spherical harmonics.
Further applications can be found e.g. in quantum mechanics, in many other geomathematical problems, in crystallography (using higher dimensions).
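As a numerical illustration (a sketch, not part of the lecture), the prototype harmonic function $U(x) = 1/|x|$ can be checked against the conditions above: central second differences confirm that its Laplacian vanishes away from the origin, and the values themselves show the $O(1/|x|)$ decay.

```python
import math

def U(x, y, z):
    # Prototype harmonic potential: U(x) = 1/|x| (Newton potential of a point mass)
    return 1.0 / math.sqrt(x * x + y * y + z * z)

def laplacian(f, x, y, z, h=1e-3):
    # Central second differences for the three second partial derivatives
    d2x = (f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z)) / h ** 2
    d2y = (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z)) / h ** 2
    d2z = (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h)) / h ** 2
    return d2x + d2y + d2z

# Delta U vanishes in the exterior of the unit sphere ...
print(abs(laplacian(U, 1.2, -0.7, 2.0)))  # numerically zero
# ... and U decays like 1/|x|:
print(U(10.0, 0.0, 0.0), U(20.0, 0.0, 0.0))
```

The point and the step size $h$ are arbitrary choices for the check; any exterior point works.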
1.2 Example: Geomagnetics (Maxwell's Equations)
The basis of all electromagnetic considerations is the full system of Maxwell's equations given by
\[ \nabla_x \times e(x,t) + \frac{\partial}{\partial t} b(x,t) = 0, \]
\[ \nabla_x \cdot d(x,t) = F_f(x,t), \]
\[ \nabla_x \times h(x,t) - \frac{\partial}{\partial t} d(x,t) = j_f(x,t), \]
\[ \nabla_x \cdot b(x,t) = 0, \]
where the unknowns are defined as follows (note that capital letters are used for scalars, lower-case letters for vectors in $\mathbb{R}^3$):
$d$ electric displacement,
$e$ electric field,
$h$ magnetic displacement,
$b$ magnetic field,
$F_f$ density of free charges,
$j_f$ density of free currents.
All quantities are understood as averages over a unit volume in space. The electric and magnetic displacement, $d$ and $h$, can be written as
\[ d(x,t) = \varepsilon_0 e(x,t) + p(x,t), \tag{1.1} \]
\[ h(x,t) = \frac{1}{\mu_0} b(x,t) - m(x,t), \tag{1.2} \]
where $p$ is the averaged polarization, $m$ is the (averaged) magnetization, $\varepsilon_0$ is the permittivity of the vacuum and $\mu_0$ is the permeability of the vacuum.
The total charge and current density, respectively, can be written as the sum of the free charges and currents and the bounded ones, i.e.
\[ F(x,t) = F_f(x,t) + F_b(x,t), \qquad j(x,t) = j_f(x,t) + j_b(x,t), \]
where it is well-known that
\[ \nabla_x \cdot p(x,t) = -F_b(x,t), \qquad \nabla_x \cdot j_b(x,t) = -\frac{\partial}{\partial t} F_b(x,t), \]
and
\[ \nabla_x \times m(x,t) = -\frac{\partial}{\partial t} p(x,t) + j_b(x,t). \]
We can now reformulate Maxwell's equations.
\[ \nabla_x \cdot e(x,t) \overset{(1.1)}{=} \frac{1}{\varepsilon_0} \nabla_x \cdot (d(x,t) - p(x,t)) = \frac{1}{\varepsilon_0} \left( F_f(x,t) + F_b(x,t) \right) \]
\[ \nabla_x \cdot e(x,t) = \frac{1}{\varepsilon_0} F(x,t) \tag{1.3} \]
\[ \nabla_x \times e(x,t) = -\frac{\partial}{\partial t} b(x,t) \tag{1.4} \]
\[ \nabla_x \cdot b(x,t) = 0 \tag{1.5} \]
\[ \nabla_x \times b(x,t) \overset{(1.2)}{=} \mu_0 \left( \nabla_x \times h(x,t) + \nabla_x \times m(x,t) \right) = \mu_0 \left( j_f(x,t) + \frac{\partial}{\partial t} d(x,t) + \nabla_x \times m(x,t) \right) \]
\[ \nabla_x \times b(x,t) = \mu_0 \left( j_f(x,t) + \nabla_x \times m(x,t) + \varepsilon_0 \frac{\partial}{\partial t} e(x,t) + \frac{\partial}{\partial t} p(x,t) \right) \tag{1.6} \]
In most geomathematical problems this system of equations is too detailed to describe the occurring phenomena. It has to be reduced as follows. Let $L$ be the typical length scale of the discussed geomathematical problem and $T$ be the typical time scale. In most problems we have $L = 10^2\,\mathrm{km}$ to $10^3\,\mathrm{km}$ and $T$ = hours to days, such that we get for the typical velocity of the system
\[ \frac{L}{T} \ll c, \]
where $c$ is the speed of light ($c = 299\,792\,458\,\mathrm{m/s}$). Thus, it can be shown that the terms $\varepsilon_0 \frac{\partial}{\partial t} e(x,t) + \frac{\partial}{\partial t} p(x,t)$ can be neglected. Hence, Maxwell's equations partially decouple and the resulting equations for the magnetic field are
\[ \nabla_x \cdot b(x,t) = 0, \]
\[ \nabla_x \times b(x,t) = \mu_0 \left( j_f(x,t) + \nabla_x \times m(x,t) \right). \]
Since $\operatorname{div} \operatorname{curl} = 0$ we can furthermore conclude (by applying $\nabla_x \cdot$ to the second equation) that
\[ \nabla_x \cdot \left( \mu_0 \left( j_f(x,t) + \nabla_x \times m(x,t) \right) \right) = 0. \tag{1.7} \]
For solving this system of equations, data of the magnetic field of the Earth are primarily available in the exterior of the Earth, i.e. at the Earth's surface or at satellite altitude. Thus, we can assume that the magnetization $m$ of the surrounding medium can be neglected. We, therefore, arrive at the pre-Maxwell equations
\[ \nabla_x \cdot b(x,t) = 0, \tag{1.8} \]
\[ \nabla_x \times b(x,t) = \mu_0 j_f(x,t). \tag{1.9} \]
Furthermore, we have due to (1.7)
\[ \nabla_x \cdot j_f(x,t) = 0. \]
In earlier concepts, geoscientists assumed that the current density $j$ is also negligible in the spherical shell $\Omega_{(\sigma_1,\sigma_2)} = \{ x \in \mathbb{R}^3 \mid \sigma_1 < |x| < \sigma_2 \}$ in which the magnetic field data is measured. This yields
\[ \nabla_x \times b(x,t) = 0, \qquad \nabla_x \cdot b(x,t) = 0. \]
Hence, the magnetic field $b$ can be written as the gradient of a scalar potential $U$, i.e.
\[ b(x,t) = \nabla_x U(x,t), \quad x \in \Omega_{(\sigma_1,\sigma_2)}, \]
where $U$ fulfills the Laplace equation
\[ \Delta_x U(x,t) = 0, \quad x \in \Omega_{(\sigma_1,\sigma_2)}. \]
This so-called Gauss representation yields a spherical harmonic expansion of the scalar potential $U$ which is similar to the modeling of the gravitational field of the Earth.
Modern satellite missions like CHAMP, which is measuring the Earth's magnetic field, are located in the ionosphere, a region where the assumption $j_f = 0$ is not valid. Therefore, we have to deal with the pre-Maxwell equations
\[ \nabla_x \cdot b(x,t) = 0, \qquad \nabla_x \times b(x,t) = \mu_0 j_f(x,t), \quad x \in \Omega_{(\sigma_1,\sigma_2)}. \]
A new concept of modeling this situation has to be applied which is the so-called Mie representation. This also yields the need for basis systems for vector-valued functions on the sphere, i.e. for vector spherical harmonics.
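The identity $\operatorname{div} \operatorname{curl} = 0$ used above can be illustrated numerically. The following sketch (an illustration with an arbitrarily chosen smooth vector field, not part of the lecture) evaluates the divergence of the curl with nested central differences; since the discrete difference operators commute, the result is zero up to rounding.

```python
import math

def F(x, y, z):
    # An arbitrary smooth vector field for the check
    return (math.sin(y * z), x * x * z, math.exp(x) * y)

def curl(p, h=1e-4):
    # Central differences for the partial derivatives of the components of F
    def d(i, j):
        q1, q2 = list(p), list(p)
        q1[j] += h
        q2[j] -= h
        return (F(*q1)[i] - F(*q2)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

def div_curl(p, h=1e-4):
    x, y, z = p
    return ((curl((x + h, y, z))[0] - curl((x - h, y, z))[0]) +
            (curl((x, y + h, z))[1] - curl((x, y - h, z))[1]) +
            (curl((x, y, z + h))[2] - curl((x, y, z - h))[2])) / (2 * h)

print(abs(div_curl((0.5, -1.0, 0.3))))  # numerically zero
```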
1.3 Example: Euler Summation Formula Involving
the Laplace Operator
First we give a brief introduction to some parts of the theory of one-dimensional periodical functions. These results will be used to derive the Euler summation formula in its classical form.
Figure 1.1: The integer lattice $\Lambda = \mathbb{Z}$.
Let $\Lambda \, (= \mathbb{Z})$ denote the additive group of points in $\mathbb{R}$ having integral coordinates (the addition being, of course, the one derived from the vector structure of $\mathbb{R}$). The fundamental cell $\mathcal{F}$ of $\Lambda$ is given by
\[ \mathcal{F} = \left\{ t \in \mathbb{R} \;\middle|\; -\frac{1}{2} \le t < \frac{1}{2} \right\}. \]
Note that it is a half-open interval.
Figure 1.2: The fundamental cell $\mathcal{F} = [-1/2, 1/2)$.
Definition 1.1
A function $F: \mathbb{R} \to \mathbb{C}$ is called $\Lambda$-periodical if
\[ F(x + g) = F(x) \quad \text{for all } x \in \mathcal{F} \text{ and } g \in \Lambda. \]
Example 1.2
The function $\Phi_h: \mathbb{R} \to \mathbb{C}$ given by
\[ x \mapsto \Phi_h(x) = e^{2\pi i h x}, \quad h \in \Lambda, \]
is $\Lambda$-periodical:
\[ \Phi_h(x + g) = e^{2\pi i h (x + g)} = e^{2\pi i h x} e^{2\pi i h g} = e^{2\pi i h x} \cdot 1 = \Phi_h(x) \]
for all $x \in \mathcal{F}$ and all $g \in \Lambda$.
Function spaces of $\Lambda$-periodical functions
The space of all $F \in C^{(m)}(\mathbb{R})$ that are $\Lambda$-periodical is denoted by $C^{(m)}_\Lambda(\mathbb{R})$, $0 \le m \le \infty$.
$L^2_\Lambda(\mathbb{R})$ is the space of all $F: \mathbb{R} \to \mathbb{C}$ that are $\Lambda$-periodical and are Lebesgue-measurable on $\mathcal{F}$ with
\[ \|F\|_{L^2_\Lambda(\mathbb{R})} = \left( \int_{\mathcal{F}} |F(x)|^2 \, dx \right)^{1/2} < \infty. \]
Clearly, the space $L^2_\Lambda(\mathbb{R})$ is the completion of $C^{(0)}_\Lambda(\mathbb{R})$ with respect to the norm $\|\cdot\|_{L^2_\Lambda(\mathbb{R})}$:
\[ L^2_\Lambda(\mathbb{R}) = \overline{C^{(0)}_\Lambda(\mathbb{R})}^{\;\|\cdot\|_{L^2_\Lambda(\mathbb{R})}}. \]
Remark 1.3
An easy calculation shows us that the system $\{\Phi_h\}_{h \in \Lambda}$ is orthonormal with respect to the $L^2_\Lambda(\mathbb{R})$-inner product:
\[ \langle \Phi_h, \Phi_{h'} \rangle_{L^2_\Lambda(\mathbb{R})} = \int_{\mathcal{F}} \Phi_h(x) \overline{\Phi_{h'}(x)} \, dx = \int_{-1/2}^{1/2} e^{2\pi i h x} e^{-2\pi i h' x} \, dx = \delta_{h h'} = \begin{cases} 1, & h = h', \\ 0, & h \ne h'. \end{cases} \]
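The orthonormality relation can also be observed by numerical quadrature over the fundamental cell (an illustrative sketch, not part of the lecture):

```python
import cmath

def inner(h, hp, N=1000):
    # Midpoint rule for the inner product of Phi_h and Phi_{h'} over F = [-1/2, 1/2)
    s = 0j
    for j in range(N):
        x = -0.5 + (j + 0.5) / N
        s += cmath.exp(2j * cmath.pi * h * x) * cmath.exp(-2j * cmath.pi * hp * x)
    return s / N

print(abs(inner(3, 3) - 1))  # 1 for h = h' ...
print(abs(inner(3, 5)))      # ... and 0 for h != h', i.e. delta_{hh'}
```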
Note that (with $\nabla_x = d/dx$)
\[ \nabla_x \Phi_h(x) = \frac{d}{dx} \Phi_h(x) = \frac{d}{dx} e^{2\pi i h x} = 2\pi i h \, \Phi_h(x), \]
such that
\[ \Delta_x \Phi_h(x) = \left( \frac{d}{dx} \right)^2 \Phi_h(x) = (2\pi i h)^2 \Phi_h(x) = \underbrace{-4\pi^2 h^2}_{-\Delta^\wedge(h)} \Phi_h(x), \quad h \in \Lambda, \; x \in \mathbb{R}. \]
From classical Fourier analysis we know that $\Delta$ has a half-bounded and discrete eigenspectrum $\{\Delta^\wedge(h)\}_{h \in \Lambda}$ such that
\[ \left( \Delta_x + \Delta^\wedge(h) \right) \Phi_h(x) = 0, \quad x \in \mathcal{F}, \]
with eigenvalues $\Delta^\wedge(h)$ given by
\[ \Delta^\wedge(h) = 4\pi^2 h^2, \quad h \in \Lambda, \]
and eigenfunctions $\Phi_h(x) = e^{2\pi i h x}$, $h \in \Lambda$, $x \in \mathcal{F}$. The orthonormal system $\{\Phi_h\}_{h \in \Lambda}$ of eigenfunctions $\Phi_h: x \mapsto \Phi_h(x)$, $x \in \mathbb{R}$, is closed in the space $C^{(0)}_\Lambda(\mathbb{R})$, i.e., for every $\varepsilon > 0$ and every $F \in C^{(0)}_\Lambda(\mathbb{R})$ there exist an integer $N (= N(\varepsilon))$ and a linear combination $\sum_{h \in \Lambda, |h| \le N} a_h \Phi_h$ such that
\[ \sup_{x \in \mathcal{F}} \left| F(x) - \sum_{h \in \Lambda, |h| \le N} a_h \Phi_h(x) \right| \le \varepsilon. \]
Since
\[ \|F\|_{L^2_\Lambda(\mathbb{R})} = \left( \int_{\mathcal{F}} |F(x)|^2 \, dx \right)^{1/2} \le \sup_{x \in \mathcal{F}} |F(x)| = \|F\|_{C^{(0)}_\Lambda(\mathbb{R})}, \quad F \in C^{(0)}_\Lambda(\mathbb{R}), \]
the closure of $\{\Phi_h\}_{h \in \Lambda}$ in $(C^{(0)}_\Lambda(\mathbb{R}), \|\cdot\|_{C^{(0)}_\Lambda(\mathbb{R})})$ implies the closure in $(C^{(0)}_\Lambda(\mathbb{R}), \|\cdot\|_{L^2_\Lambda(\mathbb{R})})$. Since $C^{(0)}_\Lambda(\mathbb{R})$ is dense in $L^2_\Lambda(\mathbb{R})$ with respect to the norm $\|\cdot\|_{L^2_\Lambda(\mathbb{R})}$, we find that $\{\Phi_h\}_{h \in \Lambda}$ is a complete orthonormal system in $L^2_\Lambda(\mathbb{R})$.
Euler (Green) Function for the Laplace Operator
In this section we introduce the $\Lambda$-Euler (Green) function ($\Lambda = \mathbb{Z}$) with respect to the one-dimensional Laplacian $\Delta = \nabla^2$, $\nabla = \frac{d}{dx}$, i.e., Green's function with respect to $\Delta$ corresponding to $\Lambda$-periodical boundary conditions. Based on the constituting properties of this function the Euler summation formula can be developed by integration by parts.
The formal concept of Green's function $G: \mathbb{R} \to \mathbb{R}$ with respect to the operator $\Delta$ and periodic boundary conditions:
(i) (Periodicity) $G$ is continuous in $\mathbb{R}$, and for all $x \notin \Lambda$ and $g \in \Lambda$
\[ G(x + g) = G(x). \]
(ii) (Differential equation) $G$ is twice continuously differentiable for all $x \notin \Lambda$ with
\[ \Delta_x G(x) = 0. \]
(iii) (Characteristic singularity)
\[ G(x) - \frac{1}{2} x \, \operatorname{sign}(x) \]
is continuously differentiable for all $x \in \mathcal{F}$.
We want to (formally) obtain the Dirac functional by application of the differential operator, i.e. $\Delta_x G(x) = \delta(x)$. However, the conditions above lead to a contradiction for $h = 0$ (see Remark 1.8). Therefore, such a $G$ does not exist. We have to modify the conditions for the Green function.
Definition 1.4
A function $G(\Delta; \cdot): \mathbb{R} \to \mathbb{R}$ is called $\Lambda$-Euler (Green) function with respect to the operator $\Delta$, if it satisfies the following properties:
(i) (Periodicity) $G(\Delta; \cdot)$ is continuous in $\mathbb{R}$, and for all $x \notin \Lambda$ and $g \in \Lambda$
\[ G(\Delta; x + g) = G(\Delta; x). \]
(ii) (Differential equation) $G(\Delta; \cdot)$ is twice continuously differentiable for all $x \notin \Lambda$ with
\[ \Delta_x G(\Delta; x) = -1. \]
(iii) (Characteristic singularity)
\[ G(\Delta; x) - \frac{1}{2} x \, \operatorname{sign}(x) \]
is continuously differentiable for all $x \in \mathcal{F}$.
(iv) (Normalization)
\[ \int_{\mathcal{F}} G(\Delta; x) \, dx = 0. \]
Lemma 1.5
$G(\Delta; \cdot)$ is uniquely determined by the properties (i)-(iv).
Proof. See Exercise 1.1.
Theorem 1.6
The Green function $G(\Delta; \cdot)$ is the negative Bernoulli function of degree 2, i.e.
\[ G(\Delta; x) = -\frac{(x - \lfloor x \rfloor)^2}{2} + \frac{x - \lfloor x \rfloor}{2} - \frac{1}{12}, \]
where for $x \in \mathbb{R}$ the symbol $\lfloor x \rfloor$ means that integer for which holds $\lfloor x \rfloor \le x < \lfloor x \rfloor + 1$ (floor operation).
Proof. See Exercise 1.2.
Figure 1.3 gives an illustration of the $\Lambda$-Euler (Green) function with respect to $\Delta$.
Figure 1.3: Graphical illustration of the $\Lambda$-Euler (Green) function $G(\Delta; \cdot)$.
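The defining properties of the closed form can be checked directly. The sketch below (illustrative only) verifies periodicity, the second derivative $-1$ away from the lattice, and the vanishing mean over the fundamental cell:

```python
import math

def G(x):
    # Negative Bernoulli function of degree 2 from Theorem 1.6
    t = x - math.floor(x)
    return -t * t / 2 + t / 2 - 1.0 / 12

# (i) periodicity:
print(abs(G(0.3) - G(7.3)))
# (ii) differential equation G'' = -1 away from the lattice (central difference):
h = 1e-4
print(abs((G(0.3 + h) - 2 * G(0.3) + G(0.3 - h)) / h ** 2 + 1))
# (iv) normalization over the fundamental cell (midpoint rule):
N = 100000
print(abs(sum(G(-0.5 + (j + 0.5) / N) for j in range(N)) / N))
```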
Lemma 1.7
For all $x \in \mathbb{R}$ the Green function possesses the Fourier series representation
\[ G(\Delta; x) = -\sum_{\substack{h \in \Lambda \\ h \ne 0}} \frac{1}{4\pi^2 h^2} \Phi_h(x) = -\sum_{\substack{h \in \Lambda \\ h \ne 0}} \frac{1}{4\pi^2 h^2} \overline{\Phi_h(x)}. \]
Proof. We have to compute the Fourier coefficients. For $h = 0$ we know that
\[ \langle G(\Delta; \cdot), \Phi_0 \rangle_{L^2_\Lambda(\mathbb{R})} = \int_{\mathcal{F}} G(\Delta; x) \cdot 1 \, dx = 0. \]
For $h \ne 0$:
\[ \langle G(\Delta; \cdot), \Phi_h \rangle_{L^2_\Lambda(\mathbb{R})} = \int_{\mathcal{F}} G(\Delta; x) \overline{\Phi_h(x)} \, dx = \lim_{\sigma_1 \to 0+} \int_{-1/2}^{-\sigma_1} G(\Delta; x) e^{-2\pi i h x} \, dx + \lim_{\sigma_2 \to 0+} \int_{\sigma_2}^{1/2} G(\Delta; x) e^{-2\pi i h x} \, dx. \]
Consider now (with $\Delta_x \overline{\Phi_h(x)} = (2\pi i h)^2 \overline{\Phi_h(x)}$):
\[ \int_{\sigma_2}^{1/2} G(\Delta; x) e^{-2\pi i h x} \, dx = \frac{1}{(2\pi i h)^2} \int_{\sigma_2}^{1/2} G(\Delta; x) \Delta_x \overline{\Phi_h(x)} \, dx \]
\[ = -\frac{1}{4\pi^2 h^2} \left[ G(\Delta; x) \nabla_x \overline{\Phi_h(x)} \Big|_{\sigma_2}^{1/2} - \int_{\sigma_2}^{1/2} \nabla_x G(\Delta; x) \nabla_x \overline{\Phi_h(x)} \, dx \right] \]
\[ = -\frac{1}{4\pi^2 h^2} \left[ G(\Delta; x) \nabla_x \overline{\Phi_h(x)} \Big|_{\sigma_2}^{1/2} - \nabla_x G(\Delta; x) \overline{\Phi_h(x)} \Big|_{\sigma_2}^{1/2} + \int_{\sigma_2}^{1/2} \underbrace{\Delta_x G(\Delta; x)}_{= -1} \overline{\Phi_h(x)} \, dx \right]. \]
Analogously,
\[ \int_{-1/2}^{-\sigma_1} G(\Delta; x) e^{-2\pi i h x} \, dx = -\frac{1}{4\pi^2 h^2} \left[ G(\Delta; x) \nabla_x \overline{\Phi_h(x)} \Big|_{-1/2}^{-\sigma_1} - \nabla_x G(\Delta; x) \overline{\Phi_h(x)} \Big|_{-1/2}^{-\sigma_1} + \int_{-1/2}^{-\sigma_1} \underbrace{\Delta_x G(\Delta; x)}_{= -1} \overline{\Phi_h(x)} \, dx \right]. \]
Taking the limits we obtain (note that $G(\Delta; \cdot)$ is continuous on $\mathbb{R}$ and $\Phi_h \in C^{(\infty)}(\mathbb{R})$): the boundary terms at $\pm \frac{1}{2}$ cancel by the periodicity of $G(\Delta; \cdot)$, $\nabla G(\Delta; \cdot)$ and $\overline{\Phi_h}$; the terms $G(\Delta; \sigma_2) \nabla_x \overline{\Phi_h(\sigma_2)}$ and $G(\Delta; -\sigma_1) \nabla_x \overline{\Phi_h(-\sigma_1)}$ cancel in the limit by the continuity of $G(\Delta; \cdot)$; and $\int_{\mathcal{F}} \overline{\Phi_h(x)} \, dx = 0$ for $h \ne 0$. Hence,
\[ \langle G(\Delta; \cdot), \Phi_h \rangle_{L^2_\Lambda(\mathbb{R})} = -\frac{1}{4\pi^2 h^2} \left( \lim_{\sigma_2 \to 0+} \nabla_x G(\Delta; \sigma_2) - \lim_{\sigma_1 \to 0+} \nabla_x G(\Delta; -\sigma_1) \right) = -\frac{1}{4\pi^2 h^2} = -\frac{1}{\Delta^\wedge(h)} \]
due to the characteristic singularity of the Green function. In more detail:
\[ \nabla_x \left( G(\Delta; x) - \frac{1}{2} x \, \operatorname{sign}(x) \right) = \begin{cases} \nabla_x G(\Delta; x) - \frac{1}{2}, & x > 0, \\ \nabla_x G(\Delta; x) + \frac{1}{2}, & x < 0, \end{cases} \]
which has to be continuous. Therefore, we obtain
\[ \lim_{\sigma_2 \to 0+} \nabla_x G(\Delta; \sigma_2) - \frac{1}{2} = \lim_{\sigma_1 \to 0+} \nabla_x G(\Delta; -\sigma_1) + \frac{1}{2}, \]
which gives us
\[ \lim_{\sigma_2 \to 0+} \nabla_x G(\Delta; \sigma_2) - \lim_{\sigma_1 \to 0+} \nabla_x G(\Delta; -\sigma_1) = \frac{1}{2} - \left( -\frac{1}{2} \right) = 1. \]
This concludes our proof.
Remark 1.8
Reconsider now the first definition (with $\Delta_x G(x) = 0$ for $x \notin \Lambda$):
\[ \int_{\sigma_2}^{1/2} \Big( \underbrace{\Delta_x G(x)}_{= 0} \overline{\Phi_h(x)} - G(x) \Delta_x \overline{\Phi_h(x)} \Big) dx + \int_{-1/2}^{-\sigma_1} \Big( \underbrace{\Delta_x G(x)}_{= 0} \overline{\Phi_h(x)} - G(x) \Delta_x \overline{\Phi_h(x)} \Big) dx \]
\[ = \Big( \nabla_x G(x) \overline{\Phi_h(x)} - G(x) \nabla_x \overline{\Phi_h(x)} \Big) \Big|_{\sigma_2}^{1/2} + \Big( \nabla_x G(x) \overline{\Phi_h(x)} - G(x) \nabla_x \overline{\Phi_h(x)} \Big) \Big|_{-1/2}^{-\sigma_1}. \]
Take the limits $\sigma_1 \to 0+$ and $\sigma_2 \to 0+$ for both sides of the equation. The left hand side becomes:
\[ -\int_{\mathcal{F}} G(x) \Delta_x \overline{\Phi_h(x)} \, dx = 4\pi^2 h^2 \int_{\mathcal{F}} G(x) \overline{\Phi_h(x)} \, dx, \]
which is $0$ for $h = 0$. The right hand side becomes (the terms with $G$ itself cancel by continuity, and $\overline{\Phi_0} = 1$):
\[ \overline{\Phi_h(0)} \left( \lim_{\sigma_1 \to 0+} \nabla_x G(-\sigma_1) - \lim_{\sigma_2 \to 0+} \nabla_x G(\sigma_2) \right) = 1 \cdot (-1) = -1 \ne 0. \]
Thus, we have a contradiction. There is no Fourier series representation of $G$ in the form
\[ G(x) = -\sum_{h \in \Lambda} \frac{1}{\Delta^\wedge(h)} \Phi_h(x), \]
such that (formally) $\Delta G(x) = \sum_{h \in \Lambda} \Phi_h(x) = \delta(x)$ ($\delta$ denotes the Dirac distribution). Therefore, such a Green's function is not possible.
Remark 1.9
For all $x \in \mathbb{R}$, we have
\[ G(\Delta; x) = -\sum_{n=1}^{\infty} \frac{1}{4\pi^2 n^2} \left( e^{2\pi i n x} + e^{-2\pi i n x} \right) = -\sum_{n=1}^{\infty} \frac{1}{2\pi^2 n^2} \cos(2\pi n x) = -\frac{1}{2\pi^2} \sum_{n=1}^{\infty} \frac{1}{n^2} \cos(2\pi n x). \]
The Fourier series of $G(\Delta; \cdot)$ converges absolutely and uniformly for all $x \in \mathbb{R}$. Elementary differentiation yields
\[ G(\nabla; x) = \nabla_x G(\Delta; x) = -(x - \lfloor x \rfloor) + \frac{1}{2}, \quad x \in \mathbb{R} \setminus \Lambda. \]
$G(\nabla; \cdot)$ is called Bernoulli function of degree 1.
The Fourier series of $G(\nabla; \cdot)$ reads as follows:
\[ G(\nabla; x) = \sum_{\substack{h \in \Lambda \\ h \ne 0}} \frac{1}{2\pi i h} \Phi_h(x) = \sum_{n=1}^{\infty} \frac{1}{2\pi i} \frac{e^{2\pi i n x} - e^{-2\pi i n x}}{n} = \frac{1}{\pi} \sum_{n=1}^{\infty} \frac{\sin(2\pi n x)}{n}, \]
where the equality is understood in the $L^2_\Lambda(\mathbb{R})$-sense.
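The convergence of the cosine series of $G(\Delta;\cdot)$ to the closed form of Theorem 1.6 can be observed numerically. The sketch below (illustrative only) compares a partial sum with the Bernoulli representation:

```python
import math

def G_closed(x):
    # Closed form from Theorem 1.6
    t = x - math.floor(x)
    return -t * t / 2 + t / 2 - 1.0 / 12

def G_fourier(x, N=2000):
    # Partial sum of -(1/(2 pi^2)) * sum_{n>=1} cos(2 pi n x)/n^2
    return -sum(math.cos(2 * math.pi * n * x) / n ** 2
                for n in range(1, N + 1)) / (2 * math.pi ** 2)

for x in (0.0, 0.25, 0.7):
    print(x, abs(G_closed(x) - G_fourier(x)))  # small truncation errors
```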
Lemma 1.10
The series $\sum_{n=1}^{\infty} \frac{\sin(2\pi n x)}{n}$ converges uniformly in each compact interval $I \subset (g, g+1)$ with $g \in \Lambda$.
Proof. Let $\Sigma_1(x) = \sin(2\pi x)$, $x \in \mathbb{R}$, and $\Sigma_k(x) = \sum_{n=1}^{k} \sin(2\pi n x)$, $x \in \mathbb{R}$, $k \in \mathbb{N}$. By induction on $k$ we find that for $k \in \mathbb{N}$
\[ \Sigma_k(x) = \frac{\sin(2\pi x) + \sin(2\pi k x) - \sin(2\pi (k+1) x)}{2 (1 - \cos(2\pi x))} \]
for all $x \in \mathbb{R} \setminus \Lambda$. This gives us
\[ |\Sigma_k(x)| \le \frac{3}{2 (1 - \cos(2\pi x))}, \quad x \in \mathbb{R} \setminus \Lambda. \]
Choose a compact interval $I \subset (g, g+1)$, $g \in \Lambda$. Then there is a constant $C$ (depending on $I$) such that
\[ |\Sigma_k(x)| \le C, \quad x \in I. \]
Now, for all $x \in I$ and all $N \ge 2$, partial summation yields
\[ \sum_{n=2}^{N} \frac{\sin(2\pi n x)}{2n} = \sum_{n=2}^{N} \frac{\Sigma_n(x) - \Sigma_{n-1}(x)}{2n} = \frac{\Sigma_N(x)}{2N} + \sum_{n=2}^{N-1} \frac{\Sigma_n(x)}{2 n (n+1)} - \frac{\Sigma_1(x)}{4}, \]
such that
\[ \left| \sum_{n=2}^{N} \frac{\sin(2\pi n x)}{2n} \right| \le \frac{C}{2N} + \frac{C}{2} \sum_{n=2}^{N-1} \frac{1}{n (n+1)} + \frac{1}{4}. \]
The same estimate applied to the tails of the series shows that the partial sums form a uniform Cauchy sequence on $I$. This assures the required result.
Definition 1.11
A function $G(\nabla; \cdot): \mathbb{R} \to \mathbb{R}$ is called $\Lambda$-Euler (Green) function with respect to the operator $\nabla$ if
(i) for all $x \in \mathbb{R} \setminus \Lambda$ and for all $g \in \Lambda$ holds $G(\nabla; x + g) = G(\nabla; x)$,
(ii) $G(\nabla; \cdot)$ is continuously differentiable for all $x \notin \Lambda$ and $\nabla_x G(\nabla; x) = -1$,
(iii) $G(\nabla; x) - \frac{1}{2} \operatorname{sign}(x)$ is continuous for all $x \in \mathcal{F}$,
(iv) $\int_{\mathcal{F}} G(\nabla; x) \, dx = 0$.
Note that $G(\nabla; x) = -(x - \lfloor x \rfloor) + \frac{1}{2}$ (see Figure 1.4 for an illustration).
Figure 1.4: The derivative of the $\Lambda$-Euler (Green) function $G(\Delta; \cdot)$.
Classical Euler Summation Formula
Theorem 1.12
Let $F: [a, b] \to \mathbb{R}$, $a < b$, be a twice continuously differentiable function. Then, for $a, b \notin \Lambda$:
\[ \sum_{g \in \Lambda \cap (a,b)} F(g) = \int_a^b F(x) \, dx + \int_a^b G(\Delta; x) \Delta_x F(x) \, dx + \Big( F(x) \left( \nabla_x G(\Delta; x) \right) - \left( \nabla_x F(x) \right) G(\Delta; x) \Big) \Big|_a^b. \]
For $a, b \in \Lambda$:
\[ \sum_{g \in \Lambda \cap [a,b]} F(g) = \int_a^b F(x) \, dx + \int_a^b G(\Delta; x) \Delta_x F(x) \, dx + \frac{1}{2} \left( F(b) + F(a) \right) - \underbrace{G(\Delta; 0)}_{= -\frac{1}{12}} \left( \nabla F(b) - \nabla F(a) \right), \]
and for $a \in \Lambda$, $b \notin \Lambda$ (analogously for $a \notin \Lambda$, $b \in \Lambda$):
\[ \sum_{g \in \Lambda \cap [a,b]} F(g) = \int_a^b F(x) \, dx + \int_a^b G(\Delta; x) \Delta_x F(x) \, dx + \frac{1}{2} F(a) + F(b) \left( \nabla_x G(\Delta; b) \right) - \Big( G(\Delta; x) \nabla_x F(x) \Big) \Big|_a^b. \]
Proof. First we are concerned with the case that both endpoints $a, b$ are non-integers (cf. Figure 1.5). By partial integration (similar to the proof of Lemma 1.7) we obtain for every (sufficiently small) $\varepsilon > 0$:
Figure 1.5: Illustration of the integration interval.
\[ \int_{\substack{x \in [a,b] \\ |x - g| \ge \varepsilon \; \forall g \in \Lambda}} \Big( G(\Delta; x) \Delta_x F(x) - F(x) \Delta_x G(\Delta; x) \Big) dx = \Big( G(\Delta; x) \nabla_x F(x) - F(x) \nabla_x G(\Delta; x) \Big) \Big|_a^b + \sum_{g \in (a,b) \cap \Lambda} \Big( G(\Delta; x) \nabla_x F(x) - F(x) \nabla_x G(\Delta; x) \Big) \Big|_{g+\varepsilon}^{g-\varepsilon}. \]
By virtue of the differential equation $\Delta_x G(\Delta; x) = -1$, $x \in \mathbb{R} \setminus \Lambda$, it follows that
\[ \int_{\substack{x \in [a,b] \\ |x - g| \ge \varepsilon \; \forall g \in \Lambda}} F(x) \Delta_x G(\Delta; x) \, dx = -\int_{\substack{x \in [a,b] \\ |x - g| \ge \varepsilon \; \forall g \in \Lambda}} F(x) \, dx. \]
By letting $\varepsilon \to 0$ and observing the (limit) values of the $\Lambda$-Euler (Green) function and its derivatives in the lattice points we obtain for the left hand side:
\[ \int_a^b G(\Delta; x) \Delta_x F(x) \, dx + \int_a^b F(x) \, dx, \]
and for the right hand side:
\[ G(\Delta; x) \nabla_x F(x) \Big|_{g+\varepsilon}^{g-\varepsilon} \xrightarrow{\varepsilon \to 0} G(\Delta; g) \nabla F(g) - G(\Delta; g) \nabla F(g) = 0, \]
\[ -F(x) \nabla_x G(\Delta; x) \Big|_{g+\varepsilon}^{g-\varepsilon} \xrightarrow{\varepsilon \to 0} -F(g) \left( \nabla_x G(\Delta; g-) - \nabla_x G(\Delta; g+) \right) = F(g). \]
Summarizing:
\[ \int_a^b G(\Delta; x) \Delta_x F(x) \, dx + \int_a^b F(x) \, dx + \Big( F(x) \nabla_x G(\Delta; x) - G(\Delta; x) \nabla_x F(x) \Big) \Big|_a^b = \sum_{g \in (a,b) \cap \Lambda} F(g). \]
If $a$ and/or $b$ are elements of $\Lambda = \mathbb{Z}$, we note that
\[ G(\Delta; g) = G(\Delta; 0) = -\frac{1}{12}, \quad g \in \Lambda. \]
Thus, if $a, b \in \Lambda$, $G(\Delta; a) = G(\Delta; b) = -\frac{1}{12}$. Moreover,
\[ F(x) \nabla_x G(\Delta; x) \Big|_a^b = \underbrace{\nabla_x G(\Delta; b-)}_{= -\frac{1}{2} \text{ if } b \in \Lambda} F(b) - \underbrace{\nabla_x G(\Delta; a+)}_{= \frac{1}{2} \text{ if } a \in \Lambda} F(a) = -\frac{1}{2} \left( F(b) + F(a) \right), \]
and bringing this boundary term to the side of the lattice sum yields the weight $\frac{1}{2}(F(a) + F(b))$ in the stated formulas.
The Euler summation formula compares a weighted sum of function values at lattice points with the corresponding integral plus remainder terms. Applications are the computation of slowly converging infinite series and numerical integration where the error given by the corresponding estimate is optimized.
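As a numerical illustration (not part of the lecture), the following sketch evaluates both sides of the Euler summation formula for integer endpoints $a, b \in \Lambda$, using the closed form of $G(\Delta;\cdot)$ from Theorem 1.6 and the midpoint rule for the integrals:

```python
import math

def G(x):
    # Lambda-Euler (Green) function for Delta (negative Bernoulli function of degree 2)
    t = x - math.floor(x)
    return -t * t / 2 + t / 2 - 1.0 / 12

def rhs(F, dF, ddF, a, b, N=100000):
    # Right-hand side for a, b in the lattice:
    # int F + int G(Delta;.) F'' + (F(a)+F(b))/2 - G(Delta;0) (F'(b)-F'(a))
    h = (b - a) / N
    int_F = h * sum(F(a + (j + 0.5) * h) for j in range(N))
    int_GF = h * sum(G(a + (j + 0.5) * h) * ddF(a + (j + 0.5) * h) for j in range(N))
    return int_F + int_GF + 0.5 * (F(a) + F(b)) + (1.0 / 12) * (dF(b) - dF(a))

F = lambda x: math.sin(x / 3.0)     # an arbitrary twice differentiable test function
dF = lambda x: math.cos(x / 3.0) / 3.0
ddF = lambda x: -math.sin(x / 3.0) / 9.0

lhs = sum(F(g) for g in range(0, 11))  # lattice sum over Lambda intersected with [0, 10]
print(abs(lhs - rhs(F, dF, ddF, 0, 10)))  # both sides agree
```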
Chapter 2
The Gamma Function
In what follows, we introduce the classical Gamma function. Its essential properties will be explained (for a more detailed discussion the reader is referred, e.g., to [10]).
For real values $\lambda > 0$ we consider the two integrals
\[ \int_0^1 e^{-t} t^{\lambda - 1} \, dt \quad \text{and} \quad \int_1^{\infty} e^{-t} t^{\lambda - 1} \, dt. \]
In order to show the convergence of the first integral we observe that $0 < e^{-t} t^{\lambda - 1} \le t^{\lambda - 1}$ holds true for all $t \in (0, 1]$. Therefore, for $\varepsilon > 0$ sufficiently small, we have
\[ \int_{\varepsilon}^{1} e^{-t} t^{\lambda - 1} \, dt \le \int_{\varepsilon}^{1} t^{\lambda - 1} \, dt = \frac{t^{\lambda}}{\lambda} \Big|_{\varepsilon}^{1} = \frac{1 - \varepsilon^{\lambda}}{\lambda} \le \frac{1}{\lambda}. \]
Consequently, for all $\lambda > 0$, the first integral is convergent.
To assure the convergence of the second integral we observe that
\[ e^{-t} t^{\lambda - 1} = \frac{t^{\lambda - 1}}{\sum_{k=0}^{\infty} \frac{t^k}{k!}} \le \frac{t^{\lambda - 1}}{\frac{t^n}{n!}} = \frac{n!}{t^{n + 1 - \lambda}} \]
for all $n \in \mathbb{N}$ and $t \ge 1$. This shows us that
\[ \int_1^A e^{-t} t^{\lambda - 1} \, dt \le n! \int_1^A t^{\lambda - 1 - n} \, dt = n! \, \frac{t^{\lambda - n}}{\lambda - n} \Big|_1^A = \frac{n!}{n - \lambda} \left( 1 - \frac{1}{A^{n - \lambda}} \right) \le \frac{n!}{n - \lambda}, \]
provided that $A$ is sufficiently large and $n$ is chosen such that $n \ge \lambda + 1$. Thus, the second integral is convergent.
Lemma 2.1
For all $\lambda > 0$, the integral
\[ \int_0^{\infty} e^{-t} t^{\lambda - 1} \, dt \]
is convergent.
Definition 2.2
The function $\Gamma(\lambda)$, $\lambda > 0$, defined by
\[ \Gamma(\lambda) = \int_0^{\infty} e^{-t} t^{\lambda - 1} \, dt \]
is called Gamma function.
Obviously, we have the following properties:
1. $\Gamma$ is positive for all $\lambda > 0$,
2. $\Gamma(1) = \int_0^{\infty} e^{-t} \, dt = 1$.
We can use integration by parts to obtain for $\lambda > 0$:
\[ \Gamma(\lambda + 1) = \int_0^{\infty} e^{-t} t^{\lambda} \, dt = -e^{-t} t^{\lambda} \Big|_0^{\infty} + \lambda \int_0^{\infty} e^{-t} t^{\lambda - 1} \, dt = \lambda \Gamma(\lambda). \]
Lemma 2.3
The Gamma function satisfies the functional equation
\[ \Gamma(\lambda + 1) = \lambda \Gamma(\lambda), \quad \lambda > 0. \]
Moreover,
\[ \Gamma(\lambda + n) = (\lambda + n - 1) \cdots (\lambda + 1) \lambda \, \Gamma(\lambda) = \prod_{i=1}^{n} (\lambda + i - 1) \, \Gamma(\lambda) \quad \text{for } \lambda > 0, \; n \in \mathbb{N}, \]
\[ \Gamma(n + 1) = \prod_{i=1}^{n} i \cdot \Gamma(1) = n! \quad \text{for } n \in \mathbb{N}_0. \]
In other words, the Gamma function can be understood as an extension of factorials.
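This extension property is easy to check numerically. The sketch below (illustrative only) verifies the functional equation and the factorial values with Python's built-in `math.gamma`, and approximates the defining integral by the midpoint rule:

```python
import math

def gamma_quad(lam, T=60.0, N=200000):
    # Midpoint rule for int_0^T e^{-t} t^{lam-1} dt; the tail beyond T is negligible
    h = T / N
    return h * sum(math.exp(-(j + 0.5) * h) * ((j + 0.5) * h) ** (lam - 1)
                   for j in range(N))

# Functional equation Gamma(lam+1) = lam * Gamma(lam):
for lam in (0.5, 1.7, 3.2):
    print(lam, abs(math.gamma(lam + 1) - lam * math.gamma(lam)))
# Gamma(n+1) = n!:
print([abs(math.gamma(n + 1) - math.factorial(n)) for n in range(6)])
# Quadrature vs. the library value:
print(abs(gamma_quad(2.5) - math.gamma(2.5)))
```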
Lemma 2.4
The Gamma function is differentiable for all $\lambda > 0$ and we have
\[ \Gamma'(\lambda) = \int_0^{\infty} e^{-t} \ln(t) \, t^{\lambda - 1} \, dt. \]
Proof. See Exercise 2.1.
An analogous proof can be used to show that $\Gamma$ is infinitely often differentiable for all $\lambda > 0$ and
\[ \Gamma^{(k)}(\lambda) = \int_0^{\infty} e^{-t} (\ln(t))^k \, t^{\lambda - 1} \, dt, \quad k \in \mathbb{N}. \]
Lemma 2.5 (Gauss Expression of the Second Logarithmic Derivative)
For $\lambda > 0$
\[ (\Gamma'(\lambda))^2 < \Gamma(\lambda) \Gamma''(\lambda). \]
Equivalently, we have
\[ \left( \frac{d}{d\lambda} \right)^2 \ln(\Gamma(\lambda)) = \frac{\Gamma''(\lambda)}{\Gamma(\lambda)} - \left( \frac{\Gamma'(\lambda)}{\Gamma(\lambda)} \right)^2 > 0, \quad \lambda > 0, \]
i.e. $\ln(\Gamma(\lambda))$, $\lambda > 0$, is a convex function or $\Gamma$ is logarithmically convex.
Proof.
\[ (\Gamma'(\lambda))^2 = \left( \int_0^{\infty} e^{-t} \ln(t) \, t^{\lambda - 1} \, dt \right)^2 = \left( \int_0^{\infty} e^{-\frac{t}{2}} t^{\frac{\lambda - 1}{2}} \cdot \ln(t) \, e^{-\frac{t}{2}} t^{\frac{\lambda - 1}{2}} \, dt \right)^2. \]
The Cauchy-Schwarz inequality yields (note that equality cannot occur since the two functions are linearly independent):
\[ (\Gamma'(\lambda))^2 < \int_0^{\infty} \left( e^{-\frac{t}{2}} t^{\frac{\lambda - 1}{2}} \right)^2 dt \int_0^{\infty} \left( e^{-\frac{t}{2}} t^{\frac{\lambda - 1}{2}} \ln(t) \right)^2 dt = \int_0^{\infty} e^{-t} t^{\lambda - 1} \, dt \int_0^{\infty} e^{-t} t^{\lambda - 1} (\ln(t))^2 \, dt = \Gamma(\lambda) \Gamma''(\lambda). \]
Moreover,
\[ \frac{d^2}{d\lambda^2} \ln(\Gamma(\lambda)) = \frac{d}{d\lambda} \frac{\Gamma'(\lambda)}{\Gamma(\lambda)} = \underbrace{\frac{\Gamma''(\lambda) \Gamma(\lambda) - (\Gamma'(\lambda))^2}{(\Gamma(\lambda))^2}}_{> 0} = \frac{\Gamma''(\lambda)}{\Gamma(\lambda)} - \left( \frac{\Gamma'(\lambda)}{\Gamma(\lambda)} \right)^2 > 0. \]
Note: $\ln(\Gamma(\lambda))$ is convex, i.e. for $t \in (0, 1)$
\[ \ln \Gamma(t x + (1 - t) y) \le t \ln(\Gamma(x)) + (1 - t) \ln(\Gamma(y)) = \ln\left( \Gamma^t(x) \right) + \ln\left( \Gamma^{1 - t}(y) \right) = \ln\left( \Gamma^t(x) \, \Gamma^{1 - t}(y) \right), \]
which is equivalent to $\Gamma(t x + (1 - t) y) \le \Gamma^t(x) \, \Gamma^{1 - t}(y)$ with $x, y > 0$.
Next we are interested in the behavior of the Gamma function for large positive values $x$.
Theorem 2.6 (Stirling's Formula)
For $x > 0$
\[ \left| \Gamma(x) - \sqrt{2\pi} \, x^{x - 1/2} e^{-x} \right| \le 2 \, x^{x - 1} e^{-x}. \]
Proof. Regard $x$ as fixed and substitute
\[ t = x (1 + s), \quad -1 \le s < \infty, \quad \frac{dt}{ds} = x. \]
We get
\[ \Gamma(x) = \int_0^{\infty} e^{-t} t^{x - 1} \, dt = \int_{-1}^{\infty} e^{-x - x s} x^{x - 1} (1 + s)^{x - 1} x \, ds = x^x e^{-x} \underbrace{\int_{-1}^{\infty} (1 + s)^{x - 1} e^{-x s} \, ds}_{= I(x)}. \]
Our aim is to verify that $I(x)$ satisfies
\[ \left| I(x) - \sqrt{\frac{2\pi}{x}} \right| \le \frac{2}{x}. \]
Then we have
\[ \left| \Gamma(x) - x^x e^{-x} \sqrt{\frac{2\pi}{x}} \right| \le x^x e^{-x} \frac{2}{x}, \quad \text{i.e.} \quad \left| \Gamma(x) - \sqrt{2\pi} \, x^{x - 1/2} e^{-x} \right| \le 2 \, x^{x - 1} e^{-x}. \]
For that purpose we write
\[ (1 + s)^x e^{-x s} = \exp\left( -x (s - \ln(1 + s)) \right) = e^{-x u^2(s)}, \]
where
\[ u(s) = \begin{cases} |s - \ln(1 + s)|^{1/2}, & s \in [0, \infty), \\ -|s - \ln(1 + s)|^{1/2}, & s \in (-1, 0). \end{cases} \]
Taylor expansion of $u^2$ for $s \in (-1, \infty)$ at $0$:
\[ u^2(s) = u^2(0) + \frac{d}{ds} u^2(0) \, s + \frac{d^2}{ds^2} u^2(\theta s) \, \frac{s^2}{2} = 0 + \left( 1 - \frac{1}{1 + 0} \right) s + \frac{1}{(1 + \theta s)^2} \, \frac{s^2}{2}. \]
Therefore,
\[ u^2(s) = \frac{s^2}{2} \, \frac{1}{(1 + \theta s)^2} \]
with $0 < \theta < 1$. We interpret $\theta$ as a uniquely defined function of $s$, i.e., $\theta: s \mapsto \theta(s)$, such that
\[ \frac{u(s)}{s} = \frac{1}{\sqrt{2}} \, \frac{1}{1 + s \theta(s)} \]
is a positive continuous function for $s \in (-1, \infty)$ with the property
\[ \left| \frac{u(s)}{s} - \frac{1}{\sqrt{2}} \right| = \frac{1}{\sqrt{2}} \left| \frac{1}{1 + s \theta(s)} - 1 \right| = \frac{1}{\sqrt{2}} \left| \frac{s \theta(s)}{1 + s \theta(s)} \right| = \left| s \theta(s) \, \frac{u(s)}{s} \right| = |\theta(s)| \, |u(s)| \le |u(s)|. \]
From $u^2(s) = s - \ln(1 + s)$ it follows that
\[ 2 u \, du = \frac{s}{1 + s} \, ds. \]
Obviously, $s: u \mapsto s(u)$, $u \in \mathbb{R}$, is of class $C^{(1)}(\mathbb{R})$ and thus,
\[ I(x) = \int_{-1}^{\infty} (1 + s)^{x - 1} e^{-x s} \, ds = 2 \int_{-\infty}^{+\infty} e^{-x u^2} \, \frac{u}{s(u)} \, du. \]
We are able to deduce that
\[ \left| I(x) - 2\sqrt{2} \int_0^{\infty} e^{-x u^2} \, du \right| = \left| 2 \int_{-\infty}^{\infty} e^{-x u^2} \frac{u}{s(u)} \, du - \sqrt{2} \int_{-\infty}^{\infty} e^{-x u^2} \, du \right| \le 2 \int_{-\infty}^{\infty} e^{-x u^2} \left| \frac{u}{s(u)} - \frac{1}{\sqrt{2}} \right| du \le 2 \int_{-\infty}^{\infty} e^{-x u^2} |u| \, du = 4 \int_0^{\infty} e^{-x u^2} u \, du. \]
Note that we have the following integrals (for $\alpha, \beta > 0$):
\[ \int_0^{\infty} e^{-\beta t^{\alpha}} \, dt \overset{u = \beta t^{\alpha}}{=} \frac{1}{\alpha} \beta^{-1/\alpha} \int_0^{\infty} e^{-u} u^{1/\alpha - 1} \, du = \frac{1}{\alpha} \beta^{-1/\alpha} \Gamma\left( \frac{1}{\alpha} \right) = \beta^{-1/\alpha} \Gamma\left( \frac{\alpha + 1}{\alpha} \right), \]
\[ \int_0^{\infty} t^{\alpha - 1} e^{-\beta t} \, dt \overset{u = \beta t}{=} \beta^{-\alpha} \int_0^{\infty} e^{-u} u^{\alpha - 1} \, du = \beta^{-\alpha} \Gamma(\alpha), \]
\[ \int_0^{\infty} t^{\alpha - 1} e^{-\beta t^2} \, dt \overset{u = \beta t^2}{=} \frac{1}{2} \beta^{-\alpha/2} \int_0^{\infty} e^{-u} u^{\alpha/2 - 1} \, du = \frac{1}{2} \beta^{-\alpha/2} \Gamma\left( \frac{\alpha}{2} \right). \]
Therefore, for $\alpha = 1$ and $\beta = x$:
\[ \int_0^{\infty} e^{-x u^2} \, du = \frac{1}{2} x^{-1/2} \Gamma\left( \frac{1}{2} \right) = \frac{\sqrt{\pi}}{2\sqrt{x}}, \]
for $\alpha = 2$ and $\beta = x$:
\[ \int_0^{\infty} e^{-x u^2} u \, du = \frac{1}{2} x^{-1} \Gamma(1) = \frac{1}{2x}. \]
This yields:
\[ \left| I(x) - \sqrt{\frac{2\pi}{x}} \right| = \left| I(x) - 2\sqrt{2} \cdot \frac{\sqrt{\pi}}{2\sqrt{x}} \right| \le 4 \cdot \frac{1}{2x} = \frac{2}{x}. \]
This leads to the desired result.
Remark 2.7
Stirling's formula can be rewritten in the form
\[ \lim_{x \to \infty} \frac{\Gamma(x)}{\sqrt{2\pi} \, x^{x - 1/2} e^{-x}} = 1. \]
An immediate application is the limit relation
\[ \lim_{x \to \infty} \frac{\Gamma(x + a)}{x^a \, \Gamma(x)} = 1, \quad a > 0. \tag{2.1} \]
This can be seen from Stirling's formula by
\[ \lim_{x \to \infty} \frac{\Gamma(x + a)}{\sqrt{2\pi} \, (x + a)^{x + a - \frac{1}{2}} e^{-x - a}} = 1 \tag{2.2} \]
due to the relation
\[ (x + a)^{x + a - \frac{1}{2}} = x^{x + a - \frac{1}{2}} \left( 1 + \frac{a}{x} \right)^{x + a - \frac{1}{2}} \]
and the limits
\[ \lim_{x \to \infty} \left( 1 + \frac{a}{x} \right)^x e^{-a} = 1, \qquad \lim_{x \to \infty} \left( 1 + \frac{a}{x} \right)^{a - \frac{1}{2}} = 1. \]
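Numerically, the quality of Stirling's approximation is easy to observe. The sketch below (illustrative only) computes the ratio from Remark 2.7 and checks an error bound of the form $2 x^{x-1} e^{-x}$ as obtained in the proof:

```python
import math

def stirling(x):
    # Stirling approximation sqrt(2 pi) x^{x - 1/2} e^{-x}
    return math.sqrt(2 * math.pi) * x ** (x - 0.5) * math.exp(-x)

for x in (5.0, 20.0, 100.0):
    ratio = math.gamma(x) / stirling(x)
    err = abs(math.gamma(x) - stirling(x))
    bound = 2 * x ** (x - 1) * math.exp(-x)  # error bound of the form from the proof
    print(x, ratio, err <= bound)
```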
Lemma 2.8 (Duplication Formula)
For $x > 0$ we have
\[ 2^{x - 1} \, \Gamma\left( \frac{x}{2} \right) \Gamma\left( \frac{x + 1}{2} \right) = \sqrt{\pi} \, \Gamma(x). \]
Proof. We consider the function $x \mapsto \varphi(x)$, $x > 0$, defined by
\[ \varphi(x) = \frac{2^{x - 1} \, \Gamma(\frac{x}{2}) \, \Gamma(\frac{x + 1}{2})}{\Gamma(x)} \]
for $x > 0$. Setting $x + 1$ instead of $x$ we find the following functional equation for the numerator:
\[ 2^x \, \Gamma\left( \frac{x + 1}{2} \right) \Gamma\left( \frac{x}{2} + 1 \right) = 2^x \, \frac{x}{2} \, \Gamma\left( \frac{x}{2} \right) \Gamma\left( \frac{x + 1}{2} \right) = 2^{x - 1} x \, \Gamma\left( \frac{x}{2} \right) \Gamma\left( \frac{x + 1}{2} \right), \]
such that the numerator satisfies the same functional equation as the denominator ($\Gamma(x + 1) = x \Gamma(x)$). This means $\varphi(x + 1) = \varphi(x)$, $x > 0$. By repetition we get for all $n \in \mathbb{N}$ and $x$ fixed: $\varphi(x + n) = \varphi(x)$. We let $n$ tend toward $\infty$. For the numerator of $\varphi(x + n)$ we then find, by use of the
result in Remark 2.7, i.e. by using (2.1) twice, that
\[ \lim_{n \to \infty} \frac{2^{x + n - 1} \, \Gamma(\frac{x + n}{2}) \, \Gamma(\frac{x + n + 1}{2})}{2^{x + n - 1} \left( \frac{n}{2} \right)^{\frac{x}{2}} \left( \frac{n}{2} \right)^{\frac{x + 1}{2}} \left( \Gamma(\frac{n}{2}) \right)^2} = 1. \]
For the denominator we consider
\[ \lim_{n \to \infty} \frac{\Gamma(x + n)}{2^{x + n - 1} \left( \frac{n}{2} \right)^{x + \frac{1}{2}} \left( \Gamma(\frac{n}{2}) \right)^2}. \]
Stirling's formula yields that
\[ \lim_{n \to \infty} \frac{\Gamma(\frac{n}{2})}{\sqrt{2\pi} \left( \frac{n}{2} \right)^{\frac{n}{2} - \frac{1}{2}} e^{-\frac{n}{2}}} = 1, \quad \text{i.e.} \quad \lim_{n \to \infty} \frac{\left( \Gamma(\frac{n}{2}) \right)^2}{2\pi \left( \frac{n}{2} \right)^{n - 1} e^{-n}} = 1, \]
and by the same arguments as in Remark 2.7 (set $a$ in (2.2) to $x$ and $x$ in (2.2) to $n$) we find that
\[ \lim_{n \to \infty} \frac{\Gamma(x + n)}{\sqrt{2\pi} \, n^{n + x - \frac{1}{2}} e^{-n}} = 1. \]
Since
\[ 2^{x + n - 1} \left( \frac{n}{2} \right)^{x + \frac{1}{2}} \cdot 2\pi \left( \frac{n}{2} \right)^{n - 1} e^{-n} = 2\pi \, 2^{x + n - 1} \left( \frac{n}{2} \right)^{n + x - \frac{1}{2}} e^{-n} = 2\pi \, 2^{-\frac{1}{2}} \, n^{n + x - \frac{1}{2}} e^{-n} = \sqrt{2} \, \pi \, n^{n + x - \frac{1}{2}} e^{-n}, \]
we obtain
\[ \lim_{n \to \infty} \frac{\Gamma(x + n)}{2^{x + n - 1} \left( \frac{n}{2} \right)^{x + \frac{1}{2}} \left( \Gamma(\frac{n}{2}) \right)^2} = \frac{\sqrt{2\pi}}{\sqrt{2} \, \pi} = \frac{1}{\sqrt{\pi}}. \]
We therefore get for every $x > 0$ and all $n \in \mathbb{N}$
\[ \varphi(x) = \varphi(x + n) = \lim_{n \to \infty} \varphi(x + n) = \frac{1}{1 / \sqrt{\pi}} = \sqrt{\pi}. \]
A periodical function with this property must be constant. This proves the lemma.
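The duplication formula can be confirmed numerically (an illustrative sketch):

```python
import math

# Duplication formula: 2^{x-1} Gamma(x/2) Gamma((x+1)/2) = sqrt(pi) Gamma(x)
for x in (0.7, 1.0, 3.5, 10.0):
    lhs = 2 ** (x - 1) * math.gamma(x / 2) * math.gamma((x + 1) / 2)
    rhs = math.sqrt(math.pi) * math.gamma(x)
    print(x, abs(lhs - rhs) / rhs)  # relative error near machine precision
```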
Thus far, the Gamma function is defined for positive values, i.e., $x \in \mathbb{R}_{>0}$ (cf. Figure 2.1). We are interested in an extension of $\Gamma$ to the real line $\mathbb{R}$ (or even to the complex plane $\mathbb{C}$) if possible.
Definition 2.9
The so-called Pochhammer factorial $(x)_n$ with $x \in \mathbb{R}$, $n \in \mathbb{N}$ is defined by
\[ (x)_n = x (x + 1) \cdots (x + n - 1) = \prod_{i=1}^{n} (x + i - 1). \]
For $x > 0$ it is clear that
\[ (x)_n = \frac{\Gamma(x + n)}{\Gamma(x)} \quad \text{or} \quad \frac{(x)_n}{\Gamma(x + n)} = \frac{1}{\Gamma(x)}. \]
The left hand side is defined for $x > -n$ and gives the same value for all $n \in \mathbb{N}$ with $n > -x$. We may use this relation to define $\frac{1}{\Gamma(x)}$ for all $x \in \mathbb{R}$, and we see that this function vanishes for $x = 0, -1, -2, \ldots$ (cf. Figure 2.1).
Figure 2.1: The Gamma function on the real line $\mathbb{R}$.
This leads us to the following conclusion: The Gamma integral is absolutely convergent for $x \in \mathbb{C}$ with $\operatorname{Re}(x) > 0$, and represents a holomorphic function for all $x \in \mathbb{C}$ with $\operatorname{Re}(x) > 0$. Moreover, the Pochhammer factorial $(x)_n$ can be defined for all complex $x$. Therefore we have a definition of $\frac{1}{\Gamma(x)}$ for all $x \in \mathbb{C}$.
Lemma 2.10
The $\Gamma$-function is a meromorphic function that has simple poles in $0, -1, -2, \ldots$. The reciprocal $x \mapsto \frac{1}{\Gamma(x)}$ is an entire analytic function.
Lemma 2.11
For $x \in \mathbb{C}$,
\[ \frac{1}{\Gamma(x)} = \lim_{n \to \infty} n^{-x} \, x \prod_{k=1}^{n-1} \left( 1 + \frac{x}{k} \right). \]
Proof. The identity
\[ \frac{(x)_n}{\Gamma(n)} \cdot \frac{\Gamma(n)}{\Gamma(x + n)} = \frac{1}{\Gamma(x)} \]
is valid for all $x \in \mathbb{C}$ and all $n > -\operatorname{Re}(x)$. Furthermore it is easy to see that
\[ \frac{(x)_n}{\Gamma(n)} = x \, \frac{(x + 1)(x + 2) \cdots (x + n - 1)}{1 \cdot 2 \cdots (n - 1)} = x \prod_{k=1}^{n-1} \left( 1 + \frac{x}{k} \right). \]
Stirling's formula tells us that for positive $x \in \mathbb{R}$
\[ \lim_{n \to \infty} \frac{1}{n^x} \, \frac{\Gamma(x + n)}{\Gamma(n)} = 1, \]
and
\[ \frac{1}{\Gamma(x)} = \lim_{n \to \infty} n^{-x} \, x \prod_{k=1}^{n} \left( 1 + \frac{x}{k} \right) \underbrace{\left( 1 + \frac{x}{n} \right)^{-1}}_{\to 1 \text{ as } n \to \infty} = \lim_{n \to \infty} n^{-x} \, x \prod_{k=1}^{n} \left( 1 + \frac{x}{k} \right), \]
which proves the lemma for $x > 0$. To determine whether this limit is also defined for $x \le 0$ we consider once again $s - \ln(1 + s)$ with $-1 < s < \infty$:
\[ 0 \le s - \ln(1 + s) = \frac{s^2}{2} \, \frac{1}{(1 + \theta s)^2}, \quad \theta = \theta(s) \in (0, 1). \]
Therefore, we can put $s = \frac{1}{k}$ and estimate the right hand side by its maximum ($\theta = 0$):
\[ 0 \le \frac{1}{k} - \ln\left( 1 + \frac{1}{k} \right) \le \frac{1}{2k^2}. \]
This immediately proves that $\lim_{n \to \infty} \sum_{k=1}^{n} \left( \frac{1}{k} - \ln\left( 1 + \frac{1}{k} \right) \right)$ exists and is positive. Moreover,
\[ \lim_{n \to \infty} \sum_{k=1}^{n} \left( \frac{1}{k} - \ln\left( 1 + \frac{1}{k} \right) \right) = \lim_{n \to \infty} \sum_{k=1}^{n} \left( \frac{1}{k} - \ln(k + 1) + \ln(k) \right) = \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k} - \ln(n + 1) \right) = \lim_{n \to \infty} \left( \sum_{k=1}^{n-1} \frac{1}{k} - \ln(n) \right) = C, \]
where $C$ denotes Euler's constant
\[ C = \lim_{m \to \infty} \left( \sum_{k=1}^{m-1} \frac{1}{k} - \ln m \right) \approx 0.577215665\ldots \]
Assume now that $x \in \mathbb{R}$. If $k \ge 2|x|$, then
\[ 0 \le \frac{x}{k} - \ln\left( 1 + \frac{x}{k} \right) < \frac{x^2}{k^2} \tag{2.3} \]
and
\[ \prod_{k=1}^{n} \left( 1 + \frac{x}{k} \right) = \prod_{k=1}^{n} \left( 1 + \frac{x}{k} \right) e^{-\frac{x}{k}} e^{\frac{x}{k}} = \prod_{j=1}^{n} e^{\frac{x}{j}} \prod_{k=1}^{n} \left( 1 + \frac{x}{k} \right) e^{-\frac{x}{k}}. \]
For $k \ge k_0 = 2|x|$ we obtain, by multiplying (2.3) with $-1$ and applying the exponential function to it:
\[ 1 \ge \left( 1 + \frac{x}{k} \right) e^{-\frac{x}{k}} > e^{-\frac{x^2}{k^2}}, \]
which shows that
\[ \lim_{n \to \infty} \prod_{k=1}^{n} \left( 1 + \frac{x}{k} \right) e^{-\frac{x}{k}} = \prod_{k=1}^{\infty} \left( 1 + \frac{x}{k} \right) e^{-\frac{x}{k}} \]
exists for all $x$. Moreover,
\[ \prod_{j=1}^{n} e^{\frac{x}{j}} = \exp\left( x \sum_{j=1}^{n} \frac{1}{j} \right) = \exp\left( x \sum_{j=1}^{n} \frac{1}{j} - x \ln(n) + x \ln(n) \right) = n^x \exp\left( x \left( \sum_{j=1}^{n} \frac{1}{j} - \ln(n) \right) \right), \]
where the exponential factor tends to $e^{C x}$ as $n \to \infty$. Therefore, $\lim_{n \to \infty} n^{-x} \, x \prod_{k=1}^{n} \left( 1 + \frac{x}{k} \right)$ exists for all $x \in \mathbb{R}$ and it holds that
\[ \lim_{n \to \infty} n^{-x} \, x \prod_{k=1}^{n} \left( 1 + \frac{x}{k} \right) = x \, e^{C x} \prod_{k=1}^{\infty} \left( 1 + \frac{x}{k} \right) e^{-\frac{x}{k}}, \]
where the infinite product is also convergent for all $x \in \mathbb{R}$. By similar arguments these results can be extended for all $x \in \mathbb{C}$.
The proof of the previous lemma also shows us the following.
Lemma 2.12
For $x \in \mathbb{C}$,
\[ \frac{1}{\Gamma(x)} = x \, e^{C x} \prod_{k=1}^{\infty} \left( 1 + \frac{x}{k} \right) e^{-\frac{x}{k}}. \]
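The Weierstrass product can be tested numerically for positive arguments. The sketch below (illustrative only; restricted to $x > 0$ so that all product factors are positive and logarithms can be used for stability) truncates the product and compares with $1/\Gamma(x)$:

```python
import math

C = 0.5772156649015329  # Euler's constant

def inv_gamma(x, K=100000):
    # Truncated Weierstrass product x e^{Cx} prod_{k=1}^{K} (1 + x/k) e^{-x/k}, x > 0
    log_prod = sum(math.log1p(x / k) - x / k for k in range(1, K + 1))
    return x * math.exp(C * x + log_prod)

for x in (0.5, 1.0, 2.5):
    print(x, abs(inv_gamma(x) - 1.0 / math.gamma(x)))  # small truncation errors
```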
Chapter 3
Orthogonal Polynomials
We consider weighted Hilbert spaces on intervals in $\mathbb{R}$. Those are denoted by $L^2_{\alpha}[a, b]$ or $L^2_{\alpha}(a, b)$ ($a$, $b$ can be $-\infty$, $\infty$, respectively) and have the scalar product
\[ \langle F, G \rangle_{d\alpha} = \int_a^b F(x) G(x) \, d\alpha(x). \]
$\alpha$ is a nondecreasing function on $\mathbb{R}$ which has finite limits as $x \to \pm\infty$ and whose induced positive measure $d\alpha$ has finite moments of all orders, i.e.
\[ \mu_r = \mu_r(d\alpha) = \int_{\mathbb{R}} x^r \, d\alpha(x) < \infty \]
for $r \in \mathbb{N}_0$ with $\mu_0 > 0$. If no confusion is likely to arise, we just use the notation $L^2[a, b]$ or $L^2(a, b)$ for the spaces.
If $\alpha$ is absolutely continuous, the scalar product becomes
\[ \langle F, G \rangle_{d\alpha} = \int_a^b F(x) G(x) w(x) \, dx. \]
Here $w$ is a non-negative function which is Lebesgue-measurable and for which $\int_a^b w(x) \, dx > 0$. It is called weight function.
Definition 3.1
Two functions $F, G \in L^2(a, b)$ are orthogonal if $\langle F, G \rangle_{d\alpha} = 0$.
Example 3.2
The trigonometric functions $F_n(x) = \cos(n x)$ form a system of orthogonal functions on the interval $(0, \pi)$ with the weight function $w(x) = 1$ since
\[ \int_0^{\pi} \cos(m x) \cos(n x) \, dx = 0 \]
for $n \ne m$.
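A quick quadrature check of this orthogonality (illustrative only):

```python
import math

def inner(m, n, N=10000):
    # Midpoint rule for int_0^pi cos(m x) cos(n x) dx
    h = math.pi / N
    return h * sum(math.cos(m * (j + 0.5) * h) * math.cos(n * (j + 0.5) * h)
                   for j in range(N))

print(abs(inner(2, 5)))                # 0 for m != n
print(abs(inner(3, 3) - math.pi / 2))  # the squared norm is pi/2 for m = n >= 1
```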
Definition 3.3
Let $\{\varphi_n\}_{n \in \mathbb{N}_0}$ be a given orthonormal set (either finite or infinite). To an arbitrary real-valued function $F$ let there correspond the formal Fourier expansion
\[ F(x) \sim \sum_{i=0}^{\infty} F_i \, \varphi_i(x) \]
with the coefficients $F_i$ defined by
\[ F_i = \langle F, \varphi_i \rangle_{d\alpha} = \int_a^b F(x) \varphi_i(x) \, d\alpha(x), \quad i \in \mathbb{N}_0. \]
These coefficients are called Fourier coefficients of $F$ with respect to the given orthonormal system.
Theorem 3.4
Let $\varphi_n$, $F_n$ be as before with $F \in L^2(a, b)$. Let $l \ge 0$ be a fixed integer and $a_i \in \mathbb{R}$, $i = 0, \ldots, l$.
If $G(x) = \sum_{i=0}^{l} a_i \varphi_i(x)$ and the coefficients $a_i$ are considered variable in the following integral, the integral
\[ \int_a^b (F(x) - G(x))^2 \, d\alpha(x) \]
becomes a minimum if and only if $a_i = F_i$ for $i = 0, \ldots, l$.
Proof.
\[ \|F - G\|^2 = \langle F - G, F - G \rangle = \langle F, F \rangle - 2 \langle F, G \rangle + \langle G, G \rangle = \|F\|^2 - 2 \sum_{i=0}^{l} a_i \langle F, \varphi_i \rangle + \sum_{i,j=0}^{l} a_i a_j \underbrace{\langle \varphi_i, \varphi_j \rangle}_{\delta_{i,j}} \]
\[ = \|F\|^2 + \sum_{i=0}^{l} |a_i|^2 - 2 \sum_{i=0}^{l} a_i F_i = \|F\|^2 - \sum_{i=0}^{l} |F_i|^2 + \sum_{i=0}^{l} |a_i - F_i|^2, \]
which is minimal if and only if $a_i = F_i$ for all $i = 0, \ldots, l$. Therefore, the truncated Fourier series is the best approximating element.
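The minimizing property can be illustrated numerically with the orthonormal system $\varphi_n(x) = \sqrt{2/\pi} \cos(n x)$ on $(0, \pi)$ (weight $w = 1$). The sketch below (illustrative only; the target function $F(x) = x$ is an arbitrary choice) compares the error of the Fourier coefficients with that of perturbed coefficients:

```python
import math

def quad(f, a=0.0, b=math.pi, N=20000):
    # Composite midpoint rule on (a, b)
    h = (b - a) / N
    return h * sum(f(a + (j + 0.5) * h) for j in range(N))

phis = [lambda x, n=n: math.sqrt(2 / math.pi) * math.cos(n * x) for n in (1, 2, 3)]
F = lambda x: x

fourier = [quad(lambda x, p=p: F(x) * p(x)) for p in phis]  # Fourier coefficients

def err(coeffs):
    # Squared L2 error ||F - sum a_i phi_i||^2 for given coefficients
    return quad(lambda x: (F(x) - sum(a * p(x) for a, p in zip(coeffs, phis))) ** 2)

e_best = err(fourier)
e_perturbed = err([a + 0.1 for a in fourier])
print(e_best < e_perturbed)                        # True: Fourier coefficients minimize
print(abs((e_perturbed - e_best) - 3 * 0.1 ** 2))  # gap equals sum |a_i - F_i|^2
```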
Remark 3.5
We also find Bessel's equality
\[ \left\| F - \sum_{i=0}^{l} F_i \varphi_i \right\|^2 = \|F\|^2 - \sum_{i=0}^{l} |F_i|^2 \]
and Bessel's inequality (the left hand side above is $\ge 0$):
\[ \sum_{i=0}^{l} |F_i|^2 \le \|F\|^2. \tag{3.1} \]
Lemma 3.6
The Fourier series $\sum_{i \in \mathbb{N}_0} F_i \varphi_i$ converges in the $L^2$-norm (in the $L^2$-sense) to an $L^2$-function $S$ and $(F - S) \perp \varphi_i$ for all $i \in \mathbb{N}_0$.
Proof. For $n \ge m$ holds:
\[ \left\| \sum_{i=m}^{n} F_i \varphi_i \right\|^2 = \sum_{i=m}^{n} \|F_i \varphi_i\|^2 = \sum_{i=m}^{n} |F_i|^2 \]
due to the Theorem of Pythagoras (and the orthogonality of the functions $\varphi_i$).
Bessel's inequality (3.1) gives us the convergence of the series $\sum_{i \in \mathbb{N}_0} |F_i|^2$. Thus, for all $\varepsilon > 0$ there exists an $m \in \mathbb{N}$ such that for $n \ge m$
\[ \sum_{i=m}^{n} |F_i|^2 < \varepsilon. \]
Therefore, the partial sums of $\sum_{i \in \mathbb{N}_0} F_i \varphi_i$ form a Cauchy sequence with respect to the $L^2$-norm. Since $L^2(a, b)$ is complete, there exists $S \in L^2(a, b)$ which is the limit of this sequence.
It remains to show the orthogonality of $S - F$: let $j \in \mathbb{N}_0$, then
\[ \langle S - F, \varphi_j \rangle = \langle S, \varphi_j \rangle - \langle F, \varphi_j \rangle = \Big\langle \sum_{i \in \mathbb{N}_0} F_i \varphi_i, \varphi_j \Big\rangle - \langle F, \varphi_j \rangle = \sum_{i \in \mathbb{N}_0} F_i \underbrace{\langle \varphi_i, \varphi_j \rangle}_{\delta_{i,j}} - F_j = 0, \]
since for $F_n \to F$, $G_n \to G$ it holds that $\langle F_n, G_n \rangle \to \langle F, G \rangle$.
Now we only have to show completeness of the system $\{\varphi_i\}_i$. Then $F - S = 0$ (in the $L^2$-sense), i.e. the limit of the Fourier series is the function $F$. We now consider properties of Fourier series in a more general setting.
Theorem 3.7
Let $\{x_k\}_{k\in\mathbb{N}} \subseteq X$ be a sequence of orthonormal elements of the inner product space $X$. Consider the following statements:

(A) The $x_k$ are closed in $X$, i.e. for any element $x \in X$ and for all $\varepsilon > 0$ there exist an $n \in \mathbb{N}$ and coefficients $a_1,\ldots,a_n \in \mathbb{K}$ such that
$$\Big\| x - \sum_{k=1}^n a_k x_k \Big\|_X \leq \varepsilon.$$

(B) The Fourier series of any $y \in X$ converges to $y$ (in the norm of $X$), i.e.
$$\lim_{n\to\infty}\Big\| y - \sum_{k=1}^n \langle y, x_k\rangle_X\, x_k \Big\|_X = 0.$$

(C) Parseval's identity holds, i.e. for all $y \in X$,
$$\|y\|_X^2 = \langle y,y\rangle_X = \sum_{k=1}^\infty |\langle y, x_k\rangle_X|^2.$$

(D) The extended Parseval identity holds, i.e. for all $x, y \in X$,
$$\langle x,y\rangle_X = \sum_{k=1}^\infty \langle x, x_k\rangle_X\,\langle x_k, y\rangle_X.$$

(E) There is no strictly larger orthonormal system containing the set $\{x_k\}_{k\in\mathbb{N}}$.

(F) $\{x_k\}_{k\in\mathbb{N}}$ is complete, i.e. if $y \in X$ and $\langle y, x_k\rangle_X = 0$ for all $k \in \mathbb{N}$, then $y = 0$.

(G) An element of $X$ is determined uniquely by its Fourier coefficients, i.e. if $\langle x, x_k\rangle_X = \langle y, x_k\rangle_X$ for all $k \in \mathbb{N}$, then $x = y$.

Then holds:
$$(A) \Leftrightarrow (B) \Leftrightarrow (C) \Leftrightarrow (D) \Longrightarrow (E) \Leftrightarrow (F) \Leftrightarrow (G). \quad (3.2)$$
If $X$ is a complete inner product space, then also $(E) \Longrightarrow (D)$ and all statements are equivalent.
Proof. Assume (A). Due to the minimizing property of the truncated Fourier series (see Theorem 3.4, which holds for any ONS $\{x_k\}_k$),
$$\Big\| y - \sum_{k=1}^n \langle y,x_k\rangle x_k \Big\| \leq \Big\| y - \sum_{k=1}^n a_k x_k \Big\| \leq \varepsilon,$$
where the last estimate is provided by (A). If on the other hand (B) holds, we can approximate any $y$ by its truncated Fourier series, which shows closure. Thus, (A) $\Leftrightarrow$ (B).
By orthogonality,
$$\Big\langle x - \sum_{k=1}^n \langle x,x_k\rangle x_k,\; y - \sum_{k=1}^n \langle y,x_k\rangle x_k \Big\rangle = \langle x,y\rangle - \sum_{k=1}^n \langle x,x_k\rangle\langle x_k,y\rangle.$$
Using the Schwarz inequality,
$$\Big| \langle x,y\rangle - \sum_{k=1}^n \langle x,x_k\rangle\langle x_k,y\rangle \Big| \leq \Big\| x - \sum_{k=1}^n \langle x,x_k\rangle x_k \Big\|\;\Big\| y - \sum_{k=1}^n \langle y,x_k\rangle x_k \Big\|.$$
If (B) holds, the right-hand members both tend to $0$, hence (B) $\Longrightarrow$ (D).
Selecting $x = y$ in (D) shows (C), i.e. (D) $\Longrightarrow$ (C).
Since
$$0 \leq \Big\| y - \sum_{k=1}^n \langle y,x_k\rangle x_k \Big\|^2 = \|y\|^2 - \sum_{k=1}^n |\langle y,x_k\rangle|^2,$$
we see (C) $\Longrightarrow$ (B) and thus (A) $\Leftrightarrow$ (B) $\Leftrightarrow$ (C) $\Leftrightarrow$ (D).
Now assume (A) and suppose $\{x_k\}_{k\in\mathbb{N}} \cup \{w\}$, $w \neq x_k$, is also an ONS. This system is also closed in $X$. Since (A) $\Longrightarrow$ (C),
$$\|w\|^2 = \sum_{k=1}^\infty |\langle w,x_k\rangle|^2 + |\langle w,w\rangle|^2 \quad\text{and}\quad \|w\|^2 = \sum_{k=1}^\infty |\langle w,x_k\rangle|^2.$$
Thus, $\langle w,w\rangle = 0$, which contradicts $\|w\| = 1$. This means that (A) $\Longrightarrow$ (E).
Suppose there is a $y \neq 0$, $y \in X$, such that $\langle y,x_k\rangle = 0$ for all $k$. Then $\{x_k\}_{k\in\mathbb{N}} \cup \{y/\|y\|\}$ would be a strictly larger ONS than $\{x_k\}_{k\in\mathbb{N}}$. Therefore, (E) $\Leftrightarrow$ (F).
Suppose $\langle w,x_k\rangle = \langle y,x_k\rangle$, $k\in\mathbb{N}$. Then $\langle w-y, x_k\rangle = 0$, $k\in\mathbb{N}$. Assuming (F), $w-y = 0$ and (F) $\Longrightarrow$ (G).
If (F) were false, we could find $z \neq 0$ with $\langle z,x_k\rangle = 0$, $k\in\mathbb{N}$. For any $y$, $\langle y,x_k\rangle = \langle y+z, x_k\rangle$, $k\in\mathbb{N}$. So $y$ and $y+z$ would be two distinct elements with the same Fourier coefficients. Thus, (G) would be false, and we obtain (G) $\Longrightarrow$ (F). This completes the chain of implications (3.2).
Assume now that additionally $X$ is complete. We want to show (G) $\Longrightarrow$ (B), which is the missing implication.
Let $w \in X$ and consider
$$S_n = \sum_{k=1}^n \langle w,x_k\rangle x_k.$$
For $n > m$ we find that
$$\|S_n - S_m\|^2 = \sum_{k=m+1}^n |\langle w,x_k\rangle|^2.$$
Bessel's inequality gives us $\sum_{k=1}^\infty |\langle w,x_k\rangle|^2 < \infty$, thus for a given $\varepsilon > 0$ we can find an $N(\varepsilon)$ such that
$$\sum_{k=m+1}^n |\langle w,x_k\rangle|^2 \leq \varepsilon \quad\text{for all } m,n \geq N(\varepsilon).$$
Thus, $\{S_n\}$ is a Cauchy sequence and, since $X$ is complete, it converges to an element $S \in X$.
Let $v$ be fixed and $n \geq v$. Then
$$\langle S - S_n, x_v\rangle = \langle S,x_v\rangle - \langle S_n,x_v\rangle = \langle S,x_v\rangle - \langle w,x_v\rangle$$
and by the Schwarz inequality
$$|\langle S,x_v\rangle - \langle w,x_v\rangle| = |\langle S-S_n, x_v\rangle| \leq \|S-S_n\|\,\|x_v\| = \|S-S_n\|.$$
Together with the convergence of $S_n$ to $S$ this gives us
$$\langle S,x_v\rangle = \langle w,x_v\rangle \quad\text{for } v \in \mathbb{N}.$$
By (G), this implies that $S = w$, such that the convergence reads as follows:
$$\lim_{n\to\infty}\Big\| w - \sum_{k=1}^n \langle w,x_k\rangle x_k \Big\| = 0.$$
This is precisely (B). □
Now we start to consider polynomials.

Definition 3.8
The space of real polynomials up to degree $n$ is denoted by $\Pi_n$. The space of real polynomials of all degrees is $\Pi$, $\Pi \supseteq \Pi_n$ for all $n \in \mathbb{N}_0$. For $P, Q \in \Pi$ we use the scalar product
$$\langle P,Q\rangle_{d\lambda} = \int_{\mathbb{R}} P(x)Q(x)\,d\lambda(x) \quad (3.3)$$
and the induced norm.
Note that by definition of $\lambda$ (finite moments) these integrals exist.

Definition 3.9
The scalar product (3.3) is called positive definite on $\Pi$ if $\|P\|_{d\lambda} > 0$ for all $P \in \Pi$, $P \not\equiv 0$. It is called positive definite on $\Pi_n$ if $\|P\|_{d\lambda} > 0$ for all $P \in \Pi_n$, $P \not\equiv 0$.
Theorem 3.10
The scalar product (3.3) is positive definite on $\Pi$ if and only if
$$\det M_k = \det\begin{pmatrix} \mu_0 & \mu_1 & \cdots & \mu_{k-1} \\ \mu_1 & \mu_2 & \cdots & \mu_k \\ \vdots & \vdots & & \vdots \\ \mu_{k-1} & \mu_k & \cdots & \mu_{2k-2} \end{pmatrix} > 0, \quad k \in \mathbb{N},$$
where $\mu_r$ denotes the $r$-th moment of $d\lambda$. It is positive definite on $\Pi_n$ if and only if $\det M_k > 0$ for $k = 1,2,\ldots,n+1$.

Proof. See tutorials (Exercise 4.1).
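As an exact numerical illustration of Theorem 3.10 (our addition, not part of the notes), the following Python sketch verifies the positivity of the Hankel determinants $\det M_k$ for the Lebesgue measure $d\lambda(x) = dx$ on $[-1,1]$, whose moments are $\mu_r = 2/(r+1)$ for even $r$ and $\mu_r = 0$ for odd $r$; exact rational arithmetic avoids rounding issues.

```python
from fractions import Fraction

# Hankel determinants det M_k for d(lambda)(x) = dx on [-1, 1].

def moment(r):
    return Fraction(2, r + 1) if r % 2 == 0 else Fraction(0)

def det(A):
    # fraction-exact Gaussian elimination with pivot search
    A = [row[:] for row in A]
    n, d = len(A), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if A[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return d

for k in range(1, 7):
    M_k = [[moment(i + j) for j in range(k)] for i in range(k)]
    assert det(M_k) > 0   # positive definiteness on Pi (Theorem 3.10)

print([det([[moment(i + j) for j in range(k)] for i in range(k)]) for k in range(1, 5)])
```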
Definition 3.11
Polynomials whose leading coefficient is $1$, i.e. $P_k(x) = x^k + \ldots$ with $k \in \mathbb{N}_0$, are called monic polynomials. The set of monic polynomials of degree $n$ is denoted by $\mathring\Pi_n$.
Monic real polynomials $P_k$, $k \in \mathbb{N}_0$, are called monic orthogonal polynomials w.r.t. the measure $d\lambda$ if
$$\langle P_k, P_l\rangle_{d\lambda} = 0 \ \text{ for } k \neq l,\ k,l \in \mathbb{N}_0, \quad\text{and}\quad \|P_k\|_{d\lambda} > 0 \ \text{ for } k \in \mathbb{N}_0.$$
Normalization yields the orthonormal polynomials $\tilde P_k(x) = P_k(x)/\|P_k\|_{d\lambda}$.
Lemma 3.12
Let $\{P_k\}_{k=0,1,\ldots,n}$ be monic orthogonal polynomials. If $Q \in \Pi_n$ satisfies $\langle Q, P_k\rangle = 0$ for $k = 0,1,\ldots,n$, then $Q \equiv 0$.

Proof. Write $Q$ as $Q(x) = \sum_{i=0}^n a_i x^i$. Then
$$0 = \langle Q, P_n\rangle = \sum_{i=0}^n a_i\langle x^i, P_n\rangle$$
and, writing $P_i(x) = x^i + R_{i-1}(x)$ with $R_{i-1} \in \Pi_{i-1}$,
$$\langle x^i, P_n\rangle = \langle P_i - R_{i-1}, P_n\rangle = \langle P_i, P_n\rangle - \langle R_{i-1}, P_n\rangle = \delta_{i,n}\langle P_n,P_n\rangle - \langle r_{i-1}x^{i-1} + R_{i-2}, P_n\rangle = \delta_{i,n}\langle P_n,P_n\rangle - \langle r_{i-1}P_{i-1} + S_{i-2}, P_n\rangle = \cdots = \delta_{i,n}\langle P_n, P_n\rangle.$$
This gives us $0 = \langle Q, P_n\rangle = a_n\langle P_n,P_n\rangle$. Since $\langle P_n,P_n\rangle > 0$, we obtain $a_n = 0$. Similarly, we can show that $a_{n-1} = 0,\ a_{n-2} = 0,\ \ldots,\ a_0 = 0$. □
Lemma 3.13
A set $P_0,\ldots,P_n$ of monic orthogonal polynomials is linearly independent. Moreover, any polynomial $P \in \Pi_n$ can be uniquely represented in the form $P = \sum_{k=0}^n c_k P_k$ for some real coefficients $c_k$, i.e. $P_0,\ldots,P_n$ is a basis of $\Pi_n$.

Proof. If $\sum_{k=0}^n \gamma_k P_k \equiv 0$, taking the scalar product on both sides of the equation with $P_j$, $j = 0,\ldots,n$, yields that $\gamma_j = 0$. This gives us linear independence.
If we take the scalar product with $P_j$ on both sides of $P = \sum_{k=0}^n c_k P_k$, we find that
$$\langle P, P_j\rangle = \sum_{k=0}^n c_k\langle P_k, P_j\rangle = c_j\langle P_j, P_j\rangle, \quad j = 0,\ldots,n.$$
The difference $P - \sum_{k=0}^n c_k P_k$ is orthogonal to $P_0,\ldots,P_n$ and by the previous lemma has to be identically zero. □
Theorem 3.14
If the scalar product $\langle\cdot,\cdot\rangle_{d\lambda}$ is positive definite on $\Pi$, there exists a unique infinite sequence $\{P_k\}_{k\in\mathbb{N}_0}$ of monic orthogonal polynomials.

Proof. The polynomials $P_k$ can be constructed by applying Gram-Schmidt orthogonalization to the sequence of powers $\{E_k\}_{k\in\mathbb{N}_0}$, where $E_k(x) = x^k$. Therefore, we choose $P_0 = 1$ and for $k \in \mathbb{N}$ we use the recursion
$$P_k = E_k - \sum_{l=0}^{k-1} c_l P_l, \qquad c_l = \frac{\langle E_k, P_l\rangle}{\langle P_l, P_l\rangle}. \quad (3.4)$$
Since the scalar product is positive definite, $\langle P_l, P_l\rangle > 0$. Thus, the monic polynomial $P_k$ is uniquely defined and by construction orthogonal to all $P_j$, $j < k$. □

The prerequisites of this theorem are fulfilled if $\lambda$ has infinitely many points of increase, i.e. points $t_0$ such that $\lambda(t_0+\delta) > \lambda(t_0-\delta)$ for all $\delta > 0$. The set of all points of increase of $\lambda$ is called the support of the measure $d\lambda$; its convex hull is the support interval of $d\lambda$.
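The Gram-Schmidt construction (3.4) can be carried out exactly when the moments are rational. The following Python sketch (our illustration, not from the notes) does this for $d\lambda(x) = dx$ on $[-1,1]$: a polynomial is stored as a coefficient list $[a_0, a_1, \ldots]$ and the scalar product is evaluated via the moments $\mu_r = \int_{-1}^1 x^r\,dx$; the result reproduces the classical monic Legendre polynomials.

```python
from fractions import Fraction

# Gram-Schmidt construction (3.4) for d(lambda)(x) = dx on [-1, 1].

def moment(r):
    return Fraction(2, r + 1) if r % 2 == 0 else Fraction(0)

def inner(p, q):
    # <p, q> = sum_{i,j} a_i b_j mu_{i+j}
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

def gram_schmidt(n):
    # monic orthogonal polynomials P_0, ..., P_n via recursion (3.4)
    ps = []
    for k in range(n + 1):
        e_k = [Fraction(0)] * k + [Fraction(1)]   # E_k(x) = x^k
        p = e_k[:]
        for q in ps:
            c = inner(e_k, q) / inner(q, q)
            p = [a - c * (q[i] if i < len(q) else 0) for i, a in enumerate(p)]
        ps.append(p)
    return ps

P = gram_schmidt(3)
# the classical monic Legendre polynomials: P_2 = x^2 - 1/3, P_3 = x^3 - (3/5) x
assert P[2] == [Fraction(-1, 3), Fraction(0), Fraction(1)]
assert P[3] == [Fraction(0), Fraction(-3, 5), Fraction(0), Fraction(1)]
print(P[2], P[3])
```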
Theorem 3.15
If the scalar product $\langle\cdot,\cdot\rangle_{d\lambda}$ is positive definite on $\Pi_n$, but not on $\Pi_m$ for all $m > n$, there exist only $n+1$ orthogonal polynomials $P_0,\ldots,P_n$.

Proof. See tutorials (Exercise 4.2).

Theorem 3.16
If the moments of $d\lambda$ exist only for $r = 0,1,\ldots,r_0$, there exist only $n+1$ orthogonal polynomials $P_0,\ldots,P_n$, where $n = \lfloor r_0/2\rfloor$.

Proof. The Gram-Schmidt procedure can be carried out as long as the scalar products in (3.4), including $\langle P_k, P_k\rangle$, exist. This is the case as long as $2k \leq r_0$, i.e. $k \leq n = \lfloor r_0/2\rfloor$. □
3.1 Properties of Orthogonal Polynomials

For this section we assume that $d\lambda$ is a positive measure on $\mathbb{R}$ with an infinite number of points of increase and with finite moments of all orders.

Definition 3.17
An absolutely continuous measure $d\lambda(x) = w(x)\,dx$ is symmetric (with respect to the origin) if its support interval is $[-a,a]$ with $0 < a \leq \infty$ and $w(-x) = w(x)$ for all $x \in \mathbb{R}$.

Theorem 3.18
If $d\lambda$ is symmetric, then
$$P_k(-x) = (-1)^k P_k(x), \quad k \in \mathbb{N}_0.$$
Thus, $P_k$ is either an even or an odd polynomial depending on the degree $k$.
Proof. Set $\hat P_k(x) = (-1)^k P_k(-x)$. We compute for $k \neq l$ (substituting $x \mapsto -x$ and using the symmetry of $d\lambda$)
$$\langle \hat P_k, \hat P_l\rangle_{d\lambda} = (-1)^{k+l}\int_{\mathbb{R}} P_k(-x)P_l(-x)\,d\lambda(x) = (-1)^{k+l}\int_{\mathbb{R}} P_k(x)P_l(x)\,d\lambda(x) = (-1)^{k+l}\langle P_k, P_l\rangle_{d\lambda} = 0.$$
Since all $\hat P_k$ are monic, $\hat P_k(x) = P_k(x)$ by the uniqueness of monic orthogonal polynomials. □
Theorem 3.19
Let $d\lambda$ be symmetric on $[-a,a]$, $0 < a \leq \infty$, and write
$$P_{2k}(x) = P_k^+(x^2), \qquad P_{2k+1}(x) = xP_k^-(x^2).$$
Then $\{P_k^\pm\}$ are the monic orthogonal polynomials with respect to the measures $d\lambda^+(t) = t^{-1/2}w(t^{1/2})\,dt$ and $d\lambda^-(t) = t^{1/2}w(t^{1/2})\,dt$ on $[0, a^2]$.

Proof. We only prove the assertion for $P_k^+$; the other case follows analogously.
Obviously, the polynomials $P_k^+$ are monic. By symmetry holds that
$$0 = \langle P_{2k}, P_{2l}\rangle_{d\lambda} = 2\int_0^a P_{2k}(x)P_{2l}(x)w(x)\,dx$$
for $k \neq l$. Therefore, we obtain for $k \neq l$ (substituting $t = x^2$)
$$0 = 2\int_0^a P_k^+(x^2)P_l^+(x^2)w(x)\,dx = \int_0^{a^2} P_k^+(t)P_l^+(t)\,t^{-1/2}w(t^{1/2})\,dt. \qquad\Box$$
Theorem 3.20
All zeros of $P_k$, $k \in \mathbb{N}$, are real, simple, and located in the interior of the support interval $[a,b]$ of $d\lambda$.

Proof. Since $\int_{\mathbb{R}} P_k(x)\,d\lambda(x) = 0$ for $k \geq 1$, there must exist at least one point in the interior of $[a,b]$ at which $P_k$ changes sign. Let $x_1, x_2, \ldots, x_n$, $n \leq k$, be all these points. If we had $n < k$, then due to orthogonality
$$\int_{\mathbb{R}} P_k(x)\prod_{j=1}^n (x - x_j)\,d\lambda(x) = 0,$$
which is impossible since the integrand no longer changes sign. Therefore, we obtain that $n = k$. □
Theorem 3.21
The zeros of $P_{k+1}$ alternate with those of $P_k$, i.e.
$$x^{(k+1)}_{k+1} < x^{(k)}_k < x^{(k+1)}_k < x^{(k)}_{k-1} < \cdots < x^{(k)}_1 < x^{(k+1)}_1,$$
where $x^{(k+1)}_j$, $x^{(k)}_i$ are the zeros in descending order of $P_{k+1}$ and $P_k$, respectively.

Proof. Later (uses the Christoffel-Darboux formula which we will get very soon).

Theorem 3.22
In any open interval $(c,d)$ in which $d\lambda \equiv 0$ there can be at most one zero of $P_k$.
Proof. We perform a proof by contradiction. Suppose there are two zeros $x_i^{(k)} \neq x_j^{(k)}$ in $(c,d)$. Let all the other zeros (within $(c,d)$ or not) be denoted by $x_n^{(k)}$. Then holds
$$\int_{\mathbb{R}} P_k(x)\prod_{n\neq i,j}\big(x - x_n^{(k)}\big)\,d\lambda(x) = \int_{\mathbb{R}} \prod_{n\neq i,j}\big(x - x_n^{(k)}\big)^2\,\big(x - x_j^{(k)}\big)\big(x - x_i^{(k)}\big)\,d\lambda(x) > 0,$$
since the integrand is non-negative outside of $(c,d)$ and $d\lambda$ vanishes on $(c,d)$. This is a contradiction to the orthogonality of $P_k$ to polynomials of lower degree such as $\prod_{n\neq i,j}(x - x_n^{(k)})$. □
Theorem 3.23
For any monic polynomial $P \in \mathring\Pi_n$ holds
$$\int_{\mathbb{R}} P^2(x)\,d\lambda(x) \geq \int_{\mathbb{R}} P_n^2(x)\,d\lambda(x),$$
where equality is only achieved for $P = P_n$. In other words, $P_n$ minimizes the integral on the left-hand side above over all $P \in \mathring\Pi_n$, i.e.
$$\min_{P\in\mathring\Pi_n}\int_{\mathbb{R}} P^2(x)\,d\lambda(x) = \int_{\mathbb{R}} P_n^2(x)\,d\lambda(x).$$

Proof. Due to Lemma 3.13 the polynomial $P$ can be represented as follows:
$$P(x) = P_n(x) + \sum_{k=0}^{n-1} c_k P_k(x).$$
Therefore,
$$\int_{\mathbb{R}} P^2(x)\,d\lambda(x) = \int_{\mathbb{R}} P_n^2(x)\,d\lambda(x) + \sum_{k=0}^{n-1} c_k^2\int_{\mathbb{R}} P_k^2(x)\,d\lambda(x).$$
This shows the desired inequality. Equality holds if and only if $c_0 = c_1 = \ldots = c_{n-1} = 0$, i.e. for $P = P_n$. □
Remark 3.24
If we consider the function
$$\Phi(a_0, a_1, \ldots, a_{n-1}) = \int_{\mathbb{R}}\Big( x^n + \sum_{k=0}^{n-1} a_k x^k \Big)^2 d\lambda(x),$$
we can compute the partial derivatives and set them equal to zero, which yields
$$\int_{\mathbb{R}} P(x)\,x^k\,d\lambda(x) = 0, \quad k = 0,1,\ldots,n-1.$$
These are exactly the conditions of orthogonality that $P_n$ has to satisfy.
Furthermore, the Hessian of $\Phi$ is $2M_n$ of Theorem 3.10, which is positive definite. This confirms that $P_n$ gives us a minimum.
Theorem 3.25
Let $1 < p < \infty$. Then the extremal problem of determining
$$\min_{P\in\mathring\Pi_n}\int_{\mathbb{R}} |P(x)|^p\,d\lambda(x)$$
possesses a unique solution.

Proof. The search for the desired minimum is equivalent to the problem of best approximation of $x^n$ by polynomials of degree $n-1$ in the $L^p$-norm.
The problem of best approximation in normed spaces is uniquely solvable if the space is strictly convex, i.e. if from $\|x\| = \|y\| = 1$ and $x \neq y$ we can conclude that $\|x+y\| < 2$. A normed space $X$ is strictly convex if and only if $\|x+y\| = \|x\| + \|y\|$ yields $x = y$ or $y = \alpha x$ with an $\alpha \geq 0$. The Minkowski inequality guarantees this for the $L^p$-norm.
For further details see e.g. [1, 4, 9, 13, 14, 17] or other books on functional analysis. □
Orthogonal polynomials fulfill a three-term recurrence relation which can be used for:
- generating values of the polynomials and their derivatives,
- computation of the zeros as eigenvalues of a symmetric tridiagonal matrix via the recursion coefficients,
- normalization of the orthogonal polynomials,
- efficient evaluation of expansions in orthogonal polynomials.

The reason for the existence of these three-term recurrences is the shift property of the scalar product, i.e.
$$\langle xU, V\rangle_{d\lambda} = \langle U, xV\rangle_{d\lambda}, \quad U, V \in \Pi.$$
Note that there are other scalar products that do not possess this property (even though they are positive definite).
Theorem 3.26
Let $P_k$, $k \in \mathbb{N}_0$, be the monic orthogonal polynomials w.r.t. the measure $d\lambda$ (see Definition 3.11). Then,
$$P_{-1}(x) = 0, \quad P_0(x) = 1, \quad P_{k+1}(x) = (x - \alpha_k)P_k(x) - \beta_k P_{k-1}(x), \quad k \in \mathbb{N}_0, \quad (3.5)$$
where
$$\alpha_k = \frac{\langle xP_k, P_k\rangle_{d\lambda}}{\langle P_k, P_k\rangle_{d\lambda}}, \quad k \in \mathbb{N}_0, \qquad \beta_k = \frac{\langle P_k, P_k\rangle_{d\lambda}}{\langle P_{k-1}, P_{k-1}\rangle_{d\lambda}}, \quad k \in \mathbb{N}.$$
The index range is infinite ($k \to \infty$) if the scalar product is positive definite on $\Pi$. It is finite ($k \leq d-1$) if the scalar product is positive definite on $\Pi_d$, but not on $\Pi_n$ with $n > d$.
Proof. Since $P_{k+1} - xP_k$ is a polynomial of degree $\leq k$, we can write
$$P_{k+1}(x) - xP_k(x) = -\alpha_k P_k(x) - \beta_k P_{k-1}(x) + \sum_{j=0}^{k-2}\gamma_{k,j}P_j(x)$$
for certain constants $\alpha_k, \beta_k, \gamma_{k,j}$, where $P_{-1}(x) = 0$ and empty sums are also zero.
We take the scalar product with $P_k$ on both sides, which gives us (using orthogonality):
$$\underbrace{\langle P_{k+1}, P_k\rangle}_{=0} - \langle xP_k, P_k\rangle = -\alpha_k\langle P_k, P_k\rangle - \beta_k\underbrace{\langle P_{k-1}, P_k\rangle}_{=0} + \sum_{j=0}^{k-2}\gamma_{k,j}\underbrace{\langle P_j, P_k\rangle}_{=0}$$
$$\Longrightarrow\quad \langle xP_k, P_k\rangle = \alpha_k\langle P_k, P_k\rangle \quad\Longrightarrow\quad \alpha_k = \frac{\langle xP_k, P_k\rangle}{\langle P_k, P_k\rangle}.$$
This proves the relation for $\alpha_k$. For $\beta_k$ we need to take the scalar product with $P_{k-1}$ where $k \geq 1$:
$$\underbrace{\langle P_{k+1}, P_{k-1}\rangle}_{=0} - \langle xP_k, P_{k-1}\rangle = -\alpha_k\underbrace{\langle P_k, P_{k-1}\rangle}_{=0} - \beta_k\langle P_{k-1}, P_{k-1}\rangle + \sum_{j=0}^{k-2}\gamma_{k,j}\underbrace{\langle P_j, P_{k-1}\rangle}_{=0}$$
$$\Longrightarrow\quad \langle xP_k, P_{k-1}\rangle = \beta_k\langle P_{k-1}, P_{k-1}\rangle.$$
Since $\langle xP_k, P_{k-1}\rangle = \langle P_k, xP_{k-1}\rangle = \langle P_k, P_k + R_{k-1}\rangle$ with $R_{k-1} \in \Pi_{k-1}$, we find that $\langle xP_k, P_{k-1}\rangle = \langle P_k, P_k\rangle$, which provides us with
$$\beta_k = \frac{\langle P_k, P_k\rangle}{\langle P_{k-1}, P_{k-1}\rangle}.$$
As a last step we take the scalar product on both sides with $P_i$, $i < k-1$, and obtain
$$\underbrace{\langle P_{k+1}, P_i\rangle}_{=0} - \langle xP_k, P_i\rangle = -\alpha_k\underbrace{\langle P_k, P_i\rangle}_{=0} - \beta_k\underbrace{\langle P_{k-1}, P_i\rangle}_{=0} + \sum_{j=0}^{k-2}\gamma_{k,j}\underbrace{\langle P_j, P_i\rangle}_{=\delta_{j,i}\langle P_i, P_i\rangle}$$
$$\Longrightarrow\quad -\langle xP_k, P_i\rangle = \gamma_{k,i}\langle P_i, P_i\rangle.$$
Here we make use of the shift property of the scalar product, i.e. $\langle xP_k, P_i\rangle = \langle P_k, xP_i\rangle = 0$ since $xP_i \in \Pi_{k-1}$. Therefore, $\gamma_{k,i} = 0$ for $i < k-1$, which finally proves (3.5). □
Remark 3.27
If the index range in Theorem 3.26 is finite ($k \leq d-1$), the relations for $\alpha_d$ and $\beta_d$ still make sense, $\beta_d > 0$, but the polynomial $P_{d+1}$ that results from (3.5) has norm $\|P_{d+1}\| = 0$ (see also Theorem 3.15 or Exercise 4.2).

Remark 3.28
Although $\beta_0$ in (3.5) can be arbitrary since it is multiplied with $P_{-1} \equiv 0$, we define it for later purposes as
$$\beta_0 = \langle P_0, P_0\rangle = \int_{\mathbb{R}} d\lambda(x).$$
Note that $\beta_k > 0$ for all $k \in \mathbb{N}_0$ and for $n \in \mathbb{N}_0$
$$\|P_n\|^2 = \beta_n\beta_{n-1}\cdots\beta_1\beta_0.$$
There is a converse result to Theorem 3.26 saying that any sequence of polynomials which satisfies a three-term recurrence relation of the form (3.5) with all $\beta_k$ positive is orthogonal w.r.t. a positive measure with infinite support.
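The coefficients $\alpha_k, \beta_k$ of Theorem 3.26 can be computed exactly from their defining quotients. The following Python sketch (our illustration, assuming the Legendre setting $w = 1$ on $[-1,1]$) does this with rational moment arithmetic; by symmetry (Theorem 3.18) one expects $\alpha_k = 0$, and the classical values are $\beta_k = k^2/(4k^2-1)$ for $k \geq 1$.

```python
from fractions import Fraction

# Recurrence coefficients of Theorem 3.26 for d(lambda)(x) = dx on [-1, 1].

def moment(r):
    return Fraction(2, r + 1) if r % 2 == 0 else Fraction(0)

def inner(p, q):
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

def shift(p):          # multiplication by x
    return [Fraction(0)] + list(p)

def sub(p, q, c):      # p - c*q, padding with zeros
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a - c * b for a, b in zip(p, q)]

P_prev = [Fraction(1)]                                  # P_0 = 1
alphas, betas = [], [inner(P_prev, P_prev)]             # beta_0 = int d(lambda) = 2
a0 = inner(shift(P_prev), P_prev) / inner(P_prev, P_prev)
alphas.append(a0)
P = sub(shift(P_prev), P_prev, a0)                      # P_1 = (x - alpha_0) P_0
for k in range(1, 5):
    a = inner(shift(P), P) / inner(P, P)
    b = inner(P, P) / inner(P_prev, P_prev)
    alphas.append(a)
    betas.append(b)
    P_next = sub(sub(shift(P), P, a), P_prev, b)        # recurrence (3.5)
    P_prev, P = P, P_next

assert all(a == 0 for a in alphas)
assert betas[1:] == [Fraction(k * k, 4 * k * k - 1) for k in range(1, 5)]
print(alphas, betas)
```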
Theorem 3.29
Let the support interval $[a,b]$ of $d\lambda$ be finite. Then,
$$a \leq \alpha_k \leq b, \quad k \in \mathbb{N}_0, \qquad 0 < \beta_k \leq \max\{a^2, b^2\}, \quad k \in \mathbb{N},$$
where the index range is $k \to \infty$ or $k \leq d$ ($d$ as in Theorem 3.26), respectively.

Proof. Since for $x$ in the support of $d\lambda$ we know that $a \leq x \leq b$, the definition of $\alpha_k$ in Theorem 3.26 immediately yields the desired estimates.
By definition $0 < \beta_k$, and it remains to show the upper bound. We notice that
$$\|P_k\|^2 = \langle P_k, P_k\rangle = |\langle xP_{k-1}, P_k\rangle|$$
and apply the Cauchy-Schwarz inequality to obtain
$$\|P_k\|^2 \leq \max\{|a|,|b|\}\,\|P_{k-1}\|\,\|P_k\|.$$
Therefore, $\frac{\|P_k\|}{\|P_{k-1}\|} \leq \max\{|a|,|b|\}$ and
$$\beta_k = \frac{\|P_k\|^2}{\|P_{k-1}\|^2} \leq \max\{a^2, b^2\}. \qquad\Box$$
Definition 3.30
If the index range in Theorem 3.26 is infinite, the Jacobi matrix associated with the measure $d\lambda$ is the infinite, symmetric, tridiagonal matrix
$$J_\infty = \begin{pmatrix} \alpha_0 & \sqrt{\beta_1} & & 0 \\ \sqrt{\beta_1} & \alpha_1 & \sqrt{\beta_2} & \\ & \sqrt{\beta_2} & \alpha_2 & \ddots \\ 0 & & \ddots & \ddots \end{pmatrix}.$$
Its leading principal minor matrix of size $n \times n$ is denoted by
$$J_n = [J_\infty]_{[1:n,1:n]} = \begin{pmatrix} \alpha_0 & \sqrt{\beta_1} & & & 0 \\ \sqrt{\beta_1} & \alpha_1 & \sqrt{\beta_2} & & \\ & \ddots & \ddots & \ddots & \\ & & \sqrt{\beta_{n-2}} & \alpha_{n-2} & \sqrt{\beta_{n-1}} \\ 0 & & & \sqrt{\beta_{n-1}} & \alpha_{n-1} \end{pmatrix}.$$
If the index range in Theorem 3.26 is finite ($k \leq d-1$), then $J_n$ is well-defined for $0 \leq n \leq d$.
Theorem 3.31
The zeros $x_i^{(n)}$ of $P_n$ (and of the orthonormal version $\tilde P_n$) are the eigenvalues of the Jacobi matrix $J_n$ of order $n$. The corresponding eigenvectors are given by $\tilde P\big(x_i^{(n)}\big)$, where
$$\tilde P(x) = \big(\tilde P_0(x), \tilde P_1(x), \ldots, \tilde P_{n-1}(x)\big)^T.$$

Proof. We know from Exercise 5.1 that the following system of equations holds:
$$x\,\tilde P(x) = J_n\tilde P(x) + \sqrt{\beta_n}\,\tilde P_n(x)\,e_n,$$
where $e_n = (0, 0, \ldots, 1)^T$ is the $n$-th unit vector in $\mathbb{R}^n$.
If we put $x = x_i^{(n)}$ in this equation, the second summand on the right-hand side drops out. Since the first component of the vector $\tilde P(x_i^{(n)})$ is $1/\sqrt{\beta_0}$, the vector cannot be $0$ and is indeed an eigenvector to the eigenvalue $x_i^{(n)}$. □
Theorem 3.32 (Christoffel-Darboux Formula)
Let $\tilde P_k$ denote the orthonormal polynomials with respect to the measure $d\lambda$. Then,
$$\sum_{k=0}^n \tilde P_k(x)\tilde P_k(t) = \sqrt{\beta_{n+1}}\;\frac{\tilde P_{n+1}(x)\tilde P_n(t) - \tilde P_n(x)\tilde P_{n+1}(t)}{x - t}$$
and
$$\sum_{k=0}^n \big(\tilde P_k(x)\big)^2 = \sqrt{\beta_{n+1}}\,\Big(\tilde P'_{n+1}(x)\tilde P_n(x) - \tilde P'_n(x)\tilde P_{n+1}(x)\Big).$$
Proof. We use the recurrence relation for the orthonormal polynomials (see Exercise 5.1), i.e.
$$\sqrt{\beta_{k+1}}\,\tilde P_{k+1}(t) = (t - \alpha_k)\tilde P_k(t) - \sqrt{\beta_k}\,\tilde P_{k-1}(t), \quad k \in \mathbb{N}_0, \qquad \tilde P_{-1}(t) = 0, \quad \tilde P_0(t) = \frac{1}{\sqrt{\beta_0}},$$
and multiply this by $\tilde P_k(x)$ to obtain:
$$\sqrt{\beta_{k+1}}\,\tilde P_{k+1}(t)\tilde P_k(x) = (t - \alpha_k)\tilde P_k(t)\tilde P_k(x) - \sqrt{\beta_k}\,\tilde P_{k-1}(t)\tilde P_k(x).$$
Now we interchange the roles of $t$ and $x$ and subtract the first relation, i.e.
$$\sqrt{\beta_{k+1}}\,\tilde P_{k+1}(x)\tilde P_k(t) - \sqrt{\beta_{k+1}}\,\tilde P_{k+1}(t)\tilde P_k(x) = (x - t)\tilde P_k(x)\tilde P_k(t) - \sqrt{\beta_k}\Big(\tilde P_{k-1}(x)\tilde P_k(t) - \tilde P_{k-1}(t)\tilde P_k(x)\Big).$$
Therefore,
$$(x - t)\tilde P_k(x)\tilde P_k(t) = \sqrt{\beta_{k+1}}\Big(\tilde P_{k+1}(x)\tilde P_k(t) - \tilde P_{k+1}(t)\tilde P_k(x)\Big) - \sqrt{\beta_k}\Big(\tilde P_{k-1}(t)\tilde P_k(x) - \tilde P_{k-1}(x)\tilde P_k(t)\Big)\cdot(-1).$$
In other words, with $A_k = \sqrt{\beta_{k+1}}\big(\tilde P_{k+1}(x)\tilde P_k(t) - \tilde P_k(x)\tilde P_{k+1}(t)\big)$ we have $(x-t)\tilde P_k(x)\tilde P_k(t) = A_k - A_{k-1}$.
Now we sum up both sides from $k = 0$ to $k = n$ and make use of the telescoping sum on the right (note that $\tilde P_{-1} = 0$):
$$\sum_{k=0}^n \tilde P_k(x)\tilde P_k(t) = \sqrt{\beta_{n+1}}\;\frac{\tilde P_{n+1}(x)\tilde P_n(t) - \tilde P_n(x)\tilde P_{n+1}(t)}{x - t}.$$
For the second part we take the limit $t \to x$ on both sides; adding and subtracting $\tilde P_{n+1}(x)\tilde P_n(x)/(x-t)$ on the right-hand side we find
$$\frac{\tilde P_{n+1}(x)\tilde P_n(t) - \tilde P_n(x)\tilde P_{n+1}(t)}{x - t} = \tilde P_{n+1}(x)\,\frac{\tilde P_n(t) - \tilde P_n(x)}{x - t} - \tilde P_n(x)\,\frac{\tilde P_{n+1}(t) - \tilde P_{n+1}(x)}{x - t}$$
$$\xrightarrow{\;t \to x\;}\; -\tilde P_{n+1}(x)\tilde P'_n(x) + \tilde P_n(x)\tilde P'_{n+1}(x) = \tilde P'_{n+1}(x)\tilde P_n(x) - \tilde P'_n(x)\tilde P_{n+1}(x). \qquad\Box$$
Corollary 3.33
Let $P_k$ denote the monic orthogonal polynomials with respect to the measure $d\lambda$. Then,
$$\sum_{k=0}^n\Big(\prod_{i=k+1}^n \beta_i\Big)P_k(x)P_k(t) = \sum_{k=0}^n \beta_n\beta_{n-1}\cdots\beta_{k+1}\,P_k(x)P_k(t) = \frac{P_{n+1}(x)P_n(t) - P_n(x)P_{n+1}(t)}{x - t}.$$

Proof. Put $\tilde P_k = P_k/\|P_k\|$ in the first formula of Theorem 3.32 and use from Theorem 3.26 that $\sqrt{\beta_{n+1}} = \|P_{n+1}\|/\|P_n\|$ together with
$$\|P_n\|^2 = \beta_n\beta_{n-1}\cdots\beta_1\beta_0 = \prod_{i=0}^n \beta_i.$$
This provides us with
$$\sum_{k=0}^n \tilde P_k(x)\tilde P_k(t) = \sum_{k=0}^n \frac{1}{\|P_k\|^2}P_k(x)P_k(t) = \frac{\|P_{n+1}\|}{\|P_n\|}\,\frac{P_{n+1}(x)P_n(t) - P_n(x)P_{n+1}(t)}{\|P_{n+1}\|\,\|P_n\|\,(x - t)}.$$
Multiplying by $\|P_n\|^2$ yields
$$\sum_{k=0}^n \frac{\|P_n\|^2}{\|P_k\|^2}P_k(x)P_k(t) = \frac{P_{n+1}(x)P_n(t) - P_n(x)P_{n+1}(t)}{x - t},$$
and since $\|P_n\|^2/\|P_k\|^2 = \prod_{i=k+1}^n \beta_i$, the assertion follows:
$$\sum_{k=0}^n\Big(\prod_{i=k+1}^n \beta_i\Big)P_k(x)P_k(t) = \frac{P_{n+1}(x)P_n(t) - P_n(x)P_{n+1}(t)}{x - t}. \qquad\Box$$
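A quick numerical spot check of the Christoffel-Darboux formula (our addition, assuming the Legendre recurrence coefficients $\alpha_k = 0$, $\beta_0 = 2$, $\beta_k = k^2/(4k^2-1)$):

```python
import math

# Christoffel-Darboux formula (Theorem 3.32) for orthonormal Legendre polynomials.

def ptilde(x, m):
    # returns [ptilde_0(x), ..., ptilde_{m+1}(x)] and the beta coefficients
    beta = [2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, m + 2)]
    vals, prev = [1.0 / math.sqrt(beta[0])], 0.0
    for k in range(m + 1):
        nxt = (x * vals[-1] - math.sqrt(beta[k]) * prev) / math.sqrt(beta[k + 1])
        prev = vals[-1]
        vals.append(nxt)
    return vals, beta

n, x, t = 4, 0.3, -0.7
px, beta = ptilde(x, n)
pt, _ = ptilde(t, n)

lhs = sum(px[k] * pt[k] for k in range(n + 1))
rhs = math.sqrt(beta[n + 1]) * (px[n + 1] * pt[n] - px[n] * pt[n + 1]) / (x - t)
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```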
Remark 3.34
From the second part of Theorem 3.32 we obtain the inequality
$$\sqrt{\beta_{n+1}}\Big(\tilde P'_{n+1}(x)\tilde P_n(x) - \tilde P'_n(x)\tilde P_{n+1}(x)\Big) > 0.$$
This can be used to prove Theorem 3.21 in the following way.
Let $\alpha$ and $\beta$ be consecutive zeros of $\tilde P_n$, such that $\tilde P'_n(\alpha)\tilde P'_n(\beta) < 0$ (which holds since all zeros are simple). Then the inequality above tells us that
$$-\tilde P'_n(\alpha)\tilde P_{n+1}(\alpha) > 0 \quad\text{and}\quad -\tilde P'_n(\beta)\tilde P_{n+1}(\beta) > 0.$$
This implies that $\tilde P_{n+1}$ has opposite signs at $\alpha$ and $\beta$. Therefore, there is at least one zero of $\tilde P_{n+1}$ between $\alpha$ and $\beta$. In this way we find at least $n-1$ zeros of $\tilde P_{n+1}$.
For the largest zero of $\tilde P_n$, i.e. for $x_1^{(n)}$, holds $\tilde P'_n(x_1^{(n)}) > 0$ and by the inequality above $\tilde P_{n+1}(x_1^{(n)}) < 0$. The polynomial $\tilde P_{n+1}$ has another zero to the right of $x_1^{(n)}$ since $\tilde P_{n+1}(x) > 0$ for $x$ sufficiently large.
A similar argument holds for the smallest zero of $\tilde P_n$, i.e. for $x_n^{(n)}$. This proves Theorem 3.21.
3.2 Quadrature Rules and Orthogonal Polynomials

Let in this section $d\lambda$ be a measure with bounded or unbounded support, positive definite or not. An $n$-point quadrature rule for $d\lambda$ is a formula of the type
$$\int_{\mathbb{R}} F(t)\,d\lambda(t) = \sum_{\nu=1}^n \lambda_\nu F(\tau_\nu) + R_n(F). \quad (3.6)$$
The mutually distinct points $\tau_\nu$ are called nodes, the numbers $\lambda_\nu$ are the weights of the quadrature rule. $R_n(F)$ is the remainder or error term.

Definition 3.35
The quadrature rule (3.6) possesses degree of exactness $d$ if $R_n(P) = 0$ for all $P \in \Pi_d$. It has precise degree of exactness $d$ if it has degree of exactness $d$, but not $d+1$.
A quadrature rule (3.6) with degree of exactness $d = n-1$ is called interpolatory.

Regarding interpolatory quadrature rules:
A quadrature rule is interpolatory if and only if it is obtained by integration from the Lagrange interpolation, i.e. from
$$F(t) = \sum_{\nu=1}^n F(\tau_\nu)L_\nu(t) + r_{n-1}(F; t),$$
where $L_\nu(t) = \prod_{\mu=1,\,\mu\neq\nu}^n \frac{t - \tau_\mu}{\tau_\nu - \tau_\mu}$. Thus, we obtain
$$\lambda_\nu = \int_{\mathbb{R}} L_\nu(t)\,d\lambda(t), \quad \nu = 1,2,\ldots,n, \qquad\text{and}\qquad R_n(F) = \int_{\mathbb{R}} r_{n-1}(F; t)\,d\lambda(t).$$
It is well known that for $P \in \Pi_{n-1}$ holds that $r_{n-1}(P; t) \equiv 0$, i.e. $R_n(P) = 0$ and $d = n-1$. Given any $n$ distinct nodes, an interpolatory quadrature rule can always be constructed.
Theorem 3.36
Let $0 \leq k \leq n$ be an integer. The quadrature rule (3.6) has degree of exactness $d = n-1+k$ if and only if the following two conditions are both fulfilled:
(i) (3.6) is interpolatory,
(ii) the node polynomial $\omega_n(t) = \prod_{\nu=1}^n (t - \tau_\nu)$ satisfies $\int_{\mathbb{R}} \omega_n(t)P(t)\,d\lambda(t) = 0$ for all $P \in \Pi_{k-1}$.

Proof. See Exercise 5.2.

Remark 3.37
If $d\lambda$ is positive definite, then $k = n$ is optimal: $k = n+1$ would require that $\omega_n$ is orthogonal to all elements of $\Pi_n$, i.e. also to itself, which is not possible.
Gauss quadratures

Definition 3.38
The quadrature rule (3.6) with $k = n$ is called Gauss quadrature rule with respect to the measure $d\lambda$. Its degree of exactness is $d = 2n-1$.

Remark 3.39
The second condition in Theorem 3.36 shows that for a Gauss quadrature (i.e. $k = n$) we have $\omega_n = P_n$. Therefore, the nodes $\tau_\nu^G$ are the zeros of the $n$-th orthogonal polynomial with respect to $d\lambda$. The weights $\lambda_\nu^G$ can be found by interpolation.
Note that for the Gauss quadratures we assume that $d\lambda$ is positive definite.

Theorem 3.40
All nodes $\tau_\nu^G$ of the Gauss quadrature rule are mutually distinct and contained in the interior of the support interval $[a,b]$ of $d\lambda$. All weights $\lambda_\nu^G$ are positive.

Proof. Since the nodes $\tau_\nu^G$ are the zeros of $P_n$, the first part follows from Theorem 3.20. Now we consider the weights:
$$0 < \int_{\mathbb{R}} L_\nu^2(t)\,d\lambda(t) = \sum_{\mu=1}^n \lambda_\mu^G L_\nu^2(\tau_\mu^G) = \lambda_\nu^G$$
for $\nu = 1,2,\ldots,n$, since $L_\nu^2 \in \Pi_{2n-2} \subseteq \Pi_{2n-1}$. □
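The construction of Remark 3.39 can be carried out concretely. The following Python sketch (our illustration) builds the 3-point Gauss-Legendre rule for $d\lambda = dx$ on $[-1,1]$: the nodes are the known zeros of the monic Legendre $P_3(x) = x^3 - \frac{3}{5}x$, the weights are determined by the interpolatory condition (exactness on $1, x, x^2$), and the resulting rule is then checked to be exact up to degree $2n-1 = 5$.

```python
import math

# 3-point Gauss-Legendre rule: nodes are the zeros of P_3 = x^3 - (3/5) x.
nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]

# solve the 3x3 Vandermonde system  sum_v w_v nodes_v^k = mu_k, k = 0, 1, 2
mu = [2.0, 0.0, 2.0 / 3.0]
A = [[nodes[v] ** k for v in range(3)] for k in range(3)]
for i in range(3):                       # tiny Gaussian elimination
    p = max(range(i, 3), key=lambda r: abs(A[r][i]))
    A[i], A[p] = A[p], A[i]
    mu[i], mu[p] = mu[p], mu[i]
    for r in range(i + 1, 3):
        f = A[r][i] / A[i][i]
        A[r] = [a - f * b for a, b in zip(A[r], A[i])]
        mu[r] -= f * mu[i]
w = [0.0] * 3
for i in range(2, -1, -1):
    w[i] = (mu[i] - sum(A[i][j] * w[j] for j in range(i + 1, 3))) / A[i][i]

# classical weights 5/9, 8/9, 5/9 -- all positive (Theorem 3.40)
assert all(abs(wi - ref) < 1e-12 for wi, ref in zip(w, [5 / 9, 8 / 9, 5 / 9]))

# degree of exactness 2n - 1 = 5: check on the monomials x^k
for k in range(6):
    exact = 2.0 / (k + 1) if k % 2 == 0 else 0.0
    quad = sum(w[v] * nodes[v] ** k for v in range(3))
    assert abs(quad - exact) < 1e-12
print("weights:", w)
```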
Theorem 3.41
Let $\lambda_\nu^G$ be the weights and $\tau_\nu^G$ be the nodes of a Gauss quadrature rule. Then
$$\sum_{\nu=1}^n \lambda_\nu^G F(\tau_\nu^G) = \int_{\mathbb{R}} P_{2n-1}(F; t)\,d\lambda(t),$$
where $P_{2n-1}(F; \cdot)$ is the Hermite interpolation polynomial of degree $2n-1$ which satisfies the equations
$$P_{2n-1}(F; \tau_\nu^G) = F(\tau_\nu^G), \quad P'_{2n-1}(F; \tau_\nu^G) = F'(\tau_\nu^G) \quad\text{for } \nu = 1,2,\ldots,n.$$

Proof. See Exercise 5.3.
Corollary 3.42
If $F \in C^{(2n)}([a,b])$ and $[a,b]$ is the support interval of $d\lambda$, then the remainder term of the Gauss quadrature formula can be expressed as
$$R_n(F) = \frac{F^{(2n)}(\xi)}{(2n)!}\int_{\mathbb{R}}\big(P_n(t)\big)^2\,d\lambda(t)$$
with a $\xi \in (a,b)$.

Proof. We know from the theory of interpolation (cf. e.g. [4]) that
$$F(t) = P_{2n-1}(F; t) + r_{2n-1}(F; t) = P_{2n-1}(F; t) + \frac{F^{(2n)}(\xi(t))}{(2n)!}\prod_{\nu=1}^n\big(t - \tau_\nu^G\big)^2$$
with $\xi(t) \in (a,b)$. This yields for the remainder of the numerical integration that
$$R_n(F) = \int_{\mathbb{R}} r_{2n-1}(F; t)\,d\lambda(t) = \int_{\mathbb{R}}\frac{F^{(2n)}(\xi(t))}{(2n)!}\prod_{\nu=1}^n\big(t - \tau_\nu^G\big)^2\,d\lambda(t).$$
The mean value theorem of integration completes the proof, noting that $\prod_{\nu=1}^n (t - \tau_\nu^G) = P_n(t)$. □
Theorem 3.43
The first $n$ orthogonal polynomials $P_k$, $k = 0,\ldots,n-1$, are discretely orthogonal in the sense
$$\sum_{\nu=1}^n \lambda_\nu^G\,P_k(\tau_\nu^G)P_l(\tau_\nu^G) = \delta_{k,l}\,\|P_k\|^2, \quad k,l = 0,1,\ldots,n-1,$$
where $\tau_\nu^G$, $\lambda_\nu^G$ are the nodes and weights of the $n$-point Gauss quadrature formula.

Proof. This follows directly since the degree of exactness of the $n$-point Gauss quadrature is $2n-1$ and the product of $P_k$ and $P_l$ with $k,l \in \{0,1,\ldots,n-1\}$ is a polynomial of degree $\leq 2n-2$. □
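A small exact illustration of Theorem 3.43 (our addition): the 2-point Gauss-Legendre rule has nodes $\pm 1/\sqrt{3}$ and weights $1, 1$, and for the monic Legendre polynomials $P_0 = 1$, $P_1 = x$ the discrete scalar products reproduce $\delta_{k,l}\|P_k\|^2$.

```python
import math

# Discrete orthogonality (Theorem 3.43) with the 2-point Gauss-Legendre rule.
nodes = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]
weights = [1.0, 1.0]
P = [lambda x: 1.0, lambda x: x]       # monic Legendre P_0, P_1
norms_sq = [2.0, 2.0 / 3.0]            # ||P_0||^2 = 2, ||P_1||^2 = 2/3

for k in range(2):
    for l in range(2):
        s = sum(wt * P[k](x) * P[l](x) for wt, x in zip(weights, nodes))
        target = norms_sq[k] if k == l else 0.0
        assert abs(s - target) < 1e-12
print("discrete orthogonality verified")
```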
Remark 3.44
If $a > -\infty$, $b \leq \infty$, it can be desired to have $\tau_0 = a$ as a node. To achieve this we replace $n$ by $n+1$ in (3.6) and write $\omega_{n+1}(t) = (t-a)\,\omega_n(t)$. The optimal formula (see Theorem 3.36) requires $\omega_n$ to satisfy
$$\int_{\mathbb{R}} \omega_n(t)P(t)(t-a)\,d\lambda(t) = 0, \quad P \in \Pi_{n-1}.$$
Therefore, $\omega_n(t) = P_n(t; d\lambda_a)$ with $d\lambda_a(t) = (t-a)\,d\lambda(t)$.
The remaining $n$ nodes have to be zeros of $P_n(\cdot\,; d\lambda_a)$. The resulting rule is called Gauss-Radau rule.
If also $b < \infty$ and $a$ as well as $b$ are both desired as nodes, we find the $(n+2)$-point Gauss-Lobatto rule similarly with the measure $d\lambda_{a,b}(t) = (t-a)(b-t)\,d\lambda(t)$.
These rules have degrees of exactness equal to $2n$ and $2n+1$, respectively.
3.3 The Jacobi Polynomials

Definition 3.45
Let $d\lambda(x) = w(x)\,dx$ with $w(x) = (1-x)^\alpha(1+x)^\beta$, $\alpha, \beta > -1$, with the support interval $[-1,1]$. The corresponding orthogonal polynomials are the Jacobi polynomials $P_n^{(\alpha,\beta)}$, which are normalized by the condition
$$P_n^{(\alpha,\beta)}(1) = \binom{n+\alpha}{n} = \frac{\Gamma(n+\alpha+1)}{\Gamma(n+1)\Gamma(\alpha+1)}.$$
(Note that we generalize the notation of the binomial coefficients naturally via the Gamma function.)

Remark 3.46
Similar to Theorem 3.18 we find the identity
$$P_n^{(\alpha,\beta)}(-x) = (-1)^n P_n^{(\beta,\alpha)}(x)$$
and so we obtain
$$P_n^{(\alpha,\beta)}(-1) = (-1)^n\binom{n+\beta}{n}.$$
Theorem 3.47
For $\alpha > -1$ hold
$$P_{2n}^{(\alpha,\alpha)}(x) = \frac{\Gamma(2n+\alpha+1)\,\Gamma(n+1)}{\Gamma(n+\alpha+1)\,\Gamma(2n+1)}\,P_n^{(\alpha,-\frac12)}(2x^2-1) = (-1)^n\,\frac{\Gamma(2n+\alpha+1)\,\Gamma(n+1)}{\Gamma(n+\alpha+1)\,\Gamma(2n+1)}\,P_n^{(-\frac12,\alpha)}(1-2x^2),$$
$$P_{2n+1}^{(\alpha,\alpha)}(x) = \frac{\Gamma(2n+\alpha+2)\,\Gamma(n+1)}{\Gamma(n+\alpha+1)\,\Gamma(2n+2)}\,x\,P_n^{(\alpha,\frac12)}(2x^2-1) = (-1)^n\,\frac{\Gamma(2n+\alpha+2)\,\Gamma(n+1)}{\Gamma(n+\alpha+1)\,\Gamma(2n+2)}\,x\,P_n^{(\frac12,\alpha)}(1-2x^2).$$
Proof. We consider the first relation and use the notation $d\lambda_1(x) = (1-x)^\alpha(1+x)^\alpha dx = (1-x^2)^\alpha dx$ and $d\lambda_2(x) = (1-x)^\alpha(1+x)^{-\frac12}dx$. It suffices to prove that
$$I = \int_{\mathbb{R}} P_n^{(\alpha,-\frac12)}(2x^2-1)\,P(x)\,d\lambda_1(x) = \int_{-1}^1 P_n^{(\alpha,-\frac12)}(2x^2-1)\,P(x)\,(1-x^2)^\alpha\,dx = 0,$$
where $P \in \Pi_{2n-1}$.
If $P$ is an odd polynomial, this is fulfilled. Therefore, let $P$ be even, i.e. $P(x) = R(x^2)$ with $R \in \Pi_{n-1}$. Then (substituting first $t = x^2$, then $t = \frac{s+1}{2}$)
$$I = \int_{-1}^1 P_n^{(\alpha,-\frac12)}(2x^2-1)\,R(x^2)\,(1-x^2)^\alpha\,dx
= 2\int_0^1 P_n^{(\alpha,-\frac12)}(2x^2-1)\,R(x^2)\,(1-x^2)^\alpha\,dx$$
$$= \int_0^1 P_n^{(\alpha,-\frac12)}(2t-1)\,R(t)\,(1-t)^\alpha\,t^{-\frac12}\,dt
= 2^{-\alpha-\frac12}\int_{-1}^1 P_n^{(\alpha,-\frac12)}(s)\,R\Big(\tfrac{s+1}{2}\Big)(1-s)^\alpha(1+s)^{-\frac12}\,ds$$
$$= 2^{-\alpha-\frac12}\int_{\mathbb{R}} P_n^{(\alpha,-\frac12)}(s)\,R\Big(\tfrac{s+1}{2}\Big)\,d\lambda_2(s) = 0.$$
Hence $P_{2n}^{(\alpha,\alpha)}(x)$ is a constant multiple of $P_n^{(\alpha,-\frac12)}(2x^2-1)$; the constant factor follows from comparing the values at $x = 1$.
A similar argument can be used to prove the second relation of the theorem. □
Remark 3.48 (Special Cases)
(a) For $\alpha = \beta$ we are dealing with the special case of ultraspherical polynomials (or Gegenbauer polynomials), which due to the previous theorem are even or odd polynomials (depending on $n$ being even or odd):
$$C_n^{(\lambda)}(x) = \frac{\Gamma(\alpha+1)\,\Gamma(n+2\alpha+1)}{\Gamma(2\alpha+1)\,\Gamma(n+\alpha+1)}\,P_n^{(\alpha,\alpha)}(x) = \frac{\Gamma(\lambda+\frac12)\,\Gamma(n+2\lambda)}{\Gamma(2\lambda)\,\Gamma(n+\lambda+\frac12)}\,P_n^{(\lambda-\frac12,\lambda-\frac12)}(x),$$
where $\alpha = \beta = \lambda - \frac12$, $\lambda > -\frac12$ since $\alpha > -1$. If $\alpha = -\frac12$ (or $\lambda = 0$), the polynomial $C_n^{(0)}$ vanishes identically for $n \geq 1$.
Another consequence of that theorem is that Jacobi polynomials with $\beta = \frac12$ or $\beta = -\frac12$ can be expressed by ultraspherical polynomials.

(b) For $\alpha = \beta = -\frac12$ we obtain the Chebyshev polynomials of first kind $T_n$, i.e.
$$P_n^{(-\frac12,-\frac12)}(x) = \frac{\prod_{i=1}^n (2i-1)}{2^n\,n!}\,T_n(x) = \frac{\prod_{i=1}^n (2i-1)}{2^n\,n!}\,\cos(n\theta),$$
where $x = \cos(\theta)$.

(c) For $\alpha = \beta = \frac12$ we obtain the Chebyshev polynomials of second kind $U_n$, i.e.
$$P_n^{(\frac12,\frac12)}(x) = \frac{\prod_{i=0}^n (2i+1)}{2^n\,(n+1)!}\,U_n(x) = \frac{\prod_{i=0}^n (2i+1)}{2^n\,(n+1)!}\,\frac{\sin((n+1)\theta)}{\sin(\theta)},$$
where $x = \cos(\theta)$.

(d) The mixed variants $\alpha = -\frac12$, $\beta = \frac12$ and $\alpha = \frac12$, $\beta = -\frac12$ yield the Chebyshev polynomials of third kind $V_n$ and of fourth kind $W_n$, respectively, i.e.
$$P_n^{(-\frac12,\frac12)}(x) = \frac{\prod_{i=1}^n (2i-1)}{2^n\,n!}\,V_n(x) = \frac{\prod_{i=1}^n (2i-1)}{2^n\,n!}\,\frac{\cos\big(\frac{(2n+1)\theta}{2}\big)}{\cos\big(\frac{\theta}{2}\big)},$$
$$P_n^{(\frac12,-\frac12)}(x) = \frac{\prod_{i=1}^n (2i-1)}{2^n\,n!}\,W_n(x) = \frac{\prod_{i=1}^n (2i-1)}{2^n\,n!}\,\frac{\sin\big(\frac{(2n+1)\theta}{2}\big)}{\sin\big(\frac{\theta}{2}\big)},$$
where $x = \cos(\theta)$.

(e) For $\alpha = \beta = 0$ we find the Legendre polynomials $P_n$, i.e.
$$P_n^{(0,0)}(x) = C_n^{(\frac12)}(x) = P_n(x).$$
Note that in this case the weight function is $w(x) = 1$, $x \in [-1,1]$.
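The trigonometric representation in case (b) is easy to verify numerically. The following Python sketch (our addition) generates $T_n$ by the standard Chebyshev recurrence $T_0 = 1$, $T_1 = x$, $T_{n+1} = 2xT_n - T_{n-1}$ (a standard fact, not derived in these notes) and checks $T_n(\cos\theta) = \cos(n\theta)$ on a grid.

```python
import math

# Chebyshev polynomials of the first kind via the three-term recurrence.
def chebyshev_T(n, x):
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

# T_n(cos theta) = cos(n theta) for a range of degrees and angles
for n in range(8):
    for j in range(10):
        theta = j * math.pi / 9.0
        assert abs(chebyshev_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-10
print("T_n(cos theta) = cos(n theta) verified")
```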
Theorem 3.49
The Jacobi polynomials $y = P_n^{(\alpha,\beta)}(x)$ satisfy the following linear homogeneous differential equation of the second order:
$$(1-x^2)y'' + \big(\beta - \alpha - (\alpha+\beta+2)x\big)y' + n(n+\alpha+\beta+1)y = 0$$
or, equivalently,
$$\frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}y'\Big) + n(n+\alpha+\beta+1)(1-x)^\alpha(1+x)^\beta y = 0.$$
Proof. We note that since $y \in \Pi_n$ we have that
$$\frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}y'\Big) = (1-x)^\alpha(1+x)^\beta z,$$
where $z \in \Pi_n$. To show that $z$ is a constant multiple of $y$ we have to prove the orthogonality to any $P \in \Pi_{n-1}$, i.e.
$$\int_{-1}^1 \frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}y'\Big)P(x)\,dx = 0.$$
We use integration by parts on the left-hand side, which yields (since $\alpha+1 > 0$ and $\beta+1 > 0$, the boundary terms vanish):
$$\int_{-1}^1 \frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}y'\Big)P(x)\,dx = -\int_{-1}^1 (1-x)^{\alpha+1}(1+x)^{\beta+1}y'\,P'(x)\,dx = \int_{-1}^1 y\,\underbrace{\frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}P'(x)\Big)}_{=(1-x)^\alpha(1+x)^\beta R(x)}\,dx,$$
where we performed another integration by parts and $R \in \Pi_{n-1}$. Therefore, the integral vanishes by the orthogonality of $y = P_n^{(\alpha,\beta)}$.
The constant factor can be calculated by comparing the highest terms, i.e.
$$y = k_nx^n + \ldots, \quad y' = nk_nx^{n-1} + \ldots, \quad y'' = n(n-1)k_nx^{n-2} + \ldots,$$
and
$$0 = \frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}y'\Big) + C(1-x)^\alpha(1+x)^\beta y$$
$$= -(\alpha+1)(1-x)^\alpha(1+x)^{\beta+1}y' + (\beta+1)(1-x)^{\alpha+1}(1+x)^\beta y' + (1-x)^{\alpha+1}(1+x)^{\beta+1}y'' + C(1-x)^\alpha(1+x)^\beta y$$
$$= (1-x)^\alpha(1+x)^\beta\Big( -(\alpha+1)(1+x)y' + (\beta+1)(1-x)y' + (1-x^2)y'' + Cy \Big).$$
Comparing the coefficients of $x^n$ gives $0 = -(\alpha+1)nk_n - (\beta+1)nk_n - n(n-1)k_n + Ck_n$, thus
$$C = -\big(-n(\alpha+1) - n(\beta+1) - n(n-1)\big) = n(n+\alpha+\beta+1). \qquad\Box$$
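A quick spot check of this differential equation (our addition) in the Legendre case $\alpha = \beta = 0$, $n = 3$: the equation reduces to $(1-x^2)y'' - 2xy' + 12y = 0$, and the classical $P_3(x) = (5x^3 - 3x)/2$ (which satisfies the normalization $P_3(1) = 1$ of Definition 3.45) solves it identically.

```python
# Jacobi ODE of Theorem 3.49 with alpha = beta = 0, n = 3,
# for y = P_3(x) = (5x^3 - 3x)/2.

def y(x):
    return (5.0 * x ** 3 - 3.0 * x) / 2.0

def dy(x):
    return (15.0 * x ** 2 - 3.0) / 2.0

def d2y(x):
    return 15.0 * x

for j in range(-10, 11):
    x = j / 10.0
    # (1 - x^2) y'' + (beta - alpha - (alpha+beta+2) x) y' + n(n+alpha+beta+1) y
    residual = (1.0 - x * x) * d2y(x) - 2.0 * x * dy(x) + 12.0 * y(x)
    assert abs(residual) < 1e-10
print("Legendre P_3 satisfies the Jacobi ODE with alpha = beta = 0")
```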
Theorem 3.50
Let $\alpha, \beta > -1$. The differential equation
$$(1-x^2)y'' + \big(\beta - \alpha - (\alpha+\beta+2)x\big)y' + \gamma y = 0,$$
where $\gamma$ is a parameter, has a polynomial solution not identically zero if and only if $\gamma = n(n+\alpha+\beta+1)$, $n \in \mathbb{N}_0$. This solution is $C\,P_n^{(\alpha,\beta)}(x)$, $C \neq 0$, and no solution which is linearly independent of $P_n^{(\alpha,\beta)}$ can be a polynomial.
Proof. Substitute $y = \sum_{\nu=0}^\infty a_\nu(x-1)^\nu$ in the differential equation. Using $1-x^2 = -(x-1)(x+1)$ and $\beta - \alpha - (\alpha+\beta+2)x = -2(\alpha+1) - (\alpha+\beta+2)(x-1)$, this gives us
$$0 = -(x+1)\sum_{\nu=2}^\infty \nu(\nu-1)a_\nu(x-1)^{\nu-1} - \Big(2(\alpha+1) + (\alpha+\beta+2)(x-1)\Big)\sum_{\nu=1}^\infty \nu a_\nu(x-1)^{\nu-1} + \gamma\sum_{\nu=0}^\infty a_\nu(x-1)^\nu.$$
Writing $x+1 = (x-1)+2$ and collecting the powers of $x-1$, we obtain
$$0 = -\sum_{\nu=0}^\infty \Big(\nu(\nu-1) + \nu(\alpha+\beta+2) - \gamma\Big)a_\nu(x-1)^\nu - 2\sum_{\nu=0}^\infty (\nu+1)(\nu+\alpha+1)a_{\nu+1}(x-1)^\nu.$$
Thus, the coefficients have to fulfill the relation
$$\Big(\gamma - \nu(\nu+\alpha+\beta+1)\Big)a_\nu - 2(\nu+1)(\nu+\alpha+1)a_{\nu+1} = 0, \quad \nu \in \mathbb{N}_0.$$
If we assume that $y$ is a polynomial, we can suppose that $a_n$ denotes the last nonzero coefficient, i.e. $a_{n+1} = 0$. Therefore, the factor in front of $a_n$ in the recurrence relation above has to vanish, i.e.
$$\gamma = n(n+\alpha+\beta+1).$$
On the other hand, if this condition for $\gamma$ holds, we find that $a_i = 0$ for $i \geq n+1$, since the factor of $a_{\nu+1}$ is $\neq 0$ (note $\nu + \alpha + 1 > 0$).
Let $\gamma = n(n+\alpha+\beta+1)$ and let $z$ be a second solution of the differential equation, i.e.
$$\frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}y'\Big) + n(n+\alpha+\beta+1)(1-x)^\alpha(1+x)^\beta y = 0,$$
$$\frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}z'\Big) + n(n+\alpha+\beta+1)(1-x)^\alpha(1+x)^\beta z = 0.$$
Multiply the first equation by $z$ and the second by $y$ and subtract them:
$$0 = \frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}y'\Big)z - \frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}z'\Big)y = \frac{d}{dx}\Big((1-x)^{\alpha+1}(1+x)^{\beta+1}(y'z - z'y)\Big).$$
Thus, for all $x \in [-1,1]$,
$$(1-x)^{\alpha+1}(1+x)^{\beta+1}(y'z - yz') = c = \text{const}.$$
If we let $x \to 1$, we see that $y$ and $z$ cannot both be polynomials unless $c = 0$. Therefore, $y'z = z'y$ for all $x \in (-1,1)$, i.e. $y$ and $z$ are linearly dependent, $z(x) = \tilde c\,P_n^{(\alpha,\beta)}(x)$. □
Remark 3.51
The Jacobi polynomials can also be defined as the polynomial solutions of the corresponding differential equation that additionally take the value
$$P_n^{(\alpha,\beta)}(1) = \binom{n+\alpha}{n}.$$
Definition 3.52
The hypergeometric function $F$ (sometimes denoted by ${}_2F_1$) is defined by
$$F(a,b;c;x) = \sum_{k=0}^\infty \frac{(a)_k(b)_k}{(c)_k}\frac{x^k}{k!} = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\sum_{k=0}^\infty \frac{\Gamma(a+k)\Gamma(b+k)}{\Gamma(c+k)}\frac{x^k}{k!}$$
(where $(a)_k = a(a+1)\cdots(a+k-1)$ denotes the Pochhammer symbol) or its analytic continuation, with $a, b \in \mathbb{R}$, $c \in \mathbb{R}\setminus(-\mathbb{N}_0)$, $x \in (-1,1)$.
If $a = -n$ or $b = -n$, $n \in \mathbb{N}_0$, the hypergeometric function reduces to a polynomial in $x$ whose degree is $\leq n$.
Theorem 3.53
For the Jacobi polynomials hold
\begin{align*}
P_n^{(\alpha,\beta)}(x) &= \binom{n+\alpha}{n}\,F\Bigl(-n,\,n+\alpha+\beta+1;\,\alpha+1;\,\tfrac{1-x}{2}\Bigr) \\
&= \sum_{k=0}^{n}\frac{\Gamma(n+\alpha+1)}{\Gamma(n+1)\Gamma(\alpha+1)}\,\frac{\Gamma(-n+k)}{\Gamma(-n)\Gamma(k+1)}\,\frac{\Gamma(n+\alpha+\beta+k+1)\Gamma(\alpha+1)}{\Gamma(n+\alpha+\beta+1)\Gamma(\alpha+1+k)}\,(-1)^k\Bigl(\frac{x-1}{2}\Bigr)^k \\
&= \frac{\Gamma(n+\alpha+1)}{n!\,\Gamma(n+\alpha+\beta+1)}\sum_{k=0}^{n}\frac{\Gamma(n+\alpha+\beta+1+k)}{\Gamma(\alpha+1+k)}\binom{n}{k}\Bigl(\frac{x-1}{2}\Bigr)^k .
\end{align*}
This can be reformulated into
\begin{align*}
P_n^{(\alpha,\beta)}(x) &= \frac{1}{2^n n!}\sum_{k=0}^{n}\binom{n}{k}\frac{\Gamma(n+\alpha+1)}{\Gamma(n+\alpha+1-k)}\,\frac{\Gamma(n+\beta+1)}{\Gamma(\beta+1+k)}\,(x-1)^{n-k}(x+1)^k \\
&= \frac{1}{2^n}\sum_{k=0}^{n}\binom{n+\alpha}{k}\binom{n+\beta}{n-k}(x-1)^{n-k}(x+1)^k .
\end{align*}
Proof. This can be shown via the properties of the hypergeometric function, in particular its differential equation. Another way uses the Rodriguez representation of the Jacobi polynomials which will follow soon.
Corollary 3.54
The leading coefficient of the Jacobi polynomial $P_n^{(\alpha,\beta)}$ of degree $n$ is given by
\[ k_n^{(\alpha,\beta)} = 2^{-n}\binom{2n+\alpha+\beta}{n}. \]
Proof. Consider the representation
\[
P_n^{(\alpha,\beta)}(x) = \frac{\Gamma(n+\alpha+1)}{n!\,\Gamma(n+\alpha+\beta+1)}\sum_{k=0}^{n}\frac{\Gamma(n+\alpha+\beta+1+k)}{\Gamma(\alpha+1+k)}\binom{n}{k}\Bigl(\frac{x-1}{2}\Bigr)^k
\]
of Theorem 3.53 in the following limit:
\[
k_n^{(\alpha,\beta)} = \lim_{x\to\infty}x^{-n}P_n^{(\alpha,\beta)}(x)
= \frac{\Gamma(n+\alpha+1)}{n!\,\Gamma(n+\alpha+\beta+1)}\,\frac{\Gamma(n+\alpha+\beta+1+n)}{\Gamma(\alpha+1+n)}\binom{n}{n}\frac{1}{2^n}
= 2^{-n}\binom{2n+\alpha+\beta}{n}.
\]
Corollary 3.55
For the derivative of the Jacobi polynomials holds
\[ \frac{d}{dx}P_n^{(\alpha,\beta)}(x) = \frac{n+\alpha+\beta+1}{2}\,P_{n-1}^{(\alpha+1,\beta+1)}(x). \]
Proof. See Exercise 6.2. (This follows immediately if both sides are expanded according to Theorem 3.53.)
Theorem 3.56 (Rodriguez Formula)
Let $\alpha, \beta > -1$. Then
\[
(1-x)^{\alpha}(1+x)^{\beta}\,P_n^{(\alpha,\beta)}(x) = \frac{(-1)^n}{2^n n!}\Bigl(\frac{d}{dx}\Bigr)^n\Bigl[(1-x)^{n+\alpha}(1+x)^{n+\beta}\Bigr].
\]
Proof. We use Leibniz' rule for the differentiation of products to find
\[
\Bigl(\frac{d}{dx}\Bigr)^n\Bigl[(1-x)^{n+\alpha}(1+x)^{n+\beta}\Bigr]
= \sum_{k=0}^{n}\binom{n}{k}\Bigl[\Bigl(\frac{d}{dx}\Bigr)^k(1-x)^{n+\alpha}\Bigr]\Bigl[\Bigl(\frac{d}{dx}\Bigr)^{n-k}(1+x)^{n+\beta}\Bigr]
= \sum_{k=0}^{n}\binom{n}{k}(1-x)^{\alpha}R_{n-k}(x)\,(1+x)^{\beta}S_k(x)
= (1-x)^{\alpha}(1+x)^{\beta}\,\Pi(x)
\]
where $R_j, S_j \in \Pi_j$, $\deg(R_j) = \deg(S_j) = j$, $j = 0,\dots,n$, and $\Pi \in \Pi_n$, $\deg(\Pi) = n$. In detail (we will need that later):
\[
(1-x)^{\alpha}(1+x)^{\beta}\,\Pi(x)
= \sum_{k=0}^{n}\binom{n}{k}\frac{\Gamma(n+\alpha+1)}{\Gamma(n+\alpha-k+1)}\,\frac{\Gamma(n+\beta+1)}{\Gamma(\beta+k+1)}\,(-1)^k(1-x)^{n+\alpha-k}(1+x)^{\beta+k}.
\]
Therefore,
\[
\Pi(x) = \sum_{k=0}^{n}\binom{n}{k}\frac{\Gamma(n+\alpha+1)}{\Gamma(n+\alpha-k+1)}\,\frac{\Gamma(n+\beta+1)}{\Gamma(\beta+k+1)}\,(-1)^k(1-x)^{n-k}(1+x)^{k}.
\]
We now have to show that $\Pi = C\,P_n^{(\alpha,\beta)}$ with a constant $C$. It suffices to show that
\[ \int_{-1}^{1}(1-x)^{\alpha}(1+x)^{\beta}\,\Pi(x)R(x)\,dx = 0 \]
for all $R \in \Pi_{n-1}$. We use partial integration:
\begin{align*}
\int_{-1}^{1}(1-x)^{\alpha}(1+x)^{\beta}\,\Pi(x)R(x)\,dx
&= \int_{-1}^{1}\Bigl(\frac{d}{dx}\Bigr)^n\Bigl[(1-x)^{n+\alpha}(1+x)^{n+\beta}\Bigr]R(x)\,dx \\
&= \underbrace{\Bigl[\Bigl(\frac{d}{dx}\Bigr)^{n-1}\Bigl[(1-x)^{n+\alpha}(1+x)^{n+\beta}\Bigr]R(x)\Bigr]_{-1}^{1}}_{=0}
- \int_{-1}^{1}\Bigl(\frac{d}{dx}\Bigr)^{n-1}\Bigl[(1-x)^{n+\alpha}(1+x)^{n+\beta}\Bigr]R'(x)\,dx \\
&= \dots = (-1)^n\int_{-1}^{1}(1-x)^{n+\alpha}(1+x)^{n+\beta}\,R^{(n)}(x)\,dx = 0
\end{align*}
since $\deg(R) \leq n-1$, i.e. $R^{(n)} \equiv 0$. Now we just have to determine the constant $C$. Consider $x = 1$ in $\Pi$, i.e. only the summand $k = n$ remains, i.e.
\[
\Pi(1) = \binom{n}{n}\frac{\Gamma(n+\alpha+1)}{\Gamma(\alpha+1)}\,\frac{\Gamma(n+\beta+1)}{\Gamma(n+\beta+1)}\,(-1)^n 2^n
= (-1)^n 2^n n!\binom{n+\alpha}{n} = (-1)^n 2^n n!\,P_n^{(\alpha,\beta)}(1).
\]
Thus, $C = 2^n n!\,(-1)^n$.
Remark 3.57
The previous theorem also gives us the explicit representation of the second part of Theorem 3.53, i.e.
\begin{align*}
P_n^{(\alpha,\beta)}(x) &= \frac{1}{2^n n!}\sum_{k=0}^{n}\binom{n}{k}\frac{\Gamma(n+\alpha+1)}{\Gamma(n+\alpha+1-k)}\,\frac{\Gamma(n+\beta+1)}{\Gamma(\beta+1+k)}\,(x-1)^{n-k}(x+1)^k \\
&= \frac{1}{2^n}\sum_{k=0}^{n}\binom{n+\alpha}{k}\binom{n+\beta}{n-k}(x-1)^{n-k}(x+1)^k .
\end{align*}
Theorem 3.58
Let $\alpha, \beta > -1$. Then for $n \in \mathbb{N}$
\[
h_n^{(\alpha,\beta)} = \bigl\|P_n^{(\alpha,\beta)}\bigr\|^2 = \int_{-1}^{1}\bigl(P_n^{(\alpha,\beta)}(x)\bigr)^2(1-x)^{\alpha}(1+x)^{\beta}\,dx
= \frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\,\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(n+1)\Gamma(n+\alpha+\beta+1)}
\]
and
\[
h_0^{(\alpha,\beta)} = \bigl\|P_0^{(\alpha,\beta)}\bigr\|^2 = \int_{-1}^{1}\bigl(P_0^{(\alpha,\beta)}(x)\bigr)^2(1-x)^{\alpha}(1+x)^{\beta}\,dx
= 2^{\alpha+\beta+1}\,\frac{\Gamma(\alpha+1)\Gamma(\beta+1)}{\Gamma(\alpha+\beta+2)}.
\]
Proof. Let $n \in \mathbb{N}$. We have from the proof of Theorem 3.56
\begin{align*}
\int_{-1}^{1}\bigl(P_n^{(\alpha,\beta)}(x)\bigr)^2(1-x)^{\alpha}(1+x)^{\beta}\,dx
&= k_n^{(\alpha,\beta)}\int_{-1}^{1}P_n^{(\alpha,\beta)}(x)\,x^n(1-x)^{\alpha}(1+x)^{\beta}\,dx \\
&= \frac{(-1)^n}{2^n n!}\,k_n^{(\alpha,\beta)}\int_{-1}^{1}\Bigl(\frac{d}{dx}\Bigr)^n\Bigl[(1-x)^{n+\alpha}(1+x)^{n+\beta}\Bigr]x^n\,dx \\
&= 2^{-n}k_n^{(\alpha,\beta)}\int_{-1}^{1}(1-x)^{n+\alpha}(1+x)^{n+\beta}\,dx
= 2^{-2n}\binom{2n+\alpha+\beta}{n}\int_{-1}^{1}(1-x)^{n+\alpha}(1+x)^{n+\beta}\,dx
\end{align*}
where we used partial integration and Corollary 3.54. Therefore, with the substitution $y = \frac{1-x}{2}$,
\begin{align*}
\int_{-1}^{1}\bigl(P_n^{(\alpha,\beta)}(x)\bigr)^2(1-x)^{\alpha}(1+x)^{\beta}\,dx
&= 2^{\alpha+\beta}\binom{2n+\alpha+\beta}{n}\int_{-1}^{1}\Bigl(\frac{1-x}{2}\Bigr)^{n+\alpha}\Bigl(\frac{1+x}{2}\Bigr)^{n+\beta}\,dx \\
&= 2^{\alpha+\beta+1}\binom{2n+\alpha+\beta}{n}\int_{0}^{1}y^{n+\alpha}(1-y)^{n+\beta}\,dy \\
&= 2^{\alpha+\beta+1}\,\frac{\Gamma(2n+\alpha+\beta+1)}{\Gamma(n+1)\Gamma(n+\alpha+\beta+1)}\,\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(2n+\alpha+\beta+2)} \\
&= \frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\,\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(n+1)\Gamma(n+\alpha+\beta+1)}.
\end{align*}
The case $n = 0$ follows similarly.
Theorem 3.59
The Jacobi polynomials fulfill the following three term recurrence relation:
\begin{align*}
2(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)\,P_{n+1}^{(\alpha,\beta)}(x) ={}& (2n+\alpha+\beta+1)\Bigl[(2n+\alpha+\beta+2)(2n+\alpha+\beta)x + \alpha^2-\beta^2\Bigr]P_n^{(\alpha,\beta)}(x) \\
&- 2(n+\alpha)(n+\beta)(2n+\alpha+\beta+2)\,P_{n-1}^{(\alpha,\beta)}(x)
\end{align*}
for $n \in \mathbb{N}$ with $P_0^{(\alpha,\beta)}(x) = 1$ and $P_1^{(\alpha,\beta)}(x) = \frac{1}{2}(\alpha+\beta+2)x + \frac{1}{2}(\alpha-\beta)$.
Proof. See Exercise 6.1.
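The recurrence of Theorem 3.59 is the standard way to evaluate Jacobi polynomials numerically. A minimal Python sketch (the function name `jacobi` and the use of double-precision floats are our own choices, not part of the lecture):

```python
def jacobi(n, a, b, x):
    """Evaluate P_n^{(a,b)}(x) by the three-term recurrence of Theorem 3.59."""
    if n == 0:
        return 1.0
    # P_0 and P_1 as stated in the theorem
    p_prev, p = 1.0, 0.5 * (a + b + 2) * x + 0.5 * (a - b)
    for k in range(1, n):
        # 2(k+1)(k+a+b+1)(2k+a+b) P_{k+1}
        #   = (2k+a+b+1)[(2k+a+b+2)(2k+a+b)x + a^2 - b^2] P_k
        #     - 2(k+a)(k+b)(2k+a+b+2) P_{k-1}
        denom = 2 * (k + 1) * (k + a + b + 1) * (2 * k + a + b)
        p_next = ((2 * k + a + b + 1)
                  * ((2 * k + a + b + 2) * (2 * k + a + b) * x + a * a - b * b) * p
                  - 2 * (k + a) * (k + b) * (2 * k + a + b + 2) * p_prev) / denom
        p_prev, p = p, p_next
    return p
```

For $\alpha = \beta = 0$ this reproduces the Legendre polynomials, e.g. $P_2(x) = (3x^2-1)/2$, and at $x = 1$ one recovers $P_n^{(\alpha,\beta)}(1) = \binom{n+\alpha}{n}$ in accordance with Remark 3.51.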
3.4 Ultraspherical Polynomials
In Remark 3.48 we have already presented the ultraspherical (or Gegenbauer) polynomials $C_n^{(\lambda)}$ and given their connection to the general Jacobi polynomials, i.e.
\[
C_n^{(\alpha+\frac12)}(x) = \frac{\Gamma(\alpha+1)\Gamma(n+2\alpha+1)}{\Gamma(2\alpha+1)\Gamma(n+\alpha+1)}\,P_n^{(\alpha,\alpha)}(x), \qquad
C_n^{(\lambda)}(x) = \frac{\Gamma(\lambda+\frac12)\Gamma(n+2\lambda)}{\Gamma(2\lambda)\Gamma(n+\lambda+\frac12)}\,P_n^{(\lambda-\frac12,\lambda-\frac12)}(x),
\]
where $\alpha = \beta = \lambda - \frac12$, $\lambda > -\frac12$ since $\alpha > -1$. This relation gives us the following properties for $\lambda \neq 0$:

1. The value at 1 is $C_n^{(\lambda)}(1) = \binom{n+2\lambda-1}{n}$, the symmetry relation is given by
\[ C_n^{(\lambda)}(-x) = (-1)^n C_n^{(\lambda)}(x), \]
i.e. the polynomials are either even or odd (depending on $n$) due to Theorem 3.47.

2. The differential equation (whose solution is $y = C_n^{(\lambda)}$) becomes
\[ (1-x^2)y'' - (2\lambda+1)xy' + n(n+2\lambda)y = 0. \tag{3.7} \]

3. We know the explicit representation, i.e.
\[
C_n^{(\lambda)}(x) = \binom{n+2\lambda-1}{n}F\Bigl(-n,\,n+2\lambda;\,\lambda+\tfrac12;\,\tfrac{1-x}{2}\Bigr)
= \frac{\Gamma(\lambda+\frac12)}{\Gamma(2\lambda)\,n!}\sum_{k=0}^{n}\binom{n}{k}\frac{\Gamma(n+2\lambda+k)}{\Gamma(\lambda+\frac12+k)}\Bigl(\frac{x-1}{2}\Bigr)^k .
\]

4. The leading coefficient is
\[ k_n^{(\lambda)} = \lim_{x\to\infty}x^{-n}C_n^{(\lambda)}(x) = 2^n\binom{n+\lambda-1}{n}. \]

5. The derivative is again an ultraspherical polynomial, i.e.
\[ \frac{d}{dx}C_n^{(\lambda)}(x) = 2\lambda\,C_{n-1}^{(\lambda+1)}(x). \]

6. The Rodriguez representation for the Gegenbauer polynomials reads as follows
\begin{align*}
C_n^{(\lambda)}(x) &= \frac{(-1)^n\,\Gamma(\lambda+\frac12)\Gamma(n+2\lambda)}{2^n n!\,\Gamma(2\lambda)\Gamma(n+\lambda+\frac12)}\,(1-x^2)^{\frac12-\lambda}\Bigl(\frac{d}{dx}\Bigr)^n\Bigl[(1-x^2)^{n+\lambda-\frac12}\Bigr] \\
&= \frac{(-2)^n}{n!}\,\frac{\Gamma(n+\lambda)\Gamma(n+2\lambda)}{\Gamma(\lambda)\Gamma(2n+2\lambda)}\,(1-x^2)^{\frac12-\lambda}\Bigl(\frac{d}{dx}\Bigr)^n\Bigl[(1-x^2)^{n+\lambda-\frac12}\Bigr].
\end{align*}

7. Their norm is computed to be
\[ h_n^{(\lambda)} = \bigl\|C_n^{(\lambda)}\bigr\|^2 = \frac{2^{1-2\lambda}\,\pi\,\Gamma(n+2\lambda)}{n!\,(n+\lambda)\bigl(\Gamma(\lambda)\bigr)^2}. \]

8. The corresponding three term recurrence is given by
\[ (n+1)\,C_{n+1}^{(\lambda)}(x) = 2(n+\lambda)x\,C_n^{(\lambda)}(x) - (n+2\lambda-1)\,C_{n-1}^{(\lambda)}(x) \tag{3.8} \]
for $n \in \mathbb{N}$ with $C_0^{(\lambda)}(x) = 1$ and $C_1^{(\lambda)}(x) = 2\lambda x$.
Lemma 3.60
If $n \geq 1$, $C_n^{(0)} \equiv 0$, but
\[ \lim_{\lambda\to 0}\frac{C_n^{(\lambda)}(x)}{\lambda} = \frac{2}{n}\,T_n(x) \]
with the Chebyshev polynomials of first kind $T_n$.
Proof. We remember that
\[ T_n(x) = \frac{2^n n!}{\prod_{i=1}^{n}(2i-1)}\,P_n^{(-\frac12,-\frac12)}(x). \]
Now we consider
\begin{align*}
\frac{C_n^{(\lambda)}(x)}{\lambda}
&= \frac{\Gamma(\lambda+\frac12)}{\lambda\,\Gamma(2\lambda)}\,\frac{\Gamma(n+2\lambda)}{\Gamma(n+\lambda+\frac12)}\,P_n^{(\lambda-\frac12,\lambda-\frac12)}(x)
= \frac{2\,\Gamma(\lambda+\frac12)}{\Gamma(2\lambda+1)}\,\frac{\Gamma(n+2\lambda)}{\Gamma(n+\lambda+\frac12)}\,P_n^{(\lambda-\frac12,\lambda-\frac12)}(x) \\
&\xrightarrow{\lambda\to 0}\;\frac{2\,\Gamma(\frac12)}{\Gamma(1)}\,\frac{\Gamma(n)}{\Gamma(n+\frac12)}\,P_n^{(-\frac12,-\frac12)}(x)
= \frac{2\,n!}{n}\,\frac{\Gamma(\frac12)}{\Gamma(n+\frac12)}\,P_n^{(-\frac12,-\frac12)}(x) \\
&= \frac{2\,n!}{n}\,\frac{1}{\prod_{i=1}^{n}(i-1/2)}\,P_n^{(-\frac12,-\frac12)}(x)
= \frac{2}{n}\,\frac{n!\,2^n}{\prod_{i=1}^{n}(2i-1)}\,P_n^{(-\frac12,-\frac12)}(x) = \frac{2}{n}\,T_n(x).
\end{align*}
Note that $C_0^{(\lambda)} = 1$, also as $\lambda \to 0$. Combining Theorem 3.47 and Theorem 3.53 for the ultraspherical polynomials we obtain the following representations.
Lemma 3.61
For $n \in \mathbb{N}_0$ hold
\[
C_{2n}^{(\lambda)}(x) = \binom{2n+2\lambda-1}{2n}F\bigl(-n,\,n+\lambda;\,\lambda+\tfrac12;\,1-x^2\bigr), \qquad
C_{2n+1}^{(\lambda)}(x) = \binom{2n+2\lambda}{2n+1}\,x\,F\bigl(-n,\,n+\lambda+1;\,\lambda+\tfrac12;\,1-x^2\bigr).
\]
Proof. Consider the even case first. We start with the relation between Gegenbauer and Jacobi polynomials and apply Theorem 3.47:
\[
C_{2n}^{(\lambda)}(x) = \frac{\Gamma(\lambda+\frac12)\Gamma(2n+2\lambda)}{\Gamma(2\lambda)\Gamma(2n+\lambda+\frac12)}\,P_{2n}^{(\lambda-\frac12,\lambda-\frac12)}(x)
= \frac{\Gamma(\lambda+\frac12)\Gamma(2n+2\lambda)}{\Gamma(2\lambda)\Gamma(2n+\lambda+\frac12)}\,\frac{\Gamma(2n+\lambda+\frac12)\Gamma(n+1)}{\Gamma(n+\lambda+\frac12)\Gamma(2n+1)}\,P_n^{(\lambda-\frac12,-\frac12)}(2x^2-1).
\]
Now we use the representation of Theorem 3.53:
\[
C_{2n}^{(\lambda)}(x) = \frac{\Gamma(\lambda+\frac12)\Gamma(2n+2\lambda)\Gamma(n+1)}{\Gamma(2\lambda)\Gamma(n+\lambda+\frac12)\Gamma(2n+1)}\binom{n+\lambda-\frac12}{n}F\bigl(-n,\,n+\lambda;\,\lambda+\tfrac12;\,1-x^2\bigr)
= \binom{2n+2\lambda-1}{2n}F\bigl(-n,\,n+\lambda;\,\lambda+\tfrac12;\,1-x^2\bigr).
\]
The odd case can be shown analogously.
Lemma 3.62
For $n \in \mathbb{N}_0$ hold
\[
C_{2n}^{(\lambda)}(x) = (-1)^n\binom{n+\lambda-1}{n}F\bigl(-n,\,n+\lambda;\,\tfrac12;\,x^2\bigr), \qquad
C_{2n+1}^{(\lambda)}(x) = (-1)^n\,2\lambda\binom{n+\lambda}{n}\,x\,F\bigl(-n,\,n+\lambda+1;\,\tfrac32;\,x^2\bigr).
\]
Proof. Consider the even case. This time we apply the second variant of Theorem 3.47:
\begin{align*}
C_{2n}^{(\lambda)}(x) &= \frac{\Gamma(\lambda+\frac12)\Gamma(2n+2\lambda)}{\Gamma(2\lambda)\Gamma(2n+\lambda+\frac12)}\,P_{2n}^{(\lambda-\frac12,\lambda-\frac12)}(x) \\
&= \frac{\Gamma(\lambda+\frac12)\Gamma(2n+2\lambda)}{\Gamma(2\lambda)\Gamma(2n+\lambda+\frac12)}\,(-1)^n\,\frac{\Gamma(2n+\lambda+\frac12)\Gamma(n+1)}{\Gamma(n+\lambda+\frac12)\Gamma(2n+1)}\,P_n^{(-\frac12,\lambda-\frac12)}(1-2x^2) \\
&= \frac{\Gamma(\lambda+\frac12)\Gamma(2n+2\lambda)\Gamma(n+1)}{\Gamma(2\lambda)\Gamma(n+\lambda+\frac12)\Gamma(2n+1)}\,(-1)^n\binom{n-\frac12}{n}F\Bigl(-n,\,n+\lambda;\,\tfrac12;\,\tfrac{1-(1-2x^2)}{2}\Bigr) \\
&= \frac{\Gamma(\lambda+\frac12)\Gamma(2n+2\lambda)\Gamma(n+1)}{\Gamma(2\lambda)\Gamma(n+\lambda+\frac12)\Gamma(2n+1)}\,(-1)^n\,\frac{\Gamma(n+\frac12)}{\Gamma(n+1)\Gamma(\frac12)}\,F\bigl(-n,\,n+\lambda;\,\tfrac12;\,x^2\bigr).
\end{align*}
Note that
\[
\frac{\Gamma(n+\lambda+\frac12)}{\Gamma(\lambda+\frac12)} = \prod_{i=0}^{n-1}\bigl(i+\lambda+\tfrac12\bigr), \qquad
\frac{\Gamma(2n+2\lambda)}{\Gamma(2\lambda)} = 2^n\prod_{i=0}^{n-1}(i+\lambda)\;2^n\prod_{i=0}^{n-1}\bigl(i+\lambda+\tfrac12\bigr),
\]
such that
\[
\frac{\Gamma(\lambda+\frac12)}{\Gamma(n+\lambda+\frac12)}\,\frac{\Gamma(2n+2\lambda)}{\Gamma(2\lambda)} = 2^{2n}\,\frac{\Gamma(n+\lambda)}{\Gamma(\lambda)}.
\]
Moreover,
\[
2^n\,\frac{\Gamma(n+\frac12)}{\Gamma(\frac12)} = \prod_{i=1}^{n}(2i-1), \qquad 2^n\,\Gamma(n+1) = \prod_{i=1}^{n}(2i),
\]
such that
\[
2^{2n}\,\frac{\Gamma(n+\frac12)}{\Gamma(\frac12)}\,\Gamma(n+1) = (2n)! = \Gamma(2n+1)
\]
and finally
\[
C_{2n}^{(\lambda)}(x) = (-1)^n\frac{\Gamma(n+\lambda)}{\Gamma(\lambda)\Gamma(n+1)}\,F\bigl(-n,\,n+\lambda;\,\tfrac12;\,x^2\bigr)
= (-1)^n\binom{n+\lambda-1}{n}F\bigl(-n,\,n+\lambda;\,\tfrac12;\,x^2\bigr).
\]
The odd case can be shown analogously.
Corollary 3.63
For $N \in \mathbb{N}_0$ holds the explicit representation
\[
C_N^{(\lambda)}(x) = \sum_{m=0}^{\lfloor N/2\rfloor}(-1)^m\frac{\Gamma(N-m+\lambda)}{\Gamma(\lambda)\Gamma(m+1)\Gamma(N-2m+1)}\,(2x)^{N-2m}.
\]
Proof. Let $N = 2n$, i.e. $\lfloor N/2\rfloor = n$. We use the definition of the hypergeometric function $F$ (Definition 3.52) in Lemma 3.62:
\[
C_N^{(\lambda)}(x) = (-1)^n\frac{\Gamma(n+\lambda)}{\Gamma(\lambda)\Gamma(n+1)}\,F\bigl(-n,\,n+\lambda;\,\tfrac12;\,x^2\bigr)
= (-1)^n\frac{\Gamma(n+\lambda)}{\Gamma(\lambda)\Gamma(n+1)}\sum_{k=0}^{n}\frac{(-n)_k(n+\lambda)_k}{(\frac12)_k}\frac{x^{2k}}{k!}
\]
with
\[
(-n)_k = (-n)(-n+1)\cdots(-n+k-1) = (-1)^k n(n-1)\cdots(n-k+1) = (-1)^k\frac{\Gamma(n+1)}{\Gamma(n+1-k)},
\]
\[
(n+\lambda)_k = \frac{\Gamma(n+\lambda+k)}{\Gamma(n+\lambda)}, \qquad
\bigl(\tfrac12\bigr)_k = \frac{\Gamma(k+\frac12)}{\Gamma(\frac12)} = 2^{-k}\prod_{i=1}^{k}(2k+1-2i).
\]
We obtain:
\begin{align*}
C_N^{(\lambda)}(x) &= \sum_{k=0}^{n}(-1)^{n+k}\frac{\Gamma(n+\lambda)}{\Gamma(\lambda)\Gamma(n+1)}\,\frac{\Gamma(n+1)}{\Gamma(n+1-k)}\,\frac{\Gamma(n+\lambda+k)}{\Gamma(n+\lambda)}\,\frac{\Gamma(\frac12)}{\Gamma(k+\frac12)}\,\frac{x^{2k}}{k!} \\
&= \sum_{k=0}^{n}(-1)^{n-k}\frac{\Gamma(n+\lambda+k)}{\Gamma(\lambda)\Gamma(n-k+1)\Gamma(k+1)}\,\frac{2^k}{\prod_{i=1}^{k}(2k+1-2i)}\,x^{2k}
= \sum_{k=0}^{n}(-1)^{n-k}\frac{\Gamma(n+\lambda+k)}{\Gamma(\lambda)\Gamma(n-k+1)\Gamma(k+1)}\,\frac{(2x)^{2k}}{2^k\prod_{i=1}^{k}(2k+1-2i)}.
\end{align*}
Now we shift the index using $k = n-m$ or $m = n-k$:
\[
C_N^{(\lambda)}(x) = \sum_{m=0}^{n}(-1)^m\frac{\Gamma(2n-m+\lambda)}{\Gamma(\lambda)\Gamma(m+1)\Gamma(n-m+1)}\,\frac{(2x)^{2n-2m}}{2^{n-m}\prod_{i=1}^{n-m}(2n-2m+1-2i)}
= \sum_{m=0}^{\lfloor N/2\rfloor}(-1)^m\frac{\Gamma(N-m+\lambda)}{\Gamma(\lambda)\Gamma(m+1)\Gamma(N-2m+1)}\,(2x)^{N-2m}
\]
since
\[
2^{n-m}\,\Gamma(n-m+1) = 2^{n-m}\prod_{i=0}^{n-m-1}(n-m-i) = \prod_{i=0}^{n-m-1}(2n-2m-2i)
\]
and therefore
\[
2^{n-m}\,\Gamma(n-m+1)\prod_{i=1}^{n-m}\bigl(2n-2m-(2i-1)\bigr)
= \prod_{i=0}^{n-m-1}(2n-2m-2i)\prod_{i=1}^{n-m}\bigl(2n-2m-(2i-1)\bigr) = (N-2m)! = \Gamma(N-2m+1).
\]
For $N = 2n+1$ the proof can be performed analogously.
Theorem 3.64
We have for $\lambda > 0$ that
\[ \max_{-1\le x\le 1}\bigl|C_n^{(\lambda)}(x)\bigr| = C_n^{(\lambda)}(1) = \binom{n+2\lambda-1}{n}. \]
For $\lambda < 0$ holds
\[ \max_{-1\le x\le 1}\bigl|C_n^{(\lambda)}(x)\bigr| = \bigl|C_n^{(\lambda)}(x^*)\bigr| \]
where $x^*$ is one of the two maximum points nearest to 0 if $n$ is odd; $x^* = 0$ if $n$ is even.
Proof. Let $n \geq 1$. We consider $F$ defined by
\[
n(n+2\lambda)F(x) = n(n+2\lambda)\bigl(C_n^{(\lambda)}(x)\bigr)^2 + (1-x^2)\Bigl(\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr)^2.
\]
Then $F(x) = \bigl(C_n^{(\lambda)}(x)\bigr)^2$ if $\frac{d}{dx}C_n^{(\lambda)}(x) = 0$ or $x = \pm 1$, i.e.
\[
\max_{-1\le x\le 1}\bigl(C_n^{(\lambda)}(x)\bigr)^2 \le \max_{-1\le x\le 1}F(x).
\]
Now we differentiate the equation above and use the differential equation (3.7):
\begin{align*}
n(n+2\lambda)F'(x) &= n(n+2\lambda)\,2\,C_n^{(\lambda)}(x)\Bigl(\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr) - 2x\Bigl(\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr)^2 + (1-x^2)\,2\Bigl(\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr)\Bigl(\frac{d^2}{dx^2}C_n^{(\lambda)}(x)\Bigr) \\
&= 2\Bigl(\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr)\Bigl[n(n+2\lambda)C_n^{(\lambda)}(x) + (1-x^2)\frac{d^2}{dx^2}C_n^{(\lambda)}(x) - x\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr] \\
&= 2\Bigl(\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr)\Bigl[(2\lambda+1)x\frac{d}{dx}C_n^{(\lambda)}(x) - x\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr]
= 4\lambda x\Bigl(\frac{d}{dx}C_n^{(\lambda)}(x)\Bigr)^2.
\end{align*}
Therefore, we find that if $\lambda > 0$, $F$ is increasing in $[0,1]$. If $\lambda < 0$, $F$ is decreasing in $[0,1]$. This (together with the symmetry relation) gives us the desired result.
Remark 3.65
For the Legendre polynomials ($\lambda = \frac12$) holds that $|P_n(x)| \le P_n(1) = 1$ and
\[
|P_n'(x)| = \bigl|C_{n-1}^{(3/2)}(x)\bigr| \le \binom{n-1+3-1}{n-1} = \binom{n+1}{n-1} = \frac{n(n+1)}{2}
\]
for $x \in [-1,1]$.
Theorem 3.66 (Generating Function)
For $\lambda > 0$ and $h \in (-1,1)$ holds
\[ \sum_{n=0}^{\infty}h^n\,C_n^{(\lambda)}(x) = \bigl(1 - 2hx + h^2\bigr)^{-\lambda}, \]
where $x \in [-1,1]$.
Proof. First we check the convergence of the series:
\[
\sum_{n=0}^{\infty}\bigl|h^n C_n^{(\lambda)}(x)\bigr| \le \sum_{n=0}^{\infty}|h|^n\bigl|C_n^{(\lambda)}(x)\bigr| \le \sum_{n=0}^{\infty}|h|^n\binom{n+2\lambda-1}{n} \le C\sum_{n=0}^{\infty}n^{2\lambda}|h|^n < \infty
\]
since
\[ \binom{n+2\lambda-1}{n} = \frac{\Gamma(n+2\lambda)}{\Gamma(n+1)\Gamma(2\lambda)} \le C n^{2\lambda}. \]
Thus, we have absolute and uniform convergence of the series.
Now we consider the recurrence relation (3.8):
\[
\sum_{n=1}^{\infty}n\,C_n^{(\lambda)}(x)h^{n-1} = 2x\sum_{n=1}^{\infty}(n-1+\lambda)\,C_{n-1}^{(\lambda)}(x)h^{n-1} - \sum_{n=1}^{\infty}(n+2\lambda-2)\,C_{n-2}^{(\lambda)}(x)h^{n-1}
\]
where $C_{-1}^{(\lambda)} \equiv 0$. Denote for a fixed $x \in [-1,1]$
\[ \phi(h) = \sum_{n=0}^{\infty}C_n^{(\lambda)}(x)h^n. \]
Then we obtain (the differentiation is performed with respect to $h$)
\[
\phi'(h) = \sum_{n=1}^{\infty}n\,C_n^{(\lambda)}(x)h^{n-1},
\]
\[
h^{1-\lambda}\bigl(h^{\lambda}\phi(h)\bigr)' = \lambda\phi(h) + h\phi'(h)
= \sum_{n=0}^{\infty}\lambda C_n^{(\lambda)}(x)h^n + \sum_{n=1}^{\infty}n\,C_n^{(\lambda)}(x)h^n
= \sum_{n=0}^{\infty}(n+\lambda)\,C_n^{(\lambda)}(x)h^n
= \sum_{n=1}^{\infty}(n-1+\lambda)\,C_{n-1}^{(\lambda)}(x)h^{n-1},
\]
\[
h^{2-2\lambda}\bigl(h^{2\lambda}\phi(h)\bigr)' = 2\lambda h\phi(h) + h^2\phi'(h)
= \sum_{n=0}^{\infty}2\lambda\,C_n^{(\lambda)}(x)h^{n+1} + \sum_{n=1}^{\infty}n\,C_n^{(\lambda)}(x)h^{n+1}
= \sum_{n=0}^{\infty}(n+2\lambda)\,C_n^{(\lambda)}(x)h^{n+1}
= \sum_{n=1}^{\infty}(n+2\lambda-2)\,C_{n-2}^{(\lambda)}(x)h^{n-1}
\]
where $C_{-1}^{(\lambda)} \equiv 0$. Therefore,
\[
\phi'(h) = 2x\,h^{1-\lambda}\bigl(h^{\lambda}\phi(h)\bigr)' - h^{2-2\lambda}\bigl(h^{2\lambda}\phi(h)\bigr)'
= 2x\bigl(\lambda\phi(h) + h\phi'(h)\bigr) - \bigl(2\lambda h\phi(h) + h^2\phi'(h)\bigr).
\]
This yields
\[
\bigl(1 - 2hx + h^2\bigr)\phi'(h) = \bigl(2\lambda x - 2\lambda h\bigr)\phi(h)
\quad\text{or}\quad
\frac{\phi'(h)}{\phi(h)} = -\lambda\,\frac{2h - 2x}{1 - 2hx + h^2}.
\]
We perform integration (with respect to the variable $h$):
\[
\ln(\phi(h)) = -\lambda\ln\bigl(1 - 2hx + h^2\bigr) + C = \ln\Bigl(\bigl(1 - 2hx + h^2\bigr)^{-\lambda}\Bigr) + C
\]
and finally
\[ \phi(h) = \tilde{C}\,\bigl(1 - 2hx + h^2\bigr)^{-\lambda} \]
where $\tilde{C} = 1$, since $\phi(0) = 1$.
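Theorem 3.66 is easy to check numerically: for $|h| < 1$ the truncated series converges quickly to the closed form. A small Python sketch (function names and the truncation length are our own choices):

```python
def gegenbauer_sum(lam, h, x, terms=300):
    """Truncated generating series sum_{n < terms} h^n C_n^{(lam)}(x),
    built with the three-term recurrence (3.8)."""
    c = [1.0, 2.0 * lam * x]                      # C_0, C_1
    for n in range(1, terms - 1):
        # (n+1) C_{n+1} = 2(n+lam) x C_n - (n+2lam-1) C_{n-1}
        c.append((2 * (n + lam) * x * c[n] - (n + 2 * lam - 1) * c[n - 1]) / (n + 1))
    return sum(h ** n * c[n] for n in range(terms))

def generating_function(lam, h, x):
    """Closed form (1 - 2hx + h^2)^(-lam) of Theorem 3.66."""
    return (1 - 2 * h * x + h * h) ** (-lam)
```

For $\lambda = \frac12$ the series reproduces the Legendre generating function of Section 3.5 below; for $\lambda = 1$ that of the Chebyshev polynomials of second kind.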
Remark 3.67
For $\lambda \to 0$ we can find the generating function of the Chebyshev polynomials of first kind, i.e.
\[
-\ln\bigl(1 - 2hx + h^2\bigr) = \sum_{n=1}^{\infty}\frac{2}{n}\,T_n(x)\,h^n = \sum_{n=1}^{\infty}\frac{2}{n}\prod_{k=1}^{n}\frac{2k}{2k-1}\,P_n^{(-\frac12,-\frac12)}(x)\,h^n .
\]
Remark 3.68
The generating function can also be used to define the corresponding orthogonal polynomials as coefficients of the series expansion.
Theorem 3.69
We can derive the following relations ($n \in \mathbb{N}_0$):
\begin{align*}
(1-x^2)\frac{d}{dx}C_n^{(\lambda)}(x) &= \frac{1}{2(n+\lambda)}\Bigl[(n+2\lambda-1)(n+2\lambda)\,C_{n-1}^{(\lambda)}(x) - n(n+1)\,C_{n+1}^{(\lambda)}(x)\Bigr] \\
&= -nx\,C_n^{(\lambda)}(x) + (n+2\lambda-1)\,C_{n-1}^{(\lambda)}(x) \\
&= (n+2\lambda)x\,C_n^{(\lambda)}(x) - (n+1)\,C_{n+1}^{(\lambda)}(x) \\
&= 2\lambda\,(1-x^2)\,C_{n-1}^{(\lambda+1)}(x),
\end{align*}
\[
n\,C_n^{(\lambda)}(x) = x\frac{d}{dx}C_n^{(\lambda)}(x) - \frac{d}{dx}C_{n-1}^{(\lambda)}(x), \qquad
(n+2\lambda)\,C_n^{(\lambda)}(x) = \frac{d}{dx}C_{n+1}^{(\lambda)}(x) - x\frac{d}{dx}C_n^{(\lambda)}(x),
\]
as well as (for $n \in \mathbb{N}$)
\[
\frac{d}{dx}\Bigl[C_{n+1}^{(\lambda)}(x) - C_{n-1}^{(\lambda)}(x)\Bigr] = 2(n+\lambda)\,C_n^{(\lambda)}(x) = 2\lambda\Bigl[C_n^{(\lambda+1)}(x) - C_{n-2}^{(\lambda+1)}(x)\Bigr].
\]
Proof. See tutorials.
Remark 3.70
For $q \in \mathbb{N}$, $q \ge 3$, and $t \in [-1,1]$,
\[
P_n(q;t) = \frac{1}{\binom{n+q-3}{n}}\,C_n^{(\frac{q-2}{2})}(t) = \frac{\Gamma(n+1)\Gamma(q-2)}{\Gamma(n+q-2)}\,C_n^{(\frac{q-2}{2})}(t)
\]
denote the Legendre polynomials of dimension $q$ (see e.g. [12]).
3.5 Application of the Legendre Polynomials in Electrostatics

Let $x \in \mathbb{R}^3$ and $\rho(x)$ be a charge distribution with total charge $\int_{\mathbb{R}^3}\rho(x)\,dx$. Then the fundamental equations of electrostatics are the electrostatic pre-Maxwell equations. Let $E$ denote the electric field, then for $x \in \mathbb{R}^3$
\[ \operatorname{div}E(x) = 4\pi\rho(x), \qquad \operatorname{curl}E(x) = 0. \]
We introduce the electric potential $\Phi$ and solve the equations by $E(x) = -\nabla\Phi(x)$ which gives us the following Poisson equation
\[ \Delta\Phi(x) = -4\pi\rho(x). \]

Example 3.71
A point charge $q$ at $x_0 \in \mathbb{R}^3$ yields
\[ \rho(x) = q\,\delta(x - x_0) \]
with the delta-distribution $\delta$ in $\mathbb{R}^3$ (equality holds in the weak sense). We obtain
\[ \Delta\Phi(x) = 0 \quad\text{for } x \in \mathbb{R}^3\setminus\{x_0\}. \tag{3.9} \]
The solution is given by $\Phi(x) = \frac{q}{|x-x_0|}$ for $x \in \mathbb{R}^3\setminus\{x_0\}$. The corresponding electric field can be calculated to be
\[ E(x) = q\,\frac{x - x_0}{|x - x_0|^3} \quad\text{for } x \in \mathbb{R}^3\setminus\{x_0\}. \]
Another approach to (3.9) introduces polar coordinates $(r,\varphi,t)$ for $x \in \mathbb{R}^3\setminus\{0\}$ by
\[ x_1 = r\sqrt{1-t^2}\cos\varphi, \qquad x_2 = r\sqrt{1-t^2}\sin\varphi, \qquad x_3 = rt \]
where $r = |x| > 0$, $\varphi \in [0,2\pi)$, $t \in [-1,1]$. This gives us the representation of the Laplace equation in polar coordinates, i.e.
\[
\biggl[\Bigl(\frac{\partial}{\partial r}\Bigr)^2 + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial}{\partial t}(1-t^2)\frac{\partial}{\partial t} + \frac{1}{r^2}\frac{1}{1-t^2}\Bigl(\frac{\partial}{\partial\varphi}\Bigr)^2\biggr]\Phi(r,\varphi,t) = 0
\]
for $x \in \mathbb{R}^3\setminus\{0\}$. By separation of variables we set
\[ \Phi(r,\varphi,t) = U(r)V(\varphi)W(t) \]
and insert this in the equation above, i.e.
\[
V(\varphi)W(t)\Bigl(\frac{\partial}{\partial r}\Bigr)^2U(r) + V(\varphi)W(t)\frac{2}{r}\frac{\partial}{\partial r}U(r)
+ \frac{1}{r^2}U(r)V(\varphi)\frac{\partial}{\partial t}(1-t^2)\frac{\partial}{\partial t}W(t)
+ \frac{1}{r^2}U(r)W(t)\frac{1}{1-t^2}\Bigl(\frac{\partial}{\partial\varphi}\Bigr)^2V(\varphi) = 0.
\]
Next we multiply by $r^2(1-t^2)$ and divide by $U(r)V(\varphi)W(t)$:
\[
r^2(1-t^2)\frac{U''(r)}{U(r)} + 2r(1-t^2)\frac{U'(r)}{U(r)} + (1-t^2)^2\frac{W''(t)}{W(t)} - 2t(1-t^2)\frac{W'(t)}{W(t)} + \frac{V''(\varphi)}{V(\varphi)} = 0
\]
which can be rewritten as
\[
r^2(1-t^2)\frac{U''(r)}{U(r)} + 2r(1-t^2)\frac{U'(r)}{U(r)} + (1-t^2)^2\frac{W''(t)}{W(t)} - 2t(1-t^2)\frac{W'(t)}{W(t)} = -\frac{V''(\varphi)}{V(\varphi)}.
\]
The left hand side only depends on $r$ and $t$, the right hand side only on $\varphi$. Thus, the equation can only be fulfilled if both sides are equal to a constant $\kappa \in \mathbb{R}$. Therefore,
\[
-\frac{V''(\varphi)}{V(\varphi)} = \kappa \quad\Longleftrightarrow\quad V''(\varphi) + \kappa V(\varphi) = 0, \quad \varphi \in [0,2\pi).
\]
The non-trivial solutions of this differential equation are linear combinations of $V_1 = \exp(i\sqrt{\kappa}\,\varphi)$ and $V_2 = \exp(-i\sqrt{\kappa}\,\varphi)$. $V_1$ and $V_2$ have to be $2\pi$-periodic. This leads to a discretization of $\kappa$, i.e. $\kappa = m^2$, $m \in \mathbb{N}_0$. The linearly independent solutions are given by
\[ V_m(\varphi) = \exp(im\varphi), \quad \varphi \in [0,2\pi),\ m \in \mathbb{Z}. \]
Now we consider the left hand side for these values of $\kappa$:
\[
-\frac{m^2}{1-t^2} = \frac{r^2U''(r)}{U(r)} + \frac{2rU'(r)}{U(r)} + (1-t^2)\frac{W''(t)}{W(t)} - 2t\frac{W'(t)}{W(t)}
\quad\Longleftrightarrow\quad
\frac{m^2}{1-t^2} - (1-t^2)\frac{W''(t)}{W(t)} + 2t\frac{W'(t)}{W(t)} = \frac{r^2U''(r)}{U(r)} + \frac{2rU'(r)}{U(r)}.
\]
By the same argument as before we find that this equation can only hold if both sides are equal to a constant $\tilde{\kappa}$, i.e.
\[
\frac{m^2}{1-t^2} - (1-t^2)\frac{W''(t)}{W(t)} + 2t\frac{W'(t)}{W(t)} = \tilde{\kappa}
\quad\Longleftrightarrow\quad
(1-t^2)W''(t) - 2tW'(t) + \Bigl(\tilde{\kappa} - \frac{m^2}{1-t^2}\Bigr)W(t) = 0
\]
for $t \in [-1,1]$ and
\[
\tilde{\kappa} = \frac{r^2U''(r)}{U(r)} + \frac{2rU'(r)}{U(r)}
\quad\Longleftrightarrow\quad
0 = r^2U''(r) + 2rU'(r) - \tilde{\kappa}\,U(r)
\]
for $r > 0$.
Symmetric Problems
For the case of a point charge located at $x_0$ we can assume (after a certain rotation of the coordinate system) that $x_0$ is on the $\varepsilon^3$-axis, i.e. $x_0 = (0,0,r_0)^T$ where $r_0 = |x_0| > 0$. In this case it is clear that the solution of (3.9) has a rotational symmetry, i.e. does not depend on $\varphi$. We can use
\[ \Phi(r,t) = U(r)W(t) \]
with $r > 0$ and $t \in [-1,1]$ (this corresponds to $m = 0$ before). Therefore, we find the equation for $W$:
\[ (1-t^2)W''(t) - 2tW'(t) + \tilde{\kappa}\,W(t) = 0, \quad t \in [-1,1]. \]
This is the Legendre differential equation (or the Gegenbauer differential equation with $\lambda = 1/2$) which possesses a polynomial solution (in physical context: a solution with finite energy) if and only if
\[ \tilde{\kappa} = n(n+1), \quad n \in \mathbb{N}_0. \]
The solutions are the Legendre polynomials $W_n(t) = P_n(t)$, $t \in [-1,1]$, $n \in \mathbb{N}_0$.
Now we consider the second equation for these values of $\tilde{\kappa}$:
\[ r^2U''(r) + 2rU'(r) - n(n+1)U(r) = 0, \quad r > 0. \]
This ODE possesses two linearly independent solutions given by
\[ U_{1,n}(r) = r^n, \quad n \in \mathbb{N}_0, \qquad U_{2,n}(r) = \frac{1}{r^{n+1}}, \quad n \in \mathbb{N}_0. \]
Together we find the two linearly independent solutions of the Laplace equation in the rotational symmetric case to be
\[ \Phi_{1,n}(r,t) = r^n\,P_n(t), \qquad \Phi_{2,n}(r,t) = \frac{1}{r^{n+1}}\,P_n(t) \]
for $r > 0$, $t \in [-1,1]$, $n \in \mathbb{N}_0$. Every solution can be expressed as a linear combination of these two, i.e. there exist coefficients $A_n$, $B_n$ such that
\[ \Phi(r,t) = \sum_{n=0}^{\infty}\Bigl(A_n r^n + B_n\frac{1}{r^{n+1}}\Bigr)P_n(t). \]
For the case of a point charge located in $x_0 = (0,0,r_0)^T$ we know a solution, i.e.
\[ \Phi(x) = \frac{q}{|x - x_0|}, \quad x \in \mathbb{R}^3\setminus\{x_0\}. \]
We use this solution to compute the coefficients $A_n$ and $B_n$ in the expansion. Restricting the solutions to the $\varepsilon^3$-axis, the point $x$ possesses the polar coordinates $r = |x|$, $t = 1$, $\varphi = 0$, i.e.
\[ \Phi(x) = \frac{q}{|r - r_0|}, \quad r > 0,\ r \neq r_0. \]
We use the well-known geometric series and obtain
\[
\Phi(x) = \begin{cases}
\dfrac{q}{r - r_0} = \dfrac{q}{r}\,\dfrac{1}{1 - \frac{r_0}{r}} = \dfrac{q}{r}\displaystyle\sum_{n=0}^{\infty}\Bigl(\dfrac{r_0}{r}\Bigr)^n & \text{for } r > r_0, \\[2ex]
\dfrac{q}{r_0 - r} = \dfrac{q}{r_0}\,\dfrac{1}{1 - \frac{r}{r_0}} = \dfrac{q}{r_0}\displaystyle\sum_{n=0}^{\infty}\Bigl(\dfrac{r}{r_0}\Bigr)^n & \text{for } r < r_0.
\end{cases}
\]
Comparing this to the expansion for $t = 1$ (remember that $P_n(1) = 1$), i.e.
\[ \Phi(r,1) = \sum_{n=0}^{\infty}A_n r^n + \frac{B_n}{r^{n+1}}, \quad r > 0, \]
results in the following coefficients:
\[
A_n = \begin{cases} 0 & \text{if } r > r_0 \\ \dfrac{q}{r_0^{n+1}} & \text{if } r < r_0 \end{cases} \qquad
B_n = \begin{cases} q\,r_0^n & \text{if } r > r_0 \\ 0 & \text{if } r < r_0 \end{cases}
\]
thus, we finally obtain
\[
\Phi(x) = \Phi(r,t) = \frac{q}{|x - x_0|} = \begin{cases}
q\displaystyle\sum_{n=0}^{\infty}\dfrac{r_0^n}{r^{n+1}}\,P_n(t) & \text{for } |x| = r > r_0 = |x_0|, \\[2ex]
q\displaystyle\sum_{n=0}^{\infty}\dfrac{r^n}{r_0^{n+1}}\,P_n(t) & \text{for } |x| = r < r_0 = |x_0|.
\end{cases}
\]
From a mathematical point of view there is another interesting aspect. Canceling the charge $q$ leads to ($x \neq x_0$)
\[
\frac{1}{|x - x_0|} = \begin{cases}
\displaystyle\sum_{n=0}^{\infty}\dfrac{r_0^n}{r^{n+1}}\,P_n(t) & \text{for } |x| = r > r_0 = |x_0|, \\[2ex]
\displaystyle\sum_{n=0}^{\infty}\dfrac{r^n}{r_0^{n+1}}\,P_n(t) & \text{for } |x| = r < r_0 = |x_0|.
\end{cases}
\]
The value $|x - x_0|$ can be expressed using polar coordinates, i.e.
\[
|x - x_0|^2 = \bigl|(r\sqrt{1-t^2}\cos\varphi,\ r\sqrt{1-t^2}\sin\varphi,\ rt - r_0)^T\bigr|^2 = r^2(1-t^2) + (rt - r_0)^2 = r^2\Bigl(1 + \frac{r_0^2}{r^2} - 2\frac{r_0}{r}t\Bigr).
\]
Take $r > r_0$ and we get
\[
\frac{1}{r\sqrt{1 + \frac{r_0^2}{r^2} - 2\frac{r_0}{r}t}} = \frac{1}{|x - x_0|} = \sum_{n=0}^{\infty}\frac{r_0^n}{r^{n+1}}\,P_n(t)
\]
which is equivalent to
\[
\frac{1}{\sqrt{1 + \frac{r_0^2}{r^2} - 2\frac{r_0}{r}t}} = \sum_{n=0}^{\infty}\Bigl(\frac{r_0}{r}\Bigr)^n P_n(t).
\]
Analogously, for $r < r_0$
\[
\frac{1}{\sqrt{1 + \frac{r^2}{r_0^2} - 2\frac{r}{r_0}t}} = \sum_{n=0}^{\infty}\Bigl(\frac{r}{r_0}\Bigr)^n P_n(t).
\]
Substituting $h = r_0/r$, respectively $h = r/r_0$, we obtain the result on the generating function of the Legendre polynomials.
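The two expansions of $1/|x-x_0|$ are easy to verify numerically. The following pure-Python sketch (our own naming) compares the truncated multipole series for $r > r_0$ with the closed form:

```python
import math

def legendre(n, t):
    """P_n(t) via the recurrence (3.8) with lambda = 1/2."""
    p_prev, p = 1.0, t
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * t * p - k * p_prev) / (k + 1)
    return p

def multipole_series(r, r0, t, terms=80):
    """Truncation of sum_n r0^n / r^{n+1} P_n(t), valid for r > r0."""
    return sum(r0 ** n / r ** (n + 1) * legendre(n, t) for n in range(terms))

def inverse_distance(r, r0, t):
    """1/|x - x0| using |x - x0|^2 = r^2 (1 + (r0/r)^2 - 2 (r0/r) t)."""
    return 1.0 / (r * math.sqrt(1 + (r0 / r) ** 2 - 2 * (r0 / r) * t))
```

The ratio $r_0/r$ controls the convergence speed, exactly as the substitution $h = r_0/r$ into the generating function suggests.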
3.6 Hermite Polynomials and Applications
The Hermite polynomials $H_n$ are the unique orthogonal polynomials in $L^2_w(\mathbb{R})$ with $w(x) = \exp(-x^2)$, i.e.
1. $H_n$ is a polynomial of degree $n$,
2. $\int_{-\infty}^{\infty}H_n(x)H_m(x)\exp(-x^2)\,dx = 0$ for $m \neq n$,
3. $\|H_n\|^2 = \sqrt{\pi}\,2^n n!$, $n \in \mathbb{N}_0$.
By this definition we can calculate
\[ H_0(x) = 1, \quad H_1(x) = 2x, \quad H_2(x) = 4x^2 - 2, \quad H_3(x) = 8x^3 - 12x. \]
Rodriguez representation and the explicit representation are given by
\[
H_n(x) = (-1)^n\exp(x^2)\Bigl(\frac{d}{dx}\Bigr)^n\bigl(\exp(-x^2)\bigr), \qquad
H_n(x) = \sum_{k=0}^{\lfloor n/2\rfloor}(-1)^k\frac{n!}{k!\,(n-2k)!}\,(2x)^{n-2k}.
\]
The following recurrence relation holds with $H_0 \equiv 1$, $H_{-1} \equiv 0$:
\[ H_n(x) = 2x\,H_{n-1}(x) - 2(n-1)\,H_{n-2}(x) \quad\text{for } n \in \mathbb{N},\ x \in \mathbb{R}. \]
The derivative is given by $H_n'(x) = 2n\,H_{n-1}(x)$ for $n \in \mathbb{N}_0$, $x \in \mathbb{R}$, and the following differential equation is solved by the Hermite polynomials ($n \in \mathbb{N}_0$, $x \in \mathbb{R}$):
\[ H_n''(x) - 2x\,H_n'(x) + 2n\,H_n(x) = 0. \]
Furthermore, for $u_n(x) = \exp(-x^2/2)H_n(x)$ we have ($n \in \mathbb{N}_0$, $x \in \mathbb{R}$):
\[ u_n''(x) + (2n + 1 - x^2)\,u_n(x) = 0. \]
For further details and properties of Hermite polynomials we refer to the tutorials and e.g. to [10, 15].
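The listed identities give several independent ways to compute $H_n$; a short Python sketch using the recurrence (our own naming):

```python
def hermite(n, x):
    """H_n(x) via H_k = 2x H_{k-1} - 2(k-1) H_{k-2}, with H_0 = 1, H_1 = 2x."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(2, n + 1):
        h_prev, h = h, 2.0 * x * h - 2.0 * (k - 1) * h_prev
    return h
```

The values agree with $H_2(x) = 4x^2 - 2$ and $H_3(x) = 8x^3 - 12x$ above; the derivative rule $H_n' = 2nH_{n-1}$ can be checked the same way.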
Application in Quantum Mechanics
The starting point of any system is its Hamilton function
\[ E = H(p_i, q_j, t) \]
where $q_j$ are (unified) coordinates, $p_i$ are the impulse coordinates, $t$ is the time. $H$ is the total energy of the system.
The step from classical mechanics to quantum mechanics is performed by substitution rules:
\[ E \to i\hbar\frac{\partial}{\partial t}, \qquad p \to -i\hbar\nabla. \]
The coordinates $q_j$ are substituted by the wave function $\psi$, where $|\psi|^2$ is the probability density, i.e.
\[ \int_{\mathbb{R}^3}|\psi(x,t)|^2\,dx = 1 \quad\text{for all } t. \]
Let e.g. $\psi$ be the wave function of a particle in a potential $V$, then the classical Hamilton function is
\[ E = \frac{p^2}{2m} + V(x). \]
This becomes by our substitution
\[ i\hbar\frac{\partial}{\partial t}\psi(x,t) = -\frac{\hbar^2}{2m}\Delta_x\psi(x,t) + V(x)\psi(x,t) \]
or
\[ i\hbar\dot{\psi} = H\psi \quad\text{with}\quad H = -\frac{\hbar^2}{2m}\Delta_x + V(x). \]
Note that $\hbar = \frac{h}{2\pi} \approx 1.05457\cdot 10^{-34}\,\mathrm{Js}$ is Planck's constant.
If $V$ and thus $H$ are time-independent, we use the following approach
\[ \psi(x,t) = \Psi(x)\exp\Bigl(-\frac{iEt}{\hbar}\Bigr), \]
such that for all $t$
\[
i\hbar\frac{\partial}{\partial t}\psi(x,t) = i\hbar\,\Psi(x)\Bigl(-\frac{iE}{\hbar}\Bigr)\exp\Bigl(-\frac{iEt}{\hbar}\Bigr) = H\Psi(x)\exp\Bigl(-\frac{iEt}{\hbar}\Bigr)
\quad\Longleftrightarrow\quad H\Psi(x) = E\,\Psi(x).
\]
This is the time-independent Schrödinger equation. $H$ is the time-independent Hamilton operator and $E$ is the energy of the system, which is an unknown. Furthermore,
\[
1 = \int_{\mathbb{R}^3}|\psi(x,t)|^2\,dx = \int_{\mathbb{R}^3}\Bigl|\Psi(x)\exp\Bigl(-\frac{iEt}{\hbar}\Bigr)\Bigr|^2\,dx = \int_{\mathbb{R}^3}|\Psi(x)|^2\,dx,
\]
i.e. the probability density is time-independent. Thus, the eigenvalue problem
\[ H\Psi = E\Psi \]
has to be solved with $\int_{\mathbb{R}^3}|\Psi(x)|^2\,dx = 1$.
One-dimensional Oscillation
Let $m$ be a mass connected to a wall by a spring with spring constant $f$. The classical Hamilton function of this (1-D) system is
\[ H_{kl}(p,x) = \frac{p^2}{2m} + \frac{m\omega^2 x^2}{2} \]
with $\omega = \sqrt{f/m}$ the eigenfrequency of the system. By our substitution rules we obtain the operator
\[ H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{m\omega^2 x^2}{2}. \]
This $H$-operator describes e.g. the oscillation of a molecule with two atoms and $m$ is the relative mass of the atoms. Thus, the eigenvalue problem for the oscillation is ($x \in \mathbb{R}$):
\[ \Bigl[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{m\omega^2 x^2}{2}\Bigr]\Psi(x) = E\,\Psi(x). \]
Defining a new coordinate by $y = x/b$ where $b = \sqrt{\frac{\hbar}{m\omega}}$ and setting $\lambda = \frac{2E}{\hbar\omega}$ this becomes
\[ \frac{d^2}{dy^2}\underbrace{\Psi(by)}_{u(y)} + (\lambda - y^2)\underbrace{\Psi(by)}_{u(y)} = 0. \]
This equation possesses a solution if and only if $\lambda = 2n+1$ where $n \in \mathbb{N}_0$. The solution is of the form
\[ u_n(y) = \text{const.}\,\exp\Bigl(-\frac{y^2}{2}\Bigr)H_n(y) \]
for $n \in \mathbb{N}_0$. The constant is determined by the condition
\[ \int_{-\infty}^{\infty}|\Psi_n(x)|^2\,dx = 1 \quad\Longleftrightarrow\quad \int_{-\infty}^{\infty}|u_n(y)|^2\,dy = \frac{1}{b}. \]
Thus, we can summarize
\[ \Psi_n(x) = c_n\,H_n\Bigl(\frac{x}{b}\Bigr)\exp\Bigl(-\frac{x^2}{2b^2}\Bigr) \]
for $n \in \mathbb{N}_0$ and $x \in \mathbb{R}$. These functions are the eigenfunctions of the 1-D oscillation corresponding to the eigenvalues (energy)
\[ E_n = \hbar\omega\Bigl(n + \frac12\Bigr), \quad n \in \mathbb{N}_0. \]
This shows that the energy of a quantum-mechanical oscillator can only take discrete values, i.e. the quantization of energy.
3.7 Laguerre Polynomials and Applications
For $\alpha > -1$ the Laguerre polynomials $L_n^{(\alpha)}$, $n \in \mathbb{N}_0$, are uniquely defined by
1. $L_n^{(\alpha)}$ is a polynomial of degree $n$ defined on $\mathbb{R}_0^+ = [0,\infty)$,
2. $\int_0^{\infty}L_m^{(\alpha)}(x)L_n^{(\alpha)}(x)\exp(-x)x^{\alpha}\,dx = 0$ for $n \neq m$,
3. $\bigl\|L_n^{(\alpha)}\bigr\|^2 = \Gamma(\alpha+1)\binom{n+\alpha}{n} = \frac{\Gamma(n+\alpha+1)}{\Gamma(n+1)}$.
The Laguerre polynomials admit the following Rodriguez representation and explicit representation ($n \in \mathbb{N}_0$, $x \in \mathbb{R}_0^+$):
\[
L_n^{(\alpha)}(x) = \exp(x)\frac{x^{-\alpha}}{n!}\Bigl(\frac{d}{dx}\Bigr)^n\bigl(\exp(-x)x^{n+\alpha}\bigr)
= \sum_{k=0}^{n}\frac{\Gamma(n+\alpha+1)}{\Gamma(k+\alpha+1)}\,\frac{(-x)^k}{k!\,(n-k)!}.
\]
By these properties we get:
\[
L_0^{(\alpha)}(x) = 1, \quad L_1^{(\alpha)}(x) = 1+\alpha-x, \quad L_2^{(\alpha)}(x) = \frac12\bigl[(1+\alpha)(2+\alpha) - 2(2+\alpha)x + x^2\bigr]
\]
and we find the recursion formula ($n \in \mathbb{N}$, $n \geq 2$, $x \in \mathbb{R}_0^+$):
\[ n\,L_n^{(\alpha)}(x) = (2n+\alpha-1-x)\,L_{n-1}^{(\alpha)}(x) - (n+\alpha-1)\,L_{n-2}^{(\alpha)}(x) \]
and including the derivative ($n \in \mathbb{N}$, $x \in \mathbb{R}_0^+$)
\[ x\frac{d}{dx}L_n^{(\alpha)}(x) = n\,L_n^{(\alpha)}(x) - (n+\alpha)\,L_{n-1}^{(\alpha)}(x). \]
The differential equation (with $\alpha > -1$, $x \in \mathbb{R}_0^+$)
\[ xy'' + (1+\alpha-x)y' + \lambda y = 0 \]
possesses a polynomial solution if and only if $\lambda = n \in \mathbb{N}_0$. This solution is given by $y(x) = \text{const.}\,L_n^{(\alpha)}(x)$, $x \in \mathbb{R}_0^+$.
For further details and properties of Laguerre polynomials we refer to the tutorials and e.g. to [10, 15].
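A sketch of the recursion formula in Python (our own naming), starting from $L_0^{(\alpha)}$ and $L_1^{(\alpha)}$ given above:

```python
def laguerre(n, alpha, x):
    """L_n^{(alpha)}(x) via
    n L_n = (2n+alpha-1-x) L_{n-1} - (n+alpha-1) L_{n-2}."""
    l_prev, l = 1.0, 1.0 + alpha - x
    if n == 0:
        return l_prev
    for k in range(2, n + 1):
        l_prev, l = l, ((2 * k + alpha - 1 - x) * l - (k + alpha - 1) * l_prev) / k
    return l
```

For $n = 2$ this reproduces the stated $L_2^{(\alpha)}(x) = \frac12[(1+\alpha)(2+\alpha) - 2(2+\alpha)x + x^2]$.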
Eigenoscillations of an n-fold Pendulum
Let $l$ be the total length of the pendulum; each section has length $a = l/n$. The angles of each section are collected in a vector, i.e. $\varphi = (\varphi_1,\dots,\varphi_n)^T$.
The dynamic of the pendulum is described by (linearized, near a stable equilibrium, i.e. $\varphi_k = 0$, $k = 1,\dots,n$):
\[ M\ddot{\varphi} + C\varphi = 0 \]
where $M$ is the mass matrix with entries
\[ M_{i,k} = a^2\min(i,k) \]
and $C$ is the matrix of restitutional forces with $C_{i,i} = iga$, $g = 9.81$. Small vibrations are given by $\varphi = \psi\sin(\omega t)$:
\[
M\ddot{\varphi} + C\varphi = 0 \quad\Longrightarrow\quad -M\omega^2\psi\sin(\omega t) + C\psi\sin(\omega t) = 0 \quad\Longrightarrow\quad (C - \omega^2 M)\psi = 0.
\]
We decompose $M$ by Cholesky decomposition:
\[
M = U^TU \quad\text{with}\quad U = a\begin{pmatrix} 1 & \cdots & 1 \\ & \ddots & \vdots \\ 0 & & 1 \end{pmatrix}
\]
and we set $\tilde{x} = U\psi$ where $\tilde{x} = (\tilde{x}_0,\dots,\tilde{x}_{n-1})^T$. This gives us
\[
\bigl(C - \omega^2U^TU\bigr)\psi = 0
\;\Longleftrightarrow\; \bigl(CU^{-1} - \omega^2U^T\bigr)\tilde{x} = 0
\;\Longleftrightarrow\; \bigl((U^T)^{-1}CU^{-1} - \omega^2I\bigr)\tilde{x} = 0
\;\Longleftrightarrow\; \Bigl(\frac{a}{g}(U^T)^{-1}CU^{-1} - \frac{a}{g}\omega^2I\Bigr)\tilde{x} = 0,
\]
i.e. $(A - \mu I)\tilde{x} = 0$ with $\mu = \frac{a}{g}\omega^2$. This is an eigenvalue problem for the tridiagonal matrix
\[
A = \begin{pmatrix}
1 & -1 & 0 & \cdots & 0 \\
-1 & 3 & -2 & & \vdots \\
0 & -2 & 5 & \ddots & \\
\vdots & & \ddots & \ddots & -(n-1) \\
0 & \cdots & & -(n-1) & 2n-1
\end{pmatrix}.
\]
Writing out the eigenvalue problem explicitly we obtain
\begin{align*}
-0\cdot\tilde{x}_{-1} + 1\cdot\tilde{x}_0 - 1\cdot\tilde{x}_1 &= \mu\,\tilde{x}_0, \\
-k\,\tilde{x}_{k-1} + (2k+1)\,\tilde{x}_k - (k+1)\,\tilde{x}_{k+1} &= \mu\,\tilde{x}_k, \\
-(n-1)\,\tilde{x}_{n-2} + (2n-1)\,\tilde{x}_{n-1} &= \mu\,\tilde{x}_{n-1},
\end{align*}
where $k = 1,\dots,n-2$. We have to append $\tilde{x}_{-1}$ and $\tilde{x}_n$ such that this equation holds for $k = 0,\dots,n-1$, i.e. $\tilde{x}_{-1}$ can be chosen arbitrarily and $\tilde{x}_n = 0$ to fulfill the last line.
If we take a look at the recurrence relation for the classical Laguerre polynomials ($\alpha = 0$):
\[ -k\,L_{k-1}(x) + (2k+1)\,L_k(x) - (k+1)\,L_{k+1}(x) = x\,L_k(x). \]
We can identify $x$ with $\mu$ and $L_k(x)$ with $\tilde{x}_k$. Hence, $\tilde{x}_k = L_k(\mu)$ for $k = 0,\dots,n$ satisfies the equations above for all $\mu$. Since we have to fulfill $\tilde{x}_n = 0$ we have
\[ L_n(\mu) = \tilde{x}_n = 0. \]
The eigenfrequencies $\omega_1,\dots,\omega_n$ of the system
\[ A\tilde{x} = \mu\tilde{x} \]
are determined by this equation. Therefore, let $\mu_{n,1},\dots,\mu_{n,n}$ be the zeros of $L_n$ in $[0,\infty)$. Then the eigenfrequencies of the system are
\[ \omega_k = \sqrt{\frac{g}{a}}\,\sqrt{\mu_{n,k}}, \quad k = 1,\dots,n. \]
The corresponding eigenmodes to the eigenvalue $\mu_{n,k}$ can be calculated from
\[ \tilde{x}_j = L_j(\mu_{n,k}), \quad j = 0,\dots,n-1 \]
and $\psi = U^{-1}\tilde{x}$. All interesting properties like energy etc. of the system can be expressed using Laguerre polynomials.
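The identification $\tilde{x}_k = L_k(\mu)$ can be checked directly: for the classical Laguerre polynomials the row equation of $A$ holds identically in $\mu$. A small sketch (our own naming):

```python
def laguerre0(n, x):
    """Classical Laguerre polynomial L_n(x) (alpha = 0), three-term recurrence."""
    l_prev, l = 1.0, 1.0 - x
    if n == 0:
        return l_prev
    for k in range(2, n + 1):
        l_prev, l = l, ((2 * k - 1 - x) * l - (k - 1) * l_prev) / k
    return l

def row_residual(k, mu):
    """Residual of -k xt_{k-1} + (2k+1) xt_k - (k+1) xt_{k+1} = mu xt_k
    with xt_j = L_j(mu); it vanishes for every mu."""
    return (-k * laguerre0(k - 1, mu) + (2 * k + 1) * laguerre0(k, mu)
            - (k + 1) * laguerre0(k + 1, mu) - mu * laguerre0(k, mu))
```

For instance, for $n = 2$ the zeros of $L_2(x) = \frac12(x^2 - 4x + 2)$ are $2 \pm \sqrt{2}$, so the double pendulum has the eigenfrequencies $\omega = \sqrt{g/a}\,\sqrt{2 \mp \sqrt{2}}$.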
Chapter 4

Spherical Harmonics

4.1 Spherical Notation
We start by introducing some basic spherical notation.
\[ \Omega = \{\xi \in \mathbb{R}^3 \,|\, |\xi| = 1\} \]
is the unit sphere in $\mathbb{R}^3$; we use Greek letters for elements of $\Omega$.
Polar coordinate representation of $x \in \mathbb{R}^3$:
\[
x(r,\varphi,t) = \begin{pmatrix} r\sqrt{1-t^2}\cos\varphi \\ r\sqrt{1-t^2}\sin\varphi \\ rt \end{pmatrix}
\]
where $r = |x| \in \mathbb{R}_0^+$ is the distance to the origin, $\varphi \in [0,2\pi)$ the longitude and $t = \cos\vartheta \in [-1,1]$ the polar distance with $\vartheta \in [0,\pi]$ the latitude. The canonical basis in $\mathbb{R}^3$ is denoted by $\varepsilon^1, \varepsilon^2, \varepsilon^3$ and another orthonormal basis is given by
\[
\varepsilon^r(\varphi,t) = \begin{pmatrix} \sqrt{1-t^2}\cos\varphi \\ \sqrt{1-t^2}\sin\varphi \\ t \end{pmatrix}, \qquad
\varepsilon^{\varphi}(\varphi,t) = \begin{pmatrix} -\sin\varphi \\ \cos\varphi \\ 0 \end{pmatrix}, \qquad
\varepsilon^t(\varphi,t) = \begin{pmatrix} -t\cos\varphi \\ -t\sin\varphi \\ \sqrt{1-t^2} \end{pmatrix}.
\]
$\varepsilon^{\varphi}$ and $\varepsilon^t$ are tangential vectors. Note the vector product $\varepsilon^r \wedge \varepsilon^{\varphi} = \varepsilon^t$.
The gradient in $\mathbb{R}^3$ can be decomposed into a radial and an angular part, i.e.
\[ \nabla = \varepsilon^r\frac{\partial}{\partial r} + \frac{1}{r}\nabla^* \]
where
\[ \nabla^* = \varepsilon^{\varphi}\frac{1}{\sqrt{1-t^2}}\frac{\partial}{\partial\varphi} + \varepsilon^t\sqrt{1-t^2}\frac{\partial}{\partial t} \]
and the tangential operator $\nabla^*$ is called surface gradient. Another tangential operator is the surface curl gradient $L^*$ which is defined by $L^*_{\xi}F(\xi) = \xi \wedge \nabla^*_{\xi}F(\xi)$ for $F \in C^1(\Omega)$, $\xi \in \Omega$, i.e. in local coordinates:
\[ L^* = -\varepsilon^{\varphi}\sqrt{1-t^2}\frac{\partial}{\partial t} + \varepsilon^t\frac{1}{\sqrt{1-t^2}}\frac{\partial}{\partial\varphi}. \]
Finally, the Laplace operator can be decomposed into
\[ \Delta = \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\Delta^* \]
where
\[ \Delta^* = \frac{\partial}{\partial t}(1-t^2)\frac{\partial}{\partial t} + \frac{1}{1-t^2}\frac{\partial^2}{\partial\varphi^2} \]
and $\Delta^*$ denotes the Beltrami operator. It holds that $\Delta^* = \nabla^*\cdot\nabla^* = L^*\cdot L^*$.
Theorem 4.1
Let $F, G \in C^2(\Gamma)$, $\Gamma \subset \Omega$ with a sufficiently smooth boundary $\partial\Gamma$ and a unit outward normal vector field $\nu$. Then hold:
1. Green's first surface identity
\[
\int_{\Gamma}G(\xi)\,\Delta^*F(\xi)\,d\omega(\xi) + \int_{\Gamma}\nabla^*F(\xi)\cdot\nabla^*G(\xi)\,d\omega(\xi) = \int_{\partial\Gamma}G(\xi)\,\frac{\partial F}{\partial\nu(\xi)}(\xi)\,d\sigma(\xi),
\]
2. Green's second surface identity
\[
\int_{\Gamma}F(\xi)\,\Delta^*G(\xi) - G(\xi)\,\Delta^*F(\xi)\,d\omega(\xi) = \int_{\partial\Gamma}F(\xi)\,\frac{\partial G}{\partial\nu(\xi)}(\xi) - G(\xi)\,\frac{\partial F}{\partial\nu(\xi)}(\xi)\,d\sigma(\xi).
\]
Proof. See lecture on vector analysis.
Definition 4.2
Let $\xi \in \Omega$. A function of the form $G_{\xi}: \Omega \to \mathbb{R}$, $G_{\xi}(\eta) = G(\xi\cdot\eta)$ with $G: [-1,1] \to \mathbb{R}$ is called ($\xi$-)zonal function on $\Omega$.

Theorem 4.3
Let $G \in L^2([-1,1])$. Then
\[ \int_{\Omega}G(\xi\cdot\eta)\,d\omega(\eta) = 2\pi\int_{-1}^{1}G(t)\,dt \]
for all $\xi \in \Omega$.
Proof. We know that $\int_{\Omega}G_{\xi}(\eta)\,d\omega(\eta) = \int_{\Omega}G(\xi\cdot\eta)\,d\omega(\eta)$. Consider first the case $\xi = \varepsilon^3$.
Note that $\eta = (\sqrt{1-t^2}\cos\varphi,\ \sqrt{1-t^2}\sin\varphi,\ t)^T$ with $-1 \le t \le 1$, $0 \le \varphi < 2\pi$. Therefore, $\varepsilon^3\cdot\eta = t$ and
\[
\int_{\Omega}G(\varepsilon^3\cdot\eta)\,d\omega(\eta) = \int_{-1}^{1}\int_{0}^{2\pi}G(t)\,\Bigl|\frac{\partial\eta}{\partial\varphi}\wedge\frac{\partial\eta}{\partial t}\Bigr|\,d\varphi\,dt.
\]
We compute
\[
\frac{\partial\eta}{\partial\varphi} = \begin{pmatrix} -\sqrt{1-t^2}\sin\varphi \\ \sqrt{1-t^2}\cos\varphi \\ 0 \end{pmatrix}, \qquad
\frac{\partial\eta}{\partial t} = \begin{pmatrix} -\frac{t}{\sqrt{1-t^2}}\cos\varphi \\ -\frac{t}{\sqrt{1-t^2}}\sin\varphi \\ 1 \end{pmatrix},
\]
and find that $\frac{\partial\eta}{\partial\varphi}\cdot\frac{\partial\eta}{\partial t} = 0$, $\bigl|\frac{\partial\eta}{\partial\varphi}\bigr| = \sqrt{1-t^2}$ and $\bigl|\frac{\partial\eta}{\partial t}\bigr| = \frac{1}{\sqrt{1-t^2}}$. Together with the rule
\[ |x\wedge y|^2 = |x|^2|y|^2 - (x\cdot y)^2 \quad\text{for } x,y \in \mathbb{R}^3 \]
this yields that $\bigl|\frac{\partial\eta}{\partial\varphi}\wedge\frac{\partial\eta}{\partial t}\bigr| = 1$. Thus,
\[
\int_{\Omega}G(\varepsilon^3\cdot\eta)\,d\omega(\eta) = \int_{-1}^{1}\int_{0}^{2\pi}G(t)\,d\varphi\,dt = 2\pi\int_{-1}^{1}G(t)\,dt.
\]
Now let $A \in SO(3) = \{B \in \mathbb{R}^{3\times 3}\,|\,B^TB = I,\ \det B = 1\}$ be a rotation with $A\xi = \varepsilon^3$, i.e. $\xi = A^{-1}\varepsilon^3 = A^T\varepsilon^3$. Then:
\begin{align*}
\int_{\Omega}G(\xi\cdot\eta)\,d\omega(\eta) &= \int_{\Omega}G\bigl((A^T\varepsilon^3)\cdot\eta\bigr)\,d\omega(\eta)
= \int_{\Omega}G\bigl(\varepsilon^3\cdot(A\eta)\bigr)\,d\omega(\eta) \\
&= \int_{\Omega}G(\varepsilon^3\cdot\zeta)\,\underbrace{\det A^T}_{=1}\,d\omega(\zeta) \quad\text{where }\zeta = A\eta \\
&= 2\pi\int_{-1}^{1}G(t)\,dt.
\end{align*}
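Theorem 4.3 can be illustrated numerically: parametrize $\eta$ by $(\varphi, t)$, use that the surface element is $d\varphi\,dt$ (since $|\eta_\varphi \wedge \eta_t| = 1$), and integrate a zonal function for a tilted $\xi$. A midpoint-rule sketch (our own naming; the grid sizes are arbitrary choices):

```python
import math

def sphere_integral_zonal(G, xi, n_phi=200, n_t=200):
    """Midpoint-rule approximation of the integral of G(xi . eta) over the
    unit sphere, using the (phi, t) parametrization with surface element
    d(phi) d(t)."""
    total = 0.0
    for i in range(n_phi):
        phi = (i + 0.5) * 2.0 * math.pi / n_phi
        for j in range(n_t):
            t = -1.0 + (j + 0.5) * 2.0 / n_t
            s = math.sqrt(1.0 - t * t)
            eta = (s * math.cos(phi), s * math.sin(phi), t)
            total += G(xi[0] * eta[0] + xi[1] * eta[1] + xi[2] * eta[2])
    return total * (2.0 * math.pi / n_phi) * (2.0 / n_t)
```

For $G(t) = t^2$ the theorem predicts $2\pi\int_{-1}^1 t^2\,dt = \frac{4\pi}{3}$, independent of the direction $\xi$.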
4.2 Polynomials on the Unit Sphere in $\mathbb{R}^3$

4.2.1 Homogeneous Polynomials

Definition 4.4
A polynomial $P$ on $\mathbb{R}^n$, $n \in \mathbb{N}$, is called homogeneous of degree $m \in \mathbb{N}$ if $P(\lambda x) = \lambda^m P(x)$ for all $\lambda \in \mathbb{R}$ and all $x \in \mathbb{R}^n$. The space of all homogeneous polynomials of degree $m$ on $\mathbb{R}^n$ is denoted by
\[
\mathrm{Hom}_m(\mathbb{R}^n) = \Bigl\{P \;\Big|\; P(x) = \sum_{|\alpha|=m}C_{\alpha}x^{\alpha},\ x \in \mathbb{R}^n,\ \alpha \in \mathbb{N}_0^n\Bigr\}
\]
(note that for the multi-index $\alpha$ holds $|\alpha| = \sum_{i=1}^{n}\alpha_i$).
The set of their restrictions to a set $D \subset \mathbb{R}^n$ is defined by
\[ \mathrm{Hom}_m(D) = \{P|_D : P \in \mathrm{Hom}_m(\mathbb{R}^n)\}. \]

Example 4.5
$P(x) = x_1^2 + x_2^2 + x_3^2 \in \mathrm{Hom}_2(\mathbb{R}^3)$, therefore $P|_{\Omega} \in \mathrm{Hom}_2(\Omega)$ although $P|_{\Omega} \equiv 1$, i.e. $\deg(P|_{\Omega}) = 0$.
The index $m$ of $\mathrm{Hom}_m(\Omega)$ refers to the degree of the original polynomial on $\mathbb{R}^3$.
Theorem 4.6
The set of functions $\{x \mapsto x^{\alpha}\}_{|\alpha|=n}$ is a basis of $\mathrm{Hom}_n(\mathbb{R}^3)$ and $\dim\mathrm{Hom}_n(\mathbb{R}^3) = \frac{(n+1)(n+2)}{2}$.
Proof. Homogeneous polynomials of degree $n$ in $\mathbb{R}^3$ have the form
\[
P(x) = \sum_{|\alpha|=n}C_{\alpha}x^{\alpha} = \sum_{\alpha_1+\alpha_2+\alpha_3=n}C_{\alpha_1,\alpha_2,\alpha_3}\,x_1^{\alpha_1}x_2^{\alpha_2}x_3^{\alpha_3}
\]
where $\alpha_1,\alpha_2,\alpha_3 \in \mathbb{N}_0$. Let $P(x) = 0$, then
\[
\sum_{\alpha_1=0}^{n}\Bigl(\sum_{\alpha_2=0}^{n-\alpha_1}C_{\alpha_1,\alpha_2,n-\alpha_1-\alpha_2}\,x_2^{\alpha_2}x_3^{n-\alpha_1-\alpha_2}\Bigr)x_1^{\alpha_1} = 0.
\]
For $x_2, x_3 \in \mathbb{R}$ fixed we have a polynomial in $\mathbb{R}$ with respect to $x_1$ which is zero for all $x_1 \in \mathbb{R}$. Therefore,
\[
\sum_{\alpha_2=0}^{n-\alpha_1}C_{\alpha_1,\alpha_2,n-\alpha_1-\alpha_2}\,x_2^{\alpha_2}x_3^{n-\alpha_1-\alpha_2} = 0.
\]
Keep now just $x_3$ fixed and we obtain a polynomial in $x_2$ which is zero for all $x_2 \in \mathbb{R}$. Thus,
\[
C_{\alpha_1,\alpha_2,n-\alpha_1-\alpha_2}\,x_3^{n-\alpha_1-\alpha_2} = 0 \quad\Longrightarrow\quad C_{\alpha_1,\alpha_2,n-\alpha_1-\alpha_2} = 0 \quad\text{for all } |\alpha| = n.
\]
Therefore, the set $\{x \mapsto x^{\alpha}\}_{|\alpha|=n}$ is a basis of $\mathrm{Hom}_n(\mathbb{R}^3)$.
The dimension is $\#\{x \mapsto x^{\alpha}\}_{|\alpha|=n}$. We have $n+1$ choices for $\alpha_1 \in \{0,\dots,n\}$, $n+1-\alpha_1$ choices for $\alpha_2 \in \{0,\dots,n-\alpha_1\}$ and in the end 1 choice for $\alpha_3 = n-\alpha_1-\alpha_2$. This gives us:
\[
\dim\mathrm{Hom}_n(\mathbb{R}^3) = \sum_{\alpha_1=0}^{n}\#\{0,\dots,n-\alpha_1\} = \sum_{\alpha_1=0}^{n}(n+1-\alpha_1) = (n+1)^2 - \frac{n(n+1)}{2} = \frac{(n+1)(n+2)}{2}.
\]
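The counting argument can be confirmed by brute-force enumeration of multi-indices (a short sketch, our own naming):

```python
def dim_hom(n):
    """Count the multi-indices (a1, a2, a3) with a1 + a2 + a3 = n,
    i.e. the size of the monomial basis of Hom_n(R^3)."""
    # a3 = n - a1 - a2 is determined, so only a1 and a2 are free
    return sum(1 for a1 in range(n + 1) for a2 in range(n + 1 - a1))
```

The enumeration matches the closed form $(n+1)(n+2)/2$ for every degree.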
Formally we can insert the gradient operator $\nabla$ in $\mathbb{R}^3$ into a homogeneous polynomial $H_n \in \mathrm{Hom}_n(\mathbb{R}^3)$, i.e.
\[
H_n(\nabla) = \sum_{|\alpha|=n}C_{\alpha}\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\partial x_2^{\alpha_2}\partial x_3^{\alpha_3}}.
\]
If $U_n \in \mathrm{Hom}_n(\mathbb{R}^3)$ with $U_n(x) = \sum_{|\beta|=n}D_{\beta}x^{\beta}$, then holds:
\[
H_n(\nabla)U_n(x) = \sum_{|\alpha|=n}\sum_{|\beta|=n}C_{\alpha}D_{\beta}\Bigl(\frac{\partial}{\partial x_1}\Bigr)^{\alpha_1}x_1^{\beta_1}\Bigl(\frac{\partial}{\partial x_2}\Bigr)^{\alpha_2}x_2^{\beta_2}\Bigl(\frac{\partial}{\partial x_3}\Bigr)^{\alpha_3}x_3^{\beta_3}.
\]
Now all summands are zero except the ones with $\alpha = \beta$ such that
\[
H_n(\nabla)U_n(x) = \sum_{|\alpha|=n}C_{\alpha}D_{\alpha}\prod_{i=1}^{3}\Bigl(\frac{\partial}{\partial x_i}\Bigr)^{\alpha_i}x_i^{\alpha_i} = \sum_{|\alpha|=n}C_{\alpha}D_{\alpha}\underbrace{\alpha_1!\,\alpha_2!\,\alpha_3!}_{\alpha!} \in \mathbb{R}.
\]
Lemma 4.7
The mapping $\langle\cdot,\cdot\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)}: \mathrm{Hom}_n(\mathbb{R}^3)\times\mathrm{Hom}_n(\mathbb{R}^3) \to \mathbb{R}$ given by
\[ \langle H_n, U_n\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)} = H_n(\nabla_x)U_n(x) \]
defines an inner product on $\mathrm{Hom}_n(\mathbb{R}^3)$.
The set of monomials
\[ \Bigl\{x \mapsto \frac{1}{\sqrt{\alpha!}}x^{\alpha} \,\Big|\, \alpha \in \mathbb{N}_0^3,\ |\alpha| = n\Bigr\} \]
is an orthonormal system in the Hilbert space $\bigl(\mathrm{Hom}_n(\mathbb{R}^3), \langle\cdot,\cdot\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)}\bigr)$.
Proof. Let $H_n, U_n \in \mathrm{Hom}_n(\mathbb{R}^3)$ with coefficients $C_{\alpha}$ and $D_{\alpha}$, respectively.
1. $\langle H_n, H_n\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)} = H_n(\nabla_x)H_n(x) = \sum_{|\alpha|=n}C_{\alpha}^2\,\alpha! \geq 0$ and
\[
\langle H_n, H_n\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)} = 0
\;\Longleftrightarrow\; \sum_{|\alpha|=n}C_{\alpha}^2\,\alpha! = 0
\;\Longleftrightarrow\; C_{\alpha}^2 = 0 \text{ for all } |\alpha| = n
\;\Longleftrightarrow\; H_n \equiv 0.
\]
2. $\langle H_n, U_n\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)} = \sum_{|\alpha|=n}C_{\alpha}D_{\alpha}\,\alpha! = \langle U_n, H_n\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)}$.
3. Let also $V_n \in \mathrm{Hom}_n(\mathbb{R}^3)$ with coefficients $E_{\alpha}$ and $\lambda,\mu \in \mathbb{R}$:
\[
\langle H_n, \lambda U_n + \mu V_n\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)}
= \sum_{|\alpha|=n}C_{\alpha}(\lambda D_{\alpha} + \mu E_{\alpha})\,\alpha!
= \lambda\sum_{|\alpha|=n}C_{\alpha}D_{\alpha}\,\alpha! + \mu\sum_{|\alpha|=n}C_{\alpha}E_{\alpha}\,\alpha!
= \lambda\langle H_n, U_n\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)} + \mu\langle H_n, V_n\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)}.
\]
For the monomials $M_{\alpha}^{(n)}(x) = \frac{1}{\sqrt{\alpha!}}x^{\alpha}$, $|\alpha| = n$, we have:
\[
\langle M_{\alpha}^{(n)}, M_{\alpha}^{(n)}\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)} = \Bigl(\frac{1}{\sqrt{\alpha!}}\Bigr)^2\alpha! = 1, \qquad
\langle M_{\alpha}^{(n)}, M_{\beta}^{(n)}\rangle_{\mathrm{Hom}_n(\mathbb{R}^3)} = 0 \quad\text{for } \alpha \neq \beta.
\]
This concludes our proof.
By the fundamental theorem of Fourier analysis an orthonormal expansion of $H_n \in \operatorname{Hom}_n(\mathbb{R}^3)$ is given by
\[ H_n(x) = \sum_{|\alpha|=n}\langle H_n, M^{(n)}_\alpha\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}\, M^{(n)}_\alpha(x) = \sum_{|\alpha|=n} H_n(\nabla_y)\frac{1}{\sqrt{\alpha!}}\,y^\alpha\ \frac{1}{\sqrt{\alpha!}}\,x^\alpha = H_n(\nabla_y)\,\frac{1}{n!}\sum_{|\alpha|=n}\frac{n!}{\alpha!}\,y^\alpha x^\alpha. \]
Thus, we have (by the multinomial theorem)
\[ H_n(x) = H_n(\nabla_y)\left[\frac{1}{n!}(x\cdot y)^n\right] = \left\langle H_n, \frac{1}{n!}(x\cdot{}\,\cdot\,)^n\right\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}. \]
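The multinomial identity $\sum_{|\alpha|=n}\frac{n!}{\alpha!}x^\alpha y^\alpha = (x\cdot y)^n$ that was used in the last step can be checked numerically (a sketch; the function name is ours):

```python
from itertools import product
from math import factorial, isclose

def kernel_sum(x, y, n):
    # sum over |alpha| = n of (n! / alpha!) x^alpha y^alpha
    s = 0.0
    for a in product(range(n + 1), repeat=3):
        if sum(a) == n:
            c = factorial(n) / (factorial(a[0]) * factorial(a[1]) * factorial(a[2]))
            s += c * (x[0]*y[0])**a[0] * (x[1]*y[1])**a[1] * (x[2]*y[2])**a[2]
    return s

x, y, n = (0.3, -1.2, 0.7), (2.0, 0.5, -0.4), 4
dot = sum(xi * yi for xi, yi in zip(x, y))
assert isclose(kernel_sum(x, y, n), dot**n)
```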
Theorem 4.8
The mapping
\[ K_{\operatorname{Hom}_n(\mathbb{R}^3)} : \mathbb{R}^3\times\mathbb{R}^3 \to \mathbb{R}, \qquad (x,y)\mapsto K_{\operatorname{Hom}_n(\mathbb{R}^3)}(x,y) = \frac{1}{n!}(x\cdot y)^n \]
is the reproducing kernel in $\left(\operatorname{Hom}_n(\mathbb{R}^3), \langle\cdot,\cdot\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}\right)$, i.e.

(i) $K_{\operatorname{Hom}_n(\mathbb{R}^3)}(x,\cdot) \in \operatorname{Hom}_n(\mathbb{R}^3)$ for all $x\in\mathbb{R}^3$ and $K_{\operatorname{Hom}_n(\mathbb{R}^3)}(\cdot,y) \in \operatorname{Hom}_n(\mathbb{R}^3)$ for all $y\in\mathbb{R}^3$,

(ii) $\langle H_n, K_{\operatorname{Hom}_n(\mathbb{R}^3)}(x,\cdot)\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)} = H_n(x)$ for all $x\in\mathbb{R}^3$.

Proof. The second property is shown above. The first holds since $(\,\cdot\,\cdot y)^n$ as well as $(x\cdot\,\cdot\,)^n$ are homogeneous polynomials of degree $n$ for any fixed $x,y\in\mathbb{R}^3$ and thus, $K_{\operatorname{Hom}_n(\mathbb{R}^3)}(x,\cdot),\ K_{\operatorname{Hom}_n(\mathbb{R}^3)}(\cdot,y) \in \operatorname{Hom}_n(\mathbb{R}^3)$ for any fixed $x,y\in\mathbb{R}^3$.
It should be noted that a reproducing kernel is always unique (see Exercise 9.1).
Theorem 4.9
Let $\{H_{n,j}\}_{j=1,\dots,d_n}$, $d_n = \frac{(n+1)(n+2)}{2}$, be an orthonormal system in $\operatorname{Hom}_n(\mathbb{R}^3)$. Then holds for all $x,y\in\mathbb{R}^3$ that
\[ \sum_{j=1}^{d_n} H_{n,j}(x)H_{n,j}(y) = \frac{1}{n!}(x\cdot y)^n = K_{\operatorname{Hom}_n(\mathbb{R}^3)}(x,y). \]
Proof. Since the reproducing kernel in a Hilbert space is unique, we have to show that $\sum_{j=1}^{d_n} H_{n,j}(x)H_{n,j}(y)$ is a reproducing kernel in $\operatorname{Hom}_n(\mathbb{R}^3)$. Clearly,
\[ \sum_{j=1}^{d_n} H_{n,j}(x)H_{n,j}(\cdot) \in \operatorname{Hom}_n(\mathbb{R}^3), \qquad \sum_{j=1}^{d_n} H_{n,j}(\cdot)H_{n,j}(y) \in \operatorname{Hom}_n(\mathbb{R}^3), \]
for any fixed $x,y\in\mathbb{R}^3$. Let $H_n \in \operatorname{Hom}_n(\mathbb{R}^3)$, then
\[ H_n(x) = \sum_{j=1}^{d_n}\langle H_n, H_{n,j}\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}\,H_{n,j}(x) \]
and we have that
\[ \Big\langle H_n, \sum_{j=1}^{d_n} H_{n,j}(x)H_{n,j}(\cdot)\Big\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)} = \sum_{j=1}^{d_n} H_{n,j}(x)\,\langle H_n, H_{n,j}\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)} = H_n(x). \]
Therefore, $\sum_{j=1}^{d_n} H_{n,j}(x)H_{n,j}(y)$ is the reproducing kernel in the Hilbert space $\operatorname{Hom}_n(\mathbb{R}^3)$ with the inner product $\langle\cdot,\cdot\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}$.
Interpolation Problem
Given measurements $(x_j, b_j) \in \mathbb{R}^3\times\mathbb{R}$, $j = 1,\dots,\frac{1}{2}(n+1)(n+2)$, determine $F \in \operatorname{Hom}_n(\mathbb{R}^3)$ with the property
\[ F(x_j) = b_j, \quad j = 1,\dots,d_n. \]
When does a solution exist? If a solution exists, what does it look like?
Definition 4.10
A system of $d_n$ points $\{x_1,\dots,x_{d_n}\} \subset \mathbb{R}^3$ is called a fundamental system relative to $\operatorname{Hom}_n(\mathbb{R}^3)$ if the matrix $A$ with the entries $A_{j,k} = H_{n,j}(x_k)$, $j,k = 1,\dots,d_n$, is regular, where $\{H_{n,j}\}$ is an orthonormal system in $\operatorname{Hom}_n(\mathbb{R}^3)$.
Lemma 4.11
For each $n \in \mathbb{N}_0$ there exists a fundamental system relative to $\operatorname{Hom}_n(\mathbb{R}^3)$.

Proof. Let $n \in \mathbb{N}_0$ be fixed. We construct the points $\{x_1,\dots,x_{d_n}\}$ inductively. Clearly there exists a point $x_1 \in \mathbb{R}^3$ with
\[ H_{n,1}(x_1) \ne 0. \]
Now assume there exist $\{x_1,\dots,x_m\}$, $m < d_n$, such that
\[ (H_{n,j}(x_k))_{j,k=1,\dots,m} \]
is regular. Furthermore, assume that for every $y \in \mathbb{R}^3\setminus\{x_1,\dots,x_m\}$ the matrix
\[ \begin{pmatrix} H_{n,1}(x_1) & \dots & H_{n,1}(x_m) & H_{n,1}(y) \\ \vdots & & \vdots & \vdots \\ H_{n,m+1}(x_1) & \dots & H_{n,m+1}(x_m) & H_{n,m+1}(y) \end{pmatrix} \]
is singular. This means that there exists a row $i \in \{1,\dots,m+1\}$ such that this row can be represented as a linear combination of the other rows
\[ [H_{n,j}(x_1),\dots,H_{n,j}(x_m),H_{n,j}(y)], \quad j \in \{1,\dots,m+1\}\setminus\{i\}. \]
In particular, $H_{n,i}(y)$ is representable as a linear combination of $\{H_{n,j}(y)\}_{j\in\{1,\dots,m+1\}\setminus\{i\}}$ for all $y \in \mathbb{R}^3$. But this is a contradiction to the linear independence of $\{H_{n,1},\dots,H_{n,m+1}\}$.
This inductive construction can be continued until we reach $m = d_n = \frac{1}{2}(n+1)(n+2)$.
If the points of the interpolation problem are a fundamental system relative to $\operatorname{Hom}_n(\mathbb{R}^3)$, the problem can be solved uniquely.
Theorem 4.12
Let $\{H_{n,j}\}_{j=1,\dots,d_n}$ be an orthonormal system in $\operatorname{Hom}_n(\mathbb{R}^3)$ and $\{x_k\}_{k=1,\dots,d_n}$ be a fundamental system relative to $\operatorname{Hom}_n(\mathbb{R}^3)$. Then each $F \in \operatorname{Hom}_n(\mathbb{R}^3)$ is uniquely representable in the form ($x\in\mathbb{R}^3$)
\[ F(x) = \sum_{j=1}^{d_n} a_j\,K_{\operatorname{Hom}_n(\mathbb{R}^3)}(x_j,x) = \sum_{j=1}^{d_n} a_j\,\frac{(x_j\cdot x)^n}{n!}. \]
Proof. It is clear that $F$ can be written as
\[ F = \sum_{k=1}^{d_n} b_k H_{n,k} \]
where $b_k = \langle F, H_{n,k}\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}$, $k = 1,\dots,d_n$. Let $\{a_j\}_{j=1,\dots,d_n}$ be the solution of the system of linear equations
\[ \sum_{j=1}^{d_n} a_j\,H_{n,k}(x_j) = b_k, \quad k = 1,\dots,d_n. \]
The system is uniquely solvable since the points $x_k$ form a fundamental system (and thus, the matrix is regular). Then:
\[ F = \sum_{k=1}^{d_n}\sum_{j=1}^{d_n} a_j\,H_{n,k}(x_j)\,H_{n,k} = \sum_{j=1}^{d_n} a_j\sum_{k=1}^{d_n} H_{n,k}(x_j)\,H_{n,k} = \sum_{j=1}^{d_n} a_j\,K_{\operatorname{Hom}_n(\mathbb{R}^3)}(x_j,\cdot). \]
Note that the system can be very ill-conditioned.
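The kernel representation can be turned into a small numerical experiment. Instead of assembling $A_{j,k} = H_{n,j}(x_k)$ we may equivalently solve with the kernel matrix $K_{j,k} = (x_j\cdot x_k)^n/n!$; its condition number (printed below) illustrates the ill-conditioning remark. This is only a sketch under the assumption that numpy is available and that randomly chosen points form a fundamental system (they do generically):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n = 3                                    # polynomial degree
d_n = (n + 1) * (n + 2) // 2             # dim Hom_n(R^3) = 10
pts = rng.standard_normal((d_n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points on the sphere
K = (pts @ pts.T) ** n / factorial(n)    # K(x_j, x_k) = (x_j . x_k)^n / n!
b = rng.standard_normal(d_n)             # prescribed values F(x_j) = b_j
a = np.linalg.solve(K, b)                # coefficients of F = sum_j a_j K(x_j, .)
assert np.allclose(K @ a, b)             # interpolation conditions hold
print("condition number of K:", np.linalg.cond(K))
```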
4.2.2 Harmonic Polynomials

Definition 4.13
Let $D \subset \mathbb{R}^3$ be open and connected. $F \in C^{(2)}(D)$ is called harmonic if $\Delta_x F(x) = 0$ for all $x \in D$. The set of all harmonic functions in $C^{(2)}(D)$ is denoted by $\operatorname{Harm}(D)$.

Definition 4.14
The set of all homogeneous harmonic polynomials on $\mathbb{R}^3$ with degree $m \in \mathbb{N}_0$ is denoted by
\[ \operatorname{Harm}_m(\mathbb{R}^3) = \left\{ P \in \operatorname{Hom}_m(\mathbb{R}^3)\ \middle|\ \Delta P = 0 \right\}. \]
Moreover, we define for $m \in \mathbb{N}_0$
\[ \operatorname{Harm}_{0,\dots,m}(\mathbb{R}^3) = \bigoplus_{i=0}^{m}\operatorname{Harm}_i(\mathbb{R}^3), \qquad \operatorname{Harm}_{0,\dots,\infty}(\mathbb{R}^3) = \bigcup_{i=0}^{\infty}\operatorname{Harm}_{0,\dots,i}(\mathbb{R}^3). \]
Let $H \in \operatorname{Hom}_n(\mathbb{R}^3)$. There is a representation
\[ H(x) = \sum_{j=0}^{n} A_{n-j}(x_1,x_2)\,x_3^j \]
where $A_{n-j} \in \operatorname{Hom}_{n-j}(\mathbb{R}^2)$. Now let $H \in \operatorname{Harm}_n(\mathbb{R}^3)$:
\[ 0 = \Delta_x H(x) = \sum_{j=0}^{n}\left(\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial x_2^2}\right)A_{n-j}(x_1,x_2)\,x_3^j + \sum_{j=0}^{n}\frac{\partial^2}{\partial x_3^2}x_3^j\;A_{n-j}(x_1,x_2). \]
Note that
\[ \left(\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial x_2^2}\right)A_r(x_1,x_2) = 0 \quad\text{for } r \in \{0,1\}. \]
Therefore,
\[ 0 = \sum_{j=0}^{n-2}\Delta_{x,2}A_{n-j}(x_1,x_2)\,x_3^j + \sum_{j=2}^{n} j(j-1)\,x_3^{j-2}A_{n-j}(x_1,x_2) = \sum_{j=0}^{n-2}\Big(\Delta_{x,2}A_{n-j}(x_1,x_2) + (j+2)(j+1)A_{n-j-2}(x_1,x_2)\Big)x_3^j. \]
Theorem 4.15
Let $n \in \mathbb{N}_0$. Let $A_n$ and $A_{n-1}$ be homogeneous polynomials of degree $n$ and $n-1$ in $\mathbb{R}^2$. For $j = 0,\dots,n-2$ we define recursively
\[ A_{n-j-2}(x_1,x_2) = -\frac{1}{(j+1)(j+2)}\,\Delta_{x,2}A_{n-j}(x_1,x_2). \]
Then $H : \mathbb{R}^3 \to \mathbb{R}$ given by
\[ H(x_1,x_2,x_3) = \sum_{j=0}^{n} A_{n-j}(x_1,x_2)\,x_3^j \]
is in $\operatorname{Harm}_n(\mathbb{R}^3)$. Moreover,
\[ \dim\operatorname{Harm}_n(\mathbb{R}^3) = 2n+1. \]
Proof. We already know that $H \in \operatorname{Harm}_n(\mathbb{R}^3)$ from our considerations before. The degrees of freedom for $H$ occur in the choices of $A_n$ and $A_{n-1}$. For $A_j \in \operatorname{Hom}_j(\mathbb{R}^2)$, i.e. $A_j(x) = \sum_{|\alpha|=j}C_\alpha x^\alpha$ with $\alpha \in \mathbb{N}_0^2$, we have
\[ \dim\operatorname{Hom}_j(\mathbb{R}^2) = \sum_{k=0}^{j} 1 = j+1. \]
This yields that
\[ \dim\operatorname{Harm}_n(\mathbb{R}^3) = \dim\operatorname{Hom}_n(\mathbb{R}^2)+\dim\operatorname{Hom}_{n-1}(\mathbb{R}^2) = (n+1)+((n-1)+1) = 2n+1. \]
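The recursion of Theorem 4.15 is constructive and can be tested directly. The following sketch (function names are ours) represents each $A_{n-j} \in \operatorname{Hom}_{n-j}(\mathbb{R}^2)$ by its coefficient list, builds the remaining $A_{n-j-2}$ via the recursion, and checks $\Delta H = 0$ at a sample point:

```python
def lap2(c):
    # Laplacian in (x1, x2) of a homogeneous 2D polynomial of degree j,
    # given as coefficients c[k] of x1^(j-k) x2^k; the result has degree j-2
    j = len(c) - 1
    out = [0.0] * (j - 1)
    for k, ck in enumerate(c):
        a, b = j - k, k
        if a >= 2:
            out[k] += ck * a * (a - 1)        # d^2/dx1^2 contribution
        if b >= 2:
            out[k - 2] += ck * b * (b - 1)    # d^2/dx2^2 contribution
    return out

def harmonic_coeffs(A_n, A_n1):
    # A_{n-j-2} = -lap2(A_{n-j}) / ((j+1)(j+2)); returns [A_n, A_{n-1}, ..., A_0]
    n = len(A_n) - 1
    A = [A_n, A_n1]
    for j in range(n - 1):
        A.append([-c / ((j + 1) * (j + 2)) for c in lap2(A[j])])
    return A

def laplacian_at(A, x1, x2, x3):
    # evaluate Delta H at a point, where H = sum_j A[j](x1, x2) x3^j
    ev = lambda c: sum(ck * x1 ** (len(c) - 1 - k) * x2 ** k
                       for k, ck in enumerate(c))
    val = 0.0
    for j, c in enumerate(A):
        if len(c) >= 3:
            val += ev(lap2(c)) * x3 ** j                  # (d^2/dx1^2 + d^2/dx2^2)
        if j >= 2:
            val += j * (j - 1) * ev(c) * x3 ** (j - 2)    # d^2/dx3^2
    return val

A = harmonic_coeffs([1.0, 2.0, 0.0, -1.0, 3.0], [0.5, 0.0, 1.0, 2.0])  # n = 4
assert abs(laplacian_at(A, 0.7, -1.3, 0.4)) < 1e-9
```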
Theorem 4.16
For $n \ge 2$ the space $\operatorname{Hom}_n(\mathbb{R}^3)$ can be decomposed into the orthogonal and direct sum
\[ \operatorname{Hom}_n(\mathbb{R}^3) = \operatorname{Harm}_n(\mathbb{R}^3) \oplus \left(\operatorname{Harm}_n(\mathbb{R}^3)\right)^\perp \]
and
\[ \left(\operatorname{Harm}_n(\mathbb{R}^3)\right)^\perp = |\cdot|^2\operatorname{Hom}_{n-2}(\mathbb{R}^3) = \left\{ L_n\ \middle|\ L_n(x) = |x|^2 H_{n-2}(x),\ H_{n-2}\in\operatorname{Hom}_{n-2}(\mathbb{R}^3) \right\}. \]
Proof. The first result is just the standard decomposition of functional analysis. For the second result let $n\ge 2$, $H_{n-2}\in\operatorname{Hom}_{n-2}(\mathbb{R}^3)$ and $K_n\in\operatorname{Harm}_n(\mathbb{R}^3)$ be arbitrary, then
\[ \langle |\cdot|^2 H_{n-2}, K_n\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)} = \langle H_{n-2}\,|\cdot|^2, K_n\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)} = H_{n-2}(\nabla_x)\,\Delta_x K_n(x) = 0. \]
Therefore, $|\cdot|^2 H_{n-2} \in (\operatorname{Harm}_n(\mathbb{R}^3))^\perp$.
Since $\dim\operatorname{Hom}_n(\mathbb{R}^3) = \frac{(n+1)(n+2)}{2}$, $\dim\operatorname{Harm}_n(\mathbb{R}^3) = 2n+1$, $\dim\operatorname{Hom}_{n-2}(\mathbb{R}^3) = \frac{(n-1)n}{2}$ and thus,
\[ \dim\left(\operatorname{Harm}_n(\mathbb{R}^3)\right)^\perp = \dim\operatorname{Hom}_n(\mathbb{R}^3) - \dim\operatorname{Harm}_n(\mathbb{R}^3) = \dim\operatorname{Hom}_{n-2}(\mathbb{R}^3), \]
we find the desired equality of the spaces.
Corollary 4.17
Applying this theorem iteratively we obtain the decomposition
\[ \operatorname{Hom}_n(\mathbb{R}^3) = \bigoplus_{i=0}^{\lfloor n/2\rfloor} |\cdot|^{2i}\operatorname{Harm}_{n-2i}(\mathbb{R}^3). \]
Remark 4.18
Let $\{H_{n,j}\}_{j=1,\dots,2n+1}$ be an orthonormal system in $\operatorname{Harm}_n(\mathbb{R}^3)$. Then this system can be completed to an orthonormal system in $\operatorname{Hom}_n(\mathbb{R}^3)$, i.e.
\[ \{H_{n,j}\}_{j=1,\dots,2n+1}\cup\{U_{n,j}\}_{j=1,\dots,\frac{1}{2}n(n-1)} \]
where $\{U_{n,j}\}_{j=1,\dots,\frac{1}{2}n(n-1)}$ is an orthonormal system in $(\operatorname{Harm}_n(\mathbb{R}^3))^\perp$.
Lemma 4.19
For $H_n \in \operatorname{Hom}_n(\mathbb{R}^3)$ holds
\[ \operatorname{Proj}_{\operatorname{Harm}_n(\mathbb{R}^3)}H_n(x) = \left(\sum_{s=0}^{\lfloor n/2\rfloor}(-1)^s\,\frac{n!\,(2n-2s)!}{(2n)!\,(n-s)!\,s!}\,|x|^{2s}\Delta^s\right)H_n(x). \]

Proof. See tutorials (Exercises 9.2, 9.3).
Theorem 4.20 (Addition Theorem in $\operatorname{Harm}_n(\mathbb{R}^3)$)
Let $\{H_{n,j}\}_{j=1,\dots,2n+1}$ be an orthonormal system in $\operatorname{Harm}_n(\mathbb{R}^3)$ with respect to $\langle\cdot,\cdot\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}$. Then for every $x,y\in\mathbb{R}^3$ with $x = |x|\xi$, $y = |y|\eta$ ($\xi,\eta\in\Omega$) we obtain
\[ \sum_{j=1}^{2n+1} H_{n,j}(x)H_{n,j}(y) = \frac{2^n\,n!}{(2n)!}\,|x|^n|y|^n\,P_n(\xi\cdot\eta), \]
where $P_n$ denotes the Legendre polynomial of degree $n$.
Proof. Let $\{K_{n,j}\}_{j=1,\dots,\frac{1}{2}(n+1)(n+2)}$ be the orthonormal system in $\operatorname{Hom}_n(\mathbb{R}^3)$ which completes the basis from $\{H_{n,j}\}$. Due to the uniqueness of the reproducing kernel in $\operatorname{Hom}_n(\mathbb{R}^3)$ we know that
\[ \sum_{j=1}^{\frac{1}{2}(n+1)(n+2)} K_{n,j}(x)K_{n,j}(y) = \frac{(x\cdot y)^n}{n!} \]
for $x,y\in\mathbb{R}^3$. We project both sides to $\operatorname{Harm}_n(\mathbb{R}^3)$:
\[ \operatorname{Proj}_{\operatorname{Harm}_n(\mathbb{R}^3)}\sum_{j=1}^{\frac{1}{2}(n+1)(n+2)} K_{n,j}(x)K_{n,j}(y) = \sum_{j=1}^{2n+1} H_{n,j}(x)H_{n,j}(y) \]
and using Lemma 4.19 together with the property that $\Delta_x(x\cdot y)^n = n(n-1)|y|^2(x\cdot y)^{n-2}$:
\[ \operatorname{Proj}_{\operatorname{Harm}_n(\mathbb{R}^3)}\left(\frac{(x\cdot y)^n}{n!}\right) = \frac{1}{n!}\sum_{s=0}^{\lfloor n/2\rfloor}(-1)^s\,\frac{(2n-2s)!\,(n!)^2}{(n-2s)!\,(n-s)!\,s!\,(2n)!}\,|x|^{2s}|y|^{2s}(x\cdot y)^{n-2s} \]
\[ = \frac{2^n\,n!}{(2n)!}\,|x|^n|y|^n\underbrace{\sum_{s=0}^{\lfloor n/2\rfloor}(-1)^s\,\frac{(2n-2s)!}{2^n\,(n-2s)!\,(n-s)!\,s!}\,(\xi\cdot\eta)^{n-2s}}_{=P_n(\xi\cdot\eta)} \]
due to the explicit representation of the Legendre polynomials. For details see Corollary 3.63 with $\lambda = \frac{1}{2}$, where one uses (Legendre duplication formula)
\[ \frac{\Gamma(N-m+\frac{1}{2})}{\Gamma(\frac{1}{2})} = \frac{(2N-2m)!}{4^{N-m}\,(N-m)!}. \]
Theorem 4.21
For $H_m \in \operatorname{Harm}_m(\mathbb{R}^3)$, $K_n \in \operatorname{Harm}_n(\mathbb{R}^3)$ we have that
\[ \langle H_m|_\Omega, K_n|_\Omega\rangle_{L^2(\Omega)} = \delta_{n,m}\,\mu_n^{-1}\,\langle H_m, K_n\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)} \]
where $\mu_n = \frac{(2n+1)!}{4\pi\,2^n\,n!}$.
Proof. Idea of the proof:
By the fundamental theorem of potential theory holds for all $x\in\mathbb{R}^3$ with $|x|<1$ that
\[ K_n(x) = \frac{1}{4\pi}\int_\Omega\left(\frac{1}{|x-y|}\,\frac{\partial}{\partial\nu(y)}K_n(y) - K_n(y)\,\frac{\partial}{\partial\nu(y)}\frac{1}{|x-y|}\right)d\omega(y). \]
Therefore,
\[ H_m(\nabla_x)K_n(x) = \frac{1}{4\pi}\int_\Omega\left(H_m(\nabla_x)\frac{1}{|x-y|}\,\frac{\partial}{\partial\nu(y)}K_n(y) - K_n(y)\,\frac{\partial}{\partial\nu(y)}H_m(\nabla_x)\frac{1}{|x-y|}\right)d\omega(y). \]
Now we have
\[ H_m(\nabla_x)\frac{1}{|x-y|} = (-1)^m\,\frac{(2m)!}{2^m\,m!}\,\frac{H_m(x-y)}{|x-y|^{2m+1}} = \frac{(2m)!}{2^m\,m!}\,\frac{H_m(y-x)}{|x-y|^{2m+1}}. \]
At $x = 0$ holds:
\[ H_m(\nabla_x)K_n(x)\big|_{x=0} = \begin{cases} 0 & \text{for } m \ne n, \\ \langle H_m, K_n\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)} & \text{for } n = m. \end{cases} \]
Therefore,
\[ \frac{1}{4\pi}\int_\Omega\left(\frac{H_m(y)}{|y|^{2m+1}}\,\frac{\partial}{\partial\nu(y)}K_n(y) - K_n(y)\,\frac{\partial}{\partial\nu(y)}\frac{H_m(y)}{|y|^{2m+1}}\right)d\omega(y) = \begin{cases} 0 & \text{for } m \ne n, \\ \frac{2^m m!}{(2m)!}\,\langle H_m, K_n\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)} & \text{for } n = m. \end{cases} \]
Next we see that on the sphere ($y = r\xi$, $r = 1$)
\[ \frac{\partial}{\partial\nu(y)}K_n(y) = \nu(y)\cdot\nabla_y K_n(y) = \frac{\partial}{\partial r}K_n(r\xi) = \frac{\partial}{\partial r}r^n K_n(\xi) = n\,r^{n-1}K_n(\xi), \]
such that
\[ \frac{\partial}{\partial\nu(y)}K_n(y)\Big|_\Omega = n\,K_n(\xi), \qquad \frac{\partial}{\partial\nu(y)}H_m(y)\Big|_\Omega = m\,H_m(\xi), \]
and
\[ \frac{\partial}{\partial\nu(y)}\frac{H_m(y)}{|y|^{2m+1}} = \left(\frac{\partial}{\partial\nu(y)}H_m(y)\right)\frac{1}{|y|^{2m+1}} + H_m(y)\,\frac{\partial}{\partial\nu(y)}\frac{1}{|y|^{2m+1}} = m\,H_m(y)\frac{1}{|y|^{2m+1}} + H_m(y)\,(-(2m+1))\frac{1}{|y|^{2m+2}}, \]
so that on the sphere
\[ \frac{\partial}{\partial\nu(y)}\frac{H_m(y)}{|y|^{2m+1}}\Big|_\Omega = m\,H_m(\xi) - (2m+1)H_m(\xi) = -(m+1)H_m(\xi). \]
Together this yields that
\[ \frac{1}{4\pi}\int_\Omega\left(\frac{H_m(y)}{|y|^{2m+1}}\,\frac{\partial}{\partial\nu(y)}K_n(y) - K_n(y)\,\frac{\partial}{\partial\nu(y)}\frac{H_m(y)}{|y|^{2m+1}}\right)d\omega(y) = \frac{1}{4\pi}\int_\Omega\left(n\,H_m(\xi)K_n(\xi) + (m+1)H_m(\xi)K_n(\xi)\right)d\omega(\xi) \]
\[ = \frac{n+m+1}{4\pi}\int_\Omega H_m(\xi)K_n(\xi)\,d\omega(\xi). \]
Altogether we find the desired result.
We reformulate the addition theorem.
Corollary 4.22
Let $\{H_{n,j}\}_{j=1,\dots,2n+1}$ be an orthonormal system in $\operatorname{Harm}_n(\mathbb{R}^3)$ with respect to $\langle\cdot,\cdot\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}$. Then for every $x,y\in\mathbb{R}^3$ with $x = |x|\xi$, $y = |y|\eta$ ($\xi,\eta\in\Omega$) we find that
\[ \sum_{j=1}^{2n+1}\sqrt{\mu_n}\,H_{n,j}(x)\,\sqrt{\mu_n}\,H_{n,j}(y) = \frac{2n+1}{4\pi}\,|x|^n|y|^n\,P_n(\xi\cdot\eta). \]
4.2.3 Harmonic Polynomials on the Sphere

Definition 4.23
For $m\in\mathbb{N}_0$ and a set $D\subset\mathbb{R}^3$ we define
\[ \operatorname{Harm}_m(D) = \left\{ P|_D : P\in\operatorname{Harm}_m(\mathbb{R}^3) \right\}, \]
\[ \operatorname{Harm}_{0,\dots,m}(D) = \left\{ P|_D : P\in\operatorname{Harm}_{0,\dots,m}(\mathbb{R}^3) \right\}, \]
\[ \operatorname{Harm}_{0,\dots,\infty}(D) = \left\{ P|_D : P\in\operatorname{Harm}_{0,\dots,\infty}(\mathbb{R}^3) \right\}. \]
The elements of the space $\operatorname{Harm}_n(\Omega)$, $n\in\mathbb{N}_0$, are called (scalar) spherical harmonics of degree $n$.
From the comparison of $\langle\cdot,\cdot\rangle_{L^2(\Omega)}$ and $\langle\cdot,\cdot\rangle_{\operatorname{Hom}_n(\mathbb{R}^3)}$ (see Theorem 4.21) it immediately follows that for $Y_n\in\operatorname{Harm}_n(\Omega)$ and $Y_m\in\operatorname{Harm}_m(\Omega)$
\[ \langle Y_n, Y_m\rangle_{L^2(\Omega)} = \int_\Omega Y_n(\xi)Y_m(\xi)\,d\omega(\xi) = 0 \]
if $n\ne m$. ($d\omega(\xi)$ denotes the surface element of the sphere $\Omega$.)
Theorem 4.24
Let $n\in\mathbb{N}_0$. Then
\[ \dim\operatorname{Harm}_n(\mathbb{R}^3) = \dim\operatorname{Harm}_n(\Omega) = 2n+1. \]

Proof. See Exercise 10.1.
Lemma 4.25
Any spherical harmonic $Y_n\in\operatorname{Harm}_n(\Omega)$, $n\in\mathbb{N}_0$, is an infinitely often differentiable eigenfunction of the Beltrami operator $\Delta^*$ corresponding to the eigenvalue $(\Delta^*)^\wedge(n) = -n(n+1)$, i.e.
\[ \Delta^* Y_n(\xi) = (\Delta^*)^\wedge(n)\,Y_n(\xi). \]
The sequence $\{(\Delta^*)^\wedge(n)\}_{n\in\mathbb{N}_0}$ is called (spherical) symbol of the Beltrami operator.

Proof. $Y_n\in\operatorname{Harm}_n(\Omega)$ is the restriction of a polynomial $H_n\in\operatorname{Harm}_n(\mathbb{R}^3)$ to the sphere, i.e.
\[ \Delta_x H_n(x) = 0 \ \ \forall x\in\mathbb{R}^3 \quad\Longrightarrow\quad \left(\left(\frac{\partial}{\partial r}\right)^2 + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\Delta^*_\xi\right)H_n(r\xi) = 0. \]
Due to the homogeneity of $H_n$: $H_n(x) = r^n Y_n(\xi)$ with $r>0$. Thus,
\[ 0 = \Delta_x H_n(x) = n(n-1)r^{n-2}Y_n(\xi) + \frac{2}{r}\,n r^{n-1}Y_n(\xi) + \frac{1}{r^2}\,r^n\,\Delta^*_\xi Y_n(\xi) = r^{n-2}\left(n^2 - n + 2n + \Delta^*_\xi\right)Y_n(\xi). \]
Since $r>0$, it follows that
\[ 0 = \left(n^2 + n + \Delta^*_\xi\right)Y_n(\xi) \quad\Longrightarrow\quad \Delta^*_\xi Y_n(\xi) = -(n^2+n)Y_n(\xi) = -n(n+1)Y_n(\xi). \]
Definition 4.26
We denote a complete orthonormal system in $(\operatorname{Harm}_n(\Omega), \langle\cdot,\cdot\rangle_{L^2(\Omega)})$ by $\{Y_{n,j}\}_{j=-n,\dots,n}$, i.e.
1. $\langle Y_{n,j}, Y_{n,k}\rangle_{L^2(\Omega)} = \delta_{j,k}$ for all $j,k\in\{-n,\dots,n\}$.
2. If $\langle F, Y_{n,j}\rangle_{L^2(\Omega)} = 0$ for all $j\in\{-n,\dots,n\}$ with $F\in\operatorname{Harm}_n(\Omega)$, then $F = 0$.
We call $n$ the degree of the spherical harmonic $Y_{n,j}$ and $j$ the order.

Remark 4.27
The family $\{Y_{n,j}\}_{n\in\mathbb{N}_0,\,j=-n,\dots,n}$ forms an $L^2(\Omega)$-orthonormal system in $\operatorname{Harm}_{0,\dots,\infty}(\Omega)$.
Theorem 4.28 (Addition Theorem of Spherical Harmonics)
If $\{Y_{n,j}\}_{j=-n,\dots,n}$ is an $L^2(\Omega)$-orthonormal system in $\operatorname{Harm}_n(\Omega)$, $n\in\mathbb{N}_0$, then
\[ \sum_{j=-n}^{n} Y_{n,j}(\xi)Y_{n,j}(\eta) = \frac{2n+1}{4\pi}\,P_n(\xi\cdot\eta) \]
for all $(\xi,\eta)\in\Omega^2$, where $P_n$ is the Legendre polynomial of degree $n$. Moreover,
\[ \sum_{j=-n}^{n}\left(Y_{n,j}(\xi)\right)^2 = \frac{2n+1}{4\pi} \quad\text{for all } \xi\in\Omega, \]
\[ \|Y_{n,j}\|_{C(\Omega)} = \sup_{\xi\in\Omega}|Y_{n,j}(\xi)| \le \sqrt{\frac{2n+1}{4\pi}} \quad\text{for all } j = -n,\dots,n. \]

Proof. This result follows immediately from the addition theorem in $\operatorname{Harm}_n(\mathbb{R}^3)$, see Theorem 4.20, respectively Corollary 4.22 together with Theorem 4.21.
Remark 4.29
Using the addition theorem it is easy to show that the mapping
\[ (\xi,\eta)\mapsto K_{\operatorname{Harm}_n(\Omega)}(\xi,\eta) = \frac{2n+1}{4\pi}\,P_n(\xi\cdot\eta) \]
is the reproducing kernel in $\operatorname{Harm}_n(\Omega)$, i.e.
\[ \frac{2n+1}{4\pi}\int_\Omega Y_n(\eta)\,P_n(\xi\cdot\eta)\,d\omega(\eta) = Y_n(\xi). \]
Lemma 4.30
For every $Y_n\in\operatorname{Harm}_n(\Omega)$ holds
\[ \sup_{\xi\in\Omega}|Y_n(\xi)| \le \sqrt{\frac{2n+1}{4\pi}}\,\|Y_n\|_{L^2(\Omega)}. \]
Proof. Due to Fourier theory it is clear that by Parseval's identity
\[ \|Y_n\|^2_{L^2(\Omega)} = \sum_{j=-n}^{n}\left|\langle Y_n, Y_{n,j}\rangle_{L^2(\Omega)}\right|^2 \]
and we have the representation
\[ Y_n = \sum_{j=-n}^{n}\langle Y_n, Y_{n,j}\rangle_{L^2(\Omega)}\,Y_{n,j} \]
on the sphere $\Omega$. Therefore, by the Cauchy–Schwarz inequality and the addition theorem,
\[ |Y_n(\xi)|^2 \le \sum_{j=-n}^{n}\langle Y_n, Y_{n,j}\rangle^2_{L^2(\Omega)}\,\sum_{j=-n}^{n}\left(Y_{n,j}(\xi)\right)^2 = \frac{2n+1}{4\pi}\,\|Y_n\|^2_{L^2(\Omega)}. \]
Example 4.31 (Complex-valued Spherical Harmonics)
Let $n\in\mathbb{N}_0$, $j\in\mathbb{Z}$ with $-n\le j\le n$. The function $Y^{\mathbb{C}}_{n,j}:\Omega\to\mathbb{C}$,
\[ Y^{\mathbb{C}}_{n,j}(\xi) = (-1)^j\sqrt{\frac{2n+1}{4\pi}\,\frac{(n-j)!}{(n+j)!}}\;P_{n,j}(\cos(\theta))\,e^{ij\varphi}, \]
is called (complex) spherical harmonic of degree $n$ and order $j$ where $\theta,\varphi$ are the spherical coordinates of $\xi$ and
\[ P_{n,j}:[-1,1]\to\mathbb{R}, \qquad t\mapsto P_{n,j}(t) = (1-t^2)^{\frac{j}{2}}\,\frac{d^j}{dt^j}P_n(t) \]
is the associated Legendre function of degree $n$ and order $0\le j\le n$. For negative orders holds the symmetry relation
\[ P_{n,-j}(t) = (-1)^j\,\frac{(n-j)!}{(n+j)!}\,P_{n,j}(t). \]
These spherical harmonics are orthonormal with respect to the scalar product of the space $L^2_{\mathbb{C}}(\Omega)$ of complex-valued square-integrable functions on the unit sphere $\Omega$. Their addition theorem reads as follows
\[ \sum_{j=-n}^{n} Y^{\mathbb{C}}_{n,j}(\xi)\,\overline{Y^{\mathbb{C}}_{n,j}(\eta)} = \frac{2n+1}{4\pi}\,P_n(\xi\cdot\eta). \]
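The addition theorem of Example 4.31 can be checked numerically. The following sketch builds $P_{n,j}$ and $Y^{\mathbb{C}}_{n,j}$ from scratch with numpy's Legendre-series utilities (function names are ours; negative orders use the symmetry relation, which amounts to $Y^{\mathbb{C}}_{n,-j} = (-1)^j\overline{Y^{\mathbb{C}}_{n,j}}$):

```python
import numpy as np
from math import factorial, pi

def assoc_legendre(n, j, t):
    # P_{n,j}(t) = (1 - t^2)^(j/2) d^j/dt^j P_n(t),  0 <= j <= n
    c = np.zeros(n + 1); c[n] = 1.0                  # Legendre coeffs of P_n
    p = np.polynomial.legendre.leg2poly(c)           # -> power basis
    for _ in range(j):
        p = np.polynomial.polynomial.polyder(p)
    return (1 - t**2) ** (j / 2) * np.polynomial.polynomial.polyval(t, p)

def Y(n, j, theta, phi):
    # complex spherical harmonic of Example 4.31 (phase (-1)^j)
    jj = abs(j)
    norm = np.sqrt((2*n + 1) / (4*pi) * factorial(n - jj) / factorial(n + jj))
    val = (-1)**jj * norm * assoc_legendre(n, jj, np.cos(theta)) * np.exp(1j*jj*phi)
    if j < 0:   # symmetry relation for negative orders
        val = (-1)**jj * np.conj(val)
    return val

n, (t1, p1), (t2, p2) = 5, (0.7, 1.1), (2.0, -0.4)
xi  = np.array([np.sin(t1)*np.cos(p1), np.sin(t1)*np.sin(p1), np.cos(t1)])
eta = np.array([np.sin(t2)*np.cos(p2), np.sin(t2)*np.sin(p2), np.cos(t2)])
lhs = sum(Y(n, j, t1, p1) * np.conj(Y(n, j, t2, p2)) for j in range(-n, n + 1))
c = np.zeros(n + 1); c[n] = 1.0
rhs = (2*n + 1) / (4*pi) * np.polynomial.legendre.legval(xi @ eta, c)
assert abs(lhs - rhs) < 1e-10
```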
Example 4.32 (Real Fully Normalized Spherical Harmonics)
Let $n\in\mathbb{N}_0$, $j\in\mathbb{Z}$ with $-n\le j\le n$. The function $Y^{\mathbb{R}}_{n,j}:\Omega\to\mathbb{R}$,
\[ Y^{\mathbb{R}}_{n,j}(\xi) = \sqrt{\frac{2n+1}{4\pi}\,\frac{(n-|j|)!}{(n+|j|)!}}\;P_{n,|j|}(\cos(\theta))\cdot\begin{cases}\sqrt{2}\,\cos(j\varphi) & : j<0, \\ 1 & : j=0, \\ \sqrt{2}\,\sin(j\varphi) & : j>0,\end{cases} \]
is called (real) fully normalized spherical harmonic of degree $n$ and order $j$ where $\theta,\varphi$ are the spherical coordinates of $\xi$ and $P_{n,j}$ is the associated Legendre function of degree $n$ and order $0\le j\le n$ as in Example 4.31.
The $Y^{\mathbb{R}}_{n,j}$ are orthonormal with respect to the scalar product of $L^2_{\mathbb{R}}(\Omega)$ and the addition theorem holds as written in Theorem 4.28. The real spherical harmonics $Y^{\mathbb{R}}_{n,j}$ and the complex spherical harmonics $Y^{\mathbb{C}}_{n,j}$ (of Example 4.31) are related via
\[ Y^{\mathbb{R}}_{n,j}(\xi) = \begin{cases}\dfrac{\sqrt{2-\delta_{0,j}}}{2}\left(Y^{\mathbb{C}}_{n,j}(\xi) + Y^{\mathbb{C}}_{n,-j}(\xi)\right) & : j\le 0, \\[2mm] (-1)^j\,\dfrac{\sqrt{2}}{2i}\left(Y^{\mathbb{C}}_{n,j}(\xi) - Y^{\mathbb{C}}_{n,-j}(\xi)\right) & : j>0,\end{cases} \]
for all $\xi\in\Omega$, $n\in\mathbb{N}_0$ and $j\in\mathbb{Z}$ with $-n\le j\le n$.

Figure 4.1: Spherical harmonics $Y_{3,2}$ (left) and $Y_{7,0}$ (right).
Figure 4.2: Spherical harmonics $Y_{7,5}$ (left) and $Y_{7,7}$ (right).

Figure 4.3: Spherical harmonics $Y_{10,2}$ (left) and $Y_{10,9}$ (right).
Figure 4.4: Spherical harmonics $Y_{10,-6}$ (left) and $Y_{10,6}$ (right).
4.3 Closure and Completeness of Spherical Harmonics
Lemma 4.33
For all $t\in[-1,1]$ and all $h\in(-1,1)$ hold
\[ \sum_{n=0}^{\infty} h^n P_n(t) = \frac{1}{\sqrt{1+h^2-2ht}}, \qquad \sum_{n=0}^{\infty}(2n+1)h^n P_n(t) = \frac{1-h^2}{(1+h^2-2ht)^{\frac{3}{2}}}. \]

Proof. The first result is the generating function which we have in the more general setting of Gegenbauer polynomials. For the second equation see Exercise 8.4.
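Both generating functions are easy to verify numerically with a truncated Legendre series (a sketch; numpy assumed, truncation error is negligible since $h^n$ decays geometrically):

```python
import numpy as np

# truncated check of sum_n h^n P_n(t) = (1 + h^2 - 2ht)^(-1/2)
# and of sum_n (2n+1) h^n P_n(t) = (1 - h^2)(1 + h^2 - 2ht)^(-3/2)
N, h, t = 200, 0.6, -0.35
c1 = h ** np.arange(N + 1)                   # Legendre coefficients h^n
c2 = (2 * np.arange(N + 1) + 1) * c1         # coefficients (2n+1) h^n
s1 = np.polynomial.legendre.legval(t, c1)
s2 = np.polynomial.legendre.legval(t, c2)
r = 1.0 + h**2 - 2*h*t
assert abs(s1 - r**-0.5) < 1e-12
assert abs(s2 - (1 - h**2) * r**-1.5) < 1e-12
```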
Theorem 4.34 (Poisson Integral Formula)
If $F$ is continuous on $\Omega$, then
\[ \lim_{h\to 1-}\ \sup_{\xi\in\Omega}\left|\frac{1}{4\pi}\int_\Omega\frac{1-h^2}{(1+h^2-2h(\xi\cdot\eta))^{\frac{3}{2}}}\,F(\eta)\,d\omega(\eta) - F(\xi)\right| = 0. \]

Proof. Since $|h|<1$ and $\|P_n\|_{C[-1,1]} = 1$ we use Lemma 4.33 and find
\[ \int_{-1}^{1}\frac{1-h^2}{(1+h^2-2ht)^{\frac{3}{2}}}\,dt = \sum_{n=0}^{\infty}(2n+1)h^n\underbrace{\int_{-1}^{1}P_n(t)\cdot 1\,dt}_{=\delta_{n,0}\,\|P_0\|^2_{L^2[-1,1]}} = 2. \]
Therefore, we use Theorem 4.3:
\[ \frac{1}{4\pi}\int_\Omega\frac{1-h^2}{(1+h^2-2h(\xi\cdot\eta))^{\frac{3}{2}}}\,d\omega(\eta) = 1, \quad \xi\in\Omega. \]
Choose $\xi\in\Omega$ arbitrary, but fixed. Then
\[ \frac{1}{4\pi}\int_\Omega\frac{(1-h^2)(F(\eta)-F(\xi))}{(1+h^2-2h(\xi\cdot\eta))^{\frac{3}{2}}}\,d\omega(\eta) = \frac{1}{4\pi}\int_\Omega\frac{(1-h^2)F(\eta)}{(1+h^2-2h(\xi\cdot\eta))^{\frac{3}{2}}}\,d\omega(\eta) - F(\xi). \]
For $h\in[\frac{1}{2},1)$ we split the left integral into two parts, i.e.
\[ \int_\Omega\ldots = \int_{-1\le\xi\cdot\eta\le 1-\sqrt[3]{1-h}}\ldots + \int_{1-\sqrt[3]{1-h}\le\xi\cdot\eta\le 1}\ldots \]
For $t\in[-1,\,1-\sqrt[3]{1-h}]$ we find
\[ 1+h^2-2ht = (1-h)^2 + 2h(1-t) \ \ge\ 2h(1-t) \ \ge\ 2h\sqrt[3]{1-h}. \]
This leads to
\[ \frac{1-h^2}{(1+h^2-2ht)^{\frac{3}{2}}} \le \frac{1-h^2}{(2h\sqrt[3]{1-h})^{\frac{3}{2}}} = \frac{1+h}{(2h)^{\frac{3}{2}}}\,\frac{1-h}{\sqrt{1-h}} = \frac{1+h}{(2h)^{\frac{3}{2}}}\,\sqrt{1-h} \le 2\sqrt{1-h}, \]
which gives us for the first integral of the splitting the following estimate:
\[ \left|\int_{-1\le\xi\cdot\eta\le 1-\sqrt[3]{1-h}}\frac{(1-h^2)(F(\eta)-F(\xi))}{(1+h^2-2h(\xi\cdot\eta))^{\frac{3}{2}}}\,d\omega(\eta)\right| \le 2\|F\|_{C(\Omega)}\,2\pi\int_{-1}^{1-\sqrt[3]{1-h}}\frac{1-h^2}{(1+h^2-2ht)^{\frac{3}{2}}}\,dt \]
\[ \le 4\pi\|F\|_{C(\Omega)}\int_{-1}^{1-\sqrt[3]{1-h}}2\sqrt{1-h}\,dt = 4\pi\|F\|_{C(\Omega)}\,2\sqrt{1-h}\underbrace{\left(1-\sqrt[3]{1-h}+1\right)}_{\le 2} \le 16\pi\,\|F\|_{C(\Omega)}\sqrt{1-h} \xrightarrow{h\to 1-} 0 \]
uniformly w.r.t. $\xi\in\Omega$.
Since $F\in C(\Omega)$ and $\Omega$ is compact, $F$ is uniformly continuous, i.e. there exists a function $\mu: h\mapsto\mu(h)$ with $\lim_{h\to 1-}\mu(h) = 0$ such that
\[ |F(\eta)-F(\xi)| \le \mu(h) \]
for all $\eta\in\Omega$ that satisfy $1-\sqrt[3]{1-h}\le\xi\cdot\eta\le 1$. $\mu(h)$ is independent of $\xi$.
Now we consider the second part of the integral:
\[ \left|\int_{1-\sqrt[3]{1-h}\le\xi\cdot\eta\le 1}\frac{(1-h^2)(F(\eta)-F(\xi))}{(1+h^2-2h(\xi\cdot\eta))^{\frac{3}{2}}}\,d\omega(\eta)\right| \le \mu(h)\int_\Omega\frac{1-h^2}{(1+h^2-2h(\xi\cdot\eta))^{\frac{3}{2}}}\,d\omega(\eta) = \mu(h)\,4\pi \xrightarrow{h\to 1-} 0. \]
Altogether we find the desired convergence result.
Theorem 4.35
Let $F\in C(\Omega)$. Then the series
\[ \sum_{n=0}^{\infty} h^n\sum_{j=-n}^{n} F^\wedge(n,j)\,Y_{n,j}(\xi) \quad\text{with the coefficients}\quad F^\wedge(n,j) = \langle F, Y_{n,j}\rangle_{L^2(\Omega)} \]
converges uniformly w.r.t. all $\xi\in\Omega$ for $h\in(0,h_0)$ with $h_0<1$, and
\[ \lim_{h\to 1-}\sum_{n=0}^{\infty} h^n\sum_{j=-n}^{n} F^\wedge(n,j)\,Y_{n,j}(\xi) = F(\xi), \]
where this convergence is uniform.

Proof. See tutorials (Exercise 10.2).
First show uniform convergence of the series using the definition of the Fourier coefficients and the addition theorem (Theorem 4.28). Estimate the integrand with the sup-norm and with $|P_n(t)|\le 1$ (for $t\in[-1,1]$).
For the limit process use Lemma 4.33 to obtain a representation like in Theorem 4.34. The dominated convergence theorem allows to interchange the infinite series and the integral.
Corollary 4.36
The system $\{Y_{n,j}\}_{n\in\mathbb{N}_0,\,-n\le j\le n}$ is closed in $\left(C(\Omega), \|\cdot\|_{C(\Omega)}\right)$, i.e. for all $F\in C(\Omega)$ and for all $\varepsilon>0$ exists a finite linear combination $\sum_{n=0}^{N}\sum_{j=-n}^{n} d_{n,j}Y_{n,j}$ such that
\[ \left\|F - \sum_{n=0}^{N}\sum_{j=-n}^{n} d_{n,j}Y_{n,j}\right\|_{C(\Omega)} \le \varepsilon. \]
In other words:
\[ \overline{\operatorname{span}\left(Y_{n,j},\ n\in\mathbb{N}_0,\ j=-n,\dots,n\right)}^{\|\cdot\|_{C(\Omega)}} = C(\Omega). \]
Proof. Let $F\in C(\Omega)$ and $\varepsilon>0$ be arbitrary. According to Theorem 4.35 there exists $h = h(\varepsilon)<1$ fixed such that
\[ \left\|\sum_{n=0}^{\infty} h^n\sum_{j=-n}^{n} F^\wedge(n,j)Y_{n,j} - F\right\|_{C(\Omega)} \le \frac{\varepsilon}{2}. \]
Moreover, the theorem tells us that there exists $N = N(\varepsilon)$ such that
\[ \left\|\sum_{n=0}^{\infty} h^n\sum_{j=-n}^{n} F^\wedge(n,j)Y_{n,j} - \sum_{n=0}^{N} h^n\sum_{j=-n}^{n} F^\wedge(n,j)Y_{n,j}\right\|_{C(\Omega)} \le \frac{\varepsilon}{2} \]
since the series converges uniformly for fixed $h<1$ (here $h = h(\varepsilon)$). Together this gives us
\[ \Bigg\|F - \sum_{n=0}^{N}\sum_{j=-n}^{n}\underbrace{h^n F^\wedge(n,j)}_{=d_{n,j}}Y_{n,j}\Bigg\|_{C(\Omega)} \le \frac{\varepsilon}{2}+\frac{\varepsilon}{2} = \varepsilon. \]
Since for all $F\in C(\Omega)$ holds the inequality
\[ \|F\|_{L^2(\Omega)} \le \sqrt{4\pi}\,\|F\|_{C(\Omega)} \tag{4.1} \]
we immediately find that $\{Y_{n,j}\}_{n\in\mathbb{N}_0,\,-n\le j\le n}$ is also closed in $\left(C(\Omega), \|\cdot\|_{L^2(\Omega)}\right)$.
Corollary 4.37
The system $\{Y_{n,j}\}_{n\in\mathbb{N}_0,\,-n\le j\le n}$ is closed in $\left(L^2(\Omega), \|\cdot\|_{L^2(\Omega)}\right)$.

Proof. We know that
\[ \overline{C(\Omega)}^{\|\cdot\|_{L^2(\Omega)}} = L^2(\Omega). \]
Let $F\in L^2(\Omega)$ and $\varepsilon>0$. Then there exists $G\in C(\Omega)$ such that $\|F-G\|_{L^2(\Omega)}\le\frac{\varepsilon}{2}$. Due to Corollary 4.36 and estimate (4.1) there exists a finite linear combination $\sum_{n=0}^{N}\sum_{j=-n}^{n} a_{n,j}Y_{n,j}$ corresponding to $G$ such that
\[ \left\|G - \sum_{n=0}^{N}\sum_{j=-n}^{n} a_{n,j}Y_{n,j}\right\|_{L^2(\Omega)} \le \frac{\varepsilon}{2}. \]
Combining these two results yields:
\[ \left\|F - \sum_{n=0}^{N}\sum_{j=-n}^{n} a_{n,j}Y_{n,j}\right\|_{L^2(\Omega)} \le \frac{\varepsilon}{2}+\frac{\varepsilon}{2} = \varepsilon. \]
Now we can apply Theorem 3.7 (closure is equivalent to completeness etc.) to the system of spherical harmonics. This gives us $L^2$-Fourier approximation on the sphere, i.e.
\[ \lim_{N\to\infty}\left\|F - \sum_{n=0}^{N}\sum_{j=-n}^{n}\langle F, Y_{n,j}\rangle_{L^2(\Omega)}\,Y_{n,j}\right\|_{L^2(\Omega)} = 0 \quad\text{for all } F\in L^2(\Omega). \]
Remark 4.38
(a) Let $X\in\{C(\Omega), L^p(\Omega)\}_{p\in[1,\infty)}$. If $F,G\in X$ with
\[ \lim_{N\to\infty}\left\|G - \sum_{n=0}^{N}\sum_{j=-n}^{n}\langle F, Y_{n,j}\rangle_{L^2(\Omega)}\,Y_{n,j}\right\|_X = 0, \]
then $F = G$ almost everywhere on $\Omega$.
(b) For $X\in\{C(\Omega), L^p(\Omega)\}_{p\in[1,\frac{4}{3}]\cup[4,\infty)}$ it is unknown whether the truncated Fourier series converges for all $F\in X$, i.e. a truncated Fourier series with $L^2$-Fourier coefficients might not yield an approximation in the $X$-topology.
(c) If $F:\Omega\to\mathbb{R}$ is Lipschitz continuous, then
\[ \lim_{N\to\infty}\left\|F - \sum_{n=0}^{N}\sum_{j=-n}^{n}\langle F, Y_{n,j}\rangle_{L^2(\Omega)}\,Y_{n,j}\right\|_{C(\Omega)} = 0, \]
i.e. the Fourier series is uniformly convergent in this case.

Remark 4.39
Using the completeness property of the spherical harmonics system we can prove that the spherical harmonics $Y_n\in\operatorname{Harm}_n(\Omega)$ are the only $C^{(\infty)}(\Omega)$-eigenfunctions of the Beltrami operator: $\Delta^*$ only has the eigenvalues $-n(n+1)$ with $n\in\mathbb{N}_0$.
4.4 The Funk–Hecke Formula

Theorem 4.40 (Funk–Hecke Formula)
Let $G\in L^1[-1,1]$, $n\in\mathbb{N}_0$. Then
\[ \int_\Omega G(\xi\cdot\eta)\,P_n(\zeta\cdot\eta)\,d\omega(\eta) = \underbrace{2\pi\int_{-1}^{1}G(t)P_n(t)\,dt}_{=G^\wedge(n)}\;P_n(\xi\cdot\zeta) \]
for all $\xi,\zeta\in\Omega$. $G^\wedge(n)$ is called Legendre coefficient of $G$.

Proof. See Exercise 10.3.
Corollary 4.41
Let $G\in L^1[-1,1]$. Then for every $Y_n\in\operatorname{Harm}_n(\Omega)$ holds
\[ \int_\Omega G(\xi\cdot\eta)\,Y_n(\eta)\,d\omega(\eta) = G^\wedge(n)\,Y_n(\xi), \quad \xi\in\Omega, \]
where $G^\wedge(n)$ is the Legendre coefficient as defined in Theorem 4.40.

Proof. This is a direct consequence of the Funk–Hecke formula (Theorem 4.40) and the fact that the Legendre polynomial $P_n$ is the reproducing kernel in $\operatorname{Harm}_n(\Omega)$ (see Remark 4.29).
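The Funk–Hecke formula can be verified by numerical quadrature on the sphere (a sketch with an arbitrary choice $G(t) = e^t$; numpy assumed, a Gauss–Legendre rule in $t = \cos\theta$ combined with a uniform rule in $\varphi$):

```python
import numpy as np

def legval(t, n):
    c = np.zeros(n + 1); c[n] = 1.0
    return np.polynomial.legendre.legval(t, c)

# quadrature grid on the sphere
nt, nphi, n = 60, 120, 4
t, w = np.polynomial.legendre.leggauss(nt)
phi = 2 * np.pi * np.arange(nphi) / nphi
T, PHI = np.meshgrid(t, phi, indexing="ij")
eta = np.stack([np.sqrt(1 - T**2) * np.cos(PHI),
                np.sqrt(1 - T**2) * np.sin(PHI), T], axis=-1)
W = (w[:, None] * np.ones(nphi)) * (2 * np.pi / nphi)      # surface weights

G = lambda s: np.exp(s)                                    # any G in L^1[-1,1]
xi   = np.array([0.0, 0.6, 0.8])
zeta = np.array([1.0, 0.0, 0.0])
lhs = np.sum(W * G(eta @ xi) * legval(eta @ zeta, n))      # sphere integral
tq, wq = np.polynomial.legendre.leggauss(200)
Ghat = 2 * np.pi * np.sum(wq * G(tq) * legval(tq, n))      # Legendre coefficient
assert abs(lhs - Ghat * legval(xi @ zeta, n)) < 1e-8
```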
4.5 Green's Function with Respect to the Beltrami Operator

Definition 4.42
The function $G(\Delta^*;\cdot,\cdot):(\xi,\eta)\mapsto G(\Delta^*;\xi,\eta)$, $-1\le\xi\cdot\eta<1$, is called Green's function on $\Omega$ with respect to the operator $\Delta^*$, if it satisfies the following properties:
(i) for every $\eta\in\Omega$ the function $\xi\mapsto G(\Delta^*;\xi,\eta)$ is twice continuously differentiable on the set $\Omega\setminus\{\eta\}$ such that
\[ \Delta^*_\xi\,G(\Delta^*;\xi,\eta) = -\frac{1}{4\pi}, \quad -1\le\xi\cdot\eta<1, \]
(ii) for every $\eta\in\Omega$ the function
\[ \xi\mapsto G(\Delta^*;\xi,\eta) - \frac{1}{4\pi}\ln(1-\xi\cdot\eta) \]
is continuously differentiable on $\Omega$,
(iii) for all orthogonal transformations $\mathbf{A}$
\[ G(\Delta^*;\mathbf{A}\xi,\mathbf{A}\eta) = G(\Delta^*;\xi,\eta), \]
(iv) for every $\eta\in\Omega$
\[ \frac{1}{4\pi}\int_\Omega G(\Delta^*;\xi,\eta)\,d\omega(\xi) = 0. \]
Note that for $-1\le\xi\cdot\eta<1$
\[ \ln|\xi-\eta| = \frac{1}{2}\ln(2-2\xi\cdot\eta) = \frac{1}{2}\ln(1-\xi\cdot\eta) + \frac{1}{2}\ln(2). \]
Lemma 4.43
$G(\Delta^*;\cdot,\cdot)$ is uniquely determined by its defining properties (i)–(iv). The explicit representation of the Green function for $-1\le\xi\cdot\eta<1$ is
\[ G(\Delta^*;\xi,\eta) = \frac{1}{4\pi}\ln(1-\xi\cdot\eta) + \frac{1}{4\pi} - \frac{1}{4\pi}\ln(2). \]

Proof. See tutorials (Exercise 10.4).
By applying the second Green surface theorem we see that
\[ k(k+1)\int_\Omega G(\Delta^*;\xi,\eta)\,Y_k(\eta)\,d\omega(\eta) = -(1-\delta_{0,k})\,Y_k(\xi). \]
Therefore, we can determine the spectral representation of the Green function as the following bilinear expansion for $-1\le\xi\cdot\eta<1$:
\[ G(\Delta^*;\xi,\eta) = -\sum_{n=1}^{\infty}\sum_{k=-n}^{n}\frac{1}{n(n+1)}\,Y_{n,k}(\xi)Y_{n,k}(\eta) = -\sum_{n=1}^{\infty}\frac{2n+1}{4\pi}\,\frac{1}{n(n+1)}\,P_n(\xi\cdot\eta). \]
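The bilinear expansion can be compared with the closed form of Lemma 4.43 numerically (a sketch; numpy assumed, the slowly decaying series is summed to high order so that the tail is below the tolerance):

```python
import numpy as np

# partial sum of G = -sum_{n>=1} (2n+1)/(4 pi n(n+1)) P_n(t) vs. the closed form
N, t = 300_000, -0.3
n = np.arange(1, N + 1)
coeffs = np.zeros(N + 1)
coeffs[1:] = -(2 * n + 1) / (4 * np.pi * n * (n + 1))
series = np.polynomial.legendre.legval(t, coeffs)
closed = (np.log(1 - t) + 1 - np.log(2.0)) / (4 * np.pi)
assert abs(series - closed) < 2e-3
```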
Theorem 4.44 (Third Green Surface Theorem for $\nabla^*$)
Let $\xi$ be a fixed point of $\Omega$. Suppose that $F$ is a continuously differentiable function on $\Omega$. Then
\[ F(\xi) = \frac{1}{4\pi}\int_\Omega F(\eta)\,d\omega(\eta) - \int_\Omega \nabla^*_\eta G(\Delta^*;\xi,\eta)\cdot\nabla^*_\eta F(\eta)\,d\omega(\eta). \]

Proof. Let $\xi\in\Omega$ be fixed. For each sufficiently small $\varepsilon>0$ Green's first surface identity (Theorem 4.1) gives us
\[ \int_{\xi\cdot\eta\le 1-\varepsilon}\left(F(\eta)\,\Delta^*_\eta G(\Delta^*;\xi,\eta) + \nabla^*_\eta F(\eta)\cdot\nabla^*_\eta G(\Delta^*;\xi,\eta)\right)d\omega(\eta) = \int_{\xi\cdot\eta=1-\varepsilon}F(\eta)\,\frac{\partial}{\partial\nu(\eta)}G(\Delta^*;\xi,\eta)\,d\sigma(\eta) \]
where $\nu$ is the unit normal to the circle that consists of all points $\eta\in\Omega$ with $\xi\cdot\eta = 1-\varepsilon$. $\nu$ points outward the set of points with $\xi\cdot\eta\le 1-\varepsilon$. Thus,
\[ \nu(\eta) = \frac{1}{\sqrt{1-(\xi\cdot\eta)^2}}\left(\xi - (\xi\cdot\eta)\eta\right). \]
We use the differential equation for Green's function and obtain
\[ \int_{\xi\cdot\eta\le 1-\varepsilon}F(\eta)\,\Delta^*_\eta G(\Delta^*;\xi,\eta)\,d\omega(\eta) = -\frac{1}{4\pi}\int_{\xi\cdot\eta\le 1-\varepsilon}F(\eta)\,d\omega(\eta). \]
Moreover, since $\nabla^*_\eta G(\Delta^*;\xi,\eta) = -\frac{1}{4\pi}\frac{1}{1-\xi\cdot\eta}\left(\xi-(\xi\cdot\eta)\eta\right)$,
\[ \int_{\xi\cdot\eta=1-\varepsilon}F(\eta)\,\frac{\partial}{\partial\nu(\eta)}G(\Delta^*;\xi,\eta)\,d\sigma(\eta) = -\frac{1}{4\pi}\int_{\xi\cdot\eta=1-\varepsilon}F(\eta)\,\frac{\sqrt{1-(1-\varepsilon)^2}}{\varepsilon}\,d\sigma(\eta) \]
\[ = -\frac{1}{4\pi}\,\frac{\sqrt{1-(1-\varepsilon)^2}}{\varepsilon}\;2\pi\sqrt{1-(1-\varepsilon)^2}\;F(\eta_\varepsilon) = -\frac{2-\varepsilon}{2}\,F(\eta_\varepsilon), \]
where we used the mean value theorem with $\eta_\varepsilon\in\{\eta\in\Omega\mid\xi\cdot\eta = 1-\varepsilon\}$. The continuity of $F$ yields that $F(\eta_\varepsilon)\to F(\xi)$ as $\eta_\varepsilon\to\xi$ for $\varepsilon\to 0$ such that
\[ \lim_{\varepsilon\to 0}\int_{\xi\cdot\eta=1-\varepsilon}F(\eta)\,\frac{\partial}{\partial\nu(\eta)}G(\Delta^*;\xi,\eta)\,d\sigma(\eta) = -F(\xi). \]
This gives us the desired result.
Corollary 4.45 (Third Green Surface Theorem for $L^*$)
Under the assumptions of Theorem 4.44
\[ F(\xi) = \frac{1}{4\pi}\int_\Omega F(\eta)\,d\omega(\eta) - \int_\Omega L^*_\eta G(\Delta^*;\xi,\eta)\cdot L^*_\eta F(\eta)\,d\omega(\eta). \]

Proof. We use an analogous Green surface identity for $L^*$. Note that instead of the normal vector $\nu$ the tangential vector
\[ \tau(\eta) = \frac{1}{\sqrt{1-(\xi\cdot\eta)^2}}\,(\xi\wedge\eta) \]
is required. The same reasoning as in Theorem 4.44 is used to prove this corollary.
Theorem 4.46 (Third Green Surface Theorem for $\Delta^*$)
Let $\xi$ be a fixed point of $\Omega$. Suppose that $F$ is a twice continuously differentiable function on $\Omega$. Then
\[ F(\xi) = \frac{1}{4\pi}\int_\Omega F(\eta)\,d\omega(\eta) + \int_\Omega G(\Delta^*;\xi,\eta)\,\Delta^*_\eta F(\eta)\,d\omega(\eta). \]

Proof. From Green's second surface identity (Theorem 4.1) we get for a sufficiently small $\varepsilon>0$ that
\[ \int_{\xi\cdot\eta\le 1-\varepsilon}\left(G(\Delta^*;\xi,\eta)\,\Delta^*_\eta F(\eta) - F(\eta)\,\Delta^*_\eta G(\Delta^*;\xi,\eta)\right)d\omega(\eta) \]
\[ = \int_{\xi\cdot\eta=1-\varepsilon}\left(G(\Delta^*;\xi,\eta)\,\frac{\partial}{\partial\nu(\eta)}F(\eta) - F(\eta)\,\frac{\partial}{\partial\nu(\eta)}G(\Delta^*;\xi,\eta)\right)d\sigma(\eta). \]
We use the defining properties of the Green function. The continuous differentiability of $F$ on $\Omega$ leads us to
\[ \lim_{\varepsilon\to 0}\int_{\xi\cdot\eta=1-\varepsilon}G(\Delta^*;\xi,\eta)\,\frac{\partial}{\partial\nu(\eta)}F(\eta)\,d\sigma(\eta) = 0. \]
Together with the results of the proof of Theorem 4.44 we find the desired result.
For more details see the lecture on potential theory.
Theorem 4.47 (Differential Equation for $\Delta^*$ on $\Omega$)
Let $H\in C(\Omega)$ with
\[ \frac{1}{4\pi}\int_\Omega H(\eta)\,d\omega(\eta) = 0. \]
Let $F\in C^{(2)}(\Omega)$ satisfy the Beltrami differential equation
\[ \Delta^* F = H. \]
Then
\[ F(\xi) = \int_\Omega G(\Delta^*;\xi,\eta)\,H(\eta)\,d\omega(\eta), \quad \xi\in\Omega, \]
and $F$ is uniquely determined with
\[ \frac{1}{4\pi}\int_\Omega F(\eta)\,d\omega(\eta) = 0. \]
Other solutions take the form $\tilde F = F + C$ with a constant $C\in\mathbb{R}$.

Proof. Direct consequence of Theorem 4.46.
4.6 The Hydrogen Atom

The hydrogen atom is the simplest atom we can think of. It consists of an electron with the charge $-e$ and mass $m$ in the Coulomb potential of a nucleus with charge $e$ and mass $M$ where $M\gg m$.
The starting point of the quantum-mechanical discussion of the hydrogen atom is the Hamilton function of an electron with charge $-e$ in the Coulomb potential of a nucleus with charge $e$, i.e.
\[ H(p,x) = \frac{p^2}{2\mu} - \frac{e^2}{4\pi\varepsilon_0|x|}, \quad x\ne 0, \]
where $x$ is the relative distance of the electron and the nucleus and $\mu$ is the reduced mass given by
\[ \mu = \frac{mM}{m+M} \approx m, \]
since $M\gg m$. Applying the substitution rules from before (in this case we only need $p_j\to\frac{\hbar}{i}\frac{\partial}{\partial x_j}$, $j=1,2,3$) we obtain the Hamilton operator
\[ \mathcal{H} = -\frac{\hbar^2}{2m}\Delta - \frac{e^2}{4\pi\varepsilon_0|x|}. \]
Thus, the stationary Hamilton equation is ($x\in\mathbb{R}^3$)
\[ \mathcal{H}\Psi(x) = E\,\Psi(x) \quad\Longleftrightarrow\quad -\frac{\hbar^2}{2m}\Delta_x\Psi(x) - \frac{e^2}{4\pi\varepsilon_0|x|}\Psi(x) = E\,\Psi(x) \]
where $\Psi\in L^2(\mathbb{R}^3)$ with $\|\Psi\|_{L^2(\mathbb{R}^3)} = 1$ and $E<0$ such that we obtain bounded states.
By separation of variables $\Psi(x) = U(r)F(\xi)$, $x = r\xi$, $r = |x|>0$ and $\xi\in\Omega$. We use the representation of the Laplace operator in polar coordinates and get:
\[ -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\Delta^*_\xi\right)U(r)F(\xi) - \frac{e^2}{4\pi\varepsilon_0\,r}\,U(r)F(\xi) - E\,U(r)F(\xi) = 0. \]
The radial derivatives only apply to $U$ and the Beltrami operator applies to $F$ such that
\[ -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r}\right)U(r)\,F(\xi) - \frac{\hbar^2}{2m}\,\frac{U(r)}{r^2}\,\Delta^*_\xi F(\xi) - \frac{e^2}{4\pi\varepsilon_0\,r}\,U(r)F(\xi) - E\,U(r)F(\xi) = 0. \]
Dividing by $U(r)$, $F(\xi)$ and $-\frac{\hbar^2}{2m}$ and multiplying by $r^2$ this is equivalent to
\[ r^2\,\frac{U''(r)}{U(r)} + 2r\,\frac{U'(r)}{U(r)} + \frac{2m}{\hbar^2}\left(\frac{e^2\,r}{4\pi\varepsilon_0} + E\,r^2\right) = -\frac{\Delta^*_\xi F(\xi)}{F(\xi)} \]
for all $r>0$, $\xi\in\Omega$. This can only be true if both sides are equal to a constant $\lambda\in\mathbb{R}$.
Thus, we obtain for the right hand side ($\xi\in\Omega$):
\[ -\frac{\Delta^*_\xi F(\xi)}{F(\xi)} = \lambda \quad\Longleftrightarrow\quad \Delta^*_\xi F(\xi) = -\lambda\,F(\xi). \]
This is the Beltrami differential equation which has a polynomial solution if and only if $\lambda = l(l+1)$, $l\in\mathbb{N}_0$. In this case the solution is a multiple of a spherical harmonic of degree $l$, i.e. $F(\xi) = C_1\,Y_{l,m}(\xi)$, $m = -l,\dots,l$.
For the left hand side we get with $\lambda = l(l+1)$ and after dividing by $r^2$ and multiplying with $U(r)$:
\[ U''(r) + \frac{2}{r}U'(r) + \left(\frac{2m\,e^2}{4\pi\varepsilon_0\,\hbar^2\,r} + \frac{2mE}{\hbar^2} - \frac{l(l+1)}{r^2}\right)U(r) = 0, \quad r>0. \]
To solve this equation we use
\[ P(r) = rU(r), \qquad P'(r) = U(r) + rU'(r), \qquad P''(r) = U'(r) + U'(r) + rU''(r) = 2U'(r) + rU''(r), \]
such that we have ($r>0$)
\[ U''(r) + \frac{2}{r}U'(r) = \frac{P''(r)}{r}. \]
Using the abbreviation $a_B = \frac{4\pi\varepsilon_0\hbar^2}{m\,e^2}$ (Bohr's atomic radius) we find that
\[ P''(r) + \left(\frac{2}{a_B\,r} + \frac{2E\cdot 4\pi\varepsilon_0}{e^2\,a_B} - \frac{l(l+1)}{r^2}\right)P(r) = 0. \]
Next we introduce the Rydberg energy
\[ E_R = \frac{\hbar^2}{2m\,a_B^2} = \frac{m\,e^4}{2\hbar^2(4\pi\varepsilon_0)^2} \]
and multiply the previous equation by $a_B^2$ such that
\[ a_B^2\,P''(r) + \left(\frac{2a_B}{r} + \frac{E}{E_R} - \frac{a_B^2\,l(l+1)}{r^2}\right)P(r) = 0. \]
We introduce dimensionless coordinates by $\rho = r/a_B$, $\tilde P(\rho) = P(r)$ and $\gamma = \sqrt{-E/E_R}$ (observe that $E<0$). This yields us that
\[ \frac{d^2}{d\rho^2}\tilde P(\rho) + \left(\frac{2}{\rho} - \frac{l(l+1)}{\rho^2} - \gamma^2\right)\tilde P(\rho) = 0, \quad \rho>0. \]
To solve this equation we set ($\rho>0$):
\[ \tilde P(\rho) = V(\rho)\,\rho^{l+1}\exp(-\gamma\rho). \]
We obtain an equation for $V$ given by
\[ \frac{d^2}{d\rho^2}V(\rho) + 2\,\frac{d}{d\rho}V(\rho)\left(\frac{l+1}{\rho} - \gamma\right) + V(\rho)\,\frac{2}{\rho}\left(1 - \gamma(l+1)\right) = 0. \]
This equation is multiplied by $\rho$ and we set $y = 2\gamma\rho$, $\tilde V(y) = V(\rho)$ such that
\[ y\,\frac{d^2}{dy^2}\tilde V(y) + (2l+2-y)\,\frac{d}{dy}\tilde V(y) + \left(\frac{1}{\gamma} - l - 1\right)\tilde V(y) = 0. \]
Comparing this equation to the differential equation of the Laguerre polynomials $L^{(\alpha)}_n$ given by ($y>0$)
\[ y\,\frac{d^2}{dy^2}L^{(\alpha)}_n(y) + (\alpha+1-y)\,\frac{d}{dy}L^{(\alpha)}_n(y) + n\,L^{(\alpha)}_n(y) = 0, \]
we see that $\alpha = 2l+1$ and that we have a polynomial solution if and only if
\[ \frac{1}{\gamma} - l - 1 = n_r, \quad n_r\in\mathbb{N}_0. \]
The solution is then given by
\[ \tilde V(y) = C_2\,L^{(2l+1)}_{n_r}(y) = C_2\,L^{(2l+1)}_{n-l-1}(y) \]
with $n = n_r + l + 1 = 1/\gamma$, $n\in\mathbb{N}$. This means that for $1/\gamma$ only values in $\mathbb{N}$ are allowed which gives the discretization of energy by ($n\in\mathbb{N}$)
\[ \frac{1}{\gamma} = n \quad\Longleftrightarrow\quad \frac{E_n}{E_R} = -\gamma^2 \quad\Longleftrightarrow\quad E_n = -\frac{e^2}{a_B\,4\pi\varepsilon_0}\,\frac{1}{2n^2} = -\frac{m\,e^4}{2\hbar^2(4\pi\varepsilon_0)^2\,n^2}. \]
Resubstituting $V$ into $P$, then $P$ into $U$ and finally $U$ into $\Psi$:
\[ U(r) = \frac{1}{r}P(r) = \frac{1}{r}\tilde P(\rho) = \frac{1}{r}V(\rho)\,\rho^{l+1}\exp(-\gamma\rho) = C_2\,\frac{1}{r}\,L^{(2l+1)}_{n-l-1}\!\left(\frac{2r}{na_B}\right)\left(\frac{r}{a_B}\right)^{l+1}\exp\!\left(-\frac{r}{na_B}\right) \]
\[ = \underbrace{C_2\,\frac{1}{a_B}\left(\frac{n}{2}\right)^l}_{=\tilde C_2}\,L^{(2l+1)}_{n-l-1}\!\left(\frac{2r}{na_B}\right)\left(\frac{2r}{na_B}\right)^l\exp\!\left(-\frac{r}{na_B}\right). \]
We get after normalizing $\|\Psi\|_{L^2(\mathbb{R}^3)} = 1$:
\[ \Psi_{n,l,m}(x) = C\,U_{n,l}(r)\,F_{l,m}(\xi) = \sqrt{\frac{1}{a_B^3}\,\frac{(n-l-1)!}{(n+l)!}}\;\frac{2}{n^2}\left(\frac{2r}{na_B}\right)^l L^{(2l+1)}_{n-l-1}\!\left(\frac{2r}{na_B}\right)\exp\!\left(-\frac{r}{na_B}\right)Y_{l,m}(\xi). \]
$n\in\mathbb{N}$ is called the main quantum number, $l = 0,\dots,n-1$ is called the angular momentum quantum number and $m = -l,\dots,l$ (sometimes renumbered to $m = 1,\dots,2l+1$) is called the magnetic quantum number.
For $n = 1$ we get for the energy $E_1$ the well-known ionization energy of hydrogen, i.e.
\[ E_1 = -E_R \approx -13.6\,\mathrm{eV}. \]
Figure 4.5: Radial probability densities $r^2U^2_{n,l}(r)$ for $n = 1,2,3$ and corresponding $l$ (curves labeled $n{=}1,l{=}0$; $n{=}2,l{=}0$; $n{=}2,l{=}1$; $n{=}3,l{=}0$; $n{=}3,l{=}1$; $n{=}3,l{=}2$).
Chapter 5
Vectorial Spherical Harmonics
5.1 Notation for Spherical Vector Fields
Vector fields on the unit sphere $f:\Omega\to\mathbb{R}^3$ can be written in the form
\[ f(\xi) = \sum_{i=1}^{3}F_i(\xi)\,\varepsilon^i, \quad \xi\in\Omega, \]
where the component functions $F_i$ are given by $F_i(\xi) = f(\xi)\cdot\varepsilon^i$ (vector fields will be denoted by lower case letters). First we introduce function spaces for vector fields on the sphere.
$l^2(\Omega)$ denotes the space of all square-integrable vector fields on $\Omega$ which is a Hilbert space in connection with the scalar product
\[ \langle f,g\rangle_{l^2(\Omega)} = \int_\Omega f(\eta)\cdot g(\eta)\,d\omega(\eta), \quad f,g\in l^2(\Omega). \]
The space $c^{(p)}(\Omega)$, $0\le p\le\infty$, consists of all $p$-times continuously differentiable vector fields on the sphere. $c(\Omega) = c^{(0)}(\Omega)$ is a Banach space with respect to the norm
\[ \|f\|_{c(\Omega)} = \sup_{\xi\in\Omega}|f(\xi)|, \quad f\in c(\Omega). \]
We know that
\[ \overline{c(\Omega)}^{\|\cdot\|_{l^2(\Omega)}} = l^2(\Omega). \]
As in the scalar case there is the norm estimate
\[ \|f\|_{l^2(\Omega)} \le \sqrt{4\pi}\,\|f\|_{c(\Omega)}. \]
Any vector field $f\in c(\Omega)$ can be decomposed into its normal and its tangential part:
\[ f(\xi) = f_{\mathrm{nor}}(\xi) + f_{\mathrm{tan}}(\xi) \]
where the normal and tangential field are given by
\[ p_{\mathrm{nor}}f(\xi) = f_{\mathrm{nor}}(\xi) = (f(\xi)\cdot\xi)\,\xi, \quad \xi\in\Omega, \]
\[ p_{\mathrm{tan}}f(\xi) = f_{\mathrm{tan}}(\xi) = f(\xi) - (f(\xi)\cdot\xi)\,\xi, \quad \xi\in\Omega, \]
with the projection operators $p_{\mathrm{nor}}$ and $p_{\mathrm{tan}}$. For $f,g\in c(\Omega)$, $\xi\in\Omega$, holds
\[ f(\xi)\cdot g(\xi) = f_{\mathrm{nor}}(\xi)\cdot g_{\mathrm{nor}}(\xi) + f_{\mathrm{tan}}(\xi)\cdot g_{\mathrm{tan}}(\xi). \]
The space $c(\Omega)$ can also be decomposed accordingly
\[ c_{\mathrm{nor}}(\Omega) = \{f\in c(\Omega)\mid f = p_{\mathrm{nor}}f\}, \qquad c_{\mathrm{tan}}(\Omega) = \{f\in c(\Omega)\mid f = p_{\mathrm{tan}}f\}, \qquad c(\Omega) = c_{\mathrm{nor}}(\Omega)\oplus c_{\mathrm{tan}}(\Omega), \]
as well as
\[ l^2_{\mathrm{nor}}(\Omega) = \overline{c_{\mathrm{nor}}(\Omega)}^{\|\cdot\|_{l^2(\Omega)}}, \qquad l^2_{\mathrm{tan}}(\Omega) = \overline{c_{\mathrm{tan}}(\Omega)}^{\|\cdot\|_{l^2(\Omega)}}, \qquad l^2(\Omega) = l^2_{\mathrm{nor}}(\Omega)\oplus l^2_{\mathrm{tan}}(\Omega). \]
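The pointwise projections are elementary and can be sketched directly (numpy assumed; function names are ours):

```python
import numpy as np

def p_nor(f, xi):
    # normal part at xi: (f . xi) xi
    return (f @ xi) * xi

def p_tan(f, xi):
    # tangential part at xi: f - (f . xi) xi
    return f - (f @ xi) * xi

rng = np.random.default_rng(1)
xi = rng.standard_normal(3); xi /= np.linalg.norm(xi)   # a point on the sphere
f = rng.standard_normal(3)
g = rng.standard_normal(3)

assert np.allclose(p_nor(f, xi) + p_tan(f, xi), f)      # f = f_nor + f_tan
assert abs(p_tan(f, xi) @ xi) < 1e-12                   # tangential part _|_ xi
# f . g = f_nor . g_nor + f_tan . g_tan
assert np.isclose(f @ g, p_nor(f, xi) @ p_nor(g, xi) + p_tan(f, xi) @ p_tan(g, xi))
```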
We remember the surface gradient operator
\[ \nabla^* = \varepsilon^\varphi\,\frac{1}{\sqrt{1-t^2}}\,\frac{\partial}{\partial\varphi} + \varepsilon^t\,\sqrt{1-t^2}\,\frac{\partial}{\partial t}, \]
which contains the tangential derivatives of the gradient, and the surface curl gradient operator ($L^*_\xi F(\xi) = \xi\wedge\nabla^*_\xi F(\xi)$ for $F\in C^{(1)}(\Omega)$)
\[ L^* = -\varepsilon^\varphi\,\sqrt{1-t^2}\,\frac{\partial}{\partial t} + \varepsilon^t\,\frac{1}{\sqrt{1-t^2}}\,\frac{\partial}{\partial\varphi}, \]
where $L^*_\xi F$ is a tangential vector field perpendicular to $\nabla^*_\xi F$, i.e. for $\xi\in\Omega$: $\nabla^*_\xi F(\xi)\cdot L^*_\xi F(\xi) = 0$.
We now also define the surface divergence
\[ \nabla^*_\xi\cdot f(\xi) = \sum_{i=1}^{3}\nabla^*_\xi F_i(\xi)\cdot\varepsilon^i \]
and the surface curl
\[ L^*_\xi\cdot f(\xi) = \sum_{i=1}^{3}L^*_\xi F_i(\xi)\cdot\varepsilon^i \]
where $f = (F_1,F_2,F_3)^T\in c^{(1)}(\Omega)$. Note that the surface curl represents a scalar-valued function on the unit sphere:
\[ L^*_\xi\cdot f(\xi) = \nabla^*_\xi\cdot(f(\xi)\wedge\xi). \]
We already know that for $F\in C^{(2)}(\Omega)$:
\[ \nabla^*_\xi\cdot\nabla^*_\xi F(\xi) = \Delta^*_\xi F(\xi) \quad\text{and}\quad L^*_\xi\cdot L^*_\xi F(\xi) = \Delta^*_\xi F(\xi). \]
Chapter 5. Vectorial Spherical Harmonics 109
Moreover, from Exercise 11.1 we obtain that
L

F() = 0 and

F() = 0 .
The application of these differential operators to zonal functions yields for F ∈ C^(1)[−1, 1]
that

    ∇*_ξ F(ξ · η) = F′(ξ · η)(η − (ξ · η)ξ),
    L*_ξ F(ξ · η) = F′(ξ · η)(ξ ∧ η),

and for F ∈ C^(2)[−1, 1]

    Δ*_ξ F(ξ · η) = −2(ξ · η) F′(ξ · η) + (1 − (ξ · η)²) F″(ξ · η).
The following lemmas are mostly results from vector analysis.

Lemma 5.1
The tangential field of f vanishes, i.e. f_tan(ξ) = 0 for all ξ ∈ Ω, if and only if f(ξ) · τ(ξ) = 0
for every unit vector τ(ξ) that is perpendicular to ξ.

Proof. See Exercise 11.2.
Lemma 5.2
Let f ∈ c(Ω) with

    ∫_C f(ξ) · dσ(ξ) = 0

for every curve C ⊂ Ω. Then

    f_tan(ξ) = 0

for all ξ ∈ Ω.

Proof. Choose any point ξ₀ ∈ Ω. Let τ₀ be a unit vector with τ₀ · ξ₀ = 0. Then, there
is a curve C ⊂ Ω which passes through ξ₀ and has τ₀ as tangent vector at ξ₀. Let C_{ξ₀} be
any subset of C containing ξ₀. Then,

    ∫_{C_{ξ₀}} f(ξ) · dσ(ξ) = 0.

Now we let the length of C_{ξ₀} tend to zero and find that τ₀ · f(ξ₀) = 0. The previous lemma
yields that then f_tan(ξ₀) = 0. Since ξ₀ has been chosen arbitrarily, we have f_tan = 0.
Lemma 5.3
Let F ∈ C^(1)(Ω) with ∇*_ξ F(ξ) = 0 for all ξ ∈ Ω. Then F is constant, and conversely.
Let F ∈ C^(1)(Ω) with L*_ξ F(ξ) = 0 for all ξ ∈ Ω. Then F is constant, and conversely.

Proof. See Exercise 11.3.
Lemma 5.4
Let f ∈ c(Ω) be a tangential vector field with

    ∫_C f(ξ) · dσ(ξ) = 0

for every closed curve C ⊂ Ω. Then, there is a scalar field P on Ω such that

    f(ξ) = ∇*_ξ P(ξ),

where P ∈ C^(1)(Ω) and is unique up to a constant.

Proof. Take an arbitrary, but fixed ξ₀ ∈ Ω. We set

    P(ξ) = ∫ f(η) · dσ(η)

with the integral along any curve that starts at ξ₀ and ends at ξ. Then, for η₀, η₁ ∈ Ω
and any curve C ⊂ Ω starting at η₀ and ending at η₁ holds

    P(η₁) − P(η₀) = ∫_C f(η) · dσ(η)  and
    P(η₁) − P(η₀) = ∫_C ∇*_η P(η) · dσ(η),

since the surface gradient acts like an ordinary gradient in ℝ³ when integrating along lines
on Ω. Therefore, we combine the two equations to

    ∫_C (f(η) − ∇*_η P(η)) · dσ(η) = 0

for any curve C ⊂ Ω connecting η₀ and η₁. By Lemma 5.2 we find that

    f(ξ) − ∇*_ξ P(ξ) = 0 ,  ξ ∈ Ω.

The proof that P is continuously differentiable can be found e.g. in [2] or in lectures on
vector analysis. One basic idea is to take P constant on each straight line passing through
Ω in the normal direction.
For proving uniqueness we consider ∇*_ξ P₁(ξ) = ∇*_ξ P₂(ξ), ξ ∈ Ω, which implies that
∇*_ξ (P₁ − P₂)(ξ) = 0, i.e. due to Lemma 5.3, P₁ − P₂ = const.
Theorem 5.5
Let f ∈ c^(1)(Ω) be a tangential vector field. Then, L*_ξ · f(ξ) = 0, ξ ∈ Ω, if and only if
there is a scalar field P such that

    f(ξ) = ∇*_ξ P(ξ) ,  ξ ∈ Ω,

and P (called potential function for f) is unique up to an additive constant.
Similarly, ∇*_ξ · f(ξ) = 0, ξ ∈ Ω, if and only if there is a scalar field S such that

    f(ξ) = L*_ξ S(ξ) ,  ξ ∈ Ω,

and S (called stream function for f) is unique up to an additive constant.

Proof. The condition f = ∇* P implies that L*·f = L*·∇* P = 0, and f = L* S implies
that ∇*·f = ∇*·L* S = 0.
Conversely, assume that L*_ξ · f(ξ) = 0, ξ ∈ Ω. Then we can use the surface theorem of
Stokes to obtain that

    ∫_C f(ξ) · dσ(ξ) = 0

for every closed curve C ⊂ Ω. By Lemma 5.4 there exists a scalar field P such that
f = ∇* P where P is unique up to a constant.
Suppose now that ∇*_ξ · f(ξ) = 0, ξ ∈ Ω. Then for ξ ∈ Ω (note that f = f_tan is a tangential
field):

    L*_ξ · (ξ ∧ f(ξ)) = (ξ ∧ ∇*_ξ) · (ξ ∧ f(ξ))
    = (ξ · ξ)(∇*_ξ · f(ξ)) − (ξ · f(ξ))(∇*_ξ · ξ)
    = ∇*_ξ · f(ξ) − 2(ξ · f(ξ))
    = ∇*_ξ · f(ξ) − 2(ξ · f_tan(ξ))
    = ∇*_ξ · f(ξ),

i.e. L*_ξ · (ξ ∧ f(ξ)) = 0, since ∇*_ξ · f(ξ) = 0. Thus, as before there exists a scalar field S
(unique up to a constant) such that

    ξ ∧ f(ξ) = ∇*_ξ S(ξ) ,  ξ ∈ Ω.

This is equivalent to

    ξ ∧ (ξ ∧ f(ξ)) = (ξ ∧ ∇*_ξ) S(ξ) ,  ξ ∈ Ω,

or f = −L*_ξ S on Ω, because

    ξ ∧ (ξ ∧ f(ξ)) = (ξ · f(ξ)) ξ − (ξ · ξ) f(ξ)
    = (ξ · f_tan(ξ)) ξ − f(ξ)
    = −f(ξ).

Replacing S by −S (which is again unique up to an additive constant) yields the asserted
representation f = L*_ξ S.

Theorem 5.6
Let f ∈ c^(1)(Ω) be a tangential vector field such that for all ξ ∈ Ω

    ∇*_ξ · f(ξ) = 0  and  L*_ξ · f(ξ) = 0.

Then f = 0 on Ω.

Proof. See Exercise 11.4.
5.2 Definition of Vector Spherical Harmonics

Definition 5.7
For ξ ∈ Ω and F ∈ C^(0ᵢ)(Ω) we define the operators o^(i) : C^(0ᵢ)(Ω) → c(Ω), i = 1, 2, 3, by

    o^(1)_ξ F(ξ) = ξ F(ξ) ,
    o^(2)_ξ F(ξ) = ∇*_ξ F(ξ) ,
    o^(3)_ξ F(ξ) = L*_ξ F(ξ) ,

where we used the abbreviation

    0ᵢ = 0 if i = 1,  0ᵢ = 1 if i = 2, 3.

It is clear that o^(1)F ∈ c_nor(Ω) and o^(2)F, o^(3)F ∈ c_tan(Ω).

Remark 5.8
Let F_η ∈ C^(0ᵢ)(Ω) be an η-zonal function, i.e. F_η(ξ) = F(ξ · η). Then

    o^(1)_ξ F_η(ξ) = ξ F(ξ · η),
    o^(2)_ξ F_η(ξ) = ∇*_ξ F(ξ · η) = F′(ξ · η)(η − (ξ · η)ξ),
    o^(3)_ξ F_η(ξ) = L*_ξ F(ξ · η) = F′(ξ · η)(ξ ∧ η).
By Green's integral formulas we can introduce the adjoint operators.

Definition 5.9
For f ∈ c^(0ᵢ)(Ω) and G ∈ C^(0ᵢ)(Ω) (i = 1, 2, 3) we have that

    ⟨o^(i) G, f⟩_{l²(Ω)} = ⟨G, O^(i) f⟩_{L²(Ω)} .

Therefore, for f ∈ c^(0ᵢ)(Ω) and ξ ∈ Ω:

    O^(1)_ξ f(ξ) = ξ · p_nor f(ξ) ,
    O^(2)_ξ f(ξ) = −∇*_ξ · p_tan f(ξ) ,
    O^(3)_ξ f(ξ) = −L*_ξ · p_tan f(ξ) .
Lemma 5.10
Let F ∈ C^(2)(Ω). Then:
(i) If i ≠ j, i, j ∈ {1, 2, 3}, then O^(i)_ξ o^(j)_ξ F(ξ) = 0, ξ ∈ Ω.
(ii) O^(i)_ξ o^(i)_ξ F(ξ) = F(ξ) for i = 1, and O^(i)_ξ o^(i)_ξ F(ξ) = −Δ*_ξ F(ξ) for i = 2, 3.

Proof. Known properties of the differential operators ∇*, L*.
Definition 5.11
For any Y_n ∈ Harm_n(Ω) the vector field

    y^(i)_n = o^(i) Y_n ,  n ≥ 0ᵢ ,  i ∈ {1, 2, 3},

is called a vector spherical harmonic of degree n and type i.
By harm^(i)_n we denote the space of all vector spherical harmonics of degree n and type i.
We also set the spaces

    harm_0 = harm^(1)_0 ,
    harm_n = ⊕_{i=1}^{3} harm^(i)_n ,  n ≥ 1.

Remark 5.12
We have ξ ∧ y^(1)_n(ξ) = 0, ξ · y^(2)_n(ξ) = 0 and ξ · y^(3)_n(ξ) = 0, i.e. type 1 yields normal
fields, types 2 and 3 yield tangential fields.
Theorem 5.13
If {Y_{n,k}}_{n∈ℕ₀, k=−n,…,n} is an L²(Ω)-orthonormal system of scalar spherical harmonics, then
the system of

    y^(i)_{n,k}(ξ) = (1/√(μ^(i)_n)) o^(i)_ξ Y_{n,k}(ξ) ,  i = 1, 2, 3;  n ≥ 0ᵢ;  k = −n, …, n;

with

    μ^(i)_n = ‖O^(i) o^(i) Y_{n,k}‖_{L²(Ω)} = 1 for i = 1,  μ^(i)_n = n(n + 1) for i = 2, 3

(the eigenvalue of −Δ*), forms an l²(Ω)-orthonormal system of vector spherical harmonics, i.e.

    ∫_Ω y^(i)_{n,k}(ξ) · y^(j)_{m,l}(ξ) dω(ξ) = δ_{n,m} δ_{k,l} δ_{i,j} .

Proof. This follows directly from the orthogonality of the scalar spherical harmonics and
the properties of the operators o^(i), O^(i) with i = 1, 2, 3.
5.3 The Helmholtz Decomposition Theorem

Theorem 5.14 (Helmholtz Decomposition Theorem)
Let f ∈ c^(1)(Ω). Then there exist uniquely determined functions F₁ ∈ C^(1)(Ω) and
F₂, F₃ ∈ C^(2)(Ω) satisfying

    ∫_Ω F_i(ξ) dω(ξ) = 0 ,  i = 2, 3,

such that

    f(ξ) = Σ_{i=1}^{3} o^(i)_ξ F_i(ξ) = ξ F₁(ξ) + ∇*_ξ F₂(ξ) + L*_ξ F₃(ξ) ,  ξ ∈ Ω.

The functions F_i are given by

    F₁(ξ) = O^(1)_ξ f(ξ) ,  ξ ∈ Ω,
    F₂(ξ) = ∫_Ω G(Δ*; ξ, η) O^(2)_η f(η) dω(η) ,  ξ ∈ Ω,
    F₃(ξ) = ∫_Ω G(Δ*; ξ, η) O^(3)_η f(η) dω(η) ,  ξ ∈ Ω,

where G(Δ*; ·, ·) denotes Green's function with respect to the Beltrami operator.
Proof. Any vector field f ∈ c^(1)(Ω) can be written as

    f = f_nor + f_tan ,

with

    f_nor = p_nor f ∈ c^(1)_nor(Ω) ,
    f_tan = p_tan f ∈ c^(1)_tan(Ω) .

Clearly, f_nor(ξ) = o^(1)_ξ F₁(ξ) with F₁(ξ) = O^(1)_ξ f(ξ) for ξ ∈ Ω.
For the tangential field f_tan the Stokes theorem for Ω (closed surface, no boundary) yields
that

    ∫_Ω L*_η · f_tan(η) dω(η) = 0.

Thus, L* · f_tan is suitable as a right hand side of the Beltrami differential equation and by
Theorem 4.47 we find F₃ ∈ C^(2)(Ω) such that

    Δ* F₃ = L* · f_tan  and  ∫_Ω F₃(η) dω(η) = 0 .

In other words,

    L* · L* F₃ = L* · f_tan  or  L* · (f_tan − L* F₃) = 0.

Note that the difference f_tan − L* F₃ is still a tangential vector field. Therefore, we can
now use Theorem 5.5 which gives us a scalar field F₂ ∈ C^(2)(Ω) with

    f_tan − L* F₃ = ∇* F₂  or  f_tan = ∇* F₂ + L* F₃ .

If ∫_Ω F₂(η) dω(η) ≠ 0, we replace F₂ by F̃₂ = F₂ − (1/(4π)) ∫_Ω F₂(η) dω(η). Then holds that

    ∇* F₂ = ∇* F̃₂  and  ∫_Ω F̃₂(η) dω(η) = 0 .

Assume there exists another triple G_i, i = 1, 2, 3, such that

    f(ξ) = o^(1)_ξ F₁(ξ) + o^(2)_ξ F₂(ξ) + o^(3)_ξ F₃(ξ) ,
    f(ξ) = o^(1)_ξ G₁(ξ) + o^(2)_ξ G₂(ξ) + o^(3)_ξ G₃(ξ) .

Then, it follows that F₁ = O^(1) f = G₁, i.e. F₁ is uniquely defined. Thus,

    o^(2)_ξ F₂(ξ) + o^(3)_ξ F₃(ξ) = o^(2)_ξ G₂(ξ) + o^(3)_ξ G₃(ξ) .

We now apply O^(2) and O^(3) which yield

    Δ* F₂ = Δ* G₂ ,  Δ* F₃ = Δ* G₃ .

Hence, we find uniqueness up to a constant, and the normalization conditions on F₂ and
F₃ imply that F₂ = G₂ and F₃ = G₃.
The specific representation of F₂, F₃ involving integrals with Green's function G(Δ*; ξ, η)
follows directly from Theorem 4.47.
Remark 5.15
A vector field of the form

    ξ F₁(ξ) + ∇*_ξ F₂(ξ) ,  ξ ∈ Ω,

is called spheroidal, one of the form

    L*_ξ F₃(ξ) ,  ξ ∈ Ω,

is said to be toroidal. Thus, the Helmholtz decomposition theorem represents the decom-
position of a c^(1)-vector field into its spheroidal and its toroidal parts. It also implies an
orthogonal decomposition of the space c^(∞)(Ω), i.e.

    c^(∞)(Ω) = c^(∞)_(1)(Ω) ⊕ c^(∞)_(2)(Ω) ⊕ c^(∞)_(3)(Ω),

where

    c^(∞)_(1)(Ω) = c^(∞)_nor(Ω) ,
    c^(∞)_(2)(Ω) = { f ∈ c^(∞)(Ω) | O^(1) f = O^(3) f = 0 } ,
    c^(∞)_(3)(Ω) = { f ∈ c^(∞)(Ω) | O^(1) f = O^(2) f = 0 } .

These definitions can also be extended to c^(k)(Ω), k ∈ ℕ, and to l²(Ω), i.e.

    l²_(i)(Ω) = closure of { o^(i) F | F ∈ C^(∞)(Ω) } with respect to ‖·‖_{l²(Ω)} .

Therefore, we find the orthogonal decompositions

    l²(Ω) = l²_nor(Ω) ⊕ l²_tan(Ω) ,
    l²_tan(Ω) = l²_(2)(Ω) ⊕ l²_(3)(Ω) .
5.4 Closure and Completeness of Vector Spherical Harmonics

Definition 5.16
A vector field h_n : ℝ³ → ℝ³ is called a homogeneous harmonic vector polynomial of degree
n if h_n · ε^i ∈ Harm_n(ℝ³) for every i ∈ {1, 2, 3}.

Definition 5.17
Let n ∈ ℕ₀, H_n ∈ Hom_n(ℝ³) and i ∈ {1, 2, 3}. Then the operators o^(i)_n are defined by

    o^(1)_n H_n(x) = ((2n + 1) x − |x|² ∇_x) H_n(x) ,
    o^(2)_n H_n(x) = ∇_x H_n(x) ,
    o^(3)_n H_n(x) = x ∧ ∇_x H_n(x) .

Lemma 5.18
Let H_n ∈ Harm_n(ℝ³). Then o^(i)_n H_n is a homogeneous harmonic vector polynomial of
degree

    n + 1 if i = 1,  n − 1 if i = 2,  n if i = 3.

(If the degree is < 0, then o^(i)_n H_n = 0.)
Proof. The cases i = 2, 3 are straightforward, thus we can restrict ourselves to the case
i = 1. Observe that the components of o^(1)_n H_n are homogeneous of degree n + 1. For
j = 1, 2, 3 we have that

    Δ_x ((o^(1)_n H_n(x)) · ε^j) = Δ_x ((2n + 1) x_j H_n(x) − |x|² ∂H_n(x)/∂x_j)
    = 2(2n + 1) ∂H_n(x)/∂x_j − (Δ_x |x|²) ∂H_n(x)/∂x_j − 2 (∇_x |x|²) · ∇_x (∂H_n(x)/∂x_j)
    = 2(2n + 1) ∂H_n(x)/∂x_j − 6 ∂H_n(x)/∂x_j − 4 x · ∇_x (∂H_n(x)/∂x_j)
    = (4n − 4) ∂H_n(x)/∂x_j − 4(n − 1) ∂H_n(x)/∂x_j = 0,

since

    x · ∇_x (∂H_n(x)/∂x_j) = (n − 1) ∂H_n(x)/∂x_j

by Euler's identity, and ∂H_n/∂x_j is a homogeneous harmonic polynomial of degree n − 1.
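The vanishing Laplacian in the case i = 1 can be illustrated numerically for one concrete (hypothetical) choice, H₂(x) = x₁x₂ ∈ Harm₂(ℝ³). The components of o^(1)₂ H₂(x) = 5x H₂(x) − |x|²∇H₂(x) are polynomials of degree 3, and the central second difference is exact (up to rounding) for cubics, so a finite-difference Laplacian should vanish:

```python
def o1_component(j, x):
    # j-th component of o_2^(1) H_2 for the sample choice H_2(x) = x1*x2
    x1, x2, x3 = x
    H = x1 * x2
    grad = [x2, x1, 0.0]
    r2 = x1 * x1 + x2 * x2 + x3 * x3
    return 5.0 * x[j] * H - r2 * grad[j]

def laplacian(f, x, h=1e-2):
    # central second differences; exact for polynomials of degree <= 3
    s = 0.0
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        s += (f(xp) - 2.0 * f(x) + f(xm)) / (h * h)
    return s

x = [0.3, -1.2, 0.7]
for j in range(3):
    print(laplacian(lambda y: o1_component(j, y), x))  # each ~ 0 (rounding only)
```

This reproduces, for a single example, the statement of Lemma 5.18 that all three components of o^(1)ₙ Hₙ are harmonic.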
Lemma 5.19
The space of functions

    span_{n∈ℕ₀, i=1,2,3} Harm_n(Ω) ε^i = span { Y_{n,k} ε^i | n ∈ ℕ₀, k = −n, …, n, i = 1, 2, 3 }

is closed in l²(Ω) with respect to ‖·‖_{l²(Ω)}.
Proof. Let f ∈ l²(Ω). Then there exist F₁, F₂, F₃ ∈ L²(Ω) such that

    f = F₁ ε¹ + F₂ ε² + F₃ ε³ .

Let ε > 0, then there exist coefficients a^(i)_{n,k} such that

    ‖F_i − Σ_{n=0}^{N_i} Σ_{k=−n}^{n} a^(i)_{n,k} Y_{n,k}‖²_{L²(Ω)} < ε/3 ,  i = 1, 2, 3.

Set N = max{N₁, N₂, N₃} and set

    a^(i)_{n,k} = 0 for N_i < n ≤ N, −n ≤ k ≤ n, i = 1, 2, 3.

It is clear that

    g = Σ_{i=1}^{3} Σ_{n=0}^{N} Σ_{k=−n}^{n} a^(i)_{n,k} Y_{n,k} ε^i ∈ span_{n∈ℕ₀, i=1,2,3} Harm_n(Ω) ε^i

and, with G_i denoting the components of g,

    ‖f − g‖²_{l²(Ω)} = ∫_Ω Σ_{i=1}^{3} (F_i(ξ) − G_i(ξ))² dω(ξ)
    = Σ_{i=1}^{3} ‖F_i − G_i‖²_{L²(Ω)} < ε/3 + ε/3 + ε/3 = ε.
As in the scalar case we can use closure to find completeness in l²(Ω).
Using the representation of the gradient in polar coordinates, i.e.

    ∇_x = ξ ∂/∂r + (1/r) ∇*_ξ ,

we obtain

    (2n + 1) x − |x|² ∇_x = (2n + 1) r ξ − r² (ξ ∂/∂r + (1/r) ∇*_ξ)
    = ((2n + 1) r − r² ∂/∂r) ξ − r ∇*_ξ .

Now let Y_n ∈ Harm_n(Ω) and x = rξ, r > 0:

    o^(1)_n (r^n Y_n(ξ)) = (n + 1) r^{n+1} ξ Y_n(ξ) − r^{n+1} ∇*_ξ Y_n(ξ) ,
    o^(2)_n (r^n Y_n(ξ)) = n r^{n−1} ξ Y_n(ξ) + r^{n−1} ∇*_ξ Y_n(ξ) ,
    o^(3)_n (r^n Y_n(ξ)) = r^n L*_ξ Y_n(ξ) .

Using the definition of the o^(i)-operators and restricting the functions to r = 1 we get
(with H_n(x) = r^n Y_n(ξ)):

    (o^(1)_n H_n(x))|_{r=1} = (n + 1) o^(1)_ξ Y_n(ξ) − o^(2)_ξ Y_n(ξ) ,
    (o^(2)_n H_n(x))|_{r=1} = n o^(1)_ξ Y_n(ξ) + o^(2)_ξ Y_n(ξ) ,
    (o^(3)_n H_n(x))|_{r=1} = o^(3)_ξ Y_n(ξ)

for ξ ∈ Ω. Rearranging these equations we find that

    o^(1)_ξ Y_n(ξ) = (1/(2n + 1)) (o^(1)_n H_n(x))|_{r=1} + (1/(2n + 1)) (o^(2)_n H_n(x))|_{r=1} ,
    o^(2)_ξ Y_n(ξ) = (−n/(2n + 1)) (o^(1)_n H_n(x))|_{r=1} + ((n + 1)/(2n + 1)) (o^(2)_n H_n(x))|_{r=1} ,
    o^(3)_ξ Y_n(ξ) = (o^(3)_n H_n(x))|_{r=1} .

This shows the following lemma.
Lemma 5.20

    harm^(i)_n(Ω) ⊂ ⊕_{j=1}^{3} Harm_{n−1}(Ω) ε^j ⊕ ⊕_{j=1}^{3} Harm_{n+1}(Ω) ε^j ,  i = 1, 2,
    harm^(3)_n(Ω) ⊂ ⊕_{j=1}^{3} Harm_n(Ω) ε^j ,

and summarizing

    harm_n(Ω) ⊂ ⊕_{m=n−1}^{n+1} ⊕_{i=1}^{3} Harm_m(Ω) ε^i .
Altogether we obtain the following theorem.

Theorem 5.21
The l²(Ω)-orthonormal system {y^(i)_{n,k}}_{i=1,2,3; n≥0ᵢ; k=−n,…,n} is closed in l²(Ω) and it is com-
plete in l²(Ω) with respect to ⟨·, ·⟩_{l²(Ω)}.
This means that for every f ∈ l²(Ω) and ε > 0 there exists N ∈ ℕ such that

    ‖f − Σ_{i=1}^{3} Σ_{n=0ᵢ}^{N} Σ_{k=−n}^{n} ⟨f, y^(i)_{n,k}⟩_{l²(Ω)} y^(i)_{n,k}‖_{l²(Ω)} < ε.
Definition 5.22
A second system of vector spherical harmonics is given by

    ỹ^(1)_{n,k} = √((n + 1)/(2n + 1)) y^(1)_{n,k} − √(n/(2n + 1)) y^(2)_{n,k} ,
    ỹ^(2)_{n,k} = √(n/(2n + 1)) y^(1)_{n,k} + √((n + 1)/(2n + 1)) y^(2)_{n,k} ,
    ỹ^(3)_{n,k} = y^(3)_{n,k} .
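The change of basis in Definition 5.22 mixes types 1 and 2 through a 2×2 matrix that is orthogonal, since (n+1)/(2n+1) + n/(2n+1) = 1; this is why the ỹ system is again orthonormal (cf. Theorem 5.23). A quick numerical sanity check of that matrix for a few degrees:

```python
import math

for n in range(1, 6):
    a = math.sqrt((n + 1) / (2 * n + 1))
    b = math.sqrt(n / (2 * n + 1))
    # rows of the transformation (y1, y2) -> (y1~, y2~)
    r1 = (a, -b)
    r2 = (b, a)
    assert abs(r1[0] ** 2 + r1[1] ** 2 - 1.0) < 1e-14  # unit rows
    assert abs(r2[0] ** 2 + r2[1] ** 2 - 1.0) < 1e-14
    assert abs(r1[0] * r2[0] + r1[1] * r2[1]) < 1e-14  # orthogonal rows
print("type-1/2 mixing matrices are orthogonal")
```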
Theorem 5.23
The l²(Ω)-orthonormal system {ỹ^(i)_{n,k}} is closed and complete in l²(Ω), i.e. for all f ∈ l²(Ω)
and ε > 0 there exists N ∈ ℕ such that

    ‖f − Σ_{i=1}^{3} Σ_{n=0ᵢ}^{N} Σ_{k=−n}^{n} ⟨f, ỹ^(i)_{n,k}⟩_{l²(Ω)} ỹ^(i)_{n,k}‖_{l²(Ω)} < ε.
Remark 5.24
This second system does not strictly separate into normal and tangential vector spherical
harmonics. Note that

    Δ*_ξ ỹ^(1)_{n−1,k}(ξ) = −n(n + 1) ỹ^(1)_{n−1,k}(ξ) ,  n ∈ ℕ, k = −(n − 1), …, n − 1,
    Δ*_ξ ỹ^(2)_{n+1,k}(ξ) = −n(n + 1) ỹ^(2)_{n+1,k}(ξ) ,  n ∈ ℕ₀, k = −(n + 1), …, n + 1,
    Δ*_ξ ỹ^(3)_{n,k}(ξ) = −n(n + 1) ỹ^(3)_{n,k}(ξ) ,  n ∈ ℕ, k = −n, …, n,

where the Beltrami operator is applied componentwise. Thus, this system is a set of
eigenfunctions of the componentwise Beltrami operator. Moreover, we can show that

    ỹ^(1)_{n,k}(ξ) = −(1/√((n + 1)(2n + 1))) (∇_x (r^{−(n+1)} Y_{n,k}(ξ)))|_{r=1} ,
    ỹ^(2)_{n,k}(ξ) = (1/√(n(2n + 1))) (∇_x (r^n Y_{n,k}(ξ)))|_{r=1} ,
    ỹ^(3)_{n,k}(ξ) = (1/√(n(n + 1))) (x ∧ ∇_x (r^{−(n+1)} Y_{n,k}(ξ)))|_{r=1}
                  = (1/√(n(n + 1))) (x ∧ ∇_x (r^n Y_{n,k}(ξ)))|_{r=1} .
Remark 5.25
(i) It is also possible to define a vectorial Beltrami operator in such a way that the
orthonormal system of vector spherical harmonics y^(i)_{n,k} forms a system of eigenfunc-
tions to this operator.
(ii) An addition theorem for vector spherical harmonics can also be constructed, i.e. a
formula of the form

    Σ_{j=−n}^{n} y^(i)_{n,j}(ξ) ⊗ y^(k)_{n,j}(η) = ((2n + 1)/(4π)) p^(i,k)_n(ξ, η)
    = (μ^(i)_n μ^(k)_n)^{−1/2} ((2n + 1)/(4π)) o^(i)_ξ o^(k)_η P_n(ξ · η),

where ⊗ denotes the tensor product. On the right hand side occurs the (vectorial)
Legendre rank-2 tensor kernel p^(i,k)_n of degree n and type (i, k). For further details we
refer to [5, 7].
(iii) The approach to vector spherical harmonics can be extended in a canonical way to
the theory of tensor spherical harmonics which are also generated from the scalar
ones by application of certain operators which map scalar functions to tensor fields.
In the tensorial setup it is possible to determine complete orthonormal systems,
an analogue to the Helmholtz decomposition, a tensorial Beltrami operator with
tensor spherical harmonics as eigenfunctions and also a tensorial addition theorem.
We refer to [5, 7] and the references therein.
Chapter 6
Bessel Functions

6.1 Derivation and Definition of Bessel Functions

We consider vibrations of a membrane which is fixed in a circular frame, i.e. a disk S.
Let Z be the amplitude of the membrane which follows the wave equation

    Δ_x Z = (1/c²) ∂²Z/∂t² in S ,  Z = 0 on ∂S,

where c is the phase velocity, c = √(σ/ρ), with σ the surface tension of the membrane
and ρ its mass-density. We are interested in time harmonic vibrations, i.e.

    Z(x, t) = Z(x) exp(iωt),

which gives

    (Δ_x Z(x)) exp(iωt) = (1/c²)(−ω²) Z(x) exp(iωt)
    ⟺ Δ_x Z(x) + (ω²/c²) Z(x) = 0 in S

and Z = 0 on ∂S. Thus, Z has to fulfill the Helmholtz equation in S, i.e.

    Δ_x Z + k² Z = 0 in S ,  Z = 0 on ∂S,

with k = ω/c being the wave number. The circular geometry leads us to polar coordi-
nates and we use separation of variables, i.e.

    Z(x) = U(r) Φ(φ)  where  x = (r cos(φ), r sin(φ))^T .

We get

    Δ_x Z + k² Z = 0
    ⟺ ∂²Z/∂r² + (1/r) ∂Z/∂r + (1/r²) ∂²Z/∂φ² + k² Z = 0
    ⟺ U″(r) Φ(φ) + (1/r) U′(r) Φ(φ) + (1/r²) U(r) Φ″(φ) + k² U(r) Φ(φ) = 0
    ⟺ r² U″(r)/U(r) + r U′(r)/U(r) + Φ″(φ)/Φ(φ) + r² k² = 0.

This can only be true for all r ∈ [0, R] and for all φ ∈ [0, 2π) if

    Φ″(φ)/Φ(φ) = −λ  and  r² U″(r)/U(r) + r U′(r)/U(r) + r² k² = λ.

The first ordinary differential equation,

    Φ″(φ) + λ Φ(φ) = 0,

possesses a (2π-periodic) solution if and only if λ = n², n ∈ ℕ₀. The solution is given by

    Φ(φ) = A cos(nφ) + B sin(nφ).

Inserting λ = n² into the second ordinary differential equation gives us

    r² U″(r) + r U′(r) + (r² k² − n²) U(r) = 0.

Substitute now x = kr and we find

    x² U″(x) + x U′(x) + (x² − n²) U(x) = 0.
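Anticipating the series solution J_n of this equation derived below (Theorem 6.2): regularity at r = 0 selects U(r) = J_n(kr), and the boundary condition Z = 0 on ∂S then forces kR to be a zero of J_n, so the drum's angular eigenfrequencies are ω_{n,m} = c α_{n,m}/R with α_{n,m} the m-th positive zero of J_n. A numerical sketch with hypothetical values R = c = 1 (locating the zeros by bisection):

```python
import math

def besselj_int(n, x, terms=40):
    # truncated power series solution of the ODE above (cf. Theorem 6.2)
    return sum((-1.0) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2.0) ** (n + 2 * k) for k in range(terms))

def bisect_zero(f, a, b, tol=1e-12):
    # plain bisection; requires a sign change of f on [a, b]
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

R, c = 1.0, 1.0  # hypothetical radius and phase velocity
a01 = bisect_zero(lambda x: besselj_int(0, x), 2.0, 3.0)  # first zero of J_0, ~2.4048
a11 = bisect_zero(lambda x: besselj_int(1, x), 3.0, 4.5)  # first zero of J_1, ~3.8317
print(c * a01 / R, c * a11 / R)  # two lowest angular eigenfrequencies of the drum
```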
Definition 6.1
The equation

    (z² d²/dz² + z d/dz + (z² − ν²)) Y(z) = 0

with z ∈ ℂ, ν ∈ ℂ and, without loss of generality, Re(ν) ≥ 0, is called Bessel's differential
equation.
For the solution we choose the following approach with a₀ ≠ 0:

    Y(z) = z^α Σ_{k=0}^{∞} a_k z^k = Σ_{k=0}^{∞} a_k z^{k+α},

which gives us

    0 = (z² d²/dz² + z d/dz + (z² − ν²)) Y(z)
    = Σ_{k=0}^{∞} a_k ((k + α)(k + α − 1) z^{k+α} + (k + α) z^{k+α} + (z² − ν²) z^{k+α})
    = Σ_{k=0}^{∞} a_k ((k + α)² − ν²) z^{k+α} + Σ_{k=0}^{∞} a_k z^{k+α+2}
    = Σ_{k=0}^{∞} a_k ((k + α)² − ν²) z^{k+α} + Σ_{k=2}^{∞} a_{k−2} z^{k+α} .

Thus, we get by comparing coefficients for k = 0:

    a₀ (α² − ν²) = 0  ⟹  α² = ν²  ⟹  α = ±ν ,

and for k = 1 (using α² = ν²):

    a₁ (1 + α)² − a₁ ν² = 0
    ⟺ a₁ α² + 2 a₁ α + a₁ − a₁ ν² = 0
    ⟺ a₁ (1 + 2α) = 0.
Now we investigate two cases.
Case 1: If α = ν, then a₁(1 + 2ν) = 0 leads to a₁ = 0 since Re(ν) ≥ 0. For k > 1:

    a_k ((k + ν)² − ν²) + a_{k−2} = 0
    ⟺ a_k = −a_{k−2} / (k(k + 2ν))
    ⟹ a_{2l+1} = 0 for l ∈ ℕ,
    a_{2l} = (−1)^l a₀ / (2^{2l} l! (ν + 1)(ν + 2)⋯(ν + l)) for l ∈ ℕ.

The freedom of choice for a₀ remains. We set a₀ = 1/(2^ν Γ(ν + 1)) and use the properties
of the Gamma function.
Theorem 6.2
For z ∈ ℂ \ ℝ_{<0} and ν ∈ ℂ with Re(ν) ≥ 0 the Bessel function of the first kind with index ν,

    J_ν(z) = Σ_{k=0}^{∞} ((−1)^k / (k! Γ(ν + k + 1))) (z/2)^{ν+2k} ,

is a solution of Bessel's differential equation. Note that ℝ_{<0} needs to be excluded to
guarantee that z^ν is uniquely determined; J_ν is defined on all of ℂ if ν ∈ ℤ.
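For real arguments the series of Theorem 6.2 can be summed directly; since the factorials dominate, a fixed truncation is ample for moderate arguments. A small illustrative sketch (the helper `besselj` and the truncation length are our own choices, not part of the notes):

```python
import math

def besselj(v, x, terms=40):
    # Truncated series from Theorem 6.2:
    # J_v(x) = sum_{k>=0} (-1)^k / (k! Gamma(v+k+1)) * (x/2)^(v+2k)
    return sum((-1.0) ** k / (math.factorial(k) * math.gamma(v + k + 1))
               * (x / 2.0) ** (v + 2 * k) for k in range(terms))

print(besselj(0, 0.0))  # 1.0 (only the k = 0 term survives)
print(besselj(0, 1.0))  # ~ 0.7651976866, the tabulated value of J_0(1)
print(besselj(1, 1.0))  # ~ 0.4400505857, the tabulated value of J_1(1)
```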
Case 2: α = −ν, thus a₁(1 − 2ν) = 0 and

    a_k = −a_{k−2} / (k(k − 2ν)) for all k ≥ 2.

Difficulties occur if k = 2ν, i.e. for ν = 1/2, 1, 3/2, 2, …. To circumvent these difficulties we
set a₁ = 0 and get by the recursion that a_{2l+1} = 0 for l ∈ ℕ₀. We also set a₀ = 1/(2^{−ν} Γ(−ν + 1))
and find:
Theorem 6.3
For z ∈ ℂ \ ℝ_{<0} and ν ∈ ℂ \ ℕ the function

    J_{−ν}(z) = Σ_{k=0}^{∞} ((−1)^k / (k! Γ(−ν + k + 1))) (z/2)^{−ν+2k}

is a solution of Bessel's differential equation.
124 Chapter 6. Bessel Functions
For (v) > 0 holds that lim
z0
J
v
(z) = , i.e. J
v
and J
v
are linearly independent.
Theorem 6.4
For v C with (v) 0 and v N
0
the functions J
v
and J
v
are linearly independent
solutions of Bessels dierential equation. Every solution Y of Bessels dierential equation
with the index v C with (v) 0 and v Z can be written as
Y (z) = AJ
v
(z) + BJ
v
(z) , z C \ R
<0
.
Remark 6.5
We treat now certain special cases, i.e. ν = 1/2:

    J_{1/2}(z) = Σ_{k=0}^{∞} ((−1)^k / (k! Γ(k + 3/2))) (z/2)^{2k+1/2}
    = √(z/2) Σ_{k=0}^{∞} ((−1)^k / (k! Γ(k + 3/2))) (z/2)^{2k}
    = √(z/2) (1/√π) Σ_{k=0}^{∞} ((−1)^k 2^{2k+1} k! / (k! (2k + 1)!)) (z/2)^{2k}
    = √(z/2) (2/√π) Σ_{k=0}^{∞} ((−1)^k / (2k + 1)!) z^{2k}
    = √(2/(πz)) Σ_{k=0}^{∞} ((−1)^k / (2k + 1)!) z^{2k+1}
    = √(2/(πz)) sin(z),

where we have used Γ(k + 3/2) = √π (2k + 1)! / (2^{2k+1} k!). Analogously for ν = −1/2:

    J_{−1/2}(z) = √(2/(πz)) cos(z) ,  z ∈ ℂ \ ℝ_{<0}.
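The two closed forms can be confirmed numerically against the defining series (the helper below is an illustrative sketch; for ν = ±1/2 the Gamma values Γ(k + 3/2) and Γ(k + 1/2) are finite for all k ≥ 0, so the series is directly summable):

```python
import math

def besselj(v, x, terms=40):
    # truncated series from Theorem 6.2 (illustrative helper)
    return sum((-1.0) ** k / (math.factorial(k) * math.gamma(v + k + 1))
               * (x / 2.0) ** (v + 2 * k) for k in range(terms))

for x in (0.5, 1.0, 2.0, 5.0):
    assert abs(besselj(0.5, x) - math.sqrt(2 / (math.pi * x)) * math.sin(x)) < 1e-12
    assert abs(besselj(-0.5, x) - math.sqrt(2 / (math.pi * x)) * math.cos(x)) < 1e-12
print("half-integer closed forms confirmed")
```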
Using a recursion formula which we will prove later we can show the following result by
induction.

Theorem 6.6
For every n ∈ ℕ₀ there exist polynomials F_n ∈ Π_n and G_{n−1} ∈ Π_{n−1},

    F_n(z) = Σ_{k=0}^{n} a_k z^k ,  G_{n−1}(z) = Σ_{k=0}^{n−1} b_k z^k ,

with coefficients a_n, b_{n−1} ≠ 0 (G_{−1} = 0) such that

    J_{n+1/2}(z) = √(2/(πz)) (F_n(1/z) sin(z) − G_{n−1}(1/z) cos(z)) ,
    J_{−n−1/2}(z) = √(2/(πz)) (−1)^n (G_{n−1}(1/z) sin(z) + F_n(1/z) cos(z)) .
Chapter 6. Bessel Functions 125
So far we have only one solution if v N
0
. Since the Gamma function possesses singu-
larities at the points of N
0
the inverse function 1/ vanishes at theses points. Thus, we
also have a solution for all v Z. However,
J
n
(z) = (1)
n
J
n
(z) , n N ,
i.e. J
n
and J
n
are linearly dependent. To nd a second linearly independent solution for
v N
0
we dene rst for v N
N
v
(z) =
J
v
(z) cos(v) J
v
(z)
sin(v)
, z C \ R
<0
.
The functions N
v
are called Neumann functions or Bessel functions of the second kind.
To get a solution for n N
0
we have a look at
N
n
(z) = lim
vn
N
v
(z) , z C \ R
<0
.
By the rule of l'Hospital we get for N_n:

    N_n(z) = (2/π) ln(z/2) J_n(z) − (1/π) Σ_{k=0}^{n−1} ((n − k − 1)!/k!) (z/2)^{2k−n}
    − (1/π) Σ_{k=0}^{∞} ((−1)^k / (k!(k + n)!)) (Γ′(n + k + 1)/(n + k)! + Γ′(k + 1)/k!) (z/2)^{n+2k} ,

for z ∈ ℂ \ ℝ_{<0}.
Theorem 6.7
For all ν ∈ ℂ with Re(ν) ≥ 0 the functions {J_ν, N_ν} form a fundamental system for Bessel's
differential equation.

The so-called Hankel functions (or Bessel functions of the third kind) are also often used:

    H^(1)_ν(z) = J_ν(z) + i N_ν(z) ,
    H^(2)_ν(z) = J_ν(z) − i N_ν(z) ,

where ν ∈ ℂ with Re(ν) ≥ 0 and z ∈ ℂ \ ℝ_{<0}.
Theorem 6.8
For z ∈ ℂ \ ℝ_{<0} and ν ∈ ℂ with Re(ν) > −1/2 we have

    J_ν(z) = (2 (z/2)^ν / (√π Γ(ν + 1/2))) ∫₀¹ (1 − t²)^{ν−1/2} cos(zt) dt

and

    J_ν(z) = (2 (z/2)^ν / (√π Γ(ν + 1/2))) ∫₀^{π/2} (cos(φ))^{2ν} cos(z sin(φ)) dφ
    = (2 (z/2)^ν / (√π Γ(ν + 1/2))) ∫₀^{π/2} (sin(φ))^{2ν} cos(z cos(φ)) dφ .
126 Chapter 6. Bessel Functions
Proof. For (v) >
1
2
we calculate
2
1

0
(1 t
2
)
v
1
2
cos(zt)dt =
1

1
(1 t
2
)
v
1
2
cos(zt)dt
=
1

1
(1 t
2
)
v
1
2
1
2
(exp(izt) + exp(izt))dt
=
1
2
_
_
1

1
(1 t
2
)
v
1
2
exp(izt)dt +
1

1
(1 t
2
)
v
1
2
exp(izt)dt
_
_
=
1
2
_
_
1

1
(1 t
2
)
v
1
2
exp(izt)dt
1

1
(1 t
2
)
v
1
2
exp(izt)dt
_
_
=
1

1
(1 t
2
)
v
1
2
exp(izt)dt
Thus, we can consider the following integral:
1

1
(1 t
2
)
v
1
2
exp(izt)dt =
1

1
(1 t
2
)
v
1
2

k=0
(iz)
k
k!
t
k
dt
=

k=0
(iz)
k
k!
1

1
(1 t
2
)
v
1
2
t
k
dt
. .
=0 for k odd
= 2

k=0
(iz)
2k
(2k)!
1

0
(1 t
2
)
v
1
2
t
2k
dt
Now we substitute = t
2
and obtain
1

1
(1 t
2
)
v
1
2
exp(izt)dt =

k=0
(iz)
2k
(2k)!
1

0
(1 )
v
1
2

k
1
2
d
. .
=B(v+
1
2
,k+
1
2
)
=

k=0
(1)
k
z
2k
(2k)!
(v +
1
2
)(k +
1
2
)
(v + k + 1)
=

(v +
1
2
)

k=0
(1)
k
k!(v + k + 1)
_
z
2
_
2k
=
_
z
2
_
v

(v +
1
2
)J
v
(z)
Chapter 6. Bessel Functions 127
with z C \ R
<0
. This shows the rst formula. The second and the third are obtained
by substituting t = sin(),
dt
d
= cos() (or t = cos() respectively):
1

0
(1 t
2
)
v
1
2
cos(zt)dt =
/2

0
_
(cos())
2
_
v
1
2
cos(z sin()) cos()d
=
/2

0
(cos())
2v
cos(z sin())d .
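The second integral representation of Theorem 6.8 lends itself to a numerical cross-check against the defining series; below, a composite Simpson rule (our own illustrative choice of quadrature and step count) evaluates the φ-integral for a few real arguments:

```python
import math

def besselj_series(v, x, terms=40):
    # truncated series from Theorem 6.2
    return sum((-1.0) ** k / (math.factorial(k) * math.gamma(v + k + 1))
               * (x / 2.0) ** (v + 2 * k) for k in range(terms))

def besselj_integral(v, x, n=2000):
    # Composite Simpson rule (n even) for
    # J_v(x) = 2 (x/2)^v / (sqrt(pi) Gamma(v+1/2)) * int_0^{pi/2} cos(phi)^{2v} cos(x sin(phi)) dphi
    h = (math.pi / 2) / n
    def g(phi):
        return math.cos(phi) ** (2 * v) * math.cos(x * math.sin(phi))
    s = g(0.0) + g(math.pi / 2)
    s += 4 * sum(g((2 * j - 1) * h) for j in range(1, n // 2 + 1))
    s += 2 * sum(g(2 * j * h) for j in range(1, n // 2))
    integral = s * h / 3
    return 2 * (x / 2) ** v / (math.sqrt(math.pi) * math.gamma(v + 0.5)) * integral

for v, x in ((0, 1.0), (1, 2.5), (2.5, 3.0)):
    print(besselj_integral(v, x) - besselj_series(v, x))  # each difference ~ 0
```

Note that the representation also holds for non-integer ν (here ν = 2.5), as long as Re(ν) > −1/2.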
6.2 Some Orthogonality Relations

Let Z_ν be an arbitrary solution of Bessel's differential equation, i.e. Z_ν = J_ν or Z_ν = N_ν
or Z_ν = H^(1)_ν or Z_ν = H^(2)_ν, and

    z² Z″_ν(z) + z Z′_ν(z) + (z² − ν²) Z_ν(z) = 0 ,  z ∈ ℂ \ ℝ_{<0}.

Then the function

    U(z) = √z Z_ν(αz)

fulfills the following ordinary differential equation for z ∈ ℂ \ ℝ_{<0}:

    U″(z) + (α² + (1/4 − ν²)/z²) U(z) = 0.

Analogously, V(z) = √z Z_ν(βz) fulfills

    V″(z) + (β² + (1/4 − ν²)/z²) V(z) = 0 ,  z ∈ ℂ \ ℝ_{<0}.

Thus, we get

    U″V + (α² + (1/4 − ν²)/z²) UV = 0 ,
    UV″ + (β² + (1/4 − ν²)/z²) UV = 0 ,

and therefore, subtracting the two equations,

    (α² − β²) UV = d/dz (UV′ − U′V) .

For α² ≠ β² we can conclude that

    ∫₀ˣ U(z) V(z) dz = (U(x)V′(x) − U′(x)V(x)) / (α² − β²) + C
    ⟺ ∫₀ˣ z Z_ν(αz) Z_ν(βz) dz = x (β Z_ν(αx) Z′_ν(βx) − α Z′_ν(αx) Z_ν(βx)) / (α² − β²) + C.
128 Chapter 6. Bessel Functions
In the limit we obtain by using lHospital
x

0
z(Z
v
(z))
2
dz =
x
2
2
_
(Z

v
(x))
2
Z
v
(x)Z

v
(x)
_

x
2
Z
v
(x)Z

v
(x) + C
In particular for Z
v
= J
v
, v > 0, and , being zeros of J
v
,
2
=
2
:
1

0
xJ
v
(x)J
v
(x)dx =
1

2
(J
v
()J

v
() J

v
()J
v
()) = 0
and
1

0
x(J
v
(x))
2
dx =
1
2
(J

v
())
2
These relations are known as orthogonality relations for J
v
, i.e. the functions {J
v
(
v,m
)}
m
,
where
v,m
denotes the zeros of J
v
in ascending order and x (0, 1), are orthogonal on
the interval (0, 1) with the weight function w(x) = x.
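The orthogonality relation can be observed numerically: locate the first two zeros α, β of J_0 by bisection and evaluate ∫₀¹ x J_0(αx) J_0(βx) dx with a simple trapezoidal rule (the helpers and step counts below are illustrative choices):

```python
import math

def j0(x, terms=40):
    # series for J_0 (illustrative helper)
    return sum((-1.0) ** k / math.factorial(k) ** 2 * (x / 2.0) ** (2 * k)
               for k in range(terms))

def bisect_zero(f, a, b, tol=1e-13):
    # plain bisection; requires a sign change of f on [a, b]
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

alpha = bisect_zero(j0, 2.0, 3.0)  # first zero alpha_{0,1} ~ 2.40483
beta = bisect_zero(j0, 5.0, 6.0)   # second zero alpha_{0,2} ~ 5.52008
n = 4000
h = 1.0 / n
# trapezoidal rule; both endpoint contributions are (numerically) zero
integral = h * sum(i * h * j0(alpha * i * h) * j0(beta * i * h) for i in range(1, n))
print(alpha, beta, integral)  # the integral is close to 0
```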
6.3 Bessel Functions with Integer Index

Theorem 6.9
For z ∈ ℂ and t ∈ ℂ \ {0} we have

    f(z, t) = exp((z/2)(t − 1/t)) = Σ_{n=−∞}^{∞} J_n(z) t^n .
Proof. We consider the Laurent series of f(z, t), which is regular for 0 < ε ≤ |t| ≤ A < ∞,
such that we can write

    f(z, t) = exp((z/2)(t − 1/t)) = Σ_{n=−∞}^{∞} c_n(z) t^n

for 0 < |t| < ∞. By multiplication of the two series

    exp(zt/2) = Σ_{n=0}^{∞} (z/2)^n t^n / n! ,
    exp(−z/(2t)) = Σ_{n=0}^{∞} (−1)^n (z/2)^n t^{−n} / n! ,

and combination of the summands with equal powers of t we find that

    exp((z/2)(t − 1/t)) = exp(zt/2) exp(−z/(2t))
    = (Σ_{n=0}^{∞} (z/2)^n t^n / n!) (Σ_{n=0}^{∞} (−1)^n (z/2)^n t^{−n} / n!)
    = Σ_{n=0}^{∞} Σ_{m=n}^{∞} ((z/2)^m t^m / m!) ((−1)^{m−n} (z/2)^{m−n} t^{−(m−n)} / (m − n)!)
      + Σ_{n=1}^{∞} Σ_{m=0}^{∞} ((z/2)^m t^m / m!) ((−1)^{m+n} (z/2)^{m+n} t^{−(m+n)} / (m + n)!)
    = Σ_{n=0}^{∞} (Σ_{m=n}^{∞} (−1)^{m−n} (z/2)^{2m−n} / (m!(m − n)!)) t^n
      + Σ_{n=1}^{∞} (−1)^n (Σ_{m=0}^{∞} (−1)^m (z/2)^{2m+n} / (m!(m + n)!)) t^{−n}
    = Σ_{n=0}^{∞} (Σ_{m=0}^{∞} (−1)^m (z/2)^{2m+n} / (m!(n + m)!)) t^n + Σ_{n=1}^{∞} (−1)^n J_n(z) t^{−n}
    = Σ_{n=0}^{∞} J_n(z) t^n + Σ_{n=1}^{∞} (−1)^n J_n(z) t^{−n} .

Therefore, we obtain the coefficients

    c_n(z) = J_n(z) ,  n ∈ ℕ₀ ,
    c_{−n}(z) = (−1)^n J_n(z) ,  n ∈ ℕ .

This gives us an expansion of the form

    f(z, t) = exp((z/2)(t − 1/t)) = J_0(z) + Σ_{n=1}^{∞} J_n(z) (t^n + (−1)^n t^{−n})

for 0 < |t| < ∞, or

    f(z, t) = exp((z/2)(t − 1/t)) = Σ_{n=−∞}^{∞} J_n(z) t^n .
Remark 6.10
From the previous theorem we can draw the following conclusions:

1. f(z, t) = f(z, −1/t), i.e.

    Σ_{n=−∞}^{∞} J_n(z) t^n = Σ_{n=−∞}^{∞} J_n(z) (−1)^n t^{−n} = Σ_{n=−∞}^{∞} J_{−n}(z) (−1)^n t^n ,

which shows us by comparison of coefficients that for n ∈ ℕ, z ∈ ℂ \ ℝ_{<0}:

    J_n(z) = (−1)^n J_{−n}(z)  ⟺  J_{−n}(z) = (−1)^n J_n(z) .

2. f(z, 1) = exp(0) = 1 = Σ_{n=−∞}^{∞} J_n(z) = J_0(z) + 2 Σ_{n=1}^{∞} J_{2n}(z), since by part 1 the
odd-index terms cancel and the even-index terms double.

3. f(z, exp(iφ)) = exp((z/2)(exp(iφ) − exp(−iφ))) = exp(zi sin(φ)), thus:

    f(z, exp(iφ)) = cos(z sin(φ)) + i sin(z sin(φ))
    = Σ_{n=−∞}^{∞} J_n(z) exp(inφ)
    = J_0(z) + Σ_{n=1}^{∞} J_n(z) (exp(inφ) + (−1)^n exp(−inφ))
    = J_0(z) + 2 Σ_{k=1}^{∞} J_{2k}(z) cos(2kφ) + 2i Σ_{k=0}^{∞} J_{2k+1}(z) sin((2k + 1)φ).

Now let z ∈ ℝ and compare real and imaginary parts:

    cos(z sin(φ)) = J_0(z) + 2 Σ_{k=1}^{∞} J_{2k}(z) cos(2kφ) ,
    sin(z sin(φ)) = 2 Σ_{k=0}^{∞} J_{2k+1}(z) sin((2k + 1)φ) .

These are classical Fourier series for the functions on the left hand side. Due to the
uniqueness of the Fourier coefficients we obtain (k ∈ ℕ₀, n ∈ ℤ):

    J_{2k}(z) = (1/π) ∫₀^π cos(z sin(φ)) cos(2kφ) dφ ,
    J_{2k+1}(z) = (1/π) ∫₀^π sin(z sin(φ)) sin((2k + 1)φ) dφ ,
    J_n(z) = (1/(2π)) ∫₀^{2π} f(z, exp(iφ)) exp(−inφ) dφ
    = (1/(2π)) ∫₀^{2π} exp(i(z sin(φ) − nφ)) dφ
    = (1/(2π)) ∫₀^{2π} cos(z sin(φ) − nφ) dφ .

In particular for φ = π/2 we get

    cos(z) = J_0(z) + 2 Σ_{k=1}^{∞} (−1)^k J_{2k}(z) ,
    sin(z) = 2 Σ_{k=0}^{∞} (−1)^k J_{2k+1}(z) .
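The two expansions for cos(z) and sin(z) converge very fast, because J_n(z) decays factorially in n for fixed z; truncating after a dozen terms already reproduces both functions to near machine precision (the helper and truncation below are illustrative choices):

```python
import math

def besselj_int(n, x, terms=40):
    # truncated series for integer order (illustrative helper)
    return sum((-1.0) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2.0) ** (n + 2 * k) for k in range(terms))

z = 1.7
cos_sum = besselj_int(0, z) + 2 * sum((-1) ** k * besselj_int(2 * k, z) for k in range(1, 12))
sin_sum = 2 * sum((-1) ** k * besselj_int(2 * k + 1, z) for k in range(12))
print(cos_sum - math.cos(z), sin_sum - math.sin(z))  # both differences ~ 0
```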
4. Differentiating f(z, t) with respect to t gives us

    Σ_{n=−∞}^{∞} n J_n(z) t^{n−1} = ∂f(z, t)/∂t = (z/2)(1 + 1/t²) f(z, t)
    = (z/2)(1 + 1/t²) Σ_{n=−∞}^{∞} J_n(z) t^n
    = Σ_{n=−∞}^{∞} (z/2) J_n(z) t^n + Σ_{n=−∞}^{∞} (z/2) J_n(z) t^{n−2}
    = Σ_{n=−∞}^{∞} (z/2) (J_{n−1}(z) + J_{n+1}(z)) t^{n−1} .

Comparing coefficients we find for all n ∈ ℤ that

    n J_n(z) = (z/2)(J_{n−1}(z) + J_{n+1}(z)) .

5. Analogously we can get by differentiation with respect to z that

    J′_n(z) = (1/2)(J_{n−1}(z) − J_{n+1}(z)) .

By induction one can show that

    J_n(z) = (−1)^n z^n ((1/z) d/dz)^n J_0(z) .
Remark 6.11
It should be noted that the resulting equations of parts 4 and 5 of the previous remark
can be extended from an integer index to an arbitrary index without any changes, i.e. for
ν ∈ ℂ and n ∈ ℕ₀:

    ν J_ν(z) = (z/2)(J_{ν−1}(z) + J_{ν+1}(z)) ,
    J′_ν(z) = (1/2)(J_{ν−1}(z) − J_{ν+1}(z)) ,
    ((1/z) d/dz)^n (z^ν J_ν(z)) = z^{ν−n} J_{ν−n}(z) ,
    ((1/z) d/dz)^n (z^{−ν} J_ν(z)) = (−1)^n z^{−ν−n} J_{ν+n}(z) .

For more on Bessel functions we refer to [10, 16].
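Both recurrence relations of Remark 6.11 can be verified numerically from the series, the derivative relation via a central difference (an illustrative sketch; the helper and step size are our own choices):

```python
import math

def besselj(v, x, terms=40):
    # truncated series from Theorem 6.2 (illustrative helper)
    return sum((-1.0) ** k / (math.factorial(k) * math.gamma(v + k + 1))
               * (x / 2.0) ** (v + 2 * k) for k in range(terms))

z = 2.3
for v in (0.5, 1.0, 2.0, 3.5):
    # three term recurrence: v J_v(z) = (z/2)(J_{v-1}(z) + J_{v+1}(z))
    assert abs(v * besselj(v, z) - z / 2 * (besselj(v - 1, z) + besselj(v + 1, z))) < 1e-12
    # derivative relation, checked with a central difference
    h = 1e-6
    dj = (besselj(v, z + h) - besselj(v, z - h)) / (2 * h)
    assert abs(dj - 0.5 * (besselj(v - 1, z) - besselj(v + 1, z))) < 1e-8
print("recurrence and derivative relations confirmed")
```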
[Figure 6.1: Bessel functions of first and second kind (J_0, J_{1/2}, J_1, J_2, J_3 and Y_0, Y_{1/2}, Y_1 on [0, 20]).]

[Figure 6.2: Bessel functions of first kind (J_0, J_{1/2}, J_1, J_2, J_3 on [20, 200]).]

[Figure 6.3: Bessel functions of first kind (J_0, J_1, J_2 and J_0, J_1, J_{−1} on [−20, 20]).]

[Figure 6.4: Bessel functions of second kind (Y_0, Y_1 on [−20, 20]).]
Chapter 7
Summary

1. Introduction
   - Examples for the Laplace equation (gravitation, geomagnetics)
   - Euler summation and Green's function for a 1D-lattice

2. The Gamma Function
   - Definition, derivative and general properties
   - Stirling's formula and duplication formula
   - Extension of 1/Γ to the complex plane

3. Orthogonal Polynomials
   - Basics in Fourier analysis
   - General/monic orthogonal polynomials and weight functions
   - Properties (zeros, three term recurrence, Christoffel-Darboux formula)
   - Quadrature rules
   - Jacobi polynomials (and their special cases: ultraspherical, Tschebyscheff, Legendre):
     definition, differential equation (Jacobi ODE), explicit representation, derivative,
     Rodrigues formula, three term recurrence relation
   - Ultraspherical polynomials: relation to Jacobi polynomials and inherited properties,
     maximal value, generating function
   - Application of Legendre polynomials in electrostatics
   - Hermite polynomials, their properties and application in quantum mechanics
   - Laguerre polynomials, their properties and application

4. Spherical Harmonics
   - Spherical notation
   - Homogeneous polynomials in ℝ³, scalar product, reproducing kernel
   - Homogeneous harmonic polynomials in ℝ³, addition theorem
   - Spherical harmonics, addition theorem, realization (real-/complex-valued)
   - Beltrami differential equation
   - Closure and completeness in L²(Ω) (Poisson integral formula)
   - The Funk-Hecke formula
   - Green's function with respect to the Beltrami operator
   - Application of spherical harmonics and Laguerre polynomials for the hydrogen atom

5. Vector Spherical Harmonics
   - Spherical vector fields
   - Definition of vector spherical harmonics, o^(i)- and O^(i)-operators
   - The Helmholtz decomposition theorem
   - Closure and completeness in l²(Ω)

6. Bessel Functions
   - Definition, Bessel differential equation, Bessel functions, Neumann functions
     and Hankel functions
   - Integral representation and orthogonality relation of the J_ν
   - Generating function, recurrence relation, relations for the derivative
Bibliography

[1] Alt, H.W.: Lineare Funktionalanalysis. Springer Lehrbuch, Berlin, 2006.
[2] Backus, G.E., Parker, R., Constable, C.: Foundations of Geomagnetism. Cambridge University Press, 1996.
[3] Butzer, P.L., Nessel, R.J.: Fourier Analysis and Approximation. Birkhäuser, Basel, Stuttgart, 1971.
[4] Davis, P.J.: Interpolation and Approximation. Dover Publ., 1975.
[5] Freeden, W., Gervens, T., Schreiner, M.: Constructive Approximation on the Sphere. Clarendon Press, 1998.
[6] Freeden, W., Michel, V.: Multiscale Potential Theory. Birkhäuser, 2004.
[7] Freeden, W., Schreiner, M.: Spherical Functions of Mathematical Geosciences. Springer, Berlin, 2009.
[8] Gautschi, W.: Orthogonal Polynomials, Computation and Approximation. Oxford University Press, 2004.
[9] Heuser, H.: Funktionalanalysis. Teubner, Stuttgart, 2006.
[10] Lebedew, N.N.: Spezielle Funktionen und ihre Anwendungen. BI, 1973.
[11] Müller, C.: Spherical Harmonics. Lecture Notes in Mathematics, 17, Springer, 1966.
[12] Müller, C.: Analysis of Spherical Symmetries in Euclidean Spaces. Springer, 1998.
[13] Reed, M., Simon, B.: Functional Analysis I. Academic Press, New York, 1972.
[14] Rudin, W.: Functional Analysis. McGraw-Hill, Boston, 1991.
[15] Szegő, G.: Orthogonal Polynomials. American Math. Soc., 1967.
[16] Watson, G.N.: A Treatise on the Theory of Bessel Functions. 2nd ed., Cambridge University Press, Cambridge, 1966.
[17] Yosida, K.: Functional Analysis. Springer, Berlin, 1980.