Lecture Notes
Alexei Rybkin
University of Alaska Fairbanks
Contents
Part 1. Complex Analysis 9

Lecture 1. Complex Numbers. Functions of Complex Variables 11
1. Imaginary Unit 11
2. Properties of Complex Numbers 11
3. Complex Functions 13

Lecture 2. The Concept of Analytic Function. The Cauchy-Riemann Conditions 15
1. Analytic Functions 15
2. Cauchy-Riemann Conditions 16
3. The Cauchy Theorem 18

Lecture 3. The Cauchy Integral 23
1. Introductory Example 23
2. Cauchy Integral 23
Appendix. The Morera Theorem 26

Lecture 4. Complex Series 29
1. Numerical Series 29
2. Functional Series 30
3. Power Series 30
4. Taylor and Laurent Series 31

Lecture 5. Singularities of Analytic Functions and the Residue Theorem 37
1. Singularities 37
2. Residues 38

Lecture 6. Applications of the Residue Theorem to Evaluation of Some Definite Integrals 43
1. Integrals of the type $\int_0^{2\pi} R(\cos\theta, \sin\theta)\,d\theta$ 43
2. Integrals of the type $\int_{-\infty}^{\infty} f(x)\,dx$ 44
3. Integrals of the type $\int_{-\infty}^{\infty} e^{i\lambda x} f(x)\,dx$ 46

Lecture 7. Applications of the Residue Theorem to Some Improper Integrals and Integrals Involving Multivalued Functions 49
1. Indented Contours and Cauchy Principal Values 49
2. Integrals of the type $\int_0^{\infty} x^{\alpha-1} f(x)\,dx$, $0 < \alpha < 1$ 50

Part 2. Linear Spaces 53

Lecture 8. Vector Spaces 55
1. Basic Definitions 55
2. Bases 56
3. Coordinates 58
4. Subspaces 58

Lecture 9. Linear Operators 59
1. Linear Operator 59
2. Matrices 59
3. Matrix Representation of Linear Operators 61
4. Matrix Ring 63
5. Noncommutative Ring 64

Lecture 10. Inner Product. Selfadjoint and Unitary Operators 67
1. Inner Product 67
2. Adjoint and Selfadjoint Operators 70
3. Unitary Operators 71

Lecture 11. More about Selfadjoint and Unitary Operators. Change of Basis 73
1. Examples of Selfadjoint and Unitary Operators 73
2. Change of Basis 74

Lecture 12. Eigenvalues and Eigenvectors 77
1. Eigenvalues 77
2. Spectral Analysis 78

Part 3. Hilbert Spaces 81

Lecture 13. Infinite Dimensional Spaces. Hilbert Spaces 83
1. Infinite Dimensional Spaces 83
2. Basis 83
3. Normed Spaces 83
4. Hilbert Spaces 85

Lecture 14. $L^2$ Spaces. Convergence in Normed Spaces. Basis and Coordinates in Normed and Hilbert Spaces 87
1. $L^2$ spaces 87
2. Convergence in normed spaces 89
3. Bases and Coordinates 90

Lecture 15. Fourier Series 91

Lecture 16. Some Applications of Fourier Series 95
1. Signal Processing 95
2. Solving Some Linear ODEs 98

Lecture 17. Linear Operators in Hilbert Spaces 101
1. General Concepts 101
2. Selfadjoint and Unitary Operators 105
3. Spectrum 106

Lecture 18. Spectra of Selfadjoint Operators 109

Lecture 19. Generalized Functions. Dirac $\delta$-function 111
1. Physical Motivation 111
2. Uniform Convergence 111
3. Weak Convergence 116
4. Generalized Derivatives 119

Lecture 20. Representations of the $\delta$-function 121
1. Fourier Representation of the $\delta$-function 121
2. Integral Representation of the $\delta$-function 123

Lecture 21. The Sturm-Liouville Problem 127

Lecture 22. Legendre Polynomials 133
Appendix. Frobenius Theory 136

Lecture 23. Harmonic Oscillator 143

Lecture 24. The Fourier Transform 147
1. Fourier Series 147
2. Fourier Transform 149

Lecture 25. Properties of the Fourier Transform. The Fourier Operator 151
1. Properties 151
2. The Fourier operator 152

Lecture 26. The Fourier Integral Theorem (Fourier Representation Theorem) 155
1. The Fourier Integral Theorem 155
2. Generalization of the Fourier Transform (by a limiting procedure) 158

Lecture 27. Some Applications of the Fourier Transform 159
1. Fourier Representation 159
2. Applications of the Fourier Transform to Second Order Differential Equations 160
3. Applications of the Fourier Transform to Higher Order Differential Equations 162

Part 4. Partial Differential Equations 165

Lecture 28. Wave Equations 167
1. The Stretched String 167
2. The Method of Eigenfunction Expansions (Spectral Method) 169

Lecture 29. Continuous Spectrum of a Linear Operator 173
1. Basic Definitions 173
2. Continuous spectrum of selfadjoint operators 175

Lecture 30. The Method of Eigenfunction Expansion in the Case of Continuous Spectrum 179
1. The Schrödinger Operator 179
2. The Wave Equation on the Whole Line 180
3. Propagation of Waves 183
Appendix: Derivation of the d'Alembert Formula 187

Lecture 31. The Heat Equation on the Line 189

Lecture 32. Nonhomogeneous Wave and Heat Equations on the Whole Line 195
1. Nonhomogeneous Heat Equation 195
2. Nonhomogeneous Wave Equation 197
Appendix: The Method of Variation of Parameters 203

Lecture 33. Wave and Heat Equations in Nonhomogeneous Media 207

Lecture 34. $L^2$ Spaces on Domains in Multidimensional Spaces 211

Lecture 35. The Laplace Equation in $\mathbb{R}^2$. Harmonic Functions 217
Appendix. Series Representation of the Solution to the Dirichlet Problem for the Laplace Equation 220

Lecture 36. Uniqueness of the Solution of the Laplace Equation. Laplace Equation in a Rectangular Domain 221
1. Well-Posedness of the Laplace Equation 221
2. Dirichlet Problem in a Rectangle 222

Lecture 37. The Principle of Conformal Mapping for the Laplace Equation. Laplace Equation in the Upper Half Plane 225
Appendix. Change of Coordinate System and the Chain Rule 233

Lecture 38. The Spectrum of the Laplace Operator on Some Simple Domains 235
1. The Spectrum of $\Delta$ on a Rectangle 235
2. The Spectrum of $\Delta$ on a Disk 238

Lecture 39. Nonhomogeneous Laplace Equation 241

Lecture 40. Wave and Heat Equation in Dimension Higher Than One 247
1. Wave Equation 247
2. Heat Equation 248
3. Further Discussions 249

Part 5. Green's Function 255

Lecture 41. Introduction to Integral Operators 257

Lecture 42. The Green's Function of $u'' + p(x)u' + q(x)u = f$ 261
1. The Green's Function 261
2. Application to a General Order 2 Linear Differential Operator 262

Index 267
Part 1
Complex Analysis
LECTURE 1
Complex Numbers. Functions of Complex Variables
The set of real numbers $\mathbb{R}$ is too poor to be happy with. Modern science cannot do without a larger set of numbers, the so-called complex numbers $\mathbb{C}$.
Let us briefly recall some basic notions related to complex numbers.
1. Imaginary Unit
The imaginary unit $i$ is defined by
\[ i^2 = -1. \]
As we all know, there is no real number with such a property, but we are going to treat $i$ as a number!
2. Properties of Complex Numbers
A complex number $z$ is defined as
\[ z = x + iy, \quad x, y \in \mathbb{R}, \]
where $x$ is the real part of $z$, also denoted
\[ x = \operatorname{Re} z, \]
and $y$ is the imaginary part of $z$,
\[ y = \operatorname{Im} z. \]
The complex conjugate of $z$, commonly denoted in math by $\bar z$, is defined by
\[ \bar z = x - iy. \]
The absolute value (modulus) $|z|$ of $z$ is defined by
\[ |z| = \sqrt{x^2 + y^2}. \]
Note that $|z|$ is a nonnegative real number.
The sum of two complex numbers $z_1, z_2$ is a complex number $z$ defined as follows:
\[ z = z_1 + z_2 = (x_1 + x_2) + i(y_1 + y_2), \]
that is,
\[ \operatorname{Re}(z_1 + z_2) = \operatorname{Re} z_1 + \operatorname{Re} z_2, \qquad \operatorname{Im}(z_1 + z_2) = \operatorname{Im} z_1 + \operatorname{Im} z_2. \]
The product $z$ of $z_1, z_2$ is defined as
\begin{align*}
z = z_1 z_2 &= (x_1 + iy_1)(x_2 + iy_2) \\
&= (x_1 x_2 - y_1 y_2) + i(x_1 y_2 + x_2 y_1).
\end{align*}
Observe that
1) if $z = x + iy$, then $\bar{\bar z} = \overline{x - iy} = x + iy = z$;
2) $z \bar z = |z|^2$.
Next, the division of $z_1$ by $z_2$ is defined by
\[ z = \frac{z_1}{z_2} := \frac{z_1 \bar z_2}{z_2 \bar z_2} = \frac{z_1 \bar z_2}{|z_2|^2}. \]
As an exercise, check that
1) $\overline{z_1 z_2} = \bar z_1 \bar z_2$ (hence $\overline{z^n} = \bar z^n$);
2) $\overline{\left( z_1 / z_2 \right)} = \bar z_1 / \bar z_2$;
3) $\overline{z_1 + z_2} = \bar z_1 + \bar z_2$;
4) $1/z = \bar z / |z|^2$.
The set of all complex numbers is denoted by $\mathbb{C}$. So
\[ \mathbb{C} = \{ z : z = x + iy, \ x, y \in \mathbb{R} \}. \]
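The conjugation and modulus identities above can be checked numerically; here is a quick sanity check (my addition, not part of the original notes) using Python's built-in complex type:

```python
z1, z2 = 3 + 4j, 1 - 2j

# conjugation distributes over products, sums, and powers
assert (z1 * z2).conjugate() == z1.conjugate() * z2.conjugate()
assert (z1 + z2).conjugate() == z1.conjugate() + z2.conjugate()
assert (z1**3).conjugate() == z1.conjugate()**3

# z * conj(z) = |z|^2, and 1/z = conj(z)/|z|^2
assert z1 * z1.conjugate() == abs(z1)**2
assert abs(1/z1 - z1.conjugate()/abs(z1)**2) < 1e-15

# conjugation distributes over quotients
assert abs((z1/z2).conjugate() - z1.conjugate()/z2.conjugate()) < 1e-15
```

The exact equalities hold here because the sample numbers have integer parts; in general, floating point comparisons should use a small tolerance as in the last two checks.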
It is convenient to place complex numbers on a plane (called the complex plane).
[Figure: the complex plane, showing $z = x + iy = (x, y)$, its conjugate $\bar z = x - iy$, and the modulus $|z|$.]
The argument (phase, polar angle) $\theta$ of a complex number $z$ is the angle between the $x$-axis and the vector $(x, y)$. The standard notation for $\theta$ is
\[ \theta = \arg z. \]
It follows from elementary trigonometry that
\[ z = |z| (\cos\theta + i\sin\theta). \]
Note that
\[ |\cos\theta + i\sin\theta| = 1 \]
and
\begin{align*}
z^n &= |z|^n (\cos n\theta + i\sin n\theta), \quad n \in \mathbb{N}, \\
z_1 z_2 &= |z_1| |z_2| \left( \cos(\theta_1 + \theta_2) + i\sin(\theta_1 + \theta_2) \right), \\
\frac{z_1}{z_2} &= \frac{|z_1|}{|z_2|} \left( \cos(\theta_1 - \theta_2) + i\sin(\theta_1 - \theta_2) \right).
\end{align*}
Less trivial is the famous Euler formula
\[ \cos\theta + i\sin\theta = e^{i\theta}. \]
Using this formula one gets
\[ z = |z| e^{i\theta}, \]
the so-called exponential representation of a complex number. One easily verifies:
1) $z_1 z_2 = |z_1| |z_2| e^{i(\theta_1 + \theta_2)}$;
2) $z_1 / z_2 = \left( |z_1| / |z_2| \right) e^{i(\theta_1 - \theta_2)}$;
3) $\bar z = |z| e^{-i\theta}$;
4) $|e^{i\theta}| = 1$;
5) $z^n = |z|^n e^{in\theta}$.
If it takes you more than a split second to understand these properties, I urge you to prove them all.
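A minimal numerical sanity check of the exponential representation, using Python's `cmath` module (my illustration, not part of the notes; the sample angles and moduli are arbitrary):

```python
import cmath
import math

theta1, theta2 = 0.7, 1.9
z1 = 2.0 * cmath.exp(1j * theta1)   # z1 = |z1| e^{i theta1}, |z1| = 2
z2 = 3.0 * cmath.exp(1j * theta2)   # |z2| = 3

# Euler's formula and |e^{i theta}| = 1
assert cmath.isclose(cmath.exp(1j * theta1),
                     complex(math.cos(theta1), math.sin(theta1)))
assert math.isclose(abs(cmath.exp(1j * theta2)), 1.0)

# product: moduli multiply, arguments add (theta1 + theta2 = 2.6 < pi, no wrap)
p = z1 * z2
assert math.isclose(abs(p), 6.0)
assert math.isclose(cmath.phase(p), theta1 + theta2)

# de Moivre: z^n = |z|^n e^{i n theta}
assert cmath.isclose(z1**5, 2.0**5 * cmath.exp(1j * 5 * theta1))
```

Note that `cmath.phase` returns the argument in $(-\pi, \pi]$, so for larger angles the sum of arguments must be reduced modulo $2\pi$ before comparing.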
3. Complex Functions
We are now ready to look at complex functions. Whenever we consider a domain $D$, it refers to an open set in the complex plane; think of it as an ink blot, a spill on your carpet.

Definition 1.1. Given a domain $D \subseteq \mathbb{C}$, a complex valued function $f$ is any function from $D$ to $\mathbb{C}$, regarded as a function of the two real variables $x, y$.

Given $z \in D$ where $z = x + iy$, we can always write
\[ f(z) = u(x, y) + iv(x, y), \]
where $u = \operatorname{Re} f$ and $v = \operatorname{Im} f$.

Example 1.2. Let $D = \mathbb{C}$ and let $z = x + iy$. The following two maps are examples of complex valued functions:
\[ f(z) = x + iy = z, \qquad g(z) = x^2 + iy^2. \]
Although both look fairly simple, they are profoundly different, as we will see next time.
LECTURE 2
The Concept of Analytic Function. The Cauchy-Riemann Conditions
1. Analytic Functions
The concept of an analytic function is truly central in math and science.
Definition 2.1. Let $f : D \to \mathbb{C}$ be a complex function of a complex variable $z$. The function $f(z)$ is said to be differentiable at $z_0 \in D$ if the following limit (called the derivative) exists:
\[ f'(z_0) := \lim_{\Delta z \to 0} \frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z}. \tag{2.1} \]

Definition 2.2. If $f(z)$ is differentiable at every $z_0 \in D$, then $f(z)$ is called analytic (or holomorphic) in $D$.

Let us look again at Example 1.2. We claim that $f(z) = z$ is analytic in $\mathbb{C}$ but that $g(z) = x^2 + iy^2$ is not.
Proof. Consider $f$. Let $z_0 \in \mathbb{C}$. Then
\[ \frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z} = \frac{z_0 + \Delta z - z_0}{\Delta z} = 1 \implies \lim_{\Delta z \to 0} \frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z} = 1. \]
So $f$ is differentiable at every $z_0$ and hence analytic in $\mathbb{C}$.

Now for $g$, write $\Delta z = \Delta x + i\Delta y$ and note:
\[ \frac{g(z_0 + \Delta z) - g(z_0)}{\Delta z} = \frac{(x + \Delta x)^2 + i(y + \Delta y)^2 - x^2 - iy^2}{\Delta x + i\Delta y} = \frac{2x\Delta x + (\Delta x)^2 + 2iy\Delta y + i(\Delta y)^2}{\Delta x + i\Delta y}. \]

Case 1: $\Delta z = \Delta x$, i.e. $z_0 + \Delta z$ approaches $z_0$ along a horizontal line. Then
\[ \frac{g(z_0 + \Delta z) - g(z_0)}{\Delta z} = \frac{2x\Delta x + (\Delta x)^2}{\Delta x} = 2x + \Delta x \implies \lim_{\Delta z \to 0} \frac{g(z_0 + \Delta z) - g(z_0)}{\Delta z} = \lim_{\Delta x \to 0} (2x + \Delta x) = 2x. \]

Case 2: $\Delta z = i\Delta y$, i.e. $z_0 + \Delta z$ approaches $z_0$ along a vertical line. Then
\[ \frac{g(z_0 + \Delta z) - g(z_0)}{\Delta z} = \frac{2iy\Delta y + i(\Delta y)^2}{i\Delta y} = 2y + \Delta y \implies \lim_{\Delta z \to 0} \frac{g(z_0 + \Delta z) - g(z_0)}{\Delta z} = \lim_{\Delta y \to 0} (2y + \Delta y) = 2y. \]

The limits are different in general, so $g$ is not analytic in $\mathbb{C}$. Note that even though points on the line $y = x$ give the same limit, they do not constitute a domain, i.e. an open set, so $g$ is not analytic.
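The two directional limits in this argument can be seen numerically. The sketch below (my addition) computes difference quotients of $g$ at the hypothetical sample point $z_0 = 1 + 2i$ along a horizontal and a vertical direction:

```python
def g(z):
    # g(z) = x^2 + i y^2 with x = Re z, y = Im z
    return z.real**2 + 1j * z.imag**2

z0 = 1.0 + 2.0j   # x = 1, y = 2
h = 1e-6

# Delta z = h (horizontal approach): quotient -> 2x = 2
horiz = (g(z0 + h) - g(z0)) / h
# Delta z = i h (vertical approach): quotient -> 2y = 4
vert = (g(z0 + 1j * h) - g(z0)) / (1j * h)
# horiz and vert disagree, so g has no complex derivative at z0;
# f(z) = z, by contrast, gives the quotient 1 in every direction
```

Running this, `horiz` is close to 2 and `vert` is close to 4, exactly matching the two limits computed in the proof.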
\[ \int_\gamma f(z)\,dz = \int_\gamma (u + iv)(dx + i\,dy) = \int_\gamma u\,dx - v\,dy + i \int_\gamma v\,dx + u\,dy. \]
Recall that line integrals are associated with work in physics.

Exercise 2.12. Show that
1. $\displaystyle \int_\gamma \left( f_1(z) + f_2(z) \right) dz = \int_\gamma f_1(z)\,dz + \int_\gamma f_2(z)\,dz$;
2. $\displaystyle \int_\gamma \lambda f(z)\,dz = \lambda \int_\gamma f(z)\,dz$, where $\lambda$ is a constant;
3. $\displaystyle \int_{\gamma_1 \cup \gamma_2} f(z)\,dz = \int_{\gamma_1} f(z)\,dz + \int_{\gamma_2} f(z)\,dz$.
Hint: you are free to use the corresponding properties of line integrals of real functions (Calc III).
Definition 2.13. Let $\gamma$ be a path (curve) starting at a point $z = z_0$ and ending at $z = z_1$. We define $-\gamma$ as the same curve but starting at $z_1$ and ending at $z_0$.
[Figure: a path $\gamma$ from $z_0$ to $z_1$, and the reversed path $-\gamma$.]

Exercise 2.14. Show that
\[ \int_{-\gamma} f(z)\,dz = -\int_\gamma f(z)\,dz. \]
Use the hint of Exercise 2.12.

Exercise 2.15. State and prove the change of variables formula.
The following theorem will play a crucial role in our exposition.

Theorem 2.16. The following analog of the triangle inequality holds:
\[ \left| \int_\gamma f(z)\,dz \right| \le \int_\gamma |f(z)|\,|dz|. \]
Recall $|dz| = \sqrt{dx^2 + dy^2}$, i.e. the arc length element $ds$ from Calc III. The triangle inequality for complex numbers, $|z_1 + z_2| \le |z_1| + |z_2|$, is at the origin of the triangle inequality for integrals (see the proof below).

Proof. It immediately follows from Definition 2.11 that if we partition $\gamma$ by points $z_0, z_1, \dots, z_{k-1}, z_k, z_{k+1}, \dots, z_n$, pick a point $\xi_k$ on each arc, and form the Riemann sum
\[ \sum_{k=1}^{n} f(\xi_k) \underbrace{(z_k - z_{k-1})}_{\Delta z_k}, \]
then
\[ \int_\gamma f(z)\,dz = \lim_{\max_k |\Delta z_k| \to 0} \sum_{k=1}^{n} f(\xi_k)\,\Delta z_k \]
and
\[ \left| \int_\gamma f(z)\,dz \right| \le \lim_{\max_k |\Delta z_k| \to 0} \left| \sum_{k=1}^{n} f(\xi_k)\,\Delta z_k \right| \overset{\text{triangle inequality}}{\le} \lim_{\max_k |\Delta z_k| \to 0} \sum_{k=1}^{n} |f(\xi_k)|\,|\Delta z_k| = \int_\gamma |f(z)|\,|dz|. \]
$\oint_C f(z)\,dz$ is called the contour integral of $f(z)$ along $C$ counterclockwise.

Remark 2.18. It follows from Exercise 2.14 that
\[ \oint_{-C} f(z)\,dz = -\oint_C f(z)\,dz. \]
Unless otherwise stated, we write $\oint$ for the counterclockwise contour integral.

Definition 2.19. A domain $D \subseteq \mathbb{C}$ is said to be simply connected if, roughly speaking, it consists of one piece and has no holes in it.

Theorem 2.20 (Cauchy's Theorem). Let $D$ be a simply connected domain and $f \in H(D)$. Then
\[ \oint_C f(z)\,dz = 0 \quad \text{for every contour } C \subset D. \]
[Figure: two examples of contours $C$ in a simply connected domain $D$.]

Exercise 2.21. Prove Theorem 2.20. Hint: use Definition 2.11, Green's formula, and the Cauchy-Riemann conditions.

Remark 2.22. The converse of the Cauchy theorem (known as Morera's theorem) is also valid. If time permits we will prove it later.
LECTURE 3
The Cauchy Integral
1. Introductory Example
As we know, according to the Cauchy theorem, if $f(z)$ is analytic on a simply connected domain $D$, then
\[ \oint_C f(z)\,dz = 0 \quad \text{for every contour } C \subset D. \]
The Cauchy theorem fails if $\operatorname{Int} C$ contains at least one point at which our function is not analytic. The following example shows it.

Example 3.1. Consider $f(z) = \dfrac{1}{z - z_0}$. This function is analytic for every $z \in \mathbb{C} \setminus \{z_0\}$. Let $C_\rho = \{ z : |z - z_0| = \rho \}$. Consider the integral
\[ \oint_{C_\rho} f(z)\,dz = \oint_{|z - z_0| = \rho} \frac{dz}{z - z_0}. \tag{3.1} \]
Let us compute this integral explicitly using a typical complex variable technique. First of all, $C_\rho = \{ z : |z - z_0| = \rho \}$ is a circle in $\mathbb{C}$ of radius $\rho$, centered at $z_0$. This means that $z - z_0 = \rho e^{i\theta}$ where $0 \le \theta < 2\pi$, and by setting $\zeta = z - z_0$ in (3.1) we have
\[ \oint_{|z - z_0| = \rho} \frac{dz}{z - z_0} = \oint_{|\zeta| = \rho} \frac{d\zeta}{\zeta} = \int_0^{2\pi} \frac{d(\rho e^{i\theta})}{\rho e^{i\theta}} = \int_0^{2\pi} \frac{i\rho e^{i\theta}\,d\theta}{\rho e^{i\theta}} = i \int_0^{2\pi} d\theta = 2\pi i. \quad \text{Not zero!} \]
Actually, we got a formula to be used in the future:
\[ \oint_{|z - z_0| = \rho} \frac{dz}{z - z_0} = 2\pi i. \tag{3.2} \]
2. Cauchy Integral
Let's now consider the function $\dfrac{f(z)}{z - z_0}$, where $f(z) \in H(D)$¹ and $z_0 \in D$. It is clear that $\dfrac{f(z)}{z - z_0} \notin H(D)$ but belongs to $H(D \setminus \{z_0\})$. Set up the integral
\[ \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0}\,dz, \quad \text{where } C \text{ is any contour with } z_0 \in \operatorname{Int} C. \]
¹ Recall that $f(z) \in H(D)$ means that $f(z)$ is analytic on some domain $D$.

This integral is called the Cauchy integral of $f(z)$ along a contour $C$ and is one of the most important objects in mathematics and theoretical physics.

Theorem 3.2 (Cauchy Formula). If $f(z) \in H(D)$ and $D$ is a simply connected domain, then
\[ \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0}\,dz = f(z_0) \tag{3.3} \]
for any contour $C \subset D$ and $z_0 \in \operatorname{Int} C$.
This theorem is fundamental, and its proof is very typical in complex analysis. Before we present it we need to discuss one lemma.

Lemma 3.3. Suppose $g(z) \in H(D)$, where $D$ is the domain between two contours $C$ and $C'$. Then
\[ \oint_C g(z)\,dz = \oint_{C'} g(z)\,dz. \]
[Figure: the domain $D$ between an outer contour $C$ and an inner contour $C'$.]

Proof of lemma. Consider a new contour $\Gamma = C \cup \gamma_1 \cup (-C') \cup \gamma_2$, which encloses the domain $D$ cut along $\gamma_1$ (or $\gamma_2$).
[Figure: the cut domain, with $C$, $C'$, and the cut $\gamma_1$.]
By hypothesis $g(z) \in H(D)$, and then by the Cauchy Theorem,
\[ \oint_\Gamma g(z)\,dz = 0. \tag{3.4} \]
On the other hand,
\[ \oint_\Gamma = \oint_C + \int_{\gamma_1} - \oint_{C'} + \int_{\gamma_2}. \tag{3.5} \]
By Exercise 2.14, $\gamma_2 = -\gamma_1$ and $\int_{\gamma_2} = -\int_{\gamma_1}$. Combining (3.4) and (3.5) we have
\[ \oint_C g(z)\,dz - \oint_{C'} g(z)\,dz = 0, \]
and the lemma is proven.
Now we are ready to prove the Cauchy Formula.

Proof. $\dfrac{f(z)}{z - z_0} \in H(D \setminus \{z_0\})$. Apply Lemma 3.3 to $\dfrac{f(z)}{z - z_0}$ with the given contour $C$ and $C' = \{ z : |z - z_0| = \rho \}$. We have
\begin{align*}
\frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0}\,dz &= \frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{f(z)}{z - z_0}\,dz \\
&= \frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{f(z) - f(z_0)}{z - z_0}\,dz + f(z_0) \underbrace{\frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{dz}{z - z_0}}_{= 1 \text{ by } (3.2)} \\
&= f(z_0) + \frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{f(z) - f(z_0)}{z - z_0}\,dz. \tag{3.6}
\end{align*}
Let us now evaluate the last integral in (3.6). By Theorem 2.16, we have
\[ \left| \frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{f(z) - f(z_0)}{z - z_0}\,dz \right| \le \frac{1}{2\pi} \oint_{|z - z_0| = \rho} \frac{|f(z) - f(z_0)|}{|z - z_0|}\,|dz| \le \max_{|z - z_0| = \rho} |f(z) - f(z_0)| \cdot \underbrace{\frac{1}{2\pi} \oint_{|z - z_0| = \rho} \frac{|dz|}{|z - z_0|}}_{= \frac{1}{2\pi} \cdot \frac{1}{\rho} \cdot 2\pi\rho = 1}. \]
So we got
\[ \left| \frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{f(z) - f(z_0)}{z - z_0}\,dz \right| \le \max_{|z - z_0| = \rho} |f(z) - f(z_0)|. \tag{3.7} \]
But $\rho$ is arbitrary and we can make it as small as we want. Since $f(z)$ is continuous, $|f(z) - f(z_0)|$ is small if $\rho$ is small, and it now follows from (3.7) that
\[ \lim_{\rho \to 0} \frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{f(z) - f(z_0)}{z - z_0}\,dz = 0. \tag{3.8} \]
Now let $\rho \to 0$ in (3.6). We have
\[ \underbrace{\lim_{\rho \to 0} \frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{f(z)}{z - z_0}\,dz}_{\text{independent of } \rho} = \underbrace{\lim_{\rho \to 0} f(z_0)}_{\text{independent of } \rho} + \underbrace{\lim_{\rho \to 0} \frac{1}{2\pi i} \oint_{|z - z_0| = \rho} \frac{f(z) - f(z_0)}{z - z_0}\,dz}_{= 0 \text{ by } (3.8)}, \]
and we finally arrive at (3.3).
The Cauchy formula implies

Theorem 3.4 (without proof).
\[ f^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,dz. \]
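The derivative formula of Theorem 3.4 can also be checked numerically. The sketch below (my addition) recovers $f''(0) = 1$ for $f(z) = e^z$ from a discretized contour integral:

```python
import cmath
import math

def cauchy_derivative(f, z0, n, rho=1.0, m=4000):
    """Approximate f^(n)(z0) = n!/(2 pi i) * contour integral of
    f(z)/(z - z0)^(n+1) over the circle |z - z0| = rho."""
    total = 0j
    for k in range(m):
        theta = 2 * math.pi * (k + 0.5) / m
        z = z0 + rho * cmath.exp(1j * theta)
        dz = 1j * rho * cmath.exp(1j * theta) * (2 * math.pi / m)
        total += f(z) / (z - z0) ** (n + 1) * dz
    return math.factorial(n) / (2j * math.pi) * total

d2 = cauchy_derivative(cmath.exp, 0.0, 2)   # second derivative of e^z at 0
# d2 is close to exp(0) = 1
```

Because the integrand is analytic and periodic in the angle, the midpoint rule converges extremely fast here.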
Corollary 3.5 (The Liouville Theorem). If $f(z) \in H(\mathbb{C})$ and is bounded², then $f(z) = \text{const}$ for all $z \in \mathbb{C}$.
² A function $f(z)$ is called bounded on $D$ if $\exists M > 0 : |f(z)| \le M \ \forall z \in D$.
Proof. Apply Theorem 3.4 to $f(z)$ with $n = 1$ and $C = \{ z : |z - z_0| = R \}$. We have
\[ f'(z_0) = \frac{1}{2\pi i} \oint_{|z - z_0| = R} \frac{f(z)}{(z - z_0)^2}\,dz, \quad z_0 \in \mathbb{C}. \tag{3.9} \]
(3.9) implies
\[ |f'(z_0)| = \left| \frac{1}{2\pi i} \oint_{|z - z_0| = R} \frac{f(z)}{(z - z_0)^2}\,dz \right| \le \frac{1}{2\pi} \oint_{|z - z_0| = R} \frac{|f(z)|}{|z - z_0|^2}\,|dz| \quad \text{(by Theorem 2.16)}. \]
By hypothesis $|f(z)| \le M$, and we get
\[ |f'(z_0)| \le \frac{M}{2\pi} \oint_{|z - z_0| = R} \frac{|dz|}{|z - z_0|^2} = \frac{M}{2\pi} \cdot \frac{1}{R^2} \cdot 2\pi R = \frac{M}{R}. \]
$R$ is an arbitrary number. Let $R \to \infty$. We have
\[ \lim_{R \to \infty} |f'(z_0)| = 0 \implies f'(z_0) = 0 \ \forall z_0 \implies f(z) = \text{const}. \]
Done.
Appendix. The Morera Theorem

This theorem is the converse of the Cauchy theorem. We first prove

Theorem 3.6. Let $f(z)$ be defined and continuous on a simply connected domain $D$. If for every contour $C \subset D$
\[ \oint_C f(z)\,dz = 0, \]
then
\[ F(z) := \int_{z_0}^{z} f(\zeta)\,d\zeta \in H(D) \]
and
\[ F'(z) = f(z). \]

Proof. Consider
\[ \frac{F(z + \Delta z) - F(z)}{\Delta z} = \frac{1}{\Delta z} \left( \int_{z_0}^{z + \Delta z} f(\zeta)\,d\zeta - \int_{z_0}^{z} f(\zeta)\,d\zeta \right) = \frac{1}{\Delta z} \int_{z}^{z + \Delta z} f(\zeta)\,d\zeta. \tag{3.10} \]
Note that all the integrals above are independent of the specific curves and are determined only by their endpoints. It follows from (3.10) that
\begin{align*}
\left| \frac{F(z + \Delta z) - F(z)}{\Delta z} - f(z) \right| &= \frac{1}{|\Delta z|} \left| \int_{z}^{z + \Delta z} f(\zeta)\,d\zeta - f(z) \underbrace{\int_{z}^{z + \Delta z} d\zeta}_{= \Delta z} \right| \\
&= \frac{1}{|\Delta z|} \left| \int_{z}^{z + \Delta z} \left( f(\zeta) - f(z) \right) d\zeta \right| \\
&\overset{\text{Theorem 2.16}}{\le} \frac{1}{|\Delta z|} \int_{z}^{z + \Delta z} |f(\zeta) - f(z)|\,|d\zeta| \\
&\le \frac{1}{|\Delta z|} \max_{\zeta \in [z, z + \Delta z]} |f(\zeta) - f(z)| \underbrace{\int_{z}^{z + \Delta z} |d\zeta|}_{= |\Delta z|} = \max_{\zeta \in [z, z + \Delta z]} |f(\zeta) - f(z)|. \tag{3.11}
\end{align*}
Since $f(z)$ is continuous on $D$,
\[ \lim_{\Delta z \to 0} \max_{\zeta \in [z, z + \Delta z]} |f(\zeta) - f(z)| = 0, \]
and (3.11) implies
\[ \lim_{\Delta z \to 0} \left| \frac{F(z + \Delta z) - F(z)}{\Delta z} - f(z) \right| = 0 \implies \lim_{\Delta z \to 0} \frac{F(z + \Delta z) - F(z)}{\Delta z} = f(z). \]
So $F \in H(D)$ and $F'(z) = f(z)$.

Theorem 3.7 (Morera's Theorem). Let $f(z)$ be continuous on a simply connected domain $D$. If for every contour $C \subset D$
\[ \oint_C f(z)\,dz = 0, \]
then $f \in H(D)$.

Proof. By Theorem 3.6,
\[ F(z) = \int_{z_0}^{z} f(\zeta)\,d\zeta \in H(D), \quad z_0 \in D, \]
and $F'(z) = f(z)$. By Theorem 3.4, $F'(z)$ is analytic if $F(z)$ is analytic. So $f(z)$ is analytic.
LECTURE 4
Complex Series
Series (real or complex) are a main ingredient not only of mathematics but also of physics. It is just one of the very few ways to get to a numerical answer. So, from now on we are going to deal with series on a regular basis.

1. Numerical Series

Definition 4.1. A sequence of complex numbers $\{z_k\}_{k=1}^{\infty} = \{z_1, z_2, \dots, z_n, \dots\}$ is said to converge to some $z$ if for any small $\varepsilon > 0$
\[ |z - z_n| < \varepsilon \]
for sufficiently large $n$. We write
\[ \lim_{n \to \infty} z_n = z \quad \text{or} \quad z_n \to z, \ n \to \infty. \]
[Figure: points $z_1, z_2, z_3, \dots$ approaching $z$ inside a small disk centered at $z$.]
This means that starting from some $n$, all numbers $z_n, z_{n+1}, \dots$ get into a given disk centered at $z$.

Definition 4.2. A formal sum $\sum_{n \ge 1} z_n = \sum_{n=1}^{\infty} z_n$ of the numbers $\{z_n\}_{n=1}^{\infty}$ is called a series.

Definition 4.3. A series $\sum_{n \ge 1} z_n$ is called convergent if there exists a complex number $S$ such that
\[ S_n \to S, \quad n \to \infty, \]
where $S_n = z_1 + z_2 + \cdots + z_n = \sum_{k=1}^{n} z_k$ is a partial sum of the series. In this case we write $\sum_{n \ge 1} z_n = S$.

Definition 4.4. A series which is not convergent is called divergent.

Proposition 4.5. A series $\sum_{n \ge 1} z_n$ is convergent $\iff$ its tail $\sum_{k \ge n} z_k \to 0$, $n \to \infty$.

Proof. 1) ($\Rightarrow$) Let $S = \sum_{n \ge 1} z_n$. For $S - S_n$ we have
\[ S - S_n = \sum_{n \ge 1} z_n - \sum_{k=1}^{n} z_k = \sum_{k = n+1}^{\infty} z_k. \tag{4.1} \]
By hypothesis, $S_n \to S$. Hence $|S - S_n| \to 0$, $n \to \infty$. So we get
\[ |S - S_n| = \left| \sum_{k = n+1}^{\infty} z_k \right| \to 0. \]
2) ($\Leftarrow$) Read (4.1) backwards. QED¹

Definition 4.6. A series $\sum_{n \ge 1} z_n$ is said to be absolutely convergent if $\sum_{n \ge 1} |z_n|$ is convergent.
Actually, there is no difference here between real and complex series.

2. Functional Series

In applications it is typical that every term of a series is a function of some variable, and this variable can easily be complex.

Definition 4.7. A formal sum $\sum f_n(z)$ is called a functional series.

In the future we're going to treat functional series with fairly general $f_n(z)$. But at this point we concentrate on power series.
3. Power Series

Definition 4.8. A power series is
\[ \sum_{n = -\infty}^{\infty} a_n (z - z_0)^n = \sum_{n \le -1} a_n (z - z_0)^n + \sum_{n \ge 0} a_n (z - z_0)^n, \tag{4.2} \]
where $\{a_n\}_{n = -\infty}^{\infty} = \{\dots, a_{-n}, \dots, a_{-1}, a_0, a_1, \dots, a_n, \dots\}$ is a sequence of complex numbers and $z_0 \in \mathbb{C}$.

Note that in Calc II we only consider power series with nonnegative powers.

Definition 4.9. The domain of convergence $D$ of a functional series is
\[ D = \left\{ z \in \mathbb{C} : \sum f_n(z) \text{ is convergent} \right\}. \]

Theorem 4.10. The domain of convergence of a power series is always an annulus.

Proof. Try to prove it yourself.
[Figure: an annulus centered at $z_0$.]

¹ QED means "what was to be demonstrated" (Latin).
A good example of a power series is
\[ \sum_{n \ge 0} z^n = 1 + z + z^2 + \cdots + z^n + \cdots. \tag{4.3} \]
Its domain of convergence is $D = \{ z : |z| < 1 \}$, a disk of radius 1. Let us compute (4.3):
\[ S_n(z) = 1 + z + z^2 + \cdots + z^n = \frac{1 - z^{n+1}}{1 - z}. \]
If $z \in D$ then $|z| < 1$ and $\lim_{n \to \infty} z^{n+1} = 0 \implies \lim_{n \to \infty} S_n(z) = \frac{1}{1 - z}$, and hence
\[ \sum_{n \ge 0} z^n = \frac{1}{1 - z}. \tag{4.4} \]
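A quick numerical illustration (my addition; the sample point is an arbitrary choice inside the disk of convergence): the partial sums $S_n(z)$ rapidly approach $1/(1-z)$.

```python
z = 0.5 + 0.3j          # |z| < 1: inside the domain of convergence
target = 1 / (1 - z)

partial = 0j
term = 1 + 0j
for n in range(200):    # compute the partial sum S_199(z)
    partial += term
    term *= z
# partial is close to target; for |z| >= 1 the partial sums would diverge
```

Here $|z| \approx 0.58$, so the error after 200 terms is far below machine precision.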
As we can see, (4.3) represents a function which is clearly analytic in $D$. In fact, more generally:

Theorem 4.11. Every power series $\sum_{n \ge 0} a_n z^n$ represents an analytic function on its domain of convergence.

The converse is also true, as we will see in
4. Taylor and Laurent Series
Theorem 4.12 (The Taylor Theorem). If $f(z) \in H(D)$, $D = \{ z : |z - z_0| < R \}$, then
\[ f(z) = \sum_{n \ge 0} a_n (z - z_0)^n, \quad z \in D, \tag{4.5} \]
where
\[ a_n = \frac{1}{n!} \left. \frac{d^n}{dz^n} f(z) \right|_{z = z_0}. \]
[Figure: the disk $D$ of radius $R$ about $z_0$, the point $z$, and the circle $C_\rho$.]

Proof. Let $z \in D$ and consider $C_\rho = \{ \zeta : |\zeta - z_0| = \rho \}$, where $\rho$ is chosen so that $z \in \operatorname{Int} C_\rho$. By the Cauchy formula,
\[ f(z) = \frac{1}{2\pi i} \oint_{C_\rho} \frac{f(\zeta)\,d\zeta}{\zeta - z}. \tag{4.6} \]
Consider $\dfrac{1}{\zeta - z}$. We have
\[ \frac{1}{\zeta - z} = \frac{1}{\zeta - z_0} \cdot \frac{1}{1 - \frac{z - z_0}{\zeta - z_0}}. \tag{4.7} \]
Since, by construction, $\left| \frac{z - z_0}{\zeta - z_0} \right| < 1$, we have
\[ \frac{1}{1 - \frac{z - z_0}{\zeta - z_0}} = \sum_{n \ge 0} \left( \frac{z - z_0}{\zeta - z_0} \right)^n, \]
and (4.7) can be continued:
\[ \frac{1}{\zeta - z} = \frac{1}{\zeta - z_0} \sum_{n \ge 0} \left( \frac{z - z_0}{\zeta - z_0} \right)^n. \]
Now plug this expression into (4.6) and we get
\[ f(z) = \frac{1}{2\pi i} \oint_{C_\rho} \frac{f(\zeta)}{\zeta - z_0} \sum_{n \ge 0} \frac{(z - z_0)^n}{(\zeta - z_0)^n}\,d\zeta = \frac{1}{2\pi i} \oint_{C_\rho} \sum_{n \ge 0} \frac{f(\zeta)}{(\zeta - z_0)^{n+1}} (z - z_0)^n\,d\zeta. \tag{4.8} \]
Now switch the order of integration and summation. This is a very subtle point and not that easy to justify, but in this case it's true! So (4.8) can be continued:
\[ f(z) = \sum_{n \ge 0} \underbrace{\frac{1}{2\pi i} \oint_{C_\rho} \frac{f(\zeta)\,d\zeta}{(\zeta - z_0)^{n+1}}}_{= \frac{1}{n!} f^{(n)}(z_0) \text{ (by Theorem 3.4)}} (z - z_0)^n. \]
So (4.5) is proven with $a_n = \frac{1}{n!} f^{(n)}(z_0)$. QED
Definition 4.13. The series (4.5) is called the Taylor series of $f(z)$.

Now we turn to Laurent series. Some more terminology.

Definition 4.14. A domain $D$ is called path-connected if for every two points $z_0, z_1 \in D$ there is a path $\gamma \subset D$ connecting $z_0$ and $z_1$. That is,
\[ D \text{ is path-connected} \iff \forall z_0, z_1 \in D \ \exists \gamma \subset D : z_0, z_1 \in \gamma. \]
[Figure: an example of such a domain.]

We associate multiconnected contours with path-connected domains. To understand this concept, just inspect the figure:
[Figure: a multiconnected contour $C$ made of three pieces: $C'$ (the enclosing contour), oriented counterclockwise, and two inner contours $C_1, C_2$, oriented clockwise. The interior of $C$ is the shaded domain $D$.]

Definition 4.15. We say that a contour $C$ belongs to a domain $D$ if $\operatorname{Int} C \subset D$.
[Figure: $D$ is a path-connected domain and $C$ is a multiconnected (two-connected) contour; $C$ belongs to $D$, written $C \subset D$, since $\operatorname{Int} C \subset D$.]

Theorem 4.16 (The Cauchy Formula). If $f \in H(D)$, where $D$ is a path-connected domain, then
\[ \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0}\,dz = f(z_0) \]
for any $C \subset D$ and $z_0 \in \operatorname{Int} C$.

Exercise 4.17. Prove Theorem 4.16. Hint: use the figure below, some arguments from Lemma 3.3, and Theorem 3.2.
Theorem 4.18 (The Laurent Theorem). If $f \in H(D)$, where $D$ is an annulus $D = \{ z : R_1 < |z - z_0| < R_2 \}$, then
\[ f(z) = \sum_{n = -\infty}^{\infty} a_n (z - z_0)^n, \quad z \in D, \tag{4.9} \]
\[ a_n = \frac{1}{2\pi i} \oint_C \frac{f(\zeta)\,d\zeta}{(\zeta - z_0)^{n+1}}, \]
where $C$ is any contour in $D$ encircling the disk $|z - z_0| \le R_1$.
[Figure: the annulus $R_1 < |z - z_0| < R_2$, the contour $C$, and the point $z$.]

Proof. Can be done using the Taylor theorem.

Remark 4.19. Representation (4.9) is not unique. It becomes unique only for a specified annulus. Check it for yourself for the following series:
\[ \frac{1}{z(1 - z)} = \sum_{n \ge -1} z^n, \quad 0 < |z| < 1; \qquad \frac{1}{z(1 - z)} = -\sum_{n \le -2} z^n, \quad 1 < |z| < \infty. \]
Exercise 4.20. Prove Theorem 4.18.

Exercise 4.21. Show that
\[ e^z = \sum_{n \ge 0} \frac{z^n}{n!} \]
and find its domain of convergence.

Exercise 4.22. Show that
\[ \frac{1}{1 - z} = \sum_{n \ge 0} z^n \]
and find its domain of convergence.
Example 4.23. Find the Taylor series of $f(z) = \dfrac{1}{z(z - 2)}$ in $|z - 1| < 1$ and its Laurent series in $|z - 1| > 1$.

Solution. Partial fractions give
\[ f(z) = \frac{1}{2} \left( \frac{1}{z - 2} - \frac{1}{z} \right) = \frac{1}{2} \left( -\frac{1}{1 - (z - 1)} - \frac{1}{1 + (z - 1)} \right). \tag{4.10} \]

Taylor series in $|z - 1| < 1$. By Exercise 4.22, (4.10) can be continued:
\[ f(z) = -\frac{1}{2} \left( \sum_{n \ge 0} (z - 1)^n + \sum_{n \ge 0} (-1)^n (z - 1)^n \right) = -\frac{1}{2} \sum_{n \ge 0} \left( 1 + (-1)^n \right) (z - 1)^n = -\sum_{n \ge 0} (z - 1)^{2n}. \]

Laurent series in $|z - 1| > 1$. (4.10) can be continued:
\[ f(z) = \frac{1}{2(z - 1)} \left( \frac{1}{1 - (z - 1)^{-1}} - \frac{1}{1 + (z - 1)^{-1}} \right) \overset{\text{Ex. }4.22}{=} \frac{1}{2} \cdot \frac{1}{z - 1} \left( \sum_{n \ge 0} (z - 1)^{-n} - \sum_{n \ge 0} (-1)^n (z - 1)^{-n} \right) = \frac{1}{2} \sum_{n \ge 0} \left( 1 - (-1)^n \right) (z - 1)^{-n-1} = \sum_{n \ge 0} (z - 1)^{-2n-2}. \]
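Both expansions of Example 4.23 can be sanity-checked numerically (my addition; the sample points are arbitrary, one chosen in each annulus):

```python
def f(z):
    return 1 / (z * (z - 2))

# Taylor in |z - 1| < 1:  f(z) = -sum_{n>=0} (z - 1)^{2n}
z = 1.4 + 0.2j                  # |z - 1| ~ 0.45
w = z - 1
taylor = -sum(w ** (2 * n) for n in range(80))

# Laurent in |z - 1| > 1:  f(z) = sum_{n>=0} (z - 1)^{-2n-2}
Z = 4.0 - 1.0j                  # |Z - 1| ~ 3.16
W = Z - 1
laurent = sum(W ** (-2 * n - 2) for n in range(80))
# taylor agrees with f(z), and laurent agrees with f(Z)
```

Both truncated sums match the exact values to essentially machine precision at these points, since the ratios $|w|^2$ and $|W|^{-2}$ are well below 1.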
Exercise 4.24. Let $f(z) = \dfrac{1}{(1 + z)^3}$ and $z_0 = 0$. Find the Taylor series of $f(z)$ in $|z| < 1$ and its Laurent series in $|z| > 1$.

Exercise 4.25. Let $f(z) = \dfrac{z}{z^2 - 1}$ and $z_0 = 2$. Find the Taylor series of $f(z)$ in $|z - 2| < 1$ and its Laurent series in $|z - 2| > 3$.
LECTURE 5
Singularities of Analytic Functions and the Residue Theorem
1. Singularities
Definition 5.1. A point $z_0$ is called an isolated singularity of an analytic function $f(z)$ if $f(z)$ is analytic in some neighborhood of $z_0$ with the exception of the point $z_0$ itself.

If $z_0$ is a singularity of $f(z)$ then, by definition, $f(z) \in H(D)$ with some annulus $D = \{ z : 0 < |z - z_0| < R \}$. Then, by the Laurent theorem (Theorem 4.18),
\[ f(z) = \underbrace{\sum_{n \le -1} a_n (z - z_0)^n}_{f_1(z)} + \underbrace{\sum_{n \ge 0} a_n (z - z_0)^n}_{f_2(z)}. \tag{5.1} \]
Here $f_1(z)$ is called the essential or improper part, and $f_2(z)$ is called the correct or proper part. We can classify the types of singularities.
Definition 5.2.
1) If $f_1(z) = 0$, then $z_0$ is called a removable singularity.
2) If $f_1(z) = \sum_{n=1}^{N} a_{-n} (z - z_0)^{-n}$ (i.e. $f_1(z)$ has only a finite number of terms), then $z_0$ is called a pole of order $N$.
3) If $f_1(z)$ is an infinite series, then $z_0$ is called an essential singular point.

We'll come to see that removable singularities are harmless, poles are the usual singularities we deal with, and essential singularities are nasty but do not usually occur in physics. There are equivalent mathematical formulations for each type:
1) $z_0$ is removable $\iff a_{-n} = 0$ for $n = 1, 2, \dots$
2) $z_0$ is a pole $\iff \exists N \in \mathbb{N} : a_{-N} \ne 0$ and $a_{-N-1} = a_{-N-2} = \cdots = 0$.
3) $z_0$ is an essential singularity $\iff \forall N \in \mathbb{N} \ \exists n > N : a_{-n} \ne 0$.

Theorem 5.3.
1) $z_0$ is a removable singularity $\iff \lim_{z \to z_0} f(z) = a_0$.
2) $z_0$ is a pole of order $N$ $\iff \lim_{z \to z_0} (z - z_0)^N f(z) = a_{-N}$.
3) $z_0$ is an essential singularity $\iff$ in any neighborhood of $z_0$, $f(z)$ takes on all complex values except for maybe only one particular value.

Proof. Parts 1) and 2) are clear. Part 3) is less trivial and we leave it without proof.
Example 5.4.
1) $f(z) = \dfrac{\sin z}{z}$ has a removable singularity at $z_0 = 0$. Indeed, we have that
\[ \sin z = \sum_{n \ge 0} \frac{(-1)^n}{(2n + 1)!} z^{2n+1}, \]
and hence
\[ \frac{\sin z}{z} = \sum_{n \ge 0} \frac{(-1)^n}{(2n + 1)!} z^{2n}, \]
with no negative powers of $z$.
2) $f(z) = \dfrac{\sin z}{z^3}$ has a pole of order 2 at $z_0 = 0$. Indeed,
\[ \frac{\sin z}{z^3} = \sum_{n \ge 0} \frac{(-1)^n}{(2n + 1)!} z^{2n - 2} = z^{-2} + \sum_{n \ge 0} \frac{(-1)^{n+1}}{(2n + 3)!} z^{2n}. \]
3) $f(z) = e^{1/z}$ has an essential singularity at $z_0 = 0$. Indeed, $e^{1/z} = \sum_{n \ge 0} \frac{1}{n!} z^{-n}$, i.e. the Laurent series of $e^{1/z}$ has an infinite number of terms with negative powers.
Exercise 5.5. Use Laurent series expansions to show that $0$ is
1) a removable singularity for $f(z) = \dfrac{e^z - 1}{z}$;
2) a double pole for $f(z) = \dfrac{e^z - 1}{z^3}$;
3) an essential singularity for $f(z) = e^{1/z} - 1$.

Definition 5.6. A function $f(z)$ is said to be analytic at infinity if the function
\[ g(z) = f(1/z) \]
is analytic at $z = 0$.

It is clear that all the definitions and theorems also apply for $z = \infty$. Restate all of them for $z = \infty$ as an exercise.
2. Residues

Definition 5.7. Let $z_0$ be a singular point and let $C$ be any contour enclosing $z_0$ such that $f \in H(\operatorname{Int} C \setminus \{z_0\})$. Then
\[ \operatorname{Res}\{f(z), z_0\} := \frac{1}{2\pi i} \oint_C f(z)\,dz. \]

Remark 5.8. Comparing this with the Laurent coefficient formula, we also have
\[ \operatorname{Res}\{f(z), z_0\} = a_{-1}. \]
Theorem 5.9. Let $z_0$ be a pole of order $N$. Then
\[ \operatorname{Res}\{f(z), z_0\} = \frac{1}{(N - 1)!} \left. \frac{d^{N-1}}{dz^{N-1}} \left[ (z - z_0)^N f(z) \right] \right|_{z = z_0}. \tag{5.2} \]
This is exciting! No integral; instead, a derivative!
Proof. By Lemma 3.3,
\[ \operatorname{Res}\{f(z), z_0\} = \frac{1}{2\pi i} \oint_C f(z)\,dz = \frac{1}{2\pi i} \oint_{C_\varepsilon(z_0)} f(z)\,dz, \tag{5.3} \]
where $C_\varepsilon(z_0) = \{ z : |z - z_0| = \varepsilon \}$. Since $f(z) \in H(\{ z : 0 < |z - z_0| < \varepsilon \})$, by Theorem 4.18
\[ f(z) = (z - z_0)^{-N} \sum_{n \ge -N} a_n (z - z_0)^{n + N} = (z - z_0)^{-N} \underbrace{\sum_{n \ge 0} a_{n - N} (z - z_0)^n}_{g(z)}, \tag{5.4} \]
and $g(z_0) \ne 0$ since $a_{-N} \ne 0$. Now (5.3) can be continued:
\[ \operatorname{Res}\{f(z), z_0\} = \frac{1}{2\pi i} \oint_{C_\varepsilon(z_0)} (z - z_0)^{-N} g(z)\,dz. \]
Recall the Cauchy formula (Theorem 3.4):
\[ g^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_C \frac{g(z)\,dz}{(z - z_0)^{n+1}}. \]
So set $N = n + 1$, i.e. $n = N - 1$, and then
\[ g^{(N-1)}(z_0) = \frac{(N - 1)!}{2\pi i} \oint_C \frac{g(z)\,dz}{(z - z_0)^N}. \]
Hence
\[ \operatorname{Res}\{f(z), z_0\} = \frac{g^{(N-1)}(z_0)}{(N - 1)!} = \frac{1}{(N - 1)!} \left. \frac{d^{N-1}}{dz^{N-1}} g(z) \right|_{z = z_0} \overset{(5.4)}{=} \frac{1}{(N - 1)!} \left. \frac{d^{N-1}}{dz^{N-1}} \left[ (z - z_0)^N f(z) \right] \right|_{z = z_0}. \quad \text{QED} \]
Remark 5.10. The formula above does not apply to essential singularities.

Corollary 5.11. If $z_0$ is a simple pole and $f$ can be represented in some neighborhood of $z_0$ as
\[ f(z) = \frac{\varphi(z)}{\psi(z)} \]
with some analytic functions $\varphi, \psi$ such that $\varphi(z_0) \ne 0$, $\psi(z_0) = 0$, then
\[ \operatorname{Res}\{f, z_0\} = \frac{\varphi(z_0)}{\psi'(z_0)}. \]

Proof. Left as an exercise.
Example 5.12. To evaluate $\oint_{C_1(i)} \dfrac{dz}{z^2 + 1}$, consider $f(z) = \dfrac{1}{z^2 + 1}$ and note that $\pm i$ are isolated singularities. Furthermore, only $i$ is inside our contour, and we can write
\[ f(z) = \frac{1}{(z - i)(z + i)} = \frac{g(z)}{z - i}, \]
where $g(z) = \frac{1}{z + i}$, and thus $g(i) = \frac{1}{2i}$ is finite and nonzero. So if we expand $g(z)$ in powers of $z - i$, the first term is nonzero and all powers are nonnegative. Hence there is only one term with a negative power in the expansion of $f$ around $i$, i.e. $i$ is a pole of order 1. Therefore
\[ \oint_{C_1(i)} \frac{dz}{z^2 + 1} = 2\pi i \operatorname{Res}_{z = i} \frac{1}{z^2 + 1} = 2\pi i \left. (z - i) \frac{1}{z^2 + 1} \right|_{z = i} = 2\pi i \left. \frac{1}{z + i} \right|_{z = i} = \pi. \]
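As a cross-check (my addition), the same contour integral can be computed by brute-force discretization of the circle $|z - i| = 1$:

```python
import cmath
import math

def circle_integral(f, center, rho, m=20000):
    # midpoint rule for the counterclockwise circle |z - center| = rho
    total = 0j
    for k in range(m):
        theta = 2 * math.pi * (k + 0.5) / m
        z = center + rho * cmath.exp(1j * theta)
        total += f(z) * 1j * rho * cmath.exp(1j * theta) * (2 * math.pi / m)
    return total

I = circle_integral(lambda z: 1 / (z * z + 1), 1j, 1.0)
# I is close to pi (a complex number with tiny imaginary part)
```

The quadrature converges geometrically here since the other pole, $-i$, stays well away from the contour.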
Example 5.13. To evaluate $\oint_{C_1(0)} e^{1/z}\,dz$, we cannot use the residue formula (5.2) since $0$ is an essential singularity. Instead, we use Remark 5.8 and find $\operatorname{Res}\{e^{1/z}, 0\} = a_{-1} = 1$. So
\[ \oint_{C_1(0)} e^{1/z}\,dz = 2\pi i \cdot 1 = 2\pi i. \]
Theorem 5.14 (The Residue Theorem). Let $f(z)$ be analytic inside a contour $C$ except for a finite number of poles $\{z_k\}_{k=1}^{n} = \{z_1, z_2, \dots, z_n\}$. Then
\[ \oint_C f(z)\,dz = 2\pi i \sum_{k=1}^{n} \operatorname{Res}\{f(z), z_k\}. \tag{5.5} \]

Proof. It's enough to prove it for $n = 2$. By Lemma 3.3 we can deform $C$ into
\[ C' = \{ z : |z - z_1| = \varepsilon \} \cup \{ z : |z - z_2| = \varepsilon \}, \quad \text{where } \varepsilon < \frac{|z_1 - z_2|}{2} \ \text{(small enough)}. \]
[Figure: the contour $C$ deformed into two small circles around $z_1$ and $z_2$, joined by a cut $\gamma_1$.]
Since $\int_{\gamma_1} + \int_{\gamma_2} = 0$, we get
\[ \oint_C f(z)\,dz = \oint_{|z - z_1| = \varepsilon} f(z)\,dz + \oint_{|z - z_2| = \varepsilon} f(z)\,dz. \quad \text{QED} \]
Exercise 5.15. Evaluate the following integrals:
a) $\displaystyle \oint_{|z - 2i| = 1} \frac{dz}{z^2 + 4}$;
b) $\displaystyle \oint_{|z| = 2} \frac{\cosh z}{z(z^2 + 1)}\,dz$;
c) $\displaystyle \oint_{|z| = 2} z e^{1/z}\,dz$;
d) $\displaystyle \oint_C \frac{z - 1}{z^2 + iz + 2}\,dz$, where $C = \{ (x, y) : x^4 + y^4 = 4 \}$.
LECTURE 6

Applications of the Residue Theorem to Evaluation of Some Definite Integrals

In this lecture we are going to show that the residue theorem turns out to be very helpful in computing some definite integrals which can hardly be computed by any other means.

1. Integrals of the type $\int_0^{2\pi} R(\cos\theta, \sin\theta)\,d\theta$

Here $R$ is a rational function, i.e. a quotient of polynomials (over $\mathbb{R}$). As $\theta$ runs over $[0, 2\pi]$, the point $z = e^{i\theta}$ runs along the unit circle $\{ z : |z| = 1 \}$. We have
\[ z = e^{i\theta} = \cos\theta + i\sin\theta, \qquad \bar z = e^{-i\theta} = \cos\theta - i\sin\theta \implies \cos\theta = \frac{z + \bar z}{2}, \quad \sin\theta = \frac{z - \bar z}{2i}. \]
Furthermore, $\bar z = 1/z$ since $1 = |z|^2 = z\bar z$, and $dz = i e^{i\theta}\,d\theta = iz\,d\theta$. So we have
\[ d\theta = \frac{dz}{iz}, \qquad \cos\theta = \frac{1}{2} \left( z + \frac{1}{z} \right), \qquad \sin\theta = \frac{1}{2i} \left( z - \frac{1}{z} \right), \]
and making these substitutions we get
\[ I := \int_0^{2\pi} R(\cos\theta, \sin\theta)\,d\theta = \oint_{|z| = 1} \widetilde{R}(z)\,dz, \]
where $\widetilde{R}(z)$ is a new rational function of $z$:
\[ \widetilde{R}(z) = \frac{P_n(z)}{Q_m(z)}, \quad P_n, Q_m \text{ polynomials of order } n, m \text{ respectively}. \]
The function $\widetilde{R}(z)$ is analytic inside $\{ z : |z| = 1 \}$ except for a finite number of poles $z_1, \dots, z_N$, $N \le m$. By the Residue Theorem,
\[ I = 2\pi i \sum_{k=1}^{N} \operatorname{Res}\left\{ \widetilde{R}(z), z_k \right\}. \]
Example 6.1. Compute
\[ I = \int_0^{2\pi} \frac{d\theta}{1 + a\cos\theta}, \quad |a| < 1. \]
Put $z = e^{i\theta}$. We have
\[ I = \oint_{|z| = 1} \frac{dz}{iz \left( 1 + a \frac{z + \frac{1}{z}}{2} \right)} = \frac{2}{i} \oint_{|z| = 1} \frac{dz}{az^2 + 2z + a}. \]
The poles of $\widetilde{R}(z) = \dfrac{1}{az^2 + 2z + a}$ are the solutions of $az^2 + 2z + a = 0$:
\[ z_{1,2} = \frac{-1 \pm \sqrt{1 - a^2}}{a}. \]
Note that we can rewrite $z_1$ in various forms:
\[ z_1 = \frac{\sqrt{1 - a^2} - 1}{a} = -\frac{1 - \sqrt{1 - a^2}}{a} = -\frac{a}{\sqrt{1 - a^2} + 1}. \]
From this last one, recall $|a| < 1$, so $\sqrt{1 - a^2} + 1 > 1$ and $|z_1| < 1$. But $z_1 z_2 = 1 \implies |z_2| = \frac{1}{|z_1|} > 1$, so only $z_1$ is inside of the contour $|z| = 1$. By the Residue Theorem,
\[ I = 2\pi i \cdot \frac{2}{i} \operatorname{Res}\left\{ \frac{1}{az^2 + 2z + a}, z_1 \right\}. \tag{6.1} \]
By Theorem 5.9,
\[ \operatorname{Res}\left\{ \frac{1}{az^2 + 2z + a}, z_1 \right\} = \lim_{z \to z_1} \frac{z - z_1}{az^2 + 2z + a} = \lim_{z \to z_1} \frac{z - z_1}{a(z - z_1)(z - z_2)} = \frac{1}{a(z_1 - z_2)} = \frac{1}{2a \frac{\sqrt{1 - a^2}}{a}} = \frac{1}{2\sqrt{1 - a^2}}. \]
So we have for (6.1):
\[ I = 4\pi \cdot \frac{1}{2\sqrt{1 - a^2}} = \frac{2\pi}{\sqrt{1 - a^2}}. \]
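A direct quadrature check of Example 6.1 (my addition; $a = 0.6$ is an arbitrary choice with $|a| < 1$):

```python
import math

def I_numeric(a, m=4000):
    # midpoint rule; the integrand is smooth and 2*pi-periodic,
    # so the rule converges extremely fast
    h = 2 * math.pi / m
    return sum(h / (1 + a * math.cos((k + 0.5) * h)) for k in range(m))

a = 0.6
approx = I_numeric(a)
exact = 2 * math.pi / math.sqrt(1 - a * a)
# approx agrees with exact to machine precision
```

For periodic smooth integrands the midpoint (equivalently, trapezoid) rule is spectrally accurate, which is why a few thousand points already give full precision.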
2. Integrals of the type $\int_{-\infty}^{\infty} f(x)\,dx$

The computation of this integral is based on

Lemma 6.2. Let $f(z)$ be analytic in $\mathbb{C}^+ := \{ z : \operatorname{Im} z > 0 \}$ except for a finite number of poles. If
\[ |f(z)| \le \frac{M}{|z|^{1 + \varepsilon}}, \quad \text{with some } M > 0, \ \varepsilon > 0, \]
for sufficiently large $|z|$, then
\[ \lim_{R \to \infty} \int_{C_R^+} f(z)\,dz = 0, \]
where $C_R^+$ is the semicircular arc of radius $R$ in the upper half plane.

Proof. By Theorem 2.16,
\[ \left| \int_{C_R^+} f(z)\,dz \right| \le \int_{C_R^+} |f(z)|\,|dz| \le M \int_{C_R^+} \frac{|dz|}{|z|^{1 + \varepsilon}} = \frac{M}{R^{1 + \varepsilon}} \int_{C_R^+} |dz| = \frac{M}{R^{1 + \varepsilon}} \int_0^{\pi} R\,d\theta = \frac{\pi M}{R^{\varepsilon}} \to 0, \quad R \to \infty. \quad \text{QED} \]
Theorem 6.3. Let $f(z)$ satisfy the conditions of Lemma 6.2 and let $f(z)$ have no poles on $\mathbb{R}$. If $f(z)|_{z \in \mathbb{R}} = f(x)$, then
\[ \int_{-\infty}^{\infty} f(x)\,dx = 2\pi i \sum_{k=1}^{N} \operatorname{Res}\{f(z), z_k\}. \tag{6.2} \]

Proof. By hypothesis $f(z) \in H(\mathbb{C}^+ \setminus \{z_k\}_{k=1}^{N})$, and by the Residue Theorem
\[ \oint_{C_R} f(z)\,dz = 2\pi i \sum_{k=1}^{N} \operatorname{Res}\{f(z), z_k\}, \tag{6.3} \]
where $C_R = C_R^+ \cup (-R, R)$ and $R$ is large enough that all the poles of $f(z)$ get inside of $C_R$.
[Figure: the closed contour $C_R$ consisting of the segment $(-R, R)$ and the arc $C_R^+$, with the poles $z_1, z_2, \dots, z_N$ inside.]
But
\[ \oint_{C_R} = \int_{-R}^{R} + \int_{C_R^+}, \]
where $C_R^+$ is as in Lemma 6.2. It then follows from (6.3) that
\[ \int_{-R}^{R} f(x)\,dx = 2\pi i \sum_{k=1}^{N} \operatorname{Res}\{f(z), z_k\} - \int_{C_R^+} f(z)\,dz. \]
Now pass to the limit as $R \to \infty$ in this equation and we get
\[ \int_{-\infty}^{\infty} f(x)\,dx = 2\pi i \sum_{k=1}^{N} \operatorname{Res}\{f(z), z_k\} - \underbrace{\lim_{R \to \infty} \int_{C_R^+} f(z)\,dz}_{= 0 \text{ by Lemma 6.2}}, \]
and the theorem is proven.
Example 6.4. Prove that
$$\int_{-\infty}^{\infty} \frac{dx}{x^4 + 1} = \frac{\pi\sqrt{2}}{2}.$$
Consider $f(z) = \dfrac{1}{z^4 + 1}$. Note that $|f(z)| \le \dfrac{M}{|z|^4}$ and the poles solve $z^4 + 1 = 0$, i.e.
$$z_k = \left(e^{i\pi/4}\right)^{2k-1} \quad \text{for } k = 1, \dots, 4.$$
Only $z_1$, $z_2$ are in $\mathbb{C}^+$ and since they're simple zeros of $z^4 + 1$, they're simple poles of $f(z)$. Hence by Theorem 6.3 and Corollary 5.11
$$\int_{-\infty}^{\infty} \frac{dx}{x^4 + 1} = 2\pi i\left(\frac{1}{4z_1^3} + \frac{1}{4z_2^3}\right) = \frac{\pi i}{2}\left(\frac{1}{z_1^3} + \frac{1}{z_2^3}\right).$$
But since $z_k^4 = -1$, then $\dfrac{1}{z_k^3} = -z_k$, and since $z_1 = e^{i\pi/4} = \dfrac{1+i}{\sqrt 2}$, $z_2 = e^{3i\pi/4} = \dfrac{-1+i}{\sqrt 2}$, it follows that
$$\int_{-\infty}^{\infty} \frac{dx}{x^4 + 1} = \frac{\pi i}{2}\left(-z_1 - z_2\right) = -\frac{\pi i}{2}\cdot\frac{2i}{\sqrt 2} = \frac{\pi\sqrt 2}{2}.$$
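Again, a quick numerical check of the answer (a sketch assuming SciPy, not part of the notes):

```python
import math
from scipy.integrate import quad

# integrate 1/(x^4 + 1) over the whole real line
val, _ = quad(lambda x: 1.0 / (x**4 + 1), -math.inf, math.inf)
closed_form = math.pi * math.sqrt(2) / 2  # the value found via residues
assert abs(val - closed_form) < 1e-8
```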
3. Integrals of the type $\int_{-\infty}^{\infty} e^{i\omega x} f(x)\,dx$

You can assume $\omega > 0$ and $f : \mathbb{R} \to \mathbb{R}$. These integrals are important in harmonic analysis (signal processing) where $\omega$ is the frequency and $x$ is a spatial or temporal variable.

Lemma 6.5 (Jordan's Lemma). Let $f(z)$ be analytic in $\mathbb{C}^+$ except for a finite number of poles and
$$\lim_{z \to \infty} f(z) = 0, \qquad \operatorname{Im} z \ge 0.$$
Then
$$\lim_{R \to \infty} \int_{C_R^+} e^{i\omega z} f(z)\,dz = 0, \qquad \text{if } \omega > 0.$$
We offer this lemma without proof. Note that $C_R^+$ is as before the arc of radius $R$ in the upper half plane. Note also that $z \to \infty$ is equivalent to $|z| \to \infty$, and this can happen in many ways in the upper half plane.

Theorem 6.6. Let $f(z)$ be subject to the conditions of the Jordan Lemma and have no poles on $\mathbb{R}$. Then if $\omega > 0$,
$$\int_{-\infty}^{\infty} e^{i\omega x} f(x)\,dx = 2\pi i \sum_{k=1}^N \operatorname{Res}\left(e^{i\omega z} f(z), z_k\right).$$

Proof can be done in the very same manner as Theorem 6.3. Do it!

Example 6.7. Compute
$$\int_{-\infty}^{\infty} \frac{\cos\omega x}{x^2 + a^2}\,dx, \qquad \omega > 0,\ a > 0.$$
Show that the integral is $\dfrac{\pi e^{-\omega a}}{a}$.
Note that
$$\int_{-\infty}^{\infty} \frac{\cos\omega x}{x^2 + a^2}\,dx = \operatorname{Re}\int_{-\infty}^{\infty} \frac{e^{i\omega x}}{x^2 + a^2}\,dx =: \operatorname{Re}(I).$$
But $f(z) = \dfrac{1}{z^2 + a^2}$ is analytic in $\mathbb{C}^+ \setminus \{ia\}$, where $z_1 = ia$ is a simple pole, and $\lim_{z\to\infty} f(z) = 0$. So by Theorem 6.6 and Corollary 5.11
$$I = 2\pi i \operatorname{Res}\left(e^{i\omega z} f(z), ia\right) = 2\pi i\, e^{-\omega a}\,\frac{1}{2ia} = \frac{\pi}{a}\, e^{-\omega a}.$$
So
$$\int_{-\infty}^{\infty} \frac{\cos\omega x}{x^2 + a^2}\,dx = \operatorname{Re}(I) = \frac{\pi}{a}\, e^{-\omega a}.$$
Observe the following:
- when $a \to \infty$, the value decays, which makes sense.
- when $\omega \to \infty$, the integral converges to zero also, but this is not intuitive; it is caused by lots of cancellations because of high frequencies.
- when $\omega \to 0$, we verify that $\displaystyle\int_{-\infty}^{\infty} \frac{dx}{x^2 + a^2} = \frac{1}{a}\arctan\frac{x}{a}\Big|_{-\infty}^{\infty} = \frac{\pi}{a}$.
- when $a \to 0$, we verify that $\displaystyle\int_{-\infty}^{\infty} \frac{\cos\omega x}{x^2}\,dx$ diverges.
Remember that a real integral returns a real value!
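These limits are easy to probe numerically. The sketch below (assuming SciPy; its Fourier-weight quadrature routine handles the oscillatory factor) checks the value $(\pi/a)e^{-\omega a}$ on a few $(\omega, a)$ pairs:

```python
import math
from scipy.integrate import quad

def lhs(w, a):
    # even integrand: integrate 1/(x^2 + a^2) against cos(w x) over (0, inf)
    # and double; weight='cos' invokes the dedicated Fourier-integral routine
    val, _ = quad(lambda x: 1.0 / (x * x + a * a), 0, math.inf,
                  weight='cos', wvar=w)
    return 2 * val

def rhs(w, a):
    return math.pi / a * math.exp(-w * a)

for w, a in [(1.0, 1.0), (2.0, 0.5), (0.5, 3.0)]:
    assert abs(lhs(w, a) - rhs(w, a)) < 1e-6
```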
Exercise 6.8. Evaluate the following integrals
a) $\displaystyle\int_0^{2\pi} \frac{d\theta}{1 + \varepsilon\cos\theta}$, $|\varepsilon| < 1$.
b) $\displaystyle\int_0^{2\pi} \frac{\cos 3\theta}{5 - 4\cos\theta}\,d\theta$.
c) $\displaystyle\int_0^{\pi} \frac{d\theta}{1 + \sin^2\theta}$.
d) $\displaystyle\int_0^{2\pi} \frac{d\theta}{(a + b\sin\theta)^2}$.
e) $\displaystyle\int_0^{2\pi} \cos^n\theta\,d\theta$.
Exercise 6.9. Evaluate the integrals
a) $\displaystyle\int_{-\infty}^{\infty} \frac{dx}{x^4 + a^4}$.
b) $\displaystyle\int_0^{\infty} \frac{x^2}{x^6 + 1}\,dx$.
c) $\displaystyle\int_{-\infty}^{\infty} \frac{dx}{(x^2 + 1)^3}$.
d) $\displaystyle\int_{-\infty}^{\infty} \frac{\cos kx\,dx}{(x - a)^2 + b^2}$.
e) $\displaystyle\int_0^{\infty} \frac{x\sin x}{x^2 + 1}\,dx$.
f) $\displaystyle\int_{-\infty}^{\infty} \frac{\cos x\,dx}{(x^2 + a^2)(x^2 + b^2)}$.
LECTURE 7
Applications of the Residue Theorem to Some Improper
Integrals and Integrals Involving Multivalued Functions
1. Indented Contours and Cauchy Principal Values
Example 7.1. $I = \displaystyle\int_0^{\infty} \frac{\sin\alpha x}{x}\,dx = \frac{\pi}{2}$, $\alpha > 0$. Let's prove it!

Note that $I$ is improper at both ends, but it should be ok at $0$ since $\lim_{x\to 0} \frac{\sin\alpha x}{x} = \alpha$; yet we have to find something to do about it.

Since $\frac{\sin\alpha x}{x}$ is even,
$$I = \frac{1}{2}\int_{-\infty}^{\infty} \frac{\sin\alpha x}{x}\,dx = \frac{1}{2}\operatorname{Im}\int_{-\infty}^{\infty} \frac{e^{i\alpha x}}{x}\,dx. \tag{7.1}$$
Theorem 6.6 does not apply since $f(z) = \frac{1}{z}$ has a pole on the real axis. But it is not crucial. First of all, we have to agree upon how we understand (7.1):
$$\int_{-\infty}^{\infty} \frac{e^{i\alpha x}}{x}\,dx \equiv \lim_{\substack{\varepsilon \to 0 \\ R \to \infty}} \left(\int_{-R}^{-\varepsilon} + \int_{\varepsilon}^{R}\right) \frac{e^{i\alpha x}}{x}\,dx. \tag{7.2}$$
So we understand it as the principal value by Cauchy. Consider the contour $C$:

[Figure: the indented contour $C$ made of the segments $[-R, -\varepsilon]$ and $[\varepsilon, R]$, the small semicircle $C_\varepsilon^+$ around $0$, and the large semicircle $C_R^+$; $D$ is the enclosed domain.]

This is an indented contour: we're careful to drive around the pothole. Since $\frac{e^{i\alpha z}}{z} \in H(D)$, by the Cauchy theorem
$$0 = \oint_C \frac{e^{i\alpha z}}{z}\,dz = \left(\int_{-R}^{-\varepsilon} + \int_{\varepsilon}^{R}\right) \frac{e^{i\alpha x}}{x}\,dx + \int_{C_R^+} \frac{e^{i\alpha z}}{z}\,dz + \int_{C_\varepsilon^+} \frac{e^{i\alpha z}}{z}\,dz.$$
Note that the $0$ on the LHS is independent of $\varepsilon$, $R$, so they can run freely away, $R$ to infinity and $\varepsilon$ to $0$.
So it follows from this formula that
$$\lim_{\substack{\varepsilon\to0\\ R\to\infty}} \left(\int_{-R}^{-\varepsilon} + \int_{\varepsilon}^{R}\right) \frac{e^{i\alpha x}}{x}\,dx = -\underbrace{\lim_{R\to\infty} \int_{C_R^+} \frac{e^{i\alpha z}}{z}\,dz}_{=0,\ \text{by Jordan's lemma}} - \lim_{\varepsilon\to0} \int_{C_\varepsilon^+} \frac{e^{i\alpha z}}{z}\,dz$$
$$= -\lim_{\varepsilon\to0} \int_{\pi}^{0} \frac{e^{i\alpha\varepsilon e^{i\theta}}}{\varepsilon e^{i\theta}}\, i\varepsilon e^{i\theta}\,d\theta = \lim_{\varepsilon\to0} \int_0^{\pi} e^{\alpha\varepsilon(i\cos\theta - \sin\theta)}\, i\,d\theta = \int_0^{\pi} \underbrace{\lim_{\varepsilon\to0} e^{\alpha\varepsilon(i\cos\theta - \sin\theta)}}_{=1}\, i\,d\theta = i\pi.$$
Hence, by (7.1), $I = \frac{1}{2}\operatorname{Im}(i\pi) = \frac{\pi}{2}$.

Here we switched $\lim_{\varepsilon\to0}$ and the integral, which is legitimate since the integrand converges uniformly on $[0, \pi]$. Note also:
- if $\alpha = 0$, the integral is $0$;
- if $\alpha < 0$, rewrite $\sin\alpha x = -\sin(-\alpha)x$; then the integral is $-\pi/2$;
so a more general result is
$$\frac{2}{\pi}\int_0^{\infty} \frac{\sin\alpha x}{x}\,dx = \operatorname{sgn}\alpha.$$
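The sine integral $\operatorname{Si}(X) = \int_0^X \frac{\sin t}{t}\,dt$ is tabulated in SciPy, so the limit $\pi/2$ and the sign flip for $\alpha < 0$ can be checked directly (a sketch, not part of the notes):

```python
import math
from scipy.special import sici

# Si(X) = int_0^X sin(t)/t dt; as X -> infinity, Si(X) -> pi/2
si_large, _ = sici(1e8)
assert abs(si_large - math.pi / 2) < 1e-6

# Si is odd, mirroring int_0^inf sin(alpha x)/x dx = (pi/2) sgn(alpha)
si_neg, _ = sici(-1e8)
assert abs(si_neg + math.pi / 2) < 1e-6
```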
2. Integrals of the type $\int_0^{\infty} x^{\alpha-1} f(x)\,dx$, $0 < \alpha < 1$

Before we present the main result of this section, we must introduce a new concept: branch cuts.

Whereas $z^2$ is analytic, we would like $\sqrt z$ to be the inverse of $z^2$, but how can we define it? There is more to it than just $(\sqrt z)^2 = z$ because there are two choices. Indeed, for $z = e^{i\theta}$ we naturally think of $\sqrt z = e^{i\theta/2}$. But since we also have $z = e^{i(\theta + 2\pi)}$, note that then
$$\sqrt z = e^{i(\theta/2 + \pi)} = -e^{i\theta/2}.$$

[Figure: the unit circle traversed once, $0 \le \theta < 2\pi$; starting at $z = e^{i0} = 1$ with $\sqrt z = 1$ and returning to $z = e^{2\pi i} = 1$ with $\sqrt z = -1$.]

So if we define $\sqrt z$ by cutting the plane along $[0, \infty)$, each point on $[0, \infty)$ is a singularity, but not an isolated one, so we can't do residue calculations on it.

A similar issue arises with the logarithmic function.

Note also that the cut along $\mathbb{R}^+$ is not the only one possible; any ray is ok since it precludes any contour/neighborhood from going around the origin.

[Figure: a cut along an arbitrary ray from the origin.]
Theorem 7.2. Let $f(z)$ be analytic in $\mathbb{C}$ except for a finite number of poles off the positive part of the real axis. Assume
$$|f(z)| \le \frac{M}{|z|^{\alpha + \varepsilon}}$$
for some $\varepsilon > 0$, $0 < \alpha < 1$, and $|z| > r$. Then
$$\int_0^{\infty} x^{\alpha-1} f(x)\,dx = \frac{2\pi i}{1 - e^{2\pi i\alpha}} \sum_{k=1}^N \operatorname{Res}\left(z^{\alpha-1} f(z), z_k\right).$$

Note that $f(x) = \frac{\sin x}{x}$ does not satisfy the conditions above: even though $|f(x)| \le \frac{1}{|x|}$ on the real line, this is no longer true for $z$ in the complex plane since $\sin z$ is unbounded.
Proof. Note first that $z^{\alpha-1}$ cannot be analytic on the whole $\mathbb{C}$, but it is analytic on $\mathbb{C} \setminus \mathbb{R}^+$, $\mathbb{R}^+ = [0, \infty)$. Make sure that you understand it!

Consider the following contour $C$:

[Figure: the keyhole contour $C$ around the cut $[0, \infty)$: the upper edge $\gamma^+$ of the cut from $\varepsilon$ to $R$, the large circle $C_R$, the lower edge $\gamma^-$ back from $R$ to $\varepsilon$, and the small circle $C_\varepsilon$ around $0$; the poles $\{z_k\}_{k=1}^N$ lie inside.]

Put $\Phi(z) = z^{\alpha-1} f(z)$. By the Residue Theorem
$$\oint_C \Phi(z)\,dz = 2\pi i \sum_{k=1}^N \operatorname{Res}(\Phi(z), z_k).$$
On the other hand,
$$\oint_C \Phi(z)\,dz = \underbrace{\int_{\gamma^+} z^{\alpha-1} f(z)\,dz}_{=\int_\varepsilon^R x^{\alpha-1} f(x)\,dx} + \underbrace{\int_{C_R} z^{\alpha-1} f(z)\,dz}_{=:I_1} + \int_{\gamma^-} z^{\alpha-1} f(z)\,dz + \underbrace{\int_{C_\varepsilon} z^{\alpha-1} f(z)\,dz}_{=:I_2}.$$
But because $f(x)$ is analytic on $\mathbb{R}^+$, it returns the same value along $\gamma^+$ or $\gamma^-$, while $z^{\alpha-1}$ gains the factor $e^{(\alpha-1)2\pi i}$ along $\gamma^-$; hence
$$\oint_C \Phi(z)\,dz = \int_\varepsilon^R x^{\alpha-1} f(x)\,dx + I_1 - e^{(\alpha-1)2\pi i} \int_\varepsilon^R x^{\alpha-1} f(x)\,dx + I_2 = \left(1 - e^{(\alpha-1)2\pi i}\right) \int_\varepsilon^R x^{\alpha-1} f(x)\,dx + I_1 + I_2. \tag{7.3}$$
For $I_1$ we have
$$|I_1| \le \int_{C_R} |z|^{\alpha-1}\, |f(z)|\,|dz| \le \max_{z \in C_R} |f(z)| \int_0^{2\pi} R^{\alpha-1}\, R\,d\theta = \max_{|z|=R} |f(z)|\, R^\alpha\, 2\pi \le \frac{2\pi M}{R^\varepsilon} \to 0, \quad R \to \infty.$$
$$\Longrightarrow \lim_{R\to\infty} I_1 = 0.$$
For $I_2$ in the same manner we get
$$|I_2| \le \max_{|z|=\varepsilon} |f(z)| \int_{C_\varepsilon} \varepsilon^{\alpha-1}\,|dz| = 2\pi\varepsilon^{\alpha}\, \underbrace{\max_{|z|=\varepsilon} |f(z)|}_{\text{finite since no poles}} \to 0, \quad \varepsilon \to 0.$$
$$\Longrightarrow \lim_{\varepsilon\to0} I_2 = 0.$$
But $\lim_{\varepsilon\to0,\,R\to\infty} \int_\varepsilon^R x^{\alpha-1} f(x)\,dx = \int_0^{\infty} x^{\alpha-1} f(x)\,dx$, so passing to the limit in (7.3) and noting $e^{(\alpha-1)2\pi i} = e^{2\pi i\alpha}$, we arrive at the claimed formula. QED
Example 7.3.
$$\int_0^{\infty} \frac{x^{\alpha-1}}{x + 1}\,dx = \frac{\pi}{\sin\pi\alpha}, \qquad 0 < \alpha < 1.$$
Note that $f(x) = \dfrac{1}{x+1}$ admits an analytic continuation $f(z) = \dfrac{1}{z+1}$ with one pole: $-1$. We also have $\varepsilon = 1 - \alpha > 0$ since $0 < \alpha < 1$. So by Theorem 7.2
$$\int_0^{\infty} \frac{x^{\alpha-1}}{x+1}\,dx = \frac{2\pi i}{1 - e^{2\pi i(\alpha-1)}}\,(-1)^{\alpha-1} = \frac{2\pi i\, e^{i\pi(\alpha-1)}}{1 - e^{2\pi i(\alpha-1)}} = \frac{2\pi i}{e^{-i\pi(\alpha-1)} - e^{i\pi(\alpha-1)}} = -\frac{\pi}{\sin\pi(\alpha-1)} = \frac{\pi}{\sin\pi\alpha}.$$
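A numerical spot check of this beta-type formula (a sketch assuming SciPy; quad copes with the integrable singularity at $0$):

```python
import math
from scipy.integrate import quad

def lhs(alpha):
    # integrand behaves like x^(alpha-1) near 0: singular but integrable
    val, _ = quad(lambda x: x**(alpha - 1) / (x + 1), 0, math.inf)
    return val

for alpha in (0.25, 0.5, 0.75):
    assert abs(lhs(alpha) - math.pi / math.sin(math.pi * alpha)) < 1e-6
```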
Exercise 7.4. Show that ($-1 < \alpha < 3$)
$$\int_0^{\infty} \frac{x^\alpha}{(x^2 + 1)^2}\,dx = \frac{\pi(1 - \alpha)}{4\cos\frac{\pi\alpha}{2}}.$$
Part 2

Linear Spaces

LECTURE 8
Vector Spaces

1. Basic Definitions

The concept of a vector space is central in math physics and it's going to be a part of our math language.

Definition 8.1. A vector space $E$ is a set of elements (also called vectors) equipped with an operation $+$ and multiplication by a scalar, subject to
1. $X, Y \in E \Rightarrow X + Y \in E$ (closure under addition)
2. $X + Y = Y + X$ (commutative law)
3. $(X + Y) + Z = X + (Y + Z)$ (associative law)
4. $\exists\, 0 : X + 0 = X,\ \forall X \in E$ (existence of zero vector)
5. $\forall X \in E\ \exists (-X) : X + (-X) = 0$ (existence of additive inverse)
6. $X \in E \Rightarrow cX \in E$ ($c$ is a scalar) (closure under scalar multiplication)
7. $a(bX) = (ab)X$, $a, b$ scalars (associative law)
8. $(a + b)X = aX + bX$ (distributive law with respect to scalar addition)
9. $a(X + Y) = aX + aY$ (distributive law with respect to vector addition)
10. $1 \cdot X = X$ (invariance with respect to multiplication by unity)
Note, first of all, that our usual 3-space ($E^3$) is a space whose elements are the usual three-component vectors with $+$ defined by
$$X + Y = \begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} + \begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix} = \begin{pmatrix}x_1+y_1\\x_2+y_2\\x_3+y_3\end{pmatrix}$$
and multiplication by scalars
$$cX = c\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix}cx_1\\cx_2\\cx_3\end{pmatrix}.$$
Verify properties #1-10!
Consider now less simple examples.
Example 8.2. Let $y'' + py' + qy = 0$ be a second order linear homogeneous differential equation. If $y_1$, $y_2$ are two solutions, $y_1 + y_2$ is a solution too and so are $cy_1$, $cy_2$, where $c$ is an arbitrary constant. Operations $+$ and multiplication by a scalar are the usual $+$ and $\cdot$. One can easily verify that the set of all solutions to this differential equation forms a linear space.
Example 8.3. Consider the set $P_n$ of all polynomials of order not greater than $n$. It's clear that if $P_1$ and $P_2$ are polynomials then $P_1 + P_2$ is a polynomial too. I.e.,
$$P_1, P_2 \in P_n \Longrightarrow P_1 + P_2 \in P_n,$$
where $+$ is the usual addition. Next, if $a$ is a scalar,
$$P \in P_n \Longrightarrow aP \in P_n.$$
So $P_n$ forms a linear space.
Example 8.4. Let C[0, 1] be the set of all continuous functions on [0, 1]. As in
Example 8.3, C[0, 1] is a linear space. (Check it!)
These examples show that a linear space is not a weird object.
Definition 8.5. A linear space is called real if all scalars in Def 8.1 are real numbers.

Definition 8.6. A linear space is called complex if all scalars in Def 8.1 are complex numbers.
2. Bases
Some more definitions.
Definition 8.7. Vectors $X_1, X_2, \dots, X_n \in E$ are called linearly independent if the equation
$$c_1X_1 + c_2X_2 + \dots + c_nX_n = 0 \qquad (c_1, \dots, c_n \text{ are scalars})$$
holds only for $c_1 = c_2 = \dots = c_n = 0$.

Otherwise, $X_1, X_2, \dots, X_n$ are called linearly dependent.

Definition 8.8. The system of vectors $e_1, e_2, \dots, e_n \in E$ is said to be a basis in $E$ if $\{e_k\}_{k=1}^n$ is linearly independent but $e_1, e_2, \dots, e_n, X$ is linearly dependent for every $X \in E$.
Definition 8.11. The number n of elements in a basis is called the dimension of
E and denoted n = dimE.
Example 8.12. Consider a space $E = \left\{\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} : x_1, x_2, x_3 \in \mathbb{R}\right\}$ (the space of 3-columns). Consider the following system of vectors $e_1, e_2, e_3$:
$$e_1 = \begin{pmatrix}1\\0\\0\end{pmatrix}, \quad e_2 = \begin{pmatrix}0\\1\\0\end{pmatrix}, \quad e_3 = \begin{pmatrix}0\\0\\1\end{pmatrix}.$$
We claim it is a basis in $E$. Indeed,
$$c_1e_1 + c_2e_2 + c_3e_3 = 0 \iff c_1\begin{pmatrix}1\\0\\0\end{pmatrix} + c_2\begin{pmatrix}0\\1\\0\end{pmatrix} + c_3\begin{pmatrix}0\\0\\1\end{pmatrix} = \begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix} = 0 \iff c_1 = c_2 = c_3 = 0.$$
Hence $e_1, e_2, e_3$ are linearly independent.
Now we need to make sure that $e_1, e_2, e_3, X$ is linearly dependent for any $X \neq 0$. Indeed, let $X = \begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}$ with some $x_1, x_2, x_3$: $x_1^2 + x_2^2 + x_3^2 \neq 0$. We then have
$$c_1e_1 + c_2e_2 + c_3e_3 + c_4X = 0 \iff \begin{pmatrix}c_1 + c_4x_1\\ c_2 + c_4x_2\\ c_3 + c_4x_3\end{pmatrix} = 0,$$
which is equivalent to the system
$$\begin{cases}c_1 = -c_4x_1\\ c_2 = -c_4x_2\\ c_3 = -c_4x_3.\end{cases}$$
This system has infinitely many solutions, since if we put $c_4 = t \neq 0$ then at least one of $c_1, c_2, c_3$ is not $0$ ($x_1, x_2, x_3$ are not all zeros). Hence $e_1, e_2, e_3, X$ is linearly dependent and, by definition, $e_1, e_2, e_3$ is a basis in $E$.
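In coordinates, both halves of this argument reduce to a rank computation, which is easy to replay with NumPy (a sketch, not part of the notes):

```python
import numpy as np

E = np.eye(3)  # rows are e1, e2, e3

# linear independence: c1*e1 + c2*e2 + c3*e3 = 0 only for c = 0,
# i.e. the matrix with rows e1, e2, e3 has full rank
assert np.linalg.matrix_rank(E) == 3

# adjoining any nonzero X leaves the rank at 3, so {e1, e2, e3, X}
# is linearly dependent, exactly as in the example
X = np.array([2.0, -1.0, 5.0])
assert np.linalg.matrix_rank(np.vstack([E, X])) == 3
```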
Remark 8.13. The space $E$ in Example 8.12 is actually our 3-space. Commonly, $E$ is denoted by $\mathbb{R}^3$. So,
$$\left\{\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} : x_1, x_2, x_3 \in \mathbb{R}\right\} \equiv \mathbb{R}^3.$$
Example 8.14. Let $P_n$ be the set of all polynomials of order $\le n$. We show that $\dim P_n = n + 1$. Consider $1, x, x^2, \dots, x^n$. This system is linearly independent. Indeed,
$$\sum_{k=0}^n c_k x^k = 0 \iff c_k = 0,\ k = 0, 1, \dots, n$$
(it means that a polynomial is identically $0$ if and only if all of its coefficients are $0$).

We leave it as an exercise to prove that $1, x, x^2, \dots, x^n$ is a basis in $P_n$. Hence, by definition $\dim P_n = n + 1$.
Some more examples:
- The set of solutions to an order-2 linear homogeneous differential equation is a linear space of dimension 2.
- Similarly, the set of solutions to an order-3 linear homogeneous differential equation is a linear space of dimension 3.
- There are infinite dimensional linear spaces. Function spaces, for example:
$$C[0,1] = \{f : [0,1] \to \mathbb{R} \mid f(x) \text{ is continuous on } [0,1]\}$$
$$H(D) = \{f : \mathbb{C} \to \mathbb{C} \mid f \text{ is analytic on } D\}$$
However, it's not so easy to show that the dimension is infinite. We would need to find an infinite, linearly independent set to form a basis. For example, $1, x, x^2, \dots, x^n, \dots \in C[0,1]$. The Taylor expansion would be another candidate basis; however, this approach is not valid for all $f \in C[0,1]$, only smooth functions. So, is there a basis for $C[0,1]$? Yes, but it belongs to functional analysis.
3. Coordinates

We know that if $e_1, e_2, \dots, e_n$ is a basis in $E$, then $e_1, e_2, \dots, e_n, X$ is always linearly dependent. I.e., there exist $c_1, c_2, \dots, c_{n+1}$, not all zeroes, such that
$$c_1e_1 + c_2e_2 + \dots + c_ne_n + c_{n+1}X = 0.$$
Note that $c_{n+1} \neq 0$. Otherwise we would have $\sum_{k=1}^n c_ke_k = 0$ with not all $c_k$ zero, which would contradict the fact that $\{e_k\}_{k=1}^n$ is a basis. It follows that
$$X = -\sum_{k=1}^n \frac{c_k}{c_{n+1}}\, e_k.$$
Coefficients $x_k \equiv -c_k/c_{n+1}$, $k = 1, 2, \dots, n$, are called the coordinates of a vector $X$ in a basis $e_1, e_2, \dots, e_n$.
Example 8.15. Let $X \in \mathbb{R}^3$. Coordinates of $X$ in the basis $e_1 = \begin{pmatrix}1\\0\\0\end{pmatrix}$, $e_2 = \begin{pmatrix}0\\1\\0\end{pmatrix}$, $e_3 = \begin{pmatrix}0\\0\\1\end{pmatrix}$ are $x_1, x_2, x_3$. In physics they commonly use $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$ for $e_1, e_2, e_3$.
Example 8.16. Let $E = P_4$ and $1, x, x^2, x^3, x^4$ be a basis in $P_4$. If $P_4(x) = c_0 + c_1x + c_2x^2 + c_3x^3 + c_4x^4$ is an arbitrary polynomial of order $\le 4$, then its coefficients $c_0, c_1, \dots, c_4$ are coordinates of $P_4(x)$ in the basis $1, x, \dots, x^4$.
Theorem 8.17. Given a vector and a basis, the coordinates are uniquely determined.
4. Subspaces
Definition 8.18. If $E_1$, $E_2$ are linear spaces and $E_1 \subseteq E_2$, then $E_1$ is called a subspace of $E_2$.

Example 8.19. 1) $\mathbb{R}^2 \oplus \{0\}$ is a subspace of $\mathbb{R}^3$.
2) $P_3$ is a subspace of $P_4$.
LECTURE 9
Linear Operators
1. Linear Operator
A linear operator is a fundamental object in math physics.
Definition 9.1. Let $E_1$, $E_2$ be two linear spaces. A mapping $A$ that maps $E_1$ into $E_2$ ($A : E_1 \to E_2$) is called a linear operator if $\forall X, Y \in E_1$ and scalars $\alpha, \beta$,
$$A(\alpha X + \beta Y) = \alpha AX + \beta AY.$$
In most of our cases, $E_1 = E_2 = E$ and we then say that $A$ acts in $E$. It means that $A$ sends every $X \in E$ into a vector $Y \in E$.
Example 9.2. Let $E = \mathbb{R}^2$ and $A$ be an operator acting by the following rule:
$$AX = \begin{pmatrix}x_1\\0\end{pmatrix}, \quad \text{where } X = \begin{pmatrix}x_1\\x_2\end{pmatrix}.$$
It is clear that for all $X$, $Y$ and all scalars $\alpha$:
$$A(X + Y) = A\begin{pmatrix}x_1 + y_1\\ x_2 + y_2\end{pmatrix} = \begin{pmatrix}x_1 + y_1\\0\end{pmatrix} = \begin{pmatrix}x_1\\0\end{pmatrix} + \begin{pmatrix}y_1\\0\end{pmatrix} = AX + AY;$$
$$A(\alpha X) = A\begin{pmatrix}\alpha x_1\\ \alpha x_2\end{pmatrix} = \begin{pmatrix}\alpha x_1\\0\end{pmatrix} = \alpha\begin{pmatrix}x_1\\0\end{pmatrix} = \alpha AX.$$
Hence, $A$ is a linear operator in $\mathbb{R}^2$. As you can see, $A$ performs an orthogonal projection of a vector in 2-space onto the $x$-axis.
Example 9.3. Let $E = P_n$. Define $A$ by the formula
$$Ap(x) = \frac{d}{dx}\,p(x) \qquad (p(x) \in P_n).$$
$A$ is clearly a linear operator. This operator is called the operator of differentiation and will be playing a crucial role in our course.
2. Matrices

Definition 9.4. The following table of numbers
$$A \equiv \begin{pmatrix}a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \dots & a_{mn}\end{pmatrix} \equiv \{a_{ik}\}_{i=1,k=1}^{m,n}$$
is called an $m \times n$ matrix. If $n = m$, then the matrix is called square.
Definition 9.5. For any $m \times n$ matrices $A$, $B$, and any $\lambda \in \mathbb{C}$:

1)
$$A + B = \begin{pmatrix}a_{11} & \dots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{m1} & \dots & a_{mn}\end{pmatrix} + \begin{pmatrix}b_{11} & \dots & b_{1n}\\ \vdots & \ddots & \vdots\\ b_{m1} & \dots & b_{mn}\end{pmatrix} = \begin{pmatrix}a_{11} + b_{11} & \dots & a_{1n} + b_{1n}\\ \vdots & \ddots & \vdots\\ a_{m1} + b_{m1} & \dots & a_{mn} + b_{mn}\end{pmatrix}.$$
That is, matrices add up element-by-element:
$$(A + B)_{ik} = (A)_{ik} + (B)_{ik}$$
(here $(A)_{ik}$ means the $ik$-th element of the matrix $A$).

2)
$$\lambda A = \begin{pmatrix}\lambda a_{11} & \dots & \lambda a_{1n}\\ \vdots & \ddots & \vdots\\ \lambda a_{m1} & \dots & \lambda a_{mn}\end{pmatrix}, \qquad \text{i.e.,} \quad (\lambda A)_{ik} = \lambda(A)_{ik}.$$

3) If $A$ is $m \times n$ and $B = \{b_{jk}\}_{j=1,k=1}^{n,\ell}$ is $n \times \ell$, then
$$AB = \begin{pmatrix}\sum_{k=1}^n a_{1k}b_{k1} & \sum_{k=1}^n a_{1k}b_{k2} & \dots & \sum_{k=1}^n a_{1k}b_{k\ell}\\ \sum_{k=1}^n a_{2k}b_{k1} & \sum_{k=1}^n a_{2k}b_{k2} & \dots & \sum_{k=1}^n a_{2k}b_{k\ell}\\ \vdots & \vdots & \ddots & \vdots\\ \sum_{k=1}^n a_{mk}b_{k1} & \sum_{k=1}^n a_{mk}b_{k2} & \dots & \sum_{k=1}^n a_{mk}b_{k\ell}\end{pmatrix},$$
i.e.,
$$(AB)_{ij} = \sum_{k=1}^n a_{ik}b_{kj}.$$
Note that we can multiply two matrices only in the following case: the number of columns of $A$ equals the number of rows of $B$, i.e., an $m \times n$ matrix times an $n \times \ell$ matrix yields an $m \times \ell$ matrix. In general, $m \neq \ell$.
Some more terminology:
$$\begin{pmatrix}0 & 0 & \dots & 0\\ 0 & 0 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & 0\end{pmatrix} = 0, \quad \text{the zero matrix};$$
$$\begin{pmatrix}a_{11} & & & 0\\ & a_{22} & & \\ & & \ddots & \\ 0 & & & a_{nn}\end{pmatrix} \quad \text{is called a diagonal matrix};$$
$$\begin{pmatrix}1 & & & 0\\ & 1 & & \\ & & \ddots & \\ 0 & & & 1\end{pmatrix} = I, \quad \text{the unit matrix}.$$
3. Matrix Representation of Linear Operators

Let $A : E \to E$ and $\{e_k\}_{k=1}^n$ be a basis in $E$. Since $Ae_k \in E$ it can be represented as
$$Ae_k = \sum_{i=1}^n a_{ik}\, e_i, \qquad k = 1, 2, \dots, n, \tag{9.1}$$
where $\{a_{ik}\}_{i=1}^n$ are the coordinates of the vector $Ae_k$ in the basis $\{e_k\}_{k=1}^n$.

Coefficients $\{a_{ik}\}_{i,k=1}^n$ form a matrix. The point is that this matrix represents the operator $A$ in the basis $\{e_k\}_{k=1}^n$. This means that if we know all $\{a_{ik}\}_{i,k=1}^n$ then we can compute $AX$ for any $X \in E$.
Indeed, let $X = \sum_{k=1}^n x_k e_k$. Then,
$$AX = A\sum_{k=1}^n x_k e_k = \sum_{k=1}^n x_k\, Ae_k \overset{\text{by (9.1)}}{=} \sum_{k=1}^n x_k \sum_{i=1}^n a_{ik} e_i = \sum_{i=1}^n \left(\sum_{k=1}^n a_{ik} x_k\right) e_i,$$
i.e.,
$$(AX)_i = \sum_{k=1}^n a_{ik} x_k, \qquad i = 1, 2, \dots, n. \tag{9.2}$$
(9.2) reads
$$\begin{cases}(AX)_1 = \sum_{k=1}^n a_{1k}x_k\\ (AX)_2 = \sum_{k=1}^n a_{2k}x_k\\ \quad\vdots\\ (AX)_n = \sum_{k=1}^n a_{nk}x_k.\end{cases}$$
So if $X = \begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix}$ is given by its coordinates in $\{e_i\}_{i=1}^n$, then
$$AX = \begin{pmatrix}a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn}\end{pmatrix}\begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix}, \tag{9.3}$$
where the right hand side is understood as a product of two matrices.

Definition 9.6. The matrix on the right in (9.3) is called the matrix of an operator $A$ in a basis $\{e_i\}_{i=1}^n$.

So we can always identify an operator $A$ with its matrix:
$$A = \begin{pmatrix}a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn}\end{pmatrix} \quad \text{in } e_1, e_2, \dots, e_n.$$
Here $(X)_i$ stands for the $i$-th coordinate of $X$.

You can think of an operator as a set of houses. You can list these houses using their street addresses. That would play the role of the matrix representation. Clearly the set of houses is independent of the way you divide them into blocks. But once a division is fixed, you identify every house with its street address.
Some examples:

Example 9.7. Consider the operator $A$ as in Example 9.2. Choose a basis $e_1 = \begin{pmatrix}1\\0\end{pmatrix}$, $e_2 = \begin{pmatrix}0\\1\end{pmatrix}$. Let us find the matrix of this operator in the basis $e_1, e_2$. We have
$$Ae_1 = e_1, \qquad Ae_2 = 0. \tag{9.4}$$
On the other hand, by (9.1) we have
$$Ae_1 = a_{11}e_1 + a_{21}e_2, \qquad Ae_2 = a_{12}e_1 + a_{22}e_2. \tag{9.5}$$
Comparing (9.4) and (9.5), we get
$$a_{11} = 1, \quad a_{21} = 0, \quad a_{12} = 0, \quad a_{22} = 0,$$
and finally
$$A = \begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix} \quad \text{in } \left\{\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}0\\1\end{pmatrix}\right\}.$$
Example 9.8. Let $A$ be as in Example 9.3. Choose in $P_n$ the following basis $e_1, e_2, \dots, e_{n+1}$, $e_k = x^{k-1}$, $k = 1, 2, \dots, n+1$. We have
$$Ae_k = \frac{d}{dx}\,x^{k-1} = (k-1)x^{k-2} = (k-1)e_{k-1}, \qquad k = 1, 2, \dots, n+1.$$
On the other hand,
$$Ae_k = \sum_{i=1}^{n+1} a_{ik} e_i = \underbrace{a_{1k}e_1}_{=0} + \dots + \underbrace{a_{k-1,k}\,e_{k-1}}_{=(k-1)e_{k-1}} + \dots + \underbrace{a_{n+1,k}\,e_{n+1}}_{=0}$$
$$\Longrightarrow a_{ij} = 0 \text{ except } a_{k-1,k} = k - 1. \text{ So,}$$
$$A = \begin{pmatrix}0 & 1 & 0 & \dots & 0\\ 0 & 0 & 2 & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \dots & n\\ 0 & 0 & 0 & \dots & 0\end{pmatrix} \quad \text{in } 1, x, x^2, \dots, x^n.$$
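The matrix just found can be checked by letting it act on coordinate columns; a NumPy sketch (not part of the notes), with $n = 4$, so the basis is $1, x, x^2, x^3, x^4$:

```python
import numpy as np

n = 4
# matrix of A = d/dx on P_n in the basis 1, x, ..., x^n (Example 9.8):
# the only nonzero entries are a_{k-1,k} = k-1 (1-based indexing as in the notes)
A = np.diag(np.arange(1, n + 1), k=1).astype(float)

# coordinates of p(x) = 3 + 2x + 5x^3 in the basis 1, x, x^2, x^3, x^4
p = np.array([3.0, 2.0, 0.0, 5.0, 0.0])
dp = A @ p
# p'(x) = 2 + 15x^2, i.e. coordinates (2, 0, 15, 0, 0)
assert np.allclose(dp, [2.0, 0.0, 15.0, 0.0, 0.0])
```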
4. Matrix Ring

The following theorem is very important.
Theorem 9.9. Let $A$, $B$ be linear operators in $E$ and
$$\widehat A = \begin{pmatrix}a_{11} & \dots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \dots & a_{nn}\end{pmatrix}, \qquad \widehat B = \begin{pmatrix}b_{11} & \dots & b_{1n}\\ \vdots & \ddots & \vdots\\ b_{n1} & \dots & b_{nn}\end{pmatrix}$$
be their matrix representations in $\{e_i\}_{i=1}^n$. Then
1) $\widehat{A + B} = \widehat A + \widehat B$, $\ \widehat{\lambda A} = \lambda\widehat A$;
2) $\widehat{AB} = \widehat A\,\widehat B$;
i.e., when we add two operators their matrices add, and when we multiply two operators their matrices multiply.

Proof. 1) is clear. Show it!
2)
$$A(Be_k) = A\sum_{j=1}^n b_{jk} e_j \overset{\text{by (9.1)}}{=} \sum_{j=1}^n b_{jk}\, Ae_j = \sum_{j=1}^n b_{jk} \sum_{i=1}^n a_{ij} e_i \overset{\text{by (9.1) again}}{=} \sum_{i=1}^n \underbrace{\left(\sum_{j=1}^n a_{ij} b_{jk}\right)}_{=(\widehat A\widehat B)_{ik}} e_i$$
$$\Longrightarrow (AB)_{ik} = \sum_{j=1}^n a_{ij} b_{jk} \Longrightarrow \widehat{AB} = \widehat A\,\widehat B. \qquad \text{QED}$$
5. Noncommutative Ring

Note that multiplication of matrices, and hence operators, is not commutative.

Example 9.10. Let $E = P_1 = \{a + bx \mid a, b \in \mathbb{R}\}$ and $A = \frac{d}{dx}$, $B(a + bx) = bx$. Then
$$AB(a + bx) = \frac{d}{dx}\,bx = b, \qquad BA(a + bx) = B\,\frac{d}{dx}(a + bx) = Bb = 0$$
$$\Longrightarrow AB \neq BA.$$
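In the basis $\{1, x\}$ the two operators have $2\times2$ matrices, and Theorem 9.9 lets us check the non-commutativity by matrix multiplication alone (a NumPy sketch, not part of the notes):

```python
import numpy as np

# matrices of A = d/dx and B(a+bx) = bx on P_1 in the basis {1, x}
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])

AB = A @ B
BA = B @ A
assert np.allclose(AB, [[0.0, 1.0], [0.0, 0.0]])   # AB(a+bx) = b
assert np.allclose(BA, np.zeros((2, 2)))           # BA(a+bx) = 0
assert not np.allclose(AB, BA)                     # AB != BA
```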
Exercise 9.11. Let $M_n$ be the set of all $n \times n$ matrices with $+$ and multiplication by a scalar given in Def 9.5. Show that $M_n$ is a linear space.

Exercise 9.12. Let $A : E \to E$ be a linear operator. We define the kernel of $A$:
$$\ker A := \{X \in E \mid AX = 0\}.$$
a) Show that $\ker A$ is a subspace of $E$.
b) Find $\ker A$ where $A$ is as in Example 9.2.
c) Find $\ker A$ where $A$ is as in Example 9.3.
Exercise 9.13. Find the matrix of $A : E \to E$
a) $E = \mathbb{R}^2$ and $A$ rotates each vector $X \in E$ by $90°$ counterclockwise.
b) $E = \mathbb{C}^n$ and $A : AX = \lambda X\ \forall X \in E$, where $\lambda \in \mathbb{C}$ is a fixed number.
c) $E = \mathbb{R}^3$ and $A$ projects each $X \in E$ onto the $xy$-plane.

Exercise 9.14. Give an example of an operator
a) $A \neq 0$, $A^2 = 0$;
b) $A \neq 0$, $A^2 \neq 0$, $A^3 = 0$.
LECTURE 10
Inner Product. Selfadjoint and Unitary Operators

1. Inner Product

The concept of inner (or scalar) product is central in math physics.

Definition 10.1. An inner (scalar) product $\langle X, Y\rangle$ of two vectors $X, Y$ belonging to a linear space $E$ is a complex valued function of $X, Y$ subject to the following conditions:
1) Symmetry: $\langle X, Y\rangle = \overline{\langle Y, X\rangle}$
2) Linearity: $\langle \alpha X + \beta Y, Z\rangle = \alpha\langle X, Z\rangle + \beta\langle Y, Z\rangle$, $\ \alpha, \beta \in \mathbb{C}$
3) Positivity: $\langle X, X\rangle \ge 0$, and if $\langle X, X\rangle = 0$ then $X = 0$.

Different books adopt different notation for the inner product: e.g. $(X, Y)$, $X \cdot Y$, $\langle X|Y\rangle$. The last notation is commonly accepted in quantum mechanics and is called the Dirac notation. We chose a mixture of $(X, Y)$ and $\langle X|Y\rangle$.
Exercise 10.2. Consider
$$\mathbb{C}^3 = \left\{\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} : x_1, x_2, x_3 \in \mathbb{C}\right\}.$$
Verify that
1) $\langle x, y\rangle = \sum_{i=1}^3 x_i\overline{y_i}$ is an inner product on $\mathbb{C}^3$,
2) but $\langle x, y\rangle = \sum_{i=1}^3 |x_i y_i|^2$ is not.
3) Find another inner product on $\mathbb{C}^3$.
Exercise 10.3. Verify the following:
1) $\langle X, \alpha Y\rangle = \overline{\alpha}\,\langle X, Y\rangle$.
2) $\langle Z, X + Y\rangle = \langle Z, X\rangle + \langle Z, Y\rangle$.
3) Whereas in real spaces you have full commutativity for the inner product, if that were the case in complex spaces, then the positivity condition would fail.

Lemma 10.4. If $\langle X, Y\rangle = 0$ for all $Y \in E$ then $X = 0$.

Proof. $\langle X, Y\rangle = 0\ \forall Y \in E \Rightarrow \langle X, X\rangle = 0 \Rightarrow X = 0$.

Corollary 10.5. If $\langle X_1, Y\rangle = \langle X_2, Y\rangle$ for all $Y \in E$ then $X_1 = X_2$.

Proof. Use Definition 10.1 and condition 2) to get $\langle X_1 - X_2, Y\rangle = 0$ and apply Lemma 10.4.
Definition 10.6. Vectors $X, Y \in E$ are called orthogonal if $\langle X, Y\rangle = 0$.

Definition 10.7. Let $X \in E$. $\sqrt{\langle X, X\rangle} \equiv \|X\|$ is called the norm of $X$.

Definition 10.8.
$$\delta_{ik} \equiv \begin{cases}0, & i \neq k\\ 1, & i = k\end{cases}$$
is called the Kronecker delta.

Definition 10.9. A basis $\{e_k\}_{k=1}^n$ is called orthogonal and normed (or orthonormal), or ONB, if
$$\langle e_i, e_k\rangle = 0,\ i \neq k; \qquad \|e_i\| = 1,\ i = 1, 2, \dots, n.$$
Alternately, we can write: $\{e_k\}_{k=1}^n$ is an ONB $\iff \langle e_i, e_k\rangle = \delta_{ik}$.

Theorem 10.10. Let $\{e_k\}_{k=1}^n$ be an ONB and $X = \sum_{k=1}^n x_k e_k$, $Y = \sum_{k=1}^n y_k e_k$. Then
$$\langle X, Y\rangle = \sum_{k=1}^n x_k\overline{y_k}.$$
I.e., once an ONB is fixed, the inner product is completely determined by the coordinates.

Proof.
$$\langle X, Y\rangle = \left\langle \sum_{i=1}^n x_i e_i,\ \sum_{k=1}^n y_k e_k\right\rangle \overset{2)}{=} \sum_{i=1}^n x_i \left\langle e_i,\ \sum_{k=1}^n y_k e_k\right\rangle \overset{1)}{=} \sum_{i=1}^n x_i\, \overline{\left\langle \sum_{k=1}^n y_k e_k,\ e_i\right\rangle} \overset{2)}{=} \sum_{i=1}^n x_i \sum_{k=1}^n \overline{y_k}\,\overline{\langle e_k, e_i\rangle} \overset{1)}{=} \sum_{i,k=1}^n x_i\overline{y_k}\,\langle e_i, e_k\rangle$$
$$= \sum_{k=1}^n x_k\overline{y_k}\,\underbrace{\langle e_k, e_k\rangle}_{=1} + \sum_{i\neq k} x_i\overline{y_k}\,\underbrace{\langle e_i, e_k\rangle}_{=0} = \sum_{k=1}^n x_k\overline{y_k}. \qquad \text{QED}$$
Corollary 10.11 (Parseval Equation).
$$\langle X, X\rangle = \|X\|^2 = \sum_{k=1}^n |x_k|^2.$$
This seems trivial here, but it is very important in conservation laws.
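Parseval's equation is easy to test numerically: take any unitary matrix, use its columns as an ONB, and compare $\|X\|^2$ with $\sum|x_k|^2$. A sketch (not part of the notes; the QR factorization is just a convenient way to manufacture an ONB):

```python
import numpy as np

rng = np.random.default_rng(0)
# random complex ONB: the columns of Q from a QR factorization
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(M)
assert np.allclose(Q.conj().T @ Q, np.eye(4))   # columns are orthonormal

X = rng.normal(size=4) + 1j * rng.normal(size=4)
x = Q.conj().T @ X                               # coordinates x_k = <X, e_k>
# Parseval: ||X||^2 equals the sum of |x_k|^2
assert np.isclose(np.vdot(X, X).real, np.sum(np.abs(x) ** 2))
```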
In $\mathbb{R}^3$ the scalar product is also called a dot product.
Theorem 10.12. In $\mathbb{R}^3$ the inner product can be defined by
$$\langle X, Y\rangle = \|X\|\,\|Y\|\cos(\widehat{X, Y}) \tag{10.1}$$
where $\|X\|$ is the length of $X$ and $\widehat{X, Y}$ is the angle between $X$ and $Y$.

[Figure: projecting $X$, $Y$ and $X + Y$ onto the direction of $Z$ shows that $\|X+Y\|\cos(\widehat{X+Y, Z}) = \|X\|\cos(\widehat{X, Z}) + \|Y\|\cos(\widehat{Y, Z})$.]

Hence $\langle X + Y, Z\rangle = \langle X, Z\rangle + \langle Y, Z\rangle$.

[Figure: for $\lambda < 0$ the angle between $\lambda X$ and $Z$ flips, and $\cos(\pi - \theta) = -\cos\theta$.]

Observe also that $\langle \lambda X, Z\rangle = \lambda\langle X, Z\rangle$, $\lambda \in \mathbb{R}$ (again it is geometrically clear). QED
Theorem 10.13. Let $\{e_k\}_{k=1}^n$ be an ONB and $\{x_k\}_{k=1}^n$ be the coordinates of a vector $X \in E$ in $\{e_k\}_{k=1}^n$. Then
$$x_k = \langle X, e_k\rangle. \tag{10.2}$$

Proof. $\displaystyle \langle X, e_k\rangle = \left\langle\sum_{i=1}^n x_i e_i,\ e_k\right\rangle = \sum_{i=1}^n x_i\,\underbrace{\langle e_i, e_k\rangle}_{=\delta_{ik}} = x_k$.

Definition 10.14. A linear space with an inner product $\langle\cdot,\cdot\rangle$ is called a Euclidean space or inner product space.
Every finite dimensional linear space has an inner product (e.g. $\mathbb{R}^n$, $\mathbb{C}^n$). An infinite dimensional Euclidean space is called differently: a Hilbert space. But not all infinite dimensional spaces have an inner product.
2. Adjoint and Selfadjoint Operators

Definition 10.15. Let $E$ be a Euclidean space. An operator $A^*$ is called the adjoint of $A$ if $\forall X, Y \in E$
$$\langle AX, Y\rangle = \langle X, A^*Y\rangle. \tag{10.3}$$

Theorem 10.16. The operator $A^*$ is linear.

Proof.
$$\langle Z, A^*(\alpha X + \beta Y)\rangle \overset{\text{by def of } A^*}{=} \langle AZ, \alpha X + \beta Y\rangle = \overline{\alpha}\langle AZ, X\rangle + \overline{\beta}\langle AZ, Y\rangle = \overline{\alpha}\langle Z, A^*X\rangle + \overline{\beta}\langle Z, A^*Y\rangle = \langle Z, \alpha A^*X + \beta A^*Y\rangle.$$
So we get, $\forall X, Y, Z \in E$ and $\alpha, \beta \in \mathbb{C}$,
$$\langle Z, A^*(\alpha X + \beta Y)\rangle = \langle Z, \alpha A^*X + \beta A^*Y\rangle.$$
It follows from Corollary 10.5 that
$$A^*(\alpha X + \beta Y) = \alpha A^*X + \beta A^*Y. \qquad \text{QED}$$
So an adjoint operator $A^*$ is linear.

Definition 10.17. An operator $A$ is called selfadjoint if $A^* = A$.
Lemma 10.18. Let $A$ be a linear operator and let $\{e_k\}_{k=1}^n$ be an ONB; then for the elements of the matrix of $A$ we have
$$a_{ik} = \langle Ae_k, e_i\rangle. \tag{10.4}$$

Proof. By definition (formula (9.1))
$$Ae_k = \sum_{j=1}^n a_{jk} e_j \Longrightarrow (Ae_k)_i = a_{ik}.$$
On the other hand, by (10.2),
$$(Ae_k)_i = \langle Ae_k, e_i\rangle. \qquad \text{QED}$$
Theorem 10.19. For the matrices of $A^*$, $A$ we have
$$\widehat{A^*} = \begin{pmatrix}\overline{a_{11}} & \overline{a_{21}} & \dots & \overline{a_{n1}}\\ \overline{a_{12}} & \overline{a_{22}} & \dots & \overline{a_{n2}}\\ \vdots & \vdots & \ddots & \vdots\\ \overline{a_{1n}} & \overline{a_{2n}} & \dots & \overline{a_{nn}}\end{pmatrix}, \quad \text{where } \widehat{A} = \begin{pmatrix}a_{11} & \dots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \dots & a_{nn}\end{pmatrix},$$
i.e. $(A^*)_{ik} = \overline{(A)_{ki}}$.

Proof. By (10.4)
$$(A^*)_{ik} = \langle A^*e_k, e_i\rangle = \langle e_k, Ae_i\rangle = \overline{\langle Ae_i, e_k\rangle} = \overline{(A)_{ki}}. \qquad \text{QED}$$
Why is it that every operator in a finite dimensional space has an adjoint? Because of Theorem 10.19, a constructive proof.

Lemma 10.20. $(A^*)^* = A$.

Proof. $\langle A^*X, Y\rangle = \overline{\langle Y, A^*X\rangle} = \overline{\langle AY, X\rangle} = \langle X, AY\rangle \Longrightarrow (A^*)^* = A$.
Theorem 10.21. If $A = A^*$ then
$$(A)_{ik} = \overline{(A)_{ki}}.$$

Proof. Prove it as an exercise.

Selfadjoint operators play an especially important role in physics. Consider one important example.

Example 10.22. Let $\{e_k\}_{k=1}^n$ be an ONB in $E$. Define an operator $P_k$ (projection) by the formula
$$P_kX = x_k e_k,$$
where $x_k$ is the $k$-th coordinate of $X$ in $\{e_k\}_{k=1}^n$. By (10.2), $x_k = \langle X, e_k\rangle$ and we have
$$P_kX = \langle X, e_k\rangle\, e_k. \tag{10.5}$$
We claim $P_k$ is selfadjoint. Indeed,
$$\langle P_kX, Y\rangle = \langle x_k e_k, Y\rangle = \langle X, e_k\rangle\langle e_k, Y\rangle = \langle X, e_k\rangle\,\overline{\langle Y, e_k\rangle} = \langle X, e_k\rangle\,\overline{y_k} = \langle X, y_k e_k\rangle = \langle X, P_kY\rangle.$$
3. Unitary Operators

Definition 10.23. An operator $A^{-1}$ is called the inverse operator to $A$ if
$$A^{-1}A = AA^{-1} = I.$$

Definition 10.24. An operator $A$ is called invertible if it has an inverse operator.

Definition 10.25. A linear operator $U$ is called unitary if
$$U^*U = UU^* = I.$$

Theorem 10.26. If $U$ is unitary, then
1) $U^{-1} = U^*$;
2) $\|UX\| = \|X\|$, $\forall X \in E$.

The first part is marvelous since, if you recall from linear algebra, computing an inverse is a lot of work, but not here. The second part explains the name unitary: such a transformation preserves length in finite dimensions.

Proof. By comparing Def 10.23 & 10.25 we conclude that $U^* = U^{-1}$. To prove 2) consider
$$\|UX\|^2 = \langle UX, UX\rangle = \langle X, U^*UX\rangle = \langle X, IX\rangle = \|X\|^2 \Longrightarrow \|UX\| = \|X\|.$$
Remark 10.27. Equation 2) in Theorem 10.26 means that a unitary operator preserves the norm of a vector.

Norm in physics is often energy (at least in PDEs of math physics), so an energy-preserving operator (often of time) leads to a conservation law.

Theorem 10.28. Let $U$ be unitary, and let $\{u_{ik}\}$ be its matrix representation. Then any two rows and any two columns are orthogonal (and orthonormal).

Proof. By definition
$$UU^* = U^*U = I.$$
In the matrix form this equation reads, for any $1 \le i, k \le n$,
$$(UU^*)_{ik} = \sum_{j=1}^n (U)_{ij}(U^*)_{jk} = \delta_{ik}. \tag{10.6}$$
But by Theorem 10.19, $(U^*)_{jk} = \overline{(U)_{kj}} = \overline{u_{kj}}$, and for (10.6) we have
$$\sum_{j=1}^n u_{ij}\,\overline{u_{kj}} = \delta_{ik}. \tag{10.7}$$
But $(u_{i1}, u_{i2}, \dots, u_{in}) \equiv U_i$ is the $i$-th row of $\{u_{ik}\}$ and $(u_{k1}, u_{k2}, \dots, u_{kn}) \equiv U_k$ is the $k$-th row of $\{u_{ik}\}$. (10.7) then reads
$$\langle U_i, U_k\rangle = \delta_{ik} \Longrightarrow U_i \perp U_k \quad (\text{and } \|U_i\| = 1).$$
In the same way we prove the orthogonality of columns. QED

Note that you can use the rows (or columns) to make a new orthonormal basis.

Definition 10.29. Real matrices with orthonormal columns and rows are called orthogonal.

Exercise 10.30. Let $U$ be unitary. Show that its columns are orthonormal.

Exercise 10.31. Show that
$$(AB)^* = B^*A^*.$$
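Both parts of Theorem 10.26 and the row orthonormality of Theorem 10.28 can be verified on a concrete complex unitary matrix (a sketch, not part of the notes; the particular $U$ below is just an example):

```python
import numpy as np

# a concrete complex unitary matrix (check: U* U = U U* = I)
U = np.array([[1.0, 1.0],
              [1j, -1j]]) / np.sqrt(2)

assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(U @ U.conj().T, np.eye(2))
# 1) the inverse is just the conjugate transpose
assert np.allclose(np.linalg.inv(U), U.conj().T)
# 2) a unitary operator preserves the norm
X = np.array([3.0 - 1j, 2.0 + 4j])
assert np.isclose(np.linalg.norm(U @ X), np.linalg.norm(X))
# rows form an orthonormal system (Theorem 10.28)
assert np.isclose(np.vdot(U[0], U[1]), 0.0)
```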
LECTURE 11
More about Selfadjoint and Unitary Operators. Change of Basis

1. Examples of Selfadjoint and Unitary Operators

Example 11.1. Let $P_n[0,1]$ be the space of all polynomials of order $\le n$ on the interval $[0,1]$. We define a scalar product by the formula
$$\langle p, q\rangle = \int_0^1 p(x)\,\overline{q(x)}\,dx \qquad (p, q \in P_n[0,1]). \tag{11.1}$$
It is very easy to see that (11.1) defines a scalar product. So with this scalar product our space $P_n[0,1]$ becomes a Euclidean space.

Let $P_n^0[0,1] = \{p \in P_n[0,1] : p(0) = p(1) = 0\}$. It's a Euclidean space again. Check it!

Consider the linear operator $A = \dfrac{1}{i}\dfrac{d}{dx}$. This operator is important in quantum mechanics and is called the momentum operator:
$$Ap = \frac{1}{i}\,p'.$$
It is a selfadjoint operator on $P_n^0[0,1]$. Indeed,
$$\langle Ap, q\rangle = \int_0^1 \frac{1}{i}\,p'(x)\,\overline{q(x)}\,dx.$$
We integrate it by parts:
$$\langle Ap, q\rangle = \underbrace{\frac{1}{i}\,p(x)\overline{q(x)}\Big|_0^1}_{=0} - \int_0^1 \frac{1}{i}\,p(x)\,\overline{q(x)}'\,dx = \int_0^1 p(x)\,\overline{\frac{1}{i}\frac{d}{dx}q(x)}\,dx = \langle p, Aq\rangle.$$

Exercise 11.2. Verify that $\langle p, q\rangle = \int_0^1 p(x)\,\overline{q(x)}\,dx$ defined on $P_n[0,1]$ is an inner product.
Example 11.3 (the operator of rotation). Let a particle rotate about the origin with an angular velocity $\omega$. Let $(x_0, y_0)$ be its initial position. Find a formula for $(x(t), y(t))$ at any instant of time $t$.
Given $A = (x_0, y_0)$ and $B = (x, y)$, construct:
- $C$: draw the ray from the origin at angle $\omega t$; then $C$ is the projection of $B$ on the ray;
- $T$: the projection of $A$ on the $x$-axis;
- $Q$: the projection of $C$ on the $x$-axis;
- $P$: the projection of $B$ on the $x$-axis;
- $S$: the intersection of $BP$ and the horizontal line from $C$.

[Figure: the points $O$, $A = (x_0, y_0)$, $B = (x, y)$ on the same circle with $\angle AOB = \omega t$, and the auxiliary points $P$, $T$, $Q$, $S$, $C$.]

Note first that $\triangle OBC \cong \triangle OAT$, so $OC = OT = x_0$ and $BC = AT = y_0$. Then
$$x = OP = OQ - PQ = OC\cos\omega t - SC = \underbrace{OT}_{=x_0}\cos\omega t - \underbrace{BC}_{=AT=y_0}\sin\omega t \Longrightarrow x(t) = x_0\cos\omega t - y_0\sin\omega t,$$
$$y = PS + BS = CQ + \underbrace{BC}_{=AT=y_0}\cos\omega t = \underbrace{OC}_{=x_0}\sin\omega t + y_0\cos\omega t \Longrightarrow y(t) = x_0\sin\omega t + y_0\cos\omega t.$$
$$\begin{cases}x = x_0\cos\omega t - y_0\sin\omega t\\ y = x_0\sin\omega t + y_0\cos\omega t\end{cases} \iff \begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}\cos\omega t & -\sin\omega t\\ \sin\omega t & \cos\omega t\end{pmatrix}\begin{pmatrix}x_0\\y_0\end{pmatrix}.$$
We claim that
$$U = \begin{pmatrix}\cos\omega t & -\sin\omega t\\ \sin\omega t & \cos\omega t\end{pmatrix}$$
is an orthogonal matrix. Check it! Hence, $U$ is a unitary operator.

So the solution to our problem can be written as follows:
$$X(t) = U(t)X_0,$$
where $X(t) = \begin{pmatrix}x(t)\\y(t)\end{pmatrix}$ is the position vector at time $t$, $X_0 = \begin{pmatrix}x_0\\y_0\end{pmatrix}$ is the initial position, and
$$U(t) = \begin{pmatrix}\cos\omega t & -\sin\omega t\\ \sin\omega t & \cos\omega t\end{pmatrix}.$$
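A short NumPy check that $U(t)$ is orthogonal, preserves the radius, and composes additively in $t$ (a sketch, not part of the notes):

```python
import numpy as np

def U(omega, t):
    th = omega * t
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

omega = 2.0
X0 = np.array([1.0, 0.5])

R = U(omega, 0.7)
assert np.allclose(R.T @ R, np.eye(2))                          # orthogonal
assert np.isclose(np.linalg.norm(R @ X0), np.linalg.norm(X0))   # radius preserved
# composing rotations: U(t1) U(t2) = U(t1 + t2)
assert np.allclose(U(omega, 0.3) @ U(omega, 0.4), U(omega, 0.7))
```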
2. Change of Basis

Note that the $xyz$-frame is not always practical (e.g. for coordinates on Earth), so we switch, say, to spherical coordinates; but switching coordinate systems is the same as switching bases.

Let us have two bases $\{e_i\}_{i=1}^n$, $\{g_i\}_{i=1}^n$, not necessarily orthogonal. We raise a question: does there exist a linear operator $G$ such that
$$g_i = Ge_i, \qquad i = 1, 2, \dots, n\,?$$
The answer is Yes. Here is the construction.

Represent $g_k$, $k = 1, 2, \dots, n$, with respect to $\{e_i\}_{i=1}^n$:
$$g_k = \sum_{i=1}^n (g_k)_i\, e_i. \tag{11.2}$$
Consider the matrix $G \equiv \{g_{ik}\}_{i,k=1}^n$: $g_{ik} \equiv (g_k)_i$. On the other hand, by definition we have
$$Ge_k = \sum_{i=1}^n \underbrace{(Ge_k)_i}_{=g_{ik}}\, e_i = \sum_{i=1}^n g_{ik}\, e_i. \tag{11.3}$$
Comparing (11.2) and (11.3), we get
$$g_k = Ge_k, \qquad k = 1, 2, \dots, n,$$
where $G = \{g_{ik}\}$, $g_{ik} = (g_k)_i$ is the $i$-th coordinate of the $k$-th basis vector.
Definition 11.4. Operators A and B are called similar if there exists an invertible
operator G:
G
1
AG = B.
Theorem 11.5. If A and B are similar, i.e.
G
1
AG = B, (11.4)
and e
k
n
k=1
is a basis, then the matrix of B in e
k
n
k=1
coincides with the matrix of A
in the basis g
k
n
k=1
, g
k
= Ge
k
.
Proof. Left as an exercise.
Remark 11.6. Given G, A, the transformation
A G
1
AG
is called a similarity transformation of A.
Theorem 11.5 actually says
(G
1
AG)
ik
= (A)
ik
in e
i
in Ge
i
G
g
i
i.e. g
i
= Ge
i
. Then the matrix of A in the new basis g
i
is equal to the
matrix of G
1
AG in the old basis e
i
.
Exercise 11.8. Prove that if \{e_k\} , \{g_k\} are ONBs and g_k = G e_k , then G is unitary.
Note that the above exercise implies that orthogonality is preserved under rotation.
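As a numerical illustration of Exercise 11.8 (a sketch, not part of the notes: the two ONBs are generated randomly via QR factorization), one can check that the operator taking one ONB to another is orthogonal, i.e. unitary in the real case:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# Two ONBs of R^4: the columns of E and of Gm (orthonormalized by QR).
E, _ = np.linalg.qr(rng.standard_normal((n, n)))
Gm, _ = np.linalg.qr(rng.standard_normal((n, n)))

# The operator G with G e_k = g_k satisfies G E = Gm, so G = Gm E^{-1};
# for an orthogonal E the inverse is just the transpose.
G = Gm @ E.T
assert np.allclose(G @ E, Gm)            # G maps {e_k} onto {g_k}
assert np.allclose(G.T @ G, np.eye(n))   # G is orthogonal (unitary over R)
```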
LECTURE 12
Eigenvalues and Eigenvectors
1. Eigenvalues
The concept of eigenvalues and eigenvectors is central in math physics.
Definition 12.1. Let E be a complex linear space and let A be a linear operator in E. Consider the following equation:
A X = \lambda X , \quad \lambda \in \mathbb{C} . (12.1)
The values of \lambda for which (12.1) has a nontrivial solution are called eigenvalues of A. The corresponding nontrivial solutions X are called eigenvectors of A.
Equation (12.1) always has the solution X = 0. But we are talking about nontrivial solutions (i.e. X \neq 0). The curious fact is that (12.1) has nontrivial solutions only for a finite number of \lambda.
Definition 12.2.
p(\lambda) \equiv \det(A - \lambda I)
is called the characteristic polynomial of A and the equation
p(\lambda) = 0
is called the characteristic equation of A.
Lemma 12.3. Let A be an operator in E, \dim E = n. Then the characteristic equation for A has n roots.
The proof of this fact lies beyond the scope of our consideration. But at least it is clear why the number of roots is \leq n. Indeed,
p(\lambda) = \det(A - \lambda I) = \det \begin{pmatrix} a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda \end{pmatrix}
is a polynomial of degree n, and hence the number of roots is \leq n.
Definition 12.4. The set of all eigenvalues of A is called the spectrum of A and is denoted by \sigma(A). An alternate notation is Spec(A).
Theorem 12.5.
\sigma(A) = \{ \lambda \in \mathbb{C} : \det(A - \lambda I) = 0 \} .
Proof. Rewrite (12.1) as follows:
A X - \lambda X = 0 \iff (A - \lambda I) X = 0 .
From elementary linear algebra we know that for the equation
(A - \lambda I) X = 0
to have a nontrivial solution it is necessary that
\det(A - \lambda I) = 0 ,
which is the characteristic equation of A. QED
Definition 12.6. An eigenvalue \lambda_0 is said to be simple if \lambda_0 is a simple root of the characteristic polynomial p(\lambda).
Definition 12.7. An eigenvalue \lambda_0 is said to be of multiplicity m if \lambda_0 is a root of p(\lambda) = 0 of multiplicity m.
The above is referred to as algebraic multiplicity. There is another type of multiplicity, called geometric multiplicity, but for selfadjoint operators it never differs from the algebraic one, so we will not worry about it.
Definition 12.8. The procedure of finding eigenvalues and eigenvectors is called the spectral analysis of A.
2. Spectral Analysis
Since the most important operators in physics are selfadjoint and unitary (because
of conservation of energy), we will mostly concentrate on the spectral analysis of such
operators.
Theorem 12.9. The spectrum of a selfadjoint operator A is real and its eigenvectors are orthogonal. I.e., if A = A^* then
\sigma(A) \subset \mathbb{R} , \quad \langle X_i , X_k \rangle = \delta_{ik} .
Proof. Let \lambda_i , \lambda_k be two eigenvalues of A and let X_i , X_k be the corresponding eigenvectors. Compute \langle A X_i , X_k \rangle :
\langle A X_i , X_k \rangle = \lambda_i \langle X_i , X_k \rangle , \quad \langle X_i , A X_k \rangle = \overline{\lambda_k} \langle X_i , X_k \rangle .
Since A = A^* , the two left-hand sides are equal, so
\lambda_i \langle X_i , X_k \rangle = \overline{\lambda_k} \langle X_i , X_k \rangle . (12.2)
Let i = k. We have
\lambda_k \|X_k\|^2 = \overline{\lambda_k} \|X_k\|^2 \implies \lambda_k = \overline{\lambda_k} \implies \lambda_k \in \mathbb{R} .
So all eigenvalues of A are real. Let now i \neq k. Then (12.2) implies
(\lambda_i - \overline{\lambda_k}) \langle X_i , X_k \rangle = 0 \implies (\lambda_i - \lambda_k) \langle X_i , X_k \rangle = 0 \implies \langle X_i , X_k \rangle = 0 .
QED
Theorem 12.10. If A is a selfadjoint operator and \lambda_0 \in \sigma(A), then the set of all eigenvectors corresponding to \lambda_0 forms a subspace, named an eigenspace.
Proof. Let X_0 , Y_0 be two solutions of (12.1):
A X_0 = \lambda_0 X_0 , \quad A Y_0 = \lambda_0 Y_0 ;
then \alpha X_0 + \beta Y_0 is a solution to (12.1) too. Indeed,
A(\alpha X_0 + \beta Y_0) = \alpha A X_0 + \beta A Y_0 = \alpha \lambda_0 X_0 + \beta \lambda_0 Y_0 = \lambda_0 (\alpha X_0 + \beta Y_0) . QED
Theorem 12.11. Let A = A^* and let its spectrum be simple. Then the set of its normalized eigenvectors \{X_k\}_{k=1}^n is an ONB.
The statement holds without "simple" for selfadjoint operators, i.e. each eigenspace is of dimension equal to the multiplicity of its corresponding eigenvalue.
Remark 12.12. The fact that the eigenvectors of a selfadjoint operator with a simple spectrum form an ONB provides us with a very efficient way of constructing ONBs.
The following theorem is of the utmost importance.
Theorem 12.13 (Diagonalization Theorem). Let A = A^* and let \sigma(A) = \{\lambda_k\}_{k=1}^n be simple. Let \{e_k\}_{k=1}^n be the ONB compiled of the eigenvectors of A. Then the matrix of A in \{e_k\}_{k=1}^n is diagonal.
Again, this is actually true for any selfadjoint operator.
Proof. By definition of (A)_{ik} ,
A e_k = \sum_{i=1}^n (A)_{ik} e_i .
On the other hand, A e_k = \lambda_k e_k , hence
(A)_{ik} = 0 , \ i \neq k ; \quad (A)_{kk} = \lambda_k .
That is, in \{e_k\}_{k=1}^n ,
A = \begin{pmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{pmatrix} .
QED
Now we are going to answer the following question: given a selfadjoint matrix A, find a transformation G that moves the old basis \{e_k\}_{k=1}^n into a basis consisting of the eigenvectors \{g_k\}_{k=1}^n of A.
So let us find G : G e_k = g_k , k = 1, 2, \ldots, n , where \{g_k\}_{k=1}^n is a basis of normalized eigenvectors of A.
G e_k = \sum_{i=1}^n (G)_{ik} e_i \implies (G)_{ik} = (g_k)_i
where (g_k)_i is the i-th coordinate of g_k .
Therefore,
G = \begin{pmatrix} (g_1)_1 & (g_2)_1 & \cdots & (g_n)_1 \\ (g_1)_2 & (g_2)_2 & \cdots & (g_n)_2 \\ \vdots & \vdots & & \vdots \\ (g_1)_n & (g_2)_n & \cdots & (g_n)_n \end{pmatrix} = \begin{pmatrix} g_1 & g_2 & \cdots & g_n \end{pmatrix} .
So G is actually the matrix whose columns are the eigenvectors of A.
Below is the procedure for bringing a matrix A to diagonal form:
1) Do the spectral analysis of A: \lambda_1, \lambda_2, \ldots, \lambda_n ; g_1, g_2, \ldots, g_n , where g_k = X_k / \|X_k\| .
2) Construct the matrix G = \begin{pmatrix} g_1 & g_2 & \cdots & g_n \end{pmatrix} .
3) G^{-1} A G is diagonal in \{g_1, g_2, \ldots, g_n\} .
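The three-step procedure above can be sketched numerically (Python with NumPy; the matrix A is an arbitrary illustrative symmetric matrix, and numpy.linalg.eigh returns the eigenvectors already normalized):

```python
import numpy as np

# A selfadjoint (here: real symmetric) matrix with a simple spectrum.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# 1) Spectral analysis: eigenvalues and normalized eigenvectors.
lam, G = np.linalg.eigh(A)

# 2) G already has the eigenvectors g_1, ..., g_n as its columns.
# 3) G^{-1} A G is diagonal, with the eigenvalues on the diagonal.
D = np.linalg.inv(G) @ A @ G
assert np.allclose(D, np.diag(lam), atol=1e-10)
```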
Remark 12.14. Actually, Theorem 12.13 remains valid not only in the case of a simple spectrum but in much more general situations.
Exercise 12.15. 1) Prove the Spectral Theorem: under the conditions of Theorem 12.13, the following representation holds:
A = \sum_{k=1}^n \lambda_k P_k ,
where P_k is the orthogonal projector onto the eigenspace of A corresponding to \lambda_k .
2) Prove that the eigenvalues of a unitary operator lie on the unit circle. Find the analog of the spectral theorem for a unitary operator.
2) Prove that eigenvalues of a unitary operator lie on the unit circle. Find the
analog of the spectral theorem for a unitary operator.
Note that in the above part 1), P
k
is referred to as a spectral projector, and this
representation as a decomposition into projectors holds in innite dimension. In part
2), observe that the eigenvalues are no longer on the real line.
Part 3
Hilbert Spaces
LECTURE 13
Infinite Dimensional Spaces. Hilbert Spaces
1. Infinite Dimensional Spaces
Actually, many definitions in the infinite dimensional case remain the same. E.g., the definition of a linear space does not change (so Definitions 8.1, 8.5, 8.6 remain the same). The actual difference is:
Definition 13.1. A linear space is called infinite dimensional if there is an infinite system of linearly independent vectors.
Example 13.2. Let P be the set of all polynomials. P is clearly a linear space. Consider the following system: \{x^n\}_{n=0}^\infty . All x^n \in P and these vectors are linearly independent, but their number is infinite. Therefore, \dim P = \infty .
Example 13.3. Let C[0, 1] be as in Example 8.4. Consider \{x^n\}_{n=0}^\infty \subset C[0, 1] ; again \dim C[0, 1] = \infty .
As a matter of fact, spaces of functions are a good source of infinite dimensional spaces.
2. Basis
It is natural to ask what a basis is in such a space.
Definition 13.4. Let E be an infinite dimensional space. A system \{e_n\}_{n=1}^\infty \subset E is called a basis in E if every element X of E can be represented as follows:
X = \sum_{n=1}^\infty x_n e_n . (13.1)
Wow! But how are we to understand this infinite series? Here the main difference between finite and infinite dimensional spaces starts.
In general, it is a very deep issue. We are not able to treat it here and we restrict
ourselves to very special yet important cases.
3. Normed Spaces
Definition 13.5. A space E is called a normed space if there exists a real valued
function, called a norm, denoted by X, of X E subject to
1) X 0 ; X = 0 X = 0.
2) X = [[ X , C.
3) X +Y  X +Y  . (triangle inequality)
Example 13.6. Let E = \mathbb{R}^3 , \|X\| = \sqrt{x_1^2 + x_2^2 + x_3^2} . We claim \|X\| is a norm. Indeed,
1) \|X\| \geq 0 , and if \|X\| = 0 then \sqrt{x_1^2 + x_2^2 + x_3^2} = 0 \implies x_1 = x_2 = x_3 = 0 \implies X = 0 .
2) \|\alpha X\| = \sqrt{(\alpha x_1)^2 + (\alpha x_2)^2 + (\alpha x_3)^2} = \sqrt{\alpha^2} \sqrt{x_1^2 + x_2^2 + x_3^2} = |\alpha| \|X\| .
3)
\|X + Y\|^2 = \langle X + Y , X + Y \rangle
= \langle X , X \rangle + \langle X , Y \rangle + \langle Y , X \rangle + \langle Y , Y \rangle
= \|X\|^2 + \|Y\|^2 + 2 \langle X , Y \rangle
= \|X\|^2 + \|Y\|^2 + 2 \|X\| \|Y\| \cos \angle(X, Y)
\leq \|X\|^2 + \|Y\|^2 + 2 \|X\| \|Y\| = (\|X\| + \|Y\|)^2 .
So we get
\|X + Y\|^2 \leq (\|X\| + \|Y\|)^2 \implies \|X + Y\| \leq \|X\| + \|Y\| .
So in \mathbb{R}^3 the length of X can be chosen as a norm, and hence \mathbb{R}^3 is a normed space.
Lemma 13.7 (Cauchy's Inequality).
|\langle X , Y \rangle| \leq \langle X , X \rangle^{1/2} \langle Y , Y \rangle^{1/2} .
(without proof)
Theorem 13.8. Every Euclidean space E is normed.
Proof. Let \langle \cdot , \cdot \rangle be a scalar product in E. We claim \langle X , X \rangle = \|X\|^2 defines a norm. Based upon Lemma 13.7, we have
\|X + Y\|^2 = \langle X + Y , X + Y \rangle
= \langle X , X \rangle + \langle X , Y \rangle + \langle Y , X \rangle + \langle Y , Y \rangle
= \|X\|^2 + \|Y\|^2 + \langle X , Y \rangle + \overline{\langle X , Y \rangle} = \|X\|^2 + \|Y\|^2 + 2 \operatorname{Re} \langle X , Y \rangle
\leq \|X\|^2 + \|Y\|^2 + 2 |\langle X , Y \rangle|
\leq \|X\|^2 + \|Y\|^2 + 2 \|X\| \|Y\| = (\|X\| + \|Y\|)^2 (by the Cauchy inequality).
QED
Example 13.9. Let C[0, 1] be as previously defined. Set \|f\| \equiv \max_{x \in [0,1]} |f(x)| . Prove that \|f\| is a norm.
So, C[0, 1] is a normed space.
Example 13.10. P is not a normed space. (try to understand it!)
Example 13.11. Let C^1[0, 1] be the space of all functions f(x) continuous on [0, 1] whose derivatives are also continuous on [0, 1] , i.e.
C^1[0, 1] = \{ f \in C[0, 1] : f' \in C[0, 1] \} .
Prove that \|f\| = \max_{x \in [0,1]} |f(x)| + \max_{x \in [0,1]} |f'(x)| is a norm.
4. Hilbert Spaces
We first introduce the concept of a scalar (inner) product in the very same way as in Definition 10.1.
The point here is that not all infinite dimensional spaces have a scalar product.
Example 13.12. C[0, 1] has a scalar product. Indeed, let f, g \in C[0, 1] ; then
\langle f , g \rangle = \int_0^1 f(x) \overline{g(x)} \, dx
satisfies all the properties of a scalar product.
Example 13.13. Let P be the space of all polynomials. There is no scalar product
in P.
Definition 13.14. An infinite dimensional Euclidean space is called a Hilbert space.
Example 13.15. Let L^2(0, 1) be the space of all functions f(x) such that
\int_0^1 |f(x)|^2 \, dx < \infty (square integrable functions).
We claim that L^2(0, 1) is a Hilbert space.
LECTURE 14
L^2 Spaces. Convergence in Normed Spaces. Basis and Coordinates in Normed and Hilbert Spaces
1. L^2 spaces
The most typical examples of Hilbert spaces are the so-called L^2 spaces.
Definition 14.1.¹ L^2(a, b) is the set of all functions f(x) defined on (a, b) and satisfying the condition
\int_a^b |f(x)|^2 \, dx < \infty . (14.1)
Theorem 14.2. L^2(a, b) is a Hilbert space.
Proof. Let us check first that L^2(a, b) is a linear space. It is clear that out of the 10 properties of a linear space, only #1 is in question. Let f, g \in L^2(a, b) . We need to prove that f + g \in L^2(a, b) . We have
|f(x) + g(x)|^2 = |f(x)|^2 + |g(x)|^2 + 2 \operatorname{Re} f(x) \overline{g(x)} .
But \operatorname{Re} f(x) \overline{g(x)} \leq |f(x)| |g(x)| and hence
|f(x) + g(x)|^2 \leq |f(x)|^2 + |g(x)|^2 + 2 |f(x)| |g(x)| . (14.2)
It follows from the obvious inequality
(|f| - |g|)^2 \geq 0 \iff |f|^2 + |g|^2 \geq 2 |f| |g|
that
2 |f(x)| |g(x)| \leq |f(x)|^2 + |g(x)|^2
and (14.2) becomes
|f(x) + g(x)|^2 \leq 2 |f(x)|^2 + 2 |g(x)|^2
\implies \int_a^b |f(x) + g(x)|^2 dx \leq 2 \underbrace{\int_a^b |f(x)|^2 dx}_{< \infty} + 2 \underbrace{\int_a^b |g(x)|^2 dx}_{< \infty}
\implies f(x) + g(x) \in L^2(a, b) .
So, L^2(a, b) is a linear space.
¹ L^2(a, b) is a typical example of a function space. Other function spaces are C[0, 1] , P , etc.
Clearly,
\langle f , g \rangle = \int_a^b f(x) \overline{g(x)} \, dx
has all the properties of a scalar product, and hence L^2(a, b) is a Hilbert space. QED
Remark 14.3. Every function f(x) continuous on [a, b] is in L^2(a, b) , so C[a, b] \subset L^2(a, b) . But L^2(a, b) also contains discontinuous functions and even some unbounded ones. For example,
\frac{1}{\sqrt[4]{x}} \in L^2(0, 1) , \quad \log x \in L^2(0, 1) ,
but both functions are unbounded at 0. Note, however, that
\frac{1}{x} \notin L^2(0, 1) \quad \text{but} \quad f(x) = \frac{\sin x}{x} \in L^2(0, 1) .
Remark 14.4. In the same way one can define
L^2(0, \infty) = \Big\{ f(x) : \int_0^\infty |f(x)|^2 dx < \infty \Big\}
or
L^2(-\infty, \infty) \equiv L^2(\mathbb{R}) = \Big\{ f(x) : \int_{-\infty}^{\infty} |f(x)|^2 dx < \infty \Big\} .
Exercise 14.5. Prove that:
\frac{1}{1 + x^2} \in L^2(\mathbb{R}) , \quad \frac{1}{x} \notin L^2(\mathbb{R}) ,
\frac{e^{ix} - 1}{x} \in L^2(\mathbb{R}) , \quad e^{ix} \notin L^2(\mathbb{R}) , \quad 1 \notin L^2(\mathbb{R}) .
Definition 14.6. A function f(x) \in L^2(a, b) is called normalized if
\|f\|^2 = \int_a^b |f(x)|^2 dx = 1 .
Two functions f, g \in L^2(a, b) are said to be orthogonal if
\langle f , g \rangle = \int_a^b f(x) \overline{g(x)} dx = 0 .
Example 14.7. Let f_n(x) \equiv \frac{1}{\sqrt{\pi}} \sin nx in L^2(0, 2\pi) . We prove that \|f_n\| = 1 and f_n \perp f_m , n \neq m . Indeed,
\|f_n\|^2 = \frac{1}{\pi} \int_0^{2\pi} \sin^2 nx \, dx = \frac{1}{\pi} \int_0^{2\pi} \frac{1 - \cos 2nx}{2} \, dx = \frac{1}{\pi} \cdot \frac{x}{2} \Big|_0^{2\pi} = 1 \implies \|f_n\| = 1 .
Now consider \langle f_n , f_m \rangle = \frac{1}{\pi} \int_0^{2\pi} \sin nx \, \sin mx \, dx .
But \sin x \sin y = \frac{1}{2} \big( \cos(x - y) - \cos(x + y) \big) . So,
\langle f_n , f_m \rangle = \frac{1}{2\pi} \underbrace{\int_0^{2\pi} \cos(n - m)x \, dx}_{= 0 , \ n \neq m} - \frac{1}{2\pi} \underbrace{\int_0^{2\pi} \cos(n + m)x \, dx}_{= 0} = 0 , \quad n \neq m .
So \langle f_n , f_m \rangle = 0 , n \neq m .
As we will see, the functions \{\frac{1}{\sqrt{\pi}} \sin nx\}_{n=1}^\infty play a very important role in Fourier analysis.
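The orthonormality computed in Example 14.7 is easy to confirm numerically; the following sketch (not part of the notes) uses a Riemann sum over one full period, which is very accurate for trigonometric polynomials:

```python
import numpy as np

# Check that f_n(x) = sin(nx)/sqrt(pi) are orthonormal in L^2(0, 2*pi).
x = np.linspace(0.0, 2*np.pi, 200000, endpoint=False)
dx = x[1] - x[0]

def f(n):
    return np.sin(n*x) / np.sqrt(np.pi)

for n in range(1, 4):
    norm_sq = np.sum(f(n)**2) * dx           # ||f_n||^2
    assert abs(norm_sq - 1.0) < 1e-6
    for m in range(n + 1, 5):
        inner = np.sum(f(n)*f(m)) * dx       # <f_n, f_m>
        assert abs(inner) < 1e-6
```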
2. Convergence in normed spaces
Definition 14.8. Let E be a normed space and \{X_n\}_{n=1}^\infty a sequence of vectors in E. The sequence \{X_n\}_{n \geq 1} is said to converge to a vector X \in E in norm if
\lim_{n \to \infty} \|X - X_n\| = 0 . (14.3)
Remark 14.9. If X, Y \in E , then \|X - Y\| plays the role of the distance between X and Y ( \|X - Y\| is exactly the distance if X, Y \in \mathbb{R}^3 and \|X\| = \sqrt{x_1^2 + x_2^2 + x_3^2} ). In view of this, (14.3) means that the distance between X and X_n gets smaller and smaller as n \to \infty .
Definition 14.10. A sequence \{X_n\}_{n \geq 1} is said to be a Cauchy sequence if
\lim_{n, m \to \infty} \|X_n - X_m\| = 0 .
Definition 14.11. If every Cauchy sequence \{X_n\} \subset E converges to some element X of E, the space E is called a complete, or Banach, space.
The theory of Banach spaces is extremely complex and still has a lot of unanswered questions. Even the elementary theory of Banach spaces requires advanced analysis and hence lies beyond the scope of our consideration.
Some examples: C[0, 1] is a Banach space, L^2(0, 1) is a Banach space, but P[0, 1] is not. Every finite dimensional Euclidean space is Banach. Not every normed space is a Banach space. Actually, the definition of a Hilbert space we gave previously is incomplete:
a Hilbert space is a Banach space with a scalar product.
As we know, L^2(a, b) is a Hilbert space, i.e. it is also Banach, which implies:
if \{f_n\}_{n=1}^\infty \subset L^2(a, b) and \lim_{n, m \to \infty} \int_a^b |f_n(x) - f_m(x)|^2 dx = 0 ,
then f_n(x) converges in L^2(a, b) to some function f(x) \in L^2(a, b) .
Example 14.12. It may be proven that
\sum_{k=0}^n \frac{x^k}{k!} \to e^x \quad \text{in } L^2(0, 1) ,
though the proof is not easy.
Note that C[0, 1] is not a Hilbert space because it is incomplete: we can find a Cauchy sequence of continuous functions going to a discontinuous function. (Figure: piecewise linear functions, equal to 0 to the left of 1/2 - 1/n and to 1 to the right of 1/2 + 1/n , belong to C[0, 1] , but their limit, the step function jumping at 1/2 , does not.)
But L^2(0, 1) is a Hilbert space big enough to contain all the limits of Cauchy sequences in C[0, 1] .
3. Bases and Coordinates
Now we are ready to talk about bases and coordinates in innite dimensional spaces.
Definition 14.13. Let E be a Banach space. A system \{e_n\}_{n=1}^\infty is called a basis if
\forall X \in E \ \exists \{x_k\}_{k=1}^\infty \subset \mathbb{C} : \ \Big\| X - \sum_{k=1}^n x_k e_k \Big\| \to 0 , \ n \to \infty .
The numbers \{x_k\}_{k=1}^\infty are called the coordinates of X in the basis \{e_k\}_{k=1}^\infty .
Definition 14.14. A series \sum_{k=1}^\infty X_k is called convergent to X in E if
\Big\| X - \sum_{k=1}^n X_k \Big\| \to 0 , \ n \to \infty .
Definition 14.15. A basis \{e_k\}_{k=1}^\infty is called orthogonal and normalized (an ONB) in a Hilbert space H if
\langle e_i , e_k \rangle = \delta_{ik} .
Theorem 14.16. If \{e_k\}_{k=1}^\infty is an ONB, then every X \in H can be represented as follows:
X = \sum_{k=1}^\infty \langle X , e_k \rangle e_k ,
where the series converges in H. Moreover,
\|X\|^2 = \sum_{k \geq 1} |x_k|^2 , \quad x_k = \langle X , e_k \rangle .
LECTURE 15
Fourier Series
Now we are going to harvest the previous abstract results. For example, in the Hilbert space L^2(-\pi, \pi) there is an easy basis to construct.
Theorem 15.1. \Big\{ \frac{1}{\sqrt{2\pi}} e^{inx} \Big\}_{n \in \mathbb{Z}} is an ONB in L^2(-\pi, \pi) .
Proof. Check that this system is orthogonal and normalized. Set e_n = \frac{1}{\sqrt{2\pi}} e^{inx} . If n \neq m ,
\langle e_n , e_m \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{inx} e^{-imx} dx = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{i(n-m)x} dx = \frac{1}{2\pi} \frac{e^{i(n-m)x}}{i(n-m)} \Big|_{-\pi}^{\pi}
= \frac{1}{\pi} \frac{e^{i(n-m)\pi} - e^{-i(n-m)\pi}}{2i(n-m)} = \frac{1}{\pi(n-m)} \sin(n-m)\pi = 0 ,
i.e. \langle e_n , e_m \rangle = 0 . Also
\|e_n\|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{inx} e^{-inx} dx = 1 .
We leave the fact that \{e_n\} is a basis without proof.
Theorem 15.1 leads to
Theorem 15.2 (Expansion in Fourier Series). Let f be a real valued function such that f \in L^2(-\pi, \pi) . Then
f(x) = \sum_{n \in \mathbb{Z}} c_n e^{inx} , \quad c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} dx . (15.1)
For \{c_n\} , the Parseval identity holds:
\|f\|^2 = 2\pi \sum_{n \in \mathbb{Z}} |c_n|^2 . (15.2)
Proof. By Theorem 15.1, \{\frac{1}{\sqrt{2\pi}} e^{inx}\}_{n \in \mathbb{Z}} is an ONB. For simplicity, set e_n = \frac{1}{\sqrt{2\pi}} e^{inx} . We have
f = \sum_{n \in \mathbb{Z}} \langle f , e_n \rangle e_n = \sum_{n \in \mathbb{Z}} \Big( \int_{-\pi}^{\pi} f(x) \overline{e_n(x)} dx \Big) e_n
= \sum_{n \in \mathbb{Z}} \Big( \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} f(x) e^{-inx} dx \Big) \frac{1}{\sqrt{2\pi}} e^{inx}
= \sum_{n \in \mathbb{Z}} \underbrace{\Big( \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} dx \Big)}_{c_n} e^{inx} = \sum_{n \in \mathbb{Z}} c_n e^{inx} .
By Theorem 14.16,
\|f\|^2 = \sum_n |\langle f , e_n \rangle|^2 = \sum_n |\sqrt{2\pi} \, c_n|^2 = 2\pi \sum_n |c_n|^2
and (15.2) follows. QED
Formula (15.1) is called Fourier's expansion formula.
Example 15.3. Let f(x) = \begin{cases} 0 , & -\pi < x < 0 \\ 1 , & 0 \leq x < \pi . \end{cases} Find its Fourier series.
c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} dx = \frac{1}{2\pi} \int_0^{\pi} e^{-inx} dx = \frac{1}{2\pi} \frac{e^{-inx}}{-in} \Big|_0^{\pi} = \frac{1}{2\pi} \frac{1 - e^{-in\pi}}{in}
= \begin{cases} \frac{1}{i\pi n} , & n = \pm 1, \pm 3, \ldots \\ 0 , & n = \pm 2, \pm 4, \ldots \\ \frac{1}{2} , & n = 0 . \end{cases}
So
f(x) = \frac{1}{2} + \frac{1}{i\pi} \sum_{\substack{n \in \mathbb{Z} \\ n \text{ odd}}} \frac{e^{inx}}{n} . (15.3)
Now clearly,
\|f\|^2 = \int_{-\pi}^{\pi} |f(x)|^2 dx = \int_0^{\pi} dx = \pi .
But we can also verify this result by the Parseval identity:
\|f\|^2 = 2\pi \Big( \frac{1}{4} + \frac{1}{\pi^2} \sum_{n=-\infty}^{\infty} \frac{1}{(2n+1)^2} \Big) = 2\pi \Big( \frac{1}{4} + \frac{1}{\pi^2} \cdot 2 \cdot \frac{\pi^2}{8} \Big) = \pi .
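The Parseval computation above can be confirmed numerically; this is an illustrative sketch (the truncation bound is arbitrary), using the coefficients found in Example 15.3:

```python
import numpy as np

# Fourier coefficients of the step function of Example 15.3:
# |c_0|^2 = 1/4, |c_n|^2 = 1/(pi*n)^2 for odd n, 0 otherwise.
n = np.arange(1, 200001, 2)                  # positive odd indices
total = 0.25 + 2*np.sum(1.0/(np.pi*n)**2)    # both signs of n contribute

# Parseval: ||f||^2 = 2*pi * sum |c_n|^2 should equal pi.
assert abs(2*np.pi*total - np.pi) < 1e-3
```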
Remark 15.4. In general, series (15.1) converges only in L^2(-\pi, \pi) , i.e.
\int_{-\pi}^{\pi} \Big| f(x) - \sum_{n=-N}^{N} c_n e^{inx} \Big|^2 dx \to 0 , \ N \to \infty
does not imply that
\sum_{n=-N}^{N} c_n e^{inx} \to f(x) , \ \forall x \in (-\pi, \pi) .
The reason can be observed from Example 15.3. Series (15.3) converges to f(x) for all x \in (-\pi, \pi) but x = 0 . At x = 0 ,
\frac{1}{2} + \frac{1}{i\pi} \sum_{\substack{|n| \leq N \\ n \text{ odd}}} \frac{1}{n} = \frac{1}{2} + \frac{1}{i\pi} \Big( -\frac{1}{N} - \frac{1}{N-2} - \ldots - 1 + 1 + \ldots + \frac{1}{N-2} + \frac{1}{N} \Big) = \frac{1}{2} .
So at x = 0 , (15.3) converges to \frac{1}{2} .
It can be proven that if f(x) is continuous then its Fourier series converges for all x \in (-\pi, \pi) . If f(x) is piecewise continuous, then on each interval of continuity of f the Fourier series converges pointwise (i.e. at every point of this interval), and if x_0 is a point of jump discontinuity then the Fourier series converges to
\frac{f(x_0 - 0) + f(x_0 + 0)}{2} , \quad f(x_0 - 0) = \lim_{\substack{x \to x_0 \\ x < x_0}} f(x) .
Since in applications most functions are at least piecewise continuous, series (15.1) converges not only in norm but also pointwise.
Note that e^{inx} = \cos nx + i \sin nx , and hence instead of \{\frac{1}{\sqrt{2\pi}} e^{inx}\}_{n \in \mathbb{Z}} we can consider \{\frac{1}{\sqrt{\pi}} \sin nx , \frac{1}{\sqrt{\pi}} \cos nx\}_{n \geq 0} , which is already a real basis. An expansion in this basis is called a trigonometric series.
Theorem 15.5. Let f \in L^2(-\pi, \pi) . Then
f(x) = \frac{a_0}{2} + \sum_{n \geq 1} (a_n \cos nx + b_n \sin nx) (15.4)
where
a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx \, dx , \ n = 0, 1, \ldots ;
b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx \, dx , \ n = 1, 2, \ldots .
Exercise 15.6. Derive Theorem 15.5 from Theorem 15.2 and establish the Parseval
identity.
Definition 15.7. The functions e^{inx} , \sin nx , \cos nx are called simple harmonics.
So a Fourier series can be viewed as an expansion in simple harmonics, which play an enormous role in physics.
Remark 15.8. Theorems 15.2 & 15.5 give us expansions of functions defined on (-\pi, \pi) . However, if we continue f(x) outside (-\pi, \pi) as a periodic function, f(x + 2\pi) = f(x) , then the Fourier expansion formulas are valid for all x. To see this, one has only to observe that the e^{inx} are 2\pi-periodic functions too.
Theorem 15.9. Let f \in L^2(-\ell, \ell) . Then
f(x) = \sum_{n \in \mathbb{Z}} c_n e^{i \frac{\pi n}{\ell} x} , \quad c_n = \frac{1}{2\ell} \int_{-\ell}^{\ell} f(x) e^{-i \frac{\pi n}{\ell} x} dx , \ n \in \mathbb{Z} . (15.5)
Exercise 15.10. Derive this theorem from Theorem 15.2 using a suitable change of variables.
Exercise 15.11. Expand f(x) = x^2 on [-\pi, \pi] by
(a) (15.1);
(b) (15.5).
Exercise 15.12. Expand f(x) = |x| on [-1, 1] by (15.5).
LECTURE 16
Some Applications of Fourier Series
1. Signal Processing
Consider a black box. Assume that it is a linear system L, taking an input f to an output Lf, i.e.
L(f_1 + f_2) = L f_1 + L f_2 , \quad L(\lambda f) = \lambda L f .
In other words, the system works as a linear operator. In physical terms this means that the system is subject to the principle of superposition.
There are two basic problems:
1) Given the input f and L, find the output Lf (direct problem);
2) Given the input f and the output Lf, find L (inverse problem).
We assume that the input f(t) is a periodic function of t, i.e. we study periodic signals, and for simplicity we put the period equal to 2\pi .
We claim that, to study problems 1), 2), we only have to study how simple harmonics come through the system.
Let \{e_n\}_{n \in \mathbb{Z}} , e_n = e^{int} , be the sequence of simple harmonics (see the figure below).
Then we conduct a laboratory experiment
e_n \mapsto L e_n , \ n \in \mathbb{Z}
and form a database: L e_n \equiv g_n . The inverse problem 2) is then solved, since \{L e_n\}_{n \in \mathbb{Z}} determines our system. Now, given a signal f(t), we want to know what the output will be without actually putting this signal through the system.
What we have to do is represent f(t) by the Fourier formula
f(t) = \sum_{n \in \mathbb{Z}} c_n e^{int} (16.1)
(Figure: the constant signal e_0 and the first, second, and third harmonics e_1 , e_2 , e_3 .)
and compute all the \{c_n\}_{n \in \mathbb{Z}} . Then the output (Lf)(t) is
(Lf)(t) = \sum_{n \in \mathbb{Z}} c_n L e^{int} = \sum_{n \in \mathbb{Z}} c_n L e_n . (16.2)
So once the system is determined, i.e. the \{L e_n\} are known, one can easily compute Lf for any signal f by formulas (16.1), (16.2). So the direct problem 1) is also solved.
Example 16.1. Consider the following system L:
L e_n = \begin{cases} e_n , & n_0 \leq n \leq n_1 \\ 0 , & \text{otherwise.} \end{cases}
So L puts through all simple harmonics with frequencies in [n_0, n_1] with no change and cuts off all other harmonics. It is a filter:
L f = \sum_{n = n_0}^{n_1} c_n e_n .
(Figures: a low frequency filter and a high frequency filter.)
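A filter of this kind is easy to model on the level of Fourier coefficients. The following sketch is illustrative (the signal and band edges are made up, and the band is applied symmetrically in n, as is natural for real signals):

```python
import numpy as np

def apply_filter(coeffs, n0, n1):
    """Keep only the Fourier coefficients c_n with n0 <= |n| <= n1.

    coeffs: dict mapping harmonic index n to coefficient c_n.
    """
    return {n: c for n, c in coeffs.items() if n0 <= abs(n) <= n1}

# A signal with harmonics n = +-1, +-3, +-5 (coefficients made up).
f = {1: 1.0, -1: 1.0, 3: 0.5, -3: 0.5, 5: 0.2, -5: 0.2}

# Band-pass over [2, 4]: only the n = +-3 harmonic survives.
out = apply_filter(f, 2, 4)
assert out == {3: 0.5, -3: 0.5}
```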
Example 16.2. Consider L defined by
L e_n = \begin{cases} A e_n , & n_0 \leq n \leq n_1 \\ 0 , & \text{otherwise} \end{cases} , \quad A > 1 .
L is a band amplifier. (Figures: an (n_0, n_1) frequency amplifier, and a resonance amplifier concentrated at n_0 with amplification A.)
So what does a resonance amplifier do to a signal? It cuts out all the harmonics but one, n_0 , and multiplies its amplitude by A.
Exercise 16.3. Put the signal
f(t) = t , \ -\pi \leq t \leq \pi ,
through the system L:
L e_n = \begin{cases} e_n , & n = 0, 1, 2 \\ 0 , & \text{otherwise.} \end{cases}
Graph the output Lf. Use the Fourier trigonometric series expansion for f(x).
2. Solving Some Linear ODEs
We consider just one example from the theory of electrical circuits.
It follows from Kirchhoff's law that for the circuit in the figure (an RLC circuit with capacitance C, inductance L, resistance R, and source E(t)), where
E(t + T) = E(t)
is a T-periodic electricity source, the charge Q(t) satisfies the ODE
L \frac{d^2 Q}{dt^2} + R \frac{dQ}{dt} + \frac{1}{C} Q = E(t) (16.3)
assuming that the circuit is in a steady state (i.e. it has been plugged in long enough that all transition processes are over).
We need to find Q(t), i.e. we want to find the response of our linear system to the periodic external source E(t).
Note that this problem is studied in standard ODE courses. As you may remember, it can be approached in a few different ways, but there is no exact solution unless E(t) has a very specific form. We offer here yet another solution (a Fourier series solution). Here is how it goes.
By Theorem 15.9 (with 2\ell = T),
E(t) = \sum_{n \in \mathbb{Z}} c_n e^{in\omega t} \quad \Big( \omega = \frac{2\pi}{T} \Big) (16.4)
and we look for a solution to (16.3) in the form of the Fourier series
Q(t) = \sum_{n \in \mathbb{Z}} q_n e^{in\omega t} (16.5)
where the q_n are unknown. We have (if we assume that \frac{d}{dt} and \sum can be interchanged)
\frac{dQ}{dt} = \sum_{n \in \mathbb{Z}} q_n (in\omega) e^{in\omega t} , (16.6)
\frac{d^2 Q}{dt^2} = \sum_{n \in \mathbb{Z}} q_n (-n^2 \omega^2) e^{in\omega t} . (16.7)
Plugging (16.4)-(16.7) into (16.3) yields
\sum_{n \in \mathbb{Z}} \Big( L(-n^2\omega^2) + R(in\omega) + \frac{1}{C} \Big) q_n e^{in\omega t} = \sum_{n \in \mathbb{Z}} c_n e^{in\omega t}
\implies \Big( -n^2\omega^2 L + in\omega R + \frac{1}{C} \Big) q_n = c_n ,
i.e.
q_n = \frac{C c_n}{(1 - n^2\omega^2 CL) + in\omega CR} . (16.8)
But
c_n = \frac{1}{T} \int_{-T/2}^{T/2} E(t) e^{-in\omega t} dt (16.9)
are all known (though you do have to compute them), and the problem is solved in the form of the Fourier expansion (16.5) with q_n computed by (16.8), (16.9).
Exercise 16.4. Find the Fourier series solution to (16.3) (assuming a steady-state solution) in the form (15.5) for
(a) E(t) given by its graph (a periodic signal of amplitude a, shown on [-2T, 2T]);
(b) E(t) given by its graph (a periodic signal of amplitude a, shown at the points \pm T/2 , 3T/2).
Example 16.5. Take E(t) to be as in Example 15.3 (so T = 2\pi and \omega = 1). We found its Fourier series expansion to be
E(t) = \frac{1}{2} + \sum_{\substack{n \in \mathbb{Z} \\ n \text{ odd}}} \frac{1}{i\pi n} e^{int} .
So, using (16.8), we have
q_0 = \frac{C}{2} , \quad q_n = \frac{C}{i\pi n \big[ (1 - n^2 CL) + in CR \big]} , \ n \text{ an odd integer,}
and the charge Q(t) is given by
Q(t) = \frac{C}{2} + \sum_{\substack{n \in \mathbb{Z} \\ n \text{ odd}}} \frac{C e^{int}}{i\pi n \big[ (1 - n^2 CL) + in CR \big]} .
Note that only the odd integers appear again, and combining the n and -n terms cancels the imaginary parts, so Q(t) is real.
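The mode-by-mode identity behind (16.8) can be checked numerically. This is an illustrative sketch (the circuit parameters are arbitrary; Lc denotes the inductance to avoid clashing with the operator L of the previous section):

```python
import numpy as np

# Circuit parameters (illustrative values) and the source of Example 16.5.
Lc, R, C = 1.0, 0.5, 2.0     # inductance, resistance, capacitance
w = 1.0                      # omega = 2*pi/T with T = 2*pi

def c(n):
    """Fourier coefficients of E(t) from Example 15.3."""
    if n == 0:
        return 0.5
    return 1.0/(1j*np.pi*n) if n % 2 else 0.0

def q(n):
    """Formula (16.8) for the charge coefficients."""
    return C*c(n) / ((1 - n**2*w**2*C*Lc) + 1j*n*w*C*R)

# Each mode satisfies the ODE: L*(-n^2 w^2) q_n + R*(i n w) q_n + q_n/C = c_n.
for n in range(-7, 8):
    lhs = Lc*(-(n*w)**2)*q(n) + R*(1j*n*w)*q(n) + q(n)/C
    assert abs(lhs - c(n)) < 1e-12
```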
LECTURE 17
Linear Operators in Hilbert Spaces
The theory of linear operators in infinite dimensional spaces is much more difficult than the one for finite dimensional spaces, and it is still a very active area of research.
1. General Concepts
We will generally denote by H a Hilbert space (infinite dimensional) and by E a Euclidean space (finite dimensional). You cannot associate a matrix with every linear operator, since not every space has a basis.
The definition of a linear operator in a Hilbert space is absolutely the same as in the case of a finite dimensional space.
Definition 17.1. The domain Dom A of a linear operator A in a space H is the set of all elements X \in H such that AX \in H , i.e.
Dom A = \{ X \in H : AX \in H \} .
In the finite dimensional case, for every linear operator A defined in E, Dom A = E . For operators in infinite dimensional spaces, this is no longer true.
Example 17.2. Let H = L^2(0, 1) and A = \frac{d}{dx} . Pick f(x) = \frac{1}{\sqrt{2}} \frac{1}{\sqrt[4]{x}} . By a direct computation, \|f\| = 1 , so f \in H . Compute now Af :
Af = \frac{df}{dx} = \frac{1}{\sqrt{2}} \Big( -\frac{1}{4} \Big) \frac{1}{x^{5/4}}
but
\|Af\|^2 = \frac{1}{32} \int_0^1 \frac{dx}{(x^{5/4})^2} = \frac{1}{32} \int_0^1 \frac{dx}{x^{5/2}} = \infty
and hence Af \notin L^2(0, 1) and f \notin Dom A .
So Dom \big( \frac{d}{dx} \big) \subset H but Dom \big( \frac{d}{dx} \big) \neq H !
Example 17.3. Let H = L^2(0, 1) again and define an operator A as follows:
(Af)(x) = v(x) f(x)
where v(x) is a bounded function on (0, 1) , i.e. \max_{x \in (0,1)} |v(x)| = C < \infty .
We claim that Dom A = H . Indeed, \forall f \in L^2(0, 1) we have
\|Af\|^2 = \int_0^1 |v(x) f(x)|^2 dx \leq \int_0^1 \max_{x \in (0,1)} |v(x)|^2 \, |f(x)|^2 dx
= \max_{x \in (0,1)} |v(x)|^2 \int_0^1 |f(x)|^2 dx = C^2 \|f\|^2 < \infty . (17.1)
So \forall f \in L^2(0, 1) , Af \in L^2(0, 1) and Dom A = H .
The operator A in Example 17.3 is known in physics as the operator of energy; v is the potential. If v(x) = x , A is the operator of coordinate. The domain being the entire space is not always the case: it requires v to be bounded.
Exercise 17.4. Show that if v(x) = 1/x , the Coulomb potential, then Dom A \neq H .
It would be natural to ask: what is the difference between the two examples?
Definition 17.5. Let A be a linear operator in H. The norm \|A\| of A is defined as
\|A\| = \max_{\|X\| \leq 1} \|AX\| .
Definition 17.6. A linear operator A is called bounded if \|A\| < \infty and unbounded if \|A\| = \infty .
Example 17.7. Let A be as in Example 17.3. By Definition 17.5,
\|A\|^2 = \max_{\|f\| \leq 1} \|Af\|^2 = \max_{\|f\| \leq 1} \int_0^1 |v(x) f(x)|^2 dx . (17.2)
By (17.1),
\int_0^1 |v(x) f(x)|^2 dx \leq C^2 \|f\|^2
and hence (17.2) can be continued:
\|A\|^2 \leq C^2 \max_{\|f\| \leq 1} \|f\|^2 = C^2 \implies \|A\| \leq C and A is bounded.
Example 17.8. Let A be as in Example 17.2. Pick f(x) = \frac{1}{\sqrt{2}} \frac{1}{\sqrt[4]{x}} . We have \|f\| = 1 but \|Af\| = \infty \implies \|A\| = \infty , and A is unbounded.
Theorem 17.9. A is bounded \iff \exists C < \infty :
\|AX\| \leq C \|X\| , \ \forall X \in H . (17.3)
Proof. 1) (\implies). Assume A is bounded, i.e.
\max_{\|X\| \leq 1} \|AX\| \equiv \|A\| < \infty . (17.4)
Let X \in H . Consider e = \frac{X}{\|X\|} . It follows from (17.4) that
\|Ae\| \leq \|A\| \implies \Big\| A \frac{X}{\|X\|} \Big\| \leq \|A\| \implies \|AX\| \leq \|A\| \|X\|
and (17.3) follows.
2) (\impliedby). Assume we have (17.3). Then (17.3) implies
\|AX\| \leq C , \ \|X\| \leq 1 ,
which implies
\max_{\|X\| \leq 1} \|AX\| \leq C . QED
Theorem 17.10. All linear operators in finite dimensional spaces are bounded.
Proof. Let A be a linear operator in a finite dimensional space E. \forall X \in E ,
\|AX\| = \Big\| A \sum_{i=1}^n x_i e_i \Big\| = \Big\| \sum_{i=1}^n x_i A e_i \Big\| \leq \sum_{i=1}^n |x_i| \|A e_i\| \leq \sqrt{\sum_{i=1}^n |x_i|^2} \sqrt{\sum_{i=1}^n \|A e_i\|^2} . (17.5)
Estimate now \|A e_i\| :
\|A e_i\| = \Big\| \sum_{k=1}^n a_{ki} e_k \Big\| \leq \sum_{k=1}^n |a_{ki}| \|e_k\| = \sum_{k=1}^n |a_{ki}| , \quad \text{since } \|e_k\| = 1 .
So \|A e_i\| \leq \sum_{k=1}^n |a_{ki}| . Plug it into (17.5) and we have
\|AX\| \leq \underbrace{\sqrt{\sum_{i=1}^n |x_i|^2}}_{\|X\|} \sqrt{\sum_{i=1}^n \Big( \sum_{k=1}^n |a_{ki}| \Big)^2} \leq \Big( \sum_{i,k=1}^n |a_{ki}| \Big) \|X\|
\implies \|A\| = \max_{\|X\| \leq 1} \|AX\| \leq \sum_{i,k=1}^n |a_{ki}| < \infty \implies A \text{ is bounded. QED}
Exercise 17.11. Consider the Euclidean norm \|\cdot\|_2 on \mathbb{R}^n .
1) Show that if A is an operator from \mathbb{R}^n to \mathbb{R}^n , then
\|A\|_2 = \sqrt{\rho(A^* A)}
where \rho is the spectral radius, i.e. the largest absolute value |\lambda_{\max}| of the eigenvalues of A^* A .
2) Now find the norm of A and C again, but using the definition of the norm.
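Part 1) of the exercise can be checked numerically for a random matrix. This is a sketch (the matrix is arbitrary; numpy.linalg.norm(A, 2) computes exactly the operator norm max over ||X|| <= 1 of ||AX||):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# ||A||_2 as the square root of the spectral radius of A^T A ...
rho = np.max(np.abs(np.linalg.eigvals(A.T @ A)))
norm_from_rho = np.sqrt(rho)

# ... agrees with numpy's operator 2-norm.
assert np.isclose(norm_from_rho, np.linalg.norm(A, 2))

# It also obeys the crude bound sum_{i,k} |a_{ki}| from Theorem 17.10.
assert norm_from_rho <= np.abs(A).sum()
```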
So an unbounded operator is a phenomenon of infinite dimensional spaces. The thing is that operators arising in mathematical physics are often unbounded.
One of the most popular bounded operators in mathematical physics is the Fourier transform. Some definitions first.
Definition 17.12. L^1(a, b) = \{ f : \int_a^b |f(x)| \, dx < \infty \} is the space of functions integrable on (a, b) .
One can easily see that L^1(a, b) is indeed a space:
f, g \in L^1(a, b) \implies \alpha f + \beta g \in L^1(a, b) , where \alpha , \beta are scalars.
In applications one often has to deal with
L^1(\mathbb{R}) \equiv L^1(-\infty, \infty) = \Big\{ f : \int_{-\infty}^{\infty} |f(x)| \, dx < \infty \Big\} .
Clearly every function f(x) continuous on [a, b] is in L^1(a, b) . It is also obvious that not every continuous function is in L^1(\mathbb{R}) . Take, e.g., f(x) = 1 :
\int_{-\infty}^{\infty} |f(x)| \, dx = \int_{-\infty}^{\infty} dx = \infty
and 1 \notin L^1(\mathbb{R}) .
A typical example of an unbounded L^1 function is f(x) = \frac{1}{\sqrt{x}} : \frac{1}{\sqrt{x}} \in L^1(0, 1) . However, \frac{1}{\sqrt{x}} \notin L^1(0, \infty) . But \frac{1}{1 + x^2} \in L^1(\mathbb{R}) . Next, \frac{1}{x} \notin L^1(0, 1) , \frac{1}{x^2} \in L^1(1, \infty) .
Make sure that you understand these things! Unfortunately, L^1 is not a Hilbert space.
Definition 17.13. The Fourier transform of a function f(x) \in L^1(\mathbb{R}) is the function \hat{f}(\lambda) defined by
F f(\lambda) = \hat{f}(\lambda) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-i\lambda x} f(x) \, dx . (17.6)
Note that F : L^1(\mathbb{R}) \to L^\infty(\mathbb{R}) . Indeed,
\| \hat{f} \|_{L^\infty} = \max_{\lambda} \Big| \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-i\lambda x} f(x) \, dx \Big|
and
\Big| \int_{-\infty}^{\infty} e^{-i\lambda x} f(x) \, dx \Big| \leq \int_{-\infty}^{\infty} \underbrace{|e^{-i\lambda x}|}_{= 1} |f(x)| \, dx = \int_{-\infty}^{\infty} |f(x)| \, dx = \|f\|_{L^1}
\implies \| \hat{f} \|_{L^\infty} \leq \frac{1}{\sqrt{2\pi}} \|f\|_{L^1} ,
and so F : L^1(\mathbb{R}) \to L^\infty(\mathbb{R}) is bounded.
But now consider the box function
f(x) = \begin{cases} 1 , & -L \leq x \leq L \\ 0 , & \text{otherwise} \end{cases}
for some fixed L > 0 . We verify that f \in L^1(\mathbb{R}) since
\|f\|_{L^1} = \int_{\mathbb{R}} |f| = \int_{-L}^{L} 1 \, dx = 2L < \infty .
We next compute
F f(\lambda) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} f(x) e^{-i\lambda x} dx = \frac{1}{\sqrt{2\pi}} \int_{-L}^{L} e^{-i\lambda x} dx
= \frac{1}{\sqrt{2\pi}} \frac{e^{-i\lambda x}}{-i\lambda} \Big|_{-L}^{L} = \frac{e^{i\lambda L} - e^{-i\lambda L}}{\sqrt{2\pi} \, i\lambda} = \sqrt{\frac{2}{\pi}} \frac{\sin \lambda L}{\lambda} .
But
\| F f \|_{L^1} = \int_{\mathbb{R}} \sqrt{\frac{2}{\pi}} \Big| \frac{\sin \lambda L}{\lambda} \Big| d\lambda = \sqrt{\frac{2}{\pi}} \int_{-\infty}^{\infty} \frac{|\sin x|}{|x|} dx = 2 \sqrt{\frac{2}{\pi}} \int_0^{\infty} \frac{|\sin x|}{x} dx .
Using a result from analysis (beyond our scope),
\| F f \|_{L^1} = 2 \sqrt{\frac{2}{\pi}} \sum_{n=1}^{\infty} \int_{(n-1)\pi}^{n\pi} \frac{|\sin x|}{x} dx = 2 \sqrt{\frac{2}{\pi}} \sum_{n=1}^{\infty} \int_0^{\pi} \frac{|\sin y|}{y + (n-1)\pi} dy
\geq 2 \sqrt{\frac{2}{\pi}} \sum_{n=1}^{\infty} \frac{1}{n\pi} \int_0^{\pi} \sin y \, dy = \infty .
Hence F : L^1(\mathbb{R}) \to L^1(\mathbb{R}) is unbounded.
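The closed form \sqrt{2/\pi} \sin(\lambda L)/\lambda can be confirmed numerically; this sketch uses the trapezoid rule (L and the sample values of \lambda are arbitrary):

```python
import numpy as np

L = 1.0
x = np.linspace(-L, L, 400001)
dx = x[1] - x[0]

def Ff(lam):
    """Numerical Fourier transform of the box function on [-L, L]."""
    vals = np.exp(-1j*lam*x)
    integral = (vals.sum() - 0.5*(vals[0] + vals[-1])) * dx  # trapezoid rule
    return integral / np.sqrt(2*np.pi)

for lam in [0.5, 1.0, 2.0, 3.7]:
    exact = np.sqrt(2/np.pi) * np.sin(lam*L) / lam
    assert abs(Ff(lam) - exact) < 1e-8
```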
Theorem 17.14. The Fourier transform defines a bounded operator on L^2(\mathbb{R}) , i.e.
f \in L^2(\mathbb{R}) \implies \hat{f} \in L^2(\mathbb{R}) .
Proof. The proof of this theorem requires advanced analysis, which lies beyond the scope of our course.
2. Selfadjoint and Unitary Operators
Selfadjoint and unitary operators are basically defined in the same way as in the finite dimensional case. There is one difference, related to the fact that Dom A \neq H , but we ignore it. So see Definitions 10.15, 10.17, 10.25.
Note that a selfadjoint operator may be unbounded. Here is a very important example.
Example 17.15. Consider A = \frac{1}{i} \frac{d}{dx} in L^2(0, 2\pi) and assume
Dom A = \big\{ f \in L^2(0, 2\pi) : f' \in L^2(0, 2\pi) \text{ and } f(0) = \theta f(2\pi) , \ |\theta| = 1 \big\} .
So A is an operator of differentiation with boundary conditions.
A is selfadjoint. Indeed, let f, g \in Dom A ; then, integrating by parts,
\langle Af , g \rangle = \int_0^{2\pi} \frac{1}{i} f'(x) \overline{g(x)} \, dx = \underbrace{\frac{1}{i} f(x) \overline{g(x)} \Big|_0^{2\pi}}_{= 0} - \int_0^{2\pi} \frac{1}{i} f(x) \overline{g(x)}' \, dx
= \int_0^{2\pi} f(x) \overline{\frac{1}{i} g'(x)} \, dx = \langle f , Ag \rangle ,
the boundary term vanishing due to the boundary condition.
But as we know, A is unbounded.
However, every unitary operator U is bounded, and moreover \|U\| = 1 . Indeed, we have
\|UX\| = \|X\| , \ \forall X \in H \implies \max_{\|X\| \leq 1} \|UX\| = \max_{\|X\| \leq 1} \|X\| = 1 \implies \|U\| = 1 .
A good example of a unitary operator is the Fourier transform. We discuss it in some detail later.
The boundedness of an operator plays a role, for example, when you apply the operator several times. E.g., with A as in Example 17.15, A^2 = -\frac{d^2}{dx^2} , A^2 f = A(Af) for f \in Dom A . But now f \in Dom A^2 only if also f'' \in L^2(0, 2\pi) , so Dom A^2 is narrower. One can show, though, that A^2 is selfadjoint, and hence so is A^{2009} , but Dom A^n keeps shrinking.
A^2 = -\frac{d^2}{dx^2} is also an important operator related to energy: the kinetic energy (square of the momentum).
3. Spectrum
The situation with the spectra of operators in Hilbert spaces is a whole lot more difficult. While in the finite dimensional case the spectrum of an operator was a finite set of eigenvalues, the infinite dimensional picture is much richer.
Definition 17.16. A^{-1} is called the inverse of A if
A^{-1} A = A A^{-1} = I .
Definition 17.17. Let A be a linear operator and \lambda \in \mathbb{C} . Then the operator R_\lambda(A) defined by
R_\lambda(A) \equiv (A - \lambda I)^{-1}
is called the resolvent of A at a point \lambda .
The resolvent depends on \lambda analytically: not in the sense of the definition given before as a limit, or of the Cauchy-Riemann conditions for matrix-valued functions, but in the sense of the power series expansion, or of looking at the complex valued function \langle R_\lambda(A) X , Y \rangle . (In particular, if A = A^* , then \sigma(A) \subset \mathbb{R} .)
Although there exist operators A with \sigma(A) = \mathbb{C} , they are not common and are very pathological, so the resolvent is generally defined and analytic for at least some values of \lambda . But whereas in the finite dimensional case the set of eigenvalues is finite, and so the singularities are isolated, one can have a branch-cut type of singularity in the infinite dimensional case.
Definition 17.20. The set of poles of R_\lambda(A) is called the discrete spectrum of A, denoted \sigma_d(A) .
For the eigenfunctions u_n(x) = \frac{1}{\sqrt{2\pi}} e^{inx} , n \in \mathbb{Z} , we then have \|u_n\| = 1 , and \{u_n(x)\} forms an ONB.
So let us analyze what we got!
The set of eigenfunctions of A is the set of simple harmonics \{\frac{1}{\sqrt{2\pi}} e^{inx}\}_{n \in \mathbb{Z}} we studied previously. It is possible to prove that \sigma(A) = \sigma_d(A) , and hence the spectrum of A is purely discrete:
\sigma_d(A) = \mathbb{Z} .
So the spectrum of an unbounded operator need not be finite!
Example 18.2. Let A = \frac{1}{i} \frac{d}{dx} in L^2(-\infty, \infty) . In quantum mechanics this operator is called the operator of momentum. Let us find \sigma_d(A) :
\frac{1}{i} u' = \lambda u \iff u' = i\lambda u (18.2)
\implies u(x) = C e^{i\lambda x} .
But
\|u\|^2 = |C|^2 \int_{-\infty}^{\infty} |e^{i\lambda x}|^2 dx = |C|^2 \int_{-\infty}^{\infty} dx = \infty
\implies u(x) \notin L^2(\mathbb{R}) \text{ if } C \neq 0 \implies \sigma_d(A) = \emptyset .
Thus this operator has no discrete spectrum. On the other hand, (18.2) can be
solved 1 and solutions u
(x) = Ce
ix
are bounded functions but not in L
2
(1).
Here we have a typical case of the socalled continuous spectrum. Moreover we
have (A) = 1. (see Lecture 29 for more on this topic.)
Functions u
1
are called eigenfunctions of the continuous spectrum. It can be
proven that the resolvent R
1
is a kind of basis but not in the space, so its a basis for the Fourier transform.
The following theorem is of utmost importance!

Theorem 18.3. Let $A$ be a self-adjoint operator in $H$ with a purely discrete spectrum $\sigma_d(A) = \{\lambda_n\}$. Then all $\lambda_n \in \mathbb{R}$, and the set of its eigenvectors $\{X_n\}$ forms an ONB in $H$.

Proof. The proof of $\lambda_n \in \mathbb{R}$ can be done in the same way as for finite dimensional operators. It is also clear why $\langle X_n, X_m\rangle = \delta_{nm}$. But why $\{X_n\}$ is a basis is a difficult question!
Exercise 18.4.
a) Find the eigenvalues and eigenfunctions of $A$ defined in $L^2(0,\pi)$ as follows:
\[ Au = -u'', \qquad u'(0) - hu(0) = 0, \qquad u'(\pi) - hu(\pi) = 0, \]
where $h$ is a real parameter.
b) Show that $A = -\frac{d^2}{dx^2}$ in $L^2(-\infty,\infty)$ has no discrete spectrum.
LECTURE 19

Generalized Functions. Dirac $\delta$-function.

1. Physical Motivation

Consider the following situation. We have a weightless thread stretched over $[0,1]$ carrying a single bead of mass $m$ at a point $x_0$. In physics, a distribution of mass is described by a density function. It is natural to ask what density function $\rho(x)$ describes the distribution of this mass. It looks like
\[ \rho(x) = \begin{cases} 0, & x \neq x_0 \\ \infty, & x = x_0 \end{cases}. \]
So $\rho(x)$ is identically zero everywhere but one point $x = x_0$. It means that
\[ \int_0^{x_0} \rho(x)\,dx + \int_{x_0}^{1} \rho(x)\,dx = 0. \tag{19.1} \]
On the other hand, by properties of definite integrals,
\[ \int_0^{x_0} \rho(x)\,dx + \int_{x_0}^{1} \rho(x)\,dx = \int_0^1 \rho(x)\,dx = m. \tag{19.2} \]
Comparing (19.1) and (19.2), we get a contradiction.

This means that $\rho(x)$ is not a usual function, and the way we defined it was incorrect, since for a normal function the value at one point shouldn't matter for the integral. This function should instead be defined by some limiting procedure. But that is going to take a while: we need a bunch of definitions.
2. Uniform Convergence

Let $f(x)$ be a bounded function on an interval $\Delta$. We define
\[ \|f\| \equiv \max_{x\in\Delta} |f(x)|. \tag{19.3} \]
E.g., if $\Delta = [0,1]$ and $f(x) = x^2$, then
\[ \|f\| = \max_{x\in[0,1]} x^2 = 1 \;\Longrightarrow\; \left\| x^2 \right\| = 1 \text{ on } [0,1]. \]
As we know, (19.3) defines a norm on $C(\Delta)$. This norm is commonly referred to as the sup-norm.
Definition 19.1. A sequence of bounded functions $f_n(x)$ is said to be uniformly convergent on $\Delta$ to some function $f(x)$ if
\[ \lim_{n\to\infty} \|f(x) - f_n(x)\| = 0. \]
In this case one writes $f_n(x) \rightrightarrows f(x)$ on $\Delta$.

What does it actually mean that $f_n \rightrightarrows f$? It means that for any $\varepsilon > 0$ (no matter how small) there is a number $N$ such that all members of the sequence $\{f_n(x)\}_{n=N}^{\infty}$ do not deviate from $f(x)$ by more than $\varepsilon$, i.e.
\[ \|f(x) - f_n(x)\| \le \varepsilon, \qquad n \ge N. \]
Example 19.2.
1) $\left\{ \dfrac{1}{x^2+n^2} \right\}_{n=1}^{\infty} \rightrightarrows 0$ on $\mathbb{R}$. Indeed,
\[ \left\| \frac{1}{x^2+n^2} - 0 \right\| = \max_{x\in\mathbb{R}} \frac{1}{x^2+n^2} = \frac{1}{n^2} \to 0, \quad n\to\infty. \]
2) Let
\[ f_n(x) = \begin{cases} n, & 0 < x \le 1/n \\ 0, & 1/n < x < 1 \end{cases}. \]
It is clear that $\lim_{n\to\infty} f_n(x) = 0$ for every $x\in(0,1)$, but
\[ \|f_n(x) - 0\| = \max_{x\in(0,1)} |f_n(x)| = n \to \infty \;\Longrightarrow\; f_n(x) \not\rightrightarrows 0 \text{ on } (0,1). \]
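The two sequences in Example 19.2 can be probed numerically. The sketch below (not part of the lecture) approximates the sup-norm by a maximum over a finite grid; the grids and the particular values of $n$ are arbitrary choices made for the illustration.

```python
# Numerical sketch of Example 19.2: sup-norm approximated on a finite grid.

def sup_norm(f, xs):
    """Approximate sup-norm of f over the grid xs."""
    return max(abs(f(x)) for x in xs)

xs = [i / 1000.0 for i in range(-5000, 5001)]    # grid on [-5, 5]

# 1) f_n(x) = 1/(x^2 + n^2): the sup-norm is exactly 1/n^2 (attained at x = 0),
#    so the sequence converges to 0 uniformly.
for n in (1, 10, 100):
    fn = lambda x, n=n: 1.0 / (x * x + n * n)
    assert abs(sup_norm(fn, xs) - 1.0 / n**2) < 1e-12

# 2) f_n = n on (0, 1/n], 0 elsewhere: the pointwise limit on (0, 1) is 0,
#    but the sup-norm equals n, so the convergence is NOT uniform.
grid01 = [i / 100000.0 for i in range(1, 100000)]  # grid on (0, 1)
for n in (1, 10, 100):
    fn = lambda x, n=n: n if 0 < x <= 1.0 / n else 0.0
    assert sup_norm(fn, grid01) == n
```

The second sequence is exactly the one revisited in Remark 19.4 and Exercise 19.7 below.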
It is natural to ask why we need uniform convergence. Uniform convergence is much better than the usual one! It lets us interchange $\lim$ and $\int$. Namely:

Theorem 19.3. Let $\Delta$ be any interval and $\{f_n(x)\}$ be a sequence of integrable functions. If $f_n(x) \rightrightarrows f(x)$ and $f(x)$ is integrable, then
\[ \lim_{n\to\infty} \int_\Delta f_n(x)\,dx = \int_\Delta \lim_{n\to\infty} f_n(x)\,dx = \int_\Delta f(x)\,dx. \]

Proof. (For the case when $\Delta$ is bounded.) We need to show
\[ \lim_{n\to\infty} \left( \int_\Delta f_n(x)\,dx - \int_\Delta f(x)\,dx \right) = 0. \]
But
\[ \left| \int_\Delta f_n(x)\,dx - \int_\Delta f(x)\,dx \right| = \left| \int_\Delta (f_n(x) - f(x))\,dx \right| \le \int_\Delta |f_n(x) - f(x)|\,dx \le \int_\Delta \max_{x\in\Delta} |f_n(x) - f(x)|\,dx \]
\[ = \int_\Delta \|f_n - f\|\,dx = \|f_n - f\| \int_\Delta dx = \|f_n - f\|\cdot|\Delta| \to 0, \quad n\to\infty. \qquad \text{QED} \]
Remark 19.4. Uniform convergence in Theorem 19.3 is essential. Indeed, let $f_n$ be the sequence defined in Example 19.2, part 2). It is clear that
\[ \int_0^1 f_n(x)\,dx = 1 \quad \text{but} \quad \int_0^1 \lim_{n\to\infty} f_n(x)\,dx = \int_0^1 0\,dx = 0. \]
This means you cannot, in general, interchange $\lim$ and $\int$.
Definition 19.5. A sequence of bounded functions $\delta_n(x)$ subject to
1) $\delta_n(x) \ge 0$,
2) $\int_{-\infty}^{\infty} \delta_n(x)\,dx = 1$,
3) $\lim_{n\to\infty} \delta_n(x) = 0$ uniformly on $\mathbb{R}\setminus(-\varepsilon,\varepsilon)$ for every $\varepsilon > 0$,
is called a $\delta$-sequence.

Example 19.6. $\delta_n(x) = \dfrac{n}{\sqrt{\pi}}\,e^{-n^2x^2}$. $\{\delta_n\}$ is a $\delta$-sequence!
Indeed, the $\delta_n(x)$ are bounded and continuous for every $n$, and $\delta_n(x) \ge 0$. For every $\varepsilon > 0$,
\[ \lim_{n\to\infty} \max_{x\in\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \delta_n(x) = \frac{1}{\sqrt{\pi}} \lim_{n\to\infty} n\,e^{-n^2\varepsilon^2} = 0. \]
Finally,
\[ \int_{-\infty}^{\infty} \delta_n(x)\,dx = \frac{n}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{-n^2x^2}\,dx = \frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{-y^2}\,dy = 1. \]

Exercise 19.7. Show that $\{f_n(x)\}$ defined in Example 19.2, part 2) is a $\delta$-sequence.
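The three defining properties of the Gaussian $\delta$-sequence in Example 19.6 can be checked numerically. This is a sketch, not part of the lecture: the midpoint rule, the integration window $[-10,10]$, and the tolerances are ad hoc choices (the tails beyond the window are negligible for the $n$ used).

```python
import math

# Numerical sketch: check the delta-sequence properties of
# delta_n(x) = (n/sqrt(pi)) * exp(-n^2 x^2).

def delta_n(n, x):
    return n / math.sqrt(math.pi) * math.exp(-n * n * x * x)

def integral(f, a, b, steps=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

for n in (1, 5, 20):
    # 1) nonnegativity (spot check)
    assert all(delta_n(n, x) >= 0 for x in (-3.0, -0.1, 0.0, 0.1, 3.0))
    # 2) total mass 1 (tails beyond [-10, 10] are negligible here)
    assert abs(integral(lambda x: delta_n(n, x), -10, 10) - 1.0) < 1e-3

# 3) uniform decay away from 0: for |x| >= eps the maximum is at |x| = eps
eps = 0.5
assert delta_n(20, eps) < 1e-8   # (20/sqrt(pi)) * e^{-100} is tiny
```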
Definition 19.8. The support of a function $f(x)$, denoted by $\operatorname{Supp} f$, is the set of all $x$ such that $f(x) \neq 0$. That is,
\[ \operatorname{Supp} f = \{x : f(x) \neq 0\}. \]

Definition 19.9. A function $f(x)$ is called finitely supported if there exists a finite interval $\Delta$ such that $\operatorname{Supp} f \subseteq \Delta$.

Definition 19.10. A function $f(x)$ is called smooth on a set if all derivatives of $f$ are continuous functions there.
\[ C_0^\infty(\mathbb{R}) = \{\text{all finitely supported functions in } C^\infty(\mathbb{R})\}. \]
Functions from $C_0^\infty(\mathbb{R})$ will be referred to as test functions. In a similar way we define $C_0^\infty(\Delta)$ as the set of all smooth functions on $\Delta$ having supports entirely inside $\Delta$. A smooth bump vanishing outside a finite interval is a typical such function.
Lemma 19.11. If $\{\delta_n(x)\}$ is a $\delta$-sequence, then
\[ \lim_{n\to\infty} \int_{-\varepsilon}^{\varepsilon} \delta_n(x)\,dx = 1 \quad \text{for every } \varepsilon > 0. \]

Proof. We have
\[ 1 = \int_{-\infty}^{\infty} \delta_n(x)\,dx = \int_{-\varepsilon}^{\varepsilon} \delta_n(x)\,dx + \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \delta_n(x)\,dx \]
\[ \Longrightarrow\; \lim_{n\to\infty} \int_{-\varepsilon}^{\varepsilon} \delta_n(x)\,dx = 1 - \lim_{n\to\infty} \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \delta_n(x)\,dx. \]
But $\delta_n \rightrightarrows 0$ on $\mathbb{R}\setminus(-\varepsilon,\varepsilon)$, so by Theorem 19.3
\[ \lim_{n\to\infty} \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \delta_n(x)\,dx = \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} 0\,dx = 0 \;\Longrightarrow\; \lim_{n\to\infty} \int_{-\varepsilon}^{\varepsilon} \delta_n(x)\,dx = 1 \quad \text{for every } \varepsilon > 0. \qquad \text{QED} \]
The following theorem is very important.

Theorem 19.12. Let $\{\delta_n(x)\}$ be a $\delta$-sequence. Then
\[ \lim_{n\to\infty} \int_{-\infty}^{\infty} \delta_n(x)\varphi(x)\,dx = \varphi(0) \quad \text{for every } \varphi \in C_0^\infty(\mathbb{R}). \]

Proof. For every $\varphi \in C_0^\infty(\mathbb{R})$,
\[ \int \delta_n(x)\varphi(x)\,dx = \int \delta_n(x)\varphi(0)\,dx + \int \delta_n(x)\big(\varphi(x) - \varphi(0)\big)\,dx = \varphi(0) + \int \delta_n(x)\big(\varphi(x) - \varphi(0)\big)\,dx. \]
We need to show that the last integral goes to $0$ as $n\to\infty$. It will take a while!

Let $\varepsilon$ be any positive number. By the triangle inequality,
\[ \left| \int_{\mathbb{R}} \right| \le \left| \int_{-\varepsilon}^{\varepsilon} \right| + \left| \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \right|. \tag{19.4} \]
Estimate each of these integrals separately:
\[ \left| \int_{-\varepsilon}^{\varepsilon} \delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx \right| \le \max_{x\in(-\varepsilon,\varepsilon)} |\varphi(x)-\varphi(0)|\,\underbrace{\int_{-\varepsilon}^{\varepsilon} \delta_n(x)\,dx}_{\le 1} \le \max_{x\in(-\varepsilon,\varepsilon)} |\varphi(x)-\varphi(0)|. \tag{19.5} \]
Since $\varphi \in C_0^\infty(\mathbb{R})$, $\lim_{x\to 0} |\varphi(x)-\varphi(0)| = 0$. That is, by choosing $\varepsilon > 0$ we can make $\max_{x\in(-\varepsilon,\varepsilon)} |\varphi(x)-\varphi(0)| \le \eta$, where $\eta$ is as small as we want. Next,
\[ \left| \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \right| \le \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \delta_n(x)\,|\varphi(x)-\varphi(0)|\,dx \le \|\varphi(x)-\varphi(0)\| \int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \delta_n(x)\,dx. \tag{19.6} \]
Now plug (19.5) and (19.6) into (19.4):
\[ \left| \int \delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx \right| \le \underbrace{\max_{x\in(-\varepsilon,\varepsilon)} |\varphi(x)-\varphi(0)|}_{<\,\eta \text{ (by choosing }\varepsilon)} + \|\varphi(x)-\varphi(0)\|\,\underbrace{\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)} \delta_n(x)\,dx}_{\to\, 0,\ n\to\infty \text{ (by Lemma 19.11)}}. \]
So
\[ \lim_{n\to\infty} \left| \int \delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx \right| < \eta \quad \text{for every } \eta > 0 \;\Longrightarrow\; \lim_{n\to\infty} \int \delta_n(x)\big(\varphi(x)-\varphi(0)\big)\,dx = 0. \qquad \text{QED} \]
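The sifting behavior proved above can be observed numerically. A minimal sketch, assuming the Gaussian $\delta$-sequence of Example 19.6 and one hypothetical smooth test function (any smooth bounded function would do for the check); the quadrature and tolerances are ad hoc.

```python
import math

# Numerical sketch of Theorem 19.12: the pairing of delta_n with a smooth
# phi tends to phi(0) as n grows.

def delta_n(n, x):
    return n / math.sqrt(math.pi) * math.exp(-n * n * x * x)

def phi(x):
    return math.cos(x) * math.exp(-x * x)    # phi(0) = 1

def integral(f, a, b, steps=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

vals = [integral(lambda x: delta_n(n, x) * phi(x), -10, 10) for n in (1, 4, 16, 64)]
errors = [abs(v - phi(0.0)) for v in vals]
assert errors == sorted(errors, reverse=True)  # the error shrinks with n here
assert errors[-1] < 1e-3                       # n = 64 is already very close
```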
3. Weak Convergence

Definition 19.13. A sequence $\{f_n(x)\}_{n=1}^{\infty}$ of functions integrable on $\Delta$ is said to converge weakly if for every $\varphi\in C_0^\infty(\Delta)$ the numerical sequence
\[ \left\{ \int_\Delta f_n(x)\varphi(x)\,dx \right\}_{n=1}^{\infty} \]
converges.

Notations for weak convergence: $f_n \xrightarrow{w} f$, or $f_n \rightharpoonup f$.

Theorem 19.14. Every $\delta$-sequence converges weakly.

Proof. By Theorem 19.12, for any $\delta$-sequence $\{\delta_n\}$,
\[ \lim_{n\to\infty} \int \delta_n(x)\varphi(x)\,dx = \varphi(0) \quad \text{for every } \varphi\in C_0^\infty(\mathbb{R}). \]
That is, $\left\{ \int \delta_n(x)\varphi(x)\,dx \right\}_{n=1}^{\infty}$ converges, so by Definition 19.13 $\{\delta_n\}$ converges weakly. QED
So, we have three types of convergence: usual (pointwise), uniform, and weak.

Pointwise: $f_n \to f$ on $\Delta$ iff $\lim_{n\to\infty} f_n(x) = f(x)$ for every $x\in\Delta$.
Uniform: $f_n \rightrightarrows f$ on $\Delta$ iff $\max_{x\in\Delta} |f_n(x) - f(x)| \to 0$ as $n\to\infty$.
Weak: $\{f_n\}$ converges weakly on $\Delta$ iff $\lim_{n\to\infty} \int_\Delta f_n(x)\varphi(x)\,dx$ exists for every $\varphi\in C_0^\infty(\Delta)$.
Theorem 19.15. If $f_n(x)$ converges to $f(x)$ on $\Delta$ uniformly, then $f_n(x)$ converges to $f(x)$ weakly.

Proof. We are supposed to prove that
\[ \lim_{n\to\infty} \int_\Delta f_n(x)\varphi(x)\,dx = \int_\Delta f(x)\varphi(x)\,dx. \]
By hypothesis $f_n(x) \rightrightarrows f(x)$, i.e. $\|f_n - f\| \to 0$ as $n\to\infty$. We have
\[ \int_\Delta f_n(x)\varphi(x)\,dx = \underbrace{\int_\Delta \big(f_n(x) - f(x)\big)\varphi(x)\,dx}_{\text{consider this integral}} + \int_\Delta f(x)\varphi(x)\,dx. \]
But
\[ \left| \int_\Delta \big(f_n(x) - f(x)\big)\varphi(x)\,dx \right| \le \int_\Delta |f_n(x) - f(x)|\,|\varphi(x)|\,dx \le \max_{x\in\Delta} |f_n(x) - f(x)| \int_\Delta |\varphi(x)|\,dx = \|f_n - f\| \int_\Delta |\varphi(x)|\,dx \to 0. \]
Hence
\[ \lim_{n\to\infty} \int_\Delta f_n(x)\varphi(x)\,dx = \int_\Delta f(x)\varphi(x)\,dx. \qquad \text{QED} \]
Remark 19.16. The converse of Theorem 19.15 is not true. That is, weak convergence does not imply uniform convergence. We provide a counterexample.

Consider $\left\{ \dfrac{e^{inx}}{\sqrt{2\pi}} \right\}_{n\in\mathbb{Z}}$ on $(-\pi,\pi)$. We claim that $\left\{ \dfrac{e^{inx}}{\sqrt{2\pi}} \right\}$ converges weakly to $0$. Indeed, let $\varphi$ be a test function. Then
\[ \int_{-\pi}^{\pi} \varphi(x)\,\frac{e^{inx}}{\sqrt{2\pi}}\,dx = c_n, \qquad n\in\mathbb{Z}, \]
are Fourier coefficients of $\varphi(x)$. Since $\varphi\in L^2(-\pi,\pi)$, by the Fourier Theorem we have
\[ \sum_{n\in\mathbb{Z}} |c_n|^2 = \|\varphi\|^2 < \infty. \]
Hence $\lim c_n = 0$ as $n\to\infty$. So we get
\[ c_n = \int_{-\pi}^{\pi} \frac{e^{inx}}{\sqrt{2\pi}}\,\varphi(x)\,dx \to 0, \quad n\to\infty, \]
and by definition $\left\{ \dfrac{e^{inx}}{\sqrt{2\pi}} \right\}$ converges weakly to $0$.

On the other hand, for every $n\in\mathbb{N}$,
\[ \left\| \frac{e^{inx}}{\sqrt{2\pi}} - 0 \right\| = \left\| \frac{e^{inx}}{\sqrt{2\pi}} \right\| = \frac{1}{\sqrt{2\pi}} \neq 0, \]
and hence $\left\{ \dfrac{e^{inx}}{\sqrt{2\pi}} \right\}$ does not converge to $0$ uniformly. Moreover, $\left\{ \dfrac{e^{inx}}{\sqrt{2\pi}} \right\}$ does not converge even pointwise. So:

Uniform convergence $\Longrightarrow$ Weak convergence, but not conversely.
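The decay of Fourier coefficients used in the counterexample can be seen numerically. A sketch under stated assumptions: the test function below is a hypothetical bump supported in $(-1,1)\subset(-\pi,\pi)$, and the integral is a crude midpoint rule.

```python
import cmath
import math

# Numerical sketch of Remark 19.16: the Fourier coefficients of a test
# function decay to 0, which is precisely the weak convergence of
# e^{inx}/sqrt(2 pi) to 0.

def phi(x):
    # hypothetical smooth bump supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def c(n, steps=20000):
    """Midpoint approximation of int_{-pi}^{pi} e^{inx} phi(x) dx / sqrt(2 pi)."""
    h = 2 * math.pi / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        x = -math.pi + (k + 0.5) * h
        total += cmath.exp(1j * n * x) * phi(x)
    return total * h / math.sqrt(2 * math.pi)

mags = [abs(c(n)) for n in (1, 5, 25, 125)]
assert mags[0] > 0.05      # the pairing is not trivially zero ...
assert mags[-1] < 1e-2     # ... yet it tends to 0 as n grows
```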
So we now know that a $\delta$-sequence converges weakly. The object it converges to is called a generalized function.

Definition 19.17. The weak limit of a $\delta$-sequence is called the Dirac $\delta$-function and is denoted $\delta(x)$. I.e., $\delta_n \xrightarrow{w} \delta$.
We agree to write, in symbolic form,
\[ \lim_{n\to\infty} \int \delta_n(x)\varphi(x)\,dx = \int \delta(x)\varphi(x)\,dx. \]

Example 19.18. We explore some properties of the Dirac $\delta$-function. In the following, $\varphi$ is any test function.

1) $x\,\delta(x) = 0$, yet neither $x$ nor $\delta(x)$ is $0$ on an interval:
\[ \int x\,\delta(x)\varphi(x)\,dx = \int \delta(x)\,\big(x\varphi(x)\big)\,dx = 0\cdot\varphi(0) = 0. \]
2) $\delta(x-a)$:
\[ \int_{\mathbb{R}} \delta(x-a)\varphi(x)\,dx = \varphi(a). \]
3) $\delta(x-a) + \delta(x-b)$:
\[ \int_{\mathbb{R}} \big[\delta(x-a) + \delta(x-b)\big]\varphi(x)\,dx = \varphi(a) + \varphi(b). \]
4) $c\,\delta(x)$:
\[ \int_{\mathbb{R}} c\,\delta(x)\varphi(x)\,dx = \int_{\mathbb{R}} \delta(x)\,\big(c\varphi(x)\big)\,dx = c\,\varphi(0). \]
5) $\delta^2(x)$ is undefined; it is not possible to make sense of it within our definition.
6) $\delta(cx) = \frac{1}{|c|}\,\delta(x)$, $c\neq 0$. Use a change of variables (say, for $c > 0$):
\[ \lim_{n\to\infty} \int \delta_n(cx)\varphi(x)\,dx = \begin{vmatrix} y = cx \\ dx = \frac{1}{c}\,dy \end{vmatrix} = \lim_{n\to\infty} \int \delta_n(y)\varphi(y/c)\,\frac{1}{c}\,dy = \frac{1}{c}\,\varphi(0) = \int \frac{1}{c}\,\delta(x)\varphi(x)\,dx. \]
7) $\delta(x-a)\,\delta(x-b) = 0$ for $a\neq b$ (if $a = b$, it does not exist):
\[ \int \delta(x-a)\delta(x-b)\varphi(x)\,dx = \underbrace{\delta(a-b)}_{=0}\,\varphi(a) = 0. \]
Exercise 19.19. Formally prove that $\lim_{n\to\infty} \int \delta_n^2(x)\varphi(x)\,dx$ does not exist.

Surprisingly, $\delta$ is smooth, i.e. infinitely differentiable, in the generalized sense. You can't square it, but you can differentiate it infinitely many times!
4. Generalized Derivatives

Definition 19.20. $f'(x)$ is called a generalized derivative of $f(x)$ if
\[ \int f'(x)\varphi(x)\,dx = -\int f(x)\varphi'(x)\,dx \]
for all $\varphi\in C_0^\infty(\mathbb{R})$.

Theorem 19.21. Let $\theta(x) = \begin{cases} 1, & x\ge 0 \\ 0, & x<0 \end{cases}$ (the Heaviside function). The generalized derivative of $\theta(x)$ is $\delta(x)$.

Proof. For any $\varphi\in C_0^\infty(\mathbb{R})$ we have
\[ \int_{-\infty}^{\infty} \theta(x)\varphi'(x)\,dx = \underbrace{\int_{-\infty}^{0} \theta(x)\varphi'(x)\,dx}_{=0} + \int_0^{\infty} \theta(x)\varphi'(x)\,dx = \int_0^{\infty} \varphi'(x)\,dx = \varphi(x)\Big|_0^{\infty} = \underbrace{\varphi(\infty)}_{=0} - \varphi(0) = -\varphi(0). \]
So
\[ \varphi(0) = -\int \theta(x)\varphi'(x)\,dx \overset{\text{Def.}}{=} \int \theta'(x)\varphi(x)\,dx \;\Longrightarrow\; \int \theta'(x)\varphi(x)\,dx = \varphi(0). \]
On the other hand,
\[ \varphi(0) = \lim_{n\to\infty} \int \delta_n(x)\varphi(x)\,dx. \]
Hence $\theta'(x) = \delta(x)$ in the sense of generalized functions. QED
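The defining identity in the proof, $\int \theta(x)\varphi'(x)\,dx = -\varphi(0)$, is easy to verify numerically. A sketch with a hypothetical rapidly decaying smooth $\varphi$; the derivative is written out by hand and the quadrature is a plain midpoint rule.

```python
import math

# Numerical sketch of Theorem 19.21: int theta(x) * phi'(x) dx = -phi(0),
# which is the identity behind theta' = delta.

def theta(x):
    return 1.0 if x >= 0 else 0.0

def phi(x):
    return (1.0 + x) * math.exp(-x * x)                     # phi(0) = 1

def dphi(x):
    # derivative of phi, computed by the product rule
    return math.exp(-x * x) - 2.0 * x * (1.0 + x) * math.exp(-x * x)

def integral(f, a, b, steps=100000):
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

lhs = integral(lambda x: theta(x) * dphi(x), -10.0, 10.0)
assert abs(lhs + phi(0.0)) < 1e-3    # lhs is approximately -phi(0) = -1
```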
Exercise 19.22. Prove:
1) $\delta(-x) = \delta(x)$,
2) $\delta(x^2 - a^2) = \dfrac{1}{2|a|}\big(\delta(x-a) + \delta(x+a)\big)$,
3) $\delta(\sin x) = \sum_{n\in\mathbb{Z}} \delta(x - \pi n)$.
Theorem 19.23. Let $\{\delta_n\}$ be a $\delta$-sequence. Then
\[ \lim_{n\to\infty} \int \delta_n^{(m)}(x)\varphi(x)\,dx = (-1)^m \varphi^{(m)}(0). \]

Proof.
\[ \int \delta_n^{(m)}(x)\varphi(x)\,dx = \int \frac{d}{dx}\,\delta_n^{(m-1)}(x)\,\varphi(x)\,dx \overset{\text{by parts}}{=} \underbrace{\delta_n^{(m-1)}(x)\varphi(x)\Big|_{-\infty}^{\infty}}_{=0} - \int \delta_n^{(m-1)}(x)\varphi'(x)\,dx \]
\[ = \cdots = (-1)^m \int \delta_n(x)\varphi^{(m)}(x)\,dx \;\overset{\text{Thm 19.12}}{\Longrightarrow}\; \lim_{n\to\infty} \int \frac{d^m}{dx^m}\delta_n(x)\,\varphi(x)\,dx = (-1)^m \varphi^{(m)}(0). \qquad \text{QED} \]
Definition 19.24. Let $\{\delta_n(x)\}$ be a $\delta$-sequence. The weak limit of $\left\{ \dfrac{d^m}{dx^m}\,\delta_n(x) \right\}$ is called the $m$-th derivative of $\delta(x)$ and is denoted $\delta^{(m)}(x)$.

Note that, in particular, $\int \delta'(x)\,dx = 0$: the support is at one point, and the average is zero. Weird!

But consider, for example, the sequence $\delta_n(x) = \frac{n}{\sqrt{\pi}}\,e^{-n^2x^2}$: it is infinitely differentiable and converges weakly to $\delta(x)$. For each $\delta_n'$, the derivative of $\delta_n$ at $0$ is $0$, and the average of $\delta_n'$ is indeed $0$, consistent with the above.
LECTURE 20

Representations of the $\delta$-function

1. Fourier Representation of the $\delta$-function

Let
\[ \delta_\varepsilon = \begin{cases} \dfrac{1}{2\varepsilon}, & x\in(-\varepsilon,\varepsilon) \\ 0, & x\in(-\pi,\pi)\setminus[-\varepsilon,\varepsilon] \end{cases}. \]
$\{\delta_\varepsilon\}$ is clearly a $\delta$-sequence.¹ Since $\delta_\varepsilon \in L^2(-\pi,\pi)$, we can expand it into a Fourier series:
\[ \delta_\varepsilon(x) = \sum_{n\in\mathbb{Z}} c_n e^{inx}, \qquad c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} \delta_\varepsilon(x)\,e^{-inx}\,dx. \]
We have
\[ c_n = \frac{1}{2\pi}\int_{-\varepsilon}^{\varepsilon} \frac{1}{2\varepsilon}\,e^{-inx}\,dx = \frac{1}{4\pi\varepsilon}\int_{-\varepsilon}^{\varepsilon} e^{-inx}\,dx = \frac{1}{4\pi\varepsilon}\,\frac{e^{-inx}}{-in}\Big|_{-\varepsilon}^{\varepsilon} = \frac{1}{2\pi n\varepsilon}\,\frac{e^{in\varepsilon} - e^{-in\varepsilon}}{2i} = \frac{1}{2\pi}\,\frac{\sin n\varepsilon}{n\varepsilon}. \]
So $c_n = \frac{1}{2\pi}\,\frac{\sin n\varepsilon}{n\varepsilon}$, and
\[ \delta_\varepsilon(x) = \frac{1}{2\pi}\sum_{n\in\mathbb{Z}} \frac{\sin n\varepsilon}{n\varepsilon}\,e^{inx}. \tag{20.1} \]
Take the limit as $\varepsilon\to 0$ of the right-hand side of (20.1):
\[ \frac{1}{2\pi}\lim_{\varepsilon\to 0}\sum_{n\in\mathbb{Z}} \frac{\sin n\varepsilon}{n\varepsilon}\,e^{inx} \;\text{``=''}\; \frac{1}{2\pi}\sum_{n\in\mathbb{Z}} \underbrace{\lim_{\varepsilon\to 0}\frac{\sin n\varepsilon}{n\varepsilon}}_{=1}\,e^{inx} = \frac{1}{2\pi}\sum_{n\in\mathbb{Z}} e^{inx}, \]
and it looks like
\[ \delta(x) = \frac{1}{2\pi}\sum_{n\in\mathbb{Z}} e^{inx} = \frac{1}{2\pi} + \frac{1}{\pi}\sum_{n\ge 1} \cos nx. \]

¹ $\{\delta_\varepsilon\}_{0<\varepsilon<\pi}$ is more like a family, but no one cares about the difference.
Unfortunately, this series diverges for every $x$. But it can be understood in the weak sense. Indeed, for every test function $\varphi(x)$,
\[ \int_{-\pi}^{\pi} \left( \frac{1}{2\pi}\sum_{n=-N_1}^{N_2} e^{inx} \right)\varphi(x)\,dx = \frac{1}{2\pi}\sum_{n=-N_1}^{N_2} \int_{-\pi}^{\pi} e^{inx}\varphi(x)\,dx = \sum_{n=-N_1}^{N_2} \underbrace{\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{inx}\varphi(x)\,dx}_{=\,c_n} = \sum_{n=-N_1}^{N_2} c_n. \tag{20.2} \]
We show now that the partial sums $\sum_{n=-N_1}^{N_2} c_n$ converge absolutely.
Lemma 20.1. If $\varphi\in C_0^\infty(-\pi,\pi)$, then $\sum_{n\in\mathbb{Z}} |c_n|$ converges.

Proof. For any $n\neq 0$,
\[ c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{inx}\varphi(x)\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} \varphi(x)\,\frac{de^{inx}}{in} \overset{\text{by parts}}{=} \underbrace{\frac{1}{2\pi in}\,\varphi(x)e^{inx}\Big|_{-\pi}^{\pi}}_{=0} - \frac{1}{2\pi in}\int_{-\pi}^{\pi} e^{inx}\varphi'(x)\,dx \]
\[ = -\frac{1}{2\pi in}\int_{-\pi}^{\pi} \varphi'(x)\,\frac{de^{inx}}{in} = \frac{1}{2\pi(in)^2}\int_{-\pi}^{\pi} e^{inx}\varphi''(x)\,dx, \]
and $c_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi}\varphi(x)\,dx$. Note that all integrated terms are $0$, since our function $\varphi(x)$ is a test function and hence $\operatorname{Supp}\varphi \subset (-\pi,\pi)$. So
\[ c_n = -\frac{1}{2\pi}\,\frac{1}{n^2}\int_{-\pi}^{\pi} e^{inx}\varphi''(x)\,dx \;\Longrightarrow\; |c_n| \le \frac{1}{2\pi}\,\frac{1}{n^2}\left| \int_{-\pi}^{\pi} e^{inx}\varphi''(x)\,dx \right| \le \frac{1}{2\pi}\,\frac{1}{n^2}\int_{-\pi}^{\pi} |\varphi''(x)|\,dx \le C\,\frac{1}{n^2}, \quad n\neq 0, \]
\[ \Longrightarrow\; \sum_{n\in\mathbb{Z}} |c_n| \le |c_0| + C\sum_{\substack{n\in\mathbb{Z} \\ n\neq 0}} \frac{1}{n^2} < \infty. \qquad \text{QED} \]
So by Lemma 20.1, the series in (20.2) converges, and hence
\[ \lim_{N_1, N_2\to\infty} \int_{-\pi}^{\pi} \left( \sum_{n=-N_1}^{N_2} \frac{e^{inx}}{2\pi} \right)\varphi(x)\,dx = \sum_{n\in\mathbb{Z}} c_n. \tag{20.3} \]
On the other hand, by the Fourier Theorem,
\[ \varphi(x) = \sum_{n\in\mathbb{Z}} c_n e^{-inx}, \quad \text{and hence} \quad \varphi(0) = \sum_{n\in\mathbb{Z}} c_n. \]
So (20.3) transforms into
\[ \lim_{N_1, N_2\to\infty} \int_{-\pi}^{\pi} \left( \sum_{n=-N_1}^{N_2} \frac{e^{inx}}{2\pi} \right)\varphi(x)\,dx = \varphi(0) = \int \delta(x)\varphi(x)\,dx, \]
and we can conclude:

Theorem 20.2.
\[ \delta(x) = \operatorname*{w-lim}_{N_1, N_2\to\infty} \sum_{n=-N_1}^{N_2} \frac{e^{inx}}{2\pi}, \]
and, in the weak sense,
\[ \delta(x) = \frac{1}{2\pi} + \frac{1}{\pi}\sum_{n\ge 1} \cos nx. \]
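Theorem 20.2 can be illustrated numerically. A sketch, not part of the lecture: it assumes the closed form of the symmetric partial sum (the Dirichlet kernel), a hypothetical Gaussian test function that is negligible outside $(-\pi,\pi)$, and an ad hoc midpoint quadrature.

```python
import math

# Numerical sketch of Theorem 20.2: (1/2pi) * sum_{|n|<=N} e^{inx} acts on
# a smooth phi like delta.  Closed form used:
#   sum_{|n|<=N} e^{inx} = sin((N + 1/2)x) / sin(x/2)   (Dirichlet kernel).

def dirichlet(N, x):
    if abs(math.sin(x / 2.0)) < 1e-12:
        return 2.0 * N + 1.0          # limiting value at x = 0
    return math.sin((N + 0.5) * x) / math.sin(x / 2.0)

def phi(x):
    return math.exp(-4.0 * x * x)     # phi(0) = 1

def pair(N, steps=200000):
    h = 2 * math.pi / steps
    return sum(dirichlet(N, -math.pi + (k + 0.5) * h) * phi(-math.pi + (k + 0.5) * h)
               for k in range(steps)) * h / (2 * math.pi)

errs = [abs(pair(N) - 1.0) for N in (4, 16, 64)]
assert errs[0] > 1e-2      # N = 4 is still visibly far from phi(0)
assert errs[-1] < 1e-4     # N = 64 has essentially converged
```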
2. Integral Representation of the $\delta$-function

Lemma 20.3. For $a > 0$,
\[ \lim_{h\to\infty} \frac{1}{2\pi i}\int_{a-ih}^{a+ih} \frac{e^{xz}}{z}\,dz = \theta(x), \]
the Heaviside function.

Proof. 1) For $x < 0$, close the segment $[a-ih, a+ih]$ by the circular arc $\Gamma_R$ lying to the right of the line $\operatorname{Re} z = a$. Since
\[ \frac{e^{zx}}{z} \in H(\operatorname{Int}\Gamma_R) \quad (\text{analytic within } \Gamma_R), \]
we have
\[ \int_{a-ih}^{a+ih} \frac{e^{zx}}{z}\,dz = \int_{\Gamma_R} \frac{e^{zx}}{z}\,dz. \]
Integrating by parts,
\[ \int_{\Gamma_R} \frac{e^{zx}}{z}\,dz = \int_{\Gamma_R} \frac{1}{z}\,\frac{de^{zx}}{x} = \frac{e^{zx}}{zx}\Big|_{a-ih}^{a+ih} + \frac{1}{x}\int_{\Gamma_R} \frac{e^{zx}}{z^2}\,dz. \]
The integrated term is
\[ \frac{e^{zx}}{zx}\Big|_{a-ih}^{a+ih} = \frac{1}{x}\left( \frac{e^{ax+ihx}}{a+ih} - \frac{e^{ax-ihx}}{a-ih} \right) = \frac{e^{ax}}{x}\underbrace{\left( \frac{e^{ihx}}{a+ih} - \frac{e^{-ihx}}{a-ih} \right)}_{\to\, 0,\ h\to\infty} \;\Longrightarrow\; \lim_{h\to\infty} \frac{e^{zx}}{zx}\Big|_{a-ih}^{a+ih} = 0. \]
Next,
\[ \left| \frac{1}{x}\int_{\Gamma_R} \frac{e^{zx}}{z^2}\,dz \right| \le \frac{1}{|x|}\int_{\Gamma_R} \frac{|e^{zx}|}{|z|^2}\,|dz| = \frac{1}{|x|}\int_{\Gamma_R} \frac{e^{x\operatorname{Re} z}}{R^2}\,|dz| \le \frac{1}{|x|}\,\frac{\max_{\Gamma_R} e^{x\operatorname{Re} z}}{R^2}\cdot 2\pi R = \frac{2\pi\max_{\Gamma_R} e^{x\operatorname{Re} z}}{|x|\,R}. \tag{20.4} \]
If $x < 0$, then $e^{x\operatorname{Re} z} < 1$ on $\Gamma_R$ (since $\operatorname{Re} z > 0$ there), so $(20.4) \to 0$ as $R\to\infty$. So
\[ \lim_{h\to\infty} \int_{\Gamma_R} \frac{e^{zx}}{z}\,dz = 0 \quad\text{and hence}\quad \lim_{h\to\infty} \frac{1}{2\pi i}\int_{a-ih}^{a+ih} \frac{e^{xz}}{z}\,dz = 0, \qquad x < 0. \]
2) For $x > 0$, consider a different contour: close the segment by the circular arc $\Gamma_R'$ lying to the left of the line $\operatorname{Re} z = a$, so that the pole $z = 0$ is enclosed. As we did before for $x < 0$,
\[ \int_{\Gamma_R'} \frac{e^{xz}}{z}\,dz \overset{\text{by parts}}{=} \underbrace{\frac{e^{ax}}{x}\left( \frac{e^{ihx}}{a+ih} - \frac{e^{-ihx}}{a-ih} \right)}_{\to\, 0,\ h\to\infty} + \frac{1}{x}\int_{\Gamma_R'} \frac{e^{zx}}{z^2}\,dz = I_1 + I_2. \]
Consider these integrals separately. On the part of $\Gamma_R'$ with $\operatorname{Re} z \le 0$ we have $e^{x\operatorname{Re} z} < 1$ (since $x > 0$), so
\[ |I_2| \le \frac{1}{|x|}\int \frac{|e^{zx}|}{R^2}\,|dz| = \frac{1}{|x|}\,\frac{e^{x\operatorname{Re} z}}{R^2}\cdot 2\pi R \le \frac{2\pi}{|x|}\,\frac{1}{R} \to 0, \quad R\to\infty. \]
On the part with $0 \le \operatorname{Re} z \le a$,
\[ |I_1| \le \frac{1}{|x|}\int \frac{e^{x\operatorname{Re} z}}{R^2}\,|dz| \le \frac{1}{|x|}\,\frac{e^{xa}}{R^2}\cdot 2\pi R = \frac{2\pi e^{xa}}{|x|\,R} \to 0, \quad R\to\infty, \]
since $\operatorname{Re} z \le a$ there. So $\lim_{h\to\infty} \int_{\Gamma_R'} = 0$.

By the Residue Theorem,
\[ \frac{1}{2\pi i}\oint \frac{e^{xz}}{z}\,dz = \operatorname*{Res}_{z=0}\,\frac{e^{xz}}{z} = \lim_{z\to 0} z\,\frac{e^{xz}}{z} = 1 \]
\[ \Longrightarrow\; \frac{1}{2\pi i}\int_{a-ih}^{a+ih} \frac{e^{xz}}{z}\,dz + \frac{1}{2\pi i}\int_{\Gamma_R'} \frac{e^{xz}}{z}\,dz = 1. \]
Taking $h\to\infty$, we get
\[ \lim_{h\to\infty} \frac{1}{2\pi i}\int_{a-ih}^{a+ih} \frac{e^{xz}}{z}\,dz = 1, \qquad x > 0, \]
and the lemma is proven. QED
Now we are ready to establish:

Theorem 20.4.
\[ \delta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ixt}\,dt. \]

Proof. By Theorem 19.21, $\delta(x) = \frac{d}{dx}\theta(x)$. By Lemma 20.3,
\[ \theta(x) = \lim_{h\to\infty} \frac{1}{2\pi i}\int_{a-ih}^{a+ih} \frac{e^{zx}}{z}\,dz. \]
Differentiating $\theta(x)$ (formally), we have
\[ \frac{d}{dx}\theta(x) = \lim_{h\to\infty} \frac{1}{2\pi i}\int_{a-ih}^{a+ih} \frac{d}{dx}\,\frac{e^{zx}}{z}\,dz = \lim_{h\to\infty} \frac{1}{2\pi i}\int_{a-ih}^{a+ih} e^{zx}\,dz. \]
Since $a$ is arbitrary and $e^{zx}$ is analytic,
\[ \int_{a-ih}^{a+ih} e^{zx}\,dz \overset{a=0}{=} \int_{-ih}^{ih} e^{zx}\,dz = \begin{vmatrix} z = it \\ t = -iz \end{vmatrix} = i\int_{-h}^{h} e^{ixt}\,dt \;\Longrightarrow\; \frac{d}{dx}\theta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ixt}\,dt. \qquad \text{QED} \]
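Theorem 20.4 can also be probed numerically by truncating the $t$-integral. A sketch under stated assumptions: the closed form $\frac{1}{2\pi}\int_{-T}^{T} e^{ixt}\,dt = \frac{\sin Tx}{\pi x}$ is assumed, $\varphi$ is a hypothetical Gaussian test function, and the $x$-integral is cut off where $\varphi$ is negligible.

```python
import math

# Numerical sketch of Theorem 20.4: the truncated kernel sin(Tx)/(pi x)
# paired with a smooth phi recovers phi(0) as T grows.

def kernel(T, x):
    return T / math.pi if abs(x) < 1e-12 else math.sin(T * x) / (math.pi * x)

def phi(x):
    return math.exp(-x * x)            # phi(0) = 1

def pair(T, a=20.0, steps=200000):
    h = 2 * a / steps
    return sum(kernel(T, -a + (k + 0.5) * h) * phi(-a + (k + 0.5) * h)
               for k in range(steps)) * h

errs = [abs(pair(T) - 1.0) for T in (2.0, 8.0, 32.0)]
assert errs[0] > errs[-1]   # larger cutoff T means a better approximation
assert errs[-1] < 1e-3
```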
LECTURE 21

The Sturm--Liouville Problem

Spectral methods for solving (linear) PDEs call on transforms (Fourier is just one type of transform). The PDE is solvable if we can reduce it to an ODE.

Definition 21.1. Let $p(x), w(x), q(x)$ be real-valued functions defined on $\mathbb{R}$. The expression $A$ defined by
\[ Ay = \frac{1}{w(x)}\left( -\frac{d}{dx}\left( p(x)\frac{d}{dx}y \right) \right) + q(x)y \]
is called the Sturm--Liouville operator. Here $w(x)$ is the weight function, $p(x) > 0$, and $q(x)$ is called the potential.

This operator $A$ is clearly linear. But we have to specify a space where $A$ acts.

Definition 21.2. The equation
\[ -\frac{1}{w}\,(py')' + qy = \lambda y, \tag{21.1} \]
where $w, p, q$ are known functions and $\lambda$ is a parameter, is called the Sturm--Liouville equation.

Equation (21.1) always comes with some boundary conditions, which make it a Sturm--Liouville problem. Equation (21.1) is considered on a finite interval $(a,b)$, a half-line $(-\infty, a)$ or $(a,\infty)$, or the whole line $(-\infty,\infty)$. We start with the finite interval case. In the literature this case is also called regular.

Definition 21.3. The Sturm--Liouville problem on $(a,b)$
\[ \begin{cases} -\dfrac{d}{dx}\,p(x)\dfrac{d}{dx}y + w(x)q(x)y = \lambda w(x)y \\ y(a) = 0 = y(b) \end{cases} \]
is called a Dirichlet problem, and the condition $y(a) = 0 = y(b)$ is called the Dirichlet conditions.
Definition 21.4.
\[ \begin{cases} -\dfrac{d}{dx}\,p(x)\dfrac{d}{dx}y + w(x)q(x)y = \lambda w(x)y \\ y'(a) = 0 = y'(b) \end{cases} \]
is called a Neumann problem.

Definition 21.5.
\[ \begin{cases} -\dfrac{d}{dx}\,p(x)\dfrac{d}{dx}y + w(x)q(x)y = \lambda w(x)y \\ y'(a) + \alpha y(a) = 0 = y'(b) + \beta y(b), \qquad \alpha, \beta \in \mathbb{R} \end{cases} \]
is called a Robin problem.

Note that Definition 21.5 includes 21.3 and 21.4. Indeed, letting $\alpha, \beta \to \infty$ in Definition 21.5, it transforms into 21.3; and Definition 21.4 is Definition 21.5 with $\alpha = \beta = 0$.

Note also that we can't solve any of the problems above explicitly unless $w, p, q$ are constants or special functions (like Bessel functions). And the original problem cannot be handled by treating each spectrum separately and then putting the pieces together.
Before showing self-adjointness, we need to know which space, and hence which inner product, to use.

Definition 21.6. Let $w(x) \ge 0$ on $(a,b)$ and let
\[ L^2(\Delta, w) \equiv \left\{ f : \int_\Delta w(x)\,|f(x)|^2\,dx < \infty \right\}. \]
$L^2(\Delta, w)$ is called a weighted $L^2$ space, and $w$ is called a weight function. Accordingly,
\[ \langle f, g\rangle = \int_\Delta w(x)\,f(x)\overline{g(x)}\,dx \]
is a weighted scalar product. Once $w \ge 0$, $\langle f, g\rangle$ is indeed a scalar product.

Theorem 21.7. Let $w \ge 0$ and $p(x), q(x) \in \mathbb{R}$. The operator
\[ A = \frac{1}{w(x)}\left( -\frac{d}{dx}\,p(x)\frac{d}{dx} \right) + q(x), \]
defined on
\[ \operatorname{Dom} A \equiv \left\{ f \in L^2(\Delta, w) : f'(a) + \alpha f(a) = 0 = f'(b) + \beta f(b),\ \alpha, \beta \in \mathbb{R} \right\}, \]
is self-adjoint.
Proof. Since $p, q$ are real, $\bar p = p$ and $\bar q = q$, so
\[ \langle Af, g\rangle = \int_a^b w(x)\left[ \frac{1}{w(x)}\left( -\frac{d}{dx}\,p(x)f'(x) \right) + q(x)f(x) \right]\overline{g(x)}\,dx \]
\[ = -\int_a^b \frac{d}{dx}\big( p(x)f'(x) \big)\,\overline{g(x)}\,dx + \int_a^b w(x)q(x)f(x)\overline{g(x)}\,dx \]
\[ \overset{\text{by parts}}{=} -p(x)f'(x)\overline{g(x)}\Big|_a^b + \int_a^b p(x)f'(x)\,\overline{g'(x)}\,dx + \int_a^b w(x)f(x)q(x)\overline{g(x)}\,dx \]
\[ \overset{\text{by parts}}{=} -pf'\bar g\Big|_a^b + fp\bar g'\Big|_a^b - \int_a^b f\,\frac{d}{dx}\big( p\bar g' \big)\,dx + \int_a^b wfq\bar g\,dx \]
\[ = \underbrace{p\big( f\bar g' - f'\bar g \big)\Big|_a^b}_{\text{compute this separately}} + \int_a^b wf\left[ \frac{1}{w}\left( -\frac{d}{dx}\,p\bar g' \right) + q\bar g \right]dx = p\big( f\bar g' - f'\bar g \big)\Big|_a^b + \langle f, Ag\rangle. \]
Now, using the boundary conditions $f'(b) = -\beta f(b)$, $\overline{g'(b)} = -\beta\overline{g(b)}$ (and similarly at $a$ with $\alpha$):
\[ p\big( f\bar g' - f'\bar g \big)\Big|_a^b = p(b)\big( -\beta f(b)\overline{g(b)} + \beta f(b)\overline{g(b)} \big) - p(a)\big( -\alpha f(a)\overline{g(a)} + \alpha f(a)\overline{g(a)} \big) = 0. \]
So we get $\langle Af, g\rangle = \langle f, Ag\rangle$. QED

Note that if $\alpha, \beta \notin \mathbb{R}$ or $p, q \notin \mathbb{R}$, then $A$ is not self-adjoint.

Definition 21.8. The Sturm--Liouville operator with $w(x) = p(x) = 1$ is called a Schrödinger operator and is commonly denoted by $H$, i.e.
\[ H = -\frac{d^2}{dx^2} + q(x). \]
Actually, any Sturm--Liouville problem can be rewritten in terms of $H$ when put in the canonical form. But $H$ prefers the whole line, so often if we work on a finite interval the operator is called Sturm--Liouville, and on the whole line it is called Schrödinger.
Theorem 21.9 (Canonical Form of the Sturm--Liouville Operator). Assume that $p(x), w(x) > 0$ on $[a,b]$, $p \in C^1[a,b]$, and $pw \in C^2[a,b]$. Then the Sturm--Liouville problem
\[ \frac{1}{w(x)}\left( -\frac{d}{dx}\,p(x)\frac{dy}{dx} \right) + q(x)y = \lambda y \tag{21.2} \]
can be transformed into the Schrödinger problem
\[ -\frac{d^2u}{dz^2} + \tilde q(z)u = \tilde\lambda u \tag{21.3} \]
by a suitable substitution.

Proof. Rewrite (21.2) as
\[ \frac{1}{w}\,\frac{d}{dx}\left( p\,\frac{dy}{dx} \right) - (q - \lambda)y = 0. \tag{21.4} \]
Let
\[ z = \frac{1}{c}\int_a^x \sqrt{\frac{w(s)}{p(s)}}\,ds, \qquad c = \frac{1}{\pi}\int_a^b \sqrt{\frac{w(x)}{p(x)}}\,dx, \]
be our new variable. One has
\[ dz = \frac{1}{c}\left( \frac{w}{p} \right)^{1/2} dx \;\Longrightarrow\; \frac{d}{dx} = \frac{1}{c}\left( \frac{w}{p} \right)^{1/2}\frac{d}{dz}, \]
and (21.4) reads
\[ \frac{1}{c^2}\,\underbrace{\frac{1}{w}\left( \frac{w}{p} \right)^{1/2}}_{=\,(wp)^{-1/2}}\,\frac{d}{dz}\,\underbrace{p\left( \frac{w}{p} \right)^{1/2}}_{=\,(wp)^{1/2}}\,\frac{dy}{dz} - (q - \lambda)y = 0. \tag{21.5} \]
Introduce $\mu = (wp)^{1/4}$ and $u = \mu y$; then (21.5) transforms into
\[ \frac{1}{c^2}\,\mu^{-2}\,\frac{d}{dz}\left( \mu^2\,\frac{d}{dz}\,\mu^{-1}u \right) - (q - \lambda)\mu^{-1}u = 0 \;\Longrightarrow\; \mu^{-1}\big( \mu^2(\mu^{-1}u)' \big)' - c^2(q - \lambda)u = 0. \tag{21.6} \]
But
\[ \big( \mu^2(\mu^{-1}u)' \big)' = \big( \mu^2(\mu^{-1}u' - \mu^{-2}\mu' u) \big)' = (\mu u' - \mu' u)' = \mu' u' + \mu u'' - \mu'' u - \mu' u' = \mu u'' - \mu'' u, \]
and so from (21.6) one has
\[ \mu^{-1}(\mu u'' - \mu'' u) - c^2(q - \lambda)u = 0 \;\Longrightarrow\; -u'' + \underbrace{\big( \mu^{-1}\mu'' + c^2 q \big)}_{=\,\tilde q}\,u = \underbrace{c^2\lambda}_{=\,\tilde\lambda}\,u. \qquad \text{QED} \]

Theorem 21.10. The spectrum of the operator $A$ in Theorem 21.7 is discrete and simple. (No proof.)
Remark 21.11. Since $A = A^*$ and $\sigma(A) = \sigma_d(A)$, the set of its eigenfunctions $\{y_n(x)\}$ forms an ONB in $L^2(\Delta, w)$.
LECTURE 22

Legendre Polynomials

Consider a specific Sturm--Liouville operator, $-\frac{d}{dx}(1-x^2)\frac{d}{dx}$ on $L^2(-1,1)$, i.e. the case
\[ w(x) = 1, \qquad p(x) = 1 - x^2, \qquad q(x) = 0. \]

Theorem 22.1. The operator
\[ A = -\frac{d}{dx}(1-x^2)\frac{d}{dx} \]
is self-adjoint on¹
\[ \operatorname{Dom} A = \left\{ y \in C[-1,1] : Ay \in L^2(-1,1) \right\}. \]
Note that it is possible to extend this operator to a larger domain and still show self-adjointness, but the proof is more complicated.

Exercise 22.2. Prove that $A$ defined above is self-adjoint for $y \in C^1[-1,1]$.
Let us consider the eigenfunction problem for $A$. The spectrum of $A$ is expected to be discrete, and we will find that the solutions associated to each eigenvalue in the spectrum are the so-called Legendre polynomials:
\[ Ay = \lambda y \;\Longleftrightarrow\; -\frac{d}{dx}\left( (1-x^2)\frac{d}{dx}y \right) = \lambda y. \]
This equation is called the Legendre equation. One can check that it is equivalent to
\[ (1-x^2)\frac{d^2}{dx^2}y - 2x\frac{dy}{dx} + \lambda y = 0, \tag{22.1} \]
which is a second order linear homogeneous equation. Let us solve it using power series, i.e. by the Frobenius method (see the Appendix to this Lecture). Remark that $x = \pm 1$ are regular singular points, since for example for $x = 1$, (22.1) can be rewritten as
\[ y'' + \frac{p(x)}{x-1}\,y' + \frac{q(x)}{(x-1)^2}\,y = 0 \quad \text{with} \quad p(x) = \frac{2x}{x+1}, \quad q(x) = \lambda\,\frac{1-x}{1+x}. \]
But here we are not going to use the expansion at the regular singular points, but rather at $x_0 = 0$, i.e. an ordinary point. So we are looking for a solution to (22.1) in $L^2(-1,1)$ in the form
\[ y = \sum_{n\ge 0} c_n x^n \quad \text{with } y \in C[-1,1]. \]

¹ Recall that $C[-1,1]$ is the set of continuous functions $f$ on $[-1,1]$; so, in particular, the limits $\lim_{x\to\pm 1} f(x)$ are finite.
We have
\[ (1-x^2)\sum_{n\ge 0} n(n-1)c_n x^{n-2} - 2x\sum_{n\ge 0} nc_n x^{n-1} + \lambda\sum_{n\ge 0} c_n x^n = 0 \]
\[ \Longrightarrow\; \sum_{n\ge 0} n(n-1)c_n x^{n-2} - \sum_{n\ge 0} n(n-1)c_n x^n - \sum_{n\ge 0} 2nc_n x^n + \lambda\sum_{n\ge 0} c_n x^n = 0 \]
\[ \Longrightarrow\; \sum_{n\ge 0} \big[ (n+2)(n+1)c_{n+2} - \big( n(n-1) + 2n - \lambda \big)c_n \big]\,x^n = 0. \]
So, by shifting the index, we get
\[ c_n = \frac{(n-2)(n-1) - \lambda}{n(n-1)}\,c_{n-2}, \qquad n = 2, 3, \ldots, \tag{22.2} \]
which gives us a recursion formula for $c_n$. Separating odd and even powers, one has
\[ y(x) = A\sum_{n\ge 0} c_{2n}x^{2n} + B\sum_{n\ge 0} c_{2n+1}x^{2n+1}, \tag{22.3} \]
where $A, B$ are arbitrary constants, and for the $c_n$, by (22.2), we explicitly have
\[ c_0 = 1, \quad c_2 = \frac{-\lambda}{2}, \quad c_4 = \frac{2\cdot 3 - \lambda}{4\cdot 3}\,c_2 = \frac{(2\cdot 3 - \lambda)(-\lambda)}{2\cdot 3\cdot 4}, \quad c_6 = \frac{4\cdot 5 - \lambda}{5\cdot 6}\,c_4 = \frac{(4\cdot 5 - \lambda)(2\cdot 3 - \lambda)(-\lambda)}{2\cdot 3\cdot 4\cdot 5\cdot 6}, \;\ldots \]
\[ c_1 = 1, \quad c_3 = \frac{1\cdot 2 - \lambda}{2\cdot 3}, \;\ldots \]
From Frobenius theory, we also know that each power series converges for $|x| < 1$. Let us check what is going on at the endpoints.

Note that if $\lambda$ is an integer of the form $n(n+1)$, then eventually $c_{n+2} = 0$, all the following coefficients are zero, and we get the Legendre polynomials. We will show that if not, the corresponding function blows up at one of the endpoints (and hence is not in the domain of $A$).

We rewrite the even coefficients as follows:
\[ c_{2n} = \left( \prod_{m=1}^{n-1} \left( 1 - \frac{\lambda}{2m(2m+1)} \right) \right) \frac{-\lambda}{2n}. \]
Next we use the following theorem, offered here without a proof.

Theorem 22.3. The infinite product $\prod_{m\in\mathbb{N}} (1 - z_m)$ converges absolutely if and only if the series $\sum_{m\in\mathbb{N}} z_m$ converges absolutely.

Since
\[ \sum_{m=1}^{N} \frac{|\lambda|}{2m(2m+1)} = |\lambda|\sum_{k=2}^{2N+1} \frac{(-1)^k}{k}, \]
the series $\sum_{m\in\mathbb{N}} \frac{\lambda}{2m(2m+1)}$ converges absolutely, and hence so does
\[ \prod_{m\in\mathbb{N}} \left( 1 - \frac{\lambda}{2m(2m+1)} \right). \]
Therefore, for sufficiently large $n$,
\[ c_{2n} \approx \frac{c(\lambda)}{n}, \]
where $c(\lambda)$ is a constant depending on $\lambda$. But if that is the case, then the part of the solution $\sum_{n\ge 0} c_{2n}x^{2n}$ is infinite at $x = 1$.

A similar reasoning can be used for the odd terms. So we find that if the sum is indeed infinite, i.e. if $\lambda \neq n(n+1)$ for any nonnegative integer $n$, then the solution $y \notin \operatorname{Dom} A$.
Theorem 22.4. The spectrum $\sigma(A)$ of
\[ A = -\frac{d}{dx}(1-x^2)\frac{d}{dx} \quad \text{on } L^2(-1,1) \]
is purely discrete and
\[ \sigma(A) = \{ n(n+1) \}_{n\in\mathbb{N}_0}, \qquad \mathbb{N}_0 = \{0, 1, 2, \ldots\}, \]
i.e. $\sigma(A) = \{0, 2, 6, 12, \ldots\}$.

The corresponding eigenfunctions, denoted $P_n$, are called Legendre polynomials. Explicitly,
\[ P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \frac{3x^2 - 1}{2}, \quad P_3(x) = \frac{5x^3 - 3x}{2}, \;\ldots \]

Remark 22.5. Note that we don't have to impose any boundary conditions for the operator $A$ in Theorem 22.4. The requirement that solutions be in $C[-1,1]$ is a condition, but not a boundary condition.

There is a nice formula, called the Rodrigues formula, for computing Legendre polynomials:
\[ P_n(x) = \frac{1}{n!\,2^n}\,\frac{d^n}{dx^n}\,(x^2 - 1)^n, \qquad n\in\mathbb{N}_0. \]

Exercise 22.6. Use the Rodrigues formula to compute
\[ \int_{-1}^{1} P_n^2(x)\,dx, \]
where $P_n$ are the Legendre polynomials.
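The Rodrigues formula is easy to implement by manipulating coefficient lists, and the orthogonality asserted below can then be checked numerically. A sketch, not part of the lecture: the polynomial arithmetic and the midpoint quadrature are ad hoc choices.

```python
import math

# Sketch: generate Legendre polynomials from the Rodrigues formula
# P_n(x) = 1/(2^n n!) d^n/dx^n (x^2 - 1)^n and check orthogonality
# in L^2(-1, 1).  Polynomials are lists of coefficients, index = power.

def poly_mul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_diff(p):
    return [i * p[i] for i in range(1, len(p))]

def legendre(n):
    p = [1.0]
    for _ in range(n):                      # build (x^2 - 1)^n
        p = poly_mul(p, [-1.0, 0.0, 1.0])
    for _ in range(n):                      # take the n-th derivative
        p = poly_diff(p)
    c = 1.0 / (math.factorial(n) * 2 ** n)
    return [c * a for a in p]

def peval(p, x):
    return sum(a * x ** i for i, a in enumerate(p))

def inner(p, q, steps=20000):
    """Midpoint approximation of int_{-1}^{1} p(x) q(x) dx."""
    h = 2.0 / steps
    return sum(peval(p, -1 + (k + 0.5) * h) * peval(q, -1 + (k + 0.5) * h)
               for k in range(steps)) * h

P = [legendre(n) for n in range(4)]
assert P[2] == [-0.5, 0.0, 1.5]             # P_2 = (3x^2 - 1)/2
for n in range(4):
    for m in range(n):
        assert abs(inner(P[n], P[m])) < 1e-6   # orthogonality
```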
Since our operator $A$ is self-adjoint, $\{P_n\}_{n\in\mathbb{N}_0}$ forms an orthogonal basis in $L^2(-1,1)$. Note that this is not quite an orthonormal basis; indeed, the norm depends on $n$ (see Exercise 22.6).

Nonetheless, this fact plays a fundamental role in physics. Recall that in $P[-1,1]$, the set of all polynomials, we knew the basis $\{1, x, x^2, \ldots\}$, but it was not an orthogonal basis. So the Legendre polynomials $P_n$ are more useful, and they form a basis for a bigger space, $L^2(-1,1)$:
\[ \forall f \in L^2(-1,1) : \quad f(x) = \sum_{n\ge 0} f_n P_n(x). \]
In Numerical Analysis it is particularly useful to have an orthogonal basis; in physics, such bases come from PDEs, and having a lot of orthogonal bases to choose from allows us to pick the one related to the problem at hand.

Exercise 22.7. Use binomial series to formally show that ($P_n$ are again the Legendre polynomials)
\[ (1 - 2xt + t^2)^{-1/2} = \sum_{n\ge 0} P_n(x)\,t^n. \]
(The expression on the left-hand side is called the generating function of the Legendre polynomials.)
Appendix. Frobenius Theory

Consider the following ODE:
\[ y'' + P(x)y' + Q(x)y = 0. \tag{22.4} \]
We can expand $P(x)$ and $Q(x)$ in power series and solve by the method of undetermined coefficients, but this quickly becomes unbelievably unwieldy.
Example 22.8. Consider $y'' + y = 0$. Write
\[ y = \sum_{n\ge 0} c_n x^n, \]
where we assume this series is absolutely convergent. Hence it is uniformly convergent inside a ball, so we can differentiate:
\[ y'' = \sum_{n\ge 0} c_n n(n-1)x^{n-2} = \sum_{n\ge 2} c_n n(n-1)x^{n-2} = \begin{vmatrix} k = n-2 \\ n = k+2 \end{vmatrix} = \sum_{k\ge 0} (k+1)(k+2)c_{k+2}\,x^k. \]
So we get
\[ \sum_{k\ge 0} (k+1)(k+2)c_{k+2}\,x^k + \sum_{k\ge 0} c_k x^k = 0 \;\Longrightarrow\; \sum_{k\ge 0} \underbrace{\big[ (k+1)(k+2)c_{k+2} + c_k \big]}_{=0}\,x^k = 0. \]
Now we need to solve this recursive relation:
\[ c_{k+2} = \frac{-c_k}{(k+1)(k+2)}, \qquad k = 0, 1, 2, \ldots \]
Note that
\[ c_2 = -\frac{c_0}{2}, \quad c_4 = -\frac{c_2}{3\cdot 4} = \frac{(-1)^2 c_0}{2\cdot 3\cdot 4}, \quad c_6 = -\frac{c_4}{5\cdot 6} = \frac{(-1)^3 c_0}{2\cdot 3\cdot 4\cdot 5\cdot 6}, \quad \ldots, \quad c_{2n} = \frac{(-1)^n}{(2n)!}\,c_0, \]
\[ c_3 = -\frac{c_1}{2\cdot 3}, \quad c_5 = -\frac{c_3}{4\cdot 5} = \frac{(-1)^2 c_1}{2\cdot 3\cdot 4\cdot 5}, \quad \ldots, \quad c_{2n+1} = \frac{(-1)^n}{(2n+1)!}\,c_1. \]
So
\[ y = c_0\sum_{n\ge 0} \frac{(-1)^n}{(2n)!}\,x^{2n} + c_1\sum_{n\ge 0} \frac{(-1)^n}{(2n+1)!}\,x^{2n+1} = c_0\cos x + c_1\sin x, \quad \text{as expected!} \]
Note that $y$ is convergent on $\mathbb{C}$, and hence $y$ is entire.
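The recursion in Example 22.8 can be run mechanically and compared against the known Taylor coefficients of $\cos x$ and $\sin x$. A minimal sketch; the truncation order is arbitrary.

```python
import math

# Sketch: run c_{k+2} = -c_k / ((k+1)(k+2)) from Example 22.8 and check
# it reproduces the Taylor coefficients of cos x and sin x.

def series(c0, c1, nmax):
    c = [0.0] * (nmax + 1)
    c[0], c[1] = c0, c1
    for k in range(nmax - 1):
        c[k + 2] = -c[k] / ((k + 1) * (k + 2))
    return c

c = series(1.0, 0.0, 10)                  # c0 = 1, c1 = 0  gives cos x
for n in range(5):
    assert abs(c[2 * n] - (-1) ** n / math.factorial(2 * n)) < 1e-15
    assert c[2 * n + 1] == 0.0

c = series(0.0, 1.0, 10)                  # c0 = 0, c1 = 1  gives sin x
for n in range(5):
    assert abs(c[2 * n + 1] - (-1) ** n / math.factorial(2 * n + 1)) < 1e-15
```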
Definition 22.9. $x_0$ is called an ordinary point of (22.4) if $P, Q$ are analytic in some neighborhood of $x_0$, say $\mathbb{D}_R(x_0) = \{z : |z - x_0| < R\}$.

Theorem 22.10. If $x_0$ is an ordinary point of (22.4), then (22.4) has a power series solution
\[ y(x) = \sum_{n\ge 0} c_n(x - x_0)^n, \tag{22.5} \]
absolutely convergent on $\mathbb{D}_R(x_0)$. Moreover, the general solution to (22.4) has the form of (22.5).
Example 22.11. Consider the Stokes equation:
\[ y'' - xy = 0. \]
Note that any point $x_0$ is ordinary, since $P, Q$ are entire. Hence we can choose $x_0 = 0$ for simplicity. We have
\[ y = \sum_{n\ge 0} c_n x^n, \qquad y'' = \sum_{k\ge 0} (k+1)(k+2)c_{k+2}\,x^k. \]
Then
\[ \sum_{k\ge 0} (k+1)(k+2)c_{k+2}\,x^k - \underbrace{\sum_{n\ge 0} c_n x^{n+1}}_{\text{reindex } n+1=k,\ n=k-1} = 0 \;\Longrightarrow\; \sum_{k\ge 0} (k+1)(k+2)c_{k+2}\,x^k - \sum_{k\ge 1} c_{k-1}x^k = 0 \]
\[ \Longrightarrow\; 2c_2 + \sum_{k\ge 1} \big[ (k+1)(k+2)c_{k+2} - c_{k-1} \big]\,x^k = 0. \]
So $c_2 = 0$, and we get the recursion relation $(k+1)(k+2)c_{k+2} - c_{k-1} = 0$ for $k = 1, 2, \ldots$. Reindexing,
\[ c_{k+3} = \frac{c_k}{(k+2)(k+3)}, \qquad k = 0, 1, 2, \ldots \]
Then
\[ c_3 = \frac{c_0}{2\cdot 3}, \quad c_6 = \frac{c_3}{5\cdot 6} = \frac{c_0}{2\cdot 3\cdot 5\cdot 6}, \quad c_9 = \frac{c_6}{8\cdot 9} = \frac{c_0}{2\cdot 3\cdot 5\cdot 6\cdot 8\cdot 9}, \quad \ldots, \quad c_{3n} = \frac{\left( \prod_{k=1}^{n-1}(3k+1) \right) c_0}{(3n)!}, \]
\[ c_4 = \frac{c_1}{3\cdot 4}, \quad c_7 = \frac{c_4}{6\cdot 7} = \frac{c_1}{3\cdot 4\cdot 6\cdot 7}, \quad \ldots, \quad c_{3n+1} = \frac{\left( \prod_{k=1}^{n}(3k-1) \right) c_1}{(3n+1)!}, \]
\[ c_2 = 0 \;\Longrightarrow\; c_{3n+2} = 0 \]
for any $n = 0, 1, 2, \ldots$. Thus
\[ y(x) = c_0\sum_{n\ge 0} \frac{\prod_{k=1}^{n-1}(3k+1)}{(3n)!}\,x^{3n} + c_1\sum_{n\ge 0} \frac{\prod_{k=1}^{n}(3k-1)}{(3n+1)!}\,x^{3n+1}, \]
with the two series also known (up to normalization) as the Airy functions $A(x), B(x)$. These are not elementary functions, but they are special functions. And so, sometimes, the Stokes equation is referred to as part of the Airy family.
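The two series solutions above can be built directly from the recursion and checked by substituting back into the ODE. A sketch; the truncation order and sample points are arbitrary, and the residual is tiny only for $|x| < 1$ at this truncation.

```python
# Sketch: build the two series solutions of the Stokes (Airy) equation
# y'' - x y = 0 from the recursion c_{k+3} = c_k / ((k+2)(k+3)) and verify
# numerically that the residual y'' - x y vanishes at sample points.

def airy_series_coeffs(c0, c1, nmax):
    c = [0.0] * (nmax + 1)
    c[0], c[1] = c0, c1                  # c2 = 0 is forced by the equation
    for k in range(nmax - 2):
        c[k + 3] = c[k] / ((k + 2) * (k + 3))
    return c

def eval_series(c, x):
    y = sum(a * x ** i for i, a in enumerate(c))
    ypp = sum(i * (i - 1) * a * x ** (i - 2) for i, a in enumerate(c) if i >= 2)
    return y, ypp

for c0, c1 in ((1.0, 0.0), (0.0, 1.0)):   # the two independent solutions
    c = airy_series_coeffs(c0, c1, 40)
    for x in (-0.5, 0.3, 0.9):
        y, ypp = eval_series(c, x)
        assert abs(ypp - x * y) < 1e-12   # residual of y'' = x y
```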
Exercise 22.12. Use the power series method to solve
\[ y'' - xy = 1, \qquad y(0) = y'(0) = 0. \]
Definition 22.13. If $x_0$ is not an ordinary point of (22.4), i.e. of²
\[ y'' + P(x)y' + Q(x)y = 0, \]
then it is called a singular point.

What happens around singular points in general? We don't know; we can't offer a treatment for a generic singular point. We need to pick a good "bad guy."

Definition 22.14. A point $x_0$ is called a regular singular point of (22.4) if $(x - x_0)P(x)$ and $(x - x_0)^2 Q(x)$ are analytic in some neighborhood of $x_0$. That is, (22.4) can be represented as
\[ y'' + \frac{p(x)}{x - x_0}\,y' + \frac{q(x)}{(x - x_0)^2}\,y = 0, \tag{22.6} \]
where $p(x) = (x - x_0)P(x)$ and $q(x) = (x - x_0)^2 Q(x)$ are analytic; or, alternately,
\[ (x - x_0)^2 y'' + (x - x_0)p(x)y' + q(x)y = 0. \tag{22.7} \]
Note that there would be no loss of generality in taking $x_0 = 0$, since we can change variables.

Definition 22.15. If $x_0$ is a singular point but not a regular singular point, then $x_0$ is an irregular singular point.

² Recall that this form is called the standard form; the canonical form would be $y'' = q(x)y$.
Example 22.16. Consider
\[ (x^2 - 4)^2 y'' + (x-2)y' + y = 0 \;\Longleftrightarrow\; (x-2)^2(x+2)^2 y'' + (x-2)y' + y = 0 \;\Longleftrightarrow\; y'' + \frac{p(x)}{x-2}\,y' + \frac{q(x)}{(x-2)^2}\,y = 0, \]
where $p(x) = q(x) = \frac{1}{(x+2)^2}$. So $x = 2$ is a regular singular point, but $x = -2$ is an irregular singular point, and all other points are ordinary. There are treatments for some irregular singular points, but they cause severe problems and are not in most books.
Theorem 22.17 (Frobenius Theorem). If $x_0$ is a regular singular point of (22.7), then there exists at least one power series solution of the form
\[ y(x) = (x - x_0)^r \sum_{n\ge 0} c_n(x - x_0)^n \]
for some $r \in \mathbb{C}$. The series converges on $x_0 < x < x_0 + R$ for some $R > 0$.

Note that $R$ is related to the neighborhood of analyticity of $p, q$, but it could be wider. Also, here $x_0 < x < x_0 + R$, so $x - x_0 > 0$, and it is easier to deal with $(x - x_0)^r$. But both solutions need not be of this form, as demonstrated by the following two examples.
Example 22.18 (Two series solutions). Consider the following equation:
\[ 3xy'' + y' - y = 0 \;\Longleftrightarrow\; y'' + \frac{1}{3x}\,y' - \frac{x}{3x^2}\,y = 0, \]
so $x = 0$ is a regular singular point. By the Frobenius theorem,
\[ y(x) = x^r\sum_{n\ge 0} c_n x^n, \quad y'(x) = \sum_{n\ge 0} c_n(n+r)x^{n+r-1}, \quad y''(x) = \sum_{n\ge 0} c_n(n+r)(n+r-1)x^{n+r-2}. \]
So
\[ 0 = 3xy'' + y' - y = \sum_{n\ge 0} 3c_n(n+r)(n+r-1)x^{n+r-1} + \sum_{n\ge 0} c_n(n+r)x^{n+r-1} - \sum_{n\ge 0} c_n x^{n+r} \]
\[ = c_0 x^{r-1}\big( 3r(r-1) + r \big) + \sum_{k\ge 1} \big[ 3c_k(k+r)(k+r-1) + c_k(k+r) - c_{k-1} \big]\,x^{k+r-1} \]
\[ \Longrightarrow\; \begin{cases} r(3r - 2) = 0 \\ c_k = \dfrac{c_{k-1}}{(k+r)(3k + 3r - 2)}, & k = 1, 2, \ldots \end{cases} \]
The equation $r(3r - 2) = 0$ is referred to as the indicial equation and leads to two possible solutions: $r_1 = 2/3$ and $r_2 = 0$.

$r_1 = 2/3$:
\[ c_k = \frac{c_{k-1}}{\left( k + \frac{2}{3} \right)(3k + 2 - 2)} = \frac{3c_{k-1}}{3k(3k+2)} = \frac{c_{k-1}}{k(3k+2)}, \qquad k = 1, 2, \ldots \]
\[ \Longrightarrow\; c_k = \frac{c_0}{\prod_{n=1}^{k} n(3n+2)} = \frac{c_0}{k!\,5\cdot 8\cdots(3k+2)}; \]
with $c_0 \neq 0$: $c_1 = \frac{c_0}{5}$, $c_2 = \frac{c_1}{2\cdot 8} = \frac{c_0}{2\cdot 5\cdot 8}$, …, and
\[ y_1(x) = c_0\,x^{2/3}\left( 1 + \sum_{n\ge 1} \frac{x^n}{n!\,5\cdot 8\cdots(3n+2)} \right), \qquad x\in\mathbb{R}. \]
$r_2 = 0$:
\[ c_k = \frac{c_{k-1}}{k(3k-2)}, \quad k = 1, 2, \ldots \;\Longrightarrow\; y_2(x) = c_0\left( 1 + \sum_{n\ge 1} \frac{x^n}{n!\,1\cdot 4\cdot 7\cdots(3n-2)} \right), \qquad x\in\mathbb{R}. \]
The powers are different, so we have two linearly independent power series solutions!
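The two Frobenius solutions just constructed can be verified by substituting the truncated series back into the equation. A sketch; the truncation order, sample points, and tolerance are arbitrary, and only positive $x$ is used since $x^r$ with fractional $r$ is involved.

```python
# Sketch: Frobenius solutions of 3x y'' + y' - y = 0 (Example 22.18).
# For an indicial root r, the recursion is
#   c_k = c_{k-1} / ((k + r)(3k + 3r - 2)),
# and we check the ODE residual numerically for both roots.

def frob_coeffs(r, nmax):
    c = [1.0]                                    # normalize c_0 = 1
    for k in range(1, nmax + 1):
        c.append(c[-1] / ((k + r) * (3 * k + 3 * r - 2)))
    return c

def residual(r, c, x):
    y   = sum(a * x ** (n + r) for n, a in enumerate(c))
    yp  = sum(a * (n + r) * x ** (n + r - 1) for n, a in enumerate(c))
    ypp = sum(a * (n + r) * (n + r - 1) * x ** (n + r - 2) for n, a in enumerate(c))
    return 3 * x * ypp + yp - y

for r in (2.0 / 3.0, 0.0):                       # the two indicial roots
    c = frob_coeffs(r, 30)
    for x in (0.25, 0.5, 1.0):
        assert abs(residual(r, c, x)) < 1e-10
```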
Remark 22.19. This example shows that there can be two series solutions in the Frobenius Theorem.

Now, the second example seems very similar.
Example 22.20 (One series solution). Consider the equation $xy'' + 3y' - y = 0$. By the Frobenius theorem,
\[ y(x) = x^r\sum_{n\ge 0} c_n x^n, \quad y'(x) = \sum_{n\ge 0} c_n(n+r)x^{n+r-1}, \quad y''(x) = \sum_{n\ge 0} c_n(n+r)(n+r-1)x^{n+r-2}. \]
So
\[ 0 = xy'' + 3y' - y = \sum_{n\ge 0} \big[ (n+r)(n+r-1) + 3(n+r) \big]\,c_n x^{n+r-1} - \sum_{n\ge 0} c_n x^{n+r} \]
\[ = c_0 x^{r-1}\big( r(r-1) + 3r \big) + \sum_{n\ge 1} \big[ \big( (n+r)(n+r-1) + 3(n+r) \big)c_n - c_{n-1} \big]\,x^{n+r-1}. \]
The indicial equation here is $r(r+2) = 0$, with solutions $r_1 = 0$ and $r_2 = -2$.

$r_1 = 0$:
\[ c_n = \frac{c_{n-1}}{n(n+2)}, \quad n = 1, 2, \ldots \;\Longrightarrow\; c_n = \frac{c_0}{\prod_{k=1}^{n} k(k+2)} = \frac{2c_0}{n!\,(n+2)!} \;\Longrightarrow\; y_1(x) = 2c_0\sum_{n\ge 0} \frac{x^n}{n!\,(n+2)!}, \qquad x\in\mathbb{R}. \]
$r_2 = -2$: the recursion formula becomes $n(n-2)c_n - c_{n-1} = 0$:
\[ n = 1: \; -c_1 - c_0 = 0, \qquad n = 2: \; 0\cdot c_2 - c_1 = 0 \;\Longrightarrow\; c_0 = c_1 = 0, \]
\[ c_n = \frac{c_{n-1}}{n(n-2)}, \quad n = 3, 4, \ldots \;\Longrightarrow\; y_2(x) = c_2\,x^{-2}\sum_{n\ge 2} \frac{2x^n}{n!\,(n-2)!} = 2c_2\sum_{n\ge 0} \frac{x^n}{(n+2)!\,n!}, \]
which is the same as $y_1$. So there exists only one series solution.
Remark 22.21. Compare Example 22.18 and Example 22.20: there is not much difference between the two equations. However, the first one has two series solutions and the second only one.

Exercise 22.22 (From a physics qualifying exam). Solve the differential equation
\[ x^2 y'' + 2xy' + (x^2 - 2)y = 0 \]
using the Frobenius method. That is, assume a solution of the form
\[ y = \sum_{n\ge 0} c_n x^{n+k} \]
and substitute back into the differential equation.
a) Determine the values of $k$ which are allowed.
b) For each value of $k$, develop a recursion relation for $c_n$.
c) Discuss the convergence of each solution.
142 22. LEGENDRE POLYNOMIALS
Theorem 22.23. If in (22.6)
p(x) =
n0
p
n
(x x
0
)
n
, q(x) =
n0
q
n
(x x
0
)
n
then the indicial equation has the form
r(r 1) + p
0
r +q
0
= 0.
Furthermore, from this emerge three cases:

Case 1. $r_1 \ne r_2$ and $r_1 - r_2 \notin \mathbb{Z}$. Then
$$y_1(x) = \sum_{n\ge0}c_n(x-x_0)^{n+r_1},\ c_0 \ne 0; \qquad y_2(x) = \sum_{n\ge0}b_n(x-x_0)^{n+r_2},\ b_0 \ne 0.$$

Case 2. $r_1 - r_2 = N$, a positive integer. Then
$$y_1(x) = \sum_{n\ge0}c_n(x-x_0)^{n+r_1},\ c_0 \ne 0;$$
$$y_2(x) = Cy_1(x)\ln(x-x_0) + \sum_{n\ge0}b_n(x-x_0)^{n+r_2},\ b_0 \ne 0,$$
where $C$ is a constant.

Case 3. $r_1 = r_2$. Then
$$y_1(x) = \sum_{n\ge0}c_n(x-x_0)^{n+r_1},\ c_0 \ne 0;$$
$$y_2(x) = y_1(x)\ln(x-x_0) + \sum_{n\ge1}b_n(x-x_0)^{n+r_2}.$$
Exercise 22.24. Prove that the indicial equation is indeed
$$r(r-1) + p_0 r + q_0 = 0$$
under the conditions of Theorem 22.23.

Exercise 22.25 (Two series solutions). Solve
$$xy'' + (x-6)y' - 3y = 0.$$

Exercise 22.26. Find a second solution of
$$xy'' + 3y' - y = 0.$$
LECTURE 23
Harmonic Oscillator
Let us consider the equation
$$-\frac{d^2}{dx^2}u + x^2u = \lambda u, \quad -\infty < x < \infty. \tag{23.1}$$
This equation appears in Quantum Mechanics and is a specific case of the Schrödinger equation
$$-u'' + q(x)u = \lambda u.$$
Equation (23.1) can be viewed as an eigenvalue problem
$$Au = \lambda u,$$
where $A = -\frac{d^2}{dx^2} + x^2$ in $L^2(\mathbb{R})$. The term $-\frac{d^2}{dx^2}$ represents the kinetic energy, and the term $x^2$ the potential energy.
Clearly, $A$ is a Sturm–Liouville operator, but in $L^2$ on the whole line $(-\infty,\infty)$.
Note at this point that you may wonder what basis would work for $L^2(\mathbb{R})$: polynomials work locally but blow up at $\pm\infty$, so they won't work on $\mathbb{R}$; harmonics ($e^{inx}$) are not in $L^2(\mathbb{R})$, so they won't work either; plus they would require some decay at infinity. So it is not obvious to find something that would work.
Theorem 23.1. $A = A^*$.

Proof. Note first that if $f(x) \in L^2(\mathbb{R})$ then $\lim_{x\to\pm\infty}f(x) = 0$. For $f, g \in L^2(\mathbb{R})$, we have
$$\langle Af, g\rangle = \int Af(x)\overline{g(x)}\,dx = -\int f''(x)\overline{g(x)}\,dx + \int x^2f(x)\overline{g(x)}\,dx$$
and, integrating by parts (the boundary term vanishes),
$$= \underbrace{-f'(x)\overline{g(x)}\Big|_{-\infty}^{\infty}}_{=0} + \int f'(x)\overline{g'(x)}\,dx + \int x^2f(x)\overline{g(x)}\,dx$$
and by parts once more,
$$= \underbrace{f(x)\overline{g'(x)}\Big|_{-\infty}^{\infty}}_{=0} - \int f(x)\overline{g''(x)}\,dx + \int x^2f(x)\overline{g(x)}\,dx = \int f(x)\overline{\big(-g''(x) + x^2g(x)\big)}\,dx = \langle f, Ag\rangle. \ \text{QED}$$

Now we are going to find the spectrum of $A$. Recall that $H = -\frac{d^2}{dx^2}$ has only a continuous spectrum with no eigenvalues.
Transform (23.1) into another equation by setting
$$u(x) = e^{-x^2/2}y(x).$$
Then
$$u'(x) = -xe^{-x^2/2}y + e^{-x^2/2}y'$$
$$u''(x) = \big(-e^{-x^2/2} + x^2e^{-x^2/2}\big)y - xe^{-x^2/2}y' - xe^{-x^2/2}y' + e^{-x^2/2}y'' = e^{-x^2/2}\big(y'' - 2xy' - (1-x^2)y\big)$$
$$-u'' + x^2u = e^{-x^2/2}\big({-y''} + 2xy' + y - x^2y + x^2y\big) = \lambda e^{-x^2/2}y$$
$$\Longrightarrow\quad -y'' + 2xy' + y = \lambda y, \quad\text{or}\quad y'' - 2xy' + (\lambda-1)y = 0, \quad x\in\mathbb{R}. \tag{23.2}$$
This is known as Hermite's equation.
Employ the Frobenius method. Set
$$y = \sum_{n\ge0}c_nx^n, \qquad y' = \sum_{n\ge0}c_nnx^{n-1}, \qquad y'' = \sum_{n\ge0}c_nn(n-1)x^{n-2}.$$
By plugging into (23.2), we get
$$0 = \sum_{n\ge0}c_nn(n-1)x^{n-2} - \sum_{n\ge0}2c_nnx^n + (\lambda-1)\sum_{n\ge0}c_nx^n = \sum_{n\ge0}\Big[c_{n+2}(n+1)(n+2) - 2c_nn + (\lambda-1)c_n\Big]x^n.$$
From this, we extract the recursion relation
$$c_{n+2} = \frac{2n+1-\lambda}{(n+1)(n+2)}\,c_n, \quad n\ge0, \qquad c_0 \ne 0,\ c_1 \ne 0.$$
For the general solution of (23.2), we have
$$y(x) = \underbrace{\sum_{n\ge0}c'_nx^{2n}}_{y_1} + \underbrace{\sum_{n\ge0}c''_nx^{2n+1}}_{y_2}. \tag{23.3}$$
By the ratio test,
$$\lim_{n\to\infty}\frac{c_{n+2}}{c_n} = \lim_{n\to\infty}\frac{2n+1-\lambda}{(n+1)(n+2)} = 0,$$
and hence (23.3) converges for all $x$. Assuming $c_0 = 1$, we can write
$$c_{n+2} = \frac{2n+1-\lambda}{(n+1)(n+2)}c_n = \left(\frac{2}{n+2} - \frac{1+\lambda}{(n+1)(n+2)}\right)c_n \approx \frac{c_n}{1+n/2} \quad\Longrightarrow\quad c_{2n} \approx \frac{1}{n!}$$
$$\Longrightarrow\quad y_1(x) \approx \sum_{n\ge0}\frac{x^{2n}}{n!} = e^{x^2}.$$
Similarly, assuming $c_1 = 1$, we now write
$$c_{n+2} = \left(\frac{2}{n+1} - \frac{3+\lambda}{(n+1)(n+2)}\right)c_n \approx \frac{2c_n}{n+1} \quad\Longrightarrow\quad c_{2n+1} \approx \frac{1}{n!}$$
$$\Longrightarrow\quad y_2(x) \approx x\sum_{n\ge0}\frac{x^{2n}}{n!} = xe^{x^2}.$$
So $y(x) \approx c_0e^{x^2} + c_1xe^{x^2}$ as $x\to\pm\infty$, and hence
$$u(x) = e^{-x^2/2}y(x) \approx c_0e^{x^2/2} + c_1xe^{x^2/2} \notin L^2(\mathbb{R}).$$
This means that (23.1) has no $L^2$ solutions for an arbitrary $\lambda$.
However, if $\lambda = 2m+1$, $m = 0, 1, \dots$, then the recursion terminates the series of the same parity as $m$ at $n = m$; choosing the seed coefficient of the other series to be zero, $y(x)$ becomes a polynomial.
So, we arrive at the fact that the harmonic oscillator $A$ has a discrete spectrum, i.e.

Theorem 23.2. Let $A = -\frac{d^2}{dx^2} + x^2$ on $L^2(\mathbb{R})$. Then
$$\sigma(A) = \sigma_d(A) = \{2m+1\}_{m\ge0}.$$
Associated eigenfunctions are $e^{-x^2/2}y_m(x)$, where the polynomials $y_m(x)$ are called Hermite polynomials and usually denoted by $H_m(x)$. Explicitly,
$$H_0(x) = 1, \quad H_1(x) = 2x, \quad H_2(x) = 4x^2 - 2, \quad H_3(x) = 8x^3 - 12x, \ \dots$$
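The listed polynomials can be reproduced with the standard three-term recurrence $H_{m+1}(x) = 2xH_m(x) - 2mH_{m-1}(x)$ (a known identity, not derived in these notes). A minimal sketch by the editor:

```python
# Generate physicists' Hermite polynomials by the three-term recurrence
# H_{m+1} = 2x H_m - 2m H_{m-1} and compare with the explicit list above.
def hermite_coeffs(m):
    """Coefficients of H_m in the power basis, lowest degree first."""
    H = [[1.0], [0.0, 2.0]]                    # H_0 = 1, H_1 = 2x
    for k in range(1, m):
        prev, cur = H[k - 1], H[k]
        nxt = [0.0] + [2.0 * c for c in cur]   # 2x * H_k
        for i, c in enumerate(prev):           # - 2k * H_{k-1}
            nxt[i] -= 2.0 * k * c
        H.append(nxt)
    return H[min(m, len(H) - 1)] if m < 2 else H[m]

assert hermite_coeffs(2) == [-2.0, 0.0, 4.0]        # H_2 = 4x^2 - 2
assert hermite_coeffs(3) == [0.0, -12.0, 0.0, 8.0]  # H_3 = 8x^3 - 12x
```
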
Since $A$ is self-adjoint, the functions $u_m(x) = e^{-x^2/2}H_m(x)$ must be orthogonal in $L^2(\mathbb{R})$, and they form a basis in $L^2(\mathbb{R})$. So,
$$\langle u_m, u_k\rangle = \int e^{-x^2}H_m(x)H_k(x)\,dx = 0, \quad m \ne k.$$
This also means that the Hermite polynomials $H_m(x)$ are orthogonal in the weighted space $L^2(e^{-x^2}, \mathbb{R})$. So a good basis in $L^2(\mathbb{R})$ looks like a polynomial weighted by $e^{-x^2/2}$ to ensure decay.
There are some nice formulas for $H_m(x)$:
$$H_m(x) = (-1)^me^{x^2}\frac{d^m}{dx^m}e^{-x^2}.$$
Or, $H_m$ can be computed from the Taylor expansion of the generating function $G(x,t)$:
$$G(x,t) = e^{-t^2+2tx} = \sum_{m\ge0}H_m(x)\frac{t^m}{m!}. \tag{23.4}$$
Let us compute the $L^2(\mathbb{R})$-norm of $e^{-x^2/2}H_m(x)$:
$$\left\|e^{-x^2/2}H_m\right\|^2 = \int_{\mathbb{R}}e^{-x^2}H_m^2(x)\,dx = \int_{\mathbb{R}}(-1)^mH_m(x)\frac{d^m}{dx^m}e^{-x^2}\,dx$$
$$= \underbrace{(-1)^mH_m(x)\frac{d^{m-1}}{dx^{m-1}}e^{-x^2}\Big|_{-\infty}^{\infty}}_{=0} + (-1)^{m-1}\int_{\mathbb{R}}H'_m(x)\frac{d^{m-1}}{dx^{m-1}}e^{-x^2}\,dx.$$
By a direct computation,
$$H'_m(x) = 2mH_{m-1}(x), \quad m\ge1, \tag{23.5}$$
and by continued integration by parts, we find
$$\left\|e^{-x^2/2}H_m\right\|^2 = 2^mm!\sqrt{\pi}.$$
So, for (23.1) we get the normalized eigenfunctions
$$u_m(x) = \frac{1}{\sqrt{2^mm!\sqrt{\pi}}}\,e^{-x^2/2}H_m(x), \quad m = 0, 1, 2, \dots$$
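Both the orthogonality relation and the norm $2^mm!\sqrt{\pi}$ can be verified numerically. A sketch by the editor, using Gauss–Hermite quadrature (which integrates $e^{-x^2}$ times a polynomial exactly) and numpy's physicists' Hermite polynomials:

```python
# Verify <u_m, u_k> = integral e^{-x^2} H_m H_k dx = delta_{mk} 2^m m! sqrt(pi).
import numpy as np
from math import factorial, sqrt, pi

nodes, weights = np.polynomial.hermite.hermgauss(30)  # exact for degree <= 59

def H(m, x):
    # physicists' Hermite polynomial H_m evaluated at x (H_2 = 4x^2 - 2, etc.)
    c = np.zeros(m + 1)
    c[m] = 1.0
    return np.polynomial.hermite.hermval(x, c)

for m in range(6):
    for k in range(6):
        integral = np.sum(weights * H(m, nodes) * H(k, nodes))
        expected = 2**m * factorial(m) * sqrt(pi) if m == k else 0.0
        assert abs(integral - expected) < 1e-8 * max(1.0, abs(expected))
```
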
Exercise 23.3. Prove formula (23.4).
Exercise 23.4. Prove the recursive formula (23.5).
LECTURE 24
The Fourier Transform
The Fourier transform is obtained as a limiting procedure on Fourier series.
1. Fourier Series
As we know from the past, every $L^2(-\pi,\pi)$-function $f(x)$ can be expanded into a Fourier series (Theorem 15.2):
$$f(x) = \sum_{n\in\mathbb{Z}}c_ne^{inx}, \qquad c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-inx}\,dx. \tag{24.1}$$
It is common to adopt the following notation: $c_n = \hat f(n)$.
By making a scale transformation as seen in Theorem 15.9 and Exercise 15.10, every function $f(x) \in L^2(-T,T)$ can be expanded into a Fourier series:
$$f(x) = \sum_{n\in\mathbb{Z}}\frac{1}{2T}\hat f(n)e^{i\frac{\pi nx}{T}}, \qquad \hat f(n) = \int_{-T}^{T}f(x)e^{-i\frac{\pi nx}{T}}\,dx. \tag{24.2}$$
The coefficients $\hat f(n)$ represent how much of each harmonic is present in the signal, the weight of each harmonic. The smoother the function, the faster $\hat f(n)$ decays.
In other words, a decent function $f(x)$ defined on a finite interval $(-T,T)$ can be represented by (24.2). It is natural to ask what happens if $f(x)$ is defined on the whole line $(-\infty,\infty)$.
Well, if $f(x)$ is periodic with period $2T$, as in the figure below, then (24.2) remains valid outside of $(-T,T)$.

[figure: a $2T$-periodic function over $-T$, $T$, $3T$]

But what if $f(x)$ is not periodic, like the example in the figure below?

[figure: a non-periodic function over $-T$, $T$, $3T$]
Let us see what's going on with (24.2) as $T\to\infty$. This limiting procedure is not trivial since none of the formulas (24.2) admit switching $T$ for $\infty$.
Introduce a new quantity
$$\lambda_n \equiv \frac{\pi n}{T};$$
then (24.2) reads
$$f(x) = \sum_{n\in\mathbb{Z}}\frac{1}{2T}\hat f(n)e^{i\lambda_nx}, \qquad \hat f(n) = \int_{-T}^{T}f(x)e^{-i\lambda_nx}\,dx. \tag{24.3}$$
Note that
$$\frac{1}{2T} = \frac{1}{2\pi}\cdot\frac{\pi}{T} = \frac{1}{2\pi}\underbrace{(\lambda_n - \lambda_{n-1})}_{=:\Delta\lambda_n}.$$
So, for (24.3) we get
$$f(x) = \sum_{n\in\mathbb{Z}}\frac{1}{2\pi}\underbrace{\hat f\left(\frac{T\lambda_n}{\pi}\right)}_{=:\sqrt{2\pi}\,\tilde f_n}e^{i\lambda_nx}\Delta\lambda_n = \sum_{n\in\mathbb{Z}}\frac{\tilde f_n}{\sqrt{2\pi}}\,e^{i\lambda_nx}\Delta\lambda_n,$$
where
$$\tilde f_n = \frac{1}{\sqrt{2\pi}}\int_{-T}^{T}f(x)e^{-i\lambda_nx}\,dx.$$
Let
$$F(\lambda_n) \equiv \lim_{T\to\infty}\tilde f_n = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)e^{-i\lambda_nx}\,dx.$$
Note that the above is not rigorous since $\lambda_n\to0$ for each fixed $n$ when $T\to\infty$ (there is a hidden $T$ in $\lambda_n$), but for any large value of $T$ there are infinitely many $n$-values still big enough to compensate for the large $T$; since we're looking at the overall limit, we press on.
Then,
$$f(x) = \sum_{n\in\mathbb{Z}}\tilde f_n\,\frac{1}{\sqrt{2\pi}}e^{i\lambda_nx}\Delta\lambda_n = \lim_{T\to\infty}\sum_{n\in\mathbb{Z}}\tilde f_n\,\frac{1}{\sqrt{2\pi}}e^{i\lambda_nx}\Delta\lambda_n,$$
which looks like
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}F(\lambda)e^{i\lambda x}\,d\lambda,$$
as in a Riemann sum. So we get
$$\underbrace{f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{i\lambda x}F(\lambda)\,d\lambda}_{\text{inverse Fourier transform}}, \qquad \underbrace{F(\lambda) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-i\lambda x}f(x)\,dx}_{\text{Fourier transform}} \tag{24.4}$$
which are continuous analogs of (24.1) or (24.2). Another notation for $F(\lambda)$ is $\hat f(\lambda)$.
This approach is by no means rigorous, but it prompts a very important concept, the concept of the Fourier transform.
Note that $\hat f(\lambda)$ represents how much of the function has frequency $\lambda$, and now, with the continuum of frequencies, we have $\hat f(\lambda)\to0$ as $\lambda\to\pm\infty$, i.e. the relative weight of harmonics should decrease as frequencies increase.
2. Fourier Transform
Definition 24.1. Let $f(x)$ be a function from $L^1(\mathbb{R})$. Then the function $\hat f(\lambda)$, $\lambda\in\mathbb{R}$, defined by the formula
$$\hat f(\lambda) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-i\lambda x}f(x)\,dx, \quad \lambda\in\mathbb{R},$$
is called the Fourier transform of $f(x)$, also denoted $\hat f = \mathcal{F}f$.
I claim that there is no physicist unaware of this concept!
Theorem 24.2. If $f(x)\in L^1(\mathbb{R})$ then $\hat f(\lambda)$ exists.

Proof. If $f(x)\in L^1(\mathbb{R})$ then $e^{-i\lambda x}f(x)\in L^1(\mathbb{R})$ since $|e^{-i\lambda x}f(x)| = |f(x)|$. Moreover,
$$|\hat f(\lambda)| = \left|\frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}f(x)\,dx\right| \le \frac{1}{\sqrt{2\pi}}\int|f(x)|\,dx < \infty. \ \text{QED}$$

Example 24.4. Let $f(x) = e^{-a|x|}$, $a>0$. Then
$$\hat f(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x - a|x|}\,dx = \frac{1}{\sqrt{2\pi}}\left(\int_{-\infty}^{0}e^{(a-i\lambda)x}\,dx + \int_{0}^{\infty}e^{-(a+i\lambda)x}\,dx\right)$$
$$= \frac{1}{\sqrt{2\pi}}\left(\frac{1}{a-i\lambda} + \frac{1}{a+i\lambda}\right) = \sqrt{\frac{2}{\pi}}\,\frac{a}{a^2+\lambda^2}.$$
Thus,
$$\widehat{e^{-a|x|}}(\lambda) = \sqrt{\frac{2}{\pi}}\,\frac{a}{a^2+\lambda^2}. \tag{24.5}$$
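Formula (24.5) is easy to confirm numerically. The sketch below (an editor's addition) approximates the transform by a trapezoid rule on a large truncated grid and compares it with the closed form:

```python
# Numerical check of (24.5): Fourier transform of e^{-a|x|} with the
# 1/sqrt(2 pi) convention used in these notes.
import numpy as np

a = 1.5
x = np.linspace(-40, 40, 400001)   # e^{-a|x|} is negligible beyond |x| = 40
f = np.exp(-a * np.abs(x))

def trap(y, grid):
    # simple trapezoid rule on a uniform grid
    dx = grid[1] - grid[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

for lam in [0.0, 0.7, 2.0]:
    num = trap(np.exp(-1j * lam * x) * f, x) / np.sqrt(2 * np.pi)
    exact = np.sqrt(2 / np.pi) * a / (a**2 + lam**2)
    assert abs(num - exact) < 1e-5
```
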
From (24.5) one has
$$e^{-a|x|} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{i\lambda x}\sqrt{\frac{2}{\pi}}\,\frac{a}{a^2+\lambda^2}\,d\lambda,$$
or,
$$\int_{-\infty}^{\infty}\frac{\cos\lambda x}{a^2+\lambda^2}\,d\lambda = \frac{\pi}{a}\,e^{-a|x|}. \tag{24.6}$$
A nice, valuable formula for free. (Do you remember seeing this before?)

Exercise 24.5. Let $f(x) = \begin{cases}1, & |x|\le a\\ 0, & |x|>a\end{cases}$. Compute $\hat f(\lambda)$ and then use (24.4) to derive a curious integral similar to (24.6).
Note that in Example 24.4 both the function and its Fourier transform are real. This is not always the case.

Proposition 24.6.
(i) $\widehat{\bar f}(\lambda) = \overline{\hat f(-\lambda)}$.
(ii) (symmetry) $\hat f(-\lambda) = \hat f(\lambda) \iff f(-x) = f(x)$.

Exercise 24.7. Prove Proposition 24.6.

Proposition 24.8. $\hat f$ is real if and only if $f(-x) = \overline{f(x)}$.

Remark 24.9. The propositions above mean that $f, \hat f$ are both real $\iff$ $f$ is real and even.

Exercise 24.10. Prove Proposition 24.8.
Example 24.11. Let $f\in L^1(\mathbb{R})$ and $f'\in L^1(\mathbb{R})$. Compute $\widehat{f'}(\lambda)$:
$$\widehat{f'}(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}f'(x)\,dx \overset{\text{by parts}}{=} \underbrace{\frac{1}{\sqrt{2\pi}}e^{-i\lambda x}f(x)\Big|_{-\infty}^{\infty}}_{=0} - \int f(x)(-i\lambda)e^{-i\lambda x}\frac{dx}{\sqrt{2\pi}} = i\lambda\underbrace{\frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}f(x)\,dx}_{\hat f(\lambda)},$$
i.e.,
$$\widehat{f'}(\lambda) = i\lambda\hat f(\lambda).$$
Exercise 24.12. Show that
$$\widehat{e^{-ax^2}}(\lambda) = \frac{e^{-\lambda^2/(4a)}}{\sqrt{2a}}.$$
LECTURE 25
Properties of the Fourier Transform. The Fourier Operator
1. Properties
Start with two obvious ones:
(i) $\widehat{(f+g)}(\lambda) = \hat f(\lambda) + \hat g(\lambda)$ (additivity)
(ii) $\widehat{(\alpha f)}(\lambda) = \alpha\hat f(\lambda)$ (homogeneity)
This means that the Fourier transform can be viewed as a linear operator.

Definition 25.1. The linear operator $\mathcal{F}$ defined by
$$\big(\mathcal{F}f\big)(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}f(x)\,dx$$
is called the Fourier operator. So, by definition, $\hat f = \mathcal{F}f$.
(iii) $\widehat{f^{(n)}}(\lambda) = (i\lambda)^n\hat f(\lambda)$
It is enough to prove it for $f'$ since you get the rest by induction. So this property was actually proven in Lecture 24, Example 24.11.
Note that the above means that the Fourier transform converts differentiation to multiplication by a power of $\lambda$.
Corollary 25.2. If $f\in C^n(\mathbb{R})$ and $f^{(n)}\in L^1$, then there exists $C>0$ such that
$$|\hat f(\lambda)| \le \frac{C}{|\lambda|^n}.$$
Proof. By Property (iii),
$$\hat f(\lambda) = \frac{1}{(i\lambda)^n}\widehat{f^{(n)}}(\lambda) \quad\Longrightarrow\quad |\hat f(\lambda)| = \frac{1}{|\lambda|^n}\left|\frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}f^{(n)}(x)\,dx\right| \le \frac{C}{|\lambda|^n},$$
where
$$C = \frac{1}{\sqrt{2\pi}}\int|f^{(n)}(x)|\,dx. \ \text{QED}$$
This implies that if $f$ is smooth, i.e. infinitely differentiable, then $\hat f$ decays faster than any power. Also, unless $f$ is rough, the high frequencies have less of a role, so we can cut them off.

Definition 25.3. Given functions $f, g$, the convolution $f*g$ of these functions is defined as
$$(f*g)(x) = \frac{1}{\sqrt{2\pi}}\int f(s)g(x-s)\,ds.$$
It's clear that $f*g = g*f$. Indeed,
$$(f*g)(x) = \frac{1}{\sqrt{2\pi}}\int f(s)g(x-s)\,ds = \begin{vmatrix}x-s=t\\ s=x-t\end{vmatrix} = \frac{1}{\sqrt{2\pi}}\int f(x-t)g(t)\,dt = (g*f)(x).$$
Let us now compute the Fourier transform of a convolution:
$$\widehat{f*g}(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}(f*g)(x)\,dx = \frac{1}{2\pi}\int e^{-i\lambda x}\left(\int f(s)g(x-s)\,ds\right)dx$$
$$= \frac{1}{2\pi}\iint e^{-i\lambda x}f(s)g(x-s)\,ds\,dx = \frac{1}{2\pi}\int\left(\int e^{-i\lambda x}g(x-s)\,dx\right)f(s)\,ds = \begin{vmatrix}x-s=t\\ x=t+s\end{vmatrix}$$
$$= \frac{1}{2\pi}\int\left(\int e^{-i\lambda t-i\lambda s}g(t)\,dt\right)f(s)\,ds = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda s}f(s)\underbrace{\left(\frac{1}{\sqrt{2\pi}}\int e^{-i\lambda t}g(t)\,dt\right)}_{\hat g(\lambda)}ds = \hat f(\lambda)\hat g(\lambda). \ \text{QED}$$
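The identity $\widehat{f*g} = \hat f\hat g$ (with the $1/\sqrt{2\pi}$ convention of Definition 25.3) can be seen numerically. A sketch by the editor, using $f = g = e^{-x^2}$ on a grid:

```python
# Discrete check of the convolution theorem with the 1/sqrt(2 pi) normalization.
import numpy as np

x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
f = np.exp(-x**2)

def ft(h, lam):
    # grid approximation of the Fourier transform at frequency lam
    return np.sum(np.exp(-1j * lam * x) * h) * dx / np.sqrt(2 * np.pi)

# (f*g)(x) = (1/sqrt(2 pi)) int f(s) g(x-s) ds, approximated on the same grid
conv = np.convolve(f, f, mode="same") * dx / np.sqrt(2 * np.pi)

for lam in [0.0, 1.0, 2.5]:
    assert abs(ft(conv, lam) - ft(f, lam) * ft(f, lam)) < 1e-6
```
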
2. The Fourier operator
We now consider the Fourier transform as an operator on a Hilbert space. The following theorem plays a central role in math physics.

Theorem 25.4. The Fourier operator $\mathcal{F}$ is unitary in $L^2(\mathbb{R})$, i.e. $\langle\mathcal{F}f, \mathcal{F}g\rangle = \langle f, g\rangle$.

Proof (at the physical level of rigor, i.e. we show it for a nicer class of functions). Let $f, g\in C_0^\infty(\mathbb{R})$ (introduced in Lecture 19). Let us make sure first that $\hat f, \hat g\in L^2(\mathbb{R})$.
Indeed,
$$\|\hat f\|^2 = \int_{(-1,1)}|\hat f(\lambda)|^2\,d\lambda + \int_{\mathbb{R}\setminus(-1,1)}|\hat f(\lambda)|^2\,d\lambda$$
$$\le \underbrace{\frac{1}{2\pi}\left(\int_{\mathbb{R}}|f(x)|\,dx\right)^2\int_{-1}^{1}d\lambda}_{\text{by Theorem 24.2}} + \underbrace{\frac{1}{2\pi}\left(\int_{\mathbb{R}}|f'(x)|\,dx\right)^2\int_{\mathbb{R}\setminus(-1,1)}\frac{d\lambda}{\lambda^2}}_{\text{by Corollary 25.2}}$$
$$= \frac{1}{\pi}\left(\int_{\mathbb{R}}|f(x)|\,dx\right)^2 + \frac{1}{\pi}\left(\int_{\mathbb{R}}|f'(x)|\,dx\right)^2 < \infty,$$
since $\int_{\mathbb{R}\setminus(-1,1)}\frac{d\lambda}{\lambda^2} = 2$.
So $\hat f, \hat g\in L^2(\mathbb{R})$ and the integral
$$\langle\hat f, \hat g\rangle = \int\hat f(\lambda)\overline{\hat g(\lambda)}\,d\lambda$$
is defined. Let us now compute it:
$$\langle\hat f, \hat g\rangle = \int\frac{1}{2\pi}\left(\int e^{-i\lambda x}f(x)\,dx\right)\left(\int e^{i\lambda s}\overline{g(s)}\,ds\right)d\lambda. \tag{25.1}$$
Since all the integrals here are absolutely convergent, we can rearrange the order of integration, and (25.1) reads
$$\langle\hat f, \hat g\rangle = \iint\underbrace{\left(\frac{1}{2\pi}\int e^{i\lambda(s-x)}\,d\lambda\right)}_{=\delta(s-x),\ \text{by Theorem 20.4}}f(x)\overline{g(s)}\,dx\,ds = \int f(x)\underbrace{\left(\int\delta(s-x)\overline{g(s)}\,ds\right)}_{=\overline{g(x)}\ \text{(follows from the definition of the }\delta\text{ function, Example 19.18)}}dx$$
$$= \int f(x)\overline{g(x)}\,dx = \langle f, g\rangle.$$
So we get that for all $f, g\in C_0^\infty(\mathbb{R})$, $\langle\hat f, \hat g\rangle = \langle f, g\rangle$, i.e.,
$$\langle\mathcal{F}f, \mathcal{F}g\rangle = \langle f, g\rangle. \tag{25.2}$$
QED
Observe now that the right side of (25.2) exists not only for $C_0^\infty$ functions but for any $f, g\in L^2(\mathbb{R})$. (Indeed, by the Cauchy inequality $|\langle f, g\rangle| \le \|f\|\,\|g\|$.) This suggests that the left hand side of (25.2) exists too for all $f, g\in L^2(\mathbb{R})$, which, in turn, means that the natural domain of the Fourier operator is not $C_0^\infty(\mathbb{R})$ or $L^1(\mathbb{R})$ but $L^2(\mathbb{R})$. Rigorous proofs of this can be found in advanced textbooks.
Remark 25.5. Even though the integral
$$\int e^{-i\lambda x}f(x)\,dx$$
in general does not converge absolutely for $f(x)\in L^2(\mathbb{R})$, the limit
$$\lim_{N\to\infty}\int_{-N}^{N}e^{-i\lambda x}f(x)\,dx \tag{25.3}$$
exists in a certain sense for any function from $L^2(\mathbb{R})$.
Theorem 25.4 implies some important facts.

Corollary 25.6. For all $f, g\in L^2(\mathbb{R})$,
1) $\|\mathcal{F}f\| = \|f\|$,
2) $\|\mathcal{F}f - \mathcal{F}g\| = \|f - g\|$,
3) $\langle f, \mathcal{F}^*\mathcal{F}g\rangle = \langle f, g\rangle \Rightarrow \mathcal{F}^*\mathcal{F} = I \Rightarrow \mathcal{F}^{-1} = \mathcal{F}^*$.

Whereas Property 2) above implies that the Fourier operator preserves distances, the remaining two are important enough to deserve their own statements.

Theorem 25.7 (Plancherel's Theorem).
$$\int|\hat f(\lambda)|^2\,d\lambda = \int|f(x)|^2\,dx.$$
Exercise 25.8. Prove Theorem 25.7.
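Plancherel's theorem is easy to illustrate numerically. A sketch by the editor, using $f(x) = e^{-|x|}$, whose transform was computed in (24.5); both sides of Theorem 25.7 equal $\int e^{-2|x|}dx = 1$:

```python
# Numerical illustration of Plancherel's theorem for f(x) = e^{-|x|}:
# fhat(lam) = sqrt(2/pi)/(1 + lam^2) by (24.5), and int |fhat|^2 = int |f|^2 = 1.
import numpy as np

lam = np.linspace(-200, 200, 400001)
fhat = np.sqrt(2 / np.pi) / (1 + lam**2)
lhs = np.sum(fhat**2) * (lam[1] - lam[0])   # approximates int |fhat|^2 d lam
assert abs(lhs - 1.0) < 1e-3                # matches int e^{-2|x|} dx = 1
```
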
Theorem 25.9. The Fourier operator is invertible and
$$\big(\mathcal{F}^{-1}f\big)(x) = \frac{1}{\sqrt{2\pi}}\int e^{i\lambda x}f(\lambda)\,d\lambda. \tag{25.4}$$
Exercise 25.10. Prove Theorem 25.9.
LECTURE 26
The Fourier Integral Theorem (Fourier Representation Theorem)
1. The Fourier Integral Theorem
Theorems 25.4, 25.9, and Remark 25.5 combined imply the following central theorem.

Theorem 26.1 (The Fourier Integral Theorem). Let $f\in L^2(\mathbb{R})$ be piecewise continuous and differentiable on $\mathbb{R}$. Then for every point of continuity $x$, the following representation holds:
$$f(x) = \frac{1}{\sqrt{2\pi}}\int e^{i\lambda x}\hat f(\lambda)\,d\lambda, \qquad \hat f(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}f(x)\,dx. \tag{26.1}$$
Proof. By Theorem 25.9, the Fourier operator $\mathcal{F}$ is invertible, and hence for all $f\in L^2(\mathbb{R})$,
$$\mathcal{F}^{-1}\mathcal{F}f = f. \tag{26.2}$$
For simplicity, let $f$ be smooth. Then (26.2), by (25.4), reads:
$$\forall x\in\mathbb{R}: \quad \frac{1}{\sqrt{2\pi}}\int e^{i\lambda x}\left(\frac{1}{\sqrt{2\pi}}\int e^{-i\lambda s}f(s)\,ds\right)d\lambda = f(x),$$
which is exactly (26.1). QED

Note that if $f$ is even or odd, we get only $[0,\infty)$ with $\sin$ or $\cos$. It's still called the Fourier transform. What about points of irregularity?
Remark 26.2. In view of Remark 25.5, we claim that (26.1) holds under the only condition $f\in L^2(\mathbb{R})$ if we understand the integrals in (26.1) as
$$\int_{-\infty}^{\infty} = \lim_{R\to\infty}\int_{-R}^{R}. \tag{26.3}$$
Note, however, that (26.1) holds not for all $x\in\mathbb{R}$ but, roughly speaking, for those $x$ at which our function $f(x)$ is defined/differentiable. Points of discontinuity of $f(x)$ are troublesome, as the next example shows. Indeed, Carleson showed that if $f$ is merely continuous, the representation still only converges almost everywhere, although it will be fine if $f$ is differentiable.

Example 26.3. Let $f(x) = \begin{cases}1, & 0\le x<1\\ 0, & \text{otherwise}\end{cases}$.

[figure: graph of $f$, equal to $1$ on $[0,1)$ and $0$ elsewhere]
$$\hat f(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}f(x)\,dx = \frac{1}{\sqrt{2\pi}}\int_{0}^{1}e^{-i\lambda x}\,dx = \frac{1}{\sqrt{2\pi}}\cdot\frac{e^{-i\lambda x}}{-i\lambda}\Big|_{0}^{1} = \frac{1}{\sqrt{2\pi}}\cdot\frac{1-e^{-i\lambda}}{i\lambda}$$
$$\Longrightarrow\quad \hat f(\lambda) = \frac{1}{\sqrt{2\pi}\,i\lambda}\big(1-e^{-i\lambda}\big).$$
By Theorem 26.1, for $x\ne0,1$,
$$f(x) = \frac{1}{\sqrt{2\pi}}\int\hat f(\lambda)e^{i\lambda x}\,d\lambda. \tag{26.4}$$
Let's see what happens, say, at $x=0$. The right side of (26.4) then becomes
$$\frac{1}{2\pi i}\int\frac{1-e^{-i\lambda}}{\lambda}\,d\lambda = \begin{vmatrix}\lambda\to-\lambda\end{vmatrix} = \frac{1}{2\pi i}\int\frac{e^{i\lambda}-1}{\lambda}\,d\lambda.$$
Note that we get the same expression when setting $x=1$.
This integral is kind of tricky since it is absolutely divergent. We have to use Complex Analysis to evaluate it. Actually, contour integrals, the residue theorem, etc. are the usual tools for computing Fourier integrals.

[figure: closed contour consisting of the segments $(-R,-\varepsilon)$ and $(\varepsilon,R)$, the small semicircle $C_\varepsilon$ around $0$, and the large semicircle $C_R^+$ in the upper half plane]

By the Residue Theorem (Thm 5.14),
$$\frac{1}{2\pi i}\oint\frac{e^{i\lambda}-1}{\lambda}\,d\lambda = \operatorname{Res}\left(\frac{e^{i\lambda}-1}{\lambda}, 0\right) = 0 \tag{26.5}$$
since $\lambda=0$ is a removable singularity, and hence
$$\operatorname{Res}\left(\frac{e^{i\lambda}-1}{\lambda}, 0\right) = 0.$$
On the other hand,
$$\oint = \int_{C_R^+} + \int_{C_\varepsilon} + \int_{-R}^{-\varepsilon} + \int_{\varepsilon}^{R} \equiv I_1 + I_2 + I_3 + I_4. \tag{26.6}$$
Evaluate each of these integrals separately.
$$I_1 = \frac{1}{2\pi i}\int_{C_R^+}\frac{e^{i\lambda}-1}{\lambda}\,d\lambda = \underbrace{\frac{1}{2\pi i}\int_{C_R^+}\frac{e^{i\lambda}}{\lambda}\,d\lambda}_{\to0,\ R\to\infty,\ \text{by Jordan's Lemma (Lemma 6.5)}} - \underbrace{\frac{1}{2\pi i}\int_{C_R^+}\frac{d\lambda}{\lambda}}_{=1/2\ \text{(see Example 3.1)}}$$
and so $\lim_{R\to\infty}I_1 = -1/2$.
$$|I_2| = \left|\frac{1}{2\pi i}\int_{C_\varepsilon}\frac{e^{i\lambda}-1}{\lambda}\,d\lambda\right| \le \frac{1}{2\pi}\int_{C_\varepsilon}\frac{|e^{i\lambda}-1|}{|\lambda|}\,|d\lambda|.$$
On $C_\varepsilon$: $\lambda = \varepsilon e^{i\theta}$ with $\theta$ running from $\pi$ to $0$, $d\lambda = i\varepsilon e^{i\theta}d\theta$, so $|d\lambda| = \varepsilon\,d\theta$ and $|\lambda| = \varepsilon$. Also,
$$e^{i\lambda}-1 = e^{i\varepsilon(\cos\theta+i\sin\theta)}-1 = e^{-\varepsilon\sin\theta+i\varepsilon\cos\theta}-1 = e^{-\varepsilon\sin\theta}\big(\cos(\varepsilon\cos\theta)+i\sin(\varepsilon\cos\theta)\big)-1$$
$$= \underbrace{\big(e^{-\varepsilon\sin\theta}\cos(\varepsilon\cos\theta)-1\big)}_{\to0,\ \varepsilon\to0} + \underbrace{ie^{-\varepsilon\sin\theta}\sin(\varepsilon\cos\theta)}_{\to0,\ \varepsilon\to0} = O(\varepsilon),$$
where the big-$O$ notation is defined by $f(x) = O(x)$, $x\to0$, if $\left|\frac{f(x)}{x}\right|\le C$ as $x\to0$. So,
$$|I_2| \le \frac{1}{2\pi}\int_{0}^{\pi}\frac{|O(\varepsilon)|}{\varepsilon}\,\varepsilon\,d\theta = \frac{|O(\varepsilon)|}{2}\to0, \quad \varepsilon\to0.$$
So $\lim_{\varepsilon\to0}I_2 = 0$.
Next,
$$I_3 + I_4 = \frac{1}{2\pi i}\int_{(-R,R)\setminus(-\varepsilon,\varepsilon)}\frac{e^{i\lambda}-1}{\lambda}\,d\lambda,$$
and it follows from (26.5) and (26.6) that $I_3 + I_4 = -I_1 - I_2$. Passing to the limits $R\to\infty$, $\varepsilon\to0$, we get
$$\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{e^{i\lambda}-1}{\lambda}\,d\lambda = \lim_{\substack{R\to\infty\\ \varepsilon\to0}}\big(I_3+I_4\big) = -\lim_{R\to\infty}I_1 - \lim_{\varepsilon\to0}I_2 = \frac{1}{2},$$
where the integral is understood as in (26.3).
Finally, the Fourier representation theorem for $x=0$ gives
$$\frac{1}{\sqrt{2\pi}}\int\hat f(\lambda)e^{i\lambda x}\,d\lambda\Big|_{x=0} = \frac{1}{2} \ne f(0) = 1.$$
So Theorem 26.1 fails for $x=0$.
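The value $1/2$ at the jump can also be seen without contour integration, by evaluating the truncated inversion integral directly. A quick numerical sketch by the editor:

```python
# The truncated inversion integral (1/(2 pi)) int_{-R}^{R} (1-e^{-i lam})/(i lam)
# d lam tends to 1/2, the mean value of the jump of f at x = 0, not to f(0) = 1.
import numpy as np

R = 400.0
lam = np.linspace(-R, R, 2000001)
lam[lam == 0.0] = 1e-12            # the singularity at 0 is removable; dodge 0/0
integrand = (1 - np.exp(-1j * lam)) / (1j * lam)
val = np.sum(integrand).real * (lam[1] - lam[0]) / (2 * np.pi)
assert abs(val - 0.5) < 0.01       # close to 1/2 = (f(0+) + f(0-))/2
```
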
As a matter of fact, the following holds.
Theorem 26.4. Let $f(x)$ be in $L^2(\mathbb{R})$ and piecewise continuously differentiable. Then for every point of continuity, (26.1) holds. If $x = x_0$ is a point of discontinuity, then
$$\frac{f(x_0+0)+f(x_0-0)}{2} = \frac{1}{\sqrt{2\pi}}\int e^{i\lambda x_0}\hat f(\lambda)\,d\lambda, \qquad \hat f(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda x}f(x)\,dx.$$
(No proof.)
2. Generalization of the Fourier Transform (by a limiting procedure)
As we already computed in Example 24.4, for $a>0$,
$$\widehat{e^{-a|x|}}(\lambda) = \sqrt{\frac{2}{\pi}}\,\frac{a}{\lambda^2+a^2}. \tag{26.7}$$
Taking the limit as $a\to0$ in (26.7), we get
$$\lim_{a\to0}\sqrt{\frac{2}{\pi}}\,\frac{a}{\lambda^2+a^2} = \begin{cases}0, & \lambda\ne0\\ \infty, & \lambda=0\end{cases}.$$
Looks like the $\delta$ function? Yes, indeed!

[figure: the family $\sqrt{2/\pi}\,a/(\lambda^2+a^2)$ peaking ever more sharply as $a\to0$]

Exercise 26.5. Show that $\left\{\frac{1}{\pi}\,\frac{a}{a^2+\lambda^2}\right\}$ is a $\delta$ family as $a\to0$. (Hint: introduce $n = 1/a$ and check all three conditions in Definition 19.5.)

Thus, as $a\to0$, the left hand side of (26.7) becomes $\hat 1(\lambda)$¹ and the right hand side of (26.7) becomes
$$\hat 1(\lambda) = \sqrt{2\pi}\,\delta(\lambda).$$
Note also that $\hat\delta(\lambda) = \frac{1}{\sqrt{2\pi}}$, but this is a generalization of the Fourier transform since $\delta\notin L^2(\mathbb{R})$ (recall that $\delta^2$ is undefined).
Exercise 26.6. Show that:
$$\widehat{\tan^{-1}x}(\lambda) = -i\sqrt{\frac{\pi}{2}}\,\frac{e^{-|\lambda|}}{\lambda} + \frac{\pi^{3/2}}{\sqrt{2}}\,\delta(\lambda).$$
(Hint: $(\tan^{-1}x)' = \frac{1}{1+x^2}$.)

¹ where we can set as a definition: $\hat 1(\lambda) \overset{\text{def}}{=} \lim_{a\to0}\widehat{e^{-a|x|}}(\lambda)$.
LECTURE 27
Some Applications of the Fourier Transform
1. Fourier Representation
Let us first discuss the main reason why the Fourier transform is so important. As we know (by property (iii) of Lecture 25),
$$\big(\mathcal{F}f^{(n)}\big)(\lambda) = (i\lambda)^n(\mathcal{F}f)(\lambda), \quad n = 1, 2, \dots \tag{27.1}$$
In particular ($n=1$),
$$(\mathcal{F}f')(\lambda) = i\lambda(\mathcal{F}f)(\lambda) \quad\text{or}\quad \left(\mathcal{F}\frac{d}{dx}f\right)(\lambda) = i\lambda(\mathcal{F}f)(\lambda)$$
$$\Longrightarrow\quad \left(\mathcal{F}\,\frac{1}{i}\frac{d}{dx}f\right)(\lambda) = \lambda(\mathcal{F}f)(\lambda). \tag{27.2}$$
Definition 27.1. Given $f$, $\hat f = \mathcal{F}f$ is called the Fourier representation of $f$ (physical terminology).

Next, by Theorem 26.1 (equation (26.2)),
$$\mathcal{F}^{-1}\mathcal{F} = I \quad\Longrightarrow\quad \mathcal{F}^{-1}\mathcal{F}f = f,$$
and (27.2) transforms into
$$\left(\mathcal{F}\,\frac{1}{i}\frac{d}{dx}\,\mathcal{F}^{-1}\mathcal{F}f\right)(\lambda) = \lambda(\mathcal{F}f)(\lambda),$$
or in the operator form
$$\mathcal{F}\left(\frac{1}{i}\frac{d}{dx}\right)\mathcal{F}^{-1} = \lambda. \tag{27.3}$$
This relation is very profound, and all applications of the Fourier theory owe to just this formula.
Let's try to understand what (27.3) means.
Recollect our old business in Linear Algebra. According to Remark 11.6, the operator $\frac{1}{i}\frac{d}{dx}$ of differentiation and the operator of multiplication by $\lambda$ are similar. Multiplying (27.3) by $\mathcal{F}^{-1}$ on the left and $\mathcal{F}$ on the right yields
$$\underbrace{\mathcal{F}^{-1}\mathcal{F}}_{=I}\left(\frac{1}{i}\frac{d}{dx}\right)\underbrace{\mathcal{F}^{-1}\mathcal{F}}_{=I} = \mathcal{F}^{-1}\lambda\mathcal{F} \quad\Longrightarrow\quad \frac{1}{i}\frac{d}{dx} = \mathcal{F}^{-1}\lambda\mathcal{F}, \tag{27.4}$$
which reads: the operator of differentiation $\frac{1}{i}\frac{d}{dx}$ in the Fourier representation is equal to the operator of multiplication by $\lambda$.
In Quantum Mechanics, it means that the coordinate and momentum representations are similar.
So, once again, (27.3), (27.4) mean that in the Fourier representation the operator of differentiation becomes the operator of multiplication.
Definition 27.2. The object
$$A = \sum_{k=0}^{n}a_k(x)\frac{d^k}{dx^k}$$
is called a differential operator of order $n$.

Theorem 27.3. Let $A$ be a differential operator with constant coefficients, i.e.
$$A = \sum_{k=0}^{n}a_k\frac{d^k}{dx^k};$$
then
$$A = \mathcal{F}^{-1}p(\lambda)\mathcal{F}, \quad\text{or}\quad \mathcal{F}A = p(\lambda)\mathcal{F}, \tag{27.5}$$
where
$$p(\lambda) = \sum_{k=0}^{n}a_k(i\lambda)^k.$$
2. Applications of the Fourier Transform to Second Order Differential Equations
Let us now see how Theorem 27.3 works.

Example 27.4. Find an $L^2$ solution to
$$\ddot u + 2\alpha\dot u + \omega_0^2u = f(t), \tag{27.6}$$
where $\alpha, \omega_0$ are positive constants with $\omega_0 > \alpha$ and $f\in L^2(\mathbb{R})$.¹

Solution. Pass in (27.6) to the Fourier representation. By Theorem 27.3 (equation (27.5)) with $A = \frac{d^2}{dt^2} + 2\alpha\frac{d}{dt} + \omega_0^2$, we get
$$\mathcal{F}A = \big({-\lambda^2} + 2i\alpha\lambda + \omega_0^2\big)\mathcal{F},$$
and (27.6) transforms into
$$\big({-\lambda^2} + 2i\alpha\lambda + \omega_0^2\big)\hat u(\lambda) = \hat f(\lambda), \tag{27.7}$$
where, as usual, $\hat u = \mathcal{F}u$, $\hat f = \mathcal{F}f$.

¹ Such equations occur in solving differential equations coming from Newton's Second Law of motion (in particular, harmonic oscillators with damping), and the variable is usually temporal. So here we switched $x$ to $t$ and use dots for derivatives.
It follows from (27.7) that
$$\hat u(\lambda) = \frac{\hat f(\lambda)}{-\lambda^2 + 2i\alpha\lambda + \omega_0^2}.$$
By Theorem 26.1,
$$u(t) = \big(\mathcal{F}^{-1}\hat u\big)(t) = \frac{1}{\sqrt{2\pi}}\int e^{i\lambda t}\frac{\hat f(\lambda)}{(\omega_0^2-\lambda^2) + 2i\alpha\lambda}\,d\lambda.$$
So the $L^2$ solution to (27.6) can be obtained by the following formula:
$$u(t) = \frac{1}{\sqrt{2\pi}}\int e^{i\lambda t}\frac{\hat f(\lambda)}{(\omega_0^2-\lambda^2) + 2i\alpha\lambda}\,d\lambda, \quad\text{where}\quad \hat f(\lambda) = \frac{1}{\sqrt{2\pi}}\int e^{-i\lambda t}f(t)\,dt. \tag{27.8}$$

Remark 27.5. We obtained (27.8) under the assumption $f(t)\in L^2(\mathbb{R})$. Actually, (27.8) remains true as long as the integrals in (27.8) are defined somehow (e.g. in the weak sense). For example, one can handle cases like $f(t) = \delta(t)$, $f(t) = \sin\omega t$, etc. All these functions are not in $L^2(\mathbb{R})$.
Remark 27.6. For the denominator in (27.8) one has
$$\omega_0^2 - \lambda^2 + 2i\alpha\lambda = -(\lambda-\lambda_1)(\lambda-\lambda_2), \tag{27.9}$$
where $\lambda_{1,2} = \pm\sqrt{\omega_0^2-\alpha^2} + i\alpha$. Note $\operatorname{Im}\lambda_{1,2} > 0$.

Exercise 27.7. Solve (27.6) with $f(t) = \begin{cases}1, & 0\le t\le1\\ 0, & \text{otherwise}\end{cases}$. (Hint: use Example 26.3 and equation (27.8).)
Remark 27.8. Let's show another way to handle (27.8). Putting equations (27.8) and (27.9) together, we have
$$u(t) = -\frac{1}{2\pi}\int\frac{e^{i\lambda t}}{(\lambda-\lambda_1)(\lambda-\lambda_2)}\left(\int e^{-i\lambda s}f(s)\,ds\right)d\lambda = \frac{1}{2\pi}\int\underbrace{\left(-\int\frac{e^{i\lambda(t-s)}}{(\lambda-\lambda_1)(\lambda-\lambda_2)}\,d\lambda\right)}_{=:I(t-s)}f(s)\,ds. \tag{27.10}$$
The integral $I$ can be computed by Theorem 6.6, since the function $\frac{1}{(\lambda-\lambda_1)(\lambda-\lambda_2)}$ is subject to Jordan's lemma (Lemma 6.5).
Indeed, if $t-s>0$, then by Theorem 6.6
$$\int\frac{e^{i\lambda(t-s)}}{(\lambda-\lambda_1)(\lambda-\lambda_2)}\,d\lambda = 2\pi i\left(\frac{e^{i\lambda_1(t-s)}}{\lambda_1-\lambda_2} + \frac{e^{i\lambda_2(t-s)}}{\lambda_2-\lambda_1}\right).$$
If $t-s<0$, then the function $\frac{1}{(\lambda-\lambda_1)(\lambda-\lambda_2)}$ is subject to Jordan's lemma in the lower half plane. But $\frac{1}{(\lambda-\lambda_1)(\lambda-\lambda_2)}$ has no poles in $\mathbb{C}_-$, so
$$\int\frac{e^{i\lambda(t-s)}}{(\lambda-\lambda_1)(\lambda-\lambda_2)}\,d\lambda = 0.$$
So $I(t-s) = 0$ when $t-s<0$, and (27.10) transforms into
$$u(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}I(t-s)f(s)\,ds = \frac{1}{2\pi}\int_{-\infty}^{t}I(t-s)f(s)\,ds$$
$$= \frac{i}{\lambda_1-\lambda_2}\int_{-\infty}^{t}\left(e^{i\lambda_2(t-s)} - e^{i\lambda_1(t-s)}\right)f(s)\,ds,$$
or in the real form
$$u(t) = \frac{1}{\sqrt{\omega_0^2-\alpha^2}}\int_{-\infty}^{t}\sin\!\Big(\sqrt{\omega_0^2-\alpha^2}\,(t-s)\Big)\,e^{-\alpha(t-s)}f(s)\,ds. \tag{27.11}$$
Looks like a nice formula!? Not particularly, since we can no longer use Complex Analysis to evaluate this integral.
Remark 27.9. Formula (27.11) implies the so-called principle of causality, one of the basic principles of physics. It says that an effect cannot happen before its cause has occurred. Indeed, since the integration in (27.11) is done over $(-\infty, t)$, computing the solution $u(t)$ of equation (27.6) requires knowledge of the force $f(s)$ on $(-\infty, t)$ and doesn't need any information on $f(s)$ for $s>t$. It's the principle of causality for physical processes described by ordinary differential equations.
3. Applications of the Fourier Transform to Higher Order Differential Equations
Example 27.10 (A beam on an elastic foundation). Consider an infinite beam with a force $f(x)$, considered constant over time (such as gravity or a load). We measure $y(x)$, the deflection or displacement of the beam.

[figure: an infinite beam along the $x$-axis, with load $f(x)$ and deflection $y(x)$]

The deflection satisfies
$$EIy^{IV} + Cy = f(x), \qquad E, I, C \text{ constants},$$
or
$$y^{IV} + \frac{C}{EI}y = \frac{1}{EI}f(x). \tag{27.12}$$
Let us solve (27.12). Note that this equation can be approached by the method of variation of parameters, but that is more complicated than the Fourier method we'll apply here. Consider $y\in L^2(\mathbb{R})$; then no boundary conditions are needed (even though we have a fourth order linear equation), and $y', y'', y''', y^{IV}\in L^2(\mathbb{R})$ will be automatically satisfied.
Apply the Fourier transform to (27.12). By (27.5) we get
$$(i\lambda)^4\hat y + \frac{C}{EI}\hat y = \frac{1}{EI}\hat f.$$
It follows from here that
$$\hat y(\lambda) = \frac{1}{EI}\,\frac{\hat f(\lambda)}{\lambda^4+\gamma^4}, \quad\text{where}\quad \gamma^4 \equiv \frac{C}{EI}.$$
By the Fourier Integral Theorem,
$$y(x) = \frac{1}{\sqrt{2\pi}}\,\frac{1}{EI}\int e^{i\lambda x}\frac{\hat f(\lambda)}{\lambda^4+\gamma^4}\,d\lambda, \tag{27.13}$$
which is the solution to (27.12) for an arbitrary $f(x)$.
Consider the specific case $f(x) = P\delta(x)$ ($P$ is a constant). In other words, an external force is applied at just one point $x=0$, that is, an impulse force. But then $\hat f(\lambda) = \frac{P}{\sqrt{2\pi}}$, and (27.13) transforms into
$$y(x) = \frac{P}{2\pi EI}\int\frac{e^{i\lambda x}}{\lambda^4+\gamma^4}\,d\lambda.$$
If $x<0$, remark that we can write $e^{i(-\lambda)(-x)}$; then, by a change of variables, since $\frac{1}{\lambda^4+\gamma^4}$ is even, we get the same integral but with $-x$. So we assume $x>0$ and close the contour in the upper half plane.
First we introduce a change of variable: $\lambda = \gamma\mu$, $d\lambda = \gamma\,d\mu$. Then
$$y(x) = \frac{1}{2\pi}\,\frac{P}{EI\gamma^4}\,\gamma\int\frac{e^{i\gamma\mu x}}{\mu^4+1}\,d\mu = \frac{P\gamma}{2\pi C}\int\frac{e^{i\gamma\mu x}}{\mu^4+1}\,d\mu.$$
The function $\frac{1}{\mu^4+1}$ is subject to Jordan's lemma (Lemma 6.5), and then by Theorem 6.6
$$y(x) = \frac{iP\gamma}{C}\sum_{k=1}^{2}\operatorname{Res}\left(\frac{e^{i\gamma x\mu}}{\mu^4+1}, \mu_k\right), \quad x>0,$$
where $\mu_1, \mu_2$ are the zeros of $\mu^4+1 = 0$ in $\mathbb{C}_+$, i.e.
$$\mu_1 = e^{i\pi/4} = \frac{1}{\sqrt{2}}(1+i), \qquad \mu_2 = e^{i3\pi/4} = \frac{1}{\sqrt{2}}(-1+i).$$
Since $\mu_1, \mu_2$ are simple poles of $\frac{1}{\mu^4+1}$, by Corollary 5.11 we get
$$y(x) = \frac{iP\gamma}{C}\left(\frac{e^{i\gamma\mu_1x}}{4\mu_1^3} + \frac{e^{i\gamma\mu_2x}}{4\mu_2^3}\right) = -\frac{iP\gamma}{4C}\left(\mu_1e^{i\gamma\mu_1x} + \mu_2e^{i\gamma\mu_2x}\right) \quad (\text{since }\mu_k^4 = -1 \text{ gives }\mu_k^{-3} = -\mu_k)$$
$$= -\frac{P}{4C}\frac{d}{dx}\left(e^{i\gamma\mu_1x} + e^{i\gamma\mu_2x}\right) = -\frac{P}{4C}\frac{d}{dx}\left(e^{\frac{i\gamma x}{\sqrt{2}}(1+i)} + e^{\frac{i\gamma x}{\sqrt{2}}(-1+i)}\right)$$
$$= -\frac{P}{4C}\frac{d}{dx}\left(e^{-\frac{\gamma x}{\sqrt{2}}}\Big(e^{\frac{i\gamma x}{\sqrt{2}}} + e^{-\frac{i\gamma x}{\sqrt{2}}}\Big)\right) = -\frac{P}{2C}\frac{d}{dx}\left(e^{-\frac{\gamma x}{\sqrt{2}}}\cos\frac{\gamma x}{\sqrt{2}}\right)$$
$$= \frac{P\gamma}{2\sqrt{2}\,C}\,e^{-\frac{\gamma x}{\sqrt{2}}}\left(\cos\frac{\gamma x}{\sqrt{2}} + \sin\frac{\gamma x}{\sqrt{2}}\right), \quad x>0,$$
and for $x<0$, by the symmetry noted above, $y(x) = y(-x)$. So, putting it all together using the absolute value, we have
$$y(x) = \frac{P\gamma}{2\sqrt{2}\,C}\,e^{-\frac{\gamma|x|}{\sqrt{2}}}\left(\cos\frac{\gamma x}{\sqrt{2}} + \sin\frac{\gamma|x|}{\sqrt{2}}\right), \qquad \gamma = \sqrt[4]{\frac{C}{EI}},$$
an even function. We can also write explicitly
$$y(x) = \frac{P\,e^{-\sqrt[4]{\frac{C}{4EI}}\,|x|}}{2\sqrt[4]{4EIC^3}}\left(\cos\sqrt[4]{\frac{C}{4EI}}\,x + \sin\sqrt[4]{\frac{C}{4EI}}\,|x|\right).$$
Part 4
Partial Differential Equations
LECTURE 28
Wave Equations
1. The Stretched String
Here we are going to discuss a simple problem that historically led to the wave equation.
Consider an ideal stretched string, finite or infinite, lying along the $x$-axis. Note that this could also be a water surface. "Ideal" means it is a line.
Let us apply to this string a vertical force $F(x,t)$ which makes our string deform from its free position (as in the figure below). Assume that the force of gravity can be neglected.

[figure: the string displaced by a vertical force $F(x,t)$ near a small element $(x, x+dx)$]

Zoom in on an arbitrarily small fragment of the string, as shown below.

[figure: the string element between $x$ and $x+dx$, with tension $T$ pulling at angles $\alpha$ and $\beta$ at its two ends and the load $F\,dx$]

The wave equation is local, so we'll worry about a small portion of the string. Here $u(x)$ is the displacement (deflection) of the string at the point $x$, $T$ is the tension, and $dx$ is small enough that $F\,dx$ is constant over $dx$.
Then, by Newton's second law,¹
$$-T\sin\alpha + T\sin\beta + F\,dx = \rho(x)\,dx\,\frac{\partial^2u}{\partial t^2}, \tag{28.1}$$
where $T$ is the tension, $F$ is the external force per unit length at the point $x$, $\rho(x)$ is the density of the string (again, we assume $dx$ small enough that $\rho(x)$ stays constant along it), and $\frac{\partial^2u}{\partial t^2}$ is the acceleration of the fragment in the transverse direction.
We assume that the angles $\alpha, \beta$ are small enough, i.e.
$$\sin\alpha \approx \tan\alpha, \quad \cos\alpha \approx 1, \qquad \sin\beta \approx \tan\beta, \quad \cos\beta \approx 1.$$
Hence,
$$\sin\alpha \approx \tan\alpha = \frac{\partial u}{\partial x}\Big|_{x}, \qquad \sin\beta \approx \frac{\partial u}{\partial x}\Big|_{x+dx}, \qquad \sin\beta - \sin\alpha = \frac{\partial u(x+dx)}{\partial x} - \frac{\partial u(x)}{\partial x}.$$
It follows from (28.1) that
$$T\underbrace{\frac{\sin\beta - \sin\alpha}{dx}}_{\to\,\frac{\partial^2u}{\partial x^2}} + F = \rho\frac{\partial^2u}{\partial t^2},$$
and we arrive at
$$\rho(x)\frac{\partial^2u}{\partial t^2} = T\frac{\partial^2u}{\partial x^2} + F(x,t). \tag{28.2}$$
If there is no external force $F(x,t)$ and if $\rho(x) = \rho = \text{const}$, then (28.2) transforms into the homogeneous wave equation:
$$\frac{1}{c^2}\frac{\partial^2u}{\partial t^2} = \frac{\partial^2u}{\partial x^2}, \qquad c^2 \equiv \frac{T}{\rho}. \tag{28.3}$$
The general solution to this equation can be easily found. Indeed, if $f(x)$ is an arbitrary twice differentiable function, then $u(x,t) = f(x-ct)$ is a solution to (28.3), since if we set $z = x-ct$, then
$$\frac{\partial u}{\partial t} = \frac{\partial f}{\partial z}\frac{\partial z}{\partial t} = -cf'(z) \quad\Longrightarrow\quad \frac{\partial^2u}{\partial t^2} = -c\frac{\partial}{\partial t}f'(z) = -cf''(z)\frac{\partial z}{\partial t} = (-c)^2f''(z) = c^2f''(z).$$
Similarly,
$$\frac{\partial^2u}{\partial x^2} = f''(z), \quad z = x-ct,$$
and hence (28.3) holds.
It can be proven that every solution $u(x,t)$ of (28.3) is
$$u(x,t) = f(x-ct) + g(x+ct) \tag{28.4}$$
with some twice differentiable functions $f, g$.
Note that $f(x-ct)$ plays the role of a wave traveling in the positive direction of the $x$-axis, and $g(x+ct)$ of a wave traveling in the negative direction.
However, equation (28.3), also called the free wave equation, is not of particular interest without some physical restrictions on its solution.

¹ i.e. the sum of the forces equals mass times acceleration, here projected on the transverse axis.
interest without some physical restrictions on its solution.
Example 28.1. A nite string is xed at points x = 0 , x = .
0
The natural restrictions then are
u(0, t) = 0 = u(, t).
Such conditions are called boundary conditions.
Example 28.2. The string is innite but the initial shape of the string and the
distribution of initial velocities are given:
u(x, 0) = (x) ,
u
t
t=0
= (x).
Such conditions are called initial conditions.
One may well have a combination of the above conditions (especially if you have an
innite string). Remember that you need 4 conditions so we may have both boundary
and initial conditions. All this makes formula (28.4) of a little interest.
2. The Method of Eigenfunction Expansions (Spectral Method)
Consider equation (28.3) on a finite interval $(0,\pi)$ with Dirichlet boundary conditions and some initial conditions. For simplicity set $c = 1$. So we have
$$\begin{cases}u_{tt} - u_{xx} = 0\\ \text{(BC)}\ u(0,t) = u(\pi,t) = 0\\ \text{(IC)}\ u(x,0) = \varphi(x),\ u_t(x,0) = \psi(x)\end{cases} \tag{28.5a}$$
which is often referred to as an initial value Dirichlet problem for the free wave equation in dimension one, or the boundary initial value (BIV) problem for the homogeneous wave equation.
For solving our problem at hand there are two steps, the first one being called spectral analysis.
It is reasonable to assume that the solution $u(x,t)$ of (28.5a) belongs to $L^2(0,\pi)$ (as a function of $x$), and (28.5a) can then be viewed as
$$u_{tt} + Au = 0, \tag{28.5b}$$
where $A = -\frac{d^2}{dx^2}$ is the operator of kinetic energy (Schrödinger operator), with boundary conditions $u(0) = u(\pi) = 0$. For now we will ignore $t$. Let us perform the spectral analysis of the operator $A$ on $L^2(0,\pi)$:
$$\begin{cases}Ay = \lambda y\\ y(0) = 0 = y(\pi)\end{cases}.$$
One can easily see that the general solution is
$$y(x) = ae^{i\sqrt{\lambda}x} + be^{-i\sqrt{\lambda}x}.$$
Further,
$$\begin{cases}y(0) = a+b = 0\\ y(\pi) = ae^{i\sqrt{\lambda}\pi} + be^{-i\sqrt{\lambda}\pi} = 0\end{cases} \quad\Longrightarrow\quad \begin{cases}b = -a\\ \sin\sqrt{\lambda}\pi = 0\end{cases} \quad\Longrightarrow\quad \lambda = \lambda_n = n^2,\ n\in\mathbb{N},$$
and $y_n(x) = C_n\sin nx$ are eigenfunctions.
Normalize $y_n(x)$: $\|y_n\| = 1 \iff \|y_n\|^2 = 1$, so
$$1 = \int_0^\pi|y_n(x)|^2\,dx = C_n^2\int_0^\pi\sin^2nx\,dx = \frac{C_n^2}{2}\int_0^\pi(1-\cos2nx)\,dx = \frac{C_n^2\pi}{2} \quad\Longrightarrow\quad C_n = \sqrt{\frac{2}{\pi}}.$$
And finally, the result of the spectral analysis of $A$ is
$$\lambda_n = n^2, \qquad e_n(x) = \sqrt{\frac{2}{\pi}}\sin nx, \quad n\in\mathbb{N}.$$
The idea at this point is to note that since $A$ is self-adjoint and its spectrum purely discrete, $\sigma(A) = \{n^2\}$, the resulting eigenfunctions $e_n(x)$ form a basis, and so the second step is to consider the solutions in this basis. I.e., by Theorem 18.3, we can represent any solution of (28.5a) in the form
$$u(x,t) = \sum_{n\ge1}u_n(t)e_n(x), \quad\text{where}\quad u_n(t) = \langle u, e_n\rangle \tag{28.6}$$
($t$ appears here as a parameter).
Differentiate (28.6) in $t$ twice:
$$u_{tt} = \sum_{n\ge1}\underbrace{\frac{\partial^2}{\partial t^2}u_n(t)}_{=\ddot u_n(t)}e_n(x) \tag{28.7}$$
and plug (28.6), (28.7) into (28.5b):
$$0 = \sum_{n\ge1}\ddot u_n(t)e_n + A\sum_{n\ge1}u_n(t)e_n = \sum_{n\ge1}\ddot u_n(t)e_n + \sum_{n\ge1}u_n(t)\lambda_ne_n = \sum_{n\ge1}\big(\ddot u_n(t) + \lambda_nu_n(t)\big)e_n.$$
So we get
$$\sum_{n\ge1}\big(\ddot u_n(t) + \lambda_nu_n(t)\big)e_n = 0 \quad\Longrightarrow\quad \ddot u_n(t) + \lambda_nu_n(t) = 0, \quad n\in\mathbb{N}. \tag{28.8}$$
So, our original partial differential equation (28.5a) broke into the infinite chain of linear ordinary differential equations (28.8). This is the crux of this approach: reduce the partial differential equation (PDE) to infinitely many ordinary differential equations (ODE) which are hopefully simple enough to solve. Here, indeed, each of these equations can be trivially solved:
$$u_n(t) = a_ne^{i\sqrt{\lambda_n}t} + b_ne^{-i\sqrt{\lambda_n}t},$$
and recollecting that $\lambda_n = n^2$ we get
$$u_n(t) = a_ne^{int} + b_ne^{-int}, \quad n\in\mathbb{N}. \tag{28.9}$$
Now we need to find $a_n, b_n$. They should be found from the initial conditions:
$$u(x,0) = \sum_{n\ge1}u_n(t)e_n(x)\Big|_{t=0} = \sum_{n\ge1}u_n(0)e_n(x) = \varphi(x),$$
$$u_t(x,0) = \sum_{n\ge1}\dot u_n(t)e_n(x)\Big|_{t=0} = \sum_{n\ge1}\dot u_n(0)e_n(x) = \psi(x). \tag{28.10}$$
But by Theorem 18.3,
$$\varphi(x) = \sum_{n\ge1}\varphi_ne_n(x),\ \varphi_n = \langle\varphi, e_n\rangle; \qquad \psi(x) = \sum_{n\ge1}\psi_ne_n(x),\ \psi_n = \langle\psi, e_n\rangle, \tag{28.11}$$
and hence, comparing (28.10) and (28.11) yields
$$u_n(0) = \varphi_n, \qquad \dot u_n(0) = \psi_n.$$
So we get
$$u_n(t) = a_ne^{int} + b_ne^{-int}, \qquad \dot u_n(t) = ina_ne^{int} - inb_ne^{-int}$$
$$\Longrightarrow\quad \begin{cases}u_n(0) = a_n + b_n = \varphi_n\\ \dot u_n(0) = ina_n - inb_n = in(a_n - b_n) = \psi_n\end{cases} \quad\text{or}\quad \begin{cases}a_n + b_n = \varphi_n\\ a_n - b_n = \dfrac{\psi_n}{in}\end{cases}$$
$$\Longrightarrow\quad a_n = \frac{1}{2}\left(\varphi_n + \frac{\psi_n}{in}\right), \qquad b_n = \varphi_n - a_n = \frac{1}{2}\left(\varphi_n - \frac{\psi_n}{in}\right).$$
So (with real $\varphi_n, \psi_n$)
$$u_n(t) = \frac{1}{2}\left[\left(\varphi_n + \frac{\psi_n}{in}\right)e^{int} + \left(\varphi_n - \frac{\psi_n}{in}\right)e^{-int}\right] = \operatorname{Re}\left[\left(\varphi_n + \frac{\psi_n}{in}\right)e^{int}\right]$$
$$= \operatorname{Re}\left[\left(\varphi_n - i\frac{\psi_n}{n}\right)(\cos nt + i\sin nt)\right] = \varphi_n\cos nt + \frac{\psi_n}{n}\sin nt.$$
Now we are able to present the solution to (28.5a):
$$u(x,t) = \sum_{n\ge1}u_n(t)e_n(x), \tag{28.12}$$
where
$$u_n(t) = \varphi_n\cos nt + \frac{\psi_n}{n}\sin nt, \qquad \varphi_n = \int_0^\pi\varphi(x)e_n(x)\,dx, \qquad \psi_n = \int_0^\pi\psi(x)e_n(x)\,dx,$$
$$e_n(x) = \sqrt{\frac{2}{\pi}}\sin nx, \quad n\in\mathbb{N}.$$
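Formula (28.12) can be exercised numerically. A sketch by the editor with the illustrative data $\varphi(x) = x(\pi-x)$, $\psi = 0$: at $t=0$ the series must reproduce $\varphi$, and at $x=\pi/2$ the exact value is $(\pi/2)^2$:

```python
# Check (28.12) at t = 0 for phi(x) = x(pi - x), psi = 0 on (0, pi):
# the eigenfunction expansion must reproduce phi.
import numpy as np

x = np.linspace(0, np.pi, 20001)
dx = x[1] - x[0]
phi = x * (np.pi - x)

def e(n):
    return np.sqrt(2 / np.pi) * np.sin(n * x)

u0 = np.zeros_like(x)
for n in range(1, 200):
    phi_n = np.sum(phi * e(n)) * dx     # phi_n = <phi, e_n> by quadrature
    u0 += phi_n * e(n)                  # u(x, 0) with psi = 0

assert np.max(np.abs(u0 - phi)) < 1e-3
assert abs(u0[10000] - (np.pi / 2) ** 2) < 1e-5   # value at x = pi/2
```
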
Exercise 28.3. Adapt this method to the nonhomogeneous wave equation (i.e. derive formulas similar to (28.12)):
$$\begin{cases}u_{tt} - u_{xx} = f(x,t)\\ u(0,t) = u(\pi,t) = 0\\ u(x,0) = \varphi(x),\ u_t(x,0) = \psi(x)\end{cases}.$$
Note that there are other wave equations, such as the Klein–Gordon equation, where there is a potential $q$ and which has applications in plasma physics:
$$u_{tt} = u_{xx} + qu.$$
LECTURE 29
Continuous Spectrum of a Linear Operator
A continuous spectrum happens only in infinite dimensional spaces. Relevant examples in physics include crystals, with bands of continuous spectrum, and particles in quantum mechanics, where we have bound states corresponding to eigenvalues (the particle gets stuck there a while, on the order of nanoseconds) and a continuous spectrum corresponding to states in which the particle doesn't stay any amount of time.
1. Basic Definitions
The concept of a continuous spectrum is fundamental in mathematical physics and, alas, not easy at all. The way it's usually defined requires advanced analysis. We'll try to detour these difficulties somehow.[1]
We recall that a number $\lambda \in \mathbb{C}$ is an eigenvalue of a linear operator $A$ in a Hilbert space $H$ (often actually written in a gothic letter: $\mathfrak{H}$) if the equation
\[ Au = \lambda u \tag{29.1} \]
has a nontrivial solution $u$ from $H$.
The set of all eigenvalues is called the point spectrum or discrete spectrum (actually there is a difference between point and discrete spectrum, but physicists don't usually care). We denote it by $\sigma_d(A)$. Symbolically,
\[ \lambda \in \sigma_d(A) \iff \exists\, u \in H,\ u \ne 0 : Au = \lambda u. \]
This means that if $\lambda \in \sigma_d(A)$ then the operator $A - \lambda I$ is not invertible. (If $A - \lambda I$ were invertible then equation (29.1) would have only the trivial solution $u = 0$.)
The spectrum was first described to you via the resolvent
\[ R(\lambda) = (A - \lambda I)^{-1}, \]
which has poles at the eigenvalues of $A$. But the spectrum in general is the set of all singularities of $R(\lambda)$: the poles form the discrete spectrum, but there can be all sorts of other singularities.
Rewrite (29.1) as
\[ (A - \lambda)\, u = 0 \tag{29.2} \]
and ask the following question: if $\lambda \notin \sigma_d(A)$ (and hence equation (29.2) has no solution in $H$), does (29.2) have a solution in some other sense?
[1] Don't hold me accountable for some semirigorous shortcuts.
Definition 29.1. Let $A$ be a linear operator on a Hilbert space $H$. A scalar $\lambda \in \mathbb{C}$ is said to be a point in the spectrum $\sigma(A)$ of $A$ if there exists a sequence $\{u_n\}$, called a Weyl sequence, such that $u_n \in H$, $\|u_n\| = 1$ and
\[ \lim_{n\to\infty} \|(A - \lambda)\, u_n\| = 0. \tag{29.3} \]
It is clear that $\sigma_d(A) \subseteq \sigma(A)$: pick $u_n = u$ for all $n$ (or rather a normalized version), where $u$ solves $Au = \lambda u$.
Here is the standard classification of the spectrum.
Definition 29.2. A scalar $\lambda \in \sigma(A)$ is said to be from the discrete spectrum $\sigma_d(A)$ if $\{u_n\}$ can be chosen convergent to some element $u \in H$, i.e.
\[ \lambda \in \sigma_d(A) \iff \exists\, \{u_n\},\ u_n \in H,\ \|u_n\| = 1,\ u_n \to u \in H, \text{ such that (29.3) holds.} \]
Definition 29.3. A scalar $\lambda \in \sigma(A)$ is said to be from the continuous spectrum $\sigma_c(A)$ if $\{u_n\}$ can be chosen divergent in $H$, i.e. $\{u_n\}$ doesn't converge to any element of $H$.
Theorem 29.4. $\sigma(A) = \sigma_d(A) \cup \sigma_c(A)$.
Proof. Let $\lambda \in \sigma(A)$; then by Definition 29.1 there exists a Weyl sequence $\{u_n\}$ of elements $u_n \in H$, $\|u_n\| = 1$, such that (29.3) holds. Each such sequence can be represented as $\{u_n'\} \cup \{u_n''\}$ (a union of two subsequences[2]) such that
(i) $u_n' \to u \in H$ as $n \to \infty$;
(ii) $\{u_n''\}$ does not converge to any element of $H$.
Consider case (i). Let us make up a new sequence $\{u_n\}$, $u_n = u$ (yes, it consists of the same element $u$); (29.3) then reads
\[ \|(A - \lambda)u\| = 0 \;\Rightarrow\; (A - \lambda)u = 0 \;\Rightarrow\; Au = \lambda u, \]
and so $\lambda \in \sigma_d(A)$.
Consider case (ii). By Definition 29.3, $\lambda \in \sigma_c(A)$.
So if $\lambda \in \sigma(A)$ then $\lambda \in \sigma_d(A)$ or $\lambda \in \sigma_c(A)$, i.e. $\sigma(A) \subseteq \sigma_d(A) \cup \sigma_c(A)$. QED
Recall that the spectrum could also be empty or the whole complex plane.
Remark 29.5. $\sigma_d(A) \cap \sigma_c(A)$ need not be empty! I.e. there may exist eigenvalues embedded into the continuous spectrum. For such $\lambda$, one can find a Weyl sequence containing a divergent subsequence (and then $\lambda \in \sigma_c(A)$) and a convergent subsequence (and so $\lambda \in \sigma_d(A)$). It seems like a weird case, but it happens all the time in physics with bound states.
Lemma 29.6. If $\lambda \in \sigma(A)$ then there exists a Weyl sequence $\{u_n\}$ such that
\[ \lim_{n\to\infty} \langle Au_n, u_n \rangle = \lambda. \]
[2] $\{u_n'\}$ or $\{u_n''\}$ may actually be empty.
Proof.
\[
|\langle Au_n, u_n \rangle - \lambda|
= |\langle (A - \lambda)u_n, u_n \rangle|
\;\overset{\text{Cauchy--Schwarz}}{\le}\; \|(A - \lambda)u_n\| \to 0, \quad n \to \infty. \text{ QED}
\]
There is a whole theory behind the concept of the continuous spectrum. We concentrate mainly on the spectral theory of some specific operators of mathematical physics.
2. Continuous spectrum of selfadjoint operators
Theorem 29.7. Let $A = A^*$. Then $\sigma(A) \subseteq \mathbb{R}$.
Proof. By Lemma 29.6, for $\lambda \in \sigma(A)$ there exists a Weyl sequence $\{u_n\}$ such that
\[ \lim_{n\to\infty} \langle Au_n, u_n \rangle = \lambda. \]
But if $A = A^*$ then
\[
\operatorname{Im} \langle Au_n, u_n \rangle
= \frac{\langle Au_n, u_n \rangle - \overline{\langle Au_n, u_n \rangle}}{2i}
= \frac{\langle Au_n, u_n \rangle - \langle u_n, Au_n \rangle}{2i}
= \frac{\langle Au_n, u_n \rangle - \langle Au_n, u_n \rangle}{2i} = 0.
\]
So we must have $0 = \lim_{n\to\infty} \operatorname{Im} \langle Au_n, u_n \rangle = \operatorname{Im} \lambda$. Therefore the spectrum is real. QED
Exercise 29.8. State and prove Theorem 29.7 for unitary operators.
(Hint: use Lemma 29.6.)
Let us consider a very important specific operator.
Theorem 29.9. Consider $A = \dfrac{1}{i}\dfrac{d}{dx}$, the operator of momentum, on $L^2(\mathbb{R})$. Then
\[ \sigma(A) = \sigma_c(A) = \mathbb{R}. \tag{29.4} \]
Proof. Note first[3] that $A = A^*$ and that $\sigma_d(A) = \varnothing$ (the equation $Au = \lambda u$ has no nontrivial solution in $L^2(\mathbb{R})$), so $\sigma(A) = \sigma_c(A)$. Furthermore, by Theorem 29.7, $\sigma(A) \subseteq \mathbb{R}$. We now prove that $\sigma(A) = \mathbb{R}$.
Take
\[ u_n(x) = \frac{1}{\sqrt{n}}\, e^{i\lambda x} e^{-|x|/n}, \qquad \lambda \in \mathbb{R}. \]
We have
\[ \|u_n\|^2 = \frac{1}{n} \int_{\mathbb{R}} e^{-2|x|/n}\, dx = \frac{2}{n} \int_0^\infty e^{-2x/n}\, dx = 1. \]
[3] We've proved this fact for the operator of momentum on two different spaces in Examples 11.1 and 17.15. The proof here would go similarly.
Also
\[
\frac{1}{i}\frac{d}{dx} u_n - \lambda u_n
= \frac{1}{i}\Big( i\lambda - \frac{\operatorname{sgn} x}{n} \Big) u_n - \lambda u_n
= \frac{i}{n}\, \operatorname{sgn} x \cdot u_n,
\qquad \text{where } \operatorname{sgn} x = \begin{cases} -1, & x < 0 \\ 1, & x > 0, \end{cases}
\]
so
\[
\Big\| \frac{1}{i}\frac{d}{dx} u_n - \lambda u_n \Big\|
= \frac{1}{n}\, \|\operatorname{sgn} x \cdot u_n\|
= \frac{1}{n} \underbrace{\|u_n\|}_{=1}
= \frac{1}{n} \to 0, \quad n \to \infty.
\]
By Definition 29.1, $\lambda \in \sigma\big(\tfrac{1}{i}\tfrac{d}{dx}\big)$. Since $\lambda \in \mathbb{R}$ is arbitrary, we have
\[ \sigma\Big(\frac{1}{i}\frac{d}{dx}\Big) = \mathbb{R}. \text{ QED} \]
Remark 29.10. It is not difficult to show that $u(x) = e^{i\lambda x}$ is a solution of
\[ \frac{1}{i}\frac{du}{dx} - \lambda u = 0 \]
which is obviously not from $L^2(\mathbb{R})$. However, $u(x)$ is a weak solution[4] to this equation. Such solutions are commonly referred to as eigenfunctions of the continuous spectrum, or generalized eigenfunctions.
Remark 29.11. Theorem 29.9 sheds some light on a reasonable question: why is the continuous spectrum called that? Roughly speaking, the continuous spectrum is always made up of intervals, and no isolated point can belong to this set.
Definition 29.12. Two operators $A, B$ on a Hilbert space are called unitarily equivalent if there exists a unitary operator $U$ such that
\[ B = U^{-1} A U = U^* A U. \tag{29.5} \]
Observe, by the way, that (29.5) also implies
\[ A = U B U^{-1} = U B U^*. \tag{29.6} \]
Indeed, multiply (29.5) by $U$ from the left and by $U^{-1}$ from the right.
Do you remember that we dealt with Definition 29.12 while considering finite dimensional spaces? It was then called similarity (see Definition 11.4), but it also goes by equivalence. Here we consider unitary equivalence, which is much more useful and powerful.
Theorem 29.13. If $A, B$ are unitarily equivalent, then
\[ \sigma(A) = \sigma(B). \]
In other words, unitary equivalence preserves the spectrum.
Proof. By (29.6) we have
\[
\|(A - \lambda)u\|
= \big\| \big( U B U^{-1} - \lambda I \big) u \big\|
= \big\| \big( U B U^{-1} - \lambda\, U U^{-1} \big) u \big\|
= \big\| U (B - \lambda) U^{-1} u \big\|
= \big\| (B - \lambda) U^{-1} u \big\|
\]
since $U$ is unitary.
[4] The term weak solution does not mean that the Weyl sequence converges weakly to the weak solution. In the example above, one can easily show that $u_n$ converges weakly to $0$, not to $e^{i\lambda x}$.
Set $v = U^{-1}u$. So
\[ \|(A - \lambda)u\| = \|(B - \lambda)v\|. \tag{29.7} \]
Note $v \in H$ and $\|v\| = \|u\|$ since $U$ is unitary. So (29.7) means that if $\lambda \in \sigma(A)$, then by Definition 29.1 there exists a Weyl sequence $\{u_n\}$ for $A$ in the Hilbert space $H$ such that $\|u_n\| = 1$ and
\[ \lim_{n\to\infty} \|(A - \lambda)\, u_n\| = 0. \]
Then, with $v_n = U^{-1} u_n$,
\[ \lim_{n\to\infty} \|(B - \lambda)\, v_n\| = 0, \]
and $\{v_n\}$ is a Weyl sequence for $B$. Hence $\lambda \in \sigma(B)$. Similarly, $\sigma(B) \subseteq \sigma(A)$. QED
Theorem 29.13 is very important in spectral analysis: if you need to find the spectrum $\sigma(B)$ of an operator $B$, but you know that $B$ is unitarily equivalent to another operator $A$ whose spectrum $\sigma(A)$ is known, then we simply have $\sigma(B) = \sigma(A)$.
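The finite dimensional analogue of Theorem 29.13 is easy to test numerically: conjugating a matrix by a unitary matrix leaves its set of eigenvalues unchanged. The sketch below (an illustration, not from the notes; the matrix size and random seed are arbitrary) checks this.

```python
import numpy as np

# Matrix version of Theorem 29.13: B = U* A U has the same eigenvalues as A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = A + A.conj().T                     # make A self-adjoint

# Build a unitary U via the QR factorization of a random complex matrix
Q, _ = np.linalg.qr(rng.standard_normal((5, 5))
                    + 1j * rng.standard_normal((5, 5)))
B = Q.conj().T @ A @ Q                 # B = U* A U

eig_A = np.sort(np.linalg.eigvalsh(A))
eig_B = np.sort(np.linalg.eigvalsh(B))
print(np.allclose(eig_A, eig_B))       # -> True
```

In infinite dimensions the statement is about the full spectrum, not just eigenvalues, but the mechanism — unitarity preserving norms in the Weyl-sequence condition — is the same.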
Definition 29.14. Let $H = L^2(\mathbb{R})$. The operator $B$ acting by the rule
\[ Bu(x) = x\, u(x), \qquad u(x) \in L^2(\mathbb{R}), \]
is called the operator of multiplication by the independent variable $x$, or the operator of coordinate (following quantum mechanics terminology).
Theorem 29.15. The spectrum of the operator $B$ of multiplication on $L^2(\mathbb{R})$ is purely continuous and fills out the whole real line, i.e.
\[ \sigma(B) = \sigma_c(B) = \mathbb{R}. \]
Proof. By Theorem 25.4, the Fourier operator $F$ is unitary on $L^2(\mathbb{R})$. By property (iii) of $F$ in Lecture 25 we have
\[ F\Big( \frac{1}{i}\frac{d}{dx}\, u \Big) = \lambda\, F u, \]
which also reads
\[ FAu = \lambda\, Fu = BFu, \quad u \in L^2(\mathbb{R}), \qquad \text{or} \qquad FAF^{-1} = B \iff A = F^{-1} B F, \]
where $A$ is the operator of momentum. This means that $B$ is unitarily equivalent to the operator of momentum $A$ on $L^2(\mathbb{R})$. By Theorem 29.13, then, $\sigma(B) = \sigma(A)$. Now by Theorem 29.9
\[ \sigma(A) = \sigma_c(A) = \mathbb{R}, \]
and the theorem is proven. QED
Remark 29.16. Recalling (from Example 19.18, part 1) that $y\,\delta(y) = 0$ for all $y \in \mathbb{R}$, we find that $\delta(x - \lambda)$ is a so-called weak solution to $Bu = \lambda u$ for any $\lambda \in \mathbb{R}$, since we always have $(x - \lambda)\,\delta(x - \lambda) = 0$; but $\delta(x - \lambda) \notin L^2(\mathbb{R})$.
Once again: the eigenfunctions of the continuous spectrum of the momentum operator $\frac{1}{i}\frac{d}{dx}$ are $e^{i\lambda x}$, and those of the operator of multiplication by $x$ are $\delta(x - \lambda)$.
So roughly speaking, if $\lambda \in \sigma_c(A)$ then there exists a solution to
\[ Au = \lambda u \tag{29.8} \]
which is not from $H$; and basically, to perform the spectral analysis of $A$, we have to find those $\lambda$'s for which (29.8) has a solution (from $H$ or not). These $\lambda$'s will form $\sigma(A)$, and the solutions will be eigenfunctions of $A$ (from the discrete or continuous spectrum). In general, the theory behind this is very involved; but for differential operators everything is a whole lot simpler.
Exercise 29.17. Prove Theorem 29.15 without using Theorem 29.13. Instead, modify the proof of Theorem 29.9 using
\[ u_n(x) = \sqrt{n}\, \chi_n(x - \lambda), \qquad \text{where } \chi_n(x) = \begin{cases} 1, & |x| \le \frac{1}{2n} \\ 0, & |x| > \frac{1}{2n}. \end{cases} \]
LECTURE 30
The Method of Eigenfunction Expansion in the Case of Continuous Spectrum
1. The Schrödinger Operator
We start out from the Schrödinger operator $A = -\dfrac{d^2}{dx^2}$, the operator of the second derivative on the whole line $\mathbb{R}$, also called the kinetic energy operator.
Theorem 30.1. Let $A = -\dfrac{d^2}{dx^2}$ on $L^2(\mathbb{R})$. Then
\[ \sigma(A) = \sigma_c(A) = [0, \infty). \tag{30.1} \]
I.e. kinetic energy can take on any nonnegative value.
Proof. Consider first the operator of momentum $B = \frac{1}{i}\frac{d}{dx}$ on $\mathbb{R}$. By Theorem 29.9 we have
\[ \sigma(B) = \sigma_c(B) = \mathbb{R} \]
and, by Remark 29.10, the functions
\[ e(x, \lambda) = \frac{e^{i\lambda x}}{\sqrt{2\pi}}, \qquad \lambda \in \mathbb{R}, \]
are the eigenfunctions of the continuous spectrum of $B$, i.e.
\[ Be = \lambda e, \qquad \lambda \in \mathbb{R}. \tag{30.2} \]
Apply the operator $B$ to both sides of (30.2):
\[ B^2 e = \lambda\, B e = \lambda^2 e. \tag{30.3} \]
But on the other hand,
\[ B^2 = \frac{1}{i}\frac{d}{dx}\,\frac{1}{i}\frac{d}{dx} = \Big(\frac{1}{i}\Big)^2 \frac{d^2}{dx^2} = -\frac{d^2}{dx^2} = A, \]
and (30.3) reads
\[ A e = \lambda^2 e, \qquad \lambda \in \mathbb{R}. \]
That is, $e(x, \lambda)$ is a solution to the eigenvalue problem
\[ A e = \mu e \]
with $\mu = \lambda^2$, $e(x, \lambda) = \dfrac{1}{\sqrt{2\pi}}\, e^{\pm i\sqrt{\mu}\, x}$.
Since $\mu = \lambda^2$ with $-\infty < \lambda < \infty$, we have $\mu \ge 0$ and
\[ \sigma(A) = \sigma_c(A) = [0, \infty). \text{ QED} \]
We also obtained
Theorem 30.2. Let $A = -\dfrac{d^2}{dx^2}$ on $L^2(\mathbb{R})$. Then
\[ \Big\{ \frac{1}{\sqrt{2\pi}}\, e^{i\sqrt{\mu}\,x},\ \frac{1}{\sqrt{2\pi}}\, e^{-i\sqrt{\mu}\,x} \Big\}_{\mu \in [0,\infty)} \tag{30.4} \]
is the set of eigenfunctions of the continuous spectrum of $A$.
Remark 30.3. Formula (30.4) says that to each eigenvalue $\mu \in [0,\infty)$ of the continuous spectrum of $A$ there correspond two eigenfunctions. This means that the spectrum $\sigma(A)$ is not simple but of (algebraic) multiplicity two. This fact results in many complications to be overcome.
2. The Wave Equation on the Whole Line
I.e.,
\[ u_{tt} = u_{xx}, \qquad u(x,0) = \varphi(x), \qquad u_t(x,0) = \psi(x). \tag{30.5} \]
Although generally not explicitly stated, we assume decay at $\pm\infty$, and generally consider $\varphi, \psi \in L^2(\mathbb{R})$. These functions are called, respectively, the initial profile ($\varphi$) and the distribution of initial velocity ($\psi$).
We approach this problem as in Lecture 28:
\[ u_{tt} + A u = 0, \qquad A = -\frac{d^2}{dx^2} = B^2, \quad B = \frac{1}{i}\frac{d}{dx}, \]
and so we have
\[ u_{tt} + B^2 u = 0, \qquad B = \frac{1}{i}\frac{d}{dx}. \tag{30.6} \]
It follows from Theorem 29.9 and Remark 29.10 that the set
\[ \{e(x,\lambda)\}_{\lambda \in \mathbb{R}}, \qquad e(x,\lambda) = \frac{1}{\sqrt{2\pi}}\, e^{i\lambda x}, \tag{30.7} \]
is the set of all eigenfunctions of $B$. By the Fourier Integral Theorem (Theorem 26.1), every function $u(x) \in L^2(\mathbb{R})$ can be represented by
\[ u(x) = \int_{\mathbb{R}} \hat{u}(\lambda)\, \frac{e^{i\lambda x}}{\sqrt{2\pi}}\, d\lambda, \qquad \hat{u}(\lambda) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-i\lambda x}\, u(x)\, dx. \]
Due to (30.7), we rewrite this as
\[ u(x) = \int_{\mathbb{R}} \hat{u}(\lambda)\, e(x,\lambda)\, d\lambda, \qquad \hat{u}(\lambda) = \underbrace{\int_{\mathbb{R}} u(x)\, \overline{e(x,\lambda)}\, dx}_{\text{looks like } \langle u, e \rangle?}. \]
Even though $e(x,\lambda) \notin L^2(\mathbb{R})$, we agree to write
\[ \int_{\mathbb{R}} u(x)\, \overline{e(x,\lambda)}\, dx = \langle u, e \rangle. \]
So we get
\[ u(x) = \int_{\mathbb{R}} \hat{u}(\lambda)\, e(x,\lambda)\, d\lambda, \qquad \hat{u} = \langle u, e \rangle, \tag{30.8} \]
which looks similar to
\[ u(x) = \sum_n u_n\, e_n(x), \qquad u_n = \langle u, e_n \rangle. \]
So (30.8) can be treated as an expansion in eigenfunctions of the operator $B$. Note in particular that we have
\[ \langle e(x,\lambda), e(x,\mu) \rangle = \delta(\lambda - \mu), \]
which is analogous to the orthonormality relation in the discrete case.
Now we look for the solution of (30.6) in the form
\[ u(x,t) = \int_{\mathbb{R}} \hat{u}(\lambda, t)\, e(x,\lambda)\, d\lambda \]
and plug it into (30.6): for any $x$,
\[
0 = \frac{\partial^2}{\partial t^2} \int_{\mathbb{R}} \hat{u}(\lambda,t)\, \underbrace{e(x,\lambda)}_{\text{no } t}\, d\lambda
+ B^2 \int_{\mathbb{R}} \underbrace{\hat{u}(\lambda,t)}_{\text{no } x}\, e(x,\lambda)\, d\lambda
= \int_{\mathbb{R}} \hat{u}_{tt}(\lambda,t)\, e(x,\lambda)\, d\lambda
+ \int_{\mathbb{R}} \hat{u}(\lambda,t)\, \underbrace{B^2 e(x,\lambda)}_{=\lambda^2 e(x,\lambda) \text{ due to } (30.3)}\, d\lambda,
\]
and we get, for any $x$,
\[
\int_{\mathbb{R}} \big[ \hat{u}_{tt}(\lambda,t) + \lambda^2 \hat{u}(\lambda,t) \big]\, e(x,\lambda)\, d\lambda = 0
\;\Rightarrow\; \hat{u}_{tt}(\lambda,t) + \lambda^2 \hat{u}(\lambda,t) = 0
\;\Rightarrow\; \hat{u}(\lambda,t) = a(\lambda)\, e^{i\lambda t} + b(\lambda)\, e^{-i\lambda t}.
\]
In the above, in a way we have $\hat{u}_{tt} + \lambda^2 \hat{u}$ orthogonal to each of the $e(x,\lambda)$; and because the $e(x,\lambda)$ form a basis, $\hat{u}_{tt} + \lambda^2 \hat{u}$ has to be zero — but the proof is not obvious.
Now convert the initial conditions into the frequency domain:
\[ \hat{u}(\lambda, 0) = \hat{\varphi}(\lambda), \qquad \hat{u}_t(\lambda, 0) = \hat{\psi}(\lambda), \]
and we get
\[
\begin{cases}
\hat{\varphi}(\lambda) = a(\lambda) + b(\lambda) \\
\hat{\psi}(\lambda) = i\lambda\, a(\lambda) - i\lambda\, b(\lambda)
\end{cases}
\quad\text{or}\quad
\begin{cases}
a + b = \hat{\varphi} \\[1mm]
a - b = \dfrac{\hat{\psi}}{i\lambda}
\end{cases}
\;\Rightarrow\;
a = \frac{1}{2}\Big( \hat{\varphi} + \frac{\hat{\psi}}{i\lambda} \Big), \qquad
b = \hat{\varphi} - a = \frac{1}{2}\Big( \hat{\varphi} - \frac{\hat{\psi}}{i\lambda} \Big).
\]
So
\[
\hat{u}(\lambda,t) = \frac{1}{2}\Big( \hat{\varphi}(\lambda) + \frac{\hat{\psi}(\lambda)}{i\lambda} \Big) e^{i\lambda t}
+ \frac{1}{2}\Big( \hat{\varphi}(\lambda) - \frac{\hat{\psi}(\lambda)}{i\lambda} \Big) e^{-i\lambda t},
\]
and finally
\[ u(x,t) = F^{-1}\big[ a(\lambda)\, e^{i\lambda t} + b(\lambda)\, e^{-i\lambda t} \big], \]
or, more explicitly,
\[
u(x,t) = \int_{\mathbb{R}} \hat{u}(\lambda,t)\, e(x,\lambda)\, d\lambda, \quad \text{where}
\]
\[
\hat{u}(\lambda,t) = \frac{1}{2}\Big( \hat{\varphi}(\lambda) + \frac{\hat{\psi}(\lambda)}{i\lambda} \Big) e^{i\lambda t}
+ \frac{1}{2}\Big( \hat{\varphi}(\lambda) - \frac{\hat{\psi}(\lambda)}{i\lambda} \Big) e^{-i\lambda t},
\]
\[
\hat{\varphi}(\lambda) = \int_{\mathbb{R}} \varphi(x)\, \overline{e(x,\lambda)}\, dx, \qquad
\hat{\psi}(\lambda) = \int_{\mathbb{R}} \psi(x)\, \overline{e(x,\lambda)}\, dx, \qquad
e(x,\lambda) = \frac{1}{\sqrt{2\pi}}\, e^{i\lambda x}.
\tag{30.9}
\]
In this form, the solution is difficult to visualize and analyze. Furthermore, historically, the wave equation was solved directly through a clever change of variables by d'Alembert. So we present here d'Alembert's solution, and in the Appendix to this lecture we show how the d'Alembert formula can be derived from (30.9).
Consider
\[ u_{tt} = u_{xx}, \qquad u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x). \]
Introduce the following canonical variables:
\[ \xi = x + t, \qquad \eta = x - t. \]
By the chain rule,
\[
\begin{cases}
u_t = u_\xi - u_\eta \\
u_x = u_\xi + u_\eta
\end{cases}
\;\Rightarrow\;
\begin{cases}
u_{tt} = (u_\xi)_t - (u_\eta)_t = u_{\xi\xi} - 2u_{\xi\eta} + u_{\eta\eta} \\
u_{xx} = (u_\xi)_x + (u_\eta)_x = u_{\xi\xi} + 2u_{\xi\eta} + u_{\eta\eta}
\end{cases}
\]
\[
\Rightarrow\; u_{tt} - u_{xx} = -4\, u_{\xi\eta} = 0
\;\Rightarrow\; u_{\xi\eta} = 0
\;\Rightarrow\; u = F(\xi) + G(\eta) = F(x+t) + G(x-t).
\]
The initial conditions give
\[
\begin{cases}
F(x) + G(x) = \varphi(x) \\
F'(x) - G'(x) = \psi(x)
\end{cases}
\;\Rightarrow\;
F(x) - G(x) = \int_{x_0}^{x} \psi(s)\, ds + C
\]
\[
\Rightarrow\;
\begin{cases}
F(x) = \dfrac{1}{2}\varphi(x) + \dfrac{1}{2} \displaystyle\int_{x_0}^{x} \psi(s)\, ds + \dfrac{C}{2} \\[3mm]
G(x) = \dfrac{1}{2}\varphi(x) - \dfrac{1}{2} \displaystyle\int_{x_0}^{x} \psi(s)\, ds - \dfrac{C}{2}
\end{cases}
\]
\[
\Rightarrow\; u(x,t) = \frac{1}{2}\varphi(x+t) + \frac{1}{2}\int_{x_0}^{x+t} \psi(s)\, ds + \frac{C}{2}
+ \frac{1}{2}\varphi(x-t) - \frac{1}{2}\int_{x_0}^{x-t} \psi(s)\, ds - \frac{C}{2}
\]
\[
\Rightarrow\; u(x,t) = \frac{1}{2}\big[ \varphi(x-t) + \varphi(x+t) \big] + \frac{1}{2}\int_{x-t}^{x+t} \psi(s)\, ds.
\]
So we summarize our result as
Theorem 30.4. The solution to the initial value problem
\[ u_{tt} = u_{xx}, \qquad u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x) \]
can be represented by the d'Alembert formula
\[ u(x,t) = \frac{\varphi(x-t) + \varphi(x+t)}{2} + \frac{1}{2} \int_{x-t}^{x+t} \psi(s)\, ds. \tag{30.10} \]
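The d'Alembert formula is straightforward to implement. As an illustration (a sketch, not from the notes): choosing $\psi = \varphi'$ makes the integral term telescope, and (30.10) collapses to the single left-moving wave $u(x,t) = \varphi(x+t)$; the code below checks this, writing the integral term via an antiderivative $\Phi$ of $\psi$.

```python
import numpy as np

# d'Alembert formula (30.10):
#   u = (phi(x-t) + phi(x+t))/2 + (Phi(x+t) - Phi(x-t))/2,
# where Phi is any antiderivative of psi. Test data (arbitrary choice):
# psi = phi', so Phi = phi, and u must reduce to the pure wave phi(x+t).
phi = lambda x: np.exp(-x**2)
Phi = lambda x: np.exp(-x**2)   # antiderivative of psi = phi'

def u(x, t):
    return 0.5 * (phi(x - t) + phi(x + t)) + 0.5 * (Phi(x + t) - Phi(x - t))

x = np.linspace(-5.0, 5.0, 101)
print(np.allclose(u(x, 2.0), phi(x + 2.0)))  # -> True: a pure left-moving wave
print(np.allclose(u(x, 0.0), phi(x)))        # -> True: initial condition holds
```

For general data the two terms of (30.10) produce exactly the left- and right-moving bumps discussed in the next section.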
3. Propagation of Waves
Formula (30.10) describes the phenomenon of wave propagation.
1) Consider first the case with no initial velocity: imagine that the string is pinched and then released, so that $\varphi$ is a bump supported on $(-a, a)$ (see figure) and $\psi \equiv 0$. Then (30.10) becomes
\[ u(x,t) = \frac{\varphi(x-t) + \varphi(x+t)}{2}. \]
(Figure: the graph of $\frac{1}{2}\varphi(x-t)$, a bump supported on $(-a+t,\ a+t)$.) I.e. $\frac{1}{2}\varphi(x-t)$ represents a bump, initially supported on $(-a,a)$, moving with speed $1$ in the positive direction of the $x$-axis.[1]
(Figure: the graph of $\frac{1}{2}\varphi(x+t)$, a bump supported on $(-a-t,\ a-t)$.) Similarly, $\frac{1}{2}\varphi(x+t)$ is a bump moving in the opposite direction with the same speed $1$.
Since $u(x,t) = \frac{1}{2}\varphi(x-t) + \frac{1}{2}\varphi(x+t)$, we get Figure 1: snapshots of $u(x,t)$ at $t = 0, a, 2a, 3a, 4a$, in which the initial bump splits into two half-height bumps traveling in opposite directions.
[1] Some students have trouble understanding why subtracting $t$ from $x$ moves the graph to the right. But watch:
\[ \operatorname{Supp}\varphi = (-a, a): \quad -a \le x - t \le a \iff -a + t \le x \le a + t. \]
2) Now let the string be initially flat, but give it a boost: $\varphi(x) = 0$ and $\psi$ a bump supported on $(-a, a)$ (see figure). Now (30.10) becomes
\[ u(x,t) = \frac{1}{2} \int_{x-t}^{x+t} \psi(s)\, ds. \]
Set $g(x) = \dfrac{1}{2} \displaystyle\int_0^x \psi(s)\, ds$; then (see figure) $g$ rises from $-h/2$ (for $x \le -a$) to $h/2$ (for $x \ge a$, taking the bump symmetric for definiteness), where
\[ h \equiv \frac{1}{2} \int_{-a}^{a} \psi(s)\, ds, \]
and
\[ u(x,t) = g(x+t) - g(x-t). \]
For $g(x-t)$ the graph of $g$ is shifted to the right by $t$ (Figure 2), and for $g(x+t)$ it is shifted to the left by $t$ (Figure 3). The total picture, Figure 4, is the result of the addition of Figure 2 and Figure 3: the snapshots of $u(x,t)$ at $t = 0, a, 2a, 3a, 4a$ show a plateau of height $h$ spreading out with speed $1$ in both directions.
3) If $\varphi \ne 0$ and $\psi \ne 0$, then the picture will be the superposition of Figure 1 and Figure 4, since the equation is linear.
Exercise 30.5. Solve the wave equation for $u(x,t)$ with initial data given by the pictured bumps: $\varphi$ a bump of height $2$ supported on $(-1, 1)$ and $\psi$ a bump of height $1$ supported on $(-1, 1)$ (see figures). Graph the solution $u(x,t)$ for various values of time.
Exercise 30.6. Solve ($-\infty < x < \infty$, $t > 0$)
\[ u_{tt} = u_{xx}, \qquad u(x,0) = 0, \quad u_t(x,0) = x\, e^{-x^2}, \]
and graph $u(x,t)$ for various values of $t$.
Exercise 30.7. Assuming Theorem 30.4, modify equation (30.10) for
\[ u_{tt} = c^2 u_{xx}, \]
where $c > 0$ is a constant.
Appendix: Derivation of the d'Alembert Formula
From (30.9), we get
\[
u(x,t) = \int_{\mathbb{R}} \Big[ \frac{1}{2}\Big( \hat\varphi(\lambda) + \frac{\hat\psi(\lambda)}{i\lambda} \Big) e^{i\lambda t}
+ \frac{1}{2}\Big( \hat\varphi(\lambda) - \frac{\hat\psi(\lambda)}{i\lambda} \Big) e^{-i\lambda t} \Big] e(x,\lambda)\, d\lambda
\]
\[
= \frac{1}{2\sqrt{2\pi}} \int_{\mathbb{R}} \Big( \hat\varphi(\lambda) + \frac{\hat\psi(\lambda)}{i\lambda} \Big) e^{i\lambda(x+t)}\, d\lambda
+ \frac{1}{2\sqrt{2\pi}} \int_{\mathbb{R}} \Big( \hat\varphi(\lambda) - \frac{\hat\psi(\lambda)}{i\lambda} \Big) e^{i\lambda(x-t)}\, d\lambda
\]
\[
= \frac{1}{4\pi} \int_{\mathbb{R}} \Big[ \int_{\mathbb{R}} \Big( \varphi(s) + \frac{\psi(s)}{i\lambda} \Big) e^{-i\lambda s}\, ds \Big] e^{i\lambda(x+t)}\, d\lambda
+ \frac{1}{4\pi} \int_{\mathbb{R}} \Big[ \int_{\mathbb{R}} \Big( \varphi(s) - \frac{\psi(s)}{i\lambda} \Big) e^{-i\lambda s}\, ds \Big] e^{i\lambda(x-t)}\, d\lambda. \tag{30.11}
\]
Change now the order of integration in (30.11):
\[
u(x,t) = \frac{1}{4\pi} \int_{\mathbb{R}} \Big[ \varphi(s) \int_{\mathbb{R}} e^{i\lambda(x+t-s)}\, d\lambda
+ \psi(s) \int_{\mathbb{R}} \frac{e^{i\lambda(x+t-s)}}{i\lambda}\, d\lambda \Big] ds
\]
\[
+ \frac{1}{4\pi} \int_{\mathbb{R}} \Big[ \varphi(s) \int_{\mathbb{R}} e^{i\lambda(x-t-s)}\, d\lambda
- \psi(s) \int_{\mathbb{R}} \frac{e^{i\lambda(x-t-s)}}{i\lambda}\, d\lambda \Big] ds. \tag{30.12}
\]
In order to give this nasty expression a nicer look, we basically have to evaluate
\[ \int_{\mathbb{R}} e^{i\lambda a}\, d\lambda, \qquad \int_{\mathbb{R}} \frac{e^{i\lambda a}}{i\lambda}\, d\lambda \]
with $a = x + t - s$ or $a = x - t - s$. But this has already been done. By Theorem 20.4 we have
\[ \int_{\mathbb{R}} e^{i\lambda a}\, d\lambda = 2\pi\, \delta(a). \tag{30.13} \]
We have also computed (even twice!) the other integral, in Example 7.1:
\[ \int_{\mathbb{R}} \frac{e^{i\lambda a}}{i\lambda}\, d\lambda = \pi \qquad (a > 0). \tag{30.14} \]
If $a < 0$, then taking the complex conjugate of (30.14) we get
\[
\int_{\mathbb{R}} \frac{e^{i\lambda a}}{i\lambda}\, d\lambda
= \overline{\int_{\mathbb{R}} \frac{e^{i\lambda a}}{i\lambda}\, d\lambda}
= -\int_{\mathbb{R}} \frac{e^{i\lambda(-a)}}{i\lambda}\, d\lambda
= -\pi, \qquad \text{since } -a > 0.
\]
So,
\[
\int_{\mathbb{R}} \frac{e^{i\lambda a}}{i\lambda}\, d\lambda
= \begin{cases} \pi, & a > 0 \\ -\pi, & a < 0 \end{cases}
= \pi \operatorname{sgn} a. \tag{30.15}
\]
Plug (30.13) and (30.15) into (30.12):
\[
u(x,t) = \frac{1}{4\pi} \int_{\mathbb{R}} \big[ 2\pi\, \varphi(s)\, \delta(x+t-s) + \pi\, \psi(s) \operatorname{sgn}(x+t-s) \big]\, ds
\]
\[
+ \frac{1}{4\pi} \int_{\mathbb{R}} \big[ 2\pi\, \varphi(s)\, \delta(x-t-s) - \pi\, \psi(s) \operatorname{sgn}(x-t-s) \big]\, ds
\]
\[
= \frac{1}{2}\varphi(x+t) + \frac{1}{4} \int_{\mathbb{R}} \psi(s) \operatorname{sgn}(x+t-s)\, ds
+ \frac{1}{2}\varphi(x-t) - \frac{1}{4} \int_{\mathbb{R}} \psi(s) \operatorname{sgn}(x-t-s)\, ds
\]
\[
= \frac{\varphi(x+t) + \varphi(x-t)}{2}
+ \frac{1}{4} \Big[ \int_{-\infty}^{x+t} \psi(s)\, ds - \int_{x+t}^{\infty} \psi(s)\, ds \Big]
- \frac{1}{4} \Big[ \int_{-\infty}^{x-t} \psi(s)\, ds - \int_{x-t}^{\infty} \psi(s)\, ds \Big]
\]
\[
= \frac{\varphi(x+t) + \varphi(x-t)}{2}
+ \underbrace{\frac{1}{4} \int_{x-t}^{x+t} \psi(s)\, ds}_{\text{from the first bracket}}
+ \underbrace{\frac{1}{4} \int_{x-t}^{x+t} \psi(s)\, ds}_{\text{from the second bracket}}
= \frac{\varphi(x+t) + \varphi(x-t)}{2} + \frac{1}{2} \int_{x-t}^{x+t} \psi(s)\, ds.
\]
LECTURE 31
The Heat Equation on the Line
We are concerned with the initial value problem
\[
\begin{cases}
u_t = u_{xx}, & -\infty < x < \infty \\
u(x,0) = \varphi(x).
\end{cases} \tag{31.1}
\]
Note that $\varphi$ represents the initial distribution of heat; it is physical, so it is reasonable to assume $\varphi \in L^2(\mathbb{R})$, since the total energy is finite.
Our approach is going to be absolutely the same as that for the wave equation. Namely, we introduce the operator $A = -\dfrac{d^2}{dx^2}$ and rewrite (31.1) as
\[ u_t + A u = 0. \]
Next, $A = B^2$, $B = \dfrac{1}{i}\dfrac{d}{dx}$, and we have
\[ u_t + B^2 u = 0. \]
By Theorem 29.9 and Remark 29.10, $\sigma(B) = \mathbb{R}$ and
\[ \{e(x,\lambda)\}_{\lambda \in \mathbb{R}}, \qquad e(x,\lambda) = \frac{1}{\sqrt{2\pi}}\, e^{i\lambda x}, \]
is the set of all eigenfunctions of $B$. By Theorem 26.1 we have
\[
u(x,t) = \int_{\mathbb{R}} \hat{u}(\lambda,t)\, e(x,\lambda)\, d\lambda, \qquad
\hat{u}(\lambda,t) = \int_{\mathbb{R}} u(x,t)\, \overline{e(x,\lambda)}\, dx. \tag{31.2}
\]
Plug (31.2) into (31.1):
\[
u_t(x,t) = \int_{\mathbb{R}} \hat{u}_t(\lambda,t)\, e(x,\lambda)\, d\lambda, \qquad
B^2 u(x,t) = \int_{\mathbb{R}} \hat{u}(\lambda,t)\, B^2 e(x,\lambda)\, d\lambda
= \int_{\mathbb{R}} \lambda^2\, \hat{u}(\lambda,t)\, e(x,\lambda)\, d\lambda.
\]
Since $u_t + B^2 u = 0$, we get
\[
\int_{\mathbb{R}} \big[ \hat{u}_t(\lambda,t) + \lambda^2 \hat{u}(\lambda,t) \big]\, e(x,\lambda)\, d\lambda = 0, \quad x \in \mathbb{R}
\;\Rightarrow\; \hat{u}_t + \lambda^2 \hat{u} = 0
\;\Rightarrow\; \hat{u}(\lambda,t) = C(\lambda)\, e^{-\lambda^2 t}.
\]
Now we need to find $C(\lambda)$. But
\[
u(x,0) = \int_{\mathbb{R}} \hat{u}(\lambda,0)\, e(x,\lambda)\, d\lambda = \int_{\mathbb{R}} C(\lambda)\, e(x,\lambda)\, d\lambda,
\]
while
\[
\varphi(x) = \int_{\mathbb{R}} \hat{\varphi}(\lambda)\, e(x,\lambda)\, d\lambda, \qquad
\hat{\varphi}(\lambda) = \int_{\mathbb{R}} \varphi(x)\, \overline{e(x,\lambda)}\, dx.
\]
It follows from here that $C(\lambda) = \hat{\varphi}(\lambda)$, and problem (31.1) is solved:
\[
u(x,t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \hat{\varphi}(\lambda)\, e^{-\lambda^2 t + i\lambda x}\, d\lambda,
\qquad \hat{\varphi}(\lambda) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \varphi(x)\, e^{-i\lambda x}\, dx. \tag{31.3}
\]
However, this answer cannot be considered final (we can't even see that it's real!).
However, this answer cannot be considered as nal (we cant see if its real!).
It follows from (31.3) that
\[
u(x,t) = \frac{1}{2\pi} \int_{\mathbb{R}} \Big[ \int_{\mathbb{R}} \varphi(s)\, e^{-i\lambda s}\, ds \Big]\, e^{-\lambda^2 t + i\lambda x}\, d\lambda,
\]
and changing the order of integration we get
\[
u(x,t) = \int_{\mathbb{R}} \underbrace{\Big[ \frac{1}{2\pi} \int_{\mathbb{R}} e^{i\lambda(x-s) - \lambda^2 t}\, d\lambda \Big]}_{=:\; I(x,s,t)}\, \varphi(s)\, ds, \tag{31.4}
\]
where $e^{i\lambda(x-s) - \lambda^2 t}$ is known as the dispersion term.
Let's evaluate $I(x,s,t)$ separately. Complete the square:
\[
-\lambda^2 t + i\lambda(x-s)
= -\Big( \lambda\sqrt{t} + \frac{x-s}{2i\sqrt{t}} \Big)^2 - \Big( \frac{x-s}{2\sqrt{t}} \Big)^2.
\]
Then
\[
I(x,s,t) = \frac{1}{2\pi}\, e^{-\left(\frac{x-s}{2\sqrt{t}}\right)^2}
\underbrace{\int_{\mathbb{R}} e^{-\left( \lambda\sqrt{t} + \frac{x-s}{2i\sqrt{t}} \right)^2} d\lambda}_{\text{evaluate now this integral}}. \tag{31.5}
\]
Make the substitution
\[ \lambda\sqrt{t} + \frac{x-s}{2i\sqrt{t}} = z, \qquad dz = \sqrt{t}\, d\lambda, \quad d\lambda = \frac{dz}{\sqrt{t}}, \]
so that
\[
\int_{\mathbb{R}} e^{-\left( \lambda\sqrt{t} + \frac{x-s}{2i\sqrt{t}} \right)^2} d\lambda = \frac{1}{\sqrt{t}} \int_{C} e^{-z^2}\, dz, \tag{31.6}
\]
where
\[ C = \Big\{ z \in \mathbb{C} : z = \mu\sqrt{t} - \frac{x-s}{2\sqrt{t}}\, i, \ -\infty < \mu < \infty \Big\}, \]
a horizontal line in the complex plane at height $-\dfrac{x-s}{2\sqrt{t}}$ (see figure).
Evaluate
\[ \int_{C} e^{-z^2}\, dz = \lim_{R\to\infty} \int_{C_R} e^{-z^2}\, dz, \]
where $C_R$ is the piece of $C$ bounded by the vertical lines $\operatorname{Re} z = \pm R$. Consider the closed contour $\Gamma$ made of $C_R$, the two vertical segments $\gamma_{\pm}$ at $\operatorname{Re} z = \pm R$, and the segment $[-R, R]$ of the real axis, where the vertical segments have height
\[ h = \frac{x-s}{2\sqrt{t}}. \]
Since $e^{-z^2}$ is analytic inside $\Gamma$, by the Cauchy Theorem one gets (with $\gamma_\pm$ suitably oriented)
\[ 0 = \oint_{\Gamma} e^{-z^2}\, dz = \int_{C_R} + \int_{\gamma_+} + \int_{\gamma_-} - \int_{-R}^{R}. \tag{31.7} \]
Prove now that $\lim_{R\to\infty} \int_{\gamma_\pm} = 0$. Indeed, on $\gamma_+$, $z = R + iy$ and
\[
\int_{\gamma_+} e^{-z^2}\, dz = \int_{-h}^{0} e^{-(R+iy)^2}\, i\, dy
= i \int_{-h}^{0} e^{-R^2 - 2iRy + y^2}\, dy
= i\, e^{-R^2} \int_{-h}^{0} e^{-2iRy}\, e^{y^2}\, dy.
\]
Taking the absolute value we get
\[
\Big| \int_{\gamma_+} e^{-z^2}\, dz \Big|
= e^{-R^2} \Big| \int_{-h}^{0} e^{-2iRy}\, e^{y^2}\, dy \Big|
\le e^{-R^2} \Big| \int_{-h}^{0} e^{y^2}\, dy \Big|
\le e^{-R^2}\, |h|\, e^{h^2} \xrightarrow[R\to\infty]{} 0.
\]
So,
\[ \lim_{R\to\infty} \int_{\gamma_+} e^{-z^2}\, dz = 0. \]
Similarly,
\[ \lim_{R\to\infty} \int_{\gamma_-} e^{-z^2}\, dz = 0, \]
and passing to the limit in (31.7) as $R \to \infty$, we get
\[
\underbrace{\lim_{R\to\infty} \int_{C_R}}_{\int_C}
+ \underbrace{\lim_{R\to\infty} \int_{\gamma_+} + \lim_{R\to\infty} \int_{\gamma_-}}_{=0}
- \lim_{R\to\infty} \int_{-R}^{R} = 0,
\]
and finally
\[
\int_{C} e^{-z^2}\, dz = \int_{\mathbb{R}} e^{-z^2}\, dz = \sqrt{\pi} \quad \text{(Gauss integral)};
\]
hence plugging this into (31.6) yields
\[
\int_{\mathbb{R}} e^{-\left( \lambda\sqrt{t} + \frac{x-s}{2i\sqrt{t}} \right)^2} d\lambda = \sqrt{\frac{\pi}{t}}.
\]
Inserting this into (31.5), we get
\[
I(x,s,t) = \frac{1}{2\pi} \sqrt{\frac{\pi}{t}}\; e^{-\left(\frac{x-s}{2\sqrt{t}}\right)^2}
= \frac{e^{-\frac{(x-s)^2}{4t}}}{\sqrt{4\pi t}},
\]
and (31.4) becomes
\[ u(x,t) = \frac{1}{2\sqrt{\pi t}} \int_{\mathbb{R}} e^{-\frac{(x-s)^2}{4t}}\, \varphi(s)\, ds. \]
So we proved
Theorem 31.1. The solution of (31.1) can be represented in the following form:
\[ u(x,t) = \frac{1}{2\sqrt{\pi t}} \int_{\mathbb{R}} e^{-\frac{(x-s)^2}{4t}}\, \varphi(s)\, ds. \tag{31.8} \]
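Formula (31.8) can be checked numerically against a case where the answer is known in closed form. A Gaussian initial profile stays Gaussian under heat flow: for $\varphi(x) = e^{-x^2}$ one finds $u(x,t) = e^{-x^2/(1+4t)}/\sqrt{1+4t}$ (a standard computation, not derived in the notes). The sketch below, with arbitrary grid parameters, convolves the heat kernel with $\varphi$ and compares.

```python
import numpy as np

# Numerical check of (31.8): u(x,t) = int K(x-s,t) phi(s) ds with
# K(y,t) = exp(-y^2/(4t)) / (2 sqrt(pi t)), against the closed form
# for a Gaussian initial profile phi(x) = exp(-x^2).
s = np.linspace(-20.0, 20.0, 8001)
ds = s[1] - s[0]
phi = np.exp(-s**2)

def u_numeric(x, t):
    kernel = np.exp(-(x - s)**2 / (4 * t)) / (2 * np.sqrt(np.pi * t))
    return np.sum(kernel * phi) * ds

t = 0.5
x_test = np.array([-1.0, 0.0, 2.0])
u_exact = np.exp(-x_test**2 / (1 + 4 * t)) / np.sqrt(1 + 4 * t)
u_num = np.array([u_numeric(x, t) for x in x_test])
print(np.allclose(u_num, u_exact, atol=1e-6))  # -> True
```

The agreement illustrates both the smoothing of the kernel and the Gaussian-in, Gaussian-out behavior of the heat equation.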
Remark 31.2. The right-hand side of (31.8) is clearly undefined at $t = 0$. But, as you know,[1]
\[ \Big\{ \frac{n}{\sqrt{\pi}}\, e^{-n^2 (x-s)^2} \Big\}_{n \in \mathbb{N}} \]
forms a $\delta$-sequence and hence, setting $n = \dfrac{1}{2\sqrt{t}}$, we find that
\[
\operatorname*{w-lim}_{t \to 0}\; \frac{1}{2\sqrt{\pi t}}\, e^{-\frac{(x-s)^2}{4t}}
= \operatorname*{w-lim}_{n \to \infty}\; \frac{n}{\sqrt{\pi}}\, e^{-n^2 (x-s)^2}
= \delta(x-s),
\]
and from (31.8) we have
\[ u(x, 0) = \int_{\mathbb{R}} \delta(x-s)\, \varphi(s)\, ds = \varphi(x), \]
which is completely consistent with the initial condition $u(x,0) = \varphi(x)$.
[1] Recall Example 19.6.
The above holds whether $\varphi$ is a test function or not, as illustrated in the example below. The result holds even if $\varphi \notin L^2(\mathbb{R})$, even though the only method used is the Fourier method.
Example 31.3. Let us solve (31.1) with $\varphi(x) = \delta(x - x_0)$. Physically it means that we heated the single point $x = x_0$ with a laser beam. By (31.8) in Theorem 31.1 we have
\[
u(x,t) = \frac{1}{2\sqrt{\pi t}} \int_{\mathbb{R}} e^{-\frac{(x-s)^2}{4t}}\, \delta(s - x_0)\, ds
= \frac{e^{-\frac{(x-x_0)^2}{4t}}}{2\sqrt{\pi t}}.
\]
(Figure: the graphs of $u(x,t)$ for times $t_1 < t_2 < t_3$, a family of Gaussians centered at $x_0$ that spread and flatten as $t$ grows. Note that $u(x,t)$, as $t \to 0$, forms a $\delta$-sequence.)
Remark 31.4. In the wave equation the speed of propagation is constant and finite. But in the heat equation, in a split second, tails spread from $-\infty$ to $\infty$ (since exponentials are positive for all $x$). This means that heat propagates instantaneously, i.e. the speed of propagation is infinite. This of course is not quite true in real situations; it shouldn't go faster than the speed of light. So recall that this is a model — and far enough away, the change is infinitesimally small.
Remark 31.5. Note also that the solution to $u_t = u_{xx}$, $u(x,0) = \varphi(x)$ is the convolution of $\varphi$ with the solution of the heat equation with an impulse initial distribution. This is known as Duhamel's principle.
Exercise 31.6. Solve (31.1) and draw a picture of $u(x,t)$ for
\[ \varphi(x) = \begin{cases} 1, & |x| \le a \\ 0, & |x| > a. \end{cases} \]
(Hint: use the error function $\operatorname{erf}(x) = \dfrac{2}{\sqrt{\pi}} \displaystyle\int_0^x e^{-u^2}\, du$ and its properties.)
Exercise 31.7. Solve
\[
\begin{cases}
u_t = c\, u_{xx} - b\, u_x, & x \in \mathbb{R},\ t > 0 \\
u(x,0) = e^{-x^2},
\end{cases}
\]
where $b, c$ are constants and $c > 0$. Graph the solution for various values of $t > 0$. (Hint: use the substitution $\xi = x - bt$ to reduce this equation to the standard heat equation.)
LECTURE 32
Nonhomogeneous Wave and Heat Equations on the Whole Line
In the nonhomogeneous case, we will treat the heat equation first because it is simpler than the wave equation.
1. Nonhomogeneous Heat Equation
Consider
\[
\begin{cases}
u_t = u_{xx} + f(x,t) \\
u(x,0) = \varphi(x)
\end{cases} \tag{32.1}
\]
on the whole line. Assume $f, \varphi \in L^2(\mathbb{R})$ with respect to $x$, so that we can use the spectral (here, Fourier) method. The function $f$ is independent of $u$ and is referred to as the forcing term in general; here it corresponds to heat generation (a heat source) or dispersion.
We are going to apply the same eigenfunction expansion as in Lectures 30 and 31. We write (32.1) as
\[ u_t + B^2 u = f, \qquad B = \frac{1}{i}\frac{d}{dx}. \tag{32.2} \]
As previously, write the solution of (32.1) as
\[
u(x,t) = \int_{\mathbb{R}} \hat{u}(\lambda,t)\, e(x,\lambda)\, d\lambda, \qquad
\hat{u}(\lambda,t) = \int_{\mathbb{R}} u(x,t)\, \overline{e(x,\lambda)}\, dx, \qquad
e(x,\lambda) = \frac{1}{\sqrt{2\pi}}\, e^{i\lambda x}.
\]
Since
\[
u_t = \int_{\mathbb{R}} \hat{u}_t(\lambda,t)\, e(x,\lambda)\, d\lambda, \qquad
B^2 u = \int_{\mathbb{R}} \lambda^2\, \hat{u}(\lambda,t)\, e(x,\lambda)\, d\lambda,
\]
\[
f(x,t) = \int_{\mathbb{R}} \hat{f}(\lambda,t)\, e(x,\lambda)\, d\lambda, \qquad
\hat{f}(\lambda,t) = \int_{\mathbb{R}} f(x,t)\, \overline{e(x,\lambda)}\, dx,
\]
by inserting the above into (32.2), equation (32.1) becomes, in the frequency domain,
\[ \hat{u}_t(\lambda,t) + \lambda^2\, \hat{u}(\lambda,t) = \hat{f}(\lambda,t). \tag{32.3} \]
It is a linear equation of order 1, and by applying the variation of parameters method (see the Appendix) we obtain its general solution:
\[
\hat{u}(\lambda,t) = e^{-\lambda^2 t} \Big[ \int_0^t \hat{f}(\lambda,\tau)\, e^{\lambda^2 \tau}\, d\tau + C(\lambda) \Big].
\]
By passing the initial condition to the frequency domain, we have
\[ \hat{\varphi}(\lambda) = \hat{u}(\lambda, 0) = C(\lambda), \]
and hence
\[
\hat{u}(\lambda,t) = \underbrace{e^{-\lambda^2 t} \int_0^t \hat{f}(\lambda,\tau)\, e^{\lambda^2 \tau}\, d\tau}_{\hat{u}_1(\lambda,t)}
+ \underbrace{\hat{\varphi}(\lambda)\, e^{-\lambda^2 t}}_{\hat{u}_0(\lambda,t)}.
\]
If we set
\[ u_0(x,t) = \int_{\mathbb{R}} \hat{u}_0(\lambda,t)\, e(x,\lambda)\, d\lambda \]
and compare this with (31.3), we note that $u_0$ is the solution of equation (32.1) with $f \equiv 0$, i.e. of the homogeneous equation. By Theorem 31.1, then,
\[ u_0(x,t) = \frac{1}{2\sqrt{\pi t}} \int_{\mathbb{R}} e^{-\frac{(x-s)^2}{4t}}\, \varphi(s)\, ds, \]
and we arrive at
\[ u(x,t) = u_0(x,t) + \underbrace{\int_{\mathbb{R}} \hat{u}_1(\lambda,t)\, e(x,\lambda)\, d\lambda}_{u_1(x,t)}. \]
So now we only have to evaluate $u_1(x,t)$:
\[
u_1(x,t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-\lambda^2 t} \int_0^t \hat{f}(\lambda,\tau)\, e^{\lambda^2 \tau}\, d\tau\; e^{i\lambda x}\, d\lambda. \tag{32.4}
\]
Since $\hat{f}(\lambda,\tau) = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{\mathbb{R}} f(s,\tau)\, e^{-i\lambda s}\, ds$, (32.4) can be continued as
\[
u_1(x,t) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-\lambda^2 t} \int_0^t \int_{\mathbb{R}} f(s,\tau)\, e^{-i\lambda s}\, ds\; e^{\lambda^2 \tau}\, d\tau\; e^{i\lambda x}\, d\lambda.
\]
Now rearrange the terms and the order of integration:
\[
u_1(x,t) = \int_{\mathbb{R}} \Big[ \int_0^t \underbrace{\Big( \frac{1}{2\pi} \int_{\mathbb{R}} e^{i\lambda(x-s) - \lambda^2 (t-\tau)}\, d\lambda \Big)}_{=\, I(x,s,t-\tau)}\, f(s,\tau)\, d\tau \Big]\, ds. \tag{32.5}
\]
But $I(x,s,t-\tau)$ was evaluated in Lecture 31. We have
\[
I(x,s,t-\tau) = \frac{e^{-\frac{(x-s)^2}{4(t-\tau)}}}{2\sqrt{\pi (t-\tau)}}.
\]
Plugging it into (32.5), one has
\[
u_1(x,t) = \int_{\mathbb{R}} \Big[ \int_0^t \frac{e^{-\frac{(x-s)^2}{4(t-\tau)}}}{2\sqrt{\pi (t-\tau)}}\, f(s,\tau)\, d\tau \Big]\, ds
= \int_{\mathbb{R}} ds \int_0^t d\tau\; \frac{\exp\Big( -\frac{(x-s)^2}{4(t-\tau)} \Big)}{2\sqrt{\pi (t-\tau)}}\, f(s,\tau).
\]
The second expression is equivalent; it is just another notation, sometimes used in research papers, which makes it easier to associate each variable with its integral when multiple integrals are present.
Theorem 32.1. The solution to (32.1) can be represented in the following form:
\[
u(x,t) = \frac{1}{2\sqrt{\pi t}} \int_{\mathbb{R}} e^{-\frac{(x-s)^2}{4t}}\, \varphi(s)\, ds
+ \int_{\mathbb{R}} \Big[ \int_0^t \frac{e^{-\frac{(x-s)^2}{4(t-\tau)}}}{2\sqrt{\pi (t-\tau)}}\, f(s,\tau)\, d\tau \Big]\, ds.
\]
In this case we again have a sort of superposition principle, where the first term corresponds to the homogeneous solution and the second term is a particular solution with initial condition $\varphi \equiv 0$. This formula is not particularly pleasant, but I'm not aware of any better derivation of it.
Exercise 32.2. Show that the solution of the heat equation with $\varphi(x) = 0$ and $f(x,t) = \delta(x)$ for all $t$ is
\[
u(x,t) = \sqrt{\frac{t}{\pi}}\, e^{-\frac{x^2}{4t}} - \frac{|x|}{2}\, \operatorname{erfc}\Big( \frac{|x|}{2\sqrt{t}} \Big),
\qquad \text{where } \operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-u^2}\, du
\]
is the complementary error function.
2. Nonhomogeneous Wave Equation
Consider the initial value problem
\[
\begin{cases}
u_{tt} = u_{xx} + f(x,t) & \text{(32.6a)} \\
u(x,0) = \varphi(x) & \text{(32.6b)} \\
u_t(x,0) = \psi(x) & \text{(32.6c)}
\end{cases}
\]
where $f$ is the forcing function on the string, or a perturbation on the water. Our arguments are going to be absolutely similar to those in Section 1, and we will not repeat them.
In place of (32.3) one gets
\[ \hat{u}_{tt}(\lambda,t) + \lambda^2\, \hat{u}(\lambda,t) = \hat{f}(\lambda,t). \tag{32.7} \]
It's a nonhomogeneous second-order linear differential equation with constant coefficients. We apply the method of variation of parameters (see the Appendix). This means that we are looking for a particular solution to (32.7) in the form
\[ \hat{u}(\lambda,t) = C_1(\lambda,t)\, e^{i\lambda t} + C_2(\lambda,t)\, e^{-i\lambda t}, \]
where $C_1, C_2$ satisfy
\[
\begin{pmatrix} e^{i\lambda t} & e^{-i\lambda t} \\ i\lambda\, e^{i\lambda t} & -i\lambda\, e^{-i\lambda t} \end{pmatrix}
\begin{pmatrix} C_1' \\ C_2' \end{pmatrix}
= \begin{pmatrix} 0 \\ \hat{f} \end{pmatrix}
\]
and $C_1', C_2'$ are the (partial) derivatives of $C_1, C_2$, respectively, with respect to $t$.
Solving this system, one gets
\[
C_1' = -\frac{\hat{f}\, e^{-i\lambda t}}{W}, \qquad C_2' = \frac{\hat{f}\, e^{i\lambda t}}{W}, \tag{32.8}
\]
where $W$ is the Wronskian of $\{ e^{i\lambda t}, e^{-i\lambda t} \}$, i.e.
\[
W = \det \begin{pmatrix} e^{i\lambda t} & e^{-i\lambda t} \\ i\lambda\, e^{i\lambda t} & -i\lambda\, e^{-i\lambda t} \end{pmatrix} = -2i\lambda.
\]
It now follows from (32.8) that
\[
C_1(\lambda,t) = \int \frac{\hat{f}(\lambda,t)\, e^{-i\lambda t}}{2i\lambda}\, dt, \qquad
C_2(\lambda,t) = -\int \frac{\hat{f}(\lambda,t)\, e^{i\lambda t}}{2i\lambda}\, dt. \tag{32.9}
\]
Note that these integrals are indefinite so far.
For the general solution to (32.7) we have
\[
\hat{u}(\lambda,t) = e^{i\lambda t} \int_0^t \frac{\hat{f}(\lambda,\tau)\, e^{-i\lambda \tau}}{2i\lambda}\, d\tau
- e^{-i\lambda t} \int_0^t \frac{\hat{f}(\lambda,\tau)\, e^{i\lambda \tau}}{2i\lambda}\, d\tau
+ C_1(\lambda)\, e^{i\lambda t} + C_2(\lambda)\, e^{-i\lambda t}. \tag{32.10}
\]
We note that $C_1(\lambda), C_2(\lambda)$ are functions of $\lambda$ only, as opposed to $C_1(\lambda,t), C_2(\lambda,t)$ defined by (32.9).
Now let's find $C_1(\lambda), C_2(\lambda)$. We have, by (32.6b),
\[
u(x,0) = \int_{\mathbb{R}} \hat{u}(\lambda,0)\, e(x,\lambda)\, d\lambda, \qquad
\varphi(x) = \int_{\mathbb{R}} \hat{\varphi}(\lambda)\, e(x,\lambda)\, d\lambda, \quad
\hat{\varphi}(\lambda) = \int_{\mathbb{R}} \varphi(x)\, \overline{e(x,\lambda)}\, dx,
\]
so
\[ \hat{u}(\lambda, 0) = \hat{\varphi}(\lambda). \tag{32.11} \]
Similarly, by (32.6c),
\[
\hat{\psi}(\lambda) = \int_{\mathbb{R}} \psi(x)\, \overline{e(x,\lambda)}\, dx
\qquad\text{and}\qquad
\hat{u}_t(\lambda, 0) = \hat{\psi}(\lambda). \tag{32.12}
\]
But on the other hand, by (32.10),
\[
\hat{u}(\lambda,0) = \underbrace{\int_0^0 \frac{\hat{f}(\lambda,\tau)\, e^{-i\lambda\tau}}{2i\lambda}\, d\tau}_{=0}
- \underbrace{\int_0^0 \frac{\hat{f}(\lambda,\tau)\, e^{i\lambda\tau}}{2i\lambda}\, d\tau}_{=0}
+ C_1(\lambda) + C_2(\lambda)
\;\Rightarrow\; \hat{u}(\lambda,0) = C_1(\lambda) + C_2(\lambda).
\]
Next, differentiate (32.10) in $t$:
\[
\hat{u}_t(\lambda,t) = i\lambda\, e^{i\lambda t} \int_0^t \frac{\hat{f}(\lambda,\tau)\, e^{-i\lambda\tau}}{2i\lambda}\, d\tau
+ e^{i\lambda t}\, \frac{\hat{f}(\lambda,t)\, e^{-i\lambda t}}{2i\lambda}
+ i\lambda\, e^{-i\lambda t} \int_0^t \frac{\hat{f}(\lambda,\tau)\, e^{i\lambda\tau}}{2i\lambda}\, d\tau
- e^{-i\lambda t}\, \frac{\hat{f}(\lambda,t)\, e^{i\lambda t}}{2i\lambda}
+ i\lambda\, C_1(\lambda)\, e^{i\lambda t} - i\lambda\, C_2(\lambda)\, e^{-i\lambda t}
\]
\[
= \frac{e^{i\lambda t}}{2} \int_0^t \hat{f}(\lambda,\tau)\, e^{-i\lambda\tau}\, d\tau
+ \frac{\hat{f}(\lambda,t)}{2i\lambda}
+ \frac{e^{-i\lambda t}}{2} \int_0^t \hat{f}(\lambda,\tau)\, e^{i\lambda\tau}\, d\tau
- \frac{\hat{f}(\lambda,t)}{2i\lambda}
+ i\lambda\, C_1(\lambda)\, e^{i\lambda t} - i\lambda\, C_2(\lambda)\, e^{-i\lambda t}
\]
\[
= \frac{e^{i\lambda t}}{2} \int_0^t \hat{f}(\lambda,\tau)\, e^{-i\lambda\tau}\, d\tau
+ \frac{e^{-i\lambda t}}{2} \int_0^t \hat{f}(\lambda,\tau)\, e^{i\lambda\tau}\, d\tau
+ i\lambda\, C_1(\lambda)\, e^{i\lambda t} - i\lambda\, C_2(\lambda)\, e^{-i\lambda t},
\]
and compute it at $t = 0$:
\[
\hat{u}_t(\lambda,0) = i\lambda\, C_1(\lambda) - i\lambda\, C_2(\lambda) = i\lambda \big( C_1(\lambda) - C_2(\lambda) \big).
\]
By (32.11) and (32.12) we have
\[
\begin{cases}
C_1 + C_2 = \hat{\varphi} \\[1mm]
C_1 - C_2 = \dfrac{\hat{\psi}}{i\lambda}
\end{cases}
\;\Rightarrow\;
C_1(\lambda) = \frac{1}{2}\Big( \hat{\varphi} + \frac{\hat{\psi}}{i\lambda} \Big), \qquad
C_2(\lambda) = \frac{1}{2}\Big( \hat{\varphi} - \frac{\hat{\psi}}{i\lambda} \Big).
\]
Finally,
\[
\hat{u}(\lambda,t) = e^{i\lambda t} \Big[ \int_0^t \frac{\hat{f}(\lambda,\tau)\, e^{-i\lambda\tau}}{2i\lambda}\, d\tau
+ \frac{1}{2}\Big( \hat{\varphi}(\lambda) + \frac{\hat{\psi}(\lambda)}{i\lambda} \Big) \Big]
+ e^{-i\lambda t} \Big[ -\int_0^t \frac{\hat{f}(\lambda,\tau)\, e^{i\lambda\tau}}{2i\lambda}\, d\tau
+ \frac{1}{2}\Big( \hat{\varphi}(\lambda) - \frac{\hat{\psi}(\lambda)}{i\lambda} \Big) \Big]
\]
\[
= \hat{u}_0(\lambda,t)
+ \underbrace{\frac{e^{i\lambda t}}{2i\lambda} \int_0^t \hat{f}(\lambda,\tau)\, e^{-i\lambda\tau}\, d\tau
- \frac{e^{-i\lambda t}}{2i\lambda} \int_0^t \hat{f}(\lambda,\tau)\, e^{i\lambda\tau}\, d\tau}_{\hat{u}_1(\lambda,t)},
\]
where
\[
\hat{u}_0(\lambda,t) = \frac{1}{2}\Big( \hat{\varphi} + \frac{\hat{\psi}}{i\lambda} \Big)\, e^{i\lambda t}
+ \frac{1}{2}\Big( \hat{\varphi} - \frac{\hat{\psi}}{i\lambda} \Big)\, e^{-i\lambda t}.
\]
.
Comparing this to (30.9) in Lecture 30 we derive that
u
0
(x, t)
1
u
0
(, t)e(x, t)d
200 32. NONHOMOGENEOUS WAVE AND HEAT EQUATIONS
is the solution of equation (32.6) with f 0, i.e. the homogeneous equation (32.6).
By Theorem 30.4 then
u
0
(x, t) =
(x t) + (x +t)
2
+
1
2
x+t
xt
(s)ds
and we arrive at
u(x, t) = u
0
(x, t) +
1
u
1
(, t)e(x, )d
. .
u
1
(x,t)
.
Let us treat $u_1(x,t)$. Putting everything together, one has
\[
u_1(x,t) = \int_{\mathbb{R}} \Big[ \frac{e^{i\lambda t}}{2i\lambda} \int_0^t \hat{f}(\lambda,\tau)\, e^{-i\lambda\tau}\, d\tau
- \frac{e^{-i\lambda t}}{2i\lambda} \int_0^t \hat{f}(\lambda,\tau)\, e^{i\lambda\tau}\, d\tau \Big]\, e(x,\lambda)\, d\lambda \tag{32.13}
\]
\[
= \int_{\mathbb{R}} \frac{e^{i\lambda t}}{2i\lambda} \int_0^t \Big( \frac{1}{2\pi} \int_{\mathbb{R}} f(s,\tau)\, e^{-i\lambda s}\, ds \Big)\, e^{-i\lambda\tau}\, d\tau\; e^{i\lambda x}\, d\lambda
- \int_{\mathbb{R}} \frac{e^{-i\lambda t}}{2i\lambda} \int_0^t \Big( \frac{1}{2\pi} \int_{\mathbb{R}} f(s,\tau)\, e^{-i\lambda s}\, ds \Big)\, e^{i\lambda\tau}\, d\tau\; e^{i\lambda x}\, d\lambda.
\]
So we got two triple integrals! Let us integrate with respect to $\lambda$ first:
\[
u_1(x,t) = \int_{\mathbb{R}} \Big[ \int_0^t \underbrace{\Big( \frac{1}{2\pi} \int_{\mathbb{R}} \frac{e^{i\lambda(t-\tau+x-s)}}{2i\lambda}\, d\lambda \Big)}_{I(t-\tau+x-s)}\, f(s,\tau)\, d\tau \Big]\, ds
- \int_{\mathbb{R}} \Big[ \int_0^t \underbrace{\Big( \frac{1}{2\pi} \int_{\mathbb{R}} \frac{e^{i\lambda(-(t-\tau)+x-s)}}{2i\lambda}\, d\lambda \Big)}_{I(-(t-\tau)+x-s)}\, f(s,\tau)\, d\tau \Big]\, ds. \tag{32.14}
\]
We need to evaluate $I(a)$, where
\[ I(a) = \frac{1}{2\pi} \int_{\mathbb{R}} \frac{e^{i\lambda a}}{2i\lambda}\, d\lambda. \]
But it's been done (Lecture 30, equation (30.15)):
\[ I(a) = \frac{1}{4}\, \operatorname{sgn} a, \]
and (32.14) can be continued.
Thus
\[
u_1(x,t) = \frac{1}{4} \int_{\mathbb{R}} \Big[ \int_0^t \operatorname{sgn}(t-\tau+x-s)\, f(s,\tau)\, d\tau \Big]\, ds
- \frac{1}{4} \int_{\mathbb{R}} \Big[ \int_0^t \operatorname{sgn}\big( -(t-\tau)+x-s \big)\, f(s,\tau)\, d\tau \Big]\, ds.
\]
Since $\operatorname{sgn}(-a) = -\operatorname{sgn} a$, we can continue:
\[
u_1(x,t) = \frac{1}{4} \int_{\mathbb{R}} ds \int_0^t
\underbrace{\big[ \operatorname{sgn}\big( (s-\tau) - (x-t) \big) - \operatorname{sgn}\big( (s+\tau) - (x+t) \big) \big]}_{=\,\Phi(s,\tau)}\, f(s,\tau)\, d\tau. \tag{32.15}
\]
Let us now figure out the support of $\Phi(s,\tau)$. In the $(s,\tau)$ plane, draw the characteristic lines $s - \tau = x - t$ and $s + \tau = x + t$; together with the axis $\tau = 0$ they cut out the triangle III with vertices $(x-t, 0)$, $(x+t, 0)$, $(x, t)$, with the region I to its left and the region II to its right. Since $\tau \in (0, t)$, it is enough to check only the regions I, II, III.
I. $\operatorname{sgn}((s-\tau)-(x-t)) = -1$, $\operatorname{sgn}((s+\tau)-(x+t)) = -1 \;\Rightarrow\; \Phi(s,\tau) = 0$.[1]
II. $\operatorname{sgn}((s-\tau)-(x-t)) = 1$, $\operatorname{sgn}((s+\tau)-(x+t)) = 1 \;\Rightarrow\; \Phi(s,\tau) = 0$.
III. $\operatorname{sgn}((s-\tau)-(x-t)) = 1$, $\operatorname{sgn}((s+\tau)-(x+t)) = -1 \;\Rightarrow\; \Phi(s,\tau) = 2$.
So $\Phi(s,\tau) = 2$ for $(s,\tau) \in$ III and zero otherwise, and (32.15) becomes
\[
u_1(x,t) = \frac{1}{2} \iint_{\text{III}} f(s,\tau)\, ds\, d\tau
= \frac{1}{2} \iint_{\Delta(x,t)} f(s,\tau)\, ds\, d\tau, \qquad \Delta(x,t) \equiv \text{III}.
\]
We are now able to state the final result.
Theorem 32.3. The solution $u(x,t)$ to the initial value problem
\[
u_{tt} = u_{xx} + f(x,t), \qquad u(x,0) = \varphi(x), \qquad u_t(x,0) = \psi(x) \tag{32.16}
\]
can be represented as
\[
u(x,t) = \frac{\varphi(x-t) + \varphi(x+t)}{2} + \frac{1}{2} \int_{x-t}^{x+t} \psi(s)\, ds
+ \frac{1}{2} \iint_{\Delta(x,t)} f(s,\tau)\, ds\, d\tau, \tag{32.17}
\]
where $\Delta(x,t)$ is the triangle in the $(s,\tau)$ plane bounded by the lines $s - \tau = x - t$, $s + \tau = x + t$ and $\tau = 0$ (see figure).
[1] To see this, note that a point of I lies above the line $\tau = s - (x-t)$ and below the line $\tau = (x+t) - s$, i.e. $s - \tau < x - t$ and $s + \tau < x + t$, so both signs are $-1$.
Note that $u(x,t)$ can also be written explicitly as
\[
u(x,t) = \frac{\varphi(x-t) + \varphi(x+t)}{2} + \frac{1}{2} \int_{x-t}^{x+t} \psi(s)\, ds
+ \frac{1}{2} \int_0^t \int_{\tau + (x-t)}^{-\tau + (x+t)} f(s,\tau)\, ds\, d\tau.
\]
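The explicit formula admits a quick sanity check by a manufactured solution (a sketch, not from the notes): with $\varphi = \psi = 0$ and constant forcing $f \equiv 2$, the exact solution of $u_{tt} = u_{xx} + 2$ is $u = t^2$, and the triangle term must reproduce it, since $\frac{1}{2}\iint_{\Delta} 2\, ds\, d\tau = \operatorname{Area}\Delta(x,t) = t^2$.

```python
import numpy as np

# Midpoint-rule evaluation of the forcing term in (32.17):
#   (1/2) int_0^t int_{x-(t-tau)}^{x+(t-tau)} f(s, tau) ds dtau.
def forcing_term(x, t, f, n=400, m=200):
    dtau = t / n
    taus = (np.arange(n) + 0.5) * dtau           # midpoints in tau
    total = 0.0
    for tau in taus:
        a, b = x - (t - tau), x + (t - tau)      # slice of the triangle
        ds = (b - a) / m
        smid = a + (np.arange(m) + 0.5) * ds     # midpoints in s
        total += np.sum(f(smid, tau)) * ds * dtau
    return 0.5 * total

f = lambda s, tau: 2.0 + 0.0 * s                 # constant forcing f = 2
print(abs(forcing_term(0.3, 1.5, f) - 1.5**2) < 1e-9)  # -> True: u = t^2
```

The same routine with the indicator forcing of Example 32.5 below reproduces the half-area formula numerically.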
Definition 32.4. $\Delta(x,t)$ is called the characteristic triangle.
One can easily see that $\Delta(x,t)$ expands as $t$ increases, for every fixed $x$.
Example 32.5. Consider the case $\varphi(x) \equiv 0 \equiv \psi(x)$ and
\[
f(x,t) = \chi_{\Pi}(x,t) = \begin{cases} 1, & (x,t) \in \Pi \\ 0, & (x,t) \notin \Pi, \end{cases}
\qquad \Pi = \{ (x,t) : -a \le x \le a,\ 0 \le t \le T \}.
\]
That is, the perturbation $f(x,t)$ acts like a blast concentrated on $[-a, a]$ and lasting $T$ seconds.
Then the solution of (32.16) takes the form
\[
u(x,t) = \frac{1}{2} \iint_{\Delta(x,t)} \chi_{\Pi}(s,\tau)\, ds\, d\tau
= \frac{1}{2} \iint_{\Delta(x,t) \cap \Pi} ds\, d\tau
= \frac{1}{2}\, \operatorname{Area}\big( \Delta(x,t) \cap \Pi \big).
\]
Let us interpret this formula. Fix $x = x_0 \notin (-a, a)$. As follows from Figure 1, $u(x_0, t) = 0$ as long as $\Delta(x_0, t)$ and $\Pi$ do not intersect, i.e. the blast has yet to reach $x_0$ ($t < t_0$). Then $u(x_0, t)$ equals half the shaded area for $t_0 < t < t_a$, and when $t \ge t_a$ we have $\Pi \subset \Delta(x_0, t)$, so
\[ u(x_0, t) = \frac{1}{2} \cdot 2a \cdot T = aT \]
(see Figure 2).
Exercise 32.6. Find an analytic expression for $u(x_0, t)$ in Figure 2. Consider three cases: $x_0 > a$, $-a < x_0 < a$, $x_0 < -a$. Compute also $t_a$ in each of these cases.
(Figure 1: the $(s,\tau)$ plane with the rectangle $\Pi = [-a,a] \times [0,T]$ and the characteristic triangle $\Delta(x_0, t)$ for $x_0 > a$; the triangle first touches $\Pi$ at $t = t_0$ and contains it for $t \ge t_a$.)
(Figure 2: the graph of $t \mapsto u(x_0, t)$: zero up to $t_0 = x_0 - a$, then increasing, then saturating at the value $aT$ for $t \ge t_a$.)
Appendix: The Method of Variation of Parameters
Although the method can be generalized to higher-order nonhomogeneous linear ordinary differential equations, we will review here the cases of first- and second-order nonhomogeneous linear ODEs.
1) First order nonhomogeneous linear ODE: Consider
y
t
+p(x)y = f(x).
Rewrite y = uv where u is a solution to the homogeneous equation
u
t
+p(x)u = 0 , u = e
p(x)dx
.
Note that this method is called variation of parameters because whereas the
general solution to the homogeneous equation y
t
+p(x)y = 0 is y = Ce
p(x)dx
for some constant C, here we consider C to vary by being a function of x (and
called it v).
Now our original equation becomes
$$u'v + uv' + puv = f \;\Longrightarrow\; \underbrace{(u' + pu)}_{=0}v + uv' = f \;\Longrightarrow\; uv' = f \;\Longrightarrow\; v' = fu^{-1}.$$
Hence
$$v = \int_{x_0}^{x} \left(fu^{-1}\right)(s)\,ds + C \;\Longrightarrow\; y = u(x)\left[\int_{x_0}^{x} \left(fu^{-1}\right)(s)\,ds + C\right] \;\Longrightarrow\; y = e^{-\int p}\left[\int_{x_0}^{x} \left(fe^{\int p}\right)(s)\,ds + C\right].$$
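The first-order formula can be verified numerically. The sketch below is an illustration we add (the helper names are ours): it implements $y = e^{-\int p}\left[\int_{x_0}^x f e^{\int p}\,ds + C\right]$ with trapezoid-rule integrals and compares against the exact solution of $y' + y = x$, $y(0) = 0$, namely $y = x - 1 + e^{-x}$.

```python
import numpy as np

def solve_first_order(p, f, x0, y0, x):
    """Variation of parameters for y' + p(x) y = f(x), y(x0) = y0,
    on an increasing grid x with x[0] = x0 (trapezoid-rule integrals)."""
    def cumtrapz(g):
        return np.concatenate([[0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(x))])
    u = np.exp(-cumtrapz(p(x)))      # homogeneous solution, u(x0) = 1
    v = cumtrapz(f(x) / u) + y0      # v' = f/u, and C = y0 since u(x0) = 1
    return u * v

x = np.linspace(0.0, 2.0, 2001)
y = solve_first_order(lambda t: np.ones_like(t), lambda t: t, 0.0, 0.0, x)
print(np.max(np.abs(y - (x - 1 + np.exp(-x)))))   # small discretization error
```

The error is $O(h^2)$ from the trapezoid rule, so refining the grid drives it down quadratically.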
2) Second order nonhomogeneous linear ODE: Consider
$$y'' + p(x)y' + q(x)y = f(x).$$
Solve first the corresponding homogeneous equation
$$y'' + p(x)y' + q(x)y = 0$$
by the method of your choice and obtain two linearly independent solutions $y_1, y_2$.
Now we are looking for a particular solution of the form
$$y = C_1(x)y_1 + C_2(x)y_2.$$
We will further assume that
$$C_1'(x)y_1 + C_2'(x)y_2 = 0. \tag{32.18}$$
Then, by differentiating, we obtain
$$y' = C_1'(x)y_1 + C_1(x)y_1' + C_2'(x)y_2 + C_2(x)y_2' = C_1(x)y_1' + C_2(x)y_2',$$
$$y'' = C_1'(x)y_1' + C_2'(x)y_2' + C_1(x)y_1'' + C_2(x)y_2''. \tag{32.19}$$
Plugging (32.19) into the original ODE leads to
$$f = C_1'(x)y_1' + C_2'(x)y_2' + C_1(x)y_1'' + C_2(x)y_2'' + p(x)\left(C_1(x)y_1' + C_2(x)y_2'\right) + q(x)\left(C_1(x)y_1 + C_2(x)y_2\right)$$
$$= C_1'(x)y_1' + C_2'(x)y_2' + C_1(x)\underbrace{\left(y_1'' + p(x)y_1' + q(x)y_1\right)}_{=0} + C_2(x)\underbrace{\left(y_2'' + p(x)y_2' + q(x)y_2\right)}_{=0}$$
$$\Longrightarrow\; f = C_1'(x)y_1' + C_2'(x)y_2'. \tag{32.20}$$
Combining (32.18) and (32.20) leads to the system of equations
$$\begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix}\begin{pmatrix} C_1'(x) \\ C_2'(x) \end{pmatrix} = \begin{pmatrix} 0 \\ f \end{pmatrix}.$$
Using Cramer's rule, we find that
$$C_1'(x) = \frac{W_1}{W}, \qquad C_2'(x) = \frac{W_2}{W}$$
where $W$ is the Wronskian of $y_1, y_2$, i.e.
$$W = \det\begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix}$$
and
$$W_1 = \det\begin{pmatrix} 0 & y_2 \\ f & y_2' \end{pmatrix} = -fy_2, \qquad W_2 = \det\begin{pmatrix} y_1 & 0 \\ y_1' & f \end{pmatrix} = fy_1.$$
Hence
$$C_1(x) = -\int \frac{fy_2}{W}\,dx, \qquad C_2(x) = \int \frac{fy_1}{W}\,dx \tag{32.21}$$
and the general solution to our ODE is
$$y = y_c + y_p, \qquad y_c = C_1y_1 + C_2y_2, \qquad y_p = -y_1\int_{x_0}^{x}\frac{fy_2}{W}\,dx + y_2\int_{x_0}^{x}\frac{fy_1}{W}\,dx$$
where $C_1, C_2$ are constants and $x_0$ is arbitrary.
Exercise 32.7. Find the general solution of
$$y'' + y = \tan t, \qquad 0 < t < \frac{\pi}{2}.$$
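Formula (32.21) can be tested numerically on Exercise 32.7, where $y_1 = \cos t$, $y_2 = \sin t$ and $W = 1$. The sketch below is ours (the helper names are not from the notes): it builds $y_p$ by quadrature and checks the residual $y_p'' + y_p - \tan t$ by finite differences.

```python
import numpy as np

def cumtrapz(g, x):
    """Cumulative trapezoid-rule integral of samples g over the grid x."""
    return np.concatenate([[0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(x))])

def particular_solution(y1, y2, dy1, dy2, f, x):
    """y_p = -y1 * int(f y2 / W) + y2 * int(f y1 / W), formula (32.21)."""
    W = y1(x) * dy2(x) - dy1(x) * y2(x)              # Wronskian of y1, y2
    return (-y1(x) * cumtrapz(f(x) * y2(x) / W, x)
            + y2(x) * cumtrapz(f(x) * y1(x) / W, x))

# Exercise 32.7: y'' + y = tan t, with y1 = cos, y2 = sin, W = 1
x = np.linspace(0.1, 1.2, 2201)
yp = particular_solution(np.cos, np.sin, lambda t: -np.sin(t), np.cos, np.tan, x)
# residual y_p'' + y_p - tan x at interior points (second difference)
h = x[1] - x[0]
res = (yp[2:] - 2 * yp[1:-1] + yp[:-2]) / h**2 + yp[1:-1] - np.tan(x[1:-1])
print(np.max(np.abs(res)))   # close to zero
```

Starting the integrals at $x_0 = 0.1$ only shifts $y_p$ by a homogeneous solution, which the residual check is insensitive to.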
Exercise 32.8. Show that the solution to the IVP
$$Ly = y'' + p(t)y' + q(t)y = g(t), \qquad y(t_0) = y_0, \quad y'(t_0) = y_0'$$
can be written as
$$y = u + v$$
where $u, v$ solve
$$Lu = 0, \quad u(t_0) = y_0, \quad u'(t_0) = y_0'$$
$$Lv = g, \quad v(t_0) = 0, \quad v'(t_0) = 0$$
respectively.
(Hint: use (32.21) and choose $C_1, C_2$ to satisfy the ICs. Then $u = C_1y_1 + C_2y_2$ and $v$ is the other part.)

Exercise 32.9. a) Show that the solution of the IVP
$$\begin{cases} y'' + y = g(t) \\ y(t_0) = 0, \quad y'(t_0) = 0 \end{cases}$$
is
$$y(t) = \int_{t_0}^{t}\sin(t-s)\,g(s)\,ds.$$
b) Find the solution of
$$y'' + y = g(t), \qquad y(0) = y_0, \quad y'(0) = y_0'.$$
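The integral formula of Exercise 32.9a can be sanity-checked numerically. The sketch below is ours: for $g(t) = t$ the integral evaluates in closed form to $t - \sin t$, which indeed satisfies $y'' + y = t$ with $y(0) = y'(0) = 0$.

```python
import numpy as np

def duhamel(g, t0, t, n=4000):
    """y(t) = integral from t0 to t of sin(t - s) g(s) ds (trapezoid rule)."""
    s = np.linspace(t0, t, n + 1)
    vals = np.sin(t - s) * g(s)
    return float(np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(s)))

for t in (0.5, 1.0, 2.0):
    print(duhamel(lambda s: s, 0.0, t), t - np.sin(t))   # the two columns agree
```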
LECTURE 33
Wave and Heat Equations in Nonhomogeneous Media
So far we have considered the wave and heat equations of the form $u_{tt} = u_{xx} + f$, $u_t = u_{xx} + f$, which basically describe wave or heat propagation in a homogeneous medium. But
- when doing acoustic imaging via the wave equation for the Earth, for example, we have a nonhomogeneous medium;
- using the heat equation for ice with cracks, there are some discontinuities, and the medium is not homogeneous.

So more general models lead to
$$u_{tt} = u_{xx} - q(x)u + f$$
$$u_t = u_{xx} - q(x)u + f \tag{33.1}$$
where $q(x)$ is an optical (respectively, heat) potential.
Depending on the physical meaning, the first equation, for example, can take different forms. In the above, it is referred to as the wave equation with potential or the plasma wave equation. It is also known in relativistic quantum mechanics as the Klein–Gordon equation.¹ But other forms include
$$u_{tt} = c^2(x)u_{xx} \quad \text{(Helmholtz equation)}$$
$$u_{tt} - \partial_x\left(\rho(x)u_x\right) = 0. \tag{33.2}$$
None of these forms is considered canonical. Since $q(x)$ is not constant, this is a different ballgame (although we can handle boxes to get explicit solutions).
By denoting $-u_{xx} + q(x)u \equiv Hu$, (33.1) transforms into
$$u_{tt} + Hu = f$$
$$u_t + Hu = f. \tag{33.3}$$
Recall that
$$H = -\frac{d^2}{dx^2} + q(x) \tag{33.4}$$
is called the Schrödinger operator.
Note that if $q$ is a real-valued function then $H$ is self-adjoint. If $q$ is complex-valued, the problem is a whole lot more complicated.

¹In nonrelativistic quantum mechanics, one uses the nonstationary Schrödinger equation: $iu_t = -u_{xx} + qu$.
Our goal is to adjust Fourier's method to this setting. In general, a lot now depends on the properties of the optical potential $q(x)$, and things get a lot more complex. But the general idea of eigenfunction expansions does work and, as previously while studying the case $q \equiv 0$, we start from the spectral analysis of the operator $H$. Well, it's easier said than done, since if $q \neq 0$ the Schrödinger operator is a very difficult object and we are not in a position to present its theory at any level.
First of all, a lot depends on whether we deal with a finite or infinite interval.

1◦ Finite interval. That is,
$$\begin{cases} u_{tt} + Hu = f, & a \le x \le b, \quad -\infty < a, b < \infty \\ \text{boundary + initial conditions} \end{cases} \tag{33.5}$$
Typically, the spectrum of $H$ is purely discrete but infinite. I.e. the equation
$$\begin{cases} Hu = \lambda u \\ \text{boundary conditions} \end{cases}$$
has $L^2(a,b)$-solutions $e_n(x)$, i.e. eigenfunctions, corresponding to a discrete set of $\lambda_n$, the eigenvalues. Then, by Theorem 18.3, $\{e_n(x)\}$ forms an orthonormal basis in $L^2(a,b)$ and we apply the procedure of Lecture 28.
Namely, any solution to (33.5) can be represented as
$$u(x,t) = \sum_n u_n(t)e_n(x), \qquad u_n(t) = \langle u(x,t), e_n(x)\rangle$$
where $\langle f, g\rangle = \int_a^b f(x)\overline{g(x)}\,dx$.
Plug this expression into (33.5):
$$\sum_n u_n''(t)e_n(x) + H\sum_n u_n(t)e_n(x) = \sum_n f_n(t)e_n(x)$$
$$\Longrightarrow\; \sum_n u_n''(t)e_n(x) + \sum_n \lambda_n u_n(t)e_n(x) = \sum_n f_n(t)e_n(x) \;\Longrightarrow\; u_n'' + \lambda_n u_n = f_n.$$
Then we solve this second order linear nonhomogeneous equation by variation of parameters, finding the constants of integration from meeting the initial conditions.
Note that the $e_n(x)$ are no longer $\frac{1}{\sqrt{2\pi}}e^{inx}$ and the $u_n$, hence, are no longer Fourier coefficients. But the $u_n$ are commonly called generalized Fourier coefficients.
Example 33.1. Consider
$$u_{tt} = \partial_x\left((1 - x^2)u_x\right), \qquad x \in [-1, 1],$$
the wave equation in the form (33.2). So here we have the density function $\rho(x) = 1 - x^2$ on $[-1,1]$. By Theorem 22.4, $\sigma(H) = \{n(n+1)\}_{n\ge 0}$ and the Legendre polynomials $P_n(x)$ are the associated eigenfunctions and form an orthogonal basis in $L^2(-1,1)$. We have
$$P_n(x) = \frac{1}{n!\,2^n}\,\frac{d^n}{dx^n}\left(x^2 - 1\right)^n.$$
Then we need only solve
$$u_n'' + n(n+1)u_n = 0$$
where the derivatives are now in $t$.
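Generalized Fourier coefficients in the Legendre basis can be computed numerically. The sketch below is ours, under the normalization $\int_{-1}^{1} P_n^2\,dx = \frac{2}{2n+1}$, so $c_n = \frac{2n+1}{2}\int_{-1}^{1} f P_n\,dx$; for a smooth function such as $e^x$ the truncated expansion converges spectrally fast.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre quadrature nodes and weights on [-1, 1]
xq, wq = L.leggauss(64)

def legendre_coeffs(f, N):
    """Generalized Fourier coefficients c_n = (2n+1)/2 * int_{-1}^{1} f P_n dx."""
    return np.array([(2*n + 1) / 2 * np.sum(wq * f(xq) * L.legval(xq, [0]*n + [1]))
                     for n in range(N + 1)])

c = legendre_coeffs(np.exp, 20)          # expand e^x in Legendre polynomials
x = np.linspace(-1.0, 1.0, 7)
print(np.max(np.abs(L.legval(x, c) - np.exp(x))))   # tiny truncation error
```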
On a finite interval, we can get tons of orthogonal polynomial bases by this method by taking a different density function (Chebyshev, etc.). Ambartsumyan also famously showed in 1929 that, for a finite interval, if $\sigma(H) = \{n^2\}_{n\ge 0}$ then $q \equiv 0$.

2◦ Infinite interval. Now the spectrum $\sigma(H) = \sigma_d(H) \cup \sigma_c(H)$ typically has both a discrete part, with eigenfunctions $\psi_n(x)$, and a continuous part, with generalized eigenfunctions $\psi_\pm(x,\lambda)$.
Then the complete system of eigenfunctions of the discrete and continuous spectrum is
$$\{\psi_n(x)\} \cup \{\psi_\pm(x,\lambda)\}$$
where the $\psi_n$ are the eigenfunctions of the discrete spectrum and we have
$$\langle \psi_n, \psi_m\rangle = \delta_{nm}, \qquad \langle \psi_\pm(x,\lambda), \psi_\pm(x,\lambda')\rangle = \delta(\lambda - \lambda'), \qquad \langle \psi_n, \psi_\pm(x,\lambda)\rangle = 0.$$
Note that even with infinitely many $\psi_n$, they will not form a basis by themselves; the $\psi_\pm$ are needed to cover the rest of the space, as we will see in the next theorem.
Recall also that for $q \equiv 0$, i.e. $H = -\frac{d^2}{dx^2}$, by Theorem 30.2 we have
$$\sigma_d(H) = \varnothing, \qquad \sigma_c(H) = [0,\infty), \qquad \psi_\pm(x,\lambda) = \frac{1}{\sqrt{2\pi}}\,e^{\pm i\sqrt{\lambda}\,x}.$$
It is no longer true if $q \neq 0$. Furthermore, if $\sigma(H) = [0,\infty)$, this does not imply that $q(x) \equiv 0$ (counterexamples abound). But, amazingly enough, Fourier's Integral Theorem remains valid in the following edition.
Theorem 33.2. Let $H = -\frac{d^2}{dx^2} + q(x)$ on $L^2(\mathbb{R})$, and let $q(x)$ be such that $\sigma(H) = \sigma_d(H) \cup \sigma_c(H)$. Let $\psi_n(x)$ be the eigenfunctions of $\sigma_d(H)$ and let $\psi_\pm(x,\lambda)$ be the eigenfunctions of $\sigma_c(H)$. Then any $u \in L^2(\mathbb{R})$ can be represented as
$$u(x) = \sum_n u_n\psi_n(x) + \int_{\sigma_c(H)} \hat u_+(\lambda)\,\psi_+(x,\lambda)\,d\lambda + \int_{\sigma_c(H)} \hat u_-(\lambda)\,\psi_-(x,\lambda)\,d\lambda, \tag{33.7}$$
where $u_n = \langle u, \psi_n\rangle$, $\hat u_\pm(\lambda) = \int_{\mathbb{R}} u(x)\overline{\psi_\pm(x,\lambda)}\,dx$.

Note that the above becomes the Fourier transform for $q = 0$, with a square root:
$$\hat u_\pm(\lambda) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} u(x)\,e^{\mp i\sqrt{\lambda}\,x}\,dx$$
(and the two are put together if $u$ is even). So, writing
$$u(x,t) = \sum_n u_n(t)\psi_n(x) + \int_{\sigma_c(H)} \hat u_+(\lambda,t)\,\psi_+(x,\lambda)\,d\lambda + \int_{\sigma_c(H)} \hat u_-(\lambda,t)\,\psi_-(x,\lambda)\,d\lambda,$$
we have
$$u_{tt} = \sum_n u_n''(t)\psi_n(x) + \int_{\sigma_c(H)} \frac{\partial^2}{\partial t^2}\hat u_+(\lambda,t)\,\psi_+(x,\lambda)\,d\lambda + \int_{\sigma_c(H)} \frac{\partial^2}{\partial t^2}\hat u_-(\lambda,t)\,\psi_-(x,\lambda)\,d\lambda,$$
$$Hu = \sum_n \lambda_n u_n(t)\psi_n(x) + \int_{\sigma_c(H)} \lambda\,\hat u_+(\lambda,t)\,\psi_+(x,\lambda)\,d\lambda + \int_{\sigma_c(H)} \lambda\,\hat u_-(\lambda,t)\,\psi_-(x,\lambda)\,d\lambda.$$
Plugging it into (33.6), we get
$$\begin{cases} u_n''(t) + \lambda_n u_n(t) = f_n(t) \\ \hat u_\pm''(\lambda,t) + \lambda\,\hat u_\pm(\lambda,t) = \hat f_\pm(\lambda,t) \end{cases} \tag{33.8}$$
where
$$f_n(t) = \int_{\mathbb{R}} f(x,t)\overline{\psi_n(x)}\,dx, \qquad \hat f_\pm(\lambda,t) = \int_{\mathbb{R}} f(x,t)\overline{\psi_\pm(x,\lambda)}\,dx.$$
Then we solve the differential equations (33.8) by variation of parameters, using the initial conditions. So we get $u_n(t)$, $\hat u_\pm(\lambda,t)$, and hence $u(x,t)$.

LECTURE 34

$L^2$ Spaces on Domains in Multidimensional Spaces

The set of functions $f$ defined on a domain $\Omega \subseteq \mathbb{R}^n$ such that
$$\int_\Omega |f(X)|^2\,dV \equiv \int\cdots\int_\Omega |f(x_1, x_2, \dots, x_n)|^2\,dx_1\,dx_2\cdots dx_n < \infty$$
is called the set of square integrable functions on $\Omega$ and commonly denoted by $L^2(\Omega)$.
Typical notation:
1) $[0,1]\times[0,1]$ is a rectangle in $\mathbb{R}^2$. [Figure: the unit square in the $(x,y)$-plane.]
2) $[0,1]\times[0,1]\times[0,1] \equiv [0,1]^3$ is a unit cube in $\mathbb{R}^3$:
$$[0,1]^3 = \{(x,y,z) \in \mathbb{R}^3 : 0 \le x \le 1,\ 0 \le y \le 1,\ 0 \le z \le 1\}.$$
3) $S^2$ is a unit sphere in $\mathbb{R}^3$:
$$S^2 = \{(x,y,z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1\}.$$
4) $\mathbb{T}^2$ is a unit torus¹ in $\mathbb{R}^3$. [Figure: a torus in $(x,y,z)$-space.]

¹I.e. a bagel; and the unit torus $\mathbb{T}^1$ in $\mathbb{R}^2$ is the unit circle.
5) $[0,1]^n$ is a unit cube in $\mathbb{R}^n$:
$$[0,1]^n = \{(x_1, x_2, \dots, x_n) \in \mathbb{R}^n : 0 \le x_i \le 1,\ i = 1, 2, \dots, n\}.$$
6) $B^n$ is a unit ball in $\mathbb{R}^n$:
$$B^n = \{(x_1, x_2, \dots, x_n) \in \mathbb{R}^n : x_1^2 + x_2^2 + \cdots + x_n^2 \le 1\}.$$
7) $S^{n-1}$ is a unit sphere in $\mathbb{R}^n$, i.e. $S^{n-1} = \partial B^n$, the boundary of $B^n$:
$$S^{n-1} = \{(x_1, x_2, \dots, x_n) \in \mathbb{R}^n : x_1^2 + x_2^2 + \cdots + x_n^2 = 1\}.$$
Note that $S^{n-1}$ is an object of dimension $n-1$ in a space of dimension $n$.
Theorem 34.2. $L^2(\Omega)$ is a Hilbert space.

Proof. Let us show first that if $f, g \in L^2(\Omega)$ then $f + g \in L^2(\Omega)$:
$$|f+g|^2 = (f+g)\overline{(f+g)} = |f|^2 + |g|^2 + 2\operatorname{Re} f\bar g, \qquad \operatorname{Re} f\bar g \le |f\bar g| \le \frac{|f|^2 + |g|^2}{2}.$$
So $|f+g|^2 \le 2|f|^2 + 2|g|^2$ and hence
$$\int_\Omega |f+g|^2\,dV \le 2\int_\Omega |f|^2\,dV + 2\int_\Omega |g|^2\,dV < \infty.$$
So $L^2(\Omega)$ is a linear space. We have to show now that $L^2(\Omega)$ has a scalar product. Set
$$\langle f, g\rangle = \int_\Omega f(X)\overline{g(X)}\,dV.$$
It clearly has all the properties of a scalar product:
$$\langle f_1 + f_2, g\rangle = \langle f_1, g\rangle + \langle f_2, g\rangle, \qquad \langle f, g\rangle = \overline{\langle g, f\rangle}, \qquad \langle f, f\rangle \ge 0,$$
and hence $L^2(\Omega)$ is a Hilbert space. (Strictly speaking, a Hilbert space must also be complete; completeness of $L^2(\Omega)$ is the harder part, and we take it for granted here.) QED

Actually, $L^2$ on domains in $\mathbb{R}^n$ is not harder than $L^2$ on intervals.
Example 34.3. $L^2([0,1]^3)$. It's the set of all functions of three variables defined on the unit cube such that
$$\int_0^1\int_0^1\int_0^1 |f(x,y,z)|^2\,dx\,dy\,dz < \infty. \tag{34.1}$$
Any continuous function is subject to (34.1). More generally, any bounded function meets (34.1).
Let's look at unbounded functions in $L^2([0,1]^3)$. Consider
$$f(x,y,z) = \frac{1}{x^\alpha y^\beta z^\gamma}, \qquad \alpha, \beta, \gamma > 0.$$
Note that $f(X) \to \infty$ as $X \to 0$. We have
$$\int_{[0,1]^3}|f(x,y,z)|^2\,dx\,dy\,dz = \int_{[0,1]^3}\frac{dx\,dy\,dz}{x^{2\alpha}y^{2\beta}z^{2\gamma}} = \int_0^1\frac{dx}{x^{2\alpha}}\int_0^1\frac{dy}{y^{2\beta}}\int_0^1\frac{dz}{z^{2\gamma}} = \left.\frac{x^{-2\alpha+1}}{-2\alpha+1}\right|_0^1\,\left.\frac{y^{-2\beta+1}}{-2\beta+1}\right|_0^1\,\left.\frac{z^{-2\gamma+1}}{-2\gamma+1}\right|_0^1. \tag{34.2}$$
So if $-2\alpha+1 > 0$, $-2\beta+1 > 0$, $-2\gamma+1 > 0$, i.e. $\alpha, \beta, \gamma < 1/2$, then every factor in (34.2) is finite and
$$\int_{[0,1]^3}|f(x,y,z)|^2\,dx\,dy\,dz = \frac{1}{(1-2\alpha)(1-2\beta)(1-2\gamma)} < \infty.$$
Therefore, if $\alpha, \beta, \gamma < 1/2$ then $f(x,y,z) \in L^2([0,1]^3)$; otherwise $f \notin L^2([0,1]^3)$.
So $L^2(\Omega)$ contains unbounded functions.
Example 34.4. $L^2(\mathbb{R}^3) = \left\{ f(x,y,z) : \iiint_{\mathbb{R}^3} |f(x,y,z)|^2\,dx\,dy\,dz < \infty \right\}$.

Exercise 34.5. Show that
$$\frac{1}{1 + (x^2 + y^2 + z^2)} \in L^2(\mathbb{R}^3), \qquad \frac{1}{\sqrt[3]{xyz}} \notin L^2(\mathbb{R}^3),$$
$$xyz\,e^{-|x|-|y|-|z|} \in L^2(\mathbb{R}^3), \quad \text{but} \quad xyz\,e^{-|x|-|y|} \notin L^2(\mathbb{R}^3).$$
Example 34.6. $L^2(B^3)$. Use spherical coordinates:
$$\begin{cases} x = r\sin\theta\cos\varphi \\ y = r\sin\theta\sin\varphi \\ z = r\cos\theta \end{cases} \qquad 0 \le r < \infty, \quad 0 \le \varphi \le 2\pi, \quad 0 \le \theta \le \pi.$$
[Figure: the point $(x,y,z)$ with radius $r$, polar angle $\theta$, and azimuthal angle $\varphi$.]
Or, by setting $r' = r\sin\theta$:
$$x = r'\cos\varphi, \qquad y = r'\sin\varphi, \qquad z = r\cos\theta.$$
Compute the absolute value of the Jacobian² of $(x,y,z) \to (r,\theta,\varphi)$:
$$|J(r,\theta,\varphi)| = \left|\det\frac{\partial(x,y,z)}{\partial(r,\theta,\varphi)}\right| = r^2\sin\theta.$$
Hence if $f \in L^2(B^3)$,
$$\iiint_{B^3}|f(x,y,z)|^2\,dx\,dy\,dz = \int_0^1\int_0^{2\pi}\left(\int_0^\pi \left|\tilde f(r,\theta,\varphi)\right|^2\sin\theta\,d\theta\right)d\varphi\;r^2\,dr \tag{34.3}$$
where $\tilde f(r,\theta,\varphi) = f(r\sin\theta\cos\varphi,\ r\sin\theta\sin\varphi,\ r\cos\theta)$.
Let us show that
$$f(x,y,z) = \frac{1}{(x^2 + y^2 + z^2)^{1/2}} \in L^2(B^3).$$
Indeed, by (34.3),
$$\|f\|^2 = \int_0^1\left(\int_0^{2\pi}\left(\int_0^\pi \frac{1}{r^2}\sin\theta\,d\theta\right)d\varphi\right)r^2\,dr = \int_0^\pi\sin\theta\,d\theta\int_0^{2\pi}d\varphi\int_0^1 dr = 2\cdot 2\pi = 4\pi,$$
i.e. $f \in L^2(B^3)$.
This example is significant since $f(X)$ represents the Coulomb potential. Note that it likes higher dimensions, since $f(x) = \frac{1}{|x|} \notin L^2(B^1) = L^2(-1,1)$, and $f(x,y) = \frac{1}{\sqrt{x^2+y^2}} \notin L^2(B^2)$ since
$$\|f\|^2 = \iint_{B^2}\frac{dx\,dy}{x^2 + y^2} = \int_0^{2\pi}\int_0^1\frac{1}{r^2}\,r\,dr\,d\varphi = \infty,$$
but we've just seen that $f(x,y,z) \in L^2(B^3)$.
Note that exact computation of norms is rare. Most of the time, you'll need to use estimates to prove membership in $L^2(\Omega)$.

²The Jacobian can be arranged in some other ways too, but that leads at most to a change of sign.
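The computation $\|1/r\|^2_{L^2(B^3)} = 4\pi$ can be reproduced by quadrature in the spherical coordinates of (34.3). The sketch below is ours; the midpoint rule keeps the grid away from the $r = 0$ singularity, which the weight $r^2$ tames anyway.

```python
import numpy as np

def ball_L2_norm_sq(f, nr=200, ntheta=100, nphi=100):
    """Squared L2 norm over the unit ball via (34.3), midpoint rule."""
    r = (np.arange(nr) + 0.5) / nr                      # radii in (0, 1)
    th = (np.arange(ntheta) + 0.5) * np.pi / ntheta
    ph = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
    R, Th, Ph = np.meshgrid(r, th, ph, indexing="ij")
    x = R * np.sin(Th) * np.cos(Ph)
    y = R * np.sin(Th) * np.sin(Ph)
    z = R * np.cos(Th)
    cell = (1 / nr) * (np.pi / ntheta) * (2 * np.pi / nphi)
    return float(np.sum(np.abs(f(x, y, z))**2 * R**2 * np.sin(Th)) * cell)

coulomb = lambda x, y, z: 1.0 / np.sqrt(x**2 + y**2 + z**2)
print(ball_L2_norm_sq(coulomb), 4 * np.pi)   # both approximately 12.566
```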
Definition 34.7. The operator defined in $L^2(\Omega)$ by
$$\Delta u \equiv \frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2}, \qquad u \in L^2(\Omega),$$
is called the Laplace operator.

So the Laplace operator in $L^2(\mathbb{R})$ is $\frac{d^2}{dx^2}$, and in $\mathbb{R}^3$ we have
$$\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}, \qquad \text{i.e.} \quad \Delta = \partial_x^2 + \partial_y^2 + \partial_z^2.$$
Clearly, $\Delta$ is a linear operator.

Definition 34.8. Let $u(x,y,z)$ be a function of 3 variables; then
$$\frac{\partial u}{\partial n} \equiv \langle \nabla u, n\rangle, \qquad n \in \mathbb{R}^3, \quad \|n\| = 1,$$
is called the directional derivative with respect to $n$.
Note that $\nabla$ is the gradient and $n$ is often a normal vector to some surface.
Theorem 34.9 (Green's Formula). Let $\Omega$ be a connected domain in $\mathbb{R}^3$ and $\Gamma$ be its boundary; then for any $u, v$
$$\iiint_\Omega (u\Delta v - v\Delta u)\,dx\,dy\,dz = \iint_\Gamma \left(u\frac{\partial v}{\partial n} - v\frac{\partial u}{\partial n}\right)dS$$
where $n$ is the external normal vector to $\Gamma$.
This formula is a 3-space analog of integration by parts.

Definition 34.10. The Laplace operator in $L^2(\Omega)$ with domains
1) $\operatorname{Dom}\Delta = \left\{ u \in L^2 : \Delta u \in L^2,\ u\big|_\Gamma = 0 \right\}$
2) $\operatorname{Dom}\Delta = \left\{ u \in L^2 : \Delta u \in L^2,\ \frac{\partial u}{\partial n}\big|_\Gamma = 0 \right\}$
3) $\operatorname{Dom}\Delta = \left\{ u \in L^2 : \Delta u \in L^2,\ \left(\frac{\partial u}{\partial n} + hu\right)\big|_\Gamma = 0 \text{ for some } h \in \mathbb{R} \right\}$
is called, respectively,
1) the Dirichlet Laplacian,
2) the Neumann Laplacian,
3) the Robin Laplacian.

Theorem 34.11. All Laplacians (1)-(3) are self-adjoint operators.

Proof. Consider only case (1). Let $u, v \in \operatorname{Dom}\Delta$; then
$$\langle \Delta u, v\rangle = \iiint_\Omega \Delta u\,\bar v\,dx\,dy\,dz \overset{\text{Thm 34.9}}{=} \iiint_\Omega u\,\Delta\bar v\,dx\,dy\,dz + \underbrace{\iint_\Gamma\left(\bar v\frac{\partial u}{\partial n} - u\frac{\partial\bar v}{\partial n}\right)dS}_{=0} = \langle u, \Delta v\rangle. \quad \text{QED}$$

Exercise 34.12. Prove Theorem 34.11 for the Neumann and Robin Laplacians.

As you can guess, our goal will be the spectral analysis of $\Delta$. But we start from the Laplace equation
$$\Delta u = 0.$$
LECTURE 35

The Laplace Equation in $\mathbb{R}^2$. Harmonic Functions

In this lecture we are going to deal with 2-space, i.e. with the Laplace equation
$$u_{xx} + u_{yy} = 0. \tag{35.1}$$
In $\mathbb{R}^2$ there is a remarkable relation between this equation and Complex Analysis.

Definition 35.1. A function $u(x,y)$ is harmonic on a domain $\Omega \subseteq \mathbb{R}^2$ if $u$ is subject to (35.1).

Theorem 35.2. Let $\Omega$ be a domain in $\mathbb{R}^2$. Then
$$\Delta u = 0,\ (x,y) \in \Omega \iff u(x,y) = \operatorname{Re} f(z) \text{ for some function } f \text{ analytic on } \Omega.$$
Proof. 1) $\Leftarrow$. Let $f(z) = u(x,y) + iv(x,y) \in H(\Omega)$. This means that $u, v$ are subject to the Cauchy-Riemann conditions:
$$u_x = v_y, \qquad u_y = -v_x, \qquad (x,y) \in \Omega. \tag{35.2}$$
Differentiate the first equation in (35.2) in $x$ and the other in $y$:
$$u_{xx} = v_{yx}, \qquad u_{yy} = -v_{xy}.$$
Since $v_{yx} = v_{xy}$, we get $u_{xx} + u_{yy} = 0$ for all $(x,y) \in \Omega$.

2) $\Rightarrow$. Let $u$ be harmonic, i.e. $\Delta u = 0$. We need to show that there exists a real function $v(x,y)$ such that $f(z) = u(x,y) + iv(x,y) \in H(\Omega)$.
Consider $-u_y\,dx + u_x\,dy$. By the Green formula (Theorem 34.9), for any contour $C \subset \Omega$
$$\oint_C (-u_y\,dx + u_x\,dy) = \iint_{\operatorname{Int}C} (u_{xx} - (-u_{yy}))\,dx\,dy = 0.$$
Therefore $-u_y\,dx + u_x\,dy$ is the total differential of some function $v$, i.e.
$$dv = -u_y\,dx + u_x\,dy. \tag{35.3}$$
But on the other hand
$$dv = v_x\,dx + v_y\,dy. \tag{35.4}$$
Comparing (35.3) and (35.4), one has
$$u_x = v_y, \qquad u_y = -v_x \;\Longrightarrow\; f = u + iv \in H(\Omega),$$
since these are just the Cauchy-Riemann conditions. QED

This statement is too general to be useful, although it gives us the general structure of a solution to $\Delta u = 0$: the real part of an analytic function.
Consider the Dirichlet problem for the Laplace equation on the unit disk $\mathbb{D} = \{(x,y) : x^2 + y^2 < 1\}$:
$$\begin{cases} \Delta u = 0 \\ u\big|_{\partial\mathbb{D}} = \varphi(\theta), & 0 \le \theta \le 2\pi \end{cases} \tag{35.5}$$
We can apply Theorem 35.2 to find a series representation for the above problem, and we present such a solution in the Appendix to this lecture.
But for now, let us find an integral representation of the solution to (35.5). We need some ingredients.

Lemma 35.3. Let $u$ be harmonic on the unit disk; then
$$u(0) = \frac{1}{2\pi}\int_0^{2\pi} u\left(e^{it}\right)dt. \tag{35.6}$$

Proof. By the Cauchy formula (Theorem 3.2) with $z_0 = 0$,
$$f(0) = \frac{1}{2\pi i}\oint_{\partial\mathbb{D}}\frac{f(z)}{z}\,dz = \frac{1}{2\pi}\int_0^{2\pi}\frac{f\left(e^{it}\right)}{e^{it}}\,e^{it}\,dt = \frac{1}{2\pi}\int_0^{2\pi} f\left(e^{it}\right)dt$$
$$\Longrightarrow\; u(0) = \operatorname{Re} f(0) = \frac{1}{2\pi}\int_0^{2\pi}\operatorname{Re} f\left(e^{it}\right)dt = \frac{1}{2\pi}\int_0^{2\pi} u\left(e^{it}\right)dt. \quad \text{QED}$$

Remark 35.4. Lemma 35.3 is called the mean value theorem.
Lemma 35.5. The function $\psi(z)$ defined by
$$\psi(z) = \varepsilon\,\frac{z - z_0}{1 - \bar z_0 z} \tag{35.7}$$
where $\varepsilon$ is unimodular, i.e. $|\varepsilon| = 1$, and $z_0 \in \mathbb{D}$, transforms the unit disk onto itself.¹

Proof. For $|z| = 1$:
$$\left|\frac{z - z_0}{1 - \bar z_0 z}\right|^2 = \frac{(z - z_0)(\bar z - \bar z_0)}{(1 - \bar z_0 z)(1 - z_0\bar z)} = \frac{|z|^2 + |z_0|^2 - 2\operatorname{Re}\bar z_0 z}{1 + |z_0 z|^2 - 2\operatorname{Re}\bar z_0 z} \overset{|z| = 1}{=} \frac{1 + |z_0|^2 - 2\operatorname{Re}\bar z_0 z}{1 + |z_0|^2 - 2\operatorname{Re}\bar z_0 z} = 1. \quad \text{QED}$$

¹One of the properties of conformal mappings is that the boundary of the domain is mapped onto the boundary of the image of the domain. Then, using the fact that for $z = 0$ we have $\psi(0) = -\varepsilon z_0$, one finds that $|\psi(0)| = |z_0| < 1$ since $z_0 \in \mathbb{D}$, and so the image is inside the circle, i.e. it is the disk.
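Lemma 35.5 is easy to check numerically. The sketch below is ours: it samples the unit circle and the open disk and verifies that the map (35.7) sends circle points to circle points and interior points to interior points.

```python
import numpy as np

def psi(z, z0, eps=1.0):
    """The disk map (35.7): eps * (z - z0) / (1 - conj(z0) z), |eps| = 1, |z0| < 1."""
    return eps * (z - z0) / (1 - np.conj(z0) * z)

z0 = 0.4 - 0.3j
t = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
# the unit circle is mapped onto the unit circle ...
print(np.max(np.abs(np.abs(psi(np.exp(1j * t), z0)) - 1.0)))   # essentially 0
# ... and interior points stay interior
zs = 0.9 * np.exp(1j * t) * np.linspace(0.01, 1.0, 1000)
print(np.max(np.abs(psi(zs, z0))) < 1.0)                       # True
```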
Take $\varepsilon = -1$ in (35.7). Then
$$\psi(z) = \frac{z_0 - z}{1 - \bar z_0 z} \qquad \text{and} \qquad z_0 = \psi^{-1}(0).$$
Change variables in (35.6) as follows:
$$e^{it} = \psi\left(e^{i\theta}\right) \qquad \text{(by Lemma 35.5 it's possible!)}$$
Then $t = \frac{1}{i}\ln\psi\left(e^{i\theta}\right)$ and we have ($z = e^{i\theta}$)
$$\frac{dt}{d\theta} = \frac{d}{d\theta}\,\frac{1}{i}\ln\psi\left(e^{i\theta}\right) = \frac{1}{i}\,\frac{dz}{d\theta}\,\frac{d}{dz}\ln\psi(z) = \frac{1}{i}\,iz\left[\frac{d}{dz}\ln(z_0 - z) - \frac{d}{dz}\ln(1 - \bar z_0 z)\right] = z\left[\frac{1}{z - z_0} + \frac{\bar z_0}{1 - \bar z_0 z}\right]$$
$$= z\,\frac{(1 - \bar z_0 z) + \bar z_0(z - z_0)}{(z - z_0)(1 - \bar z_0 z)} = \frac{z\left(1 - |z_0|^2\right)}{(z - z_0)(1 - \bar z_0 z)} \overset{1 - \bar z_0 z = z\overline{(z - z_0)}\ \text{on}\ |z| = 1}{=} \frac{1 - |z_0|^2}{|z - z_0|^2} = \frac{1 - |z_0|^2}{|1 - \bar z_0 z|^2}.$$
The function ($z_0 = re^{i\theta_0}$, $z = e^{i\theta}$)
$$\frac{1}{2\pi}\,\frac{dt}{d\theta} = \frac{1}{2\pi}\,\frac{1 - |z_0|^2}{|1 - \bar z_0 z|^2} = \frac{1}{2\pi}\,\frac{1 - r^2}{1 - 2r\cos(\theta - \theta_0) + r^2} \equiv P_{z_0}(\theta) \tag{35.8}$$
is called the Poisson kernel.
One can easily check (Exercise 35.7) that the function $u(\psi^{-1}(z))$ is harmonic in $\mathbb{D}$, and then by Lemma 35.3 one has
$$u\left(\psi^{-1}(0)\right) = \frac{1}{2\pi}\int_0^{2\pi} u\left(\psi^{-1}\left(e^{it}\right)\right)dt \overset{\text{change of variables}}{=} \frac{1}{2\pi}\int_0^{2\pi} u\left(e^{i\theta}\right)\frac{dt}{d\theta}\,d\theta = \int_0^{2\pi} u\left(e^{i\theta}\right)\frac{1}{2\pi}\frac{dt}{d\theta}\,d\theta,$$
which by (35.8) is
$$u\left(\psi^{-1}(0)\right) = \int_0^{2\pi} u\left(e^{i\theta}\right)P_{z_0}(\theta)\,d\theta.$$
Since, as we know, $\psi^{-1}(0) = z_0$, we have
$$u(z_0) = \int_0^{2\pi} P_{z_0}(\theta)\,u\left(e^{i\theta}\right)d\theta,$$
but $u\left(e^{i\theta}\right) = \varphi(\theta)$ (our boundary condition), and we finally arrive at the following (where we write $z$ instead of $z_0$).

Theorem 35.6 (Poisson Formula). The solution of problem (35.5) can be represented by the Poisson formula
$$u(z) = \frac{1}{2\pi}\int_0^{2\pi}\frac{1 - |z|^2}{\left|e^{i\theta} - z\right|^2}\,\varphi(\theta)\,d\theta, \qquad z = x + iy \tag{35.9}$$
for any $(x,y) \in \mathbb{D}$.

So we got an explicit formula in integral form, but this is rare. To go further, to an algebraic form, may be possible if $\varphi$ is analytic, by using residues.
Note, though, that the kernel diverges at the boundary. However, we will get a $\delta$-sequence, allowing us to have $u|_{\partial\mathbb{D}} = \varphi$ satisfied.

Exercise 35.7. Prove that if $u(z)$ is harmonic (i.e. $\Delta u = 0$) on the unit disk $\mathbb{D} = \{z : |z| < 1\}$ then so is $u(\psi^{-1}(z))$, where $\psi(z)$ is defined by (35.7).

Exercise 35.8. Graph Poisson kernels as a function of $\theta$ for different $0 \le r < 1$. Prove that the Poisson kernels represent a $\delta$-family $\delta_r(\theta - \theta_0)$ as $r \to 1$.
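The Poisson formula (35.9) is easy to evaluate numerically. The sketch below is ours: for the boundary data $\varphi(\theta) = \cos\theta$ the harmonic extension is $\operatorname{Re} z$, so the quadrature should return $0.3$ at $z = 0.3 + 0.2i$ (the periodic trapezoid rule converges spectrally fast for this smooth integrand).

```python
import numpy as np

def poisson_disk(phi, z, n=4096):
    """u(z) = (1/2 pi) int (1-|z|^2)/|e^{i theta} - z|^2 phi(theta) d theta."""
    th = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    kernel = (1 - abs(z)**2) / np.abs(np.exp(1j * th) - z)**2
    return float(np.mean(kernel * phi(th)))   # mean = (1/2 pi) * integral

z = 0.3 + 0.2j
print(poisson_disk(np.cos, z))   # harmonic extension of cos(theta) is Re z = 0.3
```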
Appendix. Series Representation of the Solution to the Dirichlet Problem for the Laplace Equation

Consider the Laplace equation as a Dirichlet problem on the unit disk, i.e. (35.5):
$$\Delta u = 0, \qquad u\big|_{\partial\mathbb{D}} = \varphi.$$
By Theorem 35.2, there exists a function $f(z)$, analytic in $\mathbb{D}$, such that $u = \operatorname{Re} f$.
On the other hand, by the Taylor theorem (Theorem 4.12),
$$f(z) = \sum_{n\ge 0} a_n z^n = \sum_{n\ge 0} a_n r^n e^{in\theta} \qquad \left(z = re^{i\theta}\right)$$
$$\Longrightarrow\; u = \operatorname{Re} f = \sum_{n\ge 0} r^n\,\frac{a_n e^{in\theta} + \bar a_n e^{-in\theta}}{2} = \frac12\sum_{n\ge 1} r^n \bar a_n e^{-in\theta} + \operatorname{Re} a_0 + \frac12\sum_{n\ge 1} r^n a_n e^{in\theta}$$
$$= \frac12\sum_{n\le -1} r^{|n|}\,\bar a_{-n}\,e^{in\theta} + \operatorname{Re} a_0 + \frac12\sum_{n\ge 1} r^n a_n e^{in\theta}.$$
So we get a series representation for (35.5):
$$u\left(re^{i\theta}\right) = \sum_{n\in\mathbb{Z}} A_n r^{|n|} e^{in\theta}, \qquad 0 \le r < 1,$$
where
$$A_n = \begin{cases} \frac12\,\bar a_{-n}, & n \le -1 \\ \operatorname{Re} a_0, & n = 0 \\ \frac12\,a_n, & n \ge 1 \end{cases} \tag{35.10}$$

Exercise 35.9. Show that (35.9) implies (35.10) and vice versa.
LECTURE 36

Uniqueness of the Solution of the Laplace Equation. Laplace Equation in a Rectangular Domain

1. Well-Posedness of the Laplace Equation

In an electric field, we can measure the potential on the boundary and, using the solution to the Laplace equation, we rebuild the inside. But we haven't asked ourselves some fundamental questions about PDEs: given a PDE, is there a solution? Is this solution unique? Does this solution offer some continuity properties with respect to variations in initial or boundary conditions?
These three questions together constitute well-posedness: existence, uniqueness and continuity.
As far as existence is concerned, so far we've given constructive proofs, but overall it's a difficult question, especially for nonlinear equations. Well-posedness is not always fully proven; for example, it is not for the famous Navier-Stokes equation. Here we will address the uniqueness of the solution in the case of the Dirichlet problem for the Laplace equation.

Theorem 36.1 (Uniqueness Theorem). Let $\Omega$ be a domain in $\mathbb{R}^2$. Then there is only one solution to
$$\begin{cases} \Delta u = 0 \\ u\big|_\Gamma = \varphi \end{cases} \tag{36.1}$$

Proof. Suppose that there exist two solutions $u_1, u_2$ to (36.1), i.e.
$$\begin{cases} \Delta u_1 = 0 \\ u_1\big|_\Gamma = \varphi \end{cases} \qquad \begin{cases} \Delta u_2 = 0 \\ u_2\big|_\Gamma = \varphi \end{cases}$$
It follows from here that $u \equiv u_1 - u_2$ is a solution to
$$\begin{cases} \Delta u = 0 \\ u\big|_\Gamma = 0 \end{cases} \tag{36.2}$$
We are going to show that $u \equiv 0$.

Lemma 36.2 (Green's First Identity).
$$\oint_\Gamma u\frac{\partial v}{\partial n}\,ds = \iint_\Omega \left(u\Delta v + u_x v_x + u_y v_y\right)dx\,dy. \tag{36.3}$$
(No proof.)

Consider now (36.3) for $u = v$:
$$\oint_\Gamma u\frac{\partial u}{\partial n}\,ds = \iint_\Omega \left(\underbrace{u\Delta u}_{=0} + u_x^2 + u_y^2\right)dx\,dy = \iint_\Omega \left(u_x^2 + u_y^2\right)dx\,dy.$$
But $u\big|_\Gamma = 0$. Therefore
$$\iint_\Omega \left(u_x^2 + u_y^2\right)dx\,dy = 0 \;\Longrightarrow\; u_x^2 + u_y^2 \equiv 0 \text{ in } \Omega \;\Longrightarrow\; u = \text{const in } \Omega.$$
But since $u\big|_\Gamma = 0$, this const $= 0$.
So we showed that (36.2) has only a trivial solution; hence $u_1 = u_2$, and that means uniqueness. QED
Note that if now we have Neumann conditions, i.e. $\frac{\partial u}{\partial n}\big|_\Gamma = \varphi$, the same argument only gives $u = $ const, so the solution is unique up to an additive constant.

2. Dirichlet Problem in a Rectangle

Consider the following problem on the rectangle $\Omega = (0,a)\times(0,b)$:
$$\begin{cases} \Delta u = 0 \\ u(x,0) = u(0,y) = u(a,y) = 0, \quad u(x,b) = f(x) \end{cases} \tag{36.4}$$
For example, you can imagine 3 walls grounded and one charged.
Let us look for a solution to (36.4) in the form¹
$$u(x,y) = X(x)Y(y);$$
then
$$u_x = X'Y, \quad u_{xx} = X''Y, \qquad u_y = XY', \quad u_{yy} = XY'',$$
and $\Delta u = X''Y + XY'' = 0$. Dividing by $XY$ yields
$$\frac{X''}{X} + \frac{Y''}{Y} = 0 \qquad \text{or} \qquad \frac{X''(x)}{X(x)} = -\frac{Y''(y)}{Y(y)}.$$
So, for some $\lambda \in \mathbb{R}$ (since we are looking for real solutions),
$$X''(x) = -\lambda X(x) \tag{36.5a}$$
$$Y''(y) = \lambda Y(y) \tag{36.5b}$$
These are second order ordinary linear homogeneous equations.
Let us now find boundary conditions for $X, Y$.

¹This particular method is known as separation of variables; more generally, in physics, one often comes up with an Ansatz and sees if the problem is now solvable; then uniqueness takes care of the rest.
From (36.4) we have
$$\begin{cases} u(x,0) = 0 \;\Longrightarrow\; X(x)Y(0) = 0 \;\Longrightarrow\; Y(0) = 0 \\ u(0,y) = 0 \;\Longrightarrow\; X(0)Y(y) = 0 \;\Longrightarrow\; X(0) = 0 \\ u(a,y) = 0 \;\Longrightarrow\; X(a)Y(y) = 0 \;\Longrightarrow\; X(a) = 0 \\ u(x,b) = f(x) \;\Longrightarrow\; X(x)Y(b) = f(x) \end{cases}$$
and hence from (36.5a) we have
$$X'' = -\lambda X, \qquad X(0) = X(a) = 0.$$
We now solve this equation²:
$$X = A\sin\sqrt{\lambda}\,x + B\cos\sqrt{\lambda}\,x,$$
$$X(0) = B = 0, \qquad X(a) = A\sin\sqrt{\lambda}\,a = 0 \;\Longrightarrow\; \sqrt{\lambda}\,a = \pi n,\ n \in \mathbb{Z} \;\Longrightarrow\; \lambda = \left(\frac{\pi n}{a}\right)^2.$$
So, by choosing $A = 1$, we get
$$X_n(x) = \sin\frac{\pi n}{a}x.$$
Then it follows from (36.5b) that
$$Y'' = \lambda Y, \qquad Y(0) = 0,$$
$$Y(y) = Ae^{\kappa y} + Be^{-\kappa y}, \quad \text{where } \kappa = \frac{\pi n}{a}, \qquad Y(0) = A + B = 0 \;\Longrightarrow\; B = -A,$$
and for $Y$, by choosing $A = 1/2$, we get
$$Y_n(y) = \frac{e^{\frac{\pi n}{a}y} - e^{-\frac{\pi n}{a}y}}{2}, \qquad \text{or} \qquad Y_n(y) = \sinh\frac{\pi n}{a}y.$$
So finally we get that, for any $n = 1, 2, \dots$,
$$u_n(x,y) = X_n(x)Y_n(y) = \sin\frac{\pi n}{a}x\,\sinh\frac{\pi n}{a}y$$
is a solution to the Laplace equation vanishing on the three grounded sides.
Functions $u_n(x,y)$ are called rectangular harmonics.
The general solution to (36.4) is then
$$u(x,y) = \sum_{n=1}^{\infty} c_n\sin\frac{\pi n}{a}x\,\sinh\frac{\pi ny}{a}.$$
Let us now find the $c_n$. Since $u(x,b) = f(x)$, we have
$$f(x) = \sum_{n=1}^{\infty}\underbrace{\left(c_n\sinh\frac{\pi nb}{a}\right)}_{b_n}\sin\frac{\pi n}{a}x.$$
But on the other hand, the $b_n$ are the sine-Fourier coefficients of $f(x)$, so
$$b_n = \frac{2}{a}\int_0^a f(x)\sin\frac{\pi n}{a}x\,dx = c_n\sinh\frac{\pi nb}{a} \;\Longrightarrow\; c_n = \left(\sinh\frac{\pi nb}{a}\right)^{-1}b_n$$
and
$$u(x,y) = \sum_{n=1}^{\infty} b_n\,\frac{\sinh\frac{\pi ny}{a}}{\sinh\frac{\pi nb}{a}}\,\sin\frac{\pi n}{a}x, \qquad b_n = \frac{2}{a}\int_0^a f(x)\sin\frac{\pi n}{a}x\,dx.$$

²We find that we need $\lambda \ge 0$, since if $\lambda < 0$ then $X(x) = Ae^{\sqrt{-\lambda}\,x} + Be^{-\sqrt{-\lambda}\,x}$ cannot satisfy $X(0) = X(a) = 0$ unless $X \equiv 0$.
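The series solution above is easy to implement. The sketch below is ours (the helper name is not from the notes): it computes the sine-Fourier coefficients $b_n$ by quadrature and sums the truncated series; at $y = b$ it reproduces the boundary data $f$, and on the grounded sides it vanishes.

```python
import numpy as np

def solve_rectangle(f, a, b, N=60, nq=4001):
    """Dirichlet problem (36.4) on (0,a)x(0,b): u = 0 on three sides, u(x,b) = f(x)."""
    xq = np.linspace(0.0, a, nq)
    fx = f(xq)
    dx = a / (nq - 1)
    bn = []
    for n in range(1, N + 1):                      # sine-Fourier coefficients of f
        g = fx * np.sin(np.pi * n * xq / a)
        bn.append(2 / a * np.sum((g[1:] + g[:-1]) / 2) * dx)
    bn = np.array(bn)
    def u(x, y):
        n = np.arange(1, N + 1)
        return float(np.sum(bn * np.sinh(np.pi * n * y / a)
                            / np.sinh(np.pi * n * b / a) * np.sin(np.pi * n * x / a)))
    return u

a, b = 1.0, 1.0
u = solve_rectangle(lambda x: x * (1 - x), a, b)
print(u(0.5, b))     # reproduces the boundary value f(0.5) = 0.25
print(u(0.5, 0.0))   # 0 on the grounded side
```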
LECTURE 37

The Principle of Conformal Mapping for the Laplace Equation. Laplace Equation in the Upper Half Plane

We are going to demonstrate how Complex Analysis helps to solve Laplace equations in $\mathbb{R}^2$.

Definition 37.1. A univalent function produces two different values for two different points.

Definition 37.2. Let $f(z)$ be an analytic univalent function on a simply connected domain $\Omega$ and $f'(z) \neq 0$ $\forall z \in \Omega$. Then the image $\Omega'$ of $\Omega$ under the transformation $f(z)$ is called the conformal mapping of $\Omega$ under $f$, and we write
$$\Omega' = f(\Omega) = \{f(z) : z \in \Omega\}.$$
The condition $f'(z) \neq 0$ is very important.

Theorem 37.3 (Riemann Theorem on Conformal Mapping¹). Given two simply connected domains $\Omega, \Omega' \subsetneq \mathbb{C}$,
$$\exists f \in H(\Omega) : \Omega' = f(\Omega) \text{ is a conformal mapping of } \Omega.$$
(No proof.)
[Figure: a domain $\Omega$ mapped onto a domain $\Omega'$.]
This is not constructive...
The following theorem clarifies why conformal mapping is so important.

Theorem 37.4. The Laplace equation is invariant under a conformal mapping.

¹Rigorous statements of this theorem can be found in advanced books on complex analysis.
The exact meaning of this statement will be explained a bit later, but recall notions of invariance: time invariance means some quantity doesn't depend on time, for example the energy of a system; here the equation is invariant with respect to conformal mapping, i.e. you can change the coordinate system (via an analytic function²) and the equation still has the same form.

Proof. Consider the Laplace equation on some domain $\Omega$:
$$\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \qquad (x,y) \in \Omega.$$
Let $f(z)$ be an analytic function performing the conformal mapping of $\Omega$ onto another domain $\Omega'$, i.e.
$$w = f(z) \in \Omega', \qquad w = \xi + i\eta.$$
We will prove that
$$\frac{\partial^2 U}{\partial\xi^2} + \frac{\partial^2 U}{\partial\eta^2} = 0, \qquad (\xi,\eta) \in \Omega' \tag{37.1}$$
where $U(\xi,\eta) \equiv u\left(x(\xi,\eta), y(\xi,\eta)\right)$. Since $f : \Omega \to \Omega'$ is a one-to-one correspondence, $f^{-1}(w)$ maps $\Omega'$ onto $\Omega$ and we have
$$x = x(\xi,\eta), \qquad y = y(\xi,\eta).$$
So what we have is a change of coordinate system, and the rules for partial derivatives have been put for review in the Appendix to this lecture; but basically, we have, by the chain rule,
$$u_x = U_\xi\xi_x + U_\eta\eta_x$$
$$u_{xx} = \frac{\partial}{\partial x}\left(U_\xi\xi_x + U_\eta\eta_x\right) = (U_\xi)_x\xi_x + U_\xi\xi_{xx} + (U_\eta)_x\eta_x + U_\eta\eta_{xx}$$
$$= \left(U_{\xi\xi}\xi_x + U_{\xi\eta}\eta_x\right)\xi_x + U_\xi\xi_{xx} + \left(U_{\eta\xi}\xi_x + U_{\eta\eta}\eta_x\right)\eta_x + U_\eta\eta_{xx}$$
$$\Longrightarrow\; u_{xx} = U_{\xi\xi}\xi_x^2 + 2U_{\xi\eta}\xi_x\eta_x + U_{\eta\eta}\eta_x^2 + U_\xi\xi_{xx} + U_\eta\eta_{xx}. \tag{37.2}$$
Similarly, one has
$$u_{yy} = U_{\xi\xi}\xi_y^2 + 2U_{\xi\eta}\xi_y\eta_y + U_{\eta\eta}\eta_y^2 + U_\xi\xi_{yy} + U_\eta\eta_{yy}. \tag{37.3}$$
Adding (37.2) and (37.3) up, we get
$$0 = u_{xx} + u_{yy} = U_{\xi\xi}\left(\xi_x^2 + \xi_y^2\right) + 2U_{\xi\eta}\left(\xi_x\eta_x + \xi_y\eta_y\right) + U_{\eta\eta}\left(\eta_x^2 + \eta_y^2\right) + U_\xi\underbrace{\left(\xi_{xx} + \xi_{yy}\right)}_{=\Delta\xi} + U_\eta\underbrace{\left(\eta_{xx} + \eta_{yy}\right)}_{=\Delta\eta}. \tag{37.4}$$
Now our function $f(z) = \xi(x,y) + i\eta(x,y)$ is analytic. So by Theorem 35.2, $\xi$ and $\eta$ are harmonic: $\Delta\xi = 0$, $\Delta\eta = 0$. Also, since $f$ is analytic, the Cauchy-Riemann conditions apply:
$$\xi_x = \eta_y, \quad \xi_y = -\eta_x \;\Longrightarrow\; \xi_x\eta_x + \xi_y\eta_y = \eta_y\eta_x - \eta_x\eta_y = 0, \qquad \xi_x^2 + \xi_y^2 = \eta_x^2 + \eta_y^2,$$
and (37.4) reads
$$0 = u_{xx} + u_{yy} = \left(U_{\xi\xi} + U_{\eta\eta}\right)\left(\xi_x^2 + \xi_y^2\right) \;\Longrightarrow\; U_{\xi\xi} + U_{\eta\eta} = 0$$
since $\xi_x^2 + \xi_y^2 = |f'(z)|^2 \neq 0$, and we are done! QED

²This condition is important; otherwise it screws up the Laplace equation.
Let us now figure out what Theorem 37.4 actually means. It allows one to solve Laplace equations for various domains, not only disks or rectangles.
Indeed, consider the Dirichlet problem
$$\begin{cases} \Delta u = 0 \\ u\big|_\Gamma = \varphi \end{cases} \tag{37.5}$$
on a simply connected domain $\Omega$ with boundary $\Gamma$. [Figure: a general domain $\Omega$ in the $(x,y)$-plane.]
By the Riemann Theorem (Theorem 37.3), there exists an analytic function $f(z)$ that conformally maps $\Omega$ onto the unit disk $\mathbb{D}$. Problem (37.5) then reads (with $U(w) = u(f^{-1}(w))$)
$$\begin{cases} \Delta U(w) = 0 \\ U\big|_{\partial\mathbb{D}} = \varphi\left(f^{-1}\left(e^{i\theta}\right)\right) \end{cases} \tag{37.6}$$
since the image of the boundary is the boundary of the image in conformal mapping, i.e. $\Gamma$ gets mapped to $\partial\mathbb{D}$. [Figure: $\Omega' = \mathbb{D}$ in the $w = \xi + i\eta$ plane, reached from $\Omega$ via $w = f(z)$.]
We solve next (37.6) by Poisson's formula (35.9):
$$U(w) = \frac{1}{2\pi}\int_0^{2\pi}\frac{1 - |w|^2}{\left|e^{i\theta} - w\right|^2}\,\varphi\left(f^{-1}\left(e^{i\theta}\right)\right)d\theta. \tag{37.7}$$
We now come back to our original variable $z$ and have
$$u(z) = U(w) \qquad \text{where } w = f(z),$$
and problem (37.5) is solved!
This is a top-of-the-line equation since it is explicit. But the drawback is finding the conformal map. This is big business for aerodynamics (in aircraft building) and is the object of contemporary research.
We demonstrate this algorithm on a particular example.
Example 37.5. The Dirichlet problem on a half plane:
$$\begin{cases} \Delta u = 0 \\ u\big|_{\mathbb{R}} = \varphi \end{cases} \tag{37.8}$$
where $\Omega = \mathbb{C}_+$, the upper half plane.
Physically, this problem can be interpreted in the following way. We have some electrostatic field $u(x,y)$ in the upper half plane $\mathbb{C}_+$. We know the potential $\varphi(x)$ on its boundary $\mathbb{R}$. How to recover the potential $u(x,y)$ in the whole $\mathbb{C}_+$?

Solution. According to the procedure outlined above, we need to find a conformal mapping that maps $\mathbb{C}_+$ onto $\mathbb{D}$.
This conformal mapping is well-known³:
$$z = i\,\frac{1 - w}{1 + w}. \tag{37.9}$$
Let us check it:
$$\operatorname{Im} z = \frac{z - \bar z}{2i} = \frac{1}{2i}\left(i\,\frac{1 - w}{1 + w} + i\,\frac{1 - \bar w}{1 + \bar w}\right) = \frac12\cdot\frac{(1 - w)(1 + \bar w) + (1 - \bar w)(1 + w)}{|1 + w|^2} = \frac{1 - |w|^2}{|1 + w|^2} \ge 0$$
with equality for $|w| = 1$. So, indeed, (37.9) maps the unit circle onto the real line, and $\mathbb{D}$ onto $\mathbb{C}_+$.
It follows from (37.9) that
$$w = \frac{i - z}{i + z} \equiv f(z) \tag{37.10}$$

³Note that a conformal mapping is not unique; here, for example, we could multiply by a unimodular factor, i.e. do a rotation.
and plugging it into (37.7),
$$u(z) = \frac{1}{2\pi}\int_0^{2\pi}\frac{1 - |f(z)|^2}{\left|e^{i\theta} - f(z)\right|^2}\,\varphi\left(f^{-1}\left(e^{i\theta}\right)\right)d\theta. \tag{37.11}$$
From (37.9) one has $i\,\frac{1 - e^{i\theta}}{1 + e^{i\theta}} = t \in \mathbb{R}$, and if $\theta$ runs through $(0, 2\pi)$ then $t$ runs through $(-\infty, \infty)$.
So let's make the following change of variables:
$$t = i\,\frac{1 - e^{i\theta}}{1 + e^{i\theta}} = f^{-1}\left(e^{i\theta}\right), \qquad e^{i\theta} = \frac{i - t}{i + t}.$$
Then
$$ie^{i\theta}\,d\theta = \frac{(i + t)(-1) - (i - t)(1)}{(i + t)^2}\,dt = \frac{-2i\,dt}{(i + t)^2} \;\Longrightarrow\; d\theta = \frac{-2\,dt}{\frac{i - t}{i + t}\,(i + t)^2} = \frac{-2\,dt}{(i - t)(i + t)} = \frac{2\,dt}{1 + t^2}.$$
Since $t = f^{-1}\left(e^{i\theta}\right)$, (37.11) then reads
$$u(z) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1 - |f(z)|^2}{|f(t) - f(z)|^2}\,\varphi(t)\,\frac{2\,dt}{1 + t^2}.$$
Now
$$1 - |f(z)|^2 = 1 - \left|\frac{i - z}{i + z}\right|^2 = \frac{|i + z|^2 - |i - z|^2}{|i + z|^2} = \frac{4\operatorname{Im} z}{|i + z|^2}$$
and
$$f(t) - f(z) = \frac{(i - t)(i + z) - (i - z)(i + t)}{(i + t)(i + z)} = \frac{2i(z - t)}{(i + t)(i + z)} \;\Longrightarrow\; |f(t) - f(z)|^2 = \frac{4|t - z|^2}{(1 + t^2)\,|i + z|^2},$$
so that
$$u(z) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{4\operatorname{Im} z}{|i + z|^2}\cdot\frac{(1 + t^2)\,|i + z|^2}{4|t - z|^2}\,\varphi(t)\,\frac{2\,dt}{1 + t^2} = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\operatorname{Im} z}{|t - z|^2}\,\varphi(t)\,dt \overset{z = x + iy}{=} \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{y}{(t - x)^2 + y^2}\,\varphi(t)\,dt.$$
So the solution to (37.8) is
$$u(x,y) = \frac{y}{\pi}\int_{-\infty}^{\infty}\frac{\varphi(t)\,dt}{(t - x)^2 + y^2}, \qquad y \ge 0,\ x \in \mathbb{R}. \tag{37.12}$$
Done!
The function $\frac{1}{\pi}\,\frac{y}{(x - t)^2 + y^2}$ is also known as the Poisson kernel for the upper half plane.

Remark 37.6. Note that the solution (37.12) represents the convolution of the Poisson kernel and the boundary condition.
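Formula (37.12) can be checked numerically against a case with a closed form. The sketch below is ours: for $\varphi(t) = \frac{1}{1+t^2}$, which is $\pi$ times the kernel at height 1, the semigroup property of the Poisson kernel gives $u(x,y) = \frac{1+y}{x^2 + (1+y)^2}$.

```python
import numpy as np

def poisson_half_plane(phi, x, y, T=200.0, n=200001):
    """u(x,y) = (y/pi) int phi(t)/((t-x)^2 + y^2) dt, truncated to [-T, T]."""
    t = np.linspace(-T, T, n)
    g = y * phi(t) / ((t - x)**2 + y**2) / np.pi
    return float(np.sum((g[1:] + g[:-1]) / 2) * (t[1] - t[0]))   # trapezoid rule

phi = lambda t: 1.0 / (1 + t**2)
x, y = 0.7, 0.5
print(poisson_half_plane(phi, x, y), (1 + y) / (x**2 + (1 + y)**2))   # agree
```

The truncation to $[-T, T]$ is harmless here because the integrand decays like $t^{-4}$.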
Note a couple of potential issues:
- if $\varphi(t) \sim |t|$, then the integral is not convergent. But $\varphi$ has physical meaning, so in practice this will not be an issue;
- how to reproduce $u(x,y)$ when $y \to 0$? Well, actually, we have again a $\delta$-sequence, so this will be satisfied too (see Exercise 37.10).

In practice, even the simple integral (37.12) is hard to evaluate by hand. However, in some cases we can avoid computing (37.12).
Indeed, $u(x,y)$ is the real (or imaginary) part of some analytic function $f(z)$, $z = x + iy$, by Theorem 35.2. Moreover, $\varphi(x) = \operatorname{Re} f(x)$ (or $\operatorname{Im} f(x)$). By the look of $\varphi(x)$, sometimes it's possible to tell what $f(x)$, and even $f(z)$, is.

Example 37.7. In the numbered examples below, we consider the Dirichlet problem in the upper half plane.
Note that since it's an order 2 equation, we would need an extra condition besides the boundary one given. This generally comes from physics. So here we will rather simply look for a continuation of our function in the upper half plane and check its analyticity. Furthermore, we make no claim of uniqueness, since here the domain is infinite.

1◦
$$\begin{cases} \Delta u = 0 \\ u\big|_{\mathbb{R}} = 1 \end{cases} \tag{37.13}$$
This problem can easily be solved by (37.12). But, on the other hand, there is an obvious analytic function $f(z)$ in $\mathbb{C}_+$ which is identically 1 on $\mathbb{R}$ and whose real part is a bounded function.⁴ We have $f(z) = 1$. That is,
$$u(x,y) = 1, \qquad (x,y) \in \mathbb{C}_+.$$

2◦
$$\begin{cases} \Delta u = 0 \\ u\big|_{\mathbb{R}} = \ln|x| \end{cases} \tag{37.14}$$
Note here that the boundary condition is not bounded, but one can easily figure out that the function $f(z) = \ln z$ is analytic on $\mathbb{C}_+$ and equals $\ln|x|$ on $\mathbb{R}$. Indeed,
$$\ln z = \ln|z| + i\arg z = \frac12\ln\left(x^2 + y^2\right) + i\arctan\frac{y}{x}.$$
If $y = 0$ then $\operatorname{Re} f(x + i0) = \frac12\ln x^2 = \ln|x|$.
So the solution to (37.14) is
$$u(x,y) = \frac12\ln\left(x^2 + y^2\right), \qquad (x,y) \in \mathbb{C}_+.$$
[Figure: the equipotential curves of $u(x,y) = \ln r$ are the circles $x^2 + y^2 = r^2$.]

⁴Indeed, one could argue that $f(z) = 1 - iCz$ for some real constant $C$ is also analytic and identically 1 on $\mathbb{R}$. But it is not bounded, and hence $u(x,y) = 1 + Cy$ does not make sense physically if we're looking for a finite energy field.
3
_
_
_
u = 0
u
1
=
1
x
. (37.15)
Note that f(z) =
1
z
is analytic on C
+
, and one easily has
u(x, y) = Re
1
z
=
x
x
2
+y
2
.
Exercise 37.8. Draw equipotential curves for (37.15).
The Riemann theorem (Theorem 37.3) is not constructive. That is, it does not
provide any explicit construction of f(z). However, there are some wellknown domains
for which f(z) is available.
Example 37.9.
1
/4
f(z) = z
4
232 37. CONFORMAL MAPPING. APPLICATION TO THE LAPLACE EQUATION
2) [Figure: a half-plane region with the marked points $-1$, $0$, $1$, mapped by]
\[ f(z) = \frac{1}{2}\left(z + \frac{1}{z}\right), \]
the so-called Zhukovsky function.⁵
This function appeared and was studied in the context of aerodynamics and is part of early 20th century research.
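A quick numerical look (a sketch, not part of the notes) at one classical property of Zhukovsky's function: it maps the unit circle onto the real segment $[-1, 1]$, since $f(e^{it}) = \cos t$.

```python
# Zhukovsky's function f(z) = (z + 1/z)/2 sends the unit circle
# to the segment [-1, 1]: f(e^{it}) = cos(t).
import cmath, math

def f(z):
    return (z + 1 / z) / 2

ts = [0.1, 1.0, 2.5]
images = [f(cmath.exp(1j * t)) for t in ts]
print(images)  # real values cos(t)
```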
3) [Figure: the upper half plane is mapped onto the rectangle with vertices $(\pm a, 0)$, $(\pm a, b)$ by $f$.]
\[ f(z) = C\int_0^z \frac{dt}{\sqrt{(1 - t^2)(1 - k^2t^2)}} \quad \text{(an elliptic integral)}, \]
where $C, k$ are determined by solving the equations
\[ a\int_1^{1/k} \frac{dt}{\sqrt{(t^2 - 1)(1 - k^2t^2)}} = b\int_0^1 \frac{dt}{\sqrt{(1 - t^2)(1 - k^2t^2)}}, \qquad C = \frac{b}{\displaystyle\int_1^{1/k} \frac{dt}{\sqrt{(t^2 - 1)(1 - k^2t^2)}}}. \]
This formula is a particular case of the more general Schwarz–Christoffel conformal mappings, which describe how to transform the upper half plane into polygons. As one may imagine, such formulas get more and more complicated, but at least they are known.
One can now figure out why, in Lecture 36, we could solve the Laplace equation on a rectangular domain only with some restrictions.
From the above, you may also appreciate the following funny story: a physicist needed to solve a Laplace problem and decided to use a simple model for the boundary. So he asked a mathematician to solve his problem for a square. After many complicated calculations, the mathematician gave his answer, and then the physicist said that now he needed to have it altered to fit his original data: a circle!
⁵ The transform itself is often referred to as the Joukowsky transform, after the same person: Russian names can be transcribed into the Latin alphabet in multiple ways.
Exercise 37.10.
1) Prove that the Poisson kernel on the upper half plane is a $\delta$-sequence for $y \to 0$. Note that you must show, among other things, that
\[ \frac{1}{\pi}\int_{\mathbb{R}} \frac{y\,dt}{(x - t)^2 + y^2} = 1. \]
2) Show that if $\varphi(t)$ in (37.12) is odd then
\[ u(x, y) = \frac{y}{\pi}\int_0^\infty \left\{ \frac{1}{(x - t)^2 + y^2} - \frac{1}{(x + t)^2 + y^2} \right\} \varphi(t)\,dt. \]
Show also that it solves the following Dirichlet problem
\[ \begin{cases} \Delta u = 0, & x > 0,\ y > 0 \\ u(0, y) = 0, & y > 0 \\ u(x, 0) = \varphi(x), & x \geq 0 \end{cases}. \]
3) Solve the following Dirichlet problem by assuming that $u$ is bounded for $x > 0$, $y > 0$:
\[ \begin{cases} \Delta u = 0, & x, y > 0 \\ u(x, 0) = \varphi(x), & x \geq 0 \\ u(0, y) = \psi(y), & y \geq 0 \end{cases}. \]
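The normalization in part 1 can be checked numerically (a sketch; it assumes SciPy is available, and the sample values $x = 0.7$, $y = 0.3$ are arbitrary):

```python
# The Poisson kernel P_y(x - t) = (1/pi) * y / ((x - t)^2 + y^2)
# integrates to 1 over t in R, for any x and any y > 0.
import math
from scipy.integrate import quad

def poisson_kernel(t, x, y):
    return (1.0 / math.pi) * y / ((x - t)**2 + y**2)

total, _ = quad(poisson_kernel, -math.inf, math.inf, args=(0.7, 0.3))
print(total)  # close to 1
```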
Appendix. Change of Coordinate System and the Chain Rule
We present here, as a review, an important chain rule related to a change of variables in two dimensions.
Let $f(x, y)$ be a function of $(x, y)$ and
\[ \begin{cases} x = x(\xi, \eta) \\ y = y(\xi, \eta) \end{cases} \]
a change of variables, with inverse $\xi = \xi(x, y)$, $\eta = \eta(x, y)$. If
\[ F(\xi, \eta) := f(x(\xi, \eta), y(\xi, \eta)) \]
is the function $f$ in the new variables $\xi, \eta$, then $f(x, y) = F(\xi(x, y), \eta(x, y))$ and
\[ \frac{\partial f}{\partial x} = \frac{\partial F}{\partial \xi}\frac{\partial \xi}{\partial x} + \frac{\partial F}{\partial \eta}\frac{\partial \eta}{\partial x}, \qquad \frac{\partial f}{\partial y} = \frac{\partial F}{\partial \xi}\frac{\partial \xi}{\partial y} + \frac{\partial F}{\partial \eta}\frac{\partial \eta}{\partial y}, \]
or, in compact form,
\[ f_x = F_\xi\,\xi_x + F_\eta\,\eta_x, \qquad f_y = F_\xi\,\xi_y + F_\eta\,\eta_y, \]
or in matrix form
\[ \begin{pmatrix} f_x \\ f_y \end{pmatrix} = \underbrace{\begin{pmatrix} \xi_x & \eta_x \\ \xi_y & \eta_y \end{pmatrix}}_{\text{Jacobi's matrix}} \begin{pmatrix} F_\xi \\ F_\eta \end{pmatrix}. \]
And for the second partial derivative (in $x$), since $f_x = F_\xi\,\xi_x + F_\eta\,\eta_x$, one has
\[ f_{xx} = (F_\xi)_x\,\xi_x + F_\xi\,\xi_{xx} + (F_\eta)_x\,\eta_x + F_\eta\,\eta_{xx}, \]
where
\[ (F_\xi)_x = F_{\xi\xi}\,\xi_x + F_{\xi\eta}\,\eta_x, \qquad (F_\eta)_x = F_{\eta\xi}\,\xi_x + F_{\eta\eta}\,\eta_x. \]
Adding up, we get
\[ f_{xx} = F_{\xi\xi}\,\xi_x^2 + 2F_{\xi\eta}\,\xi_x\eta_x + F_{\eta\eta}\,\eta_x^2 + F_\xi\,\xi_{xx} + F_\eta\,\eta_{xx}. \]
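The identity just derived can be verified symbolically on a concrete change of variables (a sketch, not from the notes; the choice $\xi = x + y$, $\eta = xy$, $F(\xi,\eta) = \xi^2\sin\eta$ is an arbitrary example, and SymPy is assumed to be available):

```python
# Verify f_xx = F_xx*xi_x^2 + 2*F_xe*xi_x*eta_x + F_ee*eta_x^2
#              + F_x*xi_xx + F_e*eta_xx on a concrete example.
import sympy as sp

x, y, s, t = sp.symbols('x y s t')
xi, eta = x + y, x * y                # a concrete change of variables
Fst = s**2 * sp.sin(t)                # a concrete F(xi, eta)

f = Fst.subs({s: xi, t: eta})         # f(x, y) = F(xi(x,y), eta(x,y))
lhs = sp.diff(f, x, 2)

def D(expr, *vars_):
    """Partial derivatives of F, evaluated at (xi, eta)."""
    return sp.diff(expr, *vars_).subs({s: xi, t: eta})

rhs = (D(Fst, s, s) * sp.diff(xi, x)**2
       + 2 * D(Fst, s, t) * sp.diff(xi, x) * sp.diff(eta, x)
       + D(Fst, t, t) * sp.diff(eta, x)**2
       + D(Fst, s) * sp.diff(xi, x, 2)
       + D(Fst, t) * sp.diff(eta, x, 2))

print(sp.simplify(lhs - rhs))  # 0
```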
LECTURE 38
The Spectrum of the Laplace Operator on Some Simple
Domains
1. The Spectrum of $-\Delta$ on a Rectangle
Let us solve the following eigenfunction problem
\[ \begin{cases} -\Delta u = \lambda u \\ u|_{\partial\Omega} = 0 \end{cases} \quad \text{(Dirichlet eigenfunction problem)} \tag{38.1} \]
where $\Omega$ is the rectangle $[0, a] \times [0, b]$.
Set $u(x, y) = X(x)Y(y)$; then (38.1) reads
\[ -\frac{d^2X}{dx^2}Y - X\frac{d^2Y}{dy^2} = \lambda XY \;\Longrightarrow\; \underbrace{-\frac{1}{X}\frac{d^2X}{dx^2}}_{\lambda_1}\;\underbrace{-\,\frac{1}{Y}\frac{d^2Y}{dy^2}}_{\lambda_2} = \lambda \quad \text{— a separable equation!} \]
\[ \begin{cases} X'' = -\lambda_1 X \\ Y'' = -\lambda_2 Y \\ \lambda_1 + \lambda_2 = \lambda \end{cases}. \tag{38.2} \]
Let us now find boundary conditions for $X, Y$:
\[ u(0, y) = X(0)Y(y) = 0 \Rightarrow X(0) = 0, \qquad u(a, y) = X(a)Y(y) = 0 \Rightarrow X(a) = 0, \]
\[ u(x, 0) = X(x)Y(0) = 0 \Rightarrow Y(0) = 0, \qquad u(x, b) = X(x)Y(b) = 0 \Rightarrow Y(b) = 0. \]
So we get
\[ \begin{cases} X'' = -\lambda_1 X \\ X(0) = 0 = X(a) \end{cases} \;\Rightarrow\; X = Ae^{i\sqrt{\lambda_1}\,x} + Be^{-i\sqrt{\lambda_1}\,x}. \]
Find now $A, B$:
\[ \begin{cases} X(0) = A + B = 0 \Rightarrow B = -A \\ X(a) = Ae^{i\sqrt{\lambda_1}\,a} - Ae^{-i\sqrt{\lambda_1}\,a} = 0 \end{cases} \;\Rightarrow\; e^{2i\sqrt{\lambda_1}\,a} = 1 \;\Rightarrow\; 2\sqrt{\lambda_1}\,a = 2\pi n, \ n \in \mathbb{Z}, \]
or
\[ \lambda_{1,n} = \left(\frac{\pi n}{a}\right)^2, \ n \in \mathbb{N} = \{1, 2, \dots\}; \qquad X_n(x) = A\left(e^{i\frac{\pi n}{a}x} - e^{-i\frac{\pi n}{a}x}\right). \]
Taking $A = \frac{1}{2i}$ we finally get
\[ X_n(x) = \sin\frac{\pi n}{a}x, \quad n \in \mathbb{N}. \]
In a very similar way,
\[ Y_m(y) = \sin\frac{\pi m}{b}y, \quad m \in \mathbb{N}. \]
So all the solutions to (38.1) are
\[ u_{n,m}(x, y) = \sin\frac{\pi n}{a}x\,\sin\frac{\pi m}{b}y \]
and the set of all possible $\lambda$ in (38.1) is
\[ \lambda = \lambda_1 + \lambda_2 = \pi^2\left\{\left(\frac{n}{a}\right)^2 + \left(\frac{m}{b}\right)^2\right\}, \quad n, m \in \mathbb{N}. \]
So we proved
Theorem 38.1. Let $-\Delta_D$ be the Dirichlet Laplace operator on the rectangle $\Omega = [0, a] \times [0, b]$, i.e.
\[ -\Delta_D u = -\Delta u \ \ \&\ \ u|_{\partial\Omega} = 0. \]
Then $\sigma(-\Delta_D)$ is purely discrete and
\[ \sigma(-\Delta_D) = \left\{ \pi^2\left[\left(\frac{n}{a}\right)^2 + \left(\frac{m}{b}\right)^2\right] ;\ n, m \in \mathbb{N} \right\} \]
and the corresponding eigenfunctions are
\[ u_{n,m}(x, y) = \sin\frac{\pi n}{a}x\,\sin\frac{\pi m}{b}y. \]
Theorem 38.1 can be easily generalized to $\mathbb{R}^3$.
Theorem 38.2. Let $-\Delta_D$ be the Dirichlet Laplace operator on the domain $\Omega = [0, a] \times [0, b] \times [0, c]$, i.e.
\[ -\Delta_D u = -\Delta u \ \ \&\ \ u|_{\partial\Omega} = 0. \]
Then
\[ \sigma(-\Delta_D) = \sigma_d(-\Delta_D) = \left\{ \pi^2\left[\left(\frac{n}{a}\right)^2 + \left(\frac{m}{b}\right)^2 + \left(\frac{k}{c}\right)^2\right] ;\ n, m, k \in \mathbb{N} \right\} \]
and the corresponding eigenfunctions are
\[ u_{n,m,k}(x, y, z) = \sin\frac{\pi n}{a}x\,\sin\frac{\pi m}{b}y\,\sin\frac{\pi k}{c}z. \]
Discussion. So the eigenvalues of the Dirichlet Laplace operator are now parameterized by 2 (or even 3) integers. If we order the set $\sigma(-\Delta_D)$, it looks like
\[ \lambda_1 = \pi^2\left(\frac{1}{a^2} + \frac{1}{b^2}\right) \qquad (n = m = 1), \]
\[ \lambda_2 = \pi^2\left(\frac{1}{a^2} + \left(\frac{2}{b}\right)^2\right), \quad \lambda_3 = \pi^2\left(\left(\frac{2}{a}\right)^2 + \frac{1}{b^2}\right) \qquad (n = 1, m = 2 \text{ and } n = 2, m = 1), \]
\[ \lambda_4 = \pi^2\left(\left(\frac{2}{a}\right)^2 + \left(\frac{2}{b}\right)^2\right) \qquad (n = m = 2), \]
\[ \dots \]
In the simplest case of $a = b = \pi$ we get
\[ \lambda_1 = 1 + 1 = 2, \]
\[ \lambda_2 = 1 + 2^2 = 2^2 + 1 = 5, \]
\[ \lambda_3 = 2^2 + 2^2 = 8, \]
\[ \lambda_4 = 1 + 3^2 = 3^2 + 1 = 10, \]
\[ \lambda_5 = 2^2 + 3^2 = 3^2 + 2^2 = 13, \]
\[ \lambda_6 = 1 + 4^2 = 4^2 + 1 = 17, \]
\[ \lambda_7 = 3^2 + 3^2 = 18, \]
\[ \lambda_8 = 2^2 + 4^2 = 4^2 + 2^2 = 20, \]
\[ \lambda_9 = 3^2 + 4^2 = 4^2 + 3^2 = 25, \]
\[ \dots \]
[Figure: the eigenvalues $\lambda_1, \dots, \lambda_9$ plotted on the number line from $0$ to about $25$.]
So the spectrum is a lot more dense than for the operator $-\frac{d^2}{dx^2}$ on $[0, \pi]$, which was $\{n^2\} = \{1, 4, 9, 16, 25, \dots\}$:
[Figure: the eigenvalues $1, 4, 9, 16, 25$ plotted on the same scale.]
In mathematical physics one asks the following question: Can you hear the shape of a drum? In our case it would be a box. This means: if we know all the tones (proper frequencies) of a box, can we then recover the shape of the box? This problem belongs to the so-called inverse problems, and the answer is negative.
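The table of eigenvalues for the square $[0, \pi]^2$, with their multiplicities, can be generated in a few lines (a sketch, not part of the notes):

```python
# First Dirichlet eigenvalues n^2 + m^2 on the square [0, pi]^2,
# with multiplicities (number of (n, m) pairs giving the same value).
eigs = sorted({n * n + m * m for n in range(1, 10) for m in range(1, 10)})
mult = {lam: sum(1 for n in range(1, 10) for m in range(1, 10)
                 if n * n + m * m == lam)
        for lam in eigs}
print(eigs[:9])           # [2, 5, 8, 10, 13, 17, 18, 20, 25]
print(mult[5], mult[25])  # 2 2
```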
2. The Spectrum of $-\Delta$ on a Disk
Similarly to (38.1) we have
\[ \begin{cases} -\Delta u = \lambda u \\ u|_{\partial\Omega} = 0 \end{cases} \tag{38.3} \]
where $\Omega$ is the unit disk. This problem is easily solved in the polar coordinate system
\[ \begin{cases} x = r\cos\theta \\ y = r\sin\theta \end{cases}. \]
Equation (38.3) then reads
\[ -\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial U}{\partial r}\right) - \frac{1}{r^2}\frac{\partial^2 U}{\partial\theta^2} = \lambda U \tag{38.4} \]
where $U(r, \theta) = u(r\cos\theta, r\sin\theta)$.
Exercise 38.3. Prove that in the polar coordinate system (38.3) transforms into (38.4).
Note that the variables in (38.4) separate, and we set
\[ U(r, \theta) = R(r)\Theta(\theta). \]
Plugging this into (38.4), one has
\[ -\frac{1}{r}\frac{(rR'(r))'}{R(r)} - \frac{1}{r^2}\frac{\Theta''(\theta)}{\Theta(\theta)} = \lambda. \]
Multiplying by $r^2$ and separating the variables,
\[ \frac{\Theta''(\theta)}{\Theta(\theta)} = -\frac{r}{R(r)}(rR'(r))' - \lambda r^2 = -\mu, \]
whence
\[ \begin{cases} \Theta'' = -\mu\Theta \\ -\dfrac{1}{r}\dfrac{d}{dr}\left(r\dfrac{dR}{dr}\right) + \dfrac{\mu}{r^2}R = \lambda R \end{cases}. \tag{38.5} \]
Note that the second equation above is a Sturm–Liouville problem!
Let us now recompute the boundary conditions:
\[ U(r, \theta)\big|_{r=1} = u(r\cos\theta, r\sin\theta)\big|_{r=1} = 0 \quad \forall\theta \in [0, 2\pi) \]
\[ \Downarrow \]
\[ U(1, \theta) = R(1)\Theta(\theta) = 0 \;\Rightarrow\; R(1) = 0. \]
Next, we must have (due to continuity)
\[ U(r, 0) = U(r, 2\pi), \qquad \frac{\partial U}{\partial\theta}(r, 0) = \frac{\partial U}{\partial\theta}(r, 2\pi) \;\Rightarrow\; \Theta(0) = \Theta(2\pi), \quad \Theta'(0) = \Theta'(2\pi), \]
and it now follows from (38.5) that
\[ \begin{cases} \Theta'' = -\mu\Theta \\ \Theta(0) = \Theta(2\pi), \ \Theta'(0) = \Theta'(2\pi) \end{cases}, \tag{38.6} \]
\[ \begin{cases} -\dfrac{1}{r}\dfrac{d}{dr}\left(r\dfrac{dR}{dr}\right) + \dfrac{\mu}{r^2}R = \lambda R \\ R(1) = 0, \ R \in L^2(0, 1) \end{cases}. \tag{38.7} \]
Solve (38.6) first:
\[ \Theta(\theta) = Ae^{i\sqrt{\mu}\,\theta} + Be^{-i\sqrt{\mu}\,\theta} \]
\[ \begin{cases} \Theta(0) = A + B = \Theta(2\pi) = Ae^{i\sqrt{\mu}\,2\pi} + Be^{-i\sqrt{\mu}\,2\pi} \\ \Theta'(0) = (A - B)\,i\sqrt{\mu} = i\sqrt{\mu}\left(Ae^{i\sqrt{\mu}\,2\pi} - Be^{-i\sqrt{\mu}\,2\pi}\right) \end{cases} \;\Rightarrow\; e^{i\sqrt{\mu}\,2\pi} = 1 \;\Rightarrow\; 2\pi\sqrt{\mu} = 2\pi n \;\Rightarrow\; \mu = n^2, \ n \in \mathbb{N}. \]
So the solutions to (38.6) are
\[ \Theta_n(\theta) = A_ne^{in\theta} + B_ne^{-in\theta}. \]
Now (38.7) reads
\[ \begin{cases} -\dfrac{1}{r}\dfrac{d}{dr}\left(r\dfrac{dR}{dr}\right) + \dfrac{n^2}{r^2}R = \lambda R, \quad 0 \leq r \leq 1 \\ R(1) = 0 \end{cases} \quad \text{— a Sturm–Liouville problem,} \]
or
\[ -\frac{1}{r}R' - R'' + \frac{n^2}{r^2}R = \lambda R, \]
or finally
\[ R'' + \frac{1}{r}R' + \left(\lambda - \frac{n^2}{r^2}\right)R = 0, \quad 0 \leq r \leq 1 \quad \text{— the Bessel equation.} \]
All bounded solutions to this equation are
\[ R_n(r) = AJ_n(\sqrt{\lambda}\,r). \]
The boundary condition at $r = 1$ requires
\[ J_n(\sqrt{\lambda}) = 0. \]
This equation has infinitely many solutions:
[Figure: graphs of $J_0(k)$, $J_1(k)$, $J_2(k)$ on $[0, 10]$, with the zeros $k_{01}, k_{02}, \dots, k_{06}$; $k_{11}, \dots, k_{16}$; $k_{21}, \dots, k_{25}$ marked.]
There is no nice closed formula for these roots, but they can be computed numerically. We call them $k_{nm}$ (counted by two indices $n, m$). So $\lambda = k_{nm}^2$, and we arrive at
Theorem 38.4. Let $-\Delta_D$ be the Dirichlet Laplace operator on the unit disk $\Omega$. The spectrum $\sigma(-\Delta_D)$ is discrete and infinite, and
\[ \sigma(-\Delta_D) = \left\{ k_{nm}^2 \right\}, \]
where $k_{nm}$ are the roots of the equations
\[ J_n(k) = 0, \quad n = 0, 1, \dots \]
LECTURE 39
Nonhomogeneous Laplace Equation
We first state without proof the following fundamental theorem on the spectrum of the Laplace operator on a bounded domain.
Theorem 39.1. Let $-\Delta$ be the Laplace operator with Dirichlet, Neumann, or mixed conditions on a bounded domain $\Omega$. Then the spectrum $\sigma(-\Delta)$ is purely discrete and consists of infinitely many positive eigenvalues $\{\lambda_n\}$ accumulating at $\infty$. The set of corresponding eigenfunctions forms an orthonormal basis in $L^2(\Omega)$.
Remark 39.2. The problem
\[ \begin{cases} -\Delta u = \lambda u \\ u|_{\partial\Omega} = 0 \end{cases} \]
is known as the Helmholtz equation (it arises from the wave equation in nonhomogeneous media). Note also that if $\Omega$ is unbounded, then we get a continuous spectrum. For example, in the case of an infinite cylinder, eigenvalues are embedded in the continuous spectrum.
Why is Theorem 39.1 so important? It allows us to solve nonhomogeneous Laplace equations as well as wave and heat equations.
Problem 39.3. Let $\Omega$ be a bounded domain in $\mathbb{R}^m$. Consider
\[ \begin{cases} -\Delta u = f \\ u|_{\partial\Omega} = 0 \end{cases}, \quad \text{where } f \in L^2(\Omega). \tag{39.1} \]
Solution. By Theorem 39.1, the spectrum of $-\Delta$ is discrete and made of eigenvalues $\{\lambda_n\}$. The corresponding eigenfunctions $\{e_n(X)\}$ form an ONB in $L^2(\Omega)$. Hence every function $u \in L^2(\Omega)$ can be represented as
\[ u(X) = \sum_n u_ne_n(X), \quad X = (x_1, x_2, \dots, x_m), \qquad u_n = \langle u, e_n\rangle = \int_\Omega u(X)e_n(X)\,dx_1dx_2\cdots dx_m. \]
Plug this into (39.1):
\[ -\Delta\sum_n u_ne_n = \sum_n f_ne_n \]
\[ \Downarrow \]
\[ \sum_n u_n(-\Delta e_n) = \sum_n u_n\lambda_ne_n \;\Rightarrow\; \sum_n\left(\lambda_nu_n - f_n\right)e_n = 0 \;\overset{\{e_n\}\text{ is an ONB}}{\Longrightarrow}\; \lambda_nu_n - f_n = 0. \]
But this is just an algebraic equation!
\[ u_n = \frac{f_n}{\lambda_n}. \]
And for the solution to (39.1) one has
\[ u(X) = \sum_n\frac{f_n}{\lambda_n}e_n(X), \quad X = (x_1, x_2, \dots, x_m). \]
Note, however, that there is no general way to obtain $\{\lambda_n, e_n(X)\}$, as there is in dimension 1. So let us consider a particular example where this algorithm can be performed by hand (explicitly).
Example 39.4. Consider the problem
\[ \begin{cases} -\Delta u = f \\ u|_{\partial\Omega} = 0 \end{cases}, \quad \text{where } \Omega = [0, a] \times [0, b] \text{ and } f(x, y) = x + y. \]
By Theorem 38.1,
\[ \sigma(-\Delta) = \left\{ \pi^2\left[\left(\frac{n}{a}\right)^2 + \left(\frac{m}{b}\right)^2\right] ;\ n, m \in \mathbb{N} \right\} \]
and the corresponding eigenfunctions are
\[ u_{n,m}(x, y) = \sin\frac{\pi nx}{a}\sin\frac{\pi my}{b}. \tag{39.2} \]
We next normalize them:
\[ \|u_{n,m}\|^2 = \int_0^b\!\!\int_0^a \sin^2\frac{\pi nx}{a}\sin^2\frac{\pi my}{b}\,dx\,dy = \underbrace{\left(\int_0^a\sin^2\frac{\pi nx}{a}\,dx\right)}_{=\int_0^a\frac{1-\cos\frac{2\pi nx}{a}}{2}dx\,=\,\frac{a}{2}}\underbrace{\left(\int_0^b\sin^2\frac{\pi my}{b}\,dy\right)}_{=\frac{b}{2}} = \left(\frac{1}{2}\right)^2ab. \]
So $\|u_{n,m}\|^2 = \frac{ab}{4}$.
We now set
\[ e_{n,m} = \frac{2}{\sqrt{ab}}\sin\frac{\pi nx}{a}\sin\frac{\pi my}{b}, \]
so that all the $e_{n,m}$ are normalized, i.e. $\|e_{n,m}\| = 1$.
By Theorem 39.1, $\{e_{n,m}\}$ is an ONB in $L^2(\Omega)$, and hence any $u \in L^2(\Omega)$ can be represented as
\[ u(x, y) = \sum u_{n,m}e_{n,m}(x, y), \quad \text{where } u_{n,m} = \int_\Omega u(x, y)e_{n,m}(x, y)\,dx\,dy \]
— a double sine Fourier series.
f(x, y) =
f
n,m
e
n,m
(x, y) , where
f
n,m
=
2
ab
b
0
a
0
f(x, y) sin
nx
a
sin
my
b
dx dy
=
2
ab
b
0
_
a
0
(x +y) sin
nx
a
dx
_
sin
my
b
dy
=
2
ab
b
0
_
a
0
(x +y)
a
n
d cos
nx
a
_
sin
my
b
dy
=
2a
n
ab
b
0
_
(x + y) cos
nx
a
a
0
+
:
0
a
0
cos
nx
a
dx
_
sin
my
b
dy
=
2
n
_
a
b
b
0
(a +y) cos n
. .
(1)
n
+y sin
my
b
dy
and by parts again
f
n,m
=
2
n
_
a
b
b
0
(y (a +y)(1)
n
) d cos
my
b
b
m
=
2
nm
ab (y (a +y)(1)
n
) cos
my
b
b
0
=
2
ab
nm
(b (a +b)(1)
n
) (1)
m
+ a(1)
n
=
2
ab
nm
_
(1)
m
b + a(1)
n+m
+b(1)
n+m
a(1)
n
_
=
2
ab
nm
(1)
n
((1)
m
1) a + (1)
m
((1)
n
1) b .
So
\[ f_{n,m} = \frac{2\sqrt{ab}}{\pi^2nm}\left\{(-1)^n\big((-1)^m - 1\big)a + (-1)^m\big((-1)^n - 1\big)b\right\}. \tag{39.3} \]
Now the solution is
\[ u(x, y) = \sum_{n,m\geq 1}\frac{f_{n,m}}{\lambda_{n,m}}e_{n,m}(x, y) \quad \text{(a double Fourier series)}, \]
where
\[ \lambda_{n,m} = \pi^2\left[\left(\frac{n}{a}\right)^2 + \left(\frac{m}{b}\right)^2\right], \qquad f_{n,m} = \frac{2\sqrt{ab}}{\pi^2nm}\left\{(-1)^n\big((-1)^m - 1\big)a + (-1)^m\big((-1)^n - 1\big)b\right\}, \]
\[ e_{n,m}(x, y) = \frac{2}{\sqrt{ab}}\sin\frac{\pi nx}{a}\sin\frac{\pi my}{b}, \]
and the problem is completely solved.
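A closed-form coefficient such as (39.3) is easy to cross-check against direct numerical integration (a sketch, not part of the notes; it assumes SciPy is available, and the values $a = 1$, $b = 2$, $n = 2$, $m = 3$ are arbitrary):

```python
# Compare the closed-form f_{n,m} of (39.3) with the numerical
# inner product of f(x, y) = x + y against e_{n,m}.
import math
from scipy.integrate import dblquad

a, b = 1.0, 2.0

def e_nm(x, y, n, m):
    return (2 / math.sqrt(a * b)) * math.sin(n * math.pi * x / a) \
           * math.sin(m * math.pi * y / b)

def f_closed(n, m):
    return (2 * math.sqrt(a * b) / (math.pi**2 * n * m)) * (
        (-1)**n * ((-1)**m - 1) * a + (-1)**m * ((-1)**n - 1) * b)

n, m = 2, 3
num, _ = dblquad(lambda y, x: (x + y) * e_nm(x, y, n, m), 0, a, 0, b)
print(num, f_closed(n, m))  # the two values agree
```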
Note that the $f_{n,m}$ (the generalized Fourier coefficients of $f$) decay like $\frac{1}{nm}$ as $n, m \to \infty$, and we then divide by $\pi^2\left[\left(\frac{n}{a}\right)^2 + \left(\frac{m}{b}\right)^2\right]$, so the rate of decay for $u_{n,m}$ is like $n^{-3}, m^{-3}$. It makes sense that it should be better, since $u$ already has two derivatives (because the equation has order 2).
Note also that by a simple rescaling of the variables, $\Omega$ could be transformed into a square to make the computation simpler.
One may now wonder what happens if in Problem 39.3 the boundary condition is not zero, i.e. we have
Problem 39.5.
\[ \begin{cases} -\Delta u = f \\ u|_{\partial\Omega} = \varphi \end{cases}. \]
This problem can be reduced to problems considered previously. Indeed, we have a linear equation, so the superposition principle applies; therefore let $u = u_1 + u_2$, where
\[ u_1: \ \begin{cases} -\Delta u_1 = 0 \\ u_1|_{\partial\Omega} = \varphi \end{cases} \quad \text{homogeneous equation, nonhomogeneous boundary conditions,} \tag{I} \]
\[ u_2: \ \begin{cases} -\Delta u_2 = f \\ u_2|_{\partial\Omega} = 0 \end{cases} \quad \text{nonhomogeneous equation, homogeneous boundary conditions.} \tag{II} \]
Equation (I) was the content of Lectures 35–37 (with conformal mappings and such). Equation (II) was considered in Problem 39.3, equation (39.1). So the solution to Problem 39.5 is $u = u_1 + u_2$, where $u_1$ is the solution to (I) and $u_2$ is the solution to (II).
In the very same manner one can treat boundary conditions other than Dirichlet.
Exercise 39.6. Solve $-\Delta u = 1$ in $\Omega = [0, 1]^2$ subject to $u|_{\partial\Omega} = 0$.
LECTURE 40
Wave and Heat Equation in Dimension Higher Than One
1. Wave Equation
Recall that this equation appears in various physical contexts: sound, electromagnetic waves, drums, surface waves, gravitational waves (even if undetected so far).
Consider the following initial value Dirichlet problem on a domain $\Omega$ for the wave equation (homogeneous media):
\[ \begin{cases} u_{tt} = \Delta u \\ u|_{\partial\Omega} = 0 \\ u(0, X) = \varphi(X) \\ u_t(0, X) = \psi(X) \end{cases}, \quad X \in \Omega. \tag{40.1} \]
We employ the eigenfunction expansion method. We only consider the case of a bounded domain $\Omega$. By Theorem 39.1, the spectrum of the Dirichlet Laplacian $-\Delta_D$ (i.e. $-\Delta$ with $u|_{\partial\Omega} = 0$) is discrete, and the eigenfunctions form a basis for $L^2(\Omega)$.
Let $\{\lambda_n\}$ be its eigenvalues and $\{e_n\}$ its eigenfunctions. We now represent the solution of (40.1) as
\[ u = \sum_n u_ne_n = \sum_n u_n(t)e_n(X), \quad \text{where } u_n = \langle u, e_n\rangle. \]
\[ \frac{\partial^2}{\partial t^2}\sum_n u_ne_n = \Delta\sum_n u_ne_n \;\Rightarrow\; \sum_n u_n''e_n = \sum_n u_n\Delta e_n = \sum_n u_n(-\lambda_n)e_n \]
\[ \Downarrow \]
\[ u_n'' = -\lambda_nu_n \;\Rightarrow\; u_n = A_n\cos\sqrt{\lambda_n}\,t + B_n\sin\sqrt{\lambda_n}\,t. \]
Let us now find $A_n, B_n$. In order to do so we need to use the initial conditions. Represent $\varphi, \psi$ as
\[ \varphi = \sum\varphi_ne_n, \qquad \psi = \sum\psi_ne_n. \]
\[ u\big|_{t=0} = \sum u_n\big|_{t=0}e_n = \sum A_ne_n = \varphi = \sum\varphi_ne_n \;\Rightarrow\; A_n = \varphi_n, \]
\[ u_t\big|_{t=0} = \sum u_n'\big|_{t=0}e_n = \sum\sqrt{\lambda_n}\,B_ne_n = \psi = \sum\psi_ne_n \;\Rightarrow\; B_n = \frac{\psi_n}{\sqrt{\lambda_n}}, \]
and the problem is completely solved. Indeed, from solving the eigenfunction problem
\[ \begin{cases} -\Delta u = \lambda u \\ u|_{\partial\Omega} = 0 \end{cases} \]
we obtain the spectrum $\{\lambda_n\}$ and eigenfunctions $\{e_n\}$. The solution to (40.1) is then ($X \in \Omega$)
\[ u(X, t) = \sum\left(A_n\cos\sqrt{\lambda_n}\,t + B_n\sin\sqrt{\lambda_n}\,t\right)e_n(X) = \sum\left(\varphi_n\cos\sqrt{\lambda_n}\,t + \frac{\psi_n}{\sqrt{\lambda_n}}\sin\sqrt{\lambda_n}\,t\right)e_n(X). \]
So
\[ u(X, t) = \sum\left(\varphi_n\cos\sqrt{\lambda_n}\,t + \frac{\psi_n}{\sqrt{\lambda_n}}\sin\sqrt{\lambda_n}\,t\right)e_n(X), \quad X \in \Omega. \tag{40.2} \]
Done!
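A minimal sketch of formula (40.2) in one dimension (not part of the notes): on $\Omega = (0, \pi)$ we know $\lambda_n = n^2$ and $e_n(x) = \sqrt{2/\pi}\sin nx$ explicitly, so the series can be synthesized and checked against an exact solution.

```python
# Synthesize u(x, t) = sum (phi_n cos(n t) + psi_n/n sin(n t)) e_n(x)
# on (0, pi). For phi = e_1, psi = 0 the exact solution is cos(t) e_1(x).
import math

def u(x, t, phi_coeffs, psi_coeffs):
    total = 0.0
    for n, (phin, psin) in enumerate(zip(phi_coeffs, psi_coeffs), start=1):
        w = float(n)                       # sqrt(lambda_n) = n
        e_n = math.sqrt(2 / math.pi) * math.sin(n * x)
        total += (phin * math.cos(w * t) + psin / w * math.sin(w * t)) * e_n
    return total

phi = [1.0, 0.0, 0.0]   # phi = e_1
psi = [0.0, 0.0, 0.0]   # psi = 0
x, t = 0.9, 1.3
exact = math.cos(t) * math.sqrt(2 / math.pi) * math.sin(x)
print(u(x, t, phi, psi), exact)  # the two values agree
```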
Remark 40.1. One has probably already observed that once we know the spectrum of $-\Delta$, the eigenfunction expansion method works in the very same way in any dimension (1, 2, 3 or even higher).
2. Heat Equation
Exercise 40.2. As was done for the wave equation, solve
\[ \begin{cases} u_t = \Delta u \\ u|_{\partial\Omega} = 0 \\ u(0, X) = \varphi(X), \ X \in \Omega \end{cases} \]
and get a formula similar to (40.2).
3. Further Discussions
We will explore two further topics: nodal lines and how to handle unbounded
domains.
1) Nodal lines.
Consider the wave equation with Dirichlet and initial conditions for $\Omega = [0, \pi]^2$:
\[ \begin{cases} u_{tt} = \Delta u \\ u|_{\partial\Omega} = 0 \\ u|_{t=0} = \varphi(X) \\ u_t|_{t=0} = \psi(X) \end{cases}. \]
By Theorem 38.1 we have $\lambda_{n,m} = n^2 + m^2$, and by Example 39.4 the normalized eigenfunctions are $e_{n,m}(X) = \frac{2}{\pi}\sin nx\sin my$, so that
\[ u = \sum_{n,m}\left(\varphi_{n,m}\cos\sqrt{\lambda_{n,m}}\,t + \frac{\psi_{n,m}}{\sqrt{\lambda_{n,m}}}\sin\sqrt{\lambda_{n,m}}\,t\right)e_{n,m}(X). \]
Nodal lines are the lines where $e_{n,m}(x, y) = 0$, i.e. the lines which remain at rest while the other parts vibrate. So, assuming all coefficients equal $1$ (or $0$), we have the following nodal lines on $[0, \pi]^2$:
[Figure: nodal-line patterns of the simple harmonics $e_{1,1}$, $e_{1,2}$, $e_{2,1}$, $e_{1,3}$, $e_{3,1}$, $e_{3,3}$, and of the double harmonics $e_{1,2} + e_{2,1}$, $e_{1,3} + e_{3,1}$, $e_{1,4} + e_{4,1}$.]
250 40. WAVE AND HEAT EQUATION IN HIGHER DIMENSIONS
All of these come from solving a trigonometric equation in two variables. Simple harmonics are easy to understand. For example, for $e_{1,3}$ we need to solve $\sin x\sin 3y = 0$ for $(x, y) \in [0, \pi]^2$. So we obtain:
\[ x = 0, \pi \text{ and } y \in [0, \pi], \quad \text{or} \quad x \in [0, \pi] \text{ and } 3y = k\pi, \ k \in \mathbb{Z}, \ y \in [0, \pi], \]
i.e. $y = 0, \frac{\pi}{3}, \frac{2\pi}{3}, \pi$. Hence the nodal lines are $x = 0$, $x = \pi$, $y = 0$, $y = \pi$ (the boundary, as expected) plus $y = \frac{\pi}{3}$, $y = \frac{2\pi}{3}$.
For double harmonics, we will examine each graph produced.
$e_{1,2} + e_{2,1}$. We have
\[ \sin x\sin 2y + \sin 2x\sin y = 0 \]
\[ 2\sin x\sin y\cos y + 2\sin x\cos x\sin y = 0 \]
\[ \sin x\sin y(\cos y + \cos x) = 0. \]
Note that $\sin x = 0$, $\sin y = 0$ give us the boundary. Then, since $\cos(\pi - x) = -\cos x$, the other solution in $[0, \pi]^2$ is the line $y = \pi - x$ (the antidiagonal).
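A quick numerical check (a sketch, not part of the notes) that the antidiagonal is indeed the interior nodal line of $e_{1,2} + e_{2,1}$:

```python
# e_{1,2} + e_{2,1} (up to the normalizing constant) vanishes
# on the antidiagonal y = pi - x.
import math

def h(x, y):
    return math.sin(x) * math.sin(2 * y) + math.sin(2 * x) * math.sin(y)

vals = [h(x, math.pi - x) for x in [0.3, 1.0, 2.2]]
print(vals)  # all numerically zero
```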
$e_{1,3} + e_{3,1}$. We will use the following identity, derived from trigonometric identities: $\sin 3x = \sin x(2\cos 2x + 1)$. Then we have:
\[ \sin x\sin 3y + \sin 3x\sin y = 0 \]
\[ \sin x\sin y(2\cos 2y + 1) + \sin x(2\cos 2x + 1)\sin y = 0 \]
\[ \sin x\sin y(2\cos 2y + 2\cos 2x + 2) = 0 \]
\[ \sin x\sin y(\cos 2y + \cos 2x + 1) = 0. \]
Again the first two factors give us the boundary, and for the third note that we must solve $\cos 2y = -1 - \cos 2x$. In order to have a solution we need $-1 - \cos 2x \geq -1$, i.e. $\cos 2x \leq 0$ for $x \in [0, \pi]$, i.e. $x \in \left[\frac{\pi}{4}, \frac{3\pi}{4}\right]$. Then, for such $x$-values, the interior nodal curve is
\[ y = \frac{1}{2}\arccos(-1 - \cos 2x), \]
together with its mirror image $y = \pi - \frac{1}{2}\arccos(-1 - \cos 2x)$.
Remark 40.3. Note that for the eigenvalue $5$, for example, there are 2 corresponding eigenfunctions, $e_{1,2}$ and $e_{2,1}$; thus $\lambda = 5$ has multiplicity two. The number of integer solutions to $\lambda = n^2 + m^2$, a quadratic Diophantine equation in 2 variables, depends on the prime decomposition of $\lambda$, and although this study belongs to algebra, it is important to note that, as a result, the multiplicity of $\lambda$ is unbounded.
Remark also that the superposition of harmonics is not trivial if the coefficients are changed.
In addition, if we deal with a round membrane, we now have Bessel functions instead of the double sine functions, so we would get different pictures for the nodal lines too.
The study of nodal lines has applications in construction, for example, to make sure certain parts won't crack under the vibrations (recall that nodal lines on the membrane stay put).
Actually, there is a whole field of study concerned with nodal lines and eigenfunctions of the Laplacian: in particular in seismology, for the wave equation on a sphere, involving the Laplace–Beltrami operator (the spherical Laplacian); or in quantum mechanics, since the Schrödinger equation $iu_t = -\Delta u + qu$ is involved (not quite the heat equation, though; the $i$ makes a big difference!). We could also consider the Klein–Gordon or plasma equation
\[ u_{tt} = \Delta u + qu, \]
but now in 2D or 3D. Sadly, at this point very little is known about this equation in 2D or 3D compared to what is known in one dimension.
2) Unbounded domains.
For domains like $\Omega = \mathbb{R}^n$, the condition $u|_{\partial\Omega} = 0$ is automatically satisfied, since $L^2$ functions must decay at $\infty$. But there are many ways for a domain to be unbounded; for example, an infinite strip in $\mathbb{R}^2$ or an infinite rod in $\mathbb{R}^3$ has only one dimension carrying the unboundedness.
There is no general approach, since there are so many kinds of possible spectrum. What we do have is that $-\Delta$ is positive definite, i.e. $\langle -\Delta u, u\rangle \geq 0$ for all $u$. Hence we can only have nonnegative eigenvalues.
We will consider the specific case of $\mathbb{R}^n$. We need to redefine the Fourier transform in higher dimensions so that it still works. Recall that in one dimension we have
\[ \widehat{f}(\omega) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-i\omega x}f(x)\,dx. \]
How can we move to a vector form? We have $f(x) \to f(X)$ and $dx \to dX$. So we introduce $\varkappa \in \mathbb{R}^n$, called a wave vector, and
\[ \widehat{f}(\varkappa) = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}f(X)e^{-i\langle\varkappa, X\rangle}\,dX \quad \text{— the direct Fourier transform,} \]
\[ f(X) = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}\widehat{f}(\varkappa)e^{i\langle\varkappa, X\rangle}\,d\varkappa \quad \text{— the inverse Fourier transform.} \]
Note that the inner product here is the dot product in $\mathbb{R}^n$. One also verifies that the inverse transform of $\widehat{f}$ recovers $f(X)$, and the uniqueness property is satisfied. Furthermore,
\[ e_\varkappa(X) = e^{i\langle\varkappa, X\rangle} \]
is an eigenfunction of the continuous spectrum, with
\[ \Delta e_\varkappa(X) = -|\varkappa|^2e_\varkappa(X). \]
This can be easily checked in $\mathbb{R}^2$ (the generalization to $\mathbb{R}^n$ is obvious):
\[ \Delta e_\varkappa(X) = \frac{\partial^2}{\partial x_1^2}e^{i(\varkappa_1x_1 + \varkappa_2x_2)} + \frac{\partial^2}{\partial x_2^2}e^{i(\varkappa_1x_1 + \varkappa_2x_2)} = (i\varkappa_1)^2e^{i(\varkappa_1x_1 + \varkappa_2x_2)} + (i\varkappa_2)^2e^{i(\varkappa_1x_1 + \varkappa_2x_2)} = -(\varkappa_1^2 + \varkappa_2^2)e_\varkappa(X). \]
So to solve $u_{tt} = \Delta u$, one passes to the Fourier transform, is left with an ordinary linear second-order equation, and, once it is solved, passes back through the inverse Fourier transform.
Now
\[ \frac{1}{(2\pi)^{n/2}}e^{i\langle\varkappa, X\rangle} = \frac{1}{(2\pi)^{n/2}}e^{i|\varkappa|\langle\omega, X\rangle}, \]
where $\omega = \varkappa/|\varkappa|$ is a unit directional vector; we now have infinitely many directions for each eigenvalue (instead of just $\pm$ before), and the multiplicity of each eigenvalue is therefore infinite.
Part 5
Green's Function
LECTURE 41
Introduction to Integral Operators
So far we have studied a lot of differential operators, but dealt with really only one integral operator: the Fourier transform. But in a way, differentiation and integration are inverses of each other. I.e.
\[ \frac{d}{dx}F(x) = f(x) \;\Longleftrightarrow\; F(x) = \int f(x)\,dx = \int\frac{dF(x)}{dx}\,dx. \]
But we have nonuniqueness here, since for each $f(x)$ there exists an infinite number of antiderivatives $F(x) + C$, $C$ a constant. If we add an initial condition, then $C$ can be fixed and we get uniqueness. That is,
\[ \begin{cases} \dfrac{d}{dx}F(x) = f(x) \\ F(x_0) = C \end{cases} \;\Longleftrightarrow\; F(x) = C + \int_{x_0}^xf(t)\,dt. \]
The operator $Au = \int_{x_0}^xu(t)\,dt$ is the simplest integral operator.
Integration is nicer than differentiation, since the resulting function is smoother than the original (because it is at least differentiable). Think, in electricity, of the difference between an integrating circuit (smoother) and a differentiating circuit (sharper). So it may be best to rewrite a system with integrals.
Now the general definition.
Definition 41.1. Let $\Omega = (a, b)$ be a finite or infinite ($a, b$ could be $\mp\infty$) interval and $K(x, y)$ a function on $(a, b) \times (a, b)$. The formal expression
\[ (\mathcal{K}f)(x) = \int_\Omega K(x, y)f(y)\,dy = \int_a^bK(x, y)f(y)\,dy, \quad x \in \Omega = (a, b), \]
is called an integral operator. The function $K(x, y)$ is called the kernel of the integral operator $\mathcal{K}$.
Exercise 41.2. Rewrite the operator of integration $Au = \int_{x_0}^xu(t)\,dt$ formally as an integral operator.
Theorem 41.3. An integral operator is linear. That is, if $\mathcal{K}$ is an integral operator then
\[ \mathcal{K}(f_1 + f_2) = \mathcal{K}f_1 + \mathcal{K}f_2, \qquad \mathcal{K}(\alpha f) = \alpha\mathcal{K}f. \]
Proof. Trivial. $\square$
Example 41.4. Consider the Fourier operator $\mathcal{F}$:
\[ (\mathcal{F}f)(\omega) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-i\omega x}f(x)\,dx. \]
It is an integral operator with kernel $K(\omega, x) = \frac{1}{\sqrt{2\pi}}e^{-i\omega x}$.
Theorem 41.5. Let $\mathcal{K}_1, \mathcal{K}_2$ be integral operators with interval $\Omega$ and kernels $K_1, K_2$.
1) If $\mathcal{K} = c_1\mathcal{K}_1 + c_2\mathcal{K}_2$, where $c_1, c_2 \in \mathbb{C}$, then
\[ K(x, y) = c_1K_1(x, y) + c_2K_2(x, y). \]
2) If $\mathcal{K} = \mathcal{K}_1\mathcal{K}_2$ then
\[ K(x, y) = \int_\Omega K_1(x, z)K_2(z, y)\,dz. \]
Proof. 1) is trivial.
2)
\[ (\mathcal{K}f)(x) = (\mathcal{K}_1\mathcal{K}_2f)(x) = (\mathcal{K}_1(\mathcal{K}_2f))(x) = \int_\Omega K_1(x, z)(\mathcal{K}_2f)(z)\,dz = \int_\Omega K_1(x, z)\left(\int_\Omega K_2(z, y)f(y)\,dy\right)dz \]
\[ = \int_\Omega\underbrace{\left(\int_\Omega K_1(x, z)K_2(z, y)\,dz\right)}_{=K(x, y)}f(y)\,dy, \]
where we switched the order of integration in the last line. $\square$
Theorem 41.6. Let $\mathcal{K}$ be an integral operator on $L^2(\Omega)$, where $\Omega$ is an interval on $\mathbb{R}$, i.e.
\[ (\mathcal{K}f)(x) = \int_\Omega K(x, y)f(y)\,dy, \quad f \in L^2(\Omega), \]
where $K(x, y)$ is the kernel of $\mathcal{K}$. Then the kernel $K^*$ of the adjoint operator $\mathcal{K}^*$ is given by
\[ K^*(x, y) = \overline{K(y, x)}. \]
Remark 41.9. Theorems 41.5 & 41.6 show that one can understand an integral operator as analogous to a continuous matrix. Indeed, addition is a linear combination of the corresponding components. The product of integral operators looks like a matrix product, where we take $dz = 1$ and $i, j$ become the continuous variables $x, y$ (like we did when comparing the Fourier transform with Fourier series). Similarly, the adjoint kernel is a continuous analog of the adjoint matrix, since $(A^*)_{ij} = \overline{(A)_{ji}}$ for discrete indices $i, j$.
Exercise 41.10. If $\mathcal{F}$ is the Fourier transform, find the kernel of its adjoint.
Let $A$ be the integral operator on $L^1(0, 1)$ defined by
\[ (Af)(x) = \int_0^xf(y)\,dy. \]
Exercise 41.11. Find the kernel and the adjoint of $A$.
Theorem 41.12. $A$ is a linear bounded operator on $L^1(0, 1)$.
Proof. Linearity is obvious. Let us prove boundedness. We need to show that $\|Af\| \leq C\|f\|$ for all $f$. The norm in $L^1(0, 1)$ is
\[ \|f\|_1 = \int_0^1|f(x)|\,dx. \]
For an arbitrary $u \in L^1(0, 1)$ we have
\[ \|Au\|_1 = \int_0^1|(Au)(x)|\,dx = \int_0^1\left|\int_0^xu(t)\,dt\right|dx \leq \int_0^1\int_0^x|u(t)|\,dt\,dx \leq \int_0^1\underbrace{\int_0^1|u(t)|\,dt}_{=\|u\|_1}\,dx = \|u\|_1\int_0^1dx = \|u\|_1. \]
That is, $\|Au\|_1 \leq \|u\|_1$, and by definition $A$ is bounded with $\|A\| \leq 1$. $\square$
Note that it can be proven that $\|A\| = 1$, but we do not need this at the moment.
Remark also that the Fourier operator is bounded on $L^2(\mathbb{R})$, since it is unitary: $\|\mathcal{F}f\|_2 = \|f\|_2$.
Exercise 41.13. Show that the operator $B$ defined on $L^2(0, 1)$ by the formula
\[ Bu = \frac{1}{i}\int_0^xu(t)\,dt \]
on functions $u$ with zero mean (that is, $\int_0^1u(x)\,dx = 0$) is selfadjoint.
(Hint: if $v \in L^1(0, 1)$ then $\frac{d}{dx}\int_0^xv(t)\,dt = v(x)$.)
Definition 41.14. An integral operator $\mathcal{K}$ is called Hilbert–Schmidt if
\[ \iint_{\Omega^2}|K(x, y)|^2\,dx\,dy < \infty. \]
Theorem 41.15. A Hilbert–Schmidt operator is bounded on $L^2(\Omega)$.
Proof. Note
\[ \|\mathcal{K}f\|_2^2 = \int_\Omega|(\mathcal{K}f)(x)|^2\,dx = \int_\Omega\left|\int_\Omega K(x, y)f(y)\,dy\right|^2dx. \]
But by the Cauchy–Schwarz inequality we have
\[ \|\mathcal{K}f\|_2^2 \leq \int_\Omega\left(\int_\Omega|K(x, y)|^2\,dy\right)\underbrace{\int_\Omega|f(y)|^2\,dy}_{=\|f\|_2^2}\,dx = \iint_{\Omega^2}|K(x, y)|^2\,dx\,dy\;\|f\|_2^2, \]
where $\iint|K(x, y)|^2\,dx\,dy$ is finite since $\mathcal{K}$ is Hilbert–Schmidt. So
\[ \|\mathcal{K}f\|_2 \leq \left(\iint_{\Omega^2}|K(x, y)|^2\,dx\,dy\right)^{1/2}\|f\|_2, \]
and $\mathcal{K}$ is bounded. $\square$
Note that $\left(\iint_{\Omega^2}|K(x, y)|^2\,dx\,dy\right)^{1/2}$ is called the Hilbert–Schmidt norm of an integral operator.
Example 41.16 (a good example). Consider $K(x, y) = e^{-(x+y)}$, $x, y \in (0, 1)$. Then
\[ \int_0^1\!\!\int_0^1e^{-2(x+y)}\,dx\,dy = \int_0^1e^{-2x}\,dx\int_0^1e^{-2y}\,dy = \left(\int_0^1e^{-2x}\,dx\right)^2 = \left(-\frac{1}{2}e^{-2x}\Big|_0^1\right)^2 = \left(\frac{1 - e^{-2}}{2}\right)^2. \]
So $\mathcal{K}$ is Hilbert–Schmidt.
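The computation above is easy to confirm numerically (a sketch, not part of the notes; SciPy is assumed to be available):

```python
# The Hilbert-Schmidt integral of K(x, y) = e^{-(x+y)} over (0, 1)^2
# should equal ((1 - e^{-2})/2)^2.
import math
from scipy.integrate import dblquad

val, _ = dblquad(lambda y, x: math.exp(-2 * (x + y)), 0, 1, 0, 1)
closed = ((1 - math.exp(-2)) / 2)**2
print(val, closed)  # the two values agree
```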
The converse of Theorem 41.15 is not true, as illustrated in the example below.
Example 41.17 (a bad example). Let $\mathcal{F}$ be the Fourier transform, which is bounded on $L^2(\mathbb{R})$. Is it Hilbert–Schmidt?
\[ \iint_{\mathbb{R}^2}\left|\frac{e^{-i\omega x}}{\sqrt{2\pi}}\right|^2\,d\omega\,dx = \frac{1}{2\pi}\iint_{\mathbb{R}^2}d\omega\,dx = \infty, \]
so $\mathcal{F}$ is not Hilbert–Schmidt.
Usually, Hilbert–Schmidt does not like infinite intervals: the integral may converge in $x$ but not in $y$ (or vice versa).
LECTURE 42
The Green's Function of $u'' + p(x)u' + q(x)u = f$
1. The Green's Function
The most important physical operators are one of the following: multiplication by a function, differentiation, integration. So integral operators play an enormous role in physics. The main reason for this is that, roughly speaking, the inverse of a differential operator is an integral operator.
To show this we need
Definition 42.1. Let $A$ be a differential operator on a Hilbert space $H$. Then the solution $G(x, y)$ of the problem
\[ AG = \delta(x - y), \tag{42.1} \]
where $x$ is a variable and $y$ a parameter, is called the Green's function of the differential operator $A$.
Note that this means that the Green's function is the kernel of an important integral operator. The operator $A$ need not involve only differentiation; e.g. we could consider the Schrödinger operator $A = -\frac{d^2}{dx^2} + q(x)$.
One may wonder why equation (42.1) has a solution at all. This is a very difficult question which we are unable to answer in this course, but the answer is affirmative for most of the differential operators of mathematical physics. Finding $G$, however, needs to be done on a case-by-case basis.
Why is the Green's function that important? The reason is: if we know the Green's function $G(x, y)$ of an operator $A$, then we can easily solve the equation
\[ Au = f \tag{42.2} \]
for any $f$. Indeed, let us show that
\[ u(x) = \int G(x, y)f(y)\,dy \tag{42.3} \]
is a solution to (42.2). Note that the limits of integration depend on the problem at hand. We have
\[ (Au)(x) = A\int G(x, y)f(y)\,dy = \int\underbrace{AG(x, y)}_{=\delta(x - y)\text{ by }(42.1)}f(y)\,dy = \int\delta(x - y)f(y)\,dy = f(x). \]
So (42.3) is a solution to (42.2). If (42.2) has a unique solution, then (42.3) is the solution.
Remark 42.2. One can view the Green's function as the continuum matrix of the inverse of $A$. Indeed, as a parallel to Linear Algebra, one would have
\[ u = A^{-1}f = Gf, \qquad u_i = \sum_{k=1}^nG_{ik}f_k, \]
and $AG = I_n = \{\delta_{ij}\}$ becomes $AG = \delta(x - y)$.
Note also that the computations above are formal; rigorous justifications are difficult questions of the theory. One can see also that $x$ could become $X$ in higher-dimensional spaces.
Exercise 42.3. Verify that
\[ G(x, y) = \frac{e^{i\varkappa|x - y|}}{2i\varkappa} \]
is the Green's function of the operator
\[ A = \frac{d^2}{dx^2} + \varkappa^2 \]
on $L^2(\mathbb{R})$.
(Hint: verify (42.1).)
2. Application to a General Second-Order Linear Differential Operator
Consider
\[ \begin{cases} u'' + p(x)u' + q(x)u = f(x), \quad x \in (a, b) \\ \text{(BC)} \quad u(a) = 0 = u(b) \end{cases} \tag{42.4} \]
i.e. we have Dirichlet boundary conditions on a finite interval.
Let us find the Green's function, i.e. the function $G(x, y)$ satisfying
\[ G'' + pG' + qG = \delta(x - y). \]
We need to use variation of parameters for the nonhomogeneous equation (42.4) (see also the Appendix to Lecture 32).
First we need to know the solutions of the homogeneous equation $u'' + pu' + qu = 0$. These could be special functions (Bessel, ...), depending on $p, q$. We will assume that we know a fundamental set of solutions $u_1(x), u_2(x)$ of the homogeneous equation, and we further choose $u_1, u_2$ such that $u_1(a) = 0$, $u_2(b) = 0$.
Note that we cannot have $u_1(b) = 0$, for otherwise the Wronskian would vanish and $u_1, u_2$ would be linearly dependent. Similarly, $u_2(a) \neq 0$.
Now we look for a particular solution of (42.4) in the form
\[ u_p(x) = C_1(x)u_1(x) + C_2(x)u_2(x). \]
For $C_1, C_2$ we have
\[ \begin{pmatrix} u_1 & u_2 \\ u_1' & u_2' \end{pmatrix}\begin{pmatrix} C_1' \\ C_2' \end{pmatrix} = \begin{pmatrix} 0 \\ f \end{pmatrix}. \]
Solving this, one has
\[ C_1' = -\frac{fu_2}{W}, \qquad C_2' = \frac{fu_1}{W}, \qquad \text{where } W = \det\begin{pmatrix} u_1 & u_2 \\ u_1' & u_2' \end{pmatrix}, \]
the Wronskian of $(u_1, u_2)$. Note that if $p = 0$ then $W = \text{const} \neq 0$. So
\[ C_1(x) = -\int_{x_0}^x\left(\frac{fu_2}{W}\right)(t)\,dt, \qquad C_2(x) = \int_{x_1}^x\left(\frac{fu_1}{W}\right)(t)\,dt. \]
The general solution of (42.4) is
\[ u(x) = C_1(x)u_1(x) + C_2(x)u_2(x) + \widetilde{C}_1u_1(x) + \widetilde{C}_2u_2(x), \]
where $\widetilde{C}_1, \widetilde{C}_2$ are constants to be found from the boundary conditions on $u$. Since $u_1(a) = 0 = u_2(b)$, while $u_2(a) \neq 0$ and $u_1(b) \neq 0$, the boundary terms reduce to
\[ \begin{cases} u(a) = \big(C_2(a) + \widetilde{C}_2\big)u_2(a) = 0 \\ u(b) = \big(C_1(b) + \widetilde{C}_1\big)u_1(b) = 0 \end{cases} \;\Rightarrow\; \begin{cases} \widetilde{C}_2 = -C_2(a) \\ \widetilde{C}_1 = -C_1(b) \end{cases} \]
and
\[ u(x) = \big(C_1(x) - C_1(b)\big)u_1(x) + \big(C_2(x) - C_2(a)\big)u_2(x). \tag{42.5} \]
But
\[ C_1(x) - C_1(b) = -\int_{x_0}^x\left(\frac{fu_2}{W}\right)(t)\,dt + \int_{x_0}^b\left(\frac{fu_2}{W}\right)(t)\,dt = \int_x^b\left(\frac{fu_2}{W}\right)(t)\,dt, \]
and
\[ C_2(x) - C_2(a) = \int_{x_1}^x\left(\frac{fu_1}{W}\right)(t)\,dt - \int_{x_1}^a\left(\frac{fu_1}{W}\right)(t)\,dt = \int_a^x\left(\frac{fu_1}{W}\right)(t)\,dt. \]
Finally, for (42.5) we have
\[ u(x) = u_1(x)\int_x^b\frac{f(t)u_2(t)}{W(t)}\,dt + u_2(x)\int_a^x\frac{f(t)u_1(t)}{W(t)}\,dt. \tag{42.6} \]
By definition of the Green's function, $G(x, y)$ is computed by (42.6) for $f(t) = \delta(t - y)$, where $y$ is an arbitrary point in $(a, b)$. We have
\[ G(x, y) = u_1(x)\int_x^b\frac{\delta(t - y)u_2(t)}{W(t)}\,dt + u_2(x)\int_a^x\frac{\delta(t - y)u_1(t)}{W(t)}\,dt. \]
Consider two cases, (1) $x > y$ and (2) $x < y$:
\[ (1)\ x > y: \quad G(x, y) = \frac{u_2(x)u_1(y)}{W(y)}, \qquad (2)\ x < y: \quad G(x, y) = \frac{u_1(x)u_2(y)}{W(y)}. \]
Alternatively, we can read $G(x, t)$ directly from (42.6):
\[ u(x) = \int_a^x\frac{u_2(x)u_1(t)}{W(t)}f(t)\,dt + \int_x^b\frac{u_1(x)u_2(t)}{W(t)}f(t)\,dt = \int_a^bG(x, t)f(t)\,dt, \]
where
\[ G(x, t) = \frac{1}{W(t)}\begin{cases} u_2(x)u_1(t), & a \leq t \leq x \\ u_1(x)u_2(t), & x < t \leq b \end{cases}. \]
So we arrive at the following theorem.
Theorem 42.4. The Green's function $G(x, y)$ of a general second-order linear differential operator $\frac{d^2}{dx^2} + p(x)\frac{d}{dx} + q(x)$ on $L^2(a, b)$ with Dirichlet boundary conditions can be represented by
\[ G(x, y) = \begin{cases} \dfrac{u_1(x)u_2(y)}{W(y)}, & x < y \\[2mm] \dfrac{u_2(x)u_1(y)}{W(y)}, & x > y \end{cases} \]
where $u_1, u_2$ are solutions of $u'' + pu' + qu = 0$ with the conditions $u_1(a) = 0$, $u_2(b) = 0$, respectively, and $W$ is the Wronskian of $(u_1, u_2)$, that is, $W = u_1u_2' - u_1'u_2$.
But we need a fundamental set — that is the hard part. If we have Neumann or other conditions, we still follow the same procedure, but we figure out different boundary conditions to impose on the fundamental set so that we can get a solution in the simplest form.
Note that the above theorem can easily be restated for the Schrödinger operator.
Exercise 42.5. Derive the expression for the Green's function $G(x, y)$ of the operator
\[ A = -\frac{d^2}{dx^2} + \varkappa^2 \]
on $L^2(\mathbb{R})$ for $\varkappa > 0$.
Answer: $G(x, y) = \dfrac{e^{-\varkappa|x - y|}}{2\varkappa}$.
(Hint: modify the arguments of this section to treat $(-\infty, \infty)$.)
Note that $q(x)$ could contain a spectral parameter, as above. Then we write $G_\lambda(x, y)$.
What about higher dimensions? For example, the free Laplacian:
\[ \begin{cases} -\Delta u - \varkappa^2u = f \\ u \in L^2(\mathbb{R}^n) \end{cases}. \]
The solution was exponential in one dimension. Here variables appear in the denominator, and there are infinitely many linearly independent solutions in the fundamental set (not just 2).
Index
Adjoint operator, 70, 258
Amplier, 97
Analytic
at infinity, 38
Analytic function, 217
definition, 15
power series, 31
Banach space, 89
Basis, 56
infinite dimension, 83
orthogonal, 136
orthonormal, 68, 110
Bessel equation, 239
Bessel function, 210, 251
Boundary conditions, 105, 169
Bounded linear operator, 102, 103
Branch cut, 50
Cauchy formula, 24, 25, 33, 218
Cauchy inequality, 84
Cauchy integral, 24
Cauchy Principal Value, 49
Cauchy sequence, 89
Cauchy's theorem, 20, 162, 191
Cauchy-Riemann conditions, 16, 217, 227
Cauchy-Schwarz inequality, 260
Change of variables, 233
Characteristic equation, 77
Characteristic polynomial, 77
Characteristic triangle, 202
Complex numbers
argument, 12
conjugate, 11
properties, 12
exponential representation, 13
imaginary part, 11
imaginary unit, 11
inverse, 12
modulus, 11
properties, 13
real part, 11
Complex valued function, 13
Conformal mapping, 225, 228
Joukowsky transform, 232
Möbius transform, 218
Riemann theorem, 225, 231
Schwarz-Christoffel formula, 232
Connected
multiconnected contour, 33
path-connected, 32
simply connected domain, 225
Continuous spectrum, 107, 110, 173, 174, 176,
180, 209
Contour
multiconnected, 33
Contour integral, 20
Convergence
absolute, 30
normed spaces, 89
of a sequence, 29
series, 90
uniform, 112
weak, 116
Convolution, 151, 193, 229
Convolution theorem, 152
Coordinates, 90
Coordinates of a vector, 58
Coulomb potential, 214
D'Alembert formula, 182, 183
δ function, see also Dirac δ function
δ-sequence, 113, 116, 117, 220, 230
Derivative, 15
Differential operator, 59, 160
Dimension
infinite, 83
Dirac δ function, 117
Fourier series representation, 123
integral representation, 125
Directional derivative, 215
Dirichlet
Laplacian, 215, 241
Dirichlet problem, 127, 221, 222, 241, 247
Discrete spectrum, 107, 109, 173, 174
Disk
unit, 218
Domain
simply connected, 225
Double sine Fourier series, 243
Duhamel principle, 193
Eigenfunction, 107, 109, 110, 241
expansion, 181
Eigenfunction expansion, 208, 247
Eigenfunctions, 135, 180
generalized, 176
Eigenspace, 79
Eigenvalue, 77, 107, 109, 173, 241
multiplicity, 78, 251
simple, 78
Eigenvector, 77, 109
Elliptic integral, 232
Euclidean space, 69, 84
Euclidean norm, 84
Euler formula, 13
Filter, 97
Fourier coefficient
generalized, 208, 244, 247
Fourier integral theorem, 155, 163, 209
Fourier operator, 151, 154, 177, 258, 259
Fourier representation, 159, 160
Fourier series, 244
complex form, 91
double sine, 243
expansion, 91
trigonometric form, 93
Fourier transform, 104, 148, 149, 151, 159, 258
generalized, 210
in ℝⁿ, 252
Frobenius method, 133
Frobenius theorem, 139
Gauss integral, 192
Generalized derivative, 119
Generalized function, 117
Gordon-Klein equation, 172
Green's first identity, 221
Green's formula, 215, 217
Green's function, 261, 264
Harmonic function, 227
Harmonics
rectangular, 223
Heat equation, 193, 207, 241
Helmholtz equation, 207, 241
Hermite polynomials, 145
Hermite's equation, 144
Hilbert space, 85, 212, 261
examples
L²(a, b), 87
orthonormal basis, 90
Hilbert-Schmidt integral operator, 260
Hilbert-Schmidt norm, 260
Holomorphic function, 15
Homogeneous wave equation, 168
Indicial equation, 140-142
Initial conditions, 169
Inner product, 67, 85, 212
in ℝ³, 69
Inner product space, see also Euclidean space
Integral operator, 257, 261
Hilbert-Schmidt, 260
Inverse
of an operator, 106
Inverse operator, 71
Invertible operator, 71
Irregular singular point, 139
Jacobi matrix, 234
Jordan's lemma, 46, 161, 163
Kernel
integral operator, 257
Klein-Gordon equation, 207, 252
Kronecker delta, 68, 181
L¹ space, 104
L² space, 87
Laplace equation, 215, 217, 221, 232
nonhomogeneous, 241
Laplace operator, 211, 214, 241, 247
Laplace-Beltrami equation, 252
Laplacian, 265
Dirichlet, 215, 241, 247
Neumann, 215, 241
Robin, 215
Laurent theorem, 34
Legendre equation, 133
Legendre polynomials, 133, 135, 209
Line integral, 18
Linear operator, 59, 173
bounded, 102
differentiation, 59, 105
domain, 101
inverse, 106
kernel, 64
momentum, 110
norm, 102
resolvent, 106
spectrum, 107
unbounded, 102
Linear space, 55
basis, 56, 90
complex, 56
convergence of series, 90
dimension, 56
infinite dimensional, 83
norm, 83
real, 56
Linearly dependent, 56
Linearly independent, 56
Liouville theorem, 25
Möbius transform, 218
Matrix, 59
addition, 60
diagonal matrix, 61
multiplication, 60
orthogonal, 72
scalar multiplication, 60
square, 59
unit matrix, 61
zero matrix, 61
Matrix representation of an operator, 62
Mean value theorem, 218
Momentum operator, 73, 105, 110, 160, 175,
177, 179, 189
Morera's theorem, 27
Multiplicity, 78
Neumann
Laplacian, 215, 241
Neumann problem, 127, 222, 241
Nodal lines, 249
Norm, 83
induced by an inner product, 68
of a linear operator, 102
sup-norm, 111
Normalized, 88
Normed space, 83
Operator
adjoint, 70, 258
coordinate, 177
coordinates, 160
differential, 261
differentiation, 160
Fourier, 177, 258
integral, 257, 261
inverse, 71
invertible, 71
Laplace, 211, 214, 241, 247
momentum, 73, 105, 160, 175, 177, 179, 189
multiplication, 160, 177
rotation, 73
Schrödinger, 179, 189, 261, 264
self-adjoint, 70, 133
similar, 75, 159
unitary, 71
unitarily equivalent, 176, 177
Ordinary point, 137
Orthogonal basis, 136
Orthogonal functions, 88
Orthogonal matrix, 72
Orthogonal vectors, 68
Orthonormal basis, 68, 110, 241
Parseval equation, 68, 91
Path
connected, 32
Plancherel theorem, 154
Point
irregular singular, 139
ordinary, 137
regular singular, 139
singular, 138
Point spectrum, 173
Poisson formula, 219, 227
Poisson kernel, 219
upper half plane, 229
Pole, 37
Potential, 207
Coulomb, 214
Power series, 30, 133
Power series solution, 137
Purely discrete spectrum, 110
Rectangular harmonics, 223
Regular singular point, 139
Residue, 38, 39
Residue formula, 39
Residue theorem, 40
Resolvent, 106, 173
Robin
Laplacian, 215
Robin problem, 128
Rodrigues formula, 136
Rotation operator, 73
Scalar product, see also Inner product
Schrödinger operator, 129, 169, 179, 189, 207, 261, 264
Schrödinger equation, 143
Schwarz-Christoffel formula, 232
Self-adjoint operator, 70, 133, 215
spectrum, 175
Series
convergence, 29
definition, 29
divergence, 29
functional, 30
domain of convergence, 30
power, 30
Similar operators, 75, 159, 176
Similarity transformation, 75
Simple harmonics, 94, 95, 110
Simply connected domain, 20
Singular point, 138
Singularity, 37
essential, 37
isolated, 37
pole, 37
removable, 37
Smooth, 114
Spectral analysis, 78, 169, 170, 177, 178, 208
Spectral theorem, 80
Spectrum, 77, 107, 135, 174, 176
continuous, 107, 110, 173, 174, 176, 180, 209
discrete, 107, 109, 173, 174, 240
point, 173
purely discrete, 110
self-adjoint operator, 175
Sturm-Liouville
Dirichlet problem, 127
equation, 127, 208
Neumann problem, 127
operator, 127
canonical form, 129
Robin problem, 128
Sturm-Liouville equation, 238
Sturm-Liouville operator, 133
Subspace, 58
Support, 113
finitely supported, 113
Taylor series, 31, 32
Taylor theorem, 31, 220
Tension, 168
Test function, 114
Triangle inequality, 19
Uniform convergence, 112
Unit disk, 218
Unitarily equivalent, 176, 177
Unitary operator, 71
Univalent function, 225
Variation of parameters, 163, 195, 197, 203,
208, 210, 262
Vector space, see also Linear space
Wave equation, 168, 193, 207, 241, 247
free, 169
GordonKlein, 172
Helmholtz, 207
homogeneous, 168
KleinGordon, 207
nonhomogeneous, 172
Wave propagation, 183
Wave vector, 252
Weak convergence, 116
Weak solution, 176, 177
Weighted L² space, 128
Weyl sequence, 174
Zhukovsky function, 232