Physics 129a
051018 Frank Porter
Revision 151029 F. Porter
1 Introduction

We are interested in solving differential equations of the form

    Lu(x) = g(x),  (1)

where L is a differential operator acting on functions defined for a ≤ x ≤ b.

2 Self-adjoint Operators

Recall that the adjoint L† of an operator L is defined by

    ⟨L†u|v⟩ = ⟨u|Lv⟩  (2)

for all

    v ∈ D_L  (3)

and

    u ∈ D_L†.  (4)

Consider the operator

    p = (1/i) d/dx  (5)

on [a, b]. Integrating by parts,

    ⟨u|pv⟩ = ∫_a^b u*(x) (1/i)(dv/dx)(x) dx  (6)
           = (1/i) u*(x)v(x)|_a^b − ∫_a^b (1/i)(du*/dx)(x) v(x) dx  (7)
           = (1/i)[u*(b)v(b) − u*(a)v(a)] + ⟨qu|v⟩  (8)
           = ⟨p†u|v⟩,  (9)

so that

    ⟨p†u|v⟩ − ⟨qu|v⟩ = (1/i)[u*(b)v(b) − u*(a)v(a)].  (10)
We have used the symbol q instead of p here to indicate the operator (1/i)d/dx, but without the implication that the domain of q has the same boundary conditions as p. That is, it may be that D_p is a proper subset of D_q. If p is to be Hermitian, we must have that the integrated part vanishes:
    u*(b)v(b) − u*(a)v(a) = 0,  for all u, v ∈ D_p.  (11)
1. Suppose the boundary condition on v ∈ D_p is

    v(a) = v(b),  (12)

with the value v(a) otherwise arbitrary. Then u must satisfy u(a) = u(b) in order that the integrated part vanishes. Hence D_p† = D_p and we have a self-adjoint operator.
2. Now suppose the boundary condition on v is v(a) = v(b) = 0. In this case, u can be any differentiable function and p will still be a Hermitian operator. We have that D_p† ≠ D_p (indeed, D_p is a proper subset of D_p†), hence p is not self-adjoint.
Why are we worrying about this distinction between Hermiticity and self-adjointness? Stay tuned.
Recall that we are interested in solving the equation Lu(x) = g(x), where L is a differential operator, and a ≤ x ≤ b. One approach to this is to find the inverse of L, called the Green's function:

    L_x G(x, y) = δ(x − y),  a ≤ x, y ≤ b.  (13)
Then

    g(x) = ∫_a^b δ(x − y) g(y) dy = ∫_a^b L_x G(x, y) g(y) dy  (14)
         = L_x [∫_a^b G(x, y) g(y) dy]  (15)
         = L_x u(x),

where

    u(x) = ∫_a^b G(x, y) g(y) dy.  (16)

Now suppose u is an eigenfunction of L with eigenvalue λ: Lu = λu. Then

    u(x) = λ ∫_a^b G(x, y) u(y) dy,  (17)

or,

    ∫_a^b G(x, y) u(y) dy = u(x)/λ,  (18)

as long as λ ≠ 0. Thus, the eigenvectors of the (nice) integral operator are the same as those of the (unnice) differential operator, and the corresponding eigenvalues (with redefinition of the integral operator eigenvalues from that in the Integral Note) are the inverses of each other.
We have the theorem:
Theorem: The eigenvectors of a compact, self-adjoint operator form a complete set (when we include eigenvectors corresponding to zero eigenvalues).
Our integral operator is self-adjoint if G(x, y) = G*(y, x), since then

    ∫_a^b ∫_a^b φ*(x) G(x, y) ψ(y) dx dy = ∫_a^b ∫_a^b [G(y, x) φ(x)]* ψ(y) dx dy,  (19)

that is, ⟨φ|Gψ⟩ = ⟨Gφ|ψ⟩.

Let λ_i and |φ_i⟩ denote the eigenvalues and eigenfunctions of our Hermitian operator L. For any pair of eigenfunctions we have

    ⟨φ_j|Lφ_i⟩ = λ_i ⟨φ_j|φ_i⟩.  (23)

We also have:

    ⟨Lφ_j|φ_i⟩ = λ_j ⟨φ_j|φ_i⟩,  (24)

since the eigenvalues of a Hermitian operator are real. But ⟨φ_j|Lφ_i⟩ = ⟨Lφ_j|φ_i⟩ by Hermiticity. Hence,

    (λ_i − λ_j)⟨φ_j|φ_i⟩ = 0,  (25)

and eigenfunctions belonging to distinct eigenvalues are orthogonal.
Consider now the equation (L − λ)u = g. Expand in eigenfunctions:

    |u⟩ = Σ_i |φ_i⟩⟨φ_i|u⟩,  (28)

    |g⟩ = Σ_i |φ_i⟩⟨φ_i|g⟩.  (29)

Then (L − λ)|u⟩ = |g⟩ gives

    Σ_i (λ_i − λ)|φ_i⟩⟨φ_i|u⟩ = Σ_i |φ_i⟩⟨φ_i|g⟩,  (30)

hence

    ⟨φ_i|u⟩ = ⟨φ_i|g⟩/(λ_i − λ)  (31)

if λ ≠ λ_i. Thus,

    |u⟩ = Σ_i |φ_i⟩ ⟨φ_i|g⟩/(λ_i − λ),   i.e.,   u(x) = Σ_i [φ_i(x)/(λ_i − λ)] ∫_a^b φ_i*(y) g(y) dy.  (32)

If λ coincides with one or more of the eigenvalues λ_i, a solution exists only if ⟨φ_i|g⟩ = 0 for those i, and then

    |u⟩ = Σ_{i, λ_i ≠ λ} |φ_i⟩ ⟨φ_i|g⟩/(λ_i − λ) + Σ_{i, λ_i = λ} c_i |φ_i⟩,  (34)

with arbitrary constants c_i.
Comparing with

    u(x) = ∫_a^b G(x, y) g(y) dy,  (35)

we see that the Green's function is given by:

    G(x, y) = Σ_i φ_i(x) φ_i*(y)/(λ_i − λ).  (36)
3. Since

    g(x) = (L − λ)u(x) = (L_x − λ) ∫_a^b G(x, y) g(y) dy,  (37)

we must have, for any g:

    (L_x − λ) G(x, y) = δ(x − y).  (38)
Thus, G may be regarded as the solution (including boundary conditions) to the original problem, but for a point source (rather than
the extended source described by g(x)).
Thus, in addition to the method of finding the eigenfunctions, we have a second promising practical approach to finding the Green's function: that of solving the differential equation

    (L_x − λ) G(x, y) = δ(x − y).  (39)
3 The Sturm-Liouville Problem

A large class of second order differential operators of interest in physics may be cast in the Sturm-Liouville form,

    L = (d/dx) p(x) (d/dx) − q(x).  (42)

We are especially interested in the eigenvalue equation corresponding to L, in the form:

    Lu + λwu = 0,  (43)

or

    (d/dx)[p(x) (du/dx)(x)] − q(x)u(x) + λw(x)u(x) = 0.  (44)

Here, λ is an eigenvalue for eigenfunction u(x), satisfying the boundary conditions. Note the sign change from the eigenvalue equation we often write; this is just a convention. The function w(x) is a real, non-negative weight function.
3.1 An Example

Consider the operator

    L = d²/dx² + 1,  x ∈ [0, π],  (45)

with homogeneous boundary conditions u(0) = u(π) = 0. This is of Sturm-Liouville form with p(x) = 1 and q(x) = −1. Suppose we wish to solve the inhomogeneous equation:

    Lu = cos x.  (46)
A solution to this problem is u(x) = (x/2) sin x, as may be readily verified, including the boundary conditions. But we notice also that

    (d²/dx²) sin x + sin x = 0,  (47)

with sin 0 = sin π = 0, so that, for any constant A,

    u(x) = (x/2) sin x + A sin x  (48)

is also a solution: the solution is not unique.

Let us try to solve the problem by expanding in eigenfunctions. Expand the source:

    f = Σ_n |u_n⟩⟨u_n|f⟩.  (50)

Thus, Lu = f implies

    u = Σ_n |u_n⟩ ⟨u_n|f⟩/λ_n.  (51)

The normalized eigenfunctions satisfying the boundary conditions, and their eigenvalues, are

    u_n(x) = √(2/π) sin nx,  λ_n = 1 − n²,  n = 1, 2, 3, . . .  (53)

With f(x) = cos x, the terms with λ_n ≠ 0 give

    u(x) = Σ_{n=2}^∞ [√(2/π) sin nx/(1 − n²)] ∫_0^π √(2/π) sin ny cos y dy + C sin x,  (54)

where C sin x is the arbitrary admixture of the λ_1 = 0 eigenfunction. For the n = 1 term, a solution requires ⟨u_1|f⟩ = 0; indeed

    ∫_0^π sin x cos x dx = 0,  (55)

so far so good.
In general, we need, for n = 2, 3, . . .:

    I_n = ∫_0^π sin nx cos x dx
        = (1/2) ∫_0^π [sin(n + 1)x + sin(n − 1)x] dx
        = { 0,             n odd,
            2n/(n² − 1),   n even.  (56)

Thus,

    u(x) = (2/π) Σ_{n=2,even}^∞ [2n/((1 − n²)(n² − 1))] sin nx + C sin x
         = C sin x − (4/π) Σ_{n=2,even}^∞ [n/(n² − 1)²] sin nx.  (57)
But does the summation above give (x/2) sin x? We should check that our two solutions agree. In fact, we'll find that

    (x/2) sin x ≠ −(4/π) Σ_{n=2,even}^∞ [n/(n² − 1)²] sin nx.  (58)

But we'll also find that this is all right, because the difference is just something proportional to sin x, the solution to the homogeneous equation. To answer this question, we evidently wish to find the sine series expansion of (x/2) sin x:

    f(x) = Σ_{n=1}^∞ a_n √(2/π) sin nx,  (59)

where f(x) = (x/2) sin x, and the interval of interest is (0, π).
We need to determine the expansion coefficients a_n:

    a_n = √(2/π) ∫_0^π f(x) sin nx dx
        = √(2/π) (1/4) ∫_0^π x [cos(n − 1)x − cos(n + 1)x] dx
        = { √(2/π) π²/8,            n = 1,
            0,                      n odd, n ≠ 1,  (60)
            −√(2/π) 2n/(n² − 1)²,   n ≥ 2, even.  (61)

Thus,

    (x/2) sin x = (π/4) sin x − (4/π) Σ_{n=2,even}^∞ [n/(n² − 1)²] sin nx.  (62)

Comparing with Eq. (57), the two solutions indeed agree, up to the anticipated multiple of sin x (that is, with C = π/4 + A).
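This sine-series identity is easy to test numerically; the sketch below (grid and truncation order are arbitrary choices) compares the partial sums of the series against (x/2) sin x:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 201)
target = 0.5 * x * np.sin(x)

# (x/2) sin x = (pi/4) sin x - (4/pi) * sum over even n of n/(n^2-1)^2 sin nx
series = (np.pi / 4.0) * np.sin(x)
for n in range(2, 2001, 2):          # even n only; truncation is arbitrary
    series -= (4.0 / np.pi) * n / (n**2 - 1) ** 2 * np.sin(n * x)

max_err = np.max(np.abs(series - target))
assert max_err < 1e-4                # the coefficients fall off like 1/n^3
```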
Consider now the same operator,

    L = d²/dx² + 1,  x ∈ [0, π],  (63)

but now with boundary conditions u(0) = u′(0) = 0. Certainly the solution u(x) = (x/2) sin x still works, and satisfies the boundary conditions.

Let us try the eigenfunction expansion approach once more. Start with the eigenvalue equation:

    u″ + (1 − λ)u = 0.  (64)

For any λ, the only solution of this equation with u(0) = u′(0) = 0 is u ≡ 0: with these boundary conditions the operator has no eigenfunctions at all, and the eigenfunction method fails.
Indeed, L is not even Hermitian on this domain:

    ⟨Lu|v⟩ − ⟨u|Lv⟩ = ∫_0^π [(d²u*/dx²)(x) v(x) − u*(x)(d²v/dx²)(x)] dx  (67)
                    = [(du*/dx)(x) v(x) − u*(x)(dv/dx)(x)]_0^π
                    = (du*/dx)(π) v(π) − u*(π)(dv/dx)(π),  (68)

which does not vanish in general.
Let us try the Green's function approach instead. For x ≠ y, G must satisfy the homogeneous equation, with

    G(0, y) = ∂G/∂x (0, y) = 0.  (69)

Since both conditions are imposed at x = 0, the solution for x < y is

    G(x, y) = 0,  x < y.  (71)

That is, G(x, y) = 0 for x < y. Notice that we have made no requirement on boundary conditions for y. Since L is not self-adjoint, there is no such constraint!

For x > y, write G(x, y) = C(y) sin x + D(y) cos x. Continuity of G and the unit jump of ∂G/∂x at x = y give

    C(y) sin y + D(y) cos y = 0,
    C(y) cos y − D(y) sin y = 1.  (73)

Thus, C(y) = cos y and D(y) = −sin y, and the Green's function is:

    G(x, y) = { 0,                                        x < y,
                sin x cos y − cos x sin y = sin(x − y),   x > y.  (74)
Let's check that we can find the solution to the original equation Lu(x) = cos x. It should be given by

    u(x) = ∫_0^π G(x, y) cos y dy
         = ∫_0^x sin(x − y) cos y dy
         = (x/2) sin x.  (75)
The lesson is that, even when the eigenvector approach fails, the method of Green's functions may remain fruitful.
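As a quick numerical sanity check on the integral just computed (the quadrature rule and step count below are arbitrary choices):

```python
import numpy as np

def u(x, n=4000):
    # midpoint-rule evaluation of the integral of sin(x-y)*cos(y), y in [0, x]
    y = (np.arange(n) + 0.5) * (x / n)
    return np.sum(np.sin(x - y) * np.cos(y)) * (x / n)

# compare with the closed form (x/2) sin x at a few points
for x in [0.5, 1.0, 2.0, 3.0]:
    assert abs(u(x) - 0.5 * x * np.sin(x)) < 1e-6
```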
3.2 Weights

Suppose now that the eigenvalue equation carries a non-trivial weight function:

    Lu_i + λ_i w u_i = 0.  (76)

Then,

    ∫ u_j* (Lu_i) dx = −λ_i ∫ u_j* w u_i dx,  (78)

    ∫ u_i (Lu_j)* dx = −λ_j* ∫ u_j* w u_i dx,  (79)

where we have used the fact that w(x) is real. By Hermiticity, the two lines above are equal, and we have:

    (λ_i − λ_j*) ∫ u_j* w u_i dx = 0.  (80)

Setting i = j shows that the eigenvalues are real; for λ_i ≠ λ_j the eigenfunctions are orthogonal with respect to the weight w, and we may normalize them so that

    ∫ u_j*(x) u_i(x) w(x) dx = δ_ij.  (81)
4.1
Legendre Equation
    (1 − x²) d²u/dx² − 2x du/dx + ℓ(ℓ + 1)u = 0.  (82)
The typical situation is for x to be the cosine of a polar angle, and hence |x| ≤ 1. When ℓ is an integer, the solutions are the Legendre polynomials
P` (x) and the Legendre functions of the second kind Q` (x). These solutions
may be obtained by assuming a series solution and substituting into the
differential equation to discover the recurrence relation.
We may put the Legendre equation in the Sturm-Liouville form by letting

    p(x) = 1 − x²,       (83)
    q(x) = 0,            (84)
    w(x) = 1,            (85)
    λ    = ℓ(ℓ + 1).     (86)
4.2 Associated Legendre Equation

Retaining the azimuthal dependence in the separation of the Laplacian in spherical coordinates adds a term to Eq. (82), yielding the associated Legendre equation:

    (1 − x²) d²u/dx² − 2x du/dx + [ℓ(ℓ + 1) − m²/(1 − x²)] u = 0,  (89)

where x = cos θ, so that 1 − x² = 1 − cos²θ = sin²θ. For integer ℓ and m, the solutions regular on [−1, 1] are the associated Legendre functions,

    P_ℓ^m(x) = (−1)^m (1 − x²)^{m/2} (d^m/dx^m) P_ℓ(x).  (90)

4.3 Bessel Equation
While the Legendre equation appears when one applies separation of variables to the Laplacian in spherical coordinates, the Bessel equation shows up similarly when cylindrical coordinates are used. In this case, the interpretation of x is as a cylindrical radius, so typically the region of interest is x ≥ 0. But the Bessel equation shows up in many places where it might not be so expected. The equation is:
    x² d²u/dx² + x du/dx + (x² − n²)u = 0.  (91)

We could put this in Sturm-Liouville form by letting (noting that simply letting p(x) = x² doesn't work, we divide the equation by x):

    p(x) = x,      (92)
    q(x) = −x,     (93)
    w(x) = 1/x,    (94)
    λ    = −n²;    (95)

dividing Eq. (91) by x then reproduces Lu + λwu = 0 in this notation.
To see how the Bessel equation arises, consider the Helmholtz equation in cylindrical coordinates (r, φ, z),

    ∇²ψ + k²ψ = 0,  (96)

or

    (1/r) ∂/∂r (r ∂ψ/∂r) + (1/r²) ∂²ψ/∂φ² + ∂²ψ/∂z² + k²ψ = 0.  (97)

Separating variables with ψ = R(r)Φ(φ)Z(z), the azimuthal equation gives Φ ∝ e^{±icφ}; single-valuedness under φ → φ + 2π requires that c not be arbitrary. Hence c = n, an integer. The z equation contributes a separation constant, which combines with k² into an effective constant κ². Thus, we have a differential equation in r only:

    (r/R) d/dr (r dR/dr) + r²κ² − n² = 0,  (104)

or

    d/dr (r dR/dr) + κ² r R − (n²/r) R = 0,  (105)

which is of Sturm-Liouville form with

    p(r) = r,      (106)
    q(r) = n²/r,   (107)
    w(r) = r,      (108)
    λ    = κ².     (109)
From the Sturm-Liouville form we obtain the orthogonality relation for Bessel functions:

    ∫_a^b J_n(αx) J_n(βx) x dx = [x/(β² − α²)] [β J_n(αx) J_n′(βx) − α J_n′(αx) J_n(βx)]_a^b.  (110)

In the important special case a = 0, b finite, with k chosen so that J_n(kb) = 0,

    ∫_0^b J_n²(kx) x dx = (b²/2) [J_{n+1}(kb)]².  (111)

Thus, if {k_m} are chosen so that J_n(k_m b) = 0, a function f(x) on (0, b) may be expanded as

    f(x) = Σ_{m=1}^∞ c_m J_n(k_m x),  (112)

with

    c_m = [∫_0^b f(x) J_n(k_m x) x dx] / [(b²/2) (J_{n+1}(k_m b))²].  (113)
4.4 Harmonic Equation

The simplest case is the harmonic equation,

    d²u/dx² + k²u = 0,  (115)

which is already of Sturm-Liouville form with

    p(x) = 1,    (116)
    q(x) = 0,    (117)
    w(x) = 1,    (118)
    λ    = k².   (119)
4.5 Hermite Equation

The quantum mechanical simple harmonic oscillator is described by the Schrödinger equation

    −d²ψ/dx² + x²ψ = Eψ,  (121)

in units where ℏ = 1, mass m = 1/2, and spring constant k = 2. This is already essentially in Sturm-Liouville form.

However, we expect to satisfy boundary conditions ψ(±∞) = 0. A useful approach, when we have such criteria, is to divide out the known behavior. For large |x|, ψ(x) → 0, so in this regime the equation becomes, approximately:

    d²ψ/dx² − x²ψ = 0,  (122)

with approximate solution ψ ∼ e^{−x²/2}. This suggests the substitution

    ψ(x) = u(x) e^{−x²/2}.  (123)

Substituting this form for ψ(x) into the original Schrödinger equation yields the following differential equation for u(x):

    d²u/dx² − 2x du/dx + (E − 1)u = 0,  (124)

the Hermite equation; for E − 1 = 2n, with n a non-negative integer, the solutions are the Hermite polynomials H_n(x). To obtain the Sturm-Liouville form, multiply by

    p(x) = e^{−x²},  (125)

noting that

    d/dx [e^{−x²} du/dx] = e^{−x²} d²u/dx² − 2x e^{−x²} du/dx.  (126)

Then we have,

    d/dx [e^{−x²} (du/dx)(x)] + (E − 1) e^{−x²} u(x) = 0,  (127)

that is, q(x) = 0, w(x) = e^{−x²}, and λ = E − 1.
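NumPy's Hermite utilities implement the physicists' polynomials H_n, which satisfy exactly the Hermite equation u″ − 2xu′ + 2nu = 0; a minimal check (the choices of n and sample points are arbitrary):

```python
import numpy as np
from numpy.polynomial import hermite

# numpy's Hermite module implements the physicists' H_n, which satisfy
#   H_n'' - 2 x H_n' + 2 n H_n = 0.
n = 5
c = [0.0] * n + [1.0]                     # coefficient vector selecting H_5
x = np.linspace(-2.0, 2.0, 9)
H  = hermite.hermval(x, c)
H1 = hermite.hermval(x, hermite.hermder(c))
H2 = hermite.hermval(x, hermite.hermder(c, 2))
residual = H2 - 2 * x * H1 + 2 * n * H
assert np.max(np.abs(residual)) < 1e-6 * (1.0 + np.max(np.abs(H2)))
```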
4.6 Laguerre Equation

The Laguerre equation is:

    x d²u/dx² + (1 − x) du/dx + αu = 0.  (128)

It arises, for example, in the radial equation for the hydrogen atom Schrödinger equation. With a similar approach as that for the Hermite equation above, we can put it into Sturm-Liouville form with:

    p(x) = x e^{−x},
    q(x) = 0,
    w(x) = e^{−x},
    λ    = α.  (129)

For α = n, a non-negative integer, the solutions are the Laguerre polynomials, with the Rodrigues formula:

    L_n(x) = (e^x/n!) (d^n/dx^n) (x^n e^{−x}).  (130)
4.7 Associated Laguerre Equation

Differentiating the Laguerre equation k times shows that the derivatives of the Laguerre polynomials,

    L_n^k(x) = (d^k/dx^k) L_n(x),  (132)

satisfy the associated Laguerre equation, x u″ + (k + 1 − x) u′ + (n − k)u = 0.
4.8 Hypergeometric Equation

The hypergeometric equation is:

    x(1 − x) d²u/dx² + [c − (a + b + 1)x] du/dx − ab u = 0.  (133)

It may be put in Sturm-Liouville form with:

    p(x) = x^c (1 − x)^{a+b+1−c},
    q(x) = 0,
    w(x) = p(x)/[x(1 − x)],
    λ    = −ab.  (134)
4.9 Chebyshev Equation

The Chebyshev equation is:

    (1 − z²) d²u/dz² − z du/dz + n²u = 0.  (136)

It may be put in Sturm-Liouville form with:

    p(z) = (1 − z²)^{1/2},
    q(z) = 0,
    w(z) = 1/p(z),
    λ    = n².  (137)
Solutions include the Chebyshev polynomials of the first kind, T_n(z). These polynomials have the feature that their oscillations are of uniform amplitude in the interval [−1, 1].
5 Orthogonal Polynomials

We find that a large class of our special functions can be described as orthogonal polynomials in the real variable x, with real coefficients:

Definition: A set of polynomials, {f_n(x) : n = 0, 1, 2, . . .}, where f_n is of degree n, defined on x ∈ [a, b], such that

    ∫_a^b f_n(x) f_m(x) w(x) dx = 0,  n ≠ m,  (138)

is called a system of orthogonal polynomials with respect to the weight w.
5.1 General Properties
The choice of value for the remaining multiplicative constant for each
polynomial is called standardization.
After standardization, the normalization condition is written:
    ∫_a^b f_n(x) f_m(x) w(x) dx = h_n δ_nm,  (139)

where n = 0, 1, 2, . . .
Note that with suitable weight function, infinite intervals are possible. We'll discuss orthogonal polynomials here from an intuitive perspective. Start with the following theorem:
Theorem: The orthogonal polynomial fn (x) has n real, simple zeros, all in
(a, b) (note that this excludes the boundary points).
Let us argue the plausibility of this. First, we notice that since a polynomial
of degree n has n roots, there can be no more than n real, simple zeros in
(a, b). Second, we keep in mind that fn (x) is a continuous function, and that
w(x) ≥ 0.
Then we may adopt an inductive approach. For n = 0, f0 (x) is just a
constant. There are zero roots, in agreement with our theorem. For n > 0,
we must have
    ∫_a^b f_n(x) f_0(x) w(x) dx = 0.  (141)
If fn nowhere goes through zero in (a, b), we cannot accomplish this. Hence,
there exists at least one real root for each fn with n > 0.
For n = 1, f_1(x) = α + βx, a straight line. It must go through zero somewhere in (a, b), as we have just argued, and a straight line can have only one zero. Thus, for n = 1 there is one real, simple root.
For n > 1, consider the possibilities: We must have at least one root in (a, b). Can we accomplish orthogonality with exactly two degenerate roots? Certainly not, because then we cannot be orthogonal with f_0 (the polynomial will always be non-negative or always non-positive in (a, b)). Can we be orthogonal to both f_0 and f_1 with only one root? By considering the possibilities for where this root could be compared with the root of f_1, and imposing orthogonality with f_0, it may be readily seen that this doesn't work either. The reader is invited to develop this into a rigorous proof. Thus, there must be at least two distinct real roots in (a, b) for n > 1. For n = 2, there are thus exactly two real, simple roots in (a, b).
The inductive proof would then assert the theorem for all k < n, and
demonstrate it for n.
Once we are used to this theorem, the following one also becomes plausible:
Theorem: The zeros of fn (x) and fn+1 (x) alternate (and do not coincide)
on (a, b).
These two theorems are immensely useful for qualitative understanding of solutions to problems, without ever solving for the polynomials. For example, the qualitative nature of the wave functions for the different energy states in a quantum mechanical problem may be pictured.
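Both theorems are easy to observe concretely; for instance, with the Legendre polynomials (an arbitrary illustrative choice of system):

```python
import numpy as np
from numpy.polynomial import legendre

# Zeros of consecutive Legendre polynomials (orthogonal on (-1, 1), w = 1).
r5 = np.sort(np.real(legendre.Legendre.basis(5).roots()))
r6 = np.sort(np.real(legendre.Legendre.basis(6).roots()))

# All zeros lie inside the open interval (-1, 1):
assert np.all((r5 > -1) & (r5 < 1)) and np.all((r6 > -1) & (r6 < 1))

# The zeros alternate: each zero of P_5 lies strictly between
# consecutive zeros of P_6.
assert np.all(r6[:-1] < r5) and np.all(r5 < r6[1:])
```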
The following theorem is also intuitively plausible:
Theorem: For any given subinterval of (a, b) there exists an N such that
whenever n > N , fn (x) vanishes at least once in the subinterval. That
is, the zeros of {fn (x)} are dense in (a, b).
Note the plausibility: The alternative would be that the zeros somehow decided to bunch up at particular places. But this would then make satisfying
the orthogonality difficult (impossible).
Finally, we have the approximation theorem:
Theorem: Given an arbitrary function g(x), and the set of all polynomials {p_k : k ≤ n} of degree less than or equal to n (linear combinations of {f_k}), there exists exactly one polynomial q_n for which ‖g − q_n‖ is minimized:

    q_n(x) = Σ_{k=0}^n a_k f_k(x),  (142)

where

    a_k = ⟨f_k|g⟩/h_k.  (143)
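The approximation theorem translates directly into a projection recipe. The sketch below (the target g(x) = eˣ and the degree are arbitrary illustrative choices) computes q_n for the Legendre system and checks that the residual is orthogonal to every f_k with k ≤ n, the defining property of the minimizer:

```python
import numpy as np
from numpy.polynomial import legendre

# Project g(x) = e^x onto Legendre polynomials on [-1, 1]:
#   a_k = <f_k|g>/h_k, with h_k = 2/(2k + 1).
xg, wg = legendre.leggauss(60)
g = np.exp(xg)

n = 4
a = np.array([(2 * k + 1) / 2.0 *
              np.sum(wg * legendre.legval(xg, [0.0] * k + [1.0]) * g)
              for k in range(n + 1)])
qn = legendre.legval(xg, a)

# The residual g - q_n is orthogonal to every f_k, k <= n.
for k in range(n + 1):
    fk = legendre.legval(xg, [0.0] * k + [1.0])
    assert abs(np.sum(wg * fk * (g - qn))) < 1e-10
```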
The minimization condition is that the residual be orthogonal to each basis polynomial:

    ⟨f_k|g − q_n⟩ = 0,  k = 0, 1, 2, . . . , n,  (145)

which is satisfied by

    q_n = Σ_{k=0}^n a_k f_k,  a_k = ⟨f_k|g⟩/h_k.  (146)

Hence we have done the best we can: what remains, g − q_n, is a function which is orthogonal to every polynomial of degree ≤ n.
5.2 Rodrigues Formula

For the Legendre polynomials we have the Rodrigues formula:

    P_n(x) = [(−1)^n/(2^n n!)] (d^n/dx^n) (1 − x²)^n.  (149)

This form may be generalized to include some of the other systems of polynomials, obtaining the generalized Rodrigues formula:

    f_n(x) = [1/(K_n w(x))] (d^n/dx^n) {w(x) [s(x)]^n},  (150)

where s(x) is a polynomial of degree at most two and K_n is a standardization constant.
The normalization of the Legendre polynomials is

    ∫_{−1}^{1} [P_ℓ(x)]² dx = 2/(2ℓ + 1).  (151)
Corresponding to the generalized Rodrigues formula, we have the Sturm-Liouville equation for orthogonal polynomials:

    d/dx [s(x) w(x) (df_n/dx)(x)] + w(x) λ_n f_n(x) = 0,  (152)

where

    λ_n = −n [K_1 (df_1/dx) + (1/2)(n − 1)(d²s/dx²)].  (153)
5.3 Recurrence Relation

The orthogonal polynomials satisfy a recurrence relation,

    f_{n+1}(x) = (a_n + b_n x) f_n(x) − c_n f_{n−1}(x),  (154)

where, writing f_n(x) = k_n x^n + k_n′ x^{n−1} + · · ·,

    b_n = k_{n+1}/k_n,  (155)
    a_n = b_n (k_{n+1}′/k_{n+1} − k_n′/k_n),  (156)
    c_n = (h_n/h_{n−1}) (k_{n+1} k_{n−1}/k_n²).  (157)

These follow from consideration of the sum

    Σ_{j=0}^{n} f_j(x) f_j(y)/h_j.  (158)
Consider matrix elements, between basis polynomials, of the kernel

    (x − y) Σ_{j=0}^{n} f_j(x) f_j(y)/h_j.  (159)

Using ⟨f_i|f_j⟩ = h_j δ_ij,

    ⟨f_i| (x − y) Σ_{j=0}^{n} f_j f_j/h_j |f_k⟩
        = Σ_{j=0}^{n} (1/h_j) (⟨f_i|f_j⟩⟨x f_j|f_k⟩ − ⟨f_i|x f_j⟩⟨f_j|f_k⟩)
        = Σ_{j=0}^{n} (δ_ij ⟨x f_j|f_k⟩ − δ_kj ⟨f_i|x f_j⟩).  (161)

For i ≤ n and k ≤ n the two surviving terms cancel, while for i > n and k > n both vanish.

Thus, we must evaluate, for example, ⟨f_i|x f_k⟩. We use the recurrence relation:

    f_{k+1} = (a_k + b_k x) f_k − c_k f_{k−1},  (162)

or,

    x f_k = (1/b_k)(f_{k+1} − a_k f_k + c_k f_{k−1}).  (163)

Therefore,

    ⟨f_i|x f_k⟩ = (1/b_k)(⟨f_i|f_{k+1}⟩ − a_k ⟨f_i|f_k⟩ + c_k ⟨f_i|f_{k−1}⟩)
                = (h_i/b_k)(δ_{i,k+1} − a_k δ_{i,k} + c_k δ_{i,k−1}).  (164)

In particular, ⟨x f_n|f_{n+1}⟩ = (k_n/k_{n+1}) h_{n+1}, and the normalized matrix elements are

    (1/(h_i h_k)) ⟨f_i|(x − y) Σ_{j=0}^{n} f_j f_j/h_j |f_k⟩
        = {  0,                    i = k,
             k_n/(h_n k_{n+1}),    i = n and k = n + 1,
            −k_n/(h_n k_{n+1}),    i = n + 1 and k = n,
             0,                    otherwise.  (160)
5.4 Hermite Polynomials

As an example, consider the Hermite polynomials, with weight function

    w(x) = e^{−x²}  (166)

and eigenvalue

    λ_n = 2n.  (167)

This gives:

    d/dx [e^{−x²} (dH_n/dx)] + 2n e^{−x²} H_n(x) = 0,  (169)

the Sturm-Liouville form of the Hermite equation.
To obtain the generalized Rodrigues formula for the Hermite polynomials, we note that since p(x) = s(x)w(x), we must have s(x) = 1. Hence, the generalized Rodrigues formula is:

    H_n(x) = [1/(K_n w(x))] (d^n/dx^n) [w(x) s(x)^n]
           = (e^{x²}/K_n) (d^n/dx^n) e^{−x²}.  (170)

Since

    (d/dx) e^{−x²} = −2x e^{−x²},  (171)

the order x^n leading term is

    e^{x²} (d^n/dx^n) e^{−x²} = (−2)^n x^n + O(x^{n−1}).  (172)

With the standardization K_n = (−1)^n, hence k_n = 2^n. The reader may demonstrate that k_n′ = 0, and also that the normalization is

    h_n = √π 2^n n!.  (173)
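The normalization h_n = √π 2ⁿ n! (note the factor √π, which goes with the convention w = e^{−x²}) can be verified with Gauss-Hermite quadrature, which integrates polynomials against this weight exactly (the quadrature order is an arbitrary choice):

```python
import math
import numpy as np
from numpy.polynomial import hermite

# Gauss-Hermite quadrature computes integrals of f(x) e^{-x^2} over the
# real line, exactly for polynomial f of degree <= 39 here.
xg, wg = hermite.hermgauss(20)

for n in range(6):
    Hn = hermite.hermval(xg, [0.0] * n + [1.0])
    h = np.sum(wg * Hn * Hn)          # = integral of H_n^2 e^{-x^2}
    assert abs(h - math.sqrt(math.pi) * 2**n * math.factorial(n)) < 1e-8 * h
```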
6 Construction of the Green's Function

Let us now construct the Green's function for a Sturm-Liouville operator directly. In this section we adopt the convention that the problem to be solved is

    Lu(x) = −φ(x),  (174)

with solution

    u(x) = ∫_a^b G(x, y) φ(y) dy,  (175)

so that the Green's function satisfies

    L_x G(x, y) = −δ(x − y),  (177)

where the minus sign here is from the minus sign on the right hand side of Eqn. 174. Writing this out:

    L_x G = pG″ + p′G′ − qG = −δ(x − y).  (178)

The second derivative term must be the term that gives the delta function. That is, in the limit as ε → 0:

    ∫_{y−ε}^{y+ε} p(x) (∂²G/∂x²)(x, y) dx = −1,  (179)

so that G is continuous at x = y while its derivative jumps:

    (∂G/∂x)(y+, y) − (∂G/∂x)(y−, y) = −1/p(y).  (180)

For x ≠ y, G satisfies the homogeneous equation. Let u_1 and u_2 be non-trivial homogeneous solutions, u_1 satisfying the boundary condition at x = a and u_2 that at x = b; each satisfies

    u_j″(x) = [q(x)/p(x)] u_j(x) − [p′(x)/p(x)] u_j′(x).  (185)

For the Wronskian W(x) = u_1(x)u_2′(x) − u_1′(x)u_2(x) we then have

    W′(x) = u_1 u_2″ − u_1″ u_2
          = −[p′(x)/p(x)] [u_1(x) u_2′(x) − u_1′(x) u_2(x)]
          = −[p′(x)/p(x)] W(x),  (187)

so that p(x)W(x) is constant.
Write, then,

    G(x, y) = { A(y) u_1(x),  x < y,  (189)
                B(y) u_2(x),  x > y.  (190)

Continuity of G at x = y yields

    A(y) u_1(y) − B(y) u_2(y) = 0,  (191)

and the discontinuity condition (180) yields

    B(y) u_2′(y) − A(y) u_1′(y) = −1/p(y).  (192)

Solving these two equations,

    A(y) = −u_2(y)/[p(y)W(y)],   B(y) = −u_1(y)/[p(y)W(y)],  (193)

so that

    G(x, y) = −[1/(pW)] { u_1(x) u_2(y),  x ≤ y,
                          u_1(y) u_2(x),  x ≥ y,  (195)

where pW is the constant just discussed.
The solution to Lu = −φ is then

    u(x) = ∫_a^b G(x, y) φ(y) dy.  (196)

We remark again that the Green's function for a self-adjoint operator (as in the Sturm-Liouville problem with appropriate boundary conditions) is symmetric, and thus the Schmidt-Hilbert theory applies.
It may happen, however, that we encounter boundary conditions such that we cannot find A and B to satisfy them. In this case, we may attempt to find an appropriate modified Green's function as follows: Suppose that we can find a solution u_0 to Lu_0 = 0 that satisfies the boundary conditions (that is, suppose there exists a solution to the homogeneous equation). Let's suppose here that the specified boundary conditions are homogeneous. Then cu_0 is also a solution, and we may assume that u_0 has been normalized:

    ∫_a^b [u_0(x)]² dx = 1.  (197)
Now find a Green's function which satisfies all the same properties as before, except that now:

    L_x G(x, y) = u_0(x) u_0(y),  for x ≠ y,  (198)

together with the normalization

    ∫_a^b G(x, y) u_0(y) dy = 0.  (199)

The resulting Green's function will still be symmetric, and the solution to our problem is still

    u(x) = ∫_a^b G(x, y) φ(y) dy.  (200)

Indeed,

    Lu(x) = ∫_a^b L_x G(x, y) φ(y) dy
          = ∫_a^b [−δ(x − y)] φ(y) dy + ∫_a^b u_0(x) u_0(y) φ(y) dy
          = −φ(x) + u_0(x) ∫_a^b u_0(y) φ(y) dy.  (201)
6.1 Example

Consider the operator

    L = d²/dx²,  (202)

for which a Green's function must satisfy

    ∂²G/∂x² (x, y) = −δ(x − y).  (204)

Now suppose that the range of interest is x ∈ [−1, 1]. Let the boundary conditions be periodic:

    u(−1) = u(1),  (205)
    u′(−1) = u′(1).  (206)

Away from x = y the Green's function is linear in x, and the unit downward jump in ∂G/∂x at x = y suggests the symmetric ansatz

    G(x, y) = A + Bxy + { (x − y)/2,  x ≤ y,
                          (y − x)/2,  x ≥ y.  (207)

The condition G(−1, y) = G(1, y) requires B = −1/2:

    G(x, y) = A − xy/2 + { (x − y)/2,  x ≤ y,
                           (y − x)/2,  x ≥ y.  (208)

But then

    ∂G/∂x (1, y) = −y/2 − 1/2,   while   ∂G/∂x (−1, y) = −y/2 + 1/2,  (209)

so the periodicity condition on the derivative cannot be satisfied, for any choice of A: no Green's function exists. This is the expected trouble: the homogeneous equation u″ = 0 has the solution u = constant, which satisfies the periodic boundary conditions.
We turn to the modified Green's function in hopes of rescuing the situation. We want to find a solution, u_0, to Lu_0 = 0 which satisfies the boundary conditions and is normalized:

    ∫_{−1}^{1} u_0²(x) dx = 1.  (210)

The constant solution u_0(x) = 1/√2 does the job. The modified Green's function then satisfies

    ∂²G/∂x² (x, y) = u_0(x) u_0(y) = 1/2,  x ≠ y.  (211)

Thus we add a term x²/4 to our previous ansatz:

    G(x, y) = A − xy/2 + x²/4 + { (x − y)/2,  x ≤ y,
                                  (y − x)/2,  x ≥ y.  (212)

Note that our condition on B remains the same as before, since the added x²/4 term separately satisfies the u(−1) = u(1) boundary condition. The boundary condition on the derivative involves

    ∂G/∂x (x, y) = −y/2 + x/2 + { 1/2,   x < y,
                                  −1/2,  x > y.  (213)

We find that

    ∂G/∂x (1, y) = −y/2 = ∂G/∂x (−1, y).  (214)

Finally, we fix A by requiring

    ∫_{−1}^{1} G(x, y) u_0(x) dx = 0.  (215)

Substituting in,

    ∫_{−1}^{1} [A − xy/2 + x²/4] dx + ∫_{−1}^{y} (x − y)/2 dx + ∫_{y}^{1} (y − x)/2 dx = 0.  (216)

This gives

    A = 1/6 + y²/4.  (217)

Thus, using y²/4 − xy/2 + x²/4 = (x − y)²/4,

    G(x, y) = 1/6 + (x − y)²/4 + { (x − y)/2,  x ≤ y,
                                   (y − x)/2,  x ≥ y.  (218)
As a first check, suppose φ(x) = 1.  (219)

Then the proposed solution is

    u(x) = ∫_{−1}^{1} G(x, y) dy = 0,  (220)

which certainly does not solve u″ = −1. But this is as expected, since a solution exists only if φ is orthogonal to u_0, and here

    ∫_{−1}^{1} φ(x) u_0(x) dx = (1/√2) ∫_{−1}^{1} dx ≠ 0.  (221)

This problem is sufficiently simple that the reader is encouraged to demonstrate the non-existence of a solution by elementary methods.
Let us suppose, instead, that φ(x) = x, that is, we wish to solve Lu = −x. Note that

    ∫_{−1}^{1} φ(x) u_0(x) dx = (1/√2) ∫_{−1}^{1} x dx = 0,  (222)

so now we expect that a solution will exist. Let's find it:

    u(x) = ∫_{−1}^{1} G(x, y) y dy
         = (1/6) ∫_{−1}^{1} y dy + (1/4) ∫_{−1}^{1} (x − y)² y dy
           + (1/2) ∫_{−1}^{x} (y − x) y dy + (1/2) ∫_{x}^{1} (x − y) y dy
         = 0 − x/3 − x³/6 + x/2
         = (x − x³)/6,  (223)

plus, as always, a possible arbitrary multiple of the (constant) homogeneous solution. It is readily checked that u″ = −x and that the periodic boundary conditions are satisfied.
6.2 Discussion

In general, when the homogeneous equation Lu_0 = 0 has a nontrivial solution satisfying the boundary conditions, the solution to the inhomogeneous equation either does not exist, or is not unique. A solution exists if and only if φ is perpendicular to any solution of the homogeneous equation, e.g.,

    ∫_a^b φ(x) u_0(x) dx = 0.  (224)
In terms of the eigenfunctions of L, the Green's function has the expansion

    G = Σ_i |u_i⟩⟨u_i|/λ_i,  (225)

while the modified Green's function simply omits the zero modes:

    G = Σ_{i, λ_i ≠ 0} |u_i⟩⟨u_i|/λ_i.  (226)
(227)
Note that:
Z b
a
(228)
since we have hu0 |u0 i = 1. The additional term exactly cancels the u0 component of the source.
Since Lu0 = 0, this equation, including the boundary conditions, and the
discontinuity condition on G0 only determines G(x, y) up to the addition of
34
Indeed,

    L_x ∫_a^b G(x, y) u_0(y) dy = 0.  (230)

Thus, ∫_a^b G(x, y) u_0(y) dy is a function which satisfies Lu = 0, and satisfies the boundary conditions. But the most general such function (we're considering only second order differential equations here) is cu_0(x), hence

    ∫_a^b G(x, y) u_0(y) dy = c u_0(x).  (231)

The normalization condition ∫_a^b G(x, y) u_0(y) dy = 0 simply fixes the choice c = 0.
1. Note that

    (pW)′ = pW′ + p′W
          = p u_1 u_2″ − p u_1″ u_2 + p′ u_1 u_2′ − p′ u_1′ u_2
          = u_1 (p u_2″ + p′ u_2′) − u_2 (p u_1″ + p′ u_1′)
          = u_1 q u_2 − u_2 q u_1 = 0.  (235)

Thus, p(x)W(x) is independent of x, and the evaluation of the denominator pW may be made at any convenient point, not only at x = y. This can be very useful when the identity yielding the constant isn't so obvious.
2. Let's quickly consider the complication of the weight function. Suppose

    φ(x) = λw(x)u(x) + ψ(x),  (236)

corresponding to

    Lu + λwu = −ψ.  (237)

Then we have

    u(x) = ∫_a^b G(x, y) φ(y) dy = g(x) + λ ∫_a^b G(x, y) w(y) u(y) dy,  (238)

where

    g(x) = ∫_a^b G(x, y) ψ(y) dy.  (239)

Multiplying through by √w(x), this becomes an integral equation with a symmetrized kernel:

    √w(x) u(x) = √w(x) g(x) + λ ∫_a^b [√w(x) G(x, y) √w(y)] [√w(y) u(y)] dy.  (240)

The kernel √w(x) G(x, y) √w(y) is symmetric whenever G is.
The Green's function method extends to higher-order operators: for an nth order linear operator the defining equation is L_x G(x, y) = δ(x − y),  (245)

or

    ∂ⁿG/∂xⁿ (x, y) + a_1(x) ∂ⁿ⁻¹G/∂xⁿ⁻¹ (x, y) + · · · + a_n(x) G(x, y) = δ(x − y).  (246)

The delta function is generated by the highest derivative: G and its first n − 2 derivatives in x are continuous at x = y, while

    lim_{ε→0} [∂ⁿ⁻¹G/∂xⁿ⁻¹ (y + ε, y) − ∂ⁿ⁻¹G/∂xⁿ⁻¹ (y − ε, y)] = 1.  (247)

If the homogeneous equation has solutions u_i, i = 0, 1, . . . , m, satisfying the boundary conditions, the modified Green's function satisfies

    L_x G(x, y) = Σ_{i=0}^{m} u_i(x) u_i(y),  x ≠ y,  (248)

with

    ∫_a^b G(x, y) u_i(y) dy = 0,  i = 0, 1, 2, . . . , m,  (249)

and a solution exists only if

    ∫_a^b u_i(x) φ(x) dx = 0,  i = 0, 1, 2, . . . , m.  (250)
Exercises

1. Consider the general linear second order homogeneous differential equation in one dimension:

    a(x) d²u/dx² + b(x) du/dx + c(x) u(x) = 0.  (251)

Determine the conditions under which this may be written in the form of a differential equation involving a self-adjoint (with appropriate boundary conditions) Sturm-Liouville operator:

    Lu = 0,  (252)

where

    L = (d/dx) p(x) (d/dx) − q(x).  (253)
2. Consider the operator

    L = d²/dx² + 1,  x ∈ [0, π].  (254)

3. Consider the operator p = (1/i) d/dx, acting on functions on [a, b].

(a) Find the most general boundary condition such that p is Hermitian.

(b) What is the domain, D_p, of p such that p is self-adjoint?

(c) What is the situation when [a, b] → (−∞, ∞)? Is p bounded or unbounded?
4. Prove that the different systems of orthogonal polynomials are distinguished by the weight function and the interval. That is, the system of
polynomials in [a, b] is uniquely determined by w(x) up to a constant
for each polynomial.
5. We said that the recurrence relation for the orthogonal polynomials may be expressed in the form:

    f_{n+1}(x) = (a_n + b_n x) f_n(x) − c_n f_{n−1}(x).  (255)
6. Consider the quantum mechanical simple harmonic oscillator, with Schrödinger equation

    −(1/2m) d²ψ/dx² (x) + (1/2) k x² ψ(x) = Eψ(x).  (257)
Make a sketch showing the qualitative features you expect for the wave
functions corresponding to the five lowest energy levels.
Try to do this with some care: There is really a lot that you can say in qualitative terms without ever solving the Schrödinger equation. Include a curve of the potential on your graph. Try to illustrate what happens at the classical turning points (that is, the points where E = V(x)).
7. Find the Green's function for the operator

    L = d²/dx² + k².  (258)

    ∫ x^{t−1}/(1 − x) dx,  (259)
with G(r = a, y) = 0.

In problem 10, you found G(x, y) by directly solving (∇²_x + k²)G(x, y) = δ(x − y), ignoring the boundary conditions at first. This is called
    √(m/(2π|t|)) exp[i m(x − y)²/(2t)],  (270)

for the Hamiltonian

    H = −(1/2m) d²/dx².  (271)

    G(x, y; z) = Σ_{k=1}^∞ φ_k(x) φ_k*(y)/(λ_k − z),  (272)

    G(x, y; z) = i √(m/(2z)) e^{i √(2mz) |x−y|}.  (273)
18. Let us investigate the Green's function for a slightly more complicated situation. Consider the potential:

    V(x) = { V,  |x| ≤ Δ,
             0,  |x| > Δ.  (274)

Find the Green's function, in terms of the wavenumbers

    κ′ = √(2m(z − V)),  (275)
    κ₀ = √(2mz).  (276)

Make sure that you describe any cuts in the complex plane, and your selected branch. You may find it convenient to express your answer to some extent in terms of the force-free Green's function:

    G₀(x, y; z) = (i m/κ₀) e^{i κ₀ |x−y|}.  (277)
19. Consider a particle with wave function, at t₀ = 0:

    ψ(x₀, t₀ = 0) = (1/(πa²))^{1/4} exp[−x₀²/(2a²) + i p₀ x₀],  (279)

where a and p₀ are real constants. Since the absolute square of the wave function gives the probability, this wave function corresponds to a Gaussian probability distribution (i.e., the probability density function to find the particle at x₀) at t = t₀:

    |ψ(x₀, t₀)|² = (1/(πa²))^{1/2} e^{−x₀²/a²}.  (280)

The standard deviation of this distribution is σ = a/√2. Find the probability density function, |ψ(x, t)|², to find the particle at x at some later (or earlier) time t. You are encouraged to think about the physical interpretation of your result.