D. E. Soper
University of Oregon
10 October 2011
Position
Let us consider a particle with no spin that can move in one dimension. Let
us call the coordinate x. How can we describe this in quantum mechanics?
We postulate that if the particle is at x, its state can be represented by
a vector |x⟩. The particle could be anywhere, so we postulate that a general
state should be represented as a linear combination of states |x⟩ with different
values of x.

If there were a discrete set of possible values for x, say x_i, we could just
take over the structure that we had for spin, taking ⟨x_i|x_j⟩ = δ_{ij}. Since
the possible values for x are continuous, we postulate instead that

    ⟨x′|x⟩ = δ(x′ - x) .    (1)
Here δ(x′ - x) is the Dirac delta function, defined by

    ∫ dx f(x) δ(x′ - x) = f(x′) .    (2)

There is no actual function that does this, although one can think of δ(x′ - x)
as a sort of limit of ordinary functions that vanish when x′ - x is not very
close to 0 but are very big when x′ - x is very close to 0, with the area under
the graph of δ(x′ - x) equal to 1. The precise way to think of it is that
δ(x′ - x) is a distribution, where a distribution F maps nice well behaved
functions f to (complex) numbers F[f]. For F[f] we use the convenient
notation F[f] = ∫ dx f(x) F(x).
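This limiting picture is easy to see numerically. The sketch below is a minimal illustration (the test function, the evaluation point, and the gaussian model of the delta function are all arbitrary choices): as the width σ shrinks, ∫ dx f(x) δ_σ(x′ - x) approaches f(x′).

```python
import numpy as np

# Sketch: the delta "function" as a limit of narrow normalized gaussians.
def delta_approx(u, sigma):
    """Normalized gaussian of width sigma; acts like delta(u) as sigma -> 0."""
    return np.exp(-u**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def f(x):
    """A smooth test function (arbitrary choice)."""
    return np.cos(x) * np.exp(-x**2 / 10.0)

x_prime = 0.7                      # arbitrary evaluation point
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

for sigma in (1.0, 0.1, 0.01):
    integral = np.sum(f(x) * delta_approx(x_prime - x, sigma)) * dx
    print(sigma, integral)         # approaches f(x_prime) as sigma shrinks
print(f(x_prime))
```

Each gaussian has unit area, so only the "width going to zero" part of the limit is doing any work here.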
We postulate that the vectors |x⟩ make a basis for the space of possible
states, with the unit operator represented as

    1 = ∫ dx |x⟩ ⟨x| .    (3)
With the completeness relation, we can represent a general state |ψ⟩ as
a linear combination of our basis vectors |x⟩:

    |ψ⟩ = ∫ dx |x⟩ ⟨x|ψ⟩ .    (5)

Then for any two states |φ⟩ and |ψ⟩ we have

    ⟨φ|ψ⟩ = ∫ dx ⟨φ|x⟩ ⟨x|ψ⟩ = ∫ dx ⟨x|φ⟩* ⟨x|ψ⟩ .    (6)
In particular, if |ψ⟩ is normalized, we have

    1 = ⟨ψ|ψ⟩ = ∫ dx ⟨ψ|x⟩ ⟨x|ψ⟩ = ∫ dx |⟨x|ψ⟩|² .    (7)
With discrete states, we postulate in quantum mechanics that the probability
that a system in state |ψ⟩ will be found, if suitably measured, to be in
state |i⟩ is |⟨i|ψ⟩|². We generalize this to continuous values x by postulating
that the probability that a system in state |ψ⟩ will be found, if suitably
measured, to have position between a and b is

    P(a,b) = ∫_a^b dx |⟨x|ψ⟩|² .    (8)

This is consistent with the state normalization. The probability that the
system is somewhere is

    1 = P(-∞,∞) = ∫ dx |⟨x|ψ⟩|² .    (9)

It is natural to define a position operator x_op whose eigenvectors are the
basis vectors |x⟩, with eigenvalues x,

    x_op |x⟩ = x |x⟩ .    (10)

Then

    x_op = ∫ dx |x⟩ x ⟨x| .    (11)

With this definition, x_op is a self-adjoint operator: x_op† = x_op. Its expansion
in terms of eigenvectors and eigenvalues is Eq. (11).
In a first course in quantum mechanics, one usually denotes ⟨x|ψ⟩ by
ψ(x) and calls it the wave function. The wave function notation is helpful
for many purposes and we will use it frequently. With the wave function
notation,

    ⟨φ|ψ⟩ = ∫ dx φ(x)* ψ(x) .    (12)

The state vector is expressed as a linear combination of basis kets |x⟩ using
Eq. (5),

    |ψ⟩ = ∫ dx ψ(x) |x⟩ .    (13)
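As a concrete illustration of Eq. (12), here is a minimal numerical sketch: two example wave functions sampled on a grid, with the inner product computed as a discretized integral. The two gaussians used here are arbitrary choices.

```python
import numpy as np

# Eq. (12) on a grid: <phi|psi> = int dx phi(x)* psi(x).
x = np.linspace(-12.0, 12.0, 24001)
dx = x[1] - x[0]
phi = np.pi**-0.25 * np.exp(-x**2 / 2.0)            # a normalized gaussian
psi = np.pi**-0.25 * np.exp(-(x - 1.0)**2 / 2.0)    # the same gaussian, shifted

inner = np.sum(np.conj(phi) * psi) * dx
print(inner)     # exp(-1/4) ~ 0.7788 for these two gaussians
```

For these particular states the integral can be done in closed form (complete the square), which is what the printed value can be checked against.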
Translation in space

We can define an operator U(a) that translates the system a distance a along
the x-axis. The definition is simple:

    U(a) |x⟩ = |x + a⟩ .    (14)

Evidently

    U(b) U(a) = U(a + b) .    (15)

In particular

    U(-a) U(a) = U(0) = 1 ,    (16)

so

    U(-a) = U(a)⁻¹ .    (17)
It will be helpful to express what U(a) does to an arbitrary state |ψ⟩ by
using the wave function representation. If U(a)|ψ⟩ = |ψ′⟩, we have

    ∫ dx ψ′(x) |x⟩ = |ψ′⟩
                   = U(a) |ψ⟩
                   = ∫ dy ψ(y) U(a) |y⟩
                   = ∫ dy ψ(y) |y + a⟩
                   = ∫ dx ψ(x - a) |x⟩ ,    (18)

so that

    ψ′(x) = ψ(x - a) .    (19)

This gives

    ⟨ψ′|ψ′⟩ = ∫ dx ψ′(x)* ψ′(x)
            = ∫ dx ψ(x - a)* ψ(x - a)
            = ∫ dy ψ(y)* ψ(y)
            = ⟨ψ|ψ⟩ .    (20)
Thus U(a) is unitary, U(a)† = U(a)⁻¹ = U(-a). In particular, ⟨x|U(a) can
be considered to be the conjugate of U(a)†|x⟩ = U(-a)|x⟩ = |x - a⟩. That is

    ⟨x| U(a) = ⟨x - a| .    (21)

If U(a)|ψ⟩ = |ψ′⟩, this gives

    ψ′(x) = ⟨x| U(a) |ψ⟩ = ⟨x - a|ψ⟩ = ψ(x - a) .    (22)
Momentum

Now consider U(a) for infinitesimal a. To first order in a we can write

    U(a) = 1 - i p_op a + ⋯ .    (23)

This defines the operator p_op, which we call the momentum operator. I take
it as a postulate that p_op, defined in this fashion as the infinitesimal generator
of translations, represents the momentum in quantum mechanics.
We have

    U(-a) = 1 + i p_op a + ⋯    (24)

and also

    U(a)† = U(-a) = 1 + i p_op† a + ⋯ .    (25)

Comparing Eq. (24) with Eq. (25), we see that p_op† = p_op: the momentum
operator is self-adjoint.
Now for infinitesimal a and any state |ψ⟩,

    ⟨x| U(a) |ψ⟩ = ⟨x - a|ψ⟩ ,    (26)

that is,

    ⟨x| U(a) |ψ⟩ = ψ(x - a) ,    (27)

but also

    ⟨x| U(a) |ψ⟩ = ⟨x| (1 - i p_op a + ⋯) |ψ⟩ ,    (28)

so

    ⟨x|ψ⟩ - i a ⟨x| p_op |ψ⟩ + ⋯ = ⟨x - a|ψ⟩ .    (29)

If we expand the right hand side in a Taylor series, we have

    ⟨x|ψ⟩ - i a ⟨x| p_op |ψ⟩ + ⋯ = ⟨x|ψ⟩ - a (∂/∂x) ⟨x|ψ⟩ + ⋯ .    (30)

Thus

    ⟨x| p_op |ψ⟩ = -i (∂/∂x) ⟨x|ψ⟩ .    (31)
With this result, we can easily compute the commutator of x_op and p_op:

    ⟨x| [x_op, p_op] |ψ⟩ = ⟨x| x_op p_op |ψ⟩ - ⟨x| p_op x_op |ψ⟩
                         = x ⟨x| p_op |ψ⟩ + i (∂/∂x) ⟨x| x_op |ψ⟩    (32)
                         = -i x (∂/∂x) ⟨x|ψ⟩ + i (∂/∂x) [ x ⟨x|ψ⟩ ]
                         = -i x (∂/∂x) ⟨x|ψ⟩ + i ⟨x|ψ⟩ + i x (∂/∂x) ⟨x|ψ⟩
                         = + i ⟨x|ψ⟩ .    (33)

Since this works for any state |ψ⟩, we have

    [x_op, p_op] = i .    (34)

These are known as the canonical commutation relations for x_op and p_op.
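The commutation relation can be checked numerically. The sketch below (an illustration only, with an arbitrary test function and units ħ = 1) represents p_op as -i d/dx by central differences on a grid and applies (x p - p x); away from the grid edges the result is i times the function.

```python
import numpy as np

# Check <x|[x_op, p_op]|psi> = i <x|psi> on a grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0) * (1.3 + 0.3 * np.sin(x))   # arbitrary smooth state

def p_apply(f):
    """Apply the momentum operator -i d/dx (hbar = 1) by central differences."""
    return -1j * np.gradient(f, dx)

commutator_psi = x * p_apply(psi) - p_apply(x * psi)  # (x p - p x) psi
print(np.max(np.abs(commutator_psi - 1j * psi)))      # small: [x, p] psi = i psi
```

The residual is set by the O(dx²) error of the finite differences, not by the physics; refining the grid shrinks it.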
Momentum eigenstates

Since p_op is self-adjoint, we can find a complete set of basis states |p⟩ with

    p_op |p⟩ = p |p⟩ .    (35)

To find the wave functions ⟨x|p⟩, we just have to solve a very simple differential
equation,

    -i (∂/∂x) ⟨x|p⟩ = p ⟨x|p⟩ .    (36)

The solution is

    ⟨x|p⟩ = (1/√(2π)) e^{ipx} .    (37)

To check the normalization, we compute

    ⟨p′|p⟩ = ∫ dx ⟨p′|x⟩ ⟨x|p⟩ = (1/(2π)) ∫ dx e^{i(p-p′)x} .    (38)

You can look up the value of this integral in a book, but let's see if we can
derive it.
The integral. We need

    I(k) = ∫ dx e^{ikx} .    (39)

This does not converge as an ordinary integral, so we consider its integral
over an interval in k,

    ∫_a^b dk I(k) = ∫ dx (1/(ix)) [ e^{ibx} - e^{iax} ] ,    (40)

and regulate the singularity at x = 0 by moving it slightly off the real axis:

    ∫_a^b dk I(k) = -i lim_{ε→0} ∫ dx (1/(x + iε)) [ e^{ibx} - e^{iax} ] .    (41)
Now, for finite ε the integral of each term exists separately and we can write

    ∫_a^b dk I(k) = -i lim_{ε→0} { ∫ dx (1/(x + iε)) e^{ibx} - ∫ dx (1/(x + iε)) e^{iax} } .    (42)
Now we need the integral

    f(b) = ∫ dx (1/(x + iε)) e^{ibx} .    (43)
If b > 0, we can close the contour in the upper half plane by integrating
over x from -R to R and adding an integration over a semicircle of
radius R in the upper half x-plane. Then we take R → ∞. The integral
along the big semicircle has a 1/R from the 1/x and it is suppressed by

    exp(ibx) = exp(ibR cos θ) exp(-bR sin θ)    (44)

for x = R cos θ + iR sin θ. Thus the integral over the big semicircle gives zero
in the limit R → ∞ and we can add it for free. But now we have the integral
over a closed contour of a function that is analytic (with no poles) inside the
contour. The result is zero.
If b < 0, we can close the contour in the lower half plane by integrating
over x from -R to R and adding an integration over a semicircle of
radius R in the lower half x-plane. Then we take R → ∞. The integral
along the big semicircle has a 1/R from the 1/x and it is suppressed by

    exp(ibx) = exp(-i|b|R cos θ) exp(-|b|R sin θ)    (45)

for x = R cos θ - iR sin θ. Thus the integral over the big semicircle gives zero
in the limit R → ∞ and we can add it for free. But now we have the integral
over a closed contour of a function that is analytic inside the contour except
for one pole, the one at x = -iε. Since this contour is traversed clockwise,
the result is -2πi times the residue of the pole:

    ∫ dx (1/(x + iε)) e^{ibx} = -2πi e^{εb} .    (46)

For ε → 0, this is just -2πi.

Putting this together for b > 0 and b < 0, we have

    f(b) = θ(b < 0) (-2πi) e^{εb} .    (47)
Applying this to both integrals in Eq. (42) and then taking
the limit ε → 0, we have, assuming that a < b,

    ∫_a^b dk I(k) = 2π { θ(a < 0) - θ(b < 0) } = 2π θ(a < 0 & b > 0) .    (48)

That is to say, I(k) vanishes on any interval that does not include k = 0,
while if we integrate it over any interval that includes k = 0, its integral is
2π. We thus identify

    I(k) = 2π δ(k) .    (49)
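The two defining properties in Eq. (48) can also be seen numerically. The sketch below uses a different (and simpler) regulator than the x + iε prescription above: damping the integrand by e^{-ε|x|}, for which the x integral is elementary, I_ε(k) = 2ε/(k² + ε²). Integrating that over k and letting ε shrink should reproduce 2π δ(k).

```python
import numpy as np

# Regulated version of I(k) = int dx e^{ikx} with damping exp(-eps*|x|):
# the x integral gives a Lorentzian, I_eps(k) = 2*eps / (k**2 + eps**2).
def I_eps(k, eps):
    return 2.0 * eps / (k**2 + eps**2)

# An interval containing k = 0 collects the full weight 2*pi:
k = np.linspace(-5.0, 5.0, 200001)
dk = k[1] - k[0]
for eps in (1.0, 0.1, 0.001):
    print(eps, np.sum(I_eps(k, eps)) * dk)    # -> 2*pi ~ 6.2832 as eps -> 0

# An interval that excludes k = 0 collects nothing in the limit:
k_away = np.linspace(1.0, 2.0, 10001)
print(np.sum(I_eps(k_away, 0.001)) * (k_away[1] - k_away[0]))   # -> 0
```

Different regulators give different finite-ε functions, but the same distribution in the limit; that is the sense in which Eq. (49) is to be read.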
Well, perhaps you would have preferred to just pull the answer out of a math
book. However, this style of derivation is useful in many circumstances. We
will see derivations like this again. For that reason, it is worthwhile to learn
how to do it right from the start of this course.

Our result is

    ⟨p′|p⟩ = (1/(2π)) ∫ dx e^{i(p-p′)x} = (1/(2π)) 2π δ(p′ - p) ,    (50)

that is,

    ⟨p′|p⟩ = δ(p′ - p) .    (51)
The vectors |p⟩ are guaranteed to constitute a complete basis set. The
completeness sum with the normalization that we have chosen is

    1 = ∫ dp |p⟩ ⟨p| .    (52)

If the system is in state |ψ⟩, the amplitude for the particle to have momentum
p is

    ψ̃(p) = ⟨p|ψ⟩ .    (53)
We call this the momentum-space wave function. If |φ⟩ and |ψ⟩ represent
two states, then by just inserting 1 = ∫ dp |p⟩⟨p| between the two state
vectors we can write the inner product as

    ⟨φ|ψ⟩ = ∫ dp ⟨φ|p⟩ ⟨p|ψ⟩ = ∫ dp φ̃(p)* ψ̃(p) .    (54)

Assuming that ⟨ψ|ψ⟩ = 1, we have

    ∫ dp |ψ̃(p)|² = 1 .    (55)

Thus we can interpret |ψ̃(p)|² as the probability per unit p to find the
particle with momentum p: the probability that the momentum is between
a and b is

    P(a,b) = ∫_a^b dp |ψ̃(p)|² .    (56)
You are encouraged by this notation to think of the system as having one
state |ψ⟩, which can be represented by either ⟨x|ψ⟩ = ψ(x) or ⟨p|ψ⟩ = ψ̃(p),
depending on what sort of analysis you want to do.

We can go from the x-representation to the p-representation by writing

    ψ̃(p) = ⟨p|ψ⟩ = ∫ dx ⟨p|x⟩ ⟨x|ψ⟩ = (1/√(2π)) ∫ dx e^{-ipx} ψ(x) .    (57)

The inverse transformation is

    ψ(x) = ⟨x|ψ⟩ = ∫ dp ⟨x|p⟩ ⟨p|ψ⟩ = (1/√(2π)) ∫ dp e^{ipx} ψ̃(p) .    (58)
Now that we know something, let's look at the translation operator U(a)
again. If a is a finite distance and δa is an additional infinitesimal distance,
we have

    U(a + δa) = U(δa) U(a) .    (59)

We have defined U(δa) as

    U(δa) = 1 - i p_op δa + ⋯ ,    (60)

so

    U(a + δa) = U(a) - i p_op δa U(a) + ⋯ .    (61)

That is

    (1/δa) [ U(a + δa) - U(a) ] = -i p_op U(a) + ⋯ .    (62)

Taking the limit δa → 0, this is

    (d/da) U(a) = -i p_op U(a) .    (63)
To see what this tells us, it is convenient to use the momentum representation.
For an arbitrary state |ψ⟩ we have

    (d/da) ⟨p| U(a) |ψ⟩ = -i ⟨p| p_op U(a) |ψ⟩ = -i p ⟨p| U(a) |ψ⟩ .    (64)

That's a differential equation that we know how to solve. Using the boundary
condition ⟨p| U(0) |ψ⟩ = ⟨p|ψ⟩, we have

    ⟨p| U(a) |ψ⟩ = e^{-ipa} ⟨p|ψ⟩ .    (65)

This is the same thing as

    ⟨p| U(a) |ψ⟩ = ⟨p| exp(-i p_op a) |ψ⟩ ,    (66)

since

    ⟨p| exp(-i p_op a) = ⟨p| exp(-ipa) .    (67)
Equally well, we can define

    exp(-i p_op a) = Σ_{n=0}^∞ (1/n!) (-ia)^n p_op^n .    (68)

Since Eq. (66) holds for any state |ψ⟩ and any p, we have

    U(a) = exp(-i p_op a) .    (69)

That is, U(a) is an exponential of its infinitesimal generator p_op. Note that
this relation seems a little more mysterious if we think of p_op as represented
by the differential operator -i ∂/∂x. It is, however, perfectly sensible. To
apply exp(-i p_op a) to a wave function, you can Fourier transform the wave
function, multiply by exp(-ipa), and then Fourier transform back. Try it!
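Taking up the "Try it!", here is a minimal numerical sketch (ħ = 1, with an arbitrary gaussian test function): Fourier transform with the FFT, multiply by exp(-ipa), transform back, and compare against ψ(x - a).

```python
import numpy as np

# Apply U(a) = exp(-i p_op a) to a wave function via Fourier transform.
N, L = 2048, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
p = 2.0 * np.pi * np.fft.fftfreq(N, d=L/N)      # momentum grid, FFT ordering

psi = np.pi**-0.25 * np.exp(-(x + 5.0)**2 / 2.0)  # gaussian centered at x = -5

a = 3.0
psi_t = np.fft.ifft(np.exp(-1j * p * a) * np.fft.fft(psi))  # translated state

expected = np.pi**-0.25 * np.exp(-(x + 5.0 - a)**2 / 2.0)   # psi(x - a)
print(np.max(np.abs(psi_t - expected)))   # tiny: the two agree
```

The FFT makes the translation periodic on the box, so the sketch only works cleanly when the wave function is negligible near the grid edges, as it is here.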
The uncertainty principle

There is a general relation between the commutator of two self-adjoint
operators A and B and how precisely the values of A and B can be known in a
single state |ψ⟩. Let us pick a (normalized) state |ψ⟩ of interest and define

    ⟨A⟩ = ⟨ψ| A |ψ⟩ ,
    ⟨B⟩ = ⟨ψ| B |ψ⟩ .    (70)

Then consider the quantity

    ⟨ψ| (A - ⟨A⟩)² |ψ⟩ .

We can call this the variance of A in the state |ψ⟩; its square root can be
called the uncertainty of A in the state. To understand this, expand |ψ⟩ in
eigenvectors |i⟩ of A with eigenvalues a_i. We have

    ⟨A⟩ = Σ_i a_i |⟨i|ψ⟩|² .    (71)

That is, ⟨A⟩ is the expectation value of A, the average of the eigenvalues a_i
weighted by the probability that the system will be found in the state with
that eigenvalue. Then

    ⟨ψ| (A - ⟨A⟩)² |ψ⟩ = Σ_i (a_i - ⟨A⟩)² |⟨i|ψ⟩|² .    (72)

A standard argument based on the Schwarz inequality then gives

    ⟨ψ| (A - ⟨A⟩)² |ψ⟩ ⟨ψ| (B - ⟨B⟩)² |ψ⟩ ≥ (1/4) |⟨ψ| [A, B] |ψ⟩|² .    (73)

Applying this to x_op and p_op, whose commutator is i, we get

    ⟨ψ| (x_op - ⟨x_op⟩)² |ψ⟩ ⟨ψ| (p_op - ⟨p_op⟩)² |ψ⟩ ≥ 1/4 .    (74)

Thus if we know the position of a particle very well, then we cannot know
its momentum well; if we know the momentum of a particle very well, then
we cannot know its position well.
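Eq. (74) can be checked numerically for any state. The sketch below (ħ = 1; the state ψ(x) ∝ x e^{-x²/2} is an arbitrary non-gaussian example) evaluates both variances on a grid and confirms that the product exceeds 1/4.

```python
import numpy as np

# Check the uncertainty product for a non-gaussian state.
x = np.linspace(-15.0, 15.0, 30001)
dx = x[1] - x[0]
psi = x * np.exp(-x**2 / 2.0)
psi = psi / np.sqrt(np.sum(psi**2) * dx)        # normalize on the grid

mean_x = np.sum(x * psi**2) * dx                # <x_op>, zero by symmetry
var_x = np.sum((x - mean_x)**2 * psi**2) * dx

dpsi = np.gradient(psi, dx)
mean_p = np.sum(psi * (-1j) * dpsi).real * dx   # <p_op>, zero for real psi
var_p = np.sum(dpsi**2) * dx - mean_p**2        # <p^2> = int |psi'|^2 (by parts)

print(var_x, var_p, var_x * var_p)              # product near 9/4, above 1/4
```

Since this state is not a gaussian packet, the inequality is strict: both variances come out 3/2, so the product is 9/4 rather than the minimal 1/4.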
There is a class of functions for which the ≥ becomes =: gaussian
wave packets. Let's see how this works. Define

    ψ(x) = (1/((2π)^{1/4} √a)) e^{ik₀(x-x₀)} exp( -(x-x₀)²/(4a²) ) .    (75)
First, we check that this is normalized:

    ⟨ψ|ψ⟩ = (1/(√(2π) a)) ∫ dx exp( -(x-x₀)²/(2a²) )
          = (1/(√(2π) a)) ∫ dy exp( -y²/(2a²) )
          = (√2 a/(√(2π) a)) ∫ dz exp(-z²)    (76)
          = 1 ,    (77)

where we substituted y = x - x₀ and then z = y/(√2 a).
Next we need the variance of x_op in this state. Since |ψ(x)|² is symmetric
about x = x₀, the expectation value is ⟨x_op⟩ = x₀, and

    ⟨ψ| (x_op - ⟨x_op⟩)² |ψ⟩
        = (1/(√(2π) a)) ∫ dx (x - x₀)² exp( -(x-x₀)²/(2a²) )
        = (1/(√(2π) a)) ∫ dy y² exp( -y²/(2a²) )
        = (2a²/√π) ∫ dz z² exp(-z²)
        = (2a²/√π) [ -(d/dλ) ∫ dz exp(-λz²) ]_{λ=1}
        = (2a²/√π) [ -(d/dλ) √(π/λ) ]_{λ=1}
        = (2a²/√π) (√π/2)
        = a² .    (78)

Thus the variance is just a².
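A quick numerical check of Eqs. (76)-(78): the sketch below samples |ψ(x)|² from Eq. (75) (the parameter values a and x₀ are arbitrary; k₀ drops out of the density) and computes the normalization, mean, and variance as discretized integrals.

```python
import numpy as np

# Moments of |psi(x)|^2 for the gaussian packet of Eq. (75).
a, x0 = 0.7, 1.5                                  # arbitrary parameter choices
x = np.linspace(x0 - 12*a, x0 + 12*a, 40001)
dx = x[1] - x[0]
rho = np.exp(-(x - x0)**2 / (2*a**2)) / (np.sqrt(2*np.pi) * a)   # |psi(x)|^2

norm = np.sum(rho) * dx                     # should be 1, Eq. (77)
mean_x = np.sum(x * rho) * dx               # should be x0
var_x = np.sum((x - mean_x)**2 * rho) * dx  # should be a^2 = 0.49, Eq. (78)
print(norm, mean_x, var_x)
```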
Now, let's look at what the same state looks like in momentum space. To
find ψ̃(p), we need to complete the square in the exponent; this is often a
useful trick.
We have

    ψ̃(p) = (1/√(2π)) ∫ dx e^{-ipx} ψ(x)
         = (1/((2π)^{3/4} √a)) ∫ dx e^{-ipx} e^{ik₀(x-x₀)} exp( -(x-x₀)²/(4a²) )
         = (1/((2π)^{3/4} √a)) e^{-ipx₀} ∫ dy e^{-ipy} e^{ik₀y} exp( -y²/(4a²) )
         = (1/((2π)^{3/4} √a)) e^{-ipx₀} ∫ dy exp( -i(p-k₀)y - y²/(4a²) )
         = (2√a/(2π)^{3/4}) e^{-ipx₀} ∫ dz exp( -2ia(p-k₀)z - z² )
         = (2√a/(2π)^{3/4}) e^{-ipx₀} e^{-a²(p-k₀)²} ∫ dz exp( -[z + ia(p-k₀)]² )
         = (2√a/(2π)^{3/4}) e^{-ipx₀} e^{-a²(p-k₀)²} ∫ dw exp(-w²)
         = (√(2a)/(2π)^{1/4}) e^{-ipx₀} exp( -a²(p-k₀)² ) .    (79)

Here we substituted y = x - x₀ and then z = y/(2a), completed the square
in the exponent, and shifted the integration contour to w = z + ia(p-k₀);
the last step uses ∫ dw exp(-w²) = √π.
Comparing Eq. (79) with Eq. (75), we see that ψ̃(p) is again a gaussian,
centered now at p = k₀. Thus

    ⟨p_op⟩ = k₀ .    (80)

This same comparison shows that

    ⟨ψ| (p_op - ⟨p_op⟩)² |ψ⟩ = 1/(4a²) .    (81)

Thus for a gaussian wave packet

    ⟨ψ| (x_op - ⟨x_op⟩)² |ψ⟩ ⟨ψ| (p_op - ⟨p_op⟩)² |ψ⟩ = a² × 1/(4a²) = 1/4 .    (82)
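Eqs. (80)-(82) can be verified the same way as the position-space moments: the sketch below samples |ψ̃(p)|² from Eq. (79) (arbitrary parameter values) and checks the mean, the variance 1/(4a²), and the uncertainty product 1/4.

```python
import numpy as np

# Moments of |psi_tilde(p)|^2 for the gaussian packet, from Eq. (79).
a, k0 = 0.7, 2.0                                  # arbitrary parameter choices
p = np.linspace(k0 - 12.0/a, k0 + 12.0/a, 40001)
dp = p[1] - p[0]
rho = (2.0*a / np.sqrt(2.0*np.pi)) * np.exp(-2.0 * a**2 * (p - k0)**2)

norm = np.sum(rho) * dp                      # should be 1
mean_p = np.sum(p * rho) * dp                # should be k0, Eq. (80)
var_p = np.sum((p - mean_p)**2 * rho) * dp   # should be 1/(4 a^2), Eq. (81)
print(norm, mean_p, var_p, a**2 * var_p)     # the last number is 1/4, Eq. (82)
```

The packet saturates the bound of Eq. (74): making the packet narrower in x (smaller a) widens it in p by exactly the compensating factor.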