
Position and momentum in quantum mechanics [1]

D. E. Soper [2]
University of Oregon
10 October 2011

[1] Copyright 2011, D. E. Soper.
[2] soper@uoregon.edu

Position

Let us consider a particle with no spin that can move in one dimension. Let us call the coordinate $x$. How can we describe this in quantum mechanics? We postulate that if the particle is at $x$, its state can be represented by a vector $|x\rangle$. The particle could be anywhere, so we postulate that a general state should be represented as a linear combination of states $|x\rangle$ with different values of $x$.

If there were a discrete set of possible values for $x$, say $x_i$, we could just take over the structure that we had for spin, taking $\langle x_i | x_j \rangle = \delta_{x_i x_j}$. Since the possible values for $x$ are continuous, we postulate instead that

$$ \langle x' | x \rangle = \delta(x' - x) . \tag{1} $$

Here $\delta(x' - x)$ is the Dirac delta function, defined by

$$ \int dx\, f(x)\, \delta(x' - x) = f(x') . \tag{2} $$

There is no actual function that does this, although one can think of $\delta(x' - x)$ as a sort of limit of ordinary functions that vanish when $x' - x$ is not very close to 0 but are very big when $x' - x$ is very close to 0, with the area under the graph of $\delta(x' - x)$ equal to 1. The precise way to think of it is that $\delta(x' - x)$ is a distribution, where a distribution $F$ maps nice well behaved functions $f$ to (complex) numbers $F[f]$. For $F[f]$ we use the convenient notation $F[f] = \int dx\, f(x)\, F(x)$.
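To make the limiting picture concrete, here is a small numerical sketch (an illustration added here, not part of the notes): gaussians of shrinking width $\sigma$ play the role of the ordinary functions whose limit is $\delta(x' - x)$, and the test function $f$ and probe point are arbitrary choices.

```python
# A minimal sketch: a gaussian of unit area and width sigma acts like
# delta(x' - x) as sigma -> 0, in the sense that integrating it against
# a smooth test function f returns f(x').
import numpy as np

def nascent_delta(u, sigma):
    """Gaussian of unit area and width sigma, centered at u = 0."""
    return np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

f = np.cos                      # an arbitrary smooth test function
xprime = 0.7                    # where we probe f
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]

for sigma in [1.0, 0.1, 0.01]:
    approx = np.sum(f(x) * nascent_delta(xprime - x, sigma)) * dx
    print(sigma, approx)        # -> f(0.7) = cos(0.7) ~ 0.7648 as sigma -> 0
```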
We postulate that the vectors $|x\rangle$ make a basis for the space of possible states, with the unit operator represented as

$$ 1 = \int dx\, |x\rangle \langle x| . \tag{3} $$

This is consistent with the inner product postulate:

$$ |x'\rangle = 1\, |x'\rangle = \int dx\, |x\rangle \langle x | x' \rangle = \int dx\, |x\rangle\, \delta(x' - x) = |x'\rangle . \tag{4} $$



With the completeness relation, we can represent a general state $|\psi\rangle$ as a linear combination of our basis vectors $|x\rangle$:

$$ |\psi\rangle = \int dx\, |x\rangle \langle x | \psi \rangle . \tag{5} $$

Then for any two states $|\psi\rangle$ and $|\phi\rangle$ we have

$$ \langle \phi | \psi \rangle = \int dx\, \langle \phi | x \rangle \langle x | \psi \rangle = \int dx\, \langle x | \phi \rangle^* \langle x | \psi \rangle . \tag{6} $$


In particular, if $|\psi\rangle$ is normalized, we have

$$ 1 = \langle \psi | \psi \rangle = \int dx\, \langle \psi | x \rangle \langle x | \psi \rangle = \int dx\, \big|\langle x | \psi \rangle\big|^2 . \tag{7} $$

With discrete states, we postulate in quantum mechanics that the probability that a system in state $|\psi\rangle$ will be found, if suitably measured, to be in state $|i\rangle$ is $|\langle i | \psi \rangle|^2$. We generalize this to continuous values $x$ by postulating that the probability that a system in state $|\psi\rangle$ will be found, if suitably measured, to have position between $x$ and $x + dx$ is $|\langle x | \psi \rangle|^2\, dx$. That is, $|\langle x | \psi \rangle|^2$ is the probability density, and the probability that the system will be found to be between position $a$ and position $b$ is

$$ P(a, b) = \int_a^b dx\, \big|\langle x | \psi \rangle\big|^2 . \tag{8} $$

This is consistent with the state normalization. The probability that the system is somewhere is

$$ 1 = P(-\infty, \infty) = \int_{-\infty}^{\infty} dx\, \big|\langle x | \psi \rangle\big|^2 . \tag{9} $$

We can now introduce an operator $x_{\rm op}$ that measures $x$:

$$ x_{\rm op}\, |x\rangle = x\, |x\rangle . \tag{10} $$

Then

$$ x_{\rm op} = \int dx\, |x\rangle\, x\, \langle x| . \tag{11} $$

With this definition, $x_{\rm op}$ is a self-adjoint operator: $x_{\rm op}^\dagger = x_{\rm op}$. Its expansion in terms of eigenvectors and eigenvalues is Eq. (11).


In a first course in quantum mechanics, one usually denotes $\langle x | \psi \rangle$ by $\psi(x)$ and calls it the wave function. The wave function notation is helpful for many purposes and we will use it frequently. With the wave function notation,

$$ \langle \phi | \psi \rangle = \int dx\, \phi(x)^*\, \psi(x) . \tag{12} $$

The state vector is expressed as a linear combination of basis kets $|x\rangle$ using Eq. (5),

$$ |\psi\rangle = \int dx\, \psi(x)\, |x\rangle . \tag{13} $$

Translation in space

We can define an operator $U(a)$ that translates the system a distance $a$ along the $x$-axis. The definition is simple:

$$ U(a)\, |x\rangle = |x + a\rangle . \tag{14} $$

Evidently

$$ U(b)\, U(a) = U(a + b) . \tag{15} $$

In particular

$$ U(-a)\, U(a) = U(0) = 1 , \tag{16} $$

so

$$ U(-a) = U(a)^{-1} . \tag{17} $$

It will be helpful to express what $U(a)$ does to an arbitrary state $|\psi\rangle$ by using the wave function representation. If $U(a)|\psi\rangle = |\psi'\rangle$, we have

$$ \int dx\, \psi'(x)\, |x\rangle = |\psi'\rangle = U(a)\, |\psi\rangle = \int dy\, \psi(y)\, U(a)\, |y\rangle = \int dy\, \psi(y)\, |y + a\rangle = \int dx\, \psi(x - a)\, |x\rangle . \tag{18} $$

From this, we identify

$$ \psi'(x) = \psi(x - a) . \tag{19} $$

Note the minus sign.
If $U(a)|\psi\rangle = |\psi'\rangle$ and $U(a)|\phi\rangle = |\phi'\rangle$, then the inner product between $|\phi'\rangle$ and $|\psi'\rangle$ is

$$ \langle \phi' | \psi' \rangle = \int dx\, \phi'(x)^*\, \psi'(x) = \int dx\, \phi(x - a)^*\, \psi(x - a) = \int dy\, \phi(y)^*\, \psi(y) = \langle \phi | \psi \rangle . \tag{20} $$

Thus $U(a)$ is unitary,

$$ U(a)^\dagger = U(a)^{-1} = U(-a) . $$

In particular, $\langle x |\, U(a)$ can be considered to be the conjugate of $U(a)^\dagger |x\rangle$, which is $U(-a)|x\rangle = |x - a\rangle$. That is,

$$ \langle x |\, U(a) = \langle x - a | . \tag{21} $$

If $U(a)|\psi\rangle = |\psi'\rangle$, this gives

$$ \psi'(x) = \langle x |\, U(a)\, | \psi \rangle = \langle x - a | \psi \rangle = \psi(x - a) . \tag{22} $$

This is our previous result, just looked at a different way.
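As a numerical illustration of Eqs. (19) and (20), here is a sketch added to these notes (the two gaussian test states are arbitrary, and $a$ is chosen to be a whole number of grid spacings so that the shift is exact on the grid): translating both wave functions leaves their inner product unchanged.

```python
# Numerical check of psi'(x) = psi(x - a) and <phi'|psi'> = <phi|psi>:
# translating both wave functions by the same distance a leaves the
# inner product invariant (U(a) is unitary).
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
a = 3.0                                   # translation distance
n = int(round(a / dx))                    # a is a whole number of grid steps

psi = np.exp(-(x - 1.0)**2) * np.exp(2j * x)     # arbitrary test states
phi = np.exp(-(x + 2.0)**2 / 3.0)

def translate(f, n):
    """Shift samples so that f'(x) = f(x - a); pad with zeros on the left."""
    out = np.zeros_like(f)
    out[n:] = f[:-n]
    return out

psi_t, phi_t = translate(psi, n), translate(phi, n)
inner = lambda f, g: np.sum(np.conj(f) * g) * dx
print(inner(phi, psi))      # these two numbers agree,
print(inner(phi_t, psi_t))  # up to negligible edge effects
```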

Momentum

We now consider a translation through an infinitesimal distance $\delta a$. Since $\delta a$ is infinitesimal, we expand in powers of $\delta a$ and neglect terms of order $(\delta a)^2$ and higher. For $U(\delta a)$ we write

$$ U(\delta a) = 1 - i\, p_{\rm op}\, \delta a + \cdots . \tag{23} $$

This defines the operator $p_{\rm op}$, which we call the momentum operator. I take it as a postulate that $p_{\rm op}$, defined in this fashion as the infinitesimal generator of translations, represents the momentum in quantum mechanics. (We work in units with $\hbar = 1$.) We have

$$ U(\delta a)^\dagger = 1 + i\, p_{\rm op}^\dagger\, \delta a + \cdots , \tag{24} $$

and also

$$ U(\delta a)^\dagger = U(-\delta a) = 1 + i\, p_{\rm op}\, \delta a + \cdots . \tag{25} $$

Comparing these, we see that

$$ p_{\rm op}^\dagger = p_{\rm op} . \tag{26} $$

That is, $p_{\rm op}$ is self-adjoint.

Let us see what $p_{\rm op}$ does to an arbitrary state $|\psi\rangle$. We have

$$ \langle x |\, U(\delta a)\, | \psi \rangle = \langle x - \delta a | \psi \rangle , \tag{27} $$

but

$$ \langle x |\, U(\delta a)\, | \psi \rangle = \langle x |\, \big( 1 - i\, p_{\rm op}\, \delta a + \cdots \big)\, | \psi \rangle . \tag{28} $$

Thus

$$ \langle x | \psi \rangle - i\, \delta a\, \langle x |\, p_{\rm op}\, | \psi \rangle + \cdots = \langle x - \delta a | \psi \rangle . \tag{29} $$

If we expand the right hand side in a Taylor series, we have

$$ \langle x | \psi \rangle - i\, \delta a\, \langle x |\, p_{\rm op}\, | \psi \rangle + \cdots = \langle x | \psi \rangle - \delta a\, \frac{\partial}{\partial x} \langle x | \psi \rangle + \cdots . \tag{30} $$

Comparing terms gives

$$ \langle x |\, p_{\rm op}\, | \psi \rangle = -i\, \frac{\partial}{\partial x} \langle x | \psi \rangle . \tag{31} $$
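Equation (31) says that in the position representation $p_{\rm op}$ acts as $-i\,\partial/\partial x$. Here is a minimal numerical check, added as an illustration (the plane-wave test state, the grid, and the finite-difference stand-in for the derivative are choices of the sketch): applying $-i\, d/dx$ to $e^{ipx}$ should return $p\, e^{ipx}$.

```python
# Check that -i d/dx e^{ipx} = p e^{ipx}, using a centered finite
# difference as a stand-in for the derivative in Eq. (31).
import numpy as np

p = 1.5
x = np.linspace(-5.0, 5.0, 100001)
dx = x[1] - x[0]
psi = np.exp(1j * p * x)

# centered difference: (psi(x+dx) - psi(x-dx)) / (2 dx), interior points only
dpsi = (psi[2:] - psi[:-2]) / (2 * dx)
p_psi = -1j * dpsi

# the ratio (p_op psi)/psi should equal p everywhere in the interior
print(np.max(np.abs(p_psi / psi[1:-1] - p)))   # ~ 1e-8: tiny discretization error
```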

With this result, we can easily compute the commutator of $x_{\rm op}$ and $p_{\rm op}$:

$$ \begin{aligned}
\langle x |\, [x_{\rm op}, p_{\rm op}]\, | \psi \rangle
&= \langle x |\, x_{\rm op}\, p_{\rm op}\, | \psi \rangle - \langle x |\, p_{\rm op}\, x_{\rm op}\, | \psi \rangle \\
&= x\, \langle x |\, p_{\rm op}\, | \psi \rangle + i\, \frac{\partial}{\partial x} \langle x |\, x_{\rm op}\, | \psi \rangle \\
&= -i\, x\, \frac{\partial}{\partial x} \langle x | \psi \rangle + i\, \frac{\partial}{\partial x} \big[ x\, \langle x | \psi \rangle \big] \\
&= -i\, x\, \frac{\partial}{\partial x} \langle x | \psi \rangle + i\, \langle x | \psi \rangle + i\, x\, \frac{\partial}{\partial x} \langle x | \psi \rangle \\
&= +\, i\, \langle x | \psi \rangle .
\end{aligned} \tag{32} $$

Since this works for any state $|\psi\rangle$, we have

$$ [x_{\rm op}, p_{\rm op}] = i . \tag{33} $$

Since every operator commutes with itself, we also have

$$ [x_{\rm op}, x_{\rm op}] = 0 , \qquad [p_{\rm op}, p_{\rm op}] = 0 . \tag{34} $$

These are known as the canonical commutation relations for $x_{\rm op}$ and $p_{\rm op}$.
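One can watch Eq. (33) work on a sample wave function. The sketch below is an added illustration (the gaussian test state is arbitrary, and $p_{\rm op}$ is approximated by finite differences): applying $x_{\rm op}\, p_{\rm op} - p_{\rm op}\, x_{\rm op}$ should return $i\,\psi(x)$.

```python
# Numerical check of [x_op, p_op] psi = i psi on a grid,
# with p_op realized as -i d/dx (centered finite differences).
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
psi = np.exp(-x**2) * np.exp(1j * 0.7 * x)     # arbitrary smooth test state

def p_op(f):
    """-i d/dx by centered differences; endpoints handled one-sidedly."""
    return -1j * np.gradient(f, dx)

commutator_psi = x * p_op(psi) - p_op(x * psi)

# [x_op, p_op] psi should equal i psi; compare on the interior of the grid
err = np.max(np.abs(commutator_psi[10:-10] - 1j * psi[10:-10]))
print(err)   # small: limited only by the finite-difference accuracy
```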

Momentum eigenstates


Since $p_{\rm op}$ is self-adjoint, we can find a complete set of basis states $|p\rangle$ with

$$ p_{\rm op}\, |p\rangle = p\, |p\rangle . \tag{35} $$

To find the wave functions $\langle x | p \rangle$, we just have to solve a very simple differential equation,

$$ -i\, \frac{\partial}{\partial x} \langle x | p \rangle = p\, \langle x | p \rangle . \tag{36} $$

The solution is

$$ \langle x | p \rangle = \frac{1}{\sqrt{2\pi}}\, e^{ipx} . \tag{37} $$

Evidently, this solves the differential equation; the normalization factor $1/\sqrt{2\pi}$ is a convenient choice. We will see why presently.

Let us calculate the inner product

$$ \langle p' | p \rangle = \int dx\, \langle p' | x \rangle \langle x | p \rangle = \frac{1}{2\pi} \int dx\, e^{i(p - p')x} . \tag{38} $$

You can look up the value of this integral in a book, but let's see if we can derive it.
The integral. We need

$$ I(k) = \int dx\, e^{ikx} . \tag{39} $$

This integral is not so well defined, but it is better defined if we treat it as a distribution. For that, we should integrate it against an arbitrary test function $h(k)$. However, we can cheat a little by just integrating against the function equal to 1 for $a < k < b$ and 0 otherwise. Thus we look at

$$ \int_a^b dk\, I(k) = \int dx \int_a^b dk\, e^{ikx} = -i \int dx\, \frac{1}{x}\, \big[ e^{ibx} - e^{iax} \big] . \tag{40} $$

The integrand appears to have a pole at $x = 0$, but really it doesn't, because $\exp(ibx)$ cancels $\exp(iax)$ at $x = 0$. For this reason, we can replace $1/x$ by $1/(x + i\epsilon)$ with $\epsilon > 0$ and take the limit $\epsilon \to 0$:

$$ \int_a^b dk\, I(k) = -i \lim_{\epsilon \to 0} \int dx\, \frac{1}{x + i\epsilon}\, \big[ e^{ibx} - e^{iax} \big] . \tag{41} $$

Now, for finite $\epsilon$ the integral of each term exists separately and we can write

$$ \int_a^b dk\, I(k) = -i \lim_{\epsilon \to 0} \left\{ \int dx\, \frac{1}{x + i\epsilon}\, e^{ibx} - \int dx\, \frac{1}{x + i\epsilon}\, e^{iax} \right\} . \tag{42} $$

Now we need the integral

$$ f(b) = \int dx\, \frac{1}{x + i\epsilon}\, e^{ibx} . \tag{43} $$

We can consider $x$ to be a complex variable. We are integrating a function of $x$ that is analytic except for a pole at $x = -i\epsilon$. Our integral runs along the real $x$-axis.

If $b > 0$, we can close the contour in the upper half plane by integrating over $x$ from $-R$ to $R$ and adding an integration over a semicircle of radius $R$ in the upper half $x$-plane. Then we take $R \to \infty$. The integral along the big semicircle has a $1/R$ from the $1/x$ and it is suppressed by

$$ \exp(ibx) = \exp(ibR\cos\theta)\, \exp(-bR\sin\theta) \tag{44} $$

for $x = R\cos\theta + iR\sin\theta$. Thus the integral over the big semicircle gives zero in the limit $R \to \infty$ and we can add it for free. But now we have the integral over a closed contour of a function that is analytic (with no poles) inside the contour. The result is zero.
If $b < 0$, we can close the contour in the lower half plane by integrating over $x$ from $-R$ to $R$ and adding an integration over a semicircle of radius $R$ in the lower half $x$-plane. Then we take $R \to \infty$. The integral along the big semicircle has a $1/R$ from the $1/x$ and it is suppressed by

$$ \exp(ibx) = \exp(-i|b|R\cos\theta)\, \exp(-|b|R\sin\theta) \tag{45} $$

for $x = R\cos\theta - iR\sin\theta$. Thus the integral over the big semicircle gives zero in the limit $R \to \infty$ and we can add it for free. But now we have the integral over a closed contour of a function that is analytic inside the contour except for one pole, the one at $x = -i\epsilon$. The contour circles the pole in the clockwise sense, so the result is $-2\pi i$ times the residue of the pole:

$$ \int dx\, \frac{1}{x + i\epsilon}\, e^{ibx} = -2\pi i\, e^{\epsilon b} . \tag{46} $$

For $\epsilon \to 0$, this is just $-2\pi i$.
Putting this together for $b > 0$ and $b < 0$, we have

$$ f(b) = -\,\theta(b < 0)\; 2\pi i\, e^{\epsilon b} , \tag{47} $$

where $\theta(\text{condition})$ is 1 if the condition holds and 0 otherwise. Applying this to both integrals in the integral of $I(k)$ and then taking the limit $\epsilon \to 0$, we have, assuming that $a < b$,

$$ \int_a^b dk\, I(k) = 2\pi\, \big\{ \theta(a < 0) - \theta(b < 0) \big\} = 2\pi\, \theta(a < 0 \;\&\; b > 0) . \tag{48} $$

That is to say, $I(k)$ vanishes on any interval that does not include $k = 0$, while if we integrate it over any interval that includes $k = 0$, its integral is $2\pi$. We thus identify

$$ I(k) = 2\pi\, \delta(k) . \tag{49} $$

Well, perhaps you would have preferred to just pull the answer out of a math
book. However, this style of derivation is useful in many circumstances. We
will see derivations like this again. For that reason, it is worthwhile to learn
how to do it right from the start of this course.
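In that spirit, here is a quick numerical cross-check of Eqs. (48) and (49), added for this writeup rather than taken from the notes. Cutting the $x$ integral off at $|x| < L$ gives the regulated $I_L(k) = 2\sin(kL)/k$, whose integral over $[a, b]$ is $2\,[\mathrm{Si}(bL) - \mathrm{Si}(aL)]$ in terms of the sine integral $\mathrm{Si}(z) = \int_0^z dt\, \sin(t)/t$; the cutoff value and the interval endpoints are arbitrary choices.

```python
# Regulated version of Eq. (39): cutting the x integral off at |x| < L
# gives I_L(k) = 2 sin(kL)/k. Its integral over [a, b] is
# 2 [Si(bL) - Si(aL)], with Si the sine integral.
import numpy as np
from scipy.special import sici

def int_I(a, b, L):
    """integral_a^b dk I_L(k), with I_L(k) = 2 sin(kL)/k."""
    return 2.0 * (sici(b * L)[0] - sici(a * L)[0])

L = 1e6
print(int_I(-1.0, 2.0, L), 2 * np.pi)   # interval contains k = 0: -> 2*pi
print(int_I(0.5, 2.0, L))               # interval excludes k = 0: -> 0
```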
Our result is

$$ \langle p' | p \rangle = \frac{1}{2\pi} \int dx\, e^{i(p - p')x} = \frac{1}{2\pi}\, 2\pi\, \delta(p' - p) . \tag{50} $$

Since we included a factor $1/\sqrt{2\pi}$ in the normalization of $|p\rangle$, we have

$$ \langle p' | p \rangle = \delta(p' - p) . \tag{51} $$

The vectors $|p\rangle$ are guaranteed to constitute a complete basis set. The completeness sum with the normalization that we have chosen is

$$ 1 = \int dp\, |p\rangle \langle p| . \tag{52} $$

Momentum space wave functions


If the system is in state $|\psi\rangle$, the amplitude for the particle to have momentum $p$ is

$$ \tilde\psi(p) = \langle p | \psi \rangle . \tag{53} $$

We call this the momentum-space wave function. If $|\psi\rangle$ and $|\phi\rangle$ represent two states, then by just inserting $1 = \int dp\, |p\rangle \langle p|$ between the two state vectors we can write the inner product as

$$ \langle \phi | \psi \rangle = \int dp\, \langle \phi | p \rangle \langle p | \psi \rangle = \int dp\, \tilde\phi(p)^*\, \tilde\psi(p) . \tag{54} $$

Assuming that $\langle \psi | \psi \rangle = 1$, we have

$$ \int dp\, \big|\tilde\psi(p)\big|^2 = 1 . \tag{55} $$

With our standard interpretation of probabilities, the probability that the system will be found to have momentum between $a$ and $b$ if we measure momentum is

$$ P(a, b) = \int_a^b dp\, \big|\tilde\psi(p)\big|^2 . \tag{56} $$

You are encouraged by this notation to think of the system as having one state $|\psi\rangle$, which can be represented by either $\langle x | \psi \rangle$ or $\langle p | \psi \rangle$, depending on what sort of analysis you want to do. We can go from the $x$-representation to the $p$-representation by writing

$$ \tilde\psi(p) = \langle p | \psi \rangle = \int dx\, \langle p | x \rangle \langle x | \psi \rangle = \frac{1}{\sqrt{2\pi}} \int dx\, e^{-ipx}\, \psi(x) . \tag{57} $$

The inverse transformation is

$$ \psi(x) = \langle x | \psi \rangle = \int dp\, \langle x | p \rangle \langle p | \psi \rangle = \frac{1}{\sqrt{2\pi}} \int dp\, e^{ipx}\, \tilde\psi(p) . \tag{58} $$

This is known as the Fourier transformation and its inverse.
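On a grid, the transform pair (57)-(58) can be approximated with the FFT. Here is a minimal sketch added as an illustration (the grid, the phase factor tied to the grid origin, and the gaussian test state are choices of the sketch, not part of the notes); with the $1/\sqrt{2\pi}$ normalization the transform is unitary, so the norm of Eq. (55) comes out the same in either representation.

```python
# Approximate the transform pair (57)-(58) with the FFT and check that
# the norm (Eq. (55)) is the same in the x and p representations.
import numpy as np

N = 4096
x = np.linspace(-40.0, 40.0, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # FFT momentum grid
dp = 2 * np.pi / (N * dx)

psi = np.pi**-0.25 * np.exp(-(x - 2.0)**2 / 2) * np.exp(3j * x)  # test state

# psi~(p) = (2 pi)^{-1/2} int dx e^{-ipx} psi(x); the phase fixes the grid origin
psi_p = np.fft.fft(psi) * dx * np.exp(-1j * p * x[0]) / np.sqrt(2 * np.pi)

# inverse transform, Eq. (58)
psi_back = np.fft.ifft(psi_p * np.exp(1j * p * x[0])) * np.sqrt(2 * np.pi) / dx

print(np.sum(np.abs(psi)**2) * dx)       # norm in x ... ~ 1
print(np.sum(np.abs(psi_p)**2) * dp)     # norm in p ... ~ 1 (Parseval)
print(np.max(np.abs(psi - psi_back)))    # round-trip error ~ 1e-15
```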

The translation operator again

Now that we know something, let's look at the translation operator $U(a)$ again. If $a$ is a finite distance and $\delta a$ is an additional infinitesimal distance, we have

$$ U(a + \delta a) = U(\delta a)\, U(a) . \tag{59} $$

We have defined $U(\delta a)$ as

$$ U(\delta a) = 1 - i\, p_{\rm op}\, \delta a + \cdots , \tag{60} $$

so

$$ U(a + \delta a) = U(a) - i\, p_{\rm op}\, \delta a\, U(a) + \cdots . \tag{61} $$

That is,

$$ \frac{1}{\delta a} \big[ U(a + \delta a) - U(a) \big] = -i\, p_{\rm op}\, U(a) + \cdots . \tag{62} $$

Taking the limit $\delta a \to 0$, this is

$$ \frac{d}{da}\, U(a) = -i\, p_{\rm op}\, U(a) . \tag{63} $$

To see what this tells us, it is convenient to use the momentum representation. For an arbitrary state $|\psi\rangle$ we have

$$ \frac{d}{da}\, \langle p |\, U(a)\, | \psi \rangle = -i\, \langle p |\, p_{\rm op}\, U(a)\, | \psi \rangle = -i\, p\, \langle p |\, U(a)\, | \psi \rangle . \tag{64} $$

That's a differential equation that we know how to solve. Using the boundary condition $\langle p |\, U(0)\, | \psi \rangle = \langle p | \psi \rangle$, we have

$$ \langle p |\, U(a)\, | \psi \rangle = e^{-ipa}\, \langle p | \psi \rangle . \tag{65} $$

This is the same thing as

$$ \langle p |\, U(a)\, | \psi \rangle = \langle p |\, \exp(-i\, p_{\rm op}\, a)\, | \psi \rangle . \tag{66} $$

Here we define $\exp(-i\, p_{\rm op}\, a)$ by saying that, applied to a $p_{\rm op}$ eigenstate, it gives

$$ \langle p |\, \exp(-i\, p_{\rm op}\, a) = \langle p |\, \exp(-ipa) . \tag{67} $$

Equally well, we can define

$$ \exp(-i\, p_{\rm op}\, a) = \sum_{n=0}^{\infty} \frac{1}{n!}\, (-ia)^n\, p_{\rm op}^n . \tag{68} $$

Since Eq. (66) holds for any state $|\psi\rangle$ and any $\langle p|$, we have

$$ U(a) = \exp(-i\, p_{\rm op}\, a) . \tag{69} $$

That is, $U(a)$ is an exponential of its infinitesimal generator $p_{\rm op}$. Note that this relation seems a little more mysterious if we think of $p_{\rm op}$ as represented by the differential operator $-i\,\partial/\partial x$. It is, however, perfectly sensible. To apply $\exp(-i\, p_{\rm op}\, a)$ to a wave function, you can Fourier transform the wave function, multiply by $\exp(-ipa)$, and then Fourier transform back. Try it!
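Here is one way to try it, added as an illustration (the grid and test state are arbitrary; the shift $a$ is chosen to be a whole number of grid spacings so the comparison with $\psi(x - a)$ is exact):

```python
# Apply U(a) = exp(-i p_op a) via the Fourier transform: go to the
# p representation, multiply by exp(-i p a), come back, and compare
# the result with the directly shifted wave function psi(x - a).
import numpy as np

N = 4096
x = np.linspace(-40.0, 40.0, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)

a = 5.0                                                     # a/dx = 256 exactly
psi = lambda x: np.exp(-(x + 10.0)**2) * np.exp(1.2j * x)   # arbitrary test state

translated = np.fft.ifft(np.exp(-1j * p * a) * np.fft.fft(psi(x)))
print(np.max(np.abs(translated - psi(x - a))))   # ~ 1e-13: the two agree
```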

The uncertainty relation

There is a general relation between the commutator of two self-adjoint operators $A$ and $B$ and how precisely the values of $A$ and $B$ can be known in a single state $|\psi\rangle$. Let us pick a (normalized) state $|\psi\rangle$ of interest and define

$$ \langle A \rangle = \langle \psi |\, A\, | \psi \rangle , \qquad \langle B \rangle = \langle \psi |\, B\, | \psi \rangle . \tag{70} $$

Then consider the quantity

$$ \big\langle \big( A - \langle A \rangle \big)^2 \big\rangle . $$


We can call this the variance of $A$ in the state $|\psi\rangle$; its square root can be called the uncertainty of $A$ in the state. To understand this, expand $|\psi\rangle$ in eigenvectors $|i\rangle$ of $A$ with eigenvalues $a_i$. We have

$$ \langle A \rangle = \sum_i a_i\, \big|\langle i | \psi \rangle\big|^2 . \tag{71} $$

That is, $\langle A \rangle$ is the expectation value of $A$, the average of the eigenvalues $a_i$ weighted by the probability that the system will be found in the state with that eigenvalue. Then

$$ \big\langle \big( A - \langle A \rangle \big)^2 \big\rangle = \sum_i \big( a_i - \langle A \rangle \big)^2\, \big|\langle i | \psi \rangle\big|^2 . $$

This is the average value of the square of the difference between the eigenvalue $a_i$ and the average value $\langle A \rangle$. That is what one calls the variance of a distribution of values in statistics. If the variance is small, then we know the value of $A$ very well; if the variance is large, then the value of $A$ is very uncertain.
The general relation concerns the product of the variance of $A$ and the variance of $B$ in the state $|\psi\rangle$:

$$ \big\langle \big( A - \langle A \rangle \big)^2 \big\rangle\, \big\langle \big( B - \langle B \rangle \big)^2 \big\rangle \ge \frac{1}{4}\, \Big| \big\langle [A, B] \big\rangle \Big|^2 . \tag{72} $$

The proof is given in Sakurai. We have seen that

$$ [x_{\rm op}, p_{\rm op}] = i . \tag{73} $$

Thus

$$ \big\langle \big( x_{\rm op} - \langle x_{\rm op} \rangle \big)^2 \big\rangle\, \big\langle \big( p_{\rm op} - \langle p_{\rm op} \rangle \big)^2 \big\rangle \ge \frac{1}{4} . \tag{74} $$

Thus if we know the position of a particle very well, then we cannot know its momentum well; if we know the momentum of a particle very well, then we cannot know its position well.
There is a class of functions for which the $\ge$ becomes $=$: gaussian wave packets. Let's see how this works. Define

$$ \psi(x) = \frac{1}{(2\pi)^{1/4}}\, \frac{1}{\sqrt{a}}\; e^{ik_0 (x - x_0)}\, \exp\!\left( -\,\frac{(x - x_0)^2}{4a^2} \right) . \tag{75} $$

This function is normalized to 1:

$$ \begin{aligned}
\int dx\, |\psi(x)|^2
&= \frac{1}{(2\pi)^{1/2}}\, \frac{1}{a} \int dx\, \exp\!\left( -\,\frac{(x - x_0)^2}{2a^2} \right) \\
&= \frac{1}{(2\pi)^{1/2}}\, \frac{1}{a} \int dy\, \exp\!\left( -\,\frac{y^2}{2a^2} \right) \\
&= \frac{1}{(2\pi)^{1/2}}\, \sqrt{2} \int dz\, \exp\!\big( -z^2 \big) \\
&= 1 .
\end{aligned} \tag{76} $$

Here we have used the integral

$$ \int dz\, \exp\!\big( -z^2 \big) = \sqrt{\pi} , \tag{77} $$

which one needs often.


It is pretty much self evident that $\langle x_{\rm op} \rangle = x_0$. Let us evaluate the variance. We have

$$ \begin{aligned}
\big\langle \big( x_{\rm op} - \langle x_{\rm op} \rangle \big)^2 \big\rangle
&= \frac{1}{(2\pi)^{1/2}}\, \frac{1}{a} \int dx\, (x - x_0)^2\, \exp\!\left( -\,\frac{(x - x_0)^2}{2a^2} \right) \\
&= \frac{1}{(2\pi)^{1/2}}\, \frac{1}{a} \int dy\, y^2\, \exp\!\left( -\,\frac{y^2}{2a^2} \right) \\
&= \frac{2a^2}{\pi^{1/2}} \int dz\, z^2\, \exp\!\big( -z^2 \big) \\
&= \frac{2a^2}{\pi^{1/2}} \left[ -\,\frac{d}{d\lambda} \int dz\, \exp\!\big( -\lambda z^2 \big) \right]_{\lambda = 1} \\
&= \frac{2a^2}{\pi^{1/2}} \left[ -\,\frac{d}{d\lambda} \sqrt{\frac{\pi}{\lambda}} \right]_{\lambda = 1} \\
&= \frac{2a^2}{\pi^{1/2}}\, \frac{\sqrt{\pi}}{2} \\
&= a^2 .
\end{aligned} \tag{78} $$

Thus the variance is just $a^2$.
Now, let's look at what the same state looks like in momentum space. To find $\tilde\psi(p)$, we need to complete the square in the exponent. This is often a useful manipulation. We find

$$ \begin{aligned}
\tilde\psi(p)
&= \frac{1}{\sqrt{2\pi}} \int dx\, e^{-ipx}\, \psi(x) \\
&= \frac{1}{(2\pi)^{3/4}}\, \frac{1}{\sqrt{a}} \int dx\, e^{-ipx}\, e^{ik_0 (x - x_0)}\, \exp\!\left( -\,\frac{(x - x_0)^2}{4a^2} \right) \\
&= \frac{1}{(2\pi)^{3/4}}\, \frac{1}{\sqrt{a}}\, e^{-ipx_0} \int dy\, e^{-ipy}\, e^{ik_0 y}\, \exp\!\left( -\,\frac{y^2}{4a^2} \right) \\
&= \frac{1}{(2\pi)^{3/4}}\, \frac{1}{\sqrt{a}}\, e^{-ipx_0} \int dy\, \exp\!\left( -i(p - k_0)y - \frac{y^2}{4a^2} \right) \\
&= \frac{1}{(2\pi)^{3/4}}\, 2\sqrt{a}\, e^{-ipx_0} \int dz\, \exp\!\big( -i\,2a(p - k_0)z - z^2 \big) \\
&= \frac{1}{(2\pi)^{3/4}}\, 2\sqrt{a}\, e^{-ipx_0}\, e^{-a^2 (p - k_0)^2} \int dz\, \exp\!\big( -[z + ia(p - k_0)]^2 \big) \\
&= \frac{1}{(2\pi)^{3/4}}\, 2\sqrt{a}\, e^{-ipx_0}\, e^{-a^2 (p - k_0)^2} \int dw\, \exp\!\big( -w^2 \big) \\
&= \frac{1}{(2\pi)^{3/4}}\, 2\sqrt{\pi a}\, e^{-ipx_0}\, e^{-a^2 (p - k_0)^2} \\
&= \frac{\sqrt{2a}}{(2\pi)^{1/4}}\, e^{-ipx_0}\, \exp\!\big( -a^2 (p - k_0)^2 \big) .
\end{aligned} \tag{79} $$

By comparing to the definition of $\psi(x)$, we see that $\tilde\psi(p)$ is a properly normalized wave function with

$$ \langle p_{\rm op} \rangle = k_0 . \tag{80} $$

This same comparison shows that, since the gaussian in Eq. (79) has the form of Eq. (75) with $a$ replaced by $1/(2a)$,

$$ \big\langle \big( p_{\rm op} - \langle p_{\rm op} \rangle \big)^2 \big\rangle = \frac{1}{4a^2} . \tag{81} $$

Thus the uncertainty product is

$$ \big\langle \big( x_{\rm op} - \langle x_{\rm op} \rangle \big)^2 \big\rangle\, \big\langle \big( p_{\rm op} - \langle p_{\rm op} \rangle \big)^2 \big\rangle = a^2\, \frac{1}{4a^2} = \frac{1}{4} . \tag{82} $$

That is, the uncertainty product is as small as it is allowed to be. This example shows us rather directly how making the state more concentrated in $x$ makes it more spread out in $p$.
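As a closing numerical check, added as an illustration (the packet parameters $a$, $x_0$, $k_0$ are arbitrary): sample $\psi(x)$ from Eq. (75) and $\tilde\psi(p)$ from Eq. (79), compute both variances by direct numerical integration, and confirm Eq. (82).

```python
# Sample the gaussian packet of Eq. (75), compute the variance of x in
# position space and the variance of p from Eq. (79) in momentum space,
# and verify that the uncertainty product equals 1/4 (Eq. (82)).
import numpy as np

a, x0, k0 = 0.7, 1.0, 2.0                      # arbitrary packet parameters

x = np.linspace(-30.0, 30.0, 100001)
dx = x[1] - x[0]
psi = (2 * np.pi)**-0.25 / np.sqrt(a) * np.exp(1j * k0 * (x - x0)) \
      * np.exp(-(x - x0)**2 / (4 * a**2))
rho_x = np.abs(psi)**2
var_x = np.sum((x - np.sum(x * rho_x) * dx)**2 * rho_x) * dx

p = np.linspace(-30.0, 30.0, 100001)
dp = p[1] - p[0]
psi_p = np.sqrt(2 * a) / (2 * np.pi)**0.25 * np.exp(-1j * p * x0) \
        * np.exp(-a**2 * (p - k0)**2)
rho_p = np.abs(psi_p)**2
var_p = np.sum((p - np.sum(p * rho_p) * dp)**2 * rho_p) * dp

print(var_x, a**2)            # variance in x: a^2
print(var_p, 1 / (4 * a**2))  # variance in p: 1/(4 a^2)
print(var_x * var_p)          # uncertainty product: 0.25
```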