
8.04: Quantum Mechanics Professor Allan Adams

Massachusetts Institute of Technology 2013 February 5

Lecture 1
Introduction to Superposition

Assigned Reading:
E&R 1.6, 1.7; 2.1-2.5; all of Ch. 3, NOT all of Ch. 4!!!
Li. all of Ch. 1; 2.3, 2.5, 2.6, NOT 2.4!!!
Ga. 1.2, 1.3, 1.4, NOT 1.5!!!
Sh. 3

I want to start by describing to you, the readers, a particular series of experiments. These
experiments involve electrons. They are true, and in my mind, they are the most unsettling
experiments ever done.

Let us focus on 2 properties of electrons. For the sake of this discussion, let us call them color
and hardness. An empirical fact is that the only observable colors are individually “black”
and “white”, while the only observable hardnesses are individually “hard” and “soft”. There
are no other observable values that these properties can take, as no one has ever seen any
other such value. What I mean is that it is possible to build a box that measures color or
hardness, and the value of the property being measured can be inferred from which of the
two outputs the box spits the electron out of.

Figure 1: Color and hardness boxes

As an aside, let me briefly describe how these boxes work. The details don’t
particularly matter, as people have built analogous experiments for other objects
like silver atoms, neutrons, and other things, using different mechanisms inside
the box, and yielding identical results. To be concrete, though, here is what is
inside the boxes: a magnetic field B = B0 ez splits electrons by color, such that
electrons of one color are sent in one direction toward a screen, and electrons of
the other color are sent in another direction toward the same large screen. The
electrons of each color always land in the same positions, and there are always
only 2 such landing locations. Similarly, a magnetic field B = B₀ e_x could split
electrons by their hardness value. We will discuss later why electrons do this, but

empirically, they do, and this matters to us now. The boxes mentioned earlier are
these devices inside boxes, so an input of electrons into a box gets split into one
direction or the other based on the value a given electron has for the property
being measured. I am calling these properties “color” and “hardness” rather
than “magnetic field-induced splitting” or something like that to emphasize that
lots of systems, and not just the particular one described here, exhibit these
behaviors. Also, using technical names brings in more confusing and distracting
jargon, so those will be discussed later.

Figure 2: What happens inside the boxes

A key property of these boxes is their repeatability: if electrons all of a certain value of a given
property are fed into a box measuring the value of that property, all of the electrons coming
out will retain the original value for that measured property. For instance, if electrons that
are all black are fed into a color box for measurement, the output will have all black and no
white electrons.
Another question is one of correlation. For instance, being male and being a bachelor are
correlated. So are color and hardness correlated? Well, this is easy to test with boxes!

Empirically, it is found that if one property is measured and all the electrons with a single
value of that property have the other property measured, the value of the other property is
found to be probabilistically evenly split. For example, if electrons that are known to all be
soft have their color measured, half will be black and half will be white; similarly, if electrons
that are known to all be black have their hardness measured, half will be hard and half will
be soft.

Hence, measuring the value of one property gives no predictive power whatsoever for a
subsequent measurement of the other property. This means that hardness and color are
persistent and uncorrelated. This lets us predict the results of a lot of similar experiments
too.

Figure 3: Repeatability of color (and hardness)

Figure 4: Lack of correlation of color with hardness (and vice versa)

For example, let us set up an experiment with the following boxes.

1. Anything reaching the hardness box must be white.

2. By previous experiments, electrons exit the hardness box as half of them being hard
and half of them being soft.

Figure 5: Is the end result hard and black?

3. The half that are soft reach the additional color box, and the previous measurements
found that the electrons entering this box were white and soft.
4. As a measurement of color is repeatable, these electrons should always emerge as white
and never as black.

Our prediction is that all the electrons entering the second color box will exit as white, and
none will exit as black.

This is completely and utterly wrong! In fact, half of the electrons that were previously
measured to be white exit as white, and half now exit as black! The same goes for any
other pair of results from the first two boxes, if hardness and color were interchanged, et
cetera. Apparently, the presence of the hardness box tampers with color, because without
the hardness box, repeatability would ensure that all the originally measured white electrons
would come out white again. This is suspicious!

You may be asking, what property determines which electrons flip? Well, to check, we could monitor
all possible physical properties of electrons entering the device and check for correlations.
Experimentally, none have been found! This means that those electrons that flip values for
a property and those that do not are indistinguishable in the beginning.

You may now be asking, are the boxes just badly built? No! We could use many different
materials and technologies, yet all would give the same evenly split statistics. What is
striking is not just that we cannot build a box for one property that does not disturb the
other property, but we cannot even change the statistics as much as 1 part in 10¹⁰ from equal
probabilities!

A curious consequence of this is that we cannot build a reliable box to simultaneously measure
both color and hardness.

Figure 6: This cannot measure hardness and color simultaneously

More precisely, we can try to do this, but the results would be neither persistent nor
repeatable. For instance, in the displayed apparatus, if the hardness and color are measured of the
output that should be hard and black, all are hard, but only half are black.

Thus, simultaneously measuring hardness and color is fundamentally disallowed! The general
statement of this is the uncertainty principle, which is the idea that some measurable physical
properties of real systems are incompatible with each other in the way that has been described
thus far.

You might now be thinking that this just applies to the hardness and color of electrons.
Actually, every object has similar properties, including me, you, and a paper copy of these
notes! These properties hold true every time they are tried with new systems, though it is
simply easier to test these with electrons.

This has worked thus far, but let us go deeper. Consider the following device, with individual
mirrors to change the direction of particle paths, and with combined mirrors to make two
paths coincide.

We can test the operation of the individual and combined mirrors by checking whether it
preserves the values of a given property from the input at the output. Now let us run
some experiments with this apparatus. These are quite straightforward but constitute good
preparation for making predictions for more complicated experiments later.

Figure 7: Recombines two particle paths

First, with the individual mirrors changing the directions of output paths from a hardness
box, let us send in white electrons and measure the output of a hardness box whose input is
the output of the combined mirror.

Figure 8: Send in white, split by hardness, recombine, measure hardness

What do we expect? We just need to follow the electrons! Half take the hard output path,
and half take the soft output path. Those that take the hard path exit as hard, while those
that take the soft path exit as soft, so the final output should have a measurement of half
of the electrons as hard and half as soft. Indeed, this is empirically correct!

Second, let us send in hard electrons and measure the color at the end.

What do we expect? Every electron takes the hard path, so the color box input is always
hard. Then, feeding hard electrons into the color box yields half as black and half as white.
Indeed, this is empirically correct as well!


Figure 9: Send in hard, split by hardness, recombine, measure color

Third, let us send in white electrons and measure the color at the end.

Figure 10: Send in white, split by hardness, recombine, measure color

What do we expect? Putting white electrons into the hardness box yields half as hard and
half as soft. Of the hard electrons on the hard path, measurement of color should yield half
as black and half as white. The same goes for the soft electrons on the soft path. Adding
these two yields that half should be black and half should be white again. Of course, apart
from the presence of mirrors, this should not be too different from the situation of a hardness
box between two color boxes, so we cannot expect much else. Yet this is empirically wrong!
What we measure is all the electrons being white; this is very odd! So what is going on?

Before running further experiments, let us make a moving absorptive wall for each path.


Having a given wall “out” means the system is unchanged from before. Having a given wall
“in” means that path is blocked.

Figure 11: Block soft particle path

Fourth, let us send in white electrons and measure the color at the end while having a wall
in the soft path.

We expect that the overall output will decrease by half. Also, if the wall is out, we get all
white electrons at the output. That said, a wall in the soft path should not have an effect
on electrons in the hard path, given that the hard path could ultimately be many millions
of kilometers away. Hence, we expect half as many electrons to exit, and all of them should
be white. Empirically, though, while the overall output is indeed down by half, once again
half of the electrons are white and half are black! The same would occur if the wall was in
the hard path instead!

Now we are in deep trouble. Let us consider an electron inside the apparatus, with all the
walls out. We know it will exit the color box as white with full confidence. So which route
did it take?

Did it take the hard or soft path individually? That cannot be, as the output in either case
is half white and half black. Did it take both? That cannot be, because the electron can
always be measured to travel on one path or another and not on both at the same time. Did
it take no path at all? That cannot be, because putting in both walls removes all output.

What the heck is going on? What we are facing is that for all electrons in the apparatus,
the route they take is not an individual path, not both paths, and not no path at all. There
do not appear to be any other logical possibilities, so what are they doing anyway?

If the experiments are accurate and the arguments correct, the electrons are in fact doing

something we have never dreamed of before and for which we do not at the moment have
any words. Electrons have modes of moving or modes of being, which are unlike anything
we have discussed thus far. This is also true of molecules, bacteria, and other macroscopic
objects, though the effects are harder to detect. Physicists call such modes superpositions,
which for now just means “we have no clue what is going on”.

In the context of our previous experiments, an initially white electron inside the apparatus
with all walls out is not hard, not soft, not both, and not neither, but is in a superposition of
the states of being hard and soft. This is why we cannot meaningfully say that an electron
has given definite values of color and hardness. This is not because our boxes are crude or
because we are ignorant (though both may incidentally be true). There is a deeper reason:
having a definite value for one property implies not having a definite value but instead being
in a superposition of values for the other property. Every electron exits a particular box
as having one value or the other for the measured property, but not every electron is in a
state of one value or the other for that property. It can be in a superposition of the two
values, with the probability that we subsequently measure it to have one value or the other
depending on the details of the superposition. For instance, an electron that is white is in
an equal superposition of being hard and soft, as the probability of measuring each value of
hardness is equal.
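
As an aside, these statistics are easy to play with numerically. Here is a minimal sketch (an assumption made only for illustration: it models color and hardness as measurements of a two-state system in two different bases, which is the standard way such experiments are modeled, not something established by the discussion above):

    import numpy as np

    rng = np.random.default_rng(0)

    # Model (assumed for illustration): hardness states as basis vectors;
    # color states as their equal superpositions.
    hard, soft = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    white = (hard + soft) / np.sqrt(2)
    black = (hard - soft) / np.sqrt(2)

    def measure(state, basis):
        # Born rule: collapse onto one of the two basis states.
        p0 = abs(np.dot(basis[0], state)) ** 2
        return basis[0] if rng.random() < p0 else basis[1]

    # White electrons -> hardness box -> keep the soft beam -> color box.
    n_soft = n_black = 0
    for _ in range(10000):
        e = measure(white, (hard, soft))
        if np.allclose(e, soft):
            n_soft += 1
            if np.allclose(measure(e, (white, black)), black):
                n_black += 1
    print(n_soft / 10000, n_black / n_soft)  # ~0.5 and ~0.5

The point of the sketch is only that a superposed state reproduces both the repeatability and the 50/50 statistics described above.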

To build a better definition of superposition than “we have no clue what is going on” requires
a new language. That is the language of quantum mechanics, whose underpinnings will be
the topic of 8.04. No matter how we describe it though, superposition is really weird, but
true.

If all of this staggers your intuition, that is because your intuition was honed by throwing
spears, putting bread in a toaster, and playing with Rubik’s cubes, all of which involve
things so big that quantum effects are not noticeable. Hence, you can safely and consistently
ignore them when, for instance, you are wrestling with lions. As a friend of mine says, you
don’t need to understand quantum gravity to cook soup.

However, when we work in very different regimes, like in the atom, there is no reason for
human-scale intuition to continue to be a reliable guide, and indeed, it is not! It is not
electrons that are messed up; it is our intuition that is messed up. And this is the goal of
8.04: we will step beyond the scale of daily experiences to develop an intuition for electrons,
atoms, and superposition.
MIT OpenCourseWare
http://ocw.mit.edu

8.04 Quantum Physics I


Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
8.04: Quantum Mechanics Professor Allan Adams

Massachusetts Institute of Technology 2013 February 7

Lecture 2
Experimental Facts of Life

Assigned Reading:
E&R 1.6, 1.7; 2.1-2.5; all of Ch. 3, NOT all of Ch. 4!!!
Li. all of Ch. 1; 2.3, 2.5, 2.6, NOT 2.4!!!
Ga. 1.2, 1.3, 1.4, NOT 1.5!!!
Sh. 3

We all know atoms are made of:

• electrons

– Cathode rays in CRT monitors make bright spots. If they can be sprayed in such
a manner, they must exist.
– Alternatively, cloud chamber tracks can be observed.

• nuclei

– α particles shot into atoms occasionally fly back, as per the experiments of
Rutherford, Geiger, Marsden, and others.
– Also, they are collided at places like the RHIC by people like Prof. Busza. If they
can be collided, they must exist.

We also know that classically, atomic orbits are unstable. In spite of this, we are compelled
to say the following.

Experimental result #1: atoms exist!

We also know from our previous discussions of color and hardness the following.

Experimental result #2: randomness exists!

As an aside, hard scattering to detect dense cores did not end with Rutherford,
Geiger, and Marsden. Similar experiments in the 1960s of electrons off of protons
showed that protons are made of 3 dense parts each with fractional (relative to
the electron) charge, called quarks. This earned Kendall and Friedman of MIT
and Taylor of Stanford the 1990 Nobel Prize.

Figure 1: Discrete atomic spectra

Balmer noticed by being a little clever (but mostly obsessed) that spectral emission lines
followed the formula

λ_n = (3646 Å) · (1 − 4/n²)⁻¹ for n ∈ {3, 4, 5, . . .}. (0.1)
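
As a quick check of (0.1): for n = 3, λ₃ = (3646 Å) · (1 − 4/9)⁻¹ = (3646 Å) · (9/5) ≈ 6563 Å, which is the familiar red Hα line of hydrogen.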

Rydberg and Ritz then found that

λ⁻¹ = R · (n₁⁻² − n₂⁻²) for n₁, n₂ ∈ ℤ, n₂ > n₁ (0.2)

where R is the Rydberg constant dependent on the particular element but independent of
the emission series. Where did that come from?¹

Experimental result #3: atomic spectra are discrete!

Setting discrete spectra aside for a moment, let us consider light at a frequency ν and
amplitude A shone on a metal. We measure a current I because light liberates electrons from
the metal in what is known as the photoelectric effect, and we tune the voltage ΔV until
I = 0. We expect that a more intense beam makes the electrons more energetic, as the energy
is proportional to the intensity A², and K ≈ q_e ΔV, so ΔV needs to be bigger to make I = 0.
We also expect this to be generally independent of ν.

What we instead find is that ΔV(I = 0) is independent of A to 1 part in 10⁷, that ΔV
varies linearly with ν, and that there exists a minimum ν below which no electrons are
liberated at any A!
¹Editor’s note: where did that come from? I mean, am I supposed to explain the reason for the spectra
right here and now, or leave it as an open question for readers to ponder?

Figure 2: Photoelectric effect experimental schematic

Figure 3: Photoelectric dependence of I on V : expectation (top) versus reality (bottom)

Einstein’s interpretation of this is that light comes in packets of definite energy E = hν, the
intensity is proportional to the number of such packets, and the kinetic energy of an electron
liberated from a metal by light is K = hν − W. A plot of the electron kinetic energy versus
the light frequency yields a straight line of slope h, which is called the Planck constant.
Another quantity ℏ, defined by h = 2πℏ, is much more often used in quantum mechanics, and
while this is technically called the Dirac constant, it is often just called the Planck constant
exactly because ℏ is used so much more often than h in calculations. This will be seen later
on.

The consequences of this include that the intensity determines the rate of electron liberation,
but for ν < W/h, no electrons can be liberated regardless of intensity. Furthermore, you know
E = cp from 8.02 or 8.022, λν = c from 8.03, and E = hν from Einstein’s model. Therefore,
p = h/λ. This means that the discrete packets of light with wavelength λ have a momentum
p given by the previous formula.
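
To put numbers to this: taking sodium, whose work function is W ≈ 2.3 eV (a standard tabulated value, quoted here just for illustration), the threshold frequency is ν = W/h ≈ 5.6 × 10¹⁴ Hz, corresponding to λ = hc/W ≈ 540 nm. Green or bluer light liberates electrons from sodium, while red light does not, at any intensity.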

Why is this weird? Well, we know that light is a wave; apart from what has been taught in


Figure 4: Electron kinetic energy versus light frequency

8.02 or 8.022, 8.03, and 8.033, the double-slit experiment seems fairly convincing!

Figure 5: Schematic of double-slit interference and diffraction

One thing to ask is, where is the light when it hits the wall? In fact, it is everywhere, as it is


not localized. But the intensity shows an interference pattern. This implies that amplitudes,
rather than intensities, add.

Let us try to investigate the fringe widths in the interference pattern further. Let us suppose
that light from the two slits starts in phase. If the two beams coincide at a single point on
the screen, they remain in phase if their path lengths differ by an integer multiple of the
wavelength λ. If the horizontal distance D to the screen is much larger than the slit separation
ℓ, then the condition for constructive interference is

ℓ sin(θ) = λn.

If the beams meet on the screen a distance y from the slit positions projected onto the screen
and if θ ≪ 1, then

ℓy ≈ Dλn.

Figure 6: Geometry of double-slit interference and diffraction

Now accounting for the shape of the pattern, the information from the screen is of the
magnitude and phase of the light. The phase θ depends on y because the path length to y
varies:

ℓ₁(y) = √(D² + (y − ℓ/2)²) ≈ D + (y − ℓ/2)²/(2D),

while

ℓ₂(y) ≈ D + (y + ℓ/2)²/(2D).

From this,

θ_i(y) = (2π/λ) ℓ_i(y).

As for magnitudes, A₁ = A₂ = A₀ because the slits are identical and pointlike. This means

A(y) = A₀ · (e^{iθ₁(y)} + e^{iθ₂(y)}).



The intensity, up to a constant ensuring the correct dimensions, is

|A(y)|² = 2A₀² · (1 + cos(θ₁(y) − θ₂(y))),

which reduces to

|A(y)|² ≈ 2A₀² · (1 + cos(2πℓy/(λD))).

This yields maxima at

ℓy = Dλn

as expected. Note that maxima correspond to constructive interference, while minima
correspond to destructive interference. This comes from the fact that amplitudes add, and the
intensity is the square of the amplitude.
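
The fringe spacing follows directly: successive maxima at ℓy = Dλn are separated by Δy = Dλ/ℓ. For instance (numbers chosen only for illustration), with λ = 500 nm, D = 1 m, and ℓ = 0.1 mm, the fringes are Δy = 5 mm apart, easily resolvable by eye.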

The point is that in 8.03, you did the double-slit experiment and saw the interference fringes.
This implies that light is a wave, which nicely fits the Maxwell equations. By contrast, chunks
should behave differently!
Classically, particles sent through a double-slit screen onto another screen should hit the
final screen at one localized point or the other, and intensities would sum directly with no
interference terms. This seems to build credence for the idea that light is a wave and is not
chunky.

To recapitulate, 8.02 and 8.022 say that light is an electromagnetic wave. From 8.03, light
interferes with itself, so light should be a smooth continuum. Yet from 8.04, if light is applied
to a metal, it comes only in chunks!

Experimental result #4: light comes in chunks!

Accompanying this is the fact that light has an energy and a momentum

E = hν (0.3)
p = hλ⁻¹. (0.4)

That’s enough about light for now. What about atoms? Well, we are not as confident that
they exist, so let us stick with electrons for now. While the properties of color and hardness
in electrons are disturbing, we should be able to agree that if electrons are truly particles,
they would be localized and would thus hit a screen in a slit experiment in exactly one spot.
We can check this with a double-slit experiment.

It turns out that electrons interfere like waves even with themselves! If they were really
particles, they would have followed only one of two paths: the path from the top slit to the
end, or the path from the bottom slit to the end. We could use a wall to check which one
is happening. Yet this produces the exact same conundrum as for the boxes from before for

Figure 7: Classical particles in a double-slit experiment

the exact same reasons. Hence, the electron must be taking a superposition of the possible
paths. So the electron is neither strictly a particle nor strictly a wave, but is just an electron.

But could we be a little more clever in trying to glean through which slit the electron passed?
We could cheat by using very diffuse (low-energy) light. Examining one path would not be
like blocking it, so there would be a mild deflection but the overall interference pattern
should qualitatively be preserved.

The problem with this is that light is quantized. Every collision imparts a discrete E = hν
and p = hλ⁻¹; low intensity simply means the collisions are rare. And if the energy and
momentum are low, the wavelength, which becomes the resolution of the electron’s spatial
location, becomes too large to remain meaningful.

Hence, determining through which slit an electron passes does away with the interference


pattern. This means that every force must be quantized like E = hν and p = hλ⁻¹, or else
the slit passage could be determined without messing with the interference pattern.

But are electrons waves then? Davisson and Germer sent a beam of electrons into a crystal
and found the phenomenon of Bragg scattering. The path length difference between reflections
off of one layer of the crystal and the next is

Δℓ = 2ℓ sin(θ)

for a crystal whose layers are spaced ℓ apart. Constructive interference occurs when

Δℓ = λn.

This means

λ⁻¹ = n/(2ℓ sin(θ)).

Davisson and Germer also observed that

n/(2ℓ sin(θ)) ≈ √(2m q_e V₀)/h = √(2mE)/h = p/h.
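
As a numerical check (using the historical values of the Davisson-Germer experiment): at an accelerating voltage V₀ = 54 V, λ = h/√(2m q_e V₀) ≈ 1.67 Å, comparable to the spacing of atomic layers in a nickel crystal, which is exactly why the diffraction was observable.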

Figure 8: Davisson and Germer crystal diffraction

Experimental result #5: electrons interfere and diffract!

Accompanying this are the de Broglie relations

E = hν (0.5)
p = hλ⁻¹. (0.6)
8.04: Quantum Mechanics Professor Allan Adams

Massachusetts Institute of Technology 2013 February 12

Lecture 3
The Wavefunction

Assigned Reading:
E&R 1.6, 1.7; 2.1-2.5; all of Ch. 3, NOT all of Ch. 4!!!
Li. all of Ch. 1; 2.3, 2.5, 2.6, NOT 2.4!!!
Ga. 1.2, 1.3, 1.4, NOT 1.5!!!
Sh. 3

In classical mechanics, the configuration or state of a system is given by a point (x, p) in the
space of coordinates and momenta. This specifies everything else in the system in a fully
deterministic way, in that any observable Y that can be expressed as Y (x, p) can be found,
and any that cannot is irrelevant. Yet, as we have seen with the diffraction of electrons, it is
impossible to know both the position and momentum of the electron exactly at every point
along the trajectory. This is mathematically expressed as the famous position-momentum
uncertainty principle:
Δx Δp ≥ ℏ/2. (0.1)

Hence, specifying a state by (x, p) clearly will not work. So what specifies the state of a
quantum system?

The configuration or state of a quantum object is completely specified


by a wavefunction denoted as ψ(x).

And what does ψ(x) mean?

p(x) = |ψ(x)|² determines the probability (density) that an object in the


state ψ(x) will be found at position x.

Note that
ψ ∈ ℂ,
meaning the wavefunction is complex! Here, the real part of ψ is being drawn for simplicity,
as complex-plane paper is hard to find. Furthermore, ψ must be singly-valued and not
“stupid”; the latter point will be elaborated later.

Let us examine this set of examples in further detail. The first wavefunction ψ1 is sharply
peaked at a particular value of x, and the probability density, being its square, is likewise
peaked there as well. This is the wavefunction for a particle well localized at a position given
by the center of the peak, as the probability density is high there, and the width of the peak
is small, so the uncertainty in the position is very small.

Figure 1: Examples of wavefunctions (red, left) and corresponding probability densities
(blue, right)

The second wavefunction ψ2 has the same peak profile, but shifted to a different position
center. All of the properties of the first wavefunction hold here too, so this simply describes
a particle that is well-localized at that different position.

The third and fourth wavefunctions ψ3 and ψ4 respectively look like sinusoids of different
spatial periods. The wavefunctions are actually complex, of the form

ψ(x) = N e^{ikx},

so only the real part is being plotted here. Note that even though the periods are different,

|e^{ikx}|² = 1

for all k, so the corresponding probability densities are the same except for maybe a
normalization constant. We saw before that it does not make a whole lot of sense to think of
a sinusoidal wave as being localized in some place. Indeed, the positions for these two
wavefunctions are ill-defined, so they are not well-localized, and the uncertainty in the position
is large in each case.

The fifth wavefunction is multiply-valued, so it is considered to be “stupid”. It does not


have a well-defined probability density.

Note the normalization and dimensions of the wavefunction: the cumulative probability over
all possible positions is unity, so
∫ |ψ(x)|² dx = 1,

and the probability density has dimensions reciprocal to the integration variable that yields
a cumulative probability which in this case is position, so the wavefunction has units of
reciprocal square root of length. Finally, note that while the wavefunction is in general
complex, the probability (density) must always be real. This also means that ψ(x) is only
uniquely defined up to an arbitrary complex phase, because all imaginary exponentials e^{iθ}
satisfy |e^{iθ}|² = 1, so the probability density and therefore the physical interpretation of the
wavefunction are unaffected by multiplication by a complex phase.

You may now be thinking that the only useful wavefunctions are peaks that are well-localized
around a given position. But let us remember that the de Broglie relations say that a wave of
wavelength λ has a momentum p = hλ⁻¹. This means that ψ3 and ψ4, being sinusoidal waves,
have well-defined wavelengths and therefore well-defined momenta with small uncertainties
in their respective momenta, with ψ4 having a smaller wavelength and therefore a larger
momentum than ψ3 . On the other hand, ψ1 and ψ2 do not look like sinusoidal waves, so it
is difficult to define a wavelength and therefore a momentum for each, and the respective
momentum uncertainties are large. These qualitatively satisfy the uncertainty relation.

In general, given a wavefunction, once the uncertainty in the position is determined, a lower
bound for the uncertainty in the momentum can be found by the uncertainty relation. This
always works. If Δx is large, then Δp is small, and the opposite is true as well. At some
point, we will have to figure out how to calculate these uncertainties. But there are two
things to be done before that.

The first is a point of notation. A plane wave


ψ(x, t) = e^{i(kx−ωt)}
has frequency
ω = 2πν
and wavevector
k = 2πλ⁻¹.
This means that the de Broglie relations can be rewritten as
E = ℏω (0.2)
p = ℏk. (0.3)

In three dimensions, the energy relation is unchanged, while the momentum relation p = ℏk
simply takes on the form of a vector relation.

The second is much more important, and that is to quantify the notion of superposition that
we have been developing.
Given two possible states of a quantum system corresponding to two
wavefunctions ψa and ψb , the system could also be in a superposition
ψ = αψa + βψb with α and β as arbitrary complex coefficients satisfying
normalization.
This forms the soul of quantum mechanics!

Note that for a superposition state

ψ(x) = αψ_a(x) + βψ_b(x),

the probability density

p(x) = |αψ_a(x) + βψ_b(x)|² = |αψ_a(x)|² + |βψ_b(x)|² + α*β ψ_a*(x)ψ_b(x) + αβ* ψ_a(x)ψ_b*(x)

exhibits quantum interference aside from the usual addition of probability!

For example, let us consider ψ5 = ψ1 + ψ2 from our previous set of examples. Putting
normalization aside, this looks like two distinct well-localized peaks. Each peak individually
represented a particle that was localized at the position of the peak center. But now that
there are two peaks, the particle is at neither position individually. It is not at both positions
simultaneously, nor is it at no position at all. It is simply in a superposition of two states of
definite position. The probability density of this superposition state will show no interference
because when one of the component wavefunctions exhibits a peak, the other component
wavefunction is zero, so their product is zero at all positions.

Similarly, ψ6 = ψ3 + ψ4 is a superposition of two states of definite momentum. It cannot be


said that a particle in this state has one or the other momentum, nor can it be said that it
has both or neither momenta. In contrast to the previous superposition example, though,
the probability density will exhibit interference because the product of the two wavefunctions
is not always zero as they are both sinusoidal waves.

Note for the example of ψ6 that this superposition state has more spatial localization than
each of the component sinusoidal wavefunctions. This spatial localization could be made
even better with three states of different definite momenta. We could do this for arbitrarily
large countable n: as a state of definite momentum is

ψ(x; k) = e^{ikx}

except for normalization, a superposition of states of definite momentum

ψ(x) = Σ_j α_j e^{ik_j x}

could have a very well-localized position center. Or, other states with different properties
compared to just having a well-localized position could be built from superpositions of
momentum states. But why should we stop there? There is no reason to consider only
discrete k_j, when the entire range of k over the real line is available.

The Fourier theorem says that any function f(x) can be composed of
complex sinusoidal waves e^{ikx} as

f(x) = (1/√(2π)) ∫_{−∞}^{∞} f̃(k) e^{ikx} dk. (0.4)

This is the continuous analogue of the discrete sum Fourier series

f(x) = Σ_j α_j e^{ik_j x}. (0.5)

Furthermore, given f(x), we can compute the Fourier transform

f̃(k) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−ikx} dx. (0.6)

This is the continuous analogue of the Fourier expansion coefficients

α_j = (1/2π) ∫_{−π}^{π} f(x) e^{−ik_j x} dx. (0.7)

The physical interpretation of this is that any wavefunction ψ(x) can
be expressed as a superposition of states e^{ikx} with definite momenta
p = ℏk as

ψ(x) = (1/√(2π)) ∫_{−∞}^{∞} ψ̃(k) e^{ikx} dk. (0.8)

Furthermore, ψ̃(k) gives the exact same information as ψ(x) about the
quantum state, so once one is known, the other can be found automatically as well.

What do the Fourier transforms of wavefunctions look like? Let us look at the previous set
of examples. ψ1 looks like a Dirac delta function, and its Fourier transform is a complex
exponential . . . except that is exactly what ψ3 looks like as a function of x! Similarly, ψ2
is centered at a larger position than ψ1, so its Fourier transform has a larger frequency as a complex
exponential function of k. Furthermore, performing the Fourier transform on a function

twice simply recovers the original function. This implies that the Fourier transform of ψ3
looks like ψ1 as a function of k, and the same goes for ψ4 with regard to ψ2 . Finally, in a
similar vein, aside from normalization, ψ5 and ψ6 are Fourier transforms of each other.

This means that a wavefunction that is well-localized around a given position has a Fourier
transform that looks like a sinusoidal function of k, and the frequency of oscillation as a
function of k is given by that position. Similarly, a wavefunction that looks like a sinusoidal
function of x has a Fourier transform that is well-localized around a given wavevector, and
that wavevector is the frequency of oscillation as a function of x.

So what then is p(k)? This is the probability density that the particle described by the
wavefunction ψ(x) has a momentum p = ℏk. The expression turns out to be surprisingly
simple:
p(k) = |ψ̃(k)|²,
and it is not too difficult to show this to be the case.
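
It is also easy to check this numerically. The following minimal sketch (which assumes a Gaussian wavepacket purely for concreteness; the Gaussian is not part of the notes above) computes ψ̃(k) with a discrete Fourier transform and confirms that a narrow ψ(x) has a broad p(k):

    import numpy as np

    a = 0.5                                   # packet width (assumed)
    x = np.linspace(-20, 20, 4096)
    dx = x[1] - x[0]
    psi = (1 / (np.pi * a**2))**0.25 * np.exp(-x**2 / (2 * a**2))

    # psi_tilde(k) = (1/sqrt(2 pi)) * integral of psi(x) e^{-ikx} dx,
    # approximated by an FFT; the phase factor accounts for x starting at x[0].
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
    psi_tilde = np.fft.fft(psi) * dx * np.exp(-1j * k * x[0]) / np.sqrt(2 * np.pi)

    dk = 2 * np.pi / (x.size * dx)
    print(np.sum(np.abs(psi)**2) * dx)        # ~1: psi is normalized
    print(np.sum(np.abs(psi_tilde)**2) * dk)  # ~1: so is psi_tilde (Parseval)

Making the packet narrower in x (smaller a) visibly spreads |ψ̃(k)|², and vice versa, just as the uncertainty relation suggests.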
8.04: Quantum Mechanics Professor Allan Adams

Massachusetts Institute of Technology 2013 February 14

Lecture 4
Expectations, Momentum, and Uncertainty

Assigned Reading:

E&R all of Ch. 3; 5.1, 5.3, 5.4, 5.6
Li. 2.5-2.8; 3.1-3.3
Ga. all of Ch. 2 except 2.4
Sh. 3, 4

Our job now is to properly define the uncertainties Δx and Δp.

As an aside, let us review the properties of discrete probability distributions.

a   N(a)
14   1
15   1
16   3
20   2
21   4
22   5

Consider the number distribution N(a) of ages a in a population. The probability
of finding a person with a given age is P(a) = N(a)/N_total, satisfying Σ_a P(a) = 1.
What is the most likely age? In this case, that is 22.
What is the average age? In general, the weighted average is

⟨a⟩ = Σ_a a N(a)/N_total = Σ_a a P(a).

In this case, it is 19.4. Note that in general, as in this example, ⟨a⟩ does not have
to be a measurable value of a!
What is the average of the squared age? In general,

⟨a²⟩ = Σ_a a² P(a).

For a general function of the age,

⟨f(a)⟩ = Σ_a f(a) P(a).

Is the average of the squared age equal to the square of the average age? In
mathematical notation, is ⟨a²⟩ = ⟨a⟩²? No! If a represented a more general
quantity rather than age, it could sometimes be positive or negative, and those
terms might cancel out in the average. By contrast, a² would never be negative,
so its average would never be negative either.
How do we characterize the uncertainty? We could use Δa = a − ⟨a⟩, but the
problem is that ⟨Δa⟩ = 0 identically. Instead, we use the standard deviation
defined by
(Δa)² = ⟨(a − ⟨a⟩)²⟩,
which also satisfies

(Δa)² = ⟨a²⟩ − ⟨a⟩².

In this case, the standard deviation is about 2.8.
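
These figures are quick to verify numerically; a minimal sketch using the table above:

    import numpy as np

    a = np.array([14, 15, 16, 20, 21, 22])   # ages from the table
    N = np.array([ 1,  1,  3,  2,  4,  5])   # number of people of each age
    P = N / N.sum()                          # P(a) = N(a)/N_total

    mean  = np.sum(a * P)                    # <a> = 19.4375
    mean2 = np.sum(a**2 * P)                 # <a^2>
    print(mean, np.sqrt(mean2 - mean**2))    # 19.4375, about 2.8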

Similar expressions exist for continuous variables. Given that ψ has been discussed as a
function of position x thus far, it makes sense to proceed in that way. Mathematically,

⟨f(x)⟩ = ∫_{−∞}^{∞} f(x) p(x) dx (0.1)

but p(x) = ψ*(x)ψ(x). Hence, the way to find the expectation value of a function of position
in a given quantum state is

⟨f(x)⟩ = ∫_{−∞}^{∞} ψ*(x) f(x) ψ(x) dx. (0.2)

In all this, the normalization ∫_{−∞}^{∞} p(x) dx = 1 is assumed. From this, the uncertainty in
position

Δx ≡ √(⟨x²⟩ − ⟨x⟩²) (0.3)

can be found.

Notice that expectation values ⟨f(x)⟩ depend on the state! This can be written as ⟨f(x)⟩_ψ,
⟨f(x)⟩_{|ψ⟩}, or ⟨ψ|f(x)|ψ⟩.

For example, let us consider a wavefunction given by

ψ(x) = { N · (x² − l²)² for |x| ≤ l, 0 otherwise }. (0.4)

We need to figure out the normalization for this wavefunction by

∫_{−∞}^{∞} |ψ(x)|² dx = 1 (0.5)

which, when effected by nondimensionalization of the integral, yields N = √(315/(256 l⁹)).
After this, by noting that |ψ(x)|² is even while x is odd, we get ⟨x⟩ = 0. Also,
⟨x²⟩ = l²/11. Hence, Δx = l/√11.

Figure 1: Plot of ψ(x) in this case

After all of this, how do we find the momentum expectation value ⟨p⟩? Naïvely, we might
say that ⟨p⟩ = ∫_{−∞}^{∞} ψ*(x) p ψ(x) dx. But how exactly are we to express p in an integral over
functions of x? Clearly, this will not do!

Here’s a hint: we know that a wave with

k = 2πλ⁻¹

is associated with a particle with

p = hλ⁻¹ = ℏk.

Disregarding normalization, the associated wavefunction is

ψ = e^{ikx}.

But note that
∂e^{ikx}/∂x = ik e^{ikx}.
This means that
−iℏ ∂e^{ikx}/∂x = ℏk e^{ikx}.
Thus
−iℏ ∂e^{ikx}/∂x = p · e^{ikx},
and the units work out too! But what does momentum have to do with a derivative with
respect to position anyway?

Here’s another hint: Noether’s theorem states that to every symmetry is associated a
conserved quantity.

Symmetry                Conserved quantity
x → x + Δx              p
t → t + Δt              E
x → R · x (rotation)    L

So momentum is associated with spatial translations!

Now consider how translations behave for functions:

f(x) → f(x + l) = f(x) + l ∂f(x)/∂x + (l²/2) ∂²f(x)/∂x² + . . . (0.6)

= Σ_{n=0}^{∞} (1/n!) (l ∂/∂x)ⁿ f(x) (0.7)

= e^{l ∂/∂x} f(x). (0.8)

Hence translations are generated by spatial derivatives ∂/∂x. But we just said that translations
are associated with p! This means that it is natural to associate p with ∂/∂x somehow. In a
similar way, E would be associated with ∂/∂t, and L_z with ∂/∂ϕ.
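
As a quick check of (0.8) on the test function f(x) = x²: e^{l ∂/∂x} x² = x² + l · 2x + (l²/2) · 2 = (x + l)², which is indeed f(x + l).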

That’s enough for hints. We need to take a stand on this.


Momentum in quantum mechanics is realized by an operator


p̂ = −iℏ ∂/∂x. (0.9)

This operator p̂ is what we use to compute expectation values. More precisely,


⟨pⁿ⟩ = (−iℏ)ⁿ ∫_{−∞}^{∞} ψ*(x) ∂ⁿψ(x)/∂xⁿ dx (0.10)

and the uncertainty is then given by Δp = √(⟨p²⟩ − ⟨p⟩²).

Let us return to our previous example wavefunction given by

ψ(x) = { N · (x² − l²)² for |x| ≤ l, 0 otherwise }. (0.11)

Now we can find

⟨p⟩ = −iℏ ∫_{−∞}^{∞} ψ*(x) ∂ψ(x)/∂x dx (0.12)
    = −iℏ |N|² ∫_{−∞}^{∞} (x² − l²)² · (2 · 2x · (x² − l²)) dx (0.13)
    = 0 (0.14)

as the wavefunction is even while its spatial derivative is odd.

By a similar computation, ⟨p²⟩ = 3ℏ²/l², which dimensionally makes sense as well.
From this, we find that Δp = √3 ℏ/l, and the uncertainty relation is satisfied as

Δx Δp = √(3/11) ℏ.
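
All of these integrals are straightforward to verify symbolically. A minimal sympy sketch of the whole example (hbar stands for ℏ):

    import sympy as sp

    x, l, hbar = sp.symbols('x l hbar', positive=True)
    N = sp.sqrt(sp.Rational(315, 256) / l**9)
    psi = N * (x**2 - l**2)**2               # the wavefunction of (0.11)

    norm = sp.integrate(psi**2, (x, -l, l))  # 1, confirming N
    x2 = sp.integrate(x**2 * psi**2, (x, -l, l))                          # l**2/11
    p2 = sp.integrate(psi * (-hbar**2) * sp.diff(psi, x, 2), (x, -l, l))  # 3*hbar**2/l**2
    print(norm, sp.simplify(x2), sp.simplify(p2))
    print(sp.simplify(sp.sqrt(x2 * p2)))     # sqrt(3/11)*hbar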

But what does this new operator p̂ have to do with having momentum p = ℏk? Let us
consider two states given by
ψ_k(x) = e^{ikx}
and
ψ_s(x) = e^{ikx} + e^{ik′x}.
The first has definite momentum p = ℏk, while the second, being a superposition of states
with definite momenta p = ℏk and p′ = ℏk′, is not itself a state of definite momentum. We
can show this by acting on each state with the operator p̂:

p̂ψ_k(x) = ℏk e^{ikx}

is simply proportional to ψ_k(x), while

p̂ψ_s(x) = ℏ · (k e^{ikx} + k′ e^{ik′x})

is not simply proportional to ψ_s(x). We see that p̂ is an operator which acts simply on
wavefunctions corresponding to states with definite momenta, but not on arbitrary
superpositions of momentum states. This means that p̂ is the operator whose eigenstates
are states of definite momentum, and the corresponding eigenvalue is exactly
the momentum of that state.
8.04: Quantum Mechanics Professor Allan Adams

Massachusetts Institute of Technology 2013 February 21

Lecture 5
Operators and the Schrödinger Equation

Assigned Reading:

E&R all of Ch. 3; 5.1, 5.3, 5.4, 5.6
Li. 2.5-2.8; 3.1-3.3
Ga. all of Ch. 2 except 2.4
Sh. 3, 4

We have just seen that in quantum mechanics, momentum becomes associated with an
operator proportional to the spatial derivative. But what exactly is an operator, and what
is the relation of any other observable quantity to an operator? Let us take this moment to
flesh out some mathematical definitions.

An operator is a rule for building one function from another.

Examples include the identity 1̂ such that 1̂f(x) = f(x), the spatial derivative D̂ = ∂/∂x such
that D̂f(x) = ∂f(x)/∂x, the position x̂ = x such that x̂f(x) = xf(x), the square 2̂ such that
2̂f(x) = f²(x), the projection P̂_y such that P̂_y f(x) = y, and the addition Â_y such that
Â_y f(x) = f(x) + y, among others. Notationally, operators will be distinguished by hats on
top of symbols.

A linear operator is an operator that respects superposition:

Ô(af(x) + bg(x)) = aÔf(x) + bÔg(x). (0.1)

From our previous examples, it can be shown that the first, second, and third operators are
linear, while the fourth, fifth, and sixth operators are not linear.
All operators come with a small set of special functions of their own. For an operator Â, if

Âf(x; A) = A · f(x; A)

for a given A ∈ ℂ, then f(x; A) is an eigenfunction of the operator Â and A is the
corresponding eigenvalue. Operators act on eigenfunctions in a way identical to multiplying the
eigenfunction by a constant number. For instance, the aforementioned operator D̂ has as its
eigenfunctions all exponentials e^{αx}, with any α ∈ ℂ allowed.

In general, though, most functions are not eigenfunctions of a given operator. That is why
eigenfunctions and eigenvalues of a given operator are particularly special!

Coming back to physics, to every observable quantity is associated a corresponding
operator. For instance, the momentum operator is

p̂ = −iℏ ∂/∂x,

the position operator is
x̂ = x,
the energy operator is

Ê = p̂²/(2m) + V(x̂) = −(ℏ²/2m) ∂²/∂x² + V(x),

and so on. Given a state described by a wavefunction ψ(x), we can calculate the expectation
value of any observable quantity in that state by using the corresponding operator:

⟨A⟩ = ∫_{−∞}^{∞} ψ*(x) Â ψ(x) dx,

and
ΔA ≡ √(⟨A²⟩ − ⟨A⟩²).
Note that the operator order matters! For instance,

p̂x̂f(x) = p̂(xf(x)) = −iℏ ∂(xf(x))/∂x = −iℏ · (f(x) + x ∂f(x)/∂x),

while

x̂p̂f(x) = x · (−iℏ ∂f(x)/∂x) = −iℏ x ∂f(x)/∂x.

To measure the importance of order, we define the commutator of two operators Â and B̂ as

[Â, B̂] ≡ ÂB̂ − B̂Â. (0.2)

For example, we have just seen that

x̂p̂f(x) − p̂x̂f(x) = iℏ f(x)

for all f(x). Hence,

[x̂, p̂] = iℏ 1̂. (0.3)
This is a deep and profound result in quantum mechanics!
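
It is easy to verify symbolically by applying x̂p̂ − p̂x̂ to a generic function (a minimal sympy sketch; hbar stands for ℏ):

    import sympy as sp

    x, hbar = sp.symbols('x hbar')
    f = sp.Function('f')(x)

    xp_f = x * (-sp.I * hbar * sp.diff(f, x))   # x-hat p-hat f
    px_f = -sp.I * hbar * sp.diff(x * f, x)     # p-hat x-hat f
    print(sp.simplify(xp_f - px_f))             # I*hbar*f(x)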

Upon measuring an observable A which has an associated operator Â, the measured value
is one of the eigenvalues of Â. Then, immediately after the measurement occurs, the
wavefunction corresponding to the system state changes to be the eigenfunction φ(x; A) of Â
such that
Âφ(x; A) = A · φ(x; A).

This immediate change in the wavefunction to be an eigenfunction of an operator
corresponding to the measured quantity is called wavefunction collapse. It is a strange beast, but
we will come back to it later.

Note that all eigenvalues of operators corresponding to an observable quantity must be real.
Operators with only real eigenvalues have many special properties that we will explore later.

For example, the eigenvalues of p̂ are real momenta


p = ℏk,

and for a given k, these are the momenta that can be measured. This means that the
eigenfunctions must be
φ(x; k) = N e^{ikx}
for some normalization N . That said, the eigenfunctions φ(x; k) are not strictly normalizable,
because they do not approach 0 as x → ±∞. Non-normalizable wavefunctions like this
one are not valid as physical single-particle states, but are useful for building normalizable
wavepackets that do represent physical single-particle states. This also means that we can
never measure quantities like momentum or position with full precision, either, because like
momentum, position is a continuous variable so its eigenfunctions are not normalizable.

As another example, let us suppose a particle is found at position x₀. What is its
wavefunction immediately after this measurement occurs? It must be an eigenfunction of the position
operator x̂, and it must vanish for x ≠ x₀. This implies that

φ(x; x₀) = N δ(x − x₀)

is the desired wavefunction with some normalization coefficient N, because by the properties
of the Dirac delta function,

x̂δ(x − x₀) = x₀ · δ(x − x₀).
As with momentum eigenfunctions, position eigenfunctions are not normalizable, but they
can be used in superpositions to form normalizable wavepackets corresponding to physical
single-particle states.

As yet another example, let us say that a quantum object is in a state given by the wave-
function ψ(x). Measuring the position to be x0 changes the wavefunction to δ(x − x0 ),
which is not a superposition. Measuring the momentum to be p₀ would instead change the
wavefunction to e^{ip₀x/ℏ}, which is also not a superposition. Hence, measurement of a particular
eigenvalue collapses the wavefunction into the corresponding eigenfunction.

Let us say that we know the state given by a wavefunction ψ(x) and we want to measure
the position. The probability density, given that position is continuous, of measuring x0 is
p(x₀) = |ψ(x₀)|². What does this look like for a more general observable A?

Given an operator Â with observable eigenvalues A and corresponding eigenfunctions φ(x; A)
fulfilling
Âφ(x; A) = A · φ(x; A),
we can expand any wavefunction ψ(x) as a superposition of eigenfunctions of Â. If A is
discrete, then

ψ(x) = Σ_A c_A φ(x; A) (0.4)

while if A is continuous, then

ψ(x) = ∫ c(A) φ(x; A) dA. (0.5)

The probability of measuring an eigenvalue A₀ in the state ψ is, if A is discrete,

P(A₀) = |c_{A₀}|² (0.6)

or if A is continuous, then the probability density is

p(A₀) = |c(A₀)|². (0.7)

For example, for position,

ψ(x) = ∫ c(x₀) φ(x; x₀) dx₀.

In this case,
c(x₀) = ψ(x₀)
as
φ(x; x₀) = N δ(x − x₀).
The probability density of measuring a position is then

p(x₀) = |ψ(x₀)|².

For momentum,

ψ(x) = ∫ c(k) φ(x; k) dk.

In this case,
c(k) = ψ̃(k)
as
φ(x; k) = N e^{ikx}.
The probability density of measuring a momentum is then

p(k) = |ψ̃(k)|².

For energy, which typically takes just discrete values,


ψ(x) = Σ_n c_n φ(x; E_n).

Generally, the eigenfunctions φ(x; En ) depend on the particular system, so a general form
cannot be given for them, and thus neither can a general form be given for cn . Still, though,
the probability (not a density for discrete energies) of measuring an energy is

P(E_n) = |c_n|².

When the wavefunction representing a state is expanded as a superposition of
eigenfunctions of an operator Â corresponding to an observable A, it is useful to find the expansion
coefficients. Whether or not A is continuous, the following is true (though notationally, c_A
would be used in the discrete case, while c(A) would be used in the continuous case, so the
notation should be chosen based on context):

c(A) = ∫ φ*(x; A) ψ(x) dx ≡ ⟨φ(A)|ψ⟩. (0.8)

The reason why doing this is useful is because an observable A with a corresponding operator
Â has real eigenvalues and orthonormal eigenvectors. What this means is that

⟨φ_a|φ_b⟩ = δ_{ab}.

This is exactly like expanding a physical 3-dimensional vector in an orthonormal basis
v = Σ_j v_j e_j with e_j · e_l = δ_{jl}, where the expansion coefficients would be v_j = e_j · v. That is also
why the term c(A) = ⟨φ(A)|ψ⟩ is known as an inner product.

Thus far, we have seen that the configuration of the system is given by a wavefunction
corresponding to a quantum state. Expectation values of observables are found through the
actions of corresponding operators on quantum states. Measuring a particular value for a
quantity is probabilistic, but once the measurement is made, the wavefunction collapses into
the eigenfunction corresponding to that measured eigenvalue. Given the wavefunction, these
are all the predictions that can be made about the system at a given moment in time. So
what happens at a later time? We have seen that translations are tied to time derivatives,
so the real question is, what is ∂ψ(x, t)/∂t?

We know from de Broglie that plane waves

ψ(x, t; k) = e^{i(kx−ωt)}

have energy
E = ℏω.

In fact, this is true of any wavefunction like


ψ(x, t) = e^{−iωt} φ(x)

regardless of φ(x). We can also see that

iℏ ∂ψ(x, t)/∂t = ℏω ψ(x, t)

in such a case. This seems to suggest that the energy operator Ê is tied to the operator
iℏ ∂/∂t. Along with this, we need that translation in time respect superposition and that the
total probability

∫_{−∞}^{∞} ψ*(x, t)ψ(x, t) dx = 1

be conserved. This means that the time derivative, which is linear, acting on a wavefunction
should be equal to a linear operator acting on the wavefunction.

Indeed, the Schrödinger equation is


Êψ(x, t) = iℏ ∂ψ(x, t)/∂t. (0.9)

This equation describing the time evolution of a quantum state is analogous to the equation
of motion F = ṗ. Take care to note that Ê is not defined as the operator iℏ ∂/∂t any more
than F is defined to be ṗ. Like F, the definition of Ê depends on the details of the system.
In this class, the general form for Ê comes from turning the classical equation

E = p²/(2m) + V(x)

into an operator definition

Ê = p̂²/(2m) + V(x̂).

Finally, this is not a derivation per se of the Schrödinger equation, but has been motivated
by our finding of the momentum operator as a generator of spatial translations.

There are a few key features of the Schrödinger equation:


1. It is linear, so it respects superposition. If Êψ(x, t; n) = iℏ ∂ψ(x, t; n)/∂t, then

   Ê (Σ_n c_n ψ(x, t; n)) = iℏ ∂/∂t (Σ_n c_n ψ(x, t; n)).

2. It is unitary, so it conserves probability. This will be explained later.


3. It is deterministic! It is first-order in t, so if ψ(x, 0) is known, then the Schrödinger
equation determines ψ(x, t) for all t. However, note that wavefunction collapse upon
measurement is not deterministic!
8.04: Quantum Mechanics Professor Allan Adams

Massachusetts Institute of Technology 2013 February 26

Lecture 6
Time Evolution and the Schrödinger Equation

Assigned Reading:

E&R all of Ch. 3; 5.1, 5.3, 5.4, 5.6
Li. 2.5-2.8; 3.1-3.3
Ga. all of Ch. 2 except 2.4
Sh. 3, 4

The Schrödinger equation is a partial differential equation. For instance, if

Ê = p̂²/(2m) + mω²x̂²/2,

then the Schrödinger equation becomes

iℏ ∂ψ/∂t = −(ℏ²/2m) ∂²ψ/∂x² + (mω²x²/2) ψ.

Of course, Ê depends on the system, and the Schrödinger equation changes accordingly.

To fully solve this for a given Ê, there are a few different methods available, and those are
through brute force, extreme cleverness, and numerical calculation. An elegant way that
helps in all cases though makes use of superposition.

Suppose that at t = 0, our system is in a state of definite energy. This means that

ψ(x, 0; E) = φ(x; E),

where
Êφ(x; E) = E · φ(x; E).

Evolving it in time means that

iℏ ∂ψ_E/∂t = E · ψ_E.

Given the initial condition, this is easily solved to be

ψ(x, t; E) = e^{−iωt} φ(x; E)

through the de Broglie relation

E = ℏω.

Note that

p(x, t) = |ψ(x, t; E)|² = |φ(x; E)|²,

so the time evolution disappears from the probability density! That is why wavefunctions
corresponding to states of definite energy are also called stationary states.

So are all systems in stationary states? Well, probabilities generally evolve in time, so that
cannot be. Then are any systems in stationary states? Well, nothing is eternal, and like the
plane wave, the stationary state is only an approximation. So why do we even bother with
this?

The answer lies in superposition! If

ψ_n(x, t) = e^{−iω_n t} φ_n(x)

solves the Schrödinger equation, then so does

ψ(x, t) = Σ_n c_n ψ_n(x, t) = Σ_n c_n e^{−iω_n t} φ_n(x) (0.1)

thanks to the linearity of that equation. This means that any ψ(x) at any time satisfying
appropriate boundary conditions can be expressed as a superposition of energy
eigenfunctions. But what do these energy eigenstates look like anyway? The answer depends on the
particular form of Ê, which depends on the system in question.

For example, let us consider a free particle. This means that

V (x) = 0,

so

Ê = p̂²/(2m) = −(ℏ²/2m) ∂²/∂x².

We need to solve for the eigenfunctions given by

Êφ_E = E · φ_E,

so

−(ℏ²/2m) ∂²φ_E/∂x² = E · φ_E.

Making the substitution

k² ≡ 2mE/ℏ²

yields the general solution

φ(x; E) = α e^{ikx} + β e^{−ikx}

where α and β are complex constant coefficients satisfying normalization.

Another way to get this would be through the knowledge that the eigenfunctions of p̂ are
e^{ikx} and e^{−ikx}. Note that

E = ℏ²k²/(2m)

is the energy of both the states e^{ikx} and e^{−ikx}. This is an example of a degeneracy: sometimes,
different states happen to share the same eigenvalue for a particular observable operator.

Take note of the normalization: multiplying an eigenfunction by a constant leaves it still as
an eigenfunction. We want to fix the normalization such that

⟨φ_E|φ_E′⟩ = δ(E − E′).

In the previous case, it makes sense to choose the normalization

⟨φ_k|φ_k′⟩ = δ(k − k′)

as k is continuous, so

φ(x; k) = (1/√(2π)) e^{ikx}.
Continuing with that example,

E = ℏ²k²/(2m)

implies that

ω = ℏk²/(2m),

so the solution for all time is a traveling wave

ψ(x, t; E) = e^{i(kx−ωt)}

disregarding normalization. Note that the phase velocity

v_p = ω/k = ℏk/(2m)

is half of the classical velocity of a free particle, while the group velocity

v_g = ∂ω/∂k = ℏk/m
is exactly the classical velocity. In general, the group velocity is more representative of the
classical velocity than is the phase velocity, as the group velocity is observable while the
phase velocity is not.

Let us move on to an example of a nontrivial potential. The infinite square well, also known
as the particle in a box, is an idealization of a large, deep potential. It is given by

V(x) = { 0 for 0 ≤ x ≤ ℓ, ∞ otherwise }.

The implication of this is that

P(x > ℓ) = P(x < 0) = 0,

implying that ψ(x) = 0 outside of the box. Inside, we can use the energy eigenvalue equation

−(ℏ²/2m) ∂²φ_E/∂x² = E · φ_E

and define
ℏ²k² ≡ 2mE
so that the solutions are
φ(x; E) = A sin(kx) + B cos(kx).
The boundary conditions
ψ(0) = ψ(ℓ) = 0
imply B = 0 and
kℓ = π(n + 1),
so
k_n = π(n + 1)/ℓ   for n ∈ {0, 1, 2, . . .}.
This means that the allowed energies are

E_n = π²ℏ²(n + 1)²/(2mℓ²).

Note that the energies are discrete, and that E₀ > 0, which is also thanks to the uncertainty
principle. This is very different from a classical particle in a box! Also, the eigenfunctions
are
φ_n(x) = A_n sin(k_n x),
and the time-evolved eigenfunctions are

ψ_n(x, t) = A_n e^{−iE_n t/ℏ} sin(k_n x).
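
To get a feel for the scale: for an electron in a box of width ℓ = 1 nm (numbers chosen just for illustration), the lowest energy is E₀ = π²ℏ²/(2mℓ²) ≈ 0.4 eV, and the levels above it grow quadratically in (n + 1).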

Note again that for energy eigenstates, the probability density

p_n(x, t) = |ψ_n(x, t)|² = |A_n|² sin²(k_n x)

is independent of time. Also note that the normalization requirement

∫_{−∞}^{∞} |ψ_n(x)|² dx = 1

means that the coefficients are

A_n = √(2/ℓ) e^{iϕ}.

Figure 1: First, second, and third lowest-energy eigenfunctions (red) and associated
probability densities (blue) for the infinite square well potential

Once again, the overall phase is not physical, so for convenience, ϕ = 0, so that

A_n = √(2/ℓ).

Any good wavefunction ψ(x) at a given time t can be expanded in terms
of the energy eigenfunctions φ_n as

ψ(x) = Σ_n c_n φ(x; n) (0.2)

for some c_n, where we normalize

⟨φ_i|φ_j⟩ = ∫_{−∞}^{∞} φ*(x; i)φ(x; j) dx = δ_{ij} (0.3)

so that the normalization

⟨ψ|ψ⟩ = Σ_{i,j} c_i* c_j δ_{ij} = Σ_j |c_j|² = 1 (0.4)

holds for the wavefunction.

Going back to the example of the infinite square well, the eigenfunctions are

φ(x; n) = √(2/ℓ) sin(k_n x)

with

k_n = π(n + 1)/ℓ,

so the expansion

ψ(x) = Σ_n c_n √(2/ℓ) sin(k_n x)

is just a usual Fourier series! Similarly, for a free particle, the eigenfunctions are

φ(x; k) = (1/√(2π)) e^{ikx}

with the normalization

⟨φ_k|φ_k′⟩ = δ(k − k′),

so the expansion

ψ(x) = (1/√(2π)) ∫_{−∞}^{∞} ψ̃(k) e^{ikx} dk

is just a normal inverse Fourier transform!

Note that

cn = (φn|ψ)

is consistent with any general expansion

ψ(x) = Σn cn φ(x; n),

and the continuous analogue is likewise true. We can check this in the discrete case as

cn = (φn|ψ) = ∫_{−∞}^{∞} φ*(x; n)ψ(x) dx
   = Σs cs ∫_{−∞}^{∞} φ*(x; n)φ(x; s) dx
   = Σs cs δns = cn

as expected by orthonormality. We can also check an example of the continuous case through
the free particle:

ψ̃(k) = (φk|ψ) = ∫_{−∞}^{∞} φ*(x; k)ψ(x) dx
     = (1/√(2π)) ∫_{−∞}^{∞} e^{−ikx} ψ(x) dx
     = (1/√(2π)) ∫_{−∞}^{∞} e^{−ikx} [(1/√(2π)) ∫_{−∞}^{∞} ψ̃(k′) e^{ik′x} dk′] dx
     = ∫_{−∞}^{∞} ψ̃(k′) δ(k − k′) dk′ = ψ̃(k)

again by orthonormality.
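
As a numerical illustration (same assumed ℏ = m = ℓ = 1 units as above; the parabolic
trial state ψ(x) ∝ x(ℓ − x) is a made-up example, not one from the lecture), we can
compute the coefficients cn = (φn|ψ) by quadrature and confirm that they reconstruct
ψ and satisfy Σ|cn|² = 1:

    import numpy as np

    ell = 1.0
    x = np.linspace(0.0, ell, 4001)

    def phi(n, x):
        # infinite-square-well eigenfunctions from above
        return np.sqrt(2.0 / ell) * np.sin(np.pi * (n + 1) * x / ell)

    # A made-up trial state that vanishes at the walls, normalized numerically.
    psi = x * (ell - x)
    psi /= np.sqrt(np.trapz(psi**2, x))

    # c_n = (phi_n|psi); everything here is real, so no conjugation is needed.
    c = np.array([np.trapz(phi(n, x) * psi, x) for n in range(20)])

    print("sum |c_n|^2 =", np.sum(np.abs(c)**2))             # ~1, as in (0.4)
    psi_rebuilt = sum(c[n] * phi(n, x) for n in range(20))
    print("max error =", np.max(np.abs(psi - psi_rebuilt)))  # small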

All operators corresponding to measurable observables have real eigenvalues which are the
values of those observables that can be measured, and the eigenfunctions are orthonormal.
Any wavefunction representing a quantum state can then be expanded in terms of those
eigenfunctions:

ψ(x) = ΣA cA φ(x; A)

in the discrete case, or

ψ(x) = ∫ c(A) φ(x; A) dA

in the continuous case, for an observable A with a corresponding operator Â. The expansion
coefficients do have meaning. If the measurable values of a certain observable are discrete,
then

P(A) = |cA|²

is the probability of measuring the value A for that observable, while in the continuous case,

p(A) = |c(A)|²

is likewise the probability density of measuring that value.

In the case of energy,

ψ(x) = Σn cn φ(x; n),

where the eigenstates are given by

Êφ(x; n) = En · φ(x; n).

The probability of measuring the energy to be En is therefore

P(En) = |cn|².

In the case of momentum, the expansion coefficient is the Fourier transform of the wave-
function, so the probability density of measuring a momentum p = ℏk in that state is

p(k) = |ψ̃(k)|².

In the case of position, the expansion coefficient is exactly the same wavefunction at a
differently-labeled position because the position eigenfunctions are Dirac delta functions, so
the probability density of measuring a position x0 in that state is

p(x0) = |ψ(x0)|².



But there is an even better reason to expand wavefunctions in terms of energy eigenfunctions.
If

ψ(x, 0) = Σn cn φ(x; n),

then

ψ(x, t) = Σn cn e^{−iωn t} φ(x; n),    (0.5)

with ωn ≡ En/ℏ, describes how the state evolves in time. The reason this works is because the energy operator
Ê and the Schrödinger equation respect superposition!

Note that if φ(x; n) is postulated to not evolve in time, then the expansion coefficients must
evolve in time:

cn(t) = cn e^{−iωn t}.

However,

P(En, t) = |cn(t)|² = |cn|²

is independent of time, as the time evolution is simply a complex phase. Similarly,

⟨E⟩ = Σn En P(En, t) = Σn En |cn|²

is independent of time. However, it can be shown that quantities such as ⟨x⟩ in general do
depend on time. This is because physical state wavefunctions are not pure energy
eigenfunctions but are superpositions thereof!
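
Here is a minimal sketch (again in the assumed ℏ = m = ℓ = 1 units) showing that a
superposition of the two lowest infinite-well eigenstates has a time-dependent ⟨x⟩, even
though each |cn|² stays fixed:

    import numpy as np

    ell = 1.0
    x = np.linspace(0.0, ell, 2001)

    def phi(n, x):
        return np.sqrt(2.0 / ell) * np.sin(np.pi * (n + 1) * x / ell)

    def E(n):
        return np.pi**2 * (n + 1)**2 / 2.0          # hbar = m = ell = 1

    def psi(x, t):
        # equal superposition of n = 0 and n = 1, each with phase e^{-i E_n t}
        return (np.exp(-1j * E(0) * t) * phi(0, x) +
                np.exp(-1j * E(1) * t) * phi(1, x)) / np.sqrt(2.0)

    for t in (0.0, 0.2, 0.4):
        p = np.abs(psi(x, t))**2
        print(f"t = {t:.1f}: <x> = {np.trapz(x * p, x):.4f}")  # <x> oscillates in time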
8.04: Quantum Mechanics Professor Allan Adams

Massachusetts Institute of Technology 2013 February 28

Lecture 7
More on Energy Eigenstates

Assigned Reading:

E&R 3all , 51,3,4,6


Li. 25−8 , 31−3
Ga. 2all=4
Sh. 3, 4

Suppose someone hands you a potential and asks, "What do the energy eigenfunctions
look like?" As we will see throughout 8.04, there is a lot of physics in the form of these
eigenfunctions!

We know that
Êφ(x; E) = E · φ(x; E) (0.1)
where

Ê = p̂²/2m + V(x̂).    (0.2)

Knowing that

p̂φ(x) = −iℏ ∂φ/∂x

and

x̂φ(x) = xφ(x),

the energy eigenvalue equation becomes

−(ℏ²/2m) ∂²φ(x; E)/∂x² + V(x)φ(x; E) = Eφ(x; E).    (0.3)
Using the simplification

k²(x) ≡ (2m/ℏ²)(E − V(x))    (0.4)

yields

∂²φ(x; E)/∂x² + k²(x)φ(x; E) = 0    (0.5)

for the energy eigenvalue equation. Denoting f′′(x) ≡ ∂²f(x)/∂x² yields

φ′′(x; E)/φ(x; E) = −k²(x)    (0.6)

and this form will help us shortly.



If E > V(x), then k²(x) > 0, and

φ′′(x; E)/φ(x; E) < 0,

which implies oscillatory behavior of the eigenfunction in that region. If E < V(x), then
k²(x) < 0, and

φ′′(x; E)/φ(x; E) > 0,

which implies exponential behavior of the eigenfunction in that region.

To reiterate, in classically allowed regions, the energy eigenfunctions are oscillatory, while in
classically forbidden regions, the energy eigenfunctions are exponential.

Figure 1: For the energy denoted by the black line for the potential denoted by the green
curve, regions (I) and (III) are classically forbidden, so the energy eigenfunction exhibits
exponential behavior, while region (II) is classically allowed, so the energy eigenfunction
exhibits oscillatory behavior

Can we get more information about what these energy eigenfunctions look like?
1. They need to be normalizable. This means that

   lim_{x→±∞} φ(x; E) = 0.    (0.7)

2. There needs to be continuity in both the energy eigenfunction and its first spatial
derivative. The second spatial derivative of the energy eigenfunction depends through
the energy eigenvalue equation on the potential, which may or may not be continuous
depending on the situation.

3. In classically allowed regions, the energy eigenfunction oscillates in space.

4. In classically forbidden regions, the energy eigenfunction exhibits spatially exponential
   behavior. It is not identically zero! This also implies that the probability density in a
   classically forbidden region is not identically zero! This is a feature that sets quantum
   mechanics apart from classical mechanics!

5. The energy eigenfunctions can always be taken as real functions even though general
wavefunctions evolving in time are complex, and this can easily be shown.

Let us return to the infinite square well. What are the allowed energy eigen-
values? We can use the "shooting" method to answer this. We start at x = 0,
pick a trial E with p = √(2mE), and integrate the energy eigenvalue equation
φ′′(x; E) + k²(x)φ(x; E) = 0. Note that as the energy eigenvalue equation involves
the second spatial derivative of the energy eigenfunction, the energy eigenvalue
controls its curvature!

Figure 2: For E < E0 (top-left), the wavefunction overshoots to the right, and for E0 < E <
E1 (bottom-left), the wavefunction undershoots to the right, while for E = E0 (top-right)
and E = E1 (bottom-right), the wavefunction reaches the boundary properly
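
The following sketch implements the shooting idea numerically for the infinite square well
(assuming ℏ = m = 1 and ℓ = 1; the bracketing intervals are hand-picked for the first three
levels): integrate φ′′ = −2Eφ from the left wall and adjust E until φ(ℓ) = 0.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    ell = 1.0

    def phi_at_wall(E):
        # integrate phi'' = -2 E phi (hbar = m = 1) from phi(0) = 0, phi'(0) = 1
        rhs = lambda x, y: [y[1], -2.0 * E * y[0]]
        sol = solve_ivp(rhs, (0.0, ell), [0.0, 1.0], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]          # overshoot/undershoot shows up in the sign

    # Bisect each hand-picked bracket where phi(ell) changes sign.
    for n, bracket in enumerate([(1.0, 8.0), (15.0, 25.0), (40.0, 50.0)]):
        E_n = brentq(phi_at_wall, *bracket)
        print(f"E_{n} = {E_n:.6f}  (exact: {np.pi**2 * (n + 1)**2 / 2:.6f})")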

Our revisiting the infinite square well leads us to two facts. One is that bound states have
discrete energies, while unbound states have continuous energies. From this comes the node
theorem, which holds for quantum mechanics in one spatial dimension: for bound states with
discrete energies, the nth energy eigenfunction (starting from n = 0) has n nodes. For
example, the ground state has no nodes, the first excited state has one node, et cetera.
(picture of weird well potential and eigenfunctions?)
8.04: Quantum Mechanics Professor Allan Adams

Massachusetts Institute of Technology 2013 March 5

Lecture 8
Quantum Harmonic Oscillator: Brute Force Methods

Assigned Reading:

E&R 5all , 61,2,8


Li. 3all , 41 , 51 , 6all
Ga. 24 , 3all
Sh. 4all , 51,2
We will now continue our journey of exploring various systems in quantum mechanics for
which we have now laid down the rules.

Roughly speaking, there are two sorts of states in quantum mechanics:


1. Bound states: the particle is somewhat localized and cannot escape the potential.
2. Unbound states: the particle can escape the potential.
Note that for the same potential, whether something is a bound state or an unbound state
depends on the energy considered.

Figure 1: For the finite well, the energy represented by the lower black line is for a bound
state, while the energy represented by the upper black line is for an unbound state

But note that in quantum mechanics, because of the possibility of tunneling as seen before,
the definition of whether a state is bound or not differs between classical and quantum
mechanics.

The point is that we need to compare E with limx→±∞ V (x) to determine if a state is bound
or not.

Why do we split our cases that way? Why do we study bound and unbound states separately
if they obey the same equations? After all, in classical mechanics, both obey F = ṗ. In
quantum mechanics, both obey

Êψ(x, t) = iℏ ∂ψ(x, t)/∂t.

Figure 2: The same energy denoted by the black line is a bound classical and quantum state
for the potential on the left, while the classical bound state is a quantum unbound state for
the potential on the right

The distinction is worth making because the bound and unbound states exhibit qualitatively
different behaviors:
Mechanics | Bound States             | Unbound States
Classical | periodic motion          | aperiodic motion
Quantum   | discrete energy spectrum | continuous energy spectrum
For now, we will focus on bound states, with discussions of unbound states coming later.
Let us remind ourselves of some of the properties of bound states.
1. Infinite square well:

• Ground state: no nodes


• nth excited state: n nodes (by the node theorem)
• Energy eigenfunctions chosen to be real
• Time evolution of energy eigenfunctions through the complex phase e^{−iEn t/ℏ}
  – Different states evolve at different rates
  – Energy eigenstates have no time evolution in observables, as p(x) for such
    states is independent of t
  – Time evolution of expectation values for observables comes only through
    interference terms between energy eigenfunctions

2. Symmetric finite square well

• Node theorem still holds


• V (x) is symmetric
– Leads to symmetry or antisymmetry of φ(x; E)
– Antisymmetric φ(x; E) are fine as |φ(x; E)|² is symmetric
• Exponential tails in classically forbidden regions lead to a discrete energy spectrum
(picture of shooting for finite square well?)

3. Asymmetric finite square well



• Node theorem still holds


• Asymmetry of potential breaks (anti)symmetry of eigenfunctions
• Shallow versus deep parts of well
  – Deeper part of well → shorter λ, as p = h/λ, so the particle travels faster there
  – Deeper part of well → smaller amplitude, as the particle spends less time there, so
    the probability density is lower there

4. Harmonic oscillator

• Node theorem still holds


• Many symmetries present
• Evenly-spaced discrete energy spectrum is very special!

So why do we study the harmonic oscillator? We do because we know how to solve it exactly,

and it is a very good approximation for many, many systems.

(picture of interatomic potential?)

Such problems are in general difficult. But we can expand the potential in a Taylor series
about the equilibrium, and if we stay close to the equilibrium point, we can drop terms other
than second-order (as the zeroth-order term is an uninteresting constant offset, the first-order
term is zero at the equilibrium, and higher-than-second-order terms are negligible):


V(x) = Σ_{j=0}^{∞} (1/j!) (∂^j V/∂x^j)|_{x0} (x − x0)^j ≈ (1/2) (∂²V/∂x²)|_{x0} (x − x0)²    (0.1)

where (∂²V/∂x²)|_{x0} acts like the spring constant in a classical mechanical oscillator. Knowing
how the spring constant relates to the oscillation frequency and redefining the origin for
convenience, we will be studying the system

V(x) = (1/2) mω²x².    (0.2)
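
As an illustration of this expansion, here is a symbolic sketch using sympy with a made-up
Morse-like potential V(x) = D(1 − e^{−ax})² (an assumed example for demonstration, not a
potential from the lecture); the second derivative at the minimum plays the role of the
spring constant:

    import sympy as sp

    x, D, a, m = sp.symbols("x D a m", positive=True)
    V = D * (1 - sp.exp(-a * x))**2          # assumed example potential, minimum at x = 0

    k_spring = sp.diff(V, x, 2).subs(x, 0)   # V''(x0) is the effective spring constant
    omega = sp.sqrt(k_spring / m)            # frequency of the approximating oscillator

    print(sp.simplify(k_spring))             # 2*D*a**2
    print(sp.simplify(omega))                # a*sqrt(2*D/m), up to ordering of factors
    print(sp.series(V, x, 0, 3))             # leading term D*a**2*x**2 = (1/2) k x**2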
As a side note, can you think of systems for which this approximation is not a good one?
One example might be V(x) = αx⁴ for some proportionality constant α.

The energy eigenstates of the harmonic oscillator form a family labeled by n coming from
Êφ(x; n) = En φ(x; n). This means that

−(ℏ²/2m) ∂²φ(x; n)/∂x² + (1/2) mω²x² φ(x; n) = En φ(x; n)    (0.3)

is the energy eigenvalue equation for the harmonic oscillator. This is not an easy differential
equation to solve! For now, we will solve this through brute force methods; later, this will
be solved with more sophistication.

Before we dive into the brute force method, though, let us take a look at what we already
know:
1. From dimensional analysis, we know that E ∝ ℏω.

2. From the uncertainty principle, we know that the ground state energy E0 ∝ ℏω.

3. The energy operator is

   Ê = p̂²/2m + (1/2) mω²x̂².
There is a nice symmetry between x̂ and p̂. Up to constants, our system looks the same
whether we view it in position space or in momentum space. That is to say, there will
be symmetries between φ(x; E) and φ̃(k; E). Do we know of a function that looks the
same in both position space and momentum space? In other words, do we know of a
function that is functionally similar to its Fourier transform? We do, and that is the
Gaussian! We should expect to see some connection between the harmonic oscillator
eigenfunctions and the Gaussian function.
4. By the node theorem, φ(x; n) should have n nodes.
5. Here is a sneak preview of what the harmonic oscillator eigenfunctions look like: (pic­
ture of harmonic oscillator eigenfunctions 0, 4, and 12?)
Our plan of attack is the following: non-dimensionalization → asymptotic analysis → series
method → profit! Let us tackle these one at a time.

We need to non-dimensionalize the equation


−(ℏ²/2m) ∂²φ(x; E)/∂x² + (1/2) mω²x² φ(x; E) = Eφ(x; E).    (0.4)

It is not good to carry around so many constants in that equation. Note that m²ω²/ℏ² is
dimensionally the same as x⁻⁴. This implies that we should define the constant

α ≡ √(ℏ/(mω))

as our length scale. We can then define u ≡ x/α to non-dimensionalize position. Furthermore,
we know that E ∝ ℏω, so non-dimensionalizing energy as ε ≡ 2E/(ℏω) should work. This leaves
the energy eigenvalue equation in fully non-dimensional form as

∂²φ/∂u² = (u² − ε)φ.    (0.5)

This is certainly much neater, but it is not any easier. To make things easier, we need to
develop some more mathematical techniques.

As a mathematical detour, we need to discuss asymptotic analysis. Let us suppose


that we are solving the equation

∂f(x)/∂x + (1 + 1/x) f(x) = 0.    (0.6)

In asymptotic analysis, we look at the different limits. Let us consider that

lim_{x→0} (1 + 1/x) ≈ 1/x.

This means that

lim_{x→0} ∂f(x)/∂x ≈ −f(x)/x,    (0.7)

which is solved by

f(x) = f0 x⁻¹.

But this only considers the behavior as x → 0. Let us say that

f(x) = x⁻¹ g(x)

where g(x) is finite and well-behaved as x → 0. Plugging this in yields

∂g(x)/∂x = −g(x),

which is solved by

g(x) = g0 e⁻ˣ.

This means that the full solution is

f(x) = g0/(x eˣ)    (0.8)
which is exact! The first approximation was just to make our lives easier, and
there ended up being no accuracy sacrificed anywhere. Furthermore, e−x is indeed
relatively constant compared to x−1 as x → 0. To summarize, we looked at the
extreme behavior of the differential equation to peel off a part of the solution.
We then plugged that back in to get a simpler differential equation to solve.
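
For what it is worth, a one-line symbolic check with sympy confirms the exact solution (0.8):

    import sympy as sp

    x = sp.symbols("x", positive=True)
    f = sp.Function("f")

    ode = sp.Eq(f(x).diff(x) + (1 + 1/x) * f(x), 0)
    print(sp.dsolve(ode, f(x)))   # Eq(f(x), C1*exp(-x)/x)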

But what if we did not know how to solve ∂g(x)/∂x + g(x) = 0? How would we figure
it out?

This is where the series method comes in. We can expand the function as a power
series

g(x) = Σ_{j=0}^{∞} aj x^j    (0.9)

and perhaps figure out what the coefficients need to be. Now, we know that

∂g(x)/∂x = Σ_{j=0}^{∞} j aj x^{j−1} = Σ_{j=1}^{∞} j aj x^{j−1} = Σ_{s=0}^{∞} (s + 1) a_{s+1} x^s    (0.10)

first by removing the j = 0 term and second by redefining s ≡ j − 1. But s and
j are just labels, so we might as well just say that

∂g(x)/∂x = Σ_{j=0}^{∞} (j + 1) a_{j+1} x^j.    (0.11)

This means that our differential equation is now

Σ_{j=0}^{∞} ((j + 1) a_{j+1} + aj) x^j = 0.    (0.12)

The left side is a polynomial in x. If this is zero for all x, then the overall
coefficients are identically zero:

(j + 1) a_{j+1} + aj = 0    (0.13)

so

a_{j+1} = −aj/(j + 1),    (0.14)

implying

aj = (−1)^j a0/j!,    (0.15)

so

g(x) = Σ_{j=0}^{∞} aj x^j = a0 Σ_{j=0}^{∞} (−x)^j/j! = a0 e⁻ˣ    (0.16)

as expected! To summarize, we just expanded the function as a power series,


found the recursion relation for its coefficients, and then plugged in the initial
conditions.
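
A tiny numerical sketch of this procedure: generate the coefficients from the recursion
relation (0.14) with a0 = 1 and compare the partial sum against e⁻ˣ at an arbitrary sample
point:

    import math

    a = [1.0]                          # a_0 = 1
    for j in range(30):
        a.append(-a[j] / (j + 1))      # recursion relation (0.14)

    x = 0.7                            # arbitrary sample point
    partial_sum = sum(a_j * x**j for j, a_j in enumerate(a))
    print(partial_sum, math.exp(-x))   # both ~0.496585...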

Let us get back into the physics of this. We want to solve the equation

∂²φ/∂u² = (u² − ε)φ.

First, we apply asymptotic analysis:

lim_{u→±∞} (u² − ε) ≈ u²    (0.17)

so

lim_{u→±∞} ∂²φ/∂u² ≈ u²φ.    (0.18)

We can try

φ(u) = φ0 e^{αu²/2}.

We find that

∂φ/∂u = αuφ

and

∂²φ/∂u² = (α + α²u²)φ ≈ α²u²φ

in the same limit. Comparing this to our differential equation, we see that we need

α² = 1,

so

α = ±1.

We then get

φ(u) = φA e^{u²/2} + φB e^{−u²/2}.

The first part is not normalizable, though, so we do not want that. We keep only the
second part and generalize the constant to a function of u that is relatively constant and
well-behaved as u → ±∞, so we say that

φ(u) = s(u) e^{−u²/2}.

Now recall that the last step in asymptotic analysis is to plug this form into the original
differential equation to yield a differential equation that can be solved for s(u):

∂²s/∂u² − 2u ∂s/∂u + (ε − 1)s = 0.    (0.19)

Before we solve this, we need to figure out the behavior of s(u). We know that it should grow
less rapidly than e^{+u²/2} as u → ±∞. Also, there should be multiple solutions corresponding
to discrete bound states. Finally, the nth solution should have n nodes.

Let us try the power series expansion

s(u) = Σ_{j=0}^{∞} aj u^j    (0.20)

as our candidate solution. Plugging this into the differential equation yields

Σ_{j=0}^{∞} ((j + 1)(j + 2) a_{j+2} − (2j + 1 − ε) aj) u^j = 0.    (0.21)

Again, as this is true for all u, the overall coefficients must all be identically zero. This
means that

a_{j+2} = (2j + 1 − ε) aj/((j + 1)(j + 2))    (0.22)

is the recursion relation. This relates every other coefficient. This means that we need to
specify both a0 and a1 to find all of the coefficients. Why is this the case? It is because we
had a second-order differential equation. Does this meet our expectations? Let us consider
large j:

lim_{j→∞} a_{j+2} ≈ (2j/j²) aj = (2/j) aj    (0.23)

so

aj ≈ C/(j/2)!    (0.24)

and

s(u) ≈ C Σ_j u^j/(j/2)! = C Σ_n u^{2n}/n! = C e^{u²},    (0.25)

where we have set j = 2n in the last sum. Unfortunately, this is exactly what we do not
want, as plugging this into φ(u) = s(u) e^{−u²/2} recovers the non-normalizable solution.

Remember from before that to get a normalizable wavefunction, we had to impose a specific,
discrete set of energies. If we can do that here, can we get things to work?

The only way out of this conundrum is that the series must be finite. In particular, let us
suppose that there exists some n such that when j = n, the numerator 2j + 1 − ε = 0.
Then all of the subsequent aj = 0. Imposing that condition yields ε = 2n + 1. But we
know that ε ≡ 2E/(ℏω). Therefore, the energy eigenvalues are

En = ℏω(n + 1/2).    (0.26)

We now have our quantized energies! They are also evenly spaced as expected.

Note that imposing this condition only terminates either the odd series or the even series,
because the recursion relation is spaced by two. We need to separately insist that aj = 0
for the other series. This is fine because it has been shown that if the potential is symmetric,
then the energy eigenfunctions can be taken to be either even or odd.
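
Here is a short sketch (in the dimensionless variable u, with ε = 2n + 1) that builds s(u)
directly from the recursion relation (0.22), seeds the even or odd series as just described,
and verifies the node theorem numerically:

    import numpy as np

    def s_coefficients(n):
        # polynomial coefficients of s(u) for the n-th eigenstate (eps = 2n + 1)
        eps = 2 * n + 1
        a = np.zeros(n + 1)
        a[n % 2] = 1.0                     # seed the even or odd series; the other is zero
        for j in range(n % 2, n - 1, 2):
            a[j + 2] = (2 * j + 1 - eps) * a[j] / ((j + 1) * (j + 2))
        return a

    u = np.linspace(-6, 6, 4000)           # even point count keeps u = 0 off the grid
    for n in range(4):
        s = np.polynomial.polynomial.polyval(u, s_coefficients(n))
        phi = s * np.exp(-u**2 / 2)
        nodes = np.count_nonzero(phi[:-1] * phi[1:] < 0)
        print(f"n = {n}: {nodes} nodes")   # node theorem: the n-th state has n nodes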

Let us construct some solutions for this. For n = 0, we have ε = 1, so a2 = 0, and we can
impose that a1 = a3 = a5 = . . . = 0. This implies that s(u) = a0, so

φ(u; 0) = a0 e^{−u²/2}.    (0.27)

For n = 1, we have ε = 3, so finding that a3 = 0 and imposing a0 = a2 = a4 = . . . = 0
implies that s(u) = a1 u, so

φ(u; 1) = a1 u e^{−u²/2}.    (0.28)
Repeating this process yields

n    2En/(ℏω)    φ(x; n) e^{+x²/(2α²)}
0        1        N0
1        3        N1 · (2x/α)
2        5        N2 · (4x²/α² − 2)
...     ...       ...
n     2n + 1      Nn Hn(x/α)

where Hn(u) is the nth Hermite polynomial, which is a special set of polynomial functions.
Using the definition of these polynomials as used in mathematics yields

Nn = (mω/(πℏ))^{1/4} (2^n n!)^{−1/2}    (0.29)

for the normalization factors.
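
As a cross-check of (0.29), the following sketch (assuming ℏ = m = ω = 1, so α = 1)
verifies numerically that the φ(x; n) built from scipy's physicists' Hermite polynomials are
normalized:

    import numpy as np
    from math import factorial, pi
    from scipy.special import eval_hermite

    x = np.linspace(-10.0, 10.0, 8001)
    for n in range(4):
        N_n = (1.0 / pi)**0.25 / np.sqrt(2.0**n * factorial(n))  # (0.29) with units set to 1
        phi_n = N_n * eval_hermite(n, x) * np.exp(-x**2 / 2)
        print(f"n = {n}: integral |phi_n|^2 dx = {np.trapz(phi_n**2, x):.6f}")  # ~1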

Let us review the properties of the harmonic oscillator:


• The energies are quantized as

  En = ℏω(n + 1/2),    (0.30)

  which are evenly spaced!

• There is

  E0 = ℏω/2    (0.31)

  as the nonzero ground state energy. This is not arbitrary; while the bottom of the well
  is arbitrary, the ground state energy is ℏω/2 above that bottom.
• The eigenfunctions satisfy the node theorem, as Hn(u) is an nth-order polynomial in
  u. These also satisfy even/odd parity.
• General states are given by

  ψ(x, t) = Σn cn e^{−iEn t/ℏ} φ(x; n) = Σn cn e^{−i(n+1/2)ωt} φ(x; n).    (0.32)
MIT OpenCourseWare
http://ocw.mit.edu

8.04 Quantum Physics I


Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
