
Quantum Mechanics

Lecture Notes for


Chemistry 6312
Quantum Chemistry
Eric R. Bittner
University of Houston
Department of Chemistry
Lecture Notes on Quantum Chemistry
Lecture notes to accompany Chemistry 6321
Copyright © 1997-2003, University of Houston and Eric R. Bittner
All Rights Reserved.
August 12, 2003
Contents
0 Introduction 8
0.1 Essentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
0.2 Problem Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
0.3 2003 Course Calendar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
I Lecture Notes 14
1 Survey of Classical Mechanics 15
1.1 Newton's equations of motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.1.1 Elementary solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.1.2 Phase plane analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2 Lagrangian Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.2.1 The Principle of Least Action . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.2.2 Example: 3-dimensional harmonic oscillator in spherical coordinates . . . . 20
1.3 Conservation Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.4 Hamiltonian Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.4.1 Interaction between a charged particle and an electromagnetic field. . . . . 24
1.4.2 Time dependence of a dynamical variable . . . . . . . . . . . . . . . . . . . 26
1.4.3 Virial Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2 Waves and Wavefunctions 29
2.1 Position and Momentum Representation of |ψ⟩ . . . . . . . . . . . . . . . . . . . . 29
2.2 The Schrödinger Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.1 Gaussian Wavefunctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.2 Evolution of ψ(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3 Particle in a Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.1 Infinite Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.2 Particle in a finite Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.3.3 Scattering states and resonances. . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.4 Application: Quantum Dots . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.4 Tunneling and transmission in a 1D chain . . . . . . . . . . . . . . . . . . . . . . 49
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3 Semi-Classical Quantum Mechanics 55
3.1 Bohr-Sommerfeld quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2 The WKB Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2.1 Asymptotic expansion for eigenvalue spectrum . . . . . . . . . . . . . . . . 58
3.2.2 WKB Wavefunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.3 Semi-classical Tunneling and Barrier Penetration . . . . . . . . . . . . . . 62
3.3 Connection Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4 Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4.1 Classical Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4.2 Scattering at small deection angles . . . . . . . . . . . . . . . . . . . . . . 73
3.4.3 Quantum treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.4.4 Semiclassical evaluation of phase shifts . . . . . . . . . . . . . . . . . . . . 75
3.4.5 Resonance Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4 Postulates of Quantum Mechanics 80
4.0.1 The description of a physical state: . . . . . . . . . . . . . . . . . . . . . . 85
4.0.2 Description of Physical Quantities: . . . . . . . . . . . . . . . . . . . . . . 85
4.0.3 Quantum Measurement: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.0.4 The Principle of Spectral Decomposition: . . . . . . . . . . . . . . . . . . . 86
4.0.5 The Superposition Principle . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.0.6 Reduction of the wavepacket: . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.0.7 The temporal evolution of the system: . . . . . . . . . . . . . . . . . . . . 90
4.0.8 Dirac Quantum Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.1 Dirac Notation and Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.1.1 Transformations and Representations . . . . . . . . . . . . . . . . . . . . . 94
4.1.2 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.1.3 Products of Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.1.4 Functions Involving Operators . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.2 Constants of the Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.3 Bohr Frequency and Selection Rules . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.4 Example using the particle in a box states . . . . . . . . . . . . . . . . . . . . . . 102
4.5 Time Evolution of Wave and Observable . . . . . . . . . . . . . . . . . . . . . . . 103
4.6 Unstable States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.7 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5 Bound States of the Schrödinger Equation 110
5.1 Introduction to Bound States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.2 The Variational Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.2.1 Variational Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.2.2 Constraints and Lagrange Multipliers . . . . . . . . . . . . . . . . . . . . . 114
5.2.3 Variational method applied to the Schrödinger equation . . . . . . . . . . . 117
5.2.4 Variational theorems: Rayleigh-Ritz Technique . . . . . . . . . . . . . . . . 118
5.2.5 Variational solution of harmonic oscillator ground State . . . . . . . . . . . 119
5.3 The Harmonic Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.3.1 Harmonic Oscillators and Nuclear Vibrations . . . . . . . . . . . . . . . . . 124
5.3.2 Classical interpretation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.3.3 Molecular Vibrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.4 Numerical Solution of the Schrödinger Equation . . . . . . . . . . . . . . . . . . . 136
5.4.1 Numerov Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.4.2 Numerical Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6 Quantum Mechanics in 3D 152
6.1 Quantum Theory of Rotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.2 Eigenvalues of the Angular Momentum Operator . . . . . . . . . . . . . . . . . . 157
6.3 Eigenstates of L² . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
6.4 Eigenfunctions of L² . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.5 Addition theorem and matrix elements . . . . . . . . . . . . . . . . . . . . . . . . 162
6.6 Legendre Polynomials and Associated Legendre Polynomials . . . . . . . . . . . . 164
6.7 Quantum rotations in a semi-classical context . . . . . . . . . . . . . . . . . . . . 165
6.8 Motion in a central potential: The Hydrogen Atom . . . . . . . . . . . . . . . . . 170
6.8.1 Radial Hydrogenic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 173
6.9 Spin 1/2 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
6.9.1 Theoretical Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.9.2 Other Spin Observables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.9.3 Evolution of a state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.9.4 Larmor Precession . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.10 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
7 Perturbation theory 180
7.1 Perturbation Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.2 Two level systems subject to a perturbation . . . . . . . . . . . . . . . . . . . . . 182
7.2.1 Expansion of Energies in terms of the coupling . . . . . . . . . . . . . . . 183
7.2.2 Dipole molecule in a homogeneous electric field . . . . . . . . . . . . . . . . 184
7.3 Dyson Expansion of the Schrödinger Equation . . . . . . . . . . . . . . . . . . . . 188
7.4 Van der Waals forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
7.4.1 Origin of long-ranged attractions between atoms and molecules . . . . . . . 190
7.4.2 Attraction between an atom and a conducting surface . . . . . . . . . . . . 192
7.5 Perturbations Acting over a Finite amount of Time . . . . . . . . . . . . . . . . . 193
7.5.1 General form of time-dependent perturbation theory . . . . . . . . . . . . 193
7.5.2 Fermi's Golden Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
7.6 Interaction between an atom and light . . . . . . . . . . . . . . . . . . . . . . . . 197
7.6.1 Fields and potentials of a light wave . . . . . . . . . . . . . . . . . . . . . 197
7.6.2 Interactions at Low Light Intensity . . . . . . . . . . . . . . . . . . . . . . 198
7.6.3 Photoionization of Hydrogen 1s . . . . . . . . . . . . . . . . . . . . . . . . 202
7.6.4 Spontaneous Emission of Light . . . . . . . . . . . . . . . . . . . . . . . . 204
7.7 Time-dependent golden rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
7.7.1 Non-radiative transitions between displaced Harmonic Wells . . . . . . . . 210
7.7.2 Semi-Classical Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
7.8 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
8 Many Body Quantum Mechanics 222
8.1 Symmetry with respect to particle Exchange . . . . . . . . . . . . . . . . . . . . . 222
8.2 Matrix Elements of Electronic Operators . . . . . . . . . . . . . . . . . . . . . . . 227
8.3 The Hartree-Fock Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
8.3.1 Two electron integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
8.3.2 Koopmans' Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
8.4 Quantum Chemistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
8.4.1 The Born-Oppenheimer Approximation . . . . . . . . . . . . . . . . . . . . 232
8.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
A Physical Constants and Conversion Factors 247
B Mathematical Results and Techniques to Know and Love 249
B.1 The Dirac Delta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
B.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
B.1.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
B.1.3 Spectral representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
B.2 Coordinate systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
B.2.1 Cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
B.2.2 Spherical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
B.2.3 Cylindrical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
C Mathematica Notebook Pages 256
List of Figures
1.1 Tangent field for the simple pendulum with ω = 1. The superimposed curve is a linear
approximation to the pendulum motion. . . . . . . . . . . . . . . . . . . . . . . . 17
1.2 Vector diagram for motion in a central force. The particle's motion is along the
Z axis, which lies in the plane of the page. . . . . . . . . . . . . . . . . . . . . . . 21
1.3 Screen shot of using Mathematica to plot the phase plane for the harmonic oscillator.
Here k/m = 1 and x₀ = 0.75. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.1 A Gaussian wavepacket, ψ(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 Momentum-space distribution, ψ(k). . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 G₀ for fixed t as a function of x. . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.4 Evolution of a free particle wavefunction. . . . . . . . . . . . . . . . . . . . . . . 36
2.5 Particle in a box states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6 Graphical solution to the transcendental equations for an electron in a truncated hard
well of depth V₀ = 10 and width a = 2. The short-dashed blue curve corresponds
to the symmetric case and the long-dashed blue curve to the asymmetric case.
The red line is √(1 − V₀/E). Bound-state solutions are such that the red and
blue curves cross. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.7 Transmission (blue) and Reflection (red) coefficients for an electron scattering over
a square well (V = 40 and a = 1). . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.8 Transmission Coefficient for a particle passing over a bump. . . . . . . . . . . . . 43
2.9 Scattering waves for particle passing over a well. . . . . . . . . . . . . . . . . . . . 44
2.10 Argand plot of a scattering wavefunction passing over a well. . . . . . . . . . . . . 45
2.11 Density of states for a 1-, 2- , and 3- dimensional space. . . . . . . . . . . . . . . . 46
2.12 Density of states for a quantum well and quantum wire compared to a 3d space.
Here L = 5 and s = 2 for comparison. . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.13 Spherical Bessel functions j₀, j₁, and j₂ (red, blue, green) . . . . . . . . . . . . . 48
2.14 Radial wavefunctions (left column) and corresponding PDFs (right column) for an
electron in an R = 0.5 Å quantum dot. The upper two correspond to (n, l) = (1, 0)
(solid) and (n, l) = (1, 1) (dashed), while the lower correspond to (n, l) = (2, 0)
(solid) and (n, l) = (2, 1) (dashed) . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.1 Eckart Barrier and parabolic approximation of the transition state . . . . . . . . . 63
3.2 Airy functions, Ai(y) (red) and Bi(y) (blue) . . . . . . . . . . . . . . . . . . . . . 66
3.3 Bound states in a gravitational well . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4 Elastic scattering trajectory for classical collision . . . . . . . . . . . . . . . . . . 70
3.5 Form of the radial wave for repulsive (short dashed) and attractive (long dashed)
potentials. The form for V = 0 is the solid curve for comparison. . . . . . . . . . . 76
4.1 Gaussian distribution function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.2 Combination of two distributions. . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3 Constructive and destructive interference from the electron/two-slit experiment. The
superimposed red and blue curves are P₁ and P₂ from the classical probabilities . 83
4.4 The diffraction function sin(x)/x . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.1 Variational paths between endpoints. . . . . . . . . . . . . . . . . . . . . . . . . . 116
5.2 Hermite Polynomials, Hₙ, up to n = 3. . . . . . . . . . . . . . . . . . . . . . . . 128
5.3 Harmonic oscillator functions for n = 0 to 3 . . . . . . . . . . . . . . . . . . . . . 132
5.4 Quantum and Classical Probability Distribution Functions for the Harmonic Oscillator. . . 133
5.5 London-Eyring-Polanyi-Sato (LEPS) empirical potential for the F + H₂ → FH + H
chemical reaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.6 Morse well and harmonic approximation for HF . . . . . . . . . . . . . . . . . . . 136
5.7 Model potential for proton tunneling. . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.8 Double well tunneling states as determined by the Numerov approach. . . . . . . . 138
5.9 Chebyshev Polynomials for n = 1-5 . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.10 Ammonia Inversion and Tunneling . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.1 Vector model for the quantum angular momentum state |jm⟩, which is represented
here by the vector j, which precesses about the z axis (axis of quantization) with
projection m. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.2 Spherical Harmonic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
6.3 Classical and Quantum Probability Distribution Functions for Angular Momentum. . . 168
7.1 Variation of energy level splitting as a function of the applied field for an ammonia
molecule in an electric field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
7.2 Photo-ionization spectrum for hydrogen atom. . . . . . . . . . . . . . . . . . . . . 205
8.1 Various contributions to the H₂⁺ Hamiltonian. . . . . . . . . . . . . . . . . . . . . 236
8.2 Potential energy surface for the H₂⁺ molecular ion. . . . . . . . . . . . . . . . . . . 238
8.3 Three-dimensional representations of ψ₊ and ψ₋ for the H₂⁺ molecular ion. . . . . 238
8.4 Setup calculation dialog screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
8.5 HOMO-1, HOMO, and LUMO for CH₂=O. . . . . . . . . . . . . . . . . . . . . . 245
8.6 Transition state geometry for H₂ + C=O → CH₂=O. The arrow indicates the
reaction path. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
B.1 sin(xa)/x representation of the Dirac δ-function . . . . . . . . . . . . . . . . . . 251
B.2 Gaussian representation of the δ-function . . . . . . . . . . . . . . . . . . . . . . . 251
List of Tables
3.1 Location of nodes for the Airy function, Ai(x). . . . . . . . . . . . . . . . . . . . . 68
5.1 Chebyshev polynomials of the first kind . . . . . . . . . . . . . . . . . . . . . . . 140
5.2 Eigenvalues for double well potential computed via DVR and Numerov approaches 143
6.1 Spherical Harmonics (Condon-Shortley phase convention). . . . . . . . . . . . . . 160
6.2 Relation between various notations for Clebsch-Gordan Coefficients in the literature . . 169
8.1 Vibrational Frequencies of Formaldehyde . . . . . . . . . . . . . . . . . . . . . . . 244
A.1 Physical Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
A.2 Atomic Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
A.3 Useful orders of magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Chapter 0
Introduction
Nothing conveys the impression of humungous intellect so much as even the sketchiest
knowledge of quantum physics, and since the sketchiest knowledge is all anyone will
ever have, never be shy about holding forth with bags of authority about subatomic
particles and the quantum realm without having done any science whatsoever.
Jack Klaff, Bluff Your Way in the Quantum Universe
The field of quantum chemistry seeks to provide a rigorous description of chemical processes
at its most fundamental level. For ordinary chemical processes, the most fundamental and un-
derlying theory of chemistry is given by the time-dependent and time-independent versions of the
Schrödinger equation. However, simply stating an equation that provides the underlying theory
in no shape or form yields any predictive or interpretive power. In fact, most of what we do
in quantum mechanics is to develop a series of well-posed approximations and physical assump-
tions to solve the basic equations of quantum mechanics. In this course, we will delve deeply into
the underlying physical and mathematical theory. We will learn how to solve some elementary
problems and apply these to not-so-elementary examples.
As with any course of this nature, the content reflects the instructor's personal interests in
the eld. In this case, the emphasis of the course is towards dynamical processes, transitions
between states, and interaction between matter and radiation. More traditional quantum
chemistry courses will focus upon electronic structure. In fact, the moniker quantum chemistry
typically refers to electronic structure theory. While this is an extremely rich topic, it is my
personal opinion that a deeper understanding of dynamical processes provides a broader basis
for understanding chemical processes.
It is assumed from the beginning, that students taking this course have had some exposure
to the fundamental principles of quantum theory as applied to chemical systems. This is usually
in the context of a physical chemistry course or a separate course in quantum chemistry. I
also assume that students taking this course have had undergraduate level courses in calculus,
differential equations, and have some concepts of linear algebra. Students lacking in any of these
areas are strongly encouraged to sit through my undergraduate Physical Chemistry II course
(offered in the Spring Semester at the Univ. of Houston) before attempting this course. This
course is by design and purpose theoretical in nature.
The purpose of this course is to provide a solid and mathematically rigorous tour through
modern quantum mechanics. We will begin with simple examples which can be worked out
exactly on paper and move on to discuss various approximation schemes. For cases in which
analytical solutions are either too obfuscating or impossible, computer methods will be intro-
duced using Mathematica. Applications toward chemically relevant topics will be emphasized
throughout.
We will primarily focus upon single particle systems, or systems in which the particles are
distinguishable. Special considerations for systems of indistinguishable particles, such as the
electrons in a molecule, will be discussed towards the end of the course. The pace of the course
is fairly rigorous, with emphasis on solving problems either analytically or using computer.
I also tend to emphasize how to approach a problem from a theoretical viewpoint. As you
will discover rapidly, very few of the problem sets in this course are of the look-up the right
formula type. Rather, you will need to learn to use the various techniques (perturbation theory,
commutation relations, etc...) to solve and work out problems for a variety of physical systems.
The lecture notes in this package are really to be regarded as a work in progress and updates
and additions will be posted as they evolve. Lacking is a complete chapter on the Hydrogen
atom and atomic physics and a good overview of many body theory. Also, I have not included
a chapter on scattering and other topics as these will be added over the course of time. Certain
sections are clearly better than others and will be improved upon over time. Each chapter ends
with a series of exercises and suggested problems, some of which have detailed solutions. Others,
you should work out on your own. At the end of this book are a series of Mathematica notebooks
I have written which illustrate various points and perform a variety of calculations. These can be
downloaded from my web-site (http://k2.chem.uh.edu/quantum/) and run on any recent version
of Mathematica. ( v4.n).
It goes entirely without saying (but I will anyway) that these notes come from a wide variety
of sources which I have tried to cite where possible.
0.1 Essentials
Instructor: Prof. Eric. R. Bittner.
Office: Fleming 221J
Email: bittner@uh.edu
Phone: -3-2775
Office Hours: Monday and Thurs. afternoons or by appointment.
Course Web Page: http://k2.chem.uh.edu/quantum/ Solution sets, course news, class
notes, sample computer routines, etc., will be posted from time to time on this web page.
Other Required Text: Quantum Mechanics, Landau and Lifshitz. This is volume 3 of
L&L's classical course in modern physics. Every self-respecting scientist has at least two or
three of their books on their book-shelves. This text tends to be pretty terse and uses the
classical phrase it is easy to show... quite a bit (it usually means just the opposite). The
problems are usually worked out in detail and are usually classic applications of quantum
theory. This is a landmark book and contains everything you really need to know to do
quantum mechanics.
Recommended Texts: I highly recommend that you use a variety of books, since one
author's approach to a given topic may be clearer than another's.
Quantum Mechanics, Cohen-Tannoudji, et al. This two volume book is very compre-
hensive and tends to be rather formal (and formidable) in its approach. The problems
are excellent.

Lectures in Quantum Mechanics, Gordon Baym. Baym's book covers a wide range of
topics in a lecture note style.
Quantum Chemistry, I. Levine. This is usually the first quantum book that chemists
get. I find it to be too wordy, and the notation and derivations a bit ponderous. Levine
does not use Dirac notation. However, he does give a good overview of elementary
electronic structure theory and some of its important developments. Good for starting
off in electronic structure.
Modern Quantum Mechanics, J. J. Sakurai. This is a real classic. Not good for a first
exposure since it assumes a fairly sophisticated understanding of quantum mechanics
and mathematics.
Intermediate Quantum Mechanics, Hans Bethe and Roman Jackiw. This book is a
great exploration of advanced topics in quantum mechanics as illustrated by atomic
systems.
What is Quantum Mechanics?, Transnational College of LEX. OK, this one I found
at Barnes and Noble and it's more or less a cartoon book. But it is really good. It
explores the historical development of quantum mechanics, has some really interest-
ing insights into semi-classical and old quantum theory, and presents the study of
quantum mechanics as an unfolding story. I highly recommend this book if this
is the first time you are taking a course on quantum mechanics.
Quantum Mechanics in Chemistry by George Schatz and Mark Ratner. Ratner and
Schatz have more in terms of elementary quantum chemistry, emphasizing the use of
modern quantum chemical computer programs than almost any text I have reviewed.
Prerequisites: Graduate status in chemistry. This course is required for all Physical Chem-
istry graduate students. The level of the course will be fairly rigorous, and I assume that stu-
dents have had some exposure to quantum mechanics at the undergraduate level, typically
in Physical Chemistry, and are competent in linear algebra, calculus, and solving elemen-
tary differential equations.
Tests and Grades: There are no exams in this course, only problem sets and participation
in discussion. This means coming to lecture prepared to ask and answer questions. My
grading policy is pretty simple. If you make an honest effort, do the assigned problems
(mostly correctly), and participate in class, you will be rewarded with at least a B. Of
course this is the formula for success for any course.
0.2 Problem Sets
Your course grade will largely be determined by your performance on these problems as well as
the assigned discussion of a particular problem. My philosophy towards problem sets is that this
is the only way to really learn this material. These problems are intentionally challenging, but
not overwhelming, and are paced to correspond to what will be going on in the lecture.
Some ground rules:
1. Due dates are posted on each problem, usually 1 week or 2 weeks after they are assigned.
Late submissions may be turned in up to 1 week later. All problems must be turned in by
December 3. I will not accept any submissions after that date.
2. Handwritten Problems. If I can't read it, I won't grade it. Period. Consequently, I strongly
encourage the use of word-processing software for your final submission. Problem solutions
can be submitted electronically as Mathematica, LaTeX, or PDF files to bittner@uh.edu with
the subject: QUANTUM PROBLEM SET. Do not send me an MS Word file as an email
attachment. I expect some text (written in complete and correct sentences) to explain your
steps where needed and some discussion of the results. The computer lab in the basement
of Fleming has 20 PCs with copies of Mathematica, or you can obtain your own license
from the University Media Center.
3. Collaborations. You are strongly encouraged to work together and collaborate on problems.
However, simply copying from your fellow student is not an acceptable collaboration.
4. These are the only problems you need to turn in. We will have additional exercises, mostly
coming from the lecture. Also, at the end of the lectures herein are a set of suggested
problems and exercises to work on. Many of these have solutions.
0.3 2003 Course Calendar
This is a rough schedule of the topics we will cover. In essence, we will start from
a basic description of quantum wave mechanics and bound states. We will then move on to
the more formal aspects of quantum theory: Dirac notation, perturbation theory, variational
theory, and the like. Lastly, we move on to applications: the hydrogen atom, many-electron systems,
semi-classical approximations, and a semi-classical treatment of light absorption and emission.
We will also have a recitation session in 221 at 10am Friday morning. The purpose of this
will be to specifically discuss the problem sets and other issues.
27-August: Course overview: Classical Concepts
3-Sept: Finishing Classical Mechanics/Elementary Quantum concepts
8-Sept: Particle in a box and hard wall potentials (Perry?)
10 Sept: Tunneling/Density of states (Perry)
15/17 Bohr-Sommerfeld Quantization/Old quantum theory/connection to classical me-
chanics (Perry)
22/24 Sept: Semiclassical quantum mechanics: WKB Approx. Application to scattering
29 Sept/1 Oct. Postulates of quantum mechanics: Dirac notation, superposition principle,
simple calculations.
6/8 Oct: Bound States: Variational principle, quantum harmonic oscillator
13/15 Oct: Quantum mechanics in 3D: Angular momentum (Chapt 4.1-4.8)
20/22 Oct: Hydrogen atom/Hydrogenic systems/Atomic structure
27/29 Oct: Perturbation Theory:
3/5 Nov: Time-dependent Perturbation Theory:
10/12 Identical Particles/Quantum Statistics
17/19 Nov: Helium atom, hydrogen ion
24/26 Nov: Quantum Chemistry
3 DecLast day to turn in problem sets
Final Exam: TBA
Part I
Lecture Notes
Chapter 1
Survey of Classical Mechanics
Quantum mechanics is in many ways the culmination of many hundreds of years of work and
thought about how mechanical things move and behave. Since ancient times, scientists have
wondered about the structure of matter and have tried to develop a generalized and underlying
theory which governs how matter moves at all length scales.
For ordinary objects, the rules of motion are very simple. By ordinary, I mean objects that
are more or less on the same length and mass scale as you and I, say (conservatively) 10⁻⁷ m to
10⁶ m and 10⁻²⁵ g to 10⁸ g, moving at less than 20% of the speed of light. In other words, almost
everything you can see and touch and hold obeys what are called classical laws of motion. The
term classical means that the basic principles of this class of motion have their foundation
in antiquity. Classical mechanics is an extremely well developed area of physics. While you may
think that, given that classical mechanics has been studied extensively for hundreds of years,
there really is little new development in this field, it remains a vital and extremely active area of
research. Why? Because the majority of the universe lives in a dimensional realm where classical
mechanics is extremely valid. Classical mechanics is the workhorse for atomistic simulations
of fluids, proteins, and polymers. It provides the basis for understanding chaotic systems. It also
provides a useful foundation for many of the concepts in quantum mechanics.
Quantum mechanics provides a description of how matter behaves at very small length and
mass scales: i.e. the realm of atoms, molecules, and below. It was developed over the last century
to explain a series of experiments on atomic systems that could not be explained using purely
classical treatments. The advent of quantum mechanics forced us to look beyond the classical
theories. However, it was not a drastic and complete departure. At some point, the two theories
must correspond so that classical mechanics is the limiting behavior of quantum mechanics for
macroscopic objects. Consequently, many of the concepts we will study in quantum mechanics
have direct analogs to classical mechanics: momentum, angular momentum, time, potential
energy, kinetic energy, and action.
Much like classical music is in a particular style, classical mechanics is based upon the principle
that the motion of a body can be reduced to the motion of a point particle with a given mass
$m$, position $x$, and velocity $v$. In this chapter, we will review some of the concepts of classical mechanics which are necessary for studying quantum mechanics. We will cast these in a form whereby we can move easily back and forth between classical and quantum mechanics. We will first discuss Newtonian motion and cast this into the Lagrangian form. We will then discuss the principle of least action and Hamiltonian dynamics and the concept of phase space.
1.1 Newton's equations of motion
Newton's Principia set the theoretical basis of mathematical mechanics and the analysis of physical bodies. The equation that force equals mass times acceleration is the fundamental equation of classical mechanics. Stated mathematically,
$$m\ddot{x} = f(x) \qquad (1.1)$$
The dots refer to differentiation with respect to time; we will use this notation for time derivatives throughout. We may also write $dx/dt$ as well. So,
$$\ddot{x} = \frac{d^2 x}{dt^2}.$$
For now we are limiting ourselves to one particle moving in one dimension. For motion in more dimensions, we need to introduce vector components. In Cartesian coordinates, Newton's equations are
$$m\ddot{x} = f_x(x, y, z) \qquad (1.2)$$
$$m\ddot{y} = f_y(x, y, z) \qquad (1.3)$$
$$m\ddot{z} = f_z(x, y, z) \qquad (1.4)$$
where the force vector $\vec{f}(x, y, z)$ has components in all three dimensions and varies with location. We can also define a position vector, $\vec{x} = (x, y, z)$, and a velocity vector, $\vec{v} = (\dot{x}, \dot{y}, \dot{z})$. We can also replace the second-order differential equation with two first-order equations,
$$\dot{x} = v_x \qquad (1.5)$$
$$\dot{v}_x = f_x/m \qquad (1.6)$$
These, along with the initial conditions $x(0)$ and $v(0)$, are all that are needed to solve for the motion of a particle with mass $m$ given a force $f$. We could also have chosen two end points and asked: what path must the particle take to get from one point to the other? Let us consider some elementary solutions.
1.1.1 Elementary solutions
First, the case in which $f = 0$ and $\ddot{x} = 0$. Thus, $v = \dot{x} = \text{const}$. So, unless there is an applied force, the velocity of a particle remains unchanged.
Second, we consider the case of a linear force, $f = -kx$. This is the restoring force for a spring; such force laws are termed Hooke's law, and $k$ is termed the force constant. Our equations are
$$\dot{x} = v_x \qquad (1.7)$$
$$\dot{v}_x = -(k/m)\,x \qquad (1.8)$$
or $\ddot{x} = -(k/m)x$. So we want some function which is proportional to its own second derivative. The cosine and sine functions have this property, so let's try
$$x(t) = A\cos(at) + B\sin(bt).$$
Taking time derivatives,
$$\dot{x}(t) = -aA\sin(at) + bB\cos(bt),$$
$$\ddot{x}(t) = -a^2 A\cos(at) - b^2 B\sin(bt).$$
So we get the required result if $a = b = \sqrt{k/m}$, leaving $A$ and $B$ undetermined. Thus, we need two initial conditions to specify these coefficients. Let's pick $x(0) = x_o$ and $v(0) = 0$. Thus, $x(0) = A = x_o$ and $B = 0$. Notice that the term $\sqrt{k/m}$ has units of angular frequency,
$$\omega = \sqrt{\frac{k}{m}}.$$
So, our equations of motion are
$$x(t) = x_o\cos(\omega t) \qquad (1.9)$$
$$v(t) = -x_o\,\omega\sin(\omega t). \qquad (1.10)$$
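As a quick numerical aside (my own, not part of the notes): we can verify Eqs. (1.9) and (1.10) by integrating $m\ddot{x} = -kx$ with the velocity-Verlet scheme and comparing against the analytic solution. The parameter values here are arbitrary choices.

```python
# Velocity-Verlet integration of the harmonic oscillator vs. the exact
# solution x(t) = x_o cos(w t), v(t) = -x_o w sin(w t).
import math

m, k = 1.0, 4.0           # mass and force constant (arbitrary)
w = math.sqrt(k / m)      # angular frequency, w = sqrt(k/m)
x, v = 0.75, 0.0          # initial conditions: x(0) = x_o, v(0) = 0
x_o = x
dt, nsteps = 1e-3, 5000

for _ in range(nsteps):
    a = -k / m * x                   # acceleration from Hooke's law
    x += v * dt + 0.5 * a * dt**2    # position update
    a_new = -k / m * x
    v += 0.5 * (a + a_new) * dt      # velocity update (averaged force)

t = nsteps * dt
x_err = abs(x - x_o * math.cos(w * t))
v_err = abs(v - (-x_o * w * math.sin(w * t)))
```

Both errors are of order $\Delta t^2$, far smaller than the amplitude of the motion.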
[Figure 1.1: Tangent field for a simple pendulum with $\omega = 1$. The superimposed curve is a linear approximation to the pendulum motion.]
1.1.2 Phase plane analysis
Often one cannot determine the closed-form solution to a given problem, and we need to turn to approximate or even graphical methods. Here we will look at an extremely useful way to analyze a system of equations by plotting their time derivatives.
First, let's look at the oscillator we just studied. We can define a vector $\vec{s} = (\dot{x}, \dot{v}) = (v, -(k/m)x)$ and plot the vector field. Fig. 1.3 shows how to do this in Mathematica. The superimposed curve is one trajectory, and the arrows give the flow of trajectories on the phase plane.
We can examine more complex behavior using this procedure. For example, the simple pendulum obeys the equation $\ddot{x} = -\omega^2\sin x$. This can be reduced to two first-order equations: $\dot{x} = v$ and $\dot{v} = -\omega^2\sin(x)$.
We can approximate the motion of the pendulum for small displacements by expanding the pendulum's force about $x = 0$,
$$-\omega^2\sin(x) = -\omega^2\left(x - \frac{x^3}{6} + \cdots\right).$$
For small $x$ the cubic term is very small, and we have
$$\dot{v} = -\omega^2 x = -\frac{k}{m}x,$$
which is the equation for harmonic motion. So, for small initial displacements, the pendulum oscillates back and forth with angular frequency $\omega$. For large initial displacements, $x_o = \pi$, or if we impart a large enough initial velocity, the pendulum does not oscillate back and forth but instead undergoes rotational motion (spinning!) in one direction or the other.
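The two regimes can be seen numerically (a sketch of mine, not from the notes): a small initial displacement oscillates, while a large enough initial velocity sends the pendulum over the top. The separatrix sits at $\frac{1}{2}v_o^2 = 2\omega^2$, i.e. $v_o = 2$ for $\omega = 1$; parameter values below are arbitrary.

```python
# RK4 integration of the pendulum equation xdd = -w^2 sin(x), w = 1.
import math

def rk4_pendulum(x0, v0, w=1.0, dt=0.01, nsteps=2000):
    """Integrate (x, v) and return the list of positions x(t)."""
    def deriv(x, v):
        return v, -w * w * math.sin(x)
    xs, x, v = [x0], x0, v0
    for _ in range(nsteps):
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = deriv(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = deriv(x + dt*k3x, v + dt*k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        xs.append(x)
    return xs

small = rk4_pendulum(x0=0.2, v0=0.0)   # librates: |x| stays near 0.2
spin  = rk4_pendulum(x0=0.0, v0=2.5)   # v0 above the separatrix: rotates
```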
1.2 Lagrangian Mechanics
1.2.1 The Principle of Least Action
The most general form of the law governing the motion of a mass is the principle of least action, or Hamilton's principle. The basic idea is that every mechanical system is described by a single function of coordinate, velocity, and time, $L(x, \dot{x}, t)$, and that the motion of the particle is such that a certain condition is satisfied. That condition is that the time integral of this function,
$$S = \int_{t_o}^{t_f} L(x, \dot{x}, t)\,dt,$$
takes the least possible value over all paths that start at $x_o$ at the initial time and end at $x_f$ at the final time.
Let $x(t)$ be the function for which $S$ is minimized. This means that $S$ must increase for any variation about this path, $x(t) + \delta x(t)$. Since the end points are specified, $\delta x(t_o) = \delta x(t_f) = 0$, and the change in $S$ upon replacement of $x(t)$ with $x(t) + \delta x(t)$ is
$$\delta S = \int_{t_o}^{t_f} L(x + \delta x, \dot{x} + \delta\dot{x}, t)\,dt - \int_{t_o}^{t_f} L(x, \dot{x}, t)\,dt = 0.$$
This is zero because $S$ is a minimum. Now we can expand the integrand in the first term,
$$L(x + \delta x, \dot{x} + \delta\dot{x}, t) = L(x, \dot{x}, t) + \left(\frac{\partial L}{\partial x}\delta x + \frac{\partial L}{\partial \dot{x}}\delta\dot{x}\right).$$
Thus, we have
$$\int_{t_o}^{t_f}\left(\frac{\partial L}{\partial x}\delta x + \frac{\partial L}{\partial \dot{x}}\delta\dot{x}\right)dt = 0.$$
Since $\delta\dot{x} = d(\delta x)/dt$, integrating the second term by parts gives
$$\delta S = \left[\frac{\partial L}{\partial \dot{x}}\delta x\right]_{t_o}^{t_f} + \int_{t_o}^{t_f}\left(\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}}\right)\delta x\,dt = 0.$$
The surface term vanishes because of the condition imposed above. This leaves the integral. It too must vanish, and since $\delta x$ is arbitrary, the only way for this to happen is if the integrand itself vanishes. Thus we have
$$\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = 0.$$
$L$ is known as the Lagrangian. Before moving on, we consider the case of a free particle. The Lagrangian in this case must be independent of the position of the particle, since a freely moving particle defines an inertial frame. Since space is isotropic, $L$ must depend only upon the magnitude of $v$ and not its direction. Hence,
$$L = L(v^2).$$
Since $L$ is independent of $x$, $\partial L/\partial x = 0$, so the Lagrange equation is
$$\frac{d}{dt}\frac{\partial L}{\partial v} = 0.$$
So $\partial L/\partial v = \text{const}$, which leads us to conclude that $L$ is quadratic in $v$. In fact,
$$L = \frac{1}{2}mv^2,$$
which is the kinetic energy of the particle,
$$T = \frac{1}{2}mv^2 = \frac{1}{2}m\dot{x}^2.$$
For a particle moving in a potential field $V$, the Lagrangian is given by
$$L = T - V.$$
$L$ has units of energy and gives the difference between the energy of motion and the energy of location.
This leads to the equations of motion:
$$\frac{d}{dt}\frac{\partial L}{\partial v} = \frac{\partial L}{\partial x}.$$
Substituting $L = T - V$ yields
$$m\dot{v} = -\frac{\partial V}{\partial x},$$
which is identical to Newton's equations given above once we identify the force as minus the derivative of the potential. For the free particle, $v = \text{const}$. Thus,
$$S = \int_{t_o}^{t_f}\frac{m}{2}v^2\,dt = \frac{m}{2}v^2(t_f - t_o).$$
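As a numerical illustration (my own construction, not from the notes), we can discretize the action $S = \int (\frac{m}{2}\dot{x}^2 - \frac{k}{2}x^2)\,dt$ for a harmonic oscillator and check that the classical path has smaller action than nearby paths sharing the same endpoints. The grid size, interval, and perturbation shape are arbitrary; the interval is kept shorter than the conjugate-point time $t = \pi/\omega$, where "least" action is guaranteed.

```python
# Compare the discretized action of the classical path x(t) = cos(t)
# (m = k = 1, so omega = 1) with a perturbed path having the same endpoints.
import math

m = k = 1.0
T, n = 1.0, 2000
dt = T / n
ts = [i * dt for i in range(n + 1)]

def action(path):
    """Segment-wise action: midpoint position, finite-difference velocity."""
    S = 0.0
    for i in range(n):
        xm = 0.5 * (path[i] + path[i + 1])
        vd = (path[i + 1] - path[i]) / dt
        S += (0.5 * m * vd**2 - 0.5 * k * xm**2) * dt
    return S

classical = [math.cos(t) for t in ts]
# perturbation vanishing at both endpoints, so the endpoints are unchanged
perturbed = [x + 0.1 * math.sin(math.pi * t / T)
             for x, t in zip(classical, ts)]

S_cl, S_pert = action(classical), action(perturbed)
```

Analytically, $S_{\rm cl} = -\tfrac{1}{4}\sin 2$ for this path, and any endpoint-preserving perturbation raises the action on this short interval.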
You may be wondering at this point why we needed a new function and derived all this from some minimization principle. The reason is that for some systems we have constraints on the type of motion they can undertake. For example, there may be bonds, hinges, and other mechanical hindrances which limit the range of motion a given particle can take. The Lagrangian formalism provides a mechanism for incorporating these extra effects in a consistent and correct way. In fact, we will use this principle later in deriving a variational solution to the Schrödinger equation by constraining the wavefunction solutions to be orthonormal.
Lastly, it is interesting to note that $v^2 = (dl/dt)^2 = (dl)^2/(dt)^2$, where $dl$ is the element of arc in a given coordinate system. Thus, within the Lagrangian formalism it is easy to convert from one coordinate system to another. For example, in Cartesian coordinates, $dl^2 = dx^2 + dy^2 + dz^2$; thus $v^2 = \dot{x}^2 + \dot{y}^2 + \dot{z}^2$. In cylindrical coordinates, $dl^2 = dr^2 + r^2 d\phi^2 + dz^2$, and we have the Lagrangian
$$L = \frac{1}{2}m(\dot{r}^2 + r^2\dot{\phi}^2 + \dot{z}^2),$$
and for spherical coordinates, $dl^2 = dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\,d\phi^2$; hence
$$L = \frac{1}{2}m(\dot{r}^2 + r^2\dot{\theta}^2 + r^2\sin^2\theta\,\dot{\phi}^2).$$
1.2.2 Example: 3-dimensional harmonic oscillator in spherical coordinates
Here we take the potential energy to be a function of $r$ alone (isotropic),
$$V(r) = kr^2/2.$$
Thus, the Lagrangian in Cartesian coordinates is
$$L = \frac{m}{2}(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) - \frac{k}{2}r^2.$$
Since $r^2 = x^2 + y^2 + z^2$, we could easily solve this problem in Cartesian space, since
$$L = \frac{m}{2}(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) - \frac{k}{2}(x^2 + y^2 + z^2) \qquad (1.11)$$
$$= \left(\frac{m}{2}\dot{x}^2 - \frac{k}{2}x^2\right) + \left(\frac{m}{2}\dot{y}^2 - \frac{k}{2}y^2\right) + \left(\frac{m}{2}\dot{z}^2 - \frac{k}{2}z^2\right), \qquad (1.12)$$
and we see that the system is separable into 3 independent oscillators. To convert to spherical polar coordinates, we use
$$x = r\sin(\theta)\cos(\phi) \qquad (1.13)$$
$$y = r\sin(\theta)\sin(\phi) \qquad (1.14)$$
$$z = r\cos(\theta) \qquad (1.15)$$
and the arc length given above:
$$L = \frac{m}{2}(\dot{r}^2 + r^2\dot{\theta}^2 + r^2\sin^2\theta\,\dot{\phi}^2) - \frac{k}{2}r^2.$$
The equations of motion are
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{\phi}} = \frac{d}{dt}\left(mr^2\sin^2\theta\,\dot{\phi}\right) = 0 \qquad (1.16)$$
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{\theta}} - \frac{\partial L}{\partial\theta} = \frac{d}{dt}\left(mr^2\dot{\theta}\right) - mr^2\sin\theta\cos\theta\,\dot{\phi}^2 = 0 \qquad (1.17)$$
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{r}} - \frac{\partial L}{\partial r} = \frac{d}{dt}(m\dot{r}) - mr\dot{\theta}^2 - mr\sin^2\theta\,\dot{\phi}^2 + kr = 0 \qquad (1.18)$$
We now prove that the motion of a particle in a central force field lies in a plane containing the origin. The force acting on the particle at any given time is directed towards the origin. Now place an arbitrary Cartesian frame centered on the particle with the $z$ axis parallel to the direction of motion, as sketched in Fig. 1.2. Note that the $y$ axis is perpendicular to the plane of the page, and hence there is no force component in that direction. Consequently, the motion of the particle is constrained to lie in the $zx$ plane, i.e. the plane of the page, and there is no force component which will take the particle out of this plane.
[Figure 1.2: Vector diagram for motion in a central force. The particle's motion is along the $Z$ axis, which lies in the plane of the page.]
Let's make a change of coordinates by rotating the original frame to a new one in which the new $z'$ axis is perpendicular to the plane containing the initial position and velocity vectors. In the sketch above, this new $z'$ axis would be perpendicular to the page and would contain the $y$ axis we placed on the moving particle. In terms of these new coordinates, the Lagrangian will have the same form as before, since our initial choice of axes was arbitrary. However, now we have some additional constraints. Because the motion is constrained to lie in the $x'y'$ plane, $\theta = \pi/2$ is a constant, and $\dot{\theta} = 0$. Thus $\cos(\pi/2) = 0$ and $\sin(\pi/2) = 1$ in the equations above. From the equation for $\phi$ we find
$$\frac{d}{dt}\left(mr^2\dot{\phi}\right) = 0,$$
or
$$mr^2\dot{\phi} = \text{const} = p_\phi.$$
This we can put into the $r$ equation:
$$\frac{d}{dt}(m\dot{r}) - mr\dot{\phi}^2 + kr = 0 \qquad (1.19)$$
$$\frac{d}{dt}(m\dot{r}) - \frac{p_\phi^2}{mr^3} + kr = 0 \qquad (1.20)$$
where we notice that $p_\phi^2/(mr^3)$ is the centrifugal force. Taking the last equation, multiplying by $\dot{r}$, and then integrating with respect to time gives
$$\dot{r}^2 = -\frac{p_\phi^2}{m^2r^2} - \frac{k}{m}r^2 + b, \qquad (1.21)$$
i.e.
$$\dot{r} = \sqrt{-\frac{p_\phi^2}{m^2r^2} - \frac{k}{m}r^2 + b}. \qquad (1.22)$$
Integrating once again with respect to time,
$$t - t_o = \int\frac{r\,dr}{r\dot{r}} \qquad (1.23)$$
$$= \int\frac{r\,dr}{\sqrt{-p_\phi^2/m^2 - (k/m)r^4 + br^2}} \qquad (1.24)$$
$$= \frac{1}{2}\int\frac{dx}{\sqrt{a + bx + cx^2}}, \qquad (1.25)$$
where $x = r^2$, $a = -p_\phi^2/m^2$, $b$ is the constant of integration, and $c = -k/m$. This is a standard integral, and we can evaluate it to find
$$r^2 = \frac{1}{2c}\left(-b + A\sin\!\left(2\sqrt{-c}\,(t - t_o)\right)\right), \qquad (1.26)$$
where
$$A = \sqrt{b^2 - \frac{4p_\phi^2 c}{m^2}}.$$
What we see, then, is that $r$ follows an elliptical path in a plane determined by the initial position and velocity.
This example also illustrates another important point which has tremendous impact on molecular quantum mechanics, namely, that the angular momentum about the axis of rotation is conserved. We can choose any axis we want. In order to avoid confusion, let us define $\chi$ as the angular rotation about the body-fixed $Z'$ axis and $\phi$ as the angular rotation about the original $Z$ axis. Our conservation equations are then
$$mr^2\dot{\chi} = p_\chi$$
about the $Z'$ axis, and
$$mr^2\sin\theta\,\dot{\phi} = p_\phi$$
for some arbitrary fixed $Z$ axis. The angle $\theta$ will also have an angular momentum associated with it, $p_\theta = mr^2\dot{\theta}$, but we do not have an associated conservation principle for this term since it varies with $\theta$. We can connect $p_\chi$ with $p_\theta$ and $p_\phi$ about the other axes via
$$p_\chi\,d\chi = p_\theta\,d\theta + p_\phi\,d\phi.$$
Consequently,
$$mr^2\dot{\chi}\,d\chi = mr^2\left(\sin\theta\,\dot{\phi}\,d\phi + \dot{\theta}\,d\theta\right).$$
Here we see that the angular momentum vector remains fixed in space in the absence of any external forces. Once an object starts spinning, its axis of rotation remains pointing in a given direction unless something acts upon it (a torque); in essence, in classical mechanics we can fully specify $L_x$, $L_y$, and $L_z$ as constants of the motion since $d\vec{L}/dt = 0$. In a later chapter, we will cover the quantum mechanics of rotations in much more detail. In the quantum case, we will find that one cannot make such a precise specification of the angular momentum vector for systems with low angular momentum. We will, however, recover the classical limit in the end as we consider the limit of large angular momenta.
1.3 Conservation Laws
We just encountered one extremely important concept in mechanics, namely, that some quantities are conserved if there is an underlying symmetry. Next, we consider a conservation law arising from the homogeneity of time. For a closed dynamical system, the Lagrangian does not explicitly depend upon time. Thus we can write
$$\frac{dL}{dt} = \frac{\partial L}{\partial x}\dot{x} + \frac{\partial L}{\partial\dot{x}}\ddot{x}. \qquad (1.27)$$
Replacing $\partial L/\partial x$ with Lagrange's equation, we obtain
$$\frac{dL}{dt} = \dot{x}\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{x}}\right) + \frac{\partial L}{\partial\dot{x}}\ddot{x} \qquad (1.28)$$
$$= \frac{d}{dt}\left(\dot{x}\frac{\partial L}{\partial\dot{x}}\right). \qquad (1.29)$$
Now, rearranging this a bit,
$$\frac{d}{dt}\left(\dot{x}\frac{\partial L}{\partial\dot{x}} - L\right) = 0. \qquad (1.30)$$
So we can take the quantity in the parentheses to be a constant:
$$E = \left(\dot{x}\frac{\partial L}{\partial\dot{x}} - L\right) = \text{const} \qquad (1.31)$$
is an integral of the motion. This is the energy of the system. Since $L$ can be written in the form $L = T - V$, where $T$ is a quadratic function of the velocities, we can use Euler's theorem on homogeneous functions:
$$\dot{x}\frac{\partial L}{\partial\dot{x}} = \dot{x}\frac{\partial T}{\partial\dot{x}} = 2T.$$
This gives
$$E = T + V,$$
which says that the energy of the system can be written as the sum of two different terms: the kinetic energy, or energy of motion, and the potential energy, or energy of location.
One can also prove that linear momentum is conserved when space is homogeneous. That is, when we can translate our system by some arbitrary amount, our dynamical quantities must remain unchanged. We will prove this in the problem sets.
1.4 Hamiltonian Dynamics
Hamiltonian dynamics is a further generalization of classical dynamics and provides a crucial link with quantum mechanics. Hamilton's function, $H$, is written in terms of the particle's position and momentum, $H = H(p, q)$. It is related to the Lagrangian via
$$H = \dot{q}p - L(q, \dot{q}).$$
Taking the derivative of $H$ with respect to $q$,
$$\frac{\partial H}{\partial q} = -\frac{\partial L}{\partial q} = -\dot{p}.$$
Differentiation with respect to $p$ gives
$$\frac{\partial H}{\partial p} = \dot{q}.$$
These last two equations give the conservation conditions in the Hamiltonian formalism. If $H$ is independent of the position of the particle, then the generalized momentum $p$ is constant in time. If the potential energy is independent of time, the Hamiltonian gives the total energy of the system,
$$H = T + V.$$
1.4.1 Interaction between a charged particle and an electromagnetic field
We consider here a free particle with mass $m$ and charge $e$ in an electromagnetic field. The Hamiltonian is
$$H = p_x\dot{x} + p_y\dot{y} + p_z\dot{z} - L \qquad (1.32)$$
$$= \dot{x}\frac{\partial L}{\partial\dot{x}} + \dot{y}\frac{\partial L}{\partial\dot{y}} + \dot{z}\frac{\partial L}{\partial\dot{z}} - L. \qquad (1.33)$$
Our goal is to write this Hamiltonian in terms of momenta and coordinates.
For a charged particle in a field, the force acting on the particle is the Lorentz force. Here it is useful to introduce a vector and a scalar potential and to work in cgs units:
$$\vec{F} = \frac{e}{c}\vec{v}\times(\nabla\times\vec{A}) - \frac{e}{c}\frac{\partial\vec{A}}{\partial t} - e\nabla\phi.$$
The force in the $x$ direction is given by
$$F_x = \frac{d}{dt}m\dot{x} = \frac{e}{c}\left(\dot{y}\frac{\partial A_y}{\partial x} + \dot{z}\frac{\partial A_z}{\partial x}\right) - \frac{e}{c}\left(\dot{y}\frac{\partial A_x}{\partial y} + \dot{z}\frac{\partial A_x}{\partial z} + \frac{\partial A_x}{\partial t}\right) - e\frac{\partial\phi}{\partial x},$$
with the remaining components given by cyclic permutation. Since
$$\frac{dA_x}{dt} = \frac{\partial A_x}{\partial t} + \dot{x}\frac{\partial A_x}{\partial x} + \dot{y}\frac{\partial A_x}{\partial y} + \dot{z}\frac{\partial A_x}{\partial z},$$
we can rewrite the force as
$$F_x = \frac{e}{c}\frac{\partial}{\partial x}\left(\vec{v}\cdot\vec{A}\right) - \frac{e}{c}\frac{dA_x}{dt} - e\frac{\partial\phi}{\partial x}.$$
Based upon this, we find that the Lagrangian is
$$L = \frac{1}{2}m\dot{x}^2 + \frac{1}{2}m\dot{y}^2 + \frac{1}{2}m\dot{z}^2 + \frac{e}{c}\vec{v}\cdot\vec{A} - e\phi,$$
where $\phi$ is a velocity-independent and static potential.
Continuing on, the Hamiltonian is
$$H = \frac{m}{2}(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) + e\phi \qquad (1.34)$$
$$= \frac{1}{2m}\left((m\dot{x})^2 + (m\dot{y})^2 + (m\dot{z})^2\right) + e\phi. \qquad (1.35)$$
The velocities, $m\dot{x}$, are derived from the Lagrangian via the canonical relation
$$p = \frac{\partial L}{\partial\dot{x}}.$$
From this we find
$$m\dot{x} = p_x - \frac{e}{c}A_x \qquad (1.36)$$
$$m\dot{y} = p_y - \frac{e}{c}A_y \qquad (1.37)$$
$$m\dot{z} = p_z - \frac{e}{c}A_z, \qquad (1.38)$$
and the resulting Hamiltonian is
$$H = \frac{1}{2m}\left[\left(p_x - \frac{e}{c}A_x\right)^2 + \left(p_y - \frac{e}{c}A_y\right)^2 + \left(p_z - \frac{e}{c}A_z\right)^2\right] + e\phi.$$
We see here an important concept relating the velocity and the momentum. In the absence of a vector potential, the velocity and the momentum are parallel. However, when a vector potential is included, the actual velocity of a particle is no longer parallel to its momentum and is in fact deflected by the vector potential.
1.4.2 Time dependence of a dynamical variable
One of the important applications of Hamiltonian mechanics is in the dynamical evolution of a variable which depends upon $p$ and $q$, $G(p, q)$. The total derivative of $G$ is
$$\frac{dG}{dt} = \frac{\partial G}{\partial t} + \frac{\partial G}{\partial q}\dot{q} + \frac{\partial G}{\partial p}\dot{p}.$$
From Hamilton's equations, we have the canonical definitions
$$\dot{q} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial q}.$$
Thus,
$$\frac{dG}{dt} = \frac{\partial G}{\partial t} + \frac{\partial G}{\partial q}\frac{\partial H}{\partial p} - \frac{\partial G}{\partial p}\frac{\partial H}{\partial q} \qquad (1.39)$$
$$\frac{dG}{dt} = \frac{\partial G}{\partial t} + \{G, H\}, \qquad (1.40)$$
where $\{G, H\}$ is called the Poisson bracket of the two dynamical quantities $G$ and $H$:
$$\{G, H\} = \frac{\partial G}{\partial q}\frac{\partial H}{\partial p} - \frac{\partial G}{\partial p}\frac{\partial H}{\partial q}.$$
We can also define a linear operator $\mathcal{L}$ as generating the Poisson bracket with the Hamiltonian,
$$\mathcal{L}G = \frac{1}{i}\{G, H\},$$
so that if $G$ does not depend explicitly upon time,
$$G(t) = \exp(i\mathcal{L}t)\,G(0),$$
where $\exp(i\mathcal{L}t)$ is the propagator which carries $G(0)$ to $G(t)$.
Also, note that if $\{G, H\} = 0$, then $dG/dt = 0$, so that $G$ is a constant of the motion. This, along with the construction of the Poisson bracket itself, has considerable importance in the realm of quantum mechanics.
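A Poisson bracket is easy to evaluate numerically (a minimal sketch of mine, not from the notes), using central finite differences for the partial derivatives. For the harmonic oscillator Hamiltonian $H = p^2/2m + kq^2/2$, we can check that $\{q, H\} = p/m = \dot{q}$ and that $\{H, H\} = 0$, so $H$ is a constant of the motion.

```python
# Numerical Poisson bracket {G, H} = G_q H_p - G_p H_q via central differences.
def poisson(G, H, q, p, h=1e-5):
    dGq = (G(q + h, p) - G(q - h, p)) / (2 * h)
    dGp = (G(q, p + h) - G(q, p - h)) / (2 * h)
    dHq = (H(q + h, p) - H(q - h, p)) / (2 * h)
    dHp = (H(q, p + h) - H(q, p - h)) / (2 * h)
    return dGq * dHp - dGp * dHq

m, k = 2.0, 3.0
H = lambda q, p: p * p / (2 * m) + 0.5 * k * q * q
Q = lambda q, p: q            # the coordinate itself as a dynamical variable

qdot = poisson(Q, H, q=0.7, p=1.3)   # should equal p/m = 0.65
hh   = poisson(H, H, q=0.7, p=1.3)   # any quantity commutes with itself
```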
1.4.3 Virial Theorem
Finally, we turn our attention to a concept which has played an important role in both quantum and classical mechanics. Consider a function $G$ that is a product of a linear momentum and a coordinate,
$$G = pq.$$
The time derivative is simply
$$\frac{dG}{dt} = q\dot{p} + p\dot{q}.$$
Now let's take a time average of both sides of this last equation:
$$\left\langle\frac{d}{dt}pq\right\rangle = \lim_{T\to\infty}\frac{1}{T}\int_0^T\left(\frac{d}{dt}pq\right)dt \qquad (1.41)$$
$$= \lim_{T\to\infty}\frac{1}{T}\int_0^T d(pq) \qquad (1.42)$$
$$= \lim_{T\to\infty}\frac{1}{T}\left((pq)_T - (pq)_0\right). \qquad (1.43)$$
If the trajectories of the system are bounded, both $p$ and $q$ are periodic in time and are therefore finite. Thus, the average must vanish as $T\to\infty$, giving
$$\langle p\dot{q} + q\dot{p}\rangle = 0. \qquad (1.44)$$
Since $p\dot{q} = 2T$ and $\dot{p} = F$, we have
$$\langle 2T\rangle = -\langle qF\rangle. \qquad (1.45)$$
In Cartesian coordinates this leads to
$$\langle 2T\rangle = -\left\langle\sum_i x_i F_i\right\rangle. \qquad (1.46)$$
For a conservative system, $F = -\nabla V$. Thus, if we have a centro-symmetric potential given by $V = Cr^n$, it is easy to show that
$$\langle 2T\rangle = n\langle V\rangle.$$
For the case of the harmonic oscillator, $n = 2$ and $\langle T\rangle = \langle V\rangle$. So, for example, if we have a total energy equal to $kT$ in this mode, then $\langle T\rangle + \langle V\rangle = kT$ and $\langle T\rangle = \langle V\rangle = kT/2$. Moreover, for the interaction between two opposite charges separated by $r$, $n = -1$ and
$$\langle 2T\rangle = -\langle V\rangle.$$
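The harmonic-oscillator case $\langle T\rangle = \langle V\rangle$ is easy to verify numerically (a check of mine, not from the notes): average the kinetic and potential energies of the exact trajectory $x(t) = x_o\cos\omega t$ over one full period. Parameter values are arbitrary.

```python
# Time averages of T and V for the harmonic oscillator over one period.
import math

m, k, x_o = 1.0, 4.0, 0.5
w = math.sqrt(k / m)
period = 2 * math.pi / w
n = 100000
dt = period / n

Tavg = Vavg = 0.0
for i in range(n):
    t = i * dt
    x = x_o * math.cos(w * t)
    v = -x_o * w * math.sin(w * t)
    Tavg += 0.5 * m * v * v * dt / period    # kinetic-energy contribution
    Vavg += 0.5 * k * x * x * dt / period    # potential-energy contribution
```

Both averages come out equal to $\frac{1}{4}kx_o^2$, as the virial theorem requires for $n = 2$.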
[Figure 1.3: Screen shot of using Mathematica to plot the phase plane for the harmonic oscillator. Here $k/m = 1$ and $x_o = 0.75$.]
Chapter 2
Waves and Wavefunctions
In the world of quantum physics, no phenomenon is a phenomenon until it is a recorded phenomenon. -- John Archibald Wheeler
The physical basis of quantum mechanics is:
1. That matter, such as electrons, always arrives at a point as a discrete chunk, but that the probability of finding a chunk at a specified position is like the intensity distribution of a wave.
2. The quantum state of a system is described by a mathematical object called a wavefunction or state vector and is denoted $|\psi\rangle$.
3. The state $|\psi\rangle$ can be expanded in terms of the basis states of a given vector space, $|\phi_i\rangle$, as
$$|\psi\rangle = \sum_i |\phi_i\rangle\langle\phi_i|\psi\rangle, \qquad (2.1)$$
where $\langle\phi_i|\psi\rangle$ denotes an inner product of the two vectors.
4. Observable quantities are associated with the expectation values of Hermitian operators, and the eigenvalues of such operators are always real.
5. If two operators commute, one can measure the two associated physical quantities simultaneously to arbitrary precision.
6. The result of a physical measurement projects $|\psi\rangle$ onto an eigenstate of the associated operator, $|\phi_n\rangle$, yielding a measured value of $a_n$ with probability $|\langle\phi_n|\psi\rangle|^2$.
2.1 Position and Momentum Representation of $|\psi\rangle$
Two common operators which we shall use extensively are the position and momentum operators. (The majority of this lecture comes from Cohen-Tannoudji, Chapter 1, with part from Feynman & Hibbs.)
The position operator acts on the state $|\psi\rangle$ to give the amplitude of the system to be at a given position:
$$\hat{x}|\psi\rangle = |x\rangle\langle x|\psi\rangle \qquad (2.2)$$
$$= |x\rangle\psi(x). \qquad (2.3)$$
We shall call $\psi(x)$ the wavefunction of the system, since it is the amplitude of $|\psi\rangle$ at point $x$. Here we can see that $|x\rangle$ is an eigenstate of the position operator. We also define the momentum operator $\hat{p}$ as a derivative operator:
$$\hat{p} = -i\hbar\frac{\partial}{\partial x}. \qquad (2.4)$$
Thus,
$$\hat{p}\psi(x) = -i\hbar\psi'(x). \qquad (2.5)$$
Note that in general $\psi'(x) \neq \psi(x)$; thus an eigenstate of the position operator is not also an eigenstate of the momentum operator.
We can deduce this also from the fact that $\hat{x}$ and $\hat{p}$ do not commute. To see this, first consider
$$\frac{\partial}{\partial x}x f(x) = f(x) + x f'(x). \qquad (2.6)$$
Thus (using the shorthand $\partial_x$ for the partial derivative with respect to $x$),
$$[\hat{x}, \hat{p}]f(x) = -i\hbar\left(x\partial_x f(x) - \partial_x(x f(x))\right) \qquad (2.7)$$
$$= -i\hbar\left(x f'(x) - f(x) - x f'(x)\right) \qquad (2.8)$$
$$= i\hbar f(x). \qquad (2.9)$$
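The commutator computation of Eqs. (2.7)-(2.9) can be checked numerically (a sketch of mine, not from the notes): apply $x\,\partial_x - \partial_x\,x$ to a test function via central differences and verify that the result is $-f(x)$, so that $[\hat{x}, \hat{p}]f = i\hbar f$ for $\hat{p} = -i\hbar\,\partial_x$. The test function $f(x) = e^{-x^2}$ and evaluation point are arbitrary choices.

```python
# Finite-difference check that (x d/dx - d/dx x) f = -f.
import math

f = lambda x: math.exp(-x * x)
h = 1e-5

def d(g, x):
    """Central-difference derivative of g at x."""
    return (g(x + h) - g(x - h)) / (2 * h)

x0 = 0.8
lhs = x0 * d(f, x0) - d(lambda x: x * f(x), x0)   # (x d/dx - d/dx x) f at x0
```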
What are the eigenstates of the $\hat{p}$ operator? To find them, consider the following eigenvalue equation:
$$\hat{p}|\psi(k)\rangle = k|\psi(k)\rangle. \qquad (2.10)$$
Inserting a complete set of position states using the idempotent operator
$$I = \int |x\rangle\langle x|\,dx \qquad (2.11)$$
and using the coordinate representation of the momentum operator, we get
$$-i\hbar\,\partial_x\psi(k, x) = k\,\psi(k, x). \qquad (2.12)$$
Thus, the solution of this is (subject to normalization)
$$\psi(k, x) = C\exp(ikx/\hbar) = \langle x|\psi(k)\rangle. \qquad (2.13)$$
We can also use the $|\psi(k)\rangle = |k\rangle$ states as a basis for the state $|\psi\rangle$ by writing
$$|\psi\rangle = \int dk\,|k\rangle\langle k|\psi\rangle \qquad (2.14)$$
$$= \int dk\,|k\rangle\overline{\psi}(k), \qquad (2.15)$$
where $\overline{\psi}(k)$ is related to $\psi(x)$ via
$$\overline{\psi}(k) = \langle k|\psi\rangle = \int dx\,\langle k|x\rangle\langle x|\psi\rangle \qquad (2.16)$$
$$= C\int dx\,\exp(-ikx/\hbar)\,\psi(x). \qquad (2.17)$$
This type of integral is called a Fourier transform. There are a number of ways to define the normalization $C$ when using this transform; for our purposes at the moment, we'll set $C = 1/\sqrt{2\pi\hbar}$, so that
$$\psi(x) = \frac{1}{\sqrt{2\pi\hbar}}\int dk\,\overline{\psi}(k)\exp(ikx/\hbar) \qquad (2.18)$$
and
$$\overline{\psi}(k) = \frac{1}{\sqrt{2\pi\hbar}}\int dx\,\psi(x)\exp(-ikx/\hbar). \qquad (2.19)$$
Using this choice of normalization, the transform and the inverse transform have symmetric forms, and we only need to remember the sign in the exponential.
2.2 The Schrödinger Equation
Postulate 2.1 The quantum state of the system is a solution of the Schrödinger equation
$$i\hbar\,\partial_t|\psi(t)\rangle = H|\psi(t)\rangle, \qquad (2.20)$$
where $H$ is the quantum mechanical analogue of the classical Hamiltonian.
From classical mechanics, $H$ is the sum of the kinetic and potential energy of a particle,
$$H = \frac{1}{2m}p^2 + V(x). \qquad (2.21)$$
Thus, using the quantum analogues of the classical $x$ and $p$, the quantum $H$ is
$$H = \frac{1}{2m}\hat{p}^2 + V(\hat{x}). \qquad (2.22)$$
To evaluate $V(\hat{x})$ we need the theorem that a function of an operator, acting on an eigenstate of that operator, is the function evaluated at the eigenvalue. The proof is straightforward: Taylor expand the function about some point. If
$$V(x) = V(0) + xV'(0) + \frac{1}{2}V''(0)x^2 + \cdots \qquad (2.23)$$
then
$$V(\hat{x}) = V(0) + \hat{x}V'(0) + \frac{1}{2}V''(0)\hat{x}^2 + \cdots, \qquad (2.24)$$
since for any operator
$$[\hat{f}, \hat{f}^p] = 0 \quad \forall p. \qquad (2.25)$$
Thus, we have
$$\langle x|V(\hat{x})|\psi\rangle = V(x)\psi(x). \qquad (2.26)$$
So, in coordinate form, the Schrödinger equation is written as
$$i\hbar\frac{\partial}{\partial t}\psi(x, t) = \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)\right)\psi(x, t). \qquad (2.27)$$
2.2.1 Gaussian Wavefunctions
Let's assume that our initial state is a Gaussian in $x$ with some initial momentum $k_o$:
$$\psi(x, 0) = \left(\frac{2}{\pi a^2}\right)^{1/4}\exp(ik_o x)\exp(-x^2/a^2). \qquad (2.28)$$
The momentum representation of this is
$$\overline{\psi}(k, 0) = \frac{1}{\sqrt{2\pi\hbar}}\int dx\,e^{-ikx}\,\psi(x, 0) \qquad (2.29)$$
$$= \left(\frac{a^2}{2\pi}\right)^{1/4}e^{-(k - k_o)^2 a^2/4}. \qquad (2.30)$$
In Fig. 2.1, we see a Gaussian wavepacket centered about $x = 0$ with $k_o = 10$ and $a = 1$. For now we will use dimensionless units. The red and blue components correspond to the real and imaginary components of $\psi$, and the black curve is $|\psi(x)|^2$. Notice that the wavefunction is pretty localized along the $x$ axis.
In the next figure (Fig. 2.2), we have the momentum distribution of the wavefunction, $\overline{\psi}(k, 0)$. Again, we have chosen $k_o = 10$. Notice that the center of the distribution is shifted to $k_o$.
For $f(x) = \exp(-x^2/b^2)$, the RMS width is $\Delta x = b/\sqrt{2}$. Thus, when $x$ varies from 0 to $\Delta x$, $f(x)$ is diminished by a factor of $1/\sqrt{e}$. ($\Delta x$ is the RMS deviation of $f(x)$.)
For the Gaussian wavepacket:
$$\Delta x = a/2 \qquad (2.31)$$
$$\Delta k = 1/a \qquad (2.32)$$
or
$$\Delta p = \hbar/a. \qquad (2.33)$$
Thus, $\Delta x\,\Delta p = \hbar/2$ for the initial wavefunction.
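We can confirm Eqs. (2.31)-(2.33) numerically (my check, not from the notes): sample $|\psi(x,0)|^2$ and $|\overline{\psi}(k,0)|^2$ on grids, compute their RMS widths, and verify $\Delta x\,\Delta k = 1/2$ (i.e. $\Delta x\,\Delta p = \hbar/2$ with $\hbar = 1$). The values of $a$ and $k_o$ are arbitrary.

```python
# RMS widths of the Gaussian packet in x- and k-space.
import math

a, k_o = 1.3, 10.0

def rms_width(prob, grid):
    """RMS deviation of an (unnormalized) sampled probability density."""
    dg = grid[1] - grid[0]
    norm = sum(prob) * dg
    mean = sum(p * g for p, g in zip(prob, grid)) * dg / norm
    var = sum(p * (g - mean) ** 2 for p, g in zip(prob, grid)) * dg / norm
    return math.sqrt(var)

xs = [-8 + 0.01 * i for i in range(1601)]
ks = [k_o - 8 + 0.01 * i for i in range(1601)]
px = [math.exp(-2 * x * x / a**2) for x in xs]          # |psi(x,0)|^2
pk = [math.exp(-(k - k_o)**2 * a**2 / 2) for k in ks]   # |psibar(k,0)|^2

dx, dk = rms_width(px, xs), rms_width(pk, ks)
```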
[Figure 2.1: Real (red), imaginary (blue) and absolute value (black) of the Gaussian wavepacket $\psi(x)$.]
[Figure 2.2: Momentum-space distribution $\overline{\psi}(k)$.]
2.2.2 Evolution of $\psi(x)$
Now let's consider the evolution of a free particle. By a free particle, we mean a particle whose potential energy does not change, i.e. we set $V(x) = 0$ for all $x$ and solve
$$i\hbar\frac{\partial}{\partial t}\psi(x, t) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\psi(x, t). \qquad (2.34)$$
This equation is actually easier to solve in $k$-space. Taking the Fourier transform,
$$i\hbar\,\partial_t\overline{\psi}(k, t) = \frac{k^2}{2m}\overline{\psi}(k, t). \qquad (2.35)$$
Thus, the temporal solution of the equation is
$$\overline{\psi}(k, t) = \exp(-ik^2 t/(2m\hbar))\,\overline{\psi}(k, 0), \qquad (2.36)$$
subject to some initial function $\overline{\psi}(k, 0)$. To get the coordinate $x$-representation of the solution, we can use the FT relations above:
$$\psi(x, t) = \frac{1}{\sqrt{2\pi\hbar}}\int dk\,\overline{\psi}(k, t)\exp(ikx) \qquad (2.37)$$
$$= \int dx'\,\langle x|\exp(-i\hat{p}^2 t/(2m\hbar))|x'\rangle\,\psi(x', 0) \qquad (2.38)$$
$$= \sqrt{\frac{m}{2\pi i\hbar t}}\int dx'\,\exp\left(\frac{im(x - x')^2}{2\hbar t}\right)\psi(x', 0) \qquad (2.39)$$
$$= \int dx'\,G_o(x, x')\,\psi(x', 0). \qquad (2.40)$$
(Homework: derive $G_o$ and show that $G_o$ is a solution of the free particle Schrödinger equation, $HG_o = i\hbar\,\partial_t G_o$.) The function $G_o$ is called the free particle propagator or Green's function, and it tells us the amplitude for a particle to start off at $x'$ and end up at another point $x$ at time $t$.
[Figure 2.3: $G_o$ for fixed $t$ as a function of $x$.]
The sketch tells us that in order to get far away from the initial point in time $t$, we need a lot of energy (the wiggles get closer together, implying a higher Fourier component).
Here we see that the probability to find the particle at the initial point decreases with time. The period of oscillation ($T$) is the time required to increase the phase by $2\pi$:
$$2\pi = \frac{mx^2}{2\hbar t} - \frac{mx^2}{2\hbar(t + T)} \qquad (2.41)$$
$$= \frac{mx^2}{2\hbar t}\,\frac{T}{t}\,\frac{1}{1 + T/t}. \qquad (2.42)$$
Let $\omega = 2\pi/T$ and take the long time limit $t \gg T$; then we can estimate
$$\omega \approx \frac{m}{2\hbar}\left(\frac{x}{t}\right)^2. \qquad (2.43)$$
Since the classical kinetic energy is given by $E = \frac{m}{2}v^2$, we obtain
$$E = \hbar\omega. \qquad (2.44)$$
Thus, the energy of the wave is proportional to the frequency of oscillation.
We can evaluate the evolution in $x$ using either the $G_o$ we derived above or by taking the FT of the wavefunction evolving in $k$-space. Recall that the solution in $k$-space was
$$\overline{\psi}(k, t) = \exp(-ik^2 t/(2m\hbar))\,\overline{\psi}(k, 0). \qquad (2.45)$$
Assuming a Gaussian form for $\overline{\psi}(k)$ as above,
$$\psi(x, t) = \frac{\sqrt{a}}{(2\pi)^{3/4}}\int dk\,e^{-a^2(k - k_o)^2/4}\,e^{i(kx - \omega(k)t)}, \qquad (2.46)$$
where $\omega(k)$ is the dispersion relation for a free particle,
$$\omega(k) = \frac{\hbar k^2}{2m}. \qquad (2.47)$$
Cranking through the integral,
$$\psi(x, t) = \left(\frac{2a^2}{\pi}\right)^{1/4}\frac{e^{i\theta}}{\left(a^4 + 4\hbar^2 t^2/m^2\right)^{1/4}}\,e^{ik_o x}\exp\left(-\frac{(x - \hbar k_o t/m)^2}{a^2 + 2i\hbar t/m}\right), \qquad (2.48)$$
where $\theta = -\hbar k_o^2 t/(2m) - \varphi$ with $\tan 2\varphi = 2\hbar t/(ma^2)$.
Likewise, for the probability density,
$$|\psi(x, t)|^2 = \sqrt{\frac{1}{2\pi\,\Delta x(t)^2}}\exp\left(-\frac{(x - v_o t)^2}{2\,\Delta x(t)^2}\right), \qquad (2.49)$$
[Figure 2.4: Evolution of a free particle wavefunction. In this case we have given the initial state a kick in the $+x$ direction. Notice that as the system moves, the center moves at a constant rate, whereas the width of the packet continually spreads out over time.]
where I define
$$\Delta x(t) = \frac{a}{2}\sqrt{1 + \frac{4\hbar^2 t^2}{m^2 a^4}} \qquad (2.50)$$
as the time-dependent RMS width of the wave, and the group velocity
$$v_o = \frac{\hbar k_o}{m}. \qquad (2.51)$$
Now, since $\Delta p = \hbar\Delta k = \hbar/a$ is a constant for all time, the uncertainty relation becomes
$$\Delta x(t)\,\Delta p \geq \hbar/2, \qquad (2.52)$$
corresponding to the particle's wavefunction becoming more and more diffuse as it evolves in time.
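The width formula, Eq. (2.50), can be tested directly (a numerical sketch of mine, not from the notes): discretize the $k$-integral of Eq. (2.46) with the free-particle dispersion $\omega(k) = k^2/2$ (taking $\hbar = m = 1$), then measure the RMS width and center of $|\psi(x,t)|^2$. Grid sizes and parameter values are arbitrary choices.

```python
# Build psi(x,t) as a Riemann sum over k and compare its width to Eq. (2.50).
import cmath, math

a, k_o, t = 1.0, 2.0, 0.5
ks = [k_o - 8 + 0.02 * i for i in range(801)]    # k grid around k_o
dk = ks[1] - ks[0]
xs = [-5 + 0.05 * i for i in range(241)]         # x grid around k_o * t

prob = []
for x in xs:
    amp = sum(cmath.exp(-a**2 * (k - k_o)**2 / 4) *
              cmath.exp(1j * (k * x - 0.5 * k * k * t)) for k in ks) * dk
    prob.append(abs(amp) ** 2)

dxg = xs[1] - xs[0]
norm = sum(prob) * dxg
mean = sum(p * x for p, x in zip(prob, xs)) * dxg / norm
var = sum(p * (x - mean) ** 2 for p, x in zip(prob, xs)) * dxg / norm
width = math.sqrt(var)
predicted = (a / 2) * math.sqrt(1 + 4 * t * t / a**4)   # Eq. (2.50)
```

The packet center should sit at $v_o t = k_o t$ and the numerical width should match the analytic $\Delta x(t)$.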
2.3 Particle in a Box
2.3.1 Infinite Box
The Mathematica handout shows how one can use Mathematica to set up and solve some simple problems on the computer. (One good class problem would be to use Mathematica to carry out the symbolic manipulations for a useful or interesting problem and/or to solve the problem numerically.)
The potential we'll work with for this example consists of two infinitely steep walls placed at $x = \ell$ and $x = 0$, such that between the two walls $V(x) = 0$. Within this region, we seek solutions to the differential equation
$$\partial_x^2\psi(x) = -\frac{2mE}{\hbar^2}\psi(x). \qquad (2.53)$$
The solutions of this are plane waves traveling to the left and to the right,
$$\psi(x) = A\exp(-ikx) + B\exp(+ikx). \qquad (2.54)$$
The coefficients $A$ and $B$ we'll have to determine. $k$ is determined by substitution back into the differential equation,
$$\psi''(x) = -k^2\psi(x). \qquad (2.55)$$
Thus $k^2 = 2mE/\hbar^2$, or $\hbar k = \sqrt{2mE}$. Let's work in units in which $\hbar = 1$ and $m_e = 1$. The energy unit in these units is the hartree ($\approx 27.2$ eV). Posted on the web page is a file (a C header file) which has a number of useful conversion factors.
Since $\psi(x)$ must vanish at $x = 0$ and $x = \ell$,
$$A + B = 0 \qquad (2.56)$$
$$A\exp(-ik\ell) + B\exp(ik\ell) = 0. \qquad (2.57)$$
We can see immediately that $A = -B$ and that the solutions must correspond to a family of sine functions,
$$\psi(x) = A\sin(n\pi x/\ell). \qquad (2.58)$$
Just as a check,
$$\psi(\ell) = A\sin(n\pi\ell/\ell) = A\sin(n\pi) = 0. \qquad (2.59)$$
The eigenenergies are obtained by applying the Hamiltonian to the wavefunction solution
E
n

n
(x) =
h
2
2m

2
x

n
(x) (2.62)
=
h
2
n
2

2
2a
2
m

n
(x) (2.63)
Thus we can write E
n
as a function of n
E
n
=
h
2

2
2a
2
m
n
2
(2.64)
for n = 0, 1, 2, .... What about the case where n = 0? Clearly its an allowed solution of
the Schrodinger Equation. However, we also required that the probability to nd the particle
anywhere must be 1. Thus, the n = 0 solution cannot be permitted.
Note also that the cosine functions are also allowed solutions. However, the restriction of
(0) = 0 and () = 0 discounts these solutions.
In Fig. 2.5 we show the rst few eigenstates for an electron trapped in a well of length a = .
The potential is shown in gray. Notice that the number of nodes increases as the energy increases.
In fact, one can determine the state of the system by simply counting nodes.
What about orthonormality? We stated that the solutions of the eigenvalue problem form an orthonormal basis. In Dirac notation we can write
$$\langle\psi_n|\psi_m\rangle = \int dx\,\langle\psi_n|x\rangle\langle x|\psi_m\rangle \qquad (2.65)$$
$$= \int_0^\ell dx\,\psi_n^*(x)\psi_m(x) \qquad (2.66)$$
$$= \frac{2}{\ell}\int_0^\ell dx\,\sin(n\pi x/\ell)\sin(m\pi x/\ell) \qquad (2.67)$$
$$= \delta_{nm}. \qquad (2.68)$$
[Figure 2.5: Particle in a box states.]
Thus, we can see that these solutions do form a complete set of orthogonal states on the range $x = [0, \ell]$. Note that it's important to specify the range, since clearly the sine functions are not a set of orthogonal functions over the entire $x$ axis.
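Eqs. (2.65)-(2.68) are easy to confirm numerically (my check, not from the notes): evaluate the overlap integrals $(2/\ell)\int_0^\ell\sin(n\pi x/\ell)\sin(m\pi x/\ell)\,dx$ by the trapezoid rule and verify they reproduce the Kronecker delta. The box length is an arbitrary choice.

```python
# Trapezoid-rule overlaps of particle-in-a-box eigenfunctions.
import math

l = 2.7          # arbitrary box length
N = 20000        # number of trapezoid panels
dx = l / N

def overlap(n, m):
    s = 0.0
    for i in range(N + 1):
        x = i * dx
        w = 0.5 if i in (0, N) else 1.0   # trapezoid end-point weights
        s += w * math.sin(n * math.pi * x / l) * math.sin(m * math.pi * x / l) * dx
    return 2.0 / l * s

o11, o22 = overlap(1, 1), overlap(2, 2)   # diagonal: should be 1
o12, o13 = overlap(1, 2), overlap(1, 3)   # off-diagonal: should vanish
```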
2.3.2 Particle in a nite Box
Now, suppose our box is nite. That is
V (x) =

V
o
if a < x < a
0 otherwise
(2.69)
Lets consider the case for E < 0. The case E > 0 will correspond to scattering solutions. In
side the well, the wavefunction oscillates, much like in the previous case.

W
(x) = Asin(k
i
x) + Bcos(k
i
x) (2.70)
where k
i
comes from the equation for the momentum inside the well
hk
i
=

2m(E
n
+ V
o
) (2.71)
We actually have two classes of solution, a symmetric solution when A = 0 and an antisym-
metric solution when B = 0.
Outside the well the potential is 0 and we have the solutions

\psi_O(x) = c_1 e^{\rho x} \quad\text{and}\quad c_2 e^{-\rho x}.    (2.72)
We will choose the coefficients c_1 and c_2 so as to create two cases, \psi_L and \psi_R, on the left- and right-hand sides of the well. Also,

\hbar\rho = \sqrt{-2mE}.    (2.73)

Thus, we have three pieces of the full solution which we must hook together:

\psi_L(x) = C e^{\rho x} \quad\text{for } x < -a    (2.74)

\psi_R(x) = D e^{-\rho x} \quad\text{for } x > a    (2.75)

\psi_W(x) = A\sin(k_i x) + B\cos(k_i x) \quad\text{for inside the well}    (2.77)
To find the coefficients, we need to set up a series of simultaneous equations by applying the conditions that (a) the wavefunction be a continuous function of x, and (b) it have continuous first derivatives with respect to x. Thus, applying the two conditions at the boundaries:

\psi_L(-a) - \psi_W(-a) = 0    (2.78)

\psi_R(a) - \psi_W(a) = 0    (2.80)

\psi_L'(-a) - \psi_W'(-a) = 0    (2.82)

\psi_R'(a) - \psi_W'(a) = 0    (2.84)
Solving the matching conditions at x = \pm a, the final results are (after the chalk dust settles):

1. For A = 0: B = D\sec(a k_i)e^{-\rho a} and C = D. (Symmetric solution)

2. For B = 0: A = D\csc(a k_i)e^{-\rho a} and C = -D. (Antisymmetric solution)

So now we have all the coefficients expressed in terms of D, which we can determine by normalization (if so inclined). We'll not do that integral, as it is pretty straightforward.
For the energies, we substitute the symmetric and antisymmetric solutions into the eigenvalue equation and obtain

\rho\cos(a k_i) = k_i\sin(a k_i)    (2.85)

or

\frac{\rho}{k_i} = \tan(a k_i),    (2.86)

i.e.

\sqrt{\frac{-E}{V_o + E}} = \tan\left(a\sqrt{2m(V_o + E)}/\hbar\right)    (2.87)
for the symmetric case and
-\rho\sin(a k_i) = k_i\cos(a k_i)    (2.88)

for the antisymmetric case, or

\frac{\rho}{k_i} = -\cot(a k_i),    (2.89)

i.e.

\sqrt{\frac{-E}{V_o + E}} = -\cot\left(a\sqrt{2m(V_o + E)}/\hbar\right).    (2.91)
Substituting the expressions for k_i and \rho into the final results for each case, we find a set of matching conditions. For the symmetric case, eigenvalues occur whenever the two curves

\sqrt{\frac{-E}{V_o + E}} = \tan\left(a\sqrt{2m(V_o + E)}/\hbar\right)    (2.92)

cross, and for the antisymmetric case,

\sqrt{\frac{-E}{V_o + E}} = -\cot\left(a\sqrt{2m(V_o + E)}/\hbar\right).    (2.93)
These are called transcendental equations, and closed-form solutions are generally impossible to obtain. Graphical solutions are helpful. In Fig. 2.6 we show the graphical solution to the transcendental equations for an electron in a well of depth V_o = 10 and width a = 2. The black dots indicate the presence of two bound states, one symmetric and one antisymmetric, at E = -2.03 and -3.78 respectively.
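The graphical search for bound states can also be done numerically. The sketch below works in units \hbar = m = 1 with illustrative well parameters (not necessarily those used for the figure): it brackets sign changes of the symmetric-state condition, Eq. (2.85), on an energy grid and refines each root by bisection.

```python
import math

# Bound states (E < 0) of the finite well V(x) = -Vo for -a < x < a,
# in units hbar = m = 1. Vo and a are illustrative values.
VO, A = 10.0, 2.0

def f_sym(E):
    # Symmetric-state condition rho*cos(k_i a) - k_i*sin(k_i a) = 0, Eq. (2.85).
    k = math.sqrt(2.0 * (E + VO))
    rho = math.sqrt(-2.0 * E)
    return rho * math.cos(k * A) - k * math.sin(k * A)

def bisect(f, lo, hi):
    # Refine a bracketed root by repeated interval halving.
    flo = f(lo)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

lo0, hi0 = -VO + 1e-9, -1e-9
grid = [lo0 + i * (hi0 - lo0) / 2000 for i in range(2001)]
energies = [bisect(f_sym, e1, e2)
            for e1, e2 in zip(grid, grid[1:])
            if f_sym(e1) * f_sym(e2) < 0.0]
print([round(e, 3) for e in energies])  # symmetric bound-state energies
```

The antisymmetric roots can be found the same way by swapping in the condition of Eq. (2.88).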
2.3.3 Scattering states and resonances.

Now let's take the same example as above, except look at states for which E > 0. In this case, we have to consider where the particles are coming from and where they are going. We will assume that the particles are emitted with precise energy E towards the well from x = -\infty and travel from left to right. As in the case above, we have three distinct regions:

1. x < -a, where \psi(x) = e^{ik_1 x} + R e^{-ik_1 x} = \psi_L(x)

2. -a \le x \le +a, where \psi(x) = A e^{-ik_2 x} + B e^{+ik_2 x} = \psi_W(x)
Figure 2.6: Graphical solution to the transcendental equations for an electron in a truncated hard well of depth V_o = 10 and width a = 2. The short-dashed blue curve corresponds to the symmetric case and the long-dashed blue curve corresponds to the antisymmetric case. The red curve is \sqrt{-E/(V_o + E)}. Bound-state solutions occur where the red and blue curves cross.
3. x > +a, where \psi(x) = T e^{+ik_1 x} = \psi_R(x)

where k_1 = \sqrt{2mE}/\hbar is the momentum outside the well, k_2 = \sqrt{2m(E - V)}/\hbar is the momentum inside the well, and A, B, T, and R are coefficients we need to determine. We also have the matching conditions:

\psi_L(-a) - \psi_W(-a) = 0

\psi_L'(-a) - \psi_W'(-a) = 0

\psi_R(a) - \psi_W(a) = 0

\psi_R'(a) - \psi_W'(a) = 0
This can be solved by hand; however, Mathematica makes it easy. The results are a series of rules which we can use to determine the transmission and reflection coefficients:
T \to \frac{4 k_1 k_2\, e^{-2iak_1 + 2iak_2}}{(k_1 + k_2)^2 - (k_1 - k_2)^2\, e^{4iak_2}},

A \to \frac{2 k_1 (k_2 - k_1)\, e^{-iak_1 + 3iak_2}}{(k_1 + k_2)^2 - (k_1 - k_2)^2\, e^{4iak_2}},

B \to \frac{2 k_1 (k_1 + k_2)\, e^{-iak_1 + iak_2}}{(k_1 + k_2)^2 - (k_1 - k_2)^2\, e^{4iak_2}},

R \to \frac{\left(e^{4iak_2} - 1\right)\left(k_2^2 - k_1^2\right) e^{-2iak_1}}{(k_1 + k_2)^2 - (k_1 - k_2)^2\, e^{4iak_2}}
Figure 2.7: Transmission (blue) and reflection (red) coefficients for an electron scattering over a square well (V = -40 and a = 1).
The R and T coefficients are related to the ratios of the reflected and transmitted flux to the incoming flux. The current operator is given by

j(x) = \frac{\hbar}{2mi}\left(\psi^*\nabla\psi - \psi\nabla\psi^*\right)    (2.94)

Inserting the wavefunctions above yields:

j_{in} = \frac{\hbar k_1}{m}, \qquad j_{ref} = \frac{\hbar k_1 |R|^2}{m}, \qquad j_{trans} = \frac{\hbar k_1 |T|^2}{m}.

Thus, |R|^2 = j_{ref}/j_{in} and |T|^2 = j_{trans}/j_{in}. In Fig. 2.7 we show the transmission and reflection coefficients for an electron passing over a well of depth V = -40, a = 1, as a function of incident energy, E.
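Flux conservation gives a useful check: since the outside momentum k_1 is the same on both sides of the well, |R|^2 + |T|^2 = 1. The sketch below evaluates closed-form amplitudes obtained from the same four matching conditions (units \hbar = m = 1; the energy and well parameters are illustrative) and verifies this.

```python
import cmath, math

# Scattering amplitudes for a square well/bump V on (-a, a), hbar = m = 1,
# obtained by solving the four matching conditions at x = -a and x = +a.
def amplitudes(E, V, a):
    k1 = math.sqrt(2.0 * E)
    k2 = cmath.sqrt(2.0 * (E - V))
    D = (k1 + k2) ** 2 - (k1 - k2) ** 2 * cmath.exp(4j * a * k2)
    T = 4 * k1 * k2 * cmath.exp(-2j * a * k1 + 2j * a * k2) / D
    R = (cmath.exp(4j * a * k2) - 1) * (k2 ** 2 - k1 ** 2) * cmath.exp(-2j * a * k1) / D
    return R, T

R, T = amplitudes(5.0, -40.0, 1.0)
print(abs(R) ** 2 + abs(T) ** 2)  # flux conservation: 1.0 up to rounding
```

Note that R vanishes whenever e^{4iak_2} = 1, which is the resonance condition discussed below.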
Notice that the transmission and reflection coefficients undergo a series of oscillations as the incident energy is increased. These are due to resonance states which lie in the continuum. The condition for these states is that an integer number of half de Broglie wavelengths of the wave in the well matches the total length of the well:

n\lambda/2 = 2a.

Fig. 2.8 shows the transmission coefficient as a function of both incident energy and the well depth (or height) over a wide range, indicating that resonances can occur for both wells and bumps. Fig. 2.9 shows various scattering wavefunctions for on- and off-resonance cases. Lastly, Fig. 2.10 shows an Argand plot of both complex components of \psi.
Figure 2.8: Transmission coefficient for a particle passing over a bump. Here we have plotted T as a function of V and incident energy E_n. The oscillations correspond to resonance states which occur as the particle passes over the well (for V < 0) or bump (V > 0).
2.3.4 Application: Quantum Dots
One of the most active areas of research in soft condensed matter is that of designing physical systems which can confine a quantum state in some controllable way. The idea of engineering a quantum state is extremely appealing and has numerous technological applications, from small logic gates in computers to optically active materials for biomedical applications. The basic physics of these materials is relatively simple, and we can use the basic ideas presented in this chapter. The idea is to layer a series of materials such that electrons can be trapped in a geometrically confined region. This can be accomplished by insulator-metal-insulator layers and etching, creating disclinations in semiconductors, growing semiconductor or metal clusters, and so on. A quantum dot can even be a defect site.
We will assume throughout that our quantum well contains a single electron so that we can treat the system as simply as possible. For a square or cubic quantum well, the energy levels are simply those of an n-dimensional particle in a box. For example, for a three-dimensional system,
E_{n_x,n_y,n_z} = \frac{\hbar^2\pi^2}{2m}\left[\left(\frac{n_x}{L_x}\right)^2 + \left(\frac{n_y}{L_y}\right)^2 + \left(\frac{n_z}{L_z}\right)^2\right]    (2.95)

where L_x, L_y, and L_z are the lengths of the box and m is the mass of an electron.
The density of states is the number of energy levels per unit energy. If we take the box to be
Figure 2.9: Scattering waves for a particle passing over a well. In the top graphic, the particle is partially reflected from the well (V < 0); in the bottom graphic, the particle passes over the well with a slightly different energy, this time with little reflection.
a cube, L_x = L_y = L_z = L, we can relate n to the radius of a sphere and write the density of states as

\rho(n) = 4\pi n^2\frac{dn}{dE} = 4\pi n^2\left(\frac{dE}{dn}\right)^{-1}.

Thus, for a 3D cube, the density of states is

\rho(n) = \frac{4mL^2}{\pi\hbar^2}\, n,

i.e. for a three-dimensional cube, the density of states increases as n and hence as E^{1/2}.
Figure 2.10: Argand plot of a scattering wavefunction passing over a well. (Same parameters as in the top figure in Fig. 2.9.)

Note that the scaling of the density of states with energy depends strongly upon the dimensionality of the system. For example, in one dimension

\rho(n) = \frac{2mL^2}{\hbar^2\pi^2}\frac{1}{n}

and in two dimensions

\rho(n) = \text{const}.

The reason for this lies in the way the volume element for linear, circular, and spherical integration scales with radius n. Thus, measuring the density of states tells us not only the size of the system, but also its dimensionality.
We can generalize the results here by realizing that the volume of a d-dimensional sphere in k-space is given by

V_d = \frac{k^d \pi^{d/2}}{\Gamma(1 + d/2)}

where \Gamma(x) is the gamma function. The total number of states per unit volume in a d-dimensional space is then

n_k = 2\left(\frac{1}{2\pi}\right)^d V_d

and the density is then the number of states per unit energy. The relation between energy and k is

E_k = \frac{\hbar^2}{2m}k^2,

i.e.

k = \frac{\sqrt{2E_k m}}{\hbar}
Figure 2.11: Density of states for a 1-, 2-, and 3-dimensional space.

which gives

\rho_d(E) = \frac{d}{\Gamma(1 + d/2)}\left(\frac{m}{2\pi\hbar^2}\right)^{d/2} E^{d/2 - 1}.
A quantum well is typically constructed so that the system is confined in one dimension and unconfined in the other two. Thus, a quantum well will typically have discrete states only in the confined direction. The density of states for this system will be identical to that of the 3-dimensional system at energies where the k-vectors coincide. If we take the thickness to be s, then the density of states for the quantum well is

\rho = \frac{L}{s}\,\rho_2(E)\left\lfloor\frac{L\,\rho_3(E)}{L\,\rho_2(E)/s}\right\rfloor

where \lfloor x\rfloor is the floor function, which means take the largest integer less than x. This is plotted in Fig. 2.12, and the stair-step DOS is indicative of the embedded confined structure.
Next, we consider a quantum wire of thickness s along each of its two confined directions. The DOS along the unconfined direction is one-dimensional. As above, the total DOS will be identical
Figure 2.12: Density of states for a quantum well and quantum wire compared to a 3d space. Here L = 5 and s = 2 for comparison.
to the 3D case when the wavevectors coincide. Increasing the radius of the wire eventually leads to the case where the steps decrease and merge into the 3D curve:

\rho = \frac{L}{s}\,\rho_1(E)\left\lfloor\frac{L^2\rho_2(E)}{L^2\rho_1(E)/s}\right\rfloor
For a spherical dot, we consider the case in which the radius of the quantum dot is small enough to support discrete rather than continuous energy levels. In a later chapter we will derive this result in more detail; for now we consider just the results. First, an electron in a spherical dot obeys the Schrodinger equation

-\frac{\hbar^2}{2m}\nabla^2\psi = E\psi    (2.96)

where \nabla^2 is the Laplacian operator in spherical coordinates,

\nabla^2 = \frac{1}{r}\frac{\partial^2}{\partial r^2}r + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}.
The solution of the Schrodinger equation is subject to the boundary condition that for r \ge R, \psi(r) = 0, where R is the radius of the sphere. The solutions are given in terms of the spherical Bessel functions, j_l(r), and spherical harmonic functions, Y_{lm}:

\psi_{nlm} = \frac{2^{1/2}}{R^{3/2}}\frac{j_l(\alpha r/R)}{j_{l+1}(\alpha)}\,Y_{lm}(\theta,\phi),    (2.97)

with energy

E = \frac{\hbar^2}{2m}\frac{\alpha^2}{R^2}.    (2.98)

Note that the spherical Bessel functions (of the first kind) are related to the ordinary Bessel functions via

j_l(x) = \sqrt{\frac{\pi}{2x}}\,J_{l+1/2}(x).    (2.99)
Figure 2.13: Spherical Bessel functions j_0, j_1, and j_2 (red, blue, green).
The first few of these are

j_0(x) = \frac{\sin x}{x}    (2.100)

j_1(x) = \frac{\sin x}{x^2} - \frac{\cos x}{x}    (2.101)

j_2(x) = \left(\frac{3}{x^3} - \frac{1}{x}\right)\sin x - \frac{3}{x^2}\cos x    (2.102)

j_n(x) = (-1)^n x^n\left(\frac{1}{x}\frac{d}{dx}\right)^n j_0(x)    (2.103)

where the last line provides a way to generate j_n from j_0.
The \alpha's appearing in the wavefunction and in the energy expression are determined by the boundary condition that \psi(R) = 0. Thus, for the lowest energy state we require

j_0(\alpha) = 0,    (2.104)

i.e. \alpha = \pi. For the next state (l = 1),

j_1(\alpha) = \frac{\sin\alpha}{\alpha^2} - \frac{\cos\alpha}{\alpha} = 0.    (2.105)

This can be solved to give \alpha = 4.4934. These correspond to where the spherical Bessel functions pass through zero. The first 6 of these are 3.14159, 4.49341, 5.76346, 6.98793, 8.18256, and 9.35581. These correspond to where the first zeros occur and give the condition for the radial quantization, n = 1, with angular momentum l = 0, 1, 2, 3, 4, 5. There are more zeros, and these correspond to the cases where n > 1.
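The quoted zeros are easy to reproduce numerically from the closed forms of j_0 and j_1, for example with a minimal bisection sketch:

```python
import math

def j0(x):
    # Spherical Bessel function j0, Eq. (2.100).
    return math.sin(x) / x

def j1(x):
    # Spherical Bessel function j1, Eq. (2.101).
    return math.sin(x) / x ** 2 - math.cos(x) / x

def first_zero(f, lo, hi, tol=1e-12):
    # Simple bisection; assumes exactly one sign change on [lo, hi].
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

alpha0 = first_zero(j0, 0.1, 4.0)
alpha1 = first_zero(j1, 1.0, 6.0)
print(round(alpha0, 5), round(alpha1, 5))  # 3.14159 4.49341
```

These \alpha's then give the dot energies through E = \hbar^2\alpha^2/(2mR^2), Eq. (2.98).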
In the next set of figures (Fig. 2.14) we look at the radial wavefunctions for an electron in a 0.5 Å quantum dot. First, the cases (n, l) = (1, 0) and (1, 1). In both cases, the wavefunctions vanish at the radius of the dot. The radial probability distribution function (PDF) is given by P = r^2|\psi_{nl}(r)|^2. Note that increasing the angular momentum l from 0 to 1 causes the electron's most probable position to shift outwards. This is due to the centrifugal force arising from the angular motion of the electron. For the (n, l) = (2, 0) and (2, 1) states, we have one node in the system and two peaks in the PDF functions.
Figure 2.14: Radial wavefunctions (left column) and corresponding PDFs (right column) for an electron in an R = 0.5 Å quantum dot. The upper two correspond to (n, l) = (1, 0) (solid) and (n, l) = (1, 1) (dashed), while the lower correspond to (n, l) = (2, 0) (solid) and (n, l) = (2, 1) (dashed).
2.4 Tunneling and transmission in a 1D chain

In this example, we are going to generalize the ideas presented here and look at what happens if we discretize the space in which a particle can move. This happens physically when we consider what happens when a particle (e.g. an electron) can hop from one site to another. If an electron is on a given site, it has a certain energy \alpha to be there, and it takes energy \beta to move the electron from the site to its neighboring site. We can write the Schrodinger equation for this system as

\alpha u_j + \beta u_{j+1} + \beta u_{j-1} = E u_j

for the case where the energy depends upon where the electron is located. If the chain is infinite, we can write u_j = T e^{ikdj} and find that the energy band goes as E = \alpha + 2\beta\cos(kd), where k is now the momentum of the electron.
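The band relation can be verified directly by substituting the Bloch ansatz into the difference equation. In the sketch below, \alpha is the site energy and \beta the hopping energy; the numerical values are arbitrary.

```python
import cmath, math

# Substitute the Bloch ansatz u_j = exp(i k d j) into
# alpha*u_j + beta*(u_{j+1} + u_{j-1}) = E*u_j; values below are arbitrary.
alpha, beta, d, k = -1.3, 0.4, 1.0, 0.7

def u(j):
    return cmath.exp(1j * k * d * j)

E = alpha + 2.0 * beta * math.cos(k * d)
residual = max(abs(alpha * u(j) + beta * (u(j + 1) + u(j - 1)) - E * u(j))
               for j in range(-3, 4))
print(residual < 1e-12)  # True: the band is E = alpha + 2 beta cos(k d)
```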
2.5 Summary

We've covered a lot of ground. We now have enough tools at hand to begin to study some physical systems. The traditional approach to studying quantum mechanics is to progressively solve a series of differential equations related to physical systems (harmonic oscillators, angular momentum, the hydrogen atom, etc.). We will return to those models in a week or so. Next week, we're going to look at 2- and 3-level systems using both time-dependent and time-independent methods. We'll develop a perturbative approach for computing the transition amplitude between states. We will also look at the decay of a state when it's coupled to a continuum. These are useful models for a wide variety of phenomena. After this, we will move on to the harmonic oscillator.
2.6 Problems and Exercises

Exercise 2.1 1. Derive the expression for

G_o(x, x') = \langle x|\exp(-i h_o t/\hbar)|x'\rangle    (2.106)

where h_o is the free particle Hamiltonian,

h_o = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}.    (2.107)

2. Show that G_o is a solution of the free particle Schrodinger equation,

i\hbar\,\partial_t G_o(t) = h_o G_o(t).    (2.108)
Exercise 2.2 Show that the normalization of a wavefunction is independent of time.
Solution:

i\hbar\,\partial_t\langle\psi(t)|\psi(t)\rangle = \left(i\hbar\,\partial_t\langle\psi(t)|\right)|\psi(t)\rangle + \langle\psi(t)|\left(i\hbar\,\partial_t|\psi(t)\rangle\right)    (2.109)

= -\langle\psi(t)|H|\psi(t)\rangle + \langle\psi(t)|H|\psi(t)\rangle    (2.110)

= 0    (2.111)
Exercise 2.3 Compute the bound state solutions (E < 0) for a square well of depth V_o where

V(x) = \begin{cases} -V_o & -a/2 \le x \le a/2 \\ 0 & \text{otherwise} \end{cases}    (2.112)

1. How many energy levels are supported by a well of width a?

2. Show that a very narrow well can support only 1 bound state, and that this state is an even function of x.

3. Show that the energy of the lowest bound state is

E \approx -\frac{mV_o^2 a^2}{2\hbar^2}.    (2.113)

4. Show that as

\rho = \frac{\sqrt{-2mE}}{\hbar} \to 0    (2.114)

the probability of finding the particle inside the well vanishes.
Exercise 2.4 Consider a particle with the potential

V(x) = \begin{cases} 0 & \text{for } x > a \\ -V_o & \text{for } 0 \le x \le a \\ \infty & \text{for } x < 0 \end{cases}    (2.115)

1. Let \phi(x) be a stationary state. Show that \phi(x) can be extended to give an odd wavefunction corresponding to a stationary state of the symmetric well of width 2a (i.e. the one studied above) and depth V_o.

2. Discuss with respect to a and V_o the number of bound states and argue that there is always at least one such state.

3. Now turn your attention toward the E > 0 states of the well. Show that the transmission of the particle into the well region vanishes as E \to 0 and that the wavefunction is perfectly reflected off the sudden change in potential at x = a.
Exercise 2.5 Which of the following are eigenfunctions of the kinetic energy operator

\hat{T} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}:    (2.116)

e^x, \quad x^2, \quad x^n, \quad 3\cos(2x), \quad \sin(x) + \cos(x), \quad e^{ikx},

f(x - x') = \int dk\, e^{ik(x - x')}\,e^{-i\hbar k^2 t/(2m)}?    (2.117)

Solution: Going in order:

1. e^x: yes, with eigenvalue -\hbar^2/2m.

2. x^2: no; \hat{T}x^2 = -\hbar^2/m is not proportional to x^2.

3. x^n: no.

4. 3\cos(2x): yes, with eigenvalue 2\hbar^2/m.

5. \sin(x) + \cos(x): yes, with eigenvalue \hbar^2/2m.

6. e^{ikx}: yes, with eigenvalue \hbar^2 k^2/2m.
Exercise 2.6 Which of the following would be acceptable one-dimensional wavefunctions for a bound particle (upon normalization): f(x) = e^{-x}, f(x) = e^{-x^2}, f(x) = x e^{-x^2}, and

f(x) = \begin{cases} e^{-x^2} & x \ge 0 \\ 2e^{-x^2} & x < 0 \end{cases}    (2.118)

Solution: In order:

1. f(x) = e^{-x}: not acceptable, since it diverges (and is not normalizable) as x \to -\infty.

2. f(x) = e^{-x^2}: acceptable.

3. f(x) = x e^{-x^2}: acceptable.

4. The piecewise function in Eq. 2.118: not acceptable, since it is discontinuous at x = 0.
Exercise 2.7 For a one dimensional problem, consider a particle with wavefunction

\psi(x) = N\,\frac{\exp(i p_o x/\hbar)}{\sqrt{x^2 + a^2}}    (2.120)

where a and p_o are real constants and N the normalization.

1. Determine N so that \psi(x) is normalized.

\int_{-\infty}^{\infty} dx\, |\psi(x)|^2 = N^2\int_{-\infty}^{\infty}\frac{dx}{x^2 + a^2}    (2.121)

= N^2\frac{\pi}{a}    (2.122)

Thus \psi(x) is normalized when

N = \sqrt{\frac{a}{\pi}}.    (2.123)
2. The position of the particle is measured. What is the probability of finding a result between -a/\sqrt{3} and +a/\sqrt{3}?

\int_{-a/\sqrt{3}}^{+a/\sqrt{3}} dx\,|\psi(x)|^2 = \frac{a}{\pi}\int_{-a/\sqrt{3}}^{+a/\sqrt{3}}\frac{dx}{x^2 + a^2}    (2.124)

= \frac{1}{\pi}\left[\tan^{-1}(x/a)\right]_{-a/\sqrt{3}}^{+a/\sqrt{3}}    (2.125)

= \frac{1}{3}    (2.126)
3. Compute the mean value of the position of a particle which has \psi(x) as its wavefunction.

\langle x\rangle = \frac{a}{\pi}\int_{-\infty}^{\infty} dx\,\frac{x}{x^2 + a^2}    (2.127)

= 0    (2.128)
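The worked results above are easy to confirm by numerical quadrature (the value a = 2 below is an arbitrary choice; any a > 0 gives the same probabilities):

```python
import math

# |psi(x)|^2 = (a/pi)/(x^2 + a^2) with N = sqrt(a/pi); a = 2 is arbitrary.
a = 2.0

def integrate(f, lo, hi, steps=100000):
    # Composite trapezoidal rule on [lo, hi].
    h = (hi - lo) / steps
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, steps):
        s += f(lo + i * h)
    return s * h

dens = lambda x: (a / math.pi) / (x * x + a * a)
p = integrate(dens, -a / math.sqrt(3.0), a / math.sqrt(3.0))
xbar = integrate(lambda x: x * dens(x), -50.0 * a, 50.0 * a)
print(round(p, 4), round(abs(xbar), 6))  # 0.3333 0.0
```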
Exercise 2.8 Consider the Hamiltonian of a particle in a 1-dimensional well given by

H = \frac{1}{2m}\hat{p}^2 + \hat{x}^2    (2.129)

where \hat{x} and \hat{p} are position and momentum operators. Let |\phi_n\rangle be a solution of

H|\phi_n\rangle = E_n|\phi_n\rangle    (2.130)

for n = 0, 1, 2, \ldots. Show that

\langle\phi_n|\hat{p}|\phi_m\rangle = \alpha_{nm}\langle\phi_n|\hat{x}|\phi_m\rangle    (2.131)

where \alpha_{nm} is a coefficient depending upon E_n - E_m. Compute \alpha_{nm}. (Hint: you will need to use the commutation relations of [\hat{x}, H] and [\hat{p}, H] to get this.) Finally, from all this, deduce that

\sum_m (E_n - E_m)^2\,|\langle\phi_n|\hat{x}|\phi_m\rangle|^2 = \frac{\hbar^2}{m^2}\langle\phi_n|\hat{p}^2|\phi_n\rangle    (2.132)
Exercise 2.9 The state space of a certain physical system is three-dimensional. Let |u_1\rangle, |u_2\rangle, and |u_3\rangle be an orthonormal basis of the space in which the kets |\psi_1\rangle and |\psi_2\rangle are defined by

|\psi_1\rangle = \frac{1}{\sqrt{2}}|u_1\rangle + \frac{i}{2}|u_2\rangle + \frac{1}{2}|u_3\rangle    (2.133)

|\psi_2\rangle = \frac{1}{\sqrt{3}}|u_1\rangle + \frac{i}{\sqrt{3}}|u_3\rangle    (2.134)

1. Are these states normalized?

2. Calculate the matrices \rho_1 and \rho_2 representing, in the \{|u_i\rangle\} basis, the projection operators onto |\psi_1\rangle and |\psi_2\rangle. Verify that these matrices are Hermitian.
Exercise 2.10 Let \psi(\vec{r}) = \psi(x, y, z) be the normalized wavefunction of a particle. Express in terms of \psi(\vec{r}) the probability for:

1. A measurement along the x-axis to yield a result between x_1 and x_2.

2. A measurement of the momentum component p_x to yield a result between p_1 and p_2.

3. Simultaneous measurements of x and p_z to yield x_1 \le x \le x_2 and p_z > 0.

4. Simultaneous measurements of p_x, p_y, and p_z to yield

p_1 \le p_x \le p_2    (2.135)

p_3 \le p_y \le p_4    (2.137)

p_5 \le p_z \le p_6    (2.139)

Show that this result is equal to the result of part 2 when p_3, p_5 \to -\infty and p_4, p_6 \to +\infty.
Exercise 2.11 Consider a particle of mass m whose potential energy is

V(x) = -\alpha\left(\delta(x + l/2) + \delta(x - l/2)\right)

1. Calculate the bound states of the particle, setting

E = -\frac{\hbar^2\rho^2}{2m}.

Show that the possible energies are given by

e^{-\rho l} = \pm\left(1 - \frac{2\rho}{\mu}\right)

where \mu = 2m\alpha/\hbar^2. Give a graphic solution of this equation.

(a) The Ground State. Show that the ground state is even about the origin and that its energy, E_S, is less than the bound state energy of a particle in a single \delta-function potential, E_L. Interpret this physically. Plot the corresponding wavefunction.

(b) The Excited State. Show that when l is greater than some value (which you need to determine), there exists an odd excited state of energy E_A, with energy greater than E_L. Determine and plot the corresponding wavefunction.

(c) Explain how the preceding calculations enable us to construct a model for an ionized diatomic molecule, e.g. H_2^+, whose nuclei are separated by l. Plot the energies of the two states as functions of l; what happens as l \to \infty and l \to 0?

(d) If we take the Coulombic repulsion of the nuclei into account, what is the total energy of the system? Show that a curve which gives the variation with respect to l of the energies thus obtained enables us to predict in certain cases the existence of bound states of H_2^+ and to determine the equilibrium bond length.

2. Calculate the reflection and transmission coefficients for this system. Plot R and T as functions of l. Show that resonances occur when l is an integer multiple of the de Broglie wavelength of the particle. Why?
Chapter 3

Semi-Classical Quantum Mechanics

Good actions ennoble us, and we are the sons of our own deeds.
Miguel de Cervantes

The use of classical mechanical analogs for quantum behavior holds a long and proud tradition in the development and application of quantum theory. In Bohr's original formulation of quantum mechanics to explain the spectra of the hydrogen atom, Bohr used purely classical mechanical notions of angular momentum and rotation for the basic theory and imposed a quantization condition that the angular momentum should come in integer multiples of \hbar. Bohr worked under the assumption that at some point the laws of quantum mechanics which govern atoms and molecules should correspond to the classical mechanical laws of ordinary objects like rocks and stones. Bohr's Principle of Correspondence states that quantum mechanics is not completely separate from classical mechanics; rather, it incorporates classical theory.

From a computational viewpoint, this is an extremely powerful notion since performing a classical trajectory calculation (even running 1000s of them) is simpler than a single quantum calculation of a similar dimension. Consequently, the development of semi-classical methods has been, and remains, an important part of the development and utilization of quantum theory. In fact, even in the most recent issues of the Journal of Chemical Physics, Phys. Rev. Lett., and other leading physics and chemical physics journals, one finds new developments and applications of this very old idea.

In this chapter we will explore this idea in some detail. The field of semi-classical mechanics is vast and I would recommend the following for more information:

1. Chaos in Classical and Quantum Mechanics, Martin Gutzwiller (Springer-Verlag, 1990). Chaos in quantum mechanics is a touchy subject and really has no clear-cut definition that anyone seems to agree upon. Gutzwiller is one of the key figures in sorting all this out. This is a very nice and not too technical monograph on quantum and classical correspondence.

2. Semiclassical Physics, M. Brack and R. Bhaduri (Addison-Wesley, 1997). Very interesting book, mostly focusing upon many-body applications and Thomas-Fermi approximations.

3. Computer Simulations of Liquids, M. P. Allen and D. J. Tildesley (Oxford, 1994). This book mostly focuses upon classical MD methods, but has a nice chapter on quantum methods which were state of the art in 1994. Methods come and methods go.

There are many others, of course. These are just the ones on my bookshelf.
3.1 Bohr-Sommerfeld quantization

Let's first review Bohr's original derivation of the hydrogen atom. We will go through this a bit differently than Bohr since we already know part of the answer. In the chapter on the hydrogen atom we derived the energy levels in terms of the principal quantum number, n:

E_n = -\frac{me^4}{2\hbar^2}\frac{1}{n^2}.    (3.1)

In Bohr's correspondence principle, the quantum energy must equal the classical energy. So for an electron moving about a proton, that energy is inversely proportional to the distance of separation. So, we can write

-\frac{me^4}{2\hbar^2}\frac{1}{n^2} = -\frac{e^2}{2r}.    (3.2)

Now we need to figure out how angular momentum gets pulled into this. For an orbiting body the centrifugal force which pulls the body outward is counterbalanced by the inward tug of the centripetal force coming from the attractive Coulomb potential. Thus,

m r\omega^2 = \frac{e^2}{r^2},    (3.3)

where \omega is the angular frequency of the rotation. Rearranging this a bit, we can plug this into the RHS of Eq. 3.2 and write

-\frac{me^4}{2\hbar^2}\frac{1}{n^2} = -\frac{m r^3\omega^2}{2r}.    (3.4)

The numerator now looks almost like the classical definition of angular momentum: L = m r^2\omega. So we can write the last equation as

-\frac{me^4}{2\hbar^2}\frac{1}{n^2} = -\frac{L^2}{2m r^2}.    (3.5)

Solving for L^2:

L^2 = \frac{me^4}{2\hbar^2}\frac{2m r^2}{n^2}.    (3.6)

Now we need to pull in another one of Bohr's results for the orbital radius of the H-atom:

r = \frac{\hbar^2}{me^2}n^2.    (3.7)

Plug this into Eq. 3.6, and after the dust settles we find

L = \hbar n.    (3.8)

But why should electrons be confined to circular orbits? Eq. 3.8 should be applicable to any closed path the electron should choose to take. If the quantization condition only holds
for circular orbits, then the theory itself is in deep trouble. At least that's what Sommerfeld thought.
The units of \hbar are energy times time. That is the unit of action in classical mechanics. In classical mechanics, the action of a mechanical system is given by the integral of the classical momentum along a classical path:

S = \int_{x_1}^{x_2} p\, dx    (3.9)

For an orbit, the initial point and the final point must coincide, x_1 = x_2, so the action integral must describe the area circumscribed by a closed loop on the p-x plane, called phase space:

S = \oint p\, dx    (3.10)

So, Bohr and Sommerfeld's idea was that the circumscribed area in phase space was quantized as well.

As a check, let us consider the harmonic oscillator. The classical energy is given by

E(p, q) = \frac{p^2}{2m} + \frac{k}{2}q^2.

This is the equation for an ellipse in phase space, since we can rearrange it to read

1 = \frac{p^2}{2mE} + \frac{k}{2E}q^2 = \frac{p^2}{a^2} + \frac{q^2}{b^2}    (3.11)

where a = \sqrt{2mE} and b = \sqrt{2E/k} describe the major and minor axes of the ellipse. The area of an ellipse is A = \pi a b, so the area circumscribed by a classical trajectory with energy E is

S(E) = 2\pi E\sqrt{m/k}.    (3.12)

Since \sqrt{k/m} = \omega, S = 2\pi E/\omega = E/\nu. Finally, since E/\nu must be an integer multiple of h, the Bohr-Sommerfeld condition for quantization becomes

\oint p\, dx = nh    (3.13)

where p is the classical momentum for a path of energy E, p = \sqrt{2m(E - V(x))}. Taking this a bit farther, the de Broglie wavelength is h/p, so the Bohr-Sommerfeld rule basically states that stationary energies correspond to classical paths for which there are an integer number of de Broglie wavelengths.
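The phase-space area formula, Eq. (3.12), can be checked by evaluating the action integral numerically. In the sketch below m, k, and E are arbitrary illustrative values, and the substitution x = x_t sin(theta) removes the square-root singularity at the turning points.

```python
import math

# Action S(E) = closed-integral of p dx for H = p^2/2m + (k/2) q^2;
# here k is the spring constant, and m, k, E are arbitrary values.
m, k, E = 1.0, 2.5, 0.7

def action(E, steps=20000):
    # x = x_t sin(theta) makes the integrand smooth at the turning points.
    xt = math.sqrt(2.0 * E / k)
    h = math.pi / steps
    s = 0.0
    for i in range(steps + 1):
        th = -0.5 * math.pi + i * h
        w = 0.5 if i in (0, steps) else 1.0
        x = xt * math.sin(th)
        p = math.sqrt(max(0.0, 2.0 * m * (E - 0.5 * k * x * x)))
        s += w * p * xt * math.cos(th) * h
    return 2.0 * s  # factor 2: out-and-back traversal of the orbit

print(round(action(E), 6), round(2.0 * math.pi * E * math.sqrt(m / k), 6))
```

The two printed numbers agree, confirming S(E) = 2 pi E sqrt(m/k).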
Now perhaps you can see where the problem with quantum chaos arises. In classical chaos, chaotic trajectories never return to their exact starting point in phase space. They may come close, but there are no closed orbits. For 1D systems, this does not occur, since the trajectories are the contours of the energy function. For higher dimensions, the dimensionality of the system makes it possible to have extremely complex trajectories which never return to their starting point.

Exercise 3.1 Apply the Bohr-Sommerfeld procedure to determine the stationary energies for a particle in a box of length l.
3.2 The WKB Approximation

The original Bohr-Sommerfeld idea can be improved upon considerably to produce an asymptotic (\hbar \to 0) approximation to the Schrodinger wave function. The idea was put forward at about the same time by three different theoreticians: Brillouin (in Belgium), Kramers (in the Netherlands), and Wentzel (in Germany). Depending upon your point of origin, this method is the WKB (US & Germany), BWK (France, Belgium), or JWKB (UK) approximation; you get the idea. The original references are:

1. "La mecanique ondulatoire de Schrodinger; une methode generale de resolution par approximations successives," L. Brillouin, Comptes Rendus (Paris) 183, 24 (1926).

2. "Wellenmechanik und halbzahlige Quantisierung," H. A. Kramers, Zeitschrift fur Physik 39, 828 (1926).

3. "Eine Verallgemeinerung der Quantenbedingungen fur die Zwecke der Wellenmechanik," G. Wentzel, Zeitschrift fur Physik 38, 518 (1926).

We will first go through how one can use the approach to determine the eigenvalues of the Schrodinger equation via semi-classical methods, then show how one can approximate the actual wavefunctions themselves.
3.2.1 Asymptotic expansion for the eigenvalue spectrum

The WKB procedure is initiated by writing the solution of the Schrodinger equation

\psi'' + \frac{2m}{\hbar^2}(E - V(x))\psi = 0

as

\psi(x) = \exp\left(\frac{i}{\hbar}\int\sigma\, dx\right).    (3.14)

We will soon discover that \sigma is essentially the classical momentum of the system, but for now let's consider it to be a function of the energy of the system. Substituting into the Schrodinger equation produces a new differential equation for \sigma,

\frac{\hbar}{i}\frac{d\sigma}{dx} = 2m(E - V) - \sigma^2.    (3.15)

If we take \hbar \to 0, it follows then that

\sigma = \sigma_o = \sqrt{2m(E - V)} = |p|,    (3.16)

which is the magnitude of the classical momentum of a particle. So, if we assume that this is simply the leading order term in a series expansion in \hbar, we would have

\sigma = \sigma_o + \frac{\hbar}{i}\sigma_1 + \left(\frac{\hbar}{i}\right)^2\sigma_2 + \ldots    (3.17)
Substituting Eq. 3.17 into

\sigma = \frac{\hbar}{i}\frac{1}{\psi}\frac{\partial\psi}{\partial x}    (3.18)

and equating to zero the coefficients of the different powers of \hbar, one obtains equations which determine the corrections \sigma_n in succession:

\frac{d\sigma_{n-1}}{dx} = -\sum_{m=0}^{n}\sigma_{n-m}\sigma_m    (3.19)

for n = 1, 2, 3, \ldots. For example,

\sigma_1 = -\frac{\sigma_o'}{2\sigma_o} = \frac{1}{4}\frac{V'}{E - V}    (3.20)

\sigma_2 = -\frac{\sigma_1' + \sigma_1^2}{2\sigma_o} = -\frac{1}{2\sigma_o}\left[\frac{V'^2}{16(E - V)^2} + \frac{V'^2}{4(E - V)^2} + \frac{V''}{4(E - V)}\right]

= -\frac{5V'^2}{32(2m)^{1/2}(E - V)^{5/2}} - \frac{V''}{8(2m)^{1/2}(E - V)^{3/2}}    (3.21)

and so forth.

Exercise 3.2 Verify Eq. 3.19 and derive the first order correction in Eq. 3.20.
Now, to use these equations to determine the spectrum, we replace x everywhere by a complex coordinate z and suppose that V(z) is a regular and analytic function of z in any physically relevant region. Consequently, we can then say that \psi(z) is an analytic function of z. So, we can write the phase integral as

n = \frac{1}{h}\oint_C \sigma(z)\, dz = \frac{1}{2\pi i}\oint_C \frac{\psi_n'(z)}{\psi_n(z)}\, dz    (3.22)

where \psi_n is the nth discrete stationary solution of the Schrodinger equation and C is some contour of integration on the z plane. If there is a discrete spectrum, we know that the number of zeros, n, in the wavefunction is related to the quantum number corresponding to the (n+1)st energy level. So if \psi has no real zeros, it is the ground state wavefunction with energy E_o; one real zero corresponds to energy level E_1, and so forth.

Suppose the contour of integration, C, is taken such that it includes only these zeros and no others; then we can write

n = \frac{1}{h}\oint \sigma_o\, dz + \frac{1}{2\pi i}\oint \sigma_1\, dz - \frac{\hbar}{2\pi}\oint \sigma_2\, dz + \ldots    (3.23)
Each of these terms involves E - V in the denominator. At the classical turning points where V(z) = E we have poles, and we can use the residue theorem to evaluate the integrals. For example, \sigma_1 has a pole at each turning point with residue -1/4; hence,

\frac{1}{2\pi i}\oint \sigma_1\, dz = -\frac{1}{2}.    (3.24)

The next term we evaluate by integration by parts:

\oint_C \frac{V''}{(E - V(z))^{3/2}}\, dz = -\frac{3}{2}\oint_C \frac{V'^2}{(E - V(z))^{5/2}}\, dz.    (3.25)

Hence, we can write

\oint \sigma_2(z)\, dz = \frac{1}{32(2m)^{1/2}}\oint_C \frac{V'^2}{(E - V(z))^{5/2}}\, dz.    (3.26)

Putting it all together,

n + \frac{1}{2} = \frac{1}{h}\oint \sqrt{2m(E - V(z))}\, dz - \frac{h}{128\pi^2(2m)^{1/2}}\oint_C \frac{V'^2}{(E - V(z))^{5/2}}\, dz + \ldots    (3.27)
Granted, the above analysis is pretty formal! But what we have is something new. Notice that we have an extra 1/2 added here that we did not have in the original Bohr-Sommerfeld (BS) theory. What we have is something even more general. The original BS idea came from the notion that energies and frequencies were related by integer multiples of h. But this is really only valid for transitions between states. If we go back and ask what happens at n = 0 in the Bohr-Sommerfeld theory, this corresponds to a phase-space ellipse with major and minor axes both of length 0, which violates the Heisenberg uncertainty principle. This new quantization condition forces the system to have some lowest energy state with phase-space area h/2.
Where did this extra 1/2 come from? It originates from the classical turning points where V(x) = E. Recall that for a 1D system bound by a potential, there are at least two such points. Each contributes \pi/4 to the phase. We will see this more explicitly in the next section.
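To see the quantization rule in action, here is a small numerical sketch (an illustration, not part of the original notes) that applies the leading-order condition \oint p\,dx = (n + 1/2)h to a harmonic oscillator, V(x) = m\omega^2 x^2/2, a case where the WKB energies happen to be exact:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, m, omega = 1.0, 1.0, 1.0   # illustrative units
h = 2 * np.pi * hbar

def action(E):
    """Phase-space loop integral: oint p dx = 2 * int_{-a}^{a} p(x) dx."""
    a = np.sqrt(2 * E / (m * omega**2))   # classical turning point
    p = lambda x: np.sqrt(2 * m * (E - 0.5 * m * omega**2 * x**2))
    return 2 * quad(p, -a, a)[0]

def wkb_energy(n):
    """Solve action(E) = (n + 1/2) h for E."""
    return brentq(lambda E: action(E) - (n + 0.5) * h, 1e-6, 100.0)

for n in range(4):
    print(n, wkb_energy(n))   # matches hbar*omega*(n + 1/2) for the oscillator
```

For the oscillator the loop integral is 2\pi E/\omega, so the root of the quantization condition reproduces E_n = \hbar\omega(n + 1/2) to quadrature accuracy.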
3.2.2 WKB Wavefunction
Going back to our original wavefunction in Eq. 3.14 and writing

\psi = e^{iS/\hbar},

where S is the integral of \chi, we can derive an equation for S:

\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^2 - \frac{i\hbar}{2m}\frac{\partial^2 S}{\partial x^2} + V(x) = E.   (3.28)

Again, as above, one can seek a series expansion of S in powers of \hbar. The result is simply the integral of Eq. 3.17:

S = S_0 + \frac{\hbar}{i}S_1 + \cdots   (3.29)
If we make the approximation \hbar = 0, we have the classical Hamilton-Jacobi equation for the action, S. This, along with the definition of the momentum, p = dS_0/dx = \chi_0, allows us to make very firm contact between quantum mechanics and the motion of a classical particle.
Looking at Eq. 3.28, it is clear that the classical approximation is valid when the second term is very small compared to the first, i.e.,

\hbar\left|\frac{S''}{S'^2}\right| \ll 1,
\quad
\hbar\left|\frac{d}{dx}\left(\frac{dS}{dx}\right)\right|\left(\frac{dx}{dS}\right)^2 \ll 1,
\quad
\hbar\left|\frac{d}{dx}\frac{1}{p}\right| \ll 1,   (3.30)
where we equate dS/dx = p. Since p is related to the de Broglie wavelength of the particle, \lambda = h/p, the same condition implies that

\left|\frac{1}{2\pi}\frac{d\lambda}{dx}\right| \ll 1.   (3.31)

Thus the semi-classical approximation is only valid when the wavelength of the particle, as determined by \lambda(x) = h/p(x), varies only slightly over distances on the order of the wavelength itself.
Written another way, noting that the gradient of the momentum is

\frac{dp}{dx} = \frac{d}{dx}\sqrt{2m(E - V(x))} = -\frac{m}{p}\frac{dV}{dx},

we can write the classical condition as

m\hbar|F|/p^3 \ll 1.   (3.32)
Consequently, the semi-classical approximation can only be used in regions where the momentum is not too small. This is especially important near the classical turning points, where p \to 0. In classical mechanics, the particle rolls to a stop at the top of the potential hill. When this happens, the de Broglie wavelength heads off to infinity and is certainly not small!
Exercise 3.3 Verify the force condition given by Eq. 3.32.
Going back to the expansion for \chi, the first-order term is

\chi_1 = -\frac{1}{2}\frac{\chi_0'}{\chi_0} = \frac{1}{4}\frac{V'}{E - V},   (3.33)

or, equivalently, for S_1,

S_1' = -\frac{S_0''}{2S_0'} = -\frac{p'}{2p}.   (3.34)

So,

S_1(x) = -\frac{1}{2}\log p(x).
If we stick to regions where the semi-classical condition is met, then the wavefunction becomes

\psi(x) \approx \frac{C_1}{\sqrt{p(x)}}e^{+\frac{i}{\hbar}\int p(x)dx} + \frac{C_2}{\sqrt{p(x)}}e^{-\frac{i}{\hbar}\int p(x)dx}.   (3.35)

The 1/\sqrt{p} prefactor has a remarkably simple interpretation. The probability of finding the particle in some region between x and x + dx is given by |\psi|^2, so the classical probability is essentially proportional to 1/p. So, the faster the particle is moving, the less likely it is to be found in some small region of space. Conversely, the slower a particle moves, the more likely it is to be found in that region. Thus, the time spent in a small interval dx is inversely proportional to the momentum of the particle. We will return to this concept in a bit when we consider the idea of time in quantum mechanics.
The C_1 and C_2 coefficients are yet to be determined. If we take x = a to be one classical turning point, so that x > a corresponds to the classically inaccessible region where E < V(x), then the wavefunction in that region must be exponentially damped:

\psi(x) \approx \frac{C}{\sqrt{|p|}}\exp\left(-\frac{1}{\hbar}\int_a^x |p(x')|dx'\right).   (3.36)

To the left of x = a, we have a combination of incoming and reflected components:

\psi(x) = \frac{C_1}{\sqrt{p}}\exp\left(\frac{i}{\hbar}\int_x^a p\,dx'\right) + \frac{C_2}{\sqrt{p}}\exp\left(-\frac{i}{\hbar}\int_x^a p\,dx'\right).   (3.37)
3.2.3 Semi-classical Tunneling and Barrier Penetration
Before solving the general problem of how to use this in an arbitrary well, let's consider the case of tunneling through a potential barrier that has some bumpy top or corresponds to some simple potential. To the left of the barrier the wavefunction has incoming and reflected components:

\psi_L(x) = Ae^{ikx} + Be^{-ikx}.   (3.38)

Inside the barrier we have

\psi_B(x) = \frac{C}{\sqrt{|p(x)|}}e^{+\frac{1}{\hbar}\int|p|dx} + \frac{D}{\sqrt{|p(x)|}}e^{-\frac{1}{\hbar}\int|p|dx},   (3.39)

and to the right of the barrier,

\psi_R(x) = Fe^{+ikx}.   (3.40)

If F is the transmitted amplitude, then the tunneling probability is the ratio of the transmitted probability to the incident probability, T = |F|^2/|A|^2. If we assume that the barrier is high or broad, then C = 0 and we obtain the semi-classical estimate for the tunneling probability,

T \approx \exp\left(-\frac{2}{\hbar}\int_a^b |p(x)|dx\right),   (3.41)
where a and b are the turning points on either side of the barrier.
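As a quick check of Eq. 3.41 (a sketch in assumed units \hbar = m = 1, not from the notes), the barrier integral for an inverted parabola V(x) = V_0 - kx^2/2 can also be done analytically, \int_a^b|p|\,dx = \pi(V_0 - E)\sqrt{m/k}, so the quadrature should match the closed form:

```python
import numpy as np
from scipy.integrate import quad

hbar, m, k, V0 = 1.0, 1.0, 1.0, 5.0   # illustrative parameters

def wkb_tunneling(E):
    """T = exp(-2/hbar * int |p| dx) for V(x) = V0 - k x^2/2, with E < V0."""
    a = np.sqrt(2 * (V0 - E) / k)   # turning points at x = -a and x = +a
    absp = lambda x: np.sqrt(2 * m * (V0 - 0.5 * k * x**2 - E))
    theta = quad(absp, -a, a)[0] / hbar
    return np.exp(-2 * theta)

E = 3.0
# Analytic value of the barrier integral for the parabola:
T_exact = np.exp(-2 * np.pi * (V0 - E) * np.sqrt(m / k) / hbar)
print(wkb_tunneling(E), T_exact)
```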
Mathematically, we can flip the potential upside down and work in imaginary time. In this case the action integral becomes

S = \int_a^b \sqrt{2m(V(x) - E)}\,dx.   (3.42)

So we can think of tunneling as motion under the barrier in imaginary time.
There are a number of useful applications of this formula. Gamow's theory of alpha decay is a common example. Another useful application is in the theory of reaction rates, where we want to determine tunneling corrections to the rate constant for a particular reaction. Close to the top of the barrier, where tunneling may be important, we can expand the potential and approximate the peak as an upside-down parabola,

V(x) \approx V_0 - \frac{k}{2}x^2,

where +x represents the product side and -x represents the reactant side (see Fig. 3.1). Set the zero in energy to be the barrier height, V_0, so that any transmission for E < 0 corresponds to tunneling.
Figure 3.1: Eckart barrier and parabolic approximation of the transition state
At sufficiently large distances from the turning point, the motion is purely quasi-classical, and we can write the momentum as

p = \sqrt{2m(E + kx^2/2)} \approx x\sqrt{mk} + E\sqrt{m/k}/x,   (3.43)

and the asymptotic form of the Schrodinger wavefunction is

\psi = Ae^{+i\xi^2/2}\xi^{+i\epsilon - 1/2} + Be^{-i\xi^2/2}\xi^{-i\epsilon - 1/2},   (3.44)

where A and B are coefficients we need to determine by the matching condition, and \xi and \epsilon are dimensionless lengths and energies given by \xi = x(mk/\hbar^2)^{1/4} and \epsilon = (E/\hbar)\sqrt{m/k}.
1 The analysis is from Kemble (1935), as discussed in Landau and Lifshitz, QM.
The particular case we are interested in is for a particle coming from the left and passing to the right with the barrier in between. So the wavefunctions in each of these regions must be

\psi_R = Be^{+i\xi^2/2}\xi^{+i\epsilon - 1/2}   (3.45)

and

\psi_L = e^{-i\xi^2/2}(-\xi)^{-i\epsilon - 1/2} + Ae^{+i\xi^2/2}(-\xi)^{+i\epsilon - 1/2},   (3.46)

where the first term is the incident wave and the second term is the reflected component. So |A|^2 is the reflection coefficient and |B|^2 is the transmission coefficient, normalized so that

|A|^2 + |B|^2 = 1.
Let's move to the complex plane, write a new coordinate \xi = \rho e^{i\phi}, and consider what happens as we rotate around in \phi, taking \rho to be large. Since i\xi^2 = \rho^2(i\cos 2\phi - \sin 2\phi), we have

\psi_R(\phi = 0) = Be^{i\rho^2/2}\rho^{+i\epsilon - 1/2},
\psi_L(\phi = 0) = Ae^{i\rho^2/2}(-\rho)^{+i\epsilon - 1/2},   (3.47)

and at \phi = \pi,

\psi_R(\phi = \pi) = Be^{i\rho^2/2}(-\rho)^{+i\epsilon - 1/2},
\psi_L(\phi = \pi) = Ae^{i\rho^2/2}\rho^{+i\epsilon - 1/2}.   (3.48)

In other words, \psi_R(\phi = \pi) looks like \psi_L(\phi = 0) when

A = B(e^{i\pi})^{i\epsilon - 1/2},

so we have the relation A = -iBe^{-\pi\epsilon}. Finally, after we normalize this we get the transmission coefficient,

T = |B|^2 = \frac{1}{1 + e^{-2\pi\epsilon}},

which must hold for any energy. If the energy is large and negative, then

T \approx e^{-2\pi|\epsilon|}.

Also, we can compute the reflection coefficient for E > 0 as R = 1 - T,

R = \frac{1}{1 + e^{+2\pi\epsilon}}.
Exercise 3.4 Verify these last relationships by taking \psi_R and \psi_L and performing the analytic continuation.
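The exact parabolic-barrier result is simple enough to exercise directly. The sketch below (illustrative only) checks flux conservation T + R = 1, the value T = 1/2 exactly at the barrier top, and the deep-tunneling limit T \approx e^{-2\pi|\epsilon|}:

```python
import numpy as np

def T_parabolic(eps):
    """Exact transmission through an inverted parabolic barrier.
    eps = (E/hbar) sqrt(m/k) is the dimensionless energy measured
    from the barrier top (eps < 0 means incidence below the barrier)."""
    return 1.0 / (1.0 + np.exp(-2 * np.pi * eps))

def R_parabolic(eps):
    """Corresponding reflection coefficient, R = 1 - T."""
    return 1.0 / (1.0 + np.exp(+2 * np.pi * eps))

eps = -1.5
print(T_parabolic(eps) + R_parabolic(eps))              # flux conservation
print(T_parabolic(eps), np.exp(-2 * np.pi * abs(eps)))  # deep-tunneling limit
print(T_parabolic(0.0))                                 # 1/2 at the barrier top
```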
This gives us the transmission probability as a function of incident energy. But normal chemical reactions are not done at constant energy; they are done at constant temperature. To get the thermal transmission coefficient, we need to take a Boltzmann-weighted average of transmission coefficients,

T_{th}(\beta) = \frac{1}{Z}\int dE\, e^{-\beta E}\, T(E),   (3.49)

where \beta = 1/kT and Z is the partition function. If E represents a continuum of energy states, then

T_{th}(\beta) = \frac{\beta\hbar\omega}{4\pi}\left[\psi^{(0)}\!\left(\frac{1}{4}\left(\frac{\beta\hbar\omega}{\pi} + 2\right)\right) - \psi^{(0)}\!\left(\frac{\beta\hbar\omega}{4\pi}\right)\right],   (3.50)

where \omega = \sqrt{k/m} is the barrier frequency and \psi^{(n)}(z) is the polygamma function, i.e., the nth derivative of the digamma function \psi^{(0)}(z), which is the logarithmic derivative of Euler's gamma function, \psi^{(0)}(z) = \Gamma'(z)/\Gamma(z).²
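The polygamma expression can be checked numerically. The sketch below assumes the average runs over E \geq 0 (measured from the barrier top) with Z = \int_0^\infty e^{-\beta E}dE = 1/\beta; that is an assumption on my part, but it reproduces the digamma arguments quoted above, since (1/4)(\beta\hbar\omega/\pi + 2) = \beta\hbar\omega/4\pi + 1/2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

hbar, omega, beta = 1.0, 1.0, 1.0   # illustrative units; omega = sqrt(k/m)

def T(E):
    """Parabolic-barrier transmission, E measured from the barrier top."""
    return 1.0 / (1.0 + np.exp(-2 * np.pi * E / (hbar * omega)))

# Boltzmann average over E >= 0 with Z = 1/beta
T_th_numeric = beta * quad(lambda E: np.exp(-beta * E) * T(E), 0, np.inf)[0]

# Closed form via the digamma function
x = beta * hbar * omega / (4 * np.pi)
T_th_closed = beta * hbar * omega / (4 * np.pi) * (digamma(x + 0.5) - digamma(x))
print(T_th_numeric, T_th_closed)
```

The two numbers agree, confirming the digamma structure of Eq. 3.50 under the stated assumptions.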
3.3 Connection Formulas
In what we have considered thus far, we have assumed that up until the turning point the wavefunction was well behaved and smooth. We can think of the problem as having two domains: an exterior and an interior. The exterior part we assumed to be simple, with trivial boundary conditions. The next task is to figure out the matching condition at the turning point for an arbitrary system. So far what we have are two pieces, \psi_L and \psi_R, in the notation above. What we need is a patch. To do so, we make a linearizing assumption for the force at the classical turning point,

E - V(x) \approx F_0(x - a),   (3.51)

where F_0 = -dV/dx evaluated at x = a. Thus, the phase integral is easy:

\frac{1}{\hbar}\int_a^x p\,dx' = \frac{2}{3\hbar}\sqrt{2mF_0}\,(x - a)^{3/2}.   (3.52)

But we can do better than that. We can actually solve the Schrodinger equation for the linear potential and use the linearized solutions as our patch. The Mathematica notebook AiryFunctions.nb goes through the solution of the linearized Schrodinger equation,

\frac{\hbar^2}{2m}\psi'' + (E - V(0) - V'(0)x)\psi = 0,   (3.53)

which, since V(0) = E at the turning point, can be re-written as

\psi'' = \alpha^3 x\,\psi   (3.54)

with

\alpha = \left(\frac{2m}{\hbar^2}V'(0)\right)^{1/3}.

²See the Mathematica Book, sec. 3.2.10.
Absorbing the coefficient into a new variable y = \alpha x, we get Airy's equation,

\psi''(y) = y\,\psi.

The solutions of Airy's equation are the Airy functions, Ai(y) and Bi(y), for the regular and irregular cases. The integral representations of Ai and Bi are

Ai(y) = \frac{1}{\pi}\int_0^\infty \cos\left(\frac{s^3}{3} + sy\right)ds   (3.55)

and

Bi(y) = \frac{1}{\pi}\int_0^\infty\left[e^{-s^3/3 + sy} + \sin\left(\frac{s^3}{3} + sy\right)\right]ds.   (3.56)

Plots of these functions are shown in Fig. 3.2.
Figure 3.2: Airy functions, Ai(y) (red) and Bi(y) (blue)
Since both Ai and Bi are acceptable solutions, we will take a linear combination of the two as our patching function and figure out the coefficients later:

\psi_P = a\,Ai(\alpha x) + b\,Bi(\alpha x).   (3.57)

We now have to determine those coefficients. We need to make two assumptions: first, that the overlap zones are sufficiently close to the turning point that a linearized potential is reasonable; second, that the overlap zone is far enough from the turning point (at the origin) that the WKB approximation is accurate and reliable. You can certainly cook up some potential for which this will not work, but we will assume it's reasonable. In the linearized region, the magnitude of the momentum is

|p(x)| = \hbar\alpha^{3/2}x^{1/2}.   (3.58)

So for +x,

\int_0^x |p(x')|dx' = \frac{2}{3}\hbar(\alpha x)^{3/2},   (3.59)
and the WKB wavefunction becomes

\psi_R(x) = \frac{D}{\sqrt{\hbar}\,\alpha^{3/4}x^{1/4}}\,e^{-2(\alpha x)^{3/2}/3}.   (3.60)

In order to extend into this region, we will use the asymptotic forms of the Ai and Bi functions for y \gg 0:

Ai(y) \approx \frac{e^{-2y^{3/2}/3}}{2\sqrt{\pi}\,y^{1/4}},   (3.61)

Bi(y) \approx \frac{e^{+2y^{3/2}/3}}{\sqrt{\pi}\,y^{1/4}}.   (3.62)

Clearly, the Bi(y) term will not contribute, so b = 0 and

a = \sqrt{\frac{4\pi}{\alpha\hbar}}\,D.
Now, for the other side, we do the same procedure, except this time x < 0, so the phase integral is

\int_x^0 p\,dx' = \frac{2}{3}\hbar(-\alpha x)^{3/2}.   (3.63)

Thus the WKB wavefunction on the left-hand side is

\psi_L(x) = \frac{1}{\sqrt{p}}\left[Be^{2i(-\alpha x)^{3/2}/3} + Ce^{-2i(-\alpha x)^{3/2}/3}\right]   (3.64)
          = \frac{1}{\sqrt{\hbar}\,\alpha^{3/4}(-x)^{1/4}}\left[Be^{2i(-\alpha x)^{3/2}/3} + Ce^{-2i(-\alpha x)^{3/2}/3}\right].   (3.65)
That's the WKB part. To connect with the patching part, we again use the asymptotic forms, this time for y \ll 0, and take only the regular solution:

Ai(y) \approx \frac{1}{\sqrt{\pi}(-y)^{1/4}}\sin\left(\frac{2}{3}(-y)^{3/2} + \frac{\pi}{4}\right)
      = \frac{1}{2i\sqrt{\pi}(-y)^{1/4}}\left[e^{i\pi/4}e^{i2(-y)^{3/2}/3} - e^{-i\pi/4}e^{-i2(-y)^{3/2}/3}\right].   (3.66)

Comparing the WKB wave and the patching wave, we can match term by term:

\frac{a}{2i\sqrt{\pi}}e^{i\pi/4} = \frac{B}{\sqrt{\alpha\hbar}},   (3.67)

-\frac{a}{2i\sqrt{\pi}}e^{-i\pi/4} = \frac{C}{\sqrt{\alpha\hbar}}.   (3.68)

Since we know a in terms of the normalization constant D, B = -ie^{i\pi/4}D and C = ie^{-i\pi/4}D. This is the connection! We can write the WKB function across the turning point as

\psi_{WKB}(x) = \frac{2D}{\sqrt{p(x)}}\sin\left(\frac{1}{\hbar}\int_x^0 p\,dx' + \frac{\pi}{4}\right),   x < 0,
\psi_{WKB}(x) = \frac{D}{\sqrt{|p(x)|}}\exp\left(-\frac{1}{\hbar}\int_0^x |p|\,dx'\right),   x > 0.   (3.69)
Table 3.1: Location of the nodes x_n of the Airy function Ai(x).

  n    x_n
  1   -2.33811
  2   -4.08795
  3   -5.52056
  4   -6.78671
  5   -7.94413
  6   -9.02265
  7   -10.0402
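These zeros are tabulated in most special-function libraries; for instance (assuming SciPy is available), `scipy.special.ai_zeros` reproduces the table:

```python
from scipy.special import ai_zeros

# ai_zeros(n) returns the first n zeros of Ai, the zeros of Ai',
# and the function values at those points
a_n, _, _, _ = ai_zeros(7)
for n, x in enumerate(a_n, start=1):
    print(n, round(x, 5))
```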
Example: Bound states in the linear potential
Since we worked so hard, we have to use the results. So, consider a model problem for a particle in a gravitational field. Actually, this problem is not so far-fetched, since one can prepare trapped atoms above a parabolic reflector and make a quantum bouncing ball. Here the potential is V(x) = mgx, where m is the particle mass and g is the gravitational acceleration (g = 9.80 m/s²). We'll take the case where the reflector is infinite, so that the particle cannot penetrate into it. The Schrodinger equation for this potential is

\frac{\hbar^2}{2m}\psi'' + (E - mgx)\psi = 0.   (3.70)

The solutions are the Airy Ai functions. Setting \alpha = mg and c = \hbar^2/2m, the solutions are

\psi = C\,Ai\!\left((\alpha/c)^{1/3}(x - E/\alpha)\right).   (3.71)

However, there is one caveat: \psi(0) = 0, thus the Airy functions must have their nodes at x = 0. So we have to systematically shift the Ai(x) function in x until a node lines up at x = 0. The nodes x_n of the Ai(x) function can be determined, and the first 7 of them are listed in Table 3.1. To find the energy levels, we systematically solve the equation

(\alpha/c)^{1/3}\left(-\frac{E_n}{\alpha}\right) = x_n.

So the ground state is set by the first node landing at x = 0:
E_1 = 2.33811\,\frac{\alpha}{(\alpha/c)^{1/3}} = \frac{2.33811\,mg}{(2m^2g/\hbar^2)^{1/3}},   (3.72)

and so on. Of course, we still have to normalize the wavefunction.
We can make life a bit easier by using the quantization condition derived from the WKB approximation. Since we require the wavefunction to vanish exactly at x = 0, we have

\frac{1}{\hbar}\int_0^{x_t} p(x)dx + \frac{\pi}{4} = n\pi.   (3.73)
Figure 3.3: Bound states in a gravitational well
This ensures that the wave vanishes at x = 0; x_t in this case is the turning point, E = mgx_t (see Figure 3.3). As a consequence,

\int_0^{x_t} p(x)dx = (n - 1/4)\pi\hbar.

Since p(x) = \sqrt{2m(E_n - mgx)}, the integral can be evaluated:

\int_0^{x_t}\sqrt{2m(E_n - mgx)}\,dx = \frac{2}{3g}\sqrt{\frac{2}{m}}\,E_n^{3/2}.   (3.74)

Using x_t = E_n/mg for the classical turning point, the phase integral condition becomes

\sqrt{2E_n}\,\frac{2}{3g}\sqrt{\frac{E_n}{m}} = (n - 1/4)\pi\hbar.
Solving for E_n yields the semi-classical approximation for the eigenvalues:

E_n = \frac{g^{2/3}\,m^{1/3}\left((1 - 4n)^2\right)^{1/3}(3\pi)^{2/3}\,\hbar^{2/3}}{4\cdot 2^{1/3}}.   (3.75)

In atomic units, the gravitational acceleration is g = 1.08563\times 10^{-22} bohr/au² (can you guess why we rarely talk about gravitational effects on molecules?). For n = 0, we get for an electron E_0^{sc} = 2.014\times 10^{-15} hartree, or about 12.6 Hz. So gravitational effects on electrons are extremely tiny compared to the electron's total energy.
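As a cross-check (a sketch in illustrative units m = g = \hbar = 1, not part of the notes), the WKB quantization for the linear potential can be compared directly against the exact energies obtained from the Airy-function nodes, E_n = -x_n(\alpha^2 c)^{1/3}:

```python
import numpy as np
from scipy.special import ai_zeros

m = g = hbar = 1.0                 # illustrative units
alpha, c = m * g, hbar**2 / (2 * m)

x_n, _, _, _ = ai_zeros(5)         # first five zeros of Ai

for n in range(1, 6):
    E_exact = -x_n[n - 1] * (alpha**2 * c)**(1.0 / 3.0)
    # WKB: (1/hbar) int_0^{x_t} p dx + pi/4 = n pi  =>  E_n from Eq. 3.74
    E_wkb = ((3 * np.pi * hbar * g / 2) * np.sqrt(m / 2) * (n - 0.25))**(2.0 / 3.0)
    print(n, E_exact, E_wkb)
```

Even for the ground state the two agree to better than a percent, and the agreement improves rapidly with n, as expected for a semiclassical approximation.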
Figure 3.4: Elastic scattering trajectory for a classical collision
3.4 Scattering
The collision between two particles plays an important role in the dynamics of reactive molecules. We consider here the collision between two particles interacting via a central force, V(r). Working in the center-of-mass frame, we consider the motion of a point particle with reduced mass \mu and position vector r. We will first examine the process in a purely classical context, since it is intuitive, and then apply what we know to the quantum and semiclassical cases.
3.4.1 Classical Scattering
The angular momentum of the particle about the origin is given by

\mathbf{L} = \mathbf{r}\times\mathbf{p} = \mu(\mathbf{r}\times\dot{\mathbf{r}}).   (3.76)

We know that angular momentum is a conserved quantity, and it is easy to show that \dot{\mathbf{L}} = 0, viz.

\dot{\mathbf{L}} = \frac{d}{dt}(\mathbf{r}\times\mathbf{p}) = \mu(\dot{\mathbf{r}}\times\dot{\mathbf{r}}) + (\mathbf{r}\times\dot{\mathbf{p}}).   (3.77)

Since \dot{\mathbf{r}} = \mathbf{p}/\mu, the first term vanishes; likewise, the force vector \dot{\mathbf{p}} = -(dV/dr)\hat{r} is along \mathbf{r}, so the second term vanishes. Thus \mathbf{L} = const, meaning that angular momentum is conserved during the course of the collision.
In cartesian coordinates, the total energy of the collision is given by

E = \frac{\mu}{2}(\dot{x}^2 + \dot{y}^2) + V.   (3.78)

To convert from cartesian to polar coordinates, we use

x = r\cos\theta,   (3.79)
y = r\sin\theta,   (3.80)
\dot{x} = \dot{r}\cos\theta - r\dot{\theta}\sin\theta,   (3.81)
\dot{y} = \dot{r}\sin\theta + r\dot{\theta}\cos\theta.   (3.82)

Thus,

E = \frac{\mu}{2}\dot{r}^2 + V(r) + \frac{L^2}{2\mu r^2},   (3.83)

where we use the fact that

L = \mu r^2\dot{\theta},   (3.84)
where L is the angular momentum. What we see here is that we have two potential contributions. The first is the physical attraction (or repulsion) between the two scattering bodies. The second is a purely repulsive centrifugal potential, which depends upon the angular momentum and ultimately upon the impact parameter. For large impact parameters, this can be the dominant term. The effective radial force is given by

\mu\ddot{r} = \frac{L^2}{\mu r^3} - \frac{\partial V}{\partial r}.   (3.85)

Again, we note that the centrifugal contribution is always repulsive, while the physical interaction V(r) is typically attractive at long range and repulsive at short range.
We can derive the solutions to the scattering motion by integrating the velocity equations for r and \theta,

\dot{r} = \pm\left[\frac{2}{\mu}\left(E - V(r) - \frac{L^2}{2\mu r^2}\right)\right]^{1/2},   (3.86)

\dot{\theta} = \frac{L}{\mu r^2},   (3.87)

and taking into account the starting conditions for r and \theta. In general, we could solve the equations numerically and obtain the complete scattering path. However, what we are really interested in is the deflection angle, since this is what is ultimately observed. So, we integrate the last two equations and derive \theta in terms of r:

\theta(r) = \int_0^\theta d\theta' = \int \frac{d\theta}{dr}\,dr   (3.88)
         = \int \frac{L}{\mu r^2}\left[\frac{2}{\mu}\left(E - V - \frac{L^2}{2\mu r^2}\right)\right]^{-1/2} dr,   (3.89)

where the collision starts at t = -\infty with r = \infty and \theta = 0. What we want to do is derive this in terms of an impact parameter, b, and scattering angle \chi. These are illustrated in Fig. 3.4 and can be derived from basic kinematic considerations. First, energy is conserved throughout, so if
we know the asymptotic velocity, v, then E = \mu v^2/2. Secondly, angular momentum is conserved, so L = \mu|\mathbf{r}\times\mathbf{v}| = \mu v b. Thus the integral above becomes

\theta(r) = b\int \frac{d\theta}{dr}\,dr   (3.90)
         = \int \frac{b}{r^2}\frac{dr}{\sqrt{1 - V/E - b^2/r^2}}.   (3.91)

Finally, the angle of deflection is related to the angle of closest approach by 2\theta_c + \chi = \pi; hence,

\chi = \pi - 2b\int_{r_c}^\infty \frac{dr}{r^2\sqrt{1 - V/E - b^2/r^2}}.   (3.92)

The radial distance of closest approach is determined by

E = \frac{L^2}{2\mu r_c^2} + V(r_c),   (3.93)

which can be restated as

b^2 = r_c^2\left(1 - \frac{V(r_c)}{E}\right).   (3.94)
Once we have specified the potential, we can compute the deflection angle using Eqs. 3.92 and 3.94. If V(r_c) < 0, then r_c < b and the potential is attractive at the turning point; if V(r_c) > 0, then r_c > b and the potential is repulsive at the turning point.
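The deflection integral of Eq. 3.92 is easy to evaluate numerically. As a sketch (an illustration, not from the notes), take a hard sphere of diameter d, for which V = 0 outside r = d and r_c = d for b < d; the quadrature should reproduce the textbook result \chi = \pi - 2\arcsin(b/d):

```python
import numpy as np
from scipy.integrate import quad

d = 1.0   # hard-sphere diameter (illustrative)

def chi_hard_sphere(b):
    """Eq. 3.92 with V = 0 for r > d and r_c = d (valid for b < d).
    Substituting u = b/r turns the integral into int_0^{b/d} du/sqrt(1 - u^2)."""
    integrand = lambda u: 1.0 / np.sqrt(1.0 - u**2)
    return np.pi - 2.0 * quad(integrand, 0.0, b / d)[0]

for b in (0.2, 0.5, 0.9):
    print(b, chi_hard_sphere(b), np.pi - 2 * np.arcsin(b / d))
```

Head-on collisions (b = 0) give \chi = \pi, and grazing collisions (b \to d) give \chi \to 0, as expected.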
If we have a beam of particles incident on some scattering center, then collisions will occur with all possible impact parameters (hence angular momenta) and will give rise to a distribution of scattering angles. We can describe this by a differential cross-section. If I_0 is the incident flux, i.e., the number of particles in our beam passing through a unit area normal to the beam direction per unit time, then the differential cross-section, I(\chi), is defined so that I(\chi)d\Omega is the number of particles per unit time scattered into the solid angle d\Omega, divided by the incident flux.
The deflection pattern will be axially symmetric about the incident beam direction due to the spherical symmetry of the interaction potential; thus, I(\chi) depends only upon the scattering angle, and d\Omega can be constructed from the cones defining \chi and \chi + d\chi, i.e., d\Omega = 2\pi\sin\chi\,d\chi. Even if the interaction potential is not spherically symmetric, as most molecules are not spherical, the scattering would still be axially symmetric, since we would be scattering from a homogeneous distribution of all possible orientations of the colliding molecules. Hence any azimuthal dependence must vanish unless we can orient one or both of the colliding species.
Given an initial velocity v, the fraction of the incoming flux with impact parameter between b and b + db is 2\pi b\,db. These particles will be deflected between \chi and \chi + d\chi if d\chi/db > 0, or between \chi and \chi - d\chi if d\chi/db < 0. Thus, I(\chi)d\Omega = 2\pi b\,db, and it follows that

I(\chi) = \frac{b}{\sin\chi\,|d\chi/db|}.   (3.95)

Thus, once we know \chi(b) for a given v, we can get the differential cross-section. The total cross-section is obtained by integrating,

\sigma = 2\pi\int_0^\pi I(\chi)\sin\chi\,d\chi.   (3.96)

This is a measure of the attenuation of the incident beam by the scattering target and has units of area.
3.4.2 Scattering at small deflection angles
Our calculations will be greatly simplified if we consider collisions that result in small deflections in the forward direction. If we let the initial beam be along the x axis with momentum p, then the scattered momentum p' will be related to the scattering angle by p'\sin\chi = p'_y. Taking \chi to be small,

\chi \approx \frac{p'_y}{p'} = \frac{\text{momentum transfer}}{\text{momentum}}.   (3.97)

Since the time derivative of momentum is the force, the momentum transferred perpendicular to the incident beam is obtained by integrating the perpendicular force,

F_y = -\frac{\partial V}{\partial y} = -\frac{\partial V}{\partial r}\frac{\partial r}{\partial y} = -\frac{\partial V}{\partial r}\frac{b}{r},   (3.98)

where we used r^2 = x^2 + y^2 and y \approx b. Thus we find

\chi = \frac{p'_y}{\mu(2E/\mu)^{1/2}}   (3.99)
    = -\frac{b}{(2E\mu)^{1/2}}\int_{-\infty}^{+\infty}\frac{\partial V}{\partial r}\frac{dt}{r}   (3.100)
    = -\frac{b}{(2E\mu)^{1/2}}\left(\frac{\mu}{2E}\right)^{1/2}\int_{-\infty}^{+\infty}\frac{\partial V}{\partial r}\frac{dx}{r}   (3.101)
    = -\frac{b}{E}\int_b^{\infty}\frac{\partial V}{\partial r}\frac{dr}{(r^2 - b^2)^{1/2}},   (3.102)

where we used x = (2E/\mu)^{1/2}t, and x varies from -\infty to +\infty as r goes from \infty to b and back.
Let us use this in the simple example of the potential V = C/r^s for s > 0. Substituting V into the integral above and solving yields

\chi = \frac{sC\sqrt{\pi}}{2b^s E}\,\frac{\Gamma((s+1)/2)}{\Gamma(s/2 + 1)}.   (3.103)

This indicates that \chi E \propto b^{-s} and |d\chi/db| = s\chi/b. Thus, we can conclude by deriving the differential cross-section,

I(\chi) = \frac{1}{s}\,\chi^{-(2 + 2/s)}\left(\frac{sC\sqrt{\pi}}{2E}\frac{\Gamma((s+1)/2)}{\Gamma(s/2+1)}\right)^{2/s},   (3.104)

valid for small values of the scattering angle. Consequently, a log-log plot of the center-of-mass differential cross-section as a function of the scattering angle at fixed energy should give a straight line with slope -(2 + 2/s), from which one can determine the value of s. For the van der Waals potential, s = 6 and I(\chi) \propto E^{-1/3}\chi^{-7/3}.
3.4.3 Quantum treatment
The quantum mechanical case is a bit more complex. Here we will develop a brief overview of quantum scattering and move on to the semiclassical evaluation. The quantum scattering is determined by the asymptotic form of the wavefunction,

\psi(r,\chi) \xrightarrow{r\to\infty} A\left[e^{ikz} + \frac{f(\chi)}{r}e^{ikr}\right],   (3.105)

where A is some normalization constant and k = 1/\bar{\lambda} = \mu v/\hbar is the initial wave vector along the incident beam direction (\chi = 0). The first term represents a plane wave incident upon the scatterer, and the second represents an outgoing spherical wave. Notice that the outgoing amplitude is reduced as r increases; this is because the wavefunction spreads as r increases. If we can collimate the incoming and outgoing components, then the scattering amplitude f(\chi) is related to the differential cross-section by

I(\chi) = |f(\chi)|^2.   (3.106)

What we have, then, is that the asymptotic form of the wavefunction carries within it information about the scattering process. As a result, we do not need to solve the wave equation for all of space; we just need to be able to connect the scattering amplitude to the interaction potential. We do so by expanding the wave as a superposition of Legendre polynomials,

\psi(r,\chi) = \sum_{l=0}^{\infty} R_l(r)\,P_l(\cos\chi).   (3.107)

R_l(r) must remain finite as r \to 0. This determines the form of the solution.
When V(r) = 0, \psi = A\exp(ikz), and we can expand the exponential in terms of spherical waves:

e^{ikz} = \sum_{l=0}^{\infty}(2l+1)e^{il\pi/2}\frac{\sin(kr - l\pi/2)}{kr}P_l(\cos\chi)   (3.108)
       = \frac{1}{2i}\sum_{l=0}^{\infty}(2l+1)i^l P_l(\cos\chi)\left[\frac{e^{i(kr - l\pi/2)}}{kr} - \frac{e^{-i(kr - l\pi/2)}}{kr}\right].   (3.109)

We can interpret this equation in the following intuitive way: the incident plane wave is equivalent to an infinite superposition of incoming and outgoing spherical waves, in which each term corresponds to a particular angular momentum state with

L = \hbar\sqrt{l(l+1)} \approx \hbar(l + 1/2).   (3.110)

From our analysis above, we can relate L to the impact parameter, b:

b = \frac{L}{\mu v} \approx \frac{l + 1/2}{k}.   (3.111)

In essence, the incoming beam is divided into cylindrical zones, in which the lth zone contains particles with impact parameters (and hence angular momenta) between l\bar{\lambda} and (l+1)\bar{\lambda}.
Exercise 3.5 The impact parameter, b, is treated as continuous; however, in quantum mechanics we allow only discrete values of the angular momentum, l. How will this affect our results, since b = (l + 1/2)\bar{\lambda} from above?
If V(r) is short-ranged (i.e., it falls off more rapidly than 1/r for large r), we can derive a general solution for the asymptotic form,

\psi(r,\chi) \to \sum_{l=0}^{\infty}(2l+1)\exp\left[i\left(\frac{l\pi}{2} + \eta_l\right)\right]\frac{\sin(kr - l\pi/2 + \eta_l)}{kr}P_l(\cos\chi).   (3.112)

The significant difference between this equation and the one above for the V(r) = 0 case is the addition of a phase shift \eta_l. This shift only occurs in the outgoing part of the wavefunction, and so we conclude that the primary effect of a potential in quantum scattering is to introduce a phase into the asymptotic form of the scattering wave. This phase must be a real number and has the physical interpretation illustrated in Fig. 3.5. A repulsive potential will cause a decrease in the relative velocity of the particles at small r, resulting in a longer de Broglie wavelength. This causes the wave to be pushed out relative to that for V = 0, and the phase shift is negative. An attractive potential produces a positive phase shift and pulls the wavefunction in a bit. Furthermore, the centrifugal part produces a negative shift of -l\pi/2.
Comparing the various forms for the asymptotic waves, we can deduce that the scattering amplitude is given by

f(\chi) = \frac{1}{2ik}\sum_{l=0}^{\infty}(2l+1)(e^{2i\eta_l} - 1)P_l(\cos\chi).   (3.113)
From this, the differential cross-section is

I(\chi) = \bar{\lambda}^2\left|\sum_{l=0}^{\infty}(2l+1)e^{i\eta_l}\sin(\eta_l)P_l(\cos\chi)\right|^2.   (3.114)

What we see here is the possibility of interference between different angular momentum components.
Moving forward at this point requires some rather sophisticated treatments, which we reserve for a later course. However, we can use the semiclassical methods developed in this chapter to estimate the phase shifts.
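Equations 3.113 and 3.114 are easy to exercise numerically. The sketch below (with arbitrary, made-up phase shifts, purely to illustrate the machinery) builds f(\chi) from a handful of partial waves and checks that integrating I(\chi) = |f(\chi)|^2 over solid angle reproduces the standard partial-wave total cross-section, \sigma = (4\pi/k^2)\sum_l (2l+1)\sin^2\eta_l:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

k = 1.0
eta = [1.2, 0.7, 0.3, 0.1]   # made-up phase shifts for l = 0..3

def f(chi):
    """Scattering amplitude, Eq. 3.113, truncated to the listed partial waves."""
    return sum((2*l + 1) * (np.exp(2j * eta[l]) - 1) * eval_legendre(l, np.cos(chi))
               for l in range(len(eta))) / (2j * k)

# sigma from integrating |f|^2 over solid angle ...
sigma_int = 2 * np.pi * quad(lambda c: abs(f(c))**2 * np.sin(c), 0, np.pi)[0]
# ... versus the partial-wave sum (Legendre orthogonality)
sigma_pw = (4 * np.pi / k**2) * sum((2*l + 1) * np.sin(eta[l])**2
                                    for l in range(len(eta)))
print(sigma_int, sigma_pw)
```

The agreement follows from the orthogonality of the Legendre polynomials; the cross terms in |f|^2 are exactly the interference terms mentioned above, and they integrate to zero over the sphere.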
3.4.4 Semiclassical evaluation of phase shifts
The exact scattering wave is not so important. What is important is the asymptotic extent of the wavefunction, since that is the part which carries the information from the scattering center to the detector. What we want is a measure of the shift in phase between a scattering event with and without the potential. From the WKB treatment above, we know that the phase is related to the classical action along a given path. Thus, in computing the semiclassical phase shifts, we are really looking at the difference between the classical actions for a system with the potential switched on and a system with the potential switched off:

\eta_l^{SC} = \lim_{R\to\infty}\left[\int_{r_c}^R \frac{dr}{\bar{\lambda}(r)} - \int_b^R \frac{dr}{\bar{\lambda}}\right].   (3.115)
Figure 3.5: Form of the radial wave for repulsive (short dashed) and attractive (long dashed) potentials. The form for V = 0 is the solid curve, for comparison.
R is the radius of a sphere about the scattering center, and \bar{\lambda}(r) is the de Broglie wavelength

\bar{\lambda}(r) = \frac{\hbar}{p} = \frac{1}{k(r)} = \frac{\hbar}{\mu v(1 - V(r)/E - b^2/r^2)^{1/2}}   (3.116)

associated with the radial motion. Putting this together,

\eta_l^{SC} = \lim_{R\to\infty}\left[k\int_{r_c}^R\left(1 - \frac{V(r)}{E} - \frac{b^2}{r^2}\right)^{1/2}dr - k\int_b^R\left(1 - \frac{b^2}{r^2}\right)^{1/2}dr\right]   (3.117)
           = \lim_{R\to\infty}\left[\int_{r_c}^R k(r)\,dr - k\int_b^R\left(1 - \frac{b^2}{r^2}\right)^{1/2}dr\right]   (3.118)
(k is the incoming wave vector.) The last integral we can evaluate:

k\int_b^R \frac{(r^2 - b^2)^{1/2}}{r}\,dr = k\left[\sqrt{r^2 - b^2} - b\cos^{-1}\frac{b}{r}\right]_b^R \to kR - \frac{\pi k b}{2}.   (3.119)

Now, to clean things up a bit, we add and subtract an integral over k. (We do this to get rid of the R dependence, which will cause problems when we take the limit R \to \infty.)

\eta_l^{SC} = \lim_{R\to\infty}\left[\int_{r_c}^R k(r)\,dr - \int_{r_c}^R k\,dr + \int_{r_c}^R k\,dr - \left(kR - \frac{\pi k b}{2}\right)\right]   (3.120)
           = \int_{r_c}^{\infty}(k(r) - k)\,dr - k\left(r_c - \frac{\pi b}{2}\right)   (3.121)
           = \int_{r_c}^{\infty}(k(r) - k)\,dr - kr_c + (l + 1/2)\frac{\pi}{2}.   (3.122)

This last expression is the standard form of the phase shift.
The deflection angle can be determined in a similar way:

\chi = \lim_{R\to\infty}\left[\int_{\text{actual path}} d\theta - \int_{V=0\ \text{path}} d\theta\right].   (3.123)
We transform this into an integral over r:

\chi = 2b\left[\int_b^{\infty}\left(1 - \frac{b^2}{r^2}\right)^{-1/2}\frac{dr}{r^2} - \int_{r_c}^{\infty}\left(1 - \frac{V(r)}{E} - \frac{b^2}{r^2}\right)^{-1/2}\frac{dr}{r^2}\right].   (3.124)

Agreed, this is a weird way to express the scattering angle, but let's keep pushing forward. The potential-free integral can be evaluated:

\int_b^{\infty}\left(1 - \frac{b^2}{r^2}\right)^{-1/2}\frac{dr}{r^2} = \left[\frac{1}{b}\cos^{-1}\frac{b}{r}\right]_b^{\infty} = \frac{\pi}{2b},   (3.125)
which yields the classical result we obtained before. So, why did we bother? From this we can derive a simple and useful connection between the classical deflection angle and the rate of change of the semiclassical phase shift with angular momentum, d\eta_l^{SC}/dl. First, recall the Leibniz rule for taking derivatives of integrals:

\frac{d}{dx}\int_{a(x)}^{b(x)} f(x,y)\,dy = \frac{db}{dx}f(x, b(x)) - \frac{da}{dx}f(x, a(x)) + \int_{a(x)}^{b(x)}\frac{\partial f(x,y)}{\partial x}\,dy.   (3.126)

Taking the derivative of \eta_l^{SC} with respect to l, using the last equation and the relation (\partial b/\partial l)_E = 1/k, we find that

\frac{d\eta_l^{SC}}{dl} = \frac{\chi}{2}.   (3.127)

Next, we examine the differential cross-section, I(\chi). The scattering amplitude is

f(\chi) = \frac{\bar{\lambda}}{2i}\sum_{l=0}^{\infty}(2l+1)e^{2i\eta_l}P_l(\cos\chi),   (3.128)

where we use \bar{\lambda} = 1/k and exclude the singular point \chi = 0, since this contributes nothing to the total flux.
Now we need a mathematical identity to take this to the semiclassical limit, where the potential varies slowly on the scale of the wavelength. What we do first is relate the Legendre polynomial P_l(\cos\chi) to a zeroth-order Bessel function for small values of \chi (\chi \ll 1):

P_l(\cos\chi) \approx J_0((l + 1/2)\chi).   (3.129)

Then, when x = (l + 1/2)\chi \gg 1 (i.e., large angular momentum), we can use the asymptotic expansion of J_0(x),

J_0(x) \approx \sqrt{\frac{2}{\pi x}}\sin\left(x + \frac{\pi}{4}\right).   (3.130)

Pulling this together,

P_l(\cos\chi) \approx \left(\frac{2}{\pi(l + 1/2)\chi}\right)^{1/2}\sin((l + 1/2)\chi + \pi/4)
             \approx \left(\frac{2}{\pi(l + 1/2)}\right)^{1/2}\frac{\sin((l + 1/2)\chi + \pi/4)}{(\sin\chi)^{1/2}}   (3.131)
for (l + 1/2)\chi \gg 1. Thus, we can write the semi-classical scattering amplitude as

f(\chi) = \bar{\lambda}\sum_{l=0}^{\infty}\left(\frac{l + 1/2}{2\pi\sin\chi}\right)^{1/2}\left[e^{i\phi_+} + e^{i\phi_-}\right],   (3.132)

where

\phi_{\pm} = 2\eta_l \pm (l + 1/2)\chi \pm \frac{\pi}{4}.   (3.133)

The phases are rapidly oscillating functions of l. Consequently, the majority of the terms must cancel, and the sum is determined by the ranges of l for which either \phi_+ or \phi_- is extremized. This implies that the scattering amplitude is determined almost exclusively by phase shifts which satisfy

2\frac{d\eta_l}{dl} \pm \chi = 0,   (3.134)

where the + is for d\phi_+/dl = 0 and the - is for d\phi_-/dl = 0. This demonstrates that only the phase shifts corresponding to impact parameter b can contribute significantly to the differential cross-section in the semi-classical limit. Thus, the classical condition for scattering at a given deflection angle \chi is that l be large enough for Eq. 3.134 to apply.
3.4.5 Resonance Scattering
3.5 Problems and Exercises
Exercise 3.6 In this problem we will (again) consider the ammonia inversion problem, this time proceeding in a semi-classical context.
Recall that the ammonia inversion potential consists of two symmetric potential wells separated by a barrier. If the barrier were impenetrable, one would find energy levels corresponding to motion in one well or the other. Since the barrier is not infinite, there can be passage between wells via tunneling. This causes the otherwise degenerate energy levels to split.
In this problem, we will make life a bit easier by taking

V(x) = \lambda(x^4 - x^2),

as in the examples in Chapter 5.
Let \psi_0 be the semi-classical wavefunction describing the motion in one well with energy E_0. Assume that \psi_0 is exponentially damped on both sides of the well and that the wavefunction is normalized so that the integral over \psi_0^2 is unity. When tunneling is taken into account, the wavefunctions corresponding to the new energy levels, E_1 and E_2, are the symmetric and antisymmetric combinations of \psi_0(x) and \psi_0(-x):

\psi_1 = (\psi_0(x) + \psi_0(-x))/\sqrt{2},
\psi_2 = (\psi_0(x) - \psi_0(-x))/\sqrt{2},

where \psi_0(-x) can be thought of as the contribution from the zeroth-order wavefunction in the other well. In well 1, \psi_0(-x) is very small, and in well 2, \psi_0(+x) is very small, so the product \psi_0(x)\psi_0(-x) is vanishingly small everywhere. Also, by construction, \psi_1 and \psi_2 are normalized.
1. Assume that \psi_0 and \psi_1 are solutions of the Schrodinger equations

\psi_0'' + \frac{2m}{\hbar^2}(E_0 - V)\psi_0 = 0

and

\psi_1'' + \frac{2m}{\hbar^2}(E_1 - V)\psi_1 = 0.

Multiply the former by \psi_1 and the latter by \psi_0, combine and subtract equivalent terms, and integrate over x from 0 to \infty to show that

E_1 - E_0 = -\frac{\hbar^2}{m}\psi_0(0)\,\psi_0'(0).

Perform a similar analysis to show that

E_2 - E_0 = +\frac{\hbar^2}{m}\psi_0(0)\,\psi_0'(0).
2. Show that the unperturbed semiclassical wavefunction at the origin is

\psi_0(0) = \sqrt{\frac{\omega}{2\pi v_0}}\exp\left(-\frac{1}{\hbar}\int_0^a |p|\,dx\right)

and

\psi_0'(0) = \frac{m v_0}{\hbar}\psi_0(0),

where v_0 = \sqrt{2(E_0 - V(0))/m}, \omega is the frequency of classical oscillation in the well, and a is the classical turning point, at E_0 = V(a).
3. Combining your results, show that the tunneling splitting is

\Delta E = \frac{\hbar\omega}{\pi}\exp\left(-\frac{1}{\hbar}\int_{-a}^{+a}|p|\,dx\right),

where the integral is taken between the classical turning points on either side of the barrier.
4. Assuming that the potential in the barrier region is an upside-down parabola,

V(x) \approx V_0 - kx^2/2,

what is the tunneling splitting?
5. Now, taking \lambda = 0.1, expand the potential about the barrier top and determine the harmonic force constant for the upside-down parabola. Using the equations you derived, compute the tunneling splitting for a proton in this well. How does this compare with the calculations presented in Chapter 5?
Chapter 4
Postulates of Quantum Mechanics
When I hear the words "Schrodinger's cat," I wish I were able to reach for my gun.
Stephen Hawking
The dynamics of physical processes at a microscopic level is very much beyond the realm of our macroscopic comprehension. In fact, it is difficult to imagine what it is like to move about on the length and time scales for which quantum mechanics is important. However, for molecules, quantum mechanics is an everyday reality. Thus, in order to understand how molecules move and behave, we must develop a model of that dynamics in terms which we can comprehend. Making a model means developing a consistent mathematical framework in which the mathematical operations and constructs mimic the physical processes being studied.
Before moving on to develop the mathematical framework required for quantum mechanics, let us consider a simple thought experiment. We could do the experiment; however, we would have to deal with some additional technical terms, like funding. The experiment I want to consider goes as follows: take a machine gun which shoots bullets at a target. It's not a very accurate gun; in fact, it sprays bullets randomly in the general direction of the target.
The distribution of bullets, or histogram of the amount of lead accumulated in the target, is roughly a Gaussian, C exp(−x²/a). The probability of finding a bullet at x is given by
P(x) = C e^{−x²/a}. (4.1)
C here is a normalization factor such that the probability of finding a bullet anywhere is 1, i.e.
∫ dx P(x) = 1. (4.2)
The probability of finding a bullet over a small interval [a, b] is
∫_a^b dx P(x) ≥ 0. (4.3)
Now suppose we have a bunker with 2 windows between the machine gun and the target, such that the bunker is thick enough that the bullets coming through the windows rattle around a few times before emerging in random directions. Also, let's suppose we can color the bullets by some magical (or mundane) means such that bullets going through 1 slit are colored red and
[Figure 4.1: Gaussian distribution function]
[Figure 4.2: Combination of two distributions.]
bullets going through the other slit are colored blue. Thus the distribution of bullets at a target behind the bunker is now
P_12(x) = P_1(x) + P_2(x), (4.4)
where P_1 is the distribution of bullets from window 1 (the red bullets) and P_2 that from window 2 (the blue bullets). Thus, the probability of finding a bullet that passed through either 1 or 2 is the sum of the probabilities of going through 1 and 2. This is shown in Fig. 4.2.
Now, let's make an electron gun by taking a tungsten filament heated up so that electrons boil off and can be accelerated toward a phosphor screen after passing through a metal foil with a pinhole in the middle. We start to see little pinpoints of light flicker on the screen; these are the individual electron "bullets" crashing into the phosphor.
If we count the number of electrons which strike the screen over a period of time, just as in the machine gun experiment, we get a histogram as before. The reason we get a histogram is slightly different than before: if we make the pinhole smaller, the distribution gets wider. This is a manifestation of the Heisenberg Uncertainty Principle, which states
Δx Δp ≥ ħ/2. (4.5)
In other words, the more I restrict where the electron can be (via the pinhole), the more uncertain I am about which direction it is going (i.e., its momentum parallel to the foil). Thus, I wind up with a distribution of momenta leaving the foil.
Now, let's poke another hole in the foil and consider the distribution of electrons on the screen. Based upon our experience with bullets, we would expect
P_12 = P_1 + P_2. (4.6)
BUT electrons obey quantum mechanics! In quantum mechanics we represent a particle via an amplitude, and one of the rules of quantum mechanics is that we first add amplitudes and that probabilities are akin to the intensity of the combined amplitudes, i.e.
P = |ψ_1 + ψ_2|². (4.7)
[Figure 4.3: Constructive and destructive interference from the electron two-slit experiment. The superimposed red and blue curves are P_1 and P_2 from the classical probabilities.]
where ψ_1 and ψ_2 are the complex amplitudes associated with going through hole 1 and hole 2. Since they are complex numbers,
ψ_1 = a_1 + i b_1 = |ψ_1| e^{iθ_1} (4.8)
ψ_2 = a_2 + i b_2 = |ψ_2| e^{iθ_2}. (4.9)
Thus,
ψ_1 + ψ_2 = |ψ_1| e^{iθ_1} + |ψ_2| e^{iθ_2} (4.10)
|ψ_1 + ψ_2|² = (|ψ_1| e^{iθ_1} + |ψ_2| e^{iθ_2})(|ψ_1| e^{−iθ_1} + |ψ_2| e^{−iθ_2}) (4.11)
P_12 = |ψ_1|² + |ψ_2|² + 2|ψ_1||ψ_2| cos(θ_1 − θ_2) (4.12)
P_12 = P_1 + P_2 + 2√(P_1 P_2) cos(θ_1 − θ_2). (4.13)
In other words, I get the same envelope as before, but it is modulated by the cos(θ_1 − θ_2) interference term. This is shown in Fig. 4.3. Here the actual experimental data is shown as a dashed line and the red and blue curves are P_1 and P_2. It is just as if a wave of electrons struck the two slits and diffracted (or interfered) with itself. However, we know that electrons come in definite chunks; we can observe individual specks on the screen, and only whole lumps arrive. There are no fractional electrons.
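The interference formula in Eq. 4.13 is easy to check numerically. Here is a minimal sketch (Python/NumPy, not part of the original notes; the Gaussian slit amplitudes and the linear phases are made up for illustration) verifying that |ψ_1 + ψ_2|² differs from P_1 + P_2 exactly by the 2√(P_1 P_2)cos(θ_1 − θ_2) cross term:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
# hypothetical slit amplitudes: Gaussians with a position-dependent phase
psi1 = np.exp(-(x + 1) ** 2 / 8) * np.exp(1j * 2.0 * x)
psi2 = np.exp(-(x - 1) ** 2 / 8) * np.exp(-1j * 2.0 * x)

P1, P2 = np.abs(psi1) ** 2, np.abs(psi2) ** 2
P12 = np.abs(psi1 + psi2) ** 2           # add amplitudes first, then square

# cross term 2|psi1||psi2|cos(theta1 - theta2) from Eq. 4.12
cross = 2 * np.abs(psi1) * np.abs(psi2) * np.cos(np.angle(psi1) - np.angle(psi2))
assert np.allclose(P12, P1 + P2 + cross)
assert P12.max() > (P1 + P2).max()       # constructive interference exceeds the classical sum
```

At points of constructive interference the quantum result exceeds the classical sum P_1 + P_2, exactly the effect described above.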
Conjecture 1 Electrons, being indivisible chunks of matter, either go through slit 1 or slit 2.
Assuming Conjecture 1 is true, we can divide the electrons into two classes:
1. Those that go through slit 1.
2. Those that go through slit 2.
We can check this proposition by plugging up hole 1, whereupon we get P_2 as the resulting distribution. Plugging up hole 2, we get P_1. Perhaps our proposition is wrong, and electrons can be split in half, with half going through slit 1 and half through slit 2. NO! Perhaps the electron went through 1, wound about, went through 2, and by some round-about way made its way to the screen.
Notice that in the center region of P_12, P_12 > P_1 + P_2, as if closing 1 hole actually decreased the number of electrons going through the other hole. It seems very hard to justify both observations by proposing that the electrons travel in complicated pathways.
In fact, it is very mysterious. And the more you study quantum mechanics, the more mysterious it seems. Many ideas have been cooked up which try to get the P_12 curve in terms of electrons going in complicated paths; all have failed.
Surprisingly, the math is simple (in this case). It is just adding complex-valued amplitudes. So we conclude the following:
Electrons always arrive in discrete, indivisible chunks, like particles. However, the probability of finding a chunk at a given position is like the distribution of the intensity of a wave.
We could conclude that our conjecture is false, since P_12 ≠ P_1 + P_2. This we can test. Let's put a laser behind the slits so that an electron going through either slit scatters a bit of light which we can detect. So, we can see flashes of light from electrons going through slit 1, and flashes of light from electrons going through slit 2, but NEVER two flashes at the same time. Conjecture 1 is true. But if we look at the resulting distribution, we get P_12 = P_1 + P_2.
Measuring which slit the electron passes through destroys the phase information. When we make a measurement in quantum mechanics, we really disturb the system. There is always the same amount of disturbance, because electrons and photons always interact in the same way every time and produce the same sized effects. These effects rescatter the electrons, and the phase information is smeared out.
It is totally impossible to devise an experiment to measure any quantum phenomenon without disturbing the system you are trying to measure. This is one of the most fundamental, and perhaps most disturbing, aspects of quantum mechanics.
So, once we have accepted the idea that matter comes in discrete bits but that its behavior is much like that of waves, we have to adjust our way of thinking about matter and dynamics away from the classical concepts we are used to dealing with in our ordinary life.
What follows are the basic building blocks of quantum mechanics. Needless to say, they are stated in a rather formal language; however, each postulate has a specific physical reason for its existence. For any physical theory, we need to be able to say what the system is, how it moves, and what the possible outcomes of a measurement are. These postulates provide a sufficient basis for the development of a consistent theory of quantum mechanics.
4.0.1 The description of a physical state:
The state of a physical system at time t is defined by specifying a vector |ψ(t)⟩ belonging to a state space H. We shall assume that this state vector can be normalized to one:
⟨ψ|ψ⟩ = 1.
4.0.2 Description of Physical Quantities:
Every measurable physical quantity, A, is described by an operator acting in H; this operator is an observable.
A consequence of this is that any operator related to a physical observable must be Hermitian. This we can prove. Hermitian means that
⟨x|O|y⟩ = ⟨y|O|x⟩*. (4.14)
Thus, if O is a Hermitian operator and ⟨O⟩ = ⟨ψ|O|ψ⟩ with |ψ⟩ = |x⟩ + |y⟩,
⟨O⟩ = ⟨x|O|x⟩ + ⟨x|O|y⟩ + ⟨y|O|x⟩ + ⟨y|O|y⟩. (4.15)
Likewise,
⟨O⟩* = ⟨x|O|x⟩* + ⟨x|O|y⟩* + ⟨y|O|x⟩* + ⟨y|O|y⟩*
= ⟨x|O|x⟩ + ⟨y|O|x⟩ + ⟨x|O|y⟩ + ⟨y|O|y⟩
= ⟨O⟩. (4.16)
Thus the expectation value of a Hermitian operator is real.
If O is Hermitian, with O|ψ⟩ = λ|ψ⟩, we can also write
⟨ψ|O = λ⟨ψ|, (4.18)
which shows that ⟨ψ| is an eigenbra of O with real eigenvalue λ. Therefore, for an arbitrary ket |φ⟩,
⟨ψ|O|φ⟩ = λ⟨ψ|φ⟩. (4.19)
Now, consider eigenvectors of a Hermitian operator, |ψ⟩ and |φ⟩. Obviously we have
O|ψ⟩ = λ|ψ⟩ (4.20)
O|φ⟩ = μ|φ⟩. (4.21)
Since O is Hermitian, we also have
⟨ψ|O = λ⟨ψ| (4.22)
⟨φ|O = μ⟨φ|. (4.23)
Thus, we can write
⟨φ|O|ψ⟩ = λ⟨φ|ψ⟩ (4.24)
⟨ψ|O|φ⟩ = μ⟨ψ|φ⟩. (4.25)
Subtracting the two: (λ − μ)⟨φ|ψ⟩ = 0. Thus, if λ ≠ μ, |ψ⟩ and |φ⟩ must be orthogonal.
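Both facts (real expectation values, and orthogonality of eigenvectors belonging to distinct eigenvalues) can be checked numerically for a finite-dimensional Hermitian matrix. A minimal sketch (Python/NumPy, added here for illustration; the random operator is made up):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
O = M + M.conj().T                     # Hermitian by construction: O = O^dagger

# expectation value in a random normalized state is real
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
expval = psi.conj() @ O @ psi
assert abs(expval.imag) < 1e-12

# eigenvectors form an orthonormal set
vals, vecs = np.linalg.eigh(O)         # eigh returns real eigenvalues for Hermitian input
overlaps = vecs.conj().T @ vecs        # should be the identity matrix
assert np.allclose(overlaps, np.eye(4))
```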
4.0.3 Quantum Measurement:
The only possible result of the measurement of a physical quantity is one of the eigenvalues of the corresponding observable. To any physical observable we ascribe an operator, O. The result of a physical measurement must be an eigenvalue, a. With each eigenvalue there corresponds an eigenstate of O, |φ_a⟩. This function is such that if the state vector |ψ(t′)⟩ = |φ_a⟩, where t′ corresponds to the time at which the measurement was performed, then O|ψ⟩ = a|ψ⟩ and the measurement will yield a.
Suppose the state function of our system is not an eigenfunction of the operator we are interested in. Using the superposition principle, we can write an arbitrary state function as a linear combination of eigenstates of O:
|ψ(t′)⟩ = Σ_a ⟨φ_a|ψ(t′)⟩ |φ_a⟩ = Σ_a c_a |φ_a⟩, (4.26)
where the sum is over all eigenstates of O. Thus, the probability of observing answer a is |c_a|².
If the measurement does indeed yield answer a, the wavefunction of the system an infinitesimal bit after the measurement must be in an eigenstate of O:
|ψ(t′ + ε)⟩ = |φ_a⟩. (4.27)
4.0.4 The Principle of Spectral Decomposition:
For a discrete, non-degenerate spectrum: when the physical quantity A is measured on a system in a normalized state |ψ⟩, the probability P(a_n) of obtaining the non-degenerate eigenvalue a_n of the corresponding observable is given by
P(a_n) = |⟨u_n|ψ⟩|², (4.28)
where |u_n⟩ is a normalized eigenvector of A associated with the eigenvalue a_n, i.e.
A|u_n⟩ = a_n|u_n⟩.
For a degenerate discrete spectrum: the same principle applies as in the non-degenerate case, except we sum over all g_n degenerate states of a_n:
P(a_n) = Σ_{i=1}^{g_n} |⟨u_n^i|ψ⟩|².
Finally, for the case of a continuous spectrum: the probability of obtaining a result between α and α + dα is
dP(α) = |⟨α|ψ⟩|² dα.
4.0.5 The Superposition Principle
Let's formalize the above discussion a bit and write the electron's state |ψ⟩ = a|1⟩ + b|2⟩, where |1⟩ and |2⟩ are basis states corresponding to the electron passing through slit 1 or 2. The coefficients a and b are just the complex numbers ψ_1 and ψ_2 written above. This |ψ⟩ is a vector in a 2-dimensional complex space with unit length, since |ψ_1|² + |ψ_2|² = 1.¹
Let us define a vector space by defining a set of objects |ψ⟩, an addition rule |ψ″⟩ = |ψ⟩ + |ψ′⟩, which allows us to construct new vectors, and a scalar multiplication rule |ψ′⟩ = a|ψ⟩, which scales the length of a vector. A non-trivial example of a vector space is the x, y plane. Adding two vectors gives another vector also on the x, y plane, and multiplying a vector by a constant gives another vector pointed in the same direction but with a new length.
The inner product of two vectors is written as
⟨χ|ψ⟩ = (χ_x*, χ_y*)·(ψ_x, ψ_y) (4.29)
= χ_x* ψ_x + χ_y* ψ_y (4.30)
= ⟨ψ|χ⟩*. (4.31)
The length of a vector is just the inner product of the vector with itself, i.e. ⟨ψ|ψ⟩ = 1 for the state vector we defined above.
The basis vectors for the slits can be used as a basis for an arbitrary state |ψ⟩ by writing it as a linear combination of the basis vectors:
|ψ⟩ = ψ_1|1⟩ + ψ_2|2⟩. (4.32)
In fact, any vector in the vector space can always be written as a linear combination of basis vectors. This is the superposition principle.
The different ways of writing the vector |ψ⟩ are termed representations. Often it is easier to work in one representation than another, knowing fully that one can always switch back and forth at will. Each different basis defines a unique representation. An example of a representation is the set of unit vectors on the x, y plane. We can also define another orthonormal representation of the x, y plane by introducing the unit vectors |r⟩, |θ⟩, which define a polar coordinate system. One can write the vector v = a|x⟩ + b|y⟩ as v = √(a² + b²)|r⟩ + tan⁻¹(b/a)|θ⟩, or v = r cos θ|x⟩ + r sin θ|y⟩, and be perfectly correct. Usually experience and insight is the only way to determine a priori which basis (or representation) best suits the problem at hand.
Transforming between representations is accomplished by first defining an object called an operator, which has the form
I = Σ_i |i⟩⟨i|. (4.33)
The sum means a sum over all members of a given basis. For the xy basis,
I = |x⟩⟨x| + |y⟩⟨y|. (4.34)
¹ The notation we are introducing here is known as bra-ket notation and was invented by Paul Dirac. The vector |ψ⟩ is called a "ket". The corresponding "bra" is the vector ⟨ψ| = (ψ_x*, ψ_y*), where the * means complex conjugation. The notation is quite powerful and we shall use it extensively throughout this course.
This operator is the identity operator (a "resolution of the identity"); it is idempotent (I² = I), and acting with it is like multiplying by 1. For example,
I|ψ⟩ = |1⟩⟨1|ψ⟩ + |2⟩⟨2|ψ⟩ (4.35)
= ψ_1|1⟩ + ψ_2|2⟩ (4.36)
= |ψ⟩. (4.37)
We can also write the following:
|ψ⟩ = |1⟩⟨1|ψ⟩ + |2⟩⟨2|ψ⟩. (4.38)
The state of a system is specified completely by the complex vector |ψ⟩, which can be written as a linear superposition of basis vectors spanning a complex vector space (Hilbert space). Inner products of vectors in the space are as defined above, and the length of any vector in the space must be finite.
Note that for state vectors in continuous representations, the inner product relation can be written as an integral:
⟨χ|ψ⟩ = ∫ dq χ*(q) ψ(q), (4.39)
and normalization is given by
⟨ψ|ψ⟩ = ∫ dq |ψ(q)|². (4.40)
The functions ψ(q) are termed square integrable because of the requirement that the inner product integral remain finite. The physical motivation for this will become apparent in a moment when we ascribe physical meaning to the mathematical objects we are defining. The class of functions satisfying this requirement are also known as L² functions. (L is for Lebesgue, referring to the class of integral.)
The action of the laser can also be represented mathematically as an object of the form
P_1 = |1⟩⟨1| (4.41)
and
P_2 = |2⟩⟨2|, (4.42)
and note that P_1 + P_2 = I.
When P_1 acts on |ψ⟩, it projects out only the |1⟩ component of |ψ⟩:
P_1|ψ⟩ = ψ_1|1⟩. (4.43)
The expectation value of an operator is formed by writing
⟨P_1⟩ = ⟨ψ|P_1|ψ⟩. (4.44)
Let's evaluate this:
⟨P_1⟩ = ⟨ψ|1⟩⟨1|ψ⟩ = |ψ_1|². (4.45)
Similarly for P_2.
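In the two-state basis used above, the projectors and Eq. 4.45 can be realized directly with small matrices. A sketch (Python/NumPy, not from the notes; the state components 0.6 and 0.8i are arbitrary but normalized):

```python
import numpy as np

ket1 = np.array([1.0, 0.0], dtype=complex)
ket2 = np.array([0.0, 1.0], dtype=complex)
P1 = np.outer(ket1, ket1.conj())        # |1><1|
P2 = np.outer(ket2, ket2.conj())        # |2><2|

psi = np.array([0.6, 0.8j])             # |0.6|^2 + |0.8|^2 = 1
assert np.allclose(P1 + P2, np.eye(2))  # completeness: P1 + P2 = I
assert np.allclose(P1 @ psi, [0.6, 0])  # P1 projects out the |1> component
exp_P1 = (psi.conj() @ P1 @ psi).real
assert np.isclose(exp_P1, 0.36)         # <P1> = |psi_1|^2
```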
Part of our job is to ensure that the operators which we define have physical counterparts. We defined the projection operator P_1 = |1⟩⟨1| knowing that the physical polarization filter removed all non-|1⟩ components of the wave. We could have also written it in another basis; the math would have been slightly more complex, but the result the same. |ψ_1|² is a real number which we presumably could set out to measure in a laboratory.
4.0.6 Reduction of the wavepacket:
If a measurement of a physical quantity A on the system in the state |ψ⟩ yields the result a_n, the state of the physical system immediately after the measurement is the normalized projection P_n|ψ⟩ onto the eigen-subspace associated with a_n.
In more plain language: if you observe the system at x, then it is at x. This is perhaps the most controversial postulate, since it implies that the act of observing the system somehow changes the state of the system.
Suppose the state function of our system is not an eigenfunction of the operator we are interested in. Using the superposition principle, we can write an arbitrary state function as a linear combination of eigenstates of O:
|ψ(t′)⟩ = Σ_a ⟨φ_a|ψ(t′)⟩ |φ_a⟩ = Σ_a c_a |φ_a⟩, (4.46)
where the sum is over all eigenstates of O. Thus, the probability of observing answer a is |c_a|². If the measurement does indeed yield answer a, the wavefunction of the system an infinitesimal bit after the measurement must be in an eigenstate of O:
|ψ(t′ + ε)⟩ = |φ_a⟩. (4.47)
This is the only postulate which is a bit touchy; it deals with the reduction of the wavepacket as the result of a measurement. On one hand, you could simply accept this as the way one goes about business and state that quantum mechanics is an algorithm for predicting the outcome of experiments, and that's that; it says nothing about the inner workings of the universe. This is what is known as the Reductionist viewpoint. In essence, the Reductionist viewpoint simply wants to know the answers: How many? How wide? How long?
On the other hand, in the Holistic view, quantum mechanics is the underlying physical theory of the universe, and the process of measurement does play an important role in how the universe works. In other words, the Holist wants the (w)hole picture.
The Reductionist vs. Holist argument has been the subject of numerous articles and books in both the popular and scholarly arenas. We may return to the philosophical discussion, but for now we will simply take a Reductionist viewpoint and first learn to use quantum mechanics as a way to make physical predictions.
4.0.7 The temporal evolution of the system:
The time evolution of the state vector is given by the Schrödinger equation
iħ (∂/∂t)|ψ(t)⟩ = H(t)|ψ(t)⟩,
where H(t) is an operator/observable associated with the total energy of the system.
As we shall see, H is the Hamiltonian operator and can be obtained from the classical Hamiltonian of the system.
4.0.8 Dirac Quantum Condition
One of the crucial aspects of any theory is that we need to be able to construct physical observables. Moreover, we would like to be able to connect the operators and observables in quantum mechanics to the observables in classical mechanics. At some point there must be a correspondence. This connection can be made formally by relating what is known as the Poisson bracket in classical mechanics,
{f(p,q), g(p,q)} = (∂f/∂q)(∂g/∂p) − (∂g/∂q)(∂f/∂p), (4.48)
which looks a lot like the commutation relation between two linear operators:
[Â, B̂] = ÂB̂ − B̂Â. (4.49)
Of course, f(p,q) and g(p,q) are functions of the classical position and momentum of the physical system. For position and momentum, it is easy to show that the classical Poisson bracket is
{q, p} = 1.
Moreover, the quantum commutation relation between the observables x̂ and p̂ is
[x̂, p̂] = iħ.
Dirac proposed that the two are related and that this relation defines an acceptable set of quantum operations:
The quantum mechanical operators f̂ and ĝ, which in quantum theory replace the classically defined functions f and g, must always be such that the commutator of f̂ and ĝ corresponds to the Poisson bracket of f and g according to
iħ{f, g} = [f̂, ĝ]. (4.50)
To see how this works, we write the momentum operator as
p̂ = (ħ/i) ∂/∂x. (4.51)
Thus,
p̂ ψ(x) = (ħ/i) ∂ψ(x)/∂x. (4.52)
Let's see if x̂ and p̂ commute. First of all,
(∂/∂x)(x f(x)) = f(x) + x f′(x). (4.53)
Thus,
[x̂, p̂] f(x) = (ħ/i)( x (∂/∂x) f(x) − (∂/∂x)(x f(x)) )
= (ħ/i)( x f′(x) − f(x) − x f′(x) )
= iħ f(x). (4.54)
The fact that x̂ and p̂ do not commute has a rather significant consequence: if two operators do not commute, one cannot devise an experiment to simultaneously measure the physical quantities associated with each operator. This in fact limits the precision with which we can perform any physical measurement.
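The commutator computation in Eq. 4.54 can be reproduced symbolically. A sketch using SymPy (added here for illustration, not part of the original notes), applying x̂ and p̂ = (ħ/i)∂/∂x to an arbitrary test function f(x):

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
f = sp.Function('f')(x)

# momentum operator acting on a wavefunction g(x)
p = lambda g: (hbar / sp.I) * sp.diff(g, x)

commutator = x * p(f) - p(x * f)        # [x, p] f(x)
# the product rule kills the x f'(x) terms, leaving i*hbar*f(x)
assert sp.simplify(commutator - sp.I * hbar * f) == 0
```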
The principal result of the postulates is that the wavefunction or state vector of the system carries all the physical information we can obtain regarding the system and allows us to make predictions regarding the probable outcomes of any experiment. As you may well know, if one makes a series of experimental measurements on identically prepared systems, one obtains a distribution of results, usually centered about some peak in the distribution.
When we report data, we usually don't report the result of every single experiment. For a spectroscopy experiment, we may have made upwards of a million individual measurements, all distributed about some average value. From statistics, we know that the average of any distribution is the expectation value of some quantity, in this case x:
E(x) = ∫ P(x) x dx. (4.55)
For the case of a discrete spectrum, we would write
E[h] = Σ_n h_n P_n, (4.56)
where h_n is some value and P_n the number of times you got that value, normalized so that Σ_n P_n = 1. In the language above, the h_n's are the possible eigenvalues of the h operator.
A similar relation holds in quantum mechanics:
Postulate 4.1 Observable quantities are computed as the expectation value of an operator, ⟨O⟩ = ⟨ψ|O|ψ⟩. The expectation value of an operator related to a physical observable must be real.
For example, the expectation value of the position operator x̂ is computed by the integral
⟨x⟩ = ∫ ψ*(x) x ψ(x) dx,
or, for the discrete case,
⟨O⟩ = Σ_n o_n |⟨n|ψ⟩|².
Of course, simply reporting the average or expectation value of an experiment is not enough; the data is usually distributed about either side of this value. If we assume the distribution is Gaussian, then we have the position of the peak center, x_o = ⟨x⟩, as well as the width of the Gaussian, σ².
The mean-squared width or uncertainty of any measurement can be computed by taking
σ_A² = ⟨(A − ⟨A⟩)²⟩.
In statistical mechanics, this is the fluctuation about the average of some physical quantity, A. In quantum mechanics, we can push this definition a bit further.
Writing the uncertainty relation as
σ_A² = ⟨(A − ⟨A⟩)(A − ⟨A⟩)⟩ (4.57)
= ⟨ψ|(A − ⟨A⟩)(A − ⟨A⟩)|ψ⟩ (4.58)
= ⟨f|f⟩, (4.59)
where the new vector |f⟩ is simply shorthand for |f⟩ = (A − ⟨A⟩)|ψ⟩. Likewise, for a different operator B,
σ_B² = ⟨ψ|(B − ⟨B⟩)(B − ⟨B⟩)|ψ⟩ (4.60)
= ⟨g|g⟩. (4.61)
We now invoke what is called the Schwartz inequality:
σ_A² σ_B² = ⟨f|f⟩⟨g|g⟩ ≥ |⟨f|g⟩|². (4.62)
So if we write ⟨f|g⟩ as a complex number z, then
|⟨f|g⟩|² = |z|² = Re(z)² + Im(z)² ≥ Im(z)² = [ (1/2i)(z − z*) ]² = [ (1/2i)(⟨f|g⟩ − ⟨g|f⟩) ]². (4.63)
So we conclude
σ_A² σ_B² ≥ [ (1/2i)(⟨f|g⟩ − ⟨g|f⟩) ]². (4.64)
Now, we reinsert the definitions of |f⟩ and |g⟩:
⟨f|g⟩ = ⟨ψ|(A − ⟨A⟩)(B − ⟨B⟩)|ψ⟩
= ⟨ψ|(AB − ⟨A⟩B − A⟨B⟩ + ⟨A⟩⟨B⟩)|ψ⟩
= ⟨ψ|AB|ψ⟩ − ⟨A⟩⟨ψ|B|ψ⟩ − ⟨B⟩⟨ψ|A|ψ⟩ + ⟨A⟩⟨B⟩
= ⟨AB⟩ − ⟨A⟩⟨B⟩. (4.65)
Likewise,
⟨g|f⟩ = ⟨BA⟩ − ⟨A⟩⟨B⟩. (4.66)
Combining these results, we obtain
⟨f|g⟩ − ⟨g|f⟩ = ⟨AB⟩ − ⟨BA⟩ = ⟨AB − BA⟩ = ⟨[A, B]⟩. (4.67)
So we finally can conclude that the general uncertainty product between any two operators is given by
σ_A² σ_B² ≥ [ (1/2i)⟨[A, B]⟩ ]². (4.68)
This is commonly referred to as the Generalized Uncertainty Principle. What it means is that for any pair of observables whose corresponding operators do not commute, there will always be some uncertainty in making simultaneous measurements. In essence, if you try to measure two non-commuting properties simultaneously, you cannot have an infinitely precise determination of both. A precise determination of one implies that you must give up some certainty in the other.
In the language of matrices and linear algebra, this implies that if two matrices do not commute, then one cannot bring both matrices into diagonal form using the same transformation matrix. In other words, they do not share a common set of eigenvectors. Matrices which do commute share a common set of eigenvectors, and the transformation which diagonalizes one will also diagonalize the other.
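Eq. 4.68 can be tested on a small concrete case. A convenient finite-dimensional example (not from the text) is the pair of Pauli matrices σ_x, σ_y, for which [σ_x, σ_y] = 2iσ_z, so the bound reads σ²_{σx} σ²_{σy} ≥ ⟨σ_z⟩²:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)              # random normalized state

def expval(A):
    return (psi.conj() @ A @ psi).real

var_x = expval(sx @ sx) - expval(sx) ** 2
var_y = expval(sy @ sy) - expval(sy) ** 2
bound = expval(sz) ** 2                 # ((1/2i)<[sx, sy]>)^2 = <sz>^2
assert var_x * var_y >= bound - 1e-12   # generalized uncertainty principle
```

The inequality holds for any state; it becomes an equality only for special (minimum-uncertainty) states.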
Theorem 4.1 If two operators A and B commute and if |ψ⟩ is an eigenvector of A, then B|ψ⟩ is also an eigenvector of A with the same eigenvalue.
Proof: If |ψ⟩ is an eigenvector of A, then A|ψ⟩ = a|ψ⟩. Thus,
BA|ψ⟩ = aB|ψ⟩. (4.69)
Assuming A and B commute, i.e. [A, B] = AB − BA = 0,
AB|ψ⟩ = a(B|ψ⟩). (4.70)
Thus, (B|ψ⟩) is an eigenvector of A with eigenvalue a.
Exercise 4.1 1. Show that matrix multiplication is associative, i.e. A(BC) = (AB)C, but not commutative (in general), i.e. BC ≠ CB.
2. Show that (A + B)(A − B) = A² − B² only if A and B commute.
3. Show that if A and B are both Hermitian matrices, AB + BA and i(AB − BA) are also Hermitian. Note that Hermitian matrices are defined such that A_ij = A*_ji, where * denotes complex conjugation.
4.1 Dirac Notation and Linear Algebra
Part of the difficulty in learning quantum mechanics comes from the fact that one must also learn a new mathematical language. It seems very complex from the start. However, the mathematical objects which we manipulate actually make life easier. Let's explore the Dirac notation and the related mathematics.
We have stated all along that the physical state of the system is wholly specified by the state vector |ψ⟩ and that the probability of finding a particle at a given point x is obtained via |ψ(x)|². Say at some initial time |ψ⟩ = |s⟩, where s is some point along the x axis. Now, the amplitude to find the particle at some other point is ⟨x|s⟩. If something happens between the two points we write
⟨x|(operator describing process)|s⟩. (4.71)
The braket is always read from right to left, and we interpret this as the amplitude for: starting off at s, something happens, and winding up at x. An example of this is the G_o function in the homework. Here, I ask: what is the amplitude for a particle to start off at x and to wind up at x′ after some time interval t?
Another example: electrons have an intrinsic angular momentum called "spin". Accordingly, they have an associated magnetic moment which causes electrons to align with or against an imposed magnetic field (e.g., this gives rise to ESR). Let's say we have an electron source which produces spin-up and spin-down electrons with equal probability. Thus, my initial state is
|i⟩ = a|+⟩ + b|−⟩. (4.72)
Since I've stated that P(a) = P(b), |a|² = |b|². Also, since P(a) + P(b) = 1, a = b = 1/√2. Thus,
|i⟩ = (1/√2)(|+⟩ + |−⟩). (4.73)
Let's say that the spin-ups can be separated from the spin-downs via a magnetic field B, and we filter off the spin-down states. Our new state is |i′⟩ and is related to the original state by
⟨i′|i⟩ = a⟨+|+⟩ + b⟨+|−⟩ = a. (4.74)
4.1.1 Transformations and Representations
If I know the amplitudes for |ψ⟩ in a representation with a basis {|i⟩}, it is always possible to find the amplitudes describing the same state in a different basis {|α⟩}. Note that the amplitude between two states will not change. For example,
|a⟩ = Σ_i |i⟩⟨i|a⟩, (4.75)
and also
|a⟩ = Σ_α |α⟩⟨α|a⟩. (4.76)
Therefore,
⟨α|a⟩ = Σ_i ⟨α|i⟩⟨i|a⟩ (4.77)
and
⟨i|a⟩ = Σ_α ⟨i|α⟩⟨α|a⟩. (4.78)
Thus, the coefficients in the {|α⟩} basis are related to the coefficients in the {|i⟩} basis by ⟨α|i⟩ = ⟨i|α⟩*. We can therefore define a transformation matrix S_{αi} as
S_{αi} =
( ⟨α|i⟩ ⟨α|j⟩ ⟨α|k⟩ )
( ⟨β|i⟩ ⟨β|j⟩ ⟨β|k⟩ )
( ⟨γ|i⟩ ⟨γ|j⟩ ⟨γ|k⟩ ) (4.79)
and a set of vectors
a_i = ( ⟨i|a⟩, ⟨j|a⟩, ⟨k|a⟩ )ᵀ (4.80)
a_α = ( ⟨α|a⟩, ⟨β|a⟩, ⟨γ|a⟩ )ᵀ. (4.81)
Thus, we can see that
a_α = S_{αi} a_i. (4.82)
Now, we can also write
S_{iα} =
( ⟨α|i⟩* ⟨β|i⟩* ⟨γ|i⟩* )
( ⟨α|j⟩* ⟨β|j⟩* ⟨γ|j⟩* )
( ⟨α|k⟩* ⟨β|k⟩* ⟨γ|k⟩* ) = S†_{αi}. (4.83)
Thus,
a_i = S† a_α. (4.84)
Since ⟨i|α⟩ = ⟨α|i⟩*, S† is the Hermitian conjugate of S:
S† = (Sᵀ)*, (4.87)
(S†)_{ij} = S*_{ji}. (4.89)
So, in short,
a_i = S† a_α = S† S a_i, (4.90)
thus
S† S = 1, (4.91)
and S is called a unitary transformation matrix.
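A sketch of this in NumPy (added for illustration; the rotation angle is arbitrary): a plane rotation serves as S, with entries S_{αi} = ⟨α|i⟩, and S†S = 1 guarantees that the transformed amplitudes preserve lengths and overlaps.

```python
import numpy as np

t = 0.3                                   # arbitrary angle between the two bases
S = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])   # S_{alpha i} = <alpha|i>

a_i = np.array([0.6, 0.8])                # amplitudes of |a> in the {|i>} basis
a_alpha = S @ a_i                         # same state, new representation

assert np.allclose(S.conj().T @ S, np.eye(2))     # unitarity: S^dagger S = 1
assert np.isclose(np.linalg.norm(a_alpha), 1.0)   # norm of the state is preserved
assert np.allclose(S.conj().T @ a_alpha, a_i)     # S^dagger transforms back
```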
4.1.2 Operators
A linear operator Â maps a vector in the space H onto another vector in the same space. We can write this in a number of ways:
|ψ⟩ → Â → |ψ′⟩ (4.92)
or
|ψ′⟩ = Â|ψ⟩. (4.93)
Linear operators have the property that
Â(a|ψ⟩ + b|φ⟩) = aÂ|ψ⟩ + bÂ|φ⟩. (4.94)
Since superposition is rigidly enforced in quantum mechanics, all QM operators are linear operators.
The matrix representation of an operator is obtained by writing
A_ij = ⟨i|Â|j⟩. (4.95)
For example, say we know the representation of A in the {|i⟩} basis; we can then write
|φ⟩ = Â|ψ⟩ = Σ_i Â|i⟩⟨i|ψ⟩ = Σ_j |j⟩⟨j|φ⟩. (4.96)
Thus,
⟨j|φ⟩ = Σ_i ⟨j|A|i⟩⟨i|ψ⟩. (4.97)
We can keep going if we want by continuing to insert 1's wherever we need.
The matrix A is Hermitian if A = A†. If it is Hermitian, then I can always find a basis {|α⟩} in which it is diagonal, i.e.
A_{αβ} = a_α δ_{αβ}. (4.98)
So, what is Â|ψ⟩?
Â|ψ⟩ = Σ_{αβ} |α⟩⟨α|A|β⟩⟨β|ψ⟩ (4.99)
= Σ_{αβ} |α⟩ A_{αβ} ψ_β (4.101)
= Σ_α |α⟩ A_{αα} ψ_α (4.103)
= Σ_α |α⟩ a_α ψ_α. (4.105)
In particular, if |ψ⟩ = |α⟩ is one of the eigenvectors,
Â|ψ⟩ = a_α|ψ⟩. (4.107)
An important example of this is the time-independent Schrödinger equation,
Ĥ|ψ⟩ = E|ψ⟩, (4.108)
which we spent some time in solving above.
Finally, if Â|ψ⟩ = λ|ψ⟩, then ⟨ψ|Â† = λ*⟨ψ|.
4.1.3 Products of Operators
An operator product is defined as
(ÂB̂)|ψ⟩ = Â[B̂|ψ⟩], (4.109)
where we operate in order from right to left. We proved that in general the ordering of the operations is important; in other words, we cannot in general write ÂB̂ = B̂Â. An example of this is the position and momentum operators. We have also defined the commutator
[Â, B̂] = ÂB̂ − B̂Â. (4.110)
Let's now briefly go over how to perform algebraic manipulations using operators and commutators. These are straightforward to prove:
1. [Â, B̂] = −[B̂, Â]
2. [Â, Â] = 0
3. [Â, B̂Ĉ] = [Â, B̂]Ĉ + B̂[Â, Ĉ]
4. [Â, B̂ + Ĉ] = [Â, B̂] + [Â, Ĉ]
5. [Â, [B̂, Ĉ]] + [B̂, [Ĉ, Â]] + [Ĉ, [Â, B̂]] = 0 (Jacobi identity)
6. [Â, B̂]† = −[Â†, B̂†]
4.1.4 Functions Involving Operators
Another property of linear operators is that the inverse operator can be found (when Â is invertible): if |φ⟩ = Â|ψ⟩, then there exists another operator B̂ such that |ψ⟩ = B̂|φ⟩; in other words, B̂ = Â⁻¹.
We also need to know how to evaluate functions of operators. Say we have a function F(z) which can be expanded as a series,
F(z) = Σ_{n=0}^∞ f_n zⁿ. (4.111)
Thus, by analogy,
F(Â) = Σ_{n=0}^∞ f_n Âⁿ. (4.112)
For example, take exp(Â):
exp(x) = Σ_{n=0}^∞ xⁿ/n! = 1 + x + x²/2 + ⋯, (4.113)
thus
exp(Â) = Σ_{n=0}^∞ Âⁿ/n!. (4.114)
If Â is Hermitian, then F(Â) is also Hermitian (for real coefficients f_n). Also, note that
[Â, F(Â)] = 0.
Likewise, if
Â|φ_a⟩ = a|φ_a⟩, (4.115)
then
Âⁿ|φ_a⟩ = aⁿ|φ_a⟩. (4.116)
Thus, we can show that
F(Â)|φ_a⟩ = Σ_n f_n Âⁿ|φ_a⟩ (4.117)
= Σ_n f_n aⁿ|φ_a⟩ (4.119)
= F(a)|φ_a⟩. (4.121)
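A numerical sketch of Eq. 4.121 (Python/NumPy, added for illustration; the matrix is made up): build exp(Â) from its Taylor series for a small Hermitian matrix and verify that an eigenvector just picks up the factor F(a) = e^a.

```python
import numpy as np

def exp_series(M, nmax=40):
    """exp(M) summed term by term from the power series (Eq. 4.114)."""
    out = np.eye(len(M))
    term = np.eye(len(M))
    for n in range(1, nmax):
        term = term @ M / n
        out = out + term
    return out

A = np.array([[2.0, 1.0], [1.0, 2.0]])      # Hermitian; eigenvalues 1 and 3
phi = np.array([1.0, 1.0]) / np.sqrt(2.0)   # eigenvector with a = 3
assert np.allclose(exp_series(A) @ phi, np.exp(3.0) * phi)
```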
Note, however, that care must be taken when we evaluate F(Â + B̂) if the two operators do not commute. We ran into this briefly in breaking up the propagator for the Schrödinger equation in the last lecture (Trotter product). For example,
exp(Â + B̂) ≠ exp(Â) exp(B̂) (4.122)
unless [Â, B̂] = 0. One can derive, however, a useful formula (due to Glauber), valid when [Â, B̂] commutes with both Â and B̂:
exp(Â + B̂) = exp(Â) exp(B̂) exp(−[Â, B̂]/2). (4.123)
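The Glauber identity can be verified exactly with 3×3 matrices from the Heisenberg algebra, where the commutator C = [Â, B̂] commutes with both operators. (These matrices are chosen purely for illustration; because they are nilpotent, every exponential here is a finite polynomial, so the check is exact.)

```python
import numpy as np

def exp_series(M, nmax=20):
    out = np.eye(len(M), dtype=complex)
    term = np.eye(len(M), dtype=complex)
    for n in range(1, nmax):
        term = term @ M / n
        out = out + term
    return out

A = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=complex)
B = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
C = A @ B - B @ A                        # central element: commutes with A and B
assert np.allclose(A @ C, C @ A) and np.allclose(B @ C, C @ B)

lhs = exp_series(A + B)
rhs = exp_series(A) @ exp_series(B) @ exp_series(-0.5 * C)
assert np.allclose(lhs, rhs)             # exp(A+B) = exp(A) exp(B) exp(-[A,B]/2)
```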
Exercise 4.2 Let H be the Hamiltonian of a physical system and |φ_n⟩ the solutions of
H|φ_n⟩ = E_n|φ_n⟩. (4.124)
1. For an arbitrary operator Â, show that
⟨φ_n|[Â, H]|φ_n⟩ = 0. (4.125)
2. Let
H = (1/2m) p̂² + V(x̂). (4.126)
(a) Compute [H, p̂], [H, x̂], and [H, x̂p̂].
(b) Show ⟨φ_n|p̂|φ_n⟩ = 0.
(c) Establish a relationship between the average of the kinetic energy, given by
E_kin = ⟨φ_n| p̂²/2m |φ_n⟩, (4.127)
and the average force on a particle, given by
F = ⟨φ_n| x̂ ∂V(x)/∂x |φ_n⟩. (4.128)
Finally, relate the average of the potential for a particle in state |φ_n⟩ to the average kinetic energy.
Exercise 4.3 Consider the following Hamiltonian for 1D motion with a potential obeying a simple power law,
H = p²/2m + α xⁿ, (4.129)
where α is a constant and n is an integer. Calculate
⟨A⟩ = ⟨ψ|[xp, H]|ψ⟩ (4.130)
and use the result to relate the average potential energy to the average kinetic energy of the system.
4.2 Constants of the Motion
In a dynamical system (quantum, classical, or otherwise) a constant of the motion is any quantity such that
(∂/∂t)⟨A⟩ = 0. (4.131)
For quantum systems, this means that
[A, H] = 0. (4.132)
(What's the equivalent relation for classical systems?) In other words, any quantity which commutes with H is a constant of the motion. Furthermore, for any conservative system (in which there is no net flow of energy to or from the system),
[H, H] = 0. (4.133)
From Eq. 4.131, we can write that
(∂/∂t)⟨A⟩ = (∂/∂t)⟨ψ(t)|A|ψ(t)⟩. (4.134)
Since [A, H] = 0, we know that if the state |φ_n⟩ is an eigenstate of H,
H|φ_n⟩ = E_n|φ_n⟩, (4.135)
then
A|φ_n⟩ = a_n|φ_n⟩. (4.136)
The a_n are often referred to as "good quantum numbers". What are some constants of the motion for systems that we have studied thus far? (Bonus: how are constants of the motion related to particular symmetries of the system?)
A state which is in an eigenstate of H is also in an eigenstate of A. Thus, I can simultaneously measure quantities associated with H and A. Also, after I measure with A, the system remains in the original state.
4.3 Bohr Frequency and Selection Rules
What if I have another operator, B, which does not commute with H? What is ⟨B(t)⟩? This we
can compute by first writing

|ψ(t)⟩ = Σ_n c_n e^{−iE_n t/ħ} |φ_n⟩. (4.137)

Then

⟨B(t)⟩ = ⟨ψ(t)|B|ψ(t)⟩ (4.138)

= Σ_{nm} c_n c*_m e^{−i(E_n − E_m)t/ħ} ⟨φ_m|B|φ_n⟩. (4.139)

Let's define the Bohr frequency as ω_nm = (E_n − E_m)/ħ. Then

⟨B(t)⟩ = Σ_{nm} c_n c*_m e^{−iω_nm t} ⟨φ_m|B|φ_n⟩. (4.140)
Now, the observed expectation value of B oscillates in time at a number of frequencies corre-
sponding to the energy differences between the stationary states. The matrix elements B_nm =
⟨φ_m|B|φ_n⟩ do not change with time. Neither do the coefficients, c_n. Thus, let's write

B(ω) = Σ_{nm} c_n c*_m ⟨φ_m|B|φ_n⟩ δ(ω − ω_nm) (4.141)

and transform the discrete sum into a continuous integral,

⟨B(t)⟩ = (1/2π) ∫_0^∞ dω e^{−iωt} B(ω) (4.142)

where B(ω) is the power spectrum of B. In other words, say I monitor ⟨B(t)⟩ with my
instrument for a long period of time, then take the Fourier transform of the time series: I get
the power spectrum. What is the power spectrum for a set of discrete frequencies? If I observe
the time sequence for an infinite amount of time, I will get a series of discretely spaced sticks
along the frequency axis at precisely the energy differences between the n and m states. The
intensity is related to the probability of making a transition from n to m under the influence
of B. Certainly, some transitions will not be allowed, because ⟨φ_n|B|φ_m⟩ = 0. These are the
selection rules.
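The equivalence between propagating the state directly and summing over Bohr frequencies (Eq. 4.140) can be checked term by term for a small system. A minimal sketch, assuming Python with NumPy; the energies, coefficients, and the operator B are arbitrary illustrative choices:

```python
import numpy as np

hbar = 1.0
E = np.array([0.0, 1.5, 4.0])           # eigenvalues E_n (hypothetical)
c = np.array([0.6, 0.7, 0.38 + 0.1j])   # expansion coefficients c_n
c = c / np.linalg.norm(c)

# A Hermitian operator B that does not commute with H (random, fixed seed)
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = (M + M.conj().T) / 2

def expect_B(t):
    """<B(t)> two ways: direct propagation vs. the Bohr-frequency sum of Eq. 4.140."""
    psi_t = c * np.exp(-1j * E * t / hbar)
    direct = psi_t.conj() @ B @ psi_t
    bohr = sum(c[n] * c[m].conj() * np.exp(-1j * (E[n] - E[m]) * t / hbar) * B[m, n]
               for n in range(3) for m in range(3))
    return direct, bohr

for t in (0.0, 0.7, 2.3):
    direct, bohr = expect_B(t)
    assert abs(direct - bohr) < 1e-12
    assert abs(direct.imag) < 1e-12     # expectation of a Hermitian operator is real
```

A Fourier transform of this time series would show sticks only at the three Bohr frequencies (E_n − E_m)/ħ, weighted by the B_mn.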
We now prove an important result regarding the integrated intensity of a series of transitions:²

Exercise 4.4 Prove the Thomas-Reiche-Kuhn sum rule:

Σ_n (2m |x_{n0}|² / ħ²)(E_n − E_0) = 1 (4.143)

where the sum is over a complete set of states, |φ_n⟩, of energy E_n, of a particle of mass m which
moves in a potential; |φ_0⟩ represents a bound state, and x_{n0} = ⟨φ_n|x|φ_0⟩. (Hint: use the com-
mutator identity [x, [x, H]] = −ħ²/m.)

²This is perhaps one of the most important results of quantum mechanics, since it gives the total spectral
intensity for a series of transitions. c.f. Bethe and Jackiw for a great description of sum rules.
Figure 4.4: The diffraction function sin(x)/x.
4.4 Example using the particle in a box states
What are the constants of motion for a particle in a box?
Recall that the energy levels and wavefunctions for this system are

E_n = n²π²ħ²/(2ma²) (4.144)

ψ_n(x) = √(2/a) sin(nπx/a) (4.145)

Say our system is in the nth state. What's the probability of measuring the momentum and
obtaining a result between p and p + dp?

P_n(p) dp = |ψ_n(p)|² dp (4.146)
where

ψ_n(p) = (1/√(2πħ)) ∫_0^a dx √(2/a) sin(nπx/a) e^{−ipx/ħ} (4.147)

= (1/2i)(1/√(2πħ)) √(2/a) [ (e^{i(nπ/a − p/ħ)a} − 1)/(i(nπ/a − p/ħ))
  − (e^{−i(nπ/a + p/ħ)a} − 1)/(−i(nπ/a + p/ħ)) ] (4.148)

= (1/2i) √(a/(πħ)) e^{i(nπ/2 − pa/(2ħ))} [F(p − nπħ/a) + (−1)^{n+1} F(p + nπħ/a)] (4.149)

where the F(p) are diffraction functions,

F(p) = sin(pa/(2ħ)) / (pa/(2ħ)). (4.150)

Note that the width 4πħ/a of F does not change as I change n. Nor does the amplitude. However,
note that |F(p − nπħ/a) + (−1)^{n+1} F(p + nπħ/a)|² is always an even function of p. Thus, we can say

⟨p⟩_n = ∫_{−∞}^{∞} P_n(p) p dp = 0 (4.151)
We can also compute:

⟨p²⟩_n = ħ² ∫_0^a dx |∂ψ_n(x)/∂x|² (4.152)

= ħ² ∫_0^a (2/a)(nπ/a)² cos²(nπx/a) dx (4.153)

= (nπħ/a)² = 2mE_n (4.154)

Thus, the RMS deviation of the momentum is

Δp_n = √(⟨p²⟩_n − ⟨p⟩²_n) = nπħ/a. (4.155)

Thus, as n increases, the relative accuracy with which we can measure p increases, due to the fact
that we can resolve the wavefunction into two distinct peaks corresponding to the particle either
going to the left or to the right. Δp increases due to the fact that the two possible choices for
the measurement are becoming farther and farther apart, and hence it reflects the distance between
the two most likely values.
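These momentum moments are easy to confirm by quadrature. The sketch below assumes Python with NumPy; ħ = m = a = 1 and n = 3 are illustrative choices. It checks that |ψ_n(p)|² is even (so ⟨p⟩ = 0) and that ⟨p²⟩ = (nπħ/a)²:

```python
import numpy as np

hbar, m, a, n = 1.0, 1.0, 1.0, 3    # illustrative units and quantum number
x = np.linspace(0.0, a, 20001)
trap = lambda y: np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2   # trapezoid rule

psi = np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

def phi(p):
    """Momentum-space amplitude of Eq. 4.147 by direct quadrature."""
    return trap(psi * np.exp(-1j * p * x / hbar)) / np.sqrt(2 * np.pi * hbar)

# |phi(p)|^2 is even in p, so <p>_n = 0 (Eq. 4.151)
for p in (0.5, 2.0, n * np.pi * hbar / a):
    assert abs(abs(phi(p)) - abs(phi(-p))) < 1e-10

# <p^2>_n from the position-space derivative (Eqs. 4.152-4.154)
dpsi = np.gradient(psi, x)
p2 = hbar**2 * trap(dpsi**2)
assert abs(p2 - (n * np.pi * hbar / a)**2) < 1e-3
```

Plotting |phi(p)|² for increasing n would show the two diffraction peaks at p = ±nπħ/a separating, which is the geometric picture behind the growing Δp_n.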
4.5 Time Evolution of Wave and Observable
Now, suppose we put our system into a superposition of box states:

|ψ(0)⟩ = (1/√2)(|φ_1⟩ + |φ_2⟩) (4.156)

What is the time evolution of this state? We know the eigenenergies, so we can immediately
write:

|ψ(t)⟩ = (1/√2)(e^{−iE_1 t/ħ}|φ_1⟩ + e^{−iE_2 t/ħ}|φ_2⟩) (4.157)

Let's factor out a common phase factor of e^{−iE_1 t/ħ} and write this as

|ψ(t)⟩ ∝ (1/√2)(|φ_1⟩ + e^{−i(E_2 − E_1)t/ħ}|φ_2⟩) (4.158)

and call (E_2 − E_1)/ħ = ω_21 the Bohr frequency:

|ψ(t)⟩ ∝ (1/√2)(|φ_1⟩ + e^{−iω_21 t}|φ_2⟩) (4.159)

where

ω_21 = 3π²ħ/(2ma²). (4.160)
The overall phase factor is relatively unimportant and cancels out when I make a measurement,
e.g. the probability density:

|ψ(x, t)|² = |⟨x|ψ(t)⟩|² (4.161)

= (1/2)ψ_1²(x) + (1/2)ψ_2²(x) + ψ_1(x)ψ_2(x) cos(ω_21 t) (4.163)
Now, let's compute ⟨x(t)⟩ for the two-state system. To do so, let's first define x' = x − a/2,
the position relative to the center of the well, to make the integrals easier. The first two matrix
elements are easy:

⟨φ_1|x'|φ_1⟩ ∝ ∫_0^a dx (x − a/2) sin²(πx/a) = 0 (4.164)

⟨φ_2|x'|φ_2⟩ ∝ ∫_0^a dx (x − a/2) sin²(2πx/a) = 0 (4.165)

which we can do by symmetry. Thus,

⟨x'(t)⟩ = Re[ e^{−iω_21 t} ⟨φ_1|x'|φ_2⟩ ] (4.166)

⟨φ_1|x'|φ_2⟩ = ⟨φ_1|x|φ_2⟩ − (a/2)⟨φ_1|φ_2⟩ (4.167)

= (2/a) ∫_0^a dx x sin(πx/a) sin(2πx/a) (4.168)

= −16a/(9π²) (4.169)
Thus,

⟨x(t)⟩ = a/2 − (16a/(9π²)) cos(ω_21 t). (4.170)

Compare this to the classical trajectory. Also, what about ⟨E(t)⟩?
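Equations 4.169 and 4.170 can be verified numerically. The sketch below assumes Python with NumPy; a = 1 and the sampled values of ω₂₁t are illustrative:

```python
import numpy as np

a = 1.0
x = np.linspace(0.0, a, 40001)
trap = lambda y: np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2   # trapezoid rule

phi1 = np.sqrt(2 / a) * np.sin(np.pi * x / a)
phi2 = np.sqrt(2 / a) * np.sin(2 * np.pi * x / a)

# Cross matrix element <phi_1|x|phi_2> (Eqs. 4.168-4.169)
x12 = trap(phi1 * x * phi2)
assert abs(x12 - (-16 * a / (9 * np.pi**2))) < 1e-6

# <x(t)> for the superposition, against Eq. 4.170 (in units where omega_21 = 1)
for wt in (0.0, 1.0, 2.5):
    psi = (phi1 + np.exp(-1j * wt) * phi2) / np.sqrt(2)
    mean_x = trap(np.abs(psi)**2 * x)
    assert abs(mean_x - (a / 2 - (16 * a / (9 * np.pi**2)) * np.cos(wt))) < 1e-6
```

The expectation value sloshes back and forth across the well at the single Bohr frequency ω₂₁, much like a classical particle bouncing between the walls, but with a fixed amplitude 16a/(9π²).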
4.6 Unstable States
So far in this course, we have been talking about systems which are totally isolated from the
rest of the universe. In these systems there is no influx or efflux of energy, and all our dynamics
are governed by the three principal postulates I mentioned at the start of the lecture. In essence,
if at t = 0 I prepare the system in an eigenstate of H, then for all times later it is still in that
state (to within a phase factor). Thus, in a strictly conservative system, a system prepared in
an eigenstate of H will remain in an eigenstate forever.

However, this is not exactly what is observed in nature. We know from experience that atoms
and molecules, if prepared in an excited state (say via the absorption of a photon), can relax
back to the ground state or some lower state via the emission of a photon or a series of photons.
Thus, these eigenstates are unstable.

What's wrong here? The problem is not so much that something is wrong with our description
of the isolated system; it is that the full description is not included. An isolated atom or
molecule can still interact with the electromagnetic field (unless we do some tricky confinement
experiments). Thus, there is always some interaction with an outside environment. So,
while it is totally correct to describe the evolution of the global system in terms of some global
"atom + environment" Hamiltonian, it is NOT totally rigorous to construct a Hamiltonian
which describes only part of the story. But, as the great Prof. Karl Freed (at U. Chicago) once
told me, "Too much rigor makes rigor mortis."

Thankfully, the coupling between an atom and the electromagnetic field is pretty weak. Each
photon-emission probability is weighted by the fine-structure constant, α ≈ 1/137. Thus a two-
photon process is weighted by α². Thus, the isolated-system approximation is pretty good. Also,
we can pretty much say that most photon-emission processes occur as single-photon events.
Let's play a bit fast and loose with this idea. We know from experience that if we prepare
the system in an excited state at t = 0, the probability of finding it still in the excited state at
some time t later is

P(t) = e^{−t/τ} (4.171)

where τ is some time constant which we'll take as the lifetime of the state. One way to derive
this relation is to go back to Problem Set 0. Let's say we have a large number 𝒩 of identical
systems, each prepared in the excited state at t = 0. At time t, there are

N(t) = 𝒩 e^{−t/τ} (4.172)

systems in the excited state. Between time t and t + dt a certain number, dn(t), will leave the
excited state via photon emission:

dn(t) = N(t) − N(t + dt) = −(dN(t)/dt) dt = N(t) dt/τ (4.173)

Thus,

dn(t)/N(t) = dt/τ. (4.174)

Thus, 1/τ is the probability per unit time for leaving the unstable state.

The average time a system spends in the unstable state is given by:

(1/τ) ∫_0^∞ dt t e^{−t/τ} = τ (4.175)

For a stable state, P(t) = 1; thus, τ → ∞.

The time a system spends in the state is independent of its history. This is a characteristic of
an unstable state. (It also has to do with the fact that the various systems involved do not interact
with each other.)
Finally, according to the time–energy uncertainty relation,

δE ≈ ħ/τ. (4.176)

Thus, an unstable system has an intrinsic energy width associated with the finite time the
system spends in the state.

For a stable state:

|ψ(t)⟩ = e^{−iE_n t/ħ}|φ_n⟩ (4.177)

and

P_n(t) = |e^{−iE_n t/ħ}|² = 1 (4.178)

for real energies.

What if I instead write E'_n = E_n − iħγ_n/2? Then

P_n(t) = |e^{−iE_n t/ħ} e^{−γ_n t/2}|² = e^{−γ_n t}. (4.179)

Thus,

γ_n = 1/τ_n (4.180)

is the decay rate, and ħγ_n is the energy width, of the unstable state.
The surprising part of all this is that, in order to include dissipative effects (photon emission,
etc.), the eigenvalues of H become complex. In other words, the system now evolves under a
non-Hermitian Hamiltonian! Recall the evolution operator for an isolated system:

U(t) = e^{−iHt/ħ} (4.181)

U†(t) = e^{+iHt/ħ} (4.183)

where the first is the forward evolution of the system and the second corresponds to the back-
ward evolution of the system. Unitarity is thus related to the time-reversal symmetry
of conservative systems. The inclusion of an environment breaks the intrinsic time-reversal
symmetry of an isolated system.
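The complex-energy bookkeeping above is one line of algebra, but worth checking: with E'_n = E_n − iħγ_n/2, the survival probability decays as e^{−γ_n t}, and the mean lifetime of Eq. 4.175 is τ_n = 1/γ_n. A minimal sketch, assuming Python with NumPy; E_n and γ_n are illustrative values:

```python
import numpy as np

hbar = 1.0
E_n, gamma_n = 2.0, 0.25                     # illustrative energy and decay rate
E_complex = E_n - 1j * hbar * gamma_n / 2    # the complex eigenvalue behind Eq. 4.179

for t in (0.0, 1.0, 4.0, 10.0):
    amp = np.exp(-1j * E_complex * t / hbar)   # e^{-i E'_n t / hbar}
    assert abs(abs(amp)**2 - np.exp(-gamma_n * t)) < 1e-12

# Mean lifetime (Eq. 4.175) by quadrature: (1/tau) * int_0^inf t e^{-t/tau} dt = tau
tau = 1 / gamma_n
t = np.linspace(0.0, 60 * tau, 400001)
integ = gamma_n * np.sum((t * np.exp(-t / tau))[1:] * np.diff(t))
assert abs(integ - tau) < 1e-3
```

Note that |e^{−iE'_n t/ħ}| < 1 for t > 0, which is precisely the loss of unitarity (and of time-reversal symmetry) discussed above.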
4.7 Problems and Exercises
Exercise 4.5 Find the eigenvalues and eigenvectors of the matrix:

M = [ 0 0 0 1
      0 0 1 0
      0 1 0 0
      1 0 0 0 ]  (4.184)
Solution: You can either do this the hard way, by solving the secular determinant and then
finding the eigenvectors by Gram–Schmidt orthogonalization, or realize that since M = M⁻¹
and M = M†, M is a unitary (and Hermitian) matrix; thus, its eigenvalues can only be ±1. Furthermore,
since the trace of M is 0, the sum of the eigenvalues must be 0 as well. Thus, λ = (1, 1, −1, −1)
are the eigenvalues. To get the eigenvectors, consider the following. Let u be an eigenvector of M,

u = (x_1, x_2, x_3, x_4)ᵀ. (4.185)

Since Mu = λu, x_1 = λx_4 and x_2 = λx_3. Thus, 4 eigenvectors are

(1, 0, 0, 1)ᵀ, (0, 1, 1, 0)ᵀ, (1, 0, 0, −1)ᵀ, (0, 1, −1, 0)ᵀ (4.186)

for λ = (1, 1, −1, −1).
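The same conclusions fall out of a direct numerical diagonalization. A sketch, assuming Python with NumPy:

```python
import numpy as np

M = np.array([[0., 0., 0., 1.],
              [0., 0., 1., 0.],
              [0., 1., 0., 0.],
              [1., 0., 0., 0.]])

vals, vecs = np.linalg.eigh(M)
assert np.allclose(sorted(vals), [-1, -1, 1, 1])   # eigenvalues are +/-1
assert abs(np.trace(M)) < 1e-12                    # and they sum to Tr M = 0

# Every eigenvector satisfies x1 = lam*x4 and x2 = lam*x3
for lam, v in zip(vals, vecs.T):
    assert np.allclose(M @ v, lam * v)
    assert abs(v[0] - lam * v[3]) < 1e-12 and abs(v[1] - lam * v[2]) < 1e-12
```

Because the ±1 eigenvalues are doubly degenerate, the solver may return any orthonormal combination within each eigenspace, but the component relations x₁ = λx₄ and x₂ = λx₃ hold for all of them.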
Exercise 4.6 Let λ_i be the eigenvalues of the matrix:

H = [ 2 1 3
      1 1 2
      3 2 3 ]  (4.187)

Calculate the sums:

Σ_{i=1}^{3} λ_i (4.188)

and

Σ_{i=1}^{3} λ_i² (4.189)

Hint: use the fact that the trace of a matrix is invariant to the choice of representation.
Solution: Using the hint,

tr H = Σ_i λ_i = Σ_i H_ii = 2 + 1 + 3 = 6 (4.190)

and

Σ_i λ_i² = tr H² = Σ_ij H_ij H_ji = Σ_ij H_ij² = 42. (4.191)
Exercise 4.7 1. Let |φ_n⟩ be the eigenstates of the Hamiltonian, H, of some arbitrary system,
which form a discrete, orthonormal basis:

H|φ_n⟩ = E_n|φ_n⟩.

Define the operator U_nm as

U_nm = |φ_n⟩⟨φ_m|.

(a) Calculate the adjoint U†_nm of U_nm.

(b) Calculate the commutator [H, U_nm].

(c) Prove: U_mn U†_pq = δ_nq U_mp.

(d) For an arbitrary operator, A, prove that

⟨φ_n|[A, H]|φ_n⟩ = 0.

(e) Now consider some arbitrary one-dimensional problem for a particle of mass m and
potential V(x). From here on, let

H = p²/(2m) + V(x).

i. In terms of p, x, and V(x), compute: [H, p], [H, x], and [H, xp].

ii. Show that ⟨φ_n|p|φ_n⟩ = 0.

iii. Establish a relation between the average value of the kinetic energy of a state,

⟨T⟩ = ⟨φ_n| p²/(2m) |φ_n⟩,

and

⟨φ_n| x dV/dx |φ_n⟩.

The average potential energy in the state |φ_n⟩ is

⟨V⟩ = ⟨φ_n|V|φ_n⟩;

find a relation between ⟨V⟩ and ⟨T⟩ when V(x) = V_o x^λ for λ = 2, 4, 6, . . .

(f) Show that

⟨φ_n|p|φ_m⟩ = α ⟨φ_n|x|φ_m⟩

where α is some constant which depends upon E_n − E_m. Calculate α (hint: consider
the commutator [x, H], which you computed above).

(g) Deduce the following sum rule for the linear response function:

⟨φ_0|[x, [H, x]]|φ_0⟩ = 2 Σ_{n>0} (E_n − E_0) |⟨φ_0|x|φ_n⟩|²

Here |φ_0⟩ is the ground state of the system. Give a physical interpretation of this last
result.
Exercise 4.8 For this section, consider the following 5 × 5 matrix:

H = [ 0 0 0 0 1
      0 0 0 1 0
      0 0 1 0 0
      0 1 0 0 0
      1 0 0 0 0 ]  (4.192)

1. Using Mathematica, determine the eigenvalues, λ_j, and eigenvectors, φ_n, of H using the
Eigensystem[] command. Then determine the eigenvalues alone by solving the secular determi-
nant:

|H − λI| = 0

Compare the computational effort required to perform both calculations. Note: in entering
H into Mathematica, enter the numbers as real numbers rather than as integers (i.e. 1.0
vs. 1).

2. Show that the column matrix of the eigenvectors of H,

T = (φ_1, . . . , φ_5),

provides a unitary transformation of H between the original basis and the eigenvector basis,

T† H T = Λ

where Λ is the diagonal matrix of the eigenvalues λ_j, i.e. Λ_ij = λ_i δ_ij.

3. Show that the trace of a matrix is invariant to representation.

4. First, without using Mathematica, compute Tr(H²). Now check your result with Mathe-
matica.
Chapter 5

Bound States of the Schrödinger
Equation
A #2 pencil and a dream can take you anywhere.
Joyce A. Myers
Thus far we have introduced a series of postulates and discussed some of their physical impli-
cations. We have introduced a powerful notation (Dirac notation) and have been studying how
we describe dynamics at the atomic and molecular level, where ħ is not a small number but is
of order unity. We now move to a topic which will serve as the bulk of our course: the study
of the stationary states of various physical systems. We shall start with some general principles
(most of which we have seen already) and then tackle the following systems, in roughly this
order:

1. Harmonic oscillators: molecular vibrational spectroscopy, phonons, photons, equilibrium
quantum dynamics.

2. Angular momentum: spin systems, molecular rotations.

3. Hydrogen atom: hydrogenic systems, the basis for atomic theory.
5.1 Introduction to Bound States
Before moving on to these systems, let's first consider what is meant by a bound state. Say
we have a potential well which has an arbitrary shape, except that at x = ±a, V(x) = 0 and
remains so in either direction. Also, in the range −a ≤ x ≤ a, V(x) < 0. The Schrödinger
equation for the stationary states is:

[ −(ħ²/2m) ∂²/∂x² + V(x) ] ψ_n(x) = E_n ψ_n(x) (5.1)

Rather than solve this exactly (which we cannot do, since we haven't specified more about
V), let's examine the topology of the allowed bound-state solutions. As we have done with the
square-well cases, let's cut the x axis into three domains: Domain I for x < −a, Domain II for
−a ≤ x ≤ a, and Domain III for x > a. What are the matching conditions that must be met?
For Domain I we have:

∂²ψ_I(x)/∂x² = −(2mE/ħ²) ψ_I(x)   (+) (5.2)

For Domain II we have:

∂²ψ_II(x)/∂x² = (2m(V(x) − E)/ħ²) ψ_II(x)   (−) (5.3)

For Domain III we have:

∂²ψ_III(x)/∂x² = −(2mE/ħ²) ψ_III(x)   (+) (5.4)

At the rightmost end of each equation, the (±) indicates the sign of the second derivative
relative to the wavefunction itself, bearing in mind that for a bound state E < 0 while, inside the
well, V(x) − E < 0. (I.e., the curvature has the same or opposite sign as the function itself.)
For (+) curvature, the wavefunctions curve away from the x-axis. For (−) curvature,
the wavefunctions are curved towards the x-axis.

Therefore, we can conclude that for regions outside the well the solutions behave much like
exponentials, and within the well they behave like superpositions of sine and cosine functions.
Thus, we adopt the asymptotic solutions

ψ(x) → exp(+κx) for x < −a as x → −∞ (5.5)

and

ψ(x) → exp(−κx) for x > a as x → +∞ (5.6)

where κ = √(−2mE)/ħ. Finally, in the well region, ψ(x) oscillates about the x-axis. We can try
to obtain a more complete solution by combining the solutions that we know. To do so, we must
find solutions which are both continuous functions of x and have continuous first derivatives in x.
Say we pick an arbitrary energy, E, and seek a solution at this energy. If we fix the solution in
Domain I (the decaying exponential) to within a multiplicative factor, the solution in Domain III
is then a complicated function of the exact potential curve and can be written as

ψ_III(x) = B(E) e^{+κx} + B'(E) e^{−κx} (5.7)

where B(E) and B'(E) are both real functions of E and depend upon the potential function.
Since the solutions must be L², the only appropriate bound states are those for which B(E) = 0.
Any other value of B(E) leads to diverging solutions.

Thus we make the following observations concerning bound states:

1. They have negative energy.

2. They vanish exponentially outside the potential well and oscillate within.

3. They form a discrete spectrum, as the result of the boundary conditions imposed by the
potential.
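The condition B(E) = 0 is the basis of the numerical "shooting" method: integrate from the far left on the decaying exponential and scan E for the energies where the growing-exponential coefficient on the right changes sign. A rough sketch, assuming Python with NumPy; the square well (V₀ = 2, a = 1, ħ = m = 1), the simple integrator, and the reference energy −1.4696 (from the even-state condition k tan(ka) = κ for this well) are all assumptions of this sketch:

```python
import numpy as np

hbar, m = 1.0, 1.0
V0, a = 2.0, 1.0                       # illustrative square well: V = -V0 for |x| < a
V = lambda x: -V0 if abs(x) < a else 0.0

def B_like(E, L=6.0, N=6000):
    """Integrate psi'' = (2m/hbar^2)(V - E) psi from the far left, starting on the
    decaying exponential exp(+kappa x); the return value plays the role of B(E)."""
    kappa = np.sqrt(-2 * m * E) / hbar
    x, dx = -L, 2 * L / N
    psi, dpsi = 1e-6, 1e-6 * kappa
    for _ in range(N):
        dpsi += (2 * m / hbar**2) * (V(x) - E) * psi * dx
        psi += dpsi * dx
        x += dx
    return psi

# Bisect on the sign change of B(E); the bracket excludes the second bound state
lo, hi = -1.99, -0.5
assert B_like(lo) * B_like(hi) < 0
for _ in range(40):
    mid = (lo + hi) / 2
    if B_like(lo) * B_like(mid) < 0:
        hi = mid
    else:
        lo = mid
E0 = (lo + hi) / 2

# Even ground state of this well: k tan(ka) = kappa gives E0 ~ -1.4696
assert abs(E0 - (-1.4696)) < 0.05
```

The discrete spectrum appears here as a finite set of isolated sign changes of B(E) in (−V₀, 0), one per bound state.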
5.2 The Variational Principle
Often the interaction potential is so complicated that an exact solution is not possible. This
is often the case in molecular problems, in which the potential energy surface is a complicated
multidimensional function which we know only at a few points connected by some interpola-
tion function. We can, however, make some approximations. The method we shall use is the
variational method.
5.2.1 Variational Calculus
The basic principle of the variational method lies at the heart of most physical principles. The
idea is that we represent a quantity as a stationary integral

J = ∫_{x₁}^{x₂} f(y, y_x, x) dx (5.8)

where f(y, y_x, x) is some known function which depends upon three variables, which are also
functions of x: y(x), y_x = dy/dx, and x itself. The dependence of y on x is generally unknown.
This means that while we have fixed the endpoints of the integral, the path that we actually
take between the endpoints is not known.
Picking different paths leads to different values of J. However, certain paths will minimize,
maximize, or find the saddle points of J. For most cases of physical interest, it's the extrema
that we are interested in. Let's say that there is one path, y_o(x), which minimizes J (see Fig. 5.1).
If we distort that path slightly, we get another path y(x) which is not too unlike y_o(x), and we
will write it as y(x) = y_o(x) + η(x), where η(x₁) = η(x₂) = 0, so that the two paths meet at the
terminal points. If η(x) differs from 0 only over a small region, we can write the new path as

y(x, α) = y_o(x) + α η(x)

and the variation from the minimum as

δy = y(x, α) − y(x, 0) = α η(x).

Since y_o is the path which minimizes J, and y(x, α) is some other path, J is also a
function of α:

J(α) = ∫_{x₁}^{x₂} f(y(x, α), y'(x, α), x) dx

and will be minimized when

(∂J/∂α)_{α=0} = 0.
Because J depends upon α, we can examine the α dependence of the integral:

(∂J/∂α)_{α=0} = ∫_{x₁}^{x₂} [ (∂f/∂y)(∂y/∂α) + (∂f/∂y')(∂y'/∂α) ] dx

Since

∂y/∂α = η(x)

and

∂y'/∂α = ∂η/∂x,

we have

(∂J/∂α)_{α=0} = ∫_{x₁}^{x₂} [ (∂f/∂y) η(x) + (∂f/∂y')(∂η/∂x) ] dx.
Now, we need to integrate the second term by parts to get η as a common factor. Remember
integration by parts?

∫ u dv = uv − ∫ v du

From this,

∫_{x₁}^{x₂} (∂f/∂y')(∂η/∂x) dx = η(x)(∂f/∂y') |_{x₁}^{x₂} − ∫_{x₁}^{x₂} η(x) (d/dx)(∂f/∂y') dx

The boundary term vanishes, since η vanishes at the endpoints. So, putting it all together and
setting it equal to zero:

∫_{x₁}^{x₂} η(x) [ ∂f/∂y − (d/dx)(∂f/∂y') ] dx = 0
We're not done yet, since we still have to evaluate this. Notice that α has disappeared from
the expression. In effect, we can take an arbitrary variation and still find the desired path that
minimizes J. Since η(x) is arbitrary, subject to the boundary conditions, we can make it have the
same sign as the remaining part of the integrand, so that the integrand is always non-negative.
Thus, the only way for the integral to vanish is if the bracketed term is zero everywhere:

∂f/∂y − (d/dx)(∂f/∂y_x) = 0 (5.9)
This is known as the Euler equation, and it has an enormous number of applications. Perhaps
the simplest is the proof that the shortest distance between two points is a straight line (or,
on a curved space, a geodesic). The straight-line distance between two points in the xy plane
is s = √(x² + y²), and the differential element of distance is

ds = √((dx)² + (dy)²) = √(1 + y_x²) dx.

Thus, we can write the distance along some curve in the xy plane as

J = ∫_{(x₁,y₁)}^{(x₂,y₂)} ds = ∫_{(x₁,y₁)}^{(x₂,y₂)} √(1 + y_x²) dx.

If we knew y(x), then J would be the arc length or path length along the function y(x) between
the two points. Sort of like how many steps you would take along a trail between two points. The
trail may be curvy or straight, and there is certainly a single trail which is the shortest. So,
setting

f(y, y_x, x) = √(1 + y_x²)

and substituting it into the Euler equation, one gets

(d/dx)(∂f/∂y_x) = (d/dx) [ y_x / √(1 + y_x²) ] = 0. (5.10)
So, the only way for this to be true is if

y_x / √(1 + y_x²) = constant. (5.11)

Solving for y_x produces a second constant, y_x = a, which immediately yields y(x) = ax + b.
In other words, it's a straight line! Not too surprising.
An important application of this principle arises when the integrand f is the classical Lagrangian
for a mechanical system. The Lagrangian is related to the Hamiltonian and is defined as the
difference between the kinetic and potential energy,

L = T − V (5.12)

whereas H is the sum T + V. Rather than taking x as the independent variable, we take time,
t, with the position and velocity of the particle as the dependent variables. The statement δJ = 0
is then a mathematical statement of Hamilton's principle of least action:

δ ∫_{t₁}^{t₂} L(x, ẋ, t) dt = 0. (5.13)

In essence, Hamilton's principle asserts that the motion of the system from one point to another
is along a path which minimizes the integral of the Lagrangian. The equations of motion for that
path come from the Euler-Lagrange equations:

(d/dt)(∂L/∂ẋ) − ∂L/∂x = 0. (5.14)

So, if we write the Lagrangian as

L = (1/2) m ẋ² − V(x) (5.15)

and substitute this into the Euler-Lagrange equation, we get

m ẍ = −∂V/∂x (5.16)

which is Newton's law of motion, F = ma.
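Hamilton's principle can be checked numerically: discretize the action for the Lagrangian of Eq. 5.15 and confirm that the classical path between fixed endpoints has a smaller action than nearby varied paths. A sketch, assuming Python with NumPy; the unit-frequency oscillator and a time interval shorter than half a period (over which the classical path is a true minimum) are illustrative choices:

```python
import numpy as np

m, k = 1.0, 1.0                  # unit mass and spring constant (illustrative)
w = np.sqrt(k / m)
T, N = 1.0, 2000                 # total time < half a period, so the path is a minimum
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]

def action(x):
    """Discretized S = int (T - V) dt for the Lagrangian of Eq. 5.15."""
    v = np.diff(x) / dt
    xm = (x[1:] + x[:-1]) / 2    # midpoint rule for the potential term
    return np.sum(0.5 * m * v**2 - 0.5 * k * xm**2) * dt

# Classical path between x(0) = 0 and x(T) = 1: x(t) = sin(w t)/sin(w T)
x_cl = np.sin(w * t) / np.sin(w * T)
S_cl = action(x_cl)

# Any variation eta vanishing at the endpoints raises the action
for amp in (0.05, 0.2):
    assert action(x_cl + amp * np.sin(np.pi * t / T)) > S_cl
    assert action(x_cl + amp * t * (T - t)) > S_cl
```

Because the action is quadratic in x for a harmonic potential, the increase is exactly the action of the variation itself, with no cross term: that is the content of δS = 0 along the classical path.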
5.2.2 Constraints and Lagrange Multipliers
Before we can apply this principle to a quantum mechanical problem, we need to ask ourselves
what happens if there is a constraint on the system which excludes certain values or paths, so that
not all the δy's may be varied arbitrarily. Typically, we can write the constraint as

φ_i(y, x) = 0. (5.17)

For example, for a bead on a wire we need to constrain the path to always lie on the wire, and for
a pendulum the path must lie on a hemisphere defined by the length of the pendulum from
the pivot point. In any case, the general procedure is to introduce another function, λ_i(x), and
integrate:

∫_{x₁}^{x₂} λ_i(x) φ_i(y, x) dx = 0 (5.18)

so that

δ ∫_{x₁}^{x₂} λ_i(x) φ_i(y, x) dx = 0 (5.19)

as well. In fact, it turns out that the λ_i(x) can even be taken to be constants, λ_i, for this
whole procedure to work.

Regardless of the case, we can always write the new stationary integral as

δ ∫ ( f(y, y_x, x) + Σ_i λ_i φ_i(y, x) ) dx = 0. (5.20)

The multiplying constants are called Lagrange multipliers. In your statistical mechanics course,
these will occur when you minimize various thermodynamic functions subject to various
extensive constraints, such as the total number of particles in the system, the average energy or
temperature, and so on.

In a sense, we have redefined the original function, or Lagrangian, to incorporate the constraint
into the dynamics. So, in the presence of a constraint, the Euler-Lagrange equations become

(d/dt)(∂L/∂ẋ) − ∂L/∂x = Σ_i λ_i (∂φ_i/∂x) (5.21)

where the term on the right-hand side of the equation represents a force due to the constraint.
The next issue is that we still need to be able to determine the λ_i Lagrange multipliers.
Figure 5.1: Variational paths between endpoints. The thick line is the stationary path, y_o(x),
and the dashed blue curves are variations y(x, α) = y_o(x) + α η(x).
5.2.3 Variational method applied to the Schrödinger equation
The goal of all this is to develop a procedure for computing the ground state of some quantum
mechanical system. What this means is that we want to minimize the energy of the system
with respect to arbitrary variations in the state function, subject to the constraint that the state
function is normalized (i.e. the number of particles remains fixed). This means we want to
construct the variation

δ⟨ψ|H|ψ⟩ = 0 (5.22)

with the constraint δ⟨ψ|ψ⟩ = 0.

In the coordinate representation, the integral involves taking the expectation value of the
kinetic energy operator, which is a second-derivative operator. That form is not too convenient
for our purposes, since it will not allow us to write Eq. 5.22 in a form suitable for the Euler-
Lagrange equations. But we can integrate by parts!

∫ ψ* (∂²ψ/∂x²) dx = [ ψ* (∂ψ/∂x) ] − ∫ (∂ψ*/∂x)(∂ψ/∂x) dx (5.23)

Assuming that the wavefunction vanishes at the limits of the integration, the surface term van-
ishes, leaving only the second term. We can now write the energy expectation value in terms
of two dependent variables, ψ and ψ*. OK, they're functions, but we can still treat them as
dependent variables, just like we treated the y(x)'s above.

E = ∫ [ (ħ²/2m)(∂ψ*/∂x)(∂ψ/∂x) + V ψ*ψ ] dx (5.24)

Adding on the constraint, with E playing the role of the Lagrange multiplier, and defining the
Lagrangian as

L = (ħ²/2m)(∂ψ*/∂x)(∂ψ/∂x) + (V − E) ψ*ψ, (5.25)

we can substitute this into the Euler-Lagrange equations,

∂L/∂ψ* − (∂/∂x) ∂L/∂(∂ψ*/∂x) = 0. (5.26)

This produces the result

(V − E)ψ = (ħ²/2m) ∂²ψ/∂x², (5.27)

which we immediately recognize as the Schrödinger equation.

While this may be a rather academic result, it gives us the key to recognizing that we can
make an expansion of ψ in an arbitrary basis and take variations with respect to the coefficients
of that basis to find the lowest energy state. This is the basis of a number of powerful numerical
methods used to solve the Schrödinger equation for extremely complicated systems.
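That last remark is the linear variational (Rayleigh-Ritz) method in practice: expand ψ in a fixed basis, build the Hamiltonian matrix, and diagonalize; the lowest eigenvalue is an upper bound to the ground-state energy. A sketch for a harmonic oscillator expanded in particle-in-a-box functions, assuming Python with NumPy; the box length, basis size, and grid are illustrative choices:

```python
import numpy as np

hbar, m, k = 1.0, 1.0, 1.0
w = np.sqrt(k / m)
L, nbas = 10.0, 24                         # box [-L/2, L/2] and basis size (illustrative)
x = np.linspace(-L / 2, L / 2, 8001)
trap = lambda y: np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

# Particle-in-a-box basis functions, zero at the box walls
basis = [np.sqrt(2 / L) * np.sin(j * np.pi * (x + L / 2) / L) for j in range(1, nbas + 1)]

H = np.zeros((nbas, nbas))
for i in range(nbas):
    for j in range(nbas):
        H[i, j] = trap(basis[i] * 0.5 * k * x**2 * basis[j])      # potential term
    H[i, i] += (i + 1)**2 * np.pi**2 * hbar**2 / (2 * m * L**2)   # exact kinetic term

E = np.linalg.eigvalsh(H)
# The low eigenvalues approach the exact (n + 1/2) hbar w from above
for n in range(3):
    assert abs(E[n] - (n + 0.5) * hbar * w) < 5e-3
```

This is the engine inside essentially every quantum chemistry code: the basis differs (Gaussians rather than sines), but the variational matrix diagonalization is the same.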
5.2.4 Variational theorems: Rayleigh-Ritz Technique
We now discuss two important theorems:

Theorem 5.1 The expectation value of the Hamiltonian is stationary in the neighborhood of its
eigenstates.

To demonstrate this, let |ψ⟩ be a state in which we compute the expectation value of H. Also,
let's modify the state just a bit and write

|ψ⟩ → |ψ⟩ + |δψ⟩. (5.28)

Expectation values are computed as

⟨H⟩ = ⟨ψ|H|ψ⟩ / ⟨ψ|ψ⟩ (5.29)

(where we assume arbitrary normalization). In other words,

⟨ψ|ψ⟩⟨H⟩ = ⟨ψ|H|ψ⟩. (5.30)

Now, insert the variation:

⟨ψ|ψ⟩ δ⟨H⟩ + ⟨δψ|ψ⟩⟨H⟩ + ⟨ψ|δψ⟩⟨H⟩ = ⟨δψ|H|ψ⟩ + ⟨ψ|H|δψ⟩ (5.31)

or

⟨ψ|ψ⟩ δ⟨H⟩ = ⟨δψ|H − ⟨H⟩|ψ⟩ + ⟨ψ|H − ⟨H⟩|δψ⟩. (5.32)

If the expectation value is to be stationary, then δ⟨H⟩ = 0. Thus the RHS must vanish for an
arbitrary variation in the wavefunction. Let's pick

|δψ⟩ = ε (H − ⟨H⟩)|ψ⟩ (5.33)

with ε an infinitesimally small real number. The RHS then becomes 2ε ⟨ψ|(H − ⟨H⟩)²|ψ⟩, which
is the squared norm of (H − ⟨H⟩)|ψ⟩; for this to vanish,

(H − ⟨H⟩)|ψ⟩ = 0. (5.34)

That is to say, |ψ⟩ is an eigenstate of H, thus proving the theorem.
The second theorem goes:

Theorem 5.2 The expectation value of the Hamiltonian in an arbitrary state is greater than or
equal to the ground-state energy.

The proof goes as this: Assume that H has a discrete spectrum of states (which we demonstrated
that it must), such that

H|n⟩ = E_n|n⟩. (5.35)

Thus, we can expand any state |ψ⟩ as

|ψ⟩ = Σ_n c_n |n⟩. (5.36)

Consequently,

⟨ψ|ψ⟩ = Σ_n |c_n|², (5.37)

and

⟨ψ|H|ψ⟩ = Σ_n |c_n|² E_n. (5.38)

Thus (assuming that |ψ⟩ is normalized),

⟨H⟩ = Σ_n E_n |c_n|² ≥ Σ_n E_0 |c_n|² = E_0, (5.39)

quod erat demonstrandum.

Using these two theorems, we can estimate the ground state energy and wavefunctions for a
variety of systems. Let's first look at the harmonic oscillator.

Exercise 5.1 Use the variational principle to estimate the ground-state energy of a particle in
the potential

V(x) = { Cx   for x > 0
       { +∞   for x ≤ 0  (5.40)

Use x e^{−ax} as the trial function.
5.2.5 Variational solution of the harmonic oscillator ground state
The Schrödinger equation for the harmonic oscillator (HO) is

[ −(ħ²/2m) ∂²/∂x² + (k/2) x² ] ψ(x) − E ψ(x) = 0. (5.41)

Take as a trial function

ψ(x) = exp(−β x²) (5.42)

where β is a positive, non-zero constant to be determined. The variational principle states that
the energy reaches a minimum,

∂⟨H⟩/∂β = 0, (5.43)

when ψ(x) is the ground state solution. Let us first derive ⟨H⟩(β):

⟨H⟩(β) = ⟨ψ|H|ψ⟩ / ⟨ψ|ψ⟩ (5.44)
To evaluate this, we break the problem into a series of integrals:

⟨ψ|ψ⟩ = ∫_{−∞}^{∞} dx |ψ(x)|² = √(π/(2β)) (5.45)

⟨ψ|p²|ψ⟩ = −ħ² ∫_{−∞}^{∞} dx ψ*(x) ψ''(x) = ħ² [ 2β ⟨ψ|ψ⟩ − 4β² ⟨ψ|x²|ψ⟩ ] (5.46)

and

⟨ψ|x²|ψ⟩ = ∫_{−∞}^{∞} dx x² |ψ(x)|² = (1/(4β)) ⟨ψ|ψ⟩. (5.47)

Putting it all together:

⟨ψ|H|ψ⟩ / ⟨ψ|ψ⟩ = (ħ²/2m)( 2β − 4β² (1/(4β)) ) + (k/2)(1/(4β)) (5.48)

= (ħ²/2m) β + k/(8β) (5.49)
Taking the derivative with respect to β:

∂⟨H⟩/∂β = ħ²/(2m) − k/(8β²) = 0 (5.50)

Thus,

β = ±√(mk)/(2ħ). (5.51)

Since only positive values of β are allowed,

β = √(mk)/(2ħ). (5.52)

Using this, we can calculate the ground state energy by substituting back into ⟨H⟩(β):

⟨H⟩(β) = (ħ²/2m)(√(mk)/(2ħ)) + (k/8)(2ħ/√(mk)) = (ħ/2)√(k/m) (5.53)

Now, define the angular frequency ω = √(k/m):

⟨H⟩(β) = ħω/2 (5.54)

which (as we can easily prove) is the ground state energy of the harmonic oscillator.

Furthermore, we can write the HO ground state wavefunction as
ψ_o(x) = (1/√(⟨ψ|ψ⟩)) ψ(x) (5.55)

= (2β/π)^{1/4} exp(−β x²) (5.56)

= (√(mk)/(πħ))^{1/4} exp(−(√(mk)/(2ħ)) x²) (5.57)
To compute the error in our estimate, let's substitute the variational solution back into the
Schrödinger equation:

[ −(ħ²/2m) ∂²/∂x² + (k/2) x² ] ψ_o(x) = −(ħ²/2m) ψ_o''(x) + (k/2) x² ψ_o(x) (5.58)

−(ħ²/2m) ψ_o''(x) + (k/2) x² ψ_o(x) = −(ħ²/2m) [ (km x²/ħ²) − (√(km)/ħ) ] ψ_o(x) + (k/2) x² ψ_o(x) (5.59)

−(ħ²/2m) ψ_o''(x) + (k/2) x² ψ_o(x) = (ħ/2) √(k/m) ψ_o(x) (5.60)

Thus, ψ_o(x) is in fact the correct ground state wavefunction for this system. If it were not
the correct function, we could re-introduce the solution as a new trial function, re-compute the
energy, etc., and iterate through until we either find a solution or run out of patience! (Usually
it's the latter rather than the former.)
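The β minimization of Eqs. 5.49-5.54 is easy to confirm numerically. A minimal sketch, assuming Python with NumPy and the illustrative units ħ = m = k = 1:

```python
import numpy as np

hbar, m, k = 1.0, 1.0, 1.0
w = np.sqrt(k / m)

# <H>(beta) from Eq. 5.49
E = lambda b: hbar**2 * b / (2 * m) + k / (8 * b)

# Crude minimization over a dense grid of beta values
betas = np.linspace(0.01, 5.0, 200001)
b_min = betas[np.argmin(E(betas))]

assert abs(b_min - np.sqrt(m * k) / (2 * hbar)) < 1e-3   # Eq. 5.52
assert abs(E(b_min) - hbar * w / 2) < 1e-8               # Eq. 5.54
```

Because ⟨H⟩(β) is stationary at the minimum, the energy is far less sensitive to β than β itself: a grid accurate to ~10⁻⁵ in β pins the energy to ~10⁻⁹, which is why variational energies are so forgiving of imperfect trial functions.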
5.3 The Harmonic Oscillator
Now that we have the HO ground state and the HO ground state energy, let us derive the whole
HO energy spectrum. To do so, we introduce the dimensionless quantities X and P, related to
the physical position and momentum by

X = (mω/(2ħ))^{1/2} x (5.61)

P = (1/(2ħmω))^{1/2} p (5.62)

This will save us from carrying around a bunch of coefficients. In these units, the HO Hamiltonian
is

H = ħω(P² + X²). (5.63)
The X and P obey the canonical commutation relation:

[X, P] = (1/(2ħ)) [x, p] = i/2 (5.64)

We can also write the following:

(X + iP)(X − iP) = X² + P² + 1/2 (5.65)

(X − iP)(X + iP) = X² + P² − 1/2. (5.66)

Thus, I can construct the commutator:

[(X + iP), (X − iP)] = (X + iP)(X − iP) − (X − iP)(X + iP)
= 1/2 + 1/2
= 1 (5.67)

Let's define the following two operators:

a = (X + iP) (5.68)

a† = (X + iP)† = (X − iP). (5.69)

Therefore, the commutator of a and a† is

[a, a†] = 1. (5.70)

Let's write H in terms of the a and a† operators:

H = ħω(X² + P²) = ħω[(X − iP)(X + iP) + 1/2] (5.71)

or, in terms of the a and a† operators,

H = ħω(a†a + 1/2). (5.72)

Now, suppose that |ψ_n⟩ is the nth eigenstate of H. Thus, we write:

ħω(a†a + 1/2)|ψ_n⟩ = E_n|ψ_n⟩ (5.73)

What happens when I multiply the whole equation by a? We get:

a ħω(a†a + 1/2)|ψ_n⟩ = a E_n|ψ_n⟩ (5.74)

ħω(aa† + 1/2)(a|ψ_n⟩) = E_n(a|ψ_n⟩) (5.75)

Now, since aa† − a†a = 1,

ħω(a†a + 1/2 + 1)(a|ψ_n⟩) = E_n(a|ψ_n⟩). (5.76)

In other words, a|ψ_n⟩ is an eigenstate of H with energy E = E_n − ħω.
What happens if I do the same procedure, this time using a†? We write:

a† ħω(a†a + 1/2)|ψ_n⟩ = a† E_n|ψ_n⟩ (5.77)

Since

[a, a†] = aa† − a†a = 1, (5.78)

we have

a†a = aa† − 1, (5.79)

so we can write

a†(a†a) = a†(aa† − 1) (5.80)

= (a†a − 1)a†. (5.81)

Thus,

a† ħω(a†a + 1/2)|ψ_n⟩ = ħω((a†a − 1 + 1/2)a†)|ψ_n⟩ (5.82)

or

ħω(a†a − 1/2)(a†|ψ_n⟩) = E_n(a†|ψ_n⟩). (5.83)

Thus, a†|ψ_n⟩ is an eigenstate of H with energy E = E_n + ħω.

Since a† and a act on harmonic oscillator eigenstates to give eigenstates with one more or one
less ħω quantum of energy, these are termed creation and annihilation operators, since they
act to create additional quanta of excitation or decrease the number of quanta of excitation in
the system. Using these operators, we can effectively ladder our way up the energy scale and
determine any eigenstate once we know just one.
Well, we know the ground state solution; that we got via the variational calculation. What
happens when I apply a† to the ψ_o(x) we derived above? In coordinate form:

(X − iP) ψ_o(x) = [ (mω/(2ħ))^{1/2} x − ħ(1/(2mωħ))^{1/2} ∂/∂x ] ψ_o(x) (5.84)

= [ (mω/(2ħ))^{1/2} x − ħ(1/(2mωħ))^{1/2} ∂/∂x ] (mω/(πħ))^{1/4} e^{−mωx²/(2ħ)} (5.85)

X acting on ψ_o is:

X ψ_o(x) = (mω/(2ħ))^{1/2} x ψ_o(x) (5.86)

iP acting on ψ_o is:

iP ψ_o(x) = ħ(1/(2mωħ))^{1/2} (∂/∂x) ψ_o(x) (5.87)

= −x (mω/(2ħ))^{1/2} ψ_o(x) (5.88)

= −X ψ_o(x) (5.89)
After cleaning things up:

(X − iP) ψ_o(x) = 2X ψ_o(x) (5.90)

= 2 (mω/(2ħ))^{1/2} x ψ_o(x) (5.91)

= 2 (mω/(2ħ))^{1/2} x (mω/(πħ))^{1/4} exp(−mωx²/(2ħ)) (5.93)

which, up to normalization, is the first excited state ψ_1(x).
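The ladder algebra can also be made concrete with matrices. In the number basis, a|n⟩ = √n |n−1⟩ and a†|n⟩ = √(n+1) |n+1⟩, and building H = ħω(a†a + 1/2) from them reproduces the E_n = (n + 1/2)ħω ladder. A sketch in a truncated basis, assuming Python with NumPy; the √n matrix elements are standard, though the truncation size N = 12 is an arbitrary choice, and [a, a†] = 1 necessarily fails in the last basis state of any truncation:

```python
import numpy as np

N = 12                        # truncated oscillator basis |0>, ..., |N-1>
hbar, w = 1.0, 1.0

# a|n> = sqrt(n)|n-1> and adag|n> = sqrt(n+1)|n+1> as matrices
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.T

H = hbar * w * (adag @ a + 0.5 * np.eye(N))

# Spectrum: E_n = (n + 1/2) hbar w
assert np.allclose(np.diag(H), (np.arange(N) + 0.5) * hbar * w)

# Ladder action: H (adag|n>) = (E_n + hbar w)(adag|n>), away from the truncation edge
for n in range(N - 2):
    ket = np.zeros(N); ket[n] = 1.0
    up = adag @ ket
    assert np.allclose(H @ up, (hbar * w * (n + 0.5) + hbar * w) * up)

# [a, adag] = 1 holds except in the last row/column of the truncated basis
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))
```

This finite-matrix picture is exactly how harmonic degrees of freedom are represented in numerical work, with the truncation chosen large enough that the states of interest never feel the edge.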
5.3.1 Harmonic Oscillators and Nuclear Vibrations
We introduced one of the most important applications of quantum mechanics: the solution of the Schrodinger equation for harmonic systems. These are systems in which the amplitude of motion is small enough that the physical potential energy operator can be expanded about its minimum to second order in the displacement from the minimum. When we do so, the Hamiltonian can be written in the form

H = \hbar\omega(P^2 + X^2)   (5.94)
where P and X are dimensionless operators related to the physical momentum and position operators via

X = \left(\frac{m\omega}{2\hbar}\right)^{1/2} x   (5.95)

and

P = \left(\frac{1}{2\hbar m\omega}\right)^{1/2} p.   (5.96)
We also used the variational method to deduce the ground state wavefunction and demonstrated that the spectrum of H is a series of levels separated by \hbar\omega and that the ground-state energy is \hbar\omega/2 above the energy minimum of the potential.
We also defined a new set of operators by taking linear combinations of X and P:

a = X + iP   (5.97)
a^\dagger = X - iP.   (5.98)

We also showed that the commutation relation for these operators is

[a, a^\dagger] = 1.   (5.99)
These operators are non-Hermitian operators and hence do not correspond to a physical observable. However, we demonstrated that when a acts on an eigenstate of H, it produces another eigenstate with energy E_n - \hbar\omega. Also, a^\dagger acting on an eigenstate of H produces another eigenstate with energy E_n + \hbar\omega. Thus, we called a the destruction or annihilation operator, since it removes a quantum of excitation from the system, and a^\dagger the creation operator, since it adds a quantum of excitation to the system. We also wrote H using these operators as

H = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right)   (5.100)
Finally, \omega is the angular frequency of the classical harmonic motion, as obtained via Hooke's law:

\ddot{x} = -\frac{k}{m}x.   (5.101)

Solving this produces

x(t) = x_o \sin(\omega t + \phi)   (5.102)

and

p(t) = p_o \cos(\omega t + \phi).   (5.103)

Thus, the classical motion in the (x, p) phase space traces out the circumference of a circle every period 1/\nu = 2\pi/\omega regardless of the initial amplitude.
The great advantage of using the a and a^\dagger operators is that we can replace a differential equation with an algebraic equation. Furthermore, since we can represent any Hermitian operator acting on the HO states as a combination of the creation/annihilation operators, we can replace a potentially complicated series of differentiations, integrations, etc. with simple algebraic manipulations. We just have to remember a few simple rules regarding the commutation of the two operators. Two operators which we may want to construct are:

position operator: x = \left(\frac{\hbar}{2m\omega}\right)^{1/2}(a^\dagger + a)

momentum operator: p = i\left(\frac{\hbar m\omega}{2}\right)^{1/2}(a^\dagger - a).
Another important operator is

N = a^\dagger a,   (5.104)

in terms of which

H = \hbar\omega(N + 1/2).   (5.105)

Since [H, N] = 0, eigenvalues of N are good quantum numbers and N is a constant of the motion. Also, since

H|\psi_n\rangle = E_n|\psi_n\rangle = \hbar\omega(N + 1/2)|\psi_n\rangle,   (5.106)

then if

N|\psi_n\rangle = n|\psi_n\rangle,   (5.107)

n must be an integer, n = 0, 1, 2, \ldots, corresponding to the number of quanta of excitation in the state. This is why N gets the name number operator.
Some useful relations (that you should prove):

1. [N, a] = [a^\dagger a, a] = -a
2. [N, a^\dagger] = [a^\dagger a, a^\dagger] = a^\dagger
To summarize, we have the following relations using the a and a^\dagger operators:

1. a|\psi_n\rangle = \sqrt{n}\,|\psi_{n-1}\rangle
2. a^\dagger|\psi_n\rangle = \sqrt{n+1}\,|\psi_{n+1}\rangle
3. \langle\psi_n|a = \sqrt{n+1}\,\langle\psi_{n+1}| = (a^\dagger|\psi_n\rangle)^\dagger
4. \langle\psi_n|a^\dagger = \sqrt{n}\,\langle\psi_{n-1}| = (a|\psi_n\rangle)^\dagger
5. N|\psi_n\rangle = n|\psi_n\rangle
6. \langle\psi_n|N = n\langle\psi_n|
Using the second of these relations we can write

|\psi_{n+1}\rangle = \frac{a^\dagger}{\sqrt{n+1}}|\psi_n\rangle,   (5.108)

which can be iterated back to the ground state to produce

|\psi_n\rangle = \frac{(a^\dagger)^n}{\sqrt{n!}}|\psi_o\rangle.   (5.109)

This is the generating relation for the eigenstates.
Now, let's look at x and p acting on |\psi_n\rangle:

x|\psi_n\rangle = \sqrt{\frac{\hbar}{2m\omega}}(a^\dagger + a)|\psi_n\rangle   (5.110)

= \sqrt{\frac{\hbar}{2m\omega}}\left(\sqrt{n+1}\,|\psi_{n+1}\rangle + \sqrt{n}\,|\psi_{n-1}\rangle\right)   (5.111)

Also,

p|\psi_n\rangle = i\sqrt{\frac{m\hbar\omega}{2}}(a^\dagger - a)|\psi_n\rangle   (5.112)

= i\sqrt{\frac{m\hbar\omega}{2}}\left(\sqrt{n+1}\,|\psi_{n+1}\rangle - \sqrt{n}\,|\psi_{n-1}\rangle\right)   (5.113)
Thus, the matrix elements of x and p in the HO basis are:

\langle\psi_m|x|\psi_n\rangle = \sqrt{\frac{\hbar}{2m\omega}}\left(\sqrt{n+1}\,\delta_{m,n+1} + \sqrt{n}\,\delta_{m,n-1}\right)   (5.114)

\langle\psi_m|p|\psi_n\rangle = i\sqrt{\frac{m\hbar\omega}{2}}\left(\sqrt{n+1}\,\delta_{m,n+1} - \sqrt{n}\,\delta_{m,n-1}\right)   (5.115)
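One quick use of these matrix elements: inserting a complete set of states gives \langle\psi_n|x^2|\psi_n\rangle = \sum_k \langle\psi_n|x|\psi_k\rangle\langle\psi_k|x|\psi_n\rangle = (\hbar/2m\omega)(2n+1). A short Python sketch, with \hbar = m = \omega = 1 assumed purely for illustration:

```python
import math

HBAR = M = OMEGA = 1.0  # illustrative choice of units (an assumption)
S = math.sqrt(HBAR / (2 * M * OMEGA))

def x_elem(m, n):
    # <psi_m|x|psi_n> from Eq. (5.114): nonzero only for m = n +/- 1
    if m == n + 1:
        return S * math.sqrt(n + 1)
    if m == n - 1:
        return S * math.sqrt(n)
    return 0.0

def x2_diag(n, nmax=50):
    # <n|x^2|n> by summing over a (truncated) complete set of states
    return sum(x_elem(n, k) * x_elem(k, n) for k in range(nmax))

for n in range(5):
    expected = (HBAR / (2 * M * OMEGA)) * (2 * n + 1)
    assert abs(x2_diag(n) - expected) < 1e-12
```

Note that only the two neighboring states k = n ± 1 contribute, which is why the sum converges immediately despite the truncation.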
The harmonic oscillator wavefunctions can be obtained by solving the equation:

\langle x|a|\psi_o\rangle = (X + iP)\psi_o(x) = 0   (5.116)

\left(\frac{m\omega}{\hbar}x + \frac{\partial}{\partial x}\right)\psi_o(x) = 0   (5.117)

The solution of this first order differential equation is easy:

\psi_o(x) = c\,\exp\left(-\frac{m\omega}{2\hbar}x^2\right)   (5.118)

where c is a constant of integration which we can obtain via normalization:

\int dx\,|\psi_o(x)|^2 = 1   (5.119)

Doing the integration produces:

\psi_o(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega}{2\hbar}x^2}   (5.120)
Figure 5.2: Hermite Polynomials, H_n(x), up to n = 3.
Since we know that a^\dagger acting on |\psi_o\rangle gives the next eigenstate, we can write (up to normalization)

\psi_1(x) = \left(\frac{m\omega}{\hbar}x - \frac{\partial}{\partial x}\right)\psi_o(x).   (5.121)

Finally, using the generating relation, we can write

\psi_n(x) = \frac{1}{\sqrt{n!}}\left(\frac{m\omega}{\hbar}x - \frac{\partial}{\partial x}\right)^n \psi_o(x).   (5.122)
Lastly, we have the recursion relations which generate the next solution one step higher or lower in energy given any other solution:

\psi_{n+1}(x) = \frac{1}{\sqrt{n+1}}\left(\frac{m\omega}{\hbar}x - \frac{\partial}{\partial x}\right)\psi_n(x)   (5.123)

and

\psi_{n-1}(x) = \frac{1}{\sqrt{n}}\left(\frac{m\omega}{\hbar}x + \frac{\partial}{\partial x}\right)\psi_n(x).   (5.124)
These are the recursion relationships for a class of polynomials called Hermite polynomials, after the 19th-century French mathematician who studied such functions. These are also termed Gauss-Hermite polynomials and form a set of orthogonal polynomials. The first few Hermite polynomials, H_n(x), are 1, 2x, 4x^2 - 2, 8x^3 - 12x, and 16x^4 - 48x^2 + 12 for n = 0 to 4. Some of these are plotted in Fig. 5.2.
The functions themselves are defined by the generating function

g(x, t) = e^{-t^2 + 2tx} = \sum_{n=0}^{\infty} H_n(x)\frac{t^n}{n!}.   (5.125)
Differentiating the generating function n times and setting t = 0 produces the nth Hermite polynomial

H_n(x) = \left.\frac{d^n}{dt^n}g(x, t)\right|_{t=0} = (-1)^n e^{x^2}\frac{d^n}{dx^n}e^{-x^2}.   (5.126)
Another useful relation is the Fourier transform relation:

\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{itx} e^{-x^2/2} H_n(x)\,dx = i^n e^{-t^2/2} H_n(t),   (5.127)

which is useful in generating the momentum space representation of the harmonic oscillator functions. Also, from the generating function, we can arrive at the recurrence relation:

H_{n+1} = 2xH_n - 2nH_{n-1}   (5.128)

and

H_n'(x) = 2nH_{n-1}(x).   (5.129)
Consequently, the Hermite polynomials are solutions of the second-order differential equation:

H_n'' - 2xH_n' + 2nH_n = 0,   (5.130)

which is not self-adjoint! To put this into self-adjoint form, we multiply by the weighting function w(x) = e^{-x^2}, which leads to the orthogonality integral

\int H_n(x)H_m(x)e^{-x^2}dx \propto \delta_{nm}.   (5.131)
For the harmonic oscillator functions, we absorb the weighting function into the wavefunction itself:

\psi_n(x) = e^{-x^2/2}H_n(x).

When we substitute this function into the differential equation for H_n we get

\psi_n'' + (2n + 1 - x^2)\psi_n = 0.   (5.132)
To normalize the functions, we first multiply g by itself and then multiply by w:

e^{-x^2} e^{-s^2 + 2sx} e^{-t^2 + 2tx} = \sum_{mn} e^{-x^2} H_n(x)H_m(x)\frac{s^m t^n}{n!\,m!}.   (5.133)

When we integrate over x, the cross terms drop out by orthogonality and we are left with

\sum_{n=0}^{\infty}\frac{(st)^n}{n!\,n!}\int e^{-x^2}H_n^2(x)dx = \int e^{-x^2 - s^2 + 2sx - t^2 + 2xt}dx
= e^{2st}\int e^{-(x-s-t)^2}dx
= \pi^{1/2} e^{2st}
= \pi^{1/2}\sum_{n=0}^{\infty}\frac{2^n (st)^n}{n!}.   (5.134)
Equating like powers of st we obtain

\int e^{-x^2}H_n^2(x)dx = 2^n \pi^{1/2} n!.   (5.135)
When we apply this technology to the SHO, the solutions are

\psi_n(z) = 2^{-n/2}\pi^{-1/4}(n!)^{-1/2} e^{-z^2/2} H_n(z)   (5.136)

where z = \alpha x and \alpha^2 = m\omega/\hbar.

A few gratuitous solutions:

\psi_1(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\left(\frac{2m\omega}{\hbar}\right)^{1/2} x\,\exp\left(-\frac{m\omega}{2\hbar}x^2\right)   (5.137)

\psi_2(x) = \left(\frac{m\omega}{4\pi\hbar}\right)^{1/4}\left(\frac{2m\omega}{\hbar}x^2 - 1\right)\exp\left(-\frac{m\omega}{2\hbar}x^2\right)   (5.138)

Fig. 5.3 shows the first four of these functions.
5.3.2 Classical interpretation.
In Fig. 5.3 are a few of the lowest energy states for the harmonic oscillator. Notice that as the quantum number increases, the amplitude of the wavefunction is pushed more and more towards larger values of x. This becomes more pronounced when we look at the actual probability distribution functions, |\psi_n(x)|^2, for the same four states, as shown in Fig. 5.4.
Here, in blue are the actual quantum distributions for the ground state through n = 3. In gray are the classical probability distributions for the corresponding energies. The gray curves tell us the probability per unit length of finding the classical particle at some point x at any point in time. This is inversely proportional to how long a particle spends at a given point, i.e.

P_c(x) \propto 1/v(x).

Since E = mv^2/2 + V(x),

v(x) = \sqrt{2(E - V(x))/m}

and

P_c(x) \propto \sqrt{\frac{m}{2(E - V(x))}}.

For the harmonic oscillator:

P_n(x) \propto \sqrt{\frac{m}{2(\hbar\omega(n + 1/2) - kx^2/2)}}.
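Normalizing the classical density for an oscillator of amplitude A (the turning point) gives P(x) = 1/(\pi\sqrt{A^2 - x^2}), which integrates to one regardless of A. A quick numerical check in Python; the choice of A is arbitrary, and the coarse tolerance reflects the integrable singularity at the turning points:

```python
import math

def classical_pdf_norm(A=1.0, npts=200000):
    # midpoint rule for the integral of 1/(pi*sqrt(A^2 - x^2)) over (-A, A)
    dx = 2 * A / npts
    total = 0.0
    for i in range(npts):
        x = -A + (i + 0.5) * dx
        total += dx / (math.pi * math.sqrt(A * A - x * x))
    return total

# the density integrates to one, independent of the amplitude A
assert abs(classical_pdf_norm() - 1.0) < 1e-2
assert abs(classical_pdf_norm(2.0) - 1.0) < 1e-2
```

The midpoint rule is used (rather than trapezoid) so that the endpoints x = ±A, where the density diverges, are never evaluated.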
Notice that the denominator goes to zero at the classical turning points; in other words, the particle comes to a dead stop at the turning points, and consequently we have the greatest likelihood of finding the particle in these regions. Likewise in the quantum case, as we increase the quantum number, the quantum distribution function becomes more and more like its classical counterpart. This is shown in the last frames of Fig. 5.4, where we plot the same quantities for much higher quantum numbers. For the last case, where n = 19, the classical and quantum distributions are nearly identical. This is an example of the correspondence principle: as the quantum number increases, we expect the quantum system to look more and more like its classical counterpart.
Figure 5.3: Harmonic oscillator functions for n = 0 to 3.
Figure 5.4: Quantum and classical probability distribution functions for the harmonic oscillator for n = 0, 1, 2, 3, 4, 5, 9, 14, 19.
5.3.3 Molecular Vibrations
The fully quantum mechanical treatment of both the electronic and nuclear dynamics of even a diatomic molecule is a complicated affair. The reason for this is that we are forced to find the stationary states for a potentially large number of particles, all of which interact, constrained by a number of symmetry relations (such as the fact that no two electrons can be in the same state at the same time). In general, the exact solution of a many-body problem such as this is impossible. (In fact, believe it or not, it is rigorously impossible for even three classically interacting particles, although many have tried.) However, the mass of the electron is three to four orders of magnitude smaller than the mass of a typical nucleus. Thus, the typical velocities of the electrons are much larger than the typical nuclear velocities. We can then assume that the electronic cloud surrounding the nuclei will respond instantaneously to small and slow changes in the nuclear positions. Thus, to a very good approximation, we can separate the nuclear motion from the electronic motion. This separation of the nuclear and electronic motion is called the Born-Oppenheimer Approximation or the Adiabatic Approximation. This approximation is one of the MOST important concepts in chemical physics and is covered in more detail in Section 8.4.1.
The fundamental notion is that the nuclear motion of a molecule occurs in the average field of the electrons. In other words, the electronic charge distribution acts as an extremely complex multi-dimensional potential energy surface which governs the motion and dynamics of the atoms in a molecule. Consequently, since chemistry is the science of chemical structure, changes, and dynamics, nearly all chemical reactions can be described in terms of nuclear motion on one (or more) potential energy surface. In Fig. 5.5 is the London-Eyring-Polanyi-Sato (LEPS) [1] surface for the F + H_2 \to HF + H reaction using the Muckerman V set of parameters.[3] The LEPS surface is an empirical potential energy surface based upon the London-Heitler valence bond theory. Highly accurate potential functions are typically obtained by performing high level ab initio electronic structure calculations sampling over numerous configurations of the molecule.[2]
For diatomic molecules, the nuclear stretching potential can be approximated as a Morse potential curve

V(r) = D_e\left(1 - e^{-\beta(r - r_{eq})}\right)^2 - D_e   (5.139)

where D_e is the dissociation energy, \beta sets the range of the potential, and r_{eq} is the equilibrium bond length. The Morse potential for HF is shown in Fig. 5.6 and is parameterized by D_e = 591.1 kcal/mol, \beta = 2.2189 \AA^{-1}, and r_{eq} = 0.917 \AA.
Close to the very bottom of the potential well, where r - r_{eq} is small, the potential is nearly harmonic, and we can replace the nuclear Schrodinger equation with the HO equation by simply writing that the angular frequency is

\omega = \sqrt{\frac{V''(r_{eq})}{m}}.   (5.140)

So, measuring the vibrational spectrum of the well will give us the curvature of the well, since (E_n - E_m)/\hbar is always an integer multiple of \omega for harmonic systems. The red curve in Fig. 5.6 is a parabolic approximation for the bottom of the well:

V(r) \approx D_e\left(-1 + \beta^2(r - r_{eq})^2 + \ldots\right)   (5.141)
Figure 5.5: London-Eyring-Polanyi-Sato (LEPS) empirical potential for the F + H_2 \to FH + H chemical reaction, plotted as a function of r_{FH} and r_{HH}.
Figure 5.6: Morse well and harmonic approximation for HF (V in kcal/mol vs. r_{FH}).
Clearly, k = 2D_e\beta^2 is the force constant. So the harmonic frequency for the well is \omega = \sqrt{k/\mu}, where \mu is the reduced mass, \mu = m_1 m_2/(m_1 + m_2), and one would expect that the vibrational energy levels would be evenly spaced according to a harmonic progression. Deviations from this are due to anharmonic effects introduced by the inclusion of higher order terms in the Taylor expansion of the well. As one might expect, the harmonic expansion provides a decent estimate of the potential energy surface close to the equilibrium geometry.
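The curvature relation can be checked numerically: expanding the Morse form gives V''(r_{eq}) = 2D_e\beta^2 (note the factor of two). A Python sketch using the HF parameters quoted above, differentiating V(r) by central differences:

```python
import math

# HF Morse parameters quoted above (kcal/mol and Angstroms)
DE, BETA, REQ = 591.1, 2.2189, 0.917

def morse(r):
    return DE * (1.0 - math.exp(-BETA * (r - REQ))) ** 2 - DE

# central-difference second derivative at r_eq = numerical force constant
h = 1e-4
k_num = (morse(REQ + h) - 2 * morse(REQ) + morse(REQ - h)) / h**2

assert abs(morse(REQ) + DE) < 1e-9                              # well depth is -De
assert abs(k_num - 2 * DE * BETA**2) / (2 * DE * BETA**2) < 1e-4
```

With the quoted parameters the force constant comes out in kcal/(mol·Å²); converting to a vibrational frequency requires consistent mass and energy units.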
5.4 Numerical Solution of the Schrodinger Equation
5.4.1 Numerov Method
Clearly, finding bound state solutions for the Schrodinger equation is an important task. Unfortunately, we can only solve a few systems exactly. For the vast majority of systems which we cannot handle exactly, we need to turn to approximate means to find solutions. In later chapters, we will examine variational methods and perturbative methods. Here, we will look at a very simple scheme based upon propagating a trial solution at a given point. This method is called the Numerov approach, after the Russian astronomer who developed it. It can be used to solve any differential equation of the form:

f''(r) = u(r)f(r)   (5.142)

where u(r) is simply a function of r and f(r) is the solution we are looking for. For the Schrodinger equation we would write:

u(r) = \frac{2m}{\hbar^2}(V(r) - E).   (5.143)
Figure 5.7: Model potential for proton tunneling (V in cm^{-1} vs. x in bohr).
The basic procedure is to expand the second derivative as a finite difference, so that if we know the solution at one point, x_n, we can get the solution at a point a small distance h away, x_{n+1} = x_n + h:

f[n+1] = f[n] + hf'[n] + \frac{h^2}{2!}f''[n] + \frac{h^3}{3!}f'''[n] + \frac{h^4}{4!}f^{(4)}[n] + \ldots   (5.144)

f[n-1] = f[n] - hf'[n] + \frac{h^2}{2!}f''[n] - \frac{h^3}{3!}f'''[n] + \frac{h^4}{4!}f^{(4)}[n] - \ldots   (5.145)

If we combine these two equations and solve for f[n+1] we get the result:

f[n+1] = -f[n-1] + 2f[n] + f''[n]h^2 + \frac{h^4}{12}f^{(4)}[n] + O[h^6]   (5.146)
Since f is a solution to the Schrodinger equation, f'' = Gf, where

G = \frac{2m}{\hbar^2}(V - E),

we can get the second derivative of f very easily. However, for the higher order terms, we have to work a bit harder. So let's expand

f''[n+1] = 2f''[n] - f''[n-1] + h^2 f^{(4)}[n]

and truncate at order h^6. Now, solving for f^{(4)}[n] and substituting f'' = Gf, we get

f[n+1] = \frac{2f[n] - f[n-1] + \frac{h^2}{12}\left(G[n-1]f[n-1] + 10\,G[n]f[n]\right)}{1 - \frac{h^2}{12}G[n+1]},   (5.147)

which is the working equation.
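The working equation is easy to test in any language. Below is a minimal Python sketch that propagates f'' = G f for the harmonic oscillator at its exact ground-state energy; the units \hbar = m = \omega = 1 are an assumption made for the test, so G = 2(V - E) = x^2 - 1 and the exact solution is e^{-x^2/2}. Starting from the asymptotic tail and integrating inward is numerically stable, since the desired solution is the growing one in that direction.

```python
import math

def numerov(G, x0, h, nsteps, f0, f1):
    # propagate f'' = G(x) f with the working equation (5.147)
    xs = [x0, x0 + h]
    fs = [f0, f1]
    for _ in range(nsteps):
        g0, g1, g2 = G(xs[-2]), G(xs[-1]), G(xs[-1] + h)
        num = 2 * fs[-1] - fs[-2] + (h * h / 12) * (g0 * fs[-2] + 10 * g1 * fs[-1])
        fs.append(num / (1 - (h * h / 12) * g2))
        xs.append(xs[-1] + h)
    return xs, fs

# harmonic oscillator, hbar = m = omega = 1: G = 2(V - E) = x^2 - 1 at E = 1/2
G = lambda x: x * x - 1.0
h = 0.001
xs, fs = numerov(G, -6.0, h, 5999, math.exp(-18.0), math.exp(-0.5 * 5.999**2))
assert abs(xs[-1]) < 1e-9          # the grid lands at x = 0
assert abs(fs[-1] - 1.0) < 1e-3    # and recovers psi(0) = 1
```

Trying a slightly wrong energy in the same sketch reproduces the divergence behavior described below.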
Here we take the case of proton tunneling in a double well potential. The potential in this case is the V(x) = \beta(x^4 - x^2) function shown in Fig. 5.7. Here we have taken the parameter \beta = 0.1
Figure 5.8: Double well tunneling states as determined by the Numerov approach. On the left is the approximate lowest energy (symmetric) state with no nodes, and on the right is the next lowest (antisymmetric) state with a single node. The fact that the wavefunctions head off towards infinity indicates the introduction of an additional node coming in from x = \infty.
and m = 1836 (the proton mass), and use atomic units throughout. Also shown in Fig. 5.7 are effective harmonic oscillator wells for each side of the barrier. Notice that the harmonic approximation is pretty crude, since the harmonic well tends to overestimate the steepness of the inner portion and underestimate the steepness of the outer portions. Nonetheless, we can use the harmonic oscillator ground states in each well as starting points.
To use the Numerov method, one starts by guessing an initial energy, E, and then propagating a trial solution to the Schrodinger equation. The curve you obtain is in fact a solution to the equation, but it will usually not obey the correct boundary conditions. For bound states, the boundary condition is that \psi must vanish exponentially outside the well. So, we initialize the method by forcing \psi[1] to be exactly 0 and \psi[2] to be some small number. The exact values really make no difference. If we are off by a bit, the Numerov wave will diverge towards \pm\infty as x increases. As we close in on a physically acceptable solution, the Numerov solution will begin to exhibit the correct asymptotic behavior for a while before diverging. We know we have hit upon an eigenstate when the divergence flips from +\infty to -\infty or vice versa, signaling the presence of an additional node in the wavefunction. The procedure then is to back up in energy a bit, change the energy step, and gradually narrow in on the exact energy. In Figs. 5.8a and 5.8b are the results of a Numerov search for the lowest two states in the double well potential: one at -3946.59 cm^{-1} and the other at -3943.75 cm^{-1}. Notice that the lowest energy state is symmetric about the origin and the next state is antisymmetric about the origin. In both cases the Numerov function diverges, since we are not precisely at a stationary solution of the Schrodinger equation, but we are within 0.01 cm^{-1} of the true eigenvalue.
The advantage of the Numerov method is that it is really easy to code. In fact, you can even code it in Excel. Another advantage is that for radial scattering problems, the outgoing boundary conditions occur naturally, making it a method of choice for simple scattering problems. In the Mathematica notebooks, I show how one can use the Numerov method to compute scattering phase shifts and locate resonances for atomic collisions. The disadvantage is that you have to search by hand for the eigenvalues, which can be extremely tedious.
5.4.2 Numerical Diagonalization
A more general approach is based upon the variational principle (which we will discuss later) and the use of matrix representations. If we express the Hamiltonian operator in matrix form in some suitable basis, then the eigenfunctions of H can also be expressed as linear combinations of those basis functions, subject to the constraint that the eigenfunctions be orthonormal. So, what we do is write:

\langle\phi_n|H|\phi_m\rangle = H_{nm}

and

|\psi_j\rangle = \sum_n \langle\phi_n|\psi_j\rangle\,|\phi_n\rangle.

The \langle\phi_n|\psi_j\rangle coefficients are also elements of a matrix, T_{nj}, which transforms a vector in the \phi basis to the \psi basis. Consequently, there is a one-to-one relation between the number of basis functions in the \phi basis and the number of basis functions in the \psi basis.
If |\psi_j\rangle is an eigenstate of H, then

H|\psi_j\rangle = E_j|\psi_j\rangle.

Multiplying by \langle\phi_m| and resolving the identity,

\sum_n \langle\phi_m|H|\phi_n\rangle\langle\phi_n|\psi_j\rangle = \langle\phi_m|\psi_j\rangle E_j

\sum_n H_{mn}T_{nj} = E_j T_{mj}   (5.148)

Thus,

\sum_{mn} T^*_{mj} H_{mn} T_{nj} = E_j   (5.149)

or, in more compact form,

T^\dagger H T = E,

where E is the diagonal matrix of eigenvalues. In other words, the T-matrix is simply the matrix which brings H to diagonal form.
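As a minimal illustration of bringing H to diagonal form, a symmetric 2x2 matrix needs only a single Jacobi rotation. The Python sketch below uses a hypothetical matrix chosen for illustration; the rotation angle comes from tan(2\theta) = 2H_{01}/(H_{00} - H_{11}):

```python
import math

# a hypothetical symmetric 2x2 "Hamiltonian" with eigenvalues 1 and 3
H = [[2.0, 1.0], [1.0, 2.0]]

# Jacobi rotation angle that zeroes the off-diagonal element
theta = 0.5 * math.atan2(2 * H[0][1], H[0][0] - H[1][1])
c, s = math.cos(theta), math.sin(theta)
T = [[c, -s], [s, c]]

def congruence(T, H):
    # compute (T^T H T)[i][j] explicitly
    n = 2
    HT = [[sum(H[i][k] * T[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[sum(T[k][i] * HT[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D = congruence(T, H)
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12   # off-diagonals vanish
assert abs(D[0][0] - 3.0) < 1e-12 and abs(D[1][1] - 1.0) < 1e-12
```

The general-purpose routines mentioned below do essentially this, iterated over all off-diagonal pairs (or via more sophisticated reductions).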
Diagonalizing a matrix by hand is very tedious for anything beyond a 3x3 matrix. Since this is an extremely common numerical task, there are some very powerful numerical diagonalization routines available. Most of the common ones are in the Lapack package and are included as part of the Mathematica kernel. So, all we need to do is to pick a basis, cast our Hamiltonian into that basis, truncate the basis (usually determined by some energy cut-off), and diagonalize away. Usually the diagonalization part is the most time consuming. Of course, you have to be prudent in choosing your basis.
A useful set of basis functions are the trigonometric forms of the Tchebychev polynomials.^1 These are a set of orthogonal functions which obey the following recurrence relation:

T_{n+1}(x) - 2xT_n(x) + T_{n-1}(x) = 0   (5.150)
Figure 5.9: Tchebychev polynomials for n = 1 to 5.
Table 5.1: Tchebychev polynomials of the first type

T_0 = 1
T_1 = x
T_2 = 2x^2 - 1
T_3 = 4x^3 - 3x
T_4 = 8x^4 - 8x^2 + 1
T_5 = 16x^5 - 20x^3 + 5x
Table 5.1 lists the first few of these polynomials as functions of x, and a few of them are plotted in Fig. 5.9.
It is important to realize that these functions are orthogonal on a finite range and that integrals over these functions must include a weighting function w(x) = 1/\sqrt{1 - x^2}. The orthogonality relation for the T_n polynomials is

\int_{-1}^{+1} T_m(x)T_n(x)w(x)dx =
\begin{cases} 0 & m \neq n \\ \pi/2 & m = n \neq 0 \\ \pi & m = n = 0 \end{cases}   (5.151)

Arfken's Mathematical Methods for Physicists has a pretty complete overview of these special functions as well as many others. As usual, these are incorporated into the kernel of Mathematica, and the Mathematica book and on-line help pages have some useful information regarding these functions as well as a plethora of other functions.
From the recurrence relation it is easy to show that the T_n(x) polynomials satisfy the differential equation:

(1 - x^2)T_n'' - xT_n' + n^2 T_n = 0   (5.152)

If we make a change of variables x = \cos\theta and dx = -\sin\theta\,d\theta, then the differential equation reads

\frac{d^2 T_n}{d\theta^2} + n^2 T_n = 0.   (5.153)

This is a harmonic oscillator equation and has solutions \sin n\theta and \cos n\theta. From the boundary conditions we have two linearly independent solutions,

T_n = \cos n\theta = \cos(n \arccos x)

and

V_n = \sin n\theta.
The normalization condition then becomes:

\int_{-1}^{+1} T_m(x)T_n(x)w(x)dx = \int_0^{\pi}\cos(m\theta)\cos(n\theta)d\theta   (5.154)

and

\int_{-1}^{+1} V_m(x)V_n(x)w(x)dx = \int_0^{\pi}\sin(m\theta)\sin(n\theta)d\theta,   (5.155)

which is precisely the normalization integral we perform for the particle in a box, assuming the width of the box is \pi. For more generic applications, we can scale \theta and its range to any range.

^1 There are at least 10 ways to spell Tchebychev's last name. Tchebychev, Tchebyshev, and Chebyshev are the most common, as well as Tchebysheff, Tchebycheff, Chebysheff, Chevychef, ...
The way we use this is to take the \phi_n = N\sin nx functions as a finite basis and truncate any expansion in this basis at some point. For example, since we are usually interested in low lying energy states, setting an energy cut-off on the basis is exactly equivalent to keeping only the lowest n_{cut} states. The kinetic energy part of the Hamiltonian is diagonal in this basis, so we get that part for free. However, the potential energy part is not diagonal in the \phi_n = N\sin nx basis, so we have to compute its matrix elements:

V_{nm} = \int \phi_n(x)V(x)\phi_m(x)dx   (5.156)
To calculate this integral, let us first realize that [V, x] = 0, so the eigenstates of x are also eigenstates of the potential. Taking matrix elements of x in the finite basis,

x_{nm} = N^2\int \phi_n(x)\,x\,\phi_m(x)dx,

and diagonalizing it yields a finite set of position eigenvalues, x_i, and a transformation for converting between the position representation and the basis representation,

T_{in} = \langle x_i|\phi_n\rangle,

which is simply a matrix of the basis functions evaluated at each of the eigenvalues. The special set of points defined by the eigenvalues of the position operator are the Gaussian quadrature points over some finite range.
This procedure, termed the discrete variable representation, was developed by Light and coworkers in the 1980s and is a very powerful way to generate coordinate representations of Hamiltonian matrices. Any matrix in the basis representation (termed the FBR, for finite basis representation) can be transformed to the discrete variable representation (DVR) via the transformation matrix T. Moreover, there is a one-to-one correspondence between the number of DVR points and the number of FBR basis functions. Here we have used only the Tchebychev functions; one can generate DVRs for any set of orthogonal polynomial functions. The Mathematica code below generates the required transformations, the points, the eigenvalues of the second-derivative operator, and a set of quadrature weights for the Tchebychev sine functions over a specified range:
dv2fb[DVR_, T_] := T.DVR.Transpose[T];
fb2dv[FBR_, T_] := Transpose[T].FBR.T;
tcheby[npts_, xmin_, xmax_] := Module[{pts, T, fbrke, w, del},
   del = xmax - xmin;
   pts = Table[i*del*(1/(npts + 1)) + xmin, {i, npts}] // N;
   fbrke = Table[(i*(Pi/del))^2, {i, npts}] // N;
   w = Table[del/(npts + 1), {i, npts}] // N;
   T = Table[
      Sqrt[2.0/(npts + 1)]*Sin[(i*j)*Pi/(npts + 1)],
      {i, npts}, {j, npts}] // N;
   Return[{pts, T, fbrke, w}]
   ]
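The key property underlying the FBR <-> DVR transformation is that the sine matrix T built above is orthogonal (T.Tᵀ = 1), so the transformation is exact rather than approximate. A rough Python translation checking this (the grid size of 20 is arbitrary):

```python
import math

def sine_dvr_T(npts):
    # T[i][j] = sqrt(2/(npts+1)) * sin(i*j*pi/(npts+1)), as in the module above
    s = math.sqrt(2.0 / (npts + 1))
    return [[s * math.sin((i * j) * math.pi / (npts + 1))
             for j in range(1, npts + 1)] for i in range(1, npts + 1)]

def matmul_t(A):
    # compute A . A^T
    n = len(A)
    return [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = sine_dvr_T(20)
I = matmul_t(T)
for i in range(20):
    for j in range(20):
        assert abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-10
```

Because T is orthogonal, the DVR Hamiltonian has exactly the same spectrum as the FBR Hamiltonian it came from.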
To use this, we first define a potential surface, set up the Hamiltonian matrix, and simply diagonalize. For this example, we will take the same double well system described above and compare results and timings.
V[x_] := a*(x^4 - x^2);
cmm = 8064*27.3;  (* hartree to cm^-1 conversion *)
params = {a -> 0.1, m -> 1836};
{x, T, K, w} = tcheby[100, -1.3, 1.3];
Kdvr = fb2dv[DiagonalMatrix[K], T]/(2 m) /. params; (* kinetic energy = k^2/(2m) in atomic units *)
Vdvr = DiagonalMatrix[V[x]] /. params;
Hdvr = Kdvr + Vdvr;
tt = Timing[{w, psi} = Transpose[
      Sort[Transpose[Eigensystem[Hdvr]]]]];
Print[tt]
(Select[w*cmm, (# < 3000) &]) // TableForm
This code sets up the DVR points x, the transformation T, and the FBR eigenvalues K using the tcheby[npts, xmin, xmax] Mathematica module defined above. We then generate the kinetic energy matrix in the DVR using the transformation

K_{DVR} = T^\dagger K_{FBR} T

and form the DVR Hamiltonian

H_{DVR} = K_{DVR} + V_{DVR}.

The eigenvalues and eigenvectors are computed via the Eigensystem[] routine. These are then sorted according to their energy. Finally, we print out only those states with energy less than 3000 cm^{-1} and check how long it took. On my 300 MHz G3 laptop, this took 0.3333 seconds to complete. The first few of these eigenvalues are shown in Table 5.2 below. For comparison, each Numerov iteration took roughly 1 second for each trial function. Even then, the eigenvalues we found there are probably not as accurate as those computed here.
Table 5.2: Eigenvalues for the double well potential computed via DVR and Numerov approaches

i    \epsilon_i (cm^{-1})    Numerov
1    -3946.574               -3946.59
2    -3943.7354              -3943.75
3    -1247.0974
4    -1093.5204
5      591.366
6     1617.424
5.5 Problems and Exercises
Exercise 5.2 Consider a harmonic oscillator of mass m and angular frequency \omega. At time t = 0, the state of this system is given by

|\psi(0)\rangle = \sum_n c_n|\phi_n\rangle   (5.157)

where the states |\phi_n\rangle are stationary states with energy E_n = (n + 1/2)\hbar\omega.

1. What is the probability, P, that a measurement of the energy of the oscillator at some later time will yield a result greater than 2\hbar\omega? When P = 0, what are the non-zero coefficients, c_n?

2. From now on, let only c_o and c_1 be non-zero. Write the normalization condition for |\psi(0)\rangle and the mean value \langle H\rangle of the energy in terms of c_o and c_1. With the additional requirement that \langle H\rangle = \hbar\omega, calculate |c_o|^2 and |c_1|^2.

3. As the normalized state vector |\psi\rangle is defined only to within an arbitrary global phase factor, we can fix this factor by setting c_o to be real and positive. We set c_1 = |c_1|e^{i\theta}. We assume also that \langle H\rangle = \hbar\omega and that

\langle x\rangle = \frac{1}{2}\sqrt{\frac{\hbar}{m\omega}}.   (5.158)

Calculate \theta.

4. With |\psi\rangle so determined, write |\psi(t)\rangle for t > 0 and calculate the value of \theta at time t. Deduce the mean value \langle x\rangle(t) of the position at time t.
Exercise 5.3 Find \langle x\rangle, \langle p\rangle, \langle x^2\rangle, and \langle p^2\rangle for the ground state of a simple harmonic oscillator. What is the uncertainty relation for the ground state?
Exercise 5.4 In this problem we consider the interaction between a molecule adsorbed on a surface and the surface phonons. Represent the vibrational motion of the molecule (with reduced mass \mu) as harmonic with force constant K,

H_o = -\frac{\hbar^2}{2\mu}\frac{\partial^2}{\partial x^2} + \frac{K}{2}x^2,   (5.159)

and the coupling to the phonons as

H' = x\sum_k V_k \cos(\omega_k t)   (5.160)

where V_k is the coupling between the molecule and a phonon of wavevector k and frequency \omega_k.

1. Express the total Hamiltonian as a displaced harmonic well. What happens to the well as a function of time?

2. What is the Golden-Rule transition rate between the ground state and the nth excited state of the system due to phonon interactions? Are there any restrictions as to which final state can be reached? Which phonons are responsible for this process?

3. From now on, let the perturbing force be constant in time,

H' = x\sum_k V_k,   (5.161)

where V_k is the interaction with a phonon of wavevector k. Use the lowest order of perturbation theory necessary to construct the transition probability between the ground state and the second excited state.
Exercise 5.5 Let

X = \left(\frac{m\omega}{2\hbar}\right)^{1/2} x   (5.162)

P = \left(\frac{1}{2\hbar m\omega}\right)^{1/2} p.   (5.163)

Show that the harmonic oscillator Hamiltonian is

H = \hbar\omega(P^2 + X^2).   (5.164)

Now, define the operator a^\dagger = X - iP. Show that a^\dagger acting on the harmonic oscillator ground state is also an eigenstate of H. What is the energy of this state? Use a^\dagger to define a generating relationship for all the eigenstates of H.
Exercise 5.6 Show that if one expands an arbitrary potential, V(x), about its minimum at x_{min}, and neglects terms of order x^3 and above, one always obtains a harmonic well. Show that a harmonic oscillator subject to a linear perturbation can be expressed as an unperturbed harmonic oscillator shifted from the origin.
Exercise 5.7 Consider the one-dimensional Schrodinger equation with potential

V(x) = \begin{cases} \frac{m\omega^2}{2}x^2 & x > 0 \\ +\infty & x \le 0 \end{cases}   (5.165)

Find the energy eigenvalues and wavefunctions.
Exercise 5.8 An electron is contained inside a hard sphere of radius R. The radial components of the lowest S and P state wavefunctions are approximately

\psi_S(r) \propto \frac{\sin(kr)}{kr}   (5.166)

\psi_P(r) \propto \frac{\cos(kr)}{kr} - \frac{\sin(kr)}{(kr)^2} = \frac{\partial\psi_S}{\partial(kr)}.   (5.167)

1. What boundary conditions must each state obey?

2. Using E = k^2\hbar^2/(2m) and the above boundary conditions, what are the energies of each state?

3. What is the pressure exerted on the surface of the sphere if the electron is in a.) the S state, b.) the P state? (Hint: recall from thermodynamics, dW = -PdV = -(dE(R)/dR)dR.)

4. For a solvated electron in water, the S to P energy gap is about 1.7 eV. Estimate the size of the hard-sphere radius for the aqueous electron. If the ground state is fully solvated, the pressure of the solvent on the electron must equal the pressure of the electron on the solvent. What happens to the system when the electron is excited to the P state from the equilibrated S state? What happens to the energy gap between the S and P states as a result of this?
Exercise 5.9 A particle moves in a three dimensional potential well of the form:

V(x) = \begin{cases} \infty & z^2 > a^2 \\ \frac{m\omega^2}{2}(x^2 + y^2) & \text{otherwise} \end{cases}   (5.168)

Obtain an equation for the eigenvalues and the associated eigenfunctions.
Exercise 5.10 A particle moving in one dimension has a ground state wavefunction (not normalized) of the form

\psi_o(x) = e^{-\alpha^4 x^4/4}   (5.169)

where \alpha is a real constant, with eigenvalue E_o = \hbar^2\alpha^2/m. Determine the potential in which the particle moves. (You do not have to determine the normalization.)
Exercise 5.11 A two dimensional oscillator has the Hamiltonian

H = \frac{1}{2}(p_x^2 + p_y^2) + \frac{1}{2}(1 + \delta xy)(x^2 + y^2)   (5.170)

where \hbar = 1 and \delta \ll 1. Give the wavefunctions for the three lowest energy levels when \delta = 0. Evaluate, using first order perturbation theory, these energy levels for \delta \neq 0.
Exercise 5.12 For the ground state of a harmonic oscillator, find the average potential and kinetic energy. Verify that the virial theorem, \langle T\rangle = \langle V\rangle, holds in this case.
Exercise 5.13 In this problem we will explore the use of two computational tools to calculate the vibrational eigenstates of molecules. The Spartan structure package is available on one of the PCs in the computer lab in the basement of Fleming. First, we will generate a potential energy surface using ab initio methods. We will then fit that surface to a functional form and try to calculate the vibrational states. The system we will consider is the ammonia molecule undergoing inversion from its degenerate $C_{3v}$ configuration through a $D_{3h}$ transition state.
Figure 5.10: Ammonia Inversion and Tunneling
1. Using the Spartan electronic structure package (or any other one you have access to), build a model of NH$_3$ and determine its ground state geometry using various levels of ab initio theory. Make a table of N–H bond lengths and $\theta = \angle$H–N–H bond angles for the equilibrium geometries as a function of at least 2 or 3 different basis sets. Looking in the literature, find experimental values for the equilibrium configuration. Which method comes closest to the experimental values? Which method has the lowest energy for its equilibrium configuration?

2. Using the method which you deemed best in part 1, repeat the calculations you performed above by systematically constraining the H–N–H bond angle to sample configurations around the equilibrium configuration and up to the planar $D_{3h}$ configuration. Note: it may be best to constrain two H–N–H angles and then optimize the bond lengths. Sample enough points on either side of the minimum to get a decent potential curve. This is your Born-Oppenheimer potential as a function of $\theta$.
3. Defining the origin of a coordinate system to be the $\theta = 120^\circ$ $D_{3h}$ point on the surface, fit your ab initio data to the "W-potential"

$$V(x) = \alpha x^2 + \beta x^4 \qquad (5.171)$$

What are the theoretical values of $\alpha$ and $\beta$?

4. We will now use perturbation theory to compute the tunneling dynamics.

(a) Show that the points of minimum potential energy are at

$$x_{min} = \pm\left(\frac{-\alpha}{2\beta}\right)^{1/2}$$

(note that a double-well fit requires $\alpha < 0$ and $\beta > 0$), and that the energy difference between the top of the barrier and the minimum energy is given by

$$\Delta V = V(0) - V(x_{min}) \qquad (5.172)$$
$$= \frac{\alpha^2}{4\beta} \qquad (5.173)$$
(b) We will first consider the barrier to be infinitely high so that we can expand the potential function around each $x_{min}$. Show that by truncating the Taylor series expansion above the $(x - x_{min})^2$ terms, the potentials for the left- and right-hand sides are given by

$$V_L = -2\alpha(x + x_{min})^2 - \Delta V$$

and

$$V_R = -2\alpha(x - x_{min})^2 - \Delta V.$$

What are the vibrational energy levels for each well?

(c) The wavefunctions for the lowest energy states in each well are given by

$$\phi_{L,R}(x) = \frac{\gamma^{1/2}}{\pi^{1/4}}\exp\left[-\frac{\gamma^2}{2}(x \pm x_{min})^2\right]$$

(upper sign for the left well, lower sign for the right well) with

$$\gamma = \left(\frac{(-4\alpha\mu)^{1/2}}{\hbar}\right)^{1/2}.$$

The energy levels for both sides are degenerate in the limit that the barrier height is infinite. The total ground state wavefunction for this case is

$$\psi_\pm(x) = \frac{1}{\sqrt{2}}\left(\phi_L(x) \pm \phi_R(x)\right).$$
However, as the barrier height decreases, the degenerate states begin to mix, causing the energy levels to split. Define the high-barrier Hamiltonian as

$$H = -\frac{\hbar^2}{2\mu}\frac{\partial^2}{\partial x^2} + V_L(x)$$

for $x < 0$ and

$$H = -\frac{\hbar^2}{2\mu}\frac{\partial^2}{\partial x^2} + V_R(x)$$

for $x > 0$. Calculate the matrix elements of $H$ which mix the two degenerate left- and right-hand ground state wavefunctions, i.e.

$$\langle\phi|H|\phi\rangle = \begin{pmatrix} H_{RR} & H_{LR} \\ H_{RL} & H_{LL} \end{pmatrix}$$

where $H_{RR} = \langle\phi_R|H|\phi_R\rangle$, with similar definitions for $H_{RL}$, $H_{LL}$, and $H_{LR}$. Obtain numerical values of each matrix element (in cm$^{-1}$) using the values of $\alpha$ and $\beta$ you determined above. Use the mass of an H atom for the reduced mass $\mu$.
(d) Since the $\phi_L$ and $\phi_R$ basis functions are non-orthogonal, you will need to consider the overlap matrix, $S$, when computing the eigenvalues of $H$. The eigenvalues for this system can be determined by solving the secular equation

$$\begin{vmatrix} \alpha - E & \beta - ES \\ \beta - ES & \alpha - E \end{vmatrix} = 0 \qquad (5.174)$$

where $\alpha = H_{RR} = H_{LL}$ and $\beta = H_{LR} = H_{RL}$ (not to be confused with the potential parameters above). Using Eq. 5.174, solve for $E$ and determine the energy splitting in the ground state as a function of the unperturbed harmonic frequency and the barrier height, $\Delta V$. Calculate this splitting using the parameters you computed above. What is the tunneling frequency? The experimental result is $\Delta E = 0.794\ \mathrm{cm}^{-1}$.²
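As a check on the algebra in part (d): Eq. 5.174 is a $2\times 2$ generalized eigenvalue problem, $Hc = ESc$, whose closed-form roots are $E_\pm = (\alpha \pm \beta)/(1 \pm S)$. The sketch below (not part of the exercise) verifies this numerically; the numbers used for $\alpha$, $\beta$, and the overlap $s$ are purely illustrative placeholders, not the values you should obtain from your fit:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative (hypothetical) secular-equation parameters, in cm^-1:
alpha, beta, s = -1000.0, -50.0, 0.1

H = np.array([[alpha, beta], [beta, alpha]])
S = np.array([[1.0, s], [s, 1.0]])

E = eigh(H, S, eigvals_only=True)   # solves H c = E S c, ascending order
E_analytic = np.sort([(alpha + beta) / (1 + s), (alpha - beta) / (1 - s)])

print(E, E_analytic)                # identical
print("splitting:", E[1] - E[0])
```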
Exercise 5.14 Consider a system in which the Lagrangian is given by

$$L(q_i, \dot q_i) = T(q_i, \dot q_i) - V(q_i) \qquad (5.175)$$

where we assume $T$ is quadratic in the velocities. The potential is independent of the velocity, and neither $T$ nor $V$ carries any explicit time dependency. Show that

$$\frac{d}{dt}\left(\sum_j \dot q_j \frac{\partial L}{\partial \dot q_j} - L\right) = 0.$$

The constant quantity in the $(\ldots)$ defines a Hamiltonian, $H$. Show that under the assumed conditions, $H = T + V$.
Exercise 5.15 The Fermat principle in optics states that a light ray will follow the path $y(x)$ which minimizes its optical length, $S$, through a medium,

$$S = \int_{x_1,y_1}^{x_2,y_2} n(y,x)\,ds$$

where $n$ is the index of refraction. For $y_2 = y_1 = 1$ and $x_1 = -x_2 = 1$, find the ray path for

1. $n = \exp(y)$

2. $n = a(y - y_o)$ for $y > y_o$.

Make plots of each of these trajectories.
Exercise 5.16 In a quantum mechanical system there are $g_i$ distinct quantum states between energy $E_i$ and $E_i + dE_i$. In this problem we will use the variational principle and Lagrange multipliers to determine how $n_i$ particles are distributed amongst these states, subject to the constraints:

1. The number of particles is fixed:

$$n = \sum_i n_i$$
² From Molecular Structure and Dynamics, by W. Flygare (Prentice Hall, 1978).
2. The total energy is fixed:

$$\sum_i n_i E_i = E$$

We consider two cases:

1. For identical particles obeying the Pauli exclusion principle, the probability of a given configuration is

$$W_{FD} = \prod_i \frac{g_i!}{n_i!\,(g_i - n_i)!} \qquad (5.176)$$

Show that maximizing $W_{FD}$ subject to the constraints above leads to

$$n_i = \frac{g_i}{e^{\lambda_1 + \lambda_2 E_i} + 1}$$

with the Lagrange multipliers $\lambda_1 = -E_0/kT$ and $\lambda_2 = 1/kT$. Hint: try working with $\log W$ and use Stirling's approximation in the limit of a large number of particles.
2. In this case we still consider identical particles, but relax the restriction on the fixed number of particles in a given state. The probability for a given distribution is then

$$W_{BE} = \prod_i \frac{(n_i + g_i - 1)!}{n_i!\,(g_i - 1)!}.$$

Show that maximizing $W_{BE}$ subject to the constraints above leads to the occupation numbers

$$n_i = \frac{g_i}{e^{\lambda_1 + \lambda_2 E_i} - 1}$$

where again the Lagrange multipliers are $\lambda_1 = -E_0/kT$ and $\lambda_2 = 1/kT$. This yields the Bose-Einstein statistics. Note: assume that $g_i \gg 1$.

3. Photons satisfy the Bose-Einstein distribution and the constraint that the total energy is constant. However, there is no constraint regarding the total number of photons. Show that eliminating the fixed-number constraint leads to the foregoing result with $\lambda_1 = 0$.
Chapter 6
Quantum Mechanics in 3D
In the next few lectures, we will focus upon one particular symmetry, the isotropy of free space. As a collection of particles rotates about an arbitrary axis, the Hamiltonian does not change. If the Hamiltonian does in fact depend explicitly upon the choice of axis, the system is gauged, meaning all measurements will depend upon how we set up the coordinate frame. A Hamiltonian with a potential function which depends only upon the coordinates, e.g. $V = f(x, y, z)$, is gauge invariant, meaning any measurement that I make will not depend upon my choice of reference frame. On the other hand, if our Hamiltonian contains terms which couple one reference frame to another (as in the case of non-rigid body rotations), we have to be careful in how we select the gauge. While this sounds like a fairly specialized case, it turns out that many ordinary phenomena depend upon this, e.g. figure skaters, falling cats, floppy molecules. We focus upon rigid body rotations first.

For further insight and information into the quantum mechanics of angular momentum, I recommend the following texts and references:

1. Theory of Atomic Structure, E. Condon and G. Shortley. This is the classic book on atomic physics and the theory of atomic spectroscopy and has inspired generations since it came out in 1935.

2. Angular Momentum: Understanding Spatial Aspects in Chemistry and Physics, R. N. Zare. This book is the text for the second-semester quantum mechanics course at Stanford taught by Zare (when he's not out looking for Martians). It's a great book with loads of examples in spectroscopy.

3. Quantum Theory of Angular Momentum, D. A. Varshalovich, A. Moskalev, and V. Khersonskii. Not too much physics in this book, but if you need to know some relation between Wigner D-functions and Racah coefficients, or how to derive 12j symbols, this book is for you.

First, we need to look at what happens to a Hamiltonian under rotation. In order to show that $H$ is invariant to any rotation, we need only show that it is invariant under an infinitesimal rotation.
6.1 Quantum Theory of Rotations

Let $\delta\vec\phi$ be a vector of a small rotation, equal in magnitude to the angle $\delta\phi$, directed along an arbitrary axis. Rotating the system by $\delta\vec\phi$ changes the direction vectors $\mathbf{r}_a$ by $\delta\mathbf{r}_a$:

$$\delta\mathbf{r}_a = \delta\vec\phi \times \mathbf{r}_a \qquad (6.1)$$

Note that the $\times$ denotes the vector cross product. Since we will be using cross products throughout these lectures, we pause to review the operation.

A cross product between two vectors is computed as

$$\mathbf{c} = \mathbf{a}\times\mathbf{b} = \begin{vmatrix} \hat i & \hat j & \hat k \\ a_i & a_j & a_k \\ b_i & b_j & b_k \end{vmatrix} = \hat i(a_j b_k - b_j a_k) - \hat j(a_i b_k - b_i a_k) + \hat k(a_i b_j - b_i a_j) = \epsilon_{ijk}\, a_j b_k \qquad (6.2)$$

where $\epsilon_{ijk}$ is the Levi-Civita symbol, or antisymmetric unit tensor, defined as

$$\epsilon_{ijk} = \begin{cases} 0 & \text{if any of the indices are the same} \\ +1 & \text{for even permutations of the indices} \\ -1 & \text{for odd permutations of the indices} \end{cases} \qquad (6.3)$$

(Note that we have also assumed a summation convention whereby we sum over all repeated indices. Some elementary properties are $\epsilon_{ikl}\epsilon_{ikm} = 2\delta_{lm}$ and $\epsilon_{ikl}\epsilon_{ikl} = 6$.)
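As a quick numerical sanity check on the index form of the cross product and the contraction identities above (this aside is not part of the notes; it assumes NumPy is available):

```python
import numpy as np

# Build the Levi-Civita tensor epsilon_{ijk} explicitly.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

# c_i = epsilon_{ijk} a_j b_k  (sum over repeated indices)
c = np.einsum('ijk,j,k->i', eps, a, b)
print(c, np.cross(a, b))                   # identical vectors

print(np.einsum('ikl,ikm->lm', eps, eps))  # 2 * identity, i.e. 2*delta_{lm}
print(np.einsum('ikl,ikl->', eps, eps))    # 6
```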
So, an arbitrary function $\psi(\mathbf{r}_1, \mathbf{r}_2, \ldots)$ is transformed by the rotation into:

$$\psi_1(\mathbf{r}_1 + \delta\mathbf{r}_1, \mathbf{r}_2 + \delta\mathbf{r}_2, \ldots) = \psi(\mathbf{r}_1,\mathbf{r}_2,\ldots) + \sum_a \delta\mathbf{r}_a\cdot\nabla_a\psi = \psi(\mathbf{r}_1,\mathbf{r}_2,\ldots) + \sum_a (\delta\vec\phi\times\mathbf{r}_a)\cdot\nabla_a\psi = \left(1 + \delta\vec\phi\cdot\sum_a \mathbf{r}_a\times\nabla_a\right)\psi \qquad (6.4)$$

Thus, we conclude that the operator

$$1 + \delta\vec\phi\cdot\sum_a \mathbf{r}_a\times\nabla_a \qquad (6.5)$$

is the operator for an infinitesimal rotation of a system of particles. Since $\delta\phi$ is a constant, we can show that this operator commutes with the Hamiltonian,

$$\left[\sum_a \mathbf{r}_a\times\nabla_a,\ H\right] = 0 \qquad (6.6)$$

This implies a particular conservation law related to the isotropy of space. This is of course angular momentum, so that

$$\sum_a \mathbf{r}_a\times\nabla_a \qquad (6.7)$$

must be at least proportional to the angular momentum operator, $\mathbf{L}$. The exact relation is

$$\hbar\mathbf{L} = \mathbf{r}\times\mathbf{p} = -i\hbar\,\mathbf{r}\times\nabla \qquad (6.8)$$
which is much like its classical counterpart,

$$\mathbf{L}_{cl} = m\,\mathbf{r}\times\mathbf{v}. \qquad (6.9)$$

The operator is of course a vector quantity, meaning that it has direction. The components of the angular momentum vector are:

$$\hbar L_x = y p_z - z p_y \qquad (6.10)$$
$$\hbar L_y = z p_x - x p_z \qquad (6.11)$$
$$\hbar L_z = x p_y - y p_x \qquad (6.12)$$
$$\hbar L_i = \epsilon_{ijk}\, x_j p_k \qquad (6.13)$$

For a system in an external field, angular momentum is in general not conserved. However, if the field possesses spherical symmetry about a central point, all directions in space are equivalent and angular momentum about this point is conserved. Likewise, in an axially symmetric field, motion about the axis is conserved. In fact, all the conservation laws which apply in classical mechanics have quantum mechanical analogues.
We now move on to compute the commutation rules between the $L_i$ operators and the $x$ and $p$ operators. First we note:

$$[L_x, x] = [L_y, y] = [L_z, z] = 0 \qquad (6.14)$$

$$[L_x, y] = \frac{1}{\hbar}\left((y p_z - z p_y)y - y(y p_z - z p_y)\right) = -\frac{z}{\hbar}[p_y, y] = iz \qquad (6.15)$$

In shorthand:

$$[L_i, x_k] = i\epsilon_{ikl}\, x_l \qquad (6.16)$$
We also need to know how the various components commute with one another:

$$\hbar[L_x, L_y] = L_x(z p_x - x p_z) - (z p_x - x p_z)L_x \qquad (6.17)$$
$$= (L_x z - z L_x)p_x - x(L_x p_z - p_z L_x) \qquad (6.18)$$
$$= -iy p_x + ix p_y \qquad (6.19)$$
$$= i\hbar L_z \qquad (6.20)$$

which we can summarize as

$$[L_y, L_z] = iL_x \qquad (6.21)$$
$$[L_z, L_x] = iL_y \qquad (6.22)$$
$$[L_x, L_y] = iL_z \qquad (6.23)$$

or, compactly,

$$[L_i, L_j] = i\epsilon_{ijk} L_k \qquad (6.24)$$
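The commutation relation Eq. 6.24 can be verified symbolically by applying the differential forms of the operators (with $\hbar = 1$, so $L_i = -i\,\epsilon_{ijk}x_j\partial_k$) to an arbitrary test function. A SymPy sketch (an aside, assuming the library is available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# Angular momentum as differential operators (hbar = 1):
Lx = lambda g: -sp.I * (y * sp.diff(g, z) - z * sp.diff(g, y))
Ly = lambda g: -sp.I * (z * sp.diff(g, x) - x * sp.diff(g, z))
Lz = lambda g: -sp.I * (x * sp.diff(g, y) - y * sp.diff(g, x))

# [Lx, Ly] f - i Lz f should vanish identically for arbitrary f
comm = sp.simplify(Lx(Ly(f)) - Ly(Lx(f)) - sp.I * Lz(f))
print(comm)  # 0
```

The mixed second derivatives cancel, leaving exactly $i(x\,\partial_y - y\,\partial_x)f = iL_z f$.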
Now, denote the square of the modulus of the total angular momentum by $L^2$, where

$$L^2 = L_x^2 + L_y^2 + L_z^2 \qquad (6.25)$$

Notice that this operator commutes with all the other $L_j$ operators,

$$[L^2, L_x] = [L^2, L_y] = [L^2, L_z] = 0 \qquad (6.26)$$

For example:

$$[L_x^2, L_z] = L_x[L_x, L_z] + [L_x, L_z]L_x = -i(L_x L_y + L_y L_x) \qquad (6.27)$$

Also,

$$[L_y^2, L_z] = i(L_x L_y + L_y L_x) \qquad (6.28)$$

Thus,

$$[L^2, L_z] = 0 \qquad (6.29)$$

Thus, I can measure $L^2$ and $L_z$ simultaneously. (Actually, I can measure $L^2$ and any one component $L_k$ simultaneously. However, we usually pick this one as the $z$ axis to make the math easier, as we shall soon see.)
A consequence of the fact that $L_x$, $L_y$, and $L_z$ do not commute is that the angular momentum vector $\vec L$ can never lie exactly along the $z$ axis (or exactly along any other axis, for that matter). We can interpret this in a classical context as a vector of length $|L| = \hbar\sqrt{L(L+1)}$ with the $z$ component being $\hbar m$. The vector is then constrained to lie in a cone, as shown in Fig. 6.1. We will take up this model at the end of this chapter in the semi-classical context.
It is also convenient to write $L_x$ and $L_y$ as the linear combinations

$$L_+ = L_x + iL_y \qquad L_- = L_x - iL_y \qquad (6.30)$$

(Recall what we did for harmonic oscillators?) It's easy to see that

$$[L_+, L_-] = 2L_z \qquad (6.31)$$
Figure 6.1: Vector model for the quantum angular momentum state $|jm\rangle$, represented here by the vector $\mathbf{j}$, of length $|j| = (j(j+1))^{1/2}$, which precesses about the $z$ axis (the axis of quantization) with projection $m$.
$$[L_z, L_+] = L_+ \qquad (6.32)$$
$$[L_z, L_-] = -L_- \qquad (6.33)$$

Likewise:

$$L^2 = L_+L_- + L_z^2 - L_z = L_-L_+ + L_z^2 + L_z \qquad (6.34)$$
We now give some frequently used expressions for the angular momentum operator for a single particle in spherical polar (SP) coordinates. In SP coordinates,

$$x = r\sin\theta\cos\phi \qquad (6.35)$$
$$y = r\sin\theta\sin\phi \qquad (6.36)$$
$$z = r\cos\theta \qquad (6.37)$$

It is easy and straightforward to demonstrate that

$$L_z = -i\frac{\partial}{\partial\phi} \qquad (6.38)$$

and

$$L_\pm = e^{\pm i\phi}\left(\pm\frac{\partial}{\partial\theta} + i\cot\theta\frac{\partial}{\partial\phi}\right) \qquad (6.39)$$

Thus,

$$L^2 = -\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right] \qquad (6.40)$$

which is (up to sign) the angular part of the Laplacian in SP coordinates:

$$\nabla^2 = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2} \qquad (6.41)$$
$$= \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) - \frac{L^2}{r^2} \qquad (6.42)$$

In other words, the kinetic energy operator in SP coordinates is

$$-\frac{\hbar^2}{2m}\nabla^2 = -\frac{\hbar^2}{2m}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) - \frac{L^2}{r^2}\right] \qquad (6.43)$$
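Both Eq. 6.38 and Eq. 6.40 can be checked symbolically by applying them to an explicit spherical harmonic (here $Y_{2,1}$). A SymPy sketch (an aside, assuming the library is available):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
l, m = 2, 1
Y = sp.Ynm(l, m, theta, phi).expand(func=True)  # explicit trig form of Y_{2,1}

# L_z = -i d/dphi (Eq. 6.38): should give m*Y
Lz_Y = -sp.I * sp.diff(Y, phi)

# L^2 from Eq. 6.40: should give l(l+1)*Y
L2_Y = -(sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
         + sp.diff(Y, phi, 2) / sp.sin(theta) ** 2)

print(sp.simplify(Lz_Y - m * Y))            # 0
print(sp.simplify(L2_Y - l * (l + 1) * Y))  # 0
```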
6.2 Eigenvalues of the Angular Momentum Operator

Using the SP form,

$$L_z\psi = -i\frac{\partial\psi}{\partial\phi} = l_z\psi \qquad (6.44)$$

Thus, we conclude that $\psi = f(r,\theta)\,e^{il_z\phi}$. This must be single-valued, and thus periodic in $\phi$ with period $2\pi$. Thus,

$$l_z = m = 0, \pm 1, \pm 2, \ldots \qquad (6.45)$$

Thus, we write the azimuthal solutions as

$$\Phi_m(\phi) = \frac{1}{\sqrt{2\pi}}\,e^{im\phi} \qquad (6.46)$$

which are orthonormal functions:

$$\int_0^{2\pi}\Phi_{m'}^*(\phi)\,\Phi_m(\phi)\,d\phi = \delta_{m'm} \qquad (6.47)$$
In a centrally symmetric case, stationary states which differ only in their $m$ quantum number must have the same energy.

We now look for the eigenvalues and eigenfunctions of the $L^2$ operator belonging to a set of degenerate energy levels distinguished only by $m$. Since the $+z$ axis is physically equivalent to the $-z$ axis, for every $+m$ there must be a $-m$. Let $l$ denote the greatest possible $m$ for a given $L^2$ eigenstate. This upper limit must exist because $L^2 - L_z^2 = L_x^2 + L_y^2$ is an operator for an essentially positive quantity; thus its eigenvalues cannot be negative. We now apply $L_z L_\pm$ to $\psi_m$:

$$L_z(L_\pm\psi_m) = (L_\pm L_z \pm L_\pm)\psi_m = (m\pm 1)(L_\pm\psi_m) \qquad (6.48)$$

(note: we used $[L_z, L_\pm] = \pm L_\pm$). Thus, $L_\pm\psi_m$ is an eigenfunction of $L_z$ with eigenvalue $m\pm 1$:

$$\psi_{m+1} \propto L_+\psi_m \qquad (6.49)$$
$$\psi_{m-1} \propto L_-\psi_m \qquad (6.50)$$

If $m = l$, then we must have $L_+\psi_l = 0$. Thus,

$$L_-L_+\psi_l = (L^2 - L_z^2 - L_z)\psi_l = 0 \qquad (6.51)$$

i.e.

$$L^2\psi_l = (L_z^2 + L_z)\psi_l = l(l+1)\psi_l \qquad (6.52)$$

Thus, the eigenvalues of the $L^2$ operator are $l(l+1)$ for $l$ any non-negative integer (including 0). For a given value of $l$, the component $L_z$ can take the values

$$l,\ l-1,\ \ldots,\ 0,\ \ldots,\ -l \qquad (6.53)$$

or $2l+1$ different values. Thus an energy level with angular momentum $l$ has $2l+1$ degenerate states.
6.3 Eigenstates of $L^2$

Since $l$ and $m$ are the good quantum numbers, we'll denote the eigenstates of $L^2$ as

$$L^2|lm\rangle = l(l+1)|lm\rangle. \qquad (6.54)$$

This we will often write in shorthand, after specifying $l$, as

$$L^2|m\rangle = l(l+1)|m\rangle. \qquad (6.55)$$

Since $L^2 = L_+L_- + L_z^2 - L_z$, we have

$$\langle m|L^2|m\rangle = m^2 - m + \sum_{m'}\langle m|L_+|m'\rangle\langle m'|L_-|m\rangle = l(l+1) \qquad (6.56)$$

Also, note that

$$\langle m-1|L_-|m\rangle = \langle m|L_+|m-1\rangle^*, \qquad (6.57)$$

thus we have

$$|\langle m|L_+|m-1\rangle|^2 = l(l+1) - m(m-1) \qquad (6.58)$$

Choosing the phase (the Condon and Shortley phase convention) so that

$$\langle m-1|L_-|m\rangle = \langle m|L_+|m-1\rangle \qquad (6.59)$$

we obtain

$$\langle m|L_+|m-1\rangle = \sqrt{l(l+1) - m(m-1)} = \sqrt{(l+m)(l-m+1)} \qquad (6.60)$$

Using this relation, we note that

$$\langle m|L_x|m-1\rangle = \langle m-1|L_x|m\rangle = \frac{1}{2}\sqrt{(l+m)(l-m+1)} \qquad (6.61)$$
$$\langle m|L_y|m-1\rangle = -\langle m-1|L_y|m\rangle = -\frac{i}{2}\sqrt{(l+m)(l-m+1)} \qquad (6.62)$$

Thus, the diagonal elements of $L_x$ and $L_y$ are zero in states with definite values of $\langle L_z\rangle = m$.
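The matrix elements in Eqs. 6.60–6.62 completely determine the angular momentum matrices in the $|lm\rangle$ basis. A short NumPy sketch (not part of the notes) that builds them for an arbitrary $l$ and confirms the $L^2$ eigenvalue and the vanishing diagonals:

```python
import numpy as np

def angular_momentum_matrices(l):
    """Lx, Ly, Lz (hbar = 1) in the |l m> basis, m = l, ..., -l,
    built from <m|L+|m-1> = sqrt((l+m)(l-m+1)), Condon-Shortley phase."""
    ms = np.arange(l, -l - 1, -1)
    dim = 2 * l + 1
    Lp = np.zeros((dim, dim))
    for col in range(1, dim):
        m = ms[col] + 1                        # L+ raises ms[col] to m
        Lp[col - 1, col] = np.sqrt((l + m) * (l - m + 1))
    Lm = Lp.T
    return (Lp + Lm) / 2, (Lp - Lm) / 2j, np.diag(ms).astype(complex)

l = 3
Lx, Ly, Lz = angular_momentum_matrices(l)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
print(np.allclose(L2, l * (l + 1) * np.eye(2 * l + 1)))          # True
print(np.allclose(np.diag(Lx), 0), np.allclose(np.diag(Ly), 0))  # True True
```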
6.4 Eigenfunctions of $L^2$

The wavefunction of a particle is not entirely determined when $l$ and $m$ are prescribed; we still need to specify the radial component. Note that none of the angular momentum operators (in SP coordinates) contains an explicit $r$ dependency. For the time being, we'll take $r$ to be fixed and denote the angular momentum eigenfunctions in SP coordinates as $Y_{lm}(\theta,\phi)$, with normalization

$$\int |Y_{lm}(\theta,\phi)|^2\,d\Omega = 1 \qquad (6.63)$$

where $d\Omega = \sin\theta\,d\theta\,d\phi = -d(\cos\theta)\,d\phi$ and the integral is over all solid angles. Since we can determine common eigenfunctions for $L^2$ and $L_z$, there must be a separation of the variables $\theta$ and $\phi$, so we seek solutions of the form:

$$Y_{lm}(\theta,\phi) = \Phi_m(\phi)\,\Theta_{lm}(\theta) \qquad (6.64)$$

The normalization requirement is that

$$\int_0^\pi |\Theta_{lm}(\theta)|^2\sin\theta\,d\theta = 1 \qquad (6.65)$$

and we require

$$\int_0^{2\pi}\int_0^\pi Y_{l'm'}^*\,Y_{lm}\,d\Omega = \delta_{ll'}\delta_{mm'}. \qquad (6.66)$$

We thus seek solutions of

$$\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2} + l(l+1)\right]\psi = 0 \qquad (6.67)$$

i.e.

$$\left[\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d}{d\theta}\right) - \frac{m^2}{\sin^2\theta} + l(l+1)\right]\Theta_{lm}(\theta) = 0 \qquad (6.68)$$

which is well known from the theory of spherical harmonics:

$$\Theta_{lm}(\theta) = (-1)^m i^l\left[\frac{(2l+1)(l-m)!}{2\,(l+m)!}\right]^{1/2} P_l^m(\cos\theta) \qquad (6.69)$$

for $m \ge 0$, where the $P_l^m$ are associated Legendre polynomials. For $m < 0$ we get

$$\Theta_{l,-|m|} = (-1)^m\,\Theta_{l,|m|} \qquad (6.70)$$

Thus, the angular momentum eigenfunctions are the spherical harmonics, normalized so that the matrix relations defined above hold true. The complete expression is

$$Y_{lm} = (-1)^{(m+|m|)/2}\, i^l\left[\frac{2l+1}{4\pi}\frac{(l-|m|)!}{(l+|m|)!}\right]^{1/2} P_l^{|m|}(\cos\theta)\,e^{im\phi} \qquad (6.71)$$
Table 6.1: Spherical harmonics (Condon-Shortley phase convention).

$$Y_{00} = \frac{1}{\sqrt{4\pi}}$$
$$Y_{1,0} = \left(\frac{3}{4\pi}\right)^{1/2}\cos\theta$$
$$Y_{1,\pm 1} = \mp\left(\frac{3}{8\pi}\right)^{1/2}\sin\theta\,e^{\pm i\phi}$$
$$Y_{2,\pm 2} = 3\left(\frac{5}{96\pi}\right)^{1/2}\sin^2\theta\,e^{\pm 2i\phi}$$
$$Y_{2,\pm 1} = \mp 3\left(\frac{5}{24\pi}\right)^{1/2}\sin\theta\cos\theta\,e^{\pm i\phi}$$
$$Y_{2,0} = \left(\frac{5}{4\pi}\right)^{1/2}\left(\frac{3}{2}\cos^2\theta - \frac{1}{2}\right)$$

These can also be generated by the SphericalHarmonicY[l, m, θ, φ] function in Mathematica.

Figure 6.2: Spherical harmonic functions for up to $l = 2$. The color indicates the phase of the function.
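The normalization and orthogonality of the table entries can be confirmed by direct numerical quadrature over the sphere. A sketch (not part of the notes, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import dblquad

# A few Table 6.1 entries as functions of (theta, phi), Condon-Shortley phase.
Y = {
    (0, 0): lambda t, p: 1 / np.sqrt(4 * np.pi) + 0j,
    (1, 0): lambda t, p: np.sqrt(3 / (4 * np.pi)) * np.cos(t) + 0j,
    (1, 1): lambda t, p: -np.sqrt(3 / (8 * np.pi)) * np.sin(t) * np.exp(1j * p),
    (2, 0): lambda t, p: np.sqrt(5 / (4 * np.pi)) * (1.5 * np.cos(t) ** 2 - 0.5) + 0j,
}

def overlap(a, b):
    """<Y_a | Y_b> over the unit sphere (real part; these pairs are real)."""
    re, _ = dblquad(lambda t, p: (np.conj(Y[a](t, p)) * Y[b](t, p)).real * np.sin(t),
                    0, 2 * np.pi, 0, np.pi)  # p in [0, 2pi], t in [0, pi]
    return re

print(overlap((1, 0), (1, 0)))  # 1 (normalized)
print(overlap((1, 0), (2, 0)))  # 0 (orthogonal)
```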
For the case of $m = 0$,

$$Y_{l0} = i^l\left(\frac{2l+1}{4\pi}\right)^{1/2} P_l(\cos\theta) \qquad (6.72)$$

Other useful relations are the Cartesian forms, obtained by using the relations

$$\cos\theta = \frac{z}{r}, \qquad (6.73)$$
$$\sin\theta\cos\phi = \frac{x}{r}, \qquad (6.74)$$

and

$$\sin\theta\sin\phi = \frac{y}{r}. \qquad (6.75)$$

$$Y_{1,0} = \left(\frac{3}{4\pi}\right)^{1/2}\frac{z}{r} \qquad (6.76)$$
$$Y_{1,1} = -\left(\frac{3}{8\pi}\right)^{1/2}\frac{x+iy}{r} \qquad (6.77)$$
$$Y_{1,-1} = \left(\frac{3}{8\pi}\right)^{1/2}\frac{x-iy}{r} \qquad (6.78)$$

The orthogonality integral of the $Y_{lm}$ functions is given by

$$\int_0^{2\pi}\int_0^\pi Y_{lm}^*(\theta,\phi)\,Y_{l'm'}(\theta,\phi)\sin\theta\,d\theta\,d\phi = \delta_{ll'}\delta_{mm'}. \qquad (6.79)$$

Another useful relation is that

$$Y_{l,-m} = (-1)^m\,Y_{lm}^*. \qquad (6.80)$$

This relation is useful in deriving real-valued combinations of the spherical harmonic functions.
Exercise 6.1 Demonstrate the following:

1. $[L_+, L^2] = 0$

2. $[L_-, L^2] = 0$

Exercise 6.2 Derive the following relations:

$$\Theta_{l,m}(\theta,\phi) = \sqrt{\frac{(l+m)!}{(2l)!\,(l-m)!}}\,(L_-)^{l-m}\,\Theta_{l,l}(\theta,\phi)$$

and

$$\Theta_{l,m}(\theta,\phi) = \sqrt{\frac{(l-m)!}{(2l)!\,(l+m)!}}\,(L_+)^{l+m}\,\Theta_{l,-l}(\theta,\phi)$$

where the $\Theta_{l,m} = Y_{l,m}$ are eigenstates of the $L^2$ operator.
6.5 Addition theorem and matrix elements

In the quantum mechanics of rotations, we will come across integrals of the general form

$$\int Y_{l_1m_1}^*\,Y_{l_2m_2}\,Y_{l_3m_3}\,d\Omega \qquad \text{or} \qquad \int Y_{l_1m_1}^*\,P_{l_2}\,Y_{l_3m_3}\,d\Omega$$

in computing matrix elements between angular momentum states. For example, we may be asked to compute the matrix elements for dipole-induced transitions between rotational states of a spherical molecule or between different orbital angular momentum states of an atom. In either case, we need to evaluate an integral/matrix element of the form

$$\langle l_1m_1|z|l_2m_2\rangle = \int Y_{l_1m_1}^*\,z\,Y_{l_2m_2}\,d\Omega \qquad (6.81)$$

Realizing that $z = r\cos\theta = r\sqrt{4\pi/3}\,Y_{10}(\theta,\phi)$, Eq. 6.81 becomes

$$\langle l_1m_1|z|l_2m_2\rangle = \sqrt{\frac{4\pi}{3}}\,r\int Y_{l_1m_1}^*\,Y_{10}\,Y_{l_2m_2}\,d\Omega \qquad (6.82)$$
Integrals of this form can be evaluated by group theoretical analysis and involve the introduction of Clebsch-Gordan coefficients, $C^{LM}_{l_1m_1l_2m_2}$,¹ which are tabulated in various places or can be computed using Mathematica. In short, some basic rules will always apply:

1. The integral will vanish unless the vector sum of the angular momenta sums to zero, i.e. $|l_1 - l_3| \le l_2 \le (l_1 + l_3)$. This is the triangle rule, and basically means you have to be able to make a triangle with the length of each side being $l_1$, $l_2$, and $l_3$.

2. The integral will vanish unless $m_2 + m_3 = m_1$. This reflects the conservation of the $z$ component of the angular momentum.

3. The integral vanishes unless $l_1 + l_2 + l_3$ is an even integer. This is a parity conservation law.

So the general procedure for performing any calculation involving spherical harmonics is to first check whether the matrix element violates any of the three symmetry rules; if so, then the answer is 0 and you're done.²
To actually perform the integration, we first write the product of two of the $Y_{lm}$'s as a Clebsch-Gordan expansion:

$$Y_{l_1m_1}Y_{l_2m_2} = \sum_{LM}\sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}}\,C^{L0}_{l_10l_20}\,C^{LM}_{l_1m_1l_2m_2}\,Y_{LM}. \qquad (6.83)$$

¹ Our notation is based upon Varshalovich's book. There are at least 13 different notations that I know of for expressing these coefficients, which I list in a table at the end of this chapter.

² In Mathematica, the Clebsch-Gordan coefficients are computed using the function ClebschGordan[{j1, m1}, {j2, m2}, {j, m}] for the decomposition of $|jm\rangle$ into $|j_1m_1\rangle$ and $|j_2m_2\rangle$.
We can use this to write

$$\int Y_{lm}^*\,Y_{l_1m_1}Y_{l_2m_2}\,d\Omega = \sum_{LM}\sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}}\,C^{L0}_{l_10l_20}\,C^{LM}_{l_1m_1l_2m_2}\int Y_{lm}^*\,Y_{LM}\,d\Omega$$
$$= \sum_{LM}\sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}}\,C^{L0}_{l_10l_20}\,C^{LM}_{l_1m_1l_2m_2}\,\delta_{lL}\,\delta_{mM}$$
$$= \sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2l+1)}}\,C^{l0}_{l_10l_20}\,C^{lm}_{l_1m_1l_2m_2} \qquad (6.84)$$

In fact, the expansion we have done above for the product of two spherical harmonics can be inverted to yield the decomposition of one angular momentum state into a pair of coupled angular momentum states, such as would be the case for combining the orbital angular momentum of a particle with, say, its spin angular momentum. In Dirac notation this becomes pretty apparent:

$$|LM\rangle = \sum_{m_1m_2}\langle l_1m_1l_2m_2|LM\rangle\,|l_1m_1l_2m_2\rangle \qquad (6.85)$$

where the state $|l_1m_1l_2m_2\rangle$ is the product of the two angular momentum states $|l_1m_1\rangle$ and $|l_2m_2\rangle$. The expansion coefficients are the Clebsch-Gordan coefficients

$$C^{LM}_{l_1m_1l_2m_2} = \langle l_1m_1l_2m_2|LM\rangle \qquad (6.86)$$
Now, let's go back to the problem of computing the dipole transition matrix element between two angular momentum states in Eq. 6.81. The integral we wish to evaluate is

$$\langle l_1m_1|z|l_2m_2\rangle = \int Y_{l_1m_1}^*\,z\,Y_{l_2m_2}\,d\Omega \qquad (6.87)$$

and we noted that $z$ is related to the $Y_{10}$ spherical harmonic. So the integral over the angular coordinates involves:

$$\int Y_{l_1m_1}^*\,Y_{10}\,Y_{l_2m_2}\,d\Omega. \qquad (6.88)$$

First, we evaluate which matrix elements are going to be permitted by symmetry.

1. Clearly, by the triangle inequality, $|l_1 - l_2| = 1$. In other words, we can change the angular momentum quantum number by only 1.

2. Also, by the second criterion, $m_1 = m_2$.

3. Finally, by the third criterion, $l_1 + l_2 + 1$ must be even, which again implies that $l_1$ and $l_2$ differ by 1.

Thus the integral becomes

$$\int Y_{l+1,m}^*\,Y_{10}\,Y_{lm}\,d\Omega = \sqrt{\frac{(2l+1)(2\cdot 1+1)}{4\pi(2l+3)}}\;C^{l+1,0}_{l0\,10}\,C^{l+1,m}_{lm\,10} \qquad (6.89)$$
From tables,

$$C^{l+1,0}_{l0\,10} = \sqrt{\frac{l+1}{2l+1}}$$

$$C^{l+1,m}_{lm\,10} = \sqrt{\frac{(l-m+1)(l+m+1)}{(2l+1)(l+1)}}$$

so

$$C^{l+1,0}_{l0\,10}\,C^{l+1,m}_{lm\,10} = \frac{\sqrt{(l-m+1)(l+m+1)}}{2l+1}.$$

Thus,

$$\int Y_{l+1,m}^*\,Y_{10}\,Y_{lm}\,d\Omega = \sqrt{\frac{3}{4\pi}}\sqrt{\frac{(l+m+1)(l-m+1)}{(2l+1)(2l+3)}} \qquad (6.90)$$

Finally, we can construct the matrix element for dipole transitions as

$$\langle l_1m_1|z|l_2m_2\rangle = r\sqrt{\frac{(l+m+1)(l-m+1)}{(2l+1)(2l+3)}}\;\delta_{l_1-1,l_2}\,\delta_{m_1,m_2}. \qquad (6.91)$$

Physically, this makes sense because a photon carries a single quantum of angular momentum. So in order for a molecule or atom to emit or absorb a photon, its angular momentum can only change by 1.
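Equation 6.90 can be verified against an independent evaluation of the three-harmonic integral, here using SymPy's gaunt function together with the relation $Y_{lm}^* = (-1)^m Y_{l,-m}$ (an aside, assuming SymPy is available):

```python
from sympy import sqrt, pi, Rational, simplify
from sympy.physics.wigner import gaunt

def rhs(l, m):
    """Right-hand side of Eq. 6.90."""
    return sqrt(Rational(3) / (4 * pi)) * sqrt(
        Rational((l + m + 1) * (l - m + 1), (2 * l + 1) * (2 * l + 3)))

def lhs(l, m):
    """Integral of conj(Y_{l+1,m}) * Y_{1,0} * Y_{l,m} over the sphere.
    gaunt() integrates three *unconjugated* harmonics, and
    conj(Y_{l+1,m}) = (-1)^m Y_{l+1,-m}."""
    return (-1) ** m * gaunt(l + 1, 1, l, -m, 0, m)

for l in range(1, 4):
    for m in range(-l, l + 1):
        assert simplify(lhs(l, m) - rhs(l, m)) == 0
print("Eq. 6.90 verified for l = 1..3")
```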
Exercise 6.3 Verify the following relations:

$$\int Y_{l+1,m+1}^*\,Y_{11}\,Y_{lm}\,d\Omega = \sqrt{\frac{3}{8\pi}}\sqrt{\frac{(l+m+1)(l+m+2)}{(2l+1)(2l+3)}} \qquad (6.92)$$

$$\int Y_{l-1,m+1}^*\,Y_{11}\,Y_{lm}\,d\Omega = -\sqrt{\frac{3}{8\pi}}\sqrt{\frac{(l-m)(l-m-1)}{(2l-1)(2l+1)}} \qquad (6.93)$$

$$\int Y_{lm}^*\,Y_{00}\,Y_{lm}\,d\Omega = \frac{1}{\sqrt{4\pi}} \qquad (6.94)$$
6.6 Legendre Polynomials and Associated Legendre Polynomials

Ordinary Legendre polynomials are generated by

$$P_l(\cos\theta) = \frac{1}{2^l\, l!}\frac{d^l}{(d\cos\theta)^l}(\cos^2\theta - 1)^l \qquad (6.95)$$

i.e., with $x = \cos\theta$,

$$P_l(x) = \frac{1}{2^l\, l!}\frac{\partial^l}{\partial x^l}(x^2-1)^l \qquad (6.96)$$

and satisfy

$$\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + l(l+1)\right]P_l = 0 \qquad (6.97)$$

The associated Legendre polynomials are derived from the Legendre polynomials via

$$P_l^m(\cos\theta) = \sin^m\theta\,\frac{\partial^m}{(\partial\cos\theta)^m}P_l(\cos\theta) \qquad (6.98)$$
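The Rodrigues formula, Eq. 6.96, can be implemented directly with polynomial arithmetic and compared against NumPy's built-in Legendre polynomials. A sketch (not part of the notes):

```python
import math
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial.legendre import Legendre

def legendre_rodrigues(l):
    """Coefficients (low-to-high order) of P_l(x) via Eq. 6.96."""
    poly = P.polypow([-1.0, 0.0, 1.0], l)   # (x^2 - 1)^l
    for _ in range(l):
        poly = P.polyder(poly)              # differentiate l times
    return poly / (2.0 ** l * math.factorial(l))

x = np.linspace(-1, 1, 7)
for l in range(5):
    ours = P.polyval(x, legendre_rodrigues(l))
    ref = Legendre.basis(l)(x)              # numpy's built-in P_l
    assert np.allclose(ours, ref)

print(legendre_rodrigues(2))  # coefficients of (3x^2 - 1)/2
```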
6.7 Quantum rotations in a semi-classical context

Earlier we established the fact that the angular momentum vector can never exactly lie on a single spatial axis. By convention we take the quantization axis to be the $z$ axis, but this is arbitrary; we can pick any axis as the quantization axis, it is just that picking the $z$ axis makes the mathematics much simpler. Furthermore, we established that the maximum length the angular momentum vector can have along the $z$ axis is the eigenvalue of $L_z$ when $m = l$, so $\langle l_z\rangle = l$, which is less than $\sqrt{l(l+1)}$. Note, however, that we can write the eigenvalue of $L^2$ as $l^2(1 + 1/l)$. As $l$ becomes very large, the eigenvalue of $L_z$ and the eigenvalue of $L^2$ become nearly identical. The $1/l$ term is in a sense a quantum mechanical effect resulting from the uncertainty in determining the precise direction of $\vec L$.
We can develop a more quantitative model for this by examining both the uncertainty product and the semi-classical limit of the angular momentum distribution function. First, recall that if we have an observable, $A$, then the spread in the measurements of $A$ is given by the variance

$$\Delta A^2 = \langle(A - \langle A\rangle)^2\rangle = \langle A^2\rangle - \langle A\rangle^2. \qquad (6.99)$$

In any representation in which $A$ is diagonal, $\Delta A^2 = 0$ and we can determine $A$ to any level of precision. But if we look at the sum of the variances of $L_x$ and $L_y$, we see

$$\Delta L_x^2 + \Delta L_y^2 = l(l+1) - m^2. \qquad (6.100)$$

So for a fixed value of $l$ and $m$, the sum of the two variances is constant and reaches its minimum when $|m| = l$, corresponding to the case when the vector points as close to the $z$ axis as it possibly can. The conclusion we reach is that the angular momentum vector lies somewhere in a cone in which the apex half-angle, $\theta$, satisfies the relation

$$\cos\theta = \frac{m}{\sqrt{l(l+1)}} \qquad (6.101)$$

which we can verify geometrically. So as $l$ becomes very large, the ratio for $m = l$,

$$\frac{l}{\sqrt{l(l+1)}} = \frac{1}{\sqrt{1 + 1/l}} \longrightarrow 1 \qquad (6.102)$$

and $\theta \to 0$, corresponding to the case in which the angular momentum vector lies perfectly along the $z$ axis.

Exercise 6.4 Prove Eq. 6.100 by writing $\langle L^2\rangle = \langle L_x^2\rangle + \langle L_y^2\rangle + \langle L_z^2\rangle$.
To develop this further, let's look at the asymptotic behavior of the spherical harmonics at large values of the angular momentum. The angular part of the spherical harmonic function satisfies

$$\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + l(l+1) - \frac{m^2}{\sin^2\theta}\right]\Theta_{lm} = 0 \qquad (6.103)$$

For $m = 0$ this reduces to the differential equation for the Legendre polynomials,

$$\left[\frac{\partial^2}{\partial\theta^2} + \cot\theta\frac{\partial}{\partial\theta} + l(l+1)\right]P_l(\cos\theta) = 0 \qquad (6.104)$$

If we make the substitution

$$P_l(\cos\theta) = \frac{\chi_l(\theta)}{(\sin\theta)^{1/2}} \qquad (6.105)$$

then we wind up with a similar equation for $\chi_l(\theta)$:

$$\left[\frac{\partial^2}{\partial\theta^2} + (l+1/2)^2 + \frac{1}{4}\csc^2\theta\right]\chi_l = 0. \qquad (6.106)$$

For very large $l$, the $(l+1/2)^2$ term dominates and we can ignore the $\csc^2\theta$ term everywhere except for angles close to $\theta = 0$ or $\theta = \pi$. If we do so, then our differential equation becomes

$$\left[\frac{\partial^2}{\partial\theta^2} + (l+1/2)^2\right]\chi_l = 0, \qquad (6.107)$$

which has the solution

$$\chi_l(\theta) = A_l\sin\left((l+1/2)\theta + \gamma\right) \qquad (6.108)$$

where $A_l$ and $\gamma$ are constants we need to determine from the boundary conditions of the problem. For large $l$, and for $\theta \gg 1/l$ and $\pi - \theta \gg 1/l$, one obtains

$$P_l(\cos\theta) \approx A_l\,\frac{\sin((l+1/2)\theta + \gamma)}{(\sin\theta)^{1/2}}. \qquad (6.109)$$
Similarly,

$$Y_{l0}(\theta,\phi) \approx \left(\frac{l+1/2}{2\pi}\right)^{1/2} A_l\,\frac{\sin((l+1/2)\theta + \gamma)}{(\sin\theta)^{1/2}}, \qquad (6.110)$$

so that the angular probability distribution is

$$|Y_{l0}|^2 = \left(\frac{l+1/2}{2\pi}\right)A_l^2\,\frac{\sin^2((l+1/2)\theta + \gamma)}{\sin\theta}. \qquad (6.111)$$
When $l$ is very large, the $\sin^2((l+1/2)\theta + \gamma)$ factor is extremely oscillatory and we can replace it by its average value of 1/2. Then, if we require the integral of our approximation for $|Y_{l0}|^2$ to be normalized, one obtains

$$|Y_{l0}|^2 = \frac{1}{2\pi^2\sin\theta} \qquad (6.112)$$

which holds for large values of $l$ and all values of $\theta$ except for $\theta = 0$ or $\theta = \pi$.

We can also recover this result from a purely classical model. In classical mechanics, the particle moves in a circular orbit in a plane perpendicular to the angular momentum vector. For $m = 0$ this vector lies in the $xy$ plane, and we will define $\theta$ as the angle between the particle and the $z$ axis and $\phi$ as the azimuthal angle of the angular momentum vector in the $xy$ plane. Since the particle's speed is uniform, its distribution in $\theta$ is uniform. Thus the probability of finding the particle at any instant in time between $\theta$ and $\theta + d\theta$ is $d\theta/\pi$. Furthermore, we have not specified the azimuthal angle, so we assume that the probability distribution is also uniform over $\phi$, and the angular probability $d\theta/\pi$ must be smeared over a band on the unit sphere defined by the angles $\theta$ and $\theta + d\theta$. The area of this band is $2\pi\sin\theta\,d\theta$. Thus, we can define the classical estimate as a probability per unit area,

$$P(\theta) = \frac{d\theta}{\pi}\cdot\frac{1}{2\pi\sin\theta\,d\theta} = \frac{1}{2\pi^2\sin\theta}$$

which is in agreement with the estimate we made above.
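The classical distribution is properly normalized over the sphere, which is easy to confirm numerically (a small aside assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# Classical m = 0 estimate: P(theta) = 1/(2 pi^2 sin(theta)) per unit area.
P = lambda t: 1.0 / (2 * np.pi ** 2 * np.sin(t))

# Integrated with the area element 2 pi sin(theta) dtheta, it should give 1;
# the sin(theta) factors cancel, so the integrand is the constant 1/pi.
total, _ = quad(lambda t: P(t) * 2 * np.pi * np.sin(t), 0, np.pi)
print(total)  # 1.0
```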
For $m \ne 0$ we have to work a bit harder, since the angular momentum vector is tilted out of the plane. For this we define two new angles: $\chi$, the azimuthal rotation of the particle's position about the $\mathbf{L}$ vector, and $\alpha$, which is constrained by the length of the angular momentum vector and its projection onto the $z$ axis:

$$\cos\alpha = \frac{m}{\sqrt{l(l+1)}} \approx \frac{m}{l}$$

The analysis is identical to before, with the addition of the fact that the probability in $\chi$ (taken to be uniform) is spread over a zone $2\pi\sin\theta\,d\theta$. Thus the probability of finding the particle at some angle $\theta$ is

$$P(\theta) = \frac{d\chi}{d\theta}\,\frac{1}{2\pi^2\sin\theta}.$$

Since $\chi$ is the dihedral angle between the plane containing $z$ and $\mathbf{l}$ and the plane containing $\mathbf{l}$ and $\mathbf{r}$ (the particle's position vector), we can relate $\theta$ to $\alpha$ and $\chi$ by

$$\cos\theta = \cos\alpha\cos\frac{\pi}{2} + \sin\alpha\sin\frac{\pi}{2}\cos\chi = \sin\alpha\cos\chi$$

Thus,

$$\sin\theta\,d\theta = \sin\alpha\sin\chi\,d\chi.$$

This allows us to generalize our probability distribution to any value of $m$:

$$|Y_{lm}(\theta,\phi)|^2 = \frac{1}{2\pi^2\sin\alpha\sin\chi} \qquad (6.113)$$
$$= \frac{1}{2\pi^2\left(\sin^2\alpha - \cos^2\theta\right)^{1/2}} \qquad (6.114)$$
Figure 6.3: Classical and quantum probability distribution functions for angular momentum. The six panels show $|Y_{4,4}|^2$, $|Y_{10,10}|^2$, $|Y_{4,2}|^2$, $|Y_{10,2}|^2$, $|Y_{4,0}|^2$, and $|Y_{10,0}|^2$ as functions of $\theta$.
which holds so long as $\sin^2\alpha > \cos^2\theta$. This corresponds to the spatial region $(\pi/2 - \alpha) < \theta < (\pi/2 + \alpha)$. Outside this region the distribution blows up; this corresponds to the classically forbidden region.

In Fig. 6.3 we compare the results of our semi-classical model with the exact results for $l = 4$ and $l = 10$. All in all, we do pretty well with the semi-classical model; we miss some of the wiggles, and the distribution is sharp close to the boundaries, but the generic features are all there.
Table 6.2: Relation between various notations for Clebsch-Gordan coefficients in the literature

Symbol                                  Author
C^{jm}_{j1 m1 j2 m2}                    Varshalovich (a)
S^{j1 j2}_{jm1 jm2}                     Wigner (b)
A^{j j1 j2}_{m m1 m2}                   Eckart (c)
C^{m1 m}_{j2}                           Van der Waerden (d)
(j1 j2 m1 m2 | j1 j2 jm)                Condon and Shortley (e)
C^{j1 j2 jm}(m1 m2)                     Fock (f)
X(j, m, j1, j2, m1)                     Boys (g)
C(jm; m1 m2)                            Blatt and Weisskopf (h)
C^{j1 j2 j}_{m1 m2 m}                   Biedenharn (i)
C(j1 j2 j, m1 m2)                       Rose (j)
[ j1 j2 j ; m1 m2 m ]                   Yutsis and Bandzaitis (k)
⟨j1 m1 j2 m2 | (j1 j2) jm⟩              Fano (l)

a.) D. A. Varshalovich, et al., Quantum Theory of Angular Momentum (World Scientific, 1988).
b.) E. Wigner, Group Theory (Academic Press, 1959).
c.) C. Eckart, "The application of group theory to the quantum dynamics of monatomic systems," Rev. Mod. Phys. 2, 305 (1930).
d.) B. L. van der Waerden, Die gruppentheoretische Methode in der Quantenmechanik (Springer, 1932).
e.) E. Condon and G. Shortley, Theory of Atomic Spectra (Cambridge, 1932).
f.) V. A. Fock, "New deduction of the vector model," JETP 10, 383 (1940).
g.) S. F. Boys, "Electronic wave functions IV," Proc. Roy. Soc. London A 207, 181 (1951).
h.) J. M. Blatt and V. F. Weisskopf, Theoretical Nuclear Physics (McGraw-Hill, 1952).
i.) L. C. Biedenharn, Tables of Racah Coefficients, ORNL-1098 (1952).
j.) M. E. Rose, Multipole Fields (Wiley, 1955).
k.) A. P. Yutsis and A. A. Bandzaitis, The Theory of Angular Momentum in Quantum Mechanics (Mintis, Vilnius, 1965).
l.) U. Fano, "Statistical matrix techniques and their application to the directional correlation of radiation," US Natl. Bureau of Standards Report 1214 (1951).
6.8 Motion in a central potential: The Hydrogen Atom
(under development)
The solution of the Schrödinger equation for the hydrogen atom was perhaps the most significant development in quantum theory. Since it is one of the few problems in nature for which we can derive an exact solution to the equations of motion, it deserves special attention and focus. Perhaps more importantly, the hydrogen atomic orbitals form the basis of atomic physics and quantum chemistry.

The potential energy function between the proton and the electron is the centrosymmetric Coulombic potential

V(r) = −Ze²/r.

Since the potential is centrosymmetric and has no angular dependence, the hydrogen atom Hamiltonian separates into radial and angular components,

H = −(ħ²/2μ)[ (1/r²) ∂/∂r ( r² ∂/∂r ) − L²/(ħ²r²) ] − e²/r   (6.115)

where L² is the angular momentum operator we all know and love by now, and μ is the reduced mass of the electron/proton system,

μ = m_e m_p/(m_e + m_p) ≈ m_e.
Since [H, L²] = 0 and [H, L_z] = 0, the total angular momentum and one component of the angular momentum are constants of the motion. Since there are three separable degrees of freedom, we have one other constant of motion, which must correspond to the radial motion. As a consequence, the hydrogen wavefunction is separable into radial and angular components,

ψ_nlm = R_nl(r) Y_lm(θ, φ).   (6.116)
Using the Hamiltonian in Eq. 6.115 and this wavefunction, the radial Schrödinger equation reads (in atomic units)

[ −(ħ²/2μ)( (1/r²) ∂/∂r ( r² ∂/∂r ) − l(l+1)/r² ) − 1/r ] R_nl(r) = E R_nl(r)   (6.117)

At this point, we introduce atomic units to make the notation more compact and drastically simplify calculations. In atomic units, ħ = 1 and e = 1. A list of conversions for energy, length, etc. to SI units is given in the appendix. The motivation is that all of our numbers are then of order 1.
The kinetic energy term can be rearranged a bit,

(1/r²) ∂/∂r ( r² ∂/∂r ) = ∂²/∂r² + (2/r) ∂/∂r   (6.118)

and the radial equation written as

[ −(ħ²/2μ)( ∂²/∂r² + (2/r) ∂/∂r − l(l+1)/r² ) − 1/r ] R_nl(r) = E R_nl(r)   (6.119)
To solve this equation, we first have to figure out what approximate form the wavefunction must have. For large values of r, the 1/r terms disappear and the asymptotic equation is

−(ħ²/2μ) ∂²R_nl(r)/∂r² = E R_nl(r)   (6.120)

or

∂²R/∂r² = β² R   (6.121)

where β² = −2mE/ħ². We have seen this differential equation before for the free particle, so the solution must have the same form, except that in this case the exponent is real. Furthermore, for bound states with E < 0 the radial solution must go to zero as r → ∞, so of the two possible asymptotic solutions, the exponentially damped term is the correct one:

R(r) ∝ e^{−βr}   (6.122)
Now we have to check whether this is a solution everywhere. So, we take the asymptotic solution and plug it into the complete equation:

β² e^{−βr} − (2β/r) e^{−βr} + (2m/ħ²)( e²/r + E ) e^{−βr} = 0.   (6.123)

Eliminating e^{−βr},

( β² + 2mE/ħ² ) + (1/r)( 2me²/ħ² − 2β ) = 0   (6.124)
For the solution to hold everywhere, it must also hold at r = 0, so two conditions must be met:

β² = −2mE/ħ²   (6.125)

which we defined above, and

2me²/ħ² − 2β = 0.   (6.126)

If these conditions are met, then e^{−βr} is a solution. This last equation also sets the length scale of the system, since

β = me²/ħ² = 1/a_o   (6.127)
where a_o is the Bohr radius. In atomic units, a_o = 1. Likewise, the energy can be determined:

E = −ħ²/(2m a_o²) = −e²/(2a_o).   (6.128)

In atomic units the ground state energy is E = −1/2 hartree.
Finally, we have to normalize R:

∫ d³r e^{−2βr} = 4π ∫₀^∞ r² e^{−2βr} dr   (6.129)

The angular normalization can be absorbed into the spherical harmonic term in the total wavefunction, since Y_00 = 1/√(4π). So, the ground state wavefunction is

ψ_100 = N e^{−r/a_o} Y_00   (6.130)

The radial integral can be evaluated using Leibniz's theorem for differentiation of a definite integral,

∂/∂β ∫_a^b f(β, x) dx = ∫_a^b ∂f(β, x)/∂β dx   (6.131)
Thus,

∫₀^∞ r² e^{−βr} dr = (∂²/∂β²) ∫₀^∞ e^{−βr} dr = (∂²/∂β²)(1/β) = 2/β³   (6.132)
Exercise 6.5 Generalize this result to show that

∫₀^∞ rⁿ e^{−βr} dr = n!/β^{n+1}   (6.133)
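As a quick sanity check (my own illustration, not part of the original notes), the integral formula above can be verified numerically; the sketch below uses scipy's adaptive quadrature and compares against n!/β^{n+1} for a few values of n and an arbitrarily chosen β:

```python
import math
from scipy.integrate import quad

beta = 1.7  # arbitrary positive decay constant
for n in range(5):
    # numerically integrate r^n e^{-beta r} from 0 to infinity
    numeric, _ = quad(lambda r: r**n * math.exp(-beta * r), 0, math.inf)
    exact = math.factorial(n) / beta**(n + 1)
    print(n, round(numeric, 10), round(exact, 10))
```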
Thus, using this result and putting it all together, the normalized radial wavefunction is

R_10 = 2 (1/a_o)^{3/2} e^{−r/a_o}.   (6.134)
For the higher energy states, we examine what happens as r → 0. Using a similar analysis as above, one can show that close in, the radial solution must behave like a polynomial,

R ∝ r^l,

which leads to a general solution of the form

R = r^l e^{−βr} Σ_{s=0} a_s r^s.

The procedure is to substitute this back into the Schrödinger equation and evaluate term by term. In the end one finds that the energies of the bound states are (in atomic units)

E_n = −1/(2n²)
and the radial wavefunctions are

R_nl = −√[ (2/(n a_o))³ (n − l − 1)! / ( 2n ((n + l)!)³ ) ] ( 2r/(n a_o) )^l e^{−r/(n a_o)} L^{2l+1}_{n+l}( 2r/(n a_o) )   (6.135)

where the L^b_a are the associated Laguerre polynomials (written here in the older convention, which carries the ((n + l)!)³ normalization).
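Modern references (and scipy) use a different normalization convention for the associated Laguerre polynomials than the older one quoted above; with scipy's genlaguerre the hydrogenic radial functions take the form sketched below. This is my own illustrative check (function names and the a0 = 1 atomic-units choice are mine), verifying that ∫ R_nl² r² dr = 1 for a few states:

```python
import math
import numpy as np
from scipy.special import genlaguerre
from scipy.integrate import quad

def R_nl(r, n, l, a0=1.0):
    """Hydrogenic radial function, modern convention using
    scipy's generalized Laguerre polynomial L^{2l+1}_{n-l-1}."""
    rho = 2.0 * r / (n * a0)
    norm = math.sqrt((2.0 / (n * a0))**3
                     * math.factorial(n - l - 1)
                     / (2.0 * n * math.factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2*l + 1)(rho)

# check normalization: integral of R_nl^2 r^2 dr should equal 1
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    val, _ = quad(lambda r: R_nl(r, n, l)**2 * r**2, 0, np.inf)
    print(n, l, round(val, 6))
```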
6.8.1 Radial Hydrogenic Functions

The radial wavefunctions for nuclei with atomic number Z are modified hydrogenic wavefunctions with the Bohr radius scaled by Z, i.e. a = a_o/Z. The energy for one electron about a nucleus with Z protons is

E_n = −(Z²/n²) e²/(2a_o) = −(Z²/n²) Ry   (6.136)
Some radial wavefunctions are

R_1s = 2 (Z/a_o)^{3/2} e^{−Zr/a_o}   (6.137)

R_2s = (1/√2) (Z/a_o)^{3/2} ( 1 − Zr/(2a_o) ) e^{−Zr/(2a_o)}   (6.138)

R_2p = (1/(2√6)) (Z/a_o)^{5/2} r e^{−Zr/(2a_o)}   (6.139)
6.9 Spin 1/2 Systems
In this section we are going to illustrate the various postulates and concepts we have been developing over the past few weeks. Rather than choosing example problems which are pedagogic (such as the particle in a box and its variations) or chosen for their mathematical simplicity, we are going to focus upon systems which are physically important. We are going to examine, without much theoretical introduction, the case in which the state space is limited to two states. The quantum mechanical behaviour of these systems can be probed experimentally and, in fact, such systems were and still are used to test various assumptions regarding quantum behaviour.

Recall from undergraduate chemistry that particles, such as the electron, proton, and so forth, possess an intrinsic angular momentum, S, called spin. This is a property which has no analogue in classical mechanics. Without going into all the details of angular momentum and how it gets quantized (don't worry, it's a coming event!) we are going to look at a spin 1/2 system, such as a neutral paramagnetic Ag atom in its ground electronic state. We are going to dispense with treating the other variables (the nuclear position and momentum, the motion of the electrons, etc.) and focus only upon the spin states of the system.
The paramagnetic Ag atoms possess an electronic magnetic moment, M. This magnetic moment can couple to an externally applied magnetic field, B, resulting in a net force being applied to the atom. The potential energy for this is

W = −M · B.   (6.140)

We take this without further proof. We also take without proof that the magnetic moment and the intrinsic angular momentum are proportional,

M = γ S,   (6.141)

where the proportionality constant γ is the gyromagnetic ratio of the level under consideration. When the atoms traverse the magnetic field, they are deflected according to how their angular momentum vector is oriented with respect to the applied field,

F = ∇( M · B ).   (6.142)

Also, the torque on the moment relative to the center of the atom is

Γ = M × B.   (6.143)

Thus, the time evolution of the angular momentum of the particle is

dS/dt = Γ,   (6.144)

that is to say,

dS/dt = γ S × B.   (6.145)

Thus, the velocity of the angular momentum is perpendicular to S, and the angular momentum vector acts like a gyroscope.
We can also show that for a field gradient along z the force acts parallel to z and is proportional to M_z. Thus, the atoms are deflected according to how their angular momentum vector is oriented with respect to the z axis. Experimentally, we observe two distinct distributions, meaning that a measurement of M_z can give rise to two possible results.
6.9.1 Theoretical Description

We associate an observable, S_z, with the experimental observations. This has two eigenvalues, ±ħ/2, which we shall assume are non-degenerate. We write the eigenvectors of S_z as |±⟩, corresponding to

S_z|+⟩ = +(ħ/2)|+⟩   (6.146)

S_z|−⟩ = −(ħ/2)|−⟩   (6.147)

with

⟨+|+⟩ = ⟨−|−⟩ = 1   (6.148)

and

⟨+|−⟩ = 0.   (6.149)

The closure relation is thus

|+⟩⟨+| + |−⟩⟨−| = 1.   (6.150)

The most general state vector is

|ψ⟩ = α|+⟩ + β|−⟩   (6.151)

with

|α|² + |β|² = 1.   (6.152)

In the |±⟩ basis, the matrix representation of S_z is diagonal and is written as

S_z = (ħ/2) [ 1   0
              0  −1 ]   (6.153)
6.9.2 Other Spin Observables

We can also measure S_x and S_y. In the |±⟩ basis these are written as

S_x = (ħ/2) [ 0  1
              1  0 ]   (6.154)

and

S_y = (ħ/2) [ 0  −i
              i   0 ]   (6.155)

You can verify that the eigenvalues of each of these are ±ħ/2.
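That verification is easy to do numerically. The following sketch (my own illustration, in units where ħ = 1) builds the three spin matrices and confirms that each has eigenvalues ±1/2:

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

for name, S in [("Sx", Sx), ("Sy", Sy), ("Sz", Sz)]:
    vals = np.linalg.eigvalsh(S)   # Hermitian matrix: real eigenvalues, ascending
    print(name, np.round(vals, 12))  # → [-0.5  0.5] for each
```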
6.9.3 Evolution of a state

The Hamiltonian for a spin 1/2 particle in a magnetic field along z is given by

H = −γ|B| S_z,   (6.156)

where |B| is the magnitude of the field. This operator is time-independent; thus, we can solve the Schrödinger equation and see that the eigenvectors of H are also the eigenvectors of S_z. (Thus, the eigenvalues of S_z are good quantum numbers.) Let's write ω = −γ|B| so that

H|+⟩ = +(ħω/2)|+⟩   (6.157)

H|−⟩ = −(ħω/2)|−⟩   (6.158)

Therefore there are two energy levels, E_± = ±ħω/2. The separation is proportional to the applied magnetic field, and the two levels define a single Bohr frequency.
6.9.4 Larmor Precession

Using the |±⟩ states, we can write any arbitrary angular momentum state as

|ψ(0)⟩ = cos(θ/2) e^{−iφ/2}|+⟩ + sin(θ/2) e^{+iφ/2}|−⟩   (6.159)

where θ and φ are polar angles specifying the direction of the angular momentum vector at a given time. The time evolution under H is

|ψ(t)⟩ = cos(θ/2) e^{−iφ/2} e^{−iE₊t/ħ}|+⟩ + sin(θ/2) e^{+iφ/2} e^{−iE₋t/ħ}|−⟩,   (6.160)

or, using the values of E₊ and E₋,

|ψ(t)⟩ = cos(θ/2) e^{−i(φ+ωt)/2}|+⟩ + sin(θ/2) e^{+i(φ+ωt)/2}|−⟩.   (6.161)

In other words, we can write

θ(t) = θ   (6.162)
φ(t) = φ + ωt.   (6.163)
This corresponds to the precession of the angular momentum vector about the z axis at an angular frequency ω. Moreover, the expectation values of S_z, S_x, and S_y can also be computed:

⟨S_z⟩(t) = (ħ/2) cos θ   (6.164)

⟨S_x⟩(t) = (ħ/2) sin θ cos(φ + ωt)   (6.165)

⟨S_y⟩(t) = (ħ/2) sin θ sin(φ + ωt)   (6.166)

Finally, what are the populations of the |±⟩ states as a function of time?

|⟨+|ψ(t)⟩|² = cos²(θ/2)   (6.167)

|⟨−|ψ(t)⟩|² = sin²(θ/2)   (6.168)

Thus, the populations do not change, and neither does the normalization of the state.
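A short numerical experiment (my own sketch; ħ = 1 and the values of ω, θ, and φ are arbitrary) reproduces the precession of ⟨S_x⟩ predicted by Eq. 6.165 by propagating the state with the matrix exponential of −iHt/ħ:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
Sz = hbar / 2 * np.diag([1.0, -1.0]).astype(complex)
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)

omega = 2.0                      # Larmor frequency (assumed value)
H = omega * Sz
theta, phi = 0.7, 0.3            # initial spin direction (assumed values)
psi0 = np.array([np.cos(theta/2) * np.exp(-1j*phi/2),
                 np.sin(theta/2) * np.exp(+1j*phi/2)])

for t in [0.0, 0.5, 1.0]:
    psi = expm(-1j * H * t / hbar) @ psi0   # time-evolved state
    sx = (psi.conj() @ Sx @ psi).real
    # Eq. 6.165 predicts <Sx> = (hbar/2) sin(theta) cos(phi + omega t)
    print(t, round(sx, 6), round(hbar/2*np.sin(theta)*np.cos(phi + omega*t), 6))
```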
6.10 Problems and Exercises
Exercise 6.6 A molecule (A) with angular momentum S = 3/2 decomposes into two products: product (B) with angular momentum 1/2 and product (C) with angular momentum 0. We place ourselves in the rest frame of (A); angular momentum is conserved throughout:

A_{3/2} → B_{1/2} + C_0   (6.169)

1. What values can be taken on by the relative orbital angular momentum of the two final products? Show that there is only one possible value once the parity of the relative orbital state is fixed. Would this result remain the same if the spin of A were 3/2?

2. Assume that A is initially in the spin state characterized by the eigenvalue m_a ħ of its spin component along the z-axis. We know that the final orbital state has a definite parity. Is it possible to determine this parity by measuring the probabilities of finding B in either state |+⟩ or in state |−⟩?
Exercise 6.7 The quadrupole moment of a charge distribution ρ(r) is given by

Q_ij = (1/e) ∫ ( 3x_i x_j − δ_ij r² ) ρ(r) d³r   (6.170)

where the total charge is e = ∫ d³r ρ(r). The quantum mechanical equivalent of this can be written in terms of the angular momentum operators as

Q_ij = (1/e) ∫ r² [ (3/2)( J_i J_j + J_j J_i ) − δ_ij J² ] ρ(r) d³r.   (6.171)

The quadrupole moment of a stationary state |n, j⟩, where n stands for the other, non-angular-momentum quantum numbers of the system, is given by the expectation value of Q_zz in the state in which m = j.
1. Evaluate

Q_o = ⟨Q_zz⟩ = ⟨njm = j|Q_zz|njm = j⟩   (6.172)

in terms of j and ⟨r²⟩ = ⟨nj|r²|nj⟩.

2. Can a proton (j = 1/2) have a quadrupole moment? What about a deuteron (j = 1)?

3. Evaluate the matrix element

⟨njm|Q_xy|njm′⟩.   (6.173)

What transitions are induced by this operator?

4. The quantum mechanical expression for the dipole moment is

p_o = ⟨njm = j| (r/e) J_z |njm = j⟩   (6.174)

Can an eigenstate of a Hamiltonian with a centrally symmetric potential have an electric dipole moment?
Exercise 6.8 The σ_x matrix is given by

σ_x = [ 0  1
        1  0 ].   (6.175)

Prove that

exp(iλσ_x) = I cos λ + iσ_x sin λ   (6.176)

where λ is a constant and I is the unit matrix.

Solution: To solve this you need to expand the exponential. To order λ⁴ this is

e^{iλσ_x} = I + iλσ_x − (λ²/2)σ_x² − i(λ³/3!)σ_x³ + (λ⁴/4!)σ_x⁴ + ⋯   (6.177)

Also, note that σ_x·σ_x = I; thus σ_x^{2n} = I and σ_x^{2n+1} = σ_x. Collect all the real terms and all the imaginary terms:

e^{iλσ_x} = ( I − I λ²/2 + I λ⁴/4! − ⋯ ) + iσ_x( λ − λ³/3! + ⋯ )   (6.178)

These are the series expansions for cos and sin, so

e^{iλσ_x} = I cos λ + iσ_x sin λ.   (6.179)
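The identity is easy to confirm numerically. This sketch (my own, with an arbitrary value of λ) compares scipy's matrix exponential against the closed form:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2)

lam = 0.9  # any real constant
lhs = expm(1j * lam * sx)                       # exp(i lam sigma_x)
rhs = I * np.cos(lam) + 1j * sx * np.sin(lam)   # I cos(lam) + i sigma_x sin(lam)
print(np.allclose(lhs, rhs))  # → True
```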
Exercise 6.9 Because of the interaction between the proton and the electron in the ground state of the hydrogen atom, the atom has hyperfine structure. The energy matrix is of the form

H = [ A    0    0    0
      0   −A   2A    0
      0   2A   −A    0
      0    0    0    A ]   (6.180)

in the basis defined by

|1⟩ = |e+, p+⟩   (6.181)
|2⟩ = |e+, p−⟩   (6.182)
|3⟩ = |e−, p+⟩   (6.183)
|4⟩ = |e−, p−⟩   (6.184)

where the notation e+ means that the electron's spin is along the +Z-axis, and e− has the spin pointed along the −Z axis; i.e., |e+, p+⟩ is the state in which both the electron spin and the proton spin are along the +Z axis.

1. Find the energies of the stationary states and sketch an energy level diagram relating the energies and the coupling.

2. Express the stationary states as linear combinations of the basis states.

3. A magnetic field of strength B is applied in the +Z direction and couples the |e+, p+⟩ and |e−, p−⟩ states. Write the new Hamiltonian matrix in the |e±, p±⟩ basis. What happens to the energy levels of the stationary states as a result of the coupling? Add this information to the energy level diagram you sketched in part 1.
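As a numerical illustration of the technique needed for part 1 (my own sketch, using the sign pattern of Eq. 6.180 as reconstructed here and an arbitrary value for A), direct diagonalization exposes a three-fold degenerate level at A and a single level at −3A:

```python
import numpy as np

A = 1.0  # hyperfine coupling constant (arbitrary units)
H = A * np.array([[1,  0, 0, 0],
                  [0, -1, 2, 0],
                  [0,  2, -1, 0],
                  [0,  0, 0, 1]], dtype=float)
vals = np.sort(np.linalg.eigvalsh(H))
print(vals)  # → [-3.  1.  1.  1.]
```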
Exercise 6.10 Consider a spin 1/2 particle with magnetic moment M = γS. The spin space is spanned by the basis of |+⟩ and |−⟩ vectors, which are eigenvectors of S_z with eigenvalues ±ħ/2. At time t = 0, the state of the system is given by

|ψ(0)⟩ = |+⟩.

1. If the observable S_x is measured at time t = 0, what results can be found and with what probabilities?

2. Taking |ψ(0)⟩ as the initial state, we apply a magnetic field parallel to the y axis with strength B_o. Calculate the state of the system at some later time t in the |±⟩ basis.

3. Plot as a function of time the expectation values of the observables S_x, S_y, and S_z. What are the values and probabilities? Is there a relation between B_o and t for the result of one of the measurements to be certain? Give a physical interpretation of this condition.

4. Again, consider the same initial state; this time at t = 0 we measure S_y and find +ħ/2. What is the state vector |ψ(0⁺)⟩ immediately after this measurement?

5. Now we take |ψ(0⁺)⟩ and apply a uniform time-dependent field parallel to the z-axis. The Hamiltonian operator of the spin is then given by

H(t) = ω(t) S_z.

Assume that prior to t = 0, ω(t) = 0 and that for t > 0, ω(t) increases linearly from 0 to ω_o at time t = T. Show that for 0 ≤ t ≤ T, the state vector can be written as

|ψ(t)⟩ = (1/√2)( e^{−iθ(t)}|+⟩ + i e^{+iθ(t)}|−⟩ )

where θ(t) is a real function of t (which you need to determine).

6. Finally, at time t = τ > T, we measure S_y. What results can we find and with what probabilities? Determine the relation which must exist between ω_o and T in order for us to be sure of the result. Give the physical interpretation.
Chapter 7
Perturbation theory
If you perturbate too much, you will go blind.
T. A. Albright
In previous lectures, we discussed how, say through the application of an external driving force, the stationary states of a molecule or other quantum mechanical system can become coupled so that the system can make transitions from one state to another. We can write the transition amplitude exactly as

G(i → j, t) = ⟨j| exp( −iH(t_j − t_i)/ħ ) |i⟩   (7.1)

where H is the full Hamiltonian of the uncoupled system plus the applied perturbation. Thus, G tells us the amplitude for the system prepared in state |i⟩ at time t_i to evolve under the applied Hamiltonian for some time t_j − t_i and be found in state |j⟩. In general this is a complicated quantity to calculate, and often the coupling is very complex. In fact, we can determine G exactly for only a few systems: linearly driven harmonic oscillators and coupled two-level systems, to name the more important ones.

In today's lecture and the following lectures, we shall develop a series of well defined and systematic approximations which are widely used in all applications of quantum mechanics. We start with a general solution of the time-independent Schrödinger equation and eventually expand the solution to infinite order. We will then look at what happens if we have a perturbation or coupling which depends explicitly upon time, and derive perhaps the most important rule in quantum mechanics, which is called Fermi's Golden Rule.¹

7.1 Perturbation Theory

In most cases, it is simply impossible to obtain the exact solution to the Schrödinger equation. In fact, the vast majority of problems which are of physical interest cannot be resolved exactly, and one is forced to make a series of well posed approximations. The simplest approximation is to say that the system we want to solve looks a lot like a much simpler system which we can

¹During a seminar, the speaker mentioned Fermi's Golden Rule. Prof. Wenzel raised his arm and in German-spiked English chided the speaker that it was in fact HIS golden rule!
solve with some additional complexity (which hopefully is quite small). In other words, we want to be able to write our total Hamiltonian as

H = H_o + V

where H_o represents the part of the problem we can solve exactly and V some extra part which we cannot. This we take as a correction or perturbation to the exact problem.

Perturbation theory can be formulated in a variety of ways; we begin with what is typically termed Rayleigh-Schrödinger perturbation theory. This is the typical approach and the one used most commonly. Let H_o|ψ_n^{(0)}⟩ = E_n^{(0)}|ψ_n^{(0)}⟩ and (H_o + λV)|ψ_n⟩ = E_n|ψ_n⟩ be the Schrödinger equations for the uncoupled and perturbed systems. In what follows, we take λ as a small parameter and expand the exact energy in terms of this parameter. Clearly, we write E_n as a function of λ and write

E_n(λ) = E_n^{(0)} + λE_n^{(1)} + λ²E_n^{(2)} + ⋯   (7.2)
Likewise, we can expand the exact wavefunction in terms of λ:

|ψ_n⟩ = |ψ_n^{(0)}⟩ + λ|ψ_n^{(1)}⟩ + λ²|ψ_n^{(2)}⟩ + ⋯   (7.3)

Since we require that |ψ_n⟩ be a solution of the exact Hamiltonian with energy E_n,

H|ψ_n⟩ = (H_o + λV)( |ψ_n^{(0)}⟩ + λ|ψ_n^{(1)}⟩ + λ²|ψ_n^{(2)}⟩ + ⋯ )   (7.4)

       = ( E_n^{(0)} + λE_n^{(1)} + λ²E_n^{(2)} + ⋯ )( |ψ_n^{(0)}⟩ + λ|ψ_n^{(1)}⟩ + λ²|ψ_n^{(2)}⟩ + ⋯ )   (7.5)
Now, we collect terms order by order in λ:

λ⁰:  H_o|ψ_n^{(0)}⟩ = E_n^{(0)}|ψ_n^{(0)}⟩

λ¹:  H_o|ψ_n^{(1)}⟩ + V|ψ_n^{(0)}⟩ = E_n^{(0)}|ψ_n^{(1)}⟩ + E_n^{(1)}|ψ_n^{(0)}⟩

λ²:  H_o|ψ_n^{(2)}⟩ + V|ψ_n^{(1)}⟩ = E_n^{(0)}|ψ_n^{(2)}⟩ + E_n^{(1)}|ψ_n^{(1)}⟩ + E_n^{(2)}|ψ_n^{(0)}⟩

and so on.
The λ⁰ problem is just the unperturbed problem, which we can solve. Taking the λ¹ terms and multiplying by ⟨ψ_n^{(0)}|, we obtain

⟨ψ_n^{(0)}|H_o|ψ_n^{(1)}⟩ + ⟨ψ_n^{(0)}|V|ψ_n^{(0)}⟩ = E_n^{(0)}⟨ψ_n^{(0)}|ψ_n^{(1)}⟩ + E_n^{(1)}⟨ψ_n^{(0)}|ψ_n^{(0)}⟩.   (7.6)

Since ⟨ψ_n^{(0)}|H_o|ψ_n^{(1)}⟩ = E_n^{(0)}⟨ψ_n^{(0)}|ψ_n^{(1)}⟩, the first terms on each side cancel, and we obtain the first order correction for the nth eigenstate:

E_n^{(1)} = ⟨ψ_n^{(0)}|V|ψ_n^{(0)}⟩.

Note that to obtain this we assumed that ⟨ψ_n^{(1)}|ψ_n^{(0)}⟩ = 0. This is easy to check by performing a similar calculation, multiplying instead by ⟨ψ_m^{(0)}| for m ≠ n and noting that the unperturbed states are orthogonal, ⟨ψ_n^{(0)}|ψ_m^{(0)}⟩ = 0:
⟨ψ_m^{(0)}|H_o|ψ_n^{(1)}⟩ + ⟨ψ_m^{(0)}|V|ψ_n^{(0)}⟩ = E_n^{(0)}⟨ψ_m^{(0)}|ψ_n^{(1)}⟩   (7.7)
Rearranging things a bit (using ⟨ψ_m^{(0)}|H_o|ψ_n^{(1)}⟩ = E_m^{(0)}⟨ψ_m^{(0)}|ψ_n^{(1)}⟩), one obtains an expression for the overlap between the unperturbed and perturbed states:

⟨ψ_m^{(0)}|ψ_n^{(1)}⟩ = ⟨ψ_m^{(0)}|V|ψ_n^{(0)}⟩ / ( E_n^{(0)} − E_m^{(0)} )   (7.8)
Now, we use the resolution of the identity to project the perturbed state onto the unperturbed states:

|ψ_n^{(1)}⟩ = Σ_m |ψ_m^{(0)}⟩⟨ψ_m^{(0)}|ψ_n^{(1)}⟩

            = Σ_{m≠n} [ ⟨ψ_m^{(0)}|V|ψ_n^{(0)}⟩ / ( E_n^{(0)} − E_m^{(0)} ) ] |ψ_m^{(0)}⟩   (7.9)

where we explicitly exclude the n = m term to avoid the singularity. Thus, the first-order correction to the wavefunction is

|ψ_n⟩ ≈ |ψ_n^{(0)}⟩ + Σ_{m≠n} [ ⟨ψ_m^{(0)}|V|ψ_n^{(0)}⟩ / ( E_n^{(0)} − E_m^{(0)} ) ] |ψ_m^{(0)}⟩.   (7.10)

This also justifies our assumption above.
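These formulas are easy to test against exact diagonalization. The sketch below (my own illustration; the matrix size, random seed, and coupling strength are arbitrary) compares the perturbative energies, including the second-order term derived later in this chapter, with the exact eigenvalues of H_o + λV for a small λ:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
H0 = np.diag(np.arange(N, dtype=float))        # nondegenerate unperturbed levels
V = rng.normal(size=(N, N)); V = (V + V.T) / 2  # Hermitian perturbation
lam = 1e-3

exact = np.sort(np.linalg.eigvalsh(H0 + lam * V))

# E_n ~ E_n0 + lam*V_nn + lam^2 * sum_{m != n} |V_nm|^2 / (E_n0 - E_m0)
E0 = np.diag(H0)
pt = np.array([E0[n] + lam * V[n, n]
               + lam**2 * sum(V[n, m]**2 / (E0[n] - E0[m])
                              for m in range(N) if m != n)
               for n in range(N)])
print(np.max(np.abs(np.sort(pt) - exact)))  # residual error is O(lam^3)
```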
7.2 Two level systems subject to a perturbation

Let's say that in the |±⟩ basis our total Hamiltonian is given by

H = ε σ_z + V σ_x.   (7.11)

In matrix form (taking ε and V real),

H = [ ε   V
      V  −ε ]   (7.12)

Diagonalization of the matrix is easy; the eigenvalues are

E_+ = +√( ε² + V² )   (7.13)

E_− = −√( ε² + V² )   (7.14)

We can also determine the eigenvectors:

|ψ_+⟩ = cos(θ/2)|+⟩ + sin(θ/2)|−⟩   (7.15)

|ψ_−⟩ = −sin(θ/2)|+⟩ + cos(θ/2)|−⟩   (7.16)

where

tan θ = |V|/ε.   (7.17)

For constant coupling, the energy gap between the coupled states determines how strongly the states are mixed as a result of the coupling.
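A quick numerical confirmation of Eqs. 7.13–7.17 (my own sketch; the values of ε and V are arbitrary):

```python
import numpy as np

eps, V = 1.0, 0.4                  # splitting and coupling (assumed values)
H = np.array([[eps, V], [V, -eps]])
E = np.sort(np.linalg.eigvalsh(H))
print(E, np.sqrt(eps**2 + V**2))   # E_± = ±sqrt(eps² + V²)

theta = np.arctan2(abs(V), eps)    # tan θ = |V|/ε
plus = np.array([np.cos(theta/2), np.sin(theta/2)])
# verify |ψ+> is the eigenvector with eigenvalue +sqrt(eps² + V²)
print(np.allclose(H @ plus, np.sqrt(eps**2 + V**2) * plus))  # → True
```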
7.2.1 Expansion of Energies in terms of the coupling

We can expand the exact expressions for E_± in terms of the coupling, assuming that the coupling is small compared to ε. To leading order in the coupling,

E_+ = +ε( 1 + (1/2)( |V|/ε )² )   (7.18)

E_− = −ε( 1 + (1/2)( |V|/ε )² )   (7.19)

On the other hand, when the two unperturbed states are degenerate (ε = 0), we cannot make this expansion, and

E_+ = +|V|   (7.20)

E_− = −|V|   (7.21)
We can do the same trick with the wavefunctions. When ε ≪ |V| (strong coupling), θ → π/2, and thus

|ψ_+⟩ = (1/√2)( |+⟩ + |−⟩ )   (7.22)

|ψ_−⟩ = (1/√2)( −|+⟩ + |−⟩ ).   (7.23)

In the weak coupling regime, we have, to first order in the coupling,

|ψ_+⟩ ≈ |+⟩ + ( |V|/(2ε) )|−⟩   (7.24)

|ψ_−⟩ ≈ |−⟩ − ( |V|/(2ε) )|+⟩.   (7.25)

In other words, in the weak coupling regime the perturbed states look a lot like the unperturbed states, whereas in the regime of strong mixing they are equal combinations of the unperturbed states.
7.2.2 Dipolar molecule in a homogeneous electric field

Here we take the example of ammonia inversion in the presence of an electric field. From the problem sets, we know that the NH₃ molecule can tunnel between two equivalent C₃ᵥ configurations and that, as a result of the coupling between the two configurations, the unperturbed energy levels E_o are split by an energy A. Defining the unperturbed states as |1⟩ and |2⟩, we can define the tunneling Hamiltonian as

H = [ E_o  −A
      −A   E_o ]   (7.26)

or, in terms of Pauli matrices,

H = E_o σ_o − A σ_x.

Taking ψ to be the solution of the time-dependent Schrödinger equation,

H|ψ(t)⟩ = iħ ∂_t|ψ(t)⟩,

we can insert the identity |1⟩⟨1| + |2⟩⟨2| = 1 and re-write this as

iħ ċ₁ = E_o c₁ − A c₂   (7.27)

iħ ċ₂ = E_o c₂ − A c₁   (7.28)

where c₁ = ⟨1|ψ⟩ and c₂ = ⟨2|ψ⟩ are the projections of the time-evolving wavefunction onto the two basis states. Adding and subtracting these last two equations yields two new equations for the time evolution:

iħ ċ₊ = (E_o − A) c₊   (7.29)

iħ ċ₋ = (E_o + A) c₋   (7.30)

where c_± = c₁ ± c₂
(we'll normalize these later). These two new equations are easy to solve:

c_±(t) = A_± exp( −(i/ħ)(E_o ∓ A)t ).

Thus,

c₁(t) = (1/2) e^{−iE_o t/ħ} ( A₊ e^{+iAt/ħ} + A₋ e^{−iAt/ħ} )

and

c₂(t) = (1/2) e^{−iE_o t/ħ} ( A₊ e^{+iAt/ħ} − A₋ e^{−iAt/ħ} ).

Now we have to specify an initial condition. Let's take c₁(0) = 1 and c₂(0) = 0, corresponding to the system starting off in the |1⟩ state. For this initial condition, A₊ = A₋ = 1 and

c₁(t) = e^{−iE_o t/ħ} cos(At/ħ)

and

c₂(t) = i e^{−iE_o t/ħ} sin(At/ħ).

So the time evolution of the state vector is given by

|ψ(t)⟩ = e^{−iE_o t/ħ} [ |1⟩ cos(At/ħ) + i|2⟩ sin(At/ħ) ].

Left alone, the molecule will thus oscillate between the two configurations at the tunneling frequency, A/ħ.
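The oscillation is simple to reproduce numerically. In this sketch (my own, with arbitrary values of E_o and A, in units where ħ = 1) the population of |1⟩ follows cos²(At/ħ), as derived above:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
E0, A = 0.0, 0.5                      # assumed values (arbitrary units)
H = np.array([[E0, -A], [-A, E0]])    # tunneling Hamiltonian, Eq. 7.26
psi0 = np.array([1.0, 0.0], dtype=complex)   # start in |1>

for t in np.linspace(0.0, 2*np.pi, 5):
    psi = expm(-1j * H * t / hbar) @ psi0
    p1 = abs(psi[0])**2                       # population of |1>
    print(round(t, 3), round(p1, 6), round(np.cos(A*t/hbar)**2, 6))
```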
Now we apply an electric field. When the dipole moment of the molecule is aligned parallel with the field, the molecule is in a lower energy configuration, whereas for the anti-parallel case the system is in a higher energy configuration. Denote the contribution to the Hamiltonian from the electric field (of strength ε, with molecular dipole moment μ_e) as

H_ε = μ_e ε σ_z.

The total Hamiltonian in the |1⟩, |2⟩ basis is thus

H = [ E_o + μ_e ε     −A
      −A              E_o − μ_e ε ]   (7.31)

Solving the eigenvalue problem

|H − λI| = 0,

we find two eigenvalues:

λ_± = E_o ± √( A² + μ_e²ε² ).

These are the exact eigenvalues.

In Fig. 7.1 we show the variation of the energy levels as a function of the field strength.

Figure 7.1: Variation of energy level splitting as a function of the applied field for an ammonia molecule in an electric field
Weak field limit

If μ_e ε/A ≪ 1, then we can use the binomial expansion

√(1 + x²) ≈ 1 + x²/2 + ⋯

to write

√( A² + μ_e²ε² ) = A( 1 + ( μ_e ε/A )² )^{1/2} ≈ A( 1 + (1/2)( μ_e ε/A )² ).   (7.32)

Thus, in the weak field limit, the system can still tunnel between configurations, and the energy levels are given by

E_± ≈ E_o ± ( A + μ_e²ε²/(2A) ).
To understand this a bit further, let us use perturbation theory, in which the tunneling dominates, and treat the external field as a perturbing force. The unperturbed Hamiltonian can be diagonalized by taking symmetric and anti-symmetric combinations of the |1⟩ and |2⟩ basis functions. This is exactly what we did above with the time-dependent coefficients. Here the stationary states are

|±⟩ = (1/√2)( |1⟩ ± |2⟩ )

with energies E_± = E_o ∓ A. So, in the |±⟩ basis the unperturbed Hamiltonian becomes

H = [ E_o − A    0
      0          E_o + A ].

The first order correction to the ground state energy is given by

E^{(1)} = E^{(0)} + ⟨+|H_ε|+⟩.

To compute ⟨+|H_ε|+⟩ we need to transform H_ε from the uncoupled |1⟩, |2⟩ basis to the new coupled |±⟩ basis. This is accomplished by inserting the identity on either side of H_ε and collecting terms:

⟨+|H_ε|+⟩ = ⟨+| ( |1⟩⟨1| + |2⟩⟨2| ) H_ε ( |1⟩⟨1| + |2⟩⟨2| ) |+⟩   (7.33)

          = (1/2)( ⟨1| + ⟨2| ) H_ε ( |1⟩ + |2⟩ )   (7.34)

          = 0   (7.35)
Likewise, ⟨−|H_ε|−⟩ = 0. Thus, the first order corrections vanish. However, since ⟨+|H_ε|−⟩ = μ_e ε does not vanish, we can use second order perturbation theory to find the energy correction:

W_+^{(2)} = Σ_{m≠i} H′_{mi} H′_{im} / ( E_i − E_m )   (7.36)

          = ⟨+|H_ε|−⟩⟨−|H_ε|+⟩ / ( E_+^{(0)} − E_−^{(0)} )   (7.37)

          = ( μ_e ε )² / ( (E_o − A) − (E_o + A) )   (7.38)

          = −μ_e²ε²/(2A)   (7.39)

Similarly, W_−^{(2)} = +μ_e²ε²/(2A). So we get the same variation as we estimated above by expanding the exact energy levels when the field was weak.
Now let us examine the wavefunctions. Remember, the first order correction to the eigenstates is given by

|+^{(1)}⟩ = [ ⟨−|H_ε|+⟩ / ( E_+ − E_− ) ] |−⟩   (7.40)

          = −( μ_e ε/(2A) ) |−⟩   (7.41)

Thus,

|+⟩ = |+^{(0)}⟩ − ( μ_e ε/(2A) ) |−⟩   (7.42)

|−⟩ = |−^{(0)}⟩ + ( μ_e ε/(2A) ) |+⟩   (7.43)

So we see that by turning on the field, we begin to mix the two tunneling states. However, since we have assumed that μ_e ε/A ≪ 1, the final state is not too unlike our initial tunneling states.
Strong field limit

In the strong field limit, we expand the square-root term assuming ( A/(μ_e ε) )² ≪ 1:

√( A² + μ_e²ε² ) = μ_e ε ( ( A/(μ_e ε) )² + 1 )^{1/2}

                 = μ_e ε ( 1 + (1/2)( A/(μ_e ε) )² + ⋯ )

                 ≈ μ_e ε + A²/(2μ_e ε)   (7.44)

For very strong fields, the first term dominates and the energy splitting becomes linear in the field strength. In this limit, the tunneling has been effectively suppressed.
Let us analyze this limit using perturbation theory. Here we work in the |1⟩, |2⟩ basis and treat the tunneling as a perturbation. Since the electric field part of the Hamiltonian is diagonal in the 1, 2 basis, our unperturbed strong-field Hamiltonian is simply

H = [ E_o + μ_e ε     0
      0               E_o − μ_e ε ]   (7.45)

and the perturbation is the tunneling component. As before, the first-order corrections to the energy vanish and we must resort to second order perturbation theory to get the lowest order energy correction. The results are

W^{(2)} = ±A²/(2μ_e ε),

which is exactly what we obtained by expanding the exact eigenenergies above. Likewise, the lowest-order corrections to the state vectors are

|1⟩ = |1⁰⟩ − ( A/(2μ_e ε) ) |2⁰⟩   (7.46)

|2⟩ = |2⁰⟩ + ( A/(2μ_e ε) ) |1⁰⟩   (7.47)

So, for large ε the second order correction to the energy vanishes, the correction to the wavefunction vanishes, and we are left with the unperturbed (i.e. non-tunneling) states.
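Both limits are easy to check numerically. This sketch (my own; E_o, A, and the field values are arbitrary) compares the exact level splitting 2√(A² + μ_e²ε²) with the weak- and strong-field expansions:

```python
import numpy as np

E0, A = 0.0, 1.0  # assumed values

def exact_split(mu_eps):
    """Exact splitting of the two-level ammonia Hamiltonian in a field."""
    H = np.array([[E0 + mu_eps, -A], [-A, E0 - mu_eps]])
    lo, hi = np.sort(np.linalg.eigvalsh(H))
    return hi - lo                       # equals 2*sqrt(A^2 + mu_eps^2)

for mu_eps in [0.1, 10.0]:
    weak = 2*A + mu_eps**2 / A           # weak-field expansion of the splitting
    strong = 2*mu_eps + A**2 / mu_eps    # strong-field expansion
    print(mu_eps, exact_split(mu_eps), weak, strong)
```

At small μ_e ε the weak-field expansion tracks the exact result; at large μ_e ε the strong-field expansion does.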
7.3 Dyson Expansion of the Schrödinger Equation

The Rayleigh-Schrödinger approach is useful for discrete spectra. However, it is not very useful for scattering or for systems with continuous spectra. On the other hand, the Dyson expansion of the wavefunction can be applied to both cases. Its development is similar to the Rayleigh-Schrödinger case. We begin by writing the Schrödinger equation as usual:

(H_o + V)|ψ⟩ = E|ψ⟩   (7.48)

where we define |φ⟩ and W to be the eigenvectors and eigenvalues of part of the full problem,

H_o|φ⟩ = W|φ⟩.   (7.49)

We shall call this the uncoupled problem and assume it is something we can easily solve. We want to write the solution of the fully coupled problem in terms of the solution of the uncoupled problem. First we note that

(E − H_o)|ψ⟩ = V|ψ⟩.   (7.50)

Using the uncoupled problem as a homogeneous solution and the coupling as an inhomogeneous term, we can solve the Schrödinger equation and obtain |ψ⟩ EXACTLY as

|ψ⟩ = |φ⟩ + ( 1/(E − H_o) ) V|ψ⟩   (7.51)
This may seem a bit circular, but we can iterate the solution:

|ψ⟩ = |φ⟩ + ( 1/(E − H_o) ) V|φ⟩ + ( 1/(E − H_o) ) V ( 1/(E − H_o) ) V|ψ⟩.   (7.52)

Or, out to all orders:

|ψ⟩ = |φ⟩ + Σ_{n=1}^∞ ( ( 1/(E − H_o) ) V )ⁿ |φ⟩   (7.53)

Assuming that the series converges rapidly (true for the weak coupling case, V ≪ H_o), we can truncate the series at various orders and write

|ψ^{(0)}⟩ = |φ⟩   (7.54)

|ψ^{(1)}⟩ = |φ⟩ + ( ( 1/(E − H_o) ) V ) |φ⟩   (7.55)

|ψ^{(2)}⟩ = |ψ^{(1)}⟩ + ( ( 1/(E − H_o) ) V )² |φ⟩   (7.56)
and so on. Let's look at |ψ^{(1)}⟩ for a moment. We can insert 1 in the form of Σ_n |φ_n⟩⟨φ_n|:

|ψ_n^{(1)}⟩ = |φ_n⟩ + Σ_m ( 1/(E − H_o) ) |φ_m⟩⟨φ_m|V|φ_n⟩,   (7.57)

i.e.,

|ψ_n^{(1)}⟩ = |φ_n⟩ + Σ_m ( 1/(W_n − W_m) ) |φ_m⟩⟨φ_m|V|φ_n⟩.   (7.58)
Likewise,

|ψ_n^{(2)}⟩ = |ψ_n^{(1)}⟩ + Σ_{lm} [ 1/( (W_n − W_l)(W_n − W_m) ) ] V_{lm} V_{mn} |φ_l⟩   (7.59)

where

V_{lm} = ⟨φ_l|V|φ_m⟩   (7.60)
is the matrix element of the coupling in the uncoupled basis. These last two expressions are the first and second order corrections to the wavefunction.

Note two things. First, we can actually sum the perturbation series exactly by noting that it has the form of a geometric progression, which for x < 1 converges uniformly to

1/(1 − x) = 1 + x + x² + ⋯ = Σ_{n=0}^∞ xⁿ.   (7.61)

Thus, we can write

|ψ⟩ = Σ_{n=0}^∞ ( ( 1/(E − H_o) ) V )ⁿ |φ⟩   (7.62)

    = Σ_{n=0}^∞ ( G_o V )ⁿ |φ⟩   (7.63)

    = ( 1/(1 − G_o V) ) |φ⟩   (7.64)

where G_o = (E − H_o)^{−1}. (This is the time-independent form of the propagator for the uncoupled system.) This analysis is particularly powerful in deriving the propagator for the fully coupled problem.
We now calculate the first and second order corrections to the energy of the system. To do so, we make use of the wavefunctions we just derived and write

E_n^{(1)} = ⟨ψ_n^{(0)}|H|ψ_n^{(0)}⟩ = W_n + ⟨φ_n|V|φ_n⟩ = W_n + V_nn   (7.65)

So the lowest order correction to the energy is simply the matrix element of the perturbation in the uncoupled or unperturbed basis. That was easy. What about the next order correction? Same procedure as before (assuming the states are normalized):

E_n^{(2)} = ⟨ψ_n^{(1)}|H|ψ_n^{(1)}⟩

         = ⟨φ_n|H|φ_n⟩ + Σ_{m≠n} ⟨φ_n|H|φ_m⟩ ( 1/(W_n − W_m) ) ⟨φ_m|V|φ_n⟩ + O[V³]

         = W_n + V_nn + Σ_{m≠n} |V_nm|²/( W_n − W_m )   (7.66)
Notice that we avoid the case where m = n, as that would cause the denominator to be zero, leading to an infinity. This must be avoided. The so-called degenerate case must be handled via explicit matrix diagonalization; closed forms can be obtained for the doubly degenerate case easily.

Also note that the successive approximations to the energy require one less level of approximation in the wavefunction. Thus, second-order energy corrections are obtained from first order wavefunctions.
7.4 Van der Waals forces
7.4.1 Origin of long-ranged attractions between atoms and molecules
One of the underlying principles in chemistry is that molecules at long range are attractive
towards each other. This is clearly true for polar and oppositely charged species. It is also true
for non-polar, neutral species, such as methane and the noble gases. These attractions are due to
polarization forces, or van der Waals forces; the force is attractive and decreases as 1/R⁷, i.e. the
attractive part of the potential goes as 1/R⁶. In this section we will use perturbation theory
to understand the origins of this force, restricting our attention to the interaction between two
hydrogen atoms separated by some distance R.
Let us take the two atoms to be motionless and separated by distance R, with n being the
unit vector pointing from atom A to atom B. Now let r_a be the vector connecting nucleus A to its
electron, and likewise r_b for atom B. Thus each atom has an instantaneous electric dipole moment

μ_a = q r_a    (7.67)
μ_b = q r_b    (7.68)

We will assume that R ≫ r_a, r_b so that the electronic orbitals on each atom do not come into
contact.
Atom A creates an electrostatic potential, U, for atom B in which the charges in B can
interact. This creates an interaction energy W. Since both atoms are neutral, the most important
contribution comes from the dipole-dipole interaction. Thus, the dipole of A
interacts with the electric field E = −∇U generated by the dipole field about B, and vice versa. To
calculate the dipole-dipole interaction, we start with the expression for the electrostatic potential
created by μ_a at B:

U(R) = (1/4πε_o) (μ_a · R)/R³

Thus,

E = −∇U = −(q/4πε_o)(1/R³)(r_a − 3(r_a · n)n).

Thus the dipole-dipole interaction energy is

W = −μ_b · E = (e²/R³)(r_a · r_b − 3(r_a · n)(r_b · n))    (7.70)
where e² = q²/4πε_o. Now let's set the z axis along n, so we can write

W = (e²/R³)(x_a x_b + y_a y_b − 2 z_a z_b).

This will be our perturbing potential, which we add to the total Hamiltonian:

H = H_a + H_b + W

where H_a and H_b are the unperturbed Hamiltonians for the atoms. Let's take for example two hydrogens,
each in the 1s state. The unperturbed system has energy

(H_a + H_b)|1s₁; 1s₂⟩ = (E₁ + E₂)|1s₁; 1s₂⟩ = −2E_I |1s₁; 1s₂⟩,

where E_I is the ionization energy of the hydrogen 1s state (E_I = 13.6 eV). The first-order correction vanishes
since it involves integrals over odd functions. This we can anticipate since the 1s orbitals are
spatially isotropic, so the time-averaged value of the dipole moment is zero. So we have to look
towards second-order corrections.
The second-order energy correction is

E^(2) = ∑′_{nlm; n′l′m′} |⟨nlm; n′l′m′|W|1s_a; 1s_b⟩|² / (−2E_I − E_n − E_n′)

where the primed summation excludes the |1s_a; 1s_b⟩ state. Since W ∝ 1/R³ and the denominator
is negative, we can write

E^(2) = −C/R⁶

which explains the origin of the 1/R⁶ attraction.
Now we evaluate the proportionality constant C. Written explicitly,

C = e⁴ ∑′_{nlm; n′l′m′} |⟨nlm; n′l′m′|(x_a x_b + y_a y_b − 2 z_a z_b)|1s_a; 1s_b⟩|² / (2E_I + E_n + E_n′)    (7.71)
Since n and n′ ≥ 2 and |E_n| = E_I/n² < E_I, we can replace E_n and E_n′ with 0 without
appreciable error. Now we can use the resolution of the identity

1 = ∑_{nlm; n′l′m′} |nlm; n′l′m′⟩⟨nlm; n′l′m′|

to remove the summation, and we get

C = (e⁴/2E_I) ⟨1s_a; 1s_b|(x_a x_b + y_a y_b − 2 z_a z_b)²|1s_a; 1s_b⟩    (7.72)

where E_I is the ionization potential of the 1s state (E_I = 1/2 in atomic units). Surprisingly, this is simple
to evaluate, since we can use symmetry to our advantage. Since the 1s orbitals are spherically
symmetric, any cross-terms of the sort

⟨1s_a|x_a y_a|1s_a⟩ = 0
vanish. This leaves only terms of the sort

⟨1s|x²|1s⟩,

all of which are equal to 1/3 of the mean value of ⟨r²⟩ = ⟨x_a² + y_a² + z_a²⟩. Thus,

C = 6 (e⁴/2E_I)(⟨1s|r²|1s⟩/3)² = 6 e² a_o⁵

where a_o is the Bohr radius (using ⟨1s|r²|1s⟩ = 3a_o² and E_I = e²/2a_o). Thus,

E^(2) = −6 e² a_o⁵/R⁶
What does all this mean? We stated at the beginning that the average dipole moment of an H
1s atom is zero. That does not mean that every single measurement of μ_a will yield zero. What it
means is that the probability of finding the atom with a dipole moment μ_a is the same as that of finding
the dipole vector pointed in the opposite direction. Adding the two together produces a net zero
dipole moment. So it is the fluctuations about the mean which give the atom an instantaneous
dipole field. Moreover, the fluctuations in A are independent of the fluctuations in B, so first-order
effects must be zero, since the average interaction is zero.
Just because the fluctuations are independent does not mean they are not correlated. Consider
the field generated by A as felt by B. This field is due to the fluctuating dipole at A, and it
induces a dipole at B. The induced dipole field is in turn felt by A. As a result the fluctuations become
correlated, which explains why this is a second-order effect. In a sense, A interacts with its own
dipole field through reflection off B.
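The closure result above is easy to check numerically (a sketch in atomic units; the quadrature grid is an arbitrary choice, not from the notes):

```python
import numpy as np

# Atomic units (e = hbar = m_e = a0 = 1).  For hydrogen 1s the radial
# function is R(r) = 2 e^{-r}, so <r^2> = \int_0^inf R^2 r^4 dr = 3.
# With E_I = 1/2 hartree, the closure result
# C = 6 (e^4 / 2 E_I) (<r^2>/3)^2  gives C = 6, i.e. E2 = -6/R^6.
r = np.linspace(0.0, 40.0, 200001)
dr = r[1] - r[0]
R1s = 2.0 * np.exp(-r)
r2_mean = np.sum(R1s**2 * r**4) * dr    # simple quadrature; integrand -> 0 at ends
C = 6.0 * (r2_mean / 3.0)**2 / (2.0 * 0.5)
print(r2_mean, C)                       # ~3 and ~6
```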
7.4.2 Attraction between an atom and a conducting surface
The interaction between an atom or molecule and a surface is a fundamental physical process
in surface chemistry. In this example, we will use perturbation theory to understand the long-ranged
attraction between an atom (again taking an H 1s atom as our species for simplicity) and
a conducting surface. We will take the z axis to be normal to the surface and assume that the
atom is high enough off the surface that its altitude is much larger than atomic dimensions.
Furthermore, we will assume that the surface is a metal conductor, and we will ignore any atomic
level of detail in the surface. Consequently, the atom can only interact with its dipole image on
the opposite side of the surface.
We can use the same dipole-dipole interaction as before with the following substitutions:

e² → −e²    (7.73)
R → 2d    (7.74)
x_b → x′_a = x_a    (7.75)
y_b → y′_a = y_a    (7.76)
z_b → z′_a = −z_a    (7.77)

where the sign changes reflect the sign difference in the image charges. So we get

W = −(e²/8d³)(x_a² + y_a² + 2 z_a²)
as the interaction between a dipole and its image. Taking the atom to be in the 1s ground state,
the first-order term is non-zero:

E^(1) = ⟨1s|W|1s⟩.

Again, using spherical symmetry to our advantage:

E^(1) = −(e²/8d³)(4/3)⟨1s|r²|1s⟩ = −e² a_o²/(2d³).

Thus the atom is attracted to the wall with an interaction energy which varies as 1/d³. This is
a first-order effect, since there is perfect correlation between the two dipoles.
7.5 Perturbations Acting over a Finite Amount of Time
Perhaps the most important application of perturbation theory is to cases in which the coupling
acts for a finite amount of time, such as the coupling of a molecule to a laser field. The laser field
impinges upon a molecule at some instant in time and is turned off some time later. Alternatively,
we can consider cases in which the perturbation is slowly ramped up from 0 to a final value.
7.5.1 General form of time-dependent perturbation theory
In general, we can find the time evolution of the coefficients by solving

iħ ċ_n(t) = E_n c_n(t) + ∑_k W_nk(t) c_k(t)    (7.78)

where the W_nk are the matrix elements of the perturbation. Now let's write the c_n(t) as

c_n(t) = b_n(t) e^(−iE_n t/ħ)

and assume that b_n(t) changes slowly in time. Thus we can write a new set of equations for the
b_n(t) as

iħ ḃ_n = ∑_k e^(iω_nk t) W_nk b_k(t)    (7.79)

where ω_nk = (E_n − E_k)/ħ. Now we assume that the b_n(t) can be written as a perturbation expansion

b_n(t) = b_n^(0)(t) + λ b_n^(1)(t) + λ² b_n^(2)(t) + ···

where, as before, λ is some dimensionless number of order unity. Taking its time derivative and
equating powers of λ, one finds

iħ ḃ_n^(i) = ∑_k e^(iω_nk t) W_nk(t) b_k^(i−1)

and ḃ_n^(0)(t) = 0.
Now we calculate the first-order solution. For t < 0 the system is assumed to be in some
well-defined initial state, |φ_i⟩. Thus only one b_n(t < 0) coefficient can be non-zero, and it must be
independent of time, since the coupling has not been turned on. Thus,

b_n(t = 0) = δ_ni

At t = 0 we turn on the coupling, and W jumps from 0 to W(0). This must hold for all orders
in λ. So we immediately get

b_n^(0)(0) = δ_ni    (7.80)
b_n^(i)(0) = 0    (7.81)

Consequently, for all t > 0, b_n^(0)(t) = δ_ni, which completely specifies the zeroth-order result. This
also gives us the first-order result:

iħ ḃ_n^(1)(t) = ∑_k e^(iω_nk t) W_nk δ_ki = e^(iω_ni t) W_ni(t)    (7.82)

which is simple to integrate:

b_n^(1)(t) = −(i/ħ) ∫₀ᵗ e^(iω_ni s) W_ni(s) ds.

Thus our perturbed wavefunction is written as

|ψ(t)⟩ = e^(−iE_o t/ħ)|φ_o⟩ + ∑_{n≠0} b_n^(1)(t) e^(−iE_n t/ħ)|φ_n⟩    (7.83)
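The first-order result can be tested against exact propagation for a small model (a sketch, not from the notes: a two-level system with ħ = 1 and a made-up constant coupling switched on at t = 0):

```python
import numpy as np

# Two-level system: H0 = diag(0, 1), weak constant coupling w for t > 0.
# Eqs. (7.82)-(7.83) give, to first order,
# b2(t) = -i w \int_0^t e^{i w21 s} ds  =>  P2(t) = 4 w^2 sin^2(w21 t/2)/w21^2.
E1, E2, w = 0.0, 1.0, 0.02
H = np.array([[E1, w], [w, E2]])
t = 3.0

# exact propagation by diagonalizing H
E, U = np.linalg.eigh(H)
psi = U @ (np.exp(-1j * E * t) * (U.T @ np.array([1.0, 0.0])))
P_exact = abs(psi[1])**2

w21 = E2 - E1
P_pt1 = 4 * w**2 * np.sin(w21 * t / 2)**2 / w21**2
print(P_exact, P_pt1)    # nearly identical for weak coupling
```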
7.5.2 Fermi's Golden Rule
Let's consider the time evolution of a state under a small perturbation which varies very slowly
in time. The expansion coefficients of the state in some basis of eigenstates of the unperturbed
Hamiltonian evolve according to

iħ ċ_s(t) = ∑_n H_sn c_n(t)    (7.84)

where H_sn is the matrix element of the full Hamiltonian in the basis:

H_sn = E_s δ_ns + V_sn(t)    (7.85)

Assuming that V(t) is slowly varying and that V_sn ≪ E_s − E_n for all time, we can write the
approximate solution as

c_s(t) = A_s(t) exp(−(i/ħ)E_s t)    (7.86)

where A_s(t) is a function with explicit time dependence. Putting the approximate solution back
into the differential equation:

iħ ċ_s(t) = iħ Ȧ_s(t) e^(−(i/ħ)E_s t) + E_s c_s(t)    (7.87)
          = E_s c_s(t) + ∑_n V_sn A_n(t) e^(−(i/ħ)E_n t)    (7.89)
We now proceed to solve this equation via a series of well-defined approximations. Our first
assumption is that V_sn ≪ E_n − E_s (weak coupling approximation). We can also write

Ȧ_s ≪ 1    (7.90)

since we have assumed that A(t) varies slowly in time. For s = i we have the following initial
conditions:

A_i(0) = 1    (7.91)
A_s(0) = 0    (7.92)

Thus, at t = 0 we can set all the coefficients to zero except for A_i, which is 1. Thus,

Ȧ_s(t) = −(i/ħ) V_si e^((i/ħ)(E_s−E_i)t)    (7.93)

which can be easily integrated:

A_s(t) = −(i/ħ) ∫₀ᵗ dt′ V_si e^((i/ħ)(E_s−E_i)t′)    (7.94)
where all the A_s(t) are assumed to be smaller than unity. Of course, to do the integral we need
to know how the perturbation depends upon time. Let's assume that

V(t) = 2V̂ cos(ωt) = V̂ (e^(+iωt) + e^(−iωt))    (7.95)

where V̂ is a time-independent quantity. Thus we can determine A_s(t) as

A_s(t) = −(i/ħ) ∫₀ᵗ dt′ ⟨s|V̂|i⟩ (e^((i/ħ)(E_s−E_i+ħω)t′) + e^((i/ħ)(E_s−E_i−ħω)t′))    (7.96)
       = ⟨s|V̂|i⟩ [ (1 − e^((i/ħ)(E_s−E_i+ħω)t))/(E_s − E_i + ħω) + (1 − e^((i/ħ)(E_s−E_i−ħω)t))/(E_s − E_i − ħω) ]    (7.98)
Since the coupling matrix element is presumed to be small, the only significant contribution
comes when a denominator is very close to zero, i.e. when

ħω ≈ |E_s − E_i|    (7.99)

For the case E_s > E_i only one term makes a significant contribution, and thus the
transition probability as a function of time is

P_si(t) = |c_s(t)|² = |A_s(t)|²    (7.100)
        = 4|⟨s|V̂|i⟩|² [ sin²((E_s − E_i − ħω)t/(2ħ)) / (E_s − E_i − ħω)² ].    (7.102)
This is the general form for a harmonic perturbation. The function

sin²(ax)/x²    (7.103)

is the sinc-squared function, which is sharply peaked at x = 0
(corresponding to E_s − E_i − ħω = 0). Thus, the transition is only significant when the energy
difference matches the frequency of the applied perturbation. As t → ∞ (with a = t/(2ħ)), the peak
becomes very sharp and approaches a δ-function. The width of the peak is

Δx = 2πħ/t    (7.104)

Thus, the longer we observe a transition, the better resolved the measurement becomes.
This has profound implications for making measurements of metastable systems.
We have an expression for the transition probability between two discrete states. We have
not taken into account the fact that there may be more than one state close by, nor have we
taken into account the finite width of the perturbation (instrument function). When there are
many states close to the transition, we must take into account the density of nearby states. Thus
we define

ρ(E) = ∂N(E)/∂E    (7.105)

as the density of states close to energy E, where N(E) counts the states up to energy E.
Thus, the transition probability to any state other than the original state is

P(t) = ∑_s P_is(t) = ∑_s |A_s(t)|²    (7.106)

To go from a discrete sum to an integral, we replace

∑ → ∫ (dN/dE) dE = ∫ ρ(E) dE    (7.107)

Thus,

P(t) = ∫ dE |A_s(t)|² ρ(E)    (7.108)
Since |A_s(t)|² is peaked sharply at E_s = E_i + ħω, we can treat ρ(E) and the matrix element of V̂ as constant and write

P(t) = 4|⟨s|V̂|i⟩|² ρ(E_s) ∫ [sin²(ax)/x²] dx    (7.109)

Taking the limits of integration to ±∞,

∫ dx sin²(ax)/x² = πa = πt/(2ħ)    (7.110)

In other words,

P(t) = (2πt/ħ) |⟨s|V̂|i⟩|² ρ(E_s)    (7.111)

We can also define a transition rate as

R(t) = ∂P(t)/∂t    (7.112)

Thus, the Golden Rule transition rate from the initial state to state s is

R_is = (2π/ħ) |⟨s|V̂|i⟩|² ρ(E_s)    (7.113)

This is perhaps one of the most important approximations in quantum mechanics, in that it
has implications and applications in all areas, especially spectroscopy and other applications of
matter interacting with electromagnetic radiation.
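The key integral in (7.110) is easy to verify numerically (a sketch; the grid and cutoff are arbitrary choices):

```python
import numpy as np

# Check  \int_{-inf}^{inf} sin^2(a x) / x^2 dx = pi * a  numerically.
# With a = t/(2 hbar), this is why the total transition probability
# grows linearly in time, giving a constant golden-rule rate.
a = 2.5
x = np.linspace(-2000.0, 2000.0, 4000000)   # even point count: grid skips x = 0
dx = x[1] - x[0]
integral = np.sum(np.sin(a * x)**2 / x**2) * dx
print(integral, np.pi * a)                  # both ~ 7.854
```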
7.6 Interaction between an atom and light
What I am going to tell you about is what we teach our physics students in the third
or fourth year of graduate school... It is my task to convince you not to turn away
because you don't understand it. You see, my physics students don't understand it...
That is because I don't understand it. Nobody does.
Richard P. Feynman, QED: The Strange Theory of Light and Matter
Here we explore the basis of spectroscopy. We will consider how an atom interacts with a
photon field in the low-intensity limit, in which dipole interactions are important. We will then
examine non-resonant excitation and discuss the concept of oscillator strength. Finally, we will
look at resonant emission and absorption, concluding with a discussion of spontaneous emission.
In the next section, we will look at non-linear interactions.
7.6.1 Fields and potentials of a light wave
An electromagnetic wave consists of two oscillating vector field components which are perpendicular
to each other and oscillate at an angular frequency ω = ck, where k is the magnitude
of the wavevector, which points in the direction of propagation, and c is the speed of light. For
such a wave, we can always set the scalar part of its potential to zero with a suitable choice of
gauge and describe the fields associated with the wave in terms of a vector potential A given by

A(r, t) = A_o e_z e^(i(ky−ωt)) + A_o* e_z e^(−i(ky−ωt))

Here the wavevector points in the +y direction, the electric field E is polarized in the yz plane,
and the magnetic field B is in the xy plane. Using Maxwell's relations,

E(r, t) = −∂A/∂t = iω e_z (A_o e^(i(ky−ωt)) − A_o* e^(−i(ky−ωt)))

and

B(r, t) = ∇ × A = ik e_x (A_o e^(i(ky−ωt)) − A_o* e^(−i(ky−ωt))).

We are free to choose the time origin, so we choose it so as to make A_o purely imaginary and
set

iω A_o = ℰ/2    (7.114)
ik A_o = ℬ/2    (7.115)

where ℰ and ℬ are real quantities such that

ℰ/ℬ = ω/k = c.

Thus

E(r, t) = ℰ e_z cos(ky − ωt)    (7.116)
B(r, t) = ℬ e_x cos(ky − ωt)    (7.117)

where ℰ and ℬ are the magnitudes of the electric and magnetic field components of the plane
wave.
Lastly, we define what is known as the Poynting vector (yes, it's pronounced "pointing"), which
is parallel to the direction of propagation:

S = ε_o c² E × B.    (7.118)

Using the expressions for E and B above and averaging over several oscillation periods:

⟨S⟩ = (ε_o c/2) ℰ² e_y    (7.119)
7.6.2 Interactions at Low Light Intensity
The electromagnetic wave we just discussed can interact with an atomic electron. The Hamiltonian
of this electron can be written

H = (1/2m)(P − qA(r, t))² + V(r) − (q/m) S · B(r, t)

where the first term contains the interaction between the electron and the electric field of
the wave and the last term represents the interaction between the magnetic moment of the
electron and the magnetic field of the wave. In expanding the kinetic energy term, we have
to remember that momentum and position do not commute. However, in the present case A is
parallel to the z axis, and P_z and y commute. So we wind up with the following:

H = H_o + W

where

H_o = P²/2m + V(r)

is the unperturbed (atomic) Hamiltonian and

W = −(q/m) P · A − (q/m) S · B + (q²/2m) A².

The first two terms depend linearly upon A and the third is quadratic in A. So, for low intensity we
can take

W = −(q/m) P · A − (q/m) S · B.
Before moving on, we evaluate the relative importance of each term by orders of magnitude
for transitions between bound states. In the second term, the contribution of the spin operator
is of order ħ and the contribution from B is of order kA. Thus,

W_B/W_E = |(q/m) S · B| / |(q/m) P · A| ≈ ħk/p

ħ/p is on the order of an atomic radius, a_o, and k = 2π/λ, where λ is the wavelength of the light,
typically on the order of 1000 a_o. Thus,

W_B/W_E ≈ a_o/λ ≪ 1.

So the magnetic coupling is not at all important, and we focus only upon the coupling to the
electric field.
Using the expressions we derived previously, the coupling to the electric field component of
the light wave is given by

W_E = −(q/m) p_z (A_o e^(iky) e^(−iωt) + A_o* e^(−iky) e^(+iωt)).

Now we expand the exponential in powers of y:

e^(iky) = 1 + iky − (1/2)k²y² + ...

Since ky ≈ a_o/λ ≪ 1, we can, to a good approximation, keep only the first term. Thus we get the
dipole operator

W_D = (qℰ/mω) p_z sin(ωt).

In the electric dipole approximation, W(t) = W_D(t).
Note that one might expect that W_D should have been written as

W_D = −qℰ z cos(ωt)

since we are, after all, talking about a dipole moment associated with the motion of the electron
about the nucleus. Actually, the two expressions are equivalent! The reason is that we can always
choose a different gauge to represent the physical problem without changing the physical result.
To get the present result, we used

A = (ℰ/ω) e_z sin(ky − ωt)
and

U(r) = 0

as the scalar potential. A gauge transformation is introduced by taking a function f and defining
a new vector potential and a new scalar potential as

A′ = A + ∇f
U′ = U − ∂f/∂t

We are free to choose f however we desire. Let's take f = zℰ sin(ωt)/ω. Thus,

A′ = e_z (ℰ/ω)(sin(ky − ωt) + sin(ωt))

and

U′ = −zℰ cos(ωt)

is the new scalar potential. In the electric dipole approximation ky is small, so we set ky = 0
everywhere and obtain A′ = 0. Thus, the total Hamiltonian becomes

H = H_o + qU′(r, t)

with perturbation

W′_D = −qℰ z cos(ωt).
This is the usual form of the dipole coupling operator. However, when we do the gauge transformation,
we have to transform the state vector as well.
Next, let us consider the matrix elements of the dipole operator between two stationary states
of H_o, |φ_i⟩ and |φ_f⟩, with eigenenergies E_i and E_f respectively. The matrix elements of W_D are
given by

W_fi(t) = (qℰ/mω) sin(ωt) ⟨φ_f|p_z|φ_i⟩

We can evaluate this by noting that

[z, H_o] = iħ ∂H_o/∂p_z = iħ p_z/m.

Thus,

⟨φ_f|p_z|φ_i⟩ = i m ω_fi ⟨φ_f|z|φ_i⟩.

Consequently,

W_fi(t) = i qℰ (ω_fi/ω) sin(ωt) z_fi.

Thus, the matrix elements of the dipole operator are those of the position operator. This determines
the selection rules for the transition.
Before going through any specific details, let us consider what happens if the frequency ω
does not coincide with ω_fi. Specifically, we limit ourselves to transitions originating from the
ground state of the system, |φ_o⟩. We will assume that the field is weak and that in the field the
atom acquires a time-dependent dipole moment which oscillates at the same frequency as the
field via a forced oscillation. To simplify matters, let's assume that the electron is harmonically
bound to the nucleus in a classical potential

V(r) = (1/2) m ω_o² r²

where ω_o is the natural frequency of the electron.
The classical motion of the electron is given by the equations of motion (via the Ehrenfest
theorem)

z̈ + ω_o² z = (qℰ/m) cos(ωt).

This is the equation of motion for a harmonic oscillator subject to a periodic force. This inhomogeneous
differential equation can be solved (using Fourier transform methods), and the result
is

z(t) = A cos(ω_o t − φ) + [qℰ/(m(ω_o² − ω²))] cos(ωt)

where the first term represents the harmonic motion of the electron in the absence of the driving
force. The two coefficients, A and φ, are determined by the initial conditions. If there is very
slight damping of the natural motion, the first term disappears after a while, leaving only the
second, forced oscillation, so we write

z = [qℰ/(m(ω_o² − ω²))] cos(ωt).

Thus, we can write the classical induced electric dipole moment of the atom in the field as

D = qz = [q²ℰ/(m(ω_o² − ω²))] cos(ωt).

Typically this is written in terms of a susceptibility, χ, where

χ = q²/(m(ω_o² − ω²)).
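The forced-oscillation amplitude can be checked by direct integration (a sketch with made-up parameters; a small damping term, not in the equation above, is added only to kill the transient):

```python
import numpy as np

# Semi-implicit Euler integration of  z'' + gamma z' + w0^2 z = F cos(w t),
# where F stands for qE/m.  After the transient decays, the amplitude of
# the forced oscillation should approach F / (w0^2 - w^2), here 1/3.
w0, w, F, gamma = 2.0, 1.0, 1.0, 0.05
dt, nsteps = 1.0e-3, 400000
z, v, amax = 0.0, 0.0, 0.0
for n in range(nsteps):
    acc = -w0**2 * z - gamma * v + F * np.cos(w * n * dt)
    v += acc * dt
    z += v * dt
    if n > nsteps - 7000:          # sample slightly more than one full cycle
        amax = max(amax, abs(z))
print(amax, F / (w0**2 - w**2))    # both ~ 0.333
```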
Now we look at this from a quantum mechanical point of view. Again, take the initial state
to be the ground state and H = H_o + W_D as the Hamiltonian. The time-evolved state can
be written as a superposition of eigenstates of H_o:

|ψ(t)⟩ = ∑_n c_n(t)|φ_n⟩

To evaluate this we can use the results derived previously in our derivation of the golden rule:

|ψ(t)⟩ = |φ_o⟩ + ∑_{n≠0} (qℰ/2imħ) ⟨n|p_z|φ_o⟩ [ (e^(iω_no t) − e^(iωt))/(ω_no + ω) + (e^(iω_no t) − e^(−iωt))/(ω_no − ω) ] |φ_n⟩    (7.120)

where we have removed a common phase factor. We can then calculate the dipole moment
expectation value, ⟨D(t)⟩, as

⟨D(t)⟩ = (2q²/ħ) ℰ cos(ωt) ∑_n ω_no |⟨φ_n|z|φ_o⟩|² / (ω_no² − ω²)    (7.121)
Oscillator Strength
We can now notice the similarity between a driven harmonic oscillator and the expectation value
of the dipole moment of an atom in an electric field. We define the oscillator strength as a
dimensionless, real number characterizing the transition between |φ_o⟩ and |φ_n⟩:

f_no = (2m ω_no/ħ) |⟨φ_n|z|φ_o⟩|²

In Exercise 2.4, we proved the Thomas-Reiche-Kuhn sum rule, which we can write in terms of
the oscillator strengths:

∑_n f_no = 1

This can be written in a very compact form:

(m/ħ²) ⟨φ_o|[x, [H, x]]|φ_o⟩ = 1.
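The sum rule can be checked numerically for a model system (a sketch, not from the notes: a 1D harmonic oscillator with m = ħ = ω = 1, discretized by central differences; grid sizes are arbitrary choices):

```python
import numpy as np

# Verify the Thomas-Reiche-Kuhn sum rule  sum_n f_n0 = 1  for a 1D
# harmonic oscillator (m = hbar = omega = 1) on a finite grid.
N, L = 800, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
T = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * dx**2)
H = T + np.diag(0.5 * x**2)
E, psi = np.linalg.eigh(H)              # grid-normalized eigenvectors
x_n0 = psi.T @ (x * psi[:, 0])          # matrix elements <n|x|0>
f = 2.0 * (E - E[0]) * x_n0**2          # oscillator strengths f_n0
print(f.sum())                          # -> 1 to grid accuracy
```

For the oscillator only the 0 → 1 transition carries weight, so f₁₀ alone nearly exhausts the sum.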
7.6.3 Photoionization of Hydrogen 1s
Up until now we have considered transitions between discrete states introduced via some external
perturbation. Here we consider the single-photon photoionization of the hydrogen 1s orbital to
illustrate how the golden rule formalism can be used to calculate photoionization cross-sections
as a function of the photon frequency. We already have an expression for the dipole coupling:

W_D = (qℰ/mω) p_z sin(ωt)    (7.122)

and we have derived the golden rule rate for transitions between states:

R_if = (2π/ħ) |⟨f|V|i⟩|² δ(E_i − E_f + ħω).    (7.123)

For transitions to the continuum, the final states are the plane waves

ψ(k) = (1/Ω)^(1/2) e^(ik·r),    (7.124)

where Ω is the normalization volume. Thus the matrix element ⟨1s|V|k⟩ can be written in terms of

⟨1s|p_z|k⟩ = (ħk_z/Ω^(1/2)) ∫ ψ_1s*(r) e^(ik·r) dr.    (7.125)

To evaluate the integral, we need to transform the plane-wave function into spherical coordinates.
This can be done via the expansion

e^(ik·r) = ∑_l i^l (2l + 1) j_l(kr) P_l(cos θ)    (7.126)

where j_l(kr) is a spherical Bessel function and P_l(x) is a Legendre polynomial, which we can
also write as a spherical harmonic function,

P_l(cos θ) = (4π/(2l + 1))^(1/2) Y_l0(θ, φ).    (7.127)
Thus, the integral we need to perform is

⟨1s|k⟩ = (1/(πΩ)^(1/2)) ∑_l i^l (4π(2l + 1))^(1/2) ∫ Y_l0(θ, φ) dΩ_k ∫₀^∞ r² e^(−r) j_l(kr) dr.    (7.128)

The angular integral is done by orthogonality, producing a delta function which restricts the
sum to l = 0 only, leaving

⟨1s|k⟩ = (4π/(πΩ)^(1/2)) ∫₀^∞ r² e^(−r) j_0(kr) dr.    (7.129)

The radial integral can be easily performed using

j_0(kr) = sin(kr)/(kr)    (7.130)

leaving

⟨1s|k⟩ = 8(π/Ω)^(1/2) (1/(1 + k²)²).    (7.131)
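The radial integral is easy to check numerically (a sketch in atomic units; the value of k is an arbitrary choice):

```python
import numpy as np

# Verify  \int_0^inf r^2 e^{-r} j0(kr) dr = 2 / (1 + k^2)^2,
# where j0(x) = sin(x)/x is the l = 0 spherical Bessel function.
k = 1.7
r = np.linspace(1.0e-6, 60.0, 600000)
dr = r[1] - r[0]
num = np.sum(r**2 * np.exp(-r) * np.sin(k * r) / (k * r)) * dr
print(num, 2.0 / (1.0 + k**2)**2)    # both ~ 0.1322
```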
Thus, the matrix element is given by

⟨1s|V|k⟩ = (qℰħk_z/mω) · 8(π/Ω)^(1/2) (1/(1 + k²)²)    (7.132)

This we can insert directly into the golden rule formula to get the photoionization rate to a
given k state:

R_0k = (2π/ħ) |⟨1s|V|k⟩|² δ(E_o − E_k + ħω)    (7.133)

which, using δ(E_o − E_k + ħω) = (2m/ħ²) δ(k² − K²), we can manipulate into reading

R_0k = (256π² m/(ħΩ)) (qℰ/mω)² k_z² δ(k² − K²)/(1 + k²)⁴    (7.134)

where we write K² = 2m(ħω − E_I)/ħ² to make our notation a bit more compact. Eventually
we want to know the rate as a function of the photon frequency, so let's put everything except
the frequency and the volume into a single constant, Γ₁, which is related to the intensity
of the incident photons (the angular factor from k_z² is absorbed into Γ₁ as well):

R_0k = (Γ₁/Ω) (1/ω²) δ(k² − K²)/(1 + k²)⁴.    (7.135)
Now we sum over all possible final states to get the total photoionization rate. To do this, we
need to turn the sum over final states into an integral, which is done by

∑_k = (Ω/(2π)³) 4π ∫₀^∞ k² dk    (7.136)

Thus,

R = (Γ₁/ω²) (1/(2π)³) 4π ∫₀^∞ k² δ(k² − K²)/(1 + k²)⁴ dk
  = (Γ₁/ω²) (1/2π²) ∫₀^∞ k² δ(k² − K²)/(1 + k²)⁴ dk

Now we do a change of variables, y = k² and dy = 2k dk, so that the integral becomes

∫₀^∞ k² δ(k² − K²)/(1 + k²)⁴ dk = (1/2) ∫₀^∞ y^(1/2) δ(y − K²)/(1 + y)⁴ dy
                                = K/(2(1 + K²)⁴)    (7.137)
Pulling everything together, we see that the total photoionization rate is given by

R = (Γ₁/ω²) (1/2π²) K/(2(1 + K²)⁴)
  = (Γ₁/ω²) (1/4π²) (2m(ħω − E_I)/ħ²)^(1/2) / (1 + 2m(ħω − E_I)/ħ²)⁴
  = Γ₁ (2ω − 1)^(1/2)/(64π² ω⁶)    (7.138)

where in the last line we have converted to atomic units (so that K² = 2ω − 1 and 1 + K² = 2ω) to
clean things up a bit. This expression is clearly valid only when ħω > E_I = 1/2 hartree (13.6 eV),
and a plot of the photoionization rate is given in Fig. 7.2.
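The frequency dependence can be explored numerically (a sketch; only the shape, not the absolute prefactor Γ₁, is meaningful here, and the shape follows the atomic-units form derived above):

```python
import numpy as np

# Shape of the photoionization rate in atomic units:
# R(w) ~ sqrt(2 w - 1) / w^6 for w > E_I = 1/2, and R = 0 below threshold.
def rate_shape(w):
    k2 = np.clip(2.0 * w - 1.0, 0.0, None)
    return np.where(w > 0.5, np.sqrt(k2) / w**6, 0.0)

w = np.linspace(0.3, 2.0, 17001)
R = rate_shape(w)
w_peak = w[np.argmax(R)]
print(w_peak)    # the rate peaks just above threshold, at w = 6/11 ~ 0.545
```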
7.6.4 Spontaneous Emission of Light
The emission and absorption of light by an atom or molecule is perhaps the most spectacular
and important phenomenon in the universe. It happens when an atom or molecule undergoes a
transition from one state to another due to its interaction with the electromagnetic field. Because
the electromagnetic field can never be entirely eliminated from any so-called isolated system
(except in certain quantum confinement experiments), no atom or molecule is ever really isolated.
Thus, even in the absence of an explicitly applied field, an excited system can spontaneously emit
a photon and relax to a lower energy state. Since we have all done spectroscopy experiments
at one point in our education or another, we all know that the transitions are between discrete
energy levels. In fact, it was in the examination of light passing through glass and light emitted
from flames that people in the 19th century began to speculate that atoms can absorb and emit
light only at specific wavelengths.
We will use the golden rule to deduce the probability of a transition under the influence of an applied
light field (laser or otherwise). We will argue that the system is in equilibrium with the electromagnetic
field and that the laser drives the system out of equilibrium. From this we can deduce
the rate of spontaneous emission in the absence of the field.
204
0.5 1 1.5 2
w HauL
0.2
0.4
0.6
0.8
R HarbL
Figure 7.2: Photo-ionization spectrum for hydrogen atom.
The electric field associated with a monochromatic light wave of average intensity ⟨I⟩ satisfies

⟨I⟩ = c⟨ρ⟩    (7.139)
    = c ( (ε_o/2)⟨E²⟩ + (1/2μ_o)⟨B²⟩ )    (7.140)
    = (ε_o/μ_o)^(1/2) ℰ_o²/2    (7.141)
    = c ε_o ℰ_o²/2    (7.142)

where ρ is the energy density of the field and ℰ_o and B_o = (1/c)ℰ_o are the maximum amplitudes
of the E and B fields of the wave. Units are MKS.
The em wave in reality contains a spread of frequencies, so we must also specify the intensity
density over a definite frequency interval:

(dI/dω) dω = c u(ω) dω    (7.143)

where u(ω) is the energy density per unit frequency at ω.
Within the semiclassical dipole approximation, the coupling between a molecule and the
light wave is

W(t) = −(μ · ε) ℰ_o cos(ωt)    (7.144)
where μ is the dipole moment vector and ε is the polarization vector of the wave. Using this
result, we can go back to the golden rule derivation above and deduce that

P_fi(ω, t) = 4|⟨f|μ·ε|i⟩|² (ℰ_o²/4) [ sin²((E_f − E_i − ħω)t/(2ħ)) / (E_f − E_i − ħω)² ]    (7.145)

Now we can take into account the spread of frequencies of the em wave around the resonant
value ω_o = (E_f − E_i)/ħ. To do this we note

ℰ_o² = 2⟨I⟩/(c ε_o)    (7.146)

and replace ⟨I⟩ with (dI/dω) dω:

P_fi(t) = ∫₀^∞ dP_fi(t, ω)    (7.147)
        = (2/(c ε_o)) (dI/dω)|_{ω_o} |⟨f|μ·ε|i⟩|² ∫₀^∞ [ sin²((ħω_o − ħω)t/(2ħ)) / (ħω_o − ħω)² ] dω    (7.148)

To get this, we assume that dI/dω and the matrix element of the coupling vary slowly with
frequency compared to the sin²(x)/x² term; as far as doing the integrals is concerned,
they are both constants. With ω_o so fixed, we can do the integral over ω, which gives πt/(2ħ²),
and we obtain the golden rule transition rate:

k_fi = (π/(c ε_o ħ²)) |⟨f|μ·ε|i⟩|² (dI/dω)|_{ω_o}    (7.149)
Notice that this equation predicts that the rate for excitation is identical to the rate for
de-excitation. This is because the radiation field contains both +ω and −ω terms (unless the
field is circularly polarized), so the transition rate from a state of lower energy to one of higher energy
is the same as that from a higher energy state to a lower energy state.
However, we know that systems can emit spontaneously, in which a state of higher energy
goes to a state of lower energy in the absence of an external field. This is difficult to explain
in the present framework, since we have assumed that |i⟩ is stationary.
Let's assume that we have an ensemble of atoms in a cavity containing em radiation and that the
system is in thermodynamic equilibrium. (Thought you could escape thermodynamics, eh?) Let
E₁ and E₂ be the energies of two states of the atom with E₂ > E₁. When equilibrium has been
established, the number of atoms in the two states is determined by the Boltzmann equation:

N₂/N₁ = (N e^(−βE₂))/(N e^(−βE₁)) = e^(−β(E₂−E₁))    (7.150)

where β = 1/kT. The number of atoms (per unit time) undergoing the transition from 1 to 2 is
proportional to the rate k₁₂ induced by the radiation and to the number of atoms in the initial state, N₁:

(dN/dt)(1 → 2) = N₁ k₁₂    (7.151)

The number of atoms going from 2 to 1 is proportional to N₂ and to k₂₁ + A, where A is the
spontaneous transition rate:

(dN/dt)(2 → 1) = N₂ (k₂₁ + A)    (7.152)

At equilibrium, these two rates must be equal. Thus,

(k₂₁ + A)/k₁₂ = N₁/N₂ = e^(βħω)    (7.153)
Now let's refer to the result for the induced rate k₂₁ and express it in terms of the energy density
per unit frequency of the cavity, u(ω):

k₂₁ = (π/(ε_o ħ²)) |⟨2|μ·ε|1⟩|² u(ω) = B₂₁ u(ω)    (7.154)

where

B₂₁ = (π/(ε_o ħ²)) |⟨2|μ·ε|1⟩|².    (7.155)

For em radiation in equilibrium at temperature T, the energy density per unit frequency is given
by Planck's law:

u(ω) = (ħω³/π²c³) (1/(e^(βħω) − 1))    (7.156)
Combining the results, we obtain

B₁₂/B₂₁ + (A/B₂₁)(1/u(ω)) = e^(βħω)    (7.157)
B₁₂/B₂₁ + (A/B₂₁)(π²c³/ħω³)(e^(βħω) − 1) = e^(βħω)    (7.158)

which must hold for all temperatures. Since

B₂₁/B₁₂ = 1,    (7.160)

we get

(A/B₂₁)(π²c³/ħω³) = 1    (7.161)

and thus the spontaneous emission rate is

A = (ħω³/π²c³) B₁₂    (7.162)
  = (ω³/(π ε_o ħ c³)) |⟨2|μ·ε|1⟩|²    (7.163)

This is a key result, in that it determines the probability for the emission of light by atomic
and molecular systems. We can use it to compute the intensity of spectral lines in terms of the
electric dipole moment operator. The lifetime of the excited state is then inversely proportional
to the spontaneous decay rate:

τ = 1/A    (7.164)

To compute the matrix elements, we can make the rough approximation ⟨μ⟩ ≈ ⟨x⟩e, where
e is the charge of an electron and ⟨x⟩ is on the order of atomic dimensions. We must also include
a factor of 1/3 for averaging over all orientations of (μ·ε), since at any given time the moments
are not all aligned:

1/τ = A = (4/3) (ω³/ħc³) (e²/4πε_o) |⟨x⟩|²    (7.165)

The factor

e²/(4πε_o ħc) = 1/137    (7.166)

is the fine structure constant. Also, ω/c = 2π/λ. So, setting ⟨x⟩ ≈ 1 Å,

A = (4/3)(1/137)(2π/λ)³ c (1 Å)² ≈ 6 × 10¹⁸ [λ(Å)]^(−3) sec^(−1)    (7.167)

So, for a typical wavelength, λ ≈ 4 × 10³ Å,

τ ≈ 10^(−8) sec    (7.168)
which is consistent with observed lifetimes.
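The order-of-magnitude estimate is easy to reproduce (a sketch in SI units, taking ⟨x⟩ ≈ 1 Å and λ = 4000 Å as in the text):

```python
import math

# A ~ (4/3) * alpha * (2 pi / lambda)^3 * c * <x>^2   (SI units)
alpha = 1.0 / 137.0
c = 2.998e8            # speed of light, m/s
x = 1.0e-10            # <x> ~ 1 Angstrom
lam = 4.0e-7           # lambda = 4000 Angstrom
A = (4.0 / 3.0) * alpha * (2.0 * math.pi / lam)**3 * c * x**2
tau = 1.0 / A
print(A, tau)          # A ~ 1e8 s^-1, tau ~ 1e-8 s
```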
We can also compare with classical radiation theory. The power radiated by an accelerated
particle of charge e is given by the Larmor formula (c.f. Jackson):

P = (2/3) (e²/4πε_o) (v̇²/c³)    (7.169)

where v̇ is the acceleration of the charge. Assuming the particle moves in a circular orbit of
radius r with angular velocity ω, the acceleration is v̇ = ω²r. Thus, the time required to radiate
energy ħω/2 is equivalent to the lifetime τ:

1/τ_class = 2P/(ħω)    (7.170)
          = (1/ħω)(4/3)(e²/4πε_o)(ω⁴r²/c³)    (7.171)
          = (4/3)(ω³/ħc³)(e²/4πε_o) r².    (7.172)

This qualitative agreement between the classical and quantum results is a manifestation of the
correspondence principle. However, it must be emphasized that the MECHANISM for radiation
is entirely different. The classical result will never predict a discrete spectrum. This was in fact
a very early indication that something was certainly amiss with the classical electromagnetic
field theories of Maxwell and others.
7.7 Time-dependent golden rule
In the last lecture we derived the the Golden Rule (GR) transition rate as
k(t) =
2
h
['s[

V [n`[
2
(E
s
) (7.173)
This is perhaps one of the most important approximations in quantum mechanics in that it has implications and applications in all areas, especially spectroscopy and other applications of matter interacting with electromagnetic radiation. In today's lecture, I want to show how we can use the Golden Rule to simplify some very complex problems, and how we used the GR to solve a real problem. The GR has been used by a number of people in chemistry to look at a wide variety of problems. In fact, most of this lecture comes right out of the Journal of Chemical Physics. Some papers you may be interested in knowing about include:
1. B. J. Schwartz, E. R. Bittner, O. V. Prezhdo, and P. J. Rossky, J. Chem. Phys. 104, 5242 (1996).
2. E. Neria and A. Nitzan, J. Chem. Phys. 99, 1109 (1993).
3. A. Staib and D. Borgis, J. Chem. Phys. 103, 2642 (1995).
4. E. J. Heller, J. Chem. Phys. 75, 2923 (1981).
5. W. Gelbart, K. Spears, K. F. Freed, J. Jortner, and S. A. Rice, Chem. Phys. Lett. 6, 345 (1970).
The focus of the lecture will be to use the GR to calculate the transition rate from one adiabatic potential energy surface to another via non-radiative decay. Recall that in a previous lecture we talked about the potential energy curves of a molecule, and that these are obtained by solving the Schrodinger equation for the electronic degrees of freedom assuming that the nuclei move very slowly. We defined the adiabatic or Born-Oppenheimer potential energy curves for the nuclei in a molecule by solving the Schrodinger equation for the electrons at fixed nuclear positions. These potential curves are thus the electronic eigenvalues parameterized by the nuclear positions,
V_i(R) = E_i(R).   (7.174)
Under the BO approximation, the nuclei move about on a single energy surface and the electronic wavefunction is simply |φ_i(R)⟩, parameterized by the nuclear positions. However, when the nuclei are moving fast, this assumption is no longer true since
i\hbar\frac{d}{dt}|\phi_i(R)\rangle = \left(-i\hbar\,\frac{\partial R}{\partial t}\cdot\nabla_R + E_i(R)\right)|\phi_i(R)\rangle.   (7.175)
That is to say, when the nuclear velocities are large in a direction along which the wavefunction changes a lot with varying nuclear position, the Born-Oppenheimer approximation is not so good and the electronic states become coupled by the nuclear motion. This leads to a wide variety of physical phenomena, including non-radiative decay and intersystem crossing, and is an important mechanism in electron transfer dynamics.
The picture I want to work with today is a hybrid quantum/classical (or semi-classical) picture. I want to treat the nuclear dynamics as being mostly classical with some quantum aspects. In this picture I will derive a semi-classical version of the Golden Rule transition rate which can be used in concert with a classical molecular dynamics simulation.
We start with the GR transition rate we derived in the last lecture. We shall for now assume that the states we are coupling are the vibrational-electronic states of the system, written as
|\psi_i\rangle = |\phi_i(R)\,I(R)\rangle   (7.176)
where R denotes the nuclear positions, |φ_i(R)⟩ is the adiabatic electronic eigenstate obtained at position R, and |I(R)⟩ is the initial nuclear vibrational state on the φ_i(R) potential energy surface. Let this denote the initial quantum state, and denote by
|\psi_f\rangle = |\phi_f(R)\,F(R)\rangle   (7.177)
the final quantum state. The GR transition rate at nuclear position R is thus
k_{if} = \frac{2\pi}{\hbar}\sum_f |\langle\psi_i|V|\psi_f\rangle|^2\,\delta(E_i - E_f)   (7.178)
where the sum is over the final density of (vibrational) states and the energies in the δ-function are measured with respect to a common origin, including the electronic energy gap. We can also define a thermal rate constant by ensemble averaging over a collective set of initial states.
7.7.1 Non-radiative transitions between displaced Harmonic Wells
An important application of the GR comes in evaluating electron transfer rates between the electronic states of a molecule. Let's approximate the diabatic electronic energy surfaces of a molecule as harmonic wells offset in energy by Δε and with the well minima displaced by some amount x_o. Let the curves cross at x_s, and assume there is a spatially dependent coupling V(x) which couples the diabatic electronic states. Let T_1 denote the upper surface and S_o denote the ground-state surface. The diabatic coupling is maximized at the crossing point and decays rapidly as we move away. Because these electronic states are coupled, the vibrational states on T_1 become coupled to the vibrational states on S_o, and vibrational amplitude can tunnel from one surface to the other. The tunneling rate can be estimated very well using the Golden Rule (assuming that the amplitude crosses from one surface to the other only once):
k_{TS} = \frac{2\pi}{\hbar}\,|\langle\psi_T|V|\psi_S\rangle|^2\,\rho(E_s)   (7.179)
The wavefunction on each surface is the vibronic function mentioned above. This we will write as a product of an electronic term |φ_T⟩ (or |φ_S⟩) and a vibrational term |n_T⟩ (or |n_S⟩). For shorthand, let's write the electronic contribution as
V_{TS} = \langle\phi_T|V|\phi_S\rangle   (7.180)
Say I want to know the probability of finding the system in some initial vibronic state after some time. The rate of decay of this state is the sum over all possible decay channels, so I must sum over all final states that I can decay into. The decay rate is thus
k_{TS} = \frac{2\pi}{\hbar}\sum_{m_S}|\langle n_T|V_{TS}|m_S\rangle|^2\,\rho(E_s)   (7.181)
This equation is completely exact (within the GR approximation) and can be used in this form. However, let's make a series of approximations, derive a set of approximate rates, and compare the various approximations.
Condon Approximation
First I note that the density of states can be rewritten as
\rho(E_s) = {\rm Tr}\,\delta(H_o - E_s)   (7.182)
so that
k_{TS} = \frac{2\pi}{\hbar}\sum_{m_S}|\langle n_T|V_{TS}|m_S\rangle|^2\,\delta(E_n - E_s)   (7.183)
where E_n − E_s is the energy gap between the initial and final states, including the electronic energy gap. What this means is that if the energy difference between the bottoms of the respective wells is large, then the initial state will be coupled to the high-lying vibrational states in S_o. Next I make the Condon approximation, that
\langle n_T|V_{TS}|m_S\rangle \approx V_{TS}\,\langle n_T|m_S\rangle   (7.184)
where ⟨n_T|m_S⟩ is the overlap between vibrational state |n_T⟩ in the T_1 well and state |m_S⟩ in the S_o well. These are called Franck-Condon factors.
Evaluation of Franck-Condon Factors
Define the Franck-Condon factor as
\langle n_T|m_S\rangle = \int dx\,\phi^{(T)*}_n(x)\,\phi^{(S)}_m(x)   (7.185)
where φ^{(T)}_n(x) is the coordinate representation of a harmonic oscillator state. We shall assume that the two wells are offset by x_s and have frequencies ω_1 and ω_2. We can write the HO state for each well as a Gauss-Hermite polynomial (c.f. Complement B_v in the text),
\phi_n(x) = \left(\frac{\alpha^2}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n n!}}\,\exp\left(-\frac{\alpha^2}{2}x^2\right)H_n(\alpha x)   (7.186)
where α = \sqrt{m\omega/\hbar} and H_n(z) is a Hermite polynomial,
H_n(z) = (-1)^n\, e^{z^2}\,\frac{\partial^n}{\partial z^n}\, e^{-z^2}   (7.187)
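As a quick sanity check on Eqs. (7.186)-(7.187), the sketch below builds φ_n(x) numerically (Python with NumPy; `alpha = 1.3` is an arbitrary test value, not taken from the text) and verifies that each state comes out normalized.

```python
import numpy as np
from math import pi, factorial

def psi_n(x, n, alpha):
    """Harmonic-oscillator eigenfunction of Eq. (7.186), alpha = sqrt(m*omega/hbar)."""
    # physicists' Hermite polynomial H_n, evaluated via its coefficient vector
    Hn = np.polynomial.hermite.hermval(alpha * x, [0] * n + [1])
    norm = (alpha**2 / pi)**0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * np.exp(-(alpha * x)**2 / 2.0) * Hn

x, dx = np.linspace(-10, 10, 4001, retstep=True)
alpha = 1.3   # arbitrary test value
norms = [float(np.sum(psi_n(x, n, alpha)**2) * dx) for n in range(4)]
print(norms)   # each entry should be ~1
```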
Thus the FC factor is
\langle n_T|m_S\rangle = \left(\frac{\alpha_T^2}{\pi}\right)^{1/4}\left(\frac{\alpha_S^2}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n n!}}\,\frac{1}{\sqrt{2^m m!}}\int dx\,\exp\left(-\frac{\alpha_T^2}{2}x^2\right)\exp\left(-\frac{\alpha_S^2}{2}(x - x_s)^2\right)H_n(\alpha_T x)\,H_m(\alpha_S(x - x_s))   (7.188)
This integral is pretty difficult to solve analytically. In fact, Mathematica even choked on this one. Let's try a simplification: we can expand the Hermite polynomials about z = 0 as
H_n(z) = 2^n\sqrt{\pi}\left[\frac{1}{\Gamma\left(\frac{1-n}{2}\right)} - \frac{2z}{\Gamma\left(-\frac{n}{2}\right)} + \cdots\right]   (7.189)
Thus, any integral we want to do involves doing an integral of the form
I_n = \int dx\, x^n \exp\left(-\frac{\alpha_1^2}{2}x^2\right)\exp\left(-\frac{\alpha_2^2}{2}(x - x_s)^2\right)
= \int dx\, x^n \exp\left(-\frac{\alpha_1^2}{2}x^2\right)\exp\left(-\frac{\alpha_2^2}{2}(x^2 - 2x x_s + x_s^2)\right)
= \exp\left(-\frac{\alpha_2^2}{2}x_s^2\right)\int dx\, x^n \exp\left(-\frac{\alpha_1^2 + \alpha_2^2}{2}x^2\right)\exp\left(\alpha_2^2\, x\, x_s\right)   (7.190)
= \exp\left(-\frac{\alpha_2^2}{2}x_s^2\right)\int dx\, x^n \exp\left(-\frac{a}{2}x^2\right)\exp(bx)   (7.191)
where I defined a = \alpha_1^2 + \alpha_2^2 and b = \alpha_2^2\, x_s.
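The two lowest moments of the integral in Eq. (7.191) have elementary closed forms, which makes for a quick numerical check before wheeling in hypergeometric functions. A Python/SciPy sketch (a and b are arbitrary test values):

```python
import math
from scipy.integrate import quad

a, b = 2.0, 0.7   # test values: a = alpha1^2 + alpha2^2, b = alpha2^2 * x_s

def integrand(x, n):
    return x**n * math.exp(-a * x**2 / 2.0 + b * x)

I0, _ = quad(integrand, -20, 20, args=(0,))
I1, _ = quad(integrand, -20, 20, args=(1,))

# Standard Gaussian results for the n = 0 and n = 1 moments
I0_exact = math.sqrt(2 * math.pi / a) * math.exp(b**2 / (2 * a))
I1_exact = (b / a) * math.sqrt(2 * math.pi / a) * math.exp(b**2 / (2 * a))
print(I0, I0_exact)
print(I1, I1_exact)
```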
Performing the integral,
I_n = 2^{\frac{1+n}{2}-1}\, a^{-\frac{2+n}{2}}\left[(1 + (-1)^n)\,\sqrt{a}\;\Gamma\left(\frac{1+n}{2}\right)\,{}_1F_1\left(\frac{1+n}{2};\frac{1}{2};\frac{b^2}{2a}\right) + \sqrt{2}\,(1 - (-1)^n)\, b\;\Gamma\left(\frac{2+n}{2}\right)\,{}_1F_1\left(\frac{2+n}{2};\frac{3}{2};\frac{b^2}{2a}\right)\right]   (7.192)
where {}_1F_1(a;b;z) is the confluent hypergeometric function (c.f. Landau and Lifshitz, QM) and Γ(z) is the Gamma function. Not exactly the easiest integral in the world to evaluate. (In other words, don't worry about having to solve this integral on an exam!) To make matters even worse, this is only one term. In order to compute, say, the FC factor between n_i = 10 and m_f = 12, I would need to sum over 120 terms! However, Mathematica knows how to evaluate these functions, and we can use it to compute FC factors very easily.
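The FC factors can also be checked by brute-force quadrature. The sketch below (Python; the parameter values are arbitrary) computes ⟨0_T|m_S⟩ for two equal-frequency wells displaced by x_s and compares against the standard displaced-oscillator closed form e^{-d²/2} d^m/√(m!) with d = αx_s/√2; magnitudes are compared to sidestep the sign convention of the shifted Hermite polynomials.

```python
import numpy as np
from math import pi, factorial, sqrt, exp

alpha, xs = 1.0, 1.5   # equal frequencies in both wells; displacement xs (test values)

def psi(x, n, shift=0.0):
    u = alpha * (x - shift)
    Hn = np.polynomial.hermite.hermval(u, [0] * n + [1])
    return (alpha**2 / pi)**0.25 / sqrt(2.0**n * factorial(n)) * np.exp(-u**2 / 2) * Hn

x, dx = np.linspace(-12, 12, 8001, retstep=True)
d = alpha * xs / sqrt(2.0)
pairs = []
for m in range(5):
    fc = float(np.sum(psi(x, 0) * psi(x, m, shift=xs)) * dx)   # <0_T | m_S>
    closed = exp(-d**2 / 2) * d**m / sqrt(factorial(m))        # displaced-oscillator result
    pairs.append((fc, closed))
print(pairs)
```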
If the harmonic frequencies are the same in each well, life gets much easier. Furthermore, if I make the initial vibrational state in the T_1 well the ground vibrational state, we can evaluate the overlap exactly for this case. The answer is (see Mathematica handout)
M[n] = \frac{(\alpha x_s)^n}{\sqrt{2^n n!}}   (7.193)
Note that this is different from the FCF calculated in Ref. 5 by Gelbart, et al., who do not have the square-root factor (their denominator is my denominator squared).²
Finally, we can evaluate the matrix element as
\langle n_T|V_{TS}|m_S\rangle \approx V_{TS}\,\frac{(\alpha x_s)^n}{\sqrt{2^n n!}}   (7.194)
Thus, the GR survival rate for the ground vibrational state of the T_1 surface is
k = \frac{2\pi}{\hbar}\,V_{TS}^2\sum_m \frac{1}{\hbar\omega}\,\delta(\Delta\epsilon - m\hbar\omega)\,\frac{(\alpha x_s)^{2m}}{2^m m!}   (7.195)
where Δε is the energy difference between the T_1 and S_o potential minima.
Steep S_o Approximation
In this approximation we assume that the potential well of the S_o state is very steep and intersects the T_1 surface at x_s. We also assume that the diabatic coupling is a function of x. Thus, the GR survival rate is
k = \frac{2\pi}{\hbar}\,\langle V_{TS}^2\rangle\,\rho_S   (7.196)
where
\langle V_{TS}\rangle = \int dx\,\phi_T^*(x)\,\phi_S(x)\,V(x)   (7.197)
When the S_o surface is steeply repulsive, the wavefunction on the S_o surface will be very oscillatory at the classical turning point, which is nearly identical to x_s for very steep potentials. Thus, for purposes of doing integrations, we can assume that
\phi_S(x) = C\,\delta(x - x_s) + \cdots   (7.198)
² I believe this may be a mistake in their paper. I'll have to call Karl Freed about this one.
where x_s is the classical turning point at energy E on the S_o surface. The justification for this comes from the expansion of the semi-classical wavefunction on a linear potential in terms of the Airy function,
{\rm Ai}(\zeta) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}ds\,\exp(i s^3/3 + i s\zeta)   (7.199)
which can be expanded as
a\,{\rm Ai}(a x) = \delta(x) + \cdots   (7.200)
Expansions of this form also let us estimate the coefficient C.³
Using the δ-function approximation,
\int dx\,\phi_T^*(x)\,\phi_S(x)\,V(x) = C\int dx\,\phi_T^*(x)\,\delta(x - x_s)\,V(x) = C\,\phi_T^*(x_s)\,V(x_s)   (7.201)
Now, again assuming that we are in the ground vibrational state on the T_1 surface,
\phi_T(x) = \left(\frac{\alpha^2}{\pi}\right)^{1/4} e^{-\alpha^2 x^2/2}   (7.202)
\int dx\,\phi_T^*(x)\,\phi_S(x)\,V(x) = C\left(\frac{\alpha^2}{\pi}\right)^{1/4} e^{-\alpha^2 x_s^2/2}\,V(x_s)   (7.203)
So, we get the approximation
k = \frac{2\pi}{\hbar}\left|C\,V(x_s)\left(\frac{\alpha^2}{\pi}\right)^{1/4} e^{-\alpha^2 x_s^2/2}\right|^2\frac{1}{\hbar\omega}   (7.204)
where C remains to be determined. For that, refer to the Heller-Brown paper.
Time-Dependent Semi-Classical Evaluation
We next do something tricky. There are a number of ways one can represent the δ-function. We will use the Fourier representation of the δ-function and write
\delta(E_i - E_f) = \frac{1}{2\pi\hbar}\int dt\, e^{i(E_i - E_f)t/\hbar}   (7.205)
Thus, we can write
k_{if} = \frac{1}{\hbar^2}\int dt\sum_f \langle\phi_i(R)I(R)|V|\phi_f(R)F(R)\rangle\,\langle\phi_f(R)F(R)|e^{+iH_f t/\hbar}\,V\,e^{-iH_i t/\hbar}|\phi_i(R)I(R)\rangle   (7.206)
³ See Heller and Brown, J. Chem. Phys. 79, 3336 (1983).
Integrating over the electronic degrees of freedom, we can define
V_{if}(R) = \langle\phi_i(R)|V|\phi_f(R)\rangle   (7.207)
and thus write
k_{if} = \frac{1}{\hbar^2}\int dt\sum_f \langle I(R)|V_{if}(R)|F(R)\rangle\,\langle F(R)|e^{+iH_f t/\hbar}\,V_{if}(R)\,e^{-iH_i t/\hbar}|I(R)\rangle   (7.208)
where H_i(R) is the nuclear Hamiltonian for the initial state and H_f(R) is the nuclear Hamiltonian for the final state.
At this point I can remove the sum over f (using the completeness of the final vibrational states) and obtain
k_{if} = \frac{1}{\hbar^2}\int dt\,\langle I(R)|V_{if}(R)\,e^{+iH_f t/\hbar}\,V_{if}(R)\,e^{-iH_i t/\hbar}|I(R)\rangle.   (7.209)
Next, we note that
V_{if}(R) = \dot{R}\cdot\langle\phi_f(R)|{-i\hbar\nabla_R}|\phi_i(R)\rangle = \dot{R}(0)\cdot D_{if}(R(0))   (7.210)
is proportional to the nuclear velocity at the initial time. Likewise, the term in the middle represents the non-adiabatic coupling at some later time. Thus, we can re-write the transition rate as
k_{if} = \frac{1}{\hbar^2}\int dt\,\left(\dot{R}(0)\cdot D_{if}(R(0))\right)\left(\dot{R}(t)\cdot D_{if}(R(t))\right)   (7.211)
\times\,\langle I(R)|e^{+iH_f t/\hbar}\,e^{-iH_i t/\hbar}|I(R)\rangle.   (7.212)
Finally, we do an ensemble average over the initial positions of the nuclei and obtain almost the final result,
k_{if} = \frac{1}{\hbar^2}\int dt\,\left\langle\left(\dot{R}(0)\cdot D_{if}(R(0))\right)\left(\dot{R}(t)\cdot D_{if}(R(t))\right)J_{if}(t)\right\rangle   (7.213)
where I define
J_{if}(t) = \langle I(R)|e^{+iH_f t/\hbar}\,e^{-iH_i t/\hbar}|I(R)\rangle   (7.214)
This term represents the evolution of the initial nuclear vibrational state moving forward in time on the initial energy surface,
|I(R(t))\rangle = e^{-iH_i t/\hbar}\,|I(R(0))\rangle   (7.215)
and backwards in time on the final energy surface,
\langle I(R(t))| = \langle I(R(0))|\,e^{+iH_f t/\hbar}.   (7.216)
So, J(t) represents the time-dependent overlap integral between nuclear wavefunctions evolving on the different potential energy surfaces.
Let us assume that the potential wells are harmonic with their centers offset by some amount x_s. We can define in each well a set of harmonic oscillator eigenstates, which we'll write in shorthand as |n_i⟩, where the subscript i denotes the electronic state. At time t = 0, we can expand the initial nuclear wavefunction as a superposition of these states,
|I(R(0))\rangle = \sum_n c_n\,|n_i\rangle   (7.217)
where c_n = \langle n_i|I(R(0))\rangle. The time evolution in the well is
|I(R(t))\rangle = \sum_n c_n\,e^{-i(n+\frac{1}{2})\omega_i t}\,|n_i\rangle.   (7.218)
We can also express the evolution of the bra as a superposition of states in the other well,
\langle I(R(t))| = \sum_m d_m^*\,e^{+i(m+\frac{1}{2})\omega_f t}\,\langle m_f|   (7.219)
where d_m = \langle m_f|I(R)\rangle are the coefficients. Thus, J(t) is obtained by hooking the two results together:
J(t) = \sum_{mn} d_m^*\,c_n\,e^{+i(m+\frac{1}{2})\omega_f t}\,e^{-i(n+\frac{1}{2})\omega_i t}\,\langle m_f|n_i\rangle   (7.220)
Now, we must compute the overlap between harmonic states in one well with harmonic states in the other well. This type of overlap is termed a Franck-Condon (FC) factor. We will evaluate the FC factor using two different approaches.
7.7.2 Semi-Classical Evaluation
I want to make a series of simplifying assumptions about the nuclear wavefunction. Many of these assumptions follow from Heller's paper referenced at the beginning. The assumptions are as follows:
1. At the initial time, ⟨x|I(R)⟩ can be written as a Gaussian of width α centered about R,
\langle x|I(R)\rangle = \left(\frac{\alpha^2}{\pi}\right)^{1/4}\exp\left(-\frac{\alpha^2}{2}(x - R(t))^2 + \frac{i}{\hbar}\,p(t)(x - R(t))\right)   (7.221)
where p(t) is the classical momentum (c.f. Heller).
2. We know that for a Gaussian wavepacket (especially one in a harmonic well), the center of the Gaussian tracks the classical prediction. Thus, we can write that R(t), the center of the Gaussian, evolves under Newton's equation
m\ddot{R}(t) = F_i(R)   (7.222)
where F_i(R) is the force computed as the derivative of the i-th energy surface with respect to R, evaluated at the current nuclear position (i.e. the force we would get using the Born-Oppenheimer approximation),
F_i(R) = -\nabla_R E(R).   (7.223)
3. At t = 0, the initial classical velocities and positions of the nuclear waves evolving on the i and f surfaces are the same.
4. For very short times, we assume that the wave does not spread appreciably. We can fix this assumption very easily if need be.
Using these assumptions, we can approximate the time-dependent FC factor as⁴
J(t) \approx \exp\left(-\frac{\alpha^2}{4}(R_f(t) - R_i(t))^2\right)\exp\left(-\frac{1}{4\alpha^2\hbar^2}(p_f(t) - p_i(t))^2\right)\exp\left(+\frac{i}{2\hbar}(R_f(t) - R_i(t))\cdot(p_f(t) - p_i(t))\right)   (7.224)
Next, we expand R_i(t) and p_i(t) as a Taylor series about t = 0,
R_i(t) = R_i(0) + t\,\dot{R}_i(0) + \frac{t^2}{2!}\,\ddot{R}_i(0) + \cdots   (7.225)
Using Newton's equation,
R_i(t) = R_i(0) + t\,\frac{p_i(0)}{m} + \frac{t^2}{2!}\,\frac{F_i(0)}{m} + \cdots   (7.226)
p_i(t) = p_i(0) + F_i(0)\,t + \frac{1}{2}\frac{\partial F_i}{\partial t}\,t^2 + \cdots   (7.227)
Thus the difference between the nuclear positions after a short amount of time will be
R_i(t) - R_f(t) \approx t^2\left(\frac{F_i(0)}{2m} - \frac{F_f(0)}{2m}\right) + \cdots   (7.228)
Also, the momentum difference is
p_f(t) - p_i(t) \approx (F_f(0) - F_i(0))\,t + \cdots   (7.229)
Thus,
J(t) \approx \exp\left(-\frac{\alpha^2 t^4}{16 m^2}(F_i(0) - F_f(0))^2\right)\exp\left(-\frac{t^2}{4\alpha^2\hbar^2}(F_f(0) - F_i(0))^2\right)\exp\left(+\frac{i}{4 m\hbar}(F_f(0) - F_i(0))^2\, t^3\right)   (7.230)
If we include the oscillatory term, the integral does not converge (so much for a short-time approximation). However, when we do the ensemble average, each member of the ensemble contributes a slightly different phase, so we can safely ignore it. Furthermore, for short times, the decay of the overlap will be dominated by the term proportional to t².
⁴ B. J. Schwartz, E. R. Bittner, O. V. Prezhdo, and P. J. Rossky, J. Chem. Phys. 104, 5242 (1996).
Thus, we define the approximate decay curve as
J(t) = \exp\left(-\frac{t^2}{4\alpha^2\hbar^2}(F_f(0) - F_i(0))^2\right)   (7.231)
Now, pulling everything together, we write the GR rate constant as
k_{if} = \frac{1}{\hbar^2}\int dt\,\left\langle\left(\dot{R}(0)\cdot D_{if}(R(0))\right)\left(\dot{R}(t)\cdot D_{if}(R(t))\right)\exp\left(-\frac{t^2}{4\alpha^2\hbar^2}(F_f(0) - F_i(0))^2\right)\right\rangle.   (7.232)
The assumption here is that the overlap decays more rapidly than the oscillations in the autocorrelation of the matrix element. This actually bears itself out in reality.
Let's assume that the overlap decay and the correlation function are uncorrelated (a Condon-type approximation). Under this assumption we can write
k_{if} = \frac{1}{\hbar^2}\int dt\,\left\langle\left(\dot{R}(0)\cdot D_{if}(R(0))\right)\left(\dot{R}(t)\cdot D_{if}(R(t))\right)\right\rangle\left\langle\exp\left(-\frac{t^2}{4\alpha^2\hbar^2}(F_f(0) - F_i(0))^2\right)\right\rangle,   (7.233)
or, defining
C_{if}(t) = \left\langle\left(\dot{R}(0)\cdot D_{if}(R(0))\right)\left(\dot{R}(t)\cdot D_{if}(R(t))\right)\right\rangle   (7.234)
and using (c.f. Chandler, Stat. Mech.)
\left\langle e^{A}\right\rangle = e^{\langle A\rangle},   (7.235)
the desired result is
k_{if} = \frac{1}{\hbar^2}\int dt\, C_{if}(t)\,\exp\left(-t^2\,\frac{\langle(F_f(0) - F_i(0))^2\rangle}{4\alpha^2\hbar^2}\right).   (7.236)
Now, let's assume that the correlation function is an oscillatory function of time,
C_{if}(t) = |V_{if}|^2\cos(\omega t).   (7.237)
Then
k_{if} = \frac{1}{\hbar^2}\int dt\,|V_{if}|^2\cos(\omega t)\,\exp\left(-t^2\,\frac{\langle(F_f(0) - F_i(0))^2\rangle}{4\alpha^2\hbar^2}\right)   (7.238)
= \frac{1}{\hbar^2}\sqrt{\frac{\pi}{b}}\; e^{-\omega^2/(4b)}\,|V_{if}|^2   (7.239)
where
b = \frac{\langle(F_f(0) - F_i(0))^2\rangle}{4\alpha^2\hbar^2}.   (7.240)
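The Gaussian integral behind Eq. (7.239) is the standard result ∫cos(ωt)e^{-bt²}dt = √(π/b)·e^{-ω²/(4b)}, which is easy to confirm numerically (Python with SciPy; b and ω are arbitrary positive test values):

```python
import math
from scipy.integrate import quad

b, omega = 0.8, 2.5   # arbitrary positive test values

# the Gaussian damping makes the integrand negligible beyond |t| ~ 10
num, _ = quad(lambda t: math.cos(omega * t) * math.exp(-b * t**2), -10, 10)
exact = math.sqrt(math.pi / b) * math.exp(-omega**2 / (4 * b))
print(num, exact)
```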
In Ref. 1 we used this equation (actually one a few lines up) to compute the non-radiative relaxation rates between the p and s states of an aqueous electron in H₂O and in D₂O to estimate the isotopic dependence of the transition rate. Briefly, the story goes like this. Paul Barbara's group at U. Minnesota and Yann Gauduel's group in France measured the fluorescence decay of an excited excess electron in H₂O and in D₂O and noted that there was no resolvable difference between the two solvents. I.e., the non-radiative decay was not at all sensitive to isotopic changes in the solvent. (The experimental lifetimes are roughly 310 fs for both solvents, with a resolution of about 80 fs.) This is very surprising since, looking at the non-adiabatic coupling operator above, you will notice that the matrix element coupling the states is proportional to the nuclear velocities. (The electronic matrix element is between the s and p states of the electron.) Since the velocity of a proton is roughly √2 times faster than that of a deuteron of the same kinetic energy, the non-adiabatic coupling matrix element between the s and p states in water should be √2 times that in heavy water, and thus the transition rate in water should be roughly twice that in heavy water. It turns out that since the D's move slower than the H's, the nuclear overlap decays roughly twice as slowly. Thus we get competing factors of two which cancel out.
7.8 Problems and Exercises
Exercise 7.1 A one-dimensional harmonic oscillator, with frequency ω_o, in its ground state is subjected to a perturbation of the form
H'(t) = C\,\hat{p}\; e^{-\Gamma|t|}\cos(\omega t)   (7.241)
where \hat{p} is the momentum operator and C, Γ, and ω are constants. What is the probability that as t → ∞ the oscillator will be found in its first excited state, in first-order perturbation theory? Discuss the result as a function of Γ, ω, and ω_o.
Exercise 7.2 A particle is in a one-dimensional infinite well of width 2a. A time-dependent perturbation of the form
H'(t) = T_o V_o \sin\left(\frac{\pi x}{a}\right)\delta(t)   (7.242)
acts on the system, where T_o and V_o are constants. What is the probability that the system will be in the first excited state afterwards?
Exercise 7.3 Because of the finite size of the nucleus, the actual potential seen by the electron is more like:
[Figure: V (hartree) vs. r (Bohr), showing the Coulomb potential truncated near the origin by the finite nuclear size.]
1. Calculate this effect on the ground state energy of the H atom using first-order perturbation theory with
H' = \frac{e^2}{r} - \frac{e^2}{R}\quad {\rm for}\ r \le R;\qquad 0\ {\rm otherwise}.   (7.243)
2. Explain this choice for H'.
3. Expand your results in powers of R/a_o < 1. (Be careful!)
4. Evaluate numerically your result for R = 1 fm and R = 100 fm.
5. Give the fractional shift of the energy of the ground state.
6. A more rigorous approach is to take into account the fact that the nucleus has a homogeneous charge distribution. In this case, the potential energy experienced by the electron goes as
V(r) = -\frac{Ze^2}{r}
when r > R, and
V(r) = -\frac{Ze^2}{2R}\left(3 - \frac{r^2}{R^2}\right)
for r ≤ R. What is the perturbation in this case? Calculate the energy shift for the H (1s) energy level for R = 1 fm and compare to the result you obtained above.
In[17]:= V[x_] := If[x < .01, -100, -1/x];
[Mathematica notebook residue: -1/x and the truncated potential V[x] are plotted for 0.001 ≤ x ≤ 0.1 with PlotRange {-120, 0}, axes labeled r and E (eV), and the 1s energy level indicated by a dashed line at -13.1 eV.]
Note that this effect is the isotope shift and can be observed in the spectral lines of the heavy elements.
Chapter 8
Many Body Quantum Mechanics
It is often stated that of all the theories proposed in this century, the silliest is
quantum theory. In fact, some say that the only thing that quantum theory has
going for it is that it is unquestionably correct.
M. Kaku (Hyperspace, Oxford University Press, 1995)
8.1 Symmetry with respect to particle Exchange
Up to this point we have primarily dealt with quantum mechanical systems of 1 particle or with systems of distinguishable particles. By distinguishable we mean that one can assign a unique label to a particle which distinguishes it from the other particles in the system. Electrons in molecules and other systems are identical and cannot be assigned a unique label. Thus, we must concern ourselves with the consequences of exchanging the labels we use. To establish a firm formalism and notation, we shall write the norm of the many-particle wavefunction for a system as
\langle\Psi_N|\Psi_N\rangle = \int d^3r_1\cdots d^3r_N\,|\Psi_N(r_1, r_2, \ldots, r_N)|^2 < +\infty   (8.1)
= \int d1\,d2\cdots dN\,|\Psi_N(1, 2, \ldots, N)|^2.   (8.2)
We will define the N-particle state space as the product of the individual single-particle state spaces thusly:
|\Psi_N\rangle = |a_1 a_2\cdots a_N) = |a_1\rangle\otimes|a_2\rangle\otimes\cdots\otimes|a_N\rangle   (8.3)
For future reference, we will write the multiparticle state with a curved bracket: | ). These states have wavefunctions
\langle r|\Psi_N\rangle = (r_1\cdots r_N|a_1 a_2\cdots a_N) = \langle r_1|a_1\rangle\,\langle r_2|a_2\rangle\cdots\langle r_N|a_N\rangle   (8.4)
= \phi_{a_1}(r_1)\,\phi_{a_2}(r_2)\cdots\phi_{a_N}(r_N)   (8.5)
These states obey analogous rules for constructing overlaps (projections) and idempotent relations. They form a complete set of states (hence form a basis), and any multi-particle state in the state space can be constructed as a linear combination of the basis states.
Thus far we have not taken into account the symmetry property of the wavefunction. There are a multitude of possible states which one can construct using the states we defined above. However, only symmetric and antisymmetric combinations of these states are actually observed in nature. Particles occurring in symmetric or anti-symmetric states are called Bosons and Fermions, respectively.
Let's define the permutation operator P_{αβ}, which swaps the positions of particles α and β, e.g.
P_{12}|1, 2) = |2, 1)   (8.6)
Also,
P_{12}P_{12}\,\Psi(1,2) = P_{12}^2\,\Psi(1,2) = \Psi(1,2),   (8.7)
thus Ψ(1,2) is an eigenstate of P_{12} with eigenvalue ±1. In other words, we can also write
P_{12}\,\Psi(1,2) = \lambda\,\Psi(1,2)   (8.8)
where λ = ±1.
A wavefunction of N bosons is totally symmetric and thus satisfies
\Psi(P1, P2, \ldots, PN) = \Psi(1, 2, \ldots, N)   (8.9)
where (P1, P2, \ldots, PN) represents any permutation P of the set (1, 2, \ldots, N). A wavefunction of N fermions is totally antisymmetric and thus satisfies
\Psi(P1, P2, \ldots, PN) = (-1)^P\,\Psi(1, 2, \ldots, N).   (8.10)
Here, (-1)^P denotes the sign or parity of the permutation, and P is defined as the number of binary transpositions which brings the permutation (P1, P2, \ldots) back to its original form (1, 2, 3, \ldots).
For example: what is the parity of the permutation (4,3,5,2,1)? A sequence of binary transpositions is
(4,3,5,2,1) \to (2,3,5,4,1) \to (3,2,5,4,1) \to (5,2,3,4,1) \to (1,2,3,4,5)   (8.11)
So P = 4. Thus, for a system of 5 fermions,
\Psi(4,3,5,2,1) = \Psi(1,2,3,4,5).   (8.12)
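The parity bookkeeping is easy to automate. A minimal Python sketch, using the fact that a cycle of length L decomposes into L−1 transpositions:

```python
def parity(perm):
    """Sign of a permutation given in one-line notation on 1..N,
    computed from its cycle decomposition."""
    seen = [False] * len(perm)
    sign = 1
    for i in range(len(perm)):
        if seen[i]:
            continue
        # trace the cycle containing position i
        length, j = 0, i
        while not seen[j]:
            seen[j] = True
            j = perm[j] - 1
            length += 1
        # a cycle of length L is (L - 1) transpositions
        if length % 2 == 0:
            sign = -sign
    return sign

print(parity((4, 3, 5, 2, 1)))   # +1: even, consistent with P = 4 in the text
print(parity((2, 1, 3, 4, 5)))   # -1: a single transposition
```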
In cases where we want to develop the many-body theory for both fermions and bosons simultaneously, we will adopt the notation that ξ = ±1 and any wavefunction can be written as
\Psi(P1, P2, \ldots, PN) = (\xi)^P\,\Psi(1, 2, \ldots, N),   (8.13)
where ξ = −1 for fermions and +1 for bosons.
While these symmetry requirements are observed in nature, it can also be shown in the context of quantum field theory that, given general assumptions of locality, causality, and Lorentz invariance, particles with half-integer spin are fermions and those with integer spin are bosons. Some examples of bosons are photons, pions, mesons, gluons, and the ⁴He atom. Some examples of fermions are protons, electrons, neutrons, muons, neutrinos, quarks, and the ³He atom. Composite particles composed of any number of bosons and even or odd numbers of fermions behave as bosons or fermions, respectively, at temperatures low compared to their binding energy. An example of this is superconductivity, where electron-phonon coupling induces the pairing of electrons (Cooper pairs) which form a Bose condensate.
Now, consider what happens if I place two fermions in the same state:
|\Psi(1,2)\rangle = |\phi_\alpha(1)\phi_\alpha(2))   (8.14)
where φ_α(1) is a state with the spin-up quantum number. This state must be an eigenstate of the permutation operator with eigenvalue λ = −1:
P_{12}|\Psi(1,2)\rangle = -|\phi_\alpha(2)\phi_\alpha(1))   (8.15)
However, |φ_α(1)φ_α(2)) = |φ_α(2)φ_α(1)); thus the wavefunction of the state must vanish everywhere.
For the general case of a system with N particles, the normalized wavefunction is
\Psi = \left(\frac{N_1!\, N_2!\cdots}{N!}\right)^{1/2}\sum_P (\xi)^P\,\phi_{p_1}(1)\,\phi_{p_2}(2)\cdots\phi_{p_N}(N)   (8.16)
where the sum is over all permutations of the different p_1, p_2, \ldots and the numbers N_i indicate how many of these have the same value (i.e. how many particles are in each state), with \sum N_i = N.
For a system of 2 fermions, the wavefunction is
\Psi(1,2) = \left(\phi_{p_1}(1)\phi_{p_2}(2) - \phi_{p_1}(2)\phi_{p_2}(1)\right)/\sqrt{2}   (8.17)
Thus, in the example above,
\Psi(1,2) = \left(\phi_\alpha(1)\phi_\alpha(2) - \phi_\alpha(2)\phi_\alpha(1)\right)/\sqrt{2} = 0.   (8.18)
Likewise,
\Psi(1,2) = \left(\phi_\alpha(1)\phi_\beta(2) - \phi_\alpha(2)\phi_\beta(1)\right)/\sqrt{2}
= \left(\phi_\alpha(1)\phi_\beta(2) - P_{12}\,\phi_\alpha(1)\phi_\beta(2)\right)/\sqrt{2}
= (1 - P_{12})\,\phi_\alpha(1)\phi_\beta(2)/\sqrt{2}.   (8.19)
We will write such symmetrized states using curly brackets:
|\Psi\} = |a_1 a_2\cdots a_N\} = \left(\frac{N_1!\, N_2!\cdots}{N!}\right)^{1/2}\sum_P (\xi)^P\,\phi_{p_1}(1)\,\phi_{p_2}(2)\cdots\phi_{p_N}(N)   (8.20)
For the general case of N particles, the fully anti-symmetrized form of the wavefunction takes the form of a determinant,
\Psi = \frac{1}{\sqrt{N!}}\begin{vmatrix}\phi_a(1) & \phi_a(2) & \phi_a(3) & \cdots\\ \phi_b(1) & \phi_b(2) & \phi_b(3) & \cdots\\ \phi_c(1) & \phi_c(2) & \phi_c(3) & \cdots\\ \vdots & & & \ddots\end{vmatrix}   (8.21)
where the columns represent the particles and the rows are the different states. The interchange of any two particles corresponds to the interchange of two columns; as a result, the determinant changes sign. Consequently, if two rows are identical, corresponding to two particles occupying the same state, the determinant vanishes.
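These two determinant properties (sign change under column interchange, vanishing for identical rows) can be demonstrated numerically. A small Python sketch with hypothetical one-dimensional orbitals, evaluating Eq. (8.21) via `numpy.linalg.det`:

```python
import numpy as np
from math import factorial, sqrt

def slater(orbitals, coords):
    """Antisymmetric N-particle wavefunction, Eq. (8.21): det[phi_k(r_i)]/sqrt(N!)."""
    N = len(orbitals)
    # rows = states, columns = particles, matching the convention in the text
    M = np.array([[phi(r) for r in coords] for phi in orbitals])
    return np.linalg.det(M) / sqrt(factorial(N))

# Toy one-dimensional orbitals (hypothetical, just for illustration)
phi_a = lambda x: np.exp(-x**2)
phi_b = lambda x: x * np.exp(-x**2)
phi_c = lambda x: (2 * x**2 - 1) * np.exp(-x**2)

r = [0.3, -0.7, 1.1]
v1 = slater([phi_a, phi_b, phi_c], r)
v2 = slater([phi_a, phi_b, phi_c], [r[1], r[0], r[2]])  # swap particles 1 and 2
v3 = slater([phi_a, phi_a, phi_c], r)                   # two particles in one orbital
print(v1, v2)   # v2 == -v1: antisymmetry under exchange
print(v3)       # ~0: Pauli exclusion
```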
As another example, let's consider the possible states for the He atom ground state. Let's assume that the ground state wavefunction is the product of two single-particle hydrogenic 1s states with a spin wavefunction, written thus:
|\Psi\rangle = |1s(1)\alpha(1),\, 1s(2)\beta(2))   (8.22)
Let's denote |α⟩ as the spin-up state and |β⟩ as the spin-down state. We have the following possible spin combinations:
\alpha(1)\alpha(2)   symmetric
\alpha(1)\beta(2)    neither
\beta(1)\alpha(2)    neither
\beta(1)\beta(2)     symmetric   (8.23)
The αα and ββ states are clearly symmetric w.r.t. particle exchange. However, note that the other two are neither symmetric nor anti-symmetric. Since we can construct linear combinations of these states, we can use the two allowed spin configurations to define the combined spin states
\frac{1}{\sqrt{2}}\left(|\alpha(1)\beta(2)) \pm |\beta(1)\alpha(2))\right)   (8.24)
Thus, the possible two-particle spin states are
\alpha(1)\alpha(2)   symmetric
\beta(1)\beta(2)     symmetric
\frac{1}{\sqrt{2}}(\alpha(1)\beta(2) + \beta(1)\alpha(2))   symmetric
\frac{1}{\sqrt{2}}(\alpha(1)\beta(2) - \beta(1)\alpha(2))   anti-symmetric   (8.25)
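The symmetry labels in this table can be verified mechanically by representing the two-spin states as vectors and P₁₂ as a permutation matrix. A small Python sketch (the basis ordering is my own choice):

```python
import numpy as np

# Basis ordering for two spin-1/2 particles: |aa>, |ab>, |ba>, |bb>
# P12 exchanges the particle labels, i.e. it swaps |ab> and |ba>
P12 = np.array([[1, 0, 0, 0],
                [0, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 1]], dtype=float)

s = 1 / np.sqrt(2)
states = {
    "aa":          np.array([1, 0, 0, 0.0]),
    "bb":          np.array([0, 0, 0, 1.0]),
    "(ab+ba)/rt2": np.array([0, s, s, 0.0]),
    "(ab-ba)/rt2": np.array([0, s, -s, 0.0]),
}
# expectation value of P12: +1 for symmetric states, -1 for the antisymmetric one
evs = {name: float(v @ P12 @ v) for name, v in states.items()}
print(evs)
```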
These spin states multiply the spatial state, and the full wavefunction must be anti-symmetric w.r.t. exchange. For example, for the ground state of the He atom, the zeroth-order spatial state is |1s(1)1s(2)). This is symmetric w.r.t. exchange. Thus, the full ground-state wavefunction must be the product
|\Psi\rangle = |1s(1)1s(2))\,\frac{1}{\sqrt{2}}\left(\alpha(1)\beta(2) - \beta(1)\alpha(2)\right)   (8.26)
The full state is an eigenstate of P_{12} with eigenvalue −1, which is correct for a system of fermions.
What about the other states; where can we use them? What if we could construct a spatial wavefunction that was anti-symmetric w.r.t. particle exchange? Consider the first excited state of He. The electron configuration for this state is
|1s(1)2s(2))   (8.27)
However, we could have also written
|1s(2)2s(1))   (8.28)
Taking the symmetric and anti-symmetric combinations,
|\psi_\pm\rangle = \frac{1}{\sqrt{2}}\left(|1s(1)2s(2)) \pm |1s(2)2s(1))\right)   (8.29)
The + state is symmetric w.r.t. particle exchange. Thus, the full state (including spin) must be
|\Psi_1\rangle = \frac{1}{2}\left(|1s(1)2s(2)) + |1s(2)2s(1))\right)\left(\alpha(1)\beta(2) - \beta(1)\alpha(2)\right).   (8.30)
The other three states must be
|\Psi_2\rangle = \frac{1}{2}\left(|1s(1)2s(2)) - |1s(2)2s(1))\right)\left(\alpha(1)\beta(2) + \beta(1)\alpha(2)\right)   (8.31)
|\Psi_3\rangle = \frac{1}{\sqrt{2}}\left(|1s(1)2s(2)) - |1s(2)2s(1))\right)\alpha(1)\alpha(2)   (8.32)
|\Psi_4\rangle = \frac{1}{\sqrt{2}}\left(|1s(1)2s(2)) - |1s(2)2s(1))\right)\beta(1)\beta(2).   (8.33)
These states can also be constructed using the determinant wavefunction. For example, the ground state configuration is generated using
|\Psi_g\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\alpha(1) & 1s(1)\beta(1)\\ 1s(2)\alpha(2) & 1s(2)\beta(2)\end{vmatrix}   (8.34)
= \frac{1}{\sqrt{2}}\,|1s(1)1s(2))\left[\alpha(1)\beta(2) - \alpha(2)\beta(1)\right]   (8.35)
Likewise for the excited states, we have 4 possible determinant states:
|\Psi_1\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\alpha(1) & 2s(1)\alpha(1)\\ 1s(2)\alpha(2) & 2s(2)\alpha(2)\end{vmatrix}
\qquad
|\Psi_2\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\alpha(1) & 2s(1)\beta(1)\\ 1s(2)\alpha(2) & 2s(2)\beta(2)\end{vmatrix}
|\Psi_3\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\beta(1) & 2s(1)\alpha(1)\\ 1s(2)\beta(2) & 2s(2)\alpha(2)\end{vmatrix}
\qquad
|\Psi_4\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\beta(1) & 2s(1)\beta(1)\\ 1s(2)\beta(2) & 2s(2)\beta(2)\end{vmatrix}   (8.36)
The |\Psi_m\rangle are related to the determinant states as follows:
|\Psi_1\} = \frac{1}{\sqrt{2}}\left[1s(1)\alpha(1)\,2s(2)\alpha(2) - 1s(2)\alpha(2)\,2s(1)\alpha(1)\right]
= \frac{1}{\sqrt{2}}\left[1s(1)2s(2) - 1s(2)2s(1)\right]\alpha(1)\alpha(2) = |\Psi_3\rangle
|\Psi_4\} = \frac{1}{\sqrt{2}}\left[1s(1)2s(2) - 1s(2)2s(1)\right]\beta(1)\beta(2) = |\Psi_4\rangle   (8.37)
The remaining two must be constructed from linear combinations of the determinant states:
|\Psi_2\} = \frac{1}{\sqrt{2}}\left[1s(1)\alpha(1)\,2s(2)\beta(2) - 1s(2)\alpha(2)\,2s(1)\beta(1)\right]   (8.38)
|\Psi_3\} = \frac{1}{\sqrt{2}}\left[1s(1)\beta(1)\,2s(2)\alpha(2) - 1s(2)\beta(2)\,2s(1)\alpha(1)\right]   (8.39)
|\Psi_2\rangle = \frac{1}{\sqrt{2}}\left[|\Psi_2\} + |\Psi_3\}\right]   (8.40)
|\Psi_1\rangle = \frac{1}{\sqrt{2}}\left[|\Psi_2\} - |\Psi_3\}\right]   (8.41)
When dealing with spin functions, a shorthand notation is often used to reduce the notation a bit. The notation
|1s\rangle \equiv |1s\,\alpha\rangle   (8.42)
is used to denote a spin-up state, and
|\overline{1s}\rangle \equiv |1s\,\beta\rangle   (8.43)
a spin-down state. Using these, the above determinant functions can be written as
|\Psi_1\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1) & 2s(1)\\ 1s(2) & 2s(2)\end{vmatrix}   (8.44)
= \frac{1}{\sqrt{2}}\left[|1s(1)2s(2)) - |1s(2)2s(1))\right]   (8.45)
|\Psi_2\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1) & \overline{2s}(1)\\ 1s(2) & \overline{2s}(2)\end{vmatrix}   (8.46)
= \frac{1}{\sqrt{2}}\left[|1s(1)\overline{2s}(2)) - |1s(2)\overline{2s}(1))\right]   (8.47)
The symmetrization principle for fermions is often expressed as the Pauli exclusion principle, which states: no two fermions can occupy the same state at the same time. This, as we all well know, gives rise to the periodic table and is the basis of all atomic structure.
8.2 Matrix Elements of Electronic Operators
We can write the Hamiltonian for an N-body problem as follows. Say our N-body Hamiltonian consists of a sum of N single-particle Hamiltonians, H_o, and two-body interactions:
H = \sum_n^N H_o^{(n)} + \sum_{i\neq j} V(r_i - r_j)   (8.48)
Using the field operators, the expectation values of the H_o terms are
\int dx\,\phi_\lambda^*(x)\,H_o\,\phi_\lambda(x) = \epsilon_\lambda   (8.49)
since φ_λ(x) is an eigenstate of H_o with eigenvalue ε_λ. Here λ should be regarded as a collection of all the quantum numbers used to describe the H_o eigenstate.
For example, say we want a zeroth-order approximation to the ground state of He and we use hydrogenic functions,
|\phi_o\rangle = |1s(1)1s(2)).   (8.50)
This state is symmetric w.r.t. electron exchange, so the spin component must be anti-symmetric. For now this will not contribute to the calculation. The zeroth-order Schroedinger equation is
(H_o(1) + H_o(2))|\phi_o\rangle = E_o|\phi_o\rangle   (8.51)
where H_o(1) is the zeroth-order Hamiltonian for particle 1. This is easy to solve:
(H_o(1) + H_o(2))|\phi_o\rangle = -Z^2|\phi_o\rangle   (8.52)
Z = 2, so the zeroth-order guess for the He ground state energy is −4 (in Hartree units; recall 1 hartree = 27.2 eV). The correct ground state energy is more like −2.90 hartree. Let's now evaluate, to first order in perturbation theory, the direct Coulombic interaction between the electrons,
E^{(1)} = (1s(1)1s(2)|\frac{1}{r_{12}}|1s(1)1s(2))   (8.53)
The spatial wavefunction for the |1s(1)1s(2)) state is the product of two hydrogenic functions,
(r_1 r_2|1s(1)1s(2)) = \frac{Z^3}{\pi}\, e^{-Z(r_1 + r_2)}   (8.54)
Therefore,
E^{(1)} = \int dV_1\int dV_2\,\frac{1}{r_{12}}\left(\frac{Z^3}{\pi}\right)^2 e^{-2Z(r_1 + r_2)}   (8.55)
where dV = 4\pi r^2\, dr is the volume element. This integral is most readily solved if we instead write it as the energy of a charge distribution, ρ(2) = |φ(2)|², in the field of a charge distribution ρ(1) = |φ(1)|², for r_2 > r_1:
E^{(1)} = 2\left(\frac{Z^3}{\pi}\right)^2\int dV_2\, e^{-2Zr_2}\,\frac{1}{r_2}\int_0^{r_2} dV_1\, e^{-2Zr_1}   (8.56)
The factor of 2 takes into account that we get the same result when r_1 > r_2. Doing the integral (see subsection 8.3.1),
E^{(1)} = \frac{5Z}{8}   (8.57)
or 1.25 hartree. Thus, for the He atom,
E = E_o + E^{(1)} = -Z^2 + \frac{5Z}{8} = -4 + 1.25 = -2.75   (8.58)
Not too bad; the actual result is −2.90 hartree.
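The radial double integral behind Eq. (8.57) can be checked numerically. A Python/SciPy sketch: after the angular integrations, E⁽¹⁾ = 16Z⁶ ∫∫ r₁²r₂² e^{-2Z(r₁+r₂)}/r_> dr₁dr₂ in hartree units.

```python
from math import exp
from scipy.integrate import dblquad

Z = 2.0

def integrand(r1, r2):
    # 1/r_> = 1/max(r1, r2) handles both orderings, so no factor of 2 is needed
    return 16 * Z**6 * r1**2 * r2**2 * exp(-2 * Z * (r1 + r2)) / max(r1, r2)

# the exponential decay makes the tail beyond r = 12 negligible for Z = 2
E1, err = dblquad(integrand, 0, 12, lambda r2: 0, lambda r2: 12)
print(E1)   # -> 5Z/8 = 1.25 hartree
```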
What we have not taken into consideration is that there is an additional contribution to the energy from the exchange interaction. In other words, we need to compute the Coulomb integral exchanging electron 1 with electron 2. We really need to compute the perturbation energy with respect to the determinant wavefunction:
E^{(1)} = \{1s(1)\overline{1s}(2)|v|1s(1)\overline{1s}(2)\}   (8.59)
= \frac{1}{2}\left[(1s(1)\overline{1s}(2)| - (\overline{1s}(1)1s(2)|\right]v\left[|1s(1)\overline{1s}(2)) - |\overline{1s}(1)1s(2))\right]   (8.60)
= \frac{1}{2}\left[(1s(1)\overline{1s}(2)|v|1s(1)\overline{1s}(2)) - (1s(1)\overline{1s}(2)|v|\overline{1s}(1)1s(2)) - (\overline{1s}(1)1s(2)|v|1s(1)\overline{1s}(2)) + (\overline{1s}(1)1s(2)|v|\overline{1s}(1)1s(2))\right]   (8.61)
= (1s(1)\overline{1s}(2)|v|1s(1)\overline{1s}(2)) - (1s(1)\overline{1s}(2)|v|\overline{1s}(1)1s(2))   (8.62)
However, the potential does not depend upon spin. Thus any matrix element which exchanges a spin must vanish, and we have no exchange contribution to the energy here. We can in fact move on to higher orders in perturbation theory and solve accordingly.
8.3 The Hartree-Fock Approximation
We just saw how to estimate the ground-state energy of a system in the presence of interactions using first-order perturbation theory. To get this result, we assumed that the zeroth-order wavefunctions were pretty good and calculated our results using these wavefunctions. Of course, the true ground-state energy is obtained by summing over all diagrams in the perturbation expansion:
$$ \Delta E_W = \langle\phi_o|W|\phi_o\rangle + \cdots \qquad (8.63) $$
The second-order term contains explicit two-body correlation interactions, i.e. the motion of one electron affects the motion of the other electron.
Let's make a rather bold assumption that we can exclude connected two-body interactions and treat the electrons as independent but moving in an averaged field of the other electrons. First we make some standard definitions:
$$ J_{ij} = (ij|v|ij) \qquad (8.64) $$
$$ K_{ij} = (ij|v|ji) \qquad (8.65) $$
and write the two-electron part of the Hartree-Fock energy as
$$ E_{HF} = \sum_{ij}\left(2J_{ij} - K_{ij}\right) \qquad (8.66) $$
where the sums run over all occupied ($n/2$) spatial orbitals of an $n$-electron system. The $J$ integral is the direct interaction (Coulomb integral) and $K$ is the exchange interaction.
We now look for a set of orbitals which minimize the variational integral $E_{HF}$, subject to the constraint that the wavefunction solutions be orthogonal. One can show (rather straightforwardly) that if we write the Hamiltonian as a functional of the electron density, $\rho$,
$$ H[\rho] = H_o[1,2] + \int d3\, d4\, \langle 12|v|34\rangle\,\rho(3,4) \qquad (8.67) $$
$$ = H_o[1] + 2J(1) - K(1) \qquad (8.68) $$
where $\rho(1,2) = \phi(1)\phi^*(2)$, the Coulomb and exchange operators act on an orbital $f$ as
$$ J(1)f(1) = f(1)\int d2\, |\phi(2)|^2\, v(12) \qquad (8.69) $$
$$ K(1)f(1) = \phi(1)\int d2\, \phi^*(2)\, v(12)\, f(2) \qquad (8.70) $$
The Hartree-Fock wavefunctions satisfy
$$ H[\rho]\,\phi(1) = E\,\phi(1) \qquad (8.71) $$
In other words, we diagonalize the Fock matrix $H[\rho]$ given an initial guess of the electron density. This gives a new set of electron orbitals, which we use to construct a new guess for the electron densities. This procedure is iterated to convergence.
8.3.1 Two electron integrals
One of the difficulties encountered is in evaluating the $J_{ij}$ and $K_{ij}$ two-electron integrals. Let's take the case that the $\phi_i$ and $\phi_j$ orbitals are centred on the same atom. Two-centred terms can be evaluated, but the analysis is more difficult. Writing the $J$ integral in Eq. 8.64 out explicitly we have:
$$ J_{ij} = (\phi_i(1)\phi_j(2)|v(12)|\phi_i(1)\phi_j(2)) $$
$$ = \int |\phi_i(1)|^2\, v(12)\, |\phi_j(2)|^2\, d1\, d2 $$
$$ = \int \phi_i^*(r_1)\phi_i(r_1)\,\phi_j^*(r_2)\phi_j(r_2)\, v(r_1 - r_2)\, dr_2\, dr_1 \qquad (8.72) $$
If we can factor the single-particle orbitals as $\phi(r,\theta,\varphi) = R_{nl}(r)Y_{lm}(\theta,\varphi)$, then we can separate the radial and angular integrals. Before we do that, we have to resolve the pair interaction into radial and angular components as well. For the Coulomb potential, we can use the expansion
$$ \frac{1}{|r_1 - r_2|} = \sum_{l=0}^{\infty}\sum_{m=-l}^{+l}\frac{4\pi}{2l+1}\,\frac{r_<^{\,l}}{r_>^{\,l+1}}\, Y^*_{lm}(\theta_1,\varphi_1)\, Y_{lm}(\theta_2,\varphi_2) \qquad (8.73) $$
where the notation $r_<$ denotes the smaller of $r_1$ and $r_2$ and $r_>$ the greater. For the hydrogen 1s orbitals (normalized and in atomic units),
$$ \psi_{1s} = \sqrt{\frac{Z^3}{\pi}}\, e^{-Zr}, \qquad (8.74) $$
the $J$ integral for the 1s1s configuration is
$$ J = \frac{Z^6}{\pi^2}\int d^3r_1 \int d^3r_2\, \frac{e^{-2Zr_1}\, e^{-2Zr_2}}{r_{12}}. \qquad (8.75) $$
Inserting the expansion and using $Y_{00} = 1/\sqrt{4\pi}$,
$$ J = 16 Z^6 \sum_{l}\sum_{m=-l}^{l}\frac{1}{2l+1} \int_0^\infty\!\!\int_0^\infty e^{-2Zr_1}\, e^{-2Zr_2}\, \frac{r_<^{\,l}}{r_>^{\,l+1}}\, r_1^2\, dr_1\, r_2^2\, dr_2 \int Y^*_{lm}(1)Y_{00}(1)\, d\Omega_1 \int Y_{lm}(2)Y^*_{00}(2)\, d\Omega_2. \qquad (8.76) $$
The last two integrals are easy due to the orthogonality of the spherical harmonics. This leaves the double integral,
$$ J = 16 Z^6 \int_0^\infty\!\!\int_0^\infty e^{-2Zr_1}\, e^{-2Zr_2}\, \frac{1}{r_>}\, r_1^2\, dr_1\, r_2^2\, dr_2 \qquad (8.77) $$
which we evaluate by splitting into two parts,
$$ J = 16 Z^6 \left[\int_0^\infty e^{-2Zr_1}\, r_1 \left(\int_0^{r_1} e^{-2Zr_2}\, r_2^2\, dr_2\right) dr_1 + \int_0^\infty e^{-2Zr_1}\, r_1^2 \left(\int_{r_1}^\infty e^{-2Zr_2}\, r_2\, dr_2\right) dr_1\right]. \qquad (8.78) $$
In this case the integrals are easy to evaluate and
$$ J = \frac{5Z}{8}. \qquad (8.79) $$
8.3.2 Koopmans' Theorem
Koopmans' theorem states that if the single-particle energies are not affected by adding or removing a single electron, then the ionization energy is the energy of the highest occupied single-particle orbital (the HOMO) and the electron affinity is the energy of the lowest unoccupied orbital (the LUMO). For the Hartree-Fock orbitals, this theorem can be proven to be exact since correlations cancel out at the HF level. For small molecules and atoms, the theorem fails miserably since correlations play a significant role. On the other hand, for large polyatomic molecules, Koopmans' theorem is extremely useful in predicting ionization energies and spectra. From a physical point of view, the theorem is never exact since it discounts relaxation of both the electrons and the nuclei.
8.4 Quantum Chemistry
Quantum chemical concepts play a crucial role in how we think about and describe chemical processes. In particular, the term quantum chemistry usually denotes the field of electronic structure theory. There is no possible way to cover this field to any depth in a single course and this one section will certainly not prepare anyone for doing research in quantum chemistry. The topic itself can be divided into two sub-fields:
Method development: The development and implementation of new theories and computational strategies to take advantage of the increasing power of computational hardware. (Bigger, stronger, faster calculations.)
Application: The use of established methods for developing theoretical models of chemical processes.
Here we will go into a brief bit of detail on various levels of theory and their implementation in standard quantum chemical packages. For more in-depth coverage, refer to
1. Quantum Chemistry, Ira Levine. The updated version of this text has a nice overview of methods, basis sets, theories, and approaches for quantum chemistry.
2. Modern Quantum Chemistry, A. Szabo and N. S. Ostlund.
3. Ab Initio Molecular Orbital Theory, W. J. Hehre, L. Radom, P. v. R. Schleyer, and J. A. Pople.
4. Introduction to Quantum Mechanics in Chemistry, M. Ratner and G. Schatz.
8.4.1 The Born-Oppenheimer Approximation
The fundamental approximation in quantum chemistry is the Born-Oppenheimer approximation we discussed earlier. The idea is that because the mass of an electron is roughly $10^{-4}$ that of a typical nucleus, the motion of the nuclei can be effectively ignored and we can write an electronic Schrödinger equation in the field of fixed nuclei. If we write $r$ for electronic coordinates and $R$ for the nuclear coordinates, the complete electronic/nuclear wavefunction becomes
$$ \Psi(r,R) = \psi(r;R)\,\chi(R) \qquad (8.80) $$
where $\psi(r;R)$ is the electronic part and $\chi(R)$ the nuclear part. The full Hamiltonian is
$$ H = T_n + T_e + V_{en} + V_{nn} + V_{ee} \qquad (8.81) $$
$T_n$ is the nuclear kinetic energy, $T_e$ is the electronic kinetic energy, and the $V$s are the electron-nuclear, nuclear-nuclear, and electron-electron Coulomb potential interactions. We want $\Psi$ to be a solution of the Schrödinger equation,
$$ H\Psi = (T_n + T_e + V_{en} + V_{nn} + V_{ee})\,\psi\chi = E\,\psi\chi. \qquad (8.82) $$
So, we divide through by $\psi\chi$ and take advantage of the fact that $T_e$ does not depend upon the nuclear component of the total wavefunction:
$$ \frac{1}{\psi\chi}T_n\,\psi\chi + \frac{1}{\psi}T_e\,\psi + V_{en} + V_{nn} + V_{ee} = E. $$
On the other hand, $T_n$ operates on both components, and involves terms which look like
$$ T_n\,\psi\chi = -\sum_n \frac{1}{2M_n}\left(\psi\,\nabla_n^2\chi + \chi\,\nabla_n^2\psi + 2\,\nabla_n\psi\cdot\nabla_n\chi\right) $$
where the sum is over the nuclei and $\nabla_n$ is the gradient with respect to nuclear position $n$. The crudest approximation we can make is to neglect the last two terms, those which involve the derivatives of the electronic wave with respect to the nuclear coordinate. When we neglect those terms, the Schrödinger equation is almost separable into nuclear and electronic terms:
$$ \frac{1}{\chi}\left(T_n + V_{nn}\right)\chi + \frac{1}{\psi}\left(T_e + V_{en} + V_{ee}\right)\psi = E. \qquad (8.83) $$
The equation is not really separable since the second term depends upon the nuclear position. So, what we do is say that the electronic part depends parametrically upon the nuclear position, giving a constant term, $E_{el}(R)$, that is a function of $R$:
$$ \left(T_e + V_{en}(R) + V_{ee}\right)\psi(r;R) = E_{el}(R)\,\psi(r;R). \qquad (8.84) $$
The function $E_{el}$ depends upon the particular electronic state. Since it is an eigenvalue of Eq. 8.84, there may be a manifold of these functions stacked upon each other.
Turning towards the nuclear part, we have the nuclear Schrödinger equation
$$ \left(T_n + V_{nn}(R) + E^{(\alpha)}_{el}(R)\right)\chi = W\chi. \qquad (8.85) $$
Here, the potential governing the nuclear motion contains the electronic contribution, $E^{(\alpha)}_{el}(R)$, which is the $\alpha$th eigenvalue of Eq. 8.84, and the nuclear repulsion energy $V_{nn}$. Taken together, these form a potential energy surface
$$ V(R) = V_{nn} + E^{(\alpha)}_{el}(R) $$
for the nuclear motion. Thus, the electronic energy serves as the interaction potential between the nuclei and the motion of the nuclei occurs on an energy surface generated by the electronic state.
Exercise 8.1 Derive the diagonal non-adiabatic correction term $\langle\psi|T_n|\psi\rangle$ to produce a slightly more accurate potential energy surface
$$ V(R) = V_{nn} + E^{(\alpha)}_{el} + \langle\psi_\alpha|T_n|\psi_\alpha\rangle. $$
The BO approximation breaks down when the nuclear motion becomes very fast and the electronic states become coupled via the nuclear kinetic energy operator. (One can see by inspection that the electronic states are not eigenstates of the nuclear kinetic energy since $H_{el}$ does not commute with $\nabla^2_N$.)
Let's assume that the nuclear position is a time-dependent quantity, $R(t)$. Now, take the time derivative of $|\psi^e_n\rangle$:
$$ \frac{d}{dt}|\psi^e_n(R(t))\rangle = \frac{\partial R(t)}{\partial t}\cdot\nabla_R|\psi^e_n(R)\rangle + \frac{\partial}{\partial t}|\psi^e_n(R(t))\rangle \qquad (8.86) $$
Now, multiply on the left by $\langle\psi^e_m(R(t))|$ where $m \neq n$:
$$ \langle\psi^e_m(R(t))|\frac{d}{dt}|\psi^e_n(R(t))\rangle = \frac{\partial R(t)}{\partial t}\cdot\langle\psi^e_m(R(t))|\nabla_R|\psi^e_n(R)\rangle \qquad (8.87) $$
Cleaning things up,
$$ \langle\psi^e_m(R(t))|\frac{d}{dt}|\psi^e_n(R(t))\rangle = \dot{R}(t)\cdot\langle\psi^e_m(R(t))|\nabla_N|\psi^e_n(R)\rangle \qquad (8.88) $$
we see that the nuclear motion couples electronic states when the nuclear velocity vector $\dot{R}$ is large in a direction in which the electronic wavefunction changes most rapidly with $R$.
For diatomic molecules, we can separate out the center-of-mass motion and write $m$ as the reduced mass
$$ m = \frac{m_1 m_2}{m_1 + m_2} \qquad (8.89) $$
and write the nuclear Schrödinger equation (in one dimension)
$$ \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial r^2} + V(r)\right)\chi(r) = E\,\chi(r) \qquad (8.90) $$
where $V(r)$ is the adiabatic or Born-Oppenheimer energy surface discussed above. Since $V(r)$ is a smooth function of $r$, we can do a Taylor expansion of $V$ about its minimum at $r_e$:
$$ V(r) = V_o + \frac{1}{2}V''(r_e)(r - r_e)^2 + \frac{1}{6}V'''(r_e)(r - r_e)^3 + \cdots \qquad (8.91) $$
As an example of molecular bonding and how one computes the structure and dynamics of a simple molecule, we turn towards the simplest molecular ion, H$_2^+$. For a fixed H-H distance, $R$, the electronic Schrödinger equation reads (in atomic units)
$$ \left(-\frac{1}{2}\nabla^2 - \frac{1}{r_1} - \frac{1}{r_2}\right)\psi(r_1,r_2) = E\,\psi(r_1,r_2) \qquad (8.92) $$
The problem can be solved exactly in elliptical coordinates, but the derivation of the result is not terribly enlightening. What we will do, however, is use a variational approach by combining hydrogenic 1s orbitals centered on each H nucleus. This procedure is termed the Linear Combination of Atomic Orbitals (LCAO) and is the underlying idea behind most quantum chemical calculations.
The basis functions are the hydrogen 1s orbitals. The rationale for this basis is that as $R$ becomes large, we have an H atom and a proton, a system we can handle pretty easily. Since the electron can be on either nucleus, we take a linear combination of the two choices:
$$ |\psi\rangle = c_1|\phi_1\rangle + c_2|\phi_2\rangle \qquad (8.93) $$
We then use the variational procedure to find the lowest energy subject to the constraint that $\langle\psi|\psi\rangle = 1$. This is an eigenvalue problem which we can write as
$$ \sum_{j=1,2} c_j\,\langle\phi_i|H|\phi_j\rangle = E\sum_{j=1,2} c_j\,\langle\phi_i|\phi_j\rangle, \qquad (8.94) $$
or in matrix form
$$ \begin{pmatrix} H_{11} & H_{12}\\ H_{21} & H_{22}\end{pmatrix}\begin{pmatrix} c_1\\ c_2\end{pmatrix} = E\begin{pmatrix} S_{11} & S_{12}\\ S_{21} & S_{22}\end{pmatrix}\begin{pmatrix} c_1\\ c_2\end{pmatrix} \qquad (8.95) $$
where $S_{ij}$ is the overlap between the two basis functions,
$$ S_{12} = \langle\phi_1|\phi_2\rangle = \int d^3r\, \phi^*_1(r)\,\phi_2(r), $$
and assuming the basis functions are normalized to begin with,
$$ S_{11} = S_{22} = 1. $$
For the hydrogenic orbitals
$$ \phi_1(r_1) = \frac{e^{-r_1}}{\sqrt{\pi}} \quad\text{and}\quad \phi_2(r_2) = \frac{e^{-r_2}}{\sqrt{\pi}}, $$
a simple calculation yields$^1$
$$ S = e^{-R}\left(1 + R + \frac{R^2}{3}\right). $$
The matrix elements of the Hamiltonian also need to be computed. The diagonal terms are easy and correspond to the hydrogen 1s energy plus the internuclear repulsion plus the Coulomb interaction between nucleus 2 and the electron distribution about nucleus 1:
$$ H_{11} = E_I + \frac{1}{R} - J_{11} \qquad (8.96) $$
where
$$ J_{11} = \langle\phi_1|\frac{1}{r_2}|\phi_1\rangle \qquad (8.97) $$
$$ = \int d^3r\, \frac{1}{r_2}\,|\phi_1(r)|^2 \qquad (8.98) $$
This too we evaluate in elliptic coordinates, and the result reads
$$ J_{11} = -\frac{2E_I}{R}\left(1 - e^{-2R}(1 + R)\right). \qquad (8.99) $$
$^1$To derive this result, you need to first transform to elliptic coordinates $u, v$ where
$$ r_1 = \frac{u+v}{2}R, \qquad r_2 = \frac{u-v}{2}R; $$
the volume element is then $d^3r = R^3(u^2 - v^2)/8\, du\, dv\, d\varphi$, where $\varphi$ is the azimuthal angle for rotation about the H-H axis. The resulting integral reads
$$ S = \frac{1}{\pi}\int_1^\infty du\int_{-1}^{+1} dv\int_0^{2\pi} d\varphi\, \frac{R^3}{8}\left(u^2 - v^2\right)e^{-uR}. $$
Figure 8.1: Various contributions to the H$_2^+$ Hamiltonian. [Plot of $S$, $J$, and $A$ as functions of $R$ (bohr).]
By symmetry, $H_{11} = H_{22}$, and we have the diagonal elements.
We can think of $J$ as being a modification of the nuclear repulsion due to the screening of the electron about one of the atoms. $|\phi_1(r)|^2$ is the charge density of the hydrogen 1s orbital and is spherically symmetric about nucleus 1. For large internuclear distances,
$$ J = \frac{1}{R} \qquad (8.100) $$
and the positive charge of nucleus 1 is completely counterbalanced by the negative charge distribution about it. At shorter ranges,
$$ \frac{1}{R} - J > 0. \qquad (8.101) $$
However, screening alone cannot explain a chemical bond since $J$ does not go through a minimum at some distance $R$. Figure 8.1 shows the variation of $J$, $H_{11}$, and $S$ as functions of $R$.
We now look at the off-diagonal elements, $H_{12} = H_{21}$. Written explicitly,
$$ H_{12} = \langle\phi_1|h|\phi_2\rangle + \frac{1}{R}S_{12} - \langle\phi_1|\frac{1}{r_1}|\phi_2\rangle = \left(E_I + \frac{1}{R}\right)S_{12} - A \qquad (8.102) $$
where
$$ A = \langle\phi_1|\frac{1}{r_1}|\phi_2\rangle = \int d^3r\, \phi_1(r)\,\frac{1}{r_1}\,\phi_2(r), \qquad (8.103) $$
which can also be evaluated using elliptical coordinates:
$$ A = -R^2 E_I\int_1^\infty du\, 2u\, e^{-uR} \qquad (8.104) $$
$$ = -2E_I\, e^{-R}\left(1 + R\right) \qquad (8.105) $$
Exercise 8.2 Verify the expressions for $J$, $S$, and $A$ by performing the transformation to elliptic coordinates and performing the integrations.
$A$ is termed the resonance integral and gives the energy for moving an electron from one nucleus to the other. When $H_{12} \neq 0$, there is a finite probability for the electron to hop from one site to the other and back. This oscillation results in the electron being delocalized between the nuclei and is the primary contribution to the formation of a chemical bond.
To wrap this up, the terms in the Hamiltonian are
$$ S_{11} = S_{22} = 1 \qquad (8.106) $$
$$ S_{12} = S_{21} = S \qquad (8.107) $$
$$ H_{11} = H_{22} = E_I + \frac{1}{R} - J \qquad (8.108) $$
$$ H_{12} = H_{21} = \left(E_I + \frac{1}{R}\right)S - A \qquad (8.109) $$
Since $E_I$ appears in each, we use it as our energy scale and set
$$ E = \varepsilon\, E_I \qquad (8.110) $$
$$ A = \alpha\, E_I \qquad (8.111) $$
$$ J = \gamma\, E_I \qquad (8.112) $$
Noting that $1/R$ becomes $-2/R$ in these units (since $E_I = -1/2$), the secular equation now reads
$$ \begin{vmatrix} 1 - \frac{2}{R} - \gamma - \varepsilon & \left(1 - \frac{2}{R}\right)S - \alpha - \varepsilon S\\[4pt] \left(1 - \frac{2}{R}\right)S - \alpha - \varepsilon S & 1 - \frac{2}{R} - \gamma - \varepsilon \end{vmatrix} = 0 \qquad (8.113) $$
Solving the secular equation yields two eigenvalues:
$$ \varepsilon_\pm = 1 - \frac{2}{R} - \frac{\gamma \pm \alpha}{1 \pm S}. \qquad (8.114) $$
For large internuclear separations, $\varepsilon_\pm \to 1$, or $E \to E_I$, which is the ground-state energy of an isolated H atom, $E_I = -1/2$. Choosing this as the energy origin and putting it all back together:
$$ E_\pm = E_I + \frac{1}{R} - \frac{J \pm A}{1 \pm S} = E_I + \frac{1}{R} - \frac{\left(1 - e^{-2R}(1+R)\right)/R \pm e^{-R}(1+R)}{1 \pm e^{-R}\left(1 + R + R^2/3\right)} \qquad (8.115) $$
Plots of these two energy surfaces are shown in Fig. 8.2. The energy minimum for the lower state ($E_+$) is at $-0.064831$ hartree when $R_{eq} = 2.49283\,a_o$ (or $-0.5648$ hartree if we don't set our zero to be the dissociation limit). These results are qualitatively correct, but are quantitatively way off the mark. The experimental values are $D_e = 0.1025$ hartree and $R_e = 2.00\,a_o$. The results can be improved upon by using improved basis functions, using the charge as a variational parameter, and so forth. The important point is that even at this simple level of theory, we can get chemical bonds and equilibrium geometries.
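The LCAO curves can be generated directly from the closed forms for $S$, $J$, and $A$ derived above. A sketch that locates the minimum of the bonding state numerically (it reproduces the $R_{eq} \approx 2.49\,a_o$ and $\approx 0.0648$ hartree well depth quoted in the text):

```python
import numpy as np
from scipy.optimize import minimize_scalar

EI = -0.5  # hydrogen 1s energy, hartree

def S(R): return np.exp(-R) * (1 + R + R**2 / 3)
def J(R): return (1 - np.exp(-2*R) * (1 + R)) / R
def A(R): return np.exp(-R) * (1 + R)

def E_plus(R):
    # bonding LCAO energy, Eq. 8.115 with the + sign
    return EI + 1.0/R - (J(R) + A(R)) / (1 + S(R))

res = minimize_scalar(E_plus, bounds=(1.0, 6.0), method="bounded")
R_eq, E_min = res.x, res.fun
D_e = EI - E_min                # well depth relative to H + p at infinity
print(R_eq, E_min, D_e)        # ~2.493, ~-0.5648, ~0.0648
```

Scanning `E_plus` and the corresponding minus combination over a grid of $R$ reproduces the two curves of Fig. 8.2.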
For the orbitals, we have a symmetric and an anti-symmetric combination of the two 1s orbitals:
$$ \psi_\pm = N_\pm\left(\phi_1 \pm \phi_2\right). \qquad (8.116) $$
Figure 8.2: Potential energy surface for the H$_2^+$ molecular ion. [Plot of $\epsilon_+$ and $\epsilon_-$ (hartree) versus $R$ (bohr).]
Figure 8.3: Three-dimensional representations of $\psi_+$ and $\psi_-$ for the H$_2^+$ molecular ion generated using the Spartan ab initio quantum chemistry program.
In Fig. 8.3, we show the orbitals from an ab initio calculation using the 6-31G** set of basis functions. The first figure corresponds to the occupied ground-state $\sigma$ orbital which forms a bond between the two H atoms. The second shows the anti-bonding $\sigma^*$ orbital formed by the anti-symmetric combination of the 1s basis functions. The red and blue mesh indicates the phase of the wavefunction.
Appendix: Creation and Annihilation Operators
Creation and annihilation operators are a convenient way to represent many-particle states and many-particle operators. Recall, from the harmonic oscillator, a creation operator, $a^\dagger$, acts on the ground state to produce one quantum of excitation in the system. We can generalize this to many particles by saying that $a^\dagger_\alpha$ creates a particle in state $\alpha$, e.g.
$$ a^\dagger_\alpha\,|\phi_{\alpha_1}\dots\phi_{\alpha_N}\rangle = \sqrt{n_\alpha + 1}\;|\phi_\alpha\phi_{\alpha_1}\dots\phi_{\alpha_N}\rangle \qquad (8.117) $$
where $n_\alpha$ is the occupation of the $|\phi_\alpha\rangle$ state. Physically, the operator $a^\dagger_\alpha$ creates a particle in state $|\phi_\alpha\rangle$ and symmetrizes or antisymmetrizes the state as need be. For Bosons, the case is simple since any number of particles can occupy a given state. For Fermions, the operation takes a simpler form:
$$ a^\dagger_\alpha\,|\phi_{\alpha_1}\dots\phi_{\alpha_N}\rangle = \begin{cases} |\phi_\alpha\phi_{\alpha_1}\dots\phi_{\alpha_N}\rangle & \text{if the state } |\phi_\alpha\rangle \text{ is not present in } |\phi_{\alpha_1}\dots\phi_{\alpha_N}\rangle\\[2pt] 0 & \text{otherwise} \end{cases} \qquad (8.118) $$
The anti-symmetrized basis vectors can be constructed using the $a^\dagger_{\alpha_j}$ operators as
$$ |\phi_{\alpha_1}\dots\phi_{\alpha_N}) = a^\dagger_{\alpha_1} a^\dagger_{\alpha_2}\cdots a^\dagger_{\alpha_N}|0\rangle \qquad (8.119) $$
Note that when we write the $|\cdots)$ states we do not need to keep track of the normalization factors. We do need to keep track of them when we use the $|\cdots\rangle$ vectors:
$$ |\phi_{\alpha_1}\dots\phi_{\alpha_N}\rangle = \frac{1}{\sqrt{\prod_\alpha n_\alpha!}}\; a^\dagger_{\alpha_1} a^\dagger_{\alpha_2}\cdots a^\dagger_{\alpha_N}|0\rangle \qquad (8.120, 8.121) $$
The symmetry requirement places certain requirements on the commutation of the creation operators. For example,
$$ a^\dagger_\alpha a^\dagger_\beta\,|0\rangle = |\phi_\alpha\phi_\beta) \qquad (8.122) $$
$$ = \xi\,|\phi_\beta\phi_\alpha) \qquad (8.123) $$
$$ = \xi\, a^\dagger_\beta a^\dagger_\alpha\,|0\rangle \qquad (8.124) $$
Thus,
$$ a^\dagger_\alpha a^\dagger_\beta - \xi\, a^\dagger_\beta a^\dagger_\alpha = 0 \qquad (8.125) $$
In other words, for $\xi = +1$ (bosons) the two operators commute, while for $\xi = -1$ (fermions) the operators anti-commute.
We can prove similar results for the adjoint of the $a^\dagger_\alpha$ operator. In short, for bosons we have a commutation relation between $a_\alpha$ and $a^\dagger_\beta$:
$$ [a_\alpha, a^\dagger_\beta] = \delta_{\alpha\beta} \qquad (8.126) $$
while for fermions we have an anti-commutation relation:
$$ \{a_\alpha, a^\dagger_\beta\} = a_\alpha a^\dagger_\beta + a^\dagger_\beta a_\alpha = \delta_{\alpha\beta} \qquad (8.127) $$
Finally, we define a set of field operators which are related to the creation/annihilation operators as
$$ \hat\psi(x) = \sum_\alpha \langle x|\alpha\rangle\, a_\alpha = \sum_\alpha \phi_\alpha(x)\, a_\alpha \qquad (8.128) $$
$$ \hat\psi^\dagger(x) = \sum_\alpha \langle\alpha|x\rangle\, a^\dagger_\alpha = \sum_\alpha \phi^*_\alpha(x)\, a^\dagger_\alpha \qquad (8.129) $$
These particular operators are useful in deriving various tight-binding approximations.
Say that $a^\dagger_\alpha$ places a particle in state $\alpha$ and $a_\alpha$ deletes a particle from state $\alpha$. The occupation of state $|\phi_\alpha\rangle$ is thus given by the number operator:
$$ \hat n_\alpha\,|\phi\rangle = a^\dagger_\alpha a_\alpha\,|\phi\rangle = n_\alpha\,|\phi\rangle \qquad (8.130) $$
If the state is unoccupied, $n_\alpha = 0$; if the state is occupied, the first operation removes the particle and the second replaces it, and $n_\alpha = 1$.
One-body operators can be evaluated as
$$ U = \sum_{\alpha\beta}\langle\alpha|U|\beta\rangle\, a^\dagger_\alpha a_\beta \qquad (8.131) $$
Likewise, in a number eigenstate,
$$ \langle U\rangle = \sum_\alpha \langle\alpha|U|\alpha\rangle\, n_\alpha \qquad (8.132) $$
Two-body operators are written as
$$ V = \frac{1}{2}\int dx\int dy\, \hat\psi^\dagger(x)\hat\psi^\dagger(y)\, V(x - y)\, \hat\psi(y)\hat\psi(x) = \frac{1}{2}\sum_{\alpha\beta\gamma\delta}(\alpha\beta|v|\gamma\delta)\, a^\dagger_\alpha a^\dagger_\beta a_\delta a_\gamma \qquad (8.133) $$
Occasionally, it is useful to write the symmetrized variant of this operator:
$$ V = \frac{1}{4}\sum_{\alpha\beta\gamma\delta}\left[(\alpha\beta|v|\gamma\delta) - (\alpha\beta|v|\delta\gamma)\right] a^\dagger_\alpha a^\dagger_\beta a_\delta a_\gamma \qquad (8.134) $$
$$ = \frac{1}{4}\sum_{\alpha\beta\gamma\delta}\langle\alpha\beta\|\gamma\delta\rangle\, a^\dagger_\alpha a^\dagger_\beta a_\delta a_\gamma \qquad (8.135) $$
8.5 Problems and Exercises
Exercise 8.3 Consider the case of two identical particles with positions $r_1$ and $r_2$ trapped in a central harmonic well with a mutually repulsive harmonic interaction. The Hamiltonian for this case can be written as
$$ H = -\frac{\hbar^2}{2m}\left(\nabla_1^2 + \nabla_2^2\right) + \frac{1}{2}m\omega^2\left(r_1^2 + r_2^2\right) - \frac{\lambda}{4}m\omega^2\,|r_1 - r_2|^2 \qquad (8.136) $$
where $\lambda$ is a dimensionless scaling factor. This can be a model for two Bosons or Fermions trapped in an optical trap, where $\lambda m\omega^2$ simply tunes the s-wave scattering cross-section for the two atoms.
1. Show that upon an appropriate change of variables
$$ u = (r_1 + r_2)/\sqrt{2} \qquad (8.137) $$
$$ v = (r_1 - r_2)/\sqrt{2} \qquad (8.138) $$
the Hamiltonian simplifies to two separable three-dimensional harmonic oscillators:
$$ H = \left(-\frac{\hbar^2}{2m}\nabla_u^2 + \frac{1}{2}m\omega^2 u^2\right) + \left(-\frac{\hbar^2}{2m}\nabla_v^2 + \frac{1}{2}(1 - \lambda)\,m\omega^2 v^2\right) \qquad (8.139) $$
2. What is the exact ground state of this system?
3. Assuming the particles are spin-1/2 fermions, what are the lowest-energy triplet and singlet states for this system?
4. What is the average distance of separation between the two particles in both the singlet and triplet configurations?
5. Now, solve this problem via the variational approach by taking your trial wavefunction to be a Slater determinant of the two lowest single-particle states:
$$ \Psi(r_1, r_2) = \begin{vmatrix} \psi(1) & \psi(2)\\ \phi(1) & \phi(2)\end{vmatrix} \qquad (8.140) $$
where $\psi$ and $\phi$ are the lowest-energy 3D harmonic oscillator states, modified such that we can take the width as a variational parameter:
$$ \phi(r) = N(\beta)\exp\left(-r^2/(2\beta^2)\right) $$
where $N(\beta)$ is the normalization factor. Construct the Hamiltonian and determine the lowest-energy state by taking the variation
$$ \delta\langle\psi|H|\psi\rangle = 0. $$
How does your variational estimate compare with the exact value for the energy?
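The algebra in part 1 is easy to spot-check numerically: the potential in Eq. 8.136 and the potential in Eq. 8.139 must agree at every configuration. A sketch (random test points; this checks only the potential identity and does not solve the rest of the exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
m, omega, lam = 1.0, 1.0, 0.3    # arbitrary test parameters

def V_original(r1, r2):
    # potential terms of Eq. 8.136
    return (0.5 * m * omega**2 * (r1 @ r1 + r2 @ r2)
            - (lam / 4.0) * m * omega**2 * (r1 - r2) @ (r1 - r2))

def V_transformed(r1, r2):
    # potential terms of Eq. 8.139 in the u, v coordinates
    u = (r1 + r2) / np.sqrt(2.0)
    v = (r1 - r2) / np.sqrt(2.0)
    return 0.5 * m * omega**2 * (u @ u) \
         + 0.5 * (1.0 - lam) * m * omega**2 * (v @ v)

r1, r2 = rng.normal(size=3), rng.normal(size=3)
print(V_original(r1, r2), V_transformed(r1, r2))  # identical values
```

The kinetic terms transform the same way since the map $(r_1, r_2) \to (u, v)$ is orthogonal.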
Exercise 8.4 In this problem we consider an electron in a linear triatomic molecule formed by three equidistant atoms. We will denote by $|A\rangle$, $|B\rangle$, and $|C\rangle$ the three orthonormal states of the electron, corresponding to three wavefunctions localized about the three nuclei, A, B, and C. While there may be more than these states in the physical system, we will confine ourselves to the subspace spanned by these three vectors.
If we neglect the transfer of an electron from one site to another site, its energy is described by the Hamiltonian, $H_o$. The eigenstates of $H_o$ are the three orthonormal states above with energies $E_A$, $E_B$, and $E_C$. For now, take $E_A = E_B = E_C = E_o$. The coupling (i.e. electron hopping) between the states is described by an additional term $W$ defined by its action on the basis vectors:
$$ W|A\rangle = -a\,|B\rangle $$
$$ W|B\rangle = -a\left(|A\rangle + |C\rangle\right) $$
$$ W|C\rangle = -a\,|B\rangle $$
where $a$ is a real positive constant.
1. Write both $H_o$ and $W$ in matrix form in the orthonormal basis and determine the eigenvalues $E_1$, $E_2$, $E_3$ and eigenvectors $|1\rangle$, $|2\rangle$, $|3\rangle$ for $H = H_o + W$. To do this numerically, pick your energy scale in terms of $E_o/a$.
2. Using the eigenvectors and eigenvalues you just determined, calculate the unitary time-evolution operator in the original basis, e.g.
$$ \langle A|U(t)|B\rangle = \langle A|\exp\left(-iHt/\hbar\right)|B\rangle $$
3. If at time $t = 0$ the electron is localized on site A (in state $|A\rangle$), calculate the probability of finding the electron in any other state at some later time $t$ (i.e. $P_A$, $P_B$, and $P_C$). Plot your results. Is there some later time at which the probability of finding the electron back in the original state is exactly 1? Give a physical interpretation of this result.
4. Repeat your calculation in parts 1 and 2, this time setting $E_A = E_C = E_o$ but $E_B = 3E_o$. Again, plot your results for $P_A$, $P_B$, and $P_C$ and give a physical interpretation of your results.
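Part 1 can be set up numerically along the following lines (a sketch using the hopping convention $W|A\rangle = -a|B\rangle$ adopted above, in units with $a = 1$ and $E_o = 0$; the plotting and interpretation are left to the exercise):

```python
import numpy as np
from scipy.linalg import expm

a, E0 = 1.0, 0.0
H = np.array([[E0, -a, 0.0],
              [-a, E0, -a],
              [0.0, -a, E0]])

evals, evecs = np.linalg.eigh(H)
# the spectrum is E0 - sqrt(2) a,  E0,  E0 + sqrt(2) a

def P_A(t):
    """Survival probability on site A (hbar = 1)."""
    U = expm(-1j * H * t)
    return abs(U[0, 0])**2

print(evals, P_A(0.0))
```

Because the level spacings are commensurate (all multiples of $\sqrt{2}\,a$), the survival probability returns exactly to 1 at $t = \sqrt{2}\,\pi$ in these units, the "revival" asked about in part 3.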
In the next series of exercises, we will use Spartan 02 for performing some elementary quantum chemistry calculations.
Exercise 8.5 Formaldehyde
1. Using the Builder in Spartan, build formaldehyde (H$_2$CO) and perform an energy minimization. Save this. When this is done, use Geometry > Measure Distance and Geometry > Measure Angle to measure the C-H and C-O bond lengths and the H-C-O bond angle.
Figure 8.4: Setup calculation dialog screen.
2. Set up a Geometry Optimization calculation (Setup > Calculations). This will open a dialog screen that looks like Fig. 8.4. This will set up a Hartree-Fock calculation using the 6-31G** basis and print out the orbitals and their energies. It also forces Spartan to preserve the symmetry point-group of the initial configuration. After you do this, also set up some calculations for generating pictures of orbitals. Setup > Graphics will open a dialog window for adding graphics calculations. Add the following: HOMO, LUMO, and potential. Close the dialog, and submit the job (Setup > Submit). Open the Spartan Monitor and wait until the job finishes. When this is done, use Geometry > Measure Distance and Geometry > Measure Angle to measure the C-H and C-O bond lengths and the H-C-O bond angle. This is the geometry predicted by a HF/6-31G** calculation.
3. Open Display > Surfaces and plot the HOMO and LUMO orbitals. Open up the text output (Display > Output) generated by the calculation and figure out which molecular orbitals correspond to the HOMO and LUMO orbitals you plotted. What are their energies and irreducible representations? Are these $\sigma$ or $\pi$ orbitals? Considering the IRREPs of each orbital, what is the lowest-energy optical transition for this molecule? What are the atomic orbitals used for the O and C atoms in each of these orbitals?
4. Repeat the calculation you did above, this time including a calculation of the vibrational frequencies for both the ground state and first excited state (using CIS). Which vibrational states undergo the largest change upon electronic excitation? Offer an explanation of this result, noting that the excited state is pyramidal and that the $S_o \to S_1$ electronic transition is an $n \to \pi^*$ transition. (This calculation will take some time.)
Table 8.1: Vibrational Frequencies of Formaldehyde (frequency columns to be filled in from your calculations)

  Symmetry   Description          S_0    S_1
  --------   -----------------    ----   ----
             sym CH str
             CO str
             CH_2 bend
             out-of-plane bend
             anti-sym CH str
             CH_2 rock
5. H$_2$ + C=O $\to$ CH$_2$=O transition state. Using the builder, make a model of the H$_2$ + C=O $\to$ CH$_2$=O transition state. For this you will need to make a model that looks something like what's shown in Fig. 8.6. Then, go to the Search > Transition State menu. For this you will need to click on the H-H bond (head) and then the C for the tail of the reaction path. Once you have done this, open Setup > Calculations and calculate Transition State Geometry at the Ground state with Hartree-Fock/6-31G**. Also compute the Frequencies. Close and submit the calculation. When the calculation finishes, examine the vibrational frequencies. Is there at least one imaginary frequency? Why do you expect only one such frequency? What does this tell you about the topology of the potential energy surface at this point? Record the energy at the transition state. Now, do two separate calculations of the isolated reactants. Combine these with the calculation you did above for the formaldehyde equilibrium geometry and sketch a reaction energy profile.
Figure 8.5: HOMO-1, HOMO and LUMO for CH$_2$=O.
Figure 8.6: Transition state geometry for H$_2$ + C=O $\to$ CH$_2$=O. The arrow indicates the reaction path.
Appendix A
Physical Constants and Conversion Factors
Table A.1: Physical Constants

  Constant                 Symbol   SI Value
  Speed of light           c        299792458 m/s (exact)
  Charge of proton         e        1.6021764 x 10^-19 C
  Permittivity of vacuum   eps_o    8.8541878 x 10^-12 J^-1 C^2 m^-1
  Avogadro's number        N_A      6.022142 x 10^23 mol^-1
  Rest mass of electron    m_e      9.109382 x 10^-31 kg
Table A.2: Atomic Units. In atomic units, the following quantities are unity: $\hbar$, $e$, $m_e$, $a_o$.

  Quantity                  Symbol or expression          CGS or SI equivalent
  Mass                      m_e                           9.109382 x 10^-31 kg
  Charge                    e                             1.6021764 x 10^-19 C
  Angular momentum          hbar                          1.054571 x 10^-34 J s
  Length (bohr)             a_o = hbar^2/(m_e e^2)        0.5291772 x 10^-10 m
  Energy (hartree)          E_h = e^2/a_o                 4.35974 x 10^-18 J
  Time                      t_o = hbar^3/(m_e e^4)        2.41888 x 10^-17 s
  Velocity                  e^2/hbar                      2.18770 x 10^6 m/s
  Force                     e^2/a_o^2                     8.23872 x 10^-8 N
  Electric field            e/a_o^2                       5.14221 x 10^11 V/m
  Electric potential        e/a_o                         27.2114 V
  Fine structure constant   alpha = e^2/(hbar c)          1/137.036
  Magnetic moment           beta_e = e hbar/(2 m_e)       9.27399 x 10^-24 J/T
  Permittivity of vacuum    eps_o = 1/(4 pi)              8.8541878 x 10^-12 J^-1 C^2 m^-1
  Hydrogen atom IP          alpha^2 m_e c^2/2 = E_h/2     13.60580 eV
Table A.3: Useful orders of magnitude

  Quantity                        Approximate value   Exact value
  Electron rest mass m_e c^2      0.5 MeV             0.511003 MeV
  Proton rest mass m_p c^2        1000 MeV            938.280 MeV
  Neutron rest mass m_n c^2       1000 MeV            939.573 MeV
  Proton/electron mass ratio      m_p/m_e ~ 2000      1836.1515
One electron volt corresponds to a:

  Quantity      Relation         Approximate value          Exact value
  frequency     E = h nu         nu = 2.4 x 10^14 Hz        2.417970 x 10^14 Hz
  wavelength    lambda = c/nu    lambda = 12000 Angstrom    12398.52 Angstrom
  wave number   1/lambda         8000 cm^-1                 8065.48 cm^-1
  temperature   E = kT           T = 12000 K                11604.5 K
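These conversions follow directly from $E = h\nu$, $\lambda = c/\nu$, and $E = k_B T$. A sketch that recomputes the exact column from fundamental constants (the constant values are typed in here by hand, so small last-digit differences from the table are expected):

```python
# Conversion factors for 1 eV, from E = h*nu, lambda = c/nu, E = kB*T.
h  = 6.62607015e-34    # Planck constant, J s
c  = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23      # Boltzmann constant, J/K
eV = 1.602176634e-19   # electron volt, J

nu  = eV / h                  # frequency, Hz        ~2.418e14
lam = (c / nu) * 1e10         # wavelength, Angstrom ~12398
wn  = nu / c / 100.0          # wave number, cm^-1   ~8065.5
T   = eV / kB                 # temperature, K       ~11604.5
print(nu, lam, wn, T)
```

Working through one line by hand: $\nu = 1.602\times10^{-19}\,\mathrm{J} / 6.626\times10^{-34}\,\mathrm{J\,s} \approx 2.418\times10^{14}$ Hz, matching the table.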
Appendix B
Mathematical Results and Techniques
to Know and Love
B.1 The Dirac Delta Function
B.1.1 Definition
The Dirac delta function is not really a function, per se; it is really a generalized function defined by the relation
$$ f(x_o) = \int dx\, \delta(x - x_o)\, f(x). \qquad (B.1) $$
The integral picks out the first term in the Taylor expansion of $f(x)$ about the point $x_o$, and this relation must hold for any function of $x$. For example, let's take a function which is zero only at some arbitrary point, $x_o$. Then the integral becomes
$$ \int dx\, \delta(x - x_o)\, f(x) = 0. \qquad (B.2) $$
For this to be true for any arbitrary function, we have to conclude that
$$ \delta(x) = 0, \quad \text{for } x \neq 0. \qquad (B.3) $$
Furthermore, from the Riemann-Lebesgue theory of integration,
$$ \int f(x)g(x)\, dx = \lim_{h\to 0}\, h\sum_n f(x_n)\, g(x_n), \qquad (B.4) $$
the only way for the defining relation to hold is for
$$ \delta(0) = \infty. \qquad (B.5) $$
This is a very odd function: it is zero everywhere except at one point, at which it is infinite. So it is not a function in the regular sense. In fact, it is more like a distribution function which is infinitely narrow. If we set $f(x) = 1$, then we can see that the $\delta$-function is normalized to unity:
$$ \int dx\, \delta(x - x_o) = 1. \qquad (B.6) $$
B.1.2 Properties
Some useful properties of the $\delta$-function are as follows:
1. It is real: $\delta^*(x) = \delta(x)$.
2. It is even: $\delta(-x) = \delta(x)$.
3. $\delta(ax) = \delta(x)/a$ for $a > 0$.
4. $\int \delta'(x)\, f(x)\, dx = -f'(0)$.
5. $\delta'(-x) = -\delta'(x)$.
6. $x\,\delta(x) = 0$.
7. $\delta(x^2 - a^2) = \frac{1}{2a}\left(\delta(x + a) + \delta(x - a)\right)$.
8. $f(x)\,\delta(x - a) = f(a)\,\delta(x - a)$.
9. $\int \delta(x - a)\,\delta(x - b)\, dx = \delta(a - b)$.
Exercise B.1 Prove the above relations.
B.1.3 Spectral representations
The $\delta$-function can be thought of as the limit of a sequence of regular functions. For example,
$$ \delta(x) = \lim_{a\to\infty}\frac{1}{\pi}\frac{\sin(ax)}{x}. $$
This is the "sinc" or diffraction function, with a width proportional to $1/a$. For any finite value of $a$, the function is regular. As we make $a$ larger, the width decreases and the function focuses about $x = 0$. This is shown in Fig. B.1 for increasing values of $a$. Notice that as $a$ increases, the peak increases and the function itself becomes extremely oscillatory.
Another extremely useful representation is the Fourier representation
$$ \delta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx}\, dk. \qquad (B.7) $$
We used this representation in Eq. 7.205 to go from an energy representation to an integral over time.
Finally, another form is in terms of Gaussian functions, as shown in Fig. B.2:
$$ \delta(x) = \lim_{a\to\infty}\sqrt{\frac{a}{\pi}}\, e^{-ax^2}. \qquad (B.8) $$
Figure B.1: $\sin(ax)/x$ representation of the Dirac $\delta$-function.
Figure B.2: Gaussian representation of the $\delta$-function.
Here the height is proportional to $\sqrt{a}$ and the width to the standard deviation, $1/\sqrt{2a}$.
Other representations include the Lorentzian form,
$$ \delta(x) = \lim_{a\to 0}\frac{1}{\pi}\frac{a}{x^2 + a^2}, $$
and the derivative form
$$ \delta(x) = \frac{d}{dx}\theta(x) $$
where $\theta(x)$ is the Heaviside step function
$$ \theta(x) = \begin{cases} 0, & x < 0\\ 1, & x > 0 \end{cases} \qquad (B.9) $$
This can be understood through the cumulative distribution function
$$ \theta(x) = \int_{-\infty}^{x}\delta(y)\, dy. \qquad (B.10) $$
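Each of these nascent-delta families can be tested numerically: $\int \delta_a(x) f(x)\, dx$ should approach $f(0)$ as the limit is taken. A sketch using the Gaussian and Lorentzian representations (the test function and parameter values are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.cos(x)        # smooth test function, f(0) = 1

def gauss_delta(a):
    # sqrt(a/pi) exp(-a x^2), sharply peaked at 0 for large a
    g = lambda x: np.sqrt(a / np.pi) * np.exp(-a * x**2) * f(x)
    val, _ = quad(g, -10.0, 10.0, points=[0.0], limit=400)
    return val

def lorentz_delta(a):
    # (1/pi) a/(x^2 + a^2), sharply peaked at 0 for small a
    g = lambda x: (1.0 / np.pi) * a / (x**2 + a**2) * f(x)
    val, _ = quad(g, -10.0, 10.0, points=[0.0], limit=400)
    return val

print(gauss_delta(1e4), lorentz_delta(1e-3))   # both approach f(0) = 1
```

The `points=[0]` hint tells the adaptive quadrature where the spike sits, which matters once the width is far below the integration range.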
B.2 Coordinate systems
In each case $U$ is a function of the coordinates and $\vec{A}$ is a vector.

B.2.1 Cartesian
$$ U = U(x,y,z), \qquad \vec{A} = A_x\,\hat{i} + A_y\,\hat{j} + A_z\,\hat{k} $$
Volume element: $dV = dx\, dy\, dz$
Dot product:
$$ \vec{A}\cdot\vec{B} = A_x B_x + A_y B_y + A_z B_z $$
Gradient:
$$ \vec\nabla U = \frac{\partial U}{\partial x}\hat{i} + \frac{\partial U}{\partial y}\hat{j} + \frac{\partial U}{\partial z}\hat{k} $$
Laplacian:
$$ \nabla^2 U = \frac{\partial^2 U}{\partial x^2} + \frac{\partial^2 U}{\partial y^2} + \frac{\partial^2 U}{\partial z^2} $$
Divergence:
$$ \vec\nabla\cdot\vec{A} = \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z} $$
Curl:
$$ \vec\nabla\times\vec{A} = \left(\frac{\partial A_z}{\partial y} - \frac{\partial A_y}{\partial z}\right)\hat{i} + \left(\frac{\partial A_x}{\partial z} - \frac{\partial A_z}{\partial x}\right)\hat{j} + \left(\frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y}\right)\hat{k} $$
B.2.2 Spherical
Coordinates: $(r, \theta, \varphi)$
Transformation to cartesian:
$$ x = r\cos\varphi\sin\theta, \quad y = r\sin\varphi\sin\theta, \quad z = r\cos\theta $$
$$ U = U(r,\theta,\varphi), \qquad \vec{A} = A_r\,\hat{r} + A_\theta\,\hat\theta + A_\varphi\,\hat\varphi $$
Components in terms of cartesian components (with $A_\rho$ the cylindrical radial component):
$$ A_r = A_\rho\sin\theta + A_z\cos\theta \qquad (B.11) $$
$$ A_\theta = A_\rho\cos\theta - A_z\sin\theta \qquad (B.12) $$
$$ A_\varphi = -A_x\sin\varphi + A_y\cos\varphi \qquad (B.13) $$
$$ A_\rho = A_x\cos\varphi + A_y\sin\varphi \qquad (B.14) $$
Arc length: $ds^2 = dr^2 + r^2\, d\theta^2 + r^2\sin^2\theta\, d\varphi^2$
Volume element: $dV = r^2\sin\theta\, dr\, d\theta\, d\varphi$
Dot product: $\vec{A}\cdot\vec{B} = A_r B_r + A_\theta B_\theta + A_\varphi B_\varphi$
Gradient:
$$ \vec\nabla U = \frac{\partial U}{\partial r}\hat{r} + \frac{1}{r}\frac{\partial U}{\partial\theta}\hat\theta + \frac{1}{r\sin\theta}\frac{\partial U}{\partial\varphi}\hat\varphi $$
Laplacian:
$$ \nabla^2 U = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial U}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial U}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 U}{\partial\varphi^2} $$
Divergence:
$$ \vec\nabla\cdot\vec{A} = \frac{1}{r^2}\frac{\partial (r^2 A_r)}{\partial r} + \frac{1}{r\sin\theta}\frac{\partial(\sin\theta\, A_\theta)}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial A_\varphi}{\partial\varphi} $$
Curl:
$$ \vec\nabla\times\vec{A} = \frac{1}{r\sin\theta}\left[\frac{\partial(\sin\theta\, A_\varphi)}{\partial\theta} - \frac{\partial A_\theta}{\partial\varphi}\right]\hat{r} + \left[\frac{1}{r\sin\theta}\frac{\partial A_r}{\partial\varphi} - \frac{1}{r}\frac{\partial(r A_\varphi)}{\partial r}\right]\hat\theta + \frac{1}{r}\left[\frac{\partial(r A_\theta)}{\partial r} - \frac{\partial A_r}{\partial\theta}\right]\hat\varphi $$
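The spherical volume element is easy to sanity-check numerically: integrating $dV = r^2\sin\theta\, dr\, d\theta\, d\varphi$ over a unit ball must give $4\pi/3$. A sketch:

```python
import numpy as np
from scipy.integrate import tplquad

# Volume of the unit ball from the spherical volume element.
# tplquad integrates func(z, y, x): here x = r, y = theta, z = phi.
vol, err = tplquad(lambda phi, theta, r: r**2 * np.sin(theta),
                   0.0, 1.0,                                  # r
                   lambda r: 0.0, lambda r: np.pi,            # theta
                   lambda r, t: 0.0, lambda r, t: 2*np.pi)    # phi
print(vol, 4 * np.pi / 3)
```

The same check with the Cartesian element $dx\, dy\, dz$ over the region $x^2 + y^2 + z^2 \le 1$ gives the same number, which is exactly what the Jacobian factor $r^2\sin\theta$ guarantees.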
B.2.3 Cylindrical
Coordinates: $(\rho, \varphi, z)$
Transformation to cartesian:
$$ x = \rho\cos\varphi, \quad y = \rho\sin\varphi, \quad z = z $$
$$ U = U(\rho,\varphi,z), \qquad \vec{A} = A_\rho\,\hat\rho + A_\varphi\,\hat\varphi + A_z\,\hat{k} $$
Volume element: $dV = \rho\, d\rho\, d\varphi\, dz$
Dot product: $\vec{A}\cdot\vec{B} = A_\rho B_\rho + A_\varphi B_\varphi + A_z B_z$
Gradient:
$$ \vec\nabla U = \frac{\partial U}{\partial\rho}\hat\rho + \frac{1}{\rho}\frac{\partial U}{\partial\varphi}\hat\varphi + \frac{\partial U}{\partial z}\hat{k} $$
Laplacian:
$$ \nabla^2 U = \frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial U}{\partial\rho}\right) + \frac{1}{\rho^2}\frac{\partial^2 U}{\partial\varphi^2} + \frac{\partial^2 U}{\partial z^2} $$
Divergence:
$$ \vec\nabla\cdot\vec{A} = \frac{1}{\rho}\frac{\partial(\rho A_\rho)}{\partial\rho} + \frac{1}{\rho}\frac{\partial A_\varphi}{\partial\varphi} + \frac{\partial A_z}{\partial z} $$
Curl:
$$ \vec\nabla\times\vec{A} = \left[\frac{1}{\rho}\frac{\partial A_z}{\partial\varphi} - \frac{\partial A_\varphi}{\partial z}\right]\hat\rho + \left[\frac{\partial A_\rho}{\partial z} - \frac{\partial A_z}{\partial\rho}\right]\hat\varphi + \frac{1}{\rho}\left[\frac{\partial(\rho A_\varphi)}{\partial\rho} - \frac{\partial A_\rho}{\partial\varphi}\right]\hat{k} $$
Appendix C
Mathematica Notebook Pages
[Included page: NIST "Periodic Table: Atomic Properties of the Elements" (March 1999, U.S. Department of Commerce, National Institute of Standards and Technology, Standard Reference Data Program; physics.nist.gov/atomic). For each element the poster lists the atomic number, symbol, name, ground-state electron configuration, ground-state level, atomic weight, and ionization energy in eV; (...) indicates the mass number of the most stable isotope. The poster also tabulates frequently used fundamental constants:

1 second = 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of 133Cs
speed of light in vacuum: c = 299 792 458 m/s (exact)
Planck constant: h = 6.6261 x 10^-34 J s
elementary charge: e = 1.6022 x 10^-19 C
electron mass: m_e = 9.1094 x 10^-31 kg (m_e c^2 = 0.5110 MeV)
proton mass: m_p = 1.6726 x 10^-27 kg
fine-structure constant: alpha = 1/137.036
Rydberg constant: R_inf = 10 973 732 m^-1 (R_inf hc = 13.6057 eV)
Boltzmann constant: k = 1.3807 x 10^-23 J/K

For the most accurate values of these and other constants, visit physics.nist.gov/constants.]
