
Professor Dmitry Garanin

Statistical physics

October 24, 2012

I. PREFACE

Statistical physics considers systems of a large number of entities (particles) such as atoms, molecules, spins, etc. For these systems it is impossible, and even meaningless, to study the full microscopic dynamics. The only relevant information is, say, how many atoms have a particular energy; from this one can calculate the observable thermodynamic values. That is, one has to know the distribution function of the particles over energies that defines the macroscopic properties. This gives the name statistical physics and defines the scope of this subject.

The approach outlined above can be used both at and oﬀ equilibrium. The branch of physics studying non-

equilibrium situations is called kinetics. The equilibrium situations are within the scope of statistical physics in the

narrow sense that belongs to this course. It turns out that at equilibrium the energy distribution function has an

explicit general form and the only problem is to calculate the observables.

The formalism of statistical physics can be developed for both classical and quantum systems. Obtaining the energy distribution and calculating observables is simpler in the classical case. However, the very formulation of the method is

more transparent within the quantum mechanical formalism. In addition, the absolute value of the entropy, including

its correct value at T → 0, can only be obtained in the quantum case. To avoid double work, we will consider

only quantum statistical physics in this course, limiting ourselves to systems without interaction. The more general

quantum results will recover their classical forms in the classical limit.

From quantum mechanics it follows that the states of the system do not change continuously (as in classical physics)

but are quantized. There is a huge number of discrete quantum states with corresponding energy values being the main

parameter characterizing these states. It is the easiest to formulate quantum statistics for systems of noninteracting

particles, then the results can be generalized. In the absence of interaction, each particle has its own set of quantum

states in which it can be at the moment, and for identical particles these sets of states are identical. The particles

can be distributed over their own quantum states in a great number of different ways, the so-called realizations. Each realization of this distribution is called a microstate of the system. As said above, the information contained in the microstates is excessive; the only meaningful information is how many particles Ni are in a particular quantum state i. These numbers Ni specify what in statistical physics is called a macrostate. If these numbers are known, the

energy and other quantities of the system can be found. It should be noted that the statistical macrostate contains

more information than the macroscopic physical quantities that follow from it.

For a large number of particles, each macrostate k can be realized by a very large number wk of microstates, the

so-called thermodynamic probability. The main assumption of statistical physics is that all microstates occur with

the same probability. Then the probability of the macrostate k is simply proportional to wk . One can introduce the

true probability of the macrostate k as

pk = wk/Ω,   Ω ≡ ∑k wk.   (1)

These probabilities are normalized,

1 = ∑k pk.   (2)

The probability of a macrostate depends on the occupation numbers,

pk = pk (N1, N2, . . .).   (3)


This dependence will be speciﬁed below. For an isolated system the number of particles N and the energy U are

conserved, thus the numbers Ni satisfy the constraints

∑i Ni = N,   (4)

∑i Ni εi = U,   (5)

where εi is the energy of the particle in the state i. This limits the variety of the macroscopic states k.

The number of particles N̄i averaged over all macrostates k has the form

N̄i = ∑k Ni^(k) pk,   (6)

where Ni^(k) is the number of particles in the state i within the macrostate k. For different macrostates the thermodynamic probabilities differ by a large amount. Then macrostates having small pk practically never occur, and the state of the system is dominated by the macrostates with the largest pk. Consideration of particular models shows that the maximum of pk is very sharp for a large number of particles, whereas Ni^(k) is a smooth function of k. In this case Eq. (6) becomes

N̄i ≅ Ni^(kmax) ∑k pk = Ni^(kmax),   (7)

where kmax labels the macrostate with the largest pk.

In the case of large N, the dominating true probability pk should be found by maximization with respect to all Ni

obeying the two constraints above.

III. TWO-STATE PARTICLES

A tossed coin can land in two positions: head up or tail up. Considering the coin as a particle, one can say that

this particle has two “quantum” states, 1 corresponding to the head and 2 corresponding to the tail. If N coins are

tossed, this can be considered as a system of N particles with two quantum states each. The microstates of the system

are speciﬁed by the states occupied by each coin. As each coin has 2 states, there are total

Ω = 2^N   (8)

microstates. The macrostates of this system are deﬁned by the numbers of particles in each state, N1 and N2 . These

two numbers satisfy the constraint condition (4), i.e., N1 + N2 = N. Thus one can take, say, N1 as the number k

labeling macrostates. The number of microstates in one macrostate (that is, the number of different microstates that belong to the same macrostate) is given by

wN1 = N! / [N1! (N − N1)!]   (9)

(the binomial coefficient).

This formula can be derived as follows. We have to pick N1 particles to be in the state 1; all others will be in the state 2. How many ways are there to do this? The first “1” particle can be picked in N ways, the second one can be picked in N − 1 ways since we can choose from the remaining N − 1 particles only. The third “1” particle can be picked in N − 2 different ways, etc. Thus the number of different ways to pick the particles is

N × (N − 1) × (N − 2) × . . . × (N − N1 + 1) = N! / (N − N1)!,   (10)

where the factorial is defined by

N! ≡ N × (N − 1) × . . . × 2 × 1,   0! = 1.   (11)

The expression in Eq. (10) is not yet the thermodynamic probability wN1 because it contains multiple counting of the same microstates. The realizations in which the N1 “1” particles are picked in different orders have been counted as different microstates, whereas they are the same microstate. To correct for the multiple counting, one has to divide

[Figure: w(N1)/w(N/2) vs (N1 − N/2)/(N/2) for N = 4, 16, and 1000, showing how the peak at N1 = N/2 sharpens as N grows.]

by the number of permutations N1 ! of the N1 particles that yields Eq. (9). One can check that the condition (1) is

satisﬁed,

∑_{N1=0}^{N} wN1 = ∑_{N1=0}^{N} N! / [N1! (N − N1)!] = 2^N.   (12)

The thermodynamic probability wN1 has a maximum at N1 = N/2, half of the coins landing head up and half tail up. This macrostate is the most probable state. Indeed, as for an individual coin the probabilities to land head up and tail up are both equal to 0.5, this is what we expect. For large N the maximum of wN1 on N1 becomes sharp.

To prove that N1 = N/2 is the maximum of wN1 , one can rewrite Eq. (9) in terms of the new variable p = N1 − N/2

as

wN1 = N! / [(N/2 + p)! (N/2 − p)!].   (13)

One can see that wN1 is symmetric around N1 = N/2, i.e., p = 0. Working out the ratio of the values at neighboring points,

wN/2+1 / wN/2 = (N/2)! (N/2)! / [(N/2 + 1)! (N/2 − 1)!] = (N/2) / (N/2 + 1) < 1,   (14)

one can see that N1 = N/2 is indeed a maximum. The behavior at large N can be analyzed with the help of Stirling's formula

N! ≅ √(2πN) (N/e)^N.   (15)

Then Eq. (13) becomes

wN1 ≅ √(2πN) (N/e)^N / {√(2π(N/2 + p)) [(N/2 + p)/e]^{N/2+p} √(2π(N/2 − p)) [(N/2 − p)/e]^{N/2−p}}
= √(2/(πN)) [1 − (2p/N)²]^{−1/2} N^N / [(N/2 + p)^{N/2+p} (N/2 − p)^{N/2−p}]
= wN/2 / {√[1 − (2p/N)²] (1 + 2p/N)^{N/2+p} (1 − 2p/N)^{N/2−p}},   (16)

where

wN/2 ≅ √(2/(πN)) 2^N   (17)

is the maximal value of the thermodynamic probability. Eq. (16) can be expanded for p ≪ N. Since p enters both the

bases and the exponents, one has to be careful and expand the logarithm of wN1 rather than wN1 itself. The square

root term in Eq. (16) can be discarded as it gives a negligible contribution of order p²/N². One obtains

ln wN1 ≅ ln wN/2 − (N/2 + p) ln(1 + 2p/N) − (N/2 − p) ln(1 − 2p/N)
≅ ln wN/2 − (N/2 + p)[2p/N − (1/2)(2p/N)²] − (N/2 − p)[−2p/N − (1/2)(2p/N)²]
≅ ln wN/2 − p − 2p²/N + p²/N + p − 2p²/N + p²/N
= ln wN/2 − 2p²/N   (18)

and thus

wN1 ≅ wN/2 exp(−2p²/N).   (19)

One can see that wN1 becomes very small if |p| ≡ |N1 − N/2| ≳ √N, a scale that is much smaller than N for large N. That is, wN1 is small in the main part of the interval 0 ≤ N1 ≤ N and is sharply peaked near N1 = N/2.

V. MANY-STATE PARTICLES

The results obtained in Sec. III for two-state particles can be generalized to n-state particles. We are looking for the number of ways to distribute N particles over n boxes so that there are Ni particles in the ith box. That is, we look for the number of microstates in the macrostate described by the numbers Ni. The result is given by

w = N! / (N1! N2! . . . Nn!) = N! / ∏_{i=1}^{n} Ni!.   (20)

This formula can be obtained by using Eq. (9) successively. The number of ways to put N1 particles in box 1 and the

other N − N1 in other boxes is given by Eq. (9). Then the number of ways to put N2 particles in box 2 is given by

a similar formula with N → N − N1 (there are only N − N1 particles after N1 particles have been put in box 1) and

N1 → N2. These numbers of ways should be multiplied. Then one considers box 3, etc., until the last box n. The resulting

number of microstates is

w = N!/[N1!(N − N1)!] × (N − N1)!/[N2!(N − N1 − N2)!] × (N − N1 − N2)!/[N3!(N − N1 − N2 − N3)!] × . . .
× (Nn−2 + Nn−1 + Nn)!/[Nn−2!(Nn−1 + Nn)!] × (Nn−1 + Nn)!/(Nn−1! Nn!) × Nn!/(Nn! 0!).   (21)

In this formula, all numerators except for the ﬁrst one and all second terms in the denominators cancel each other, so

that Eq. (20) follows.
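The two countings can be confirmed with a short numerical sketch (the occupation numbers below are arbitrary): Eq. (20) is compared with the successive application of the binomial formula (9), as in Eq. (21).

```python
from math import comb, factorial, prod

# Occupation numbers N_i of n = 4 boxes (arbitrary illustrative values); N = sum.
occ = [3, 1, 4, 2]
N = sum(occ)

# Eq. (20): w = N! / (N_1! N_2! ... N_n!)
w_direct = factorial(N) // prod(factorial(Ni) for Ni in occ)

# Eq. (21): apply the binomial formula (9) successively, box by box.
w_successive = 1
remaining = N
for Ni in occ:
    w_successive *= comb(remaining, Ni)  # choose N_i of the remaining particles
    remaining -= Ni

assert w_direct == w_successive  # the two countings agree
```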


As said above, the main postulate of statistical physics is that the observed macrostate is the macrostate realized by

the greatest number of microstates, the most probable of all macrostates. For instance, a macrostate of an ideal gas

with all molecules in one half of the container is much less probable than the macrostate with the molecules equally

distributed over the whole container. (Like the state with all coins landed head up is much less probable than the state

with half of the coins landed head up and the other half tail up). For this reason, if the initial state is all molecules in one half of the container, then in the course of evolution the system will come to the most probable state with the molecules equally distributed over the whole container, and will stay in this state forever.

We have seen in thermodynamics that an isolated system, initially in a non-equilibrium state, evolves to the equilib-

rium state characterized by the maximal entropy. On this way one comes to the idea that entropy S and thermodynamic

probability w should be related, one being a monotonic function of the other. The form of this function can be found if one notices that entropy is additive while the thermodynamic probability is multiplicative. If a system consists of two subsystems that weakly interact with each other (which is almost always the case, as the intersubsystem interaction is limited to the small region near the interface between them), then S = S1 + S2 and w = w1 w2. If one chooses (L.

Boltzmann)

S = kB ln w, (22)

then S = S1 + S2 and w = w1 w2 are in accord since ln(w1 w2) = ln w1 + ln w2. In Eq. (22) kB is the Boltzmann constant, kB = 1.38 × 10⁻²³ J/K.

It will be shown that the statistically deﬁned entropy above coincides with the thermodynamic entropy at equilibrium.

On the other hand, the statistical entropy is well defined for nonequilibrium states as well, whereas the thermodynamic entropy is usually undefined off equilibrium.

In the formalism of quantum mechanics, quantized states and their energies E are the solutions of the eigenvalue

problem for a matrix or for a diﬀerential operator. In the latter case the problem is formulated as the so-called

stationary Schrödinger equation

ĤΨ = EΨ,   (24)

where Ψ = Ψ(r) is the complex function called the wave function. The physical interpretation of the wave function is that |Ψ(r)|² gives the probability for a particle to be found near the space point r. More precisely, the number of measurements dN of the total N measurements in which the particle is found in the elementary volume d³r = dx dy dz around r is given by

given by

dN = N |Ψ(r)|² d³r.   (25)

The wave function is normalized as

1 = ∫ d³r |Ψ(r)|².   (26)

The operator Ĥ in Eq. (24) is the so-called Hamilton operator or Hamiltonian. For one particle, it is the sum of

kinetic and potential energies

Ĥ = p̂²/(2m) + U(r),   (27)

where the classical momentum p is replaced by the operator

p̂ = −iħ ∂/∂r.   (28)


The Schrödinger equation can be formulated both for single particles and the systems of particles. In this course we

will restrict ourselves to single particles. In this case the notation ε will be used for single-particle energy levels instead

of E. One can see that Eq. (24) is a second-order linear diﬀerential equation. It is an ordinary diﬀerential equation

in one dimension and a partial differential equation in two and more dimensions. If there is a potential energy, this is a linear differential equation with variable coefficients that can be solved analytically only in special cases. In the case U = 0 it is a linear differential equation with constant coefficients that is easy to solve analytically.

An important component of the quantum formalism is boundary conditions for the wave function. In particular, for

a particle inside a box with rigid walls the boundary condition is Ψ = 0 at the walls, so that Ψ(r) joins smoothly with the value Ψ(r) = 0 everywhere outside the box. In this case it is also guaranteed that |Ψ(r)|² is integrable and Ψ can be normalized according to Eq. (26). It turns out that a solution of Eq. (24) satisfying the boundary conditions exists only for a discrete set of E values called eigenvalues. The corresponding Ψ are called eigenfunctions, and together they are called eigenstates. Eigenvalue problems, both for matrices and differential operators, were known in

mathematics before the advent of quantum mechanics. The creators of quantum mechanics, mainly Schrödinger and

Heisenberg, realised that this mathematical formalism can accurately describe quantization of energy levels observed in

experiments. Whereas Schrödinger formulated his famous Schrödinger equation, Heisenberg made a major contribution to the description of quantum systems with matrices.

As an illustration, consider a particle in a one-dimensional rigid box, 0 ≤ x ≤ L. In this case the momentum becomes

p̂ = −iħ d/dx   (29)

and Eq. (24) takes the form

−(ħ²/2m) d²Ψ(x)/dx² = εΨ(x)   (30)

and can be represented as

d²Ψ(x)/dx² + k²Ψ(x) = 0,   k² ≡ 2mε/ħ².   (31)

The solution of this equation satisfying the boundary conditions Ψ(0) = Ψ(L) = 0 has the form

Ψν(x) = A sin(kν x),   kν = (π/L)ν,   ν = 1, 2, 3, . . . ,   (32)

where the eigenstates are labeled by the index ν. The constant A following from Eq. (26) is A = √(2/L). The energy eigenvalues are given by

εν = ħ²kν²/(2m) = π²ħ²ν²/(2mL²).   (33)

One can see that the energy ε is quadratic in the momentum p = ħk (the de Broglie relation), as it should be, but

the energy levels are discrete because of the quantization. For very large boxes, L → ∞, the energy levels become

quasicontinuous. The lowest-energy level with ν = 1 is called the ground state.

For a three-dimensional box with sides Lx , Ly , and Lz a similar calculation yields the energy levels parametrized

by the three quantum numbers ν x , ν y , and ν z :

ε_{νx,νy,νz} = (π²ħ²/2m) (νx²/Lx² + νy²/Ly² + νz²/Lz²).   (34)

Here the ground state corresponds to ν x = ν y = ν z = 1. One can order the states in increasing ε and number them

by the index j, the ground state being j = 1. If Lx = Ly = Lz = L, then

ε_{νx,νy,νz} = [π²ħ²/(2mL²)] (νx² + νy² + νz²) = π²ħ²ν²/(2mL²),   (35)

and the same value of εj can be realized for diﬀerent sets of ν x , ν y , and ν z , for instance, (1,5,12), (5,12,1), etc. The

number of diﬀerent sets of (ν x , ν y , ν z ) having the same εj is called degeneracy and is denoted as gj . States with ν x =

ν y = ν z have gj = 1 and they are called non-degenerate. If only two of the numbers ν x , ν y , and ν z coincide, the

degeneracy is gj = 3. If all numbers are diﬀerent, gj = 3! = 6. If one sums over the energy levels parametrized by j,

one has to multiply the summands by the corresponding degeneracies.
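The degeneracies can be tallied directly. A minimal sketch (the range of quantum numbers is an arbitrary illustrative cutoff) counts how many triples (νx, νy, νz) share the same νx² + νy² + νz², and hence, by Eq. (35), the same energy:

```python
from collections import Counter

# Energy of the state (nx, ny, nz) in units of pi^2 hbar^2 / (2 m L^2), Eq. (35):
# just the integer nx^2 + ny^2 + nz^2. Tally how often each value occurs.
nu_max = 12  # arbitrary cutoff for illustration
levels = Counter()
for nx in range(1, nu_max + 1):
    for ny in range(1, nu_max + 1):
        for nz in range(1, nu_max + 1):
            levels[nx**2 + ny**2 + nz**2] += 1

# Ground state (1,1,1): all numbers equal, non-degenerate, g = 1.
assert levels[3] == 1
# (1,1,2) and permutations: two numbers coincide, g = 3.
assert levels[6] == 3
# (1,2,3) and permutations: all numbers different, g = 3! = 6.
assert levels[14] == 6
# Note: at higher energies, accidental degeneracies between different
# triples can make g_j larger than these combinatorial values.
```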


C. Density of states

For systems of a large size and thus very finely quantized states, one can define the density of states ρ(ε) as the number of energy levels dnε in the interval dε, that is,

dnε = ρ(ε) dε.   (36)

It is convenient to start calculation of ρ(ε) by introducing the number of states dnν x ν y ν z in the “elementary volume”

dν x dν y dν z , considering ν x , ν y , and ν z as continuous. The result obviously is

dn_{νx νy νz} = dνx dνy dνz,   (37)

that is, the corresponding density of states is 1. Now one can rewrite Eq. (34) in the form

ε = (π²ħ²/2m) (ν̃x² + ν̃y² + ν̃z²) = π²ħ²ν̃²/(2m),   (38)

where

ν̃x ≡ νx/Lx,   (39)

and similarly for ν̃y and ν̃z.

The advantage of using this variable is that the vector ν̃ = (ν̃x, ν̃y, ν̃z) is proportional to the momentum p, thus ε ∝ ν̃². Eq. (37) becomes

dn_{ν̃x ν̃y ν̃z} = Lx Ly Lz dν̃x dν̃y dν̃z = V dν̃x dν̃y dν̃z,   (40)

where V = Lx Ly Lz is the volume of the box. After that one can go over to the number of states within the shell dν̃, as was done for the distribution function

of the molecules over velocities. Taking into account that ν̃ x , ν̃ y , and ν̃ z are all positive, this shell is not a complete

spherical shell but 1/8 of it. Thus

dnν̃ = (π/2) V ν̃² dν̃.   (41)

The last step is to change the variable from ν̃ to ε using Eq. (38). With

ν̃² = [2m/(π²ħ²)] ε,   ν̃ = √[2m/(π²ħ²)] √ε,   dν̃/dε = (1/2) √[2m/(π²ħ²)] (1/√ε),   (42)

one obtains

dnε = (π/4) V [2m/(π²ħ²)]^{3/2} √ε dε,   (43)

ρ(ε) = [V/(2π)²] (2m/ħ²)^{3/2} √ε.   (44)
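Eq. (44) can be tested against a brute-force count of the levels (35). A sketch in units ħ = m = L = 1 (so V = 1), an assumption made for illustration: integrating Eq. (44) gives n(E) = ∫₀^E ρ(ε) dε = (2E)^{3/2}/(6π²), which is compared with the actual number of states below E.

```python
from math import isqrt, pi

# Units hbar = m = L = 1 (V = 1). Levels from Eq. (35): eps = (pi^2/2)(nx^2+ny^2+nz^2).
# Integrated density of states from Eq. (44): n(E) = (2E)^{3/2} / (6 pi^2).
E = 50000.0
r2_max = int(2 * E / pi**2)  # eps <= E becomes nx^2 + ny^2 + nz^2 <= 2E/pi^2
nmax = isqrt(r2_max)

count = 0
for nx in range(1, nmax + 1):
    for ny in range(1, nmax + 1):
        s2 = nx * nx + ny * ny
        if s2 >= r2_max:
            break
        count += isqrt(r2_max - s2)  # number of nz >= 1 with s2 + nz^2 <= r2_max

predicted = (2 * E) ** 1.5 / (6 * pi**2)
assert abs(count / predicted - 1) < 0.05  # boundary corrections are O(E^{-1/2})
```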

In this section we obtain the distribution of particles over the n energy levels as the most probable macrostate by

maximizing its thermodynamic probability w. We label quantum states by the index i and use Eq. (20). The task

is to ﬁnd the maximum of w with respect to all Ni that satisfy the constraints (4) and (5). Practically it is more

convenient to maximize ln w than w itself. Using the method of Lagrange multipliers, one searches for the maximum

of the target function

Φ(N1, N2, · · · , Nn) = ln w + α ∑_{i=1}^{n} Ni − β ∑_{i=1}^{n} εi Ni.   (45)


Here α and β are Lagrange multipliers with (arbitrary) signs chosen anticipating the ﬁnal result. The maximum

satisﬁes

∂Φ/∂Ni = 0,   i = 1, 2, . . . , n.   (46)

As we are interested in the behavior of macroscopic systems with Ni ≫ 1, the factorials can be simpliﬁed with the

help of Eq. (15) that takes the form

ln N! ≅ N ln N − N + ln √(2πN).   (47)

Neglecting the relatively small last term in this expression, one obtains

Φ ≅ ln N! − ∑_{j=1}^{n} Nj ln Nj + ∑_{j=1}^{n} Nj + α ∑_{j=1}^{n} Nj − β ∑_{j=1}^{n} εj Nj.   (48)

In the derivatives ∂Φ/∂Ni , the only contribution comes from the terms with j = i in the above expression. One

obtains the equations

∂Φ/∂Ni = −ln Ni + α − βεi = 0   (49)

that yield

Ni = eα−βεi , (50)

The Lagrange multipliers can be found from Eqs. (4) and (5) in terms of N and U. Summing Eq. (50) over i, one

obtains

N = e^α Z,   α = ln(N/Z),   (51)

where

Z = ∑_{i=1}^{n} e^{−βεi}   (52)

is the so-called partition function (German Zustandssumme) that plays a major role in statistical physics. Then,

eliminating α from Eq. (50) yields

Ni = (N/Z) e^{−βεi}.   (53)
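A minimal numeric sketch, with an assumed toy spectrum of n = 4 levels (the energies and N are arbitrary illustrative values), shows that the occupations (53) satisfy the constraint (4) automatically and are consistent with Eq. (55) for U:

```python
from math import exp, log

# Assumed toy spectrum for illustration: energies in units of k_B T.
eps = [0.0, 1.0, 2.0, 3.0]
N = 1000.0
beta = 1.0  # beta = 1/(k_B T)

# Partition function, Eq. (52), and occupation numbers, Eq. (53).
Z = sum(exp(-beta * e) for e in eps)
occ = [N / Z * exp(-beta * e) for e in eps]
assert abs(sum(occ) - N) < 1e-9  # constraint (4) holds automatically

# Internal energy from Eq. (5) agrees with U = -N dlnZ/dbeta, Eq. (55),
# evaluated here by a central finite difference.
U = sum(n_i * e for n_i, e in zip(occ, eps))
lnZ = lambda b: log(sum(exp(-b * e) for e in eps))
h = 1e-6
assert abs(U - (-N * (lnZ(beta + h) - lnZ(beta - h)) / (2 * h))) < 1e-3
```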

After that for the internal energy U one obtains

U = ∑_{i=1}^{n} εi Ni = (N/Z) ∑_{i=1}^{n} εi e^{−βεi} = −(N/Z) ∂Z/∂β   (54)

or

U = −N ∂ ln Z/∂β.   (55)

This formula implicitly deﬁnes the Lagrange multiplier β as a function of U.

The statistical entropy, Eq. (22), within the Stirling approximation becomes

S = kB ln w = kB (N ln N − ∑_{i=1}^{n} Ni ln Ni).   (56)

Inserting here Eq. (50) and α from Eq. (51), one obtains

S/kB = N ln N − ∑_{i=1}^{n} Ni (α − βεi) = N ln N − αN + βU = N ln Z + βU,   (57)


Let us show now that β is related to the temperature. To this purpose, we diﬀerentiate S with respect to U keeping

N = const :

(1/kB) ∂S/∂U = N ∂ ln Z/∂U + U ∂β/∂U + β = N (∂ ln Z/∂β)(∂β/∂U) + U ∂β/∂U + β.   (58)

Here according to Eq. (55) the ﬁrst and second terms cancel each other leading to

∂S/∂U = kB β.   (59)

Note that the energies of quantum states εi depend on the geometry of the system. In particular, for quantum particles

in a box εi depend on Lx , Ly , and Lz according to Eq. (34). In the diﬀerentiation over U above, εi were kept constant.

From the thermodynamical point of view, this amounts to diﬀerentiating over U with the constant volume V. Recalling

the thermodynamic formula

(∂S/∂U)_V = 1/T,   (60)

one identifies

β = 1/(kB T).   (61)

Thus Eq. (55) actually deﬁnes the internal energy as a function of the temperature. Now from Eq. (57) one obtains

the statistical formula for the free energy

F = U − T S = −N kB T ln Z. (62)

One can see that the partition function Z contains the complete information about the system's thermodynamics, since other quantities such as the pressure P follow from F.
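As a consistency check, Eqs. (56), (57), and (62) can be verified numerically for an assumed toy spectrum (the level energies, N, and T below are arbitrary illustrative values; units kB = 1):

```python
from math import exp, log

# Assumed toy spectrum for illustration; units k_B = 1.
eps = [0.0, 0.5, 1.3, 2.0]
N, T = 1000.0, 1.0
beta = 1.0 / T

Z = sum(exp(-beta * e) for e in eps)          # Eq. (52)
occ = [N / Z * exp(-beta * e) for e in eps]   # Eq. (53)
U = sum(n_i * e for n_i, e in zip(occ, eps))

# Entropy directly from Eq. (56) ...
S_direct = N * log(N) - sum(n_i * log(n_i) for n_i in occ)
# ... and from Eq. (57): S = N ln Z + beta U.
S_via_Z = N * log(Z) + beta * U
assert abs(S_direct - S_via_Z) < 1e-6

# Free energy, Eq. (62): F = U - T S = -N k_B T ln Z.
F = U - T * S_direct
assert abs(F - (-N * T * log(Z))) < 1e-6
```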

In this section we demonstrate how the results for the ideal gas, previously obtained within the molecular theory,

follow from the more general framework of statistical physics in the preceding section. Consider an ideal gas in a

suﬃciently large container. In this case the energy levels of the system, see Eq. (34), are quantized so ﬁnely that one

can introduce the density of states ρ (ε) deﬁned by Eq. (36). The number of particles in quantum states within the

energy interval dε is the product of the number of particles in one state N (ε) and the number of states dnε in this

energy interval. With N (ε) given by Eq. (53) one obtains

dNε = N(ε) dnε = (N/Z) e^{−βε} ρ(ε) dε.   (63)

For ﬁnely quantized levels one can replace summation in Eq. (52) and similar formulas by integration as

∑_i . . . ⇒ ∫ dε ρ(ε) . . . ,   (64)

so that

Z = ∫ dε ρ(ε) e^{−βε}.   (65)

For quantum particles in a rigid box with the help of Eq. (44) one obtains

Z = [V/(2π)²] (2m/ħ²)^{3/2} ∫₀^∞ dε √ε e^{−βε} = [V/(2π)²] (2m/ħ²)^{3/2} √π/(2β^{3/2}) = V [mkBT/(2πħ²)]^{3/2}.   (66)

Using the classical relation ε = mv²/2 and thus dε = mv dv, one can obtain the formula for the number of particles in the speed interval dv, dNv = N f(v) dv, where

f(v) = (1/Z) e^{−βε} ρ(ε) mv = (1/V) [2πħ²/(mkBT)]^{3/2} [V/(2π)²] (2m/ħ²)^{3/2} √ε mv e^{−βε}
= 4πv² [m/(2πkBT)]^{3/2} exp[−mv²/(2kBT)].   (68)

This result coincides with the Maxwell distribution function obtained earlier from the molecular theory of gases. The Planck constant ħ, which links to quantum mechanics, has disappeared from the final result.
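The normalization of f(v) and the resulting average kinetic energy can be verified numerically; the sketch below uses units m = kB = T = 1 (an assumption made for illustration):

```python
from math import exp, pi

# Maxwell distribution, Eq. (68), in units m = k_B = T = 1:
# f(v) = 4 pi v^2 (1/(2 pi))^{3/2} exp(-v^2/2).
f = lambda v: 4 * pi * v**2 * (1 / (2 * pi)) ** 1.5 * exp(-(v**2) / 2)

# Midpoint-rule checks of normalization and of the mean kinetic energy, Eq. (70).
dv, vmax = 1e-3, 10.0
vs = [(i + 0.5) * dv for i in range(int(vmax / dv))]
norm = sum(f(v) for v in vs) * dv
mean_kin = sum(0.5 * v**2 * f(v) for v in vs) * dv

assert abs(norm - 1) < 1e-6        # f(v) is normalized to unity
assert abs(mean_kin - 1.5) < 1e-5  # <eps> = (3/2) k_B T, i.e. f = 3 in Eq. (70)
```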

The internal energy of the ideal gas is its kinetic energy

U = Nε̄, (69)

ε̄ = mv̄ 2 /2 being the average kinetic energy of an atom. The latter can be calculated with the help of the speed

distribution function above, as was done in the molecular theory of gases. The result has the form

ε̄ = (f/2) kB T,   (70)

where in our case f = 3 corresponding to three translational degrees of freedom. The same result can be obtained

from Eq. (55)

U = −N ∂ ln Z/∂β = −N ∂/∂β ln[V (m/(2πħ²β))^{3/2}] = (3/2) N ∂ ln β/∂β = (3/2) N/β = (3/2) N kB T.   (71)

After that the known result for the heat capacity CV = (∂U/∂T )V follows.

The pressure P is deﬁned by the thermodynamic formula

P = −(∂F/∂V)_T.   (72)

P = N kB T ∂ ln Z/∂V = N kB T ∂ ln V/∂V = N kB T/V   (73)

that amounts to the equation of state of the ideal gas P V = N kB T.

Consider an ensemble of N identical harmonic oscillators, each of them described by the Hamiltonian

Ĥ = p̂²/(2m) + kx²/2.   (74)

Here the momentum operator p̂ is given by Eq. (29) and k is the spring constant. This theoretical model can describe,

for instance, vibrational degrees of freedom of diatomic molecules. In this case x is the elongation of the chemical

bond between the two atoms, relative to the equilibrium bond length. The stationary Schrödinger equation (24) for

a harmonic oscillator becomes

[−(ħ²/2m) d²/dx² + kx²/2] Ψ(x) = εΨ(x).   (75)

The boundary conditions for this equation require that Ψ(±∞) = 0 and the integral in Eq. (26) converges. In contrast

to Eq. (30), this is a linear diﬀerential equation with a variable coeﬃcient. Such diﬀerential equations in general do

not have solutions in terms of known functions. In some cases the solution can be expressed through special functions such as hypergeometric or Bessel functions. The solution of Eq. (75) can be found, but this task belongs to quantum

mechanics courses. The main result that we need here is that the energy eigenvalues of the harmonic oscillator have

the form

εν = ħω0 (ν + 1/2),   ν = 0, 1, 2, . . . ,   (76)


where ω0 = √(k/m) is the frequency of oscillations. The energy level with ν = 0 is the ground state. The ground-state

energy ε0 = ~ω 0 /2 is not zero, as would be the case for a classical oscillator. This quantum ground-state energy is

called zero-point energy. It is irrelevant in the calculation of the heat capacity of an ensemble of harmonic oscillators.

The partition function of a harmonic oscillator is

Z = ∑_{ν=0}^{∞} e^{−βεν} = e^{−βħω0/2} ∑_{ν=0}^{∞} (e^{−βħω0})^ν = e^{−βħω0/2}/(1 − e^{−βħω0}) = 1/(e^{βħω0/2} − e^{−βħω0/2}) = 1/[2 sinh(βħω0/2)],   (77)

where we have used the geometric series

1 + x + x² + x³ + . . . = (1 − x)^{−1},   x < 1,   (78)

and the definitions

sinh(x) ≡ (e^x − e^{−x})/2,   cosh(x) ≡ (e^x + e^{−x})/2,
tanh(x) ≡ sinh(x)/cosh(x),   coth(x) ≡ cosh(x)/sinh(x) = 1/tanh(x).   (79)

We will be using the limiting forms

sinh(x) ≅ { x, x ≪ 1;  e^x/2, x ≫ 1 },   tanh(x) ≅ { x, x ≪ 1;  1, x ≫ 1 }.   (81)

The internal (average) energy of the ensemble of oscillators is given by Eq. (55). This yields

U = −N ∂ ln Z/∂β = −N sinh(βħω0/2) (∂/∂β) [1/sinh(βħω0/2)] = N (ħω0/2) coth(βħω0/2)   (82)

or

U = N (ħω0/2) coth[ħω0/(2kBT)].   (83)

Another way of writing the internal energy is

U = N ħω0 [1/2 + 1/(e^{βħω0} − 1)],   (84)

or the similar expression in terms of T. The limiting low- and high-temperature cases of this formula are

U ≅ { N ħω0/2,  kBT ≪ ħω0;   N kBT,  kBT ≫ ħω0 }.   (85)

In the low-temperature case, almost all oscillators are in their ground states, since e−β~ω0 ≪ 1. Thus the main term

that contributes into the partition function in Eq. (77) is ν = 0. Correspondingly, U is the sum of ground-state

energies of all oscillators.

At high temperatures, the previously obtained classical result is reproduced, and the Planck constant ħ that links to quantum mechanics disappears. In this case e^{−βħω0} is only slightly smaller than one, so that very many different ν contribute to Z in Eq. (77). One can then replace summation by integration, as was done above for the particle in a potential box, where this also yielded the classical results.

One can see that the crossover between the two limiting cases corresponds to kBT ∼ ħω0. In the high-temperature limit kBT ≫ ħω0, many low-lying energy levels are populated. The top populated level ν* can be estimated from kBT ∼ ħω0 ν*, so that

ν* ∼ kBT/(ħω0) ≫ 1.   (86)

The probability to find an oscillator in a state with ν ≫ ν* is exponentially small.

[Figure: heat capacity C/(NkB) as a function of kBT/(ħω0), crossing over from exponentially small values at low temperature to the classical value 1 at high temperature.]

The heat capacity follows by differentiating Eq. (83):

C = dU/dT = N (ħω0/2) {−1/sinh²[ħω0/(2kBT)]} [−ħω0/(2kBT²)] = N kB {[ħω0/(2kBT)] / sinh[ħω0/(2kBT)]}².   (87)

C ≅ { N kB [ħω0/(kBT)]² exp[−ħω0/(kBT)],  kBT ≪ ħω0;   N kB,  kBT ≫ ħω0 }.   (88)

One can see that at high temperatures the heat capacity can be written as

C = (f/2) N kB,   (89)

where the eﬀective number of degrees of freedom for an oscillator is f = 2. The explanation of the additional factor

2 is that the oscillator has not only the kinetic, but also the potential energy, and the average values of these two

energies are the same. Thus the total amount of energy in a vibrational degree of freedom doubles with respect to the

translational and rotational degrees of freedom.

At low temperatures the vibrational heat capacity above becomes exponentially small. One says that vibrational

degrees of freedom are getting frozen out at low temperatures.
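Both limits in Eq. (88), including the freezing out at low temperature, can be checked numerically from Eq. (87); the sketch uses units ħω0 = kB = 1 (chosen for illustration):

```python
from math import exp, sinh

# Heat capacity per oscillator from Eq. (87), units hbar omega0 = k_B = 1:
# C/(N k_B) = [1/(2T)]^2 / sinh^2[1/(2T)].
def c(T):
    x = 1.0 / (2.0 * T)
    return x**2 / sinh(x) ** 2

# High-temperature limit of Eq. (88): C -> N k_B, i.e. c -> 1.
assert abs(c(100.0) - 1.0) < 1e-3

# Low-temperature limit: c ~ (1/T)^2 exp(-1/T) is exponentially small ("frozen out").
T = 0.05
assert abs(c(T) / ((1.0 / T) ** 2 * exp(-1.0 / T)) - 1.0) < 0.01
assert c(T) < 1e-6
```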

The average quantum number of an oscillator is given by

n ≡ ⟨ν⟩ = (1/Z) ∑_{ν=0}^{∞} ν e^{−βεν}.   (90)

Using εν = ħω0 (ν + 1/2), one can write

n = (1/Z) ∑_{ν=0}^{∞} (1/2 + ν) e^{−βεν} − (1/Z) ∑_{ν=0}^{∞} (1/2) e^{−βεν} = [1/(ħω0)] (1/Z) ∑_{ν=0}^{∞} εν e^{−βεν} − 1/2 = −[1/(ħω0)] (1/Z) ∂Z/∂β − 1/2 = U/(N ħω0) − 1/2.   (91)

With the help of Eq. (84) this yields

n = 1/(e^{βħω0} − 1) = 1/{exp[ħω0/(kBT)] − 1}   (92)

so that

U = N ħω0 (1/2 + n).   (93)

At low temperatures, kB T ≪ ~ω 0 , the average quantum number n becomes exponentially small. This means that the

oscillator is predominantly in its ground state, ν = 0. At high temperatures, kB T ≫ ~ω 0 , Eq. (92) yields

n ≅ kBT/(ħω0)   (94)

that has the same order of magnitude as the top populated quantum level number ν* given by Eq. (86).
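Eq. (92) and the high-temperature estimate (94) can be verified by direct summation (units ħω0 = 1, an assumption made for illustration):

```python
from math import exp

# Check Eq. (92): n = <nu> = 1/(e^{beta hbar omega0} - 1); units hbar omega0 = 1.
def n_avg(beta, nu_max=2000):
    Z = sum(exp(-beta * (nu + 0.5)) for nu in range(nu_max))
    return sum(nu * exp(-beta * (nu + 0.5)) for nu in range(nu_max)) / Z

for beta in (0.2, 1.0, 3.0):
    assert abs(n_avg(beta) - 1.0 / (exp(beta) - 1.0)) < 1e-10

# High-temperature limit, Eq. (94): n ~ k_B T/(hbar omega0) = 1/beta.
beta = 0.01
n = 1.0 / (exp(beta) - 1.0)
assert abs(n * beta - 1.0) < 0.01  # n deviates from 1/beta only at relative order beta/2
```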

To conclude this section, let us reproduce the classical high-temperature results by a simpler method. At high

temperatures, many levels are populated and the distribution function e−βεν only slightly changes from one level to

the other. Indeed, its relative change is

(e^{−βεν} − e^{−βε_{ν+1}})/e^{−βεν} = 1 − e^{−β(ε_{ν+1} − εν)} = 1 − e^{−βħω0} ≅ 1 − (1 − βħω0) = ħω0/(kBT) ≪ 1.   (95)

In this situation one can replace summation in Eq. (77) by integration using the rule in Eq. (64). The density of

states ρ (ε) for the harmonic oscillator can be easily found. The number of levels dnν in the interval dν is

dnν = dν (96)

(that is, there is only one level in the interval dν = 1). Then, changing the variables with the help of Eq. (76), one

ﬁnds the number of levels dnε in the interval dε as

dnε = (dν/dε) dε = (dε/dν)^{−1} dε = ρ(ε) dε,   (97)

where

ρ(ε) = 1/(ħω0).   (98)

One actually can perform a one-line calculation of the partition function Z via integration. Considering ν as continuous,

one writes

Z = ∫₀^∞ dν e^{−βε} = ∫₀^∞ dε (dν/dε) e^{−βε} = [1/(ħω0)] ∫₀^∞ dε e^{−βε} = 1/(ħω0 β) = kBT/(ħω0).   (99)

One can see that this calculation is in accord with Eq. (98). Now the internal energy U is given by

U = −N ∂ ln Z/∂β = −N ∂[ln(1/β) + . . .]/∂β = N ∂ ln β/∂β = N/β = N kB T   (100)

that coincides with the second line of Eq. (85).

The rotational kinetic energy of a rigid body, expressed with respect to the principal axes, reads

Erot = (1/2) I1 ω1² + (1/2) I2 ω2² + (1/2) I3 ω3².   (101)

We will restrict our consideration to the axially symmetric body with I1 = I2 = I. In this case Erot can be rewritten

in terms of the angular momentum components Lα = Iα ω α as follows

Erot = L²/(2I) + (1/2) (1/I3 − 1/I) L3².   (102)

In the important case of a diatomic molecule with two identical atoms, the center of mass is located between the two atoms at the distance d/2 from each, where d is the distance between the atoms. The moment of inertia of


the molecule with respect to the 3 axis going through both atoms is zero, I3 = 0. The moments of inertia with respect

to the 1 and 2 axis are equal to each other:

I1 = I2 = I = 2 × m (d/2)² = (1/2) m d²,   (103)

where m is the mass of an atom. In our case, I3 = 0, there is no kinetic energy associated with rotation around the 3

axis. Then Eq. (102) requires L3 = 0, that is, the angular momentum of a diatomic molecule is perpendicular to its

axis.

In quantum mechanics L and L3 become operators. The solution of the stationary Schrödinger equation yields

eigenvalues of L2 , L3 , and Erot in terms of three quantum numbers l, m, and n. These eigenvalues are given by

L̂² = ħ² l(l + 1), l = 0, 1, 2, . . . (104)

L̂3 = ħn, m, n = −l, −l + 1, . . . , l − 1, l (105)

and

Erot = ħ² l(l + 1)/(2I) + (ħ²/2)(1/I3 − 1/I) n² (106)

that follows by substitution of Eqs. (104) and (105) into Eq. (102). One can see that the results do not depend on

the quantum number m and there are 2l + 1 degenerate levels because of this. Indeed, the angular momentum of

the molecule L̂ can have 2l + 1 diﬀerent projections on any axis and this is the reason for the degeneracy. Further,

the axis of the molecule can have 2l + 1 diﬀerent projections on L̂ parametrized by the quantum number n. In the

general case I3 ̸= I these states are non-degenerate. However, they become degenerate for the fully symmetric body,

I1 = I2 = I3 = I. In this case

Erot = ħ² l(l + 1)/(2I) (107)

and the degeneracy of the quantum level l is gl = (2l + 1)². For the diatomic molecule I3 = 0, and the only acceptable

value of n is n = 0. Thus one obtains Eq. (107) again but with the degeneracy gl = 2l + 1.

The rotational partition function both for the symmetric body and for a diatomic molecule is given by

Z = Σl gl e^{−βεl}, gl = (2l + 1)^ξ, ξ = 2 for the symmetric body, ξ = 1 for the diatomic molecule, (108)

where εl ≡ Erot . In both cases Z cannot be calculated analytically in general. In the general case I1 = I2 ̸= I3 with

I3 ̸= 0 one has to perform summation over both l and n.

In the low-temperature limit, most rotators are in their ground state l = 0, and very few are in the ﬁrst excited

state l = 1. Discarding all other values of l, one obtains

Z ≅ 1 + 3^ξ exp(−βħ²/I) (109)

U = −N ∂ln Z/∂β ≅ 3^ξ N (ħ²/I) exp(−βħ²/I) = 3^ξ N (ħ²/I) exp[−ħ²/(I kB T)] (110)

CV = (∂U/∂T)_V = 3^ξ N kB [ħ²/(I kB T)]² exp[−ħ²/(I kB T)]. (111)
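The low-temperature truncation of Eq. (109) can be checked against the full sum of Eq. (108). Below is a small sketch for the diatomic case ξ = 1, in the assumed units ħ²/(2I) = 1 (so that εl = l(l+1) and βħ²/I = 2β); these units are a convention for the example, not from the text.

```python
import math

# Sketch: the low-temperature truncation, Eq. (109), vs the full sum, Eq. (108),
# for a diatomic molecule (xi = 1).  Assumed units: hbar^2/(2I) = 1.

def Z_rot(beta, lmax=50):
    """Rotational partition function, Eq. (108), with eps_l = l(l+1)."""
    return sum((2 * l + 1) * math.exp(-beta * l * (l + 1)) for l in range(lmax + 1))

beta = 5.0
print(Z_rot(beta), 1 + 3 * math.exp(-2 * beta))  # truncation error is ~5e-30 here
```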

In the high-temperature limit, very many energy levels are thermally populated, so that in Eq. (108) one can replace

summation by integration. With

gl ≅ (2l)^ξ, εl ≡ ε ≅ ħ²l²/(2I), ∂ε/∂l = ħ²l/I, ∂l/∂ε = I/(ħ²l) = I/[ħ√(2Iε)] = √[I/(2ħ²ε)] (112)


FIG. 3: Rotational heat capacity of a fully symmetric body and a diatomic molecule.

one has

Z ≅ ∫₀^∞ dl gl e^{−βεl} ≅ 2^ξ ∫₀^∞ dl l^ξ e^{−βε} = 2^ξ ∫₀^∞ dε (∂l/∂ε) l^ξ e^{−βε} (113)

and further

Z ≅ 2^ξ ∫₀^∞ dε √[I/(2ħ²ε)] (2Iε/ħ²)^{ξ/2} e^{−βε} = 2^{(3ξ−1)/2} (I/ħ²)^{(ξ+1)/2} ∫₀^∞ dε ε^{(ξ−1)/2} e^{−βε}

= 2^{(3ξ−1)/2} [I/(βħ²)]^{(ξ+1)/2} ∫₀^∞ dx x^{(ξ−1)/2} e^{−x} ∝ β^{−(ξ+1)/2}. (114)

U = −N ∂ln Z/∂β = N [(ξ + 1)/2] ∂ln β/∂β = N (ξ + 1)/(2β) = [(ξ + 1)/2] N kB T, (115)

whereas the heat capacity is

CV = (∂U/∂T)_V = [(ξ + 1)/2] N kB. (116)

In these two formulas, f = ξ + 1 is the number of degrees of freedom, f = 2 for a diatomic molecule and f = 3 for a

fully symmetric rotator. This result conﬁrms the principle of the equipartition of energy over the degrees of freedom
in the classical limit, which requires high enough temperatures.
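The approach of CV to its equipartition value [(ξ+1)/2] N kB can be verified by numerically differentiating ln Z of Eq. (108). The sketch below uses the assumed conventions kB = 1 and ħ²/(2I) = 1 (so εl = l(l+1)), which are not from the text; ξ = 1 is a diatomic molecule and ξ = 2 a fully symmetric body.

```python
import math

# Sketch: high-temperature heat capacity per molecule from Eq. (108),
# approaching (xi + 1)/2 in units of k_B, cf. Eq. (116).
# Assumed units: k_B = 1, hbar^2/(2I) = 1, so eps_l = l(l+1).

def lnZ(beta, xi, lmax=400):
    return math.log(sum((2 * l + 1)**xi * math.exp(-beta * l * (l + 1))
                        for l in range(lmax + 1)))

def heat_capacity(beta, xi, h=1e-5):
    """C/k_B = beta^2 d^2 ln Z / d beta^2, via a central difference."""
    d2 = (lnZ(beta - h, xi) - 2 * lnZ(beta, xi) + lnZ(beta + h, xi)) / h**2
    return beta**2 * d2

print(heat_capacity(0.002, 1))  # diatomic: close to (xi + 1)/2 = 1
print(heat_capacity(0.002, 2))  # symmetric body: close to 3/2
```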

To conclude this section, let us calculate the partition function classically at high temperatures. The calculation

can be performed in the general case of all moments of inertia diﬀerent. The rotational energy, Eq. (101), can be

written as

Erot = L1²/(2I1) + L2²/(2I2) + L3²/(2I3). (117)

The dynamical variables here are the angular momentum components L1, L2, and L3, so that the partition function reads

Zclass = ∫₋∞^∞ dL1 dL2 dL3 exp(−βErot)

= ∫₋∞^∞ dL1 exp[−βL1²/(2I1)] × ∫₋∞^∞ dL2 exp[−βL2²/(2I2)] × ∫₋∞^∞ dL3 exp[−βL3²/(2I3)]

= √(2I1/β) × √(2I2/β) × √(2I3/β) (∫₋∞^∞ dx e^{−x²})³ ∝ β^{−3/2}. (118)

In fact, Zclass contains some additional coeﬃcients, for instance, from the integration over the orientations of the

molecule. However, these coeﬃcients are irrelevant in the calculation of the the internal energy and heat capacity. For

the internal energy one obtains

U = −N ∂ln Z/∂β = (3/2) N ∂ln β/∂β = (3/2) N/β = (3/2) N kB T (119)

that coincides with Eq. (115) with ξ = 2. In the case of the diatomic molecule the energy has the form

Erot = L1²/(2I1) + L2²/(2I2). (120)

Then the partition function becomes

Zclass = ∫₋∞^∞ dL1 dL2 exp(−βErot)

= ∫₋∞^∞ dL1 exp[−βL1²/(2I1)] × ∫₋∞^∞ dL2 exp[−βL2²/(2I2)]

= √(2I1/β) × √(2I2/β) (∫₋∞^∞ dx e^{−x²})² ∝ β^{−1}. (121)

This leads to

U = N kB T. (122)
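Each Gaussian factor in Eqs. (118) and (121) equals √(2πIα/β), which is what produces the β^{−3/2} and β^{−1} scalings. A numerical spot-check of one such factor is sketched below; the values of β and I are arbitrary examples, and the integral is truncated at ±L_max where the integrand is already negligible.

```python
import math

# Sketch: one Gaussian factor of Eqs. (118)/(121),
# int dL exp(-beta*L^2/(2I)) = sqrt(2*pi*I/beta),
# evaluated by the midpoint rule on a truncated range.

def gauss_factor(beta, I, L_max=50.0, steps=200000):
    h = 2 * L_max / steps
    return sum(math.exp(-beta * (-L_max + (i + 0.5) * h)**2 / (2 * I)) * h
               for i in range(steps))

beta, I = 0.8, 1.3
print(gauss_factor(beta, I), math.sqrt(2 * math.pi * I / beta))
```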

Above in Sec. X we have studied statistical mechanics of harmonic oscillators. For instance, a diatomic molecule
behaves as an oscillator, the chemical bond between the atoms periodically changing its length with time in the
classical description. For molecules consisting of N ≥ 2 atoms, vibrations become more complicated and, as a rule,
involve all N atoms. Still, within the harmonic approximation (the potential energy expanded up to the second-order
terms in deviations from the equilibrium positions) the classical equations of motion are linear and can be dealt with
using matrix algebra. Diagonalizing the dynamical matrix of the system, one can ﬁnd 3N − 6 linear combinations of atomic

coordinates that dynamically behave as independent harmonic oscillators. Such collective vibrational modes of the

system are called normal modes. For instance, a hypothetical triatomic molecule with atoms that are allowed to move

along a straight line has 2 normal modes. If the masses of the ﬁrst and third atoms are the same, one of the normal

modes corresponds to antiphase oscillations of the ﬁrst and third atoms with the second (central) atom being at rest.

Another normal mode corresponds to the in-phase oscillations of the ﬁrst and third atoms with the central atom

oscillating in the anti-phase to them. For more complicated molecules normal modes can be found numerically.

Normal modes can be considered quantum mechanically as independent quantum oscillators as in Sec. X. Then the

vibrational partition function of the molecule is a product of partition functions of individual normal modes and the

internal energy is a sum of internal energies of these modes. The latter is the consequence of the independence of the

normal modes.

A solid having a well-deﬁned crystal structure is translationally invariant, which simpliﬁes ﬁnding normal modes
in spite of a large number of atoms N. In the simplest case of one atom in the unit cell normal modes can be found

immediately by making Fourier transformation of atomic deviations. The resulting normal modes are sound waves


with diﬀerent wave vectors k and polarizations (in the simplest case one longitudinal and two equivalent transverse

modes). In the sequel, to illustrate basic principles, we will consider only one type of sound waves (say, the longitudinal

wave) but we will count it three times. The results can be then generalized for diﬀerent kinds of waves. For wave

vectors small enough, there is the linear relation between the wave vector and the frequency of the oscillations

ω k = vk (123)

In this formula, ωk depends only on the magnitude k = 2π/λ and not on the direction of k, and v is the speed of sound.

The acceptable values of k should satisfy the boundary conditions at the boundaries of the solid body. Similarly to the

quantum problem of a particle in a potential box with dimensions Lx, Ly, and Lz, one obtains the quantized values

of the wave vector components kα with α = x, y, z

kα = π να/Lα, να = 1, 2, 3, . . . , (124)

so that

k = √(kx² + ky² + kz²) = π √(ν̃x² + ν̃y² + ν̃z²) = πν̃, ν̃α ≡ να/Lα. (125)

Similarly to the problem of the particle in a box, one can introduce the frequency density of states ρ (ω) via the

number of normal modes in the frequency interval dω

dnω = ρ (ω) dω. (126)

Starting with Eq. (37), one arrives at Eq. (41). Then with the help of ω = vπν̃ following from Eqs. (123)–(125) one

obtains

dnω = 3 × V (π/2) ν̃² (dν̃/dω) dω = 3 × V (π/2) [1/(vπ)³] ω² dω (127)

ρ(ω) = 3V ω²/(2π²v³). (128)

2π 2 v 3

The total number of normal modes should be 3N :

3N = 3 Σk 1 = 3 Σ_{kx,ky,kz} 1. (129)

For a body of a box shape N = Nx Ny Nz , so that the above equation splits into the three separate equations

Nα = Σ_{να=1}^{να,max} 1 ≅ να,max (130)

kα,max = π Nα/Lα = π/a, (131)

Lα

where a is the lattice spacing, that is, the distance between the atoms.

As the solid body is macroscopic and there are very many values of the allowed wave vectors, it is convenient to
replace summation by integration and use the density of states:

Σk . . . ⇒ ∫₀^{ωmax} dω ρ(ω) . . . (132)

At large k and thus ω, the linear dependence of Eq. (123) is no longer valid, so that Eq. (128) becomes invalid as

well. Still, one can make a crude approximation assuming that Eqs. (123) and (128) are valid until some maximal

frequency of the sound waves in the body. The model based on this assumption is called Debye model. The maximal

frequency (or the Debye frequency ω D ) is deﬁned by Eq. (129) that takes the form

3N = ∫₀^{ωD} dω ρ(ω). (133)

Substituting here Eq. (128), one obtains

3N = ∫₀^{ωD} dω 3V ω²/(2π²v³) = V ωD³/(2π²v³), (134)

wherefrom

ωD = (6π²)^{1/3} v/a, (135)

where we used (V/N)^{1/3} = v0^{1/3} = a, and v0 is the unit-cell volume. It is convenient to rewrite Eq. (128) in terms of ωD as

ρ(ω) = 9N ω²/ωD³. (136)
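Since Eq. (136) was obtained from the normalization condition Eq. (133), its integral from 0 to ωD must give back exactly 3N. A minimal numerical check (N and ωD below are arbitrary example values):

```python
import math

# Sketch: integrating the Debye density of states, Eq. (136),
# rho(omega) = 9*N*omega^2/omega_D^3, from 0 to omega_D recovers 3N, Eq. (133).

N, omega_D = 1.0e22, 1.0e13
steps = 100000
d = omega_D / steps
# midpoint rule for the integral of rho(omega)
total = sum(9 * N * ((i + 0.5) * d)**2 / omega_D**3 * d for i in range(steps))
print(total / (3 * N))  # very close to 1
```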

The quantum energy levels of a harmonic oscillator are given by Eq. (76). In our case for a k-oscillator it becomes

εk,νk = ħωk (νk + 1/2), νk = 0, 1, 2, . . . (137)

The internal energy of the lattice is then

U = 3 Σk ⟨εk,νk⟩ = 3 Σk ħωk (⟨νk⟩ + 1/2) = 3 Σk ħωk (nk + 1/2), (138)

where

nk = 1/[exp(βħωk) − 1]. (139)

Replacing summation by integration and rearranging Eq. (138), within the Debye model one obtains

U = U0 + ∫₀^{ωD} dω ρ(ω) ħω/[exp(βħω) − 1], (140)

where U0 is the zero-temperature value of U due to the zero-point motion. The integral term in Eq. (140) can be

calculated analytically at low and high temperatures.

For kB T ≪ ~ω D , the integral converges at ω ≪ ω D , so that one can extend the upper limit of integration to ∞.

One obtains

U = U0 + ∫₀^∞ dω ρ(ω) ħω/[exp(βħω) − 1] = U0 + (9Nħ/ωD³) ∫₀^∞ dω ω³/[exp(βħω) − 1]

= U0 + [9N/((ħωD)³β⁴)] ∫₀^∞ dx x³/(eˣ − 1) = U0 + [9N/((ħωD)³β⁴)] (π⁴/15) = U0 + (3π⁴/5) N kB T [kB T/(ħωD)]³ (141)

or

U = U0 + (3π⁴/5) N kB T (T/θD)³, (142)

where

θD = ħωD/kB (143)

is the Debye temperature. Thus the low-temperature heat capacity reads

CV = (∂U/∂T)_V = (12π⁴/5) N kB (T/θD)³, T ≪ θD. (144)

Although the Debye temperature enters Eqs. (142) and (144), these results are accurate and insensitive to the Debye

approximation.


FIG. 4: Temperature dependence of the phonon heat capacity. The low-temperature result C ∝ T 3 (dashed line) holds only for

temperatures much smaller than the Debye temperature θD .

For kB T ≫ ~ω D , the exponential in Eq. (140) can be expanded for β~ω ≪ 1. It is, however, convenient to step

back and to do the same in Eq. (138). With the help of Eq. (129) one obtains

U = U0 + 3 Σk ħωk/(βħωk) = U0 + 3kB T Σk 1 = U0 + 3N kB T. (145)

k k

CV = 3N kB, (146)

that is, kB per each of the approximately 3N vibrational modes. One can see that Eqs. (145) and (146) are accurate because

they do not use the Debye model. The assumption of the Debye model is really essential at intermediate temperatures,

where Eq. (140) is approximate.

The temperature dependence of the phonon heat capacity, following from Eq. (140), is shown in Fig. 4 together

with its low-temperature asymptote, Eq. (144), and the high-temperature asymptote, Eq. (146). Because of the large

numerical coeﬃcient in Eq. (144), the law CV ∝ T 3 in fact holds only for the temperatures much smaller than the

Debye temperature.
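The full crossover following from Eq. (140) is easy to reproduce numerically in the standard reduced form CV = 9NkB (T/θD)³ ∫₀^{θD/T} x⁴eˣ/(eˣ−1)² dx. A sketch in the assumed units kB = 1, N = 1:

```python
import math

# Sketch: Debye heat capacity from Eq. (140), reduced form
# C_V = 9 (T/theta_D)^3 * int_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx,
# in units k_B = 1, N = 1 (an assumed convention for this example).

def debye_cv(t):                    # t = T / theta_D
    xmax = 1.0 / t
    steps = 200000
    h = xmax / steps
    s = 0.0
    for i in range(steps):
        x = (i + 0.5) * h           # midpoint rule avoids x = 0
        ex = math.exp(x)
        s += x**4 * ex / (ex - 1.0)**2 * h
    return 9 * t**3 * s

print(debye_cv(10.0))   # high T: approaches 3 (Dulong-Petit, Eq. (146) per atom)
print(debye_cv(0.02))   # low T: approaches (12*pi^4/5)*t^3, Eq. (144)
```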

Magnetism of solids is in most cases due to the intrinsic angular momentum of the electron that is called spin. In

quantum mechanics, spin is an operator represented by a matrix. The value of the electronic angular momentum is

s = 1/2, so that the quantum-mechanical eigenvalue of the square of the electron’s angular momentum (in the units
of ħ) is

ŝ² = s(s + 1) = 3/4, (147)

c.f. Eq. (104). Eigenvalues of ŝz, projection of a free spin on any direction z, are given by

ŝz = m = ±1/2, (148)

c.f. Eq. (105). Spin of the electron can be interpreted as circular motion of the charge inside this elementary particle.
This results in the magnetic moment of the electron

μ̂ = gµB ŝ, (149)

where g = 2 is the so-called g-factor and µB is Bohr’s magneton, µB = eħ/(2me) = 0.927 × 10⁻²³ J/T. Note that

the model of circularly moving charges inside the electron leads to the same result with g = 1; the true value g = 2

follows from the relativistic quantum theory.

The energy (the Hamiltonian) of an electron spin in a magnetic ﬁeld B has the form

Ĥ = −gµB ŝ · B, (150)

the so-called Zeeman Hamiltonian. One can see that the minimum of the energy corresponds to the spin pointing in

the direction of B. Choosing the z axis in the direction of B, one obtains

Ĥ = −gµB B ŝz. (151)

Thus the energy eigenvalues of the electronic spin in a magnetic ﬁeld with the help of Eq. (148) become

εm = −gµB B m, m = ±1/2. (152)

In an atom, spins of all electrons usually combine in a collective spin S that can be greater than 1/2. Thus instead

of Eq. (150) one has

Ĥ = −gµB Ŝ · B. (153)

The eigenvalues of Ŝz are

m = −S, −S + 1, . . . , S − 1, S, (154)

all together 2S + 1 diﬀerent values, c.f. Eq. (104). The 2S + 1 diﬀerent energy levels of the spin are given by Eq.

(152).

Physical quantities of the spin are determined by the partition function

ZS = Σ_{m=−S}^{S} e^{−βεm} = Σ_{m=−S}^{S} e^{my}, (155)

where

y ≡ βgµB B = gµB B/(kB T). (156)

Eq. (155) can be reduced to a ﬁnite geometrical progression

ZS = e^{−Sy} Σ_{k=0}^{2S} (e^y)^k = e^{−Sy} [e^{(2S+1)y} − 1]/(e^y − 1) = [e^{(S+1/2)y} − e^{−(S+1/2)y}]/(e^{y/2} − e^{−y/2}) = sinh[(S + 1/2)y]/sinh(y/2). (157)
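The closed form of the geometric sum, Eq. (157), is easily verified numerically against the direct sum over m for a few values of S and y (arbitrary example values below):

```python
import math

# Sketch: direct sum over m = -S..S vs the closed form of Eq. (157).

def Z_direct(S, y):
    # represent m as m2/2 with integer m2 so that half-integer S works
    return sum(math.exp((m2 / 2.0) * y) for m2 in range(-int(2 * S), int(2 * S) + 1, 2))

def Z_closed(S, y):
    return math.sinh((S + 0.5) * y) / math.sinh(0.5 * y)

for S in (0.5, 1.0, 2.5):
    for y in (0.3, 1.7):
        print(S, y, Z_direct(S, y), Z_closed(S, y))
```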

For S = 1/2, with the help of sinh(2x) = 2 sinh(x) cosh(x), one can transform Eq. (157) to

Z_{1/2} = 2 cosh(y/2). (158)

The partition function being found, one can easily obtain thermodynamic quantities. For instance, the free energy
F is given by Eq. (62),

F = −N kB T ln ZS = −N kB T ln {sinh[(S + 1/2)y]/sinh(y/2)}. (159)

The average spin polarization is deﬁned by

⟨Sz⟩ = ⟨m⟩ = (1/Z) Σ_{m=−S}^{S} m e^{−βεm}. (160)

Since −βεm = my, this average can be obtained by diﬀerentiation:

⟨Sz⟩ = (1/Z) ∂Z/∂y = ∂ln Z/∂y. (161)

FIG. 5: The Brillouin function BS(x) for S = 1/2, 1, 5/2, and S = ∞.

one obtains

⟨Sz⟩ = bS(y), (163)

where

bS(y) = (S + 1/2) coth[(S + 1/2)y] − (1/2) coth(y/2). (164)

Usually the function bS (y) is represented in the form bS (y) = SBS (Sy), where BS (x) is the Brillouin function

BS(x) = [1 + 1/(2S)] coth{[1 + 1/(2S)]x} − [1/(2S)] coth[x/(2S)]. (165)

For S = 1/2 one has

⟨Sz⟩ = b_{1/2}(y) = (1/2) tanh(y/2) (166)

and B1/2 (x) = tanh x. One can see that bS (±∞) = ±S and BS (±∞) = ±1. At zero magnetic ﬁeld, y = 0, the spin

polarization should vanish. It is immediately seen in Eq. (166) but not in Eq. (163). To clarify the behavior of bS (y)

at small y, one can use coth x ≅ 1/x + x/3, which yields

bS(y) ≅ (S + 1/2) {1/[(S + 1/2)y] + (S + 1/2)y/3} − (1/2) {1/(y/2) + (y/2)/3}

= (y/3) [(S + 1/2)² − (1/2)²] = [S(S + 1)/3] y. (167)

⟨Sz⟩ ≅ [S(S + 1)/3] gµB B/(kB T), gµB B ≪ kB T. (168)
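The limits of bS(y) derived above are easy to confirm with a minimal implementation of Eq. (164): the small-y slope is S(S+1)/3 of Eq. (167), the saturation value is bS(±∞) = ±S, and for S = 1/2 the function reduces to (1/2)tanh(y/2), Eq. (166). A sketch:

```python
import math

# Sketch: b_S(y) of Eq. (164) and its limits, Eqs. (166)-(167).

def coth(x):
    return math.cosh(x) / math.sinh(x)

def b_S(S, y):
    return (S + 0.5) * coth((S + 0.5) * y) - 0.5 * coth(0.5 * y)

S = 2.5
print(b_S(S, 1e-4) / 1e-4)                   # close to S(S+1)/3 = 2.9166...
print(b_S(S, 50.0))                          # close to S = 2.5
print(b_S(0.5, 1.3), 0.5 * math.tanh(0.65))  # Eq. (166): the two agree
```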


The average magnetic moment of the spin can be obtained from Eq. (149)

⟨µz ⟩ = gµB ⟨Sz ⟩ . (169)

The magnetization of the sample M is deﬁned as the magnetic moment per unit volume. If there is one magnetic

atom per unit cell and all of them are uniformly magnetized, the magnetization reads

M = ⟨µz⟩/v0 = (gµB/v0) ⟨Sz⟩, (170)

where v0 is the unit-cell volume.

The internal energy U of a system of N spins can be obtained from the general formula, Eq. (55). The calculation

can be, however, avoided since from Eq. (152) simply follows

U = N ⟨εm ⟩ = −N gµB B ⟨m⟩ = −N gµB B ⟨Sz ⟩ = −N gµB BbS (y) = −N kB T ybS (y) . (171)

In particular, at low magnetic ﬁelds (or high temperatures) one has

U ≅ −N [S(S + 1)/3] (gµB B)²/(kB T). (172)

The magnetic susceptibility per spin is deﬁned by

χ = ∂⟨µz⟩/∂B. (173)

From Eqs. (169) and (163) one obtains

χ = [(gµB)²/(kB T)] b′S(y), (174)

where

b′S(y) ≡ dbS(y)/dy = −{(S + 1/2)/sinh[(S + 1/2)y]}² + [(1/2)/sinh(y/2)]². (175)

For S = 1/2 from Eq. (166) one obtains

b′_{1/2}(y) = 1/[4 cosh²(y/2)]. (176)

From Eq. (167) follows b′S (0) = S(S + 1)/3, thus one obtains the high-temperature limit

χ = [S(S + 1)/3] (gµB)²/(kB T), kB T ≫ gµB B. (177)

In the opposite limit y ≫ 1 the function b′S (y) and thus the susceptibility becomes small. The physical reason for this

is that at low temperatures, kB T ≪ gµB B, the spins are already strongly aligned by the magnetic ﬁeld, ⟨Sz⟩ ≅ S,

so that ⟨Sz ⟩ becomes hardly sensitive to small changes of B. As a function of temperature, χ has a maximum at

intermediate temperatures.

The heat capacity C can be obtained from Eq. (171) as

C = ∂U/∂T = −N gµB B b′S(y) ∂y/∂T = N kB b′S(y) y². (178)

One can see that C has a maximum at intermediate temperatures, y ∼ 1. At high temperatures C decreases as 1/T 2 :

C ≅ N kB [S(S + 1)/3] [gµB B/(kB T)]², (179)

and at low temperatures C becomes exponentially small, except for the case S = ∞ (see Fig. 6). Finally, Eqs. (174)

and (178) imply a simple relation between χ and C,

Nχ/C = T/B² (180)

that does not depend on the spin value S.
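The relation Nχ/C = T/B² follows by inspection of Eqs. (174) and (178), since b′S(y) cancels; a one-spin numerical spot-check is sketched below. The units gµB = 1, kB = 1 and the values of S, B, T are assumed for the example only.

```python
import math

# Sketch: the S-independent relation chi/C = T/B^2 per spin, Eq. (180),
# from Eqs. (174), (175) and (178).  Assumed units: g*mu_B = 1, k_B = 1, N = 1.

def bS_prime(S, y):   # Eq. (175)
    return (-((S + 0.5) / math.sinh((S + 0.5) * y))**2
            + (0.5 / math.sinh(0.5 * y))**2)

S, B, T = 1.5, 0.7, 2.0
y = B / T                      # y = g*mu_B*B/(k_B T) in these units
chi = bS_prime(S, y) / T       # Eq. (174)
C = bS_prime(S, y) * y**2      # Eq. (178), per spin, in units of k_B
print(chi / C, T / B**2)       # the two numbers agree
```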


FIG. 6: Temperature dependence of the heat capacity of spins in a magnetic ﬁeld for diﬀerent values of S.

As explained in thermodynamics, with changing thermodynamic parameters such as temperature, pressure, etc.,

the system can undergo phase transitions. In the ﬁrst-order phase transitions, the chemical potentials µ of the two

competing phases become equal at the phase transition line, while on each side of this line they are unequal and only

one of the phases is thermodynamically stable. The ﬁrst-order transitions are thus abrupt. In contrast, second-order
transitions are gradual: The so-called order parameter continuously grows from zero as the phase-transition line

is crossed. Thermodynamic quantities are singular at second-order transitions.

Phase transitions are complicated phenomena that arise due to interaction between particles in many-particle

systems in the thermodynamic limit N → ∞. It can be shown that thermodynamic quantities of ﬁnite systems are

non-singular and, in particular, there are no second-order phase transitions. Up to now in this course we studied

systems without interaction. Including the latter requires an extension of the formalism of statistical mechanics that

is done in Sec. XV. In such an extended formalism, one has to calculate partition functions over the energy levels of

the whole system that, in general, depend on the interaction. In some cases these energy levels are known exactly but

they are parametrized by many quantum numbers over which summation has to be carried out. In most of the cases,

however, the energy levels are not known analytically and their calculation is a huge quantum-mechanical problem. In

both cases calculation of the partition function is a very serious task. There are some models for which thermodynamic
quantities have been calculated exactly, including models that possess second-order phase transitions. For other models,
approximate numerical methods have been developed that allow calculating phase transition temperatures and critical

indices. It was shown that there are no phase transitions in one-dimensional systems with short-range interactions.

It is most convenient to illustrate the physics of phase transitions considering magnetic systems, or the systems of

spins introduced in Sec. XIII. The simplest forms of interaction between diﬀerent spins are Ising interaction −Jij Ŝiz Ŝjz

including only z components of the spins and Heisenberg interaction −Jij Ŝi · Ŝj . The exchange interaction Jij follows

from quantum mechanics of atoms and is of electrostatic origin. In most cases practically there is only interaction J

between spins of the neighboring atoms in a crystal lattice, because Jij decreases exponentially with distance. For

the Ising model, all energy levels of the whole system are known exactly while for the Heisenberg model they are not.

In the case J > 0 neighboring spins have the lowest interaction energy when they are collinear, and the state with

all spins pointing in the same direction is the exact ground state of the system. For the Ising model, all spins in the

ground state are parallel or antiparallel with the z axis, that is, it is double-degenerate. For the Heisenberg model the

spins can point in any direction in the ground state, so that the ground state has a continuous degeneracy. In both

cases, Ising and Heisenberg, the ground state for J > 0 is ferromagnetic.

The nature of the interaction between spins considered above suggests that at T → 0 the system should fall into its
ground state, so that thermodynamic averages of all spins approach their maximal values, ⟨Ŝi⟩ → S. With increasing
temperature, excited levels of the system become populated and the average spin value decreases, ⟨Ŝi⟩ < S. At high

temperatures all energy levels become populated, so that neighboring spins can have all possible orientations with

respect to each other. In this case, if there is no external magnetic ﬁeld acting on the spins (see Sec. XIII), the average

spin value should be exactly zero, because there are as many spins pointing in one direction as there are spins pointing

in the other direction. This is why the high-temperature state is called symmetric state. If now the temperature is

lowered, there should be a phase transition temperature TC (the Curie temperature for ferromagnets) below which

the order parameter (average spin value) becomes nonzero. Below TC the symmetry of the state is spontaneously

broken. This state is called ordered state. Note that if there is an applied magnetic ﬁeld, it will break the symmetry by
creating a nonzero spin average at all temperatures, so that there is no sharp phase transition, only a gradual increase
of ⟨Ŝi⟩ as the temperature is lowered.

Although the scenario outlined above looks persuasive, there are subtle eﬀects that may preclude ordering and
ensure ⟨Ŝi⟩ = 0 at all nonzero temperatures. As said above, ordering does not occur in one dimension (spin chains)
for both Ising and Heisenberg models. In two dimensions, the Ising model shows a phase transition, and the analytical
solution of the problem exists and was found by Lars Onsager. However, the two-dimensional Heisenberg model does not

order. In three dimensions, both Ising and Heisenberg models show a phase transition and no analytical solution is

available.

While a rigorous solution of the problem of phase transition is very diﬃcult, there is a simple approximate solution

that captures the physics of phase transitions as described above and provides qualitatively correct results in most

cases, including three-dimensional systems. The idea of this approximate solution is to reduce the original many-spin

problem to an eﬀective self-consistent one-spin problem by considering one spin (say, spin i) and replacing the other
spins in the interaction by their average thermodynamic values, −Jij Ŝi · Ŝj ⇒ −Jij Ŝi · ⟨Ŝj⟩. One can see that now the interaction

term has mathematically the same form as the term describing interaction of a spin with the applied magnetic ﬁeld,

Eq. (153). Thus one can use the formalism of Sec. XIII to solve the problem analytically. This approach is called the
mean ﬁeld approximation (MFA) or molecular ﬁeld approximation and it was ﬁrst suggested by Weiss for ferromagnets.
The eﬀective one-spin Hamiltonian within the MFA is similar to Eq. (153) and has the form

Ĥ = −Ŝ · (gµB B + Jz⟨Ŝ⟩) + (1/2) Jz ⟨Ŝ⟩², (181)

where z is the number of nearest neighbors for a spin in the lattice (z = 6 for the simple cubic lattice). Here the last
term ensures that the replacement Ŝ ⇒ ⟨Ŝ⟩ in Ĥ yields ⟨Ĥ⟩ = −⟨Ŝ⟩ · gµB B − (1/2)Jz⟨Ŝ⟩², which becomes the correct
ground-state energy at T = 0 where |⟨Ŝ⟩| = S. For the Heisenberg model, in the presence of a whatever weak applied

ﬁeld B the ordering will occur in the direction of B. Choosing the z axis in this direction, one can simplify Eq. (181)

to

Ĥ = −(gµB B + Jz⟨Ŝz⟩) Ŝz + (1/2) Jz ⟨Ŝz⟩². (182)

2

The same mean-ﬁeld Hamiltonian is valid for the Ising model as well, if the magnetic ﬁeld is applied along the z

axis. Thus, within the MFA (but not in general!) Ising and Heisenberg models are equivalent. The energy levels

corresponding to Eq. (182) are

εm = −(gµB B + Jz⟨Ŝz⟩) m + (1/2) Jz ⟨Ŝz⟩², m = −S, −S + 1, . . . , S − 1, S. (183)

Now, after calculation of the partition function Z, Eq. (155), one arrives at the free energy per spin

F = −kB T ln Z = (1/2) Jz ⟨Ŝz⟩² − kB T ln {sinh[(S + 1/2)y]/sinh(y/2)}, (184)

where

y ≡ (gµB B + Jz⟨Ŝz⟩)/(kB T). (185)

kB T

FIG. 7: Graphical solution of the Curie-Weiss equation for S = 1/2 at T = 0.5TC, TC, and 2TC.

To ﬁnd the actual value of the order parameter ⟨Ŝz⟩ at any temperature T and magnetic ﬁeld B, one has to minimize
F with respect to ⟨Ŝz⟩, as explained in thermodynamics. One obtains the equation

0 = ∂F/∂⟨Ŝz⟩ = Jz⟨Ŝz⟩ − Jz bS(y), (186)

where bS(y) is deﬁned by Eq. (164). Rearranging terms one arrives at the transcendental Curie-Weiss equation

⟨Ŝz⟩ = bS[(gµB B + Jz⟨Ŝz⟩)/(kB T)] (187)

that deﬁnes ⟨Ŝz⟩. In fact, this equation could be obtained in a shorter way by using the modiﬁed argument, Eq.
(185), in Eq. (163).

For B = 0, Eq. (187) has the only solution ⟨Ŝz⟩ = 0 at high temperatures. As bS(y) has the maximal slope at
y = 0, it is suﬃcient that this slope (with respect to ⟨Ŝz⟩) becomes smaller than 1 to exclude any solution other than
⟨Ŝz⟩ = 0. Using Eq. (167), one obtains that the only solution ⟨Ŝz⟩ = 0 is realized for T ≥ TC, where

TC = [S(S + 1)/3] Jz/kB (188)

is the Curie temperature within the mean-ﬁeld approximation. Below TC the slope of bS(y) with respect to ⟨Ŝz⟩
exceeds 1, so that there are three solutions for ⟨Ŝz⟩: one solution ⟨Ŝz⟩ = 0 and two symmetric solutions ⟨Ŝz⟩ ≠ 0 (see
Fig. 7). The latter correspond to a lower free energy than the solution ⟨Ŝz⟩ = 0, thus they are thermodynamically
stable (see Fig. 8). These solutions describe the ordered state below TC.

Slightly below TC the value of ⟨Ŝz⟩ is still small and can be found by expanding bS(y) up to y³. In particular,

b_{1/2}(y) = (1/2) tanh(y/2) ≅ y/4 − y³/48. (189)

FIG. 8: Free energy of a ferromagnet vs ⟨Ŝz⟩ within the MFA for S = 1/2 at T = 0.5TC, 0.8TC, TC, and 2TC. (Arbitrary vertical shift.)

Using TC = (1/4)Jz/kB and thus Jz = 4kB TC for S = 1/2, one can rewrite Eq. (187) with B = 0 in the form

⟨Ŝz⟩ = (TC/T) ⟨Ŝz⟩ − (4/3)(TC/T)³ ⟨Ŝz⟩³. (190)

One of the solutions is ⟨Ŝz⟩ = 0 while the other two solutions are

⟨Ŝz⟩/S = ±√3 √(1 − T/TC), S = 1/2, (191)

where the factor T/TC was replaced by 1 near TC. Although obtained near TC, in this form the result is only by the
factor √3 oﬀ at T = 0. The singularity of the order parameter near TC is square root, so that for the magnetization

critical index one has β = 1/2. Results of the numerical solution of the Curie-Weiss equation with B = 0 for diﬀerent

S are shown in Fig. 9. Note that the magnetization is related to the spin average by Eq. (170).

Let us now consider the magnetic susceptibility per spin χ deﬁned by Eq. (173) above TC . Linearizing Eq. (187)

with the use of Eq. (167), one obtains

⟨Ŝz⟩ = [S(S + 1)/3] (gµB B + Jz⟨Ŝz⟩)/(kB T) = [S(S + 1)/3] gµB B/(kB T) + (TC/T) ⟨Ŝz⟩. (192)

From here one obtains

χ = ∂⟨µz⟩/∂B = gµB ∂⟨Ŝz⟩/∂B = [S(S + 1)/3] (gµB)²/[kB (T − TC)]. (193)

In contrast to non-interacting spins, Eq. (177), the susceptibility diverges at T = TC rather than at T = 0. The inverse

susceptibility χ⁻¹ is a straight line crossing the T axis at TC. In the theory of phase transitions the critical index for
the susceptibility γ is deﬁned as χ ∝ (T − TC)^{−γ}. One can see that γ = 1 within the MFA.

More precise methods than the MFA yield smaller values of β and larger values of γ that depend on the details of the
model such as the number of interacting spin components (1 for the Ising and 3 for the Heisenberg model) and the dimension
of the lattice. On the other hand, critical indices are insensitive to such factors as the lattice structure and the value of
the spin S. In contrast, the value of TC does not possess such universality and depends on all parameters of
the problem. Accurate values of TC are by up to 25% lower than their mean-ﬁeld values in three dimensions.

FIG. 9: Temperature dependences of the normalized spin averages ⟨Ŝz⟩/S for S = 1/2, 5/2, ∞ obtained from the numerical solution

of the Curie-Weiss equation.

In the previous chapters we considered statistical properties of systems of non-interacting and distinguishable
particles. Assigning a quantum state to one such particle is independent from assigning the states to other particles;
these processes are statistically independent. Particle 1 in state 1 and particle 2 in state 2 (the microstate (1,2)) and
the microstate (2,1) are counted as diﬀerent microstates within the same macrostate. However, quantum-mechanical
particles such as electrons are indistinguishable from each other. Thus microstates (1,2) and (2,1) should be counted
as a single microstate, one particle in state 1 and one in state 2, without specifying which particle is in which state. This

means that assigning quantum states for diﬀerent particles, as we have done above, is not statistically independent.

Accordingly, the method based on the maximization of the Boltzmann entropy of a large system, Eq. (22), becomes
invalid. This changes the foundations of statistical description of such systems.

In addition to indistinguishability of quantum particles, it follows from the relativistic quantum theory that particles

with half-integer spin, called fermions, cannot occupy a quantum state more than once. There is no place for more than

one fermion in any quantum state, the so-called exclusion principle. An important example of a fermion is the electron.

To the contrary, particles with integer spin, called bosons, can be put into a quantum state in unlimited numbers.

Examples of bosons are helium atoms and other atoms with an integer spin. Note that in the case of fermions, even for a

large system, one cannot use the Stirling formula, Eq. (15), to simplify combinatorial expressions, since the occupation

numbers of quantum states can be only 0 and 1. This is an additional reason why the Boltzmann formalism does not

work here.

Finally, the Boltzmann formalism of quantum statistics does not work for systems with interaction because the

quantum states of such systems are quantum states of the system as the whole rather than one-particle quantum

states.

To construct the statistical mechanics of systems of indistinguishable particles and/or systems with interaction, one
has to consider an ensemble of 𝒩 systems. The states of the systems are labeled by ξ, and 𝒩ξ is the number of systems
of the ensemble in the state ξ, so that

𝒩 = Σξ 𝒩ξ, (194)

and each state ξ is speciﬁed by the set of Ni , the numbers of particles in each quantum state i. We require that the

28

average number of particles in a system and the average energy of a system over the ensemble are ﬁxed:

N = (1/𝒩) Σξ 𝒩ξ Nξ, Nξ = Σi Ni, (195)

U = (1/𝒩) Σξ 𝒩ξ Eξ, Eξ = Σi εi Ni. (196)

Here the sums over i are performed for the indicated state ξ. Do not confuse 𝒩ξ (the number of systems in the state
ξ) with Nξ (the number of particles in a system in the state ξ). The number of ways W in which the state ξ can be
realized (i.e., the number of microstates) can be calculated in exactly the same way as it was done above in the case
of Boltzmann statistics. As the systems we are considering are distinguishable, one obtains

W = 𝒩!/Πξ 𝒩ξ!. (197)

Since the number of systems in the ensemble 𝒩 and thus all 𝒩ξ can be arbitrarily high, the actual state of the ensemble
is the one maximizing W under the constraints of Eqs. (195) and (196). Within the method of Lagrange multipliers, one
has to maximize the target function

Φ(𝒩1, 𝒩2, · · · ) = ln W + α Σξ 𝒩ξ Nξ − β Σξ 𝒩ξ Eξ (198)

[c.f. Eq. (45) and below]. Using the Stirling formula, Eq. (15), one obtains

\Phi \cong \ln \mathcal{N}! - \sum_\xi \mathcal{N}_\xi \ln \mathcal{N}_\xi + \sum_\xi \mathcal{N}_\xi + \alpha \sum_\xi \mathcal{N}_\xi N_\xi - \beta \sum_\xi \mathcal{N}_\xi E_\xi .    (199)

Note that we do not require Ni to be large. In particular, for fermions one has Ni = 0, 1. Maximizing Φ, one obtains the

equations

\frac{\partial \Phi}{\partial \mathcal{N}_\xi} = -\ln \mathcal{N}_\xi + \alpha N_\xi - \beta E_\xi = 0    (200)

that yield

\mathcal{N}_\xi = e^{\alpha N_\xi - \beta E_\xi} ,    (201)

the Gibbs distribution of the so-called grand canonical ensemble.

The total number of systems N is given by

\mathcal{N} = \sum_\xi \mathcal{N}_\xi = \sum_\xi e^{\alpha N_\xi - \beta E_\xi} = Z ,    (202)

where Z is the partition function of the grand canonical ensemble. Thus the ensemble average of the internal energy,

Eq. (196), becomes

U = \frac{1}{Z} \sum_\xi e^{\alpha N_\xi - \beta E_\xi} E_\xi = -\frac{\partial \ln Z}{\partial \beta} .    (203)

In contrast to Eq. (55), this formula does not contain N explicitly since the average number of particles in a system

is encapsulated in Z. As before, the parameter β can be associated with temperature, β = 1/ (kB T ) . Thus Eq. (203)

can be rewritten as

U = -\frac{\partial \ln Z}{\partial T} \frac{\partial T}{\partial \beta} = k_B T^2 \frac{\partial \ln Z}{\partial T} .    (204)

Now the entropy S can be found thermodynamically as

S = \int_0^T dT' \frac{C_V}{T'} = \int_0^T dT' \frac{1}{T'} \frac{\partial U}{\partial T'} .    (205)

Integrating by parts and using Eq. (204), one obtains

S = \frac{U}{T} + \int_0^T dT' \frac{U}{T'^2} = \frac{U}{T} + \int_0^T dT' k_B \frac{\partial \ln Z}{\partial T'} = \frac{U}{T} + k_B \ln Z .    (206)

This yields the free energy

F = U − T S = −kB T ln Z. (207)


The Gibbs distribution looks similar to the Boltzmann distribution, Eq. (50). However, this is a distribution of the systems in the statistical ensemble, rather than a distribution of particles over the energy levels. To obtain

the latter, an additional step has to be made. This is calculating the average of Ni as

\overline{N_i} = \frac{1}{\mathcal{N}} \sum_\xi \mathcal{N}_\xi N_i = \frac{1}{Z} \sum_\xi e^{\alpha N_\xi - \beta E_\xi} N_i ,    (208)

where Z is defined by Eq. (202). As both N_\xi and E_\xi are additive in N_i, see Eqs. (195) and (196), and the states ξ are specified

by their corresponding sets of Ni , the summand of Eq. (202) is multiplicative:

Z = \sum_{N_1} e^{(\alpha - \beta\varepsilon_1) N_1} \times \sum_{N_2} e^{(\alpha - \beta\varepsilon_2) N_2} \times \ldots = \prod_j Z_j .    (209)

A similar factorization occurs in Eq. (208), and all factors except the one containing Ni cancel each other, leading to

\overline{N_i} = \frac{1}{Z_i} \sum_{N_i} N_i e^{(\alpha - \beta\varepsilon_i) N_i} = \frac{\partial \ln Z_i}{\partial \alpha} .    (210)

The partition functions for quantum states i have diﬀerent forms for bosons and fermions. For fermions Ni take the

values 0 and 1 only, thus

Z_i = 1 + e^{\alpha - \beta\varepsilon_i} .    (211)

For bosons, N_i takes all values from 0 to ∞, and summing the geometric series yields

Z_i = \sum_{N_i = 0}^{\infty} e^{(\alpha - \beta\varepsilon_i) N_i} = \frac{1}{1 - e^{\alpha - \beta\varepsilon_i}} .    (212)

Now \overline{N_i} can be calculated from Eq. (210). Discarding the bar over N_i, one obtains

N_i = \frac{1}{e^{\beta\varepsilon_i - \alpha} \pm 1} ,    (213)

where (+) corresponds to fermions and (−) to bosons. In the first case the distribution is called the Fermi-Dirac distribution, and in the second case the Bose-Einstein distribution. Eq. (213) should be compared with the Boltzmann distribution, Eq. (50). Whereas β = 1/(kB T), the parameter α is defined by the normalization condition

N = \sum_i N_i = \sum_i \frac{1}{e^{\beta\varepsilon_i - \alpha} \pm 1} .    (214)

It can be shown that the parameter α is related to the chemical potential µ as α = βµ = µ/ (kB T ). Thus Eq. (213)

can be written as

N_i = f(\varepsilon_i) , \qquad f(\varepsilon) = \frac{1}{e^{\beta(\varepsilon - \mu)} \pm 1} .    (215)
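For orientation, the two distributions of Eq. (215) can be evaluated side by side; this is a minimal sketch, and the numerical values of ε, µ, and kBT are arbitrary illustrative inputs:

```python
from math import exp

def occupation(eps, mu, kT, statistics="fermi"):
    """Mean occupation f(eps) = 1/(exp((eps - mu)/kT) ± 1) of Eq. (215):
    '+' for Fermi-Dirac, '-' for Bose-Einstein."""
    sign = 1.0 if statistics == "fermi" else -1.0
    return 1.0 / (exp((eps - mu) / kT) + sign)

# A Fermi level (eps = mu) is exactly half-filled at any temperature:
print(occupation(1.0, 1.0, 0.1, "fermi"))   # → 0.5
# Far above mu both statistics reduce to the Boltzmann factor exp(-(eps-mu)/kT):
print(occupation(2.0, 0.0, 0.1, "fermi"))
print(occupation(2.0, 0.0, 0.1, "bose"))
```

The last two lines illustrate the high-temperature/low-occupation limit discussed next, where the ±1 in the denominator becomes negligible.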

At high temperatures µ becomes large negative, so that −βµ ≫ 1. In this case one can neglect ±1 in the denominator in comparison to the large exponential, and the Bose-Einstein and Fermi-Dirac distributions simplify to the Boltzmann distribution. Replacing summation by integration, one obtains

N = e^{\beta\mu} \int_0^\infty d\varepsilon\, \rho(\varepsilon)\, e^{-\beta\varepsilon} = e^{\beta\mu} Z .    (216)

e^{-\beta\mu} = \frac{Z}{N} = \frac{1}{n} \left( \frac{m k_B T}{2\pi\hbar^2} \right)^{3/2} \gg 1    (217)

and our calculations are self-consistent. At low temperatures ±1 in the denominator of Eq. (215) cannot be neglected,

and the system is called degenerate.


In the macroscopic limit N → ∞ and V → ∞, so that the concentration of particles n = N/V is constant, the energy levels of the system become so finely spaced that they form a quasicontinuum. In this case summation in Eq. (214) can be replaced by integration. However, it turns out that below some temperature TB the ideal Bose gas undergoes the so-called Bose condensation: a macroscopic number of particles N0 ∼ N falls into the ground state. These particles are called the Bose condensate. Setting the energy of the ground state to zero (which can always be done), from Eq. (215) one obtains

N_0 = \frac{1}{e^{-\beta\mu} - 1} .    (218)

Resolving this for µ, one obtains

\mu = -k_B T \ln\left( 1 + \frac{1}{N_0} \right) \cong -\frac{k_B T}{N_0} .    (219)

Since N0 is very large, one has practically µ = 0 below the Bose condensation temperature. Having µ = 0, one can

easily calculate the number of particles in the excited states Nex by integration as

N_{ex} = \int_0^\infty d\varepsilon\, \frac{\rho(\varepsilon)}{e^{\beta\varepsilon} - 1} .    (220)

The total number of particles is then

N = N_0 + N_{ex} .    (221)

In three dimensions, using the density of states ρ(ε) given by Eq. (44), one obtains

N_{ex} = \frac{V}{(2\pi)^2} \left( \frac{2m}{\hbar^2} \right)^{3/2} \int_0^\infty d\varepsilon\, \frac{\sqrt{\varepsilon}}{e^{\beta\varepsilon} - 1} = \frac{V}{(2\pi)^2} \left( \frac{2m}{\hbar^2} \right)^{3/2} \frac{1}{\beta^{3/2}} \int_0^\infty dx\, \frac{\sqrt{x}}{e^x - 1} .    (222)

The integral here is of the general type

\int_0^\infty dx\, \frac{x^{s-1}}{e^x - 1} = \Gamma(s)\, \zeta(s) ,    (223)

where

\Gamma(1/2) = \sqrt{\pi} , \qquad \Gamma(3/2) = \sqrt{\pi}/2 ,    (225)

\zeta(s) = \sum_{k=1}^\infty k^{-s} ,    (226)

\zeta(1) = \infty , \qquad \zeta(3/2) = 2.612 , \qquad \zeta(5/2) = 1.341 ,    (227)

\frac{\zeta(5/2)}{\zeta(3/2)} = 0.5134 .    (228)
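The numerical ζ values quoted above are easy to check. Below is a sketch that sums the series of Eq. (226) with the leading Euler–Maclaurin tail correction; the cutoff of 2·10⁵ terms is an arbitrary choice:

```python
def zeta(s, terms=200_000):
    """zeta(s) = sum_{k>=1} k^(-s) for s > 1, Eq. (226); the tail
    sum_{k>N} k^(-s) is approximated by the integral N^(1-s)/(s-1)."""
    partial = sum(k ** -s for k in range(1, terms + 1))
    return partial + terms ** (1 - s) / (s - 1)

print(round(zeta(1.5), 3))  # → 2.612
print(round(zeta(2.5), 3))  # → 1.341
print(zeta(2.5) / zeta(1.5))  # ratio of Eq. (228)
```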

Thus

N_{ex} = \frac{V}{(2\pi)^2} \left( \frac{2m k_B T}{\hbar^2} \right)^{3/2} \Gamma(3/2)\, \zeta(3/2) .    (229)

The condensation temperature T_B is determined by the condition N_{ex} = N, at which all particles are in excited states,

N = \frac{V}{(2\pi)^2} \left( \frac{2m k_B T_B}{\hbar^2} \right)^{3/2} \Gamma(3/2)\, \zeta(3/2) ,    (230)

[FIG. 10: Condensate fraction N0(T)/N and the fraction of excited particles Nex(T)/N for the ideal Bose-Einstein gas.]

so that at T = TB the condensate disappears, N0 = 0. From the equation above follows

k_B T_B = \frac{\hbar^2}{2m} \left( \frac{(2\pi)^2 n}{\Gamma(3/2)\, \zeta(3/2)} \right)^{2/3} , \qquad n = \frac{N}{V} ,    (231)

that is, TB ∝ n^{2/3}. In typical situations TB < 0.1 K, a very low temperature that can be reached only by special methods such as laser cooling. Now, obviously, Eq. (229) can be rewritten as

\frac{N_{ex}}{N} = \left( \frac{T}{T_B} \right)^{3/2} , \qquad T \le T_B .    (232)

The temperature dependence of the Bose-condensate is given by

\frac{N_0}{N} = 1 - \frac{N_{ex}}{N} = 1 - \left( \frac{T}{T_B} \right)^{3/2} , \qquad T \le T_B .    (233)

One can see that at T = 0 all bosons are in the Bose condensate, while at T = TB the Bose condensate disappears.

Indeed, in the whole temperature range 0 ≤ T ≤ TB one has N0 ∼ N and our calculations using µ = 0 in this

temperature range are self-consistent.
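The condensate and excited fractions of Eqs. (232)-(233) are a one-liner to tabulate; a minimal sketch:

```python
def condensate_fraction(t):
    """N0/N = 1 - (T/TB)^(3/2) for T <= TB and 0 above, Eqs. (232)-(233);
    t is the reduced temperature T/TB."""
    return 1.0 - t ** 1.5 if t < 1.0 else 0.0

# At T = 0 the whole gas is condensed; the fraction vanishes at T = TB.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, condensate_fraction(t))
```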

Above TB there is no Bose condensate, thus µ ̸= 0 and is deﬁned by the equation

N = \int_0^\infty d\varepsilon\, \frac{\rho(\varepsilon)}{e^{\beta(\varepsilon - \mu)} - 1} .    (234)

This is a nonlinear equation that has no general analytical solution. At high temperatures the Bose-Einstein (and also

Fermi-Dirac) distribution simpliﬁes to the Boltzmann distribution and µ can be easily found. Equation (217) with

the help of Eq. (231) can be rewritten in the form

e^{-\beta\mu} = \frac{1}{\zeta(3/2)} \left( \frac{T}{T_B} \right)^{3/2} \gg 1 ,    (235)

so that the high-temperature limit implies T ≫ TB .

Let us now consider the energy of the ideal Bose gas. Since the energy of the condensate is zero, in the thermodynamic

limit one has

U = \int_0^\infty d\varepsilon\, \frac{\varepsilon\, \rho(\varepsilon)}{e^{\beta(\varepsilon - \mu)} - 1}    (236)


[FIG. 11: Chemical potential µ(T) of the ideal Bose-Einstein gas. Dashed line: high-temperature asymptote corresponding to the Boltzmann statistics.]

[FIG. 12: Heat capacity C/(N kB) of the ideal Bose-Einstein gas vs T/TB.]

in the whole temperature range. At high temperatures, T ≫ TB , the Boltzmann distribution applies, so that U is

given by Eq. (71) and the heat capacity is a constant, C = (3/2) N kB . For T < TB one has µ = 0, thus

U = \int_0^\infty d\varepsilon\, \frac{\varepsilon\, \rho(\varepsilon)}{e^{\beta\varepsilon} - 1} ,    (237)

U = \frac{V}{(2\pi)^2} \left( \frac{2m}{\hbar^2} \right)^{3/2} \int_0^\infty d\varepsilon\, \frac{\varepsilon^{3/2}}{e^{\beta\varepsilon} - 1} = \frac{V}{(2\pi)^2} \left( \frac{2m}{\hbar^2} \right)^{3/2} \Gamma(5/2)\, \zeta(5/2)\, (k_B T)^{5/2} .    (238)


With the help of Eq. (230) this can be expressed in the form

U = N k_B T \left( \frac{T}{T_B} \right)^{3/2} \frac{\Gamma(5/2)\, \zeta(5/2)}{\Gamma(3/2)\, \zeta(3/2)} = \frac{3}{2} N k_B T \left( \frac{T}{T_B} \right)^{3/2} \frac{\zeta(5/2)}{\zeta(3/2)} , \qquad T \le T_B .    (239)

The corresponding heat capacity is

C_V = \left( \frac{\partial U}{\partial T} \right)_V = \frac{15}{4} N k_B \left( \frac{T}{T_B} \right)^{3/2} \frac{\zeta(5/2)}{\zeta(3/2)} , \qquad T \le T_B .    (240)
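At T = TB, Eq. (240) gives the maximum value CV = (15/4)[ζ(5/2)/ζ(3/2)] N kB ≈ 1.93 N kB, noticeably above the classical 3/2. A quick check using the ζ values quoted in Eq. (227):

```python
ZETA_52, ZETA_32 = 1.341, 2.612   # zeta(5/2), zeta(3/2) from Eq. (227)

def cv_bose(t):
    """C_V/(N k_B) of the ideal Bose gas for T <= TB, Eq. (240);
    t is the reduced temperature T/TB."""
    return (15.0 / 4.0) * (ZETA_52 / ZETA_32) * t ** 1.5

print(cv_bose(1.0))   # ≈ 1.925, larger than the classical value 3/2
```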

To calculate the pressure of the ideal Bose gas, one has to start with the entropy, which below TB is given by

S = \int_0^T dT' \frac{C_V(T')}{T'} = \frac{2}{3} C_V = \frac{5}{2} N k_B \left( \frac{T}{T_B} \right)^{3/2} \frac{\zeta(5/2)}{\zeta(3/2)} , \qquad T \le T_B .    (241)

Diﬀerentiating the free energy

F = U - TS = -N k_B T \left( \frac{T}{T_B} \right)^{3/2} \frac{\zeta(5/2)}{\zeta(3/2)} , \qquad T \le T_B ,    (242)

one obtains the pressure

P = -\left( \frac{\partial F}{\partial V} \right)_T = -\left( \frac{\partial F}{\partial T_B} \right)_T \frac{\partial T_B}{\partial V} = -\left( -\frac{3}{2} \frac{F}{T_B} \right) \left( -\frac{2}{3} \frac{T_B}{V} \right) = -\frac{F}{V} ,    (243)

i.e. the equation of state

P V = N k_B T \left( \frac{T}{T_B} \right)^{3/2} \frac{\zeta(5/2)}{\zeta(3/2)} , \qquad T \le T_B ,    (244)

compared to P V = N kB T at high temperatures. One can see that the pressure of the ideal Bose gas with the condensate contains an additional factor (T/TB)^{3/2} < 1. This is because the particles in the condensate are not thermally agitated and thus do not contribute to the pressure.

Properties of the Fermi gas [plus sign in Eq. (215)] are diﬀerent from those of the Bose gas because the exclusion

principle prevents multi-occupancy of quantum states. As a result, no condensation at the ground state occurs at low

temperatures, and for a macroscopic system the chemical potential can be found from the equation

N = \int_0^\infty d\varepsilon\, \rho(\varepsilon) f(\varepsilon) = \int_0^\infty d\varepsilon\, \frac{\rho(\varepsilon)}{e^{\beta(\varepsilon - \mu)} + 1}    (245)

at all temperatures. This is a nonlinear equation for µ that in general can be solved only numerically.

In the limit T → 0 fermions fill a certain number of low-lying energy levels to minimize the total energy while

obeying the exclusion principle. As we will see, the chemical potential of fermions is positive at low temperatures,

µ > 0. For T → 0 (i. e., β → ∞) one has eβ(ε−µ) → 0 if ε < µ and eβ(ε−µ) → ∞ if ε > µ. Thus Eq. (245) becomes

f(\varepsilon) = \begin{cases} 1, & \varepsilon < \mu \\ 0, & \varepsilon > \mu , \end{cases}    (246)

so that, with \mu_0 \equiv \mu(T = 0),

N = \int_0^{\mu_0} d\varepsilon\, \rho(\varepsilon) .    (247)

The most important fermions are electrons, which have spin 1/2 and thus degeneracy 2 because of the two states of the spin.

In three dimensions, using Eq. (44) with an additional factor 2 for the degeneracy one obtains

N = \frac{2V}{(2\pi)^2} \left( \frac{2m}{\hbar^2} \right)^{3/2} \int_0^{\mu_0} d\varepsilon\, \sqrt{\varepsilon} = \frac{2V}{(2\pi)^2} \left( \frac{2m}{\hbar^2} \right)^{3/2} \frac{2}{3} \mu_0^{3/2} .    (248)


[FIG. 13: Chemical potential µ(T) of the ideal Fermi-Dirac gas. Dashed line: high-temperature asymptote corresponding to the Boltzmann statistics. Dash-dotted line: low-temperature asymptote.]

[Figure: Fermi-Dirac distribution f(ε) vs ε/(kB TF) for T = 0, T = 0.03 TF, T = 0.3 TF, and T = TF.]

\mu_0 = \frac{\hbar^2}{2m} \left( 3\pi^2 n \right)^{2/3} = \varepsilon_F ,    (249)

where we have introduced the Fermi energy εF that coincides with µ0. It is convenient also to introduce the Fermi temperature as

k_B T_F = \varepsilon_F .    (250)
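As an order-of-magnitude check of Eqs. (249)-(250), one can plug in the conduction-electron density of copper, n ≈ 8.5·10²⁸ m⁻³ (a standard literature value, used here only for illustration):

```python
from math import pi

hbar = 1.0546e-34   # reduced Planck constant, J s
m_e = 9.109e-31     # electron mass, kg
k_B = 1.381e-23     # Boltzmann constant, J/K
n = 8.5e28          # m^-3, conduction-electron density of copper (illustrative)

eps_F = hbar ** 2 / (2 * m_e) * (3 * pi ** 2 * n) ** (2 / 3)   # Eq. (249)
T_F = eps_F / k_B                                              # Eq. (250)

print(eps_F / 1.602e-19)   # Fermi energy in eV, ≈ 7 eV
print(T_F)                 # Fermi temperature, ≈ 8e4 K
```

The result is consistent with the statement below that TF ∼ 10⁵ K in typical metals.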

One can see that TF has the same structure as TB defined by Eq. (231). In typical metals TF ∼ 10^5 K, so that at room temperatures T ≪ TF and the electron gas is degenerate. It is convenient to express the density of states in terms of N and εF:

\rho(\varepsilon) = \frac{3}{2} N \frac{\sqrt{\varepsilon}}{\varepsilon_F^{3/2}} .    (251)

[Figure: Heat capacity C/(N kB) of the ideal Fermi gas vs T/TF.]

The energy of the Fermi gas at T = 0 is then

U = \int_0^{\mu_0} d\varepsilon\, \rho(\varepsilon)\, \varepsilon = \frac{3}{2} \frac{N}{\varepsilon_F^{3/2}} \int_0^{\varepsilon_F} d\varepsilon\, \varepsilon^{3/2} = \frac{3}{2} \frac{N}{\varepsilon_F^{3/2}} \frac{2}{5} \varepsilon_F^{5/2} = \frac{3}{5} N \varepsilon_F .    (252)

One cannot calculate the heat capacity CV from this formula, as doing so requires taking into account small temperature-dependent corrections in U; this will be done later. One can calculate the pressure at low temperatures, since in this region the entropy S is small and F = U − TS ≅ U. One obtains

P = -\left( \frac{\partial F}{\partial V} \right)_{T=0} \cong -\left( \frac{\partial U}{\partial V} \right)_{T=0} = -\frac{3}{5} N \frac{\partial \varepsilon_F}{\partial V} = -\frac{3}{5} N \left( -\frac{2}{3} \frac{\varepsilon_F}{V} \right) = \frac{2}{5} n \varepsilon_F = \frac{2}{5} \frac{\hbar^2}{2m} \left( 3\pi^2 \right)^{2/3} n^{5/3} .    (253)

To calculate the heat capacity at low temperatures, one has to ﬁnd temperature-dependent corrections to Eq. (252).

We will need the integral of a general type

M_\eta = \int_0^\infty d\varepsilon\, \varepsilon^\eta f(\varepsilon) = \int_0^\infty d\varepsilon\, \frac{\varepsilon^\eta}{e^{(\varepsilon - \mu)/(k_B T)} + 1}    (254)

that enters Eq. (245) for N and the similar equation for U. With the use of Eq. (251) one can write

N = \frac{3}{2} \frac{N}{\varepsilon_F^{3/2}} M_{1/2} ,    (255)

U = \frac{3}{2} \frac{N}{\varepsilon_F^{3/2}} M_{3/2} .    (256)

It can be shown that for kB T ≪ µ the expansion of Mη up to quadratic terms has the form

M_\eta = \frac{\mu^{\eta+1}}{\eta+1} \left[ 1 + \frac{\pi^2}{6} \eta (\eta+1) \left( \frac{k_B T}{\mu} \right)^2 \right] .    (257)
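This expansion can be verified by comparing it with a direct quadrature of Eq. (254); a sketch with arbitrary illustrative values µ = 1 and kBT = 0.05:

```python
from math import exp, pi

def M_direct(eta, mu, kT, steps=200_000):
    """M_eta of Eq. (254) by the trapezoidal rule; the integration is
    cut off at mu + 25 kT, where the integrand is exponentially small."""
    emax = mu + 25.0 * kT
    h = emax / steps
    total = 0.0
    for i in range(steps + 1):
        eps = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * eps ** eta / (exp((eps - mu) / kT) + 1.0)
    return total * h

def M_expansion(eta, mu, kT):
    """Low-temperature expansion of Eq. (257)."""
    return mu ** (eta + 1) / (eta + 1) * (
        1.0 + pi ** 2 / 6.0 * eta * (eta + 1) * (kT / mu) ** 2)

mu, kT = 1.0, 0.05
print(M_direct(0.5, mu, kT))     # direct integral
print(M_expansion(0.5, mu, kT))  # expansion; the two agree closely
```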


Substituting Eq. (257) with η = 1/2 into Eq. (255) yields

\varepsilon_F^{3/2} = \mu^{3/2} \left[ 1 + \frac{\pi^2}{8} \left( \frac{k_B T}{\mu} \right)^2 \right] ,    (258)

so that

\mu = \varepsilon_F \left[ 1 + \frac{\pi^2}{8} \left( \frac{k_B T}{\mu} \right)^2 \right]^{-2/3} \cong \varepsilon_F \left[ 1 - \frac{\pi^2}{12} \left( \frac{k_B T}{\mu} \right)^2 \right] \cong \varepsilon_F \left[ 1 - \frac{\pi^2}{12} \left( \frac{k_B T}{\varepsilon_F} \right)^2 \right] ,    (259)

i.e.,

\mu = \varepsilon_F \left[ 1 - \frac{\pi^2}{12} \left( \frac{T}{T_F} \right)^2 \right] .    (260)

It is not surprising that the chemical potential decreases with temperature because at high temperatures it takes large

negative values, see Eq. (217). Equation (256) becomes

U = \frac{3}{2} \frac{N}{\varepsilon_F^{3/2}} \frac{\mu^{5/2}}{5/2} \left[ 1 + \frac{5\pi^2}{8} \left( \frac{k_B T}{\mu} \right)^2 \right] \cong \frac{3}{5} N \frac{\mu^{5/2}}{\varepsilon_F^{3/2}} \left[ 1 + \frac{5\pi^2}{8} \left( \frac{k_B T}{\varepsilon_F} \right)^2 \right] .    (261)

With the help of Eq. (260) this becomes

U = \frac{3}{5} N \varepsilon_F \left[ 1 - \frac{\pi^2}{12} \left( \frac{T}{T_F} \right)^2 \right]^{5/2} \left[ 1 + \frac{5\pi^2}{8} \left( \frac{T}{T_F} \right)^2 \right] \cong \frac{3}{5} N \varepsilon_F \left[ 1 - \frac{5\pi^2}{24} \left( \frac{T}{T_F} \right)^2 \right] \left[ 1 + \frac{5\pi^2}{8} \left( \frac{T}{T_F} \right)^2 \right]    (262)

that yields

U = \frac{3}{5} N \varepsilon_F \left[ 1 + \frac{5\pi^2}{12} \left( \frac{T}{T_F} \right)^2 \right] .    (263)

The heat capacity is then

C_V = \left( \frac{\partial U}{\partial T} \right)_V = \frac{\pi^2}{2} N k_B \frac{T}{T_F} ,    (264)

that is small at T ≪ TF .
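To see how small this is, one can evaluate Eq. (264) at room temperature with a typical metallic Fermi temperature TF ∼ 8·10⁴ K (an illustrative value):

```python
from math import pi

def cv_electron(T, T_F):
    """C_V/(N k_B) of the degenerate electron gas, Eq. (264)."""
    return pi ** 2 / 2.0 * T / T_F

print(cv_electron(300.0, 8.0e4))   # ≈ 0.019, tiny compared to the classical 3/2
```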

Let us now derive Eq. (257). Integrating Eq. (254) by parts, one obtains

M_\eta = \left. \frac{\varepsilon^{\eta+1}}{\eta+1} f(\varepsilon) \right|_0^\infty - \int_0^\infty d\varepsilon\, \frac{\varepsilon^{\eta+1}}{\eta+1} \frac{\partial f(\varepsilon)}{\partial \varepsilon} .    (265)

The first term of this formula is zero. At low temperatures f(ε) is close to a step function, changing rapidly from 1 to 0

in the vicinity of ε = µ. Thus

\frac{\partial f(\varepsilon)}{\partial \varepsilon} = \frac{\partial}{\partial \varepsilon} \frac{1}{e^{\beta(\varepsilon - \mu)} + 1} = -\frac{\beta e^{\beta(\varepsilon - \mu)}}{\left[ e^{\beta(\varepsilon - \mu)} + 1 \right]^2} = -\frac{\beta}{4 \cosh^2 \left[ \beta (\varepsilon - \mu)/2 \right]}    (266)

has a sharp negative peak at ε = µ. By contrast, ε^{η+1} is a slowly varying function of ε that can be expanded in a Taylor

series in the vicinity of ε = µ. Up to the second order one has

\varepsilon^{\eta+1} = \mu^{\eta+1} + \left. \frac{\partial \varepsilon^{\eta+1}}{\partial \varepsilon} \right|_{\varepsilon=\mu} (\varepsilon - \mu) + \frac{1}{2} \left. \frac{\partial^2 \varepsilon^{\eta+1}}{\partial \varepsilon^2} \right|_{\varepsilon=\mu} (\varepsilon - \mu)^2 = \mu^{\eta+1} + (\eta+1) \mu^\eta (\varepsilon - \mu) + \frac{1}{2} \eta (\eta+1) \mu^{\eta-1} (\varepsilon - \mu)^2 .    (267)


Introducing x ≡ β (ε − µ) /2 and formally extending integration in Eq. (265) from −∞ to ∞, one obtains

M_\eta = \frac{\mu^{\eta+1}}{\eta+1} \int_{-\infty}^{\infty} dx \left[ \frac{1}{2} + \frac{\eta+1}{\beta\mu} x + \frac{\eta (\eta+1)}{\beta^2 \mu^2} x^2 \right] \frac{1}{\cosh^2(x)} .    (268)

The contribution of the linear x term vanishes by symmetry. Using the integrals

\int_{-\infty}^{\infty} \frac{dx}{\cosh^2(x)} = 2 , \qquad \int_{-\infty}^{\infty} \frac{x^2\, dx}{\cosh^2(x)} = \frac{\pi^2}{6} ,    (269)

one arrives at Eq. (257).
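Both integrals in Eq. (269) are quickly confirmed numerically; since 1/cosh²x decays as 4e^{−2|x|}, the interval [−20, 20] is effectively infinite. A minimal sketch:

```python
from math import cosh, pi

def trapz(f, a, b, steps=100_000):
    """Trapezoidal rule for f on [a, b]."""
    h = (b - a) / steps
    s = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        s += f(a + i * h)
    return s * h

print(trapz(lambda x: 1.0 / cosh(x) ** 2, -20.0, 20.0))     # → 2.0
print(trapz(lambda x: x ** 2 / cosh(x) ** 2, -20.0, 20.0))  # → pi^2/6 ≈ 1.645
```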
