Tabish Qureshi
\ddot{\vec{r}}_1 = \frac{1}{m_1}\vec{F}_1(\vec{r}_1, \vec{r}_2 \ldots \vec{r}_N)
\ddot{\vec{r}}_2 = \frac{1}{m_2}\vec{F}_2(\vec{r}_1, \vec{r}_2 \ldots \vec{r}_N)
\vdots
\ddot{\vec{r}}_N = \frac{1}{m_N}\vec{F}_N(\vec{r}_1, \vec{r}_2 \ldots \vec{r}_N)    (1)
where \vec{r}_i is the position vector of the i'th particle, with mass m_i, and \vec{F}_i is the force acting on it because of interaction with all other particles, and any external influence. A solution of these equations would allow us to know the exact position and velocity of each particle at any future time. That information allows us to know the microscopic state (which we will henceforth call microstate) of the gas at every instant of time. Solving these coupled equations analytically is generally not possible. One can, however, solve such equations numerically, using computers. But for a realistic situation N is of the order of 10^{23}, and solving such a large number of equations numerically is beyond the capacity of any existing computer.
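For a handful of particles, though, the direct numerical approach works fine. The following minimal sketch (with entirely hypothetical parameters: two particles coupled by a unit-strength spring in one dimension) integrates m_i \ddot{x}_i = F_i with the velocity-Verlet scheme; near-conservation of the total energy serves as a sanity check.

```python
# Minimal sketch: integrating Newton's equations for a toy two-particle
# system (illustrative only -- for a real gas, N ~ 10^23 makes this hopeless).
m = [1.0, 2.0]            # masses m_1, m_2
x = [-0.5, 0.5]           # initial positions
v = [0.0, 0.0]            # initial velocities
k_spring, dt, steps = 1.0, 1.0e-3, 10000

def forces(x):
    # F_1 = -k (x_1 - x_2); F_2 = -F_1 by Newton's third law
    f01 = -k_spring * (x[0] - x[1])
    return [f01, -f01]

def energy(x, v):
    # kinetic plus spring potential energy
    return 0.5 * (m[0] * v[0]**2 + m[1] * v[1]**2) \
        + 0.5 * k_spring * (x[0] - x[1])**2

E0 = energy(x, v)
f = forces(x)
for _ in range(steps):          # velocity-Verlet integration
    v = [v[i] + 0.5 * dt * f[i] / m[i] for i in range(2)]
    x = [x[i] + dt * v[i] for i in range(2)]
    f = forces(x)
    v = [v[i] + 0.5 * dt * f[i] / m[i] for i in range(2)]
E1 = energy(x, v)
```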
One should realize that for describing the macroscopic state of a gas, characterized by
pressure, volume, temperature (which we will henceforth call macrostate), we do not need
information on every microscopic detail which these coupled equations could provide. A
huge number of microstates may correspond to the same, single macrostate. We would, for instance, be interested in knowing the pressure of the gas, and not bother about what a particular atom of the gas is doing at every instant. In other words, we only need some average macroscopic quantities, and not every microscopic detail. So, solving these coupled differential equations would anyway be an overkill, if at all we were able to solve them. We need a conceptually different approach to this problem.
Phase space
Let us first be more specific about the concept of microstates. For a classical system it is sufficient to know, at a time t, all generalized coordinates q_i(t) and momenta p_i(t) to uniquely specify the state of motion of the system. Thus for a mechanical system we can interpret the set \{q_i, p_i,\ i = 1, 2, \ldots N\} as the microstate of this system. For a single particle in one dimension, there is only one position variable x and one momentum variable p_x. If we plot x on the x-axis and p_x on the y-axis, a point on the graph will represent one state of the particle. As the particle moves in time, the point will follow a trajectory. We will call the space described by x and p_x the phase space, the point representing a particular value of x and p_x a phase point, and the trajectory followed by the point the phase trajectory. In this particular case, the phase space is 2-dimensional.
If a single particle moves in 3 dimensions, we would need 6 coordinate axes for x, y, z, p_x, p_y, p_z. For N particles in 3 dimensions, the phase space will be 6N-dimensional. So, the set \{q_i, p_i\} can now be understood as a point in a 6N-dimensional phase space. A point in this phase space describes particular values of the positions and momenta of all N particles. Hence, a definite point in this phase space exactly corresponds to one microscopic state of motion of the whole system.

[Figure: Phase trajectories. For a particle in a box of size L, the trajectory is a rectangle between x = 0 and x = L, with momentum p = \pm\sqrt{2mE}. For a harmonic oscillator, the trajectory is an ellipse with momentum intercepts \pm\sqrt{2mE}.]
The time evolution of the system is governed by Hamilton's equations of motion,

\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i},

where the Hamiltonian H(q(t), p(t)) corresponds to the (possibly time-dependent) total energy of the system. It is a function of the phase-space point (q, p), and of time. In a closed global system, in which the Hamiltonian does not depend explicitly on time, the total energy

E = H(p(t), q(t))

is a conserved quantity.
Statistical Ensembles
All possible microstates of the system thus correspond to a huge collection of points in the phase space. This collection of points in the phase space, of all possible microstates of the system, is called an ensemble. Or one can imagine each point in the phase space representing an imaginary copy of the system, each in a different microstate. This collection of imaginary copies of the system, each in a different microstate, is called an ensemble. When the gas evolves in time, in the phase space it basically goes from one phase point to the next, in a specific sequence. Any macroscopic quantity of the gas which we measure is not measured instantaneously. Rather it is measured over a finite time, which is very long compared to the time-scale of motion of the particles of the gas. So, the measured quantity is actually a time-averaged quantity.
The basic idea of statistical mechanics is the following. In doing a time-average of a quantity, one is basically looking at the different values the quantity takes as the system goes from one microstate to the other during its time evolution. And then one takes an average of all the values of the quantity. As far as taking the average is concerned, it is not important what the sequence in going from one microstate to the other is. One can just take the phase points of the ensemble, over which the system goes, and take the average. In other words, the time average can be replaced by the average over the whole ensemble.
To enable one to replace the time-average by the ensemble-average, some conditions have to be satisfied. First of all, the average over the ensemble implicitly assumes that the system visits all phase points during its time evolution. Not only that, it is also assumed that all microstates are equally likely to be visited. So it had better happen so in reality. If the system spends more time in certain microstates, and less in some others, our assumption will break down. This assumption constitutes what is called the ergodic hypothesis. The ergodic hypothesis says that, over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the phase-volume of this region, i.e., all accessible microstates are equiprobable over a long period of time. The ergodic hypothesis is a pillar of statistical mechanics; however, it cannot be proven in general. It is assumed to be true, and finds justification in the fact that statistical mechanics turns out to be a successful theory, in agreement with experiments.
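As an illustration, consider a toy "system" that hops at random among M equally probable microstates (a hypothetical dynamics, chosen only to make the point): the long-time average of an observable then agrees with its average over the ensemble of all microstates.

```python
import random

# Toy illustration of time average vs ensemble average.
# The dynamics here (uniform random hopping) is an assumption made so that
# all microstates are visited equally often, as the ergodic hypothesis asks.
random.seed(0)
M = 10                                   # number of microstates
observable = lambda s: float(s)          # any function of the microstate

# ensemble average: uniform average over all microstates
ensemble_avg = sum(observable(s) for s in range(M)) / M   # = 4.5 here

# time average: follow the hopping "trajectory" for a long time
T = 200_000
total = 0.0
for _ in range(T):
    state = random.randrange(M)          # visit a microstate
    total += observable(state)
time_avg = total / T
```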
We will first consider an isolated system, typically a gas enclosed in a box, which is thermally insulated. So, any time evolution of the system will be subject to the constraint that the total energy remains constant. Left for a long time, it is believed to be in equilibrium. We further assume that given an isolated system in equilibrium, it is found with equal probability in each of its accessible microstates. This is the postulate of equal a priori probability. This postulate is at the very core of statistical mechanics. Now each macrostate comprises numerous microstates. For example, all the gas confined to only one half of the box is a macrostate. There is a huge number of ways this can happen, by various arrangements of particles and their momenta. The gas uniformly occupying the whole volume of the box is another macrostate. And again, there is a huge number of microstates associated with this macrostate. Now each microstate is equally probable, but we never actually see a gas occupying only one half of its container. Why does that happen? It happens because the number of microstates associated with the gas occupying the whole volume is overwhelmingly large, compared to the number of microstates associated with the gas occupying only one half of the box.
One can get an idea of the numbers involved in such situations by a simple example. Let there be an array of 4 noninteracting magnetic moments, each of which can only take values +1 or -1, in some suitable units. Now, assume that each magnetic moment is free to flip up and down, i.e., between +1 and -1. We can use the total magnetic moment of the array to describe a macrostate. There is only one microstate associated with the macrostate with total magnetic moment 4, which is (+1+1+1+1). If one considers the macrostate with total magnetic moment zero, the microstates associated with it are (+1+1-1-1), (-1-1+1+1), (+1-1+1-1), (+1-1-1+1), (-1+1+1-1), (-1+1-1+1). So, there are 6 microstates associated with total magnetic moment 0, while only 1 with magnetic moment 4. One can easily see that if there were 10 magnetic moments, there would still be only one microstate associated with total magnetic moment 10. However, the number of microstates associated with total magnetic moment 0 in that case will be overwhelmingly large. So, in a system of 10 magnetic moments which are freely flipping, one will almost never see magnetic moment 10, and the magnetic moment will appear to be zero or very small all the time.
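The counting in this example is easy to check by brute-force enumeration; the sketch below tallies the number of microstates for every value of the total magnetic moment.

```python
from itertools import product
from collections import Counter

# Count how many microstates (sign configurations) belong to each
# macrostate (value of the total magnetic moment) for n moments of +/-1.
def multiplicities(n):
    return Counter(sum(config) for config in product((+1, -1), repeat=n))

m4 = multiplicities(4)    # e.g. m4[0] is the number of zero-moment states
m10 = multiplicities(10)
```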
A close analogy can be drawn from probability theory. Suppose there are different people doing 100 coin-tosses each. Almost all of them will get a nearly equal number of heads and tails. It is next to impossible for anyone to obtain 100 heads or 100 tails. The probability of 100 heads is (1/2)^{100}, which is infinitesimally small. The probability of any single configuration is also exactly the same. For example, the probability of getting alternate heads and tails for all 100 tosses is also (1/2)^{100}. However, the number of configurations in which the numbers of heads and tails are 50-50 is overwhelmingly large.
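These numbers are easy to verify with exact binomial counting; the snippet below computes the probability of any one fixed sequence of 100 tosses, and the number of 50-50 configurations.

```python
from math import comb

# Probability of one specific sequence of 100 tosses (all heads, or any
# other fixed order) vs the multiplicity of the balanced 50-50 macrostate.
p_single_sequence = 0.5 ** 100      # probability of any one fixed sequence
n_balanced = comb(100, 50)          # number of sequences with 50 heads
p_balanced = n_balanced * p_single_sequence   # probability of the macrostate
```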
In the light of the above argument, we conclude that the equilibrium state is the one in which the number of microstates is maximum. In reality, the system will go over all microstates, as the ergodic hypothesis states, but it will mostly be found in certain macrostates.
Microcanonical ensemble

An isolated system with a fixed energy E is restricted to phase points lying on the constant-energy surface H(p, q) = E. The ensemble of microstates with the energy thus fixed is called the microcanonical ensemble. What we need is the volume of the phase space enclosed by a very thin constant-energy shell. Since the p's and q's are continuous variables, there will be infinitely many points inside the shell. From the quantum mechanical uncertainty principle, we know that the product of the uncertainties of q and p cannot be more precise than \hbar. So, the smallest cell in a 2-dimensional phase space will have size \hbar. This is the phase volume of one microstate for a single particle. For a 6N-dimensional phase space, the smallest cell has a phase volume \hbar^{3N}.
Now the total number of microstates for a gas of N particles with energy E, in a volume V, can be written as

\Omega(E, N, V) = \frac{1}{\hbar^{3N}} \int_{H=E} \prod_i dp_i\, dq_i,    (1)

where the integral is over the thin shell of phase space with energy E.
Consider two isolated systems, with numbers of particles N_1 and N_2, volumes V_1 and V_2, and energies E_1 and E_2, respectively.

[Figure: two boxes in contact, labelled E_1, N_1, V_1 and E_2, N_2, V_2.]

Let the numbers of microstates of the two systems be \Omega_1(E_1, N_1, V_1) and \Omega_2(E_2, N_2, V_2). The total number of microstates of the combined system is

\Omega = \Omega_1(E_1, N_1, V_1)\, \Omega_2(E_2, N_2, V_2).    (2)

Let the two systems now come into thermal contact with each other. They will exchange energy so that E_1 and E_2 can vary, but since the total energy is fixed, E_1 + E_2 = E_T. Equilibrium will be attained when \Omega is maximum with respect to E_1, i.e., when \partial\Omega/\partial E_1 is equal to zero:

\frac{\partial \Omega}{\partial E_1} = \Omega_2 \frac{\partial \Omega_1}{\partial E_1} + \Omega_1 \frac{\partial \Omega_2}{\partial E_1} = 0.    (3)
This gives

\Omega_2 \frac{\partial \Omega_1}{\partial E_1} = -\Omega_1 \frac{\partial \Omega_2}{\partial E_1}.

Since E_2 = E_T - E_1, a derivative of \Omega_2 with respect to E_1 equals minus its derivative with respect to E_2, so that

\Omega_2 \frac{\partial \Omega_1}{\partial E_1} = \Omega_1 \frac{\partial \Omega_2}{\partial E_2}

\frac{1}{\Omega_1}\frac{\partial \Omega_1}{\partial E_1} = \frac{1}{\Omega_2}\frac{\partial \Omega_2}{\partial E_2}

\frac{\partial \log(\Omega_1)}{\partial E_1} = \frac{\partial \log(\Omega_2)}{\partial E_2}.    (4)
From thermodynamics we know that in such a situation, equilibrium is attained when the temperature of the two systems becomes equal, T_1 = T_2. In thermodynamics, temperature is defined as

\frac{1}{T} = \frac{\partial S}{\partial E},

so the equilibrium condition T_1 = T_2 reads

\frac{\partial S_1}{\partial E_1} = \frac{\partial S_2}{\partial E_2}.    (5)

Comparing (4) and (5), one concludes that the entropy must be related to the number of microstates by

S \propto \log(\Omega).    (6)

Since the relation between thermodynamics and mechanics should be fundamental, Boltzmann postulated that the proportionality constant in the above equation should be a universal constant, independent of any particular system. This constant is Boltzmann's constant k, often also written as k_B:

S = k \log(\Omega).    (7)
Probability of macrostates

From the preceding analysis, it is clear that the probability of a macrostate depends on how many microstates are there in it. Suppose there is a macrostate of interest; it could denote a particular momentum distribution of the particles, or it could denote a particular position distribution of the particles. Let the number of microstates associated with it be denoted by \Omega'(E, V, N). The probability of this macrostate is then given by

P = \frac{\Omega'(E, V, N)}{\Omega(E, V, N)},    (8)

where \Omega(E, V, N) is the total number of microstates.
In terms of phase space, the number of microstates can be written as

\Omega' = \frac{1}{\hbar^{3N}} \int'_{H=E} \prod_i dp_i\, dq_i, \qquad
\Omega(E, N, V) = \frac{1}{\hbar^{3N}} \int_{H=E} \prod_i dp_i\, dq_i,    (9)
where the primed integral is restricted to the region of phase space corresponding to the macrostate of interest. It is, of course, also a function of E, V and N. Consider an ideal gas of N particles, enclosed in a volume V, and with a total energy E. We want to know: what is the probability of the whole gas spontaneously occupying only one particular half of the box? We know that this practically never happens, so its probability should be negligible. Let us calculate it. \Omega' now denotes the number of microstates of the gas with all particles in one particular half of the box. We know that there is a huge number of ways this is possible, because even though the particles are in one half of the box, they have many microstates corresponding to them shuffling their positions and velocities. In eqn (9), the integral over space can be carried out easily.
The probability of the gas occupying one half of the volume of the box is then given by

P = \frac{\frac{1}{\hbar^{3N}} (V/2)^N \int_{H=E} \prod_i dp_i}{\frac{1}{\hbar^{3N}} V^N \int_{H=E} \prod_i dp_i} = \frac{1}{2^N}.    (10)
We see that the probability of all the particles spontaneously occupying one particular half of the box is 1/2^N. Even for just 100 particles this number is negligibly small. For N \approx 10^{23}, the probability will be so tiny that one can wait till the end of the world, and still such a state will never come up. On the other hand, if the gas has just, say, 5 particles, the probability of such a state occurring is 1/2^5 = 0.03125. In this case, one can actually observe all particles by chance coming to one half of the box, with some patience.
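A quick Monte Carlo check of this result for N = 5, with particle positions drawn independently and uniformly over the box:

```python
import random

# Monte Carlo check of P = 1/2^N for all N particles found in one given
# half of the box; positions are independent and uniform over [0, 1).
random.seed(1)
N, trials = 5, 200_000
hits = sum(all(random.random() < 0.5 for _ in range(N))
           for _ in range(trials))
p_sim = hits / trials
p_exact = 0.5 ** N          # = 0.03125 for N = 5
```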
So now we understand why equilibrium is what it looks like. It is just a macrostate which
has the maximum number of microstates associated with it.
Now that we have the expression for the entropy in microcanonical ensemble, let us use it
to calculate the entropy of an ideal gas of classical particles. Let there be N particles of
mass m each, enclosed in a box of volume V . The Hamiltonian is given by
H = \sum_i \frac{p_i^2}{2m},    (1)
where the sum over i runs over all particles and the three components of momentum. Let the total energy of the gas be E. The total number of microstates can then be written as
\Omega(E, N, V) = \frac{1}{\hbar^{3N}} \int_V \prod_i dx_i\, dy_i\, dz_i \int_{H=E} \prod_i dp_i.    (2)

The integral over space can be carried out straightaway, giving the volume:

\Omega(E, N, V) = \frac{V^N}{\hbar^{3N}} \int_{H=E} \prod_i dp_i.    (3)
The constant-energy surface is given by

\sum_i p_i^2 = 2mE.    (4)

If the summation over i were not there, this would be the equation of a sphere with radius \sqrt{2mE}. With the summation, it is the equation of a 3N-dimensional hypersphere of radius \sqrt{2mE}. We assume that the energy is not exactly constant, but can vary by an amount \Delta E, chosen to satisfy

\frac{E}{N} \ll \Delta E \ll E.    (5)

Let R = \sqrt{2mE}, and hence dR = \sqrt{\frac{m}{2E}}\, dE. Clearly our thermodynamic results should not depend on this arbitrary \Delta E. The volume in momentum space accessible to the system is equal to (surface area of a 3N-dimensional hypersphere of radius \sqrt{2mE}) \times \sqrt{\frac{m}{2E}}\, \Delta E. The volume and surface area of an n-dimensional hypersphere of radius R are given by

V_n(R) = \frac{\pi^{n/2}}{(n/2)!} R^n, \qquad S_n(R) = \frac{2\pi^{n/2}}{\Gamma(n/2)} R^{n-1},    (6)

so that

S_{3N}(\sqrt{2mE}) = \frac{2\pi^{3N/2}}{\Gamma(3N/2)} (2mE)^{(3N-1)/2}.    (7)

The total number of microstates is then

\Omega(E, N, V) = \frac{V^N}{\hbar^{3N}} \frac{2\pi^{3N/2}}{\Gamma(3N/2)} (2mE)^{(3N-1)/2} \sqrt{\frac{m}{2E}}\, \Delta E = \frac{V^N}{\hbar^{3N}} \frac{(2\pi mE)^{3N/2}}{(3N/2 - 1)!} \frac{\Delta E}{E}.    (8)
The entropy of the gas can now be written as

S(E, V, N) = k \log\left[\frac{V^N (2\pi mE)^{3N/2}}{\hbar^{3N} (3N/2 - 1)!} \frac{\Delta E}{E}\right]
= Nk \log\left[\frac{V}{\hbar^3} (2\pi mE)^{3/2}\right] - k \log\{(3N/2 - 1)!\} + k \log\left(\frac{\Delta E}{E}\right).    (9)
For a macroscopic gas, N is very large, of the order of 10^{23}, and it makes sense to use the Stirling formula \log(n!) \approx n \log(n) - n. So we get

\log\{(3N/2 - 1)!\} \approx (3N/2 - 1) \log(3N/2 - 1) - 3N/2 + 1
\approx (3N/2 - 1) \log(3N/2) - 3N/2 + 1.    (10)
The entropy of the gas now takes the form

S(E, V, N) = Nk \log\left[\frac{V}{\hbar^3} (2\pi mE)^{3/2}\right] - k\left(\frac{3N}{2} - 1\right)\log\left(\frac{3N}{2}\right) + \frac{3Nk}{2} - k + k \log\left(\frac{\Delta E}{E}\right)

= Nk \log\left[\frac{V (2\pi mE)^{3/2}}{\hbar^3 (3N/2)^{3/2}}\right] + \frac{3Nk}{2} + \left\{ k \log\left(\frac{3N}{2}\right) - k + k \log\left(\frac{\Delta E}{E}\right) \right\}.    (11)

In the equation above, the terms in the curly brackets are all much smaller than the other terms, which are at least of order N, and hence can be neglected to yield

S = Nk \log\left[ V \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{3Nk}{2}.    (12)
Mixing of two different gases is an irreversible process. It should thus lead to an increase in entropy. Let us check that with our expression for the entropy of an ideal gas. Let there be a box of volume V, which is partitioned as shown in the figure. Let there be two different, but similar, gases in the two partitions, at the same temperature and pressure. We remove the partition, and allow the gases to mix.
The initial entropy of the combined system, before mixing, is just the sum of the entropies of the two gases:

S_I = N_1 k \log\left[ V_1 \left(\frac{4\pi m_1 E_1}{3N_1\hbar^2}\right)^{3/2} \right] + \frac{3N_1 k}{2} + N_2 k \log\left[ V_2 \left(\frac{4\pi m_2 E_2}{3N_2\hbar^2}\right)^{3/2} \right] + \frac{3N_2 k}{2}.    (13)
After the partition is removed, the energy, pressure and temperature of the gases will not change, as they were already at the same temperature and pressure. The only difference is that now the volume V is available to both the gases. The final entropy, after mixing, looks like

S_F = N_1 k \log\left[ V \left(\frac{4\pi m_1 E_1}{3N_1\hbar^2}\right)^{3/2} \right] + \frac{3N_1 k}{2} + N_2 k \log\left[ V \left(\frac{4\pi m_2 E_2}{3N_2\hbar^2}\right)^{3/2} \right] + \frac{3N_2 k}{2}.    (14)

The change in entropy due to mixing is therefore

\Delta S = S_F - S_I = N_1 k \log\frac{V}{V_1} + N_2 k \log\frac{V}{V_2}.    (15)
Clearly, the entropy increases after mixing of the two gases, which is what one would expect. \Delta S is called the entropy of mixing. However, there is a problem. If the two gases were identical, removing the partition should not have any effect. But our expression for entropy given by (11) yields the same change in entropy (15) even if the two gases are identical! One can verify that for the same gas in the two partitions, the expression (13) reduces to the expression (11) for the entropy of a gas of N particles in a volume V. In this case E_1/N_1 = E_2/N_2 = E/N. The final entropy in the case of identical gases is given by
S_F = N_1 k \log\left[ V \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{3N_1 k}{2} + N_2 k \log\left[ V \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{3N_2 k}{2}

= (N_1 + N_2)\, k \log\left[ V \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{3(N_1 + N_2)\, k}{2}

= N k \log\left[ V \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{3Nk}{2}.    (16)
For the case of the same gas in the two parts, removing the partition is a reversible process, because one can reinsert the partition later, and one will not be able to make out if the partition was ever removed. Hence, the change in entropy on removing the partition should be zero. The fact that expression (11) for entropy yields a non-zero entropy of mixing for identical gases is called Gibbs' paradox.

Gibbs empirically realized that if, while counting the number of microstates, one divides the number of microstates by N!, the paradox disappears. He concluded that our counting of microstates must thus be wrong. The way we have counted the microstates, interchanging two particles gives one a new microstate. However, from quantum mechanics we know that elementary particles and atoms should be treated as identical particles. So, the way we have counted the microstates, we have done an overcounting by assuming the particles to be distinguishable. So, we must divide by N! to correct for it. Equation (8) will thus take the following form:
\Omega(E, N, V) = \frac{1}{N!} \frac{V^N (2\pi mE)^{3N/2}}{\hbar^{3N} (3N/2 - 1)!} \frac{\Delta E}{E}.    (17)
Using Stirling's approximation for N!, we have to subtract k(N \log(N) - N) from (12) to obtain the correct expression for entropy:

S(E, N, V) = Nk \log\left[ \frac{V}{N} \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{5Nk}{2}.    (18)
This is called the Sackur-Tetrode equation, and describes the entropy of a classical (monatomic) ideal gas. It is named after Hugo Martin Tetrode (1895-1931) and Otto Sackur (1880-1914), who developed it independently as a solution of Boltzmann's gas statistics and entropy equations, at about the same time in 1912.
The entropy of the two gases before mixing is now given by

S_I = N_1 k \log\left[ \frac{V_1}{N_1} \left(\frac{4\pi m_1 E_1}{3N_1\hbar^2}\right)^{3/2} \right] + \frac{5N_1 k}{2} + N_2 k \log\left[ \frac{V_2}{N_2} \left(\frac{4\pi m_2 E_2}{3N_2\hbar^2}\right)^{3/2} \right] + \frac{5N_2 k}{2}.    (19)
If the two gases are the same, V_1/N_1 = V_2/N_2 = V/N, and the energy per particle is also the same if they are at the same temperature and pressure, E_1/N_1 = E_2/N_2 = E/N. So, the above equation reduces to
S_I = N_1 k \log\left[ \frac{V}{N} \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{5N_1 k}{2} + N_2 k \log\left[ \frac{V}{N} \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{5N_2 k}{2}

= (N_1 + N_2)\, k \log\left[ \frac{V}{N} \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{5(N_1 + N_2)\, k}{2}

= N k \log\left[ \frac{V}{N} \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] + \frac{5Nk}{2},    (20)

which is just the entropy of a gas of N particles in a volume V, that is, of the gas after mixing. So, for the case of the same gas, mixing has no effect on the entropy.
One can verify that for the case of different gases, the Sackur-Tetrode equation leads to the same expression for the entropy of mixing as given by (15).
Equation of state

The pressure of the gas can be obtained from the entropy by using the thermodynamic relation

\frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{E,N}.    (21)

We plug the expression (18) for the entropy into this equation to obtain the pressure of our classical ideal gas:

P = NkT \frac{\partial}{\partial V} \log\left[ \frac{V}{N} \left(\frac{4\pi mE}{3N\hbar^2}\right)^{3/2} \right] = NkT \cdot \frac{1}{V},

which is the familiar ideal gas equation of state,

P V = NkT.    (22)
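The identity (21) can be checked numerically: differentiating the Sackur-Tetrode entropy with respect to V by a central difference should reproduce P/T = Nk/V. The sketch below uses dimensionless units (k = \hbar = m = 1) and arbitrary illustrative parameters.

```python
from math import log, pi

# dimensionless units: k = hbar = m = 1; parameter values are illustrative
k = 1.0
N, E, m, hbar = 100.0, 150.0, 1.0, 1.0

def S(V):
    # Sackur-Tetrode entropy, eq. (18)
    return N * k * (log((V / N) * (4 * pi * m * E
                                   / (3 * N * hbar**2))**1.5) + 2.5)

V0, h = 50.0, 1e-6
# central-difference estimate of (dS/dV)_{E,N}, i.e. P/T from eq. (21)
P_over_T = (S(V0 + h) - S(V0 - h)) / (2 * h)
ideal = N * k / V0               # expected Nk/V, equivalent to PV = NkT
```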
Canonical Ensemble

Consider a system in thermal contact with a much larger heat-bath, such that

E + E_B = E_T,    (1)

where E is the energy of our system of interest, E_B is the energy of the heat-bath, and their sum is E_T. E and E_B are supposed to be variable, but E_T is fixed.
The total number of microstates of the combined system is

\Omega_{total} = \int \Omega_S(E)\, \Omega_B(E_T - E)\, dE,    (2)

where the integral is a sum over the possibilities of various amounts of energy exchange between the system and the heat-bath. For example, the term E = 0 in the integral would correspond to a situation where the system transfers all its energy to the heat-bath. The total system is closed, and can be treated in the microcanonical ensemble.
Now, the number of microstates corresponding to the system having energy E is given by

\Omega(E) = \Omega_S(E)\, \Omega_B(E_B) = \Omega_S(E)\, \Omega_B(E_T - E).    (3)
One should convince oneself that if there are two systems with, say, 3 microstates each, the combined system will have 3 \times 3 = 9 microstates. Microstates of the combined system should have been labelled by E and E_B, but since there is only one independent variable, it suffices to label them by E. We now write \Omega_B(E_T - E) in terms of the entropy of the
heat-bath, using the Boltzmann definition of entropy S = k \log \Omega:

\Omega(E) = \Omega_S(E)\, e^{\log(\Omega_B(E_T - E))} = \Omega_S(E)\, e^{S_B(E_T - E)/k},    (4)
where S_B is the entropy of the heat-bath. Since the heat-bath is much much larger than our system of interest, it is obvious that E \ll E_B, E_T. The entropy of the heat-bath can now be expanded in a Taylor series in E:

S_B(E_T - E) = S_B(E_T) + E \left.\frac{\partial S_B}{\partial E}\right|_{E=0} + \frac{E^2}{2!} \left.\frac{\partial^2 S_B}{\partial E^2}\right|_{E=0} + \ldots    (5)

We ignore the E^2 and higher order terms in the series, assuming E to be small, and plug this expression into (4):

\Omega(E) \approx \Omega_S(E) \exp\left[ \frac{1}{k} S_B(E_T) + \frac{E}{k} \frac{\partial S_B}{\partial E} \right].    (6)

But

\frac{\partial S_B}{\partial E} = \frac{\partial E_B}{\partial E} \frac{\partial S_B}{\partial E_B} = -\frac{\partial S_B}{\partial E_B} = -\frac{1}{T},    (7)
where T is the temperature of the heat-bath. Strictly speaking, this should be the temperature of the heat-bath when the system has transferred all its energy to the heat-bath, because \partial S_B/\partial E in the above equation is actually \left.\partial S_B/\partial E\right|_{E=0}. However, since the heat-bath is assumed to be much larger than the system, its temperature will not change noticeably when it exchanges energy with the system.
The number of microstates of the combined system, corresponding to the system having energy E, can now be written as

\Omega(E) = \Omega_S(E) \exp\left[ \frac{1}{k} S_B(E_T) - \frac{E}{kT} \right] = \Omega_S(E)\, e^{S_B(E_T)/k}\, e^{-E/kT}.    (8)
Let us reflect on this expression for a moment. The term e^{S_B(E_T)/k} is constant, as far as E is concerned. From the microcanonical ensemble we know that all microstates (with the same energy) are equally probable. This holds true here too, but for the microstates of the system plus heat-bath. If one wants to concentrate only on the system, as we do because it is the system we are studying, things are slightly different. Corresponding to a microstate of the system with energy E, the heat-bath has e^{S_B(E_T)/k} e^{-E/kT} microstates. So, two microstates of the system with different energies will have different numbers of microstates of the heat-bath associated with them. From the system's point of view, it will appear as if microstates of the system with different energies have different probabilities of occurrence.
The total number of microstates of the combined system can be written as

\Omega_{total} = \int \Omega_S(E)\, e^{S_B(E_T)/k}\, e^{-E/kT}\, dE.    (9)
So, the probability of the system having energy E should be equal to the number of
microstates corresponding to the system having energy E , divided by the total number of
microstates
P (E) = R
(E)eE/kT
S (E)eE/kT
S (E)eSB (ET )/k eE/kT
R S
=
=
, (10)
Z
S (E)eSB (ET )/k eE/kT dE
S (E)eE/kT dE
2
where Z = \int \Omega_S(E)\, e^{-E/kT}\, dE is called the partition function. Earlier we had defined the number of microstates of a system in terms of the accessible phase-space volume,

\Omega(E) = \frac{1}{\omega} \int_{H=E} dp\, dq,    (11)

where the integral of dp\, dq represents an integral over all positions and momenta of all particles, over the constant energy surface with energy E, and \omega is the smallest phase-volume of one microstate. For example, for N particles in 3 dimensions, \omega = \hbar^{3N}.
system having energy E can now be written as
1
dpdqeE(p,q)/kT
RE
1
dpdqeE(p,q)/kT
P (E) = =
(12)
Notice that the integral in the numerator is over a constant energy surface with xed energy
E , while the that in the denominator is over all phase space.
We can thus define a density function

\rho(p, q) = \frac{e^{-E(p,q)/kT}}{\int dp\, dq\, e^{-E(p,q)/kT}} = \frac{e^{-E(p,q)/kT}}{Z},    (13)

such that \rho(p, q)\, dp\, dq gives the probability of the system having momentum between p and p + dp and position between q and q + dq. Thus \rho(p, q) describes the normalized density of microstates (of the system plus heat-bath) in phase space. The partition function is now written as

Z = \int e^{-E(p,q)/kT}\, dp\, dq.    (14)
The partition function Z might not look very important, as one might think that the information about the microstates has already been summed over. However, the partition function turns out to be the single most useful entity in statistical mechanics, and most measurable quantities can be expressed in terms of Z.
The thermal average of any quantity A can now be written as

\langle A \rangle = \int A(p, q)\, \rho(p, q)\, dp\, dq = \frac{1}{Z} \int A\, e^{-E/kT}\, dp\, dq.    (15)

The stage is now set for us to study any system using the canonical ensemble.
The average energy of the system in the canonical ensemble is given by

\langle E \rangle = \int E(p, q)\, \rho(p, q)\, dp\, dq = \frac{1}{Z} \int E\, e^{-E/kT}\, dp\, dq    (1)

or, defining \beta \equiv 1/kT,

\langle E \rangle = \frac{1}{Z} \int E\, e^{-\beta E}\, dp\, dq.    (2)
The derivative of e^{-\beta E} with respect to \beta will pull down a factor -E. Using this, we can rewrite the above equation as

\langle E \rangle = -\frac{1}{Z} \int \frac{\partial}{\partial \beta} e^{-\beta E}\, dp\, dq
= -\frac{1}{Z} \frac{\partial}{\partial \beta} \int e^{-\beta E}\, dp\, dq
= -\frac{1}{Z} \frac{\partial Z}{\partial \beta}.    (3)

Thus we see that the average energy of the system is very simply related to the partition function:

\langle E \rangle = -\frac{\partial \log Z}{\partial \beta}.    (4)
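Relation (4) is easy to verify on a system with a discrete spectrum, where the integral becomes a sum. The sketch below uses a hypothetical two-level system with energies 0 and \epsilon, and compares the direct thermal average with a numerical derivative of \log Z.

```python
from math import exp, log

eps, beta = 1.0, 0.7     # illustrative two-level system: energies 0 and eps

def logZ(b):
    # partition function as a sum over the two microstates
    return log(1.0 + exp(-b * eps))

# direct thermal average <E> = sum_i E_i e^{-beta E_i} / Z
Z = 1.0 + exp(-beta * eps)
E_avg = eps * exp(-beta * eps) / Z

# <E> from -d(log Z)/d(beta), estimated by central difference
h = 1e-6
E_from_Z = -(logZ(beta + h) - logZ(beta - h)) / (2 * h)
```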
Energy fluctuations

Since the energy of a system is not constant in the canonical ensemble, apart from calculating its average, we will also be interested in knowing how much the energy deviates from its average value. A good measure of it is the energy fluctuation, defined as

\Delta E \equiv \sqrt{\langle (E - \langle E \rangle)^2 \rangle},    (5)

where the angular brackets denote the thermal average or ensemble average. This expression can be simplified as follows:

\Delta E = \sqrt{\langle (E^2 - 2E\langle E \rangle + \langle E \rangle^2) \rangle}
= \sqrt{\langle E^2 \rangle - 2\langle E \rangle \langle E \rangle + \langle E \rangle^2}
= \sqrt{\langle E^2 \rangle - \langle E \rangle^2}.    (6)
The square of the energy fluctuation can be written, for the canonical ensemble, as

(\Delta E)^2 = \langle E^2 \rangle - \langle E \rangle^2
= \frac{1}{Z} \int E^2 e^{-\beta E}\, dp\, dq - \left(\frac{\partial \log Z}{\partial \beta}\right)^2
= \frac{1}{Z} \frac{\partial^2}{\partial \beta^2} \int e^{-\beta E}\, dp\, dq - \left(\frac{\partial \log Z}{\partial \beta}\right)^2
= \frac{1}{Z} \frac{\partial^2 Z}{\partial \beta^2} - \left(\frac{1}{Z} \frac{\partial Z}{\partial \beta}\right)^2
= \frac{\partial^2 \log Z}{\partial \beta^2}.    (7)
The last step can be verified by working backwards and obtaining the last but one step. The energy fluctuation also depends quite simply on the partition function. The above result can be manipulated to obtain some useful relations:

(\Delta E)^2 = \frac{\partial^2 \log Z}{\partial \beta^2}
= -\frac{\partial \langle E \rangle}{\partial \beta}
= -\frac{\partial T}{\partial \beta} \frac{\partial \langle E \rangle}{\partial T}
= kT^2 \frac{\partial \langle E \rangle}{\partial T},    (8)

where we have used \beta = 1/kT, so that \partial T/\partial \beta = -kT^2. But \frac{\partial \langle E \rangle}{\partial T} is the specific heat of the system. The above equation thus becomes

(\Delta E)^2 = kT^2 C_v,    (9)

where C_v is the specific heat of the system at constant volume. Thus, we see that energy fluctuations are intimately related to the specific heat of the system.
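Relation (7) can be checked the same way on a hypothetical two-level system (energies 0 and \epsilon): the directly computed variance \langle E^2 \rangle - \langle E \rangle^2 must equal the second derivative of \log Z with respect to \beta.

```python
from math import exp, log

eps, beta = 1.0, 0.7     # illustrative two-level system

Z = 1.0 + exp(-beta * eps)
E1 = eps * exp(-beta * eps) / Z          # <E>
E2 = eps**2 * exp(-beta * eps) / Z       # <E^2>
var_direct = E2 - E1**2                  # (Delta E)^2 computed directly

def logZ(b):
    return log(1.0 + exp(-b * eps))

# second central difference of log Z with respect to beta, eq. (7)
h = 1e-4
var_from_Z = (logZ(beta + h) - 2 * logZ(beta) + logZ(beta - h)) / h**2
```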
The microcanonical and canonical ensembles describe physically different scenarios: one with the energy fixed, and the other with energy constantly exchanged with a heat-bath. One might wonder how different the results obtained from the two would be. Also, one would like to understand how crucial the choice of ensemble is for studying the properties of a system, because for a given system it may not always be possible to estimate whether considering the system to be isolated is a good approximation or not.

We know that the average energy of a gas is proportional to the number of particles, \langle E \rangle \propto N. So should be the specific heat, because the specific heat is just \frac{\partial \langle E \rangle}{\partial T}. So,

C_v \propto N.    (10)
The magnitude of the fluctuation is correctly estimated by the relative quantity \Delta E/\langle E \rangle:

\frac{\Delta E}{\langle E \rangle} = \frac{\sqrt{kT^2 C_v}}{\langle E \rangle} \sim \frac{\sqrt{N}}{N} \sim \frac{1}{\sqrt{N}}.    (11)
It is clear that in the thermodynamic limit the fluctuation would become zero:

\lim_{N \to \infty} \frac{\Delta E}{\langle E \rangle} \sim \lim_{N \to \infty} \frac{\sqrt{N}}{N} = 0.    (12)
So, in the limit of the number of particles being very large, the fluctuations are negligible, and the energy remains practically constant. If the energy is almost constant, one can also safely use the microcanonical ensemble to describe the system. So we conclude that in the thermodynamic limit (N \to \infty), the canonical and microcanonical ensembles should give similar results.
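For the classical ideal gas, where \langle E \rangle = 3NkT/2 and C_v = 3Nk/2, the ratio in (11) can be written in closed form, \Delta E/\langle E \rangle = \sqrt{2/3N}, and evaluated directly (in k = 1 units):

```python
from math import sqrt

def relative_fluctuation(N, kT=1.0):
    # Delta E / <E> = sqrt(k T^2 C_v) / <E> for the ideal gas, k = 1 units
    E_avg = 1.5 * N * kT          # <E> = 3NkT/2
    Cv = 1.5 * N                  # C_v = 3Nk/2
    return sqrt(kT**2 * Cv) / E_avg

r_small = relative_fluctuation(100)
r_large = relative_fluctuation(1e22)   # essentially macroscopic
```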
In the microcanonical ensemble, the entropy of the system was defined very simply in terms of the total number of microstates \Omega, which are all equally probable:

S(E) = k \log \Omega(E).    (1)

The energy of the system is fixed at E. As all microstates are equally probable, the probability of one microstate is 1/\Omega. The above expression can be written in terms of this probability of one microstate:

S(E) = -k \log \frac{1}{\Omega(E)}.    (2)
In the canonical ensemble, microstates with different energies occur with different probabilities. For that reason, one may want to rewrite the above equation as an average over microstates. This will help in extending this relation to the case of the canonical ensemble:
S(E) = -\sum_i \frac{1}{\Omega(E)} \left[ k \log \frac{1}{\Omega(E)} \right].    (3)

Since all terms in the sum in (3) are equal, and there are exactly \Omega terms, it adds up to give (2). Defining 1/\Omega to be the probability \rho_i of a microstate i, the above can be written as

S(E) = -k \sum_i \rho_i \log \rho_i.    (4)
This definition of entropy can now be carried over to the canonical ensemble in a straightforward manner. The only difference is that the sum now involves microstates with all possible energies:

S = -k \sum_i \rho_i \log \rho_i,    (5)

where

\rho_i = \frac{e^{-\beta E_i}}{Z},    (6)

with Z = \sum_i e^{-\beta E_i}, is the probability of finding the system in the i'th microstate. In terms of the classical phase space variables, the entropy can be written as

S = -k \int \rho(p, q) \log\left[\rho(p, q)\,\omega\right] dp\, dq,    (7)

where \rho(p, q) is the density function in the canonical ensemble, and \omega is the phase volume corresponding to one microstate.
Since for the canonical ensemble \rho_i has the specific form (6), we can put it in (5) and get an expression for entropy in terms of Z:
S = -k \sum_i \rho_i \log \rho_i
= -k \sum_i \frac{e^{-\beta E_i}}{Z} \log\left(\frac{e^{-\beta E_i}}{Z}\right)
= -k \sum_i \frac{e^{-\beta E_i}}{Z} \left(-\beta E_i - \log Z\right)
= k\beta \sum_i \frac{E_i\, e^{-\beta E_i}}{Z} + k \log Z \sum_i \frac{e^{-\beta E_i}}{Z}
= k\beta \langle E \rangle + k \log Z,    (8)
where \langle E \rangle is the ensemble average of the energy of the system. Using \beta = 1/kT, the above equation can be rewritten as

\langle E \rangle - TS = -kT \log Z.    (9)

But from thermodynamics we know that the Helmholtz free energy is given by F = U - TS. Here, \langle E \rangle is the internal energy of the system, which is represented by U in thermodynamics. Thus we find the expression for the Helmholtz free energy in the canonical ensemble to be

F = -kT \log Z.    (10)
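Both (8) and (10) can be verified directly on any discrete spectrum. The sketch below uses a hypothetical three-level system and checks that the Gibbs form -k \sum_i \rho_i \log \rho_i equals k\beta\langle E \rangle + k \log Z, and that \langle E \rangle - TS = -kT \log Z.

```python
from math import exp, log

k, T = 1.0, 2.0                  # illustrative units and temperature
beta = 1.0 / (k * T)
energies = [0.0, 1.0, 3.0]       # a hypothetical three-level spectrum

Z = sum(exp(-beta * E) for E in energies)
rho = [exp(-beta * E) / Z for E in energies]   # canonical probabilities (6)
E_avg = sum(r * E for r, E in zip(rho, energies))

S_gibbs = -k * sum(r * log(r) for r in rho)    # entropy via eq. (5)
S_from_Z = k * beta * E_avg + k * log(Z)       # entropy via eq. (8)
F = E_avg - T * S_gibbs                        # Helmholtz free energy
```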
Let us now study our simplest problem, the classical ideal gas, which we studied earlier using the microcanonical ensemble, now using the canonical ensemble. The energy of the gas is given by

E = \sum_{i=1}^{N} \left( \frac{p_{xi}^2}{2m} + \frac{p_{yi}^2}{2m} + \frac{p_{zi}^2}{2m} \right),    (11)

where the sum over i goes over all N particles. The partition function can thus be written as
Z = \frac{1}{\hbar^{3N}} \int \exp\left[ -\beta \sum_{i=1}^{N} \left( \frac{p_{xi}^2}{2m} + \frac{p_{yi}^2}{2m} + \frac{p_{zi}^2}{2m} \right) \right] \prod_{i=1}^{N} dp_{xi}\, dp_{yi}\, dp_{zi}\, dx_i\, dy_i\, dz_i

= \frac{1}{\hbar^{3N}} \int \prod_{i=1}^{N} \exp\left[ -\beta \left( \frac{p_{xi}^2}{2m} + \frac{p_{yi}^2}{2m} + \frac{p_{zi}^2}{2m} \right) \right] dp_{xi}\, dp_{yi}\, dp_{zi}\, dx_i\, dy_i\, dz_i

= \frac{1}{\hbar^{3N}} \prod_{i=1}^{N} \int \exp\left[ -\beta \left( \frac{p_{xi}^2}{2m} + \frac{p_{yi}^2}{2m} + \frac{p_{zi}^2}{2m} \right) \right] dp_{xi}\, dp_{yi}\, dp_{zi}\, dx_i\, dy_i\, dz_i.    (12)

Since the particles are non-interacting and identical, these N integrals will all be identical. The integral over space will just give the volume of the box enclosing the gas, and the momenta will vary from -\infty to +\infty. The partition function thus looks like
Z = \frac{1}{\hbar^{3N}} \prod_{i=1}^{N} V \int_{-\infty}^{\infty} e^{-\beta p_{xi}^2/2m}\, dp_{xi} \int_{-\infty}^{\infty} e^{-\beta p_{yi}^2/2m}\, dp_{yi} \int_{-\infty}^{\infty} e^{-\beta p_{zi}^2/2m}\, dp_{zi}    (13)

= \frac{1}{\hbar^{3N}} V^N \left( \frac{2\pi m}{\beta} \right)^{3N/2}.    (14)
The average energy now follows from (4):

\langle E \rangle = -\frac{\partial \log Z}{\partial \beta} = -\frac{\partial}{\partial \beta}\left[ N \log\left( \frac{V}{\hbar^3} \left(\frac{2\pi m}{\beta}\right)^{3/2} \right) \right]    (15)

= \frac{3N}{2\beta} = \frac{3}{2} N k T.    (16)
The entropy of the ideal gas can now be calculated by substituting the expression for Z from (14) into (8). Doing that, we get

S = k\beta \langle E \rangle + k \log Z
= k\beta \cdot \frac{3N}{2\beta} + k \log\left[ \frac{V^N}{\hbar^{3N}} \left(\frac{2\pi m}{\beta}\right)^{3N/2} \right]
= \frac{3}{2} Nk + Nk \log\left[ V \left(\frac{2\pi m k T}{\hbar^2}\right)^{3/2} \right]
= \frac{3}{2} Nk + Nk \log\left[ V \left(\frac{4\pi m (3NkT/2)}{3N\hbar^2}\right)^{3/2} \right]
= \frac{3}{2} Nk + Nk \log\left[ V \left(\frac{4\pi m \langle E \rangle}{3N\hbar^2}\right)^{3/2} \right].    (17)
This result is identical to the one obtained using the microcanonical ensemble, if one identifies the average energy \langle E \rangle with the fixed energy E of the microcanonical ensemble. For indistinguishable particles, one should have an additional factor of 1/N! in the partition function (14). With that addition, the above expression will lead to the Sackur-Tetrode equation. Thus, the canonical and microcanonical ensembles yield identical results for the classical ideal gas, as they should.
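The only non-trivial ingredient in (13)-(14) is the Gaussian integral \int_{-\infty}^{\infty} e^{-\beta p^2/2m}\, dp = \sqrt{2\pi m/\beta}; the sketch below checks it by simple trapezoidal quadrature, with arbitrary illustrative values of \beta and m.

```python
from math import exp, pi, sqrt

beta, m = 2.0, 3.0       # illustrative values

# trapezoidal quadrature over a wide, fine grid (tails are negligible)
P, n = 50.0, 200_000
dp = 2 * P / n
total = 0.0
for i in range(n + 1):
    p = -P + i * dp
    w = 0.5 if i in (0, n) else 1.0      # trapezoid end-point weights
    total += w * exp(-beta * p * p / (2 * m)) * dp

exact = sqrt(2 * pi * m / beta)          # closed-form Gaussian integral
```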
Grand Canonical Ensemble

The canonical ensemble, which we studied in the previous lectures, is applicable to systems which are thermally interacting with a heat-bath, but are physically isolated. Now we wish to study that class of systems which are open, in the sense of particles freely moving between the system and the heat-bath. Physical examples are many: an electron gas, for example, has electrons being absorbed and released by the walls. The same thing happens for a photon gas, where the number of photons is not fixed. To study such systems, we introduce an ensemble where only the volume is fixed, and the energy and number of particles can vary. Such an ensemble is called the grand canonical ensemble.
Again, we consider a system characterized by energy, volume and number of particles E, V, N, interacting with a much, much larger heat-bath characterized by E_B, V_B, N_B. The system can exchange energy with the heat-bath through the walls. In addition, the walls of the system are supposed to be porous, so that particles can also pass from the system to the heat-bath, and vice versa. The heat-bath is assumed to be so large that any exchange of energy and particles with our system of interest has no noticeable effect on it. The system of interest and the heat-bath, taken together, are assumed to form a closed system, such that

E + E_B = E_T, \qquad N + N_B = N_T \qquad (1)
where E_T and N_T are the total energy and number of particles, respectively, of the system and the heat-bath taken together, and are supposed to be fixed. The total number of microstates of the combined system is

\Omega_T = \sum_N \int \Omega(E, N)\, dE \qquad (2)

where the integral is a sum over the possibilities of various amounts of energy exchange between the system and the heat-bath, and the sum over N represents the various particle exchanges between them. For example, the term E = 0, N = 0 corresponds to a situation where the system has transferred all its energy and particles to the heat-bath. The total system is closed, and can be treated in the microcanonical ensemble.
Now, the number of microstates corresponding to the system having energy E and number of particles N is given by

\Omega(E, N) = \Omega_S(E, N)\,\Omega_B(E_B, N_B) = \Omega_S(E, N)\,\Omega_B(E_T - E,\, N_T - N) \qquad (3)

Microstates of the combined system should have been labelled by E, E_B, N and N_B, but because of the constraint (1), E and N can be taken to be the only independent variables.
We now write \Omega_B(E_T - E,\, N_T - N) in terms of the entropy of the heat-bath, using the Boltzmann definition of entropy S = k\log\Omega:

\Omega(E, N) = \Omega_S(E, N)\, e^{S_B(E_T - E,\, N_T - N)/k} \qquad (4)
where S_B is the entropy of the heat-bath. Since the heat-bath is much, much larger than our system of interest, it is obvious that E \ll E_B, E_T and N \ll N_B, N_T. The entropy of the heat-bath can now be expanded in a Taylor series in E and N:

S_B(E_T - E,\, N_T - N) = S_B(E_T, N_T) + E\left.\frac{\partial S_B}{\partial E}\right|_{E=0} + N\left.\frac{\partial S_B}{\partial N}\right|_{N=0} + \ldots \qquad (5)
We ignore the second and higher order terms in N, E in the series, assuming E, N to be small, and plug this expression into (4):

\Omega(E, N) \approx \Omega_S(E, N)\,\exp\left[\frac{1}{k}S_B(E_T, N_T) + \frac{E}{k}\frac{\partial S_B}{\partial E} + \frac{N}{k}\frac{\partial S_B}{\partial N}\right] \qquad (6)

But

\frac{\partial S_B}{\partial E} = \frac{\partial E_B}{\partial E}\,\frac{\partial S_B}{\partial E_B} = -\frac{\partial S_B}{\partial E_B} = -\frac{1}{T}, \qquad (7)

and

\frac{\partial S_B}{\partial N} = \frac{\partial N_B}{\partial N}\,\frac{\partial S_B}{\partial N_B} = -\frac{\partial S_B}{\partial N_B} = \frac{\mu}{T}, \qquad (8)
so that

\Omega(E, N) = \Omega_S(E, N)\,\exp\left[\frac{1}{k}S_B(E_T, N_T) - \frac{E}{kT} + \frac{\mu N}{kT}\right]
 = \Omega_S(E, N)\, e^{S_B(E_T, N_T)/k}\, e^{-(E - \mu N)/kT} \qquad (9)
The term e^{S_B(E_T, N_T)/k} is constant, as far as E and N are concerned. From the microcanonical ensemble we know that all microstates (with the same energy) are equally probable. This holds true here too, but only for the microstates of the system plus heat-bath. If one wants to concentrate only on the system, as we do because it is the system we are studying, things are slightly different. Corresponding to a microstate of the system with energy E and number of particles N, the heat-bath has e^{S_B(E_T, N_T)/k}\, e^{-(E-\mu N)/kT} microstates. So, two microstates of the system with different E and N will have different numbers of heat-bath microstates associated with them. From the system's point of view, it will appear as if microstates of the system with different energies and numbers of particles occur with different probabilities.
The total number of microstates of the combined system can be written as

\Omega_T = \sum_N \int \Omega_S(E, N)\, e^{S_B(E_T, N_T)/k}\, e^{-(E-\mu N)/kT}\, dE \qquad (10)
So, the probability of the system having energy E and N particles should be equal to the number of microstates corresponding to the system having energy E and N particles, divided by the total number of microstates:

P(E, N) = \frac{\Omega_S(E, N)\, e^{S_B(E_T,N_T)/k}\, e^{-(E-\mu N)/kT}}{\sum_N \int \Omega_S(E, N)\, e^{S_B(E_T,N_T)/k}\, e^{-(E-\mu N)/kT}\, dE}
 = \frac{\Omega_S(E, N)\, e^{-(E-\mu N)/kT}}{\sum_N \int \Omega_S(E, N)\, e^{-(E-\mu N)/kT}\, dE} \qquad (11)
Here \Omega_S(E, N) is obtained by counting phase-space volume:

\Omega_S(E, N) = \frac{1}{\omega_0}\int_E dp\, dq, \qquad (12)

where the integral of dp\,dq represents an integral over all positions and momenta of all the particles, over the constant energy surface with energy E, and \omega_0 is the smallest phase-volume of one microstate. The probability of the system having energy E and N particles can now be written as
P(E, N) = \frac{\frac{1}{\omega_0}\int_E e^{-(E-\mu N)/kT}\, dp\, dq}{\sum_N \frac{1}{\omega_0}\int e^{-(E-\mu N)/kT}\, dp\, dq} \qquad (13)

Notice that the integral in the numerator is over a constant energy surface with fixed energy E, while that in the denominator is over all of phase space.
We can thus define a density function

\rho(p, q, N) = \frac{e^{-(E-\mu N)/kT}}{\sum_N \int e^{-(E-\mu N)/kT}\, dp\, dq} = \frac{1}{\omega_0\,\mathcal{Z}}\, e^{-(E-\mu N)/kT}, \qquad (14)

such that \rho(p, q, N)\,dp\,dq gives the probability of the system having N particles, with momenta between p and p + dp and positions between q and q + dq. Remember that the integrals over p and q here represent 6N integrals: over the 3 momentum components and the 3 position components of each of the N particles. Thus \rho(p, q, N) describes the normalized density of microstates (of the system plus heat-bath) in phase space. The grand partition function \mathcal{Z} is now written as
\mathcal{Z} = \sum_{N=0}^{\infty} \frac{1}{\omega_0}\int e^{-(E-\mu N)/kT}\, dp\, dq \qquad (15)

In the above expression, e^{\mu N/kT} does not depend on p, q, and can be brought out of the integral:

\mathcal{Z} = \sum_{N=0}^{\infty} e^{\mu N/kT}\, \frac{1}{\omega_0}\int e^{-E(p,q,N)/kT}\, dp\, dq \qquad (16)
But the term \frac{1}{\omega_0}\int e^{-E(p,q,N)/kT}\, dp\, dq is just the canonical partition function Z_N for N particles. The grand partition function can thus be represented as a sum over canonical partition functions with different numbers of particles:

\mathcal{Z} = \sum_{N=0}^{\infty} e^{\mu N/kT}\, Z_N \qquad (17)
The ensemble average of any quantity A is given by

\langle A\rangle = \sum_{N=0}^{\infty}\int A(p, q, N)\,\rho(p, q, N)\, dp\, dq = \frac{1}{\mathcal{Z}}\sum_{N=0}^{\infty}\frac{1}{\omega_0}\int A\, e^{-(E-\mu N)/kT}\, dp\, dq \qquad (18)

As in the case of the canonical ensemble, it will turn out that various thermodynamic quantities can be represented in terms of the grand partition function. The energy and number of particles of the system can now be determined by calculating the corresponding ensemble averages \langle E\rangle and \langle N\rangle.
If the system consists of indistinguishable particles, the canonical partition function for N particles, Z_N, should be represented by \frac{1}{N!}\frac{1}{\omega_0}\int e^{-E(p,q,N)/kT}\, dp\, dq instead of \frac{1}{\omega_0}\int e^{-E(p,q,N)/kT}\, dp\, dq.
The average energy of the system is

\langle E\rangle = \sum_N \int E(p, q)\,\rho(p, q, N)\, dp\, dq = \frac{1}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int E(p, q)\, e^{-\beta(E-\mu N)}\, dp\, dq \qquad (1)

where E(p, q) is a function of p, q. Also, it should be kept in mind that p, q involve the momenta and coordinates of all the particles in the system. The derivative of e^{-\beta(E-\mu N)} with respect to \beta pulls down a factor -(E - \mu N):

-\frac{1}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int \frac{\partial}{\partial\beta}\, e^{-\beta(E-\mu N)}\, dp\, dq = \frac{1}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int (E - \mu N)\, e^{-\beta(E-\mu N)}\, dp\, dq
 = \frac{1}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int E\, e^{-\beta(E-\mu N)}\, dp\, dq - \frac{\mu}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int N\, e^{-\beta(E-\mu N)}\, dp\, dq
 = \langle E\rangle - \mu\langle N\rangle, \qquad (2)

where \langle N\rangle is the ensemble average of the number of particles in the system. The above relation simplifies to

\langle E\rangle - \mu\langle N\rangle = -\frac{1}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int \frac{\partial}{\partial\beta}\, e^{-\beta(E-\mu N)}\, dp\, dq = -\frac{1}{\mathcal{Z}}\frac{\partial\mathcal{Z}}{\partial\beta} \qquad (3)

so that

\langle E\rangle = -\frac{\partial\log\mathcal{Z}}{\partial\beta} + \mu\langle N\rangle \qquad (4)
The above relation has a very simple interpretation. The first term on the right arises because of the heat exchange between the system and the heat-bath, and is identical to the average energy in the canonical ensemble. Since the chemical potential is, by definition, the increase in the energy of the system when one particle is added to it, adding \langle N\rangle particles to the system increases its energy by \mu\langle N\rangle. Thus, the second term represents the change in energy of the system due to the exchange of particles.
Similarly, the average number of particles is given by

\langle N\rangle = \sum_N \int N\,\rho(p, q, N)\, dp\, dq = \frac{1}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int N\, e^{-\beta(E-\mu N)}\, dp\, dq \qquad (5)

The derivative of e^{-\beta(E-\mu N)} with respect to \mu pulls down a factor \beta N. Using this fact, the above equation can be written as

\langle N\rangle = \frac{1}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int \frac{1}{\beta}\frac{\partial}{\partial\mu}\, e^{-\beta(E-\mu N)}\, dp\, dq
 = \frac{1}{\beta}\frac{1}{\mathcal{Z}}\frac{\partial}{\partial\mu}\sum_N \frac{1}{\omega_0}\int e^{-\beta(E-\mu N)}\, dp\, dq
 = \frac{1}{\beta}\frac{1}{\mathcal{Z}}\frac{\partial\mathcal{Z}}{\partial\mu} \qquad (6)
Thus the average number of particles in the system is given by

\langle N\rangle = \frac{1}{\beta}\frac{\partial\log\mathcal{Z}}{\partial\mu} \qquad (7)

The entropy can be obtained from the Gibbs formula

S = -k\sum_i \rho_i \log\rho_i \qquad (8)

where \rho_i is the probability of the i'th microstate in the grand canonical ensemble (for the continuous phase-space density \rho(p,q), \rho_i corresponds to \rho\,\omega_0, with \omega_0 the phase volume of one microstate). Substituting the grand canonical form of \rho_i in the above equation, we get
S = -k\sum_i \rho_i \log\rho_i
 = -k\sum_i \frac{e^{-\beta(E_i-\mu N_i)}}{\mathcal{Z}}\,\log\left[\frac{e^{-\beta(E_i-\mu N_i)}}{\mathcal{Z}}\right]
 = k\sum_i \frac{e^{-\beta(E_i-\mu N_i)}}{\mathcal{Z}}\left(\beta E_i - \beta\mu N_i + \log\mathcal{Z}\right)
 = k\beta\sum_i \frac{E_i\, e^{-\beta(E_i-\mu N_i)}}{\mathcal{Z}} - k\beta\mu\sum_i \frac{N_i\, e^{-\beta(E_i-\mu N_i)}}{\mathcal{Z}} + k\,\frac{\log\mathcal{Z}}{\mathcal{Z}}\sum_i e^{-\beta(E_i-\mu N_i)}
 = k\beta\langle E\rangle - k\beta\mu\langle N\rangle + k\log\mathcal{Z} \qquad (9)
where \langle E\rangle is the ensemble average of the energy of the system, and \langle N\rangle the average number of particles in it. The above equation can be rewritten as

\langle E\rangle - TS - \mu\langle N\rangle = -kT\log\mathcal{Z} \qquad (10)

The combination on the left is the grand potential \Phi, so that

\Phi = -kT\log\mathcal{Z} \qquad (11)

Since the system can exchange particles with the heat-bath, the number of particles in it fluctuates about the average \langle N\rangle. The particle number fluctuation is defined as

\Delta N = \sqrt{\langle(N - \langle N\rangle)^2\rangle} = \sqrt{\langle N^2\rangle - \langle N\rangle^2}, \qquad (12)
The square of the particle number fluctuation can now be written as

(\Delta N)^2 = \langle N^2\rangle - \langle N\rangle^2
 = \frac{1}{\mathcal{Z}}\sum_N \frac{1}{\omega_0}\int N^2\, e^{-\beta(E-\mu N)}\, dp\, dq - \langle N\rangle^2
 = \frac{1}{\mathcal{Z}}\frac{1}{\beta^2}\sum_N \frac{1}{\omega_0}\int \frac{\partial^2}{\partial\mu^2}\, e^{-\beta(E-\mu N)}\, dp\, dq - \langle N\rangle^2
 = \frac{1}{\mathcal{Z}}\frac{1}{\beta^2}\frac{\partial^2\mathcal{Z}}{\partial\mu^2} - \langle N\rangle^2

Using \frac{\partial\mathcal{Z}}{\partial\mu} = \beta\mathcal{Z}\langle N\rangle, which follows from (6), we get

(\Delta N)^2 = \frac{1}{\mathcal{Z}}\frac{1}{\beta}\frac{\partial}{\partial\mu}\left(\mathcal{Z}\langle N\rangle\right) - \langle N\rangle^2
 = \frac{1}{\mathcal{Z}}\frac{1}{\beta}\left(\frac{\partial\mathcal{Z}}{\partial\mu}\langle N\rangle + \mathcal{Z}\frac{\partial\langle N\rangle}{\partial\mu}\right) - \langle N\rangle^2
 = \langle N\rangle^2 + \frac{1}{\beta}\frac{\partial\langle N\rangle}{\partial\mu} - \langle N\rangle^2
 = \frac{1}{\beta}\frac{\partial\langle N\rangle}{\partial\mu} = kT\,\frac{\partial\langle N\rangle}{\partial\mu} \qquad (13)
A better measure of fluctuation is the relative fluctuation \Delta N/\langle N\rangle. The above equation can thus be written as

\frac{(\Delta N)^2}{\langle N\rangle^2} = \frac{kT}{\langle N\rangle^2}\,\frac{\partial\langle N\rangle}{\partial\mu} \qquad (14)

We can rewrite the above equation in terms of the average volume per particle, v = V/\langle N\rangle:

\frac{(\Delta N)^2}{\langle N\rangle^2} = \frac{kT}{\langle N\rangle^2}\,\frac{\partial(V/v)}{\partial\mu}
 = -\frac{kT\,V}{\langle N\rangle^2 v^2}\,\frac{\partial v}{\partial\mu}
 = -\frac{kT}{V}\,\frac{\partial v}{\partial\mu} \qquad (15)
The change in chemical potential can be related to the change in pressure by the following thermodynamic (Gibbs-Duhem) relation:

N\, d\mu = V\, dP - S\, dT

or

d\mu = v\, dP - \frac{S}{N}\, dT \qquad (16)

So, at constant temperature, d\mu is just equal to v\, dP. Putting this form in (15), we get

\frac{(\Delta N)^2}{\langle N\rangle^2} = -\frac{kT}{V}\,\frac{1}{v}\left(\frac{\partial v}{\partial P}\right)_T \qquad (17)

But -\frac{1}{v}\left(\frac{\partial v}{\partial P}\right)_T is the isothermal compressibility \kappa_T, a well-known thermodynamic quantity. We thus obtain the result

\frac{(\Delta N)^2}{\langle N\rangle^2} = \frac{kT\,\kappa_T}{V} \qquad (18)
Thus we see that particle density fluctuations, which happen spontaneously because of the exchange of particles with a heat-bath, are intimately related to a thermodynamic property of the system, namely the isothermal compressibility.
We now look at the relative particle number fluctuation in the thermodynamic limit, namely V \to \infty, \langle N\rangle \to \infty:

\lim_{V\to\infty}\frac{\Delta N}{\langle N\rangle} = \lim_{V\to\infty}\sqrt{\frac{kT\,\kappa_T}{V}} = 0 \qquad (19)

So, in the thermodynamic limit the fluctuations are negligible, and the number of particles remains practically constant. If the number of particles is almost constant, one can also safely use the canonical ensemble, where the particle number is fixed, to describe the system. So we conclude that in the thermodynamic limit (V \to \infty), the canonical and grand canonical ensembles should give similar results.
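This shrinking of the relative fluctuation can be made concrete for the classical ideal gas, for which the grand-canonical probability of finding N particles, P(N) \propto e^{\mu N/kT} Z_N, is a Poisson distribution, so that (\Delta N)^2 = \langle N\rangle and \Delta N/\langle N\rangle = 1/\sqrt{\langle N\rangle}. A minimal check (the mean value 100 is an assumed illustration):

```python
import math

def number_moments(lam, nmax=1000):
    """Mean and variance of N under the Poisson distribution
    P(N) = e^{-lam} lam^N / N!, which is the grand-canonical
    particle-number distribution of a classical ideal gas."""
    probs = [math.exp(-lam + N * math.log(lam) - math.lgamma(N + 1))
             for N in range(nmax + 1)]
    mean = sum(N * p for N, p in enumerate(probs))
    var = sum((N - mean) ** 2 * p for N, p in enumerate(probs))
    return mean, var

mean, var = number_moments(100.0)
print(math.sqrt(var) / mean)    # Delta N / <N>, close to 1/sqrt(100) = 0.1
```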
|\psi\rangle = \sum_n c_n |\phi_n\rangle \qquad (1)

The expectation value of an observable A in this state is

\langle A\rangle = \frac{\langle\psi|A|\psi\rangle}{\langle\psi|\psi\rangle} = \frac{\sum_{n,m} c_n^* c_m \langle\phi_n|A|\phi_m\rangle}{\sum_{n,m} c_n^* c_m \langle\phi_n|\phi_m\rangle} = \frac{\sum_{n,m} c_n^* c_m \langle\phi_n|A|\phi_m\rangle}{\sum_n c_n^* c_n} \qquad (2)
The time average implicit here is taken over an interval much longer than the typical collision time of atoms and molecules, but much smaller than the resolving time of our apparatus. Thus, the quantity we actually measure should be given by

\langle A\rangle = \frac{\overline{\langle\psi|A|\psi\rangle}}{\langle\psi|\psi\rangle} = \frac{\sum_{n,m}\overline{c_n^* c_m}\,\langle\phi_n|A|\phi_m\rangle}{\sum_n \overline{c_n^* c_n}}, \qquad (3)
The term \overline{c_n^* c_m} represents a time average of d_n^* d_m \langle\phi_n|\phi_m\rangle over times much longer than the time-scale of molecular motion, but shorter than the resolution time of the measuring apparatus. This term might look simple in appearance, but it is extremely difficult to calculate, as it involves all the states of the environment and its interaction with the system. In general, this term cannot be calculated, and one can only make guesses about it.
If A represents a measurable macroscopic observable of a system in thermal equilibrium, the postulates of quantum statistical mechanics are actually postulates about the form of \overline{c_n^* c_m}.
1. Postulate of Equal a priori Probability

\overline{c_n^* c_n} = \begin{cases}1 & (E < E_n < E + \Delta E)\\ 0 & (\text{otherwise})\end{cases} \qquad (4)

Simply put, it implies that only those states are allowed which conform to the fixed energy constraint, and all such states are equally probable.

2. Postulate of Random Phases

\overline{c_n^* c_m} = 0 \qquad (n \neq m) \qquad (5)
Density matrix

All of the preceding discussion can also be reformulated in terms of the density operator, instead of quantum states. A quantum system in a state |\psi\rangle can be described by a density operator given by

\hat\rho = |\psi\rangle\langle\psi|, \qquad (6)

provided that |\psi\rangle is normalized. For an unnormalized state, one can write \hat\rho = \frac{|\psi\rangle\langle\psi|}{Tr[\,|\psi\rangle\langle\psi|\,]}, where Tr[\ldots] represents the trace over a complete set of states. The expectation value of an observable can then be written as

\langle A\rangle = Tr[\hat\rho\,\hat A] \qquad (7)
If one uses the energy eigenstates of the system to take the trace over states of the system, one gets

\langle A\rangle = \sum_{n,m}\langle\phi_n|\hat\rho|\phi_m\rangle\langle\phi_m|\hat A|\phi_n\rangle = \sum_{n,m}\rho_{nm}\langle\phi_m|\hat A|\phi_n\rangle \qquad (8)
where \rho_{nm} is called the density matrix. For a pure state, described by a single wave function, this density matrix is in general non-diagonal; it can be diagonal only when the system is in one of its energy eigenstates. Comparing the above equation with (2), we conclude that the system in (2) can be described by a density matrix given by

\rho_{nm} = \frac{\overline{c_n^* c_m}}{\sum_n \overline{c_n^* c_n}}. \qquad (9)

Furthermore, the postulates of quantum statistical mechanics, stated in the preceding discussion, imply that this density matrix (in the representation of energy eigenstates) is diagonal. To put it mathematically,

\rho_{nm} = \frac{\overline{c_n^* c_n}}{\sum_n \overline{c_n^* c_n}}\,\delta_{nm} \qquad (10)
The density matrix may be non-diagonal if another set of states, different from the energy eigenstates, is used to take the trace (the trace itself is invariant under a change of representation). The average value of an observable can now be written as

\langle A\rangle = \sum_n \rho_{nn}\langle\phi_n|\hat A|\phi_n\rangle. \qquad (11)

The above relation represents an average of the observable A over an ensemble which consists of copies of the system in different microstates (quantum states) |\phi_1\rangle, |\phi_2\rangle, |\phi_3\rangle, etc. The microstate (quantum state) |\phi_k\rangle occurs with probability \rho_{kk}. Here \rho_{nm} is an example of a mixed-state density matrix. Such a density matrix cannot represent a single system in a particular quantum state. It represents a mixture, or an ensemble, of systems in different microstates, occurring with different probabilities.
Microcanonical ensemble

With the density matrix formulation discussed above, we are all set to describe the various ensembles of quantum statistical mechanics. Firstly, the counting of microstates, which was done by calculating a phase-space volume in classical statistical mechanics, is now done by counting the quantum states of the system, labelled by suitable quantum numbers:

\frac{1}{N!}\frac{1}{\hbar^{3N}}\int dp\, dq \;\longrightarrow\; \sum_n

The microcanonical density matrix is diagonal,

\rho_{nn} = \frac{\overline{c_n^* c_n}}{\sum_k \overline{c_k^* c_k}} \qquad (12)

with

\overline{c_n^* c_n} = \begin{cases}1 & (E < E_n < E + \Delta E)\\ 0 & (\text{otherwise})\end{cases} \qquad (13)
All those \overline{c_n^* c_n} whose E_n lies between E and E + \Delta E are equal to 1; the rest are zero. So \sum_k \overline{c_k^* c_k} is just equal to the number of microstates whose energy eigenvalue lies between E and E + \Delta E; let us call it \Omega. The microcanonical density matrix can then be written as

\rho_{nn} = \begin{cases}\frac{1}{\Omega} & (E < E_n < E + \Delta E)\\ 0 & (\text{otherwise})\end{cases} \qquad (14)
Canonical ensemble

The canonical ensemble can be formulated exactly as it was done in classical statistical mechanics, by considering a system attached to a much bigger heat-bath. Since none of the arguments used in our earlier formulation was specific to the classical nature of the system, the result can be directly adapted here. The canonical density matrix can be written as

\rho_{nn} = \frac{e^{-\beta E_n}}{Z}, \qquad Z = \sum_n e^{-\beta E_n} \qquad (15)

where Z is the canonical partition function. Off-diagonal elements of the density matrix are zero. The ensemble average of an observable can be written as

\langle A\rangle = \frac{1}{Z}\sum_n e^{-\beta E_n}\langle\phi_n|\hat A|\phi_n\rangle \qquad (16)
The density matrix in the grand canonical ensemble can be written, in general, as

\rho_{ii} = \frac{e^{-\beta(E_i-\mu N_i)}}{\mathcal{Z}}, \qquad \mathcal{Z} = \sum_i e^{-\beta(E_i-\mu N_i)} \qquad (17)

where \mu is the chemical potential, and \mathcal{Z} the grand partition function. How the microstates of the system are defined may depend on the specific problem at hand. We will look at this in more detail when studying the quantum statistics of an ideal gas of identical particles.
We now come to a very important topic in statistical mechanics: the properties of an ideal gas of identical particles. We have already studied the ideal gas in the classical microcanonical and canonical ensembles. The indistinguishability of identical particles in quantum mechanics is of a very fundamental nature, and thus has a strong bearing on the properties of gases.
In particular, we will be interested in the case where the system can exchange particles with a heat-bath. The free electron gas in metals and the photon gas in a cavity are two examples where the number of particles of the system is not fixed. So, the system is described using the grand canonical ensemble.
Grand canonical ensemble

The density matrix in the grand canonical ensemble can be written, in general, as

\rho_{ii} = \frac{e^{-\beta(E_i-\mu N_i)}}{\mathcal{Z}}, \qquad \mathcal{Z} = \sum_i e^{-\beta(E_i-\mu N_i)} \qquad (1)

where \mu is the chemical potential, and \mathcal{Z} the grand partition function. In the sum, the index i denotes the microstates of the system, and E_i and N_i the energy and number of particles in the i'th microstate. Now, as the particles are assumed to be non-interacting, each particle is governed by an identical Hamiltonian, say H, with eigenvalues denoted by \epsilon_n. The energy-levels of each particle are also the same; we will call them single-particle energy levels. For example, many particles can have a particular energy, say \epsilon_k.
One way of summing over the microstates of the gas could be to take each particle one by one, and sum over all its possible energy eigenstates. But in doing that we would be tacitly giving the particles identity, because two particles exchanging their states does not give us a new quantum state, or a new microstate.
Another way of counting is to realize that if we know the occupancy of each single-particle state, we have specified the particular microstate. For truly identical particles, it is not important which particle occupies which energy-level; the only thing important is how many particles occupy a particular energy level. Thus, if we denote the occupancies of the single-particle states \epsilon_1, \epsilon_2, \epsilon_3, \ldots by n_1, n_2, n_3, \ldots, a set of values of n_1, n_2, n_3, \ldots specifies a particular microstate. Summing over microstates then means summing over all possible values of n_1, n_2, n_3, \ldots. The energy and number of particles of the system, in a particular microstate, can be written as

E = \sum_j n_j\epsilon_j, \qquad N = \sum_j n_j \qquad (2)
The single-particle energies \epsilon_j depend on the particular problem at hand. For example, for an ideal gas of particles in a box (in 1 dimension), \epsilon_j will be \frac{j^2 h^2}{8mL^2}. Or, if all the particles are trapped in a harmonic oscillator potential, \epsilon_j will be given by (j + 1/2)\hbar\omega (in 1 dimension).
The grand partition function can now be written as

\mathcal{Z} = \sum_{n_1}\sum_{n_2}\sum_{n_3}\ldots\exp\left[-\beta\left(\sum_j n_j\epsilon_j - \mu\sum_j n_j\right)\right]
 = \sum_{n_1}\sum_{n_2}\sum_{n_3}\ldots\exp\left[-\beta\sum_j n_j(\epsilon_j - \mu)\right]
 = \sum_{n_1} e^{-\beta n_1(\epsilon_1-\mu)}\sum_{n_2} e^{-\beta n_2(\epsilon_2-\mu)}\ldots\sum_{n_k} e^{-\beta n_k(\epsilon_k-\mu)}\ldots \qquad (3)
The average occupancy of the k'th single-particle state is

\langle n_k\rangle = \frac{1}{\mathcal{Z}}\sum_{n_1}\sum_{n_2}\sum_{n_3}\ldots n_k\exp\left[-\beta\left(\sum_j n_j\epsilon_j - \mu\sum_j n_j\right)\right]
 = \frac{1}{\mathcal{Z}}\sum_{n_1}\sum_{n_2}\sum_{n_3}\ldots n_k\exp\left[-\beta\sum_j n_j(\epsilon_j - \mu)\right]
 = \frac{1}{\mathcal{Z}}\sum_{n_1}\sum_{n_2}\sum_{n_3}\ldots n_k\prod_j\exp\left[-\beta n_j(\epsilon_j - \mu)\right]
 = \frac{1}{\mathcal{Z}}\sum_{n_1} e^{-\beta n_1(\epsilon_1-\mu)}\sum_{n_2} e^{-\beta n_2(\epsilon_2-\mu)}\ldots\sum_{n_k} n_k\, e^{-\beta n_k(\epsilon_k-\mu)}\ldots \qquad (4)
In the above equation, the numerator and the denominator (given by (3)) have most terms in common. Each sum in the numerator has a corresponding sum in the denominator, except the sum over n_k, for which the numerator and denominator terms differ. Consequently, all the sums in the numerator and denominator cancel out, except the sum over n_k, giving

\langle n_k\rangle = \frac{\sum_{n_k} n_k\, e^{-\beta n_k(\epsilon_k-\mu)}}{\sum_{n_k} e^{-\beta n_k(\epsilon_k-\mu)}} \qquad (5)
To proceed further, we should know the allowed occupancies of the single-particle energy eigenstates. We know that in quantum mechanics there are two kinds of particles: fermions, for which the occupancy can only be 0 or 1, and bosons, for which the occupancy can vary from 0 to \infty.

Bosons (n = 0, 1, 2, 3, \ldots)

\langle n_k\rangle = \frac{\sum_{n_k=0}^{\infty} n_k\, e^{-\beta n_k(\epsilon_k-\mu)}}{\sum_{n_k=0}^{\infty} e^{-\beta n_k(\epsilon_k-\mu)}} \qquad (6)

The denominator is a geometric series, and gives (1 - e^{-\beta(\epsilon_k-\mu)})^{-1}. The numerator can be calculated by taking the first derivative of a geometric series, and yields \frac{e^{-\beta(\epsilon_k-\mu)}}{(1 - e^{-\beta(\epsilon_k-\mu)})^2}.
So, the average occupancy of the k'th energy-state is

\langle n_k\rangle = \frac{e^{-\beta(\epsilon_k-\mu)}}{1 - e^{-\beta(\epsilon_k-\mu)}} \qquad (7)

or

\langle n_k\rangle = \frac{1}{e^{\beta(\epsilon_k-\mu)} - 1} \qquad (8)

The above formula describes the average occupancy of single-particle energy-states for particles obeying Bose-Einstein statistics.
Fermions (n = 0, 1)

\langle n_k\rangle = \frac{\sum_{n_k=0}^{1} n_k\, e^{-\beta n_k(\epsilon_k-\mu)}}{\sum_{n_k=0}^{1} e^{-\beta n_k(\epsilon_k-\mu)}} = \frac{e^{-\beta(\epsilon_k-\mu)}}{1 + e^{-\beta(\epsilon_k-\mu)}} \qquad (9)

or

\langle n_k\rangle = \frac{1}{e^{\beta(\epsilon_k-\mu)} + 1} \qquad (10)

The above formula describes the average occupancy of single-particle energy-states for particles obeying Fermi-Dirac statistics.
The total number of particles in the system is simply given by

\langle N\rangle = \sum_k\langle n_k\rangle = \begin{cases}\displaystyle\sum_k \frac{1}{e^{\beta(\epsilon_k-\mu)} - 1} & \text{(Bose-Einstein)}\\[6pt] \displaystyle\sum_k \frac{1}{e^{\beta(\epsilon_k-\mu)} + 1} & \text{(Fermi-Dirac)}\end{cases}
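The closed forms above come from elementary sums over the allowed occupancies, and can be verified directly by truncating the boson series; the values of x = \beta(\epsilon_k - \mu) used below are assumed purely for the check:

```python
import math

def occupancy_from_sums(x, bose=True, nmax=2000):
    """<n_k> = sum_n n e^{-n x} / sum_n e^{-n x}, with
    x = beta * (eps_k - mu).  Bosons: n = 0, 1, 2, ... (series
    truncated at nmax); fermions: n = 0, 1 only."""
    ns = range(nmax + 1) if bose else (0, 1)
    num = sum(n * math.exp(-n * x) for n in ns)
    den = sum(math.exp(-n * x) for n in ns)
    return num / den

for x in (0.5, 1.0, 2.0):
    be = 1.0 / (math.exp(x) - 1.0)    # Bose-Einstein closed form
    fd = 1.0 / (math.exp(x) + 1.0)    # Fermi-Dirac closed form
    assert abs(occupancy_from_sums(x, bose=True) - be) < 1e-9
    assert abs(occupancy_from_sums(x, bose=False) - fd) < 1e-12
print("closed forms verified")
```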
Finally, we would also like to evaluate the grand partition function \mathcal{Z}, given by (3). The sums can now be carried out to yield

\mathcal{Z} = \begin{cases}\displaystyle\prod_j \frac{1}{1 - e^{-\beta(\epsilon_j-\mu)}} & \text{(Bose-Einstein)}\\[6pt] \displaystyle\prod_j \left(1 + e^{-\beta(\epsilon_j-\mu)}\right) & \text{(Fermi-Dirac)}\end{cases}

From (3) one can see that the average occupancy of an energy-state could also have been calculated from the relation

\langle n_k\rangle = -\frac{1}{\beta}\frac{\partial\log\mathcal{Z}}{\partial\epsilon_k}. \qquad (11)
We now wish to study the thermodynamic properties of an ideal gas of quantum particles in the grand canonical ensemble. For this purpose, the grand potential that we introduced earlier comes in useful. The grand potential is defined as

\Phi(T, V, \mu) = -kT\log\mathcal{Z} = U - TS - \mu N = -PV \qquad (1)

This single relation can be used to relate various thermodynamic quantities to the grand partition function \mathcal{Z}:

S = -\left(\frac{\partial\Phi}{\partial T}\right)_{V,\mu}, \qquad P = -\left(\frac{\partial\Phi}{\partial V}\right)_{T,\mu}, \qquad N = -\left(\frac{\partial\Phi}{\partial\mu}\right)_{T,V} \qquad (2)

In particular, the equation of state is

\frac{PV}{kT} = \log\mathcal{Z} = \begin{cases}\displaystyle -\sum_j \log\left(1 - e^{-\beta(\epsilon_j-\mu)}\right) & \text{(Bose-Einstein)}\\[6pt] \displaystyle \sum_j \log\left(1 + e^{-\beta(\epsilon_j-\mu)}\right) & \text{(Fermi-Dirac)}\end{cases} \qquad (3)
Instead of describing the gas in terms of the chemical potential \mu, it is often convenient to describe it in terms of the fugacity z \equiv e^{\beta\mu}. The grand partition function can then be regarded as a function of z instead of \mu, i.e., \mathcal{Z}(z, T, V). The equation of state then becomes

\frac{PV}{kT} = \log\mathcal{Z} = \begin{cases}\displaystyle -\sum_j \log\left(1 - ze^{-\beta\epsilon_j}\right) & \text{(Bose-Einstein)}\\[6pt] \displaystyle \sum_j \log\left(1 + ze^{-\beta\epsilon_j}\right) & \text{(Fermi-Dirac)}\end{cases} \qquad (4)

and the average number of particles is

\langle N\rangle = \begin{cases}\displaystyle \sum_k \frac{1}{z^{-1}e^{\beta\epsilon_k} - 1} & \text{(Bose-Einstein)}\\[6pt] \displaystyle \sum_k \frac{1}{z^{-1}e^{\beta\epsilon_k} + 1} & \text{(Fermi-Dirac)}\end{cases} \qquad (5)
To proceed any further, we need to know the details of the system, namely the precise form of the single-particle energies \epsilon_j. Let us assume that the gas is enclosed in a cubical box of side L. The energy of one particle is given by

\epsilon_n = \frac{n^2\pi^2\hbar^2}{2mL^2}

The momentum eigenvalue is then given by p_n = n\pi\hbar/L, and the energy by \epsilon_n = p_n^2/2m. Instead of assuming a box with rigid walls, if one applies periodic boundary conditions (which imply that the wavefunction, and also its derivative, should match at the two opposite walls), one gets p_n = n\,2\pi\hbar/L = nh/L. Instead of summing over n, one can sum over p_n, as n = Lp_n/h. As the particle is confined in a cubical box, there are three quantum numbers n_x, n_y, n_z. As the length of the box becomes very large (macroscopic), the momenta are so closely spaced that they can be assumed to form a continuum. So, in this limit, instead of summing over n_x, n_y, n_z, one can integrate over p_x, p_y, p_z:

\sum_{n_x, n_y, n_z} \;\longrightarrow\; \frac{V}{h^3}\int_{-\infty}^{\infty} dp_x \int_{-\infty}^{\infty} dp_y \int_{-\infty}^{\infty} dp_z
As the energy does not depend on p_x, p_y, p_z individually, but only on p_x^2 + p_y^2 + p_z^2, one can use spherical polar coordinates in the momentum-space integration:

\sum_{n_x, n_y, n_z} \;\longrightarrow\; \frac{V}{h^3}\int_0^{\infty} 4\pi p^2\, dp
Let us first look at the case of an ideal gas of bosons. The average number of particles can now be written as

\langle N\rangle = \frac{4\pi V}{h^3}\int_0^{\infty}\frac{p^2}{z^{-1}e^{\beta p^2/2m} - 1}\, dp \qquad (6)

Choosing a new variable of integration, t = \beta p^2/2m, we get

\langle N\rangle = \frac{2\pi V}{h^3}\left(\frac{2m}{\beta}\right)^{3/2}\int_0^{\infty}\frac{\sqrt{t}}{z^{-1}e^{t} - 1}\, dt = \frac{V}{\lambda^3}\, g_{3/2}(z) \qquad (7)

where \lambda = h/\sqrt{2\pi mkT} is the thermal de Broglie wavelength, and

g_{3/2}(z) = \frac{2}{\sqrt{\pi}}\int_0^{\infty}\frac{\sqrt{t}}{z^{-1}e^{t} - 1}\, dt = \sum_{l=1}^{\infty}\frac{z^l}{l^{3/2}} \qquad (8)
In the process of approximating the summation over quantum states by an integral over momenta, we have inadvertently assigned zero weight to the lowest (p = 0) term. This is clearly wrong, and we would like to separate out the zero-energy contribution from the sum. That term is simply \langle n_0\rangle = \frac{z}{1-z}, which is obtained by putting \epsilon_k = 0 in

\langle n_k\rangle = \frac{1}{z^{-1}e^{\beta\epsilon_k} - 1}

Thus, the correct expression for the number of particles per unit volume reads

\frac{\langle N\rangle}{V} = \frac{\langle n_0\rangle}{V} + \frac{1}{\lambda^3}\, g_{3/2}(z) \qquad (9)
The pressure can be obtained in the same way, by converting the sum in the equation of state into an integral:

\frac{PV}{kT} = -\sum_j \log\left(1 - ze^{-\beta\epsilon_j}\right)
 = -\frac{4\pi V}{h^3}\int_0^{\infty} p^2\log\left(1 - ze^{-\beta p^2/2m}\right) dp
 = -\frac{2\pi V}{h^3}\left(\frac{2m}{\beta}\right)^{3/2}\int_0^{\infty}\sqrt{t}\,\log\left(1 - ze^{-t}\right) dt \qquad (10)
The integration can be done by parts to obtain

\frac{P}{kT} = -\frac{2\pi}{h^3}\left(\frac{2m}{\beta}\right)^{3/2}\int_0^{\infty}\sqrt{t}\,\log(1 - ze^{-t})\, dt
 = -\frac{2\pi}{h^3}\left(\frac{2m}{\beta}\right)^{3/2}\left[\frac{2}{3}t^{3/2}\log(1 - ze^{-t})\Big|_0^{\infty} - \frac{2}{3}\int_0^{\infty}\frac{t^{3/2}\, ze^{-t}}{1 - ze^{-t}}\, dt\right]
 = \frac{1}{\lambda^3}\frac{1}{\Gamma(5/2)}\int_0^{\infty}\frac{t^{5/2-1}}{z^{-1}e^{t} - 1}\, dt \qquad (11)

or

\frac{P}{kT} = \frac{1}{\lambda^3}\, g_{5/2}(z) \qquad (12)
Bose-Einstein condensation

Let us look at the average number of particles of the Bose gas in a bit more detail:

\frac{\langle N\rangle}{V} = \frac{\langle n_0\rangle}{V} + \frac{1}{\lambda^3}\, g_{3/2}(z) \qquad (13)

Now, in order that \langle n_0\rangle = \frac{z}{1-z} be non-negative, z is constrained to 0 \le z \le 1. Also, g_{3/2}(z) is a monotonically increasing function of z. Thus, the maximum value that g_{3/2}(z) can take is g_{3/2}(1). In the above equation, \frac{1}{\lambda^3}g_{3/2}(z) represents the number of particles per unit volume present in the energy levels other than the ground state. The maximum number of particles per unit volume that all the excited states can hold is therefore \frac{1}{\lambda^3}g_{3/2}(1). As long as the total number of particles per unit volume, \frac{\langle N\rangle}{V}, is less than \frac{1}{\lambda^3}g_{3/2}(1), all the particles can fit in the excited states. One can see that the number of particles that the excited states can hold decreases as the temperature goes down, because \frac{1}{\lambda^3} is proportional to T^{3/2}.
As the temperature is lowered, eventually \frac{1}{\lambda^3}g_{3/2}(1) becomes smaller than \frac{\langle N\rangle}{V}, and the excited states can no longer hold all the particles. The surplus particles are pushed into the ground state. It turns out that at low enough temperature this phenomenon happens with a spectacular effect: almost all the particles go en masse into the ground state. This phenomenon is called Bose-Einstein condensation. The temperature below which the ground state begins to be populated can be determined from the condition

\frac{\langle N\rangle}{V} = \frac{1}{\lambda^3}\, g_{3/2}(1) \qquad (14)

which yields the critical temperature

T_c = \frac{h^2}{2\pi mk}\left(\frac{\langle N\rangle/V}{g_{3/2}(1)}\right)^{2/3}

At temperatures below T_c, more and more particles go into the lowest energy state. If one keeps the temperature fixed, and decreases the volume to increase the density of the gas, equation (14) can also be interpreted as defining a critical particle density above which Bose-Einstein condensation begins. Thus we can write, for the critical particle density,

\left(\frac{\langle N\rangle}{V}\right)_c = \frac{1}{\lambda^3}\, g_{3/2}(1) \qquad (15)
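Condition (14) can be inverted to give the critical temperature, T_c = (h^2/2\pi mk)\,(n/g_{3/2}(1))^{2/3}, with g_{3/2}(1) = \zeta(3/2) \approx 2.612. A minimal numerical sketch; the atomic mass (roughly that of rubidium) and the density are assumed illustrative values, not taken from the text:

```python
import math

H = 6.62607015e-34     # Planck constant (J s)
KB = 1.380649e-23      # Boltzmann constant (J/K)

def zeta_3_2(terms=100000):
    """g_{3/2}(1) = zeta(3/2), by direct summation of l^{-3/2}
    plus an integral estimate of the truncated tail."""
    s = sum(l ** -1.5 for l in range(1, terms + 1))
    return s + 2.0 / math.sqrt(terms + 0.5)

def bec_critical_temperature(n, m):
    """T_c from <N>/V = g_{3/2}(1) / lambda^3, lambda = h / sqrt(2 pi m k T)."""
    return (H ** 2 / (2.0 * math.pi * m * KB)) * (n / zeta_3_2()) ** (2.0 / 3.0)

m_atom = 1.44e-25      # assumed atomic mass (kg), roughly rubidium
n = 2.6e19             # assumed particle density (m^-3)
print(bec_critical_temperature(n, m_atom))   # of the order of 1e-7 K
```

Note that T_c grows with density as n^{2/3}, exactly as (14) implies.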
E\{S_i\} = -\frac{1}{2}\sum_{i\neq j} J_{ij} S_i S_j - B\sum_{i=1}^{N} S_i \qquad (1)

where J_{ij} denotes the strength of the interaction between the i'th and the j'th spins, and B denotes an external magnetic field, which could be present. The factor of 1/2 is introduced to account for the double-counting in the unrestricted sum over i and j (i=3, j=8 and i=8, j=3 both represent the interaction between the 3rd and 8th spins). The quantity J_{ij} is actually the exchange interaction between the two magnetic atoms. The direct magnetic interaction between two magnetic atoms is too weak to give rise to ferromagnetism.
A simpler version of the Ising model is generally used, where all the J_{ij}'s are assumed to be equal, and each spin interacts only with its nearest neighbours:

E\{S_i\} = -\frac{J}{2}\sum_{<ij>} S_i S_j - B\sum_{i=1}^{N} S_i \qquad (2)

where <ij> represents a sum over nearest neighbours only. One can easily see that the macroscopic magnetic moment of the whole system, for a particular spin configuration, is given by

M\{S_i\} = \sum_{i=1}^{N} S_i \qquad (3)
The canonical partition function is

Z = \sum_{S_1, S_2, \ldots S_N} \exp(-\beta E\{S_i\}), \qquad (4)

where the summation over S_1, S_2, \ldots S_N denotes a sum over all microstates, which are just all possible values of all the spins. The canonical density matrix can also be written down easily:

\rho\{S_i\} = \frac{1}{Z}\exp(-\beta E\{S_i\}) \qquad (5)
The average value of a quantity, say A\{S_i\}, associated with the Ising system can then be calculated as

\langle A\rangle = \sum_{S_1, S_2, \ldots S_N} \rho\{S_i\}\, A\{S_i\} = \frac{1}{Z}\sum_{S_1, S_2, \ldots S_N} A\{S_i\}\exp(-\beta E\{S_i\}) \qquad (6)
Let us use this equation to write down some average quantities of interest. The average energy of the system is given by

\langle E\rangle = \frac{1}{Z}\sum_{S_1, S_2, \ldots S_N} E\{S_i\}\exp(-\beta E\{S_i\}) \qquad (7)
 = -\frac{1}{Z}\frac{\partial}{\partial\beta}\sum_{S_1, S_2, \ldots S_N}\exp(-\beta E\{S_i\}) = -\frac{1}{Z}\frac{\partial Z}{\partial\beta} = -\frac{\partial\log Z}{\partial\beta} \qquad (8)

which can equivalently be written as

\langle E\rangle = kT^2\,\frac{\partial\log Z}{\partial T} \qquad (9)

The average magnetization is given by

\langle M\rangle = \frac{1}{Z}\sum_{S_1, S_2, \ldots S_N} M\{S_i\}\exp(-\beta E\{S_i\}) \qquad (10)
One look at equation (2) suggests that this can be recast into the form

\langle M\rangle = \frac{1}{\beta}\frac{1}{Z}\sum_{S_1, S_2, \ldots S_N}\frac{\partial}{\partial B}\exp(-\beta E\{S_i\}) = \frac{1}{\beta}\frac{1}{Z}\frac{\partial Z}{\partial B} = \frac{1}{\beta}\frac{\partial\log Z}{\partial B} \qquad (11)

The magnetic susceptibility is similarly related to the second derivative:

\chi = \frac{1}{\beta}\frac{\partial^2\log Z}{\partial B^2} \qquad (12)
One notices that the quantity of central interest is \log Z. So let us go about calculating it:

Z = \sum_{S_1, S_2, \ldots S_N}\exp\left[\beta\left(\frac{J}{2}\sum_{<ij>} S_i S_j + B\sum_{i=1}^{N} S_i\right)\right] \qquad (13)

Evaluating Z is not easy, because the S_i S_j term ensures that the sums over different S_i and S_j cannot be carried out independently.
In the following, we will carry out an approximate treatment of the Ising model. Let us rewrite the energy of the Ising model in a suggestive form:

E\{S_i\} = -\frac{J}{2}\sum_i S_i \sum_{<j>_i} S_j - B\sum_{i=1}^{N} S_i \qquad (14)

where <j>_i denotes the nearest neighbours of the i'th spin. The mean-field approximation consists of replacing the sum over the neighbouring spins by its average value,

\sum_{<j>_i} S_j = \gamma m, \qquad (15)

where \gamma is the number of nearest neighbours of the spin S_i and m is the average magnetization per spin of the system. It should be emphasized that the quantity m is yet to be calculated, from the relation m = \langle M\rangle/N. Using this approximation, the energy of the Ising model now assumes the following form.
E\{S_i\} = -\frac{J\gamma m}{2}\sum_{i=1}^{N} S_i - B\sum_{i=1}^{N} S_i
 = -\left(\frac{J\gamma m}{2} + B\right)\sum_{i=1}^{N} S_i \qquad (16)
Let us calculate the partition function using this simpler form of the energy. Z now assumes the form

Z = \sum_{S_1, S_2, \ldots S_N}\exp\left[\beta\left(\frac{J\gamma m}{2} + B\right)\sum_{i=1}^{N} S_i\right]
 = \sum_{S_1, S_2, \ldots S_N}\prod_{i=1}^{N}\exp\left[\beta\left(\frac{J\gamma m}{2} + B\right)S_i\right] \qquad (17)
 = \prod_{i=1}^{N}\sum_{S_i=-1}^{+1} e^{\beta\left(\frac{J\gamma m}{2} + B\right)S_i}
 = \prod_{i=1}^{N} 2\cosh\left[\beta\left(\frac{J\gamma m}{2} + B\right)\right]
 = \left(2\cosh\left[\beta\left(\frac{J\gamma m}{2} + B\right)\right]\right)^N \qquad (18)
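The factorization of the sum over spins used above can be checked by brute force for a small number of spins; N = 4 and the effective field h_eff = \beta(J\gamma m/2 + B) = 0.3 below are assumed just for the check:

```python
import math
from itertools import product

def Z_brute_force(heff, N):
    """Sum exp(heff * sum_i S_i) over all 2^N configurations of
    S_i = +/-1, with heff = beta * (J*gamma*m/2 + B)."""
    return sum(math.exp(heff * sum(spins))
               for spins in product((-1, +1), repeat=N))

heff, N = 0.3, 4
print(Z_brute_force(heff, N))           # brute-force sum over microstates
print((2.0 * math.cosh(heff)) ** N)     # factorized form, equation (18)
```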
Now we are all set to calculate any quantity. Let us start by evaluating the average magnetization of the system:

\langle M\rangle = \frac{1}{\beta}\frac{\partial\log Z}{\partial B} = N\tanh\left[\beta\left(\frac{J\gamma m}{2} + B\right)\right]. \qquad (19)

Since m = \langle M\rangle/N, this yields the self-consistency condition

m = \tanh\left[\beta\left(\frac{J\gamma m}{2} + B\right)\right]. \qquad (20)
For B = 0, this becomes

m = \tanh(\beta J\gamma m/2). \qquad (21)

[Figure: graphical solution of (21), showing the intersection of the line y = m with the curve y = \tanh(\beta J\gamma m/2).]
Let us try to find an analytical expression for m in some approximation, valid when m has a small non-zero value. Expanding tanh in a series for small argument, and setting B = 0, we obtain

m \approx \beta J\gamma m/2 - (\beta J\gamma m/2)^3/3 \qquad (22)

Defining T_c = J\gamma/2k, so that \beta J\gamma/2 = T_c/T, this becomes

m \approx \frac{T_c}{T}m - \frac{1}{3}\left(\frac{T_c}{T}\right)^3 m^3 \qquad (23)

which has the non-trivial solution

m = \sqrt{3}\,\frac{T}{T_c}\left(1 - \frac{T}{T_c}\right)^{1/2} \qquad (24)

Close to T_c, where T/T_c \approx 1, this gives

m \approx \sqrt{3}\left(1 - \frac{T}{T_c}\right)^{1/2} \qquad (25)
This relation implies that as the temperature goes below T_c, the magnetization starts from zero and grows as (1 - T/T_c)^{1/2}, even in the absence of an external field. Generally speaking, the order parameter \psi in a phase transition, close to the transition temperature, goes as \psi = \psi_0(1 - T/T_c)^{\beta}, where \beta is a critical exponent. The Ising model in mean-field theory yields \beta = 0.5. Real experiments on ferromagnetic materials show that \beta \approx 0.33. So, our simplified model gives a value of \beta which is not drastically different from the experimental value. This shows that the Ising model, despite its simplicity, captures the essential physics of phase transitions.
We will now attempt to determine the behavior of the magnetization m at all temperatures below T_c. Equation (21) can be written as m = \tanh(mT_c/T). We can obtain m by numerically finding the zeros of the function \tanh(mT_c/T) - m for various values of T. This can be done with a simple computer program using bracketing and bisection.

[Figure: the mean-field magnetization m(T) versus T/T_c, computed numerically, falling from m = 1 at T = 0 to zero at T = T_c; the experimental magnetization curves of the three ferromagnets iron, nickel and cobalt show the same qualitative shape.]

Our mean-field curve qualitatively agrees quite well with the experimental data.
A better agreement is expected if the Ising model is solved without approximations, or with
a better approximation.
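The bracketing-and-bisection computation mentioned above can be sketched as follows: for T < T_c, f(m) = \tanh(mT_c/T) - m is positive near m = 0 and negative at m = 1, so this bracket always contains the non-trivial root.

```python
import math

def magnetization(t_reduced, tol=1e-12):
    """Solve m = tanh(m / t) for the reduced temperature t = T/T_c
    by bisection on f(m) = tanh(m / t) - m over the bracket [1e-6, 1]."""
    if t_reduced >= 1.0:
        return 0.0                      # only the m = 0 solution above T_c
    f = lambda m: math.tanh(m / t_reduced) - m
    lo, hi = 1e-6, 1.0                  # f(lo) > 0 and f(hi) < 0 below T_c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for t in (0.2, 0.5, 0.8):
    print(t, magnetization(t))          # m rises towards 1 as T drops
```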
Magnetic susceptibility

Let us now look at the magnetic susceptibility of the Ising model. To do that, we should look at the case B \neq 0. Equation (20) can be written as

m = \tanh\left(\frac{mT_c}{T} + \frac{B}{kT}\right). \qquad (26)

Magnetic susceptibility is defined as \chi = \left.\frac{\partial m}{\partial B}\right|_{B=0}. We differentiate both sides of the above equation with respect to B:
\frac{\partial m}{\partial B} = \frac{\partial}{\partial B}\tanh\left(\frac{mT_c}{T} + \frac{B}{kT}\right)
 = \frac{1}{\cosh^2\left(\frac{mT_c}{T} + \frac{B}{kT}\right)}\left(\frac{T_c}{T}\frac{\partial m}{\partial B} + \frac{1}{kT}\right) \qquad (27)

Setting B = 0,

\chi = \frac{1}{\cosh^2\left(\frac{mT_c}{T}\right)}\left(\frac{T_c}{T}\chi + \frac{1}{kT}\right) \qquad (28)

For T > T_c, without any external field, the magnetization m is zero. The above equation then simplifies to yield

\chi = \frac{1}{k}\frac{1}{T - T_c} \qquad (29)
This is the well-known Curie-Weiss law, which is valid for temperatures above the transition temperature. For T < T_c, m has no closed form, and hence an analytical expression for \chi cannot be obtained.
following:

1. Identify an order parameter: Identify a quantity \psi which characterizes the phase transition. It is zero in the phase which is called "disordered" in some kinds of phase transitions, and is non-zero in the "ordered" phase. In a ferromagnet, the order parameter is the macroscopic magnetization: it is zero in the paramagnetic phase, and non-zero in the ferromagnetic phase. In a liquid-solid transition, it could be the density minus the density in the liquid phase, or the amplitude of the Fourier modes in a crystal. It could be the degree of orientation in a nematic liquid crystal, or the density of superconducting electrons in a superconductor.
2. Assume a free energy functional: Assume that the free energy is a functional of the order parameter, and can be written in the form

F = F_0(T) + F_L(T, \psi) \qquad (1)

where F_0 is a smooth function of temperature which does not depend on the order parameter, and F_L contains all the dependence on \psi.

3. Construction of F_L: Assume that F_L is an analytic function of \psi, so that it can be expanded in a power series in \psi. The expansion should be such that it respects all possible symmetries associated with the order parameter. This will typically include translational and rotational invariance.

4. Temperature dependence of F_L: Of all the values the order parameter can take, the one which describes equilibrium is the one which minimizes the free energy. So, the free energy functional has to be minimized with respect to the order parameter.
Let us now carry out Landau's recipe for studying phase transitions in detail. We first write

F = F_0(T) + \frac{1}{2}g_2(T)\psi^2 + \frac{1}{4}g_4(T)\psi^4 + \frac{1}{6}g_6(T)\psi^6 + \ldots \qquad (2)
Odd powers are left out, keeping in mind the fact that the sign of the order parameter is irrelevant; for example, a ferromagnetic system is equally likely to have magnetization in any direction. Ignoring the \psi^6 and higher order terms in the series, if we minimize the free energy with respect to \psi, we obtain

\frac{\partial F}{\partial\psi} = g_2(T)\psi + g_4(T)\psi^3 = 0 \qquad (3)

which has the solutions

\psi = 0, \qquad \psi^2 = -g_2/g_4 \qquad (4)
T = Tc ,
20
eter should become zero, if one is going from the ordered to the disordered
10
FL
g2 (T )
and
g4 (T )
is temperature inde-
-5
g2 (Tc ) = 0,
if
is to be zero at
expand
g2 (T )
(5)
T = Tc .
-10
-3
-2
-1
Let us
in a Taylor series in
T,
around
T = Tc ,
g2 (T ) g2 (Tc ) + (T Tc )
g2 (T )
|T =Tc
T
= (T Tc )a
(6)
g2 (T )
|T =Tc . Since g4 (T ) is assumed to be temperature
T
independent, we equate it to a constant, g4 (T ) = b. The free energy functional can now
where
is a constant given by
a=
as
1
1
F = F0 (T ) + a(T Tc ) 2 + b 4
2
4
We initially assume both
and
(7)
Minimizing F with respect to ψ gives

∂F/∂ψ = a(T − Tc) ψ + b ψ³ = 0    (8)

ψ (a(T − Tc) + b ψ²) = 0    (9)

ψ = 0,    ψ = ±√(a(Tc − T)/b)    (10)

At ψ = 0,

∂²F/∂ψ² = a(T − Tc).

Thus, for T > Tc, ψ = 0 is a minimum of the free energy, while for T < Tc it is a maximum.
So, for T > Tc, ψ = 0 and for T < Tc, ψ = ±√(a(Tc − T)/b) are the minima of the
free energy, and hence describe the equilibrium. So, we see that while above Tc the order
parameter is zero, there is a phase transition which makes the order parameter non-zero
below Tc.
The equilibrium free energy is then

F = F0(T)                            for T > Tc
F = F0(T) − (1/4) a²(Tc − T)²/b      for T < Tc    (11)

Notice that ∂F/∂T = ∂F0/∂T at T = Tc whether you approach Tc from above or below: the first
derivative of the free energy is continuous. However, the second derivative is given by

∂²F/∂T² = ∂²F0/∂T² |_{T=Tc}               for T → Tc⁺
∂²F/∂T² = ∂²F0/∂T² |_{T=Tc} − a²/2b       for T → Tc⁻    (12)

This shows that the second derivative of the free energy is discontinuous at T = Tc. Hence
this is a second-order phase transition.
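The minimization leading to (10) is easy to check numerically. The sketch below uses
arbitrarily chosen illustrative values of a, b and Tc (they are not from the text, and F0
is set to zero), minimizes the quartic free energy (7) on a grid of ψ values, and compares
the result with the analytic minimum ψ = ±√(a(Tc − T)/b):

```python
import numpy as np

# Illustrative parameter values (not from the text); F0 is set to zero.
a, b, Tc = 2.0, 4.0, 1.0

def F(psi, T):
    """Quartic Landau free energy of (7): a(T-Tc)*psi^2/2 + b*psi^4/4."""
    return 0.5 * a * (T - Tc) * psi**2 + 0.25 * b * psi**4

psi = np.linspace(-2.0, 2.0, 200001)   # grid of order-parameter values

# Below Tc the minimum sits at psi = +-sqrt(a(Tc-T)/b), from (10).
T_cold = 0.5
psi_cold = psi[np.argmin(F(psi, T_cold))]
print(abs(abs(psi_cold) - np.sqrt(a * (Tc - T_cold) / b)) < 1e-3)  # True

# Above Tc the only minimum is psi = 0.
T_hot = 1.5
psi_hot = psi[np.argmin(F(psi, T_hot))]
print(abs(psi_hot) < 1e-3)  # True
```

A grid search is used instead of a root-finder so that the double-well structure below Tc
is visible directly; `argmin` lands on one of the two symmetric minima.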
A negative coefficient of the ψ⁴ term indicates that the free energy would keep decreasing
with increasing ψ, an unphysical scenario. Here we need to take into account the next
higher order term in the expansion. Doing that,

F = F0(T) + (1/2) a(T − T0) ψ² − (1/4) b ψ⁴ + (1/6) c ψ⁶    (13)

Here we have replaced Tc by T0 because, as we shall see, T0 is not the transition
temperature. In order to find the minima of the free energy, we differentiate (13) with
respect to ψ:

∂F/∂ψ = a(T − T0) ψ − b ψ³ + c ψ⁵ = 0    (14)

The solutions are

ψ = 0    and    ψ² = [b ± √(b² − 4ca(T − T0))]/(2c)    (15)

Here, if ψ = 0 is a minimum, then ψ² = [b + √(b² − 4ca(T − T0))]/(2c) is another
minimum, and ψ² = [b − √(b² − 4ca(T − T0))]/(2c) is a maximum lying between the
two minima.
[Figure: the Landau free energy FL plotted against ψ for a sequence of temperatures. The
graphs from top to bottom are in decreasing order of temperature. In the topmost curve,
there is only one minimum, at ψ = 0.]

At T = Tc, the value of ψ at the second minimum is finite, sufficiently away from zero. So,
the order parameter takes a discontinuous jump from ψ = 0 to a finite value, at T = Tc.
In a second-order transition, by contrast, the order parameter grows from zero gradually
as the temperature is lowered through T = Tc. So, one can say in general that a second
order phase transition is a continuous phase transition, and a first order phase transition
is a discontinuous one. As an example, the density of a liquid changes discontinuously as
it freezes into a solid. At sufficiently low temperature (lower than Tc), the minimum at
ψ = 0 disappears; it becomes a maximum. We explore all this in more detail in the
following discussion.
Let us now find out the temperature at which the new minimum of the free energy becomes
equal to the one at ψ = 0. At ψ = 0, FL(ψ) is zero. We take the expression for the Landau
free energy, and put it equal to zero:

(1/2) a(T − T0) ψ² − (1/4) b ψ⁴ + (1/6) c ψ⁶ = 0

a(T − T0) − (1/2) b ψ² + (1/3) c ψ⁴ = 0    (16)

The second equation in the above does not include the solution ψ = 0.
Next we take the expression of ψ² at the new minimum,
ψ² = [b + √(b² − 4ca(T − T0))]/(2c), plug it into the equation above, and thus obtain
the temperature Tc at which the new minimum becomes equal to the one at ψ = 0:

a(Tc − T0) − (b/2) [b + √(b² − 4ca(Tc − T0))]/(2c)
    + (c/3) {[b + √(b² − 4ca(Tc − T0))]/(2c)}² = 0    (17)

Solving this, we get

Tc = T0 + 3b²/(16ac)    (18)
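Result (18) can be verified symbolically: substituting Tc = T0 + 3b²/(16ac) into the
minimum of (15) and then into condition (16) should give identically zero. A minimal check
with sympy (the symbols are the same a, b, c, T0 as in the text; this is a verification
sketch, not part of the original derivation):

```python
import sympy as sp

a, b, c, T0 = sp.symbols('a b c T0', positive=True)
Tc = T0 + 3 * b**2 / (16 * a * c)      # claimed transition temperature (18)

# Order parameter at the ordered minimum, from (15), evaluated at T = Tc.
psi2 = (b + sp.sqrt(b**2 - 4 * c * a * (Tc - T0))) / (2 * c)

# The degeneracy condition (16) should vanish at T = Tc.
cond = a * (Tc - T0) - b * psi2 / 2 + c * psi2**2 / 3
print(sp.simplify(cond))  # -> 0
```

Along the way one also finds ψ² = 3b/(4c) at T = Tc, the finite jump of the order
parameter used later in the text.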
This is the temperature at which the phase transition takes place. Below Tc the system is
in the ordered phase. The solution ψ² = [b − √(b² − 4ca(T − T0))]/(2c) is the point at
which the free energy has a hump. Notice that at T = T0, this point shifts to ψ = 0. At
this temperature, ψ = 0 is no longer a local minimum; below T = T0 it becomes a maximum.
For Tc > T > T0, the ordered phase has a lower free energy, but the disordered phase,
characterized by ψ = 0, is a local minimum, and hence a locally stable state. This means
that the ordered phase is the most stable one, but the system may also be trapped in the
disordered phase. This is called the supercooled-liquid state, in which the system is at a
temperature below Tc, but continues to remain in the disordered phase as one lowers the
temperature. At such temperatures the system should normally be in the ordered state. So,
T0 is the lowest temperature down to which the disordered phase can survive as a locally
stable state. For T > Tc, ψ = 0 is the global minimum, but the ordered state, characterized
by ψ² = [b + √(b² − 4ca(T − T0))]/(2c), is also a local minimum of the free energy, and
hence a locally stable state. This is the so-called superheated-solid state. In this state,
the system is above its melting point, but is still trapped in the solid state. At
T = T0 + b²/(4ac) = Tc + b²/(16ac), the hump merges with the second minimum of the
free energy, to form a shoulder on the curve. At this temperature, and above it, the free
energy has only one minimum, which is at ψ = 0.
The first derivative of the free energy with respect to temperature is

∂F/∂T = ∂F0/∂T + (1/2) a ψ²    (19)

If we approach T = Tc from above, ψ = 0, giving us

∂F/∂T = ∂F0/∂T    (20)

If we approach T = Tc from below, ψ² = [b + √(b² − 4ca(Tc − T0))]/(2c) = 3b/(4c),
giving us

∂F/∂T = ∂F0/∂T + (1/2) a (3b/4c) = ∂F0/∂T + 3ab/(8c)    (21)

So we see that the first derivative of free energy is discontinuous at T = Tc. This indicates
that the transition is of first order. The first derivative of free energy being discontinuous
reflects in the second derivative of free energy becoming infinite at T = Tc.
Now the entropy of a system is related to the free energy by the relation S = −∂F/∂T. If
the first derivative of free energy is discontinuous across the phase transition, it will result
in a latent heat of the transition given by Q = T ΔS.

Landau's approach to phase transitions is thus able to explain the broad features of
first order and second order phase transitions.
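The discontinuous jump of the order parameter at Tc in the sixth-order model (13) can
also be seen numerically. A minimal sketch with illustrative coefficients a = b = c = 1
and T0 = 0 (chosen only for the example, not values from the text):

```python
import numpy as np

# Illustrative coefficients (not from the text); F0 is set to zero.
a, b, c, T0 = 1.0, 1.0, 1.0, 0.0
Tc = T0 + 3 * b**2 / (16 * a * c)     # transition temperature from (18)

def FL(psi, T):
    """Landau part of (13): a(T-T0)*psi^2/2 - b*psi^4/4 + c*psi^6/6."""
    return 0.5*a*(T - T0)*psi**2 - 0.25*b*psi**4 + (c/6.0)*psi**6

# At T = Tc the two minima are degenerate: FL vanishes at psi^2 = 3b/4c.
psi_jump = np.sqrt(3 * b / (4 * c))
print(abs(FL(psi_jump, Tc)) < 1e-12)   # True

# Just below/above Tc the equilibrium psi jumps between ~psi_jump and 0.
psi = np.linspace(0.0, 1.5, 150001)
below = psi[np.argmin(FL(psi, Tc - 1e-3))]
above = psi[np.argmin(FL(psi, Tc + 1e-3))]
print(above)                            # 0.0
print(abs(below - psi_jump) < 0.05)     # True
```

Contrast this with the quartic (second-order) case, where the equilibrium ψ goes to zero
continuously as T → Tc from below.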
Statistical Mechanics: Problems 9.1
The Hamiltonian of one spin is given by H = −gS μB B·S/ℏ = −gS μB B Sz/ℏ. The energy
eigenvalues are −gS μB B m, where m1, m2 ... can take values −1, 0, +1 each. Summing over
microstates would amount to summing over these values. The canonical partition function
can then be written as

Z = Σ_{m1=−1}^{+1} Σ_{m2=−1}^{+1} ... Σ_{mN=−1}^{+1} Π_{i=1}^{N} e^{β gS μB B mi}
  = Π_{i=1}^{N} Σ_{mi=−1}^{+1} e^{β gS μB B mi}    (1)
  = [1 + 2 cosh(β gS μB B)]^N
Magnetization in any microstate is given just by the sum of the magnetic moments of
all spins, M(m1, m2 ... mN) = gS μB (m1 + m2 + m3 ... + mN). Average magnetization
can be calculated by taking the ensemble average of this quantity:

⟨M⟩ = (1/Z) Σ_{m1=−1}^{+1} ... Σ_{mN=−1}^{+1} M(m1, m2 ... mN) e^{βB M(m1, m2 ... mN)}

It should be noticed that the sum in the above equation can also be obtained by
taking a derivative of Z with respect to B, and multiplying by 1/β:

⟨M⟩ = (1/βZ) ∂Z/∂B = (1/β) ∂log Z/∂B    (2)
Plugging the expression for Z from (1) in the above equation, we get

⟨M⟩ = 2N gS μB sinh(β gS μB B) / [1 + 2 cosh(β gS μB B)]    (3)
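Formulas (1) and (3) can be checked by brute-force enumeration of microstates for a small
N. In the sketch below, x stands for the dimensionless combination β gS μB B, and the
magnetization is measured in units of gS μB (the numerical values are illustrative, not
from the text):

```python
import numpy as np
from itertools import product

x, N = 0.7, 4   # x = beta*gS*muB*B (dimensionless); small N so 3^N is enumerable

# Brute-force sums over all 3^N microstates, m_i in {-1, 0, +1}.
Z = 0.0
M_weighted = 0.0
for ms in product((-1, 0, 1), repeat=N):
    w = np.exp(x * sum(ms))        # Boltzmann weight e^{x * sum(m_i)}
    Z += w
    M_weighted += sum(ms) * w      # total moment, in units of gS*muB

# Compare with the closed forms (1) and (3) in the same units.
print(abs(Z - (1 + 2*np.cosh(x))**N) < 1e-9)                        # True
print(abs(M_weighted/Z - 2*N*np.sinh(x)/(1 + 2*np.cosh(x))) < 1e-9) # True
```

The factorization of Z over spins is exactly what the interchange of sum and product in
(1) expresses.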
2. Problem: Let there be a quantum mechanical rotator with a Hamiltonian H = L²/(2I).
Assuming that the rotator can take only two angular momentum values, l = 0 and l = 1,
calculate the average energy in the canonical ensemble.
Solution: Eigenvalues of the Hamiltonian can be obtained by using the simultaneous
eigenstates of L² and Lz, which are denoted by |lm⟩. These states are also eigenstates
of H:

H|lm⟩ = [ℏ² l(l + 1)/(2I)] |lm⟩

There are 2l + 1 values of m corresponding to each value of l. Eigenvalues do not
depend on m, and hence the energy levels are (2l + 1)-fold degenerate. The partition
function can thus be written as

Z = Σ_{l=0}^{1} (2l + 1) exp[−β ℏ² l(l + 1)/(2I)]    (4)
  = 1 + 3 exp(−β ℏ²/I)

The average energy is then

⟨E⟩ = −∂log Z/∂β = −(∂/∂β) log[1 + 3 exp(−β ℏ²/I)]
    = (3ℏ²/I) e^{−βℏ²/I} / [1 + 3 e^{−βℏ²/I}]    (5)
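Expression (5) can be cross-checked numerically against −∂(log Z)/∂β. A short sketch in
units where ℏ²/I = 1, with an arbitrarily chosen β (an illustrative value, not from the
text):

```python
import numpy as np

beta = 0.9                      # illustrative inverse temperature; units hbar^2/I = 1

Z = 1 + 3*np.exp(-beta)         # (4): l = 0 term plus the 3-fold degenerate l = 1 level
E_avg = 3*np.exp(-beta) / Z     # closed form (5) in these units

# Numerical derivative of log Z with respect to beta as a cross-check.
h = 1e-6
logZ = lambda b: np.log(1 + 3*np.exp(-b))
dlogZ = (logZ(beta + h) - logZ(beta - h)) / (2*h)
print(abs(E_avg + dlogZ) < 1e-8)   # True: E_avg = -d(log Z)/d(beta)
```

At large β the average energy goes to zero (ground state l = 0), while at small β it
approaches 3/4 of the level spacing, as the four states become equally populated.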
4. Question: A simple harmonic one-dimensional oscillator has energy levels En =
(n + 1/2)ℏω, where ω is the characteristic oscillator (angular) frequency and n =
0, 1, 2, . . .

(a) Suppose the oscillator is in thermal contact with a heat reservoir at temperature T.
Find the mean energy of the oscillator as a function of the temperature T, for the
cases kT ≫ ℏω and kT ≪ ℏω.

(b) For a two-dimensional oscillator, n = nx + ny, where Enx = (nx + 1/2)ℏωx
and Eny = (ny + 1/2)ℏωy, nx = 0, 1, 2, . . . and ny = 0, 1, 2, . . . , what is the
partition function for this case for any value of temperature? Reduce it to the
degenerate case ωx = ωy.

Answer (a): The partition function can be written as

Z = Σ_{n=0}^{∞} e^{−β(n+1/2)ℏω} = e^{−βℏω/2} Σ_{n=0}^{∞} e^{−βnℏω}
  = e^{−βℏω/2} · 1/(1 − e^{−βℏω}) = 1/(e^{βℏω/2} − e^{−βℏω/2}) = 1/[2 sinh(βℏω/2)]

The mean energy is

⟨E⟩ = −∂log Z/∂β = (ℏω/2) coth(βℏω/2)    (6)

For kT ≫ ℏω, coth(βℏω/2) ≈ 2kT/ℏω, so ⟨E⟩ ≈ kT, the classical equipartition result;
for kT ≪ ℏω, coth(βℏω/2) → 1, so ⟨E⟩ ≈ ℏω/2, the ground-state energy.
Answer (b): The partition function factorizes:

Z = Σ_{nx=0}^{∞} Σ_{ny=0}^{∞} e^{−β(nx+1/2)ℏωx} e^{−β(ny+1/2)ℏωy}
  = [Σ_{nx=0}^{∞} e^{−β(nx+1/2)ℏωx}] [Σ_{ny=0}^{∞} e^{−β(ny+1/2)ℏωy}]
  = 1/[4 sinh(βℏωx/2) sinh(βℏωy/2)]

For the degenerate case ωx = ωy = ω,

Z = 1/[4 sinh²(βℏω/2)]

This is exactly the same as the partition function of two independent, similar, one-dimensional harmonic oscillators.
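Both closed forms can be checked against truncated direct sums over the levels. A sketch
in units where ℏω = 1, with an illustrative β (values chosen for the example, not taken
from the text):

```python
import numpy as np

beta = 1.3   # illustrative inverse temperature; units with hbar*omega = 1

# (a) Z = 1/(2 sinh(beta/2)) versus a truncated sum over levels n = 0..199.
Z_closed = 1 / (2 * np.sinh(beta/2))
Z_sum = sum(np.exp(-beta * (n + 0.5)) for n in range(200))
print(abs(Z_closed - Z_sum) < 1e-12)   # True

# Mean energy (6): <E> = (1/2) coth(beta/2), versus the level-weighted sum.
E_closed = 0.5 / np.tanh(beta/2)
E_sum = sum((n + 0.5) * np.exp(-beta * (n + 0.5)) for n in range(200)) / Z_sum
print(abs(E_closed - E_sum) < 1e-12)   # True

# (b) Degenerate 2-D case: Z2 = 1/(4 sinh^2(beta/2)), the square of the 1-D Z.
print(abs(1 / (4 * np.sinh(beta/2)**2) - Z_closed**2) < 1e-12)   # True
```

The truncation at 200 levels is harmless here because the terms decay as e^{−βn}; at much
smaller β the sum would have to be taken further.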