Featured post

Welcome to Physics Pages

This blog consists of my notes and solutions to problems in various areas of mainstream physics. An index to the topics covered can be found in the links in the sidebar on the right, or in the menu at the top of the page.

This isn’t a “popular science” site, in that most posts use a fair bit of mathematics to explain their concepts. Thus this blog aims mainly to help those who are learning or reviewing physics in depth. More details on what the site contains and how to use it are on the welcome page.

Despite Stephen Hawking’s caution that every equation included in a book (or, I suppose, in a blog) would halve the readership, this blog has proved very popular since its inception in December 2010. Details of the number of visits and distinct visitors are given on the hit statistics page.

Many thanks to my loyal followers and best wishes to everyone who visits. I hope you find it useful. Constructive criticism (or even praise) is always welcome, so feel free to leave a comment in response to any of the posts.

Before leaving a comment, you may find it useful to read the “Instructions for commenters”.

Temperature of an Einstein solid

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 3.5.

The definition of temperature in terms of entropy is

\displaystyle  \frac{1}{T}\equiv\frac{\partial S}{\partial U} \ \ \ \ \ (1)

Given a formula for the entropy of a system, we can use this relation to work out its temperature. As an example, we’ll look at the Einstein solid in the low temperature case where the number {q} of energy quanta (each of size {\epsilon}) is much less than the number {N} of oscillators: {q\ll N}. The multiplicity of such a system is approximately

\displaystyle  \Omega\approx\left(\frac{Ne}{q}\right)^{q} \ \ \ \ \ (2)

The total energy of the system is {U=q\epsilon} so we can write this in terms of {U} as

\displaystyle  \Omega\approx\left(\frac{N\epsilon e}{U}\right)^{q} \ \ \ \ \ (3)

so the entropy is

\displaystyle S = k\ln\Omega \ \ \ \ \ (4)
\displaystyle = \frac{kU}{\epsilon}\left(\ln\left(\epsilon N\right)+1-\ln U\right) \ \ \ \ \ (5)

The partial derivative in 1 implies that we’re holding {N} fixed, so we get, using the product rule:

\displaystyle \frac{1}{T} = \frac{k}{\epsilon}\left(\ln\left(\epsilon N\right)+1-\ln U\right)-\frac{k}{\epsilon} \ \ \ \ \ (6)
\displaystyle = \frac{k}{\epsilon}\left(\ln\left(\epsilon N\right)-\ln U\right) \ \ \ \ \ (7)
\displaystyle \ln U = \ln\left(\epsilon N\right)-\frac{\epsilon}{kT} \ \ \ \ \ (8)
\displaystyle U = N\epsilon e^{-\epsilon/kT} \ \ \ \ \ (9)

Since we assumed {q\ll N}, this is equivalent to requiring {U=q\epsilon\ll N\epsilon}, so this result is valid only for low temperatures, as we’d expect. [Note that for high temperatures, {e^{-\epsilon/kT}\rightarrow1} so {U\rightarrow N\epsilon} which violates the assumption {U\ll N\epsilon}.]

Schroeder works out the energy-temperature relation for the other extreme {U\gg N\epsilon} in his section 3.1, with the result

\displaystyle  U=NkT \ \ \ \ \ (10)

In this case, there are enough energy quanta that every degree of freedom in all the oscillators is excited. Since each oscillator has two degrees of freedom (kinetic and potential), this agrees with the equipartition theorem, which says that every quadratic degree of freedom has an associated {\frac{1}{2}kT} of energy.
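As a quick numerical check (this isn’t part of the problem), we can compare both limiting formulas with the temperature obtained directly from the exact multiplicity {\Omega=\binom{q+N-1}{q}}, estimating {\partial S/\partial U} with a centred difference over one quantum. Here’s a short Python sketch; the values of {N} and {q} are purely illustrative:

```python
from math import comb, exp, log

N, eps = 1000, 1.0               # illustrative: 1000 oscillators; energy in units of eps

def S(q):
    # exact entropy in units of k: S/k = ln C(q + N - 1, q)
    return log(comb(q + N - 1, q))

def kT(q):
    # 1/T = dS/dU, estimated with a centred difference over one quantum
    return 2 * eps / (S(q + 1) - S(q - 1))

for q in (10, 30, 5000, 20000):  # q << N (low T) and q >> N (high T)
    t = kT(q)
    print(q * eps,                   # exact energy U = q*eps
          N * eps * exp(-eps / t),   # low-temperature formula 9
          N * t)                     # high-temperature formula 10
```

For {q\ll N} the exact energy tracks 9 closely, while for {q\gg N} it approaches 10, as derived above.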

Thermal equilibrium from entropy plots

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 3.3 – 3.4.

The definition of temperature in terms of entropy is

\displaystyle  \frac{1}{T}\equiv\frac{\partial S}{\partial U} \ \ \ \ \ (1)

Systems in thermal equilibrium have equal slopes in their entropy-versus-energy graphs and therefore have the same temperature.

We can use this fact to determine, from the entropy-versus-energy plots, the behaviour of two systems when placed in thermal contact, so they can exchange energy but nothing else. Suppose we have two systems with entropy plots as shown:

Suppose we start both systems off at energies {U_{A}=U_{B}=6}, so that the slope {\partial S/\partial U} is larger for {A} than {B}. The systems will evolve by exchanging energy until these two slopes are equal. To figure out which way energy is exchanged, we need to impose the constraint that {U_{A}+U_{B}=U_{total}=\mbox{constant}}. Thus if we increase {U_{A}} we must decrease {U_{B}} by the same amount, and vice versa. To make the slopes equal, we could therefore increase the slope for {B} and decrease the slope for {A}, which we can do by decreasing {U_{B}} and increasing {U_{A}} by the same amount. [Note that the slopes of both plots decrease as we increase the energy.] That is, system {B} will transmit some of its energy to system {A} until the slopes become equal. [Although Schroeder says we’re not supposed to use the word “temperature” in the explanation, clearly what we’re doing is decreasing the temperature of {B} and increasing that of {A} until the temperatures are equal. The slope {\partial S/\partial U} is just another way of referring to the temperature.]

The two systems above are ‘normal’ in the sense that adding energy to them increases their temperature, since {\frac{1}{T}=\partial S/\partial U} decreases as {U} increases. For some systems, such as those bound by gravity, however, the temperature actually decreases as we add energy, since the energy gets stored as potential energy and the average kinetic energy is reduced. In other words, the heat capacity is negative. In that case, the entropy plot would look something like this:

Suppose we also start this system off at a value of {U_{C}} which gives it the same slope as system {A} at {U_{A}=6} (a value around {U_{C}=3} looks about right). This places systems {A} and {C} in thermal equilibrium, but is it stable? Again, we’re subject to the constraint {U_{A}+U_{C}=U_{total}=\mbox{constant}}. If we increase {U_{A}} by a bit and decrease {U_{C}} by the same amount, the slopes of both curves decrease, meaning both systems get hotter, and if we transfer energy in the opposite direction, the slopes both increase, meaning both systems decrease in temperature. Whether or not this results in instability depends on the relative changes in temperature. Suppose we make both systems hotter, but that {A} gets hotter than {C}. Then energy should flow spontaneously back from {A} to {C} and in this case, the equilibrium should be stable. However, if {C} gets hotter than {A}, then energy will continue to flow from {C} to {A} and the equilibrium is unstable.

If both systems have negative heat capacity, then if we start with both systems at the same temperature and transfer a bit of energy from one to the other, the system that loses energy gets hotter and the system that gains energy gets colder, so energy will continue to be transferred in the same direction, resulting in the equilibrium being unstable.

In the unstable cases, there is a limit to the amount of energy transferred, of course, since there is only a finite amount of energy in the system; once one of the systems has absorbed it all, the transfer stops. The final state might be considered a stable equilibrium of sorts, although it’s not a thermal equilibrium since the two systems are at different temperatures.
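To make the instability argument concrete, here’s a small Python sketch using two made-up entropy curves: a concave one for the ‘normal’ system {A} and a convex one for the negative-heat-capacity system {C}. The functional forms are purely illustrative, chosen only for their curvature:

```python
from math import sqrt

# Toy entropy curves, S in units of k (hypothetical shapes):
# A is 'normal' (concave), C has negative heat capacity (convex).
def S_A(U): return sqrt(U)
def S_C(U): return U * U / 20

def slope(S, U, dU=1e-6):
    # numerical dS/dU, i.e. 1/T
    return (S(U + dU) - S(U - dU)) / (2 * dU)

UA, UC = 6.0, 2.0412    # chosen so the two slopes start out equal
print(slope(S_A, UA), slope(S_C, UC))    # ~0.2041 each: thermal equilibrium

d = 0.1                 # now transfer a little energy from C to A
print(slope(S_A, UA + d), slope(S_C, UC - d))
# A's slope (~0.2024) now exceeds C's (~0.1941), so total entropy increases
# by transferring still more energy from C to A: the equilibrium is unstable.
```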

Zeroth law of thermodynamics

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 3.2.

The definition of temperature in terms of entropy is

\displaystyle  \frac{1}{T}\equiv\frac{\partial S}{\partial U} \ \ \ \ \ (1)

Systems in thermal equilibrium have equal slopes in their entropy-versus-energy graphs and therefore have the same temperature.

A statement often known as the zeroth law of thermodynamics states that if a system {A} is separately in thermal equilibrium with two other systems {B} and {C}, then {B} and {C} are in thermal equilibrium with each other. This is fairly obvious from the definition of temperature above, since any two systems in thermal equilibrium have the same values of {\partial S/\partial U}, so systems {B} and {C} must both have the same slope as system {A}, and therefore have the same slopes as each other. [I’m not sure that constitutes a ‘proof’, but it’s the best I can do.]

The zeroth law is the basis of the thermometer, for it states that a system {A} (the thermometer) can be placed in thermal equilibrium with any number of other systems that are all in thermal equilibrium with each other, and it will always give the same reading.

Temperature defined from entropy

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 3.1.

The concept of entropy as {k\ln\Omega}, where {\Omega} is the multiplicity of the macrostate in which the system is found, can be used to define the temperature of the system. Schroeder gives a good explanation in his section 3.1 so I’ll summarize the argument here.

We’ll use two interacting Einstein solids with {N_{A}=300} and {N_{B}=200}, with {q=100} total energy quanta. The multiplicity function for each solid is

\displaystyle  \Omega_{A,B}=\binom{q_{A,B}+N_{A,B}-1}{q_{A,B}} \ \ \ \ \ (1)

and the total multiplicity of the combined system is

\displaystyle  \Omega_{total}=\Omega_{A}\Omega_{B} \ \ \ \ \ (2)

The entropies are therefore

\displaystyle  S_{A,B,total}=k\ln\Omega_{A,B,total} \ \ \ \ \ (3)

where we pick the subscript for the system we’re interested in. We can plot the three entropy curves as functions of {q_{A}} (remember {q_{B}=q-q_{A}}), to get

Here the top red curve is {S_{total}}, the violet curve is {S_{A}} and the turquoise curve is {S_{B}}. {S_{total}} reaches a maximum at {q_{A}=60} which is the macrostate where the energy is evenly distributed among all the oscillators. At this point, therefore

\displaystyle  \frac{\partial S_{total}}{\partial q_{A}}=0 \ \ \ \ \ (4)

Since {S_{total}=S_{A}+S_{B}}, this implies that

\displaystyle  \frac{\partial S_{A}}{\partial q_{A}}+\frac{\partial S_{B}}{\partial q_{A}}=0 \ \ \ \ \ (5)

Since {q_{A}=q-q_{B}}, {dq_{A}=-dq_{B}} so we can write this as

\displaystyle  \frac{\partial S_{A}}{\partial q_{A}}=\frac{\partial S_{B}}{\partial q_{B}} \ \ \ \ \ (6)

In the small scale system here, it’s not quite right to consider {q_{A}} and {q_{B}} as continuous variables, so the partial derivatives are an approximation. In very large systems, however, the quanta merge into a continuous energy variable {U}, so we can write

\displaystyle  \frac{\partial S_{A}}{\partial U_{A}}=\frac{\partial S_{B}}{\partial U_{B}} \ \ \ \ \ (7)

As the units of entropy are {\mbox{J K}^{-1}} and of {U} are {\mbox{J}}, this derivative has the dimensions of {\mbox{K}^{-1}} or the reciprocal of temperature. We can therefore define temperature to be

\displaystyle  \frac{1}{T}\equiv\frac{\partial S}{\partial U} \ \ \ \ \ (8)

where the partial derivative implies that we hold everything else apart from the energy of the system constant while taking the derivative. Thus two systems that can exchange energy until they reach their most probable macrostate will end up with the same temperature, so they are in thermal equilibrium.

The relation 6 states that the slopes of the entropy-versus-energy curves are equal for the most probable macrostate. In the graph above, this means that the slope of the turquoise line is the negative of the slope of the violet line at {q_{A}=60}, which looks about right if you eyeball the graph. We could prove it by taking the actual derivatives, but we’ll make do with a numerical example.

Suppose each energy quantum has a value of {\epsilon=0.1\mbox{ eV}=1.6\times10^{-20}\mbox{ J}}. We can then estimate the temperatures of the two solids at {q_{A}=60} by calculating the slope of the line connecting the points for {q_{A}=59} and {q_{A}=61}. For solid {B}, we use {q_{B}=100-q_{A}} so the two energy points are {q_{B}=39} and {q_{B}=41}. We get

\displaystyle T_{A} = \frac{61\epsilon-59\epsilon}{S_{A}\left(61\right)-S_{A}\left(59\right)} \ \ \ \ \ (9)
\displaystyle = \frac{2}{160.92-157.35}\frac{\epsilon}{k} \ \ \ \ \ (10)
\displaystyle = 0.560\frac{1.6\times10^{-20}}{1.38\times10^{-23}} \ \ \ \ \ (11)
\displaystyle = 649.5\mbox{ K} \ \ \ \ \ (12)
\displaystyle T_{B} = \frac{41\epsilon-39\epsilon}{S_{B}\left(41\right)-S_{B}\left(39\right)} \ \ \ \ \ (13)
\displaystyle = \frac{2}{107.04-103.49}\frac{\epsilon}{k} \ \ \ \ \ (14)
\displaystyle = 0.563\frac{1.6\times10^{-20}}{1.38\times10^{-23}} \ \ \ \ \ (15)
\displaystyle = 653.2\mbox{ K} \ \ \ \ \ (16)

Thus the two temperatures are essentially equal at {q_{A}=60}; the tiny difference between the two estimates is just the error in approximating the slopes by finite differences.

For {q_{A}=1} we can use the slope between {q_{A}=0} and {q_{A}=2} for solid {A} and {q_{B}=98} and {q_{B}=100} for solid {B}. We get

\displaystyle T_{A} = \frac{2\epsilon-0\epsilon}{S_{A}\left(2\right)-S_{A}\left(0\right)} \ \ \ \ \ (17)
\displaystyle = \frac{2}{10.72-0}\frac{\epsilon}{k} \ \ \ \ \ (18)
\displaystyle = 0.187\frac{1.6\times10^{-20}}{1.38\times10^{-23}} \ \ \ \ \ (19)
\displaystyle = 216.4\mbox{ K} \ \ \ \ \ (20)
\displaystyle T_{B} = \frac{100\epsilon-98\epsilon}{S_{B}\left(100\right)-S_{B}\left(98\right)} \ \ \ \ \ (21)
\displaystyle = \frac{2}{187.53-185.33}\frac{\epsilon}{k} \ \ \ \ \ (22)
\displaystyle = 0.910\frac{1.6\times10^{-20}}{1.38\times10^{-23}} \ \ \ \ \ (23)
\displaystyle = 1055\mbox{ K} \ \ \ \ \ (24)

Here, solid {B} is much hotter than solid {A} so if they are interacting, there would be a strong tendency for {B} to transfer some of its energy to {A} to bring the solids into thermal equilibrium.
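These finite-difference estimates are easy to reproduce from the exact multiplicities; here’s a short Python sketch (not part of the problem) using the rounded constants above:

```python
from math import comb, log

k, eps = 1.38e-23, 1.6e-20       # Boltzmann constant (J/K); 0.1 eV quantum (J)
NA, NB, q = 300, 200, 100

def S(N, qq):
    # entropy S = k ln C(q + N - 1, q), in J/K
    return k * log(comb(qq + N - 1, qq))

def T(N, qq):
    # temperature from a centred finite difference, 1/T = dS/dU
    return 2 * eps / (S(N, qq + 1) - S(N, qq - 1))

print(T(NA, 60), T(NB, 40))      # ~650 K each: equilibrium at qA = 60
print(T(NA, 1),  T(NB, 99))      # ~216 K vs ~1055 K: B much hotter than A
```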

Black hole entropy

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 2.42.

In our study of general relativity, we’ve seen a formula for the entropy of a black hole. Schroeder takes a different approach, outlined in his problem 2.42.

First, we can calculate the radius of a black hole using Newtonian physics (which actually turns out to be the same as the result from general relativity). We take the radius {r} of a black hole of mass {M} to be such that the escape velocity at the surface is equal to the speed of light {c}. Setting the kinetic energy per unit mass at speed {c} equal to the magnitude of the gravitational potential energy per unit mass, we have

\displaystyle \frac{c^{2}}{2} = \frac{GM}{r} \ \ \ \ \ (1)
\displaystyle r = \frac{2GM}{c^{2}} \ \ \ \ \ (2)

If we assume that the entropy of a black hole is {Nk} multiplied by some logarithm, we can use the argument given earlier to say that an order of magnitude estimate of the entropy is

\displaystyle  S\sim Nk \ \ \ \ \ (3)

Therefore, we need an estimate of the number of particles that went into constructing the black hole. The argument goes as follows. Since there is no way to tell what kind of mass or energy went into the construction of the black hole (assuming no charge or angular momentum are involved), we’d like to take the maximum number of particles that could be used. In other words, we’d like to find the lowest energy {mc^{2}} of a massive particle, or {h\nu} for a photon, that can be used. Long wavelength photons have the lowest energy, so if we take the lowest possible energy to be the photon with a wavelength equal to the radius of the black hole, we have

\displaystyle \lambda = r=\frac{2GM}{c^{2}} \ \ \ \ \ (4)
\displaystyle E = h\nu=\frac{hc}{\lambda}=\frac{hc^{3}}{2GM} \ \ \ \ \ (5)

The total energy of the black hole is {Mc^{2}} so the number of such photons is

\displaystyle  N=\frac{Mc^{2}}{E}=\frac{2GM^{2}}{hc} \ \ \ \ \ (6)

An estimate of the entropy is therefore

\displaystyle  S\sim\frac{2GM^{2}}{hc}k \ \ \ \ \ (7)

Apart from a factor of {4\pi^{2}}, this agrees with the actual value:

\displaystyle  S=\frac{8\pi^{2}GM^{2}}{hc}k \ \ \ \ \ (8)

For a solar mass black hole

\displaystyle r = \frac{2\left(6.67\times10^{-11}\right)\left(2\times10^{30}\right)}{\left(3\times10^{8}\right)^{2}}=2964\mbox{ m} \ \ \ \ \ (9)
\displaystyle S = \frac{8\pi^{2}\left(6.67\times10^{-11}\right)\left(2\times10^{30}\right)^{2}}{\left(6.62\times10^{-34}\right)\left(3\times10^{8}\right)}\left(1.38\times10^{-23}\right)=1.46\times10^{54}\mbox{ J K}^{-1} \ \ \ \ \ (10)

This is vastly greater than our order of magnitude estimate for the entropy of the sun, which is around {1.66\times10^{34}\mbox{ J K}^{-1}}.
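Both numbers are easy to check; here’s a short Python sketch using the same rounded constants:

```python
from math import pi

G, c = 6.67e-11, 3.0e8                    # SI units
h, k = 6.62e-34, 1.38e-23
M = 2.0e30                                # one solar mass, kg

r = 2 * G * M / c**2                      # radius, equation 2
S = 8 * pi**2 * G * M**2 * k / (h * c)    # entropy, equation 8
print(r, S)                               # ~2.96e3 m and ~1.46e54 J/K
```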

Entropy of distinguishable molecules

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 2.39.

The entropy of an ideal gas is given by the Sackur-Tetrode formula:

\displaystyle  S=Nk\left[\ln\left(\frac{V}{N}\left(\frac{4\pi mU}{3Nh^{2}}\right)^{3/2}\right)+\frac{5}{2}\right] \ \ \ \ \ (1)

where {V} is the volume, {U} is the energy, {N} is the number of molecules, {m} is the mass of a single molecule and {h} is Planck’s constant.

One of the assumptions made in deriving this formula is that the molecules are indistinguishable, so for any configuration of the molecules in position and momentum space, interchanging any of the molecules makes no difference. This assumption introduces the factor of {N!} in the denominator of the multiplicity function (Schroeder’s equation 2.40):

\displaystyle  \Omega\approx\frac{V^{N}\left(2\pi mU\right)^{3N/2}}{h^{3N}N!\left(3N/2\right)!} \ \ \ \ \ (2)

The Sackur-Tetrode formula is obtained from this by applying Stirling’s approximation to the two factorials in the denominator and throwing away small factors. If we now assume that the molecules are all distinguishable, we can follow through the derivation, but without the {N!}, we start with

\displaystyle  \Omega\approx\frac{V^{N}\left(2\pi mU\right)^{3N/2}}{h^{3N}\left(3N/2\right)!} \ \ \ \ \ (3)

We find that the {\frac{5}{2}} term in 1 changes to {\frac{3}{2}} and the {\frac{V}{N}} factor in the logarithm loses its {N}, so we get

\displaystyle  S_{dist}=Nk\left[\ln\left(V\left(\frac{4\pi mU}{3Nh^{2}}\right)^{3/2}\right)+\frac{3}{2}\right] \ \ \ \ \ (4)

To see what difference this would make, we can compute the entropy of a mole of distinguishable helium atoms at room temperature (300 K) and 1 atmosphere ({1.01\times10^{5}\mbox{ N m}^{-2}}) and compare it with the value of {126\mbox{ J K}^{-1}} for real, indistinguishable helium atoms given in Schroeder’s book. We can use most of the numbers from our earlier calculation of the entropy of argon:

\displaystyle  V=\frac{nRT}{P}=\frac{\left(1\right)\left(8.31\right)\left(300\right)}{1.01\times10^{5}}=0.025\mbox{ m}^{3} \ \ \ \ \ (5)

The internal energy of a monatomic gas is {\frac{1}{2}kT} per molecule per degree of freedom, so for one mole we have

\displaystyle  U=\frac{3}{2}NkT=\frac{3}{2}nRT=3739\mbox{ J} \ \ \ \ \ (6)

The mass of a mole of helium is {4.0026\times10^{-3}\mbox{ kg}}, so with {N=6.02\times10^{23}} we have

\displaystyle  m=\frac{4.0026\times10^{-3}}{6.02\times10^{23}}=6.65\times10^{-27}\mbox{ kg} \ \ \ \ \ (7)

We get

\displaystyle  S_{dist}=\left(6.02\times10^{23}\right)\left(1.38\times10^{-23}\right)\left(67.45+\frac{3}{2}\right)=573\mbox{ J K}^{-1} \ \ \ \ \ (8)

As we’d expect, the entropy is significantly higher if the molecules are distinguishable, since there are many more microstates available to the system.
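As a check, here’s a short Python sketch that evaluates 4 directly with the numbers above:

```python
from math import log, pi

k, h, N = 1.38e-23, 6.62e-34, 6.02e23
V, U = 0.025, 3739.0              # one mole at 300 K and 1 atm
m = 4.0026e-3 / N                 # mass of one helium atom, kg

# the distinguishable version of Sackur-Tetrode, equation 4
S_dist = N * k * (log(V * (4 * pi * m * U / (3 * N * h**2))**1.5) + 1.5)
print(S_dist)                     # ~573 J/K, vs ~126 J/K for indistinguishable atoms
```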

Entropy of mixing

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 2.37 – 2.38.

The entropy of an ideal gas is given by the Sackur-Tetrode formula:

\displaystyle  S=Nk\left[\ln\left(\frac{V}{N}\left(\frac{4\pi mU}{3Nh^{2}}\right)^{3/2}\right)+\frac{5}{2}\right] \ \ \ \ \ (1)

where {V} is the volume, {U} is the energy, {N} is the number of molecules, {m} is the mass of a single molecule and {h} is Planck’s constant.

We can apply this formula to the case where we begin with two different ideal gases {A} and {B}, with a total number {N} of gas molecules divided into two volumes {V_{A}} and {V_{B}}, but at equal pressures and temperatures. Thus the number of type {B} molecules can be expressed as a fraction {x} of the total number so that {N_{B}=xN}, and thus {N_{A}=\left(1-x\right)N}. The volumes can be expressed as the same fractions of the total volume, so that {V_{A}=\left(1-x\right)V} and {V_{B}=xV}.

We now remove the partition between the gases and allow them to mix. Because they were at the same pressure and temperature before they mixed, both {P} and {T} remain unchanged when the gases mix, so the energy of each gas {U_{A,B}} also remains unchanged. For each species of gas, the only change is the volume, which expands to the total volume {V}.

From 1, the change in entropy in a process where only the volume changes is

\displaystyle  \Delta S=S_{f}-S_{i}=Nk\ln\frac{V_{f}}{V_{i}} \ \ \ \ \ (2)

The entropy changes for the two gases are therefore

\displaystyle \Delta S_{A} = \left(1-x\right)Nk\ln\frac{V}{\left(1-x\right)V}=-\left(1-x\right)Nk\ln\left(1-x\right) \ \ \ \ \ (3)
\displaystyle \Delta S_{B} = xNk\ln\frac{V}{xV}=-xNk\ln x \ \ \ \ \ (4)

Thus the total entropy change after mixing, called, appropriately, the entropy of mixing, is

\displaystyle  \Delta S_{mixing}=-Nk\left[x\ln x+\left(1-x\right)\ln\left(1-x\right)\right] \ \ \ \ \ (5)

[Note that both logarithms are negative since {0<x<1}, so {\Delta S_{mixing}>0}.] If {x=\frac{1}{2}}, so that we start out with two equal quantities of gas, the formula reduces to

\displaystyle  \Delta S_{mixing}=Nk\ln2 \ \ \ \ \ (6)

This is the same result as equation 2.54 in Schroeder, since in that equation his {N} is the number of molecules of each gas, not the total number.
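A small Python sketch (illustrative only) shows how the entropy of mixing per molecule varies with the proportion {x}:

```python
from math import log

def dS_mix(x):
    # entropy of mixing per molecule, in units of k (equation 5 divided by N)
    return -(x * log(x) + (1 - x) * log(1 - x))

print(dS_mix(0.5), log(2))   # equal amounts give the maximum, ln 2 ~ 0.693
print(dS_mix(0.1))           # very unequal amounts mix with less entropy gain
```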

It’s worth noting that this formula applies only if the two gases are different, that is, they are distinguishable. If the two gases are the same, there is essentially no change in entropy when we remove the partition. The situation is similar to Example 3 in our earlier post, which dealt with two Einstein solids. Before we remove the partition, the gas in each portion of the volume is overwhelmingly likely to be at or near its most probable state. After removing the partition, the combined gas is also almost certain to be at or near the most probable macrostate for the overall system. Since the gas molecules are indistinguishable, it’s virtually impossible to tell the difference between the states before and after the partition is removed, so the entropies of the two systems are virtually identical. [I don’t understand Schroeder’s explanation following his equation 2.56, where he tries to explain the difference by doubling the amount of gas in what appears to be a fixed volume. This isn’t what happens if you start with a fixed amount of gas divided into two cells and then remove the partition.]

Another way of looking at it is as follows. Suppose we start with a number {N} of identical molecules. (This argument applies to any system in which the molecules all have similar properties and interact with each other in the same way, not just to ideal gases.) The entropy of this system is some value {S_{0}} which may or may not be easy to calculate. Now suppose that at some point in time, we magically change {N_{A}} of these molecules to a different species (which has similar properties to the original species as mentioned). The entropy will increase by {k} times the logarithm of the number of distinct ways we can choose to locate these {N_{A}} molecules among the {N} places available. (The entropy due to the number of possible locations and momenta of the molecules won’t change when we replace {N_{A}} of the molecules by a different species, since that is already accounted for by {S_{0}}. We’re interested only in the extra entropy generated by introducing a second species of molecule.) The number of ways of choosing {N_{A}} locations from a total of {N} is just {\binom{N}{N_{A}}}, so the entropy of mixing is

\displaystyle  \Delta S_{mixing}=k\ln\binom{N}{N_{A}} \ \ \ \ \ (7)

Using Stirling’s approximation for large {N}, and taking {N_{A}=\left(1-x\right)N} as before, we get

\displaystyle \Delta S_{mixing} \approx k\ln\left[\frac{\sqrt{2\pi N}N^{N}e^{-N}}{2\pi N\sqrt{x\left(1-x\right)}N^{N}x^{xN}\left(1-x\right)^{\left(1-x\right)N}}\right] \ \ \ \ \ (8)
\displaystyle = -k\left[\frac{1}{2}\ln\left(2\pi Nx\left(1-x\right)\right)-xN\ln x-\left(1-x\right)N\ln\left(1-x\right)\right] \ \ \ \ \ (9)
\displaystyle \approx -Nk\left[x\ln x+\left(1-x\right)\ln\left(1-x\right)\right] \ \ \ \ \ (10)

(we’ve neglected the first term in the second line since, for large {N}, it is negligible compared to the other two terms), which is the same as 5.
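We can check how good this approximation is with a few lines of Python, comparing {\ln\binom{N}{N_{A}}} (in units of {k}) with 10 for a moderately large {N}:

```python
from math import comb, log

N = 10000
for x in (0.5, 0.2):
    NA = round((1 - x) * N)
    exact = log(comb(N, NA))                            # ln C(N, N_A), units of k
    approx = -N * (x * log(x) + (1 - x) * log(1 - x))   # equation 10
    print(exact, approx)     # agree to within a fraction of a percent
```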

Ongoing work

25 April 2016

All upgrades of plots are now complete. Normal service has resumed.

24 April 2016

At the moment, I’m working through those posts that have Maple-generated plots in them and upgrading the quality of the plots. This involves removing the old plots and uploading new ones to replace them, so occasionally you might find a post where there is a ‘broken image’ icon in place of a plot. This should last for only a couple of minutes or so for each affected post, so try waiting a bit and then refreshing the page before you send in a comment that the image is broken.

Entropy: a few examples

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 2.34 – 2.36.

The entropy of a substance is given as

\displaystyle  S=k\ln\Omega \ \ \ \ \ (1)

where {\Omega} is the number of microstates accessible to the substance.

For a 3-d ideal gas, this is given by the Sackur-Tetrode formula:

\displaystyle  S=Nk\left[\ln\left(\frac{V}{N}\left(\frac{4\pi mU}{3Nh^{2}}\right)^{3/2}\right)+\frac{5}{2}\right] \ \ \ \ \ (2)

where {V} is the volume, {U} is the energy, {N} is the number of molecules, {m} is the mass of a single molecule and {h} is Planck’s constant.

Although this formula looks a bit complicated, we can see that increasing any of {V}, {U} or {N} increases the entropy. For an isothermal expansion, the gas expands quasistatically so that its temperature stays constant. This means that {U=\frac{3}{2}NkT} also stays constant, so that only the volume changes. Since the gas is doing work {W} by expanding, the energy for the work must be provided by an amount of heat {Q} input into the gas to keep the temperature constant. This heat is given by the formula

\displaystyle  Q=NkT\ln\frac{V_{f}}{V_{i}} \ \ \ \ \ (3)

where {V_{i}} and {V_{f}} are the initial and final volumes.

However, from 2, the change in entropy in a process where only the volume changes is

\displaystyle  \Delta S=S_{f}-S_{i}=Nk\ln\frac{V_{f}}{V_{i}} \ \ \ \ \ (4)

Combining these two equations gives

\displaystyle  \Delta S=\frac{Q}{T} \ \ \ \ \ (5)

This relation is valid for the case where the expanding gas does work, so that heat must be input to provide the energy for the work. In a free expansion, the gas expands into a vacuum so does no work (well, technically, after some of the gas has entered the vacuum area, it’s no longer a vacuum so that some work is done, but we’ll assume the vacuum area is very large so we can neglect this). In this case, the internal energy {U} still doesn’t change, since the gas neither absorbs any heat nor does any work, so {\Delta U=Q+W=0}. However, the volume occupied by the gas does increase (and it’s the only thing that changes) so 4 is still valid, although 5 is not.
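As a quick numerical illustration (not in the text), we can verify that 3 and 4 give {\Delta S=Q/T} for an isothermal doubling of the volume of a mole of gas:

```python
from math import log

N, k, T = 6.02e23, 1.38e-23, 300.0   # one mole at room temperature
Vi, Vf = 0.025, 0.050                # isothermal doubling of the volume

Q = N * k * T * log(Vf / Vi)         # heat absorbed, equation 3
dS = N * k * log(Vf / Vi)            # entropy change, equation 4
print(dS, Q / T)                     # both ~5.76 J/K, as equation 5 requires
```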

Another property of 2 is that if the energy {U} drops low enough, the log term can decrease below {-\frac{5}{2}} making {S} negative. This isn’t possible, so the Sackur-Tetrode equation must break down at low energies. For a monatomic ideal gas, {U=\frac{3}{2}NkT}, so this implies that things go wrong for low temperatures. For example, suppose we have a mole of helium and cool it (assuming it remains a gas). Then the critical temperature is found from

\displaystyle -\frac{5}{2} = \ln\left(\frac{V}{N}\left(\frac{2\pi mkT}{h^{2}}\right)^{3/2}\right) \ \ \ \ \ (6)
\displaystyle T_{crit} = \frac{h^{2}}{2\pi mk}\left(\frac{N}{V}e^{-5/2}\right)^{2/3} \ \ \ \ \ (7)

If we start at room temperature {T=300\mbox{ K}} and atmospheric pressure {P=1.01\times10^{5}\mbox{ Pa}}, and can hold the density {N/V} fixed, this will give an actual temperature at which the entropy becomes zero. The density is

\displaystyle  \frac{N}{V}=\frac{P}{kT}=2.44\times10^{25}\mbox{ m}^{-3} \ \ \ \ \ (8)

The mass of a helium atom is {4\times10^{-3}\mbox{ kg mol}^{-1}/6.02\times10^{23}=6.65\times10^{-27}\mbox{ kg}}, so plugging in the other values gives

\displaystyle  T_{crit}=0.012\mbox{ K} \ \ \ \ \ (9)

In fact, helium liquefies at around 4 K, so it appears that 2 might actually be valid for the region where helium remains a gas.
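The calculation of 7 is easy to reproduce in Python, with the rounded constants used above:

```python
from math import exp, pi

k, h = 1.38e-23, 6.62e-34
P, T = 1.01e5, 300.0             # atmospheric pressure, room temperature
m = 4e-3 / 6.02e23               # mass of a helium atom, kg

n = P / (k * T)                  # number density N/V, equation 8
T_crit = h**2 / (2 * pi * m * k) * (n * exp(-2.5))**(2 / 3)
print(n, T_crit)                 # ~2.44e25 m^-3 and ~0.012 K
```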

As a final example, we can observe that the entropy of an ideal gas is {Nk} multiplied by a logarithm, and of an Einstein solid is also {Nk} multiplied by a logarithm (because {\Omega\approx\left(qe/N\right)^{N}} for high-temperature solids). For any macroscopic object, {N} is a large number and the logarithm is much smaller, so for a rough order-of-magnitude estimate of the entropy, we can neglect the log term and take {S\sim Nk}. A few such estimates are:

For a 1 kg book, we can take it to be 1 kg of carbon, with a molar mass of {12\times10^{-3}\mbox{ kg mol}^{-1}}, so the entropy of a book is around

\displaystyle  S\sim\frac{6.02\times10^{23}}{12\times10^{-3}}\left(1\right)\left(1.38\times10^{-23}\right)=692\mbox{ J K}^{-1} \ \ \ \ \ (10)

For a 400 kg moose, which we can approximate by 400 kg of water with molar mass of around {18\times10^{-3}\mbox{ kg mol}^{-1}}, we have

\displaystyle  S\sim\frac{6.02\times10^{23}}{18\times10^{-3}}\left(400\right)\left(1.38\times10^{-23}\right)=1.85\times10^{5}\mbox{ J K}^{-1} \ \ \ \ \ (11)

For the sun, we can take it to be {2\times10^{30}\mbox{ kg}} of ionized hydrogen (protons) with molar mass of {10^{-3}\mbox{ kg mol}^{-1}}. The entropy is around

\displaystyle  S\sim\frac{6.02\times10^{23}}{10^{-3}}\left(2\times10^{30}\right)\left(1.38\times10^{-23}\right)=1.66\times10^{34}\mbox{ J K}^{-1} \ \ \ \ \ (12)
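All three estimates use the same formula, {S\sim Nk} with {N} the number of molecules, so a tiny Python sketch reproduces them:

```python
k, N_avogadro = 1.38e-23, 6.02e23

def S_rough(mass, molar_mass):
    # S ~ N*k with N = (mass / molar mass) * Avogadro's number
    return mass / molar_mass * N_avogadro * k

print(S_rough(1, 12e-3))       # book as 1 kg of carbon: ~692 J/K
print(S_rough(400, 18e-3))     # moose as 400 kg of water: ~1.85e5 J/K
print(S_rough(2e30, 1e-3))     # sun as 2e30 kg of protons: ~1.66e34 J/K
```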

Entropy of an ideal gas; Sackur-Tetrode equation

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 2.31 – 2.33.

The entropy of a substance is given as

\displaystyle S=k\ln\Omega \ \ \ \ \ (1)

where {\Omega} is the number of microstates accessible to the substance.

For a 3-d ideal gas, this is given by Schroeder’s equation 2.40:

\displaystyle \Omega\approx\frac{V^{N}\left(2\pi mU\right)^{3N/2}}{h^{3N}N!\left(3N/2\right)!} \ \ \ \ \ (2)

where {V} is the volume, {U} is the energy, {N} is the number of molecules, {m} is the mass of a single molecule and {h} is Planck’s constant. We can further approximate this formula by using Stirling’s approximation for the factorials:

\displaystyle N! \approx \sqrt{2\pi N}N^{N}e^{-N} \ \ \ \ \ (3)
\displaystyle \left(3N/2\right)! \approx \sqrt{3\pi N}\left(\frac{3N}{2}\right)^{3N/2}e^{-3N/2} \ \ \ \ \ (4)

We get

\displaystyle \Omega\approx\frac{V^{N}\left(\pi mU\right)^{3N/2}}{h^{3N}}\frac{2^{3N}e^{5N/2}}{\sqrt{6}3^{3N/2}\pi N^{5N/2+1}} \ \ \ \ \ (5)

When {N} is large, we can throw away a couple of factors and take the logarithm:

\displaystyle \Omega \approx \frac{V^{N}\left(\pi mU\right)^{3N/2}}{h^{3N}}\frac{2^{3N}e^{5N/2}}{3^{3N/2}N^{5N/2}} \ \ \ \ \ (6)
\displaystyle \ln\Omega = N\ln\left(V\left(\frac{\pi mU}{3}\right)^{3/2}\left(\frac{2}{h}\right)^{3}\frac{1}{N^{5/2}}\right)+\frac{5N}{2} \ \ \ \ \ (7)
\displaystyle = N\left[\ln\left(\frac{V}{N}\left(\frac{4\pi mU}{3Nh^{2}}\right)^{3/2}\right)+\frac{5}{2}\right] \ \ \ \ \ (8)

This gives the entropy of an ideal gas as

\displaystyle S=Nk\left[\ln\left(\frac{V}{N}\left(\frac{4\pi mU}{3Nh^{2}}\right)^{3/2}\right)+\frac{5}{2}\right] \ \ \ \ \ (9)

which is known as the Sackur-Tetrode equation.

Example 1 A variant of this equation can be derived in a similar way for the 2-d ideal gas considered earlier. In that case we had

\displaystyle \Omega\approx\frac{\left(\pi A\right)^{N}}{\left(N!\right)^{2}h^{2N}}\left(\sqrt{2mU}\right)^{2N} \ \ \ \ \ (10)

where {A} is the area of the gas. Using Stirling’s approximation as before, we get

\displaystyle \Omega \approx \frac{\left(\pi A\right)^{N}}{2\pi N^{2N+1}e^{-2N}h^{2N}}\left(\sqrt{2mU}\right)^{2N} \ \ \ \ \ (11)
\displaystyle \approx \frac{\left(\pi A\right)^{N}}{N^{2N}e^{-2N}h^{2N}}\left(2mU\right)^{N} \ \ \ \ \ (12)
\displaystyle S=k\ln\Omega = Nk\left[\ln\frac{2\pi mAU}{\left(hN\right)^{2}}+2\right] \ \ \ \ \ (13)

Example 2 Schroeder gives the entropy of a mole of helium at room temperature and atmospheric pressure as {S=126\mbox{ J K}^{-1}}. For another monatomic gas such as argon, we can work out the same thing. From the ideal gas law, at a pressure of {1.01\times10^{5}\mbox{ N m}^{-2}} and temperature of 300 K, one mole occupies a volume of

\displaystyle V=\frac{nRT}{P}=\frac{\left(1\right)\left(8.31\right)\left(300\right)}{1.01\times10^{5}}=0.025\mbox{ m}^{3} \ \ \ \ \ (14)

The internal energy of a monatomic gas is {\frac{1}{2}kT} per molecule per degree of freedom, so for one mole we have

\displaystyle U=\frac{3}{2}NkT=\frac{3}{2}nRT=3739\mbox{ J} \ \ \ \ \ (15)

The mass of a mole of argon is {39.948\times10^{-3}\mbox{ kg}}, so with {N=6.02\times10^{23}} we have

\displaystyle m=\frac{39.948\times10^{-3}}{6.02\times10^{23}}=6.64\times10^{-26}\mbox{ kg} \ \ \ \ \ (16)

The entropy comes out to

\displaystyle S = \left(6.02\times10^{23}\right)\left(1.38\times10^{-23}\right)\left(\ln\left(1.02\times10^{7}\right)+2.5\right) \ \ \ \ \ (17)
\displaystyle = 155\mbox{ J K}^{-1} \ \ \ \ \ (18)

This is a bit higher than the value for helium because of argon’s higher mass.
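Finally, here’s a short Python function (a sketch, using the rounded constants above) that evaluates 9 for one mole at room temperature and atmospheric pressure; it reproduces both the helium and argon values:

```python
from math import log, pi

k, h, N = 1.38e-23, 6.62e-34, 6.02e23
V, U = 0.025, 3739.0             # one mole at 300 K and 1 atm

def sackur_tetrode(molar_mass):
    m = molar_mass / N           # mass of one molecule, kg
    return N * k * (log(V / N * (4 * pi * m * U / (3 * N * h**2))**1.5) + 2.5)

print(sackur_tetrode(4.0026e-3))   # helium: ~126 J/K
print(sackur_tetrode(39.948e-3))   # argon:  ~155 J/K
```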