Tag Archives: entropy

Entropy of mixing in a small system

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 5.57.

As an example of the entropy changes when two pure substances are mixed, consider a system of 100 molecules, which may vary in composition from 100% of species {A} through a mixture of {A} and {B} to 100% pure {B}. The entropy of mixing is given by

\displaystyle  \Delta S_{mixing}=-Nk\left[x\ln x+\left(1-x\right)\ln\left(1-x\right)\right]

where {N=N_{A}+N_{B}} is the fixed total number of molecules (100 here) and {x=N_{A}/N}.

For a small system such as this, we can generate an array of {\Delta S_{mixing}/k} values for each value of {N_{A}} from 0 to 100. Plotting this as a bar chart, we get

Starting from {N_{A}=0} where {\Delta S/k=0} (since there is only one species at this point, there is no mixing), we see that the entropy increase per molecule as we convert successive molecules from {A} to {B} decreases. The changes in {\Delta S/k} for the first few steps are:

molecule added   change in {\frac{\Delta S}{k}}
{1}              {5.60}
{2}              {4.20}
{3}              {3.67}
{4}              {3.32}
{5}              {3.06}

The rate at which the entropy increases declines as we convert more molecules from {A} to {B}. Adding a slight impurity to an initially pure sample of {B} generates a larger increase in entropy than adding a bit more impurity to an already mixed system.
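These numbers are easy to reproduce numerically. Here is a minimal Python sketch (the bar-chart plotting is omitted):

```python
import numpy as np

N = 100
x = np.arange(N + 1) / N  # x = N_A / N for N_A = 0..100

# Delta S / k = -N [x ln x + (1-x) ln(1-x)], taking 0 ln 0 = 0
with np.errstate(divide="ignore", invalid="ignore"):
    dS = -N * (x * np.log(x) + (1 - x) * np.log(1 - x))
dS = np.nan_to_num(dS)  # replaces the NaNs at x = 0 and x = 1 with 0

# change in Delta S / k as each successive molecule is converted
steps = np.diff(dS)
for i, s in enumerate(steps[:5], start=1):
    print(f"molecule {i}: {s:.2f}")
```

The printed steps match the table above, and the entropy of mixing peaks at {x=\frac{1}{2}} with the value {100\ln2\approx69.3}.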

Gibbs free energy of a mixture of two ideal gases

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 5.56.

To study phase changes of mixtures of substances, rather than pure substances on their own, it’s best to start by looking at the Gibbs free energy of the mixture. Suppose we start with a collection of molecules of two types, {A} and {B}, that are initially separated but whose total number {N_{A}+N_{B}=N} is fixed. Since the two populations are separated, the total Gibbs energy is just the sum of the energies of the two individual populations. If population {B} makes up a fraction {x} and {A} a fraction {1-x} of the total, then

\displaystyle  G=\left(1-x\right)G_{A}^{\circ}+xG_{B}^{\circ} \ \ \ \ \ (1)

What happens if we now mix the two populations, but keep the pressure and temperature constant? The Gibbs energy is defined as

\displaystyle  G\equiv U+PV-TS \ \ \ \ \ (2)

It’s possible that the energy {U} will change due to interactions between the two species being different than interactions between molecules of the same species. It’s also possible that the volume will change, if the two species either attract or repel each other differently than molecules of the same species. The biggest change, however, is likely to come from a change in entropy, because with the two populations mixed there are now a great many more ways that the molecules can be arranged.

As we saw earlier, if the two substances are ideal gases at the same pressure and temperature and we allow them to mix so that the total volume is unchanged, the change in entropy is

\displaystyle \Delta S_{mixing} = -Nk\left[x\ln x+\left(1-x\right)\ln\left(1-x\right)\right] \ \ \ \ \ (3)
\displaystyle = -nR\left[x\ln x+\left(1-x\right)\ln\left(1-x\right)\right] \ \ \ \ \ (4)

where {n} is the total number of moles of both species and {R} is the gas constant.

If we ignore changes in {U} and {V}, then the Gibbs energy after mixing is

\displaystyle  G=\left(1-x\right)G_{A}^{\circ}+xG_{B}^{\circ}+nRT\left[x\ln x+\left(1-x\right)\ln\left(1-x\right)\right] \ \ \ \ \ (5)

The change in entropy looks like this:

The slopes at the two ends are actually infinite, as we can see by taking the derivative of 4:

\displaystyle  \frac{1}{nR}\frac{d\Delta S_{mixing}}{dx}=-\ln x+\ln\left(1-x\right)=\ln\frac{1-x}{x} \ \ \ \ \ (6)

As {x\rightarrow0}, {\frac{1-x}{x}\rightarrow+\infty} so the log also tends to {+\infty}. As {x\rightarrow1}, {\frac{1-x}{x}\rightarrow0} and the log tends to {-\infty}.

A comparison of 1 and 5 looks like this:

Here we’ve taken {G_{A}^{o}=100}, {G_{B}^{o}=75}, {n=1} and {T=100}. The yellow line shows the Gibbs energy from 1, where the two populations are unmixed. The green curve shows 5, where the two populations are mixed. Since the energy for the mixture is lower (because of the increase in entropy), it is the favoured state.
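The comparison of 1 and 5 can be reproduced with a short sketch. The post doesn’t state the units of its {G} values, so the choice of {R} below (in kJ mol⁻¹ K⁻¹) is an assumption; the qualitative conclusion, that mixing lowers {G} at every interior {x}, holds for any positive {R}:

```python
import numpy as np

R = 8.314e-3  # gas constant in kJ/(mol K); unit choice is an assumption,
              # since the post doesn't state the units of its G values
GA, GB, n, T = 100.0, 75.0, 1.0, 100.0

x = np.linspace(1e-6, 1 - 1e-6, 1001)
G_unmixed = (1 - x) * GA + x * GB                # eq. (1), separated populations
G_mixed = G_unmixed + n * R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))  # eq. (5)

# The mixing term is negative for all interior x, so the mixture is favoured
print(np.all(G_mixed < G_unmixed))
```

Plotting `G_unmixed` and `G_mixed` against `x` reproduces the straight line and the sagging curve described above.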

The Gibbs energy also has infinite slopes at {x=0} and 1:

\displaystyle  \frac{dG}{dx}=-G_{A}^{o}+G_{B}^{o}+nRT\ln\frac{x}{1-x} \ \ \ \ \ (7)

Because the numerator and denominator of the log have swapped places from 6, the slope at {x=0} is now {-\infty} and the slope at {x=1} is now {+\infty}.

The infinite slopes show that even if the proportion of one of the species is very small, there is a very large increase in entropy when the two species are allowed to mix, so there is a strong tendency for this to occur.

Extensive and intensive quantities

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 5.21.

The various thermodynamic properties can be classified according to whether they are intensive or extensive. Basically, if we take a system and duplicate it exactly, an extensive quantity will also double, while an intensive quantity will remain unchanged. The intensive quantities include temperature, pressure and any form of density, such as mass density or energy density. Most other quantities are extensive. If we duplicate a system, clearly its volume doubles as does the number of particles. All forms of energy ({U}, {H}, {F} and {G}, for example) will double, as will the system’s mass.

The ratio of two extensive quantities produces an intensive quantity, since the common factor that appears when a system is duplicated cancels out in the division. Multiplying two intensive quantities gives another intensive quantity, since the absolute size of the system doesn’t appear in the product. Multiplying an intensive by an extensive quantity produces another extensive quantity, since the absolute size occurs once in the product (in the extensive quantity). You might think that multiplying two extensive quantities gives a quantity that increases according to the product of the sizes of the systems, but in fact such products don’t appear in thermal physics.

For some examples, we’ll look at the following.

The entropy is an extensive quantity. It is defined as

\displaystyle  S=k\ln\Omega \ \ \ \ \ (1)

where {\Omega} is the number of microstates available to the system. If we duplicate a system then {\Omega\rightarrow\Omega^{2}}, since for each microstate of the original system, the duplicate can be in any of its {\Omega} microstates. Therefore {S\rightarrow2S}, so entropy is extensive.

The chemical potential is defined as

\displaystyle  \mu\equiv-T\left(\frac{\partial S}{\partial N}\right)_{U,V} \ \ \ \ \ (2)

The derivative is intensive, since it is the ratio of two extensive quantities {S} and {N}. {T} is also intensive, so {\mu} is the product of two intensive quantities, making it intensive.

The total heat capacity {C} is extensive, since it is the amount of heat required to raise the entire system by one kelvin. If you double the system, you’ll need twice as much heat. The specific heat capacity {c}, however, is intensive, since it gives the amount of heat required to raise a fixed amount of a substance by one kelvin, so it’s independent of the size of the system.

Muscle as a fuel cell

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 5.6 – 5.7.

As another example of a fuel cell, we’ll look at the metabolism of glucose in an animal’s muscle cells. The overall reaction is

\displaystyle  \mbox{C}_{6}\mbox{H}_{12}\mbox{O}_{6}+6\mbox{O}_{2}\rightarrow6\mbox{H}_{2}\mbox{O}+6\mbox{CO}_{2} \ \ \ \ \ (1)

The Gibbs free energy {\Delta G} and enthalpy {\Delta H} changes for this reaction can be obtained from the {\Delta G} and {\Delta H} values in Schroeder’s book (all values for 1 mole at 298 K and 1 bar). As the reaction occurs at room temperature and pressure, we’ll assume that the water product appears as a liquid rather than as a gas.

                                          {\Delta G} (kJ)   {\Delta H} (kJ)   {S} (J K{}^{-1})
{\mbox{C}_{6}\mbox{H}_{12}\mbox{O}_{6}}   {-910}            {-1273}           {212}
{\mbox{O}_{2}}                            {0}               {0}               {205.14}
{\mbox{H}_{2}\mbox{O}}                    {-237.13}         {-285.83}         {69.91}
{\mbox{CO}_{2}}                           {-394.36}         {-393.51}         {213.74}

The {\Delta G} for the reaction is the sum of the values for the products minus the sum for the reactants:

\displaystyle  \Delta G=6\left(-237.13-394.36\right)-\left(6\times0-910\right)=-2878.94\mbox{ kJ mol}^{-1} \ \ \ \ \ (2)

The value is per mole of glucose molecules.

The corresponding {\Delta H} is found the same way:

\displaystyle  \Delta H=6\left(-285.83-393.51\right)-\left(6\times0-1273\right)=-2803.04\mbox{ kJ mol}^{-1} \ \ \ \ \ (3)

The Gibbs energy represents the maximum energy that may be extracted as ‘other’ (that is, not due to volume changes) work, in this case chemical work. Thus we may extract up to 2878.94 kJ of electric work per mole of glucose metabolized.

As the reaction occurs at constant pressure, {\Delta H} represents the total energy difference between the reactants and products. Since the enthalpy drop is less than the amount of work extracted, the difference must be absorbed as heat. The amount of heat is

\displaystyle  Q=2878.94-2803.04=75.9\mbox{ kJ mol}^{-1} \ \ \ \ \ (4)

The entropy increase resulting from absorbing this heat is

\displaystyle  \Delta S=\frac{Q}{T}=\frac{75.9\times10^{3}}{298}=254.7\mbox{ J K}^{-1}\mbox{mol}^{-1} \ \ \ \ \ (5)

If we work out the entropy change {\Delta S} for this reaction using the values in the table above, we find

\displaystyle  \Delta S=6\left(69.91+213.74\right)-\left(6\times205.14+212\right)=259.06\mbox{ J K}^{-1}\mbox{mol}^{-1} \ \ \ \ \ (6)

The values are roughly the same, so the absorption of heat can be explained by the entropy of the products being greater than that of the reactants.
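The table arithmetic above can be checked with a few lines of Python:

```python
# Tabulated values per mole at 298 K, as quoted in the table above
dG = {"glucose": -910.0, "O2": 0.0, "H2O": -237.13, "CO2": -394.36}   # kJ/mol
dH = {"glucose": -1273.0, "O2": 0.0, "H2O": -285.83, "CO2": -393.51}  # kJ/mol
S  = {"glucose": 212.0,   "O2": 205.14, "H2O": 69.91, "CO2": 213.74}  # J/(K mol)

# C6H12O6 + 6 O2 -> 6 H2O + 6 CO2: products minus reactants
dG_rxn = 6 * dG["H2O"] + 6 * dG["CO2"] - dG["glucose"] - 6 * dG["O2"]
dH_rxn = 6 * dH["H2O"] + 6 * dH["CO2"] - dH["glucose"] - 6 * dH["O2"]

Q = dH_rxn - dG_rxn          # heat absorbed per mole of glucose (kJ)
dS_from_Q = Q * 1e3 / 298    # entropy from heat absorption, J/(K mol), eq. (5)
dS_rxn = 6 * S["H2O"] + 6 * S["CO2"] - S["glucose"] - 6 * S["O2"]  # eq. (6)

print(round(dG_rxn, 2), round(dH_rxn, 2), round(Q, 1),
      round(dS_from_Q, 1), round(dS_rxn, 2))
```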

This model assumes that the muscle is ideal, in the sense that all of the available {\Delta G} is converted into chemical work in the muscle. If the muscle is less than ideal, then the amount of work performed is less than {\Delta G}, so less heat is absorbed. However, the entropy difference between the reactants and products remains the same, so some of this entropy must be provided by means other than heat flow. This makes sense since a non-ideal muscle would use an irreversible process to perform its motion, resulting in an increase of entropy from other means.

The actual process by which glucose is metabolized is much more complicated than the simple reaction 1. In the process, 38 ATP (adenosine triphosphate) molecules are synthesized. When an ATP molecule splits into ADP (adenosine diphosphate) and a phosphate ion, it releases energy that is used in a variety of processes, including muscle contraction. The splitting of one ATP molecule provides energy for a molecule of myosin (an enzyme) to contract with a force of {4\times10^{-12}\mbox{ N}} over a distance of {1.1\times10^{-8}\mbox{ m}}. Thus one glucose molecule provides the energy for an amount of work

\displaystyle  W=38\times4\times10^{-12}\times1.1\times10^{-8}=1.672\times10^{-18}\mbox{ J} \ \ \ \ \ (7)

The maximum amount of energy provided by one glucose molecule is obtained from 2 as

\displaystyle  W_{max}=\frac{\left|\Delta G\right|}{6.02\times10^{23}}=4.78\times10^{-18}\mbox{ J} \ \ \ \ \ (8)

Thus the efficiency of muscle contraction is

\displaystyle  e=\frac{W}{W_{max}}=0.35 \ \ \ \ \ (9)
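The efficiency estimate in equations 7 to 9 is a few lines of arithmetic:

```python
N_ATP = 38          # ATP molecules synthesized per glucose molecule
force = 4e-12       # N, per myosin contraction
distance = 1.1e-8   # m, contraction distance
W = N_ATP * force * distance       # work per glucose molecule, eq. (7)

N_avogadro = 6.02e23
dG = 2878.94e3                     # J per mole of glucose, from eq. (2)
W_max = dG / N_avogadro            # maximum work per molecule, eq. (8)

e = W / W_max                      # efficiency of muscle contraction, eq. (9)
print(W, round(e, 2))
```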

Steam engines in the real world

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 4.24 – 4.26.

A steam engine follows the Rankine cycle, in which the work is done along the path from point 3 to point 4 in the cycle diagram, where superheated steam expands adiabatically, reducing its pressure and temperature back to their lower values in the cycle.

To derive the efficiency, it is assumed that the path 1 to 2 is also adiabatic, so that all the absorbed heat {Q_{h}} occurs along edge 2 to 3, and all expelled heat {Q_{c}} along edge 4 to 1. Since both heat exchanges occur at constant pressure, the heat exchanged is equal to the enthalpy difference between the end points of the corresponding path. It is also assumed that the enthalpies of points 1 and 2 are roughly equal: {H_{1}\approx H_{2}}. We then get an efficiency given by

\displaystyle e=\frac{W}{Q_{h}}=\frac{Q_{h}-Q_{c}}{Q_{h}}\approx1-\frac{H_{4}-H_{1}}{H_{3}-H_{1}} \ \ \ \ \ (1)


Calculating the enthalpies must be done using steam tables such as Tables 4.1 and 4.2 in Schroeder’s book. {H_{3}} is the enthalpy of superheated steam, and can be read from Table 4.2. {H_{4}} is obtained by using the fact that, since path 3 to 4 is adiabatic, the entropies are equal: {S_{3}=S_{4}}. We can read {S_{3}} from Table 4.2 and, since point 4 is a mixture of water and steam at a given pressure and temperature, we can find the proportion {x} of water that gives the same entropy as {S_{3}} by reading from Table 4.1. That is, we solve for {x}:

\displaystyle S_{3}=xS_{water}+\left(1-x\right)S_{steam} \ \ \ \ \ (2)

We can then find {H_{4}} by using the same proportions:

\displaystyle H_{4}=xH_{water}+\left(1-x\right)H_{steam} \ \ \ \ \ (3)
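Equations 2 and 3 amount to a linear interpolation in the steam quality. A sketch of the procedure, using illustrative saturation values (the numbers below are assumptions for demonstration, not Schroeder's table entries):

```python
def mixture_fraction(S_target, S_water, S_steam):
    """Water fraction x with x*S_water + (1-x)*S_steam = S_target (eq. 2)."""
    return (S_steam - S_target) / (S_steam - S_water)

def mixture_enthalpy(x, H_water, H_steam):
    """Enthalpy of the same water/steam mixture (eq. 3)."""
    return x * H_water + (1 - x) * H_steam

# Illustrative numbers (assumed): saturated water and steam at a low pressure,
# plus the entropy S3 carried down the adiabat from point 3
S_water, S_steam = 0.30, 8.67    # kJ/(kg K)
H_water, H_steam = 84.0, 2538.0  # kJ/kg
S3 = 6.23                        # kJ/(kg K)

x = mixture_fraction(S3, S_water, S_steam)
H4 = mixture_enthalpy(x, H_water, H_steam)
print(round(x, 3), round(H4))
```

For a real turbine with {S_{4}>S_{3}}, calling `mixture_fraction(S4, ...)` with the larger entropy gives a smaller `x` and hence a larger `H4`, as argued below.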

However, suppose path 3 to 4 is not adiabatic; in fact in a real turbine, the entropy tends to increase along this edge, so that {S_{4}>S_{3}}. If we know {S_{4}}, we can use the same procedure as above to find {x}:

\displaystyle S_{4}=xS_{water}+\left(1-x\right)S_{steam} \ \ \ \ \ (4)

Since (from Table 4.1) {S_{water}<S_{steam}}, we need a mixture with more steam and less water (that is, {x} is smaller than before) to get the increased entropy. In turn, since {H_{steam}>H_{water}}, this leads to a higher enthalpy {H_{4}}. Assuming {H_{1}} and {H_{3}} are the same as before, we see from 1 that the efficiency is lower than before.

[Of course, the cynics among you will say that this result is obvious from Murphy’s law, in that the real world always makes things worse than they are in theory.]

As another example, suppose we have a real power plant in which the minimum pressure is 0.023 bar, the maximum pressure is 300 bar, and the superheated steam temperature is {600^{\circ}\mbox{ C}} (these are the values used by Schroeder in his example). We can read {H_{1}=84\mbox{ kJ}} and {H_{3}=3444\mbox{ kJ}} from the tables, and calculate {H_{4}=1824\mbox{ kJ}} using the above procedure. If this power plant is to deliver {10^{9}\mbox{ W}} of power then in one second we must produce {10^{9}\mbox{ J}=10^{6}\mbox{ kJ}}, so the work done by 1 kg of steam is

\displaystyle W=Q_{h}-Q_{c}=H_{3}-H_{1}-\left(H_{4}-H_{1}\right)=H_{3}-H_{4}=1620\mbox{ kJ} \ \ \ \ \ (5)

The mass of steam required to produce {10^{6}\mbox{ kJ}} is therefore

\displaystyle m=\frac{10^{6}}{1620}=617\mbox{ kg} \ \ \ \ \ (6)

If we use the more accurate formula {Q_{h}=H_{3}-H_{2}}, with the value {H_{2}=114\mbox{ kJ}} calculated earlier, the work done by 1 kg of steam is

\displaystyle W=H_{3}-H_{2}-\left(H_{4}-H_{1}\right)=1590\mbox{ kJ} \ \ \ \ \ (7)

Thus we’d now need

\displaystyle m=\frac{10^{6}}{1590}=629\mbox{ kg} \ \ \ \ \ (8)
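Both mass estimates follow directly from the enthalpies:

```python
H1, H2, H3, H4 = 84.0, 114.0, 3444.0, 1824.0  # kJ per kg, from the steam tables
E = 1e6  # kJ needed per second to deliver 10^9 W

W_simple = H3 - H4                  # eq. (5), taking H1 = H2: 1620 kJ per kg
W_better = (H3 - H2) - (H4 - H1)    # eq. (7), using H2 = 114 kJ: 1590 kJ per kg

print(round(E / W_simple), round(E / W_better))  # kg of steam per second
```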

Entropy of water and steam

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 4.27 – 4.28.

For a steam engine, Schroeder gives enthalpy and entropy values for water and steam at the boiling point for various pressures (his Table 4.1). What may at first seem a bit odd is that the entropy for liquid water increases with temperature, but that for steam decreases with increasing temperature.

The key point is that the entropy values are given for a fixed amount (1 kg) of water or steam. As the pressure on liquid water is increased, its volume changes very little, so temperature is effectively the only state variable of the water that changes. Raising the temperature gives rise to more molecular motion, hence more randomness, hence larger entropy.

For steam, the increase in temperature is accompanied by an increase in pressure (since the boiling point of water increases with increasing pressure) and, since the number of water molecules is constant, the volume of the steam (a gas) reduces considerably as the pressure is increased. Thus there is an increase in entropy due to increasing temperature, but also a decrease due to the decreasing volume.

At low pressures such as those quoted in Table 4.1, it’s not too bad an approximation to take steam as an ideal gas, so we can apply the Sackur-Tetrode equation to get a feel for how the entropy changes. The equation says

\displaystyle  S=Nk\left[\ln\left(\frac{V}{N}\left(\frac{4\pi mU}{3Nh^{2}}\right)^{3/2}\right)+\frac{5}{2}\right] \ \ \ \ \ (1)

The energy {U} is (from the equipartition theorem)

\displaystyle  U=\frac{f}{2}NkT \ \ \ \ \ (2)

The volume is

\displaystyle  V=\frac{NkT}{P} \ \ \ \ \ (3)

so the equation becomes

\displaystyle  S=Nk\left[\ln\left(\frac{\left(kT\right)^{5/2}}{P}\left(\frac{4\pi fm}{6h^{2}}\right)^{3/2}\right)+\frac{5}{2}\right] \ \ \ \ \ (4)

For a few of the values of {T} and {P} given in Table 4.1, the ratio {T^{5/2}/P} is

{T} (K) {P} (bar) {T^{5/2}/P}
273 0.006 {2.05\times10^{8}}
283 0.012 {1.12\times10^{8}}
373 1.013 {2.65\times10^{6}}

The units of {T^{5/2}/P} aren’t particularly important here; what matters is that this quantity decreases as we move down the table, so the entropy will actually decrease as we increase the pressure.
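The ratios in the table come straight from the values of {T} and {P}:

```python
rows = [(273, 0.006), (283, 0.012), (373, 1.013)]  # (T in K, P in bar)
ratios = [T**2.5 / P for T, P in rows]
for (T, P), r in zip(rows, ratios):
    print(f"T = {T} K, P = {P} bar: T^(5/2)/P = {r:.3g}")
```

The monotonic decrease of `ratios` is what drives the decrease in the steam entropy down the table.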

In a related problem, Schroeder claims that we can reconstruct the entropy values in Table 4.1 from the given enthalpy values. I’m not entirely sure how he expects us to do it, but it does seem to require some approximations. From the definition of entropy we have

\displaystyle  S=\frac{Q}{T} \ \ \ \ \ (5)

where {Q} is the heat absorbed or lost by the substance at constant temperature {T}. The enthalpy is the energy required to create the substance from nothing at constant pressure. For the first row in the table, we can imagine the steam being created at constant pressure at {T=0^{\circ}\mbox{ C}=273.15\mbox{ K}} so the entropy is

\displaystyle  S=\frac{Q}{T}=\frac{H_{steam}}{T}=\frac{2501}{273.15}=9.156\mbox{ kJ kg}^{-1}\mbox{K}^{-1} \ \ \ \ \ (6)

which agrees with the value in the table.

To get the second row, we can look at the liquid water first. In this case, we heat the water from {0^{\circ}\mbox{ C}} to {10^{\circ}\mbox{ C}} while increasing the pressure from {0.006\mbox{ bar}} to {0.012\mbox{ bar}} (with {1\mbox{ bar}=10^{5}\mbox{ N m}^{-2}}). Both the temperature and the pressure change from their values in the first row, so it’s no longer a constant pressure process. The enthalpy change in this case is

\displaystyle  dH=Q-PdV+d\left(PV\right)=Q+VdP \ \ \ \ \ (7)

A kilogram of water has a volume of {V=10^{-3}\mbox{ m}^{3}} and this changes very little as the pressure is increased, so the {VdP} term is

\displaystyle  VdP\approx10^{-3}\times0.006\times10^{5}=0.6\mbox{ J} \ \ \ \ \ (8)

Compared to the value {dH=42\times10^{3}\mbox{ J}}, this correction can be neglected, so to a good approximation

\displaystyle  S_{water}=\frac{dH}{T} \ \ \ \ \ (9)

However, {T} is also changing so what value do we use for it? It seems that a reasonable approximation is to use the average value, so we get

\displaystyle  S_{water}=\frac{42}{278}=0.151\mbox{ kJ kg}^{-1}\mbox{K}^{-1} \ \ \ \ \ (10)

which again agrees with the value in the table.

The value for steam is a bit more difficult to estimate. If we take the steam to be an ideal gas and apply the thermodynamic identity, we get

\displaystyle dH = TdS+VdP \ \ \ \ \ (11)
\displaystyle dS = \frac{1}{T}\left(dH-VdP\right) \ \ \ \ \ (12)

From the ideal gas law

\displaystyle V = \frac{nRT}{P} \ \ \ \ \ (13)
\displaystyle dS = \frac{dH}{T}-nR\frac{dP}{P} \ \ \ \ \ (14)

However, now all three of {V}, {P} and {T} are changing so it seems the best we can do is to use the average values. To go from the first row to the second row in the table, we have {dH=19\mbox{ kJ}}, {dP=0.006\mbox{ bar}}, {P=0.009\mbox{ bar}}, {T=278\mbox{ K}}. 1 kg of steam is equivalent to {n=55.5\mbox{ mol}} (the molar weight of water is 18.01 g) and the gas constant is {R=8.314} in SI units. Plugging in all the numbers gives

\displaystyle dS = -0.239\mbox{ kJ kg}^{-1}\mbox{K}^{-1} \ \ \ \ \ (15)
\displaystyle S_{steam} = 8.917\mbox{ kJ kg}^{-1}\mbox{K}^{-1} \ \ \ \ \ (16)

This is off by 0.016 from the value in the table, but considering the number of approximations, I suppose it’s not bad.

We can actually get a slightly better approximation by taking the average of {\frac{1}{T}} and {\frac{1}{P}} by using the integral formula for the average of a function {f\left(x\right)} over the domain {x_{1}\le x\le x_{2}}:

\displaystyle  \left\langle f\left(x\right)\right\rangle =\frac{1}{x_{2}-x_{1}}\int_{x_{1}}^{x_{2}}f\left(x\right)dx \ \ \ \ \ (17)

For {f\left(x\right)=\frac{1}{x}} we get

\displaystyle  \left\langle \frac{1}{x}\right\rangle =\frac{1}{x_{2}-x_{1}}\ln\frac{x_{2}}{x_{1}} \ \ \ \ \ (18)

We then get, for values between the first and second rows of the table

\displaystyle \left\langle \frac{1}{T}\right\rangle = \frac{1}{10}\ln\frac{283}{273}=3.60\times10^{-3} \ \ \ \ \ (19)
\displaystyle \left\langle \frac{1}{P}\right\rangle = \frac{1}{0.006}\ln\frac{0.012}{0.006}=115.5 \ \ \ \ \ (20)

With these values, we get, in SI units

\displaystyle dS = dH\left\langle \frac{1}{T}\right\rangle -nR\,dP\left\langle \frac{1}{P}\right\rangle \ \ \ \ \ (21)
\displaystyle = 68.4-319.8 \ \ \ \ \ (22)
\displaystyle = -251.4\mbox{ J kg}^{-1}\mbox{K}^{-1} \ \ \ \ \ (23)
\displaystyle = -0.2514\mbox{ kJ kg}^{-1}\mbox{K}^{-1} \ \ \ \ \ (24)
\displaystyle S_{steam} = 8.905\mbox{ kJ kg}^{-1}\mbox{K}^{-1} \ \ \ \ \ (25)

This reduces the discrepancy to 0.004.
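Both estimates of the steam entropy in the second row can be reproduced numerically:

```python
import math

n = 1000 / 18.01   # moles of water in 1 kg (molar mass 18.01 g)
R = 8.314          # gas constant, J/(mol K)
dH = 19e3          # J, change in H_steam between the first two rows
T1, T2 = 273.0, 283.0
P1, P2 = 0.006, 0.012  # bar; only the dimensionless ratio dP/P enters below
S1 = 9.156             # kJ/(kg K), steam entropy in the first row

# Simple midpoint averages of T and P (eqs. 14-16)
dS_simple = dH / ((T1 + T2) / 2) - n * R * (P2 - P1) / ((P1 + P2) / 2)

# Integral averages of 1/T and 1/P (eqs. 17-25)
avg_invT = math.log(T2 / T1) / (T2 - T1)
avg_invP = math.log(P2 / P1) / (P2 - P1)
dS_log = dH * avg_invT - n * R * (P2 - P1) * avg_invP

print(round(S1 + dS_simple / 1000, 3), round(S1 + dS_log / 1000, 3))
```

The two results bracket the tabulated value of 8.901, with the log-average estimate closer, as described above.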

Carnot engine – a realistic version

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 4.6.

The main problem with an engine that follows a Carnot cycle is that the two isothermal stages in the cycle proceed very slowly, since we are attempting to transfer heat between two systems that are almost at the same temperature. One way of making the cycle go a bit faster is to make the temperature of the working substance significantly different from that of the reservoir where it absorbs, and later expels, heat. That is, if the system absorbs heat {Q_{h}} from a hot reservoir at temperature {T_{h}}, then the temperature of the working substance (typically a gas) when it absorbs heat is {T_{hw}<T_{h}}. Similarly, at the other isothermal stage where heat {Q_{c}} is expelled to the cold reservoir at temperature {T_{c}}, the temperature of the gas is {T_{cw}>T_{c}}.

To make things simple, we’ll assume that the rate of heat transfer is the same at both the hot and cold reservoirs, and is proportional to the temperature difference between the gas and the reservoir. That is

\displaystyle \frac{Q_{h}}{\Delta t} = K\left(T_{h}-T_{hw}\right) \ \ \ \ \ (1)
\displaystyle \frac{Q_{c}}{\Delta t} = K\left(T_{cw}-T_{c}\right) \ \ \ \ \ (2)

where {K} is a constant and {\Delta t} is taken to be the same for both cases (that is, the durations of both isothermal stages in the cycle are the same). From this, we get the relation

\displaystyle \frac{Q_{h}}{T_{h}-T_{hw}}=\frac{Q_{c}}{T_{cw}-T_{c}} \ \ \ \ \ (3)


If the only entropy that is created in the cycle is along the two isothermal stages (no entropy is generated along the adiabatic stages) then, since the state of the engine is the same at the end of the cycle as it was at the start, the gas must have expelled exactly the same amount of entropy when expelling heat to the cold reservoir as it absorbed when absorbing heat from the hot reservoir. That is

\displaystyle \frac{Q_{h}}{T_{hw}}=\frac{Q_{c}}{T_{cw}} \ \ \ \ \ (4)

so that

\displaystyle Q_{c}=Q_{h}\frac{T_{cw}}{T_{hw}} \ \ \ \ \ (5)

Combining this with 3 gives

\displaystyle \frac{1}{T_{h}-T_{hw}} = \frac{T_{cw}}{T_{hw}\left(T_{cw}-T_{c}\right)} \ \ \ \ \ (6)
\displaystyle T_{cw} = \frac{T_{c}T_{hw}}{2T_{hw}-T_{h}} \ \ \ \ \ (7)

If the time required for the two adiabatic steps is much less than that for the two isothermal steps, we can work out the power output of the engine. The work is produced over a time interval of {2\Delta t} and is

\displaystyle \mathcal{P}=\frac{W}{2\Delta t} = \frac{1}{2\Delta t}\left(Q_{h}-Q_{c}\right) \ \ \ \ \ (8)
\displaystyle = \frac{K}{2}\left(T_{h}+T_{c}-T_{hw}-T_{cw}\right) \ \ \ \ \ (9)
\displaystyle = \frac{K}{2}\left(T_{h}+T_{c}-T_{hw}-\frac{T_{c}T_{hw}}{2T_{hw}-T_{h}}\right) \ \ \ \ \ (10)

We can maximize the power output for given values of {T_{h}} and {T_{c}} by varying {T_{hw}}. Taking the derivative we get

\displaystyle \frac{d\mathcal{P}}{dT_{hw}}=\frac{K}{2}\left[-1-\frac{T_{c}}{2T_{hw}-T_{h}}+\frac{2T_{c}T_{hw}}{\left(2T_{hw}-T_{h}\right)^{2}}\right]=0 \ \ \ \ \ (11)

This can be solved for {T_{hw}} by multiplying through by {\left(2T_{hw}-T_{h}\right)^{2}} and expanding the terms in the numerator. This results in

\displaystyle \frac{K\left(-4T_{hw}^{2}+4T_{h}T_{hw}+T_{c}T_{h}-T_{h}^{2}\right)}{2\left(2T_{hw}-T_{h}\right)^{2}}=0 \ \ \ \ \ (12)

Solving the quadratic equation and taking the positive root gives

\displaystyle T_{hw}=\frac{1}{2}\left(T_{h}+\sqrt{T_{h}T_{c}}\right) \ \ \ \ \ (13)

Substituting this into 7 gives

\displaystyle T_{cw}=\frac{1}{2}\left(T_{c}+\sqrt{T_{h}T_{c}}\right) \ \ \ \ \ (14)

To find the efficiency we have, using 4

\displaystyle e = 1-\frac{Q_{c}}{Q_{h}} \ \ \ \ \ (15)
\displaystyle = 1-\frac{T_{cw}}{T_{hw}} \ \ \ \ \ (16)
\displaystyle = 1-\frac{T_{c}+\sqrt{T_{h}T_{c}}}{T_{h}+\sqrt{T_{h}T_{c}}} \ \ \ \ \ (17)
\displaystyle = 1-\frac{\left(T_{c}+\sqrt{T_{h}T_{c}}\right)\left(T_{h}-\sqrt{T_{h}T_{c}}\right)}{T_{h}^{2}-T_{h}T_{c}} \ \ \ \ \ (18)
\displaystyle = 1-\frac{T_{h}T_{c}+\left(T_{h}-T_{c}\right)\sqrt{T_{h}T_{c}}-T_{h}T_{c}}{T_{h}\left(T_{h}-T_{c}\right)} \ \ \ \ \ (19)
\displaystyle = 1-\sqrt{\frac{T_{c}}{T_{h}}} \ \ \ \ \ (20)

For a coal-fired steam turbine with {T_{h}=600^{\circ}\mbox{ C}=873\mbox{ K}} and {T_{c}=25^{\circ}\mbox{ C}=298\mbox{ K}}, this gives an efficiency of

\displaystyle e=0.416 \ \ \ \ \ (21)

This is very close to the actual efficiency of about 40% for a real coal-burning power plant. The ‘ideal’ Carnot efficiency for these temperatures is

\displaystyle e=1-\frac{T_{c}}{T_{h}}=0.659 \ \ \ \ \ (22)
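The optimal working temperatures and both efficiencies can be checked numerically, including a finite-difference confirmation that {T_{hw}} from 13 really maximizes the power:

```python
import math

Th, Tc = 873.0, 298.0   # K, coal-fired steam turbine

# Optimal working temperatures, eqs. (13)-(14)
Thw = 0.5 * (Th + math.sqrt(Th * Tc))
Tcw = 0.5 * (Tc + math.sqrt(Th * Tc))

e_max_power = 1 - math.sqrt(Tc / Th)   # eq. (20), efficiency at maximum power
e_carnot = 1 - Tc / Th                 # eq. (22), ideal Carnot efficiency

# Power from eq. (10), up to the constant factor K/2
def power(thw):
    return Th + Tc - thw - Tc * thw / (2 * thw - Th)

# Thw should be a local maximum of the power
assert power(Thw) >= power(Thw + 0.1) and power(Thw) >= power(Thw - 0.1)
print(round(e_max_power, 3), round(e_carnot, 3))  # 0.416 0.659
```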

Thermodynamic properties of a 2-dim ideal gas

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 3.39.

We now revisit the 2-d ideal gas for which the Sackur-Tetrode equation is

\displaystyle  S=Nk\left[\ln\frac{2\pi mAU}{\left(hN\right)^{2}}+2\right] \ \ \ \ \ (1)

where {A} is the area occupied by the gas, {N} is the number of molecules, each of mass {m}, and {U} is the total energy. We can work out the temperature, pressure and chemical potential by applying the thermodynamic identity adapted for 2 dimensions (by replacing the volume {V} by the area {A}):

\displaystyle  dU=TdS-PdA+\mu dN \ \ \ \ \ (2)

The temperature is determined from the entropy as

\displaystyle \frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{A,N} \ \ \ \ \ (3)
\displaystyle = Nk\frac{\left(hN\right)^{2}}{2\pi mAU}\frac{2\pi mA}{\left(hN\right)^{2}} \ \ \ \ \ (4)
\displaystyle = \frac{Nk}{U} \ \ \ \ \ (5)

This just gives us the formula from the equipartition theorem for a system with 2 degrees of freedom:

\displaystyle  U=\frac{2}{2}NkT=NkT \ \ \ \ \ (6)

The pressure can be obtained from

\displaystyle P = T\left(\frac{\partial S}{\partial A}\right)_{U,N} \ \ \ \ \ (7)
\displaystyle = TNk\frac{\left(hN\right)^{2}}{2\pi mAU}\frac{2\pi mU}{\left(hN\right)^{2}} \ \ \ \ \ (8)
\displaystyle = \frac{NkT}{A} \ \ \ \ \ (9)

This is just the 2-dim analogue of the ideal gas law:

\displaystyle  PA=NkT \ \ \ \ \ (10)

Finally, chemical potential is defined in terms of the entropy as

\displaystyle \mu \equiv -T\left(\frac{\partial S}{\partial N}\right)_{U,A} \ \ \ \ \ (11)
\displaystyle = -kT\left[\ln\frac{2\pi mAU}{\left(hN\right)^{2}}+2\right]-NkT\left(-\frac{2}{N}\right) \ \ \ \ \ (12)
\displaystyle = -kT\ln\frac{2\pi mAU}{\left(hN\right)^{2}} \ \ \ \ \ (13)
\displaystyle = -kT\ln\left(\frac{A}{N}\frac{2\pi mkT}{h^{2}}\right) \ \ \ \ \ (14)

We can compare this to the chemical potential for a 3-d ideal gas

\displaystyle  \mu=-kT\ln\left[\frac{V}{N}\left(\frac{2\pi mkT}{h^{2}}\right)^{3/2}\right] \ \ \ \ \ (15)

The only differences are the replacement of {V} by {A} and the change in the exponent inside the logarithm from {\frac{3}{2}} to 1. The latter arises from the derivation of the multiplicity, where the exponent depends on the number of degrees of freedom in the system. For a 3-d gas, there are {3N} degrees of freedom, while for a 2-d gas, there are {2N}. Thus the exponent in the 2-d case is {\frac{2}{3}} that in the 3-d case. [You’d need to follow through the derivation in detail to see the difference, but basically that’s where it comes from.]
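The three derivatives above can be sanity-checked with finite differences. This is only a numerical check of equations 3 to 13 in natural units {k=h=m=1}, not a physical calculation:

```python
import math

def S(U, A, N):
    """2-d Sackur-Tetrode entropy, eq. (1), with k = h = m = 1."""
    return N * (math.log(2 * math.pi * A * U / N**2) + 2)

U, A, N = 7.0, 3.0, 5.0  # arbitrary test values
eps = 1e-6

# central differences for the three partial derivatives of S
invT = (S(U + eps, A, N) - S(U - eps, A, N)) / (2 * eps)   # dS/dU = N/U
T = 1 / invT                                               # so U = NkT
P = T * (S(U, A + eps, N) - S(U, A - eps, N)) / (2 * eps)  # P = T dS/dA = NkT/A
mu = -T * (S(U, A, N + eps) - S(U, A, N - eps)) / (2 * eps)

assert abs(invT - N / U) < 1e-6                                  # eq. (5)
assert abs(P * A - N * T) < 1e-4                                 # eq. (10)
assert abs(mu + T * math.log(2 * math.pi * A * U / N**2)) < 1e-6  # eq. (13)
```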

Chemical potential of a mixture of ideal gases

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 3.38.

The chemical potential is defined in terms of the entropy as

\displaystyle \mu\equiv-T\left(\frac{\partial S}{\partial N}\right)_{U,V} \ \ \ \ \ (1)


This definition leads to a general thermodynamic identity

\displaystyle dU=TdS-PdV+\mu dN \ \ \ \ \ (2)

For a mixture of ideal gases, each species {i} constitutes a molar fraction {x_{i}} of the total number {N_{total}} of molecules, so each species has its own chemical potential defined as

\displaystyle \mu_{i}=-T\left(\frac{\partial S}{\partial N_{i}}\right)_{U,V,N_{j\ne i}} \ \ \ \ \ (3)


where all {N_{j}} with {j\ne i} are held constant in the derivative.

Also, for an ideal gas, each species contributes its own portion of the overall entropy, independently of the other species. We can see this by noting that if we have a mixture of, say, 2 gases, then for each configuration of the gas {A} molecules there are {\Omega_{B}} configurations of the gas {B} molecules. Since ideal gas molecules don’t interact, the total multiplicity of the mixture is {\Omega_{total}=\Omega_{A}\Omega_{B}}, so the entropy is the sum of the entropies for the separate species: {S_{total}=S_{A}+S_{B}}.

Since ideal gas molecules don’t interact, species {i} contributes a fraction {x_{i}} of the total pressure, or in other words, its partial pressure is

\displaystyle P_{i}=x_{i}P \ \ \ \ \ (4)

We can therefore write the thermodynamic identity for a mixture of ideal gases as

\displaystyle dU=T\sum_{i}dS_{i}-\left(\sum_{i}P_{i}\right)dV+\sum_{i}\mu_{i}dN_{i} \ \ \ \ \ (5)

Since {dS_{j\ne i}=0} when only the number {N_{i}} of species {i} changes (no properties of any of the other species change), the derivative of the total entropy in 3 reduces to a derivative of {S_{i}} alone, so we can write the chemical potential of species {i} as

\displaystyle \mu_{i}=-T\left(\frac{\partial S_{i}}{\partial N_{i}}\right)_{U,V,N_{j\ne i}} \ \ \ \ \ (6)

But this is the definition of chemical potential in a system containing only species {i} at partial pressure {P_{i}} in volume {V}. Thus, for a mixture of ideal gases, the chemical potential of each species is independent of the other species. In a mixture of real gases, however, this is probably not the case, since interactions between the species mean the total entropy isn’t a simple sum of the entropies of the individual species.

Chemical potential; application to the Einstein solid

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 3.35 – 3.36.

Thermodynamic systems can be in equilibrium in various ways. Thermal equilibrium results from systems being able to exchange energy, which brings them to the same temperature; mechanical equilibrium results from systems being able to exchange volume, which equalizes their pressures. The final type of equilibrium is diffusive equilibrium, in which systems can exchange actual matter (numbers of particles) with each other. The entropy is now taken to be a function of energy {U}, volume {V} and particle number {N}, and using the same logic as in deriving temperature and pressure from derivatives of entropy, we find that at diffusive equilibrium between two systems {A} and {B} with a constant total particle number {N=N_{A}+N_{B}}, the condition that the entropy achieve its maximum value results in

\displaystyle  \frac{\partial S_{A}}{\partial N_{A}}=\frac{\partial S_{B}}{\partial N_{B}} \ \ \ \ \ (1)

This condition is used to define the chemical potential {\mu} as

\displaystyle  \mu\equiv-T\left(\frac{\partial S}{\partial N}\right)_{U,V} \ \ \ \ \ (2)

If the two systems are also in thermal equilibrium, the temperatures are equal, so at equilibrium

\displaystyle  \mu_{A}=\mu_{B} \ \ \ \ \ (3)

If the systems are not in equilibrium, then the tendency is for the overall entropy of the combined system to increase as it tends towards equilibrium. If {\frac{\partial S_{A}}{\partial N_{A}}>\frac{\partial S_{B}}{\partial N_{B}}}, then an increase in {N_{A}} results in a greater increase in entropy than an increase in {N_{B}}, so the diffusion will tend to transfer particles from {B} to {A}. From the definition of {\mu}, a larger {\frac{\partial S}{\partial N}} means a lower value of {\mu} (due to the minus sign), so diffusion tends to transfer particles from the system with a higher chemical potential to the system with a lower chemical potential.

If a system is allowed to vary {U}, {V} and {N}, the overall change in entropy is the sum of the contributions from all three processes, so the generalized form of the thermodynamic identity is

\displaystyle  dS=\left(\frac{\partial S}{\partial U}\right)_{N,V}dU+\left(\frac{\partial S}{\partial V}\right)_{U,N}dV+\left(\frac{\partial S}{\partial N}\right)_{U,V}dN \ \ \ \ \ (4)

or, in its more usual form

\displaystyle  dU=TdS-PdV+\mu dN \ \ \ \ \ (5)

Example 1 Schroeder does an example of a very small Einstein solid containing {N=3} oscillators and {q=3} energy quanta. Although true derivatives aren’t valid in such a small system, we can get an idea of how chemical potential works by considering what happens if we add another oscillator to the system in such a way that {S} and {V} don’t change. The entropy before the addition is

\displaystyle S = k\ln\Omega \ \ \ \ \ (6)
\displaystyle = k\ln\binom{3+3-1}{3} \ \ \ \ \ (7)
\displaystyle = k\ln 10 \ \ \ \ \ (8)

If we change {N} to 4, then to keep {S} constant, we need to decrease {q}. This example is contrived so that we can actually do this and get the same value for {S}, since with {N=4} and {q=2}, we find {S=k\ln10}. Thus in this case {\Delta U=-\epsilon} where {\epsilon} is the energy of a single quantum and so the chemical potential is (approximately)

\displaystyle  \mu=\frac{\Delta U}{\Delta N}=\frac{-\epsilon}{1}=-\epsilon \ \ \ \ \ (9)
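These small-system numbers are easy to check directly. A quick Python sketch (with `S_over_k` a helper name of my own, returning {S/k} for an Einstein solid):

```python
from math import comb, log

def S_over_k(N, q):
    """S/k = ln of the Einstein solid multiplicity C(q + N - 1, q)."""
    return log(comb(q + N - 1, q))

S_before = S_over_k(3, 3)  # N = 3, q = 3: ln 10
S_after = S_over_k(4, 2)   # N = 4, q = 2: also ln 10

print(S_before, S_after)   # equal, so S is unchanged
# With S (and V) fixed, Delta U = -epsilon, so mu = Delta U / Delta N = -epsilon
```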

Now suppose we started with {N=3} and {q=4}, and then try to add another oscillator while keeping {S} constant. The entropy before the addition is

\displaystyle  S=k\ln\binom{3+4-1}{4}=k\ln15 \ \ \ \ \ (10)

Reducing {q} to 3 after increasing {N} to 4 results in

\displaystyle  S=k\ln\binom{4+3-1}{3}=k\ln20 \ \ \ \ \ (11)

so we’re still not down to the original entropy. However, if we reduce {q} to 2, we get

\displaystyle  S=k\ln\binom{4+2-1}{2}=k\ln10 \ \ \ \ \ (12)

so now we’ve dropped below the original entropy. To keep {S} constant, we’d need to remove somewhere around 1.5 quanta, so {\mu<-\epsilon} and the chemical potential is lower (more negative) than in the first case.
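Interpolating linearly in the entropy gives a rough figure for how many quanta we’d need to remove (a sketch only; the linear interpolation is an estimate, since {q} must really be an integer, and `S_over_k` is a helper name of my own):

```python
from math import comb, log

def S_over_k(N, q):
    """S/k = ln of the Einstein solid multiplicity C(q + N - 1, q)."""
    return log(comb(q + N - 1, q))

S_target = S_over_k(3, 4)                     # ln 15: entropy before adding an oscillator
S_q3, S_q2 = S_over_k(4, 3), S_over_k(4, 2)   # ln 20 and ln 10 bracket the target

# Linear interpolation in q between (q=2, ln 10) and (q=3, ln 20)
q_needed = 2 + (S_target - S_q2) / (S_q3 - S_q2)
quanta_removed = 4 - q_needed
print(quanta_removed)  # roughly 1.4, so mu is roughly -1.4 epsilon, below -epsilon
```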

Example 2 Still with an Einstein solid, but now at the other extreme where both {q} and {N} are large numbers. In this case, the multiplicity is approximately

\displaystyle \Omega \approx \sqrt{\frac{N}{2\pi q\left(q+N\right)}}\left(\frac{q+N}{q}\right)^{q}\left(\frac{q+N}{N}\right)^{N} \ \ \ \ \ (13)
\displaystyle \approx \left(\frac{q+N}{q}\right)^{q}\left(\frac{q+N}{N}\right)^{N} \ \ \ \ \ (14)

where we’ve dropped the square root, since it is merely ‘large’ compared to the other two factors, which are ‘very large’.

The entropy is therefore

\displaystyle S = k\ln\Omega \ \ \ \ \ (15)
\displaystyle \approx k\left[\left(q+N\right)\ln\left(q+N\right)-q\ln q-N\ln N\right] \ \ \ \ \ (16)

Using 2, this gives a chemical potential of

\displaystyle \mu = -kT\left[\ln\left(q+N\right)+1-\ln N-1\right] \ \ \ \ \ (17)
\displaystyle = -kT\ln\frac{q+N}{N} \ \ \ \ \ (18)
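We can check this result against the exact multiplicity by taking a discrete difference of {S/k} at fixed {U} (i.e. fixed {q}). In fact the ratio of successive multiplicities, {\binom{q+N}{q}/\binom{q+N-1}{q}}, is exactly {\left(q+N\right)/N}, so the discrete difference reproduces the Stirling result exactly. A quick Python sketch:

```python
from math import comb, log

def S_over_k(N, q):
    """Exact S/k = ln C(q + N - 1, q) for an Einstein solid."""
    return log(comb(q + N - 1, q))

N, q = 5000, 2000
# Discrete analogue of (dS/dN) at constant U, i.e. at fixed q
dS_dN = S_over_k(N + 1, q) - S_over_k(N, q)

# Analytic result: dS/dN = k ln((q + N)/N), so mu = -kT ln((q + N)/N)
analytic = log((q + N) / N)

print(dS_dN, analytic)  # agree to floating-point precision
```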

For {N\gg q}, this reduces to

\displaystyle  \mu\rightarrow-kT\ln\left(1+\frac{q}{N}\right)\approx-kT\frac{q}{N} \ \ \ \ \ (19)

At the other extreme, {N\ll q} and

\displaystyle  \mu\rightarrow-kT\ln\left(\frac{q}{N}\right)\rightarrow-\infty \ \ \ \ \ (20)
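Both limiting forms are easy to confirm numerically (another small sketch, working with {\mu/kT} so no temperature or energy scale is needed; `mu_over_kT` is a helper name of my own):

```python
from math import log

def mu_over_kT(N, q):
    """mu/(kT) = -ln((q + N)/N) for an Einstein solid with large N and q."""
    return -log((q + N) / N)

# N >> q: mu/kT ~ -q/N, small in magnitude
print(mu_over_kT(10**6, 10), -10 / 10**6)       # both about -1e-5

# N << q: mu/kT ~ -ln(q/N), growing without bound as q/N -> infinity
print(mu_over_kT(10, 10**6), -log(10**6 / 10))  # both about -11.5
```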

In the {N\gg q} case, there are many more oscillators than energy quanta to put in them, so adding an extra oscillator won’t make much difference to the multiplicity. Think of a simple case where you’ve got lots of bins and only one ball to put in them. In that case, adding an extra bin creates only one extra possible state. Thus we’d expect {\partial S/\partial N} to be fairly small in this case.

In the {N\ll q} case, there are many more quanta than oscillators to put them in, so adding an extra oscillator creates many more possible microstates, since we can place any number of quanta from 0 right up to {q} in the new oscillator. Thus the multiplicity, and hence the entropy, increases more rapidly in this case.