Reference: Daniel V. Schroeder, *An Introduction to Thermal Physics*, (Addison-Wesley, 2000) – Problem 3.27.

We consider a system with a total volume $V$ and total energy $U$, both of which are constant, but with a movable partition that divides the system into two parts $A$ and $B$. The partition allows energy to be freely exchanged between the two parts, so that after some time, the temperature will be equal on both sides. Also, because the partition can move, after some time the pressure will be equal on both sides. From the second law of thermodynamics, this equilibrium state is the state of maximum entropy.

Since the only two things that we're allowing to vary are the energy and volume of each part, the condition for maximum entropy can be written as two partial derivative conditions:

$$\frac{\partial S_{total}}{\partial U_{A}}=0\tag{1}$$

$$\frac{\partial S_{total}}{\partial V_{A}}=0\tag{2}$$

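As a sketch of how these conditions pick out the equilibrium state, we can maximize the total entropy of two monatomic ideal gases symbolically. This uses sympy and the Sackur-Tetrode entropy (which appears later in the post); the particle numbers $N_A$ and $N_B$ are assumed fixed, and the specific numbers and setup here are illustrative, not from Schroeder:

```python
import sympy as sp

# Two monatomic ideal gases A and B sharing fixed totals U and V.
UA, VA, U, V, NA, NB, k, m, h = sp.symbols(
    'U_A V_A U V N_A N_B k m h', positive=True)

def S(Ux, Vx, Nx):
    """Sackur-Tetrode entropy of a monatomic ideal gas."""
    return Nx*k*(sp.log(Vx/Nx*(4*sp.pi*m*Ux/(3*Nx*h**2))**sp.Rational(3, 2))
                 + sp.Rational(5, 2))

# Side B gets whatever energy and volume side A does not have.
S_total = S(UA, VA, NA) + S(U - UA, V - VA, NB)

# Maximum entropy: both partial derivatives vanish.
sol = sp.solve([sp.diff(S_total, UA), sp.diff(S_total, VA)],
               [UA, VA], dict=True)[0]
print(sol)
# Result: U_A and V_A are each the fraction N_A/(N_A+N_B) of the totals,
# i.e. equal energy and volume per particle on both sides, which for an
# ideal gas means equal temperature and equal pressure.
```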
The first equation was used to define temperature in terms of entropy, giving

$$\frac{1}{T}\equiv\left(\frac{\partial S}{\partial U}\right)_{N,V}\tag{3}$$

By following exactly the same logic, we can arrive at the condition that, at equilibrium,

$$\frac{\partial S_{A}}{\partial V_{A}}=\frac{\partial S_{B}}{\partial V_{B}}\tag{4}$$

Since we know that when two systems are in mechanical equilibrium their pressures are equal, we should be able to relate this condition to pressure in some way. By looking at the units, we see that entropy has units of $\mathrm{J\,K^{-1}}$ and volume has units of $\mathrm{m^{3}}$, so the derivative $\partial S/\partial V$ has units of $\mathrm{J\,K^{-1}\,m^{-3}}$. Therefore, if we multiply the derivative by temperature, we get a quantity with the units of pressure ($\mathrm{J\,m^{-3}}=\mathrm{N\,m^{-2}}$):

$$P\equiv T\left(\frac{\partial S}{\partial V}\right)_{U,N}\tag{5}$$

This doesn’t *prove* that this is the actual pressure in the system (there could be some dimensionless constants floating around, for example), but Schroeder shows that the formula works, at least, for an ideal gas. If we use the Sackur-Tetrode equation for the entropy of an ideal gas and calculate the derivative, we reclaim the ideal gas law.
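That calculation can be sketched symbolically with sympy. Below, the Sackur-Tetrode entropy is differentiated to get $T$ from 3 and then $P$ from 5; the symbol names are my own, not Schroeder's:

```python
import sympy as sp

# All quantities positive so the logarithms simplify cleanly.
N, k, V, U, m, h, T = sp.symbols('N k V U m h T', positive=True)

# Sackur-Tetrode entropy of a monatomic ideal gas.
S = N*k*(sp.log(V/N*(4*sp.pi*m*U/(3*N*h**2))**sp.Rational(3, 2))
         + sp.Rational(5, 2))

# Temperature from 1/T = dS/dU:
T_expr = 1/sp.diff(S, U)              # gives T = 2U/(3Nk)

# Pressure from P = T * dS/dV:
P = sp.simplify(T_expr*sp.diff(S, V))
print(P)                              # P = 2U/(3V) for a monatomic gas

# Substituting U = (3/2) N k T recovers the ideal gas law P V = N k T.
P_of_T = sp.simplify(P.subs(U, sp.Rational(3, 2)*N*k*T))
print(P_of_T)
```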

We can see what happens if a system changes both its volume and its energy at the same time. We can simulate the process by first changing the volume by an infinitesimal amount $dV$, recording the resulting entropy change $\left(\partial S/\partial V\right)_{U}dV$, then changing the energy by an amount $dU$ and recording the entropy change $\left(\partial S/\partial U\right)_{V}dU$ due to that. The total entropy change is the sum of the two, so using 3 and 5 we get

$$dS=\left(\frac{\partial S}{\partial U}\right)_{V}dU+\left(\frac{\partial S}{\partial V}\right)_{U}dV=\frac{1}{T}dU+\frac{P}{T}dV\tag{6}$$

This is more usually written as

$$dU=T\,dS-P\,dV\tag{7}$$

and is known as the *thermodynamic identity*.
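We can check the identity numerically for a monatomic ideal gas: take small changes $dU$ and $dV$, compute the exact entropy change from the Sackur-Tetrode formula, and compare it with $dU/T+P\,dV/T$. This is a minimal sketch; the sample (roughly a mole of helium near room temperature) and the sizes of the changes are assumed values chosen for illustration:

```python
import math

k = 1.380649e-23        # Boltzmann constant, J/K
h = 6.62607015e-34      # Planck constant, J s
m = 6.646e-27           # mass of a helium atom, kg (assumed example gas)
N = 6.022e23            # number of molecules (about one mole)

def entropy(U, V):
    """Sackur-Tetrode entropy of a monatomic ideal gas."""
    return N*k*(math.log(V/N*(4*math.pi*m*U/(3*N*h**2))**1.5) + 2.5)

U, V = 3400.0, 0.025    # internal energy (J) and volume (m^3)
T = 2*U/(3*N*k)         # from 1/T = dS/dU, about 273 K here
P = N*k*T/V             # ideal gas law

dU, dV = 1e-3, 1e-9     # small changes in energy and volume
dS_exact = entropy(U + dU, V + dV) - entropy(U, V)
dS_identity = dU/T + P*dV/T

print(dS_exact, dS_identity)   # the two agree to several digits
```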

One special case is that of a process that occurs with no change in entropy, so that $dS=0$. In this case, we have

$$0=\frac{1}{T}dU+\frac{P}{T}dV\tag{8}$$

or

$$dU=-P\,dV\tag{9}$$

Equation 9 says that the energy change occurs entirely from the volume change, so that there is no heat flow into or out of the system. Heat flow is always associated with two systems trying to establish their equilibrium state, which always involves an increase in entropy. Thus if there is no change in entropy, there cannot be any heat flow, so in that sense, this equation agrees with what we knew from the first law of thermodynamics (conservation of energy). Note that the converse is *not* always true. That is, the absence of heat flow doesn't require no change in entropy. In a free expansion of a gas, the gas expands into a vacuum and no heat flows, but the entropy of the gas still increases.
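The free expansion case can be made concrete with the Sackur-Tetrode formula. In a free expansion into vacuum the gas does no work and no heat flows, so $U$ is unchanged while $V$ grows; a sketch in sympy for the case of a doubled volume (an assumed example, not part of Schroeder's problem):

```python
import sympy as sp

N, k, V, U, m, h = sp.symbols('N k V U m h', positive=True)

# Sackur-Tetrode entropy of a monatomic ideal gas.
S = N*k*(sp.log(V/N*(4*sp.pi*m*U/(3*N*h**2))**sp.Rational(3, 2))
         + sp.Rational(5, 2))

# Free expansion into vacuum: U is unchanged, V doubles.
dS = sp.simplify(S.subs(V, 2*V) - S)
print(dS)   # the entropy increases by N k ln 2 even though Q = 0
```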
