Tag Archives: uncertainty principle

Sizes of elementary particles

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 13, Exercises 13.3.1 – 13.3.2.

Due to the position-momentum uncertainty principle, if we wish to determine the location of a particle to within a distance {\Delta X}, the momentum {\Delta P} imparted to the particle by the photon used to detect it must satisfy

\displaystyle \Delta P\Delta X\ge\frac{\hbar}{2} \ \ \ \ \ (1)

This relation is valid in non-relativistic quantum mechanics, where we use position eigenkets {\left|X\right\rangle } that define a particle’s position exactly. Measuring a position exactly, however, would require a photon of infinite energy. In relativistic quantum theory, if the energy of the photon is large enough, it can be converted into mass by creating a particle-antiparticle pair. If we’re trying to determine the location of an electron, this pair creation process can occur once the energy of the bombarding photon reaches around twice the rest energy of an electron. Thus for practical purposes, the maximum photon energy that we can use to detect the electron is finite, which means that the electron’s position can be determined only approximately.

To get an idea of the ‘radius’ of an electron using these ideas (I put ‘radius’ in quotes because an electron doesn’t have a rigid boundary in quantum theory), we can proceed as follows. We’ll work only to orders of magnitude, rather than precise quantities.

From the uncertainty relation, the photon’s momentum is about

\displaystyle \Delta P\sim\frac{\hbar}{\Delta X} \ \ \ \ \ (2)

For a photon, the relativistic energy is related to the momentum by

\displaystyle \Delta E=\Delta Pc \ \ \ \ \ (3)

where {c} is the speed of light. Therefore, the energy of the incident photon is

\displaystyle \Delta E\sim\frac{\hbar c}{\Delta X} \ \ \ \ \ (4)

To avoid pair creation, we therefore want to restrict this energy to less than about twice the electron’s rest energy, so

\displaystyle \Delta E\lesssim2mc^{2} \ \ \ \ \ (5)

which leads to

\displaystyle \frac{\hbar c}{\Delta X} \displaystyle \lesssim \displaystyle 2mc^{2}\ \ \ \ \ (6)
\displaystyle \Delta X \displaystyle \gtrsim \displaystyle \frac{\hbar}{2mc}\sim\frac{\hbar}{mc} \ \ \ \ \ (7)

The latter quantity is the Compton wavelength of the electron. [When we originally encountered the Compton wavelength in Carroll & Ostlie’s book on astrophysics, they defined it as {h/mc}, so Shankar’s Compton wavelength is {\frac{1}{2\pi}} times that of Carroll & Ostlie. However, since we’re working with orders of magnitude, this won’t matter much.]

Thus the Compton wavelength can be taken as a rough size of the electron. We can write this as a fraction of the Bohr radius {a_{0}} using

\displaystyle a_{0}\equiv\frac{\hbar^{2}}{me^{2}} \ \ \ \ \ (8)

so that

\displaystyle \frac{\hbar/mc}{a_{0}}=\frac{\hbar}{mc}\frac{me^{2}}{\hbar^{2}}=\frac{e^{2}}{\hbar c}=\alpha\approx\frac{1}{137} \ \ \ \ \ (9)

where {\alpha} is the famous fine structure constant. Since {a_{0}} is roughly the radius of a ground-state hydrogen atom, the electron is smaller than this by a factor of about 137.
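
As a quick numerical sanity check (my own Python sketch with rounded SI values, not part of Shankar’s text), the reduced Compton wavelength really is about 1/137 of the Bohr radius:

```python
# Ratio of the reduced Compton wavelength to the Bohr radius.
hbar = 1.0546e-34    # J s
m_e = 9.109e-31      # kg
c = 2.998e8          # m / s
a_0 = 5.29e-11       # m, Bohr radius

compton = hbar / (m_e * c)      # reduced Compton wavelength, ~3.9e-13 m
print(compton, compton / a_0)   # ratio ~0.0073, i.e. about 1/137
```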

We can use similar arguments to do some rough calculations on other particles.

Example 1 For example, the pion has a range of about {10^{-15}\mbox{ m}} as a mediator of the nuclear force, so if we take this as {\Delta X} then

\displaystyle 2m_{\pi}c^{2} \displaystyle \sim \displaystyle \frac{\hbar c}{\Delta X} \ \ \ \ \ (10)

The same relation with {\Delta X} equal to the electron’s Compton wavelength (which, from 9, is about {a_{0}/137}) gives the electron’s rest energy, so the ratio of the two rest energies is the inverse ratio of the corresponding values of {\Delta X}. Since the rest energy of an electron is about {0.5\mbox{ MeV}}, we can estimate the rest energy of the pion as follows.

\displaystyle \frac{m_{\pi}c^{2}}{m_{e}c^{2}}=\frac{\Delta X_{e}}{\Delta X_{\pi}}=\frac{a_{0}/137}{10^{-15}} \ \ \ \ \ (11)

The Bohr radius is about

\displaystyle a_{0}\approx5\times10^{-11}\mbox{ m} \ \ \ \ \ (12)

so

\displaystyle m_{\pi}c^{2}\approx\left(0.5\mbox{ MeV}\right)\frac{5\times10^{-11}}{137\times10^{-15}}=182\mbox{ MeV} \ \ \ \ \ (13)

The actual rest energy of a pion is around 140 MeV, so this estimate isn’t too bad.
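
The arithmetic in 11 to 13 is summarized in this short Python sketch (my own, using the rounded inputs quoted above):

```python
# Order-of-magnitude estimate of the pion rest energy from the range of the nuclear force.
m_e_c2 = 0.5e6        # electron rest energy, eV
a_0 = 5e-11           # Bohr radius, m (rounded)
dx_e = a_0 / 137      # electron 'size' ~ reduced Compton wavelength, m
dx_pi = 1e-15         # range of the nuclear force, m

m_pi_c2 = m_e_c2 * dx_e / dx_pi
print(m_pi_c2 / 1e6, "MeV")    # ~182 MeV; the measured value is ~140 MeV
```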

Example 2 The de Broglie wavelength of a particle is defined by

\displaystyle \lambda=\frac{h}{p} \ \ \ \ \ (14)

For an electron with kinetic energy 200 eV, we need to find its momentum to calculate {\lambda}. The relativistic kinetic energy is

\displaystyle K=mc^{2}\left(\gamma-1\right) \ \ \ \ \ (15)

where

\displaystyle \gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}} \ \ \ \ \ (16)

We have

\displaystyle \gamma=\frac{K}{mc^{2}}+1=\frac{200\mbox{ eV}}{0.5\times10^{6}\mbox{ eV}}+1=1.0004 \ \ \ \ \ (17)

Thus the electron is travelling at a non-relativistic speed, so to a good approximation we can use Newtonian formulas. The speed is

\displaystyle v \displaystyle = \displaystyle c\sqrt{\frac{2K}{mc^{2}}}=c\sqrt{\frac{2\left(200\right)}{0.5\times10^{6}}}\approx0.03c\ \ \ \ \ (18)
\displaystyle p \displaystyle = \displaystyle mv=\left(9.1\times10^{-31}\right)\left(0.03\right)\left(3\times10^{8}\right)=7.7\times10^{-24}\mbox{ kg m s}^{-1}\ \ \ \ \ (19)
\displaystyle \lambda \displaystyle = \displaystyle \frac{h}{p}=\frac{6.6\times10^{-34}}{7.7\times10^{-24}}\approx10^{-10}\mbox{ m}=1\AA \ \ \ \ \ (20)
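
Here’s the same calculation as a short Python sketch (my own, using the rounded constants above and the same Newtonian approximation):

```python
import math

# de Broglie wavelength of a 200 eV electron (non-relativistic, since gamma ~ 1.0004).
h = 6.6e-34        # J s
m_e = 9.1e-31      # kg
eV = 1.6e-19       # J

K = 200 * eV
v = math.sqrt(2 * K / m_e)    # ~8e6 m/s ~ 0.03 c
p = m_e * v                   # ~7.7e-24 kg m/s
print(v, p, h / p)            # wavelength ~1e-10 m, i.e. about 1 angstrom
```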

Angular momentum in 3-d: expectation values and uncertainty principle

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 12, Exercise 12.5.3.

For 3-d angular momentum, we’ve seen that the components {J_{x}} and {J_{y}} can be written in terms of raising and lowering operators

\displaystyle J_{\pm}\equiv J_{x}\pm iJ_{y} \ \ \ \ \ (1)

In the basis of eigenvectors of {J^{2}} and {J_{z}} (that is, the states {\left|jm\right\rangle }) the raising and lowering operators have the following effects:

\displaystyle J_{\pm}\left|jm\right\rangle =\hbar\sqrt{\left(j\mp m\right)\left(j\pm m+1\right)}\left|j,m\pm1\right\rangle \ \ \ \ \ (2)

 

We can use these relations to construct the matrix elements of {J_{x}} and {J_{y}} in this basis. We can also use these relations to work out expectation values and uncertainties for the angular momentum components in this basis.

First, since {J_{\pm}} change {m} by one unit, the diagonal elements of both the {J_{x}} and {J_{y}} matrices in this basis are all zero, so that

\displaystyle \left\langle J_{x}\right\rangle \displaystyle = \displaystyle \left\langle jm\left|J_{x}\right|jm\right\rangle =0\ \ \ \ \ (3)
\displaystyle \left\langle J_{y}\right\rangle \displaystyle = \displaystyle \left\langle jm\left|J_{y}\right|jm\right\rangle =0 \ \ \ \ \ (4)

To work out {\left\langle J_{x}^{2}\right\rangle } and {\left\langle J_{y}^{2}\right\rangle }, we can write these operators in terms of the raising and lowering operators:

\displaystyle J_{x} \displaystyle = \displaystyle \frac{1}{2}\left(J_{+}+J_{-}\right)\ \ \ \ \ (5)
\displaystyle J_{y} \displaystyle = \displaystyle \frac{1}{2i}\left(J_{+}-J_{-}\right) \ \ \ \ \ (6)

We can then use the fact that the basis states are orthonormal, so that

\displaystyle \left\langle j^{\prime}m^{\prime}\left|jm\right.\right\rangle =\delta_{j^{\prime}j}\delta_{m^{\prime}m} \ \ \ \ \ (7)

The required squares are

\displaystyle J_{x}^{2} \displaystyle = \displaystyle \frac{1}{4}\left(J_{+}^{2}+J_{+}J_{-}+J_{-}J_{+}+J_{-}^{2}\right)\ \ \ \ \ (8)
\displaystyle J_{y}^{2} \displaystyle = \displaystyle -\frac{1}{4}\left(J_{+}^{2}-J_{+}J_{-}-J_{-}J_{+}+J_{-}^{2}\right)\ \ \ \ \ (9)
\displaystyle \displaystyle = \displaystyle \frac{1}{4}\left(-J_{+}^{2}+J_{+}J_{-}+J_{-}J_{+}-J_{-}^{2}\right) \ \ \ \ \ (10)

The diagonal matrix elements {\left\langle jm\left|J_{x}^{2}\right|jm\right\rangle } and {\left\langle jm\left|J_{y}^{2}\right|jm\right\rangle } will get non-zero contributions only from those terms that leave {j} and {m} unchanged when operating on {\left|jm\right\rangle }. This means that only the terms that contain an equal number of {J_{+}} and {J_{-}} terms will contribute. We therefore have

\displaystyle \left\langle jm\left|J_{x}^{2}\right|jm\right\rangle \displaystyle = \displaystyle \frac{1}{4}\left\langle jm\left|J_{+}J_{-}+J_{-}J_{+}\right|jm\right\rangle \ \ \ \ \ (11)
\displaystyle \displaystyle = \displaystyle \frac{\hbar}{4}\sqrt{\left(j+m\right)\left(j-m+1\right)}\left\langle jm\left|J_{+}\right|j,m-1\right\rangle +\ \ \ \ \ (12)
\displaystyle \displaystyle \displaystyle \frac{\hbar}{4}\sqrt{\left(j-m\right)\left(j+m+1\right)}\left\langle jm\left|J_{-}\right|j,m+1\right\rangle \ \ \ \ \ (13)
\displaystyle \displaystyle = \displaystyle \frac{\hbar^{2}}{4}\sqrt{\left(j+m\right)\left(j-m+1\right)}\sqrt{\left(j-m+1\right)\left(j+m\right)}+\ \ \ \ \ (14)
\displaystyle \displaystyle \displaystyle \frac{\hbar^{2}}{4}\sqrt{\left(j-m\right)\left(j+m+1\right)}\sqrt{\left(j+m+1\right)\left(j-m\right)}\ \ \ \ \ (15)
\displaystyle \displaystyle = \displaystyle \frac{\hbar^{2}}{4}\left(\left(j+m\right)\left(j-m+1\right)+\left(j-m\right)\left(j+m+1\right)\right)\ \ \ \ \ (16)
\displaystyle \displaystyle = \displaystyle \frac{\hbar^{2}}{4}\left(j^{2}-m^{2}+j+m+j^{2}-m^{2}+j-m\right)\ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle \frac{\hbar^{2}}{2}\left(j\left(j+1\right)-m^{2}\right) \ \ \ \ \ (18)

From 10 we see that the only terms that contribute to {\left\langle jm\left|J_{y}^{2}\right|jm\right\rangle } are the same as the corresponding terms in {\left\langle jm\left|J_{x}^{2}\right|jm\right\rangle }, so the result is the same:

\displaystyle \left\langle jm\left|J_{y}^{2}\right|jm\right\rangle =\frac{\hbar^{2}}{2}\left(j\left(j+1\right)-m^{2}\right) \ \ \ \ \ (19)

We can check that {J_{x}} and {J_{y}} satisfy the uncertainty principle, as derived by Shankar. That is, we want to verify that

\displaystyle \Delta J_{x}\cdot\Delta J_{y}\ge\left|\left\langle jm\left|\left(J_{x}-\left\langle J_{x}\right\rangle \right)\left(J_{y}-\left\langle J_{y}\right\rangle \right)\right|jm\right\rangle \right| \ \ \ \ \ (20)

On the LHS

\displaystyle \Delta J_{x} \displaystyle = \displaystyle \sqrt{\left\langle J_{x}^{2}\right\rangle -\left\langle J_{x}\right\rangle ^{2}}\ \ \ \ \ (21)
\displaystyle \displaystyle = \displaystyle \sqrt{\left\langle J_{x}^{2}\right\rangle }\ \ \ \ \ (22)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{\hbar^{2}}{2}\left(j\left(j+1\right)-m^{2}\right)}\ \ \ \ \ (23)
\displaystyle \Delta J_{y} \displaystyle = \displaystyle \sqrt{\frac{\hbar^{2}}{2}\left(j\left(j+1\right)-m^{2}\right)}\ \ \ \ \ (24)
\displaystyle \Delta J_{x}\cdot\Delta J_{y} \displaystyle = \displaystyle \frac{\hbar^{2}}{2}\left(j\left(j+1\right)-m^{2}\right) \ \ \ \ \ (25)

On the RHS

\displaystyle \left|\left\langle jm\left|\left(J_{x}-\left\langle J_{x}\right\rangle \right)\left(J_{y}-\left\langle J_{y}\right\rangle \right)\right|jm\right\rangle \right|=\left|\left\langle jm\left|J_{x}J_{y}\right|jm\right\rangle \right| \ \ \ \ \ (26)

Using the same technique as that above for deriving {\left\langle jm\left|J_{x}^{2}\right|jm\right\rangle } we have

\displaystyle \left\langle jm\left|J_{x}J_{y}\right|jm\right\rangle \displaystyle = \displaystyle \frac{1}{4i}\left\langle jm\left|\left(J_{+}+J_{-}\right)\left(J_{+}-J_{-}\right)\right|jm\right\rangle \ \ \ \ \ (27)
\displaystyle \displaystyle = \displaystyle \frac{1}{4i}\left\langle jm\left|J_{-}J_{+}-J_{+}J_{-}\right|jm\right\rangle \ \ \ \ \ (28)
\displaystyle \displaystyle = \displaystyle \frac{\hbar^{2}}{4i}\left(\left(j-m\right)\left(j+m+1\right)-\left(j+m\right)\left(j-m+1\right)\right)\ \ \ \ \ (29)
\displaystyle \displaystyle = \displaystyle -\frac{\hbar^{2}m}{2i} \ \ \ \ \ (30)

Taking the absolute value of this gives {\hbar^{2}\left|m\right|/2}, so comparing with 25 and cancelling the common factor of {\hbar^{2}/2}, we need to verify that

\displaystyle j\left(j+1\right)-m^{2}\ge\left|m\right| \ \ \ \ \ (31)

for all allowed values of {m}. We know that {-j\le m\le+j}, so

\displaystyle j\left(j+1\right)-m^{2}\ge j^{2}+j-j^{2}=j\ge\left|m\right| \ \ \ \ \ (32)

Thus the inequality is indeed satisfied.

In the case {\left|m\right|=j} we have

\displaystyle j\left(j+1\right)-j^{2}=j=\left|m\right| \ \ \ \ \ (33)

so the inequality saturates (becomes an equality) in that case.
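
These results are easy to verify numerically. The sketch below (my own Python check, not from Shankar) builds {J_{\pm}} from 2 for a chosen {j}, forms {J_{x}} and {J_{y}}, and confirms 18, 19 and the uncertainty relation 20 for every {m} (with {\hbar=1}):

```python
import numpy as np

def angular_momentum(j, hbar=1.0):
    """Return Jx, Jy and the m values in the |j,m> basis, ordered m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    dim = len(m)
    Jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        # J+|j,m> = hbar sqrt((j-m)(j+m+1)) |j,m+1>; the state with m = m[k] sits in column k
        Jp[k - 1, k] = hbar * np.sqrt((j - m[k]) * (j + m[k] + 1))
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), m

j = 2
Jx, Jy, ms = angular_momentum(j)
for k, m in enumerate(ms):
    e = np.zeros(len(ms)); e[k] = 1.0                 # the state |j,m>
    Jx2 = (e @ Jx @ Jx @ e).real                      # <jm|Jx^2|jm>
    Jy2 = (e @ Jy @ Jy @ e).real
    rhs = abs(e @ Jx @ Jy @ e)                        # |<jm|Jx Jy|jm>| = |m|/2
    print(m, np.isclose(Jx2, (j * (j + 1) - m**2) / 2),   # eq 18
          np.isclose(Jy2, Jx2),                           # eq 19
          np.sqrt(Jx2 * Jy2) + 1e-12 >= rhs)              # uncertainty relation 20
```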

Uncertainty principle – Shankar’s more general treatment

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 9.

Shankar’s derivation of the general uncertainty principle relating the variances of two Hermitian operators actually gives a different result from that in Griffiths. To follow this post, you should first review the earlier post. To keep things consistent I’ll use the original Griffiths notation up to equation 11, which is a summary of the earlier post.

Shankar’s derivation is the same as Griffiths’s up to equation (13) in the earlier post. To summarize, we have two operators {\hat{A}} and {\hat{B}} and calculate their variances as

\displaystyle   \sigma_{A}^{2} \displaystyle  = \displaystyle  \left\langle \Psi|(\hat{A}-\left\langle A\right\rangle )^{2}\Psi\right\rangle \ \ \ \ \ (1)
\displaystyle  \displaystyle  = \displaystyle  \left\langle \left(\hat{A}-\left\langle A\right\rangle \right)\Psi|\left(\hat{A}-\left\langle A\right\rangle \right)\Psi\right\rangle \ \ \ \ \ (2)
\displaystyle  \displaystyle  \equiv \displaystyle  \left\langle f|f\right\rangle \ \ \ \ \ (3)

where the function {f} is defined by this equation.

Similarly, for {\hat{B}}:

\displaystyle   \sigma_{B}^{2} \displaystyle  = \displaystyle  \left\langle \Psi|(\hat{B}-\left\langle B\right\rangle )^{2}\Psi\right\rangle \ \ \ \ \ (4)
\displaystyle  \displaystyle  = \displaystyle  \left\langle \left(\hat{B}-\left\langle B\right\rangle \right)\Psi|\left(\hat{B}-\left\langle B\right\rangle \right)\Psi\right\rangle \ \ \ \ \ (5)
\displaystyle  \displaystyle  \equiv \displaystyle  \left\langle g|g\right\rangle \ \ \ \ \ (6)

We now invoke the Schwarz inequality to say

\displaystyle   \sigma_{A}^{2}\sigma_{B}^{2} \displaystyle  = \displaystyle  \left\langle f|f\right\rangle \left\langle g|g\right\rangle \ \ \ \ \ (7)
\displaystyle  \displaystyle  \ge \displaystyle  |\left\langle f|g\right\rangle |^{2} \ \ \ \ \ (8)

At this point, Griffiths continues by saying that

\displaystyle  |\left\langle f|g\right\rangle |^{2}\ge\left(\Im\left\langle f\left|g\right.\right\rangle \right)^{2} \ \ \ \ \ (9)

That is, he throws away the real part of {\left\langle f\left|g\right.\right\rangle } to get another inequality. Shankar retains the full complex number and thus states that

\displaystyle   |\left\langle f|g\right\rangle |^{2} \displaystyle  = \displaystyle  \left|\left\langle \left(\hat{A}-\left\langle A\right\rangle \right)\Psi|\left(\hat{B}-\left\langle B\right\rangle \right)\Psi\right\rangle \right|^{2}\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  \left|\left\langle \Psi\left|\left(\hat{A}-\left\langle A\right\rangle \right)\left(\hat{B}-\left\langle B\right\rangle \right)\right|\Psi\right\rangle \right|^{2} \ \ \ \ \ (11)

Defining the operators

\displaystyle   \hat{\Omega} \displaystyle  \equiv \displaystyle  \hat{A}-\left\langle A\right\rangle \ \ \ \ \ (12)
\displaystyle  \hat{\Lambda} \displaystyle  \equiv \displaystyle  \hat{B}-\left\langle B\right\rangle \ \ \ \ \ (13)

we have

\displaystyle   |\left\langle f|g\right\rangle |^{2} \displaystyle  = \displaystyle  \left|\left\langle \Psi\left|\hat{\Omega}\hat{\Lambda}\right|\Psi\right\rangle \right|^{2}\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{4}\left|\left\langle \Psi\left|\left[\hat{\Omega},\hat{\Lambda}\right]_{+}+\left[\hat{\Omega},\hat{\Lambda}\right]\right|\Psi\right\rangle \right|^{2} \ \ \ \ \ (15)

where

\displaystyle  \left[\hat{\Omega},\hat{\Lambda}\right]_{+}\equiv\hat{\Omega}\hat{\Lambda}+\hat{\Lambda}\hat{\Omega} \ \ \ \ \ (16)

is the anticommutator. For two Hermitian operators, the expectation value of the commutator is the difference between a complex number and its complex conjugate, so it is always pure imaginary, while the expectation value of the anticommutator is always real. The commutator of two Hermitian operators is thus anti-Hermitian, and we can write it as

\displaystyle  \left[\hat{\Omega},\hat{\Lambda}\right]=i\Gamma \ \ \ \ \ (17)

for some Hermitian operator {\Gamma}. Since the expectation value of the anticommutator is real while that of the commutator is pure imaginary, the squared modulus in 15 is just the sum of the squares of these two contributions, and we arrive at

\displaystyle  \sigma_{A}^{2}\sigma_{B}^{2}\ge|\left\langle f|g\right\rangle |^{2}\ge\frac{1}{4}\left\langle \Psi\left|\left[\hat{\Omega},\hat{\Lambda}\right]_{+}\right|\Psi\right\rangle ^{2}+\frac{1}{4}\left\langle \Psi\left|\Gamma\right|\Psi\right\rangle ^{2} \ \ \ \ \ (18)

For comparison, Griffiths’s result was

\displaystyle  \sigma_{A}^{2}\sigma_{B}^{2}\ge\left(\frac{1}{2i}\left\langle [\hat{A},\hat{B}]\right\rangle \right)^{2}=\frac{1}{4}\left\langle \Psi\left|\Gamma\right|\Psi\right\rangle ^{2} \ \ \ \ \ (19)

That is, Griffiths’s uncertainty principle is actually weaker than Shankar’s as he includes only the last term in 18. For canonically conjugate operators (such as {X} and {P}) the commutator is always

\displaystyle  \left[X,P\right]=i\hbar \ \ \ \ \ (20)

so the last term in 18 is always {\hbar^{2}/4} for any wave function {\Psi}. The first term in 18, which involves the anticommutator, will, in general, depend on the wave function {\Psi}, but it is always positive (or zero), so we can still state that, for such operators

\displaystyle  \sigma_{A}^{2}\sigma_{B}^{2}\ge\frac{\hbar^{2}}{4} \ \ \ \ \ (21)
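
As a concrete check of 18 versus 19 (my own sketch, using the spin-1/2 operators {S_{x}} and {S_{y}} and a randomly chosen state, none of which appear in Shankar’s derivation):

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
ev = lambda op: psi.conj() @ op @ psi      # expectation value <psi|op|psi>

A, B = Sx, Sy
varA = (ev(A @ A) - ev(A)**2).real         # sigma_A^2
varB = (ev(B @ B) - ev(B)**2).real
Om = A - ev(A) * np.eye(2)                 # Omega = A - <A>
La = B - ev(B) * np.eye(2)                 # Lambda = B - <B>
anti = ev(Om @ La + La @ Om).real          # <anticommutator>, real
comm = ev(A @ B - B @ A)                   # <commutator>, pure imaginary

shankar = anti**2 / 4 + abs(comm)**2 / 4   # right-hand side of 18
griffiths = abs(comm)**2 / 4               # right-hand side of 19
print(varA * varB >= shankar >= griffiths) # True: Shankar's bound is the sharper one
```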

Uncertainty principle and an estimate of the ground state energy of hydrogen

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 9, Exercise 9.4.3.

The uncertainty principle can be used to get an estimate of the ground state energy in some systems. In his section 9.4, Shankar shows how this is done for the hydrogen atom, treating the system as a proper 3-d system.

A somewhat simpler analysis can be done by treating the hydrogen atom as a one-dimensional system. The Hamiltonian is

\displaystyle  H=\frac{P^{2}}{2m}-\frac{e^{2}}{\left(R^{2}\right)^{1/2}} \ \ \ \ \ (1)

where {m} and {e} are the mass and charge of the electron. The operators {P} and {R} stand for the 3-d momentum and position:

\displaystyle   P^{2} \displaystyle  = \displaystyle  P_{x}^{2}+P_{y}^{2}+P_{z}^{2}\ \ \ \ \ (2)
\displaystyle  R^{2} \displaystyle  = \displaystyle  X^{2}+Y^{2}+Z^{2} \ \ \ \ \ (3)

If we ignore the expansions of {P^{2}} and {R^{2}} and treat the Hamiltonian as a function of the operators {P} and {R} on their own, we can use the uncertainty principle to get a bound on the ground state energy. By analogy with one-dimensional position and momentum, we assume that the uncertainties are related by

\displaystyle  \Delta P\cdot\Delta R\ge\frac{\hbar}{2} \ \ \ \ \ (4)

By using coordinates such that the hydrogen atom is centred at the origin, and from the spherical symmetry of the ground state, we have

\displaystyle   \left(\Delta P\right)^{2} \displaystyle  = \displaystyle  \left\langle P^{2}\right\rangle -\left\langle P\right\rangle ^{2}=\left\langle P^{2}\right\rangle \ \ \ \ \ (5)
\displaystyle  \left(\Delta R\right)^{2} \displaystyle  = \displaystyle  \left\langle R^{2}\right\rangle -\left\langle R\right\rangle ^{2}=\left\langle R^{2}\right\rangle \ \ \ \ \ (6)

We can then write 1 as

\displaystyle   \left\langle H\right\rangle \displaystyle  = \displaystyle  \frac{\left\langle P^{2}\right\rangle }{2m}-e^{2}\left\langle \frac{1}{\left(R^{2}\right)^{1/2}}\right\rangle \ \ \ \ \ (7)
\displaystyle  \displaystyle  \simeq \displaystyle  \frac{\left\langle P^{2}\right\rangle }{2m}-\frac{e^{2}}{\sqrt{\left\langle R^{2}\right\rangle }} \ \ \ \ \ (8)

where in the last line we used an argument similar to that considered earlier, in which we showed that, for a one-dimensional system,

\displaystyle  \left\langle \frac{1}{X^{2}}\right\rangle \simeq\frac{1}{\left\langle X^{2}\right\rangle } \ \ \ \ \ (9)

where the {\simeq} sign means ‘same order of magnitude’. We can now write the mean of the Hamiltonian in terms of the uncertainties:

\displaystyle   \left\langle H\right\rangle \displaystyle  \simeq \displaystyle  \frac{\left(\Delta P\right)^{2}}{2m}-\frac{e^{2}}{\Delta R}\ \ \ \ \ (10)
\displaystyle  \displaystyle  \gtrsim \displaystyle  \frac{\hbar^{2}}{8m\left(\Delta R\right)^{2}}-\frac{e^{2}}{\Delta R} \ \ \ \ \ (11)

We can now minimize {\left\langle H\right\rangle }:

\displaystyle   \frac{\partial\left\langle H\right\rangle }{\partial\left(\Delta R\right)} \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{4m\left(\Delta R\right)^{3}}+\frac{e^{2}}{\left(\Delta R\right)^{2}}=0\ \ \ \ \ (12)
\displaystyle  \Delta R \displaystyle  = \displaystyle  \frac{\hbar^{2}}{4me^{2}} \ \ \ \ \ (13)

This gives an estimate for the ground state energy of

\displaystyle  \left\langle H\right\rangle _{g.s.}\simeq-\frac{2me^{4}}{\hbar^{2}} \ \ \ \ \ (14)

The actual value is

\displaystyle  E_{0}=-\frac{me^{4}}{2\hbar^{2}} \ \ \ \ \ (15)

so our estimate is too large (in magnitude) by a factor of 4. For comparison, the estimate worked out by Shankar for the 3-d case is

\displaystyle  \left\langle H\right\rangle \gtrsim-\frac{2me^{4}}{9\hbar^{2}} \ \ \ \ \ (16)

This estimate is too small in magnitude by around a factor of 2.
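
The minimization in 12 and 13 can be reproduced symbolically; here’s a small sympy sketch of my own (same Gaussian-unit Hamiltonian, with everything kept symbolic):

```python
import sympy as sp

hbar, m, e, dR = sp.symbols('hbar m e DeltaR', positive=True)

H = hbar**2 / (8 * m * dR**2) - e**2 / dR     # eq 11
dR_min = sp.solve(sp.diff(H, dR), dR)[0]      # eq 13
print(dR_min)                                 # hbar**2/(4*e**2*m)
print(sp.simplify(H.subs(dR, dR_min)))        # -2*e**4*m/hbar**2, i.e. eq 14
```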

Uncertainties in the harmonic oscillator and hydrogen atom

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 9, Exercises 9.4.1 – 9.4.2.

Here we’ll look at a couple of calculations relevant to the application of the uncertainty principle to the hydrogen atom. When calculating uncertainties, we need to find the average values of various quantities. First, we’ll look at an average in the case of the harmonic oscillator.

The harmonic oscillator eigenstates are

\displaystyle \psi_{n}(x)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^{n}n!}}H_{n}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (1)

where {H_{n}} is the {n}th Hermite polynomial. For {n=1,} we have

\displaystyle H_{1}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)=2\sqrt{\frac{m\omega}{\hbar}}x \ \ \ \ \ (2)

so

\displaystyle \psi_{1}(x)=\frac{\sqrt{2}}{\pi^{1/4}}\left(\frac{m\omega}{\hbar}\right)^{3/4}x\;e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (3)

For this state, we can calculate the average

\displaystyle \left\langle \frac{1}{X^{2}}\right\rangle \displaystyle = \displaystyle \int_{-\infty}^{\infty}\psi_{1}^{2}(x)\frac{1}{x^{2}}dx\ \ \ \ \ (4)
\displaystyle \displaystyle = \displaystyle \frac{2}{\sqrt{\pi}}\left(\frac{m\omega}{\hbar}\right)^{3/2}\int_{-\infty}^{\infty}e^{-m\omega x^{2}/\hbar}dx\ \ \ \ \ (5)
\displaystyle \displaystyle = \displaystyle \frac{2}{\sqrt{\pi}}\left(\frac{m\omega}{\hbar}\right)^{3/2}\sqrt{\frac{\pi\hbar}{m\omega}}\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \frac{2m\omega}{\hbar} \ \ \ \ \ (7)

where we evaluated the Gaussian integral in the second line.

We can compare this to {1/\left\langle X^{2}\right\rangle } as follows:

\displaystyle \left\langle X^{2}\right\rangle \displaystyle = \displaystyle \int_{-\infty}^{\infty}\psi_{1}^{2}(x)x^{2}dx\ \ \ \ \ (8)
\displaystyle \displaystyle = \displaystyle \frac{2}{\sqrt{\pi}}\left(\frac{m\omega}{\hbar}\right)^{3/2}\int_{-\infty}^{\infty}e^{-m\omega x^{2}/\hbar}x^{4}dx\ \ \ \ \ (9)
\displaystyle \displaystyle = \displaystyle \frac{2}{\sqrt{\pi}}\left(\frac{m\omega}{\hbar}\right)^{3/2}\frac{3\sqrt{\pi}}{4}\left(\frac{\hbar}{m\omega}\right)^{5/2}\ \ \ \ \ (10)
\displaystyle \displaystyle = \displaystyle \frac{3}{2}\frac{\hbar}{m\omega}\ \ \ \ \ (11)
\displaystyle \frac{1}{\left\langle X^{2}\right\rangle } \displaystyle = \displaystyle \frac{2}{3}\frac{m\omega}{\hbar} \ \ \ \ \ (12)

Thus {\left\langle \frac{1}{X^{2}}\right\rangle } and {\frac{1}{\left\langle X^{2}\right\rangle }} have the same order of magnitude, although they are not equal.
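
Both oscillator averages can be confirmed with a few lines of sympy (my own sketch; the combination {m\omega/\hbar} is lumped into a single positive symbol {b}):

```python
import sympy as sp

x = sp.symbols('x', real=True)
b = sp.symbols('b', positive=True)     # b = m*omega/hbar

psi1_sq = 2 / sp.sqrt(sp.pi) * b**sp.Rational(3, 2) * x**2 * sp.exp(-b * x**2)

print(sp.integrate(psi1_sq, (x, -sp.oo, sp.oo)))           # 1: the state is normalized
print(sp.integrate(psi1_sq / x**2, (x, -sp.oo, sp.oo)))    # 2*b, i.e. eq 7
x2 = sp.integrate(psi1_sq * x**2, (x, -sp.oo, sp.oo))      # 3/(2*b), i.e. eq 11
print(sp.simplify(1 / x2))                                 # 2*b/3, i.e. eq 12
```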

In three dimensions, we consider the ground state of hydrogen

\displaystyle \psi_{100}\left(r\right)=\frac{1}{\sqrt{\pi}a_{0}^{3/2}}e^{-r/a_{0}} \ \ \ \ \ (13)

where {a_{0}} is the Bohr radius

\displaystyle a_{0}\equiv\frac{\hbar^{2}}{me^{2}} \ \ \ \ \ (14)

with {m} and {e} being the mass and charge of the electron. The wave function is normalized as we can see by doing the integral (in 3 dimensions):

\displaystyle \int\psi_{100}^{2}(r)d^{3}\mathbf{r} \displaystyle = \displaystyle \frac{4\pi}{\pi a_{0}^{3}}\int_{0}^{\infty}e^{-2r/a_{0}}r^{2}dr \ \ \ \ \ (15)

We can use the formula (given in Shankar’s Appendix 2)

\displaystyle \int_{0}^{\infty}e^{-r/\alpha}r^{n}dr=n!\,\alpha^{n+1} \ \ \ \ \ (16)

We get

\displaystyle \int\psi_{100}^{2}(r)d^{3}\mathbf{r}=\frac{4\pi}{\pi a_{0}^{3}}\frac{2!}{2^{3}}a_{0}^{3}=1 \ \ \ \ \ (17)

as required.

For a spherically symmetric wave function centred at {r=0},

\displaystyle \left(\Delta X\right)^{2}=\left\langle X^{2}\right\rangle -\left\langle X\right\rangle ^{2}=\left\langle X^{2}\right\rangle \ \ \ \ \ (18)

with identical relations for {Y} and {Z}. Since

\displaystyle r^{2} \displaystyle = \displaystyle x^{2}+y^{2}+z^{2}\ \ \ \ \ (19)
\displaystyle \left\langle r^{2}\right\rangle \displaystyle = \displaystyle \left\langle x^{2}\right\rangle +\left\langle y^{2}\right\rangle +\left\langle z^{2}\right\rangle =3\left\langle X^{2}\right\rangle \ \ \ \ \ (20)
\displaystyle \left\langle X^{2}\right\rangle \displaystyle = \displaystyle \frac{1}{3}\left\langle r^{2}\right\rangle \ \ \ \ \ (21)

Thus

\displaystyle \left\langle X^{2}\right\rangle \displaystyle = \displaystyle \frac{1}{3}\int\psi_{100}^{2}(r)r^{2}d^{3}\mathbf{r}\ \ \ \ \ (22)
\displaystyle \displaystyle = \displaystyle \frac{4\pi}{3\pi a_{0}^{3}}\int_{0}^{\infty}e^{-2r/a_{0}}r^{4}dr\ \ \ \ \ (23)
\displaystyle \displaystyle = \displaystyle \frac{4}{3a_{0}^{3}}\frac{4!}{2^{5}}a_{0}^{5}\ \ \ \ \ (24)
\displaystyle \displaystyle = \displaystyle a_{0}^{2}\ \ \ \ \ (25)
\displaystyle \Delta X \displaystyle = \displaystyle a_{0}=\frac{\hbar^{2}}{me^{2}} \ \ \ \ \ (26)

We can also find

\displaystyle \left\langle \frac{1}{r}\right\rangle \displaystyle = \displaystyle \int\psi_{100}^{2}(r)\frac{1}{r}d^{3}\mathbf{r}\ \ \ \ \ (27)
\displaystyle \displaystyle = \displaystyle \frac{4\pi}{\pi a_{0}^{3}}\int_{0}^{\infty}e^{-2r/a_{0}}r\;dr\ \ \ \ \ (28)
\displaystyle \displaystyle = \displaystyle \frac{4}{a_{0}^{3}}\frac{a_{0}^{2}}{4}\ \ \ \ \ (29)
\displaystyle \displaystyle = \displaystyle \frac{1}{a_{0}}\ \ \ \ \ (30)
\displaystyle \left\langle r\right\rangle \displaystyle = \displaystyle \int\psi_{100}^{2}(r)r\;d^{3}\mathbf{r}\ \ \ \ \ (31)
\displaystyle \displaystyle = \displaystyle \frac{4\pi}{\pi a_{0}^{3}}\int_{0}^{\infty}e^{-2r/a_{0}}r^{3}dr\ \ \ \ \ (32)
\displaystyle \displaystyle = \displaystyle \frac{4}{a_{0}^{3}}\frac{6a_{0}^{4}}{16}\ \ \ \ \ (33)
\displaystyle \displaystyle = \displaystyle \frac{3}{2}a_{0} \ \ \ \ \ (34)

Thus both {\left\langle \frac{1}{r}\right\rangle } and {\frac{1}{\left\langle r\right\rangle }} are of the same order of magnitude as {1/a_{0}=me^{2}/\hbar^{2}}.
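
The hydrogen averages follow from the same kind of sympy calculation (again my own sketch, with the angular integration supplying the factor {4\pi r^{2}}):

```python
import sympy as sp

r, a0 = sp.symbols('r a_0', positive=True)

psi_sq = sp.exp(-2 * r / a0) / (sp.pi * a0**3)   # |psi_100|^2
avg = lambda f: sp.integrate(f * psi_sq * 4 * sp.pi * r**2, (r, 0, sp.oo))

print(avg(1))          # 1: normalization, eq 17
print(avg(r**2) / 3)   # a_0**2 = <X^2>, eq 25
print(avg(1 / r))      # 1/a_0, eq 30
print(avg(r))          # 3*a_0/2, eq 34
```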

Harmonic oscillator – zero-point energy from uncertainty principle

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 7.3.

There is a nice result derived in Shankar’s section 7.3 in which he shows that we can actually derive the ground state energy and wave function for the harmonic oscillator from the uncertainty principle. Classically, the energy of a harmonic oscillator is

\displaystyle H=\frac{p^{2}}{2m}+\frac{1}{2}m\omega^{2}x^{2} \ \ \ \ \ (1)

where both {p} and {x} are continuous variables that can, in principle, take on any values. Thus classically it is possible for an oscillator to have {x=p=0}, giving a ground state with zero energy. In quantum mechanics, because {X} and {P} don’t commute, the position and momentum cannot both have precise values, which means that the ground state must have an energy greater than zero. This so-called zero-point energy is (as found by solving Schrödinger’s equation)

\displaystyle E_{0}=\frac{\hbar\omega}{2} \ \ \ \ \ (2)

To derive this without needing to solve Schrödinger’s equation, we first recall that a state in which the position-momentum uncertainty product is a minimum must be a gaussian of the form

\displaystyle \Psi\left(x\right)=Ae^{-a(x-\langle x\rangle)^{2}/2\hbar}e^{i\langle p\rangle x/\hbar} \ \ \ \ \ (3)

where {a} is a positive real constant, {A} is the normalization constant, {\left\langle x\right\rangle } is the mean position and {\left\langle p\right\rangle } is the mean momentum. For a harmonic oscillator centred at {x=0}, we have that both {\left\langle x\right\rangle =\left\langle p\right\rangle =0}, so we know that the ground state wave function has the form

\displaystyle \psi\left(x\right)=Ae^{-ax^{2}/2\hbar} \ \ \ \ \ (4)

 

To normalize this we require (assuming {A} is real)

\displaystyle \int_{-\infty}^{\infty}\psi^{2}\left(x\right)dx=1 \ \ \ \ \ (5)

Using the standard result for a gaussian integral (see Appendix 2 in Shankar or use Google)

\displaystyle \int_{-\infty}^{\infty}\psi^{2}\left(x\right)dx \displaystyle = \displaystyle A^{2}\int_{-\infty}^{\infty}e^{-ax^{2}/\hbar}dx\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle A^{2}\sqrt{\frac{\pi\hbar}{a}} \ \ \ \ \ (7)

Therefore

\displaystyle A=\left(\frac{a}{\pi\hbar}\right)^{1/4} \ \ \ \ \ (8)

We need to find {a} such that {\Delta X\Delta P} is minimized. The harmonic oscillator hamiltonian is

\displaystyle H=\frac{P^{2}}{2m}+\frac{1}{2}m\omega^{2}X^{2} \ \ \ \ \ (9)

 

Since {\left\langle X\right\rangle =\left\langle P\right\rangle =0}, the uncertainties become

\displaystyle \left(\Delta X\right)^{2} \displaystyle = \displaystyle \left\langle X^{2}\right\rangle -\left\langle X\right\rangle ^{2}=\left\langle X^{2}\right\rangle \ \ \ \ \ (10)
\displaystyle \left(\Delta P\right)^{2} \displaystyle = \displaystyle \left\langle P^{2}\right\rangle -\left\langle P\right\rangle ^{2}=\left\langle P^{2}\right\rangle \ \ \ \ \ (11)

Averaging 9 we get

\displaystyle \left\langle H\right\rangle \displaystyle = \displaystyle \frac{\left\langle P^{2}\right\rangle }{2m}+\frac{1}{2}m\omega^{2}\left\langle X^{2}\right\rangle \ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle \frac{\left(\Delta P\right)^{2}}{2m}+\frac{1}{2}m\omega^{2}\left(\Delta X\right)^{2} \ \ \ \ \ (13)

At minimum uncertainty

\displaystyle \Delta X\Delta P=\frac{\hbar}{2} \ \ \ \ \ (14)

so we have

\displaystyle \Delta P \displaystyle = \displaystyle \frac{\hbar}{2\Delta X}\ \ \ \ \ (15)
\displaystyle \left\langle H\right\rangle \displaystyle = \displaystyle \frac{\hbar^{2}}{8m\left(\Delta X\right)^{2}}+\frac{1}{2}m\omega^{2}\left(\Delta X\right)^{2} \ \ \ \ \ (16)

The minimum energy can now be found by finding the value of {\left(\Delta X\right)^{2}} that minimizes this function. Treating {\left(\Delta X\right)^{2}} (not just {\Delta X}) as the independent variable, we have

\displaystyle \frac{\partial\left\langle H\right\rangle }{\partial\left(\Delta X\right)^{2}} \displaystyle = \displaystyle -\frac{\hbar^{2}}{8m\left[\left(\Delta X\right)^{2}\right]^{2}}+\frac{1}{2}m\omega^{2}\ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle -\frac{\hbar^{2}}{8m\left(\Delta X\right)^{4}}+\frac{1}{2}m\omega^{2}=0\ \ \ \ \ (18)
\displaystyle \left(\Delta X\right)^{2} \displaystyle = \displaystyle \frac{\hbar}{2m\omega} \ \ \ \ \ (19)

This gives a minimum value for the mean energy of

\displaystyle \left\langle H\right\rangle _{min}=\frac{\hbar\omega}{2} \ \ \ \ \ (20)

To complete the derivation, we need to find the gaussian 4 that gives the correct value 19 for {\left(\Delta X\right)^{2}}. That is, we need to find {a} such that

\displaystyle \left(\Delta X\right)^{2}=\left\langle X^{2}\right\rangle =\frac{\hbar}{2m\omega} \ \ \ \ \ (21)

This requires doing another gaussian integral:

\displaystyle \left\langle X^{2}\right\rangle \displaystyle = \displaystyle \int_{-\infty}^{\infty}x^{2}\psi^{2}\left(x\right)dx\ \ \ \ \ (22)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{a}{\pi\hbar}}\int_{-\infty}^{\infty}x^{2}e^{-ax^{2}/\hbar}dx\ \ \ \ \ (23)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{a}{\pi\hbar}}\sqrt{\frac{\pi\hbar}{a}}\frac{\hbar}{2a}\ \ \ \ \ (24)
\displaystyle \displaystyle = \displaystyle \frac{\hbar}{2a} \ \ \ \ \ (25)

We therefore get

\displaystyle \frac{\hbar}{2a} \displaystyle = \displaystyle \frac{\hbar}{2m\omega}\ \ \ \ \ (26)
\displaystyle a \displaystyle = \displaystyle m\omega \ \ \ \ \ (27)

which gives a normalized minimum energy wave function

\displaystyle \psi_{min}\left(x\right)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (28)

 

This is the lowest possible value for the energy, but is it actually the ground state energy? What we have shown so far is that

\displaystyle \left\langle \psi_{min}\left|H\right|\psi_{min}\right\rangle \le\left\langle \psi_{0}\left|H\right|\psi_{0}\right\rangle =E_{0} \ \ \ \ \ (29)

 

where {\left|\psi_{0}\right\rangle } is the ground state. However, we can invoke the variational principle, which states that if {\psi} is any normalized function, then the ground state energy {E_{0}} of any hamiltonian {H} satisfies

\displaystyle E_{0}\le\left\langle \psi\left|H\right|\psi\right\rangle \ \ \ \ \ (30)

Using {\psi=\psi_{min}} we therefore have

\displaystyle E_{0}\le\left\langle \psi_{min}\left|H\right|\psi_{min}\right\rangle \ \ \ \ \ (31)

 

Combining 29 and 31 we have

\displaystyle \left\langle \psi_{min}\left|H\right|\psi_{min}\right\rangle \le E_{0}\le\left\langle \psi_{min}\left|H\right|\psi_{min}\right\rangle \ \ \ \ \ (32)

which means that

\displaystyle E_{0}=\left\langle \psi_{min}\left|H\right|\psi_{min}\right\rangle \ \ \ \ \ (33)

and therefore that {\left|\psi_{0}\right\rangle =\left|\psi_{min}\right\rangle }, that is, 28 is actually the ground state wave function.

Although this clever little derivation gives us the ground state energy and wave function, it doesn’t say anything about the higher energy states, or tell us that they are all equally spaced with a spacing of {\hbar\omega}. Nevertheless, it’s a pleasant exercise.
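
The minimization 17 to 19 and the resulting zero-point energy 20 can be checked with a short sympy sketch (my own, treating {\left(\Delta X\right)^{2}} as a single symbol {s}):

```python
import sympy as sp

hbar, m, w, s = sp.symbols('hbar m omega s', positive=True)   # s = (Delta X)^2

H = hbar**2 / (8 * m * s) + m * w**2 * s / 2    # eq 16
s_min = sp.solve(sp.diff(H, s), s)[0]           # eq 19
print(s_min)                                    # hbar/(2*m*omega)
print(sp.simplify(H.subs(s, s_min)))            # hbar*omega/2, i.e. eq 20
```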

Multiplicity of a 2-dim ideal gas

Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 2.26.

Having looked at counting the number of microstates in systems like coin flipping and the Einstein solid, we can now look at counting the microstates in an ideal gas.

At first glance, this might seem to be impossible, since in classical physics at least, a gas molecule confined within a volume {V} with a fixed energy {U} can be in an infinite number of states: it could be at any location within the volume, and the components of its momentum could have any values subject to the constraint that its kinetic energy {p^{2}/2m} equals {U}. That’s true, and to be able to count the number of states of a gas molecule, we need to use quantum mechanics. Because of the uncertainty principle, the location and momentum of a molecule can be determined only up to regions {\Delta x} (in position space) and {\Delta p} (in momentum space) such that {\Delta x\Delta p\ge\hbar/2}. Schroeder’s derivation of the number of microstates available to an ideal gas begins with this principle, or at least with the approximation

\displaystyle  \Delta x\Delta p\approx h \ \ \ \ \ (1)

Because the gas is confined within a volume {V}, the situation is somewhat similar to the particle in a box or infinite square well. A particle in such a potential well is restricted to discrete energy states, and also restricted to a location between the infinitely high barriers at either end. It might seem that for a state with fixed energy, the momentum would be precisely known, but in fact, because the particle can move either to the right or left, the momentum doesn’t have a fixed value, and a particle in such a fixed energy state does in fact satisfy the uncertainty principle.

Schroeder’s argument is that, in one dimension, a molecule can be localized to a particular region {\Delta x} in position space and a particular momentum interval {\Delta p} in momentum space, so if the particle is confined to a location range {L} and momentum range {L_{p}}, the number of available states in position space is {L/\Delta x} and in momentum space is {L_{p}/\Delta p}, and, since the momentum and position ranges are independent, the total number of microstates is

\displaystyle  \Omega=\frac{LL_{p}}{\Delta x\Delta p}=\frac{LL_{p}}{h} \ \ \ \ \ (2)

Schroeder then develops the theory for a 3-d ideal gas in detail so we won’t go through that again here. Rather, we’ll look at an analogous 2-dimensional case. In 2-d, we can imagine the molecule confined to an area {A} in position space and another area {A_{p}} in momentum space. Since the energy of the molecule is fixed, the momentum space is constrained by the condition

\displaystyle  p_{x}^{2}+p_{y}^{2}=2mU \ \ \ \ \ (3)

which is the equation of a circle. Thus the ‘area’ {A_{p}} is actually the circumference of the circle:

\displaystyle  A_{p}=2\pi\sqrt{2mU} \ \ \ \ \ (4)

Since there is an uncertainty of {h} for the products of position and momentum in each of the two dimensions, we get

\displaystyle  \Omega=\frac{A}{h^{2}}2\pi\sqrt{2mU} \ \ \ \ \ (5)

To generalize this to the case with {N} gas molecules, we note that in position space, the locations of all the molecules are independent, so we’ll get a factor of {A^{N}/h^{2N}}. The momenta are constrained by the condition

\displaystyle  \sum_{i=1}^{N}\left(p_{i_{x}}^{2}+p_{i_{y}}^{2}\right)=2mU \ \ \ \ \ (6)

where the sum index {i} extends over the {N} molecules, so the momenta are not independent. The total ‘volume’ of momentum space is actually the surface ‘area’ of a {2N}-dimensional hypersphere, which Schroeder derives in his Appendix B and gives as

\displaystyle  \mbox{area}=\frac{2\pi^{d/2}}{\left(\frac{d}{2}-1\right)!}r^{d-1} \ \ \ \ \ (7)

where {r} is the radius and {d} is the dimension. Thus we get, using {d=2N} and {r=\sqrt{2mU}}:

\displaystyle  \Omega=\frac{A^{N}}{h^{2N}}\frac{2\pi^{N}}{\left(N-1\right)!}\left(\sqrt{2mU}\right)^{2N-1} \ \ \ \ \ (8)

This would be the formula if the gas molecules were distinguishable. However, one of the principles of quantum mechanics is that elementary particles of the same type are actually identical. In that case, with {N} molecules, interchanging any pair of them will leave the microstate unchanged, so the above formula actually overcounts the number of states by {N!}. The actual number of microstates for a 2-d ideal gas of indistinguishable particles is therefore

\displaystyle  \Omega=\frac{A^{N}}{N!h^{2N}}\frac{2\pi^{N}}{\left(N-1\right)!}\left(\sqrt{2mU}\right)^{2N-1} \ \ \ \ \ (9)

Since {N} is a large number (on the order of {10^{23}}), we can write {\left(N-1\right)!=N!/N} and drop the leftover factor of {2N/\sqrt{2mU}}, which is negligible compared with the quantities raised to the power {N}, to approximate this by

\displaystyle  \Omega\approx\frac{\left(\pi A\right)^{N}}{\left(N!\right)^{2}h^{2N}}\left(\sqrt{2mU}\right)^{2N} \ \ \ \ \ (10)

In SI units, {h} is a small number (around {10^{-34}}) and {N} is a large number, so the number of microstates will typically be very large.
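
Since {\Omega} itself is far too large to evaluate directly, it’s more useful to work with {\ln\Omega}. Here’s a Python sketch of my own (the values of {A}, {N}, {U} and the molecular mass are purely illustrative choices, not from Schroeder) that evaluates {\ln\Omega} from 10 using the log-gamma function for the factorials:

```python
import numpy as np
from scipy.special import gammaln      # gammaln(N + 1) = ln(N!)

h = 6.626e-34     # J s
m = 6.6e-27       # kg, roughly a helium atom (illustrative)
A = 1.0           # m^2, area of the 2-d 'box' (illustrative)
N = 1e19          # number of molecules (illustrative)
U = 0.04          # J, total energy, ~N*kT at room temperature (illustrative)

# ln Omega from eq 10: Omega ~ (pi A)^N (2 m U)^N / ((N!)^2 h^(2N))
ln_omega = (N * np.log(np.pi * A) + N * np.log(2 * m * U)
            - 2 * gammaln(N + 1) - 2 * N * np.log(h))
print(ln_omega)   # ~6e19: a few units of ln(Omega) per molecule, so Omega itself is enormous
```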

Uncertainty principle: visualization with Fourier series

Reference: Carroll, Bradley W. & Ostlie, Dale A. (2007), An Introduction to Modern Astrophysics, 2nd Edition; Pearson Education – Chapter 5, Problem 5.18.

To represent a real free particle, we need to write its wave function as the superposition of plane waves of different wavelengths, in order that the overall wave function is normalizable. In general, we need to use a Fourier transform to do this (that is, we need to integrate over a continuous range of wavelengths). However, we can get a feel for the procedure by using a Fourier series instead, in which we sum over a finite number of discrete wavelengths.

We’ll have a look at the series:

\displaystyle   \Psi \displaystyle  = \displaystyle  \frac{2}{N+1}\left[\sin x-\sin3x+\sin5x-\ldots\pm\sin Nx\right]\ \ \ \ \ (1)
\displaystyle  \displaystyle  = \displaystyle  \frac{2}{N+1}\sum_{n=1,odd}^{N}\left(-1\right)^{\left(n-1\right)/2}\sin nx \ \ \ \ \ (2)

This defines a wave packet in the interval {x\in\left[0,\pi\right]} that peaks at {x=\pi/2}. Using Maple, we can generate plots of {\Psi} for various values of {N}; the central peak becomes narrower as {N} increases.

If we define the width of the central peak as the range {\Delta x} between the values of {x} for which {\Psi=0.5}, we can use Maple’s fsolve command to find these values of {x}. We get

{N} {\Delta x}
5 0.638
11 0.317
21 0.172
41 0.090

The location of a particle represented by the wave function {\Psi} is more accurately known for higher {N}. Conversely, the momentum is better known for lower {N}, since there are fewer wavelengths (hence, fewer energies and momenta) contributing to {\Psi} if {N} is smaller. This is a graphic representation of the position-momentum uncertainty principle.
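
The table above can be reproduced without Maple. Here’s a short Python sketch (my own; it uses scipy’s brentq in place of fsolve, with bracketing intervals chosen by inspection of the plots):

```python
import numpy as np
from scipy.optimize import brentq

def psi(x, N):
    """Equation 2: sum over odd n up to N of (-1)^((n-1)/2) sin(n x), times 2/(N+1)."""
    n = np.arange(1, N + 1, 2)
    return 2.0 / (N + 1) * np.sum((-1.0) ** ((n - 1) // 2) * np.sin(n * x))

for N in (5, 11, 21, 41):
    f = lambda x: psi(x, N) - 0.5
    left = brentq(f, 0.0, np.pi / 2)       # Psi = 0.5 to the left of the peak
    right = brentq(f, np.pi / 2, np.pi)    # and to the right
    print(N, f"{right - left:.3f}")        # widths 0.638, 0.317, 0.172, 0.090
```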

By the way, equation 2 defines a wave packet only for the region {0\le x\le\pi}. If we plot it over a larger interval, say {0\le x\le10\pi}, we get not a single packet but a repeating train of wave packets, one in each interval of length {\pi}, with alternating signs.

To define a single wave packet over an interval {0\le x\le A\pi} we need to modify 2 like this:

\displaystyle  \Psi=\frac{2}{N+1}\sum_{n=1,odd}^{N}\left(-1\right)^{\left(n-1\right)/2}\sin\frac{nx}{A} \ \ \ \ \ (3)

Uncertainty principle: a couple of examples from astronomy

Reference: Carroll, Bradley W. & Ostlie, Dale A. (2007), An Introduction to Modern Astrophysics, 2nd Edition; Pearson Education – Chapter 5, Problems 5.14 – 5.15.

The uncertainty principle relates the standard deviations of two observables {A} and {B} to the expectation value of their commutator:

\displaystyle \sigma_{A}^{2}\sigma_{B}^{2}\ge\left(\frac{1}{2i}\left\langle \left[A,B\right]\right\rangle \right)^{2} \ \ \ \ \ (1)

For position {x} and momentum {p}, {\left[x,p\right]=i\hbar} so

\displaystyle \sigma_{x}\sigma_{p}\ge\frac{\hbar}{2} \ \ \ \ \ (2)

Example 1 In white dwarf stars, atoms become crushed together so that electrons and protons are much closer to each other than in ordinary hydrogen gas, where the mean radius of the electron’s orbit is the Bohr radius of {a=5.29\times10^{-11}\mbox{ m}}. In a white dwarf, this distance gets compressed to around {\sigma_{x}\approx1.5\times10^{-12}\mbox{ m}}. With the electron’s location thus localized, its momentum must be uncertain by an amount

\displaystyle \sigma_{p}\approx\frac{\hbar}{2\times1.5\times10^{-12}}=3.5\times10^{-23}\mbox{ kg m s}^{-1} \ \ \ \ \ (3)

The average momentum must be at least of order {\sigma_{p}} (otherwise the range of values for {p} would include negative values), so the electron’s minimum speed (assuming non-relativistic speeds) is

\displaystyle v_{min}=\frac{\sigma_{p}}{m_{e}}=\frac{3.5\times10^{-23}}{9.10938291\times10^{-31}}=3.86\times10^{7}\mbox{ m s}^{-1} \ \ \ \ \ (4)

This is about {0.13c} so we should probably use relativity to calculate a better value, but at least it gives an idea of how fast electrons must be moving in a white dwarf.
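
The arithmetic of Example 1 in Python (a minimal sketch of my own, with rounded constants):

```python
hbar = 1.0546e-34    # J s
m_e = 9.109e-31      # kg
c = 3.0e8            # m / s

sigma_x = 1.5e-12                   # m, localization in the white dwarf
sigma_p = hbar / (2 * sigma_x)      # ~3.5e-23 kg m/s, eq 3
v_min = sigma_p / m_e               # ~3.9e7 m/s, eq 4
print(sigma_p, v_min, v_min / c)    # about 0.13 c
```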

The energy-time uncertainty relation is a bit more subtle, since time in non-relativistic quantum mechanics is not an observable property of a quantum state; rather it’s a background parameter on which the quantum state depends. The energy-time uncertainty relation is usually given as

\displaystyle \Delta E\Delta t\ge\frac{\hbar}{2} \ \ \ \ \ (5)

where

\displaystyle \Delta t\equiv\frac{\sigma_{Q}}{\left|d\left\langle Q\right\rangle /dt\right|} \ \ \ \ \ (6)

with {Q} representing some arbitrary observable on the system. That is, to first order, {\Delta t} is the time interval during which {\left\langle Q\right\rangle } changes by one standard deviation. For a time-independent hamiltonian and a given set of initial conditions, the probabilities of finding the system in any given energy state do not depend on time, so {\Delta E} is constant, and serves as a constraint on the time scale over which other observables {Q} can change.

Conversely, if we can measure {\Delta t} for some observable (that is, if we can measure how fast some parameter of the system changes), we can get an estimate of {\Delta E}.

Example 2 An electron in the first excited state decays to the ground state in a time interval of around {10^{-8}\mbox{ s}}, by emitting a photon. Since such a decay is a change in an observable property of the system, we can use this time as an estimate of {\Delta t} and use it to derive an estimate of {\Delta E}, the standard deviation of the excited state energy.

\displaystyle \Delta E\approx\frac{\hbar}{2\Delta t}=5.27\times10^{-27}\mbox{ J}=3.29\times10^{-8}\mbox{ eV} \ \ \ \ \ (7)

This gives rise to a spread of wavelengths for the emitted photon:

\displaystyle E \displaystyle = \displaystyle h\nu=\frac{hc}{\lambda}\ \ \ \ \ (8)
\displaystyle \left|\Delta E\right| \displaystyle = \displaystyle hc\frac{\Delta\lambda}{\lambda^{2}} \ \ \ \ \ (9)

The transition {2\rightarrow1} is the first spectral line in the Lyman series with a wavelength of {\lambda_{2\rightarrow1}=121.6\mbox{ nm}} so, using {hc=1240\mbox{ eV nm}} we have

\displaystyle \Delta\lambda=3.29\times10^{-8}\frac{\left(121.6\right)^{2}}{1240}=3.92\times10^{-7}\mbox{ nm} \ \ \ \ \ (10)

This natural broadening of spectral lines would seem to be negligible.
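
The natural line width of Example 2, computed the same way (my own sketch; {\hbar} is taken in eV s so that everything stays in eV and nm):

```python
hbar = 6.582e-16    # eV s
hc = 1240.0         # eV nm

dt = 1e-8                        # s, lifetime of the excited state
dE = hbar / (2 * dt)             # ~3.3e-8 eV, eq 7
lam = 121.6                      # nm, Lyman-alpha wavelength
print(dE, dE * lam**2 / hc)      # spread in wavelength ~3.9e-7 nm, eq 10
```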

Incidentally, this shows the inadequacy of the Schrödinger equation for studying the dynamics of electron energy levels, since the excited states of an atom are all eigenstates of the hamiltonian and thus should be stable. The fact that spontaneous decay occurs illustrates the need for quantum electrodynamics.

Uncertainty principle: an example

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.17.

Here’s another example of calculating the uncertainty principle. We have a wave function defined as

\displaystyle  \Psi\left(x,0\right)=\begin{cases} A\left(a^{2}-x^{2}\right) & -a\le x\le a\\ 0 & \mbox{otherwise} \end{cases} \ \ \ \ \ (1)

The constant {A} is determined by normalization in the usual way:

\displaystyle   \int_{-a}^{a}\left|\Psi\right|^{2}dx \displaystyle  = \displaystyle  1\ \ \ \ \ (2)
\displaystyle  \displaystyle  = \displaystyle  A^{2}\left.\left(\frac{x^{5}}{5}-\frac{2}{3}a^{2}x^{3}+a^{4}x\right)\right|_{-a}^{a}\ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  A^{2}\frac{16a^{5}}{15}\ \ \ \ \ (4)
\displaystyle  A \displaystyle  = \displaystyle  \frac{\sqrt{15}}{4a^{5/2}} \ \ \ \ \ (5)

The expectation value of {x} is {\left\langle x\right\rangle =0} from the symmetry of the wave function. The expectation value of {p} is

\displaystyle   \left\langle p\right\rangle \displaystyle  = \displaystyle  -i\hbar\int_{-a}^{a}\Psi^*\frac{\partial}{\partial x}\Psi dx\ \ \ \ \ (6)
\displaystyle  \displaystyle  = \displaystyle  -i\hbar\int_{-a}^{a}\left(-2Ax\right)A\left(a^{2}-x^{2}\right)dx\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  0 \ \ \ \ \ (8)

[We can’t calculate {\left\langle p\right\rangle =\frac{d}{dt}\left(m\left\langle x\right\rangle \right)} in this case, because we know the value of {\left\langle x\right\rangle } only at one specific time ({t=0}), so we don’t have enough information to calculate its derivative.]

The remaining statistics are (the integrals are all just integrals of polynomials, so nothing complicated):

\displaystyle   \left\langle x^{2}\right\rangle \displaystyle  = \displaystyle  \int_{-a}^{a}x^{2}\left|\Psi\right|^{2}dx\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  \frac{15}{16a^{5}}\left.\left(\frac{x^{7}}{7}-\frac{2}{5}a^{2}x^{5}+\frac{1}{3}a^{4}x^{3}\right)\right|_{-a}^{a}\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  \frac{a^{2}}{7}\ \ \ \ \ (11)
\displaystyle  \left\langle p^{2}\right\rangle \displaystyle  = \displaystyle  -\hbar^{2}\int_{-a}^{a}\Psi^*\frac{\partial^{2}}{\partial x^{2}}\Psi dx\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  \frac{15\hbar^{2}}{8a^{5}}\left.\left(a^{2}x-\frac{x^{3}}{3}\right)\right|_{-a}^{a}\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  \frac{5\hbar^{2}}{2a^{2}}\ \ \ \ \ (14)
\displaystyle  \sigma_{x} \displaystyle  = \displaystyle  \sqrt{\left\langle x^{2}\right\rangle -\left\langle x\right\rangle ^{2}}\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  \frac{a}{\sqrt{7}}\ \ \ \ \ (16)
\displaystyle  \sigma_{p} \displaystyle  = \displaystyle  \sqrt{\left\langle p^{2}\right\rangle -\left\langle p\right\rangle ^{2}}\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{5}{2}}\frac{\hbar}{a}\ \ \ \ \ (18)
\displaystyle  \sigma_{x}\sigma_{p} \displaystyle  = \displaystyle  \sqrt{\frac{5}{14}}\hbar\ \ \ \ \ (19)
\displaystyle  \displaystyle  \cong \displaystyle  0.598\hbar>\frac{\hbar}{2} \ \ \ \ \ (20)

Thus the uncertainty principle is satisfied in this case.
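
All of these integrals are quick to verify with sympy; here’s a minimal sketch of my own doing so:

```python
import sympy as sp

x = sp.symbols('x', real=True)
a, hbar = sp.symbols('a hbar', positive=True)

A = sp.sqrt(15) / (4 * a**sp.Rational(5, 2))
psi = A * (a**2 - x**2)                 # the wave function on [-a, a]

norm = sp.simplify(sp.integrate(psi**2, (x, -a, a)))        # 1: normalized, eq 5
x2 = sp.simplify(sp.integrate(x**2 * psi**2, (x, -a, a)))   # a**2/7, eq 11
p2 = sp.simplify(-hbar**2 * sp.integrate(psi * sp.diff(psi, x, 2), (x, -a, a)))  # 5*hbar**2/(2*a**2), eq 14
print(norm, x2, p2)
print(sp.simplify(sp.sqrt(x2 * p2) / hbar))   # sqrt(70)/14 = sqrt(5/14) ~ 0.598 > 1/2, eq 20
```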