Welcome to Physics Pages

This blog consists of my notes and solutions to problems in various areas of mainstream physics. An index to the topics covered is contained in the links in the sidebar on the right, or in the menu at the top of the page.

This isn’t a “popular science” site, in that most posts use a fair bit of mathematics to explain their concepts. Thus this blog aims mainly to help those who are learning or reviewing physics in depth. More details on what the site contains and how to use it are in the Welcome menu above.

Despite Stephen Hawking’s caution that every equation included in a book (or, I suppose in a blog) would halve the readership, this blog has proved very popular since its inception in December 2010. (The total number of hits is given in the sidebar at the right.)

Physicspages.com changed hosts around the middle of May, 2015. If you subscribed to get email notifications of new posts before that date, you’ll need to subscribe again as I couldn’t port the list of subscribers over to the new host. Please use the subscribe form in the sidebar on the right. Sorry for the inconvenience.

Many thanks to my loyal followers and best wishes to everyone who visits. I hope you find it useful. Constructive criticism (or even praise) is always welcome, so feel free to leave a comment in response to any of the posts.

Quantum versus classical mechanics in solids and gases

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.18.

The de Broglie wavelength of a particle is the wavelength of an idealized ‘free particle’ that has a precise momentum {p} and thus a completely indeterminate position:

\displaystyle  \lambda=\frac{h}{p} \ \ \ \ \ (1)

In general, quantum mechanics is needed to describe systems in which the de Broglie wavelength of the constituent particles is larger than some characteristic size of the system itself. For example, if the wavelength of a free electron (that is, an electron not bound to a particular atom) in a solid is greater than the average spacing between atoms, then quantum mechanics is needed to describe these electrons. If the wavelength is much smaller than the size of the system, the wave nature of a particle isn’t noticeable and we can get away with using classical mechanics.

In statistical mechanics, the average energy of each particle in a system is {\frac{1}{2}k_{B}T} per degree of freedom of the particle, where {k_{B}} is Boltzmann’s constant and {T} is the temperature in kelvins. For a single particle such as an electron, there are three degrees of freedom (one per coordinate direction) so its average energy is

\displaystyle  E=\frac{p^{2}}{2m}=\frac{3}{2}k_{B}T \ \ \ \ \ (2)

Combining this with the definition of the de Broglie wavelength above, we get

\displaystyle  \lambda=\frac{h}{\sqrt{3mk_{B}T}} \ \ \ \ \ (3)

so the condition for quantum mechanics to apply is that {\lambda>d} where {d} is the size of the system.

Example 1 Solids. Using the typical lattice spacing of {d=3\times10^{-10}\mbox{ m}} for a solid, what is the maximum temperature at which we need to use quantum mechanics to describe free electrons in such a solid? For quantum mechanics to apply, we need

\displaystyle   d \displaystyle  < \displaystyle  \frac{h}{\sqrt{3mk_{B}T}}\ \ \ \ \ (4)
\displaystyle  T \displaystyle  < \displaystyle  \frac{h^{2}}{3mk_{B}d^{2}} \ \ \ \ \ (5)

We have the values (in SI units)

\displaystyle   h \displaystyle  = \displaystyle  6.626\times10^{-34}\mbox{ J s}\ \ \ \ \ (6)
\displaystyle  m \displaystyle  = \displaystyle  9.1\times10^{-31}\mbox{ kg}\ \ \ \ \ (7)
\displaystyle  k_{B} \displaystyle  = \displaystyle  1.38\times10^{-23}\mbox{ J K}^{-1}\ \ \ \ \ (8)
\displaystyle  d \displaystyle  = \displaystyle  3\times10^{-10}\mbox{ m} \ \ \ \ \ (9)

so we get

\displaystyle  T<1.29\times10^{5}\mbox{ K} \ \ \ \ \ (10)

so electrons most definitely need to be described quantum mechanically.

For the atomic nuclei in a solid, the critical temperature is much lower. For sodium, with an atomic mass of about 23 atomic mass units (where {1\mbox{ amu}=1.66\times10^{-27}\mbox{ kg}}), we have

\displaystyle  T<3.1\mbox{ K} \ \ \ \ \ (11)
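
These critical temperatures are easy to verify numerically. Here's a short Python sketch (my own addition, not part of Griffiths's problem; the constants are the SI values quoted above):

```python
# Sketch: evaluate T < h^2/(3 m kB d^2) for the values quoted above (SI units).
h = 6.626e-34        # Planck's constant (J s)
kB = 1.38e-23        # Boltzmann's constant (J/K)
d = 3e-10            # typical lattice spacing (m)

def t_crit(m, d):
    """Temperature below which the de Broglie wavelength exceeds d."""
    return h**2 / (3 * m * kB * d**2)

m_e = 9.1e-31              # electron mass (kg)
m_Na = 23 * 1.66e-27       # sodium nucleus, 23 amu (kg)

T_e = t_crit(m_e, d)       # ≈ 1.29e5 K: free electrons are always quantum
T_Na = t_crit(m_Na, d)     # ≈ 3.1 K: nuclei are classical except near absolute zero
print(T_e, T_Na)
```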

Example 2 The ideal gas. The ideal gas law is

\displaystyle  PV=Nk_{B}T \ \ \ \ \ (12)

where {P} is the pressure, {V} is the volume and {N} is the number of gas molecules. We can get an estimate of {d} by calculating the average volume per molecule {v}:

\displaystyle   v \displaystyle  = \displaystyle  \frac{V}{N}\ \ \ \ \ (13)
\displaystyle  d \displaystyle  = \displaystyle  v^{1/3}\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  \left(\frac{k_{B}T}{P}\right)^{1/3} \ \ \ \ \ (15)

Therefore, the condition for quantum mechanics to apply to an ideal gas is

\displaystyle   \left(\frac{k_{B}T}{P}\right)^{1/3} \displaystyle  < \displaystyle  \frac{h}{\sqrt{3mk_{B}T}}\ \ \ \ \ (16)
\displaystyle  T \displaystyle  < \displaystyle  \frac{h^{6/5}P^{2/5}}{k_{B}\left(3m\right)^{3/5}} \ \ \ \ \ (17)

For helium at atmospheric pressure we have

\displaystyle   P \displaystyle  = \displaystyle  10^{5}\mbox{N m}^{-2}\ \ \ \ \ (18)
\displaystyle  m \displaystyle  = \displaystyle  4\times\left(1.66\times10^{-27}\right)\mbox{ kg}\ \ \ \ \ (19)
\displaystyle  T \displaystyle  < \displaystyle  2.92\mbox{ K} \ \ \ \ \ (20)

This is actually below the boiling point of helium ({4.2\mbox{ K}}) so whenever helium is a gas, we don’t need quantum mechanics to describe it.

For hydrogen atoms (protons) in outer space, {d=1\mbox{ cm}} and {T=3\mbox{ K}}. In this case, the critical temperature is given by 5 with {m=1.66\times10^{-27}\mbox{ kg}}:

\displaystyle  T<\frac{h^{2}}{3mk_{B}d^{2}}=6.4\times10^{-14}\mbox{ K} \ \ \ \ \ (21)

Definitely no quantum mechanics needed here.
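
The ideal gas estimates can be checked the same way. This Python sketch (my own addition) evaluates 17 for helium and 5 for hydrogen in outer space, using the values above:

```python
# Sketch: critical temperatures for an ideal gas (SI units).
h = 6.626e-34
kB = 1.38e-23

def t_crit_gas(m, P):
    """T below which quantum mechanics applies: T < h^(6/5) P^(2/5) / (kB (3m)^(3/5))."""
    return h**1.2 * P**0.4 / (kB * (3 * m)**0.6)

T_He = t_crit_gas(4 * 1.66e-27, 1e5)   # helium at atmospheric pressure, ≈ 2.92 K

# Hydrogen in outer space: use T < h^2/(3 m kB d^2) with d = 1 cm.
T_H = h**2 / (3 * 1.66e-27 * kB * (1e-2)**2)
print(T_He, T_H)                       # T_H ≈ 6.4e-14 K
```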

Uncertainty principle: an example

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.17.

Here’s another example of calculating the uncertainty principle. We have a wave function defined as

\displaystyle \Psi\left(x,0\right)=\begin{cases} A\left(a^{2}-x^{2}\right) & -a\le x\le a\\ 0 & \mbox{otherwise} \end{cases} \ \ \ \ \ (1)

The constant {A} is determined by normalization in the usual way:

\displaystyle \int_{-a}^{a}\left|\Psi\right|^{2}dx \displaystyle = \displaystyle 1\ \ \ \ \ (2)
\displaystyle \displaystyle = \displaystyle A^{2}\left.\left(\frac{x^{5}}{5}-\frac{2}{3}a^{2}x^{3}+a^{4}x\right)\right|_{-a}^{a}\ \ \ \ \ (3)
\displaystyle \displaystyle = \displaystyle A^{2}\frac{16a^{5}}{15}\ \ \ \ \ (4)
\displaystyle A \displaystyle = \displaystyle \frac{\sqrt{15}}{4a^{5/2}} \ \ \ \ \ (5)

The expectation value of {x} is {\left\langle x\right\rangle =0} from the symmetry of the wave function. The expectation value of {p} is

\displaystyle \left\langle p\right\rangle \displaystyle = \displaystyle -i\hbar\int_{-a}^{a}\Psi^*\frac{\partial}{\partial x}\Psi dx\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle -i\hbar\int_{-a}^{a}\left(-2Ax\right)A\left(a^{2}-x^{2}\right)dx\ \ \ \ \ (7)
\displaystyle \displaystyle = \displaystyle 0 \ \ \ \ \ (8)

[We can’t calculate {\left\langle p\right\rangle =\frac{d}{dt}\left(m\left\langle x\right\rangle \right)} in this case, because we know the value of {\left\langle x\right\rangle } only at one specific time ({t=0}), so we don’t have enough information to calculate its derivative.]

The remaining statistics are (the integrals are all just integrals of polynomials, so nothing complicated):

\displaystyle \left\langle x^{2}\right\rangle \displaystyle = \displaystyle \int_{-a}^{a}x^{2}\left|\Psi\right|^{2}dx\ \ \ \ \ (9)
\displaystyle \displaystyle = \displaystyle \frac{15}{16a^{5}}\left.\left(\frac{x^{7}}{7}-\frac{2}{5}a^{2}x^{5}+\frac{1}{3}a^{4}x^{3}\right)\right|_{-a}^{a}\ \ \ \ \ (10)
\displaystyle \displaystyle = \displaystyle \frac{a^{2}}{7}\ \ \ \ \ (11)
\displaystyle \left\langle p^{2}\right\rangle \displaystyle = \displaystyle -\hbar^{2}\int_{-a}^{a}\Psi^*\frac{\partial^{2}}{\partial x^{2}}\Psi dx\ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle \frac{15\hbar^{2}}{8a^{5}}\left.\left(a^{2}x-\frac{x^{3}}{3}\right)\right|_{-a}^{a}\ \ \ \ \ (13)
\displaystyle \displaystyle = \displaystyle \frac{5\hbar^{2}}{2a^{2}}\ \ \ \ \ (14)
\displaystyle \sigma_{x} \displaystyle = \displaystyle \sqrt{\left\langle x^{2}\right\rangle -\left\langle x\right\rangle ^{2}}\ \ \ \ \ (15)
\displaystyle \displaystyle = \displaystyle \frac{a}{\sqrt{7}}\ \ \ \ \ (16)
\displaystyle \sigma_{p} \displaystyle = \displaystyle \sqrt{\left\langle p^{2}\right\rangle -\left\langle p\right\rangle ^{2}}\ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{5}{2}}\frac{\hbar}{a}\ \ \ \ \ (18)
\displaystyle \sigma_{x}\sigma_{p} \displaystyle = \displaystyle \sqrt{\frac{5}{14}}\hbar\ \ \ \ \ (19)
\displaystyle \displaystyle \cong \displaystyle 0.598\hbar>\frac{\hbar}{2} \ \ \ \ \ (20)

Thus the uncertainty principle is satisfied in this case.
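
As a sanity check on the integrals, here's a short Python sketch (my own addition) that evaluates the moments numerically with Simpson's rule, in units where {\hbar=a=1}:

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a = 1.0         # half-width of the support (arbitrary choice)
hbar = 1.0      # work in units with hbar = 1
A = math.sqrt(15) / (4 * a**2.5)
psi = lambda x: A * (a**2 - x**2)

norm = simpson(lambda x: psi(x)**2, -a, a)               # should be 1
x2 = simpson(lambda x: x**2 * psi(x)**2, -a, a)          # a^2/7
# Psi'' = -2A, so <p^2> = -hbar^2 * integral of Psi * (-2A) dx
p2 = simpson(lambda x: 2 * A * hbar**2 * psi(x), -a, a)  # 5 hbar^2 / (2 a^2)

sigma_x, sigma_p = math.sqrt(x2), math.sqrt(p2)
print(sigma_x * sigma_p)   # ≈ 0.598, i.e. sqrt(5/14) > 1/2
```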

Inner product of two wave functions is constant in time

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.16.

The fact that the normalization of the wave function is constant over time is actually a special case of a more general theorem, which is

\displaystyle  \frac{d}{dt}\int_{-\infty}^{\infty}\Psi_{1}^*\Psi_{2}dx=0 \ \ \ \ \ (1)

for any two normalizable solutions to the Schrödinger equation (with the same potential). The proof of this follows a similar derivation to that in section 1.4 of Griffiths’s book.

The derivative in the integrand is (where we’re using a subscript {t} or {x} to denote a derivative with respect to that variable):

\displaystyle   \frac{\partial}{\partial t}\left(\Psi_{1}^*\Psi_{2}\right) \displaystyle  = \displaystyle  \Psi_{1t}^*\Psi_{2}+\Psi_{1}^*\Psi_{2t} \ \ \ \ \ (2)

From the Schrödinger equation

\displaystyle   \Psi_{2t} \displaystyle  = \displaystyle  i\frac{\hbar}{2m}\Psi_{2xx}-\frac{i}{\hbar}V\Psi_{2}\ \ \ \ \ (3)
\displaystyle  \Psi_{1t}^* \displaystyle  = \displaystyle  -i\frac{\hbar}{2m}\Psi_{1xx}^*+\frac{i}{\hbar}V\Psi_{1}^*\ \ \ \ \ (4)
\displaystyle  \Psi_{1t}^*\Psi_{2}+\Psi_{1}^*\Psi_{2t} \displaystyle  = \displaystyle  i\frac{\hbar}{2m}\left(-\Psi_{1xx}^*\Psi_{2}+\Psi_{2xx}\Psi_{1}^*\right)+\frac{i}{\hbar}V\left(\Psi_{1}^*\Psi_{2}-\Psi_{1}^*\Psi_{2}\right)\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  i\frac{\hbar}{2m}\frac{\partial}{\partial x}\left(\Psi_{2x}\Psi_{1}^*-\Psi_{1x}^*\Psi_{2}\right) \ \ \ \ \ (6)

Inserting this into 1 and integrating gives zero because all wave functions go to zero at infinity. [Of course, the theorem doesn’t hold if {\Psi_{1}} and {\Psi_{2}} are solutions for different potentials, because in that case the potential term wouldn’t cancel out in 5.]

Unstable particles: a crude model

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.15.

A rather unrealistic way of modelling an unstable particle is to introduce an imaginary component to the potential. We can see this by modifying the proof given in Griffiths’s section 1.4 that, for a real potential, the normalization of the wave function is constant in time. We propose that

\displaystyle  V\left(x\right)=V_{0}\left(x\right)-i\Gamma \ \ \ \ \ (1)

where {V_{0}} is the ‘true’ potential and {\Gamma} is a positive real constant.

The Schrödinger equation then says (where we’re using a subscript {t} or {x} to denote a derivative with respect to that variable):

\displaystyle   i\hbar\Psi_{t} \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\Psi_{xx}+V_{0}\Psi-i\Gamma\Psi\ \ \ \ \ (2)
\displaystyle  -i\hbar\Psi_{t}^* \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\Psi_{xx}^*+V_{0}\Psi^*+i\Gamma\Psi^* \ \ \ \ \ (3)

where the second line is the complex conjugate of the first.

Retaining the interpretation of the wave function as a probability of finding the particle at a given place and time, we can calculate the time derivative of the total probability of finding the particle anywhere:

\displaystyle   \frac{dP}{dt}\equiv\frac{d}{dt}\int_{-\infty}^{\infty}\left|\Psi\right|^{2}dx \displaystyle  = \displaystyle  \int_{-\infty}^{\infty}\frac{\partial}{\partial t}\left|\Psi\right|^{2}dx \ \ \ \ \ (4)

The derivative in the integrand is

\displaystyle   \frac{\partial}{\partial t}\left|\Psi\right|^{2} \displaystyle  = \displaystyle  \frac{\partial}{\partial t}\left(\Psi^*\Psi\right)\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  \Psi_{t}^*\Psi+\Psi^*\Psi_{t} \ \ \ \ \ (6)

From 2 and 3 we have

\displaystyle   \Psi_{t} \displaystyle  = \displaystyle  i\frac{\hbar}{2m}\Psi_{xx}-\frac{i}{\hbar}V_{0}\Psi-\frac{\Gamma}{\hbar}\Psi\ \ \ \ \ (7)
\displaystyle  \Psi_{t}^* \displaystyle  = \displaystyle  -i\frac{\hbar}{2m}\Psi_{xx}^*+\frac{i}{\hbar}V_{0}\Psi^*-\frac{\Gamma}{\hbar}\Psi^*\ \ \ \ \ (8)
\displaystyle  \Psi_{t}^*\Psi+\Psi^*\Psi_{t} \displaystyle  = \displaystyle  i\frac{\hbar}{2m}\left(\Psi_{xx}\Psi^*-\Psi_{xx}^*\Psi\right)-2\frac{\Gamma}{\hbar}\Psi^*\Psi\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  i\frac{\hbar}{2m}\frac{\partial}{\partial x}\left(\Psi_{x}\Psi^*-\Psi_{x}^*\Psi\right)-2\frac{\Gamma}{\hbar}\Psi^*\Psi \ \ \ \ \ (10)

Putting this into 4 we can integrate the first term and get zero because {\Psi\rightarrow0} as {x\rightarrow\pm\infty} so we’re left with

\displaystyle  \frac{dP}{dt}=-2\frac{\Gamma}{\hbar}\int_{-\infty}^{\infty}\left|\Psi\right|^{2}dx=-2\frac{\Gamma}{\hbar}P \ \ \ \ \ (11)

We can solve this differential equation to get

\displaystyle   \frac{dP}{P} \displaystyle  = \displaystyle  -2\frac{\Gamma}{\hbar}dt\ \ \ \ \ (12)
\displaystyle  P \displaystyle  = \displaystyle  P_{0}e^{-2\Gamma t/\hbar} \ \ \ \ \ (13)

where {P_{0}} is the probability of finding the particle at {t=0}. If we know the particle hasn’t decayed at {t=0} then {P_{0}=1}.

The half-life of the particle is the time it takes for {P} to be reduced to {P_{0}/2}, so

\displaystyle   \ln0.5 \displaystyle  = \displaystyle  -2\frac{\Gamma}{\hbar}t_{1/2}\ \ \ \ \ (14)
\displaystyle  t_{1/2} \displaystyle  = \displaystyle  \frac{\hbar\ln2}{2\Gamma}=0.347\frac{\hbar}{\Gamma} \ \ \ \ \ (15)
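
As a quick numerical sanity check (my own sketch, with an arbitrary illustrative value for {\Gamma}), we can confirm that {P} falls to half its initial value at {t_{1/2}=\hbar\ln2/2\Gamma\approx0.347\hbar/\Gamma}:

```python
import math

# Sketch: the survival probability P(t) = exp(-2*Gamma*t/hbar) and its half-life.
hbar = 1.0545718e-34      # J s
Gamma = 1e-22             # hypothetical decay width (J), chosen only for illustration

t_half = hbar * math.log(2) / (2 * Gamma)    # = 0.347 hbar/Gamma

def P(t):
    return math.exp(-2 * Gamma * t / hbar)

p_half = P(t_half)
print(t_half, p_half)     # p_half = 0.5 (up to roundoff)
```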

Continuous probability distribution: needle on a pivot

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problems 1.11-12.

This exercise in continuous probability distributions actually precedes the problem on Buffon’s needle, and it uses the same logic.

Suppose we have a needle mounted on a pivot so that the needle is free to swing anywhere in the top semicircle, so that when it comes to rest, its angular coordinate is equally likely to be any value between 0 and {\pi}. In that case, the probability density {\rho\left(\theta\right)} is a constant in this range, and zero outside it. That is

\displaystyle  \rho\left(\theta\right)=\begin{cases} A & 0\le\theta\le\pi\\ 0 & \mbox{otherwise} \end{cases} \ \ \ \ \ (1)

From normalization, we must have

\displaystyle   \int_{0}^{\pi}\rho\left(\theta\right)d\theta \displaystyle  = \displaystyle  1\ \ \ \ \ (2)
\displaystyle  A \displaystyle  = \displaystyle  \frac{1}{\pi} \ \ \ \ \ (3)

The statistics of the distribution are

\displaystyle   \left\langle \theta\right\rangle \displaystyle  = \displaystyle  \frac{1}{\pi}\int_{0}^{\pi}\theta d\theta=\frac{\pi}{2}\ \ \ \ \ (4)
\displaystyle  \left\langle \theta^{2}\right\rangle \displaystyle  = \displaystyle  \frac{1}{\pi}\int_{0}^{\pi}\theta^{2}d\theta=\frac{\pi^{2}}{3}\ \ \ \ \ (5)
\displaystyle  \sigma \displaystyle  = \displaystyle  \sqrt{\left\langle \theta^{2}\right\rangle -\left\langle \theta\right\rangle ^{2}}=\frac{\pi}{2\sqrt{3}}\ \ \ \ \ (6)
\displaystyle  \left\langle \sin\theta\right\rangle \displaystyle  = \displaystyle  \frac{1}{\pi}\int_{0}^{\pi}\sin\theta d\theta=\frac{2}{\pi}\ \ \ \ \ (7)
\displaystyle  \left\langle \cos\theta\right\rangle \displaystyle  = \displaystyle  \frac{1}{\pi}\int_{0}^{\pi}\cos\theta d\theta=0\ \ \ \ \ (8)
\displaystyle  \left\langle \cos^{2}\theta\right\rangle \displaystyle  = \displaystyle  \frac{1}{\pi}\int_{0}^{\pi}\cos^{2}\theta d\theta=\frac{1}{2} \ \ \ \ \ (9)

We now want the probability that the projection of the needle onto the {x} axis lies between {x} and {x+dx}. If the needle is at angle {\theta}, then its {x} coordinate is {r\cos\theta} (where {r} is the length of the needle). If the angle changes by {d\theta}, its {x} coordinate changes by {dx=-r\sin\theta\; d\theta} so for the probability density, we take absolute values and get

\displaystyle   \rho\left(\theta\right)d\theta \displaystyle  = \displaystyle  \frac{1}{\pi}\frac{dx}{r\sin\theta}\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  \frac{dx}{\pi y}\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  \frac{dx}{\pi\sqrt{r^{2}-x^{2}}}\ \ \ \ \ (12)
\displaystyle  \rho\left(x\right) \displaystyle  = \displaystyle  \frac{1}{\pi\sqrt{r^{2}-x^{2}}} \ \ \ \ \ (13)

As a check:

\displaystyle   \int_{-r}^{r}\rho\left(x\right)dx \displaystyle  = \displaystyle  \frac{1}{\pi}\int_{-r}^{r}\frac{dx}{\sqrt{r^{2}-x^{2}}}\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi}\left.\arctan\frac{x}{\sqrt{r^{2}-x^{2}}}\right|_{-r}^{r}\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  1 \ \ \ \ \ (16)

Since {x=r\cos\theta}, we can get {\left\langle x\right\rangle } and {\left\langle x^{2}\right\rangle } from 8 and 9, but we can also calculate it the hard way, using {\rho\left(x\right)}:

\displaystyle   \left\langle x\right\rangle \displaystyle  = \displaystyle  \int_{-r}^{r}x\rho\left(x\right)dx\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi}\int_{-r}^{r}\frac{x\; dx}{\sqrt{r^{2}-x^{2}}}\ \ \ \ \ (18)
\displaystyle  \displaystyle  = \displaystyle  0\ \ \ \ \ (19)
\displaystyle  \left\langle x^{2}\right\rangle \displaystyle  = \displaystyle  \int_{-r}^{r}x^{2}\rho\left(x\right)dx\ \ \ \ \ (20)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi}\int_{-r}^{r}\frac{x^{2}\; dx}{\sqrt{r^{2}-x^{2}}}\ \ \ \ \ (21)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2\pi}\left.\left[r^{2}\arctan\frac{x}{\sqrt{r^{2}-x^{2}}}-x\sqrt{r^{2}-x^{2}}\right]\right|_{-r}^{r}\ \ \ \ \ (22)
\displaystyle  \displaystyle  = \displaystyle  \frac{r^{2}}{2} \ \ \ \ \ (23)
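
The needle statistics can also be checked by simulation. Here's a short Monte Carlo sketch in Python (my own addition, taking {r=1} and a fixed random seed):

```python
import math
import random

# Monte Carlo sketch: theta uniform on [0, pi], x = r cos(theta); compare the
# sample moments with <x> = 0 and <x^2> = r^2/2 from equations 8 and 9.
random.seed(42)
r = 1.0
N = 200_000
xs = [r * math.cos(random.uniform(0.0, math.pi)) for _ in range(N)]

mean_x = sum(xs) / N
mean_x2 = sum(x * x for x in xs) / N
print(mean_x, mean_x2)    # ≈ 0 and ≈ 0.5
```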

A few statistics on the first 25 digits of pi

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.10.

Here are a few statistical properties of the first 25 digits of {\pi} (if you want more digits, here’s a link to the first million digits):

\displaystyle  \pi=3.141592653589793238462643\ldots \ \ \ \ \ (1)

The frequency of each digit and the probability of getting each one are:

Digit {j} {N_{j}} {P_{j}}
0 0 0
1 2 0.08
2 3 0.12
3 5 0.2
4 3 0.12
5 3 0.12
6 3 0.12
7 1 0.04
8 2 0.08
9 3 0.12

The most probable digit is 3, the median is 4 (there are 10 digits {<4} and 12 digits {>4} so that’s as close as we can get to dividing the distribution equally) and the average is 4.72.

We can get the variance by calculating {\sigma^{2}=\left\langle j^{2}\right\rangle -\left\langle j\right\rangle ^{2}}. Here {\left\langle j^{2}\right\rangle =\frac{710}{25}=28.4}, so {\sigma^{2}=28.4-\left(4.72\right)^{2}=6.1216}. The standard deviation is

\displaystyle  \sigma=2.474 \ \ \ \ \ (2)

We’d need to use quite a few more digits to get a properly random collection of numbers.
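
For what it's worth, these statistics can be reproduced in a few lines of Python (my own sketch, using the 25 digits quoted above):

```python
from collections import Counter
from statistics import median

# Sketch: reproduce the table and statistics for the first 25 digits of pi.
digits = [int(c) for c in "3141592653589793238462643"]
counts = Counter(digits)

mean = sum(digits) / len(digits)                     # 118/25 = 4.72
mean_sq = sum(d * d for d in digits) / len(digits)   # 710/25 = 28.4
sigma = (mean_sq - mean**2) ** 0.5                   # ≈ 2.474
med = median(digits)                                 # 4

print(counts.most_common(1))   # [(3, 5)]: 3 is the most probable digit
print(med, mean, sigma)
```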

Harmonic oscillator: statistics

Required math: algebra, calculus (partial derivatives and integration by parts), complex numbers

Required physics: Schrödinger equation, probability density

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.9.

Suppose a particle is in the quantum state

\displaystyle  \Psi\left(x,t\right)=Ae^{-amx^{2}/\hbar}e^{-iat} \ \ \ \ \ (1)

where {A} is the normalization constant and {a} is a constant with dimensions of 1/time. We can find {A} from normalization:

\displaystyle   \int_{-\infty}^{\infty}\left|\Psi\right|^{2}dx \displaystyle  = \displaystyle  1\ \ \ \ \ (2)
\displaystyle  \displaystyle  = \displaystyle  \left|A\right|^{2}\int_{-\infty}^{\infty}e^{-2amx^{2}/\hbar}dx\ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  \left|A\right|^{2}\sqrt{\frac{\pi\hbar}{2ma}}\ \ \ \ \ (4)
\displaystyle  A \displaystyle  = \displaystyle  \left(\frac{2ma}{\pi\hbar}\right)^{1/4} \ \ \ \ \ (5)

The spatial component of the wave function is

\displaystyle  \psi\left(x\right)=\left(\frac{2ma}{\pi\hbar}\right)^{1/4}e^{-amx^{2}/\hbar} \ \ \ \ \ (6)

and it must satisfy the time-independent Schrödinger equation in one dimension

\displaystyle   -\frac{\hbar^{2}}{2m}\frac{d^{2}\psi(x)}{dx^{2}}+V(x)\psi(x) \displaystyle  = \displaystyle  E\psi(x) \ \ \ \ \ (7)

The energy {E} can be found from the time equation:

\displaystyle  i\hbar\frac{\partial\Xi}{\partial t}=E\Xi \ \ \ \ \ (8)

where

\displaystyle  \Xi\left(t\right)=e^{-iat} \ \ \ \ \ (9)

Therefore

\displaystyle  E=\hbar a \ \ \ \ \ (10)

From 7 we have

\displaystyle   -\frac{\hbar^{2}}{2m}\frac{d^{2}\psi(x)}{dx^{2}} \displaystyle  = \displaystyle  \left(\frac{2ma}{\pi\hbar}\right)^{1/4}a\left(\hbar-2amx^{2}\right)e^{-amx^{2}/\hbar}\ \ \ \ \ (11)
\displaystyle  V\left(x\right) \displaystyle  = \displaystyle  \frac{E\psi(x)+\frac{\hbar^{2}}{2m}\frac{d^{2}\psi(x)}{dx^{2}}}{\psi\left(x\right)}\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  2ma^{2}x^{2} \ \ \ \ \ (13)

This is the harmonic oscillator potential, and the wave function is actually the ground state of that potential.

We can work out a few average values:

\displaystyle  \left\langle x\right\rangle =0 \ \ \ \ \ (14)

since {\psi\left(x\right)} is even.

\displaystyle   \left\langle x^{2}\right\rangle \displaystyle  = \displaystyle  \int_{-\infty}^{\infty}x^{2}\psi^{2}dx=\frac{\hbar}{4am}\ \ \ \ \ (15)
\displaystyle  \left\langle p\right\rangle \displaystyle  = \displaystyle  -i\hbar\int_{-\infty}^{\infty}\psi\frac{\partial\psi}{\partial x}dx=0\ \ \ \ \ (16)
\displaystyle  \left\langle p^{2}\right\rangle \displaystyle  = \displaystyle  -\hbar^{2}\int_{-\infty}^{\infty}\psi\frac{\partial^{2}\psi}{\partial x^{2}}dx=\hbar ma \ \ \ \ \ (17)

The standard deviations are

\displaystyle   \sigma_{x} \displaystyle  = \displaystyle  \sqrt{\left\langle x^{2}\right\rangle -\left\langle x\right\rangle ^{2}}=\frac{1}{2}\sqrt{\frac{\hbar}{ma}}\ \ \ \ \ (18)
\displaystyle  \sigma_{p} \displaystyle  = \displaystyle  \sqrt{\left\langle p^{2}\right\rangle -\left\langle p\right\rangle ^{2}}=\sqrt{\hbar ma} \ \ \ \ \ (19)

and the uncertainty principle is

\displaystyle  \sigma_{x}\sigma_{p}=\frac{\hbar}{2} \ \ \ \ \ (20)

so in this case, the uncertainty is the minimum possible.
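
Here's a short Python sketch (my own addition, in units where {\hbar=m=a=1}) confirming numerically that this Gaussian state saturates the bound:

```python
import math

# Sketch: numerical check that the Gaussian state gives sigma_x * sigma_p = hbar/2.
hbar = m = a = 1.0    # natural units
A = (2 * m * a / (math.pi * hbar)) ** 0.25

def psi(x):
    return A * math.exp(-a * m * x**2 / hbar)

def simpson(f, lo, hi, n=4000):
    """Composite Simpson's rule; n must be even."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

L = 10.0   # the Gaussian is negligible beyond |x| = 10
x2 = simpson(lambda x: x**2 * psi(x)**2, -L, L)          # hbar/(4am) = 0.25
# psi'' = (4 a^2 m^2 x^2/hbar^2 - 2 a m/hbar) psi
p2 = simpson(lambda x: -hbar**2 * psi(x)**2 *
             (4 * a**2 * m**2 * x**2 / hbar**2 - 2 * a * m / hbar), -L, L)

product = math.sqrt(x2 * p2)
print(product)         # ≈ 0.5 = hbar/2
```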

Adding a constant to the potential introduces a phase factor

Required math: algebra, calculus (partial derivatives and integration by parts), complex numbers

Required physics: Schrödinger equation, probability density

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.8.

The time-dependent Schrödinger equation in one dimension can be separated into two equations as follows:

\displaystyle   -\frac{\hbar^{2}}{2m}\frac{d^{2}\psi(x)}{dx^{2}}+V(x)\psi(x) \displaystyle  = \displaystyle  E\psi(x)\ \ \ \ \ (1)
\displaystyle  i\hbar\frac{d\Xi(t)}{dt} \displaystyle  = \displaystyle  E\Xi(t) \ \ \ \ \ (2)

and a separable solution is

\displaystyle  \Psi\left(x,t\right)=\psi\left(x\right)\Xi\left(t\right) \ \ \ \ \ (3)

The time component can be solved as

\displaystyle  \Xi\left(t\right)=Ce^{-iEt/\hbar} \ \ \ \ \ (4)

where {C} is the constant of integration.

If we add a constant (in both space and time) {V_{0}} to the potential, then the original Schrödinger equation becomes

\displaystyle   -\frac{\hbar^{2}}{2m}\frac{d^{2}\Psi}{dx^{2}}+V(x)\Psi+V_{0}\Psi \displaystyle  = \displaystyle  i\hbar\frac{\partial\Psi}{\partial t}\ \ \ \ \ (5)
\displaystyle  -\frac{\hbar^{2}}{2m}\frac{d^{2}\Psi}{dx^{2}}+V(x)\Psi \displaystyle  = \displaystyle  i\hbar\frac{\partial\Psi}{\partial t}-V_{0}\Psi \ \ \ \ \ (6)

Applying separation of variables gives us

\displaystyle   -\frac{\hbar^{2}}{2m}\frac{1}{\psi(x)}\frac{\partial^{2}\psi(x)}{\partial x^{2}}+V(x) \displaystyle  = \displaystyle  E\ \ \ \ \ (7)
\displaystyle  i\hbar\frac{1}{\Xi(t)}\frac{\partial\Xi}{\partial t}-V_{0} \displaystyle  = \displaystyle  E \ \ \ \ \ (8)

[Since {V_{0}} is independent of both {x} and {t}, we could put it in either the {\psi\left(x\right)} or the {\Xi\left(t\right)} equation, but putting it in the {\Xi} equation eliminates it from the more complex {\psi} equation, so we’ll do that.]

The solution to 8 is now

\displaystyle  \Xi\left(t\right)=Ce^{-i\left(E+V_{0}\right)t/\hbar} \ \ \ \ \ (9)

so we’ve introduced a phase factor {e^{-iV_{0}t/\hbar}} into the overall wave function {\Psi}. For the time-independent Schrödinger equation, all quantities of physical interest involve multiplying the complex conjugate {\Psi^*} by some operator {\hat{Q}\left(x\right)} that depends only on {x}, operating on {\Psi}. That is, we’re interested only in quantities of the form

\displaystyle   \Psi^*\left[\hat{Q}\left(x\right)\Psi\right] \displaystyle  = \displaystyle  \left|C\right|^{2}e^{+i\left(E+V_{0}\right)t/\hbar}e^{-i\left(E+V_{0}\right)t/\hbar}\psi^*\left[\hat{Q}\left(x\right)\psi\right]\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  \left|C\right|^{2}\psi^*\left[\hat{Q}\left(x\right)\psi\right] \ \ \ \ \ (11)

Thus the phase factor disappears when calculating any physical quantity.
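
A tiny numerical illustration makes the cancellation explicit (my own sketch; the values of {V_{0}}, {t} and the sample wave function are arbitrary):

```python
import cmath

# Sketch: the phase factor exp(-i V0 t/hbar) drops out of Psi* Q(x) Psi.
hbar = 1.0
V0, t = 2.5, 0.7                         # arbitrary illustrative values
phase = cmath.exp(-1j * V0 * t / hbar)   # |phase| = 1

psi = 0.3 + 0.4j    # a sample wave-function value at some point x
q = 1.7             # the value of a multiplicative operator Q(x) there

before = (psi.conjugate() * q * psi).real
after = ((phase * psi).conjugate() * q * (phase * psi)).real
print(before, after)    # identical up to roundoff
```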

Delta function well: statistics

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.5.

The delta function well gives rise to a wave function that decays exponentially either side of the delta function:

\displaystyle  \Psi\left(x,t\right)=Ae^{-\lambda\left|x\right|}e^{-i\omega t} \ \ \ \ \ (1)

We can normalize {\Psi} in the usual way:

\displaystyle   \int_{-\infty}^{\infty}\left|\Psi\right|^{2}dx \displaystyle  = \displaystyle  \left|A\right|^{2}\left[\int_{-\infty}^{0}e^{2\lambda x}dx+\int_{0}^{\infty}e^{-2\lambda x}dx\right]\ \ \ \ \ (2)
\displaystyle  \displaystyle  = \displaystyle  2\left|A\right|^{2}\int_{0}^{\infty}e^{-2\lambda x}dx\ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  \frac{\left|A\right|^{2}}{\lambda}\ \ \ \ \ (4)
\displaystyle  A \displaystyle  = \displaystyle  \sqrt{\lambda} \ \ \ \ \ (5)

By symmetry, {\left\langle x\right\rangle =0} and

\displaystyle  \left\langle x^{2}\right\rangle =2\lambda\int_{0}^{\infty}x^{2}e^{-2\lambda x}dx=\frac{1}{2\lambda^{2}} \ \ \ \ \ (6)

Therefore

\displaystyle  \sigma=\sqrt{\left\langle x^{2}\right\rangle -\left\langle x\right\rangle ^{2}}=\frac{1}{\sqrt{2}\lambda} \ \ \ \ \ (7)

A plot of {\left|\Psi\right|^{2}} is shown, with vertical yellow lines indicating {x=\pm\frac{1}{\sqrt{2}\lambda}}, for the case {\lambda=2}:

The probability that the particle lies outside {x=\pm\frac{1}{\sqrt{2}\lambda}} is

\displaystyle  P_{\left|x\right|>\sigma}=2\lambda\int_{1/\sqrt{2}\lambda}^{\infty}e^{-2\lambda x}dx=\frac{1}{e^{\sqrt{2}}}=0.2431 \ \ \ \ \ (8)

In this case, the probability of {x} being greater than one standard deviation is a constant, independent of {\lambda}.
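
We can verify the {\lambda}-independence directly. This Python sketch (my own addition) evaluates the closed form {e^{-2\lambda\sigma}} for several values of {\lambda}:

```python
import math

# Sketch: P(|x| > sigma) = 2*lam * integral_sigma^inf exp(-2*lam*x) dx
#       = exp(-2*lam*sigma) = exp(-sqrt(2)), independent of lam.
def p_outside(lam):
    sigma = 1 / (math.sqrt(2) * lam)
    return math.exp(-2 * lam * sigma)

probs = [p_outside(lam) for lam in (0.5, 1.0, 2.0, 10.0)]
print(probs)    # 0.2431... for every lambda
```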

Triangular wave function: probabilities

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 1.4.

The square modulus of the wave function which is the solution to the Schrödinger equation is interpreted as a probability density. As an example consider the wave function given by

\displaystyle  \Psi\left(x,0\right)=\begin{cases} A\frac{x}{a} & 0\le x\le a\\ A\frac{b-x}{b-a} & a\le x\le b\\ 0 & \mbox{otherwise} \end{cases} \ \ \ \ \ (1)

We can normalize {\Psi} by requiring

\displaystyle  \int_{0}^{b}\left|\Psi\right|^{2}dx=1 \ \ \ \ \ (2)

Plugging in the formula and doing the integral gives

\displaystyle   \int_{0}^{b}\left|\Psi\right|^{2}dx \displaystyle  = \displaystyle  \left|A\right|^{2}\left[\int_{0}^{a}\frac{x^{2}}{a^{2}}dx+\int_{a}^{b}\left(\frac{b-x}{b-a}\right)^{2}dx\right]\ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  \left|A\right|^{2}\frac{b}{3}\ \ \ \ \ (4)
\displaystyle  A \displaystyle  = \displaystyle  \sqrt{\frac{3}{b}} \ \ \ \ \ (5)

where we’ve taken the positive real root for {A}. Note that {A} could also be multiplied by a phase factor {e^{i\delta}} for any real {\delta} without affecting normalization. This can be important in some applications where we need to add together wave functions.

Given this value for {A}, we can plot 1. Here, we’ve taken {a=1} and {b=3}:

Since {\Psi} has its maximum at {x=a}, that is where the particle is most likely to be found. The probability of the particle being found to the left of {x=a} is

\displaystyle  P_{x<a}=\frac{3}{b}\int_{0}^{a}\frac{x^{2}}{a^{2}}dx=\frac{a}{b} \ \ \ \ \ (6)

If {b=a}, then {\Psi} drops to zero at {x=a} so {P_{x<a}=1}. If {b=2a}, then {\Psi} is an isosceles triangle symmetric about {x=a} so {P_{x<a}=\frac{1}{2}}.

The expectation value of {x} is

\displaystyle  \left\langle x\right\rangle =\int x\left|\Psi\right|^{2}dx=\frac{a}{2}+\frac{b}{4} \ \ \ \ \ (7)

where we used Maple to simplify the integration. If {b=2a}, then {\left\langle x\right\rangle =a} as expected.
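
Alternatively, the normalization, {P_{x<a}} and {\left\langle x\right\rangle } can be checked numerically without Maple; here's a Python sketch (my own addition, taking {a=1} and {b=3}):

```python
import math

# Sketch: check A, P_{x<a} and <x> for the triangular wave function, a = 1, b = 3.
def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a, b = 1.0, 3.0
A = math.sqrt(3 / b)

def psi(x):
    if 0 <= x <= a:
        return A * x / a
    if a < x <= b:
        return A * (b - x) / (b - a)
    return 0.0

# Integrate each linear piece separately so the kink at x = a causes no error.
norm = simpson(lambda x: psi(x)**2, 0, a) + simpson(lambda x: psi(x)**2, a, b)
p_left = simpson(lambda x: psi(x)**2, 0, a)                       # a/b = 1/3
mean_x = (simpson(lambda x: x * psi(x)**2, 0, a) +
          simpson(lambda x: x * psi(x)**2, a, b))                 # a/2 + b/4 = 1.25

print(norm, p_left, mean_x)
```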