Category Archives: Quantum mechanics

Harmonic oscillator: Hermite polynomials and orthogonality of eigenfunctions

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 7.3, Exercises 7.3.2 – 7.3.3.

The eigenfunctions of the harmonic oscillator are given by

\displaystyle  \psi_{n}(x)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^{n}n!}}H_{n}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (1)

where {H_{n}\left(u\right)} is a Hermite polynomial. The Hermite polynomials obey the recursion relation

\displaystyle  H_{n+1}(x)=2xH_{n}(x)-2nH_{n-1}(x) \ \ \ \ \ (2)

The first few Hermite polynomials are given in Shankar’s equation 7.3.21, and we may use these to verify this relation for a couple of cases. Taking {n=2} we have

\displaystyle   H_{3}\left(x\right) \displaystyle  = \displaystyle  2xH_{2}\left(x\right)-4H_{1}\left(x\right)\ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  2x\left[-2\left(1-2x^{2}\right)\right]-4\left(2x\right)\ \ \ \ \ (4)
\displaystyle  \displaystyle  = \displaystyle  -12x+8x^{3} \ \ \ \ \ (5)

The last line agrees with {H_{3}} as given in Shankar.

For {n=3} we have

\displaystyle   H_{4}\left(x\right) \displaystyle  = \displaystyle  2xH_{3}\left(x\right)-6H_{2}\left(x\right)\ \ \ \ \ (6)
\displaystyle  \displaystyle  = \displaystyle  2x\left[-12x+8x^{3}\right]-6\left[-2\left(1-2x^{2}\right)\right]\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  12-48x^{2}+16x^{4} \ \ \ \ \ (8)

which again agrees with Shankar’s equation.
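These hand checks are easy to automate. As a quick sanity check (not part of Shankar's exercise), sympy's built-in `hermite` function lets us verify the recursion 2 for the first several indices, along with the two cases worked out above:

```python
import sympy as sp

x = sp.symbols('x')

# Verify H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x) for the first several n
for n in range(1, 8):
    lhs = sp.hermite(n + 1, x)
    rhs = 2 * x * sp.hermite(n, x) - 2 * n * sp.hermite(n - 1, x)
    assert sp.expand(lhs - rhs) == 0

# The two cases worked out above
assert sp.expand(sp.hermite(3, x)) == 8 * x**3 - 12 * x
assert sp.expand(sp.hermite(4, x)) == 16 * x**4 - 48 * x**2 + 12
```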

We can see from the relation 2 that, given {H_{0}=1} and {H_{1}=2x}, all Hermite polynomials of even index contain only even powers of {x}, and all polynomials of odd index contain only odd powers of {x}. This means that all even Hermite polynomials are even functions of {x}, in the sense that {H_{2n}\left(-x\right)=H_{2n}\left(x\right)}, and all odd Hermite polynomials are odd functions of {x}, so that {H_{2n+1}\left(-x\right)=-H_{2n+1}\left(x\right)}.
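The parity property can be confirmed the same way; here is a small sympy check of my own:

```python
import sympy as sp

x = sp.symbols('x')

# H_n(-x) = (-1)^n H_n(x): even index -> even function, odd index -> odd function
for n in range(10):
    assert sp.expand(sp.hermite(n, -x) - (-1)**n * sp.hermite(n, x)) == 0
```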

If {\psi\left(x\right)} is even and {\phi\left(x\right)} is odd, then

\displaystyle  \psi\left(-x\right)\phi\left(-x\right)=-\psi\left(x\right)\phi\left(x\right) \ \ \ \ \ (9)

That is, the product {\psi\left(x\right)\phi\left(x\right)} is an odd function. Since the integral of any odd function over an interval symmetric about {x=0} is zero, we have

\displaystyle  \int_{-\infty}^{\infty}\psi\left(x\right)\phi\left(x\right)dx=0 \ \ \ \ \ (10)

Looking at the eigenfunctions 1, we see that the exponential factor is a Gaussian centred at {x=0} and is therefore even, so that {\psi_{n}} will be even or odd depending on whether {n} is even or odd. In particular, the integral over all {x} of any even {\psi_{n}} multiplied by any odd {\psi_{m}} will be zero.

To show that pairs of even functions are also orthogonal is a bit trickier, but we can do it in the simplest case, where we consider the functions {\psi_{0}} and {\psi_{2}}.

\displaystyle   \int_{-\infty}^{\infty}\psi_{0}\left(x\right)\psi_{2}\left(x\right)dx \displaystyle  = \displaystyle  \sqrt{\frac{m\omega}{\pi\hbar}}\frac{1}{\sqrt{8}}\int_{-\infty}^{\infty}H_{0}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)H_{2}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)e^{-m\omega x^{2}/\hbar}dx\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{m\omega}{\pi\hbar}}\frac{1}{\sqrt{8}}\int_{-\infty}^{\infty}\left(1\right)\left[-2\left(1-2\frac{m\omega}{\hbar}x^{2}\right)\right]e^{-m\omega x^{2}/\hbar}dx\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  -\sqrt{\frac{m\omega}{\pi\hbar}}\frac{1}{\sqrt{2}}\left[\sqrt{\frac{\pi\hbar}{m\omega}}-\sqrt{\frac{\pi\hbar}{m\omega}}\right]\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  0 \ \ \ \ \ (14)

The two Gaussian integrals can be done using standard formulas as given in Shankar’s Appendix A.2. (I used Maple.)
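For the record, the same integrals can be done in Python with sympy instead of Maple. The sketch below works in units where {m\omega/\hbar=1} (an assumption made purely to shorten the formulas; the general case just rescales {x}):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# psi_n(x) = pi^{-1/4} H_n(x) e^{-x^2/2} / sqrt(2^n n!), in units m*omega/hbar = 1
psi0 = sp.pi**sp.Rational(-1, 4) * sp.hermite(0, x) * sp.exp(-x**2 / 2)
psi2 = sp.pi**sp.Rational(-1, 4) / sp.sqrt(8) * sp.hermite(2, x) * sp.exp(-x**2 / 2)

assert sp.simplify(sp.integrate(psi0 * psi2, (x, -sp.oo, sp.oo))) == 0   # orthogonal
assert sp.simplify(sp.integrate(psi0**2, (x, -sp.oo, sp.oo))) == 1       # normalized
assert sp.simplify(sp.integrate(psi2**2, (x, -sp.oo, sp.oo))) == 1
```

The two standard Gaussian integrals of Shankar's Appendix A.2 are handled internally by `integrate`.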

Harmonic oscillator – series solution revisited

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 7.3, Exercise 7.3.1.

Shankar’s derivation of the eigenfunctions of the harmonic oscillator in the position basis is essentially the same as that in Griffiths, which we’ve covered before. The reader may wish to refresh their knowledge of this before reading the rest of this post.

To make the comparison we note that {\epsilon} in Griffiths is {2\varepsilon} in Shankar:

\displaystyle  \varepsilon\equiv\frac{E}{\hbar\omega} \ \ \ \ \ (1)

The analysis begins with the Schrödinger equation for the harmonic oscillator, which is

\displaystyle  -\frac{\hbar^{2}}{2m}\frac{d^{2}\psi}{dx^{2}}+\frac{1}{2}m\omega^{2}x^{2}\psi=E\psi \ \ \ \ \ (2)

Making the substitution

\displaystyle  y\equiv\sqrt{\frac{m\omega}{\hbar}}x \ \ \ \ \ (3)

we convert the equation to

\displaystyle  \psi^{\prime\prime}+\left(2\varepsilon-y^{2}\right)\psi=0 \ \ \ \ \ (4)

where a prime indicates a derivative with respect to {y}.

As explained in the earlier post, we further convert this equation by defining another function {u\left(y\right)} (Griffiths calls this function {f\left(y\right)}) as

\displaystyle  \psi(y)=e^{-y^{2}/2}u(y) \ \ \ \ \ (5)

This results in a simpler differential equation for {u}:

\displaystyle  \frac{d^{2}u}{dy^{2}}-2y\frac{du}{dy}+(2\varepsilon-1)u=0 \ \ \ \ \ (6)

We can solve this by proposing that {u} is a power series in {y}:

\displaystyle  u\left(y\right)=\sum_{n=0}^{\infty}C_{n}y^{n} \ \ \ \ \ (7)

This leads to the recursion relation for the coefficients {C_{n}}:

\displaystyle  C_{n+2}=C_{n}\frac{2n+1-2\varepsilon}{\left(n+1\right)\left(n+2\right)} \ \ \ \ \ (8)

In order that {u} is finite for large {y}, this series must terminate, which leads to the quantization condition for the energy:

\displaystyle  E_{n}=\hbar\omega\left(n+\frac{1}{2}\right) \ \ \ \ \ (9)
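A short numerical sketch (my own, with an assumed seed of {C_{0}=1} or {C_{1}=1}) shows how the recursion 8 terminates when {2\varepsilon=2n+1}, i.e. when {E=\hbar\omega\left(n+1/2\right)}, and reproduces the Hermite coefficients up to normalization:

```python
from math import isclose

def series_coeffs(eps, parity, nmax):
    """C_{n+2} = C_n (2n + 1 - 2 eps)/((n + 1)(n + 2)),
    seeded with C_0 = 1 (even series) or C_1 = 1 (odd series)."""
    C = [0.0] * (nmax + 1)
    C[parity] = 1.0
    for n in range(parity, nmax - 1, 2):
        C[n + 2] = C[n] * (2 * n + 1 - 2 * eps) / ((n + 1) * (n + 2))
    return C

# eps = n + 1/2 with n = 4: the even series terminates after the y^4 term
C = series_coeffs(eps=4.5, parity=0, nmax=10)
assert all(c == 0.0 for c in C[5:])

# Up to overall normalization the coefficients match H_4(y) = 16y^4 - 48y^2 + 12
assert isclose(C[2] / C[0], -48 / 12)
assert isclose(C[4] / C[0], 16 / 12)
```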

Shankar poses as an exercise the question of why we don’t simply try a series solution of 4 directly; that is, we propose

\displaystyle  \psi\left(y\right)=\sum_{n=0}^{\infty}A_{n}y^{n} \ \ \ \ \ (10)

for some other coefficients {A_{n}}. If we try this, plugging the series into 4 produces three sums with different powers of {y}:

\displaystyle   \psi^{\prime\prime} \displaystyle  = \displaystyle  \sum_{n=0}^{\infty}A_{n}n\left(n-1\right)y^{n-2}\ \ \ \ \ (11)
\displaystyle  2\varepsilon\psi \displaystyle  = \displaystyle  2\varepsilon\sum_{n=0}^{\infty}A_{n}y^{n}\ \ \ \ \ (12)
\displaystyle  -y^{2}\psi \displaystyle  = \displaystyle  -\sum_{n=0}^{\infty}A_{n}y^{n+2} \ \ \ \ \ (13)

To compare the coefficients, we shift the summation indices so that the powers of {y} are the same in all three terms.

\displaystyle   \psi^{\prime\prime} \displaystyle  = \displaystyle  \sum_{n=2}^{\infty}A_{n}n\left(n-1\right)y^{n-2}=\sum_{n=0}^{\infty}A_{n+2}\left(n+2\right)\left(n+1\right)y^{n}\ \ \ \ \ (14)
\displaystyle  2\varepsilon\psi \displaystyle  = \displaystyle  2\varepsilon\sum_{n=0}^{\infty}A_{n}y^{n}\ \ \ \ \ (15)
\displaystyle  -y^{2}\psi \displaystyle  = \displaystyle  -\sum_{n=0}^{\infty}A_{n}y^{n+2}=-\sum_{n=2}^{\infty}A_{n-2}y^{n} \ \ \ \ \ (16)

Note that the top two sums start at {n=0} while the last sum starts at {n=2}. To satisfy 4, the coefficient of each power of {y} must be zero, which for {n\ge2} gives

\displaystyle  A_{n+2}\left(n+2\right)\left(n+1\right)+2\varepsilon A_{n}-A_{n-2}=0 \ \ \ \ \ (17)

There are two separate sequences here: one for even {n} and one for odd {n}. For {n=0} and {n=1} the {A_{n-2}} term is absent, so the conditions reduce to {A_{2}=-\varepsilon A_{0}} and {A_{3}=-\varepsilon A_{1}/3}, and each sequence is seeded by a single constant ({A_{0}} or {A_{1}}). From {n=2} onwards, however, each new coefficient depends on the two preceding coefficients in its sequence: {A_{0}} and {A_{2}} give {A_{4}} (when {n=2}), then {A_{2}} and {A_{4}} give {A_{6}}, and so on. It is this three-term structure that makes the direct series much harder to analyze than the two-term recursion 8. The general formula is

\displaystyle  A_{n+2}=\frac{A_{n-2}-2\varepsilon A_{n}}{\left(n+2\right)\left(n+1\right)} \ \ \ \ \ (18)
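To see concretely that the three-term recursion still contains the right physics (even though it is harder to analyze), here is a small check of my own: seeding it with {A_{0}=1,A_{1}=0} and the ground-state value {\varepsilon=1/2} should reproduce the Taylor series of {\psi_{0}\propto e^{-y^{2}/2}}:

```python
from math import factorial, isclose

def psi_coeffs(eps, A0, A1, nmax):
    """A_{n+2} = (A_{n-2} - 2 eps A_n)/((n + 2)(n + 1)); for n = 0 and
    n = 1 the A_{n-2} term is absent, so A_2, A_3 follow from A_0, A_1."""
    A = [0.0] * (nmax + 1)
    A[0], A[1] = A0, A1
    for n in range(nmax - 1):
        prev = A[n - 2] if n >= 2 else 0.0
        A[n + 2] = (prev - 2 * eps * A[n]) / ((n + 2) * (n + 1))
    return A

# Ground state eps = 1/2: psi = e^{-y^2/2}, whose Taylor coefficient
# of y^{2k} is (-1)^k / (2^k k!)
A = psi_coeffs(eps=0.5, A0=1.0, A1=0.0, nmax=12)
for k in range(7):
    assert isclose(A[2 * k], (-1)**k / (2**k * factorial(k)))
```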

The classical limit of quantum mechanics; Ehrenfest’s theorem

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 6.

We’ve met Ehrenfest’s theorem while studying Griffiths’s book, where the theorem had the form

\displaystyle \frac{\partial\langle p\rangle}{\partial t}=-\left\langle \frac{\partial V}{\partial x}\right\rangle \ \ \ \ \ (1)

This says that, in one dimension, the rate of change of the mean momentum equals the negative of the mean of the derivative of the potential {V}, which is assumed to depend on {x} only. The behaviour of the means of the quantum variables thus reduces to the corresponding classical relation, namely Newton’s law {F=\frac{dp}{dt}}, where the force is defined in terms of the gradient of the potential: {F=-\frac{dV}{dx}}.

Shankar treats Ehrenfest’s theorem a bit more generally. For an operator {\Omega} we can use the product rule to state that

\displaystyle \frac{d}{dt}\left\langle \Omega\right\rangle \displaystyle = \displaystyle \frac{d}{dt}\left\langle \psi\left|\Omega\right|\psi\right\rangle \ \ \ \ \ (2)
\displaystyle \displaystyle = \displaystyle \left\langle \dot{\psi}\left|\Omega\right|\psi\right\rangle +\left\langle \psi\left|\Omega\right|\dot{\psi}\right\rangle +\left\langle \psi\left|\dot{\Omega}\right|\psi\right\rangle \ \ \ \ \ (3)

where a dot indicates a time derivative. If {\Omega} does not depend explicitly on time, we have

\displaystyle \frac{d}{dt}\left\langle \Omega\right\rangle =\left\langle \dot{\psi}\left|\Omega\right|\psi\right\rangle +\left\langle \psi\left|\Omega\right|\dot{\psi}\right\rangle \ \ \ \ \ (4)

 

The time derivative of {\psi} can be found from the Schrödinger equation:

\displaystyle \left|\dot{\psi}\right\rangle \displaystyle = \displaystyle -\frac{i}{\hbar}H\left|\psi\right\rangle \ \ \ \ \ (5)
\displaystyle \left\langle \dot{\psi}\right| \displaystyle = \displaystyle \frac{i}{\hbar}\left\langle \psi\right|H \ \ \ \ \ (6)

The second equation follows since {H} is Hermitian, so {H^{\dagger}=H}. Plugging these into 4 we have

\displaystyle \frac{d}{dt}\left\langle \Omega\right\rangle \displaystyle = \displaystyle \frac{i}{\hbar}\left[\left\langle \psi\left|H\Omega\right|\psi\right\rangle -\left\langle \psi\left|\Omega H\right|\psi\right\rangle \right]\ \ \ \ \ (7)
\displaystyle \displaystyle = \displaystyle -\frac{i}{\hbar}\left\langle \psi\left|\left[\Omega,H\right]\right|\psi\right\rangle \ \ \ \ \ (8)
\displaystyle \displaystyle = \displaystyle -\frac{i}{\hbar}\left\langle \left[\Omega,H\right]\right\rangle \ \ \ \ \ (9)

That is, the rate of change of the mean of an operator can be found from its commutator with the Hamiltonian. It is this result that Shankar refers to as Ehrenfest’s theorem. This relation is similar to that from classical mechanics, where the rate of change of a dynamical variable {\omega} is equal to its Poisson bracket with the classical Hamiltonian. In the Hamiltonian formulation of classical mechanics, dynamical variables depend on generalized coordinates {q_{i}} and their corresponding momenta {p_{i}}, so we have:

\displaystyle \frac{d\omega}{dt} \displaystyle = \displaystyle \sum_{i}\left(\frac{\partial\omega}{\partial q_{i}}\dot{q}_{i}+\frac{\partial\omega}{\partial p_{i}}\dot{p}_{i}\right)\ \ \ \ \ (10)
\displaystyle \displaystyle = \displaystyle \sum_{i}\left(\frac{\partial\omega}{\partial q_{i}}\frac{\partial H}{\partial p_{i}}-\frac{\partial\omega}{\partial p_{i}}\frac{\partial H}{\partial q_{i}}\right)\ \ \ \ \ (11)
\displaystyle \displaystyle \equiv \displaystyle \left\{ \omega,H\right\} \ \ \ \ \ (12)

We can work out 9 for the particular cases where {\Omega=X}, the position operator and {\Omega=P}, the momentum operator. For a Hamiltonian of the form

\displaystyle H=\frac{P^{2}}{2m}+V\left(x\right) \ \ \ \ \ (13)

 

and using the commutation relation

\displaystyle \left[X,P\right]=i\hbar \ \ \ \ \ (14)

we have

\displaystyle \frac{d\left\langle X\right\rangle }{dt} \displaystyle = \displaystyle -\frac{i}{\hbar}\left\langle \left[X,H\right]\right\rangle \ \ \ \ \ (15)
\displaystyle \displaystyle = \displaystyle -\frac{i}{2m\hbar}\left\langle \left[X,P^{2}\right]\right\rangle \ \ \ \ \ (16)

We can evaluate this commutator using the theorem

\displaystyle \left[AB,C\right]=A\left[B,C\right]+\left[A,C\right]B \ \ \ \ \ (17)

In this case, {A=B=P} and {C=X}, so we have

\displaystyle \left[P^{2},X\right] \displaystyle = \displaystyle P\left[P,X\right]+\left[P,X\right]P\ \ \ \ \ (18)
\displaystyle \displaystyle = \displaystyle -2i\hbar P\ \ \ \ \ (19)
\displaystyle \left[X,P^{2}\right] \displaystyle = \displaystyle 2i\hbar P\ \ \ \ \ (20)
\displaystyle \frac{d\left\langle X\right\rangle }{dt} \displaystyle = \displaystyle \frac{\left\langle P\right\rangle }{m} \ \ \ \ \ (21)
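The commutator algebra above is easy to verify by letting the operators act on a test function in the position basis, where {P=-i\hbar\,d/dx}. Here is a sympy sketch of my own, with {X} and {P} written as functional operators:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
f = sp.Function('f')(x)

def P(g):
    # momentum operator in the position basis
    return -sp.I * hbar * sp.diff(g, x)

def X(g):
    # position operator
    return x * g

# [X, P^2] f = X P^2 f - P^2 X f should equal 2 i hbar P f
lhs = X(P(P(f))) - P(P(X(f)))
rhs = 2 * sp.I * hbar * P(f)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```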

This is equivalent to the classical relation {p=mv} for velocity {v}. We can write this result in terms of the Hamiltonian, provided that it’s legal to take the derivative of the Hamiltonian with respect to an operator (which works if we can expand the Hamiltonian as a power series):

\displaystyle \frac{d\left\langle X\right\rangle }{dt}=\frac{\left\langle P\right\rangle }{m}=\left\langle \frac{\partial H}{\partial P}\right\rangle \ \ \ \ \ (22)

This looks a lot like one of Hamilton’s canonical equations in classical mechanics:

\displaystyle \dot{q}_{i}=\frac{\partial H}{\partial p_{i}} \ \ \ \ \ (23)

The main difference between the quantum and classical forms is that the quantum version is a relation between mean values, while the classical version is exact. We can make the correspondence exact provided that it’s legal to take the averaging operation inside the derivative and apply it to each occurrence of {X} and {P}. That is, is it legal to say that

\displaystyle \left\langle \frac{\partial H}{\partial P}\right\rangle =\left\langle \frac{\partial H\left(P,X\right)}{\partial P}\right\rangle =\frac{\partial H\left(\left\langle P\right\rangle ,\left\langle X\right\rangle \right)}{\partial\left\langle P\right\rangle } \ \ \ \ \ (24)

This depends on the precise functional form of {H}. In the case 13 we’re considering here, we have

\displaystyle \left\langle \frac{\partial H}{\partial P}\right\rangle =\left\langle \frac{P}{m}\right\rangle =\frac{\left\langle P\right\rangle }{m}=\frac{\partial}{\partial\left\langle P\right\rangle }\left(\frac{\left\langle P\right\rangle ^{2}}{2m}+V\left(\left\langle X\right\rangle \right)\right) \ \ \ \ \ (25)

So in this case it works. In general, if {H} depends on {P} either linearly or quadratically, then its derivative with respect to {P} will be either constant or linear, and we can take the averaging operation inside the function without changing anything. However, if, say, {H=P^{3}} (unlikely, but just for the sake of argument), then

\displaystyle \left\langle \frac{\partial H}{\partial P}\right\rangle =\left\langle 3P^{2}\right\rangle \ne3\left\langle P\right\rangle ^{2}=\frac{\partial H\left(\left\langle P\right\rangle ,\left\langle X\right\rangle \right)}{\partial\left\langle P\right\rangle } \ \ \ \ \ (26)

since, in general, the mean of the square of a value is not the same as the square of the mean.
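A quick numerical illustration of this point (my own, using an arbitrary Gaussian momentum distribution as the assumed state):

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample momenta from an arbitrary spread-out distribution (illustrative only)
p = rng.normal(loc=1.0, scale=2.0, size=100_000)

mean_of_3p2 = np.mean(3 * p**2)       # <3 P^2>
three_mean_p_sq = 3 * np.mean(p)**2   # 3 <P>^2

# The two differ by 3 Var(P), which vanishes only for a dispersion-free state
assert mean_of_3p2 - three_mean_p_sq > 10.0
```

The gap is exactly {3\,\mathrm{Var}\left(P\right)}; only for a state with zero momentum spread would the mean of the square equal the square of the mean.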

Shankar goes through a similar argument for {\dot{P}}. We have

\displaystyle \left\langle \dot{P}\right\rangle =-\frac{i}{\hbar}\left\langle \left[P,H\right]\right\rangle \ \ \ \ \ (27)

 

In this case, we can use the position basis form of {P} which is

\displaystyle P=-i\hbar\frac{d}{dx} \ \ \ \ \ (28)

and the position space version of the potential {V\left(x\right)} to get

\displaystyle \left[P,H\right]\psi \displaystyle = \displaystyle -i\hbar\left(\frac{d\left(V\psi\right)}{dx}-V\frac{d\psi}{dx}\right)\ \ \ \ \ (29)
\displaystyle \displaystyle = \displaystyle -i\hbar\psi\frac{dV}{dx} \ \ \ \ \ (30)

Using this in 27 we have

\displaystyle \left\langle \dot{P}\right\rangle =-\left\langle \frac{dV}{dx}\right\rangle \ \ \ \ \ (31)

Writing this in terms of the Hamiltonian, we have

\displaystyle \left\langle \dot{P}\right\rangle =-\left\langle \frac{\partial H}{\partial x}\right\rangle \ \ \ \ \ (32)

 

Again, this looks similar to the second of Hamilton’s canonical equations from classical mechanics:

\displaystyle \dot{p}_{i}=-\frac{\partial H}{\partial q_{i}} \ \ \ \ \ (33)

and again, we’re allowed to make the correspondence exact provided we can take the averaging operation inside the derivative on the RHS of 32. This works provided that {V} is either linear or quadratic in {x} (such as in the harmonic oscillator). Other potentials such as the {\frac{1}{r}} potential in the hydrogen atom do not allow an exact correspondence between the quantum average and the classical Hamilton equation, but this shouldn’t worry us too much since the hydrogen atom is quintessentially quantum anyway, and any attempt to describe it classically will not work.

Shankar provides a lengthy discussion of when the reduction to classical mechanics is valid, and shows that in any practical experiment that we could do with a classical particle, the difference between the average quantum behaviour and the classical measurements should be so small as to be undetectable. It is only when we deal with systems small enough that quantum effects dominate that we need to abandon classical mechanics.

Probability current: a few examples

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 5.3, Exercises 5.3.2 – 5.3.4.

Here are a few examples of probability current.

Example 1 Suppose the wave function has the form

\displaystyle  \psi\left(\mathbf{r},t\right)=c\tilde{\psi}\left(\mathbf{r},t\right) \ \ \ \ \ (1)

where {c} is a complex constant and {\tilde{\psi}\left(\mathbf{r},t\right)} is a real function of position and time. Then the probability current is

\displaystyle   \mathbf{j} \displaystyle  = \displaystyle  \frac{\hbar}{2mi}\left(\psi^*\nabla\psi-\psi\nabla\psi^*\right)\ \ \ \ \ (2)
\displaystyle  \displaystyle  = \displaystyle  \frac{\hbar}{2mi}\left|c\right|^{2}\left(\tilde{\psi}\nabla\tilde{\psi}-\tilde{\psi}\nabla\tilde{\psi}\right)\ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  0 \ \ \ \ \ (4)

In particular, if {\psi} itself is real, the probability current is always zero, so all the stationary states of systems like the harmonic oscillator and hydrogen atom that we’ve studied show no flow of probability, which is what we’d expect since they are, after all, stationary states.

Example 2 Now the wave function is

\displaystyle   \psi_{\mathbf{p}} \displaystyle  = \displaystyle  \frac{1}{\left(2\pi\hbar\right)^{3/2}}e^{i\mathbf{p}\cdot\mathbf{r}/\hbar} \ \ \ \ \ (5)

where the momentum {\mathbf{p}} is constant. In this case we have

\displaystyle   \nabla\psi_{\mathbf{p}} \displaystyle  = \displaystyle  \frac{i}{\left(2\pi\hbar\right)^{3/2}\hbar}e^{i\mathbf{p}\cdot\mathbf{r}/\hbar}\mathbf{p}\ \ \ \ \ (6)
\displaystyle  \nabla\psi_{\mathbf{p}}^* \displaystyle  = \displaystyle  \frac{-i}{\left(2\pi\hbar\right)^{3/2}\hbar}e^{-i\mathbf{p}\cdot\mathbf{r}/\hbar}\mathbf{p}\ \ \ \ \ (7)
\displaystyle  \psi_{\mathbf{p}}^* \displaystyle  = \displaystyle  \frac{1}{\left(2\pi\hbar\right)^{3/2}}e^{-i\mathbf{p}\cdot\mathbf{r}/\hbar} \ \ \ \ \ (8)

This gives a probability current of

\displaystyle   \mathbf{j} \displaystyle  = \displaystyle  \frac{\hbar}{2mi}(\psi_{\mathbf{p}}^*\nabla\psi_{\mathbf{p}}-\psi_{\mathbf{p}}\nabla\psi_{\mathbf{p}}^*)\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\left(2\pi\hbar\right)^{3}2m}\left(\mathbf{p}+\mathbf{p}\right)\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\left(2\pi\hbar\right)^{3}m}\mathbf{p} \ \ \ \ \ (11)

The probability density is

\displaystyle  P=\psi_{\mathbf{p}}^*\psi_{\mathbf{p}}=\frac{1}{\left(2\pi\hbar\right)^{3}} \ \ \ \ \ (12)

Thus the current can be written as

\displaystyle  \mathbf{j}=\frac{P}{m}\mathbf{p} \ \ \ \ \ (13)

Classically, the momentum is {\mathbf{p}=m\mathbf{v}} for velocity {\mathbf{v}}, so the current has the same form as {\mathbf{j}=P\mathbf{v}}. This is similar to the electromagnetic case, where the electric current density is {\mathbf{J}=\rho\mathbf{v}}, with {\rho} the charge density and {\mathbf{v}} the velocity of that charge. The probability current can thus be viewed as “probability” flowing with velocity {\mathbf{v}}.

Example 3 Now consider a one-dimensional problem where the wave function consists of two oppositely-moving plane waves:

\displaystyle  \psi=Ae^{ipx/\hbar}+Be^{-ipx/\hbar} \ \ \ \ \ (14)

In this case, we have

\displaystyle   \frac{2mi}{\hbar}j \displaystyle  = \displaystyle  \psi^*\nabla\psi-\psi\nabla\psi^*\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  \left(A^*e^{-ipx/\hbar}+B^*e^{ipx/\hbar}\right)\frac{ip}{\hbar}\left(Ae^{ipx/\hbar}-Be^{-ipx/\hbar}\right)-\nonumber
\displaystyle  \displaystyle  \displaystyle  \left(Ae^{ipx/\hbar}+Be^{-ipx/\hbar}\right)\frac{ip}{\hbar}\left(-A^*e^{-ipx/\hbar}+B^*e^{ipx/\hbar}\right)\ \ \ \ \ (16)
\displaystyle  \displaystyle  = \displaystyle  \frac{2ip}{\hbar}\left(\left|A\right|^{2}-\left|B\right|^{2}\right)\ \ \ \ \ (17)
\displaystyle  j \displaystyle  = \displaystyle  \frac{p}{m}\left(\left|A\right|^{2}-\left|B\right|^{2}\right) \ \ \ \ \ (18)

The probability current separates into two terms, one for each direction of momentum.
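This computation can also be checked symbolically; here is a sympy sketch of my own, treating {A} and {B} as arbitrary complex amplitudes and all other symbols as real:

```python
import sympy as sp

x, p, hbar, m = sp.symbols('x p hbar m', real=True, positive=True)
A, B = sp.symbols('A B')   # complex amplitudes

psi = A * sp.exp(sp.I * p * x / hbar) + B * sp.exp(-sp.I * p * x / hbar)
psic = sp.conjugate(psi)

# j = (hbar/2mi) (psi* psi' - psi psi*')
j = hbar / (2 * m * sp.I) * (psic * sp.diff(psi, x) - psi * sp.diff(psic, x))

# |A|^2 = A A*, |B|^2 = B B*
target = (p / m) * (A * sp.conjugate(A) - B * sp.conjugate(B))
assert sp.simplify(j - target) == 0
```

The cross terms proportional to {e^{\pm2ipx/\hbar}} cancel identically, leaving only the two directional contributions.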

Probability current with complex potential

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 5.3, Exercise 5.3.1.

Shankar’s derivation of the probability current in 3-d is similar to the one we reviewed earlier, so we don’t need to repeat it here. We can, however, look at a slight variant where the potential has a constant imaginary part, so that

\displaystyle  V\left(\mathbf{r}\right)=V_{r}\left(\mathbf{r}\right)-iV_{i} \ \ \ \ \ (1)

where {V_{r}\left(\mathbf{r}\right)} is a real function of position and {V_{i}} is a real constant. A Hamiltonian containing such a complex potential is not Hermitian.

To see what effect this has on the total probability of finding a particle in all space, we can repeat the derivation of the probability current. From the Schrödinger equation and its complex conjugate, we have

\displaystyle   i\hbar\frac{\partial\psi}{\partial t} \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\nabla^{2}\psi+V_{r}\psi-iV_{i}\psi\ \ \ \ \ (2)
\displaystyle  -i\hbar\frac{\partial\psi^*}{\partial t} \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\nabla^{2}\psi^*+V_{r}\psi^*+iV_{i}\psi^* \ \ \ \ \ (3)

Multiply the first equation by {\psi^*} and the second by {\psi} and subtract to get

\displaystyle  i\hbar\frac{\partial}{\partial t}\left(\psi\psi^*\right)=-\frac{\hbar^{2}}{2m}\left(\psi^*\nabla^{2}\psi-\psi\nabla^{2}\psi^*\right)-2iV_{i}\psi\psi^* \ \ \ \ \ (4)

As in the case with a real potential, the first term on the RHS can be written as the divergence of a vector:

\displaystyle   \mathbf{J} \displaystyle  = \displaystyle  \frac{\hbar}{2mi}(\psi^*\nabla\psi-\psi\nabla\psi^*)\ \ \ \ \ (5)
\displaystyle  \nabla\cdot\mathbf{J} \displaystyle  = \displaystyle  \frac{\hbar}{2mi}\left(\psi^*\nabla^{2}\psi-\psi\nabla^{2}\psi^*\right)\ \ \ \ \ (6)
\displaystyle  \frac{\partial}{\partial t}\left(\psi\psi^*\right) \displaystyle  = \displaystyle  -\nabla\cdot\mathbf{J}-\frac{2V_{i}}{\hbar}\psi\psi^* \ \ \ \ \ (7)

If we define the total probability of finding the particle anywhere in space as

\displaystyle  P\equiv\int\psi^*\psi d^{3}\mathbf{r} \ \ \ \ \ (8)

then we can integrate 7 over all space and use Gauss’s theorem to convert the volume integral of the divergence into a surface integral:

\displaystyle   \frac{\partial}{\partial t}\left(\int\psi\psi^*d^{3}\mathbf{r}\right) \displaystyle  = \displaystyle  -\int\nabla\cdot\mathbf{J}d^{3}\mathbf{r}-\frac{2V_{i}}{\hbar}\int\psi\psi^*d^{3}\mathbf{r}\ \ \ \ \ (9)
\displaystyle  \frac{\partial P}{\partial t} \displaystyle  = \displaystyle  -\int_{S}\mathbf{J}\cdot d\mathbf{a}-\frac{2V_{i}}{\hbar}P \ \ \ \ \ (10)

We make the usual assumption that the probability current {\mathbf{J}} tends to zero at infinity fast enough for the first integral on the RHS to be zero, and we get

\displaystyle  \frac{\partial P}{\partial t}=-\frac{2V_{i}}{\hbar}P \ \ \ \ \ (11)

This has the solution

\displaystyle  P\left(t\right)=P\left(0\right)e^{-2V_{i}t/\hbar} \ \ \ \ \ (12)

That is, the probability of the particle existing decays exponentially. Although Shankar says that such a potential can be used to model a system where particles are absorbed, it’s not clear how realistic it is, since the Hamiltonian isn’t Hermitian, so technically the energies in such a system are not observables.

Infinite square well – expanding well

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 5.2, Exercise 5.2.1.

Shankar’s treatment of the infinite square well is similar to that of Griffiths, which we’ve already covered, so we won’t go through the details again. The main difference is that Shankar places the potential walls at {x=\pm\frac{L}{2}} while Griffiths places them at {x=0} and {x=a}. As a result, the stationary states found by Shankar are shifted to the left, with the result

\displaystyle  \psi_{n}\left(x\right)=\begin{cases} \sqrt{\frac{2}{L}}\cos\frac{n\pi x}{L} & n=1,3,5,7,\ldots\\ \sqrt{\frac{2}{L}}\sin\frac{n\pi x}{L} & n=2,4,6,\ldots \end{cases} \ \ \ \ \ (1)

These results can be obtained from the form given by Griffiths (where we take the width of the well to be {L} rather than {a}):

\displaystyle   \psi_{n}\left(x\right) \displaystyle  = \displaystyle  \sqrt{\frac{2}{L}}\sin\frac{n\pi\left(x+\frac{L}{2}\right)}{L}\ \ \ \ \ (2)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{2}{L}}\left[\sin\frac{n\pi x}{L}\cos\frac{n\pi}{2}+\cos\frac{n\pi x}{L}\sin\frac{n\pi}{2}\right] \ \ \ \ \ (3)

Choosing {n} to be even or odd gives the results in 1.

The specific problem we’re solving here involves a particle that starts off in the ground state ({n=1}) of a square well of width {L}. The well then suddenly expands to a width of {2L} symmetrically, that is, it now extends from {x=-L} to {x=+L}. We are to find the probability that the particle will be found in the ground state of the new well.

We solved a similar problem before, but in that case the well expanded by moving its right-hand wall to the right while keeping the left-hand wall fixed, so that the particle found itself in the left half of the new, expanded well. In the present problem, the particle finds itself centred in the new expanded well. You might think that this shouldn’t matter, but it turns out to make quite a difference. To calculate this probability, we need to express the original wave function in terms of the stationary states of the expanded well, which we’ll refer to as {\phi_{n}\left(x\right)}. That is

\displaystyle  \psi_{1}\left(x\right)=\sum_{n=1}^{\infty}c_{n}\phi_{n}\left(x\right) \ \ \ \ \ (4)

Working with Shankar’s functions 1 we find {\phi_{n}} by replacing {L} by {2L}:

\displaystyle  \phi_{n}\left(x\right)=\begin{cases} \frac{1}{\sqrt{L}}\cos\frac{n\pi x}{2L} & n=1,3,5,7,\ldots\\ \frac{1}{\sqrt{L}}\sin\frac{n\pi x}{2L} & n=2,4,6,\ldots \end{cases} \ \ \ \ \ (5)

Using the orthonormality of the wave functions, we have

\displaystyle   c_{1} \displaystyle  = \displaystyle  \int_{-L}^{L}\psi_{1}\left(x\right)\phi_{1}\left(x\right)dx\ \ \ \ \ (6)
\displaystyle  \displaystyle  = \displaystyle  \int_{-L/2}^{L/2}\sqrt{\frac{2}{L}}\cos\frac{\pi x}{L}\frac{1}{\sqrt{L}}\cos\frac{\pi x}{2L}dx\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  \frac{\sqrt{2}}{L}\int_{-L/2}^{L/2}\cos\frac{\pi x}{L}\cos\frac{\pi x}{2L}dx\ \ \ \ \ (8)
\displaystyle  \displaystyle  = \displaystyle  \frac{\sqrt{2}}{L}\int_{-L/2}^{L/2}\left(1-2\sin^{2}\frac{\pi x}{2L}\right)\cos\frac{\pi x}{2L}dx\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  \frac{8}{3\pi} \ \ \ \ \ (10)

The limits of integration are reduced in the second line since {\psi_{1}\left(x\right)=0} for {\left|x\right|>\frac{L}{2}}.

Thus the probability of finding the particle in the new ground state is

\displaystyle  \left|c_{1}\right|^{2}=\frac{64}{9\pi^{2}} \ \ \ \ \ (11)

Note that in the earlier problem where the well expanded to the right, the probability was {\frac{32}{9\pi^{2}}}, so the new probability is twice as much when the wave function remains centred in the new well.
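Maple isn’t required here either; a sympy version of the overlap calculation (my own check) confirms both the coefficient and the probability:

```python
import sympy as sp

x, L = sp.symbols('x L', real=True, positive=True)

psi1 = sp.sqrt(2 / L) * sp.cos(sp.pi * x / L)      # old ground state, |x| < L/2
phi1 = sp.cos(sp.pi * x / (2 * L)) / sp.sqrt(L)    # new ground state, |x| < L

# psi1 vanishes outside [-L/2, L/2], so the overlap reduces to that range
c1 = sp.integrate(psi1 * phi1, (x, -L / 2, L / 2))
assert sp.simplify(c1 - 8 / (3 * sp.pi)) == 0
assert sp.simplify(c1**2 - 64 / (9 * sp.pi**2)) == 0
```

Note that {L} drops out of the result, as it must for a dimensionless probability.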

We could have also done the calculation using Griffiths’s well which extended from {x=0} to {x=L}. If this well expands symmetrically, it now runs from {x=-\frac{L}{2}} to {x=\frac{3L}{2}}, and the stationary states of this new well are obtained by replacing {L\rightarrow2L} and {x\rightarrow x+\frac{L}{2}}, so we have

\displaystyle  \phi_{n}\left(x\right)=\frac{1}{\sqrt{L}}\sin\frac{n\pi\left(x+\frac{L}{2}\right)}{2L} \ \ \ \ \ (12)

We then get

\displaystyle   c_{1} \displaystyle  = \displaystyle  \int_{-L/2}^{3L/2}\psi_{1}\left(x\right)\phi_{1}\left(x\right)dx\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  \frac{\sqrt{2}}{L}\int_{0}^{L}\sin\frac{\pi x}{L}\sin\frac{\pi\left(x+\frac{L}{2}\right)}{2L}dx\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  \frac{8}{3\pi} \ \ \ \ \ (15)

The integral can be done by expanding the second sine using the sine addition formula. (I just used Maple.)

Propagator for a Gaussian wave packet for the free particle

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 5.1, Exercise 5.1.3.

The propagator for the free particle is

\displaystyle  U\left(t\right)=\int_{-\infty}^{\infty}e^{-ip^{2}t/2m\hbar}\left|p\right\rangle \left\langle p\right|dp \ \ \ \ \ (1)

We can find its matrix elements in position space by using the position space form of the momentum

\displaystyle  \left\langle x\left|p\right.\right\rangle =\frac{1}{\sqrt{2\pi\hbar}}e^{ipx/\hbar} \ \ \ \ \ (2)

Taking the matrix element of 1 we have

\displaystyle   U\left(x,t;x^{\prime}\right) \displaystyle  = \displaystyle  \left\langle x\left|U\left(t\right)\right|x^{\prime}\right\rangle \ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  \int\left\langle x\left|p\right.\right\rangle \left\langle p\left|x^{\prime}\right.\right\rangle e^{-ip^{2}t/2m\hbar}dp\ \ \ \ \ (4)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2\pi\hbar}\int e^{ip\left(x-x^{\prime}\right)/\hbar}e^{-ip^{2}t/2m\hbar}dp\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{m}{2\pi\hbar it}}e^{im\left(x-x^{\prime}\right)^{2}/2\hbar t} \ \ \ \ \ (6)

The final integral can be done by combining the exponents in the third line, completing the square and using the standard formula for Gaussian integrals. We won’t go through that here, as our main goal is to explore the evolution of an initial wave packet using the propagator. Given 6, we can in principle find the wave function for all future times given an initial wave function, by using the propagator:

\displaystyle  \psi\left(x,t\right)=\int U\left(x,t;x^{\prime}\right)\psi\left(x^{\prime},0\right)dx^{\prime} \ \ \ \ \ (7)

Here, we’re assuming that the initial time is {t=0}. Shankar uses the standard example where the initial wave packet is a Gaussian:

\displaystyle  \psi\left(x^{\prime},0\right)=e^{ip_{0}x^{\prime}/\hbar}\frac{e^{-x^{\prime2}/2\Delta^{2}}}{\left(\pi\Delta^{2}\right)^{1/4}} \ \ \ \ \ (8)

This is a wave packet distributed symmetrically about the origin, so that {\left\langle X\right\rangle =0}, and with mean momentum given by {\left\langle P\right\rangle =p_{0}}. By plugging this and 6 into 7, we can work out the time-dependent version of the wave packet, which Shankar gives as

\displaystyle  \psi\left(x,t\right)=\left[\sqrt{\pi}\left(\Delta+\frac{i\hbar t}{m\Delta}\right)\right]^{-1/2}\exp\left[\frac{-\left(x-p_{0}t/m\right)^{2}}{2\Delta^{2}\left(1+i\hbar t/m\Delta^{2}\right)}\right]\exp\left[\frac{ip_{0}}{\hbar}\left(x-\frac{p_{0}t}{2m}\right)\right] \ \ \ \ \ (9)
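Rather than wading through the Gaussian integrals, we can spot-check 9 numerically against 7. Below is a sketch of my own in Python, in assumed units with {\hbar=m=1}, {p_{0}=0} and {\Delta=1} to keep the formulas short:

```python
import numpy as np

t = 1.0
dx = 0.004
xp = np.arange(-12.0, 12.0, dx)                 # integration grid for x'
psi0 = np.pi**-0.25 * np.exp(-xp**2 / 2)        # initial Gaussian packet

def psi_exact(x):
    # Shankar's closed form (9) with p0 = 0, Delta = 1, hbar = m = 1
    return (np.sqrt(np.pi) * (1 + 1j * t))**-0.5 * np.exp(-x**2 / (2 * (1 + 1j * t)))

prefac = np.sqrt(1 / (2j * np.pi * t))          # sqrt(m/(2 pi hbar i t))
for x in (-1.0, 0.0, 0.7, 2.5):
    U = prefac * np.exp(1j * (x - xp)**2 / (2 * t))   # free propagator (6)
    psi_num = np.sum(U * psi0) * dx                   # Riemann sum for (7)
    assert abs(psi_num - psi_exact(x)) < 1e-4
```

The Gaussian envelope kills the oscillatory tails, so a simple Riemann sum agrees with the closed form to several decimal places; the principal branch of the complex square root is the physically correct one here.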

We won’t go through the derivation of this result either, as it involves another messy round of Gaussian integrals. The main problem we want to solve here is to use our alternative form of the propagator in terms of the Hamiltonian:

\displaystyle  U\left(t\right)=e^{-iHt/\hbar} \ \ \ \ \ (10)

For the free particle

\displaystyle  H=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}} \ \ \ \ \ (11)

so if we expand {U\left(t\right)} as a power series, we have

\displaystyle  U\left(t\right)=\sum_{s=0}^{\infty}\frac{1}{s!}\left(\frac{i\hbar t}{2m}\right)^{s}\frac{d^{2s}}{dx^{2s}} \ \ \ \ \ (12)

To see how we can use this form to generate the time-dependent wave function, we’ll consider a special case of 8 with {p_{0}=0} and {\Delta=1}, so that

\displaystyle   \psi_{0}\left(x\right) \displaystyle  = \displaystyle  \frac{e^{-x^{2}/2}}{\pi^{1/4}}\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}}\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}x^{2n}}{2^{n}n!} \ \ \ \ \ (14)

We therefore need to apply the power series 12 to the series 14. This is best done by examining a few specific terms and then generalizing to the main result. To save writing, we’ll use the following definitions:

\displaystyle   \alpha \displaystyle  \equiv \displaystyle  \frac{i\hbar t}{m}\ \ \ \ \ (15)
\displaystyle  \psi_{\pi}\left(x\right) \displaystyle  \equiv \displaystyle  \pi^{1/4}\psi_{0}\left(x\right) \ \ \ \ \ (16)

The {s=0} term in 12 is just 1, so we’ll look at the {s=1} term and apply it to the series in 14 (we work with {\psi_{\pi}}, so the overall factor of {\pi^{-1/4}} is suppressed until the end):

\displaystyle   \frac{\alpha}{2}\frac{d^{2}}{dx^{2}}\left[\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}x^{2n}}{2^{n}n!}\right] \displaystyle  = \displaystyle  \frac{\alpha}{2}\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)\left(2n-1\right)x^{2n-2}}{2^{n}n!}\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  \frac{\alpha}{2}\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)!x^{2n-2}}{2^{n}n!\left(2n-2\right)!} \ \ \ \ \ (18)

We can simplify this by using an identity involving factorials:

\displaystyle   \frac{\left(2n\right)!}{n!} \displaystyle  = \displaystyle  \frac{\left(2n\right)\left(2n-1\right)\left(2n-2\right)\left(2n-3\right)\ldots\left(2\right)\left(1\right)}{n\left(n-1\right)\left(n-2\right)\ldots\left(2\right)\left(1\right)}\ \ \ \ \ (19)
\displaystyle  \displaystyle  = \displaystyle  \frac{2^{n}\left[n\left(n-1\right)\left(n-2\right)\ldots\left(2\right)\left(1\right)\right]\left[\left(2n-1\right)\left(2n-3\right)\ldots\left(3\right)\left(1\right)\right]}{n!}\ \ \ \ \ (20)
\displaystyle  \displaystyle  = \displaystyle  \frac{2^{n}n!\left(2n-1\right)!!}{n!}\ \ \ \ \ (21)
\displaystyle  \displaystyle  = \displaystyle  2^{n}\left(2n-1\right)!! \ \ \ \ \ (22)

The ‘double factorial’ notation is defined as

\displaystyle  \left(2n-1\right)!!\equiv\left(2n-1\right)\left(2n-3\right)\ldots\left(3\right)\left(1\right) \ \ \ \ \ (23)

That is, it’s the product of every other integer from {2n-1} down to 1. Using this result, we can write 18 as

\displaystyle  \frac{\alpha}{2}\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)!x^{2n-2}}{2^{n}n!\left(2n-2\right)!}=\alpha\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n}\left(2n-1\right)!!x^{2n-2}}{2\left(2n-2\right)!} \ \ \ \ \ (24)
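The factorial identity 22 is easy to check directly; here is a quick sketch (the helper function is ours, not from the standard library):

```python
from math import factorial

def double_factorial_odd(n):
    """(2n-1)!! = (2n-1)(2n-3)...(3)(1); equals 1 for n = 0 (empty product)."""
    result = 1
    for k in range(2 * n - 1, 0, -2):
        result *= k
    return result

# Verify (2n)!/n! = 2^n (2n-1)!! for the first several n.
for n in range(10):
    assert factorial(2 * n) // factorial(n) == 2**n * double_factorial_odd(n)
print("identity verified for n = 0..9")
```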

Now look at the {s=2} term from 12.

\displaystyle   \frac{1}{2!}\frac{\alpha^{2}}{2^{2}}\frac{d^{4}}{dx^{4}}\left[\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}x^{2n}}{2^{n}n!}\right] \displaystyle  = \displaystyle  \frac{1}{2!}\frac{\alpha^{2}}{2^{2}}\sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)\left(2n-1\right)\left(2n-2\right)\left(2n-3\right)x^{2n-4}}{2^{n}n!}\ \ \ \ \ (25)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2!}\frac{\alpha^{2}}{2^{2}}\sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)!x^{2n-4}}{2^{n}n!\left(2n-4\right)!}\ \ \ \ \ (26)
\displaystyle  \displaystyle  = \displaystyle  \frac{\alpha^{2}}{2^{2}2!}\sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}\left(2n-1\right)!!x^{2n-4}}{\left(2n-4\right)!} \ \ \ \ \ (27)

We can see the pattern for the general term for arbitrary {s} from 12 (we could prove it by induction, but hopefully the pattern is fairly obvious):

\displaystyle   \frac{1}{s!}\frac{\alpha^{s}}{2^{s}}\frac{d^{2s}}{dx^{2s}}\left[\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}x^{2n}}{2^{n}n!}\right] \displaystyle  = \displaystyle  \frac{1}{s!}\frac{\alpha^{s}}{2^{s}}\sum_{n=s}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)!x^{2n-2s}}{2^{n}n!\left(2n-2s\right)!}\ \ \ \ \ (28)
\displaystyle  \displaystyle  = \displaystyle  \frac{\alpha^{s}}{2^{s}s!}\sum_{n=s}^{\infty}\frac{\left(-1\right)^{n}\left(2n-1\right)!!x^{2n-2s}}{\left(2n-2s\right)!} \ \ \ \ \ (29)

Now we can collect terms for each power of {x}. The constant term (for {x^{0}}) is the first term from each series for each value of {s}, so we have, using the general term 29 and taking the first term where {n=s}:

\displaystyle  \sum_{s=0}^{\infty}\frac{\left(-1\right)^{s}\alpha^{s}\left(2s-1\right)!!}{2^{s}s!}=1-\frac{\alpha}{2}+\frac{\alpha^{2}}{2!}\frac{3}{2}\frac{1}{2}-\frac{\alpha^{3}}{3!}\frac{5}{2}\frac{3}{2}\frac{1}{2}+\ldots \ \ \ \ \ (30)

[The {\left(2s-1\right)!!} factor is 1 when {s=0} as we can see from the result 22.] The series on the RHS is the Taylor expansion of {\left(1+\alpha\right)^{-1/2}}, as can be verified using tables.
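We can also confirm the sum numerically. The sketch below uses a small real {\alpha} for simplicity (the physical {\alpha} is imaginary, but the series identity holds wherever the series converges, i.e. for {\left|\alpha\right|<1}):

```python
from math import factorial

def double_factorial_odd(s):
    """(2s-1)!!, with (-1)!! = 1 for s = 0."""
    result = 1
    for k in range(2 * s - 1, 0, -2):
        result *= k
    return result

# Compare sum_s (-1)^s alpha^s (2s-1)!!/(2^s s!) with (1+alpha)^(-1/2).
alpha = 0.2
total = sum((-1)**s * alpha**s * double_factorial_odd(s) / (2**s * factorial(s))
            for s in range(80))
print(total, (1 + alpha)**-0.5)   # the two values agree
```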

In general, to get the coefficient of {x^{2r}} (only even powers of {x} occur in the series), we take the term where {n=s+r} from 29 and sum over {s}. This gives

\displaystyle   \sum_{s=0}^{\infty}\frac{\alpha^{s}}{2^{s}s!}\frac{\left(-1\right)^{s+r}\left(2s+2r-1\right)!!}{\left(2r\right)!} \displaystyle  = \displaystyle  \frac{\left(-1\right)^{r}}{2^{r}r!}\sum_{s=0}^{\infty}\frac{\alpha^{s}}{2^{s}s!}\frac{\left(-1\right)^{s}\left(2s+2r-1\right)!!}{\left(2r-1\right)!!} \ \ \ \ \ (31)

where we used 22 to get the RHS. Expanding the sum gives

\displaystyle   \sum_{s=0}^{\infty}\frac{\alpha^{s}}{2^{s}s!}\frac{\left(-1\right)^{s}\left(2s+2r-1\right)!!}{\left(2r-1\right)!!} \displaystyle  = \displaystyle  1-\alpha\frac{2r+1}{2}+\frac{\alpha^{2}}{2!}\left(\frac{2r+3}{2}\right)\left(\frac{2r+1}{2}\right)-\ldots\ \ \ \ \ (32)
\displaystyle  \displaystyle  = \displaystyle  1-\alpha\left(r+\frac{1}{2}\right)+\frac{\alpha^{2}}{2!}\left(r+\frac{3}{2}\right)\left(r+\frac{1}{2}\right)-\ldots\ \ \ \ \ (33)
\displaystyle  \displaystyle  = \displaystyle  \left(1+\alpha\right)^{-r-\frac{1}{2}} \ \ \ \ \ (34)

where again we’ve used a standard series from tables (given by Shankar in the problem) to get the last line. Combining this with 31, we see that the coefficient of {x^{2r}} is

\displaystyle  \frac{\left(-1\right)^{r}}{2^{r}r!}\left(1+\alpha\right)^{-r-\frac{1}{2}} \ \ \ \ \ (35)

Thus the time-dependent wave function can be written as a single series as:

\displaystyle   \psi\left(x,t\right) \displaystyle  = \displaystyle  U\left(t\right)\psi\left(x,0\right)\ \ \ \ \ (36)
\displaystyle  \displaystyle  = \displaystyle  e^{-iHt/\hbar}\psi\left(x,0\right)\ \ \ \ \ (37)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}}\sum_{r=0}^{\infty}\frac{\left(-1\right)^{r}}{2^{r}r!}\left(1+\alpha\right)^{-r-\frac{1}{2}}x^{2r}\ \ \ \ \ (38)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}\sqrt{1+\alpha}}\sum_{r=0}^{\infty}\frac{\left(-1\right)^{r}}{2^{r}\left(1+\alpha\right)^{r}r!}x^{2r}\ \ \ \ \ (39)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}\sqrt{1+\alpha}}\exp\left[\frac{-x^{2}}{2\left(1+\alpha\right)}\right]\ \ \ \ \ (40)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}\sqrt{1+i\hbar t/m}}\exp\left[\frac{-x^{2}}{2\left(1+i\hbar t/m\right)}\right] \ \ \ \ \ (41)

This agrees with 9 when {p_{0}=0} and {\Delta=1}, though it does take a fair bit of work!
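We can also confirm the result numerically. The sketch below (with {\hbar=m=1} and hypothetical grid parameters) applies the free-particle propagator in momentum space using an FFT and compares against the closed form 41:

```python
import numpy as np

hbar = m = 1.0
t = 0.7

# A grid wide enough that the (spreading) Gaussian never reaches the edges.
N = 2048
x = np.linspace(-20, 20, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])   # wave numbers (p = hbar*k)

psi0 = np.exp(-x**2 / 2) / np.pi**0.25

# Free evolution: each momentum component picks up exp(-i*hbar*k^2*t/2m).
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

# Closed form 41 (i.e. 9 with p0 = 0, Delta = 1):
alpha = 1j * hbar * t / m
psi_exact = np.exp(-x**2 / (2 * (1 + alpha))) / (np.pi**0.25 * np.sqrt(1 + alpha))

print(np.max(np.abs(psi_t - psi_exact)))   # very small
```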

Free particle in the position basis

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 5.1, Exercise 5.1.2.

In quantum mechanics, the free particle has degenerate energy eigenstates for each energy

\displaystyle  E=\frac{p^{2}}{2m} \ \ \ \ \ (1)

where {p} is the momentum. The degeneracy arises because the momentum can be either positive (for a particle moving to the right) or negative (to the left):

\displaystyle  p=\pm\sqrt{2mE} \ \ \ \ \ (2)

Thus the most general energy eigenstate is a linear combination of the two momentum states:

\displaystyle  \left|E\right\rangle =\beta\left|p=\sqrt{2mE}\right\rangle +\gamma\left|p=-\sqrt{2mE}\right\rangle \ \ \ \ \ (3)

This bizarre feature of quantum mechanics means that a particle in such a state could be moving either left or right, and if we make a measurement of the momentum we force the particle into one or other of the two momentum states.

We obtained this solution by working in the momentum basis, but we can also find the solution in the position basis. In that basis, the momentum operator has the form

\displaystyle  P=-i\hbar\frac{d}{dx} \ \ \ \ \ (4)

The matrix elements of this operator in the position basis are

\displaystyle  \left\langle x\left|P\right|x^{\prime}\right\rangle =-i\hbar\delta^{\prime}\left(x-x^{\prime}\right) \ \ \ \ \ (5)

where {\delta^{\prime}\left(x-x^{\prime}\right)} is the derivative of the delta function with respect to {x}, not {x^{\prime}}. We can use the properties of this derivative to get a solution in the {X} basis. To be completely formal about it, the derivation of the matrix elements of {P^{2}} in the {X} basis is:

\displaystyle   \left\langle x\left|P^{2}\right|\psi\right\rangle \displaystyle  = \displaystyle  \int\int\left\langle x\left|P\right|x^{\prime}\right\rangle \left\langle x^{\prime}\left|P\right|x^{\prime\prime}\right\rangle \left\langle x^{\prime\prime}\left|\psi\right.\right\rangle dx^{\prime}dx^{\prime\prime}\ \ \ \ \ (6)
\displaystyle  \displaystyle  = \displaystyle  \int\int\left\langle x\left|P\right|x^{\prime}\right\rangle \left(-i\hbar\delta^{\prime}\left(x^{\prime}-x^{\prime\prime}\right)\right)\psi\left(x^{\prime\prime}\right)dx^{\prime}dx^{\prime\prime}\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  -i\hbar\int\left\langle x\left|P\right|x^{\prime}\right\rangle \frac{d\psi\left(x^{\prime}\right)}{dx^{\prime}}dx^{\prime}\ \ \ \ \ (8)
\displaystyle  \displaystyle  = \displaystyle  -i\hbar\int\left(-i\hbar\delta^{\prime}\left(x-x^{\prime}\right)\right)\frac{d\psi\left(x^{\prime}\right)}{dx^{\prime}}dx^{\prime}\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  -\hbar^{2}\frac{d^{2}}{dx^{2}}\psi\left(x\right) \ \ \ \ \ (10)

In this basis, the Schrödinger equation is therefore the familiar one:

\displaystyle   \frac{P^{2}}{2m}\left|\psi\right\rangle \displaystyle  = \displaystyle  E\left|\psi\right\rangle \ \ \ \ \ (11)
\displaystyle  \left\langle x\left|\frac{P^{2}}{2m}\right|\psi\right\rangle \displaystyle  = \displaystyle  E\psi\left(x\right)\ \ \ \ \ (12)
\displaystyle  -\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}\psi\left(x\right) \displaystyle  = \displaystyle  E\psi\left(x\right)\ \ \ \ \ (13)
\displaystyle  \frac{d^{2}}{dx^{2}}\psi\left(x\right) \displaystyle  = \displaystyle  -\frac{2mE}{\hbar^{2}}\psi\left(x\right) \ \ \ \ \ (14)

This has the general solution

\displaystyle  \psi\left(x\right)=\beta e^{ix\sqrt{2mE}/\hbar}+\gamma e^{-ix\sqrt{2mE}/\hbar} \ \ \ \ \ (15)

[Shankar extracts a factor of {1/\sqrt{2\pi\hbar}} but as he notes, this is arbitrary and can be absorbed into the constants {\beta} and {\gamma} as we’ve done here.]
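We can verify symbolically that 15 satisfies 14; a quick sympy sketch:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
m, E, hbar = sp.symbols('m E hbar', positive=True)
beta, gamma = sp.symbols('beta gamma')

# General solution 15
psi = (beta * sp.exp(sp.I * x * sp.sqrt(2 * m * E) / hbar)
       + gamma * sp.exp(-sp.I * x * sp.sqrt(2 * m * E) / hbar))

# Check that psi'' = -(2mE/hbar^2) psi, i.e. equation 14
residual = sp.simplify(sp.diff(psi, x, 2) + 2 * m * E / hbar**2 * psi)
print(residual)   # 0
```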

In this derivation we’ve implicitly assumed that {E>0}, since with no potential a free particle can’t really have a negative energy. However, if you follow through the derivation, you’ll see that it works even if {E<0}. In that case, we’d get

\displaystyle  \psi\left(x\right)=\beta e^{-x\sqrt{2m\left|E\right|}/\hbar}+\gamma e^{x\sqrt{2m\left|E\right|}/\hbar} \ \ \ \ \ (16)

That is, the exponents in both terms are now real instead of imaginary. The problem with this is that the first term blows up for {x\rightarrow-\infty} while the second blows up for {x\rightarrow+\infty}. Thus this function is not normalizable, even to a delta function (as was the case when {E>0}), so functions such as these when {E<0} are not in the Hilbert space.

Free particle revisited: solution in terms of a propagator

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 5.1, Exercise 5.1.1.

Having reviewed the background mathematics and postulates of quantum mechanics as set out by Shankar, we can now revisit some of the classic problems in non-relativistic quantum mechanics using Shankar’s approach, as opposed to that of Griffiths, which we’ve already studied.

The first problem we’ll look at is that of the free particle. Following the fourth postulate, we write down the classical Hamiltonian for a free particle, which is

\displaystyle  H=\frac{p^{2}}{2m} \ \ \ \ \ (1)

where {p} is the momentum (we’re working in one dimension) and {m} is the mass. To get the quantum version, we replace {p} by the momentum operator {P} and insert the result into the Schrödinger equation:

\displaystyle   i\hbar\left|\dot{\psi}\right\rangle \displaystyle  = \displaystyle  H\left|\psi\right\rangle \ \ \ \ \ (2)
\displaystyle  \displaystyle  = \displaystyle  \frac{P^{2}}{2m}\left|\psi\right\rangle \ \ \ \ \ (3)

Since {H} is time-independent, the solution can be written using a propagator:

\displaystyle  \left|\psi\left(t\right)\right\rangle =U\left(t\right)\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (4)

To find {U}, we need to solve the eigenvalue equation for the stationary states

\displaystyle  \frac{P^{2}}{2m}\left|E\right\rangle =E\left|E\right\rangle \ \ \ \ \ (5)

where {E} is an eigenvalue representing the allowable energies. Since the Hamiltonian is {P^{2}/2m}, and an eigenstate of {P} with eigenvalue {p} is also an eigenstate of {P^{2}} with eigenvalue {p^{2}}, we can write this equation in terms of the momentum eigenstates {\left|p\right\rangle }:

\displaystyle  \frac{P^{2}}{2m}\left|p\right\rangle =E\left|p\right\rangle \ \ \ \ \ (6)

Using {P^{2}\left|p\right\rangle =p^{2}\left|p\right\rangle } this gives

\displaystyle  \left(\frac{p^{2}}{2m}-E\right)\left|p\right\rangle =0 \ \ \ \ \ (7)

Assuming that {\left|p\right\rangle } is not a null vector gives the relation between momentum and energy:

\displaystyle  p=\pm\sqrt{2mE} \ \ \ \ \ (8)

Thus each allowable energy {E} has two possible momenta. Since specifying the momentum also specifies the energy, and each energy state is two-fold degenerate, we can eliminate the ambiguity by labelling the states by momentum alone. Therefore the propagator can be written as

\displaystyle  U\left(t\right)=\int_{-\infty}^{\infty}e^{-ip^{2}t/2m\hbar}\left|p\right\rangle \left\langle p\right|dp \ \ \ \ \ (9)

We can convert this to an integral over the energy by using 8 to change variables, and by splitting the integral into two parts. For {p>0} we have

\displaystyle  dp=\sqrt{\frac{m}{2E}}dE \ \ \ \ \ (10)

and for {p<0} we have

\displaystyle  dp=-\sqrt{\frac{m}{2E}}dE \ \ \ \ \ (11)

Therefore, we get

\displaystyle   U\left(t\right) \displaystyle  = \displaystyle  \int_{0}^{\infty}e^{-iEt/\hbar}\left|E,+\right\rangle \left\langle E,+\right|\sqrt{\frac{m}{2E}}dE+\int_{\infty}^{0}e^{-iEt/\hbar}\left|E,-\right\rangle \left\langle E,-\right|\left(-\sqrt{\frac{m}{2E}}\right)dE\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  \int_{0}^{\infty}e^{-iEt/\hbar}\left|E,+\right\rangle \left\langle E,+\right|\sqrt{\frac{m}{2E}}dE+\int_{0}^{\infty}e^{-iEt/\hbar}\left|E,-\right\rangle \left\langle E,-\right|\sqrt{\frac{m}{2E}}dE\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  \sum_{\alpha=\pm}\int_{0}^{\infty}\frac{m}{\sqrt{2mE}}e^{-iEt/\hbar}\left|E,\alpha\right\rangle \left\langle E,\alpha\right|dE \ \ \ \ \ (14)

Here, {\left|E,+\right\rangle } is the state with energy {E} and momentum {p=+\sqrt{2mE}}, and similarly for {\left|E,-\right\rangle }. In the first line, the first integral is for {p>0} and corresponds to the {\int_{0}^{\infty}} part of 9. The second integral is for {p<0} and corresponds to the {\int_{-\infty}^{0}} part of 9, which is why the limits on the second integral have {\infty} at the bottom and 0 at the top. Reversing the limits of integration cancels the minus sign in {-\sqrt{\frac{m}{2E}}}, which allows us to add the two integrals together to get the final answer.
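The change of variables in 10 and 11 can be checked symbolically; a quick sympy sketch:

```python
import sympy as sp

m, E = sp.symbols('m E', positive=True)

# p > 0 branch: p = +sqrt(2mE)  =>  dp/dE = +sqrt(m/2E)
p_plus = sp.sqrt(2 * m * E)
assert sp.simplify(sp.diff(p_plus, E) - sp.sqrt(m / (2 * E))) == 0

# p < 0 branch: p = -sqrt(2mE)  =>  dp/dE = -sqrt(m/2E)
p_minus = -sp.sqrt(2 * m * E)
assert sp.simplify(sp.diff(p_minus, E) + sp.sqrt(m / (2 * E))) == 0

print("dp/dE checked for both branches")
```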

Time-dependent propagators

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 4.3.

The fourth postulate of non-relativistic quantum mechanics concerns how states evolve with time. The postulate simply states that in non-relativistic quantum mechanics, a state satisfies the Schrödinger equation:

\displaystyle i\hbar\frac{\partial}{\partial t}\left|\psi\right\rangle =H\left|\psi\right\rangle \ \ \ \ \ (1)

 

where {H} is the Hamiltonian, which is obtained from the classical Hamiltonian by means of the other postulates of quantum mechanics, namely that we replace all references to the position {x} by the quantum position operator {X} with matrix elements (in the {x} basis) of

\displaystyle \left\langle x^{\prime}\left|X\right|x\right\rangle =\delta\left(x-x^{\prime}\right) \ \ \ \ \ (2)

and all references to classical momentum {p} by the momentum operator {P} with matrix elements

\displaystyle \left\langle x^{\prime}\left|P\right|x\right\rangle =-i\hbar\delta^{\prime}\left(x-x^{\prime}\right) \ \ \ \ \ (3)

In our earlier examination of the Schrödinger equation, we assumed that the Hamiltonian is independent of time, which allowed us to obtain an explicit expression for the propagator

\displaystyle U\left(t\right)=e^{-iHt/\hbar} \ \ \ \ \ (4)

 

The propagator is applied to the initial state {\left|\psi\left(0\right)\right\rangle } to obtain the state at any future time {t}:

\displaystyle \left|\psi\left(t\right)\right\rangle =U\left(t\right)\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (5)

What happens if {H=H\left(t\right)}, that is, there is an explicit time dependence in the Hamiltonian? The approach taken by Shankar is a bit hand-wavy, but goes as follows. We divide the time interval {\left[0,t\right]} into {N} small increments {\Delta=t/N}. To first order in {\Delta}, we can integrate 1 by taking the first order term in a Taylor expansion:

\displaystyle \left|\psi\left(\Delta\right)\right\rangle \displaystyle = \displaystyle \left|\psi\left(0\right)\right\rangle +\Delta\left.\frac{d}{dt}\left|\psi\left(t\right)\right\rangle \right|_{t=0}+\mathcal{O}\left(\Delta^{2}\right)\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \left|\psi\left(0\right)\right\rangle -\frac{i\Delta}{\hbar}H\left(0\right)\left|\psi\left(0\right)\right\rangle +\mathcal{O}\left(\Delta^{2}\right)\ \ \ \ \ (7)
\displaystyle \displaystyle = \displaystyle \left(1-\frac{i\Delta}{\hbar}H\left(0\right)\right)\left|\psi\left(0\right)\right\rangle +\mathcal{O}\left(\Delta^{2}\right) \ \ \ \ \ (8)

So far, we’ve been fairly precise, but now the hand-waving starts. We note that the term multiplying {\left|\psi\left(0\right)\right\rangle } consists of the first two terms in the expansion of {e^{-i\Delta H\left(0\right)/\hbar}}, so we state that to evolve from {t=0} to {t=\Delta}, we multiply the initial state {\left|\psi\left(0\right)\right\rangle } by {e^{-i\Delta H\left(0\right)/\hbar}}. That is, we propose that

\displaystyle \left|\psi\left(\Delta\right)\right\rangle =e^{-i\Delta H\left(0\right)/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (9)

[The reason this is hand-waving is that there are many functions whose first order Taylor expansion matches {\left(1-\frac{i\Delta}{\hbar}H\left(0\right)\right)}, so it seems arbitrary to choose the exponential. I imagine the motivation is that in the time-independent case, the result reduces to 4.]

In any case, if we accept this, then we can iterate the process to evolve to later times. To get to {t=2\Delta}, we have

\displaystyle \left|\psi\left(2\Delta\right)\right\rangle \displaystyle = \displaystyle e^{-i\Delta H\left(\Delta\right)/\hbar}\left|\psi\left(\Delta\right)\right\rangle \ \ \ \ \ (10)
\displaystyle \displaystyle = \displaystyle e^{-i\Delta H\left(\Delta\right)/\hbar}e^{-i\Delta H\left(0\right)/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (11)

The snag here is that we can’t, in general, combine the two exponentials into a single exponential by adding the exponents. This is because {H\left(\Delta\right)} and {H\left(0\right)} will not, in general, commute, as the Baker-Campbell-Hausdorff formula tells us. For example, the time dependence of {H\left(t\right)} might be such that at {t=0}, {H\left(0\right)} is a function of the position operator {X} only, while at {t=\Delta}, {H\left(\Delta\right)} becomes a function of the momentum operator {P} only. Since {X} and {P} don’t commute, {\left[H\left(0\right),H\left(\Delta\right)\right]\ne0}, so {e^{-i\Delta H\left(\Delta\right)/\hbar}e^{-i\Delta H\left(0\right)/\hbar}\ne e^{-i\Delta\left[H\left(0\right)+H\left(\Delta\right)\right]/\hbar}}.
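To see this concretely, here is a small numerical illustration (a hypothetical two-level example with {\hbar=1}, using the Pauli matrices {\sigma_{x}} and {\sigma_{z}} as the two non-commuting Hamiltonians):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example: H(0) proportional to sigma_x, H(Delta) to sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
delta = 0.5          # time step, with hbar = 1

U_step = expm(-1j * delta * sz) @ expm(-1j * delta * sx)   # exponentials multiplied
U_naive = expm(-1j * delta * (sx + sz))                    # exponents added

print(np.max(np.abs(U_step - U_naive)))   # nonzero, since [sigma_x, sigma_z] != 0
```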

This means that the best we can usually do is to write

\displaystyle \left|\psi\left(t\right)\right\rangle \displaystyle = \displaystyle \left|\psi\left(N\Delta\right)\right\rangle \ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle \prod_{n=0}^{N-1}e^{-i\Delta H\left(n\Delta\right)/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (13)

In the limit {N\rightarrow\infty}, the propagator then becomes

\displaystyle U\left(t\right)=\lim_{N\rightarrow\infty}\prod_{n=0}^{N-1}e^{-i\Delta H\left(n\Delta\right)/\hbar} \ \ \ \ \ (14)

This limit is known as a time-ordered integral and is written as

\displaystyle T\left\{ \exp\left[-\frac{i}{\hbar}\int_{0}^{t}H\left(t^{\prime}\right)dt^{\prime}\right]\right\} \equiv\lim_{N\rightarrow\infty}\prod_{n=0}^{N-1}e^{-i\Delta H\left(n\Delta\right)/\hbar} \ \ \ \ \ (15)
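As a sketch of how this limit behaves (again with a hypothetical two-level Hamiltonian and {\hbar=1}), we can build the ordered product of short-time exponentials for increasing {N} and watch it converge:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    """A hypothetical time-dependent Hamiltonian (hbar = 1)."""
    return np.cos(t) * sx + np.sin(t) * sz

def U_N(t, N):
    """Ordered product of N short-time exponentials; later times on the left."""
    delta = t / N
    U = np.eye(2, dtype=complex)
    for n in range(N):
        U = expm(-1j * delta * H(n * delta)) @ U
    return U

t = 1.0
ref = U_N(t, 1600)                         # stand-in for the N -> infinity limit
err_100 = np.max(np.abs(U_N(t, 100) - ref))
err_400 = np.max(np.abs(U_N(t, 400) - ref))
print(err_100, err_400)                    # error shrinks as N grows
```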

One final note about the propagators. Since each factor in the product is the exponential of {-i} times a Hermitian operator (multiplied by the real number {\Delta/\hbar}), each factor is a unitary operator. Further, since the product of two unitary operators is still unitary, the propagator in the time-dependent case is a unitary operator.

We’ve defined a propagator as a unitary operator that carries a state from {t=0} to some later time {t}, but we can generalize the notation so that {U\left(t_{2},t_{1}\right)} is a propagator that carries a state from {t=t_{1}} to {t=t_{2}}, that is

\displaystyle \left|\psi\left(t_{2}\right)\right\rangle =U\left(t_{2},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (16)

We can chain propagators together to get

\displaystyle \left|\psi\left(t_{3}\right)\right\rangle \displaystyle = \displaystyle U\left(t_{3},t_{2}\right)\left|\psi\left(t_{2}\right)\right\rangle \ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle U\left(t_{3},t_{2}\right)U\left(t_{2},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (18)
\displaystyle \displaystyle = \displaystyle U\left(t_{3},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (19)

Therefore

\displaystyle U\left(t_{3},t_{1}\right)=U\left(t_{3},t_{2}\right)U\left(t_{2},t_{1}\right) \ \ \ \ \ (20)

 

Since the Hermitian conjugate of a unitary operator is its inverse, we have

\displaystyle U^{\dagger}\left(t_{2},t_{1}\right)=U^{-1}\left(t_{2},t_{1}\right) \ \ \ \ \ (21)

We can combine this with 20 to get

\displaystyle \left|\psi\left(t_{1}\right)\right\rangle \displaystyle = \displaystyle I\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (22)
\displaystyle \displaystyle = \displaystyle U^{-1}\left(t_{2},t_{1}\right)U\left(t_{2},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (23)
\displaystyle \displaystyle = \displaystyle U^{\dagger}\left(t_{2},t_{1}\right)U\left(t_{2},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (24)

Therefore

\displaystyle U^{\dagger}\left(t_{2},t_{1}\right)U\left(t_{2},t_{1}\right) \displaystyle = \displaystyle U\left(t_{1},t_{1}\right)=I\ \ \ \ \ (25)
\displaystyle U^{\dagger}\left(t_{2},t_{1}\right) \displaystyle = \displaystyle U\left(t_{1},t_{2}\right) \ \ \ \ \ (26)

That is, the Hermitian conjugate (or inverse) of a propagator carries a state ‘backwards in time’ to its starting point.
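For a time-independent {H}, where {U\left(t_{2},t_{1}\right)=e^{-iH\left(t_{2}-t_{1}\right)/\hbar}}, both 20 and 26 are easy to check numerically; a sketch with {\hbar=1} and a hypothetical Hermitian matrix:

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical Hermitian Hamiltonian, with hbar = 1.
H = np.array([[1.0, 0.3], [0.3, -0.5]], dtype=complex)

def U(t2, t1):
    """Propagator for time-independent H: U(t2, t1) = exp(-i H (t2 - t1))."""
    return expm(-1j * H * (t2 - t1))

t1, t2, t3 = 0.2, 0.9, 1.7

# Composition rule 20: U(t3, t1) = U(t3, t2) U(t2, t1)
assert np.allclose(U(t3, t1), U(t3, t2) @ U(t2, t1))

# Inverse rule 26: U^dagger(t2, t1) = U(t1, t2), i.e. evolution backwards in time
assert np.allclose(U(t2, t1).conj().T, U(t1, t2))

print("composition and inverse rules verified")
```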