Harmonic oscillator – series solution

Required math: calculus

Required physics: harmonic oscillator

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Sec 2.3.

The time-independent Schrödinger equation for the harmonic oscillator potential is

\displaystyle  -\frac{\hbar^{2}}{2m}\frac{d^{2}\psi}{dx^{2}}+\frac{1}{2}kx^{2}\psi=E\psi \ \ \ \ \ (1)

To solve this equation, we split the wave function {\psi} into two factors: the first factor is the asymptotic behaviour for large {x}, and the second is a function {f} which we have yet to find. To simplify the notation, we introduce two auxiliary variables.

The independent variable {y} is related to the spatial variable {x} by

\displaystyle  y\equiv\sqrt{\frac{m\omega}{\hbar}}x \ \ \ \ \ (2)

and the parameter {\epsilon} is related to the energy {E} by

\displaystyle  \epsilon\equiv\frac{2E}{\hbar\omega} \ \ \ \ \ (3)

The parameter {\omega\equiv\sqrt{k/m}} is the angular frequency of the oscillator.

After analyzing the asymptotic behaviour of the Schrödinger equation for the harmonic oscillator, we write the wave function in the form

\displaystyle  \psi(y)=e^{-y^{2}/2}f(y) \ \ \ \ \ (4)

Substituting this back into the Schrödinger equation gives us a differential equation for {f(y)}:

\displaystyle  \frac{d^{2}f}{dy^{2}}-2y\frac{df}{dy}+(\epsilon-1)f=0 \ \ \ \ \ (5)
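In terms of {y} and {\epsilon}, the Schrödinger equation 1 reads {-d^{2}\psi/dy^{2}+y^{2}\psi=\epsilon\psi}, and substituting 4 into it produces 5. If you want to verify that step without grinding through the algebra by hand, here is a quick symbolic check (a sketch using Python's sympy; any computer algebra system would do):

```python
import sympy as sp

y, eps = sp.symbols('y epsilon')
f = sp.Function('f')

# Dimensionless Schrodinger equation: -psi'' + y^2 psi = eps psi
psi = sp.exp(-y**2 / 2) * f(y)
lhs = -sp.diff(psi, y, 2) + y**2 * psi - eps * psi

# Dividing out the Gaussian should leave -(f'' - 2y f' + (eps - 1) f)
residual = sp.simplify(lhs * sp.exp(y**2 / 2))
target = -(sp.diff(f(y), y, 2) - 2*y*sp.diff(f(y), y) + (eps - 1)*f(y))
assert sp.simplify(residual - target) == 0  # equation 5 is confirmed
```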

If we hurl this equation into mathematical software like Maple, it tells us that the solution involves two forms of Kummer functions, otherwise known as confluent hypergeometric functions of the first and second kinds. Apart from being able to impress your friends in the pub, these terms don’t really help us learn much about the physics. For that we need to solve the differential equation using a power series.

The idea is to propose a solution of the form

\displaystyle  f(y)=\sum_{j=0}^{\infty}a_{j}y^{j} \ \ \ \ \ (6)

The theory behind Taylor series in elementary calculus assures us that for any ‘reasonable’ function (that is, pretty well any function found in physics), it is possible to write the function as a power series, so we should be able to find such a solution. At this stage, we can’t guarantee that such a solution will tell us much, but it’s worth a try.

To use the series, we need to calculate its first two derivatives:

\displaystyle   \frac{df}{dy} \displaystyle  = \displaystyle  \sum_{j=0}^{\infty}ja_{j}y^{j-1}\ \ \ \ \ (7)
\displaystyle  \frac{d^{2}f}{dy^{2}} \displaystyle  = \displaystyle  \sum_{j=0}^{\infty}j(j-1)a_{j}y^{j-2}\ \ \ \ \ (8)
\displaystyle  \displaystyle  = \displaystyle  \sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}y^{j} \ \ \ \ \ (9)

The fancy footwork in the last line just relabels the summation index to make it more convenient for the next step, as we’ll see. To convince yourself it is the same series as the line above it, just write out the first 4 or 5 terms in the series and you’ll see it is the same. Notice also that in the first derivative series the first term is zero due to the factor of {j}, so we don’t actually get a term with {y^{-1}} in it.
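If writing out terms by hand feels tedious, the same comparison can be done symbolically. This sketch (again using sympy; the cutoff {N} is an arbitrary choice for the partial sums) confirms the relabelled series matches the original one term by term:

```python
import sympy as sp

y = sp.symbols('y')
a = sp.IndexedBase('a')
N = 8  # arbitrary cutoff for the partial sums

# Original form: sum over j of j(j-1) a_j y^{j-2}
# (the j=0 and j=1 terms vanish because of the j(j-1) factor)
original = sum(j*(j - 1)*a[j]*y**(j - 2) for j in range(N + 2))

# Relabelled form: sum over j of (j+2)(j+1) a_{j+2} y^j
shifted = sum((j + 2)*(j + 1)*a[j + 2]*y**j for j in range(N))

assert sp.expand(original - shifted) == 0  # same series
```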

We now want to substitute these derivatives back into the differential equation 5 we want to solve. The reason we juggled the summation index in the second derivative is that we want the series in all three terms in the equation to contain {y^{j}} terms rather than {y} to some other power. This makes it easier to group together the terms with equal powers of {y}.

Doing the substitution, we get:

\displaystyle   \sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}y^{j}-2\sum_{j=0}^{\infty}ja_{j}y^{j}+(\epsilon-1)\sum_{j=0}^{\infty}a_{j}y^{j} \displaystyle  = \displaystyle  0\ \ \ \ \ (10)
\displaystyle  \sum_{j=0}^{\infty}[(j+2)(j+1)a_{j+2}-2ja_{j}+(\epsilon-1)a_{j}]y^{j} \displaystyle  = \displaystyle  0 \ \ \ \ \ (11)

From the mathematics of power series expansions, it is known that any given function’s expansion is unique (the proof takes us too far into pure mathematics so we’ll leave it for now). That means that, having decided on a value for {\epsilon}, there is one and only one sequence of {a_{j}}s that defines the function {f(y)}. So, since {y} can be any value, the only way the above sum can be zero for all values of {y} is if the coefficient of each power of {y} vanishes separately. That is,

\displaystyle  (j+2)(j+1)a_{j+2}-2ja_{j}+(\epsilon-1)a_{j}=0 \ \ \ \ \ (12)

This, in turn, gives a recursion relation for the coefficients:

\displaystyle  a_{j+2}=\frac{2j+1-\epsilon}{(j+1)(j+2)}a_{j} \ \ \ \ \ (13)

Since we are solving a second order differential equation, we expect two arbitrary constants, to be determined by initial conditions and normalization. Because the recursion formula relates every second coefficient, we need to specify both {a_{0}} and {a_{1}} to generate all the coefficients. If we start off with {a_{0}} we get all the even coefficients:

\displaystyle   a_{2} \displaystyle  = \displaystyle  \frac{1-\epsilon}{2}a_{0}\ \ \ \ \ (14)
\displaystyle  a_{4} \displaystyle  = \displaystyle  \frac{5-\epsilon}{12}a_{2}=\frac{(5-\epsilon)(1-\epsilon)}{24}a_{0}\ \ \ \ \ (15)
\displaystyle  \displaystyle  \ldots

There is a similar sequence of calculations for the odd coefficients starting with {a_{1}}.
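The recursion 13 is easy to iterate mechanically. A short sketch (sympy keeps {\epsilon} symbolic so the results can be compared directly with equations 14 and 15):

```python
import sympy as sp

eps = sp.symbols('epsilon')

def coefficients(a0, a1, n_max):
    """a_0 .. a_{n_max} from a_{j+2} = (2j + 1 - eps) a_j / ((j+1)(j+2))."""
    a = [sp.sympify(a0), sp.sympify(a1)]
    for j in range(n_max - 1):
        a.append((2*j + 1 - eps) * a[j] / ((j + 1)*(j + 2)))
    return a

# Even branch: a_0 = 1, a_1 = 0
a = coefficients(1, 0, 4)

# These reproduce equations 14 and 15
assert sp.simplify(a[2] - (1 - eps)/2) == 0
assert sp.simplify(a[4] - (5 - eps)*(1 - eps)/24) == 0
```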

That’s about as far as we can go without using some external information to put some conditions on the series. As usual, we require the solution (the original solution, that is, {\psi}) to be normalizable. We now know that this solution has the form

\displaystyle   \psi(y) \displaystyle  = \displaystyle  e^{-y^{2}/2}f(y)\ \ \ \ \ (16)
\displaystyle  \displaystyle  = \displaystyle  e^{-y^{2}/2}\sum_{j=0}^{\infty}a_{j}y^{j} \ \ \ \ \ (17)

So in order to be normalizable, the series will have to converge to some function that doesn’t grow as fast as {e^{y^{2}/2}}. Otherwise the series factor will overwhelm the negative exponential, and the overall wave function will not tend to zero as {y} goes to infinity.

This seems like a difficult condition to check, but let’s have a look at the asymptotic behaviour (for large {j}) of the recursion formula 13.

\displaystyle   a_{j+2} \displaystyle  = \displaystyle  \frac{2j+1-\epsilon}{(j+1)(j+2)}a_{j}\ \ \ \ \ (18)
\displaystyle  \displaystyle  = \displaystyle  \frac{2j+1-\epsilon}{j^{2}+3j+2}a_{j}\ \ \ \ \ (19)
\displaystyle  \displaystyle  \sim \displaystyle  \frac{2}{j}a_{j} \ \ \ \ \ (20)

Thus, for large {j}, the ratio of two successive even terms (or two successive odd terms) in the series behaves as

\displaystyle  \frac{a_{j+2}y^{j+2}}{a_{j}y^{j}}\sim\frac{2}{j}y^{2} \ \ \ \ \ (21)

How does this compare with the series for an exponential function?

The Taylor series for {e^{x^{2}}} is

\displaystyle   e^{x^{2}} \displaystyle  = \displaystyle  \sum_{j=0}^{\infty}\frac{x^{2j}}{j!}\ \ \ \ \ (22)
\displaystyle  \displaystyle  = \displaystyle  \sum_{j\; even}^{\infty}\frac{x^{j}}{(j/2)!} \ \ \ \ \ (23)

The ratio of two successive terms from this series is

\displaystyle  \frac{x^{j+2}/((j+2)/2)!}{x^{j}/(j/2)!}=\frac{2}{j+2}x^{2} \ \ \ \ \ (24)

which for large {j} is essentially the same as relation 21. And this is for only half (either even or odd terms) of the series; the other half will contribute another function of roughly equal size. Thus it looks like the series’ asymptotic behaviour is that of {e^{y^{2}}} so the overall behaviour of the wave function is {e^{y^{2}}e^{-y^{2}/2}=e^{y^{2}/2}}, which diverges and is therefore not normalizable.
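A numerical spot check of the asymptotic estimate 20 (the particular values of {\epsilon} and {j} below are arbitrary choices for illustration):

```python
eps = 0.5  # arbitrary non-terminating value of epsilon
j = 500    # a "large" index

# Exact one-step ratio from the recursion, vs the asymptotic estimate 2/j
exact = (2*j + 1 - eps) / ((j + 1)*(j + 2))
approx = 2 / j

# The two agree to better than 1% at this j
relative_error = abs(exact - approx) / approx
assert relative_error < 0.01
```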

This looks like a serious problem, but there is in fact a way out: if the series terminates after a finite number of terms, then the behaviour is that of a polynomial rather than an exponential, and multiplying any polynomial by {e^{-y^{2}/2}} will always give a normalizable function.

So if we can arrange things so that the recursion formula 13 gives {a_{j+2}=0} for some {j}, then clearly all further terms will be zero. The condition to be satisfied is therefore

\displaystyle   2j+1-\epsilon \displaystyle  = \displaystyle  0\ \ \ \ \ (25)
\displaystyle  \epsilon \displaystyle  = \displaystyle  2j+1\ \ \ \ \ (26)
\displaystyle  E \displaystyle  = \displaystyle  \frac{1}{2}\hbar\omega(2j+1) \ \ \ \ \ (27)

where {j} is some integer 0, 1, 2, 3, … Note, however, that each choice of {j}, that is, each choice of where the series terminates, gives a different value for the energy. The lowest possible energy for the harmonic oscillator occurs for {j=0} and is {E_{0}=\frac{1}{2}\hbar\omega}; the energies increase at regular intervals of {\hbar\omega}, so the energy levels are all equally spaced.

It is more usual to give the energy formula as

\displaystyle  E_{n}=\left(n+\frac{1}{2}\right)\hbar\omega \ \ \ \ \ (28)

with {n=0,1,2,3,\ldots}
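In code, the spectrum 28 and its equal spacing look like this (working in natural units {\hbar=\omega=1}, an arbitrary choice for illustration):

```python
hbar = 1.0
omega = 1.0  # natural units, chosen for illustration

def energy(n):
    """E_n = (n + 1/2) * hbar * omega for the harmonic oscillator."""
    return (n + 0.5) * hbar * omega

levels = [energy(n) for n in range(5)]
print(levels)  # [0.5, 1.5, 2.5, 3.5, 4.5]

# Adjacent levels differ by exactly hbar * omega
spacings = [levels[i + 1] - levels[i] for i in range(len(levels) - 1)]
assert all(abs(s - hbar*omega) < 1e-12 for s in spacings)
```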

One note of caution here. Once we have chosen an energy level, this fixes the value of {j} at which the series terminates. If {j} is even, then the odd series must be zero right from the start, and vice versa. There is no way of getting both the even and odd series to terminate at some intermediate values in the same solution. So if we choose an even value of {j} we must have {a_{1}=0} to remove all the odd terms from the sum, and conversely if we choose {j} to be odd, we must have {a_{0}=0} to remove all the even terms.

The stationary states for the harmonic oscillator are therefore products of polynomials and the exponential factor. The polynomials turn out to be well-studied in mathematics and are known as Hermite polynomials. We will explore their properties in another post.
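We can check this identification directly: truncating the recursion at {\epsilon=2n+1} and comparing the resulting polynomial with NumPy's physicists' Hermite polynomials (via numpy.polynomial.hermite.herm2poly) shows the two agree up to an overall normalization. A sketch:

```python
import numpy as np
from numpy.polynomial.hermite import herm2poly

def series_poly(n):
    """Power-series coefficients of f(y) when epsilon = 2n + 1.

    Starts the even or odd branch (matching the parity of n) with
    leading coefficient 1 and iterates the recursion until it
    terminates at y^n.
    """
    eps = 2*n + 1
    a = [0.0] * (n + 1)
    a[n % 2] = 1.0
    for j in range(n - 1):
        a[j + 2] = (2*j + 1 - eps) * a[j] / ((j + 1)*(j + 2))
    return np.array(a)

for n in range(6):
    f = series_poly(n)
    H = herm2poly([0.0]*n + [1.0])  # H_n in the ordinary power basis
    nonzero = np.abs(H) > 1e-12
    ratios = H[nonzero] / f[nonzero]
    # All nonzero coefficients are in a constant ratio: the truncated
    # series is H_n up to normalization
    assert np.allclose(ratios, ratios[0])
```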

To summarize the behaviour of the quantum harmonic oscillator, we’ll list a few points.

  1. The harmonic oscillator potential is parabolic, and goes to infinity at infinite distance, so all states are bound states – there is no energy a particle can have that will allow it to be free.
  2. The energies are equally spaced, with spacing {\hbar\omega}.
  3. The lowest energy is the ground state {E_{0}=\hbar\omega/2}, so a particle always has positive, non-zero energy.
  4. The stationary states consist of either an even or an odd polynomial function multiplied by {e^{-y^{2}/2}=e^{-m\omega x^{2}/2\hbar}} which is always even. Thus a stationary state is either an even or an odd function of {y} (and hence of {x}).
  5. The polynomial functions in the stationary states are Hermite polynomials.