Required math: calculus
Required physics: Schrödinger equation in 3-d
Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Sec 4.2.1.
Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 13, Exercises 13.1.1 – 13.1.2.
[This page follows the derivation given in Griffiths. The discussion in Shankar’s chapter 13 is similar, but he uses Gaussian units, so the answer looks different. However, I can’t be bothered going through the whole derivation again with different units, since the steps are essentially the same.]
We saw in an earlier post that the radial part of the three-dimensional Schrödinger equation for the hydrogen atom can be reduced to the differential equation

$$\rho\frac{d^{2}v}{d\rho^{2}}+2(\ell+1-\rho)\frac{dv}{d\rho}+\left[\rho_{0}-2(\ell+1)\right]v=0\tag{1}$$

where $\rho\equiv\kappa r$ with $\kappa\equiv\sqrt{-2mE}/\hbar$, $\rho_{0}\equiv\frac{me^{2}}{2\pi\epsilon_{0}\hbar^{2}\kappa}$, and $v(\rho)$ is defined by $u(\rho)=\rho^{\ell+1}e^{-\rho}v(\rho)$, where $u(r)=rR(r)$ and $R(r)$ is the radial part of the three-dimensional wave function.
We propose a power series solution

$$v(\rho)=\sum_{j=0}^{\infty}c_{j}\rho^{j}$$

and attempt to determine the coefficients $c_{j}$. The two derivatives needed in the equation are

$$\frac{dv}{d\rho}=\sum_{j=0}^{\infty}jc_{j}\rho^{j-1},\qquad\frac{d^{2}v}{d\rho^{2}}=\sum_{j=0}^{\infty}j(j-1)c_{j}\rho^{j-2}$$
We now plug these back into 1 and fiddle with the summation indices so that every term in every sum is a multiple of $\rho^{j}$:

$$\sum_{j=0}^{\infty}j(j-1)c_{j}\rho^{j-1}+2(\ell+1)\sum_{j=0}^{\infty}jc_{j}\rho^{j-1}-2\sum_{j=0}^{\infty}jc_{j}\rho^{j}+\left[\rho_{0}-2(\ell+1)\right]\sum_{j=0}^{\infty}c_{j}\rho^{j}=0$$
The two terms containing $\rho^{j-1}$ can be converted to sums over $\rho^{j}$ by shifting the summation index from $j$ to $j+1$. Note that the term with $j=0$ in the first two sums is zero because of the $j$ factor, so we can start those sums at $j=1$; after the shift, the new index runs from $j=0$ again. This means that the sum becomes

$$\sum_{j=0}^{\infty}(j+1)jc_{j+1}\rho^{j}+2(\ell+1)\sum_{j=0}^{\infty}(j+1)c_{j+1}\rho^{j}-2\sum_{j=0}^{\infty}jc_{j}\rho^{j}+\left[\rho_{0}-2(\ell+1)\right]\sum_{j=0}^{\infty}c_{j}\rho^{j}=0$$

Since $\rho^{j}$ is now a common factor in all sums, we can write the overall sum as

$$\sum_{j=0}^{\infty}\left\{(j+1)(j+2\ell+2)c_{j+1}+\left[\rho_{0}-2(j+\ell+1)\right]c_{j}\right\}\rho^{j}=0$$
Because a power series expansion is unique (a standard mathematical theorem), the only way this sum can be valid for all values of $\rho$ is if all the coefficients are zero. That is,

$$(j+1)(j+2\ell+2)c_{j+1}+\left[\rho_{0}-2(j+\ell+1)\right]c_{j}=0$$
This can be rewritten as a recursion relation:

$$c_{j+1}=\frac{2(j+\ell+1)-\rho_{0}}{(j+1)(j+2\ell+2)}c_{j}\tag{14}$$
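As a quick sanity check on this recursion relation, we can iterate it with exact rational arithmetic for a case where $\rho_{0}=2n$ (anticipating the quantization condition derived below). This is my own sketch, not code from either reference, and the function name is made up; the point is that the series terminates at $j_{\max}=n-\ell-1$:

```python
from fractions import Fraction

def series_coefficients(n, l, jmax=8):
    """Iterate c_{j+1} = [2(j+l+1) - rho0] / [(j+1)(j+2l+2)] * c_j
    with rho0 = 2n and c_0 = 1, using exact rational arithmetic."""
    rho0 = 2 * n
    c = [Fraction(1)]
    for j in range(jmax):
        c.append(Fraction(2 * (j + l + 1) - rho0,
                          (j + 1) * (j + 2 * l + 2)) * c[j])
    return c

# For n = 3, l = 1 the series terminates after j_max = n - l - 1 = 1,
# so c_2 and all later coefficients vanish:
print(series_coefficients(3, 1)[:4])
```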
[This equation is essentially the same as Shankar’s 13.1.11, allowing for his use of Gaussian units.]
The argument at this point is again similar to that for the harmonic oscillator: we examine the behaviour for large $j$. In that case, we can ignore the $\ell$ and $\rho_{0}$ terms and write

$$c_{j+1}\approx\frac{2j}{j(j+1)}c_{j}=\frac{2}{j+1}c_{j}$$
(We could also ignore the 1 in the denominator, but keeping it makes the argument easier, as we will see.) If we took this as an exact recursion relation, then starting with some initial constant $c_{0}$, we get

$$c_{j}=\frac{2^{j}}{j!}c_{0}$$

so that

$$v(\rho)=c_{0}\sum_{j=0}^{\infty}\frac{(2\rho)^{j}}{j!}=c_{0}e^{2\rho}$$
In the last line we used the series expansion for the exponential function.
Returning for a moment to the original definition of $u$, we get

$$u(\rho)=\rho^{\ell+1}e^{-\rho}v(\rho)\approx c_{0}\rho^{\ell+1}e^{\rho}$$
Thus the infinite series solution gives a value for $u$ that increases exponentially for large $\rho$, which isn't normalizable, so it isn't a valid solution. The only way to resolve this problem is again the same as in the harmonic oscillator case, which is to require the series to terminate after a finite number of terms. That is, we must have, for some value of $j_{\max}$,

$$2(j_{\max}+\ell+1)-\rho_{0}=0$$
That is, $\rho_{0}$ must be an even integer, which we can define as $\rho_{0}\equiv2n$, where $n=j_{\max}+\ell+1$. Recalling the definition of $\rho_{0}$ from above, we therefore have the condition which quantizes the energy levels in the hydrogen atom:

$$\kappa=\frac{me^{2}}{4\pi\epsilon_{0}\hbar^{2}n}$$
But $E=-\frac{\hbar^{2}\kappa^{2}}{2m}$, so for the energy levels, we get

$$E_{n}=-\left[\frac{m}{2\hbar^{2}}\left(\frac{e^{2}}{4\pi\epsilon_{0}}\right)^{2}\right]\frac{1}{n^{2}}$$
This is the Bohr formula (although Bohr got the formula without using the Schrödinger equation) for the energy levels of hydrogen. [Again, this is equivalent to Shankar’s 13.1.16 if you use Gaussian units, so that the factor $4\pi\epsilon_{0}$ becomes 1.]
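To put a number on this, here is a quick check of the Bohr formula in SI units. This snippet is my own (the function name is made up); the constants are CODATA values:

```python
import math

# Physical constants (CODATA values, SI units)
m_e  = 9.1093837015e-31    # electron mass (kg)
e    = 1.602176634e-19     # elementary charge (C)
hbar = 1.054571817e-34     # reduced Planck constant (J s)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)

def bohr_energy(n):
    """E_n = -[m/(2 hbar^2)] (e^2 / (4 pi eps0))^2 / n^2, in joules."""
    return -(m_e / (2 * hbar**2)) * (e**2 / (4 * math.pi * eps0))**2 / n**2

for n in (1, 2, 3):
    print(f"E_{n} = {bohr_energy(n) / e:.3f} eV")
```

The $n=1$ value comes out to about $-13.6$ eV, the familiar hydrogen ground-state energy.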
The degeneracy of each energy level is found by noting that for a given value of $n$, any value of $\ell$ is possible such that $j_{\max}=n-\ell-1\ge0$. Since $j_{\max}$ is just the index on the highest series coefficient $c_{j_{\max}}$, this means that $\ell$ can be any value from 0 up to $n-1$. For each $\ell$, the $z$ component of angular momentum can have any value from $m=-\ell$ up to $m=\ell$, which gives $2\ell+1$ possibilities for each $\ell$. Thus the degeneracy for energy state $E_{n}$ is

$$d(n)=\sum_{\ell=0}^{n-1}(2\ell+1)=2\cdot\frac{n(n-1)}{2}+n=n^{2}$$
where we’ve used the formula

$$\sum_{\ell=0}^{n-1}\ell=\frac{n(n-1)}{2}$$
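The counting argument is easy to check directly (a throwaway snippet of mine; `degeneracy` is a made-up name):

```python
def degeneracy(n):
    """Count states at level n: for each l = 0..n-1 there are 2l+1 values of m."""
    return sum(2 * l + 1 for l in range(n))

for n in range(1, 6):
    print(n, degeneracy(n), n ** 2)
```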
Before leaving the series solution, we need to point out that the polynomials produced by 14, with the constraint that $\rho_{0}=2n$, are known mathematically as the associated Laguerre polynomials. They can be written as derivatives. First we define the ordinary Laguerre polynomials $L_{q}(x)$:

$$L_{q}(x)\equiv e^{x}\frac{d^{q}}{dx^{q}}\left(e^{-x}x^{q}\right)$$
Now the associated Laguerre polynomials $L_{q-p}^{p}(x)$, which depend on the two parameters $p$ and $q$, can be defined in terms of the ordinary Laguerre polynomials:

$$L_{q-p}^{p}(x)\equiv(-1)^{p}\frac{d^{p}}{dx^{p}}L_{q}(x)$$
A more useful formula for the associated Laguerre polynomials is

$$L_{q-p}^{p}(x)=\sum_{j=0}^{q-p}(-1)^{j}\frac{(q!)^{2}}{j!\,(j+p)!\,(q-p-j)!}x^{j}$$
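Since both definitions are purely algebraic, we can check that the derivative definition agrees with this sum formula using exact integer polynomial arithmetic. This is a sketch of my own (all function names are made up); polynomials are stored as coefficient lists $[a_0,a_1,\dots]$, and the identity $\frac{d}{dx}\left[p(x)e^{-x}\right]=\left[p'(x)-p(x)\right]e^{-x}$ lets us take the $q$ derivatives without ever leaving polynomial arithmetic:

```python
from math import factorial

def poly_diff(p):
    """Derivative of a polynomial given as a coefficient list [a0, a1, ...]."""
    return [i * p[i] for i in range(1, len(p))] or [0]

def laguerre(q):
    """L_q(x) = e^x (d/dx)^q (e^{-x} x^q).  Since d/dx[p e^{-x}] = (p' - p) e^{-x},
    we iterate p -> p' - p starting from p = x^q."""
    p = [0] * q + [1]
    for _ in range(q):
        d = poly_diff(p)
        p = [(d[i] if i < len(d) else 0) - p[i] for i in range(len(p))]
    return p

def assoc_laguerre(p, q):
    """L^p_{q-p}(x) = (-1)^p (d/dx)^p L_q(x)."""
    poly = laguerre(q)
    for _ in range(p):
        poly = poly_diff(poly)
    return [(-1) ** p * a for a in poly]

def assoc_laguerre_sum(p, q):
    """Sum formula: coefficient of x^j is (-1)^j (q!)^2 / (j! (j+p)! (q-p-j)!)."""
    return [(-1) ** j * factorial(q) ** 2
            // (factorial(j) * factorial(j + p) * factorial(q - p - j))
            for j in range(q - p + 1)]

print(laguerre(2))                                   # [2, -4, 1], i.e. x^2 - 4x + 2
print(assoc_laguerre(1, 2) == assoc_laguerre_sum(1, 2))
```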
In terms of associated Laguerre polynomials, the solution of 1 is (apart from normalization)

$$v(\rho)=L_{n-\ell-1}^{2\ell+1}(2\rho)$$
Now we define the coefficients $c_{j}$ in the polynomial and show that the recurrence relation 14 is valid. With $q=n+\ell$ and $p=2\ell+1$ (so that $q-p=n-\ell-1$), we have

$$L_{n-\ell-1}^{2\ell+1}(2\rho)=\sum_{j=0}^{n-\ell-1}c_{j}\rho^{j},\qquad c_{j}=(-1)^{j}\frac{2^{j}\left[(n+\ell)!\right]^{2}}{j!\,(j+2\ell+1)!\,(n-\ell-1-j)!}$$

Taking the ratio of successive coefficients gives

$$\frac{c_{j+1}}{c_{j}}=\frac{-2(n-\ell-1-j)}{(j+1)(j+2\ell+2)}=\frac{2(j+\ell+1)-2n}{(j+1)(j+2\ell+2)}$$
This is the same recurrence relation as 14, provided that $\rho_{0}=2n$. However, this isn't enough to verify the solution, since other definitions of $c_{j}$ would give the same relation (for example, we could leave out the $\left[(n+\ell)!\right]^{2}$ factor in the numerator and still get the same recurrence relation). To verify that the polynomials are in fact solutions, we can work out their derivatives and plug them into 1 directly:

$$\frac{dv}{d\rho}=\sum_{j=0}^{n-\ell-1}jc_{j}\rho^{j-1},\qquad\frac{d^{2}v}{d\rho^{2}}=\sum_{j=0}^{n-\ell-1}j(j-1)c_{j}\rho^{j-2}$$

Substituting these into the left side of 1, with $\rho_{0}=2n$, gives

$$\sum_{j=0}^{n-\ell-1}j(j-1)c_{j}\rho^{j-1}+2(\ell+1)\sum_{j=0}^{n-\ell-1}jc_{j}\rho^{j-1}-2\sum_{j=0}^{n-\ell-1}jc_{j}\rho^{j}+\left[2n-2(\ell+1)\right]\sum_{j=0}^{n-\ell-1}c_{j}\rho^{j}$$
We can now shift the summation index for the first two terms so that we sum over $\rho^{j}$ instead of $\rho^{j-1}$. This results in

$$\sum_{j=-1}^{n-\ell-2}(j+1)(j+2\ell+2)c_{j+1}\rho^{j}+\sum_{j=0}^{n-\ell-1}\left[2n-2(j+\ell+1)\right]c_{j}\rho^{j}$$
In the first sum, the $j=-1$ term is zero due to the $(j+1)$ factor, so we can start both sums from $j=0$. Thus for all values of $j$ from 0 to $n-\ell-2$, we can examine the coefficient of $\rho^{j}$:

$$(j+1)(j+2\ell+2)c_{j+1}+\left[2n-2(j+\ell+1)\right]c_{j}$$
Using the relation between $c_{j+1}$ and $c_{j}$ above, we get

$$(j+1)(j+2\ell+2)\,\frac{2(j+\ell+1)-2n}{(j+1)(j+2\ell+2)}\,c_{j}+\left[2n-2(j+\ell+1)\right]c_{j}=0$$
For the one remaining term in the second sum, where $j=n-\ell-1$, we note that this term is zero on its own, since $2n-2(j+\ell+1)=2n-2n=0$ in this case. Thus the overall sum satisfies the original differential equation 1.
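The same check can be automated: build the coefficients $c_{j}$ from the closed-form expression above, assemble the left side of equation 1 (with $\rho_{0}=2n$) as a polynomial in $\rho$, and confirm that every coefficient vanishes. Again this is my own sketch with invented names, using exact integer arithmetic:

```python
from math import factorial

def c_coeff(j, n, l):
    """c_j = (-1)^j 2^j [(n+l)!]^2 / (j! (j+2l+1)! (n-l-1-j)!)."""
    return ((-1) ** j * 2 ** j * factorial(n + l) ** 2
            // (factorial(j) * factorial(j + 2 * l + 1) * factorial(n - l - 1 - j)))

def ode_lhs(n, l):
    """Coefficients of rho v'' + 2(l+1-rho) v' + [2n - 2(l+1)] v as a
    polynomial in rho, where v = sum_j c_j rho^j with j = 0..n-l-1."""
    jmax = n - l - 1
    out = [0] * (jmax + 1)
    for j in range(jmax + 1):
        cj = c_coeff(j, n, l)
        if j >= 1:
            out[j - 1] += j * (j - 1) * cj        # rho * v'' term
            out[j - 1] += 2 * (l + 1) * j * cj    # 2(l+1) v' term
            out[j] += -2 * j * cj                 # -2 rho v' term
        out[j] += (2 * n - 2 * (l + 1)) * cj      # [rho0 - 2(l+1)] v term
    return out

# Every coefficient should vanish for any valid (n, l) pair:
print(ode_lhs(3, 1))    # [0, 0]
print(all(x == 0 for x in ode_lhs(6, 2)))
```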