Hamiltonian matrix elements

Required math: calculus

Required physics: Schrödinger equation

References: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education, sec 3.6.

A wave function can be expressed in its more usual form as a function of {x} and {t}: {\Psi(x,t)}. We can also express the same function as the Fourier transform of its momentum space form:

\displaystyle  \Psi(x,t)=\frac{1}{\sqrt{2\pi\hbar}}\int\Phi(p,t)e^{ipx/\hbar}dp \ \ \ \ \ (1)

Here, the momentum space wave function is the inverse Fourier transform of the position space version:

\displaystyle  \Phi(p,t)=\frac{1}{\sqrt{2\pi\hbar}}\int\Psi(x,t)e^{-ipx/\hbar}dx \ \ \ \ \ (2)

Since either function can be obtained from the other with no loss of information, they are equivalent ways of expressing the wave function. In the language of linear algebra, the vector representing the wave function can be expressed in two different bases (plural of ‘basis’). In the first equation, the position space wave function is expanded in the momentum basis, where {\Phi(p,t)} is the coordinate of the wave function for that particular value of {p}.

If you like thinking of vectors in 3 dimensions (rather than the infinite number of dimensions we’re using here), this is analogous to saying that {\Phi(p,t)} is the coordinate of {\Psi(x,t)} ‘along the {p} direction’.

Similarly, the second equation expresses the momentum space wave function in terms of the position basis, with {\Psi(x,t)} the coordinate ‘along the {x} direction’.
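The transform pair is easy to verify numerically: starting from a Gaussian wave packet in position space, a direct quadrature of equation (2) should reproduce the known analytic Gaussian in momentum space. A minimal sketch in Python (with {\hbar=1} and an arbitrary width {\sigma=1}; the grid sizes are just convenient choices):

```python
import numpy as np

hbar = 1.0
sigma = 1.0  # arbitrary packet width

# Position grid wide enough that the Gaussian has decayed to ~0 at the edges
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet in position space
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

# Momentum-space wave function via equation (2), done as a direct quadrature
p = np.linspace(-5, 5, 201)
phi = np.array([np.sum(psi * np.exp(-1j * pk * x / hbar)) * dx for pk in p])
phi /= np.sqrt(2 * np.pi * hbar)

# Known analytic transform of a Gaussian, for comparison
phi_exact = (sigma**2 / (np.pi * hbar**2)) ** 0.25 \
    * np.exp(-sigma**2 * p**2 / (2 * hbar**2))

print(np.max(np.abs(phi - phi_exact)))  # tiny (quadrature error only)
```

Applying the same quadrature to equation (1) takes {\Phi} back to {\Psi}, consistent with the claim that no information is lost in either direction.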

For a Hamiltonian with discrete energies, such as the harmonic oscillator, we know that the wave function can be expressed as a linear combination of the stationary states {\psi_{n}(x)}, as in

\displaystyle  \Psi(x,t)=\sum_{n}c_{n}e^{-iE_{n}t/\hbar}\psi_{n}(x) \ \ \ \ \ (3)

In this case, the basis consists of the stationary state functions multiplied by the exponential, and the ‘coordinate along the {\psi_{n}} direction’ is the coefficient {c_{n}}.
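This ‘coordinate’ interpretation can be checked directly: build a superposition of the first two harmonic oscillator stationary states (whose closed forms are standard) with known coefficients, then recover the {c_{n}} by projecting onto each basis function. A sketch in Python, taking {\hbar=m=\omega=1} and {t=0} so the exponential factors are 1:

```python
import numpy as np

# Units with hbar = m = omega = 1, evaluated at t = 0
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Closed forms of the first two harmonic oscillator stationary states
psi0 = np.pi ** -0.25 * np.exp(-x**2 / 2)
psi1 = np.pi ** -0.25 * np.sqrt(2.0) * x * np.exp(-x**2 / 2)

# A superposition with known coefficients (|c0|^2 + |c1|^2 = 1)
c0, c1 = 0.6, 0.8
Psi = c0 * psi0 + c1 * psi1

# Recover each 'coordinate' by projecting onto the corresponding basis function
c0_rec = np.sum(psi0 * Psi) * dx
c1_rec = np.sum(psi1 * Psi) * dx
print(c0_rec, c1_rec)  # recovers 0.6 and 0.8 up to quadrature error
```

The projections work because the {\psi_{n}} are orthonormal, which is the same property that makes the hamiltonian's matrix diagonal in this basis below.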

A hermitian operator (which represents an observable) transforms one vector into another. For example, the hamiltonian operator, when operating on one of its eigenvectors, multiplies that vector by a constant, which is the energy:

\displaystyle  H\psi_{n}(x)=E_{n}\psi_{n}(x) \ \ \ \ \ (4)

With respect to a given basis, each operator can be represented as a matrix, with one dimension of the matrix for each dimension of the vector space (which is infinite in the examples so far). The matrix elements can be represented in bra-ket notation as

\displaystyle  H_{mn}=\left\langle e_{m}\left|H\right|e_{n}\right\rangle \ \ \ \ \ (5)

where {e_{n}} is the {n}th basis vector (or function).

If the basis consists of the eigenfunctions of the hamiltonian, then the matrix is diagonal, since the eigenfunctions are orthogonal. Things are trickier if we want to find the matrix elements of the hamiltonian with respect to a continuous basis, like momentum. The (non-normalizable, except in the delta function sense) eigenfunctions of momentum are

\displaystyle  f_{p}(x)=\frac{1}{\sqrt{2\pi\hbar}}e^{ipx/\hbar} \ \ \ \ \ (6)

In the case of the harmonic oscillator, the hamiltonian is

\displaystyle  H=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+\frac{1}{2}m\omega^{2}x^{2} \ \ \ \ \ (7)

so if we apply this to the momentum eigenfunctions, we get

\displaystyle  Hf_{p}(x)=\frac{1}{2\sqrt{2\pi\hbar}}e^{ipx/\hbar}\left(\frac{p^{2}}{m}+m\omega^{2}x^{2}\right) \ \ \ \ \ (8)

(By the way, although this might look like {f_{p}} is an eigenfunction of {H} since the RHS has the form {(factor)\times f_{p}}, it’s not, since the ‘factor’ is not a constant.)
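Before tackling the momentum basis, we can illustrate the discrete case numerically: discretize the harmonic oscillator hamiltonian (7) on a grid and diagonalize the resulting matrix, whose eigenvalues should approximate {E_{n}=\left(n+\frac{1}{2}\right)\hbar\omega}. A rough sketch in Python ({\hbar=m=\omega=1}; grid size and spacing are arbitrary choices):

```python
import numpy as np

# hbar = m = omega = 1: discretize H = -(1/2) d^2/dx^2 + (1/2) x^2
N = 1000
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

# Central-difference second derivative as a tridiagonal matrix
D2 = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx**2

H = -0.5 * D2 + np.diag(0.5 * x**2)

# Eigenvalues should approximate E_n = n + 1/2 in these units
E = np.linalg.eigvalsh(H)
print(E[:4])  # close to 0.5, 1.5, 2.5, 3.5
```

In the basis of the eigenvectors returned by the diagonalization, this matrix is by construction diagonal with the {E_{n}} on the diagonal, which is the discrete-basis statement above.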

To get the matrix elements of {H}, we take the inner product with a second momentum eigenfunction:

\displaystyle  H_{mn}=\left\langle p_{m}\left|H\right|p_{n}\right\rangle \ \ \ \ \ (9)

\displaystyle  \phantom{H_{mn}}=\frac{1}{4\pi\hbar}\int e^{i\left(p_{n}-p_{m}\right)x/\hbar}\left(\frac{p_{n}^{2}}{m}+m\omega^{2}x^{2}\right)dx \ \ \ \ \ (10)

The first term of this integral evaluates to a delta function of the form {A\delta(p_{n}-p_{m})} for a constant {A}. The second term is more problematic, since it contains the product {x^{2}e^{i\left(p_{n}-p_{m}\right)x/\hbar}}. Its integral is real (since {x^{2}} is even and the imaginary part of the complex exponential, being a sine, is odd), and if we take the limits of integration to be symmetric about {x=0}, the integral oscillates about zero with an amplitude that grows as we widen the limits. The integral is also clearly infinite if {p_{n}=p_{m}}.
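The delta-function behaviour of the first term can be made concrete with its closed form, {\int_{-a}^{a}e^{i\left(p_{n}-p_{m}\right)x/\hbar}dx=2\hbar\sin\left[\left(p_{n}-p_{m}\right)a/\hbar\right]/\left(p_{n}-p_{m}\right)}: the diagonal value ({p_{n}=p_{m}}) grows like {2a}, while off-diagonal values stay bounded by {2\hbar/\left|p_{n}-p_{m}\right|}. A quick numerical sketch ({\hbar=1}):

```python
import numpy as np

hbar = 1.0

def first_term(dp, a):
    """Closed form of the integral of exp(i*dp*x/hbar) from -a to a."""
    if dp == 0:
        return 2.0 * a
    return 2.0 * hbar * np.sin(dp * a / hbar) / dp

# Diagonal (dp = 0) grows without bound; off-diagonal stays bounded
for a in (10.0, 100.0, 1000.0):
    print(a, first_term(0.0, a), first_term(0.5, a))
```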

Working out the integral in Maple, we get

\displaystyle  \int_{-a}^{a}x^{2}e^{i\left(p_{n}-p_{m}\right)x/\hbar}dx=\frac{2\hbar}{\left(p_{n}-p_{m}\right)^{3}}\left[\sin\left(\frac{\left(p_{n}-p_{m}\right)a}{\hbar}\right)\left(a^{2}\left(p_{n}-p_{m}\right)^{2}-2\hbar^{2}\right)+\cos\left(\frac{\left(p_{n}-p_{m}\right)a}{\hbar}\right)2a\hbar\left(p_{n}-p_{m}\right)\right] \ \ \ \ \ (11)
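We don't need Maple to trust this result; it can be checked against a direct numerical quadrature. A sketch in Python ({\hbar=1}; the particular values of {p_{n}-p_{m}} and {a} are arbitrary):

```python
import numpy as np

hbar = 1.0

def closed_form(dp, a):
    """Equation (11): integral of x^2 exp(i*dp*x/hbar) from -a to a (dp != 0)."""
    q = dp * a / hbar
    return (2.0 * hbar / dp**3) * (np.sin(q) * (a**2 * dp**2 - 2.0 * hbar**2)
                                   + np.cos(q) * 2.0 * a * hbar * dp)

def by_quadrature(dp, a, n=200001):
    """Trapezoidal rule for the same integral (real part; the imaginary part is odd)."""
    x = np.linspace(-a, a, n)
    dx = x[1] - x[0]
    f = x**2 * np.exp(1j * dp * x / hbar)
    return np.real((np.sum(f) - 0.5 * (f[0] + f[-1])) * dx)

dp, a = 0.7, 25.0
print(closed_form(dp, a), by_quadrature(dp, a))  # the two agree closely
```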

This integral is zero whenever

\displaystyle  \tan\left(\frac{\left(p_{n}-p_{m}\right)a}{\hbar}\right)=\frac{2a\hbar\left(p_{n}-p_{m}\right)}{2\hbar^{2}-a^{2}\left(p_{n}-p_{m}\right)^{2}} \ \ \ \ \ (12)

This is another of those transcendental equations (assuming we're solving for {a} to find out which limits make the integral zero). If we define {q\equiv\left(p_{n}-p_{m}\right)a/\hbar}, we can write this as

\displaystyle  \tan q=\frac{2q}{2-q^{2}} \ \ \ \ \ (13)

By plotting the two sides on the same graph, we see that there are an infinite number of intersections, so we could make an argument similar to the one in the delta function case that this integral's average value as {a\rightarrow\infty} is zero, although I wouldn't want to bet anything significant on it.

In the plot, {\tan q} is in red and {\frac{2q}{2-q^{2}}} is in blue.
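Rather than solving the transcendental equation graphically, we can locate the zeros directly from the bracketed factor in equation (11), by scanning for sign changes and refining each one by bisection. A sketch in Python ({\hbar=1} and {p_{n}-p_{m}=1}, both arbitrary choices):

```python
import numpy as np

hbar = 1.0
dp = 1.0  # p_n - p_m, arbitrary nonzero value

def g(a):
    """Bracketed factor of equation (11); the integral vanishes where g(a) = 0."""
    q = dp * a / hbar
    return (np.sin(q) * (a**2 * dp**2 - 2.0 * hbar**2)
            + np.cos(q) * 2.0 * a * hbar * dp)

# Scan for sign changes, then refine each bracket by bisection
roots = []
grid = np.linspace(0.1, 20.0, 2000)
for lo, hi in zip(grid[:-1], grid[1:]):
    if g(lo) * g(hi) < 0:
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))

print(roots)  # infinitely many zeros exist; these are the ones below a = 20
```

The roots appear roughly once per branch of the tangent, consistent with the infinite number of intersections seen in the plot.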
