**Required math: calculus**

**Required physics:** Schrödinger equation

Reference: Griffiths, David J. (2005), *Introduction to Quantum Mechanics*, 2nd Edition; Pearson Education, Section 3.6.

A wave function can be expressed in its more usual form as a function of $x$ and $t$: $\Psi(x,t)$. We can also express the same function as the Fourier transform of its momentum space form:

$$\Psi(x,t)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}e^{ipx/\hbar}\,\Phi(p,t)\,dp$$

Here, the momentum space wave function $\Phi(p,t)$ is the inverse Fourier transform of the position space version:

$$\Phi(p,t)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}e^{-ipx/\hbar}\,\Psi(x,t)\,dx$$
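Since both integrals are explicit, the equivalence of the two representations is easy to check numerically. Here is a minimal Python sketch (my own illustration, taking $\hbar=1$ and a Gaussian test function, with the transforms done as direct quadratures on a grid):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-10, 10, 1001)   # position grid
p = np.linspace(-10, 10, 1001)   # momentum grid
dx = x[1] - x[0]
dp = p[1] - p[0]

# Normalized Gaussian wave packet as a test Psi(x) (t suppressed)
psi = np.pi**-0.25 * np.exp(-x**2 / 2)

# Phi(p) = (2*pi*hbar)^(-1/2) * Int e^{-ipx/hbar} Psi(x) dx
phi = np.exp(-1j * np.outer(p, x) / hbar) @ psi * dx / np.sqrt(2 * np.pi * hbar)

# Back again: Psi(x) = (2*pi*hbar)^(-1/2) * Int e^{+ipx/hbar} Phi(p) dp
psi_back = np.exp(1j * np.outer(x, p) / hbar) @ phi * dp / np.sqrt(2 * np.pi * hbar)

print(np.max(np.abs(psi_back - psi)))   # tiny: the round trip recovers Psi
```

The direct quadrature keeps the $\pm ipx/\hbar$ conventions of the two equations explicit; an FFT would be faster but obscures the signs and the normalization.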

Since either function can be obtained from the other with no loss of information, they are equivalent ways of expressing the wave function. In the language of linear algebra, the vector representing the wave function can be expressed in two different bases (plural of ‘basis’). The position space wave function is given in the first equation by a vector in the momentum basis, where $\Phi(p,t)$ is the coordinate of the wave function for that particular value of $p$.

If you like thinking of vectors in 3 dimensions (rather than the infinite number of dimensions we’re using here), this is analogous to saying that $\Phi(p,t)$ is the coordinate of $\Psi$ ‘along the $p$ direction’.

Similarly, the second equation expresses the momentum space wave function as a vector in position space, with $\Psi(x,t)$ the coordinate ‘along the $x$ direction’.

For a Hamiltonian with discrete energies, such as the harmonic oscillator, we know that the wave function can be expressed as a linear combination of the stationary states $\psi_n(x)$, as in

$$\Psi(x,t)=\sum_{n}c_n\psi_n(x)e^{-iE_nt/\hbar}$$

In this case, the basis consists of the stationary state functions multiplied by the exponential, and the ‘coordinate along the $n$ direction’ is the coefficient $c_n$.
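To make the ‘coordinate along the $n$ direction’ idea concrete, here’s a short sketch (again my own, with $m=\omega=\hbar=1$ and a displaced Gaussian chosen as the test wave function) that extracts each $c_n=\langle\psi_n|\Psi\rangle$ by numerical integration:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial

x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]

def psi_n(n, x):
    """Stationary state of the harmonic oscillator (units m = omega = hbar = 1)."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0                      # selects the Hermite polynomial H_n
    norm = np.pi**-0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * hermval(x, coef) * np.exp(-x**2 / 2)

# Test wave function at t = 0: the ground state Gaussian displaced to x0 = 1
Psi = np.pi**-0.25 * np.exp(-(x - 1.0)**2 / 2)

# 'Coordinate along the n direction': c_n = <psi_n|Psi>
c = np.array([np.sum(psi_n(n, x) * Psi) * dx for n in range(30)])

print(np.sum(c**2))   # ~1.0: the coefficients account for the whole wave function
```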

A hermitian operator (which represents an observable) transforms one vector into another. For example, the hamiltonian operator, when operating on one of its eigenvectors, multiplies that vector by a constant, which is the energy:

$$H\psi_n=E_n\psi_n$$

With respect to a given basis, each operator can be represented as a matrix, with one dimension of the matrix for each dimension of the vector space (which is infinite in the examples so far). The matrix elements can be represented in bra-ket notation as

$$H_{ij}=\langle e_i|H|e_j\rangle$$

where $e_i$ is the $i$th basis vector (or function).
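As a concrete illustration of this definition (a Python sketch of my own, with $m=\omega=\hbar=1$ and a simple finite-difference second derivative), we can compute a few $H_{ij}$ for the harmonic oscillator in the basis of its own stationary states; as the next paragraph notes, the result is diagonal:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def psi_n(n, x):
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    return (np.pi**-0.25 / np.sqrt(2.0**n * factorial(n))
            * hermval(x, coef) * np.exp(-x**2 / 2))

def H(f):
    """Apply the oscillator hamiltonian (m = omega = hbar = 1) on the grid."""
    d2f = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2
    return -0.5 * d2f + 0.5 * x**2 * f

N = 6
Hmat = np.array([[np.sum(psi_n(i, x) * H(psi_n(j, x))) * dx
                  for j in range(N)] for i in range(N)])
print(np.round(Hmat, 3))
# ~ diag(0.5, 1.5, 2.5, 3.5, 4.5, 5.5): the energies E_n = n + 1/2 sit on the
# diagonal; off-diagonal elements are ~0 up to finite-difference error
```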

If the basis consists of the eigenfunctions of the hamiltonian, then the matrix is diagonal, since the eigenfunctions are orthogonal. Things are trickier if we want to find the matrix elements of the hamiltonian with respect to a continuous basis, like momentum. The (non-normalizable, except in the delta function sense) eigenfunctions of momentum are

$$f_p(x)=\frac{1}{\sqrt{2\pi\hbar}}e^{ipx/\hbar}$$

In the case of the harmonic oscillator, the hamiltonian is

$$H=-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}+\frac{1}{2}m\omega^2x^2$$

so if we apply this to the momentum eigenfunctions, we get

$$Hf_p=\left(\frac{p^2}{2m}+\frac{1}{2}m\omega^2x^2\right)f_p$$

(By the way, although this might look like $f_p$ is an eigenfunction of $H$ since the RHS has the form $(\text{factor})\times f_p$, it’s not, since the ‘factor’ $\frac{p^2}{2m}+\frac{1}{2}m\omega^2x^2$ is not a constant; it depends on $x$.)
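A quick symbolic check of this point (a SymPy sketch of my own) applies the hamiltonian to $f_p$ and shows that the ratio $Hf_p/f_p$ still contains $x$:

```python
import sympy as sp

x, p, hbar, m, w = sp.symbols('x p hbar m omega', positive=True)
f_p = sp.exp(sp.I * p * x / hbar) / sp.sqrt(2 * sp.pi * hbar)

# Apply the harmonic oscillator hamiltonian to a momentum eigenfunction
Hf = -hbar**2 / (2 * m) * sp.diff(f_p, x, 2) + m * w**2 * x**2 / 2 * f_p

# The would-be 'eigenvalue' H f_p / f_p still contains x, so f_p is
# not an eigenfunction of H
print(sp.simplify(Hf / f_p))   # p**2/(2*m) + m*omega**2*x**2/2
```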

To get the matrix elements of $H$ we take the inner product with another momentum eigenfunction for momentum $p'$:

$$\langle f_{p'}|H|f_p\rangle=\frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}e^{i(p-p')x/\hbar}\left(\frac{p^2}{2m}+\frac{1}{2}m\omega^2x^2\right)dx$$

The first term of this integral evaluates to a delta function of form $A\,\delta(p-p')$ for a constant $A$, but the second term involves a more problematic integral containing the product $x^2e^{i(p-p')x/\hbar}$. This integral gives a real result (since $x^2$ is even and the imaginary part of a complex exponential is odd, being a sine), and if we take the limits of the integral to be symmetric about $x=0$, the integral oscillates about zero with an amplitude that increases the wider we take the limits. The integral is also clearly infinite if $p=p'$.
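We can see the growing oscillation numerically. The sketch below (my own, taking $k\equiv(p-p')/\hbar=1$) evaluates the truncated integral $\int_{-L}^{L}x^2\cos kx\,dx$ for a few values of $L$:

```python
from scipy.integrate import quad

k = 1.0   # k = (p - p')/hbar, taken nonzero here

for L in [10.0, 20.0, 40.0, 80.0]:
    # weight='cos' uses a quadrature rule built for oscillatory integrands
    val, _ = quad(lambda x: x**2, -L, L, weight='cos', wvar=k)
    print(L, val)   # swings about zero, with amplitude growing roughly like L**2
```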

Working out the integral in Maple, we get

$$\int_{-L}^{L}x^2e^{i(p-p')x/\hbar}\,dx=\frac{2\left(k^2L^2-2\right)\sin kL+4kL\cos kL}{k^3}$$

where $k\equiv(p-p')/\hbar$. This integral is zero whenever

$$\tan kL=\frac{2kL}{2-k^2L^2}$$
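For anyone without Maple, the closed form is easy to verify symbolically; the SymPy sketch below (my own) checks that the integral matches it:

```python
import sympy as sp

x, k, L = sp.symbols('x k L', positive=True)

# Truncated integral; the odd sine part of e^{ikx} integrates to zero,
# leaving the even cosine part
I = sp.integrate(x**2 * sp.cos(k * x), (x, -L, L))

# The closed form quoted above
closed = (2 * (k**2 * L**2 - 2) * sp.sin(k * L) + 4 * k * L * sp.cos(k * L)) / k**3
print(sp.simplify(I - closed))   # 0
```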

This is another of those transcendental equations (assuming we’re solving for $L$ to find out what limits make the integral zero). If we define $z\equiv kL=(p-p')L/\hbar$ we can write this as

$$\tan z=\frac{2z}{2-z^2}$$

By plotting the two sides on the same graph, we see there are an infinite number of intersections, so we could make a similar argument as in the delta function case that this integral’s average value as $L\rightarrow\infty$ is zero, although I wouldn’t want to bet anything significant on it.

In the plot, $\tan z$ is in red, and $\frac{2z}{2-z^2}$ is in blue.
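A numerical check of the intersections (a sketch using SciPy’s `brentq`, bracketing one root inside each branch of the tangent) confirms there is a root in every branch:

```python
import numpy as np
from scipy.optimize import brentq

f = lambda z: np.tan(z) - 2 * z / (2 - z**2)

# One root per branch of tan z; bracket just inside the poles at (n + 1/2)*pi
roots = [brentq(f, (n - 0.5) * np.pi + 1e-6, (n + 0.5) * np.pi - 1e-6)
         for n in range(2, 8)]
print(np.round(roots, 4))   # roots approach n*pi from below as z grows
```

Within each branch $\tan z$ runs from $-\infty$ to $+\infty$ while the right-hand side stays small (for $z>\sqrt{2}$), so a crossing is guaranteed, which is the ‘infinite number of intersections’ seen in the plot.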
