References: Shankar, R. (1994), *Principles of Quantum Mechanics*, Plenum Press. Section 1.10.

Here, we’ll revisit the differential operator $D$ on a continuous vector space, which we looked at earlier in its role as the momentum operator. This time around, we’ll use the bra-ket notation and vector space results to analyze it, hopefully putting it on a slightly more mathematical foundation.

We define the differential operator $D$ acting on a vector $\left|f\right\rangle$ in a continuous vector space as having the action

$$D\left|f\right\rangle = \left|\frac{df}{dx}\right\rangle$$

This notation means that $D$ operating on $\left|f\right\rangle$ produces the vector (ket) corresponding to the function whose form in the $x$ basis is $df/dx$. That is, the projection of $D\left|f\right\rangle$ onto the basis vector $\left|x\right\rangle$ is

$$\left\langle x\right|D\left|f\right\rangle = \frac{df(x)}{dx}$$
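As a quick numerical sanity check of this definition (a sketch of my own; the grid and the choice $f(x)=\sin x$ are purely illustrative), we can sample $f$ on a periodic grid and verify that the projection $\left\langle x\right|D\left|f\right\rangle$ is just $df/dx$:

```python
import numpy as np

# Illustrative sketch: approximate <x|D|f> for f(x) = sin(x) by a
# central difference on a periodic grid and compare with cos(x).
x = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
dx = x[1] - x[0]
f = np.sin(x)

# <x|D|f> ~ (f(x + dx) - f(x - dx)) / (2 dx), wrapping around the ends
Df = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

print(np.allclose(Df, np.cos(x), atol=1e-4))  # True: df/dx = cos(x)
```

The central difference agrees with $\cos x$ to within the discretization error, as expected.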

By a similar argument to that which we used to deduce the matrix element $\left\langle x\right|X\left|x'\right\rangle = x\,\delta(x-x')$ of the position operator, we can work out the matrix elements of $D$ in the $\left|x\right\rangle$ basis. Inserting the unit operator $I = \int \left|x'\right\rangle\left\langle x'\right|dx'$, we have

$$\left\langle x\right|D\left|f\right\rangle = \int \left\langle x\right|D\left|x'\right\rangle\left\langle x'\middle|f\right\rangle dx' = \int D_{xx'}\,f(x')\,dx'$$

We need this to be equal to $df(x)/dx$. To get this, we can introduce the derivative of the delta function, except this time the delta function is a function of $x-x'$ rather than of $x$ on its own. To see the effect of this derivative, consider the integral

$$\int \frac{d\delta(x-x')}{dx}\,f(x')\,dx' = \frac{d}{dx}\int \delta(x-x')\,f(x')\,dx' = \frac{df(x)}{dx}$$

Here we could take the derivative outside the integral, since the integration is over $x'$ and $x$ is a constant with respect to that integration. Comparing this with the matrix element expression above, we see that

$$D_{xx'} = \left\langle x\right|D\left|x'\right\rangle = \delta'(x-x')$$

Here the prime in $\delta'(x-x')$ means the derivative with respect to $x$, not $x'$. [Note that this is *not* the same formula as that quoted in the earlier post, where we had $\int \delta'(x)f(x)\,dx = -f'(0)$, because in that formula it was the same variable that was involved in the derivative of the delta function and in the integral.]
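This sifting property of the delta function's derivative can be checked numerically. Below is a sketch (the Gaussian width and the test function are my own illustrative choices): we approximate $\delta'(x-x')$ by the $x$-derivative of a narrow normalized Gaussian and integrate it against $f(x') = \sin x'$, which should return $f'(x) = \cos x$:

```python
import numpy as np

# Illustrative sketch: a narrow Gaussian stands in for the delta
# function; its derivative integrated against f(x') gives f'(x).
eps = 0.01                       # width of the Gaussian "delta"
xs = np.linspace(-5, 5, 20001)   # grid for the x' integration
x0 = 1.0                         # the fixed point x

def delta_prime(u):
    """Derivative of a normalized Gaussian of width eps."""
    g = np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    return -u / eps**2 * g

f = np.sin(xs)                   # test function f(x') = sin(x')
dxp = xs[1] - xs[0]
result = np.sum(delta_prime(x0 - xs) * f) * dxp  # Riemann sum

print(result, np.cos(x0))        # both approximately 0.5403
```

Note the sign: because the derivative acts on $x$ (the first argument), the result is $+f'(x)$, in contrast to the $-f'(0)$ of the single-variable formula.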

The operator $D$ is not hermitian as it stands. Since the delta function is real, looking at $D_{xx'}$ in bra-ket notation we see that

$$D_{x'x}^{*} = \left\langle x'\right|D\left|x\right\rangle^{*} = \delta'(x'-x) = -\delta'(x-x') = -D_{xx'}$$

Thus $D$ is anti-hermitian. It is easy to fix this and create a hermitian operator by multiplying $D$ by an imaginary number, such as $-i$ (this choice is, of course, made so that the new operator is consistent with the momentum operator). Calling this new operator $K = -iD$, we have

$$K_{xx'} = \left\langle x\right|K\left|x'\right\rangle = -i\,\delta'(x-x')$$
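The fix can be seen in a finite-dimensional toy model (a sketch of my own; the grid details are illustrative): on a periodic grid, the central-difference matrix is a stand-in for the differential operator. It is real and antisymmetric, so multiplying it by $-i$ yields a hermitian matrix:

```python
import numpy as np

# Illustrative sketch: a real antisymmetric finite-difference matrix
# models the differential operator; -i times it is hermitian.
N, dx = 8, 0.1
D = np.zeros((N, N))
for j in range(N):
    D[j, (j + 1) % N] = 1 / (2 * dx)    # coefficient of f(x + dx)
    D[j, (j - 1) % N] = -1 / (2 * dx)   # coefficient of f(x - dx)

print(np.allclose(D.T, -D))             # True: D is anti-hermitian
K = -1j * D
print(np.allclose(K.conj().T, K))       # True: K is hermitian
```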

A curious fact about $K$ (and thus about the momentum operator as well) is that it is not automatically hermitian even with this correction. We’ve seen that it satisfies the hermiticity property with respect to its matrix elements in the position basis, but to be fully hermitian, it must satisfy

$$\left\langle g\right|K\left|f\right\rangle = \left\langle f\right|K\left|g\right\rangle^{*}$$

for any two vectors $\left|f\right\rangle$ and $\left|g\right\rangle$. Suppose we are interested in $x$ over some range $a \le x \le b$. Then by inserting a couple of identity operators and integrating by parts, we have

$$\begin{aligned}
\left\langle g\right|K\left|f\right\rangle &= \int_a^b\!\int_a^b \left\langle g\middle|x\right\rangle \left\langle x\right|K\left|x'\right\rangle \left\langle x'\middle|f\right\rangle dx\,dx'\\
&= \int_a^b g^{*}(x)\left(-i\frac{df}{dx}\right)dx\\
&= -i\,g^{*}(x)f(x)\Big|_a^b + \int_a^b \left(-i\frac{dg}{dx}\right)^{*} f(x)\,dx\\
&= -i\,g^{*}(x)f(x)\Big|_a^b + \left\langle f\right|K\left|g\right\rangle^{*}
\end{aligned}$$

The result is hermitian only if the first term in the last line is zero, which happens only for certain choices of $f$, $g$, $a$ and $b$. If the limits are infinite, so we’re integrating over all space, and the system is bounded so that both $f$ and $g$ go to zero at infinity, then we’re OK, and $K$ is hermitian. Another option is if $f$ and $g$ are periodic and the range of integration is equal to an integral multiple of the period; then $g^{*}f$ has the same value at each end and the term becomes zero.
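The role of the boundary term can be checked numerically. In this sketch (the functions $f$ and $g$ are my own illustrative choices), we compare $\left\langle g\right|K\left|f\right\rangle$ with $\left\langle f\right|K\left|g\right\rangle^{*}$, using $Kf = -i\,df/dx$: for periodic functions the two agree, while for non-periodic ones they differ by exactly $-i\,g^{*}(x)f(x)\big|_a^b$:

```python
import numpy as np

def inner(u, v, x):
    """<u|v> as a simple Riemann sum of conj(u) * v."""
    return np.sum(np.conj(u) * v) * (x[1] - x[0])

# Periodic case: f = g = exp(i x) on [0, 2*pi)
x = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
f, fp = np.exp(1j * x), 1j * np.exp(1j * x)     # f and df/dx
lhs = inner(f, -1j * fp, x)                     # <g|K f> with g = f
rhs = inner(-1j * fp, f, x)                     # <K g|f> = <f|K g>*
print(abs(lhs - rhs) < 1e-8)                    # True: term vanishes

# Non-periodic case: f = x, g = x**2 on [0, 1]
x = np.linspace(0, 1, 100000)
f, fp = x, np.ones_like(x)
g, gp = x**2, 2 * x
lhs = inner(g, -1j * fp, x)
rhs = inner(-1j * gp, f, x)
print(abs((lhs - rhs) - (-1j)) < 1e-3)          # True: -i [g* f]_0^1 = -i
```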

However, as we’ve seen, in quantum mechanics there are cases where we deal with functions such as $e^{ikx}$ (for real $k$) that oscillate indefinitely, no matter how large $x$ is (see the free particle, for example). There isn’t any mathematically airtight way around such cases (as far as I know), but a hand-wavy way of defining a limit for such oscillating functions is to consider their average behaviour as $x\rightarrow\infty$. The average defined by Shankar is given as

$$\lim_{\Delta\rightarrow\infty}\frac{1}{\Delta}\int_L^{L+\Delta} e^{i(k-k')x}\,dx$$

This is interpreted as looking at the function very far out on the $x$ axis (at position $L$), and then considering a very long interval of length $\Delta$ starting at point $L$. Since the integral of $e^{i(k-k')x}$ over one period is zero (it’s just a combination of sine and cosine functions), the integral is always bounded between 0 and the area under half a cycle, as successive half-cycles cancel each other. Dividing by $\Delta$, which is monotonically increasing, ensures that the limit is zero.
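This averaging argument can be made concrete with a quick computation (the values of $k$ and $L$ below are illustrative choices of mine). The integral of $e^{ikx}$ over $[L, L+\Delta]$ has magnitude at most $2/k$, so dividing by $\Delta$ forces the average to zero:

```python
import numpy as np

# Illustrative sketch: the average (1/Delta) * integral_L^{L+Delta}
# exp(i k x) dx, evaluated in closed form; |average| <= 2/(k*Delta),
# so it shrinks to zero as Delta grows.
k, L = 1.5, 100.0
for Delta in [10.0, 100.0, 1000.0, 10000.0]:
    avg = (np.exp(1j * k * (L + Delta)) - np.exp(1j * k * L)) / (1j * k * Delta)
    print(Delta, abs(avg))   # magnitude stays below 2/(k*Delta)
```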

This isn’t an ideal solution, but it’s just one of many cases where an infinitely oscillating function is called upon to do seemingly impossible things. The theory seems to hang together fairly well in any case.