
Translational invariance and conservation of momentum

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 11.

One consequence of the invariance of the Hamiltonian under translation is that the momentum and Hamiltonian commute:

\displaystyle \left[P,H\right]=0 \ \ \ \ \ (1)

In quantum mechanics, commuting quantities are simultaneously observable, and we can find a basis for the Hilbert space consisting of eigenstates of both {P} and {H}. We’ve seen that Ehrenfest’s theorem allows us to conclude that for such a system, the average momentum is conserved so that {\left\langle \dot{P}\right\rangle =0}. We can go a step further and state that if a system starts out in an eigenstate of {P}, then it remains in that eigenstate for all time.

First, we need to make a rather subtle observation, which is that

\displaystyle \left[P,H\right]=0\rightarrow\left[P,U\left(t\right)\right]=0 \ \ \ \ \ (2)

 

That is, if {P} and {H} commute, then {P} also commutes with the propagator {U\left(t\right)}. For a time-independent Hamiltonian, the propagator is

\displaystyle U\left(t\right)=e^{-iHt/\hbar} \ \ \ \ \ (3)

Since this can be expanded in a power series in the Hamiltonian, condition 2 follows easily enough. What if the Hamiltonian is time-dependent? In this case, the propagator comes out to a time-ordered integral

\displaystyle U\left(t\right)=T\left\{ \exp\left[-\frac{i}{\hbar}\int_{0}^{t}H\left(t^{\prime}\right)dt^{\prime}\right]\right\} \equiv\lim_{N\rightarrow\infty}\prod_{n=0}^{N-1}e^{-i\Delta H\left(n\Delta\right)/\hbar} \ \ \ \ \ (4)

 

Here the time interval {\left[0,t\right]} is divided into {N} time slices, each of length {\Delta=t/N}. As explained in the earlier post, the reason we can’t just integrate the RHS directly by summing the exponents is that such a procedure works only if the operators in the exponents all commute with each other. If {H} is time-dependent, its forms at different times may not commute, so we can’t get a simple closed form for {U\left(t\right)}.
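
To see numerically why the time ordering matters, here's a small Python sketch (not from Shankar; the two-level Hamiltonian and parameters are arbitrary choices, with {\hbar=1}). The sliced product in 4 is compared with the naive exponential of {\int H\left(t^{\prime}\right)dt^{\prime}}; because {H\left(t_{1}\right)} and {H\left(t_{2}\right)} don't commute, the two disagree.

    import numpy as np
    from scipy.linalg import expm

    # Two-level Hamiltonian H(t) = sx + t*sz, so [H(t1), H(t2)] != 0 for t1 != t2
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    H = lambda t: sx + t * sz

    t_final, N = 1.0, 2000
    dt = t_final / N

    # Time-ordered product of slice propagators (later times act on the left)
    U_sliced = np.eye(2, dtype=complex)
    for n in range(N):
        U_sliced = expm(-1j * dt * H(n * dt)) @ U_sliced

    # Naive exponential of the integrated Hamiltonian
    H_int = sx * t_final + sz * t_final**2 / 2
    U_naive = expm(-1j * H_int)

    print(np.abs(U_sliced - U_naive).max())   # not zero: the ordering matters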

However, if {\left[P,H\left(t\right)\right]=0} for all times, then {P} commutes with all the exponents on the RHS of 4, so we still get {\left[P,U\left(t\right)\right]=0}. Another way of looking at this is that by imposing the condition {\left[P,H\left(t\right)\right]=0} we're saying that if {H\left(t\right)} can be expanded in a power series in {X} and {P}, it depends only on {P} and not on {X}. This follows from the fact that

\displaystyle \left[X^{n},P\right]=i\hbar nX^{n-1} \ \ \ \ \ (5)

so that {P} does not commute with any power of {X}.

Given that 2 is valid for all Hamiltonians, if we start in an eigenstate {\left|p\right\rangle } of {P}, then

\displaystyle P\left|p\right\rangle \displaystyle = \displaystyle p\left|p\right\rangle \ \ \ \ \ (6)
\displaystyle PU\left(t\right)\left|p\right\rangle \displaystyle = \displaystyle U\left(t\right)P\left|p\right\rangle \ \ \ \ \ (7)
\displaystyle \displaystyle = \displaystyle U\left(t\right)p\left|p\right\rangle \ \ \ \ \ (8)
\displaystyle \displaystyle = \displaystyle pU\left(t\right)\left|p\right\rangle \ \ \ \ \ (9)

Thus {U\left(t\right)\left|p\right\rangle } remains an eigenstate of {P} with the same eigenvalue {p} for all time. For a single particle moving in one dimension, the state {\left|p\right\rangle } describes a free particle with momentum {p} (and thus a completely undetermined position).
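
Here's a minimal numerical sketch of this conclusion (my own, not Shankar's): on a ring of {N} sites we can build a periodic finite-difference momentum matrix {P} and the free Hamiltonian {H=P^{2}/2m}, which commute by construction. A plane wave is an eigenvector of {P}, and after applying {U\left(t\right)=e^{-iHt/\hbar}} it is still an eigenvector with the same eigenvalue. Units with {\hbar=m=1} and an arbitrary grid are assumed.

    import numpy as np
    from scipy.linalg import expm

    N, dx = 64, 0.1
    x = dx * np.arange(N)

    # Periodic central-difference momentum operator P = -i d/dx (hbar = 1)
    P = np.zeros((N, N), dtype=complex)
    for j in range(N):
        P[j, (j + 1) % N] = -1j / (2 * dx)
        P[j, (j - 1) % N] = +1j / (2 * dx)

    H = P @ P / 2.0              # free particle with m = 1, so [P, H] = 0
    U = expm(-1j * H * 5.0)      # propagator for t = 5

    k = 2 * np.pi * 3 / (N * dx)            # an allowed wave number on the ring
    psi = np.exp(1j * k * x) / np.sqrt(N)   # eigenvector of P
    p = (psi.conj() @ (P @ psi)).real       # its momentum eigenvalue

    evolved = U @ psi
    print(np.abs(P @ evolved - p * evolved).max())   # ~1e-15: still the same eigenstate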

Changing the position basis with a unitary transformation

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 7.4, Exercise 7.4.9.

The standard representation of the position and momentum operators in the position basis is

\displaystyle   X \displaystyle  \rightarrow \displaystyle  x\ \ \ \ \ (1)
\displaystyle  P \displaystyle  \rightarrow \displaystyle  -i\hbar\frac{d}{dx} \ \ \ \ \ (2)

It turns out it’s possible to modify this definition by adding some arbitrary function of position {f\left(x\right)} to {P} so we have

\displaystyle   X^{\prime} \displaystyle  \rightarrow \displaystyle  x\ \ \ \ \ (3)
\displaystyle  P^{\prime} \displaystyle  \rightarrow \displaystyle  -i\hbar\frac{d}{dx}+f\left(x\right) \ \ \ \ \ (4)

Since any function of {x} commutes with {X}, the commutation relations remain unchanged, so we have

\displaystyle  \left[X^{\prime},P^{\prime}\right]=i\hbar \ \ \ \ \ (5)
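
As a quick check (mine, not Shankar's), we can hand 3 and 4 to a computer algebra system and apply the commutator to an arbitrary test function; the result is {i\hbar} times that function for any {f\left(x\right)}. This is a Python/SymPy sketch with all symbols left general.

    import sympy as sp

    x, hbar = sp.symbols('x hbar', real=True)
    f = sp.Function('f')       # arbitrary real function of position
    psi = sp.Function('psi')   # arbitrary test function

    Xop = lambda phi: x * phi                                        # X' -> x
    Pop = lambda phi: -sp.I * hbar * sp.diff(phi, x) + f(x) * phi    # P' -> -i*hbar*d/dx + f(x)

    commutator = Xop(Pop(psi(x))) - Pop(Xop(psi(x)))
    print(sp.simplify(commutator))    # I*hbar*psi(x), so [X', P'] = i*hbar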

Another way of interpreting this change in operators is by using the unitary transformation of the {X} basis, in the form

\displaystyle  \left|x\right\rangle \rightarrow\left|\tilde{x}\right\rangle =e^{ig\left(X\right)/\hbar}\left|x\right\rangle =e^{ig\left(x\right)/\hbar}\left|x\right\rangle  \ \ \ \ \ (6)

where

\displaystyle  g\left(x\right)\equiv\int^{x}f\left(x^{\prime}\right)dx^{\prime} \ \ \ \ \ (7)

The last equality in 6 comes from the fact that operating on {\left|x\right\rangle } with any function of the {X} operator (provided the function can be expanded in a power series) results in multiplying {\left|x\right\rangle } by the same function, but with the operator {X} replaced by the numeric position value.

To verify that this works, we can calculate the matrix elements of the old {X} and {P} operators in the new basis. We have

\displaystyle  \left\langle \tilde{x}\left|X\right|\tilde{x}^{\prime}\right\rangle =\left\langle x\left|e^{-ig\left(x\right)/\hbar}Xe^{ig\left(x^{\prime}\right)/\hbar}\right|x^{\prime}\right\rangle  \ \ \ \ \ (8)

At this stage, since the two exponentials are numerical functions and not operators, we can take them outside the bracket to get

\displaystyle   \left\langle \tilde{x}\left|X\right|\tilde{x}^{\prime}\right\rangle \displaystyle  = \displaystyle  e^{-ig\left(x\right)/\hbar}e^{ig\left(x^{\prime}\right)/\hbar}\left\langle x\left|X\right|x^{\prime}\right\rangle \ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  e^{-ig\left(x\right)/\hbar}e^{ig\left(x^{\prime}\right)/\hbar}x^{\prime}\delta\left(x-x^{\prime}\right)\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  x\delta\left(x-x^{\prime}\right) \ \ \ \ \ (11)

The exponentials cancel in the last line since the delta function is non-zero only when {x=x^{\prime}}.

The above result can also be obtained by inserting a couple of identity operators into 8:

\displaystyle   \left\langle x\left|e^{-ig\left(x\right)/\hbar}Xe^{ig\left(x^{\prime}\right)/\hbar}\right|x^{\prime}\right\rangle \displaystyle  = \displaystyle  \int\int\left\langle x\left|e^{-ig\left(x\right)/\hbar}\right|y\right\rangle \left\langle y\left|X\right|z\right\rangle \left\langle z\left|e^{ig\left(x^{\prime}\right)/\hbar}\right|x^{\prime}\right\rangle dy\;dz\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  \int\int\left\langle x\left|e^{-ig\left(x\right)/\hbar}\right|y\right\rangle z\delta\left(y-z\right)\left\langle z\left|e^{ig\left(x^{\prime}\right)/\hbar}\right|x^{\prime}\right\rangle dy\;dz\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  \int\left\langle x\left|e^{-ig\left(x\right)/\hbar}\right|z\right\rangle z\left\langle z\left|e^{ig\left(x^{\prime}\right)/\hbar}\right|x^{\prime}\right\rangle dz\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  \int e^{i\left[g\left(x^{\prime}\right)-g\left(x\right)\right]/\hbar}\left\langle x\left|z\right.\right\rangle z\left\langle z\left|x^{\prime}\right.\right\rangle dz\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  \int e^{i\left[g\left(x^{\prime}\right)-g\left(x\right)\right]/\hbar}\delta\left(x-z\right)z\delta\left(z-x^{\prime}\right)dz\ \ \ \ \ (16)
\displaystyle  \displaystyle  = \displaystyle  e^{i\left[g\left(x^{\prime}\right)-g\left(x\right)\right]/\hbar}x^{\prime}\delta\left(x-x^{\prime}\right)\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  x\delta\left(x-x^{\prime}\right) \ \ \ \ \ (18)

The momentum operator works as follows. Using the original definition 2 on the modified basis we have

\displaystyle   \left\langle \tilde{x}\left|P\right|\tilde{x}^{\prime}\right\rangle \displaystyle  = \displaystyle  -i\hbar\left\langle x\left|e^{-ig\left(x\right)/\hbar}\frac{d}{dx^{\prime}}e^{ig\left(x^{\prime}\right)/\hbar}\right|x^{\prime}\right\rangle \ \ \ \ \ (19)
\displaystyle  \displaystyle  = \displaystyle  -i\hbar\left\langle x\left|e^{-ig\left(x\right)/\hbar}\frac{i}{\hbar}e^{ig\left(x^{\prime}\right)/\hbar}\frac{dg\left(x^{\prime}\right)}{dx^{\prime}}\right|x^{\prime}\right\rangle -\ \ \ \ \ (20)
\displaystyle  \displaystyle  \displaystyle  i\hbar\left\langle x\left|e^{-ig\left(x\right)/\hbar}e^{ig\left(x^{\prime}\right)/\hbar}\frac{d}{dx^{\prime}}\right|x^{\prime}\right\rangle \ \ \ \ \ (21)

From 7 we have

\displaystyle  \frac{dg\left(x\right)}{dx}=\frac{d}{dx}\int^{x}f\left(x^{\prime}\right)dx^{\prime}=f\left(x\right) \ \ \ \ \ (22)

This gives

\displaystyle   \left\langle \tilde{x}\left|P\right|\tilde{x}^{\prime}\right\rangle \displaystyle  = \displaystyle  \left\langle x\left|e^{i\left[g\left(x^{\prime}\right)-g\left(x\right)\right]/\hbar}\left[f\left(x^{\prime}\right)-i\hbar\frac{d}{dx^{\prime}}\right]\right|x^{\prime}\right\rangle \ \ \ \ \ (23)
\displaystyle  \displaystyle  = \displaystyle  e^{i\left[g\left(x^{\prime}\right)-g\left(x\right)\right]/\hbar}\left[f\left(x^{\prime}\right)-i\hbar\frac{d}{dx^{\prime}}\right]\left\langle x\left|x^{\prime}\right.\right\rangle \ \ \ \ \ (24)
\displaystyle  \displaystyle  = \displaystyle  e^{i\left[g\left(x^{\prime}\right)-g\left(x\right)\right]/\hbar}\left[f\left(x^{\prime}\right)-i\hbar\frac{d}{dx^{\prime}}\right]\delta\left(x-x^{\prime}\right)\ \ \ \ \ (25)
\displaystyle  \displaystyle  = \displaystyle  \left[f\left(x\right)-i\hbar\frac{d}{dx}\right]\delta\left(x-x^{\prime}\right) \ \ \ \ \ (26)

This shows that by a unitary change of {X} basis 6, we transform the position and momentum operators (well, just the momentum operator, really) according to 3 and 4. We’ve multiplied the original {\left|x\right\rangle } states by a phase factor which depends on some function {f\left(x\right)}. This doesn’t change the matrix elements of {X}, but it does add {f\left(x\right)} to the matrix elements of {P}. The commonly used definition of {P} is thus with {f\left(x\right)=0}.
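
In terms of wave functions, the equivalent statement is that conjugating the usual {P} by the phase {e^{ig\left(x\right)/\hbar}} produces 4 with {f\left(x\right)=g^{\prime}\left(x\right)}. Here's a short SymPy sketch of that statement (my own check, with {g} left arbitrary):

    import sympy as sp

    x, hbar = sp.symbols('x hbar', real=True)
    g = sp.Function('g')      # g(x) = integral of f(x)
    psi = sp.Function('psi')  # arbitrary wave function

    phase = sp.exp(sp.I * g(x) / hbar)

    # e^{-ig/hbar} (-i hbar d/dx) e^{ig/hbar} psi  should equal  (-i hbar d/dx + g'(x)) psi
    lhs = (1 / phase) * (-sp.I * hbar) * sp.diff(phase * psi(x), x)
    rhs = -sp.I * hbar * sp.diff(psi(x), x) + sp.diff(g(x), x) * psi(x)
    print(sp.simplify(lhs - rhs))    # 0, so the transformed P picks up f(x) = g'(x)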

Differential operators – matrix elements and hermiticity

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 1.10.

Here, we’ll revisit the differential operator on a continuous vector space which we looked at earlier in its role as the momentum operator. This time around, we’ll use the bra-ket notation and vector space results to analyze it, hopefully putting it on a slightly more mathematical foundation.

We define the differential operator {D} acting on a vector {\left|f\right\rangle } in a continuous vector space as having the action

\displaystyle D\left|f\right\rangle =\left|\frac{df}{dx}\right\rangle \ \ \ \ \ (1)

This notation means that {D} operating on {\left|f\right\rangle } produces the vector (ket) {\left|\frac{df}{dx}\right\rangle } corresponding to the function whose form in the {\left|x\right\rangle } basis is {\frac{df\left(x\right)}{dx}}. That is, the projection of {\left|\frac{df}{dx}\right\rangle } onto the basis vector {\left|x\right\rangle } is

\displaystyle \frac{df\left(x\right)}{dx}=\left\langle x\left|\frac{df}{dx}\right.\right\rangle =\left\langle x\left|D\right|f\right\rangle \ \ \ \ \ (2)

By a similar argument to that which we used to deduce the matrix element {\left\langle x\left|x^{\prime}\right.\right\rangle }, we can work out the matrix elements of {D} in the {\left|x\right\rangle } basis. Inserting the unit operator, we have

\displaystyle \left\langle x\left|D\right|f\right\rangle \displaystyle = \displaystyle \int dx^{\prime}\left\langle x\left|D\right|x^{\prime}\right\rangle \left\langle x^{\prime}\left|f\right.\right\rangle \ \ \ \ \ (3)
\displaystyle \displaystyle = \displaystyle \int dx^{\prime}\left\langle x\left|D\right|x^{\prime}\right\rangle f\left(x^{\prime}\right) \ \ \ \ \ (4)

We need this to be equal to {\frac{df}{dx}}. To get this, we can introduce the derivative of the delta function, except this time the delta function is a function of {x-x^{\prime}} rather than just {x} on its own. To see the effect of this derivative, consider the integral

\displaystyle \int dx^{\prime}\frac{d\delta\left(x-x^{\prime}\right)}{dx}f\left(x^{\prime}\right)=\frac{d}{dx}\int dx^{\prime}\delta\left(x-x^{\prime}\right)f\left(x^{\prime}\right)=\frac{df\left(x\right)}{dx} \ \ \ \ \ (5)

In the second step, we could take the derivative outside the integral since {x} is a constant with respect to the integration. Comparing this with 4 we see that

\displaystyle \left\langle x\left|D\right|x^{\prime}\right\rangle \equiv D_{xx^{\prime}}=\frac{d\delta\left(x-x^{\prime}\right)}{dx}=\delta^{\prime}\left(x-x^{\prime}\right) \ \ \ \ \ (6)

Here the prime in {\delta^{\prime}} means derivative with respect to {x}, not {x^{\prime}}. [Note that this is not the same formula as that quoted in the earlier post, where we had {f\left(x\right)\delta^{\prime}\left(x\right)=-f^{\prime}\left(x\right)\delta\left(x\right)} because in that formula it was the same variable {x} that was involved in the derivative of the delta function and in the integral.]

The operator {D} is not hermitian as it stands. Since the delta function is real, looking at {D_{xx^{\prime}}^{\dagger}=D_{x^{\prime}x}^*} in bra-ket notation we see that

\displaystyle D_{xx^{\prime}}^{\dagger}=D_{x^{\prime}x}^*=\left\langle x^{\prime}\left|D\right|x\right\rangle ^*=\delta^{\prime}\left(x^{\prime}-x\right)=-\delta^{\prime}\left(x-x^{\prime}\right)\ne D_{xx^{\prime}} \ \ \ \ \ (7)

Thus {D} is anti-hermitian. It is easy to fix this and create a hermitian operator by multiplying by an imaginary number, such as {-i} (this choice is, of course, to make the new operator consistent with the momentum operator). Calling this new operator {K\equiv-iD} we have

\displaystyle K_{xx^{\prime}}^{\dagger}=K_{x^{\prime}x}^*=\left[-i\delta^{\prime}\left(x^{\prime}-x\right)\right]^*=i\delta^{\prime}\left(x^{\prime}-x\right)=-i\delta^{\prime}\left(x-x^{\prime}\right)=K_{xx^{\prime}} \ \ \ \ \ (8)

A curious fact about {K} (and thus about the momentum operator as well) is that it is not automatically hermitian even with this correction. We’ve seen that it satisfies the hermiticity property with respect to its matrix elements in the position basis, but to be fully hermitian, it must satisfy

\displaystyle \left\langle g\left|K\right|f\right\rangle =\left\langle f\left|K\right|g\right\rangle ^* \ \ \ \ \ (9)

for any two vectors {\left|f\right\rangle } and {\left|g\right\rangle }. Suppose we are interested in {x} over some range {\left[a,b\right]}. Then by inserting a couple of identity operators, we have

\displaystyle \left\langle g\left|K\right|f\right\rangle \displaystyle = \displaystyle \int_{a}^{b}\int_{a}^{b}\left\langle g\left|x\right.\right\rangle \left\langle x\left|K\right|x^{\prime}\right\rangle \left\langle x^{\prime}\left|f\right.\right\rangle dx\;dx^{\prime}\ \ \ \ \ (10)
\displaystyle \displaystyle = \displaystyle -i\int_{a}^{b}g^*\left(x\right)\frac{df}{dx}dx\ \ \ \ \ (11)
\displaystyle \displaystyle = \displaystyle -i\left.g^*\left(x\right)f\left(x\right)\right|_{a}^{b}+i\int_{a}^{b}f\left(x\right)\frac{dg^*}{dx}dx\ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle -i\left.g^*\left(x\right)f\left(x\right)\right|_{a}^{b}+\left\langle f\left|K\right|g\right\rangle ^* \ \ \ \ \ (13)

The result is hermitian only if the first term in the last line is zero, which happens only for certain choices of {f} and {g}. If the limits are infinite, so we’re integrating over all space, and the system is bounded so that both {f} and {g} go to zero at infinity, then we’re OK, and {K} is hermitian. Another option is if {g} and {f} are periodic and the range of integration is equal to an integral multiple of the period, then {g^*f} has the same value at each end and the term becomes zero.
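
Here's a small SymPy check of 10 through 13 (my own sketch, with {\hbar=1} and a pair of explicit functions on {\left[0,1\right]} chosen arbitrarily): the boundary term is exactly the discrepancy between {\left\langle g\left|K\right|f\right\rangle } and {\left\langle f\left|K\right|g\right\rangle ^*}, and it doesn't vanish for functions that fail to match up at the endpoints.

    import sympy as sp

    x = sp.symbols('x', real=True)
    a, b = 0, 1
    f = x**2                 # arbitrary functions that don't vanish at both endpoints
    g = x + sp.I * x**3

    K = lambda phi: -sp.I * sp.diff(phi, x)    # K = -i d/dx with hbar = 1

    lhs = sp.integrate(sp.conjugate(g) * K(f), (x, a, b))                 # <g|K|f>
    rhs = sp.conjugate(sp.integrate(sp.conjugate(f) * K(g), (x, a, b)))   # <f|K|g>*
    boundary = (-sp.I * sp.conjugate(g) * f).subs(x, b) - (-sp.I * sp.conjugate(g) * f).subs(x, a)

    print(sp.simplify(lhs - (boundary + rhs)))   # 0, which is just equation 13
    print(sp.simplify(boundary))                 # nonzero, so K is not hermitian here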

However, as we’ve seen, in quantum mechanics there are cases where we deal with functions such as {e^{ikx}} (for {k} real) that oscillate indefinitely, no matter how large {x} is (see the free particle, for example). There isn’t any mathematically airtight way around such cases (as far as I know), but a hand-wavy way of defining a limit for such oscillating functions is to consider their average behaviour as {x\rightarrow\pm\infty}. The average defined by Shankar is given as

\displaystyle \lim_{x\rightarrow\infty}e^{ikx}e^{-ik^{\prime}x}=\lim_{\substack{L\rightarrow\infty\\ \Delta\rightarrow\infty } }\frac{1}{\Delta}\int_{L}^{L+\Delta}e^{i\left(k-k^{\prime}\right)x}dx \ \ \ \ \ (14)

This is interpreted as looking at the function very far out on the {x} axis (at position {L}), and then considering a very long interval {\Delta} starting at point {L}. Since the integral of {e^{i\left(k-k^{\prime}\right)x}} over one period is zero (it’s just a combination of sine and cosine functions), the integral is always bounded between 0 and the area under half a cycle, as successive half-cycles cancel each other. Dividing by {\Delta}, which is monotonically increasing, ensures that the limit is zero.

This isn’t an ideal solution, but it’s just one of many cases where an infinitely oscillating function is called upon to do seemingly impossible things. The theory seems to hang together fairly well in any case.

Non-denumerable basis: position and momentum states

References: edX online course MIT 8.05 Section 5.6.

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 1.10; Exercises 1.10.1 – 1.10.3.

Although we’ve looked at position and momentum operators in quantum mechanics before, it’s worth another look at the ways that Zwiebach and Shankar introduce them.

First, we’ll have a look at Shankar’s treatment. He begins by considering a string fixed at each end, at positions {x=0} and {x=L}, then asks how we could convey the shape of the string to an observer who cannot see the string directly. We could note the position at some fixed finite number of points between 0 and {L}, but then the remote observer would have only a partial knowledge of the string’s shape; the locations of those portions of the string between the points at which it was measured are still unknown, although the observer could probably get a reasonable picture by interpolating between these points.

We can increase the number of points at which the position is measured to get a better picture, but to convey the exact shape of the string, we need to measure its position at an infinite number of points. This is possible (in principle) but leads to a problem with the definition of the inner product. For two vectors defined on a finite vector space with an orthonormal basis, the inner product is given by the usual formula for the dot product:

\displaystyle \left\langle f\left|g\right.\right\rangle \displaystyle = \displaystyle \sum_{i=1}^{n}f_{i}g_{i}\ \ \ \ \ (1)
\displaystyle \left\langle f\left|f\right.\right\rangle \displaystyle = \displaystyle \sum_{i=1}^{n}f_{i}^{2} \ \ \ \ \ (2)

where {f_{i}} and {g_{i}} are the components of {f} and {g} in the orthonormal basis. If we’re taking {f} to be the displacement of a string and we try to increase the accuracy of the picture by increasing the number {n} of points at which measurements are taken, then the value of {\left\langle f\left|f\right.\right\rangle } continues to increase as {n} increases (provided that {f\ne0} everywhere). As {n\rightarrow\infty} then {\left\langle f\left|f\right.\right\rangle \rightarrow\infty} as well, even though the system we’re measuring (a string of finite length with finite displacement) is certainly not infinite in any practical sense.

Shankar proposes getting around this problem by simply redefining the inner product for a finite vector space to be

\displaystyle \left\langle f\left|g\right.\right\rangle =\sum_{i=1}^{n}f\left(x_{i}\right)g\left(x_{i}\right)\Delta \ \ \ \ \ (3)

 

where {\Delta\equiv L/\left(n+1\right)}. That is, {\Delta} now becomes the distance between adjacent points at which measurements are taken. If we let {n\rightarrow\infty} this leads to the definition of the inner product as an integral

\displaystyle \left\langle f\left|g\right.\right\rangle \displaystyle = \displaystyle \int_{0}^{L}f\left(x\right)g\left(x\right)\;dx\ \ \ \ \ (4)
\displaystyle \left\langle f\left|f\right.\right\rangle \displaystyle = \displaystyle \int_{0}^{L}f^{2}\left(x\right)\;dx \ \ \ \ \ (5)
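
A quick numerical illustration of the point (mine, not Shankar's), using {f\left(x\right)=\sin\left(\pi x/L\right)} on {\left[0,L\right]} with {L=1}: the bare sum 2 grows without bound as {n} increases, while the sum 3 containing the {\Delta} factor settles down to the integral 5.

    import numpy as np

    L = 1.0
    f = lambda x: np.sin(np.pi * x / L)

    for n in (10, 100, 1000, 10000):
        delta = L / (n + 1)
        xi = delta * np.arange(1, n + 1)        # the n interior sample points
        bare = np.sum(f(xi)**2)                 # eq 2: keeps growing with n
        weighted = np.sum(f(xi)**2) * delta     # eq 3: stays at the integral of f^2 = 1/2
        print(n, round(bare, 3), round(weighted, 6))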

This looks familiar enough, if you’ve done any work with inner products in quantum mechanics, but there is a subtle point which Shankar overlooks. In going from 1 to 3, we have introduced a factor {\Delta} which, in the string example at least, has the dimensions of length, so the physical interpretation of these two equations is different. The units of {\left\langle f\left|g\right.\right\rangle } appear to be different in the two cases. Now in quantum theory, inner products of the continuous type usually involve the wave function multiplied by its complex conjugate, with possibly another operator thrown in if we’re trying to find the expectation value of some observable. The square modulus of the wave function, {\left|\Psi\right|^{2}}, is taken to be a probability density, so it has units of inverse length (in one dimension) or inverse volume (in three dimensions), which makes the integral work out properly.

Admittedly, when we’re using {f} to represent the displacement of a string, it’s not obvious what meaning the inner product of {f} with anything else would actually have, so maybe the point isn’t worth worrying about. However, it does seem to be something that it would be worth Shankar including a comment about.

From this point, Shankar continues by saying that this infinite dimensional vector space is spanned by basis vectors {\left|x\right\rangle }, with one basis vector for each value of {x}. We require this basis to be orthogonal, which means that we must have, if {x\ne x^{\prime}}

\displaystyle \left\langle x\left|x^{\prime}\right.\right\rangle =0 \ \ \ \ \ (6)

We then generalize the identity operator to be

\displaystyle I=\int\left|x\right\rangle \left\langle x\right|dx \ \ \ \ \ (7)

 

which leads to

\displaystyle \left\langle x\left|f\right.\right\rangle =\int\left\langle x\left|x^{\prime}\right.\right\rangle \left\langle x^{\prime}\left|f\right.\right\rangle dx^{\prime} \ \ \ \ \ (8)

The bra-ket {\left\langle x\left|f\right.\right\rangle } is the projection of the vector {\left|f\right\rangle } onto the {\left|x\right\rangle } basis vector, so it is just {f\left(x\right)}. This means

\displaystyle f\left(x\right)=\int\left\langle x\left|x^{\prime}\right.\right\rangle f\left(x^{\prime}\right)dx^{\prime} \ \ \ \ \ (9)

 

which leads to the definition of the Dirac delta function as the normalization of {\left\langle x\left|x^{\prime}\right.\right\rangle }:

\displaystyle \left\langle x\left|x^{\prime}\right.\right\rangle =\delta\left(x-x^{\prime}\right) \ \ \ \ \ (10)

Shankar then describes some properties of the delta function and its derivative, most of which we’ve already covered. For example, we’ve seen these two results for the delta function:

\displaystyle \delta\left(ax\right) \displaystyle = \displaystyle \frac{\delta\left(x\right)}{\left|a\right|}\ \ \ \ \ (11)
\displaystyle \frac{d\theta\left(x-x^{\prime}\right)}{dx} \displaystyle = \displaystyle \delta\left(x-x^{\prime}\right) \ \ \ \ \ (12)

where {\theta} is the step function

\displaystyle \theta\left(x-x^{\prime}\right)\equiv\begin{cases} 0 & x\le x^{\prime}\\ 1 & x>x^{\prime} \end{cases} \ \ \ \ \ (13)

One other result is that for a function {f\left(x\right)} with zeroes at a number of points {x_{i}}, we have

\displaystyle \delta\left(f\left(x\right)\right)=\sum_{i}\frac{\delta\left(x_{i}-x\right)}{\left|df/dx_{i}\right|} \ \ \ \ \ (14)

To see this, consider one of the {x_{i}} where {f\left(x_{i}\right)=0}. Expanding in a Taylor series about this point, we have

\displaystyle f\left(x_{i}+\left(x-x_{i}\right)\right) \displaystyle = \displaystyle f\left(x_{i}\right)+\left(x-x_{i}\right)\frac{df}{dx_{i}}+\ldots\ \ \ \ \ (15)
\displaystyle \displaystyle = \displaystyle 0+\left(x-x_{i}\right)\frac{df}{dx_{i}} \ \ \ \ \ (16)

From 11 we have

\displaystyle \delta\left(\left(x-x_{i}\right)\frac{df}{dx_{i}}\right)=\frac{\delta\left(x_{i}-x\right)}{\left|df/dx_{i}\right|} \ \ \ \ \ (17)

The behaviour is the same at all points {x_{i}} and since {\delta\left(x_{i}-x\right)=0} at all other {x_{j}\ne x_{i}} where {f\left(x_{j}\right)=0}, we can just add the delta functions for each zero of {f}.
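
A numerical sanity check of 14 (my own sketch in Python): approximate the delta function by a narrow Gaussian and integrate {\delta\left(f\left(x\right)\right)} against a smooth test function; the result approaches the sum over the zeros of {f}, each weighted by {1/\left|f^{\prime}\right|}. The particular {f}, test function and width are arbitrary choices.

    import numpy as np

    eps = 1e-3
    delta_eps = lambda u: np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

    f = lambda x: x**2 - 1       # zeros at x = +1 and x = -1, with |f'| = 2 there
    g = lambda x: x**2 + 3       # arbitrary smooth test function

    x = np.linspace(-3, 3, 1_000_001)
    dx = x[1] - x[0]
    numeric = np.sum(delta_eps(f(x)) * g(x)) * dx

    exact = g(1.0) / 2 + g(-1.0) / 2     # sum over the zeros of g(x_i)/|f'(x_i)|
    print(numeric, exact)                # both ~ 4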

Turning now to Zwiebach’s treatment, he begins with the basis states {\left|x\right\rangle } and position operator {\hat{x}} with the eigenvalue equation

\displaystyle \hat{x}\left|x\right\rangle =x\left|x\right\rangle \ \ \ \ \ (18)

and simply defines the inner product between two position states to be

\displaystyle \left\langle x\left|y\right.\right\rangle =\delta\left(x-y\right) \ \ \ \ \ (19)

With this definition, 9 follows immediately. We can therefore write a quantum state {\left|\psi\right\rangle } as

\displaystyle \left|\psi\right\rangle =I\left|\psi\right\rangle =\int\left|x\right\rangle \left\langle x\left|\psi\right.\right\rangle dx=\int\left|x\right\rangle \psi\left(x\right)dx \ \ \ \ \ (20)

That is, the vector {\left|\psi\right\rangle } is the integral of its projections {\psi\left(x\right)} onto the basis vectors {\left|x\right\rangle }.

The position operator {\hat{x}} is hermitian as can be seen from

\displaystyle \left\langle x_{1}\left|\hat{x}^{\dagger}\right|x_{2}\right\rangle \displaystyle = \displaystyle \left\langle x_{2}\left|\hat{x}\right|x_{1}\right\rangle ^*\ \ \ \ \ (21)
\displaystyle \displaystyle = \displaystyle x_{1}\left\langle x_{2}\left|x_{1}\right.\right\rangle ^*\ \ \ \ \ (22)
\displaystyle \displaystyle = \displaystyle x_{1}\delta\left(x_{2}-x_{1}\right)^*\ \ \ \ \ (23)
\displaystyle \displaystyle = \displaystyle x_{1}\delta\left(x_{2}-x_{1}\right)\ \ \ \ \ (24)
\displaystyle \displaystyle = \displaystyle x_{2}\delta\left(x_{2}-x_{1}\right)\ \ \ \ \ (25)
\displaystyle \displaystyle = \displaystyle \left\langle x_{1}\left|\hat{x}\right|x_{2}\right\rangle \ \ \ \ \ (26)

The fourth line follows because the delta function is real, and the fifth follows because {\delta\left(x_{2}-x_{1}\right)} is non-zero only when {x_{1}=x_{2}}.

Zwiebach then introduces the momentum eigenstates {\left|p\right\rangle } which are analogous to the position states {\left|x\right\rangle }, in that

\displaystyle \left\langle p^{\prime}\left|p\right.\right\rangle \displaystyle = \displaystyle \delta\left(p^{\prime}-p\right)\ \ \ \ \ (27)
\displaystyle I \displaystyle = \displaystyle \int dp\left|p\right\rangle \left\langle p\right|\ \ \ \ \ (28)
\displaystyle \hat{p}\left|p\right\rangle \displaystyle = \displaystyle p\left|p\right\rangle \ \ \ \ \ (29)
\displaystyle \tilde{\psi}\left(p\right) \displaystyle = \displaystyle \left\langle p\left|\psi\right.\right\rangle \ \ \ \ \ (30)

By the same calculation as for {\left|x\right\rangle }, we see that {\hat{p}} is hermitian.

To get a relation between the {\left|x\right\rangle } and {\left|p\right\rangle } bases, we require that {\left\langle x\left|p\right.\right\rangle } is the wave function for a particle with momentum {p} in the {x} basis, which we’ve seen is

\displaystyle \psi\left(x\right)=\frac{1}{\sqrt{2\pi\hbar}}e^{ipx/\hbar} \ \ \ \ \ (31)

 

Zwiebach then shows that this is consistent with the equation

\displaystyle \left\langle x\left|\hat{p}\right|\psi\right\rangle =\frac{\hbar}{i}\frac{d}{dx}\left\langle x\left|\psi\right.\right\rangle =\frac{\hbar}{i}\frac{d\psi\left(x\right)}{dx} \ \ \ \ \ (32)

We can get a similar relation by switching {x} and {p}:

\displaystyle \left\langle p\left|\hat{x}\right|\psi\right\rangle \displaystyle = \displaystyle \int dx\left\langle p\left|x\right.\right\rangle \left\langle x\left|\hat{x}\right|\psi\right\rangle \ \ \ \ \ (33)
\displaystyle \displaystyle = \displaystyle \int dx\left\langle x\left|p\right.\right\rangle ^*x\left\langle x\left|\psi\right.\right\rangle \ \ \ \ \ (34)

From 31 we see

\displaystyle \left\langle x\left|p\right.\right\rangle ^* \displaystyle = \displaystyle \frac{1}{\sqrt{2\pi\hbar}}e^{-ipx/\hbar}\ \ \ \ \ (35)
\displaystyle \left\langle x\left|p\right.\right\rangle ^*x \displaystyle = \displaystyle i\hbar\frac{d}{dp}\left\langle x\left|p\right.\right\rangle ^*\ \ \ \ \ (36)
\displaystyle \int dx\left\langle x\left|p\right.\right\rangle ^*x\left\langle x\left|\psi\right.\right\rangle \displaystyle = \displaystyle i\hbar\int dx\;\frac{d}{dp}\left\langle x\left|p\right.\right\rangle ^*\left\langle x\left|\psi\right.\right\rangle \ \ \ \ \ (37)
\displaystyle \displaystyle = \displaystyle i\hbar\frac{d}{dp}\int dx\;\left\langle x\left|p\right.\right\rangle ^*\left\langle x\left|\psi\right.\right\rangle \ \ \ \ \ (38)
\displaystyle \displaystyle = \displaystyle i\hbar\frac{d}{dp}\int dx\;\left\langle p\left|x\right.\right\rangle \left\langle x\left|\psi\right.\right\rangle \ \ \ \ \ (39)
\displaystyle \displaystyle = \displaystyle i\hbar\frac{d\tilde{\psi}\left(p\right)}{dp} \ \ \ \ \ (40)

In the fourth line, we took the {\frac{d}{dp}} outside the integral, since the integration variable is {x}, and in the last line we used 7 and 30. Thus we have

\displaystyle \left\langle p\left|\hat{x}\right|\psi\right\rangle =i\hbar\frac{d\tilde{\psi}\left(p\right)}{dp} \ \ \ \ \ (41)
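
We can check 31 and 41 symbolically for a concrete state (my own sketch, with {\hbar=1} and an unnormalized Gaussian {\psi\left(x\right)=e^{-x^{2}/2}}): the momentum-space function {\tilde{\psi}\left(p\right)} is built directly from 31, and the two sides of 41 agree.

    import sympy as sp

    x, p = sp.symbols('x p', real=True)
    psi = sp.exp(-x**2 / 2)                                  # unnormalized Gaussian, hbar = 1
    braket_xp = sp.exp(sp.I * p * x) / sp.sqrt(2 * sp.pi)    # <x|p> from eq 31

    # psi~(p) = integral of <p|x> psi(x) dx = integral of <x|p>* psi(x) dx
    psi_tilde = sp.integrate(sp.conjugate(braket_xp) * psi, (x, -sp.oo, sp.oo))

    # <p|x^|psi> computed two ways: directly, and as i*d(psi~)/dp (eq 41)
    direct = sp.integrate(sp.conjugate(braket_xp) * x * psi, (x, -sp.oo, sp.oo))
    from_41 = sp.I * sp.diff(psi_tilde, p)
    print(sp.simplify(direct - from_41))    # 0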

Hydrogen atom: powers of the momentum operator

References: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 6.15.

In this post, we’ll derive results concerning powers of the momentum operator {p} when applied to the {l=0} states of hydrogen. The general form of the hydrogen wave function is

\displaystyle  \psi_{nlm}\left(r,\theta,\phi\right)=R_{nl}\left(r\right)Y_{l}^{m}\left(\theta,\phi\right) \ \ \ \ \ (1)

where {R} is the radial function and {Y_{l}^{m}} is a spherical harmonic. If {l=0}, then the only possible value of {m} is {m=0} and {Y_{0}^{0}=1/\sqrt{4\pi}}, which is independent of {\theta} and {\phi}. In spherical coordinates, the square of the momentum operator is then

\displaystyle  p^{2}=-\hbar^{2}\nabla^{2}=-\frac{\hbar^{2}}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{d}{dr}\right) \ \ \ \ \ (2)

We’d like to show that this operator is hermitian, that is, for two functions {f\left(r\right)} and {g\left(r\right)} that

\displaystyle  \left\langle f\right.\left|p^{2}g\right\rangle =\left\langle p^{2}f\right.\left|g\right\rangle \ \ \ \ \ (3)

We start with

\displaystyle   p^{2}g \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{dg}{dr}\right)\ \ \ \ \ (4)
\displaystyle  \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{r^{2}}\left(2rg^{\prime}+r^{2}g^{\prime\prime}\right)\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  -\hbar^{2}\left(2\frac{g^{\prime}}{r}+g^{\prime\prime}\right) \ \ \ \ \ (6)

We then get

\displaystyle   \left\langle f\right.\left|p^{2}g\right\rangle \displaystyle  = \displaystyle  -4\pi\hbar^{2}\int_{0}^{\infty}\frac{f}{r^{2}}\left(2rg^{\prime}+r^{2}g^{\prime\prime}\right)r^{2}dr\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\hbar^{2}\int_{0}^{\infty}f\left(2rg^{\prime}+r^{2}g^{\prime\prime}\right)dr\ \ \ \ \ (8)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\hbar^{2}\left[\left.r^{2}fg^{\prime}\right|_{0}^{\infty}-\int_{0}^{\infty}r^{2}\left(f^{\prime}g^{\prime}+fg^{\prime\prime}\right)dr+\int_{0}^{\infty}r^{2}fg^{\prime\prime}dr\right]\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\hbar^{2}\left[\left.r^{2}fg^{\prime}\right|_{0}^{\infty}-\int_{0}^{\infty}r^{2}f^{\prime}g^{\prime}dr\right]\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\hbar^{2}\left[\left.r^{2}fg^{\prime}\right|_{0}^{\infty}-\left.r^{2}f^{\prime}g\right|_{0}^{\infty}+\int_{0}^{\infty}\left(2rf^{\prime}+r^{2}f^{\prime\prime}\right)gdr\right]\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\hbar^{2}\int_{0}^{\infty}\left(2rf^{\prime}+r^{2}f^{\prime\prime}\right)gdr\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  \left\langle p^{2}f\right.\left|g\right\rangle \ \ \ \ \ (13)

where in the second-to-last line we used the fact that all radial functions in the hydrogen atom have an {e^{-r/na}} term multiplied by a polynomial in {r}. The exponential ensures the integrated terms are zero at infinity, and the {r^{2}} factor ensures they are zero at {r=0}. Thus {p^{2}} is hermitian.
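
Here's a small SymPy check of 3 (my own sketch, with {\hbar=1}), using the {l=0} radial dependence of two hydrogen-like states, left unnormalized: {f=e^{-r}} and {g=\left(2-r\right)e^{-r/2}}.

    import sympy as sp

    r = sp.symbols('r', positive=True)
    f = sp.exp(-r)                   # ~ R_10
    g = (2 - r) * sp.exp(-r / 2)     # ~ R_20
    p2 = lambda u: -(1 / r**2) * sp.diff(r**2 * sp.diff(u, r), r)    # eq 2 with hbar = 1

    lhs = 4 * sp.pi * sp.integrate(f * p2(g) * r**2, (r, 0, sp.oo))   # <f|p^2 g>
    rhs = 4 * sp.pi * sp.integrate(p2(f) * g * r**2, (r, 0, sp.oo))   # <p^2 f|g>
    print(sp.simplify(lhs - rhs))    # 0, so p^2 is hermitian on these states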

For {p^{4}}, we start from 6 and apply 2:

\displaystyle  p^{4}g=\hbar^{4}\nabla^{2}\left(2\frac{g^{\prime}}{r}+g^{\prime\prime}\right) \ \ \ \ \ (14)

For the first term, we have

\displaystyle   \nabla^{2}\frac{g^{\prime}}{r} \displaystyle  = \displaystyle  \nabla\cdot\left(\nabla\frac{g^{\prime}}{r}\right)\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  \nabla\cdot\left[g^{\prime}\nabla\frac{1}{r}+\frac{1}{r}\nabla g^{\prime}\right]\ \ \ \ \ (16)
\displaystyle  \displaystyle  = \displaystyle  g^{\prime}\nabla^{2}\frac{1}{r}+2\left(\nabla\frac{1}{r}\right)\cdot\left(\nabla g^{\prime}\right)+\frac{1}{r}\nabla^{2}g^{\prime}\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\delta\left(\mathbf{r}\right)g^{\prime}-\frac{2}{r^{2}}\hat{\mathbf{r}}\cdot\left(g^{\prime\prime}\hat{\mathbf{r}}\right)+\frac{1}{r}\nabla^{2}g^{\prime}\ \ \ \ \ (18)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\delta\left(\mathbf{r}\right)g^{\prime}-2\frac{g^{\prime\prime}}{r^{2}}+\frac{1}{r^{3}}\frac{d}{dr}\left(r^{2}g^{\prime\prime}\right)\ \ \ \ \ (19)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\delta\left(\mathbf{r}\right)g^{\prime}-2\frac{g^{\prime\prime}}{r^{2}}+2\frac{g^{\prime\prime}}{r^{2}}+\frac{g^{\left(3\right)}}{r}\ \ \ \ \ (20)
\displaystyle  \displaystyle  = \displaystyle  -4\pi\delta\left(\mathbf{r}\right)g^{\prime}+\frac{g^{\left(3\right)}}{r} \ \ \ \ \ (21)

where the notation {g^{\left(i\right)}} denotes the {i}th derivative and we’ve used a couple of earlier results to get the fourth line:

\displaystyle   \nabla^{2}\frac{1}{r} \displaystyle  = \displaystyle  -4\pi\delta\left(\mathbf{r}\right)\ \ \ \ \ (22)
\displaystyle  \nabla\frac{1}{r} \displaystyle  = \displaystyle  -\frac{\hat{\mathbf{r}}}{r^{2}} \ \ \ \ \ (23)

For the second term in 14 we have

\displaystyle   \nabla^{2}g^{\prime\prime} \displaystyle  = \displaystyle  \frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}g^{\left(3\right)}\right)\ \ \ \ \ (24)
\displaystyle  \displaystyle  = \displaystyle  2\frac{g^{\left(3\right)}}{r}+g^{\left(4\right)} \ \ \ \ \ (25)

Inserting 21 and 25 into 14 we get

\displaystyle  p^{4}g=\hbar^{4}\left(\frac{4}{r}g^{\left(3\right)}+g^{\left(4\right)}-8\pi\delta\left(\mathbf{r}\right)g^{\prime}\right) \ \ \ \ \ (26)

Now we want to calculate {\left\langle f\right.\left|p^{4}g\right\rangle } and compare it with {\left\langle g\right.\left|p^{4}f\right\rangle }, so we have

\displaystyle   \frac{1}{\hbar^{4}}\left\langle f\right.\left|p^{4}g\right\rangle \displaystyle  = \displaystyle  4\pi\int_{0}^{\infty}\left(4rfg^{\left(3\right)}+r^{2}fg^{\left(4\right)}\right)dr-8\pi\int\delta\left(\mathbf{r}\right)fg^{\prime}d^{3}\mathbf{r}\ \ \ \ \ (27)
\displaystyle  \frac{1}{\hbar^{4}}\left\langle g\right.\left|p^{4}f\right\rangle \displaystyle  = \displaystyle  4\pi\int_{0}^{\infty}\left(4rgf^{\left(3\right)}+r^{2}gf^{\left(4\right)}\right)dr-8\pi\int\delta\left(\mathbf{r}\right)gf^{\prime}d^{3}\mathbf{r} \ \ \ \ \ (28)

The aim is to integrate by parts enough times to eliminate the derivatives of {g} under the integral. Again, this is tedious, but we can plow onwards, or else just use some software to ease the task. Using Maple’s IntegrationTools[Parts] operation, we find (after eliminating all terms evaluated at {r=\infty} because they contain an {e^{-r/na}} factor, and those terms containing a factor of {r} or {r^{2}} evaluated at {r=0}):

\displaystyle   \int_{0}^{\infty}4rfg^{\left(3\right)}dr \displaystyle  = \displaystyle  4f\left(0\right)g^{\prime}\left(0\right)-8f^{\prime}\left(0\right)g\left(0\right)-\int_{0}^{\infty}g\left(12f^{\prime\prime}+4rf^{\left(3\right)}\right)dr\ \ \ \ \ (29)
\displaystyle  \int_{0}^{\infty}r^{2}fg^{\left(4\right)}dr \displaystyle  = \displaystyle  -2f\left(0\right)g^{\prime}\left(0\right)+6f^{\prime}\left(0\right)g\left(0\right)+\int_{0}^{\infty}g\left(12f^{\prime\prime}+8rf^{\left(3\right)}+r^{2}f^{\left(4\right)}\right)dr \ \ \ \ \ (30)

Adding these together and adding on the delta function term in 27, we get, by comparing the result with 28

\displaystyle   \frac{1}{4\pi\hbar^{4}}\left\langle f\right.\left|p^{4}g\right\rangle \displaystyle  = \displaystyle  2f\left(0\right)g^{\prime}\left(0\right)-2f^{\prime}\left(0\right)g\left(0\right)+\int_{0}^{\infty}g\left(r^{2}f^{\left(4\right)}+4rf^{\left(3\right)}\right)dr-2f\left(0\right)g^{\prime}\left(0\right)\ \ \ \ \ (31)
\displaystyle  \left\langle f\right.\left|p^{4}g\right\rangle \displaystyle  = \displaystyle  8\pi\hbar^{4}\left(f\left(0\right)g^{\prime}\left(0\right)-f^{\prime}\left(0\right)g\left(0\right)\right)+\left\langle g\right.\left|p^{4}f\right\rangle +8\pi\hbar^{4}\left(g\left(0\right)f^{\prime}\left(0\right)-f\left(0\right)g^{\prime}\left(0\right)\right)\ \ \ \ \ (32)
\displaystyle  \displaystyle  = \displaystyle  \left\langle g\right.\left|p^{4}f\right\rangle \ \ \ \ \ (33)

Thus {p^{4}} is also hermitian.
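
The same two radial functions used above for {p^{2}} can be used to check 33, including the delta-function term from 26 (again my own SymPy sketch, with {\hbar=1}):

    import sympy as sp

    r = sp.symbols('r', positive=True)
    f = sp.exp(-r)
    g = (2 - r) * sp.exp(-r / 2)

    def bracket(u, v):
        # (1/hbar^4) <u|p^4 v> from eqs 26-27: radial integral plus delta-function term
        radial = 4 * sp.pi * sp.integrate(
            4 * r * u * sp.diff(v, r, 3) + r**2 * u * sp.diff(v, r, 4), (r, 0, sp.oo))
        delta_term = -8 * sp.pi * u.subs(r, 0) * sp.diff(v, r).subs(r, 0)
        return radial + delta_term

    print(sp.simplify(bracket(f, g) - bracket(g, f)))    # 0, so p^4 is hermitian here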

[Note that this is the opposite result to that specified in Griffiths’s problem 6.15, where he asks us to prove that {p^{4}} is not hermitian. However, Griffiths corrects this result in his errata. Thanks to Jack Whaley-Baldwin for pointing this out.]

Commutators: a few theorems

Required math: calculus

Required physics: Schrödinger equation

References: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Chapter 3, Post 13.

The commutator of two operators is defined as

\displaystyle \left[A,B\right]\equiv AB-BA \ \ \ \ \ (1)

In general, a commutator is non-zero, since the order in which we apply operators can make a difference. In practice, to work out a commutator we need to apply it to a test function {f}, so that we really need to work out {\left[A,B\right]f} and then remove the test function to see the result. This is because many operators, such as the momentum, involve taking the derivative.

We’ll now have a look at a few theorems involving commutators.

Theorem 1:

\displaystyle \left[AB,C\right]=A\left[B,C\right]+\left[A,C\right]B \ \ \ \ \ (2)

Proof: The LHS is:

\displaystyle \left[AB,C\right]=ABC-CAB \ \ \ \ \ (3)

The RHS is:

\displaystyle A\left[B,C\right]+\left[A,C\right]B \displaystyle = \displaystyle ABC-ACB+ACB-CAB\ \ \ \ \ (4)
\displaystyle \displaystyle = \displaystyle ABC-CAB\ \ \ \ \ (5)
\displaystyle \displaystyle = \displaystyle \left[AB,C\right] \ \ \ \ \ (6)

QED.
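
Since 2 is a purely algebraic identity, it can also be spot-checked with random matrices; here's a trivial Python sketch (the sizes and seed are arbitrary).

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

    comm = lambda X, Y: X @ Y - Y @ X
    lhs = comm(A @ B, C)
    rhs = A @ comm(B, C) + comm(A, C) @ B
    print(np.allclose(lhs, rhs))    # True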

Theorem 2:

\displaystyle \left[x^{n},p\right]=i\hbar nx^{n-1} \ \ \ \ \ (7)

where {p} is the momentum operator.

Proof: Using {p=\frac{\hbar}{i}\partial/\partial x} and letting the commutator operate on some arbitrary function {g}:

\displaystyle \left[x^{n},p\right]g \displaystyle = \displaystyle x^{n}\frac{\hbar}{i}\frac{\partial g}{\partial x}-\frac{\hbar}{i}\frac{\partial}{\partial x}(x^{n}g)\ \ \ \ \ (8)
\displaystyle \displaystyle = \displaystyle x^{n}\frac{\hbar}{i}\frac{\partial g}{\partial x}-\frac{\hbar}{i}nx^{n-1}g-x^{n}\frac{\hbar}{i}\frac{\partial g}{\partial x}\ \ \ \ \ (9)
\displaystyle \displaystyle = \displaystyle i\hbar nx^{n-1}g \ \ \ \ \ (10)

Removing the function {g} gives the result {\left[x^{n},p\right]=i\hbar nx^{n-1}}. QED.

Theorem 3:

\displaystyle \left[f(x),p\right]=i\hbar\frac{df}{dx} \ \ \ \ \ (11)

Again, letting the commutator operate on a function {g}:

\displaystyle \left[f(x),p\right]g \displaystyle = \displaystyle f\frac{\hbar}{i}\frac{\partial g}{\partial x}-\frac{\hbar}{i}\frac{\partial}{\partial x}(fg)\ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle f\frac{\hbar}{i}\frac{\partial g}{\partial x}-\frac{\hbar}{i}\frac{\partial f}{\partial x}g-f\frac{\hbar}{i}\frac{\partial g}{\partial x}\ \ \ \ \ (13)
\displaystyle \displaystyle = \displaystyle i\hbar\frac{\partial f}{\partial x}g \ \ \ \ \ (14)

Removing {g} gives the result {\left[f(x),p\right]=i\hbar\partial f/\partial x}. QED.
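
Both theorems 2 and 3 can be verified symbolically by letting the commutator act on a test function, as in the proofs above. Here's a SymPy sketch (mine, with {n} and {f} left general):

    import sympy as sp

    x, hbar = sp.symbols('x hbar', real=True)
    n = sp.symbols('n', positive=True, integer=True)
    f = sp.Function('f')
    g = sp.Function('g')

    p = lambda phi: -sp.I * hbar * sp.diff(phi, x)    # momentum operator

    thm2 = x**n * p(g(x)) - p(x**n * g(x))            # [x^n, p] g
    print(sp.simplify(thm2 - sp.I * hbar * n * x**(n - 1) * g(x)))    # 0

    thm3 = f(x) * p(g(x)) - p(f(x) * g(x))            # [f(x), p] g
    print(sp.simplify(thm3 - sp.I * hbar * sp.diff(f(x), x) * g(x)))  # 0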

Infinite square well: momentum

Required math: calculus

Required physics: Schrödinger equation

References: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 3.10.

The stationary states of the infinite square well are

\displaystyle  \psi_{n}(x)=\sqrt{\frac{2}{a}}\sin\frac{n\pi x}{a} \ \ \ \ \ (1)

where the well extends over the interval {x\in[0,a]}.

The momentum operator is

\displaystyle  \hat{p}=-i\hbar\frac{d}{dx} \ \ \ \ \ (2)

and we’ve seen that its eigenvalues are continuous in the case where we’re considering an infinite interval. What happens if the interval is finite as in this case?

We can check the eigenvalue condition directly:

\displaystyle   \hat{p}\psi_{n} \displaystyle  = \displaystyle  -i\hbar\frac{d}{dx}\sqrt{\frac{2}{a}}\sin\frac{n\pi x}{a}\ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  -i\hbar\sqrt{\frac{2}{a}}\frac{n\pi}{a}\cos\frac{n\pi x}{a}\ \ \ \ \ (4)
\displaystyle  \displaystyle  = \displaystyle  -i\frac{\hbar n\pi}{a}\cot\frac{n\pi x}{a}\psi_{n}(x) \ \ \ \ \ (5)

Thus {\psi_{n}} is not an eigenfunction of momentum, since the momentum operator doesn’t yield the original wave function multiplied by a constant.

The mean momentum is zero since it is

\displaystyle   \left\langle p\right\rangle \displaystyle  = \displaystyle  \int_{0}^{a}\psi_{n}\hat{p}\psi_{n}dx\ \ \ \ \ (6)
\displaystyle  \displaystyle  = \displaystyle  -i\frac{2n\pi\hbar}{a^{2}}\int_{0}^{a}\sin\frac{n\pi x}{a}\cos\frac{n\pi x}{a}dx\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  0 \ \ \ \ \ (8)

This means that the momentum is equally likely to be in either direction. The magnitude of the momentum is a constant, since this is a state with fixed energy, and {\left|p\right|=\sqrt{2mE}=n\pi\hbar/a}.
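
These expectation values are easy to confirm symbolically; here's a quick SymPy sketch (mine) for the {n}th stationary state, with {n} a positive integer and {a} and {\hbar} left as symbols.

    import sympy as sp

    x = sp.symbols('x', real=True)
    a, hbar = sp.symbols('a hbar', positive=True)
    n = sp.symbols('n', positive=True, integer=True)

    psi = sp.sqrt(2 / a) * sp.sin(n * sp.pi * x / a)

    mean_p = sp.integrate(psi * (-sp.I * hbar) * sp.diff(psi, x), (x, 0, a))
    mean_p2 = sp.integrate(psi * (-hbar**2) * sp.diff(psi, x, 2), (x, 0, a))

    print(sp.simplify(mean_p))     # 0
    print(sp.simplify(mean_p2))    # pi**2*hbar**2*n**2/a**2, so |p| = n*pi*hbar/a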

Momentum: eigenvalues and normalization

Required math: calculus

Required physics: Schrödinger equation

References: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Section 3.3.2; Problem 3.9.

The example of a periodic function which we studied earlier had discrete eigenvalues for both the first and second derivative of the periodic variable. In particular, for the operator {id/d\phi} we found that the eigenvalues are all integers, with eigenfunctions {e^{in\phi}} since

\displaystyle  i\frac{d}{d\phi}e^{in\phi}=-ne^{in\phi} \ \ \ \ \ (1)

This operator bears a strong resemblance to the momentum operator in one dimension, which is {\hat{p}=-i\hbar d/dx}. However, if we try to find the eigenvalues and eigenfunctions of {\hat{p}}, we run into a bit of a problem. We try to solve, for some eigenvalue {p}:

\displaystyle   \hat{p}f \displaystyle  = \displaystyle  pf\ \ \ \ \ (2)
\displaystyle  -i\hbar\frac{d}{dx}f \displaystyle  = \displaystyle  pf \ \ \ \ \ (3)

This has the solution

\displaystyle  f_{p}(x)=Ae^{ipx/\hbar} \ \ \ \ \ (4)

for some constant {A}. Ordinarily, at this stage, we would impose some boundary condition on the solution to obtain acceptable values of {p}. The problem is that we’d like to define this function over all {x} and, if we try to do this, the function is not normalizable for any value of {p}. At first glance, we might think that if we chose {p} to be purely imaginary as in {p=\alpha i}, it might work since we get

\displaystyle  f(x)=Ae^{-\alpha x/\hbar} \ \ \ \ \ (5)

but of course this tends to infinity at large negative {x} so that doesn’t work. In fact if {p} has a non-zero imaginary part, {f(x)} goes to infinity at one end of its domain. So we’re restricted to looking at real values of {p}.

In that case, {f(x)} is periodic and thus is still not normalizable. Thus there are no eigenfunctions of the momentum operator that lie in Hilbert space (which, remember, is the vector space of square-integrable functions).

What happens if we do the normalization integral anyway? That is, we try

\displaystyle  \int_{-\infty}^{\infty}f_{p_{1}}^*\left(x\right)f_{p_{2}}\left(x\right)dx=\left|A\right|^{2}\int_{-\infty}^{\infty}e^{i\left(p_{2}-p_{1}\right)x/\hbar}dx \ \ \ \ \ (6)

By using the variable transformation {\xi\equiv x/\hbar}, we get

\displaystyle  \int_{-\infty}^{\infty}f_{p_{1}}^*\left(x\right)f_{p_{2}}\left(x\right)dx=\left|A\right|^{2}\hbar\int_{-\infty}^{\infty}e^{i\left(p_{2}-p_{1}\right)\xi}d\xi \ \ \ \ \ (7)

It’s at this point that we invoke the dodgy formula involving the Dirac delta function that we obtained a while back. Using this, we can write the integral as a delta function, and we get

\displaystyle  \int_{-\infty}^{\infty}f_{p_{1}}^*\left(x\right)f_{p_{2}}\left(x\right)dx=2\pi\left|A\right|^{2}\hbar\delta\left(p_{2}-p_{1}\right) \ \ \ \ \ (8)

This is sort of like a normalization condition, in that the integral is zero when {p_{1}\ne p_{2}} (that is, if you believe that the integral really does evaluate to a delta function), and non-zero (infinite, in fact) if {p_{1}=p_{2}}. In fact, if we take the constant {A} to be

\displaystyle  A=\frac{1}{\sqrt{2\pi\hbar}} \ \ \ \ \ (9)

and use the bra-ket notation for the integral, we can write

\displaystyle  \left\langle \left.f_{p_{1}}\right|f_{p_{2}}\right\rangle =\delta\left(p_{2}-p_{1}\right) \ \ \ \ \ (10)

We can also express an arbitrary function {g(x)} as a Fourier transform over {p} by writing

\displaystyle   g(x) \displaystyle  = \displaystyle  \int_{-\infty}^{\infty}c\left(p\right)f_{p}\left(x\right)dp\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}c\left(p\right)e^{ipx/\hbar}dp\ \ \ \ \ (12)
\displaystyle  g\left(\hbar\xi\right) \displaystyle  = \displaystyle  \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}c\left(p\right)e^{ip\xi}dp \ \ \ \ \ (13)

From Plancherel’s theorem, we can invert this relation to get {c\left(p\right)}:

\displaystyle   c\left(p\right) \displaystyle  = \displaystyle  \sqrt{\frac{\hbar}{2\pi}}\int_{-\infty}^{\infty}g\left(\hbar\xi\right)e^{-ip\xi}d\xi\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}g\left(x\right)e^{-ipx/\hbar}dx\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  \left\langle \left.f_{p}\right|g\right\rangle \ \ \ \ \ (16)
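
As a concrete check of this transform pair (my own sketch, with {\hbar=1} and a Gaussian {g}), computing {c\left(p\right)} from 15 and then re-synthesizing {g\left(x\right)} from 12 returns the original function:

    import sympy as sp

    x, p = sp.symbols('x p', real=True)
    g = sp.exp(-x**2 / 2)    # test function, hbar = 1

    # c(p) from eq 15
    c = sp.integrate(g * sp.exp(-sp.I * p * x), (x, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)

    # reconstruct g(x) from eq 12
    g_back = sp.integrate(c * sp.exp(sp.I * p * x), (p, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)
    print(sp.simplify(g_back - g))    # 0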

In general, hermitian operators with continuous eigenvalues don’t have normalizable eigenfunctions and have to be analyzed in this way. In particular, the hamiltonian (energy) of a system can have an entirely discrete spectrum (infinite square well or harmonic oscillator), a totally continuous spectrum (free particle, delta function barrier or finite square barrier) or a mixture of the two (delta function well or finite square well).

Infinite square well – uncertainty principle

Required math: calculus

Required physics: Schrödinger equation

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 2.4.

We can calculate the mean values of position and momentum and verify the uncertainty principle for the infinite square well. The Schrödinger equation for the square well is, between {x=0} and {x=a}:

\displaystyle  \frac{d^{2}\psi}{dx^{2}}=-\frac{2m}{\hbar^{2}}E\psi \ \ \ \ \ (1)

The stationary states of the infinite square well are given by

\displaystyle  \sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right)e^{-i(n^{2}\pi^{2}\hbar/2ma^{2})t} \ \ \ \ \ (2)

for {0\leq x\leq a}.

For {x} we have

\displaystyle  \left\langle x\right\rangle =\frac{2}{a}\int_{0}^{a}x\sin^{2}(n\pi x/a)dx=a/2 \ \ \ \ \ (3)

\displaystyle  \left\langle x^{2}\right\rangle =\frac{2}{a}\int_{0}^{a}x^{2}\sin^{2}(n\pi x/a)dx=a^{2}\left(\frac{1}{3}-\frac{1}{2n^{2}\pi^{2}}\right) \ \ \ \ \ (4)

\displaystyle  \sigma_{x}^{2}=\left\langle x^{2}\right\rangle -\left\langle x\right\rangle ^{2}=a^{2}\left(\frac{1}{12}-\frac{1}{2n^{2}\pi^{2}}\right)=\frac{n^{2}\pi^{2}-6}{12n^{2}\pi^{2}}a^{2} \ \ \ \ \ (5)

For the momentum {p} we have

\displaystyle  \left\langle p\right\rangle =\frac{2\hbar}{ai}\int_{0}^{a}\sin(n\pi x/a)(n\pi/a)\cos(n\pi x/a)dx=0 \ \ \ \ \ (6)

\displaystyle  \left\langle p^{2}\right\rangle =\frac{2\hbar^{2}}{a}\int_{0}^{a}\sin(n\pi x/a)(n\pi/a)^{2}\sin(n\pi x/a)dx=\frac{n^{2}\pi^{2}\hbar^{2}}{a^{2}} \ \ \ \ \ (7)

\displaystyle  \sigma_{p}^{2}=\left\langle p^{2}\right\rangle -\left\langle p\right\rangle ^{2}=\frac{n^{2}\pi^{2}\hbar^{2}}{a^{2}} \ \ \ \ \ (8)

The uncertainty principle here is then:

\displaystyle  \sigma_{x}\sigma_{p}=\hbar\sqrt{\frac{\pi^{2}n^{2}-6}{12}} \ \ \ \ \ (9)

The smallest uncertainty will be for the state {n=1} and is approximately {0.568\hbar}, which satisfies the condition {\sigma_{x}\sigma_{p}\ge\hbar/2}.
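
Here's a short SymPy check of 9 (my own sketch, with {a=\hbar=1}): recomputing {\sigma_{x}} and {\sigma_{p}} from the wave function and evaluating the product for the first few {n} reproduces the values above.

    import sympy as sp

    x = sp.symbols('x', positive=True)

    for n in (1, 2, 3):
        psi = sp.sqrt(2) * sp.sin(n * sp.pi * x)     # a = 1, hbar = 1
        ex = sp.integrate(x * psi**2, (x, 0, 1))
        ex2 = sp.integrate(x**2 * psi**2, (x, 0, 1))
        ep2 = sp.integrate(-psi * sp.diff(psi, x, 2), (x, 0, 1))
        sigma_x = sp.sqrt(ex2 - ex**2)
        sigma_p = sp.sqrt(ep2)                       # <p> = 0 for these states
        print(n, float(sigma_x * sigma_p))           # 0.568..., 1.670..., 2.627...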

Hermitian operators

Required math: calculus

Required physics: some knowledge of quantum mechanics

Reference: Arfken, George B. & Weber, Hans J. (2005), Mathematical Methods for Physicists, 6th Edition, Academic Press – Sec 10.2.

We saw in the last post that a second-order ODE in the form

\displaystyle  p_{0}(x)u''+p_{1}(x)u'+p_{2}(x)u+\lambda w(x)u(x)=0 \ \ \ \ \ (1)

is self-adjoint if

\displaystyle  p_{0}'=p_{1} \ \ \ \ \ (2)

and that any second-order ODE can be transformed into self-adjoint form by multiplying through by the correct function.

A self-adjoint operator {L} can be written as

\displaystyle  Lu=(p_{0}u')'+p_{2}u \ \ \ \ \ (3)

If we multiply this operator by the complex conjugate of another function {v(x)}, and then integrate between two limits {a} and {b}, we get

\displaystyle  \int_{a}^{b}v^*Lu\: dx=\int_{a}^{b}v^*(p_{0}u')'\: dx+\int_{a}^{b}v^*p_{2}u\: dx \ \ \ \ \ (4)

The first integral on the right can be integrated by parts twice to get

\displaystyle   \int_{a}^{b}v^*(p_{0}u')'\: dx \displaystyle  = \displaystyle  v^*p_{0}u\Big|_{a}^{b}-\int_{a}^{b}(v^*)'p_{0}u'\: dx\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  v^*p_{0}u\Big|_{a}^{b}-(v^*)'p_{0}u\Big|_{a}^{b}+\int_{a}^{b}u(p_{0}(v^*)')'\: dx \ \ \ \ \ (6)

If the two integrated terms in the last line vanish due to satisfying boundary conditions

\displaystyle  v^*p_{0}u\Big|_{a}=v^*p_{0}u\Big|_{b} \ \ \ \ \ (7)

and

\displaystyle  (v^*)'p_{0}u\Big|_{a}=(v^*)'p_{0}u\Big|_{b} \ \ \ \ \ (8)

then we get the condition

\displaystyle  \int_{a}^{b}v^*Lu\: dx=\int_{a}^{b}u(Lv)^*\: dx \ \ \ \ \ (9)

An operator {L} that satisfies this condition is called Hermitian. Note that the condition applies for any functions {u} and {v}; these functions do not have to be solutions of any particular ODE. What they do have to do is satisfy the boundary conditions above.

Note that in this derivation, we’ve assumed that {L} is a real, second-order differential operator. Although such operators frequently turn up in physics, especially in quantum mechanics, the condition can be generalized to operators that are not necessarily second-order or real. So a general, possibly complex, differential operator {L} that satisfies 9 is called Hermitian, and the derivation above should be seen as a special case of one class of operators that happen to be Hermitian. Another example which is not a second-order operator or real is the quantum mechanical momentum operator {p=-i\hbar\partial/\partial x}.

For this operator, the above equation is

\displaystyle  \int_{a}^{b}v^*Lu\: dx=-i\hbar\int_{a}^{b}v^*\frac{d}{dx}u\: dx \ \ \ \ \ (10)

Integrating by parts gives us

\displaystyle  -i\hbar\int_{a}^{b}v^*\frac{du}{dx}\: dx=-i\hbar v^*u\Big|_{a}^{b}+i\hbar\int_{a}^{b}u\frac{dv^*}{dx}dx \ \ \ \ \ (11)

If we choose the limits {a=-\infty} and {b=+\infty} , then we are justified in taking both {u} and {v} to be zero at the limits in order that these functions are normalizable, as is required in quantum mechanics. Thus the integrated term is zero, and we are left with

\displaystyle  -i\hbar\int_{a}^{b}v^*\frac{du}{dx}\: dx=i\hbar\int_{a}^{b}u\frac{dv^*}{dx}dx \ \ \ \ \ (12)

or, in terms of the momentum operator {p}

\displaystyle  \int_{-\infty}^{\infty}v^*pu\: dx=\int_{-\infty}^{\infty}(pv)^*u\: dx \ \ \ \ \ (13)

which is precisely the Hermitian condition. Note that the fact that {p} is complex, due to the {i} in its definition, is essential for it to be Hermitian, since the negative sign that arises in the integration by parts translates into the {-i} in the original operator becoming a {+i} in the complex conjugate.
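
A concrete symbolic instance of 13 (my own sketch, with {\hbar=1}), using two normalizable functions that vanish at {\pm\infty}, one of them complex so that the conjugation actually matters:

    import sympy as sp

    x = sp.symbols('x', real=True)
    u = x * sp.exp(-x**2)
    v = (1 + sp.I * x) * sp.exp(-x**2 / 2)

    p = lambda phi: -sp.I * sp.diff(phi, x)    # momentum operator with hbar = 1

    lhs = sp.integrate(sp.conjugate(v) * p(u), (x, -sp.oo, sp.oo))
    rhs = sp.integrate(sp.conjugate(p(v)) * u, (x, -sp.oo, sp.oo))
    print(sp.simplify(lhs - rhs))    # 0: p is Hermitian on these functions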

Now suppose we consider the ODE

\displaystyle  Lu_{i}(x)+\lambda_{i}w(x)u_{i}(x)=0 \ \ \ \ \ (14)

where {\lambda_{i}} is a constant called the eigenvalue and {w(x)} is another function (assumed to be real and positive) of {x} known as the weighting function. The subscript {i} labels a particular solution of this ODE, so that a given solution {u_{i}} is associated with a particular eigenvalue {\lambda_{i}}.

For another solution {u_{j}} we can take the complex conjugate of 14 to get

\displaystyle  L^*u_{j}^*(x)+\lambda_{j}^*w(x)u_{j}^*(x)=0 \ \ \ \ \ (15)

We can multiply 14 by {u_{j}^*} and 15 by {u_{i}}, integrate between limits {a} and {b} chosen so that the boundary conditions above are satisfied, and then take the difference to get the following. (Such boundary conditions usually exist in quantum mechanics. For example, in a one-dimensional problem with a potential of infinite range, such as the harmonic oscillator, if {a=-\infty} and {b=+\infty} the wave function is required to be zero at both limits in order for it to be normalizable.)

\displaystyle  \int_{a}^{b}u_{j}^*Lu_{i}\: dx-\int_{a}^{b}u_{i}L^*u_{j}^*\: dx=(\lambda_{j}^*-\lambda_{i})\int_{a}^{b}u_{i}u_{j}^*w\: dx \ \ \ \ \ (16)

If {L} is Hermitian, the left-hand side of this equation is zero. This leads to two important results. First, if {i=j}, then provided that neither {u_{i}} nor {w} is zero everywhere, the integral on the right must be non-zero. Therefore we get

\displaystyle  \lambda_{i}^*=\lambda_{i} \ \ \ \ \ (17)

In other words, the eigenvalues of a Hermitian operator are real. This has a physical interpretation in quantum mechanics, since observable quantities are represented by Hermitian operators, and any value we measure (an eigenvalue) must be real.

The other consequence is that if {i\ne j}, then if the eigenvalues for distinct solutions are different, the integral on the right must be zero. That is

\displaystyle  \int_{a}^{b}u_{i}u_{j}^*w\: dx=0 \ \ \ \ \ (18)

if {i\ne j}. This condition means that the distinct solutions of 14 are orthogonal functions. Note however that the orthogonality condition may require a weighting function in order for the integral to be zero. In fact, many of the functions encountered in quantum mechanics have {w(x)\equiv1}, but there are some notable exceptions such as Laguerre and Hermite polynomials.
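
A quick symbolic illustration of weighted orthogonality (my own sketch): the Hermite polynomials {H_{n}\left(x\right)} are orthogonal on {\left(-\infty,\infty\right)} with the weighting function {w\left(x\right)=e^{-x^{2}}}.

    import sympy as sp

    x = sp.symbols('x', real=True)
    w = sp.exp(-x**2)    # weighting function for the Hermite polynomials

    for m in range(4):
        row = [sp.integrate(sp.hermite(m, x) * sp.hermite(n, x) * w, (x, -sp.oo, sp.oo))
               for n in range(4)]
        print(row)    # diagonal entries nonzero, off-diagonal entries zero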

We haven’t dealt with the case of degenerate eigenvalues, that is, cases where distinct solutions {u_{i}} and {u_{j}} have the same eigenvalue, but that’s a topic for another post.