Tag Archives: propagator

Harmonic oscillator energies and eigenfunctions derived from the propagator

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 8. Section 8.6, Exercise 8.6.3.

Given the propagator for the harmonic oscillator, it is possible to work backwards and deduce the eigenvalues and eigenfunctions of the Hamiltonian, although this isn’t the easiest way to find them. We’ve seen that the propagator for the oscillator is

\displaystyle U\left(x,t;x^{\prime}\right)=A\left(t\right)\exp\left[\frac{im\omega}{2\hbar\sin\omega t}\left(\left(x^{\prime2}+x^{2}\right)\cos\omega t-2x^{\prime}x\right)\right] \ \ \ \ \ (1)

 

where {A\left(t\right)} is some function of time which is found by doing a path integral. Shankar cheats a bit by just telling us what {A} is:

\displaystyle A\left(t\right)=\sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega t}} \ \ \ \ \ (2)

To deduce (some of) the energy levels, we can compare the propagator with its more traditional form

\displaystyle U\left(t\right)=\sum_{n}e^{-iE_{n}t/\hbar}\left|E_{n}\right\rangle \left\langle E_{n}\right| \ \ \ \ \ (3)

where {E_{n}} is the {n}th energy level. In position space this is

\displaystyle U\left(x,t;x^{\prime}\right)=\sum_{n}\psi_{n}\left(x\right)\psi_{n}^{*}\left(x^{\prime}\right)e^{-iE_{n}t/\hbar} \ \ \ \ \ (4)

 

We can try finding the energy levels as follows. We take {x=x^{\prime}=t^{\prime}=0}; that is, we look at the amplitude for a particle that starts at the origin to be found there again at time {t}. In that case, 1 becomes

\displaystyle U\left(x,t;x^{\prime}\right)=A\left(t\right)=\sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega t}} \ \ \ \ \ (5)

If we can expand this quantity in powers of {e^{-i\omega t}}, we can compare it with the series 4 and read off the energies from the exponents in the series. To do this, we write

\displaystyle A\left(t\right) \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar\left(e^{i\omega t}-e^{-i\omega t}\right)}}\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}e^{-i\omega t/2}\frac{1}{\sqrt{1-e^{-2i\omega t}}} \ \ \ \ \ (7)

To save writing, we’ll define the symbol

\displaystyle \eta\equiv e^{-i\omega t} \ \ \ \ \ (8)

so that

\displaystyle A\left(t\right)=\sqrt{\frac{m\omega}{\pi\hbar}}\eta^{1/2}\frac{1}{\sqrt{1-\eta^{2}}} \ \ \ \ \ (9)

We can now expand the last factor using the binomial expansion to get

\displaystyle A\left(t\right)=\sqrt{\frac{m\omega}{\pi\hbar}}\eta^{1/2}\left[1+\frac{1}{2}\eta^{2}+\frac{3}{8}\eta^{4}+\ldots\right] \ \ \ \ \ (10)

In terms of the original variables, we get

\displaystyle A\left(t\right)=\sqrt{\frac{m\omega}{\pi\hbar}}\left[e^{-i\omega t/2}+\frac{1}{2}e^{-5i\omega t/2}+\frac{3}{8}e^{-9i\omega t/2}+\ldots\right] \ \ \ \ \ (11)

 

Comparing with 4, we find energy levels of

\displaystyle E=\frac{\hbar\omega}{2},\frac{5\hbar\omega}{2},\frac{9\hbar\omega}{2},\ldots \ \ \ \ \ (12)

These correspond to {E_{0},E_{2},E_{4},\ldots}. The odd energy levels {\left(\frac{3\hbar\omega}{2},\frac{7\hbar\omega}{2},\ldots\right)} are missing because the corresponding wave functions {\psi_{n}\left(x\right)} are odd functions of {x} and are therefore zero at {x=0}, so the corresponding terms in 4 vanish. The numerical coefficients in 11 give us {\left|\psi_{n}\left(0\right)\right|^{2}} for {n=0,2,4,\ldots}.
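The binomial expansion in 10 (and hence the energies in 12) can be verified with a couple of lines of sympy; here we divide out the {\sqrt{m\omega/\pi\hbar}\,\eta^{1/2}} prefactor and expand the remaining factor:

```python
import sympy as sp

eta = sp.symbols('eta')
# the factor multiplying sqrt(m*omega/(pi*hbar))*eta**(1/2) in eq. (9)
s = sp.expand(sp.series(1/sp.sqrt(1 - eta**2), eta, 0, 6).removeO())
print(s)  # coefficients 1, 1/2, 3/8 at eta**0, eta**2, eta**4
```

Restoring the {\eta^{1/2}} factor turns these into the {\eta^{1/2},\eta^{5/2},\eta^{9/2}} terms of 11, reproducing the energies in 12.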

To get the other energies, as well as the eigenfunctions, from a comparison of 1 and 4 is possible, but quite messy, even for the lower energies. To do it, we take {t^{\prime}=0} as before, but now we take {x=x^{\prime}\ne0}. That is, we compute the amplitude for the particle to start at some location {x^{\prime}\ne0} and be found at that same location at time {t}. The propagator 1 now becomes

\displaystyle U\left(x,t;x^{\prime}\right) \displaystyle = \displaystyle \sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega t}}\exp\left[\frac{im\omega}{2\hbar\sin\omega t}\left(2x^{2}\left(\cos\omega t-1\right)\right)\right]\ \ \ \ \ (13)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar\left(e^{i\omega t}-e^{-i\omega t}\right)}}\exp\left[-\frac{m\omega}{\hbar\left(e^{i\omega t}-e^{-i\omega t}\right)}\left(x^{2}\left(\left(e^{i\omega t}+e^{-i\omega t}\right)-2\right)\right)\right]\ \ \ \ \ (14)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}\eta^{1/2}\frac{1}{\sqrt{1-\eta^{2}}}\exp\left[-\frac{m\omega x^{2}}{\hbar}\left(\frac{\frac{1}{\eta}+\eta-2}{\frac{1}{\eta}-\eta}\right)\right]\ \ \ \ \ (15)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}\eta^{1/2}\frac{1}{\sqrt{1-\eta^{2}}}\exp\left[-\frac{m\omega x^{2}}{\hbar}\left(\frac{1+\eta^{2}-2\eta}{1-\eta^{2}}\right)\right] \ \ \ \ \ (16)

We now need to expand this in a power series in {\eta}, which gets very messy so is best handled with software like Maple. Shankar asks only for the first two terms in the series (the terms corresponding to {\eta^{1/2}} and {\eta^{3/2}}) but even doing this by hand can get very tedious. The result from Maple is, for the first two terms:

\displaystyle \eta^{1/2} \displaystyle \rightarrow \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}e^{-m\omega x^{2}/\hbar}\eta^{1/2}=\sqrt{\frac{m\omega}{\pi\hbar}}e^{-m\omega x^{2}/\hbar}e^{-i\omega t/2}\ \ \ \ \ (17)
\displaystyle \eta^{3/2} \displaystyle \rightarrow \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}\frac{2m\omega}{\hbar}e^{-m\omega x^{2}/\hbar}x^{2}\eta^{3/2}=\sqrt{\frac{m\omega}{\pi\hbar}}\frac{2m\omega}{\hbar}e^{-m\omega x^{2}/\hbar}x^{2}e^{-3i\omega t/2} \ \ \ \ \ (18)

Comparing this with 4, we can read off:

\displaystyle E_{0} \displaystyle = \displaystyle \frac{\hbar\omega}{2}\ \ \ \ \ (19)
\displaystyle \left|\psi_{0}\left(x\right)\right|^{2} \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}e^{-m\omega x^{2}/\hbar}\ \ \ \ \ (20)
\displaystyle E_{1} \displaystyle = \displaystyle \frac{3\hbar\omega}{2}\ \ \ \ \ (21)
\displaystyle \left|\psi_{1}\left(x\right)\right|^{2} \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}\frac{2m\omega}{\hbar}e^{-m\omega x^{2}/\hbar}x^{2} \ \ \ \ \ (22)

To check this, we recall the eigenfunctions we worked out earlier, using Hermite polynomials

\displaystyle \psi_{n}(x)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^{n}n!}}H_{n}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (23)

 

The first two Hermite polynomials are

\displaystyle H_{0}\left(\sqrt{\frac{m\omega}{\hbar}}x\right) \displaystyle = \displaystyle 1\ \ \ \ \ (24)
\displaystyle H_{1}\left(\sqrt{\frac{m\omega}{\hbar}}x\right) \displaystyle = \displaystyle 2\sqrt{\frac{m\omega}{\hbar}}x \ \ \ \ \ (25)

Plugging these into 23 and comparing with 20 and 22 shows we got the right answer.
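The two coefficients quoted from Maple can also be checked with sympy. Writing {a\equiv m\omega x^{2}/\hbar} and dividing the {\sqrt{m\omega/\pi\hbar}\,\eta^{1/2}} factor out of 16, a short sketch gives the coefficients of {\eta^{1/2}} and {\eta^{3/2}}:

```python
import sympy as sp

eta = sp.symbols('eta')
a = sp.symbols('a', positive=True)  # shorthand for m*omega*x**2/hbar
# eq. (16) with the overall sqrt(m*omega/(pi*hbar))*eta**(1/2) factor divided out
U = sp.exp(-a*(1 + eta**2 - 2*eta)/(1 - eta**2)) / sp.sqrt(1 - eta**2)
s = sp.expand(sp.series(U, eta, 0, 2).removeO())
c0 = s.coeff(eta, 0)  # coefficient of eta**(1/2) in the full propagator
c1 = s.coeff(eta, 1)  # coefficient of eta**(3/2)
print(c0, c1)         # exp(-a) and 2*a*exp(-a)
```

With the prefactor restored, these are exactly {\left|\psi_{0}\left(x\right)\right|^{2}} and {\left|\psi_{1}\left(x\right)\right|^{2}} as in 20 and 22.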

Free particle propagator from a complete path integral

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 8. Section 8.4.

We’ve seen that the free-particle propagator can be obtained in the path integral approach by using only the classical path in the sum over paths. It turns out that it’s not too hard to calculate the propagator for a free particle properly, by summing over all possible paths. The notation used by Shankar is as follows.

We want to evaluate the path integral

\displaystyle \int_{x_{0}}^{x_{N}}e^{iS\left[x\left(t\right)\right]/\hbar}\mathfrak{D}\left[x\left(t\right)\right] \ \ \ \ \ (1)

The notation {\mathfrak{D}\left[x\left(t\right)\right]} means an integration over all possible paths from {x_{0}} to {x_{N}} in the given time interval. This includes paths where the particle might move to the right for a while, then jog back to the left, then back to the right again and so on. This might seem like a hopeless task, but we can make sense of this method by splitting the time interval between {t_{0}} and {t_{N}} into {N} small intervals, each of length {\varepsilon}. Thus an intermediate time {t_{n}=t_{0}+n\varepsilon}, and the final time is {t_{N}=t_{0}+N\varepsilon}.

For a free particle, there is no potential energy so the Lagrangian is just the kinetic energy:

\displaystyle L=\frac{1}{2}m\dot{x}^{2} \ \ \ \ \ (2)

We can estimate the velocity in each time slice by

\displaystyle \dot{x}_{i}=\frac{x_{i+1}-x_{i}}{\varepsilon} \ \ \ \ \ (3)

Note that this assumes that the velocity within each time slice is constant, but as we make {\varepsilon} smaller and smaller, this approximation becomes increasingly accurate. Also note that {\dot{x}_{i}} can be either positive (if the particle moves to the right in the interval) or negative (if it moves to the left).

The action for a given path is given by the integral of the Lagrangian:

\displaystyle S=\int_{t_{0}}^{t_{N}}L\left(t\right)dt \ \ \ \ \ (4)

In our discretized approximation, we evaluate {L} within each time slice, and {dt} becomes the interval length {\varepsilon}, so the action becomes a sum:

\displaystyle S \displaystyle = \displaystyle \sum_{i=0}^{N-1}L\left(t_{i}\right)\varepsilon\ \ \ \ \ (5)
\displaystyle \displaystyle = \displaystyle \frac{m}{2}\sum_{i=0}^{N-1}\left(\frac{x_{i+1}-x_{i}}{\varepsilon}\right)^{2}\varepsilon\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \frac{m}{2}\sum_{i=0}^{N-1}\frac{\left(x_{i+1}-x_{i}\right)^{2}}{\varepsilon} \ \ \ \ \ (7)
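As a quick sanity check of the discretization (with illustrative values {m=1} and {t_{N}-t_{0}=1}, not taken from Shankar), the sum 7 evaluated on a straight-line path reproduces the classical free-particle action {m\left(x_{N}-x_{0}\right)^{2}/2\left(t_{N}-t_{0}\right)}:

```python
# discretized action, eq. (7), for the straight-line path from x0 to xN
m, T, N = 1.0, 1.0, 1000          # mass, total time, number of slices (assumed units)
x0, xN = 0.0, 2.0
eps = T / N
xs = [x0 + (xN - x0)*k/N for k in range(N + 1)]
S = (m/2) * sum((xs[k+1] - xs[k])**2 / eps for k in range(N))
print(S)   # = m*(xN - x0)**2/(2*T) = 2.0
```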

The key point here is to notice that we can label any given path by choosing values for all the {x_{i}}s between the two times, and that each {x_{i}} can vary independently of the others, over a range from {-\infty} to {+\infty}. We can therefore implement the multiple integration required by {\mathfrak{D}\left[x\left(t\right)\right]} by integrating over all the {x_{i}} variables separately. That is,

\displaystyle \int_{x_{0}}^{x_{N}}e^{iS\left[x\left(t\right)\right]/\hbar}\mathfrak{D}\left[x\left(t\right)\right]=A\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\ldots\int_{-\infty}^{\infty}\exp\left[\frac{im}{2\hbar}\sum_{i=0}^{N-1}\frac{\left(x_{i+1}-x_{i}\right)^{2}}{\varepsilon}\right]dx_{1}dx_{2}\ldots dx_{N-1} \ \ \ \ \ (8)

where {A} is some constant to make the scale come out right.

We don’t integrate over {x_{0}} or {x_{N}} since these are fixed as the end points of the path. To get the final version, we need to take the limit of this expression as {N\rightarrow\infty} and {\varepsilon\rightarrow0}. This still looks pretty scary, but in fact it is doable. We define the variable

\displaystyle y_{i} \displaystyle \equiv \displaystyle \sqrt{\frac{m}{2\hbar\varepsilon}}x_{i}\ \ \ \ \ (9)
\displaystyle dx_{i} \displaystyle = \displaystyle \sqrt{\frac{2\hbar\varepsilon}{m}}dy_{i} \ \ \ \ \ (10)

This gives us

\displaystyle A\left(\frac{2\hbar\varepsilon}{m}\right)^{\left(N-1\right)/2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\ldots\int_{-\infty}^{\infty}\exp\left[i\sum_{i=0}^{N-1}\left(y_{i+1}-y_{i}\right)^{2}\right]dy_{1}dy_{2}\ldots dy_{N-1} \ \ \ \ \ (11)

 

We can do the integral in stages in order to spot a pattern. Consider first the integral over {y_{1}}, which involves only two of the factors in the integrand:

\displaystyle \int_{-\infty}^{\infty}e^{i\left[\left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}\right]}dy_{1} \ \ \ \ \ (12)

We first simplify the exponent

\displaystyle \left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2} \displaystyle = \displaystyle y_{2}^{2}+y_{0}^{2}+2\left(y_{1}^{2}-y_{0}y_{1}-y_{1}y_{2}\right)\ \ \ \ \ (13)
\displaystyle \displaystyle = \displaystyle y_{2}^{2}+y_{0}^{2}+2y_{1}^{2}-2\left(y_{0}+y_{2}\right)y_{1} \ \ \ \ \ (14)

We get

\displaystyle \int_{-\infty}^{\infty}e^{i\left[\left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}\right]}dy_{1}=e^{i\left(y_{2}^{2}+y_{0}^{2}\right)}\int_{-\infty}^{\infty}e^{2i\left[y_{1}^{2}-\left(y_{0}+y_{2}\right)y_{1}\right]}dy_{1} \ \ \ \ \ (15)

We can evaluate this using a standard Gaussian integral

\displaystyle \int_{-\infty}^{\infty}e^{-ax^{2}+bx}dx=e^{b^{2}/4a}\sqrt{\frac{\pi}{a}} \ \ \ \ \ (16)

 

This gives

\displaystyle \int_{-\infty}^{\infty}e^{i\left[\left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}\right]}dy_{1} \displaystyle = \displaystyle e^{i\left(y_{2}^{2}+y_{0}^{2}\right)}e^{4\left(y_{0}+y_{2}\right)^{2}/8i}\sqrt{-\frac{\pi}{2i}}\ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle e^{i\left(y_{2}^{2}+y_{0}^{2}\right)}e^{\left(y_{0}+y_{2}\right)^{2}/2i}\sqrt{\frac{\pi i}{2}} \ \ \ \ \ (18)

To simplify the exponents on the RHS:

\displaystyle i\left(y_{2}^{2}+y_{0}^{2}\right)+\frac{\left(y_{0}+y_{2}\right)^{2}}{2i} \displaystyle = \displaystyle \frac{1}{2i}\left[\left(y_{0}+y_{2}\right)^{2}-2y_{2}^{2}-2y_{0}^{2}\right]\ \ \ \ \ (19)
\displaystyle \displaystyle = \displaystyle -\frac{1}{2i}\left(y_{0}-y_{2}\right)^{2} \ \ \ \ \ (20)

Thus we have

\displaystyle \int_{-\infty}^{\infty}e^{i\left[\left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}\right]}dy_{1}=\sqrt{\frac{\pi i}{2}}e^{-\left(y_{0}-y_{2}\right)^{2}/2i} \ \ \ \ \ (21)

Having eliminated {y_{1}} we can now do the integral over {y_{2}}:

\displaystyle \sqrt{\frac{\pi i}{2}}\int_{-\infty}^{\infty}e^{-\left(y_{3}-y_{2}\right)^{2}/i-\left(y_{2}-y_{0}\right)^{2}/2i}dy_{2} \ \ \ \ \ (22)

Again, we can simplify the exponent:

\displaystyle -\frac{\left(y_{3}-y_{2}\right)^{2}}{i}-\frac{\left(y_{2}-y_{0}\right)^{2}}{2i}=\frac{1}{2i}\left[-\left(2y_{3}^{2}+y_{0}^{2}\right)-3y_{2}^{2}+y_{2}\left(4y_{3}+2y_{0}\right)\right] \ \ \ \ \ (23)

The integral now becomes

\displaystyle \sqrt{\frac{\pi i}{2}}\int_{-\infty}^{\infty}e^{-\left(y_{3}-y_{2}\right)^{2}/i-\left(y_{2}-y_{0}\right)^{2}/2i}dy_{2} \displaystyle = \displaystyle \sqrt{\frac{\pi i}{2}}e^{-\left(2y_{3}^{2}+y_{0}^{2}\right)/2i}\int_{-\infty}^{\infty}e^{\left(-3y_{2}^{2}+y_{2}\left(4y_{3}+2y_{0}\right)\right)/2i}dy_{2} \ \ \ \ \ (24)

Doing the Gaussian integral on the RHS using 16:

\displaystyle \int_{-\infty}^{\infty}e^{\left(-3y_{2}^{2}+y_{2}\left(4y_{3}+2y_{0}\right)\right)/2i}dy_{2} \displaystyle = \displaystyle e^{-\left(4y_{3}+2y_{0}\right)^{2}i/24}\sqrt{\frac{2\pi i}{3}}\ \ \ \ \ (25)
\displaystyle \displaystyle = \displaystyle e^{\left(2y_{3}+y_{0}\right)^{2}/6i}\sqrt{\frac{2\pi i}{3}} \ \ \ \ \ (26)

Thus the combined integral over {y_{1}} and {y_{2}} is

\displaystyle \sqrt{\frac{\pi i}{2}}e^{-\left(2y_{3}^{2}+y_{0}^{2}\right)/2i}e^{\left(2y_{3}+y_{0}\right)^{2}/6i}\sqrt{\frac{2\pi i}{3}} \displaystyle = \displaystyle \sqrt{\frac{\left(\pi i\right)^{2}}{3}}e^{\left(-6y_{3}^{2}-3y_{0}^{2}+\left(2y_{3}+y_{0}\right)^{2}\right)/6i}\ \ \ \ \ (27)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{\left(\pi i\right)^{2}}{3}}e^{-\left(y_{3}-y_{0}\right)^{2}/3i} \ \ \ \ \ (28)

The general pattern after {N-1} integrations is (presumably this could be proved by induction, but we’ll accept the result):

\displaystyle \frac{\left(\pi i\right)^{\left(N-1\right)/2}}{\sqrt{N}}e^{-\left(y_{N}-y_{0}\right)^{2}/Ni}=\frac{\left(\pi i\right)^{\left(N-1\right)/2}}{\sqrt{N}}e^{-m\left(x_{N}-x_{0}\right)^{2}/2\hbar\varepsilon Ni} \ \ \ \ \ (29)

where we reverted back to {x_{i}} using 9.
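The induction can at least be spot-checked by machine. The following sympy sketch carries out the {N-1} integrations for {N=5} by repeatedly completing the square and applying 16 (taken on trust for purely imaginary coefficients), then compares the result with the claimed pattern 29 at an arbitrary pair of endpoint values:

```python
import sympy as sp

N = 5
y = sp.symbols('y0:6')                         # y0 .. y5
prefac = sp.Integer(1)
expo = sp.expand(sp.Add(*[sp.I*(y[k+1] - y[k])**2 for k in range(N)]))
for k in range(1, N):                          # integrate out y1 .. y4 in turn
    c2, c1, c0 = sp.Poly(expo, y[k]).all_coeffs()   # expo = -a*y**2 + b*y + c
    a, b = -c2, c1
    prefac *= sp.sqrt(sp.pi/a)                 # Gaussian formula, eq. (16)
    expo = sp.expand(c0 + b**2/(4*a))
result = prefac*sp.exp(expo)
# claimed pattern, eq. (29), in terms of the y variables
target = sp.sqrt(sp.pi*sp.I)**(N - 1)/sp.sqrt(N)*sp.exp(-(y[N] - y[0])**2/(N*sp.I))
vals = {y[0]: sp.Rational(3, 10), y[N]: sp.Rational(-7, 10)}
err = abs(complex(result.subs(vals).evalf()) - complex(target.subs(vals).evalf()))
print(err)                                     # effectively zero
```

The same loop works for larger {N}, so this is a reasonable substitute for writing out the induction.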

Going back to 11, we must multiply the result by {A\left(\frac{2\hbar\varepsilon}{m}\right)^{\left(N-1\right)/2}} to get the final expression for the propagator:

\displaystyle U \displaystyle = \displaystyle A\left(\frac{2\hbar\varepsilon}{m}\right)^{\left(N-1\right)/2}\frac{\left(\pi i\right)^{\left(N-1\right)/2}}{\sqrt{N}}e^{-m\left(x_{N}-x_{0}\right)^{2}/2\hbar\varepsilon Ni}\ \ \ \ \ (30)
\displaystyle \displaystyle = \displaystyle A\left(\frac{2\pi\hbar\varepsilon i}{m}\right)^{N/2}\sqrt{\frac{m}{2\pi\hbar iN\varepsilon}}e^{im\left(x_{N}-x_{0}\right)^{2}/2\hbar\varepsilon N} \ \ \ \ \ (31)

In the limit {N\rightarrow\infty} and {\varepsilon\rightarrow0}, with {N\varepsilon=t_{N}-t_{0}} held fixed, we have

\displaystyle U=A\left(\frac{2\pi\hbar\varepsilon i}{m}\right)^{N/2}\sqrt{\frac{m}{2\pi\hbar i\left(t_{N}-t_{0}\right)}}e^{im\left(x_{N}-x_{0}\right)^{2}/2\hbar\left(t_{N}-t_{0}\right)} \ \ \ \ \ (32)

The expression we got earlier using the Schrödinger method is

\displaystyle U\left(x,t;x^{\prime},t^{\prime}\right)=\sqrt{\frac{m}{2\pi\hbar i\left(t-t^{\prime}\right)}}e^{im\left(x-x^{\prime}\right)^{2}/2\hbar\left(t-t^{\prime}\right)} \ \ \ \ \ (33)

Thus the full path integral gives the same result, with {t^{\prime}=t_{0}} and {t=t_{N}} (similarly for {x}), provided that we can set

\displaystyle A=\left(\frac{m}{2\pi\hbar\varepsilon i}\right)^{N/2}\equiv B^{-N} \ \ \ \ \ (34)

Shankar then says that it is conventional to associate one factor of {B^{-1}} with each integration over an {x_{i}}, and the remaining factor with the overall process. This seems to overlook a basic problem, in that as {N\rightarrow\infty} and {\varepsilon\rightarrow0}, {A\rightarrow\infty}, so we seem to be cancelling two infinities when we multiply the path integral by {A}.

Propagator for a Gaussian wave packet for the free particle

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 5.1, Exercise 5.1.3.

The propagator for the free particle is

\displaystyle  U\left(t\right)=\int_{-\infty}^{\infty}e^{-ip^{2}t/2m\hbar}\left|p\right\rangle \left\langle p\right|dp \ \ \ \ \ (1)

We can find its matrix elements in position space by using the position space form of the momentum

\displaystyle  \left\langle x\left|p\right.\right\rangle =\frac{1}{\sqrt{2\pi\hbar}}e^{ipx/\hbar} \ \ \ \ \ (2)

Taking the matrix element of 1 we have

\displaystyle   U\left(x,t;x^{\prime}\right) \displaystyle  = \displaystyle  \left\langle x\left|U\left(t\right)\right|x^{\prime}\right\rangle \ \ \ \ \ (3)
\displaystyle  \displaystyle  = \displaystyle  \int\left\langle x\left|p\right.\right\rangle \left\langle p\left|x^{\prime}\right.\right\rangle e^{-ip^{2}t/2m\hbar}dp\ \ \ \ \ (4)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2\pi\hbar}\int e^{ip\left(x-x^{\prime}\right)/\hbar}e^{-ip^{2}t/2m\hbar}dp\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{m}{2\pi\hbar it}}e^{im\left(x-x^{\prime}\right)^{2}/2\hbar t} \ \ \ \ \ (6)

The final integral can be done by combining the exponents in the third line, completing the square and using the standard formula for Gaussian integrals. We won’t go through that here, as our main goal is to explore the evolution of an initial wave packet using the propagator. Given 6, we can in principle find the wave function for all future times given an initial wave function, by using the propagator:

\displaystyle  \psi\left(x,t\right)=\int U\left(x,t;x^{\prime}\right)\psi\left(x^{\prime},0\right)dx^{\prime} \ \ \ \ \ (7)

Here, we’re assuming that the initial time is {t=0}. Shankar uses the standard example where the initial wave packet is a Gaussian:

\displaystyle  \psi\left(x^{\prime},0\right)=e^{ip_{0}x^{\prime}/\hbar}\frac{e^{-x^{\prime2}/2\Delta^{2}}}{\left(\pi\Delta^{2}\right)^{1/4}} \ \ \ \ \ (8)

This is a wave packet distributed symmetrically about the origin, so that {\left\langle X\right\rangle =0}, and with mean momentum given by {\left\langle P\right\rangle =p_{0}}. By plugging this and 6 into 7, we can work out the time-dependent version of the wave packet, which Shankar gives as

\displaystyle  \psi\left(x,t\right)=\left[\sqrt{\pi}\left(\Delta+\frac{i\hbar t}{m\Delta}\right)\right]^{-1/2}\exp\left[\frac{-\left(x-p_{0}t/m\right)^{2}}{2\Delta^{2}\left(1+i\hbar t/m\Delta^{2}\right)}\right]\exp\left[\frac{ip_{0}}{\hbar}\left(x-\frac{p_{0}t}{2m}\right)\right] \ \ \ \ \ (9)
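Although we skip the Gaussian-integral derivation, 9 is easy to test numerically: discretize the integral 7 on a grid and compare with the closed form. A sketch, assuming {\hbar=m=1} and the special case {p_{0}=0}, {\Delta=1} (grid limits chosen so the packet has decayed to negligible size):

```python
import numpy as np

hbar = m = 1.0                        # assumed units
Delta, p0, t = 1.0, 0.0, 1.0          # special case of the packet, eq. (8)
xp, dx = np.linspace(-15, 15, 6001, retstep=True)
psi0 = np.exp(-xp**2/(2*Delta**2)) / (np.pi*Delta**2)**0.25

x = 0.7
# free-particle propagator, eq. (6)
U = np.sqrt(m/(2j*np.pi*hbar*t)) * np.exp(1j*m*(x - xp)**2/(2*hbar*t))
psi_t = np.sum(U*psi0)*dx             # eq. (7) as a Riemann sum

# closed form, eq. (9)
z = 1 + 1j*hbar*t/(m*Delta**2)
exact = (np.sqrt(np.pi)*Delta*z)**-0.5 * np.exp(-(x - p0*t/m)**2/(2*Delta**2*z)) \
        * np.exp(1j*p0*(x - p0*t/(2*m))/hbar)
print(abs(psi_t - exact))             # effectively zero
```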

Again, we won’t go through the derivation of this result as it involves a messy calculation with Gaussian integrals again. The main problem we want to solve here is to use our alternative form of the propagator in terms of the Hamiltonian:

\displaystyle  U\left(t\right)=e^{-iHt/\hbar} \ \ \ \ \ (10)

For the free particle

\displaystyle  H=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}} \ \ \ \ \ (11)

so if we expand {U\left(t\right)} as a power series, we have

\displaystyle  U\left(t\right)=\sum_{s=0}^{\infty}\frac{1}{s!}\left(\frac{i\hbar t}{2m}\right)^{s}\frac{d^{2s}}{dx^{2s}} \ \ \ \ \ (12)

To see how we can use this form to generate the time-dependent wave function, we’ll consider a special case of 8 with {p_{0}=0} and {\Delta=1}, so that

\displaystyle   \psi_{0}\left(x\right) \displaystyle  = \displaystyle  \frac{e^{-x^{2}/2}}{\pi^{1/4}}\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}}\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}x^{2n}}{2^{n}n!} \ \ \ \ \ (14)

We therefore need to apply one power series 12 to the other 14. This is best done by examining a few specific terms and then generalizing to the main result. To save writing, we’ll work with the following

\displaystyle   \alpha \displaystyle  \equiv \displaystyle  \frac{i\hbar t}{m}\ \ \ \ \ (15)
\displaystyle  \psi_{\pi}\left(x\right) \displaystyle  \equiv \displaystyle  \pi^{1/4}\psi_{0}\left(x\right) \ \ \ \ \ (16)

The {s=0} term in 12 is just 1, so we’ll look at the {s=1} term and apply it to 14:

\displaystyle   \frac{\alpha}{2}\frac{d^{2}}{dx^{2}}\left[\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}x^{2n}}{2^{n}n!}\right] \displaystyle  = \displaystyle  \frac{\alpha}{2}\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)\left(2n-1\right)x^{2n-2}}{2^{n}n!}\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  \frac{\alpha}{2}\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)!x^{2n-2}}{2^{n}n!\left(2n-2\right)!} \ \ \ \ \ (18)

We can simplify this by using an identity involving factorials:

\displaystyle   \frac{\left(2n\right)!}{n!} \displaystyle  = \displaystyle  \frac{\left(2n\right)\left(2n-1\right)\left(2n-2\right)\left(2n-3\right)\ldots\left(2\right)\left(1\right)}{n\left(n-1\right)\left(n-2\right)\ldots\left(2\right)\left(1\right)}\ \ \ \ \ (19)
\displaystyle  \displaystyle  = \displaystyle  \frac{2^{n}\left[n\left(n-1\right)\left(n-2\right)\ldots\left(2\right)\left(1\right)\right]\left[\left(2n-1\right)\left(2n-3\right)\ldots\left(3\right)\left(1\right)\right]}{n!}\ \ \ \ \ (20)
\displaystyle  \displaystyle  = \displaystyle  \frac{2^{n}n!\left(2n-1\right)!!}{n!}\ \ \ \ \ (21)
\displaystyle  \displaystyle  = \displaystyle  2^{n}\left(2n-1\right)!! \ \ \ \ \ (22)

The ‘double factorial’ notation is defined as

\displaystyle  \left(2n-1\right)!!\equiv\left(2n-1\right)\left(2n-3\right)\ldots\left(3\right)\left(1\right) \ \ \ \ \ (23)

That is, it’s the product of every other integer from {2n-1} down to 1. Using this result, we can write 18 as

\displaystyle  \frac{\alpha}{2}\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)!x^{2n-2}}{2^{n}n!\left(2n-2\right)!}=\alpha\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n}\left(2n-1\right)!!x^{2n-2}}{2\left(2n-2\right)!} \ \ \ \ \ (24)
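Before continuing, the factorial identity 22 is easy to confirm numerically with Python's math.factorial and a small double-factorial helper:

```python
from math import factorial

def dfact(k):
    """(2n-1)!! for odd k, with the convention (-1)!! = 1."""
    return 1 if k <= 0 else k*dfact(k - 2)

# check (2n)!/n! = 2**n * (2n-1)!! for the first few n
for n in range(10):
    assert factorial(2*n)//factorial(n) == 2**n * dfact(2*n - 1)
print("identity (22) holds for n = 0..9")
```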

Now look at the {s=2} term from 12.

\displaystyle   \frac{1}{2!}\frac{\alpha^{2}}{2^{2}}\frac{d^{4}}{dx^{4}}\left[\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}x^{2n}}{2^{n}n!}\right] \displaystyle  = \displaystyle  \frac{1}{2!}\frac{\alpha^{2}}{2^{2}}\sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)\left(2n-1\right)\left(2n-2\right)\left(2n-3\right)x^{2n-4}}{2^{n}n!}\ \ \ \ \ (25)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2!}\frac{\alpha^{2}}{2^{2}}\sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)!x^{2n-4}}{2^{n}n!\left(2n-4\right)!}\ \ \ \ \ (26)
\displaystyle  \displaystyle  = \displaystyle  \frac{\alpha^{2}}{2^{2}2!}\sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}\left(2n-1\right)!!x^{2n-4}}{\left(2n-4\right)!} \ \ \ \ \ (27)

We can see the pattern for the general term for arbitrary {s} from 12 (we could prove it by induction, but hopefully the pattern is fairly obvious):

\displaystyle   \frac{1}{s!}\frac{\alpha^{s}}{2^{s}}\frac{d^{2s}}{dx^{2s}}\left[\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}x^{2n}}{2^{n}n!}\right] \displaystyle  = \displaystyle  \frac{1}{s!}\frac{\alpha^{s}}{2^{s}}\sum_{n=s}^{\infty}\frac{\left(-1\right)^{n}\left(2n\right)!x^{2n-2s}}{2^{n}n!\left(2n-2s\right)!}\ \ \ \ \ (28)
\displaystyle  \displaystyle  = \displaystyle  \frac{\alpha^{s}}{2^{s}s!}\sum_{n=s}^{\infty}\frac{\left(-1\right)^{n}\left(2n-1\right)!!x^{2n-2s}}{\left(2n-2s\right)!} \ \ \ \ \ (29)

Now we can collect terms for each power of {x}. The constant term (for {x^{0}}) is the first term from each series for each value of {s}, so we have, using the general term 29 and taking the first term where {n=s}:

\displaystyle  \sum_{s=0}^{\infty}\frac{\left(-1\right)^{s}\alpha^{s}\left(2s-1\right)!!}{2^{s}s!}=1-\frac{\alpha}{2}+\frac{\alpha^{2}}{2!}\frac{3}{2}\frac{1}{2}-\frac{\alpha^{3}}{3!}\frac{5}{2}\frac{3}{2}\frac{1}{2}+\ldots \ \ \ \ \ (30)

[The {\left(2s-1\right)!!} factor is 1 when {s=0} as we can see from the result 22.] The series on the RHS is the Taylor expansion of {\left(1+\alpha\right)^{-1/2}}, as can be verified using tables.

In general, to get the coefficient of {x^{2r}} (only even powers of {x} occur in the series), we take the term where {n=s+r} from 29 and sum over {s}. This gives

\displaystyle   \sum_{s=0}^{\infty}\frac{\alpha^{s}}{2^{s}s!}\frac{\left(-1\right)^{s+r}\left(2s+2r-1\right)!!}{\left(2r\right)!} \displaystyle  = \displaystyle  \frac{\left(-1\right)^{r}}{2^{r}r!}\sum_{s=0}^{\infty}\frac{\alpha^{s}}{2^{s}s!}\frac{\left(-1\right)^{s}\left(2s+2r-1\right)!!}{\left(2r-1\right)!!} \ \ \ \ \ (31)

where we used 22 to get the RHS. Expanding the sum gives

\displaystyle   \sum_{s=0}^{\infty}\frac{\alpha^{s}}{2^{s}s!}\frac{\left(-1\right)^{s}\left(2s+2r-1\right)!!}{\left(2r-1\right)!!} \displaystyle  = \displaystyle  1-\alpha\frac{2r+1}{2}+\frac{\alpha^{2}}{2!}\left(\frac{2r+3}{2}\right)\left(\frac{2r+1}{2}\right)-\ldots\ \ \ \ \ (32)
\displaystyle  \displaystyle  = \displaystyle  1-\alpha\left(r+\frac{1}{2}\right)+\frac{\alpha^{2}}{2!}\left(r+\frac{3}{2}\right)\left(r+\frac{1}{2}\right)-\ldots\ \ \ \ \ (33)
\displaystyle  \displaystyle  = \displaystyle  \left(1+\alpha\right)^{-r-\frac{1}{2}} \ \ \ \ \ (34)

where again we’ve used a standard series from tables (given by Shankar in the problem) to get the last line. Combining this with 31, we see that the coefficient of {x^{2r}} is

\displaystyle  \frac{\left(-1\right)^{r}}{2^{r}r!}\left(1+\alpha\right)^{-r-\frac{1}{2}} \ \ \ \ \ (35)
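Since the closed form 34 was quoted from tables, here is a quick sympy confirmation, comparing the truncated sum on the LHS of 34 with the Taylor series of {\left(1+\alpha\right)^{-r-\frac{1}{2}}} for the first few {r} (sympy's factorial2 gives {\left(-1\right)!!=1}, matching the convention above):

```python
import sympy as sp

alpha = sp.symbols('alpha')
for r in range(4):
    # LHS of eq. (34), truncated after 8 terms
    lhs = sum((-1)**s * alpha**s * sp.factorial2(2*s + 2*r - 1)
              / (2**s * sp.factorial(s) * sp.factorial2(2*r - 1))
              for s in range(8))
    rhs = sp.series((1 + alpha)**(-r - sp.Rational(1, 2)), alpha, 0, 8).removeO()
    assert sp.expand(lhs - rhs) == 0
print("eq. (34) verified term by term for r = 0..3")
```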

Thus the time-dependent wave function can be written as a single series as:

\displaystyle   \psi\left(x,t\right) \displaystyle  = \displaystyle  U\left(t\right)\psi\left(x,0\right)\ \ \ \ \ (36)
\displaystyle  \displaystyle  = \displaystyle  e^{-iHt/\hbar}\psi\left(x,0\right)\ \ \ \ \ (37)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}}\sum_{r=0}^{\infty}\frac{\left(-1\right)^{r}}{2^{r}r!}\left(1+\alpha\right)^{-r-\frac{1}{2}}x^{2r}\ \ \ \ \ (38)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}\sqrt{1+\alpha}}\sum_{r=0}^{\infty}\frac{\left(-1\right)^{r}}{2^{r}\left(1+\alpha\right)^{r}r!}x^{2r}\ \ \ \ \ (39)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}\sqrt{1+\alpha}}\exp\left[\frac{-x^{2}}{2\left(1+\alpha\right)}\right]\ \ \ \ \ (40)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\pi^{1/4}\sqrt{1+i\hbar t/m}}\exp\left[\frac{-x^{2}}{2\left(1+i\hbar t/m\right)}\right] \ \ \ \ \ (41)

This agrees with 9 when {p_{0}=0} and {\Delta=1}, though it does take a fair bit of work!
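As an overall check, we can apply the truncated operator series 12 directly to the Gaussian with sympy and compare numerically with 41 (a sketch, assuming {\hbar=m=1}; the Taylor series in {t} converges for {\left|t\right|<1}, so we test at {t=0.3}):

```python
import sympy as sp

x, t = sp.symbols('x t')
hbar = m = 1                                   # assumed units
psi0 = sp.exp(-x**2/2) / sp.pi**sp.Rational(1, 4)
# truncated operator series, eq. (12), applied to the packet
S = 18
psi_t = sum((sp.I*hbar*t/(2*m))**s / sp.factorial(s) * sp.diff(psi0, x, 2*s)
            for s in range(S))
# closed form, eq. (41)
exact = sp.exp(-x**2/(2*(1 + sp.I*t))) / (sp.pi**sp.Rational(1, 4)*sp.sqrt(1 + sp.I*t))
vals = {x: sp.Rational(1, 2), t: sp.Rational(3, 10)}
err = abs(complex(psi_t.subs(vals).evalf()) - complex(exact.subs(vals).evalf()))
print(err)                                     # small: truncation error only
```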

Free particle revisited: solution in terms of a propagator

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 5.1, Exercise 5.1.1.

Having reviewed the background mathematics and postulates of quantum mechanics as set out by Shankar, we can now revisit some of the classic problems in non-relativistic quantum mechanics using Shankar’s approach, as opposed to that of Griffiths that we’ve already studied.

The first problem we’ll look at is that of the free particle. Following the fourth postulate, we write down the classical Hamiltonian for a free particle, which is

\displaystyle  H=\frac{p^{2}}{2m} \ \ \ \ \ (1)

where {p} is the momentum (we’re working in one dimension) and {m} is the mass. To get the quantum version, we replace {p} by the momentum operator {P} and insert the result into the Schrödinger equation:

\displaystyle   i\hbar\left|\dot{\psi}\right\rangle \displaystyle  = \displaystyle  H\left|\psi\right\rangle \ \ \ \ \ (2)
\displaystyle  \displaystyle  = \displaystyle  \frac{P^{2}}{2m}\left|\psi\right\rangle \ \ \ \ \ (3)

Since {H} is time-independent, the solution can be written using a propagator:

\displaystyle  \left|\psi\left(t\right)\right\rangle =U\left(t\right)\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (4)

To find {U}, we need to solve the eigenvalue equation for the stationary states

\displaystyle  \frac{P^{2}}{2m}\left|E\right\rangle =E\left|E\right\rangle \ \ \ \ \ (5)

where {E} is an eigenvalue representing the allowable energies. Since the Hamiltonian is {P^{2}/2m}, and an eigenstate of {P} with eigenvalue {p} is also an eigenstate of {P^{2}} with eigenvalue {p^{2}}, we can write this equation in terms of the momentum eigenstates {\left|p\right\rangle }:

\displaystyle  \frac{P^{2}}{2m}\left|p\right\rangle =E\left|p\right\rangle \ \ \ \ \ (6)

Using {P^{2}\left|p\right\rangle =p^{2}\left|p\right\rangle } this gives

\displaystyle  \left(\frac{p^{2}}{2m}-E\right)\left|p\right\rangle =0 \ \ \ \ \ (7)

Assuming that {\left|p\right\rangle } is not a null vector gives the relation between momentum and energy:

\displaystyle  p=\pm\sqrt{2mE} \ \ \ \ \ (8)

Thus each allowable energy {E} has two possible momenta. Once we specify the momentum, we also specify the energy and since each energy state is two-fold degenerate, we can eliminate the ambiguity by specifying only the momentum. Therefore the propagator can be written as

\displaystyle  U\left(t\right)=\int_{-\infty}^{\infty}e^{-ip^{2}t/2m\hbar}\left|p\right\rangle \left\langle p\right|dp \ \ \ \ \ (9)

We can convert this to an integral over the energy by using 8 to change variables, and by splitting the integral into two parts. For {p>0} we have

\displaystyle  dp=\sqrt{\frac{m}{2E}}dE \ \ \ \ \ (10)

and for {p<0} we have

\displaystyle  dp=-\sqrt{\frac{m}{2E}}dE \ \ \ \ \ (11)

Therefore, we get

\displaystyle   U\left(t\right) \displaystyle  = \displaystyle  \int_{0}^{\infty}e^{-iEt/\hbar}\left|E,+\right\rangle \left\langle E,+\right|\sqrt{\frac{m}{2E}}dE+\int_{\infty}^{0}e^{-iEt/\hbar}\left|E,-\right\rangle \left\langle E,-\right|\left(-\sqrt{\frac{m}{2E}}\right)dE\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  \int_{0}^{\infty}e^{-iEt/\hbar}\left|E,+\right\rangle \left\langle E,+\right|\sqrt{\frac{m}{2E}}dE+\int_{0}^{\infty}e^{-iEt/\hbar}\left|E,-\right\rangle \left\langle E,-\right|\sqrt{\frac{m}{2E}}dE\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  \sum_{\alpha=\pm}\int_{0}^{\infty}\frac{m}{\sqrt{2mE}}e^{-iEt/\hbar}\left|E,\alpha\right\rangle \left\langle E,\alpha\right|dE \ \ \ \ \ (14)

Here, {\left|E,+\right\rangle } is the state with energy {E} and momentum {p=+\sqrt{2mE}} and similarly for {\left|E,-\right\rangle }. In the first line, the first integral is for {p>0} and corresponds to the {\int_{0}^{\infty}} part of 9. The second integral is for {p<0} and corresponds to the {\int_{-\infty}^{0}} part of 9, which is why the limits on the second integral have {\infty} at the bottom and 0 at the top. Reversing the limits of integration cancels out the minus sign in {-\sqrt{\frac{m}{2E}}}, which allows us to add the two integrals together to get the final answer.
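The Jacobian bookkeeping in 10 and 14 can be confirmed with sympy:

```python
import sympy as sp

E, m = sp.symbols('E m', positive=True)
p = sp.sqrt(2*m*E)                                              # eq. (8), p > 0 branch
assert sp.simplify(sp.diff(p, E) - sp.sqrt(m/(2*E))) == 0       # eq. (10)
assert sp.simplify(m/sp.sqrt(2*m*E) - sp.sqrt(m/(2*E))) == 0    # measure in eq. (14)
print("change of variables checked")
```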

Time-dependent propagators

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 4.3.

The fourth postulate of non-relativistic quantum mechanics concerns how states evolve with time. The postulate simply states that in non-relativistic quantum mechanics, a state satisfies the Schrödinger equation:

\displaystyle i\hbar\frac{\partial}{\partial t}\left|\psi\right\rangle =H\left|\psi\right\rangle \ \ \ \ \ (1)

 

where {H} is the Hamiltonian, which is obtained from the classical Hamiltonian by means of the other postulates of quantum mechanics, namely that we replace all references to the position {x} by the quantum position operator {X} with matrix elements (in the {x} basis) of

\displaystyle \left\langle x^{\prime}\left|X\right|x\right\rangle =x\delta\left(x-x^{\prime}\right) \ \ \ \ \ (2)

and all references to classical momentum {p} by the momentum operator {P} with matrix elements

\displaystyle \left\langle x^{\prime}\left|P\right|x\right\rangle =-i\hbar\delta^{\prime}\left(x-x^{\prime}\right) \ \ \ \ \ (3)

In our earlier examination of the Schrödinger equation, we assumed that the Hamiltonian is independent of time, which allowed us to obtain an explicit expression for the propagator

\displaystyle U\left(t\right)=e^{-iHt/\hbar} \ \ \ \ \ (4)

 

The propagator is applied to the initial state {\left|\psi\left(0\right)\right\rangle } to obtain the state at any future time {t}:

\displaystyle \left|\psi\left(t\right)\right\rangle =U\left(t\right)\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (5)

What happens if {H=H\left(t\right)}, that is, if there is an explicit time dependence in the Hamiltonian? The approach taken by Shankar is a bit hand-wavy, but goes as follows. We divide the time interval {\left[0,t\right]} into {N} small increments {\Delta=t/N}. To first order in {\Delta}, we can integrate 1 by keeping the first-order term of a Taylor expansion:

\displaystyle \left|\psi\left(\Delta\right)\right\rangle \displaystyle = \displaystyle \left|\psi\left(0\right)\right\rangle +\Delta\left.\frac{d}{dt}\left|\psi\left(t\right)\right\rangle \right|_{t=0}+\mathcal{O}\left(\Delta^{2}\right)\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \left|\psi\left(0\right)\right\rangle -\frac{i\Delta}{\hbar}H\left(0\right)\left|\psi\left(0\right)\right\rangle +\mathcal{O}\left(\Delta^{2}\right)\ \ \ \ \ (7)
\displaystyle \displaystyle = \displaystyle \left(1-\frac{i\Delta}{\hbar}H\left(0\right)\right)\left|\psi\left(0\right)\right\rangle +\mathcal{O}\left(\Delta^{2}\right) \ \ \ \ \ (8)

So far, we’ve been fairly precise, but now the hand-waving starts. We note that the term multiplying {\left|\psi\left(0\right)\right\rangle } consists of the first two terms in the expansion of {e^{-i\Delta H\left(0\right)/\hbar}}, so we state that to evolve from {t=0} to {t=\Delta}, we multiply the initial state {\left|\psi\left(0\right)\right\rangle } by {e^{-i\Delta H\left(0\right)/\hbar}}. That is, we propose that

\displaystyle \left|\psi\left(\Delta\right)\right\rangle =e^{-i\Delta H\left(0\right)/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (9)

[The reason this is hand-waving is that there are many functions whose first order Taylor expansion matches {\left(1-\frac{i\Delta}{\hbar}H\left(0\right)\right)}, so it seems arbitrary to choose the exponential. I imagine the motivation is that in the time-independent case, the result reduces to 4.]

In any case, if we accept this, then we can iterate the process to evolve to later times. To get to {t=2\Delta}, we have

\displaystyle \left|\psi\left(2\Delta\right)\right\rangle \displaystyle = \displaystyle e^{-i\Delta H\left(\Delta\right)/\hbar}\left|\psi\left(\Delta\right)\right\rangle \ \ \ \ \ (10)
\displaystyle \displaystyle = \displaystyle e^{-i\Delta H\left(\Delta\right)/\hbar}e^{-i\Delta H\left(0\right)/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (11)

The snag here is that we can’t, in general, combine the two exponentials into a single exponential by adding the exponents. This is because {H\left(\Delta\right)} and {H\left(0\right)} will not, in general, commute, as the Baker-Campbell-Hausdorff formula tells us. For example, the time dependence of {H\left(t\right)} might be such that at {t=0}, {H\left(0\right)} is a function of the position operator {X} only, while at {t=\Delta}, {H\left(\Delta\right)} becomes a function of the momentum operator {P} only. Since {X} and {P} don’t commute, {\left[H\left(0\right),H\left(\Delta\right)\right]\ne0}, so {e^{-i\Delta H\left(\Delta\right)/\hbar}e^{-i\Delta H\left(0\right)/\hbar}\ne e^{-i\Delta\left[H\left(0\right)+H\left(\Delta\right)\right]/\hbar}}.
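This failure of exponent addition is easy to verify numerically. The sketch below uses the Pauli matrices {\sigma_{x}} and {\sigma_{z}} as stand-ins for {H\left(0\right)} and {H\left(\Delta\right)} (an illustrative choice, not Shankar's example), with {\hbar=1} and an exaggerated {\Delta=0.5}:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def expm(A, terms=30):
    """2x2 matrix exponential via truncated Taylor series (fine for small norms)."""
    result = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for n in range(1, terms):
        term = scale(1.0 / n, matmul(term, A))
        result = matadd(result, term)
    return result

# Two Hermitian operators that do not commute (sigma_x and sigma_z), standing in
# for H(0) and H(Delta); hbar = 1 and Delta = 0.5 are illustrative assumptions.
H0 = [[0, 1], [1, 0]]
H1 = [[1, 0], [0, -1]]
delta = 0.5

U_product = matmul(expm(scale(-1j * delta, H1)), expm(scale(-1j * delta, H0)))
U_combined = expm(scale(-1j * delta, matadd(H0, H1)))

# Because [H0, H1] != 0, the product of exponentials differs from the
# exponential of the sum
diff = max(abs(U_product[i][j] - U_combined[i][j])
           for i in range(2) for j in range(2))
```

The discrepancy `diff` is of order {\Delta^{2}\left[H\left(0\right),H\left(\Delta\right)\right]/2}, exactly the leading correction predicted by the Baker-Campbell-Hausdorff formula.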

This means that the best we can usually do is to write

\displaystyle \left|\psi\left(t\right)\right\rangle \displaystyle = \displaystyle \left|\psi\left(N\Delta\right)\right\rangle \ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle \prod_{n=0}^{N-1}e^{-i\Delta H\left(n\Delta\right)/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (13)

The propagator then becomes, in the limit

\displaystyle U\left(t\right)=\lim_{N\rightarrow\infty}\prod_{n=0}^{N-1}e^{-i\Delta H\left(n\Delta\right)/\hbar} \ \ \ \ \ (14)

This limit is known as a time-ordered integral and is written as

\displaystyle T\left\{ \exp\left[-\frac{i}{\hbar}\int_{0}^{t}H\left(t^{\prime}\right)dt^{\prime}\right]\right\} \equiv\lim_{N\rightarrow\infty}\prod_{n=0}^{N-1}e^{-i\Delta H\left(n\Delta\right)/\hbar} \ \ \ \ \ (15)
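The limiting product can be approximated on a computer by multiplying a large number of short-time exponentials. The sketch below uses a hypothetical time-dependent Hamiltonian {H\left(t\right)=\sigma_{x}+t\sigma_{z}} with {\hbar=1}; the Hamiltonian and slice counts are illustrative assumptions:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=30):
    """2x2 matrix exponential via truncated Taylor series (arguments here are tiny)."""
    result = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for n in range(1, terms):
        term = [[sum(term[i][k] * A[k][j] for k in range(2)) / n for j in range(2)]
                for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

def H(t):
    """Hypothetical time-dependent Hamiltonian H(t) = sigma_x + t*sigma_z (hbar = 1)."""
    return [[t, 1.0], [1.0, -t]]

def U(t, N):
    """Finite product of short-time exponentials, with later-time factors on the left."""
    dt = t / N
    prop = [[1, 0], [0, 1]]
    for n in range(N):
        prop = mul(expm([[-1j * dt * h for h in row] for row in H(n * dt)]), prop)
    return prop

# Doubling N barely changes the product, suggesting convergence to the N -> infinity limit
U200, U400 = U(1.0, 200), U(1.0, 400)
conv = max(abs(U200[i][j] - U400[i][j]) for i in range(2) for j in range(2))

# The product of unitary factors is unitary: U-dagger U = I
Ud = [[U400[j][i].conjugate() for j in range(2)] for i in range(2)]
UdU = mul(Ud, U400)
unit_err = max(abs(UdU[i][j] - (1 if i == j else 0))
               for i in range(2) for j in range(2))
```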

One final note about the propagator. Since each factor in the product is the exponential of {i} times a Hermitian operator, each factor is a unitary operator. Further, since the product of two unitary operators is itself unitary, the propagator in the time-dependent case is also a unitary operator.

We’ve defined a propagator as a unitary operator that carries a state from {t=0} to some later time {t}, but we can generalize the notation so that {U\left(t_{2},t_{1}\right)} is a propagator that carries a state from {t=t_{1}} to {t=t_{2}}, that is

\displaystyle \left|\psi\left(t_{2}\right)\right\rangle =U\left(t_{2},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (16)

We can chain propagators together to get

\displaystyle \left|\psi\left(t_{3}\right)\right\rangle \displaystyle = \displaystyle U\left(t_{3},t_{2}\right)\left|\psi\left(t_{2}\right)\right\rangle \ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle U\left(t_{3},t_{2}\right)U\left(t_{2},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (18)
\displaystyle \displaystyle = \displaystyle U\left(t_{3},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (19)

Therefore

\displaystyle U\left(t_{3},t_{1}\right)=U\left(t_{3},t_{2}\right)U\left(t_{2},t_{1}\right) \ \ \ \ \ (20)

 

Since the Hermitian conjugate of a unitary operator is its inverse, we have

\displaystyle U^{\dagger}\left(t_{2},t_{1}\right)=U^{-1}\left(t_{2},t_{1}\right) \ \ \ \ \ (21)

We can combine this with 20 to get

\displaystyle \left|\psi\left(t_{1}\right)\right\rangle \displaystyle = \displaystyle I\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (22)
\displaystyle \displaystyle = \displaystyle U^{-1}\left(t_{2},t_{1}\right)U\left(t_{2},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (23)
\displaystyle \displaystyle = \displaystyle U^{\dagger}\left(t_{2},t_{1}\right)U\left(t_{2},t_{1}\right)\left|\psi\left(t_{1}\right)\right\rangle \ \ \ \ \ (24)

Therefore, since 20 gives {U\left(t_{1},t_{2}\right)U\left(t_{2},t_{1}\right)=U\left(t_{1},t_{1}\right)=I} and the inverse of an operator is unique, we have

\displaystyle U^{\dagger}\left(t_{2},t_{1}\right)U\left(t_{2},t_{1}\right) \displaystyle = \displaystyle U\left(t_{1},t_{1}\right)=I\ \ \ \ \ (25)
\displaystyle U^{\dagger}\left(t_{2},t_{1}\right) \displaystyle = \displaystyle U\left(t_{1},t_{2}\right) \ \ \ \ \ (26)

That is, the Hermitian conjugate (or inverse) of a propagator carries a state ‘backwards in time’ to its starting point.
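Both the composition rule 20 and the reversal property 26 can be checked numerically for a time-independent Hamiltonian, where {U\left(t_{2},t_{1}\right)=e^{-iH\left(t_{2}-t_{1}\right)/\hbar}}. The sketch below uses an arbitrary 2x2 Hermitian matrix for {H} and sets {\hbar=1}; all the specific numbers are illustrative assumptions:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    """2x2 matrix exponential via truncated Taylor series."""
    result = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for n in range(1, terms):
        term = [[sum(term[i][k] * A[k][j] for k in range(2)) / n for j in range(2)]
                for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

H = [[1.0, 0.5], [0.5, -1.0]]  # arbitrary 2x2 Hermitian Hamiltonian (hbar = 1)

def U(t2, t1):
    """Two-time propagator for time-independent H: U(t2, t1) = exp(-iH(t2 - t1))."""
    return expm([[-1j * (t2 - t1) * h for h in row] for row in H])

t1, t2, t3 = 0.2, 0.9, 1.6

# Composition rule: U(t3, t1) = U(t3, t2) U(t2, t1)
lhs = U(t3, t1)
rhs = mul(U(t3, t2), U(t2, t1))
comp_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))

# Reversal: U-dagger(t2, t1) = U(t1, t2), i.e. the conjugate runs time backwards
Ud = [[U(t2, t1)[j][i].conjugate() for j in range(2)] for i in range(2)]
rev_err = max(abs(Ud[i][j] - U(t1, t2)[i][j]) for i in range(2) for j in range(2))
```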

Postulates of quantum mechanics: Schrödinger equation and propagators

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 4.3.

The first three postulates of quantum mechanics concern the properties of a quantum state. The fourth postulate concerns how states evolve with time. The postulate simply states that in non-relativistic quantum mechanics, a state satisfies the Schrödinger equation:

\displaystyle i\hbar\frac{\partial}{\partial t}\left|\psi\right\rangle =H\left|\psi\right\rangle \ \ \ \ \ (1)

 

where {H} is the Hamiltonian, which is obtained from the classical Hamiltonian by means of the other postulates of quantum mechanics, namely that we replace all references to the position {x} by the quantum position operator {X} with matrix elements (in the {x} basis) of

\displaystyle \left\langle x^{\prime}\left|X\right|x\right\rangle =x\delta\left(x-x^{\prime}\right) \ \ \ \ \ (2)

and all references to classical momentum {p} by the momentum operator {P} with matrix elements

\displaystyle \left\langle x^{\prime}\left|P\right|x\right\rangle =-i\hbar\delta^{\prime}\left(x-x^{\prime}\right) \ \ \ \ \ (3)

Although we’ve posted many articles based on Griffiths’s book in which we solved the Schrödinger equation, the approach taken by Shankar is a bit different and, in some ways, a lot more elegant. We begin with a Hamiltonian that does not depend explicitly on time, and observe that, since the Schrödinger equation contains only the first derivative with respect to time, the time evolution of a state is uniquely determined by specifying only the initial state {\left|\psi\left(0\right)\right\rangle }. [A differential equation that is second order in time, such as the wave equation, requires both the initial position and initial velocity to be specified.]

The solution of the Schrödinger equation is then found in analogy to the approach we used in solving the coupled masses problem earlier. We find the eigenvalues and eigenvectors of the Hamiltonian in some basis and use these to construct the propagator {U\left(t\right)}. We can then write the solution as

\displaystyle \left|\psi\left(t\right)\right\rangle =U\left(t\right)\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (4)

 

For the case of a time-independent Hamiltonian, we can actually construct {U\left(t\right)} in terms of the eigenvectors of {H}. The eigenvalue equation is

\displaystyle H\left|E\right\rangle =E\left|E\right\rangle \ \ \ \ \ (5)

where {E} is an eigenvalue of {H} and {\left|E\right\rangle } is its corresponding eigenvector. Since the eigenvectors form a vector space, we can expand the wave function in terms of them in the usual way

\displaystyle \left|\psi\left(t\right)\right\rangle \displaystyle = \displaystyle \sum\left|E\right\rangle \left\langle E\left|\psi\left(t\right)\right.\right\rangle \ \ \ \ \ (6)
\displaystyle \displaystyle \equiv \displaystyle \sum a_{E}\left(t\right)\left|E\right\rangle \ \ \ \ \ (7)

The coefficient {a_{E}\left(t\right)} is the component of {\left|\psi\left(t\right)\right\rangle } along the {\left|E\right\rangle } vector as a function of time. We can now apply the Schrödinger equation 1 to get (a dot over a symbol indicates a time derivative):

\displaystyle i\hbar\frac{\partial}{\partial t}\left|\psi\left(t\right)\right\rangle \displaystyle = \displaystyle i\hbar\sum\dot{a}_{E}\left(t\right)\left|E\right\rangle \ \ \ \ \ (8)
\displaystyle \displaystyle = \displaystyle H\left|\psi\left(t\right)\right\rangle \ \ \ \ \ (9)
\displaystyle \displaystyle = \displaystyle \sum a_{E}\left(t\right)H\left|E\right\rangle \ \ \ \ \ (10)
\displaystyle \displaystyle = \displaystyle \sum a_{E}\left(t\right)E\left|E\right\rangle \ \ \ \ \ (11)

Since the eigenvectors {\left|E\right\rangle } are linearly independent (as they form a basis for the vector space), each term in the sum in the first line must be equal to the corresponding term in the sum in the last line, so we have

\displaystyle i\hbar\dot{a}_{E}\left(t\right)=a_{E}\left(t\right)E \ \ \ \ \ (12)

The solution is

\displaystyle a_{E}\left(t\right) \displaystyle = \displaystyle a_{E}\left(0\right)e^{-iEt/\hbar}\ \ \ \ \ (13)
\displaystyle \displaystyle = \displaystyle e^{-iEt/\hbar}\left\langle E\left|\psi\left(0\right)\right.\right\rangle \ \ \ \ \ (14)

The general solution 7 is therefore

\displaystyle \left|\psi\left(t\right)\right\rangle =\sum e^{-iEt/\hbar}\left|E\right\rangle \left\langle E\left|\psi\left(0\right)\right.\right\rangle \ \ \ \ \ (15)

 

from which we can read off the propagator:

\displaystyle U\left(t\right)=\sum e^{-iEt/\hbar}\left|E\right\rangle \left\langle E\right| \ \ \ \ \ (16)

Thus if we can determine the eigenvalues and eigenvectors of {H}, we can write the propagator in terms of them and get the general solution. We can see from this that {U\left(t\right)} is unitary:

\displaystyle U^{\dagger}U \displaystyle = \displaystyle \sum_{E^{\prime}}\sum_{E}e^{-i\left(E-E^{\prime}\right)t/\hbar}\left|E\right\rangle \left\langle E\left|E^{\prime}\right.\right\rangle \left\langle E^{\prime}\right|\ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle \sum_{E^{\prime}}\sum_{E}e^{-i\left(E-E^{\prime}\right)t/\hbar}\left|E\right\rangle \delta_{EE^{\prime}}\left\langle E^{\prime}\right|\ \ \ \ \ (18)
\displaystyle \displaystyle = \displaystyle \sum_{E}\left|E\right\rangle \left\langle E\right|\ \ \ \ \ (19)
\displaystyle \displaystyle = \displaystyle 1 \ \ \ \ \ (20)

This derivation uses the fact that the eigenvectors are orthonormal and form a complete set, so that {\left\langle E\left|E^{\prime}\right.\right\rangle =\delta_{EE^{\prime}}} and {\sum_{E}\left|E\right\rangle \left\langle E\right|=1}. Since a unitary operator doesn’t change the norm of a vector, we see from 4 that if {\left|\psi\left(0\right)\right\rangle } is normalized, then so is {\left|\psi\left(t\right)\right\rangle } for all times {t}. Further, the probability that the state will be measured to be in eigenstate {\left|E\right\rangle } is constant over time, since this probability is given by

\displaystyle \left|a_{E}\left(t\right)\right|^{2}=\left|e^{-iEt/\hbar}\left\langle E\left|\psi\left(0\right)\right.\right\rangle \right|^{2}=\left|\left\langle E\left|\psi\left(0\right)\right.\right\rangle \right|^{2} \ \ \ \ \ (21)
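The spectral construction 16, the unitarity of {U\left(t\right)} and the constancy of {\left|a_{E}\left(t\right)\right|^{2}} can all be illustrated with a 2x2 example. The Hamiltonian, initial state and time below are arbitrary illustrative choices ({\hbar=1}); the eigenvectors of a real symmetric 2x2 matrix are computed in closed form:

```python
import math, cmath

# Arbitrary real symmetric Hamiltonian H = [[a, b], [b, d]] (an assumption)
a, b, d = 1.0, 0.7, -0.5

# Closed-form eigenvalues and normalized eigenvectors of a 2x2 real symmetric matrix
mean, half = (a + d) / 2, (a - d) / 2
r = math.hypot(half, b)
eigs = [mean + r, mean - r]
vecs = []
for lam in eigs:
    v = (b, lam - a)          # satisfies H v = lam v (valid since b != 0)
    norm = math.hypot(*v)
    vecs.append((v[0] / norm, v[1] / norm))

def U(t):
    """Spectral form of the propagator: sum over E of e^{-iEt} |E><E| (hbar = 1)."""
    M = [[0j, 0j], [0j, 0j]]
    for lam, v in zip(eigs, vecs):
        phase = cmath.exp(-1j * lam * t)
        for i in range(2):
            for j in range(2):
                M[i][j] += phase * v[i] * v[j]
    return M

t = 2.3
Ut = U(t)

# Unitarity: U-dagger U = I
UdU = [[sum(Ut[k][i].conjugate() * Ut[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
unit_err = max(abs(UdU[i][j] - (1 if i == j else 0)) for i in range(2) for j in range(2))

# |a_E(t)|^2 = |<E|psi(t)>|^2 is independent of t
psi0 = (0.6, 0.8)   # arbitrary normalized initial state
psit = tuple(sum(Ut[i][j] * psi0[j] for j in range(2)) for i in range(2))
prob0 = [abs(sum(v[i] * psi0[i] for i in range(2))) ** 2 for v in vecs]
probt = [abs(sum(v[i] * psit[i] for i in range(2))) ** 2 for v in vecs]
prob_err = max(abs(p - q) for p, q in zip(prob0, probt))
```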

This derivation assumed that the spectrum of {H} was discrete and non-degenerate. If the possible eigenvalues {E} are continuous, then the sum is replaced by an integral

\displaystyle U\left(t\right)=\int e^{-iEt/\hbar}\left|E\right\rangle \left\langle E\right|dE \ \ \ \ \ (22)

If the spectrum is discrete and degenerate, then we need to find an orthonormal set of eigenvectors that spans each degenerate subspace, and sum over these sets. For example, if {E_{1}} is degenerate, then we find a set of eigenvectors {\left|E_{1},\alpha\right\rangle } that spans the subspace for which {E_{1}} is the eigenvalue. The index {\alpha} runs from 1 up to the degree of degeneracy of {E_{1}}, and the propagator is then

\displaystyle U\left(t\right)=\sum_{\alpha}\sum_{E_{i}}e^{-iE_{i}t/\hbar}\left|E_{i},\alpha\right\rangle \left\langle E_{i},\alpha\right| \ \ \ \ \ (23)

The sum over {E_{i}} runs over all the distinct eigenvalues, and the sum over {\alpha} runs over the eigenvectors for each different {E_{i}}.

Another form of the propagator can be written directly in terms of the time-independent Hamiltonian as

\displaystyle U\left(t\right)=e^{-iHt/\hbar} \ \ \ \ \ (24)

This relies on the concept of the function of an operator, so that {e^{-iHt/\hbar}} is a matrix whose elements are power series of the exponent {-\frac{iHt}{\hbar}}. The power series must, of course, converge for this solution to be valid. Since {H} is Hermitian, {U\left(t\right)} is unitary. We can verify that the solution using this form of {U\left(t\right)} satisfies the Schrödinger equation:

\displaystyle \left|\psi\left(t\right)\right\rangle \displaystyle = \displaystyle U\left(t\right)\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (25)
\displaystyle \displaystyle = \displaystyle e^{-iHt/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (26)
\displaystyle i\hbar\left|\dot{\psi}\left(t\right)\right\rangle \displaystyle = \displaystyle i\hbar\frac{d}{dt}\left(e^{-iHt/\hbar}\right)\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (27)
\displaystyle \displaystyle = \displaystyle i\hbar\left(-\frac{i}{\hbar}\right)He^{-iHt/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (28)
\displaystyle \displaystyle = \displaystyle He^{-iHt/\hbar}\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (29)
\displaystyle \displaystyle = \displaystyle H\left|\psi\left(t\right)\right\rangle \ \ \ \ \ (30)

The derivative of {U\left(t\right)} can be calculated from the derivatives of its matrix elements, which are all power series.
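The same verification can be done numerically: a centered finite-difference derivative of {U\left(t\right)=e^{-iHt/\hbar}} should reproduce {-\frac{i}{\hbar}HU\left(t\right)}. The sketch below uses an arbitrary 2x2 Hermitian {H} with {\hbar=1}:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    """2x2 matrix exponential via truncated Taylor series."""
    result = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for n in range(1, terms):
        term = [[sum(term[i][k] * A[k][j] for k in range(2)) / n for j in range(2)]
                for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

H = [[0.5, 1.0], [1.0, -0.5]]  # arbitrary 2x2 Hermitian Hamiltonian (an assumption)

def U(t):
    """Propagator U(t) = exp(-iHt) with hbar = 1."""
    return expm([[-1j * t * h for h in row] for row in H])

# Check the Schrodinger equation for the propagator: i dU/dt = H U(t)
t, h = 0.8, 1e-5
dU = [[(U(t + h)[i][j] - U(t - h)[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
lhs = [[1j * dU[i][j] for j in range(2)] for i in range(2)]
rhs = mul(H, U(t))
se_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```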

Coupled masses on springs – properties of the propagator

References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Exercise 1.8.12.

We’ll continue our study of the system of two masses coupled by springs. The system is described by the matrix equation of motion:

\displaystyle \left|\ddot{x}\left(t\right)\right\rangle =\Omega\left|x\left(t\right)\right\rangle \ \ \ \ \ (1)

 

where

\displaystyle \left|x\left(t\right)\right\rangle =x_{1}\left(t\right)\left|1\right\rangle +x_{2}\left(t\right)\left|2\right\rangle \ \ \ \ \ (2)

in the basis

\displaystyle \left|1\right\rangle \displaystyle = \displaystyle \left[\begin{array}{c} 1\\ 0 \end{array}\right]\ \ \ \ \ (3)
\displaystyle \left|2\right\rangle \displaystyle = \displaystyle \left[\begin{array}{c} 0\\ 1 \end{array}\right] \ \ \ \ \ (4)

In this basis, {\Omega} is the operator whose matrix form is

\displaystyle \Omega=\left[\begin{array}{cc} -2\frac{k}{m} & \frac{k}{m}\\ \frac{k}{m} & -2\frac{k}{m} \end{array}\right] \ \ \ \ \ (5)

We found that the solution could be written as

\displaystyle \left[\begin{array}{c} x_{1}\left(t\right)\\ x_{2}\left(t\right) \end{array}\right]=\frac{1}{2}\left[\begin{array}{cc} \cos\sqrt{\frac{k}{m}}t+\cos\sqrt{\frac{3k}{m}}t & \cos\sqrt{\frac{k}{m}}t-\cos\sqrt{\frac{3k}{m}}t\\ \cos\sqrt{\frac{k}{m}}t-\cos\sqrt{\frac{3k}{m}}t & \cos\sqrt{\frac{k}{m}}t+\cos\sqrt{\frac{3k}{m}}t \end{array}\right]\left[\begin{array}{c} x_{1}\left(0\right)\\ x_{2}\left(0\right) \end{array}\right] \ \ \ \ \ (6)

In compact form, we can write this as

\displaystyle \left|x\left(t\right)\right\rangle =U\left(t\right)\left|x\left(0\right)\right\rangle \ \ \ \ \ (7)

 

where the propagator operator is defined as

\displaystyle U\left(t\right)\equiv\frac{1}{2}\left[\begin{array}{cc} \cos\sqrt{\frac{k}{m}}t+\cos\sqrt{\frac{3k}{m}}t & \cos\sqrt{\frac{k}{m}}t-\cos\sqrt{\frac{3k}{m}}t\\ \cos\sqrt{\frac{k}{m}}t-\cos\sqrt{\frac{3k}{m}}t & \cos\sqrt{\frac{k}{m}}t+\cos\sqrt{\frac{3k}{m}}t \end{array}\right] \ \ \ \ \ (8)

From 1, we can operate on both sides of 7 with the operator {\frac{d^{2}}{dt^{2}}-\Omega} to get

\displaystyle \left(\frac{d^{2}}{dt^{2}}-\Omega\right)\left|x\left(t\right)\right\rangle =\left(\frac{d^{2}}{dt^{2}}-\Omega\right)U\left(t\right)\left|x\left(0\right)\right\rangle =0 \ \ \ \ \ (9)

Since the initial positions {\left|x\left(0\right)\right\rangle } are arbitrary and contain no time dependence, the matrix {U\left(t\right)} satisfies the differential equation

\displaystyle \frac{d^{2}U\left(t\right)}{dt^{2}}=\Omega U\left(t\right) \ \ \ \ \ (10)
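Equation 10 can be confirmed numerically by differentiating the explicit propagator 8 with finite differences. The sketch below takes {k=m=1}, so that {\Omega=\left[\begin{array}{cc}-2 & 1\\1 & -2\end{array}\right]}:

```python
import math

def U(t):
    """Explicit coupled-mass propagator with k = m = 1 (an illustrative choice)."""
    c1, c3 = math.cos(t), math.cos(math.sqrt(3.0) * t)
    return [[(c1 + c3) / 2, (c1 - c3) / 2],
            [(c1 - c3) / 2, (c1 + c3) / 2]]

Omega = [[-2.0, 1.0], [1.0, -2.0]]  # Omega for k = m = 1

# Second central difference approximates d^2 U / dt^2
t, h = 1.3, 1e-4
d2U = [[(U(t + h)[i][j] - 2 * U(t)[i][j] + U(t - h)[i][j]) / h ** 2
        for j in range(2)] for i in range(2)]

# Compare with Omega U(t)
OmU = [[sum(Omega[i][k] * U(t)[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
ode_err = max(abs(d2U[i][j] - OmU[i][j]) for i in range(2) for j in range(2))
```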

By direct calculation (I used Maple, but you can do it by hand using the usual rules for matrix multiplication, although it’s quite tedious), we can show that {\Omega} and {U} commute and, since both {\Omega} and {U} are Hermitian, they are simultaneously diagonalizable. We already worked out the eigenvectors of {\Omega}:

\displaystyle \left|I\right\rangle \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left[\begin{array}{c} 1\\ 1 \end{array}\right]\ \ \ \ \ (11)
\displaystyle \left|II\right\rangle \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left[\begin{array}{c} 1\\ -1 \end{array}\right] \ \ \ \ \ (12)

Since {\Omega} is not degenerate, these must also be the eigenvectors of {U}, so the unitary matrix

\displaystyle D=\frac{1}{\sqrt{2}}\left[\begin{array}{cc} 1 & 1\\ 1 & -1 \end{array}\right] \ \ \ \ \ (13)

can be used to diagonalize {U} according to

\displaystyle D^{\dagger}UD \displaystyle = \displaystyle \frac{1}{4}\left[\begin{array}{cc} 1 & 1\\ 1 & -1 \end{array}\right]\left[\begin{array}{cc} \cos\sqrt{\frac{k}{m}}t+\cos\sqrt{\frac{3k}{m}}t & \cos\sqrt{\frac{k}{m}}t-\cos\sqrt{\frac{3k}{m}}t\\ \cos\sqrt{\frac{k}{m}}t-\cos\sqrt{\frac{3k}{m}}t & \cos\sqrt{\frac{k}{m}}t+\cos\sqrt{\frac{3k}{m}}t \end{array}\right]\left[\begin{array}{cc} 1 & 1\\ 1 & -1 \end{array}\right]\ \ \ \ \ (14)
\displaystyle \displaystyle = \displaystyle \frac{1}{2}\left[\begin{array}{cc} 1 & 1\\ 1 & -1 \end{array}\right]\left[\begin{array}{cc} \cos\sqrt{\frac{k}{m}}t & \cos\sqrt{\frac{3k}{m}}t\\ \cos\sqrt{\frac{k}{m}}t & -\cos\sqrt{\frac{3k}{m}}t \end{array}\right]\ \ \ \ \ (15)
\displaystyle \displaystyle = \displaystyle \left[\begin{array}{cc} \cos\sqrt{\frac{k}{m}}t & 0\\ 0 & \cos\sqrt{\frac{3k}{m}}t \end{array}\right] \ \ \ \ \ (16)

This matches the diagonal form for {U} given as equation 1.8.43 in Shankar’s book. The diagonal entries are the eigenvalues of {U\left(t\right)}.
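The diagonalization can also be checked numerically. With {k=m=1}, {D^{\dagger}UD} should come out as {\mathrm{diag}\left(\cos t,\cos\sqrt{3}t\right)}; the time value below is an arbitrary choice:

```python
import math

s = 1.0 / math.sqrt(2.0)
D = [[s, s], [s, -s]]  # eigenvector matrix; real, so D-dagger is just the transpose

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.9
c1, c3 = math.cos(t), math.cos(math.sqrt(3.0) * t)  # cos(sqrt(k/m) t), cos(sqrt(3k/m) t) at k = m = 1
U = [[(c1 + c3) / 2, (c1 - c3) / 2],
     [(c1 - c3) / 2, (c1 + c3) / 2]]

Dt = [[D[j][i] for j in range(2)] for i in range(2)]  # transpose of D
diag = mul(mul(Dt, U), D)
expected = [[c1, 0.0], [0.0, c3]]
diag_err = max(abs(diag[i][j] - expected[i][j]) for i in range(2) for j in range(2))
```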

Feynman propagator for scalar fields

References: Robert D. Klauber, Student Friendly Quantum Field Theory, (Sandtrove Press, 2013) – Chapter 3, Problem 3.17.

In quantum field theory, interactions between particles are mediated by virtual particles, which are particles that are never observed, but which carry the information from the particles before the interaction to the particles after the interaction. A particle interaction can be represented graphically by a Feynman diagram. One example is this:

In the diagram, time increases from left to right. The arrow on a line indicates the motion of a particle, and rather confusingly at first, antiparticles are shown with arrows opposite to their direction of propagation. Thus in this interaction, an electron {e^{-}} and a positron (antielectron) {e^{+}} move in from the left and meet at point {x_{2}}. They annihilate each other, producing a photon (the wavy line, labelled {\gamma} for gamma particle). The photon is a virtual particle (it is never observed in an experiment) which propagates to location {x_{1}} where it spontaneously dissociates into an electron-positron pair which move off to the right.

The physics of the virtual particle, the photon in this case, is described by a Feynman propagator, or just propagator. In its simplest form, a propagator creates a virtual particle from the vacuum and, a short time later, annihilates it. We can use the Klein-Gordon fields derived earlier to see how this works. [Note that a photon is not described by a Klein-Gordon field, since the photon has spin 1 and is not a scalar particle.]

The continuous fields are

\displaystyle   \phi\left(x\right) \displaystyle  = \displaystyle  \int\frac{d^{3}k}{\sqrt{2\left(2\pi\right)^{3}\omega_{\mathbf{k}}}}a\left(\mathbf{k}\right)e^{-ikx}+\int\frac{d^{3}k}{\sqrt{2\left(2\pi\right)^{3}\omega_{\mathbf{k}}}}b^{\dagger}\left(\mathbf{k}\right)e^{ikx}\ \ \ \ \ (1)
\displaystyle  \displaystyle  \equiv \displaystyle  \phi^{+}+\phi^{-}\ \ \ \ \ (2)
\displaystyle  \phi^{\dagger}\left(x\right) \displaystyle  = \displaystyle  \int\frac{d^{3}k}{\sqrt{2\left(2\pi\right)^{3}\omega_{\mathbf{k}}}}b\left(\mathbf{k}\right)e^{-ikx}+\int\frac{d^{3}k}{\sqrt{2\left(2\pi\right)^{3}\omega_{\mathbf{k}}}}a^{\dagger}\left(\mathbf{k}\right)e^{ikx}\ \ \ \ \ (3)
\displaystyle  \displaystyle  \equiv \displaystyle  \phi^{\dagger+}+\phi^{\dagger-} \ \ \ \ \ (4)

For a scalar field, there are two situations. First, we can create a particle at location {\mathbf{y}} at time {t_{y}}, then at a later time {t_{x}>t_{y}}, we can annihilate the same particle at location {\mathbf{x}}. This is shown in the Feynman diagram:

Second, if {t_{x}<t_{y}} we can create an antiparticle at {\mathbf{x}} and annihilate it at {\mathbf{y}}, as shown

The important point is that these two virtual particle situations have the same result in an experiment. If a virtual particle is created at {\mathbf{y}} at {t_{y}} and travels to {\mathbf{x}} at time {t_{x}}, it carries information (charge and so on) from {\mathbf{y}} to {\mathbf{x}}. If the corresponding virtual antiparticle is created at {\mathbf{x}} at {t_{x}} and travels to {\mathbf{y}} at {t_{y}}, it carries exactly the opposite information (since it’s an antiparticle) from {\mathbf{x}} to {\mathbf{y}}. Thus in a real experiment, the propagator must include both possibilities.

Klauber treats the case of {t_{y}<t_{x}}, so we’ll look at the other case (the two derivations are very similar). That is, we want to create an antiparticle at {x} and annihilate it at {y}. From the equations above, we see that {\phi} creates antiparticles (it contains the {b^{\dagger}} operators) and {\phi^{\dagger}} destroys them (it contains the {b} operators), so the life of the virtual particle is described by applying these two fields in some order. Since we want to create an antiparticle first and then annihilate it, we need to apply {\phi} first, then {\phi^{\dagger}}. The situation is reversed if we want to create a particle and then annihilate it, since in that case {\phi^{\dagger}} contains the {a^{\dagger}} operators and {\phi} contains the {a} operators. The two time orders above thus require the fields to be applied in different orders, and a time ordering operator {T} is defined so that

\displaystyle  T\left[\phi\left(x\right)\phi^{\dagger}\left(y\right)\right]=\begin{cases} \phi\left(x\right)\phi^{\dagger}\left(y\right) & \mbox{if }t_{y}<t_{x}\\ \phi^{\dagger}\left(y\right)\phi\left(x\right) & \mbox{if }t_{x}<t_{y} \end{cases} \ \ \ \ \ (5)
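As a minimal sketch of definition 5, the ordering can be expressed as a function that takes the two times and returns labels for the factors in the order they are written, left to right (the string labels are purely illustrative):

```python
def time_ordered(tx, ty):
    """Factors of T[phi(x) phi-dagger(y)], leftmost first, per the case-split above."""
    if ty < tx:
        return ("phi(x)", "phi_dagger(y)")   # particle created at y, annihilated at x
    else:
        return ("phi_dagger(y)", "phi(x)")   # antiparticle created at x, annihilated at y
```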

We’re interested in the second case. The transition amplitude for a process in which a virtual particle is created out of the vacuum and then decays back into the vacuum is then

\displaystyle  \left\langle 0\left|T\left[\phi\left(x\right)\phi^{\dagger}\left(y\right)\right]\right|0\right\rangle  \ \ \ \ \ (6)

Looking at the antiparticle case, we have

\displaystyle  T\left[\phi\left(x\right)\phi^{\dagger}\left(y\right)\right]\left|0\right\rangle =\left(\phi^{\dagger+}\left(y\right)+\phi^{\dagger-}\left(y\right)\right)\left(\phi^{+}\left(x\right)+\phi^{-}\left(x\right)\right)\left|0\right\rangle  \ \ \ \ \ (7)

If we’re looking at an antiparticle with a specific wave number {\mathbf{k}}, then {\phi^{-}} creates an antiparticle and {\phi^{\dagger+}} destroys an antiparticle (while {\phi^{+}} destroys a particle and {\phi^{\dagger-}} creates a particle). Any annihilation operator acting on the vacuum gives zero, so

\displaystyle   \left(\phi^{+}\left(x\right)+\phi^{-}\left(x\right)\right)\left|0\right\rangle \displaystyle  = \displaystyle  \left(0+\phi^{-}\left(x\right)\right)\left|0\right\rangle \ \ \ \ \ (8)
\displaystyle  \displaystyle  = \displaystyle  A\left(x\right)\left|\bar{\phi}\right\rangle \ \ \ \ \ (9)

where {A\left(x\right)} is a numerical function (not an operator), since a creation operator acting on the vacuum gives a number (determined by normalization) multiplied by the state {\left|\bar{\phi}\right\rangle } containing a single antiparticle.

Returning to 7, we see that operating on this result with {\phi^{\dagger-}\left(y\right)} creates a particle, so gives the state {\left|\bar{\phi}\phi\right\rangle } multiplied by some other numerical function {B\left(y\right)}, while operating on {A\left(x\right)\left|\bar{\phi}\right\rangle } with {\phi^{\dagger+}\left(y\right)} destroys the antiparticle just created, producing the vacuum state {\left|0\right\rangle } multiplied by some other numerical function {C\left(y\right)}. Therefore we get

\displaystyle  T\left[\phi\left(x\right)\phi^{\dagger}\left(y\right)\right]\left|0\right\rangle =C\left(y\right)A\left(x\right)\left|0\right\rangle +B\left(y\right)A\left(x\right)\left|\bar{\phi}\phi\right\rangle \ \ \ \ \ (10)

Thus the transition amplitude 6 is

\displaystyle  \left\langle 0\left|T\left[\phi\left(x\right)\phi^{\dagger}\left(y\right)\right]\right|0\right\rangle =\left\langle 0\left|C\left(y\right)A\left(x\right)\right|0\right\rangle +\left\langle 0\left|B\left(y\right)A\left(x\right)\right|\bar{\phi}\phi\right\rangle \ \ \ \ \ (11)

The brackets imply an integration over all space, but we’re interested in the antiparticle creation occurring at a specific location {x} and annihilation at another specific location {y}, so these two locations are actually constants relative to the integration variable, and can come outside the brackets. From the orthonormality of quantum states, {\left\langle 0\left|0\right.\right\rangle =1} and {\left\langle 0\left|\bar{\phi}\phi\right.\right\rangle =0}, so we get

\displaystyle  \left\langle 0\left|T\left[\phi\left(x\right)\phi^{\dagger}\left(y\right)\right]\right|0\right\rangle =C\left(y\right)A\left(x\right) \ \ \ \ \ (12)
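The way the vacuum expectation value keeps only the "create, then annihilate" piece can be mimicked in a truncated single-mode Fock space. The sketch below keeps only the states {\left|0\right\rangle ,\left|1\right\rangle ,\left|2\right\rangle } and uses ordinary ladder operators, a deliberate simplification of the field operators above:

```python
import math

# Annihilation operator in the truncated number basis: a|n> = sqrt(n)|n-1>
a = [[0.0, 1.0, 0.0],
     [0.0, 0.0, math.sqrt(2.0)],
     [0.0, 0.0, 0.0]]
adag = [[a[j][i] for j in range(3)] for i in range(3)]  # creation operator (transpose)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

vac = (1.0, 0.0, 0.0)  # the vacuum |0>

def vev(M):
    """Vacuum expectation value <0|M|0>."""
    return sum(vac[i] * M[i][j] * vac[j] for i in range(3) for j in range(3))

# Creating a quantum and then annihilating it survives the vacuum expectation;
# the reverse ordering gives zero, just as only C(y)A(x) survives in (12).
create_then_annihilate = vev(mul(a, adag))
annihilate_then_create = vev(mul(adag, a))
```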

The result for creating and annihilating a particle (as opposed to an antiparticle) is the same, although the numerical functions can be different. Klauber calls them {G\left(x\right)} and {F\left(y\right)}, so that for the particle case

\displaystyle  \left\langle 0\left|T\left[\phi\left(x\right)\phi^{\dagger}\left(y\right)\right]\right|0\right\rangle =F\left(y\right)G\left(x\right) \ \ \ \ \ (13)

The vacuum expectation value of the time ordering operator is called the Feynman propagator, defined as {i\Delta_{F}\left(x-y\right)}:

\displaystyle  i\Delta_{F}\left(x-y\right)\equiv\left\langle 0\left|T\left[\phi\left(x\right)\phi^{\dagger}\left(y\right)\right]\right|0\right\rangle \ \ \ \ \ (14)