
Direct product of vector spaces: 2-dim examples

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.1.2.

To help with understanding the direct product of two vector spaces, some examples with a couple of 2-d vector spaces are useful. Suppose the one-particle Hilbert space is two-dimensional, with basis vectors {\left|+\right\rangle } and {\left|-\right\rangle }. Now suppose we have two such particles, each in its own 2-d space, {\mathbb{V}_{1}} for particle 1 and {\mathbb{V}_{2}} for particle 2. We can define a couple of operators by their matrix elements in these two spaces. We define

\displaystyle   \sigma_{1}^{\left(1\right)} \displaystyle  \equiv \displaystyle  \left[\begin{array}{cc} a & b\\ c & d \end{array}\right]\ \ \ \ \ (1)
\displaystyle  \sigma_{2}^{\left(2\right)} \displaystyle  \equiv \displaystyle  \left[\begin{array}{cc} e & f\\ g & h \end{array}\right] \ \ \ \ \ (2)

where the first column and row refer to basis vector {\left|+\right\rangle } and the second column and row to {\left|-\right\rangle }. Recall that the subscript on each {\sigma} refers to the particle and the superscript refers to the vector space. Thus {\sigma_{1}^{\left(1\right)}} is an operator in space {\mathbb{V}_{1}} for particle 1.

Now consider the direct product space {\mathbb{V}_{1}\otimes\mathbb{V}_{2}}, which is spanned by the four basis vectors formed by direct products of the two basis vectors in each of the one-particle spaces, that is by {\left|+\right\rangle \otimes\left|+\right\rangle }, {\left|+\right\rangle \otimes\left|-\right\rangle }, {\left|-\right\rangle \otimes\left|+\right\rangle } and {\left|-\right\rangle \otimes\left|-\right\rangle }. Each of the {\sigma} operators has a corresponding version in the product space, which is formed by taking the direct product of the one-particle version for one of the particles with the identity operator for the other particle. That is

\displaystyle   \sigma_{1}^{\left(1\right)\otimes\left(2\right)} \displaystyle  = \displaystyle  \sigma_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\ \ \ \ \ (3)
\displaystyle  \sigma_{2}^{\left(1\right)\otimes\left(2\right)} \displaystyle  = \displaystyle  I^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)} \ \ \ \ \ (4)

To get the matrix elements in the product space, we need the form of the identity operators in the one-particle spaces. They are, as usual

\displaystyle   I^{\left(1\right)} \displaystyle  = \displaystyle  \left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right]\ \ \ \ \ (5)
\displaystyle  I^{\left(2\right)} \displaystyle  = \displaystyle  \left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right] \ \ \ \ \ (6)

I’ve written the two identity operators as separate equations because, although they have the same numerical form as a matrix, they operate on different spaces and so are technically different operators. To get the matrix elements of {\sigma_{1}^{\left(1\right)\otimes\left(2\right)}} we can expand the direct product. (Shankar suggests using the ‘method of images’, although I have no idea what this is; I doubt it’s the same method of images used in electrostatics, and Google draws a blank for any other kind.) In any case, we can form the product by taking the corresponding matrix elements. For example

\displaystyle   \left\langle ++\left|\sigma_{1}^{\left(1\right)\otimes\left(2\right)}\right|++\right\rangle \displaystyle  = \displaystyle  \left(\left\langle +\right|\otimes\left\langle +\right|\right)\sigma_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left(\left|+\right\rangle \otimes\left|+\right\rangle \right)\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  \left\langle +\left|\sigma_{1}^{\left(1\right)}\right|+\right\rangle \left\langle +\left|I^{\left(2\right)}\right|+\right\rangle \ \ \ \ \ (8)
\displaystyle  \displaystyle  = \displaystyle  a\times1=a \ \ \ \ \ (9)

When working out the RHS of the first line, remember that operators with a superscript (1) operate only on bras and kets from the space {\mathbb{V}_{1}} and operators with a superscript (2) operate only on bras and kets from the space {\mathbb{V}_{2}}. Applying the same technique for the remaining elements gives

\displaystyle  \sigma_{1}^{\left(1\right)\otimes\left(2\right)}=\sigma_{1}^{\left(1\right)}\otimes I^{\left(2\right)}=\left[\begin{array}{cccc} a & 0 & b & 0\\ 0 & a & 0 & b\\ c & 0 & d & 0\\ 0 & c & 0 & d \end{array}\right] \ \ \ \ \ (10)

Another less tedious way of getting this result is to note that we can form the direct product by taking each element in the first matrix {\sigma_{1}^{\left(1\right)}} from 1 and multiplying it into the second matrix {I^{\left(2\right)}} from 6. Thus the upper left {2\times2} block of {\sigma_{1}^{\left(1\right)\otimes\left(2\right)}} is obtained by taking the element {\left\langle +\left|\sigma_{1}^{\left(1\right)}\right|+\right\rangle =a} from 1 and multiplying it into the matrix {I^{\left(2\right)}} from 6. That is, the upper left {2\times2} block is formed from

\displaystyle   aI_{2\times2}^{\left(2\right)} \displaystyle  = \displaystyle  \left[\begin{array}{cc} a & 0\\ 0 & a \end{array}\right] \ \ \ \ \ (11)

and so on for the other three {2\times2} blocks in the complete matrix. Note that it’s important to get things in the right order, as the direct product is not commutative.
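
If you want to verify this block structure by machine, sympy’s TensorProduct computes the Kronecker product of explicit matrices, which is exactly the direct product written in the product basis used here. A minimal sketch (sympy and the symbol names are my own addition, not part of Shankar’s text):

```python
import sympy as sp
from sympy.physics.quantum import TensorProduct

a, b, c, d = sp.symbols('a b c d')

sigma1 = sp.Matrix([[a, b], [c, d]])   # sigma_1^(1) from 1
I2 = sp.eye(2)                         # the identity on either one-particle space

# direct (Kronecker) product sigma_1^(1) (x) I^(2), in the basis |++>, |+->, |-+>, |-->
print(TensorProduct(sigma1, I2))       # reproduces the block matrix in 10
```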

To get the other direct product, we can apply the same technique:

\displaystyle  \sigma_{2}^{\left(1\right)\otimes\left(2\right)}=I^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)}=\left[\begin{array}{cccc} e & f & 0 & 0\\ g & h & 0 & 0\\ 0 & 0 & e & f\\ 0 & 0 & g & h \end{array}\right] \ \ \ \ \ (12)

Again, note that

\displaystyle  I^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)}\ne\sigma_{2}^{\left(2\right)}\otimes I^{\left(1\right)}=\left[\begin{array}{cccc} e & 0 & f & 0\\ 0 & e & 0 & f\\ g & 0 & h & 0\\ 0 & g & 0 & h \end{array}\right] \ \ \ \ \ (13)

Finally, we can work out the direct product version of the product of two one-particle operators. That is, we want

\displaystyle  \left(\sigma_{1}\sigma_{2}\right)^{\left(1\right)\otimes\left(2\right)}=\sigma_{1}^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)} \ \ \ \ \ (14)

We can do this in two ways. First, we can apply the same recipe as in the previous example. We take each element of {\sigma_{1}^{\left(1\right)}} and multiply it into the full matrix {\sigma_{2}^{\left(2\right)}}:

\displaystyle   \sigma_{1}^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)} \displaystyle  = \displaystyle  \left[\begin{array}{cccc} ae & af & be & bf\\ ag & ah & bg & bh\\ ce & cf & de & df\\ cg & ch & dg & dh \end{array}\right] \ \ \ \ \ (15)

Second, we can take the matrix product of {\sigma_{1}^{\left(1\right)\otimes\left(2\right)}} from 10 with {\sigma_{2}^{\left(1\right)\otimes\left(2\right)}} from 12:

\displaystyle   \left(\sigma_{1}\sigma_{2}\right)^{\left(1\right)\otimes\left(2\right)} \displaystyle  = \displaystyle  \left[\begin{array}{cccc} a & 0 & b & 0\\ 0 & a & 0 & b\\ c & 0 & d & 0\\ 0 & c & 0 & d \end{array}\right]\left[\begin{array}{cccc} e & f & 0 & 0\\ g & h & 0 & 0\\ 0 & 0 & e & f\\ 0 & 0 & g & h \end{array}\right]\ \ \ \ \ (16)
\displaystyle  \displaystyle  = \displaystyle  \left[\begin{array}{cccc} ae & af & be & bf\\ ag & ah & bg & bh\\ ce & cf & de & df\\ cg & ch & dg & dh \end{array}\right] \ \ \ \ \ (17)
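
A quick sympy check confirms both results at once: the matrix product of 10 and 12 reproduces the direct product 15, and reversing the order of a direct product changes the matrix, as in 13. Again this is just a verification sketch, not part of Shankar’s text:

```python
import sympy as sp
from sympy.physics.quantum import TensorProduct

a, b, c, d, e, f, g, h = sp.symbols('a b c d e f g h')

sigma1 = sp.Matrix([[a, b], [c, d]])
sigma2 = sp.Matrix([[e, f], [g, h]])
I2 = sp.eye(2)

lhs = TensorProduct(sigma1, I2) * TensorProduct(I2, sigma2)   # matrix product of 10 and 12
rhs = TensorProduct(sigma1, sigma2)                           # the direct product 15

print(lhs - rhs)                                              # the zero matrix, confirming 17
print(TensorProduct(I2, sigma2) - TensorProduct(sigma2, I2))  # not the zero matrix: the order matters, as in 13
```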

Direct product of two vector spaces

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.1.1.

Although we’ve studied quantum systems of more than one particle before (for example, systems of fermions and bosons) as covered by Griffiths’s book, the wave functions associated with such particles were just given as products of single-particle wave functions (or linear combinations of these products). We didn’t examine the linear algebra behind these functions. In his chapter 10, Shankar begins by describing the algebra of a direct product vector space, so we’ll review this here.

The physics begins with an extension of the postulate of quantum mechanics that, for a single particle, the position and momentum obey the commutation relation

\displaystyle \left[X,P\right]=i\hbar \ \ \ \ \ (1)

To extend this to multi-particle systems, we propose

\displaystyle \left[X_{i},P_{j}\right] \displaystyle = \displaystyle i\hbar\delta_{ij}\ \ \ \ \ (2)
\displaystyle \left[X_{i},X_{j}\right] \displaystyle = \displaystyle \left[P_{i},P_{j}\right]=0 \ \ \ \ \ (3)

where the subscripts refer to the particle we’re considering.

These postulates are translations of the Poisson brackets of classical mechanics, following the prescription that, to obtain the quantum commutator, we multiply the classical Poisson bracket by {i\hbar}. The physics in these relations is that properties such as the position or momentum of different particles are simultaneously observable, although the position and momentum of a single particle are still governed by the uncertainty principle.

We’ll now restrict our attention to a two-particle system. In such a system, the eigenstate of the position operators is written as {\left|x_{1}x_{2}\right\rangle } and satisfies the eigenvalue equation

\displaystyle X_{i}\left|x_{1}x_{2}\right\rangle =x_{i}\left|x_{1}x_{2}\right\rangle \ \ \ \ \ (4)

Operators referring to particle {i} effectively ignore any quantities associated with the other particle.

So what exactly are these states {\left|x_{1}x_{2}\right\rangle }? They are a set of vectors that span a Hilbert space describing the state of two particles. Note that we can use any two commuting operators {\Omega_{1}\left(X_{1},P_{1}\right)} and {\Omega_{2}\left(X_{2},P_{2}\right)} to create a set of eigenkets {\left|\omega_{1}\omega_{2}\right\rangle } which also span the space. Any operator that is a function of the position and momentum of only one of the particles commutes with any operator that is a function of only the other particle, since the position and momentum operators of which each is composed commute with those of the other particle. That is

\displaystyle \left[\Omega\left(X_{1},P_{1}\right),\Lambda\left(X_{2},P_{2}\right)\right]=0 \ \ \ \ \ (5)

 

The space spanned by {\left|x_{1}x_{2}\right\rangle } can also be written as a direct product of two one-particle spaces. This space is written as {\mathbb{V}_{1\otimes2}} where the symbol {\otimes} is the direct product symbol (it’s also the logo of the X-Men, but we won’t pursue that). The direct product is composed of the two single-particle spaces {\mathbb{V}_{1}} (spanned by {\left|x_{1}\right\rangle }) and {\mathbb{V}_{2}} (spanned by {\left|x_{2}\right\rangle }). The notation gets quite cumbersome at this point, so let’s spell it out carefully. For an operator {\Omega}, we can specify which particle it acts on by a subscript, and which space it acts on by a superscript. Thus {X_{1}^{\left(1\right)}} is the position operator for particle 1, which operates on the vector space {\mathbb{V}_{1}}. It might seem redundant at this point to specify both the particle and the space, since it would seem that these are always the same. However, be patient…

From the two one-particle spaces, we can form the two-particle space by taking the direct product of the two one-particle states. Thus the state in which particle 1 is in state {\left|x_{1}\right\rangle } and particle 2 is in state {\left|x_{2}\right\rangle } is written as

\displaystyle \left|x_{1}x_{2}\right\rangle =\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle \ \ \ \ \ (6)

 

It is important to note that this object is composed of two vectors from different vector spaces. The inner and outer products we’ve dealt with up to now, for things like finding the probability that a state has a particular value and so on, that is, objects like {\left\langle \psi_{1}\left|\psi_{2}\right.\right\rangle } and {\left|\psi_{1}\right\rangle \left\langle \psi_{2}\right|}, are composed of two vectors from the same vector space, so no direct product is needed.

Recall the direct sum of two vector spaces

\displaystyle \mathbb{V}_{1\oplus2}=\mathbb{V}_{1}\oplus\mathbb{V}_{2} \ \ \ \ \ (7)

whose dimension is the sum of the dimensions of {\mathbb{V}_{1}} and {\mathbb{V}_{2}}. For a direct product we see from 6 that for each vector {\left|x_{1}\right\rangle } there is one basis vector for each vector {\left|x_{2}\right\rangle }. Thus the number of basis vectors is the product of the number of basis vectors in each of the two one-particle spaces. In other words, the dimension of a direct product is the product of the dimensions of the two vector spaces of which it is composed. [In the case here, both the spaces {\mathbb{V}_{1}} and {\mathbb{V}_{2}} have infinite dimension, so the dimension of {\mathbb{V}_{1\otimes2}} is, in effect, ‘doubly infinite’. In a case where {\mathbb{V}_{1}} and {\mathbb{V}_{2}} have finite dimension, we can just multiply these dimensions to get the dimension of {\mathbb{V}_{1\otimes2}}.]

As {\mathbb{V}_{1\otimes2}} is a vector space with basis vectors {\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle }, any linear combination of the basis vectors is also a vector in the space {\mathbb{V}_{1\otimes2}}. Thus the vector

\displaystyle \left|\psi\right\rangle =\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle +\left|y_{1}\right\rangle \otimes\left|y_{2}\right\rangle \ \ \ \ \ (8)

is in {\mathbb{V}_{1\otimes2}}, although it can’t be written as a direct product of the two one-particle spaces {\mathbb{V}_{1}} and {\mathbb{V}_{2}}.
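
To make the dimension counting and this last remark concrete, here is a small numpy sketch with hypothetical finite-dimensional stand-ins for the kets (not part of Shankar’s text). It uses the standard fact that a vector in the product space factors into a single direct product exactly when its coefficient matrix, with rows labelled by the {\mathbb{V}_{1}} basis and columns by the {\mathbb{V}_{2}} basis, has rank 1:

```python
import numpy as np

# hypothetical finite-dimensional stand-ins: dim V1 = 2, dim V2 = 3
x1, y1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x2, y2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

print(np.kron(x1, x2).shape)     # (6,): the product space has dimension 2 x 3

psi = np.kron(x1, x2) + np.kron(y1, y2)   # the state of 8
coeffs = psi.reshape(2, 3)                # rows labelled by the V1 basis, columns by the V2 basis
print(np.linalg.matrix_rank(coeffs))      # 2, so psi cannot be written as a single direct product
```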

Having defined the direct product space, we now need to consider operators in this space. Although Shankar states that it ‘is intuitively clear’ that a single particle operator such as {X_{1}^{\left(1\right)}} must have a corresponding operator in the product space that has the same effect as {X_{1}^{\left(1\right)}} has on the single particle state, it seems to me to be more of a postulate. In any case, it is proposed that if

\displaystyle X_{1}^{\left(1\right)}\left|x_{1}\right\rangle =x_{1}\left|x_{1}\right\rangle \ \ \ \ \ (9)

then in the product space there must be an operator {X_{1}^{\left(1\right)\otimes\left(2\right)}} that operates only on particle 1, with the same effect, that is

\displaystyle X_{1}^{\left(1\right)\otimes\left(2\right)}\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle =x_{1}\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle \ \ \ \ \ (10)

The notation can be explained as follows. The subscript 1 in {X_{1}^{\left(1\right)\otimes\left(2\right)}} means that the operator operates on particle 1, while the superscript {\left(1\right)\otimes\left(2\right)} means that the operator operates in the product space {\mathbb{V}_{1\otimes2}}. In effect, the operator {X_{1}^{\left(1\right)\otimes\left(2\right)}} is the direct product of two one-particle operators: {X_{1}^{\left(1\right)}}, which operates on space {\mathbb{V}_{1}}, and an identity operator {I_{2}^{\left(2\right)}}, which operates on space {\mathbb{V}_{2}}. That is, we can write

\displaystyle X_{1}^{\left(1\right)\otimes\left(2\right)} \displaystyle = \displaystyle X_{1}^{\left(1\right)}\otimes I_{2}^{\left(2\right)}\ \ \ \ \ (11)
\displaystyle X_{1}^{\left(1\right)\otimes\left(2\right)}\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle \displaystyle = \displaystyle \left|X_{1}^{\left(1\right)}x_{1}\right\rangle \otimes\left|I_{2}^{\left(2\right)}x_{2}\right\rangle \ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle x_{1}\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle \ \ \ \ \ (13)

Generally, if we have two one-particle operators {\Gamma_{1}^{\left(1\right)}} and {\Lambda_{2}^{\left(2\right)}}, each of which operates on a different one-particle state, then we can form a direct product operator with the property

\displaystyle \left(\Gamma_{1}^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\right)\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle =\left|\Gamma_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (14)

That is, when a single-particle operator for space {i} forms part of a direct product operator, it acts only on the factor of the direct product vector that belongs to that space. Given this property, it’s fairly easy to derive a few properties of direct product operators. First, the commutator of two operators that act on different spaces:

\displaystyle \left[\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)},I^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\right]\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \displaystyle = \displaystyle \Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}I^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle -\ \ \ \ \ (15)
\displaystyle \displaystyle \displaystyle I^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (16)
\displaystyle \displaystyle = \displaystyle \Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left|I^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle -\ \ \ \ \ (17)
\displaystyle \displaystyle \displaystyle I^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (18)
\displaystyle \displaystyle = \displaystyle \left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle -\ \ \ \ \ (19)
\displaystyle \displaystyle \displaystyle \left|I^{\left(1\right)}\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (20)
\displaystyle \displaystyle = \displaystyle \left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle -\left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (21)
\displaystyle \displaystyle = \displaystyle 0 \ \ \ \ \ (22)

This derivation shows that the identity operators effectively cancel out and we’re left with the earlier commutator 5 between two operators that operate on different spaces.

The next derivation involves the successive operation of two direct product operators.

\displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes\Gamma_{2}^{\left(2\right)}\right)\left(\theta_{1}^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\right)\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \displaystyle = \displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes\Gamma_{2}^{\left(2\right)}\right)\left|\theta_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (23)
\displaystyle \displaystyle = \displaystyle \left|\Omega_{1}^{\left(1\right)}\theta_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Gamma_{2}^{\left(2\right)}\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (24)
\displaystyle \displaystyle = \displaystyle \left(\Omega_{1}^{\left(1\right)}\theta_{1}^{\left(1\right)}\right)\otimes\left(\Gamma_{2}^{\left(2\right)}\Lambda_{2}^{\left(2\right)}\right)\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (25)
\displaystyle \displaystyle = \displaystyle \left\{ \left(\Omega\theta\right)^{\left(1\right)}\otimes\left(\Gamma\Lambda\right)^{\left(2\right)}\right\} \left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (26)
\displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes\Gamma_{2}^{\left(2\right)}\right)\left(\theta_{1}^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\right) \displaystyle = \displaystyle \left(\Omega\theta\right)^{\left(1\right)}\otimes\left(\Gamma\Lambda\right)^{\left(2\right)} \ \ \ \ \ (27)
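
Both this product rule and the vanishing commutator 22 are easy to spot-check numerically, with Kronecker products of ordinary matrices standing in for the direct products. A sketch using randomly generated {2\times2} matrices (the names are mine, chosen to match the symbols above):

```python
import numpy as np

rng = np.random.default_rng(0)
Omega1, theta1 = rng.random((2, 2)), rng.random((2, 2))   # operators on V1
Gamma2, Lambda2 = rng.random((2, 2)), rng.random((2, 2))  # operators on V2
I = np.eye(2)

# eq 22: operators embedded from different spaces commute
A, B = np.kron(Omega1, I), np.kron(I, Lambda2)
print(np.allclose(A @ B, B @ A))          # True

# eq 27: the product rule for direct product operators
lhs = np.kron(Omega1, Gamma2) @ np.kron(theta1, Lambda2)
rhs = np.kron(Omega1 @ theta1, Gamma2 @ Lambda2)
print(np.allclose(lhs, rhs))              # True
```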

Next, another commutator identity. Given

\displaystyle \left[\Omega_{1}^{\left(1\right)},\Lambda_{1}^{\left(1\right)}\right]=\Gamma_{1}^{\left(1\right)} \ \ \ \ \ (28)

we have

\displaystyle \left[\Omega_{1}^{\left(1\right)\otimes\left(2\right)},\Lambda_{1}^{\left(1\right)\otimes\left(2\right)}\right]\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \displaystyle = \displaystyle \left[\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)},\Lambda_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\right]\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (29)
\displaystyle \displaystyle = \displaystyle \left|\left[\Omega_{1}^{\left(1\right)},\Lambda_{1}^{\left(1\right)}\right]\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (30)
\displaystyle \displaystyle = \displaystyle \left|\Gamma_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (31)
\displaystyle \displaystyle = \displaystyle \Gamma_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (32)
\displaystyle \left[\Omega_{1}^{\left(1\right)\otimes\left(2\right)},\Lambda_{1}^{\left(1\right)\otimes\left(2\right)}\right] \displaystyle = \displaystyle \Gamma_{1}^{\left(1\right)}\otimes I^{\left(2\right)} \ \ \ \ \ (33)

Finally, the square of the sum of two operators:

\displaystyle \left(\Omega_{1}^{\left(1\right)\otimes\left(2\right)}+\Omega_{2}^{\left(1\right)\otimes\left(2\right)}\right)^{2}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \displaystyle = \displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}+I^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}\right)^{2}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (34)
\displaystyle \displaystyle = \displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\right)^{2}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle +\ \ \ \ \ (35)
\displaystyle \displaystyle \displaystyle \Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}I^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle +\ \ \ \ \ (36)
\displaystyle \displaystyle \displaystyle I^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle +\ \ \ \ \ (37)
\displaystyle \displaystyle \displaystyle \left(I^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}\right)^{2}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (38)
\displaystyle \displaystyle = \displaystyle \left|\left(\Omega_{1}^{2}\right)^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\omega_{2}\right\rangle +\ \ \ \ \ (39)
\displaystyle \displaystyle \displaystyle \left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Omega_{2}^{\left(2\right)}\omega_{2}\right\rangle +\ \ \ \ \ (40)
\displaystyle \displaystyle \displaystyle \left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Omega_{2}^{\left(2\right)}\omega_{2}\right\rangle +\ \ \ \ \ (41)
\displaystyle \displaystyle \displaystyle \left|I^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\left(\Omega_{2}^{2}\right)^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (42)
\displaystyle \displaystyle = \displaystyle \left(\left(\Omega_{1}^{2}\right)^{\left(1\right)}\otimes I^{\left(2\right)}+2\Omega_{1}^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}+I^{\left(1\right)}\otimes\left(\Omega_{2}^{2}\right)^{\left(2\right)}\right)\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (43)
\displaystyle \left(\Omega_{1}^{\left(1\right)\otimes\left(2\right)}+\Omega_{2}^{\left(1\right)\otimes\left(2\right)}\right)^{2} \displaystyle = \displaystyle \left(\Omega_{1}^{2}\right)^{\left(1\right)}\otimes I^{\left(2\right)}+2\Omega_{1}^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}+I^{\left(1\right)}\otimes\left(\Omega_{2}^{2}\right)^{\left(2\right)} \ \ \ \ \ (44)

In this derivation, we used the fact that the identity operator leaves its operand unchanged, and thus that {\left(I^{2}\right)^{\left(i\right)}=I^{\left(i\right)}} for either space {i}.
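
The last two identities, 33 and 44, can be spot-checked the same way, again with random {2\times2} matrices standing in for the one-particle operators (a verification sketch, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(1)
Omega1, Lambda1 = rng.random((2, 2)), rng.random((2, 2))  # both act on V1
Omega2 = rng.random((2, 2))                               # acts on V2
I = np.eye(2)

comm = lambda A, B: A @ B - B @ A

# eq 33: the product-space commutator is [Omega1, Lambda1] (x) I
print(np.allclose(comm(np.kron(Omega1, I), np.kron(Lambda1, I)),
                  np.kron(comm(Omega1, Lambda1), I)))      # True

# eq 44: square of the sum of the two embedded operators
S = np.kron(Omega1, I) + np.kron(I, Omega2)
print(np.allclose(S @ S,
                  np.kron(Omega1 @ Omega1, I)
                  + 2 * np.kron(Omega1, Omega2)
                  + np.kron(I, Omega2 @ Omega2)))          # True
```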

Uncertainty principle and an estimate of the ground state energy of hydrogen

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 9, Exercise 9.4.3.

The uncertainty principle can be used to get an estimate of the ground state energy in some systems. In his section 9.4, Shankar shows how this is done for the hydrogen atom, treating the system as a proper 3-d system.

A somewhat simpler analysis can be done by treating the hydrogen atom as a one-dimensional system. The Hamiltonian is

\displaystyle  H=\frac{P^{2}}{2m}-\frac{e^{2}}{\left(R^{2}\right)^{1/2}} \ \ \ \ \ (1)

where {m} and {e} are the mass and charge of the electron. The operators {P} and {R} stand for the 3-d momentum and position:

\displaystyle   P^{2} \displaystyle  = \displaystyle  P_{x}^{2}+P_{y}^{2}+P_{z}^{2}\ \ \ \ \ (2)
\displaystyle  R^{2} \displaystyle  = \displaystyle  X^{2}+Y^{2}+Z^{2} \ \ \ \ \ (3)

If we ignore the expansions of {P^{2}} and {R^{2}} and treat the Hamiltonian as a function of the operators {P} and {R} on their own, we can use the uncertainty principle to get a bound on the ground state energy. By analogy with one-dimensional position and momentum, we assume that the uncertainties are related by

\displaystyle  \Delta P\cdot\Delta R\ge\frac{\hbar}{2} \ \ \ \ \ (4)

By using coordinates such that the hydrogen atom is centred at the origin, and from the spherical symmetry of the ground state, we have

\displaystyle   \left(\Delta P\right)^{2} \displaystyle  = \displaystyle  \left\langle P^{2}\right\rangle -\left\langle P\right\rangle ^{2}=\left\langle P^{2}\right\rangle \ \ \ \ \ (5)
\displaystyle  \left(\Delta R\right)^{2} \displaystyle  = \displaystyle  \left\langle R^{2}\right\rangle -\left\langle R\right\rangle ^{2}=\left\langle R^{2}\right\rangle \ \ \ \ \ (6)

We can then write 1 as

\displaystyle   \left\langle H\right\rangle \displaystyle  = \displaystyle  \frac{\left\langle P^{2}\right\rangle }{2m}-e^{2}\left\langle \frac{1}{\left(R^{2}\right)^{1/2}}\right\rangle \ \ \ \ \ (7)
\displaystyle  \displaystyle  \simeq \displaystyle  \frac{\left\langle P^{2}\right\rangle }{2m}-\frac{e^{2}}{\sqrt{\left\langle R^{2}\right\rangle }} \ \ \ \ \ (8)

where in the last line we used an argument similar to that considered earlier, in which we showed that, for a one-dimensional system,

\displaystyle  \left\langle \frac{1}{X^{2}}\right\rangle \simeq\frac{1}{\left\langle X^{2}\right\rangle } \ \ \ \ \ (9)

where the {\simeq} sign means ‘same order of magnitude’. We can now write the mean of the Hamiltonian in terms of the uncertainties:

\displaystyle   \left\langle H\right\rangle \displaystyle  \simeq \displaystyle  \frac{\left(\Delta P\right)^{2}}{2m}-\frac{e^{2}}{\Delta R}\ \ \ \ \ (10)
\displaystyle  \displaystyle  \gtrsim \displaystyle  \frac{\hbar^{2}}{8m\left(\Delta R\right)^{2}}-\frac{e^{2}}{\Delta R} \ \ \ \ \ (11)

We can now minimize {\left\langle H\right\rangle }:

\displaystyle   \frac{\partial\left\langle H\right\rangle }{\partial\left(\Delta R\right)} \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{4m\left(\Delta R\right)^{3}}+\frac{e^{2}}{\left(\Delta R\right)^{2}}=0\ \ \ \ \ (12)
\displaystyle  \Delta R \displaystyle  = \displaystyle  \frac{\hbar^{2}}{4me^{2}} \ \ \ \ \ (13)

This gives an estimate for the ground state energy of

\displaystyle  \left\langle H\right\rangle _{g.s.}\simeq-\frac{2me^{4}}{\hbar^{2}} \ \ \ \ \ (14)

The actual value is

\displaystyle  E_{0}=-\frac{me^{4}}{2\hbar^{2}} \ \ \ \ \ (15)

so our estimate is too large (in magnitude) by a factor of 4. For comparison, the estimate worked out by Shankar for the 3-d case is

\displaystyle  \left\langle H\right\rangle \gtrsim-\frac{2me^{4}}{9\hbar^{2}} \ \ \ \ \ (16)

This estimate is too small by around a factor of 2.
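
The minimization, the resulting estimate, and the factor-of-4 comparison are easy to check with sympy, treating {\Delta R} as an ordinary positive variable (a sketch; the symbol names are mine):

```python
import sympy as sp

hbar, m, e, dR = sp.symbols('hbar m e DeltaR', positive=True)

H = hbar**2 / (8 * m * dR**2) - e**2 / dR        # eq 11
dR_min = sp.solve(sp.diff(H, dR), dR)[0]

print(dR_min)                                    # hbar**2/(4*e**2*m), eq 13
H_min = sp.simplify(H.subs(dR, dR_min))
print(H_min)                                     # -2*e**4*m/hbar**2, eq 14
print(sp.simplify(H_min / (-m * e**4 / (2 * hbar**2))))   # 4: too large in magnitude by a factor of 4
```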

Uncertainties in the harmonic oscillator and hydrogen atom

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 9, Exercises 9.4.1 – 9.4.2.

Here we’ll look at a couple of calculations relevant to the application of the uncertainty principle to the hydrogen atom. When calculating uncertainties, we need to find the average values of various quantities. First, we’ll look at an average in the case of the harmonic oscillator.

The harmonic oscillator eigenstates are

\displaystyle \psi_{n}(x)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^{n}n!}}H_{n}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (1)

where {H_{n}} is the {n}th Hermite polynomial. For {n=1,} we have

\displaystyle H_{1}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)=2\sqrt{\frac{m\omega}{\hbar}}x \ \ \ \ \ (2)

so

\displaystyle \psi_{1}(x)=\frac{\sqrt{2}}{\pi^{1/4}}\left(\frac{m\omega}{\hbar}\right)^{3/4}x\;e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (3)

For this state, we can calculate the average

\displaystyle \left\langle \frac{1}{X^{2}}\right\rangle \displaystyle = \displaystyle \int_{-\infty}^{\infty}\psi_{1}^{2}(x)\frac{1}{x^{2}}dx\ \ \ \ \ (4)
\displaystyle \displaystyle = \displaystyle \frac{2}{\sqrt{\pi}}\left(\frac{m\omega}{\hbar}\right)^{3/2}\int_{-\infty}^{\infty}e^{-m\omega x^{2}/\hbar}dx\ \ \ \ \ (5)
\displaystyle \displaystyle = \displaystyle \frac{2}{\sqrt{\pi}}\left(\frac{m\omega}{\hbar}\right)^{3/2}\sqrt{\frac{\pi\hbar}{m\omega}}\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \frac{2m\omega}{\hbar} \ \ \ \ \ (7)

where we evaluated the Gaussian integral in the second line.

We can compare this to {1/\left\langle X^{2}\right\rangle } as follows:

\displaystyle \left\langle X^{2}\right\rangle \displaystyle = \displaystyle \int_{-\infty}^{\infty}\psi_{1}^{2}(x)x^{2}dx\ \ \ \ \ (8)
\displaystyle \displaystyle = \displaystyle \frac{2}{\sqrt{\pi}}\left(\frac{m\omega}{\hbar}\right)^{3/2}\int_{-\infty}^{\infty}e^{-m\omega x^{2}/\hbar}x^{4}dx\ \ \ \ \ (9)
\displaystyle \displaystyle = \displaystyle \frac{2}{\sqrt{\pi}}\left(\frac{m\omega}{\hbar}\right)^{3/2}\frac{3\sqrt{\pi}}{4}\left(\frac{\hbar}{m\omega}\right)^{5/2}\ \ \ \ \ (10)
\displaystyle \displaystyle = \displaystyle \frac{3}{2}\frac{\hbar}{m\omega}\ \ \ \ \ (11)
\displaystyle \frac{1}{\left\langle X^{2}\right\rangle } \displaystyle = \displaystyle \frac{2}{3}\frac{m\omega}{\hbar} \ \ \ \ \ (12)

Thus {\left\langle \frac{1}{X^{2}}\right\rangle } and {\frac{1}{\left\langle X^{2}\right\rangle }} have the same order of magnitude, although they are not equal.
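
Both averages can be confirmed with sympy (a sketch; the symbols are declared positive so that the Gaussian integrals converge, and the names are my own):

```python
import sympy as sp

x = sp.symbols('x', real=True)
m, w, hbar = sp.symbols('m omega hbar', positive=True)

# the n = 1 eigenstate, eq 3
psi1 = sp.sqrt(2) / sp.pi**sp.Rational(1, 4) * (m * w / hbar)**sp.Rational(3, 4) \
       * x * sp.exp(-m * w * x**2 / (2 * hbar))

print(sp.integrate(psi1**2 / x**2, (x, -sp.oo, sp.oo)))      # 2*m*omega/hbar, eq 7
print(1 / sp.integrate(psi1**2 * x**2, (x, -sp.oo, sp.oo)))  # 2*m*omega/(3*hbar), eq 12
```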

In three dimensions, we consider the ground state of hydrogen

\displaystyle \psi_{100}\left(r\right)=\frac{1}{\sqrt{\pi}a_{0}^{3/2}}e^{-r/a_{0}} \ \ \ \ \ (13)

where {a_{0}} is the Bohr radius

\displaystyle a_{0}\equiv\frac{\hbar^{2}}{me^{2}} \ \ \ \ \ (14)

with {m} and {e} being the mass and charge of the electron. The wave function is normalized as we can see by doing the integral (in 3 dimensions):

\displaystyle \int\psi_{100}^{2}(r)d^{3}\mathbf{r} \displaystyle = \displaystyle \frac{4\pi}{\pi a_{0}^{3}}\int_{0}^{\infty}e^{-2r/a_{0}}r^{2}dr \ \ \ \ \ (15)

We can use the formula (given in Shankar’s Appendix 2)

\displaystyle \int_{0}^{\infty}e^{-r/\alpha}r^{n}dr=n!\alpha^{n+1} \ \ \ \ \ (16)

We get

\displaystyle \int\psi_{100}^{2}(r)d^{3}\mathbf{r}=\frac{4\pi}{\pi a_{0}^{3}}\frac{2!}{2^{3}}a_{0}^{3}=1 \ \ \ \ \ (17)

as required.

For a spherically symmetric wave function centred at {r=0},

\displaystyle \left(\Delta X\right)^{2}=\left\langle X^{2}\right\rangle -\left\langle X\right\rangle ^{2}=\left\langle X^{2}\right\rangle \ \ \ \ \ (18)

with identical relations for {Y} and {Z}. Since

\displaystyle r^{2} \displaystyle = \displaystyle x^{2}+y^{2}+z^{2}\ \ \ \ \ (19)
\displaystyle \left\langle r^{2}\right\rangle \displaystyle = \displaystyle \left\langle x^{2}\right\rangle +\left\langle y^{2}\right\rangle +\left\langle z^{2}\right\rangle =3\left\langle X^{2}\right\rangle \ \ \ \ \ (20)
\displaystyle \left\langle X^{2}\right\rangle \displaystyle = \displaystyle \frac{1}{3}\left\langle r^{2}\right\rangle \ \ \ \ \ (21)

Thus

\displaystyle \left\langle X^{2}\right\rangle \displaystyle = \displaystyle \frac{1}{3}\int\psi_{100}^{2}(r)r^{2}d^{3}\mathbf{r}\ \ \ \ \ (22)
\displaystyle \displaystyle = \displaystyle \frac{4\pi}{3\pi a_{0}^{3}}\int_{0}^{\infty}e^{-2r/a_{0}}r^{4}dr\ \ \ \ \ (23)
\displaystyle \displaystyle = \displaystyle \frac{4}{3a_{0}^{3}}\frac{4!}{2^{5}}a_{0}^{5}\ \ \ \ \ (24)
\displaystyle \displaystyle = \displaystyle a_{0}^{2}\ \ \ \ \ (25)
\displaystyle \Delta X \displaystyle = \displaystyle a_{0}=\frac{\hbar^{2}}{me^{2}} \ \ \ \ \ (26)

We can also find

\displaystyle \left\langle \frac{1}{r}\right\rangle \displaystyle = \displaystyle \int\psi_{100}^{2}(r)\frac{1}{r}d^{3}\mathbf{r}\ \ \ \ \ (27)
\displaystyle \displaystyle = \displaystyle \frac{4\pi}{\pi a_{0}^{3}}\int_{0}^{\infty}e^{-2r/a_{0}}r\;dr\ \ \ \ \ (28)
\displaystyle \displaystyle = \displaystyle \frac{4}{a_{0}^{3}}\frac{a_{0}^{2}}{4}\ \ \ \ \ (29)
\displaystyle \displaystyle = \displaystyle \frac{1}{a_{0}}\ \ \ \ \ (30)
\displaystyle \left\langle r\right\rangle \displaystyle = \displaystyle \int\psi_{100}^{2}(r)r\;d^{3}\mathbf{r}\ \ \ \ \ (31)
\displaystyle \displaystyle = \displaystyle \frac{4\pi}{\pi a_{0}^{3}}\int_{0}^{\infty}e^{-2r/a_{0}}r^{3}dr\ \ \ \ \ (32)
\displaystyle \displaystyle = \displaystyle \frac{4}{a_{0}^{3}}\frac{6a_{0}^{4}}{16}\ \ \ \ \ (33)
\displaystyle \displaystyle = \displaystyle \frac{3}{2}a_{0} \ \ \ \ \ (34)

Thus both {\left\langle \frac{1}{r}\right\rangle } and {\frac{1}{\left\langle r\right\rangle }} are of the same order of magnitude as {1/a_{0}=me^{2}/\hbar^{2}}.
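
All of these hydrogen averages can be confirmed with a short sympy calculation, doing the spherically symmetric integrals as radial integrals with a factor of {4\pi r^{2}} (a sketch, not part of Shankar’s text):

```python
import sympy as sp

r, a0 = sp.symbols('r a_0', positive=True)

psi_sq = sp.exp(-2 * r / a0) / (sp.pi * a0**3)    # |psi_100|^2

def avg(f):
    # spherically symmetric average: the angular integration gives 4*pi
    return sp.integrate(4 * sp.pi * r**2 * psi_sq * f, (r, 0, sp.oo))

print(avg(1))           # 1: normalization, eq 17
print(avg(r**2) / 3)    # a_0**2: <X^2>, eq 25
print(avg(1 / r))       # 1/a_0, eq 30
print(avg(r))           # 3*a_0/2, eq 34
```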

WordPress help requested

As regular visitors will know, this blog occasionally goes off line due to problems connecting to the WordPress database that stores the posts. I have contacted my hosting provider and they say that this is due to an excessive number of database connections that are opened but then not closed again, but are unable to provide any help beyond that.

As I do not want to mess with any of the WordPress code (for two reasons: 1 – changing the code could introduce security problems and 2 – I don’t know enough about either WordPress or PHP to mess with their code) I was wondering if any readers have experience with running WordPress blogs and know of any ways to prevent excessive database access. I have just installed the “W3 Total Cache” plugin which may help, but if anyone has any other suggestions I’d be very grateful.

Path integral to Schrödinger equation for a vector potential

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 8. Section 8.6, Exercise 8.6.4.

When we showed that the path integral approach is equivalent to the Schrödinger equation, we did so for a scalar potential {V}, so that the Lagrangian is the usual {L=T-V}, and we can use that to calculate the action over an infinitesimal time interval {\varepsilon}, during which time the particle moves from {x^{\prime}} to {x}. In the calculation, we chose the value of {V} at the midpoint of this interval, that is {V\left(\frac{x+x^{\prime}}{2}\right)}. In fact, in this derivation it didn’t matter where in the interval {\left[x^{\prime},x\right]} we chose to evaluate {V}, since we took only terms up to first order in {\varepsilon}, and moving the point at which we evaluate {V} introduced terms only of order {\varepsilon^{2}} or higher.

Things get a bit more complicated if we consider a system such as the electromagnetic force, where the Lagrangian is no longer just {T-V}, but becomes

\displaystyle  L=\frac{1}{2}m\mathbf{v}\cdot\mathbf{v}-q\phi+\frac{q}{c}\mathbf{v}\cdot\mathbf{A} \ \ \ \ \ (1)

To examine the effect this has on the demonstration that the path integral approach is equivalent to the Schrödinger equation, we’ll consider only one dimension, and leave out the electrostatic potential {\phi} since it’s just a scalar potential and we already know that such potentials do indeed convert to the Schrödinger equation. Thus the Lagrangian we’ll consider is

\displaystyle  L=\frac{1}{2}mv^{2}+\frac{q}{c}vA \ \ \ \ \ (2)

Over the infinitesimal time interval {\varepsilon} we have

\displaystyle  v=\frac{x-x^{\prime}}{\varepsilon} \ \ \ \ \ (3)

The propagator over this time interval is

\displaystyle   U\left(x,\varepsilon;x^{\prime},0\right) \displaystyle  = \displaystyle  \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\exp\left[\frac{i}{\hbar}\left(\frac{1}{2}m\frac{\left(x-x^{\prime}\right)^{2}}{\varepsilon}+\varepsilon\frac{q}{c}\frac{x-x^{\prime}}{\varepsilon}A\left(x+\alpha\left(x^{\prime}-x\right)\right)\right)\right]\ \ \ \ \ (4)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\exp\left[\frac{i}{\hbar}\left(\frac{1}{2}m\frac{\eta^{2}}{\varepsilon}-\frac{q}{c}\eta A\left(x+\alpha\eta\right)\right)\right]\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)\exp\left[-\frac{iq}{\hbar c}\eta A\left(x+\alpha\eta\right)\right] \ \ \ \ \ (6)

where {\alpha} is a parameter that we can vary between 0 and 1 in order to vary the point along the path from {x^{\prime}} to {x} at which we evaluate the vector potential {A}. Also,

\displaystyle  \eta\equiv x^{\prime}-x \ \ \ \ \ (7)

Using the same argument as before, we require

\displaystyle  \left|\eta\right|\lesssim\sqrt{\frac{2\hbar\varepsilon\pi}{m}} \ \ \ \ \ (8)

so calculations to first order in {\varepsilon} must include terms up to second order in {\eta}.

Once we have {U\left(x,\varepsilon;x^{\prime},0\right)}, we can find {\psi\left(x,\varepsilon\right)} from

\displaystyle  \psi\left(x,\varepsilon\right)=\int_{-\infty}^{\infty}U\left(x,\varepsilon;x^{\prime},0\right)\psi\left(x^{\prime},0\right)dx^{\prime} \ \ \ \ \ (9)

To find {U} to first order in {\varepsilon}, we need to expand the second exponential in 6 out to terms in {\eta^{2}}, so we first look at the argument of the exponential:

\displaystyle  -\frac{iq}{\hbar c}\eta A\left(x+\alpha\eta\right)=-\frac{iq}{\hbar c}\left(\eta A\left(x\right)+\alpha\eta^{2}\frac{\partial A}{\partial x}+\ldots\right) \ \ \ \ \ (10)

where the derivative is evaluated at the endpoint {x} and is constant in the integral. The second exponential in 6 now becomes, to second order in {\eta}:

\displaystyle  \exp\left[-\frac{iq}{\hbar c}\eta A\left(x+\alpha\eta\right)\right]=1-\frac{iq}{\hbar c}\left(\eta A\left(x\right)+\alpha\eta^{2}\frac{\partial A}{\partial x}\right)-\left(\frac{q}{\hbar c}\right)^{2}\frac{\eta^{2}A^{2}\left(x\right)}{2} \ \ \ \ \ (11)

We also need the expansion of the wave function in 9 up to second order in {\eta}:

\displaystyle  \psi\left(x+\eta,0\right)=\psi\left(x,0\right)+\eta\frac{\partial\psi}{\partial x}+\frac{\eta^{2}}{2}\frac{\partial^{2}\psi}{\partial x^{2}} \ \ \ \ \ (12)

Again, both derivatives are evaluated at the endpoint {x} and are constants in the integral.

We now need to do the integral 9, which consists of several standard Gaussian integrals. From 7, {dx^{\prime}=d\eta}, so

\displaystyle   \int_{-\infty}^{\infty}U\left(x,\varepsilon;x^{\prime},0\right)\psi\left(x^{\prime},0\right)dx^{\prime} \displaystyle  = \displaystyle  \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\psi\left(x,0\right)\int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)d\eta+\ \ \ \ \ (13)
\displaystyle  \displaystyle  \displaystyle  \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\left(\frac{\partial\psi}{\partial x}-\frac{iq}{\hbar c}A\left(x\right)\psi\left(x,0\right)\right)\int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)\eta\;d\eta+\ \ \ \ \ (14)
\displaystyle  \displaystyle  \displaystyle  \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\left(\frac{1}{2}\frac{\partial^{2}\psi}{\partial x^{2}}-\frac{iq}{\hbar c}A\left(x\right)\frac{\partial\psi}{\partial x}+\psi\left(x,0\right)\left(-\frac{iq\alpha}{\hbar c}\frac{\partial A}{\partial x}-\frac{1}{2}\left(\frac{qA\left(x\right)}{\hbar c}\right)^{2}\right)\right)\times\ \ \ \ \ (15)
\displaystyle  \displaystyle  \displaystyle  \int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)\eta^{2}d\eta \ \ \ \ \ (16)

We can now do the integrals:

\displaystyle   \int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)d\eta \displaystyle  = \displaystyle  \sqrt{\frac{2\pi\hbar\varepsilon i}{m}}\ \ \ \ \ (17)
\displaystyle  \int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)\eta\;d\eta \displaystyle  = \displaystyle  0\ \ \ \ \ (18)
\displaystyle  \int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)\eta^{2}d\eta \displaystyle  = \displaystyle  -\frac{\hbar\varepsilon}{im}\sqrt{\frac{2\pi\hbar\varepsilon i}{m}} \ \ \ \ \ (19)
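
These oscillatory Gaussian integrals can be spot-checked numerically by inserting a convergence factor {e^{-\delta\eta^{2}}} and comparing with the standard complex Gaussian formulas; letting {\delta\rightarrow0} then reproduces 17 and 19. A sketch with arbitrary numerical values (none of these numbers come from the text):

```python
import numpy as np

# arbitrary numerical values, just for the check
m, hbar, eps = 1.0, 1.0, 0.02
delta = 10.0                               # convergence factor exp(-delta*eta^2)
z = delta - 1j * m / (2 * hbar * eps)      # integrand becomes exp(-z*eta^2), Re z > 0

eta = np.linspace(-6, 6, 1_200_001)
d = eta[1] - eta[0]
w = np.exp(-z * eta**2)

I0 = np.sum(w) * d                         # integral of exp(-z*eta^2)
I2 = np.sum(eta**2 * w) * d                # integral of eta^2 * exp(-z*eta^2)

print(np.allclose(I0, np.sqrt(np.pi / z)))              # True
print(np.allclose(I2, np.sqrt(np.pi) / (2 * z**1.5)))   # True
# Letting delta -> 0, so that z -> -i*m/(2*hbar*eps), these formulas become
# sqrt(2*pi*hbar*eps*1j/m) and -(hbar*eps/(1j*m))*sqrt(2*pi*hbar*eps*1j/m),
# i.e. eqs 17 and 19; eq 18 vanishes because its integrand is odd.
```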

Plugging these in we get

\displaystyle   \psi\left(x,\varepsilon\right) \displaystyle  = \displaystyle  \psi\left(x,0\right)-\frac{\hbar\varepsilon}{im}\left[\frac{1}{2}\frac{\partial^{2}\psi}{\partial x^{2}}-\frac{iq}{\hbar c}A\left(x\right)\frac{\partial\psi}{\partial x}+\psi\left(x,0\right)\left(-\frac{iq\alpha}{\hbar c}\frac{\partial A}{\partial x}-\frac{1}{2}\left(\frac{qA\left(x\right)}{\hbar c}\right)^{2}\right)\right]\ \ \ \ \ (20)
\displaystyle  \displaystyle  = \displaystyle  \psi\left(x,0\right)+\frac{\varepsilon}{i\hbar}\left[-\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{i\hbar q}{mc}A\left(x\right)\frac{\partial\psi}{\partial x}+\psi\left(x,0\right)\left(\frac{i\hbar q\alpha}{mc}\frac{\partial A}{\partial x}+\frac{1}{2m}\left(\frac{qA\left(x\right)}{c}\right)^{2}\right)\right] \ \ \ \ \ (21)

We can compare this with the quantum version of the Hamiltonian for the vector potential part of the electromagnetic force. The classical Hamiltonian is

\displaystyle  H=\frac{\left|\mathbf{p}-q\mathbf{A}/c\right|^{2}}{2m} \ \ \ \ \ (22)

Because {\mathbf{A}} depends on {x}, it doesn’t commute with {\mathbf{p}}, so to get the quantum version we need to symmetrize when we expand the square. The one dimensional version is

\displaystyle  H=\frac{P^{2}}{2m}-\frac{q}{2mc}PA-\frac{q}{2mc}AP+\frac{q^{2}A^{2}}{2mc^{2}} \ \ \ \ \ (23)

In the coordinate basis, we have, using {P=-i\hbar\partial/\partial x}

\displaystyle   H\psi \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{i\hbar q}{2mc}\left(\frac{\partial\left(A\psi\right)}{\partial x}+A\frac{\partial\psi}{\partial x}\right)+\frac{q^{2}A^{2}}{2mc^{2}}\psi\ \ \ \ \ (24)
\displaystyle  \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{i\hbar q}{2mc}\left(2A\frac{\partial\psi}{\partial x}+\psi\frac{\partial A}{\partial x}\right)+\frac{q^{2}A^{2}}{2mc^{2}}\psi\ \ \ \ \ (25)
\displaystyle  \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{i\hbar q}{mc}\left(A\frac{\partial\psi}{\partial x}+\frac{1}{2}\psi\frac{\partial A}{\partial x}\right)+\frac{q^{2}A^{2}}{2mc^{2}}\psi \ \ \ \ \ (26)
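
It’s straightforward to have sympy confirm that the symmetrized Hamiltonian 23, applied in the coordinate basis, really does give 26 (a sketch in which {A} and {\psi} are generic functions of {x}; the code is my own check, not Shankar’s):

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, q, m, c = sp.symbols('hbar q m c', positive=True)
psi = sp.Function('psi')(x)
A = sp.Function('A')(x)

P = lambda f: -sp.I * hbar * sp.diff(f, x)    # momentum in the coordinate basis

# the symmetrized Hamiltonian 23 applied to psi
Hpsi = (P(P(psi)) / (2 * m)
        - q / (2 * m * c) * P(A * psi)        # P A psi
        - q / (2 * m * c) * A * P(psi)        # A P psi
        + q**2 * A**2 / (2 * m * c**2) * psi)

# the form 26
target = (-hbar**2 / (2 * m) * sp.diff(psi, x, 2)
          + sp.I * hbar * q / (m * c) * (A * sp.diff(psi, x) + sp.Rational(1, 2) * psi * sp.diff(A, x))
          + q**2 * A**2 / (2 * m * c**2) * psi)

print(sp.simplify(sp.expand(Hpsi - target)))   # 0
```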

Returning to the result we got from the path integral, upon rearranging 21 we get

\displaystyle   i\hbar\frac{\psi\left(x,\varepsilon\right)-\psi\left(x,0\right)}{\varepsilon} \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{i\hbar q}{mc}\left(A\left(x\right)\frac{\partial\psi}{\partial x}+\alpha\psi\left(x,0\right)\frac{\partial A}{\partial x}\right)+\frac{\psi\left(x,0\right)}{2m}\left(\frac{qA\left(x\right)}{c}\right)^{2}\ \ \ \ \ (27)
\displaystyle  i\hbar\frac{\partial\psi}{\partial t} \displaystyle  = \displaystyle  -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{i\hbar q}{mc}\left(A\left(x\right)\frac{\partial\psi}{\partial x}+\alpha\psi\frac{\partial A}{\partial x}\right)+\frac{\psi}{2m}\left(\frac{qA\left(x\right)}{c}\right)^{2} \ \ \ \ \ (28)

where in the last line we took the limit as {\varepsilon\rightarrow0} on the LHS to get Schrödinger’s equation in the form

\displaystyle  i\hbar\frac{\partial\psi}{\partial t}=H\psi \ \ \ \ \ (29)

Comparing the RHS of 28 with 26, we see that they are equal provided we take {\alpha=\frac{1}{2}}. Thus in this case, we really do need to evaluate the vector potential {A} at the midpoint of the path.

Harmonic oscillator energies and eigenfunctions derived from the propagator

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 8. Section 8.6, Exercise 8.6.3.

Given the propagator for the harmonic oscillator, it is possible to work backwards and deduce the eigenvalues and eigenfunctions of the Hamiltonian, although this isn’t the easiest way to find them. We’ve seen that the propagator for the oscillator is

\displaystyle U\left(x,t;x^{\prime}\right)=A\left(t\right)\exp\left[\frac{im\omega}{2\hbar\sin\omega t}\left(\left(x^{\prime2}+x^{2}\right)\cos\omega t-2x^{\prime}x\right)\right] \ \ \ \ \ (1)

 

where {A\left(t\right)} is some function of time which is found by doing a path integral. Shankar cheats a bit by just telling us what {A} is:

\displaystyle A\left(t\right)=\sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega t}} \ \ \ \ \ (2)

To deduce (some of) the energy levels, we can compare the propagator with its more traditional form

\displaystyle U\left(t\right)=\sum_{n}e^{-iE_{n}t/\hbar}\left|E_{n}\right\rangle \left\langle E_{n}\right| \ \ \ \ \ (3)

where {E_{n}} is the {n}th energy level. In the position basis the propagator is {U\left(x,t;x^{\prime},0\right)=\sum_{n}\psi_{n}\left(x\right)\psi_{n}^{*}\left(x^{\prime}\right)e^{-iE_{n}t/\hbar}}; setting {x=x^{\prime}} (which is all we will need below) gives

\displaystyle U\left(x,t;x,0\right)=\sum_{n}\left|\psi_{n}\left(x\right)\right|^{2}e^{-iE_{n}t/\hbar} \ \ \ \ \ (4)

 

We can try finding the energy levels as follows. We take {x=x^{\prime}=0} (with {t^{\prime}=0}); that is, we look at the amplitude for a particle that starts at the origin to be found at the origin again at time {t}. In that case, 1 becomes

\displaystyle U\left(x,t;x^{\prime}\right)=A\left(t\right)=\sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega t}} \ \ \ \ \ (5)

If we can expand this quantity in powers of {e^{-i\omega t}}, we can compare it with the series 4 and read off the energies from the exponents in the series. To do this, we write

\displaystyle A\left(t\right) \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar\left(e^{i\omega t}-e^{-i\omega t}\right)}}\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}e^{-i\omega t/2}\frac{1}{\sqrt{1-e^{-2i\omega t}}} \ \ \ \ \ (7)

To save writing, we’ll define the symbol

\displaystyle \eta\equiv e^{-i\omega t} \ \ \ \ \ (8)

so that

\displaystyle A\left(t\right)=\sqrt{\frac{m\omega}{\pi\hbar}}\eta^{1/2}\frac{1}{\sqrt{1-\eta^{2}}} \ \ \ \ \ (9)

We can now expand the last factor using the binomial expansion to get

\displaystyle A\left(t\right)=\sqrt{\frac{m\omega}{\pi\hbar}}\eta^{1/2}\left[1+\frac{1}{2}\eta^{2}+\frac{3}{8}\eta^{4}+\ldots\right] \ \ \ \ \ (10)

In terms of the original variables, we get

\displaystyle A\left(t\right)=\sqrt{\frac{m\omega}{\pi\hbar}}\left[e^{-i\omega t/2}+\frac{1}{2}e^{-5i\omega t/2}+\frac{3}{8}e^{-9i\omega t/2}+\ldots\right] \ \ \ \ \ (11)

 

Comparing with 4, we find energy levels of

\displaystyle E=\frac{\hbar\omega}{2},\frac{5\hbar\omega}{2},\frac{9\hbar\omega}{2},\ldots \ \ \ \ \ (12)

These correspond to {E_{0},E_{2},E_{4},\ldots}. The odd energy levels {\left(\frac{3\hbar\omega}{2},\frac{7\hbar\omega}{2},\ldots\right)} are missing because the corresponding wave functions {\psi_{n}\left(x\right)} are odd functions of {x} and are therefore zero at {x=0}, so the corresponding terms in 4 vanish. The numerical coefficients in 11 give us {\left|\psi_{n}\left(0\right)\right|^{2}} for {n=0,2,4,\ldots}.
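
The binomial expansion used in 10 can be reproduced with one line of sympy (just a check on the coefficients):

```python
import sympy as sp

eta = sp.symbols('eta')
print(sp.series(1 / sp.sqrt(1 - eta**2), eta, 0, 6))
# 1 + eta**2/2 + 3*eta**4/8 + O(eta**6), the coefficients appearing in 10 and 11
```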

To get the other energies, as well as the eigenfunctions, from a comparison of 1 and 4 is possible, but quite messy, even for the lower energies. To do it, we take {t^{\prime}=0} as before, but now we take {x=x^{\prime}\ne0}. That is, we look at the amplitude for a particle that starts at some location {x^{\prime}\ne0} to be found back at the same position at time {t}. The propagator 1 now becomes

\displaystyle U\left(x,t;x^{\prime}\right) \displaystyle = \displaystyle \sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega t}}\exp\left[\frac{im\omega}{2\hbar\sin\omega t}\left(2x^{2}\left(\cos\omega t-1\right)\right)\right]\ \ \ \ \ (13)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar\left(e^{i\omega t}-e^{-i\omega t}\right)}}\exp\left[-\frac{m\omega}{\hbar\left(e^{i\omega t}-e^{-i\omega t}\right)}\left(x^{2}\left(\left(e^{i\omega t}+e^{-i\omega t}\right)-2\right)\right)\right]\ \ \ \ \ (14)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}\eta^{1/2}\frac{1}{\sqrt{1-\eta^{2}}}\exp\left[-\frac{m\omega x^{2}}{\hbar}\left(\frac{\frac{1}{\eta}+\eta-2}{\frac{1}{\eta}-\eta}\right)\right]\ \ \ \ \ (15)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}\eta^{1/2}\frac{1}{\sqrt{1-\eta^{2}}}\exp\left[-\frac{m\omega x^{2}}{\hbar}\left(\frac{1+\eta^{2}-2\eta}{1-\eta^{2}}\right)\right] \ \ \ \ \ (16)

We now need to expand this in a power series in {\eta}, which gets very messy so is best handled with software like Maple. Shankar asks only for the first two terms in the series (the terms corresponding to {\eta^{1/2}} and {\eta^{3/2}}) but even doing this by hand can get very tedious. The result from Maple is, for the first two terms:

\displaystyle \eta^{1/2} \displaystyle \rightarrow \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}e^{-m\omega x^{2}/\hbar}\eta^{1/2}=\sqrt{\frac{m\omega}{\pi\hbar}}e^{-m\omega x^{2}/\hbar}e^{-i\omega t/2}\ \ \ \ \ (17)
\displaystyle \eta^{3/2} \displaystyle \rightarrow \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}\frac{2m\omega}{\hbar}e^{-m\omega x^{2}/\hbar}x^{2}\eta^{3/2}=\sqrt{\frac{m\omega}{\pi\hbar}}\frac{2m\omega}{\hbar}e^{-m\omega x^{2}/\hbar}x^{2}e^{-3i\omega t/2} \ \ \ \ \ (18)
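
For readers without Maple, sympy gives the same two terms. In this sketch (my own, not Shankar’s) I pull the overall factor of {\eta^{1/2}} out front and expand the remaining analytic factor of 16 in powers of {\eta}:

```python
import sympy as sp

eta, x = sp.symbols('eta x')
m, omega, hbar = sp.symbols('m omega hbar', positive=True)

# eq 16 with the overall eta**(1/2) factor removed; note 1 + eta**2 - 2*eta = (1 - eta)**2
U = (sp.sqrt(m * omega / (sp.pi * hbar)) / sp.sqrt(1 - eta**2)
     * sp.exp(-m * omega * x**2 / hbar * (1 - eta)**2 / (1 - eta**2)))

ser = sp.expand(sp.series(U, eta, 0, 2).removeO())
print(ser.coeff(eta, 0))   # multiplies eta**(1/2): sqrt(m*omega/(pi*hbar))*exp(-m*omega*x**2/hbar), eq 17
print(ser.coeff(eta, 1))   # multiplies eta**(3/2): the same times 2*m*omega*x**2/hbar, eq 18
```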

Comparing this with 4, we can read off:

\displaystyle E_{0} \displaystyle = \displaystyle \frac{\hbar\omega}{2}\ \ \ \ \ (19)
\displaystyle \left|\psi_{0}\left(x\right)\right|^{2} \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}e^{-m\omega x^{2}/\hbar}\ \ \ \ \ (20)
\displaystyle E_{1} \displaystyle = \displaystyle \frac{3\hbar\omega}{2}\ \ \ \ \ (21)
\displaystyle \left|\psi_{1}\left(x\right)\right|^{2} \displaystyle = \displaystyle \sqrt{\frac{m\omega}{\pi\hbar}}\frac{2m\omega}{\hbar}e^{-m\omega x^{2}/\hbar}x^{2} \ \ \ \ \ (22)

To check this, we recall the eigenfunctions we worked out earlier, using Hermite polynomials

\displaystyle \psi_{n}(x)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^{n}n!}}H_{n}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (23)

 

The first two Hermite polynomials are

\displaystyle H_{0}\left(\sqrt{\frac{m\omega}{\hbar}}x\right) \displaystyle = \displaystyle 1\ \ \ \ \ (24)
\displaystyle H_{1}\left(\sqrt{\frac{m\omega}{\hbar}}x\right) \displaystyle = \displaystyle 2\sqrt{\frac{m\omega}{\hbar}}x \ \ \ \ \ (25)

Plugging these into 23 and comparing with 20 and 22 shows we got the right answer.

Path integrals for special potentials; use of classical action

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 8. Section 8.6, Exercises 8.6.1 – 8.6.2.

We’ve seen that if we use the path integral formulation for a free particle, we get the exact propagator by considering only one path (the classical path) between the starting point {\left(x^{\prime},t^{\prime}\right)} and the end point {\left(x,t\right)}. In this case, the propagator has the form

\displaystyle  U\left(x,t;x^{\prime},t^{\prime}\right)=A\left(t\right)e^{iS_{cl}/\hbar} \ \ \ \ \ (1)

where {S_{cl}} is the classical action. It turns out that this form is true for a wider set of potentials, beyond just the free particle. The general form of the potential for which this is true is

\displaystyle  V=a+bx+cx^{2}+d\dot{x}+ex\dot{x} \ \ \ \ \ (2)

where {a,b,c,d} and {e} are constants. The general expression for the propagator is (where we’re taking the starting time to be {t^{\prime}=0}):

\displaystyle  U\left(x,t;x^{\prime}\right)=\int_{x^{\prime}}^{x}e^{iS\left[x\left(t^{\prime\prime}\right)\right]/\hbar}\mathfrak{D}\left[x\left(t^{\prime\prime}\right)\right] \ \ \ \ \ (3)

where the notation {\mathfrak{D}\left[x\left(t^{\prime\prime}\right)\right]} means an integration over all possible paths from {x^{\prime}} to {x} in the given time interval.

For a given path, we can write the location of the particle {x\left(t^{\prime\prime}\right)} as composed of its position on the classical path {x_{cl}\left(t^{\prime\prime}\right)} plus the deviation {y\left(t^{\prime\prime}\right)} from the classical path:

\displaystyle  x\left(t^{\prime\prime}\right)=x_{cl}\left(t^{\prime\prime}\right)+y\left(t^{\prime\prime}\right) \ \ \ \ \ (4)

As the endpoints are fixed

\displaystyle  y\left(0\right)=y\left(t\right)=0 \ \ \ \ \ (5)

Also, since for any given potential and choice of endpoints, {x_{cl}\left(t^{\prime\prime}\right)} is fixed for all times, it is effectively a constant with regard to the path integration. Therefore

\displaystyle  dx=dy \ \ \ \ \ (6)

Making these substitutions into 3, we get, using Shankar’s slightly misleading notation:

\displaystyle  U\left(x,t;x^{\prime}\right)=\int_{0}^{0}e^{iS\left[x_{cl}\left(t^{\prime\prime}\right)+y\left(t^{\prime\prime}\right)\right]/\hbar}\mathfrak{D}\left[y\left(t^{\prime\prime}\right)\right] \ \ \ \ \ (7)

Usually, when the limits on an integral are the same, the integral evaluates to zero. However, in this case, the notation {\int_{0}^{0}\mathfrak{D}\left[y\left(t^{\prime\prime}\right)\right]} means that {y} starts and ends at zero, but covers all possible paths between these endpoints.

The action is the integral of the Lagrangian which, for the potential 2 is

\displaystyle   L \displaystyle  = \displaystyle  T-V\ \ \ \ \ (8)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2}m\dot{x}^{2}-a-bx-cx^{2}-d\dot{x}-ex\dot{x} \ \ \ \ \ (9)

Because {L} is quadratic in both {x} and {\dot{x}}, we can expand it in a Taylor series up to second order without any approximation. That is

\displaystyle   L\left(x_{cl}+y,\dot{x}_{cl}+\dot{y}\right) \displaystyle  = \displaystyle  L\left(x_{cl},\dot{x}_{cl}\right)+\left.\frac{\partial L}{\partial x}\right|_{x_{cl}}y+\left.\frac{\partial L}{\partial\dot{x}}\right|_{x_{cl}}\dot{y}+\ \ \ \ \ (10)
\displaystyle  \displaystyle  \displaystyle  \frac{1}{2}\left(\left.\frac{\partial^{2}L}{\partial x^{2}}\right|_{x_{cl}}y^{2}+2\left.\frac{\partial^{2}L}{\partial x\partial\dot{x}}\right|_{x_{cl}}y\dot{y}+\left.\frac{\partial^{2}L}{\partial\dot{x}^{2}}\right|_{x_{cl}}\dot{y}^{2}\right) \ \ \ \ \ (11)

Look first at the last two terms on the RHS of the first line. Using the equations of motion, we have

\displaystyle  \left.\frac{\partial L}{\partial x}\right|_{x_{cl}}=\frac{d}{dt}\left(\left.\frac{\partial L}{\partial\dot{x}}\right|_{x_{cl}}\right) \ \ \ \ \ (12)

To get the action, we need to integrate the Lagrangian over the time interval of interest. Integrating these two terms gives

\displaystyle   \int_{0}^{t}\left[\left.\frac{\partial L}{\partial x}\right|_{x_{cl}}y+\left.\frac{\partial L}{\partial\dot{x}}\right|_{x_{cl}}\dot{y}\right]dt^{\prime\prime} \displaystyle  = \displaystyle  \int_{0}^{t}\left[\frac{d}{dt}\left(\left.\frac{\partial L}{\partial\dot{x}}\right|_{x_{cl}}\right)y+\left.\frac{\partial L}{\partial\dot{x}}\right|_{x_{cl}}\dot{y}\right]dt^{\prime\prime}\ \ \ \ \ (13)
\displaystyle  \displaystyle  = \displaystyle  \left.\left(\left.\frac{\partial L}{\partial\dot{x}}\right|_{x_{cl}}\right)y\right|_{0}^{t}-\int_{0}^{t}\left.\frac{\partial L}{\partial\dot{x}}\right|_{x_{cl}}\dot{y}dt^{\prime\prime}+\int_{0}^{t}\left.\frac{\partial L}{\partial\dot{x}}\right|_{x_{cl}}\dot{y}dt^{\prime\prime}\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  0 \ \ \ \ \ (15)

where we integrated the first term by parts. The integrated term in the second line is zero because {y=0} at both endpoints, and the last two terms cancel each other.

Returning to 11, we can calculate the three second derivatives explicitly:

\displaystyle   \frac{1}{2}\frac{\partial^{2}L}{\partial x^{2}} \displaystyle  = \displaystyle  -c\ \ \ \ \ (16)
\displaystyle  \left.\frac{\partial^{2}L}{\partial x\partial\dot{x}}\right|_{x_{cl}} \displaystyle  = \displaystyle  -e\ \ \ \ \ (17)
\displaystyle  \frac{1}{2}\frac{\partial^{2}L}{\partial\dot{x}^{2}} \displaystyle  = \displaystyle  \frac{m}{2} \ \ \ \ \ (18)

The integral of the first term on the RHS of 10 is just the classical action, so we get for the propagator 7:

\displaystyle  U\left(x,t;x^{\prime}\right)=e^{iS_{cl}/\hbar}\int_{0}^{0}\exp\left[\frac{i}{\hbar}\int_{0}^{t}\left(\frac{m\dot{y}^{2}}{2}-cy^{2}-ey\dot{y}\right)dt^{\prime\prime}\right]\mathfrak{D}\left[y\left(t^{\prime\prime}\right)\right] \ \ \ \ \ (19)

The remaining path integral can still be difficult to evaluate, but we can observe a few of its properties. First, the integrand involves only {y\left(t^{\prime\prime}\right)} and {\dot{y}\left(t^{\prime\prime}\right)}; the endpoints {x} and {x^{\prime}} appear nowhere in it, so once the path integration is done the result can depend only on the end time {t} (and, of course, on the constants {m}, {c} and {e}). That is, the propagator will always have the form 1:

\displaystyle  U\left(x,t;x^{\prime},t^{\prime}\right)=A\left(t\right)e^{iS_{cl}/\hbar} \ \ \ \ \ (20)

We have already evaluated the integral for the free particle where {c=e=0} and we found there that

\displaystyle  U\left(x,t;x^{\prime}\right)=\sqrt{\frac{m}{2\pi\hbar it}}e^{iS_{cl}/\hbar} \ \ \ \ \ (21)

Since none of the constants {a}, {b} or {d} appears in 19, the prefactor {A\left(t\right)} must be the same as for the free particle in the more general case where {V=a+bx}; only the classical action {S_{cl}} in the exponent changes. For more complex potentials, such as the harmonic oscillator, the function {A\left(t\right)} will in general have a different form and will have to be calculated explicitly in these cases.

As an example, we’ll consider the case of a particle subject to a constant force in the {x} direction, so that the potential is given by

\displaystyle  V\left(x\right)=-fx \ \ \ \ \ (22)

This gives a constant force of

\displaystyle  F=-\frac{dV}{dx}=f \ \ \ \ \ (23)

and thus a constant acceleration of {f/m}. For such a particle, its classical position is (from first year physics)

\displaystyle   x_{cl}\left(t^{\prime\prime}\right) \displaystyle  = \displaystyle  x_{0}+v_{0}t^{\prime\prime}+\frac{1}{2}\frac{f}{m}t^{\prime\prime2}\ \ \ \ \ (24)
\displaystyle  \dot{x}_{cl}\left(t^{\prime\prime}\right) \displaystyle  = \displaystyle  v_{0}+\frac{f}{m}t^{\prime\prime} \ \ \ \ \ (25)

To find {x_{0}} and {v_{0}}, we impose boundary conditions. At {t^{\prime\prime}=0}

\displaystyle  x_{cl}\left(0\right)=x_{0}=x^{\prime} \ \ \ \ \ (26)

At {t^{\prime\prime}=t}, its position is

\displaystyle   x_{cl}\left(t\right) \displaystyle  = \displaystyle  x=x^{\prime}+v_{0}t+\frac{f}{2m}t^{2} \ \ \ \ \ (27)

This gives

\displaystyle  v_{0}=\frac{x-x^{\prime}}{t}-\frac{f}{2m}t \ \ \ \ \ (28)

The classical Lagrangian is

\displaystyle   L \displaystyle  = \displaystyle  T-V\ \ \ \ \ (29)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2}m\dot{x}_{cl}^{2}+fx_{cl}\ \ \ \ \ (30)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2}m\left(v_{0}+\frac{f}{m}t^{\prime\prime}\right)^{2}+f\left(x_{0}+v_{0}t^{\prime\prime}+\frac{1}{2}\frac{f}{m}t^{\prime\prime2}\right)\ \ \ \ \ (31)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2}m\left(\frac{x-x^{\prime}}{t}-\frac{f}{2m}t+\frac{f}{m}t^{\prime\prime}\right)^{2}+f\left(x^{\prime}+\left(\frac{x-x^{\prime}}{t}-\frac{f}{2m}t\right)t^{\prime\prime}+\frac{1}{2}\frac{f}{m}t^{\prime\prime2}\right) \ \ \ \ \ (32)

Note that {t} is a constant, as it is the time of the endpoint of the motion. To find the classical action, we must integrate this from {t^{\prime\prime}=0} to {t}. The integrand is just a quadratic in {t^{\prime\prime}}, so the integral is straightforward, although the algebra is tedious by hand and is best left to Maple (a sympy check is given below).

\displaystyle   S_{cl} \displaystyle  = \displaystyle  \int_{0}^{t}L\;dt^{\prime\prime}\ \ \ \ \ (33)
\displaystyle  \displaystyle  = \displaystyle  \frac{f^{2}t^{3}}{3m}+\left(\frac{x-x^{\prime}}{t}-\frac{ft}{2m}\right)ft^{2}+\frac{1}{2}m\left(\frac{x-x^{\prime}}{t}-\frac{ft}{2m}\right)^{2}t+fx^{\prime}t\ \ \ \ \ (34)
\displaystyle  \displaystyle  = \displaystyle  -\frac{f^{2}t^{3}}{24m}+\frac{1}{2}ft\left(x+x^{\prime}\right)+\frac{m\left(x-x^{\prime}\right)^{2}}{2t} \ \ \ \ \ (35)
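
For anyone without Maple, here is a sympy sketch of the same calculation (my own re-derivation, included only to confirm 35):

import sympy as sp

m, t = sp.symbols('m t', positive=True)
f, x, x_prime, tpp = sp.symbols('f x x_prime tpp', real=True)

# Classical path for constant force f, with x_cl(0) = x_prime and x_cl(t) = x
v0 = (x - x_prime)/t - f*t/(2*m)
x_cl = x_prime + v0*tpp + f*tpp**2/(2*m)
xdot_cl = sp.diff(x_cl, tpp)

L = m*xdot_cl**2/2 + f*x_cl            # T - V with V = -f*x
S_cl = sp.integrate(L, (tpp, 0, t))

S_expected = -f**2*t**3/(24*m) + f*t*(x + x_prime)/2 + m*(x - x_prime)**2/(2*t)
print(sp.simplify(S_cl - S_expected))  # 0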

From 21, this gives a propagator of

\displaystyle  U\left(x,t;x^{\prime}\right)=\sqrt{\frac{m}{2\pi\hbar it}}\exp\left[\frac{i}{\hbar}\left(-\frac{f^{2}t^{3}}{24m}+\frac{1}{2}ft\left(x+x^{\prime}\right)+\frac{m\left(x-x^{\prime}\right)^{2}}{2t}\right)\right] \ \ \ \ \ (36)

This agrees with Shankar’s result in his equation 5.4.31.

As another example, consider the harmonic oscillator, where the potential is

\displaystyle  V=\frac{1}{2}m\omega^{2}x^{2} \ \ \ \ \ (37)

This potential is also of the form 2, so the propagator must have the form 20. This time, however, since {c=\frac{1}{2}m\omega^{2}\ne0}, the function {A\left(t\right)} will not, in general, have the free-particle form used in 21. The best we can say at this stage is therefore that

\displaystyle  U\left(x,t;x^{\prime},t^{\prime}\right)=A\left(t\right)e^{iS_{cl}/\hbar} \ \ \ \ \ (38)

where {A\left(t\right)} has the form (from 19):

\displaystyle  A\left(t\right)=\int_{0}^{0}\exp\left[\frac{i}{\hbar}\int_{0}^{t}\left(\frac{m\dot{y}^{2}}{2}-\frac{1}{2}m\omega^{2}y^{2}\right)dt^{\prime\prime}\right]\mathfrak{D}\left[y\left(t^{\prime\prime}\right)\right] \ \ \ \ \ (39)

We worked out the classical action for the harmonic oscillator earlier and found

\displaystyle  S_{cl}=\frac{m\omega}{2\sin\omega t}\left[\left(x^{\prime2}+x^{2}\right)\cos\omega t-2x^{\prime}x\right] \ \ \ \ \ (40)

where the particle is at {x^{\prime}} at {t^{\prime\prime}=0} and at {x} at {t^{\prime\prime}=t}. The propagator is therefore

\displaystyle  U\left(x,t;x^{\prime}\right)=A\left(t\right)\exp\left[\frac{im\omega}{2\hbar\sin\omega t}\left(\left(x^{\prime2}+x^{2}\right)\cos\omega t-2x^{\prime}x\right)\right] \ \ \ \ \ (41)

with {A\left(t\right)} given by 39.
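
As a small consistency check (my own, not in Shankar), in the limit {\omega\rightarrow0} the action 40 should reduce to the free-particle classical action {m\left(x-x^{\prime}\right)^{2}/2t} that appears in the exponent of 21, and sympy confirms this:

import sympy as sp

m, t, omega = sp.symbols('m t omega', positive=True)
x, x_prime = sp.symbols('x x_prime', real=True)

S_ho = m*omega/(2*sp.sin(omega*t))*((x_prime**2 + x**2)*sp.cos(omega*t) - 2*x_prime*x)
S_free = m*(x - x_prime)**2/(2*t)

print(sp.simplify(sp.limit(S_ho, omega, 0) - S_free))   # 0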

The path integral is equivalent to the Schrödinger equation

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 8. Section 8.5.

We’ve seen that we can produce the propagator for the free particle by means of a complete path integral over all paths between some specified initial state at {\left(x_{0},t_{0}\right)} and specified final state {\left(x_{N},t_{N}\right)}. Here we’ll show that the path integral approach is formally equivalent to the Schrödinger equation, even for an arbitrary potential {V}.

The Schrödinger equation is a differential equation that allows us to calculate the wave function as a function of position {x} and time {t}, when solved in the position basis. To find the same thing from the path integral, we’ll consider an infinitesimal time interval {\varepsilon} and try to find {\psi\left(x,\varepsilon\right)} given the wave function at {t=0}, that is, given {\psi\left(x^{\prime},0\right)} for some arbitrary {x^{\prime}}. To use a path integral in this way, we’re effectively asking for the contribution to the propagator from all possible paths between {t=0} and {t=\varepsilon}. That is, we’re considering that the particle may have started at any position {x^{\prime}} at {t=0} and still ended up at position {x} at {t=\varepsilon}. In terms of the propagator, this is

\displaystyle \psi\left(x,\varepsilon\right)=\int_{-\infty}^{\infty}U\left(x,\varepsilon;x^{\prime},0\right)\psi\left(x^{\prime},0\right)dx^{\prime} \ \ \ \ \ (1)

 

Looking at our previous derivation of the propagator, we saw that there we fixed the initial and final states and integrated over all possible paths between these two states. In this case, all we’re specifying is the final state so in principle, the particle could have been anywhere at {t=0}.

The general form for the propagator is

\displaystyle U\left(t\right)=A\int_{all\;paths}e^{iS\left[x\left(t\right)\right]/\hbar} \ \ \ \ \ (2)

where {A} is a scale factor and {S\left[x\left(t\right)\right]} is the action for travelling along path {x\left(t\right)}:

\displaystyle S=\int_{0}^{\varepsilon}L\;dt \ \ \ \ \ (3)

 

We can approximate the action by taking the Lagrangian to be

\displaystyle L \displaystyle = \displaystyle T-V\ \ \ \ \ (4)
\displaystyle \displaystyle = \displaystyle \frac{1}{2}mv^{2}-V\ \ \ \ \ (5)
\displaystyle \displaystyle = \displaystyle \frac{1}{2}m\frac{\left(x-x^{\prime}\right)^{2}}{\varepsilon^{2}}-V\left(\frac{x+x^{\prime}}{2},0\right) \ \ \ \ \ (6)

Here we take the velocity over the interval {\varepsilon} to be constant at {v=\frac{x-x^{\prime}}{\varepsilon}}, and we take the potential to be constant, with its value at the midpoint between {x} and {x^{\prime}} at time {t=0}. The reason we can approximate {V} by its value at {t=0} is that in calculating the action 3, we will multiply {L} by {\varepsilon}, and we’re interested only in terms of first order in {\varepsilon}. The action to this order is then

\displaystyle S=\varepsilon L=\frac{1}{2}m\frac{\left(x-x^{\prime}\right)^{2}}{\varepsilon}-\varepsilon V\left(\frac{x+x^{\prime}}{2},0\right) \ \ \ \ \ (7)

which gives a propagator of

\displaystyle U\left(x,\varepsilon;x^{\prime},0\right)=A\exp\left[\frac{i}{\hbar}\left(\frac{1}{2}m\frac{\left(x-x^{\prime}\right)^{2}}{\varepsilon}-\varepsilon V\left(\frac{x+x^{\prime}}{2},0\right)\right)\right] \ \ \ \ \ (8)

We can try the same value for {A} that we had for the free particle

\displaystyle A=\left(\frac{m}{2\pi\hbar\varepsilon i}\right)^{N/2} \ \ \ \ \ (9)

In this case, we have only one step so {N=1} and

\displaystyle U\left(x,\varepsilon;x^{\prime},0\right)=\sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\exp\left[\frac{i}{\hbar}\left(\frac{1}{2}m\frac{\left(x-x^{\prime}\right)^{2}}{\varepsilon}-\varepsilon V\left(\frac{x+x^{\prime}}{2},0\right)\right)\right] \ \ \ \ \ (10)

We now need to do some approximating. The kinetic energy term is

\displaystyle \exp\left[\frac{i}{\hbar}\left(\frac{1}{2}m\frac{\left(x-x^{\prime}\right)^{2}}{\varepsilon}\right)\right] \ \ \ \ \ (11)

 

The exponent is pure imaginary so for infinitesimal {\varepsilon}, it oscillates very rapidly away from the stationary point at {x=x^{\prime}}. When this term is placed in the integral 1, it multiplies {\psi\left(x^{\prime},0\right)} which we’ll assume is a smooth function that doesn’t oscillate much, at least over the scale at which 11 oscillates. We define

\displaystyle \eta\equiv x^{\prime}-x \ \ \ \ \ (12)

to be the displacement from the point of stationary phase at {x^{\prime}=x}. Once the phase grows to something of order {\pi}, the oscillations become rapid enough that the contributions to the integral effectively cancel out, so we’re looking at the region

\displaystyle \frac{m\eta^{2}}{2\hbar\varepsilon}\lesssim\pi \ \ \ \ \ (13)

or

\displaystyle \left|\eta\right|\lesssim\sqrt{\frac{2\hbar\varepsilon\pi}{m}} \ \ \ \ \ (14)
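
To get a feel for how narrow this window is, here is a quick numerical estimate using illustrative values of my own choosing (an electron mass and a time step of {\varepsilon=10^{-18}} s); the numbers are meant only to show the order of magnitude.

import numpy as np

hbar = 1.054571817e-34    # J s
m = 9.1093837015e-31      # kg (electron mass, chosen purely for illustration)
eps = 1e-18               # s (an arbitrary 'infinitesimal' time step)

eta_max = np.sqrt(2*hbar*eps*np.pi/m)
print(f"|eta| <~ {eta_max:.2e} m")    # about 2.7e-11 m, i.e. sub-angstrom

so only a very narrow range of starting points {x^{\prime}} contributes appreciably to 1.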

If we work to first order in {\varepsilon} we therefore must retain terms up to second order in {\eta}. In terms of {\eta}, 1 now becomes

\displaystyle \psi\left(x,\varepsilon\right)=\sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)\exp\left(-\frac{i\varepsilon}{\hbar}V\left(x+\frac{\eta}{2},0\right)\right)\psi\left(x+\eta,0\right)d\eta \ \ \ \ \ (15)

 

We now expand the last two factors as a Taylor series in {\eta} and {\varepsilon} up to first order in {\varepsilon} or second order in {\eta}:

\displaystyle \exp\left(-\frac{i\varepsilon}{\hbar}V\left(x+\frac{\eta}{2},0\right)\right) \displaystyle = \displaystyle 1-\frac{i\varepsilon}{\hbar}V\left(x+\frac{\eta}{2},0\right)+\ldots\ \ \ \ \ (16)
\displaystyle \displaystyle = \displaystyle 1-\frac{i\varepsilon}{\hbar}V\left(x,0\right)+\ldots \ \ \ \ \ (17)

We can drop terms in the expansion of {V\left(x+\frac{\eta}{2},0\right)} beyond {V\left(x,0\right)} since they will be of order {\mathcal{O}\left(\varepsilon\eta\right)=\mathcal{O}\left(\varepsilon^{3/2}\right)} or higher.

For the second term, we have

\displaystyle \psi\left(x+\eta,0\right)=\psi\left(x,0\right)+\eta\frac{\partial\psi}{\partial x}+\frac{\eta^{2}}{2}\frac{\partial^{2}\psi}{\partial x^{2}}+\ldots \ \ \ \ \ (18)

where the partial derivatives are evaluated at {\eta=0}.

Inserting these into the integral 15 we get

\displaystyle \psi\left(x,\varepsilon\right) \displaystyle = \displaystyle \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)\times\ \ \ \ \ (19)
\displaystyle \displaystyle \displaystyle \left[\psi\left(x,0\right)+\eta\frac{\partial\psi}{\partial x}+\frac{\eta^{2}}{2}\frac{\partial^{2}\psi}{\partial x^{2}}\right]\times\ \ \ \ \ (20)
\displaystyle \displaystyle \displaystyle \left[1-\frac{i\varepsilon}{\hbar}V\left(x,0\right)\right]d\eta \ \ \ \ \ (21)

Again, retaining only terms up to first order in {\varepsilon} or second order in {\eta}:

\displaystyle \psi\left(x,\varepsilon\right) \displaystyle = \displaystyle \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)\times\ \ \ \ \ (22)
\displaystyle \displaystyle \displaystyle \left[\psi\left(x,0\right)-\frac{i\varepsilon}{\hbar}V\left(x,0\right)\psi\left(x,0\right)+\eta\frac{\partial\psi}{\partial x}+\frac{\eta^{2}}{2}\frac{\partial^{2}\psi}{\partial x^{2}}\right]d\eta \ \ \ \ \ (23)

Everything in the integrand is constant with respect to {\eta} except for the first exponential and the factors of {\eta} and {\eta^{2}} in the last two terms. We are therefore faced with a couple of Gaussian integrals. We have

\displaystyle \int_{-\infty}^{\infty}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)d\eta \displaystyle = \displaystyle \int_{-\infty}^{\infty}\exp\left(-\frac{m\eta^{2}}{2\hbar i\varepsilon}\right)d\eta\ \ \ \ \ (24)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{2\pi\hbar i\varepsilon}{m}}\ \ \ \ \ (25)
\displaystyle \int_{-\infty}^{\infty}\eta\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)d\eta \displaystyle = \displaystyle 0\ \ \ \ \ (26)
\displaystyle \int_{-\infty}^{\infty}\eta^{2}\exp\left(\frac{im\eta^{2}}{2\hbar\varepsilon}\right)d\eta \displaystyle = \displaystyle \int_{-\infty}^{\infty}\eta^{2}\exp\left(-\frac{m\eta^{2}}{2\hbar i\varepsilon}\right)d\eta\ \ \ \ \ (27)
\displaystyle \displaystyle = \displaystyle \frac{\hbar i\varepsilon}{m}\sqrt{\frac{2\pi\hbar i\varepsilon}{m}}\ \ \ \ \ (28)
\displaystyle \displaystyle = \displaystyle -\frac{\hbar\varepsilon}{im}\sqrt{\frac{2\pi\hbar i\varepsilon}{m}} \ \ \ \ \ (29)
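
These are standard Gaussian results, used here with the real parameter {a} continued to the imaginary value {a=m/2\hbar i\varepsilon}. A short sympy check of the real-parameter formulas (my own verification, not from Shankar):

import sympy as sp

a = sp.Symbol('a', positive=True)
eta = sp.Symbol('eta', real=True)

g = sp.exp(-a*eta**2)
print(sp.integrate(g, (eta, -sp.oo, sp.oo)))          # sqrt(pi)/sqrt(a)
print(sp.integrate(eta*g, (eta, -sp.oo, sp.oo)))      # 0
print(sp.integrate(eta**2*g, (eta, -sp.oo, sp.oo)))   # sqrt(pi)/(2*a**(3/2))

# With a -> m/(2*hbar*I*eps) these reproduce 25, 26 and 28 above.

Setting {a=m/2\hbar i\varepsilon} in {\sqrt{\pi/a}} and {\left(1/2a\right)\sqrt{\pi/a}} gives 25 and 28 respectively.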

Putting it all together, we have

\displaystyle \psi\left(x,\varepsilon\right) \displaystyle = \displaystyle \sqrt{\frac{m}{2\pi\hbar\varepsilon i}}\sqrt{\frac{2\pi\hbar i\varepsilon}{m}}\left[\left(1-\frac{i\varepsilon}{\hbar}V\left(x,0\right)\right)-\frac{\hbar\varepsilon}{2im}\frac{\partial^{2}}{\partial x^{2}}\right]\psi\left(x,0\right)\ \ \ \ \ (30)
\displaystyle \displaystyle = \displaystyle \psi\left(x,0\right)-\frac{i\varepsilon}{\hbar}\left(-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}+V\left(x,0\right)\right)\psi\left(x,0\right) \ \ \ \ \ (31)

Rearranging, we get

\displaystyle i\hbar\frac{\psi\left(x,\varepsilon\right)-\psi\left(x,0\right)}{\varepsilon}=\left(-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}+V\left(x,0\right)\right)\psi\left(x,0\right) \ \ \ \ \ (32)

In the limit {\varepsilon\rightarrow0}, the LHS becomes {i\hbar\frac{\partial\psi}{\partial t}} and we get the Schrödinger equation:

\displaystyle i\hbar\frac{\partial\psi}{\partial t}=\left(-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}+V\left(x,0\right)\right)\psi\left(x,0\right) \ \ \ \ \ (33)

Free particle propagator from a complete path integral

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 8. Section 8.4.

We’ve seen that the free-particle propagator can be obtained in the path integral approach by using only the classical path in the sum over paths. It turns out that it’s not too hard to calculate the propagator for a free particle properly, by summing over all possible paths. The notation used by Shankar is as follows.

We want to evaluate the path integral

\displaystyle \int_{x_{0}}^{x_{N}}e^{iS\left[x\left(t\right)\right]/\hbar}\mathfrak{D}\left[x\left(t\right)\right] \ \ \ \ \ (1)

The notation {\mathfrak{D}\left[x\left(t\right)\right]} means an integration over all possible paths from {x_{0}} to {x_{N}} in the given time interval. This includes paths where the particle might move to the right for a while, then jog back to the left, then back to the right again and so on. This might seem like a hopeless task, but we can make sense of this method by splitting the time interval between {t_{0}} and {t_{N}} into {N} small intervals, each of length {\varepsilon}. Thus an intermediate time {t_{n}=t_{0}+n\varepsilon}, and the final time is {t_{N}=t_{0}+N\varepsilon}.

For a free particle, there is no potential energy so the Lagrangian is just the kinetic energy:

\displaystyle L=\frac{1}{2}m\dot{x}^{2} \ \ \ \ \ (2)

We can estimate the velocity in each time slice by

\displaystyle \dot{x}_{i}=\frac{x_{i+1}-x_{i}}{\varepsilon} \ \ \ \ \ (3)

Note that this assumes that the velocity within each time slice is constant, but as we make {\varepsilon} smaller and smaller, this becomes increasingly accurate. Also note that {\dot{x}_{i}} can be either positive (if the particle moves to the right in the interval) or negative (if it moves to the left).

The action for a given path is given by the integral of the Lagrangian:

\displaystyle S=\int_{t_{0}}^{t_{N}}L\left(t\right)dt \ \ \ \ \ (4)

In our discretized approximation, we evaluate {L} within each time slice, and {dt} becomes the interval length {\varepsilon}, so the action becomes a sum:

\displaystyle S \displaystyle = \displaystyle \sum_{i=0}^{N-1}L\left(t_{i}\right)\varepsilon\ \ \ \ \ (5)
\displaystyle \displaystyle = \displaystyle \frac{m}{2}\sum_{i=0}^{N-1}\left(\frac{x_{i+1}-x_{i}}{\varepsilon}\right)^{2}\varepsilon\ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \frac{m}{2}\sum_{i=0}^{N-1}\frac{\left(x_{i+1}-x_{i}\right)^{2}}{\varepsilon} \ \ \ \ \ (7)
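
As a concrete illustration of this discretization (my own example, with arbitrary numbers), we can compare the discretized action 7 with the exact continuum action for a sample wiggly path of a free particle; as {N} grows the two agree:

import numpy as np

m = 1.0
T = 1.0
x0, xN = 0.0, 2.0

def discretized_action(N):
    """Eq. 7: S = (m/2) * sum of (x_{i+1} - x_i)**2 / eps for a sampled path."""
    eps = T/N
    t = np.linspace(0.0, T, N + 1)
    # an arbitrary wiggly path between the fixed endpoints
    x = x0 + (xN - x0)*t/T + np.sin(np.pi*t/T)
    return 0.5*m*np.sum(np.diff(x)**2)/eps

# exact action for this path, (m/2) * integral of xdot**2 dt, done by hand
exact = 0.5*m*((xN - x0)**2/T + np.pi**2/(2*T))
for N in (10, 100, 1000):
    print(N, discretized_action(N), exact)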

The key point here is to notice that we can label any given path by choosing values for all the {x_{i}}s between the two times, and that each {x_{i}} can vary independently of the others, over a range from {-\infty} to {+\infty}. We can therefore implement the multiple integration required by {\mathfrak{D}\left[x\left(t\right)\right]} by integrating over all the {x_{i}} variables separately. That is,

\displaystyle \int_{x_{0}}^{x_{N}}e^{iS\left[x\left(t\right)\right]/\hbar}\mathfrak{D}\left[x\left(t\right)\right]=A\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\ldots\int_{-\infty}^{\infty}\exp\left[\frac{im}{2\hbar}\sum_{i=0}^{N-1}\frac{\left(x_{i+1}-x_{i}\right)^{2}}{\varepsilon}\right]dx_{1}dx_{2}\ldots dx_{N-1} \ \ \ \ \ (8)

where {A} is some constant to make the scale come out right.

We don’t integrate over {x_{0}} or {x_{N}} since these are fixed as the end points of the path. To get the final version, we need to take the limit of this expression as {N\rightarrow\infty} and {\varepsilon\rightarrow0}. This still looks pretty scary, but in fact it is doable. We define the variable

\displaystyle y_{i} \displaystyle \equiv \displaystyle \sqrt{\frac{m}{2\hbar\varepsilon}}x_{i}\ \ \ \ \ (9)
\displaystyle dx_{i} \displaystyle = \displaystyle \sqrt{\frac{2\hbar\varepsilon}{m}}dy_{i} \ \ \ \ \ (10)

This gives us

\displaystyle A\left(\frac{2\hbar\varepsilon}{m}\right)^{\left(N-1\right)/2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\ldots\int_{-\infty}^{\infty}\exp\left[i\sum_{i=0}^{N-1}\left(y_{i+1}-y_{i}\right)^{2}\right]dy_{1}dy_{2}\ldots dy_{N-1} \ \ \ \ \ (11)

 

We can do the integral in stages in order to spot a pattern. Consider first the integral over {y_{1}}, which involves only two of the factors in the integrand:

\displaystyle \int_{-\infty}^{\infty}e^{i\left[\left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}\right]}dy_{1} \ \ \ \ \ (12)

We first simplify the exponent

\displaystyle \left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2} \displaystyle = \displaystyle y_{2}^{2}+y_{0}^{2}+2\left(y_{1}^{2}-y_{0}y_{1}-y_{1}y_{2}\right)\ \ \ \ \ (13)
\displaystyle \displaystyle = \displaystyle y_{2}^{2}+y_{0}^{2}+2y_{1}^{2}-2\left(y_{0}+y_{2}\right)y_{1} \ \ \ \ \ (14)

We get

\displaystyle \int_{-\infty}^{\infty}e^{i\left[\left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}\right]}dy_{1}=e^{i\left(y_{2}^{2}+y_{0}^{2}\right)}\int_{-\infty}^{\infty}e^{2i\left[y_{1}^{2}-\left(y_{0}+y_{2}\right)y_{1}\right]}dy_{1} \ \ \ \ \ (15)

We can evaluate this using a standard Gaussian integral

\displaystyle \int_{-\infty}^{\infty}e^{-ax^{2}+bx}dx=e^{b^{2}/4a}\sqrt{\frac{\pi}{a}} \ \ \ \ \ (16)
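
This standard result is easy to confirm with sympy for real, positive {a} (my own check); the text then uses it with complex {a} by analytic continuation:

import sympy as sp

a = sp.Symbol('a', positive=True)
b, x = sp.symbols('b x', real=True)

gauss = sp.integrate(sp.exp(-a*x**2 + b*x), (x, -sp.oo, sp.oo))
print(sp.simplify(gauss - sp.exp(b**2/(4*a))*sp.sqrt(sp.pi/a)))   # 0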

 

This gives

\displaystyle \int_{-\infty}^{\infty}e^{i\left[\left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}\right]}dy_{1} \displaystyle = \displaystyle e^{i\left(y_{2}^{2}+y_{0}^{2}\right)}e^{4\left(y_{0}+y_{2}\right)^{2}/8i}\sqrt{-\frac{\pi}{2i}}\ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle e^{i\left(y_{2}^{2}+y_{0}^{2}\right)}e^{\left(y_{0}+y_{2}\right)^{2}/2i}\sqrt{\frac{\pi i}{2}} \ \ \ \ \ (18)

To simplify the exponents on the RHS:

\displaystyle i\left(y_{2}^{2}+y_{0}^{2}\right)+\frac{\left(y_{0}+y_{2}\right)^{2}}{2i} \displaystyle = \displaystyle \frac{1}{2i}\left[\left(y_{0}+y_{2}\right)^{2}-2y_{2}^{2}-2y_{0}^{2}\right]\ \ \ \ \ (19)
\displaystyle \displaystyle = \displaystyle -\frac{1}{2i}\left(y_{0}-y_{2}\right)^{2} \ \ \ \ \ (20)

Thus we have

\displaystyle \int_{-\infty}^{\infty}e^{i\left[\left(y_{1}-y_{0}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}\right]}dy_{1}=\sqrt{\frac{\pi i}{2}}e^{-\left(y_{0}-y_{2}\right)^{2}/2i} \ \ \ \ \ (21)

Having eliminated {y_{1}} we can now do the integral over {y_{2}}:

\displaystyle \sqrt{\frac{\pi i}{2}}\int_{-\infty}^{\infty}e^{-\left(y_{3}-y_{2}\right)^{2}/i-\left(y_{2}-y_{0}\right)^{2}/2i}dy_{2} \ \ \ \ \ (22)

Again, we can simplify the exponent:

\displaystyle -\frac{\left(y_{3}-y_{2}\right)^{2}}{i}-\frac{\left(y_{2}-y_{0}\right)^{2}}{2i}=\frac{1}{2i}\left[-\left(2y_{3}^{2}+y_{0}^{2}\right)-3y_{2}^{2}+y_{2}\left(4y_{3}+2y_{0}\right)\right] \ \ \ \ \ (23)

The integral now becomes

\displaystyle \sqrt{\frac{\pi i}{2}}\int_{-\infty}^{\infty}e^{-\left(y_{3}-y_{2}\right)^{2}/i-\left(y_{2}-y_{0}\right)^{2}/2i}dy_{2} \displaystyle = \displaystyle \sqrt{\frac{\pi i}{2}}e^{-\left(2y_{3}^{2}+y_{0}^{2}\right)/2i}\int_{-\infty}^{\infty}e^{\left(-3y_{2}^{2}+y_{2}\left(4y_{3}+2y_{0}\right)\right)/2i}dy_{2} \ \ \ \ \ (24)

Doing the Gaussian integral on the RHS using 16:

\displaystyle \int_{-\infty}^{\infty}e^{\left(-3y_{2}^{2}+y_{2}\left(4y_{3}+2y_{0}\right)\right)/2i}dy_{2} \displaystyle = \displaystyle e^{-\left(4y_{3}+2y_{0}\right)^{2}i/24}\sqrt{\frac{2\pi i}{3}}\ \ \ \ \ (25)
\displaystyle \displaystyle = \displaystyle e^{\left(2y_{3}+y_{0}\right)^{2}/6i}\sqrt{\frac{2\pi i}{3}} \ \ \ \ \ (26)

Thus the combined integral over {y_{1}} and {y_{2}} is

\displaystyle \sqrt{\frac{\pi i}{2}}e^{-\left(2y_{3}^{2}+y_{0}^{2}\right)/2i}e^{\left(2y_{3}+y_{0}\right)^{2}/6i}\sqrt{\frac{2\pi i}{3}} \displaystyle = \displaystyle \sqrt{\frac{\left(\pi i\right)^{2}}{3}}e^{\left(-6y_{3}^{2}-3y_{0}^{2}+\left(2y_{3}+y_{0}\right)^{2}\right)/6i}\ \ \ \ \ (27)
\displaystyle \displaystyle = \displaystyle \sqrt{\frac{\left(\pi i\right)^{2}}{3}}e^{-\left(y_{3}-y_{0}\right)^{2}/3i} \ \ \ \ \ (28)

The general pattern after {N-1} integrations is (presumably this could be proved by induction, but we’ll accept the result):

\displaystyle \frac{\left(\pi i\right)^{\left(N-1\right)/2}}{\sqrt{N}}e^{-\left(y_{N}-y_{0}\right)^{2}/Ni}=\frac{\left(\pi i\right)^{\left(N-1\right)/2}}{\sqrt{N}}e^{-m\left(x_{N}-x_{0}\right)^{2}/2\hbar\varepsilon Ni} \ \ \ \ \ (29)

where we reverted back to {x_{i}} using 9.
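
As a partial check (entirely my own), we can carry out the iterated integration symbolically for a small {N}, applying the Gaussian formula 16 at each step, and confirm that the claimed pattern emerges:

import sympy as sp

def integrate_one(prefactor, exponent, yk, yk1):
    """Multiply by exp(I*(yk1 - yk)**2) and integrate over yk using formula 16,
    with the exponent written as -a*yk**2 + b*yk + c."""
    expr = sp.expand(exponent + sp.I*(yk1 - yk)**2)
    a = -expr.coeff(yk, 2)
    b = expr.coeff(yk, 1)
    c = expr.coeff(yk, 0)
    return prefactor*sp.sqrt(sp.pi/a), sp.expand(c + b**2/(4*a))

N = 5                                   # an arbitrary small N to test
y = sp.symbols(f'y0:{N + 1}', real=True)
prefactor, exponent = sp.Integer(1), sp.I*(y[1] - y[0])**2
for k in range(1, N):
    prefactor, exponent = integrate_one(prefactor, exponent, y[k], y[k + 1])

claimed_pref = (sp.pi*sp.I)**sp.Rational(N - 1, 2)/sp.sqrt(N)
claimed_exp = -(y[N] - y[0])**2/(N*sp.I)
print(complex(prefactor - claimed_pref))    # ~0 (rounding only)
print(sp.expand(exponent - claimed_exp))    # 0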

Going back to 11, we must multiply the result by {A\left(\frac{2\hbar\varepsilon}{m}\right)^{\left(N-1\right)/2}} to get the final expression for the propagator:

\displaystyle U \displaystyle = \displaystyle A\left(\frac{2\hbar\varepsilon}{m}\right)^{\left(N-1\right)/2}\frac{\left(\pi i\right)^{\left(N-1\right)/2}}{\sqrt{N}}e^{-m\left(x_{N}-x_{0}\right)^{2}/2\hbar\varepsilon Ni}\ \ \ \ \ (30)
\displaystyle \displaystyle = \displaystyle A\left(\frac{2\pi\hbar\varepsilon i}{m}\right)^{N/2}\sqrt{\frac{m}{2\pi\hbar iN\varepsilon}}e^{im\left(x_{N}-x_{0}\right)^{2}/2\hbar\varepsilon N} \ \ \ \ \ (31)

In the limit as {N\rightarrow\infty} and {\varepsilon\rightarrow0}, {N\varepsilon=t_{N}-t_{0}} so we have

\displaystyle U=A\left(\frac{2\pi\hbar\varepsilon i}{m}\right)^{N/2}\sqrt{\frac{m}{2\pi\hbar i\left(t_{N}-t_{0}\right)}}e^{im\left(x_{N}-x_{0}\right)^{2}/2\hbar\left(t_{N}-t_{0}\right)} \ \ \ \ \ (32)

The expression we got earlier using the Schrödinger method is

\displaystyle U\left(x,t;x^{\prime},t^{\prime}\right)=\sqrt{\frac{m}{2\pi\hbar i\left(t-t^{\prime}\right)}}e^{im\left(x-x^{\prime}\right)^{2}/2\hbar\left(t-t^{\prime}\right)} \ \ \ \ \ (33)
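
(As a quick check of my own, this expression does satisfy the free-particle Schrödinger equation in {x} and {t}, as any valid propagator must; sympy confirms it for {t^{\prime}=0}:)

import sympy as sp

x, x_prime = sp.symbols('x x_prime', real=True)
t, m, hbar = sp.symbols('t m hbar', positive=True)

U = sp.sqrt(m/(2*sp.pi*sp.I*hbar*t))*sp.exp(sp.I*m*(x - x_prime)**2/(2*hbar*t))

# i*hbar*dU/dt + (hbar**2/2m)*d^2U/dx^2 should vanish for a free particle
print(sp.simplify(sp.I*hbar*sp.diff(U, t) + hbar**2/(2*m)*sp.diff(U, x, 2)))   # 0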

Thus the full path integral gives the same result, with {t^{\prime}=t_{0}} and {t=t_{N}} (similarly for {x}), provided that we can set

\displaystyle A=\left(\frac{m}{2\pi\hbar\varepsilon i}\right)^{N/2}\equiv B^{-N} \ \ \ \ \ (34)

Shankar then says that it is conventional to associate one factor of {B^{-1}} with each integration over an {x_{i}}, and the remaining factor with the overall process. This seems to gloss over a basic point: as {N\rightarrow\infty} and {\varepsilon\rightarrow0}, {A\rightarrow\infty} while the value of the multiple integral itself tends to zero, and it is only the product of the two that remains finite.