# Welcome to Physics Pages

This blog consists of my notes and solutions to problems in various areas of mainstream physics. An index to the topics covered is contained in the links in the sidebar on the right, or in the menu at the top of the page.

This isn’t a “popular science” site, in that most posts use a fair bit of mathematics to explain their concepts. Thus this blog aims mainly to help those who are learning or reviewing physics in depth. More details on what the site contains and how to use it are on the welcome page.

Despite Stephen Hawking’s caution that every equation included in a book (or, I suppose in a blog) would halve the readership, this blog has proved very popular since its inception in December 2010. Details of the number of visits and distinct visitors are given on the hit statistics page.

Many thanks to my loyal followers and best wishes to everyone who visits. I hope you find it useful. Constructive criticism (or even praise) is always welcome, so feel free to leave a comment in response to any of the posts.

I should point out that although I did study physics at the university level, this was back in the 1970s and by the time I started this blog in December 2010, I had forgotten pretty much everything I had learned back then. This blog represents my journey back to some level of literacy in physics. I am by no means a professional physicist or an authority on any aspect of the subject. I offer this blog as a record of my own notes and problem solutions as I worked through various books, in the hope that it will help, and possibly even inspire, others to explore this wonderful subject.

Before leaving a comment, you may find it useful to read the “Instructions for commenters”.

# M65 & M66 in Leo (25/3/2017)

M65 (upper right) and M66 (lower left) are two galaxies in Leo. They are about 35 million light years away. (Click to enlarge.)

Photo location: Monifieth (near Dundee), Scotland, UK.

Date: 25 March 2017; 2100 UTC.

Telescope: 11-inch Celestron SCT.

Camera: Pentax K3

Exposure: ISO 1600, 93 30-second exposures stacked with DeepSkyStacker.

# Translational invariance in quantum mechanics

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 11, Exercises 11.2.1 – 11.2.2.

In classical mechanics, we’ve seen that if a dynamical variable ${g}$ is used to generate a transformation of the variables ${q_{i}}$ and ${p_{i}}$ (the coordinates and canonical momenta), then if the Hamiltonian is invariant under this transformation, the quantity ${g}$ is conserved, meaning that it remains constant over time. We’d like to extend these results to quantum mechanics, but in doing so, there is one large obstacle. In classical mechanics, we can specify the exact position (given by ${q_{i}}$) and the exact momentum (${p_{i}}$) at every instant in time for every particle. In other words, every particle has a precisely defined trajectory through phase space. Due to the uncertainty principle, we cannot do this in quantum mechanics, since we cannot specify the position and momentum of any particle with arbitrary precision, so we can’t define a precise trajectory for any particle.

The way this problem is usually handled is to examine changes in the expectation values of dynamical variables, rather than their precise values at any given time. In the case of a single particle moving in one dimension, we can apply this idea to investigate how we might invoke translational invariance. Classically, where ${x}$ is the position variable and ${p}$ is the momentum, an infinitesimal translation by a distance ${\varepsilon}$ is given by

$$\begin{aligned}
x &\rightarrow x+\varepsilon \ \ \ \ \ (1)\\
p &\rightarrow p \ \ \ \ \ (2)
\end{aligned}$$

In quantum mechanics, the equivalent translation is reflected in the expectation values:

$$\begin{aligned}
\left\langle X\right\rangle &\rightarrow \left\langle X\right\rangle +\varepsilon \ \ \ \ \ (3)\\
\left\langle P\right\rangle &\rightarrow \left\langle P\right\rangle \ \ \ \ \ (4)
\end{aligned}$$

In order to find the expectation values ${\left\langle X\right\rangle }$ and ${\left\langle P\right\rangle }$ we need to use the state vector ${\left|\psi\right\rangle }$. There are two ways of interpreting the transformation. The first, known as the active transformation picture, is to say that translating the position generates a new state vector ${\left|\psi_{\varepsilon}\right\rangle }$ with the properties

$$\begin{aligned}
\left\langle \psi_{\varepsilon}\left|X\right|\psi_{\varepsilon}\right\rangle &= \left\langle \psi\left|X\right|\psi\right\rangle +\varepsilon \ \ \ \ \ (5)\\
\left\langle \psi_{\varepsilon}\left|P\right|\psi_{\varepsilon}\right\rangle &= \left\langle \psi\left|P\right|\psi\right\rangle \ \ \ \ \ (6)
\end{aligned}$$

Since ${\left|\psi_{\varepsilon}\right\rangle }$ is another state vector in the same vector space as ${\left|\psi\right\rangle }$, there must be an operator ${T\left(\varepsilon\right)}$ which we call the translation operator, and which maps one vector onto the other:

$\displaystyle T\left(\varepsilon\right)\left|\psi\right\rangle =\left|\psi_{\varepsilon}\right\rangle \ \ \ \ \ (7)$

In terms of the translation operator, the translation becomes

$$\begin{aligned}
\left\langle \psi\left|T^{\dagger}\left(\varepsilon\right)XT\left(\varepsilon\right)\right|\psi\right\rangle &= \left\langle \psi\left|X\right|\psi\right\rangle +\varepsilon \ \ \ \ \ (8)\\
\left\langle \psi\left|T^{\dagger}\left(\varepsilon\right)PT\left(\varepsilon\right)\right|\psi\right\rangle &= \left\langle \psi\left|P\right|\psi\right\rangle \ \ \ \ \ (9)
\end{aligned}$$

These relations allow us to define the second interpretation, called the passive transformation picture, in which the state vectors do not change, but rather the position and momentum operators change. That is, we can transform the operators according to

$$\begin{aligned}
X &\rightarrow T^{\dagger}\left(\varepsilon\right)XT\left(\varepsilon\right)=X+\varepsilon I \ \ \ \ \ (10)\\
P &\rightarrow T^{\dagger}\left(\varepsilon\right)PT\left(\varepsilon\right)=P \ \ \ \ \ (11)
\end{aligned}$$
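These requirements are easy to check numerically. Below is a minimal sketch (my own illustration, not from Shankar): a Gaussian packet on a grid, with all parameters chosen arbitrarily, is translated by ${\varepsilon}$; the position expectation moves by ${\varepsilon}$ while the momentum expectation is unchanged.

```python
import numpy as np

# Illustrative check (not from Shankar): translate a Gaussian packet by eps
# and compare <X> and <P> before and after. All parameters are arbitrary.
hbar = 1.0
eps = 0.5
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def psi(u):
    """Normalized Gaussian packet centred at 1 with mean momentum 2."""
    f = np.exp(-(u - 1.0)**2 / 2.0) * np.exp(2j * u / hbar)
    return f / np.sqrt(np.sum(np.abs(f)**2) * dx)

def expect_X(f):
    return np.real(np.sum(np.conj(f) * x * f) * dx)

def expect_P(f):
    # P = -i hbar d/dx, approximated with a central difference
    return np.real(np.sum(np.conj(f) * (-1j * hbar) * np.gradient(f, dx)) * dx)

psi0 = psi(x)
psi_eps = psi(x - eps)      # translated state: psi_eps(x) = psi(x - eps)

print(expect_X(psi_eps) - expect_X(psi0))   # ~ eps
print(expect_P(psi_eps) - expect_P(psi0))   # ~ 0
```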

We need to find the explicit form for ${T}$. To begin, we consider its effect on a position eigenket ${\left|x\right\rangle }$. One possibility is

$\displaystyle T\left(\varepsilon\right)\left|x\right\rangle =\left|x+\varepsilon\right\rangle \ \ \ \ \ (12)$

However, to be completely general, we should consider the case where ${T}$ not only shifts ${x}$ by ${\varepsilon}$, but also introduces a phase factor. That is, the most general effect of ${T}$ is

$\displaystyle T\left(\varepsilon\right)\left|x\right\rangle =e^{i\varepsilon g\left(x\right)/\hbar}\left|x+\varepsilon\right\rangle \ \ \ \ \ (13)$

where ${g\left(x\right)}$ is some arbitrary real function of ${x}$. Using this form, we have, for some arbitrary state vector ${\left|\psi\right\rangle }$:

$$\begin{aligned}
\left|\psi_{\varepsilon}\right\rangle &= T\left(\varepsilon\right)\left|\psi\right\rangle \ \ \ \ \ (14)\\
&= T\left(\varepsilon\right)\int_{-\infty}^{\infty}\left|x\right\rangle \left\langle x\left|\psi\right.\right\rangle dx \ \ \ \ \ (15)\\
&= \int_{-\infty}^{\infty}e^{i\varepsilon g\left(x\right)/\hbar}\left|x+\varepsilon\right\rangle \left\langle x\left|\psi\right.\right\rangle dx \ \ \ \ \ (16)\\
&= \int_{-\infty}^{\infty}e^{i\varepsilon g\left(x^{\prime}-\varepsilon\right)/\hbar}\left|x^{\prime}\right\rangle \left\langle x^{\prime}-\varepsilon\left|\psi\right.\right\rangle dx^{\prime} \ \ \ \ \ (17)
\end{aligned}$$

To get the last line, we changed the integration variable to ${x^{\prime}=x+\varepsilon}$. Multiplying by the bra ${\left\langle x\right|}$ gives, using ${\left\langle x\left|x^{\prime}\right.\right\rangle =\delta\left(x-x^{\prime}\right)}$:

$$\begin{aligned}
\left\langle x\left|T\left(\varepsilon\right)\right|\psi\right\rangle =\left\langle x\left|\psi_{\varepsilon}\right.\right\rangle &= e^{i\varepsilon g\left(x-\varepsilon\right)/\hbar}\left\langle x-\varepsilon\left|\psi\right.\right\rangle \ \ \ \ \ (18)\\
&= e^{i\varepsilon g\left(x-\varepsilon\right)/\hbar}\psi\left(x-\varepsilon\right) \ \ \ \ \ (19)
\end{aligned}$$

That is, the action of ${T\left(\varepsilon\right)}$ is to move the coordinate axis a distance ${\varepsilon}$ to the right, which means that the new state vector ${\left|\psi_{\varepsilon}\right\rangle }$ becomes the old state vector at position ${x-\varepsilon}$. Alternatively, we can leave the coordinate axis alone and shift the wave function a distance ${\varepsilon}$ to the right, so that the new vector at position ${x}$ is the old vector at position ${x-\varepsilon}$ (multiplied by a phase factor).

We can now use this result to calculate 8 and 9:

$$\begin{aligned}
\left\langle \psi_{\varepsilon}\left|X\right|\psi_{\varepsilon}\right\rangle &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left\langle \psi_{\varepsilon}\left|x\right.\right\rangle \left\langle x\left|X\right|x^{\prime}\right\rangle \left\langle x^{\prime}\left|\psi_{\varepsilon}\right.\right\rangle dx\;dx^{\prime} \ \ \ \ \ (20)\\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left\langle \psi_{\varepsilon}\left|x\right.\right\rangle x^{\prime}\delta\left(x-x^{\prime}\right)\left\langle x^{\prime}\left|\psi_{\varepsilon}\right.\right\rangle dx\;dx^{\prime} \ \ \ \ \ (21)\\
&= \int_{-\infty}^{\infty}\left\langle \psi_{\varepsilon}\left|x\right.\right\rangle x\left\langle x\left|\psi_{\varepsilon}\right.\right\rangle dx \ \ \ \ \ (22)\\
&= \int_{-\infty}^{\infty}e^{-i\varepsilon g\left(x-\varepsilon\right)/\hbar}\psi^*\left(x-\varepsilon\right)\,x\,e^{i\varepsilon g\left(x-\varepsilon\right)/\hbar}\psi\left(x-\varepsilon\right)dx \ \ \ \ \ (23)\\
&= \int_{-\infty}^{\infty}\psi^*\left(x-\varepsilon\right)x\psi\left(x-\varepsilon\right)dx \ \ \ \ \ (24)\\
&= \int_{-\infty}^{\infty}\psi^*\left(x^{\prime}\right)\left(x^{\prime}+\varepsilon\right)\psi\left(x^{\prime}\right)dx^{\prime} \ \ \ \ \ (25)\\
&= \left\langle \psi\left|X\right|\psi\right\rangle +\varepsilon \ \ \ \ \ (26)
\end{aligned}$$

In the second line, we used the matrix element of ${X}$

$\displaystyle \left\langle x\left|X\right|x^{\prime}\right\rangle =x^{\prime}\delta\left(x-x^{\prime}\right) \ \ \ \ \ (27)$

and in the penultimate line, we again used the change of integration variable to ${x^{\prime}=x-\varepsilon}$. Thus we regain 8.

The momentum transforms as follows.

$$\begin{aligned}
\left\langle \psi_{\varepsilon}\left|P\right|\psi_{\varepsilon}\right\rangle &= \int_{-\infty}^{\infty}\psi^*\left(x-\varepsilon\right)e^{-i\varepsilon g\left(x-\varepsilon\right)/\hbar}\left(-i\hbar\frac{d}{dx}\right)\left(e^{i\varepsilon g\left(x-\varepsilon\right)/\hbar}\psi\left(x-\varepsilon\right)\right)dx \ \ \ \ \ (28)\\
&= \int_{-\infty}^{\infty}\psi^*\left(x-\varepsilon\right)\left(\varepsilon\frac{d}{dx}\left(g\left(x-\varepsilon\right)\right)\psi\left(x-\varepsilon\right)-i\hbar\frac{d}{dx}\left(\psi\left(x-\varepsilon\right)\right)\right)dx \ \ \ \ \ (29)\\
&= \int_{-\infty}^{\infty}\psi^*\left(x^{\prime}\right)\left(\varepsilon\psi\left(x^{\prime}\right)\frac{d}{dx^{\prime}}g\left(x^{\prime}\right)-i\hbar\frac{d}{dx^{\prime}}\psi\left(x^{\prime}\right)\right)dx^{\prime} \ \ \ \ \ (30)\\
&= \varepsilon\left\langle \frac{d}{dx}g\left(x\right)\right\rangle +\left\langle P\right\rangle \ \ \ \ \ (31)
\end{aligned}$$

In the third line, we again transformed the integration variable to ${x^{\prime}=x-\varepsilon}$, and used the fact that ${dx=dx^{\prime}}$, so a derivative with respect to ${x}$ is the same as a derivative with respect to ${x^{\prime}}$. [This derivation is condensed a bit compared to the derivation of ${\left\langle \psi_{\varepsilon}\left|X\right|\psi_{\varepsilon}\right\rangle }$, but you can insert a couple of sets of complete states and do the extra integrals if you like.]

If we now impose the condition 9 so that the momentum is unchanged by the translation, this is equivalent to choosing the phase function ${g\left(x\right)=0}$, and this is what is done in most applications.
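Equation 31 can also be checked numerically. In this sketch (again my own, with an arbitrary linear phase function ${g\left(x\right)=\lambda x}$, so that ${dg/dx=\lambda}$), the translated state of 19 should pick up exactly ${\varepsilon\lambda}$ of momentum:

```python
import numpy as np

# Sketch of eq. (31) with a nonzero phase function. My choice: g(x) = lam*x,
# so dg/dx = lam and the momentum shift should be exactly eps*lam.
hbar = 1.0
lam = 0.7
eps = 0.3
x = np.linspace(-20.0, 20.0, 8001)
dx = x[1] - x[0]

def psi(u):
    """Normalized Gaussian packet (arbitrary test state)."""
    f = np.exp(-(u - 1.0)**2 / 2.0) * np.exp(2j * u / hbar)
    return f / np.sqrt(np.sum(np.abs(f)**2) * dx)

def expect_P(f):
    return np.real(np.sum(np.conj(f) * (-1j * hbar) * np.gradient(f, dx)) * dx)

# Translated state from eq. (19): psi_eps(x) = exp(i eps g(x-eps)/hbar) psi(x-eps)
psi_eps = np.exp(1j * eps * lam * (x - eps) / hbar) * psi(x - eps)

shift_P = expect_P(psi_eps) - expect_P(psi(x))
print(shift_P)   # ~ eps*lam = 0.21
```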

Having explored the properties of the translation operator, we can now define what we mean by translational invariance in quantum mechanics. This is the requirement that the expectation value of the Hamiltonian is unchanged under the transformation. That is

$\displaystyle \left\langle \psi\left|H\right|\psi\right\rangle =\left\langle \psi_{\varepsilon}\left|H\right|\psi_{\varepsilon}\right\rangle \ \ \ \ \ (32)$

For this, we need the explicit form of ${T\left(\varepsilon\right)}$. Since ${\varepsilon=0}$ corresponds to no translation, we require ${T\left(0\right)=I}$. To first order in ${\varepsilon}$, we can then write

$\displaystyle T\left(\varepsilon\right)=I-\frac{i\varepsilon}{\hbar}G \ \ \ \ \ (33)$

where ${G}$ is some operator, called the generator of translations, that is to be determined. From 13 (with ${g=0}$ from now on), we have

$\displaystyle \left\langle x^{\prime}+\varepsilon\left|x+\varepsilon\right.\right\rangle =\left\langle x^{\prime}\left|T^{\dagger}\left(\varepsilon\right)T\left(\varepsilon\right)\right|x\right\rangle =\delta\left(x^{\prime}-x\right)=\left\langle x^{\prime}\left|x\right.\right\rangle \ \ \ \ \ (34)$

so we must have

$\displaystyle T^{\dagger}\left(\varepsilon\right)T\left(\varepsilon\right)=I \ \ \ \ \ (35)$

so that ${T}$ is unitary. Applying this condition to 33 up to order ${\varepsilon}$, we have

$$\begin{aligned}
T^{\dagger}\left(\varepsilon\right)T\left(\varepsilon\right) &= \left(I+\frac{i\varepsilon}{\hbar}G^{\dagger}\right)\left(I-\frac{i\varepsilon}{\hbar}G\right) \ \ \ \ \ (36)\\
&= I+\frac{i\varepsilon}{\hbar}\left(G^{\dagger}-G\right)+\mathcal{O}\left(\varepsilon^{2}\right) \ \ \ \ \ (37)
\end{aligned}$$

Requiring 35 shows that ${G=G^{\dagger}}$ so ${G}$ is Hermitian. Now, from 19 (${g=0}$ again) we have

$\displaystyle \left\langle x\left|T\left(\varepsilon\right)\right|\psi\right\rangle =\psi\left(x-\varepsilon\right) \ \ \ \ \ (38)$

We expand both sides to order ${\varepsilon}$:

$\displaystyle \left\langle x\left|I\right|\psi\right\rangle -\frac{i\varepsilon}{\hbar}\left\langle x\left|G\right|\psi\right\rangle =\psi\left(x\right)-\varepsilon\frac{d\psi}{dx} \ \ \ \ \ (39)$

Since ${\left\langle x\left|I\right|\psi\right\rangle =\left\langle x\left|\psi\right.\right\rangle =\psi\left(x\right)}$, we have

$\displaystyle \left\langle x\left|G\right|\psi\right\rangle =-i\hbar\frac{d\psi}{dx}=\left\langle x\left|P\right|\psi\right\rangle \ \ \ \ \ (40)$

so ${G=P}$ and the momentum operator is the generator of translations, and the translation operator is, to order ${\varepsilon}$

$\displaystyle T\left(\varepsilon\right)=I-\frac{i\varepsilon}{\hbar}P \ \ \ \ \ (41)$
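As a quick symbolic sanity check (a sketch using an arbitrary concrete wave function, not part of Shankar's derivation), applying ${I-i\varepsilon P/\hbar}$ with ${P=-i\hbar\,d/dx}$ should reproduce ${\psi\left(x-\varepsilon\right)}$ to first order in ${\varepsilon}$:

```python
import sympy as sp

# First-order check of T(eps) = I - i*eps*P/hbar with P = -i*hbar*d/dx,
# using an arbitrary concrete wave function psi(x) = exp(-x^2).
x, eps, hbar = sp.symbols('x epsilon hbar', positive=True)
psi = sp.exp(-x**2)

# (I - i*eps/hbar * P) psi = psi - eps * psi'
lhs = psi - (sp.I * eps / hbar) * (-sp.I * hbar) * sp.diff(psi, x)

# psi(x - eps) expanded to first order in eps
rhs = psi.subs(x, x - eps).series(eps, 0, 2).removeO()

print(sp.simplify(lhs - rhs))  # 0
```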

By plugging this into 32 and expanding the RHS, we find that in order for the Hamiltonian to be invariant, the expectation value of the commutator ${\left[P,H\right]}$ must be zero (the derivation is done in Shankar’s eqn 11.2.15). Using Ehrenfest’s theorem we then find that ${\left\langle \dot{P}\right\rangle =-\frac{i}{\hbar}\left\langle \left[P,H\right]\right\rangle =0}$, so that the expectation value of ${P}$ is conserved over time.

Note that we cannot say that the momentum itself (rather than just its expectation value) is conserved since, due to the uncertainty principle, we never know what the exact momentum is at any given time.

# Fermions and bosons in the infinite square well

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.3.4.

Suppose we have two identical particles in an infinite square well. The energy levels in a well of width ${L}$ are

$\displaystyle E=\frac{\left(\pi n\hbar\right)^{2}}{2mL^{2}} \ \ \ \ \ (1)$

where ${n=1,2,3,\ldots}$ The corresponding wave functions are given by

$\displaystyle \psi_{n}\left(x\right)=\sqrt{\frac{2}{L}}\sin\frac{n\pi x}{L} \ \ \ \ \ (2)$
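The two total energies considered in this exercise follow directly from 1. A short sympy check (my own, with everything left symbolic):

```python
import sympy as sp

# Check the two total energies used in this problem: both particles in n=1,
# and one particle in n=1 with the other in n=2.
n, hbar, m, L = sp.symbols('n hbar m L', positive=True)
E = (sp.pi * n * hbar)**2 / (2 * m * L**2)   # eq. (1)

total_ground = 2 * E.subs(n, 1)              # both particles in n=1
total_mixed = E.subs(n, 1) + E.subs(n, 2)    # one in n=1, one in n=2

assert sp.simplify(total_ground - sp.pi**2 * hbar**2 / (m * L**2)) == 0
assert sp.simplify(total_mixed - 5 * sp.pi**2 * hbar**2 / (2 * m * L**2)) == 0
```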

If the total energy of the two particles is ${\pi^{2}\hbar^{2}/mL^{2}}$, the only possible configuration is for both particles to be in the ground state ${n=1}$. This means the particles must be bosons, so the state vector is

$\displaystyle \psi\left(x_{1},x_{2}\right)=\frac{2}{L}\sin\frac{\pi x_{1}}{L}\sin\frac{\pi x_{2}}{L} \ \ \ \ \ (3)$

If the total energy is ${5\pi^{2}\hbar^{2}/2mL^{2}}$, then one particle is in the state ${n=1}$ and the other is in ${n=2}$. Since the states are different, the particles can be either bosons or fermions. For bosons, the state vector is

$$\begin{aligned}
\psi\left(x_{1},x_{2}\right) &= \frac{1}{\sqrt{2}}\left[\frac{2}{L}\sin\frac{\pi x_{1}}{L}\sin\frac{2\pi x_{2}}{L}+\frac{2}{L}\sin\frac{2\pi x_{1}}{L}\sin\frac{\pi x_{2}}{L}\right] \ \ \ \ \ (4)\\
&= \frac{\sqrt{2}}{L}\left[\sin\frac{\pi x_{1}}{L}\sin\frac{2\pi x_{2}}{L}+\sin\frac{2\pi x_{1}}{L}\sin\frac{\pi x_{2}}{L}\right] \ \ \ \ \ (5)
\end{aligned}$$

For fermions, the state must be antisymmetric, so we have

$\displaystyle \psi\left(x_{1},x_{2}\right)=\frac{\sqrt{2}}{L}\left[\sin\frac{\pi x_{1}}{L}\sin\frac{2\pi x_{2}}{L}-\sin\frac{2\pi x_{1}}{L}\sin\frac{\pi x_{2}}{L}\right] \ \ \ \ \ (6)$
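As a numerical sketch (my own, with ${L=1}$ and a modest grid), we can verify that these two-particle wave functions are respectively symmetric and antisymmetric under ${x_{1}\leftrightarrow x_{2}}$, are normalized, and that the fermion state vanishes at ${x_{1}=x_{2}}$:

```python
import numpy as np

# Numeric sketch: one particle in n=1 and one in n=2 (units with L = 1).
L = 1.0
x = np.linspace(0.0, L, 201)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing='ij')

def phi(n, u):
    """Single-particle infinite-well eigenfunction, eq. (2)."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * u / L)

boson = (phi(1, X1) * phi(2, X2) + phi(2, X1) * phi(1, X2)) / np.sqrt(2)
fermion = (phi(1, X1) * phi(2, X2) - phi(2, X1) * phi(1, X2)) / np.sqrt(2)

# Exchanging x1 <-> x2 is a transpose of the grid
assert np.allclose(boson, boson.T)         # symmetric
assert np.allclose(fermion, -fermion.T)    # antisymmetric

# Both states are normalized; the fermion state vanishes on x1 = x2 (Pauli)
print(np.sum(np.abs(boson)**2) * dx * dx)    # ~ 1
print(np.sum(np.abs(fermion)**2) * dx * dx)  # ~ 1
assert np.allclose(np.diag(fermion), 0)
```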

# Invariance of symmetric and antisymmetric states; exchange operators

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.3.5.

In a system with two particles, the state in the ${X}$ basis is given by ${\left|x_{1},x_{2}\right\rangle }$ where ${x_{i}}$ is the position of particle ${i}$. We can define the exchange operator ${P_{12}}$ as an operator that swaps the two particles, so that

$\displaystyle P_{12}\left|x_{1},x_{2}\right\rangle =\left|x_{2},x_{1}\right\rangle \ \ \ \ \ (1)$

To find the eigenvalues and eigenvectors of ${P_{12}}$ we have

$\displaystyle P_{12}\psi\left(x_{1},x_{2}\right)=\psi\left(x_{2},x_{1}\right)=\alpha\psi\left(x_{1},x_{2}\right) \ \ \ \ \ (2)$

where ${\alpha}$ is the eigenvalue and ${\psi\left(x_{1},x_{2}\right)}$ is the eigenfunction. Using the same argument as before, we can write

$$\begin{aligned}
\left|\psi\left(x_{1},x_{2}\right)\right\rangle &= \beta\left|x_{1},x_{2}\right\rangle +\gamma\left|x_{2},x_{1}\right\rangle \ \ \ \ \ (3)\\
\left|\psi\left(x_{2},x_{1}\right)\right\rangle &= \beta\left|x_{2},x_{1}\right\rangle +\gamma\left|x_{1},x_{2}\right\rangle \ \ \ \ \ (4)\\
&= \alpha\left[\beta\left|x_{1},x_{2}\right\rangle +\gamma\left|x_{2},x_{1}\right\rangle \right] \ \ \ \ \ (5)
\end{aligned}$$

Equating coefficients in the first and third lines, we arrive at

$\displaystyle \alpha=\pm1 \ \ \ \ \ (6)$

which gives the same symmetric and antisymmetric eigenfunctions that we had before:

$$\begin{aligned}
\psi_{S}\left(x_{1},x_{2}\right) &= \frac{1}{\sqrt{2}}\left(\left|x_{1},x_{2}\right\rangle +\left|x_{2},x_{1}\right\rangle \right) \ \ \ \ \ (7)\\
\psi_{A}\left(x_{1},x_{2}\right) &= \frac{1}{\sqrt{2}}\left(\left|x_{1},x_{2}\right\rangle -\left|x_{2},x_{1}\right\rangle \right) \ \ \ \ \ (8)
\end{aligned}$$

We can derive a couple of other properties of the exchange operator by noting that if it is applied twice in succession, we get the original state back, so that

$$\begin{aligned}
P_{12}^{2} &= I \ \ \ \ \ (9)\\
P_{12} &= P_{12}^{-1} \ \ \ \ \ (10)
\end{aligned}$$

Thus the operator is its own inverse.

Consider also the two states ${\left|x_{1}^{\prime},x_{2}^{\prime}\right\rangle }$ and ${\left|x_{1},x_{2}\right\rangle }$. Then

$$\begin{aligned}
\left\langle x_{1}^{\prime},x_{2}^{\prime}\left|P_{12}^{\dagger}P_{12}\right|x_{1},x_{2}\right\rangle &= \left\langle P_{12}x_{1}^{\prime},x_{2}^{\prime}\left|P_{12}x_{1},x_{2}\right.\right\rangle \ \ \ \ \ (11)\\
&= \left\langle x_{2}^{\prime},x_{1}^{\prime}\left|x_{2},x_{1}\right.\right\rangle \ \ \ \ \ (12)\\
&= \left(\left\langle x_{2}^{\prime}\right|\otimes\left\langle x_{1}^{\prime}\right|\right)\left(\left|x_{2}\right\rangle \otimes\left|x_{1}\right\rangle \right) \ \ \ \ \ (13)\\
&= \delta\left(x_{2}^{\prime}-x_{2}\right)\delta\left(x_{1}^{\prime}-x_{1}\right) \ \ \ \ \ (14)
\end{aligned}$$

However, the last line is just equal to the inner product of the original states, that is

$\displaystyle \left\langle x_{1}^{\prime},x_{2}^{\prime}\left|x_{1},x_{2}\right.\right\rangle =\delta\left(x_{2}-x_{2}^{\prime}\right)\delta\left(x_{1}-x_{1}^{\prime}\right)=\delta\left(x_{2}^{\prime}-x_{2}\right)\delta\left(x_{1}^{\prime}-x_{1}\right) \ \ \ \ \ (15)$

This means that

$$\begin{aligned}
P_{12}^{\dagger}P_{12} &= I \ \ \ \ \ (16)\\
P_{12}^{\dagger} &= P_{12}^{-1}=P_{12} \ \ \ \ \ (17)
\end{aligned}$$

Thus ${P_{12}}$ is both Hermitian and unitary.
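On a discrete two-particle basis these properties can be verified directly. The sketch below (my own finite-dimensional stand-in, not from Shankar) builds ${P_{12}}$ as a permutation matrix on states ${\left|i,j\right\rangle }$ and checks that it is its own inverse, Hermitian, unitary, and has eigenvalues ${\pm1}$:

```python
import numpy as np

# Represent P12 on a discrete two-particle basis |i, j> (i, j = 0..n-1)
# as a permutation matrix and verify the properties derived above.
n = 4
dim = n * n
P12 = np.zeros((dim, dim))
for i in range(n):
    for j in range(n):
        P12[j * n + i, i * n + j] = 1.0   # |i, j> -> |j, i>

I = np.eye(dim)
assert np.allclose(P12 @ P12, I)          # its own inverse
assert np.allclose(P12, P12.T)            # Hermitian (real symmetric)
assert np.allclose(P12.T @ P12, I)        # unitary

# Its eigenvalues are +1 and -1 only
eig = np.linalg.eigvalsh(P12)
print(np.unique(np.round(eig, 8)))        # [-1.  1.]
```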

Shankar asks us to show that, for a general basis vector ${\left|\omega_{1},\omega_{2}\right\rangle }$, ${P_{12}\left|\omega_{1},\omega_{2}\right\rangle =\left|\omega_{2},\omega_{1}\right\rangle }$. One argument could be that, since the ${X}$ basis spans the space, we can express any other vector such as ${\left|\omega_{1},\omega_{2}\right\rangle }$ as a linear combination of the ${\left|x_{1},x_{2}\right\rangle }$ vectors, so that applying ${P_{12}}$ to ${\left|\omega_{1},\omega_{2}\right\rangle }$ means applying it to a sum of ${\left|x_{1},x_{2}\right\rangle }$ vectors, which swaps the two particles in every term. I’m not sure if this is a rigorous result. In any case, if we accept this result it shows that if we start in a state that is totally symmetric (that is, a boson state), this state is an eigenvector of ${P_{12}}$ with eigenvalue ${+1}$. Similarly, if we start in an antisymmetric (fermion) state, this state is an eigenvector of ${P_{12}}$ with eigenvalue ${-1}$.

Now we can look at some other properties of ${P_{12}}$. Consider

$$\begin{aligned}
P_{12}X_{1}P_{12}\left|x_{1},x_{2}\right\rangle &= P_{12}X_{1}\left|x_{2},x_{1}\right\rangle \ \ \ \ \ (18)\\
&= x_{2}P_{12}\left|x_{2},x_{1}\right\rangle \ \ \ \ \ (19)\\
&= x_{2}\left|x_{1},x_{2}\right\rangle \ \ \ \ \ (20)\\
&= X_{2}\left|x_{1},x_{2}\right\rangle \ \ \ \ \ (21)
\end{aligned}$$

This follows because the operator ${X_{1}}$ operates on the first particle in the state ${\left|x_{2},x_{1}\right\rangle }$ which on the RHS of the first line is at position ${x_{2}}$. Thus ${X_{1}\left|x_{2},x_{1}\right\rangle =x_{2}\left|x_{2},x_{1}\right\rangle }$, that is, ${X_{1}}$ returns the numerical value of the position of the first particle, which is ${x_{2}}$. This means that in terms of the operators alone

$$\begin{aligned}
P_{12}X_{1}P_{12} &= X_{2} \ \ \ \ \ (22)\\
P_{12}X_{2}P_{12} &= X_{1} \ \ \ \ \ (23)\\
P_{12}P_{1}P_{12} &= P_{2} \ \ \ \ \ (24)\\
P_{12}P_{2}P_{12} &= P_{1} \ \ \ \ \ (25)
\end{aligned}$$

In the last two lines, the operator ${P_{i}}$ is the momentum of particle ${i}$, and the result follows by applying the operators to the momentum basis state ${\left|p_{1},p_{2}\right\rangle }$.

For some general operator which can be expanded in a power series of terms containing powers of ${X_{i}}$ and/or ${P_{i}}$, we can use 10 to insert ${P_{12}P_{12}}$ between every factor of ${X_{i}}$ or ${P_{i}}$. For example

$$\begin{aligned}
P_{12}P_{1}X_{2}^{2}X_{1}P_{12} &= P_{12}P_{1}P_{12}P_{12}X_{2}P_{12}P_{12}X_{2}P_{12}P_{12}X_{1}P_{12} \ \ \ \ \ (26)\\
&= P_{2}X_{1}^{2}X_{2} \ \ \ \ \ (27)
\end{aligned}$$

That is, for any operator ${\Omega\left(X_{1},P_{1};X_{2},P_{2}\right)}$ we have

$\displaystyle P_{12}\Omega\left(X_{1},P_{1};X_{2},P_{2}\right)P_{12}=\Omega\left(X_{2},P_{2};X_{1},P_{1}\right) \ \ \ \ \ (28)$
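The operator identities 22 through 27 can likewise be checked with finite-dimensional stand-ins, writing ${X_{1}=x\otimes I}$ and ${X_{2}=I\otimes x}$ (the single-particle matrices below are my own arbitrary choices, not physical operators):

```python
import numpy as np

# Finite-dimensional check of P12 X1 P12 = X2 etc., with X1 = x (x) I and
# X2 = I (x) x; "p" here is just an arbitrary Hermitian stand-in matrix.
n = 5
x = np.diag(np.linspace(-1.0, 1.0, n))       # single-particle "position"
p = np.diag(np.arange(1.0, n), 1); p = p + p.T   # single-particle "momentum"
In = np.eye(n)
X1, X2 = np.kron(x, In), np.kron(In, x)
P1, P2 = np.kron(p, In), np.kron(In, p)

# Exchange operator |i, j> -> |j, i>
P12 = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P12[j * n + i, i * n + j] = 1.0

assert np.allclose(P12 @ X1 @ P12, X2)       # eq. (22)
assert np.allclose(P12 @ P1 @ P12, P2)       # eq. (24)
# The product example of eqs. (26)-(27):
assert np.allclose(P12 @ P1 @ X2 @ X2 @ X1 @ P12, P2 @ X1 @ X1 @ X2)
```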

The Hamiltonian for a system of two identical particles must be symmetric under exchange of the particles, since it represents an observable (the energy), and this observable must remain unchanged if we swap the particles. (In the case of two fermions, the wave function is antisymmetric, but the wave function itself is not an observable. The wave function gets multiplied by ${-1}$ if we swap the particles, but the square modulus of the wave function, which contains the physics, remains the same.) Thus we have

$\displaystyle P_{12}H\left(X_{1},P_{1};X_{2},P_{2}\right)P_{12}=H\left(X_{2},P_{2};X_{1},P_{1}\right)=H\left(X_{1},P_{1};X_{2},P_{2}\right) \ \ \ \ \ (29)$

[Note that this condition doesn’t necessarily follow if the two particles are not identical, since exchanging them in this case leads to an observably different system. For example, exchanging the proton and electron in a hydrogen atom leads to a different system.]

The propagator is defined as

$\displaystyle U\left(t\right)=e^{-iHt/\hbar} \ \ \ \ \ (30)$

and the propagator dictates how a state evolves according to

$\displaystyle \left|\psi\left(t\right)\right\rangle =U\left(t\right)\left|\psi\left(0\right)\right\rangle \ \ \ \ \ (31)$

Since the only operator on which ${U}$ depends is ${H}$, then ${U}$ is also invariant, so that

$\displaystyle P_{12}U\left(X_{1},P_{1};X_{2},P_{2}\right)P_{12}=U\left(X_{2},P_{2};X_{1},P_{1}\right)=U\left(X_{1},P_{1};X_{2},P_{2}\right) \ \ \ \ \ (32)$

Multiplying from the left by ${P_{12}}$ and subtracting, we get the commutator

$\displaystyle \left[U,P_{12}\right]=0 \ \ \ \ \ (33)$

For a symmetric state ${\left|\psi_{S}\right\rangle }$ or antisymmetric state ${\left|\psi_{A}\right\rangle }$, we have

$$\begin{aligned}
UP_{12}\left|\psi_{S}\left(0\right)\right\rangle &= U\left|\psi_{S}\left(0\right)\right\rangle =\left|\psi_{S}\left(t\right)\right\rangle =P_{12}U\left|\psi_{S}\left(0\right)\right\rangle \ \ \ \ \ (34)\\
UP_{12}\left|\psi_{A}\left(0\right)\right\rangle &= -U\left|\psi_{A}\left(0\right)\right\rangle =-\left|\psi_{A}\left(t\right)\right\rangle =P_{12}U\left|\psi_{A}\left(0\right)\right\rangle \ \ \ \ \ (35)
\end{aligned}$$

This means that states that begin as symmetric or antisymmetric remain symmetric or antisymmetric for all time. In other words, a system that starts in an eigenstate of ${P_{12}}$ remains in the same eigenstate as time passes.
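A small numerical sketch of this conclusion (my own, using a random but exchange-symmetric Hamiltonian on a finite basis):

```python
import numpy as np
from scipy.linalg import expm

# Sketch: an exchange-symmetric two-particle Hamiltonian commutes with P12,
# so its propagator U = exp(-iHt/hbar) preserves the symmetry of the state.
hbar = 1.0
n = 4
dim = n * n
rng = np.random.default_rng(0)

# Exchange operator |i, j> -> |j, i>
P12 = np.zeros((dim, dim))
for i in range(n):
    for j in range(n):
        P12[j * n + i, i * n + j] = 1.0

# H = h(1) + h(2) + V, with the interaction V symmetrized under exchange
h = rng.normal(size=(n, n)); h = h + h.T
V = rng.normal(size=(dim, dim)); V = V + V.T
V = (V + P12 @ V @ P12) / 2
H = np.kron(h, np.eye(n)) + np.kron(np.eye(n), h) + V

U = expm(-1j * H * 0.7 / hbar)
assert np.allclose(P12 @ H @ P12, H)     # H invariant under exchange
assert np.allclose(U @ P12, P12 @ U)     # [U, P12] = 0

# A symmetric initial state stays symmetric for all time
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi_S = psi0 + P12 @ psi0                # P12 psi_S = psi_S
psi_t = U @ psi_S
assert np.allclose(P12 @ psi_t, psi_t)
```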

# Compound systems of fermions and bosons

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.3.6.

In a system of identical particles, we’ve seen that if the particles are bosons, the state vector is symmetric with respect to the exchange of any two particles (that is, ${\psi\left(a,b\right)=\psi\left(b,a\right)}$ where ${a}$ and ${b}$ are any two of the particles in the system), while for fermions, the state vector is antisymmetric, meaning that ${\psi\left(a,b\right)=-\psi\left(b,a\right)}$. What happens if we have a compound object such as a hydrogen atom that is composed of a collection of fermions and/or bosons?

Suppose we look at the hydrogen atom in particular. It is composed of a proton and an electron, both of which are fermions. The proton and electron are not, of course, identical particles, but now suppose we have two hydrogen atoms. The two protons are identical fermions, just as are the two electrons. However, when analyzing a system of two hydrogen atoms, the relevant question is what happens to the state vector if we exchange the two atoms. In doing so, we exchange both the two protons and the two electrons. Each exchange multiplies the state vector by ${-1}$, so the net effect of exchanging both protons and both electrons is to multiply the state vector by ${\left(-1\right)^{2}=1}$. In other words, a hydrogen atom acts as a boson, even though it is composed of two fermions.

In general, if we have a compound object containing ${n}$ fermions, then the state vector for a system of two such objects is multiplied by ${\left(-1\right)^{n}}$ when these two objects are exchanged. That is, a compound object containing an even number of fermions behaves as a boson, while if it contains an odd number of fermions, it behaves as a fermion.
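The sign bookkeeping can be checked directly: exchanging two composites of ${n}$ fermions is a product of ${n}$ disjoint transpositions, whose sign is the determinant of the corresponding permutation matrix:

```python
import numpy as np

# Sketch: exchanging two composites that each contain n fermions is a product
# of n constituent transpositions; its sign, the determinant of the permutation
# matrix, is (-1)^n. For n = 2 (e.g. hydrogen: proton + electron) we get +1.
def exchange_sign(n):
    """Sign of the permutation swapping constituents 0..n-1 with n..2n-1."""
    perm = list(range(n, 2 * n)) + list(range(n))
    M = np.zeros((2 * n, 2 * n))
    for i, j in enumerate(perm):
        M[i, j] = 1.0
    return int(round(np.linalg.det(M)))

print([exchange_sign(n) for n in (1, 2, 3, 4)])  # [-1, 1, -1, 1]
```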

A compound object consisting entirely of bosons will always behave as a boson, no matter how many such bosonic particles it contains, since interchanging all ${n}$ bosons just multiplies the state vector by ${\left(+1\right)^{n}=1}$.

# Identical particles – bosons and fermions revisited

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercises 10.3.1 – 10.3.3.

Although we’ve looked at the quantum treatment of identical particles as done by Griffiths, it’s worth summarizing Shankar’s treatment of the topic as it provides a few more insights.

In classical physics, suppose we have two identical particles, where ‘identical’ here means that all their physical properties such as mass, size, shape, charge and so on are the same. Suppose we do an experiment in which these two particles collide and rebound in some way. Can we tell which particle ends up in which location? We’re not allowed to label the particles by writing on them, for example, since then they would no longer be identical. In classical physics, we can determine which particle is which by tracing their histories. For example, if we start with particle 1 at position ${\mathbf{r}_{1}}$ and particle 2 at position ${\mathbf{r}_{2}}$, then let them collide, and finally measure their locations at some time after the collision, we might find that one particle ends up at position ${\mathbf{r}_{3}}$ and the other at position ${\mathbf{r}_{4}}$. If we videoed the collision event, we would see the two particles follow well-defined paths before and after the collision, so by observing which particle followed the path that leads from ${\mathbf{r}_{1}}$ to the collision and then out again, we can tell whether it ends up at ${\mathbf{r}_{3}}$ or ${\mathbf{r}_{4}}$. That is, the identification of a particle depends on our ability to watch it as it travels through space.

In quantum mechanics, because of the uncertainty principle, a particle does not have a well-defined trajectory, since in order to define such a trajectory, we would need to specify its position and momentum precisely at each instant of time as it travels. In terms of our collision experiment, if we measured one particle to be at starting position ${\mathbf{r}_{1}}$ at time ${t=0}$ then we know nothing about its momentum, because we specified the position exactly. Thus we can’t tell what trajectory this particle will follow. If we measure the two particles at positions ${\mathbf{r}_{1}}$ and ${\mathbf{r}_{2}}$ at ${t=0}$, and then at ${\mathbf{r}_{3}}$ and ${\mathbf{r}_{4}}$ at some later time, we have no way of knowing which particle ends up at ${\mathbf{r}_{3}}$ and which at ${\mathbf{r}_{4}}$. In terms of the state vector, this means that the physics in the state vector must be the same if we exchange the two particles within the wave function. Since multiplying a state vector ${\psi}$ by some complex constant ${\alpha}$ leaves the physics unchanged, this means that we require

$\displaystyle \psi\left(a,b\right)=\alpha\psi\left(b,a\right) \ \ \ \ \ (1)$

where ${a}$ and ${b}$ represent the two particles.

For a two-particle system, the vector space is spanned by a direct product of the two one-particle vector spaces. Thus the two basis vectors in this vector space that can describe the two particles ${a}$ and ${b}$ are ${\left|ab\right\rangle }$ and ${\left|ba\right\rangle }$. If these two particles are identical, then ${\psi}$ must be some linear combination of these two vectors that satisfies 1. That is

 $\displaystyle \psi\left(b,a\right)$ $\displaystyle =$ $\displaystyle \beta\left|ab\right\rangle +\gamma\left|ba\right\rangle \ \ \ \ \ (2)$ $\displaystyle \psi\left(a,b\right)$ $\displaystyle =$ $\displaystyle \alpha\psi\left(b,a\right)\ \ \ \ \ (3)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \alpha\left(\beta\left|ab\right\rangle +\gamma\left|ba\right\rangle \right) \ \ \ \ \ (4)$

However, ${\psi\left(a,b\right)}$ is also just ${\psi\left(b,a\right)}$ with ${a}$ swapped with ${b}$, that is

$\displaystyle \psi\left(a,b\right)=\beta\left|ba\right\rangle +\gamma\left|ab\right\rangle \ \ \ \ \ (5)$

Since ${\left|ab\right\rangle }$ and ${\left|ba\right\rangle }$ are independent, we can equate their coefficients in the last two equations to get

 $\displaystyle \alpha\beta$ $\displaystyle =$ $\displaystyle \gamma\ \ \ \ \ (6)$ $\displaystyle \alpha\gamma$ $\displaystyle =$ $\displaystyle \beta \ \ \ \ \ (7)$

Inserting the second equation into the first, we get

 $\displaystyle \alpha^{2}\gamma$ $\displaystyle =$ $\displaystyle \gamma\ \ \ \ \ (8)$ $\displaystyle \alpha^{2}$ $\displaystyle =$ $\displaystyle 1\ \ \ \ \ (9)$ $\displaystyle \alpha$ $\displaystyle =$ $\displaystyle \pm1 \ \ \ \ \ (10)$

Thus the two possible state functions satisfying 1 are combinations of ${\left|ab\right\rangle }$ and ${\left|ba\right\rangle }$ such that

$\displaystyle \psi\left(a,b\right)=\pm\psi\left(b,a\right) \ \ \ \ \ (11)$

The plus sign gives the symmetric state, which can be written as

$\displaystyle \psi\left(ab,S\right)=\frac{1}{\sqrt{2}}\left(\left|ab\right\rangle +\left|ba\right\rangle \right) \ \ \ \ \ (12)$

and the minus sign gives the antisymmetric state

$\displaystyle \psi\left(ab,A\right)=\frac{1}{\sqrt{2}}\left(\left|ab\right\rangle -\left|ba\right\rangle \right) \ \ \ \ \ (13)$

The ${\frac{1}{\sqrt{2}}}$ factor normalizes the states so that

 $\displaystyle \left\langle \psi\left(ab,S\right)\left|\psi\left(ab,S\right)\right.\right\rangle$ $\displaystyle =$ $\displaystyle 1\ \ \ \ \ (14)$ $\displaystyle \left\langle \psi\left(ab,A\right)\left|\psi\left(ab,A\right)\right.\right\rangle$ $\displaystyle =$ $\displaystyle 1 \ \ \ \ \ (15)$

This follows because the basis vectors ${\left|ab\right\rangle }$ and ${\left|ba\right\rangle }$ are orthonormal vectors.

Particles with symmetric states are called bosons and particles with antisymmetric states are called fermions. The Pauli exclusion principle for fermions follows directly from 13, since if we set the state variables of the two particles to be the same, that is, ${a=b}$, then

$\displaystyle \psi\left(aa,A\right)=\frac{1}{\sqrt{2}}\left(\left|aa\right\rangle -\left|aa\right\rangle \right)=0 \ \ \ \ \ (16)$

The symmetry or antisymmetry rules apply to all the properties of the particle taken as an aggregate. That is, the labels ${a}$ and ${b}$ can refer to the particle’s location plus its other quantum numbers such as spin, charge, and so on. For two fermions to be excluded, their states must be identical in all their quantum numbers, so two fermions with the same orbital location (for example, two electrons in the same orbital within an atom) are allowed if their spins are different.
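As a quick numerical sanity check, we can build these two-particle states as Kronecker (direct) products and verify the normalization, the exchange symmetry and the exclusion property. This is just a sketch using numpy; the particular vectors chosen for ${\left|a\right\rangle }$ and ${\left|b\right\rangle }$ are arbitrary orthonormal basis vectors.

```python
import numpy as np

# Hypothetical orthonormal single-particle states |a> and |b>
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# Two-particle basis kets as direct products: |ab> = |a> (x) |b>
ab = np.kron(a, b)
ba = np.kron(b, a)

# Symmetric and antisymmetric combinations, eqs (12) and (13)
psi_S = (ab + ba) / np.sqrt(2)
psi_A = (ab - ba) / np.sqrt(2)

# Both states are normalized, since |ab> and |ba> are orthonormal
print(np.isclose(psi_S @ psi_S, 1.0), np.isclose(psi_A @ psi_A, 1.0))  # True True

# Exchanging the particles sends |ab> <-> |ba>: psi_S is unchanged while
# psi_A changes sign, as in eq (11)
print(np.allclose((ba + ab) / np.sqrt(2), psi_S))   # True
print(np.allclose((ba - ab) / np.sqrt(2), -psi_A))  # True

# Pauli exclusion, eq (16): the antisymmetric state vanishes when a = b
print(np.allclose(np.kron(a, a) - np.kron(a, a), 0))  # True
```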

Example 1 Suppose we have 2 identical bosons that are measured to be in states ${\left|\phi\right\rangle }$ and ${\left|\chi\right\rangle }$ where ${\left\langle \phi\left|\chi\right.\right\rangle \ne0}$. What is their combined state vector? Since they are bosons, their state vector must be symmetric, so we must have

 $\displaystyle \psi\left(\phi,\chi\right)$ $\displaystyle =$ $\displaystyle A\left|\phi\chi\right\rangle +B\left|\chi\phi\right\rangle \ \ \ \ \ (17)$

Because ${\psi}$ must be symmetric, we must have ${A=B}$, so that ${\psi\left(\phi,\chi\right)=\psi\left(\chi,\phi\right)}$. The 2-particle states can be written as direct products, so we have

$\displaystyle \psi\left(\phi,\chi\right)=A\left(\left|\phi\right\rangle \otimes\left|\chi\right\rangle +\left|\chi\right\rangle \otimes\left|\phi\right\rangle \right) \ \ \ \ \ (18)$

To normalize, we have, assuming that ${\left|\phi\right\rangle }$ and ${\left|\chi\right\rangle }$ are normalized:

 $\displaystyle \left|\psi\right|^{2}$ $\displaystyle =$ $\displaystyle 1\ \ \ \ \ (19)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left|A\right|^{2}\left(\left\langle \phi\right|\otimes\left\langle \chi\right|+\left\langle \chi\right|\otimes\left\langle \phi\right|\right)\left(\left|\phi\right\rangle \otimes\left|\chi\right\rangle +\left|\chi\right\rangle \otimes\left|\phi\right\rangle \right)\ \ \ \ \ (20)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left|A\right|^{2}\left(1+1+\left|\left\langle \phi\left|\chi\right.\right\rangle \right|^{2}+\left|\left\langle \chi\left|\phi\right.\right\rangle \right|^{2}\right)\ \ \ \ \ (21)$ $\displaystyle A$ $\displaystyle =$ $\displaystyle \frac{\pm1}{\sqrt{2\left(1+\left|\left\langle \phi\left|\chi\right.\right\rangle \right|^{2}\right)}} \ \ \ \ \ (22)$

Thus the normalized state vector is (choosing the + sign):

 $\displaystyle \psi\left(\phi,\chi\right)$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2\left(1+\left|\left\langle \phi\left|\chi\right.\right\rangle \right|^{2}\right)}}\left(\left|\phi\chi\right\rangle +\left|\chi\phi\right\rangle \right) \ \ \ \ \ (23)$

Notice that this reduces to 12 if ${\left\langle \phi\left|\chi\right.\right\rangle =0}$.
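We can verify the normalization constant 22 numerically for an arbitrary (hypothetical) pair of normalized but non-orthogonal states:

```python
import numpy as np

# Hypothetical normalized, non-orthogonal single-particle states
phi = np.array([1.0, 0.0])
chi = np.array([1.0, 1.0]) / np.sqrt(2)   # <phi|chi> = 1/sqrt(2) != 0

overlap = phi @ chi
A = 1.0 / np.sqrt(2 * (1 + abs(overlap) ** 2))   # eq (22), choosing the + sign

# Symmetrized two-boson state, eq (23)
psi = A * (np.kron(phi, chi) + np.kron(chi, phi))

print(np.isclose(psi @ psi, 1.0))   # True: the state is normalized
```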

For more than 2 particles, we need to form state vectors that are either totally symmetric or totally antisymmetric.

Example 2 Suppose we have 3 identical bosons, and they are measured to be in states 3, 3 and 4. Since two of them are in the same state, there are 3 possible combinations, which we can write as ${\left|334\right\rangle ,}$ ${\left|343\right\rangle }$ and ${\left|433\right\rangle }$. Assuming these states are orthonormal, the full normalized state vector is

$\displaystyle \psi\left(3,3,4\right)=\frac{1}{\sqrt{3}}\left(\left|334\right\rangle +\left|343\right\rangle +\left|433\right\rangle \right) \ \ \ \ \ (24)$

The ${\frac{1}{\sqrt{3}}}$ ensures that ${\left|\psi\left(3,3,4\right)\right|^{2}=1}$.

Incidentally, for ${N\ge3}$ particles, it turns out to be impossible to construct a linear combination of the basis states such that the overall state vector is symmetric with respect to the interchange of some pairs of particles and antisymmetric with respect to the interchange of other pairs. A general proof for all ${N}$ requires group theory, but for ${N=3}$ we can show this by brute force. There are ${3!=6}$ basis vectors

$\displaystyle \left|123\right\rangle ,\left|231\right\rangle ,\left|312\right\rangle ,\left|132\right\rangle ,\left|321\right\rangle ,\left|213\right\rangle \ \ \ \ \ (25)$

Suppose we require the compound state vector to be symmetric with respect to exchanging 1 and 2. We then must have

$\displaystyle \psi=A\left(\left|123\right\rangle +\left|213\right\rangle \right)+B\left(\left|231\right\rangle +\left|132\right\rangle \right)+C\left(\left|312\right\rangle +\left|321\right\rangle \right) \ \ \ \ \ (26)$

If we now try to make ${\psi}$ antisymmetric with respect to exchanging 2 and 3, we must have

$\displaystyle \psi=D\left(\left|123\right\rangle -\left|132\right\rangle \right)+E\left(\left|231\right\rangle -\left|321\right\rangle \right)+F\left(\left|312\right\rangle -\left|213\right\rangle \right) \ \ \ \ \ (27)$

Comparing the two, we see that

 $\displaystyle A$ $\displaystyle =$ $\displaystyle D=-F\ \ \ \ \ (28)$ $\displaystyle B$ $\displaystyle =$ $\displaystyle E=-D\ \ \ \ \ (29)$ $\displaystyle C$ $\displaystyle =$ $\displaystyle F=-E \ \ \ \ \ (30)$

Eliminating ${A}$, ${B}$ and ${C}$ and combining the 3 equations, we have

$\displaystyle D=-E=F \ \ \ \ \ (31)$

But from the first equation, we have ${D=-F}$, so ${F=-F=0}$. From the other equations, this implies that ${D=-F=0}$ and ${E=-F=0}$, and thus that ${A=B=C=0}$. So there is no non-trivial solution that allows both a symmetric and antisymmetric particle exchange within the same state vector.
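The coefficient-matching argument can also be checked by handing the six equations to a linear solver; a sketch using sympy, with the equations read off by comparing 26 and 27 ket by ket:

```python
import sympy as sp

A, B, C, D, E, F = sp.symbols('A B C D E F')

# Equate the coefficients of each basis ket in eqs (26) and (27):
# |123>: A = D    |213>: A = -F   |231>: B = E
# |132>: B = -D   |312>: C = F    |321>: C = -E
eqs = [A - D, A + F, B - E, B + D, C - F, C + E]

sol = sp.linsolve(eqs, [A, B, C, D, E, F])
print(sol)   # only the trivial solution (0, 0, 0, 0, 0, 0)
```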

Example 3 Suppose we have 3 particles and only 3 distinct states that each particle can have. If the particles are distinguishable (not identical) the total number of states is found by considering the possibilities. If all 3 particles are in different states, then there are ${3!=6}$ possible overall states. If two particles are in one state and one particle in another, there are ${\binom{3}{2}=3}$ ways of choosing the two states, for each of which there are 2 ways of partitioning these two states (that is, which state has 2 particles and which has the other one), and for each of those there are 3 possible configurations, so there are ${3\times2\times3=18}$ possible configurations. Finally, if all 3 particles are in the same state, there are 3 possibilities. Thus the total for distinguishable particles is ${6+18+3=27}$.

If the particles are bosons, then if all 3 are in different states, there is only 1 symmetric combination of the 6 basis states. If two particles are in one state and one particle in another, there are ${3\times2=6}$ ways of partitioning the states, each of which contributes only one symmetric overall state. Finally, if all 3 particles are in the same state, there are 3 possibilities. Thus the total for bosons is ${1+6+3=10}$.

For fermions, all three particles must be in different states, so there is only 1 possibility.
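These three counts are easy to verify by brute-force enumeration; a sketch using itertools, where each configuration is a triple giving the state of each particle:

```python
from itertools import product

configs = list(product(range(3), repeat=3))   # state of each of the 3 particles

# Distinguishable particles: every assignment is a distinct state
print(len(configs))   # 27

# Bosons: only the (sorted) multiset of occupied states matters, since each
# multiset contributes exactly one symmetric combination
print(len({tuple(sorted(c)) for c in configs}))   # 10

# Fermions: all three single-particle states must be distinct
print(len({tuple(sorted(c)) for c in configs if len(set(c)) == 3}))   # 1
```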

# Harmonic oscillator in 2-d and 3-d, and in polar and spherical coordinates

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercises 10.2.2 – 10.2.3.

We’ve seen that the 3-d isotropic harmonic oscillator can be solved in rectangular coordinates using separation of variables. The Hamiltonian is

$\displaystyle H=\frac{p_{x}^{2}+p_{y}^{2}+p_{z}^{2}}{2m}+\frac{m\omega^{2}}{2}\left(x^{2}+y^{2}+z^{2}\right) \ \ \ \ \ (1)$

The solution to the Schrödinger equation is just the product of three one-dimensional oscillator eigenfunctions, one for each coordinate. That is

$\displaystyle \psi_{n}\left(x,y,z\right)=\psi_{n_{x}}\left(x\right)\psi_{n_{y}}\left(y\right)\psi_{n_{z}}\left(z\right) \ \ \ \ \ (2)$

Each one-dimensional eigenfunction can be expressed in terms of Hermite polynomials as

$\displaystyle \psi_{n_{x}}\left(x\right)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^{n_{x}}n_{x}!}}H_{n_{x}}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)e^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (3)$

with the functions for ${y}$ and ${z}$ obtained by replacing ${x}$ by ${y}$ or ${z}$ and ${n_{x}}$ by ${n_{y}}$ or ${n_{z}}$. We also saw earlier that in the 3-d oscillator, the total energy for state ${\psi_{n}\left(x,y,z\right)}$ is given in terms of the quantum numbers of the three 1-d oscillators as

$\displaystyle E_{n}=\hbar\omega\left(n+\frac{3}{2}\right)=\hbar\omega\left(n_{x}+n_{y}+n_{z}+\frac{3}{2}\right) \ \ \ \ \ (4)$

and that the degeneracy of level ${n}$ is ${\frac{1}{2}\left(n+1\right)\left(n+2\right)}$.
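The degeneracy formula can be checked by counting the triples ${\left(n_{x},n_{y},n_{z}\right)}$ directly; a short sketch:

```python
from itertools import product

# Brute-force count of the states with n_x + n_y + n_z = n, compared with the
# closed-form degeneracy (n + 1)(n + 2)/2
for n in range(6):
    count = sum(1 for t in product(range(n + 1), repeat=3) if sum(t) == n)
    print(n, count, (n + 1) * (n + 2) // 2)   # the last two columns agree
```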

Since the Hermite polynomial ${H_{n_{x}}}$ has parity ${\left(-1\right)^{n_{x}}}$ (that is, odd (even) polynomials are odd (even) functions), the 3-d wave function ${\psi_{n}}$ has parity ${\left(-1\right)^{n_{x}}\left(-1\right)^{n_{y}}\left(-1\right)^{n_{z}}=\left(-1\right)^{n}}$.

We can write the one ${n=0}$ state and three ${n=1}$ states in spherical coordinates using the standard transformation

 $\displaystyle x$ $\displaystyle =$ $\displaystyle r\sin\theta\cos\phi\ \ \ \ \ (5)$ $\displaystyle y$ $\displaystyle =$ $\displaystyle r\sin\theta\sin\phi\ \ \ \ \ (6)$ $\displaystyle z$ $\displaystyle =$ $\displaystyle r\cos\theta \ \ \ \ \ (7)$

Using the notation ${\psi_{n}=\psi_{n_{x}n_{y}n_{z}}=\psi_{n_{x}}\psi_{n_{y}}\psi_{n_{z}}}$, we have, using ${H_{0}\left(y\right)=1}$ and ${H_{1}\left(y\right)=2y}$:

 $\displaystyle \psi_{000}$ $\displaystyle =$ $\displaystyle \left(\frac{m\omega}{\pi\hbar}\right)^{3/4}e^{-m\omega r^{2}/2\hbar}\ \ \ \ \ (8)$ $\displaystyle \psi_{100}$ $\displaystyle =$ $\displaystyle \sqrt{\frac{2m\omega}{\hbar}}\left(\frac{m\omega}{\pi\hbar}\right)^{3/4}e^{-m\omega r^{2}/2\hbar}r\sin\theta\cos\phi\ \ \ \ \ (9)$ $\displaystyle \psi_{010}$ $\displaystyle =$ $\displaystyle \sqrt{\frac{2m\omega}{\hbar}}\left(\frac{m\omega}{\pi\hbar}\right)^{3/4}e^{-m\omega r^{2}/2\hbar}r\sin\theta\sin\phi\ \ \ \ \ (10)$ $\displaystyle \psi_{001}$ $\displaystyle =$ $\displaystyle \sqrt{\frac{2m\omega}{\hbar}}\left(\frac{m\omega}{\pi\hbar}\right)^{3/4}e^{-m\omega r^{2}/2\hbar}r\cos\theta \ \ \ \ \ (11)$

We can check that these are the correct spherical versions of the eigenfunctions by using the Schrödinger equation in spherical coordinates, which is

$\displaystyle H\psi=\left[-\frac{\hbar^{2}\nabla^{2}}{2m}+\frac{m\omega^{2}}{2}r^{2}\right]\psi=E\psi \ \ \ \ \ (12)$

The spherical laplacian operator is

$\displaystyle \nabla^{2}\psi=\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial\psi}{\partial r}\right)+\frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}\psi}{\partial\phi^{2}} \ \ \ \ \ (13)$

You can grind through the derivatives by hand if you like, but I just used Maple to do it, giving the results

 $\displaystyle H\psi_{000}$ $\displaystyle =$ $\displaystyle \frac{3}{2}\hbar\omega\psi_{000}\ \ \ \ \ (14)$ $\displaystyle H\psi_{100}$ $\displaystyle =$ $\displaystyle \frac{5}{2}\hbar\omega\psi_{100}\ \ \ \ \ (15)$ $\displaystyle H\psi_{010}$ $\displaystyle =$ $\displaystyle \frac{5}{2}\hbar\omega\psi_{010}\ \ \ \ \ (16)$ $\displaystyle H\psi_{001}$ $\displaystyle =$ $\displaystyle \frac{5}{2}\hbar\omega\psi_{001} \ \ \ \ \ (17)$
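If you don’t have Maple handy, the same check can be done with sympy. This is a sketch in units where ${m=\omega=\hbar=1}$ (an assumption made purely to keep the expressions short), with the constant prefactors dropped since they cancel in the ratio ${H\psi/\psi}$:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

def laplacian(f):
    # Spherical Laplacian, eq (13)
    return (sp.diff(r**2 * sp.diff(f, r), r) / r**2
            + sp.diff(sp.sin(th) * sp.diff(f, th), th) / (r**2 * sp.sin(th))
            + sp.diff(f, ph, 2) / (r**2 * sp.sin(th)**2))

def H(f):
    # Hamiltonian of eq (12) with m = omega = hbar = 1
    return -laplacian(f) / 2 + r**2 * f / 2

psi000 = sp.exp(-r**2 / 2)                                 # eq (8), prefactor dropped
psi100 = r * sp.sin(th) * sp.cos(ph) * sp.exp(-r**2 / 2)   # eq (9)
psi001 = r * sp.cos(th) * sp.exp(-r**2 / 2)                # eq (11)

for psi in (psi000, psi100, psi001):
    # Each ratio reduces to the eigenvalue in units of hbar*omega
    print(sp.simplify(H(psi) / psi))
```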

In two dimensions, the analysis is pretty much the same. In the more general case where the two masses are equal but the frequencies may differ, so that ${\omega_{x}\ne\omega_{y}}$, the Hamiltonian is

$\displaystyle H=\frac{p_{x}^{2}+p_{y}^{2}}{2m}+\frac{m}{2}\left(\omega_{x}^{2}x^{2}+\omega_{y}^{2}y^{2}\right) \ \ \ \ \ (18)$

A solution by separation of variables still works, with the result

$\displaystyle \psi_{n}\left(x,y\right)=\psi_{n_{x}}\left(x\right)\psi_{n_{y}}\left(y\right) \ \ \ \ \ (19)$

The total energy is the sum of two 1-d oscillator energies:

$\displaystyle E=\hbar\omega_{x}\left(n_{x}+\frac{1}{2}\right)+\hbar\omega_{y}\left(n_{y}+\frac{1}{2}\right) \ \ \ \ \ (20)$

In the isotropic case ${\omega_{x}=\omega_{y}=\omega}$, this reduces to ${E_{n}=\hbar\omega\left(n+1\right)}$ where ${n=n_{x}+n_{y}}$. For a given energy level ${n}$, there are ${n+1}$ ways of forming ${n}$ as a sum of 2 non-negative integers, so the degeneracy of level ${n}$ in the isotropic case is ${n+1}$.

The one ${n=0}$ state and two ${n=1}$ states are

 $\displaystyle \psi_{00}$ $\displaystyle =$ $\displaystyle \left(\frac{m\omega}{\pi\hbar}\right)^{1/2}e^{-m\omega\left(x^{2}+y^{2}\right)/2\hbar}\ \ \ \ \ (21)$ $\displaystyle \psi_{10}$ $\displaystyle =$ $\displaystyle \sqrt{\frac{2m\omega}{\hbar}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/2}e^{-m\omega\left(x^{2}+y^{2}\right)/2\hbar}x\ \ \ \ \ (22)$ $\displaystyle \psi_{01}$ $\displaystyle =$ $\displaystyle \sqrt{\frac{2m\omega}{\hbar}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/2}e^{-m\omega\left(x^{2}+y^{2}\right)/2\hbar}y \ \ \ \ \ (23)$

To translate to polar coordinates, we use the transformations

 $\displaystyle x$ $\displaystyle =$ $\displaystyle \rho\cos\phi\ \ \ \ \ (24)$ $\displaystyle y$ $\displaystyle =$ $\displaystyle \rho\sin\phi \ \ \ \ \ (25)$

so we have

 $\displaystyle \psi_{00}$ $\displaystyle =$ $\displaystyle \left(\frac{m\omega}{\pi\hbar}\right)^{1/2}e^{-m\omega\rho^{2}/2\hbar}\ \ \ \ \ (26)$ $\displaystyle \psi_{10}$ $\displaystyle =$ $\displaystyle \sqrt{\frac{2m\omega}{\hbar}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/2}e^{-m\omega\rho^{2}/2\hbar}\rho\cos\phi\ \ \ \ \ (27)$ $\displaystyle \psi_{01}$ $\displaystyle =$ $\displaystyle \sqrt{\frac{2m\omega}{\hbar}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/2}e^{-m\omega\rho^{2}/2\hbar}\rho\sin\phi \ \ \ \ \ (28)$

Again, we can check this by plugging these polar formulas into the polar Schrödinger equation, where the 2-d Laplacian is

$\displaystyle \nabla^{2}=\frac{\partial^{2}}{\partial\rho^{2}}+\frac{1}{\rho}\frac{\partial}{\partial\rho}+\frac{1}{\rho^{2}}\frac{\partial^{2}}{\partial\phi^{2}} \ \ \ \ \ (29)$

The results are

 $\displaystyle H\psi_{00}$ $\displaystyle =$ $\displaystyle \hbar\omega\psi_{00}\ \ \ \ \ (30)$ $\displaystyle H\psi_{10}$ $\displaystyle =$ $\displaystyle 2\hbar\omega\psi_{10}\ \ \ \ \ (31)$ $\displaystyle H\psi_{01}$ $\displaystyle =$ $\displaystyle 2\hbar\omega\psi_{01} \ \ \ \ \ (32)$
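Again, sympy can stand in for Maple here; a sketch in units where ${m=\omega=\hbar=1}$, with the constant prefactors dropped since they cancel in the ratio ${H\psi/\psi}$:

```python
import sympy as sp

rho, ph = sp.symbols('rho phi', positive=True)

def H(f):
    # Hamiltonian with the 2-d polar Laplacian of eq (29), m = omega = hbar = 1
    lap = sp.diff(f, rho, 2) + sp.diff(f, rho) / rho + sp.diff(f, ph, 2) / rho**2
    return -lap / 2 + rho**2 * f / 2

psi00 = sp.exp(-rho**2 / 2)                      # eq (26), prefactor dropped
psi10 = rho * sp.cos(ph) * sp.exp(-rho**2 / 2)   # eq (27)
psi01 = rho * sp.sin(ph) * sp.exp(-rho**2 / 2)   # eq (28)

for psi in (psi00, psi10, psi01):
    # Each ratio reduces to the eigenvalue in units of hbar*omega
    print(sp.simplify(H(psi) / psi))
```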

# Decoupling the two-particle Hamiltonian

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.1.3.

Shankar shows that, for a two-particle system, the state vector ${\left|\psi\right\rangle }$ is an element of the direct product space ${\mathbb{V}_{1\otimes2}}$. Its evolution in time is determined by the Schrödinger equation, as usual, so that

$\displaystyle i\hbar\left|\dot{\psi}\right\rangle =H\left|\psi\right\rangle =\left[\frac{P_{1}^{2}}{2m_{1}}+\frac{P_{2}^{2}}{2m_{2}}+V\left(X_{1},X_{2}\right)\right]\left|\psi\right\rangle \ \ \ \ \ (1)$

The method by which this equation can be solved (if it can be solved, that is) depends on the form of the potential ${V}$. If the two particles interact only with some external potential, and not with each other, then ${V}$ is composed of a sum of terms, each of which depends only on ${X_{1}}$ or ${X_{2}}$, but not on both. In such cases, we can split ${H}$ into two parts, one of which (${H_{1}}$) depends only on operators pertaining to particle 1 and the other (${H_{2}}$) on operators pertaining to particle 2. If the eigenvalues (allowed energies) of particle ${i}$ are given by ${E_{i}}$, then the stationary states are direct products of the corresponding single particle eigenstates. That is, in general

$\displaystyle H\left|E\right\rangle =\left(H_{1}+H_{2}\right)\left|E_{1}\right\rangle \otimes\left|E_{2}\right\rangle =\left(E_{1}+E_{2}\right)\left|E_{1}\right\rangle \otimes\left|E_{2}\right\rangle =E\left|E\right\rangle \ \ \ \ \ (2)$

Thus the two-particle state is ${\left|E\right\rangle =\left|E_{1}\right\rangle \otimes\left|E_{2}\right\rangle }$. Since a stationary state ${\left|E_{i}\right\rangle }$ evolves in time according to

$\displaystyle \left|\psi_{i}\left(t\right)\right\rangle =\left|E_{i}\right\rangle e^{-iE_{i}t/\hbar} \ \ \ \ \ (3)$

the compound two-particle state evolves according to

 $\displaystyle \left|\psi\left(t\right)\right\rangle$ $\displaystyle =$ $\displaystyle e^{-iE_{1}t/\hbar}\left|E_{1}\right\rangle \otimes e^{-iE_{2}t/\hbar}\left|E_{2}\right\rangle \ \ \ \ \ (4)$ $\displaystyle$ $\displaystyle =$ $\displaystyle e^{-i\left(E_{1}+E_{2}\right)t/\hbar}\left|E\right\rangle \ \ \ \ \ (5)$ $\displaystyle$ $\displaystyle =$ $\displaystyle e^{-iEt/\hbar}\left|E\right\rangle \ \ \ \ \ (6)$

In this case, the two particles are essentially independent of each other, and the compound state is just the product of the two separate one-particle states.

If ${H}$ is not separable, which will occur if ${V}$ contains terms involving both ${X_{1}}$ and ${X_{2}}$ in the same term, we cannot, in general, reduce the system to the product of two one-particle systems. There are a couple of instances, however, where such a reduction can be done.

The first instance is when the potential is a function of ${x_{2}-x_{1}}$ only; in other words, when the interaction between the particles depends only on the distance between them. Shankar shows that in this case we can transform the system to one described by a reduced mass ${\mu=m_{1}m_{2}/\left(m_{1}+m_{2}\right)}$ and a total mass ${M=m_{1}+m_{2}}$ located at the centre of mass. We’ve already seen this problem solved by means of separation of variables. The result is that the state vector is the product of a vector for a free particle of mass ${M}$ and a vector for a particle with reduced mass ${\mu}$ moving in the potential ${V}$.

Another case where we can decouple the Hamiltonian is in a system of harmonic oscillators. We’ve already seen this system solved for two masses in classical mechanics using diagonalization of the matrix describing the equations of motion. The classical Hamiltonian is

$\displaystyle H=\frac{p_{1}^{2}}{2m}+\frac{p_{2}^{2}}{2m}+\frac{m\omega^{2}}{2}\left[x_{1}^{2}+x_{2}^{2}+\left(x_{1}-x_{2}\right)^{2}\right] \ \ \ \ \ (7)$

The earlier solution involved introducing normal coordinates

 $\displaystyle x_{I}$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(x_{1}+x_{2}\right)\ \ \ \ \ (8)$ $\displaystyle x_{II}$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(x_{1}-x_{2}\right) \ \ \ \ \ (9)$

and corresponding momenta

 $\displaystyle p_{I}$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(p_{1}+p_{2}\right)\ \ \ \ \ (10)$ $\displaystyle p_{II}$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(p_{1}-p_{2}\right) \ \ \ \ \ (11)$

These normal coordinates are canonical as we can verify by calculating the Poisson brackets. For example

 $\displaystyle \left\{ x_{I},p_{I}\right\}$ $\displaystyle =$ $\displaystyle \sum_{i}\left(\frac{\partial x_{I}}{\partial x_{i}}\frac{\partial p_{I}}{\partial p_{i}}-\frac{\partial x_{I}}{\partial p_{i}}\frac{\partial p_{I}}{\partial x_{i}}\right)\ \ \ \ \ (12)$ $\displaystyle$ $\displaystyle =$ $\displaystyle 1\ \ \ \ \ (13)$ $\displaystyle \left\{ x_{I},x_{II}\right\}$ $\displaystyle =$ $\displaystyle \sum_{i}\left(\frac{\partial x_{I}}{\partial x_{i}}\frac{\partial x_{II}}{\partial p_{i}}-\frac{\partial x_{I}}{\partial p_{i}}\frac{\partial x_{II}}{\partial x_{i}}\right)\ \ \ \ \ (14)$ $\displaystyle$ $\displaystyle =$ $\displaystyle 0 \ \ \ \ \ (15)$

and so on, with the general result

 $\displaystyle \left\{ x_{i},p_{j}\right\}$ $\displaystyle =$ $\displaystyle \delta_{ij}\ \ \ \ \ (16)$ $\displaystyle \left\{ x_{i},x_{j}\right\}$ $\displaystyle =$ $\displaystyle \left\{ p_{i},p_{j}\right\} =0 \ \ \ \ \ (17)$
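The Poisson brackets are easy to verify with sympy, treating the normal coordinates as functions of the original canonical variables:

```python
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')

# Normal coordinates and momenta, eqs (8)-(11)
xI = (x1 + x2) / sp.sqrt(2)
xII = (x1 - x2) / sp.sqrt(2)
pI = (p1 + p2) / sp.sqrt(2)
pII = (p1 - p2) / sp.sqrt(2)

def pb(f, g):
    # Poisson bracket with respect to the original variables, as in eq (12)
    return sp.expand(sum(sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
                         for x, p in ((x1, p1), (x2, p2))))

print(pb(xI, pI), pb(xII, pII))               # 1 1
print(pb(xI, xII), pb(pI, pII), pb(xI, pII))  # 0 0 0
```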

We can invert the transformation to get

 $\displaystyle x_{1}$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(x_{I}+x_{II}\right)\ \ \ \ \ (18)$ $\displaystyle x_{2}$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(x_{I}-x_{II}\right) \ \ \ \ \ (19)$

and

 $\displaystyle p_{1}$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(p_{I}+p_{II}\right)\ \ \ \ \ (20)$ $\displaystyle p_{2}$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(p_{I}-p_{II}\right) \ \ \ \ \ (21)$

Inserting these into 7 we get

 $\displaystyle H$ $\displaystyle =$ $\displaystyle \frac{1}{4m}\left[\left(p_{I}+p_{II}\right)^{2}+\left(p_{I}-p_{II}\right)^{2}\right]+\ \ \ \ \ (22)$ $\displaystyle$ $\displaystyle$ $\displaystyle \frac{m\omega^{2}}{4}\left[\left(x_{I}+x_{II}\right)^{2}+\left(x_{I}-x_{II}\right)^{2}+4x_{II}^{2}\right]\ \ \ \ \ (23)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{p_{I}^{2}}{2m}+\frac{p_{II}^{2}}{2m}+\frac{m\omega^{2}}{2}\left(x_{I}^{2}+3x_{II}^{2}\right) \ \ \ \ \ (24)$

We can now substitute the usual quantum mechanical operators to get the quantum Hamiltonian:

$\displaystyle H=\frac{1}{2m}\left(P_{I}^{2}+P_{II}^{2}\right)+\frac{m\omega^{2}}{2}\left(X_{I}^{2}+3X_{II}^{2}\right) \ \ \ \ \ (25)$

In the coordinate basis, this is

$\displaystyle H=-\frac{\hbar^{2}}{2m}\left(\frac{\partial^{2}}{\partial x_{I}^{2}}+\frac{\partial^{2}}{\partial x_{II}^{2}}\right)+\frac{m\omega^{2}}{2}\left(x_{I}^{2}+3x_{II}^{2}\right) \ \ \ \ \ (26)$

The Hamiltonian is now decoupled into two independent 1-d oscillators, with normal-mode frequencies ${\omega_{I}=\omega}$ and ${\omega_{II}=\sqrt{3}\omega}$, and can be solved by separation of variables.

We could have arrived at this result by starting with 7 and promoting ${x_{i}}$ and ${p_{i}}$ to quantum operators directly, then made the substitution to normal coordinates. We would then start with

$\displaystyle H=-\frac{\hbar^{2}}{2m}\left(\frac{\partial^{2}}{\partial x_{1}^{2}}+\frac{\partial^{2}}{\partial x_{2}^{2}}\right)+\frac{m\omega^{2}}{2}\left[x_{1}^{2}+x_{2}^{2}+\left(x_{1}-x_{2}\right)^{2}\right] \ \ \ \ \ (27)$

The potential term on the right transforms the same way as before, so we get

$\displaystyle \frac{m\omega^{2}}{2}\left[x_{1}^{2}+x_{2}^{2}+\left(x_{1}-x_{2}\right)^{2}\right]\rightarrow\frac{m\omega^{2}}{2}\left(x_{I}^{2}+3x_{II}^{2}\right) \ \ \ \ \ (28)$

To transform the two derivatives, we need to use the chain rule a couple of times. To get the first derivatives:

 $\displaystyle \frac{\partial\psi}{\partial x_{1}}$ $\displaystyle =$ $\displaystyle \frac{\partial\psi}{\partial x_{I}}\frac{\partial x_{I}}{\partial x_{1}}+\frac{\partial\psi}{\partial x_{II}}\frac{\partial x_{II}}{\partial x_{1}}\ \ \ \ \ (29)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(\frac{\partial\psi}{\partial x_{I}}+\frac{\partial\psi}{\partial x_{II}}\right)\ \ \ \ \ (30)$ $\displaystyle \frac{\partial\psi}{\partial x_{2}}$ $\displaystyle =$ $\displaystyle \frac{\partial\psi}{\partial x_{I}}\frac{\partial x_{I}}{\partial x_{2}}+\frac{\partial\psi}{\partial x_{II}}\frac{\partial x_{II}}{\partial x_{2}}\ \ \ \ \ (31)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{1}{\sqrt{2}}\left(\frac{\partial\psi}{\partial x_{I}}-\frac{\partial\psi}{\partial x_{II}}\right) \ \ \ \ \ (32)$

Now the second derivatives:

 $\displaystyle \frac{\partial^{2}\psi}{\partial x_{1}^{2}}$ $\displaystyle =$ $\displaystyle \frac{\partial}{\partial x_{I}}\left(\frac{\partial\psi}{\partial x_{1}}\right)\frac{\partial x_{I}}{\partial x_{1}}+\frac{\partial}{\partial x_{II}}\left(\frac{\partial\psi}{\partial x_{1}}\right)\frac{\partial x_{II}}{\partial x_{1}}\ \ \ \ \ (33)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{1}{2}\left[\frac{\partial}{\partial x_{I}}\left(\frac{\partial\psi}{\partial x_{I}}+\frac{\partial\psi}{\partial x_{II}}\right)+\frac{\partial}{\partial x_{II}}\left(\frac{\partial\psi}{\partial x_{I}}+\frac{\partial\psi}{\partial x_{II}}\right)\right]\ \ \ \ \ (34)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{1}{2}\left[\frac{\partial^{2}\psi}{\partial x_{I}^{2}}+2\frac{\partial^{2}\psi}{\partial x_{I}\partial x_{II}}+\frac{\partial^{2}\psi}{\partial x_{II}^{2}}\right]\ \ \ \ \ (35)$ $\displaystyle \frac{\partial^{2}\psi}{\partial x_{2}^{2}}$ $\displaystyle =$ $\displaystyle \frac{\partial}{\partial x_{I}}\left(\frac{\partial\psi}{\partial x_{2}}\right)\frac{\partial x_{I}}{\partial x_{2}}+\frac{\partial}{\partial x_{II}}\left(\frac{\partial\psi}{\partial x_{2}}\right)\frac{\partial x_{II}}{\partial x_{2}}\ \ \ \ \ (36)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{1}{2}\left[\frac{\partial}{\partial x_{I}}\left(\frac{\partial\psi}{\partial x_{I}}-\frac{\partial\psi}{\partial x_{II}}\right)-\frac{\partial}{\partial x_{II}}\left(\frac{\partial\psi}{\partial x_{I}}-\frac{\partial\psi}{\partial x_{II}}\right)\right]\ \ \ \ \ (37)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{1}{2}\left[\frac{\partial^{2}\psi}{\partial x_{I}^{2}}-2\frac{\partial^{2}\psi}{\partial x_{I}\partial x_{II}}+\frac{\partial^{2}\psi}{\partial x_{II}^{2}}\right] \ \ \ \ \ (38)$

Combining the two derivatives, we get

$\displaystyle \frac{\partial^{2}\psi}{\partial x_{1}^{2}}+\frac{\partial^{2}\psi}{\partial x_{2}^{2}}=\frac{\partial^{2}\psi}{\partial x_{I}^{2}}+\frac{\partial^{2}\psi}{\partial x_{II}^{2}} \ \ \ \ \ (39)$

Inserting this, together with 28, into 27 we get 26 again.
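We can confirm 39 with sympy by applying both sides to a test function; the particular function ${g}$ used here is an arbitrary (hypothetical) choice:

```python
import sympy as sp

x1, x2, u, v = sp.symbols('x1 x2 u v')

# Normal coordinates, eqs (8)-(9)
xI = (x1 + x2) / sp.sqrt(2)
xII = (x1 - x2) / sp.sqrt(2)

# An arbitrary (hypothetical) test function of the normal coordinates
g = u**3 * v + sp.sin(v)
psi = g.subs({u: xI, v: xII})

# Left side: Laplacian in the original coordinates
lhs = sp.diff(psi, x1, 2) + sp.diff(psi, x2, 2)

# Right side: Laplacian taken directly in the normal coordinates, eq (39)
rhs = (sp.diff(g, u, 2) + sp.diff(g, v, 2)).subs({u: xI, v: xII})

print(sp.simplify(lhs - rhs))   # 0
```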

# Direct product of vector spaces: 2-dim examples

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.1.2.

To help with understanding the direct product of two vector spaces, some examples with a couple of 2-d vector spaces are useful. Suppose the one-particle Hilbert space is two-dimensional, with basis vectors ${\left|+\right\rangle }$ and ${\left|-\right\rangle }$. Now suppose we have two such particles, each in its own 2-d space, ${\mathbb{V}_{1}}$ for particle 1 and ${\mathbb{V}_{2}}$ for particle 2. We can define a couple of operators by their matrix elements in these two spaces. We define

 $\displaystyle \sigma_{1}^{\left(1\right)}$ $\displaystyle \equiv$ $\displaystyle \left[\begin{array}{cc} a & b\\ c & d \end{array}\right]\ \ \ \ \ (1)$ $\displaystyle \sigma_{2}^{\left(2\right)}$ $\displaystyle \equiv$ $\displaystyle \left[\begin{array}{cc} e & f\\ g & h \end{array}\right] \ \ \ \ \ (2)$

where the first column and row refer to basis vector ${\left|+\right\rangle }$ and the second column and row to ${\left|-\right\rangle }$. Recall that the subscript on each ${\sigma}$ refers to the particle and the superscript refers to the vector space. Thus ${\sigma_{1}^{\left(1\right)}}$ is an operator in space ${\mathbb{V}_{1}}$ for particle 1.

Now consider the direct product space ${\mathbb{V}_{1}\otimes\mathbb{V}_{2}}$, which is spanned by the four basis vectors formed by direct products of the two basis vectors in each of the one-particle spaces, that is by ${\left|+\right\rangle \otimes\left|+\right\rangle }$, ${\left|+\right\rangle \otimes\left|-\right\rangle }$, ${\left|-\right\rangle \otimes\left|+\right\rangle }$ and ${\left|-\right\rangle \otimes\left|-\right\rangle }$. Each of the ${\sigma}$ operators has a corresponding version in the product space, which is formed by taking the direct product of the one-particle version for one of the particles with the identity operator for the other particle. That is

 $\displaystyle \sigma_{1}^{\left(1\right)\otimes\left(2\right)}$ $\displaystyle =$ $\displaystyle \sigma_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\ \ \ \ \ (3)$ $\displaystyle \sigma_{2}^{\left(1\right)\otimes\left(2\right)}$ $\displaystyle =$ $\displaystyle I^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)} \ \ \ \ \ (4)$

To get the matrix elements in the product space, we need the form of the identity operators in the one-particle spaces. They are, as usual

 $\displaystyle I^{\left(1\right)}$ $\displaystyle =$ $\displaystyle \left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right]\ \ \ \ \ (5)$ $\displaystyle I^{\left(2\right)}$ $\displaystyle =$ $\displaystyle \left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right] \ \ \ \ \ (6)$

I’ve written the two identity operators as separate equations since although they have the same numerical form as a matrix, the two operators operate on different spaces, so they are technically different operators. To get the matrix elements of ${\sigma_{1}^{\left(1\right)\otimes\left(2\right)}}$ we can expand the direct product (Shankar suggests using the ‘method of images’, although I have no idea what this is. I doubt that it’s the same method of images used in electrostatics, and Google draws a blank for any other kind of method of images.) In any case, we can form the product by taking the corresponding matrix elements. For example

 $\displaystyle \left\langle ++\left|\sigma_{1}^{\left(1\right)\otimes\left(2\right)}\right|++\right\rangle$ $\displaystyle =$ $\displaystyle \left(\left\langle +\right|\otimes\left\langle +\right|\right)\sigma_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left(\left|+\right\rangle \otimes\left|+\right\rangle \right)\ \ \ \ \ (7)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left\langle +\left|\sigma_{1}^{\left(1\right)}\right|+\right\rangle \left\langle +\left|I^{\left(2\right)}\right|+\right\rangle \ \ \ \ \ (8)$ $\displaystyle$ $\displaystyle =$ $\displaystyle a\times1=a \ \ \ \ \ (9)$

When working out the RHS of the first line, remember that operators with a superscript (1) operate only on bras and kets from the space ${\mathbb{V}_{1}}$ and operators with a superscript (2) operate only on bras and kets from the space ${\mathbb{V}_{2}}$. Applying the same technique for the remaining elements gives

$\displaystyle \sigma_{1}^{\left(1\right)\otimes\left(2\right)}=\sigma_{1}^{\left(1\right)}\otimes I^{\left(2\right)}=\left[\begin{array}{cccc} a & 0 & b & 0\\ 0 & a & 0 & b\\ c & 0 & d & 0\\ 0 & c & 0 & d \end{array}\right] \ \ \ \ \ (10)$

Another, less tedious, way of getting this result is to note that we can form the direct product by taking each element in the first matrix ${\sigma_{1}^{\left(1\right)}}$ from 1 and multiplying it into the second matrix ${I^{\left(2\right)}}$ from 6. For example, the upper left ${2\times2}$ block of ${\sigma_{1}^{\left(1\right)\otimes\left(2\right)}}$ is obtained by taking the element ${\left\langle +\left|\sigma_{1}^{\left(1\right)}\right|+\right\rangle =a}$ from 1 and multiplying it into the matrix ${I^{\left(2\right)}}$ from 6. That is, the upper left ${2\times2}$ block is formed from

 $\displaystyle aI_{2\times2}^{\left(2\right)}$ $\displaystyle =$ $\displaystyle \left[\begin{array}{cc} a & 0\\ 0 & a \end{array}\right] \ \ \ \ \ (11)$

and so on for the other three ${2\times2}$ blocks in the complete matrix. Note that it’s important to get things in the right order, as the direct product is not commutative.
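This block recipe is exactly the Kronecker product. As a numerical sketch (my own illustration, not Shankar’s; the values a = 1, b = 2, c = 3, d = 4 are arbitrary stand-ins for the matrix elements), NumPy’s `np.kron` reproduces the structure of 10 and also shows the non-commutativity:

```python
import numpy as np

# Arbitrary stand-ins for the elements a, b, c, d of sigma_1^(1).
sigma1 = np.array([[1, 2],
                   [3, 4]])
I2 = np.eye(2)

# np.kron implements the block recipe: each element of the first matrix
# multiplies the entire second matrix.
prod = np.kron(sigma1, I2)
print(prod)

# The direct product is not commutative: the two orderings interleave
# the elements differently.
print(np.allclose(np.kron(sigma1, I2), np.kron(I2, sigma1)))  # False
```

The first print shows the same pattern as 10, with each element of `sigma1` smeared over a ${2\times2}$ identity block.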

To get the other direct product, we can apply the same technique:

$\displaystyle \sigma_{2}^{\left(1\right)\otimes\left(2\right)}=I^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)}=\left[\begin{array}{cccc} e & f & 0 & 0\\ g & h & 0 & 0\\ 0 & 0 & e & f\\ 0 & 0 & g & h \end{array}\right] \ \ \ \ \ (12)$

Again, note that

$\displaystyle I^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)}\ne\sigma_{2}^{\left(2\right)}\otimes I^{\left(1\right)}=\left[\begin{array}{cccc} e & 0 & f & 0\\ 0 & e & 0 & f\\ g & 0 & h & 0\\ 0 & g & 0 & h \end{array}\right] \ \ \ \ \ (13)$

Finally, we can work out the direct product version of the product of two one-particle operators. That is, we want

$\displaystyle \left(\sigma_{1}\sigma_{2}\right)^{\left(1\right)\otimes\left(2\right)}=\sigma_{1}^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)} \ \ \ \ \ (14)$

We can do this in two ways. First, we can apply the same recipe as in the previous example. We take each element of ${\sigma_{1}^{\left(1\right)}}$ and multiply it into the full matrix ${\sigma_{2}^{\left(2\right)}}$:

 $\displaystyle \sigma_{1}^{\left(1\right)}\otimes\sigma_{2}^{\left(2\right)}$ $\displaystyle =$ $\displaystyle \left[\begin{array}{cccc} ae & af & be & bf\\ ag & ah & bg & bh\\ ce & cf & de & df\\ cg & ch & dg & dh \end{array}\right] \ \ \ \ \ (15)$

Second, we can take the matrix product of ${\sigma_{1}^{\left(1\right)\otimes\left(2\right)}}$ from 10 with ${\sigma_{2}^{\left(1\right)\otimes\left(2\right)}}$ from 12:

 $\displaystyle \left(\sigma_{1}\sigma_{2}\right)^{\left(1\right)\otimes\left(2\right)}$ $\displaystyle =$ $\displaystyle \left[\begin{array}{cccc} a & 0 & b & 0\\ 0 & a & 0 & b\\ c & 0 & d & 0\\ 0 & c & 0 & d \end{array}\right]\left[\begin{array}{cccc} e & f & 0 & 0\\ g & h & 0 & 0\\ 0 & 0 & e & f\\ 0 & 0 & g & h \end{array}\right]\ \ \ \ \ (16)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left[\begin{array}{cccc} ae & af & be & bf\\ ag & ah & bg & bh\\ ce & cf & de & df\\ cg & ch & dg & dh \end{array}\right] \ \ \ \ \ (17)$
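The agreement between 15 and 17 is an instance of the mixed-product property of the Kronecker product. We can confirm it numerically (a sketch with arbitrary stand-in values for a through h, using NumPy’s `np.kron`):

```python
import numpy as np

# Arbitrary stand-ins for the elements of sigma_1^(1) and sigma_2^(2).
s1 = np.array([[1, 2],
               [3, 4]])   # rows: a b / c d
s2 = np.array([[5, 6],
               [7, 8]])   # rows: e f / g h
I2 = np.eye(2)

# Matrix product of the two product-space operators, as in 16...
lhs = np.kron(s1, I2) @ np.kron(I2, s2)
# ...equals the direct product of the one-particle operators, as in 15.
rhs = np.kron(s1, s2)
print(np.allclose(lhs, rhs))  # True
```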

# Direct product of two vector spaces

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.1.1.

Although we’ve studied quantum systems of more than one particle before (for example, the systems of fermions and bosons covered in Griffiths’s book), the wave functions associated with such particles were just given as products of single-particle wave functions (or linear combinations of these products). We didn’t examine the linear algebra behind these functions. In his chapter 10, Shankar begins by describing the algebra of a direct product vector space, so we’ll review this here.

The physics begins with an extension of the postulate of quantum mechanics that, for a single particle, the position and momentum obey the commutation relation

$\displaystyle \left[X,P\right]=i\hbar \ \ \ \ \ (1)$

To extend this to multi-particle systems, we propose

 $\displaystyle \left[X_{i},P_{j}\right]$ $\displaystyle =$ $\displaystyle i\hbar\delta_{ij}\ \ \ \ \ (2)$ $\displaystyle \left[X_{i},X_{j}\right]$ $\displaystyle =$ $\displaystyle \left[P_{i},P_{j}\right]=0 \ \ \ \ \ (3)$

where the subscripts refer to the particle we’re considering.

These postulates are translations of the Poisson brackets from classical mechanics, following the prescription that to obtain the quantum commutator, we multiply the classical Poisson bracket by ${i\hbar}$. The physics in these relations is that properties such as position or momentum of different particles are simultaneously observable, although the position and momentum of a single particle are still governed by the uncertainty principle.

We’ll now restrict our attention to a two-particle system. In such a system, the eigenstate of the position operators is written as ${\left|x_{1}x_{2}\right\rangle }$ and satisfies the eigenvalue equation

$\displaystyle X_{i}\left|x_{1}x_{2}\right\rangle =x_{i}\left|x_{1}x_{2}\right\rangle \ \ \ \ \ (4)$

Operators referring to particle ${i}$ effectively ignore any quantities associated with the other particle.

So what exactly are these states ${\left|x_{1}x_{2}\right\rangle }$? They are a set of vectors that span a Hilbert space that describes the state of two particles. Note that we can use any two commuting operators ${\Omega_{1}\left(X_{1},P_{1}\right)}$ and ${\Omega_{2}\left(X_{2},P_{2}\right)}$ to create a set of eigenkets ${\left|\omega_{1}\omega_{2}\right\rangle }$ which also span the space. Any operator that is a function of the position and momentum of only one of the particles always commutes with a similar operator that is a function of only the other particle, since the position and momentum operators of which it is a function commute with those of the other operator. That is

$\displaystyle \left[\Omega\left(X_{1},P_{1}\right),\Lambda\left(X_{2},P_{2}\right)\right]=0 \ \ \ \ \ (5)$

The space spanned by ${\left|x_{1}x_{2}\right\rangle }$ can also be written as a direct product of two one-particle spaces. This space is written as ${\mathbb{V}_{1\otimes2}}$ where the symbol ${\otimes}$ is the direct product symbol (it’s also the logo of the X-Men, but we won’t pursue that). The direct product is composed of the two single-particle spaces ${\mathbb{V}_{1}}$ (spanned by ${\left|x_{1}\right\rangle }$) and ${\mathbb{V}_{2}}$ (spanned by ${\left|x_{2}\right\rangle }$). The notation gets quite cumbersome at this point, so let’s spell it out carefully. For an operator ${\Omega}$, we can specify which particle it acts on by a subscript, and which space it acts on by a superscript. Thus ${X_{1}^{\left(1\right)}}$ is the position operator for particle 1, which operates on the vector space ${\mathbb{V}_{1}}$. It might seem redundant at this point to specify both the particle and the space, since it would seem that these are always the same. However, be patient…

From the two one-particle spaces, we can form the two-particle space by taking the direct product of the two one-particle states. Thus the state in which particle 1 is in state ${\left|x_{1}\right\rangle }$ and particle 2 is in state ${\left|x_{2}\right\rangle }$ is written as

$\displaystyle \left|x_{1}x_{2}\right\rangle =\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle \ \ \ \ \ (6)$

It is important to note that this object is composed of two vectors from different vector spaces. The inner and outer products we’ve dealt with up to now (objects like ${\left\langle \psi_{1}\left|\psi_{2}\right.\right\rangle }$ and ${\left|\psi_{1}\right\rangle \left\langle \psi_{2}\right|}$, used for things like finding the probability that a state has a particular value) are composed of two vectors from the same vector space, so no direct product is needed.

Recall the direct sum of two vector spaces:

$\displaystyle \mathbb{V}_{1\oplus2}=\mathbb{V}_{1}\oplus\mathbb{V}_{2} \ \ \ \ \ (7)$

In that case, the dimension of ${\mathbb{V}_{1\oplus2}}$ is the sum of the dimensions of ${\mathbb{V}_{1}}$ and ${\mathbb{V}_{2}}$. For a direct product, we see from 6 that there is one basis vector for each pair of vectors ${\left|x_{1}\right\rangle }$ and ${\left|x_{2}\right\rangle }$. Thus the number of basis vectors is the product of the number of basis vectors in each of the two one-particle spaces; in other words, the dimension of a direct product space is the product of the dimensions of the two vector spaces of which it is composed. [In the case here, both the spaces ${\mathbb{V}_{1}}$ and ${\mathbb{V}_{2}}$ have infinite dimension, so the dimension of ${\mathbb{V}_{1\otimes2}}$ is, in effect, ‘doubly infinite’. In a case where ${\mathbb{V}_{1}}$ and ${\mathbb{V}_{2}}$ have finite dimension, we can just multiply these dimensions to get the dimension of ${\mathbb{V}_{1\otimes2}}$.]
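For finite dimensions this is easy to check numerically (my own sketch using NumPy’s `np.kron`): the Kronecker product of a 2-dimensional and a 3-dimensional vector has 2 × 3 = 6 components:

```python
import numpy as np

# Stand-in component vectors: dim(V1) = 2, dim(V2) = 3.
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 4.0, 5.0])

# The direct product vector has dimension 2 * 3 = 6.
print(np.kron(v1, v2).shape)  # (6,)
```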

As ${\mathbb{V}_{1\otimes2}}$ is a vector space with basis vectors ${\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle }$, any linear combination of the basis vectors is also a vector in the space ${\mathbb{V}_{1\otimes2}}$. Thus the vector

$\displaystyle \left|\psi\right\rangle =\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle +\left|y_{1}\right\rangle \otimes\left|y_{2}\right\rangle \ \ \ \ \ (8)$

is in ${\mathbb{V}_{1\otimes2}}$, although it can’t be written as a direct product of a single vector from ${\mathbb{V}_{1}}$ with a single vector from ${\mathbb{V}_{2}}$.
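One way to see this numerically (my own illustration, not from Shankar) is a rank test: in a two-dimensional example, a product vector reshaped into a ${2\times2}$ matrix has rank 1, while a sum of the form 8 generally has rank 2 and therefore cannot factorize:

```python
import numpy as np

up = np.array([1.0, 0.0])    # stand-in for |x_1> or |x_2>
down = np.array([0.0, 1.0])  # stand-in for |y_1> or |y_2>

product = np.kron(up, down)                     # a factorizable vector
summed = np.kron(up, up) + np.kron(down, down)  # a sum as in 8

# A vector in the product space factorizes iff its reshape has rank 1.
print(np.linalg.matrix_rank(product.reshape(2, 2)))  # 1
print(np.linalg.matrix_rank(summed.reshape(2, 2)))   # 2
```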

Having defined the direct product space, we now need to consider operators in this space. Although Shankar states that it ‘is intuitively clear’ that a single particle operator such as ${X_{1}^{\left(1\right)}}$ must have a corresponding operator in the product space that has the same effect as ${X_{1}^{\left(1\right)}}$ has on the single particle state, it seems to me to be more of a postulate. In any case, it is proposed that if

$\displaystyle X_{1}^{\left(1\right)}\left|x_{1}\right\rangle =x_{1}\left|x_{1}\right\rangle \ \ \ \ \ (9)$

then in the product space there must be an operator ${X_{1}^{\left(1\right)\otimes\left(2\right)}}$ that operates only on particle 1, with the same effect, that is

$\displaystyle X_{1}^{\left(1\right)\otimes\left(2\right)}\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle =x_{1}\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle \ \ \ \ \ (10)$

The notation can be explained as follows. The subscript 1 in ${X_{1}^{\left(1\right)\otimes\left(2\right)}}$ means that the operator operates on particle 1, while the superscript ${\left(1\right)\otimes\left(2\right)}$ means that the operator operates in the product space ${\mathbb{V}_{1\otimes2}}$. In effect, ${X_{1}^{\left(1\right)\otimes\left(2\right)}}$ is the direct product of two one-particle operators: ${X_{1}^{\left(1\right)}}$, which operates on space ${\mathbb{V}_{1}}$, and an identity operator ${I_{2}^{\left(2\right)}}$, which operates on space ${\mathbb{V}_{2}}$. That is, we can write

 $\displaystyle X_{1}^{\left(1\right)\otimes\left(2\right)}$ $\displaystyle =$ $\displaystyle X_{1}^{\left(1\right)}\otimes I_{2}^{\left(2\right)}\ \ \ \ \ (11)$ $\displaystyle X_{1}^{\left(1\right)\otimes\left(2\right)}\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle$ $\displaystyle =$ $\displaystyle \left|X_{1}^{\left(1\right)}x_{1}\right\rangle \otimes\left|I_{2}^{\left(2\right)}x_{2}\right\rangle \ \ \ \ \ (12)$ $\displaystyle$ $\displaystyle =$ $\displaystyle x_{1}\left|x_{1}\right\rangle \otimes\left|x_{2}\right\rangle \ \ \ \ \ (13)$

Generally, if we have two one-particle operators ${\Gamma_{1}^{\left(1\right)}}$ and ${\Lambda_{2}^{\left(2\right)}}$, each of which operates on a different one-particle state, then we can form a direct product operator with the property

$\displaystyle \left(\Gamma_{1}^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\right)\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle =\left|\Gamma_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (14)$

That is, when a single-particle operator that operates on space ${i}$ forms part of a direct product operator, it acts only on the factor of the direct product vector that comes from that one-particle space. Given this property, it’s fairly easy to derive a few properties of direct product operators.
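Property 14 can be checked numerically with random stand-ins for the operators and states (a sketch using NumPy, where `np.kron` plays the role of the direct product for both operators and vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma = rng.standard_normal((2, 2))  # stand-in for Gamma_1^(1)
Lam = rng.standard_normal((2, 2))    # stand-in for Lambda_2^(2)
w1 = rng.standard_normal(2)          # stand-in for |omega_1>
w2 = rng.standard_normal(2)          # stand-in for |omega_2>

# 14: (Gamma (x) Lambda)(|w1> (x) |w2>) = (Gamma|w1>) (x) (Lambda|w2>)
lhs = np.kron(Gamma, Lam) @ np.kron(w1, w2)
rhs = np.kron(Gamma @ w1, Lam @ w2)
print(np.allclose(lhs, rhs))  # True
```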

 $\displaystyle \left[\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)},I^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\right]\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle$ $\displaystyle =$ $\displaystyle \Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}I^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle -\ \ \ \ \ (15)$ $\displaystyle$ $\displaystyle$ $\displaystyle I^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (16)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left|I^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle -\ \ \ \ \ (17)$ $\displaystyle$ $\displaystyle$ $\displaystyle I^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (18)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle -\ \ \ \ \ (19)$ $\displaystyle$ $\displaystyle$ $\displaystyle \left|I^{\left(1\right)}\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (20)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle -\left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (21)$ $\displaystyle$ $\displaystyle =$ $\displaystyle 0 \ \ \ \ \ (22)$

This derivation shows that the identity operators effectively cancel out and we’re left with the earlier commutator 5 between two operators that operate on different spaces.

The next derivation involves the successive operation of two direct product operators.

 $\displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes\Gamma_{2}^{\left(2\right)}\right)\left(\theta_{1}^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\right)\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle$ $\displaystyle =$ $\displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes\Gamma_{2}^{\left(2\right)}\right)\left|\theta_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (23)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left|\Omega_{1}^{\left(1\right)}\theta_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Gamma_{2}^{\left(2\right)}\Lambda_{2}^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (24)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left(\Omega_{1}^{\left(1\right)}\theta_{1}^{\left(1\right)}\right)\otimes\left(\Gamma_{2}^{\left(2\right)}\Lambda_{2}^{\left(2\right)}\right)\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (25)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left\{ \left(\Omega\theta\right)^{\left(1\right)}\otimes\left(\Gamma\Lambda\right)^{\left(2\right)}\right\} \left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (26)$ $\displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes\Gamma_{2}^{\left(2\right)}\right)\left(\theta_{1}^{\left(1\right)}\otimes\Lambda_{2}^{\left(2\right)}\right)$ $\displaystyle =$ $\displaystyle \left(\Omega\theta\right)^{\left(1\right)}\otimes\left(\Gamma\Lambda\right)^{\left(2\right)} \ \ \ \ \ (27)$
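Result 27 is the mixed-product property of the Kronecker product, and we can confirm it with random matrices (my own numerical sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
# Random stand-ins for Omega_1^(1), Gamma_2^(2), theta_1^(1), Lambda_2^(2).
Omega = rng.standard_normal((2, 2))
Gamma = rng.standard_normal((2, 2))
theta = rng.standard_normal((2, 2))
Lam = rng.standard_normal((2, 2))

# 27: (Omega (x) Gamma)(theta (x) Lambda) = (Omega theta) (x) (Gamma Lambda)
lhs = np.kron(Omega, Gamma) @ np.kron(theta, Lam)
rhs = np.kron(Omega @ theta, Gamma @ Lam)
print(np.allclose(lhs, rhs))  # True
```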

Next, another commutator identity. Given

$\displaystyle \left[\Omega_{1}^{\left(1\right)},\Lambda_{1}^{\left(1\right)}\right]=\Gamma_{1}^{\left(1\right)} \ \ \ \ \ (28)$

we have

 $\displaystyle \left[\Omega_{1}^{\left(1\right)\otimes\left(2\right)},\Lambda_{1}^{\left(1\right)\otimes\left(2\right)}\right]\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle$ $\displaystyle =$ $\displaystyle \left[\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)},\Lambda_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\right]\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (29)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left|\left[\Omega_{1}^{\left(1\right)},\Lambda_{1}^{\left(1\right)}\right]\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (30)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left|\Gamma_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (31)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \Gamma_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (32)$ $\displaystyle \left[\Omega_{1}^{\left(1\right)\otimes\left(2\right)},\Lambda_{1}^{\left(1\right)\otimes\left(2\right)}\right]$ $\displaystyle =$ $\displaystyle \Gamma_{1}^{\left(1\right)}\otimes I^{\left(2\right)} \ \ \ \ \ (33)$
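As a concrete check of 33 (my own example), take the Pauli matrices, for which the one-particle commutator is ${\left[\sigma_{x},\sigma_{y}\right]=2i\sigma_{z}}$, so that ${\Gamma=2i\sigma_{z}}$ here:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Product-space commutator of sx (x) I and sy (x) I.
A = np.kron(sx, I2)
B = np.kron(sy, I2)
comm = A @ B - B @ A

# 33: the result is [sx, sy] (x) I = (2i sz) (x) I.
print(np.allclose(comm, np.kron(2j * sz, I2)))  # True
```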

Finally, the square of the sum of two operators:

 $\displaystyle \left(\Omega_{1}^{\left(1\right)\otimes\left(2\right)}+\Omega_{2}^{\left(1\right)\otimes\left(2\right)}\right)^{2}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle$ $\displaystyle =$ $\displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}+I^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}\right)^{2}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (34)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left(\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\right)^{2}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle +\ \ \ \ \ (35)$ $\displaystyle$ $\displaystyle$ $\displaystyle \Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}I^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle +\ \ \ \ \ (36)$ $\displaystyle$ $\displaystyle$ $\displaystyle I^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}\Omega_{1}^{\left(1\right)}\otimes I^{\left(2\right)}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle +\ \ \ \ \ (37)$ $\displaystyle$ $\displaystyle$ $\displaystyle \left(I^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}\right)^{2}\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (38)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left|\left(\Omega_{1}^{2}\right)^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|I^{\left(2\right)}\omega_{2}\right\rangle +\ \ \ \ \ (39)$ $\displaystyle$ $\displaystyle$ $\displaystyle \left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Omega_{2}^{\left(2\right)}\omega_{2}\right\rangle +\ \ \ \ \ (40)$ $\displaystyle$ $\displaystyle$ $\displaystyle \left|\Omega_{1}^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\Omega_{2}^{\left(2\right)}\omega_{2}\right\rangle +\ \ \ \ \ (41)$ $\displaystyle$ $\displaystyle$ $\displaystyle \left|I^{\left(1\right)}\omega_{1}\right\rangle \otimes\left|\left(\Omega_{2}^{2}\right)^{\left(2\right)}\omega_{2}\right\rangle \ \ \ \ \ (42)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left(\left(\Omega_{1}^{2}\right)^{\left(1\right)}\otimes I^{\left(2\right)}+2\Omega_{1}^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}+I^{\left(1\right)}\otimes\left(\Omega_{2}^{2}\right)^{\left(2\right)}\right)\left|\omega_{1}\right\rangle \otimes\left|\omega_{2}\right\rangle \ \ \ \ \ (43)$ $\displaystyle \left(\Omega_{1}^{\left(1\right)\otimes\left(2\right)}+\Omega_{2}^{\left(1\right)\otimes\left(2\right)}\right)^{2}$ $\displaystyle =$ $\displaystyle \left(\Omega_{1}^{2}\right)^{\left(1\right)}\otimes I^{\left(2\right)}+2\Omega_{1}^{\left(1\right)}\otimes\Omega_{2}^{\left(2\right)}+I^{\left(1\right)}\otimes\left(\Omega_{2}^{2}\right)^{\left(2\right)} \ \ \ \ \ (44)$

In this derivation, we used the fact that the identity operator leaves its operand unchanged, and thus that ${\left(I^{2}\right)^{\left(i\right)}=I^{\left(i\right)}}$ for either space ${i}$.
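Result 44 can also be verified numerically with random stand-ins for the two operators (a sketch using NumPy’s `np.kron`):

```python
import numpy as np

rng = np.random.default_rng(2)
O1 = rng.standard_normal((2, 2))  # stand-in for Omega_1^(1)
O2 = rng.standard_normal((2, 2))  # stand-in for Omega_2^(2)
I2 = np.eye(2)

# Sum of the two product-space operators, as on the LHS of 34.
S = np.kron(O1, I2) + np.kron(I2, O2)

# 44: S^2 = Omega_1^2 (x) I + 2 Omega_1 (x) Omega_2 + I (x) Omega_2^2
rhs = np.kron(O1 @ O1, I2) + 2 * np.kron(O1, O2) + np.kron(I2, O2 @ O2)
print(np.allclose(S @ S, rhs))  # True
```

The cross terms commute (as shown in 36 and 37), which is why the factor of 2 appears.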