Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 12, Exercises 12.5.4 – 12.5.5.
For infinitesimal 3-d rotations, we’ve seen that the generator is $\hat{n}\cdot\mathbf{L}$, where $\hat{n}$ is a unit vector along the axis of rotation. Generalizing this to the total angular momentum $\mathbf{J}$, we have the operator for a general 3-d rotation through an infinitesimal angle $\varepsilon$:

$$U\left[R\left(\varepsilon\hat{n}\right)\right]=I-\frac{i\varepsilon}{\hbar}\hat{n}\cdot\mathbf{J}\tag{1}$$
In principle ‘all’ we need to do to get the operator for a finite 3-d rotation through an angle $\theta$ is take the exponential, in the form

$$U\left[R\left(\theta\hat{n}\right)\right]=e^{-i\theta\hat{n}\cdot\mathbf{J}/\hbar}\tag{2}$$
The problem is that in this case, $\hat{n}\cdot\mathbf{J}$ is infinite dimensional, so the exponential of such a matrix cannot be calculated directly. However, because the components of $\mathbf{J}$ are block diagonal (see Shankar’s equations 12.5.23 and 12.5.24), all powers of these components are also block diagonal, and thus so is the exponential. For a given value of the total angular momentum quantum number $j$, the corresponding block is a $\left(2j+1\right)\times\left(2j+1\right)$ sub-matrix $J_{i}^{(j)}$ (where the suffix $i$ refers to $x$, $y$ or $z$), so the block in the exponential, defined as $U^{(j)}$, is calculated as

$$U^{(j)}=e^{-i\theta\hat{n}\cdot\mathbf{J}^{(j)}/\hbar}=\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{-i\theta}{\hbar}\right)^{n}\left(\hat{n}\cdot\mathbf{J}^{(j)}\right)^{n}\tag{3}$$
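To make this block-by-block picture concrete, here is a short numerical sketch (Python with numpy, setting $\hbar=1$; the function names are my own, not Shankar's). It builds the $\left(2j+1\right)\times\left(2j+1\right)$ blocks of $J_x$, $J_y$, $J_z$ from the standard $J_{\pm}$ matrix elements and exponentiates $\hat{n}\cdot\mathbf{J}^{(j)}$:

```python
import numpy as np

def jmats(j):
    """The (2j+1)x(2j+1) blocks of Jx, Jy, Jz for quantum number j (hbar = 1)."""
    m = np.arange(j, -j - 1, -1)              # Jz eigenvalues: j, j-1, ..., -j
    # J+ |j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>: entries just above the diagonal
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1).astype(complex)
    jx = (jp + jp.conj().T) / 2
    jy = (jp - jp.conj().T) / (2 * 1j)
    jz = np.diag(m).astype(complex)
    return jx, jy, jz

def rotation_block(j, theta, n):
    """exp(-i theta n.J / hbar) on the spin-j block, via eigendecomposition."""
    jx, jy, jz = jmats(j)
    jn = n[0] * jx + n[1] * jy + n[2] * jz    # Hermitian for a real unit vector n
    w, v = np.linalg.eigh(jn)
    return (v * np.exp(-1j * theta * w)) @ v.conj().T

# Finite rotation through theta = 0.7 about an arbitrary axis, on the j = 1 block
axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
U = rotation_block(1, 0.7, axis)
```

Since $\hat{n}\cdot\mathbf{J}^{(j)}$ is Hermitian, diagonalizing it and exponentiating the eigenvalues is equivalent to summing the series 3, which is why no infinite sum appears in the code.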
This may still look pretty hopeless in terms of actual calculation, but for small values of $j$, we can actually get closed-form solutions.
First, we look at the eigenvalues of $\hat{n}\cdot\mathbf{J}$. If we review the calculations by which we found that the eigenvalues of $L_{z}$ (and thus also of $J_{z}$) were $m=-j,\ldots,j$ (multiplied by $\hbar$), we see that there’s nothing special about the fact that we chose the $z$ direction over any other direction as the component of $\mathbf{J}$ for which we calculated the eigenvalues. We could, for example, go through exactly the same calculations taking $J_{x}$ to be the chosen component. We would then define raising and lowering operators as $J_{\pm}=J_{y}\pm iJ_{z}$ and come out with the conclusion that the eigenvalues of $J_{x}$ are also $m=-j,\ldots,j$ (multiplied by $\hbar$). We can generalize even further and choose the ‘special’ direction to be the axis of rotation $\hat{n}$, however that axis may be oriented in space. This would lead us to the conclusion that the eigenvalues of $J_{\hat{n}}\equiv\hat{n}\cdot\mathbf{J}$ are the same as those of $J_{z}$.
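This conclusion is easy to test numerically: for a randomly oriented unit vector $\hat{n}$, the sorted eigenvalues of $\hat{n}\cdot\mathbf{J}$ on each block should be $-j\hbar,\ldots,j\hbar$. A sketch (numpy, $\hbar=1$; the matrices are built from the standard $J_{\pm}$ elements, nothing specific to Shankar):

```python
import numpy as np

def jmats(j):
    """Blocks of Jx, Jy, Jz for angular momentum quantum number j (hbar = 1)."""
    m = np.arange(j, -j - 1, -1)              # Jz eigenvalues: j, j-1, ..., -j
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1).astype(complex)
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / (2 * 1j), np.diag(m).astype(complex)

rng = np.random.default_rng(42)
for j in (0.5, 1.0, 1.5, 2.0):
    jx, jy, jz = jmats(j)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)                    # a randomly oriented unit vector
    jn = n[0] * jx + n[1] * jy + n[2] * jz
    # Sorted eigenvalues of n.J should be -j, ..., j, the same as for Jz
    assert np.allclose(np.linalg.eigvalsh(jn), np.arange(-j, j + 1))
```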
Now consider the operator (where, to simplify the notation, we write $J\equiv\hat{n}\cdot\mathbf{J}$, restricted to the block with quantum number $j$):

$$\prod_{m=-j}^{j}\left(J-m\hbar\right)\tag{4}$$
First, suppose that $\hat{n}=\hat{z}$, so that $J=J_{z}$. Then if we’re in an eigenstate $\left|jm\right\rangle$ of $J_{z}$, the factor $\left(J-m\hbar\right)$ in this operator will give zero when operating on this state. Thus the operator 4 will always give zero when operating on an eigenstate of $J_{z}$. However, since the set of eigenstates of $J_{z}$ spans the space in which the total angular momentum number is $j$, any state in this space can be expressed as a linear combination of eigenstates of $J_{z}$, so when 4 operates on this state, there is always one factor in the operator that gives zero for each term in the linear combination. Thus this operator always gives zero when operating on any state with angular momentum $j$. [Note that the order in which we write the factors in 4 doesn’t matter; the only operator in the expression is $J_{z}$, so all the factors commute with each other.] Since the eigenvalues of $J_{\hat{n}}$ are the same as those of $J_{z}$, the same argument works for any axis $\hat{n}$. That is, we have

$$\prod_{m=-j}^{j}\left(J-m\hbar\right)=0$$
If we multiply out this operator, we get a polynomial of degree $2j+1$ in $J$. The highest power can thus be written as a linear combination of lower powers:

$$J^{2j+1}=\sum_{n=0}^{2j}c_{n}J^{n}\tag{5}$$
where the coefficients $c_{n}$ can be found by expanding formula 4 (which we won’t need to do here). But this implies that all higher powers of $J$ can also be written as linear combinations of powers up to $J^{2j}$. To see this, consider

$$J^{2j+2}=J\,J^{2j+1}=\sum_{n=0}^{2j}c_{n}J^{n+1}=c_{2j}J^{2j+1}+\sum_{n=0}^{2j-1}c_{n}J^{n+1}=c_{2j}\sum_{n=0}^{2j}c_{n}J^{n}+\sum_{n=0}^{2j-1}c_{n}J^{n+1}$$
Thus $J^{2j+2}$ can be written as a linear combination of powers of $J$ up to $J^{2j}$. By iterating this process, we can express all higher powers of $J$ as linear combinations of powers up to $J^{2j}$. Here are a couple of examples. [Shankar marks these as ‘hard’, though I can’t see that they are any more difficult than most of his other problems, so hopefully I’m not missing anything.]
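Both the vanishing product 4 and this power-reduction trick can be checked numerically. Here is a sketch for $j=3/2$ (numpy, $\hbar=1$); the explicit reduction $J^{4}=\frac{5}{2}J^{2}-\frac{9}{16}$ used below is my own worked instance, obtained by multiplying out 4 for this case, rather than anything quoted from Shankar:

```python
import numpy as np

j = 1.5                                       # check the j = 3/2 block
m = np.arange(j, -j - 1, -1)
jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1).astype(complex)
jx, jy = (jp + jp.conj().T) / 2, (jp - jp.conj().T) / (2 * 1j)
jz = np.diag(m).astype(complex)

n = np.array([0.36, 0.48, 0.80])              # unit vector: 0.36^2 + 0.48^2 + 0.80^2 = 1
jn = n[0] * jx + n[1] * jy + n[2] * jz
dim = int(2 * j + 1)

# The product over m = -j..j annihilates the whole block (equation 4)
prod = np.eye(dim, dtype=complex)
for mm in np.arange(-j, j + 1):
    prod = prod @ (jn - mm * np.eye(dim))
assert np.max(np.abs(prod)) < 1e-12

# Multiplying 4 out for j = 3/2: (J^2 - 9/4)(J^2 - 1/4) = 0, so J^4 = (5/2)J^2 - 9/16
lhs = np.linalg.matrix_power(jn, 4)
rhs = 2.5 * (jn @ jn) - (9 / 16) * np.eye(dim)
assert np.allclose(lhs, rhs)
```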
Consider $j=1$, starting from 3. We first use 5 with $j=1$ (here 4 reads $\left(J-\hbar\right)J\left(J+\hbar\right)=0$, so the only surviving coefficient is $c_{1}=\hbar^{2}$):

$$J^{3}=\hbar^{2}J$$
We can now iterate this formula as described above to get (to be accurate, all the $J$ and $J^{2}$ terms should have a superscript $(1)$ to indicate that they refer to the subspace with $j=1$, but this would clutter the notation)

$$J^{2k+1}=\hbar^{2k}J\qquad J^{2k}=\hbar^{2k-2}J^{2}\quad\left(k\ge1\right)$$
From 3 we have

$$e^{-i\theta J/\hbar}=\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{-i\theta}{\hbar}\right)^{n}J^{n}$$
We can consider the even and odd terms in this sum separately. For the evens (keeping the $n=0$ term, which gives the identity $I$, apart):

$$\sum_{k=1}^{\infty}\frac{1}{\left(2k\right)!}\left(\frac{-i\theta}{\hbar}\right)^{2k}J^{2k}=\frac{J^{2}}{\hbar^{2}}\sum_{k=1}^{\infty}\frac{\left(-1\right)^{k}\theta^{2k}}{\left(2k\right)!}=\frac{J^{2}}{\hbar^{2}}\left(\cos\theta-1\right)$$
For the odds:

$$\sum_{k=0}^{\infty}\frac{1}{\left(2k+1\right)!}\left(\frac{-i\theta}{\hbar}\right)^{2k+1}J^{2k+1}=-\frac{iJ}{\hbar}\sum_{k=0}^{\infty}\frac{\left(-1\right)^{k}\theta^{2k+1}}{\left(2k+1\right)!}=-\frac{iJ}{\hbar}\sin\theta$$
Thus we get

$$e^{-i\theta J^{(1)}/\hbar}=I+\frac{\left(J^{(1)}\right)^{2}}{\hbar^{2}}\left(\cos\theta-1\right)-\frac{iJ^{(1)}}{\hbar}\sin\theta$$
(I’ve restored the superscript $(1)$.)
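We can verify this closed form by comparing it with a brute-force exponential of the explicit $j=1$ matrices (numpy sketch, $\hbar=1$; the angle and axis are arbitrary test values):

```python
import numpy as np

# Explicit j = 1 matrices in the |1, m> basis, m = 1, 0, -1 (hbar = 1)
s = 1 / np.sqrt(2)
jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
jy = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

theta = 1.23                                  # arbitrary rotation angle
n = np.array([0.6, 0.0, 0.8])                 # arbitrary unit vector (rotation axis)
jn = n[0] * jx + n[1] * jy + n[2] * jz

# The closed form derived above: I + (cos(theta) - 1) Jn^2 - i sin(theta) Jn
closed = np.eye(3) + (np.cos(theta) - 1) * (jn @ jn) - 1j * np.sin(theta) * jn

# Brute force: diagonalize the Hermitian Jn and exponentiate its eigenvalues
w, v = np.linalg.eigh(jn)
brute = (v * np.exp(-1j * theta * w)) @ v.conj().T

assert np.allclose(closed, brute)
```

The agreement is immediate from the eigenvalues: on each eigenvector with $J_{\hat{n}}$ eigenvalue $\lambda\in\left\{ -\hbar,0,\hbar\right\}$, both sides reduce to $e^{-i\theta\lambda/\hbar}$.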
Going through the same process for $j=\frac{1}{2}$, we first look at 5 (here 4 reads $\left(J-\frac{\hbar}{2}\right)\left(J+\frac{\hbar}{2}\right)=0$) to get

$$J^{2}=\frac{\hbar^{2}}{4}I$$
Again, by iterating we find the pattern:

$$J^{2k}=\left(\frac{\hbar}{2}\right)^{2k}I\qquad J^{2k+1}=\left(\frac{\hbar}{2}\right)^{2k}J$$
We then have

$$e^{-i\theta J/\hbar}=\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{-i\theta}{\hbar}\right)^{n}J^{n}$$
Again, we can consider evens and odds separately. For the evens:

$$\sum_{k=0}^{\infty}\frac{1}{\left(2k\right)!}\left(\frac{-i\theta}{\hbar}\right)^{2k}\left(\frac{\hbar}{2}\right)^{2k}I=I\sum_{k=0}^{\infty}\frac{\left(-1\right)^{k}}{\left(2k\right)!}\left(\frac{\theta}{2}\right)^{2k}=I\cos\frac{\theta}{2}$$
For the odds:

$$\sum_{k=0}^{\infty}\frac{1}{\left(2k+1\right)!}\left(\frac{-i\theta}{\hbar}\right)^{2k+1}\left(\frac{\hbar}{2}\right)^{2k}J=-\frac{2iJ}{\hbar}\sum_{k=0}^{\infty}\frac{\left(-1\right)^{k}}{\left(2k+1\right)!}\left(\frac{\theta}{2}\right)^{2k+1}=-\frac{2iJ}{\hbar}\sin\frac{\theta}{2}$$

Thus

$$e^{-i\theta J^{(1/2)}/\hbar}=I\cos\frac{\theta}{2}-\frac{2iJ^{(1/2)}}{\hbar}\sin\frac{\theta}{2}$$
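Again we can check the result numerically, using $\mathbf{J}=\frac{\hbar}{2}\boldsymbol{\sigma}$ with the Pauli matrices (numpy sketch, $\hbar=1$; angle and axis are arbitrary test values). As an illustration, the closed form also makes the well-known sign flip of a spin-1/2 state under a full $2\pi$ rotation obvious:

```python
import numpy as np

# j = 1/2: J = sigma / 2 with the Pauli matrices (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = 2.5                                   # arbitrary test angle
n = np.array([0.0, 0.6, 0.8])                 # arbitrary unit rotation axis
jn = (n[0] * sx + n[1] * sy + n[2] * sz) / 2

# Closed form derived above: cos(theta/2) I - (2i/hbar) sin(theta/2) Jn
closed = np.cos(theta / 2) * np.eye(2) - 2j * np.sin(theta / 2) * jn

# Brute force via diagonalization of the Hermitian Jn
w, v = np.linalg.eigh(jn)
brute = (v * np.exp(-1j * theta * w)) @ v.conj().T
assert np.allclose(closed, brute)

# A full 2*pi rotation gives U = -I: a spin-1/2 state changes sign
full_turn = np.cos(np.pi) * np.eye(2) - 2j * np.sin(np.pi) * jn
assert np.allclose(full_turn, -np.eye(2))
```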
[I’m not sure why Shankar restricts this problem to a single coordinate axis, or, for that matter, why he expects us to use the explicit matrix for that component.]