References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Exercise 1.8.11.
Here’s a practical example of how changing the basis by diagonalizing a hermitian matrix can make a problem easier to solve. Suppose we have two identical masses $m$ free to slide in one dimension on a frictionless horizontal surface. The two masses are connected to 3 springs, with the spring on the left attached to a solid support at one end and to mass #1 at the other, the middle spring connected between the two masses, and the spring on the right connected to mass #2 at one end and to a solid support at the other. The springs all have spring constant $k$. Define two coordinates $x_1$ and $x_2$ to be the positions of the two masses, with $x_i = 0$ corresponding to the location at which mass $i$ is at rest in equilibrium.
Now suppose that the two masses are displaced from their respective equilibrium points, so that $x_1$ and $x_2$ are non-zero. The length of the spring to the left of mass 1 is changed (stretched or compressed, depending on the sign of $x_1$) by $x_1$, so it exerts a force $-kx_1$ on mass 1. The length of the spring in the middle is changed by $x_2 - x_1$, so it exerts a force $k\left(x_2 - x_1\right)$ on mass 1, and an equal and opposite force $-k\left(x_2 - x_1\right)$ on mass 2. Finally, the length of the spring on the right is changed by $x_2$ and exerts a force $-kx_2$ on mass 2. By applying Newton’s law $F = m\ddot{x}$, we get the set of equations of motion:

$$\begin{aligned} m\ddot{x}_1 &= -2kx_1 + kx_2 \\ m\ddot{x}_2 &= kx_1 - 2kx_2 \end{aligned}$$
While it’s possible to solve such a coupled system directly, we can see how an easier method can be found by using matrix algebra. The 2 equations above can be written as a matrix equation

$$\begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_2 \end{pmatrix} = \frac{k}{m}\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

or, more compactly, $\left|\ddot{x}(t)\right\rangle = \Omega\left|x(t)\right\rangle$.
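As a quick sanity check, the matrix form can be verified numerically to reproduce the two component equations of motion (a minimal sketch with numpy; the values of $k$, $m$ and the sample displacements are arbitrary choices for illustration):

```python
import numpy as np

# Arbitrary sample values for the spring constant and mass.
k, m = 1.5, 2.0
Omega = (k / m) * np.array([[-2.0, 1.0],
                            [1.0, -2.0]])

x = np.array([0.3, -0.7])   # sample displacements x1, x2
acc = Omega @ x             # accelerations from the matrix form

# Compare against the component equations:
#   m*x1'' = -2k x1 + k x2,   m*x2'' = k x1 - 2k x2
assert np.isclose(m * acc[0], -2 * k * x[0] + k * x[1])
assert np.isclose(m * acc[1], k * x[0] - 2 * k * x[1])
```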
If we use the basis in which the displacement of each mass is taken to be independent of the other, we have the two basis vectors

$$\left|1\right\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \left|2\right\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
In this basis

$$\left|x(t)\right\rangle = x_1(t)\left|1\right\rangle + x_2(t)\left|2\right\rangle$$
Here, the $x_i$s are just numbers; the vector nature of the equation is delegated to the basis vectors.
In this basis, $\Omega$ is the operator whose matrix form is

$$\Omega = \frac{k}{m}\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}$$
Since $\Omega$ is hermitian, it can be diagonalized by finding its eigenvalues and normalized eigenvectors, and forming a unitary operator $U$ whose columns are these eigenvectors. The basis vectors are now these eigenvectors $\left|I\right\rangle$ and $\left|II\right\rangle$ (I’m sticking to Shankar’s notation, even though it’s a bit clumsy), and they are found from $\left|1\right\rangle$ and $\left|2\right\rangle$ by applying the unitary transformation, that is

$$\left|I\right\rangle = \frac{1}{\sqrt{2}}\left(\left|1\right\rangle + \left|2\right\rangle\right), \qquad \left|II\right\rangle = \frac{1}{\sqrt{2}}\left(\left|1\right\rangle - \left|2\right\rangle\right)$$
These transformations can be inverted:

$$\left|1\right\rangle = \frac{1}{\sqrt{2}}\left(\left|I\right\rangle + \left|II\right\rangle\right), \qquad \left|2\right\rangle = \frac{1}{\sqrt{2}}\left(\left|I\right\rangle - \left|II\right\rangle\right)$$
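Because the change-of-basis matrix is unitary (here real, so orthogonal), the inverse transformation is just its adjoint. This is easy to confirm numerically (a sketch; the sample vector is arbitrary):

```python
import numpy as np

# Change-of-basis matrix: columns are the eigenvectors |I> and |II>.
U = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# U is unitary (orthogonal, since it's real): U^T U = U U^T = I.
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(U @ U.T, np.eye(2))

# Transforming and then inverse-transforming recovers the original vector.
x = np.array([0.2, 0.5])
assert np.allclose(U.T @ (U @ x), x)
```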
Thus we can insert this into the matrix equation of motion above and use $UU^\dagger = I$ to get

$$U^\dagger\left|\ddot{x}(t)\right\rangle = \left(U^\dagger \Omega U\right) U^\dagger\left|x(t)\right\rangle$$

and $U^\dagger \Omega U$ is the diagonalized version of $\Omega$.
Shankar goes through the details of the calculation, with the results

$$U = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad U^\dagger \Omega U = \begin{pmatrix} -k/m & 0 \\ 0 & -3k/m \end{pmatrix}$$
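These results can be checked numerically by diagonalizing $\Omega$ directly (a sketch with numpy, taking $k = m = 1$ so the eigenvalues should be $-1$ and $-3$):

```python
import numpy as np

k = m = 1.0
Omega = (k / m) * np.array([[-2.0, 1.0],
                            [1.0, -2.0]])

# eigh is the right tool since Omega is hermitian; it returns
# eigenvalues in ascending order, so here [-3, -1].
evals, evecs = np.linalg.eigh(Omega)
assert np.allclose(evals, [-3.0, -1.0])

# Mode frequencies: omega = sqrt(-eigenvalue).
omega_II, omega_I = np.sqrt(-evals)
assert np.isclose(omega_I, np.sqrt(k / m))
assert np.isclose(omega_II, np.sqrt(3 * k / m))

# The normalized eigenvectors are (1, 1)/sqrt(2) and (1, -1)/sqrt(2),
# up to overall sign, so every entry has magnitude 1/sqrt(2).
assert np.allclose(np.abs(evecs), 1 / np.sqrt(2))
```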
Using $\left|I\right\rangle$ and $\left|II\right\rangle$ as the basis, the differential equations become decoupled, and we have

$$\begin{aligned} \ddot{x}_I &= -\frac{k}{m}x_I = -\omega_I^2 x_I \\ \ddot{x}_{II} &= -\frac{3k}{m}x_{II} = -\omega_{II}^2 x_{II} \end{aligned}$$

where $\omega_I = \sqrt{k/m}$ and $\omega_{II} = \sqrt{3k/m}$.
Second order ODEs require two initial conditions to be fully solved, and here we’re assuming that both masses start off at rest, so that $\dot{x}_i(0) = 0$ for $i = 1, 2$. In this case, the solutions are

$$x_I(t) = x_I(0)\cos\omega_I t, \qquad x_{II}(t) = x_{II}(0)\cos\omega_{II} t$$
(A full, general solution would also have a $\sin\omega t$ term, but this disappears because we require $\dot{x}_i(0) = 0$.)
The vector solution in the diagonal basis is therefore

$$\left|x(t)\right\rangle = x_I(0)\cos\left(\omega_I t\right)\left|I\right\rangle + x_{II}(0)\cos\left(\omega_{II} t\right)\left|II\right\rangle$$
We now need to figure out what the coefficients $x_I(0)$ and $x_{II}(0)$ are. Assuming we know the initial position of each mass in the original basis as $x_1(0)$ and $x_2(0)$, we can find $x_I(0)$ and $x_{II}(0)$ by projecting the initial state $\left|x(0)\right\rangle$ onto the basis vectors $\left|I\right\rangle$ and $\left|II\right\rangle$. That is, we have

$$\begin{aligned} x_I(0) &= \left\langle I\middle|x(0)\right\rangle = \frac{1}{\sqrt{2}}\left(x_1(0) + x_2(0)\right) \\ x_{II}(0) &= \left\langle II\middle|x(0)\right\rangle = \frac{1}{\sqrt{2}}\left(x_1(0) - x_2(0)\right) \end{aligned}$$

where in the last line we substituted the transformation between the two bases to write everything in terms of the original basis $\left|1\right\rangle$ and $\left|2\right\rangle$.
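The projection formulas are easy to confirm numerically (a sketch; the initial displacements are arbitrary sample values):

```python
import numpy as np

x0 = np.array([0.4, -0.9])               # sample x1(0), x2(0)
I = np.array([1.0, 1.0]) / np.sqrt(2)    # |I>
II = np.array([1.0, -1.0]) / np.sqrt(2)  # |II>

# Mode coefficients are projections onto the eigenvectors.
xI0 = I @ x0
xII0 = II @ x0
assert np.isclose(xI0, (x0[0] + x0[1]) / np.sqrt(2))
assert np.isclose(xII0, (x0[0] - x0[1]) / np.sqrt(2))

# Reconstructing x(0) from the mode coefficients recovers the original.
assert np.allclose(xI0 * I + xII0 * II, x0)
```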
For the special case where the initial positions are given by $x_1(0) = a$ and $x_2(0) = 0$ (mass 1 displaced, mass 2 at its equilibrium point), we have $x_I(0) = a/\sqrt{2}$ and $x_{II}(0) = a/\sqrt{2}$, so that

$$\left|x(t)\right\rangle = \frac{a}{\sqrt{2}}\left[\cos\left(\omega_I t\right)\left|I\right\rangle + \cos\left(\omega_{II} t\right)\left|II\right\rangle\right]$$
Going back to the solution for $\left|x(t)\right\rangle$ above and transforming to the original basis, we can write the solution as a matrix equation

$$\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \frac{1}{2}\begin{pmatrix} \cos\omega_I t + \cos\omega_{II} t & \cos\omega_I t - \cos\omega_{II} t \\ \cos\omega_I t - \cos\omega_{II} t & \cos\omega_I t + \cos\omega_{II} t \end{pmatrix}\begin{pmatrix} x_1(0) \\ x_2(0) \end{pmatrix}$$
The matrix with the cosines is independent of the initial state, so that once we know this matrix, we can work out the general solution as a function of time for any initial state. The matrix is known as the propagator. [Although Shankar uses the symbol $U(t)$ to refer to the propagator, it’s not a unitary matrix. For example, its determinant is $\cos\left(\omega_I t\right)\cos\left(\omega_{II} t\right)$, which differs from 1 for $t > 0$.]
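As a final check, the propagator can be built numerically and verified to satisfy the matrix equation of motion, and its non-unit determinant confirmed (a sketch with numpy; $k = m = 1$ and the test time $t = 1.3$ are arbitrary choices):

```python
import numpy as np

k = m = 1.0
wI, wII = np.sqrt(k / m), np.sqrt(3 * k / m)
Omega = (k / m) * np.array([[-2.0, 1.0],
                            [1.0, -2.0]])

def propagator(t):
    """The cosine matrix mapping x(0) to x(t)."""
    c1, c2 = np.cos(wI * t), np.cos(wII * t)
    return 0.5 * np.array([[c1 + c2, c1 - c2],
                           [c1 - c2, c1 + c2]])

# x(t) = U(t) x(0) solves x'' = Omega x iff U''(t) = Omega U(t);
# verify this with a central finite difference.
t, h = 1.3, 1e-5
U_t = propagator(t)
Udd = (propagator(t + h) - 2 * U_t + propagator(t - h)) / h**2
assert np.allclose(Udd, Omega @ U_t, atol=1e-4)

# U(t) is not unitary: its determinant is cos(wI t) cos(wII t), not 1.
assert np.isclose(np.linalg.det(U_t), np.cos(wI * t) * np.cos(wII * t))
```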