References: edX online course MIT 8.05.1x Week 3.

Sheldon Axler (2015), *Linear Algebra Done Right*, 3rd edition, Springer. Chapter 3.

We’ve seen that the matrix representation of a linear operator depends on the basis we’ve chosen within a vector space. We now look at how the matrix representation changes if we change the basis. In what follows, we’ll consider two sets of basis vectors $\{u_i\}$ and $\{v_i\}$ and two operators $A$ and $B$. Operator $A$ transforms the basis $\{u_i\}$ into the basis $\{v_i\}$, while $B$ does the reverse. That is

$$A u_k = v_k$$

$$B v_k = u_k$$
for all $k$. From this definition, we can see that $BA = AB = I$ and $B = A^{-1}$, since

$$BA u_k = B v_k = u_k$$

$$AB v_k = A u_k = v_k$$
**Theorem 1.** An operator (like $A$ or $B$ above) that transforms one set of basis vectors into another has the same matrix representation in both bases.

*Proof:* In matrix form, we have (remember we’re using the summation convention on repeated indices):

$$A u_k = A_{ik}(u)\,u_i$$

$$A v_k = A_{ik}(v)\,v_i$$

where $A_{ik}(u)$ denotes the matrix element of $A$ in the $\{u_i\}$ basis, and similarly for $A_{ik}(v)$.
Note that the matrix elements $A_{ik}(u)$ and $A_{ik}(v)$ depend on *different* bases in the two equations.

We can now operate with $A$ again, using $A u_k = v_k$, to get

$$A v_k = A\left(A u_k\right) = A\left(A_{jk}(u)\,u_j\right) = A_{jk}(u)\,A u_j = A_{jk}(u)\,v_j$$
Comparing the last expression with the expansion $A v_k = A_{ik}(v)\,v_i$, we see that

$$A_{ik}(v) = A_{ik}(u)$$
Since the matrix elements are just numbers, this means that the elements in the two matrices $A(u)$ and $A(v)$ are the same.

We could do the same analysis using the operator $B$ with the same result:

$$B u_k = B\left(B v_k\right) = B\left(B_{jk}(v)\,v_j\right) = B_{jk}(v)\,B v_j = B_{jk}(v)\,u_j$$

so that $B_{ik}(u) = B_{ik}(v)$.
We can now turn to the matrix representations of a general operator $T$ in two different bases. In this case, $T$ can perform any linear transformation, so it doesn’t necessarily transform one set of basis vectors into another set of basis vectors. Consider first the case where $T$ operates on each set of basis vectors given above:

$$T u_k = T_{ik}(u)\,u_i$$

$$T v_k = T_{ik}(v)\,v_i$$
Unless $T$ is an operator like $A$ or $B$ above, in general $T_{ik}(u) \ne T_{ik}(v)$. We can see how these two matrices are related by using operators $A$ and $B$ above to write

$$\begin{aligned}
T v_k &= T A u_k \\
&= T\left(A_{jk}\,u_j\right) \\
&= A_{jk}\,T u_j \\
&= A_{jk} T_{lj}(u)\,u_l \\
&= A_{jk} T_{lj}(u)\,B_{ml}\,v_m \\
&= \left[B_{ml} T_{lj}(u) A_{jk}\right] v_m \\
&= T_{mk}(v)\,v_m
\end{aligned}$$

where in the fifth line we used $u_l = B v_l = B_{ml}\,v_m$.
We don’t need to specify the basis for the $A$ or $B$ matrices since the matrices are the same in both bases as we just saw above. The last line is just the expansion of $T v_k$ in terms of the $\{v_i\}$ basis. In the penultimate line, we see that the quantity in square brackets is the product of 3 matrices:

$$B_{ml} T_{lj}(u) A_{jk} = \left[A^{-1}\,T(u)\,A\right]_{mk}$$
The required transformation is therefore

$$T(v) = A^{-1}\,T(u)\,A$$

where $B = A^{-1}$.

As a check, note that if $T = A$ or $T = B$, we reclaim the result in the theorem above, namely that $A(v) = A^{-1}A(u)A = A(u)$ and $B(v) = B(u)$.
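
As a numerical sketch (my own illustration, not from the course notes), we can check both the theorem and the transformation rule with concrete $2\times 2$ matrices. Here the $u$ basis is taken to be the standard basis, so a matrix written in the $u$ basis acts directly on coordinates; the particular matrices chosen are arbitrary.

```python
# Sketch (illustrative example): the matrix of A is unchanged by the change
# of basis, while a general operator T transforms as T(v) = A^{-1} T(u) A.

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Inverse of a 2x2 matrix."""
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Take the u basis to be the standard basis.  A maps u_k to v_k, so the
# columns of A (in the u basis) are the v basis vectors.
A = [[1.0, 1.0], [0.0, 2.0]]   # arbitrary invertible choice
B = inv2(A)                    # B reverses A, i.e. B = A^{-1}

# Matrix of A in the v basis: A(v) = A^{-1} A(u) A = A(u), as the theorem says.
A_v = matmul(B, matmul(A, A))
assert A_v == A

# A general operator T does change its matrix representation.
T_u = [[2.0, 1.0], [3.0, 4.0]]
T_v = matmul(B, matmul(T_u, A))
print(T_v)  # [[0.5, -1.5], [1.5, 5.5]] -- a different matrix, same operator
```

The conjugation $A^{-1} T(u) A$ is exactly the penultimate line of the derivation above, written as a matrix product.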

**Trace and determinant**

The trace of a matrix $A$ is the sum of its diagonal elements, written as $\mathrm{Tr}\,A = A_{ii}$. A useful property of the trace is that

$$\mathrm{Tr}\left(AB\right) = \mathrm{Tr}\left(BA\right)$$
We can prove this by looking at the components. If $C = AB$ then

$$C_{ij} = A_{ik}B_{kj}$$
The trace of $C$ is the sum of its diagonal elements, written as $C_{ii}$, so

$$\mathrm{Tr}\left(AB\right) = C_{ii} = A_{ik}B_{ki} = B_{ki}A_{ik} = \left(BA\right)_{kk} = \mathrm{Tr}\left(BA\right)$$
From this we can generalize to the case of the trace of a product of any number of matrices and obtain the cyclic rule:

$$\mathrm{Tr}\left(A_1 A_2 \cdots A_n\right) = \mathrm{Tr}\left(A_n A_1 A_2 \cdots A_{n-1}\right)$$

For example, $\mathrm{Tr}\left(ABC\right) = \mathrm{Tr}\left(CAB\right) = \mathrm{Tr}\left(BCA\right)$.
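
To see the cyclic rule in action, here is a quick numerical check (my own example) with integer $2\times 2$ matrices:

```python
# Quick numerical check of Tr(AB) = Tr(BA) and the cyclic rule
# Tr(ABC) = Tr(CAB) = Tr(BCA).

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    """Sum of the diagonal elements."""
    return X[0][0] + X[1][1]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]
C = [[7, 8], [9, 2]]

# AB != BA in general, yet the traces agree.
assert trace(matmul(A, B)) == trace(matmul(B, A))

# Cyclic permutations of a triple product all share one trace.
t1 = trace(matmul(matmul(A, B), C))
t2 = trace(matmul(matmul(C, A), B))
t3 = trace(matmul(matmul(B, C), A))
assert t1 == t2 == t3
print(t1)  # 401
```

Note that a non-cyclic permutation such as $\mathrm{Tr}\left(ACB\right)$ is *not* guaranteed to agree.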
Going back to the transformation $T(v) = A^{-1}\,T(u)\,A$, we have

$$\mathrm{Tr}\,T(v) = \mathrm{Tr}\left(A^{-1}\,T(u)\,A\right) = \mathrm{Tr}\left(A A^{-1}\,T(u)\right) = \mathrm{Tr}\,T(u)$$
Thus the trace of any linear operator is invariant under a change of basis.
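Continuing the earlier numerical sketch (same arbitrary matrices, my own example), the trace comes out identical in both bases:

```python
# Sketch (illustrative example): Tr T(v) = Tr T(u), since
# Tr(A^{-1} T(u) A) = Tr(A A^{-1} T(u)) = Tr T(u) by the cyclic rule.

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Inverse of a 2x2 matrix."""
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def trace(X):
    return X[0][0] + X[1][1]

A = [[1.0, 1.0], [0.0, 2.0]]           # arbitrary change-of-basis operator
T_u = [[2.0, 1.0], [3.0, 4.0]]         # arbitrary operator in the u basis
T_v = matmul(inv2(A), matmul(T_u, A))  # its matrix in the v basis
print(trace(T_u), trace(T_v))          # 6.0 6.0
```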

For the determinant, we have the results that the determinant of a product of matrices is equal to the product of the determinants, and the determinant of a matrix inverse is the reciprocal of the determinant of the original matrix. Therefore

$$\det T(v) = \det\left(A^{-1}\right)\det T(u)\det A = \frac{\det T(u)\det A}{\det A} = \det T(u)$$
Thus the determinant is also invariant under a change of basis.
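The same numerical sketch (my own example) confirms the determinant is unchanged as well:

```python
# Sketch (illustrative example): det T(v) = det T(u), since the factors
# det(A^{-1}) and det(A) cancel.

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Inverse of a 2x2 matrix."""
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def det2(X):
    """Determinant of a 2x2 matrix."""
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[1.0, 1.0], [0.0, 2.0]]           # arbitrary change-of-basis operator
T_u = [[2.0, 1.0], [3.0, 4.0]]         # arbitrary operator in the u basis
T_v = matmul(inv2(A), matmul(T_u, A))  # its matrix in the v basis
print(det2(T_u), det2(T_v))            # 5.0 5.0
```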
