References: edX online course MIT 8.05 Week 6.

We’ll now look at a central theorem about normal operators, known as the *spectral theorem*.

We’ve seen that if a matrix $T$ has a set of eigenvectors that span the space, then we can diagonalize it by means of the similarity transformation

$$D = A^{-1}TA$$

where $D$ is diagonal and the columns of $A$ are the eigenvectors of $T$. In the general case, there’s no guarantee that the eigenvectors of $T$ are orthonormal. However, if there *is* an orthonormal basis in which $T$ is diagonal, then $T$ is said to be *unitarily diagonalizable*. Suppose we start with some arbitrary orthonormal basis $\left(|v_1\rangle,\ldots,|v_n\rangle\right)$ (we can always construct such a basis using the Gram-Schmidt procedure). Then if the set of eigenvectors of $T$ forms an orthonormal basis $\left(|u_1\rangle,\ldots,|u_n\rangle\right)$, there is a unitary matrix $U$ that transforms the basis $\left(|v_k\rangle\right)$ into the basis $\left(|u_k\rangle\right)$ (since unitary operators preserve inner products):

$$|u_k\rangle = U|v_k\rangle$$
Using this unitary operator, we therefore have for a unitarily diagonalizable operator $T$

$$D_T = U^{\dagger}TU$$
The spectral theorem now states:

**Theorem** *An operator $T$ in a complex vector space $V$ has an orthonormal basis of eigenvectors (that is, it’s unitarily diagonalizable) if and only if $T$ is normal.*

*Proof:* Since this is an ‘if and only if’ theorem, we need to prove it in both directions. First, suppose that $T$ is unitarily diagonalizable, so that $D_T = U^{\dagger}TU$ holds for some unitary $U$. Then

$$T = UD_TU^{\dagger},\qquad T^{\dagger} = UD_T^{\dagger}U^{\dagger}$$
The commutator is then, since $U^{\dagger}U = I$,

$$\left[T,T^{\dagger}\right] = UD_TU^{\dagger}UD_T^{\dagger}U^{\dagger} - UD_T^{\dagger}U^{\dagger}UD_TU^{\dagger} = U\left[D_T,D_T^{\dagger}\right]U^{\dagger} = 0$$

where the result follows because all diagonal matrices commute. Thus $T$ is normal, and this completes one direction of the proof.
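This direction is easy to check numerically. The following sketch (using numpy; the dimension, seed, and matrices are purely illustrative) builds $T = UD_TU^{\dagger}$ from a random unitary and a random complex diagonal matrix, then confirms that the commutator vanishes:

```python
import numpy as np

rng = np.random.default_rng(42)

# A random 4x4 unitary U, from the QR decomposition of a complex matrix.
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)

# A diagonal matrix with arbitrary complex entries.
D = np.diag(rng.standard_normal(4) + 1j * rng.standard_normal(4))

# A unitarily diagonalizable operator T = U D U†.
T = U @ D @ U.conj().T

# [T, T†] = 0, i.e. T is normal.
commutator = T @ T.conj().T - T.conj().T @ T
print(np.allclose(commutator, 0))  # True
```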

Going the other way is a bit trickier. We need to show that for any normal matrix $T$ with elements $T_{ij} = \langle v_i|T|v_j\rangle$ defined on some arbitrary orthonormal basis $\left(|v_1\rangle,\ldots,|v_n\rangle\right)$ (that is, a basis that is not necessarily composed of eigenvectors of $T$), there is a unitary matrix $U$ such that $U^{\dagger}TU$ is diagonal. Since we started with an orthonormal basis and $U$ preserves inner products, the new basis is also orthonormal, which will prove the theorem.

The proof uses mathematical induction, in which we first prove that the result is true for one specific dimension of vector space, say $n=1$. We can then assume the result is true for some dimension $n-1$ and, from that assumption, prove it is also true for the next higher dimension $n$.

Since any $1\times1$ matrix is diagonal (it consists of only one element), the result is true for $n=1$. So we now assume it’s true for a dimension of $n-1$ and prove it’s true for a dimension of $n$.

We take an arbitrary orthonormal basis of the $n$-dimensional space $V$ to be $\left(|v_1\rangle,\ldots,|v_n\rangle\right)$. In that basis, the matrix $T$ has elements $T_{ij} = \langle v_i|T|v_j\rangle$. We know that $T$ has at least one eigenvalue $\lambda_1$ (the characteristic polynomial always has at least one root over the complex numbers) with a normalized eigenvector $|x_1\rangle$:

$$T|x_1\rangle = \lambda_1|x_1\rangle$$
and, since $T$ is normal, the eigenvector $|x_1\rangle$ is also an eigenvector of $T^{\dagger}$, with the conjugate eigenvalue:

$$T^{\dagger}|x_1\rangle = \lambda_1^{*}|x_1\rangle$$
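This property is easy to verify for a concrete normal, non-hermitian matrix. As a sketch (the $2\times2$ rotation matrix here is my own illustrative example), a rotation is unitary and hence normal, and its eigenvector is also an eigenvector of the adjoint with the conjugate eigenvalue:

```python
import numpy as np

# A normal but non-hermitian matrix: a 2-d rotation (unitary, hence normal).
theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(T @ T.conj().T, T.conj().T @ T)  # T is normal

# One eigenpair (lam1, x1) of T.
eigvals, eigvecs = np.linalg.eig(T)
lam1, x1 = eigvals[0], eigvecs[:, 0]

# x1 is also an eigenvector of T†, with the conjugate eigenvalue.
print(np.allclose(T.conj().T @ x1, np.conjugate(lam1) * x1))  # True
```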
Starting with a basis of $V$ containing $|x_1\rangle$, we can use Gram-Schmidt to generate an orthonormal basis $\left(|x_1\rangle,|x_2\rangle,\ldots,|x_n\rangle\right)$. We now define an operator $U_1$ as follows:

$$U_1 = \sum_{k=1}^{n}|x_k\rangle\langle v_k|$$
$U_1$ is unitary, since

$$U_1^{\dagger}U_1 = \sum_{j=1}^{n}\sum_{k=1}^{n}|v_j\rangle\langle x_j|x_k\rangle\langle v_k| = \sum_{k=1}^{n}|v_k\rangle\langle v_k| = I$$
From its definition,

$$U_1|v_k\rangle = |x_k\rangle$$
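The construction of $U_1$ can be sketched in numpy, taking the standard basis as the $\left(|v_k\rangle\right)$ and extending $|x_1\rangle$ with an explicit Gram-Schmidt loop (the test matrix and seed are illustrative, not from the course):

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative normal matrix: a random hermitian T in the standard basis.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T = A + A.conj().T

# One normalized eigenvector x1 of T.
_, eigvecs = np.linalg.eig(T)
x1 = eigvecs[:, 0] / np.linalg.norm(eigvecs[:, 0])

# The original orthonormal basis (v_k): here, the standard basis.
vs = [np.eye(3, dtype=complex)[:, k] for k in range(3)]

# Gram-Schmidt: extend x1 to an orthonormal basis (x1, x2, x3), skipping
# candidate vectors that are already (nearly) in the span built so far.
xs = []
for v in [x1] + vs:
    w = v - sum(np.vdot(b, v) * b for b in xs)
    if np.linalg.norm(w) > 1e-10:
        xs.append(w / np.linalg.norm(w))

# U1 = sum_k |x_k><v_k|
U1 = sum(np.outer(xk, vk.conj()) for xk, vk in zip(xs, vs))

print(np.allclose(U1.conj().T @ U1, np.eye(3)))  # True: U1 is unitary
print(np.allclose(U1 @ vs[0], x1))               # True: U1|v_1> = |x_1>
```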
Now consider the matrix $B$ defined as

$$B = U_1^{\dagger}TU_1$$
$B$ is also normal, as can be verified by calculating the commutator $\left[B,B^{\dagger}\right]$ and using $\left[T,T^{\dagger}\right]=0$. Further,

$$B|v_1\rangle = U_1^{\dagger}TU_1|v_1\rangle = U_1^{\dagger}T|x_1\rangle = \lambda_1U_1^{\dagger}|x_1\rangle = \lambda_1|v_1\rangle$$

Thus $|v_1\rangle$ is an eigenvector of $B$ with eigenvalue $\lambda_1$.

The matrix elements in the first column of $B$ in the original basis are

$$B_{k1} = \langle v_k|B|v_1\rangle = \lambda_1\langle v_k|v_1\rangle = \lambda_1\delta_{k1}$$
Thus all entries in the first column are zero except for the first row, where the entry is $\lambda_1$. How about the first row? Since $B$ is normal and $B|v_1\rangle = \lambda_1|v_1\rangle$, we also have $B^{\dagger}|v_1\rangle = \lambda_1^{*}|v_1\rangle$, so

$$B_{1k} = \langle v_1|B|v_k\rangle = \left(\langle v_k|B^{\dagger}|v_1\rangle\right)^{*} = \left(\lambda_1^{*}\langle v_k|v_1\rangle\right)^{*} = \lambda_1\delta_{1k}$$
Thus all entries in the first row, except the first, are also zero. Thus in the original basis we have

$$B = \begin{pmatrix}\lambda_1 & 0\\ 0 & B'\end{pmatrix}$$

where $B'$ is an $(n-1)\times(n-1)$ matrix. We have

$$BB^{\dagger} = \begin{pmatrix}\left|\lambda_1\right|^2 & 0\\ 0 & B'B'^{\dagger}\end{pmatrix},\qquad B^{\dagger}B = \begin{pmatrix}\left|\lambda_1\right|^2 & 0\\ 0 & B'^{\dagger}B'\end{pmatrix}$$
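The block structure of $B$ can be checked numerically. This sketch assumes, purely for illustration, a random hermitian $T$, and uses a QR decomposition (with a phase fix) to play the role of Gram-Schmidt in building a unitary $U_1$ whose first column is exactly $x_1$:

```python
import numpy as np

rng = np.random.default_rng(1)

# An illustrative normal matrix: a random hermitian T.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A + A.conj().T

# One eigenpair (lam1, x1) of T.
eigvals, eigvecs = np.linalg.eigh(T)
lam1, x1 = eigvals[0], eigvecs[:, 0]

# A unitary U1 whose first column is exactly x1: QR extends x1 to an
# orthonormal basis, and the overall phase is fixed so column 0 equals x1.
Q, _ = np.linalg.qr(np.column_stack([x1, rng.standard_normal((4, 3))]))
U1 = Q * np.vdot(Q[:, 0], x1)

B = U1.conj().T @ T @ U1

print(np.allclose(B[0, 1:], 0))   # True: first row vanishes off the corner
print(np.allclose(B[1:, 0], 0))   # True: first column vanishes off the corner
print(np.isclose(B[0, 0], lam1))  # True: the corner entry is lam1

# The lower-right (n-1)x(n-1) block B' is itself normal.
Bp = B[1:, 1:]
print(np.allclose(Bp @ Bp.conj().T, Bp.conj().T @ Bp))  # True
```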
Since $B$ is normal, we must have $BB^{\dagger} = B^{\dagger}B$, which implies

$$B'B'^{\dagger} = B'^{\dagger}B'$$
so that $B'$ is also a normal matrix. By the induction hypothesis, since $B'$ is an $(n-1)\times(n-1)$ normal matrix, it is unitarily diagonalizable by some $(n-1)\times(n-1)$ unitary matrix $U'$; that is,

$$D' = U'^{\dagger}B'U'$$

is diagonal. We can extend $U'$ to an $n\times n$ unitary matrix $U_2$ by adding a 1 to the upper left:

$$U_2 = \begin{pmatrix}1 & 0\\ 0 & U'\end{pmatrix}$$
We can check that $U_2$ is unitary by direct calculation, using $U'^{\dagger}U' = I_{n-1}$:

$$U_2^{\dagger}U_2 = \begin{pmatrix}1 & 0\\ 0 & U'^{\dagger}U'\end{pmatrix} = I_n$$
We then have, using the block forms of $B$ and $U_2$,

$$U_2^{\dagger}BU_2 = \begin{pmatrix}1 & 0\\ 0 & U'^{\dagger}\end{pmatrix}\begin{pmatrix}\lambda_1 & 0\\ 0 & B'\end{pmatrix}\begin{pmatrix}1 & 0\\ 0 & U'\end{pmatrix} = \begin{pmatrix}\lambda_1 & 0\\ 0 & D'\end{pmatrix}$$

That is, $U_2^{\dagger}BU_2$ is diagonal. From the definition $B = U_1^{\dagger}TU_1$, we now have

$$U_2^{\dagger}BU_2 = U_2^{\dagger}U_1^{\dagger}TU_1U_2 = \left(U_1U_2\right)^{\dagger}T\left(U_1U_2\right)$$
Since the product of two unitary matrices is unitary, we have found a unitary operator $U = U_1U_2$ that diagonalizes $T$, which proves the result.
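The induction step translates directly into a recursive algorithm. The following sketch (the helper `unitary_diagonalizer` is my own, not from the course) diagonalizes a normal matrix exactly as in the proof: peel off one eigenvector, conjugate to block form, and recurse on the $(n-1)\times(n-1)$ block:

```python
import numpy as np

def unitary_diagonalizer(T):
    """Return a unitary U such that U† T U is diagonal, for normal T,
    by applying the induction step of the proof recursively."""
    n = T.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex)
    # One normalized eigenvector x1 of T.
    _, eigvecs = np.linalg.eig(T)
    x1 = eigvecs[:, 0] / np.linalg.norm(eigvecs[:, 0])
    # U1: unitary whose first column is exactly x1 (QR extends x1 to an
    # orthonormal basis; the phase factor makes column 0 equal to x1).
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(np.column_stack([x1, rng.standard_normal((n, n - 1))]))
    U1 = Q * np.vdot(Q[:, 0], x1)
    # B = U1† T U1 has the block form [[lam1, 0], [0, B']].
    B = U1.conj().T @ T @ U1
    # Recurse on B', then embed the resulting U' as U2.
    Uprime = unitary_diagonalizer(B[1:, 1:])
    U2 = np.eye(n, dtype=complex)
    U2[1:, 1:] = Uprime
    return U1 @ U2

# Try it on a normal, non-hermitian matrix: a random 5x5 unitary.
rng = np.random.default_rng(7)
Z = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
Tn, _ = np.linalg.qr(Z)

U = unitary_diagonalizer(Tn)
D = U.conj().T @ Tn @ U
print(np.allclose(U.conj().T @ U, np.eye(5)))  # True: U is unitary
print(np.allclose(D, np.diag(np.diag(D))))     # True: U† T U is diagonal
```

In practice one would of course call a library routine (e.g. a Schur decomposition) rather than this recursion, but the sketch shows that the proof is constructive.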

Notice that the proof didn’t assume that the eigenvalues are nondegenerate, so even if there are several linearly independent eigenvectors corresponding to one eigenvalue, it is still possible to find an orthonormal basis consisting of eigenvectors. In particular, since every hermitian or unitary operator is normal, it is always possible to find an orthonormal basis of the vector space consisting of eigenvectors of such an operator.

In the general case, a normal matrix in an $n$-dimensional vector space can have $m$ distinct eigenvalues, where $m\le n$. If $m=n$, there is no degeneracy and each eigenvalue has a unique (up to a scalar multiple) eigenvector. If $m<n$, then one or more of the eigenvalues occurs more than once, and the eigenvector subspace corresponding to a degenerate eigenvalue has a dimension larger than 1. However, the spectral theorem guarantees that it is possible to choose an orthonormal basis within each subspace, and that each subspace is orthogonal to all the other subspaces.

More precisely, the vector space $V$ can be decomposed into subspaces $U_k$ for $k=1,\ldots,m$, with the dimension $d_k$ of subspace $U_k$ equal to the degeneracy of eigenvalue $\lambda_k$. The full space is the direct sum of these subspaces:

$$V = U_1\oplus U_2\oplus\cdots\oplus U_m$$
It’s usually most convenient to order the eigenvectors as follows:

$$\left(|\lambda_1,1\rangle,\ldots,|\lambda_1,d_1\rangle,\;|\lambda_2,1\rangle,\ldots,|\lambda_2,d_2\rangle,\;\ldots,\;|\lambda_m,1\rangle,\ldots,|\lambda_m,d_m\rangle\right)$$

The notation $|\lambda_i,k\rangle$ means the $k$th eigenvector belonging to eigenvalue $\lambda_i$.

In practice, there is a lot of freedom in choosing orthonormal eigenvectors for degenerate eigenvalues, since we can pick any $d_k$ mutually orthogonal vectors within the subspace of dimension $d_k$. For example, in 3-d space we usually choose the unit vectors $\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}$ as the orthonormal set, but we can rotate these three vectors about the origin, or even reflect them in a plane passing through the origin, and still get an orthonormal set of 3 vectors.

The diagonal form of the normal matrix $T$ in this orthonormal basis is

$$T = \begin{pmatrix}\lambda_1 I_{d_1} & & & \\ & \lambda_2 I_{d_2} & & \\ & & \ddots & \\ & & & \lambda_m I_{d_m}\end{pmatrix}$$

Here, eigenvalue $\lambda_i$ occurs $d_i$ times along the diagonal, as the block $\lambda_i I_{d_i}$, where $I_{d_i}$ is the $d_i\times d_i$ identity matrix.
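As a numerical illustration of the degenerate case (the example matrix is my own, not from the course), conjugating $\mathrm{diag}(2,2,5)$ by a random unitary gives a hermitian matrix with $d_1=2$, $d_2=1$; `numpy.linalg.eigh` still returns a fully orthonormal eigenbasis:

```python
import numpy as np

rng = np.random.default_rng(3)

# A hermitian matrix with a doubly degenerate eigenvalue: conjugate
# diag(2, 2, 5) by a random unitary W, so that d_1 = 2 and d_2 = 1.
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
W, _ = np.linalg.qr(Z)
T = W @ np.diag([2.0, 2.0, 5.0]) @ W.conj().T

# eigh returns an orthonormal eigenbasis, even within the degenerate
# lambda = 2 subspace.
eigvals, V = np.linalg.eigh(T)
print(np.allclose(eigvals, [2, 2, 5]))                    # True
print(np.allclose(V.conj().T @ V, np.eye(3)))             # True
print(np.allclose(V.conj().T @ T @ V, np.diag(eigvals)))  # True
```

Mixing the first two columns of `V` by any further $2\times2$ unitary gives an equally valid orthonormal eigenbasis, reflecting the freedom within a degenerate subspace discussed above.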