# Diagonalization of matrices

References: edX online course MIT 8.05 Week 6.

Suppose we have an operator ${T}$ that has a matrix representation ${T\left(\left\{ v\right\} \right)}$ in some basis ${v}$. In some cases (not all!) it is possible to transform to a different basis ${u}$ in which ${T\left(\left\{ u\right\} \right)}$ is a diagonal matrix. If the operator ${A}$ transforms from the basis ${v}$ to ${u}$, then we’ve seen that ${T}$ transforms according to

$\displaystyle T\left(\left\{ u\right\} \right)=A^{-1}T\left(\left\{ v\right\} \right)A \ \ \ \ \ (1)$

The diagonalization problem is therefore to find the matrix ${A}$ (if it exists).

Assuming ${A}$ does exist, we can look at the situation in two ways.

1. The matrix representation of ${T}$ is diagonal in the ${u}$ basis.
2. The operator ${A^{-1}TA}$ is diagonal in the original ${v}$ basis.

To prove the second statement, we start with the fact that ${T}$ is diagonal in the ${u}$ basis, so that (no implied sums in what follows):

$\displaystyle Tu_{i}=\lambda_{i}u_{i} \ \ \ \ \ (2)$

$\displaystyle TAv_{i}=\lambda_{i}Av_{i} \ \ \ \ \ (3)$

$\displaystyle A^{-1}TAv_{i}=\lambda_{i}A^{-1}Av_{i} \ \ \ \ \ (4)$

$\displaystyle A^{-1}TAv_{i}=\lambda_{i}v_{i} \ \ \ \ \ (5)$

From the last line, we see that ${v_{i}}$ is an eigenvector of the operator ${A^{-1}TA}$ with the same eigenvalue ${\lambda_{i}}$ as the eigenvector ${u_{i}}$ of the operator ${T}$.

Thus, working in the original basis ${v}$, we can define

$\displaystyle D_{T}\equiv A^{-1}TA \ \ \ \ \ (6)$

where ${D_{T}}$ is the diagonalized version of the matrix ${T}$. This is known as a similarity transformation.

All this is fine, but we still haven’t seen how to find ${A}$. It turns out that the columns of ${A}$ are the eigenvectors of ${T}$. We can see this from the following argument.

In the basis ${v}$, each ${v_{i}}$ has the form of a column vector with a 1 in the ${i}$th position and zeroes everywhere else. Thus the transformation to the ${u}$ basis can be written as

$\displaystyle u_{k}=Av_{k}=\left[\begin{array}{ccc} A_{11} & \ldots & A_{1n}\\ \vdots & \ddots & \vdots\\ A_{n1} & \ldots & A_{nn} \end{array}\right]\left[\begin{array}{c} \vdots\\ 1\\ \vdots \end{array}\right]=\left[\begin{array}{c} A_{1k}\\ \vdots\\ A_{nk} \end{array}\right] \ \ \ \ \ (7)$

The 1 is in the ${k}$th position in the column vector representing ${v_{k}}$ and picks out the elements in column ${k}$ of ${A}$ in the product ${Av_{k}}$. Since ${u_{k}}$ is the ${k}$th eigenvector of ${T}$, the columns of ${A}$ are the eigenvectors of ${T}$.
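This construction can be checked numerically. Below is a minimal NumPy sketch (the ${3\times3}$ matrix is an arbitrary illustrative choice, not from the text): `np.linalg.eig` returns a matrix whose columns are the eigenvectors, which is exactly the matrix ${A}$ described above.

```python
import numpy as np

# An arbitrary symmetric 3x3 matrix (illustrative choice, not from the text).
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are the
# corresponding eigenvectors -- the matrix A whose columns we just derived.
eigvals, A = np.linalg.eig(T)

# The similarity transformation D_T = A^{-1} T A should be diagonal,
# with the eigenvalues of T on the diagonal.
D_T = np.linalg.inv(A) @ T @ A
print(np.allclose(D_T, np.diag(eigvals)))  # True
```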

**Example** Suppose we have

$\displaystyle T=\left[\begin{array}{cc} 1 & 2\\ 3 & 1 \end{array}\right] \ \ \ \ \ (8)$

The eigenvalues are found in the usual way, by setting the characteristic determinant ${\det\left(T-\lambda I\right)}$ to zero:

$\displaystyle \left(1-\lambda\right)^{2}-6=0 \ \ \ \ \ (9)$

$\displaystyle \lambda=1\pm\sqrt{6} \ \ \ \ \ (10)$
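We can confirm these roots numerically (a NumPy sketch, not part of the original text):

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 1.0]])

# Roots of (1 - lambda)^2 - 6 = 0 should match numpy's eigenvalues.
expected = np.array([1 + np.sqrt(6), 1 - np.sqrt(6)])
computed = np.sort(np.linalg.eigvals(T))[::-1]  # sort descending to match
print(np.allclose(computed, expected))  # True
```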

We can find the eigenvectors by solving ${Tu_{i}=\lambda_{i}u_{i}}$ for each eigenvalue, where ${u_{i}}$ is a 2-element column vector. (I won’t go through this, since it’s just algebra.) Placing the eigenvectors as the columns in a matrix ${A}$, we have

$\displaystyle A=\left[\begin{array}{cc} \frac{\sqrt{6}}{3} & -\frac{\sqrt{6}}{3}\\ 1 & 1 \end{array}\right] \ \ \ \ \ (11)$
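As a sanity check (a NumPy sketch, not part of the original text), we can verify that these columns really are eigenvectors of ${T}$ with the eigenvalues found above:

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 1.0]])
lam = [1 + np.sqrt(6), 1 - np.sqrt(6)]
u = [np.array([np.sqrt(6)/3, 1.0]),    # column 1 of A, for lambda = 1 + sqrt(6)
     np.array([-np.sqrt(6)/3, 1.0])]   # column 2 of A, for lambda = 1 - sqrt(6)

# T u_i should equal lambda_i u_i for each eigenpair.
for l, v in zip(lam, u):
    print(np.allclose(T @ v, l * v))  # True, True
```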

The inverse of a ${2\times2}$ matrix ${A=\left[\begin{array}{cc} a & b\\ c & d \end{array}\right]}$ is

$\displaystyle A^{-1}=\frac{1}{ad-bc}\left[\begin{array}{cc} d & -b\\ -c & a \end{array}\right] \ \ \ \ \ (12)$
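The cofactor formula is easy to implement directly. A minimal sketch (the function name `inv2x2` is my own):

```python
import numpy as np

def inv2x2(M):
    """Invert a 2x2 matrix using the cofactor formula A^{-1} = [[d,-b],[-c,a]]/(ad-bc)."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return np.array([[d, -b], [-c, a]]) / det

# Apply it to the matrix A from the example; it should agree with NumPy's inverse.
A = np.array([[np.sqrt(6)/3, -np.sqrt(6)/3],
              [1.0, 1.0]])
print(np.allclose(inv2x2(A), np.linalg.inv(A)))  # True
```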

So for our matrix, we have

$\displaystyle A^{-1}=\left[\begin{array}{cc} \frac{\sqrt{6}}{4} & \frac{1}{2}\\ -\frac{\sqrt{6}}{4} & \frac{1}{2} \end{array}\right] \ \ \ \ \ (13)$

Doing the matrix products (just a lot of arithmetic) we get

$\displaystyle A^{-1}TA=\left[\begin{array}{cc} 1+\sqrt{6} & 0\\ 0 & 1-\sqrt{6} \end{array}\right] \ \ \ \ \ (14)$

The resulting matrix is diagonal, and the diagonal entries are the eigenvalues of ${T}$. It’s worth noticing that the traces of ${A^{-1}TA}$ and ${T}$ are the same (both = 2), as are the determinants (both ${=-5}$).
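The whole example, including the trace and determinant checks, can be reproduced numerically (a NumPy sketch):

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 1.0]])
A = np.array([[np.sqrt(6)/3, -np.sqrt(6)/3],
              [1.0, 1.0]])

# The similarity transformation should give the diagonal matrix of eigenvalues.
D = np.linalg.inv(A) @ T @ A
print(np.allclose(D, np.diag([1 + np.sqrt(6), 1 - np.sqrt(6)])))  # True

# Trace and determinant are invariant under a similarity transformation.
print(np.isclose(np.trace(D), np.trace(T)))            # True (both 2)
print(np.isclose(np.linalg.det(D), np.linalg.det(T)))  # True (both -5)
```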
