References: edX online course MIT 8.05.1x Week 4.
Sheldon Axler (2015), Linear Algebra Done Right, 3rd edition, Springer. Chapter 6.
A list of vectors $e_{1},e_{2},\ldots$ is orthonormal if

$$\left\langle e_{i},e_{j}\right\rangle =\delta_{ij}\tag{1}$$

That is, any pair of distinct vectors is orthogonal, and all the vectors have norm 1. In 3-d space, the unit vectors along the three axes form an orthonormal list.
Given an orthonormal list $e_{1},\ldots,e_{m}$, we can construct a vector $v$ from the vectors in that list by

$$v=\sum_{i=1}^{m}a_{i}e_{i}\tag{2}$$

for scalars $a_{i}$. The norm of $v$ has a simple form:

$$\begin{align}
\left\Vert v\right\Vert ^{2} & =\left\langle v,v\right\rangle \tag{3}\\
 & =\sum_{i,j}a_{i}^{*}a_{j}\left\langle e_{i},e_{j}\right\rangle \tag{4}\\
 & =\sum_{i}\left|a_{i}\right|^{2}+\text{zero terms}\tag{5}\\
 & =\sum_{i}\left|a_{i}\right|^{2}\tag{6}
\end{align}$$
The ‘zero terms’ are the cross terms involving $\left\langle e_{i},e_{j}\right\rangle$ for $i\ne j$, which are all zero because of (1).
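As a quick numerical check of the norm formula, here is a minimal NumPy sketch (NumPy and the specific real vectors and coefficients are my choices for illustration, not part of the notes):

```python
import numpy as np

# An orthonormal list in R^3 (arbitrary example): two rotated unit vectors.
s = 1 / np.sqrt(2)
e1 = np.array([s, s, 0.0])
e2 = np.array([s, -s, 0.0])

a = [2.0, -3.0]                      # arbitrary coefficients a_i
v = a[0] * e1 + a[1] * e2            # v = sum_i a_i e_i

# ||v||^2 should equal sum_i |a_i|^2 by the derivation above.
norm_sq = np.dot(v, v)
coeff_sq = sum(ai**2 for ai in a)
print(np.isclose(norm_sq, coeff_sq))  # True
```

The check works for any orthonormal list; replacing `e1`, `e2` with non-orthogonal vectors makes the cross terms reappear and the equality fail.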
This result shows that an orthonormal list of vectors is linearly independent, since if we form the linear combination

$$\sum_{i}a_{i}e_{i}=0\tag{7}$$

then $\left\Vert \sum_{i}a_{i}e_{i}\right\Vert ^{2}=0$, so from (6) we must have all $a_{i}=0$, which means the list is linearly independent.
If we have an orthonormal list $e_{1},\ldots,e_{n}$ that is also a basis for the vector space $V$, then any vector $x\in V$ can be written as

$$x=\sum_{i=1}^{n}a_{i}e_{i}\tag{8}$$

The coefficients can be found by taking the inner product $a_{j}=\left\langle e_{j},x\right\rangle$ (using (1)), so we have

$$x=\sum_{i=1}^{n}\left\langle e_{i},x\right\rangle e_{i}\tag{9}$$
For example, in 3-d space, the three unit vectors along the axes form an orthonormal basis for the space. However, the unit vectors along the $x$ and $y$ axes alone form an orthonormal list, but this is not a basis for 3-d space since no vector with a $z$ component can be written as a linear combination of these two vectors.
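Extracting coefficients via inner products can be sketched numerically as follows (a NumPy illustration with an arbitrary non-standard orthonormal basis of my choosing):

```python
import numpy as np

# A non-standard orthonormal basis of R^3 (hypothetical example).
s = 1 / np.sqrt(2)
e = [np.array([s, s, 0.0]),
     np.array([s, -s, 0.0]),
     np.array([0.0, 0.0, 1.0])]

x = np.array([3.0, 1.0, -2.0])        # arbitrary vector to expand

# Coefficients a_i = <e_i, x>, as in the formula above.
a = [np.dot(ei, x) for ei in e]

# Reconstruct x from the expansion x = sum_i a_i e_i.
x_rec = sum(ai * ei for ai, ei in zip(a, e))
print(np.allclose(x_rec, x))          # True
```

Dropping the third basis vector reproduces the $x$-$y$ example above: the reconstruction then loses the $z$ component of `x`.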
If we have any basis (not necessarily orthonormal), we can form an orthonormal basis using the Gram-Schmidt orthogonalization procedure. We’ve already met this in the context of quantum mechanics, and the derivation for a general finite-dimensional vector space is much the same, so I’ll just quote the result. The procedure is iterative and follows these steps:
The first vector in the orthonormal basis is defined by

$$e_{1}=\frac{v_{1}}{\left\Vert v_{1}\right\Vert }\tag{10}$$

where $v_{1}$ is the first vector (well, any vector, really) in the non-orthonormal basis.
Given vectors $e_{1},\ldots,e_{j-1}$ in the orthonormal basis, we can form $e_{j}$ from the formula

$$e_{j}=\frac{v_{j}-\sum_{i=1}^{j-1}\left\langle e_{i},v_{j}\right\rangle e_{i}}{\left\Vert v_{j}-\sum_{i=1}^{j-1}\left\langle e_{i},v_{j}\right\rangle e_{i}\right\Vert }\tag{11}$$
$e_{j}$ clearly has norm 1, and we can check that $\left\langle e_{i},e_{j}\right\rangle =0$ for $i<j$ by direct calculation. Note that although we’ve indexed the vectors $v_{i}$ in the original basis, we can take them in any order when calculating the orthonormal basis via the Gram-Schmidt procedure.
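The iterative procedure translates almost line-for-line into code. Here is a minimal NumPy sketch (the function name and the sample basis are my own; real vectors are used for simplicity):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of real vectors,
    following the iterative formula quoted above."""
    basis = []
    for v in vectors:
        # Subtract the components of v along the vectors found so far,
        # then normalize what remains.
        w = v - sum(np.dot(e, v) * e for e in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

# An arbitrary non-orthonormal basis of R^3.
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

# Check orthonormality: the Gram matrix <e_i, e_j> should be the identity.
G = np.array([[np.dot(ei, ej) for ej in es] for ei in es])
print(np.allclose(G, np.eye(3)))  # True
```

Feeding the input vectors in a different order yields a different, but still orthonormal, basis, consistent with the note above about ordering.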
Suppose we have a subset $U$ (not necessarily a subspace) of a vector space $V$. Then we can define the orthogonal complement $U^{\perp}$ of $U$ as the set of all vectors $v\in V$ that are orthogonal to all vectors $u\in U$. More formally:

$$U^{\perp}=\left\{ v\in V:\left\langle v,u\right\rangle =0\text{ for all }u\in U\right\} \tag{12}$$
A useful general theorem is as follows.
Theorem 1 If $U$ is a subspace of $V$, then $V=U\oplus U^{\perp}$. (Recall the direct sum.)
Proof: Given an orthonormal basis of $U$: $e_{1},\ldots,e_{m}$, we can write any $v\in V$ as the sum

$$v=\sum_{i=1}^{m}\left\langle e_{i},v\right\rangle e_{i}+\left(v-\sum_{i=1}^{m}\left\langle e_{i},v\right\rangle e_{i}\right)\tag{13}$$

On the RHS, we’ve just added and subtracted the same term from $v$. Since the first term is a linear combination of the basis vectors of $U$, it is a vector in $U$. To see that the second term is in $U^{\perp}$, take the inner product with any of the basis vectors $e_{j}$:

$$\left\langle e_{j},v-\sum_{i=1}^{m}\left\langle e_{i},v\right\rangle e_{i}\right\rangle =\left\langle e_{j},v\right\rangle -\left\langle e_{j},v\right\rangle =0\tag{14}$$
Finally, since the two vector spaces in a direct sum can have only the zero vector in their intersection, we need to show that $U\cap U^{\perp}=\left\{ 0\right\}$. However, if a vector $u$ is in both $U$ and $U^{\perp}$ then it must be orthogonal to itself, so $\left\langle u,u\right\rangle =0$, which implies $u=0$.
Thus any vector space $V$ can be decomposed into two orthogonal subspaces (assuming that $V$ has any subspaces other than $\left\{ 0\right\}$ and $V$ itself).
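The splitting used in the proof can be verified numerically. Below is a NumPy sketch (the subspace, its orthonormal basis, and the test vector are all arbitrary choices of mine) that decomposes a vector into its piece in $U$ plus its piece in $U^{\perp}$:

```python
import numpy as np

# Let U be the xy-plane in R^3, with orthonormal basis e1, e2.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

v = np.array([2.0, -1.0, 5.0])        # arbitrary vector in R^3

# Split v as in the proof: u = sum_i <e_i, v> e_i lies in U,
# and w = v - u lies in the orthogonal complement of U.
u = np.dot(e1, v) * e1 + np.dot(e2, v) * e2
w = v - u

print(np.allclose(u + w, v))          # True: v = u + w
print(np.isclose(np.dot(w, e1), 0.0)
      and np.isclose(np.dot(w, e2), 0.0))  # True: w is orthogonal to U
```

Here `w` carries exactly the $z$ component of `v`, which is the orthogonal complement of the $xy$-plane, as expected.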