References: edX online course MIT 8.05.1x Week 3.

Sheldon Axler (2015), *Linear Algebra Done Right*, 3rd edition, Springer. Chapter 2.

Here, we investigate the idea of the span of a list of vectors and see how this leads to the idea of linear independence of a set of vectors. I'll summarize the main definitions and results here for future use; a more complete explanation together with some examples is given in Axler's book, Chapter 2.

**Span of a list of vectors**

A *list* of vectors is just a subset of the vectors in a vector space $V$, with the condition that the number of vectors in the subset is finite. The set of all linear combinations of the vectors in a list is called the *span* of that list, written $\operatorname{span}\left(v_{1},\ldots,v_{n}\right)$. Since a general linear combination has the form

$$v=a_{1}v_{1}+a_{2}v_{2}+\cdots+a_{n}v_{n} \qquad (1)$$

where $a_{i}\in\mathbb{F}$ (recall that the field $\mathbb{F}$ is always taken to be either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$), the span of a list itself forms a vector space which is a subspace of the original vector space $V$. One result we can show is
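As a quick numerical illustration (not part of Axler's text; this sketch assumes NumPy and the vectors are chosen purely for the example), we can test whether a vector lies in the span of a list by checking whether appending it as a matrix column increases the rank:

```python
import numpy as np

# Two vectors in R^3, chosen purely for illustration.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([v1, v2])

def in_span(A, x):
    """x lies in the span of A's columns iff appending x as a
    new column does not increase the matrix rank."""
    return bool(np.linalg.matrix_rank(np.column_stack([A, x]))
                == np.linalg.matrix_rank(A))

w = 3 * v1 - 2 * v2            # a linear combination, so w is in the span
u = np.array([0.0, 0.0, 1.0])  # a*v1 + b*v2 = (a, b, 2a+b) can never equal u

print(in_span(A, w), in_span(A, u))  # True False
```

The rank test works because the span of the columns only grows when a new column is not already a linear combination of the existing ones.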

**Theorem 1** The span of a list of vectors in a vector space $V$ is the smallest subspace of $V$ containing all the vectors in the list.

*Proof:* Let the list be $\left(v_{1},\ldots,v_{n}\right)$. Then $\operatorname{span}\left(v_{1},\ldots,v_{n}\right)$ is a subspace since it contains the zero vector if all $a_{i}$s are zero in (1), and since it contains all linear combinations of the list, it is closed under addition and scalar multiplication.

The span contains all the $v_{j}$s (just set $a_{j}=1$ and all other $a_{i}=0$ in (1)). Now if we look at a subspace of $V$ that contains all the $v_{j}$s, it must also contain every vector in $\operatorname{span}\left(v_{1},\ldots,v_{n}\right)$, since a subspace must be closed under addition and scalar multiplication. Thus $\operatorname{span}\left(v_{1},\ldots,v_{n}\right)$ is the smallest subspace of $V$ that contains all the vectors in the list.

If $\operatorname{span}\left(v_{1},\ldots,v_{n}\right)=V$, that is, the span of a list is the same as the original vector space, then we say that $\left(v_{1},\ldots,v_{n}\right)$ *spans* $V$. This leads to the definition that a vector space is called *finite-dimensional* if it is spanned by some list of vectors. (Remember that all lists are finite in length!) A vector space that is not finite-dimensional is called (not surprisingly) *infinite-dimensional*.

**Linear independence**

Suppose we have a list $\left(v_{1},\ldots,v_{n}\right)$ and $v$ is a vector such that $v\in\operatorname{span}\left(v_{1},\ldots,v_{n}\right)$. This means that $v$ is a linear combination of the $v_{i}$s, so that (1) is true for some scalars $a_{i}$. However, using only the definitions above, there is no guarantee that there is only one choice for the scalars that satisfies (1). We might also have, for example

$$v=b_{1}v_{1}+b_{2}v_{2}+\cdots+b_{n}v_{n} \qquad (2)$$

where $b_{i}\ne a_{i}$ for at least one $i$. This means that we can write the zero vector as

$$0=\left(a_{1}-b_{1}\right)v_{1}+\left(a_{2}-b_{2}\right)v_{2}+\cdots+\left(a_{n}-b_{n}\right)v_{n} \qquad (3)$$

Now, if the *only* way we can satisfy this equation is to require that $a_{i}-b_{i}=0$ for all $i$, then we say that the list $\left(v_{1},\ldots,v_{n}\right)$ is *linearly independent*. (For completeness, the empty list (containing no vectors) is also declared to be linearly independent.) By reversing the above argument, we see that if the list is linearly independent, then there is only one set of scalars $a_{i}$ such that (1) is satisfied. In other words, any vector in the span has only one representation as a linear combination of the vectors in the list.
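Numerically, the "only the trivial combination gives zero" condition is equivalent to the matrix whose columns are the list vectors having full column rank. A minimal sketch, again assuming NumPy and example vectors of my own choosing:

```python
import numpy as np

def linearly_independent(vectors):
    """The only solution of a1*v1 + ... + an*vn = 0 is all ai = 0
    exactly when the matrix with the vectors as columns has full
    column rank (rank equal to the number of vectors)."""
    A = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(A) == len(vectors))

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

print(linearly_independent([e1, e2]))           # True
print(linearly_independent([e1, e2, e1 + e2]))  # False: third vector is the sum of the first two
```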

A list that is not linearly independent is, again not surprisingly, defined to be *linearly dependent*. This leads to the linear dependence lemma:

**Lemma 2** Suppose $\left(v_{1},\ldots,v_{n}\right)$ is a linearly dependent list in $V$. Then there exists some $j\in\left\{ 1,2,\ldots,n\right\}$ such that

(a) $v_{j}\in\operatorname{span}\left(v_{1},\ldots,v_{j-1}\right)$;

(b) if $v_{j}$ is removed from the list $\left(v_{1},\ldots,v_{n}\right)$, the span of the remaining list, containing $n-1$ vectors, equals the span of the original list.

*Proof:* Because $\left(v_{1},\ldots,v_{n}\right)$ is linearly dependent, we can write

$$a_{1}v_{1}+a_{2}v_{2}+\cdots+a_{n}v_{n}=0 \qquad (4)$$

where not all of the $a_{i}$s are zero. Suppose $j$ is the largest index where $a_{j}\ne0$. Then we can divide through by $a_{j}$ to get

$$v_{j}=-\frac{a_{1}}{a_{j}}v_{1}-\cdots-\frac{a_{j-1}}{a_{j}}v_{j-1} \qquad (5)$$

Thus $v_{j}$ is a linear combination of other vectors in the list, which proves part (a). Part (b) follows from the fact that we can represent any vector $u\in\operatorname{span}\left(v_{1},\ldots,v_{n}\right)$ as

$$u=c_{1}v_{1}+c_{2}v_{2}+\cdots+c_{n}v_{n} \qquad (6)$$

We can replace $v_{j}$ in this sum by (5), so $u$ can be written as a linear combination of all the vectors in the list except for $v_{j}$. Thus (b) is true.
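The lemma can be checked numerically for a concrete dependent list (a sketch assuming NumPy; the vectors are my own illustrative choice): removing the dependent vector leaves the rank, and hence the span, unchanged.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = 2 * v1 - v2   # v3 is a combination of earlier vectors, so the list is dependent

full = np.column_stack([v1, v2, v3])
reduced = np.column_stack([v1, v2])   # drop v3, as the lemma allows

# Both matrices have rank 2, and span(v1, v2) is contained in
# span(v1, v2, v3), so the two spans are equal.
print(np.linalg.matrix_rank(full), np.linalg.matrix_rank(reduced))  # 2 2
```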

We can use this lemma to prove the main result about linearly independent lists:

**Theorem 3** In a finite-dimensional vector space $V$, the length of every linearly independent list is less than or equal to the length of every list that spans $V$.

*Proof:* Suppose the list $\left(u_{1},\ldots,u_{m}\right)$ is linearly independent in $V$, and suppose another list $\left(w_{1},\ldots,w_{n}\right)$ spans $V$. We want to prove that $m\le n$.

Since $\left(w_{1},\ldots,w_{n}\right)$ already spans $V$, if we add any other vector from $V$ to this list, we will get a linearly dependent list, since the newly added vector can, by the definition of a span, be expressed as a linear combination of the vectors in $\left(w_{1},\ldots,w_{n}\right)$. In particular, if we add $u_{1}$ from the list $\left(u_{1},\ldots,u_{m}\right)$ to get $\left(u_{1},w_{1},\ldots,w_{n}\right)$, this list is linearly dependent. By the linear dependence lemma above, we can therefore remove one of the $w$s from $\left(u_{1},w_{1},\ldots,w_{n}\right)$ so that the remaining list still spans $V$ and contains $n$ vectors. (The lemma removes a $w$ rather than $u_{1}$, since $u_{1}$ is the first vector in the list and, being nonzero, cannot lie in the span of the empty list of vectors before it.) For the sake of argument, let's say we remove $w_{n}$ (we can always order the $w$s in the list so that the element we remove is at the end). Then we're left with the revised list $\left(u_{1},w_{1},\ldots,w_{n-1}\right)$.

We can repeat this process $m$ times, each time adding the next element $u_{i}$ from the list and removing the last $w$. Because of the linear dependence lemma, we know that there must always be a $w$ that can be removed each time we add a $u$ (the removed vector can never be one of the $u$s already in the list, since those are linearly independent), so there must be at least as many $w$s as $u$s. In other words, $m\le n$, which is what we wanted to prove.

This theorem can be used to show easily that any list of more than $n$ vectors in $n$-dimensional space cannot be linearly independent, since we know that we can span $n$-dimensional space with $n$ vectors (for example, the 3 coordinate axes in 3-d space). Conversely, since we can find a list of $n$ vectors in $n$-dimensional space that is linearly independent, any list of fewer than $n$ vectors cannot span $n$-dimensional space.
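As an illustration of the first claim (a sketch assuming NumPy, with randomly generated vectors rather than anything from the source): any four vectors in $\mathbb{R}^{3}$ form the columns of a $3\times4$ matrix, whose rank is at most 3, so the list can never be linearly independent.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed, purely illustrative
vectors = [rng.standard_normal(3) for _ in range(4)]  # four vectors in R^3
A = np.column_stack(vectors)

# The rank of a 3x4 matrix is at most 3, so four vectors in R^3
# cannot form a linearly independent list (rank would need to be 4).
print(np.linalg.matrix_rank(A))
```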

**Basis of a finite-dimensional vector space**

A *basis* of a finite-dimensional vector space is defined to be a list that is both linearly independent and spans the space. The *dimension* of the vector space is defined to be the length of a basis list. For example, in 3-d space, the list $\left(\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\right)$ of unit vectors along the coordinate axes is a basis, and since the length is 3, the dimension of the vector space is also 3. Any proper subset (that is, a subset with fewer than 3 members) of this basis is also linearly independent, but it does not span the space, so it is not a basis. For example, the list $\left(\hat{\mathbf{x}},\hat{\mathbf{y}}\right)$ is linearly independent, but spans only the $xy$ plane.
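Both conditions on a basis can be verified with the same rank test used above (a sketch assuming NumPy; the standard unit vectors stand in for $\hat{\mathbf{x}}$, $\hat{\mathbf{y}}$, $\hat{\mathbf{z}}$):

```python
import numpy as np

basis = [np.array([1.0, 0.0, 0.0]),   # x-hat
         np.array([0.0, 1.0, 0.0]),   # y-hat
         np.array([0.0, 0.0, 1.0])]   # z-hat
A = np.column_stack(basis)

# A basis must span R^3 (rank 3) and be independent (rank = list length).
is_basis = bool(np.linalg.matrix_rank(A) == 3 == len(basis))
print(is_basis)  # True; the dimension equals the basis length, 3

# A proper sub-list stays independent but spans only a plane:
B = np.column_stack(basis[:2])
print(np.linalg.matrix_rank(B))  # 2, so it cannot span R^3
```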

A couple of examples of linear independence/dependence can be found here.
