Vector spaces: definitions and examples

References: edX online course MIT 8.05.1x Week 3.

Sheldon Axler (2015), Linear Algebra Done Right, 3rd edition, Springer. Chapter 1.

It appears that one of my stumbling blocks in trying to get to grips with quantum field theory is an insufficient understanding of linear algebra, so here we’ll start looking at this subject in a bit more depth than is typical in an introductory course. In my undergraduate physics degree (back in the 1970s) I didn’t really get any further with quantum theory than the level covered in Griffiths’s introductory textbook. As good as this book is, it doesn’t give you enough background to leap into quantum field theory.

The foundation of linear algebra is the concept of a vector space. The definition of a vector space is as follows (a short code sketch spot-checking these axioms numerically appears after the list):

  • A vector space is a set {V} with two operations, addition and scalar multiplication, defined on the set.
  • Addition is a function that assigns an element {u+v\in V} to each pair of elements {u,v\in V}. Note that this definition implies closure, in the sense that every sum of two vectors in {V} must also be in {V}. This definition includes the traditional notion of vector addition in 2-d or 3-d space (that is, where a vector is represented by an arrow, and vector addition is performed by putting the tail of the second vector onto the head of the first and drawing the resulting vector as the sum), but vector addition is much more general than that.
  • Scalar multiplication is a function that assigns to each scalar {\lambda} from some field {\mathbb{F}} (in quantum theory, {\mathbb{F}} will always be either the set of real numbers {\mathbb{R}} or the set of complex numbers {\mathbb{C}}) and each vector {v\in V} another vector {\lambda v\in V}. Note that again, closure is implied by this definition: every vector {\lambda v} obtained through scalar multiplication must also be in the space {V}.
  • Addition is commutative, so that {u+v=v+u}.
  • Addition and scalar multiplication are associative, so that {\left(u+v\right)+w=u+\left(v+w\right)} and {\left(ab\right)v=a\left(bv\right)}, where {u,v,w\in V} and {a,b\in\mathbb{F}}.
  • There is an additive identity element {0\in V} such that {v+0=v} for all {v\in V}. Note that here 0 is a vector, not a scalar. In practice, there is also a zero scalar number which is also denoted by 0, so we need to rely on the context to tell whether 0 refers to a vector or a number. Usually this isn’t too hard.
  • Every vector {v\in V} has an additive inverse {w\in V} with the property that {v+w=0}. The additive inverse of {v} is written as {-v}, and {w-v} is defined to be {w+\left(-v\right)}.
  • There is a (scalar) multiplicative identity number 1 with the property that {1v=v} for all {v\in V}.
  • Scalar multiplication is distributive, in the sense that {a\left(u+v\right)=au+av} and {\left(a+b\right)v=av+bv} for all {a,b\in\mathbb{F}} and all {u,v\in V}.
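
To make the list above concrete, here is a minimal numerical sketch in Python (my own illustration, not part of the referenced course) that spot-checks each axiom for the familiar space {\mathbb{R}^{2}}, with vectors stored as tuples and scalars as floats. Passing these assertions on a few sample values is of course not a proof; it just shows what each axiom demands.

```python
# A spot-check of the vector-space axioms for R^2, with vectors as tuples of
# floats and scalars as floats. Passing these asserts is an illustration,
# not a proof.

def add(u, v):
    """Vector addition, component by component."""
    return (u[0] + v[0], u[1] + v[1])

def smul(a, v):
    """Multiplication of the vector v by the scalar a."""
    return (a * v[0], a * v[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
a, b = 2.0, -3.0
zero = (0.0, 0.0)

assert add(u, v) == add(v, u)                              # commutativity
assert add(add(u, v), w) == add(u, add(v, w))              # associativity of addition
assert smul(a * b, v) == smul(a, smul(b, v))               # associativity of scalar mult.
assert add(v, zero) == v                                   # additive identity
assert add(v, smul(-1.0, v)) == zero                       # additive inverse
assert smul(1.0, v) == v                                   # multiplicative identity
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))   # distributivity over vectors
assert smul(a + b, v) == add(smul(a, v), smul(b, v))       # distributivity over scalars
print("All axioms hold on these sample vectors.")
```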

Real and complex vector spaces

A real vector space is a vector space in which all the scalars are drawn from the set of real numbers {\mathbb{R}}, and a complex vector space is one where all the scalars are drawn from the set of complex numbers {\mathbb{C}}. It is important to note that we do not refer to the actual vectors as real or complex; they are simply vectors. The nature of the vector space is determined by the field {\mathbb{F}} from which the scalars are taken. This can be confusing to beginners, since the temptation is to look at some of the vectors in a vector space to see if they contain real or complex numbers and label the vector space based on that. That doesn’t always work, as the following example shows.

Example 1 The set of {N\times N} complex hermitian matrices is a real (not complex!) vector space. Recall that a hermitian matrix {M} is one whose complex conjugate transpose equals the original matrix, that is, {M^{\dagger}=M}.

To see this, look at a general {2\times2} hermitian matrix, which has the form

\displaystyle  M=\left[\begin{array}{cc} c+d & a-ib\\ a+ib & c-d \end{array}\right] \ \ \ \ \ (1)

where {a,b,c,d\in\mathbb{R}}. Each matrix {M} is a vector in this vector space. Note that with the general definition of a vector above, a matrix can be considered to be a vector. This illustrates that the notion of a vector as defined above is more general than a line with an arrow on one end.

With addition defined as the usual matrix addition, and scalar multiplication by a real number {x\in\mathbb{R}} also defined in the usual way for a matrix, that is

\displaystyle  xM=\left[\begin{array}{cc} x\left(c+d\right) & x\left(a-ib\right)\\ x\left(a+ib\right) & x\left(c-d\right) \end{array}\right] \ \ \ \ \ (2)

we can grind through the requirements above to verify that this set is a vector space. For example, if we have two hermitian matrices {M_{1}} and {M_{2}} then {\left(M_{1}+M_{2}\right)^{\dagger}=M_{1}^{\dagger}+M_{2}^{\dagger}=M_{1}+M_{2}}, so the sum is also hermitian. Similarly, since {x} is real, {\left(xM\right)^{\dagger}=xM^{\dagger}=xM}, and so on.

However, if we had chosen a complex number {z\in\mathbb{C}} as the scalar to multiply by, we’d get

\displaystyle   zM \displaystyle  = \displaystyle  \left[\begin{array}{cc} z\left(c+d\right) & z\left(a-ib\right)\\ z\left(a+ib\right) & z\left(c-d\right) \end{array}\right]\ \ \ \ \ (3)
\displaystyle  \left(zM\right)^{\dagger} \displaystyle  = \displaystyle  \left[\begin{array}{cc} z^*\left(c+d\right) & z^*\left(a-ib\right)\\ z^*\left(a+ib\right) & z^*\left(c-d\right) \end{array}\right]\ne zM\mbox{ (unless }z\mbox{ is real)} \ \ \ \ \ (4)

Thus even though the vectors {M} in this vector space contain complex numbers, the vector space to which they belong is a real vector space because the scalars used in scalar multiplication must be real.
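
As a numerical illustration of Example 1, here is a sketch assuming NumPy is available (the helper is_hermitian is my own, not a NumPy function): the sum of two hermitian matrices and a real multiple of one remain hermitian, while a complex multiple does not.

```python
# A sketch of Example 1 using NumPy (assumed available). The helper
# is_hermitian is my own, not a NumPy function.
import numpy as np

def is_hermitian(M):
    """True if M equals its own conjugate transpose."""
    return np.allclose(M, M.conj().T)

# Two sample 2x2 hermitian matrices of the form in equation (1).
M1 = np.array([[3.0, 1.0 - 2.0j],
               [1.0 + 2.0j, -1.0]])
M2 = np.array([[0.5, 2.0 + 1.0j],
               [2.0 - 1.0j, 4.0]])

print(is_hermitian(M1 + M2))   # True: vector addition stays in the space
print(is_hermitian(2.5 * M1))  # True: real scalar multiplication is closed
print(is_hermitian(1j * M1))   # False: a complex scalar takes us out of the space
```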

Example 2 The set of polynomials of degree {\le n}, where {n\ge0} is a fixed integer, is a vector space. Whether it is real or complex depends on which set of scalars we choose. A general polynomial of degree at most {n} is

\displaystyle  p\left(z\right)=a_{0}+a_{1}z+a_{2}z^{2}+\ldots+a_{n}z^{n} \ \ \ \ \ (5)

If all the {a_{i}} are real and {z\in\mathbb{R}}, and we choose our scalars from {\mathbb{R}}, then we have a real vector space. Addition of polynomials follows the usual rule. If

\displaystyle  q\left(z\right)=b_{0}+b_{1}z+b_{2}z^{2}+\ldots+b_{n}z^{n} \ \ \ \ \ (6)

then

\displaystyle  \left(p+q\right)\left(z\right)=\left(a_{0}+b_{0}\right)+\left(a_{1}+b_{1}\right)z+\left(a_{2}+b_{2}\right)z^{2}+\ldots+\left(a_{n}+b_{n}\right)z^{n} \ \ \ \ \ (7)

from which it’s fairly obvious that {p+q} is another polynomial of degree at most {n}. (The degree can drop if the leading coefficients cancel, which is why we must take the set of polynomials of degree {\le n} rather than of degree exactly {n}.) Scalar multiplication also works as expected:

\displaystyle  xp\left(z\right)=xa_{0}+xa_{1}z+xa_{2}z^{2}+\ldots+xa_{n}z^{n} \ \ \ \ \ (8)

so that {xp} is also in the vector space. The additive inverse of {p} above would be {q} if {b_{i}=-a_{i}} for all {i}.
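
Example 2 can be sketched in code by representing a polynomial of degree at most {n} by its coefficient list {\left[a_{0},a_{1},\ldots,a_{n}\right]}; addition and scalar multiplication then act coefficient by coefficient, as in equations (7) and (8). The helper names are my own.

```python
# A sketch of Example 2: a polynomial of degree at most n is stored as the
# coefficient list [a_0, a_1, ..., a_n].

def poly_add(p, q):
    """(p + q)(z): add coefficients term by term, as in equation (7)."""
    return [a + b for a, b in zip(p, q)]

def poly_smul(x, p):
    """(x p)(z): multiply every coefficient by the scalar x, as in equation (8)."""
    return [x * a for a in p]

p = [1.0, 0.0, 2.0]    # 1 + 2z^2
q = [-1.0, 3.0, -2.0]  # -1 + 3z - 2z^2

print(poly_add(p, q))                   # [0.0, 3.0, 0.0]: the degree dropped to 1
print(poly_smul(2.0, p))                # [2.0, 0.0, 4.0]
print(poly_add(p, poly_smul(-1.0, p)))  # [0.0, 0.0, 0.0]: the additive inverse works
```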

Example 3 The set of complex functions {f\left(x\right)} on a finite interval {x\in\left[0,L\right]} forms a complex vector space, with the scalars drawn from {\mathbb{C}}. Addition and scalar multiplication are defined in the usual way as

\displaystyle   \left(f_{1}+f_{2}\right)\left(x\right) \displaystyle  = \displaystyle  f_{1}\left(x\right)+f_{2}\left(x\right)\ \ \ \ \ (9)
\displaystyle  \left(af\right)\left(x\right) \displaystyle  = \displaystyle  af\left(x\right) \ \ \ \ \ (10)

We’ve seen such functions as solutions of the infinite square well in ordinary quantum mechanics.
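
Functions can likewise be treated as vectors directly in code, with addition and scalar multiplication defined pointwise as in equations (9) and (10). This is a small sketch; the sample functions and the choice {L=1} are arbitrary.

```python
# A sketch of Example 3: complex-valued functions on [0, L] treated as vectors,
# with addition and scalar multiplication defined pointwise as in equations
# (9) and (10). The sample functions and L = 1 are arbitrary choices.
import cmath

L = 1.0

def f1(x):
    return cmath.exp(1j * cmath.pi * x / L)

def f2(x):
    return x + 2j * x**2

def add_fn(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def smul_fn(a, f):
    """Pointwise scalar multiple: (a f)(x) = a * f(x)."""
    return lambda x: a * f(x)

# Build the vector f1 + 2i*f2 and evaluate it at a sample point.
h = add_fn(f1, smul_fn(2j, f2))
print(h(0.5))  # f1(0.5) + 2i*f2(0.5) = i + (-1 + i) = -1 + 2i
```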

There are several other properties of vector spaces which follow from the requirements above. We won’t go through all of them, but the proofs of a couple of the simpler ones are instructive as to how these sorts of results are derived. Note that in the following proofs, each step is justified by citing one of the properties above.

Theorem 1 The additive identity {0} is unique.

Proof: (by contradiction). Suppose there are two distinct additive identities {0} and {0^{\prime}}. Then

\displaystyle   0^{\prime} \displaystyle  = \displaystyle  0^{\prime}+0\mbox{ (since 0 is an additive identity)}\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  0+0^{\prime}\mbox{ (commutative addition)}\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  0\mbox{ (since \ensuremath{0^{\prime}} is an additive identity)} \ \ \ \ \ (13)
\Box

Theorem 2 The additive inverse of each vector in a vector space is unique.

Proof: (again by contradiction). Suppose {v\in V} has two different additive inverses {w} and {w^{\prime}}. Then

\displaystyle   w \displaystyle  = \displaystyle  w+0\mbox{ (additive identity)}\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  w+\left(v+w^{\prime}\right)\mbox{ (\ensuremath{w^{\prime}} is an additive inverse of \ensuremath{v})}\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  \left(w+v\right)+w^{\prime}\mbox{ (addition is associative)}\ \ \ \ \ (16)
\displaystyle  \displaystyle  = \displaystyle  0+w^{\prime}\mbox{ (\ensuremath{w} is an additive inverse of \ensuremath{v})}\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  w^{\prime}\mbox{ (additive identity)} \ \ \ \ \ (18)
\Box
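
For readers who like to see such arguments machine-checked, here is a sketch of both theorems in Lean 4 (core language only, no extra libraries), with the relevant axioms taken as explicit hypotheses on a type {V} rather than imported from a library’s vector-space classes; the theorem names are my own.

```lean
-- Theorem 1: the additive identity is unique. The commutativity and identity
-- axioms are passed in as explicit hypotheses on a type V with an addition.
theorem add_identity_unique {V : Type} [Add V]
    (comm : ∀ u v : V, u + v = v + u)
    (z z' : V)
    (hz  : ∀ v : V, v + z  = v)   -- z  is an additive identity
    (hz' : ∀ v : V, v + z' = v)   -- z' is an additive identity
    : z' = z := by
  calc z' = z' + z := (hz z').symm
       _  = z + z' := comm z' z
       _  = z      := hz' z

-- Theorem 2: additive inverses are unique, following the same chain of steps
-- as the proof above in the text.
theorem add_inverse_unique {V : Type} [Add V]
    (comm  : ∀ u v : V, u + v = v + u)
    (assoc : ∀ u v w : V, (u + v) + w = u + (v + w))
    (z : V) (hz : ∀ v : V, v + z = v)   -- z is the additive identity
    (v w w' : V)
    (hw  : v + w  = z)                  -- w  is an additive inverse of v
    (hw' : v + w' = z)                  -- w' is an additive inverse of v
    : w = w' := by
  calc w = w + z        := (hz w).symm
       _ = w + (v + w') := by rw [hw']
       _ = (w + v) + w' := (assoc w v w').symm
       _ = (v + w) + w' := by rw [comm w v]
       _ = z + w'       := by rw [hw]
       _ = w' + z       := comm z w'
       _ = w'           := hz w'
```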

A few further examples are given in a separate post (Vector spaces & linear independence – some examples).
