Contravariant and covariant components: dual basis vectors

Required math: algebra, calculus

Required physics: none

Reference: d’Inverno, Ray, Introducing Einstein’s Relativity (1992), Oxford University Press – Chapter 5.

What is rarely made clear in discussions of vectors (and more generally, tensors) is that speaking of a ‘contravariant’ or ‘covariant’ vector isn’t particularly accurate, since all vectors have both types of components. So technically, we should be speaking of the contravariant or covariant components of a vector, rather than the vector itself.

The distinction arises only when we are dealing with a non-orthonormal coordinate system. In the usual systems (rectangular, cylindrical, polar, spherical), the basis vectors are traditionally taken to be mutually orthogonal and of unit length, and in that case both types of components are the same.

To see what happens in a non-orthogonal case, suppose we introduce a 2-d coordinate system with basis vectors (in rectangular coordinates)

\displaystyle \mathbf{e}_{1}^{\prime} = \left[5,2\right] \ \ \ \ \ (1)
\displaystyle \mathbf{e}_{2}^{\prime} = \left[1,4\right] \ \ \ \ \ (2)

We’ve chosen basis vectors that are not normalized (their lengths are not 1) on purpose.

To get the transformation equations between this system and the ordinary rectangular system (whose basis vectors we’ll take as the usual ones: {\mathbf{e}_{1}=\left[1,0\right]} and {\mathbf{e}_{2}=\left[0,1\right]}), suppose we have a vector {X} with components {x^{i}} in the rectangular system. To express this vector in the new system, we must write it as a linear combination of the new basis vectors:

\displaystyle X = x^{\prime1}\mathbf{e}_{1}^{\prime}+x^{\prime2}\mathbf{e}_{2}^{\prime} \ \ \ \ \ (3)

Writing this out as two equations gives

\displaystyle x^{1} = 5x^{\prime1}+x^{\prime2} \ \ \ \ \ (4)
\displaystyle x^{2} = 2x^{\prime1}+4x^{\prime2} \ \ \ \ \ (5)

which gives the rectangular coordinates in terms of the new coordinates: {x^{i}=x^{i}\left(x^{\prime}\right)}. Note that the {x^{\prime i}} use units in which the length of each primed basis vector counts as one unit, even though in the rectangular system these basis vectors are longer than one unit. For example, the basis vector {\mathbf{e}_{1}^{\prime}} has components {x^{\prime1}=1;\;x^{\prime2}=0} in the primed system, and plugging these numbers into the transformation equations above gives its rectangular components {x^{1}=5;\;x^{2}=2}.

These equations can be inverted to give

\displaystyle x^{\prime1} = \frac{1}{18}\left(4x^{1}-x^{2}\right) \ \ \ \ \ (6)
\displaystyle x^{\prime2} = \frac{1}{18}\left(-2x^{1}+5x^{2}\right) \ \ \ \ \ (7)
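This inversion is just a 2×2 matrix inverse, so it can be checked numerically. A quick sketch using NumPy (the matrix name `M` is just for illustration):

```python
import numpy as np

# Columns of M are the primed basis vectors e1' = [5,2] and e2' = [1,4],
# so x = M @ x'  (equations 4-5).
M = np.array([[5.0, 1.0],
              [2.0, 4.0]])

# Inverting gives x' = Minv @ x  (equations 6-7).
Minv = np.linalg.inv(M)

print(Minv * 18)   # 18 * Minv = [[4, -1], [-2, 5]]
```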

From these equations, we can get the partial derivatives:

\displaystyle \frac{\partial x^{1}}{\partial x^{\prime1}} = 5 \ \ \ \ \ (8)
\displaystyle \frac{\partial x^{1}}{\partial x^{\prime2}} = 1 \ \ \ \ \ (9)

and so on. We therefore can rewrite the above sets of equations as

\displaystyle x^{i} = \frac{\partial x^{i}}{\partial x^{\prime j}}x^{\prime j} \ \ \ \ \ (10)
\displaystyle x^{\prime i} = \frac{\partial x^{\prime i}}{\partial x^{j}}x^{j} \ \ \ \ \ (11)

These transformations match the rule for contravariant vectors, so the above equations have all dealt with the contravariant components of vectors.

Now let’s have a look at the basis vectors. It’s easy enough to write the primed basis vectors in terms of the rectangular ones, just by reading off the components above:

\displaystyle \mathbf{e}_{1}^{\prime} = 5\mathbf{e}_{1}+2\mathbf{e}_{2} \ \ \ \ \ (12)
\displaystyle \mathbf{e}_{2}^{\prime} = \mathbf{e}_{1}+4\mathbf{e}_{2} \ \ \ \ \ (13)

We can invert these equations by requiring the rectangular basis vectors to be linear combinations of the primed basis vectors. That is

\displaystyle \mathbf{e}_{1} = \alpha\mathbf{e}_{1}^{\prime}+\beta\mathbf{e}_{2}^{\prime} \ \ \ \ \ (14)
\displaystyle \mathbf{e}_{2} = \gamma\mathbf{e}_{1}^{\prime}+\delta\mathbf{e}_{2}^{\prime} \ \ \ \ \ (15)

Writing these equations out for each component (using rectangular coordinates), we get for {\mathbf{e}_{1}}:

\displaystyle 5\alpha+\beta = 1 \ \ \ \ \ (16)
\displaystyle 2\alpha+4\beta = 0 \ \ \ \ \ (17)

which can be solved to give {\alpha=\frac{2}{9};\;\beta=-\frac{1}{9}}, so

\displaystyle  \mathbf{e}_{1}=\frac{2}{9}\mathbf{e}_{1}^{\prime}-\frac{1}{9}\mathbf{e}_{2}^{\prime} \ \ \ \ \ (18)

Doing a similar calculation for {\mathbf{e}_{2}}, we get

\displaystyle  \mathbf{e}_{2}=-\frac{1}{18}\mathbf{e}_{1}^{\prime}+\frac{5}{18}\mathbf{e}_{2}^{\prime} \ \ \ \ \ (19)
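These expansions are easy to check by substituting the rectangular components of the primed basis vectors; a short numerical verification (NumPy assumed):

```python
import numpy as np

e1p = np.array([5.0, 2.0])   # e_1'
e2p = np.array([1.0, 4.0])   # e_2'

# Equations 18 and 19: expand the rectangular basis in the primed basis.
e1 = (2/9) * e1p - (1/9) * e2p
e2 = -(1/18) * e1p + (5/18) * e2p

print(e1, e2)   # -> [1. 0.] [0. 1.]  (the rectangular unit vectors)
```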

If we compare these results with the transformation equations above, we see that (in this case, anyway):

\displaystyle  \mathbf{e}_{i}=\frac{\partial x^{\prime j}}{\partial x^{i}}\mathbf{e}_{j}^{\prime} \ \ \ \ \ (20)

This is the transformation condition for a covariant vector. Thus it appears that basis vectors transform covariantly.

At this point, it’s worth doing these calculations a bit more generally. Suppose the basis vectors in the primed system are

\displaystyle \mathbf{e}_{1}^{\prime} = \left[A,B\right] \ \ \ \ \ (21)
\displaystyle \mathbf{e}_{2}^{\prime} = \left[C,D\right] \ \ \ \ \ (22)

Then the conditions above become

\displaystyle x^{1} = Ax^{\prime1}+Cx^{\prime2} \ \ \ \ \ (23)
\displaystyle x^{2} = Bx^{\prime1}+Dx^{\prime2} \ \ \ \ \ (24)

which can be inverted to give

\displaystyle x^{\prime1} = \frac{1}{AD-BC}\left(Dx^{1}-Cx^{2}\right) \ \ \ \ \ (25)
\displaystyle x^{\prime2} = \frac{1}{AD-BC}\left(-Bx^{1}+Ax^{2}\right) \ \ \ \ \ (26)

For the basis vectors, we get, following the same calculations as above using the general basis vectors:

\displaystyle \mathbf{e}_{1} = \frac{1}{AD-BC}\left(D\mathbf{e}_{1}^{\prime}-B\mathbf{e}_{2}^{\prime}\right) \ \ \ \ \ (27)
\displaystyle \mathbf{e}_{2} = \frac{1}{AD-BC}\left(-C\mathbf{e}_{1}^{\prime}+A\mathbf{e}_{2}^{\prime}\right) \ \ \ \ \ (28)

Comparing these last two sets of equations, we see that equation 20 above is true in general, at least in the 2-d case. (That’s why we’ve been using superscripts for the vector components and subscripts for the basis vectors.)
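Equation 20 can also be tested numerically for an arbitrary 2-d basis. In the sketch below (NumPy assumed; the randomly drawn A, B, C, D are purely illustrative), expanding the right-hand side of equation 20 in rectangular components amounts to forming the product of the basis matrix with the Jacobian, which must give back the identity:

```python
import numpy as np

rng = np.random.default_rng(42)
A, B, C, D = rng.uniform(1.0, 5.0, size=4)   # an arbitrary (invertible) primed basis

# Columns of M are e_1' = [A, B] and e_2' = [C, D] (equations 21-22).
M = np.array([[A, C],
              [B, D]])
J = np.linalg.inv(M)   # J[j, i] = d x'^(j+1) / d x^(i+1)  (equations 25-26)

# Equation 20: e_i = (dx'^j / dx^i) e_j'.  In rectangular components,
# the right-hand side for e_i is column i of M @ J, so the product
# must be the identity matrix (its columns are e_1 and e_2).
print(np.round(M @ J, 12))   # -> 2x2 identity
```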

It’s very important to notice that the two transformations (of vector components and basis vectors) are qualitatively different. In transforming vector components, we are finding a different representation of the same vector in a new coordinate system. In transforming the basis vectors, we are actually comparing two different vectors. The basis vector {\mathbf{e}_{1}} points along the rectangular {x} axis, while the vector {\mathbf{e}_{1}^{\prime}} points along the vector {\left[5,2\right]} in our first example.

We said at the start that any vector has both contravariant and covariant components, but so far we’ve seen only the contravariant components of a general vector (the basis vectors are a special case and are treated differently). Looking at the general transformation equations above, we can take the partial derivatives {\partial x^{\prime i}/\partial x^{j}} and treat these as basis vectors as was explained in an earlier post. We then get the vectors (the reason for the superscripts will become obvious later):

\displaystyle \mathbf{e}^{\prime1} = \left[\frac{\partial x^{\prime1}}{\partial x^{1}},\frac{\partial x^{\prime1}}{\partial x^{2}}\right]=\left[\frac{D}{AD-BC},\frac{-C}{AD-BC}\right] \ \ \ \ \ (29)
\displaystyle \mathbf{e}^{\prime2} = \left[\frac{\partial x^{\prime2}}{\partial x^{1}},\frac{\partial x^{\prime2}}{\partial x^{2}}\right]=\left[\frac{-B}{AD-BC},\frac{A}{AD-BC}\right] \ \ \ \ \ (30)

Using our numerical example above, we get

\displaystyle \mathbf{e}^{\prime1} = \left[\frac{2}{9},-\frac{1}{18}\right] \ \ \ \ \ (31)
\displaystyle \mathbf{e}^{\prime2} = \left[-\frac{1}{9},\frac{5}{18}\right] \ \ \ \ \ (32)
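In matrix terms, these dual basis vectors are just the rows of the inverse Jacobian, so they can be read off numerically (a sketch, NumPy assumed):

```python
import numpy as np

M = np.array([[5.0, 1.0],
              [2.0, 4.0]])     # columns: e_1', e_2'
J = np.linalg.inv(M)           # J[i, j] = d x'^(i+1) / d x^(j+1)

e1_dual = J[0]   # e'^1 = [2/9, -1/18], equation 31
e2_dual = J[1]   # e'^2 = [-1/9, 5/18], equation 32

print(e1_dual * 18, e2_dual * 18)   # -> [ 4. -1.] [-2.  5.]
```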

We can now write a general vector {X} in terms of these new basis vectors as {X=x_{j}^{\prime}\mathbf{e}^{\prime j}}; reading off its rectangular components gives

\displaystyle x_{1} = \frac{2}{9}x_{1}^{\prime}-\frac{1}{9}x_{2}^{\prime} \ \ \ \ \ (33)
\displaystyle x_{2} = -\frac{1}{18}x_{1}^{\prime}+\frac{5}{18}x_{2}^{\prime} \ \ \ \ \ (34)

In symbols:

\displaystyle  x_{i}=\frac{\partial x^{\prime j}}{\partial x^{i}}x_{j}^{\prime} \ \ \ \ \ (35)

Looking at 35, we see that this time the vector components transform covariantly, and this time we really are transforming the same vector from one coordinate system to another. The problem is that we don’t actually have the coordinates in either system yet.

The key to figuring out what the coordinates are is to look at the dot products {\mathbf{e}^{\prime i}\cdot\mathbf{e}_{j}^{\prime}} for all possible combinations of {i} and {j}. We find that

\displaystyle  \mathbf{e}^{\prime i}\cdot\mathbf{e}_{j}^{\prime}=\delta_{j}^{i} \ \ \ \ \ (36)

In two dimensions, this means that {\mathbf{e}^{\prime1}\perp\mathbf{e}_{2}^{\prime}} and {\mathbf{e}^{\prime2}\perp\mathbf{e}_{1}^{\prime}}. (Things get a bit more complicated in higher dimensions, but each dual basis vector {\mathbf{e}^{\prime i}} must be perpendicular to every original basis vector {\mathbf{e}_{j}^{\prime}} except the one with the same index, so this equation applies in all higher dimensions as well.)
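The duality condition is easy to verify for our numerical example by tabulating all four dot products (NumPy assumed):

```python
import numpy as np

e1p = np.array([5.0, 2.0])               # e_1'
e2p = np.array([1.0, 4.0])               # e_2'
e1d = np.array([2/9, -1/18])             # e'^1
e2d = np.array([-1/9, 5/18])             # e'^2

# All dot products e'^i . e'_j; equation 36 says this is the
# Kronecker delta, i.e. the 2x2 identity matrix.
gram = np.array([[e1d @ e1p, e1d @ e2p],
                 [e2d @ e1p, e2d @ e2p]])
print(gram)   # -> [[1. 0.] [0. 1.]]
```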

Thus if we want to calculate components in the usual way by projecting the vector parallel to each coordinate axis in turn, projecting parallel to one of the new basis vectors {\mathbf{e}^{\prime i}} is equivalent to projecting perpendicular to one of the old basis vectors {\mathbf{e}_{j}^{\prime}}.

Now in the rectangular system, both of these projections give the same result since the coordinate axes are orthogonal (and the basis vectors have unit length), so the contravariant and covariant components of any vector are the same there. This gives us our starting point, since it means we know {x_{i}}.

Inverting the system above gives the covariant components in the primed system:

\displaystyle x_{1}^{\prime} = 5x_{1}+2x_{2} \ \ \ \ \ (37)
\displaystyle x_{2}^{\prime} = x_{1}+4x_{2} \ \ \ \ \ (38)

These are paired with the contravariant components we worked out earlier:

\displaystyle x^{\prime1} = \frac{1}{18}\left(4x^{1}-x^{2}\right) \ \ \ \ \ (39)
\displaystyle x^{\prime2} = \frac{1}{18}\left(-2x^{1}+5x^{2}\right) \ \ \ \ \ (40)

The two sets of basis vectors {\mathbf{e}^{i}} and {\mathbf{e}_{i}} are called dual basis vectors, or rather one set is dual to the other. They are different vectors only in non-orthonormal coordinate systems.

One final observation makes it easier to calculate the components in either dual set. Comparing the last two sets of equations with the basis vectors found above, we see that to find a contravariant component {x^{\prime i}} of a vector {X}, we take the dot product with the corresponding contravariant (dual) basis vector:

\displaystyle x^{\prime i} = X\cdot\mathbf{e}^{\prime i} \ \ \ \ \ (41)
\displaystyle = \sum_{j}x^{j}\left(\mathbf{e}^{\prime i}\right)_{j} \ \ \ \ \ (42)

Similarly, to find the covariant components, we take the dot product with the corresponding covariant basis vector:

\displaystyle x_{i}^{\prime} = X\cdot\mathbf{e}_{i}^{\prime} \ \ \ \ \ (43)
\displaystyle = \sum_{j}x_{j}\left(\mathbf{e}_{i}^{\prime}\right)_{j} \ \ \ \ \ (44)
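Putting the whole recipe together for a sample vector (the choice {X=\left[3,7\right]} is arbitrary, purely for illustration; NumPy assumed):

```python
import numpy as np

e1p, e2p = np.array([5.0, 2.0]), np.array([1.0, 4.0])        # e_1', e_2'
e1d, e2d = np.array([2/9, -1/18]), np.array([-1/9, 5/18])    # e'^1, e'^2

X = np.array([3.0, 7.0])   # sample vector, rectangular components

# Contravariant components: dot with the dual basis (equations 41-42).
x1c, x2c = X @ e1d, X @ e2d
# Covariant components: dot with the ordinary basis (equations 43-44).
x1v, x2v = X @ e1p, X @ e2p

# Consistency check: recombining the contravariant components with the
# covariant basis recovers X, as in equation 3.
print(x1c * e1p + x2c * e2p)   # -> [3. 7.]
print(x1v, x2v)                # -> 29.0 31.0  (matches equations 37-38)
```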
