**Required math: algebra, calculus**

**Required physics: none**

Reference: d’Inverno, Ray, *Introducing Einstein’s Relativity* (1992), Oxford University Press – Chapter 5.

What is rarely made clear in discussions of vectors (and more generally, tensors) is that speaking of a ‘contravariant’ or ‘covariant’ vector isn’t particularly accurate, since all vectors have *both* types of components. So technically, we should be speaking of the contravariant or covariant *components* of a vector, rather than the vector itself.

The distinction arises only when we are dealing with a non-orthonormal coordinate system, since in the usual orthonormal systems (rectangular, cylindrical, polar, spherical), all basis vectors are traditionally mutually orthonormal, and both types of components are the same.

To see what happens in a non-orthogonal case, suppose we introduce a 2-d coordinate system with basis vectors (in rectangular coordinates)

$$\mathbf{e}'_1 = (5, 2) \qquad \mathbf{e}'_2 = (1, 4)$$

We’ve chosen basis vectors that are not normalized (their lengths are not 1) on purpose.

To get the transformation equations between this system and the ordinary rectangular system (whose basis vectors we’ll take as the usual ones: $\mathbf{e}_1 = \mathbf{i}$ and $\mathbf{e}_2 = \mathbf{j}$), suppose we have a vector with components in the rectangular system of $(x, y)$. Then to express this vector in terms of the new system, we must be able to express it as a linear combination of the basis vectors in the new system, so we get

$$x\,\mathbf{i} + y\,\mathbf{j} = x'\,\mathbf{e}'_1 + y'\,\mathbf{e}'_2$$

Writing this out as two equations gives

$$x = 5x' + y'$$
$$y = 2x' + 4y'$$

which gives the rectangular coordinates in terms of the new coordinates $(x', y')$. Note that we use units where the lengths of the basis vectors in the primed system are taken as one unit each, even though in the rectangular system these basis vectors are longer than one unit. For example, the basis vector $\mathbf{e}'_1$ has components $(1, 0)$ in the primed system, and plugging these numbers into the transformation equations above gives $(x, y) = (5, 2)$, which are its components in the rectangular system.

These equations can be inverted to give

$$x' = \frac{2}{9}x - \frac{1}{18}y$$
$$y' = -\frac{1}{9}x + \frac{5}{18}y$$

From these equations, we can get the partial derivatives:

$$\frac{\partial x}{\partial x'} = 5 \qquad \frac{\partial x}{\partial y'} = 1 \qquad \frac{\partial x'}{\partial x} = \frac{2}{9} \qquad \frac{\partial x'}{\partial y} = -\frac{1}{18}$$

and so on. We therefore can rewrite the above sets of equations as

$$x^a = \frac{\partial x^a}{\partial x'^b}\,x'^b \qquad x'^a = \frac{\partial x'^a}{\partial x^b}\,x^b$$

where the summation convention is used and $(x^1, x^2) = (x, y)$.

These transformations match the rule for contravariant vectors, so the above equations have all dealt with the contravariant components of vectors.
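As a quick numerical check, here is a minimal Python sketch of the transformation above, using exact `Fraction` arithmetic and assuming the example basis $\mathbf{e}'_1 = (5, 2)$, $\mathbf{e}'_2 = (1, 4)$. It inverts the transformation matrix and round-trips a sample point between the two systems:

```python
from fractions import Fraction as F

# Primed basis vectors in rectangular components
# (the example values assumed here: e1' = (5, 2), e2' = (1, 4)).
e1p = (F(5), F(2))
e2p = (F(1), F(4))

# Forward transformation x = 5x' + y', y = 2x' + 4y':
# the matrix has the basis vectors as its columns.
a, c = e1p[0], e2p[0]
b, d = e1p[1], e2p[1]
det = a * d - b * c              # = 18

# Exact 2x2 inverse gives the partial derivatives of the primed coordinates:
# row 1 = (dx'/dx, dx'/dy), row 2 = (dy'/dx, dy'/dy).
inv = ((d / det, -c / det),
       (-b / det, a / det))
assert inv == ((F(2, 9), F(-1, 18)), (F(-1, 9), F(5, 18)))

# Round trip: rectangular (x, y) -> primed (x', y') -> rectangular.
x, y = F(3), F(7)
xp = inv[0][0] * x + inv[0][1] * y   # contravariant x'
yp = inv[1][0] * x + inv[1][1] * y   # contravariant y'
assert (a * xp + c * yp, b * xp + d * yp) == (x, y)
```

Exact fractions are used rather than floats so the inverse matrix can be compared term by term with the partial derivatives quoted in the text.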

Now let’s have a look at the basis vectors. It’s easy enough to write the primed basis vectors in terms of the rectangular ones, just by reading off the components above:

$$\mathbf{e}'_1 = 5\,\mathbf{e}_1 + 2\,\mathbf{e}_2$$
$$\mathbf{e}'_2 = \mathbf{e}_1 + 4\,\mathbf{e}_2$$

We can invert these equations by requiring the rectangular basis vectors to be linear combinations of the primed basis vectors. That is,

$$\mathbf{e}_1 = p\,\mathbf{e}'_1 + q\,\mathbf{e}'_2 \qquad \mathbf{e}_2 = r\,\mathbf{e}'_1 + s\,\mathbf{e}'_2$$

Writing these equations out for each component (using rectangular coordinates), we get for $\mathbf{e}_1$:

$$5p + q = 1$$
$$2p + 4q = 0$$

which can be solved to give $p = \frac{2}{9},\ q = -\frac{1}{9}$, so

$$\mathbf{e}_1 = \frac{2}{9}\,\mathbf{e}'_1 - \frac{1}{9}\,\mathbf{e}'_2$$

Doing a similar calculation for $\mathbf{e}_2$, we get

$$\mathbf{e}_2 = -\frac{1}{18}\,\mathbf{e}'_1 + \frac{5}{18}\,\mathbf{e}'_2$$

If we compare these results with the transformation equations above, we see that (in this case, anyway):

$$\mathbf{e}_a = \frac{\partial x'^b}{\partial x^a}\,\mathbf{e}'_b$$

This is the transformation condition for a covariant vector. Thus it appears that basis vectors transform covariantly.
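The inversion of the basis vectors can be checked numerically as well. This sketch (again assuming the example basis $(5, 2)$ and $(1, 4)$) solves the 2×2 system exactly for the coefficients of $\mathbf{i}$ and $\mathbf{j}$ in the primed basis:

```python
from fractions import Fraction as F

# Primed basis in rectangular components (example values assumed above).
e1p = (F(5), F(2))
e2p = (F(1), F(4))

def expand_in_primed(v):
    """Solve v = p*e1' + q*e2' exactly for (p, q) via Cramer's rule."""
    a, c = e1p[0], e2p[0]
    b, d = e1p[1], e2p[1]
    det = a * d - b * c
    return ((d * v[0] - c * v[1]) / det,
            (-b * v[0] + a * v[1]) / det)

# Expand the rectangular basis vectors i = (1, 0) and j = (0, 1).
i_coeffs = expand_in_primed((F(1), F(0)))
j_coeffs = expand_in_primed((F(0), F(1)))

assert i_coeffs == (F(2, 9), F(-1, 9))     # i = (2/9) e1' - (1/9) e2'
assert j_coeffs == (F(-1, 18), F(5, 18))   # j = -(1/18) e1' + (5/18) e2'
```

The coefficients that come out are exactly the partial derivatives $\partial x'^b/\partial x^a$, which is the covariant transformation law in action.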

At this point, it’s worth doing these calculations a bit more generally. Suppose the basis vectors in the primed system are

$$\mathbf{e}'_1 = a\,\mathbf{e}_1 + b\,\mathbf{e}_2 \qquad \mathbf{e}'_2 = c\,\mathbf{e}_1 + d\,\mathbf{e}_2$$

Then the conditions above become

$$x = a\,x' + c\,y' \qquad y = b\,x' + d\,y'$$

which can be inverted to give

$$x' = \frac{d\,x - c\,y}{ad - bc} \qquad y' = \frac{-b\,x + a\,y}{ad - bc}$$

For the basis vectors, we get, following the same calculations as above using the general basis vectors:

$$\mathbf{e}_1 = \frac{d\,\mathbf{e}'_1 - b\,\mathbf{e}'_2}{ad - bc} \qquad \mathbf{e}_2 = \frac{-c\,\mathbf{e}'_1 + a\,\mathbf{e}'_2}{ad - bc}$$

Comparing these last two sets of equations, we see that the relation $\mathbf{e}_a = \frac{\partial x'^b}{\partial x^a}\,\mathbf{e}'_b$ above is true in general, at least for the 2-d case. (That’s why we’ve been using superscripts for the vector components and subscripts for the basis vectors.)
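These general inversion formulas can be spot-checked numerically. The sketch below generates random integer bases (a hypothetical choice, just for this check) and verifies the round trip exactly:

```python
from fractions import Fraction as F
import random

random.seed(1)
checked = 0
for _ in range(100):
    # Random basis e1' = (a, b), e2' = (c, d).
    a, b, c, d = (F(random.randint(-5, 5)) for _ in range(4))
    det = a * d - b * c
    if det == 0:
        continue   # skip degenerate (non-)bases
    # Arbitrary primed components, mapped to rectangular via
    # x = a x' + c y', y = b x' + d y' ...
    xp, yp = F(random.randint(-9, 9)), F(random.randint(-9, 9))
    x, y = a * xp + c * yp, b * xp + d * yp
    # ... and back, using the general inverse formulas.
    assert (d * x - c * y) / det == xp
    assert (-b * x + a * y) / det == yp
    checked += 1

assert checked > 50   # most random bases are non-degenerate
```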

It’s very important to notice that the two transformations (of vector components and basis vectors) are *qualitatively* different. In transforming vector components, we are finding a different representation of the *same* vector in a new coordinate system. In transforming the basis vectors, we are actually comparing two *different* vectors. The basis vector $\mathbf{e}_1$ points along the rectangular $x$ axis, while the vector $\mathbf{e}'_1$ points along $(5, 2)$ in our first example.

We said at the start that any vector has both contravariant and covariant components, but so far we’ve seen only the contravariant components of a general vector (the basis vectors are a special case and are treated differently). Looking at the general transformation equations above, we can take the partial derivatives $\partial x'^a/\partial x^b$ and treat these as basis vectors, as was explained in an earlier post. We then get the vectors (the reason for the superscripts will become obvious later):

$$\mathbf{e}'^1 = \left(\frac{\partial x'}{\partial x},\ \frac{\partial x'}{\partial y}\right) \qquad \mathbf{e}'^2 = \left(\frac{\partial y'}{\partial x},\ \frac{\partial y'}{\partial y}\right)$$

Using our numerical example above, we get

$$\mathbf{e}'^1 = \left(\frac{2}{9},\ -\frac{1}{18}\right) \qquad \mathbf{e}'^2 = \left(-\frac{1}{9},\ \frac{5}{18}\right)$$

We can now write a general vector in terms of these new basis vectors as follows:

$$\mathbf{v} = x'_1\,\mathbf{e}'^1 + x'_2\,\mathbf{e}'^2$$

In symbols, the new components are

$$x'_a = \frac{\partial x^b}{\partial x'^a}\,x_b$$

Looking at this last equation, we see that this time the vector components transform covariantly, and this time we really are transforming the *same* vector from one coordinate system to another. The problem is that we don’t actually have the coordinates in either system yet.

The key to figuring out what the coordinates are is to look at the dot products for all possible combinations of $\mathbf{e}'^a$ and $\mathbf{e}'_b$. We find that

$$\mathbf{e}'^a \cdot \mathbf{e}'_b = \delta^a_b$$

In two dimensions, this means that $\mathbf{e}'^1 \perp \mathbf{e}'_2$ and $\mathbf{e}'^2 \perp \mathbf{e}'_1$. (Things get a bit more complicated in higher dimensions, but each new basis vector $\mathbf{e}'^a$ must be perpendicular to all the old basis vectors $\mathbf{e}'_b$ except the one with the same index, so this equation applies in all higher dimensions as well.)
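The duality condition is easy to verify for the numerical example (the basis and dual-basis values below are the ones assumed throughout this post):

```python
from fractions import Fraction as F

# Original primed basis and its dual, from the example.
e1p = (F(5), F(2))
e2p = (F(1), F(4))
d1 = (F(2, 9), F(-1, 18))    # e'^1 = (dx'/dx, dx'/dy)
d2 = (F(-1, 9), F(5, 18))    # e'^2 = (dy'/dx, dy'/dy)

def dot(u, v):
    """Ordinary Euclidean dot product in rectangular components."""
    return u[0] * v[0] + u[1] * v[1]

# Duality: e'^a . e'_b = delta^a_b
assert dot(d1, e1p) == 1 and dot(d2, e2p) == 1
assert dot(d1, e2p) == 0 and dot(d2, e1p) == 0
```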

Thus if we want to calculate components in the usual way by projecting the vector parallel to each coordinate axis in turn, projecting parallel to one of the new basis vectors $\mathbf{e}'^a$ is equivalent to projecting perpendicular to one of the old basis vectors $\mathbf{e}'_b$ (those with $b \ne a$).

Now in the rectangular system, both of these projections give the same result since the coordinate axes are orthogonal, so that means that the contravariant and covariant components of any vector are the same. This gives us our starting point, since it means we know $x_1 = x^1 = x$ and $x_2 = x^2 = y$.

Inverting the system above, we get the covariant components:

$$x'_1 = \frac{\partial x}{\partial x'}\,x + \frac{\partial y}{\partial x'}\,y = 5x + 2y$$
$$x'_2 = \frac{\partial x}{\partial y'}\,x + \frac{\partial y}{\partial y'}\,y = x + 4y$$

These are paired with the contravariant components we worked out earlier:

$$x'^1 = \frac{2}{9}x - \frac{1}{18}y$$
$$x'^2 = -\frac{1}{9}x + \frac{5}{18}y$$

The two sets of basis vectors $\mathbf{e}'_a$ and $\mathbf{e}'^a$ are called *dual* basis vectors, or rather one set is dual to the other. They are different vectors only in non-orthonormal coordinate systems.

One final note that makes it easier to calculate the components in either dual set. If you compare the last two sets of equations with the equations giving the basis vectors above, we see that to find a contravariant component of a vector $\mathbf{v}$, we take the dot product with the corresponding *contra*variant basis vector:

$$x'^a = \mathbf{v}\cdot\mathbf{e}'^a$$

Similarly, to find the covariant components, take the dot product with the covariant basis vector:

$$x'_a = \mathbf{v}\cdot\mathbf{e}'_a$$
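Both recipes can be verified together on a sample vector; the basis and dual-basis values below are the ones assumed from the example:

```python
from fractions import Fraction as F

e1p = (F(5), F(2))           # covariant (original) basis
e2p = (F(1), F(4))
d1 = (F(2, 9), F(-1, 18))    # contravariant (dual) basis
d2 = (F(-1, 9), F(5, 18))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

v = (F(3), F(7))             # rectangular components of a sample vector

# Contravariant components: dot with the dual basis.
v1_up, v2_up = dot(v, d1), dot(v, d2)
# Covariant components: dot with the original basis.
v1_dn, v2_dn = dot(v, e1p), dot(v, e2p)

assert (v1_dn, v2_dn) == (F(29), F(31))   # 5*3 + 2*7 and 1*3 + 4*7

# The contravariant components expand v in the original basis...
assert (v1_up * e1p[0] + v2_up * e2p[0],
        v1_up * e1p[1] + v2_up * e2p[1]) == v
# ...and the covariant components expand v in the dual basis.
assert (v1_dn * d1[0] + v2_dn * d2[0],
        v1_dn * d1[1] + v2_dn * d2[1]) == v
```

The two expansions reconstruct the same vector, which is the sense in which a single vector carries both sets of components.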

**Comments**

**Everton:** The inverse matrix for the basis calculation is $\begin{pmatrix}2/9 & -1/9\\ -1/18 & 5/18\end{pmatrix}$. I think you swapped $-1/18$ with $5/18$.

**growescience:** Fixed now. Thanks.

**JMBoisvert:** The 2nd paragraph is incorrect: let the primed coordinates be $x' = 2x$, $y' = y$ (orthogonal). Then the covariant and contravariant components of tensors are not equal.

**growescience:** Quite right – I should have said ‘orthonormal’ instead of ‘orthogonal’. Fixed now. Thanks.

**Peter Norris:** The 2nd paragraph should read “… dealing with a non-orthonormal coordinate system”; it currently reads “non-orthogonal”.

**gwrowe (Post author):** Fixed (again). Thanks.