Metric tensor: inverse and raising & lowering indices

Reference: Moore, Thomas A., A General Relativity Workbook, University Science Books (2013) – Chapter 6; Problem 6.1.

The inverse metric tensor {g^{ij}} is defined so that

\displaystyle  g^{ij}g_{jk}=\delta_{\;\; k}^{i} \ \ \ \ \ (1)

If the metric tensor is viewed as a matrix, then this is equivalent to saying {\left[g^{ij}\right]=\left[g_{ij}\right]^{-1}}. The transformation property of {g^{ij}} can be worked out by direct calculation, using the transformation of {g_{ij}} and the fact that {\delta_{\;\; k}^{i}} is invariant.

\displaystyle  g^{\prime ij}g_{jk}^{\prime} = \delta_{\;\; k}^{i}\ \ \ \ \ (2)

\displaystyle  g^{\prime ij}g_{jk}^{\prime} = g^{\prime ij}\frac{\partial x^{l}}{\partial x^{\prime j}}\frac{\partial x^{m}}{\partial x^{\prime k}}g_{lm} \ \ \ \ \ (3)

We can try the transformation

\displaystyle  g^{\prime ij}=\frac{\partial x^{\prime i}}{\partial x^{a}}\frac{\partial x^{\prime j}}{\partial x^{b}}g^{ab} \ \ \ \ \ (4)

Substituting, we get

\displaystyle  g^{\prime ij}g_{jk}^{\prime} = \frac{\partial x^{\prime i}}{\partial x^{a}}\frac{\partial x^{\prime j}}{\partial x^{b}}g^{ab}\frac{\partial x^{l}}{\partial x^{\prime j}}\frac{\partial x^{m}}{\partial x^{\prime k}}g_{lm}\ \ \ \ \ (5)

\displaystyle  = \frac{\partial x^{\prime i}}{\partial x^{a}}g^{ab}\delta_{\;\; b}^{l}\frac{\partial x^{m}}{\partial x^{\prime k}}g_{lm}\ \ \ \ \ (6)

\displaystyle  = \frac{\partial x^{\prime i}}{\partial x^{a}}g^{al}\frac{\partial x^{m}}{\partial x^{\prime k}}g_{lm}\ \ \ \ \ (7)

\displaystyle  = \frac{\partial x^{\prime i}}{\partial x^{a}}\frac{\partial x^{m}}{\partial x^{\prime k}}\delta_{\;\; m}^{a}\ \ \ \ \ (8)

\displaystyle  = \frac{\partial x^{\prime i}}{\partial x^{m}}\frac{\partial x^{m}}{\partial x^{\prime k}}\ \ \ \ \ (9)

\displaystyle  = \delta_{\;\; k}^{i} \ \ \ \ \ (10)

In (6) we used {\frac{\partial x^{\prime j}}{\partial x^{b}}\frac{\partial x^{l}}{\partial x^{\prime j}}=\delta_{\;\; b}^{l}} and in (8) we used {g^{al}g_{lm}=\delta_{\;\; m}^{a}}. Since the matrix inverse is unique (a basic fact of matrix algebra), {g^{ij}} is a rank-2 contravariant tensor and is the inverse of {g_{ij}}, which is a rank-2 covariant tensor. We can therefore use the standard techniques of matrix algebra to calculate the inverse.
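As a concrete check of the transformation law (4), we can take the Cartesian-to-polar change of coordinates, transform the Cartesian metrics {g^{ab}=\delta^{ab}} and {g_{lm}=\delta_{lm}} with the appropriate Jacobians, and confirm that (1) still holds in the primed system. A minimal sketch with sympy (the symbol names and matrix variables are just for this example):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Cartesian coordinates written in terms of polar coordinates x' = (r, theta)
x = r * sp.cos(th)
y = r * sp.sin(th)

# Backward Jacobian dx^a/dx'^j, and forward Jacobian dx'^i/dx^a as its inverse
J_back = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
                    [sp.diff(y, r), sp.diff(y, th)]])
J_fwd = J_back.inv()

# In Cartesian coordinates both metrics are the identity matrix
g_up_cart = sp.eye(2)
g_down_cart = sp.eye(2)

# Transform: g'^{ij} by the contravariant rule (4),
# g'_{jk} by the covariant rule used in (3)
g_up_polar = sp.simplify(J_fwd * g_up_cart * J_fwd.T)
g_down_polar = sp.simplify(J_back.T * g_down_cart * J_back)

# Equation (1) holds in the primed coordinates: g'^{ij} g'_{jk} = delta^i_k
assert sp.simplify(g_up_polar * g_down_polar) == sp.eye(2)
```

The covariant transform reproduces the familiar polar metric {\mathrm{diag}(1,r^{2})}, and the contravariant transform gives its inverse, as the derivation requires.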

In rectangular coordinates, {g^{ij}=g_{ij}} since the metric is diagonal with all diagonal elements equal to 1. In polar coordinates in 2-d,

\displaystyle  g_{ij}=\left[\begin{array}{cc} 1 & 0\\ 0 & r^{2} \end{array}\right] \ \ \ \ \ (11)

so the inverse is

\displaystyle  g^{ij}=\left[\begin{array}{cc} 1 & 0\\ 0 & r^{-2} \end{array}\right] \ \ \ \ \ (12)
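Since (12) is just the matrix inverse of (11), this is easy to verify with a short sympy computation (a sketch; {r} is assumed positive):

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# Polar metric (11) and its matrix inverse, which should reproduce (12)
g_down = sp.Matrix([[1, 0], [0, r**2]])
g_up = g_down.inv()

assert g_up == sp.Matrix([[1, 0], [0, 1 / r**2]])
assert g_up * g_down == sp.eye(2)  # eq. (1) in matrix form
```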

A contravariant vector {v^{i}} can be lowered (converted to a covariant vector) by contracting with {g_{ij}}:

\displaystyle  v_{i}=g_{ij}v^{j} \ \ \ \ \ (13)

The covariant vector can be converted back into a contravariant vector by raising its index:

\displaystyle  g^{ij}v_{j} = g^{ij}g_{jk}v^{k}\ \ \ \ \ (14)

\displaystyle  = \delta_{\;\; k}^{i}v^{k}\ \ \ \ \ (15)

\displaystyle  = v^{i} \ \ \ \ \ (16)
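This round trip can be seen concretely with a small numerical check in Python, using the 2-d polar metric at the sample point {r=2} (the vector components chosen are arbitrary):

```python
# Lower and raise indices with the 2-d polar metric at r = 2 (sample values)
r = 2.0
g_down = [[1.0, 0.0], [0.0, r**2]]   # g_ij, eq. (11)
g_up = [[1.0, 0.0], [0.0, r**-2]]    # g^ij, eq. (12)

def contract(metric, v):
    """Contract a rank-2 metric with a vector: sum_j metric[i][j] * v[j]."""
    return [sum(metric[i][j] * v[j] for j in range(2)) for i in range(2)]

v_up = [3.0, 0.5]                  # arbitrary contravariant (v^r, v^theta)
v_down = contract(g_down, v_up)    # v_i = g_ij v^j
v_back = contract(g_up, v_down)    # v^i = g^ij v_j

# Lowering then raising recovers the original components
assert all(abs(a - b) < 1e-12 for a, b in zip(v_back, v_up))
print(v_down)  # [3.0, 2.0]
```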

If we start with a vector {v^{i}} in rectangular coordinates, we can convert it to polar coordinates:

\displaystyle  v^{r} = v^{x}\cos\theta+v^{y}\sin\theta\ \ \ \ \ (17)

\displaystyle  v^{\theta} = -v^{x}\frac{\sin\theta}{r}+v^{y}\frac{\cos\theta}{r} \ \ \ \ \ (18)

We can lower these components by contracting with {g_{ij}}:

\displaystyle  v_{r} = v^{x}\cos\theta+v^{y}\sin\theta\ \ \ \ \ (19)

\displaystyle  v_{\theta} = r^{2}\left(-v^{x}\frac{\sin\theta}{r}+v^{y}\frac{\cos\theta}{r}\right)\ \ \ \ \ (20)

\displaystyle  = r\left(-v^{x}\sin\theta+v^{y}\cos\theta\right) \ \ \ \ \ (21)

The square magnitude is

\displaystyle  v^{i}v_{i} = v^{r}v_{r}+v^{\theta}v_{\theta}\ \ \ \ \ (22)

\displaystyle  = \left(v^{x}\cos\theta+v^{y}\sin\theta\right)^{2}+\left(-v^{x}\sin\theta+v^{y}\cos\theta\right)^{2}\ \ \ \ \ (23)

\displaystyle  = \left(v^{x}\right)^{2}+\left(v^{y}\right)^{2} \ \ \ \ \ (24)

(In (22) the repeated labels {r} and {\theta} on the RHS are not summed; the sum over {i} has been written out explicitly.)
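The invariance of the square magnitude is easy to confirm numerically; the sample values below are arbitrary:

```python
import math

# Arbitrary sample values (chosen only for this check)
vx, vy = 1.3, -0.7
r, th = 2.5, 0.9

# Polar contravariant components, eqs. (17)-(18)
v_r = vx * math.cos(th) + vy * math.sin(th)
v_th = (-vx * math.sin(th) + vy * math.cos(th)) / r

# Lowered components, eqs. (19)-(21)
v_r_low = v_r            # g_rr = 1
v_th_low = r**2 * v_th   # g_theta_theta = r^2

# The square magnitude v^i v_i equals (v^x)^2 + (v^y)^2, eq. (24)
mag2 = v_r * v_r_low + v_th * v_th_low
assert math.isclose(mag2, vx**2 + vy**2)
```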

9 thoughts on “Metric tensor: inverse and raising & lowering indices”


  5. Rana

    What definition of tensor are you using? I am using the definition in which a tensor is something that transforms under the tensorial transformation law, and am having a bit of trouble with the logic.

    You say that “we can try the transformation (4)”, then find that g^{'ij}g'_{jk}=\delta_k^i and conclude that g^{ij} is indeed a tensor. I guess you are using the multilinear-form definition? In my case I would seem to need the converse proof; you above show that if it transforms as a tensor, then it’s the inverse; I would need that if it is the inverse, then it transforms as a (contravariant) tensor.

    Seeing as how the inverse is certainly unique, I cannot however fully convince myself that the above isn’t in fact also all that I need. Hence the comment. Any thoughts?

    1. gwrowe Post author

      I guess what I’m saying is that if we require (2) to be true, then if {g^{ij}} satisfies the tensor transformation rule (4), it also satisfies equation (2). I agree that this doesn’t on its own prove that {g^{ij}} is unique, but I’m taking the physicist’s way out by quoting a mathematical theorem that matrix inverses are unique, and that a rank 2 tensor can be represented by a square matrix. Fortunately, all metric tensors are rank 2 (I think) so it seems to be a general result.

  6. Rana

    Yes… thanks. For some increasingly elusive reason I’m still not as comfortable with this as I feel I should be but certainly this proof is in fact complete. IF we proclaim g^ to transform tensorially THEN it is the inverse; inverses being unique, g^ must in fact BE said tensorially transforming thing.

    It seems then that the “Since the matrix inverse is unique” line would like to be moved one sentence up, before the “Thus g^ is a […] tensor” one, but otherwise it seems perfectly fine. Thank you for sharing this.

    1. gwrowe Post author

      If it’s any consolation, even after writing hundreds of posts I don’t feel particularly confident that I actually understand the underlying physics very well either. Maybe it’s just one manifestation of Feynman’s famous quote that “if you think you understand quantum mechanics [or, I imagine, relativity], you don’t understand quantum mechanics”.

      1. silvascientist

        Richard Feynman – “There was a time when the newspapers said that only twelve men understood the theory of relativity. I do not believe there ever was such a time. There might have been a time when only one man did, because he was the only guy who caught on, before he wrote his paper. But after people read the paper a lot of people understood the theory of relativity in some way or other, certainly more than twelve. On the other hand, I think I can safely say that nobody understands quantum mechanics.”

