Tensors

Not all physical quantities can be represented by
scalars, vectors, or one-forms (dual vectors). We
will need something more flexible, and tensors fit
the bill.
Like vectors, tensors represent physical quantities
that are invariant and independent of the coordinate
system. However, when they are expressed in terms
of a basis, they have components that transform.
They live in flat tangent or cotangent space, or in
tensor products of the two.
Tensors can be defined in several different ways.
For our purposes they are equivalently:
1. Multilinear maps that map vectors and dual
vectors into the space of real numbers.
2. Elements of tensor products of vector spaces.
Schematically, we can view a tensor, 𝕋(_,_,_), as a
'machine' that takes vectors and dual vectors as
inputs and outputs real numbers. The number of
slots corresponds to the rank of the tensor.
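This 'machine' picture can be sketched numerically: filling every slot of a rank 3 tensor contracts all of its indices and returns a single real number. A minimal numpy sketch (all components here are made-up numbers, not from any physical example):

```python
import numpy as np

# Components T^{mu nu}_{gamma} of a rank-3 tensor (made-up numbers)
T = np.arange(27.0).reshape(3, 3, 3)

# Inputs, already in contracted form: A_mu, B_nu (lowered vector
# components) and C^gamma (raised dual-vector components)
A_low = np.array([1.0, 0.0, 2.0])   # A_mu
B_low = np.array([0.0, 1.0, 1.0])   # B_nu
C_up  = np.array([1.0, 2.0, 0.0])   # C^gamma

# T(A, B, C) = T^{mu nu}_gamma A_mu B_nu C^gamma -- a single real number
result = np.einsum('mng,m,n,g->', T, A_low, B_low, C_up)
print(result)

# The number of slots (the rank) is just the number of array axes
print(T.ndim)   # 3
```

The `einsum` subscript string makes the 'machine' explicit: every index appears twice and is summed away, leaving a scalar.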
Tensor Components

𝕋(_,_,_) is a geometric object. It has no components
until a basis is assigned, and the components are basis
dependent. Once a basis is assigned, the components can
be written as a matrix. We can define the bases of vectors
and dual vectors as:
Basis: {e_{μ}}
Dual basis: {θ^{μ}}
And, from the discussion on dual spaces, we had:
θ^{μ}(e_{α}) = δ^{μ}_{α}
Let's see how the machine operates.
For argument's sake, let's consider A and B to be vectors
and C to be a dual vector (one-form).
𝕋(A,B,C) ≡ 𝕋(A^{μ}e_{μ},B^{ν}e_{ν},C_{γ}θ^{γ})
≡ A^{μ}B^{ν}C_{γ}𝕋(e_{μ},e_{ν},θ^{γ}) ... 1.
Now, we need to understand what 𝕋(e_{μ},e_{ν},θ^{γ}) produces.
As stated above, we can also represent our tensor
machine as the tensor product of two vectors U and V
and a one-form W.

Digression:
Any vector is also a tensor. In this case, our machine
would have only 1 slot. Therefore,
𝕋(_) ≡ V(_)
So,
V(A) = V.A ... a scalar
and,
V(θ^{μ}) = V^{μ}
and,
V = V^{μ}e_{μ}

The tensor product is defined as:
U ⊗ V ⊗ W(_,_,_) = [U(_)][V(_)][W(_)] = [U._][V._][W._]
Expanding each factor in its basis:
U ⊗ V ⊗ W = U^{μ}e_{μ} ⊗ V^{ν}e_{ν} ⊗ W_{γ}θ^{γ}
= U^{μ}V^{ν}W_{γ}(e_{μ} ⊗ e_{ν} ⊗ θ^{γ})
= T^{μν}_{γ}(e_{μ} ⊗ e_{ν} ⊗ θ^{γ})
So we can conclude:
𝕋 = T^{μν}_{γ}(e_{μ} ⊗ e_{ν} ⊗ θ^{γ})
To get T^{μν}_{γ} we swap the indices on the basis
elements and feed them into the tensor product (i.e.
e_{μ} → θ^{μ}, e_{ν} → θ^{ν} and θ^{γ} → e_{γ}). This leads to:
U ⊗ V ⊗ W(θ^{μ},θ^{ν},e_{γ}) = [U.θ^{μ}][V.θ^{ν}][W.e_{γ}]
Now, from the digression above, U.θ^{μ} is the projection
of U onto the respective direction (the component
of U, U^{μ}). Likewise for V and W. Therefore,
U ⊗ V ⊗ W(θ^{μ},θ^{ν},e_{γ}) = U^{μ}V^{ν}W_{γ}
≡ T^{μν}_{γ}
Therefore, the components of a tensor in a given
frame are found by feeding our tensor machine
the dual basis and basis vectors associated with
that particular frame. The machine multiplies each
component of the input vectors and dual vectors by
the corresponding component of the tensor, and these
products are summed to obtain the overall result.
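The recipe above can be checked numerically: build 𝕋 = U ⊗ V ⊗ W as an outer product, then feed the machine 'one-hot' basis elements to read off a single component. A sketch with made-up components:

```python
import numpy as np

U = np.array([1.0, 2.0, 3.0])    # U^mu (vector components)
V = np.array([4.0, 5.0, 6.0])    # V^nu (vector components)
W = np.array([7.0, 8.0, 9.0])    # W_gamma (dual-vector components)

# T^{mu nu}_gamma = U^mu V^nu W_gamma  (tensor product)
T = np.einsum('m,n,g->mng', U, V, W)

# Feeding the machine the basis elements dual to each slot picks out
# one component: here (mu, nu, gamma) = (0, 2, 1)
theta0 = np.array([1.0, 0.0, 0.0])  # selects mu = 0
theta2 = np.array([0.0, 0.0, 1.0])  # selects nu = 2
e1     = np.array([0.0, 1.0, 0.0])  # selects gamma = 1

component = np.einsum('mng,m,n,g->', T, theta0, theta2, e1)
print(component, T[0, 2, 1])  # both equal U[0]*V[2]*W[1]
```

The one-hot arrays play the role of θ^{μ}, θ^{ν} and e_{γ} in this component basis.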
Metric Tensor

The metric tensor takes as its input a pair of tangent
vectors at a point on a manifold and produces a real
number (a scalar) in a way that generalizes the dot
product of vectors in Euclidean space. Therefore:
g(A,B) = A.B
We can feed g with basis vectors to find its components,
g_{αβ}
g_{αβ} = g(e_{α},e_{β})
= e_{α}.e_{β}
Lorentz frame (using the (−,+,+,+) signature):
Considering just the time and x directions:
e_{0}.e_{0} = −1 = η_{00}
e_{0}.θ^{0} = 1
e_{0}.e_{1} = 0
e_{0}.θ^{1} = 0
e_{1}.θ^{1} = 1
e_{1}.e_{1} = 1 = η_{11}
Similar arguments follow for the y and z directions.
This leads to:
g_{00} = −1, g_{jk} = δ_{jk} => g_{αβ} = η_{αβ}
Likewise, the inverse metric is obtained from:
g^{αβ} = g(θ^{α},θ^{β})
= θ^{α}.θ^{β}
= η^{αβ}
and,
g^{α}_{β} = g(θ^{α},e_{β})
= θ^{α}.e_{β}
= δ^{α}_{β}
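In components, the statements above say that g_{αβ} is the matrix of dot products of basis vectors, and that the inverse metric is its matrix inverse. A numpy sketch for the Minkowski metric (assuming the (−,+,+,+) signature):

```python
import numpy as np

# Minkowski metric components eta_{alpha beta}, signature (-,+,+,+)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# g(A, B) = g_{alpha beta} A^alpha B^beta generalizes the dot product
def g(A, B, metric=eta):
    return np.einsum('ab,a,b->', metric, A, B)

A = np.array([2.0, 1.0, 0.0, 0.0])   # components (A^0, A^1, A^2, A^3)
B = np.array([3.0, 1.0, 0.0, 0.0])

print(g(A, B))   # -1*2*3 + 1*1 = -5.0

# The inverse metric g^{alpha beta} satisfies g^{ab} g_{bc} = delta^a_c
eta_inv = np.linalg.inv(eta)
print(np.allclose(eta_inv @ eta, np.eye(4)))   # True
```

For the Minkowski metric, η is numerically its own inverse, which the last check confirms.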
Components as Matrices

The components of a rank 2 tensor can be written as
an n x n matrix. For a rank 3 tensor we can visualize
the components as forming a three-dimensional array. For
ranks higher than 3 it is difficult to visualize the
structure.
Again, considering A and B to be vectors and C to be
a dual vector (one-form), we can use equation 1. to
write:
U ⊗ V ⊗ W(A,B,C) = [U.A][V.B][W.C]
≡ [U^{μ}e_{μ}.A^{α}e_{α}][V^{ν}e_{ν}.B^{β}e_{β}][W_{γ}θ^{γ}.C_{δ}θ^{δ}]
≡ [U^{μ}A^{α}e_{μ}.e_{α}][V^{ν}B^{β}e_{ν}.e_{β}][W_{γ}C_{δ}θ^{γ}.θ^{δ}]
≡ [g_{μα}U^{μ}A^{α}][g_{νβ}V^{ν}B^{β}][g^{γδ}W_{γ}C_{δ}]
≡ [U^{μ}A_{μ}][V^{ν}B_{ν}][W_{γ}C^{γ}]
≡ T^{μν}_{γ}A_{μ}B_{ν}C^{γ}
≡ T^{μν}_{γ}[e_{μ}.A][e_{ν}.B][θ^{γ}.C]
≡ T^{μν}_{γ}[e_{μ}(A)][e_{ν}(B)][θ^{γ}(C)]
Now,
[e_{μ}(A)][e_{ν}(B)][θ^{γ}(C)] = e_{μ} ⊗ e_{ν} ⊗ θ^{γ}(A,B,C)
So,
U ⊗ V ⊗ W(A,B,C) ≡ T^{μν}_{γ}(e_{μ} ⊗ e_{ν} ⊗ θ^{γ})(A,B,C)
Which is basically a confirmation of equation 1.
Let's summarize what we have so far.
𝕋(A,B,C) = T^{μν}_{γ}A_{μ}B_{ν}C^{γ} ∈ ℝ
The left hand side is written in tensor notation, the
right hand side in component (index) notation.
The left hand side looks just like a function. The
problem with this type of notation is that we can't
immediately see which of the slots are supposed to
take one-forms, and which ones are supposed to take
vectors. The right hand side skips the brackets, and
instead writes upper and lower indices to denote the
places where one-forms and vectors go. This notation
is more concise, in the sense that it shows exactly
what we need to input into our tensor to obtain a
certain result.
Empty Slots

If you decide to leave one or more of the slots
empty, then the result will not be a real number,
but another tensor which contains exactly your
“leftover” empty slots. Consider the following
multilinear maps:
𝕋(_,B,C) = T^{μν}_{γ}B_{ν}C^{γ} = T^{μ} ≡ V^{μ}
𝕋(A,B,_) = T^{μν}_{γ}A_{μ}B_{ν} = T_{γ} ≡ V_{γ}
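Leaving a slot empty amounts to contracting over only some of the indices, which leaves a lower-rank tensor rather than a scalar. A numpy sketch (made-up components):

```python
import numpy as np

T = np.arange(27.0).reshape(3, 3, 3)    # T^{mu nu}_gamma (made-up)
A = np.array([1.0, 0.0, 1.0])            # A_mu
B = np.array([0.0, 1.0, 0.0])            # B_nu
C = np.array([2.0, 0.0, 1.0])            # C^gamma

full  = np.einsum('mng,m,n,g->', T, A, B, C)  # all slots filled: a scalar
slot1 = np.einsum('mng,n,g->m', T, B, C)      # first slot empty: vector V^mu
slot3 = np.einsum('mng,m,n->g', T, A, B)      # last slot empty: dual vector V_gamma

print(slot1.shape)   # (3,) -- one leftover upper index
print(slot3.shape)   # (3,) -- one leftover lower index

# Consistency: filling the leftover slot of slot1 with A recovers the scalar
print(np.isclose(np.einsum('m,m->', slot1, A), full))   # True
```

The leftover `einsum` output index is exactly the "leftover empty slot" of the text.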
Multiplication and Addition Properties

Tensors obey the scalar multiplication and
addition properties associated with vector
spaces.
𝕋(αA + βB,C,D) = α𝕋(A,C,D) + β𝕋(B,C,D)
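Linearity in each slot can be verified directly; a quick numerical check of the identity above, with made-up random components and α = 2, β = −3:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4, 4))        # a rank-3 tensor (made-up)
A, B, C, D = rng.standard_normal((4, 4))  # four random argument vectors
alpha, beta = 2.0, -3.0

def apply(T, x, y, z):
    # Full contraction of T with its three arguments
    return np.einsum('abc,a,b,c->', T, x, y, z)

lhs = apply(T, alpha * A + beta * B, C, D)
rhs = alpha * apply(T, A, C, D) + beta * apply(T, B, C, D)
print(np.isclose(lhs, rhs))   # True -- linear in the first slot
```

The same check works slot by slot, which is what "multilinear" means.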
Tensor Classifications

Tensors are classified by the number of upper
and lower indices:
(# of upper indices, # of lower indices)
(0,0) = Scalar
(1,0) = Contravariant Vector.
(0,1) = Covariant Vector (dual vector, 1form).
(2,0) = Rank 2 contravariant tensor.
(0,2) = Rank 2 covariant tensor.
(1,1) = Rank 2 mixed tensor.
etc.
Raising and Lowering Indices

We can use the metric tensor to raise and lower
tensor indices:
T_{μν}^{γ} = T^{αβγ}g_{αμ}g_{βν}
Proof:
U ⊗ V ⊗ W(θ^{α},θ^{β},θ^{γ})g(e_{α},e_{μ})g(e_{β},e_{ν})
= [U.θ^{α}][V.θ^{β}][W.θ^{γ}][e_{α}.e_{μ}][e_{β}.e_{ν}]
= U^{α}V^{β}W^{γ}g_{αμ}g_{βν}
= U_{μ}V_{ν}W^{γ}
= T_{μν}^{γ}
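The lowering formula is just a pair of contractions with the metric. A numpy sketch, using the Minkowski metric (assuming the (−,+,+,+) signature) and made-up components:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # g_{alpha beta}
T_up = np.arange(64.0).reshape(4, 4, 4)   # T^{alpha beta gamma} (made-up)

# T_{mu nu}^{gamma} = T^{alpha beta gamma} g_{alpha mu} g_{beta nu}
T_low = np.einsum('abg,am,bn->mng', T_up, eta, eta)

# With a diagonal metric, lowering flips the sign once per 0-index lowered:
print(T_low[0, 0, 2] == T_up[0, 0, 2])    # True: two sign flips cancel
print(T_low[0, 1, 2] == -T_up[0, 1, 2])   # True: one sign flip
```

Each metric factor in the `einsum` string lowers exactly one index, matching the proof above term by term.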
Example

Consider vectors in flat spacetime, with
η_{μν} = diag(−1,1,1,1):
η_{μν}V^{μ} = V_{ν}

| −1  0  0  0 | | ct |   | −ct |
|  0  1  0  0 | |  x | = |   x |
|  0  0  1  0 | |  y |   |   y |
|  0  0  0  1 | |  z |   |   z |

So lowering the index flips the sign of the time
component.
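A one-line numerical version of this example (assuming the (−,+,+,+) signature):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # eta_{mu nu}, signature (-,+,+,+)
V_up = np.array([5.0, 1.0, 2.0, 3.0])    # V^mu = (ct, x, y, z)

V_low = eta @ V_up                        # V_nu = eta_{mu nu} V^mu
print(V_low)                              # [-5.  1.  2.  3.]
```

Only the time component changes sign, as expected for this signature.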
Index Contraction

𝕋(_,_,_,_) = A ⊗ B ⊗ C ⊗ D(_,_,_,_)
C_{13}[A ⊗ B ⊗ C ⊗ D(_,_,_,_)] ≡ (A.C)(B ⊗ D(_,_))
So basically, the contraction has 'strangled'
the 1st and 3rd slots, so to speak.
In terms of components, the first factor on the RHS
becomes:
A.C = A^{μ}e_{μ}.C^{ν}e_{ν}
= A^{μ}C^{ν}e_{μ}.e_{ν}
= g_{μν}A^{μ}C^{ν}
= A^{μ}C_{μ}
Now, feeding the second factor the dual basis:
B ⊗ D(θ^{β},θ^{δ}) = [B.θ^{β}][D.θ^{δ}]
= B^{β}D^{δ}
Therefore, the result has components:
C_{13}[A ⊗ B ⊗ C ⊗ D(_,_,_,_)] = A^{μ}C_{μ}B^{β}D^{δ}
When the tensor is interpreted as a matrix, the
contraction operation is defined for pairs of
indices and is the generalization of the trace.
For a (1,1) tensor this looks like:

            | a b c d |
T^{μ}_{ν} = | e f g h |
            | i j k l |
            | m n o p |

T^{μ}_{μ} = a + f + k + p = T, a scalar.
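For a (1,1) tensor, the contraction T^{μ}_{μ} is literally the matrix trace; in numpy:

```python
import numpy as np

# A (1,1) tensor as a 4x4 matrix of components (made-up numbers)
T = np.array([[ 1.0,  2.0,  3.0,  4.0],
              [ 5.0,  6.0,  7.0,  8.0],
              [ 9.0, 10.0, 11.0, 12.0],
              [13.0, 14.0, 15.0, 16.0]])

# T^{mu}_{mu}: sum of the diagonal components -- a scalar
print(np.trace(T))            # 1 + 6 + 11 + 16 = 34.0
print(np.einsum('mm->', T))   # the same contraction written as einsum
```

For higher-rank tensors, the repeated-index `einsum` form generalizes the trace to any chosen pair of slots.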
Transformation Properties

T^{μν} → T^{μ'ν'} = Λ^{μ'}_{μ}Λ^{ν'}_{ν}T^{μν}
T^{μ}_{ν} → T^{μ'}_{ν'} = Λ^{μ'}_{μ}Λ^{ν}_{ν'}T^{μ}_{ν}
T_{μν} → T_{μ'ν'} = Λ^{μ}_{μ'}Λ^{ν}_{ν'}T_{μν}
Where Λ^{μ'}_{μ} = ∂x^{μ'}/∂x^{μ} and Λ^{μ}_{μ'} = ∂x^{μ}/∂x^{μ'}.
Tensors are characterized by these transformation
laws. Therefore we can say that if an object does
not transform in this exact way, it cannot be a
tensor. This requirement will turn out to be very
useful when we look at how derivatives of vectors
and tensors transform in curved spacetime.
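These transformation laws can be exercised numerically. The sketch below transforms a (2,0) tensor with a Lorentz boost along x (assuming the (−,+,+,+) signature; the components of T are made-up), and first checks the defining property of a Lorentz transformation:

```python
import numpy as np

# Lorentz boost along x with velocity beta (units with c = 1)
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0.0, 0.0],
                [-gamma * beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])   # Lambda^{mu'}_{mu}

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
# Defining property of a Lorentz transformation: Lambda^T eta Lambda = eta
print(np.allclose(Lam.T @ eta @ Lam, eta))   # True

T = np.arange(16.0).reshape(4, 4)            # T^{mu nu} (made-up components)
# T^{mu' nu'} = Lambda^{mu'}_{mu} Lambda^{nu'}_{nu} T^{mu nu}
T_prime = np.einsum('am,bn,mn->ab', Lam, Lam, T)
print(T_prime.shape)                          # (4, 4)
```

One Λ factor appears per index, which is the pattern of all three transformation laws above.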
Matrix Expressions

In some cases it may be required to revert back to
matrix expressions. Matrix multiplication requires
getting repeated indices adjacent to each other (see
the notes on Index Notation for more details). In
our notation:
Λ^{μ'}_{μ} = Λ
Λ_{μ}^{μ'} = (Λ^{μ'}_{μ})^{T} = Λ^{T}
Λ^{μ}_{μ'} = (Λ^{μ'}_{μ})^{-1} = Λ^{-1}
Λ_{μ'}^{μ} = ((Λ^{μ'}_{μ})^{-1})^{T} = (Λ^{-1})^{T}
Therefore,
T^{μν} → T^{μ'ν'} = Λ^{μ'}_{μ}Λ^{ν'}_{ν}T^{μν}
= Λ^{μ'}_{μ}T^{μν}Λ^{ν'}_{ν}
= Λ^{μ'}_{μ}T^{μν}Λ_{ν}^{ν'}
= ΛTΛ^{T}

Digression:
In the notes on the Lorentz group we had:
T^{μν} = Λ^{μ}_{σ}Λ^{ν}_{ρ}T^{σρ}
= Λ^{μ}_{σ}T^{σρ}Λ^{ν}_{ρ}
We want:
T^{μν} = Λ^{μ}_{σ}T^{σρ}Λ_{ρ}^{ν}
= ΛTΛ^{T}

T^{μ}_{ν} → T^{μ'}_{ν'} = Λ^{μ'}_{μ}Λ^{ν}_{ν'}T^{μ}_{ν}
= Λ^{μ'}_{μ}T^{μ}_{ν}Λ^{ν}_{ν'}
= ΛTΛ^{-1}
T_{μν} → T_{μ'ν'} = Λ^{μ}_{μ'}Λ^{ν}_{ν'}T_{μν}
= Λ^{μ}_{μ'}T_{μν}Λ^{ν}_{ν'}
We want:
T_{μ'ν'} = Λ_{μ'}^{μ}T_{μν}Λ^{ν}_{ν'}
= (Λ^{-1})^{T}TΛ^{-1}
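The matrix shorthands can be checked against the explicit index sums. A sketch using a boost along x with made-up tensor components, confirming ΛTΛ^T for a (2,0) tensor and ΛTΛ^{-1} for a (1,1) tensor:

```python
import numpy as np

beta = 0.8
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0.0, 0.0],
                [-gamma * beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])    # Lambda^{mu'}_{mu}
Lam_inv = np.linalg.inv(Lam)              # Lambda^{mu}_{mu'}

T20 = np.arange(16.0).reshape(4, 4)       # T^{mu nu}   (made-up)
T11 = np.arange(16.0).reshape(4, 4)       # T^{mu}_{nu} (made-up)

# (2,0) tensor: index form vs matrix form
lhs20 = np.einsum('am,bn,mn->ab', Lam, Lam, T20)
print(np.allclose(lhs20, Lam @ T20 @ Lam.T))       # True

# (1,1) tensor: index form vs matrix form
lhs11 = np.einsum('am,nb,mn->ab', Lam, Lam_inv, T11)
print(np.allclose(lhs11, Lam @ T11 @ Lam_inv))     # True
```

Transposing or inverting a Λ factor is exactly what moves its summed index next to the matching index of T.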
Tensor Equations

The physical world does not have a predetermined
coordinate system. The laws of physics should be
built out of equations which are invariant under
transformations connecting local inertial frames.
Tensors are the answer!
Abstract Index Notation

Abstract index notation is a mathematical notation
that uses indices to indicate tensor types, rather
than their components in a particular basis. Abstract
indices are denoted by lower case Latin letters (a, b,
etc.). They do not represent components with respect
to a basis, which are denoted by Greek letters (α, β, etc.).
They assign names to the tensor slots and are not
numbers, as is the case for component indices. Thus,
𝕋(_,_,_) = T^{ab}_{c}
Where the labels a, b and c name the three slots, and
T^{ab}_{c} represents a mixed tensor. And,
𝕋(_,_,_) = T^{μν}_{γ}(e_{μ} ⊗ e_{ν} ⊗ θ^{γ})
Where T^{μν}_{γ} represents the components of the same
mixed tensor in a basis.
The idea behind abstract index notation is to
have a notation for tensorial expressions that
mirrors the expressions for their basis components
(had a basis been introduced). Using abstract index
notation one can only write down tensorial expressions,
since no basis has been specified. Despite these
conventions, many texts do not make the distinction
between Greek and Latin letters, so the reader
needs to study the context to get the correct
interpretation.
Now let's consider the following tensor equation:
A^{ab} = B^{a}C^{b}_{c}D^{c} + E^{ab} → A^{a'b'} = B^{a'}C^{b'}_{c'}D^{c'} + E^{a'b'}
Λ^{a'}_{a}Λ^{b'}_{b}A^{ab} = Λ^{a'}_{a}B^{a}(Λ^{b'}_{b}Λ^{c}_{c'})C^{b}_{c}Λ^{c'}_{d}D^{d} + (Λ^{a'}_{a}Λ^{b'}_{b})E^{ab}
= Λ^{a'}_{a}Λ^{b'}_{b}(Λ^{c}_{c'}Λ^{c'}_{d})B^{a}C^{b}_{c}D^{d} + Λ^{a'}_{a}Λ^{b'}_{b}E^{ab}
= Λ^{a'}_{a}Λ^{b'}_{b}B^{a}C^{b}_{c}D^{c} + Λ^{a'}_{a}Λ^{b'}_{b}E^{ab}   (since Λ^{c}_{c'}Λ^{c'}_{d} = δ^{c}_{d})
= Λ^{a'}_{a}Λ^{b'}_{b}(B^{a}C^{b}_{c}D^{c} + E^{ab})
So both sides transform in exactly the same way under
a coordinate transformation: the equation holds in
every frame.
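This frame-independence can be spot-checked numerically: define A from B, C, D and E in one frame, transform every tensor, and verify the same equation holds in the new frame. A sketch with made-up components and an arbitrary (hypothetical) invertible transformation Λ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Lam = rng.standard_normal((n, n))    # Lambda^{a'}_{a}, a generic invertible map
Lam_inv = np.linalg.inv(Lam)         # Lambda^{a}_{a'}

B = rng.standard_normal(n)           # B^a
C = rng.standard_normal((n, n))      # C^b_c
D = rng.standard_normal(n)           # D^c
E = rng.standard_normal((n, n))      # E^{ab}

# A^{ab} = B^a C^b_c D^c + E^{ab} in the unprimed frame
A = np.einsum('a,bc,c->ab', B, C, D) + E

# Transform every tensor to the primed frame according to its index type
A_p = np.einsum('pa,qb,ab->pq', Lam, Lam, A)
B_p = Lam @ B
C_p = np.einsum('pb,cq,bc->pq', Lam, Lam_inv, C)   # one upper, one lower index
D_p = Lam @ D
E_p = np.einsum('pa,qb,ab->pq', Lam, Lam, E)

# The same equation holds in the primed frame
print(np.allclose(A_p, np.einsum('a,bc,c->ab', B_p, C_p, D_p) + E_p))  # True
```

The Λ and Λ^{-1} factors on the contracted c index cancel, which is the numerical counterpart of the δ^{c}_{d} step above.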