Tensors
-------
Not all physical quantities can be represented by
scalars, vectors or one-forms (dual vectors).  We
will need something more flexible, and tensors fit
the bill.

Like vectors, tensors represent physical quantities
that are invariant and independent of the coordinate
system.  However, when they are expressed in terms
of a basis they have components that transform.
They live in the tangent or cotangent space, or in
tensor products of the two.

Tensors can be defined in several different ways.
For our purposes they are equivalently:

1.  Multilinear maps that map vectors and dual
vectors into the space of real numbers.

2.  Elements of tensor products of vector spaces.

Schematically, we can view a tensor 𝕋(_,_,_) as a
'machine' that takes vectors and dual vectors as
inputs and outputs real numbers.  The number of
slots corresponds to the rank of the tensor.

Tensor Components
-----------------

𝕋(_,_,_) is a geometric object.  It has no components
until a basis is assigned.  The components will be basis
dependent.  Once assigned, the components can then be
written as a matrix.  We can define the bases of vectors
and dual vectors as:

Basis:  {eμ}

Dual basis:  {θμ}

And, from the discussion on dual spaces, we had:

θμ(eα) = δμα

Let's see how the machine operates.

For argument's sake, let's consider A and B to be vectors
and C to be a dual vector (one-form).

𝕋(A,B,C) ≡ 𝕋(Aμeμ,Bνeν,Cγθγ)

≡ AμBνCγ𝕋(eμ,eν,θγ)

Now, we need to understand what 𝕋(eμ,eν,θγ) produces.

As stated above, we can also represent our tensor
machine as the tensor product of 3 vectors U, V and
W.

-----------------------------------------------------
Digression:

Any vector is also a tensor.  In this case, our machine
would have only 1 slot.  Therefore,

𝕋(_) ≡ V(_)

So,

V(A) = V.A ... a scalar

and,

V(θμ) = Vμ

and,

V = Vμeμ

-----------------------------------------------------

The tensor product is defined as:

U ⊗ V ⊗ W(_,_,_) = [U(_)][V(_)][W(_)]

                 = [U._][V._][W._]  ... 1.

Expanding each factor in a basis:

U ⊗ V ⊗ W = Uμeμ ⊗ Vνeν ⊗ Wγθγ

          = UμVνWγ(eμ ⊗ eν ⊗ θγ)

          = Tμνγ(eμ ⊗ eν ⊗ θγ)

So we can conclude:

𝕋 = Tμνγ(eμ ⊗ eν ⊗ θγ)

To get Tμνγ we swap each basis element for its dual
and feed them into equation 1. (i.e.
eμ -> θμ, eν -> θν and θγ -> eγ).  This leads to:

U ⊗ V ⊗ W(θμ,θν,eγ) = [U.θμ][V.θν][W.eγ]

Now from the above digression, U.θμ is the projection
of U onto the respective directions (the components
of U, Uμ).  Likewise for V and W.  Therefore,

U ⊗ V ⊗ W(θμ,θν,eγ) = UμVνWγ

≡ Tμνγ

Therefore, the components of a tensor in a given
frame are found by feeding our tensor machine the
dual basis vectors and basis vectors associated with
that particular frame.  The machine acts by
multiplying every component of the vectors and
dual vectors by every relevant component of the
tensor, and summing everything up to obtain an
overall result.
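This prescription is easy to check numerically.  Below is a minimal
pure-Python sketch (the vectors U, V, W and the dimension n = 2 are
made up for illustration): the components of U ⊗ V ⊗ W are just the
products UμVνWγ.

```python
# Components of a pure tensor product T = U (x) V (x) W in a chosen basis:
# T[mu][nu][gamma] = U[mu] * V[nu] * W[gamma].  All numbers are made up.
n = 2
U = [1, 2]
V = [3, 4]
W = [5, 6]

# build the rank-3 component array as an outer product
T = [[[U[mu] * V[nu] * W[g] for g in range(n)]
      for nu in range(n)] for mu in range(n)]

print(T[1][0][1])   # U[1]*V[0]*W[1] = 2*3*6 = 36
```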

Metric Tensor
-------------

The metric tensor takes as its input a pair of tangent
vectors at a point on a manifold and produces a real
number scalar in a way that generalizes the dot product
of vectors in Euclidean space.  Therefore:

g(A,B) = A.B

We can feed g with basis vectors to find its components,
gαβ

gαβ = g(eα,eβ)

= eα.eβ

Lorentz frame:

Considering just the time and x directions:

e0.e0 = -1 = η00
e0.θ0 = 1
e0.e1 = 0
e0.θ1 = 0

e1.θ1 = 1
e1.e1 = 1 = η11

Similar arguments follow for the y and z directions.

g00 = -1, gjk = δjk, i.e. gαβ = ηαβ

Likewise, the inverse metric is obtained from:

gαβ = g(θα,θβ)

= θα.θβ

= ηαβ

and,

gαβ = g(θα,eβ)

= θα.eβ

= δαβ
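The metric machine above is easy to realize in code.  A minimal sketch
in pure Python, using the Minkowski components ηαβ = diag(-1,1,1,1)
from the text (the sample vector is made up):

```python
# Minkowski metric components eta_{alpha beta} = diag(-1, 1, 1, 1)
eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

def g(A, B):
    """g(A, B) = eta_{alpha beta} A^alpha B^beta -- a real number."""
    return sum(eta[a][b] * A[a] * B[b]
               for a in range(4) for b in range(4))

A = [2, 1, 0, 0]   # (ct, x, y, z) components, made up for illustration
print(g(A, A))     # -2*2 + 1*1 = -3, a timelike interval
```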

Components as Matrices
----------------------

The components of a rank 2 tensor can be written as
an n x n matrix.  For a rank 3 tensor we can visualize
the components forming a 3-dimensional array.  For
ranks higher than 3 it is difficult to visualize the
structure.

Again, considering A and B to be vectors and C to be
a dual vector (one-form) we can use equation 1. to
write:

U ⊗ V ⊗ W(A,B,C) = [U.A][V.B][W.C]

≡ [Uμeμ.Aαeα][Vνeν.Bβeβ][Wγθγ.Cδθδ]

≡ [UμAαeμ.eα][VνBβeν.eβ][WγCδθγ.θδ]

≡ [gμαUμAα][gνβVνBβ][gγδWγCδ]

≡ [UμAμ][VνBν][WγCγ]

≡ TμνγAμBνCγ

≡ Tμνγ[eμ.A][eν.B][θγ.C]

≡ Tμνγ[eμ(A)][eν(B)][θγ(C)]

Now,

[eμ(A)][eν(B)][θγ(C)] = eμ ⊗ eν ⊗ θγ(A,B,C)

So,

U ⊗ V ⊗ W(A,B,C) ≡ Tμνγ(eμ ⊗ eν ⊗ θγ)(A,B,C)

Which is basically a confirmation of equation 1.

Let's summarize what we have so far.

𝕋(A,B,C) = TμνγAμBνCγ ∈ ℝ
--------   ----------
   ^            ^
   |            |
 Tensor     Component (index)
Notation        Notation

The left hand side looks just like a function.  The
problem with this type of notation is that we can’t
immediately see which of the slots are supposed to
be 1-forms, and which ones are supposed to be vectors.
The right hand side skips the brackets, and instead
writes upper and lower indices to denote places where
1-forms and vectors need to go. This notation is
more concise, in the sense that it shows exactly what
we need to input into our tensor to obtain a certain
result.
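The component (index) notation on the right is literally a triple sum,
which a few lines of pure Python make explicit (all component values
below are made up):

```python
# T(A, B, C) = T^{mu nu}_gamma A^mu B^nu C_gamma as an explicit triple
# sum over a 2-dimensional basis.  All component values are made up.
n = 2
T = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # hypothetical T[mu][nu][gamma]
A = [1, 0]   # vector components A^mu
B = [0, 1]   # vector components B^nu
C = [2, 3]   # one-form components C_gamma

result = sum(T[mu][nu][g] * A[mu] * B[nu] * C[g]
             for mu in range(n) for nu in range(n) for g in range(n))
print(result)   # 3*2 + 4*3 = 18, a single real number
```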

Empty Slots
-----------

If you decide to leave one or more of the slots
empty, then the result will not be a real number,
but another tensor which contains exactly your
“leftover” empty slots.  Consider, the following
multilinear maps.

𝕋(_,B,C) = TμνγBνCγ = Tμ ≡ Vμ

𝕋(A,B,_) = TμνγAμBν = Tγ ≡ Vγ
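A quick sketch of the first line in pure Python (made-up components):
filling two of the three slots leaves one free index, i.e. a rank-1
tensor rather than a number.

```python
# T(_, B, C): fill the last two slots, leave the first slot empty.
# The result V[mu] = T[mu][nu][g] * B[nu] * C[g] still has one free index.
n = 2
T = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # made-up components
B = [1, 1]
C = [1, 0]

V = [sum(T[mu][nu][g] * B[nu] * C[g]
         for nu in range(n) for g in range(n))
     for mu in range(n)]

print(V)   # [4, 12] -- a rank-1 object, not a single number
```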

--------------------------------------

Tensors inherit the linearity of the underlying
vector spaces (addition and scalar multiplication)
in each slot:

𝕋(αA + βB,C,D) = α𝕋(A,C,D) + β𝕋(B,C,D)

Tensor Classifications
----------------------

Tensors are classified by the number of upper
and lower indices.

(# of upper indices,# of lower indices)

(0,0) = Scalar

(1,0) = Contravariant Vector.

(0,1) = Covariant Vector (dual vector, 1-form).

(2,0) = Rank 2 contravariant tensor.

(0,2) = Rank 2 covariant tensor.

(1,1) = Rank 2 mixed tensor.

etc.

Raising and Lowering Indices
----------------------------

We can use the metric tensor to raise and lower
tensor indices:

Tμνγ = Tαβγgαμgβν

Proof:

U ⊗ V ⊗ W(θα,θβ,θγ)g(eα,eμ)g(eβ,eν)

= [U.θα][V.θβ][W.θγ][eα.eμ][eβ.eν]

= UαVβWγgαμgβν

= UμVνWγ

= Tμνγ

Example
-------

Consider vectors in flat spacetime.

ημνVμ = Vν

-             - -    -     -     -
| -1  0  0  0 | | ct |     | -ct |
|  0  1  0  0 | |  x |  =  |   x |
|  0  0  1  0 | |  y |     |   y |
|  0  0  0  1 | |  z |     |   z |
-             - -    -     -     -
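The matrix multiplication above, sketched in pure Python (the sample
values for (ct, x, y, z) are made up):

```python
# Lowering an index with eta: V_nu = eta_{mu nu} V^mu flips the sign of
# the time component and leaves the spatial components alone.
eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]
V_up = [5, 1, 2, 3]   # (ct, x, y, z), made-up values

V_down = [sum(eta[mu][nu] * V_up[mu] for mu in range(4))
          for nu in range(4)]
print(V_down)   # [-5, 1, 2, 3]
```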

Index Contraction
-----------------

T(_,_,_,_) = A ⊗ B ⊗ C ⊗ D(_,_,_,_)

C13[A ⊗ B ⊗ C ⊗ D(_,_,_,_)] ≡ (A.C)(B ⊗ D(_,_))

So basically, the contraction has 'strangled'
the 1st and 3rd slots so to speak.

In terms of components the RHS becomes:

A.C = Aμeμ.Cνeν

= AμCνeμ.eν

= gμνAμCν

= AμCμ

Now,

B ⊗ D(θβ,θδ) = [B.θβ][D.θδ]

= BβDδ

Therefore, the result is:

C13[A ⊗ B ⊗ C ⊗ D(_,_,_,_)] = AμCμBβDδ

When the tensor is interpreted as a matrix, the
contraction operation is defined for pairs of
indices and is the generalization of the trace.
For a (1,1) tensor this looks like:

      -         -
      | a b c d |
Tμν = | e f g h |
      | i j k l |
      | m n o p |
      -         -

Tμμ = a + f + k + p = T, a scalar (the trace).
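In code, the contraction of a (1,1) tensor is exactly the trace of its
component matrix.  A sketch with made-up numeric entries standing in
for the letters a..p:

```python
# Contraction T^mu_mu = trace of the component matrix (numbers stand in
# for the letters a..p in the text).
T = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]

trace = sum(T[mu][mu] for mu in range(4))
print(trace)   # 1 + 6 + 11 + 16 = 34
```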

Transformation Properties
-------------------------

Tμν -> Tμ'ν' = Λμ'μΛν'νTμν

Tμν -> Tμ'ν' = Λμ'μΛνν'Tμν

Tμν -> Tμ'ν' = Λμμ'Λνν'Tμν

Where Λμ'μ = ∂xμ'/∂xμ and Λμμ' = ∂xμ/∂xμ'.

Tensors are characterized by these transformation
laws.  Therefore we can say that if an object does
not transform in this exact way, it cannot be a
tensor.  This requirement will turn out to be very
useful when we look at how derivatives of vectors
and tensors transform in curved spacetime.
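We can verify the first transformation law numerically for a tensor
built as an outer product of two vectors: transforming the components
with two factors of Λ must agree with building the tensor from the
transformed vectors.  The 2x2 matrix Lam and the vectors below are
made up; the identity holds for any invertible Λ.

```python
# Check T^{mu'nu'} = Lam^{mu'}_mu Lam^{nu'}_nu T^{mu nu} for T = U (x) V.
n = 2
Lam = [[2, 1], [1, 1]]   # made-up invertible transformation
U = [1, 2]
V = [3, 5]

T = [[U[m] * V[p] for p in range(n)] for m in range(n)]   # T^{mu nu}

# transform the components directly with two factors of Lam ...
T_prime = [[sum(Lam[mp][m] * Lam[vp][p] * T[m][p]
                for m in range(n) for p in range(n))
            for vp in range(n)] for mp in range(n)]

# ... and compare with the tensor built from the transformed vectors
U_p = [sum(Lam[mp][m] * U[m] for m in range(n)) for mp in range(n)]
V_p = [sum(Lam[vp][p] * V[p] for p in range(n)) for vp in range(n)]

print(T_prime == [[U_p[m] * V_p[p] for p in range(n)] for m in range(n)])
# True: the outer product transforms as a (2,0) tensor
```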

Matrix Expressions
------------------

In some cases it may be required to revert back to
matrix expressions.  Matrix multiplication requires
getting repeated indices adjacent to each other (see
the notes on Index Notation for more details).  In
our notation:

Λμ'μ = Λ

Λμμ' = (Λμ'μ)T = ΛT

Λμμ' = (Λμ'μ)-1 = Λ-1

Λμ'μ = ((Λμ'μ)-1)T = (Λ-1)T

Therefore,

Tμν -> Tμ'ν' = Λμ'μΛν'νTμν

= Λμ'μTμνΛν'ν

= Λ T ΛT

-----------------------------------------------------
Digression:

In the notes on the Lorentz group we had:

Tμν = ΛμσΛνρTσρ

= ΛμσTσρΛνρ

We want:

Tμν = ΛμσTσρΛρν

= Λ T ΛT

-----------------------------------------------------

Tμν -> Tμ'ν' = Λμ'μΛνν'Tμν

= Λμ'μTμνΛνν'

= Λ T Λ-1

Tμν -> Tμ'ν' = Λμμ'Λνν'Tμν

= Λμμ'TμνΛνν'

= (Λ-1)T T Λ-1
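As a sanity check on the covariant law, we can apply it to the metric
itself: written as a matrix product, transforming the components of
ηαβ with two factors of a Lorentz transformation gives back ηαβ (this
is the defining property ΛT η Λ = η).  A pure-Python sketch with an
assumed boost of β = 0.6, so γ = 1.25 (both exact in binary floating
point, which makes the comparison exact):

```python
# Transform the (0,2) metric components with two factors of a Lorentz
# boost mixing t and x: eta' = Lam^T eta Lam.
g_, gb = 1.25, 0.75                      # gamma, gamma*beta for beta = 0.6
Lam = [[g_, -gb], [-gb, g_]]
eta = [[-1.0, 0.0], [0.0, 1.0]]

eta_prime = [[sum(Lam[m][mp] * Lam[p][vp] * eta[m][p]
                  for m in range(2) for p in range(2))
              for vp in range(2)] for mp in range(2)]

print(eta_prime)   # [[-1.0, 0.0], [0.0, 1.0]] -- the metric is unchanged
```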

Tensor Equations
----------------

The physical world does not have a pre-determined
coordinate system.  The laws of physics should be
built out of equations which are invariant under
transformations connecting local inertial frames.

Abstract Index Notation
-----------------------

Abstract index notation is a mathematical notation
that uses indices to indicate tensor types, rather
than their components in a particular basis.  Abstract
indices are denoted by lower case Latin letters (a, b
etc.).  They do not represent components with respect
to a basis, which are denoted by Greek letters (α, β etc.).
They assign names to the tensor slots and are not
numbers as is the case for component indices.  Thus,

𝕋(_,_,_) = Tabc
  a b c

Where Tabc represents a mixed tensor.  And,

𝕋(_,_,_) = Tμνγ(eμ ⊗ eν ⊗ θγ)

Where Tμνγ represents the components of the same
mixed tensor in a basis.

The idea behind the abstract index notation is to
have a notation for tensorial expressions that
mirrors the expressions for their basis components
(had a basis been introduced).  Using abstract index
notation one can only write down tensorial expressions
since no basis has been specified.  Despite these
conventions, many texts do not make the distinction
between the use of Greek and Latin letters, so the
reader needs to study the context to get the correct
interpretation.

Now let's consider the following tensor equation:

Aab = BaCbcDc + Eab -> Aa'b' = Ba'Cb'c'Dc' + Ea'b'

Λa'aΛb'bAab = (Λa'a)Ba(Λb'bΛcc')Cbc(Λc'c)Dc + (Λa'aΛb'b)Eab

= Λa'aΛb'bΛcc'Λc'cBaCbcDc + Λa'aΛb'bEab

= Λa'aΛb'bBaCbcDc + Λa'aΛb'bEab

= Λa'aΛb'b(BaCbcDc + Eab)

Both sides pick up the same factors of Λ, so the
equation holds in any frame: it is form invariant
under a coordinate transformation.