Grassmann and Clifford Algebras

Grassmann (aka Exterior) Algebra

The fundamental operation in Exterior Algebra is
the Wedge product. The wedge product is a completely
antisymmetrized tensor product. To see this, consider
the tensor product of 2 vectors, u^{i} and v^{j} (≡ T^{ij}).
u ⊗ v = (1/2)(uv + vu) + (1/2)(uv − vu)
= symmetric part + antisymmetric part
Let's focus on the antisymmetric part. We can
now define the wedge product as:
u ∧ v = (1/2)(uv − vu)
= (1/2)[u,v]
At first sight this may look like the Lie algebra.
However, in general, it is not. Unlike the Lie
algebra, which lives in the tangent space of a
manifold, the exterior algebra lives in a vector
space that is separate from the original space of
vectors, V (hence exterior). This space is
referred to as Λ^{p}(V) and is called the p^{th}
EXTERIOR POWER. In other words, the wedge product
is an operation on a vector space whose result
is a member of a new product space, not an element
of the original vector space. Furthermore, the
Lie algebra satisfies the Jacobi identity whereas
the wedge product generally does not.
Algebraic Properties

Associativity: (u ∧ v) ∧ w = u ∧ (v ∧ w)
u ∧ u = 0
Consider:
(u + v) ∧ (u + v) = u ∧ u + u ∧ v + v ∧ u + v ∧ v
0 = u ∧ v + v ∧ u
Therefore,
u ∧ v = −(v ∧ u)
thereby confirming the completely antisymmetric
property for vectors.
s ∧ t = t ∧ s = st and s ∧ s = s^{2} for scalars s and t.
s ∧ u = u ∧ s = su for a scalar s and vector, u.
p-vectors

A p-vector is a (p,0) tensor which is completely
antisymmetric. p-vectors are constructed from
wedge products. However, the dimension of Λ^{p}
(i.e. the # of basis elements) for each type of
p-vector has to be taken into account. The
dimension is given by the formula for combinations:
dim Λ^{p} = D!/(p!(D − p)!)
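As a quick check, the dimension formula can be evaluated in a few lines of Python (a sketch; the function name `exterior_dim` is ours, built on the standard-library `math.comb`):

```python
from math import comb

# dim Λ^p = D!/(p!(D - p)!), i.e. the binomial coefficient (D choose p)
def exterior_dim(D, p):
    return comb(D, p)

# For D = 3 the dimensions follow the binomial row 1, 3, 3, 1
dims = [exterior_dim(3, p) for p in range(4)]
print(dims)  # [1, 3, 3, 1]
```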
The wedge product of a p-vector and a q-vector
produces a (p + q)-vector. However, there is a
limitation on p, q and p + q for a given D.
If (p + q) > D then one of the indices would
have to be duplicated, violating the requirement
of complete antisymmetry. For example, for
D = 3 the wedge product of a 2-vector with
another 2-vector would produce terms of the form:
T_{ijkl} = e_{i} ∧ e_{j} ∧ e_{k} ∧ e_{l}
However, there are only 3 basis vectors in D = 3,
therefore e_{l} would have to equal e_{i}, e_{j} or e_{k}.
Examples, for D = 3:
0-vector: f = f(x^{1},x^{2},x^{3})
1-vector: u = u_{1}e_{1} + u_{2}e_{2} + u_{3}e_{3}
2-vector: v = v_{12}(e_{1} ∧ e_{2}) + v_{23}(e_{2} ∧ e_{3}) + v_{13}(e_{1} ∧ e_{3})
3-vector: w = w_{123}(e_{1} ∧ e_{2} ∧ e_{3})
What would these look like for D = 4?
0-vector: f = f(x^{1},x^{2},x^{3},x^{4})
1-vector: u = u_{1}e_{1} + u_{2}e_{2} + u_{3}e_{3} + u_{4}e_{4}
2-vector: v = v_{12}(e_{1} ∧ e_{2}) + v_{13}(e_{1} ∧ e_{3}) + v_{14}(e_{1} ∧ e_{4})
+ v_{23}(e_{2} ∧ e_{3}) + v_{24}(e_{2} ∧ e_{4}) + v_{34}(e_{3} ∧ e_{4})
3-vector: w = w_{123}(e_{1} ∧ e_{2} ∧ e_{3}) + w_{124}(e_{1} ∧ e_{2} ∧ e_{4})
+ w_{134}(e_{1} ∧ e_{3} ∧ e_{4}) + w_{234}(e_{2} ∧ e_{3} ∧ e_{4})
Note: The coefficients in each case are functions of x^{μ}.
The Wedge Product of p-vectors

We use the example of the wedge product of a
1-vector and a 2-vector in D = 3 to illustrate
the process.
(u_{1}e_{1} + u_{2}e_{2} + u_{3}e_{3}) ∧ (v_{12}(e_{1} ∧ e_{2}) + v_{23}(e_{2} ∧ e_{3}) + v_{13}(e_{1} ∧ e_{3}))
= u_{1}v_{12}e_{1} ∧ (e_{1} ∧ e_{2}) + u_{1}v_{23}e_{1} ∧ (e_{2} ∧ e_{3}) + u_{1}v_{13}e_{1} ∧ (e_{1} ∧ e_{3})
+ u_{2}v_{12}e_{2} ∧ (e_{1} ∧ e_{2}) + u_{2}v_{23}e_{2} ∧ (e_{2} ∧ e_{3}) + u_{2}v_{13}e_{2} ∧ (e_{1} ∧ e_{3})
+ u_{3}v_{12}e_{3} ∧ (e_{1} ∧ e_{2}) + u_{3}v_{23}e_{3} ∧ (e_{2} ∧ e_{3}) + u_{3}v_{13}e_{3} ∧ (e_{1} ∧ e_{3})
= u_{1}v_{23}e_{1} ∧ (e_{2} ∧ e_{3}) + u_{2}v_{13}e_{2} ∧ (e_{1} ∧ e_{3}) + u_{3}v_{12}e_{3} ∧ (e_{1} ∧ e_{2})
= u_{1}v_{23}e_{1} ∧ (e_{2} ∧ e_{3}) − u_{2}v_{13}e_{1} ∧ (e_{2} ∧ e_{3}) + u_{3}v_{12}e_{1} ∧ (e_{2} ∧ e_{3})
= (u_{1}v_{23} − u_{2}v_{13} + u_{3}v_{12})(e_{1} ∧ e_{2} ∧ e_{3})
which is a 3-vector.
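The bookkeeping above (sort the basis indices, track the sign of the permutation, drop any blade with a repeated index) is easy to mechanize. A minimal Python sketch, with our own helper names `sort_sign` and `wedge` and arbitrary numeric coefficients:

```python
# p-vectors as dicts: keys are sorted index tuples (basis blades),
# values are coefficients. sort_sign returns the permutation sign
# needed to sort an index tuple, or 0 if an index repeats.
def sort_sign(indices):
    idx = list(indices)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):            # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            s, K = sort_sign(I + J)
            if s:
                out[K] = out.get(K, 0) + s * x * y
    return out

# u = 2e1 + 3e2 + 5e3, v = 7(e1 ∧ e2) + 11(e2 ∧ e3) + 13(e1 ∧ e3)
u = {(1,): 2, (2,): 3, (3,): 5}
v = {(1, 2): 7, (2, 3): 11, (1, 3): 13}
print(wedge(u, v))  # {(1, 2, 3): 18}
```

The only surviving blade is (1,2,3) with coefficient u_{1}v_{23} − u_{2}v_{13} + u_{3}v_{12} = 2(11) − 3(13) + 5(7) = 18, matching the formula above.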
Now consider the product of two 2-vectors in D = 3:
(u_{12}(e_{1} ∧ e_{2}) + u_{23}(e_{2} ∧ e_{3}) + u_{13}(e_{1} ∧ e_{3}))
∧ (v_{12}(e_{1} ∧ e_{2}) + v_{23}(e_{2} ∧ e_{3}) + v_{13}(e_{1} ∧ e_{3}))
Every term contains a repeated basis vector:
u_{12}v_{12}(e_{1} ∧ e_{2}) ∧ (e_{1} ∧ e_{2}) = u_{12}v_{23}(e_{1} ∧ e_{2}) ∧ (e_{2} ∧ e_{3})
= u_{12}v_{13}(e_{1} ∧ e_{2}) ∧ (e_{1} ∧ e_{3}) = 0
u_{23}v_{12}(e_{2} ∧ e_{3}) ∧ (e_{1} ∧ e_{2}) = u_{23}v_{23}(e_{2} ∧ e_{3}) ∧ (e_{2} ∧ e_{3})
= u_{23}v_{13}(e_{2} ∧ e_{3}) ∧ (e_{1} ∧ e_{3}) = 0
u_{13}v_{12}(e_{1} ∧ e_{3}) ∧ (e_{1} ∧ e_{2}) = u_{13}v_{23}(e_{1} ∧ e_{3}) ∧ (e_{2} ∧ e_{3})
= u_{13}v_{13}(e_{1} ∧ e_{3}) ∧ (e_{1} ∧ e_{3}) = 0
The product vanishes because p + q = 4 > D = 3,
which keeps the algebra closed.
The Cross Product

For D = 3, the cross product is the wedge product
of a 1-vector with a 1-vector.
Proof:
u = ae_{x} + be_{y} + ce_{z} (1-vector)
v = ee_{x} + fe_{y} + ge_{z} (1-vector)
Here D = 3 and the resulting exterior space is Λ^{2}.
u ∧ v = (bg − cf)(e_{y} ∧ e_{z})
+ (ag − ce)(e_{x} ∧ e_{z})
+ (af − be)(e_{x} ∧ e_{y})
where the basis is {(e_{y} ∧ e_{z}),(e_{x} ∧ e_{z}),(e_{x} ∧ e_{y})}.
The coefficients are the same as those in the
usual definition of the cross product of vectors
in three dimensions. However, the wedge product
produces a bivector instead of a vector.
The cross product is a special case where the
wedge product is the same as the Lie bracket.
We can prove this by verifying that the cross
product satisfies the Jacobi identity:
a x (b x c) + b x (c x a) + c x (a x b) = 0
Proof:
Using the definition of the vector triple product,
a x (b x c) = (a.c)b − (a.b)c, this becomes:
(a.c)b − (a.b)c + (b.a)c − (b.c)a + (c.b)a − (c.a)b
= 0 because the dot product is commutative.
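The Jacobi identity can also be spot-checked numerically; a sketch with arbitrary integer vectors (exact integer arithmetic, so the sum is exactly zero):

```python
# Cross product of 3-vectors represented as tuples
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vsum(*vs):
    # componentwise sum of several vectors
    return tuple(sum(c) for c in zip(*vs))

a, b, c = (1, 2, 3), (4, -1, 2), (-2, 5, 7)
jacobi = vsum(cross(a, cross(b, c)),
              cross(b, cross(c, a)),
              cross(c, cross(a, b)))
print(jacobi)  # (0, 0, 0)
```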
Areas in the Plane

The p-vectors are an algebraic construction used
in geometry to study areas, volumes, and their
higher-dimensional analogues. Consider the following:
For D = 2, the area is the wedge product of a
1-vector with a 1-vector.
Proof:
u = (x_{3} − x_{1})e_{x} + (y_{3} − y_{1})e_{y}
v = (x_{2} − x_{1})e_{x} + (y_{2} − y_{1})e_{y}
Here D = 2 and the resulting exterior space is Λ^{2}.
u ∧ v = (x_{3} − x_{1})(y_{2} − y_{1})(e_{x} ∧ e_{y})
+ (x_{2} − x_{1})(y_{3} − y_{1})(e_{y} ∧ e_{x})
Now (e_{y} ∧ e_{x}) = −(e_{x} ∧ e_{y}). Therefore,
u ∧ v = (x_{3} − x_{1})(y_{2} − y_{1})(e_{x} ∧ e_{y})
− (x_{2} − x_{1})(y_{3} − y_{1})(e_{x} ∧ e_{y})
= (x_{3}y_{2} − x_{3}y_{1} − x_{1}y_{2} + x_{1}y_{1} − x_{2}y_{3} + x_{2}y_{1}
+ x_{1}y_{3} − x_{1}y_{1})(e_{x} ∧ e_{y})
Let (x_{1},y_{1}) → (0,0). Then
A = (x_{3}y_{2} − x_{2}y_{3})(e_{x} ∧ e_{y})
= det | x_{3} y_{3} | (e_{x} ∧ e_{y})
      | x_{2} y_{2} |
This is the standard determinant form for the
area of a parallelogram.
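A quick numeric check of the wedge/determinant agreement (the sample points are arbitrary; the variable names are ours):

```python
# Signed area of the parallelogram spanned by u and v, with (x1, y1)
# translated to the origin as in the text.
x1, y1 = 0, 0
x2, y2 = 4, 1
x3, y3 = 1, 3

# coefficient of (e_x ∧ e_y) in u ∧ v
wedge_coeff = (x3 - x1) * (y2 - y1) - (x2 - x1) * (y3 - y1)

# 2x2 determinant with rows (x3, y3) and (x2, y2)
det = x3 * y2 - y3 * x2
print(wedge_coeff, det)  # -11 -11
```

The sign records the orientation of the pair (u, v); the magnitude is the parallelogram area.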
Differential Forms (aka p-Forms)

Differential forms are an approach to multivariable
calculus that is independent of coordinates. They
appear in the mathematics of differential geometry.
A p-form (aka differential form) is a (0,p) tensor
which is completely antisymmetric. p-forms are dual
to p-vectors. The wedge product of p-forms lives in
the space Λ^{p}(V*). Since Λ(V*) is the dual of Λ(V),
dim(Λ(V*)) = dim(Λ(V)).
All of the properties and constructions discussed
previously apply to differential forms:
The Exterior Derivative

p-forms can be both differentiated and integrated.
The EXTERIOR DERIVATIVE, d, allows us to differentiate
a p-form to obtain a (p + 1)-form. The exterior
derivative is:
(d)_{μ} ≡ (∂/∂x^{μ})
Note that:
d(dφ) = 0 since partial derivatives commute.
Forms can be expanded in terms of wedge products
and the exterior derivative.
For example, for D = 3:
0-form: γ = f(x^{1},x^{2},x^{3})
1-form: α = α_{1}dx^{1} + α_{2}dx^{2} + α_{3}dx^{3}
2-form: β = β_{12}(dx^{1} ∧ dx^{2}) + β_{23}(dx^{2} ∧ dx^{3}) + β_{13}(dx^{1} ∧ dx^{3})
3-form: δ = δ_{123}(dx^{1} ∧ dx^{2} ∧ dx^{3})
where the coefficients are functions of (x^{1},x^{2},x^{3}).
In general:
ω = Σ_{j} f_{j}dx^{j}
dω = Σ_{i}Σ_{j} (∂f_{j}/∂x^{i})dx^{i} ∧ dx^{j}
Example: In D = 3, γ = 3x^{2} + y + z
dγ = (∂(3x^{2})/∂x)dx + (∂(3x^{2})/∂y)dy + (∂(3x^{2})/∂z)dz
+ (∂(y)/∂x)dx + (∂(y)/∂y)dy + (∂(y)/∂z)dz
+ (∂(z)/∂x)dx + (∂(z)/∂y)dy + (∂(z)/∂z)dz
= 6xdx + dy + dz = α
dα = ((∂(6x)/∂x)dx + (∂(6x)/∂y)dy + (∂(6x)/∂z)dz) ∧ dx
+ ((∂(1)/∂x)dx + (∂(1)/∂y)dy + (∂(1)/∂z)dz) ∧ dy
+ ((∂(1)/∂x)dx + (∂(1)/∂y)dy + (∂(1)/∂z)dz) ∧ dz
= 6dx ∧ dx = 0
confirming d(dγ) = 0, since dx ∧ dx = dy ∧ dy = dz ∧ dz = 0
and the constant coefficients have vanishing derivatives.
Leibniz (Product) Rule

The exterior derivative of the product of a p-form
and a q-form is given by:
d(α ∧ β) = dα ∧ β + (−1)^{p}α ∧ dβ
The result is a (p + q + 1)-form, which requires
D ≥ p + q + 1.
Proof:
Consider a 1-form and a 2-form (p = 1, q = 2). The
result will be a 4-form, so both forms need
to be specified in D ≥ 4. For brevity we only
consider the first term of each to illustrate the
process.
α = fdx^{1} + ...
β = g(dx^{2} ∧ dx^{3}) + ...
d(α ∧ β) = d(fg(dx^{1} ∧ dx^{2} ∧ dx^{3}))
= (g.df + f.dg) ∧ dx^{1} ∧ dx^{2} ∧ dx^{3}
= (df ∧ dx^{1}) ∧ g(dx^{2} ∧ dx^{3}) + f.dg ∧ dx^{1} ∧ dx^{2} ∧ dx^{3}
Moving dg past dx^{1} in the 2nd term (one sign change,
since a 1-form moves past a 1-form) we get:
d(α ∧ β) = (df ∧ dx^{1}) ∧ g(dx^{2} ∧ dx^{3}) − f.dx^{1} ∧ dg ∧ (dx^{2} ∧ dx^{3})
Or,
d(α ∧ β) = dα ∧ β + (−1)^{p}α ∧ dβ
Volume (aka Top) Form

When p = D the form is called the VOLUME FORM. The
space Λ^{D} has dimension 1, and the volume form
establishes an orientation of the manifold. A
coordinate system (on an oriented manifold) is
oriented if:
dx^{1} ∧ dx^{2} ∧ ... ∧ dx^{D}
is positive. A transformation between oriented
coordinate systems has positive Jacobian.
Note that the exterior derivative of the top form
is 0 since it would result in a (D + 1)-form, whose
degree exceeds the dimension of the space.
Example:
For D = 3
ω = dx^{1} ∧ dx^{2} ∧ dx^{3}
Change of Coordinates

The derivative of a tensor transforms under a
change of coordinates as:
∇_{μ}V^{ν} = ∂_{μ}V^{ν} + Γ^{ν}_{μλ}V^{λ}
where Γ^{ν}_{μλ} are the Christoffel symbols. Let's look
at how the exterior derivative transforms. The
definition of the wedge product allows us to
write
dx^{0} ∧ ... ∧ dx^{n−1} = (1/n!)ε_{μ1 ... μn} dx^{μ1} ∧ ... ∧ dx^{μn}
since both the wedge product and the Levi-Civita
symbol are completely antisymmetric. Under a
coordinate transformation, ε stays the same while
the 1-forms change according to:
Basis: dx^{μ'} = (∂x^{μ'}/∂x^{μ})dx^{μ} or dx^{μ} = (∂x^{μ}/∂x^{μ'})dx^{μ'}
Components: ω_{μ'} = (∂x^{μ}/∂x^{μ'})ω_{μ}
For convenience we work with the 2-form. Let's look
at how the (dx^{1} ∧ dx^{2}) part transforms.
dx^{1} ∧ dx^{2} = ((∂x^{1}/∂x^{1'})dx^{1'} + (∂x^{1}/∂x^{2'})dx^{2'})
∧ ((∂x^{2}/∂x^{1'})dx^{1'} + (∂x^{2}/∂x^{2'})dx^{2'})
= (∂x^{1}/∂x^{1'})(∂x^{2}/∂x^{1'})(dx^{1'} ∧ dx^{1'})
+ (∂x^{1}/∂x^{2'})(∂x^{2}/∂x^{2'})(dx^{2'} ∧ dx^{2'})
+ (∂x^{1}/∂x^{1'})(∂x^{2}/∂x^{2'})(dx^{1'} ∧ dx^{2'})
+ (∂x^{1}/∂x^{2'})(∂x^{2}/∂x^{1'})(dx^{2'} ∧ dx^{1'})
= (∂x^{1}/∂x^{1'})(∂x^{2}/∂x^{2'})(dx^{1'} ∧ dx^{2'})
− (∂x^{1}/∂x^{2'})(∂x^{2}/∂x^{1'})(dx^{1'} ∧ dx^{2'})
= [(∂x^{1}/∂x^{1'})(∂x^{2}/∂x^{2'}) − (∂x^{1}/∂x^{2'})(∂x^{2}/∂x^{1'})](dx^{1'} ∧ dx^{2'})
= det(∂x^{μ}/∂x^{μ'})(dx^{1'} ∧ dx^{2'})
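The same cancellation can be verified for a concrete transformation; a numeric sketch using polar coordinates, for which det(∂x^{μ}/∂x^{μ'}) = r:

```python
import math

# dx ∧ dy under x = r cos(t), y = r sin(t): expanding term by term as
# above, the repeated-basis terms drop out and the two cross terms
# combine into the Jacobian determinant (the evaluation point is ours).
r, t = 2.0, 0.7
dxdr, dxdt = math.cos(t), -r * math.sin(t)
dydr, dydt = math.sin(t),  r * math.cos(t)

coeff = dxdr * dydt - dxdt * dydr   # coefficient of (dr ∧ dt)
print(coeff)                        # 2.0 (= r), up to rounding
```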
Therefore, the antisymmetry of the wedge
product automatically takes care of the need to
include the Christoffel symbols in the calculation.
Since the Christoffel symbols involve the metric
tensor and its derivatives, the implication of this
is that differential forms allow us to work with
manifolds without the help of the metric. In this
sense they are coordinate independent or 'coordinate
free'.
Relation to Gradient, Curl and Divergence

Recall:
For D = 3:
0-form: γ = f(x^{μ})
1-form: α = α_{1}dx^{1} + α_{2}dx^{2} + α_{3}dx^{3}
2-form: β = β_{12}(dx^{1} ∧ dx^{2}) + β_{23}(dx^{2} ∧ dx^{3}) + β_{13}(dx^{1} ∧ dx^{3})
3-form: δ = δ_{123}(dx^{1} ∧ dx^{2} ∧ dx^{3})
Differentiating the above p-forms gives:
The gradient (∇f)

dγ = (∂f/∂x^{μ})dx^{μ} (a 1-form)
Example:
f = x^{2} + z^{2}
df = 2xdx + 2zdz
The Curl (∇ x α)

dα = dα_{1} ∧ dx^{1} + dα_{2} ∧ dx^{2} + dα_{3} ∧ dx^{3} (a 2-form)
Example:
α = ydx + z^{2}dy + 0dz
dα = (dy ∧ dx) + 2z(dz ∧ dy)
= −(dx ∧ dy) + 0(dx ∧ dz) − 2z(dy ∧ dz)
= −2z(dy ∧ dz) − (dx ∧ dy)
The Divergence (∇.β)

dβ = dβ_{12} ∧ dx^{1} ∧ dx^{2} + dβ_{23} ∧ dx^{2} ∧ dx^{3}
+ dβ_{13} ∧ dx^{1} ∧ dx^{3} (a 3-form)
Example:
β = x^{2}(dy ∧ dz) + y^{2}(dz ∧ dx) + z^{2}(dx ∧ dy)
dβ = 2x(dx ∧ dy ∧ dz) + 2y(dy ∧ dz ∧ dx)
+ 2z(dz ∧ dx ∧ dy)
= (2x + 2y + 2z)(dx ∧ dy ∧ dz)
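The coefficient 2x + 2y + 2z is just the divergence of F = (x², y², z²); a finite-difference sketch (step size and names are our choices):

```python
# Central-difference check that div F = 2x + 2y + 2z for
# F = (x^2, y^2, z^2), matching the (dx ∧ dy ∧ dz) coefficient in dβ.
def F(x, y, z):
    return (x * x, y * y, z * z)

def divergence(f, x, y, z, h=1e-5):
    # dF1/dx + dF2/dy + dF3/dz via central differences
    return ((f(x + h, y, z)[0] - f(x - h, y, z)[0]) +
            (f(x, y + h, z)[1] - f(x, y - h, z)[1]) +
            (f(x, y, z + h)[2] - f(x, y, z - h)[2])) / (2 * h)

print(divergence(F, 1.0, 2.0, 3.0))  # ~ 12.0 = 2(1) + 2(2) + 2(3)
```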
Note that:
∇ x ∇f = 0
And, for a 1-form α:
∇.(∇ x α) = 0
Integration

In multivariable calculus we have:
∫∫f(x^{1},x^{2})dx^{1}dx^{2} = ∫∫f(x^{1'},x^{2'})J dx^{1'}dx^{2'}
where J = det(∂x^{μ}/∂x^{μ'}) is the determinant of the
Jacobian of the transformation between the 2
coordinate systems that keeps the area elements
equal.
From before we had that:
(dx^{1} ∧ dx^{2}) = det(∂x^{μ}/∂x^{μ'})(dx^{1'} ∧ dx^{2'})
The metric transforms as:
g_{μ'ν'} = (∂x^{μ}/∂x^{μ'})(∂x^{ν}/∂x^{ν'})g_{μν}
det(g_{μ'ν'}) = (det(∂x^{μ}/∂x^{μ'}))^{2}det(g_{μν})
In Euclidean space det(g_{μν}) = 1.
Therefore,
√(det(g_{μ'ν'})) = det(∂x^{μ}/∂x^{μ'})
Therefore,
(dx^{1} ∧ dx^{2}) = √(det(g_{μ'ν'}))(dx^{1'} ∧ dx^{2'})
= √(g')(dx^{1'} ∧ dx^{2'}) (g' = det(g_{μ'ν'}))
If we make the identification:
dx^{1}dx^{2} <−> √(g')(dx^{1'} ∧ dx^{2'})
it is easy to see that integration under a change
of coordinates is properly understood as the
integration of the volume (top) form!
Generalized Stokes' Theorem

M = manifold, ∂M = boundary of the manifold.
∫_{∂M} α = ∫_{M} dα
∂M is of dimension p and α is a p-form.
M is of dimension p + 1 and dα is a (p + 1)-form.
Fundamental Theorem of Calculus (dim M = 1):
f(b) − f(a) = ∫_{a}^{b} df = ∫_{C} ∇f.dx
Stokes' Theorem (dim M = 2):
∫_{∂S} F.dr = ∫∫_{S} (∇ x F).dA
Gauss' Divergence Theorem (dim M = 3):
∫∫_{∂V} F.dA = ∫∫∫_{V} (∇.F)dV
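The lowest-dimensional case is easy to sanity-check numerically; a sketch verifying the Fundamental Theorem of Calculus with a midpoint-rule integral (f = x³ is an arbitrary choice):

```python
# ∫_[a,b] df = f(b) - f(a): the simplest instance of the generalized
# Stokes' theorem, checked with a midpoint-rule quadrature.
def f(x):
    return x ** 3

def df(x):          # f'(x)
    return 3 * x ** 2

a, b, n = 1.0, 2.0, 100000
h = (b - a) / n
integral = sum(df(a + (i + 0.5) * h) for i in range(n)) * h
boundary = f(b) - f(a)
print(boundary, abs(integral - boundary) < 1e-6)  # 7.0 True
```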
Role of the Metric

If V has an inner product defined on it then an
inner product is induced on V* and vice versa.
Consider the following:
v = Σ_{i} v^{i}∂/∂x^{i} (tangent vector)
α = Σ_{j} α_{j}dx^{j} (cotangent vector)
v.α = (Σ_{i} v^{i}∂/∂x^{i}).(Σ_{j} α_{j}dx^{j})
The metric tensor has the effect of converting one
of the input 1-vectors to a 1-form. Thus, if we have
2 vectors, u and v, the 'tensor machine' produces:
g(v,u) → (v^{i}∂/∂x^{i})(α_{j}dx^{j}) = v^{i}α_{i}
What we want is for the 1-form to 'eat' the 1-vector
and produce the same result as the 1-vector 'eating'
the original function. This can be expressed as:
df(v) = v(f) ∈ ℝ
f is a scalar function so df is a 1-form. We can
now demonstrate this 'eating' process as follows,
where the bracketed factor is the 1-form df
(e.g. f = x^{2} → df = 2xdx):
[(∂f/∂x^{i})dx^{i}](v^{j}∂_{j})
= v^{j}(∂f/∂x^{i})dx^{i}(∂_{j})
= v^{j}(∂f/∂x^{i})δ^{i}_{j}
= v^{i}(∂f/∂x^{i})
= v^{i}∂_{i}(f)
= v(f)
The last term is the directional derivative. It is
a scalar. It should not be confused with the ordinary
derivative, which is interpreted as the gradient of
the function at a point. The gradient of a scalar
function is a vector. However, the two are related:

Digression:
The directional derivative represents the instantaneous
rate of change in f along a curve in the direction
of the unit tangent vector. It is a scalar. The
directional derivative, D, is related to the gradient
of a scalar function as follows:
D_{v}f = ∇f.v
Example:
v = (√2/2, √2/2) (unit tangent vector)
f(x,y) = x^{2}y
∇f = (2xy, x^{2})
D_{v}f = ∇f.v = 2xy(√2/2) + x^{2}(√2/2)
At the point (1,1) we get:
D_{v}f = 2(√2/2) + 1(√2/2) = √2 + √2/2 = 3√2/2, a scalar.
The analogy between the directional derivative and
the ordinary derivative is:
f'(a) = lim_{h→0} (f(a + h) − f(a))/h
D_{v}f(a) = lim_{h→0} (f(a + hv) − f(a))/h
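The worked example above can be reproduced numerically; a sketch comparing ∇f.v with the limit definition (the step h and the tolerance are our choices):

```python
import math

# Directional derivative of f(x, y) = x^2 y at (1, 1) along the unit
# vector v = (sqrt(2)/2, sqrt(2)/2).
def f(x, y):
    return x * x * y

vx = vy = math.sqrt(2) / 2
ax, ay = 1.0, 1.0

grad = (2 * ax * ay, ax * ax)                  # grad f = (2xy, x^2)
from_gradient = grad[0] * vx + grad[1] * vy    # 3*sqrt(2)/2 ~ 2.1213

h = 1e-6                                       # forward difference
from_limit = (f(ax + h * vx, ay + h * vy) - f(ax, ay)) / h
print(abs(from_gradient - from_limit) < 1e-4)  # True
```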
Note: The gradient of a vector is a rank 2 tensor.
v = v_{1}i + v_{2}j + v_{3}k
∇v = i(∂v/∂x) + j(∂v/∂y) + k(∂v/∂z)
= i(∂/∂x)(v_{1}i + v_{2}j + v_{3}k) + ...
= ii(∂v_{1}/∂x) + ij(∂v_{2}/∂x) + ik(∂v_{3}/∂x) + ...
= e_{i}e_{j}(∂v^{j}/∂x^{i})
  | ∂v^{1}/∂x^{1} ∂v^{2}/∂x^{1} ∂v^{3}/∂x^{1} |
= | ∂v^{1}/∂x^{2} ∂v^{2}/∂x^{2} ∂v^{3}/∂x^{2} |
  | ∂v^{1}/∂x^{3} ∂v^{2}/∂x^{3} ∂v^{3}/∂x^{3} |
= the Jacobian matrix

The Inner Product

Consider the 2D case:
(e_{1} ∧ e_{2}).(e_{1} ∧ e_{2})
= (1/2)(e_{1}e_{2} − e_{2}e_{1}).(1/2)(e_{1}e_{2} − e_{2}e_{1})
= (1/4)(e_{1}e_{2} + e_{1}e_{2}).(e_{1}e_{2} + e_{1}e_{2})
= (1/4)(4)(e_{1}e_{2}).(e_{1}e_{2})
= (e_{1}.e_{1})(e_{2}.e_{2})
This is the product of eigenvalues, which is equivalent
to the determinant of the matrix
| e_{1}.e_{1}     0      |
|     0      e_{2}.e_{2} |
i.e.
= det(g_{ij})
= (−1)^{D−S}
where S is the number of + elements in the signature
of the metric, (# of + elements, # of − elements).
For Euclidean D = 2, g_{ij} = diag(1,1), the signature
is (2,0), S = 2, and det(g_{ij}) = (−1)^{0} = +1.
We use these results in the next section.
The Hodge Star (*) Operator

The Hodge Star Operator on a D-dimensional manifold
is a map from p-forms to (D − p)-forms:
*: Λ^{p} <−> Λ^{D−p}
The meaning of 'dual' in this context does NOT mean
dual as in vectors → dual vectors.
If V has an inner product defined on it then an
inner product is induced on Λ^{p}(V) and vice versa.
If V* has an inner product defined on it then an
inner product is induced on Λ^{p}(V*) and vice versa.
Schematically,
                  metric
      V        <-------->        V*

                  metric
   Λ^{p}(V)    <-------->    Λ^{p}(V*)
      |                          |
      | Hodge *                  | Hodge *
      v                          v
  Λ^{D−p}(V)   <-------->   Λ^{D−p}(V*)
                  metric
Consider:
α, β ∈ Λ^{p}
Now construct the inner product:
α.β = Y, Y ∈ ℝ
Now take,
α ∈ Λ^{p} and γ ∈ Λ^{D−p}
Write,
α ∧ γ = XE
α ∧ γ ∈ Λ^{p} ∧ Λ^{D−p} = Λ^{D}, which has dimension 1, so X is a
single number ∈ ℝ. E is the unit volume form
dx^{1} ∧ ... ∧ dx^{D}, which also lives in Λ^{D} and so is
also 1-dimensional. Because X and Y are single numbers
we can equate them. Therefore,
α ∧ γ = (α.β)E
When this is true, γ = *β, i.e.
α ∧ *β = (α.β)E
with α ∈ Λ^{p}, *β ∈ Λ^{D−p}, and α ∧ *β, E ∈ Λ^{D}.
Examples:
Consider *1, where 1 is the basis of the scalars, Λ^{0}.
For D = 3, *1 ∈ Λ^{3−0} = Λ^{3} and:
α ∧ *1 = (α.1)E
α is a scalar so α ∧ *1 = α(*1). Therefore,
α(*1) = αE
So,
*1 = e_{1} ∧ e_{2} ∧ e_{3}
Now consider *E. For D = 3, *E ∈ Λ^{3−3} = Λ^{0} and:
E ∧ *E = (E.E)E
Since *E is a scalar, E ∧ *E = (*E)E, and E.E = 1. So,
*(e_{1} ∧ e_{2} ∧ e_{3}) = 1
Now consider *e_{2}. For D = 3, *e_{2} ∈ Λ^{3−1} = Λ^{2} and:
e_{2} ∧ *e_{2} = (e_{2}.e_{2})E = E
Now e_{2} ∧ (e_{1} ∧ e_{3}) = −(e_{1} ∧ e_{2} ∧ e_{3}) = −E.
Comparing the two sides gives
*e_{2} = −(e_{1} ∧ e_{3})
As a final example consider *(e_{1} ∧ e_{3}).
For D = 3, *(e_{1} ∧ e_{3}) ∈ Λ^{3−2} = Λ^{1} and:
(e_{1} ∧ e_{3}) ∧ *(e_{1} ∧ e_{3}) = (e_{1}.e_{1})(e_{3}.e_{3})E = E
Now (e_{1} ∧ e_{3}) ∧ e_{2} = −(e_{1} ∧ e_{2} ∧ e_{3}) = −E.
Comparing the two sides gives
*(e_{1} ∧ e_{3}) = −e_{2}
In D dimensions we can generalize these operations
as follows:
Let I = i_{1} ... i_{p} and H = j_{1} ... j_{D−p}. I spans the
space Λ^{p} and H spans the space Λ^{D−p}. Therefore,
together I and H span the entire space Λ^{D}. We
can then write:
*[e^{i1} ∧ ... ∧ e^{ip}]
= ε_{I,H}[(e^{i1}.e^{i1}) ... (e^{ip}.e^{ip})]e^{j1} ∧ ... ∧ e^{jD−p}
Example:
For D = 3
*e_{1} = ε_{1,23}(e_{1}.e_{1})e_{2} ∧ e_{3}
= e_{2} ∧ e_{3}
*e_{2} = ε_{2,13}(e_{2}.e_{2})e_{1} ∧ e_{3}
= −(e_{1} ∧ e_{3})
Geometrically, the Hodge dual captures the idea
that planes can be identified with their normals
and so forth.
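The ε_{I,H} prescription is easy to code for a Euclidean metric (all e_{i}.e_{i} = 1); a sketch with our own helper names:

```python
# Hodge duals of Euclidean basis blades in D = 3 from the Levi-Civita
# sign: *e_I = ε_{I,H} e_H, with H the complementary (sorted) indices.
def levi_civita_sign(seq):
    # +1/-1 for even/odd permutations; 0 if an index repeats
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    sign = 1
    for i in range(len(seq)):            # bubble sort, counting swaps
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def hodge_dual(I, D=3):
    H = tuple(i for i in range(1, D + 1) if i not in I)
    return levi_civita_sign(I + H), H

print(hodge_dual((1,)))    # (1, (2, 3))
print(hodge_dual((2,)))    # (-1, (1, 3))
print(hodge_dual((1, 3)))  # (-1, (2,))
```

The signs reproduce *e_{1} = e_{2} ∧ e_{3}, *e_{2} = −(e_{1} ∧ e_{3}) and *(e_{1} ∧ e_{3}) = −e_{2}.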
Grassmann Numbers

Grassmann numbers, θ, are anticommuting numbers that
are individual elements of the exterior algebra.
Crudely speaking, they can be viewed as differential
forms without the notion of a manifold or exterior
derivative.
Grassmann numbers are used in Physics to describe
Fermionic fields where the particles have antisymmetric
wavefunctions.
Geometric (aka Clifford) Algebra

The fundamental operation in Geometric Algebra is
the Geometric Product. It is defined as:
uv = u.v + u ∧ v
u.v = (1/2)(uv + vu)
u ∧ v = (1/2)(uv − vu)
Geometric algebra can be regarded as an extension
of exterior algebra to include scalars and vectors.
The scalars and vectors have their usual interpretation
and live in subspaces of the geometric algebra, whereas
the wedge products are multivectors that live in the
exterior space, Λ(V), of a vector space, V.
Axioms:
(uv)w = u(vw)
u(v + w) = uv + uw
2 Dimensions

Consider the basis vectors e_{1} and e_{2}:
e_{1}.e_{1} = (1/2)(e_{1}e_{1} + e_{1}e_{1}) = 1
e_{1} ∧ e_{1} = (1/2)(e_{1}e_{1} − e_{1}e_{1}) = 0
e_{1}e_{1} = e_{1}.e_{1} + e_{1} ∧ e_{1} = 1 + 0 = 1
Likewise,
e_{2}e_{2} = 1
e_{1} ∧ e_{2} = (1/2)(e_{1}e_{2} − e_{2}e_{1})
= −(1/2)(e_{2}e_{1} − e_{1}e_{2})
= −(e_{2} ∧ e_{1})
e_{1}e_{2} = e_{1}.e_{2} + e_{1} ∧ e_{2}
= e_{1} ∧ e_{2}
e_{2}e_{1} = e_{2} ∧ e_{1}
= −(e_{1} ∧ e_{2})
= −e_{1}e_{2}
Summary:
e_{μ}e_{μ} = 1
e_{μ}e_{ν} = −e_{ν}e_{μ} (μ ≠ ν)
e_{μ} ∧ e_{μ} = 0
e_{μ} ∧ e_{ν} = −(e_{ν} ∧ e_{μ})
uv = u.v + u ∧ v
uv = (u_{1}e_{1} + u_{2}e_{2})(v_{1}e_{1} + v_{2}e_{2})
= u_{1}v_{1}e_{1}e_{1} + u_{2}v_{2}e_{2}e_{2} + u_{1}v_{2}e_{1}e_{2} + u_{2}v_{1}e_{2}e_{1}
= u_{1}v_{1} + u_{2}v_{2} + e_{1}e_{2}(u_{1}v_{2} − u_{2}v_{1})
= u.v + e_{1}e_{2}(u_{1}v_{2} − u_{2}v_{1})
Therefore,
e_{1}e_{2}(u_{1}v_{2} − u_{2}v_{1}) ≡ u ∧ v
Proof:
u ∧ v = u_{1}v_{1}(e_{1} ∧ e_{1}) + u_{2}v_{2}(e_{2} ∧ e_{2})
+ u_{1}v_{2}(e_{1} ∧ e_{2}) + u_{2}v_{1}(e_{2} ∧ e_{1})
= u_{1}v_{2}(e_{1} ∧ e_{2}) + u_{2}v_{1}(e_{2} ∧ e_{1})
= (u_{1}v_{2} − u_{2}v_{1})(e_{1} ∧ e_{2})
This is a bivector. Bivectors are often referred
to as axial or pseudovectors. Geometrically, bivectors
such as e_{1}e_{2} represent oriented areas.
The algebra should be closed, meaning that the
product of 2 elements is another element. If we
consider all possible combinations of e_{1} and
e_{2} we obtain a 4-dimensional space spanned by
{1,e_{1},e_{2},e_{1} ∧ e_{2}}.
Scalars (grade 0): 1
Vectors (grade 1): e_{1}, e_{2}
Bivectors (grade 2): e_{1} ∧ e_{2}
In Clifford Algebra dimensions are referred to
as 'grades'.
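The multiplication rules above are mechanical enough to code; a sketch of the geometric product on the basis blades of Cl₂ (the dict-of-blades representation and helper names are ours):

```python
# Geometric product in Cl(2): elements are dicts over basis blades
# {(): 1, (1,): e1, (2,): e2, (1, 2): e1e2}. The rules e_ie_i = 1 and
# e_ie_j = -e_je_i are applied while sorting the index list.
def blade_mul(a, b):
    idx, sign = list(a + b), 1
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(idx) - 1:
            if idx[i] > idx[i + 1]:
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign, changed = -sign, True
            elif idx[i] == idx[i + 1]:
                del idx[i:i + 2]        # e_i e_i = 1
                changed = True
            else:
                i += 1
    return sign, tuple(idx)

def gp(x, y):
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

u = {(1,): 1, (2,): 2}   # u = e1 + 2e2
v = {(1,): 3, (2,): 4}   # v = 3e1 + 4e2
print(gp(u, v))                        # {(): 11, (1, 2): -2}
print(gp({(1, 2): 1}, {(1, 2): 1}))    # {(): -1}
```

The first output reproduces uv = u.v + (u_{1}v_{2} − u_{2}v_{1})e_{1}e_{2} = 11 − 2e_{1}e_{2}; the second shows (e_{1}e_{2})^{2} = −1, the identification with i made in the next section.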
Relation to Complex Numbers

(e_{1}e_{2})^{2} = e_{1}e_{2}e_{1}e_{2}
= −e_{1}e_{1}e_{2}e_{2}
= −1
We can therefore identify e_{1}e_{2} with i since i^{2} = −1.
We say that the even subalgebra of Cl_{2}, spanned by
{1, e_{1}e_{2}}, is isomorphic to the complex numbers.
3 Dimensions

We can extend the basis vector constructions to
any number of dimensions. However, we need to be
careful about how these rules are applied. For
example,
e_{1}e_{2}e_{1}e_{3} = e_{1} ∧ e_{2} ∧ e_{1} ∧ e_{3} = −(e_{1} ∧ e_{1} ∧ e_{2} ∧ e_{3})
= −(0 ∧ e_{2} ∧ e_{3})
= 0
is not a valid construction. To do this correctly
we need to go back to the formula uv = u.v + u ∧ v
and write:
e_{1}e_{2}e_{1}e_{3} = −(e_{1}e_{1})(e_{2}e_{3}) = −uv with u = e_{1}e_{1} = 1, v = e_{2}e_{3}
Then:
−uv = −[(e_{2}.e_{3}) + (e_{2} ∧ e_{3})]
= −[0 + (e_{2} ∧ e_{3})]
= −(e_{2} ∧ e_{3})
Of course, the shorter calculation is just to
simplify the original expression before constructing
the wedge product. Thus,
e_{1}e_{2}e_{1}e_{3} = −e_{1}e_{1}e_{2}e_{3} = −e_{2}e_{3} = −(e_{2} ∧ e_{3})
Again, we look for the closure requirement by
considering all possible combinations of e_{1}, e_{2}
and e_{3}:
(e_{1}e_{2})e_{1} = −e_{2}
(e_{1}e_{2})e_{2} = e_{1}
(e_{1}e_{2})e_{3} = e_{1}e_{2}e_{3}
(e_{2}e_{3})e_{1} = e_{2}e_{3}e_{1} = e_{1}e_{2}e_{3}
(e_{2}e_{3})e_{2} = −e_{3}
(e_{2}e_{3})e_{3} = e_{2}
(e_{1}e_{3})e_{1} = −e_{3}
(e_{1}e_{3})e_{2} = e_{1}e_{3}e_{2} = −(e_{1}e_{2}e_{3})
(e_{1}e_{3})e_{3} = e_{1}
Therefore we obtain an 8-dimensional space spanned
by {1,e_{1},e_{2},e_{3},e_{1} ∧ e_{2},e_{2} ∧ e_{3},e_{1} ∧ e_{3},e_{1} ∧ e_{2} ∧ e_{3}}
Scalars (grade 0): 1
Vectors (grade 1): e_{1}, e_{2}, e_{3}
Bivectors (grade 2): e_{1} ∧ e_{2}, e_{2} ∧ e_{3}, e_{1} ∧ e_{3}
Trivectors (grade 3): e_{1} ∧ e_{2} ∧ e_{3}
Trivectors are often referred to as pseudoscalars.
Geometrically, pseudoscalars such as e_{1}e_{2}e_{3} represent
oriented volumes.
It is worth noting that the number of basis elements
of each grade follows Pascal's triangle.
0: 1
1: 1 1
2: 1 2 1
3: 1 3 3 1
4: 1 4 6 4 1
5: 1 5 10 10 5 1
Which is tantamount to saying that the dimension of
the Clifford algebra generated by a vector space of
dimension, D, is given by 2^{D}.
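The Pascal's-triangle claim can be confirmed directly (a sketch using the standard-library `math.comb`):

```python
from math import comb

# Rows of Pascal's triangle and their sums: the total number of basis
# blades of the Clifford algebra of R^D is 2^D.
rows = {D: [comb(D, p) for p in range(D + 1)] for D in range(6)}
sums = {D: sum(row) for D, row in rows.items()}
print(rows[3], sums[3])  # [1, 3, 3, 1] 8
```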
Relation to Quaternions

i^{2} = j^{2} = k^{2} = ijk = −1
With i ≡ e_{1}e_{2}, j ≡ e_{2}e_{3}, k ≡ e_{1}e_{3}:
i^{2} = e_{1}e_{2}e_{1}e_{2} = −e_{1}e_{1}e_{2}e_{2} = −1
j^{2} = e_{2}e_{3}e_{2}e_{3} = −e_{2}e_{2}e_{3}e_{3} = −1
k^{2} = e_{1}e_{3}e_{1}e_{3} = −e_{1}e_{1}e_{3}e_{3} = −1
ijk = e_{1}e_{2}e_{2}e_{3}e_{1}e_{3}
= e_{1}e_{3}e_{1}e_{3}
= −e_{1}e_{1}e_{3}e_{3}
= −1
We say that the even subalgebra of Cl_{3}, spanned by
{1, e_{1}e_{2}, e_{2}e_{3}, e_{1}e_{3}}, is isomorphic to the quaternions.
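The quaternion relations can be verified by multiplying basis blades mechanically; a sketch (gnome-sort sign counting with e_{i}e_{i} = 1; the name `bmul` is ours, and i = e₁e₂, j = e₂e₃, k = e₁e₃ follow the identification used above):

```python
# Multiply two basis blades of Cl(3): sort the combined index list,
# flipping the sign on each adjacent swap, and cancel e_i e_i = 1.
def bmul(a, b):
    idx, sign = list(a + b), 1
    i = 0
    while i < len(idx) - 1:
        if idx[i] > idx[i + 1]:
            idx[i], idx[i + 1] = idx[i + 1], idx[i]
            sign = -sign
            i = max(i - 1, 0)
        elif idx[i] == idx[i + 1]:
            del idx[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(idx)

i, j, k = (1, 2), (2, 3), (1, 3)
print(bmul(i, i), bmul(j, j), bmul(k, k))  # all (-1, ()): squares = -1
s1, ij = bmul(i, j)
s2, ijk = bmul(ij, k)
print(s1 * s2, ijk)                        # -1 (): ijk = -1
```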
Examples in 3D

Vector: u = (3e_{2} + 4e_{3})
Bivector: v = (2e_{1}e_{2} + 3e_{2}e_{3} + 4e_{1}e_{3})
uv = (3e_{2} + 4e_{3})(2e_{1}e_{2} + 3e_{2}e_{3} + 4e_{1}e_{3})
= 6e_{2}e_{1}e_{2} + 9e_{2}e_{2}e_{3} + 12e_{2}e_{1}e_{3} + 8e_{3}e_{1}e_{2} + 12e_{3}e_{2}e_{3} + 16e_{3}e_{1}e_{3}
= −6e_{1} + 9e_{3} − 12e_{1}e_{2}e_{3} + 8e_{1}e_{2}e_{3} − 12e_{2} − 16e_{1}
= −22e_{1} − 12e_{2} + 9e_{3} − 4e_{1}e_{2}e_{3}
Compare this result to the Grassmann algebra, where
we cannot simplify the basis vectors, e_{1} ... :
u ∧ v = (3e_{2} + 4e_{3}) ∧ (2e_{1}e_{2} + 3e_{2}e_{3} + 4e_{1}e_{3})
= (3e_{2} ∧ 2e_{1}e_{2}) + (3e_{2} ∧ 3e_{2}e_{3}) + (3e_{2} ∧ 4e_{1}e_{3})
+ (4e_{3} ∧ 2e_{1}e_{2}) + (4e_{3} ∧ 3e_{2}e_{3}) + (4e_{3} ∧ 4e_{1}e_{3})
The terms with a repeated basis vector vanish, leaving:
= 12(e_{2} ∧ e_{1} ∧ e_{3}) + 8(e_{3} ∧ e_{1} ∧ e_{2})
= −12(e_{1} ∧ e_{2} ∧ e_{3}) + 8(e_{1} ∧ e_{2} ∧ e_{3})
= −4(e_{1} ∧ e_{2} ∧ e_{3})
Multivector: u = (5 + 2e_{1}e_{2})
Multivector: v = (2 + 2e_{1} + e_{1}e_{3})
uv = (5 + 2e_{1}e_{2})(2 + 2e_{1} + e_{1}e_{3})
= 10 + 10e_{1} + 5e_{1}e_{3} + 4e_{1}e_{2} + 4e_{1}e_{2}e_{1} + 2e_{1}e_{2}e_{1}e_{3}
= 10 + 10e_{1} − 4e_{2} + 5e_{1}e_{3} + 4e_{1}e_{2} − 2e_{2}e_{3}
= scalar + vector + bivectors
We see that the geometric product of a bivector
with a bivector does not expand to a quadrivector,
but rather it reduces to a bivector.
This 'wrapping' is a form of closure in that runaway
expansion into ever higher dimensions is prevented.
We can see this in 3D as follows:
(vector)(vector) = (e_{1})(e_{2}) = e_{1}e_{2} (bivector)
(vector)(bivector) = (e_{1})(e_{2}e_{3}) = e_{1}e_{2}e_{3} (trivector)
(vector)(trivector) = (e_{1})(e_{1}e_{2}e_{3}) = e_{2}e_{3} (bivector)
(bivector)(bivector) = (e_{1}e_{2})(e_{1}e_{3}) = −e_{2}e_{3} (bivector)
(bivector)(trivector) = (e_{1}e_{2})(e_{1}e_{2}e_{3}) = −e_{3} (vector)
Multivector: u = (5 + 2e_{1}e_{2})
Multivector: v = (2 + e_{1}e_{2}e_{3})
uv = (5 + 2e_{1}e_{2})(2 + e_{1}e_{2}e_{3})
= 10 + 5e_{1}e_{2}e_{3} + 4e_{1}e_{2} + 2e_{1}e_{2}e_{1}e_{2}e_{3}
= 10 + 5e_{1}e_{2}e_{3} + 4e_{1}e_{2} − 2e_{3}
Now, for an orthogonal basis e_{1}e_{2} = e_{1} ∧ e_{2} etc.
uv = 10 + 5(e_{1} ∧ e_{2} ∧ e_{3}) + 4(e_{1} ∧ e_{2}) − 2e_{3}
= 10 − 2e_{3} + 4(e_{1} ∧ e_{2}) + 5(e_{1} ∧ e_{2} ∧ e_{3})
= scalar + vector + bivector + trivector
Matrix Representation

Consider the Pauli matrices.
σ_{1}σ_{2} − σ_{2}σ_{1} = 2iσ_{3}
| 0 1 || 0 −i |   | 0 −i || 0 1 |   | 2i   0 |
| 1 0 || i  0 | − | i  0 || 1 0 | = |  0 −2i |
If we let σ_{1} ≡ e_{1}, σ_{2} ≡ e_{2} and σ_{3} ≡ e_{3}:
(1/2)(e_{1}e_{2} − e_{2}e_{1}) = e_{1} ∧ e_{2}
≡ e_{1}e_{2}e_{3}e_{3}
≡ Ie_{3}
where I = e_{1}e_{2}e_{3}.
Or,
(e_{1}e_{2} − e_{2}e_{1}) = 2Ie_{3}
The Clifford Algebra for the Pauli matrices is
often written as:
{σ_{μ},σ_{ν}} = 2η_{μν}I
Which yields:
σ_{μ}σ_{ν} + σ_{ν}σ_{μ} = 2η_{μν}I
μ = ν => σ_{1}^{2} = σ_{2}^{2} = σ_{3}^{2} = 1
μ ≠ ν => σ_{1}σ_{2} + σ_{2}σ_{1} = 2η_{12}I = 0
∴ σ_{1}σ_{2} = −σ_{2}σ_{1}
It should not be surprising that the Dirac matrices
also obey the Clifford algebra:
{γ_{μ},γ_{ν}} = 2η_{μν}I
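The matrix identities quoted above can be checked with plain 2x2 complex arithmetic (tuple-of-tuples matrices; helper names are ours):

```python
# Pauli matrices as tuples of tuples of complex numbers
s1 = ((0, 1), (1, 0))
s2 = ((0, -1j), (1j, 0))
s3 = ((1, 0), (0, -1))

def mmul(a, b):
    # 2x2 matrix product
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def madd(a, b, sign=1):
    # entrywise a + sign*b
    return tuple(tuple(a[i][j] + sign * b[i][j] for j in range(2))
                 for i in range(2))

comm = madd(mmul(s1, s2), mmul(s2, s1), sign=-1)
print(comm)   # equals 2i*sigma3: diagonal entries 2j and -2j

anti = madd(mmul(s1, s2), mmul(s2, s1))
print(anti)   # the zero matrix: {sigma1, sigma2} = 0
```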