Redshift Academy


Astronomy

- Celestial Coordinates
- Celestial Navigation
- Distance Units
- Location of North and South Celestial Poles

Chemistry

- Avogadro's Number
- Balancing Chemical Equations
- Stoichiometry
- The Periodic Table

Classical Physics

- Archimedes Principle
- Bernoulli Principle
- Blackbody (Cavity) Radiation and Planck's Hypothesis
- Center of Mass Frame
- Comparison Between Gravitation and Electrostatics
- Compton Effect
- Coriolis Effect
- Cyclotron Resonance
- Dispersion
- Doppler Effect
- Double Slit Experiment
- Elastic and Inelastic Collisions
- Electric Fields
- Error Analysis
- Fick's Law
- Fluid Pressure
- Gauss's Law of Universal Gravity
- Gravity - Force and Acceleration
- Hooke's Law
- Ideal and Non-Ideal Gas Laws (van der Waals)
- Impulse Force
- Inclined Plane
- Inertia
- Kepler's Laws
- Kinematics
- Kinetic Theory of Gases
- Kirchhoff's Laws
- Laplace's and Poisson's Equations
- Lorentz Force Law
- Maxwell's Equations
- Moments and Torque
- Nuclear Spin
- One Dimensional Wave Equation
- Pascal's Principle
- Phase and Group Velocity
- Planck Radiation Law
- Poiseuille's Law
- Radioactive Decay
- Refractive Index
- Rotational Dynamics
- Simple Harmonic Motion
- Specific Heat, Latent Heat and Calorimetry
- Stefan-Boltzmann Law
- The Gas Laws
- The Laws of Thermodynamics
- The Zeeman Effect
- Wien's Displacement Law
- Young's Modulus

Climate Change

- Keeling Curve

Cosmology

- Penrose Diagrams
- Baryogenesis
- Cosmic Background Radiation and Decoupling
- CPT Symmetries
- Dark Matter
- Friedmann-Robertson-Walker Equations
- Geometries of the Universe
- Hubble's Law
- Inflation Theory
- Introduction to Black Holes
- Olbers' Paradox
- Planck Units
- Stephen Hawking's Last Paper
- Stephen Hawking's PhD Thesis
- The Big Bang Model

Finance and Accounting

- Amortization
- Annuities
- Brownian Model of Financial Markets
- Capital Structure
- Dividend Discount Formula
- Lecture Notes on International Financial Management
- NPV and IRR
- Periodically and Continuously Compounded Interest
- Repurchase versus Dividend Analysis

General Relativity

- Accelerated Reference Frames - Rindler Coordinates
- Catalog of Spacetimes
- Curvature and Parallel Transport
- Dirac Equation in Curved Spacetime
- Einstein's Field Equations
- Geodesics
- Gravitational Time Dilation
- Gravitational Waves
- One-forms
- Quantum Gravity
- Relativistic, Cosmological and Gravitational Redshift
- Ricci Decomposition
- Ricci Flow
- Stress-Energy Tensor
- Stress-Energy-Momentum Tensor
- Tensors
- The Area Metric
- The Equivalence Principle
- The Essential Mathematics of General Relativity
- The Induced Metric
- The Metric Tensor
- Vierbein (Frame) Fields
- World Lines Refresher

Lagrangian and Hamiltonian Mechanics

- Classical Field Theory
- Euler-Lagrange Equation
- Ex: Newtonian, Lagrangian and Hamiltonian Mechanics
- Hamiltonian Formulation
- Liouville's Theorem
- Symmetry and Conservation Laws - Noether's Theorem

Macroeconomics

- Lecture Notes on International Economics
- Lecture Notes on Macroeconomics
- Macroeconomic Policy

Mathematics

- Amplitude, Period and Phase
- Arithmetic and Geometric Sequences and Series
- Asymptotes
- Augmented Matrices and Cramer's Rule
- Basic Group Theory
- Basic Representation Theory
- Binomial Theorem (Pascal's Triangle)
- Building Groups From Other Groups
- Completing the Square
- Complex Numbers
- Composite Functions
- Conformal Transformations
- Conjugate Pair Theorem
- Contravariant and Covariant Components of a Vector
- Derivatives of Inverse Functions
- Double Angle Formulas
- Eigenvectors and Eigenvalues
- Euler Formula for Polyhedrons
- Factoring of a³ ± b³
- Fourier Series and Transforms
- Fractals
- Gauss's Divergence Theorem
- Grassmann and Clifford Algebras
- Heron's Formula
- Index Notation (Tensors and Matrices)
- Inequalities
- Integration By Parts
- Introduction to Conformal Field Theory
- Inverse of a Function
- Law of Sines and Cosines
- Line Integrals, ∮
- Logarithms and Logarithmic Equations
- Matrices and Determinants
- Matrix Exponential
- Mean Value and Rolle's Theorem
- Modulus Equations
- Orthogonal Curvilinear Coordinates
- Parabolas, Ellipses and Hyperbolas
- Piecewise Functions
- Polar Coordinates
- Polynomial Division
- Quaternions 1
- Quaternions 2
- Regular Polygons
- Related Rates
- Sets, Groups, Modules, Rings and Vector Spaces
- Similar Matrices and Diagonalization
- Spherical Trigonometry
- Stirling's Approximation
- Sum and Differences of Squares and Cubes
- Symbolic Logic
- Symmetric Groups
- Tangent and Normal Line
- Taylor and Maclaurin Series
- The Essential Mathematics of Lie Groups
- The Integers Modulo n Under + and x
- The Limit Definition of the Exponential Function
- Tic-Tac-Toe Factoring
- Trapezoidal Rule
- Unit Vectors
- Vector Calculus
- Volume Integrals

Microeconomics

- Marginal Revenue and Cost

Particle Physics

- Feynman Diagrams and Loops
- Field Dimensions
- Helicity and Chirality
- Klein-Gordon and Dirac Equations
- Regularization and Renormalization
- Scattering - Mandelstam Variables
- Spin 1 Eigenvectors
- The Vacuum Catastrophe

Probability and Statistics

- Box and Whisker Plots
- Categorical Data - Crosstabs
- Chebyshev's Theorem
- Chi Squared Goodness of Fit
- Conditional Probability
- Confidence Intervals
- Data Types
- Expected Value
- Factor Analysis
- Hypothesis Testing
- Linear Regression
- Monte Carlo Methods
- Non Parametric Tests
- One-Way ANOVA
- Pearson Correlation
- Permutations and Combinations
- Pooled Variance and Standard Error
- Probability Distributions
- Probability Rules
- Sample Size Determination
- Sampling Distributions
- Set Theory - Venn Diagrams
- Stacked and Unstacked Data
- Stem Plots, Histograms and Ogives
- Survey Data - Likert Item and Scale
- Tukey's Test
- Two-Way ANOVA

Programming and Computer Science

- Hashing
- How this site works ...
- More Programming Topics
- MVC Architecture
- Open Systems Interconnection (OSI) Standard - TCP/IP Protocol
- Public Key Encryption

Quantum Field Theory

- Creation and Annihilation Operators
- Field Operators for Bosons and Fermions
- Lagrangians in Quantum Field Theory
- Path Integral Formulation
- Relativistic Quantum Field Theory

Quantum Mechanics

- Basic Relationships
- Bell's Theorem
- Bohr Atom
- Clebsch-Gordan Coefficients
- Commutators
- Dyson Series
- Electron Orbital Angular Momentum and Spin
- Entangled States
- Heisenberg Uncertainty Principle
- Ladder Operators
- Multi Electron Wavefunctions
- Pauli Exclusion Principle
- Pauli Spin Matrices
- Photoelectric Effect
- Position and Momentum States
- Probability Current
- Schrödinger Equation for Hydrogen Atom
- Schrödinger Wave Equation
- Schrödinger Wave Equation (continued)
- Spin 1/2 Eigenvectors
- The Differential Operator
- The Essential Mathematics of Quantum Mechanics
- The Observer Effect
- The Qubit
- The Schrödinger, Heisenberg and Dirac Pictures
- The WKB Approximation
- Time Dependent Perturbation Theory
- Time Evolution and Symmetry Operations
- Time Independent Perturbation Theory
- Wavepackets

Semiconductor Reliability

- The Weibull Distribution

Solid State Electronics

- Band Theory of Solids
- Fermi-Dirac Statistics
- Intrinsic and Extrinsic Semiconductors
- The MOSFET
- The P-N Junction

Special Relativity

- 4-vectors
- Electromagnetic 4-Potential
- Energy and Momentum, E = mc²
- Lorentz Invariance
- Lorentz Transform
- Lorentz Transformation of the EM Field
- Newton versus Einstein
- Spinors - Part 1
- Spinors - Part 2
- The Lorentz Group
- Velocity Addition

Statistical Mechanics

- Black Body Radiation
- Entropy and the Partition Function
- The Harmonic Oscillator
- The Ideal Gas

String Theory

- Bosonic Strings
- Extra Dimensions
- Introduction to String Theory
- Kaluza-Klein Compactification of Closed Strings
- Strings in Curved Spacetime
- Toroidal Compactification

Superconductivity

- BCS Theory
- Introduction to Superconductors
- Superconductivity (Lectures 1 - 10)
- Superconductivity (Lectures 11 - 20)

Supersymmetry (SUSY) and Grand Unified Theory (GUT)

- Chiral Superfields
- Generators of a Supergroup
- Grassmann Numbers
- Introduction to Supersymmetry
- The Gauge Hierarchy Problem


The Standard Model

- Electroweak Unification (Glashow-Weinberg-Salam)
- Gauge Theories (Yang-Mills)
- Gravitational Force and the Planck Scale
- Introduction to the Standard Model
- Isospin, Hypercharge, Weak Isospin and Weak Hypercharge
- Quantum Flavordynamics and Quantum Chromodynamics
- Special Unitary Groups and the Standard Model - Part 1
- Special Unitary Groups and the Standard Model - Part 2
- Special Unitary Groups and the Standard Model - Part 3
- Standard Model Lagrangian
- The Higgs Mechanism
- The Nature of the Weak Interaction

Topology


Units, Constants and Useful Formulas

- Constants
- Formulas
Last modified: January 19, 2020

Tensors
-------

Not all physical quantities can be represented by scalars, vectors or one-forms (dual vectors). We need something more flexible, and tensors fit the bill. Like vectors, tensors represent physical quantities that are invariant and independent of the coordinate system. However, when they are expressed in terms of a basis they have components that transform. They live in the flat tangent or cotangent space, or in the tensor product of the two.

Tensors can be defined in several different ways. For our purposes they are, equivalently:

1. Multilinear maps that map vectors and dual vectors into the space of real numbers.
2. Elements of tensor products of vector spaces.

Schematically, we can view a tensor 𝕋(_,_,_) as a 'machine' that takes vectors and dual vectors as inputs and outputs real numbers. The number of slots corresponds to the rank of the tensor.

Tensor Components
-----------------

𝕋(_,_,_) is a geometric object. It has no components until a basis is assigned, and the components are basis dependent. Once a basis is assigned, the components can be written as a matrix. We define the basis and dual basis as:

Basis: {e_μ}
Dual basis: {θ^μ}

And, from the discussion on dual spaces, we had:

θ^μ(e_α) = δ^μ_α

Let us see how the machine operates. For argument's sake, let A and B be vectors and C be a dual vector (one-form). Then:

𝕋(A,B,C) ≡ 𝕋(A^μ e_μ, B^ν e_ν, C_γ θ^γ) ≡ A^μ B^ν C_γ 𝕋(e_μ, e_ν, θ^γ)

Now we need to understand what 𝕋(e_μ, e_ν, θ^γ) produces. As stated above, we can also represent our tensor machine as the tensor product of three vectors U, V and W (with W a dual vector).

-----------------------------------------------------
Digression: Any vector is also a tensor. In this case, our machine has only 1 slot. Therefore,

𝕋(_) ≡ V(_)

So,

V(A) = V·A ... a scalar

and,

V(θ^μ) = V^μ with V = V^μ e_μ
-----------------------------------------------------

The tensor product is defined as:

U ⊗ V ⊗ W(_,_,_) = [U(_)][V(_)][W(_)]
                 = [U·_][V·_][W·_] ... 1.
Expanding the tensor product in a basis:

𝕋 = U ⊗ V ⊗ W = U^μ e_μ ⊗ V^ν e_ν ⊗ W_γ θ^γ
              = U^μ V^ν W_γ (e_μ ⊗ e_ν ⊗ θ^γ)
              = T^μν_γ (e_μ ⊗ e_ν ⊗ θ^γ)

So we can conclude:

𝕋 = T^μν_γ (e_μ ⊗ e_ν ⊗ θ^γ)

To get T^μν_γ we switch the bases (i.e. e_μ -> θ^μ, e_ν -> θ^ν and θ^γ -> e_γ) and feed them into equation 1. This leads to:

U ⊗ V ⊗ W(θ^μ, θ^ν, e_γ) = [U·θ^μ][V·θ^ν][W·e_γ]

Now, from the digression above, U·θ^μ is the projection of U onto the respective direction - the component U^μ. Likewise for V and W. Therefore,

U ⊗ V ⊗ W(θ^μ, θ^ν, e_γ) = U^μ V^ν W_γ ≡ T^μν_γ

Therefore, the components of a tensor in a given frame are found by feeding our tensor machine the dual basis and basis vectors associated with that particular frame. The machine acts by multiplying every component of the input vectors and dual vectors by every relevant component of the tensor, then summing to obtain an overall result.

Metric Tensor
-------------

The metric tensor takes as its input a pair of tangent vectors at a point on a manifold and produces a real scalar, in a way that generalizes the dot product of vectors in Euclidean space. Therefore:

g(A,B) = A·B

We can feed g with basis vectors to find its components g_αβ:

g_αβ = g(e_α, e_β) = e_α·e_β

Lorentz frame: considering just the time and x directions,

e_0·e_0 = -1 = η_00
e_0·e_1 =  0 = η_01
e_1·e_1 =  1 = η_11

Similar arguments follow for the y and z directions. This leads to:

g_00 = -1, g_jk = δ_jk => g_αβ = η_αβ

Likewise, the inverse metric is obtained from:

g^αβ = g(θ^α, θ^β) = θ^α·θ^β = η^αβ

and,

g^α_β = g(θ^α, e_β) = θ^α·e_β = δ^α_β

Components as Matrices
----------------------

The components of a rank 2 tensor can be written as an n x n matrix. For a rank 3 tensor we can visualize the components as forming a 3-dimensional array. For rank higher than 3 the structure is difficult to visualize.

Again, considering A and B to be vectors and C to be a dual vector (one-form), we can use equation 1
to write:

U ⊗ V ⊗ W(A,B,C) = [U·A][V·B][W·C]
                 ≡ [U^μ e_μ · A^α e_α][V^ν e_ν · B^β e_β][W_γ θ^γ · C_δ θ^δ]
                 ≡ [U^μ A^α e_μ·e_α][V^ν B^β e_ν·e_β][W_γ C_δ θ^γ·θ^δ]
                 ≡ [g_μα U^μ A^α][g_νβ V^ν B^β][g^γδ W_γ C_δ]
                 ≡ [U^μ A_μ][V^ν B_ν][W_γ C^γ]
                 ≡ T^μν_γ A_μ B_ν C^γ
                 ≡ T^μν_γ [e_μ·A][e_ν·B][θ^γ·C]
                 ≡ T^μν_γ [e_μ(A)][e_ν(B)][θ^γ(C)]

Now,

[e_μ(A)][e_ν(B)][θ^γ(C)] = e_μ ⊗ e_ν ⊗ θ^γ(A,B,C)

So,

U ⊗ V ⊗ W(A,B,C) ≡ T^μν_γ (e_μ ⊗ e_ν ⊗ θ^γ)(A,B,C)

which is basically a confirmation of the expansion 𝕋 = T^μν_γ (e_μ ⊗ e_ν ⊗ θ^γ).

Let us summarize what we have so far:

𝕋(A,B,C) = T^μν_γ A_μ B_ν C^γ ∈ ℝ

The left hand side is tensor notation; the right hand side is component (index) notation. The left hand side looks just like a function. The problem with this type of notation is that we cannot immediately see which of the slots are supposed to take one-forms, and which are supposed to take vectors. The right hand side skips the brackets, and instead writes upper and lower indices to denote the places where one-forms and vectors need to go. This notation is more concise, in the sense that it shows exactly what we need to input into our tensor to obtain a certain result.

Empty Slots
-----------

If we leave one or more of the slots empty, the result is not a real number but another tensor which contains exactly the "leftover" empty slots. Consider the following multilinear maps:

𝕋(_,B,C) = T^μν_γ B_ν C^γ = T^μ ≡ V^μ ... a vector
𝕋(A,B,_) = T^μν_γ A_μ B_ν = T_γ ≡ V_γ ... a one-form

Multiplication and Addition Properties
--------------------------------------

Tensors obey the scalar multiplication and addition properties associated with vector spaces:

𝕋(αA + βB, C, D) = α𝕋(A,C,D) + β𝕋(B,C,D)

Tensor Classifications
----------------------

Tensors are classified by the number of upper and lower indices, written as (# of upper indices, # of lower indices):

(0,0) = Scalar.
(1,0) = Contravariant vector.
(0,1) = Covariant vector (dual vector, one-form).
(2,0) = Rank 2 contravariant tensor.
(0,2) = Rank 2 covariant tensor.
(1,1) = Rank 2 mixed tensor.

etc.
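The 'machine' picture and the empty-slot rule can be sketched in a few lines of code. The following is a minimal illustrative sketch of our own (the names `T`, `evaluate` and `empty_first_slot` are not from these notes): a (2,1) tensor in two dimensions, with a nested list `T[mu][nu][gamma]` standing in for T^μν_γ.

```python
# A (2,1) tensor "machine" in n = 2 dimensions.
# Feeding all three slots returns a scalar; leaving the
# first slot empty returns the components of a vector.
n = 2
T = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # T[mu][nu][gamma] ~ T^{mu nu}_gamma

def evaluate(T, A, B, C):
    """Full contraction: T^{mu nu}_gamma A_mu B_nu C^gamma -> scalar."""
    return sum(T[m][u][g] * A[m] * B[u] * C[g]
               for m in range(n) for u in range(n) for g in range(n))

def empty_first_slot(T, B, C):
    """T(_, B, C) = T^{mu nu}_gamma B_nu C^gamma -> a vector V^mu."""
    return [sum(T[m][u][g] * B[u] * C[g]
                for u in range(n) for g in range(n))
            for m in range(n)]

A, B, C = [1, 0], [0, 1], [1, 1]
print(evaluate(T, A, B, C))       # -> 7
print(empty_first_slot(T, B, C))  # -> [7, 15]
```

Note that filling the leftover slot of `empty_first_slot(T, B, C)` with `A` reproduces the scalar, which is exactly the empty-slot rule.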
Raising and Lowering Indices
----------------------------

We can use the metric tensor to raise and lower tensor indices:

T_μνγ = g_μα g_νβ T^αβ_γ

Proof:

U ⊗ V ⊗ W(θ^α, θ^β, e_γ) g(e_α, e_μ) g(e_β, e_ν) = [U·θ^α][V·θ^β][W·e_γ] g_αμ g_βν
                                                  = U^α V^β W_γ g_αμ g_βν
                                                  = U_μ V_ν W_γ
                                                  = T_μνγ

Example
-------

Consider vectors in flat spacetime. Lowering an index with the metric:

η_μν V^ν = V_μ

| -1  0  0  0 | | ct |   | -ct |
|  0  1  0  0 | | x  | = |  x  |
|  0  0  1  0 | | y  |   |  y  |
|  0  0  0  1 | | z  |   |  z  |

Index Contraction
-----------------

𝕋(_,_,_,_) = A ⊗ B ⊗ C ⊗ D(_,_,_,_)

C13[A ⊗ B ⊗ C ⊗ D(_,_,_,_)] ≡ (A·C)(B ⊗ D(_,_))

So basically, the contraction has 'strangled' the 1st and 3rd slots, so to speak. In terms of components, the scalar factor is:

A·C = A^μ e_μ · C^ν e_ν = A^μ C^ν e_μ·e_ν = g_μν A^μ C^ν = A^μ C_μ

and, feeding the remaining machine with the dual basis,

B ⊗ D(θ^β, θ^δ) = [B·θ^β][D·θ^δ] = B^β D^δ

Therefore, the result is:

C13[A ⊗ B ⊗ C ⊗ D(_,_,_,_)] = A^μ C_μ B^β D^δ

When the tensor is interpreted as a matrix, the contraction operation is defined for pairs of indices and is the generalization of the trace. For a (1,1) tensor this looks like:

        | a b c d |
T^μ_ν = | e f g h |
        | i j k l |
        | m n o p |

T^μ_μ = a + f + k + p = T = scalar.

Transformation Properties
-------------------------

T^μν  -> T^μ'ν'  = Λ^μ'_μ Λ^ν'_ν T^μν
T^μ_ν -> T^μ'_ν' = Λ^μ'_μ Λ^ν_ν' T^μ_ν
T_μν  -> T_μ'ν'  = Λ^μ_μ' Λ^ν_ν' T_μν

where Λ^μ'_μ = ∂x^μ'/∂x^μ and Λ^μ_μ' = ∂x^μ/∂x^μ'.

Tensors are characterized by these transformation laws. Therefore we can say that if an object does not transform in this exact way, it cannot be a tensor. This requirement will turn out to be very useful when we look at how derivatives of vectors and tensors transform in curved spacetime.

Matrix Expressions
------------------

In some cases it may be necessary to revert to matrix expressions. Matrix multiplication requires getting repeated indices adjacent to each other (see the notes on Index Notation for more details).
In our notation, the transformation matrix is defined by

x^μ' = Λ^μ'_ν x^ν

Writing Λ for the matrix with entries Λ^μ'_μ, the index positions correspond to:

Λ^μ'_μ = Λ
Λ_μ^μ' = (Λ^μ'_μ)^T = Λ^T
Λ^μ_μ' = (Λ^μ'_μ)^-1 = Λ^-1
Λ_μ'^μ = ((Λ^μ'_μ)^-1)^T = (Λ^-1)^T

Therefore,

T^μν -> T^μ'ν' = Λ^μ'_μ Λ^ν'_ν T^μν = Λ^μ'_μ T^μν Λ^ν'_ν

and putting the repeated indices adjacent (second Λ transposed) gives the matrix equation:

T' = Λ T Λ^T

-----------------------------------------------------
Digression: In the notes on the Lorentz group we had:

T^μν = Λ^μ_σ Λ^ν_ρ T^σρ = Λ^μ_σ T^σρ Λ^ν_ρ

which, with the repeated indices adjacent, is:

T^μν = Λ^μ_σ T^σρ (Λ^T)_ρ^ν = Λ T Λ^T
-----------------------------------------------------

Likewise, for the mixed tensor:

T^μ_ν -> T^μ'_ν' = Λ^μ'_μ Λ^ν_ν' T^μ_ν = Λ^μ'_μ T^μ_ν Λ^ν_ν' => T' = Λ T Λ^-1

and for the covariant tensor:

T_μν -> T_μ'ν' = Λ^μ_μ' Λ^ν_ν' T_μν = Λ^μ_μ' T_μν Λ^ν_ν' => T' = (Λ^-1)^T T Λ^-1

Tensor Equations
----------------

The physical world does not have a pre-determined coordinate system. The laws of physics should be built out of equations which are invariant under transformations connecting local inertial frames. Tensors are the answer!

Abstract Index Notation
-----------------------

Abstract index notation is a mathematical notation that uses indices to indicate tensor types, rather than their components in a particular basis. Abstract indices are denoted by lower case Latin letters (a, b, etc.). They do not represent components with respect to a basis, which are denoted by Greek letters (α, β, etc.). They assign names to the tensor slots and are not numbers, as component indices are. Thus,

𝕋(_,_,_) = T^ab_c

where T^ab_c represents a mixed tensor. And,

𝕋(_,_,_) = T^μν_γ (e_μ ⊗ e_ν ⊗ θ^γ)

where T^μν_γ represents the components of the same mixed tensor in a basis. The idea behind abstract index notation is to have a notation for tensorial expressions that mirrors the expressions for their basis components (had a basis been introduced). Using abstract index notation one can only write down tensorial expressions, since no basis has been specified. Despite these conventions, many texts do not make the distinction between Greek and Latin letters, so the reader needs to study the context to get the correct interpretation.
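The correspondence between the index law and the matrix law for a (2,0) tensor can be checked numerically. Below is a small sketch of our own in pure Python (the variable names are ours); the entries 1.25 and 0.75 make Λ a genuine 1+1-dimensional boost, since 1.25² - 0.75² = 1.

```python
# Check that the component law  T'^{mu nu} = L^mu_s L^nu_r T^{s r}
# agrees with the matrix law    T' = L T L^T.
ch, sh = 1.25, 0.75                # cosh and sinh of a rapidity (ch^2 - sh^2 = 1)
L = [[ch, sh], [sh, ch]]           # boost matrix L^mu_sigma
T = [[1.0, 2.0], [3.0, 4.0]]       # T^{sigma rho}

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# Component law, with the double sum written out explicitly.
Tp_components = [[sum(L[m][s] * L[v][r] * T[s][r]
                      for s in range(2) for r in range(2))
                  for v in range(2)] for m in range(2)]

# Matrix law: repeated indices made adjacent by transposing the second L.
Tp_matrix = matmul(matmul(L, T), transpose(L))

print(Tp_components == Tp_matrix)  # True
```

The transpose is redundant for this particular Λ (a boost matrix is symmetric), but it is kept so the sketch matches the general rule T' = Λ T Λ^T.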
Now let us consider the following tensor equation:

A^a_b = B^a C_bc D^c + E^a_b -> A^a'_b' = B^a' C_b'c' D^c' + E^a'_b'

Transforming each piece:

Λ^a'_a Λ^b_b' A^a_b = (Λ^a'_a B^a)(Λ^b_b' Λ^c_c' C_bc)(Λ^c'_d D^d) + (Λ^a'_a Λ^b_b') E^a_b
                    = Λ^a'_a Λ^b_b' (Λ^c_c' Λ^c'_d) B^a C_bc D^d + Λ^a'_a Λ^b_b' E^a_b
                    = Λ^a'_a Λ^b_b' B^a C_bc D^c + Λ^a'_a Λ^b_b' E^a_b
                    = Λ^a'_a Λ^b_b' (B^a C_bc D^c + E^a_b)

So both sides transform in exactly the same way under a coordinate transformation: if the equation holds in one frame, it holds in every frame.
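The frame-independence of the tensor equation above can be verified numerically. Below is an illustrative sketch of our own (variable names ours; Λ is an arbitrary invertible matrix with integer entries and determinant 1, so all arithmetic is exact): transform B, C, D and E to a primed frame and check that they still satisfy the same equation there.

```python
# Verify that  A^a_b = B^a C_bc D^c + E^a_b  keeps its form in a new frame.
n = 2
L    = [[2, 1], [1, 1]]        # Lambda^{a'}_a  (det = 1)
Linv = [[1, -1], [-1, 2]]      # Lambda^a_{a'} = Lambda^{-1}; L @ Linv = I

B = [1, 2]                     # B^a
D = [3, -1]                    # D^c
C = [[1, 0], [2, 1]]           # C_bc
E = [[0, 1], [1, 0]]           # E^a_b

def rhs(B, C, D, E):
    """A^a_b = B^a C_bc D^c + E^a_b."""
    return [[B[a] * sum(C[b][c] * D[c] for c in range(n)) + E[a][b]
             for b in range(n)] for a in range(n)]

# Components of each piece in the primed frame.
Bp = [sum(L[ap][a] * B[a] for a in range(n)) for ap in range(n)]
Dp = [sum(L[cp][c] * D[c] for c in range(n)) for cp in range(n)]
Cp = [[sum(Linv[b][bp] * Linv[c][cp] * C[b][c]
           for b in range(n) for c in range(n))
       for cp in range(n)] for bp in range(n)]
Ep = [[sum(L[ap][a] * Linv[b][bp] * E[a][b]
           for a in range(n) for b in range(n))
       for bp in range(n)] for ap in range(n)]

# Transform A directly, then compare with the equation built from the
# transformed pieces.
A  = rhs(B, C, D, E)
Ap = [[sum(L[ap][a] * Linv[b][bp] * A[a][b]
           for a in range(n) for b in range(n))
       for bp in range(n)] for ap in range(n)]

print(Ap == rhs(Bp, Cp, Dp, Ep))   # True: the equation holds in every frame
```

The inner Λ^c_c' Λ^c'_d = δ^c_d cancellation in the derivation is what makes this work: the contracted index c contributes no Λ factors, so both sides pick up the same overall Λ^a'_a Λ^b_b'.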