Similar Matrices and Diagonalization
------------------------------------
Change of Basis
---------------
Consider ℝ^{2} with the following basis, B:
- - - -
B = {| 2 |, | 1 |}
| 1 | | 2 |
- - - -
Consider an arbitrary vector, v, whose coordinates in
this basis are [v]_{B} = (3, 2):
       - -      - -      - -
v = 3 | 2 | + 2 | 1 |  = | 8 |
      | 1 |     | 2 |    | 7 |
       - -      - -      - -
This can be written as:
     - -     - -
v = | 2  1 || 3 |
    | 1  2 || 2 |
     - -     - -
       - -
Where | 2  1 |, whose columns are the basis vectors
      | 1  2 |
       - -
of B, is called the CHANGE OF BASIS matrix, C, and
 - -
| 3 | = [v]_{B} is the coordinate vector of v in the
| 2 |
 - -
B basis.
- -
Now, | 8 | is just a vector in the STANDARD BASIS.
| 7 |
- -
Therefore,
             - -      - -
[v]_{S} = 8 | 1 | + 7 | 0 |
            | 0 |     | 1 |
             - -      - -
             - -     - -
          = | 1  0 || 8 |
            | 0  1 || 7 |
             - -     - -
i.e. in the standard basis the change of basis matrix
is simply the identity.
In general we can write:
C[v]_{B} = [v]_{S}
Where,
C = Change of basis matrix.
[v]_{B} = The vector in the B basis.
[v]_{S} = The vector in the Standard basis.
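The relation C[v]_{B} = [v]_{S} is easy to check numerically. A minimal sketch, assuming NumPy is available, using the 2D basis from the example above:

```python
import numpy as np

# The columns of C are the basis vectors of B.
C = np.array([[2, 1],
              [1, 2]])

# Coordinates of v in the B basis.
v_B = np.array([3, 2])

# Coordinates of v in the standard basis: C[v]_B = [v]_S.
v_S = C @ v_B
print(v_S)   # [8 7]
```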
Here is another example. Consider a 3-dimensional
subspace of ℝ^{4} (not the same as ℝ^{3}, since the
vectors have 4 components) with basis:
- - - - - -
| -1 | | -1 | | -1 |
B = {| 1 |,| 0 |,| 0 |}
| 0 | | 1 | | 0 |
| 0 | | 0 | | 1 |
- - - - - -
If [v]_{B} = (3, 2, 4) then:
       - -       - -       - -
      | -1 |    | -1 |    | -1 |
v = 3 |  1 | + 2|  0 | + 4|  0 |
      |  0 |    |  1 |    |  0 |
      |  0 |    |  0 |    |  1 |
       - -       - -       - -
- - - -
| -1 -1 -1 || 3 |
= | 1 0 0 || 2 |
| 0 1 0 || 4 |
| 0 0 1 | - -
- -
- -
| -9 |
= | 3 |
| 2 |
| 4 |
- -
- - - - - - - -
| 1 | | 0 | | 0 | | 0 |
= -9| 0 | + 3| 1 | + 2| 0 | + 4| 0 |
| 0 | | 0 | | 1 | | 0 |
| 0 | | 0 | | 0 | | 1 |
- - - - - - - -
     - -          - -
    | 1 0 0 0 || -9 |
  = | 0 1 0 0 ||  3 |
    | 0 0 1 0 ||  2 |
    | 0 0 0 1 ||  4 |
     - -          - -
  = [v]_{S}
Now suppose we are given C and [v]_{S} and want to
find the coordinates [v]_{B} = (c_{1}, c_{2}, c_{3}).
 - -           - -        - -
| -1 -1 -1 || c_{1} |   | -9 |
|  1  0  0 || c_{2} | = |  3 |
|  0  1  0 || c_{3} |   |  2 |
|  0  0  1 |  - -       |  4 |
 - -                     - -
At first sight we might think that we could rearrange
C[v]_{B} = [v]_{S} to get
    [v]_{B} = C^{-1}[v]_{S}
However, C is not square and hence not invertible. To
overcome this we can write the equation in augmented
matrix form and use Gauss-Jordan elimination to get
the answer.
- -
| -1 -1 -1 | -9 |
| 1 0 0 | 3 | => c_{1} = 3, c_{2} = 2 and c_{3} = 4
| 0 1 0 | 2 |
| 0 0 1 | 4 |
- -
Alternatively we can compute the reduced row echelon
form (rref):
- -
| 1 0 0 | 3 |
| 0 1 0 | 2 | => c_{1} = 3, c_{2} = 2 and c_{3} = 4
| 0 0 1 | 4 |
- -
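Computationally, [v]_{B} can be recovered from C and [v]_{S} with a least-squares solve, since C is not square; for a consistent system this gives the exact answer. A sketch, assuming NumPy is available:

```python
import numpy as np

# The 4x3 change of basis matrix from the example above.
C = np.array([[-1, -1, -1],
              [ 1,  0,  0],
              [ 0,  1,  0],
              [ 0,  0,  1]], dtype=float)
v_S = np.array([-9, 3, 2, 4], dtype=float)

# C is not invertible, so solve C x = v_S in the
# least-squares sense; the system is consistent, so the
# result equals the exact coordinates (3, 2, 4).
v_B, residuals, rank, sv = np.linalg.lstsq(C, v_S, rcond=None)
print(v_B)
```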
Transformations
---------------
We have seen how vectors are affected by a change
in basis. We now want to see what happens with
transformations.
Consider a transformation, T. This can be written
in matrix form as:
T(v) = Mv
Where M is the transformation matrix and T: ℝ^{n} -> ℝ^{n}
In the B basis we can write:
[T(v)]_{B} = [N]_{B}[v]_{B}
Where [N]_{B} is the transformation matrix in the B
basis.
C[v]_{B} = [v]_{S} ∴ [v]_{B} = C^{-1}[v]_{S}
[N]_{B}[v]_{B} = [T(v)]_{B}
               = [Mv]_{B}            since T(v) = Mv
= C^{-1}[Mv]_{S}
= C^{-1}[M]_{S}[v]_{S}
= C^{-1}[M]_{S}C[v]_{B}
We conclude:
[N]_{B} = C^{-1}[M]_{S}C
Where,
[N]_{B} is the transformation matrix in the B basis.
[M]_{S} is the transformation matrix in the S basis.
C is the change of basis matrix for B.
Schematically:
                           [M]_{S}
Standard basis:  [v]_{S} ----------> [T(v)]_{S}
                    |                    |
              C^{-1}|                    |C^{-1}
                    v                    v
                           [N]_{B}
B basis:         [v]_{B} ----------> [T(v)]_{B}
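The relation [N]_{B} = C^{-1}[M]_{S}C can be checked numerically. In this sketch (assuming NumPy) C is the change of basis matrix from the 2D example, while M is an arbitrary transformation chosen purely for illustration, not one from the text:

```python
import numpy as np

C = np.array([[2, 1],
              [1, 2]], dtype=float)   # change of basis matrix
M = np.array([[2, 0],
              [0, 3]], dtype=float)   # example [M]_S (hypothetical)

# Transformation matrix in the B basis.
N_B = np.linalg.inv(C) @ M @ C

# Transforming in B then converting to the standard basis
# agrees with converting first and transforming in S.
v_B = np.array([3.0, 2.0])
assert np.allclose(C @ (N_B @ v_B), M @ (C @ v_B))
```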
Similar Matrices and Diagonalization
------------------------------------
Two matrices A and Q are called SIMILAR if they are
related by a transformation of the above form:
P^{-1}AP = Q
or,
AP = PQ
Where P is any invertible n x n matrix.
Proof:
    Q = P^{-1}AP
Suppose v is an eigenvector of A, so that Av = λv.
Then, using PP^{-1} = I:
    Q(P^{-1}v) = (P^{-1}AP)(P^{-1}v)
               = P^{-1}A(PP^{-1})v
               = P^{-1}Av
               = λ(P^{-1}v)
So P^{-1}v is an eigenvector of Q with the same
eigenvalue λ. Therefore, the eigenvalues of A and Q
are the same.
Also consider the characteristic polynomials of
A and Q:
    Ax = λx
Therefore,
    (A - λI)x = 0
This has a non-trivial solution only if det(A - λI) = 0.
A similar equation holds for Q: det(Q - λI) = 0.
In fact the two characteristic polynomials are
identical, since P^{-1}P = I gives:
    det(Q - λI) = det(P^{-1}AP - λP^{-1}P)
                = det(P^{-1}(A - λI)P)
                = det(P^{-1})det(A - λI)det(P)
                = det(A - λI)
Setting λ = 0 gives:
    det(A) = det(Q)
Thus similar matrices have the same characteristic
polynomial and, in particular, equal determinants.
The important result of these proofs is that
similar matrices have the same trace, eigenvalues,
determinant and dimension. However, they can have
different eigenvectors.
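These invariants are easy to verify numerically. A sketch assuming NumPy, using an example matrix A and an arbitrary invertible P chosen for illustration:

```python
import numpy as np

A = np.array([[-2, 6],
              [-2, 5]], dtype=float)
P = np.array([[1, 2],
              [0, 1]], dtype=float)   # any invertible matrix

Q = np.linalg.inv(P) @ A @ P

# Similar matrices share trace, determinant and eigenvalues.
assert np.isclose(np.trace(A), np.trace(Q))
assert np.isclose(np.linalg.det(A), np.linalg.det(Q))
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(Q)))
```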
Example 1: Matrices with distinct eigenvalues.
- - - -
W = | -2 6 | Q = | -4 3 |
| -2 5 | | -10 7 |
- - - -
W and Q have the same trace (3), determinant (2) and
eigenvalues (λ = 1, 2), so they may be similar.
- - - - - -
| -2 6 | has eigenvectors | 3 | and | 2 |
| -2 5 | | 2 | | 1 |
- - - - - -
- - - - - -
| -4 3 | has eigenvectors | 1 | and | 3 |
| -10 7 | | 2 | | 5 |
- - - - - -
We can find P as follows:
- - - - - - - -
| -2 6 || a b | = | a b || -4 3 |
| -2 5 || c d |   | c d || -10 7 |
- - - - - - - -
- - - -
| -2a + 6c -2b + 6d | = | -4a - 10b 3a + 7b |
| -2a + 5c -2b + 5d | | -4c - 10d 3c + 7d |
- - - -
Therefore,
a = -3c - 5b and d = -b - 3c/2
Choosing b = 1 and c = 0 gives:
- -
P = | -5 1 |
| 0 -1 |
- -
Confirming:
- - - - - -
| -2 6 || -5 1 | = | 10 -8 |
| -2 5 || 0 -1 | | 10 -7 |
- - - - - -
and,
- - - - - -
| -5 1 || -4 3 | = | 10 -8 |
| 0 -1 || -10 7 | | 10 -7 |
- - - - - -
P^{-1}WP gives:
- - - - - - - -
| -5 1 |^{-1}| -2 6 || -5 1 | = | -4 3 |
| 0 -1 |^{ }| -2 5 || 0 -1 | | -10 7 |
- - - - - - - -
Trace = 3, det = 2, λ = 2 and 1
and,
- - - - - - - -
| -5 1 |^{-1}| -4 3 || -5 1 | = | -14 24/5 |
| 0 -1 |^{ }| -10 7 || 0 -1 | | -50 17 |
- - - - - - - -
Trace = 3, det = 2, λ = 2 and 1
It is easily seen that there are many versions
of P that satisfy WP = PQ and hence W has a whole
family of matrices that are similar to it!
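The arithmetic above can be confirmed in a few lines of NumPy (a sketch, assuming it is available):

```python
import numpy as np

W = np.array([[-2, 6],
              [-2, 5]], dtype=float)
Q = np.array([[ -4, 3],
              [-10, 7]], dtype=float)
P = np.array([[-5,  1],
              [ 0, -1]], dtype=float)

# WP = PQ and, equivalently, P^{-1}WP = Q.
assert np.allclose(W @ P, P @ Q)
assert np.allclose(np.linalg.inv(P) @ W @ P, Q)
```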
Example 2: Matrices with a repeated eigenvalue and a
single eigenvector.
- - - -
W = | 2 1 | Q = | 0 2 |
| 0 2 | | -2 4 |
- - - -
W and Q have the same trace (4), determinant (4) and
repeated eigenvalue (λ = 2), so they may be similar.
W has the repeated eigenvalue λ = 2 with only one
independent eigenvector:
     - -
    | 1 |
    | 0 |
     - -
Q also has the repeated eigenvalue λ = 2 with only
one independent eigenvector:
     - -
    | 1 |
    | 1 |
     - -
We can find P as before:
- - - - - - - -
| 2 1 || a b | = | a b || 0 2 |
| 0 2 || c d | | c d || -2 4 |
- - - - - - - -
This leads to the system of equations:
2a + c + 2b = 0 ∴ c = -2a - 2b
-2b - 2a + d = 0
2c + 2d = 0 ∴ c = -d
-2d - 2c = 0
Choosing a = 0 and b = 1 gives:
- -
P = | 0 1 |
| -2 2 |
- -
P^{-1}WP gives:
- - - - - - - -
| 0 1 |^{-1}| 2 1 || 0 1 | = | 0 2 |
| -2 2 |^{ }| 0 2 || -2 2 | | -2 4 |
- - - - - - - -
Again, it is easily seen that there are many
versions of P that satisfy WP = PQ and hence W
has a whole family of matrices that are similar
to it also.
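This can also be checked numerically (a sketch, assuming NumPy). Note that W is defective, having only one independent eigenvector, yet it is still similar to Q via the P found above:

```python
import numpy as np

W = np.array([[2, 1],
              [0, 2]], dtype=float)
Q = np.array([[ 0, 2],
              [-2, 4]], dtype=float)
P = np.array([[ 0, 1],
              [-2, 2]], dtype=float)

# Similarity holds even without a full eigenvector set.
assert np.allclose(np.linalg.inv(P) @ W @ P, Q)

# Both matrices have the repeated eigenvalue 2.
assert np.allclose(np.linalg.eigvals(W), [2, 2])
assert np.allclose(np.linalg.eigvals(Q), [2, 2])
```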
Diagonalization
---------------
If a matrix has a set of n linearly independent
eigenvectors, the similarity transformation can be
used to create a similar matrix that has a diagonal
form. The diagonal form is obtained by constructing
P such that its columns are the eigenvectors of the
matrix being transformed. Diagonalization of an
n x n matrix is possible if and only if the matrix
possesses n linearly independent eigenvectors.
Proof:
We start with the eigenvalue equation and write:
Av_{1} = λ_{1}v_{1}, Av_{2} = λ_{2}v_{2} ... Av_{n} = λ_{n}v_{n} where the
v's are column (eigen) vectors with n entries.
Let us write P with the eigenvectors as its columns:
        - -
    P = | v_{1}  v_{2}  ...  v_{n} |
        - -
Therefore,
         - -
    AP = | Av_{1}  Av_{2}  ...  Av_{n} |
         - -
         - -
       = | λ_{1}v_{1}  λ_{2}v_{2}  ...  λ_{n}v_{n} |
         - -
         - -                            - -
       = | v_{1}  v_{2}  ...  v_{n} | | λ_{1}                |
         - -                          |       λ_{2}          |
                                      |             ...      |
                                      |                λ_{n} |
                                       - -
So we have:
AP = PΛ
or,
P^{-1}AP = P^{-1}PΛ
= Λ - the diagonal matrix of eigenvalues.
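NumPy's eig performs exactly this construction: it returns the eigenvalues together with a matrix whose columns are the eigenvectors, i.e. the P of the proof above. A sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[-2, 6],
              [-2, 5]], dtype=float)

# Columns of P are the eigenvectors of A.
eigenvalues, P = np.linalg.eig(A)

# P^{-1} A P is the diagonal matrix of eigenvalues, Λ.
Lambda = np.linalg.inv(P) @ A @ P
assert np.allclose(Lambda, np.diag(eigenvalues))
```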
Example 3:
Consider the example from before that had unique
eigenvectors:
- - - - - -
| -2 6 | with eigenvectors | 3 | and | 2 |
| -2 5 | | 2 | | 1 |
- - - - - -
Thus,
- - - -
P_{W} = | 3 2 | P_{W}^{-1} = | -1 2 |
_{ } | 2 1 |_{ } | 2 -3 |
- - - -
and, P_{W}^{-1}WP_{W} gives:
- - - - - - - -
| -1 2 || -2 6 || 3 2 | = | 2 0 |
| 2 -3 || -2 5 || 2 1 | | 0 1 |
- - - - - - - -
Likewise,
- - - - - -
| -4 3 | with eigenvectors | 1 | and | 3 |
| -10 7 | | 2 | | 5 |
- - - - - -
Thus,
- - - -
P_{Q} = | 1 3 | P_{Q}^{-1} = | -5 3 |
_{ } | 2 5 |_{ } | 2 -1 |
- - - -
and P_{Q}^{-1}QP_{Q} gives:
- - - - - - - -
| -5 3 || -4 3 || 1 3 | = | 2 0 |
| 2 -1 || -10 7 || 2 5 | | 0 1 |
- - - - - - - -
Note that:
- - - - - - - -
| -2 6 || 3 2 | - | 3 2 || 2 0 | = 0
| -2 5 || 2 1 | | 2 1 || 0 1 |
- - - - - - - -
and,
- - - - - - - -
| -4 3 || 1 3 | - | 1 3 || 2 0 | = 0
| -10 7 || 2 5 | | 2 5 || 0 1 |
- - - - - - - -
The fact that diagonalization results in identical
matrices is another way of confirming the similarity
of W and Q.
If we are given the eigenvalues and eigenvectors
we can construct the original matrix, W, using the
transformation W = PΛP^{-1}. Using the above example,
- - - - - - - -
W = | 3 2 || 2 0 || 3 2 |^{-1} = | -2 6 |
| 2 1 || 0 1 || 2 1 |^{ } | -2 5 |
- - - - - - - -
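The reconstruction W = PΛP^{-1} can be verified with NumPy (a sketch, using the numbers from the example above):

```python
import numpy as np

P = np.array([[3, 2],
              [2, 1]], dtype=float)   # columns: eigenvectors
Lambda = np.diag([2.0, 1.0])          # matching eigenvalues

# W = P Λ P^{-1} rebuilds the original matrix.
W = P @ Lambda @ np.linalg.inv(P)
assert np.allclose(W, [[-2, 6], [-2, 5]])
```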
Simultaneous Diagonalization
----------------------------
If 2 or more matrices share the same set of n
linearly independent eigenvectors, then the
matrices can be simultaneously diagonalized using
the same transformation matrix. For example,
- - - -
W = | 2 1 | and Q = | 3 4 |
| 1 2 | | 4 3 |
- - - -
have the same eigenvectors. They are:
- - - -
| 1 | and | 1 |
| 1 | | -1 |
- - - -
- -
W can be diagonalized to | 3 0 | = W'
| 0 1 |
- -
- -
Q can be diagonalized to | 7 0 | = Q'
| 0 -1 |
- -
The products WQ and QW are:
- -
WQ = QW = | 10 11 | therefore [W,Q] = 0
| 11 10 |
- -
From this we can conclude that matrices that
can be simultaneously diagonalized commute with
each other.
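A final numerical check (a sketch, assuming NumPy): W and Q are diagonalized by the same P built from their shared eigenvectors, and they commute:

```python
import numpy as np

W = np.array([[2, 1],
              [1, 2]], dtype=float)
Q = np.array([[3, 4],
              [4, 3]], dtype=float)

# Shared eigenvectors as columns.
P = np.array([[1,  1],
              [1, -1]], dtype=float)
P_inv = np.linalg.inv(P)

assert np.allclose(P_inv @ W @ P, np.diag([3.0, 1.0]))   # W'
assert np.allclose(P_inv @ Q @ P, np.diag([7.0, -1.0]))  # Q'
assert np.allclose(W @ Q, Q @ W)                         # [W, Q] = 0
```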