Similar Matrices and Diagonalization
------------------------------------
Change of Basis
---------------
Consider ℝ² with the following basis, B:
- - - -
B = {| 2 |, | 1 |}
| 1 | | 2 |
- - - -
Consider a vector, v, whose coordinates in this
basis are [v]B = (3, 2). Expanding in terms of the
basis vectors of B:
     -   -     -   -     -   -
v = 3| 2 | + 2| 1 | = | 8 |
     | 1 |    | 2 |   | 7 |
     -   -     -   -     -   -
This can be written as:
     -     -  -   -
v = | 2  1 || 3 |
    | 1  2 || 2 |
     -     -  -   -
       -     -
Where | 2  1 | is called the CHANGE OF BASIS
      | 1  2 |
       -     -
matrix, C. Its columns are the basis vectors of B.
      -   -
Now, | 8 | is just v written in the STANDARD BASIS.
     | 7 |
      -   -
Therefore,
        -   -     -   -     -   -
[v]S = 8| 1 | + 7| 0 | = | 8 |
        | 0 |    | 1 |   | 7 |
        -   -     -   -     -   -
In general we can write:
C[v]B = [v]S
Where,
C    = The change of basis matrix.
[v]B = The vector in the B basis.
[v]S = The vector in the standard basis.
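As a quick numerical check of C[v]B = [v]S for the
example above, here is a minimal Python/NumPy sketch
(the names C, vB and vS are just illustrative):

    import numpy as np

    # Columns of the change of basis matrix are the basis vectors of B
    C = np.array([[2, 1],
                  [1, 2]])

    vB = np.array([3, 2])   # coordinates of v in the B basis
    vS = C @ vB             # coordinates of v in the standard basis
    print(vS)               # [8 7]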
Here is another example. Consider a 3-dimensional
subspace of ℝ⁴ (which is not the same as ℝ³,
because its vectors have 4 components) with the
basis:
- - - - - -
| -1 | | -1 | | -1 |
B = {| 1 |,| 0 |,| 0 |}
| 0 | | 1 | | 0 |
| 0 | | 0 | | 1 |
- - - - - -
Take the vector with B coordinates [v]B = (3, 2, 4):
      -    -      -    -      -    -
     | -1 |      | -1 |      | -1 |
v = 3|  1 | + 2 |  0 | + 4 |  0 |
     |  0 |      |  1 |      |  0 |
     |  0 |      |  0 |      |  1 |
      -    -      -    -      -    -
    -           -  -   -
   | -1 -1 -1 |  | 3 |
 = |  1  0  0 |  | 2 |
   |  0  1  0 |  | 4 |
   |  0  0  1 |   -   -
    -           -
- -
| -9 |
= | 3 |
| 2 |
| 4 |
- -
      -   -     -   -     -   -     -   -
     | 1 |     | 0 |     | 0 |     | 0 |
 = -9| 0 | + 3| 1 | + 2| 0 | + 4| 0 |
     | 0 |     | 0 |     | 1 |     | 0 |
     | 0 |     | 0 |     | 0 |     | 1 |
      -   -     -   -     -   -     -   -
which is just [v]S, the vector written in the
standard basis.
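The same computation in NumPy (again only a sketch;
C4 is an illustrative name for the 4 x 3 change of
basis matrix):

    import numpy as np

    # Columns are the three basis vectors of the subspace of R^4
    C4 = np.array([[-1, -1, -1],
                   [ 1,  0,  0],
                   [ 0,  1,  0],
                   [ 0,  0,  1]])

    vB = np.array([3, 2, 4])
    print(C4 @ vB)          # [-9  3  2  4]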
Now suppose we are given C and [v]S and want to
find [v]B = (c1, c2, c3):
 -           -  -    -     -    -
| -1 -1 -1 | | c1 |   | -9 |
|  1  0  0 | | c2 | = |  3 |
|  0  1  0 | | c3 |   |  2 |
|  0  0  1 |  -    -  |  4 |
 -           -         -    -
At first sight we might think that we could
rearrange C[v]B = [v]S to get
[v]B = C⁻¹[v]S
However, C is a 4 x 3 matrix, and a non-square
matrix is not invertible. To overcome this we can
write the equation in augmented matrix form and
use Gauss-Jordan elimination to get the answer.
 -              -
| -1 -1 -1 | -9 |
|  1  0  0 |  3 |
|  0  1  0 |  2 |
|  0  0  1 |  4 |
 -              -
Gauss-Jordan elimination reduces this to its
reduced row echelon form (rref):
 -            -
| 1  0  0 | 3 |
| 0  1  0 | 2 |  => c1 = 3, c2 = 2 and c3 = 4
| 0  0  1 | 4 |
| 0  0  0 | 0 |
 -            -
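The elimination can be checked with SymPy, whose
Matrix.rref() returns the reduced matrix together
with its pivot columns (a sketch; aug is just an
illustrative name for the augmented matrix):

    from sympy import Matrix

    # Augmented matrix [C | vS]
    aug = Matrix([[-1, -1, -1, -9],
                  [ 1,  0,  0,  3],
                  [ 0,  1,  0,  2],
                  [ 0,  0,  1,  4]])

    rref_matrix, pivots = aug.rref()
    print(rref_matrix)      # last column gives c1 = 3, c2 = 2, c3 = 4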
Transformations
---------------
We have seen how vectors are affected by a change
in basis. We now want to see what happens with
transformations.
Consider a transformation, T. This can be written
in matrix form as:
T(v) = Mv
Where M is the transformation matrix and T: ℝⁿ -> ℝⁿ
In the B basis we can write:
[T(v)]B = [N]B[v]B
Where [N]B is the transformation matrix in the B
basis.
Recall C[v]B = [v]S ∴ [v]B = C⁻¹[v]S. Then:
[N]B[v]B = [T(v)]B
         = [Mv]B         since T(v) = Mv
         = C⁻¹[Mv]S      since [w]B = C⁻¹[w]S for any w
         = C⁻¹[M]S[v]S
         = C⁻¹[M]SC[v]B  since [v]S = C[v]B
We conclude:
[N]B = C⁻¹[M]SC
Where,
[N]B is the transformation matrix in the B basis.
[M]S is the transformation matrix in the S basis.
C is the change of basis matrix for B.
Schematically:
                      [M]S
Standard basis:  v ----------> T(v)
                 |              |
             C⁻¹ |              | C⁻¹
                 v              v
                      [N]B
B basis:       [v]B --------> [T(v)]B
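A numerical sketch of [N]B = C⁻¹[M]SC, using the 2D
basis from earlier; the transformation M here is an
arbitrary choice made only for this illustration:

    import numpy as np

    C = np.array([[2, 1],
                  [1, 2]])         # change of basis matrix for B
    M = np.array([[1, 1],
                  [0, 2]])         # an arbitrary transformation (standard basis)

    N = np.linalg.inv(C) @ M @ C   # the same transformation in the B basis

    # Check: transforming in B coordinates agrees with transforming
    # in standard coordinates
    vB = np.array([3, 2])
    print(np.allclose(C @ (N @ vB), M @ (C @ vB)))   # True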
Similar Matrices and Diagonalization
------------------------------------
Two n x n matrices, A and Q, are similar if they
satisfy the above relationship:
P⁻¹AP = Q
or,
AP = PQ
for some invertible n x n matrix, P.
Proof:
Let v be an eigenvector of A with eigenvalue λ,
so that Av = λv. Then:
Q(P⁻¹v) = (P⁻¹AP)(P⁻¹v)
        = P⁻¹A(PP⁻¹)v
        = P⁻¹Av
        = λP⁻¹v
Therefore, P⁻¹v is an eigenvector of Q with the
same eigenvalue, λ, and so the eigenvalues of A
and Q are the same.
Also consider the characteristic polynomials of
A and Q:
Ax = λx
Therefore,
(A - λI)x = 0
This has a non-trivial solution only if
det(A - λI) = 0
The same holds for Q: det(Q - λI) = 0. In fact the
two characteristic polynomials are identical, since
det(Q - λI) = det(P⁻¹(A - λI)P) = det(A - λI)
Setting λ = 0 then gives:
det(A) = det(Q)
Thus, similar matrices also have equal determinants.
The important result of these proofs is that
similar matrices have the same trace, eigenvalues,
determinant and dimension. However, they can have
different eigenvectors.
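These shared properties are easy to check
numerically. In this sketch A is the matrix used in
Example 1 below and P is an arbitrary invertible
matrix chosen for the illustration:

    import numpy as np

    A = np.array([[-2., 6.],
                  [-2., 5.]])
    P = np.array([[1., 2.],
                  [0., 1.]])       # any invertible matrix will do

    Q = np.linalg.inv(P) @ A @ P

    print(np.sort(np.linalg.eigvals(A)))     # [1. 2.]
    print(np.sort(np.linalg.eigvals(Q)))     # [1. 2.] - same eigenvalues
    print(np.trace(A), np.trace(Q))          # 3.0 3.0 - same trace
    print(np.linalg.det(A), np.linalg.det(Q))  # both 2.0 (up to rounding)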
Example 1: Matrices with linearly independent (unique) eigenvectors.
- - - -
W = | -2 6 | Q = | -4 3 |
| -2 5 | | -10 7 |
- - - -
W and Q are similar: both have trace 3, det 2 and
eigenvalues 1 and 2.
- - - - - -
| -2 6 | has eigenvectors | 3 | and | 2 |
| -2 5 | | 2 | | 1 |
- - - - - -
- - - - - -
| -4 3 | has eigenvectors | 1 | and | 3 |
| -10 7 | | 2 | | 5 |
- - - - - -
We can find P as follows:
- - - - - - - -
| -2 6 || a b | = | a b || -4 3 |
| -2 5 || c d |   | c d || -10 7 |
- - - - - - - -
- - - -
| -2a + 6c -2b + 6d | = | -4a - 10b 3a + 7b |
| -2a + 5c -2b + 5d | | -4c - 10d 3c + 7d |
- - - -
Therefore,
a = -3c - 5b and d = -b - 3c/2
Choosing b = 1 and c = 0 gives:
- -
P = | -5 1 |
| 0 -1 |
- -
Confirming:
- - - - - -
| -2 6 || -5 1 | = | 10 -8 |
| -2 5 || 0 -1 | | 10 -7 |
- - - - - -
and,
- - - - - -
| -5 1 || -4 3 | = | 10 -8 |
| 0 -1 || -10 7 | | 10 -7 |
- - - - - -
P⁻¹WP gives:
- - - - - - - -
| -5 1 |⁻¹| -2 6 || -5 1 | = | -4 3 |
| 0 -1 | | -2 5 || 0 -1 | | -10 7 |
- - - - - - - -
Trace = 3, det = 2, λ = 2 and 1
and,
- - - - - - - -
| -5 1 |⁻¹| -4 3 || -5 1 | = | -14 24/5 |
| 0 -1 | | -10 7 || 0 -1 | | -50 17 |
- - - - - - - -
Trace = 3, det = 2, λ = 2 and 1
It is easily seen that there are many versions
of P that satisfy WP = PQ (one for each choice of
b and c) and hence W has a whole family of matrices
that are similar to it!
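The whole family can be recovered symbolically with
SymPy (a sketch; the symbols mirror the a, b, c, d
used above):

    from sympy import symbols, solve, Matrix

    a, b, c, d = symbols('a b c d')
    W = Matrix([[-2, 6], [-2, 5]])
    Q = Matrix([[-4, 3], [-10, 7]])
    P = Matrix([[a, b], [c, d]])

    # Solve the entrywise equations of WP = PQ for a and d,
    # leaving b and c free
    print(solve(list(W*P - P*Q), [a, d]))
    # {a: -5*b - 3*c, d: -b - 3*c/2}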
Example 2: Matrices with repeated (non-unique) eigenvectors.
- - - -
W = | 2 1 | Q = | 0 2 |
| 0 2 | | -2 4 |
- - - -
W and Q are similar: both have trace 4, det 4 and
the repeated eigenvalue λ = 2.
 -     -                                 -   -
| 2  1 | has repeated eigenvalue λ = 2  | 1 |
| 0  2 | with single eigenvector        | 0 |
 -     -                                 -   -
 -      -                                -   -
|  0  2 | has repeated eigenvalue λ = 2 | 1 |
| -2  4 | with single eigenvector       | 1 |
 -      -                                -   -
We can find P as before:
- - - - - - - -
| 2 1 || a b | = | a b || 0 2 |
| 0 2 || c d | | c d || -2 4 |
- - - - - - - -
This leads to the system of equations:
2a + c + 2b = 0 ∴ c = -2a - 2b
-2b - 2a + d = 0
2c + 2d = 0 ∴ c = -d
-2d - 2c = 0
Choosing a = 0 and b = 1 gives:
- -
P = | 0 1 |
| -2 2 |
- -
P⁻¹WP gives:
 -      -   -     -  -      -     -      -
|  0  1 |⁻¹| 2  1 ||  0  1 |  =  |  0  2 |
| -2  2 |  | 0  2 || -2  2 |     | -2  4 |
 -      -   -     -  -      -     -      -
Again, it is easily seen that there are many
versions of P that satisfy WP = PQ and hence W
has a whole family of matrices that are similar
to it also.
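A NumPy sketch for this example; note that the
eigenvector matrix returned by eig is (numerically)
singular, which is why W cannot be diagonalized:

    import numpy as np

    W = np.array([[2., 1.],
                  [0., 2.]])
    Q = np.array([[0., 2.],
                  [-2., 4.]])
    P = np.array([[0., 1.],
                  [-2., 2.]])

    print(np.allclose(np.linalg.inv(P) @ W @ P, Q))  # True: W, Q similar

    evals, evecs = np.linalg.eig(W)
    print(evals)                   # [2. 2.] - repeated eigenvalue
    print(np.linalg.det(evecs))    # ~0: no independent second eigenvector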
Diagonalization
---------------
In the case of a matrix that has a set of n linearly
independent (unique) eigenvectors, the similarity
transformation can also be used to create another
similar matrix that has a diagonal form. The
diagonal form is obtained by constructing P such
that its columns are the eigenvectors of the matrix
being transformed. Diagonalization is possible if
and only if the n x n matrix possesses n linearly
independent eigenvectors.
Proof:
We start with the eigenvalue equation and write:
Av1 = λ1v1, Av2 = λ2v2 ... Avn = λnvn where the
v's are column (eigen) vectors with n entries.
Let us write P as:
- -
| . . . |
| v1 . v2 . --- . vn |
| . . . |
| . . . |
- -
Therefore,
- -
| . . . |
AP = A| v1 . v2 . --- . vn |
| . . . |
| . . . |
- -
- -
| . . . |
= | Av1 . Av2 . --- . Avn |
| . . . |
| . . . |
- -
- -
| . . . |
= | λ1v1 . λ2v2 . --- . λnvn |
| . . . |
| . . . |
- -
   -                    -  -               -
  | .    .         .   | | λ1             |
= | v1 . v2 . --- . vn | |    λ2          |
  | .    .         .   | |       .        |
  | .    .         .   | |          λn    |
   -                    -  -               -
So we have:
AP = PΛ
or,
P⁻¹AP = P⁻¹PΛ
      = Λ, the diagonal matrix of eigenvalues.
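In NumPy the construction of P and Λ looks like the
following sketch; eig already returns the
eigenvectors as the columns of a matrix:

    import numpy as np

    A = np.array([[-2., 6.],
                  [-2., 5.]])

    evals, P = np.linalg.eig(A)      # columns of P are the eigenvectors
    Lam = np.linalg.inv(P) @ A @ P   # should equal diag(evals)

    print(np.round(Lam, 10))         # diagonal matrix of the eigenvalues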
Example 3:
Consider the example from before that had unique
eigenvectors:
- - - - - -
| -2 6 | with eigenvectors | 3 | and | 2 |
| -2 5 | | 2 | | 1 |
- - - - - -
Thus,
 -     -              -      -
PW = | 3 2 |   PW⁻¹ = | -1  2 |
     | 2 1 |          |  2 -3 |
 -     -              -      -
and, PW⁻¹WPW gives:
 -      -  -      -  -     -     -     -
| -1  2 || -2  6 || 3  2 |  =  | 2  0 |
|  2 -3 || -2  5 || 2  1 |     | 0  1 |
 -      -  -      -  -     -     -     -
Likewise,
- - - - - -
| -4 3 | with eigenvectors | 1 | and | 3 |
| -10 7 | | 2 | | 5 |
- - - - - -
Thus,
 -     -              -      -
PQ = | 1 3 |   PQ⁻¹ = | -5  3 |
     | 2 5 |          |  2 -1 |
 -     -              -      -
and PQ⁻¹QPQ gives:
 -      -  -       -  -     -     -     -
| -5  3 || -4   3 || 1  3 |  =  | 2  0 |
|  2 -1 || -10  7 || 2  5 |     | 0  1 |
 -      -  -       -  -     -     -     -
Note that:
- - - - - - - -
| -2 6 || 3 2 | - | 3 2 || 2 0 | = 0
| -2 5 || 2 1 | | 2 1 || 0 1 |
- - - - - - - -
and,
- - - - - - - -
| -4 3 || 1 3 | - | 1 3 || 2 0 | = 0
| -10 7 || 2 5 | | 2 5 || 0 1 |
- - - - - - - -
The fact that W and Q diagonalize to the same
diagonal matrix is another way of confirming the
similarity of W and Q.
If we are given the eigenvalues and eigenvectors
we can construct the original matrix, W, using the
transformation W = PΛP⁻¹. Using the above example,
     -   -  -   -  -   -        -    -
W = | 3 2 || 2 0 || 3 2 |⁻¹  = | -2 6 |
    | 2 1 || 0 1 || 2 1 |      | -2 5 |
     -   -  -   -  -   -        -    -
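The reconstruction W = PΛP⁻¹ in NumPy (a sketch
using the matrices above):

    import numpy as np

    P = np.array([[3., 2.],
                  [2., 1.]])          # eigenvectors as columns
    Lam = np.diag([2., 1.])           # eigenvalues on the diagonal

    print(P @ Lam @ np.linalg.inv(P))   # [[-2. 6.] [-2. 5.]]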
Simultaneous Diagonalization
----------------------------
If 2 or more matrices share the same set of n
linearly independent eigenvectors, then the
matrices can be simultaneously diagonalized using
the same transformation matrix. For example,
- - - -
W = | 2 1 | and Q = | 3 4 |
| 1 2 | | 4 3 |
- - - -
have the same eigenvectors. They are:
- - - -
| 1 | and | 1 |
| 1 | | -1 |
- - - -
- -
W can be diagonalized to | 3 0 | = W'
| 0 1 |
- -
- -
Q can be diagonalized to | 7 0 | = Q'
| 0 -1 |
- -
The products WQ and QW are:
- -
WQ = QW = | 10 11 | therefore [W,Q] = 0
| 11 10 |
- -
From this we can conclude that matrices that
can be simultaneously diagonalized commute with
each other.
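Finally, a NumPy sketch of the simultaneous
diagonalization and the commutator check (P is
built from the shared eigenvectors):

    import numpy as np

    W = np.array([[2., 1.],
                  [1., 2.]])
    Q = np.array([[3., 4.],
                  [4., 3.]])
    P = np.array([[1.,  1.],
                  [1., -1.]])          # shared eigenvectors as columns

    Pinv = np.linalg.inv(P)
    print(np.round(Pinv @ W @ P, 10))   # diag(3, 1)  = W'
    print(np.round(Pinv @ Q @ P, 10))   # diag(7, -1) = Q'
    print(np.allclose(W @ Q, Q @ W))    # True: [W, Q] = 0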