Special Unitary Groups and the Standard Model - Part 2
------------------------------------------------------

In Special Unitary Groups and the Standard Model
- Part 1 we talked about the fundamental (defining)
representation in which the matrices produced by the
generators are the elements of the group.  For SU(3)
this is the set of complex 3 x 3 unitary matrices
with determinant = 1.  These matrices act on the
3-component complex column vectors that represent
the quark fields and that form a basis.

However, there is another representation of the
Lie algebra, the ADJOINT REPRESENTATION, whose
matrices are built from the structure constants.

-----------------------------------------------

For the fundamental representation we had the
following definition of the Lie algebra:

[Ta,Tb] = ifabcTc

The matrix elements of the adjoint representation
are given by the structure constants obtained
from the fundamental representation.

(Ta(A))bc := -ifabc

Where (Ta(A))bc is a particular element of the matrix
Ta(A).  The dimension of Ta(A) is given by:

(N² - 1) x (N² - 1)

And there would be (N² - 1) of them.

SU(2)
-----

For SU(2) the structure constants are fabc = εabc.
Absorbing the factor of i into real, antisymmetric
adjoint matrices, we write

(Ta(A))bc = εabc

which satisfy

[Ta(A),Tb(A)] = -εabcTc(A)

The generators are constructed from:

(T1(A))23 = ε123

= 1

(T1(A))32 = ε132

= -1

(T1(A))11 = ε111

= 0

etc. etc.

This eventually gives:

        -         -            -         -            -          -
        | 0  0  0 |            | 0  0 -1 |            |  0  1  0 |
T1(A) = | 0  0  1 |    T2(A) = | 0  0  0 |    T3(A) = | -1  0  0 |
        | 0 -1  0 |            | 1  0  0 |            |  0  0  0 |
        -         -            -         -            -          -

Note: Up to an overall sign convention, these are
also the generators of SO(3), representing rotations
about the x, y and z axes.
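As a quick numerical cross-check, the adjoint matrices can be assembled directly from the Levi-Civita symbol. This is a sketch using numpy; the index convention (Ta(A))bc = εabc is the one used above:

```python
import numpy as np

# Levi-Civita symbol eps[a,b,c]; for SU(2) the structure
# constants are f_abc = eps_abc
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0   # even permutations
    eps[a, c, b] = -1.0  # odd permutations

# Adjoint matrices built entry by entry: (T_a(A))_bc = eps_abc
T = [eps[a] for a in range(3)]

# In this real convention [T1(A),T2(A)] = -eps_123 T3(A)
comm = T[0] @ T[1] - T[1] @ T[0]
print(np.allclose(comm, -T[2]))  # True
```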

SU(3)
-----

Consider T3(A).  The structure constants are:

f312 = 1

f321 = -1

f345 = 1/2

f354 = -1/2

f367 = -1/2

f376 = 1/2

Therefore, T3(A) looks like:

-                                 -
|   0  -i   0   0    0    0   0   0 |
|   i   0   0   0    0    0   0   0 |
|   0   0   0   0    0    0   0   0 |
T3(A) = |   0   0   0   0  -i/2   0   0   0 |
|   0   0   0  i/2   0    0   0   0 |
|   0   0   0   0    0    0  i/2  0 |
|   0   0   0   0    0  -i/2  0   0 |
|   0   0   0   0    0    0   0   0 |
-                                 -

For T8(A), the structure constants are:

f845 = √3/2

f854 = -√3/2

f678 = √3/2

f687 = -√3/2

f867 = √3/2

f876 = -√3/2

Therefore, T8(A) looks like

-                                     -
| 0  0   0    0     0     0      0    0 |
| 0  0   0    0     0     0      0    0 |
| 0  0   0    0     0     0      0    0 |
T8(A) = | 0  0   0    0   -i√3/2  0      0    0 |
| 0  0   0  i√3/2   0     0      0    0 |
| 0  0   0    0     0     0   -i√3/2  0 |
| 0  0   0    0     0   i√3/2    0    0 |
| 0  0   0    0     0     0      0    0 |
-                                    -
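These SU(3) structure constants can be generated rather than looked up. A minimal sketch using numpy; the Gell-Mann matrices λ1..λ8 and the normalization Tr(TaTb) = δab/2 are the standard ones:

```python
import numpy as np

# The 8 Gell-Mann matrices
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

T = lam / 2  # fundamental generators T_a = lambda_a / 2

# f_abc = -2i Tr([T_a,T_b] T_c), which follows from
# [T_a,T_b] = i f_abc T_c and Tr(T_a T_b) = delta_ab / 2
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = T[a] @ T[b] - T[b] @ T[a]
        for c in range(8):
            f[a, b, c] = np.real(-2j * np.trace(comm @ T[c]))

print(f[2, 3, 4], f[7, 3, 4])  # f345 = 1/2, f845 = sqrt(3)/2
```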

We will now use the JACOBI IDENTITY to show that
the Ta(A)'s also form a representation of the Lie
algebra.  For any Lie algebra, the Lie bracket
[A,B] satisfies:

[A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0

This is the JACOBI IDENTITY.

Proof:

[A,(BC - CB)] + [B,(CA - AC)] + [C,(AB - BA)]

= ABC - ACB - (BCA - CBA) + BCA - BAC
- (CAB - ACB) + CAB - CBA - (ABC - BAC)

= ABC - ACB - BCA + CBA + BCA - BAC
- CAB + ACB + CAB - CBA - ABC + BAC

= 0

In terms of the fundamental generators, the first
term, [A,[B,C]], becomes:

[Ta,[Tb,Tc]] = [Ta,ifbcdTd] = ifbcd[Ta,Td] = -fbcdfadeTe

The second and third terms are the cyclic
permutations:

[Tb,[Tc,Ta]] = -fcadfbdeTe

[Tc,[Ta,Tb]] = -fabdfcdeTe

Therefore (since the Te are linearly independent):

fbcdfade + fcadfbde + fabdfcde = 0

We now make use of the following:

(Tx(A))yz = -ifxyz

∴ fxyz = i(Tx(A))yz

together with the total antisymmetry of the
structure constants:

fabc = fbca = fcab = -fbac = -facb = -fcba

Using fcad = -facd, the identity above becomes:

fbcdfade - facdfbde = -fabdfcde

The LHS can be rewritten as:

i(Tb(A))cdi(Ta(A))de - i(Ta(A))cdi(Tb(A))de

= -(Tb(A))cd(Ta(A))de + (Ta(A))cd(Tb(A))de

= -(Tb(A)Ta(A))ce + (Ta(A)Tb(A))ce

= [Ta(A),Tb(A)]ce

Therefore,

-fabdfcde = [Ta(A),Tb(A)]ce

and, using fcde = -fdce together with fdce = i(Td(A))ce,

ifabd(Td(A))ce = [Ta(A),Tb(A)]ce

i.e.

[Ta(A),Tb(A)] = ifabdTd(A)

This is the familiar commutator associated with
the Lie algebra, so the Ta(A)'s do indeed form a
representation of it.
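The conclusion can be checked numerically for SU(2), where fabc = εabc and the adjoint matrices are (Ta(A))bc = -iεabc. A numpy sketch:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

Tad = -1j * eps  # (T_a(A))_bc = -i f_abc with f_abc = eps_abc

# Verify [T_a(A),T_b(A)] = i f_abc T_c(A) for every pair (a,b)
ok = all(
    np.allclose(Tad[a] @ Tad[b] - Tad[b] @ Tad[a],
                1j * np.einsum('c,cjk->jk', eps[a, b], Tad))
    for a in range(3) for b in range(3)
)
print(ok)  # True
```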

In the adjoint representation, the T(A)'s act
on a basis that is comprised of the generators
themselves and not the u, d and s column vectors
associated with the defining representation. From
the discussion of Lie groups and algebras, the
adjoint action of a Lie algebra on itself is
given by:

ad(x)y = [x,y]

Where x and y are group generators.  Therefore,

ad(Ta)Tb = [Ta,Tb] = ifabcTc

SU(2)
-----

The basis is constructed from the 3 Pauli matrices:

-        -
e1 = iσ1/2 = |  0   i/2 |
| i/2   0  |
-        -

-         -
e2 = iσ2/2 = |   0   1/2 |
| -1/2   0  |
-         -

-        -
e3 = iσ3/2 = | i/2   0  |
|  0  -i/2 |
-        -

Therefore,

ad(e1)e2 = [e1,e2] = -ε123e3 = -e3

ad(e1)e3 = [e1,e3] = -ε132e2 = e2

The Lie algebra lives in a tangent (vector) space
around the identity element of the group.

The vector space is equipped with the operations
of matrix multiplication together with addition
and scalar multiplication.  Therefore, collecting
the coefficients appearing in the images of ad(e1):

                -    -  -  -
                |  0 | | e1 |
0e1 + e2 - e3 = |  1 | | e2 |
                | -1 | | e3 |
                -    -  -  -

In matrix form this looks like:

-        -
| 0  0  0 | e1
ad(e1) = | 0  0  1 | e2
| 0 -1  0 | e3
-        -
e1 e2 e3

Likewise,

ad(e2)e1 = [e2,e1] = -ε213e3 = e3

ad(e2)e3 = [e2,e3] = -ε231e1 = -e1

-  -  -  -
| -1 || e1 |
-e1 - 0e2 + e3 = |  0 || e2 |
|  1 || e3 |
-  -  -  -

-        -
|  0  0 -1 | e1
ad(e2) = |  0  0  0 | e2
|  1  0  0 | e3
-        -
e1 e2 e3

Finally,

ad(e3)e1 = [e3,e1] = -ε312e2 = -e2

ad(e3)e2 = [e3,e2] = -ε321e1 = e1

                -    -  -  -
                |  1 | | e1 |
e1 - e2 + 0e3 = | -1 | | e2 |
                |  0 | | e3 |
                -    -  -  -

          -          -
          |  0  1  0 | e1
ad(e3) =  | -1  0  0 | e2
          |  0  0  0 | e3
          -          -
            e1 e2 e3
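The same ad matrices can be obtained mechanically by expanding each commutator [ea,eb] back in the basis. A numpy sketch; the inner product -2Tr(AB) is chosen so that the ea come out orthonormal:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e = [1j * s / 2 for s in (s1, s2, s3)]   # basis e_a = i*sigma_a/2

def expand(X):
    # Coefficients of X in the basis e_a (note Tr(e_a e_b) = -delta_ab/2)
    return np.array([np.real(-2 * np.trace(X @ ea)) for ea in e])

# Column b of ad(e_a) is the expansion of [e_a, e_b]
ad = [np.column_stack([expand(e[a] @ e[b] - e[b] @ e[a])
                       for b in range(3)]) for a in range(3)]

print(ad[0])  # the ad(e1) matrix shown above
```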

SU(3)
-----

The basis can be constructed from ea = iλa/2.

If we go through the same procedure as before we get:

-                                 -
|   0  -i   0   0    0    0   0   0 | e1
|   i   0   0   0    0    0   0   0 | e2
|   0   0   0   0    0    0   0   0 | e3
ad(e3) = |   0   0   0   0  -i/2   0   0   0 | e4
|   0   0   0  i/2   0    0   0   0 | e5
|   0   0   0   0    0    0  i/2  0 | e6
|   0   0   0   0    0  -i/2  0   0 | e7
|   0   0   0   0    0    0   0   0 | e8
-                                 -
e1  e2  e3  e4  e5   e6  e7  e8

And,

-                                     -
| 0  0   0    0     0     0      0    0 | e1
| 0  0   0    0     0     0      0    0 | e2
| 0  0   0    0     0     0      0    0 | e3
ad(e8) = | 0  0   0    0   -i√3/2  0      0    0 | e4
| 0  0   0  i√3/2   0     0      0    0 | e5
| 0  0   0    0     0     0   -i√3/2  0 | e6
| 0  0   0    0     0   i√3/2    0    0 | e7
| 0  0   0    0     0     0      0    0 | e8
-                                    -
e1 e2  e3   e4    e5    e6     e7    e8

Which agrees exactly with the result using the
structure constants determined from the fundamental
representation.

-------------------------------------------------

Consider SU(2).  In what follows exp(e1) is used as
shorthand for exp(θe1), and similarly for exp(ad(e1)):

-             -
| 1   0      0  |
exp(ad(e1)) = | 0  cosθ  sinθ |
| 0 -sinθ  cosθ |
-             -

-       -      -                   -
exp(|  0  i/2 |) = |  cos(θ/2) isin(θ/2) |
| i/2  0  |    | isin(θ/2)  cos(θ/2) |
-       -      -                   -

-                     -
[exp(e1)]-1 = |  cos(θ/2)  -isin(θ/2) |
| -isin(θ/2)   cos(θ/2) |
-                     -

exp(e1)e1[exp(e1)]-1 gives:

-                   -  -       -  -                     -
|  cos(θ/2) isin(θ/2) || 0   i/2 ||  cos(θ/2)  -isin(θ/2) |
| isin(θ/2)  cos(θ/2) || i/2  0  || -isin(θ/2)   cos(θ/2) |
-                   -  -       -  -                     -

-       -
= |  0  i/2 | = e1 + 0e2 + 0e3
| i/2  0  |
-       -

-     -  -  -
| 1 ? ? || e1 |
| 0 ? ? || e2 |
| 0 ? ? || e3 |
-     -  -  -

exp(e1)e2[exp(e1)]-1 gives:

-                   -  -         -  -                     -
|  cos(θ/2) isin(θ/2) ||   0   1/2 ||  cos(θ/2)  -isin(θ/2) |
| isin(θ/2)  cos(θ/2) || -1/2   0  || -isin(θ/2)   cos(θ/2) |
-                   -  -         -  -                     -

-                                                     -
|     -isin(θ/2)cos(θ/2)     (-sin²(θ/2) + cos²(θ/2))/2 |
| (-cos²(θ/2) + sin²(θ/2))/2      isin(θ/2)cos(θ/2)     |
-                                                     -

-                    -
| (-isinθ)/2  (cosθ)/2 |
| (-cosθ)/2  (isinθ)/2 |
-                    -

We can break this down as follows:

-                  -     -                    -
|  0        (cosθ)/2 | + | -(isinθ)/2      0    |
| (-cosθ)/2    0     |   |    0       (isinθ)/2 |
-                  -     -                    -

-        -          -        -
|  0   1/2 |cosθ + | -i/2   0  |sinθ
| -1/2  0  |       |   0   i/2 |
-        -          -        -

0e1 + e2cosθ - e3sinθ

-         -  -  -
| 1    0  ? || e1 |
| 0  cosθ ? || e2 |
| 0 -sinθ ? || e3 |
-         -  -  -

exp(e1)e3[exp(e1)]-1 gives:

-                   -  -        -  -                     -
|  cos(θ/2) isin(θ/2) || i/2   0  ||  cos(θ/2)  -isin(θ/2) |
| isin(θ/2)  cos(θ/2) ||  0  -i/2 || -isin(θ/2)   cos(θ/2) |
-                   -  -        -  -                     -

-                                                     -
| i(cos²(θ/2) - sin²(θ/2))/2       sin(θ/2)cos(θ/2)     |
|   -sin(θ/2)cos(θ/2)        i(sin²(θ/2) - cos²(θ/2))/2 |
-                                                     -

-                    -
| (icosθ)/2   (sinθ)/2 |
| (-sinθ)/2 (-icosθ)/2 |
-                    -

We can break this down as follows:

-                  -     -                    -
|     0     (sinθ)/2 | + | (icosθ)/2     0      |
| (-sinθ)/2      0   |   |    0      (-icosθ)/2 |
-                  -     -                    -

-        -         -        -
|  0   1/2 |sinθ + | i/2   0  |cosθ
| -1/2  0  |       |  0  -i/2 |
-        -         -        -

= 0e1 + e2sinθ + e3cosθ

The full matrix becomes:

-            -
| 1    0    0  |
| 0  cosθ sinθ | = Ad(e1)
| 0 -sinθ cosθ |
-            -

Which is the same as exp(ad(e1)).

Differentiating with respect to θ gives:

    -               -
    | 0    0     0   |
  = | 0 -sinθ  cosθ  |
    | 0 -cosθ -sinθ  |
    -               -

    -          -
    | 0  0   0 |
  = | 0  0   1 |  at θ = 0
    | 0 -1   0 |
    -          -

which recovers ad(e1).
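This chain of conjugations and the matrix exponential can be checked numerically. A sketch using numpy only; expm below is a simple truncated power series, adequate for these small matrices:

```python
import numpy as np

def expm(M, terms=60):
    # Truncated power-series matrix exponential
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
e1, e2, e3 = (1j * m / 2 for m in s)

theta = 0.7

# exp(theta*e1) e2 [exp(theta*e1)]^-1 = e2 cos(theta) - e3 sin(theta)
g = expm(theta * e1)
conj = g @ e2 @ np.linalg.inv(g)
print(np.allclose(conj, np.cos(theta) * e2 - np.sin(theta) * e3))  # True

# exp(theta*ad(e1)) is the rotation matrix found above
ad_e1 = np.array([[0, 0, 0], [0, 0, 1.0], [0, -1.0, 0]])
R = np.array([[1, 0, 0],
              [0, np.cos(theta), np.sin(theta)],
              [0, -np.sin(theta), np.cos(theta)]])
print(np.allclose(expm(theta * ad_e1), R))  # True
```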

The Cartan-Killing Metric (aka the Killing Form)
------------------------------------------------

The metric tensor takes as its input a pair of tangent
vectors at a point on a manifold and produces a real
number scalar in a way that generalizes the dot product
of vectors in Euclidean space.  This is discussed in
detail in the note, 'The Essential Mathematics of General
Relativity'.  In the same way we can associate a metric
with the inner product on a finite dimensional Lie algebra.
In this case the metric is defined by:

gab = B(a,b) = Tr(Ta(A)Tb(A)) = -facdfbdc

The Killing form is the scalar product of the algebra,
defined in terms of the adjoint representation.

Example SU(2):

g11 = Tr(T1(A)T1(A)):

   -         -  -         -        -          -
   | 0  0  0 |  | 0  0  0 |        | 0  0   0 |
Tr(| 0  0  1 |  | 0  0  1 |)  = Tr(| 0 -1   0 |)  = -2
   | 0 -1  0 |  | 0 -1  0 |        | 0  0  -1 |
   -         -  -         -        -          -

Likewise g22 = g33 = -2, and all of the off-diagonal
entries vanish.  Therefore,

      -          -
      | -2  0   0 |
gab = |  0 -2   0 |
      |  0  0  -2 |
      -          -

Using the structure constants:

For SU(2), with (ea(A))cd = εacd, this becomes:

gab = Tr(ea(A)eb(A)) = Σcd εacdεbdc

For g11 = Tr(e1(A)e1(A)) we get:

g11 = Σcd ε1cdε1dc

= ε111ε111 = 0 x 0 = 0
+
ε121ε112 = 0 x 0 = 0
+
ε131ε113 = 0 x 0 = 0
+
ε112ε121 = 0 x 0 = 0
+
ε122ε122 = 0 x 0 = 0
+
ε132ε123 = -1 x 1 = -1
+
ε113ε131 = 0 x 0 = 0
+
ε123ε132 = 1 x -1 = -1
+
ε133ε133 = 0 x 0 = 0

= -2 as we obtained before.
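A quick numerical version of this calculation (numpy sketch, using the real convention (ea(A))cd = εacd from above):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# Killing metric g_ab = Tr(e_a(A) e_b(A))
g = np.array([[np.trace(eps[a] @ eps[b]) for b in range(3)]
              for a in range(3)])
print(g)  # diag(-2, -2, -2)
```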

The Cartan-Killing metric can be used as a metric
tensor to raise and lower indices.  For example,
fabc is antisymmetric in a and b but lacks any
symmetries for c.  This is where the Killing metric
comes into play.  It can be used to lower the
unwelcome index and create a completely anti-
symmetric fabd as follows:

fabcgcd = fabd

Which is totally antisymmetric in its indices.

The use of the Cartan-Killing metric allows us to
be loose with regard to defining upper and lower
indices on the Levi-Civita tensor and the structure
constants tensor, and these indices are frequently
not distinguished.  Therefore, f^abc ≡ fabc etc.

The Cartan-Killing form can be used to find the
quadratic Casimir operator:
_
C2(A) = gabTa(A)Tb(A)

which, up to normalization, is

Ta(A)Ta(A) = T1(A)² + T2(A)² + T3(A)²

SU(2):
-        -
_       | -2  0  0 |
C2(A) = |  0 -2  0 |
|  0  0 -2 |
-        -

The Cartan Subalgebra Revisited
-------------------------------

We deliberately showed the calculations for ad(e3)
and ad(e8) above because these correspond to the
2 Cartan generators, H1 and H2, for SU(3).

However, these 2 matrices do not have a basis of
common eigenvectors that are linearly independent
as they do in the fundamental representation.
While they can be individually diagonalized, they cannot
be simultaneously diagonalized.  We can overcome this
by using the raising and lowering operators I±, U± and
V± as a basis instead of the iλi's.  This is called the
CARTAN-WEYL BASIS.  In this basis the H's now look like:

-                           -
| 1  0   0   0   0    0  0  0 | I+
| 0 -1   0   0   0    0  0  0 | I-
| 0  0 -1/2  0   0    0  0  0 | U+
ad(H1) = | 0  0   0  1/2  0    0  0  0 | U-
| 0  0   0   0  1/2   0  0  0 | V+
| 0  0   0   0   0  -1/2 0  0 | V-
| 0  0   0   0   0    0  0  0 | I3
| 0  0   0   0   0    0  0  0 | Y
-                           -
I+ I-  U+  U-  V+   V-  I3 Y

Therefore, for example,

-           -  -     -     -     -  -           -
| 1/2   0   0 || 0 0 0 |   | 0 0 0 || 1/2   0   0 |
= |  0  -1/2  0 || 0 0 1 | - | 0 0 1 ||  0  -1/2  0 |
|  0    0   0 || 0 0 0 |   | 0 0 0 ||  0    0   0 |
-           -  -     -     -     -  -           -

-     -
| 0 0 0 |
= -1/2| 0 0 1 |
| 0 0 0 |
-     -

= (-1/2)U+

0I+ + 0I- + (-1/2)U+ + 0U- + 0V+ + 0V- + 0I3 + 0Y ->

-    -
|   0  |
|   0  |
| -1/2 |
|   0  | the U+ column of the ad(H1) matrix.
|   0  |
|   0  |
|   0  |
|   0  |
-    -

Likewise,

-                              -
| 0  0   0    0     0     0  0  0 | I+
| 0  0   0    0     0     0  0  0 | I-
| 0  0 √3/2   0     0     0  0  0 | U+
ad(H2) = | 0  0   0  -√3/2   0     0  0  0 | U-
| 0  0   0    0   √3/2    0  0  0 | V+
| 0  0   0    0     0  -√3/2 0  0 | V-
| 0  0   0    0     0     0  0  0 | I3
| 0  0   0    0     0     0  0  0 | Y
-                              -
I+ I-  U+   U-    V+    V-  I3 Y

Therefore, for example,

-                -  -     -     -     -  -                -
| 1/2√3   0    0   || 0 0 0 |   | 0 0 0 || 1/2√3   0    0   |
= |   0   1/2√3  0   || 0 0 1 | - | 0 0 1 ||   0   1/2√3  0   |
|   0    0   -1/√3 || 0 0 0 |   | 0 0 0 ||   0    0   -1/√3 |
-                -  -     -     -     -  -                -

-     -
| 0 0 0 |
= √3/2| 0 0 1 |
| 0 0 0 |
-     -

= (√3/2)U+

0I+ + 0I- + (√3/2)U+ + 0U- + 0V+ + 0V- + 0I3 + 0Y ->

-    -
|   0  |
|   0  |
| √3/2 |
|   0  | the U+ column of the ad(H2) matrix.
|   0  |
|   0  |
|   0  |
|   0  |
-    -

And, just as in the fundamental representation, the
Cartan generators H1 and H2 are simultaneously
diagonal in this basis.

Root Diagrams
-------------

The Roots, or Roots Vectors, of a Lie algebra, are
the weights of the adjoint representation.  The
number of Roots is equal to the dimension of the
Lie algebra, which is also equal to the dimension
of the adjoint representation.  Therefore, we can
associate a root to every element of the algebra.

The states in the adjoint representation correspond
to the generators (roots).  The action of a generator
on a state is given by the Eigenvalue equation:

H|x> = α|x>  where x = I±, V±, U± and Hi

The LHS can be evaluated by inserting a complete
set of states:

Ta|Tb> = |Tc><Tc|Ta|Tb>

From Quantum Mechanics, <Tc|Ta|Tb> is interpreted
as a matrix element, so we can replace it by [Ta]cb.
Thus,

Ta|Tb> = |Tc>[Ta]cb

= ifabc|Tc>

= |ifabcTc>

= |[Ta,Tb]>

Does this look familiar?  It is the equivalent of
ad(Ta)Tb = [Ta,Tb].

-----------------------------------------------------

Digression:

We can see this another way.

Now that we are in a basis where the Cartan generators
are in diagonal form we can write:

(Ti(A))ab = -diag(γi(1), ..., γi(8))ab

= -γi(a)δab

Where i goes from 1 to the rank and a,b go from
1 to 8.

Now,

(Ti(A))ab = -ifiab

By comparison:

ifiab = γi(a)δab

= γi(a)δba since δab = δba

fiab in this case will form the diagonal elements
of Ti(A) and are not 0 as they would be in the case
of the fundamental representation.  Now, we also know:

[Ti(A),Ta(A)] = ifiabTb(A)

= γi(a)δbaTb(A)

= γi(a)Ta(A)

Therefore, γi(a) is just the eigenvalue of Ti(A)
associated with Ta(A) and so we can write:

[Ti,Ta] ≡ Ti|Ta> = γi(a)|Ta>

-----------------------------------------------------

Therefore, the eigenvalue equation becomes:

|[Hi,Tb]> ≡ Hi|Tb> = αi|Tb>

With this in mind we now find the eigenvalues, αi,
for the following commutators:

[H1,I+] = I+

[H2,I+] = 0

Thus α1 = ±(1,0)

[H1,V+] = (1/2)V+

[H2,V+] = (√3/2)V+

Thus α2 = ±(1/2,√3/2)

[H1,U+] = (-1/2)U+

[H2,U+] = (√3/2)U+

Thus α3 = ±(-1/2,√3/2)

[H1,T3] = [H2,T3] = 0

[H1,T8] = [H2,T8] = 0
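These commutators can be read off numerically. A sketch assuming the standard 3 x 3 identifications (H1 = λ3/2, H2 = λ8/2, and the ladder operators as matrix units):

```python
import numpy as np

H1 = np.diag([0.5, -0.5, 0.0])
H2 = np.diag([1.0, 1.0, -2.0]) / (2 * np.sqrt(3))

def unit(i, j):
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

ladders = {'I+': unit(0, 1), 'I-': unit(1, 0), 'U+': unit(1, 2),
           'U-': unit(2, 1), 'V+': unit(0, 2), 'V-': unit(2, 0)}

# [Hi, X] = alpha_i X; read off the coefficient alpha_i,
# which is a component of the root attached to X
roots = {}
for name, X in ladders.items():
    a1 = np.sum((H1 @ X - X @ H1) * X)
    a2 = np.sum((H2 @ X - X @ H2) * X)
    roots[name] = (a1, a2)

print(roots['I+'], roots['V+'], roots['U+'])
```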

Diagrammatically, root space is the familiar
hexagonal root diagram of SU(3): six roots of unit
length plus the 2 zero roots at the origin.

The roots are the differences of the weights from
the fundamental representation.

-----------------------------------------------------

Historical Digression:

The root diagram corresponds to the EIGHTFOLD WAY
coined by Gell-Mann and Ne'eman in the early 60s
that describes the meson and baryon octets.

The eightfold way explained a "strange" feature of
certain particles produced in accelerator reactions
that decayed slowly as if something was hindering
the process.  The particles exhibited a property
termed "strangeness" which would be conserved in
production but could be changed during decays. Plots
of electric charge, Q, versus strangeness for
particles, most of which were known in the late 50s,
are shown below.

We will see more of this in the discussion of
Tensor decomposition towards the end of this
document.

-----------------------------------------------------

w1 = (1/2,1/2√3)  w2 = (-1/2,1/2√3)  w3 = (0,-1/√3)

V+:  w1 - w3

(1/2,1/2√3) - (0,-1/√3) = (1/2,√3/2)

V-:  w3 - w1

(0,-1/√3) - (1/2,1/2√3) = (-1/2,-√3/2)

I+:  w1 - w2

(1/2,1/2√3) - (-1/2,1/2√3) = (1,0)

I-:  w2 - w1

(-1/2,1/2√3) - (1/2,1/2√3) = (-1,0)

U+:  w2 - w3

(-1/2,1/2√3) - (0,-1/√3) = (-1/2,√3/2)

U-:  w3 - w2

(0,-1/√3) - (-1/2,1/2√3) = (1/2,-√3/2)
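The differences above are trivial to tabulate (numpy sketch):

```python
import numpy as np

# Weights of the fundamental 3 (u, d, s)
w1 = np.array([0.5,  1 / (2 * np.sqrt(3))])
w2 = np.array([-0.5, 1 / (2 * np.sqrt(3))])
w3 = np.array([0.0, -1 / np.sqrt(3)])

# Roots as differences of weights
roots = {'V+': w1 - w3, 'V-': w3 - w1, 'I+': w1 - w2,
         'I-': w2 - w1, 'U+': w2 - w3, 'U-': w3 - w2}

for name, r in roots.items():
    print(name, np.round(r, 4))
```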

Raising and Lowering Operators
------------------------------

For a given state, |i3,y>, the raising and
lowering operators produce:

I+|i3,y> = |i3 + 1, y + 0>

I-|i3,y> = |i3 - 1, y + 0>

U+|i3,y> = |i3 - 1/2, y + √3/2>

U-|i3,y> = |i3 + 1/2, y - √3/2>

V+|i3,y> = |i3 + 1/2, y + √3/2>

V-|i3,y> = |i3 - 1/2, y - √3/2>

----------------

A ladder operator is an operator that increases
or decreases the eigenvalue of another operator.
Suppose that two operators X and N have the
commutation relation:

[N,X] = cX

If |n> is an eigenvector of N such that:

N|n> = α|n>

then

NX|n> = (XN + [N,X])|n>

= XN|n> + [N,X]|n>

= Xα|n> + cX|n>

= (α + c)X|n>

In other words, if |n> is an eigenvector of N with
eigenvalue α then X|n> is an eigenvector of N with
eigenvalue (α + c).  The operator X is a raising
operator for N if c is real and positive, and a
lowering operator for N if c is real and negative.
Let's apply this to I3 (≡ H1) and Y (≡ H2) and a
state |i3,y>.  For example, we let N = I3 and
X = U±.

-         -
| 1/2  0  0 |
I3 = |  0 -1/2 0 |
|  0   0  0 |
-         -

I3U±|i3,y>

(U±I3 + [I3,U±])|i3,y>

(U±I3 -/+ (1/2)U±)|i3,y>

(i3 -/+ 1/2)U±|i3,y>

Likewise,

-                 -
| 1/2√3  0      0   |
Y = |  0    1/2√3   0   |
|  0     0    -1/√3 |
-                 -

YU±|i3,y> = (y +/- √3/2)U±|i3,y>

Therefore U+ decreases i3 by 1/2 and increases y
by √3/2.
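Acting on an explicit state confirms the shift. A sketch using the 3 x 3 forms above; the identification |s> = (0,0,1) with (i3,y) = (0,-1/√3) is assumed:

```python
import numpy as np

I3 = np.diag([0.5, -0.5, 0.0])
Y  = np.diag([1.0, 1.0, -2.0]) / (2 * np.sqrt(3))
Up = np.zeros((3, 3)); Up[1, 2] = 1.0   # U+ as a matrix unit

s = np.array([0.0, 0.0, 1.0])  # the s quark: i3 = 0, y = -1/sqrt(3)
v = Up @ s                     # apply U+ (gives the d quark)

i3 = v @ I3 @ v
y  = v @ Y @ v
print(i3, y)  # i3 lowered by 1/2, y raised by sqrt(3)/2
```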

Note:  [U-,V+] ≡ U-|V+> is an illegal operation.

Change of Notation
------------------

We now rename the 6 generators I±, U± and V±
as follows.

Eα ≡ E1/2,√3/2 = V+

E-α ≡ E-1/2,-√3/2 = V-

etc. etc.

We can now write the eigenvalue equation in
this new notation as:

[Hi,Eα] ≡ Hi|Eα> = αiEα where αi are the root
vectors.

We can also write:

-[Hi,Eα†] = αiEα†

From which we conclude:

Eα† = E-α

Example:

[H1,E1/2,√3/2†] ≡ [H1,V+†] gives:

-         -  -     -     -     -  -         -           -     -
| 1/2  0  0 || 0 0 0 |   | 0 0 0 || 1/2  0  0 |         | 0 0 0 |
| 0  -1/2 0 || 0 0 0 | - | 0 0 0 || 0  -1/2 0 | = (-1/2)| 0 0 0 |
| 0    0  0 || 1 0 0 |   | 1 0 0 || 0    0  0 |         | 1 0 0 |
-         -  -     -     -     -  -         -           -     -

Likewise [H2,E1/2,√3/2†] ≡ [H2,V+†] gives:

-     -
| 0 0 0 |
-√3/2| 0 0 0 |
| 1 0 0 |
-     -

So (1/2,√3/2) is a root and (-1/2,-√3/2) is a root.

Now consider the commutator of 2 generators.

[Eα,Eβ] = Nα,βEα+β

Example:

[I+,U+] ≡ [E1,0,E-1/2,√3/2] = E1/2,√3/2 ≡ V+ (i.e. Nα,β = 1)

[I+,V+] ≡ [E1,0,E1/2,√3/2] = 0 (i.e. Nα,β = 0)

Consider the commutator,

[Hi,[Eα,Eβ]]

We can use the Jacobi identity to get:

[Hi,[Eα,Eβ]] = -[Eα,[Eβ,Hi]] - [Eβ,[Hi,Eα]]

= βi[Eα,Eβ] + αi[Eα,Eβ]

= (αi + βi)[Eα,Eβ]

This shows that root vectors add.  We can now let
β = -α to get:

[Hi,[Eα,E-α]] = -[Eα,[E-α,Hi]] - [E-α,[Hi,Eα]]

= -αi[Eα,E-α] + αi[Eα,E-α]

= 0

Now, the implication of this is that [Eα,E-α] must
be some linear combination of Hi's since [Hi,Hj] = 0
(i.e. diagonal matrices commute).  Therefore, we can
write:

[Eα,E-α] = αiHi

Where αi are coefficients.  Note that these αi
should not be confused with the roots, αi, at this
point.  The subscript i is deliberate and indicates
that αiHi implies a sum over i (Einstein notation).
Therefore, the commutator becomes a linear
combination of the Hi's.
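For example, with α the V+ root, [Eα,E-α] = [V+,V-] comes out diagonal, and its expansion coefficients in H1 and H2 are proportional to the root itself. A numpy sketch, using Tr(HiHj) = δij/2 to project:

```python
import numpy as np

H1 = np.diag([0.5, -0.5, 0.0])
H2 = np.diag([1.0, 1.0, -2.0]) / (2 * np.sqrt(3))
Vp = np.zeros((3, 3)); Vp[0, 2] = 1.0   # E_alpha  = V+
Vm = Vp.T                               # E_-alpha = V-

C = Vp @ Vm - Vm @ Vp                   # [E_alpha, E_-alpha]

# Project onto H1, H2 using Tr(Hi Hj) = delta_ij / 2
a1 = 2 * np.trace(C @ H1)
a2 = 2 * np.trace(C @ H2)
print(a1, a2)  # proportional to the root (1/2, sqrt(3)/2)
print(np.allclose(C, a1 * H1 + a2 * H2))  # True
```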

Computation of αi
-----------------

[Eα,E-α] = αiHi

To get αi we take the dot product of the state with
Hj.

Hj.[Eα,E-α] = αiHi.Hj

In Bra-Ket notation the dot product is:

<Hj|[Eα,E-α]> = αi<Hj|Hi>

We now make use of the following:

<Hi|Hj> = Tr(Hi†Hj) = kDδij

This is the dot product of the 2 diagonal matrices.
Therefore,

-           -
| 1/2   0   0 |
H1 = |  0  -1/2  0 |
|  0    0   0 |
-           -

-                 -
| 1/2√3   0      0  |
H2 = |   0   1/2√3    0  |
|   0     0   -1/√3 |
-                 -

-          -
| 1/4  0   0 |
H1H1 = |  0  1/4  0 |  with Tr = 1/2
|  0   0   0 |
-          -

-             -
| 1/12   0   0  |
H2H2 = |  0   1/12  0  |  with Tr = 1/2
|  0     0  1/3 |
-             -

-             -
| 1/4√3    0   0 |
H1H2 = H2H1 = |   0   -1/4√3 0 |  with Tr = 0
|   0     0    0 |
-             -

Tr(Hj[Eα,E-α])/kD = αiTr(HjHi)/kD

Tr(HjEαE-α) - Tr(HjE-αEα) = αiTr(HjHi)

We can now make use of the cyclic property of
traces, Tr(ABC) = Tr(BCA) = Tr(CAB), on the
second term to write (CAB):

Tr(HjEαE-α) - Tr(EαHjE-α) = αiTr(HjHi)

Tr([Hj,Eα]E-α) = αiTr(HjHi)

Tr(αjEαE-α) = αiTr(HjHi)

αjTr(EαE-α) = αiTr(HjHi)

Now, from before, E-α = Eα† so,

αjTr(EαEα†) = αiTr(HjHi)

αj<Eα|Eα†> = αiTr(HjHi)

Therefore,

αj = αiTr(HjHi)

But Tr(HjHi) is the Cartan-Killing metric, gij so
we get:

αj = αigij

Consequently,

αj ≡ αi

----------------

The roots are the weights of the generators that
are not Cartan.  All of the states in the adjoint
representation with zero weight vectors are Cartan
generators (and they are orthonormal).

- All root vectors are of equal length = 1, i.e.

√[(0 + 1/2)² + (1/(2√3) + 1/√3)²] = 1

- Angles between adjacent root vectors are equal
(60° in this case).
- The sum of 2 roots sometimes equals another root,
sometimes not, i.e.

I+ + U+ = V+

I- + U+ ≠ root.

- The sum of a root with itself is never a root.

- A root is POSITIVE if its 1st non-zero component is
positive.  For example, in the fundamental representation
the weight (0,-1/√3) is negative: because its first
component is 0, the sign is determined by the sign
of the second component.  In the adjoint
representation the positive roots are (1/2,-√3/2),
(1,0) and (1/2,√3/2).

- If we know the positive roots it is easy to determine
all of the roots in the representation (i.e. reflect
them through the origin).

- SIMPLE ROOTS are the positive roots that cannot be
written as a sum of other positive roots.

- The number of simple roots is equal to the rank
of the group, r, and so equals the number of Cartan
generators.

- If αi and αj are simple, then αi - αj is not a root.
Why?  If say, αj - αi is positive, then we can
write αj = αi + (αj - αi).  But αj is then the
sum of 2 positive roots, which contradicts αj being
simple.

- The angle between the simple roots is given by
(anticipating the master formula integers m and n
derived below):

cosθαβ = ±√(mn)/2 = ±√((-1)(-1))/2 = ±1/2

Now, the dot product α1.α2 = |α1||α2|cosθαβ = -1/2
dictates that θαβ is in the 2nd quadrant.  Therefore,

θαβ = 120°

- Any positive root, φk, can be decomposed as a sum
of simple roots with integer coefficients ≥ 0.
Therefore,

φk = Σα kα·α,  where k = Σα kα

In this sense the simple roots form a basis.

Example:

φ3 = α1 + α2

- A simple root raises weights by a minimal amount.

For SU(3) the simple roots are:

V+:  α1 = (1/2,√3/2)

and

U-:  α2 = (1/2,-√3/2)

Therefore,

α1 + α2 = (1,0) = I+ (not a simple root)

The Highest Weight, ν
---------------------

One can define an ordering of weights and roots
such that if μ ≥ ν then μ - ν is positive, and
from this find the highest weight of the
representation, i.e.,

For the 3 representation:

(1/2,1/2√3) - (0,-1/√3) = (1/2,√3/2)

(1/2,1/2√3) - (-1/2,1/2√3) = (1,0)

Therefore, the highest weight is (1/2,1/2√3).

For the adjoint representation:

(1,0) - (1/2,√3/2) = (1/2,-√3/2)

(1,0) - (1/2,-√3/2) = (1/2,√3/2)

Therefore, the highest weight is (1,0).

The state with the highest weight, |ν>, satisfies:

I+|ν> = V+|ν> = U+|ν> = 0.

Or in the 'new' notation:

Eα|ν> = 0

In other words, it is the state that vanishes
upon application of any of the raising operators.
In the above case, the highest weight intuitively
is (1/2,1/2√3) corresponding to the state u.

Note: There must be a highest weight state, because
otherwise one could construct infinitely many
eigenstates by repeated application of the raising
operators.  Once the highest weight has been found
it is possible to construct all of the remaining
states by application of the lowering operators
until the lowest weight state is encountered.

su(2) Subalgebra
----------------

Note again that isospin follows exactly the same
mathematics as spin.  For SU(2) we can define the
operators:

J± = J1 ± iJ2 = (1/2)(σ1 ± iσ2)

-   -
J+ = (1/2)(σ1 + iσ2) = | 0 1 | d -> u
| 0 0 |
-   -

-   -
J- = (1/2)(σ1 - iσ2) = | 0 0 | u -> d
| 1 0 |
-   -

These operators act on the column vectors:

- -        - -
u = | 1 |  d = | 0 |
| 0 |      | 1 |
- -        - -

Proof for d -> u:

      -      -  - -     - -
J+d = | 0  1 |  | 0 |  = | 1 |  = u
      | 0  0 |  | 1 |   | 0 |
      -      -  - -     - -

The commutators are:

[J3,J±] = ±J±

and,

[J+,J-] = 2J3

Where,

J3 = σ3/2

J+ = 1/2(σ1 + iσ2)

J- = 1/2(σ1 - iσ2)

The Cartan generator is:

-       -
H = σ3/2 = | 1/2  0  |
| 0  -1/2 |
-       -

The weights are found from the eigenvalue equations.

- -        - -
H| 1 | = 1/2| 1 |  ∴ μ = 1/2 for u
| 0 |      | 0 |
- -        - -

- -         - -
H| 0 | = -1/2| 0 |  ∴ μ = -1/2 for d
| 1 |       | 1 |
- -         - -

This looks like:

Here we have neglected the hypercharge (Y coordinate)
since for our purposes we are only interested in
the isospin, I3 component.

Previously we said that the roots are the differences
of the weights from this fundamental representation.
Therefore,

α = w1 - w2 = (1/2,0) - (-1/2,0) = (1,0)

We now want to use our new notation in place of
the J's.  Let,

E3 replace J3

E+ replace J+

E- replace J-

We get:

[J3,J±] -> [E3,E±] = ±E±

Define E+ = (1/|α|)Eα and E- = (1/|α|)E-α

[E+,E-] = (1/|α|2)[Eα,E-α]

= (1/|α|2)αH

If we compare this to [J+,J-] = 2J3 (the factor of
2 is a matter of normalization) we can immediately
identify:

E3 = (1/|α|2)αH

Let's double check this:

[E3,E±] -> [(1/|α|2)αH,(1/|α|)E±α]

= (1/|α|3)[αH,E±α]

= (1/|α|3)α(±α)E±α

= ±(1/|α|)E±α

= ±E±

Therefore,

[E3,E±] = ±E±

Which is what we had before!

In other words, the operators acting horizontally
in SU(3) behave just like the raising and lowering
operators in SU(2).  As such they form an su(2)
subalgebra.

The Master Formula
------------------

The master formula puts constraints on the allowed
roots.  It is:

2αiμ/(αi)2 = -(p - q)

Where μ is a weight in some representation, αi is
a root, p is the number of times μ can be raised
and q is the number of times it can be lowered.

For the special case of the adjoint representation
this becomes:

2αiαj/(αi)2 = -(p - q)

Proof:

We make use of the su(2) subalgebra.  For any state,
μ, of a representation the E3 eigenvalue is:

E3 = (1/|αi|2)αiH

Therefore,

E3|μ> = (1/|αi|2)αiH|μ>

But,

H|μ> = αj|μ>

Therefore,

E3|μ> = (1/|αi|2)αi.αj|μ>

E3 produces either integer or 1/2 integer eigenvalues
in the x direction, i.e.,

(-1/2,√3/2) -> (1/2,√3/2) ... integer 1

(-1,0) -> (1,0) ... integer 2

(1/2,√3/2) -> (1,0) ... integer 1/2

This means that:

2αiαj/|αi|2 is always an integer.

We can write the E3 value of the state (E+)p|μ> as:

αi(αj + pαi)/|αi|2

If we let this correspond to the highest weight,
ν, such that (E+)p+1|μ> = 0, then the E3 value is:

E3 = αi(αj + pαi)/|αi|2 = αiαj/|αi|2 + p = ν

Likewise, we can do the same thing for the lowest
state (E-)q|μ>:

E3 = αi(αj - qαi)/|αi|2 = αiαj/|αi|2 - q = -ν

If we add the highest and lowest state E3 values
together we get:

2αiαj/|α|2 + p - q = 0

or,

2αiαj/|α|2 = -(p - q)

The MASTER FORMULA is interpreted as the root
αi adding to or subtracting from the weight/root,
αj, to produce another valid weight/root.  αj
adding to or subtracting from αi would be 2αjαi/|αj|2.

The numbers -(p - q) are called the DYNKIN
COEFFICIENTS.

For the fundamental representation, 3:

2(1/2,√3/2)(1/2,1/2√3) = 1 ∴ p = 0, q = 1

We can lower (1/2,√3/2) with (1/2,1/2√3) to get:

(1/2,1/2√3) - (1/2,√3/2) = (0,-1/√3)

However, if we try and lower (0,-1/√3) again with
(1/2,√3/2) the master formula tells us:

2(0,-1/√3)(1/2,√3/2) = -1 ∴ p = 1, q = 0

So we cannot go any further and have to pick another
root.   We can use this procedure to construct the
following weights/roots:

2(1/2,-√3/2)(0,-1/√3) = 1 ∴ p = 0, q = 1

(0,-1/√3) - (1/2,-√3/2) = (-1/2,1/2√3)

2(1,0)(-1/2,1/2√3) = -1 ∴ p = 1, q = 0

(-1/2,1/2√3) + (1,0) = (1/2,1/2√3)

_
For the anti-fundamental representation, 3:

2(1/2,-√3/2)(1/2,-1/2√3) = 1 ∴ p = 0, q = 1

(1/2,-1/2√3) - (1/2,-√3/2) = (0,1/√3)

2(1/2,√3/2)(0,1/√3) = 1 ∴ p = 0, q = 1

(0,1/√3) - (1/2,√3/2) = (-1/2,-1/2√3)

2(1/2,√3/2)(-1/2,-1/2√3) = -1 ∴ p = 1, q = 0

(-1/2,-1/2√3) + (1/2,√3/2) = (0,1/√3)

For the adjoint representation:

2(1,0)(-1/2,√3/2) = -1 ∴ p = 1, q = 0

(-1/2,√3/2) + (1,0) = (1/2,√3/2)

2(1/2,√3/2)(1/2,-√3/2) = -1 ∴ p = 1, q = 0

(1/2,-√3/2) + (1/2,√3/2) = (1,0)

2(1/2,√3/2)(1,0) = 1 ∴ p = 0, q = 1

(1,0) - (1/2,√3/2) = (1/2,-√3/2)

2(1/2,√3/2)(1/2,√3/2) = 2 ∴ p = 0, q = 2

(1/2,√3/2) - (1/2,√3/2) = (0,0)

(0,0) - (1/2,√3/2) = (-1/2,-√3/2)

2(-1/2,√3/2)(-1/2,-√3/2) = -1 ∴ p = 1, q = 0

(-1/2,-√3/2) + (-1/2,√3/2) = (-1,0)

2(1/2,-√3/2)(1/2,-√3/2) = 2 ∴ p = 0, q = 2

(1/2,-√3/2) - (1/2,-√3/2) = (0,0)

(0,0) - (1/2,-√3/2) = (-1/2,√3/2)

For the fundamental representation, 2:

2(1,0)(1/2,0)/(1)² = 1 ∴ p = 0, q = 1

(1/2,0) - (1,0) = (-1/2,0)
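The bookkeeping above is mechanical enough to script. A sketch using the SU(3) values from this section:

```python
import numpy as np

def master(alpha, mu):
    # -(p - q) = 2 alpha.mu / alpha.alpha
    return 2 * np.dot(alpha, mu) / np.dot(alpha, alpha)

a1 = np.array([0.5,  np.sqrt(3) / 2])        # simple root (V+)
a2 = np.array([0.5, -np.sqrt(3) / 2])        # simple root (U-)
w  = np.array([0.5,  1 / (2 * np.sqrt(3))])  # highest weight of the 3

print(master(a1, w), master(a2, w))  # 1 and 0
```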

We can go one step further with the master
formula.  Writing

2(α.β)/(α.α) = -(p - q) = m

2(α.β)/(β.β) = -(p' - q') = n

If we multiply the two together we get:

4(α.β)²/((α.α)(β.β)) = mn

Therefore,

cos²θαβ = (α.β)²/((α.α)(β.β)) = mn/4

If we divide the two we get:

(α.α)/(β.β) = n/m

mn     θ
-- ---------
0  90°
1  60°, 120°
2  45°, 135°
3  30°, 150°
4  180°

This means that there are only 4 possibilities for
angles between roots.

Cartan Matrix
-------------

We can construct a matrix using the above master
formula and the simple roots.

Aij = 2(αi.αj)/(αi.αi)

Therefore, for the simple roots of SU(3):

A11 = 2(1/2,√3/2)(1/2,√3/2)/(1/2,√3/2)(1/2,√3/2) = 2

A12 = 2(1/2,√3/2)(1/2,-√3/2)/(1/2,√3/2)(1/2,√3/2) = -1

A21 = 2(1/2,-√3/2)(1/2,√3/2)/(1/2,-√3/2)(1/2,-√3/2) = -1

A22 = 2(1/2,-√3/2)(1/2,-√3/2)/(1/2,-√3/2)(1/2,-√3/2) = 2

This leads to the following Cartan matrix:

    -       -
A = |  2 -1 |
    | -1  2 |
    -       -

The rows correspond to the DYNKIN COEFFICIENTS,
-(p - q).  The first row is associated with the
action of α1 and the second row is associated
with the action of α2.
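The Cartan matrix can be reproduced directly from the simple roots. A short Python sketch (variable names are illustrative):

```python
import math

# Simple roots of SU(3)
roots = [(0.5, math.sqrt(3) / 2), (0.5, -math.sqrt(3) / 2)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Aij = 2(alpha_i . alpha_j)/(alpha_i . alpha_i)
A = [[round(2 * dot(ai, aj) / dot(ai, ai)) for aj in roots] for ai in roots]
print(A)   # -> [[2, -1], [-1, 2]]
```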

The individual rows of the matrix can be written
as:

-----------
| 2  | -1 |
-----------
  ^     ^
  |     |
  |     q = 0, p = 1
  |
  q = 2, p = 0

and,

-----------
| -1 | 2  |
-----------
  ^     ^
  |     |
  |     q = 2, p = 0
  |
  q = 0, p = 1

These are called DYNKIN LABELS.

Fundamental Weights
-------------------

SU(3) has 2 simple roots and therefore 2 fundamental
weights, w1 and w2, which form a basis dual to the
simple roots of the adjoint representation (each
fundamental weight is orthogonal to the other simple
root).  The fundamental weights of SU(3) are the
highest weight of the 3, (1/2,1/2√3), and the highest
              _
weight of the 3, (1/2,-1/2√3).

Proof:

2wj.αk/(αk)² = δjk (δjk = 1 if j = k, 0 if j ≠ k)

α1 = (1/2,√3/2) and w1 = (1/2,1/2√3)

α2 = (1/2,-√3/2) and w2 = (1/2,-1/2√3)

α1w1:  2(1/2,√3/2)(1/2,1/2√3)/(1/2,√3/2)(1/2,√3/2) = 1

α1w2:  2(1/2,√3/2)(1/2,-1/2√3)/(1/2,√3/2)(1/2,√3/2) = 0 (orthogonal)

α2w1:  2(1/2,-√3/2)(1/2,1/2√3)/(1/2,-√3/2)(1/2,-√3/2) = 0 (orthogonal)

α2w2:  2(1/2,-√3/2)(1/2,-1/2√3)/(1/2,-√3/2)(1/2,-√3/2) = 1

Which can be written as the matrix:

-      -
| 1  0 | = δjk
| 0  1 |
-      -
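This duality check is also easy to verify numerically; a small Python sketch:

```python
import math

s3 = math.sqrt(3)
alphas  = [(0.5, s3 / 2), (0.5, -s3 / 2)]              # simple roots a1, a2
weights = [(0.5, 1 / (2 * s3)), (0.5, -1 / (2 * s3))]  # fundamental weights w1, w2

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# 2 wj.ak/(ak.ak) should reproduce the identity matrix (delta_jk)
delta = [[round(2 * dot(w, a) / dot(a, a)) for a in alphas] for w in weights]
print(delta)   # -> [[1, 0], [0, 1]]
```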

We can expand any weight of any representation in
terms of the fundamental weights as:

     r
w =  Σ aiwi
    i=1

Where ai are the DYNKIN COEFFICIENTS obtained from
the Cartan matrix.

The 3 weights of the fundamental representation of
SU(3):

p =  0    0
q =  1    0
---------
|  1 |  0 |
---------

->  1(1/2,1/2√3) + 0(1/2,-1/2√3) = (1/2,1/2√3)

p =  0    1
q =  0    0

---------
|  0 | -1 |
---------

->  0(1/2,1/2√3) - 1(1/2,-1/2√3) = (-1/2,1/2√3)

p =  1    0
q =  0    1
---------
| -1 |  1 |
---------

-> -1(1/2,1/2√3) + 1(1/2,-1/2√3) = (0,-1/√3)

The 3 weights of the anti-fundamental representation
of SU(3):

p =  0    0
q =  0    1
---------
|  0 |  1 |
---------

-> 0(1/2,1/2√3) + 1(1/2,-1/2√3) = (1/2,-1/2√3)

p =  0    1
q =  1    0
---------
|  1 | -1 |
---------

-> 1(1/2,1/2√3) - 1(1/2,-1/2√3) = (0,1/√3)

p =  1    0
q =  0    0
---------
| -1 |  0 |
---------

-> -1(1/2,1/2√3) + 0(1/2,-1/2√3) = (-1/2,-1/2√3)

The 6 roots of the adjoint representation of SU(3):

p =  0    0
q =  1    1
---------
|  1 |  1 |
---------

->  1(1/2,1/2√3) + 1(1/2,-1/2√3) = (1,0)

p =  0    1
q =  2    0
---------
|  2 | -1 |
---------

->  2(1/2,1/2√3) - 1(1/2,-1/2√3) = (1/2,√3/2) = α1

p =  1    0
q =  0    2
---------
| -1 |  2 |
---------

-> -1(1/2,1/2√3) + 2(1/2,-1/2√3) = (1/2,-√3/2) = α2

p =  0    2
q =  1    0
---------
|  1 | -2 |
---------

->  1(1/2,1/2√3) - 2(1/2,-1/2√3) = (-1/2,√3/2) = -α2

p =  2    0
q =  0    1
---------
| -2 |  1 |
---------

-> -2(1/2,1/2√3) + 1(1/2,-1/2√3) = (-1/2,-√3/2) = -α1

p =  1    1
q =  0    0
---------
| -1 | -1 |
---------

-> -1(1/2,1/2√3) - 1(1/2,-1/2√3) = (-1,0)
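All of the expansions above are the same linear combination w = a1w1 + a2w2 with different Dynkin coefficients. A Python sketch (the function name `weight` is just for illustration) that reproduces them:

```python
import math

s3 = math.sqrt(3)
w1 = (0.5, 1 / (2 * s3))     # fundamental weight: highest weight of the 3
w2 = (0.5, -1 / (2 * s3))    # fundamental weight: highest weight of the 3-bar

def weight(a1, a2):
    # Expand a weight from its Dynkin coefficients: w = a1*w1 + a2*w2
    return (a1 * w1[0] + a2 * w2[0], a1 * w1[1] + a2 * w2[1])

print(weight(1, 1))    # the root alpha1 + alpha2 = (1, 0)
print(weight(2, -1))   # the simple root alpha1 = (1/2, sqrt(3)/2)
print(weight(-1, 1))   # a weight of the 3: (0, -1/sqrt(3))
```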

We can use the Dynkin labels to construct diagrams
showing all the weight/roots of a representation.

The 3 Representation:

p =  0   0
q =  1   0
              -------
ν:           | 1 | 0 |      (1/2,1/2√3)
              -------
                 /
p =  0   1
q =  2   0
              -------
             | 2 |-1 |      Subtract (1/2,√3/2)
              -------
                 /
p =  1   0
q =  0   1
              -------
ν - α1:      |-1 | 1 |      to get (0,-1/√3)
              -------
                 \
p =  1   0
q =  0   2
              -------
             |-1 | 2 |      Subtract (1/2,-√3/2)
              -------
                 \
p =  0   1
q =  0   0
              -------
ν - α1 - α2: | 0 |-1 |      to get (-1/2,1/2√3)
              -------

Therefore,

ν = (1/2, 1/2√3)

ν - α1 = (1/2,1/2√3) - (1/2,√3/2) = (0,-1/√3)

ν - α1 - α2 = (1/2,1/2√3) - (1/2,√3/2) - (1/2,-√3/2)

= (-1/2,1/2√3)
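This descent can be automated: subtracting row i of the Cartan matrix from the Dynkin labels is the same as subtracting the simple root αi from the weight. A Python sketch of the walk down the 3 (names are illustrative):

```python
import math

s3 = math.sqrt(3)
A = [[2, -1], [-1, 2]]                         # Cartan matrix rows for alpha1, alpha2
w1, w2 = (0.5, 1 / (2 * s3)), (0.5, -1 / (2 * s3))   # fundamental weights

def to_vector(labels):
    # Convert Dynkin labels back to an explicit weight vector
    return (labels[0] * w1[0] + labels[1] * w2[0],
            labels[0] * w1[1] + labels[1] * w2[1])

labels = (1, 0)                                # highest weight of the 3
path = [labels]
for i in (0, 1):                               # lower by alpha1, then by alpha2
    labels = (labels[0] - A[i][0], labels[1] - A[i][1])
    path.append(labels)

print(path)        # -> [(1, 0), (-1, 1), (0, -1)]
for l in path:
    print(l, to_vector(l))
```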

    _
The 3 Representation:

                                 _
We can do the same thing for the 3 representation.

              -------
ν:           | 0 | 1 |      (1/2,-1/2√3)
              -------
                 \
              -------
             |-1 | 2 |      Subtract (1/2,-√3/2)
              -------
                 \
              -------
ν - α2:      | 1 |-1 |      to get (0,1/√3)
              -------
                 /
              -------
             | 2 |-1 |      Subtract (1/2,√3/2)
              -------
                 /
              -------
ν - α2 - α1: |-1 | 0 |      to get (-1/2,-1/2√3)
              -------

Therefore,

ν = (1/2,-1/2√3)

ν - α2 = (1/2,-1/2√3) - (1/2,-√3/2) = (0,1/√3)

ν - α2 - α1 = (1/2,-1/2√3) - (1/2,-√3/2) - (1/2,√3/2)

= (-1/2,-1/2√3)

For the adjoint representation we start by adding
the 2 simple roots to get the highest weight state;
adding the negatives of the simple roots gives the
lowest weight state.

p =  0   0
q =  1   1
                   ---------
α1 + α2:          |  1 |  1 |               k = 2
                   ---------
                     /   \
p =  0   1            1   0
q =  2   0            0   2
             ---------   ---------
α1,α2:      |  2 | -1 | | -1 |  2 |         k = 1
             ---------   ---------
                  \         /
                   ---------
Cartan            |  0 |  0 |               k = 0
generators:        ---------
                     /   \
p =  0   2            2   0
q =  1   0            0   1
             ---------   ---------
-α1,-α2:    |  1 | -2 | | -2 |  1 |         k = -1
             ---------   ---------
                  \         /
p =  1   1
q =  0   0
                   ---------
-α1 - α2:         | -1 | -1 |               k = -2
                   ---------

Dynkin Diagrams
---------------

A Dynkin diagram depicts the Cartan matrix by a
graph with simple roots as vertices connected by
one or more 'edges'.

So for our Cartan matrix we can draw the following
Dynkin diagram:

(1/2,√3/2) o----o (1/2,-√3/2)

meaning that we have 2 simple roots and the angle
between them is 120°.
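The 120° angle in the diagram can be checked directly from the simple roots; a quick Python sketch:

```python
import math

a1 = (0.5,  math.sqrt(3) / 2)
a2 = (0.5, -math.sqrt(3) / 2)

# cos(theta) = a1.a2 / (|a1||a2|)
c = (a1[0] * a2[0] + a1[1] * a2[1]) / math.sqrt(
        (a1[0]**2 + a1[1]**2) * (a2[0]**2 + a2[1]**2))
angle = round(math.degrees(math.acos(c)))
print(angle)   # -> 120
```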