The Essential Mathematics of Quantum Mechanics
----------------------------------------------

Linear Algebra Representation (Dirac Notation)
----------------------------------------------

In classical mechanics, all possible states of a
system are represented by a phase space.  In QM a
state is a complex function that lives in an abstract
complex vector space called HILBERT SPACE.  In other
words, vectors are represented by complex functions
and should not be viewed as vectors in the traditional
sense (i.e. velocity, acceleration etc.).  The inner
product rules for functions are defined in exactly the
same way as for finite-dimensional vectors, but since
there are infinitely many x values the summation sign
is replaced by an integral over all values of x.

Before we go any further we need to introduce the
DIRAC NOTATION.

Dirac Notation
--------------

We denote BRAs and KETs as <φ| and |ψ> respectively.
They can be interpreted in several different ways:

1. |ψ> is a vector in Hilbert space.

2. <ψ| is a dual vector in conjugate Hilbert space.

3. <φ|ψ> is the inner product of ψ with φ.  In
integral form this is:

<φ|ψ> = ∫φ*(x)ψ(x)dx is the degree to which ψ
is the same as φ.  This is called the OVERLAP
INTEGRAL.

<ψ|ψ> = 1 for a normalized state ψ.

4. <ψ|φ> = <φ|ψ>*

5. <φ|O|ψ> = <φ|[O|ψ>] = [<φ|O]|ψ>.  In integral form
this is:

<φ|O|ψ> = ∫φ*(x)Oψ(x)dx

6. OM|ψ> = O[M|ψ>]

Fourier Series, Basis Vectors and Completeness
----------------------------------------------

Vector spaces have bases.  We denote a basis by
ei.  If the basis is orthonormal (orthogonal and
unit length) then:

<ei|ej> = δij

We can expand a vector, V, as:

V = ΣViei
i

The equivalent in QM is the FOURIER SERIES expansion
given by:

|ψ> = Σαi|ei>
i

Where ψ is the WAVEFUNCTION and αi are coefficients.

The above is referred to as COMPLETENESS.  It means
that any vector from the space under consideration
can be represented as linear combination of basis
vectors from this space.

The components of V can be found by projecting the
vector onto the respective directions.  This is done
by taking the dot product of V with the corresponding
basis vector.

ej.V = (ΣieiVi).ej

= Σi(ei.ej)Vi

= Vj since ei.ej = δij

Likewise, to get αi we proceed as follows:

<ej|ψ> = Σαi<ej|ei>
i

= αj

In integral form this is:

αj = ∫ej*ψ dx

So, basically, αj measures the amount of ψ(x) along
the corresponding basis vector.

----------------------

|ψ> = Σαi|ei>

≡ Σ|ei>αi

Now αi = <ei|ψ>.  Therefore,

|ψ> = Σ|ei><ei|ψ>
i

So the implication is that Σ|ei><ei| = 1
                           i

This is an important formula in QM and is called the
COMPLETENESS RELATION (or resolution of the identity).
|ei><ej| is an OUTER PRODUCT.  To see the difference
between the outer product and the inner product,
consider two regular vectors.  The inner product is
given by:

              - -
  | 1  0  0 | | 1 | = 1        (inner product: a scalar)
              | 0 |
              | 0 |
              - -

  - -                -         -
  | 1 |              | 1  0  0 |
  | 0 | | 1  0  0 | = | 0  0  0 |   (outer product: a matrix)
  | 0 |              | 0  0  0 |
  - -                -         -

Summing the outer products of an orthonormal basis
with themselves gives the identity matrix:

              -       -
              | 1 0 0 |
Σ|ei><ei|  =  | 0 1 0 |  =  I
i             | 0 0 1 |
              -       -

This means that we can write:

|ψ> = Σ|ei><ei|ψ>
i

Likewise,

<ψ| = Σ<ψ|ei><ei|
i

This will be very useful as we move forward.
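As a quick numerical sketch of the completeness relation (a hypothetical 3-dimensional example in numpy; the basis and test vector are illustrative):

```python
import numpy as np

# Check sum_i |e_i><e_i| = I using the standard basis of C^3.
basis = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])]

# Summing the outer products |e_i><e_i| over the basis gives the identity.
identity = sum(np.outer(e, e) for e in basis)
assert np.allclose(identity, np.eye(3))

# Expanding an arbitrary vector: psi = sum_i <e_i|psi> |e_i>
psi = np.array([1 + 2j, 0.5, -1j])
reconstructed = sum(np.vdot(e, psi) * e for e in basis)
assert np.allclose(reconstructed, psi)
```

The same two assertions hold for any orthonormal basis, not just the standard one.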

Linear Operators - The Matrix Element
-------------------------------------

Simplistically, a linear operator, O, can be regarded
as a machine that takes a vector as an input and
outputs another vector.  First expand an arbitrary
vector |φ> in the basis:

|φ> = Σαi|ei>
      i

<ej|φ> = Σαi<ej|ei> = Σαiδji = αj
         i            i

|φ> = Σ|ei><ei|φ>
      i

Now consider:

O|ψ> = |ψ'>

<ei|O|ψ> = <ei|ψ'> = βi

Σ<ei|O|ej><ej|ψ> = Σ<ei|O|ej>αj
j                  j

= βi

<ei|O|ej> = Oij is called the MATRIX ELEMENT.

We can write this as:

-              -  -  -     -  -
| O11 O12 O13 ... || α1 |   | β1 |
| O21 O22 O23 ... || α2 | = | β2 |
| O31 O32 O33 ... || α3 |   | β3 |
|  .   .   .  ... || .  |   | .  |
-              -  -  -     -  -

Or, equivalently:

-                                  -
| <e1|O|e1>  <e1|O|e2>  <e1|O|e3> ... |
| <e2|O|e1>  <e2|O|e2>  <e2|O|e3> ... |
| <e3|O|e1>  <e3|O|e2>  <e3|O|e3> ... |
|     .         .          .      ... |
-                                  -

If the ei's are eigenvectors of O then the matrix is
diagonal:

-                                 -
| <e1|O|e1>     0         0 ....... |
|     0     <e2|O|e2>     0 ....... |
|     0         0     <e3|O|e3> ... |
|     .         .         .     ... |
-                                 -

The integral form of Oij is:

Oij = ∫ei*Oej dx

Hermitian Operators
-------------------

Define |ej'> = O|ej> so that <ei|O|ej> = <ei|ej'>.

Now <ei|ej'>* = <ej'|ei> = <ej|O†|ei>.  Therefore,

<ei|O|ej>* = <ej|O†|ei>

Oij* = (O†)ji   i.e.   (O†)ij = Oji*

This is referred to as HERMITIAN CONJUGATION.  An
operator satisfying O† = O is a HERMITIAN OPERATOR.

Hermitian Matrices
------------------

Hermitian matrices as operators are the analog of
'real' for numbers.  For a number to be real N* = N.
For an operator this implies that O† = O.  This
requires that the diagonal elements must be real
since Oii must equal Oii*.  The off-diagonal
elements can be complex, but elements reflected
about the diagonal must be complex conjugates of
each other.  And, for obvious reasons, Hermitian
matrices must be square.

If a Hermitian matrix has complex off-diagonal
elements it cannot be SYMMETRIC (i.e. O = Oᵀ).
Hence, a Hermitian matrix that has only real
elements is forced to be symmetric.  For example,

    -       -
O = | 0  -i |  is Hermitian but not symmetric.
    | i   0 |
    -       -

    -      -         -       -
O = | 0  1 |   and   | 1   0 |  are both Hermitian
    | 1  0 |         | 0  -1 |  AND symmetric.
    -      -         -       -

It can easily be seen that these matrices satisfy:

O† = O

Hermitian operators correspond to the observables
of a system.
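A minimal numerical sketch of the two defining facts above (the matrix entries are illustrative):

```python
import numpy as np

# A sample Hermitian matrix: real diagonal, off-diagonal entries are
# complex conjugates of their mirror images.
O = np.array([[2.0, 1 - 3j],
              [1 + 3j, -1.0]])

# O equals its own Hermitian conjugate (conjugate transpose).
assert np.allclose(O, O.conj().T)

# Its eigenvalues are real, as required for an observable.
assert np.allclose(np.linalg.eigvalsh(O).imag, 0)
```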

Unitary Operators
-----------------

A unitary operator preserves the lengths and angles
between vectors, and it can be considered as a type
of rotation operator in abstract vector space.  Like
Hermitian operators, the eigenvectors of a unitary
matrix are orthogonal. However, its eigenvalues are
not necessarily real.  Consider:

U|ψ> = |ψ'>

<φ'| = <φ|U†

<φ'|ψ'> = <φ|U†U|ψ>

For <φ'|ψ'> to equal <φ|ψ> requires that U†U = I

Example:

    -       -
U = | 0  -i |  is unitary.
    | i   0 |
    -       -

Proof:

     -       -
U† = | 0  -i |   (here U happens to equal its own
     | i   0 |    Hermitian conjugate)
     -       -

      -       - -       -     -      -
U†U = | 0  -i | | 0  -i |  =  | 1  0 |  =  I
      | i   0 | | i   0 |     | 0  1 |
      -       - -       -     -      -

so lengths and inner products are preserved.
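The same check can be done numerically, including the statement that a unitary map preserves inner products (the test vectors are arbitrary):

```python
import numpy as np

# The matrix from the example above (this is the Pauli matrix sigma_y).
U = np.array([[0, -1j],
              [1j, 0]])

# U is unitary: U†U = I.
assert np.allclose(U.conj().T @ U, np.eye(2))

# A unitary map preserves <phi|psi>: compare before and after applying U.
phi = np.array([1.0, 2j])
psi = np.array([0.5, -1.0])
assert np.isclose(np.vdot(phi, psi), np.vdot(U @ phi, U @ psi))
```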

The Eigenvalue Equation
-----------------------

O|ψi> = λi|ψi>

O is Hermitian
ψi = eigenvectors
λi = eigenvalues

The eigenvalue equation can also be written in
terms of the matrix elements as:

O|ψ> = λ|ψ>

OΣ|ei><ei|ψ> = λ|ψ>
 i

ΣO|ei><ei|ψ> = λ|ψ>
i

Multiply by <ej|

Σ<ej|O|ei><ei|ψ> = λ<ej|ψ>
i

ΣOji<ei|ψ> = λ<ej|ψ>
i

Explicitly, with αi = <ei|ψ>, this looks like:

-                 - -    -      -    -
| O11 O12 O13 ... | | α1 |      | α1 |
| O21 O22 O23 ... | | α2 | = λ  | α2 |
| O31 O32 O33 ... | | α3 |      | α3 |
|  .   .   .  ... | | .  |      | .  |
-                 - -    -      -    -

Eigenvalues of Hermitian matrices are real.

Proof:

O|I> = λI|I>

Taking the Hermitian conjugate:  <I|O† = λI*<I|

Since O† = O:  <I|O = λI*<I|

<I|O|I> = λI<I|I>

<I|O|I> = λI*<I|I>

λI* = λI

The eigenvectors of a Hermitian matrix are always
orthogonal as long as the eigenvectors have
different eigenvalues.

Proof:

O|I> = λI|I>

O|J> = λJ|J>  =>  <J|O† = λJ*<J|  =>  <J|O = λJ<J|
(λJ is real since O is Hermitian)

<J|O|I> = λI<J|I>

<J|O|I> = λJ<J|I>

(λI - λJ)<J|I> = 0

|I> and |J> must be orthogonal if (λI - λJ) ≠ 0

If 2 eigenvectors have the same eigenvalue they are
said to be DEGENERATE.  However, in this case it is
still possible to find a set of ORTHONORMAL eigenvectors
by using the GRAM SCHMIDT process:

wa = va - (va.vb/vb.vb)vb

Where va and vb are non-orthogonal vectors and wa is
the component of va orthogonal to vb (it still needs
to be normalized).  Let's look at an example.
Consider,

-     -
| 0 1 1 |
| 1 0 1 |
| 1 1 0 |
-     -

This has the following eigenvalues and eigenvectors.

             - -
             | 1 |
λ1 = 2  v1 = | 1 |
             | 1 |
             - -

              -    -        -    -
              | -1 |        | -1 |
λ2 = -1  v2 = |  1 |   v3 = |  0 |
              |  0 |        |  1 |
              -    -        -    -

v1 is orthogonal to v2 and v3 but v2 is not orthogonal
to v3.  We can use Gram-Schmidt to replace v3 with w3.

w3 = v3 - (v3.v2/v2.v2)v2

     -    -          -    -     -      -
     | -1 |          | -1 |     | -1/2 |
w3 = |  0 | - (1/2)  |  1 |  =  | -1/2 |
     |  1 |          |  0 |     |  1   |
     -    -          -    -     -      -

To complete the orthonormalization we normalize the
3 vectors.

|v1| = √[1² + 1² + 1²] = √3

|v2| = √[(-1)² + 1² + 0²] = √2

|w3| = √[(-1/2)² + (-1/2)² + 1²] = √(3/2)

Therefore:

             -      -
             | 1/√3 |
λ1 = 2  v1 = | 1/√3 |
             | 1/√3 |
             -      -

              -       -
              | -1/√2 |
λ2 = -1  v2 = |  1/√2 |
              |   0   |
              -       -

              -       -
              | -1/√6 |
λ3 = -1  w3 = | -1/√6 |
              |  2/√6 |
              -       -

These 3 vectors are now orthonormal to each other.
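The worked example above can be reproduced numerically; this sketch performs the single Gram-Schmidt step and then confirms pairwise orthonormality:

```python
import numpy as np

# Degenerate eigenvectors of [[0,1,1],[1,0,1],[1,1,0]] from the example.
v1 = np.array([1.0, 1.0, 1.0])
v2 = np.array([-1.0, 1.0, 0.0])
v3 = np.array([-1.0, 0.0, 1.0])

# w3 = v3 - (v3.v2 / v2.v2) v2 removes the v2 component from v3.
w3 = v3 - (v3 @ v2) / (v2 @ v2) * v2
assert np.allclose(w3, [-0.5, -0.5, 1.0])

# Normalize all three and confirm pairwise orthonormality.
vecs = [v / np.linalg.norm(v) for v in (v1, v2, w3)]
for i, a in enumerate(vecs):
    for j, b in enumerate(vecs):
        assert np.isclose(a @ b, 1.0 if i == j else 0.0)
```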

Eigendecomposition (Diagonalization)
------------------------------------

A matrix is diagonalizable if there exists an
invertible matrix, U, that satisfies the
relationship:

U⁻¹OU = Λ

The following procedure is used to diagonalize a
matrix.

1.  Find the eigenvalues and eigenvectors of O.
If the eigenvectors are linearly independent
proceed to step 2, otherwise the matrix is not
diagonalizable.

2.  Write the eigenvectors, vn, as the columns of
a matrix:

    -            -
U = | v1  v2 ... |
    -            -

3.  Compute U⁻¹OU = Λ.  The result will be a diagonal
matrix of the eigenvalues.

    -             -
    | λ1 0 .... 0 |
Λ = | 0  λ2 ... 0 |
    | :  :      : |
    | 0  0 ... λn |
    -             -

The set of eigenvalues of O is called its SPECTRUM.

Example:

    -       -
    | 0 1 1 |
O = | 1 0 1 |
    | 1 1 0 |
    -       -

Without Gram-Schmidt we can write U as:

    -          -
    | 1  1  -1 |
U = | 1 -1   0 |
    | 1  0   1 |
    -          -

      -                 -
      |  1/3  1/3   1/3 |
U⁻¹ = |  1/3 -2/3   1/3 |
      | -1/3 -1/3   2/3 |
      -                 -

U⁻¹OU gives:

-          -
| 2  0   0 |
| 0 -1   0 |
| 0  0  -1 |
-          -

So the diagonal consists of the eigenvalues of the
original matrix.

If we were to repeat the process using the orthonormal
basis obtained from Gram-Schmidt we get:

    -                     -
    | 1/√3  -1/√2   -1/√6 |
U = | 1/√3   1/√2   -1/√6 |
    | 1/√3    0      2/√6 |
    -                     -

           -                     -
           |  1/√3   1/√3   1/√3 |
U⁻¹ = Uᵀ = | -1/√2   1/√2    0   |
           | -1/√6  -1/√6   2/√6 |
           -                     -

Again U⁻¹OU gives:

-           -
| 2   0   0 |
| 0  -1   0 |
| 0   0  -1 |
-           -

Therefore, diagonalizing O after going through the
Gram-Schmidt process yields the same result as
before.  So orthogonality/orthonormality is not
a factor in determining whether or not the matrix
can be diagonalized.  The only requirement is that
U⁻¹OU = Λ.  Using Gram-Schmidt is computationally
easier for large matrices because in a real
orthonormal basis U⁻¹ = Uᵀ, and the transpose is
trivial to compute.

U⁻¹OU = Λ can be rearranged as follows:

Multiply from the left by U to get:

OU = UΛ

Multiply from the right by U⁻¹ to get:

O = UΛU⁻¹
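The example matrix can be diagonalized in a few lines of numpy; `numpy.linalg.eigh` returns an orthonormal eigenvector matrix for Hermitian input, so U⁻¹ = Uᵀ here:

```python
import numpy as np

# Diagonalizing O = [[0,1,1],[1,0,1],[1,1,0]] as in the example above.
O = np.array([[0.0, 1, 1], [1, 0, 1], [1, 1, 0]])
eigvals, U = np.linalg.eigh(O)

# U^-1 O U = Lambda, with U^-1 = U.T since U is orthogonal.
Lam = U.T @ O @ U
assert np.allclose(Lam, np.diag(eigvals))
assert np.allclose(sorted(eigvals), [-1, -1, 2])

# The rearranged form O = U Lambda U^-1 recovers the original matrix.
assert np.allclose(U @ np.diag(eigvals) @ U.T, O)
```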

Eigenvectors as Bases
---------------------

The fact that the eigenvectors of Hermitian operators
are mutually orthogonal means that they form a complete
basis spanning the vector space.  Therefore, the vi's
from the previous discussion are equivalent to the ei's.
Now that we have a basis we can write any function in
that space as a linear combination of the eigenfunctions
of the operator.

Expectation (Average) Value of an Observed Quantity
---------------------------------------------------

The expectation value of O is denoted by, <O> and is
defined as:

<O> = <ψ|O|ψ> = Σ<ψ|O|ei><ei|ψ>
i

But O|ei> = λi|ei>

Therefore,

<ψ|O|ψ> = Σλi<ψ|ei><ei|ψ>
          i

        = Σ|<ei|ψ>|²λi
          i

        = ΣPriλi
          i

Where Pri is the probability of obtaining the
eigenvalue λi.

The expectation value can be interpreted as the average
value of the statistical distribution of eigenvalues
obtained from a large number of measurements.
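The identity <O> = Σ Pri λi can be checked numerically; the operator and test state below are illustrative:

```python
import numpy as np

# A Hermitian operator and a normalized test state.
O = np.array([[0.0, 1, 1], [1, 0, 1], [1, 1, 0]])
eigvals, eigvecs = np.linalg.eigh(O)

psi = np.array([1.0, 1j, 0.0]) / np.sqrt(2)

# Probabilities Pr_i = |<e_i|psi>|^2 from projecting onto each eigenvector.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
assert np.isclose(probs.sum(), 1.0)

# The probability-weighted eigenvalues equal <psi|O|psi>.
expectation = probs @ eigvals
assert np.isclose(expectation, np.vdot(psi, O @ psi).real)
```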

Position and Momentum Operators
-------------------------------

We now look at 2 important operators - position and
momentum.

Position Operator
-----------------

The position operator acts by multiplying the
wavefunction by x.  Its eigenvalue equation is:

x|ψ> = λ|ψ>

Check that x is Hermitian:

<ψ(x)|x|ψ(x)> = ∫ψ*xψdx, which is real since x is real.

Therefore x is Hermitian.

We can rewrite the eigenvalue equation as:

(x - λ)ψ(x) = 0

Therefore ψ(x) must look like:

>ε<
||^
|| 1/ε  ε is very small
||v
-----------------  ------------------- x

The eigenvectors are the Dirac delta functions:

∫δ(x - λ) dx = 1

Thus ∫δ(x - λ)F(x)dx is only defined at x = λ

Consider,

<λ|ψ(x)> = ∫δ(x-λ)ψ(x) dx

= ψ(λ)∫δ(x - λ) dx

= ψ(λ)

Relabeling the eigenvalue λ as x:

<x|ψ> = ψ(x), the component of ψ along the position
eigenvector |x>.

This is the expression of ψ in the x coordinate
(Definition 3 from above).

Momentum Operator
-----------------

Let us now look at the operator -i∂/∂x acting on ψ.
We write:

p|ψ> = -i∂/∂x|ψ>

The factor of -i is required to make the derivative
operator Hermitian.  This can be seen as follows:

<ψ|-i∂/∂x|ψ> = ∫ψ*(-i∂/∂x)ψ

= -i∫ψ*(∂ψ/∂x)

The complex conjugate of this is i∫ψ(∂ψ*/∂x)dx

Apply integration by parts:  ∫u dv = uv - ∫v du

u = ψ  ∴  du = (∂ψ/∂x)dx

dv = (∂ψ*/∂x)dx  ∴  v = ψ*

                      ∞
i∫ψ(∂ψ*/∂x)dx = i[ψψ*]   - i∫ψ*(∂ψ/∂x)dx
                      -∞

(Here we are making the assumption that ψ -> 0
at ±∞, so the boundary term vanishes.)

This gives:

i∫ψ(∂ψ*/∂x)dx = -i∫ψ*(∂ψ/∂x)dx

which is the integral we started with.  Therefore,
<ψ|-i∂/∂x|ψ> equals its own complex conjugate, so
it is real and -i∂/∂x is Hermitian.

We can now write the eigenvalue equation as:

-i∂/∂x|ψ> = k|ψ>

This has a solution of the form ψ = exp(ikx)

= coskx + isinkx

This gives us the first notion of wave-particle
duality as proposed by De Broglie.

Particle on a Ring
------------------

The Born Rule
-------------

The Born rule in QM postulates that the probability
that a measurement on a quantum system will yield a
given result is proportional to the square of the
magnitude of the particle's wavefunction at that point.
Therefore, the probability of finding the particle at
x, Pr(x) is given by:

Pr(x) = N²ψ*(x)ψ(x) = N²|ψ|²

Where N is a normalization factor.

Therefore, the probability of finding the particle
somewhere is:

    ∫dx Pr(x) = N²   ∫dx ψ*(x)ψ(x) = 1
allspace          allspace

However, if we do this with the above wavefunction,
evaluating the integral becomes an impossible task
because we cannot pick a finite value for N.  Given
that we are dealing with things on an atomic scale
it seems reasonable to restrict the analysis to a
closed loop of length, L and radius, R.  For single
valued periodic functions, this imposes the following
constraint:

ψ(x + L) = ψ(x)

Thus,

exp(ip(x + L)/h) = exp(ipx/h)exp(ipL/h)

=> exp(ipL/h) = 1

We can write

pL/h = 2πm   where m is an integer

∴ p = (2πh/L)m

We can also write pL/2π = mh or, with R = L/2π,

pR = mh ... the angular momentum

Therefore, the momentum, p, is quantized and not
continuous.

So we rewrite our integral as:

L               L
∫dx Pr(x) = N²  ∫dx ψ*(x)ψ(x) = 1
0               0

Now, in order to ensure that the probability = 1
we need to NORMALIZE ψ.  Therefore, we must
have:

   L
N² ∫dx = 1
   0

Or,

N = 1/√L

Therefore, we are now in a position to write ψ
as:

ψm(x) = (1/√L)exp(2πimx/L) or equivalently

ψp(x) = (1/√L)exp(ipx/h)

Where the subscripts denote the label of the
wavefunction.  These are eigenfunctions of the
momentum operator, labeled by the integer m or by
the momentum p, and αm (or αp) are the expansion
coefficients associated with those states.  Note
that these states are orthogonal to each other.
Therefore,

∫dx ψm*ψn = δmn

The probability, Prp, that a state is occupied is
given by:

Prp = |αp|²

Likewise,

Prm = |αm|²

Computation of αm
-----------------

αm = ∫ψm(x)*ψ(x)dx

   = (1/√L)∫exp(-2πimx/L)ψ(x)dx

Expanding ψ(x) = (1/√L)Σαnexp(2πinx/L):
                       n

αm = (1/L)Σαn∫exp(-2πimx/L)exp(2πinx/L)dx
          n

   = αn when n = m

   = 0 otherwise

For n ≠ m, the real and imaginary parts of the
exponential oscillate over |n - m| full cycles
and integrate to 0.

Example
-------

Consider the following wavefunction.

ψ = Ncos(6πx/L)

  L
N²∫dx cos²(6πx/L) = 1
  0

=> N = √(2/L)

     L
αm = ∫dx (1/√L)exp(-2πimx/L)√(2/L)cos(6πx/L)
     0

We could solve this integral, but there is a quicker
way to get at the answer!  cos(6πx/L) can be written
as:

cos(6πx/L) = [exp(6πix/L) + exp(-6πix/L)]/2

We can now just compare this with
ψ = (1/√L)Σαmexp(2πimx/L).

Therefore,

ψ = √(2/L)(1/2)[exp(6πix/L) + exp(-6πix/L)]

  = √(1/2L)[exp(6πix/L) + exp(-6πix/L)]

  = (1/√2)exp(6πix/L)/√L + (1/√2)exp(-6πix/L)/√L

From this we see that only m = ±3 contribute and
α3 = α-3 = 1/√2.  Therefore,

ψ = (1/√2)[exp(6πix/L)/√L + exp(-6πix/L)/√L]

with

ψ3 = exp(6πix/L)/√L and α3 = 1/√2

ψ-3 = exp(-6πix/L)/√L and α-3 = 1/√2

Note that ψ3 and ψ-3 are orthonormal:

                  L
∫ψ3*ψ3 dx = (1/L) ∫exp(-6πix/L)exp(6πix/L) dx = 1
                  0

                   L
∫ψ-3*ψ3 dx = (1/L) ∫exp(6πix/L)exp(6πix/L) dx = 0
                   0

Therefore we can expand ψ in terms of the basis
vectors ψ3 and ψ-3.  This is the principle of
SUPERPOSITION discussed later.
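As a numerical sketch (a hypothetical discretization, with ħ set to 1), the ring basis functions ψm(x) = exp(2πimx/L)/√L can be checked for orthonormality and for their momentum eigenvalue 2πm/L:

```python
import numpy as np

L = 1.0
n = 20000
x = np.linspace(0, L, n, endpoint=False)
dx = L / n

def psi(m):
    # Ring basis function psi_m(x) = exp(2*pi*i*m*x/L)/sqrt(L)
    return np.exp(2j * np.pi * m * x / L) / np.sqrt(L)

# <psi_m|psi_n> = delta_mn, approximated by a Riemann sum over one period.
assert np.isclose((np.sum(psi(3).conj() * psi(3)) * dx).real, 1.0)
assert np.isclose(abs(np.sum(psi(-3).conj() * psi(3)) * dx), 0.0, atol=1e-9)

# -i d/dx psi_m = (2*pi*m/L) psi_m : check the eigenvalue at an interior point.
deriv = -1j * np.gradient(psi(3), dx)
assert np.isclose((deriv[n // 2] / psi(3)[n // 2]).real, 2 * np.pi * 3 / L, rtol=1e-3)
```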

Let's now look at the momentum eigenvalues associated
with these eigenvectors.

-ih∂ψ3/∂x = (-ih)(1/√L)(6πi/L)exp(6πix/L)

          = (6πh/L)ψ3

Likewise,

-ih∂ψ-3/∂x = (-6πh/L)ψ-3

So the momentum eigenvalues are ±6πh/L

We could do exactly the same thing to get the
energy eigenvalues as follows:

∂²ψ3/∂x² = (1/√L)(6πi/L)²exp(6πix/L)

         = -(36π²/L²)ψ3

∂²ψ-3/∂x² = (1/√L)(-6πi/L)²exp(-6πix/L)

          = -(36π²/L²)ψ-3

If we multiply both sides by -h²/2m we get:

E3 = E-3 = 36π²h²/2mL² = 18π²h²/mL²

The two states are DEGENERATE in energy, as expected
since they differ only in the direction of motion
around the ring.

Matrix Element
--------------

Recall:

Oij = ∫ei*Oej dx

Therefore,

p11 = <ψ3|-ih∂/∂x|ψ3>

           L
    = (1/L)∫exp(-6πix/L)(6πh/L)exp(6πix/L) dx
           0

              L
    = (6πh/L²)∫dx
              0

    = 6πh/L

p12 = 0  (orthogonal)

p22 = <ψ-3|-ih∂/∂x|ψ-3>

           L
    = (1/L)∫exp(6πix/L)(-6πh/L)exp(-6πix/L) dx
           0

    = -6πh/L

p21 = 0  (orthogonal)

This is a diagonal matrix consisting of the
eigenvalues.

      -                -
pij = | 6πh/L     0    |
      |   0    -6πh/L  |
      -                -

We can see this in a different way by using
the eigenvalue equation and assuming that the
original matrix operator was diagonalizable
with the eigenvalues being the diagonal elements.

-         -  -    -       -    -
| p11   0  | | ψ3 | = p3  | ψ3 |
| 0    p22 | | 0  |       | 0  |
-         -  -    -       -    -

-         -  -     -        -     -
| p11   0  | | 0   | = p-3  | 0   |
| 0    p22 | | ψ-3 |        | ψ-3 |
-         -  -     -        -     -

Which gives the characteristic equation:

    -                  -
det | p11 - p     0     | = 0
    |    0     p22 - p  |
    -                  -

Therefore:

(p11 - p)(p22 - p) = 0

p11p22 - (p11 + p22)p + p² = 0

(6πh/L)(-6πh/L) - 0·p + p² = 0

Therefore,

p² = 36π²h²/L²

Or,

p = ±6πh/L

Therefore the momentum eigenvalues, p, are the
eigenvalues of the p matrix.

Likewise, we could do the same thing for the
energy eigenvalues.

H11 = <ψ3|-(h²/2m)∂²/∂x²|ψ3>

           L
    = (1/L)∫exp(-6πix/L)(18π²h²/mL²)exp(6πix/L) dx
           0

                  L
    = (18π²h²/mL³)∫dx
                  0

    = 18π²h²/mL²

H12 = 0  (orthogonal)

H22 = <ψ-3|-(h²/2m)∂²/∂x²|ψ-3>

           L
    = (1/L)∫exp(6πix/L)(18π²h²/mL²)exp(-6πix/L) dx
           0

    = 18π²h²/mL²

H21 = 0  (orthogonal)

This is a diagonal matrix consisting of the
(degenerate) energy eigenvalues, which means that
the original Hermitian matrix operator was
diagonalizable.

-                          -
| 18π²h²/mL²       0       |
|     0       18π²h²/mL²   |
-                          -
-                    -

Again we could also use:

-         -  -    -       -    -
| H11   0  | | ψ3 | = E3  | ψ3 |
| 0    H22 | | 0  |       | 0  |
-         -  -    -       -    -

-         -  -     -        -     -
| H11   0  | | 0   | = E-3  | 0   |
| 0    H22 | | ψ-3 |        | ψ-3 |
-         -  -     -        -     -

Which gives:

    -                  -
det | H11 - E     0     | = 0
    |    0     H22 - E  |
    -                  -

Therefore:

(H11 - E)(H22 - E) = 0

With H11 = H22 = 18π²h²/mL² this gives the doubly
degenerate root:

E = 18π²h²/mL²

which is just E = p²/2m with p = ±6πh/L.  Therefore
the energy eigenvalues, E, are the eigenvalues of
the H matrix.

Spin Wavefunctions
------------------

Up until now we have been talking about SPATIAL
wavefunctions, ψ.  There is another type of
wavefunction called a SPIN wavefunction, χ.  A
spatial wave function can be considered as a
vector in an infinite dimensional complex vector
space.  In contrast, a spin wave function is a
vector in the spin vector space which is finite
dimensional.  For spin 1/2 particles, the spin
space is 2 dimensional and therefore spanned by
2 base vectors - 'up' and 'down'.  The combined
wavefunction of the system is the tensor product
of the spatial space and the spin space i.e. the
full wavefunction has spatial part and spin part.

ψTOTAL = χ ⊗ ψ

The tensor product factorization is only possible
if the Hamiltonian can be split into the sum of
spatial and spin terms.  In other words, spin and
orbital angular momentum must be separable.
Factorization is not possible when the particle
interacts with an external field (e.g. a magnetic
field) or couples to a space-dependent quantity
(e.g. spin-orbit coupling).

Spatial wavefunctions are solutions to Schrodinger's
Equation.  The property of spin comes as a result
of relativistic effects and is manifest in the
Dirac equation in the form of the DIRAC SPINORS.
It corresponds to a new degree of freedom associated
with the internal angular momentum state of the
particle.  Here we treat spin non-relativistically,
with the above assumptions.

We can write the spin 1/2 wavefunction in general as:

χz = α|+> + β|->

Where,

      - -           - -
|+> = | 1 |   |-> = | 0 |
      | 0 |         | 1 |
      - -           - -

      - -       - -     -   -
χ = α | 1 | + β | 0 | = | α |
      | 0 |     | 1 |   | β |
      - -       - -     -   -

<χ|χ> = 1

            -   -
| α*  β* |  | α | = 1
            | β |
            -   -

α*α(+.+) + α*β(+.-) + β*α(-.+) + β*β(-.-) = 1

|α|² + |β|² = 1

|α|² = |<e1|χ>|²

|β|² = |<e2|χ>|²

Now consider the eigenvalue equation:  H|ψ> = E|ψ>

The Hamiltonian is the Hermitian matrix:

    -     -
H = | h g |
    | g h |
    -     -

The eigenvalues can be determined from:

det(H - EI) = 0

    -             -
det | h - E   g   | = 0
    |   g   h - E |
    -             -

E1 = h + g

     - -
v1 = | 1 |
     | 1 |
     - -

The normalized vector is:

|v1| = √[1² + 1²] = √2

Therefore,

     -      -
v1 = | 1/√2 |
     | 1/√2 |
     -      -

E2 = h - g

     -    -
v2 = |  1 |
     | -1 |
     -    -

The normalized vector is:

|v2| = √[1² + (-1)²] = √2

Therefore,

     -       -
v2 = |  1/√2 |
     | -1/√2 |
     -       -

                            -       -
<v1|v2> = | 1/√2   1/√2 |   |  1/√2 | = 0
                            | -1/√2 |
                            -       -

So the eigenfunctions are orthogonal.

We can construct the states |+> and |-> as linear
combinations of the eigenvectors.  The components
along v1 and v2 are:

                 - -
| 1/√2   1/√2 |  | 1 | = 1/√2      (v1 component of |+>)
                 | 0 |
                 - -

                 - -
| 1/√2  -1/√2 |  | 1 | = 1/√2      (v2 component of |+>)
                 | 0 |
                 - -

                 - -
| 1/√2   1/√2 |  | 0 | = 1/√2      (v1 component of |->)
                 | 1 |
                 - -

                 - -
| 1/√2  -1/√2 |  | 0 | = -1/√2     (v2 component of |->)
                 | 1 |
                 - -

Therefore,

      - -         -      -         -       -
|+> = | 1 | = 1/√2| 1/√2 | + 1/√2  |  1/√2 |
      | 0 |       | 1/√2 |         | -1/√2 |
      - -         -      -         -       -

      - -         -      -         -       -
|-> = | 0 | = 1/√2| 1/√2 | - 1/√2  |  1/√2 |
      | 1 |       | 1/√2 |         | -1/√2 |
      - -         -      -         -       -

For the general state χ = α|+> + β|->,
|α|² + |β|² = 1.

In the eigenbasis the matrix elements of H are:

      -             - -     - -      -
H11 = | 1/√2   1/√2 | | h g | | 1/√2 | = h + g
      -             - | g h | | 1/√2 |
                      -     - -      -

H12 = 0

H21 = 0

      -              - -     - -       -
H22 = | 1/√2  -1/√2  | | h g | |  1/√2 | = h - g
      -              - | g h | | -1/√2 |
                       -     - -       -

Thus, we get:

-             -
| h + g   0   |
|   0   h - g |
-             -

This is a real symmetric matrix with the
eigenvalues on the diagonal.
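The diagonalization above is easy to confirm numerically; the values h = 2 and g = 0.5 below are illustrative:

```python
import numpy as np

# The spin Hamiltonian H = [[h, g], [g, h]] from the text.
h, g = 2.0, 0.5
H = np.array([[h, g], [g, h]])

eigvals, eigvecs = np.linalg.eigh(H)
assert np.allclose(sorted(eigvals), sorted([h + g, h - g]))

# The normalized eigenvector (1,1)/sqrt(2) gives <v1|H|v1> = h + g.
v_plus = np.array([1, 1]) / np.sqrt(2)
assert np.isclose(v_plus @ H @ v_plus, h + g)
```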

Eigenspinors
------------

In the case of a single spin 1/2 particle like the
electron, eigenspinors are eigenvectors of the Pauli
matrices.  Each set of eigenspinors forms a complete,
orthonormal basis.  This means that any state can be
written as a linear combination of the basis spinors.

Each of the (Hermitian) Pauli matrices has 2
eigenvalues, +1 and -1.  The corresponding
normalized eigenspinors are:

     -     -                  - -                   -    -
σx = | 0 1 |  with |+> = 1/√2 | 1 |  and |-> = 1/√2 | -1 |
     | 1 0 |                  | 1 |                 |  1 |
     -     -                  - -                   -    -

     -      -                 -    -                -   -
σy = | 0 -i |  with |+> = 1/√2| -i |  and |-> = 1/√2| i |
     | i  0 |                 |  1 |                | 1 |
     -      -                 -    -                -   -

     -      -            - -            - -
σz = | 1  0 |  with |+> = | 1 |  and |-> = | 0 |
     | 0 -1 |             | 0 |            | 1 |
     -      -            - -            - -
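These eigenvalue/eigenspinor claims can be verified directly:

```python
import numpy as np

# The three Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

for s in (sx, sy, sz):
    vals, vecs = np.linalg.eigh(s)
    assert np.allclose(sorted(vals), [-1, 1])       # eigenvalues +1 and -1
    # The eigenspinors (columns of vecs) form an orthonormal basis.
    assert np.allclose(vecs.conj().T @ vecs, np.eye(2))

# e.g. sigma_y (-i, 1)/sqrt(2) = +1 * (-i, 1)/sqrt(2)
up_y = np.array([-1j, 1]) / np.sqrt(2)
assert np.allclose(sy @ up_y, up_y)
```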

Therefore, when we wrote:

χz = α|+> + β|->

Where,

- -           - -
|+> = | 1 |   |-> = | 0 |
| 0 |         | 1 |
- -           - -

We were writing that our general state is a linear
combination of the normalized eigenspinors of the σz
matrix.  We could have picked the eigenspinors for
the x or y directions as a basis, but the z direction
is chosen by convention.

It is worth noting that if we had chosen h = 0 and
g = 1 in the above analysis, it would be equivalent
to using the σx matrix as the operator.

Matrix Element:

Consider the 2 states:

|1> = √(1/2)(|-> - i|+>)

|2> = √(1/2)(|-> + i|+>)

with

      - -            - -
|+> = | 1 |  and |-> = | 0 |
      | 0 |            | 1 |
      - -            - -

             - -            - -
|1> = √(1/2) | 0 | - i√(1/2)| 1 |
             | 1 |          | 0 |
             - -            - -

            -    -
   = √(1/2) | -i |
            |  1 |
            -    -

Similarly,

             - -            - -
|2> = √(1/2) | 0 | + i√(1/2)| 1 |
             | 1 |          | 0 |
             - -            - -

            -   -
   = √(1/2) | i |
            | 1 |
            -   -

The corresponding bras are:

<+| = | 1 0 |  and  <-| = | 0 1 |

<1| = √(1/2)| i 1 |

<2| = √(1/2)| -i 1 |

Now consider an operator, S, that does the following:

S|+> = (ih/2)|->  and  S|-> = (-ih/2)|+>

The matrix elements are found from:

-                  -      -                           -
| <+|S|+>  <+|S|-> |  =   | (ih/2)<+|->  (-ih/2)<+|+> |
| <-|S|+>  <-|S|-> |      | (ih/2)<-|->  (-ih/2)<-|+> |
-                  -      -                           -

   -              -
 = |  0     -ih/2 |
   | ih/2     0   |
   -              -

Therefore,

S = Sy = (h/2)σy

The eigenvalues of S are given by S|1> = λ|1>:

-              -  -          -
|  0     -ih/2 |  | -i√(1/2) |
| ih/2     0   |  |   √(1/2) |
-              -  -          -

            -         -
 = (-ih/2)  |  √(1/2) |
            | i√(1/2) |
            -         -

         -          -
 = (h/2) | -i√(1/2) | = (h/2)|1>
         |   √(1/2) |
         -          -

So the eigenvalue is λ = h/2.

Quantum Superposition
---------------------

Quantum superposition is a fundamental principle
of QM that says we can expand any wavefunction
in terms of a complete set of basis vectors that
correspond to the eigenstates of the system.  The
physical system exists partly in all of these
possible eigenstates simultaneously but, when
observed, it gives a result corresponding to only
one of the possible configurations.  This is
referred to as the 'collapse' of the wave
function.  Summarizing:

- Initially, the wave function is a superposition
of the possible eigenstates (eigenvectors).

- After measurement (interaction with an observer)
the wave function collapses into a single
eigenstate.

Therefore,
                             collapse
Superposition of eigenstates    =>    single eigenstate

For the spatial wavefunction:

ψ(x) = ΣAmψm(x)  =>  a single ψm(x) = (1/√L)exp(ipmx/h)
       m

For the spin 1/2 system the wavefunction:

χz = α|+> + β|->  =>  |+> or |->

Where |α|² + |β|² = 1

The notion of a wavefunction collapse can be
seen in the famous Schrodinger's cat paradox.
A cat is placed in an opaque chamber with a
device containing cyanide that can be released
by a mechanism triggered by the radioactive
decay of a minute amount of Uranium (also
placed inside the chamber).  Since the decay
process is random, an observer does not know
if the cat is dead or alive at any given moment.
In QM this situation represents the superposition
of the 2 possible states.  When the chamber
is opened, the superposition is lost, and the
cat is seen to be either dead or alive.  Thus,
the observation or measurement itself affects
an outcome, so that the outcome as such does
not exist unless the measurement is made.
(That is, there is no single outcome unless it
is observed.)

Summary of QM Postulates
------------------------

1.  States of a system are represented by
vectors (functions) in a complex vector
space (HILBERT SPACE).

2.  Observables are represented by Hermitian
operators.

3.  The values that the observable can have
are eigenvalues of the Hermitian operator.

4.  States that have definite (certain) values
are the eigenvectors.

5.  If the system is prepared in a state |ψ(x)>
(not an eigenvector) then the probability of
finding the eigenvalue λ with corresponding
eigenvector |λ> is:

P(λ) = |Aλ|²

Where Aλ = ∫ψm(x)*ψ(x)dx and, for the particle
on a ring, ψm(x) = (1/√L)exp(2πimx/L).

Appendix
--------

Fourier Series Using Complex Exponentials
-----------------------------------------

The Fourier series is used to represent a periodic
function, f(x) by:

            ∞
f(x) = a0 + Σ[ancos(ωnx) + bnsin(ωnx)]
            n=1

Where,
          T
a0 = (1/T)∫f(t)dt
          0
          T
an = (2/T)∫dt f(t)cos(ωnt)
          0
          T
bn = (2/T)∫dt f(t)sin(ωnt)
          0

We can introduce complex exponentials as follows:

cosωnx = [exp(iωnx) + exp(-iωnx)]/2

and

sinωnx = [exp(iωnx) - exp(-iωnx)]/2i

Therefore,
∞
f(x) = a0 + Σan[exp(iωnx) + exp(-iωnx)]/2 - ibn[exp(iωnx) - exp(-iωnx)]/2
n=1
∞
= a0 + Σ{[(an - ibn)/2]exp(iωnx) + [(an + ibn)/2]exp(-iωnx)}
n=1
∞
= a0 + Σ[cnexp(iωnx) + c-nexp(-iωnx)]
n=1

Where c0 = a0  cn = (an - ibn)/2  c-n = (an + ibn)/2

We can write the above as:
+∞
f(x) = Σcnexp(iωnx)
-∞
T/2
Where cn = (1/T)∫dx f(x)exp(-iωnx)
-T/2

Proof ("Fourier's trick"):

 T/2                     +∞      T/2
 ∫dx f(x)exp(-iωmx)  =    Σ   cn  ∫dx exp(iωnx)exp(-iωmx)
-T/2                    n=-∞     -T/2

= cmT

Since,

 T/2
 ∫dx exp(iωnx)exp(-iωmx) = Tδmn  (orthogonality theorem)
-T/2

Therefore,

T/2
cn = (1/T)∫dx f(x)exp(-iωnx)
-T/2
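The coefficient formula can be sketched numerically; the test function below is an assumption chosen because its coefficients are known exactly (f(x) = cos(3ω0x) has c3 = c-3 = 1/2, all others 0, taking ωn = 2πn/T):

```python
import numpy as np

T = 2.0
n_pts = 4096
x = np.linspace(-T / 2, T / 2, n_pts, endpoint=False)
dx = T / n_pts
f = np.cos(3 * 2 * np.pi * x / T)   # test function with known coefficients

def c(n):
    # c_n = (1/T) * integral of f(x) exp(-i*omega_n*x) dx over one period
    return np.sum(f * np.exp(-1j * 2 * np.pi * n * x / T)) * dx / T

assert np.isclose(c(3), 0.5)
assert np.isclose(c(-3), 0.5)
assert np.isclose(c(1), 0.0, atol=1e-9)
```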

Fourier Series to Fourier Transform
-----------------------------------

The Fourier transform is an extension of the
Fourier series that results when the function,
f(x), vanishes outside of the interval [-T/2,T/2].
In other words, when the period of the represented
f(x) is lengthened and allowed to approach infinity.
The Fourier transform is then defined by:

        ∞                            T/2
f(ω) =  ∫dx f(x)exp(-iωx)  =  lim    ∫dx f(x)exp(-iωx)
       -∞                     T->∞  -T/2

Comparing this to the definition of the Fourier
series coefficients, it follows that:

cn = (1/T)f(ωn) = (1/T)f(n/T)

Where ωn = n/T (in this convention the factor of 2π
is absorbed into ω).

       <- Δω ->
-------+--------+--------> ω
      n/T   (n + 1)/T

So, Δω = (n + 1)/T - n/T = 1/T

∞
f(x) = lim  Σcnexp(i(n/T)x)
T->∞ -∞

∞
= lim  Σ(1/T)f(n/T)exp(i(n/T)x)
T->∞ -∞

∞
= lim   ΣΔω f(ωn)exp(iωnx)
Δω->0 -∞

This is the Riemann sum that approximates an
integral by a finite sum.  Therefore, we can
write:
∞
f(x) = ∫dω f(ω)exp(iωx)
-∞

For a Fourier series, cn can be thought of as the
amount of the wave present in the Fourier series
of f(x).  Similarly, as seen above, the Fourier
transform can be thought of as a function that
measures how much of each individual frequency
is present in f(x), and these waves can be
recombined by using an integral (or 'continuous
sum') to reproduce the original function.

In summary,

        +∞
f(x) =  ∫dω f(ω)exp(iωx)
       -∞

is just the continuous analog of

        +∞
f(x) =   Σ  cnexp(iωnx)
       n=-∞

Normalization
-------------

The normalization factor, N, has to satisfy:

N²   ∫dx |f(x)|² = 1
  allspace

Fourier Series/Transform in QM/QFT
----------------------------------

The normalization factor, N, has to satisfy:

N²   ∫dx ψ*(x)ψ(x) = 1
  allspace

For eigenfunctions this becomes:

   L
N² ∫dx exp(-ikx)exp(ikx) = 1
   0

   L
N² ∫dx = 1
   0

      L
N²|x|   = N²L = 1  ∴  N = 1/√L
      0
0

For a particle on a ring, L = 2π.  Therefore:

N = 1/√(2π)

Now we know that exp(ikx) and exp(-ikx) have
normalization factors of 1/√(2π) we can therefore
write the Fourier transforms of the position
and momentum states as:

∞
ψ(x) = (1/√(2π))∫dk φ(k)exp(ikx)
-∞
∞
φ(k) = (1/√(2π))∫dx ψ(x)exp(-ikx)
-∞

Note:  In calculations φ(k) and ψ(x) also need
to be normalized to get the correct result.

We can write:
                ∞
ψ(x) = (1/√(2π))∫dk φ(k)exp(ikx)
               -∞
                ∞                       ∞
     = (1/√(2π))∫dk exp(ikx) (1/√(2π)) ∫dx' ψ(x')exp(-ikx')
               -∞                     -∞
       ∞                 ∞
     = ∫dx' ψ(x') (1/2π) ∫dk exp(ik(x - x'))
      -∞                -∞
       ∞
     = ∫dx' ψ(x')δ(x - x')
      -∞

     = ψ(x)

Where δ(x - x') is the Dirac Delta function.
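The round trip ψ -> φ -> ψ can also be verified numerically, with discrete sums standing in for the integrals.  The test state and the grids below are arbitrary choices:

```python
import numpy as np

# Round trip psi -> phi -> psi with the symmetric 1/sqrt(2*pi) pair.
# A Gaussian-type test state keeps truncation error negligible.
psi = lambda x: x * np.exp(-x**2 / 2)

x = np.linspace(-20, 20, 4001)
k = np.linspace(-10, 10, 2001)
dx, dk = x[1] - x[0], k[1] - k[0]

# phi(k) = (1/sqrt(2*pi)) Int dx psi(x) exp(-ikx)
phi = np.array([np.sum(psi(x) * np.exp(-1j * ki * x)) * dx
                for ki in k]) / np.sqrt(2 * np.pi)

# psi(x) = (1/sqrt(2*pi)) Int dk phi(k) exp(ikx), at a few sample points
pts = np.array([0.0, 0.5, 1.5])
back = np.array([np.sum(phi * np.exp(1j * k * xi)) * dk
                 for xi in pts]) / np.sqrt(2 * np.pi)

err = np.max(np.abs(back - psi(pts)))
print(err)  # should be tiny
```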

Examples
--------

Ex 1. Consider ψ(x) = cos(6πx/L)

N²∫dx ψ*(x)ψ(x) = 1
allspace
  L
N²∫dx (cos(6πx/L))*cos(6πx/L) = 1
  0
  L
N²∫dx cos²(6πx/L) = 1
  0

N²L/2 = 1 ∴ N = √(2/L), which on the ring
(L = 2π) gives N = √(1/π)

Now k = 2π/λ and L = nλ, so kn = 2πn/L.  On the
ring L = 2π, so kn = n and 6πx/L corresponds to
n = 3 (i.e. cos(6πx/L) = cos(3x)).  Therefore,

         ∞
(1/√(2π))∫dx √(1/π)cos(3x)exp(-iknx)
        -∞

= (1/√(2π))√(1/π)∫dx (1/2){exp(i3x) + exp(-i3x)}exp(-iknx)

= (1/√(2π))√(1/π)∫dx (1/2){exp(-i(kn - 3)x) + exp(-i(kn + 3)x)}

= (1/√(2π))√(1/π)(1/2)2π{δ(kn - 3) + δ(kn + 3)}

= (1/√2){δ(kn - 3) + δ(kn + 3)}

Therefore, ψ(x) can be written as:
        ∞
ψ(x) =  Σ cnexp(iknx) = (1/√2)exp(i3x) + (1/√2)exp(-i3x)
      n=-∞

     = (1/√2)|exp(i3x)> + (1/√2)|exp(-i3x)>

When a measurement is made, ψ(x) will collapse
to the eigenstate with either kn = +3 or kn = -3
with equal probability (= (1/√2)² = 1/2).  No
other possibilities are allowed.
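A quick numerical check of this example (the grid size is an arbitrary choice; the rectangle rule is exact for trigonometric polynomials on a periodic grid):

```python
import numpy as np

# On the ring (L = 2*pi) the normalized state is psi = sqrt(1/pi)cos(3x).
# Its coefficients in the basis exp(inx)/sqrt(2*pi) should be 1/sqrt(2)
# at n = +-3 and 0 elsewhere, so probability 1/2 for each outcome.
L = 2 * np.pi
N = 4096                       # grid size (arbitrary)
x = np.arange(N) * L / N       # periodic grid, endpoint excluded
dxg = L / N
psi = np.sqrt(1 / np.pi) * np.cos(3 * x)

def c(n):
    # c_n = (1/sqrt(2*pi)) Int_0^L psi(x) exp(-inx) dx
    return np.sum(psi * np.exp(-1j * n * x)) * dxg / np.sqrt(2 * np.pi)

print(abs(c(3)), abs(c(-3)), 1 / np.sqrt(2))  # both equal 1/sqrt(2)
print(abs(c(2)), abs(c(7)))                   # ~ 0
print(abs(c(3))**2 + abs(c(-3))**2)           # total probability ~ 1
```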

Ex 2.  Consider ψ(x) = exp(-α|x|), a two-sided
exponential peaked at x = 0.  Normalizing:

N²∫dx exp(-α|x|)*exp(-α|x|) = 1

N²∫dx exp(-2α|x|) = 1

Splitting the integral at x = 0:

 0                 ∞
 ∫dx exp(+2αx)  +  ∫dx exp(-2αx)
-∞                 0
              0                  ∞
= |exp(2αx)/2α|   + |exp(-2αx)/-2α|
             -∞                  0
= [1/2α - 0] + [0 + 1/2α] = 1/α

So N²/α = 1 ∴ N = √α
The momentum space wavefunction is then:
 ∞
 ∫dx (√α)exp(-α|x|)(1/√(2π))exp(-ikx)
-∞
          0                           ∞
= √(α/2π)[∫dx exp(αx)exp(-ikx)   +    ∫dx exp(-αx)exp(-ikx)]
         -∞                           0
                                 0                             ∞
= √(α/2π)[|exp(x(α - ik))/(α - ik)|  + |exp(x(-α - ik))/(-α - ik)|]
                                -∞                             0

= √(α/2π)[1/(α - ik) - 0 + 0 - 1/(-α - ik)]

= √(α/2π)[(α + ik) + (α - ik)]/(α + ik)(α - ik)

= √(α/2π)2α/(α² + k²)

Here the spectrum of k is continuous, so the sum
over n becomes an integral over k:
                 ∞
ψ(x) = (1/√(2π)) ∫dk √(α/2π)(2α/(α² + k²))exp(ikx)
                -∞

This means every k is a possible result of a
momentum measurement, since |φ(k)|² does not
vanish for any (finite) k.  However, the maximum
probability density corresponds to k = 0.
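The closed form for φ(k) can be checked by numerical quadrature.  The value of α and the sample k values below are arbitrary choices:

```python
import numpy as np

# Check that the transform of sqrt(alpha)*exp(-alpha*|x|) equals
# sqrt(alpha/2*pi) * 2*alpha/(alpha^2 + k^2) at a few sample k values.
alpha = 1.5
x = np.linspace(-40, 40, 200001)   # exp(-alpha*40) is negligible
dx = x[1] - x[0]
psi = np.sqrt(alpha) * np.exp(-alpha * np.abs(x))

ks = np.array([0.0, 1.0, 3.0])
phis = np.array([np.sum(psi * np.exp(-1j * k * x)) * dx
                 for k in ks]) / np.sqrt(2 * np.pi)
exact = np.sqrt(alpha / (2 * np.pi)) * 2 * alpha / (alpha**2 + ks**2)

print(np.max(np.abs(phis.real - exact)))  # quadrature error, should be tiny
```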

Potential Well
--------------

------+         +------ V(x)
      |         |
      |         |
      |         |
      +---------+
      0         L

Inside the well, V(x) = 0

The time independent Schrodinger equation is:

∂²ψ/∂x² + 2mEψ/h² = 0

or,

∂²ψ/∂x² = -2mEψ/h²

        = -k²ψ

Where k = √(2mE)/h

This has the solution:

ψ(x) = Asin(kx) + Bcos(kx)

     ≡ Cexp(ikx) + Dexp(-ikx)

The boundary conditions are:

ψ(0) = ψ(L) = 0

ψ(0) = Asin(0) + Bcos(0) = 0 => B = 0

and,

ψ(L) = Asin(kL) = 0 => A = 0 or sin(kL) = 0

Since A = 0 gives only the trivial solution, we
need sin(kL) = 0, which implies that k = nπ/L

Now, E = p²/2m

= h²k²/2m

= n²π²h²/2mL²
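These energy levels can be checked numerically by diagonalizing a finite-difference approximation of the kinetic operator; a minimal sketch in units h = m = 1, with an arbitrary well width and grid size:

```python
import numpy as np

# Diagonalize the discretized -(1/2) d^2/dx^2 on [0, L] with
# psi(0) = psi(L) = 0 and compare the lowest eigenvalues with
# E_n = n^2*pi^2/(2*L^2)  (units h = m = 1).
L, N = 1.0, 800
dx = L / N
# second-difference matrix on the N-1 interior grid points;
# the Dirichlet boundary values are implicitly zero
H = (np.diag(np.full(N - 1, 2.0))
     + np.diag(np.full(N - 2, -1.0), 1)
     + np.diag(np.full(N - 2, -1.0), -1)) / (2 * dx**2)

E = np.linalg.eigvalsh(H)[:3]
exact = np.array([(n * np.pi / L)**2 / 2 for n in (1, 2, 3)])
print(E)
print(exact)
```

The finite-difference eigenvalues approach the exact n²π²/2L² values as the grid is refined.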

We now find A by normalizing ψ(x):
  L
A²∫dx sin²(nπx/L) = 1
  0
A²L/2 = 1 ∴ A = √(2/L)

ψ(x) = √(2/L)sin(nπx/L)
The coefficients of an arbitrary f(x) in this
basis are:
           L
cm = √(2/L)∫dx f(x)sin(mπx/L)
           0
For f(x) = √(2/L)sin(nπx/L) this becomes:
                 L
cm = √(2/L)√(2/L)∫dx sin(nπx/L)sin(mπx/L)
                 0
Using sin x sin y = (1/2){cos(x - y) - cos(x + y)}:
            L
= (2/L)(1/2)∫dx {cos(πx(n - m)/L) - cos(πx(n + m)/L)}
            0
                                                                  L
= (2/L)(L/2)|sin(πx(n - m)/L)/(n - m)π - sin(πx(n + m)/L)/(n + m)π|
                                                                  0
= sin(π(n - m))/(n - m)π - sin(π(n + m))/(n + m)π

= 0 for n ≠ m

While for n = m, the normalization integral gives:
     L
(2/L)∫dx sin²(nπx/L) = 1
     0
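The orthonormality relation can be confirmed numerically (the well width and the grid are arbitrary choices):

```python
import numpy as np

# Orthonormality of the well eigenfunctions sqrt(2/L)*sin(n*pi*x/L):
# the overlap (2/L) Int_0^L sin(n*pi*x/L) sin(m*pi*x/L) dx is
# 1 for n = m and 0 otherwise.
L = 3.0
x = np.linspace(0, L, 20001)
trap = lambda y: np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2  # trapezoid

def overlap(n, m):
    return trap((2 / L) * np.sin(n * np.pi * x / L) * np.sin(m * np.pi * x / L))

print(overlap(1, 1), overlap(4, 4))   # ~ 1
print(overlap(1, 2), overlap(3, 5))   # ~ 0
```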
        ∞
ψ(x) =  Σ cn√(2/L)sin(nπx/L)
      n=-∞

The negative solutions give nothing new since
sin(-nπx/L) = -sin(nπx/L) and the sign can be
absorbed in the constant.  Therefore,

       ∞
ψ(x) = Σ cn√(2/L)sin(nπx/L)
      n=1

Now,

√(2/L)sin(nπx/L) = √(2/L)(1/2i)(exp(inπx/L) - exp(-inπx/L))

= (-i/√2)exp(inπx/L)/√L + (i/√2)exp(-inπx/L)/√L

The momentum eigenvalue equation

-ih∂/∂x|ψ(x)> = p|ψ(x)>

applied to the first plane wave gives,

-ih∂/∂x [exp(inπx/L)/√L] = (nπh/L)[exp(inπx/L)/√L]

i.e. p = +nπh/L.  Similarly,

-ih∂/∂x [exp(-inπx/L)/√L] = -(nπh/L)[exp(-inπx/L)/√L]

i.e. p = -nπh/L.

This says that the probability of the particle
having momentum of magnitude nπh/L is the same
for both the +x and -x directions
(|-i/√2|² = |i/√2|² = 1/2).  We can also compute
the expectation value of p as:

      L
<p> = ∫dx ψ*(x)(-ih∂/∂x)ψ(x)
      0
           L
    = (2/L)∫dx sin(nπx/L)(-ih)(nπ/L)cos(nπx/L)
           0
    = 0

since sin(nπx/L)cos(nπx/L) integrates to zero
over the well.

Note:  This does not mean that the kinetic energy
is 0!  Indeed, <p²> = (nπh/L)² = 2mE ≠ 0.
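A numerical check of both statements, in units h = 1 with arbitrary L and n:

```python
import numpy as np

# For psi_n(x) = sqrt(2/L)*sin(n*pi*x/L):  <p> = 0 while
# <p^2> = (n*pi/L)^2, so the kinetic energy <p^2>/2m is nonzero.
# psi' is written in closed form to avoid differencing error.
L, n = 2.0, 3
x = np.linspace(0, L, 20001)
trap = lambda y: np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2

psi  = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
dpsi = np.sqrt(2 / L) * (n * np.pi / L) * np.cos(n * np.pi * x / L)

p_exp  = trap(psi * dpsi)    # <p> = -i Int psi psi' dx ; the integral is 0
p2_exp = trap(dpsi * dpsi)   # <p^2> = Int psi'^2 dx (integration by parts)

print(p_exp)                         # ~ 0
print(p2_exp, (n * np.pi / L)**2)    # both ~ (n*pi/L)^2
```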
```