The Essential Mathematics of Quantum Mechanics
----------------------------------------------
Linear Algebra Representation (Dirac Notation)
----------------------------------------------
In classical mechanics, all possible states of a
system are represented by a phase space. In QM a
state is a complex function that lives in an abstract
complex vector space called HILBERT SPACE. In other
words, vectors are represented by complex functions
and should not be viewed as vectors in the traditional
sense (i.e. velocity, acceleration etc.). The rules
(addition, scalar multiplication, inner products) are
defined for functions in exactly the same way as for
ordinary vectors, but since there are infinitely many
x values the summation sign is replaced by the integral
over all values of x.
Before we go any further we need to introduce the
DIRAC NOTATION.
Dirac Notation
--------------
We denote BRAs and KETs as <φ| and |ψ> respectively.
They can be interpreted in several different ways:
1. |ψ> is a vector in Hilbert space.
2. <ψ| is a dual vector in conjugate Hilbert space.
3. <φ|ψ> is the inner product of ψ with φ. In integral
   form this is:
   <φ|ψ> = ∫φ*(x)ψ(x)dx which measures the degree to
   which ψ is the same as φ. This is called the OVERLAP
   INTEGRAL.
   For a normalized state, <ψ|ψ> = 1
4. <ψ|φ> = <φ|ψ>*
5. <φ|O|ψ> = <φ|[O|ψ>] = [<φ|O]|ψ>. In integral form
this is:
<φ|O|ψ> = ∫φ*(x)Oψ(x)dx
6. OM|ψ> = O[M|ψ>]
Fourier Series, Basis Vectors and Completeness
----------------------------------------------
Vector spaces have bases. We denote a basis by
e_{i}. If the basis is orthonormal (orthogonal and
unit length) then:
<e_{i}|e_{j}> = δ_{ij}
We can expand a vector, V, as:
V = Σ_{i}V_{i}e_{i}
The equivalent in QM is the FOURIER SERIES expansion
given by:
|ψ> = Σ_{i}α_{i}|e_{i}>
Where ψ is the WAVEFUNCTION and α_{i} are coefficients.
The above is referred to as COMPLETENESS. It means
that any vector from the space under consideration
can be represented as linear combination of basis
vectors from this space.
The components of V can be found by projecting the
vector onto the respective directions. This is done
by taking the dot product of V with the corresponding
basis vector.
e_{j}.V = (Σ_{i}e_{i}V_{i}).e_{j}
= Σ_{i}(e_{i}.e_{j})V_{i}
= V_{j} since e_{i}.e_{j} = δ_{ij}
Likewise, to get α_{i} we proceed as follows:
<e_{j}|ψ> = Σ_{i}α_{i}<e_{j}|e_{i}>
         = α_{j}
In integral form this is:
α_{j} = ∫e_{j}^{*}ψ dx
So, basically, α_{j} measures the amount of ψ(x) along
the corresponding basis vector.
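The projection rule α_{j} = <e_{j}|ψ> is easy to check
numerically. Below is a minimal Python/numpy sketch (code is
not part of the original notes; the state vector is just an
arbitrary example) showing that the projection coefficients
rebuild the original vector:

```python
import numpy as np

# Orthonormal basis of C^3 (the standard basis e_1, e_2, e_3).
e = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])]

# An arbitrary state vector |psi>.
psi = np.array([1 + 2j, 0.5, -1j])

# alpha_j = <e_j|psi>: np.vdot conjugates its first argument,
# exactly like the bra in <e_j|psi>.
alpha = np.array([np.vdot(ej, psi) for ej in e])

# Completeness: |psi> = sum_j alpha_j |e_j> reproduces the vector.
psi_rebuilt = sum(a * ej for a, ej in zip(alpha, e))
assert np.allclose(psi_rebuilt, psi)
```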
Dyadic (Outer) Product
----------------------
From before we had:
|ψ> = Σ_{i}α_{i}|e_{i}> ≡ Σ_{i}|e_{i}>α_{i}
Now α_{i} = <e_{i}|ψ>. Therefore,
|ψ> = Σ_{i}|e_{i}><e_{i}|ψ>
So the implication is that:
Σ_{i}|e_{i}><e_{i}| = 1
This is the COMPLETENESS (closure) relation and is an
important formula in QM. An object of the form
|e_{i}><e_{j}| is called a DYAD.
product and the inner product, consider two regular
vectors. The inner product is given by:
- - - -
| 1 0 0 || 1 | = 1
- - | 0 |
| 0 |
- -
Now consider the Dyadic Product:
- - - - - -
| 1 || 1 0 0 | = | 1 0 0 |
| 0 | - - | 0 0 0 |
| 0 | | 0 0 0 |
- - - -
Summing the dyads over the whole basis gives the
identity:
                   -       -
_{ }               | 1 0 0 |
Σ_{i}|e_{i}><e_{i}| = | 0 1 0 | = I
_{ }               | 0 0 1 |
                   -       -
This means that we can write:
|ψ> = Σ_{i}|e_{i}><e_{i}|ψ>
Likewise,
<ψ| = Σ_{i}<ψ|e_{i}><e_{i}|
This will be very useful as we move forward.
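As a numerical sanity check of the completeness relation
(again a Python/numpy sketch, not part of the original
notes): the dyads |e_{i}><e_{i}| summed over an orthonormal
basis give the identity, while a single dyad is only a
projector:

```python
import numpy as np

e = np.eye(3)  # rows are the orthonormal basis vectors e_1..e_3

# Sum of dyads |e_i><e_i| over the whole basis = identity.
completeness = sum(np.outer(ei, ei.conj()) for ei in e)
assert np.allclose(completeness, np.eye(3))

# A single dyad is a projector (P^2 = P), not the identity.
P1 = np.outer(e[0], e[0].conj())
assert np.allclose(P1 @ P1, P1)
assert not np.allclose(P1, np.eye(3))
```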
Linear Operators - The Matrix Element
-------------------------------------
Simplistically, a linear operator, O, can be regarded
as a machine that takes a vector as an input and
outputs another vector. The output vector, O|ψ>, can
itself be expanded in the basis:
O|ψ> = Σ_{i}α_{i}|e_{i}>
<e_{j}|O|ψ> = Σ_{i}α_{i}<e_{j}|e_{i}>
           = Σ_{i}α_{i}δ_{ji}
           = α_{j}
O|ψ> = Σ_{i}|e_{i}><e_{i}|O|ψ>
Now consider:
O|ψ> = |ψ'>
<e_{i}|O|ψ> = <e_{i}|ψ'> = β_{i}
Σ_{j}<e_{i}|O|e_{j}><e_{j}|ψ> = Σ_{j}<e_{i}|O|e_{j}>α_{j}
                           = β_{i}
<e_{i}|O|e_{j}> = O_{ij} is called the MATRIX ELEMENT.
We can write this as:
- - - - - -
| O_{11} O_{12} O_{13} ... || α_{1} | | β_{1} |
| O_{21} O_{22} O_{23} ... || α_{2} | = | β_{2} |
| O_{31} O_{32} O_{33} ... || α_{3} | | β_{3} |
| . _{ }. _{ }._{ }... || ._{ } | | ._{ } |
- - - - - -
Or, equivalently:
- -
| <e_{1}|O|e_{1}> <e_{1}|O|e_{2}> <e_{1}|O|e_{3}> ... |
| <e_{2}|O|e_{1}> <e_{2}|O|e_{2}> <e_{2}|O|e_{3}> ... |
| <e_{3}|O|e_{1}> <e_{3}|O|e_{2}> <e_{3}|O|e_{3}> ... |
| . . ._{ }... |
- -
If the e_{i}'s are eigenvectors of O then the matrix is
diagonal:
- -
| <e_{1}|O|e_{1}> 0 0 ....... |
| 0 <e_{2}|O|e_{2}> 0 ....... |
| 0 0 <e_{3}|O|e_{3}> ... |
| . . . _{ }... |
- -
The integral form of O_{ij} is:
O_{ij} = ∫e_{i}*Oe_{j} dx
Hermitian Operators
-------------------
Define |e_{j}'> = O|e_{j}>. The HERMITIAN CONJUGATE (or
adjoint), O^{†}, of O is defined by:
<e_{i}|O|e_{j}> = <e_{i}|e_{j}'>
<e_{i}|O^{†}|e_{j}> = <e_{i}'|e_{j}>
Now <e_{i}|e_{j}'>* = <e_{j}'|e_{i}>. Therefore,
<e_{i}|O|e_{j}>* = <e_{j}|O^{†}|e_{i}>
O_{ij}* = O^{†}_{ji}
This is referred to as HERMITIAN CONJUGATION. If
O^{†} = O then O is a HERMITIAN OPERATOR.
Hermitian Matrices
------------------
Hermitian matrices as operators are the analog of
'real' for numbers. For a number to be real N* = N.
For an operator this implies that O^{†} = O. This
requires that the diagonal elements must be real
(including 0) since O_{ii} must equal O_{ii}*. They can
have complex elements but elements reflected about
the diagonal must be complex conjugates of each
other. And, for obvious reasons, they must be
square.
If a Hermitian matrix has complex off-diagonal
elements it cannot be SYMMETRIC (i.e. O = O^{T}).
Hence, a Hermitian matrix that has only real
elements is forced to be symmetric. For example,
      -      -
O = | 0 -i | is Hermitian but not symmetric
    | i  0 | (O ≠ O^{T}).
      -      -
- - - -
O = | 0 1 | and | 1 0 | are both Hermitian
| 1 0 | | 0 -1 | AND symmetric.
- - - -
It can easily be seen that these matrices satisfy:
O^{†} = O
Hermitian operators correspond to the observables
of a system.
Unitary Operators
-----------------
A unitary operator preserves the lengths and angles
between vectors, and it can be considered as a type
of rotation operator in abstract vector space. Like
Hermitian operators, the eigenvectors of a unitary
matrix are orthogonal. However, its eigenvalues are
not necessarily real. Consider:
U|ψ> = |ψ'>
<φ'| = <φ|U^{†}
<φ'|ψ'> = <φ|U^{†}U|ψ>
For <φ'|ψ'> to equal <φ|ψ> requires that U^{†}U = I
Example:
- -
O = | 0 -i | is unitary.
| i 0 |
- -
Proof:
      -      -            -      -
U = | 0 -i |    U^{†} = | 0 -i |
    | i  0 |   ^{ }    | i  0 |
      -      -            -      -
          -      - -      -     -      -
U^{†}U = | 0 -i || 0 -i | = | 1 0 | = I
_{ }     | i  0 || i  0 |   | 0 1 |
          -      - -      -     -      -
Therefore <φ'|ψ'> = <φ|U^{†}U|ψ> = <φ|ψ>, so lengths
and angles are preserved.
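The unitarity of this matrix, and the inner-product
preservation it implies, can also be verified numerically
(Python/numpy sketch, not part of the original notes; the
test vectors are random):

```python
import numpy as np

U = np.array([[0, -1j],
              [1j,  0]])

# U^dagger U = I, so U is unitary (it happens to be Hermitian too).
assert np.allclose(U.conj().T @ U, np.eye(2))

# Unitary operators preserve inner products: <U phi|U psi> = <phi|psi>.
rng = np.random.default_rng(0)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
assert np.isclose(np.vdot(U @ phi, U @ psi), np.vdot(phi, psi))
```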
The Eigenvalue Equation
-----------------------
O|ψ> = λ_{ψ}|ψ>
O is Hermitian
|ψ> = eigenvector
λ_{ψ} = eigenvalue
The eigenvalue equation can also be written in
terms of the matrix elements as:
O|ψ> = E|ψ>
OΣ_{i}|e_{i}><e_{i}|ψ> = E|ψ>
Σ_{i}O|e_{i}><e_{i}|ψ> = E|ψ>
Multiply by <e_{j}|
Σ_{i}<e_{j}|O|e_{i}><e_{i}|ψ> = E<e_{j}|ψ>
Σ_{i}O_{ji}<e_{i}|ψ> = E<e_{j}|ψ>
Explicitly, this looks like:
  -                        - -     -      -     -
| O_{11} O_{12} O_{13} ... || α_{1} |    | α_{1} |
| O_{21} O_{22} O_{23} ... || α_{2} | = E| α_{2} |
| O_{31} O_{32} O_{33} ... || α_{3} |    | α_{3} |
| .     .     .    ...     || .     |    | .     |
  -                        - -     -      -     -
Eigenvalues of Hermitian matrices are real.
Proof:
O|I> = λ_{I}|I>
<I|O^{†} = <I|λ_{I}*
Since O is Hermitian, O^{†} = O, so:
<I|O|I> = λ_{I}<I|I>
<I|O|I> = λ_{I}*<I|I>
∴ λ_{I}* = λ_{I}
The eigenvectors of a Hermitian matrix are always
orthogonal as long as the eigenvectors have
different eigenvalues.
Proof:
O|I> = λ_{I}|I>
<J|O^{†} = λ_{J}*<J| = λ_{J}<J| (eigenvalues are real)
<J|O|I> = λ_{I}<J|I>
<J|O^{†}|I> = λ_{J}<J|I>
Since O^{†} = O, subtracting the last two lines gives:
(λ_{I} - λ_{J})<J|I> = 0
J and I must be orthogonal if (λ_{I} - λ_{J}) ≠ 0
If 2 eigenvectors have the same eigenvalue they are
said to be DEGENERATE. However, in this case it is
still possible to find a set of ORTHONORMAL eigenvectors
by using the GRAM SCHMIDT process:
w_{a} = v_{a} - (v_{a}.v_{b}/v_{b}.v_{b})v_{b}
Where v_{a} and v_{b} are non-orthogonal vectors and w_{a}
is the orthogonalized version of v_{a} (it still needs
to be normalized). Let's look at an example. Consider,
- -
| 0 1 1 |
| 1 0 1 |
| 1 1 0 |
- -
This has the following eigenvalues and eigenvectors.
- -
_{ } | 1 |
λ_{1} = 2 v_{1} = | 1 |
_{ } | 1 |
- -
- - - -
_{ } | -1 | _{ } | -1 |
λ_{2} = -1 v_{2} = | 1 | v_{3} = | 0 |
_{ } | 0 | _{ } | 1 |
- - - -
v_{1} is orthogonal to v_{2} and v_{3} but v_{2} is not orthogonal
to v_{3}. We can use Gram Schmidt to replace v_{3} with w_{3}.
w_{3} = v_{3} - (v_{3}.v_{2}/v_{2}.v_{2})v_{2}
- - - - - -
_{ } | -1 | | -1 | | -1/2 |
w_{3} = | 0 | - (1/2)| 1 | = | -1/2 |
_{ } | 1 | | 0 | | 1 |
- - - - - -
To turn these into an orthonormal basis we now
normalize the 3 vectors.
|v_{1}| = √[1^{2} + 1^{2} + 1^{2}] = √3
|v_{2}| = √[(-1)^{2} + 1^{2} + 0^{2}] = √2
|w_{3}| = √[(-1/2)^{2} + (-1/2)^{2} + 1^{2}] = √(3/2)
Therefore:
- -
_{ } | 1/√3 |
λ_{1} = 2 v_{1} = | 1/√3 |
_{ } | 1/√3 |
- -
- -
_{ } | -1/√2 |
λ_{2} = -1 v_{2} = | 1/√2 |
_{ } | 0 |
- -
- -
_{ } | -1/√6 |
λ_{3} = -1 w_{3} = | -1/√6 |
_{ } | √6/3 |
- -
These 3 vectors are now orthonormal to each other.
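The Gram Schmidt step above can be replayed numerically
(Python/numpy sketch, not part of the original notes), using
the vectors from this example:

```python
import numpy as np

v1 = np.array([1.0, 1.0, 1.0])    # eigenvector for lambda = 2
v2 = np.array([-1.0, 1.0, 0.0])   # degenerate pair for lambda = -1
v3 = np.array([-1.0, 0.0, 1.0])

# One Gram Schmidt step: remove the v2 component from v3.
w3 = v3 - (v3 @ v2) / (v2 @ v2) * v2
assert np.allclose(w3, [-0.5, -0.5, 1.0])

# Normalize all three; the Gram matrix of the result is the
# identity, i.e. the set is orthonormal.
basis = [v / np.linalg.norm(v) for v in (v1, v2, w3)]
G = np.array([[a @ b for b in basis] for a in basis])
assert np.allclose(G, np.eye(3))
```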
Eigendecomposition (Diagonalization)
------------------------------------
A matrix is diagonalizable if there exists an
invertible matrix, U, that satisfies the relationship:
U^{-1}OU = Λ
The following procedure is used to diagonalize a
matrix.
1. Find the eigenvalues and eigenvectors of O.
If the eigenvectors are linearly independent
proceed to step 2 otherwise the matrix is not
diagonalizable.
2. Write the eigenvectors, v_{n}, as the column vectors
   of a matrix:
   U = | v_{1} v_{2} ... |
   Where the first column is the first eigenvector,
   the second column is the second, and so on.
3. Compute U^{-1}OU = Λ. The result will be a diagonal
matrix of the eigenvalues.
- -
| λ_{1} 0 .... 0 |
Λ = | 0   λ_{2} ... 0   |
| : : : |
| 0 0 ... λ_{n} |
- -
The set of all eigenvalues of O is called its
SPECTRUM.
Example:
Lets return to the matrix we had above:
- -
| 0 1 1 |
O = | 1 0 1 |
| 1 1 0 |
- -
Without Gram Schmidt we can write U as:
- -
| 1 1 -1 |
U = | 1 -1 0 |
| 1 0 1 |
- -
- -
^{ } | 1/3 1/3 1/3 |
U^{-1} = | 1/3 -2/3 1/3 |
^{ } | -1/3 -1/3 2/3 |
- -
U^{-1}OU gives:
- -
| 2 0 0 |
| 0 -1 0 |
| 0 0 -1 |
- -
So the diagonal consists of the eigenvalues of the
original matrix.
If we were to repeat the process using the orthonormal
basis obtained from Gram Schmidt we get:
- -
| 1/√3 -1/√2 -1/√6 |
U = | 1/√3 1/√2 -1/√6 |
| 1/√3 0 √6/3 |
- -
- -
^{ } | 1/√3 1/√3 1/√3 |
U^{-1} = | -1/√2 1/√2 0 |
^{ } | -1/√6 -1/√6 √6/3 |
- -
Again U^{-1}OU gives:
- -
| 2 0 0 |
| 0 -1 0 |
| 0 0 -1 |
- -
Therefore, diagonalizing O after going through the
Gram Schmidt process yields the same result as
before. So orthogonality/orthonormality is not
a factor in determining whether or not the matrix
can be diagonalized. The only requirement is that
U^{-1}OU = Λ. Using Gram Schmidt is still worthwhile,
however, because for a real orthonormal basis
U^{-1} = U^{T} (U^{†} in the complex case), and a
transpose is far cheaper to compute than an inverse
for large matrices.
U^{-1}OU = Λ can be rearranged as follows:
Multiply from the left by U to get:
OU = UΛ
Multiply from the right by U^{-1} to get:
O = UΛU^{-1}
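The diagonalization above can be reproduced numerically
(Python/numpy sketch, not part of the original notes):

```python
import numpy as np

O = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# Columns of U are the (unnormalized) eigenvectors from the text.
U = np.array([[1.0,  1.0, -1.0],
              [1.0, -1.0,  0.0],
              [1.0,  0.0,  1.0]])

# U^-1 O U is diagonal with the eigenvalues on the diagonal.
Lam = np.linalg.inv(U) @ O @ U
assert np.allclose(Lam, np.diag([2.0, -1.0, -1.0]))

# Rearranged: O = U Lam U^-1 reconstructs the original matrix.
assert np.allclose(U @ Lam @ np.linalg.inv(U), O)
```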
Eigenvectors as Bases
---------------------
The fact that the eigenvectors of Hermitian operators
are mutually orthogonal means that they form a complete
basis spanning the vector space. Therefore, the v_{i}'s
from the previous discussion are equivalent to the e_{i}'s.
Now that we have a basis we can write any function
in that space as a linear combination of the
eigenfunctions of the operator.
Expectation (Average) Value of an Observed Quantity
---------------------------------------------------
The expectation value of O is denoted by, <O> and is
defined as:
<O> = <ψ|O|ψ> = Σ_{i}<ψ|O|e_{i}><e_{i}|ψ>
But O|e_{i}> = λ_{i}|e_{i}>
Therefore,
<ψ|O|ψ> = Σ_{i}λ_{i}<ψ|e_{i}><e_{i}|ψ>
        = Σ_{i}|<ψ|e_{i}>|^{2}λ_{i}
        = Σ_{i}Pr_{i}λ_{i}
Where Pr_{i} is the probability of obtaining the eigenvalue
λ_{i}.
The expectation value can be interpreted as the average
value of the statistical distribution of eigenvalues
obtained from a large number of measurements.
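The identity <O> = Σ_{i}Pr_{i}λ_{i} can be checked numerically
for a small Hermitian matrix (Python/numpy sketch; the
operator and state are arbitrary examples, not from the
notes):

```python
import numpy as np

# An example Hermitian observable and its spectral decomposition.
O = np.array([[1.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 2.0]])
lam, vecs = np.linalg.eigh(O)   # real eigenvalues, orthonormal eigenvectors

# A normalized state |psi>.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# <O> computed directly as <psi|O|psi>...
direct = np.vdot(psi, O @ psi).real

# ...equals sum_i Pr_i * lambda_i with Pr_i = |<e_i|psi>|^2.
Pr = np.abs(vecs.conj().T @ psi) ** 2
assert np.isclose(Pr.sum(), 1.0)        # probabilities sum to 1
assert np.isclose(direct, (Pr * lam).sum())
```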
Position and Momentum Operators
-------------------------------
We now look at 2 important operators - position and
momentum.
Position Operator
-----------------
The position operator acts by multiplication by x.
The eigenvalue equation is:
x|ψ> = λ|ψ>
Check that x is Hermitian:
<ψ(x)|x|ψ(x)> = ∫ψ*xψdx = real
Therefore x is Hermitian.
We can rewrite the eigenvalue equation as:
(x - λ)ψ(x) = 0
Therefore ψ(x) must vanish everywhere except at
x = λ: a spike of vanishing width ε and height 1/ε.
The eigenvectors are the Dirac delta functions:
∫δ(x - λ) dx = 1
Thus ∫δ(x - λ)F(x)dx = F(λ) - the delta function
picks out the value of F at x = λ.
Consider,
<λ|ψ(x)> = ∫δ(x-λ)ψ(x) dx
= ψ(λ)∫δ(x - λ) dx
= ψ(λ)
Relabeling the eigenvalue λ as x:
<x|ψ> = ψ(x) - the component of ψ along the position
eigenvector |x>. This is the expression of ψ in the
x coordinate (Definition 3. from above).
Momentum Operator
-----------------
Let's now look at the derivative of ψ w.r.t. x.
We write:
p = -i∂/∂x
(in units where h = 1; elsewhere we write p = -ih∂/∂x,
with h denoting the reduced Planck constant).
The factor of -i is required to make the operator
Hermitian. This can be seen as follows:
<ψ|-i∂/∂x|ψ> = ∫ψ*(-i∂/∂x)ψ dx
             = -i∫ψ*(∂ψ/∂x) dx
The complex conjugate of this is +i∫ψ(∂ψ*/∂x) dx
Apply integration by parts, ∫u dv = uv - ∫v du, with:
u = ψ ∴ du = (∂ψ/∂x)dx
dv = (∂ψ*/∂x)dx ∴ v = ψ*
                  _{∞}     _{∞}
i∫ψ(∂ψ*/∂x) dx = i[ψψ*]  - i∫ψ*(∂ψ/∂x) dx
                  ^{-∞}    ^{-∞}
(Here we are making the assumption that ψ -> 0
at ±∞, so the boundary term vanishes.)
This gives:
-i∫ψ*(∂ψ/∂x) dx
which is the integral we started with. Therefore
<ψ|-i∂/∂x|ψ> is equal to its own complex conjugate,
i.e. it is real, proving that -i∂/∂x is Hermitian.
We can now write the eigenvalue equation as:
-i∂/∂x|ψ> = k|ψ>
This has a solution of the form ψ = exp(ikx)
= coskx + isinkx
This gives us the first notion of wave-particle
duality as proposed by De Broglie.
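That exp(ikx) is an eigenfunction of -i∂/∂x with eigenvalue
k can be checked with finite differences (Python/numpy
sketch, not part of the original notes):

```python
import numpy as np

k = 2.0
x = np.linspace(0.0, 2.0 * np.pi, 2001)
psi = np.exp(1j * k * x)

# Apply p = -i d/dx numerically (np.gradient uses central
# differences at interior points).
p_psi = -1j * np.gradient(psi, x)

# In the interior, p psi = k psi to within the O(h^2) error of
# the finite difference: exp(ikx) is an eigenfunction.
assert np.allclose(p_psi[1:-1], k * psi[1:-1], atol=1e-4)
```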
Particle on a Ring
------------------
The Born Rule
-------------
The Born rule in QM postulates that the probability
that a measurement on a quantum system will yield a
given result is proportional to the square of the
magnitude of the particle's wavefunction at that point.
Therefore, the probability of finding the particle at
x, Pr(x) is given by:
Pr(x) = N^{2}ψ^{*}(x)ψ(x) = N^{2}|ψ|^{2}
Where N is a normalization factor.
Therefore, the probability of finding the particle
somewhere is:
∫dx Pr(x) = N^{2}∫dx ψ^{*}(x)ψ(x) = 1
^{allspace} ^{allspace}
However, if we try this with the above wavefunction,
ψ = exp(ikx), the integral over all space diverges
and no finite choice of N can normalize it. Given
that we are dealing with things on an atomic scale
it seems reasonable to restrict the analysis to a
closed loop of length, L, and radius, R = L/2π. For
single valued periodic functions, this imposes the
following constraint:
ψ(x + L) = ψ(x)
Thus,
exp(ip(x + L)/h) = exp(ipx/h)exp(ipL/h)
=> exp(ipL/h) = 1
We can write
pL/h = 2πm where m is an integer
∴ p = (2πh/L)m
We can also write pL/2π = mh or
pR = mh ... the angular momentum
Therefore, the momentum, p, is quantized and not
continuous. We now rewrite our integral as:
_{L} _{L}
∫dx P(x) = N^{2}∫dx ψ^{*}(x)ψ(x) = 1
^{0} ^{0}
Now, in order to ensure that the probability = 1
we need to NORMALIZE ψ. Therefore, we must
have:
_{L}
N^{2}∫dx = 1
^{0}
Or,
N = 1/√L
Therefore, we are now in a position to write ψ
as:
ψ_{m}(x) = (1/√L)exp(2πimx/L) or equivalently
ψ_{p}(x) = (1/√L)exp(ipx/h)
Where the subscript denotes whether the state is
labeled by the integer m or by the momentum p.
These are eigenfunctions of the momentum operator,
with eigenvalues p = (2πh/L)m.
Note that these states are orthogonal to each
other. Therefore,
∫dx ψ_{m}*ψ_{n} = δ_{mn}
The probability, Pr_{p}, that a state is occupied is
given by:
Pr_{p} = |α_{p}|^{2}
Likewise,
Pr_{m} = |α_{m}|^{2}
Computation of α_{m}
-----------------
From before we had:
α_{m} = ∫ψ_{m}(x)^{*}ψ(x)dx
    = (1/√L)∫exp(-2πimx/L)ψ(x)dx
Expanding ψ(x) = Σ_{n}α_{n}(1/√L)exp(2πinx/L):
α_{m} = Σ_{n}α_{n}(1/L)∫exp(-2πimx/L)exp(2πinx/L)dx
    = α_{n} when n = m
    = 0 otherwise
For n ≠ m, the real and imaginary parts of the
exponential oscillate over |n - m| full cycles and
integrate to 0 (see the orthogonality theorem in
the Appendix).
Example
-------
Consider the following wavefunction.
ψ = Ncos(6πx/L)
_{L}
N^{2}∫dx cos^{2}(6πx/L) = 1
^{0}
=> N = √(2/L)
    _{L}
α_{m} = ∫dx (1/√L)exp(-2πimx/L)(√(2/L))cos(6πx/L)
    ^{0}
We could solve this integral, but there is a quicker
way to get at the answer! cos(6πx/L) can be written as:
cos(6πx/L) = [exp(6πix/L) + exp(-6πix/L)]/2
We can now just compare this with
ψ = (1/√L)Σ_{m}α_{m}exp(2πimx/L)
Therefore,
ψ = √(2/L)(1/2)[exp(6πix/L) + exp(-6πix/L)]
  = √(1/2L)[exp(6πix/L) + exp(-6πix/L)]
  = (1/√2)exp(6πix/L)/√L + (1/√2)exp(-6πix/L)/√L
From this we see that m = 3 and α_{3} = α_{-3} = 1/√2
Therefore,
ψ_{3} = exp(6πix/L)/√L and α_{3} = 1/√2
ψ_{-3} = exp(-6πix/L)/√L and α_{-3} = 1/√2
So that:
ψ = (1/√2)[exp(6πix/L)/√L + exp(-6πix/L)/√L]
Note that ψ_{3} and ψ_{-3} are orthonormal:
            _{L}
∫ψ_{3}*ψ_{3} dx = (1/L)∫exp(-6πix/L)exp(6πix/L) dx = 1
            ^{0}
             _{L}
∫ψ_{-3}*ψ_{3} dx = (1/L)∫exp(6πix/L)exp(6πix/L) dx = 0
             ^{0}
Therefore we can expand ψ in terms of the basis
vectors ψ_{3} and ψ_{-3}. This is the principle of
SUPERPOSITION discussed later.
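These coefficients can be confirmed by numerical
integration (Python/numpy sketch, not part of the original
notes; L is set to 1 for convenience):

```python
import numpy as np

L = 1.0
N = 4096
dx = L / N
x = np.arange(N) * dx                                # grid over [0, L)
psi = np.sqrt(2.0 / L) * np.cos(6.0 * np.pi * x / L) # normalized psi

def alpha(m):
    # alpha_m = integral of e_m(x)* psi(x) dx over one period,
    # with basis function e_m(x) = exp(2 pi i m x / L) / sqrt(L).
    e_m = np.exp(2j * np.pi * m * x / L) / np.sqrt(L)
    return np.sum(e_m.conj() * psi) * dx

# Only m = +3 and m = -3 contribute, each with amplitude
# 1/sqrt(2); the probabilities sum to 1.
assert np.isclose(alpha(3), 1 / np.sqrt(2))
assert np.isclose(alpha(-3), 1 / np.sqrt(2))
assert np.isclose(alpha(2), 0.0)
assert np.isclose(abs(alpha(3))**2 + abs(alpha(-3))**2, 1.0)
```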
Let's now look at the momentum eigenvalues associated
with these eigenvectors.
-ih∂ψ_{3}/∂x = (-ih)(1/√L)(6πi/L)exp(6πix/L)
           = (6πh/L)ψ_{3}
Likewise,
-ih∂ψ_{-3}/∂x = (-6πh/L)ψ_{-3}
So the momentum eigenvalues are ±6πh/L, in agreement
with p = (2πh/L)m for m = ±3.
We could do exactly the same thing to get the
energy eigenvalues as follows:
∂^{2}ψ_{3}/∂x^{2} = (1/√L)(6πi/L)^{2}exp(6πix/L)
_{ }         = -(36π^{2}/L^{2})ψ_{3}
∂^{2}ψ_{-3}/∂x^{2} = (1/√L)(-6πi/L)^{2}exp(-6πix/L)
_{ }          = -(36π^{2}/L^{2})ψ_{-3}
If we multiply both sides by -h^{2}/2m we get:
E_{3} = E_{-3} = 36π^{2}h^{2}/2mL^{2} = 18π^{2}h^{2}/mL^{2}
Both energies are positive (kinetic energy cannot be
negative) and equal: ψ_{3} and ψ_{-3} are DEGENERATE
in energy.
Matrix Element
--------------
Recall:
O_{ij} = ∫e_{i}*Oe_{j} dx
Therefore, with e_{1} = ψ_{3} and e_{2} = ψ_{-3}:
p_{11} = <ψ_{3}|-ih∂/∂x|ψ_{3}>
        _{L}
     = (1/L)∫exp(-6πix/L)(6πh/L)exp(6πix/L) dx
        ^{0}
        _{L}
     = (6πh/L^{2})∫1 dx
        ^{0}
     = 6πh/L
p_{12} = 0 (orthogonal)
p_{22} = <ψ_{-3}|-ih∂/∂x|ψ_{-3}>
        _{L}
     = (1/L)∫exp(6πix/L)(-6πh/L)exp(-6πix/L) dx
        ^{0}
     = -6πh/L
p_{21} = 0 (orthogonal)
This is a diagonal matrix consisting of the
eigenvalues.
        -               -
p_{ij} = | 6πh/L   0     |
_{ }     | 0      -6πh/L |
        -               -
We can see this in a different way by using
the eigenvalue equation and assuming that the
original matrix operator was diagonalizable
with the eigenvalues being the diagonal elements.
  -            - -     -       -     -
| p_{11}  0    || ψ_{3} | = p_{3} | ψ_{3} |
| 0    p_{22}  || 0     |      | 0     |
  -            - -     -       -     -
  -            - -      -        -      -
| p_{11}  0    || 0      | = p_{-3}| 0      |
| 0    p_{22}  || ψ_{-3} |       | ψ_{-3} |
  -            - -      -        -      -
Which gives:
  -                      -
| p_{11} - p    0         | = 0
| 0         p_{22} - p    |
  -                      -
Therefore:
(p_{11} - p)(p_{22} - p) = 0
(6πh/L - p)(-6πh/L - p) = 0
p^{2} = 36π^{2}h^{2}/L^{2}
Or,
p = ±6πh/L
Therefore the momentum eigenvalues, p, are the
eigenvalues of the p matrix.
Likewise, we could do the same thing for the
energy eigenvalues.
H_{11} = <ψ_{3}|-(h^{2}/2m)∂^{2}/∂x^{2}|ψ_{3}>
        _{L}
     = (1/L)∫exp(-6πix/L)(18π^{2}h^{2}/mL^{2})exp(6πix/L) dx
        ^{0}
     = 18π^{2}h^{2}/mL^{2}
H_{12} = 0 (orthogonal)
H_{22} = <ψ_{-3}|-(h^{2}/2m)∂^{2}/∂x^{2}|ψ_{-3}>
     = 18π^{2}h^{2}/mL^{2}
H_{21} = 0 (orthogonal)
This is a diagonal matrix consisting of the
eigenvalues which means that the original
Hermitian matrix operator was diagonalizable.
  -                                 -
| 18π^{2}h^{2}/mL^{2}       0          |
| 0           18π^{2}h^{2}/mL^{2}      |
  -                                 -
Again we could use the eigenvalue equation, which
gives:
(H_{11} - E)(H_{22} - E) = 0
(18π^{2}h^{2}/mL^{2} - E)^{2} = 0
E = 18π^{2}h^{2}/mL^{2} (doubly degenerate)
Therefore the energy eigenvalues, E, are the
eigenvalues of the H matrix. Note that, unlike the
momentum, the energy does not distinguish between
ψ_{3} and ψ_{-3}.
Spin Wavefunctions
------------------
Up until now we have been talking about SPATIAL
wavefunctions, ψ. There is another type of
wavefunction called a SPIN wavefunction, χ. A
spatial wave function can be considered as a
vector in an infinite dimensional complex vector
space. In contrast, a spin wave function is a
vector in the spin vector space which is finite
dimensional. For spin 1/2 particles, the spin
space is 2 dimensional and therefore spanned by
2 base vectors - 'up' and 'down'. The combined
wavefunction of the system is the tensor product
of the spatial space and the spin space i.e. the
full wavefunction has spatial part and spin part.
ψ_{TOTAL} = χ ⊗ ψ
The tensor product factorization is only possible
if the Hamiltonian can be split into the sum of
spatial and spin terms. In other words, spin and
orbital angular momentum must be separable. The
factorization fails when the particle interacts
with an external field (e.g. a magnetic field) or
couples to a space dependent quantity (e.g. spin
orbit coupling).
Spatial wavefunctions are solutions to Schrodinger's
Equation. The property of spin comes as a result
of relativistic effects and is manifest in the
Dirac equation in the form of the DIRAC SPINORS.
It corresponds to a new degree of freedom associated
with the internal angular momentum state of the
particle. Here we treat spin non-relativistically,
with the above assumptions.
We can write the spin 1/2 wavefunction in general as:
χ_{z} = α|+> + β|->
Where,
- - - -
|+> = | 1 | |-> = | 0 |
| 0 | | 1 |
- - - -
- - - - - -
χ = α| 1 | + β| 0 | = | α |
| 0 | | 1 | | β |
- - - - - -
<χ|χ> = 1
(α*<+| + β*<-|)(α|+> + β|->) = 1
α*α<+|+> + α*β<+|-> + β*α<-|+> + β*β<-|-> = 1
|α|^{2} + |β|^{2} = 1
|α|^{2} = |<+|χ>|^{2} is the probability of finding the
system in the state |+>; likewise |β|^{2} = |<-|χ>|^{2}
for the state |->.
Now consider the eigenvalue equation: H|χ> = E|χ>
Suppose the Hamiltonian is the Hermitian matrix
(with h and g real):
      -     -
H = | h g |
    | g h |
      -     -
The eigenvalues can be determined from:
det(H - EI) = 0
- -
det| h - E g | = 0
| g h - E |
- -
This leads to:
E_{1} = h + g
- -
v_{1} = | 1 |
_{ } | 1 |
- -
The normalized vector is:
|v_{1}| = √[1^{2} + 1^{2}] = √2
Therefore,
- -
v_{1} = | 1/√2 |
_{ } | 1/√2 |
- -
E_{2} = h - g
- -
v_{2} = | 1 |
_{ } | -1 |
- -
The normalized vector is:
|v_{2}| = √[1^{2} + (-1)^{2}] = √2
Therefore,
- -
v_{2} = | 1/√2 |
_{ } | -1/√2 |
- -
_{ } - - - -
<v_{1}|v_{2}> = v_{1}.v_{2} = | 1/√2 1/√2 || 1/√2  | = 0
_{ } - - | -1/√2 |
_{ } - -
So the eigenvectors are orthogonal.
We can construct the states |+> and |-> as linear
combinations of the eigenvectors. The coefficients
are the projections onto v_{1} and v_{2}:
  -            - -   -
| 1/√2  1/√2 || 1 | = <v_{1}|+> = 1/√2
  -            - | 0 |
                  -   -
  -             - -   -
| 1/√2  -1/√2 || 1 | = <v_{2}|+> = 1/√2
  -             - | 0 |
                   -   -
Therefore,
       -   -        -      -        -       -
|+> = | 1 | = 1/√2| 1/√2 | + 1/√2| 1/√2  |
      | 0 |       | 1/√2 |       | -1/√2 |
       -   -        -      -        -       -
Likewise <v_{1}|-> = 1/√2 and <v_{2}|-> = -1/√2, so:
       -   -        -      -        -       -
|-> = | 0 | = 1/√2| 1/√2 | - 1/√2| 1/√2  |
      | 1 |       | 1/√2 |       | -1/√2 |
       -   -        -      -        -       -
So the general state χ = α|+> + β|->, with
|α|^{2} + |β|^{2} = 1, can be expanded in the energy
eigenbasis as:
χ = [(α + β)/√2]v_{1} + [(α - β)/√2]v_{2}
- - - - - -
H_{11} = | 1/√2 1/√2 || h g || 1/√2 | = h + g
_{ } - - | g h || 1/√2 |
- - - -
H_{12} = 0
H_{21} = 0
- - - - - -
H_{22} = | 1/√2 -1/√2 || h g || 1/√2 | = h - g
_{ } - - | g h || -1/√2 |
- - - -
Thus, we get:
- -
| h + g 0 |
| 0 h - g |
- -
This is a real symmetric matrix with diagonal equal to
the eigenvalues.
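The 2x2 eigenproblem can be confirmed numerically for
example values of h and g (Python/numpy sketch, not part of
the original notes):

```python
import numpy as np

h, g = 2.0, 0.5        # example values; any real h, g work
H = np.array([[h, g],
              [g, h]])

lam, vecs = np.linalg.eigh(H)            # ascending eigenvalues
assert np.allclose(lam, [h - g, h + g])

# Transforming H into the eigenvector basis diagonalizes it.
assert np.allclose(vecs.T @ H @ vecs, np.diag(lam))
```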
Eigenspinors
------------
In the case of a single spin 1/2 particle like the
electron, eigenspinors are eigenvectors of the Pauli
matrices. Each set of eigenspinors forms a complete,
orthonormal basis. This means that any state can be
written as a linear combination of the basis spinors.
Each of the (Hermitian) Pauli matrices has 2 eigenvalues,
+1 and -1. The corresponding normalized eigenspinors
are:
- - - - - -
σ_{x} = | 0 1 | with |+> = 1/√2| 1 | and |-> = 1/√2| -1 |
_{ } | 1 0 | | 1 | | 1 |
- - - - - -
- - - - - -
σ_{y} = | 0 -i | with |+> = 1/√2| -i | and |-> = 1/√2| i |
_{ } | i 0 | | 1 | | 1 |
- - - - - -
- - - - - -
σ_{z} = | 1 0 | with |+> = | 1 | and |-> = | 0 |
_{ } | 0 -1 | | 0 | | 1 |
- - - - - -
Therefore, when we wrote:
χ_{z} = α|+> + β|->
Where,
- - - -
|+> = | 1 | |-> = | 0 |
| 0 | | 1 |
- - - -
We were writing that our general state is a linear
combination of the normalized eigenspinors of the σ_{z}
matrix. We could have picked the eigenspinors for
the x or y directions as a basis, but the z direction
is chosen by convention.
It is worth noting that if we had chosen h = 0 and
g = 1 in the above analysis, it would be equivalent
to using the σ_{x} matrix as the operator.
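The Pauli eigenspinors can be verified numerically
(Python/numpy sketch, not part of the original notes):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

for sigma in (sx, sy, sz):
    lam, vecs = np.linalg.eigh(sigma)
    # Each Pauli matrix has eigenvalues -1 and +1...
    assert np.allclose(lam, [-1.0, 1.0])
    # ...and its eigenspinors form an orthonormal basis.
    assert np.allclose(vecs.conj().T @ vecs, np.eye(2))

# Example: chi = (-i, 1)/sqrt(2) is the +1 eigenspinor of sigma_y.
chi = np.array([-1j, 1.0]) / np.sqrt(2)
assert np.allclose(sy @ chi, chi)
```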
Matrix Element:
Consider the 2 states:
|1> = √(1/2)(|-> - i|+>)
|2> = √(1/2)(|-> + i|+>)
These are normalized (the coefficients satisfy
|α|^{2} + |β|^{2} = 1). Where,
       -   -            -   -
|+> = | 1 | and |-> = | 0 |
      | 0 |           | 1 |
       -   -            -   -
- - - -
|1> = √(1/2)| 0 | - i√(1/2)| 1 |
| 1 | | 0 |
- -
- -
= √(1/2)| -i |
| 1 |
- -
Similarly,
- - - -
|2> = √(1/2)| 0 | + i√(1/2)| 1 |
| 1 | | 0 |
- -
- -
= √(1/2)| i |
| 1 |
- -
- - - -
The corresponding bras are <+| = | 1 0 | and <-| = | 0 1 |
- - - -
- -
<1| = √(1/2)| i 1 |
- -
- -
<2| = √(1/2)| -i 1 |
- -
Now consider an operator, S, that does the following:
S|+> = (+ih/2)|-> and S|-> = (-ih/2)|+>
The matrix elements are found from:
- - - -
| <+|S|+> <+|S|-> | -> | (ih/2)<+|-> (-ih/2)<+|+> |
| <-|S|+> <-|S|-> |    | (ih/2)<-|->  (-ih/2)<-|+> |
- - - -
- -
= | 0 -ih/2 |
| ih/2 0 |
- -
Therefore,
S = S_{y} = (h/2)σ_{y}
The eigenvalues of S are given by:
S|1> = λ|1>
- - - -
= | 0 -ih/2 || -i√(1/2) |
| ih/2 0 || √(1/2) |
- - - -
- -
= (-ih/2)| √(1/2) |
| i√(1/2) |
- -
       -          -
= (h/2)| -i√(1/2) |
       | √(1/2)   |
       -          -
= (h/2)|1>
So λ = +h/2. A similar calculation gives
S|2> = -(h/2)|2>, so the eigenvalues of S are ±h/2.
Quantum Superposition
---------------------
Quantum superposition is a fundamental principle
of QM that says we can expand any wavefunction
in terms of a complete set of basis vectors that
correspond to the eigenstates of the system. The
physical system exists partly in all of these
possible eigenstates simultaneously but, when
observed, it gives a result corresponding to only
one of the possible configurations. This is
referred to as the 'collapse' of the wave
function. Summarizing:
- Initially, the wave function is a superposition
of the possible eigenstates (eigenvectors).
- After measurement (interaction with an observer)
  the wave function collapses into a single
  eigenstate.
Therefore,
collapse
Superposition of eigenstates => single eigenstate
For the spatial wavefunction:
ψ(x) = Σ_{m}A_{m}ψ_{m}(x) => a single eigenstate ψ_{m}(x)
For the spin 1/2 system the wavefunction:
χ_{z} = α|+> + β|-> => |+> or |->
Where |α|^{2} + |β|^{2} = 1
The notion of a wavefunction collapse can be
seen in the famous Schrodinger's cat paradox.
A cat is placed in an opaque chamber with a
device containing cyanide that can be released
by a mechanism triggered by the radioactive
decay of a minute amount of Uranium (also
placed inside the chamber). Since the decay
process is random, an observer does not know
if the cat is dead or alive at any given moment.
In QM this situation represents the superposition
of the 2 possible states. When the chamber
is opened, the superposition is lost, and the
cat is seen to be either dead or alive. Thus,
the observation or measurement itself affects
an outcome, so that the outcome as such does
not exist unless the measurement is made.
(That is, there is no single outcome unless it
is observed.)
Summary of QM Postulates
------------------------
1. States of a system are represented by
vectors (functions) in a complex vector
space (HILBERT SPACE).
2. Observables are represented by Hermitian
operators.
3. The values that the observable can have
are eigenvalues of the Hermitian operator.
4. States that have definite (certain) values
   are the eigenvectors.
5. If a system is prepared in state |ψ(x)> (not
   an eigenvector) then the probability of
   finding the eigenvalue λ with corresponding
   eigenvector |λ> is:
   P(λ) = |A_{λ}|^{2}
   Where A_{λ} = <λ|ψ> = ∫ψ_{λ}(x)^{*}ψ(x)dx. For the
   particle on a ring, for example, the eigenfunctions
   are ψ_{m}(x) = (1/√L)exp(2πimx/L).
Appendix
--------
Fourier Series Using Complex Exponentials
-----------------------------------------
The Fourier series is used to represent a periodic
function, f(x) by:
_{∞}
f(x) = a_{0} + Σa_{n}cosω_{n}x + b_{n}sinω_{n}x
^{n=1}
Where ω_{n} = 2πn/T and,
     _{T}
a_{0} = (1/T)∫f(t)dt
     ^{0}
     _{T}
a_{n} = (2/T)∫dt f(t)cosω_{n}t
     ^{0}
     _{T}
b_{n} = (2/T)∫dt f(t)sinω_{n}t
     ^{0}
We can introduce complex exponentials as follows:
cosω_{n}x = [exp(iω_{n}x) + exp(-iω_{n}x)]/2
and
sinω_{n}x = [exp(iω_{n}x) - exp(-iω_{n}x)]/2i
Therefore,
_{∞}
f(x) = a_{0} + Σa_{n}[exp(iω_{n}x) + exp(-iω_{n}x)]/2 - ib_{n}[exp(iω_{n}x) - exp(-iω_{n}x)]/2
^{n=1}
_{∞}
= a_{0} + Σ[(a_{n} - ib_{n})/2]exp(iω_{n}x) + [(a_{n} + ib_{n})/2]exp(-iω_{n}x)
^{n=1}
_{∞}
= a_{0} + Σ[c_{n}exp(iω_{n}x) + c_{-n}exp(-iω_{n}x)]
^{n=1}
Where c_{0} = a_{0} c_{n} = (a_{n} - ib_{n})/2 c_{-n} = (a_{n} + ib_{n})/2
We can write the above as:
_{+∞}
f(x) = Σc_{n}exp(iω_{n}x)
^{-∞}
_{T/2}
Where c_{n} = (1/T)∫dx f(x)exp(-iω_{n}x)
^{-T/2}
Proof ("Fourier's trick"):
_{T/2}                  _{+∞} _{T/2}
∫dx f(x)exp(-iω_{m}x) =  Σ  ∫dx c_{n}exp(iω_{n}x)exp(-iω_{m}x)
^{-T/2}                 ^{-∞} ^{-T/2}
                      = c_{m}T
Since,
_{T/2}
∫dx exp(iω_{n}x)exp(-iω_{m}x) = Tδ_{mn} (orthogonality theorem)
^{-T/2}
Therefore,
          _{T/2}
c_{m} = (1/T)∫dx f(x)exp(-iω_{m}x)
          ^{-T/2}
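Fourier's trick is easy to verify numerically for a function
with known coefficients (Python/numpy sketch, not part of
the original notes):

```python
import numpy as np

T = 2.0 * np.pi
N = 4096
t = np.arange(N) * (T / N)               # one full period

# Test function: a_0 = 1, a_1 = 1, b_2 = 1 (w_n = 2 pi n / T).
f = 1.0 + np.cos(t) + np.sin(2.0 * t)

def c(n):
    # c_n = (1/T) * integral over one period of f(t) exp(-i w_n t) dt
    w_n = 2.0 * np.pi * n / T
    return np.sum(f * np.exp(-1j * w_n * t)) * (T / N) / T

# c_0 = a_0, c_n = (a_n - i b_n)/2, c_-n = (a_n + i b_n)/2:
assert np.isclose(c(0), 1.0)
assert np.isclose(c(1), 0.5)
assert np.isclose(c(2), -0.5j)
assert np.isclose(c(-2), 0.5j)
```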
Fourier Series to Fourier Transform
-----------------------------------
The Fourier transform is the extension of the
Fourier series that results when the period of the
represented function, f(x), is lengthened and
allowed to approach infinity, so that f(x) need not
repeat outside the interval [-T/2,T/2].
The Fourier transform is then defined by:
_{∞} _{T/2}
f(ω) = ∫dx f(x)exp(-iωx) = lim ∫dx f(x)exp(-iωx)
^{-∞} ^{T->∞} ^{-T/2}
Comparing this to the expression for c_{n} above, it
follows that:
c_{n} = (1/T)f(ω_{n})
Where ω_{n} = 2πn/T. Adjacent frequencies are
separated by:
Δω = 2π(n + 1)/T - 2πn/T = 2π/T
So Δω -> 0 as T -> ∞.
             _{∞}
f(x) = lim    Σc_{n}exp(iω_{n}x)
     ^{T->∞} ^{-∞}
                  _{∞}
     = lim        Σ(1/T)f(ω_{n})exp(iω_{n}x)
     ^{T->∞}     ^{-∞}
                      _{∞}
     = lim   (1/2π)   ΣΔω f(ω_{n})exp(iω_{n}x)
     ^{Δω->0}        ^{-∞}
This is the Riemann sum that approximates an
integral by a finite sum. Therefore, we can
write:
              _{∞}
f(x) = (1/2π)∫dω f(ω)exp(iωx)
              ^{-∞}
For a Fourier series, c_{n} can be thought of as the
amount of the wave present in the Fourier series
of f(x). Similarly, as seen above, the Fourier
transform can be thought of as a function that
measures how much of each individual frequency
is present in f(x), and these waves can be
recombined by using an integral (or 'continuous
sum') to reproduce the original function.
In summary,
              _{+∞}
f(x) = (1/2π)∫dω f(ω)exp(iωx) is just the
              ^{-∞}           _{∞}
continuous analog of f(x) = Σc_{n}exp(iω_{n}x)
                            ^{-∞}
Normalization
-------------
The normalization factor, N, has to satisfy:
N^{2}∫dx |f(x)|^{2} = 1
^{allspace}
Fourier Series/Transform in QM/QFT
----------------------------------
The normalization factor, N, has to satisfy:
N^{2}∫dx ψ*(x)ψ(x) = 1
^{allspace}
For eigenfunctions this becomes:
_{L}
N^{2}∫dx exp(-ikx)exp(ikx) = 1
^{0}
_{L}
N^{2}∫dx = 1
^{0}
_{L}
N^{2}|x| = N^{2}L = 1 ∴ N = 1/√L
^{0}
For a particle on a ring of unit radius, L = 2π.
Therefore:
N = 1/√(2π)
Now that we know exp(ikx) and exp(-ikx) carry
normalization factors of 1/√(2π), we can write the
Fourier transform pair relating the position and
momentum representations as:
_{∞}
ψ(x) = (1/√(2π))∫dk φ(k)exp(ikx)
^{-∞}
_{∞}
φ(k) = (1/√(2π))∫dx ψ(x)exp(-ikx)
^{-∞}
Note: In calculations φ(k) and ψ(x) also need
to be normalized to get the correct result.
We can write:
_{∞}
ψ(x) = (1/√(2π))∫dk φ(k)exp(ikx)
^{-∞}
_{∞} _{∞}
= (1/√(2π))∫dk exp(ikx) (1/√(2π))∫dx' ψ(x')exp(-ikx')
^{-∞} ^{-∞}
_{∞} _{∞}
= ∫dx' ψ(x') (1/2π)∫dk exp(ik(x - x'))
^{-∞} ^{-∞}
_{∞}
= ∫dx' ψ(x')δ(x - x')
^{-∞}
= ψ(x)
Where δ(x - x') is the Dirac Delta function.
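As a quick numerical sketch (not part of the notes; it assumes NumPy is available), the transform pair above can be spot-checked with a Gaussian, which is its own transform under this symmetric 1/√(2π) convention:

```python
import numpy as np

# Spot-check of the symmetric transform pair: with the 1/sqrt(2*pi)
# convention, psi(x) = pi**(-1/4) exp(-x^2/2) is its own transform.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)

def phi(k):
    # Riemann-sum approximation to (1/sqrt(2*pi)) Int dx psi(x) exp(-i*k*x)
    return np.sum(psi * np.exp(-1j * k * x)) * dx / np.sqrt(2 * np.pi)

for k in (0.0, 0.5, 1.0, 2.0):
    exact = np.pi**-0.25 * np.exp(-k**2 / 2)
    assert abs(phi(k) - exact) < 1e-8
```

The Riemann sum converges very rapidly here because the Gaussian decays to zero well inside the truncated integration range.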
Examples
--------
Ex 1. Consider ψ(x) = cos(6πx/L)
N^{2}∫dx ψ*(x)ψ(x) = 1
^{allspace}
_{L}
N^{2}∫dx (cos(6πx/L))*cos(6πx/L) = 1
^{0}
_{L}
N^{2}∫dx cos^{2}(6πx/L) = 1
^{0}
N^{2}L/2 = 1 ∴ N = √(2/L) = √(1/π) (using L = 2π)
Now k = 2π/λ and L = nλ so k_{n} = 2πn/L. For the
ring, L = 2π, so k_{n} = n and 6πx/L = 3x = k_{3}x.
Therefore,
          _{∞}
(1/√(2π))∫dx √(1/π)cos(3x)exp(-ik_{n}x)
          ^{-∞}
= (1/√(2π))√(1/π)∫dx (1/2){exp(i3x) + exp(-i3x)}exp(-ik_{n}x)
= (1/√(2π))√(1/π)(1/2)∫dx {exp(-i(k_{n} - 3)x) + exp(-i(k_{n} + 3)x)}
= (1/√(2π))√(1/π)(1/2)2π{δ(k_{n} - 3) + δ(k_{n} + 3)}
= (1/√2){δ(k_{n} - 3) + δ(k_{n} + 3)}
Therefore, ψ(x) can be written as:
_{∞}
ψ(x) = Σc_{n}exp(ik_{n}x) = (1/√2)exp(ik_{3}x) + (1/√2)exp(-ik_{3}x)
^{-∞}
= (1/√2)|exp(ik_{3}x)> + (1/√2)|exp(ik_{-3}x)>
When a measurement is made, ψ(x) will collapse
to the eigenstate with either k_{n} = +3 or k_{n} = -3
with equal probability (= (1/√2)^{2} = 1/2). No
other possibilities are allowed.
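The result of Ex 1 can be verified numerically; this is a sketch (not from the notes, assuming NumPy) that projects the normalized ψ(x) = √(1/π)cos(3x) onto the ring basis states exp(inx)/√(2π) for L = 2π:

```python
import numpy as np

# Expansion coefficients of psi(x) = sqrt(1/pi) cos(3x) on the ring
# (L = 2*pi): only n = +3 and n = -3 survive, each with c_n = 1/sqrt(2).
N = 1024
x = np.arange(N) * 2 * np.pi / N      # periodic grid on [0, 2*pi)
dx = 2 * np.pi / N
psi = np.sqrt(1 / np.pi) * np.cos(3 * x)

def c(n):
    # overlap with the normalized basis state exp(i*n*x)/sqrt(2*pi)
    return np.sum(psi * np.exp(-1j * n * x)) * dx / np.sqrt(2 * np.pi)

assert abs(c(3) - 1 / np.sqrt(2)) < 1e-9
assert abs(c(-3) - 1 / np.sqrt(2)) < 1e-9
assert all(abs(c(n)) < 1e-9 for n in (0, 1, 2, 4, 5))
```

The rectangle rule on a periodic grid is exact for these trigonometric integrands, so the coefficients come out at machine precision: |c_{±3}|^2 = 1/2 each, zero for every other n.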
Ex 2. Consider ψ(x) = exp(-α|x|), a double-sided
exponential peaked at x = 0.
N^{2}∫dx exp(-α|x|)*exp(-α|x|) = 1
N^{2}∫dx exp(-2α|x|) = 1
      _{0}                 _{∞}
N^{2}[∫dx exp(-2α(-x)) + ∫dx exp(-2α(+x))] = 1
      ^{-∞}                ^{0}
      _{0}              _{∞}
N^{2}{|exp(2αx)/2α| + |exp(-2αx)/(-2α)|} = 1
      ^{-∞}             ^{0}
N^{2}{[1/2α - 0] + [0 + 1/2α]} = 1
N^{2}/α = 1 ∴ N = √α
 _{∞}
∫dx (√α)exp(-α|x|)(1/√(2π))exp(-ikx)
^{-∞}
          _{0}                     _{∞}
= √(α/2π)[∫dx exp(αx)exp(-ikx) + ∫dx exp(-αx)exp(-ikx)]
          ^{-∞}                    ^{0}
           _{0}                          _{∞}
= √(α/2π){|exp(x(α - ik))/(α - ik)| + |exp(-x(α + ik))/(-(α + ik))|}
           ^{-∞}                         ^{0}
= √(α/2π)[1/(α - ik) - 0 + 0 + 1/(α + ik)]
= √(α/2π)[(α + ik) + (α - ik)]/(α + ik)(α - ik)
= √(α/2π)2α/(α^{2} + k^{2})
Since k is continuous here, the expansion of ψ(x)
is an integral rather than a sum:
                _{∞}
ψ(x) = (1/√(2π))∫dk φ(k)exp(ikx), φ(k) = √(α/2π)(2α/(α^{2} + k^{2}))
                ^{-∞}
This means every k is a possible result of a
momentum measurement, since |φ(k)|^{2} does not
vanish for any (finite) k. However, the maximum
probability corresponds to k = 0.
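The closed form found in Ex 2 can be spot-checked numerically; a sketch (not from the notes, assuming NumPy) with an arbitrary α:

```python
import numpy as np

# Check the transform of psi(x) = sqrt(alpha) exp(-alpha*|x|) against
# the closed form phi(k) = sqrt(alpha/(2*pi)) * 2*alpha/(alpha^2 + k^2).
alpha = 1.5
x = np.linspace(-40.0, 40.0, 80001)
dx = x[1] - x[0]
psi = np.sqrt(alpha) * np.exp(-alpha * np.abs(x))

def phi(k):
    # Riemann-sum approximation to (1/sqrt(2*pi)) Int dx psi(x) exp(-i*k*x)
    return np.sum(psi * np.exp(-1j * k * x)) * dx / np.sqrt(2 * np.pi)

for k in (0.0, 0.7, 2.0):
    exact = np.sqrt(alpha / (2 * np.pi)) * 2 * alpha / (alpha**2 + k**2)
    assert abs(phi(k) - exact) < 1e-5
```

The kink at x = 0 slows convergence compared with a smooth integrand, but the fine grid keeps the error well below the tolerance.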
Potential Well
--------------
------------ --------- V(x)
| |
| |
| |
---------
0 L
Inside the well, V(x) = 0
The time independent Schrodinger equation is:
∂^{2}ψ/∂x^{2} + 2mEψ/h^{2} = 0
or,
∂^{2}ψ/∂x^{2} = -2mEψ/h^{2}
= -k^{2}ψ
Where k = √(2mE)/h
This has the solution:
ψ(x) = Asin(kx) + Bcos(kx)
     ≡ Cexp(ikx) + Dexp(-ikx)
The boundary conditions are:
ψ(0) = ψ(L) = 0
ψ(0) = Asin(0) + Bcos(0) = 0 => B = 0
and,
ψ(L) = Asin(kL) = 0 => A = 0 or sin(kL) = 0
In the latter case, this implies that k = nπ/L
Now, E = p^{2}/2m
= h^{2}k^{2}/2m
= n^{2}π^{2}h^{2}/2mL^{2}
We now find A by normalizing ψ(x):
_{L}
A^{2}∫dx sin^{2}(nπx/L) = 1
^{0}
A^{2}L/2 = 1 ∴ A = √(2/L)
ψ(x) = √(2/L)sin(nπx/L)
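The eigenfunctions just found can be sanity-checked numerically; a minimal sketch (not from the notes; assumes NumPy, and units with h-bar = m = 1) verifies that ψ_{n} satisfies the time-independent Schrodinger equation with E = n^{2}π^{2}/2L^{2}:

```python
import numpy as np

# Check (units with hbar = m = 1) that psi_n(x) = sqrt(2/L) sin(n*pi*x/L)
# satisfies -psi''/2 = E*psi with E = n^2 * pi^2 / (2*L^2).
L = 1.0
n = 2
x = np.linspace(0.0, L, 2001)[1:-1]   # interior points only
h = 1e-4                               # finite-difference step

def psi(y):
    return np.sqrt(2 / L) * np.sin(n * np.pi * y / L)

# central second difference approximates psi''
second = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
E = n**2 * np.pi**2 / (2 * L**2)
assert np.max(np.abs(-0.5 * second - E * psi(x))) < 1e-4
```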
The expansion coefficients follow from the overlap
with the eigenfunctions:
        _{L}
c_{m} = √(2/L)∫dx f(x)sin(mπx/L)
        ^{0}
With f(x) = √(2/L)sin(nπx/L) this becomes:
             _{L}
c_{m} = (2/L)∫dx sin(nπx/L)sin(mπx/L)
             ^{0}
Using sin(x)sin(y) = (1/2){cos(x - y) - cos(x + y)}:
            _{L}
= (2/L)(1/2)∫dx {cos(πx(n - m)/L) - cos(πx(n + m)/L)}
            ^{0}
            _{L}
= (2/L)(L/2)|sin(πx(n - m)/L)/(n - m)π - sin(πx(n + m)/L)/(n + m)π|
            ^{0}
= {sin(π(n - m))/(n - m)π - sin(π(n + m))/(n + m)π}
= 0 for n ≠ m
For n = m:
     _{L}
(2/L)∫dx sin^{2}(nπx/L) = 1
     ^{0}
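The orthonormality just derived can be confirmed numerically; a sketch (not from the notes, assuming NumPy) over a few n, m pairs:

```python
import numpy as np

# Orthonormality of the well eigenfunctions sqrt(2/L) sin(n*pi*x/L):
# the overlap integral on [0, L] is 1 for n = m and 0 for n != m.
L = 2.0
Npts = 10000
x = np.arange(Npts) * L / Npts     # uniform grid on [0, L)
dx = L / Npts

def u(n):
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

for n in range(1, 5):
    for m in range(1, 5):
        overlap = np.sum(u(n) * u(m)) * dx
        assert abs(overlap - (1.0 if n == m else 0.0)) < 1e-9
```

On a uniform grid the rectangle rule is exact for these sine products, so the Kronecker-delta structure appears at machine precision.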
       _{∞}
ψ(x) = Σc_{n}√(2/L)sin(nπx/L)
       ^{-∞}
The negative solutions give nothing new since
sin(-nπx/L) = -sin(nπx/L) and the sign can be
absorbed in the constant. Therefore,
       _{∞}
ψ(x) = Σc_{n}√(2/L)sin(nπx/L)
       ^{n=1}
Now,
√(2/L)sin(nπx/L) = √(2/L)(1/2i)(exp(inπx/L) - exp(-inπx/L))
                 = (-i/√2)(exp(inπx/L)/√L) + (i/√2)(exp(-inπx/L)/√L)
The momentum eigenvalue equation,
-ih∂/∂x|ψ(x)> = p|ψ(x)>
gives,
-ih∂/∂x|exp(inπx/L)/√L> = (nπh/L)|exp(inπx/L)/√L>
Similarly,
-ih∂/∂x|exp(-inπx/L)/√L> = -(nπh/L)|exp(-inπx/L)/√L>
This says that the probability of the particle
having a momentum of magnitude nπh/L is the same
for both the +x and -x directions
(|-i/√2|^{2} = |i/√2|^{2} = 1/2). We can also
compute the expectation value of p as:
      _{L}
<p> = ∫dx ψ*(x)(-ih∂/∂x)ψ(x)
      ^{0}
           _{L}
    = (2/L)∫dx sin(nπx/L)(-ih)(nπ/L)cos(nπx/L)
           ^{0}
    = 0
Since the integral of sin(nπx/L)cos(nπx/L) from
0 to L vanishes.
Note: This does not mean that the kinetic energy
is 0! The +x and -x momentum components cancel in
<p>, but <p^{2}> = (nπh/L)^{2} ≠ 0, so the kinetic
energy is <p^{2}>/2m = n^{2}π^{2}h^{2}/2mL^{2}.
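This last point can be checked numerically; a minimal sketch (not from the notes; assumes NumPy, units with h-bar = 1) confirms <p> = 0 while <p^2> = (nπ/L)^2:

```python
import numpy as np

# For psi_n(x) = sqrt(2/L) sin(n*pi*x/L): <p> vanishes, but
# <p^2> = (n*pi/L)^2 does not, so the kinetic energy is nonzero.
L = 1.0
n = 3
Npts = 10000
x = np.arange(Npts) * L / Npts
dx = L / Npts
psi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
dpsi = np.sqrt(2 / L) * (n * np.pi / L) * np.cos(n * np.pi * x / L)

# <p>   = Int psi* (-i d/dx) psi   (analytic derivative used on the grid)
# <p^2> = Int psi* (-d^2/dx^2) psi = (n*pi/L)^2 * Int psi* psi
p_avg = np.sum(psi * (-1j) * dpsi) * dx
p2_avg = (n * np.pi / L)**2 * np.sum(psi * psi) * dx

assert abs(p_avg) < 1e-9
assert abs(p2_avg - (n * np.pi / L)**2) < 1e-6
```

The sin·cos integrand averages to zero over the well, while the sin^2 integrand does not, which is exactly the cancellation described above.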