Tag Archives: separation of variables

Decoupling the two-particle Hamiltonian

Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Chapter 10, Exercise 10.1.3.

Shankar shows that, for a two-particle system, the state vector {\left|\psi\right\rangle } is an element of the direct product space {\mathbb{V}_{1\otimes2}}. Its evolution in time is determined by the Schrödinger equation, as usual, so that

\displaystyle i\hbar\left|\dot{\psi}\right\rangle =H\left|\psi\right\rangle =\left[\frac{P_{1}^{2}}{2m_{1}}+\frac{P_{2}^{2}}{2m_{2}}+V\left(X_{1},X_{2}\right)\right]\left|\psi\right\rangle \ \ \ \ \ (1)

The method by which this equation can be solved (if it can be solved, that is) depends on the form of the potential {V}. If the two particles interact only with some external potential, and not with each other, then {V} is composed of a sum of terms, each of which depends only on {X_{1}} or {X_{2}}, but not on both. In such cases, we can split {H} into two parts, one of which ({H_{1}}) depends only on operators pertaining to particle 1 and the other ({H_{2}}) on operators pertaining to particle 2. If the eigenvalues (allowed energies) of particle {i} are given by {E_{i}}, then the stationary states are direct products of the corresponding single particle eigenstates. That is, in general

\displaystyle H\left|E\right\rangle =\left(H_{1}+H_{2}\right)\left|E_{1}\right\rangle \otimes\left|E_{2}\right\rangle =\left(E_{1}+E_{2}\right)\left|E_{1}\right\rangle \otimes\left|E_{2}\right\rangle =E\left|E\right\rangle \ \ \ \ \ (2)

Thus the two-particle stationary state is {\left|E\right\rangle =\left|E_{1}\right\rangle \otimes\left|E_{2}\right\rangle }. Since a stationary state {\left|E_{i}\right\rangle } evolves in time according to

\displaystyle \left|\psi_{i}\left(t\right)\right\rangle =\left|E_{i}\right\rangle e^{-iE_{i}t/\hbar} \ \ \ \ \ (3)

the compound two-particle state evolves according to

\displaystyle \left|\psi\left(t\right)\right\rangle \displaystyle = \displaystyle e^{-iE_{1}t/\hbar}\left|E_{1}\right\rangle \otimes e^{-iE_{2}t/\hbar}\left|E_{2}\right\rangle \ \ \ \ \ (4)
\displaystyle \displaystyle = \displaystyle e^{-i\left(E_{1}+E_{2}\right)t/\hbar}\left|E\right\rangle \ \ \ \ \ (5)
\displaystyle \displaystyle = \displaystyle e^{-iEt/\hbar}\left|E\right\rangle \ \ \ \ \ (6)

In this case, the two particles are essentially independent of each other, and the compound state is just the product of the two separate one-particle states.
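As a quick illustration of this product structure (a numerical sketch of my own, not part of Shankar's text, using small random Hermitian matrices to stand in for {H_{1}} and {H_{2}}), we can build {H=H_{1}\otimes I+I\otimes H_{2}} and check that a direct product of single-particle eigenvectors is an eigenvector of the full Hamiltonian with eigenvalue {E_{1}+E_{2}}:

```python
import numpy as np

# Sketch: for a non-interacting two-particle Hamiltonian H = H1 (x) I + I (x) H2
# on the product space, the product of single-particle eigenvectors is an
# eigenvector of H with eigenvalue E1 + E2.
rng = np.random.default_rng(0)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

H1 = random_hermitian(4)   # stands in for P1^2/2m1 + V1(X1)
H2 = random_hermitian(3)   # stands in for P2^2/2m2 + V2(X2)

E1, U1 = np.linalg.eigh(H1)
E2, U2 = np.linalg.eigh(H2)

H = np.kron(H1, np.eye(3)) + np.kron(np.eye(4), H2)

ket = np.kron(U1[:, 0], U2[:, 0])          # |E> = |E1> (x) |E2>
print(np.allclose(H @ ket, (E1[0] + E2[0]) * ket))   # True
```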

If {H} is not separable, which will occur if {V} contains terms involving both {X_{1}} and {X_{2}} in the same term, we cannot, in general, reduce the system to the product of two one-particle systems. There are a couple of instances, however, where such a reduction can be done.

The first instance is if the potential is a function of {x_{2}-x_{1}} only; in other words, the interaction between the particles depends only on the distance between them. Shankar shows that in this case we can transform to centre-of-mass and relative coordinates, involving the total mass {M=m_{1}+m_{2}} and the reduced mass {\mu=m_{1}m_{2}/\left(m_{1}+m_{2}\right)}. We’ve already seen this problem solved by means of separation of variables. The result is that the state vector is the product of a vector for a free particle of mass {M} and a vector for a particle of reduced mass {\mu} moving in the potential {V}.
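As a side check (my own, using the standard relative momentum {p=\left(m_{2}p_{1}-m_{1}p_{2}\right)/\left(m_{1}+m_{2}\right)} conjugate to {x_{1}-x_{2}}; this isn’t spelled out in the post above), a couple of lines of sympy confirm that the two-particle kinetic energy splits into a centre-of-mass part plus a relative part:

```python
import sympy as sp

# Sketch: check that p1^2/2m1 + p2^2/2m2 = P^2/2M + p^2/2mu, where P is the
# total momentum and p the relative momentum conjugate to x1 - x2.
m1, m2 = sp.symbols('m1 m2', positive=True)
p1, p2 = sp.symbols('p1 p2', real=True)

M = m1 + m2
mu = m1 * m2 / M
P = p1 + p2                       # centre-of-mass momentum
p = (m2 * p1 - m1 * p2) / M       # relative momentum

T_original = p1**2 / (2 * m1) + p2**2 / (2 * m2)
T_separated = P**2 / (2 * M) + p**2 / (2 * mu)

print(sp.simplify(T_original - T_separated))   # 0
```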

Another case where we can decouple the Hamiltonian is in a system of harmonic oscillators. We’ve already seen this system solved for two masses in classical mechanics using diagonalization of the matrix describing the equations of motion. The classical Hamiltonian is

\displaystyle H=\frac{p_{1}^{2}}{2m}+\frac{p_{2}^{2}}{2m}+\frac{m\omega^{2}}{2}\left[x_{1}^{2}+x_{2}^{2}+\left(x_{1}-x_{2}\right)^{2}\right] \ \ \ \ \ (7)

 

The earlier solution involved introducing normal coordinates

\displaystyle x_{I} \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(x_{1}+x_{2}\right)\ \ \ \ \ (8)
\displaystyle x_{II} \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(x_{1}-x_{2}\right) \ \ \ \ \ (9)

and corresponding momenta

\displaystyle p_{I} \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(p_{1}+p_{2}\right)\ \ \ \ \ (10)
\displaystyle p_{II} \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(p_{1}-p_{2}\right) \ \ \ \ \ (11)

These normal coordinates are canonical as we can verify by calculating the Poisson brackets. For example

\displaystyle \left\{ x_{I},p_{I}\right\} \displaystyle = \displaystyle \sum_{i}\left(\frac{\partial x_{I}}{\partial x_{i}}\frac{\partial p_{I}}{\partial p_{i}}-\frac{\partial x_{I}}{\partial p_{i}}\frac{\partial p_{I}}{\partial x_{i}}\right)\ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle 1\ \ \ \ \ (13)
\displaystyle \left\{ x_{I},x_{II}\right\} \displaystyle = \displaystyle \sum_{i}\left(\frac{\partial x_{I}}{\partial x_{i}}\frac{\partial x_{II}}{\partial p_{i}}-\frac{\partial x_{I}}{\partial p_{i}}\frac{\partial x_{II}}{\partial x_{i}}\right)\ \ \ \ \ (14)
\displaystyle \displaystyle = \displaystyle 0 \ \ \ \ \ (15)

and so on, with the general result (where the indices {i,j} range over {I} and {II})

\displaystyle \left\{ x_{i},p_{j}\right\} \displaystyle = \displaystyle \delta_{ij}\ \ \ \ \ (16)
\displaystyle \left\{ x_{i},x_{j}\right\} \displaystyle = \displaystyle \left\{ p_{i},p_{j}\right\} =0 \ \ \ \ \ (17)
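These brackets are also easy to verify by machine. Here’s a short sympy sketch that computes the brackets of the normal coordinates with respect to the original phase-space variables:

```python
import sympy as sp

# Sketch: verify that the normal coordinates are canonical, i.e.
# {x_I, p_I} = {x_II, p_II} = 1 and all other brackets vanish.
x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2', real=True)

def poisson(f, g):
    # Poisson bracket with respect to the original variables (x1, p1), (x2, p2)
    return sum(sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
               for x, p in [(x1, p1), (x2, p2)])

xI  = (x1 + x2) / sp.sqrt(2)
xII = (x1 - x2) / sp.sqrt(2)
pI  = (p1 + p2) / sp.sqrt(2)
pII = (p1 - p2) / sp.sqrt(2)

print(poisson(xI, pI), poisson(xII, pII))                      # 1 1
print(poisson(xI, pII), poisson(xI, xII), poisson(pI, pII))    # 0 0 0
```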

We can invert the transformation to get

\displaystyle x_{1} \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(x_{I}+x_{II}\right)\ \ \ \ \ (18)
\displaystyle x_{2} \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(x_{I}-x_{II}\right) \ \ \ \ \ (19)

and

\displaystyle p_{1} \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(p_{I}+p_{II}\right)\ \ \ \ \ (20)
\displaystyle p_{2} \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(p_{I}-p_{II}\right) \ \ \ \ \ (21)

Inserting these into 7 we get

\displaystyle H \displaystyle = \displaystyle \frac{1}{4m}\left[\left(p_{I}+p_{II}\right)^{2}+\left(p_{I}-p_{II}\right)^{2}\right]+\ \ \ \ \ (22)
\displaystyle \displaystyle \displaystyle \frac{m\omega^{2}}{4}\left[\left(x_{I}+x_{II}\right)^{2}+\left(x_{I}-x_{II}\right)^{2}+4x_{II}^{2}\right]\ \ \ \ \ (23)
\displaystyle \displaystyle = \displaystyle \frac{p_{I}^{2}}{2m}+\frac{p_{II}^{2}}{2m}+\frac{m\omega^{2}}{2}\left(x_{I}^{2}+3x_{II}^{2}\right) \ \ \ \ \ (24)
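The substitution is easy to get wrong by hand, so here’s a short sympy check (my own) that plugging the inverse transformation into 7 really does give the decoupled form 24, including the factor of 3 multiplying {x_{II}^{2}}:

```python
import sympy as sp

# Sketch: substitute the inverse transformation into Hamiltonian 7 and
# confirm the decoupled form 24.
m, w = sp.symbols('m omega', positive=True)
xI, xII, pI, pII = sp.symbols('x_I x_II p_I p_II', real=True)

x1 = (xI + xII) / sp.sqrt(2)
x2 = (xI - xII) / sp.sqrt(2)
p1 = (pI + pII) / sp.sqrt(2)
p2 = (pI - pII) / sp.sqrt(2)

H = p1**2/(2*m) + p2**2/(2*m) + m*w**2/2 * (x1**2 + x2**2 + (x1 - x2)**2)
H_decoupled = pI**2/(2*m) + pII**2/(2*m) + m*w**2/2 * (xI**2 + 3*xII**2)

print(sp.expand(H - H_decoupled))   # 0
```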

We can now substitute the usual quantum mechanical operators to get the quantum Hamiltonian:

\displaystyle H=\frac{1}{2m}\left(P_{I}^{2}+P_{II}^{2}\right)+\frac{m\omega^{2}}{2}\left(X_{I}^{2}+3X_{II}^{2}\right) \ \ \ \ \ (25)

In the coordinate basis, this is

\displaystyle H=-\frac{\hbar^{2}}{2m}\left(\frac{\partial^{2}}{\partial x_{I}^{2}}+\frac{\partial^{2}}{\partial x_{II}^{2}}\right)+\frac{m\omega^{2}}{2}\left(x_{I}^{2}+3x_{II}^{2}\right) \ \ \ \ \ (26)

 

The Hamiltonian is now decoupled and can be solved by separation of variables.
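Since 26 describes two independent oscillators with frequencies {\omega} and {\sqrt{3}\omega}, the energies should be {E=\left(n+\frac{1}{2}\right)\hbar\omega+\left(k+\frac{1}{2}\right)\sqrt{3}\hbar\omega}. Here’s a rough finite-difference sketch (my own check, with {\hbar=m=\omega=1} assumed) confirming the lowest few levels:

```python
import numpy as np

# Sketch: solve each decoupled mode of 26 by finite differences (hbar = m = omega = 1)
# and confirm the spectrum E = (n + 1/2) + sqrt(3) * (k + 1/2).
def oscillator_levels(freq, n_levels, L=10.0, N=1000):
    """Lowest eigenvalues of -1/2 d^2/dx^2 + 1/2 freq^2 x^2 on a grid."""
    x, dx = np.linspace(-L, L, N, retstep=True)
    kinetic = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), 1)
                                + np.diag(np.ones(N - 1), -1)
                                - 2 * np.eye(N))
    H = kinetic + np.diag(0.5 * freq**2 * x**2)
    return np.linalg.eigvalsh(H)[:n_levels]

E_I = oscillator_levels(1.0, 4)            # symmetric mode, frequency omega
E_II = oscillator_levels(np.sqrt(3), 4)    # antisymmetric mode, frequency sqrt(3)*omega

totals = sorted(eI + eII for eI in E_I for eII in E_II)
print(np.round(totals[:5], 3))
# approximately [1.366, 2.366, 3.098, 3.366, 4.098]
```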

We could have arrived at this result by starting with 7 and promoting {x_{i}} and {p_{i}} to quantum operators directly, then made the substitution to normal coordinates. We would then start with

\displaystyle H=-\frac{\hbar^{2}}{2m}\left(\frac{\partial^{2}}{\partial x_{1}^{2}}+\frac{\partial^{2}}{\partial x_{2}^{2}}\right)+\frac{m\omega^{2}}{2}\left[x_{1}^{2}+x_{2}^{2}+\left(x_{1}-x_{2}\right)^{2}\right] \ \ \ \ \ (27)

 

The potential term on the right transforms the same way as before, so we get

\displaystyle \frac{m\omega^{2}}{2}\left[x_{1}^{2}+x_{2}^{2}+\left(x_{1}-x_{2}\right)^{2}\right]\rightarrow\frac{m\omega^{2}}{2}\left(x_{I}^{2}+3x_{II}^{2}\right) \ \ \ \ \ (28)

 

To transform the two derivatives, we need to use the chain rule a couple of times. To get the first derivatives:

\displaystyle \frac{\partial\psi}{\partial x_{1}} \displaystyle = \displaystyle \frac{\partial\psi}{\partial x_{I}}\frac{\partial x_{I}}{\partial x_{1}}+\frac{\partial\psi}{\partial x_{II}}\frac{\partial x_{II}}{\partial x_{1}}\ \ \ \ \ (29)
\displaystyle \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(\frac{\partial\psi}{\partial x_{I}}+\frac{\partial\psi}{\partial x_{II}}\right)\ \ \ \ \ (30)
\displaystyle \frac{\partial\psi}{\partial x_{2}} \displaystyle = \displaystyle \frac{\partial\psi}{\partial x_{I}}\frac{\partial x_{I}}{\partial x_{2}}+\frac{\partial\psi}{\partial x_{II}}\frac{\partial x_{II}}{\partial x_{2}}\ \ \ \ \ (31)
\displaystyle \displaystyle = \displaystyle \frac{1}{\sqrt{2}}\left(\frac{\partial\psi}{\partial x_{I}}-\frac{\partial\psi}{\partial x_{II}}\right) \ \ \ \ \ (32)

Now the second derivatives:

\displaystyle \frac{\partial^{2}\psi}{\partial x_{1}^{2}} \displaystyle = \displaystyle \frac{\partial}{\partial x_{I}}\left(\frac{\partial\psi}{\partial x_{1}}\right)\frac{\partial x_{I}}{\partial x_{1}}+\frac{\partial}{\partial x_{II}}\left(\frac{\partial\psi}{\partial x_{1}}\right)\frac{\partial x_{II}}{\partial x_{1}}\ \ \ \ \ (33)
\displaystyle \displaystyle = \displaystyle \frac{1}{2}\left[\frac{\partial}{\partial x_{I}}\left(\frac{\partial\psi}{\partial x_{I}}+\frac{\partial\psi}{\partial x_{II}}\right)+\frac{\partial}{\partial x_{II}}\left(\frac{\partial\psi}{\partial x_{I}}+\frac{\partial\psi}{\partial x_{II}}\right)\right]\ \ \ \ \ (34)
\displaystyle \displaystyle = \displaystyle \frac{1}{2}\left[\frac{\partial^{2}\psi}{\partial x_{I}^{2}}+2\frac{\partial^{2}\psi}{\partial x_{I}\partial x_{II}}+\frac{\partial^{2}\psi}{\partial x_{II}^{2}}\right]\ \ \ \ \ (35)
\displaystyle \frac{\partial^{2}\psi}{\partial x_{2}^{2}} \displaystyle = \displaystyle \frac{\partial}{\partial x_{I}}\left(\frac{\partial\psi}{\partial x_{2}}\right)\frac{\partial x_{I}}{\partial x_{2}}+\frac{\partial}{\partial x_{II}}\left(\frac{\partial\psi}{\partial x_{2}}\right)\frac{\partial x_{II}}{\partial x_{2}}\ \ \ \ \ (36)
\displaystyle \displaystyle = \displaystyle \frac{1}{2}\left[\frac{\partial}{\partial x_{I}}\left(\frac{\partial\psi}{\partial x_{I}}-\frac{\partial\psi}{\partial x_{II}}\right)-\frac{\partial}{\partial x_{II}}\left(\frac{\partial\psi}{\partial x_{I}}-\frac{\partial\psi}{\partial x_{II}}\right)\right]\ \ \ \ \ (37)
\displaystyle \displaystyle = \displaystyle \frac{1}{2}\left[\frac{\partial^{2}\psi}{\partial x_{I}^{2}}-2\frac{\partial^{2}\psi}{\partial x_{I}\partial x_{II}}+\frac{\partial^{2}\psi}{\partial x_{II}^{2}}\right] \ \ \ \ \ (38)

Combining the two derivatives, we get

\displaystyle \frac{\partial^{2}\psi}{\partial x_{1}^{2}}+\frac{\partial^{2}\psi}{\partial x_{2}^{2}}=\frac{\partial^{2}\psi}{\partial x_{I}^{2}}+\frac{\partial^{2}\psi}{\partial x_{II}^{2}} \ \ \ \ \ (39)

Inserting this, together with 28, into 27 we get 26 again.
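As a sanity check (a spot check of my own on one arbitrarily chosen smooth function, not a general proof), sympy confirms 39:

```python
import sympy as sp

# Sketch: spot-check that d^2/dx1^2 + d^2/dx2^2 = d^2/dxI^2 + d^2/dxII^2
# on a hand-picked smooth test function psi(x_I, x_II).
x1, x2 = sp.symbols('x1 x2', real=True)
xI = (x1 + x2) / sp.sqrt(2)
xII = (x1 - x2) / sp.sqrt(2)

psi = sp.exp(-xI**2) * sp.sin(xII) + xI**3 * xII   # arbitrary test function

lhs = sp.diff(psi, x1, 2) + sp.diff(psi, x2, 2)

u, v = sp.symbols('u v', real=True)
psi_uv = sp.exp(-u**2) * sp.sin(v) + u**3 * v      # same function written in terms of (x_I, x_II)
rhs = (sp.diff(psi_uv, u, 2) + sp.diff(psi_uv, v, 2)).subs({u: xI, v: xII})

print(sp.simplify(lhs - rhs))   # 0
```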

Wave equation: solution by separation of variables

References: Griffiths, David J. (2007), Introduction to Electrodynamics, 3rd Edition; Pearson Education – Problem 9.4.

We can use separation of variables to solve the wave equation

\displaystyle  \frac{\partial^{2}f}{\partial z^{2}}=\frac{1}{v^{2}}\frac{\partial^{2}f}{\partial t^{2}} \ \ \ \ \ (1)

As usual, we propose a solution of form

\displaystyle  f_{0}\left(z,t\right)=Z\left(z\right)T\left(t\right) \ \ \ \ \ (2)

Substituting into the wave equation and dividing through by {ZT} we get

\displaystyle  \frac{1}{Z}\frac{d^{2}Z}{dz^{2}}=\frac{1}{v^{2}T}\frac{d^{2}T}{dt^{2}} \ \ \ \ \ (3)

Since the LHS depends only on {z} and the RHS only on {t}, both sides must be equal to a constant, which we can call {-k^{2}}. Thus

\displaystyle   \frac{1}{Z}\frac{d^{2}Z}{dz^{2}} \displaystyle  = \displaystyle  -k^{2}\ \ \ \ \ (4)
\displaystyle  \frac{1}{v^{2}T}\frac{d^{2}T}{dt^{2}} \displaystyle  = \displaystyle  -k^{2} \ \ \ \ \ (5)

The general solutions are

\displaystyle   Z\left(z\right) \displaystyle  = \displaystyle  Ae^{ikz}+Be^{-ikz}\ \ \ \ \ (6)
\displaystyle  T\left(t\right) \displaystyle  = \displaystyle  Ce^{ikvt}+De^{-ikvt}\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  Ce^{i\omega t}+De^{-i\omega t} \ \ \ \ \ (8)

where {\omega\equiv kv}. Therefore

\displaystyle   f_{0}\left(z,t\right) \displaystyle  = \displaystyle  \left(Ae^{ikz}+Be^{-ikz}\right)\left(Ce^{i\omega t}+De^{-i\omega t}\right)\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  ADe^{i\left(kz-\omega t\right)}+BCe^{-i\left(kz-\omega t\right)}+ACe^{i\left(kz+\omega t\right)}+BDe^{-i\left(kz+\omega t\right)} \ \ \ \ \ (10)

The most general solution is the weighted integral of this quantity over all values of {k}, that is

\displaystyle  f\left(z,t\right)=\int_{0}^{\infty}c\left(k\right)\left[ADe^{i\left(kz-\omega t\right)}+BCe^{-i\left(kz-\omega t\right)}+ACe^{i\left(kz+\omega t\right)}+BDe^{-i\left(kz+\omega t\right)}\right]dk \ \ \ \ \ (11)

If we allow {k} to take on negative as well as positive values (taking {\omega\equiv\left|k\right|v}, so that {\omega} remains positive), we can expand the integral to {\pm\infty} and combine terms 1 and 2, and terms 3 and 4:

\displaystyle  f\left(z,t\right)=\int_{-\infty}^{\infty}\left(A_{1}\left(k\right)e^{i\left(kz-\omega t\right)}+A_{2}\left(k\right)e^{i\left(kz+\omega t\right)}\right)dk \ \ \ \ \ (12)

Technically, this is as far as we can go if we want the full complex solution, but in reality we are interested only in the real part. For each {k}, the second exponential {e^{i\left(kz+\omega t\right)}} is the complex conjugate of the first exponential evaluated at {-k}, so its real part duplicates one already included in the first term. From a physical point of view, then, we lose nothing by writing the general solution as

\displaystyle  \tilde{f}\left(z,t\right)=\int_{-\infty}^{\infty}\tilde{A}\left(k\right)e^{i\left(kz-\omega t\right)}dk \ \ \ \ \ (13)

where we’ve added tildes to indicate that this is a physical (rather than a proper mathematical) solution, and that we should look only at the real part of {\tilde{f}} to get the actual equation of the wave. (We could equally well have used the second exponential in our physical solution, but the first exponential is more traditional.)
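As a numerical illustration (my own sketch, with an arbitrarily chosen Gaussian {\tilde{A}\left(k\right)} centred on a positive {k_{0}}), we can build a wave packet from 13 and check that it simply translates to the right at speed {v}:

```python
import numpy as np

# Sketch: construct f(z,t) = Re \int A(k) exp(i(kz - omega t)) dk with
# omega = |k| v and a Gaussian A(k) centred on k0 > 0, then check that the
# packet moves rigidly to the right at speed v.
v, k0 = 2.0, 5.0
k = np.linspace(-20, 20, 2001)
A = np.exp(-(k - k0)**2)                       # arbitrary smooth choice of A(k)

def f(z, t):
    phase = np.exp(1j * (np.outer(z, k) - np.abs(k) * v * t))
    return np.real(phase @ A) * (k[1] - k[0])  # approximate the k-integral

z = np.linspace(-5, 15, 1001)
t = 3.0
shift = int(round(v * t / (z[1] - z[0])))      # displacement v*t in grid points
print(np.allclose(f(z, t)[shift:], f(z, 0)[:len(z) - shift], atol=1e-6))   # True
```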

Harmonic oscillator in 3-d – rectangular coordinates

Required math: calculus

Required physics: 3-d Schrödinger equation

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 4.38.

The 3-d harmonic oscillator can be solved in rectangular coordinates by separation of variables. The Schrödinger equation to be solved for the 3-d harmonic oscillator is

\displaystyle  -\frac{\hbar^{2}}{2m}\nabla^{2}\psi+\frac{1}{2}m\omega^{2}(x^{2}+y^{2}+z^{2})\psi=E\psi \ \ \ \ \ (1)

To use separation of variables we define

\displaystyle  \psi(x,y,z)=\xi(x)\eta(y)\zeta(z) \ \ \ \ \ (2)

Substituting this into 1 and dividing through by the product {\xi\eta\zeta} we get

\displaystyle  -\frac{\hbar^{2}}{2m}\frac{\xi''}{\xi}+\frac{1}{2}m\omega^{2}x^{2}-\frac{\hbar^{2}}{2m}\frac{\eta''}{\eta}+\frac{1}{2}m\omega^{2}y^{2}-\frac{\hbar^{2}}{2m}\frac{\zeta''}{\zeta}+\frac{1}{2}m\omega^{2}z^{2}=E \ \ \ \ \ (3)

where the double prime notation indicates the second derivative of a function with respect to its independent variable, so {\xi''=d^{2}\xi/dx^{2}}, etc.

We now have three groups of two terms, each of which depends on only one of the variables {x,\, y} and {z}, and the sum of all these terms is the constant {E}. We can therefore use the usual argument that each group of two terms must be a constant on its own, so the 3-d equation reduces to the sum of three 1-d harmonic oscillators. From the analysis of the 1-d harmonic oscillator, we know that each of these will contribute {(n+1/2)\hbar\omega} to the total energy, with the ground state at {n=0}. Thus the ground state for the 3-d oscillator will have energy {3\hbar\omega/2}, and the energy increases in steps of {\hbar\omega}, so the energy levels are given by

\displaystyle  E_{n}=\left(n+\frac{3}{2}\right)\hbar\omega \ \ \ \ \ (4)

Unlike the 1-d case, the energies of the 3-d oscillator are degenerate. A given value of {n} is composed of the sum of 3 quantum numbers: {n=n_{x}+n_{y}+n_{z}} where all numbers are non-negative integers. Suppose we choose a value for {n_{x}} so that {n_{y}+n_{z}=n-n_{x}}. The number of pairs of integers that can be used for {n_{y}+n_{z}} is {n-n_{x}+1} (since {n_{y}} can be anything between 0 and {n-n_{x}}). Since {n_{x}} itself can range between 0 and {n}, the total number of combinations of quantum states that can make up state {n} is

\displaystyle   d(n) \displaystyle  = \displaystyle  \sum_{n_{x}=0}^{n}(n-n_{x}+1)\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  (n+1)\sum_{n_{x}=0}^{n}1-\sum_{n_{x}=0}^{n}n_{x}\ \ \ \ \ (6)
\displaystyle  \displaystyle  = \displaystyle  (n+1)^{2}-\frac{1}{2}n(n+1)\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2}(n+1)(n+2) \ \ \ \ \ (8)
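A brute-force count (a quick sketch of my own) agrees with formula 8:

```python
from itertools import product

# Sketch: count the triples (nx, ny, nz) with nx + ny + nz = n and compare
# with the formula d(n) = (n+1)(n+2)/2.
def degeneracy(n):
    return sum(1 for nx, ny, nz in product(range(n + 1), repeat=3)
               if nx + ny + nz == n)

for n in range(6):
    print(n, degeneracy(n), (n + 1) * (n + 2) // 2)
# both columns give 1, 3, 6, 10, 15, 21
```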

Laplace’s equation – separation of variables

Required math: calculus

Required physics: electrostatics

Reference: Griffiths, David J. (2007) Introduction to Electrodynamics, 3rd Edition; Prentice Hall – Sec 3.3.

Laplace’s equation governs the electric potential in regions where there is no charge. Its form is

\displaystyle  \nabla^{2}V=0 \ \ \ \ \ (1)

We’ve seen that, for a particular set of boundary conditions, solutions to Laplace’s equation are unique. That fact can be used to invent the method of images, in which a complex problem can be solved by inventing a simpler problem that has the same boundary conditions.

However, the method of images works only in a few special (and fairly contrived) situations. In the more general case, we need a way of solving Laplace’s equation directly.

A method which we have already met in quantum mechanics when solving Schrödinger’s equation is that of separation of variables. In general, the potential is a function of all three spatial coordinates: {V=V(x,y,z)}. We try to find a solution by assuming that {V} is a product of three functions, each of which is a function of only one spatial coordinate:

\displaystyle  V(x,y,z)=X(x)Y(y)Z(z) \ \ \ \ \ (2)

Substituting this into Laplace’s equation, we get

\displaystyle  YZ\frac{d^{2}X}{dx^{2}}+XZ\frac{d^{2}Y}{dy^{2}}+XY\frac{d^{2}Z}{dz^{2}}=0 \ \ \ \ \ (3)

We can divide through by {XYZ} to get

\displaystyle  \frac{1}{X}\frac{d^{2}X}{dx^{2}}+\frac{1}{Y}\frac{d^{2}Y}{dy^{2}}+\frac{1}{Z}\frac{d^{2}Z}{dz^{2}}=0 \ \ \ \ \ (4)

The key point in this equation is that each term in the sum is a function of only one of the three independent variables {x}, {y} and {z}. The fact that these variables are independent is important, for it means that the only way this equation can be satisfied is if each term in the sum is a constant. Suppose this wasn’t true; for example, suppose the first term in the sum was some function {f(x)} that actually does vary with {x}. Then we could hold {y} and {z} constant and vary {x}, causing this first term to vary. In this case we cannot satisfy the overall equation, since if we found some value of {x} for which the sum of the three terms was zero, changing {x} would change the first term but not the other two, so the overall sum would no longer be zero.

Thus we can say that

\displaystyle   \frac{1}{X}\frac{d^{2}X}{dx^{2}} \displaystyle  = \displaystyle  C_{1}\ \ \ \ \ (5)
\displaystyle  \frac{1}{Y}\frac{d^{2}Y}{dy^{2}} \displaystyle  = \displaystyle  C_{2}\ \ \ \ \ (6)
\displaystyle  \frac{1}{Z}\frac{d^{2}Z}{dz^{2}} \displaystyle  = \displaystyle  C_{3} \ \ \ \ \ (7)

where the three constants satisfy

\displaystyle  C_{1}+C_{2}+C_{3}=0 \ \ \ \ \ (8)

Equations of this form have one of two types of solution (well, three, if we consider the constant to be zero, but that’s not usually very interesting), depending on whether the constant is positive or negative. For example, if {C_{1}>0}, we can write it as {C_{1}=k^{2}} and the solution has the form

\displaystyle  X(x)=Ae^{kx}+Be^{-kx} \ \ \ \ \ (9)

for some constants {A} and {B}.

If {C_{1}<0}, we can write it as {C_{1}=-k^{2}} and the solution has the form

\displaystyle  X(x)=D\sin kx+E\cos kx \ \ \ \ \ (10)

for some constants {D} and {E}. The constants in each case must be determined from the boundary conditions. Similar solutions exist for {Y(y)} and {Z(z)}.

Now you might be wondering whether the assumption that the potential is the product of three separate functions is valid. After all, it does seem to be a rather severe restriction on the solution. It’s easiest to see whether this assumption is valid by considering a particular example.

The key consideration in any Laplace problem is the specification of the boundary conditions. As a first example, suppose we have the following setup. We have two semi-infinite conducting plates that lie parallel to the {xz} plane, with their edges lying on the {z} axis (that is, at {x=0}). One plate is at {y=0} and the other is at {y=a}. Both plates are grounded, so their potential is constant at {V=0}.

The strip between the plates at {x=0} is filled with another substance (not a conductor, so the potential can vary across it) that is insulated from the two plates, and its potential is some function {V_{0}(y)}. Solve Laplace’s equation to find the potential between the plates.

It’s important to note what the boundary conditions are here. The two plates are held at {V=0} so provide boundary conditions at {y=0} and {y=a}:

\displaystyle  V(x,0,z)=V(x,a,z)=0 \ \ \ \ \ (11)

The strip at {x=0} provides another boundary condition

\displaystyle  V(0,y,z)=V_{0}(y) \ \ \ \ \ (12)

Finally, we can impose the condition that the potential drops to zero as we get infinitely far from the strip at {x=0} so we have

\displaystyle  V(\infty,y,z)=0 \ \ \ \ \ (13)

The first thing to notice is that none of these boundary conditions depends on {z}, so we can take {Z(z)=\mathrm{constant}} so that {C_{3}=0} above. This means that the problem effectively reduces to a two-dimensional problem with the condition

\displaystyle  C_{1}+C_{2}=0 \ \ \ \ \ (14)

Now we must make a choice as to which of the constants is positive and which is negative. Suppose we chose {C_{1}=-k^{2}<0}. Then we would get

\displaystyle  X(x)=D\sin kx+E\cos kx \ \ \ \ \ (15)

Looking at the boundary conditions above, we see that as {x\rightarrow\infty} we need {X(x)\rightarrow0}. But since {X(x)} is the sum of two oscillating functions, this can’t happen unless {D=E=0}, that is, {X=0}, which isn’t a valid solution since it would mean that {V(x,y,z)=0} everywhere, violating the condition at {x=0}.

So we can try the other choice: {C_{1}=k^{2}>0}. This gives

\displaystyle  X(x)=Ae^{kx}+Be^{-kx} \ \ \ \ \ (16)

Now as {x\rightarrow\infty} the negative exponential term drops to zero, so we need only require that {A=0} and we get

\displaystyle  X(x)=Be^{-kx} \ \ \ \ \ (17)

From this choice, we know that {C_{2}=-C_{1}=-k^{2}} and

\displaystyle  Y(y)=D\sin ky+E\cos ky \ \ \ \ \ (18)

From the condition {V=0} when {y=0} we get

\displaystyle  E=0 \ \ \ \ \ (19)

Finally, from {V=0} when {y=a} we get

\displaystyle   D\sin ka \displaystyle  = \displaystyle  0\ \ \ \ \ (20)
\displaystyle  k \displaystyle  = \displaystyle  \frac{n\pi}{a} \ \ \ \ \ (21)

where {n} is a positive integer. It must be non-zero, since {n=0} again gives us {V=0} everywhere. It must not be negative, since that would give us a negative {k} which would give the wrong behaviour for {X(x)}.

So our solution so far is

\displaystyle  V(x,y,z)=BDe^{-n\pi x/a}\sin\left(\frac{n\pi}{a}y\right) \ \ \ \ \ (22)

At this stage, you might think we’ve solved ourselves into a corner, since we haven’t used the final boundary condition, which is that {V(0,y,z)=V_{0}(y)}. From our solution so far, we have

\displaystyle  V(0,y,z)=BD\sin\left(\frac{n\pi}{a}y\right) \ \ \ \ \ (23)

so unless we choose {V_{0}(y)} to be one of those sine functions, we’re stuffed. Does this mean that the separation of variables method doesn’t work here?

Not quite. The crucial point is that Laplace’s equation is linear (the derivatives occur to the first power only), so any number of separate solutions can be added together to give another solution. That is, if {V_{1}} and {V_{2}} are solutions, then so is {V_{1}+V_{2}}. The separation of variables method has actually given us an infinite number of solutions (one for each value of {n=1,2,3,\ldots}) so we can create yet more solutions by adding together any combination of these individual solutions. In particular, we can say

\displaystyle  V(x,y,z)=\sum_{n=1}^{\infty}c_{n}e^{-n\pi x/a}\sin\left(\frac{n\pi}{a}y\right) \ \ \ \ \ (24)

for some choice of coefficients {c_{n}}. (Here, we’ve simply combined the two constants {B} and {D} for each value of {n} to give the constant {c_{n}}.)

How can we find these coefficients? In general, this can be fairly tricky, but for certain boundary conditions, it turns out to be fairly straightforward. For the boundary condition we have here, {V(0,y,z)=V_{0}(y)}, things aren’t too bad. We have

\displaystyle  V(0,y,z)=V_{0}(y)=\sum_{n=1}^{\infty}c_{n}\sin\left(\frac{n\pi}{a}y\right) \ \ \ \ \ (25)

Some readers might recognize this as a Fourier series, and there is a clever technique that can be used to find the {c_{n}} in such a case. We multiply through by {\sin\left(\frac{m\pi y}{a}\right)} and integrate from 0 to {a}:

\displaystyle  \int_{0}^{a}\sin\left(\frac{m\pi y}{a}\right)V_{0}(y)dy=\sum_{n=1}^{\infty}c_{n}\int_{0}^{a}\sin\left(\frac{m\pi y}{a}\right)\sin\left(\frac{n\pi}{a}y\right)dy \ \ \ \ \ (26)

The integrals in the sum on the right are fairly straightforward, and we get

\displaystyle  \int_{0}^{a}\sin\left(\frac{m\pi y}{a}\right)\sin\left(\frac{n\pi}{a}y\right)dy=\begin{cases} 0 & \mathrm{if}\; m\ne n\\ \frac{a}{2} & \mathrm{if}\; m=n \end{cases} \ \ \ \ \ (27)

That is

\displaystyle  c_{n}=\frac{2}{a}\int_{0}^{a}\sin\left(\frac{n\pi y}{a}\right)V_{0}(y)dy \ \ \ \ \ (28)
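As a concrete (and arbitrarily chosen) example, take {V_{0}(y)=1} with {a=1}. A short numerical sketch computes the {c_{n}} from 28 and checks that the series 24 reproduces the boundary conditions:

```python
import numpy as np
from scipy.integrate import quad

# Sketch: compute c_n from equation 28 for the choice V0(y) = 1, a = 1, and
# check the series solution against the boundary conditions.
a, V0 = 1.0, 1.0

def c(n):
    val, _ = quad(lambda y: np.sin(n * np.pi * y / a) * V0, 0, a, limit=200)
    return 2 / a * val                 # analytically: 4*V0/(n*pi) for odd n, 0 for even n

def V(x, y, n_max=199):
    return sum(c(n) * np.exp(-n * np.pi * x / a) * np.sin(n * np.pi * y / a)
               for n in range(1, n_max + 1))

print(round(V(0.0, 0.5), 3))                            # close to 1.0 (the strip)
print(round(V(0.5, 0.0), 6), round(V(0.5, 1.0), 6))     # essentially 0 (the plates)
```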

As usual for physicists, the problem of proving that a Fourier series exists and converges for any given function is left to the mathematicians, but for pretty well any function {V_{0}(y)} of physical relevance, this technique works. Although the example here has a clean solution, many other problems do not. If the boundaries are of some exotic shape, then it becomes impossible to specify things in such a way that we have a clean Fourier series to work with. As usual in such cases, we need to resort to numerical solution of Laplace’s equation, and for that we need a computer.
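For completeness, here is a minimal sketch of the relaxation idea mentioned above (Jacobi iteration on a finite grid; the far boundary at {x=3} stands in for {x\rightarrow\infty}, and {V_{0}(y)=1}, {a=1} as in the example just given):

```python
import numpy as np

# Sketch: relax Laplace's equation on a grid for the strip problem with
# V0(y) = 1, a = 1, approximating x -> infinity by a far edge at x = 3.
a, Lx, N = 1.0, 3.0, 60
V = np.zeros((N + 1, int(N * Lx / a) + 1))   # V[j, i] ~ V(x = i*dx, y = j*dx)
V[1:-1, 0] = 1.0                             # the strip at x = 0
# the other three edges stay at 0 (the grounded plates and the far edge)

for _ in range(20000):                       # Jacobi updates until roughly converged
    V[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] +
                            V[1:-1, :-2] + V[1:-1, 2:])

dx = a / N
print(round(V[N // 2, int(round(0.5 / dx))], 3))   # ~0.26, matching the series at (0.5, 0.5)
```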

The time-independent Schrödinger equation

Required math: calculus

Required physics: Schrödinger equation

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Chapter 2.

Once we have the Schrödinger equation, most of non-relativistic quantum mechanics is devoted to finding solutions to this equation for various potential functions. Since it is a second-order partial differential equation, the number of cases for which an exact solution may be found is depressingly small. As a result there are a lot of approximation techniques that allow solutions in various cases, and of course there is always the option of a numerical solution using a computer.

An important feature of the Schrödinger equation is that it is linear, meaning that the function {\Psi} and its derivatives occur to the first power only, and there are no products or other non-linear functions of {\Psi} to be found. This has the important consequence that if we find two different solutions of the Schrödinger equation, then any linear combination of these two solutions is also a solution. A linear combination of two solutions {\Psi_{1}} and {\Psi_{2}} has the form

\displaystyle   \Psi \displaystyle  = \displaystyle  a\Psi_{1}+b\Psi_{2} \ \ \ \ \ (1)

where {a} and {b} are complex constants.

There are, however, several important potentials for which exact solutions may be found, and many of these potentials are time-independent, meaning that they depend only on position. In such a case, the Schrödinger equation may be simplified by the trick of separation of variables. To see how this works, let’s start with the general Schrödinger equation:

\displaystyle  -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\Psi}{\partial x^{2}}+V(x,t)\Psi=i\hbar\frac{\partial\Psi}{\partial t} \ \ \ \ \ (2)

The only place in this equation where an explicit time dependence can occur is in the potential function {V(x,t)} (of course the wave function itself will have an explicit time dependence, but that’s what we’re trying to solve for!). In the situation where the potential depends only on space, the Schrödinger equation becomes

\displaystyle  -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\Psi}{\partial x^{2}}+V(x)\Psi=i\hbar\frac{\partial\Psi}{\partial t} \ \ \ \ \ (3)

Not much of an improvement, you might think. But suppose we propose a solution for the wave function that is the product of one function {\psi(x)} that depends on space ({x}) only and another function {\Xi(t)} (the Greek capital ‘xi’) that depends only on time ({t}). That is

\displaystyle   \Psi(x,t) \displaystyle  = \displaystyle  \psi(x)\Xi(t) \ \ \ \ \ (4)

The partial derivatives become a little simpler:

\displaystyle   \frac{\partial^{2}\Psi}{\partial x^{2}} \displaystyle  = \displaystyle  \frac{\partial^{2}\psi}{\partial x^{2}}\Xi(t)\ \ \ \ \ (5)
\displaystyle  \frac{\partial\Psi}{\partial t} \displaystyle  = \displaystyle  \psi(x)\frac{\partial\Xi}{\partial t} \ \ \ \ \ (6)

Substituting this back into the Schrödinger equation we get

\displaystyle   -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}}\Xi(t)+V(x)\psi(x)\Xi(t) \displaystyle  = \displaystyle  i\hbar\frac{\partial\Xi}{\partial t}\psi(x)\ \ \ \ \ (7)
\displaystyle  -\frac{\hbar^{2}}{2m}\frac{1}{\psi(x)}\frac{\partial^{2}\psi(x)}{\partial x^{2}}+V(x) \displaystyle  = \displaystyle  i\hbar\frac{1}{\Xi(t)}\frac{\partial\Xi}{\partial t} \ \ \ \ \ (8)

where in the last line we have divided the first line through by {\psi(x)\Xi(t)}.

Notice that a magical thing has happened here: the left side of the equation depends only on {x} and the right side depends only on {t}. We have therefore separated the two independent variables in the equation. How does that help us? Well, since {x} and {t} are independent variables, we can vary either of them without changing the other. If we varied {x}, for example, then in principle the left side of the equation would change while the right side, which does not depend on {x}, wouldn’t. So it looks like we’ve ended up with an impossible situation.

Not quite. The equation can still be satisfied if both sides of the equation are equal to the same constant, which we’ll call {E}. That is, we must have

\displaystyle   -\frac{\hbar^{2}}{2m}\frac{1}{\psi(x)}\frac{\partial^{2}\psi(x)}{\partial x^{2}}+V(x) \displaystyle  = \displaystyle  E\ \ \ \ \ (9)
\displaystyle  i\hbar\frac{1}{\Xi(t)}\frac{\partial\Xi}{\partial t} \displaystyle  = \displaystyle  E \ \ \ \ \ (10)

which we can rewrite as two separate ordinary (not partial) differential equations, since each equation now has only one independent variable:

\displaystyle   -\frac{\hbar^{2}}{2m}\frac{d^{2}\psi(x)}{dx^{2}}+V(x)\psi(x) \displaystyle  = \displaystyle  E\psi(x)\ \ \ \ \ (11)
\displaystyle  i\hbar\frac{d\Xi(t)}{dt} \displaystyle  = \displaystyle  E\Xi(t) \ \ \ \ \ (12)

The clever thing about this separation of variables is that the potential function has disappeared from the second equation, so we can solve for the time part of the equation in general, and get

\displaystyle   \int\frac{d\Xi}{\Xi} \displaystyle  = \displaystyle  -\frac{i}{\hbar}E\int dt\ \ \ \ \ (13)
\displaystyle  \ln\Xi \displaystyle  = \displaystyle  -\frac{i}{\hbar}Et+\ln C\ \ \ \ \ (14)
\displaystyle  \Xi(t) \displaystyle  = \displaystyle  Ce^{-iEt/\hbar} \ \ \ \ \ (15)

where {C} is a constant of integration, and will be determined by the normalization of the wave function.

But what is this constant {E}? To get an idea of what {E} represents, we can rewrite the {\psi} equation like this:

\displaystyle   \left[-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V(x)\right]\psi(x) \displaystyle  = \displaystyle  E\psi(x)\ \ \ \ \ (16)
\displaystyle  \left[\frac{1}{2m}\left(\frac{\hbar}{i}\frac{d}{dx}\right)^{2}+V(x)\right]\psi(x) \displaystyle  = \displaystyle  E\psi(x) \ \ \ \ \ (17)

Now if we remember the expression of the momentum as an operator:

\displaystyle   p \displaystyle  = \displaystyle  \frac{\hbar}{i}\frac{d}{dx} \ \ \ \ \ (18)

(whether we use a total or partial derivative doesn’t matter when the function being operated on depends on {x} only), we can see that the {\psi} equation can be written as:

\displaystyle   \left[\frac{p^{2}}{2m}+V(x)\right]\psi(x) \displaystyle  = \displaystyle  E\psi(x) \ \ \ \ \ (19)

The terms in square brackets are the kinetic plus the potential energy, so we can view this as an operator equation, where the operator in square brackets operates on the spatial part of the wave function with the result of giving the same wave function back again, but multiplied by the constant {E} which can therefore be interpreted as the total energy in the state.

In more advanced language, the function {\psi(x)} is an eigenfunction of the operator {\frac{p^{2}}{2m}+V(x)} with eigenvalue {E}. (If you’re just starting out in quantum mechanics and these terms are unfamiliar, don’t worry about them right now.)

All this may look a bit arbitrary, since from the appearance of the {\psi} equation, it looks like we can just pick any old value (including complex numbers) for {E}. However, the magic of this equation is that when we solve it for particular potential functions, we discover that only certain values of {E} (and they are all real, too) are allowed. This equation, therefore, puts the ‘quantum’ in ‘quantum mechanics’, since it shows that energies can be only certain discrete values. More on this when we discuss some of the individual potentials.
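To see this happen, here is a minimal numerical sketch (my own illustration, with {\hbar=m=\omega=1} assumed) that discretizes the {\psi} equation for a harmonic oscillator potential and lets the computer pick out the allowed energies:

```python
import numpy as np

# Sketch: discretize equation 11 with V(x) = x^2/2 (hbar = m = omega = 1) and
# find the allowed energies numerically; only discrete values of E emerge.
N, L = 1000, 12.0
x, dx = np.linspace(-L / 2, L / 2, N, retstep=True)

laplacian = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
             - 2 * np.eye(N)) / dx**2
H = -0.5 * laplacian + np.diag(0.5 * x**2)

E = np.linalg.eigvalsh(H)
print(np.round(E[:5], 3))    # approximately [0.5, 1.5, 2.5, 3.5, 4.5]
```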

For now, it is worthwhile summarizing the chain of logic that got us this far. Starting with the Schrödinger equation (which can be accepted as a postulate), we considered the special case of a time-independent potential. This led to a solution using the mathematical technique of separation of variables, which in turn led to the interpretation of the spatial part of the solution as an equation that gives the allowed energy levels in the system. Note that the prediction of the energy levels is an entirely mathematical consequence of the assumption that the Schrödinger equation is the correct equation for describing nature.