The energy-time uncertainty relation

Required math: calculus, complex numbers

Required physics: basics of quantum mechanics

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Sec 3.5.3.

The uncertainty principle gives a lower bound on the product of the uncertainties (standard deviations) of two observables. If the operators for two observables commute, then both quantities can be determined simultaneously to arbitrary precision. If, however, they don’t commute, the lower bound is expressed as an inequality involving the two standard deviations. For two observables {\hat{A}} and {\hat{B}}, we get

\displaystyle  \sigma_{A}\sigma_{B}\ge\left|\frac{1}{2i}\left\langle [\hat{A},\hat{B}]\right\rangle \right| \ \ \ \ \ (1)


where {\left\langle [\hat{A},\hat{B}]\right\rangle } is the expectation value of the commutator of the two observables.

This means that if we do a large number of experiments, all starting in the same quantum state {\Psi}, then the product of the standard deviations of these two observables, measured over all these experiments, must satisfy this inequality. It doesn’t say that if we try to measure both {A} and {B} at the same time we won’t get definite values for both. It does say that over a large number of experiments the statistics must satisfy the uncertainty principle.

For example, suppose we have an experiment in which we measure the position of a particle accurately and also measure its momentum. Each run of the experiment gives definite values for both quantities, but if we repeat the experiment many times while constraining the position tightly, the momentum measurements will vary widely from run to run.
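
As a quick numerical check of 1 (this is just a sketch of my own, not part of Griffiths’ presentation; the matrices and state below are arbitrary), we can pick two random hermitian matrices and a random normalized state in a small finite-dimensional space and confirm that the product of the standard deviations respects the bound:

```python
import numpy as np

# Numerical check of the generalized uncertainty relation (1)
# for arbitrary hermitian "observables" A, B and a random state psi.
rng = np.random.default_rng(0)
n = 4

def random_hermitian(n):
    """Random n x n hermitian matrix."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

def random_state(n):
    """Random normalized state vector."""
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    return psi / np.linalg.norm(psi)

def expect(op, psi):
    """Expectation value <psi|op|psi>."""
    return np.vdot(psi, op @ psi)

def sigma(op, psi):
    """Standard deviation of a hermitian observable in state psi."""
    return np.sqrt(expect(op @ op, psi).real - expect(op, psi).real ** 2)

A, B, psi = random_hermitian(n), random_hermitian(n), random_state(n)
commutator = A @ B - B @ A                # [A, B]
lhs = sigma(A, psi) * sigma(B, psi)       # sigma_A * sigma_B
rhs = abs(expect(commutator, psi) / 2j)   # |<[A,B]> / (2i)|
print(f"{lhs:.4f} >= {rhs:.4f}: {lhs >= rhs}")
```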

One uncertainty relation that is often quoted is the energy-time relation, usually stated as

\displaystyle  \Delta E\Delta t\ge\frac{\hbar}{2} \ \ \ \ \ (2)

This relation doesn’t follow from the general uncertainty relation, since time is not an operator in quantum mechanics; rather, it is an independent variable on which everything else depends. We can measure the position, energy, momentum, angular momentum and so on of a particle, but it doesn’t make sense to measure the ‘time’ of a particle. Time (at least in non-relativistic theory) is a parameter that is independent of everything else.

In fact, this relation can be derived in a way that gives it a different meaning than the other uncertainty relations. Suppose we have an observable {Q} that depends explicitly on {x}, {p} and possibly {t}.

First we should note the difference between an observable operator depending explicitly on time and the expectation value of that operator depending on time. An operator with no explicit time dependence (such as the Hamiltonian, which is the sum of kinetic and potential energies, where the potential {V} has no explicit time dependence) can still have an expectation value that depends on time. This is because when the Schrödinger equation is solved for a given Hamiltonian, the wave function {\Psi(x,t)} is in general a function of time, even if the Hamiltonian is not. As we saw when we solved the equation for a time-independent potential, we can use the separation of variables technique to peel off the time dependence, which turns up in the general solution as a complex exponential of the time.
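
As a reminder of that standard result (nothing new here, just the separation-of-variables solution restated): for a single stationary state {\psi_{n}} with energy {E_{n}},

\displaystyle  \Psi_{n}(x,t)=\psi_{n}(x)e^{-iE_{n}t/\hbar},\qquad\left\langle Q\right\rangle =\left\langle \Psi_{n}|Q\Psi_{n}\right\rangle =\left\langle \psi_{n}|Q\psi_{n}\right\rangle

so for a {Q} with no explicit time dependence the phases cancel and the expectation value is constant. Time dependence of {\left\langle Q\right\rangle } arises only when {\Psi} is a superposition of stationary states with different energies, since the cross terms then retain factors of {e^{-i(E_{k}-E_{j})t/\hbar}}; we’ll come back to this at the end of the post.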

To return to our observable {Q}, let’s find the total time derivative (rate of change) of the expectation value of this observable.

\displaystyle  \frac{d}{dt}\left\langle Q\right\rangle =\frac{d}{dt}\left\langle \Psi|Q\Psi\right\rangle \ \ \ \ \ (3)
\displaystyle  =\left\langle \frac{\partial\Psi}{\partial t}|Q\Psi\right\rangle +\left\langle \Psi|\frac{\partial Q}{\partial t}\Psi\right\rangle +\left\langle \Psi|Q\frac{\partial\Psi}{\partial t}\right\rangle \ \ \ \ \ (4)

We can now use the Schrödinger equation to replace the time derivatives of the wave function. The Schrödinger equation states that

\displaystyle  \frac{\partial\Psi}{\partial t}=\frac{1}{i\hbar}H\Psi \ \ \ \ \ (5)

Using this we get

\displaystyle  \frac{d}{dt}\left\langle Q\right\rangle =-\frac{1}{i\hbar}\left\langle H\Psi|Q\Psi\right\rangle +\left\langle \Psi|\frac{\partial Q}{\partial t}\Psi\right\rangle +\frac{1}{i\hbar}\left\langle \Psi|QH\Psi\right\rangle \ \ \ \ \ (6)

The middle term is the expectation value of {\frac{\partial Q}{\partial t}}, so overall we get

\displaystyle  \frac{d}{dt}\left\langle Q\right\rangle =-\frac{1}{i\hbar}\left\langle H\Psi|Q\Psi\right\rangle +\left\langle \frac{\partial Q}{\partial t}\right\rangle +\frac{1}{i\hbar}\left\langle \Psi|QH\Psi\right\rangle \ \ \ \ \ (7)

Finally, since the Hamiltonian {H} is hermitian, we can rewrite the first term as {\left\langle H\Psi|Q\Psi\right\rangle =\left\langle \Psi|HQ\Psi\right\rangle }, so we get

\displaystyle  \frac{d}{dt}\left\langle Q\right\rangle =-\frac{1}{i\hbar}\left\langle \Psi|(HQ-QH)\Psi\right\rangle +\left\langle \frac{\partial Q}{\partial t}\right\rangle \ \ \ \ \ (8)
\displaystyle  =\frac{i}{\hbar}\left\langle \Psi|[H,Q]\Psi\right\rangle +\left\langle \frac{\partial Q}{\partial t}\right\rangle \ \ \ \ \ (9)
\displaystyle  =\frac{i}{\hbar}\left\langle \left[H,Q\right]\right\rangle +\left\langle \frac{\partial Q}{\partial t}\right\rangle \ \ \ \ \ (10)

Although we haven’t arrived at the energy-time relation yet, this result has a fundamental significance as it stands. If {Q} doesn’t depend explicitly on time, the second term on the right is zero, and we get

\displaystyle  \frac{d}{dt}\left\langle Q\right\rangle =\frac{i}{\hbar}\left\langle [H,Q]\right\rangle \ \ \ \ \ (11)

What does this tell us? First, since both {H} and {Q}, being observables, are hermitian, their commutator is anti-hermitian, so its expectation value is purely imaginary and the term on the right is always real. This is a relief, since the rate of change of an observable could hardly be complex.

Second, and more fundamental, is that any observable that doesn’t depend explicitly on time and that commutes with the Hamiltonian has an expectation value that doesn’t change; that is, it is a conserved quantity. Since {H} obviously commutes with itself, its rate of change is zero, so energy is conserved. We’ll explore a couple of other examples in another post.
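
To see 11 in action, here is a small numerical sketch of my own (an arbitrary two-level Hamiltonian and observable, in units where {\hbar=1}): evolve a state with {e^{-iHt/\hbar}} and compare a finite-difference derivative of {\left\langle Q\right\rangle } with {\frac{i}{\hbar}\left\langle [H,Q]\right\rangle }.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of (11): d<Q>/dt = (i/hbar) <[H, Q]>
# for an arbitrary two-level system, in units where hbar = 1.
hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, 2.0]])      # arbitrary hermitian Hamiltonian
Q = np.array([[0.0, 1.0], [1.0, 0.0]])      # arbitrary observable with no explicit t dependence
psi0 = np.array([1.0, 1.0j]) / np.sqrt(2)   # normalized initial state

def expect(op, psi):
    """Expectation value <psi|op|psi>."""
    return np.vdot(psi, op @ psi)

def evolve(psi, t):
    """psi(t) = exp(-iHt/hbar) psi(0)."""
    return expm(-1j * H * t / hbar) @ psi

t, dt = 0.7, 1e-6
# left side of (11): finite-difference derivative of <Q> at time t
lhs = (expect(Q, evolve(psi0, t + dt)).real -
       expect(Q, evolve(psi0, t - dt)).real) / (2 * dt)
# right side of (11): (i/hbar) <[H, Q]> at time t
comm = H @ Q - Q @ H
rhs = (1j / hbar) * expect(comm, evolve(psi0, t))
print(rhs.imag)        # ~0: <[H,Q]> is purely imaginary, so the right side is real
print(lhs, rhs.real)   # the two sides of (11) agree to the finite-difference accuracy
```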

For now, though, we need to return to the energy-time relation. Using 1 with {A=H} and {B=Q}, we get

\displaystyle  \sigma_{H}\sigma_{Q}\ge\left|\frac{1}{2i}\left\langle [H,Q]\right\rangle \right| \ \ \ \ \ (12)
\displaystyle  =\frac{\hbar}{2}\left|\frac{d}{dt}\left\langle Q\right\rangle \right| \ \ \ \ \ (13)

where the second line follows from 11, assuming that {Q} has no explicit time dependence.

Since {\sigma_{H}} is the standard deviation of the Hamiltonian, it is reasonable to interpret it as the uncertainty {\Delta E} in the energy {E}. If we define the quantity

\displaystyle  \Delta t\equiv\frac{\sigma_{Q}}{\left|d\left\langle Q\right\rangle /dt\right|} \ \ \ \ \ (14)


we see that it has the units of time (since {\sigma_{Q}} has the same units as {Q}). With {\Delta E\equiv\sigma_{H}}, 13 becomes

\displaystyle  \Delta E\Delta t\ge\frac{\hbar}{2} \ \ \ \ \ (15)


which is the energy-time uncertainty relation.

So what exactly does {\Delta t} mean in this context? From its definition, we have

\displaystyle  \sigma_{Q}=\left|\frac{d}{dt}\left\langle Q\right\rangle \right|\Delta t \ \ \ \ \ (16)

Since {\sigma_{Q}} is the standard deviation of the observable {Q}, this expression gives an approximate (first-order, in the Taylor series sense) value for the length of time {\Delta t} taken for the expectation value of the observable to change by one standard deviation. This would be exact if the rate of change of {\left\langle Q\right\rangle } were constant.
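
Continuing the numerical sketch above (the same arbitrary two-level {H} and {Q}, with {\hbar=1}), we can evaluate {\Delta t} from 14 at some instant and confirm 15 directly:

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of (14) and (15) for an arbitrary two-level system, hbar = 1.
hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, 2.0]])
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
psi0 = np.array([1.0, 1.0j]) / np.sqrt(2)

def expect(op, psi):
    return np.vdot(psi, op @ psi).real

def sigma(op, psi):
    return np.sqrt(expect(op @ op, psi) - expect(op, psi) ** 2)

def evolve(psi, t):
    return expm(-1j * H * t / hbar) @ psi

t, dt = 0.7, 1e-6
psi = evolve(psi0, t)
dQdt = (expect(Q, evolve(psi0, t + dt)) -
        expect(Q, evolve(psi0, t - dt))) / (2 * dt)

dE = sigma(H, psi)                    # Delta E = sigma_H (constant in time)
delta_t = sigma(Q, psi) / abs(dQdt)   # Delta t as defined in (14)
print(dE * delta_t, hbar / 2)         # Delta E * Delta t should be >= hbar/2, as in (15)
```

By construction the product must come out at or above {\hbar/2}, so this is a consistency check rather than new information; the bound is saturated only for special states.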

We’ve derived this relation by considering some arbitrary observable {Q}, so the time interval {\Delta t} depends on the particular observable we’re considering. However, the uncertainty relation involves this time interval and the uncertainty in the energy {\Delta E}. This seems a bit odd, since it looks as though we could claim a more accurate energy just by choosing an observable that changes very slowly (thus making {\Delta t} very large). That’s not quite what the relation is saying, though. Since 15 must hold for every observable, we have {\Delta E\ge\hbar/\left(2\Delta t\right)} for each choice of {Q}, and the strongest constraint on {\Delta E} comes from the observable with the smallest {\Delta t}, that is, from the observable whose expectation value changes fastest compared to its standard deviation. So in order for the energy to be sharply defined, all observables have to be changing slowly; picking one slowly changing observable doesn’t help.

It might look as though there is something wrong here. After all, if we are in a state where the energy is exact, it seems that all other observables would have to be exactly determined as well. How can this be when there are observables like position and momentum that don’t commute, and thus cannot be determined precisely at the same time?

The key to resolving this apparent paradox is to note that it’s not the precise values of each observable at a particular point in time that we are concerned with. The expression for {\Delta t} involves the rate of change of an expectation value, not a precise measurement. It is certainly possible for the expectation values of position and momentum to have precise, constant-in-time values without violating the uncertainty principle, and that is what is implied here.

In fact, any system in a stationary state, where the energy is precisely known, does satisfy the condition {d\left\langle Q\right\rangle /dt=0} for every observable {Q} with no explicit time dependence. In order to get a case where the energy is uncertain, we need a linear combination of two or more stationary states, with each state corresponding to a different energy. A measurement on the system will then give one of the energies in the mix, and we can’t say a priori which energy will result. The expectation values of observables will also, in general, be time-dependent in such a case.

Another way of looking at it is this. For a time-independent Hamiltonian, the general solution of the Schrödinger equation is

\displaystyle  \Psi\left(x,t\right)=\sum_{k}c_{k}\psi_{k}e^{-iE_{k}t/\hbar} \ \ \ \ \ (17)

where {\psi_{k}} is an eigenstate of the Hamiltonian with eigenvalue (energy) {E_{k}}. The probability of finding the system in state {\psi_{k}} (with energy {E_{k}}) is {\left|c_{k}\right|^{2}}. Note that this does not depend on time, so {\Delta E\equiv\sigma_{H}} is also independent of time. What the energy-time uncertainty relation tells us, then, is that {\Delta E} puts a constraint on the time scales over which other observables {Q} can vary. A state built from eigenstates with a wide spread of energies has a larger {\Delta E} and allows its observables to change on a shorter time scale than a state built from a narrow range of energies. In the extreme case of a single eigenstate, {\Delta E=0} and the expectation values of observables never change.
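
As a minimal illustration (a two-state special case of 17, worked out here for concreteness), take {\Psi=c_{1}\psi_{1}e^{-iE_{1}t/\hbar}+c_{2}\psi_{2}e^{-iE_{2}t/\hbar}} with {\left|c_{1}\right|^{2}+\left|c_{2}\right|^{2}=1}. A short calculation gives

\displaystyle  \sigma_{H}=\left|c_{1}\right|\left|c_{2}\right|\left|E_{2}-E_{1}\right|,\qquad\left\langle Q\right\rangle =\left|c_{1}\right|^{2}Q_{11}+\left|c_{2}\right|^{2}Q_{22}+2\mbox{Re}\left[c_{1}^{*}c_{2}Q_{12}e^{-i(E_{2}-E_{1})t/\hbar}\right]

where {Q_{jk}\equiv\left\langle \psi_{j}|Q\psi_{k}\right\rangle }. The expectation value oscillates at angular frequency {\left|E_{2}-E_{1}\right|/\hbar}, so the time for {\left\langle Q\right\rangle } to change appreciably is of order {\hbar/\Delta E}, consistent with 15; and if {c_{2}=0} then {\Delta E=0} and {\left\langle Q\right\rangle } is constant, as claimed above.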

It’s also worth pointing out that the energy-time relation does not allow violation of conservation of energy. Statements such as “you can violate conservation of energy provided you do so in a short enough time so that 15 is not violated” are just plain false.
