Is the term "quantum fluctuation" an aid to understanding?

I empathize with your discomfort with the term "quantum fluctuation"; I also don't find it to be a helpful term, largely because of your point 2. In English, "fluctuation" refers to something dynamical, whereas in the mathematical formalism of quantum mechanics there is nothing which "fluctuates": the evolution is unitary (smooth) and deterministic.

The best translation I have come up with is that "quantum fluctuations" is code for "I will get different outcomes if I measure multiple times". In that case, anything having to do with "quantum fluctuations" is no more exotic than the measurement of a system which is in a superposition of the measurement-basis states. Of course, superposition and measurement are already quite exotic and confusing, so I think it is still a difficult/controversial topic.
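As a minimal sketch of this reading (my own illustration, not part of the original answer, assuming a single qubit prepared in $|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$ and measured in the $\{|0\rangle, |1\rangle\}$ basis):

```python
import numpy as np

# The preparation is identical on every run, yet the recorded
# outcomes differ from run to run -- the "quantum fluctuation".
rng = np.random.default_rng(0)

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # amplitudes of |+> in the measurement basis
probs = np.abs(plus) ** 2                 # Born rule: outcome probabilities

outcomes = rng.choice([0, 1], size=10, p=probs)
print(outcomes)  # e.g. [1 1 0 ...]: different outcomes from identical preparations
```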

However, I must admit that this "dislike" of the term "quantum fluctuations", and my translation of it, are unconventional. Unfortunately, I think this means that the usage of the term is somewhat user-dependent, and as a result I don't think you'll get a conclusive answer to this question.


Thanks to all who have posted answers or comments so far. Having learned from them, I thought it might be helpful to post my own answer:

Definition. Quantum fluctuation refers to the fact that when a physical system is prepared in a given state, some property or quantity is measured, and this procedure is repeated many times, then the measured outcomes may fluctuate even though the system was prepared in the same state each time.

This definition captures what I think is the usage in professional physics. It covers things like the quantum contribution to electrical noise, and I think it covers the usage in thermal physics and the study of phase transitions. (In the latter I think the word 'fluctuation' is a shorthand for the quantum 'spreading' through superposition, when the system state is not an eigenstate of observables such as correlation functions.)
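To make the definition concrete, here is a small numerical sketch (the choice of state $|+\rangle$, the observable $\sigma_z$, and the sample size are my own illustrative assumptions): repeated, identical preparations yield fluctuating outcomes whose sample statistics estimate $\langle A \rangle$ and $\langle A^2 \rangle - \langle A \rangle^2$.

```python
import numpy as np

# Qubit in |+>, observable sigma_z with outcomes +1/-1, each occurring
# with Born-rule probability 1/2. The fluctuation <A^2> - <A>^2 is 1
# even though every preparation is the same.
rng = np.random.default_rng(1)

outcomes = rng.choice([+1, -1], size=100_000, p=[0.5, 0.5])

mean = outcomes.mean()
var = (outcomes ** 2).mean() - mean ** 2
print(f"<A> ~ {mean:.3f}, <A^2> - <A>^2 ~ {var:.3f}")  # ~0.000 and ~1.000
```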

However, there is another usage, widely employed in popular presentations of physics, where the phrase 'quantum fluctuation' is used as an attempt to help non-experts get a feel for the physics of quantum fields, including the lowest-energy state, called the vacuum state. The aim is to convey the richness of these fields, and the fact that they influence events even when we might naively say 'there is nothing there'. Unruh radiation is a good example. However, as I understand it, when these fields are in their vacuum state they are never themselves causal. Rather, they mediate phenomena whose cause is something else, such as an incoming real particle, or a force which caused something to accelerate. This statement applies to vacuum polarisation too, because that is a statement about the interaction of the electromagnetic and Dirac fields, and it is described by Feynman diagrams having two external photon lines.

The upshot of all this, then, is two meanings: one as stated in the definition above, and another which is not really about fluctuation at all, but is simply a way to encourage people to marvel at the richness and subtlety of quantum field theory and of empty space. Unfortunately, this honest attempt to help has resulted in a large amount of nonsense being mixed in with the good sense. It is an ongoing project to find better ways of conveying good physical intuition about quantum field theory, whether for the expert or the non-expert.


I understand your concern. I believe that the reason for this terminology has to be understood historically: it is meant to denote something different from classical (thermal) fluctuations. Once one remembers this, I think the term achieves its purpose (i.e., your point 1).

The one thing one has to realize is that there are no "fluctuations" classically at zero temperature. Consider a classical spin model on a graph, where $\sigma$ is a configuration of spins ($0$ or $1$) on this graph. The classical Hamiltonian is a function of $\sigma$, namely $E(\sigma)$. At zero temperature the system is in the state of minimum energy; call it $\sigma_0$. In other words, we can say the system is in the state $\rho$ with

$$ \rho_{\sigma,\sigma'}= \delta_{\sigma,\sigma_0} \delta_{\sigma',\sigma_0}. $$

(I'm using a notation that is also valid quantum mechanically.) Clearly the state is diagonal (it's classical), but it's also pure, or in other words an extremal state (a Kronecker delta function). The state is "frozen" in the configuration $\sigma_0$. Intuitively there are no "fluctuations", i.e., no other configurations contributing to the state. How do we measure this?

Any classical observable $A$ is also diagonal in $\sigma$. Computing averages with the above state, one has

$$ (\Delta A)^2 := \langle A^2 \rangle - \langle A \rangle^2 = 0. \tag{1} $$

Indeed the two facts are equivalent: being an extremal state and having zero fluctuations for every observable.
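To make this concrete, here is a small numerical check of Eq. (1) at $T=0$ (the four-configuration toy system, its energies, and the observable below are my own assumptions, not part of the answer):

```python
import numpy as np

# Four classical configurations with energies E(sigma); the T = 0 state
# is the "frozen" projector onto the ground configuration sigma_0.
E = np.array([0.0, 1.0, 1.0, 2.0])
sigma0 = np.argmin(E)

rho = np.zeros((4, 4))
rho[sigma0, sigma0] = 1.0            # rho_{sigma,sigma'} = delta_{sigma,sigma_0} delta_{sigma',sigma_0}

A = np.diag([3.0, -1.0, 2.0, 0.5])   # an arbitrary diagonal (classical) observable

mean_A = np.trace(rho @ A)
var_A = np.trace(rho @ A @ A) - mean_A ** 2
print(var_A)                          # 0.0: no fluctuations in the frozen state
```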

If we now raise the temperature, at equilibrium the state of the system is

$$ \rho_{\sigma,\sigma'}= \delta_{\sigma,\sigma'} e^{-\beta E(\sigma)}/Z, $$

with $Z$ the partition function. Clearly Eq. (1) will now not be valid in general, and we can have a phase transition as we raise the temperature. We say (colloquially) that this phase transition is due to thermal fluctuations.
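And the same toy system at finite temperature, where the variance becomes nonzero (again my own sketch, with an assumed $\beta = 1$):

```python
import numpy as np

# The Gibbs state is still diagonal but no longer extremal, so Eq. (1)
# fails for generic diagonal observables.
E = np.array([0.0, 1.0, 1.0, 2.0])
beta = 1.0

weights = np.exp(-beta * E)
Z = weights.sum()                     # partition function
rho = np.diag(weights / Z)            # rho_{sigma,sigma'} = delta_{sigma,sigma'} e^{-beta E(sigma)}/Z

A = np.diag([3.0, -1.0, 2.0, 0.5])

mean_A = np.trace(rho @ A)
var_A = np.trace(rho @ A @ A) - mean_A ** 2
print(var_A)                          # > 0: thermal fluctuations
```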

Now you see the reason for the term "quantum fluctuations": quantum mechanically, Eq. (1) is in general violated even at zero temperature.
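A minimal single-spin sketch of this last point (the Hamiltonian $H = -\sigma_x$ is an assumed example): at zero temperature the system is in a pure ground state, yet Eq. (1) fails for $\sigma_z$.

```python
import numpy as np

# The ground state of H = -sigma_x is |+>, a pure state, but the diagonal
# observable sigma_z still has nonzero variance at T = 0.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])

H = -sx
evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]                  # ground state, i.e. |+> up to a sign

mean = ground @ sz @ ground
var = ground @ sz @ sz @ ground - mean ** 2
print(var)                            # ~1.0: "quantum fluctuations" at T = 0
```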