Why aren't particles constantly "measured" by the whole universe?

Seems like the whole universe is receiving information about the electron's position.

Yes, the influence that an electron exerts on the rest of the universe does depend on the location of the electron, but the mere existence of that dependence is not enough to constitute a measurement of the electron's location. What matters is the degree to which the electron's influence on the rest of the universe depends on its location.

Consider something analogous to but simpler than a double-slit experiment: an electron in deep space, in a superposition of two different locations $A$ and $B$. Even in deep space, the electron is not alone, because space is filled with cosmic microwave background (CMB) radiation, which has a typical wavelength of about $1$ millimeter. When CMB radiation is scattered by the electron, the resulting state of the radiation depends on the electron's location, but the key question is how much it depends on the electron's location. If the locations $A$ and $B$ differ from each other by $\gg 1$ millimeter, then the CMB radiation will measure the electron's location very effectively, because an electron in location $A$ will have a very different effect on the CMB radiation than an electron in location $B$ would have. But if locations $A$ and $B$ differ from each other by $\ll 1$ millimeter, then an electron in location $A$ will not have a very different effect on the CMB radiation than an electron in location $B$ would. Sure, the electron has a significant effect on the CMB radiation regardless of its location, but the key is whether the effect differs significantly when the location is $A$ versus $B$. The CMB radiation measures the electron's location, but it does so with limited resolution: widely spaced locations will be measured very effectively, but closely separated locations will not.
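As a quick sanity check of that "$1$ millimeter" figure, here is a small Python estimate (my addition, not part of the original answer) that gets the typical CMB wavelength from Wien's displacement law and compares two hypothetical separations against it; the separations are made-up illustrative values.

```python
# Sanity check (my addition): typical CMB wavelength from Wien's displacement law,
# compared against two hypothetical separations |A - B|.
T_cmb = 2.725        # CMB temperature, kelvin
b_wien = 2.898e-3    # Wien's displacement constant, m*K

wavelength_cmb = b_wien / T_cmb                 # peak CMB wavelength, meters
print(f"typical CMB wavelength: {wavelength_cmb * 1e3:.2f} mm")   # ~1.06 mm

for separation in (1e-2, 1e-7):                 # hypothetical |A - B|: 1 cm and 100 nm
    verdict = ("effectively measured by the CMB" if separation > wavelength_cmb
               else "not resolved by the CMB")
    print(f"|A - B| = {separation:g} m -> location {verdict}")
```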

For this to really make sense, words are not enough. We need to consider the math. So here's a version that includes a smidgen of math.

Let $|a\rangle$ denote the state of the universe (including the electron) that would result if the electron's location were $A$, and let $|b\rangle$ denote the state of the universe that would result if the electron's location were $B$. If the electron started in some superposition of locations $A$ and $B$, then the resulting state of the universe will be something like $|a\rangle+|b\rangle$. Whether or not the electron's location is effectively measured, these two terms will be essentially orthogonal to each other, $\langle a|b\rangle\approx 0$, simply because they differ significantly in the location of the electron itself. So the fact that the final state is $|a\rangle+|b\rangle$ with $\langle a|b\rangle\approx 0$ doesn't tell us anything about whether or not the electron's location was actually measured. For that, we need a principle like this:

  • The electron's location has been effectively measured if and only if the states $|a\rangle$ and $|b\rangle$ are such that $\langle a|\hat O|b\rangle\approx 0$ for all feasibly-measurable future observables $\hat O$. (Quantifying "$\approx 0$" requires some care, but I won't go into those details here.)

For an operator $\hat O$ to be "feasibly measurable", it must be sufficiently simple, which loosely means that it does not require determining too many details over too large a region of space. This is a fuzzy definition, of course, as is the definition of measurement itself, but this fuzziness doesn't cause any problems in practice. (The fact that it doesn't cause any problems in practice is frustrating, because this makes the measurement process itself very difficult to study experimentally!)

In the example described above, the suggested condition is satisfied if locations $A$ and $B$ differ by $\gg 1$ millimeter, because after enough CMB radiation has been scattered by the electron, the states $|a\rangle$ and $|b\rangle$ differ significantly from each other everywhere, and no operator $\hat O$ that is simple enough to represent a feasibly-measurable observable can possibly un-do the orthogonality of the states $|a\rangle$ and $|b\rangle$. Loosely speaking, the states $|a\rangle$ and $|b\rangle$ aren't just orthogonal; they're prolifically orthogonal, in a way that can't be un-done by any simple operator. In contrast, if locations $A$ and $B$ differ by $\ll 1$ millimeter, then we can choose an operator $\hat O$ that acts just on the electron (and is therefore relatively simple) to obtain $\hat O|a\rangle\approx |b\rangle$, thus violating the condition $\langle a|\hat O|b\rangle\approx 0$. So in this case, the electron's location has not been effectively measured at all. The states $|a\rangle$ and $|b\rangle$ are orthogonal simply because they differ in the location of the electron itself, but they are not prolifically orthogonal, because the effect on the rest of the universe doesn't depend significantly on whether the electron's location was $A$ or $B$.
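To make "prolifically orthogonal" a bit more concrete, here is a toy numerical sketch (my own, not from the answer above, and much cruder than a real scattering calculation): the electron has two location states $A$ and $B$, each scattered environment photon is modeled as a qubit whose final state depends on the electron's location, and the "simple" operator we try is one that acts on the electron alone.

```python
# Toy model (my addition): |a> = |A> ⊗ |e_A>^n, |b> = |B> ⊗ |e_B>^n, where each
# environment qubit |e_A>, |e_B> records (imperfectly) which location scattered it.
import numpy as np

def env_state(theta, n_photons):
    """n_photons environment qubits, each in cos(theta)|0> + sin(theta)|1>."""
    single = np.array([np.cos(theta), np.sin(theta)])
    state = np.array([1.0])
    for _ in range(n_photons):
        state = np.kron(state, single)
    return state

def simple_op_matrix_element(theta, n_photons):
    """Return <a|b> and <a|(O ⊗ 1_env)|b> for O = |A><B| + |B><A| on the electron."""
    ket_A, ket_B = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    a = np.kron(ket_A, env_state(0.0, n_photons))    # electron at A plus its scattered photons
    b = np.kron(ket_B, env_state(theta, n_photons))  # electron at B plus its scattered photons
    O = np.array([[0.0, 1.0], [1.0, 0.0]])           # swaps A <-> B, touches the electron only
    Ob = (O @ b.reshape(2, -1)).reshape(-1)          # apply O ⊗ 1_env to |b>
    return a @ b, a @ Ob

# theta = 0: each photon ends up in the same state whether the electron is at A or B
# (separation << wavelength); theta = pi/3: each photon partly distinguishes A from B.
for theta, label in [(0.0, "A, B much closer than the wavelength"),
                     (np.pi / 3, "A, B separated by about a wavelength or more")]:
    for n in (1, 10, 20):
        ab, aOb = simple_op_matrix_element(theta, n)
        print(f"{label}, {n:2d} scattered photons: <a|b> = {ab:.1g}, <a|O|b> = {aOb:.3g}")
```

In both regimes $\langle a|b\rangle=0$, but only when the photons can distinguish $A$ from $B$ does $\langle a|\hat O|b\rangle$ decay (exponentially in the number of scattered photons) for an operator acting on the electron alone; that decay is what "prolifically orthogonal" is meant to capture.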

What I'm doing here is describing "decoherence" in a different way than it is usually described. The way I'm describing it here doesn't rely on any factorization of the Hilbert space into the "system of interest" and "everything else." The way I'm describing it here (after quantifying some of my loose statements more carefully) can be applied more generally. It doesn't solve the infamous measurement problem (which has to do with the impossibility of deriving Born's rule within quantum theory), but it does allow us to determine how effectively a given observable has been measured.

Some quantitative calculations — including quantitative results for the specific example I used here — are described in Tegmark's paper "Apparent wave function collapse caused by scattering" (https://arxiv.org/abs/gr-qc/9310032), which is briefly reviewed in https://physics.stackexchange.com/a/442464. Those calculations use the more traditional description of decoherence, but the results are equally applicable to the way I described things here.


There are time-scales related to interactions, or, equivalently, interaction rates; these rates are often calculated to lowest order using Fermi's Golden Rule. An experiment that measures electron interference needs to make sure that the time of flight of the electrons from the source to the observation screen is much shorter than any of the time-scales of the possible interactions.

In interference experiments, we therefore define a coherence time for the interfering particles.

In real experiments, we do indeed face the problem of shielding particles from being measured by the environment before they interfere. For example, in electron interferometers realized in solid-state devices, we have to go to very low temperatures, where the interactions between electrons and phonons become very 'slow' (their rate becomes very small). We also have to make sure that the devices are small enough that the Coulomb interaction between electrons, which persists even at the lowest temperatures, does not spoil the interference (the decoherence rate due to electron-electron interaction also depends on temperature: it becomes smaller with decreasing temperature).
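To illustrate the kind of comparison involved, here is a rough back-of-the-envelope sketch in Python (my addition; the beam energy, path length, and interaction rate are made-up numbers, not taken from any particular experiment):

```python
# Illustrative check (hypothetical numbers): interference survives when the
# time of flight is much shorter than the inverse of the decohering interaction rate.
import math

m_e = 9.109e-31            # electron mass, kg
eV = 1.602e-19             # joules per electronvolt

beam_energy_eV = 1.0e3     # assumed 1 keV beam (non-relativistic treatment is fine here)
path_length = 0.5          # assumed source-to-screen distance, meters
interaction_rate = 1.0e3   # assumed decoherence rate due to the environment, 1/s

speed = math.sqrt(2 * beam_energy_eV * eV / m_e)
time_of_flight = path_length / speed
coherence_time = 1.0 / interaction_rate

print(f"time of flight : {time_of_flight:.2e} s")
print(f"coherence time : {coherence_time:.2e} s")
print("fringes expected" if time_of_flight < 0.01 * coherence_time
      else "decoherence likely washes out the fringes")
```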


The other answers are great, but not very useful for experimentally minded people. I will try to address this question from the practical point of view.

A nice heuristic way of thinking about measurement is through the energy-level shifts that actually cause decoherence (and ultimately measurement). This is the way quantum computing folk often think about qubits.

An energy perturbation $\delta E$ adds roughly an extra phase factor $e^{i\, \delta E \, t/\hbar}$ on top of the quantum state's usual phase $e^{iE_0t/\hbar}$. If these extra phase shifts are big and occur randomly, then the quantum state will evolve in a way that is unrelated to the Hamiltonian you think it obeys. However, if $\delta E$ is small compared to the relevant energy scales (like gravity compared to electromagnetism), so that the accumulated phase $\delta E\, t/\hbar$ stays tiny over the timescale you are considering, then the phase slips are negligible and no "measurement" has occurred in the practical sense.

To give a concrete example, think about the reflection of a photon with energy $\hbar \omega$ from a mirror. A naive thought is that the reflection of the photon counts as a "measurement" of the photon because the photon transfers momentum to the mirror, and thus will "collapse" the wave function. Let's see if that's true.

The photon's momentum changes from $+\hbar \mathbf{k}$ to $-\hbar \mathbf{k}$, giving $2\hbar \mathbf{k}$ of momentum to the mirror. This change in momentum doesn't come for free: some energy has to go into the kinetic energy of the mirror. Let's assume the light is visible light with a wavelength of 500 nm, and the mirror weighs 100 grams. Then the energy transferred to the mirror, and the corresponding time for a full $2\pi$ phase slip, are:

$$E_\textrm{mirror}=\frac{p^2}{2m}=\frac{4\hbar^2 k^2}{2m} \approx 2\cdot10^{-34}\ \textrm{eV}\approx 9\cdot10^{-35}\, \hbar \omega_{\textrm{photon}} \quad\implies\quad t_{2\pi}=\frac{2\pi\hbar}{E_\textrm{mirror}}\approx 2\cdot10^{19}\ \textrm{s}\approx 6\cdot10^{11}\ \textrm{years}$$

This means that the "measurement" by the mirror causes a single $2\pi$ phase slip only on a timescale of hundreds of billions of years, far longer than the age of the universe. You can imagine that under normal circumstances this is not a "measurement", so the photon maintains its quantum state.
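For completeness, here is a short Python check of the numbers above (my addition; constants rounded to a few digits):

```python
# Numerical check of the mirror estimate (SI units).
import math

hbar = 1.0546e-34      # J*s
h = 2 * math.pi * hbar
c = 2.998e8            # m/s
eV = 1.602e-19         # J

wavelength = 500e-9    # m, visible light
mirror_mass = 0.1      # kg

k = 2 * math.pi / wavelength
E_photon = h * c / wavelength                        # ~2.5 eV
E_mirror = (2 * hbar * k) ** 2 / (2 * mirror_mass)   # recoil kinetic energy of the mirror

t_2pi = h / E_mirror                                 # time for (E_mirror / hbar) * t to reach 2*pi
year = 3.156e7                                       # seconds per year

print(f"E_mirror = {E_mirror / eV:.2e} eV "
      f"= {E_mirror / E_photon:.2e} * hbar*omega_photon")
print(f"2*pi phase-slip time = {t_2pi:.2e} s = {t_2pi / year:.2e} years")
```

The printed recoil energy and phase-slip time come out around $2\cdot10^{-34}$ eV and $6\cdot10^{11}$ years, consistent with the estimate above.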