Why can't the outcome of a QM measurement be calculated a priori?

First of all, let me point out that there have been experimental violations of Bell's inequalities. This provides damning evidence against local hidden-variable models of quantum mechanics, and thus strongly suggests that random outcomes are an essential feature of quantum mechanics. If the outcomes of measurements in every basis were predetermined, we could not violate Bell's inequality.
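As a rough numerical illustration, here is a minimal sketch (assuming numpy) of the CHSH form of Bell's inequality: any local hidden-variable model bounds the combination S by |S| ≤ 2, while the quantum singlet-state correlation E(a, b) = -cos(a - b) reaches 2√2 at the standard angle choices.

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation for analyzer angles a and b (radians)."""
    return -np.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the violation.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

# CHSH combination: local hidden variables require |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # quantum prediction: 2*sqrt(2) ~ 2.828 > 2
```

This is only the quantum-mechanical prediction, of course; the experiments cited above are what show that nature actually realizes it.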

One way of seeing why this is in fact a reasonable state of affairs is to consider Schrödinger's cat. The evolution of a closed quantum system is unitary, and hence entirely deterministic. In the case of the cat, at some point in time the system is in a superposition of (atom decayed and cat dead) and (atom undecayed and cat alive), with equal amplitude for each. Thus far quantum mechanics predicts the exact state of the system.

Now consider what happens when we open the box and look at the cat. The system should then be in a superposition of (atom decayed, cat dead, box open, you aware cat is dead) and (atom undecayed, cat alive, box open, you aware cat is alive). As time goes on, the two branches of the wavefunction diverge further as the consequences of whether the cat is alive or dead propagate out into the world, and as a result interference between them becomes effectively impossible. There are then two branches of the wavefunction with different configurations of the world. If you believe the Everett interpretation of quantum mechanics, both branches continue to exist indefinitely.

Our thinking clearly depends on whether we have seen the cat alive or dead, so we ourselves are in a state of either (seen cat dead, aware we have seen the cat dead, not seen the cat alive) or (seen cat alive, aware we have seen the cat alive, not seen the cat dead). Thus even if we exist in a superposition, we are only ever aware of a classical outcome of the experiment. Quantum mechanics allows us to calculate the exact wavefunction which results from the experiment; however, it cannot tell us a priori which branch we will find ourselves aware of afterwards. This isn't really a shortcoming of the mathematical framework, but rather of our inability to perceive ourselves in anything other than classical states.
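The loss of interference between the branches can be sketched numerically. In the toy model below (numpy assumed; a single two-level "environment" stands in for everything the outcome has propagated into), tracing out the environment leaves the cat with a reduced density matrix whose off-diagonal (interference) terms have vanished:

```python
import numpy as np

dead, alive = np.array([1.0, 0.0]), np.array([0.0, 1.0])
env0, env1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthogonal records

# Entangled state (|dead, env0> + |alive, env1>) / sqrt(2):
# each branch is correlated with a distinct, orthogonal environment state.
psi = (np.kron(dead, env0) + np.kron(alive, env1)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # full 4x4 density matrix

# Reduced density matrix of the cat: partial trace over the environment.
rho_cat = np.einsum('iaja->ij', rho.reshape(2, 2, 2, 2))
print(rho_cat)  # diagonal [[0.5, 0], [0, 0.5]]: no interference terms remain
```

Because the environment records are orthogonal, the cat alone behaves like a classical mixture of "dead" and "alive", even though the total state is still a pure superposition.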


The short answer is that we do not know why the world is this way. There may eventually be theories which explain this, rather than simply taking it as axiomatic, as current theories do. Perhaps such future theories will relate to what we currently call the holographic principle, for example.

There is also the apparently partially related fact of the quantization of elementary phenomena, e.g. that the spin of an elementary particle is always measured in integer or half-integer multiples of ħ. We also do not know why the world is this way.

If we try to unify these two facts, the essentially statistical aspect of quantum phenomena and the quantization of the phenomena themselves, the beginnings of a new theory start to emerge. See the papers by Tomasz Paterek, Borivoje Dakic, Caslav Brukner, Anton Zeilinger, and others for details:

http://arxiv.org/abs/0804.1423 and

http://www.univie.ac.at/qfp/publications3/pdffiles/Paterek_Logical%20independence%20and%20quantum%20randomness.pdf

beginning with Zeilinger's (1999) paper, http://www.springerlink.com/content/jt342534x711542g/ (also available free online).

These papers present phenomenological (preliminary) theories in which logical propositions about elementary phenomena can somehow carry only one or a few bits of information.

Thanks for asking this question. It was a pleasure to find these papers.


My two lepta on this mainly conceptual and semantic problem:

It seems that people approach this with an initial position or desire: some want/expect/believe that measurements should be predictable to the last decimal point, while others are pragmatic and accept that maybe they are not. The first group wants an explanation of why unpredictability exists.

An experimentalist knows that measurements are predictable only within errors, and those errors can sometimes be very large. Take classical wave mechanics: try to predict weather fronts, a completely classical problem. The daily weather report is a reminder of how large the uncertainties are in classical problems that are, in principle, completely deterministic. This leads to the theory of deterministic chaos. So predictability is a concept in the head of the questioner, as far as quantum or classical measurements go. The difference is that in classical physics we believe we know why there is unpredictability.
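The point about deterministic chaos can be made concrete with the standard toy example, the logistic map (the initial conditions and iteration count here are just illustrative): the dynamics is fully deterministic, yet two starting points differing by 10⁻¹⁰ end up macroscopically far apart.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10  # two nearly identical initial conditions
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # grows to order one: prediction is lost despite determinism
```

In this sense "unpredictable within errors" is already familiar from classical physics; what is new in quantum mechanics is that the unpredictability of an individual outcome is claimed to be irreducible rather than a matter of imperfect knowledge.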

Has physics given up on predicting the throw of a die? Taken to extremes, trying to pin down the physics of the metastable states in the fall of a die, we come again to errors and the accuracy of measurement.

Within measurement errors, at the orders of magnitude we live in, from nanometers to kilometers, quantum mechanics is very predictive, as evinced by all the marvelous ways we communicate through this board, and even by achievements like lasing and superconductivity. It is only when probing the very small that the theoretical unpredictability of individual measurements in QM enters: so small that "intuitions" and beliefs can come to dominate over measurement and errors. There, according to the inherent beliefs of each observer, either the desire for a classical predictability framework or the willingness to explore new concepts determines whether a physicist will obsess over this conundrum or live with it until a theory of everything arrives.