Can we tell when an established theory is wrong?

Can we get a false positive of a theory being right just because the instruments doing the measuring have that theory built in?

This sounds dangerously close to a contradiction in terms, so let me carefully read you as saying that the readings of the instruments doing the measuring "are interpreted according to that theory," possibly by calculations that sit between the transducers and the screen we read numbers off of.

And the answer then is, "sort of: yes, that's a real thing that can happen; yes, it can throw you for a loop (especially if it means that you're not measuring what you think you are measuring); but no, the theory is not really in trouble in those cases."

For example, when the OPERA neutrino debacle was unfolding, there were lots of guesses about what was going wrong, and many of them were of the form "you're assuming that the GPS satellite system works according to this theory; maybe it works according to that one instead." It turned out that the theory was correct but the timing-signal cabling was faulty; still, that was a very real possibility that a lot of people took very seriously.
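To get a feel for the stakes, here is the rough arithmetic behind the anomaly -- a quick sketch using the approximate figures from the original report (roughly a 730 km baseline and a ~60 ns early arrival; treat the exact values as illustrative):

```python
C = 299_792_458.0        # speed of light, m/s
baseline = 730e3         # m: CERN-to-Gran-Sasso distance, approximate
early = 60e-9            # s: reported early arrival, approximate

t_light = baseline / C               # ~2.44 ms of light travel time
excess = early / (t_light - early)   # fractional speed excess (v - c)/c

print(f"(v - c)/c ~ {excess:.1e}")   # ~2.5e-5
```

A sixty-nanosecond offset against a couple of milliseconds of flight time is exactly why everyone's suspicion went straight to the timing chain rather than to relativity itself.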

The three big perspectives on theory choice in science

Why, however, would the theory not be in trouble? Because of a pervasive fiction that we teach kids in high school: we've got a bad textbook-selection process that creates echo chambers, no strong impetus for good scientists to take "shifts" teaching high-school science or reviewing textbooks, and not much communication between philosophers of science and working scientists. So I forgive you if you've never heard any of this!

The fairy tale is that scientists first observe the facts of the world, look for patterns in those facts, propose those patterns as a "hypothesis", collect more facts, and see whether those new facts confirm or deny that hypothesis. If a hypothesis accumulates no denying-facts, we call it a "scientific theory". One reason you should find this suspect is that it's really hard to pin down who we should ascribe it to: it comes from some time after Francis Bacon and Isaac Newton, but I have never been able to track down whose model it really was. In any case I regard it as a "fairy tale" because in my professional career in science I have never seen someone say, "okay, we have to stop what we're doing and start observing facts and looking for patterns, so that we can form a hypothesis." To the extent that this happens at all, it's totally subconscious; it's not a "method" in any normal sense of that term.

One of the first big advances was due to Karl Popper, who wanted to discriminate science from pseudoscience by suggesting that science differs from other fields of endeavor in a particular way: good science "sticks its neck out" -- it has a stake in what it says about the world. He observed that pseudoscience can usually be made compatible with any statement of fact, whereas proper science comes with experiments you could run that would hypothetically prove the idea wrong. This isn't really embodied in the "scientific method" above, but it's not too hard to add. A lot of people take it a step further than Popper himself perhaps would have, using it to split scientific beliefs into only two essential categories -- "falsified" and "not yet falsified", rather than "false" and "true" -- so that everything gets reconceptualized as a ruthless survival-of-the-fittest system.

Unfortunately, the boat was then rocked even harder by Thomas Kuhn, who pointed out something which you also see in the OPERA story above: theories can be "established". That's very different from Popper's model, which says that we're eagerly looking to cut down all of our theories. Indeed, during the OPERA affair most physicists pushed back against the report of superluminal neutrinos: "no, you must be wrong; there is something wrong with your timing circuitry or your GPS models or something." Kuhn went a step further still: he said that established theories are unfalsifiable, which is like knifing Popper in the back!

Kuhn pointed out that there are two levels: "theory" and "model". A scientific theory is a paradigm -- a way of structuring how you think about the world, how you ask questions, and how you decide which questions are worth asking. You need to already have some theory in order to offer explanations in the first place, because theories define the "ground rules" by which their models work, so that you can model a phenomenon at all. It's as if the theory is the paper, paint, and brush, whereas the model is the painting you make with them. When something differs between the picture and what you see in the world, you can shelve your model as a limited approximation, tear off a new sheet of theory-paper, mix new theory-paint, and try a different painting -- maybe one which is similar in the broad strokes but changes when you start to paint with a finer brush. Kuhn regarded this model-selection process as Popperian.

But every once in a while, Kuhn said, we see scientific revolutions in which the art supplies themselves get redesigned. He didn't have a great model for exactly how this happens, but he definitely thought it was crucial to scientific progress. His idea was basically: everyone gets together, agrees that they're painting with too broad a brush, floats lots of ideas for better brush-shapes and pigments and paper and what have you, keeps whichever is aesthetically pleasing while serving the purpose, and then settles back down into normal scientific work. People didn't like this, because it seemed to reopen the door to pseudoscience: might astrology now claim to be a "scientific theory" under which many "astrological models" could be proposed and scrapped? When Kuhn tried to defend himself, he didn't have much of an answer. He basically said that scientists have a lot of aesthetic values that they want out of their theories, and those values banned astrology for not being very simple or parsimonious with how we think about the rest of the cosmos -- a principle sometimes called Ockham's Razor. Kuhn's work was really important, but it left this question without a good answer.

I will go one step farther than I think he did, and say that theories tend to be Turing-complete for their domain. We now know that if you include the centrifugal and Coriolis forces, you can put the Earth at the center of the solar system; geocentrism is exactly as valid as heliocentrism. "Was this not the subject of a huge scientific revolution? How awful, that from Newton's laws we can prove it makes no difference!" Well, yes -- but what was at stake was not "which theory is right?" but rather "which theory is better?"
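For the record, the mechanism here is just the standard rotating-frame form of Newton's second law, stated for reference (this is a textbook result, nothing exotic): in a frame rotating with angular velocity ω, the real force F picks up fictitious companions,

$$m\,\ddot{\mathbf r}' \;=\; \mathbf F \;-\; \underbrace{m\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf r')}_{\text{centrifugal}} \;-\; \underbrace{2m\,\boldsymbol{\omega}\times\dot{\mathbf r}'}_{\text{Coriolis}} \;-\; \underbrace{m\,\dot{\boldsymbol{\omega}}\times\mathbf r'}_{\text{Euler}},$$

and with those terms included you may do your mechanics in any frame you like, Earth-centered frames included.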

To my mind this tension was best resolved by Imre Lakatos, whose works take a little effort to understand. The key is that theory choice is actually driven by lazy grad students. You can regard a scientific theory as the "genes" of its scientific models, and those genes make it easy or hard for a model to predict surprising new observations which can be confirmed by experiment -- which spurs an interesting publication, which can excite other researchers into making more publications with that theory. The theories that simplify the models will thus "reproduce"; the ones that don't will get ignored by grad students who don't want to do all of that work if they can easily avoid it. So heliocentrism won because it did not need the "epicycles" that the geocentrists used. Those epicycles were totally valid -- they amount to a Fourier series for periodic motion, after all -- but they were complicated and hard to use. Putting the Sun at a fixed point in your coordinates meant that you didn't need to calculate them, so lazy grad students chose heliocentric coordinates. Theory choice is therefore natural selection. A more recent example: we now have a deterministic quantum mechanics via "pilot waves"; why is nobody using it? Because it's complicated! Nobody really likes the Copenhagen interpretation philosophically, but it's dead simple to use and predicts all of the same outcomes.
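To make the Fourier-series remark concrete, here's a minimal sketch (assuming idealized circular, coplanar orbits; radii and periods are rounded): the geocentric path of Mars is exactly a two-term epicycle, one rotating circle per orbiting body, and eccentric orbits would simply add further, smaller terms to the series.

```python
import numpy as np

t = np.linspace(0, 5, 2000)            # time, in years
r_e, w_e = 1.00, 2 * np.pi             # Earth: 1 AU, one orbit per year
r_m, w_m = 1.52, 2 * np.pi / 1.88      # Mars: 1.52 AU, 1.88-year period

earth = r_e * np.exp(1j * w_e * t)     # heliocentric positions as complex numbers
mars  = r_m * np.exp(1j * w_m * t)

# The geocentric view is just a change of origin: each r*exp(i*w*t) term
# is one Fourier component, i.e. one "epicycle".
geocentric_mars = mars - earth

# Retrograde motion falls out for free: the apparent angular velocity
# changes sign whenever the fast epicycle term dominates the deferent.
apparent_angle = np.unwrap(np.angle(geocentric_mars))
print("retrograde?", np.any(np.diff(apparent_angle) < 0))   # True
```

Heliocentric coordinates turn the same motion into a single term per planet, which is exactly the labor saving that the lazy grad student is selecting for.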

So that's why the theory is not at stake in such circumstances: only the model is, until you get to larger concerns of the form "it's too hard to model this in that way." Such concerns will seldom arise because the instruments assume some other theory -- more likely, some new instrument comes out and forces new measurements which complicate our entire understanding of an existing theory. Then someone comes along with something like quantum mechanics: "hey, we can do all of this a lot better if we just predict averages rather than exact values, and if we use complex numbers to form our additive probability space." That changes the questions you ask; you gloss over "how did this photon get here rather than somewhere else?" in favor of "what are the HOMO and LUMO for this molecule?"
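Here's a minimal sketch of what "complex numbers as an additive probability space" buys you, using a made-up two-path setup: classically you add probabilities; quantum-mechanically you add amplitudes first and square afterwards, and the cross term is interference.

```python
import numpy as np

# Two indistinguishable paths to one detector, each with amplitude 1/sqrt(2);
# the relative phase between the paths is the knob an experimenter can turn.
for phase in (0.0, np.pi / 2, np.pi):
    a1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)
    a2 = (1 / np.sqrt(2)) * np.exp(1j * phase)

    p_classical = abs(a1) ** 2 + abs(a2) ** 2   # always 1.0: no interference
    p_quantum = abs(a1 + a2) ** 2               # 2.0, 1.0, 0.0: interference

    print(f"phase={phase:.2f}  classical={p_classical:.1f}  quantum={p_quantum:.1f}")
```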


You have to give a concrete example. Experiments are designed so as not to depend on what they are trying to measure.

Your speed-of-light example is not good. Wasn't the whole scientific community in a dither because superluminal neutrinos were supposed to have been measured -- until it was found that there was a malfunction in an instrument?

In any case, theories are not proven by data; they are just validated, i.e. registered as consistent with the data. If the neutrino experiment had been correct and another experiment had confirmed it, the theory would have had to change.


> Can we tell when an established theory is wrong?

Not always, and not everybody. Sometimes one or more people can, and they explain why; but other people won't entertain it, and then the "established theory" gets even more established.

> I was reading the following answer from this question: "In physics, you cannot ask / answer why without ambiguity."

You can ask why. We do physics to understand the world. Not to be told to shut up and calculate by some guy who doesn't understand anything.

> Now, we observe that the speed of light is finite and that it seems to be the highest speed for energy.

Yes, but the speed of light varies with gravitational potential. See Einstein writing about it in 1920, or Shapiro in 1964.
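For what it's worth, the standard weak-field statement of that claim is a textbook result (quoted here for reference, not taken from either author); it concerns the coordinate speed of light, since a local measurement always returns c. To first order in the potential Φ = -GM/r,

$$c(r) \;\approx\; c_0\left(1 + \frac{2\Phi(r)}{c_0^{2}}\right),$$

and integrating the resulting slowdown along a radar path passing the Sun at impact parameter b, between endpoints at distances r₁ and r₂, gives the one-way Shapiro delay

$$\Delta t \;\approx\; \frac{2GM}{c_0^{3}}\,\ln\frac{4\,r_1 r_2}{b^2}.$$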

> Effective theories have been built around this limitation, and they are consistent since they depend on measuring devices based on technologies and sciences that all have c built in. In modern science, one doesn't care what is happening, only what the devices measure.

Phooey. See the tautology described by Magueijo and Moffat in http://arxiv.org/abs/0705.4507. We use the local motion of light to define our second and our metre, which we then use to measure the local motion of light. So we always measure the same answer, even though the speed varies.
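The circularity is easy to caricature in a few lines; this sketch is mine, not Magueijo and Moffat's, but it shows why the SI definitions guarantee the answer. Since 1983 the metre has been defined as the distance light travels in 1/299 792 458 of a second, so any length you quote in metres is already calibrated by light travel time:

```python
C_DEFINED = 299_792_458           # m/s -- exact, by definition of the metre

def measure_speed_of_light(flight_time_s: float) -> float:
    """Time a light pulse over a baseline -- but the baseline's length in
    metres is itself fixed by light travel time, per the SI definition."""
    baseline_m = C_DEFINED * flight_time_s   # what "length in metres" means
    return baseline_m / flight_time_s        # ... so this is C_DEFINED, always

print(measure_speed_of_light(1e-6))   # 299792458.0, whatever light "really" did
```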

> I think this raises a good point: can we get a false positive of a theory being right just because the instruments doing the measuring have that theory built in?

I'm not sure the false positive arises because the instruments have the theory built in. I'd be happier saying false positives can arise from a lack of understanding.

> For example, if it were the case that there is a particle which travels faster than light, could we even tell, since some of our methods of measuring distance involve relativity (which assumes the speed of light as an upper bound)?

Yes, we could; I don't think there's much of an issue with that. Like anna said, superluminal neutrinos were supposed to have been measured. If such a measurement had been independently confirmed, then the theory would have had to change -- but maybe not in the way you think. Neutrinos are actually more like photons than electrons; you might classify neutrinos as light in the wider sense.