Does Gödel preclude a workable ToE?

The answer is no. Although a "Theory of Everything" means a computational method of describing any situation, it does not allow you to predict the eventual outcome of the evolution infinitely far into the future; it only lets you plod along, predicting the outcome little by little as you go.

Gödel's theorem is a statement that it is impossible to predict the infinite time behavior of a computer program.

Theorem: Given any precise way of producing statements about mathematics, that is, given any computer program which spits out statements about mathematics, this computer program either produces falsehoods, or else does not produce every true statement.

Proof: Given the program "THEOREMS" which outputs theorems (it could be doing deductions in Peano Arithmetic, for example), write the computer program SPITE to do this:

  • SPITE prints its own code into a variable R
  • SPITE runs THEOREMS, and scans the output looking for the theorem "R does not halt"
  • If it finds this theorem, it halts.

If you think about it, the moment THEOREMS says that "R does not halt", it is really proving that "SPITE does not halt", and then SPITE halts, making THEOREMS into a liar. So if "THEOREMS" only outputs true theorems, SPITE does not halt, and THEOREMS does not prove it. There is no way around it, and it is really trivial.
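For concreteness, here is a minimal sketch of SPITE in Python. Everything in it is a stand-in: `theorems()` is a placeholder for a real theorem enumerator such as a Peano Arithmetic proof machine, and the quining step is approximated with `inspect.getsource`, which only captures the function's own body.

    import inspect

    def theorems():
        """Placeholder for THEOREMS: lazily enumerate the statements some
        formal system proves, as plain strings. A real implementation would
        do deductions in Peano Arithmetic."""
        yield from []   # an honest system never asserts that SPITE fails to halt

    def spite():
        # Step 1: put (a stand-in for) our own code into the variable R.
        R = inspect.getsource(spite)
        # Step 2: scan the output of THEOREMS for the claim that R does not halt.
        target = "The program with code %r does not halt" % R
        for theorem in theorems():
            if theorem == target:
                return   # Step 3: halting at this point makes THEOREMS a liar
        # If THEOREMS never makes the claim, SPITE searches forever
        # (the placeholder enumerator above is empty, so this sketch just ends).

    spite()

The real content is in the quining step; the rest is just an unbounded search through the output of THEOREMS.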

The reason it has a reputation for being complicated is due to the following properties of the logic literature:

  • Logicians are studying formal systems, so they tend to be overly formal when they write. This bogs down the logic literature in needless obscurity, and holds back the development of mathematics. There is very little that can be done about this, except exhorting them to try to clarify their literature, as physicists strive to do.
  • Logicians made a decision in the 1950s to not allow computer science language in the description of algorithms within the field of logic. They did this purposefully, so as to separate the nascent discipline of CS from logic, and to keep the unwashed hordes of computer programmers out of the logic literature.

Anyway, what I presented is the entire proof of Gödel's theorem, using a modern translation of Gödel's original 1931 method. For a quick review of other results, and for more details, see this MathOverflow answer: https://mathoverflow.net/a/72151/36526.

As you can see, Gödel's theorem is a limitation on understanding the eventual behavior of a computer program, in the limit of infinite running time. Physicists do not expect to figure out the eventual behavior of arbitrary systems. What they want is a computer program which will follow the evolution of any given system out to any finite time.

A ToE is like the instruction set of the universe's computer. It doesn't tell you what the output is, only what the rules are. A ToE would be useless for predicting the future, or rather, it is no more useful for prediction than Newtonian mechanics, statistics, and occasional quantum mechanics are for the day-to-day world. But it is extremely important philosophically, because once you find it, you have understood the basic rules, and there are no more surprises lurking beneath.

Incorporating Comments

There were comments on this answer which I will incorporate here, since comments are only supposed to be temporary and some of the observations in them are useful.

Hilbert's program was an attempt to establish that set theoretic mathematics is consistent using only finitary means. There is an interpretation of Gödel's theorem that goes like this:

  • Gödel showed that no consistent system (containing a certain amount of arithmetic) can prove its own consistency
  • Set theory proves everything Peano Arithmetic proves, and more besides, including the consistency of Peano Arithmetic; so a consistency proof of set theory within arithmetic would give set theory a proof of its own consistency
  • Therefore Gödel kills Hilbert's program of proving the consistency of set theory using arithmetic.

This interpretation is false, and does not reflect Hilbert's point of view, in my opinion. Hilbert left the definition of "finitary" open. I think this was because he wasn't sure exactly what should be admitted as finitary, although I think he was pretty sure of what should not be admitted as finitary:

  1. No real numbers, no analysis, no arbitrary subsets of $\Bbb Z$. Only axioms and statements expressible in the language of Peano Arithmetic.
  2. No structures which you cannot realize explicitly and constructively, the way you can realize an integer. So no uncountable ordinals, for example.

Unlike his followers, he did not say that "finitary" means "provable in Peano Arithmetic" or "provable in primitive recursive arithmetic (PRA)", because I don't think he believed this was strong enough. Hilbert had experience with transfinite induction and its power, and I think that he, unlike others who followed him in his program, was ready to accept that transfinite induction proves more theorems than ordinary Peano induction.

What he was not willing to accept was axioms based on a metaphysics of set existence, things like the Powerset axiom and the Axiom of Choice. These two axioms produce systems which not only violate intuition, but are also not obviously grounded in experience, so the axioms cannot be verified by intuition either.

Those who followed Hilbert interpreted finitary as "provable in Peano Arithmetic" or in a weaker fragment, like PRA. Given this interpretation, Gödel's theorem kills Hilbert's program. But this interpretation is crazy, given what we know now.

Hilbert wrote a book on the foundations of mathematics (the Grundlagen der Mathematik, with Bernays) after Gödel's theorem, and I wish it were translated into English, because I don't read German. I am guessing that he says in there what I am about to say here.

What Finitary Means

The definition of finitary is completely obvious today, after 1936 (the year of Church and Turing). A finitary statement is a true statement about computable objects, things that can be represented on a computer. This is equivalent to saying that a finitary statement is a proposition about integers which can be expressed (not necessarily proved) in the language of Peano Arithmetic.

This includes integers, finite graphs, text strings, and symbolic manipulations (basically, anything that Mathematica handles), and it includes ordinals too. You can represent the ordinals up to $\epsilon_0$, for example, using a text-string encoding of their Cantor normal form.
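As an illustration of how concrete this is, here is a small sketch (my own encoding, not anything standard) that represents ordinals below $\epsilon_0$ as nested tuples of exponents in Cantor normal form, with a comparison function and a printer:

    def compare(a, b):
        """Compare two Cantor-normal-form ordinals; returns -1, 0, or 1.
        An ordinal is a non-increasing tuple of exponents (e1, e2, ...),
        meaning w^e1 + w^e2 + ..., and each exponent is again such a tuple."""
        if a == b:
            return 0
        for x, y in zip(a, b):
            c = compare(x, y)
            if c != 0:
                return c
        return -1 if len(a) < len(b) else 1   # one is a proper prefix of the other

    def show(a):
        return "0" if a == () else " + ".join("w^(" + show(e) + ")" for e in a)

    one   = ((),)                  # w^0 = 1
    omega = (one,)                 # w^1 = omega
    alpha = (omega, one, (), ())   # w^omega + w + 1 + 1
    print(show(alpha), compare(alpha, omega))   # prints the normal form and 1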

The ordinals which can be fully represented on a computer are limited by the Church-Kleene ordinal, which I will call $\Omega$. This ordinal is relatively small in traditional set theory, because it is a countable ordinal, easily exceeded by $\omega_1$ (the first uncountable ordinal), by $\omega_\Omega$ (the Church-Kleene-th uncountable ordinal), and by the ordinal of a huge cardinal. But it is important to understand that every computational representation of an ordinal falls below it.

So doing finitary mathematics means talking about objects you can represent on a machine, and in particular restricting yourself to ordinals less than the Church-Kleene ordinal. The following section argues that this is no restriction at all, since approaching the Church-Kleene ordinal can establish the consistency of any system.

Ordinal Religion

Gödel's theorem is best interpreted as follows: given any consistent (and omega-consistent) axiomatic system S, you can make it stronger by adding the axiom "consis(S)". There are several ways of making the system stronger, and some of them are not simply related to this extension, but consider this one.

Given any system and a computable ordinal, you can iterate this strengthening process up to that ordinal (a purely symbolic sketch of the iteration follows the list below). So there is a map from ordinals to consistency strength. This implies the following:

  • Natural theories are linearly ordered by consistency strength.
  • Natural theories are well-founded (there is no infinite descending chain of theories $A_k$ such that $A_k$ proves the consistency of $A_{k+1}$ for all $k$).
  • Natural theories approach the Church-Kleene ordinal in strength, but never reach it.
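Here is the purely symbolic sketch promised above, with theories treated as sets of axiom labels; `strengthen` and `iterate` are my own toy names, and the limit stages are exactly the places where a genuine computable ordinal notation has to supply fundamental sequences:

    def strengthen(theory):
        """Successor step: S -> S + consis(S), with axioms kept as strings."""
        return theory | {"consis(" + " & ".join(sorted(theory)) + ")"}

    def iterate(theory, notation):
        """Walk a finite piece of an ordinal notation: 'succ' entries apply
        the successor step; 'limit' entries stand for unioning an infinite
        sequence of earlier stages, which is not something this toy can do."""
        for step in notation:
            if step == "succ":
                theory = strengthen(theory)
            # 'limit': union over a fundamental sequence (omitted in this sketch)
        return theory

    print(iterate({"PA"}, ["succ", "succ"]))
    # e.g. {'PA', 'consis(PA)', 'consis(PA & consis(PA))'} (set order varies)

The point of the sketch is only that each successor step is mechanical; choosing a notation that climbs toward $\Omega$ is the part that cannot be automated.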

It is natural to assume the following:

  • Given a sequence of ordinals which approaches the Church-Kleene ordinal, the theories corresponding to these ordinals will, between them, prove every true statement of arithmetic, including the consistency of arbitrarily strong consistent theories.

Further, the consistency proofs can often be carried out just as well in constructive logic, so really:

  • In the limit of the Church-Kleene ordinal, every theorem that can be proven at all gets a constructive proof.

This does not contradict Gödel's theorem, because generating an ordinal sequence which approaches $\Omega$ cannot be done algorithmically; it cannot be done on a computer. Further, any finite stage is not really philosophically much closer to the Church-Kleene ordinal than where you started, because there is always infinitely more structure left undescribed.

So $\Omega$ knows all and proves all, but you can never fully comprehend it. You can only get closer by a series of approximations which you can never precisely specify, and which are always somehow infinitely inadequate.

You can believe that this is not true, that there are statements that remain undecidable no matter how close you get to Church-Kleene, and I don't know how to convince you otherwise, other than by pointing to longstanding conjectures that could have been absolutely independent, but fell to sufficiently powerful methods. To believe that a sufficiently strong formal system resolves all questions of arithmetic is an article of faith, explicitly articulated by Paul Cohen in Set Theory and the Continuum Hypothesis. I believe it, but I cannot prove it.

Ordinal Analysis

So given any theory, like ZF, one expects that there is a computable ordinal such that induction up to that ordinal proves the theory's consistency. How close have we come to doing this?

We know how to prove the consistency of Peano Arithmetic: this can be done in PA, in PRA, or in Heyting Arithmetic (constructive Peano Arithmetic), supplemented by the single additional axiom

  • Every countdown from $\epsilon_0$ terminates.

This means that the proof theoretic ordinal of Peano Arithmetic is $\epsilon_0$. That tells you that Peano arithmetic is consistent, because it is manifestly obvious that $\epsilon_0$ is an ordinal, so all its countdowns terminate.
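To see what "every countdown terminates" means in practice, here is a toy countdown through ordinals below $\omega^\omega$ (a tiny initial segment of $\epsilon_0$), my own example rather than anything from Gentzen's proof. An ordinal is a non-increasing tuple of exponents; a successor just drops its final "+1", and a limit term $\omega^e$ is replaced by $n$ copies of $\omega^{e-1}$, with $n$ growing each tick:

    def step_down(alpha, n):
        """Produce a strictly smaller ordinal (non-increasing exponent tuple)."""
        *rest, last = alpha
        if last == 0:                            # successor: subtract 1
            return tuple(rest)
        return tuple(rest) + (last - 1,) * n     # limit: drop to n copies of w^(last-1)

    alpha, ticks = (2,), 0        # start the countdown at w^2
    while alpha != ():
        ticks += 1
        alpha = step_down(alpha, ticks)
    print("reached 0 after", ticks, "steps")     # prints: reached 0 after 4 steps

However the descent is driven, it always bottoms out; accepting that this still holds all the way up to $\epsilon_0$ is exactly the additional axiom above.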

There are constructive set theories whose proof-theoretic ordinals are similarly well understood; see "Ordinal analysis: Theories with larger proof-theoretic ordinals".

To go further requires an advance in our systems of ordinal notation, but there is no limitation in principle to establishing the consistency of set theories as strong as ZF using computable ordinals which can be comprehended.

Doing so would complete Hilbert's program: it would remove any need for an ontology of infinite sets in doing mathematics. You can disbelieve in the set of all real numbers and still accept the consistency of ZF, or of inaccessible cardinals (using a bigger ordinal), and so on up the chain of theories.

Other interpretations

Not everyone agrees with the sentiments above. Some people view undecidable propositions like those provided by Gödel's theorem as somehow having a random truth value, not determined by anything at all, so that they are absolutely undecidable. This makes mathematics fundamentally random at its foundation. This point of view is often advocated by Chaitin. In this view, undecidability is a fundamental limitation on what we can know about mathematics, and so bears a resemblance to a popular misinterpretation of Heisenberg's uncertainty principle, which takes it to be a limitation on what we can know about a particle's simultaneous position and momentum (as if these were hidden variables).

I believe that Gödel's theorem bears absolutely no resemblance to this misinterpretation of Heisenberg's uncertainty principle. The preferred interpretation of Gödel's theorem is that every sentence of Peano Arithmetic is still true or false, not random, and whichever it is should be provable in a strong enough reflection of Peano Arithmetic. Gödel's theorem is no obstacle to our eventually knowing the answer to every question of mathematics.

Hilbert's program is alive and well, because it seems that countable ordinals less than $\Omega$ resolve every mathematical question. This means that if some statement is unresolvable in ZFC, it can be settled by adding a suitable chain of axioms of the form "ZFC is consistent", "ZFC+consis(ZFC) is consistent" and so on, transfinitely iterated up to a countable computable ordinal, or similarly starting with PA, or PRA, or Heyting arithmetic (perhaps by iterating up the theory ladder using a different step-size, like adding transfinite induction to the limit of all provably well-ordered ordinals in the theory).

Gödel's theorem does not establish undecidability, only undecidability relative to a fixed axiomatization, and this procedure produces a new axiom which should be added to strengthen the system. This is an essential ingredient in ordinal analysis, and ordinal analysis is just Hilbert's program as it is called today. Generally, everyone gets this wrong except the handful of remaining people in the German school of ordinal analysis. But this is one of those things that can be fixed by shouting loud enough.

Torkel Franzén

There are books about Gödel's theorem which are more nuanced, but which I think still don't get it quite right. Greg P says, regarding Torkel Franzén:

I thought that Franzen's book avoided the whole 'Goedel's theorem was the death of the Hilbert program' thing. In any case he was not so simplistic and from reading it one would only say that the program was 'transformed' in the sense that people won't limit themselves to finitary reasoning. As far as the stuff you are talking about, John Stillwell's book "Roads to Infinity" is better. But Franzen's book is good for issues such as BCS's question (does Godel's theorem resemble the uncertainty principle).

Finitary means computational, and a consistency proof just needs an ordinal of sufficient complexity.

Greg P responded:

The issue is then what 'finitary' is. I guess I assumed it excluded things like transfinite induction. But it looks like you call that finitary. What is an example of non-finitary reasoning then?

When the ordinal is not computable, that is, when it is the Church-Kleene ordinal or bigger, the reasoning is infinitary. If you use the set of all reals, or the powerset of $\Bbb Z$ as a set with discrete elements, that's infinitary. Ordinals which can be represented on a computer are finitary, and this is the point of view that I believe Hilbert pushes in the Grundlagen, but it's not translated.


I think Conway's Game of Life is a great example here. We have the "Theory of Everything" for the Game of Life: the laws that determine the behavior of every system. They're extremely simple, just a few sentences! These simple "rules of the game" are analogous to a "theory of everything" that would satisfy a physicist living in the Game of Life universe.
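To make the point vivid, the entire "theory of everything" of the Game of Life fits in a few lines of Python (the B3/S23 rule); the function names here are my own:

    from itertools import product

    def neighbors(cell):
        x, y = cell
        return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
                if (dx, dy) != (0, 0)}

    def step(live):
        """One tick of the universe: a live cell survives with 2 or 3 live
        neighbors, a dead cell is born with exactly 3."""
        candidates = live | {n for c in live for n in neighbors(c)}
        return {c for c in candidates
                if len(neighbors(c) & live) == 3
                or (c in live and len(neighbors(c) & live) == 2)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(step(step(step(step(glider)))))   # after 4 ticks the glider reappears, shifted one cell diagonally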

On the other hand, you can build a Turing-complete computer in the Game of Life, which means you can formulate questions about the Game of Life which have no mathematically provable answer. The questions would sound something like:

Here is a complicated configuration of trillions of cells. Starting with this configuration, run the Game of Life for an infinite number of steps. Will the cell at such-and-such coordinate ever turn on?

These two things aren't really related. Of course we can understand the extremely simple "theory of everything" for the Game of Life. At the same time, of course we cannot mathematically prove the answer to every question like the one above, about the asymptotic behavior of very complicated configurations of dots within the Game of Life.
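The question above is only semi-decidable, and a short sketch shows why: we can run the rules forward and watch the target cell, but if it never turns on, the search simply never returns. (`step` is the same B3/S23 update as in the sketch above, and `ever_turns_on` is my own name.)

    from itertools import product

    def step(live):
        nbrs = lambda c: {(c[0] + dx, c[1] + dy)
                          for dx, dy in product((-1, 0, 1), repeat=2) if (dx, dy) != (0, 0)}
        cand = live | {n for c in live for n in nbrs(c)}
        return {c for c in cand
                if len(nbrs(c) & live) == 3 or (c in live and len(nbrs(c) & live) == 2)}

    def ever_turns_on(live, target):
        """Halts with the tick number if the target cell ever becomes live;
        runs forever otherwise, and there is no general way to shortcut this."""
        tick = 0
        while target not in live:
            live, tick = step(live), tick + 1
        return tick

    print(ever_turns_on({(0, 0), (1, 0), (2, 0)}, (1, 1)))   # a blinker turns (1,1) on at tick 1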

Likewise, we can (one hopes) find the ToE for our universe. But we certainly will not be able to mathematically prove every possible theorem about the asymptotic behavior of things following the laws of the universe. No one expected to do that anyway.


People tend to take Gödel's theorem and bend it, stretch it, misstate it, misapply it, and generally do things to it that, if you did them to a cockroach in Texas, would get you arrested for animal cruelty. But there is a book, Franzén (2005), that should be enough to inoculate any responsible adult against such naughty behavior. Some points made by Franzén:

  1. Gödel's theorem only applies to formal axiomatic systems.
  2. Gödel's theorem only applies to systems that can describe "a certain amount of arithmetic" (which is defined in a specific technical way).
  3. Gödel's theorem tells us that any consistent theory will have certain undecidable statements. However, these statements are typically of no interest whatsoever.
  4. In addition to the notion of consistency, there is one of relative consistency.

Any one of these suffices to show that Gödel's theorem has no relevance to the enterprise of physics. Let's take them one at a time.

1. Gödel's theorem only applies to formal axiomatic systems.

Almost no useful, real-world physical theories have ever been stated as formal axiomatic systems (an exception being Fleuriot, 2001). No such formalization has ever been used to do any real-world physics (i.e., the kind of thing that you could get published in a journal). "Formal axiomatic system" means something very different to a logician than what a physicist might imagine. It means reducing all possible statements of the theory to strings of characters, and all of the theory's axioms to rules for manipulating these strings, stated so explicitly that a computer could check them. This type of formalization is neither necessary nor sufficient to make a physical theory valid, useful, or interesting.
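As a sense of scale, here is a toy example (entirely my own, not from Fleuriot or Franzén) of what a logician means: statements are literal strings, the one axiom and one inference rule are string manipulations, and a proof is a list of strings a program can check without knowing what they mean.

    AXIOM = "I+0=I"   # the single axiom, as a literal string (unary numerals: I, II, III, ...)

    def add_stroke(s):
        """The single inference rule: from 'x=y' infer 'xI=yI'."""
        left, right = s.split("=")
        return left + "I=" + right + "I"

    def check(proof):
        """Accept iff every line is the axiom or follows from an earlier line."""
        for i, line in enumerate(proof):
            if line != AXIOM and not any(add_stroke(prev) == line for prev in proof[:i]):
                return False
        return True

    print(check(["I+0=I", "I+0I=II"]))   # True: the second line follows from the first
    print(check(["I+0=I", "II+0=II"]))   # False: nothing licenses that string

The checker has no idea that "I+0=I" is supposed to mean 1+0=1; it only matches strings, which is exactly the sense of "formal" at stake here.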

2. Gödel's theorem only applies to systems that can describe "a certain amount of arithmetic."

This is more of a limitation than you might imagine. In our present-day scientific culture, we go to school and learn arithmetic, then geometry and the real-number system. This makes us imagine the integers to be a simple mathematical system, and the reals to be a more complicated one built on top of the integers. This is no more than a cultural bias. The elementary theory of the real numbers is equivalent to the elementary theory of Euclidean geometry. ("Elementary" has a technical meaning here: roughly, expressible in first-order logic.) Elementary Euclidean geometry is incapable of describing "a certain amount of arithmetic" as defined in Gödel's theorem. So Gödel's theorem doesn't apply to the elementary theory of the real numbers, and in fact this theory has been proved to be consistent, complete, and even decidable (Tarski, 1951). It's quite possible that a ToE could be expressed in geometrical language, without the use of any arithmetic, or in the language of the real-number system. For example, Newton's Principia is couched completely in the language of Euclid's Elements, and it's also not obvious to me that there is any obstruction to stating theories like Maxwell's equations or general relativity in the language of the real-number system, using elementary logic.
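To illustrate the boundary (my own examples, not Franzén's or Tarski's), a sentence such as

$$\forall a\,\forall b\,\forall c\;\Bigl(a\neq 0 \;\rightarrow\; \bigl(\exists x\; a x^2+b x+c=0 \;\leftrightarrow\; b^2-4ac\ge 0\bigr)\Bigr)$$

is a first-order statement about the reals, and Tarski's procedure decides it (it is true). By contrast, a statement like Goldbach's conjecture quantifies over the integers, and the integers cannot even be singled out as a subset inside the elementary theory of the reals, so Gödel's theorem gets no purchase there.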

3. Gödel's theorem tells us that any consistent theory will have certain undecidable statements. However, these statements are typically of no interest whatsoever.

I think this is pretty self-explanatory. And I don't think decidability is a necessary or even particularly desirable property for a ToE; few interesting theories in mathematics are decidable, and yet most mathematicians spend zero time worrying about that.

4. In addition to the notion of consistency, there is one of relative consistency.

It is possible to prove that one axiomatic system is equiconsistent with another, meaning that one is self-consistent if and only if the other is. If we had a ToE, and we could make it into an axiomatic system, and it was the type of axiomatic system to which Gödel's theorem applies, then it would probably be equiconsistent with some other well known system, such as some formulation of real analysis. Any doubt about the consistency of the ToE would then be equivalent to doubt about the consistency of real analysis -- but nobody believes that real analysis lacks consistency.

Finally, why do we care about "consistency"? I'm using the scare quotes because this is physics we're talking about. When I talk to a mathematician about the "self-consistency" of a theory, the usual reaction is a blank stare or a patronizing correction. "Self-"consistency is the only kind of consistency a mathematician ever cares about. But a physicist cares about more than that. We care about whether a theory is consistent with experiment. There is no good reason to care whether a ToE can't be proved to be self-consistent, because there are other worries that are far bigger. The ToE could be self-consistent, but someone could do an experiment that would prove it was wrong.


J. Fleuriot, A Combination of Geometry Theorem Proving and Nonstandard Analysis with Application to Newton's Principia, 2001

T. Franzén, Gödel's Theorem: An Incomplete Guide to Its Use and Abuse, 2005

A. Tarski, A Decision Method for Elementary Algebra and Geometry, 2nd rev. ed., 1951 [Reprinted in his Collected Papers, Vol. 3.]