The Lucas argument vs the theorem-provers -- who wins and why?

Yes, computers can infer that the Gödel sentence is true. This is performed in a meta-theory which is stronger than the object theory, as it has to be.

For example, Russell O'Connor formalized Gödel's incompleteness theorems in Coq. As he points out in Section 7.1, Coq can prove that the natural numbers form a model of Peano arithmetic $PA$. I cannot find in his formalization an explicit statement that Gödel's sentence is true (which is not to say it isn't there), but I am quite confident that it would take little effort to formalize such a statement.
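For context, the meta-theoretic inference being mechanised is the standard one. Here is a sketch (the exact statements in O'Connor's development may differ), with $G$ the Gödel sentence of $PA$ and $\mathrm{Prov}_{PA}$ its provability predicate:

```latex
\begin{align*}
&PA \vdash G \leftrightarrow \neg\mathrm{Prov}_{PA}(\ulcorner G \urcorner)
  && \text{($G$ asserts its own unprovability)}\\
&PA \vdash \mathrm{Con}(PA) \rightarrow G
  && \text{(formalised first incompleteness theorem)}\\
&\text{meta-theory} \vdash \mathbb{N} \models PA,\ \text{hence}\ \mathrm{Con}(PA)
  && \text{(soundness of $PA$, provable in Coq)}\\
&\text{meta-theory} \vdash \mathbb{N} \models G
  && \text{($G$ is true)}
\end{align*}
```

The crucial point is that the last two steps live in the meta-theory, not in $PA$; this is exactly why the meta-theory must be the stronger of the two.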

[This paragraph was made obsolete when the question was edited to address the issue.] Also, let me point out that you might be confusing meta-theory with object theory. Paulson uses Isabelle (with the Nominal package) as the meta-theory to prove Gödel's incompleteness theorem, but the way you phrased your question makes it sound as if you think Paulson's mechanised proof is carried out in $ZF$ without infinity.

Lastly, I would just like to say that I never understood how one could hold the position that ugly bags of mostly water are superior to machines in their ability to understand and create mathematics. A machine is not subject to uncontrollable chemical processes, fatigue, emotions, and temptations to sacrifice just a little bit of truth for a great deal of fame.


One can try to rescue Lucas's reasoning by arguing that humans can see the consistency of ZF minus Infinity (or any other formal system under consideration) by mathematical intuition, and then infer the Gödel sentence by logic. The difference between the human mind and the computer is then taken to be this mathematical intuition, rather than the logic that follows it. I believe Penrose has given this version of the argument.
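Spelled out, the rescued argument has this shape (a sketch; $T$ is the formal system under consideration, $G_T$ its Gödel sentence, and $T$ is assumed to contain enough arithmetic for the formalised incompleteness theorem to apply):

```latex
\begin{align*}
&\text{intuition supplies:} && \mathrm{Con}(T)\\
&\text{logic supplies (provable even inside $T$):} && \mathrm{Con}(T) \rightarrow G_T\\
&\text{modus ponens yields:} && G_T
\end{align*}
```

On this reading, only the first line is claimed to be non-mechanical; everything after it is routine deduction a machine could also perform.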

This argument is problematic because humans clearly do not just look at a formal system and see whether it is consistent (except perhaps for very simple ones). Instead we guess the consistency of sets of axioms using various heuristics: mathematical experience, analogies to the physical world, and so on. We could equally equip a computer with a set of heuristics for guessing the consistency of formal systems. The only downside is that the heuristics would sometimes be wrong, so the machine would give a false answer to some mathematical queries; but it is precisely this fallibility that lets it evade the Gödel/Turing limitations.
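As a toy illustration (all names and heuristics here are hypothetical, invented for the sketch): Gödel/Turing rule out a sound, complete, total decision procedure for consistency, but a fallible guesser is trivially computable, which is all the paragraph above requires:

```python
# Toy, fallible "consistency guesser" for formal systems, represented as
# sets of axiom descriptions. A sound + complete + total decider is
# impossible, but nothing stops us from programming heuristics that can
# be wrong -- just like human mathematical judgment.

def guess_consistent(axioms: frozenset) -> bool:
    """Return a *guess* at whether the axiom set is consistent."""
    # Heuristic 1: reject schemes with a known bad track record.
    known_bad = {"unrestricted comprehension"}  # cf. Frege and Russell's paradox
    if axioms & known_bad:
        return False
    # Heuristic 2: otherwise trust experience and default to "consistent".
    # This is where the guesser can err -- and that fallibility is exactly
    # what exempts it from the diagonal argument.
    return True

print(guess_consistent(frozenset({"extensionality", "unrestricted comprehension"})))  # False
print(guess_consistent(frozenset({"Peano axioms"})))  # True
```

The design point is the trade: by giving up guaranteed correctness, the guesser becomes total, and the Gödel/Turing diagonalisation no longer applies to it.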

But this is no big deal, as humans also make mistakes. In particular, top mathematicians have made serious errors about the consistency of formal systems, most famously Frege, who wrote an entire book in a formal system that turned out to be subject to Russell's paradox. Some mathematicians have even doubted the consistency of Peano arithmetic; either those few great mathematicians are wrong about this question, or almost all of the rest are.

So there does not seem to be any real difference between humans and machines on this point.

I believe this argument is essentially the same as one given by Turing in his paper "Computing Machinery and Intelligence" (where he also introduced the Turing test), nine years before Lucas.

The result in question refers to a type of machine which is essentially a digital computer with an infinite capacity. It states that there are certain things that such a machine cannot do. If it is rigged up to give answers to questions as in the imitation game, there will be some questions to which it will either give a wrong answer, or fail to give an answer at all however much time is allowed for a reply. [ ..... ] This is the mathematical result: it is argued that it proves a disability of machines to which the human intellect is not subject.

The short answer to this argument is that although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect. But I do not think this view can be dismissed quite so lightly. Whenever one of these machines is asked the appropriate critical question, and gives a definite answer, we know that this answer must be wrong, and this gives us a certain feeling of superiority. Is this feeling illusory? It is no doubt quite genuine, but I do not think too much importance should be attached to it. We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines. Further, our superiority can only be felt on such an occasion in relation to the one machine over which we have scored our petty triumph. There would be no question of triumphing simultaneously over all machines. In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.


Lucas's article responds to Turing's:

He argues that the limitation to the powers of a machine do not amount to anything much. Although each individual machine is incapable of getting the right answer to some questions, after all each individual human being is fallible also: and in any case "our superiority can only be felt on such an occasion in relation to the one machine over which we have scored our petty triumph. There would be no question of triumphing simultaneously over all machines." But this is not the point. We are not discussing whether machines or minds are superior, but whether they are the same. In some respect machines are undoubtedly superior to human minds; and the question on which they are stumped is admittedly, a rather niggling, even trivial, question. But it is enough, enough to show that the machine is not the same as a mind. True, the machine can do many things that a human mind cannot do: but if there is of necessity something that the machine cannot do, though the mind can, then, however trivial the matter is, we cannot equate the two, and cannot hope ever to have a mechanical model that will adequately represent the mind.

This argument seems to be: "For each machine, there is some Gödel sentence it cannot verify. There exists some mind that can verify all of these Gödel sentences. Therefore, (some) minds are not machines." The second premise is the problem: some Gödel sentences would take, say, 150 years just to state, and no human mind could understand them, let alone verify them.
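The gap can be put in terms of quantifier order. Gödel/Turing give only the first statement below; Lucas's conclusion needs the second, and swapping the quantifiers is exactly the unproved step (this is also the content of Turing's "no question of triumphing simultaneously over all machines"):

```latex
\begin{align*}
&\forall M\ \exists G_M:\ M \text{ cannot verify } G_M
  && \text{(what G\"odel/Turing actually give)}\\
&\exists \text{ mind } \forall M:\ \text{the mind verifies } G_M
  && \text{(what Lucas needs)}
\end{align*}
```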

He also responds to Turing again later, but it's a different argument of Turing's, so not relevant to my answer.