How useful is Turing completeness? Are neural nets Turing complete?

Regular feedforward neural networks are not Turing complete. They are, in effect, equivalent to a single complicated mathematical function that may do a great deal of calculation but has no ability to perform looping or other control flow operations.
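To make the "single function" point concrete, here is a minimal NumPy sketch (the `weights`/`biases` lists are hypothetical parameters, not any particular trained model). The Python loop runs over a fixed layer list, not over data-dependent state, so the whole thing unrolls into one pure function:

```python
import numpy as np

def feedforward(x, weights, biases):
    # A forward pass is a fixed composition of affine maps and
    # nonlinearities: no branches or loops that depend on the data.
    for W, b in zip(weights, biases):
        x = np.tanh(W @ x + b)   # one layer: affine map + nonlinearity
    return x
```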

However, if you wire up a neural network with some way to access a stateful environment, then it can be made into a Turing complete machine.

As a trivial example, you could recreate a classic-style Turing machine where:

  • the input to the neural network is the symbol under the tape head and the previous state
  • the output of the neural network is the next state and the action (the symbol to write and the head movement)

You could then train the neural network to emulate the state table / configuration of any desired Turing machine (perhaps by supervised learning on the actions of another Turing machine?).
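A minimal sketch of that wiring (the function name and the `(state, symbol) -> (next_state, write, move)` interface are illustrative assumptions, not a standard API): the network plays the role of the state-transition table, while the tape, head position and halting check are the external stateful environment:

```python
from collections import defaultdict

def run_as_turing_machine(net, max_steps=10_000):
    # `net` is any callable mapping (state, symbol) to
    # (next_state, symbol_to_write, head_move), e.g. a trained
    # network with its outputs discretised.
    tape = defaultdict(int)            # unbounded tape, blank symbol = 0
    head, state = 0, "start"
    for _ in range(max_steps):         # cap steps, since halting is undecidable
        if state == "halt":
            break
        state, write, move = net(state, tape[head])
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape
```

Note that everything Turing-machine-like here (the tape, the head, the loop, the halt check) lives outside the network itself, which is exactly the point of the next paragraph.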

Note: The idea of running a feedforward net repeatedly with some form of state feedback is essentially a recurrent neural network. So you can think of a recurrent neural network, plus the logic that runs it repeatedly, as being Turing complete. You need the extra logic (over and above the network itself) to ensure Turing completeness, because something has to handle termination, repetition and IO.
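To see the equivalence, compare one recurrent step with the feedforward pass above: it is the same computation, except the previous hidden state is fed back in as extra input, and the repetition lives in external driving logic (a hedged sketch of the standard Elman-style update, with hypothetical parameter names):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One recurrent step: a feedforward computation whose input
    # includes the previous hidden state.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# The repetition is supplied by whatever runs the network:
#   h = h0
#   for x_t in input_stream:
#       h = rnn_step(x_t, h, W_x, W_h, b)
```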


When modern computers are said to be Turing complete, there is an unspoken exception for the infinite storage device Turing described, which is obviously impossible on a finite physical computing device. If a computing device can do everything a Turing machine can do (infinite storage notwithstanding), it is Turing complete for all practical intents and purposes. By this less strict definition of Turing completeness, yes, it's possible that many neural networks are Turing complete.

It is of course possible to create one that is not Turing complete.


The point of stating that a mathematical model is Turing complete is to reveal the model's capability to perform any computation, given a sufficient (i.e. infinite) amount of resources, not to show whether a specific implementation of the model actually has those resources. Non-Turing-complete models cannot handle a certain set of computations even with unlimited resources, and this reveals a difference in how the two kinds of models operate, even when their resources are limited. (For instance, the primitive recursive functions cannot compute the Ackermann function, no matter how much time and memory they are given.) Of course, to prove this property you do have to assume that the models are able to use an infinite amount of resources, but the property of the model is relevant even when resources are limited.