Why have mathematicians used differential equations to model nature instead of difference equations?

Although small discrete systems are easy to work with, continuum models are easier to deal with than large discrete systems. Whether or not nature is fundamentally discrete, the most useful models are often continuous, because the discreteness only shows up at very small scales. Discreteness is worth building into a model only if it is present at the scales of the situation we are interested in. I think this is to a large extent a question of scales of interest.

For example, if I have a mole of gas in a container, I could well model it as individual particles. But if I want a simpler model to work with and I am only interested in the behaviour at scales well above the atomic one, the usual "continuous" fluid mechanics is a good choice. This is because at such scales the gas is essentially scale invariant (it obeys similar laws if you zoom in), and thus calculus becomes applicable (and very powerful). This is of course not true if I go all the way down to the atomic scale, but I am not interested in that scale, so it does not matter that my model treats the gas in the same way there as well. Large-scale continuous quantities like pressure and density give a good understanding (including the ability to make good predictions quickly), and that should not be neglected. (Of course, if I want something coarser, I can go to a thermodynamic description. Either way, the modelling includes a step where the number of particles is taken to infinity to simplify the mathematics.)

The "scales of interest" phenomenon happens in both directions; we may neglect both too small and too large scales. For example, it might be a good idea to model a long rod by an infinitely long one (thus in a sense removing discreteness from the model). Then one can apply Fourier analysis or any other such tools that assume that the rod is infinitely long and mathematics becomes easier. This is maybe more common with respect to time than length: Fourier or Laplace transforms with respect to time are used for systems that have finite lifetime. If we are not interested in very large scales, we can assume our system to be infinitely large.

Discrete models are probably most useful when nature has a genuinely discrete structure (in the physical system in question) and we are interested in phenomena at the scale where that discreteness is visible. Seen at a larger scale, a discrete model would contain something (particles or some other discrete structure) that we cannot measure and might not even be interested in. Something that cannot be measured and does not significantly affect the behaviour of the system should be left out of the model. This is related to the observation that continuum models often work well for large discrete systems.

Let me conclude with an observation that is easy to miss because we are so used to it: At human scales nature seems continuous.


First, a historical remark: it was not until relatively recently in the history of science that people were convinced that the atomic theory of matter is correct. I believe the tide was turned by Einstein's 1905 paper explaining Brownian motion (as actually observed by Robert Brown) under the assumption that water is made up of molecules. Before that, many scientists held the belief that the universe really is continuous, and even those who didn't had trouble arguing with the predictive and explanatory success of continuous models.

Aside from that, the premise underlying this question ignores many deep and fundamental issues associated with passing back and forth between the continuous and the discrete. The sentence "just choose a very small $\Delta x$ instead of $dx$" sweeps under the rug some profoundly difficult mathematical problems. Some examples:

  • Global dynamical properties of a system are often hard to see in discrete models. For instance, numerical stability issues make it very hard to analyze hyperbolic systems discretely (see the numerical sketch after this list). There are also behaviors that just don't show up in a naive discretization: for example, it is not at all obvious why the second law of thermodynamics is consistent with the atomic theory of gases (in which the underlying equations are symmetric in time).
  • While there are a number of standard ways to replace an ordinary differential equation with a difference equation, the corresponding techniques for partial differential equations (such as the finite element method) are extremely challenging and are the basis for a lot of current research in numerical analysis.
  • Approximate solutions are actually not simpler than exact solutions in many (most?) cases. Consider the isoperimetric problem: find the planar curve of a given length which encloses the largest area. This can be reduced to solving a system of ordinary differential equations (the Euler–Lagrange equations). If you solve it analytically you get a circle; if you solve it discretely you get a sequence of curves that approximate a circle better and better (a concrete comparison follows this list). How is the latter simpler? This is a serious issue in physics: continuous models often have a lot of symmetry that you lose when you discretize them.
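To make the stability point in the first item concrete, here is a minimal numerical sketch (the equation, schemes, and parameters are illustrative choices, not taken from the text above). The advection equation $u_t + a\,u_x = 0$ simply translates its initial data, yet the naive forward-time, centered-space discretization blows up for every choice of time step, while a one-sided upwind scheme satisfying the CFL condition stays bounded:

```python
import numpy as np

# Minimal sketch (all parameters are illustrative): the advection equation
#   u_t + a u_x = 0
# is the simplest hyperbolic equation; its exact solution just translates the
# initial profile, so |u| never grows. The naive discretization (forward Euler
# in time, centered differences in space, "FTCS") is unstable for every time
# step, while the upwind scheme is stable under the CFL condition a*dt/dx <= 1.

a = 1.0                                   # advection speed
N = 200                                   # grid points on the periodic domain [0, 1)
dx = 1.0 / N
dt = 0.9 * dx / a                         # Courant number 0.9, within the CFL limit
x = np.arange(N) * dx
u0 = np.exp(-100.0 * (x - 0.5) ** 2)      # smooth initial bump, maximum value 1

c = a * dt / dx
ftcs = u0.copy()
upwind = u0.copy()
for _ in range(400):
    # centered difference in space + forward Euler in time: unstable
    ftcs = ftcs - 0.5 * c * (np.roll(ftcs, -1) - np.roll(ftcs, 1))
    # one-sided difference taken from the upwind direction: stable (but diffusive)
    upwind = upwind - c * (upwind - np.roll(upwind, 1))

print("exact solution : max |u| =", u0.max())              # stays 1 for all time
print("FTCS scheme    : max |u| =", np.abs(ftcs).max())    # enormous: the scheme has blown up
print("upwind scheme  : max |u| =", np.abs(upwind).max())  # bounded by 1, slightly smeared
```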
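And to spell out the isoperimetric comparison in the last item (a standard computation, added purely as illustration): the Euler–Lagrange equations give a circle of radius $L/2\pi$, hence area $L^2/4\pi$, while the natural discrete version of the problem, the $n$-sided polygon of perimeter $L$ enclosing the largest area, is solved by the regular $n$-gon, with area
$$A_n = \frac{L^2}{4n}\cot\frac{\pi}{n} \longrightarrow \frac{L^2}{4\pi} \qquad (n\to\infty).$$
The exact answer is a single object with full rotational symmetry; the discrete answer is an infinite family of approximations, each having only the dihedral symmetry of an $n$-gon.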

I'll also point out that one of the hardest problems in modern mathematical physics, finding a quantum theory of gravity, has so far resisted the "just choose a very small $\Delta x$ instead of $dx$" approach.


Physicists use lattice approximations all the time.

But lattice models typically break part of the symmetry of the system, which is a disadvantage from both a theoretical and a practical point of view. For example, it is not possible to make a lattice model invariant under arbitrary rotations, whereas most laws of physics are rotation invariant.
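A standard illustration (added here as an example, not part of the original answer): the five-point discrete Laplacian on a square lattice with spacing $h$ satisfies
$$\frac{u(x+h,y)+u(x-h,y)+u(x,y+h)+u(x,y-h)-4u(x,y)}{h^{2}} = \Delta u + \frac{h^{2}}{12}\bigl(u_{xxxx}+u_{yyyy}\bigr) + O(h^{4}),$$
and while this operator is invariant under rotations by multiples of $90^\circ$, the leading error term $u_{xxxx}+u_{yyyy}$ is not rotation invariant (the rotation-invariant combination would be $u_{xxxx}+2u_{xxyy}+u_{yyyy}$, the bilaplacian), so the orientation of the lattice leaves a visible imprint on the approximation.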