Is QFT mathematically self-consistent?

Nonrelativistic QFT is consistent - all renormalizations are finite.

Relativistic QFT is consistent in 2 and 3 spacetime dimensions. There are various rigorous constructions of interacting local quantum field theories.

In 4D, the situation is different; not a single interacting relativistic local QFT in 4D has been rigorously constructed. (But neither is there a no-go theorem that would forbid one.) The technical difficulties are much bigger than in 2D and 3D (where the proofs are already highly nontrivial).

Nontrivial renormalization is needed in 3D and 4D. (In 2D, Wick ordering is sufficient, which simplifies things a lot.)
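In formulas (the standard definition, for a free scalar field $\varphi$): Wick ordering just subtracts the divergent self-contraction,

$$ :\!\varphi(x)^2\!:\; =\; \varphi(x)^2 \;-\; \langle \varphi(x)^2\rangle, $$

and in 2D this single subtraction already makes powers of the field (and hence polynomial interactions) well-defined, whereas 3D and 4D require further, scale-dependent subtractions.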

My tutorial paper ''Renormalization without infinities - a tutorial'' discusses renormalization and how to avoid the divergences at a much simpler level than quantum field theory.

Chapter B5: Divergences and renormalization of my theoretical physics FAQ discusses some of the questions that are more specific to quantum field theories. In particular, there is a Section ''Is there a rigorous interacting QFT in 4 dimensions?'' with references to the state of the art.


Just as your quote says, infinity turns up as part of the answer in almost any simple-minded calculation in quantum field theory that goes past the lowest level of approximation. This doesn't affect quantum mechanics, but it's been there in the mathematics of QFT since the late 1920s. Feynman, together with a few other people at the end of the 1940s, introduced a less simple-minded way to calculate in QFT that sort-of-bypasses the infinities, but no-one who is mathematically inclined could feel very comfortable with the way it was done at that time.

Many physicists, however, perhaps most, are content nowadays with the mathematics of what is called the renormalization group. The Wikipedia page will give you a taste of it, but I doubt anyone can give you a short tutorial on the subject that will make you very happy. The renormalization group lets one organize the calculations rather more nicely, and it draws on lattice methods from classical statistical physics, which I think contributed to physicists feeling relatively comfortable with the mathematics.

There's definitely a group of people who think the current mathematics is still "dippy", but no-one has yet produced an acknowledged serious alternative. There are also less disgruntled efforts to improve the mathematics more-or-less incrementally; one strand of these uses Hopf algebras, which is abstract mathematics that no-one could call careless. One always wants improvement.


Feynman is referring to the problem of showing that quantum electrodynamics is mathematically consistent, which will be tricky, because it almost certainly isn't. The methods of Feynman and Schwinger showed that the perturbation theory of quantum electrodynamics is fully consistent, order by order, but the theory itself was convincingly argued to be no good by Landau. Landau's argument is that any charge is screened by virtual electrons and positrons, so that the bare charge is bigger than the charge you see. But in order to get a finite renormalized charge, the bare charge has to go to infinity at some small but nonzero distance scale. The argument is not rigorous, but an exactly analogous thing can be seen to happen numerically in the Ising model.
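The standard one-loop version of this argument (a sketch, not Landau's original derivation) reads

$$ \frac{1}{e^2(\mu)} \;=\; \frac{1}{e^2(\mu_0)} \;-\; \frac{1}{6\pi^2}\,\ln\frac{\mu}{\mu_0}, $$

so the running charge diverges at the finite scale $\mu_L = \mu_0\, e^{6\pi^2/e^2(\mu_0)}$ (the Landau pole): keeping the renormalized charge finite at ordinary scales forces the bare charge to blow up at a small but nonzero distance.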

The methods of Kadanoff, Wilson, Fisher and others make it clear that there is a path to defining (bosonic, real-action) quantum field theory which is completely fine. This method identifies the continuum limit of a quantum field theory with a second-order phase transition in the parameter space of a Euclidean lattice action: all properties of the continuum limit are determined by tuning the parameters close enough to the transition.
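In formulas (a standard sketch of this picture, in my notation): if $\xi(g)$ is the correlation length of the lattice model measured in lattice units, the physical mass scale is

$$ m_{\rm phys} \;=\; \frac{1}{a\,\xi(g)}, $$

and taking the lattice spacing $a \to 0$ at fixed $m_{\rm phys}$ forces $\xi(g) \to \infty$, which is exactly the statement that the couplings must be tuned to a second-order transition $g \to g_c$.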

This path, however, has not been made rigorous yet, and likely requires a few new mathematical ideas to prove that the limit exists. The new ideas are being formulated now, and there is some disagreement over what they are. What follows is my own opinion.

Free fields and measures

To define free field theory is trivial: you pick every Fourier mode of the field to be a Gaussian random variable with variance equal to the propagator (the inverse of the quadratic form in the action). That's it. There's nothing more to it. (For abelian gauge theory you first need to fix a gauge, and nonabelian gauge theory is never free.)
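Here is a minimal sketch of this picking procedure for a free massive scalar on a periodic one-dimensional lattice (my choice of dimension, lattice propagator, and normalization, purely illustrative):

```python
import numpy as np

# Minimal sketch: sample a free massive scalar field on a periodic 1D
# lattice.  Each Fourier mode is an independent Gaussian whose variance
# is the lattice propagator G(k) = 1/(m^2 + 2 - 2*cos k); reality of the
# field in position space is enforced by conjugate-symmetrizing modes.
N, m = 256, 0.5                             # sites and mass (lattice units)
k = 2 * np.pi * np.fft.fftfreq(N)           # allowed lattice momenta
G = 1.0 / (m**2 + 2.0 - 2.0 * np.cos(k))    # propagator = mode variance

rng = np.random.default_rng(0)
z = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(G / 2)
z = (z + np.conj(np.roll(z[::-1], 1))) / np.sqrt(2)  # phi(-k) = conj(phi(k))

phi = np.sqrt(N) * np.fft.ifft(z).real      # one random field configuration
print(phi.var(), G.mean())                  # <phi(x)^2> ~ (1/N) sum_k G(k)
```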

Already here there is a problem. Mathematicians do not allow random picking algorithms to define a measure on infinite-dimensional spaces, because if you are allowed to pick at random inside a set, every subset acquires a measure, and this contradicts the axiom of choice. Mathematicians want to keep choice, so they do not allow this natural measure theory, and there's no reason to go along with this kind of muddleheadedness on their part.

The principle: if you have a stochastic algorithm that picks a random element of a set S, then this algorithm suffices to define a measure on S, and every subset U of S has a measure, equal to the probability that the algorithm picks an element of U.

This principle fails within standard set-theoretic mathematics even for the most trivial random process: flipping coins to uniformly pick the binary digits of a random number between 0 and 1. The probability that this number lands in a "non-measurable set" is ill-defined. This is nonsense, of course: there are no non-measurable sets, and the picking process proves this. But in order to make the argument rigorous, you have to reject the axiom of choice, which is what constructs them. The random picking argument is called "random forcing" within mathematics, and outside of random-forcing models, probability theory is convoluted and unnatural, because you have to deal with non-measurable sets.
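A toy version of this coin-flipping process (my own illustrative example; the helper random_unit and the set U are hypothetical names):

```python
import random

# Toy version of "random picking defines measure": generate a uniform
# x in [0, 1) digit by digit with coin flips, and estimate the measure
# of a set U as the fraction of the picks that land in U.
def random_unit(bits=53):
    return sum(random.getrandbits(1) * 2.0**-i for i in range(1, bits + 1))

in_U = lambda x: x * (1 - x) > 0.21          # example set: (0.3, 0.7)
n = 100_000
hits = sum(in_U(random_unit()) for _ in range(n))
print(hits / n)                              # ~ 0.4, the Lebesgue measure
```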

Interacting fields

For interacting fields, the required result is that there are only finitely many repelling local directions in the space of actions near the second-order transition under rescaling (renormalization group transformations). This theorem is difficult to prove rigorously, and the heuristic arguments can be turned into proofs only in certain cases.
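A baby example of such a renormalization group transformation, where everything can be done exactly (the 1D Ising decimation, a standard textbook case I chose for illustration; it proves nothing about 4D):

```python
import numpy as np

# Exact decimation RG for the 1D Ising model: summing out every other
# spin maps the coupling K = J/kT to K' = (1/2) ln cosh(2K).  K = 0 is
# the attracting fixed point, K = infinity the repelling one, and
# temperature is the single relevant ("repelling") direction.
def rg_step(K):
    return 0.5 * np.log(np.cosh(2.0 * K))

for K0 in (0.5, 1.0, 2.0):
    flow, K = [], K0
    for _ in range(6):
        flow.append(K)
        K = rg_step(K)
    print(f"K0 = {K0}: " + " -> ".join(f"{x:.4f}" for x in flow))
# All couplings flow toward K = 0: coarse-graining drives the system
# away from the (zero-temperature) critical point.
```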

The construction of interacting theories in the literature is mostly restricted to resummations of perturbation theory, and is useless. Resumming perturbation series is in no way going to work for theories with nontrivial vacua (at least not the way people do it).