Why should the Standard Model be renormalizable?

The short answer is that it doesn't have to be, and it probably isn't. The modern way to understand any quantum field theory is as an effective field theory. The theory includes all renormalizable (relevant and marginal) operators, which give the largest contribution to any low-energy process. When you are interested in either high precision or high-energy processes, you have to systematically include non-renormalizable terms as well, which come from some more complete theory.
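
Schematically (this is the generic effective-field-theory expansion, not tied to any particular UV completion), the low-energy Lagrangian reads
$$\mathcal{L}_{\rm eff}=\mathcal{L}_{\rm renormalizable}+\sum_i \frac{c_i}{\Lambda^{d_i-4}}\,\mathcal{O}_i ,$$
where the $\mathcal{O}_i$ are operators of mass dimension $d_i>4$, $\Lambda$ is the scale of the more complete theory, and the dimensionless coefficients $c_i$ are whatever that theory produces. At energies $E\ll\Lambda$ each such term is suppressed by roughly $(E/\Lambda)^{d_i-4}$, which is why the renormalizable terms dominate low-energy processes.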

Back in the days when the standard model was constructed, people did not have a good appreciation of effective field theories, and thus renormalizability was imposed as a deep and not completely understood principle. This is one of the difficulties in studying QFT: it has a long history that includes ideas which were later superseded (plenty of other examples: relativistic wave equations, second quantization, and a whole bunch of misconceptions about the meaning of renormalization). But now we know that any QFT, including the standard model, is expected to have these higher-dimensional operators. By measuring their effects you get some clue about the high-energy scale at which the standard model breaks down. So far, it looks like a really high scale.


The Standard Model just happens to be perturbatively renormalizable, which is an advantage, as I will discuss below; non-perturbatively, one would find that the Higgs self-interaction and/or the hypercharge $U(1)$ interaction get stronger at higher energies and run into inconsistencies such as Landau poles at extremely high, trans-Planckian energy scales.
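
To see where such a Landau pole would sit, take a generic one-loop running coupling (the beta-function coefficient $b>0$ depends on the particle content, so treat this as a schematic estimate rather than the precise Standard Model calculation):
$$\frac{dg}{d\ln\mu}=\frac{b\,g^3}{16\pi^2}\quad\Longrightarrow\quad \frac{1}{g^2(\mu)}=\frac{1}{g^2(\mu_0)}-\frac{b}{8\pi^2}\ln\frac{\mu}{\mu_0},$$
which blows up at $\mu_{\rm Landau}=\mu_0\exp\!\left[8\pi^2/\big(b\,g^2(\mu_0)\big)\right]$. Because the hypercharge and Higgs self-couplings are weak at accessible energies, the exponent is huge and the pole lands far above the Planck scale.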

But the models where the Higgs scalar is replaced by a more convoluted mechanism are not renormalizable. That's not a lethal problem because the theory may still be used as a valid effective theory. And effective theories can be non-renormalizable - they have no reason not to be.

The reason physicists prefer renormalizable field theories is that they are more predictive. A renormalizable field theory's predictions depend only on a finite number of low-energy parameters that may be determined by comparison with experiments. Because a renormalizable theory with fixed low-energy parameters such as couplings and masses may be uniquely extrapolated to arbitrarily high scales (where it remains predictive), it also follows that if we postulate that new physics only occurs at some extremely high cutoff scale $\Lambda$, all effects of that new physics are suppressed by positive powers of $1/\Lambda$.
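
As a rough illustration (the numbers are chosen only for the sake of the estimate), a dimension-6 operator with an order-one coefficient modifies a low-energy amplitude by a relative amount of order $E^2/\Lambda^2$:
$$\frac{\delta\mathcal{A}}{\mathcal{A}}\sim\left(\frac{E}{\Lambda}\right)^{2}\sim\left(\frac{10^{2}\ {\rm GeV}}{10^{16}\ {\rm GeV}}\right)^{2}=10^{-28},$$
which is hopelessly unobservable; a high enough cutoff makes the renormalizable part look like the whole story.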

This assumption makes life manageable, and it's been true in the case of QED. However, nothing guarantees that we "immediately" get the right description that is valid up to an arbitrarily high energy scale. By studying particle physics at ever higher energy scales, we may equally well unmask just another layer of the onion that breaks down at slightly higher energies and needs to be fixed by yet another layer.

My personal guess is that it is more likely than not that any important extra fields or couplings we identify at low energies will indeed be described by a renormalizable field theory. The reason is the following: if we find a valid effective description at an energy scale $E_1$ that happens to be non-renormalizable, it breaks down at a slightly higher energy scale $E_2$ where new physics completes it and fixes the problem. This scenario therefore implies that $E_1$ and $E_2$ have to be pretty close to one another. On the other hand, they must be "far apart", because we only managed to uncover the physics at the lower scale $E_1$.

The little Higgs models serve as a good example of how this argument can be evaded. They adjust things - by using several gauge groups etc. - to separate the scales $E_1$ and $E_2$, so that they only describe what's happening at $E_1$ while ignoring what's happening at $E_2$, which fixes the problems at $E_1$. I consider this trick a form of tuning that is exactly as undesirable as the "little hierarchy problem" that was an important motivation for these models in the first place.

History has a mixed record: QED remained essentially renormalizable. The electroweak theory may be completed, step by step, to a renormalizable theory (e.g. by the tree-unitarity arguments). QCD is renormalizable, too. However, it's important to mention that the weak interactions used to be described by the Fermi-Gell-Mann-Feynman four-fermion interaction, which was non-renormalizable. The separation of the scales $E_1$ and $E_2$ in my argument above occurs because particles such as neutrons - which beta-decay - are still much lighter than the W-bosons that were later found to underlie the four-fermion interaction. This separation guaranteed that the W-bosons were found decades after the four-fermion interaction, and it ultimately depends on the up- and down-quark Yukawa couplings' being much smaller than one. If the world were "really natural", such hierarchies of the couplings would be almost impossible, my argument would hold, and almost all valid theories that people would uncover by raising the energy scale would be renormalizable.
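
The four-fermion example can be made quantitative. Integrating out the W-boson at tree level (a textbook matching; signs and normalizations depend on conventions) reproduces Fermi's non-renormalizable dimension-6 interaction with a coefficient suppressed by the heavy mass:
$$\mathcal{L}_{\rm Fermi}\;\supset\;\frac{4G_F}{\sqrt{2}}\,(\bar\psi_1\gamma^\mu P_L\psi_2)(\bar\psi_3\gamma_\mu P_L\psi_4),\qquad \frac{G_F}{\sqrt{2}}=\frac{g^2}{8M_W^2},$$
so the measured strength $G_F\approx 1.2\times 10^{-5}\ {\rm GeV}^{-2}$ directly encodes the scale $M_W\approx 80\ {\rm GeV}$ at which the effective description had to be completed.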

General relativity is a big example on the non-renormalizable side, and it will remain so because the right theory describing quantum gravity is not and cannot be a local quantum field theory according to the old definitions. As one approaches the Planck scale, the importance of non-renormalizable effective field theories clearly increases because there is no reason why they should be valid up to much higher energy scales - at the Planck scale, they're superseded by a quantum theory of gravity that is not a field theory.

All the best, LM


The standard model is renormalizable because of the enormous gap in energy between the scale of accelerator physics and the scale of Planck/GUT physics. That this gap is real is attested to by the smallness of all non-renormalizable corrections to the standard model.

  • Neutrino masses: these come from a dimension-5 operator, so they are very sensitive to new physics. The measured masses are consistent with a GUT-scale suppression of non-renormalizable terms, and immediately rule out large extra dimensions (a rough estimate is sketched after this list).
  • Strong CP: The strong interactions are CP invariant only because there are no non-renormalizable interactions of quarks and gluons, or direct quark-quark-lepton-lepton interactions. Even the renormalizable theta-angle leads to a strong CP problem.
  • Proton decay: If the standard model fails at a low scale, the proton will decay. The decay of the proton is impossible to suppress completely because it is required by standard model anomaly cancellation, so you have to allow the SU(2) instanton to link quarks and leptons for sure. If you try to make a theory with large extra dimensions, you can do some tricks to suppress proton decay, but they require the SU(2) and U(1) couplings to start to run like crazy below a TeV.
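
To put a number on the neutrino-mass point (the standard dimension-5 estimate, assuming an order-one coefficient and $\langle H\rangle\approx 174\ {\rm GeV}$): the lowest-dimension operator you can add to the standard model is the Weinberg operator, and after electroweak symmetry breaking it gives
$$\frac{c}{\Lambda}(LH)(LH)\;\longrightarrow\; m_\nu\sim \frac{c\,\langle H\rangle^2}{\Lambda}\approx\frac{(174\ {\rm GeV})^2}{10^{15}\ {\rm GeV}}\approx 0.03\ {\rm eV},$$
right in the range indicated by oscillation data when $\Lambda$ is near the GUT scale, and far too large if $\Lambda$ were anywhere near a TeV.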

These observed facts mean that there is a real desert between a TeV and the GUT scale. There are also these much weaker constraints, which are enough to rule out TeV-scale non-renormalizability:

  • Muon magnetic moment: the scale of the observed anomaly is that expected from extra charged particles, not from a fundamental muon Pauli term. If the non-renormalizability scale were a TeV, the Pauli term would be much larger than the experimental error without some fine-tuning (see the rough estimate after this list).
  • Flavor-changing neutral currents: these also require some fine-tuning to work with a low non-renormalizability scale, but I don't know the details very well, so I will defer to the literature.
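
For the muon point, the relevant non-renormalizable term is a dimension-6 dipole (Pauli) operator. A crude power-counting estimate, ignoring order-one coefficients and any loop factors a specific model would supply, gives
$$\delta a_\mu \sim \frac{m_\mu\, v}{\Lambda^2}\approx\frac{0.1\ {\rm GeV}\times 246\ {\rm GeV}}{(1\ {\rm TeV})^2}\approx 2\times 10^{-5},$$
several orders of magnitude larger than the experimental error on $a_\mu$, so a TeV non-renormalizability scale would need its coefficient tuned far below its natural size.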

Around 2000, incompetent physicists started to argue that this is a small number of problems, and that really, we don't know anything at all. In fact, the reason theorists broke their heads to find a renormalizable model is that they knew that such a model would be essentially accurate up to arbitrarily large energies, and would be a real clue to the Planck scale.