Have we figured out how to analyze turbulent fluids?

Progress in turbulence has come in fits and starts, and the field has been very active in the last few years, due to the influence of AdS/CFT. I think it will be solved soon, but this opinion was shared by many in previous generations, and may be much too optimistic.

Navier-Stokes equations

The basic equations of motion for turbulent flows have been known since the 19th century. The fluid velocity obeys the incompressible Navier-Stokes equations:

$$ \dot v_i + v_j \partial_j v_i + \partial_i P = \nu \partial_j \partial_j v_i $$ and

$$ \partial_j v_j = 0 $$

where repeated indices are summed, and the units of mass are chosen to normalize the fluid density to 1.

Each of the terms is easy to understand. The nonlinear term gives the advection: it says that the force accelerates the fluid as you move along with the flow, not at one fixed position x. The pressure term P is just a constraint force that enforces incompressibility; it is determined by taking the divergence of the equation and enforcing $\partial_i v_i = 0$. This determines the Laplacian of the pressure:

$$ \partial_i v_j \partial_j v_i + \partial_i \partial_i P = 0$$

The friction term says that in addition to moving along with itself and bending to keep the density constant, the velocity diffuses with a diffusion constant $\nu$. In the limit $\nu=0$, you get the Euler equations, which describe hydrodynamics in the absence of friction.

With any appropriate boundary conditions, like a periodic box or vanishing velocities at infinity, the pressure equation determines the pressure from the velocity. The equations can then be solved on a grid, and the future is determined from the past.
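
To make the grid statement concrete, here is a minimal Python sketch (my illustration, not any particular solver) of one time step for the 2d periodic box, using a Fourier pseudospectral discretization; the grid size, viscosity, and time step are arbitrary choices, and a serious code would add dealiasing and a higher-order integrator. Note that the pressure never appears explicitly: projecting each Fourier mode transverse to $k$ implements exactly the Poisson constraint above.

    import numpy as np

    N = 64            # grid points per side (illustrative choice)
    nu = 1e-3         # viscosity (illustrative choice)
    dt = 1e-3         # time step (no stability analysis here)

    # Integer wavenumbers for a [0, 2*pi)^2 periodic box.
    k = np.fft.fftfreq(N, d=1.0 / N)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    K2 = KX**2 + KY**2
    K2_safe = np.where(K2 == 0, 1.0, K2)   # avoid 0/0 at the mean mode

    def project(fx_hat, fy_hat):
        # The pressure step: solving lap P = -div(v . grad v) and subtracting
        # grad P is the same as removing the longitudinal part of the force,
        # mode by mode.
        div = (KX * fx_hat + KY * fy_hat) / K2_safe
        return fx_hat - KX * div, fy_hat - KY * div

    def step(vx, vy):
        # One explicit Euler step of the incompressible NS equations.
        vx_hat, vy_hat = np.fft.fft2(vx), np.fft.fft2(vy)
        # Advection term -(v . grad) v, evaluated in physical space.
        dvxdx = np.real(np.fft.ifft2(1j * KX * vx_hat))
        dvxdy = np.real(np.fft.ifft2(1j * KY * vx_hat))
        dvydx = np.real(np.fft.ifft2(1j * KX * vy_hat))
        dvydy = np.real(np.fft.ifft2(1j * KY * vy_hat))
        ax = np.fft.fft2(-(vx * dvxdx + vy * dvxdy))
        ay = np.fft.fft2(-(vx * dvydx + vy * dvydy))
        ax, ay = project(ax, ay)           # pressure enforces incompressibility
        vx_hat = vx_hat + dt * (ax - nu * K2 * vx_hat)
        vy_hat = vy_hat + dt * (ay - nu * K2 * vy_hat)
        return np.real(np.fft.ifft2(vx_hat)), np.real(np.fft.ifft2(vy_hat))

    # Start from a random solenoidal field and march forward.
    rng = np.random.default_rng(0)
    vx, vy = rng.standard_normal((N, N)), rng.standard_normal((N, N))
    vx_hat, vy_hat = project(np.fft.fft2(vx), np.fft.fft2(vy))
    vx, vy = np.real(np.fft.ifft2(vx_hat)), np.real(np.fft.ifft2(vy_hat))
    for _ in range(100):
        vx, vy = step(vx, vy)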

The Clay problem has nothing to do with turbulence

The problem of showing that the limit as the grid spacing goes to zero is everywhere sensible and smooth is far from trivial. It is one of the Clay institute million dollar prize problems. The reason this is nontrivial has nothing to do with turbulence, but with the much more elementary Reynolds scaling.

There is a scale invariance in the solution space, as described on Terence Tao's blog. The classical Reynolds scaling says that if you take any incompressible fluid flow and make it twice as small and twice as fast, you get a second flow which is also a solution. You can imagine a fluid flow which generates a smaller, faster copy of itself, and so on down, eventually producing a singular spot where the flow is infinitely fast and infinitely small--- a singularity.
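
Concretely, the scaling symmetry is the statement that if $v(x,t)$, $P(x,t)$ solve the equations with viscosity $\nu$, then so do the rescaled fields

$$ v_\lambda(x,t) = \lambda\, v(\lambda x, \lambda^2 t), \qquad P_\lambda(x,t) = \lambda^2 P(\lambda x, \lambda^2 t), $$

with the same $\nu$: every term in the equation picks up the same factor $\lambda^3$. Taking $\lambda = 2$ is the "twice as small, twice as fast" statement, and iterating it indefinitely is the hypothetical singular flow.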

This type of singularity has a vanishingly small energy in 3d, because the volume shrinks faster than the kinetic energy density blows up. This is both good and bad--- it's bad for mathematicians, because it means that you can't use a simple energy bound to forbid this type of divergence. It is good for physics, because it means that these types of blowups, even if they occur, are completely irrelevant little specks that don't affect the big-picture motion, where the turbulence is happening. If they occur, they only affect measure-zero spots at tiny distances, and they would be resolved by new physics, such as a stronger hyperviscosity, which would make them decay to something smooth before they blow up. They do not lead to a loss of predictability outside of a microscopic region, because there is a Galilean symmetry which decouples large-scale flows from small-scale flows. A big flow doesn't care about a spot divergence, it just advects the divergence along. This isn't rigorous mathematics, but it is obvious in the physical sense, and should not make anyone studying turbulence lose sleep over existence/uniqueness.

When you replace the velocity diffusion with a faster damping, called "hyperviscosity", you can prove existence and uniqueness. But the problem of turbulence is unaffected by the hyperviscosity, or even by the ordinary viscosity. It is all happening in the Euler regime--- well before the viscosity kicks in. This is another reason to be sure that the Clay problem is irrelevant.

If I were writing the Clay problem, I would not have asked for existence/uniqueness. I would have asked for a statistical distribution on differential velocity fields which is an attracting steady state for long-wavelength stirred NS flow. This is a much more difficult, and much more important problem, because it is the problem of turbulence. Further, if such a distribution exists, and if it is attracting enough, it might demonstrate that the NS equations have a smooth solution away from a measure zero set of initial conditions. The attracting fixed point will certainly have exponential decay of the energy in the viscous regime, and if everything gets close to this, everything stays smooth.

Why Turbulence?

Horace Lamb, a well known 19th century mathematical physicist, quipped as an old man that when he got to heaven, he would ask God two questions: "Why relativity? And why turbulence?" He added that he was optimistic about getting a good answer to the first question.

I think he should have been optimistic about the second too. The reason for turbulence is already clear in the ultraviolet catastrophe of classical statistical mechanics. Whenever you have a classical field, the equipartition of energy means that all the energy ends up concentrated in the shortest wavelength modes, for the simple reason that there are just a boatload more short-wavelength modes than long-wavelength modes. This means that it is impossible to reach equilibrium between classical particles and classical fields: the fields suck all the energy down to the shortest distance scales.

But in most situations, there are motions which can't easily transfer energy to short distances directly, because these motions are protected by conservation laws. For example, a sound wave looks locally like a translation of the crystal, which means it can't dump energy into short modes immediately; it takes a while. For sound, there is a gradual attenuation which vanishes at long wavelengths, but the attenuation is real: it is an energy flow from the long-wavelength mode to the shortest wavelength modes in one step.

But in other field theories, the energy flow is more local in $k$-space. The analog of sound-wave friction in Navier-Stokes is the attenuation of a velocity due to viscosity. This is a diffusion process: the time to damp out velocity variations on a scale $r$ grows as $r^2/\nu$. If you have a term which mixes up modes nonlinearly and scales better at long distances, taking less time to move energy to shorter-wavelength modes than the one-step diffusive dissipation process, it will dominate at long distances.
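
To see the competition quantitatively (a standard order-of-magnitude estimate): for a velocity variation $\delta v(r)$ on scale $r$, the nonlinear term transfers energy in an eddy turnover time $r/\delta v(r)$, while diffusion needs a time $r^2/\nu$, so

$$ \frac{t_{\rm diffusive}}{t_{\rm nonlinear}} = \frac{r^2/\nu}{r/\delta v(r)} = \frac{r\, \delta v(r)}{\nu} = \mathrm{Re}(r), $$

and whenever this scale-dependent Reynolds number is large, the nonlinearity wins: viscosity only matters once the cascade reaches scales where $\mathrm{Re}(r) \sim 1$.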

Further, if this is an energy-conserving polynomial nonlinear term, the mixing will generally be between nearby scales. The reason is the additivity of wave-vectors under multiplication: a quadratic term with a derivative (as in the Navier-Stokes equation) produces new wavenumbers in the range of the sums of the wavenumbers of the original motion, as the Fourier form below makes explicit.
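
In Fourier components, the locality is explicit: the quadratic term couples a mode $k$ only to pairs of modes that add up to it,

$$ (v \cdot \nabla v)_k = \sum_{k_1 + k_2 = k} (v_{k_1} \cdot i k_2)\, v_{k_2}, $$

so eddies of a given size mostly feed eddies of comparable size, and energy moves through $k$-space in steps of order one in $\log k$.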

So there must be a local flow of energy into higher wavenumbers, just from ultraviolet-catastrophe mode-counting, and this flow of energy must be sort-of local (local in log-space) because of the wavenumber additivity constraint. The phenomenon of turbulence occurs in the regime where this energy flow, called the (downward) cascade, dominates the dynamics, and the friction term is negligible.

Kolmogorov theory

The first big breakthrough in the study of turbulence came with Kolmogorov, Heisenberg, Obukhov, and Onsager in the wartime years. The wartime breakdown in scientific communication means that these results were probably arrived at independently.

The theory that emerged is generally called K41 (for Kolmogorov 1941), and it is the zeroth order description of turbulence. In order to describe the cascade, Kolmogorov assumed that there is a constant energy flux downward, called $\epsilon$, that it terminates at the regime where viscosity kicks in, and that there are many decades of local-in-$k$-space flow between the pumping region where you drive the fluid and the viscous region where you drain the energy.

The result is a definite statistical distribution of energy over the modes. Kolmogorov gave a dimensional argument for this distribution, which roughly fit the measurement accuracy at the time.

From the scaling law, all the correlation functions of the velocity could be extracted; the spectrum is the Kolmogorov-Obukhov $-5/3$ law, and constant flux also gives one exact relation, Kolmogorov's $4/5$ law. These relations were believed to solve the problem for a decade.
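
The dimensional argument fits in one line: the flux $\epsilon$ has units $L^2/T^3$ and the spectrum $E(k)$ has units $L^3/T^2$, so the unique combination is

$$ E(k) = C\, \epsilon^{2/3} k^{-5/3}, $$

while the exact relation is the $4/5$ law for the third-order longitudinal structure function, $\langle (\delta v_\parallel(r))^3 \rangle = -\tfrac{4}{5}\, \epsilon\, r$, which follows directly from constant energy flux.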

2D turbulence

In 2D, a remarkable phenomenon was predicted by Kraichnan--- the inverse cascade. The generic ultraviolet argument assumes that the motion is ergodic on the energy surface, and this requires that there are no additional conservation laws. But in 2d, the flow conserves the integrated square of the vorticity, called the enstrophy. The enstrophy $U$ is

$$U = \int |\nabla \times v|^2 \, d^2x $$

This has two more derivatives than the energy, so it grows faster with $k$. If you make a statistical Boltzmann distribution for $v$ at constant energy and constant enstrophy, the high $k$ modes are strongly suppressed because they have a huge enstrophy. This means that you can't generate high $k$ modes starting from low $k$ modes.

Instead, you find more freedom at low $k$! The energy cascade generically goes up instead of down, because at longer wavelengths you can spread the energy over more motions with the same initial enstrophy; the enstrophy cost of a mode vanishes at small $k$. This is the inverse cascade, and it was predicted theoretically by Kraichnan in 1967.
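
Kraichnan's counting can be made explicit with the absolute-equilibrium distribution of the truncated system: with Boltzmann weight $e^{-\alpha E - \beta U}$ fixing energy and enstrophy, each mode is Gaussian with

$$ \langle |v_k|^2 \rangle \propto \frac{1}{\alpha + \beta k^2}, \qquad E(k) \propto \frac{k}{\alpha + \beta k^2}, $$

so the high-$k$ modes are suppressed by the $\beta k^2$ enstrophy cost, and for $\alpha < 0$ (a negative-temperature state) the energy piles up in the lowest modes.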

The inverse cascade is remarkable, because it violates the ultraviolet-catastrophe intuition. It has been amply verified by simulations and by experiments in approximately 2d flows. It provides an explanation for the emergence of large-scale structure in the atmosphere, like hurricanes, which are amplified by the surrounding turbulent flows rather than decaying. It is the most significant advance in turbulence theory since K41.

Modern theory

I will try to review the recent literature, but I am not familiar with much of it, and it is a very deep field, with many disagreements between various camps. There are also very many wrong results, unfortunately.

A big impetus for modern work comes from the analysis of turbulent flows in new systems analogous to fluids. The phenomenon of turbulence should occur in any nonlinear equation, and the cascade picture should be valid whenever the interactions are reasonably approximated by polynomials which are local in log-$k$ space.

One place where this is studied heavily is in cosmology, in models of preheating. The field which is doing the turbulence here is a scalar inflaton (or fields coupled to the inflaton) which transfers energy in a cascade to eventually produce standard model particles.

Another place where this is studied is in quark-gluon plasmas. These fluids have a flow regime which is related to a gravitational dual by AdS/CFT, so that turbulent flows acquire a classical gravitational counterpart in the membrane-paradigm description of black holes. Yaron Oz is one of the people working on this.

One of the most astonishing results of the past few years is the derivation by Oz and collaborators of exact laws of turbulent scaling from conservation principles alone, without a full-blown cascade assumption. See http://arxiv.org/abs/0909.3404 and http://arxiv.org/abs/0909.3574

Kraichnan model

Kraichnan gave an interesting model for the advection of a passive scalar field by a turbulent flow; the physical picture is a dust particle (or a drop of dye) carried along by the fluid.

This is important, because the advected particle makes a Lévy flight, not a Brownian motion. This has been verified experimentally, and it is also important because it gives a qualitative explanation for intermittency.
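
The cleanest quantitative statement of the non-Brownian behavior is Richardson's law for the separation of two advected particles: in the inertial range, the mean square separation grows superdiffusively,

$$ \langle |r(t)|^2 \rangle \sim \epsilon\, t^3, $$

rather than the Brownian $\langle r^2 \rangle \sim t$; the occasional big jumps that produce this fast spreading are the Lévy-flight behavior.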

Lévy flights tend to cluster in regions before moving on by a big jump. The velocity advects itself much as it advects a dust particle, so if the dust is doing a Lévy flight, it is reasonable that the velocity is doing that too. This means that you expect velocity perturbations to concentrate in regions of isolated turbulence, and that this concentration should follow a well-defined power law, according to the scalar advection.

These ideas are related to the Mandelbrot model of multifractals. Mandelbrot gave this model to understand how it is that turbulent flows could have a velocity gradient which is concentrated in certain geometric regions. The model is qualitative, but the picture corrects the K41 exponents, which assume that the velocity is cascading homogeneously over all space.

Martin-Siggia-Rose formalism

The major advance in the renormalization approach to turbulence came in the 1970s, with the development of the Martin-Siggia-Rose formalism. This gave a way of formally describing the statistics of a classical stochastic equation using a Lagrange multiplier field which goes along for the ride in the renormalization analysis.
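
Schematically (in a standard textbook form, not the conventions of any particular paper), for a Langevin equation $\partial_t \phi = -F[\phi] + \eta$ with Gaussian forcing of covariance $D$, averaging over the noise gives the MSR path integral

$$ Z = \int D\phi\, D\hat\phi\; e^{-\int dt\, d^dx\, \left[ \hat\phi \left( \partial_t \phi + F[\phi] \right) - \hat\phi D \hat\phi \right]}, $$

where $\hat\phi$ is the Lagrange-multiplier (response) field. For stirred Navier-Stokes, $\phi$ is the velocity, $F$ contains the advection, pressure projection, and viscosity, and $D$ is the correlation of the stirring force; the pair $(\phi, \hat\phi)$ is then renormalized like any field theory.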

Forster, Nelson, and Stephen gave a classic analysis of the inverse-cascade problem in 3d: the problem of the long-wavelength profile of a fluid stirred at short distances. While this problem is not directly related to turbulence, it does have some connection, in that the statistical steady-state distribution requires taking into account interactions between neighboring modes, which do lead to a cascade.

The FNS fixed points include Kolmogorov-like spectra for some stirring forces, but there is no condition for the stirring forces to be at a renormalization group fixed point. Their analysis, however, remains the high point of the MSR formalism as applied to turbulence. This subject has been dormant for almost thirty years.

What remains to be done

The major unsolved problem is predicting the intermittency exponents--- the deviations from Kolmogorov scaling in the correlation functions of fully developed turbulence. These exponents are now known experimentally to two decimal places, I believe, and their universality has been verified extensively, so that the concept of a homogeneous statistical cascade makes sense.
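
In standard notation, the objects to compute are the structure-function exponents $\zeta_p$, defined by

$$ \langle |v(x+r) - v(x)|^p \rangle \sim r^{\zeta_p}. $$

K41 predicts $\zeta_p = p/3$; the measured curve bends below this line as $p$ grows, and that bending is the intermittency which a complete theory must reproduce.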

Deriving these exponents requires a new principle by which one can extract the statistical distribution of a nonlinearly interacting field from the equations of motion. There are formal solutions which get you nowhere, because they start far away from renormalization fixed points; nevertheless, every approach is illuminating in some way or another.

This is a terrible review, from memory, but it's better than nothing. Apologies to the neglected majority.


I'm not competent to review the literature for you, but one of the Clay Millennium prizes concerns the Navier-Stokes equations, which is part of what Feynman is talking about; so, to the extent that no one has claimed that particular prize, the answer is No.

One measure of how well we can deal with turbulent flow in practice is how much better we can predict the weather than we could 50 years ago: better, but not much better. The improvements are not only because we have faster computers (there is much more data collected, for example), but improvements in our understanding of turbulent flow have not made large qualitative changes in our ability to predict the weather.

If you haven't already looked at the Wikipedia page http://en.wikipedia.org/wiki/Weather_forecasting, it shows the paucity of theoretical input fairly clearly.

I found it interesting to see the range of disciplines covered by the UK Met office, at http://www.metoffice.gov.uk/research/our-scientists. There is a big difference between climate modelling and weather modelling because turbulence does not scale in simple ways, allowing and requiring different types of analysis of the data.

You might also look at the Wikipedia page for Turbulence. Again, "not much" is your answer.


The short answer is that the Navier-Stokes equation, which describes the motion of ordinary fluids, cannot be solved for turbulent flow unless certain simplifications are made. There are a number of reasons for this, some of which are described on this page. As computer power increases, eventually we should be able to solve the equation directly; this is what Feynman was looking for, I believe. In the meantime, we are happily building miles of pipes every year and transporting a wide range of turbulent fluids, using direct application of theory combined with empirical understanding based on laboratory experiments and observation. The engineers have work, and the physicists have a great problem to chew on.