What are the justifying foundations of statistical mechanics without appealing to the ergodic hypothesis?

The ergodic hypothesis is not part of the foundations of statistical mechanics. In fact, it only becomes relevant when you want to use statistical mechanics to make statements about time averages. Without the ergodic hypothesis, statistical mechanics makes statements about ensembles, not about any one particular system.

To understand this answer, you have to understand what a physicist means by an ensemble. It is the same thing as what a mathematician calls a probability space. The “Statistical ensemble” Wikipedia article explains the concept quite well. It even has a paragraph explaining the role of the ergodic hypothesis.

The reason some authors make it look as if the ergodic hypothesis were central to statistical mechanics is that they want to justify their interest in the microcanonical ensemble. The justification they give is that the ergodic hypothesis holds for that ensemble when the system spends time in any region of the accessible phase space in proportion to the volume of that region. But that is not central to statistical mechanics: statistical mechanics can be done with other ensembles, and furthermore there are other ways to justify the canonical ensemble; for example, it is the ensemble that maximises entropy for a given mean energy.
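
For concreteness, here is a sketch of that maximum-entropy argument (the notation is mine): maximise the Gibbs entropy over probability distributions, subject to normalisation and a fixed mean energy,
$$
S[p] = -\sum_i p_i \ln p_i , \qquad \sum_i p_i = 1 , \qquad \sum_i p_i E_i = \langle E \rangle .
$$
Introducing Lagrange multipliers for the two constraints and setting $\partial S/\partial p_i = 0$ gives
$$
p_i = \frac{e^{-\beta E_i}}{Z} , \qquad Z = \sum_i e^{-\beta E_i} ,
$$
i.e. the canonical distribution, without any reference to the time evolution of a single system.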

A physical theory is only useful if it can be compared to experiments. Statistical mechanics without the ergodic hypothesis, which makes statements only about ensembles, is therefore only useful if you can make measurements on the ensemble. This means it must be possible to repeat an experiment again and again, with the frequency of obtaining particular members of the ensemble determined by the probability distribution of the ensemble that you used as the starting point of your statistical mechanics calculations.

Sometimes, however, you can only experiment on one single sample from the ensemble. In that case statistical mechanics without an ergodic hypothesis is not very useful because, while it can tell you what a typical sample from the ensemble would look like, you do not know whether your particular sample is typical. This is where the ergodic hypothesis helps: it states that the time average taken in any particular sample is equal to the ensemble average. Statistical mechanics allows you to calculate the ensemble average. If you can make measurements on your one sample over a sufficiently long time, you can take the time average and compare it to the predicted ensemble average, and hence test the theory.
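
Written as a formula (my notation), the hypothesis asserts that for an observable $f$ and (almost) every initial condition $x_0$,
$$
\lim_{T\to\infty}\frac{1}{T}\int_0^T f\bigl(x(t)\bigr)\,\mathrm{d}t \;=\; \int f(x)\,\rho(x)\,\mathrm{d}x ,
$$
where $x(t)$ is the trajectory starting from $x_0$ and $\rho$ is the probability density of the ensemble. The left-hand side is what you can measure on your single sample over a long time; the right-hand side is what statistical mechanics computes.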

So in many practical applications of statistical mechanics the ergodic hypothesis is very important, but it is not fundamental to statistical mechanics, only to its application to certain sorts of experiments.

In this answer I took the ergodic hypothesis to be the statement that ensemble averages are equal to time averages. To add to the confusion, some people say that the ergodic hypothesis is the statement that the time a system spends in a region of phase space is proportional to the volume of that region. These two are the same when the ensemble chosen is the microcanonical ensemble.
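
In symbols (again my notation), the second version says that for any region $A$ of the accessible phase space $\Gamma_E$,
$$
\lim_{T\to\infty}\frac{1}{T}\int_0^T \mathbf{1}_A\bigl(x(t)\bigr)\,\mathrm{d}t \;=\; \frac{\operatorname{vol}(A)}{\operatorname{vol}(\Gamma_E)} .
$$
This is just the previous statement with $f$ taken to be the indicator function of $A$ and the ensemble taken to be microcanonical, i.e. uniform on $\Gamma_E$, which is why the two formulations coincide precisely for the microcanonical ensemble.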

So, to summarise: the ergodic hypothesis is used in two places:

  1. To justify the use of the microcanonical ensemble.
  2. To make predictions about the time average of observables.

Neither is central to statistical mechanics, because 1) statistical mechanics can be, and is, done with other ensembles (for example those determined by stochastic processes), and 2) one often does experiments with many samples from the ensemble rather than with time averages of a single sample.


As for references to other approaches to the foundations of statistical physics, you can have a look at the classic paper by Jaynes; see also, e.g., this paper (in particular section 2.3), where he discusses the irrelevance of ergodic-type hypotheses as a foundation of equilibrium statistical mechanics. Of course, Jaynes' approach also suffers from a number of deficiencies, and I think one can safely say that the foundational problem of equilibrium statistical mechanics is still wide open.

You may also find it interesting to look at this paper by Uffink, where most of the modern (and ancient) approaches to this problem are described, together with their respective shortcomings. This will provide you with many more recent references.

Finally, if you want a mathematically more thorough discussion of the role of ergodicity (properly interpreted) in the foundations of statistical mechanics, you should have a look at Gallavotti's Statistical Mechanics: A Short Treatise (Springer-Verlag, 1999), in particular Chapters I, II and IX.

EDIT (June 22 2012): I just remembered about this paper by Bricmont that I read long ago. It's quite interesting and a pleasant read (like most of what he writes): Bayes, Boltzmann and Bohm: Probabilities in Physics.


I searched the other answers for "mixing" and did not find it. But this is the key point: ergodicity is largely irrelevant, whereas mixing is the property that makes equilibrium statistical physics tick for many-particle systems. See, e.g., Sklar's Physics and Chance or Jaynes' papers on statistical physics.
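
For the record, mixing (in the measure-theoretic sense; my notation) is the statement that for any two measurable sets $A$, $B$ in phase space,
$$
\lim_{t\to\infty}\mu\bigl(\phi^{-t}A \cap B\bigr) \;=\; \mu(A)\,\mu(B) ,
$$
where $\phi^t$ is the flow and $\mu$ the invariant (e.g. microcanonical) measure. Mixing implies ergodicity but is strictly stronger; it is the property that lets an initially non-equilibrium distribution relax, in the weak sense, towards the equilibrium ensemble.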

The chaotic hypothesis of Gallavotti and Cohen basically suggests that the same holds true for non-equilibrium steady states (NESSs).