I have recently published an important paper, *Some special cases of Khintchine's conjectures in statistical mechanics: approximate ergodicity of the auto-correlation function of an assembly of linearly coupled oscillators*, Revista Investigación Operacional, vol. 33, no. 3, pp. 99-113, 2012,
http://rev-inv-ope.univ-paris1.fr/files/33212/33212-01.pdf
which advances the state of knowledge about this question.

In a nutshell: one needs to justify the *conclusion* of the ergodic hypothesis without assuming the ergodic hypothesis itself. The desirability of doing this has been realised for a long time, but rigorous progress has been slow. Terminology: the **ergodic hypothesis** is that every path wanders through (or at least near) every point. This hypothesis is almost never true. The **conclusion of the ergodic hypothesis** is that, almost always, infinite time averages of an observable over a trajectory are (at least approximately) equal to the average of that observable over the ensemble. (Even if the ergodic hypothesis holds, the conclusion does not follow from it. Sorry, but this terminology has become standard, traditional, orthodox, and it is too late to change it.) The **ergodic theorem**: if there are no non-trivial invariant subspaces, then the conclusion of the ergodic hypothesis holds.
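A toy numerical illustration of the *conclusion* (not the hypothesis): for a single harmonic oscillator, whose trajectory does wander over its whole invariant circle, the infinite time average of an observable agrees with the average over that circle. All the numbers below (amplitude, frequency, phase) are illustrative only; Python is used merely as a calculator.

```python
import numpy as np

# Toy check of the *conclusion* of the ergodic hypothesis for one
# harmonic oscillator x(t) = A cos(w t + phi): the time average of x^2
# along a trajectory vs. the phase average of x^2 over the invariant
# circle, parametrised by theta. Both should be close to A^2/2.
A, w, phi = 1.3, 1.0, 0.4

t = np.linspace(0.0, 1000.0, 200_001)           # long but finite time window
time_avg = np.mean((A * np.cos(w * t + phi))**2)

theta = np.linspace(0.0, 2.0 * np.pi, 100_001)  # uniform measure on the circle
phase_avg = np.mean((A * np.cos(theta))**2)

# time_avg and phase_avg both approximate A**2 / 2
```

The finite time window stands in for the infinite time average; the residual discrepancy shrinks like 1/T, which is the "statistical dispersion" von Neumann and Wiener speak of below.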

Darwin (http://www-gap.dcs.st-and.ac.uk/history/Obits2/Darwin_C_G_RAS_Obituary.html) and Fowler (http://www-history.mcs.st-andrews.ac.uk/Biographies/Fowler.html), important mathematical physicists (Fowler was Darwin's student and Dirac was Fowler's), found the correct foundational justification for Stat Mech in the 1920s, and showed that it agreed with experiment in every case usually examined up to that time, and also for stellar reactions. Khintchine, the great Soviet mathematician, re-worked the details of their proofs (the introduction to his slim book on the subject has been posted on the web at http://www-history.mcs.st-andrews.ac.uk/Extras/Khinchin_introduction.html) and made them accessible to a wider audience, and his account has been much studied by mathematicians and philosophers of science interested in the foundations of statistical mechanics or, indeed, any scientific inference (see, for one example, http://igitur-archive.library.uu.nl/dissertations/1957294/c7.pdf and, for another example, Jan von Plato, *Ergodic theory and the foundations of probability*, in B. Skyrms and W.L. Harper, eds, *Causation, Chance and Credence. Proceedings of the Irvine Conference on Probability and Causation, vol. 1*, pp. 257-277, Kluwer, Dordrecht 1988). Khintchine's work went further: he conjectured that any dynamical system with a sufficiently large number of degrees of freedom would have the property that the physically interesting observables would approximately satisfy the *conclusions* of the ergodic theorem even though the dynamical system did not even approximately satisfy the *hypotheses* of the ergodic theorem. His death interrupted the possible formation of a school to carry out his research program, but Ruelle and Lanford III made some progress.
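Khintchine's large-N intuition can be sketched numerically. The setup below is purely illustrative (independent exponentially distributed oscillator energies, standing in for a canonical ensemble at unit temperature): the relative dispersion of a "sum function" observable falls off like 1/sqrt(N), which is why, on a system with many degrees of freedom, almost every microstate gives nearly the ensemble value for such observables.

```python
import numpy as np

# Illustrative sketch of Khintchine's point: for a "sum function"
# observable over N degrees of freedom, the ensemble dispersion shrinks
# like 1/sqrt(N). The N oscillator energies are modelled, for
# illustration only, as independent exponential variables (canonical
# ensemble at temperature 1).
rng = np.random.default_rng(42)

def relative_dispersion(n, samples=20_000):
    energies = rng.exponential(1.0, size=(samples, n))
    observable = energies.mean(axis=1)      # normalised sum function
    return observable.std() / observable.mean()

r_small, r_large = relative_dispersion(10), relative_dispersion(1000)
# r_small is roughly 1/sqrt(10), r_large roughly 1/sqrt(1000)
```

So for large N the observable is nearly constant over the ensemble, and the question of *which* trajectory the system is on matters less and less for such observables.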

In my paper I was able to prove Khintchine's conjectures
for basically all linear classical dynamical systems. For quantum mechanics the situation is much more controversial, of course. Nevertheless Fowler actually based his theorems about Classical Statistical Mechanics on Quantum Theory, although Khintchine did the reverse: first proving the classical case and then attempting, unsuccessfully, to deal with the modifications needed for QM. In my opinion, the quantum case does not introduce anything new.

Why measurement is modelled by an infinite time-average in Statistical Mechanics

This is the *point d'appui* for the ergodic theorem or its substitutes.

Masani, P., and N. Wiener, "Non-linear
Prediction," in *Probability and Statistics, The Harald Cramer Volume*,
ed. U. Grenander, Stockholm, 1959, p. 197: «As indicated by von Neumann ...
in measuring a macroscopic quantity $x$ associated with a physical or biological
mechanism... each reading of $x$ is actually the average over a time-interval
$T$ [which] may appear short from a macroscopic viewpoint, but it is large
microscopically speaking. That the limit $\overline x$, as $T \rightarrow
\infty$, of such an average exists, and in ergodic cases is independent of the
microscopic state, is the content of the continuous-parameter $L_2$-Ergodic
Theorem. The error involved in practice in not taking the limit is naturally to
be construed as a *statistical dispersion* centered about $\overline x$.»
Cf. also Khintchine, A., *op. cit.*, p. 44f., «an observation which gives the
measurement of a physical quantity is performed not instantaneously, but requires
a certain interval of time which, no matter how small it appears to us, would, as
a rule, be very large from the point of view of an observer who watches the
evolution of our physical system. [...] Thus we will have to compare experimental
data ... with time averages taken over very large intervals of time.» And
not the instantaneous value or instantaneous state. Wiener, as quoted in Heims,
*op. cit.*, p. 138f.,
«every observation ... takes some finite time, thereby introducing
uncertainty.»

Benatti, F. *Deterministic Chaos in Infinite Quantum
Systems*, Berlin, 1993, *Trieste Notes in Physics*, p. 3, «Since
characteristic times of measuring processes on macrosystems are greatly longer
than those governing the underlying micro-phenomena, it is reasonable to think
of the results of a measuring procedure as of time-averages evaluated along
phase-trajectories corresponding to given initial conditions.» And Pauli, W., *Pauli Lectures on Physics, volume 4, Statistical
Mechanics*, Cambridge, Mass., 1973, p. 28f., «What is observed macroscopically
are time averages... »

Wiener, "Logique, Probabilité et Méthode des Sciences Physiques": «All the known
laws of probability are asymptotic in character... asymptotic considerations
have no purpose in Science other than to let us know the properties of very
numerous ensembles while avoiding seeing those properties vanish in the
confusion resulting from the specificity of their infinitude. The infinite thus
makes it possible to consider very large numbers without having to take account
of the fact that they are distinct entities.»

Why we need to replace time averages by phase (ensemble) averages. This can be accomplished in different ways; the traditional way is to use the ergodic hypothesis.

These quotations express the **orthodox** approach to Classical Stat Mech. The classical mechanical system is in a particular state, and a measurement of some property of that state is modelled by a long-time average over the trajectory of the system. We approximate this by taking the infinite time average. Our theory, however, cannot calculate this; in any case we do not even know the initial conditions of the system, so we do not know which trajectory to average over. What our theory calculates is the phase average, or ensemble average. If we cannot justify some sort of approximate equality of the ensemble average with the time average, we cannot *explain why the quantities our theory calculates agree with the quantities we measure*.
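To see why some such justification is genuinely needed, here is a minimal counterexample (illustrative numbers only): when the dynamics admits a non-trivial invariant observable, its time average depends on which trajectory the system happens to be on, so no single ensemble average can reproduce all the measured values.

```python
import numpy as np

# Non-ergodic toy: an uncoupled harmonic oscillator inside a larger
# system. Its energy E1 = (x1^2 + p1^2)/2 is a constant of the motion,
# so the time average of E1 is just its initial value and differs from
# trajectory to trajectory; one ensemble average cannot match both.
t = np.linspace(0.0, 500.0, 100_001)

def time_avg_E1(x0, p0):
    x1 = x0 * np.cos(t) + p0 * np.sin(t)
    p1 = -x0 * np.sin(t) + p0 * np.cos(t)
    return np.mean((x1**2 + p1**2) / 2.0)

a = time_avg_E1(1.0, 0.0)   # trajectory with E1 = 0.5
b = time_avg_E1(2.0, 0.0)   # trajectory with E1 = 2.0
# a and b disagree, so the conclusion of the ergodic hypothesis fails
# for this observable.
```

This is exactly the "non-trivial invariant subspaces" obstruction in the ergodic theorem; Khintchine's way out is to restrict attention to the physically interesting sum-function observables, for which the obstruction becomes harmless as the number of degrees of freedom grows.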

Some people, of course, do not care. That is to be anti-foundational.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user joseph f. johnson