PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.


  What are the justifying foundations of statistical mechanics without appealing to the ergodic hypothesis?

+ 20 like - 0 dislike
5218 views

This question was listed as one of the questions in the proposal (see here), and I didn't know the answer. I don't know the ethics on blatantly stealing such a question, so if it should be deleted or be changed to CW then I'll let the mods change it.

Most foundations of statistical mechanics appeal to the ergodic hypothesis. However, this is a fairly strong assumption from a mathematical perspective. A number of results frequently used in statistical mechanics are based on ergodic theory. In every statistical mechanics class I've taken and nearly every book I've read, the assumption was justified solely on the grounds that without it the calculations become virtually impossible.

Hence, I was surprised to see that it is claimed (in the first link) that the ergodic hypothesis is "absolutely unnecessary". The question is fairly self-explanatory, but for a full answer I'd be looking for a reference containing development of statistical mechanics without appealing to the ergodic hypothesis, and in particular some discussion about what assuming the ergodic hypothesis does give you over other foundational schemes.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Logan M
asked Sep 14, 2011 in Theoretical Physics by Logan M (150 points) [ no revision ]
retagged Jan 10, 2015
I believe the term 'justifying foundations' is a misnomer, and this question arises only through the use of this term. My understanding is that experiments are the only foundation of any area of physics. The ergodic hypothesis is just a math trick one uses to show the rationale for the laws of statistics. These laws, within their applicability range, are quite good at explaining a number of observable thermodynamical phenomena. And this is the justification of statistical physics. Statistical mechanics is not 'derived' from the ergodic hypothesis, even if Landau and Lifshitz make it seem so.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user drlemon
maybe this should be an answer :)

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Suresh
I disagree with +drlemon. Statistical mechanics is not a phenomenological model, as drlemon claims. Statistical mechanics, as used by physicists, is a method for deriving properties of a system of a large (infinite, actually) number of constituents from the postulated (or measured) behaviour of the individual components. For example it is a tool for deriving the thermodynamic gas laws from the laws of motion for the individual molecules. The fact that a gas of non-interacting particles that obey Newton's laws satisfies the ideal gas law is something one derives, not an experimental fact.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Gustav Delius
@drlemon the phrase «justifying foundations» is grammatically incorrect in that context, too. I suppose the O.P. means just plain «foundations», since foundations are supposed to do some justifying even while they are at their other tasks. But your point of view, while widespread, a) is anti-foundational: experiments are not the foundations of a theory, they are the proof of a theory. Your point of view in effect denies that physics has or needs any foundations (you are correct if the definition of physics is getting a grant); and b) ignores the problem of connecting theory with experiment: see below.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user joseph f. johnson
@josephf.johnson Alas, while I used the words "justifying foundations," I must admit that particular turn of phrase is not my own, and I can't comment on the intent contained therein. The title of this question was copied from a question posed on the Area 51 proposal of the now-defunct Theoretical Physics site. I agree with you that the phrase "justifying foundations" is a bit strange, but it seemed imprudent to copy the idea for the question but change the title; instead I tried as best I could to maintain the intent of the original asker and cited the location where I had found it.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Logan M
@LoganMaingi don't worry, after all, I wound up explaining everything in my answer, even one extra thing you didna ask 'bout. The answer you accepted is not that bad. The ergodic hypothesis is as dead as a doornail if you mean the precise notion of wandering path, which is, technically, what it means. But some functionally equivalent substitute for the ergodic theorem is way needed.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user joseph f. johnson
@josephf.johnson As for your answer, I haven't yet had a chance to read it, unfortunately. This question has not had any activity for the better part of a year, and the answers do a pretty good job at least at the level I was looking for, so to be honest I had totally forgotten about it. Your answer seems to be at a more advanced level and does explain things in more detail. I appreciate it, even if I don't get a chance to look at it any time soon.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Logan M

6 Answers

+ 14 like - 0 dislike

The ergodic hypothesis is not part of the foundations of statistical mechanics. In fact, it only becomes relevant when you want to use statistical mechanics to make statements about time averages. Without the ergodic hypothesis statistical mechanics makes statements about ensembles, not about one particular system.

To understand this answer you have to understand what a physicist means by an ensemble. It is the same thing as what a mathematician calls a probability space. The “Statistical ensemble” Wikipedia article explains the concept quite well. It even has a paragraph explaining the role of the ergodic hypothesis.

The reason why some authors make it look as if the ergodic hypothesis were central to statistical mechanics is that they want to give you a justification for why they are so interested in the microcanonical ensemble. The reason they give is that the ergodic hypothesis holds for that ensemble when you have a system that spends time in each region of the accessible phase space in proportion to the volume of that region. But that is not central to statistical mechanics. Statistical mechanics can be done with other ensembles, and furthermore there are other ways to justify the canonical ensemble; for example, it is the ensemble that maximises entropy.
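The maximum-entropy characterisation mentioned here can be checked numerically. A minimal sketch, assuming only NumPy; the energy levels and inverse temperature are made-up values, not anything from the discussion. Among all distributions with the same normalisation and the same mean energy, the Boltzmann distribution has strictly the largest entropy, so any constraint-preserving perturbation lowers it:

```python
import numpy as np

def boltzmann(E, beta):
    """Canonical (Boltzmann) distribution over energy levels E at inverse temperature beta."""
    w = np.exp(-beta * E)
    return w / w.sum()

def entropy(p):
    """Shannon entropy, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

E = np.array([0.0, 1.0, 3.0])   # made-up energy levels
beta = 0.7                      # made-up inverse temperature
p = boltzmann(E, beta)

# A perturbation that preserves normalisation and mean energy must be
# orthogonal to both (1,1,1) and (E_1,E_2,E_3); for three distinct levels
# that direction is unique up to scale.
d = np.cross(np.ones(3), E)
d /= np.linalg.norm(d)

for eps in (1e-2, -1e-2, 5e-2, -5e-2):
    q = p + eps * d
    assert np.all(q >= 0)             # still a valid distribution
    assert np.isclose(q.sum(), 1.0)   # normalisation preserved
    assert np.isclose(q @ E, p @ E)   # mean energy preserved
    assert entropy(q) < entropy(p)    # entropy strictly drops
```

The strict inequality follows from the strict concavity of the entropy; the numerics just make the Lagrange-multiplier argument tangible.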

A physical theory is only useful if it can be compared to experiments. Statistical mechanics without the ergodic hypothesis, which makes statements only about ensembles, is only useful if you can make measurements on the ensemble. This means that it must be possible to repeat an experiment again and again and the frequency of getting particular members of the ensemble should be determined by the probability distribution of the ensemble that you used as the starting point of your statistical mechanics calculations.

Sometimes however you can only experiment on one single sample from the ensemble. In that case statistical mechanics without an ergodic hypothesis is not very useful because, while it can tell you what a typical sample from the ensemble would look like, you do not know whether your particular sample is typical. This is where the ergodic hypothesis helps. It states that the time average taken in any particular sample is equal to the ensemble average. Statistical mechanics allows you to calculate the ensemble average. If you can make measurements on your one sample over a sufficiently long time you can take the average and compare it to the predicted ensemble average and hence test the theory.

So in many practical applications of statistical mechanics, the ergodic hypothesis is very important, but it is not fundamental to statistical mechanics, only to its application to certain sorts of experiments.

In this answer I took the ergodic hypothesis to be the statement that ensemble averages are equal to time averages. To add to the confusion, some people say that the ergodic hypothesis is the statement that the time a system spends in a region of phase space is proportional to the volume of that region. These two are the same when the ensemble chosen is the microcanonical ensemble.

So, to summarise: the ergodic hypothesis is used in two places:

  1. To justify the use of the microcanonical ensemble.
  2. To make predictions about the time average of observables.

Neither is central to statistical mechanics, as 1) statistical mechanics can be and is done for other ensembles (for example those determined by stochastic processes) and 2) often one does experiments with many samples from the ensemble rather than with time averages of a single sample.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Gustav Delius
answered Sep 15, 2011 by Gustav Delius (540 points) [ no revision ]
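The time-average versus ensemble-average statement in this answer can be illustrated with a toy model. A minimal sketch, assuming only NumPy; the irrational rotation of the circle is my choice of example, not the answerer's. It is a standard ergodic map for the uniform measure, so the time average of an observable along a single orbit matches the ensemble average:

```python
import numpy as np

alpha = np.sqrt(2) % 1.0                  # irrational rotation number
f = lambda x: np.cos(2 * np.pi * x)       # observable; its ensemble average is 0

# Time average along one orbit of x -> x + alpha (mod 1), from an arbitrary start.
x0, N = 0.123, 200_000
orbit = (x0 + alpha * np.arange(N)) % 1.0
time_avg = f(orbit).mean()

# Ensemble average: integral of f over the invariant (uniform) measure on [0, 1).
ensemble_avg = 0.0

print(time_avg)   # close to 0, independent of x0
assert abs(time_avg - ensemble_avg) < 1e-3
```

Changing `x0` does not change the limit: that independence from the starting point is exactly what ergodicity buys here.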
That's a great explanation of why the ergodic hypothesis is not the best foundation for statistical mechanics, but the question seems to be more about the right starting points (basic principles/postulates) for defining/choosing physically correct ensembles.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Slaviks
I appreciate the in-depth response, and it certainly answers most of my question. As Slaviks suggests, I was also interested in what the right starting points are. Anything along those lines (even if just a pointer to a reference where the foundations are discussed thoroughly) would be appreciated. I wasn't aware the ergodic hypothesis could mean two different things; I've always seen it as the statement you chose. For the moment I haven't accepted this yet, but I plan to do so later today.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Logan M
Alright, after rereading it seems to me that you are claiming that the standard foundations of statistical mechanics do not invoke the ergodic hypothesis at all, and that the emphasis on it is the fault of poor pedagogy rather than a bad choice of fundamental principles and postulates. I misread that earlier. In any case, this fully and completely answers the question, so I've accepted it.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Logan M
+Logan Maingi, I did not, however, address in my answer the question of how to choose the appropriate ensemble. That is a more difficult question than the one about the fundamentals of statistical mechanics, because it requires knowledge of the particular domain where you want to apply statistical mechanics. My view of statistical mechanics is currently influenced by the domain in which I last encountered it, which is the statistical mechanics of random graphs; see next comment.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Gustav Delius
In the context mentioned in my previous comment, rather than studying gases consisting of many particles, one studies graphs consisting of many nodes. There the ensemble of random graphs to work with is either simply postulated (for example some people use the ensemble of random graphs with a given degree distribution after measuring the distribution in a real-world graph) or it is obtained by specifying a stochastic process for the assembly of the graph (for example a process that attaches new nodes randomly by the rule of preferential attachment).

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Gustav Delius
The ergodic hypothesis (as a statement about the relation between time average and ensemble average) is not usually invoked in that field. Instead there is the principle that averaging a node property over all nodes in a specific graph gives the same result as taking a specific node and averaging its property over the ensemble.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Gustav Delius
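The node-average versus ensemble-average principle in the comments above can be sketched numerically. A rough illustration, assuming only NumPy, with Erdős–Rényi random graphs; the graph size and edge probability are arbitrary choices of mine, not from the discussion:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.01   # arbitrary graph size and edge probability

def sample_degrees(n, p):
    """Degrees of one Erdos-Renyi G(n, p) graph, built from a symmetric adjacency matrix."""
    upper = rng.random((n, n)) < p
    adj = np.triu(upper, k=1)   # keep each potential edge once
    adj = adj | adj.T           # symmetrise: undirected graph
    return adj.sum(axis=1)

# (a) average degree over all nodes of a single sampled graph
node_avg = sample_degrees(n, p).mean()

# (b) average degree of one fixed node over an ensemble of sampled graphs
ensemble_avg = np.mean([sample_degrees(n, p)[0] for _ in range(200)])

expected = (n - 1) * p   # mean degree of G(n, p)
assert abs(node_avg - expected) < 1.0
assert abs(ensemble_avg - expected) < 1.0
```

Both averages concentrate around (n-1)p, which is the analogue, for this ensemble, of a single sample being "typical".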
Well, this betrays a somewhat un-physical perspective. The conclusion of the ergodic hypothesis, that time-averages can be replaced by phase averages, is needed if there is to be any connection between theory and experiment. Wiener and Gelfand both emphasised this point. We can only calculate ensemble averages. But every measurement is a long-time average, which we idealise by considering it as an infinite time average. A scientific theory which calculated quantities but couldn't justify their connection with measurement would be a theory without satisfactory foundations, even if useful.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user joseph f. johnson
+joseph f. johnson: Why do you say that every measurement is a long-time average? For example, in what sense does measuring the volume and pressure of a gas in a container involve a long-time average?

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Gustav Delius
Several senses. Atomic time scales to reach equilibrium are so fast that our measurement apparatus in effect only senses an average pressure, which even over a millisecond is like an infinity in dog-years. Imagine if our thermometer actually registered temperature as quickly as atoms move and bump into each other, so that it sensed, registered, and displayed the impact of each individual gas atom on it, with a «dead time» shorter than the intervals between such impacts. In fact, temperature doesn't even exist at that time scale: the reading would swing wildly; there would be no answer.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user joseph f. johnson
In Time Series, my specialty, we never get to draw more than one sample from the population. It's like a bad joke: estimate the standard deviation of a population from a sample of one. But it's what we do every day....

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user joseph f. johnson
+ 10 like - 0 dislike

As for references to other approaches to the foundations of Statistical Physics, you can have a look at the classical paper by Jaynes; see also, e.g., this paper (in particular section 2.3) where he discusses the irrelevance of ergodic-type hypotheses as a foundation of equilibrium statistical mechanics. Of course, Jaynes' approach also suffers from a number of deficiencies, and I think that one can safely say that the foundational problem in equilibrium statistical mechanics is still widely open.

You may also find it interesting to look at this paper by Uffink, where most of the modern (and ancient) approaches to this problem are described, together with their respective shortcomings. This will provide you with many more recent references.

Finally, if you want a mathematically more thorough discussion of the role of ergodicity (properly interpreted) in the foundations of statistical mechanics, you should have a look at Gallavotti's Statistical Mechanics - short treatise, Springer-Verlag (1999), in particular Chapters I, II and IX.

EDIT (June 22 2012): I just remembered about this paper by Bricmont that I read long ago. It's quite interesting and a pleasant read (like most of what he writes): Bayes, Boltzmann and Bohm: Probabilities in Physics.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Yvan Velenik
answered Sep 16, 2011 by Yvan Velenik (1,110 points) [ no revision ]
Could you provide some references to critiques of Jaynes' approach? I think his thinking subtly changed over the years, and actually I think at one point or another he actually had a fully defensible theory...

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user genneth
@genneth: There are several. I must confess to being somewhat biased (I find Jaynes' approach infinitely better than than the ergodic one). That being said: one major criticism is somewhat philosophical. In Jaynes' approach, stat. mech. is not really a physical theory as usually meant, but rather a particular example of statistical inference.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Yvan Velenik
Second, applications of MaxEnt are fine when the underlying configuration space is a finite set, but become much less convincing when dealing with more complicated situations. For example, if one wants to describe a gas (not a lattice model!), why should one favour Liouville measure? Things get even worse when particles have internal degrees of freedom: e.g., for di-atomic molecules, why should we take the action-angle coordinates? One can find arguments, but these are pretty weak. Of course, similar difficulties are also present in the ergodic approach (initial conditions must be "typical").

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Yvan Velenik
There are many other criticisms, of course. See, e.g., Sklar's book (ref. given in Steve's answer).

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Yvan Velenik
interesting; I leave my opinions on those points aside since it's off-topic, but at least the Amazon review on Sklar says that the critique of MaxEnt is not particularly thorough. I have to confess to having difficulty in finding truly well-presented arguments to it --- again, like you, I am biased. Thanks for the replies.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user genneth
@genneth: Well, you might ask this question :) . I am almost sure that you'll find some vehement opponents of Jaynes' ideas here. This kind of question always seems to generate rather strong opinions ;) .

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Yvan Velenik
+ 6 like - 0 dislike

I searched for "mixing" and didn't find it in other answers. But this is the key. Ergodicity is largely irrelevant, but mixing is the property that makes equilibrium statistical physics tick for many-particle systems. See, e.g., Sklar's Physics and Chance or Jaynes' papers on statistical physics.

The chaotic hypothesis of Gallavotti and Cohen basically suggests that the same holds true for nonequilibrium steady states (NESSs).

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user S Huntsman
answered Oct 5, 2011 by S Huntsman (405 points) [ no revision ]
+ 2 like - 0 dislike

You may be interested in these lectures:

Entanglement and the Foundations of Statistical Mechanics

The smallest possible thermal machines and the foundations of thermodynamics

held by Sandu Popescu at the Perimeter Institute, as well as in this paper

Entanglement and the foundations of statistical mechanics.

There it is argued that:

  1. "the main postulate of statistical mechanics, the equal a priori probability postulate, should be abandoned as misleading and unnecessary" (the ergodic hypothesis is one way to ensure the equal a priori probability postulate)

  2. instead, a quantum basis for statistical mechanics is proposed, based on entanglement. In the Hilbert space, it is argued, almost all states are close to the canonical distribution.

You may find in the paper some other interesting references on this subject.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Cristi Stoica
answered Feb 14, 2012 by Cristi Stoica (275 points) [ no revision ]
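The "almost all states are close to the canonical distribution" claim can be given a rough numerical sketch in the simplest setting, where the Hamiltonian is trivial so the canonical state of the small system is just the maximally mixed one; the dimensions below are arbitrary choices of mine. A Haar-random pure state of system plus environment has a reduced state very close to maximally mixed once the environment is large:

```python
import numpy as np

rng = np.random.default_rng(0)
d_s, d_e = 2, 400   # small system, much larger "environment"

def random_reduced_state(d_s, d_e):
    """Draw a Haar-random pure state on H_S (x) H_E and trace out the environment."""
    psi = rng.normal(size=d_s * d_e) + 1j * rng.normal(size=d_s * d_e)
    psi /= np.linalg.norm(psi)
    m = psi.reshape(d_s, d_e)   # coefficients as a d_s x d_e matrix
    return m @ m.conj().T       # partial trace over the environment

rho = random_reduced_state(d_s, d_e)
maximally_mixed = np.eye(d_s) / d_s   # "canonical" state for a trivial Hamiltonian

# Trace distance between the reduced state and the canonical one.
diff = rho - maximally_mixed
trace_distance = 0.5 * np.abs(np.linalg.eigvalsh(diff)).sum()

print(trace_distance)   # small, shrinking as the environment grows
assert trace_distance < 0.15
```

Increasing `d_e` shrinks the distance further, which is the typicality phenomenon the lectures describe: no equal-a-priori-probability postulate is invoked, only entanglement with a large environment.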
It has been realised for a long time, even in classical mechanics and classical statistical mechanics, e.g., the theory of Brownian motion, that it should be possible, in principle, to dispense with the equal a priori probability postulate. The thermodynamic limits we get have been noticed to be largely independent of which initial probability distribution you impose on the phase space. A rigorous mathematical investigation of this robustness is felt to be like a Millennium Problem... but in physical terms, the intuition goes back to Sir James Jeans.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user joseph f. johnson
+ 2 like - 0 dislike

I do not agree with Marek's statement that ''in many practical applications of statistical mechanics, the ergodic hypothesis is very important, but it is not fundamental to statistical mechanics, only to its application to certain sorts of experiments.''

The ergodic hypothesis is nowhere needed. See Part II of my book Classical and Quantum Mechanics via Lie algebras for a treatment of statistical mechanics independent of assumptions of ergodicity or mixing, but still recovering the usual formulas of equilibrium thermodynamics.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user Arnold Neumaier
answered Mar 13, 2012 by Arnold Neumaier (15,787 points) [ no revision ]
+ 1 like - 0 dislike

I have recently published an important paper, Some special cases of Khintchine's conjectures in statistical mechanics: approximate ergodicity of the auto-correlation function of an assembly of linearly coupled oscillators. REVISTA INVESTIGACIÓN OPERACIONAL VOL. 33, NO. 3, 99-113, 2012 http://rev-inv-ope.univ-paris1.fr/files/33212/33212-01.pdf which advances the state of knowledge as to the answer to this question.

In a nutshell: one needs to justify the conclusion of the ergodic hypothesis without assuming the ergodic hypothesis itself. The desirability of doing this has been realised for a long time, but rigorous progress has been slow. Terminology: the ergodic hypothesis is that every path wanders through (or at least near) every point. This hypothesis is almost never true. The conclusion of the ergodic hypothesis: almost always, infinite time averages of an observable over a trajectory are (at least approximately) equal to the average of that observable over the ensemble. (Even if the ergodic hypothesis holds good, the conclusion does not follow. Sorry, but this terminology has become standard, traditional, orthodox, and it's too late to change it.) The ergodic theorem: unless there are non-trivial distinct invariant subspaces, the conclusions of the ergodic hypothesis hold.
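The role of the invariant subspaces in this terminology can be seen in a toy example; a minimal sketch of my own construction, not anything from the paper. A rotation carried on two disjoint invariant circles has non-trivial invariant subsets, so each trajectory's time average remembers its starting circle and fails to equal the ensemble average:

```python
import numpy as np

alpha = np.sqrt(2) % 1.0   # irrational rotation number
N = 200_000

def time_average(x0, y_level):
    """Time average of g(x, y) = y + 0.1 cos(2*pi*x) along the orbit of
    (x, y) -> (x + alpha mod 1, y).  The y-coordinate is conserved, so each
    circle {y = const} is a non-trivial invariant set: the dynamics on the
    union of the two circles is NOT ergodic."""
    xs = (x0 + alpha * np.arange(N)) % 1.0
    return (y_level + 0.1 * np.cos(2 * np.pi * xs)).mean()

# Ensemble: uniform over the two invariant circles y = 0 and y = 1,
# so the ensemble average of g is 0.5 (the cosine integrates to zero).
ensemble_avg = 0.5

t0 = time_average(0.123, 0.0)   # each trajectory averages to its own y-level
t1 = time_average(0.456, 1.0)
assert abs(t0 - 0.0) < 1e-3 and abs(t1 - 1.0) < 1e-3
# Neither time average equals the ensemble average: the conclusion of the
# ergodic hypothesis fails exactly because of the invariant subsets.
assert abs(t0 - ensemble_avg) > 0.4 and abs(t1 - ensemble_avg) > 0.4
```

Restricted to a single circle the same map is ergodic and time averages do recover that circle's own average, which is the dichotomy the ergodic theorem expresses.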

Darwin (http://www-gap.dcs.st-and.ac.uk/history/Obits2/Darwin_C_G_RAS_Obituary.html) and Fowler (http://www-history.mcs.st-andrews.ac.uk/Biographies/Fowler.html), important mathematical physicists (Fowler was Darwin's student and Dirac was Fowler's), found the correct foundational justification for Stat Mech in the 1920s, and showed that it agreed with experiment in every case usually examined up to that time, and also for stellar reactions. Khintchine, the great Soviet mathematician, re-worked the details of their proofs (the Introduction to his slim book on the subject has been posted on the web at http://www-history.mcs.st-andrews.ac.uk/Extras/Khinchin_introduction.html) and made them accessible to a wider audience, and his book has been much studied by mathematicians and philosophers of science interested in the foundations of statistical mechanics or, indeed, of any scientific inference (see, for one example, http://igitur-archive.library.uu.nl/dissertations/1957294/c7.pdf and, for another example, Jan von Plato, "Ergodic theory and the foundations of probability," in B. Skyrms and W. L. Harper, eds, Causation, Chance and Credence: Proceedings of the Irvine Conference on Probability and Causation, vol. 1, pp. 257-277, Kluwer, Dordrecht 1988). Khintchine's work went further: in some conjectures he hoped that any dynamical system with a sufficiently large number of degrees of freedom would have the property that the physically interesting observables approximately satisfy the conclusions of the ergodic theorem even though the dynamical system does not even approximately satisfy the hypotheses of the ergodic theorem. His arrest (he died in prison) interrupted the possible formation of a school to carry out his research programme, but Ruelle and Lanford III made some progress.

In my paper I was able to prove Khintchine's conjectures for basically all linear classical dynamical systems. For quantum mechanics the situation is much more controversial, of course. Nevertheless Fowler actually based his theorems about Classical Statistical Mechanics on Quantum Theory, although Khintchine did the reverse: first proving the classical case and then attempting, unsuccessfully, to deal with the modifications needed for QM. In my opinion, the quantum case does not introduce anything new.


Why measurement is modelled by an infinite time-average in Statistical Mechanics

This is the point d'appui for the ergodic theorem or its substitutes.

Masani, P., and N. Wiener, "Non-linear Prediction," in Probability and Statistics, The Harald Cramer Volume, ed. U. Grenander, Stockholm, 1959, p. 197: «As indicated by von Neumann ... in measuring a macroscopic quantity $x$ associated with a physical or biological mechanism... each reading of $x$ is actually the average over a time-interval $T$ [which] may appear short from a macroscopic viewpoint, but it is large microscopically speaking. That the limit $\overline x$, as $T \rightarrow \infty$, of such an average exists, and in ergodic cases is independent of the microscopic state, is the content of the continuous-parameter $L_2$-Ergodic Theorem. The error involved in practice in not taking the limit is naturally to be construed as a statistical dispersion centered about $\overline x$.» Cf. also Khintchine, A., op. cit., p. 44f., «an observation which gives the measurement of a physical quantity is performed not instantaneously, but requires a certain interval of time which, no matter how small it appears to us, would, as a rule, be very large from the point of view of an observer who watches the evolution of our physical system. [...] Thus we will have to compare experimental data ... with time averages taken over very large intervals of time.» And not the instantaneous value or instantaneous state. Wiener, as quoted in Heims, op. cit., p. 138f., «every observation ... takes some finite time, thereby introducing uncertainty.»

Benatti, F. Deterministic Chaos in Infinite Quantum Systems, Berlin, 1993, Trieste Notes in Physics, p. 3, «Since characteristic times of measuring processes on macrosystems are greatly longer than those governing the underlying micro-phenomena, it is reasonable to think of the results of a measuring procedure as of time-averages evaluated along phase-trajectories corresponding to given initial conditions.» And Pauli, W., Pauli Lectures on Physics, volume 4, Statistical Mechanics, Cambridge, Mass., 1973, p. 28f., «What is observed macroscopically are time averages... »

Wiener, "Logique, Probabilité et Méthode des Sciences Physiques," «All the known laws of probability are asymptotic in character... asymptotic considerations have no purpose in Science other than to let us know the properties of very numerous ensembles without those properties vanishing in the confusion resulting from the specificity of their infinitude. The infinite thus allows us to consider very large numbers without having to take into account the fact that they are distinct entities.»


Why we need to replace time averages by phase averages; this can be accomplished in different ways, and the traditional way is to use the ergodic hypothesis.

These quotations express the orthodox approach to Classical Stat Mech. The classical mechanical system is in a particular state, and a measurement of some property of that state is modelled by a long-time average over the trajectory of the system, which we approximate by taking the infinite time average. Our theory, however, cannot calculate this; besides, we do not even know the initial conditions of the system, so we do not know which trajectory to average over. What our theory calculates is the phase average, or ensemble average. If we cannot justify some sort of approximate equality of the ensemble average with the time average, we cannot explain why the quantities our theory calculates agree with the quantities we measure.
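A toy model of this measurement-as-time-average picture; the signal statistics and detector windows below are invented for illustration. A "microscopic" signal fluctuates wildly, but a detector that reports the mean over a window long compared to the microscopic timescale reads off something sharp and close to the underlying mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Microscopic" signal: a fixed mean plus large, fast fluctuations.
true_mean = 2.0
signal = true_mean + rng.normal(scale=5.0, size=1_000_000)

def detector_reading(signal, window):
    """A slow detector reports the average over its response window."""
    return signal[:window].mean()

fast = detector_reading(signal, 10)          # near-instantaneous reading: noisy
slow = detector_reading(signal, 1_000_000)   # long-time average: sharp

print(abs(fast - true_mean), abs(slow - true_mean))
assert abs(slow - true_mean) < 0.05   # sd of the long average is 5/sqrt(1e6) = 0.005
```

The short-window reading scatters with standard deviation of order 5/sqrt(10), which is the "no answer at that time scale" of the earlier comment; only the long average is a well-defined reading.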

Some people, of course, do not care. That is to be anti-foundational.

This post imported from StackExchange Physics at 2015-01-10 05:19 (UTC), posted by SE-user joseph f. johnson
answered Feb 11, 2013 by joseph f. johnson (500 points) [ no revision ]

user contributions licensed under cc by-sa 3.0 with attribution required
