# Do solutions to nonlinear classical field equations have any bearing on their quantization?

+ 6 like - 0 dislike

The question is motivated by this paper and review. As we know, for free field equations, or more generally linear field equations (e.g. the Dirac equation minimally coupled to an external EM field), one way of making a QFT out of them is to take the solution space as the 1-particle Hilbert space, define creation/annihilation operators, and then go on to define the Fock space with the correct statistics. Looking backwards after the QFT has been constructed, the classical field equation can be recovered by considering the 1-particle "wavefunction" $f(x):=\langle 0|\hat{\psi}(x)|f\rangle$, which obviously satisfies the field equation. For a nonlinear field equation this correspondence ceases to exist, simply because $\langle 0|\hat{\psi}(x)|f\rangle ^2\neq \langle 0|\hat{\psi}(x)^2|f\rangle$. In this case the role that classical solutions play becomes obscure. My question is: do solutions to classical field equations have any bearing on their quantization? What is the connection, or at least what should it be?

PS. This is probably related to a years-old question I asked before, after which I shamefully did not try to dig deeper: From quantization under external classical gauge field to a fully quantized theory

edited Aug 9, 2014

+ 4 like - 1 dislike

Yes, classical solutions of any theory have a lot to say about its quantization, as they constitute the $\hbar\to 0$ limit of the quantum theory. Essentially, they by themselves determine the tree-level approximation of any quantum theory. The reason is that $k$-loop corrections scale with $\hbar^k$, although this is not visible in the many treatises where $\hbar$ is set to $1$ everywhere. For tree diagrams from classical fields see, e.g.,

R.C. Helling,
Solving classical field equations,
Unpublished manuscript.

The classical action is the $0$th order approximation of the effective action of quantum field theory. The tree level approximation of the S-matrix therefore produces classical approximations to all the stuff people can compute from a QFT. (A non-comprehensive list of what this is can be gleaned from the last paragraph of Chapter 11.5 [in my 1995 edition] of the QFT book by Peskin & Schroeder, which starts with ''This conclusion implies that $\Gamma$'' [the effective action] "contains the complete set of physical predictions of the quantum field theory".)
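
To make the $\hbar$ counting explicit: the standard loop expansion of the effective action reads

$$\Gamma[\phi] = S[\phi] + \frac{\hbar}{2}\,\mathrm{Tr}\,\ln S''[\phi] + O(\hbar^2),$$

so in the limit $\hbar\to 0$ the effective action reduces to the classical action $S$, and the stationary points of $S$ (the classical solutions) generate the tree-level approximation.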

The approximate spectrum in the classical limit, and its relations to scattering, decay rates, and bound state formulas, is (briefly) discussed on p.416 of

L.G. Yaffe,
Large $N$ limits as classical mechanics,
Rev. Mod. Phys. 54 (1982), 407--435.

This interpretation of the quantum world in terms of the classical is important in quantum field theory when it comes to the explanation of perturbatively inaccessible phenomena such as particle states corresponding to solitons, or tunneling effects related to instantons. Indeed, standard perturbative methods do not capture certain phenomena that appear only at infinite order in perturbation theory (i.e., nonperturbatively) and are related to a semiclassical analysis of soliton and instanton solutions of classical field equations.  See, e.g., the paper

R. Jackiw,
Quantum meaning of classical field theory,
Rev. Mod. Phys. 49, 681-706 (1977).

However, Jackiw's explanations are mathematically vague.

Geometric quantization is the program of quantizing a classical theory given on a symplectic manifold. This works quite well in quantum mechanics, but its extension to QFT is at present more an art than a science.

A classical Lagrangian dynamics on a finite-dimensional symplectic manifold can be quantized by deformation quantization; this results in $\hbar$ expansions of the corresponding quantum theory in the Hamiltonian formalism. In infinite dimensions it works when the manifold is a Hilbert space (i.e., a linear manifold), which carries a canonical symplectic structure, and then gives Berezin quantization.
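
For instance, on flat phase space the deformation is the Moyal star product, whose expansion begins

$$f\star g = fg+\frac{i\hbar}{2}\{f,g\}+O(\hbar^2),\qquad \{f,g\}=\partial_q f\,\partial_p g-\partial_p f\,\partial_q g,$$

so that star commutators reduce to $i\hbar$ times Poisson brackets as $\hbar\to 0$; e.g. $q\star p-p\star q=i\hbar$.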

The paper mentioned in your question tries to extend this approach to an infinite-dimensional nonlinear symplectic manifold, namely the manifold of solutions of the classical Yang-Mills equations, which is a symplectic space with the so-called Peierls bracket. (Since the space of solutions is parameterized by the space of initial conditions, this is equivalent to working with the space of consistent initial conditions, which is the setting actually used by Dynin.)

answered Aug 9, 2014 by (15,458 points)
edited Aug 9, 2014

Thanks for the general information, the papers appear to be readable to me, except maybe Yaffe's. +1

@ArnoldNeumaier I would also add that classical solutions can represent the v.e.v. of the ground state, as happens for the Higgs field. So one can use them to build a fully developed theory. Experiments can decide whether this represents the right choice for the vacuum.

The classical solution represents the VEV of the ground state only to $0$th order in $\hbar$. The exact VEV is the classical solution of the equations corresponding to the effective action, which includes all quantum corrections.

I agree. The v.e.v. gets corrected by higher orders of perturbation theory.

It is nonsense to claim that the quantum vacuum of QCD is described by the solution to any sort of classical equation, no matter how many terms you include in the effective action. The quantum vacuum is entirely fluctuating; at long distances it is the zero-action limit--- so that every link on every plaquette is fluctuating independently, with no regard to the value at any other link. In this limit the classical equations are nonexistent and uninformative. Classical equations are deterministic, rigidly restricting the evolution given initial data; the quantum limit of gauge theory at long distances is completely random.

The result is clear in any lattice simulation, and it is also obvious from the strong coupling expansion. The correlations between link gauge field values disappear completely at long distances, and this is what the mass gap means. It is also obvious why they disappear: when the link gauge field is large on every link (on a coarse enough lattice), then on a coarser lattice the link gauge fields are statistical averages over the smaller links, and these averages become completely ergodic in the infrared.

This means that it is entirely wrong to consider any form of classical equations at all when describing the long-distance properties of QCD. The long-distance limit is, in the Euclidean theory, just the gauge-field version of totally random noise. It has no useful classical description at all, and any type of description which starts with classical solutions to the equations is always entirely different from the quantum vacuum, which we know how to describe on the lattice.

The qualitative picture for the mass gap in QCD is not an open problem in physics; it is entirely solved within lattice QCD at the physical level of rigor. That none of these approaches are anywhere near the correct physics is obvious from the fact that they cannot and will not describe the source of the mass gap: the complete randomization of the link gauge field as you average over longer and longer distances.

The approach of quantizing classical theories is completely mentally defective for these highly quantum systems. Any description which starts close to the classical Yang-Mills equations disregards the fact that the character of the field at long distances is nothing like the classical equations; it is simple in a completely different way, being totally random--- corresponding to the Euclidean lattice gauge field action at infinite coupling, i.e. to the 0 action (totally uncorrelated random fields).

Did you ever hear of instantons? And all the people like 't Hooft working on this on the ground state of QCD? See this. About the status of Yang-Mills theory on the lattice, looking at your arguments, you seem to have missed all the work done on propagators in the last fifteen years.

Also, you seem to have some problems with my approach, where use of classical solutions is made. But the matter here is somewhat different, as the work of Dynin does not use any kind of classical solution.

Of course I heard of instantons! These describe the nearly-classical fluctuations in weakly coupled gauge theory. The original idea was that by superposing instantons, you could describe the approach to the long-distance limit, to a totally random limit--- Polyakov wrote about this in 2+1 gauge theory, where he showed that the instantons here (we would call them monopoles, because in 3+1 dimensions their best analogs are line-extended) do randomize the gauge field at long distances.

When 't Hooft and Mandelstam talked about monopole superconductors in QCD, they were imagining condensing states corresponding to these types of monopole solutions. The work here uses the maximal abelian gauge to explain how this might work, and it is very informative.

The fact that these solutions are nearly classical is completely unimportant, because they are nearly classical only at short distances; they describe the interpolation between the short-distance theory, where the classical description is appropriate, and the long-distance regime, where classical descriptions fail. The instantons in 4d QCD are not what randomizes the gauge field at long distances; the appropriate nearly classical configurations are monopole-like line excitations which close in on themselves to make loops. Some informative work on how this happens is given in the recent analysis of Argyres' group at Cincinnati, regarding 4d gauge theory compactified so that it approaches 3d gauge theory. This construction interpolates between 2+1 gauge theory, which is better understood, and 4d gauge theory, and shows you what the randomization is supposed to look like.

If you use these types of classical short-distance solutions, you have a chance of describing the randomization at long distances. That's not what people are doing in the mathematics work, they are simply taking formal properties of the classical solutions to Yang Mills (formal properties which are entirely unrelated to the path integral for quantum Yang Mills) and claiming that they can extract the quantum behavior from this. It is only possible if they describe the statistical randomization process in detail, and they don't describe it at all.

But the Wilson work on lattice QCD describes the randomization at long distances in a completely different way, using the lattice action. In this approach, it is really simple to understand the flow to the infrared, this was why Wilson's lattice QCD/strong-coupling work was so influential. As the lattice gets coarser, the field becomes random, and the classical equations are completely uninformative, because the gauge field is getting closer and closer to being described by the action "0".

The details of how this happens are in describing which short-distance near-classical excitations, when present as a froth in the Euclidean theory at weak coupling and small lattice spacing, produce a randomization effect at longer distances.

Ron, it somewhat escapes me how something like the Higgs mechanism enters your scenario of QFT. One starts from a nontrivial classical solution and then expands around it. What you get is absolutely consistent, and the classical v.e.v. is corrected at higher orders in perturbation theory, turning the classical solution into the proper quantum result.

In Yang-Mills theory without quarks, it seems that things go exactly this way, breaking BRST invariance. This is what is seen in lattice computations, which also point to a running coupling going to zero in the low-energy limit. I can provide you the full refs if you need them.

In QCD with quarks things change and may be the way you state. There is a rather recent paper about this that gives a serious indication of confinement, but only for a number of quarks above a given threshold. This one. I found this paper so shocking that I wrote about it on my blog here. The paper and my post should give you an idea of what is going on in the field about Yang-Mills theory with and without quarks.

The Higgs mechanism is totally classical, because it happens at weak coupling scales, well before any confinement. You would be better off asking what happens if you try to Higgs QCD with a colored Higgs whose breaking scale is smaller than the confinement radius. I don't know what happens in this case, I don't know if anyone does.

The statement that "things go exactly in this way" is not proper, because you haven't identified the appropriate condensate field! Further, we know exactly what is going on in the strong coupling expansion and in 3d, and it has no relation to any kind of Higgs mechanism with a colored source.

The mass gap in pure Yang-Mills is only similar to the Higgs mechanism qualitatively, because the condensation is for monopole-like objects. This picture is made more precise in maximal abelian gauge or in 3d.

I didn't say anything about quarks at any point; the total randomization at long distances is really what happens in pure bosonic gauge theory. It really does happen, and it is clear to anyone who has ever simulated pure gauge theory on a computer.

Some interesting questions are what happens if you fix a gauge on the lattice theory, for example axial gauge, and then ask how the random field looks in this gauge (this can inform you on what the classical axial gauge solutions to the classical theory say about the theory).

You can provide references, I'll read them, but I understand this from simulating QCD, not from reading anything.

@ArnoldNeumaier: what is the $\hbar \rightarrow 0$ limit of QCD?

@ArnoldNeumaier, @RonMaimon, I think the key disagreement, if I understand you guys correctly, is that Arnold thinks classical solutions represent the $\hbar\to 0$ limit, while Ron thinks on top of that one also has to work in the weak coupling regime. Do I understand you correctly? What argument do you think can settle this?

We agree that $\hbar\to 0$ is both the weak coupling limit and the classical limit.

We do not agree about the interpretation for large $\hbar$.

Ron's argument is that the solution for large $\hbar$ is qualitatively very different. This is probably true but it doesn't prove his claim that a quantization of the classical Kaehler manifold is nonsense (if done correctly, i.e., with an improved renormalization treatment).

Indeed, already for a simple anharmonic oscillator such as the Morse oscillator (known to be correctly quantized in precisely the way Dynin tried to do for YM), the large $\hbar$ properties are qualitatively very different from the small $\hbar$ properties. (In this case, the Kaehler manifold is simply the space of complex numbers, and the coherent states are those of the harmonic oscillator, the eigenvectors of the harmonic annihilation operator.)
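
(For reference: the bound-state spectrum of the Morse oscillator with well depth $D$ and harmonic frequency $\omega$ is

$$E_n=\hbar\omega\left(n+\tfrac12\right)-\frac{\left[\hbar\omega\left(n+\tfrac12\right)\right]^2}{4D},$$

with only finitely many bound levels, so already here the exact spectrum differs qualitatively from the evenly spaced harmonic tower in terms of which the quantization is built.)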

Note that lots of QCD calculations that can be compared with experiment are done at 1 loop, which is just the result of a first-order expansion in $\hbar$, and hence valid only for small $\hbar$ (or rather, since $\hbar$ is a constant, one should say: at high energies). As Ron correctly says, this covers the short-distance (high-energy) regime. This is generally considered (by physicists) proof enough of the correctness of QCD; and a quantization that reproduces this limit would be a mathematically valid quantization if it respects the standard causality and positivity conditions. But at large distances, where the infrared problems set in and the terms involving higher orders of $\hbar$ are no longer negligible, the structure of QCD is very different, due to confinement. This also affects the spectrum; the confinement structure is closely related to the mass gap.

But just as the spectrum of the Morse oscillator is qualitatively very different from that of a harmonic oscillator, although it is constructed in terms of the latter, there is no conflict between a semiclassical quantization at small $\hbar$ and a nonperturbative, very different behavior at large $\hbar$.

I am not making a claim that requires proof, I am making a heuristic that says that a particular method of quantization cannot possibly work for gauge theory (and a separate much easier to substantiate claim that a particular author in addition did it incorrectly).

This is different from claiming that a particular method does work, the arguments are not rigorous, and I don't know how they can be! When you say "such and so method cannot work because X and Y and Z", there is no rigorous statement of this, let alone proof. The only thing you can prove is a statement, how do you state "this method cannot work"?

But despite being completely devoid of rigor, this claim is still precise. The precise meaning is that the weak-coupling/small-$\hbar$ expansion of gauge theory is qualitatively completely different from the strong-coupling long-distance behavior, and this can be justified (not yet proved) by examining the numerical descriptions we have of the long-distance behavior. In this case, the long-distance behavior is totally uncorrelated random gauge fields, while the short-distance behavior is rigidly constrained fields making small fluctuations close to a perturbative vacuum. The two descriptions interpolate into each other: the gauge field oscillates more and more wildly as you look at longer distances on a lattice, and at the longest distances the Euclidean link variables (the transport along long lines) are as random as if you threw independent dice to pick their values; they are totally uncorrelated at distances greater than the confinement length. This behavior is about as similar to the classical deterministic global solutions of the Yang-Mills equations as it is to a cucumber.

I am sure that the Morse oscillator (and any other 0+1 d quantum system) may be quantized by methods of symplectic quantization, there is no obstacle to doing this in any 0+1d system, or any integrable 1+1 field theory model. The reason is that the kinematics of 0+1 d systems is trivial, any one such system has the exact same Hilbert space as any other, it's always square-normalizable wavefunctions, or their non-normalizable plane-wave or distributional limit in the case of infinite volume or when you want to talk about x-eigenstates. You can describe them using Harmonic oscillator states, or using free-particle states, you have a lot of freedom, because the kinematics is separate from the dynamics.

The path integral counterpart to this property is that the short-distance fluctuations are always exactly the same for any potential, and always controlled by the kinetic term in 0+1 d, in Euclidean space it's always Brownian motion locally. In renormalization group language, all potentials are short-distance irrelevant, they disappear if you look at microscopic distances, and they disappear quickly.
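
This short-distance universality is easy to check numerically. Here is a minimal sketch (my own toy example; the parameters and Metropolis method are my choices, not from the discussion above): a discretized Euclidean path integral sampled with two very different potentials, checking that the one-link fluctuation $\langle(x_{i+1}-x_i)^2\rangle\approx a$ (with $\hbar=m=1$) is set by the kinetic term alone.

```python
import numpy as np

def mean_link_fluct(V, n_sites=32, a=0.1, n_sweeps=20000, step=0.5, seed=0):
    """Metropolis sampling of a periodic Euclidean lattice path integral,
    action S = sum_i [ (x_{i+1}-x_i)^2 / (2a) + a * V(x_i) ]  (hbar = m = 1).
    Returns the average one-link fluctuation <(x_{i+1}-x_i)^2>."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_sites)
    measurements = []
    for sweep in range(n_sweeps):
        for i in range(n_sites):
            xp = x[i] + step * rng.uniform(-1.0, 1.0)
            l, r = x[(i - 1) % n_sites], x[(i + 1) % n_sites]
            dS = ((xp - l) ** 2 + (xp - r) ** 2
                  - (x[i] - l) ** 2 - (x[i] - r) ** 2) / (2 * a) \
                 + a * (V(xp) - V(x[i]))
            if dS < 0 or rng.random() < np.exp(-dS):
                x[i] = xp
        if sweep > n_sweeps // 2:  # discard the first half as equilibration
            measurements.append(np.mean((np.roll(x, -1) - x) ** 2))
    return float(np.mean(measurements))

# Two very different potentials, same short-distance (Brownian) behavior:
harmonic = mean_link_fluct(lambda x: 0.5 * x ** 2)
quartic = mean_link_fluct(lambda x: x ** 4)
print(harmonic, quartic)  # both should come out close to a = 0.1
```

The potential only enters at relative order $a$, which is the renormalization-group statement that all potentials are short-distance irrelevant in 0+1d.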

By contrast, when you are doing interacting field theories, the kinematic terms renormalize along with the interaction terms--- the fluctuations at all distances are altered. In the case of gauge theory, the running asymptotically vanishes at short distances, so that the theory is close to free at teeny-tiny distances, and the running stops, but this happens very slowly, and if you use the classical equations appropriate to the continuum coupling, these classical equations have zero coupling, and since the classical coupling doesn't run, they have zero coupling at all scales.

The classical Yang-Mills equations are scale invariant, they don't have a length at which anything is different. When you pretend to "quantize" them in a symplectic way, you are simply choosing one particular coupling, relevant at one particular scale, and looking at small fluctuations around this coupling, which (as long as the coupling is small) defines the fluctuation dynamics at one particular (small) length scale only. The randomization of this stuff at long distances is not in the description, and the correct log-running kinematics, the short-distance properties of the field fluctuations, are not correctly described.

So the program is simply hopeless for attacking Yang-Mills theory, and not in a simple way--- it is also hopeless for attacking any other quantum field theory with kinematics which are not free, the Hilbert space of the interacting theory is simply different from the Hilbert space of any free theory.

The reason I am so adamant about this stuff is because in physics, the problem of Yang-Mills existence and mass-gap is entirely solved. It is solved by lattice gauge-theory, which gives an algorithm for computing the correlation functions, and also for taking a continuum limit. All the properties are understood qualitatively and (with enough computer time) quantitatively.

Any claim that you "quantize Yang-Mills" has to make sense relative to the computational formulation, and the claims from symplectic quantization don't make any sense when translated to this language, and therefore don't make any sense on their own.

I know that the interacting Hilbert space cannot be the free one (Haag's theorem). But this is a matter of correct renormalization and not a defect of symplectic quantization itself. The inequivalent interacting Hilbert space is related to the Fock space by a Bogoliubov transformation, which is well-defined in a regularized theory (such as a lattice theory), before taking the IR and UV limits, where the resulting transformation becomes nonimplementable. Therefore the problem in Dynin's paper is not his starting point but that he doesn't renormalize correctly.

Yes, I agree that if you discretize, everything is ok, and if Dynin had done the quantization correctly, the only place where he would run into problems would be in the renormalization. I agree with you regarding the problems in correctly implementing symplectic quantization of field theories.

But Dynin is doing an incorrect cartoonish quantization which has nothing to do with a correct quantum field theory, so the problems with his paper are much deeper! You can see what he is doing in section 3: he is defining a particular set of free raising and lowering operators which do not have any resemblance to any real Fock space of short-distance Yang-Mills. As an exercise, I urge you to repeat Dynin's construction for gauge coupling $=0$. You will not reproduce free field theory.

If he had done a correct quantization, he wouldn't be able to conclude that the states in the theory are a Fock-like tower on top of an empty vacuum, that is simply not true even in everywhere weakly coupled interacting theories. There is no Fock-like description of the gauge vacuum which is a "N=0" state of a free field theory of gluons, it just doesn't look like that. It looks like a sea of infinitely many coherent gluons whose typical momentum is $\Lambda$ the inverse confinement length. These gluons are the manifestation of the gauge condensate, the nonzero expected value of fluctuations in the gauge field in the gauge-field vacuum, which needs to be there, because the Euclidean gauge field is totally random at large distances.

His description is defining raising and lowering operators which give you noninteracting excitations with no renormalization (because they are always noninteracting). These excitations have an energy which he defines to be the excitation number times a classical energy, and this is what he used to get the mass bound. He is doing nonsense.

The statement that you can define quantum Yang-Mills from classical Yang-Mills can only be true locally. Even then, there is the issue that the coupling in the classical theory at the tiniest continuum distances is zero. So you need to start with a classical theory at infinitesimal coupling, and then produce configurations of the quantum theory at gradually larger coupling and larger distances, and, sorry, this is just hopeless from the global classical theory, because the global classical equations in this case are simply free, and their quantization gives non-interacting gluons. You might be able to paste these together at long distances to reproduce the random fluctuations, but the problem is then the same as taking the continuum limit on a lattice, or any other formulation, looking at classical Yang-Mills as the starting point buys you nothing, and just makes a lot of obfuscation in the mathematics of defining the kinematic space.

+ 3 like - 0 dislike

The answer to this question, as posed, is "yes", but not in the way implied by other answers. In particular, the long-distance properties of the Yang-Mills vacuum are not described by considering small fluctuations of global classical solutions of the classical Yang-Mills equations. The local properties of Yang-Mills fields (near any one point, in a radius much smaller than the confinement radius) are described well by fluctuations around classical field theory. But the path integral pastes together these local descriptions into something monstrously quantum at long distances.

To understand quantum Yang-Mills, you should first examine the lattice version in detail, because this can be used to define the theory algorithmically, and compute all correlation functions. The perturbative formalism for Yang-Mills is secondary, because it is only justified at weak coupling, it is a weak coupling expansion.

In the lattice formulation, you have a $G$-valued matrix field on each link, and an action which is $1/g^2$ times the sum over plaquettes of the trace of the holonomy around the plaquette. You can do things on the lattice which are interesting and different from what you do in perturbative field theory: you can ask what the perturbation theory looks like at strong coupling, when $g$ is effectively close to infinite.

In this case, you start with zero action as your leading approximation, and every matrix on every link is chosen uniformly at random according to Haar measure. With a small $1/g^2$, you have some correlations between the matrices on neighboring links, but the main result of the strong coupling expansion, described in the classic lattice papers of Wilson, is that the correlations die off at long distances, so that there is an exponential decay of all correlation functions in the strong coupling expansion to any order.

The qualitative reason for this behavior is simple to understand--- the field is nearly random on every link, except for a little bit of correlation between neighboring links. When you look at the product of the G's on two links going in the same direction, the product is closer to random than each factor, simply because multiplying together large random elements of a gauge group fills the gauge group ergodically.

This observation is intuitively obvious (and also correct), the result of looking at the gauge field at long distances is no different from shuffling a deck of cards, you are doing a random walk with large steps in a gauge group as you take each step on the lattice. This intuitive explanation is not Wilson's argument, which was different--- he showed that if you place colored sources far away, the correlations in the strong coupling expansion must follow a string between the sources, and the fluctuations of the string are small at large coupling. He also showed how to compute the correlation functions using these correlation strings, which dominate the long-distance correlations at large distances.
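
The group-theoretic point in the random-walk picture is easy to illustrate numerically (this is my own toy demonstration, not lattice QCD): multiply together random SU(2) elements, represented as unit quaternions, and watch the average trace collapse to its Haar value of zero after a few steps.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2(spread):
    """A random SU(2) element as a unit quaternion (w, x, y, z):
    rotation by an angle ~ N(0, spread) about a uniformly random axis."""
    theta = rng.normal(0.0, spread)
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])

def qmul(a, b):
    """Quaternion product = composition of the SU(2) elements."""
    w1, v1 = a[0], a[1:]
    w2, v2 = b[0], b[1:]
    return np.concatenate([[w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)])

def mean_trace(n_steps, spread=2.0, n_samples=2000):
    """Average of tr(U) = 2*w over products of n_steps random elements.
    For the Haar measure on SU(2), <tr U> = 0."""
    total = 0.0
    for _ in range(n_samples):
        q = np.array([1.0, 0.0, 0.0, 0.0])  # identity
        for _ in range(n_steps):
            q = qmul(q, random_su2(spread))
        total += 2 * q[0]
    return total / n_samples

one_step = mean_trace(1)    # a single step is still biased toward the identity
ten_steps = mean_trace(10)  # the product is essentially Haar-random
print(one_step, ten_steps)
```

The single-step average trace stays well above zero, while the ten-step product is indistinguishable from Haar-uniform: the random walk with large steps fills the group ergodically.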

What this means is that the correlation functions in gauge theory are exponentially decaying at strong coupling, and the long-distance theory is entirely random, corresponding to a Euclidean action which is zero. Zero action means completely uncorrelated, ultra-local fields; in other words, at distances far beyond the confinement length you don't see any gauge field, the dynamics is confined, and the exponential decay rate is (by Euclidean definition) the mass gap.

This is completely the opposite behavior than at short distances, where the coupling is weak, you have a good perturbation theory, and the fluctuations are described by a nearly classical loop expansion. The place where classical solutions describe the dynamics is in this short-distance limit.

In this limit, you have 4d instantons, and 3d instantons which are similar to monopoles. These objects, when their classical radius is small, are appropriate for describing effects in the gauge theory. Instantons in 4d are least-Euclidean-action configurations which (in Lorentzian signature) describe tunneling between different classical vacua; the 2+1 instantons, which become weird monopole-like things when line-extended, can be superposed statistically in an instanton gas, and then they have the effect of randomizing the 2+1 dimensional gauge field, as shown by Polyakov. This allows you to understand the interpolation between the short-distance physics, which is described by the classical theory and perturbative small fluctuations, and the long-distance physics, with a mass gap, no correlations, and zero action. The classical configurations, when placed randomly at a given density, can randomize the gauge field in 3d.

When instantons were discovered in 1976, David Gross and others hoped that a sea of instantons would randomize the gauge field at long distances to make the long-distance limit emerge from an instanton gas calculation and prove that there is a mass gap. This is not what happens, the instantons by themselves don't randomize the gauge field. However, in recent years, Argyres has considered the case of compactified 4d gauge theory, and in the limit that the dimension of the box is shorter than the confinement scale, you have to reproduce Polyakov's mechanism for mass-gap in 3d gauge theory. You do reproduce it, and the configurations which are involved are line-extended versions of Polyakov's solutions.

These line extended things can close into loops when you open the dimension fully, and in this case, you expect that these monopole loops will produce the randomization somehow, and these classical solutions can inform the long-distance limit. But the main ingredient here is the random placement of these solutions at long distances, that random scattering of instantons, or monopole loops, is what randomizes the gauge field at enormously long distance in the continuum picture, when you look at the description at weak coupling.

The randomization at long distances is the central mechanism for the mass gap, and this means that the classical descriptions are literally confined to a bag, beyond which you need to consider configurations which randomly fluctuate between one classical region and another. The gauge vacuum can be described by classical configurations only locally, the gluing between local descriptions has to reproduce the totally random fluctuations at long distances.

The mathematics work which starts with classical field theory starts with global classical solutions of Yang-Mills theory, which is simply mentally defective. If the procedure looked at local solutions, not global solutions, maybe there would be some way to paste together the close-to-classical behavior at each patch into some sort of description, but that's not what they do. They just consider global solutions to the equations and then try to produce a quantum theory from this object by deforming these classical global solutions. It is a hopeless procedure, because the solutions to classical Yang-Mills are rigidly deterministic, while the Euclidean gauge theory at longer scales is entirely randomized, so that the qualitative properties are entirely different at distances longer than the confinement radius.

It is possible that you can produce a rigorous construction of quantum field theory starting from the solutions to classical field theory at short distances, and patching them together at long distances like the definition of a manifold. Hairer in mathematics has done exactly this for solutions of stochastic PDE's whose stationary distributions are Euclidean versions of quantum theories, i.e. stochastically quantized field theories. His approach uses a "regularity structure" which is used to define the patching of the different regions together, and it reproduces the correct operator product relations in these theories.

His approach reproduces the renormalization of low-dimensional theories, it works, and it could plausibly be used to define 4d Yang-Mills (apart from technical details about how to control the operator dimensions at short distances, which he doesn't know how to do in the case of logarithmic running). Approaches which simply try to take global solutions of Yang-Mills and globally reconstruct the quantum theory from these are totally wrongheaded; they cannot produce the quantum theory, because they don't consider the randomized configurations at long distances.

answered Aug 9, 2014 by (7,720 points)

Sorry, but I cannot agree. What happens when the continuum limit is taken? Besides, why do you keep neglecting other work on the lattice?

A QFT on random fields is trivial--- it is usually an ultralocal field, with classical equation "field = 0" (that's the Gaussian case, e.g. a scalar field with action $S=\int \phi^2$; for gauge theory there is no classical equation because the action is zero, and the path integral is a pure average over all configurations of the gauge field), and an action which has no derivative terms. This is not a "scenario", it's 30-year-old, 100% established QCD physics.
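A two-line illustration of the ultralocal point (my own toy example, not from any of the works discussed): with $S=\int \phi^2$ on a lattice the measure factorizes site by site, so distinct sites are exactly independent and only the equal-site correlator survives.

```python
import numpy as np

# Ultralocal Gaussian field on a 1d lattice: action S = sum_x phi(x)^2,
# so the Euclidean weight exp(-S) factorizes site by site and every site
# is an independent Gaussian of variance 1/2 (weight exp(-phi^2)).
rng = np.random.default_rng(0)
phi = rng.normal(0.0, np.sqrt(0.5), size=(100_000, 64))  # samples x sites

# equal-site correlator <phi(x)phi(x)> = 1/2; distinct sites are uncorrelated
same = np.mean(phi[:, 10] * phi[:, 10])
far = np.mean(phi[:, 10] * phi[:, 50])
print("same-site:", same, " distant:", far)
```

The two-point function is a delta function in the site index; there is no propagation at all, which is what "ultralocal" means here.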

You can download an equilibrated QCD configuration, and look at the correlation between the gauge field product over a long line here and a long line there, it vanishes. This is the statement of mass gap in Lattice QCD, and it is not controversial.

The mathematical description of this is in Wilson's original lattice QCD paper, where he defines the strong coupling expansion. This expansion is around the point $g=\infty$, which is not only sensible, it is trivial--- it is the ultralocal limit where every link is statistically independent of every other link.

Please take a few days to simulate pure SU(2) Yang-Mills theory, and you will see this for yourself. It is very easy to do with modern hardware, you can write the program yourself, run it, and analyze it over a free weekend. If you are lazy, there are canned routines too.
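For a reader who wants to try this, here is a minimal sketch of what such a weekend program might look like (my own toy implementation, not a canned routine; lattice size, $\beta$, sweep count, and proposal width are arbitrary illustrative choices): Metropolis updates of pure SU(2) lattice gauge theory with the Wilson plaquette action, with the group realized as unit quaternions, whose scalar part equals $\tfrac12\,\mathrm{Re\,Tr}\,U$.

```python
import numpy as np

L, BETA, SWEEPS, EPS = 3, 2.0, 15, 0.4
rng = np.random.default_rng(1)

def mul(p, q):
    # unit-quaternion product; unit quaternions realize SU(2), and the
    # scalar part of the product gives (1/2) Re Tr of the matrix product
    return np.concatenate((
        [p[0]*q[0] - p[1:] @ q[1:]],
        p[0]*q[1:] + q[0]*p[1:] + np.cross(p[1:], q[1:])))

def dag(p):
    # quaternion conjugate <-> hermitian conjugate U^dagger
    return np.concatenate(([p[0]], -p[1:]))

def small_su2(eps):
    # random SU(2) element near the identity (Metropolis proposal)
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    a = eps * rng.uniform(-np.pi, np.pi)
    return np.concatenate(([np.cos(a)], np.sin(a)*axis))

# cold start: all links set to the identity
U = np.zeros((L, L, L, L, 4, 4))
U[..., 0] = 1.0

def shift(n, mu, s=1):
    m = list(n); m[mu] = (m[mu] + s) % L
    return tuple(m)

def staple(n, mu):
    # sum of the 6 staples closing a plaquette with link (n, mu)
    A = np.zeros(4)
    for nu in range(4):
        if nu == mu:
            continue
        A += mul(U[shift(n, mu)][nu],
                 mul(dag(U[shift(n, nu)][mu]), dag(U[n][nu])))
        m = shift(n, nu, -1)
        A += mul(dag(U[shift(m, mu)][nu]),
                 mul(dag(U[m][mu]), U[m][nu]))
    return A

def sweep():
    for n in np.ndindex(L, L, L, L):
        for mu in range(4):
            A = staple(n, mu)
            old, new = U[n][mu], mul(small_su2(EPS), U[n][mu])
            # dS = -beta * [ (1/2)Tr(U'A) - (1/2)Tr(UA) ]
            dS = -BETA * (mul(new, A)[0] - mul(old, A)[0])
            if dS < 0 or rng.uniform() < np.exp(-dS):
                U[n][mu] = new

def avg_plaquette():
    tot, cnt = 0.0, 0
    for n in np.ndindex(L, L, L, L):
        for mu in range(4):
            for nu in range(mu + 1, 4):
                P = mul(U[n][mu],
                        mul(U[shift(n, mu)][nu],
                            mul(dag(U[shift(n, nu)][mu]), dag(U[n][nu]))))
                tot += P[0]; cnt += 1
    return tot / cnt

for _ in range(SWEEPS):
    sweep()
print("average plaquette at beta=2.0:", avg_plaquette())
```

On a tiny $3^4$ lattice this runs in seconds; the measured average plaquette should interpolate between the strong-coupling behavior $\approx \beta/4$ at small $\beta$ and $1$ at large $\beta$, and at $\beta=2$ it sits in between.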

Thanks Ron, your physicist-style intuitions and narratives are much appreciated; every time, they make me feel I understand a bit more, although I don't think I truly understand them until I can substantiate them with some concrete technical details. But let me just try to reconstruct the elephant here a bit. You said:

> The qualitative reason for this behavior is simple to understand--- the field is nearly random on every link, except for a little bit of correlation between neighboring links. When you look at the product of the G's on two links going in the same direction, the product is closer to random than each factor, simply because multiplying together large random elements of a gauge group fills the gauge group ergodically.
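The quoted mechanism is easy to check numerically. A toy sketch (my own conventions, with SU(2) elements stored as unit quaternions): for independent, isotropically distributed near-identity elements, the expected half-trace of a product of $n$ of them equals $\big(E[\tfrac12\mathrm{Tr}\,U]\big)^n$, so it decays exponentially in $n$ toward the Haar-measure value $0$ -- the product fills the group ergodically.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2(eps):
    # near-identity SU(2) element as a unit quaternion (a0, a1, a2, a3):
    # rotation by a small random angle about a random axis
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = eps * rng.uniform(0, np.pi)
    return np.concatenate(([np.cos(angle/2)], np.sin(angle/2) * axis))

def mul(p, q):
    # unit-quaternion product, realizing SU(2) group multiplication
    a0, a = p[0], p[1:]
    b0, b = q[0], q[1:]
    return np.concatenate(([a0*b0 - a @ b], a0*b + b0*a + np.cross(a, b)))

def mean_half_trace(n_links, eps=0.5, samples=1000):
    # (1/2) Tr U is the scalar part a0 of the quaternion
    total = 0.0
    for _ in range(samples):
        u = np.array([1.0, 0.0, 0.0, 0.0])
        for _ in range(n_links):
            u = mul(u, random_su2(eps))
        total += u[0]
    return total / samples

for n in (1, 4, 16, 64):
    print(n, mean_half_trace(n))
```

A single factor retains a half-trace near 1, while the 64-fold product is statistically indistinguishable from a Haar-random element; this exponential loss of correlation along the line of links is exactly the decay described above.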

But what is the reason that this argument does not apply to abelian theories such as (lattice) QED? (I take from your wording that this is something special about nonabelian theories.)

Again, this is lattice computation from the thirty years before the last ten. What can you say in the continuum? Why do you ignore the latest results while going around with things that are useless for this matter?

Regensburg 2007 was a conference, Lattice 2007, where lattice computations gained momentum. Yang-Mills theory was evaluated on a lattice of $127^4$ points for SU(2). So we know for certain that Ron's argument is simply wrong. If this is not enough, we also know that the theory has a well-defined spectrum that appears not to be bounded from above. See the works of Teper et al. and Morningstar et al.

These papers are not irrelevant at all. They state the status of Yang-Mills theory from lattice studies as of today, and they are far, far away from what you are claiming.

These papers are NOT contradicting anything I said! I have said nothing that is controversial in any way, the theory hasn't changed, I have simulated it (briefly once), lots of people have simulated it, there is nothing to update, these things I am telling you are simply facts.

The papers you are giving me are doing this stuff in this gauge, stuff in that gauge, whatever! It's interesting, but none of them change at all the simple observation that pure gauge theory in the infrared is described by the strong coupling expansion fixed point, i.e. totally statistically completely uncorrelated link-variables.

They can't change it, because it is true. It is the basic observation you see in simulation and in the strong coupling expansion, and this statement is the actual computational statement which is "mass gap" in numerical lattice gauge theory--- the complete independence (or, more precisely, exponentially decaying correlation) of gauge links at distances longer than the confinement radius. It is observed in all lattice studies, and nothing can ever change regarding this; it is an obvious observation that you can see for yourself if you take a few hours to run a simulation.

You are asserting nonsense with such confidence, so as to do what is called FUD regarding this answer. This borders on psychopathology as far as I am concerned. You don't need to read anything to understand these things (I didn't read anything), just simulate lattice gauge theory once, briefly, and think a little about strong coupling expansion.
