# Why does the divergence of perturbation theory in interacting QFT imply its Hilbert space to be non-Fock?

+ 5 like - 0 dislike
7233 views

Arnold remarked in a comment under his answer to Ron's question that

The fact that naive perturbation theory produces infinite corrections, no matter which finite counterterms are used, proves that no Fock space works.

I would like to understand how to infer the conclusion "no Fock space works" from "The fact that naive perturbation theory produces infinite corrections, no matter which finite counterterms are used".

asked Sep 17, 2014

+ 3 like - 0 dislike

[In view of the discussion let me mention that the context is that of Poincaré-covariant quantum field theories. It is clear that giving up covariance makes many things possible that are not possible otherwise, and allows one to make rigorous sense of renormalization in simpler situations, such as for free covariant fields interacting with classical external fields, or for the Lee model.]

The fact that naive perturbation theory produces infinite corrections, no matter which finite counterterms are used, proves that no Fock space supports an interacting quantum field theory. Hence there is no Hilbert space featuring physical particles at every time.

Here is my non-rigorous proof:

If there were a Fock space defining the interacting theory at every value of the coupling parameter $g$, it would represent the particles by annihilation fields $a_g(x)$ corresponding to some mass $m(g)$. Taking the limit $g\to 0$ (assuming it exists), we see that the Fock spaces at small $g$ have the same structure as the limiting Fock space. By continuity, only continuous labels of Poincaré representations can change; the others will be fixed for small enough $g$. The only continuous label is the mass, so the Fock spaces differ only in the mass $m(g)$. All other structure is rigid and hence preserved. In particular, if we assume the existence of a power series in $g$, the fields are given by

$\Phi_g(x)=\frac12(a_g^*(x)+a_g(x))+O(g).$

Now consider the operator field equations at coupling $g$. For simplicity, take $\Phi^4$ theory, where they take the form (the limit term guarantees a correctly defined normal-ordered product)

$\nabla^2 \Phi_g(x)+ m(g)^2 \Phi_g(x) + g \lim_{\epsilon\to 0} {:}\Phi_g(x+\epsilon u)\Phi_g(x)\Phi_g(x-\epsilon u){:} = 0.$

(This is called a Yang-Feldman equation.) Multiplying by the negative propagator $(\nabla^2 + m(g)^2)^{-1}$, we obtain a fixed-point equation for $\Phi_g(x)$, which can be expanded in powers of $g$; all coefficients will be finite, because the $\Phi_g(x)$, and hence their Taylor coefficients, are (after smearing) well-defined operators on the corresponding Fock space. Going to the Fourier domain and taking vacuum expectation values, one finds a perturbative expansion with finite coefficients, which is essentially the textbook expansion of vacuum expectation values corresponding to perturbation around the solution with the correct mass.
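Schematically, the fixed-point iteration reads as follows (a sketch only, with normal ordering written as ${:}\cdot{:}$, smearing suppressed, and $\Phi_0$ denoting the free field of mass $m(g)$):

$\Phi_g = \Phi_0 - g\,(\nabla^2 + m(g)^2)^{-1}\,{:}\Phi_g^3{:} = \Phi_0 - g\,(\nabla^2 + m(g)^2)^{-1}\,{:}\Phi_0^3{:} + O(g^2).$

Iterating once more and collecting powers of $g$ produces the term-by-term finite expansion referred to above.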

This proof is not rigorous, but suggestive, surely at the same level as Ron's arguments. I don't know whether it can be made rigorous by more careful arguments. But it only makes the sort of assumptions physicists always make when solving their problems. If these assumptions are not satisfied but an interacting theory is still Fock, it at least means that there is no mathematically natural way of constructing the operators, even perturbatively. Thus for practical purposes, i.e., to use it for physical computations, it amounts to the same thing as if the Fock representation didn't exist.

answered Sep 17, 2014 by (15,518 points)
edited Sep 30, 2014

Actually, there are mathematically sound interacting quantum field theories, even in 3+1 dimensions. They may not be the most fundamental theories; however, they are good starting points for understanding renormalization theory at a rigorous level. Not to mention all the perfectly well-defined interacting theories with cutoffs! Haag's theorem simply states that there are inequivalent representations of the CCR for operators that satisfy the axioms of QFT, e.g. the Wightman axioms. This means one has to choose one specific representation of the CCR. Nevertheless, within a representation it is possible to define both free and interacting theories. As examples, see the following rigorous renormalized QFTs:

http://projecteuclid.org/euclid.cmp/1103842445

http://scitation.aip.org/content/aip/journal/jmp/5/9/10.1063/1.1704225

What you say about path integrals is not completely true. The path integral is far from a well-understood mathematical tool. There is nothing rigorous in the definition of the path integral in real time. In Euclidean time there are rigorous formulations of the path integral, both for particles and for fields, but they require heavy tools of probability and stochastic integration. And rotating back to real time is a rigorous practice only in very few situations. So there are indeed a lot of problems in the definition of path integrals, even quadratic ones. Nevertheless they are very interesting, also mathematically. The definitions adopted by theoretical physicists are usually quite sloppy, and often unacceptable from a mathematical standpoint.

I know that theoretical physicists do not care much about rigour, and that does not diminish the value of their work, which often provides very precious intuitions. But this does not make rigour useless or bad practice. It allows one to understand things a lot more deeply, in my opinion. The problem is that rigorous results are very difficult to obtain. I understand these results may not be so interesting to you, but your opinion of them is too biased, in my opinion.

Perturbative renormalization is a lot less satisfactory than the construction of a complete renormalized dynamics for the system, even from a physical standpoint. But to build a complete theory you need the proper precision, and mathematical rigour. And "no Fock space supports an interacting field theory" is just a quote from Arnold's answer.

You know, physics and mathematics are all about proving things and making effective predictions.

If you have understood everything perfectly, and yours is the absolute truth, as opposed to ridiculous nonsense, I expect you are able to prove a lot of interesting results using your point of view...

Good luck with that! I am eager to see new groundbreaking developments.

Ah, and also: "the axiom of choice for the continuum" may not be used in physics, but rest assured that the Hahn-Banach theorem is quite central. Good luck defining e.g. distributions (with interesting properties) without it. In ZF + Hahn-Banach there are non-measurable sets. This again leads to the previous point: what are you able to prove and predict in the Solovay model, or when the uncountable collections are proper classes?

It is easy to judge the work done by others and classify it as junk or unimportant or nonsense. It is also not worth it. What I suggest is that you provide new interesting contributions, and prove the others wrong or go beyond what they can do. Otherwise, everything you say is just a matter of personal taste and stubborn opinion.

Generalize the theory of Hairer to the critical case, and to a real time field theory. Then I will be the first to advertise your results!

Look, I am not repeating a tired dogma; I am just stating that if you change the axioms, you have to prove everything from scratch (and I expect it is not possible to prove everything, since the theories are inequivalent). And I would like to see it done before adopting the existence of an inaccessible cardinal as the new tired dogma. I am not saying I don't believe it is possible; I am saying that I want proofs that everything mathematically important (say, for the description of the physical world) is still there in the different axiomatic system.

Just a clarification:

In the proof you cited, it needs to be clarified if you use the ultrafilter lemma:

"Therefore the nonzeroing sets form a nonprincipal filter extending the finite-complement filter.Using dependent choice, either you have an infinite sequence of disjoint restrictions of v S1, S2, S3, ... which are nonzeroing, or the restriction onto one of the sets makes an ultrafilter."

It is not completely clear to me: how do you pass from the non-principal filter to the ultrafilter? Are you using the fact that every filter is a subset of an ultrafilter? You cannot use the ultrafilter lemma, because it implies the existence of non-measurable sets. However, there is another proof below yours that does not use ultrafilters, so I assume the result holds without the ultrafilter lemma.

A final remark/question:

The space of real numbers has a very rich structure. But how do you characterize the (topological) dual of spaces with much less structure, without Hahn-Banach? No one assures you that the dual is not trivial. Take the space of rapidly decreasing functions. This is a Fréchet space, with a countable family of seminorms. Its topological dual is very important in physical applications, because it is the space of tempered distributions $\mathscr{S}'$. Now, are you sure that it is possible to prove that $\mathscr{S}'$ separates points on $\mathscr{S}$ without Hahn-Banach (or the ultrafilter lemma, i.e. in your theory where every set is measurable)?
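For reference, the Fréchet topology in question is the standard one on $\mathscr{S}(\mathbb{R}^n)$, given by the countable family of seminorms

$\|f\|_{\alpha,\beta} = \sup_{x\in\mathbb{R}^n} \bigl| x^{\alpha}\,\partial^{\beta} f(x) \bigr|, \qquad \alpha,\beta \in \mathbb{N}_0^{n},$

with $\mathscr{S}'$ the space of linear functionals continuous with respect to finitely many of these seminorms.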

I don't know, maybe it is possible. Maybe you do not have sufficient structure to do that in a constructive way, without ultrafilters. But you see, there is a lot of stuff that has to be proved from scratch, and the closer a result lies to the boundary of what Hahn-Banach is really necessary for, the more difficult it will be to prove it by other means; and at a certain point it becomes impossible (because the theories are inequivalent). It seems to me that you have a very long and difficult road ahead to complete the program, and even if you are (over)confident that it is possible, it cannot be taken for granted. So I am not a believer in old mathematics; I am a skeptic, as any scientist is. But I agree with the proofs (the ones I know) given using ZFC; I will wait for the reproduction of (the majority of) the known results using ZF + dependent choice + an inaccessible cardinal to see whether I agree with those as well.

Sure enough, the Solovay model is intriguing. As I see/understand it, the (rigorous) treatment of stochastic processes becomes easier and in a sense more satisfactory; the price you pay is that functional analysis (and to some degree also algebra and geometry, because of the "geometric" Hahn-Banach and Boolean prime ideal theorems) becomes (at least) messier.

This is a matter of what you need. To formulate path integrals, the Solovay model may be better; but then giving meaning to the results as functional-analytic or geometric objects may be painful, because ultimately you will have to see what still holds in L. I don't know exactly, I am just speculating, but I'm afraid (a geometric version of) Hahn-Banach is also used in the theory of Lie groups. If so, then also in analyzing symmetries (e.g. of the action) you may have to be careful about the whole universe.

As I said, it is an intriguing point of view for sure, but you have to pay attention to a lot of annoying (but maybe very important) details. Obviously, this is true in almost any theory! ;-)

Regarding the ultrafilter question: I am not using the ultrafilter lemma, I am using its explicit negation in this particular case. I gave a procedure to pass from one nonzeroing set to a nonzeroing subset, which is a sequential procedure, so it uses only dependent choice; and it has to continue forever, because the only way it can terminate is if one of the subsets defines an ultrafilter, and that can't happen, because there is no ultrafilter. This part I hacked together to finish the proof after the earlier, simpler methods didn't completely exhaust the possibilities, and this is why Blass's proof is ultimately cleaner. Someone pointed out to me in the comments that it is easy to prove "no ultrafilter" from "everything measurable", so you can certainly inline this into my proof and make the argument cleaner, but Blass's argument is probably cleaner still, so I lost the motivation to do it.

Contrary to one's immediate intuition, every old result is preserved in the new way of thinking; it just gets a slight reinterpretation. The way you do this is by embedding an L-submodel in the Solovay universe, where L is the Gödel constructible universe. L is the simplest logical model of ZF starting from the ordinals required for ZF and the empty set, and it is also a model of ZFC. Every ZF set-theoretic universe has an L-model sitting inside it which obeys the usual AC. In this L-submodel, every ordinary theorem holds: there are non-measurable sets, the Hahn-Banach theorem is true, etc. So all the old theorems become L-theorems, and all the objects you construct in this way you label with a subscript "L". When you embed the submodel in the Solovay universe, there are things that are absolute, meaning they don't change at all, and these include the integers and all countable constructions. There are other things that lift to sensible things in the full universe; for example, all the measurable sets lift to sensible measurable sets in the new universe with the same measure, but they get extended with new points. By contrast, all the nonmeasurable sets in L lift to measure-zero dust sets in the full universe.

The intuition is just that the L-universe is countable and when you make R measurable, R is getting forced to be bigger than the whole previous universe, i.e. a randomly chosen real number is always outside of L, and an infinite random sequence of numbers can be used to model L. This is also the intuition for why inaccessibles are required for the relative consistency--- when you extend the model to make every subset of R measurable, the new measurable R is strong enough to model the whole old universe as a set, and then do set theoretic operations on this.

Yes, I agree that this approach does require rethinking, and it is a considerable amount of work, but just because people have avoided this work for 50 years doesn't make it less necessary. Why would you force a physicist (like me) to sit down and learn obscure and jargonny logic literature just in order to make sense of the simplest path integral constructions? Even though I like it now, it's not something I enjoyed at first.

There is a great simplification gained when every subset is measurable--- any operation you can do on real numbers you can automatically do on random variables, simply because every set operation produces a still-measurable set. This means that the random variables are no longer second-class citizens in the real-number system, and any deterministic construction can be lifted to a probabilistic construction without a separate analysis of whether the operations involved are measurable.

This also means that you can straightforwardly define limiting procedures using random variables the same as you do for ordinary real numbers, because you never reach any contradiction from any set operation. These operations produce objects which are in general not L-objects, so if you want to translate them into the ordinary ZFC universe, it is a pain in the neck. This is why the rigorous construction of even the free field theory, say in the work of Sheffield, requires a long measure-theoretic detour (which was not done by Sheffield, but by a previous author), even though the construction involved is ultimately very simple in terms of random variables (it's "pick infinitely many Gaussian random variables and Fourier transform").
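The "pick infinitely many Gaussian random variables and Fourier transform" recipe is easy to sketch numerically. Below is a minimal illustrative toy (my own code, not from the thread or from Sheffield's paper): a massive Gaussian free field on a periodic 1D lattice, built by drawing one independent Gaussian amplitude per Fourier mode, with variance given by the lattice propagator, and transforming back to position space.

```python
import numpy as np

def sample_free_field(n=256, mass=1.0, seed=None):
    """Draw one sample of a massive Gaussian free field on a periodic
    1D lattice of n sites, mode by mode in Fourier space."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.arange(n) / n
    # Lattice propagator: inverse eigenvalues of (-Laplacian + mass^2),
    # where 2 - 2*cos(k) is the lattice Laplacian spectrum.
    variance = 1.0 / (2.0 - 2.0 * np.cos(k) + mass**2)
    # "Pick infinitely many Gaussians": one independent complex Gaussian
    # amplitude per mode, with the propagator as its total variance.
    modes = np.sqrt(variance / 2.0) * (
        rng.standard_normal(n) + 1j * rng.standard_normal(n)
    )
    # "...and Fourier transform": back to position space. Taking the real
    # part gives a real Gaussian field (at the cost of a factor 1/2 in the
    # variance bookkeeping, which is fine for this sketch).
    return np.fft.ifft(modes).real * np.sqrt(n)
```

In this normalization, the per-site variance is $(1/2n)\sum_k 1/(2-2\cos k + m^2)$; the continuum, infinite-volume limit of exactly this kind of construction is where the measure-theoretic detour mentioned above becomes necessary in ZFC.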

Hairer just sidestepped every problem of measure, really by cheating. He simply defined the approximating SPDE solutions and took the limit, working in a well-developed "rigorous" field where people simply stopped worrying about measure paradoxes a long time ago. As far as I can see, the statisticians never resolved the measure paradoxes; they never explicitly work in a measurable universe. They simply made a social compact among themselves to ignore them, and eventually their results became important enough that nobody notices that the results really only hold in a measurable universe. The way they ignore measure paradoxes is by speaking about random variables as if they were taking actual values (this is something intuitive you do also; everyone does it), then taking limits using estimates on random variables, and mixing in theorems proved about the convergence of ordinary real numbers. In ordinary mathematics, every theorem about real-number convergence with given estimates needs to be explicitly reproven for random variables, to show that no nonmeasurable sets are produced in the proof.

It is possible that I am wrong, and the statisticians simply reproved all the theorems about real numbers for the random variable case, but I don't see where this happened. It is much more likely that they used their intuition to transfer all the theorems to random variables without worrying about paradoxes, and this simply means that they are working in a universe where every set is measurable.

Regarding your specific question about separating fast-falloff test functions using distributions in a measurable universe, I don't know the answer in detail, and I agree that it is a very good question (I think now, after you bring it up, that it is the central question of such approaches). What I know automatically is that the L-version of the theorem holds, so that L-test-functions are separated by L-distributions, and that's how I would state the result provisionally, before determining whether it is true in the full universe. What I don't know is whether this extends to the measurable case, where one can also consider additional random fast-falloff functions (and random distributions). I believe the theorem should hold here, just because there is an explicit countable basis for the test functions and an explicit continuity, which guarantees that once you know the action of L-distributions on L-functions, you can extend to the action on all (measurable) test functions by continuity; but it is important to be sure, and to have a general theorem for peace of mind. The reason this example is different from the linked double-dual question (where certain extensions stop being well defined in the measurable universe) is that in the double-dual case there is no continuity, so the double dual needs to be defined even on very wild sequences of the dual space; these are the Gaussian random-variable sequences with fast-growing norm.
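One concrete piece of the "explicit countable basis and explicit continuity" (my own illustration, with the same caveat about the full measurable universe): point evaluations are already tempered distributions, bounded by a single Schwartz seminorm,

$\delta_x(f) = f(x), \qquad |\delta_x(f)| \le \sup_{y} |f(y)|,$

so for $f \neq g$ in $\mathscr{S}$, any $x$ with $f(x) \neq g(x)$ gives a separating functional without invoking Hahn-Banach; the Hermite functions supply the countable basis. Whether this argument survives unchanged for the additional random test functions of the measurable universe is exactly the open point above.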

The context of the discussion (taken over from the other thread mentioned in the OP) was more restrictive and excluded your examples. I added a corresponding statement at the top of my answer.
