# If renormalization scale is arbitrary, why do we care about running couplings?

+ 7 like - 0 dislike
2896 views

For the bounty, please verify the following reasoning:

[copied from comment below]

Ah right, so the idea is that observable quantities overall must be independent of the renormalization scale. But at each order in perturbation theory the result can depend on the renormalization scale, right? And it does so in exactly the right way to invalidate the use of a perturbation series when the external momenta are of the wrong order. This happens because the external momenta enter the loop integrals through momentum conservation at the vertices, and the renormalization group equation ensures that the right things cancel.

[original question]

In Brief

People say that perturbation theory breaks down when the couplings run to high values. But given that this running depends on an arbitrary mass scale, how is this argument logical?!

A Longer Exposition (TL;DR)

It's obvious that Feynman diagram techniques work best when the coupling constant is small, since then you can neglect higher order terms. But various sources (e.g. Peskin and Schroeder) make claims about running couplings that seem incomplete to me.

I often read sentences like

• if the renormalized coupling is small at low energy, then Feynman diagrams are good in that region
• for asymptotically free theories, Feynman diagrams are good for high energy calculations

I understand exactly why these are true. But surely something much more general is true, namely

• if the coupling constant is small at any renormalization scale, then Feynman diagrams are good for calculations

My reasoning is as follows. Observable quantities are completely independent of the running of the coupling, so you can just choose your scale appropriately so that the coupling is small for the expansion. Hey presto, you can use Feynman diagrams.

Is this true? I strongly expect the answer to be no but I can't convince myself why. Here's my attempt at a self-rebuttal.

• my argument above is incorrect, because I've assumed there's only a single coupling. In reality there are contributions from "irrelevant" terms at higher energies whose couplings can't be fixed from low energy observations.

This leads me to hypothesise that the following is true (would you agree?)

• if the coupling constant is small at any renormalization scale above the scale of your observations then Feynman diagrams are good

This seems more plausible to me, but it would mean that Feynman diagrams would be good for low-energy strong-interaction processes, for example. This feels wrong in a sense, because the renormalized coupling is large there. But then again the renormalization scale is arbitrary, so maybe that doesn't matter.

Many thanks in advance!

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
asked Oct 16, 2013
retagged Nov 1, 2015
Short answer (due to time): the running of the coupling is a way of summing the leading logarithmic corrections. So if you choose the "wrong" scale you end up with large logarithms in your Feynman diagrams. So in the series you trade a large running coupling and $\mathcal{O}(1)$ coefficients for a small coupling and large coefficients involving the log of the ratio of scales between the process of interest and the renormalization point. Either way the series is no good in that regime.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Michael Brown
Ah right, so the idea is that observable quantities overall must be independent of the renormalization scale. But at each order in perturbation theory the result can depend on the renormalization scale, right? And it does so in exactly the right way to invalidate the use of a perturbation series when the external momenta are of the wrong order. This happens because the external momenta enter the loop integrals through momentum conservation at the vertices, and the renormalization group equation ensures that the right things cancel.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
Do you know a reference which goes through this problem in detail? I've tried to construct an example myself, but it's not much use because I have nothing to check my working against. Most textbooks just gloss over it (at least Weinberg and P&S do...)

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
Though I'm a little worried about the claim in my first comment, because I don't think I've seen a Feynman diagram evaluated with explicit dependence on a renormalization scale... Maybe I'm still wrong!

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
(disclaimer: I haven't read all of either P&S or Weinberg, so if somebody gives me some page references where there is an example of loop calculations depending on the scale explicitly I'll happily look it up and use it!)

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
One final thing - if my first comment is correct, then how come "traditional" renormalization schemes like on-shell and minimal subtraction don't contain any dependence on a renormalization scale...? Is it because these assume the cutoff is $\infty$ so they can ignore that term...?

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
No - I've thought about it and I reckon the reason that "traditional" renormalization procedures don't have Feynman diagrams depending on the renormalization scale is that they deliberately fix the renormalization scale at some variable (e.g. external momentum/mass) already in the problem, which hides that dependence. If you (or someone else) finds a moment, I'd appreciate your input on whether this and my first comment are correct.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
(1) Your first comment is basically right. The exact answer is independent of the RG scale, but the RG improves perturbation theory and extends its reach to places where the large logs would normally kill it. (2) I'm a little confused as to why you say Peskin and Schroeder hide the dependence on the scale... If you regulate using dim reg, for example, you need to introduce a parameter $\mu$ for dimensional-analysis reasons; then if you use the MS-bar scheme, $\mu$ is the RG scale. Ch. 12 of P&S has the RG scale in it: it's called $M$ and defined in eq. 12.30 (also see the text around eq. 12.50).

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Andrew
@Andrew - thanks for your response. I've looked at the relevant part of PS and can't find an expression of the form $\textrm{1 loop correction} = f(M)$ where $M$ is the renormalization scale. In fact their computation of one loop corrections seems to not depend on $M$ explicitly... And I agree that one should introduce $\mu$ for dimensional reasons in dim reg, but a lot of places don't. I'm looking for an example where someone does a simple calculation involving an explicit renormalization scale, so that I can work through it and verify the dependence of loop corrections on $M$.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
(In particular I don't really want or need that calculation to be convoluted with a discussion of running of couplings or the computation of the $\beta$ function. Have you got a source which does (say) dim reg rigorously from the off, and (say) does a simple QED calculation verifying that the loop corrections depend on $M$ at each order? I've done some myself, but want to check I'm right! If you post that and your comment as an answer I'll happily accept it. Thanks again!)

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Edward Hughes
I am afraid I don't correctly understand your question. However, here is a practical remark. If, in QED, we take into account the first loop correction, we get, for the coupling $\alpha$, the following relation, where $\mu$ is the energy scale: $\alpha(\mu) = \dfrac{\alpha(\mu_0)}{1 - \frac{2}{3\pi} \alpha(\mu_0)\log \frac{\mu}{\mu_0}}$. (This relation is only valid while the second term in the denominator is $\ll 1$.) From this we see that, for increasing momenta (decreasing distances), the coupling $\alpha$ increases.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Trimok
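The one-loop relation in the comment above is straightforward to evaluate numerically. A minimal sketch in Python (the input $\alpha(\mu_0)\approx 1/137$ at the electron mass is the standard QED reference point; only the electron loop is included, so this understates the measured running, which includes all charged fermions):

```python
import math

def alpha_qed_1loop(mu, mu0=0.000511, alpha0=1 / 137.036):
    """One-loop QED running coupling (scales in GeV):
    alpha(mu) = alpha(mu0) / (1 - (2/3pi) * alpha(mu0) * ln(mu/mu0)).
    Only trustworthy while the log term in the denominator stays << 1."""
    denom = 1.0 - (2.0 / (3.0 * math.pi)) * alpha0 * math.log(mu / mu0)
    return alpha0 / denom

# The coupling grows with energy (decreasing distance):
print(1 / alpha_qed_1loop(0.000511))  # 137.036 at the electron mass
print(1 / alpha_qed_1loop(91.19))     # ~134.5 at the Z mass (electron loop only;
                                      # including all charged fermions gives ~128)
```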
Did you take a look at Weinberg Volume II, pages 111-130? I think your answer lies there.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Ali Moh

+ 2 like - 0 dislike

The main problem lies in the "large logarithms". Indeed, suppose you want to calculate some quantity in quantum field theory, for instance a Green function. In perturbation theory this is something like:

$$\tilde{G}(p_1,...,p_n)=\sum_k g^k F_k(p_1,...,p_n)$$

for some generic functions $F_k$, where $g$ is the coupling constant. It's not enough to require a small $g$: you need small $g$ AND small $F_k$, for every value of the momenta $p_i$ (that is, for every value of the energy scale of your system).

A nice little calculation to understand this point. It's obvious that:

$$\int_0^\infty \frac{dx}{x+a}=\left[\log(x+a)\right]_0^\infty=\infty$$

Let's use a cutoff: $$\int_0^\Lambda \frac{dx}{x+a}=\log\frac{\Lambda+a}{a}$$

This is still infinite if the (unphysical) cutoff is removed. The whole point of renormalization is to show that a finite limit exists (this is "Fourier-dual" to sending the discretization interval of the theory to zero). This quantity is finite:

$$\int_0^\Lambda \frac{dx}{x+a}-\int_0^\Lambda \frac{dx}{x+b} \rightarrow \log\frac{b}{a} \quad (\Lambda \rightarrow \infty)$$

But if $a \rightarrow \infty$ the infinity strikes back! So for a generic quantity $F(p)$ regularized to $F(p)-F(0)$ we want at least two things: that the coupling is small at that momentum $p$, and that $p$ is not too far from zero. But zero is arbitrary: we can choose an arbitrary (subtraction) scale. So we can vary this arbitrary scale $\mu$ in such a way that it is always near the energy scale we are probing.
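The finite subtraction above is easy to check numerically; a quick sketch (the values of $a$, $b$ and the cutoffs are arbitrary illustrations):

```python
import math

def cutoff_integral(Lam, a):
    """The regulated integral in closed form: int_0^Lam dx/(x+a) = log((Lam+a)/a)."""
    return math.log((Lam + a) / a)

a, b = 1.0, 5.0
for Lam in (1e2, 1e4, 1e8):
    diff = cutoff_integral(Lam, a) - cutoff_integral(Lam, b)
    print(Lam, diff)  # the difference tends to log(b/a) as the cutoff grows

print(math.log(b / a))  # the finite limit, log 5
```

Each integral diverges separately as $\Lambda \rightarrow \infty$, but their difference settles onto $\log(b/a)$, which is the point of the subtraction.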

It is convenient to take this scale $\mu$ at the same value as the renormalization scale. This is the energy at which you impose some finiteness conditions (usually two conditions on the two-point Green function and one condition on the four-point one). The finiteness conditions are real physical measurements at an arbitrary energy scale, so they fix the universe in which you live. If you change $\mu$ without changing the mass, charge, etc., you are changing universe. The point of the renormalization group equations is to move between the different subtraction points of the theory while remaining in your universe. And of course every physical quantity is independent of this arbitrary scale.

EDIT: Some extra motivations for the running couplings and renormalization group equations, directly from Schwartz:

The continuum RG is an extremely practical tool for getting partial results for high-order loops from low-order loops. [...]

Recall [...] that the difference between the momentum-space Coulomb potential $V(t)$ at two scales, $t_1$ and $t_2$, was proportional to [...] $\ln t_1$ for $t_1 \ll t_2$. The RG is able to reproduce this logarithm, and similar logarithms of physical quantities. Moreover, the solution to the RG equation is equivalent to summing series of logarithms to all orders in perturbation theory. With these all-orders results, qualitatively important aspects of field theory can be understood quantitatively. Two of the most important examples are the asymptotic behavior of gauge theories, and critical exponents near second-order phase transitions.

[...] $$e^2_{eff}(p^2)=\frac{e^2_R}{1-\frac{e^2_R}{12 \pi^2}\ln\frac{p^2}{\mu^2}}, \qquad e_R=e_{eff}(\mu)$$ This is the effective coupling including the 1-loop 1PI graphs. This is called leading-logarithmic resummation. Once all of these 1PI 1-loop contributions are included, the next terms we are missing should be subleading in some expansion. [...] However, it is not obvious at this point that there cannot be a contribution of the form $\ln^2\frac{p^2}{\mu^2}$ from a 2-loop 1PI graph. To check, we would need to perform the full two-loop calculation, including graphs with loops and counterterms. As you might imagine, trying to resum large logarithms beyond the leading-logarithmic level diagrammatically is extremely impractical. The RG provides a shortcut to systematic resummation beyond the leading-logarithmic level.
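The resummation quoted above can be probed numerically: expanding $e^2_{eff}$ in powers of $e^2_R$ gives a geometric series in $x = \frac{e^2_R}{12\pi^2}\ln\frac{p^2}{\mu^2}$, and a fixed-order truncation fails once the logarithm is large, even though $e_R$ itself is small. A sketch (all numerical values are illustrative only):

```python
import math

def e2_eff(e2_R, p2, mu2):
    """Leading-log resummed effective coupling:
    e_eff^2 = e_R^2 / (1 - (e_R^2 / 12 pi^2) * ln(p^2 / mu^2))."""
    x = e2_R / (12.0 * math.pi ** 2) * math.log(p2 / mu2)
    return e2_R / (1.0 - x)

def e2_truncated(e2_R, p2, mu2, order):
    """The same quantity truncated at a fixed order in perturbation
    theory: the geometric series e_R^2 * (1 + x + ... + x^order)."""
    x = e2_R / (12.0 * math.pi ** 2) * math.log(p2 / mu2)
    return e2_R * sum(x ** k for k in range(order + 1))

e2_R, mu2 = 0.3, 1.0
# Moderate log: the second-order truncation already matches the resummed result.
print(e2_eff(e2_R, 10.0, mu2), e2_truncated(e2_R, 10.0, mu2, 2))
# Enormous scale ratio: x is O(1), so truncations converge slowly even
# though e_R is small -- the "large logarithm" problem.
p2_big = math.exp(300.0)
print(e2_eff(e2_R, p2_big, mu2), e2_truncated(e2_R, p2_big, mu2, 2))
```

This is exactly the trade-off described in the comments: at a fixed order you can have a small coupling multiplied by a large logarithm, and the series is no good in that regime unless the logs are resummed.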

Another example: in supersymmetry you usually have nice (theoretically predicted) renormalization conditions for your couplings at very high energy (because you expect some ordering principle from the underlying fundamental theory, string theory for instance). To get predictions for the couplings you must RG-evolve them all down to the electroweak scale, or to the scales where humans perform experiments. Using the RG equations ensures that the loop expansions for calculations of observables will not suffer from very large logarithms.
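The running-down procedure can be sketched with the analytic solution of the one-loop RG equation. The coefficient $b=3$ (the one-loop coefficient of SUSY QCD) and the unification inputs below are textbook ballpark values, not a precision fit:

```python
import math

def run_down_1loop(alpha_high, mu_high, mu_low, b):
    """Analytic solution of the one-loop RG equation
    d(alpha)/d(ln mu) = -(b / 2pi) * alpha^2, evolved downward:
    1/alpha(mu_low) = 1/alpha(mu_high) - (b / 2pi) * ln(mu_high / mu_low)."""
    inv = 1.0 / alpha_high - (b / (2.0 * math.pi)) * math.log(mu_high / mu_low)
    return 1.0 / inv

# Illustrative only: run a unified coupling alpha ~ 1/25 at ~2e16 GeV
# down to the Z mass with b = 3 (one-loop SUSY QCD). With b > 0 the
# coupling grows toward low energies, as expected for QCD.
alpha_s_mz = run_down_1loop(1.0 / 25.0, 2.0e16, 91.19, b=3.0)
print(alpha_s_mz)  # ~0.11, the right ballpark for alpha_s(m_Z)
```

No large logarithm ever appears in this procedure: the full $\ln(\mu_{high}/\mu_{low}) \approx 33$ is absorbed into the evolution of the coupling rather than left in the coefficients of the perturbative series.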

A suggested reference: Schwartz, Quantum Field Theory and the Standard Model; see for instance pages 422 and 313.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Rexcirus
answered Sep 12, 2015 by (20 points)
When you say "a quantity regularized to $F(p)-F(0)$", do you mean that $0$ is an arbitrary momentum scale at which we measure the renormalized coupling? And I'm afraid I still don't understand. I know that you can take $\mu$ to be whatever scale you want, but I don't see how this answers the original question: why can we extract physical information from the dependence on the renormalization scale $\mu$, if the latter is arbitrary?

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Javier
Yes, for the first question. For the rest, I will clarify in the main answer.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Rexcirus
The page 313 reference is actually really interesting. I don't think I have seen an explicit distinction between renormalized charge and effective charge anywhere else. It makes sense, though, because the behavior of the effective charge with energy is physically relevant. I appreciate your answer; I will take a little time to think about it more deeply.

This post imported from StackExchange Physics at 2015-11-01 18:07 (UTC), posted by SE-user Javier

The running coupling is called an artifact here. One can do calculations within the on-shell scheme and obtain momentum-dependent amplitudes (or cross sections) instead of running couplings in simpler amplitudes.
