
  Higgs mass and the hierarchy problem

+ 2 like - 0 dislike
529 views

I was wondering what the opinion about the importance of the hierarchy problem is in the HEP community. I'm still a student and I don't really understand why this issue receives so much attention.

One-loop corrections to the Higgs mass are divergent: in cut-off regularization they are proportional to $\Lambda^2$ and therefore require a large fine-tuning between the parameters to make those corrections small. But this kind of problem does not appear in dimensional regularization.
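
Schematically (up to the precise coefficient), the dominant top-loop contribution in cut-off regularization is
$$\delta m_H^2 \;\sim\; -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2,$$
so for $\Lambda$ near the Planck scale, keeping $m_H$ at the electroweak scale requires a cancellation to roughly one part in $10^{30}$ in $m_H^2$.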

People like the value of $\Lambda$ to be very large, the argument being that it should correspond to some energy scale at which our theory breaks down. I don't think we should treat the scale $\Lambda$ as some kind of physical cut-off scale of our model, as it is just a parameter used to regularize the integral, just like the $4+\epsilon$ dimensions of dimensional regularization are not a physical thing. Why do we assign a physical meaning to $\Lambda$? Not to mention the troubles with Lorentz invariance.

Maybe the hierarchy problem is an argument that the cut-off regularization scheme is simply not the right one to use?

This post has been migrated from (A51.SE)
asked Apr 17, 2012 in Theoretical Physics by AAB (35 points) [ no revision ]
retagged Mar 7, 2014 by dimension10
I fixed your LaTeX. You can include it yourself by enclosing it in dollar signs (like an inline equation in LaTeX).

This post has been migrated from (A51.SE)

1 Answer

+ 5 like - 0 dislike

Whether you do your calculations using a cutoff regularization or dimensional regularization or another regularization is just a technical detail that has nothing to do with the existence of the hierarchy problem. Order by order, you will get the same results whatever your chosen regularization or scheme is.

The schemes and algorithms may differ in the precise moment at which you subtract some unphysical infinite terms, etc. Indeed, dimensional regularization cures power-law divergences from scratch. But the hierarchy problem may be expressed in a way that is manifestly independent of these technicalities.

The hierarchy problem is the problem that one has to fine-tune actual physical parameters of a theory expressed at a high energy scale with a huge accuracy – with error margins smaller than $(E_{low}/E_{high})^k$ where $k$ is a positive power – in order for this high-energy theory to produce the low-energy scale and light objects at all.
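
For example, with the conventional $k=2$ appropriate for a scalar mass squared, taking $E_{low}\sim 10^{2}\,{\rm GeV}$ and $E_{high}\sim 10^{18}\,{\rm GeV}$ gives
$$\left(\frac{E_{low}}{E_{high}}\right)^2 \sim \left(\frac{10^{2}\,{\rm GeV}}{10^{18}\,{\rm GeV}}\right)^2 = 10^{-32},$$
i.e. the high-scale parameters have to be specified to roughly 32 decimal places for the theory to produce the observed electroweak scale at all.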

If I formulate the problem in this way, it's clear that it doesn't matter what scheme you are using to do the calculations. In particular, your miraculous "cure" based on the dimensional regularization may hide the explicit $\Lambda^2$ in intermediate results. But it doesn't change anything about the dependence on the high-energy parameters.

What you would really need for a "cure" of the physical problem is to pretend that no high-energy-scale physics exists at all. But it does. It's clear that the Standard Model breaks down before we reach the Planck energy, and probably way before that. There have to be more detailed physical laws that operate at the GUT scale or the Planck scale, and those new laws have new parameters.

The low-energy parameters such as the LHC-measured Higgs mass 125 GeV are complicated functions of the more fundamental high-energy parameters governing the GUT-scale or Planck-scale theory. And if you figure out what conditions are needed for the high-scale parameters to make the Higgs $10^{15}$ times lighter than the reduced Planck scale, you will see that they're unnaturally fine-tuned conditions requiring some dimensionful parameters to lie in some very precise ranges.

More generally, it's very important to distinguish true physical insights and true physical problems from some artifacts depending on a formalism. One common misconception is the belief of some people that if the space is discretized, converted to a lattice, a spin network, or whatever, one cures the problem of non-renormalizability of theories such as gravity.

But this is a deep misunderstanding. The actual physical problem hiding under the "nonrenormalizability" label isn't the appearance of the symbol $\infty$, which is just a symbol that one should interpret rationally. We know that this $\infty$ as such isn't a problem because at the end, it gets subtracted in one way or another; it is unphysical.

The main physical problem is the need to specify infinitely many coupling constants – coefficients of the arbitrarily-high-order terms in the Lagrangian – to uniquely specify the theory. The cutoff approach makes this clear because there are many kinds of divergences that differ, and each of these divergent expressions has to be "renamed" as a finite constant, producing a finite unspecified parameter along the way. But even if you avoid infinities and divergent terms from scratch, the unspecified parameters – the finite remainders of the infinite subtractions – are still there. A theory with infinitely many terms in the Lagrangian has infinitely many pieces of data that must be measured before one may predict anything: it remains unpredictive at any point.
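
Written schematically (the operators $\mathcal{O}_n$ and the scale $M$ below are generic placeholders), a non-renormalizable effective theory carries an infinite tower of allowed terms,
$$\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm ren} + \sum_{n>4}\frac{c_n}{M^{\,n-4}}\,\mathcal{O}_n,$$
and each dimensionless coefficient $c_n$ is exactly one of those finite, unspecified remainders: it must be measured before the theory predicts anything at the corresponding order.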

In a similar way, the fine-tuning required for the high-energy parameters is a problem because, using Bayesian inference, one may argue that it was "highly unlikely" for the parameters to conspire in such a way that the high-energy physical laws produce e.g. the light Higgs boson. The degree of fine-tuning (parameterized by a small number) is therefore translated into a small probability (given by the same small number) that the original theory (a class of theories with some parameters) agrees with the observations.
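
A toy numerical sketch of this probabilistic statement (purely illustrative: the uniform priors over two $O(M_{Pl}^2)$ parameters, and all the names below, are assumptions of the toy model rather than part of any real calculation):

```python
import random

M_PL = 2.4e18   # reduced Planck scale in GeV (assumed high scale)
M_H = 125.0     # observed Higgs mass in GeV


def light_higgs_fraction(trials=10**5, tolerance=1.0):
    """Fraction of uniformly drawn O(M_PL^2) parameter pairs whose
    difference lands within tolerance * M_H**2 of the observed value."""
    hits = 0
    for _ in range(trials):
        bare = random.uniform(0.0, M_PL**2)  # toy "bare" mass^2 at the high scale
        loop = random.uniform(0.0, M_PL**2)  # toy "quantum correction"
        if abs(bare - loop - M_H**2) < tolerance * M_H**2:
            hits += 1
    return hits / trials


# The analytic expectation is about 2 * tolerance * (M_H / M_PL)**2 ~ 1e-32,
# so any feasible number of trials will almost certainly print 0.0 --
# the Bayesian sense in which a light Higgs is "unlikely" under a flat prior.
print(light_higgs_fraction())
```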

When this fine-tuning is of order $0.1$ or even $0.01$, it's probably OK. Physicists have different tastes regarding what degree of fine-tuning they're ready to tolerate. For example, many phenomenologists have thought that even a $0.1$-style fine-tuning is a problem – the little hierarchy problem – that justifies the production of hundreds of complicated papers. Many others disagree that the term "little hierarchy problem" deserves to be viewed as a real one at all. But pretty much everyone who understands the actual "flow of information" in quantum field theory calculations, as well as basic Bayesian inference, seems to agree that fine-tuning – and hence the hierarchy problem – is a problem when it becomes too severe. The problem isn't necessarily an "inconsistency", but it does mean that there should exist an improved explanation of why the Higgs is so unnaturally light. The role of this explanation is to modify the naive Bayesian measure – with a uniform probability distribution for the parameters – that made the observed Higgs mass look very unlikely. Using a better conceptual framework, the prior probabilities are modified so that the small parameters observed at low energies are no longer unnatural, i.e. unlikely.

Symmetries such as supersymmetry, and new physics near the electroweak scale, are two major representatives of solutions to the hierarchy problem. They eliminate the huge "power law" dependence on the parameters describing the high-energy theory. One still has to explain why the parameters at the high energy scale are such that the Higgs is much lighter than the GUT scale, but the amount of fine-tuning needed to explain such a thing may be just "logarithmic", i.e. "one in $15\ln 10$", where 15 is the base-ten logarithm of the ratio of the mass scales. And this is of course a huge improvement over a fine-tuning at a precision of "1 in 1 quadrillion".
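
Concretely,
$$15\,\ln 10 \approx 34.5,$$
so the residual "logarithmic" tuning is roughly one part in 35, to be compared with one part in $\sim 10^{15}$ in the mass (one part in $\sim 10^{30}$ in the mass squared) without such a mechanism.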

This post has been migrated from (A51.SE)
answered Apr 18, 2012 by Luboš Motl (10,278 points) [ no revision ]
