The Higgs mass around \(125\GeV\) together with the Higgs vev around \(246\GeV\) and the masses of related particles such as the W-bosons and Z-bosons are much smaller than some other energy scales, e.g. the GUT scale and the Planck scale, where some new interesting effects almost certainly occur.
If we calculate the correction to the Higgs mass from quantum loops of virtual particles, whose energy may be very high because physics should make sense up to these very high energy scales, we obtain a quadratically divergent term:\[
\delta m_h^2 = \mp C\cdot \Lambda^2.
\] Here, \(\Lambda\) is the maximum energy that particles may carry; of course, you would like to send it to infinity, or at least to \(m_\text{GUT}\sim 10^{16}\GeV\) or \(m_\text{Planck}\sim 10^{19}\GeV\), except that you can't, as it creates havoc in the equations. Dimensional analysis shows that there's enough potential for such terms to arise and, unless there is an explanation why they shouldn't arise, Gell-Mann's totalitarian principle pretty much guarantees that they will.
So the tree-level Higgs squared mass has to be finely adjusted to a huge number of order \(\pm C\cdot \Lambda^2\) that almost completely (but not quite completely) cancels against the loop corrections, which are equally high, to yield a result that is as small as experiments suggest – and, incidentally, as small as we need for the existence of stars and life. The probability that the tree-level Higgs mass is so accurately adjusted to leave a tiny leftover after the loop corrections are added is small, so theories of this kind apparently predict that our world with a light Higgs boson should be unlikely. That's too bad because theories that predict that the reality is unlikely don't look like very good theories. This problem with the "unnaturally low" observed Higgs mass is known as the hierarchy problem.
In April, I explained that the paragraphs above represent only a technically shallower way to discuss what the hierarchy problem is. A better way is to discuss a full-fledged theory that is valid up to the very high energy scales. This theory has lots of its own parameters – parameters appropriate for a high-energy scale – and these have to be carefully and unnaturally adjusted for a very light Higgs to emerge.
Recently, we've seen several papers arguing that the hierarchy problem is an illusion. Let me say in advance that I find it plausible that this is what physics will ultimately learn about the hierarchy problem; I just don't think that any of these physicists has actually found the right argument that would show such a thing.
First, John Ralston published a very fresh hep-ph preprint
Where and How to find susy: The auxiliary field interpretation of supersymmetry

Writing "susy" in between two dollar signs looks rather stupid but it's not the main problem with the paper. ;-)
Ralston presents examples demonstrating that Taylor expansions may be a lousy guide when we want to learn what happens with a function at very high values of the argument. The general claim is right but the examples he chooses have something illogical in them. For example, his Gamma-function example is one in which the perturbative expansion actually correctly indicates how the function grows for very large values of the argument. It would be more logical to choose functions that converge to zero at infinity even though the power laws indicate that they diverge.
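To illustrate the general claim with my own toy example (not one of Ralston's): every truncation of the Taylor series of \(1/(1+x^2)\) around \(x=0\) diverges as a power law at large \(x\), even though the function itself converges to zero at infinity:

```python
def f(x):
    # the exact function: small at large |x|, tends to zero at infinity
    return 1.0 / (1.0 + x**2)

def taylor_truncation(x, n_terms):
    # Taylor series around 0: 1 - x^2 + x^4 - x^6 + ...
    # it converges only for |x| < 1
    return sum((-1)**k * x**(2 * k) for k in range(n_terms))

x = 3.0
print(f(x))                               # 0.1 — the function is small at x = 3
for n in (2, 5, 10):
    print(n, taylor_truncation(x, n))     # the truncations blow up with n
```

The truncated series at \(x=3\) gives \(-8\), \(5905\), and then ever larger numbers, while the exact function sits quietly at \(0.1\): the low-order power-law behavior says nothing reliable about the large-argument limit.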
But I don't want to get into vague interpretations or obvious mathematical tasks. Instead, let's get to the point. The point – the correct point, not Ralston's point – is that we don't have to rely on what Ralston calls the "perturbative" part of the calculation and what sane physicists call "low-energy expansions". Instead, we may and we should directly discuss the models that are valid up to very high energy scales. When we do so, the would-be "non-perturbative" (we actually mean "high-energy-physics-related") corrections are included. And when we do so, an extremely fine adjustment of the parameters of the high-energy theory is needed to yield light particles such as the Higgs boson. That's the hierarchy problem.
We can show that this problem exists in theories that look like ordinary enough theories at the high scale. This proof has no errors in which some vital "non-perturbative" effects would be neglected. We know it's right. The only non-anthropic way to fix the hierarchy problem is to change the theory, to switch to a new theory in which the sensitivity disappears. The known strategies include supersymmetry and its cancellations; new strongly coupled QCD-like dynamics at a lower energy scale whose smallness may be explained by the very slow logarithmic running of the coupling; and, less usually, completely new theories resembling the string-theoretical "unexpectedly ultrasoft" short-distance behavior.
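The logarithmic-running strategy – dimensional transmutation, as in technicolor – is easy to sketch numerically: with one-loop running, a modest coupling at a high scale produces an exponentially smaller strong-coupling scale. The values of the coupling and the beta-function coefficient below are illustrative choices, not taken from any particular model:

```python
import math

# One-loop running: the coupling alpha(mu) becomes strong at the scale
#   Lambda_conf = mu_high * exp(-2*pi / (b * alpha(mu_high))),
# so an order-one exponent turns a huge scale into a tiny one.
mu_high = 1e16       # high scale (GUT-like) in GeV, illustrative
alpha_high = 0.03    # coupling at the high scale, illustrative
b = 7.0              # one-loop beta-function coefficient, illustrative

Lambda_conf = mu_high * math.exp(-2 * math.pi / (b * alpha_high))
print(f"strong-coupling scale: {Lambda_conf:.3e} GeV")  # roughly a TeV
```

For these inputs the exponent is about \(-30\), so a \(10^{16}\GeV\) input scale lands near a TeV: the hierarchy of thirteen orders of magnitude comes out of a slowly, logarithmically running coupling of order \(10^{-2}\), with no fine-tuning.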
But one has to change the laws of physics. The laws of physics have to contain something unusual that actually invalidates the low-energy power-law arguments based on the quadratic divergences. If the theory has no special features, we know from "exact" high-energy calculations that the hierarchy problem is there and that the simple guess based on the quadratic divergences may indeed be extrapolated and yields the right conclusion.
Also, if Ralston's toy model were relevant, you would need not only quadratic but also quartic, sixth-order, and all higher-order divergent terms. That would make the theory really pathological and totally non-renormalizable as a quantum field theory, although it could be a result of some stringy effects. But almost no one expects these intrinsically stringy effects to kick in at energies well below the GUT scale or the Planck scale, so it seems unlikely to physicists that they could have something to do with the solution to the hierarchy problem. There may be a loophole in these expectations but, once again, it doesn't seem as though Ralston has found such a loophole.
Starkman et al.
Four days ago, Glenn Starkman wrote a guest blog for Scientific American whose goal was extremely similar but whose strategy was a bit different (or, more precisely, completely different):
Beyond Higgs: On Supersymmetry (or Lack Thereof)

He argues that supersymmetry or any other new physics isn't needed as a solution of the hierarchy problem because the problem doesn't exist. He has (and his co-authors have) a different rhetorical strategy to show that the hierarchy problem doesn't exist: the Higgs mass may be linked to the masses of the Goldstone bosons – the angular directions of the Higgs doublet – which have a reason to remain massless, namely the Goldstone theorem.
Except that this claim is wrong at many levels. First of all, the Goldstone bosons don't stay massless in the Standard Model because they're eaten by the W-boson and Z-boson fields and become the gauge bosons' (equally massive) longitudinal polarizations. But even if you considered the spontaneous breaking of a global symmetry and not a local one, it wouldn't work because only the 3 Goldstone bosons – the "angular" components of the Higgs doublet, one for each generator of the symmetry group that is broken – have a reason to remain massless.
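This Goldstone counting can be checked explicitly in a Mexican-hat toy model: at the minimum of an \(O(4)\)-symmetric potential, the mass matrix has exactly three vanishing eigenvalues (the would-be Goldstones, one per broken generator of \(O(4)\to O(3)\)) and one nonzero eigenvalue for the radial, physical Higgs mode. A minimal numerical sketch, with illustrative values of the coupling and the vev:

```python
import numpy as np

# O(4)-symmetric Mexican hat: V = lam * (phi.phi - v**2)**2
lam, v = 0.032, 246.0   # illustrative; sqrt(8*lam)*v is then close to 125

def potential_hessian(phi):
    # analytic second derivatives of V at the point phi:
    # d2V/dphi_i dphi_j = 4*lam*((phi.phi - v^2)*delta_ij + 2*phi_i*phi_j)
    phi = np.asarray(phi, dtype=float)
    n = len(phi)
    return 4 * lam * ((phi @ phi - v**2) * np.eye(n) + 2 * np.outer(phi, phi))

# evaluate the mass matrix at a minimum, e.g. phi = (0, 0, 0, v)
H = potential_hessian([0.0, 0.0, 0.0, v])
masses_sq = np.linalg.eigvalsh(H)   # sorted ascending
print(masses_sq)   # three zeros (Goldstones) and one 8*lam*v**2 (radial mode)
```

The three flat directions are the angular components; the fourth eigenvalue, \(8\lambda v^2\) in this normalization, is the radial Higgs and no symmetry protects it. That nonzero entry is exactly the quantity the quadratic divergences destabilize.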
So any attempt to claim that the Higgs itself – the radial component – remains light or massless or unaffected by the quadratic divergences is an artifact of circular reasoning. The circular reasoning may have various forms. These guys, for example, decided to describe the system in terms of a low-energy theory based on pions etc. However, if the Higgs isn't made light to start with, the dynamics of the system isn't described by this approximate low-energy effective theory at all. There aren't any low energies. You can't absorb the divergences into the pion masses because the pions aren't among the degrees of freedom of the right description.
Just to be sure, I am talking about the following paper:
A Goldstone "Miracle": The Absence of a Higgs Fine Tuning Problem in the Spontaneously Broken \(O(4)\) Linear Sigma Model

by Bryan W. Lynn, Glenn D. Starkman, Katherine Freese, Dmitry I. Podolsky. Among the authors, I only know Kathy Freese in person; I know Dmitry Podolsky from intense communications on the Internet (he has a blog, nonequilibrium.net).
Under Starkman's essay in the Scientific American, I left the following comment.
Exactly one year ago, Dmitry Podolsky, a co-author of yours, had a correspondence – about 10 e-mails – with me. He argued in favor of a simpler "argument" than your paper's argument why there were no quadratic divergences to the Higgs mass.

The claim was that they only affected the mass parameters, so the quartic coupling wasn't affected by them, and the influence of the quadratic divergences on the Higgs mass was actually linked to the correction to the tadpole trying to change the vev (the location of the minimum of the potential), which has to be zero for stability. So even the former thing was zero.

For hours if not days, I was a bit confused but then I came back to my senses and wrote him an explanation – one that he never replied to again. The explanation is that the correction to the tadpole is indeed linked to the corrections to the Higgs mass and the vev. That's OK but *none* of these mutually related things is guaranteed to be zero by any principle. In fact, we know that in particular theories it's not zero. The quadratic divergences reflect the large sensitivity of the Higgs mass/vev on all the detailed parameters of any high-energy theory we start with. We may choose to say whether it's the Higgs mass or the Higgs vev that is threatened by these divergent terms but both of them are!

Without a principle that guarantees a cancellation, both the Higgs vev and the Higgs mass – together with the masses of the W-bosons, Z-bosons, and some leptons and quarks – would be close to the GUT scale or any other high-energy scale we find in the full theory. Their fates and values could be correlated but there exists no argument why either of these quantities should be small.

Now, the final paper of yours talks about the pions etc. It is a bizarre treatment. From the viewpoint of the fundamental theory, pions are composite objects and their properties are derived quantities. We don't really have special counterterms for pions or even technipions in the Standard Model. They're not fundamental parameters. So at most, you link the divergences to yet another quantity that isn't guaranteed to be zero by any principle. At most, you are supplying a new principle but this principle is equivalent to saying "there shouldn't be a hierarchy problem". You don't have any independent justification why it's not there.

It's only the Goldstone bosons that have a reason to be massless, the Goldstone theorem, but there are only 3 of them, one for each broken generator, and the physical Higgs boson simply isn't one of them. That's why I believe that your paper boldly claiming that the quadratic divergences aren't really there is wrong.

So I don't think that either of these authors has addressed the actual hierarchy problem. They have just addressed oversimplified beginners' caricatures of the problem. If they actually understood how the low-energy Higgs physics emerges from some complete theories of the known types that work up to very high energy scales, they would notice how ludicrous their "solutions" are.
We can explicitly see that the fine-tuning of the high-energy theory is needed to produce the light particles, so hopes inspired by some unusual extrapolations of the incomplete, low-energy approximation of the situation have nothing to do with reality. In other words, you can't solve the hierarchy problem by looking at it through thick and dirty eyeglasses and by claiming that it doesn't look too sharp in this way.
And that's the memo.
Have Starkman and pals shown that the hierarchy problem doesn't exist?
Reviewed by DAL on June 23, 2012