The Standard Model is an impressive achievement of science. Numerous anti-scientific crackpots – all those Shmoits Lite, not to mention the Hardcore Shmoits themselves – often misrepresent the relationship between top phenomenologists and theorists on one side and the Standard Model on the other.
Competent physicists aren't studying theories that go beyond the Standard Model because they consider the Standard Model to be wrong or because they dismiss it. On the contrary, physicists who really go beyond the Standard Model view the Standard Model as a beautiful woman they have slept with in every possible sexual position. They love her, they know her, but they have also decided she is not the last word. Aside from sex, one also needs to eat vitamins, at least when he gets older, and perhaps educate the children who were born from that relationship.
(If you expected me to promote adultery, you were proved wrong.)
If they were dismissing the Standard Model, they would call their research "Around the Standard Model" or "Against the Standard Model" (ASM), not "Beyond the Standard Model" (BSM). After all, I couldn't have won the $500 bet if I didn't know why the Standard Model had to have the basic story right, including the Higgs sector. However, it's still not the complete theory of everything.
The Standard Model fails to explain the following – almost certainly genuine – features of Nature, among a few others:
- gravity
- neutrino oscillations
- baryon asymmetry
- dark matter
- inflation
- hierarchy problem
- stability of the Higgs potential
- strong CP-problem
- apparent gauge coupling unification
- apparent unification of representations
Gravity
The Standard Model only contains the particles from the picture at the top – and the forces they mediate. Their spins are \(j=0\) or \(j=1/2\) or \(j=1\). The last category consists of the gauge fields; they give rise to all the non-gravitational forces. The first, spinless group has just been discovered experimentally and it includes the Higgs boson. All the known elementary fermions have spin equal to one-half.
Missing in this list is \(j=2\) which you need for gravity. That's really because the metric tensor \(g_{\mu\nu}\) underlying the general theory of relativity, our modern theory of gravity, has two indices. For most applications, you may just combine the Standard Model with all of its quantum corrections and the classical general relativity, derived from the Einstein-Hilbert action\[
S_{EH} = \frac{1}{16\pi G} \int \dd^4 x\,\sqrt{-g}\,R
\] and you're satisfied. However, if you try to treat the Lagrangian term above in the same way as other terms in the Standard Model and also include quantum loops, you get infinitely many new kinds of divergences that can't be tamed by the usual renormalization methods. That implies that gravity at the fundamental scale where its quantum loops become as strong as the tree-level contributions – the Planck scale – depends on infinitely many unknown real parameters. It is not predictive.
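To see why the divergences proliferate, a one-line dimensional estimate helps (a standard back-of-the-envelope argument, not a derivation): Newton's constant carries a negative mass dimension, so each additional loop in an amplitude is enhanced by a factor that grows with energy,\[
\frac{A_{L+1}}{A_L} \sim G\,E^2 \sim \frac{E^2}{m_{\rm Pl}^2},
\] where \(A_L\) denotes the \(L\)-loop contribution, and taming each new order requires new counterterms (\(R^2\), \(R^3\), and so on) with new unknown coefficients. Near \(E\sim m_{\rm Pl}\), all infinitely many of them become equally important at once.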
If there are large or appropriately warped extra dimensions, this scale at which all hell breaks loose may actually be closer to the LHC scale than the Planck scale.
As you know, string theory is the only known – and most likely, the only mathematically possible – theory that reduces to quantum field theory (of which the Standard Model is an example), including the loop corrections, while also producing Einstein's equations for gravity.
Gravity's impact on particle physics experiments at the accelerators is negligible, however. So you might only be interested in "the theory of all matter and forces that become important beneath the Planck scale" and remove gravity by hand. However, there are other arguments suggesting that the Standard Model isn't a complete theory of this "practical subset of the world", either.
Neutrino masses
Neutrinos are very light. They seemed massless for a few decades after they were discovered. But for the past few decades, we have also had arguments showing that neutrinos are almost certainly massive. How did we know? Well, it seemed obvious that neutrinos oscillate – for example, because we received fewer neutrinos from the Sun than what our otherwise perfectly satisfactory solar model predicted.
In the past decade, the oscillations were measured pretty accurately and, much like some cosmological parameters, these phenomena became a component of the mundane, experimentally accessible physics. All the experiments we know seem pretty much compatible with the assumption that there are three neutrino flavors from the picture at the top. These neutrinos are Majorana fermions (identical with their antiparticles, but the left/right handedness effectively distinguishes the neutrinos from antineutrinos whenever the speed is close to the speed of light); we know something about their mixing angles; and we know the three mass eigenvalues. Well, more precisely, we only know the squared-mass differences, \[
\Delta m^2_{12}, \quad \Delta m^2_{13},\quad \Delta m^2_{23}.
\] Note that the middle difference above is the sum of the first and last one. The square roots of the three quantities above are of order dozens or hundreds of millielectronvolts. The smallest mass eigenvalue may be much smaller than that; it may be comparable; but it may also be larger, perhaps many electronvolts. An overall shift of all the eigenvalues that keeps the differences fixed makes no visible impact on the interference experiments.
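To make the "differences only" point concrete, here is a minimal two-flavor sketch in Python (the standard textbook oscillation formula; the baseline, energy, mixing and mass values are merely illustrative assumptions): shifting the common squared-mass scale of both eigenvalues leaves the survival probability untouched.

```python
import numpy as np

def survival_prob(m1_eV, m2_eV, L_km, E_GeV, sin2_2theta=0.95):
    """Two-flavor survival probability P(nu_a -> nu_a).
    Textbook formula: P = 1 - sin^2(2 theta) * sin^2(1.27 * dm2 * L / E),
    with the squared-mass difference dm2 in eV^2, L in km, E in GeV."""
    dm2 = m2_eV**2 - m1_eV**2
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2 * L_km / E_GeV) ** 2

# Illustrative (assumed) numbers, roughly atmospheric-like:
L, E = 800.0, 1.0                          # baseline in km, energy in GeV
p_light = survival_prob(0.001, np.sqrt(0.001**2 + 2.4e-3), L, E)

# Shift the common squared-mass scale up by 1 eV^2; dm2 is unchanged:
shift = 1.0
p_heavy = survival_prob(np.sqrt(0.001**2 + shift),
                        np.sqrt(0.001**2 + 2.4e-3 + shift), L, E)

print(f"{p_light:.6f} {p_heavy:.6f}")  # identical: only differences matter
```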
An interesting numerological feature is that up to an order of magnitude or two, we may express the neutrino masses as\[
m_\nu \approx \frac{m_h^2}{m_{\rm GUT}}
\] where \(m_{\rm GUT}\) is the GUT scale or something located two or three orders of magnitude beneath the Planck scale; \(m_h\approx 126\GeV\) was chosen as a representative quantity for the electroweak scale. This "seesaw formula" allows one to speculate that the tiny neutrino masses – which are roughly as many times lighter than the Higgs boson as the Higgs boson is lighter than the GUT scale – may be a residual imprint of new physical phenomena that occur near the GUT scale.
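Plugging in the numbers makes the ballpark agreement explicit (taking \(m_{\rm GUT}\sim 10^{16}\GeV\) as an assumed representative value):\[
m_\nu \approx \frac{(126\GeV)^2}{10^{16}\GeV} \approx 1.6\times 10^{-12}\GeV \approx 1.6\,{\rm meV},
\] which is indeed within an order of magnitude or two of the square roots of the measured squared-mass differences.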
Unlike gravity, the neutrino masses don't look like a fundamental defect of the Standard Model. We may just add the neutrino mass terms to the action. The resulting theory has been called \(\nu\)-Standard Model or something like that. But there's nothing fundamentally immoral about using the term "Standard Model" for the theory that also includes the neutrino masses.
Baryon asymmetry
However, this is not the case for the baryon asymmetry. All processes we have reliably observed in Nature have conserved the baryon number. The baryon number of a proton or a neutron is equal to one (much like for other baryons); their antiparticles have \(B=-1\); and all other particles have \(B=0\). One may reduce this assignment to a more fundamental rule giving \(B=+1/3\) to quarks and \(B=-1/3\) to antiquarks.
The Standard Model guarantees that the baryon number is conserved as long as you only allow its renormalizable interactions. You just can't write any gauge-invariant, Lorentz-invariant terms polynomial in the available Standard Model fields that would violate the baryon number. The same comment applies to the lepton number \(L\). Because these conservation laws are forced upon you by renormalizability for no obvious theoretical reasons (the corresponding symmetry isn't natural, isn't gauge, and isn't theoretically inevitable), we call \(B\) and \(L\) "accidental symmetries". These approximate global symmetries are almost certainly violated by nonrenormalizable interactions; by black hole formation and evaporation (this is very far and very exotic but really guaranteed); and perhaps by renormalizable terms in theories of new physics.
If the Standard Model were the whole story, the total baryon number of the present Universe would be equal to the total baryon number of the newborn Universe. How much is that?
Well, the lepton number \(L\) would be more ambiguous because there could be lots of invisible neutrinos in outer space and they could moreover be identical to the antineutrinos. However, the problem is much better defined for baryons because baryons can't be invisible. They can't be indistinguishable from antibaryons, either. You just count the protons and neutrons and because there are almost no antiprotons and antineutrons around (they annihilated a long time ago, because of an apparent 1,000,000,001 vs 1,000,000,000 excess of matter over antimatter in the early Universe), you get the resulting \(B\). You get something like \(B=10^{57}\) from each Sun and there are tens of billions of stars in a galaxy and tens of billions of galaxies. The qualitative result is clear: \(B\) is nonzero and huge.
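The solar figure is just the solar mass divided by the nucleon mass, and the total follows from the star and galaxy counts (all inputs are order-of-magnitude estimates):\[
B_\odot \approx \frac{2\times 10^{30}\,{\rm kg}}{1.7\times 10^{-27}\,{\rm kg}} \approx 10^{57},\qquad
B_{\rm total} \approx 10^{57}\times 10^{11}\times 10^{11} \approx 10^{79},
\] consistent with the \(B\sim 10^{80}\) quoted below.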
Note that even if you were computing \(B-L\) so that the protons "cancel" against the electrons, you could get a huge nonzero result because there are many additional neutrons with \(B=+1\) (although it's not clear how many neutrinos there are).
If you imagine that our Universe used to be very small, like a peanut or much smaller – and the Big Bang Theory surely encourages you to believe this assumption (both the actual theory as well as the sitcom because Sheldon Cooper believes it as well and his IQ is 187) – it is just unnatural for this initial state to have \(B=10^{80}\) or something like that, which is what you need to match the present conditions.
You'd better have processes that produced this high baryon number out of nothing. If the Standard Model were the whole story, they would be totally banned. Even with the weak "nonrenormalizable" interactions that the Standard Model would allow as an effective theory, you wouldn't produce sufficiently many baryons. So there had better be some additional physical phenomena if you want to believe that the initial state of the Universe was a natural state containing almost nothing – including a small value of \(B\).
The new physical phenomena that had to arise to produce the large value of \(B\) are known as baryogenesis. The case of the lepton number is analogous but less indisputable, so while people sometimes talk about leptogenesis as well, they usually consider it a hypothetical process that may explain baryogenesis: you first create a nonzero lepton number and then convert some of it, via some processes conserving \(B-L\), to a nonzero baryon number.
These new phenomena may lie anywhere between the LHC scale and the GUT scale; we don't know where. Models have been proposed all over the place and for every scale in the interval, one may find a model that hasn't been falsified yet.
Dark matter
The stars orbit the center of the Milky Way at speeds that seem almost independent of the distance from the center of the galaxy. That's very different from the dependence of the planetary speeds you know from the Kepler model: the nearby planets are much faster. The explanation that seems most plausible is that there are sources of gravity we don't observe – dark matter – that account for the observed speeds. The dark matter paradigm is reinforced by some observations indicating that the dark matter (as seen from its gravitational influences) may apparently be separated from the usual matter.
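A minimal sketch of the contrast in Python (the numbers are made up but representative; this is an illustration, not a fit to data): a central point mass gives a Keplerian speed falling as \(1/\sqrt{r}\), while a dark halo whose enclosed mass grows linearly with radius gives a flat curve.

```python
import numpy as np

G = 4.30e-6  # Newton's constant in kpc * (km/s)^2 / M_sun units

def v_point_mass(r_kpc, M=1e11):
    """Keplerian speed around a central point mass M (solar masses):
    v falls off as 1/sqrt(r), like the planets around the Sun."""
    return np.sqrt(G * M / r_kpc)

def v_isothermal_halo(r_kpc, m_per_kpc=1.1e10):
    """Dark halo with density ~ 1/r^2, so the enclosed mass grows
    linearly, M(r) = m_per_kpc * r, and the speed is r-independent."""
    return np.sqrt(G * m_per_kpc * r_kpc / r_kpc)

for r in (5.0, 10.0, 20.0):
    print(f"r = {r:5.1f} kpc: Kepler {v_point_mass(r):6.1f} km/s,"
          f" halo {v_isothermal_halo(r):6.1f} km/s")
# Kepler: ~293, ~207, ~147 km/s (falling); halo: ~218 km/s at every r.
```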
About 4% of the energy density in the Universe is made out of visible matter (mostly baryons because the nuclei are much heavier than the electrons); about 23% is dark matter; about 73% is dark energy.
The Standard Model doesn't predict any particles that could be as dominant as dark matter seems to be but that would also be equally invisible. One needs some new particles, ideally stable or long-lived particles. Well, I say "particles". Dark matter could also be composed of larger objects – MACHOs, RAMBOs, BATMEN, TERMINATORs, and similar ones – but unlike natural selection, the data seem to favor WIMPs and SISSies. Axions are also particles and viable dark matter candidates but they don't belong among the WIMPs.
The LSP – the lightest superpartner in supersymmetric theories with a conserved R-parity (or a violated one if the lightest superpartner is the gravitino) – is a "canonical" example of a particle that seems to satisfy all the general constraints. LSPs (such as neutralinos) may pair-annihilate and this pair annihilation drives their originally high concentration down to more realistic values (i.e. close to the observed ones) in the current era.
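The logic of that dilution is captured by the standard freeze-out rule of thumb (quoted here only as an order-of-magnitude estimate): the relic density is inversely proportional to the annihilation cross section,\[
\Omega_\chi h^2 \approx 0.1\times \frac{3\times 10^{-26}\,{\rm cm}^3/{\rm s}}{\langle\sigma v\rangle},
\] and a typical weak-scale cross section happens to land near the observed dark matter density – the so-called "WIMP miracle".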
Without new particles such as WIMPs or axions, the Standard Model would demand that the general theory of relativity be seriously overthrown. Instead of adding "just some new particle species", you would need to qualitatively change Einstein's equations – which seems to be a more dangerous, more radical, and less justified step. But of course, there is no definitive proof that new particles are needed to explain the phenomena we normally attribute to "dark matter".
Inflation
Our spacetime is curved. Why? In 1998, people observed that the expansion of the Universe is actually accelerating. We have already entered the era in which the "dark energy" accelerating it is the dominant component of the energy density (at 73%). That's why our spacetime is already close to the de Sitter space. Every 10 billion years or so, the linear distances between pairs of galaxies will double. The expansion will continue at an accelerating rate.
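The doubling time quoted above is just \(\ln 2\) divided by the Hubble rate; with \(H\approx 70\,{\rm km/s/Mpc}\), i.e. \(1/H\approx 14\) billion years (and ignoring the mild difference between today's rate and the asymptotic de Sitter rate),\[
t_{\rm double} = \frac{\ln 2}{H} \approx 0.69\times 14\,{\rm Gyr}\approx 10\,{\rm Gyr}.
\]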
This means that the typical curvature radius of our spacetime is of order 10 billion years or light years, depending on whether you view the spacetime as an extension of space or time.
On the other hand, if you just pick the 3-dimensional slice through spacetime "now" – one that contains all events that occurred 13,730,002,012 years after the Big Bang (assuming that Jesus Christ was decent enough to be born exactly 13,730,000,000 years after the Big Bang, up to the famous four years that don't work even outside cosmology) – you will find out that its curvature is much smaller. The curvature radius is at least hundreds of billions of light years and possibly infinite.
This flatness is puzzling. It may be explained by cosmic inflation. Cosmic inflation also explains the surprisingly good uniformity of the temperatures of the cosmic microwave background – microwave radiation that was born together with the atoms (the Universe became almost transparent for light at that time) and that has been cooling since that moment. And it explains the apparent absence or shortage of topological defects, magnetic monopoles, and other things we could observe every day but we don't.
Most likely, an explanation similar to inflation is needed for those things. Most likely, some new scalar fields are needed, too – although brave people have attempted to run inflation on our familiar July 4th God particle, too. If this boson managed to give masses to particles and inflate the Universe from a nearly zero size to this astronomical one at the same time, it would surely deserve to be called the God particle. In fact, and I hope that readers-believers will survive this comment, it would be much more omnipotent than God. But calm down, it seems somewhat unlikely, especially because string theory predicts many new scalar fields and the Higgs boson's competitors may share the jobs.
Inflation is not hard to achieve with 1,000-eros banknotes. A 47-year-old man told his girlfriend today that this banknote is certainly valid. So she went to a shop to exchange it and indeed, the clerk gave her CZK 24,000 for this EUR 1,000 banknote, i.e. almost the current exchange rate. However, the clerk still found it a little suspicious, so he went to the local bank, KB, where they kindly and medically confirmed to him that he was an imbecile.
Hierarchy problem
The July 4th Higgs boson has mass \(126\GeV\) or so. But the quantum loops – virtual particles of other kinds – want to modify the mass and they naturally want to make it as heavy as possible – as heavy as the energy of the highest-energy particle excitations that you allow in your whole theory. Unless you finely adjust parameters of the theory at the Planck scale, the theory will predict that all the scalar fields should "almost certainly" have masses comparable to the GUT scale or the Planck scale.
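The size of the problem is easy to quantify with the usual one-loop estimate (the top-quark loop dominates; this is the textbook back-of-the-envelope version): the correction to the squared Higgs mass grows with the cutoff \(\Lambda\),\[
\delta m_h^2 \sim \frac{3 y_t^2}{8\pi^2}\,\Lambda^2,
\] so for \(\Lambda\sim m_{\rm Pl}\sim 10^{19}\GeV\), the bare parameter and the loop correction have to cancel to a relative accuracy of roughly \(m_h^2/\Lambda^2\sim 10^{-34}\) – a tuning to some thirty decimal places.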
We've known for quite some time that the Higgs mass, much like the Higgs vev and the masses of the massive vector bosons, is comparable to \(100\GeV\). Since Independence Day, we may be much more self-confident in similar statements. We just seem to observe a Higgs boson that is much lighter than the mass of typical scalar particles, beasts that are unable to go on a diet (only spinful particles know how to do that, by using their chiral symmetry or gauge symmetry) and that get heavy by eating the hamburgers composed of virtual particles all the time.
The prediction that the Higgs boson should probably be heavy may be avoided by two known major tricks: compositeness and cancellations caused by symmetry. The first type of solution, compositeness – in particular the technicolor theories initiated by Susskind and Weinberg – seems to be pretty much dead at this point. The second type of solution, new symmetries – and I mean supersymmetry because it's really the only new proposed symmetry that can be responsible for such cancellations – remains perfectly viable. However, it hasn't been discovered yet.
If supersymmetry isn't discovered at the LHC at all and if no other new physics is discovered, it will show that the "apparently fine-tuned" light mass of the Higgs boson isn't unacceptable for Nature. In that case, we could be forced to believe that the naturalness arguments aren't kosher. Or that we've been incredibly lucky and we shouldn't ask why. A heavy Higgs boson wouldn't allow life to exist and this observation could be enough to anthropically explain why our world has to have a light Higgs. We had to find ourselves in a Cosmos with life – and the existence of a light Higgs boson is a necessary condition for that.
But we're not there yet. I personally still think it's more likely than not that new physics beyond the Standard Model will already be found by the LHC.
Higgs instability
The observed Higgs mass, around \(126\GeV\), is OK for the lightest Higgs boson in supersymmetry. However, if the Standard Model were the whole story, it would be a problematic value of the mass. It's too light: the quartic self-coupling of the Higgs field would run negative at some higher scale, turning the shallow potential upside down there, which implies that only a theory that is unstable at shorter distances could produce such a light Higgs boson in the long-distance effective theory.
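A toy illustration of that running in Python (a deliberately truncated one-loop sketch, an assumption of this demo rather than the full calculation: only the quartic \(\lambda\), the top Yukawa \(y_t\) and the QCD coupling \(g_3\) are kept, and the electroweak couplings are dropped) – the \(-6y_t^4\) term is what drags \(\lambda\) below zero:

```python
import numpy as np

def instability_scale(lam=0.13, yt=0.94, g3=1.16, mu=173.0, mu_max=1e19):
    """Toy one-loop running, t = ln(mu/GeV):
       16 pi^2 dlam/dt = 24 lam^2 + 12 lam yt^2 - 6 yt^4
       16 pi^2 dyt/dt  = yt * (4.5 yt^2 - 8 g3^2)
       16 pi^2 dg3/dt  = -7 g3^3
    Returns the scale (in GeV) where lam first turns negative."""
    k, dt = 1.0 / (16 * np.pi**2), 1e-3
    t, t_max = np.log(mu), np.log(mu_max)
    while t < t_max:
        lam += k * (24 * lam**2 + 12 * lam * yt**2 - 6 * yt**4) * dt
        yt += k * yt * (4.5 * yt**2 - 8 * g3**2) * dt
        g3 += k * (-7 * g3**3) * dt
        t += dt
        if lam < 0:
            return np.exp(t)
    return None

# The truncated toy crosses zero many orders of magnitude below the
# Planck scale; the full multi-loop Standard Model analysis puts the
# crossing near 1e10 GeV, but the qualitative story is the same.
print(f"quartic turns negative near mu ~ {instability_scale():.1e} GeV")
```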
But an unstable Higgs potential seems inconsistent because Nature would choose to live elsewhere. Even if the Higgs mass belongs to the "metastable" window, which it may, it seems problematic. I explained these matters in a recent article. The most likely explanation of this problematic predicted instability is that it doesn't exist.
New fields and particles contribute to the Higgs potential and cancel most of the "instability creating" effects. Stop squarks and higgsinos do this helpful job in supersymmetric theories. Even in other theories, the "physicians" protecting our vacuum against such an instability would have to resemble the stops or higgsinos (or both) in some respects.
Strong CP-problem and axions
In the previous two sections, I discussed supersymmetry's ability to cancel various unwanted terms and preserve the stability of the Universe. In fact, supersymmetry also helps to cancel the cosmological constant – the dark energy – whose value is 123 orders of magnitude smaller than the most natural estimate, the Planck energy density. Supersymmetry reduces the discrepancy but because it has to be broken, it can only reduce it to 60 orders of magnitude or so. And this "half-job" has one more price: the remaining uncancelled contribution to the cosmological constant seems to be "much more self-evidently present" than the original 123 orders of magnitude that could be handwaved away.
While supersymmetry may very well play a role in keeping the cosmological constant small (as in no-scale supergravity) and keeping the Higgs boson light, it's marginally conceivable that there's actually no explanation for the small size of these parameters, and that the anthropic observation that "the small size is needed for life" is everything one may offer as an excuse for the unnaturally low values.
However, we also know that even if such anthropic excuses were legitimate in a rational argumentation or explanation in physics, they aren't the answer to all the open questions in physics. Why? QCD allows the so-called theta-term aside from the usual kinetic term:\[
\LL_{\rm QCD} = -\frac{1}{4} F_{a,\mu\nu} F^{a,\mu\nu} - \frac{n_f g^2\theta}{64\pi^2} \epsilon_{\kappa\lambda\mu\nu} F^{a,\kappa\lambda} F^{a,\mu\nu}
\] The second term includes the epsilon symbol – the term is proportional to \(F\wedge F\) – and it breaks the CP-symmetry. However, observations show that the coefficient of this term has to be really tiny (less than one billionth or so) because the phase in the CKM matrix seems to be the only source of CP-violation that we have observed so far.
On the other hand, if the coefficient were much larger, perhaps of order one, it wouldn't destroy life, at least not in any way we understand. So even the requirement that "life has to exist" isn't enough to explain why the coefficient is so small. Unless we are overlooking an effect that is needed for life – or a bad effect that may kill it – there has to be a new mechanism that explains the smallness of the \(\theta\)-angle.
The most plausible solution seems to be one that effectively promotes this angle to a scalar field, the axion field, which happens to be light and charged under a new symmetry; the conservation of this charge effectively bans the terms in the potential that would make the vev nonzero (or large). String theory seems to predict many axions so this "Peccei-Quinn solution" to the strong CP-problem – the problem why the coefficient is so tiny – seems natural from most string theorists' viewpoint.
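A schematic version of the mechanism in Python (the cosine potential below is the standard schematic form generated by QCD instantons, with the axion decay constant and the overall scale set to one – pure assumptions for the sketch): whatever the bare \(\theta\) is, the axion vev relaxes to the point where the effective angle vanishes.

```python
import numpy as np
from scipy.optimize import minimize_scalar

theta0 = 1.2  # an arbitrary bare theta-angle of order one

# Schematic instanton-generated potential, V(a) = 1 - cos(theta0 + a/f_a),
# in units where the QCD-generated scale and f_a both equal one:
V = lambda a: 1.0 - np.cos(theta0 + a)

res = minimize_scalar(V, bounds=(-np.pi - theta0, np.pi - theta0),
                      method="bounded")
print(f"axion vev: {res.x:+.4f}, effective theta: {theta0 + res.x:.1e}")
# The vev settles at -theta0, so the effective theta-angle is ~0 and
# the CP-violating term is dynamically switched off.
```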
Gauge coupling unification
Finally, let me mention that the Standard Model, while considered the "minimal theory for physics in 2012", has many "non-minimal features", anyway. It has three generations. You could say that three are needed for the CP-violation by the CKM matrix. (This is actually an awkward argument because the CP-violation produced by the CKM matrix is too tiny and effectively useless in cosmology – and one needs some stronger sources of CP-violation for baryogenesis, anyway.) It has leptons and quarks. This "duality" is actually understood: the Standard Model with leptons removed (or quarks removed) would actually be an inconsistent theory due to gauge anomalies that only cancel between quarks and leptons!
The Standard Model also has three factors of the gauge group, with three independent coupling constants. It actually looks like their values unify in such a way that \(SU(3)\times SU(2)\times U(1)\) may be embedded into a larger group such as \(SU(5)\) or \(SO(10)\) or \(E_6\) with a single gauge coupling constant. That would be great because you would reduce the number of independent parameters. However, in the Standard Model itself, this "gauge coupling unification" fails by a visible margin. Within the error margins, it works perfectly if you replace the Standard Model by its minimal supersymmetric extension, the Minimal Supersymmetric Standard Model.
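The one-loop check takes a few lines in Python (a standard textbook sketch: the \(M_Z\) inputs are rough 2012-era values, and for simplicity the MSSM coefficients are run straight from \(M_Z\) instead of being switched on at a superpartner threshold): the inverse couplings run linearly in \(\ln\mu\) with slopes given by the beta coefficients.

```python
import numpy as np

MZ = 91.19  # Z mass in GeV
# Inverse couplings at MZ with GUT-normalized U(1): rough input values.
alpha_inv_mz = np.array([59.0, 29.6, 8.5])   # alpha_1, alpha_2, alpha_3

# One-loop running: alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2 pi) ln(mu/MZ)
b_sm = np.array([41 / 10, -19 / 6, -7])      # Standard Model coefficients
b_mssm = np.array([33 / 5, 1, -3])           # MSSM (run from MZ for simplicity)

def alpha_inv(mu, b):
    return alpha_inv_mz - b / (2 * np.pi) * np.log(mu / MZ)

for mu in (1e13, 2e16):
    print(f"mu = {mu:.0e} GeV: SM {alpha_inv(mu, b_sm).round(1)},"
          f" MSSM {alpha_inv(mu, b_mssm).round(1)}")
# Near 2e16 GeV the three MSSM values nearly coincide (about 24 each),
# while the Standard Model triple visibly misses a common point.
```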
Apparent patterns behind fermionic representations
This numerological agreement doesn't prove that grand unification is right. However, the very fact that the leptons and quarks may be organized into representations of a grand unified gauge group is a strong argument in favor of the grand unification theories (GUTs). We don't have any strict proof that supersymmetry or grand unification are realized in Nature; however, we don't have any strict proof that these intriguing and elegant patterns are not used by Nature, either. Whether one prefers to believe that these patterns are realized depends on one's personal philosophy – the belief about whether Nature is fundamentally elegant or whether it is fundamentally a disordered stinky pile of feces.
Which one do you prefer in your living room?
Finally, I want to emphasize that while there is a lot of evidence in favor of new physics that kicks in between the Higgs mass scale and the Planck scale, almost none of the observations above guarantees that such new physics should be visible at the LHC. It may start to operate at higher energy scales. The counterexamples are the hierarchy problem and (partially) dark matter. If the hierarchy problem is solved naturally and without any anthropic excuses, the new physics had better be close to the Higgs mass scale and should be accessible by the LHC.
Also, for completely different reasons, if dark matter is composed of WIMPs, it seems almost necessary that these new particles are light enough so that the LHC should have enough energy to produce them. However, all these conclusions depend on order-of-magnitude estimates so you simply can't be sure that the LHC will discover these new phenomena. On the other hand, even if the LHC discovers no new physics in the following decade, it will still be very reasonable to feel almost certain that new physics ultimately has to kick in at higher energy scales.