CMS: 2.5-sigma hint of R-parity-violating SUSY

The suggested model could be consistent with a sub-GeV or perhaps 5-10 GeV gravitino dark matter candidate!
Yesterday I discussed a CMS paper on multilepton SUSY searches.
It's Friday and we may officially read another paper on a very similar topic, one that also clarifies a discussion about multilepton events that Matt Strassler started a few weeks ago:
Search for Anomalous Production of Multilepton Events and R-Parity-Violating Supersymmetry in \(\sqrt{s} = 7 \,{\rm TeV}\) \(pp\) Collisions (PDF)

If you remember our discussion yesterday, the excesses of multilepton events were ultimately seen as less-than-2-sigma deviations. But this is so Thursday! :-)
On Friday, I may show you 2.5-sigma excesses seen in various exclusion graphs. Let's pick Figure 2 at the end of the paper – but before the long appendices. It shows exclusion graphs for four R-parity-violating constants. Let me choose the first graph out of the four, the exclusion limits for the gluino and squark masses derived from the events sparked by the \(\lambda_{122}\) coupling constant:
I guess that many of you are already familiar with the interpretation of such graphs. The goal is to determine the minimal allowed combinations of the gluino mass and a squark mass from events that depend, in this case, on the \(\lambda_{122}\) coupling. If the Standard Model were the whole story, the available 2.1/fb of data were expected to suffice to draw the dotted exclusion curve. Its 1-sigma (dark blue-green) and 2-sigma (gray) error margins are colorfully shown on the graph. Instead, however, they got much weaker limits, depicted by the solid curve (below and to the left of the colorful strips), which at the top of the picture deviates by as much as 2.5 sigma! Something – either bad luck or Nature – prevented them from imposing more stringent limits.
(The picture also includes another dotted curve, toward the left side and the bottom: the exclusion curve they were able to deduce from the tiny 2010 dataset. Those limits are so obsolete today!)
The 2.5-sigma deviation is equivalent to a nearly 99-percent confidence level that something unusual is going on over there. In climate science, such a huge confidence level about anything would be enough to justify the construction of gas chambers for the deniers. In particle physics, it doesn't mean much – except that I think that a skillful combination of several excesses in several channels and bins that are largely independent (but all of which have the "same sign") could lead to surprising quantities that are e.g. 5 sigma higher than the expectation. To say the least, Figure 2 in the CMS paper contains not just the graph above but four graphs deduced from largely independent observations – events ignited by \(\lambda_{122}, \lambda_{123},\lambda_{233}, \lambda^{\prime\prime}_{\rm assorted}\), where the last entry summarizes the hadronic RPV couplings – and all of them show roughly 2-sigma excesses on average near the top. If you add the deviations (in standard deviations) \(2.5, 1.7, 1.0, 2.0\) in quadrature, you will obtain a 3.7-sigma excess. Of course, if you could add the excesses linearly, which is somewhat justified by the absence of any deficits, you could get to 7.2 sigma. ;-) Oops: I almost forgot to appreciate that the standard deviation of the sum doubles (gets multiplied by the square root of four) in this "linear" treatment, so you get 3.6 sigma or so either way.
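If you want to reproduce this back-of-the-envelope arithmetic, here is a minimal Python sketch of the two naive combination rules used in the paragraph above; the four numbers are the per-channel deviations I quoted, and nothing here is an official CMS statistical combination:

```python
import math

# Per-channel excesses (in standard deviations) read off Figure 2 of the CMS paper
excesses = [2.5, 1.7, 1.0, 2.0]

# Quadrature: treat the channels as independent unit-variance measurements
quadrature = math.sqrt(sum(x ** 2 for x in excesses))

# "Linear": sum the deviations, then divide by the standard deviation of the sum
# of four independent unit-sigma variables, which is sqrt(4) = 2
linear = sum(excesses) / math.sqrt(len(excesses))

print(f"quadrature: {quadrature:.2f} sigma")  # ~3.76
print(f"linear:     {linear:.2f} sigma")      # ~3.60
```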
Let me mention that if you care what the \(\lambda_{122}\) coupling means, it's the coefficient of a cubic term in the superpotential
\[ W = \dots + \frac{\lambda_{ijk}}{2} L_i L_j \bar E_k + \dots \] that couples the first-generation lepton doublet (the left-handed electron and its neutrino; and their superpartners), the second-generation lepton doublet (the left-handed muon and its neutrino; and their superpartners), and the second-generation charged lepton singlet (the right-handed muon; and their superpartners). Such a term preserves either the lepton number \(L\) or the baryon number \(B\) but not both: in this case, it preserves the latter. If it preserved neither, that would be a problem because it would ignite a speedy decay of the proton, in conflict with the experimental evidence of the proton's longevity. If all terms preserve one of \(B,L\), and it must be the same one for all of the terms (and it is \(L\) even for the "hadronic term" in this CMS paper), there's a chance that the proton decay is suppressed because the proton decay needs to violate both \(B\) and \(L\) (unless a combination of the vertices is capable of violating both charges, anyway: but such complicated processes may already be suppressed enough not to contradict the observations).
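For concreteness, one may expand the \(\lambda_{122}\) term in superfield components. In a common convention – the two doublets are contracted with the antisymmetric \(SU(2)\) epsilon tensor, and the antisymmetry of \(\lambda_{ijk}\) in its first two indices is what the factor of \(1/2\) compensates; the hatted symbols are my illustrative names for the chiral superfields, not the paper's notation – the term reads

\[ W \supset \lambda_{122}\, \left( \hat\nu_e\, \hat\mu - \hat e\, \hat\nu_\mu \right) \hat{\bar\mu}, \]

so the vertex always involves two different lepton generations, which is why muons and electrons appear in the final states.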
However, this term in the superpotential violates R-parity because its R-parity is the opposite of the R-parity of the usual terms in the superpotential. In combination, two such vertices with mixed R-parities (the terms in the normal action or the normal potential are bilinear in \(W\)) allow the production or annihilation of an odd number of superpartners, including the decay of one superpartner into ordinary particles with no superpartners. Fine, I am sure most of you are not getting the buzzwords I am emitting, so here is a short lecture on R-parity.
R-parity in supersymmetry
I have already discussed these issues in the text about supersymmetry and dark matter because the conservation of R-parity is both necessary and sufficient for the lightest superpartner – the lightest particle with the negative (nontrivial) R-parity "\({\mathbb Z}_2\) discrete charge" – to be stable. With the right mass, it is an ideal particle from which you may construct dark matter. You get the right concentration of such particles if you calculate how the concentration evolved during the cosmological evolution; you get reasonably weak interactions with the visible matter as well. This kind of dark matter also behaves in agreement with other data about structure formation, etc.
Do I have to explain why the lightest particle carrying a nonzero value of a given type of conserved charge has to be stable? By energy conservation, particles may only decay to lighter particles (look at the decay from the initial particle's rest frame); but because the charge is conserved, at least one of the decay products (final particles) has to carry a nonzero value of the charge. By our assumption, there is no lighter particle carrying the same charge, so the lightest particle has to be stable. That's why the electron and the positron – the lightest particles carrying the electric charge – have to be stable. If the baryon number were exactly preserved, and it is preserved at least approximately, the lightest particles with a nonzero baryon number – the proton and the antiproton – would have to be stable as well.
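In formulas, the kinematic half of the argument is just energy conservation in the rest frame of the decaying particle \(X\):

\[ m_X c^2 = \sum_i E_i \geq \sum_i m_i c^2, \]

so every decay product has to be lighter than \(X\); the conservation of the charge then forbids the decay of the lightest charged particle altogether.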
This argument also holds for multiplicative discrete charges such as R-parity; the "nonzero charge" above has to be replaced by a "nontrivial parity", which means \(R=-1\). Because it is a "parity", its allowed values – or eigenvalues, if you wish – belong to the set \( R\in \{ +1,-1 \} \). The R-parity of a composite system is given by the product of the R-parities of its pieces: the R-parity is negative (we say "odd") iff the number of R-parity-negative pieces in it is odd. The funny thing is that the superpartner of a particle X always carries the opposite R-parity. And another funny thing is that the R-parity of all particles we know, including the God particle whom we almost know, is \(+1\), which means "trivial". So the leptons and quarks and gauge bosons and gravitons and Higgs bosons carry \(R=+1\) while all of their superpartners such as sleptons, squarks, gravitinos, Higgsinos, and gauginos (the latter two categories include the gluino, neutralinos, and charginos) carry \(R=-1\). That's a nontrivial charge that makes these bastards hide (because it makes them heavier).
Another funny thing is that one may write down an explicit formula
\[ R = (-1)^{3B + L + 2s} \] where the signs of the terms in the exponent don't matter (much like multiplying any coefficient by an odd integer), where \(B\) is the baryon number, equal to \(+1/3\) for quarks; \(L\) is the lepton number, equal to \(+1\) for the electron; and \(s\) is the spin, equal to \(s=1/2\) for leptons and quarks. A "coincidence" you are invited to check is that \(R\) defined as this power of \(-1\) equals \(+1\) for all known elementary particles. Well, it's because all bosons have an even \(2s\), so the \(2s\) term contributes nothing modulo two, and all known bosons have vanishing \(B\) and \(L\); all known fermions have an odd \(2s\), so the last term switches the sign, but they also carry either an odd \(L\) (leptons) or an odd \(3B\) (quarks), not both, which cancels the sign via the first two terms.
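If you want to verify this bookkeeping mechanically, here is a minimal Python sketch of the formula above; the quantum numbers are the standard ones, and the particle list is just an illustrative sample (the superpartner spins assume the usual assignments, e.g. a spin-3/2 gravitino):

```python
# A toy check of R = (-1)^(3B + L + 2s) for a few particles and sparticles
particles = {
    "electron":  {"B": 0,     "L": 1, "s": 0.5},
    "up quark":  {"B": 1 / 3, "L": 0, "s": 0.5},
    "photon":    {"B": 0,     "L": 0, "s": 1.0},
    "selectron": {"B": 0,     "L": 1, "s": 0.0},  # same B, L; spin shifted by 1/2
    "gluino":    {"B": 0,     "L": 0, "s": 0.5},
    "gravitino": {"B": 0,     "L": 0, "s": 1.5},
}

def r_parity(B, L, s):
    # The exponent 3B + L + 2s is always an integer; round() absorbs float noise
    return (-1) ** round(3 * B + L + 2 * s)

for name, q in particles.items():
    print(f"{name:10s} R = {r_parity(**q):+d}")
# Standard Model entries come out +1; all the superpartners come out -1
```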
Also, it's trivial to see that superpartners have the opposite R-parity relative to their comrades. Superpartners have the same \(B,L\), but their \(s\) differs by \(1/2\), so you get exactly one sign flip from the last term in the exponent. It works exactly as needed. Note that, because of the explicit formula for \(R\) and because the angular momentum is conserved, a full conservation of \(B\) and \(L\) would imply R-parity conservation as well. So the R-parity-violating terms have to violate either \(B\) or \(L\) or both; but "both" was excluded because it would lead to a very rapid proton decay, as mentioned above.
Popularity contests and their irrelevance
I must say that R-parity-violating models are not too popular among theorists because they allow all the superpartners to decay, so one doesn't have a natural dark matter candidate. One would have to look for other sources of dark matter and/or deny the existence of dark matter. Theorists often prefer "the most economic" explanation, and because SUSY offers a dark matter candidate, namely the lightest superpartner (LSP), why shouldn't we believe that this is what Nature chose?
However, aside from this automatic production of a dark matter candidate, there is no really solid reason to "know" that supersymmetry in Nature has to be R-parity-conserving. Nature often chooses solutions that are not "quite minimal". Equally importantly, experimenters are expected to do all the dirty work and pay almost no attention to the emotional outbursts of theorists who almost uniformly say that R-parity-violating models of supersymmetry suck. "They suck... so what?" the experimenters would respond. During our work, we have to suck and lick and do many other dirty things as well: that's the glory of being an experimental servant of the empire of science. ;-)
So it's quite legitimate that experimenters look for evidence supporting R-parity-violating models and other "not perfectly economic" but otherwise consistent models as well. And there is a nonzero chance that they are going to see a signal here. There is one more thing I should mention: supersymmetry is very beautiful and well-defined, and lots of technology relevant for it has been constructed. It's such a clear queen among the realistic models of new physics that even its ugly edition, the R-parity-violating supersymmetric models, is considered the best benchmark for the interpretation of multilepton events without missing energy. (The missing energy otherwise carried by the invisible LSPs – the dark matter particles according to R-parity-preserving SUSY models – is absent in R-parity-violating models because there are no stable superpartners in these models at all, so any missing energy has to be composed of ordinary neutrinos.) So you should view R-parity-violating models as a "representative" of all models with a similar multilepton signature (without missing energy). Even if this signal grew and became a discovery, it wouldn't prove that R-parity-violating supersymmetric physics is the cause: there could be other explanations as well, although theorists would probably believe that R-parity-violating SUSY would be the most likely explanation.
In the fast comments, you may watch some hugely positive progress I made while investigating what the scenario would mean (my new research plus searches of the hep-ph literature). A gravitino with mass below 1 GeV, or perhaps 5-10 GeV, could be the dark matter particle in R-parity-violating models – in the second case naturally (without fine-tuning) accommodating primordial nucleosynthesis, thermal leptogenesis, and cosmological constraints – and it could be what is seen in some of the dark matter direct search experiments when the gravitino hits a nucleus and emits a neutrino in the same direction (possible, and strong enough, due to the R-parity-violating terms). In the first case, below a GeV or even 100 MeV, I think that the gravitinos' inelastic collisions with nuclei – producing the same nuclei plus neutrinos (much like gamma rays) – could produce recoil energies analogous to those from elastic collisions of a 5-10 GeV neutralino in the direct searches.
To settle these questions, we will almost certainly need much more time. The first really relevant question to be answered is whether or not these excesses are just flukes. If they're not flukes, you should notice that this CMS paper builds upon 2.1 inverse femtobarns, but CMS has already accumulated 5.2 inverse femtobarns and ATLAS+CMS together possess 10.4 inverse femtobarns, about 5 times as much as 2.1. So if the excess were due to new physics, the already-recorded combined data would be enough to enhance 2.5 sigma to 2.5 times the square root of five, which is 5.6 sigma – a discovery (of so-far-not-quite-understood new physics) that could be announced before the LHC restarts in 2012. Stay tuned.
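If you want to play with the naive \(\sqrt{L}\) scaling yourself, here is the one-line arithmetic as a Python sketch; the luminosities are the ones quoted above, and the scaling assumes a purely statistics-dominated excess that is really there:

```python
# Naive luminosity scaling of a statistical significance: sigma grows like sqrt(L)
lumi_paper = 2.1    # fb^-1 used in the CMS analysis
lumi_total = 10.4   # fb^-1 recorded by ATLAS + CMS combined (as of this post)
sigma_now = 2.5

sigma_projected = sigma_now * (lumi_total / lumi_paper) ** 0.5
print(f"projected: {sigma_projected:.1f} sigma")  # ~5.6 sigma, if the excess is real
```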
See also Tommaso Dorigo who tries to be secretive about what the excess, if real, would mean.
Bonus: why Zitterbewegung is unphysical
At the U.S. LHC blogs, Prof John Huth of Harvard wrote about an orthodox Jew praying to a FedEx box in the People's Republic of Cambridge – that had to be a shock not only for John Huth but for the whole mostly Marxist (and therefore atheist) population of the township, indeed. In the article, he criticizes physicists for ignoring die Zitterbewegung (German for "trembling motion": a hypothetical jitter of electrons "predicted" before the predictor understood that there were also positrons).
I am very surprised that it is not generally understood that the whole idea of Zitterbewegung is nonsense arising from an invalid physical interpretation of an equation. Just to be sure, Erwin Schrödinger – who also failed to understand the probabilistic interpretation of quantum mechanics, and this is a related flaw – tried to solve the Dirac equation once Dirac wrote it down. And he combined some "positive-energy" and "negative-energy" solutions. Because their energies differ by at least \(2mc^2\), not too surprisingly, the interference of these two components "produces" some observables that oscillate with the gigantic angular frequency \(2mc^2/ \hbar\).
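To appreciate how gigantic that frequency is, here's a throwaway Python evaluation of \(2mc^2/\hbar\) for the electron (rounded standard values of the constants):

```python
# Angular frequency of the would-be Zitterbewegung of an electron: 2*m*c^2/hbar
m_e  = 9.109e-31   # electron mass [kg]
c    = 2.998e8     # speed of light [m/s]
hbar = 1.055e-34   # reduced Planck constant [J*s]

omega = 2 * m_e * c**2 / hbar
print(f"omega ~ {omega:.2e} rad/s")  # ~1.55e21 rad/s
```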
However, nothing like that may occur in the real world because the single-particle interpretation of the Dirac equation – with positive-energy as well as negative-energy solutions allowed – simply doesn't apply and cannot apply to the real world. As Dirac quickly understood (while Schrödinger didn't), the negative-energy solutions have to be treated differently. Nature occupies them because it's energetically preferred; on the contrary, it's the holes in this filled "Dirac sea" of negative-energy states that manifest themselves as particles, namely as positive-energy, positive-charge positrons. That's really what Dirac got his Nobel prize for.
Consequently, one is forced to consider the second-quantized, multiparticle picture in which the Dirac "wave function" becomes a field composed of creation and annihilation operators. The field \(\Psi^\dagger\), carefully distinguished from \(\Psi\), is decomposed into creation operators for the positive-energy modes and annihilation operators for the negative-energy modes (and vice versa for its Hermitian conjugate \(\Psi\)). So by using \(\Psi\) only, or \(\Psi^\dagger\) only, you may only create states with one sign of the energy; the other half of the Fourier components annihilates the vacuum state and gives you no new state. So the interference between the positive-energy and negative-energy components is gone because only one of them is nonzero in a state of a well-defined charge.
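To make the decomposition explicit, here is the textbook mode expansion in one common convention (the normalization of the measure differs between books, so take it as illustrative):

\[ \Psi(x) = \sum_s \int \frac{d^3 p}{(2\pi)^3 \sqrt{2E_p}} \left( b_{\vec p,s}\, u_s(\vec p)\, e^{-ip\cdot x} + d^\dagger_{\vec p,s}\, v_s(\vec p)\, e^{+ip\cdot x} \right), \]

where \(b\) annihilates electrons (multiplying the positive-energy solutions \(u_s\)) and \(d^\dagger\) creates positrons (multiplying the reinterpreted negative-energy solutions \(v_s\)). Acting on the vacuum, only one half of the modes survives, exactly as described above.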
Of course, you could also try to mix one-electron states with one-positron states but this is unrealistic because they belong to different "small" superselection sectors described by the conserved electric charge. If you know the electric charge of the initial state, you know the electric charge of all states evolved from it in the future, so you will never be able to produce a meaningful controllable linear superposition of states with different values of the electric charge if you were not given such a superposition from God at the very beginning (and you were not). Moreover, even if you could do so, you would get no trembling motion because one electron and one positron of the same velocity have the same positive energy – no difference like \(2mc^2\) appears here. To see some \(2mc^2\) energy difference that is allowed, you would have to consider the interference between \(N\)-particle and \((N+2)\)-particle states but that would no longer be any Zitterbewegung – it wouldn't be related to the single-particle Dirac equation.
Those are the simple reasons why die Zitterbewegung can't occur in the real world and why Erwin Schrödinger, a man puzzled about the proper physical interpretation of the equations, was really confused about elementary things in quantum field theory, a subject he never seriously studied. Too bad that people miss this elementary point even 80+ years later.
Soccer update
I don't expect readers of this article to be interested in soccer, and I am not any canonical fan of the game either, but you may change your mind for a while. Our town's team, FC Viktoria Pilsen, is using the matches in the national Extra League to get ready for the second match against Barcelona (played in Prague) next Tuesday. There are various messy players in Barcelona, the planet's best team, so it's not easy: having lost the first match just 0-to-2, with honor and without painful feelings, we must now enter the trembling motion at Barcelonian frequencies. The video below shows how Mr Milan Petržela scored his incredible Catalan-like long-distance high-speed solo in a match against FC Hradec on Friday night, which we won 5-to-0. A beauty:
Petržela told the journalists that he does not study Messi in any systematic way and the situation just made him a lucky guy; however, superstitious people could notice that it was he who won the world's best player's jersey in an internal brawl after the first match at Nou Camp.