CMS turned out to be much more "aggressive" relative to the "conservative" ATLAS detector and it has already provided us with some hints. But what they published today, in the paper called
Measurement of the Production Cross Section for Pairs of Isolated Photons in \(pp\) collisions at \(\sqrt{s} = 7\,{\rm TeV}\) (copy at CERN server here) is by far the most self-confident assertion of new physics that has come out of the LHC so far. In all the previous examples, we would be discussing relatively small bumps and excesses extracted from large amounts of data – usually an inverse femtobarn or more. However, the paper above claims that there is a huge deviation from the Standard Model – and a very small amount of data, namely the 36 inverse picobarns collected in 2010, is sufficient to see it.
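To decode the units: the expected number of events of a given type is the cross section multiplied by the integrated luminosity. Here is a minimal sketch; the cross section below is purely illustrative and is not the value measured in the paper.

```python
# Toy illustration: expected event count = cross section x integrated luminosity.
# The cross section is a made-up illustrative number, NOT the CMS measurement.
sigma_pb = 30.0       # hypothetical diphoton cross section in picobarns
lumi_inv_pb = 36.0    # the 2010 dataset: 36 inverse picobarns
n_expected = sigma_pb * lumi_inv_pb
print(n_expected)     # 1080.0 expected diphoton events in this toy example
```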
The paper, which has clearly been tested for quite some time because they apparently started to work on it in 2010 – although that doesn't guarantee that no serious bugs are left in it – studies the number of events with two photons (or "diphotons") in the final state. The observations are compared with the predictions of next-to-leading-order perturbative QCD calculations.
Most of the photon pairs are predicted – and indeed observed – to fly in nearly opposite directions. If you use \(\phi\) for the azimuthal angle measuring rotations around the axis along which the proton beams move, the difference between the two photons' angles, \(\Delta \phi\), is usually close to \(\pi\). This is essentially due to momentum conservation because the photon pairs tend to be created "simultaneously", e.g. in quark-antiquark annihilations (yes, protons also contain antiquarks, although just the "virtual" or "non-valence" ones).
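For concreteness, here is a minimal sketch of how the separation \(\Delta\phi\) is computed from the two azimuthal angles, folding the difference into the interval \([0,\pi]\):

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation of two photons, folded into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

# A perfectly back-to-back pair gives delta_phi = pi:
print(delta_phi(0.3, 0.3 + math.pi))  # ~3.14159
```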
And indeed, if you restrict your attention to these dominant photon pairs that are back-to-back as far as the angle \(\phi\) goes, the theory agrees satisfactorily with the observations. However, for all other angles – for the scarcer photon pairs that are not anti-collinear – the CMS reports a huge deviation from the predictions. To see an example, here is their Figure 8:
Click to zoom in.
On the left chart, theory predicts the yellow line plus or minus a typical error margin given by the violet strip. However, the observations sit at the shiny blue-green crosses. Similarly, the right graph is a rescaled version of the same thing. Theory predicts that the observations will be near the \(y=0\) horizontal line, plus or minus the error margin given by the violet band or the dotted lines. However, the observations given by the blue-green crosses visibly deviate: can you see the deviations? ;-)
The deviations are so large that it makes almost no sense to talk about fluctuations and confidence levels. You can just see optically that the observed curve runs through a qualitatively different place in the chart than the predicted one. The datapoint near \(\Delta\phi\approx 2.7\) by itself deviates by something like 4 standard deviations, and the two adjacent datapoints on its left show 3-sigma excesses, too. In combination, you almost certainly get above 5 or 6 sigma. Figure 7 shows similar deviations even when a different constraint on the pseudorapidity is imposed. Note that those photons typically carry something like 50 GeV of transverse momentum, and the pairs have between dozens of GeV and 300 GeV of invariant mass.
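As a sanity check of that back-of-the-envelope claim, one may combine the three per-bin deviations in quadrature – a crude approximation that assumes independent, Gaussian bins and ignores the look-elsewhere effect:

```python
import math

# Per-bin significances read off the text above (approximate, in sigma).
deviations = [4.0, 3.0, 3.0]

# Quadrature combination: valid only for independent Gaussian deviations.
combined = math.sqrt(sum(d * d for d in deviations))
print(f"combined significance ~ {combined:.1f} sigma")  # ~5.8 sigma
```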
By now, the LHC has accumulated about 144 times more data so all the significances should have grown by a factor of 12 or so. The bin showing a 4-sigma excess should be showing something like a 48-sigma excess today. Some members of the CMS must know damn well whether this is the case or not. My guess is that if they knew the excess had gone away, they would have vetoed this paper based on a smaller amount of data. ;-)
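The factor of 12 follows from the usual rule that a significance of the form \(S/\sqrt{B}\) grows like the square root of the integrated luminosity, since both the signal \(S\) and the background \(B\) grow linearly with it:

```python
import math

# Both the signal S and the background B scale linearly with luminosity L,
# so the significance S/sqrt(B) scales like sqrt(L).
lumi_ratio = 144.0
old_significance = 4.0   # the ~4 sigma bin in the 2010 data
new_significance = old_significance * math.sqrt(lumi_ratio)
print(new_significance)  # 48.0
```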
So the excess of all diphotons (except for the nearly anti-collinear ones) seen by the CMS is either due to a big human error uncaught by dozens of clever particle physicists; or a convincing proof of new physics. I have almost no chance of finding out what the error in the analysis could be, so it's much more appropriate for me to dream about the new physical phenomena that could be causing this huge excess if it is real. But even this question is pretty hardcore phenomenology. Some diphotons may be produced together with, or because of, light Higgs bosons; extra-dimensional gravitons; and some supersymmetric states – a list quoted from the new CMS paper. At this vague level, the diphotons may clearly be due to anything, especially if I tell you that diphotons may also be a sign of the little Higgs models; fortunately, the little Higgs models predict a deficit, not an excess, so they're probably inconsistent with the data. ;-)
Most of the relevant diagrams for processes in which proton collisions produce photon pairs involve quark loops. For example, here are three one- and two-loop diagrams from a 2002 paper by Bern, Dixon, and Schmidt:
Click to zoom in. Obviously, simpler diagrams such as the tree-level quark-antiquark annihilation aren't depicted in the picture but have to be included in the calculations.
The particle running in the loop converting two gluons in the initial state to two photons (plus something else) must carry both electric and color charge – so it must be a quark (or an unknown new particle such as, well, a squark) – and the box diagram on the left is the minimal contribution. The middle diagram adds a loop correction with gluons (these two diagrams have the same initial and final states, so it's important to appreciate that they interfere with each other) while the last, pentagon diagram, sponsored by the U.S. Department of Defense, adds an extra gluon to the final state.
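To see why the interference matters: amplitudes with identical initial and final states must be summed before squaring, so the rate contains a cross term that may have either sign, schematically

\[
|\mathcal{M}_{\rm box} + \mathcal{M}_{\rm 2\text{-}loop}|^2 = |\mathcal{M}_{\rm box}|^2 + 2\,{\rm Re}\left(\mathcal{M}_{\rm box}\,\mathcal{M}_{\rm 2\text{-}loop}^*\right) + |\mathcal{M}_{\rm 2\text{-}loop}|^2.
\]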
So yes, this big effect may simply be caused by a light Higgs boson. The accuracy of the data the LHC has collected by now might arguably be enough to determine the Higgs mass rather accurately – of course, such a derivation needs to assume that we know everything but one parameter and that the shape of the data actually agrees with the model that is meant to replace the Higgsless Standard Model that's been OK so far. In the well-defined Higgs search business, CMS favored a 119 GeV Higgs by looking at the general diphoton events in September 2011 (interpretation by LM).
The previous paragraph may have sounded disappointing. We will probably get the Higgs in a month or two, anyway. However, what I have hidden from you is that according to my reading of the literature, e.g. Mrenna and Wells 2000 as well as the paper by Bern et al. above, the ordinary Standard Model Higgs boson's decays into two photons shouldn't be visible in a dataset this small – the branching ratio of a light Higgs to two photons is only about 0.2 percent. One needs beyond-the-Standard-Model Higgs particles, such as the Higgs bosons from the Minimal Supersymmetric Standard Model, to enhance the decay of the particle to photon pairs so that it could be observable.
You may feel some déjà vu here.
In April 2011, people were discussing a leaked memo masterminded by Ms Wu that claimed the observation (at 4 sigma) of a diphoton signal from a 115 GeV Higgs boson seen by ATLAS – the collaboration that was the bolder one half a year ago. Now we see something remotely similar – but it comes from the CMS and it has been officially released as an arXiv preprint. Recall that the ATLAS bump went away with a larger amount of data, or at least we were told so. If you are confused about the status of the experimental claims on these related matters, you're not alone.
It's interesting to refresh one's memory by reading the April 2011 blog entry; it says that the big Wu excess was too big even for the MSSM but some NMSSM models would be able to predict the huge enhancement of the diphoton events. Of course, the current CMS excess of diphotons is not that huge – at most by a factor of a "few" in those bins – so it could be compatible with the MSSM. Obviously, an expert phenomenologist is needed to "see" what the detailed CMS data actually suggest.
The excess of the diphoton events seems really large so it may be due to some human error. Or not.
We may also wonder why it is only in November 2011 that we're reading papers based on the 2010 data for the first time. They apparently spent quite some time looking for errors. Bold papers are delayed in this way. I would say that particle physics is affected by something we may call the inverse publication bias. In soft sciences such as the environmental sciences, people love to scream too early. They see a tiny bump somewhere, so they publish a paper claiming that birds are getting smaller, and therefore the Solar System is collapsing. They see another bump so we read another paper that birds are getting bigger, and therefore the Milky Way is dying. We're bombarded by tons of crap with bogus observations.
In particle physics, the reticence works in the opposite direction. A paper that potentially contains an important new effect was delayed by almost a year. Is that a good thing? Do you get better science if bold claims are delayed in this way? I am not sure. There is a legitimate reason why bold claims are delayed and why they have to face a greater number of tests: extraordinary claims simply do require extraordinary evidence.
When you get an extraordinary result that wildly disagrees with much of the previous knowledge, it's just more likely that you have made a mistake, which is why it's more sensible to spend more time looking for it. Obviously, at some point, you should give up the search for mistakes and publish your paper. The CMS did it now, which seems kind of appropriate to me. It's up to others to find the silly errors they may have made – or the mundane or shocking interpretations of what they found if they have made no brutally serious mistake.
The price we have paid for this reticence is that for a year, we may have been misled about the question of whether the LHC has seen some deviations from the Standard Model. We thought that the answer was "No". However, the more accurate answer was that the LHC has seen no deviations from the Standard Model in the papers it has already published – but there could have been papers that remained unpublished precisely because they did show a deviation from the Standard Model. ;-)
Still, when I look at the partly revealed, partly classified data from the LHC, it seems that the surprising new events with missing energy have been absent, unless there is some unexpected subset that has been kept classified so far as well. ;-) There are hints of new physics in channels without missing energy. A consequence is that in the context of SUSY, I have been re-educated as a believer in R-parity violation. Less realistically, the "hints" given by the LHC – the "missing missing energy" in the new events – constrain other models of beyond-the-Standard-Model physics in a specific way, too.
I also recommend you check the new hep-ph papers. There are no fewer than three new preprints that claim, and give lots of encouraging details to support the claim, that SUSY has survived the first inverse femtobarn of the data. See the papers by Kats et al., Brust et al., and Papucci et al.
See also Matt Strassler, who believes that an error in the theoretical calculation, or in its application to the data, is the most likely source of the discrepancy.