Update: See the March 2012 update of this article, based on 4.7/fb: Multilepton SUSY search leads to upper limits 1-1.5 sigma weaker than expected
The new paper by the CMS collaboration on Multileptonic SUSY searches (PDF) has been made available to the inhabitants of Planet Earth. We've been used to very clear exclusions of new physics by the LHC. There were often modest deficits of events or perfect matches, with 2-sigma excesses already being a rarity. That's why so far, the abstracts of papers have been unconditionally saying that new limits were imposed, that particular models were excluded, and people openly or secretly hoping for new physics around the corner (your humble correspondent belongs to the "closet" group here) were matricized.
(This verb is a new one and its meaning may be understood once you realize how some people have renamed screwing string theory.)
However, the winds may be changing. The data are coming gradually, so the change of the wind isn't abrupt and can't be abrupt. But you may see some change if you read e.g. the following sentence from the abstract:
"The observations are mostly consistent with expectations from standard model processes."
Now the consistency is weakened by the delicate adverb "mostly". ;-) We may be slowly entering new territory.
As I have suggested in previous articles about hints of new physics seen by CMS in the multilepton and multijet contexts, the CMS has apparently seen excesses in many possible "subgroups of events" with many leptons or many jets in the final states. It seems pretty clear that given the data in the table, if you combined these subgroups into larger groups in an appropriate way, you would obtain a 5-sigma or greater "overall deviation" from the Standard Model. Virtually all of the eye-catching deviations are excesses.
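To get a feeling for the arithmetic behind that statement, here is a minimal sketch in Python. The event counts are invented for illustration and are not taken from the CMS tables; the only point is that several individually modest excesses can add up to a much larger overall deviation once the bins are combined.

```python
# Illustrative only: the event counts below are invented, not taken from the
# CMS tables. The point is purely arithmetic: several individually modest
# excesses can add up to a large overall deviation once the bins are combined.
from scipy.stats import norm, poisson

# (observed, expected background) for a few hypothetical multilepton bins
bins = [(9, 4.0), (7, 3.0), (5, 2.0), (6, 2.5)]

for obs, exp in bins:
    p = poisson.sf(obs - 1, exp)      # P(N >= obs) under background only
    print(f"obs={obs}, exp={exp}: {norm.isf(p):.1f} sigma")

# Naive combination: treat the union of the bins as one big bin
obs_tot = sum(o for o, _ in bins)
exp_tot = sum(e for _, e in bins)
p_tot = poisson.sf(obs_tot - 1, exp_tot)
print(f"combined obs={obs_tot}, exp={exp_tot}: {norm.isf(p_tot):.1f} sigma")
```

With these made-up numbers, each bin is only an unremarkable 1.6-2 sigma high, but the merged bin deviates by more than 4 sigma; a real combination would of course have to account for systematics and correlations between the bins.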
This one-sided bias manifests itself rather clearly in Figure 7, reproduced above, which depicts the new limits in the slepton co-NLSP scenario. To understand it and to be careful about the sign, imagine that the CMS folks are supersymmetry haters – which is easy to imagine if you take into account that the likes of Tommaso Dorigo are among them. They are trying hard to chase supersymmetry away, to impose laws of the type "superpartners aren't allowed here" or "superpartners must be heavier than XY and aren't allowed to be a part of the low-energy physics" that are as stringent as possible.
They expected that the lower limits for the masses of the chargino on the x-axis and the gluino on the y-axis would be given by the dotted yellow curve. But the physicists only obtained the limit given by the lower, shining blue line. You see that it's somewhere in between the full brown curve (above the blue one) and the weak dotted brown curve (below the blue one), which means that the limits are approximately 1.5 standard deviations below the expectations.
A single quantity that deviates by 1.5 sigma from the expectation would be unremarkable: it happens in roughly 10% of cases. However, I think that the actual signals are stronger than suggested by this overall figure. How the excesses in various bins combine to modify the shape of an overall exclusion graph such as the one above is a delicate piece of statistics where you may make many errors. However, my experience with looking at similar 2-dimensional graphs tells me that so far, the exclusion lines were much closer to the expected ones, so something may start to be different here.
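For completeness, the "roughly 10%" figure is easy to check in the Gaussian approximation: a 1.5-sigma fluctuation of a single quantity occurs in about 7% of cases if only excesses count and about 13% if deviations of either sign count.

```python
# How often does a single quantity fluctuate by at least 1.5 sigma?
# (Gaussian approximation; prints roughly 6.7% one-sided, 13.4% two-sided.)
from scipy.stats import norm

print(f"one-sided: {norm.sf(1.5):.1%}")       # only excesses count
print(f"two-sided: {2 * norm.sf(1.5):.1%}")   # deviations of either sign count
```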
OK, I could sometimes preserve the original colors of the curves, couldn't I? :-)
In a similar way but less remarkably, their new exclusion lines for the CMSSM benchmarks – both for \(\tan\beta=3\) and \(\tan\beta=10\) – almost exactly coincide with the lines "1 sigma weaker" than the median expected exclusion curves. Again, if I say it in this way, it looks totally unremarkable. But because of the "universality" of this excess of multilepton events – the observed graph is "uniformly" 1-sigma below the expected one – it could be speaking a much louder language than what the 1-sigma figure indicates.
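Here is a rough way to quantify why a "uniform" offset may speak louder than one isolated fluctuation. It is an idealized sketch that assumes statistically independent bins, which the mass points along an exclusion curve certainly are not, so treat it as an upper bound on the effect rather than a reanalysis of the CMS curves.

```python
# Idealized intuition pump: if the deviations in N bins were statistically
# independent and each bin sat exactly 1 sigma above the background
# expectation, the combined (Stouffer) Z-score would be
# sum(z_i)/sqrt(N) = sqrt(N). Points along a real exclusion curve are
# strongly correlated, so the actual gain is smaller.
import math

for n in (1, 4, 9, 16, 25):
    z_combined = (1.0 * n) / math.sqrt(n)
    print(f"{n:2d} independent 1-sigma bins -> {z_combined:.0f} sigma combined")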
Just an example of how the curves in Figure 8a of the paper, also reproduced above, deviate: for \(m_0=300\,{\rm GeV}\), the median expected lower limit on \(m_{1/2}\) was \(280\,{\rm GeV}\). However, the observed one was actually just \(180\,{\rm GeV}\).
Imagine that new physics is going to gradually emerge in those 20/fb of data that each major detector will collect by the end of 2012. In that case, we will be seeing increasing hints of new physics and excesses. A few more papers will still be formulated as "exclusions of new physics"; however, the disclaimers and qualifiers will be getting increasingly visible. At some moment, the amount of deviations will become incompatible with the very idea that the LHC collaborations should be writing new exclusion papers. The new exclusion curves won't be stronger than the previous ones, despite a big increase in the number of collisions that are used: they will stop right before the SUSY parameters that Nature chose (or the values of the parameters in a wrong theory that most accurately mimics the correct one). People will be encouraged to reformulate the calculations as searches for positive evidence again.
In these transition periods, there are of course subtle questions about what you should assume and how hard you should look for something. On this blog, I and many of you have often been criticizing the hyping of the results. If you want some 2-sigma evidence to support a big claim you want to make, you will surely find it: you will cherry-pick, and the cherry-picked evidence will allow you to hype the claim anyway. The empirical data are just "whores" that help you to do something you want to do independently of any data.
However, one must obviously be careful about the opposite mistake or misconduct: one shouldn't deny the evidence. If you decide that the LHC won't see new physics, you may keep on writing papers saying that you haven't really seen anything. You choose not to study the channels where the deviations occur at all, or you choose to combine them with others so that striking deviations in narrower bins are mixed with a larger number of collisions and their explosive power is diluted. And you may choose to say that an occasional 3-sigma, 4-sigma, or another 4-sigma deviation is no big deal. And you may be saying these things increasingly frequently.
So people may be biased and may have guesses that later turn out to be right or wrong. However, an advantage of repeatable science such as particle physics is that this uncertain era is always temporary. If the uncertainty of your conclusions boils down to statistical errors, you may collect many more collisions and you will collect them. Every time you quadruple the number of collisions, you expect the "number of standard deviations" backing your positive or negative claims to double. Are you sure that your 3-sigma claim is right but you don't convince everyone because 3-sigma is too little for them? The recipe is simple. Take a 4 times larger amount of data (collisions) and your 3-sigma signal will grow to approximately 6 sigma. That will still fail to convince many that you avoided all possible mistakes if you're an opera singer and your claim is far-reaching and potentially acausal, but it will probably be enough if you just want to claim that there is another new particle species.
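This doubling is easy to see in the usual \(S/\sqrt{B}\) approximation. A toy sketch with invented signal and background rates (the specific numbers don't matter, only the fact that both grow linearly with the integrated luminosity):

```python
# A toy estimate of why quadrupling the data roughly doubles a significance.
# The signal and background rates per 1/fb below are invented; only the
# scaling matters: S and B both grow linearly with the luminosity L, so the
# naive significance S/sqrt(B) grows like sqrt(L).
import math

s_rate, b_rate = 2.0, 5.0          # hypothetical signal/background events per 1/fb
for lumi in (5.0, 20.0):           # quadrupling the integrated luminosity
    S, B = s_rate * lumi, b_rate * lumi
    print(f"L = {lumi:4.1f}/fb: S/sqrt(B) = {S / math.sqrt(B):.1f} sigma")
```

With these assumed rates, a 2-sigma hint at 5/fb becomes a 4-sigma signal at 20/fb, which is the scaling described above.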
In the near future, we may see whether these excesses reported by the CMS were just flukes or whether they will keep on growing.