14 July 2012

Questions about the ATLAS 2011 results

The following link shows the original 2011 data from the ATLAS experiment, taken from https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HIGG-2012-17/

https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HIGG-2012-17/fig_01a.png

We all recall that the peak was much higher than the peak expected for a Higgs at 125 GeV.

In summer 2012, the same 2011 data was analyzed again, once it was known that CMS had also seen something at that energy, and it was graphed as follows:

https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/CONFNOTES/ATLAS-CONF-2012-091/fig_09a.png
Even though the bump in this reselected 2011 sample looks, in a sense, less clear than in the original 2011 sample, it now fits the prediction much better. I was even told that the sigma value of the second graph is higher than that of the first. (Is that so?)

Maybe somebody can explain this? Did I use the correct graphs? Is there some mistake in this post?


Update: This seems to be such a touchy issue that nobody dares to comment, not even anonymously. That is a really bad sign.

12 comments:

  1. Clara,

    Quite frankly, this question can only be answered by people within the ATLAS collaboration. Phil Gibbs has given his opinion on his blog, but he is an outsider to the LHC (as are many other bloggers, with few exceptions of course).

  2. Ervin,

    true. I hope that the 2011 graph will not change every six months.

  3. Clara --

    I can't speak specifically to this graph, but there is no question that the data was reanalyzed with a re-optimized [read, improved] and blinded analysis. It should surprise no one that data moves around when you do that; the 2011 data was not looked at in blind fashion (that wasn't possible with such a broad search target) and the analysis wasn't done as well as this new one. In general, people ought to be a little more cautious when they look at this data, rather than just assuming it comes out of the machine ready to be plotted, which it certainly does not.

    For instance, look at the CMS four-lepton results from 2011 and compare them with 2012. There is a small peak in the 2011 data around 119 GeV. Look at it in the reanalyzed 2012 data; it's gone. Where did it go? The CMS people reoptimized their analysis from scratch and set their criteria in a blind fashion. When they unblinded, two of the events around 119 (if memory serves) no longer passed their criteria. And they do talk about it; it's not a secret.

    If this seems worrying to you, my point to you is that you should have been worried about this kind of obvious issue in the first place, back in December, rather than taking opinions of certain naive "experts" too seriously. Data analysis looking for small signals in large amounts of data is not an unambiguous affair; many choices get made, and when the data is not abundant, the details of the results can depend on those choices.

    The real issue is not whether the plots change. They generally do. The real issue is whether the particle is there or not. And the re-optimized and blinded analyses from 2012 are significantly improved over those from 2011, not just because there's more data but because that data can be put to more effective use. And the evidence that the particle is there is now very strong.

    So when you say "I hope that the 2011 graph will not change every six months", I say in reply, "I hope it does". I hope it does, because that will mean the experimentalists will be doing a better job of calibrating their experiment and weeding out problems six months from now than they are today. And in 2013 they will be able to extract more information from their 2011 data than they can in 2012, which is more than they obtained in 2011.

    best, Matt Strassler

  4. Matt,

    thank you for the information. To an outsider, the difference between the first and second 2011 graph looks like a statistical fluctuation that, under better scrutiny, slowly disappears. If in 6 months the peak is gone completely, we will have a problem. I hope that this will not happen.

  5. Clara,

    In the absence of new "nearby" physics, discovery of the SM Higgs alone yields no clues on the fine-tuning, flavor structure, electroweak chirality, and all the other puzzles facing the SM. So, even if the evidence for the SM Higgs boson becomes indisputable by the end of 2012 (as the mainstream hope goes), HEP will be left with outstanding theoretical challenges. Unfortunately, this seems to be the bottom line, at least from where we stand now.

  6. Ervin, indeed, the LHC has not found any new physics. The arXiv has not recovered from this shocking result yet. We live in interesting times.

    Replies
    1. It is certainly true that if nothing except a Standard Model-type (i.e. simplest) Higgs turns up at the LHC, that will leave theoretical and experimental particle physics with very profound challenges; a scalar particle with no protection for its mass is unknown in particle physics and (without tuning) in condensed matter, as far as I am aware. It would be extremely puzzling, and neither the theoretical nor the experimental way forward would be entirely clear. However, we're a long way from knowing that this is our situation, because of what is known as "decoupling": it is unfortunately very easy for a new class of particles and forces in nature to leave us hanging, with one Higgs boson (possibly among several) which closely resembles the Standard Model Higgs. And many particles and phenomena would still be hiding, if they were present, given that most studies of LHC data have been relatively cursory; for instance, mass limits on many particles that don't carry strong interactions are still at 200 GeV. Very precise measurements of the new particle (and of everything else in LHC data) are going to be needed before we will really start to be confident that we are dealing with the Standard Model alone.

      Also I wouldn't describe the current situation as "shocking"; it was always one of several scenarios on the agenda, and indeed people were preparing for this possibility as much as a decade ago. In particular I personally am not shocked.

    2. Matt,

      thank you for your thoughtful reply.

    3. In my opinion, caution is well advised on any proposed BSM scenarios, including models inspired by the decoupling limit theory.

      Having indistinguishable replica(s) of the minimal Higgs boson, while certainly possible on theoretical grounds, opens up many new questions. For instance:
      1) Why would Nature devise such a contrived mechanism that clearly complicates the spectrum of the Higgs sector, at odds with the idea of naturalness and Occam's razor?
      2) Is the non-minimal Higgs sector stable under loop corrections and immune to the nonlinear dynamics of field theory? In particular, what if the RG equations of this non-minimal Higgs sector become non-perturbative at some energy scale above the EW scale?
      3) How can the decoupling limit theory explain the long list of unsettled SM puzzles, including fine-tuning? And so on...

      Let's keep in mind that, in the pre-LHC era, many theorists were convinced that new physics must show up at energies of a few TeV to stabilize the EW scale. Let's also recall that Veltman's analysis of the rho-parameter clearly indicated that the Higgs boson could not exist. These (and many other examples) teach us that being "plausible" is a far cry from being "real".

    4. The LHC will guide us through the darkness beyond the standard model!

  7. Clara: I think you are misinterpreting the relation between the two graphs. It can't be a statistical fluctuation that slowly disappears; it is the same 2011 data, analyzed two different ways. Statistical fluctuations disappear when you obtain more data; that is not what is going on here. What is happening here is systematic.

    Two things are happening at once. First, binning of data can fool your eye, and second, recalibration of the measured photon energies, as a function of where they appear in the detector, of whether they convert to an electron-positron pair in the tracker, etc., can move things from one bin to the next, or shift all of the points left or right by a fraction of a bin.

    If you look carefully, the number of events in every bin has changed by a small amount -- every single bin. That tells you that the issue is not associated with the bump; it is associated with the data as a whole.

    Now look at the original December 2011 plot. Notice that the bins at 123 and 127 are exceptionally low; the bin at 126 is exceptionally high. This can happen just by chance. It is important to keep in mind that psychologically this makes the peak look bigger and more statistically significant than it actually is.
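
    To give a feel for how easily this happens, here is a tiny Python sketch (the numbers are invented for illustration, not the actual ATLAS counts): with a few dozen bins of roughly constant expected background, ordinary Poisson fluctuations alone routinely push a couple of bins well above or below their neighbors.

        import numpy as np

        rng = np.random.default_rng(0)

        # 40 mass bins with the same expected background count in each;
        # the numbers here are invented purely for illustration.
        expected = 100.0
        counts = rng.poisson(expected, size=40)
        pulls = (counts - expected) / np.sqrt(expected)

        print("bins more than 1.5 sigma high:", int(np.sum(pulls > 1.5)))
        print("bins more than 1.5 sigma low: ", int(np.sum(pulls < -1.5)))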

    Next, imagine I recalibrate. A few events that I thought were at 126.8 move to 127.2 or 126.5; a few at 127.1 move to 126.7 or 127.6. And perhaps I have to shift the whole scale up by 0.2. Well, now part of that big excess in the 126 bin has moved into the 127 bin, moderating the difference between the two. This can happen. And the result is that the small bin-to-bin fluctuations now look very different. Something like that probably did happen in the ATLAS data.
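
    Here is a toy Python sketch of that reshuffling (the data are synthetic, not ATLAS's, and the 0.2-0.3 GeV shifts are just illustrative guesses): smear and shift the events slightly, re-histogram them in 1 GeV bins, and the bins around the peak change.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic diphoton masses: a smooth background plus a small peak near 126 GeV.
        masses = np.concatenate([rng.uniform(110.0, 150.0, 4000),
                                 rng.normal(126.0, 1.7, 60)])

        edges = np.arange(110.0, 151.0, 1.0)   # 1 GeV bins
        before, _ = np.histogram(masses, bins=edges)

        # Mimic a recalibration: smear each event by ~0.3 GeV and shift the scale by +0.2 GeV.
        recalibrated = masses + rng.normal(0.2, 0.3, size=masses.size)
        after, _ = np.histogram(recalibrated, bins=edges)

        # A handful of events hopping across bin edges is enough to change
        # how tall and sharp the bump looks.
        for lo, b, a in zip(edges[14:19], before[14:19], after[14:19]):
            print(f"{lo:.0f}-{lo + 1:.0f} GeV: {b:4d} -> {a:4d} events")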

    This is why analyses of this sort are not done on binned data; they are done on the unbinned data. You do not want your results depending on your binning. Your plots, however, will depend on your binning. I am sure this is why CMS made their 2011+2012 photon plot with 1.3 GeV bins; it made the peak look better. Otherwise, why choose 1.3 GeV? Normally one would choose 1 GeV for plotting, but they surely wanted a clearer, nicer-looking plot that reflected the information that the unbinned analysis provided and raised fewer questions.
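
    For what it's worth, here is a minimal sketch of what an unbinned fit looks like, again with synthetic data and a deliberately oversimplified model (flat background plus a Gaussian signal fixed at 126 GeV; the real analyses are far more sophisticated). No histogram appears anywhere, so the result cannot depend on a choice of 1.0 or 1.3 GeV bins.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        lo, hi = 110.0, 150.0
        data = np.concatenate([rng.uniform(lo, hi, 4000),
                               rng.normal(126.0, 1.7, 60)])

        def nll(frac):
            # Unbinned negative log-likelihood: every event enters individually,
            # so no binning choice can influence the fitted signal fraction.
            pdf = frac * norm.pdf(data, 126.0, 1.7) + (1.0 - frac) / (hi - lo)
            return -np.sum(np.log(pdf))

        fit = minimize_scalar(nll, bounds=(0.0, 0.1), method="bounded")
        print(f"fitted signal fraction: {fit.x:.4f} (~{fit.x * data.size:.0f} events)")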

    In short, you were reading too much --- as was the case with every other blogger out there, as far as I can tell --- into the shape and size of the bump in the original December ATLAS plot. I warned people very explicitly back in December that important details could change with reanalysis and recalibration.

    What makes the discovery claim compelling is not this plot, in any case. It is the fact that (a) an excess of similar significance shows up in the same location in both 2011 and 2012 data; (b) it shows up in both photons and in leptons from Z bosons; and (c) it shows up at both ATLAS and CMS. Take any one of these three facts out and the case would not be convincing.

  8. Matt, thank you. But allow me to wait for another few inverse femtobarns before I am fully convinced!
