    • CommentRowNumber1.
    • CommentAuthorUrs
    • CommentTimeOct 27th 2018

    Page created, but author did not leave any comments.

    v1, current

    • CommentRowNumber2.
    • CommentAuthorUrs
    • CommentTimeOct 27th 2018

    stub entry with some minimum pointers to the literature, for the moment just so as to satisfy links from flavour anomaly

    v1, current

    • CommentRowNumber3.
    • CommentAuthorUrs
    • CommentTimeJan 16th 2019

    added a bunch of references and pointers, especially on relation to flavour anomalies and GUTs

    diff, v2, current

    • CommentRowNumber4.
    • CommentAuthorDavidRoberts
    • CommentTimeJan 16th 2019

    Typos and removed duplicate redirect line

    diff, v3, current

    • CommentRowNumber5.
    • CommentAuthorUrs
    • CommentTimeJan 16th 2019

    thanks ;-)

    • CommentRowNumber6.
    • CommentAuthorUrs
    • CommentTimeJan 16th 2019
    • (edited Jan 16th 2019)

    So there is a real case for an SU(5)-GUT signature, consistently in both anomalies seen above 4σ at LHC (the B's and g-2). Remarkable.

    • CommentRowNumber7.
    • CommentAuthorDavidRoberts
    • CommentTimeJan 16th 2019

    That would be very cool, indeed.

    • CommentRowNumber8.
    • CommentAuthorUrs
    • CommentTimeJan 16th 2019
    • (edited Jan 16th 2019)

    No, it is already pretty cool. :-) Some authors see the flavour anomaly already at 5σ (arXiv:1704.05340).

    I think there is a psychological-inertia problem here: “the public” (both lay and expert) got so used to the direct-detection events that were so successful in the past that the new phenomenon of indirect detection from precision measurements is not yet triggering the internal alert bells the way it deserves to.

    • CommentRowNumber9.
    • CommentAuthorAli Caglayan
    • CommentTimeJan 16th 2019

    Has Fermilab finished its g-2 experiment yet? I remember they moved some equipment last year.

    • CommentRowNumber10.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 16th 2019

    I was wondering if the direct/indirect detection distinction relates to what Peter Galison described in Image and Logic as the difference between traditions: the logic tradition uses statistical arguments, while the image tradition looks for telling images, “golden” single events. But then, I guess all recent detection is statistical. One critic, I see, claims there never was such a difference as Galison proposes; see Staley’s Golden Events and Statistics: What’s Wrong with Galison’s Image/Logic Distinction?.

    • CommentRowNumber11.
    • CommentAuthorDavidRoberts
    • CommentTimeJan 16th 2019

    I meant it would be very cool if the SM gauge group really were a broken symmetry from SU(5). That we are seeing hints of such a thing is still, on its own, amazing.

    • CommentRowNumber12.
    • CommentAuthorUrs
    • CommentTimeJan 17th 2019
    • (edited Jan 17th 2019)

    David R., right, but what I was trying to highlight in #6 is that already right now there is a real case of an “SU(5) signature”. Even if in the end it’s not SU(5)-GUT but who knows what, maybe some variant of it, it seems that the punchline of the LHC results at this point really is: a consistent signature of SU(5)-GUT, at over 4.1σ. That in itself is just plain remarkable.

    (Given the discussion in the public domain, I think it is crucially important to recognize bits of progress and not dismiss all insight just on the basis of ever more amazing claims that remain open.)

    And if I am allowed to add some theoretical considerations to these experimental findings: apart from the well-known but still striking inner workings of SU(5)-GUT in itself, there are these two curious data points:

    1. Schellekens et al.’s computer scan of heterotic Gepner-model compactifications. In their graphics here, the two faint lines that intersect at SU(5)-GUT theory are not drawn by hand (only the blue circle around their intersection point is), but are due to the density of the models found by the computer scan.

    2. Our computer scan of the image of equivariant stable cohomotopy inside equivariant K-theory, which (slide 17 here) breaks type II E_8-GUT to SU(5)-GUT.

    Something is going on here.

    • CommentRowNumber13.
    • CommentAuthorUrs
    • CommentTimeJan 17th 2019
    • (edited Jan 17th 2019)

    @Alizter, after The Big Move it took Fermilab 6 years to set up their g-2 experiment, and they started taking data in February 2018. So they have only just begun.

    You can read about the expected prospects of their eventual results in reviews such as Jegerlehner 18a (who says that if the effect is there, Fermilab should be able to push its statistical significance to between 6σ and 10σ, hence beyond doubt).
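
    For orientation on the scaling at work: here is a rough Python sketch (the numbers for the discrepancy and the errors are my illustrative assumptions, not the collaborations' official values) of how the significance of a fixed central discrepancy in a_mu grows as the experimental error shrinks:

        # Rough scaling sketch with assumed, illustrative numbers: if the central
        # discrepancy stays put while the experimental error shrinks, significance
        # grows roughly like 1/(combined uncertainty).
        import math

        delta = 27e-10       # assumed central discrepancy in a_mu (illustrative)
        err_theory = 5e-10   # assumed theory uncertainty (illustrative)

        for err_exp in (6.3e-10, 1.6e-10):  # BNL-era vs design-goal-like precision
            z = delta / math.sqrt(err_exp**2 + err_theory**2)
            print(f"experimental error {err_exp:.1e}  ->  {z:.1f} sigma")
        # The same central value moves from ~3 sigma past 5 sigma on statistics alone.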

    • CommentRowNumber14.
    • CommentAuthorUrs
    • CommentTimeJan 17th 2019
    • (edited Jan 17th 2019)

    @David C., if one remembers how mind-bogglingly indirect all “observations” at a modern accelerator are in any case (it’s not like anyone really “directly saw” a Higgs particle, even on extremely generous readings of these words), I wouldn’t think that the distinction the particle-physics community draws between “direct detection events” and “loop effects in precision measurements” is easily matched to general considerations made elsewhere. Let me know if I am wrong here.

    I think a much more elementary effect/fallacy is at work in the community (at least in that part which likes to vent its opinions on social media): the nature of the next relevant experiment may change over time.

    • CommentRowNumber15.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 17th 2019

    Yes, I agree. I’d be interested in seeing something on the statistical reasoning going on in these two cases.

    Back in my Bayesian phase I remember enjoying the writings of Giulio D’Agostini who was a Bayesian at CERN. He took issue with frequentist ways of framing some of the conclusions there, especially confidence intervals and p-values. I wonder if that’s still debated.

    • CommentRowNumber16.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 17th 2019

    A flavour of his views concerning significance levels can be gained from here:

    The reason why practically every particle physicist is highly confident that the Higgs is in the region indicated by LHC has little to do with the number of sigma’s (I hope the reader understands now that the mythical value of 5 for a ‘true discovery’ is by itself pure nonsense…)

    • CommentRowNumber17.
    • CommentAuthorUrs
    • CommentTimeJan 17th 2019
    • (edited Jan 17th 2019)

    These statistical errors are a thorny topic in themselves; there is plenty of discussion of the pros and cons out there, and it would be good to produce some digest of that, maybe at standard deviation.

    There are loads of assumptions that go into naming the number of σ’s, notably regarding systematic errors. It is less a mathematical fact than a matter of experience in the particle physics community over the decades: whenever the σ-number they assigned to some observation (whatever it really is or means) was less than four, there were cases where the apparent signal later disappeared, and less so when the number was higher than 4. That’s why at some point they declared it must be 5σ to be on the safe side.

    But since the statistics are all available, with enough energy one could dig into the details as much as desired. As my personal time is better used elsewhere, I’ll stick with those σ’s for the time being.
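
    As a toy illustration of how assumptions, notably about systematic errors, move the quoted σ-number, here is a minimal Python sketch (my own simplification, not any experiment's actual procedure), using the naive counting estimate z = s/√(b + (f·b)²):

        # Naive counting significance: an excess over expected background b, with
        # an optional fractional systematic uncertainty on b inflating the
        # denominator.  A toy model only; real analyses use full likelihoods.
        import math

        def naive_significance(n_obs: float, b: float, sys_frac: float = 0.0) -> float:
            """Approximate z-score of the excess n_obs - b over background b."""
            s = n_obs - b
            return s / math.sqrt(b + (sys_frac * b) ** 2)

        print(round(naive_significance(130, 100), 2))                 # 3.0 (stats only)
        print(round(naive_significance(130, 100, sys_frac=0.05), 2))  # 2.68 with 5% systematics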

    • CommentRowNumber18.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 17th 2019

    Yes, not something for now, but it’s an odd situation: a calculation arrives at a number of sigma, which then leads to a probability based on the frequency with which signals achieving that sigma have survived, having nothing to do with what one normally understands by nσ. So p_n = the proportion of surviving signals among signals achieving n sigma. Then p_5 is deemed small enough.

    Of course, you’d expect the kind of situation you’re studying to have a bearing on the expectation of signal survival; contrast being near-certain a priori that the Higgs is somewhere in a given vicinity with being very uncertain whether there are leptoquarks.
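
    For reference, the purely formal half of the dictionary, what nσ means as a tail probability before any of the survival-frequency reinterpretation above, is short; a minimal sketch assuming the standard one-sided normal convention:

        # The formal sigma <-> p-value dictionary: one-sided tail probability of
        # a standard normal.  (It says nothing about the empirical "survival
        # rate" of signals discussed above.)
        from scipy.stats import norm

        def p_value(n_sigma: float) -> float:
            return norm.sf(n_sigma)   # survival function, 1 - CDF

        def significance(p: float) -> float:
            return norm.isf(p)        # inverse survival function

        for n in (2, 3, 4, 5):
            print(f"{n} sigma  ->  p = {p_value(n):.2e}")
        # 5 sigma corresponds to p ~ 2.9e-07, the conventional discovery threshold.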

    • CommentRowNumber19.
    • CommentAuthorUrs
    • CommentTimeJan 17th 2019

    You seem to have already started to immerse yourself in the topic. Why not start some paragraphs on a dedicated page?

    • CommentRowNumber20.
    • CommentAuthorUrs
    • CommentTimeJan 18th 2019

    A flavour of his views concerning significance levels can be gained from here:

    The reason why practically every particle physicist is highly confident that the Higgs is in the region indicated by LHC has little to do with the number of sigma’s (I hope the reader understands now that the mythical value of 5 for a ‘true discovery’ is by itself pure nonsense…)

    I find this argument dangerous. Just recall that many of the arguments that said “the Higgs now has to be in this low mass window” continued with “and therefore its large quantum corrections necessarily have to be canceled by weak-scale supersymmetry”.

    The theoretical argument worked in the first case but failed in the second, which means it cannot universally be relied on. One needs/wants an actual experimental test to support the theoretical arguments.

    The insistence on objective statistical significances is precisely to rule out such mistakes of theory and see what the experiment actually gives.

    Of course theoretical arguments are needed to decide where to look and which experiment to do, which in the case of the Higgs is how they helped to find it. But I’d disagree with D’Agostini’s ridiculing of how the Higgs detection results were announced, as if this were a signal that could have shown up anywhere, independent of the strong expectation to find it right there. That should be the established practice of good statistics, to avoid bias in interpreting data.

    Theoretical bias is, incidentally, where these leptoquarks come in: leptoquarks are currently a good (maybe the best) potential theoretical/conceptual explanation of the joint effects seen with high statistical significance both in B-meson decays and in the muon magnetic moment.

    Such potential theoretical explanations are necessary to decide where to look next. For instance, there is a good argument that the anomalies seen both in the B-meson decays and in the muon anomalous magnetic moment are sensitive to meson loop contributions, which would favour the hadron version of the next collider (say FCC) over the lepton version, a question under lively debate as we speak.

    • CommentRowNumber21.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 18th 2019

    The insistence on objective statistical significances is precisely to rule out such mistakes of theory and see what the experiment actually gives.

    Maybe this is a can of worms not worth opening without a lot of time. The Bayesian/frequentist debates have rumbled on for many decades now. The machine-learning group I belonged to in 2005-7 divided along those lines. Further back, I hosted a particularly ill-tempered meeting on the division in 2001, at which one side accused the other of having had a mother unjustly sent to jail through a failure to understand the dependence between the chances of sudden death of her two children.

    The accusations in their simplest form amount to something like:

    Bayesians: You unthinkingly use your techniques without proper consideration of the context. You present your results as objectively determined by hiding your reliance on subjective appraisal.

    Frequentists: You make statistics a subjective process. You fail to test as severely as possible by your reliance on background knowledge.

    It would be interesting to delve further into the case of particle physics where there is so much background knowledge and so much data. Perhaps things “wash out” in this case, so that how to act is the same on either approach.

    When I have a moment I’ll revise statistical significance.
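
    To see the two attitudes side by side on the same toy data, here is a hedged sketch (entirely illustrative numbers, mine): a 30-event excess analysed once as a frequentist tail probability and once as a Bayesian posterior with an explicit prior.

        # Toy contrast of the two schools on the same data: n_obs counts observed,
        # background-only expectation b, candidate signal of size s_hyp.
        from scipy.stats import poisson

        n_obs, b, s_hyp = 130, 100.0, 30.0

        # Frequentist: p-value for observing at least n_obs under background only.
        p = poisson.sf(n_obs - 1, b)

        # Bayesian: posterior probability of "signal present" given an explicit prior.
        prior = 0.01  # a proposition thought a priori unlikely
        like_bkg = poisson.pmf(n_obs, b)
        like_sig = poisson.pmf(n_obs, b + s_hyp)
        posterior = like_sig * prior / (like_sig * prior + like_bkg * (1 - prior))

        print(f"p-value = {p:.1e}, posterior P(signal) = {posterior:.2f}")
        # The p-value is fixed by data and null alone; the posterior moves with
        # the prior, which is precisely the axis of the accusations above.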

    • CommentRowNumber22.
    • CommentAuthorUrs
    • CommentTimeJan 21st 2019
    • (edited Jan 21st 2019)

    The point about the perception of “direct detection events” versus “loop-effect precision measurements”, which I made in #8 and #14, I have forwarded to the experts here. Particle physicist Tommaso Dorigo seems to quite agree with it.

    • CommentRowNumber23.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 21st 2019
    • (edited Jan 21st 2019)

    Given the often expressed point, made for example here by Sinervo

    • Pekka K. Sinervo, Signal Significance in Particle Physics, in Proceedings of Advanced Statistical Techniques in Particle Physics, Durham, UK, March 18-22, 2002 (arXiv:hep-ex/0208005, spire:601052)

    what is an appropriate criteria for claiming a discovery on the basis of the p-value of the null hypothesis? The recent literature would suggest a p-value in the range of 10^{-6}, comparable to a “5σ” observation, provides convincing evidence. However, the credibility of such a claim relies on the care taken to avoid unconscious bias in the selection of the data and the techniques chosen to calculate the p-value,

    wouldn’t a simple explanation of the delay in accepting novel indirect detections be that it takes time to build up confidence in that “care”?

    • CommentRowNumber24.
    • CommentAuthorUrs
    • CommentTimeJan 21st 2019

    I am perplexed as to where you are coming from. I think it’s the exact opposite: carelessness was the 750 GeV hunt.

    It must be frustrating these days to be an experimentalist at CERN, with nobody looking at results that don’t fit the prejudice, and every armchair theoretician going on about how there are no experimental results. Strange times. At least Dorigo “recently” changed his mind… :-)

    • CommentRowNumber25.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 21st 2019

    Let’s see if I can convey my perplexity about what you’ve been writing recently.

    You appear to be taking my comments as arising from the “fundamental physics in crisis” viewpoint of Sabine Hossenfelder and others. This isn’t the case, as I should have thought was evident. I can’t see that there’s anything better to do than push on theoretically with string/M-theory, and experimentally to build more powerful colliders, telescopes, detectors, etc.

    All I’ve been trying to do is raise the issue of the subtleties of data analysis. You seem to be aware that there’s more to doing data analysis than citing sigma values (#17), and yet that’s all you ever mention. Have you given a serious look at how they move from data to conclusion? Do you know about issues such as ‘coverage’, the ‘Look Elsewhere Effect’, the treatment of ‘systematic errors’, difficulties with the Neyman frequentist construction with several parameters? A list which goes on and on.

    I have no reason to doubt van Dyk when he says that data analysis for the Higgs was “Probably the most careful frequentist analysis ever conducted”. Techniques from years of work are deployed, and I have no doubt about their conclusions. I was merely raising the question in #23 of whether people yet feel as safe applying analysis techniques in a new setting. But maybe you know more and can assure us there’s nothing new introduced in the data analysis of loop effect precision measurements.

    • CommentRowNumber26.
    • CommentAuthorUrs
    • CommentTimeJan 23rd 2019

    added the relevant quote, Crivellin 18, p. 2 (for my anonymous correspondent on Dorigo’s site here):

    the global fit [to flavour anomalies] even shows compelling evidence for New Physics […] The vector leptoquark (LQ) SU(2)_L singlet with hypercharge -4/3 arising in the famous Pati-Salam model is capable of explaining all the [flavour] anomalies and therefore several attempts to construct a UV completion for this LQ to address the anomalies have been made. It can give a sizeable effect in b → c(u)τν data without violating bounds from b → s(d)νν̄ and/or direct searches, provides (at tree level) a C_9 = -C_{10} solution to b → s ℓ⁺ℓ⁻ data and does not lead to proton decay at any order in perturbation theory.

    diff, v10, current

    • CommentRowNumber27.
    • CommentAuthorUrs
    • CommentTimeJan 23rd 2019
    • (edited Jan 23rd 2019)

    Hi David,

    only now see your #25.

    I think it’s evidently hopeless for anyone not directly involved in the data analysis to judge the quality of the statistical analysis. But I also see no indication of sloppy statistical analysis in particle physics, quite the contrary.

    I am proceeding here on a very simple principle: I go and check what the experimentalists say they think they measured.

    I don’t really care so much about the details of the process that leads them to this conclusion, since doing the matter justice would take my lifetime’s energy; I would need to become an experimentalist myself.

    I don’t really care what the philosophers think “5σ” really means; what I care about is that the experimentalists have agreed to use this number to indicate that they are sure their experiment is seeing a signal.

    The literature published by the experimentalists themselves is quite unambiguous about them seeing signals in flavour-violating decays and in the muon anomalous magnetic moment. I think that’s interesting, and I am recording it.

    • CommentRowNumber28.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 23rd 2019

    This is not to do with what philosophers think (although credit to them when they’re right). As on Twitter, I’ve now pointed you to two pieces of writing by people working directly with the statistics of particle physics:

    Author of textbooks on the subject, worked for years at CERN

    It would be very useful if we could distance ourselves from the attitude of ‘Require 5σ for all discovery claims’. This is far too blunt a tool for dealing with issues such as the Look Elsewhere Effect, the plausibility of the searched-for effect, the role of systematics, etc., which vary so much from experiment to experiment.

    • Tommaso Dorigo, Extraordinary claims: the 0.000029% solution (pdf)

    CERN physicist

    Forty-six years after the first suggestion of a 5σ threshold for discovery claims, and 20 years after the start of its consistent application, the criterion appears inadequate to address the experimental situation in particle and astro-particle physics.

    • CommentRowNumber29.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 23rd 2019
    • (edited Jan 23rd 2019)

    Three of many interesting points by Dorigo:

    • In particle physics you have an advantage: “It is quite special to physics that we do believe in our ‘point nulls’”. Elsewhere, e.g. when looking at the effect of a drug, it’s highly implausible that it has no effect at all.

    • Several 6σ signals have been spurious (Table 1).

    • 5σ wouldn’t have been enough to persuade him of superluminal neutrinos. His required level is THTQ (too high to quote), which shows the role of background knowledge.

    • CommentRowNumber30.
    • CommentAuthorUrs
    • CommentTimeJan 23rd 2019
    • (edited Jan 23rd 2019)

    Thanks for these references! So it seems that we agree now that p-values are neither “fetish” nor “cult”, but that one’s confidence of course increases with the threshold. And I keep thinking that all the problems the non-physics communities have with it stem from the fact that p = 0.05 (which is less than 2σ) is just not a good threshold.

    With regard to the topic of this thread, leptoquarks suggested by flavour anomalies in b-quark decays, I find it striking that both Lyons and Dorigo argue that the threshold for these should be 3σ significance, much less than the generally accepted 4.1σ signal already seen in each channel separately.

    Will add that to the entry now.

    • CommentRowNumber31.
    • CommentAuthorDavidRoberts
    • CommentTimeJan 23rd 2019

    2σ is the de facto standard because of an almost throwaway line by R. A. Fisher. He was coming from (mostly) agricultural statistics, where the numbers of data points and of measured variables were not very large:

    … it is convenient to draw the line at about the level at which we can say: “Either there is something in the treatment, or a coincidence has occurred such as does not occur more than once in twenty trials.”…

    If one in twenty does not seem high enough odds, we may, if we prefer it, draw the line at one in fifty (the 2 per cent point), or one in a hundred (the 1 per cent point). Personally, the writer prefers to set a low standard of significance at the 5 per cent point, and ignore entirely all results which fail to reach this level. A scientific fact should be regarded as experimentally established only if a properly designed experiment rarely fails to give this level of significance. (Fisher RA (1926), “The Arrangement of Field Experiments,” Journal of the Ministry of Agriculture of Great Britain, 33, 503-513.)

    Amusingly, he is implicitly saying at the end that repeated (designed) experiments achieving p < 0.05 almost all the time is what counts as a discovery, not a single such result, as is taken to be the case in e.g. psychology.
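
    A quick simulation makes that last point concrete (parameters invented for illustration): Fisher's bar is a designed experiment that rarely fails to reach p < 0.05 under repetition, i.e. high power, not a single significant outcome:

        # Monte Carlo reading of Fisher's "rarely fails": with a real effect
        # present, what fraction of repeated experiments reaches p < 0.05
        # (one-sided z-test on the sample mean)?
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n_trials, n_samples, effect = 10_000, 30, 0.5  # effect in units of the noise sigma

        means = rng.normal(effect, 1.0, size=(n_trials, n_samples)).mean(axis=1)
        z = means * np.sqrt(n_samples)          # z-score of each repeated experiment
        power = np.mean(z > norm.isf(0.05))     # fraction clearing the 5% threshold
        print(f"fraction of repetitions with p < 0.05: {power:.2f}")  # ~0.86 here
        # One p < 0.05 result is weak; "rarely fails to give this level" is the bar.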

    • CommentRowNumber32.
    • CommentAuthorUrs
    • CommentTimeJan 23rd 2019

    Thanks. That should be added to statistical significance. (I am on my phone now, myself.)

    • CommentRowNumber33.
    • CommentAuthorUrs
    • CommentTimeMar 28th 2020

    added pointer to yesterday’s

    • Priyotosh Bandyopadhyay, Saunak Dutta, Anirban Karan, Investigating the Production of Leptoquarks by Means of Zeros of Amplitude at Photon Electron Collider (arXiv:2003.11751)

    diff, v34, current

    • CommentRowNumber34.
    • CommentAuthorUrs
    • CommentTimeMar 30th 2020

    added pointer to today’s

    • Valerio Gherardi, David Marzocca, Elena Venturini, Matching scalar leptoquarks to the SMEFT at one loop (arXiv:2003.12525)

    diff, v35, current

    • CommentRowNumber35.
    • CommentAuthorUrs
    • CommentTimeOct 15th 2020

    added pointer to today’s

    diff, v40, current

    • CommentRowNumber36.
    • CommentAuthorUrs
    • CommentTimeMay 11th 2021

    added pointer to today’s:

    • Ricardo Perez-Martinez, Saul Ramos-Sanchez, Patrick K.S. Vaudrevange, The landscape of promising non-supersymmetric string models (arXiv:2105.03460)

    diff, v52, current

    • CommentRowNumber37.
    • CommentAuthorUrs
    • CommentTimeMay 12th 2021

    added pointer to today’s

    diff, v53, current

    • CommentRowNumber38.
    • CommentAuthorUrs
    • CommentTimeNov 17th 2021

    added pointer to today’s

    • Genevieve Bélanger et al., Leptoquark Manoeuvres in the Dark: a simultaneous solution of the dark matter problem and the R_D anomalies (arXiv:2111.08027)

    diff, v59, current

    • CommentRowNumber39.
    • CommentAuthorUrs
    • CommentTimeJul 10th 2022

    Added pointer to the recent announcement of an apparent >3σ excess of possible leptoquark decay products:

    • CMS Collaboration, The search for a third-generation leptoquark coupling to a τ lepton and a b quark through single, pair and nonresonant production at √s = 13 TeV (2022) [cds:2815309]

    with this quote:

    For a representative LQ mass of 2 TeV and a coupling strength of 2.5, an excess with a significance of 3.4 standard deviations above the standard model expectation is observed in the data.

    diff, v61, current