
On the potential of multivariate techniques for the determination of multidimensional efficiencies

Posted by Benoit Viaud
Publication date: 2016
Research field: Physics
Paper language: English
Author: Benoit Viaud





Differential measurements of particle collisions or decays can provide stringent constraints on physics beyond the Standard Model of particle physics. In particular, the distributions of the kinematical and angular variables that characterise heavy-meson multibody decays are non-trivial and can be a signature of the underlying interaction physics. In the era of high luminosity opened by the advent of the Large Hadron Collider and of Flavor Factories, differential measurements are less and less dominated by statistical precision and require a precise determination of efficiencies that depend simultaneously on several variables and do not factorise in these variables. This document is a reflection on the potential of multivariate techniques for the determination of such multidimensional efficiencies. We carried out two case studies which show that multilayer perceptron neural networks can determine, and correct for, the distortions introduced by reconstruction and selection criteria in the multidimensional phase space of the decays $B^{0}\rightarrow K^{*0}(\rightarrow K^{+}\pi^{-})\mu^{+}\mu^{-}$ and $D^{0}\rightarrow K^{-}\pi^{+}\pi^{+}\pi^{-}$, at the cost of a minimal analysis effort. We conclude that this method can already be used for measurements whose statistical precision does not yet reach the percent level, and that with more sophisticated machine-learning methods the aforementioned potential is very promising.
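To make the procedure concrete, here is a minimal sketch, not the authors' implementation: a multilayer perceptron is trained on simulated events to predict the probability that a generated phase-space point survives reconstruction and selection, and the predicted efficiency is then used to reweight the selected events. The four-variable toy phase space, the scikit-learn MLPClassifier and the acceptance function are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's code): estimate a multidimensional
# efficiency with a multilayer perceptron and use it to reweight events.
# Assumes a simulated sample where each generated event carries its
# phase-space coordinates and a flag indicating whether it survives
# reconstruction and selection.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy phase space: e.g. (q^2, cos(theta_l), cos(theta_K), phi) for
# B0 -> K*0 mu+ mu-; here simply four uniform variables.
n = 100_000
X_gen = rng.uniform(-1.0, 1.0, size=(n, 4))

# Toy non-factorising acceptance, used only to generate the flag.
p_acc = 0.5 * (1.0 + 0.4 * X_gen[:, 0] * X_gen[:, 1] - 0.3 * X_gen[:, 2] ** 2)
accepted = rng.uniform(size=n) < np.clip(p_acc, 0.0, 1.0)

# The MLP learns P(accepted | phase-space point), i.e. the efficiency.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
mlp.fit(X_gen, accepted)

# Weight each selected event by 1/efficiency so that efficiency-induced
# distortions of the multidimensional distributions are corrected on average.
X_rec = X_gen[accepted]
eff = np.clip(mlp.predict_proba(X_rec)[:, 1], 1e-3, None)
weights = 1.0 / eff
print("mean weight:", weights.mean())
```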




Read also

A spectral fitter based on the graphics processing unit (GPU) has been developed for the Borexino solar neutrino analysis. It shortens the fitting time dramatically compared to the CPU fitting procedure. In Borexino solar neutrino spectral analysis, fitting usually requires around one hour to converge, since it includes time-consuming convolutions that account for the detector response and pile-up effects. Moreover, the convergence time increases to more than two days when extra computations for the discrimination of $^{11}$C and external $\gamma$s are included. In sharp contrast, the GPU-based fitter takes less than 10 seconds and less than four minutes, respectively. This fitter is developed using the GooFit project, with customized likelihoods, PDFs and infrastructure supporting certain analysis methods. In this contribution, the design of the package, the developed features and the comparison with the original CPU fitter are presented.
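The GooFit-based fitter itself is not reproduced here; the following is a hedged toy illustration of why such fits are costly: every likelihood evaluation re-convolves the model spectrum with a detector-response kernel before comparing it to the binned data, which is exactly the kind of repeated, data-parallel work that maps well onto a GPU. The spectral shape, binning and scipy-based minimiser are assumptions made for the sketch, not part of the Borexino analysis.

```python
# Hedged illustration (not the Borexino/GooFit fitter): a binned likelihood
# fit in which the model spectrum is convolved with a detector-response
# kernel at every likelihood call.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, poisson

edges = np.linspace(0.0, 10.0, 201)            # toy energy bins
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

# Detector response: Gaussian smearing kernel on the same grid.
kernel = norm.pdf(np.arange(-20, 21) * width, scale=0.25)
kernel /= kernel.sum()

def model(params):
    n_sig, mu, n_bkg, slope = params
    truth = (n_sig * norm.pdf(centers, loc=mu, scale=0.1)
             + n_bkg * np.exp(-slope * centers))
    # Convolution with the response -- the expensive, per-call step.
    return np.convolve(truth, kernel, mode="same") * width

rng = np.random.default_rng(1)
data = rng.poisson(model([500.0, 5.0, 300.0, 0.4]))

def nll(params):
    expected = np.clip(model(params), 1e-9, None)
    return -poisson.logpmf(data, expected).sum()

res = minimize(nll, x0=[400.0, 4.8, 250.0, 0.3], method="Nelder-Mead")
print(res.x)
```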
A new method for the determination of electric signal time-shifts is introduced. Like the Kolmogorov-Smirnov test, it is based on the comparison of the cumulative distribution function of the reference signal with that of the test signal. The method is very fast and thus well suited for on-line applications. It is robust to noise, and its precision is excellent for time-shifts ranging from a fraction of a sample duration to several sample durations. PACS: 29.40.Gx (Tracking and position-sensitive detectors), 29.30.Kv (X- and $\gamma$-ray spectroscopy), 07.50.Qx (Signal processing electronics).
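As a hedged sketch of the idea (the exact estimator used by the authors may differ), one can exploit the fact that, for a pure shift between two identical pulses, the area between their normalised cumulative distributions equals the time shift; the pulse shape and sampling grid below are purely illustrative.

```python
# Hedged sketch of a CDF-comparison time-shift estimator.
import numpy as np

def cdf_time_shift(reference, test, dt=1.0):
    """Estimate the time shift of `test` relative to `reference`.

    Both inputs are sampled pulses on the same time grid with spacing dt,
    baseline-subtracted and normalised to unit area before their
    cumulative distributions are compared.
    """
    ref = reference - reference.min()
    tst = test - test.min()
    f_ref = np.cumsum(ref) / ref.sum()
    f_tst = np.cumsum(tst) / tst.sum()
    # Area between the two CDFs = shift of `test` w.r.t. `reference`.
    return np.sum(f_ref - f_tst) * dt

# Toy usage: a Gaussian pulse shifted by 2.3 samples; sub-sample shifts
# are recovered thanks to the integral formulation.
t = np.arange(0, 200, 1.0)
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 8.0) ** 2)
print(cdf_time_shift(pulse(100.0), pulse(102.3)))   # ~2.3
```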
Louis Lyons (2013)
We discuss the traditional criterion for discovery in Particle Physics of requiring a significance corresponding to at least 5 sigma; and whether a more nuanced approach might be better.
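For reference, the tail probabilities behind that convention can be evaluated directly; the short snippet below simply computes the one-sided Gaussian p-values corresponding to 3 and 5 sigma.

```python
# Worked numbers for the conventional thresholds: the one-sided tail
# probability corresponding to an n-sigma Gaussian significance.
from scipy.stats import norm

for n_sigma in (3, 5):
    print(f"{n_sigma} sigma -> p = {norm.sf(n_sigma):.2e}")
# 3 sigma -> p = 1.35e-03   (commonly quoted as "evidence")
# 5 sigma -> p = 2.87e-07   (the traditional discovery criterion)
```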
Andrew Fowlie (2021)
I would like to thank Junk and Lyons (arXiv:2009.06864) for beginning a discussion about replication in high-energy physics (HEP). Junk and Lyons ultimately argue that HEP learned its lessons the hard way through past failures and that other fields could learn from our procedures. They emphasize that experimental collaborations would risk their legacies were they to make a type-1 error in a search for new physics, and they outline the vigilance taken to avoid one, such as data blinding and a strict $5\sigma$ threshold. The discussion, however, ignores an elephant in the room: there are regularly anomalies in searches for new physics that result in substantial scientific activity but don't replicate with more data.
The current and upcoming generation of Very Large Volume Neutrino Telescopes, collecting unprecedented quantities of neutrino events, can be used to explore subtle effects in oscillation physics, such as (but not restricted to) the neutrino mass ordering. The sensitivity of an experiment to these effects can be estimated from Monte Carlo simulations. With the high number of events that will be collected, there is a trade-off between the computational expense of running such simulations and the inherent statistical uncertainty in the determined values. In such a scenario, it becomes impractical to produce and use adequately sized sets of simulated events with traditional methods, such as Monte Carlo weighting. In this work we present a staged approach to the generation of binned event distributions in order to overcome these challenges. By combining multiple integration and smoothing techniques that address the limited statistics available from simulation, it arrives at reliable analysis results using modest computational resources.
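As a hedged sketch of the general idea (not the authors' staged pipeline), the snippet below builds a two-dimensional binned template from a deliberately small weighted Monte Carlo sample and then smooths it, trading a small bias for a large reduction of the simulation-induced bin-to-bin fluctuations. The variables, binning, toy weight and smoothing scale are illustrative assumptions.

```python
# Hedged sketch: smooth a binned template built from limited Monte Carlo
# so that its statistical fluctuations do not dominate the analysis.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# Small simulated sample: (energy, cos(zenith)) per event, plus a weight
# standing in for flux x cross-section x oscillation probability.
n_mc = 20_000
energy = rng.uniform(1.0, 50.0, n_mc)
coszen = rng.uniform(-1.0, 1.0, n_mc)
weight = 1.0 / energy * (0.5 + 0.5 * np.cos(coszen * np.pi))

# Raw binned expectation: noisy wherever the MC statistics are thin.
raw, xe, ye = np.histogram2d(energy, coszen, bins=(40, 20), weights=weight)

# Smoothed template: suppresses MC noise while approximately preserving
# the total expectation (reflective boundary handling).
smooth = gaussian_filter(raw, sigma=1.5)

print("total expectation, raw vs smoothed:", raw.sum(), smooth.sum())
```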