
The Tracking Machine Learning Challenge: Accuracy phase

Posted by: David Rousseau
Publication date: 2019
Research field: Physics
Paper language: English





This paper reports the results of an experiment in high energy physics: using the power of the crowd to solve difficult experimental problems linked to accurately tracking the trajectories of particles in the Large Hadron Collider (LHC). The experiment took the form of a machine learning challenge organized in 2018: the Tracking Machine Learning Challenge (TrackML). Its results were discussed at the competition session of the Neural Information Processing Systems conference (NeurIPS 2018). Given 100,000 points, the participants had to connect them into about 10,000 arcs of circles, following the trajectories of particles produced in very high energy proton collisions. The competition was difficult, with a dozen front-runners well ahead of the pack. The single competition score is shown to be accurate and effective in selecting the best algorithms from the domain point of view. The competition exposed a diversity of approaches, with various roles for machine learning, a number of which are discussed in this document.
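Since the ranking hinges on that single score, a minimal sketch of a double-majority, weighted hit-assignment score in the same spirit may help make it concrete; the dictionary-based event representation, function names, and thresholds below are illustrative assumptions, not the official TrackML metric implementation.

from collections import Counter, defaultdict

def track_score(truth, submission, weights):
    """truth, submission: dict hit_id -> true particle_id / reconstructed track_id.
    weights: dict hit_id -> per-hit weight, assumed normalised to sum to 1 per event."""
    hits_per_track = defaultdict(list)
    for hit_id, track_id in submission.items():
        hits_per_track[track_id].append(hit_id)
    particle_size = Counter(truth.values())   # number of true hits per particle

    score = 0.0
    for track_id, hit_ids in hits_per_track.items():
        # Match the track to the true particle contributing most of its hits.
        matched, n_shared = Counter(truth[h] for h in hit_ids).most_common(1)[0]
        # "Double majority": the matched particle must contribute >50% of the
        # track's hits, and the track must contain >50% of the particle's hits.
        if n_shared > 0.5 * len(hit_ids) and n_shared > 0.5 * particle_size[matched]:
            score += sum(weights[h] for h in hit_ids if truth[h] == matched)
    return score

A score of 1.0 would mean every weighted hit was grouped with its true particle; the metric rewards grouping hits correctly rather than fitting track parameters.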




Read also

This paper reports on the second, Throughput phase of the Tracking Machine Learning (TrackML) challenge on the Codalab platform. As in the first, Accuracy phase, the participants had to solve a difficult experimental problem linked to accurately tracking the trajectories of particles as created, e.g., at the Large Hadron Collider (LHC): given O($10^5$) points, the participants had to connect them into O($10^4$) individual groups that represent the particle trajectories, which are approximately helical. While in the first phase only the accuracy mattered, the goal of this second phase was a compromise between accuracy and the speed of inference. Both were measured on the Codalab platform, where the participants had to upload their software. The best three participants produced solutions with good accuracy and a speed an order of magnitude faster than the state of the art when the challenge was designed. Although the core algorithms were less diverse than in the first phase, a diversity of techniques was used; these are described in this paper. The performance of the algorithms is analysed in depth and lessons are derived.
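To illustrate why "approximately helical" trajectories allow very fast inference: in the transverse plane a helix projects onto a circle through (nearly) the beam line, and the classic conformal transform u = x/(x^2+y^2), v = y/(x^2+y^2) turns such circles into straight lines, which can then be grouped with a cheap Hough-style vote. The toy below sketches that idea; the units, bin settings, and the dense vote array are illustrative assumptions, not one of the competition solutions.

import numpy as np

def conformal_hough_labels(x, y, n_theta=360, n_rho=200, rho_max=0.02):
    """Group 2D hits into candidate tracks with a conformal map + Hough vote.
    Assumes no hit sits exactly at the origin; dense voting is kept for clarity
    and is only suited to toy-sized events."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r2 = x ** 2 + y ** 2
    u, v = x / r2, y / r2                      # circles through the origin -> lines
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # Each hit votes for every line u*cos(theta) + v*sin(theta) = rho it could lie on.
    rho = u[:, None] * np.cos(theta) + v[:, None] * np.sin(theta)
    r_idx = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int),
                    0, n_rho - 1)
    cell = np.arange(n_theta)[None, :] * n_rho + r_idx      # accumulator cell id
    counts = np.bincount(cell.ravel(), minlength=n_theta * n_rho)
    # Label each hit by the most popular accumulator cell it voted for; real
    # solutions refine such seeds with helix fits, cleaning and outlier removal.
    return cell[np.arange(len(x)), counts[cell].argmax(axis=1)]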
Enrico Camporeale (2019)
The numerous recent breakthroughs in machine learning (ML) make it imperative to carefully ponder how the scientific community can benefit from a technology that, although not necessarily new, is today living its golden age. This Grand Challenge review paper is focused on the present and future role of machine learning in space weather. The purpose is twofold. On one hand, we discuss previous works that use ML for space weather forecasting, focusing in particular on the few areas that have seen the most activity: the forecasting of geomagnetic indices, of relativistic electrons at geosynchronous orbits, of solar flare occurrence, of coronal mass ejection propagation time, and of solar wind speed. On the other hand, this paper serves as a gentle introduction to the field of machine learning tailored to the space weather community and as a pointer to a number of open challenges that we believe the community should undertake in the next decade. The recurring themes throughout the review are the need to shift our forecasting paradigm to a probabilistic approach focused on the reliable assessment of uncertainties, and the combination of physics-based and machine learning approaches, known as the gray-box approach.
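As a concrete example of the probabilistic, uncertainty-aware verification the review advocates, the sketch below estimates the continuous ranked probability score (CRPS) of an ensemble forecast; the toy numbers standing in for a geomagnetic-index forecast are illustrative assumptions.

import numpy as np

def ensemble_crps(ensemble, observation):
    """CRPS estimated from ensemble members: mean|x_i - y| - 0.5*mean|x_i - x_j|.
    Lower is better; reduces to the absolute error for a single member."""
    x = np.asarray(ensemble, float)
    term_obs = np.mean(np.abs(x - observation))
    term_spread = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term_obs - term_spread

# Example: a hypothetical 50-member forecast of a geomagnetic index (in nT).
rng = np.random.default_rng(0)
forecast = rng.normal(loc=-80.0, scale=15.0, size=50)
print(ensemble_crps(forecast, observation=-95.0))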
Machine learning has proven to be an indispensable tool in the selection of interesting events in high energy physics. Such technologies will become increasingly important as detector upgrades are introduced and data rates increase by orders of magnitude. We propose a toolkit to enable the creation of a drone classifier from any machine learning classifier, such that different classifiers may be standardised into a single form and executed in parallel. We demonstrate the capability of the drone neural network to learn the required properties of the input neural network without the use of any labels from the training data, using only appropriate questioning of the input neural network.
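A minimal sketch of this distillation-style idea, assuming a generic scikit-learn teacher and a small neural-network "drone" fitted only to the teacher's outputs; the choice of models and data is illustrative and not the toolkit's actual interface.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)   # labels used ONLY by the teacher

teacher = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# "Question" the teacher on unlabeled points and fit the drone to its answers.
X_query = rng.normal(size=(20000, 10))
teacher_scores = teacher.predict_proba(X_query)[:, 1]
drone = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500,
                     random_state=0).fit(X_query, teacher_scores)

# The drone now approximates the teacher in a single, standardised network form.
teacher_pred = teacher.predict_proba(X)[:, 1] > 0.5
drone_pred = drone.predict(X) > 0.5
print("drone/teacher agreement:", np.mean(drone_pred == teacher_pred))

Because the drone never sees the original labels, the same recipe applies to any input classifier whose decision function can be queried.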
A. Ghosh, B. Yaeggy, R. Galindo (2021)
This paper presents a novel neutral-pion reconstruction that takes advantage of the machine learning technique of semantic segmentation, using MINERvA data collected between 2013 and 2017 with an average neutrino energy of $6$ GeV. Semantic segmentation improves the purity of neutral-pion reconstruction from two gammas from 71% to 89% and improves the efficiency of the reconstruction by approximately 40%. We demonstrate our method in a charged-current neutral-pion production analysis where a single neutral pion is reconstructed. This technique is applicable to modern tracking calorimeters, such as the new generation of liquid-argon time projection chambers, exposed to neutrino beams with $\langle E_\nu \rangle$ between 1 and 10 GeV. In such experiments it can facilitate the identification of ionization hits which are associated with electromagnetic showers, thereby enabling improved reconstruction of charged-current $\nu_e$ events arising from $\nu_{\mu} \rightarrow \nu_{e}$ appearance.
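A minimal sketch of what per-hit semantic segmentation looks like in code, assuming a toy fully-convolutional network over 2D energy-deposit images; the architecture, input shapes, and training step are illustrative assumptions, not the MINERvA network.

import torch
import torch.nn as nn

class HitSegmenter(nn.Module):
    """Classify every cell of a detector image as EM-shower-like or not."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),            # per-cell logit: EM shower or not
        )

    def forward(self, x):                   # x: (batch, 1, planes, strips)
        return self.net(x)

model = HitSegmenter()
images = torch.randn(8, 1, 64, 128)                    # toy energy-deposit images
targets = (torch.rand(8, 1, 64, 128) > 0.9).float()    # toy per-cell labels
loss = nn.BCEWithLogitsLoss()(model(images), targets)
loss.backward()                            # one illustrative training step
print(float(loss))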
Gang Li, Libo Liao, Xinchou Lou (2021)
Several high energy $e^{+}e^{-}$ colliders are proposed as Higgs factories by the international high energy physics community. One of the most important goals of these projects is to study the Higgs properties, such as its couplings, mass, width, and production rate, with unprecedented precision. Precision studies of the Higgs boson should be the priority and drive the design and optimization of detectors. A global analysis approach based on the multinomial distribution and machine learning techniques is proposed to realize an "end-to-end" analysis and to significantly enhance the precision of all accessible decay branching fractions of the Higgs. A proof-of-principle Monte Carlo simulation study is performed to show the feasibility. This approach shows that the statistical uncertainties of all branching fractions are proportional to a single parameter, which can be used as a metric to optimize the detector design, reconstruction algorithms, and data analyses. In Higgs factories, the global analysis approach is valuable both for the Higgs measurements and for detector R&D, because it has the potential for superior precision and makes detector optimization single-purpose.
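A minimal sketch of the multinomial counting picture behind such a global analysis, assuming perfect channel classification; the paper's machine-learning treatment of cross-channel migrations and its single optimization parameter are not reproduced here.

import numpy as np

B = np.array([0.58, 0.21, 0.09, 0.06, 0.06])    # toy branching fractions (sum to 1)
N = 1_000_000                                    # toy number of tagged Higgs events

# Multinomial covariance of the channel counts n_i: N * (diag(B) - B B^T).
cov_counts = N * (np.diag(B) - np.outer(B, B))
sigma_B = np.sqrt(np.diag(cov_counts)) / N       # -> sqrt(B_i * (1 - B_i) / N)

for b, s in zip(B, sigma_B):
    print(f"B = {b:.2f}  stat. uncertainty = {s:.2e}  (relative {s / b:.2%})")

In this idealised limit the uncertainties reduce to the familiar multinomial form sqrt(B_i (1 - B_i) / N).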