
Particle Track Reconstruction using Geometric Deep Learning

Added by Yogesh Verma
Publication date: 2020
Fields: Physics
Language: English





Muons are the most abundant charged particles arriving at sea level, originating from the decay of secondary charged pions and kaons. These secondary particles are created when high-energy cosmic rays hit the atmosphere and interact with air nuclei, initiating cascades of secondary particles that lead to the formation of extensive air showers (EAS). Muons carry essential information about extraterrestrial events and are characterized by a large flux and a varying angular distribution. To address open questions about the origin of cosmic rays, one needs to study the various components of cosmic rays as functions of energy and arrival direction. Because of the close relation between muon and neutrino production, the muon is a particularly important particle to track. We propose a novel tracking algorithm based on a Geometric Deep Learning approach, using a graph structure to incorporate domain knowledge, to track cosmic ray muons in our 3-D scintillator detector. The detector is modeled using the GEANT4 simulation package, and EAS are simulated using CORSIKA (COsmic Ray SImulations for KAscade) with a focus on muons originating from EAS. We shed some light on the performance, robustness towards noise and double hits, limitations, and applications of the proposed algorithm in tracking, with the possibility of generalizing to other detectors for astrophysical and collider experiments.
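To make the graph-based tracking idea concrete, here is a minimal sketch: detector hits become graph nodes, candidate hit-to-hit connections become edges, and a network scores each edge as belonging to a muon track or not. The k-nearest-neighbour graph construction, layer sizes, and use of PyTorch Geometric are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of GNN-style edge classification for hit-based tracking.
# Assumes torch and torch_geometric (with torch-cluster) are installed.
import torch
import torch.nn as nn
from torch_geometric.nn import knn_graph

class EdgeClassifier(nn.Module):
    """Scores candidate hit-pair connections as track segments."""
    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.node_net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.edge_net = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, pos, edge_index):
        h = self.node_net(pos)                   # embed each hit
        src, dst = edge_index
        e = torch.cat([h[src], h[dst]], dim=-1)  # pair features per edge
        return torch.sigmoid(self.edge_net(e)).squeeze(-1)

# Toy usage: 100 hits with (x, y, z) positions from a 3-D scintillator,
# connected to their 4 nearest neighbours as candidate track segments.
pos = torch.rand(100, 3)
edge_index = knn_graph(pos, k=4)
scores = EdgeClassifier()(pos, edge_index)  # one score per candidate edge
```

Track candidates would then be formed by keeping high-scoring edges and linking them into paths; the graph construction step is where domain knowledge such as detector geometry can be incorporated.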




Read More

98 - Dimitri Bourilkov 2019
The many ways in which machine and deep learning are transforming the analysis and simulation of data in particle physics are reviewed. The main methods based on boosted decision trees and various types of neural networks are introduced, and cutting-edge applications in the experimental and theoretical/phenomenological domains are highlighted. After describing the challenges in the application of these novel analysis techniques, the review concludes by discussing the interactions between physics and machine learning as a two-way street enriching both disciplines and helping to meet the present and future challenges of data-intensive science at the energy and intensity frontiers.
Rapidly applying the effects of detector response to physics objects (e.g. electrons, muons, showers of particles) is essential in high energy physics. Currently available tools for the transformation from truth-level physics objects to reconstructed detector-level physics objects involve manually defining resolution functions. These resolution functions are typically derived in bins of variables that are correlated with the resolution (e.g. pseudorapidity and transverse momentum). This process is time consuming, requires manual updates when detector conditions change, and can miss important correlations. Machine learning offers a way to automate the process of building these truth-to-reconstructed object transformations and can capture complex correlations for any given set of input variables. Such machine learning algorithms, with sufficient optimization, could have a wide range of applications: improving phenomenological studies by using a better detector representation, allowing for more efficient production of Geant4 simulations by only simulating events within an interesting part of phase space, and enabling studies of future experimental sensitivity to new physics.
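A minimal sketch of the learned truth-to-reconstructed mapping described above: a small network takes truth-level object kinematics and predicts the reconstructed-level values, replacing hand-binned resolution functions. The architecture, the (pt, eta, phi) variable choice, and the synthetic smearing used as a stand-in target are assumptions for illustration, not a specific experiment's parametrisation.

```python
# Sketch of a truth-to-reco regression replacing binned resolution functions.
import torch
import torch.nn as nn

smear_net = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3))  # predicted (pt, eta, phi) at detector level

truth = torch.rand(256, 3)                  # batch of truth-level objects
reco = truth + 0.05 * torch.randn(256, 3)   # stand-in "detector" targets
loss = nn.functional.mse_loss(smear_net(truth), reco)
loss.backward()  # one gradient step of the truth-to-reco regression
```

Note that a deterministic regression like this only learns the mean response; capturing the stochastic part of the resolution typically calls for a generative model (e.g. a normalizing flow or GAN), a design choice the abstract leaves open.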
Pions constitute nearly $70\%$ of final state particles in ultra high energy collisions. They act as a probe to understand the statistical properties of Quantum Chromodynamics (QCD) matter, i.e. the Quark Gluon Plasma (QGP) created in such relativistic heavy ion collisions (HIC). Apart from this, direct photons are among the most versatile tools to study relativistic HIC. They are produced, by various mechanisms, during the entire space-time history of the strongly interacting system. Direct photons provide a measure of jet quenching when compared with other quark or gluon jets. The $\pi^{0}$ decay into two photons makes the identification of uncorrelated photons coming from other processes cumbersome in the Electromagnetic Calorimeter. We investigate the use of deep learning architectures for the reconstruction and identification of single- as well as multi-particle showers produced in the calorimeter by particles created in high energy collisions. We utilize electromagnetic-shower data at the calorimeter cell level to train the network and show improvements in identification and characterization. These networks are fast and computationally inexpensive for particle shower identification and reconstruction in current and future experiments at particle colliders.
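A minimal sketch of the cell-level shower classification idea above: the calorimeter is treated as a 2-D image of cell energies, and a small CNN separates single-photon showers from merged $\pi^{0} \to \gamma\gamma$ showers. The 16x16 cell grid and layer sizes are illustrative assumptions rather than the paper's architecture.

```python
# Sketch of calorimeter shower identification from cell-energy images.
import torch
import torch.nn as nn

shower_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 2))  # logits: [single photon, pi0]

cells = torch.rand(8, 1, 16, 16)  # batch of cell-energy "images"
logits = shower_net(cells)        # per-event class scores
```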
We present a procedure for reconstructing particle cascades from event data measured in a high energy physics experiment. For evaluating the hypothesis of a specific physics process causing the observed data, all possible reconstruction …
73 - Steven Lantz 2020
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectory of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD architectures that are now prevalent in high-performance hardware. In this paper, we discuss the design and performance of the improved tracking algorithm, referred to as mkFit. A key piece of the algorithm is the Matriplex library, containing dedicated code to optimally vectorize operations on small matrices. The physics performance of the mkFit algorithm is comparable to the nominal CMS tracking algorithm when reconstructing tracks from simulated proton-proton collisions within the CMS detector. We study the scaling of the algorithm as a function of the parallel resources utilized and find large speedups both from vectorization and multi-threading. mkFit achieves a speedup of a factor of 6 compared to the nominal algorithm when run in a single-threaded application within the CMS software framework.
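For context, here is a minimal sketch of the Kalman-filter update at the heart of such track building: a track state (parameters and covariance) is combined with a new hit measurement. The two-parameter (position, slope) state and matrix sizes are illustrative assumptions; real trackers use 5- or 6-parameter helix states, propagate through material between hits, and (as in mkFit's Matriplex library) vectorize these small-matrix operations across many candidate tracks at once.

```python
# Sketch of a single Kalman-filter measurement update for track fitting.
import numpy as np

def kalman_update(x, P, z, H, R):
    """Update state x (cov P) with measurement z (model H, noise R)."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)         # blend prediction with the hit
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy track state: (position, slope) in one detector-plane coordinate.
x = np.array([0.0, 0.1]); P = np.eye(2)
H = np.array([[1.0, 0.0]])              # a hit measures position only
z = np.array([0.05]); R = np.array([[0.01]])
x, P = kalman_update(x, P, z, H, R)
```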
