
Development of a Vertex Finding Algorithm using Recurrent Neural Network

Published by Taikan Suehara
Publication date: 2021
Paper language: English





Deep learning is a rapidly evolving technology with the potential to significantly improve the physics reach of collider experiments. In this study we develop a novel vertex-finding algorithm for future lepton colliders such as the International Linear Collider. We deploy two networks: a simple fully connected network that looks for vertex seeds among track pairs, and a customized Recurrent Neural Network with an attention mechanism and an encoder-decoder structure that associates tracks to the vertex seeds. The performance of the vertex finder is compared with that of the standard ILC reconstruction algorithm.
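As an illustration of the track-association step, an attention mechanism lets a vertex-seed "query" weight candidate tracks by compatibility. The following is a minimal pure-Python sketch of dot-product attention only; the feature vectors and scoring below are hypothetical and do not reproduce the paper's actual network:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weights(seed, tracks):
    """Dot-product attention of a vertex-seed query over track feature vectors."""
    scores = [sum(q * k for q, k in zip(seed, t)) for t in tracks]
    return softmax(scores)

# Hypothetical 3-dimensional features for one seed and three tracks.
seed = [1.0, 0.0, 0.5]
tracks = [[1.0, 0.1, 0.4],    # compatible with the seed
          [-1.0, 0.9, 0.0],   # incompatible
          [0.8, 0.0, 0.6]]    # also compatible
weights = attention_weights(seed, tracks)
```

The weights form a probability distribution over tracks; the incompatible track receives the smallest weight, which is the behavior an attention-based track-to-seed association relies on.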




Read also

In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton-proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well-suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on parallelizable, computationally efficient, and scalable graph neural networks optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark-antiquark pairs produced in proton-proton collisions in conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.
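The core operation of a graph neural network like the one described is message passing over a graph of detector elements: each node updates itself using an aggregate of its neighbors' features. A minimal pure-Python sketch of one mean-aggregation round (the graph and features are illustrative, not the MLPF architecture itself):

```python
def message_pass(features, edges):
    """One round of mean-aggregation message passing.

    features: list of per-node feature vectors (lists of floats)
    edges:    list of undirected (i, j) index pairs
    Returns each node's own features concatenated with the mean of its
    neighbours' features (zeros for isolated nodes).
    """
    n = len(features)
    neigh = [[] for _ in range(n)]
    for i, j in edges:
        neigh[i].append(j)
        neigh[j].append(i)
    out = []
    for i in range(n):
        if neigh[i]:
            agg = [sum(features[j][d] for j in neigh[i]) / len(neigh[i])
                   for d in range(len(features[i]))]
        else:
            agg = [0.0] * len(features[i])
        out.append(features[i] + agg)  # concatenate self + aggregated messages
    return out

# Tiny chain graph: node 1 is connected to nodes 0 and 2.
updated = message_pass([[1.0], [3.0], [5.0]], [(0, 1), (1, 2)])
```

In a trained model the aggregation is followed by learned transformations; this sketch only shows the structural idea that makes such networks parallelizable and scalable.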
We have studied the performance of a new algorithm for electron/pion separation in an Emulsion Cloud Chamber (ECC) made of lead and nuclear emulsion films. The software for separation consists of two parts: a shower reconstruction algorithm and a Neural Network that assigns to each reconstructed shower the probability to be an electron or a pion. The performance has been studied for the ECC of the OPERA experiment [1]. The $e/\pi$ separation algorithm has been optimized by using a detailed Monte Carlo simulation of the ECC and tested on real data taken at CERN (pion beams) and at DESY (electron beams). The algorithm allows one to achieve a 90% electron identification efficiency with a pion misidentification probability smaller than 1% for energies higher than 2 GeV.
Selection of the correct convergence angle is essential for achieving the highest resolution imaging in scanning transmission electron microscopy (STEM). Use of poor heuristics, such as Rayleigh's quarter-phase rule, to assess probe quality, together with uncertainties in the measurement of the aberration function, results in incorrect selection of convergence angles and lower resolution. Here, we show that the Strehl ratio provides an accurate and efficiently computable criterion for evaluating probe size in STEM. A convolutional neural network trained on the Strehl ratio is shown to outperform experienced microscopists at selecting a convergence angle from a single electron Ronchigram, using simulated datasets. Generating tens of thousands of simulated Ronchigram examples, the network is trained to select convergence angles yielding probes on average 85% nearer to optimal size at millisecond speeds (0.02% of human assessment time). Qualitative assessment on experimental Ronchigrams with intentionally introduced aberrations suggests that trends in the optimal convergence angle size are well modeled, but high accuracy requires extensive training datasets. This near-immediate assessment of Ronchigrams using the Strehl ratio and machine learning highlights a viable path toward rapid, automated alignment of aberration-corrected electron microscopes.
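For small aberrations the Strehl ratio is commonly estimated from the RMS wavefront phase error via the extended Maréchal approximation, which is what makes it cheap to evaluate. A brief sketch of that standard optics formula (not code from the paper; the input value below is arbitrary):

```python
import math

def strehl_marechal(rms_phase_error):
    """Extended Marechal approximation: Strehl ratio ~ exp(-sigma^2),
    where sigma is the RMS wavefront phase error in radians.
    Valid for small aberrations (Strehl ratio near 1)."""
    return math.exp(-rms_phase_error ** 2)

# A perfect wavefront gives Strehl ratio 1; larger errors degrade it.
s_perfect = strehl_marechal(0.0)
s_aberrated = strehl_marechal(0.5)
```

Because the approximation is monotonic in the aberration level, it provides a single scalar figure of merit that a network (or a microscopist) can optimize when choosing a convergence angle.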
In this paper we recreate, and improve on, the binary classification method for particles proposed in the Roe et al. (2005) paper "Boosted decision trees as an alternative to artificial neural networks for particle identification". The particles are tau neutrinos, which we refer to as background, and electron neutrinos, the signal we are interested in. In the original paper the preferred algorithm is a boosted decision tree, due to its low tuning effort and good overall performance at the time. Our choice for implementation is a deep neural network, which is faster and more promising in performance. We show how, using modern techniques, we are able to improve on the original result, both in accuracy and in training time.
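At inference time, a signal/background classifier of this kind reduces to a stack of dense layers ending in a sigmoid that outputs P(signal). A minimal pure-Python forward pass; the toy weights are hypothetical, not trained on any neutrino data:

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, act):
    """One fully connected layer; weights holds one row per output unit."""
    return [act(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def classify(features, layers):
    """Forward pass: hidden ReLU layers, then one sigmoid output = P(signal)."""
    h = features
    for W, b in layers[:-1]:
        h = dense(h, W, b, relu)
    W, b = layers[-1]
    return dense(h, W, b, sigmoid)[0]

# Toy network (hypothetical weights): one hidden unit, one output unit.
layers = [([[1.0, -1.0]], [0.0]),   # hidden layer
          ([[2.0]], [0.0])]         # output layer
p_signal = classify([1.0, 0.0], layers)
```

Training would adjust the weights by gradient descent on labeled events; the sketch only shows why inference with a DNN is fast, i.e. a handful of matrix-vector products.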
A framework is presented to extract and understand decision-making information from a deep neural network (DNN) classifier of jet substructure tagging techniques. The general method studied is to provide expert variables that augment inputs (eXpert AUGmented variables, or XAUG variables), then apply layerwise relevance propagation (LRP) to networks both with and without XAUG variables. The XAUG variables are concatenated with the intermediate layers after network-specific operations (such as convolution or recurrence), and used in the final layers of the network. The results of comparing networks with and without the addition of XAUG variables show that XAUG variables can be used to interpret classifier behavior, increase discrimination ability when combined with low-level features, and in some cases capture the behavior of the classifier completely. The LRP technique can be used to identify the relevant information the network is using, and, when combined with the XAUG variables, can be used to rank features, allowing one to find a reduced set of features that captures part of the network performance. In the studies presented, adding XAUG variables to low-level DNNs increased the efficiency of classifiers by as much as 30-40%. In addition to performance improvements, an approach to quantify numerical uncertainties in the training of these DNNs is presented.
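Layerwise relevance propagation redistributes the classifier output backwards through each layer in proportion to each input's contribution, conserving the total relevance. A minimal pure-Python sketch of the basic LRP z-rule for a single dense layer (illustrative only; the networks in the paper are more complex):

```python
def lrp_dense(inputs, weights, relevance, eps=1e-9):
    """Propagate output relevance back through one dense layer (LRP z-rule).

    inputs:    activations entering the layer
    weights:   one row of weights per output unit
    relevance: relevance assigned to each output unit
    Returns the relevance attributed to each input.
    """
    n_in = len(inputs)
    R = [0.0] * n_in
    for j, Rj in enumerate(relevance):
        # Contribution of each input to output unit j.
        z = [inputs[i] * weights[j][i] for i in range(n_in)]
        denom = sum(z) + eps   # eps avoids division by zero
        for i in range(n_in):
            R[i] += z[i] / denom * Rj
    return R

# Two inputs, one output unit carrying relevance 1.0.
R_in = lrp_dense([1.0, 2.0], [[0.5, 0.5]], [1.0])
```

The key property, used when ranking features such as XAUG variables, is conservation: the input relevances sum (up to the stabilizer) to the relevance that entered the layer.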

