Convolutional neural networks (CNNs) are widely used, state-of-the-art computer vision tools that are becoming increasingly popular in high energy physics. In this paper, we attempt to understand the potential of CNNs for event classification in the NEXT experiment, which will search for neutrinoless double-beta decay in $^{136}$Xe. To do so, we demonstrate the use of CNNs for the identification of electron-positron pair production events, which exhibit a topology similar to that of a neutrinoless double-beta decay event. These events were produced in the NEXT-White high-pressure xenon TPC using 2.6-MeV gamma rays from a $^{228}$Th calibration source. We train a network on Monte Carlo-simulated events and show that, by applying on-the-fly data augmentation, the network can be made robust against differences between simulation and data. The use of CNNs offers a significant improvement in signal efficiency and background rejection when compared to previous non-CNN-based analyses.
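The abstract above does not specify which augmentations were used, so the following is a minimal sketch of what on-the-fly augmentation of 3D voxelized TPC events could look like; the function name, transformation set, and parameter values (`max_shift`, `energy_smear`) are illustrative assumptions, not the paper's actual recipe.

```python
import numpy as np

def augment_event(voxels, rng, max_shift=2, energy_smear=0.02):
    """Randomly transform a 3D voxelized event at training time.

    voxels : 3D array of per-voxel energy deposits.
    The transformations below are illustrative choices: reflections and
    small translations exploit approximate detector symmetries, and an
    overall energy-scale smearing absorbs simulation/data calibration
    differences.
    """
    out = voxels
    # Random reflection about each axis.
    for axis in range(3):
        if rng.random() < 0.5:
            out = np.flip(out, axis=axis)
    # Random small translation. Note np.roll wraps around the array
    # edges, which is harmless if events are padded away from the borders.
    shifts = tuple(int(s) for s in rng.integers(-max_shift, max_shift + 1, size=3))
    out = np.roll(out, shifts, axis=(0, 1, 2))
    # Random global energy-scale smearing.
    return out * rng.normal(1.0, energy_smear)
```

Applying a fresh random transformation to each event on every epoch (rather than enlarging the dataset once, offline) is what "on-the-fly" refers to: the network effectively never sees the same event twice.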
We investigate the potential of using deep learning techniques to reject background events in searches for neutrinoless double beta decay with high pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study outperformed previous methods based on the same topological signatures by a factor of 1.2 to 1.6, and there is potential for further improvement.
Next-generation neutrinoless double beta decay experiments aim for half-life sensitivities of ~$10^{27}$ yr, requiring backgrounds to be suppressed to <1 count/tonne/yr. For this, any additional background rejection handle, beyond excellent energy resolution and the use of extremely radiopure materials, is of utmost importance. The NEXT experiment exploits differences in the spatial ionization patterns of double beta decay and single-electron events to discriminate signal from background. While the former display two dense ionization regions (Bragg peaks) at opposite ends of the track, the latter typically have only one such feature. Thus, comparing the energies at the track extremes provides an additional rejection tool. The unique combination of topology-based background discrimination and excellent energy resolution (1% FWHM at the Q-value of the decay) is the distinguishing feature of NEXT. Previous studies demonstrated a topological background rejection factor of ~5 when reconstructing electron-positron pairs in the $^{208}$Tl 1.6 MeV double escape peak (with Compton events as background), recorded in the NEXT-White demonstrator at the Laboratorio Subterraneo de Canfranc, with 72% signal efficiency. This was recently improved through the use of a deep convolutional neural network to yield a background rejection factor of ~10 with 65% signal efficiency. Here, we present a new reconstruction method, based on the Richardson-Lucy deconvolution algorithm, which reverses the blurring induced by electron diffusion and electroluminescence light production in the NEXT TPC. The new method yields highly refined 3D images of reconstructed events and, as a result, significantly improves the topological background discrimination. When applied to real-data 1.6 MeV $e^-e^+$ pairs, it leads to a background rejection factor of 27 at 57% signal efficiency.
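Richardson-Lucy deconvolution is a standard iterative scheme, so its core can be sketched compactly. The 1D version below is a minimal illustration of the update rule; the NEXT analysis applies the analogous procedure to 3D event images with a measured point spread function, and all parameter choices here (iteration count, PSF shape) are illustrative.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=100, eps=1e-12):
    """Basic 1D Richardson-Lucy deconvolution.

    At each step the current estimate is forward-blurred with the PSF,
    compared with the observed data, and multiplied by the back-projected
    correction ratio. The estimate stays nonnegative by construction.
    """
    psf = np.asarray(psf, float)
    psf = psf / psf.sum()          # PSF must be normalized
    psf_flipped = psf[::-1]        # back-projection uses the mirrored PSF
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, eps)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

On a toy track with two point-like energy deposits blurred by a Gaussian (standing in for diffusion), the iterations re-concentrate the charge near the original deposit positions, which is exactly the sharpening effect exploited for topological discrimination.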
In this work, we present a new, high performance algorithm for background rejection in imaging atmospheric Cherenkov telescopes. We build on the machine-learning techniques already popular in gamma-ray astronomy by applying the latest developments in the field, namely recurrent and convolutional neural networks, to the background rejection problem. Use of these techniques addresses some of the key challenges encountered in the currently implemented algorithms and significantly increases the background rejection performance at all energies. We apply these machine learning techniques to the H.E.S.S. telescope array, first testing their performance on simulated data and then applying the analysis to two well-known gamma-ray sources. With real observational data we find significantly improved performance over the current standard methods, with a 20-25% reduction in the background rate when applying the recurrent neural network analysis. Importantly, we also find that the convolutional neural network results are strongly dependent on the sky brightness in the source region, which has important implications for the future implementation of this method in Cherenkov telescope analysis.
Pulse shape discrimination plays a key role in improving the signal-to-background ratio in the NEOS analysis by removing fast neutrons. Identifying particles by looking at the tail of the waveform has been an effective and plausible approach for pulse shape discrimination, but it is limited when sorting low-energy particles. As a good alternative, a convolutional neural network can scan entire waveforms as they are, recognize the characteristics of the pulse, and perform shape classification of NEOS data. This network provides a powerful identification tool for all energy ranges and helps in the search for unprecedented phenomena of low-energy neutrinos (a few MeV or less).
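The conventional tail-based approach that the CNN is compared against is the charge-comparison (tail-to-total) method, which can be sketched in a few lines. The function name, window parameter, and toy pulse shapes below are illustrative, not NEOS's actual analysis cuts.

```python
import numpy as np

def tail_to_total(waveform, tail_start=20):
    """Charge-comparison PSD: fraction of the pulse integral in the tail.

    Neutron-induced (proton recoil) scintillation pulses decay more
    slowly than gamma-induced ones, so they carry a larger tail fraction.
    `tail_start` (samples after the peak) is an illustrative choice.
    """
    waveform = np.asarray(waveform, dtype=float)
    t_peak = int(np.argmax(waveform))
    return waveform[t_peak + tail_start:].sum() / waveform.sum()
```

The limitation noted in the abstract shows up here directly: at low energies the tail contains only a handful of photoelectrons, so this single scalar becomes noisy, whereas a CNN operating on the full waveform can exploit the entire pulse shape.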
The liquid argon ionization current in a sampling calorimeter cell can be analyzed to determine the energy of detected particles. In practice, experimental artifacts such as pileup and electronic noise make the inference of energy from current a difficult process. The beam intensity of the Large Hadron Collider will be significantly increased during the Phase-II long shutdown of 2024-2026. Signal processing techniques that are used to extract the energy of detected particles in the ATLAS detector will suffer a significant loss in performance under these conditions. This paper compares the presently used optimal filter technique to convolutional neural networks for energy reconstruction in the ATLAS liquid argon hadronic end cap calorimeter. In particular, it is shown that convolutional neural networks trained with an appropriately tuned and novel loss function are able to outperform the optimal filter technique.
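For context on the baseline being compared against, the optimal filter computes the pulse amplitude as a linear combination of ADC samples, with weights $a = R^{-1}g \,/\, (g^{T}R^{-1}g)$ built from the normalized pulse shape $g$ and the noise covariance $R$. The sketch below shows this textbook amplitude-only form under a white-noise assumption in the test; the ATLAS implementation additionally constrains the pulse-timing term, which is omitted here.

```python
import numpy as np

def optimal_filter_weights(pulse_shape, noise_cov):
    """Minimum-variance unbiased linear amplitude estimator.

    pulse_shape : normalized expected pulse samples g
    noise_cov   : noise covariance matrix R between samples
    Returns weights a with a @ g == 1 (unbiasedness) that minimize the
    noise variance a @ R @ a of the reconstructed amplitude.
    """
    g = np.asarray(pulse_shape, float)
    Rinv_g = np.linalg.solve(np.asarray(noise_cov, float), g)
    return Rinv_g / (g @ Rinv_g)

def reconstruct_amplitude(samples, weights):
    """Energy estimate as a weighted sum of the digitized samples."""
    return weights @ np.asarray(samples, float)
```

Because the weights are fixed by a single assumed pulse shape and noise model, the estimator degrades when out-of-time pileup distorts the effective pulse, which is the regime where the paper's CNN approach is claimed to help.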