
Variational Autoencoders for Generative Modelling of Water Cherenkov Detectors

Submitted by Abhishek
Publication date: 2019
Language: English
Author: Abhishek Abhishek





Matter-antimatter asymmetry is one of the major unsolved problems in physics that can be probed through precision measurements of charge-parity symmetry violation at current and next-generation neutrino oscillation experiments. In this work, we demonstrate the capability of variational autoencoders and normalizing flows to approximate the generative distribution of simulated data for water Cherenkov detectors commonly used in these experiments. We study the performance of these methods and their applicability for semi-supervised learning and synthetic data generation.
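The abstract does not include an implementation, but a minimal sketch clarifies the two ingredients a VAE needs: a probabilistic encoder producing a mean and log-variance, and a decoder trained by maximizing the ELBO. The sketch below assumes PyTorch; the input size n_pmts (a flattened array of PMT charges), the layer widths, and the latent dimension are illustrative assumptions, not the paper's architecture.

```python
# Minimal VAE sketch (illustrative assumptions, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, n_pmts=1024, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pmts, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)      # posterior mean
        self.fc_logvar = nn.Linear(256, latent_dim)  # posterior log-variance
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_pmts)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def neg_elbo(x_hat, x, mu, logvar):
    # Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I))
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```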




Read also

Cosmic-ray muons and especially their secondaries break apart nuclei (spallation) and produce fast neutrons and beta-decay isotopes, which are backgrounds for low-energy experiments. In Super-Kamiokande, these beta decays are the dominant background in 6--18 MeV, relevant for solar neutrinos and the diffuse supernova neutrino background. In a previous paper, we showed that these spallation isotopes are produced primarily in showers, instead of in isolation. This explains an empirical spatial correlation between a peak in the muon Cherenkov light profile and the spallation decay, which Super-Kamiokande used to develop a new spallation cut. However, the muon light profiles that Super-Kamiokande measured are grossly inconsistent with shower physics. We show how to resolve this discrepancy and how to reconstruct accurate profiles of muons and their showers from their Cherenkov light. We propose a new spallation cut based on these improved profiles and quantify its effects. Our results can significantly benefit low-energy studies in Super-Kamiokande, and will be especially important for detectors at shallower depths, like the proposed Hyper-Kamiokande.
Cherenkov detectors employ various methods to maximize light collection at the photomultiplier tubes (PMTs). These generally involve the use of highly reflective materials lining the interior of the detector, reflective materials around the PMTs, or wavelength-shifting sheets around the PMTs. Recently, the use of water-soluble wavelength-shifters has been explored to increase the measurable light yield of Cherenkov radiation in water. These wavelength-shifting chemicals are capable of absorbing light in the ultraviolet and re-emitting it in a range detectable by PMTs. Using a 250 L water Cherenkov detector, we have characterized the increase in light yield from three compounds in water: 4-Methylumbelliferone, Carbostyril-124, and Amino-G Salt. We report the gain in PMT response at a concentration of 1 ppm as: 1.88 ± 0.02 for 4-Methylumbelliferone, stable to within 0.5% over 50 days, 1.37 ± 0.03 for Carbostyril-124, and 1.20 ± 0.02 for Amino-G Salt. The response of 4-Methylumbelliferone was modeled, resulting in a simulated gain within 9% of the experimental gain at 1 ppm concentration. Finally, we report an increase in neutron detection performance of a large-scale (3.5 kL) gadolinium-doped water Cherenkov detector at a 4-Methylumbelliferone concentration of 1 ppm.
The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.
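The abstract does not state which model or input features were used; as a minimal sketch of the kind of regression involved, the snippet below maps a few assumed per-event summary features (total charge, hit count, and reconstructed track length are hypothetical choices) to a lepton energy with a small PyTorch MLP.

```python
# Illustrative energy-regression sketch; feature set and network shape
# are assumptions, not the TITUS analysis.
import torch
import torch.nn as nn

energy_net = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),   # 3 assumed summary features per event
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),              # predicted lepton energy
)
loss_fn = nn.MSELoss()

features = torch.randn(32, 3)      # a batch of 32 dummy events
pred_energy = energy_net(features)
```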
Learning generative models that span multiple data modalities, such as vision and language, is often motivated by the desire to learn more useful, generalisable representations that faithfully capture common underlying factors between the modalities. In this work, we characterise successful learning of such models as the fulfillment of four criteria: i) implicit latent decomposition into shared and private subspaces, ii) coherent joint generation over all modalities, iii) coherent cross-generation across individual modalities, and iv) improved model learning for individual modalities through multi-modal integration. Here, we propose a mixture-of-experts multimodal variational autoencoder (MMVAE) to learn generative models on different sets of modalities, including a challenging image-language dataset, and demonstrate its ability to satisfy all four criteria, both qualitatively and quantitatively.
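For readers unfamiliar with the mixture-of-experts construction, the joint posterior in an MMVAE is a uniform mixture of per-modality posteriors, q(z|x_1,...,x_M) = (1/M) Σ_m q_m(z|x_m). Below is a minimal sketch of sampling from such a mixture, assuming PyTorch and Gaussian experts parameterised by per-modality encoders (not shown here).

```python
# Sampling from a mixture-of-experts posterior with Gaussian experts.
import torch

def sample_moe_posterior(mus, logvars):
    """mus, logvars: lists of (batch, latent) tensors, one per modality."""
    m = torch.randint(len(mus), (1,)).item()  # pick one expert uniformly
    mu, logvar = mus[m], logvars[m]
    # Reparameterised draw from the chosen expert q_m(z | x_m)
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
```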
The ability to extract generative parameters from high-dimensional fields of data in an unsupervised manner is a highly desirable yet unrealized goal in computational physics. This work explores the use of variational autoencoders (VAEs) for non-linear dimension reduction with the aim of disentangling the low-dimensional latent variables to identify independent physical parameters that generated the data. A disentangled decomposition is interpretable and can be transferred to a variety of tasks including generative modeling, design optimization, and probabilistic reduced order modelling. A major emphasis of this work is to characterize disentanglement using VAEs while minimally modifying the classic VAE loss function (i.e. the ELBO) to maintain high reconstruction accuracy. Disentanglement is shown to be highly sensitive to rotations of the latent space, hyperparameters, random initializations and the learning schedule. The loss landscape is characterized by over-regularized local minima that surround desirable solutions. We illustrate comparisons between disentangled and entangled representations by juxtaposing learned latent distributions and the true generative factors in a model porous flow problem. Implementing hierarchical priors (HP) is shown to better facilitate the learning of disentangled representations over the classic VAE. The choice of the prior distribution is shown to have a dramatic effect on disentanglement. In particular, the regularization loss is unaffected by latent rotation when training with rotationally-invariant priors, and thus learning non-rotationally-invariant priors aids greatly in capturing the properties of generative factors, improving disentanglement. Some issues inherent to training VAEs, such as the convergence to over-regularized local minima, are illustrated and investigated, and potential techniques for mitigation are presented.
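For reference, the "classic VAE loss" referred to here is the evidence lower bound (ELBO); the second (KL) term is the regularization loss discussed above, and the prior p(z) enters only through it:

```latex
\mathcal{L}(\theta, \phi; x) =
  \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\text{reconstruction}}
  \;-\;
  \underbrace{D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)}_{\text{regularization}}
```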
