
Developing a Bubble Chamber Particle Discriminator Using Semi-Supervised Learning

Published by: Scott Fallows
Publication date: 2018
Research field: Physics
Paper language: English





The identification of non-signal events is a major hurdle for bubble chamber dark matter experiments such as PICO-60. The current practice of manually developing a discriminator function to eliminate background events is difficult because the available calibration data are frequently impure and present only in small quantities. In this study, several discriminator input/preprocessing formats and neural network architectures are applied to the task. First, they are optimized in a supervised learning context. Next, two novel semi-supervised learning algorithms are trained and found to replicate the Acoustic Parameter (AP) discriminator previously used in PICO-60 with a mean accuracy of 97%.
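As a rough, hypothetical illustration of the setting described above, the sketch below shows a generic pseudo-labeling loop in Python: a network is fit to a small (possibly impure) labeled calibration set, assigns labels to unlabeled events it is confident about, and is retrained on the expanded set. This is not the paper's two novel algorithms or its AP replication; the feature dimensions, network size, and confidence threshold are placeholders, and the random arrays merely stand in for the input/preprocessing formats compared in the study.

# Hypothetical pseudo-labeling sketch; not the paper's algorithms.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in features: rows are bubble events, columns are preprocessed inputs.
X_labeled = rng.normal(size=(200, 16))        # small, possibly impure calibration set
y_labeled = rng.integers(0, 2, size=200)      # 0 = background, 1 = signal-like recoil
X_unlabeled = rng.normal(size=(5000, 16))     # bulk of the run, unlabeled

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)

X_train, y_train = X_labeled, y_labeled
for _ in range(3):                            # a few self-training rounds
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) > 0.95      # keep only high-confidence pseudo-labels
    X_train = np.vstack([X_labeled, X_unlabeled[confident]])
    y_train = np.concatenate([y_labeled, proba[confident].argmax(axis=1)])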




Read also

C. Amole, M. Ardid, D. M. Asner, et al. (2015)
New data are reported from the operation of a 2-liter C$_3$F$_8$ bubble chamber in the 2100 meter deep SNOLAB underground laboratory, with a total exposure of 211.5 kg-days at four different recoil energy thresholds ranging from 3.2 keV to 8.1 keV. These data show that C$_3$F$_8$ provides excellent electron recoil and alpha rejection capabilities at very low thresholds, including the first observation of a dependence of acoustic signal on alpha energy. Twelve single nuclear recoil event candidates were observed during the run. The candidate events exhibit timing characteristics that are not consistent with the hypothesis of a uniform time distribution, and no evidence for a dark matter signal is claimed. These data provide the most sensitive direct detection constraints on WIMP-proton spin-dependent scattering to date, with significant sensitivity at low WIMP masses for spin-independent WIMP-nucleon scattering.
Attaullah Sahito, Eibe Frank, et al. (2021)
Neural networks have been successfully used as classification models yielding state-of-the-art results when trained on a large number of labeled samples. These models, however, are more difficult to train successfully for semi-supervised problems where small amounts of labeled instances are available along with a large number of unlabeled instances. This work explores a new training method for semi-supervised learning that is based on similarity function learning using a Siamese network to obtain a suitable embedding. The learned representations are discriminative in Euclidean space, and hence can be used for labeling unlabeled instances using a nearest-neighbor classifier. Confident predictions of unlabeled instances are used as true labels for retraining the Siamese network on the expanded training set. This process is applied iteratively. We perform an empirical study of this iterative self-training algorithm. For improving unlabeled predictions, local learning with global consistency [22] is also evaluated.
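A minimal sketch of the general recipe (a pair-trained embedding followed by nearest-neighbor pseudo-labeling) is given below; the architecture, contrastive margin, and single labeling pass are placeholders rather than the authors' exact iterative procedure, and the data is synthetic.

# Hedged sketch: Siamese-style embedding + nearest-neighbor pseudo-labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Maps an input vector to a low-dimensional embedding."""
    def __init__(self, in_dim=16, emb_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull same-class pairs together, push different-class pairs apart."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Toy data: a small labeled set and a larger unlabeled set.
torch.manual_seed(0)
X_lab, y_lab = torch.randn(100, 16), torch.randint(0, 2, (100,))
X_unl = torch.randn(1000, 16)

model = EmbeddingNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    i, j = torch.randint(0, len(X_lab), (2, 64))            # random training pairs
    same = (y_lab[i] == y_lab[j]).float()
    loss = contrastive_loss(model(X_lab[i]), model(X_lab[j]), same)
    opt.zero_grad(); loss.backward(); opt.step()

# Label unlabeled points by their nearest labeled neighbor in embedding space;
# confident assignments would be added to the training set and the loop repeated.
with torch.no_grad():
    z_lab, z_unl = model(X_lab), model(X_unl)
    pseudo = y_lab[torch.cdist(z_unl, z_lab).argmin(dim=1)]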
A careful reanalysis of both Argonne National Laboratory and Brookhaven National Laboratory data for weak single pion production is performed. We consider deuteron nuclear effects and normalization (flux) uncertainties in both experiments. We demonstrate that these two sets of data are in good agreement. For the dipole parametrization of $C_5^A(Q^2)$, we obtain $C_5^A(0)=1.19\pm 0.08$, $M_A=0.94\pm 0.03$ GeV. As an application, we discuss the uncertainty of the neutral current $1\pi^0$ production cross section, which is important for the T2K neutrino oscillation experiment.
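For reference, the dipole parametrization referred to above has the standard form

$$C_5^A(Q^2) = \frac{C_5^A(0)}{\left(1 + Q^2/M_A^2\right)^2},$$

so the quoted fit corresponds to $C_5^A(0) = 1.19 \pm 0.08$ and $M_A = 0.94 \pm 0.03$ GeV.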
Recent advances in quantum technology have led to the development and manufacturing of programmable quantum annealers that promise to solve certain combinatorial optimization problems faster than their classical counterparts. Semi-supervised learning is a machine learning technique that makes use of both labeled and unlabeled data for training, which enables a good classifier with only a small amount of labeled data. In this paper, we propose and theoretically analyze a graph-based semi-supervised learning method aided by quantum annealing, which efficiently utilizes quantum resources while maintaining good accuracy. We illustrate two classification examples, suggesting the feasibility of this method even when only a small portion (20%) of the data is labeled.
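As a purely classical point of comparison, the graph-based semi-supervised step that such quantum-assisted methods build on can be sketched with an off-the-shelf label-spreading model; the dataset, kernel, and roughly 20% labeled fraction below are illustrative only, and no annealer is involved.

# Classical graph-based semi-supervised learning on synthetic data.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_moons(n_samples=500, noise=0.1, random_state=0)
y = np.full_like(y_true, -1)                                  # -1 marks unlabeled points
labeled_idx = np.random.default_rng(0).choice(len(y), size=100, replace=False)
y[labeled_idx] = y_true[labeled_idx]                          # keep ~20% of the labels

model = LabelSpreading(kernel="rbf", gamma=20)                # labels diffuse over a similarity graph
model.fit(X, y)
print("transductive accuracy:", (model.transduction_ == y_true).mean())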
Supervised NLP models rely on large collections of text that closely resemble the intended testing setting. Unfortunately, matching text is often not available in sufficient quantity, and moreover, within any domain of text the data is often highly heterogeneous. In this paper we propose a method to distill the important domain signal as part of a multi-domain learning system, using a latent variable model in which parts of a neural model are stochastically gated based on the inferred domain. We compare the use of discrete versus continuous latent variables, operating in a domain-supervised or a domain semi-supervised setting, where the domain is known only for a subset of training inputs. We show that our model leads to substantial performance improvements over competitive benchmark domain adaptation methods, including methods using adversarial learning.
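A schematic of the domain-gating idea, with invented layer sizes and a placeholder domain-inference head, is sketched below; it does not reproduce the paper's latent-variable model, its inference procedure, or its training objectives.

# Schematic domain-gated block; sizes and the inference head are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainGatedBlock(nn.Module):
    def __init__(self, in_dim, hid_dim, n_domains):
        super().__init__()
        self.domain_head = nn.Linear(in_dim, n_domains)             # stand-in for q(domain | x)
        self.gates = nn.Parameter(torch.randn(n_domains, hid_dim))  # per-domain gate logits
        self.fc = nn.Linear(in_dim, hid_dim)

    def forward(self, x, domain=None):
        if domain is None:                                   # domain unobserved: soft posterior
            q = F.softmax(self.domain_head(x), dim=-1)
        else:                                                # domain observed: one-hot posterior
            q = F.one_hot(domain, self.gates.shape[0]).float()
        gate = torch.sigmoid(q @ self.gates)                 # expected gate under the posterior
        return F.relu(self.fc(x)) * gate                     # gate parts of the hidden layer

block = DomainGatedBlock(in_dim=32, hid_dim=64, n_domains=3)
h_unsup = block(torch.randn(8, 32))                                   # domain unknown
h_sup = block(torch.randn(8, 32), domain=torch.ones(8, dtype=torch.long))  # domain known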