
Machine learning technique to improve anti-neutrino detection efficiency for the ISMRAN experiment

Added by Dhruv Mulmule
Publication date: 2020
Fields: Physics
Language: English





The Indian Scintillator Matrix for Reactor Anti-Neutrino detection (ISMRAN) experiment aims to detect electron anti-neutrinos ($\bar{\nu}_e$) emitted from a reactor via the inverse beta decay (IBD) reaction. The setup, consisting of a 1-ton segmented, Gadolinium-foil-wrapped plastic scintillator array, is planned for remote reactor monitoring and sterile neutrino searches. The detection of the prompt positron and the delayed neutron from IBD provides the signature of a $\bar{\nu}_e$ event in ISMRAN. The number of segments with energy deposits ($\mathrm{N_{bars}}$) and the sum total of these deposited energies are used as discriminants for identifying the prompt positron event and the delayed neutron capture event. However, a simple cut-based selection on these variables leads to a low $\bar{\nu}_e$ signal detection efficiency due to the overlapping regions of $\mathrm{N_{bars}}$ and sum energy for the prompt and delayed events. Multivariate analysis (MVA) tools, employing variables suitably tuned for discrimination, can be useful in such scenarios. In this work we report results from the application of an artificial neural network, the multilayer perceptron (MLP), and in particular its Bayesian extension, MLPBNN, to simulated signal and background events in ISMRAN. Results from applying the MLP to classify prompt positron events against delayed neutron capture events on Hydrogen and Gadolinium nuclei, as well as against typical reactor $\gamma$-ray and fast neutron backgrounds, are reported. An enhanced efficiency of $\sim$91\% with a background rejection of $\sim$73\% for the prompt selection, and an efficiency of $\sim$89\% with a background rejection of $\sim$71\% for the delayed capture event, is achieved using the MLPBNN classifier for the ISMRAN experiment.
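The classification idea in the abstract can be sketched with a minimal multilayer perceptron. This is a toy stand-in, not the paper's TMVA-based MLPBNN: the two "discriminants" and both event populations below are invented Gaussian clusters, chosen only to show how a small network separates overlapping prompt-like and delayed-like distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_events(n):
    # Hypothetical (N_bars, sum-energy)-like clusters; values are illustrative,
    # not ISMRAN simulation output.
    prompt = rng.normal([2.0, 3.0], [0.8, 0.9], size=(n, 2))
    delayed = rng.normal([5.0, 6.5], [1.0, 1.2], size=(n, 2))
    X = np.vstack([prompt, delayed])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=1.0, epochs=500):
    # 2 -> hidden -> 1 network trained by full-batch gradient descent
    # on the binary cross-entropy loss.
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2).ravel()
        d_out = ((p - y) / len(y))[:, None]      # dL/d(output logit)
        d_hid = (d_out @ W2.T) * h * (1.0 - h)   # backprop through hidden layer
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(0)
    return lambda Xq: sigmoid(sigmoid(Xq @ W1 + b1) @ W2 + b2).ravel()

X, y = make_events(500)
X = (X - X.mean(0)) / X.std(0)   # standardise the discriminants
score = train_mlp(X, y)          # returns a signal probability per event
acc = float(np.mean((score(X) > 0.5) == y))
```

In practice one would cut on the network output to trade signal efficiency against background rejection, as the quoted 91%/73% and 89%/71% figures illustrate.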

Related research

This work presents a simple method to determine the significant partial-wave contributions to experimentally determined observables in pseudoscalar meson photoproduction. First, fits to angular distributions are presented and the maximum orbital angular momentum $L_{\mathrm{max}}$ needed to achieve a good fit is determined. Then, recent polarization measurements for $\gamma p \rightarrow \pi^{0} p$ from ELSA, GRAAL, JLab and MAMI are investigated according to the proposed method. This method allows us to project high-spin partial-wave contributions onto any observable, as long as the measurement has the necessary statistical accuracy. We show that high precision and large angular coverage in the polarization data are needed in order to be sensitive to high-spin resonance states, and thereby also to find small resonance contributions. This can be achieved via the interference of these resonances with the well-known states. For the channel $\gamma p \rightarrow \pi^{0} p$, those are the $N(1680)\frac{5}{2}^{+}$ and $\Delta(1950)\frac{7}{2}^{+}$, contributing to the $F$-waves.
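The "increase $L_{\mathrm{max}}$ until the fit is good" strategy can be illustrated with a toy Legendre fit. The coefficients, point count, and uncertainty below are all invented; the point is only that $\chi^2/\mathrm{ndf}$ stays large while the expansion is truncated too early and plateaus once $L_{\mathrm{max}}$ reaches the true order.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)

# Mock angular distribution from a Legendre series truncated at L = 3.
cos_theta = np.linspace(-0.95, 0.95, 40)
true_coef = [1.0, 0.3, -0.5, 0.2]        # a_0 .. a_3, illustrative values
sigma = 0.02                             # assumed per-point uncertainty
y_obs = legendre.legval(cos_theta, true_coef) \
        + rng.normal(0, sigma, cos_theta.size)

def chi2_per_ndf(lmax):
    # Weighted least-squares Legendre fit up to order lmax.
    w = np.full(cos_theta.size, 1.0 / sigma)
    coef = legendre.legfit(cos_theta, y_obs, lmax, w=w)
    resid = (y_obs - legendre.legval(cos_theta, coef)) / sigma
    ndf = cos_theta.size - (lmax + 1)
    return float(resid @ resid) / ndf

scan = {L: chi2_per_ndf(L) for L in range(6)}   # L_max = 0 .. 5
```

Here `scan[2]` remains poor while `scan[3]` and above settle near 1, mimicking how the required $L_{\mathrm{max}}$ is read off from real angular distributions.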
CUPID-Mo is a cryogenic detector array designed to search for neutrinoless double-beta decay ($0\nu\beta\beta$) of $^{100}$Mo. It uses 20 scintillating $^{100}$Mo-enriched Li$_2$MoO$_4$ bolometers instrumented with Ge light detectors to perform active suppression of $\alpha$ backgrounds, drastically reducing the expected background in the $0\nu\beta\beta$ signal region. As a result, pileup events and small detector instabilities that mimic normal signals become non-negligible potential backgrounds. These types of events can in principle be eliminated based on their signal shapes, which are different from those of regular bolometric pulses. We show that a purely data-driven approach based on principal component analysis is able to filter out these anomalous events, without the aid of detector response simulations.
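A data-driven PCA filter of this kind can be sketched as follows. Everything here is synthetic and illustrative (the pulse template, noise level, and the "pileup" model are made up, not CUPID-Mo data): principal components are fit on a clean sample of regular pulses, and events with a large reconstruction residual are flagged as anomalous.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
template = t * np.exp(-5.0 * t)          # toy bolometric pulse shape

def normal_pulses(n):
    amp = rng.uniform(0.8, 1.2, (n, 1))  # gain/energy variation
    return amp * template + rng.normal(0, 0.003, (n, t.size))

def pileup_pulses(n):
    shifted = np.roll(template, 10)      # second pulse arriving late
    return 0.6 * template + 0.6 * shifted + rng.normal(0, 0.003, (n, t.size))

# Fit principal components on regular pulses only.
train = normal_pulses(300)
mu = train.mean(0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
V = Vt[:1]                               # keep k = 1 component (amplitude mode)

def residual(X):
    # Distance between each pulse and its rank-k PCA reconstruction.
    Xc = X - mu
    return np.linalg.norm(Xc - Xc @ V.T @ V, axis=1)

cut = residual(train).mean() + 5.0 * residual(train).std()
test_set = np.vstack([normal_pulses(200), pileup_pulses(20)])
flags = residual(test_set) > cut         # True = anomalous candidate
```

Regular pulses reconstruct well from the leading component, so their residual is pure noise, while pileup shapes leave a large residual and fall above the cut.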
Alexander Glazov, 2017
A method for correcting for detector smearing effects using machine learning techniques is presented. Compared to standard approaches, the method can use more than one reconstructed variable to infer the value of the unsmeared quantity on an event-by-event basis. The method is implemented using a sequential neural network with categorical cross-entropy as the loss function. It is tested on a toy example and is shown to satisfy basic closure tests. Possible applications of the method to the analysis of data from high-energy physics experiments are discussed.
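The core ingredient, a classifier trained with categorical cross-entropy to predict the true (unsmeared) bin from several reconstructed variables, can be shown in miniature. This toy uses plain softmax regression rather than a deep sequential network, and the binning, smearing widths, and sample sizes are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# True quantity discretised into bins; two smeared reconstructed views of it.
n_bins, n = 3, 3000
true_bin = rng.integers(0, n_bins, n)
true_val = true_bin.astype(float)                 # bin centres 0, 1, 2
X = np.stack([true_val + rng.normal(0, 0.4, n),   # reconstructed variable 1
              true_val + rng.normal(0, 0.6, n)],  # reconstructed variable 2
             axis=1)
Y = np.eye(n_bins)[true_bin]                      # one-hot targets

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train with the categorical cross-entropy gradient: d(loss)/d(logits) = P - Y.
W = np.zeros((2, n_bins)); b = np.zeros(n_bins)
for _ in range(300):
    P = softmax(X @ W + b)
    G = (P - Y) / n
    W -= 1.0 * (X.T @ G)
    b -= 1.0 * G.sum(0)

pred = softmax(X @ W + b).argmax(1)               # inferred true bin per event
acc = float((pred == true_bin).mean())
```

Because both smeared variables enter the fit, the per-event inference is sharper than either variable alone would allow, which is the stated advantage over single-variable unfolding.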
A number of scientific competitions have been organised in the last few years with the objective of discovering innovative techniques to perform typical High Energy Physics tasks, such as event reconstruction, classification and new physics discovery. Four of these competitions are summarised in this chapter, from which guidelines on organising such events are derived. In addition, a selection of competition platforms and available datasets is described.
New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle physics for simulation, reconstruction, and analysis are naturally deployed on such platforms. We demonstrate that the acceleration of machine learning inference as a web service represents a heterogeneous computing solution for particle physics experiments that potentially requires minimal modification to the current computing model. As examples, we retrain the ResNet-50 convolutional neural network to demonstrate state-of-the-art performance for top quark jet tagging at the LHC and apply a ResNet-50 model with transfer learning for neutrino event classification. Using Project Brainwave by Microsoft to accelerate the ResNet-50 image classification model, we achieve average inference times of 60 (10) milliseconds with our experimental physics software framework using Brainwave as a cloud (edge or on-premises) service, representing an improvement by a factor of approximately 30 (175) in model inference latency over traditional CPU inference in current experimental hardware. A single FPGA service accessed by many CPUs achieves a throughput of 600--700 inferences per second using an image batch of one, comparable to large batch-size GPU throughput and significantly better than small batch-size GPU throughput. Deployed as an edge or cloud service for the particle physics computing model, coprocessor accelerators can have a higher duty cycle and are potentially much more cost-effective.