
Speeding up complex multivariate data analysis in Borexino with parallel computing based on Graphics Processing Unit

Submitted by: Mr. Xue-Feng Ding
Publication date: 2018
Research field: Physics
Paper language: English





A spectral fitter based on the graphics processing unit (GPU) has been developed for the Borexino solar neutrino analysis. It shortens the fitting time dramatically with respect to the CPU-based fitting procedure. In the Borexino solar neutrino spectral analysis, a fit usually requires around one hour to converge, since it includes time-consuming convolutions to account for the detector response and pile-up effects. The convergence time grows to more than two days when extra computations for the discrimination of $^{11}$C and external $\gamma$s are included. In sharp contrast, the GPU-based fitter takes less than 10 seconds and less than four minutes, respectively. The fitter is built on the GooFit project, with customized likelihoods, PDFs and infrastructure supporting the relevant analysis methods. In these proceedings the design of the package, the features developed, and the comparison with the original CPU fitter are presented.
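To make the structure of such a fit concrete, here is a minimal toy sketch in Python (not the Borexino/GooFit code): a binned Poisson likelihood whose model spectrum is convolved with a Gaussian detector response before being compared to data. The energy range, component shapes, and parameter values are all invented for the illustration; the convolution inside `model` is the kind of per-bin work a GPU fitter parallelizes across threads.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize

# Toy binned spectrum: 450 energy bins (units and shapes are invented).
edges = np.linspace(200.0, 2000.0, 451)
centers = 0.5 * (edges[:-1] + edges[1:])

def model(params):
    """Expected counts per bin: two toy components smeared by the response."""
    n_sig, n_bkg, sigma_bins = params
    sig = np.exp(-0.5 * ((centers - 660.0) / 30.0) ** 2)  # toy line-like component
    bkg = np.exp(-centers / 500.0)                        # toy smooth background
    sig /= sig.sum()
    bkg /= bkg.sum()
    raw = n_sig * sig + n_bkg * bkg
    # The convolution with the detector response is the expensive step; on a
    # GPU each output bin of the convolution can be computed by its own thread.
    return gaussian_filter1d(raw, max(abs(sigma_bins), 1e-3))

def nll(params, data):
    """Poisson negative log-likelihood summed over bins (constant terms dropped)."""
    mu = np.clip(model(params), 1e-12, None)
    return float(np.sum(mu - data * np.log(mu)))

rng = np.random.default_rng(0)
data = rng.poisson(model([5e4, 2e5, 2.0]))

fit = minimize(nll, x0=[4e4, 1.8e5, 1.5], args=(data,), method="Nelder-Mead")
print("fitted (n_sig, n_bkg, resolution):", fit.x)
```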




Read also

In high-energy physics, with the search for ever smaller signals in ever larger data sets, it has become essential to extract a maximum of the available information from the data. Multivariate classification methods based on machine learning techniques have become a fundamental ingredient to most analyses. The multivariate classifiers themselves have also evolved significantly in recent years; statisticians have found new ways to tune and to combine classifiers to gain further performance. Integrated into the analysis framework ROOT, TMVA is a toolkit which hosts a large variety of multivariate classification algorithms. Training, testing, performance evaluation and application of all available classifiers are carried out simultaneously via user-friendly interfaces. With version 4, TMVA has been extended to multivariate regression of a real-valued target vector. Regression is invoked through the same user interfaces as classification. TMVA 4 also features more flexible data handling, allowing one to arbitrarily form combined MVA methods. A generalised boosting method is the first realisation benefiting from the new framework.
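As a rough flavor of the TMVA workflow described above, the sketch below books and trains a single classifier on toy trees via PyROOT. It assumes a recent ROOT release (6.28 or later, where ROOT.RDF.FromNumpy is available); the toy variables, file names, and BDT option string are placeholders.

```python
import numpy as np
import ROOT

# Write toy signal/background TTrees (ROOT >= 6.28 for RDF.FromNumpy).
rng = np.random.default_rng(1)
ROOT.RDF.FromNumpy({
    "x": rng.normal(1.0, 1.0, 5000).astype(np.float32),
    "y": rng.normal(1.0, 1.0, 5000).astype(np.float32),
}).Snapshot("sig", "toy_sig.root")
ROOT.RDF.FromNumpy({
    "x": rng.normal(-1.0, 1.0, 5000).astype(np.float32),
    "y": rng.normal(-1.0, 1.0, 5000).astype(np.float32),
}).Snapshot("bkg", "toy_bkg.root")

sig_file = ROOT.TFile.Open("toy_sig.root")
bkg_file = ROOT.TFile.Open("toy_bkg.root")

out = ROOT.TFile.Open("tmva_out.root", "RECREATE")
factory = ROOT.TMVA.Factory("job", out, "!V:!Silent:AnalysisType=Classification")
loader = ROOT.TMVA.DataLoader("dataset")
loader.AddVariable("x", "F")
loader.AddVariable("y", "F")
loader.AddSignalTree(sig_file.Get("sig"), 1.0)
loader.AddBackgroundTree(bkg_file.Get("bkg"), 1.0)
loader.PrepareTrainingAndTestTree(ROOT.TCut(""), "SplitMode=Random")

# Training, testing and evaluation all run through the same factory interface.
factory.BookMethod(loader, ROOT.TMVA.Types.kBDT, "BDT", "NTrees=200:MaxDepth=3")
factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()
out.Close()
```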
Correlation and similarity measures are widely used in all areas of the sciences and social sciences. Often the variables are not numbers but qualitative descriptors called categorical data. We define and study the similarity matrix, as a measure of similarity, for the case of categorical data. This is of interest due to the deluge of categorical data in the public domain, such as movie ratings, top-10 rankings and data from social media, that requires analysis. We show that the statistical properties of the spectra of similarity matrices constructed from categorical data follow those from random matrix theory. We demonstrate this approach by applying it to data from Indian general elections and sea-level pressures in the North Atlantic Ocean.
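The construction can be illustrated with a short toy example; the agreement-fraction similarity used below is an assumption for the sketch, not necessarily the paper's exact definition.

```python
import numpy as np

# Toy categorical data: n_subjects respondents, n_items questions,
# each answered with one of n_categories labels.
rng = np.random.default_rng(2)
n_subjects, n_items, n_categories = 200, 50, 5
data = rng.integers(0, n_categories, size=(n_subjects, n_items))

# Similarity between two subjects = fraction of items on which they agree.
S = (data[:, None, :] == data[None, :, :]).mean(axis=2)

# Eigenvalue spectrum of the similarity matrix: the bulk is what one compares
# against random matrix theory; an isolated large eigenvalue signals a common mode.
eigvals = np.linalg.eigvalsh(S)
print("largest eigenvalue :", eigvals[-1])
print("second largest     :", eigvals[-2])
print("bulk mean          :", eigvals[:-1].mean())
```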
New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle physics for simulation, reconstruction, and analysis are naturally deployed on such platforms. We demonstrate that the acceleration of machine learning inference as a web service represents a heterogeneous computing solution for particle physics experiments that potentially requires minimal modification to the current computing model. As examples, we retrain the ResNet-50 convolutional neural network to demonstrate state-of-the-art performance for top quark jet tagging at the LHC and apply a ResNet-50 model with transfer learning for neutrino event classification. Using Project Brainwave by Microsoft to accelerate the ResNet-50 image classification model, we achieve average inference times of 60 (10) milliseconds with our experimental physics software framework using Brainwave as a cloud (edge or on-premises) service, representing an improvement by a factor of approximately 30 (175) in model inference latency over traditional CPU inference in current experimental hardware. A single FPGA service accessed by many CPUs achieves a throughput of 600--700 inferences per second using an image batch of one, comparable to large batch-size GPU throughput and significantly better than small batch-size GPU throughput. Deployed as an edge or cloud service for the particle physics computing model, coprocessor accelerators can have a higher duty cycle and are potentially much more cost-effective.
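A minimal client-side sketch of this as-a-service pattern might look as follows. The service URL, the TF-Serving-style JSON schema, and the label ordering are all hypothetical stand-ins; the systems in the paper integrate the remote call into the experiment's software framework rather than a plain HTTP script.

```python
import numpy as np
import requests

# Hypothetical endpoint for a remote FPGA/coprocessor inference service.
SERVICE_URL = "http://inference.example.org/v1/models/resnet50:predict"

# Stand-in for a preprocessed detector "image" (e.g., a jet image).
jet_image = np.random.rand(224, 224, 3).astype(np.float32)

# The CPU process ships the input and receives class scores back.
payload = {"instances": [jet_image.tolist()]}
resp = requests.post(SERVICE_URL, json=payload, timeout=5.0)
resp.raise_for_status()

scores = resp.json()["predictions"][0]
print("top-quark-jet score:", scores[0])  # label ordering is assumed
```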
Benoit Viaud, 2016
Differential measurements of particle collisions or decays can provide stringent constraints on physics beyond the Standard Model of particle physics. In particular, the distributions of the kinematical and angular variables that characterise heavy-meson multibody decays are non-trivial and can be a signature of the underlying interaction physics. In the era of high luminosity opened by the advent of the Large Hadron Collider and of the flavour factories, differential measurements are less and less dominated by statistical precision and require a precise determination of efficiencies that depend simultaneously on several variables and do not factorise in these variables. This document is a reflection on the potential of multivariate techniques for the determination of such multidimensional efficiencies. We carried out two case studies showing that multilayer perceptron neural networks can determine and correct for the distortions introduced by reconstruction and selection criteria in the multidimensional phase space of the decays $B^{0}\rightarrow K^{*0}(\rightarrow K^{+}\pi^{-})\mu^{+}\mu^{-}$ and $D^{0}\rightarrow K^{-}\pi^{+}\pi^{+}\pi^{-}$, at the price of a minimal analysis effort. We conclude that this method can already be used for measurements whose statistical precision does not yet reach the percent level, and that with more sophisticated machine learning methods the aforementioned potential is very promising.
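The underlying idea can be sketched with scikit-learn (the variables and the toy non-factorizing efficiency below are invented; the paper trains multilayer perceptrons on simulation): fit a classifier to predict the per-event selection probability over the phase-space variables, then weight accepted events by its inverse.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy phase space and a made-up, non-factorizing selection efficiency.
rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(20000, 5))   # 5 kinematic/angular variables

eps_true = 0.9 / (1.0 + np.exp(-4.0 * (X[:, 0] * X[:, 1] + 0.5 * X[:, 2])))
passed = rng.uniform(size=len(X)) < eps_true  # which events survive selection

# An MLP trained on pass/fail labels learns P(pass | x) ~ eps(x).
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
mlp.fit(X, passed)

# Correct the surviving sample with per-event weights 1/eps(x).
eps_hat = mlp.predict_proba(X[passed])[:, 1]
weights = 1.0 / np.clip(eps_hat, 1e-3, None)
print("corrected yield:", weights.sum(), " true yield:", len(X))
```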
We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from the basic rules of probability, we present Bayes' theorem and its applications. In particular, we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine the spectrum model uniquely. For these two studies, we use the program Nested fit to calculate the different probability distributions and other related quantities. Nested fit is a Fortran90/Python code developed over recent years for the analysis of atomic spectra. As indicated by its name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
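As a toy illustration of evidence-based model comparison (this is not the Nested fit code: the integration is done on a parameter grid rather than with nested sampling, and the spectrum, priors, and line shape are invented), consider deciding between "flat background" and "flat background plus satellite line":

```python
import numpy as np

# Toy spectrum: flat background plus (perhaps) a narrow satellite line.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 200)
line = np.exp(-0.5 * ((x - 6.0) / 0.3) ** 2)
data = rng.poisson(5.0 + 3.0 * line)

def log_like(mu):
    """Poisson log-likelihood (constant terms dropped)."""
    mu = np.clip(mu, 1e-12, None)
    return np.sum(data * np.log(mu) - mu)

# Model 1: background only, b ~ Uniform(0, 20).
b_grid = np.linspace(0.1, 20.0, 400)
logL1 = np.array([log_like(np.full_like(x, b)) for b in b_grid])

# Model 2: background (fixed here, for brevity) plus line amplitude a ~ Uniform(0, 10).
a_grid = np.linspace(0.0, 10.0, 400)
logL2 = np.array([log_like(5.0 + a * line) for a in a_grid])

# Evidence Z = integral of L * prior; with a uniform prior this is the grid
# mean of L. Work relative to the maximum for numerical stability.
log_Z1 = logL1.max() + np.log(np.exp(logL1 - logL1.max()).mean())
log_Z2 = logL2.max() + np.log(np.exp(logL2 - logL2.max()).mean())
print("log Bayes factor (line vs. no line):", log_Z2 - log_Z1)
```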