
Deep Neural Network as an alternative to Boosted Decision Trees for PID

Added by Denis Stanev
Publication date: 2021
Language: English





In this paper we recreate, and improve on, the binary classification method for particles proposed in Roe et al. (2005), "Boosted decision trees as an alternative to artificial neural networks for particle identification". The particles are tau neutrinos, which we refer to as background, and electron neutrinos, the signal we are interested in. In the original paper the preferred algorithm is a boosted decision tree, owing to its low tuning effort and good overall performance at the time. Our implementation instead uses a deep neural network, which is faster and more promising in performance. We show how, using modern techniques, we are able to improve on the original result in both accuracy and training time.
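The comparison described above can be sketched with scikit-learn stand-ins: a `GradientBoostingClassifier` in place of the original boosted decision tree and an `MLPClassifier` in place of the deep neural network, run on synthetic data. This is only an illustrative sketch; the paper's actual dataset, network architecture, and hyperparameters are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the PID dataset: "signal" vs "background" events
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# BDT baseline vs a small fully connected network
bdt = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

bdt_acc = bdt.score(X_te, y_te)
dnn_acc = dnn.score(X_te, y_te)
print(f"BDT accuracy: {bdt_acc:.3f}, DNN accuracy: {dnn_acc:.3f}")
```

On real PID data the relative ranking depends heavily on tuning; the paper's claim is that modern deep-learning techniques tip the balance toward the network in both accuracy and training time.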



Related research

Gradient boosted decision trees (GBDTs) are widely used in machine learning, and the output of current GBDT implementations is a single variable. When there are multiple outputs, GBDT constructs multiple trees corresponding to the output variables. The correlations between variables are ignored by such a strategy, causing redundancy in the learned tree structures. In this paper, we propose a general method to learn GBDT for multiple outputs, called GBDT-MO. Each leaf of GBDT-MO constructs predictions for all variables or for a subset of automatically selected variables. This is achieved by considering the summation of objective gains over all output variables. Moreover, we extend histogram approximation to the multiple-output case to speed up the training process. Various experiments on synthetic and real-world datasets verify that GBDT-MO achieves outstanding performance in terms of both accuracy and training speed. Our code is available online.
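The per-output baseline that GBDT-MO improves on can be sketched with scikit-learn, which wraps one independent GBDT ensemble per output variable; correlations between the two outputs below are ignored, which is exactly the redundancy the paper targets. This is a hypothetical illustration on synthetic data, not the GBDT-MO implementation itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
# Two correlated targets; a separate tree ensemble is fit for each column
Y = np.column_stack([X[:, 0] + X[:, 1],
                     X[:, 0] - X[:, 1]]) + 0.1 * rng.normal(size=(500, 2))

model = MultiOutputRegressor(
    GradientBoostingRegressor(n_estimators=50, random_state=0))
model.fit(X, Y)
pred = model.predict(X[:5])   # one column per output variable
```

GBDT-MO instead grows a single tree structure whose leaves predict all (or a selected subset of) outputs jointly, scoring splits by the summed objective gain over outputs.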
Despite their success, the results of first-principles quantum mechanical calculations contain inherent numerical errors caused by various approximations. We propose here a neural-network algorithm to greatly reduce these inherent errors. As a demonstration, this combined quantum mechanical calculation and neural-network correction approach is applied to the evaluation of the standard heat of formation $\Delta H$ and standard Gibbs energy of formation $\Delta G$ for 180 organic molecules at 298 K. A dramatic reduction of numerical errors is clearly shown, with systematic deviations being eliminated. For example, the root-mean-square deviation of the calculated $\Delta H$ ($\Delta G$) for the 180 molecules is reduced from 21.4 (22.3) kcal$\cdot$mol$^{-1}$ to 3.1 (3.3) kcal$\cdot$mol$^{-1}$ for B3LYP/6-311+G(d,p) and from 12.0 (12.9) kcal$\cdot$mol$^{-1}$ to 3.3 (3.4) kcal$\cdot$mol$^{-1}$ for B3LYP/6-311+G(3df,2p) before and after the neural-network correction.
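The correction scheme above amounts to learning the residual between calculated and reference values from molecular descriptors. A minimal sketch, with entirely synthetic "calculated" and "experimental" values and a made-up systematic error, looks like this (the paper's descriptors and network are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 300
desc = rng.normal(size=(n, 3))              # hypothetical molecular descriptors
true = desc @ np.array([1.0, -2.0, 0.5])    # "experimental" reference values
calc = true + 5.0 + 2.0 * desc[:, 0]        # calculation with a systematic error

# Train a network to predict the residual (experiment - calculation)
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(desc, true - calc)
corrected = calc + net.predict(desc)

rmsd_before = np.sqrt(np.mean((calc - true) ** 2))
rmsd_after = np.sqrt(np.mean((corrected - true) ** 2))
```

Because the error is systematic rather than random, even a small network can remove most of it, which is the effect the paper reports for $\Delta H$ and $\Delta G$.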
I. Narsky (2005)
An algorithm for optimization of signal significance or any other classification figure of merit suited for analysis of high energy physics (HEP) data is described. This algorithm trains decision trees on many bootstrap replicas of training data with each tree required to optimize the signal significance or any other chosen figure of merit. New data are then classified by a simple majority vote of the built trees. The performance of this algorithm has been studied using a search for the radiative leptonic decay B->gamma l nu at BaBar and shown to be superior to that of all other attempted classifiers including such powerful methods as boosted decision trees. In the B->gamma e nu channel, the described algorithm increases the expected signal significance from 2.4 sigma obtained by an original method designed for the B->gamma l nu analysis to 3.0 sigma.
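The bootstrap-plus-majority-vote structure of the algorithm can be sketched with scikit-learn's `BaggingClassifier`, with the signal significance $s/\sqrt{s+b}$ evaluated after the vote. Note this is only a structural sketch on synthetic data: Narsky's algorithm additionally optimizes the chosen figure of merit during tree construction, which plain bagging does not.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Decision trees trained on bootstrap replicas, combined by majority vote
vote = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                         n_estimators=50, random_state=0).fit(X_tr, y_tr)
pred = vote.predict(X_te)

s = np.sum((pred == 1) & (y_te == 1))   # selected signal events
b = np.sum((pred == 1) & (y_te == 0))   # selected background events
significance = s / np.sqrt(s + b)       # figure of merit: s / sqrt(s + b)
```

In the paper, each tree is grown to directly maximize this figure of merit rather than a standard impurity criterion, which is what gives the method its edge in the B->gamma l nu search.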
We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of $\sim$0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction.
Precision photometric redshifts will be essential for extracting cosmological parameters from the next generation of wide-area imaging surveys. In this paper we introduce a photometric redshift algorithm, ArborZ, based on the machine-learning technique of Boosted Decision Trees. We study the algorithm using galaxies from the Sloan Digital Sky Survey and from mock catalogs intended to simulate both the SDSS and the upcoming Dark Energy Survey. We show that it improves upon the performance of existing algorithms. Moreover, the method naturally leads to the reconstruction of a full probability density function (PDF) for the photometric redshift of each galaxy, not merely a single best estimate and error, and also provides a photo-z quality figure-of-merit for each galaxy that can be used to reject outliers. We show that the stacked PDFs yield a more accurate reconstruction of the redshift distribution N(z). We discuss limitations of the current algorithm and ideas for future work.
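The PDF-reconstruction idea in ArborZ can be sketched by treating photo-z estimation as classification over redshift bins: the per-class probabilities form a discretized PDF for each galaxy, and stacking (summing) those PDFs estimates N(z). The sketch below uses a scikit-learn GBDT on synthetic "magnitudes"; the ArborZ boosting scheme, binning, and survey data are not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 1000
z = rng.uniform(0.0, 1.0, n)                       # true redshifts
# Hypothetical photometry correlated with redshift
mags = np.column_stack([z + 0.05 * rng.normal(size=n) for _ in range(4)])

bins = np.linspace(0.0, 1.0, 11)
labels = np.digitize(z, bins[1:-1])                # redshift bin per galaxy

clf = GradientBoostingClassifier(n_estimators=20,
                                 random_state=0).fit(mags, labels)
pdfs = clf.predict_proba(mags)                     # per-galaxy photo-z PDF over bins
n_of_z = pdfs.sum(axis=0)                          # stacked PDFs estimate N(z)
```

Each row of `pdfs` sums to one, so the stacked histogram conserves the total galaxy count while capturing per-galaxy redshift uncertainty, which is why stacking outperforms histogramming single best estimates.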
