
A practical way to regularize unfolding of sharply varying spectra with low data statistics

Published by: Andrei Gaponenko
Publication date: 2019
Research field: Physics
Paper language: English
Author: Andrei Gaponenko





Unfolding is a well-established tool in particle physics. However, a naive application of the standard regularization techniques to unfolding the momentum spectrum of protons ejected in the process of negative muon nuclear capture led to a result exhibiting unphysical artifacts. A finite data sample limited the range in which unfolding could be performed, thus introducing a cutoff. A sharply falling true distribution led to low data statistics near the cutoff, which exacerbated the regularization bias and produced an unphysical spike in the resulting spectrum. An improved approach has been developed to address these issues and is illustrated using a toy model. The approach uses the full Poisson likelihood of the data and produces a continuous, physically plausible unfolded distribution. The new technique is broadly applicable, since spectra with similar features, such as a sharply falling shape, are common.
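The paper's improved method is not spelled out in the abstract, but the general idea of unfolding driven by the full Poisson likelihood of the data can be illustrated with a generic D'Agostini (Richardson-Lucy) iteration, whose multiplicative updates increase exactly that likelihood. The toy spectrum, smearing matrix, and all numbers below are invented for illustration and are not from the paper:

```python
import numpy as np

def poisson_nll(mu, n):
    # Negative log-likelihood (up to a constant) for Poisson counts n
    # with expectations mu.
    mu = np.clip(mu, 1e-12, None)
    return np.sum(mu - n * np.log(mu))

def unfold_rl(n, R, n_iter=200):
    # D'Agostini / Richardson-Lucy multiplicative updates; each step
    # increases the Poisson likelihood of the observed counts n under
    # the smearing (response) matrix R.
    x = np.full(R.shape[1], n.sum() / R.shape[1])  # flat starting point
    eff = R.sum(axis=0)                            # per-bin efficiencies
    for _ in range(n_iter):
        mu = np.clip(R @ x, 1e-12, None)           # folded expectation
        x *= (R.T @ (n / mu)) / eff                # multiplicative update
    return x

rng = np.random.default_rng(1)
truth = 1000.0 * np.exp(-0.5 * np.arange(10))      # sharply falling spectrum
R = 0.7 * np.eye(10) + 0.15 * (np.eye(10, k=1) + np.eye(10, k=-1))  # smearing
n = rng.poisson(R @ truth).astype(float)           # observed counts
x = unfold_rl(n, R)
```

The update preserves non-negativity bin by bin, which is one reason Poisson-likelihood-based iterations behave better than plain matrix inversion on steeply falling spectra.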




Read also

77 - M. Licciardi, B. Quilain 2021
A data-driven convergence criterion for the D'Agostini (Richardson-Lucy) iterative unfolding is presented. It relies on the unregularized spectrum (infinite number of iterations), and allows a safe estimation of the bias and undercoverage induced by truncating the algorithm. In addition, situations where the response matrix is not perfectly known are also discussed, and it is shown that in most cases the unregularized spectrum is not an unbiased estimator of the true distribution. Whenever a bias is introduced, either by truncation or by poor knowledge of the response, a way to retrieve appropriate coverage properties is proposed.
76 - Luca Lista 2016
These three lectures provide an introduction to the main concepts of statistical data analysis useful for precision measurements and searches for new signals in High Energy Physics. The frequentist and Bayesian approaches to probability theory are introduced and, for both approaches, inference methods are presented. Hypothesis tests will be discussed, then significance and upper limit evaluation will be presented with an overview of the modern and most advanced techniques adopted for data analysis at the Large Hadron Collider.
77 - Stefan Schmitt 2016
A selection of unfolding methods commonly used in High Energy Physics is compared. The methods discussed here are: bin-by-bin correction factors, matrix inversion, template fit, Tikhonov regularisation and two examples of iterative methods. Two procedures to choose the strength of the regularisation are tested, namely the L-curve scan and a scan of global correlation coefficients. The advantages and disadvantages of the unfolding methods and choices of the regularisation strength are discussed using a toy example.
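Two of the methods compared in that abstract, plain matrix inversion and Tikhonov regularisation, can be contrasted in a few lines of numpy. The toy response matrix, spectrum, and regularisation strength below are invented for illustration; a curvature (second-difference) operator is a common but not universal choice for the Tikhonov penalty:

```python
import numpy as np

rng = np.random.default_rng(2)
nb = 12
truth = 500.0 * np.exp(-0.4 * np.arange(nb))        # toy falling spectrum
R = 0.6 * np.eye(nb) + 0.2 * (np.eye(nb, k=1) + np.eye(nb, k=-1))  # smearing
n = rng.poisson(R @ truth).astype(float)            # observed counts

# Plain matrix inversion: unbiased, but amplifies statistical fluctuations.
x_inv = np.linalg.solve(R, n)

# Tikhonov regularisation: minimise |R x - n|^2 + tau^2 |L x|^2, with L a
# second-difference (curvature) operator; closed form via normal equations.
L = np.diff(np.eye(nb), n=2, axis=0)                # (nb-2, nb) curvature matrix
def tikhonov(tau):
    A = R.T @ R + tau**2 * (L.T @ L)
    return np.linalg.solve(A, R.T @ n)

x_reg = tikhonov(tau=5.0)
```

At tau = 0 the Tikhonov solution reduces to matrix inversion; increasing tau trades statistical noise for regularisation bias, which is exactly what the L-curve or global-correlation scans in the abstract are used to balance.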
A method to perform unfolding with Gaussian processes (GPs) is presented. Using Bayesian regression, we define an estimator for the underlying truth distribution as the mode of the posterior. We show that in the case where the bin contents are distributed approximately according to a Gaussian, this estimator is equivalent to the mean function of a GP conditioned on the maximum likelihood estimator. Regularisation is introduced via the kernel function of the GP, which has a natural interpretation as the covariance of the underlying distribution. This novel approach allows for the regularisation to be informed by prior knowledge of the underlying distribution, and for it to be varied along the spectrum. In addition, the full statistical covariance matrix for the estimator is obtained as part of the result. The method is applied to two examples: a double-peaked bimodal distribution and a falling spectrum.
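Under the Gaussian approximation described in that abstract, the idea can be sketched roughly: condition a GP prior on the maximum likelihood (matrix-inversion) estimate and its covariance. The RBF kernel, its hyperparameters, and the toy inputs below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(3)
nb = 15
centers = np.arange(nb, dtype=float)
truth = 800.0 * np.exp(-0.3 * centers)              # toy falling spectrum
R = 0.6 * np.eye(nb) + 0.2 * (np.eye(nb, k=1) + np.eye(nb, k=-1))
n = rng.poisson(R @ truth).astype(float)

# Gaussian-approximation maximum likelihood estimate and its covariance
x_mle = np.linalg.solve(R, n)
Rinv = np.linalg.inv(R)
Sigma = Rinv @ np.diag(np.clip(n, 1.0, None)) @ Rinv.T

# RBF kernel as the assumed prior covariance of the underlying distribution
amp, ell = np.var(x_mle), 2.0
K = amp * np.exp(-0.5 * (centers[:, None] - centers[None, :]) ** 2 / ell**2)

# GP posterior mean conditioned on the (Gaussian) MLE, plus its full
# covariance matrix, which comes out as part of the result
x_gp = K @ np.linalg.solve(K + Sigma, x_mle)
cov_gp = K - K @ np.linalg.solve(K + Sigma, K)
```

Varying the kernel amplitude or length scale along the spectrum is how this framing lets the regularisation strength differ from bin to bin.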
167 - Alexander Glazov 2017
A method for correcting for detector smearing effects using machine learning techniques is presented. Compared to standard approaches, the method can use more than one reconstructed variable to infer the value of the unsmeared quantity on an event-by-event basis. The method is implemented using a sequential neural network with categorical cross-entropy as the loss function. It is tested on a toy example and is shown to satisfy basic closure tests. Possible application of the method for analysis of the data from high energy physics experiments is discussed.