
Significance of an excess in a counting experiment: assessing the impact of systematic uncertainties and the case with Gaussian background

Posted by Giacomo Vianello
Publication date: 2017
Research field: Physics
Paper language: English
Author: G. Vianello





Several experiments in high-energy physics and astrophysics can be treated as on/off measurements, where an observation potentially containing a new source or effect (on measurement) is contrasted with a background-only observation free of the effect (off measurement). In counting experiments, the significance of the new source or effect can be estimated with a widely-used formula from [LiMa], which assumes that both measurements are Poisson random variables. In this paper we study three other cases: i) the ideal case where the background measurement has no uncertainty, which can be used to study the maximum sensitivity that an instrument can achieve, ii) the case where the background estimate $b$ in the off measurement has an additional systematic uncertainty, and iii) the case where $b$ is a Gaussian random variable instead of a Poisson random variable. The latter case applies when $b$ comes from a model fitted on archival or ancillary data, or from the interpolation of a function fitted on data surrounding the candidate new source/effect. Practitioners typically use in this case a formula which is only valid when $b$ is large and when its uncertainty is very small, while we derive a general formula that can be applied in all regimes. We also develop simple methods that can be used to assess how much an estimate of significance is sensitive to systematic uncertainties on the efficiency or on the background. Examples of applications include the detection of short Gamma-Ray Bursts and of new X-ray or $\gamma$-ray sources.
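As a minimal sketch of the two formulas the abstract contrasts, the following implements the standard Li & Ma significance (their Eq. 17) for an on/off Poisson measurement, together with the approximate Gaussian-background formula that, as noted above, is only valid for large $b$ with small uncertainty. The numerical inputs in the usage line are illustrative, not taken from the paper.

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) significance for an on/off counting
    measurement, where alpha = t_on / t_off is the exposure ratio."""
    n_tot = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * math.log((1.0 + alpha) * n_off / n_tot)
    return math.sqrt(2.0 * (term_on + term_off))

def naive_gaussian_significance(n_on, b, sigma_b):
    """Approximate significance often used when the background estimate b
    carries a Gaussian uncertainty sigma_b; per the abstract, valid only
    when b is large and sigma_b is small."""
    return (n_on - b) / math.sqrt(b + sigma_b ** 2)
```

For example, `li_ma_significance(10, 5, 1.0)` gives roughly 1.30 sigma, i.e. no significant excess for 10 on-counts against 5 off-counts with equal exposures.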




Read also

In this paper, after a discussion of general properties of statistical tests, we present the construction of the most powerful hypothesis test for determining the existence of a new phenomenon in counting-type experiments where the observed Poisson process is subject to a Poisson-distributed background with unknown mean.
The measurement of muon energy is critical for many analyses in large Cherenkov detectors, particularly those that involve separating extraterrestrial neutrinos from the atmospheric neutrino background. Muon energy has traditionally been determined by measuring the specific energy loss (dE/dx) along the muon's path and relating the dE/dx to the muon energy. Because high-energy muons (E_mu > 1 TeV) lose energy randomly, the spread in dE/dx values is quite large, leading to a typical energy resolution of 0.29 in log10(E_mu) for a muon observed over a 1 km path length in the IceCube detector. In this paper, we present an improved method that uses a truncated mean and other techniques to determine the muon energy. The muon track is divided into separate segments with individual dE/dx values. The elimination of segments with the highest dE/dx results in an overall dE/dx that is more closely correlated to the muon energy. This method results in an energy resolution of 0.22 in log10(E_mu), which gives a 26% improvement. This technique is applicable to any large water or ice detector and potentially to large scintillator or liquid argon detectors.
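The truncated-mean idea described above can be sketched in a few lines: sort the per-segment dE/dx values, discard the highest fraction (dominated by stochastic losses), and average the rest. The 40% default cut and the sample values are illustrative assumptions, not the cut actually used in the paper.

```python
def truncated_mean_dedx(dedx_segments, cut_fraction=0.4):
    """Truncated mean of per-segment dE/dx values: drop the highest
    `cut_fraction` of segments and average the remainder. The cut
    fraction here is an illustrative choice, not the paper's value."""
    ordered = sorted(dedx_segments)
    n_keep = max(1, int(round(len(ordered) * (1.0 - cut_fraction))))
    kept = ordered[:n_keep]
    return sum(kept) / len(kept)
```

A single catastrophic-loss segment then no longer drags the estimate: with segments `[1.0, 2.0, 3.0, 100.0]` and a 25% cut, the truncated mean is 2.0 rather than the plain mean of 26.5.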
Recent statistical evaluations for High-Energy Physics measurements, in particular those at the Large Hadron Collider, require the careful evaluation of many sources of systematic uncertainty at the same time. While the fundamental aspects of the statistical treatment are now consolidated, in both the frequentist and the Bayesian approach, managing many sources of uncertainty and their corresponding nuisance parameters in analyses that combine multiple control regions and decay channels can, in practice, pose challenging implementation issues. These make the analysis infrastructure complex and hard to manage, eventually resulting in simplifications in the treatment of systematics and in limitations on the interpretation of results. Typical cases are discussed with the most popular implementation tool, RooStats, in mind, along with possible ideas for improving the management of such cases in future software implementations.
Usually, equal time is given to measuring the background and the sample, or an even longer background measurement is taken because it has so few counts. While this seems the right thing to do, the relative error after background subtraction improves when more time is spent counting the measurement with the highest amount of scattering. As the available measurement time is always limited, a good division must be found between measuring the background and the sample, so that the uncertainty of the background-subtracted intensity is as low as possible. Outlined herein is a method to determine how best to divide measurement time between a sample and the background in order to minimize the relative uncertainty, together with the relative reduction in uncertainty to be gained from the chosen division. It is particularly useful for scanning diffractometers, including the likes of Bonse-Hart cameras, where the measurement-time division for each point can be optimized depending on the signal-to-noise ratio.
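A hedged sketch of the optimization implied above, assuming simple Poisson counting: the variance of the background-subtracted rate N_s/t_s - N_b/t_b is R_s/t_s + R_b/t_b, and minimizing it at fixed total time gives t_s/t_b = sqrt(R_s/R_b). This is the textbook result, not necessarily the exact formulation in the paper, and the rates used below are hypothetical.

```python
import math

def optimal_time_split(rate_sample, rate_back, total_time):
    """Split total_time between sample and background measurements so that
    the Poisson variance of the subtracted rate, R_s/t_s + R_b/t_b, is
    minimal; the optimum satisfies t_s / t_b = sqrt(R_s / R_b)."""
    w_s = math.sqrt(rate_sample)
    w_b = math.sqrt(rate_back)
    t_sample = total_time * w_s / (w_s + w_b)
    return t_sample, total_time - t_sample
```

With a sample counting 100x faster than the background, the optimal split is 10:1 in favor of the sample, matching the abstract's point that the stronger scatterer deserves more time.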
K. Stenson (2006)
A method to include multiplicative systematic uncertainties into branching ratio limits was proposed by M. Convery. That solution used approximations which are not necessarily valid. This note provides a solution without approximations and compares the results.