
Binomial and ratio-of-Poisson-means frequentist confidence intervals applied to the error evaluation of cut efficiencies

Posted by: Gioacchino Ranucci
Publication date: 2009
Research field: Physics
Paper language: English





The evaluation of the error to be attributed to cut efficiencies is a common question in the practice of experimental particle physics. Specifically, the need to evaluate the efficiency of cuts for background removal, when they are tested in a signal-free, background-only energy window, gives rise to a statistical problem that finds its natural framework in the broad family of solutions to two classical and closely related questions, namely the determination of confidence intervals for the parameter of a binomial proportion and for the ratio of Poisson means. In this paper the problem is first addressed from the traditional perspective and then naturally evolved towards the introduction of non-standard confidence intervals, both for the binomial and the Poisson case; in particular, special emphasis is given to the intervals obtained by applying the likelihood ratio ordering within the traditional Neyman prescription for determining confidence limits. Owing to their attractiveness in terms of reduced length and coverage properties, the new intervals are well suited as an interesting alternative to the standard Clopper-Pearson intervals recommended by the PDG.
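As a rough illustration of the standard benchmark mentioned above, the sketch below computes the central Clopper-Pearson interval for a cut efficiency from its usual beta-quantile form. It is a minimal example of ours, not code from the paper; the likelihood-ratio-ordered Neyman intervals advocated there require the full confidence-belt construction, which is not reproduced here, and the function and variable names are illustrative.

```python
# Minimal sketch: central Clopper-Pearson interval for a cut efficiency,
# i.e. for the binomial proportion of events surviving (or removed by) the cut.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.32):
    """1-alpha central Clopper-Pearson interval when k of n events pass the cut
    (alpha=0.32 gives the usual 68% interval quoted for efficiencies)."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Example: 85 of 100 background events are rejected by the cut
# when it is tested in a signal-free, background-only window.
print(clopper_pearson(85, 100))
```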




Read also

A.V. Lokhov, F.V. Tkachov (2014)
We review the methods of constructing confidence intervals that account for a priori information about one-sided constraints on the parameter being estimated. We show that the so-called method of sensitivity limit yields a correct solution of the problem. Solutions are derived for the cases of a continuous distribution with a non-negative estimated parameter and of a discrete distribution, specifically a Poisson process with background. For both cases, the best upper limit is constructed that accounts for the a priori information. A table is provided with the confidence intervals for the parameter of a Poisson distribution that correctly account for the information on the known value of the background, along with software for calculating the confidence intervals for any confidence levels and magnitudes of the background (the software is freely available for download via the Internet).
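For orientation, the sketch below computes the textbook classical (Neyman) upper limit for a Poisson signal with a known background, the baseline construction that a-priori-constraint methods such as the one reviewed above aim to improve when the observed count falls below the expected background. It is an illustration of ours, not the authors' sensitivity-limit method, and the function names are invented.

```python
# Minimal sketch: classical 1-alpha upper limit on a Poisson signal mean mu
# with known background b, given n_obs observed counts. When n_obs is well
# below b this classical limit can become unphysical (negative), which is
# exactly the regime that methods using the one-sided constraint address.
from scipy.stats import poisson
from scipy.optimize import brentq

def classical_upper_limit(n_obs, b, alpha=0.10):
    """Solve P(N <= n_obs | mu + b) = alpha for the signal mean mu."""
    f = lambda mu: poisson.cdf(n_obs, mu + b) - alpha
    if f(0.0) < 0:
        return 0.0  # the classical solution would be negative; clip at zero
    return brentq(f, 0.0, 20.0 * (n_obs + 1) + 10.0 * b)

print(classical_upper_limit(n_obs=3, b=1.0, alpha=0.10))
```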
Let $b(x)$ be the probability that a sum of independent Bernoulli random variables with parameters $p_1, p_2, p_3, \ldots \in [0,1)$ equals $x$, where $\lambda := p_1 + p_2 + p_3 + \cdots$ is finite. We prove two inequalities for the maximal ratio $b(x)/\pi_\lambda(x)$, where $\pi_\lambda$ is the weight function of the Poisson distribution with parameter $\lambda$.
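A quick numerical check of the quantity involved (our illustration, not the paper's proof): the pmf of a Bernoulli sum can be built by convolution and compared bin by bin with the Poisson pmf of the same mean.

```python
# Minimal sketch: pmf b(x) of a sum of independent Bernoulli variables,
# obtained by convolving the individual pmfs [1-p_i, p_i], compared with
# the Poisson pmf pi_lambda(x) for lambda = sum(p_i).
import numpy as np
from scipy.stats import poisson

p = np.array([0.1, 0.3, 0.05, 0.2])   # illustrative Bernoulli parameters
lam = p.sum()

b = np.array([1.0])
for pi in p:
    b = np.convolve(b, [1.0 - pi, pi])

x = np.arange(len(b))
ratio = b / poisson.pmf(x, lam)
print("max b(x)/pi_lambda(x) =", ratio.max(), "at x =", x[ratio.argmax()])
```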
B. P. Datta (2015)
The suitability of a mathematical model $Y = f(\{X_i\})$ for serving a given purpose (which should be preset by the function-$f$-specific input-to-output variation rates) can be judged beforehand. We thus evaluate here two apparently similar models: $Y_A = f_A(S_{Ri}, W_{Ri}) = (S_{Ri}/W_{Ri})$ and $Y_D = f_D(S_{Ri}, W_{Ri}) = ([S_{Ri}/W_{Ri}] - 1) = (Y_A - 1)$, with $S_{Ri}$ and $W_{Ri}$ representing certain measurable variables (e.g. the sample-$S$ and working-lab-reference-$W$ specific $i$th isotopic abundance ratios, respectively, as in isotope ratio mass spectrometry (IRMS)). The idea is to ascertain whether $f_D$ should represent a better model than $f_A$, specifically for the well-known IRMS evaluation. The study clarifies that $f_A$ and $f_D$ really represent different model families. For example, the possible variation $e_A$ of an absolute estimate $y_A$ (and/or the risk of running a machine on the basis of the measurement model $f_A$) is dictated by the possible $R_i$-measurement variations ($u_S$ and $u_W$) only: $e_A = (u_S + u_W)$, i.e. at worst $e_A = 2u_i$. However, the variation $e_D$ of the corresponding differential estimate $y_D$ is largely decided by the $S_{Ri}$ and $W_{Ri}$ values: $e_D = 2(|m_i| \times u_i) = (|m_i| \times e_A)$, with $m_i = (S_{Ri}/[S_{Ri} - W_{Ri}])$. Thus, any IRMS measurement (for which $|S_{Ri} - W_{Ri}|$ being nearly zero is a requirement) implies that $|m_i|$ tends to infinity. Clearly, $y_D$ should be less accurate than $y_A$, and may even turn out to be highly erroneous ($e_D$ tends to infinity). Nevertheless, the evaluation of the absolute $y_A$, and hence of the sample isotopic ratio $S_{Ri}$, is shown to be equivalent to our previously reported finding that the conversion of a $D$-estimate (here $y_D$) into $S_{Ri}$ should help improve the achievable output accuracy and comparability.
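A small worked example (with our own illustrative names) of the error-magnification point made above: when the sample and reference ratios nearly coincide, the factor $|m_i|$ blows up and the differential estimate becomes far less accurate than the absolute one.

```python
# Minimal sketch of the abstract's error relations:
#   e_A = 2 u_i               (worst-case error of the absolute ratio Y_A)
#   e_D = |m_i| * e_A,  m_i = S_Ri / (S_Ri - W_Ri)   (error of Y_D = Y_A - 1)
def error_magnification(S_Ri, W_Ri, u_i):
    e_A = 2 * u_i
    m_i = S_Ri / (S_Ri - W_Ri)
    e_D = abs(m_i) * e_A
    return e_A, e_D

# Nearly coincident ratios, as typical in IRMS: e_D >> e_A.
print(error_magnification(S_Ri=1.001, W_Ri=1.000, u_i=1e-4))
```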
For estimating a lower-bounded location or mean parameter of a symmetric and log-concave density, we investigate the frequentist performance of the $100(1-\alpha)\%$ Bayesian HPD credible set associated with priors which are truncations of flat priors onto the restricted parameter space. Various new properties are obtained. Namely, we identify precisely where the minimum coverage is obtained and we show that this minimum coverage is bounded between $1-\frac{3\alpha}{2}$ and $1-\frac{3\alpha}{2}+\frac{\alpha^2}{1+\alpha}$; with the lower bound $1-\frac{3\alpha}{2}$ improving (for $\alpha \leq 1/3$) on the previously established ([9]; [8]) lower bound $\frac{1-\alpha}{1+\alpha}$. Several illustrative examples are given.
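As a concrete instance of the credible set discussed above, the sketch below builds the HPD interval for a nonnegative normal mean under a flat prior truncated to $[0,\infty)$: the posterior is then a truncated normal, and for a unimodal posterior the HPD set coincides with the shortest interval carrying $1-\alpha$ posterior mass. This is our own minimal construction, not the paper's code, and the names are illustrative.

```python
# Minimal sketch: 1-alpha HPD credible interval for theta >= 0 when
# x ~ N(theta, sigma^2) and the prior is flat on [0, inf). The posterior is
# N(x, sigma^2) truncated to [0, inf); the HPD set is found as the shortest
# interval of posterior mass 1 - alpha (valid because the posterior is unimodal).
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize_scalar

def hpd_interval(x, sigma=1.0, alpha=0.05):
    post = truncnorm(a=(0.0 - x) / sigma, b=np.inf, loc=x, scale=sigma)
    length = lambda t: post.ppf(t + 1 - alpha) - post.ppf(t)
    res = minimize_scalar(length, bounds=(0.0, alpha), method="bounded")
    t = res.x
    return post.ppf(t), post.ppf(t + 1 - alpha)

print(hpd_interval(x=-0.5))   # observation below the bound: interval hugs 0
print(hpd_interval(x=2.0))
```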
Consider a linear regression model with an $n$-dimensional response vector, regression parameter $\beta = (\beta_1, \ldots, \beta_p)$ and independent and identically $N(0, \sigma^2)$ distributed errors. Suppose that the parameter of interest is $\theta = a^T\beta$ where $a$ is a specified vector. Define the parameter $\tau = c^T\beta - t$ where $c$ and $t$ are specified. Also suppose that we have uncertain prior information that $\tau = 0$. Part of our evaluation of a frequentist confidence interval for $\theta$ is the ratio (expected length of this confidence interval)/(expected length of the standard $1-\alpha$ confidence interval), which we call the scaled expected length of this interval. We say that a $1-\alpha$ confidence interval for $\theta$ utilizes this uncertain prior information if (a) the scaled expected length of this interval is significantly less than 1 when $\tau = 0$, (b) the maximum value of the scaled expected length is not too much larger than 1, and (c) this confidence interval reverts to the standard $1-\alpha$ confidence interval when the data happen to strongly contradict the prior information. Kabaila and Giri (2009, JSPI) present a new method for finding such a confidence interval. Let $\hat\beta$ denote the least squares estimator of $\beta$. Also let $\hat\theta = a^T\hat\beta$ and $\hat\tau = c^T\hat\beta - t$. Using computations and new theoretical results, we show that the performance of this confidence interval improves as $|\mathrm{Corr}(\hat\theta, \hat\tau)|$ increases and $n-p$ decreases.
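The last sentence refers to two readily computable design quantities; the sketch below (with invented names, not Kabaila and Giri's code) evaluates $\mathrm{Corr}(\hat\theta, \hat\tau)$ and the residual degrees of freedom $n-p$ for a toy design matrix, using the fact that $\mathrm{Cov}(\hat\beta) = \sigma^2 (X^TX)^{-1}$, so the constant $t$ and the unknown $\sigma^2$ drop out of the correlation.

```python
# Minimal sketch: correlation between hat(theta) = a^T hat(beta) and
# hat(tau) = c^T hat(beta) - t under ordinary least squares, plus n - p.
import numpy as np

def corr_and_dof(X, a, c):
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)   # Cov(hat(beta)) up to the factor sigma^2
    cov = a @ XtX_inv @ c
    corr = cov / np.sqrt((a @ XtX_inv @ a) * (c @ XtX_inv @ c))
    return corr, n - p

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # toy design matrix
a = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 1.0])
print(corr_and_dof(X, a, c))
```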