
Confidence intervals with a priori parameter bounds

Posted by Alexey Lokhov
Publication date: 2014
Research field: Physics
Paper language: English





We review methods of constructing confidence intervals that account for a priori information about one-sided constraints on the parameter being estimated. We show that the so-called sensitivity-limit method yields a correct solution to the problem. Solutions are derived for two cases: a continuous distribution with a non-negative estimated parameter, and a discrete distribution, specifically a Poisson process with background. For both cases, the best upper limit accounting for the a priori information is constructed. A table of confidence intervals for the parameter of the Poisson distribution that correctly account for the known value of the background is provided, along with software for calculating the confidence intervals for any confidence level and background magnitude (the software is freely available for download via the Internet).
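As an illustration of the kind of calculation such software performs, the following is a minimal Python sketch of the textbook classical (Neyman) upper limit for a non-negative Poisson signal with a known background. It is not the authors' sensitivity-limit construction; the function name and default values are hypothetical.

```python
from scipy.stats import poisson
from scipy.optimize import brentq

def poisson_upper_limit(n_obs, background, cl=0.90, mu_max=1e4):
    """Classical upper limit on a non-negative Poisson signal mu with a
    known background b: the smallest mu_up with
    P(N <= n_obs | mu_up + b) <= 1 - cl.  The physical bound mu >= 0 is
    enforced by returning 0 when the constraint is already satisfied."""
    def excess(mu):
        return poisson.cdf(n_obs, mu + background) - (1.0 - cl)
    if excess(0.0) <= 0.0:      # even mu = 0 is already excluded at this CL
        return 0.0
    return brentq(excess, 0.0, mu_max)

# e.g. 3 events observed over an expected background of 2.0 at 90% CL
print(poisson_upper_limit(n_obs=3, background=2.0, cl=0.90))
```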




Read also

Gioacchino Ranucci (2009)
The evaluation of the error to be attributed to cut efficiencies is a common question in the practice of experimental particle physics. Specifically, the need to evaluate the efficiency of cuts for background removal, when they are tested in a signal-free, background-only energy window, gives rise to a statistical problem that finds its natural framework in the ample family of solutions to two classical, and closely related, questions: the determination of confidence intervals for the parameter of a binomial proportion and for the ratio of Poisson means. In this paper the problem is first addressed from the traditional perspective, and then naturally evolved towards the introduction of non-standard confidence intervals for both the binomial and Poisson cases; in particular, special emphasis is given to the intervals obtained by applying the likelihood ratio ordering to the traditional Neyman prescription for determining confidence limits. Owing to their attractiveness in terms of reduced length and coverage properties, the new intervals are well suited as an interesting alternative to the standard Clopper-Pearson PDG intervals.
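For orientation, the standard Clopper-Pearson construction mentioned above can be written in a few lines. This sketch (hypothetical helper name, SciPy beta-distribution inversion) shows the exact binomial interval against which the likelihood-ratio-ordered intervals are compared, not those new intervals themselves.

```python
from scipy.stats import beta

def clopper_pearson(k, n, cl=0.68):
    """Exact (Clopper-Pearson) confidence interval for a binomial
    proportion, e.g. a cut efficiency estimated from k events passing
    out of n.  Endpoints follow from inverting the beta distribution."""
    alpha = 1.0 - cl
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# e.g. 47 of 50 events pass the cut; 90% CL interval for the efficiency
print(clopper_pearson(k=47, n=50, cl=0.90))
```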
In this paper, we consider a surrogate modeling approach using a data-driven nonparametric likelihood function constructed on a manifold on which the data lie (or to which they are close). The proposed method represents the likelihood function using a spectral expansion formulation known as the kernel embedding of the conditional distribution. To respect the geometry of the data, we employ this spectral expansion using a set of data-driven basis functions obtained from the diffusion maps algorithm. The theoretical error estimate suggests that the error bound of the approximate data-driven likelihood function is independent of the variance of the basis functions, which allows us to determine the amount of training data needed for accurate likelihood function estimation. Supporting numerical results demonstrating the robustness of the data-driven likelihood functions for parameter estimation are given on instructive examples involving stochastic and deterministic differential equations. When the dimension of the data manifold is strictly less than the dimension of the ambient space, we find that the proposed approach (which does not require knowledge of the data manifold) is superior to likelihood functions constructed using standard parametric basis functions defined on the ambient coordinates. In an example where the data manifold is not smooth and is unknown, the proposed method is more robust than an existing polynomial chaos surrogate model that assumes a parametric likelihood, the non-intrusive spectral projection.
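A minimal sketch of how diffusion-maps basis functions can be computed from a point cloud is given below; the function name, bandwidth handling, and normalization choices are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def diffusion_basis(X, epsilon, n_basis):
    """Minimal diffusion-maps sketch: data-driven basis functions on a
    point cloud X of shape (N, d).  Gaussian kernel, density
    normalization (alpha = 1), eigenvectors of the Markov matrix."""
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # squared distances
    K = np.exp(-D2 / epsilon)
    q = K.sum(axis=1)
    K_hat = K / np.outer(q, q)                     # remove sampling-density bias
    P = K_hat / K_hat.sum(axis=1, keepdims=True)   # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector and keep the next n_basis modes
    return vecs.real[:, order[1:n_basis + 1]]

# usage: basis functions evaluated on a noisy circle embedded in the plane
theta = np.linspace(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.default_rng(1).normal(size=(200, 2))
print(diffusion_basis(X, epsilon=0.1, n_basis=5).shape)   # (200, 5)
```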
Christoph Dalitz (2018)
Introductory texts on statistics typically cover only the classical two-sigma confidence interval for the mean value and do not describe methods for obtaining confidence intervals for other estimators. The present technical report fills this gap by first defining different methods for the construction of confidence intervals, and then applying them to a binomial proportion, the mean value, and arbitrary estimators. Besides the frequentist approach, the likelihood ratio and the highest posterior density approaches are explained. Two methods for estimating the variance of general maximum likelihood estimators are described (Hessian, Jackknife), and for arbitrary estimators the bootstrap is suggested. For three examples, the different methods are evaluated by means of Monte Carlo simulations with respect to their coverage probability and interval length. R code is given for all methods, and the practitioner obtains a guideline for which method should be used in which cases.
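The report itself provides R code; purely as a rough illustration of one of the listed methods, here is a hedged Python sketch of a percentile bootstrap interval for an arbitrary estimator (function name and defaults are hypothetical).

```python
import numpy as np

def bootstrap_ci(data, estimator, cl=0.95, n_boot=5000, seed=None):
    """Percentile bootstrap confidence interval for an arbitrary
    estimator (e.g. the median): resample with replacement, recompute
    the estimator, and take the central quantiles of the replicates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([estimator(rng.choice(data, size=n, replace=True))
                      for _ in range(n_boot)])
    alpha = 1.0 - cl
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# usage: 95% interval for the median of a skewed sample
x = np.random.default_rng(0).exponential(size=40)
print(bootstrap_ci(x, np.median, cl=0.95, seed=0))
```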
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies. OPE is therefore a key step in applying reinforcement learning to real-world domains such as medical treatment, where interactive data collection is expensive or even unsafe. As the observed data tend to be noisy and limited, it is essential to provide rigorous uncertainty quantification, not just a point estimate, when applying OPE to make high-stakes decisions. This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation, which remains a challenging open question. We develop a practical algorithm through a primal-dual optimization-based approach, which leverages the kernel Bellman loss (KBL) of Feng et al. (2019) and a new martingale concentration inequality for the KBL applicable to time-dependent data with unknown mixing conditions. Our algorithm makes minimal assumptions on the data and on the function class of the Q-function, and works in behavior-agnostic settings where the data are collected under a mix of arbitrary unknown behavior policies. We present empirical results that clearly demonstrate the advantages of our approach over existing methods.
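As a rough illustration only, the sketch below writes a kernel-weighted quadratic form of empirical Bellman residuals in the spirit of a kernel Bellman loss; the exact estimator of Feng et al. (2019) and the paper's primal-dual interval construction are not reproduced, and all names here are hypothetical.

```python
import numpy as np

def kernel_bellman_loss(q, transitions, gamma, kernel):
    """Illustrative kernel Bellman loss: quadratic form of empirical
    Bellman residuals under a positive-definite kernel over
    (state, action) pairs.  The published estimator may differ in detail."""
    # transitions: list of (s, a, r, s_next, a_next) tuples
    eps = np.array([r + gamma * q(sn, an) - q(s, a)
                    for (s, a, r, sn, an) in transitions])
    K = np.array([[kernel((s1, a1), (s2, a2))
                   for (s2, a2, *_r2) in transitions]
                  for (s1, a1, *_r1) in transitions])
    n = len(eps)
    return float(eps @ K @ eps) / (n * n)

# toy usage with scalar states, binary actions, and an RBF kernel
rng = np.random.default_rng(0)
data = [(rng.normal(), int(rng.integers(2)), rng.normal(),
         rng.normal(), int(rng.integers(2))) for _ in range(20)]
rbf = lambda x, y: np.exp(-((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2))
q_hat = lambda s, a: 0.5 * s + 0.1 * a
print(kernel_bellman_loss(q_hat, data, gamma=0.9, kernel=rbf))
```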
In 2011, a discrepancy between the values of the Planck constant measured by counting Si atoms and by comparing mechanical and electrical powers prompted a review, among others, of the measurement of the spacing of $^{28}$Si {220} lattice planes, either to confirm the measured value and its uncertainty or to identify errors. This exercise confirmed the result of the previous measurement and yields the additional value $d_{220} = 192014711.98(34)$ am with a reduced uncertainty.