
The bias of the unbiased estimator: a study of the iterative application of the BLUE method

Added by Luca Lista
Publication date: 2014
Field: Physics
Language: English
Authors: Luca Lista





The best linear unbiased estimator (BLUE) is a popular statistical method used to combine multiple measurements of the same observable, taking into account the individual uncertainties and their correlation. The method is unbiased by construction if the true uncertainties and their correlation are known, but it may exhibit a bias if uncertainty estimates are used in place of the true ones, in particular if those estimated uncertainties depend on the measured values. This is the case, for instance, when contributions to the total uncertainty are known as relative uncertainties. In those cases, an iterative application of the BLUE method may reduce the bias of the combined measurement. The impact of the iterative approach, compared to the standard BLUE application, is studied for a wide range of possible values of the uncertainties and their correlation in the case of the combination of two measurements.
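
As a concrete illustration, the sketch below implements the textbook two-measurement BLUE formula and the iterative variant outlined in the abstract for the case in which the uncertainties are given as relative uncertainties. The function names, the convergence criterion, and the numbers in the usage note are illustrative assumptions, not taken from the paper.

    from math import sqrt

    def blue_two(x1, x2, s1, s2, rho):
        """BLUE combination of two measurements x1, x2 with absolute
        uncertainties s1, s2 and correlation coefficient rho."""
        denom = s1**2 + s2**2 - 2.0 * rho * s1 * s2
        w1 = (s2**2 - rho * s1 * s2) / denom
        w2 = 1.0 - w1
        xc = w1 * x1 + w2 * x2
        var = w1**2 * s1**2 + w2**2 * s2**2 + 2.0 * w1 * w2 * rho * s1 * s2
        return xc, sqrt(var)

    def iterative_blue(x1, x2, r1, r2, rho, n_iter=50, tol=1e-12):
        """Iterative BLUE for *relative* uncertainties r1, r2: the absolute
        uncertainties are re-evaluated at each step from the current
        combined value instead of the individual measured values."""
        xc, err = blue_two(x1, x2, r1 * x1, r2 * x2, rho)  # standard BLUE as a first pass
        for _ in range(n_iter):
            xc_new, err = blue_two(x1, x2, r1 * xc, r2 * xc, rho)
            if abs(xc_new - xc) < tol:
                return xc_new, err
            xc = xc_new
        return xc, err

For example, iterative_blue(10.0, 11.0, 0.10, 0.08, 0.5) combines x1 = 10.0 with a 10% relative uncertainty and x2 = 11.0 with an 8% relative uncertainty, assuming a correlation of 0.5; the standard (non-iterative) BLUE would instead evaluate the absolute uncertainties once, from the individual measured values.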



Related research

Luca Lista (2016)
The most accurate method to combine measurements from different experiments is to build a combined likelihood function and use it to perform the desired inference. This is not always possible, for various reasons, so approximate methods are often convenient. Among those, the best linear unbiased estimator (BLUE) is the most popular, as it allows one to take into account individual uncertainties and their correlations. The method is unbiased by construction if the true uncertainties and their correlations are known, but it may exhibit a bias if uncertainty estimates are used in place of the true ones, in particular if those estimated uncertainties depend on the measured values. In those cases, an iterative application of the BLUE method may reduce the bias of the combined measurement.
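
A minimal sketch of the combined-likelihood alternative mentioned above, under the simplifying assumption of Gaussian measurements with a fixed, known covariance matrix; in that special case the likelihood maximum coincides with the BLUE result, while in general the combined likelihood can encode effects that BLUE cannot. The measurement values, covariance entries, and function name are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def combined_nll(mu, x, cov):
        """Gaussian negative log-likelihood (up to a constant) of a common
        true value mu for a vector of measurements x with covariance cov."""
        r = x - mu
        return 0.5 * r @ np.linalg.solve(cov, r)

    # Illustrative numbers: two measurements with absolute uncertainties
    # 1.0 and 0.9 and a correlation coefficient of 0.5.
    x = np.array([10.0, 11.0])
    cov = np.array([[1.0**2, 0.5 * 1.0 * 0.9],
                    [0.5 * 1.0 * 0.9, 0.9**2]])
    fit = minimize_scalar(combined_nll, args=(x, cov))
    print(fit.x)  # coincides with the BLUE combination in this Gaussian case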
A new data analysis method is developed for the angle-resolving silicon telescope introduced at the neutron time-of-flight facility n_TOF at CERN. The telescope has already been used in measurements of several neutron-induced reactions with charged particles in the exit channel. The development of a highly detailed method is necessitated by the latest joint measurement of the $^{12}$C($n,p$) and $^{12}$C($n,d$) reactions from n_TOF. A reliable analysis of these data must account for the challenging nature of the reactions involved, which are affected by multiple excited states in the daughter nuclei and characterized by anisotropic angular distributions of the reaction products. The full analysis procedure aims at the separate reconstruction of all relevant reaction parameters - the absolute cross section, the branching ratios and the angular distributions - from the integral number of coincidental counts detected by the separate pairs of silicon strips. This procedure is tested under the specific conditions relevant for the $^{12}$C($n,p$) and $^{12}$C($n,d$) measurements from n_TOF, in order to assess its direct applicability to these experimental data. Based on the conclusions reached, the original method is adapted to the particular level of uncertainties in the input data.
A general method is proposed which allows one to estimate the drift and diffusion coefficients of a stochastic process governed by a Langevin equation. It extends a previously devised approach [R. Friedrich et al., Physics Letters A 271, 217 (2000)], which requires sufficiently high sampling rates. The analysis is based on an iterative procedure that minimizes the Kullback-Leibler distance between the measured and estimated two-time joint probability distributions of the process.
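
For context, the sketch below shows the direct conditional-moment (Kramers-Moyal) estimator of Friedrich et al. that the abstract refers to, which is reliable only at sufficiently high sampling rates; the iterative Kullback-Leibler minimization that extends it is not reproduced here, and the function name, binning scheme, and bin-population cut are assumptions.

    import numpy as np

    def drift_diffusion_estimate(x, dt, n_bins=50, min_counts=10):
        """Direct (Kramers-Moyal) estimate of the drift D1(x) and diffusion
        D2(x) coefficients from a sampled trajectory x with time step dt,
        using binned conditional moments of the increments."""
        dx = np.diff(x)
        edges = np.linspace(x.min(), x.max(), n_bins + 1)
        idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, n_bins - 1)
        centers, drift, diffusion = [], [], []
        for b in range(n_bins):
            sel = idx == b
            if sel.sum() < min_counts:  # skip poorly populated bins
                continue
            centers.append(0.5 * (edges[b] + edges[b + 1]))
            drift.append(dx[sel].mean() / dt)                    # D1 = <dx | x> / dt
            diffusion.append((dx[sel] ** 2).mean() / (2.0 * dt))  # D2 = <dx^2 | x> / (2 dt)
        return np.array(centers), np.array(drift), np.array(diffusion)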
We present an unbiased and robust analysis method for power-law blinking statistics in the photoluminescence of single nano-emitters, allowing us to extract both the bright- and dark-state power-law exponents from the emitters' intensity autocorrelation functions. In contrast to the widely used threshold method, our technique does not require discriminating the emission levels of the bright and dark states in the experimental intensity timetraces. We rely on the simultaneous recording of 450 emission timetraces of single CdSe/CdS core/shell quantum dots at a frame rate of 250 Hz with single-photon sensitivity. Under these conditions, our approach can determine the ON and OFF power-law exponents with a precision of 3% from a comparison to numerical simulations, even for shot-noise-dominated emission signals with an average intensity below 1 photon per frame and per quantum dot. These capabilities pave the way for the unbiased, threshold-free determination of blinking power-law exponents at the microsecond timescale.
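
As an illustration of the starting point of such an analysis, the sketch below computes the normalized intensity autocorrelation of a binned photon-count timetrace; extracting the ON and OFF power-law exponents then proceeds, as described above, by comparison with numerical simulations, which is not reproduced here. The function name and normalization convention are assumptions.

    import numpy as np

    def intensity_autocorrelation(counts, max_lag):
        """Normalized intensity autocorrelation g(tau) = <I(t) I(t+tau)> / <I>^2
        of a binned photon-count timetrace (tau in units of the frame length)."""
        counts = np.asarray(counts, dtype=float)
        mean_sq = counts.mean() ** 2
        g = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            g[lag - 1] = np.mean(counts[:-lag] * counts[lag:]) / mean_sq
        return g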
We examine the problem of construction of confidence intervals within the basic single-parameter, single-iteration variation of the method of quasi-optimal weights. Two kinds of distortions of such intervals due to insufficiently large samples are examined, both allowing an analytical investigation. First, a criterion is developed for validity of the assumption of asymptotic normality together with a recipe for the corresponding corrections. Second, a method is derived to take into account the systematic shift of the confidence interval due to the non-linearity of the theoretical mean of the weight as a function of the parameter to be estimated. A numerical example illustrates the two corrections.