
The optimal division between sample and background measurement time for photon counting experiments

Posted by: Brian Pauw
Publication date: 2012
Research field: Physics
Paper language: English





Usually, equal time is given to measuring the background and the sample, or the background is even measured for longer because it yields so few counts. While this seems the right thing to do, the relative error after background subtraction actually improves when more time is spent counting the measurement with the higher amount of scattering. As the available measurement time is always limited, a good division must be found between measuring the background and the sample, so that the uncertainty of the background-subtracted intensity is as low as possible. Outlined here is a method to determine how best to divide the measurement time between the sample and the background in order to minimize the relative uncertainty. Also given is the relative reduction in uncertainty to be gained from the chosen division. The method is particularly useful for scanning diffractometers, including the likes of Bonse-Hart cameras, where the measurement time division for each point can be optimized depending on the signal-to-noise ratio.
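The underlying argument can be sketched with basic Poisson counting statistics: with count rates r_s (sample, background included) and r_b (background alone) measured for times t_s and t_b, the variance of the background-subtracted rate is r_s/t_s + r_b/t_b, which for a fixed total time t_s + t_b is minimized when t_s/t_b = sqrt(r_s/r_b). The sketch below illustrates this standard result; the function names and example rates are illustrative and not taken from the paper.

```python
import numpy as np

def optimal_time_split(rate_sample, rate_background, total_time):
    """Split a fixed total measurement time between the sample run
    (sample + background counts) and the background-only run so that
    the variance of the background-subtracted rate is minimal.

    For Poisson counting, var = r_s/t_s + r_b/t_b with t_s + t_b fixed,
    which is minimized when t_s / t_b = sqrt(r_s / r_b).
    """
    ratio = np.sqrt(rate_sample / rate_background)
    t_background = total_time / (1.0 + ratio)
    t_sample = total_time - t_background
    return t_sample, t_background

def relative_uncertainty(rate_sample, rate_background, t_sample, t_background):
    """Relative (1-sigma) uncertainty of the background-subtracted rate."""
    net = rate_sample - rate_background
    var = rate_sample / t_sample + rate_background / t_background
    return np.sqrt(var) / net

# Example: count rates estimated from a short pre-measurement (invented numbers)
r_s, r_b, T = 100.0, 10.0, 600.0   # counts/s, counts/s, total seconds available
t_s, t_b = optimal_time_split(r_s, r_b, T)
print(f"optimal split: sample {t_s:.0f} s, background {t_b:.0f} s")
print("relative uncertainty, equal split  :", relative_uncertainty(r_s, r_b, T / 2, T / 2))
print("relative uncertainty, optimal split:", relative_uncertainty(r_s, r_b, t_s, t_b))
```

With these numbers the optimal split spends roughly three quarters of the time on the sample and reduces the relative uncertainty compared with an equal split, which is the kind of gain the abstract quantifies.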




See also

In this paper, after a discussion of general properties of statistical tests, we present the construction of the most powerful hypothesis test for determining the existence of a new phenomenon in counting-type experiments where the observed Poisson process is subject to a Poisson distributed background with unknown mean.
The projected discovery and exclusion capabilities of particle physics and astrophysics/cosmology experiments are often quantified using the median expected $p$-value or its corresponding significance. We argue that this criterion leads to flawed results, which for example can counterintuitively project lessened sensitivities if the experiment takes more data or reduces its background. We discuss the merits of several alternatives to the median expected significance, both when the background is known and when it is subject to some uncertainty. We advocate for standard use of the exact Asimov significance $Z^{\rm A}$ detailed in this paper.
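For the known-background counting case, the Asimov significance has the standard closed form $Z^{\rm A} = \sqrt{2\,[(s+b)\ln(1+s/b) - s]}$; the variant that folds in a background uncertainty, which the paper details, is not reproduced here. A minimal sketch with illustrative numbers:

```python
import math

def asimov_significance(s, b):
    """Asimov (median expected) discovery significance for a counting
    experiment with expected signal s on a known background b:
        Z_A = sqrt(2 * ((s + b) * ln(1 + s/b) - s))
    For s << b this reduces to the familiar s / sqrt(b).
    """
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

print(asimov_significance(10.0, 100.0))  # ~0.98, close to s/sqrt(b) = 1.0
print(asimov_significance(50.0, 10.0))   # ~10.7, well below the naive s/sqrt(b) ~ 15.8
```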
Patrick J. Sutton (2009)
In counting experiments, one can set an upper limit on the rate of a Poisson process based on a count of the number of events observed due to the process. In some experiments, one makes several counts of the number of events, using different instruments, different event detection algorithms, or observations over multiple time intervals. We demonstrate how to generalize the classical frequentist upper limit calculation to the case where multiple counts of events are made over one or more time intervals using several (not necessarily independent) procedures. We show how different choices of the rank ordering of possible outcomes in the space of counts correspond to applying different levels of significance to the various measurements. We propose an ordering that is matched to the sensitivity of the different measurement procedures and show that in typical cases it gives stronger upper limits than other choices. As an example, we show how this method can be applied to searches for gravitational-wave bursts, where multiple burst-detection algorithms analyse the same data set, and demonstrate how a single combined upper limit can be set on the gravitational-wave burst rate.
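For context, the single-count, background-free case that this work generalizes has a standard classical solution: the upper limit $\mu_{\rm UL}$ at confidence level CL satisfies $P(k \le n_{\rm obs} \mid \mu_{\rm UL}) = 1 - {\rm CL}$, which can be evaluated via a chi-squared quantile. A minimal sketch of that baseline (the multi-measurement ordering proposed in the paper is not reproduced here):

```python
from scipy.stats import chi2, poisson

def poisson_upper_limit(n_observed, cl=0.90):
    """Classical frequentist upper limit on the Poisson mean given a
    single observed count and no background, using the standard relation
        mu_UL = chi2.ppf(cl, 2 * (n + 1)) / 2,
    which is equivalent to solving P(k <= n | mu_UL) = 1 - cl.
    """
    return chi2.ppf(cl, 2 * (n_observed + 1)) / 2.0

n, cl = 3, 0.90
mu_ul = poisson_upper_limit(n, cl)
print(f"90% upper limit for {n} observed events: {mu_ul:.2f}")
# Cross-check: the probability of seeing <= n events at mu_UL should be 1 - cl
print(poisson.cdf(n, mu_ul))  # ~0.10
```

Dividing the resulting mean by the observation time turns the limit on expected counts into a limit on the process rate.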
Least-squares fits are an important tool in many data analysis applications. In this paper, we review theoretical results, which are relevant for their application to data from counting experiments. Using a simple example, we illustrate the well known fact that commonly used variants of the least-squares fit applied to Poisson-distributed data produce biased estimates. The bias can be overcome with an iterated weighted least-squares method, which produces results identical to the maximum-likelihood method. For linear models, the iterated weighted least-squares method converges faster than the equivalent maximum-likelihood method, and does not require problem-specific starting values, which may be a practical advantage. The equivalence of both methods also holds for binomially distributed data. We further show that the unbinned maximum-likelihood method can be derived as a limiting case of the iterated least-squares fit when the bin width goes to zero, which demonstrates a deep connection between the two methods.
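The iteration referred to here can be sketched for a simple linear model fitted to Poisson counts: refit with weights 1/mu taken from the previous iteration's prediction until the coefficients stop changing. This is a generic illustration of iterated weighted least squares, not the authors' code; the model and data below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy counting data: Poisson counts drawn from a linear "truth" model
x = np.linspace(0.0, 1.0, 20)
mu_true = 5.0 + 20.0 * x
n = rng.poisson(mu_true)

def wls(x, n, weights):
    """Weighted least-squares fit of n ~ a + b*x with the given weights."""
    A = np.column_stack([np.ones_like(x), x])
    W = np.diag(weights)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ n)

# Neyman chi-square: weights 1/n, known to give biased estimates for Poisson data
coef = wls(x, n, 1.0 / np.clip(n, 1, None))

# Iterated weighted least squares: recompute weights 1/mu from the previous
# fit until convergence; for this linear model the fixed point coincides
# with the maximum-likelihood estimate.
for _ in range(20):
    mu = np.column_stack([np.ones_like(x), x]) @ coef
    new_coef = wls(x, n, 1.0 / np.clip(mu, 1e-9, None))
    if np.allclose(new_coef, coef, rtol=1e-10):
        break
    coef = new_coef

print("IWLS estimate (a, b):", coef)
```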
The current and upcoming generation of Very Large Volume Neutrino Telescopes, collecting unprecedented quantities of neutrino events, can be used to explore subtle effects in oscillation physics, such as (but not restricted to) the neutrino mass ordering. The sensitivity of an experiment to these effects can be estimated from Monte Carlo simulations. With the high number of events that will be collected, there is a trade-off between the computational expense of running such simulations and the inherent statistical uncertainty in the determined values. In such a scenario, it becomes impractical to produce and use adequately sized sets of simulated events with traditional methods, such as Monte Carlo weighting. In this work we present a staged approach to the generation of binned event distributions in order to overcome these challenges. By combining multiple integration and smoothing techniques that address the limited statistics of the simulation, the approach arrives at reliable analysis results using modest computational resources.