
The uniformly most powerful test of statistical significance for counting-type experiments with background

Added by: Lazar Fleysher
Publication date: 2003
Fields: Physics
Language: English





In this paper, after a discussion of the general properties of statistical tests, we present the construction of the most powerful hypothesis test for determining the existence of a new phenomenon in counting-type experiments, where the observed Poisson process is subject to a Poisson-distributed background with unknown mean.
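For orientation, a classical route to a test of this type conditions on the total number of counts: given on-source and off-source observations with exposure ratio alpha = t_on/t_off, the on-source count is binomial under the no-source hypothesis, which yields an exact p-value. A minimal Python sketch of that conditional construction (illustrative inputs; not necessarily the exact test derived in the paper):

```python
# Hedged sketch: conditional (binomial) significance test for an
# on/off Poisson counting measurement. All inputs are hypothetical.
from scipy.stats import binom, norm

def on_off_p_value(n_on, n_off, alpha):
    """p-value for the null 'no source': on and off counts share one
    underlying rate, scaled by the exposure ratio alpha = t_on / t_off.
    Conditioned on n_tot = n_on + n_off, n_on ~ Binomial(n_tot, w)
    with w = alpha / (1 + alpha) under the null."""
    n_tot = n_on + n_off
    w = alpha / (1.0 + alpha)
    # One-sided p-value: probability of n_on or more on-source counts.
    return binom.sf(n_on - 1, n_tot, w)

p = on_off_p_value(n_on=25, n_off=10, alpha=1.0)
z = norm.isf(p)  # convert to a one-sided Gaussian significance
print(f"p = {p:.4g}, Z = {z:.2f} sigma")
```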

Related research

Usually, equal time is given to measuring the background and the sample, or an even longer background measurement is taken because it yields so few counts. While this seems the right thing to do, the relative error after background subtraction actually improves when more time is spent on the measurement with the most scattering, i.e. the higher count rate. As the available measurement time is always limited, a good division must be found between the background and sample measurements so that the uncertainty of the background-subtracted intensity is as low as possible. Outlined herein is a method for dividing the measurement time between sample and background so as to minimize the relative uncertainty, together with the relative reduction in uncertainty gained from the considered division. It is particularly useful for scanning diffractometers, such as Bonse-Hart cameras, where the time division for each point can be optimized depending on the signal-to-noise ratio.
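The standard result behind this kind of optimization (stated here as background, assuming Poisson counting statistics and rough prior estimates of the sample and background rates r_s and r_b) is that the variance of the net rate N_s/t_s - N_b/t_b is minimized when t_s/t_b = sqrt(r_s/r_b). A minimal sketch:

```python
# Hedged sketch of the standard optimal time split for background
# subtraction; r_s and r_b are assumed rough prior rate estimates.
import numpy as np

def optimal_split(r_s, r_b, total_time):
    """Split total_time between sample and background so that the
    variance of the net rate N_s/t_s - N_b/t_b is minimized.
    For Poisson counts the optimum satisfies t_s/t_b = sqrt(r_s/r_b)."""
    ratio = np.sqrt(r_s / r_b)
    t_b = total_time / (1.0 + ratio)
    t_s = total_time - t_b
    return t_s, t_b

t_s, t_b = optimal_split(r_s=100.0, r_b=4.0, total_time=3600.0)
print(f"sample: {t_s:.0f} s, background: {t_b:.0f} s")
```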
G. Vianello (2017)
Several experiments in high-energy physics and astrophysics can be treated as on/off measurements, where an observation potentially containing a new source or effect (on measurement) is contrasted with a background-only observation free of the effect (off measurement). In counting experiments, the significance of the new source or effect can be estimated with a widely used formula from [LiMa], which assumes that both measurements are Poisson random variables. In this paper we study three other cases: i) the ideal case where the background measurement has no uncertainty, which can be used to study the maximum sensitivity that an instrument can achieve, ii) the case where the background estimate $b$ in the off measurement has an additional systematic uncertainty, and iii) the case where $b$ is a Gaussian random variable instead of a Poisson random variable. The latter case applies when $b$ comes from a model fitted on archival or ancillary data, or from the interpolation of a function fitted on data surrounding the candidate new source/effect. In this case practitioners typically use a formula which is only valid when $b$ is large and its uncertainty is very small, while we derive a general formula that can be applied in all regimes. We also develop simple methods that can be used to assess how much an estimate of significance is sensitive to systematic uncertainties on the efficiency or on the background. Examples of applications include the detection of short Gamma-Ray Bursts and of new X-ray or $\gamma$-ray sources.
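For reference, a hedged sketch of the widely used [LiMa] significance (their Eq. 17) for on/off counting, with alpha = t_on/t_off the exposure ratio; the inputs below are purely illustrative:

```python
# Hedged sketch of the Li & Ma (1983, Eq. 17) on/off significance
# cited above as [LiMa]; alpha is the exposure ratio t_on / t_off.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    n_tot = n_on + n_off
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * np.log((1.0 + alpha) * n_off / n_tot)
    return np.sqrt(2.0 * (term_on + term_off))

# Illustrative counts: 120 on-source, 180 off-source, half exposure.
print(f"{li_ma_significance(n_on=120, n_off=180, alpha=0.5):.2f} sigma")
```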
The projected discovery and exclusion capabilities of particle physics and astrophysics/cosmology experiments are often quantified using the median expected $p$-value or its corresponding significance. We argue that this criterion leads to flawed results, which for example can counterintuitively project lessened sensitivities if the experiment takes more data or reduces its background. We discuss the merits of several alternatives to the median expected significance, both when the background is known and when it is subject to some uncertainty. We advocate for standard use of the exact Asimov significance $Z^{\rm A}$ detailed in this paper.
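For orientation, the commonly quoted asymptotic expected discovery significance for a Poisson count with signal s on a known background b is Z = sqrt(2((s+b)ln(1+s/b) - s)); the exact $Z^{\rm A}$ advocated in the paper replaces such asymptotic expressions with an exact Poisson computation. A sketch of the asymptotic version only, with illustrative s and b:

```python
# Hedged sketch: the common asymptotic discovery significance for
# signal s on a known background b. This is NOT the paper's exact
# Z^A, which is computed from an exact Poisson p-value instead.
import numpy as np

def asymptotic_discovery_z(s, b):
    return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))

print(f"Z = {asymptotic_discovery_z(s=10.0, b=50.0):.2f}")
```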
Patrick J. Sutton (2009)
In counting experiments, one can set an upper limit on the rate of a Poisson process based on a count of the number of events observed due to the process. In some experiments, one makes several counts of the number of events, using different instruments, different event detection algorithms, or observations over multiple time intervals. We demonstrate how to generalize the classical frequentist upper limit calculation to the case where multiple counts of events are made over one or more time intervals using several (not necessarily independent) procedures. We show how different choices of the rank ordering of possible outcomes in the space of counts correspond to applying different levels of significance to the various measurements. We propose an ordering that is matched to the sensitivity of the different measurement procedures and show that in typical cases it gives stronger upper limits than other choices. As an example, we show how this method can be applied to searches for gravitational-wave bursts, where multiple burst-detection algorithms analyse the same data set, and demonstrate how a single combined upper limit can be set on the gravitational-wave burst rate.
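The single-count baseline that this generalizes is the classical frequentist upper limit on a Poisson mean: the smallest rate at which observing the recorded count or fewer would happen with probability at most 1 - CL. A hedged sketch of that background-free baseline:

```python
# Hedged sketch: classical frequentist upper limit on a Poisson mean
# from a single count n (background-free case); the paper generalizes
# this to multiple, possibly correlated, counts.
from scipy.stats import chi2

def poisson_upper_limit(n, cl=0.90):
    """Smallest mu with P(X <= n | mu) <= 1 - cl, via the standard
    chi-square identity mu_up = chi2.ppf(cl, 2*(n+1)) / 2."""
    return 0.5 * chi2.ppf(cl, 2 * (n + 1))

for n in (0, 1, 5):
    print(f"n = {n}: mu_up = {poisson_upper_limit(n):.2f} (90% CL)")
```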
One of the most popular lottery games worldwide is the so-called "lotto k/N". It considers N numbers 1, 2, ..., N, from which k are drawn randomly without replacement. A player selects k or more numbers, and the first prize is shared amongst those players whose selected numbers match all k of the randomly drawn ones. Exact rules may vary between countries. In this paper, mean values and covariances for the random variables representing the numbers drawn in this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with the theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
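One simple consequence of the hypergeometric model that such an audit can exploit (a marginal check only, not the paper's full covariance-based method): over D independent draws, each individual number appears Binomial(D, k/N) times, so per-number z-scores flag departures from the model. A minimal sketch with simulated draws:

```python
# Hedged sketch: marginal audit of lotto k/N history. Over D draws
# each number appears Binomial(D, k/N) times marginally, so a
# per-number z-score flags departures from the hypergeometric model.
import numpy as np

def audit_counts(counts, draws, k, N):
    """counts[i] = times number i+1 appeared over `draws` drawings."""
    p = k / N
    expected = draws * p
    sigma = np.sqrt(draws * p * (1.0 - p))
    return (np.asarray(counts, dtype=float) - expected) / sigma

rng = np.random.default_rng(0)
k, N, D = 6, 49, 1000  # illustrative "lotto 6/49" history
counts = np.zeros(N, dtype=int)
for _ in range(D):
    counts[rng.choice(N, size=k, replace=False)] += 1
print(np.max(np.abs(audit_counts(counts, D, k, N))))  # typical |z| ~ 2-3
```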